book_name | def_number | text |
---|---|---|
110_The Schwarz Function and Its Generalization to Higher Dimensions | Definition 5.1 |
Definition 5.1. Let \( C \) be a convex subset of an affine space \( A \) . The algebraic interior of \( C \), denoted by \( {\operatorname{ai}}_{A}\left( C\right) \) (ai(C) if \( A \) is understood from context), is the set of all points \( x \in C \) such that every line \( \ell \subseteq A \) through \( x \) contains a line segment in \( C \) having \( x \) in its interior, that is,
\[
{\operatorname{ai}}_{A}\left( C\right) \mathrel{\text{:=}} \{ x \in C : \forall z \in A,\exists \delta > 0,\left\lbrack {x - \delta \left( {z - x}\right), x + \delta \left( {z - x}\right) }\right\rbrack \subseteq C\}
\]
\[
= \{ x \in C : \forall z \in A,\exists \delta > 0,\lbrack x, x + \delta \left( {z - x}\right) ) \subseteq C\} .
\]
A convex set \( C \subseteq A \) is called a convex algebraic body if \( {\operatorname{ai}}_{A}\left( C\right) \neq \varnothing \) .
If the affine set \( A \) is \( \operatorname{aff}\left( C\right) \), the affine hull of \( C \), then \( {\operatorname{ai}}_{A}\left( C\right) \) is called the relative algebraic interior of \( C \), and is denoted by \( \operatorname{rai}\left( C\right) \) .
The algebraic closure of \( C \), denoted by \( \operatorname{ac}\left( C\right) \), is the set of all points \( z \in A \) such that \( \lbrack x, z) \subseteq C \) for some \( x \in C \) ,
\[
\operatorname{ac}\left( C\right) \mathrel{\text{:=}} \{ z \in A : \exists x \in C,\lbrack x, z) \subseteq C\} .
\]
Lemma 5.2. If \( C \) is a convex set in an affine space \( A \), then \( \operatorname{ai}\left( C\right) \) and \( \operatorname{ac}\left( C\right) \) are also convex sets in \( A \) .
Proof. Let \( x, y \in \operatorname{ai}\left( C\right) \) . If \( u \in A \), then there exists \( \delta > 0 \) such that \( \left\lbrack {x, x + \delta \left( {u - x}\right) }\right\rbrack = : \left\lbrack {x, p}\right\rbrack \subset C \) and \( \left\lbrack {y, y + \delta \left( {u - y}\right) }\right\rbrack = : \left\lbrack {y, q}\right\rbrack \subset C \) ; see the first figure in Figure 5.1. If \( z \mathrel{\text{:=}} \left( {1 - t}\right) x + {ty} \) for some \( t \in \left( {0,1}\right) \), then we have that
\[
r \mathrel{\text{:=}} z + \delta \left( {u - z}\right) = \left( {1 - t}\right) \left( {x + \delta \left( {u - x}\right) }\right) + t\left( {y + \delta \left( {u - y}\right) }\right) = \left( {1 - t}\right) p + {tq}
\]
lies in \( C \) ; thus \( \left\lbrack {z, r}\right\rbrack \subset C \), meaning that \( z \in \operatorname{ai}\left( C\right) \) .
Let \( u, v \in \operatorname{ac}\left( C\right) \), where \( x, y \in C \) are such that \( \lbrack x, u) \subseteq C \) and \( \lbrack y, v) \subseteq C \) ; see the middle figure in Figure 5.1. Let \( t \in \left( {0,1}\right) \) and define \( w \mathrel{\text{:=}} \left( {1 - t}\right) u + {tv} \) and \( z \mathrel{\text{:=}} \left( {1 - t}\right) x + {ty} \in C \) . We claim that \( \lbrack z, w) \subseteq C \) . Let \( a \mathrel{\text{:=}} \left( {1 - \delta }\right) z + {\delta w} \) for some \( \delta \in \left( {0,1}\right) \) . We have
\[
a = \left( {1 - \delta }\right) z + {\delta w} = \left( {1 - \delta }\right) \left( {\left( {1 - t}\right) x + {ty}}\right) + \delta \left( {\left( {1 - t}\right) u + {tv}}\right)
\]
\[
= \left( {1 - t}\right) \left( {\left( {1 - \delta }\right) x + {\delta u}}\right) + t\left( {\left( {1 - \delta }\right) y + {\delta v}}\right) \in C,
\]
proving the claim and the lemma.
![968fd3dd-2b91-4cd3-8e1b-204f8f1c2faa_132_0.jpg](images/968fd3dd-2b91-4cd3-8e1b-204f8f1c2faa_132_0.jpg)
Fig. 5.1. Algebraic interior and algebraic closure of a convex set.
Lemma 5.3. If \( C \) is a convex set in a finite-dimensional affine space \( A \), then \( \operatorname{rai}\left( C\right) \neq \varnothing \) .
Proof. Let \( {\left\{ {x}_{i}\right\} }_{1}^{k} \subseteq C \) be an affine basis of \( \operatorname{aff}\left( C\right) \), and let \( x \mathrel{\text{:=}} \left( {\mathop{\sum }\limits_{1}^{k}{x}_{i}}\right) /k \) be the center of the simplex defined by \( {\left\{ {x}_{i}\right\} }_{1}^{k} \) . We claim that \( x \in \operatorname{rai}\left( C\right) \) . Indeed, if \( u = \mathop{\sum }\limits_{1}^{k}{t}_{i}{x}_{i} \), where \( \mathop{\sum }\limits_{1}^{k}{t}_{i} = 1 \), is an arbitrary point in \( \operatorname{aff}\left( C\right) \), then \( x + \delta \left( {u - x}\right) = \left( {1 - \delta }\right) x + {\delta u} = \mathop{\sum }\limits_{1}^{k}\left\lbrack {\left( {1 - \delta }\right) /k + \delta {t}_{i}}\right\rbrack {x}_{i} \) . If \( \delta > 0 \) is sufficiently small, then each \( \left( {1 - \delta }\right) /k + \delta {t}_{i} \) is positive, and we have \( x + \delta \left( {u - x}\right) \in C \) .
Remark 5.4. Every infinite-dimensional affine space \( A \) contains a convex set \( C \) such that \( \operatorname{rai}\left( C\right) = \varnothing \) . Indeed, let \( X = {\left\{ {x}_{i}\right\} }_{1}^{\infty } \) be a set of affinely independent points in \( A \) and consider the convex hull \( C = \operatorname{co}\left( X\right) \) . Suppose that \( x \in \operatorname{rai}\left( C\right) \) . Then \( x \in {C}_{m} \mathrel{\text{:=}} \operatorname{co}\left( {\left\{ {x}_{i}\right\} }_{1}^{m}\right) \) for some integer \( m \) . If \( y \in C \) and \( y \neq x \), then there exists \( z \in C \) such that \( x \in \left( {y, z}\right) \) . But then \( y, z \in {C}_{n} \) for some \( n > m \) , and it is easy to prove that \( {C}_{m} \) is a face of \( {C}_{n} \) . It follows that \( y, z \in {C}_{m} \) (see Definition 5.24 and Lemma 5.25), and this implies that \( C = {C}_{m} \), a contradiction.
Lemma 5.5. Let \( C \) be a convex set in an affine space \( A \) . If \( y \in \operatorname{ac}\left( C\right) \) and \( x \in \operatorname{ai}\left( C\right) \), then \( \lbrack x, y) \subset \operatorname{ai}\left( C\right) \) .
Proof. First, assume that \( y \in C \) . Let \( z \mathrel{\text{:=}} {tx} + \left( {1 - t}\right) y = y + t\left( {x - y}\right) \in C \) , \( t \in \left( {0,1}\right) \) ; see the last figure in Figure 5.1. We claim that \( z \in \operatorname{ai}\left( C\right) \) . Let \( d \mathrel{\text{:=}} u - x \) be an arbitrary direction in \( A \), where \( u \in A \) . Since \( x \in \operatorname{ai}\left( C\right) \), there exists \( \delta > 0 \) such that \( \left\lbrack {x, x + \delta \left( {u - x}\right) }\right\rbrack = : \left\lbrack {x, q}\right\rbrack \subset C \) . We have that
\[
w \mathrel{\text{:=}} z + {t\delta }\left( {u - x}\right) = z + t\left( {q - x}\right) = y + t\left( {x - y}\right) + t\left( {q - x}\right) = y + t\left( {q - y}\right)
\]
lies in \( C \) ; thus \( \left\lbrack {z, w}\right\rbrack = \left\lbrack {z, z + {t\delta }\left( {u - x}\right) }\right\rbrack \subset C \), proving the claim.
Now assume that \( y \in \operatorname{ac}\left( C\right) \smallsetminus C \) . There exists \( p \in C \) such that \( \lbrack p, y) \subset C \) . We claim that \( \lbrack x, y) \subset \operatorname{ai}\left( C\right) \) . Let \( z \in \left( {x, y}\right) \) . If \( p = x \), pick \( {z}_{1} \in \left( {x, y}\right) \) such that \( z \in \left( {{z}_{1}, x}\right) \) . The first paragraph of the proof shows that \( z \in \operatorname{ai}\left( C\right) \) . Finally, suppose that \( p \neq x \) . There exists \( \delta > 0 \) such that \( \left\lbrack {x, x + \delta \left( {y - p}\right) }\right\rbrack = : \left\lbrack {x, q}\right\rbrack \subset C \) . Pick a point \( r \in \left( {y, p}\right) \) such that \( \left\lbrack {r, q}\right\rbrack \) intersects \( \left\lbrack {y, x}\right\rbrack \) at a point \( {z}_{1} \) such that \( z \in \left( {{z}_{1}, x}\right) \) . It follows again from the first paragraph that \( z \in \operatorname{ai}\left( C\right) \) .
## 5.2 Minkowski Gauge Function
Definition 5.6. Let \( C \) be a convex set in a vector space \( E \) such that \( 0 \in \) \( \operatorname{rai}\left( C\right) \) . The (Minkowski) gauge function of \( C \) is the function \( {p}_{C} \) defined on \( E \) by the formula
\[
{p}_{C}\left( x\right) \mathrel{\text{:=}} \inf \{ t > 0 : x \in {tC}\} = \inf \{ t \geq 0 : x \in {tC}\} .
\]
If \( C \) is a convex set in an affine space \( A \) and \( {x}_{0} \in \operatorname{rai}\left( C\right) \), then the gauge function of \( C \) with respect to \( {x}_{0} \) is the function \( p\left( x\right) \mathrel{\text{:=}} {p}_{C - {x}_{0}}\left( {x - {x}_{0}}\right) \) defined on \( A \), that is,
\[
p\left( x\right) \mathrel{\text{:=}} \inf \left\{ {t > 0 : x \in {x}_{0} + t\left( {C - {x}_{0}}\right) }\right\} ,\;\text{ where }\;x \in A.
\]
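Since \( 0 \in \operatorname{rai}\left( C\right) \), one has \( x \in {tC} \) if and only if \( x/t \in C \) for \( t > 0 \), so \( {p}_{C}\left( x\right) \) can be bracketed by bisection on \( t \) given only a membership oracle for \( C \) . The following sketch (not from the text) illustrates this on a hypothetical planar example and spot-checks the subadditivity asserted in Theorem 5.7 below; the set \( C \), the oracle, and the tolerance are assumptions made only for the example.

```python
# Sketch (not from the text): approximate the gauge p_C(x) = inf{t > 0 : x in tC} from a
# membership oracle for C, using x in tC <=> x/t in C for t > 0.

def gauge(x, in_C, tol=1e-9):
    if all(abs(c) < 1e-15 for c in x):
        return 0.0
    hi = 1.0
    while not in_C([c / hi for c in x]):              # grow until x lies in hi * C
        hi *= 2.0
        if hi > 1e12:
            return float("inf")                       # x is not in span(C)
    lo = 0.0
    while hi - lo > tol:                              # bisect on t
        mid = (lo + hi) / 2.0
        if in_C([c / mid for c in x]):
            hi = mid
        else:
            lo = mid
    return hi

# Example set: C = {(u, v) : |u|/2 + |v| <= 1}, whose gauge is exactly |u|/2 + |v|.
in_C = lambda q: abs(q[0]) / 2 + abs(q[1]) <= 1
x, y = (1.0, 0.5), (-0.3, 2.0)
px, py = gauge(x, in_C), gauge(y, in_C)
pxy = gauge([a + b for a, b in zip(x, y)], in_C)
print(px, py)                                         # ~1.0 and ~2.15
assert pxy <= px + py + 1e-6                          # subadditivity (Theorem 5.7)
```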
Theorem 5.7. Let \( C \) be a convex set in a vector space \( E \) such that \( 0 \in \operatorname{rai}\left( C\right) \) . The gauge function \( {p}_{C} \) is a nonnegative extended-valued function, \( {p}_{C} : E \rightarrow \) \( \mathbb{R} \cup \{ + \infty \} \), that is finite-valued precisely on the linear subspace \( \operatorname{span}\left( C\right) \) .
Moreover, \( {p}_{C} \) is a sublinear function, that is, for all \( x, y \in E \) and for all \( t \geq 0 \)
\[
{p}_{C}\left( {tx}\right) = t{p}_{C}\left( x\right) \;\text{ and }\;{p}_{C}\left( {x + y}\right) \leq {p}_{C}\left( x\right) + {p}_{C}\left( y\right) .
\]
Thus, \( {p}_{C} \) is a convex homogeneous function of degree one.
Proof. Evidently, \( p\left( x\right) \mathrel{\text{:=}} {p}_{C}\left( x\right) \geq 0 \) for all \( x \in E \) . Since \( 0 \in \operatorname{rai}\left( C\right) \), there exists \( t > 0 \) such that \( x \in {tC} \) if and only if \( x \in L \mathrel{\text{:=}} \operatorname{span}\left( C\right) \) . Thus \( p\left( x\right) \) is finite if and only if \( x \in L \) .
The homogeneity of \( p \) is obvious from its definition, and the convexity of \( p \) is thus equivalent to its subadditivity. To prove the former, let \( {x}_{1},{x}_{2} \in L \) and \( 0 < t < 1 \) . Given an arbitrary \( \epsilon > 0 \), note that \( {x}_{i} \in \left( {p\left( {x}_{i}\right) + \epsilon }\right) C, i = 1,2 \) . We have
\[ |
1083_(GTM240)Number Theory II | Definition 9.4.2 |
Definition 9.4.2. For any function \( \chi \) defined on \( \mathbb{Z} \) we define the function \( {\chi }^{ - } \) by \( {\chi }^{ - }\left( n\right) = \chi \left( {-n}\right) \) .
Thus, if \( \chi \) is a Dirichlet character we have \( {\chi }^{ - } = \chi \left( {-1}\right) \chi \), so that, contrary to \( \bar{\chi },{\chi }^{ - } \) is not a Dirichlet character when \( \chi \left( {-1}\right) = - 1 \), but only a periodic arithmetic function.
Lemma 9.4.3. We have
\[
t{e}^{tx}\frac{\mathop{\sum }\limits_{{1 \leq r \leq m}}\chi \left( r\right) {e}^{-{rt}}}{1 - {e}^{-{mt}}} = \mathop{\sum }\limits_{{k \geq 0}}\frac{{B}_{k}\left( {{\chi }^{ - }, x}\right) }{k!}{t}^{k}.
\]
Proof. Immediate and left to the reader.
See also Proposition 9.4.9 below.
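The lemma is easy to spot-check numerically once \( {B}_{k}\left( {\chi, x}\right) \) is expressed through ordinary Bernoulli polynomials as in Proposition 9.4.5 below. The sketch below is not from the text; the character used (the odd character modulo 4) and the sample points are assumptions made only for the test.

```python
# Numerical spot check of Lemma 9.4.3 (a sketch, not from the text).
from sympy import bernoulli, exp, factorial, Rational

m = 4
chi = {0: 0, 1: 1, 2: 0, 3: -1}                             # odd character mod 4 (test data)
chim = {r: chi[(-r) % m] for r in range(m)}                 # chi^-(n) = chi(-n)

def B(ch, k, x):
    # Proposition 9.4.5: B_k(ch, x) = m^{k-1} sum_{0<=r<m} ch(r) B_k((x+r)/m)
    return Rational(m)**(k - 1) * sum(ch[r] * bernoulli(k, (x + r) / m) for r in range(m))

def lhs(t, x):
    return t * exp(t * x) * sum(chi[r % m] * exp(-r * t) for r in range(1, m + 1)) / (1 - exp(-m * t))

def rhs(t, x, terms=25):
    return sum(B(chim, k, x) * t**k / factorial(k) for k in range(terms))

for t, x in [(Rational(1, 10), Rational(1, 3)), (Rational(1, 5), Rational(3, 7))]:
    assert abs(float(lhs(t, x)) - float(rhs(t, x))) < 1e-10
print("both sides of Lemma 9.4.3 agree at the sampled points")
```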
The alternative definition would be to use \( {B}_{k}\left( {{\chi }^{ - }, x}\right) \) as \( \chi \) -Bernoulli polynomials. We will see that this is indeed the natural definition to use in many applications.
Proposition 9.4.4. We have \( {B}_{k}^{\prime }\left( {\chi, x}\right) = k{B}_{k - 1}\left( {\chi, x}\right) \) .
Proof. Clear since \( \left( {d/{dx}}\right) E\left( {\chi, t, x}\right) = {tE}\left( {\chi, t, x}\right) \) .
Proposition 9.4.5. We have the following formulas:
\[
{B}_{k}\left( {\chi, x}\right) = \mathop{\sum }\limits_{{j = 0}}^{k}\left( \begin{array}{l} k \\ j \end{array}\right) {B}_{j}\left( \chi \right) {x}^{k - j} = {m}^{k - 1}\mathop{\sum }\limits_{{0 \leq r < m}}\chi \left( r\right) {B}_{k}\left( \frac{x + r}{m}\right)
\]
\[
= \mathop{\sum }\limits_{{j = 0}}^{k}\left( \begin{array}{l} k \\ j \end{array}\right) {B}_{j}{m}^{j - 1}\mathop{\sum }\limits_{{0 \leq r < m}}\chi \left( r\right) {\left( x + r\right) }^{k - j},\text{ and }
\]
\[
{B}_{k}\left( \chi \right) = {m}^{k - 1}\mathop{\sum }\limits_{{0 \leq r < m}}\chi \left( r\right) {B}_{k}\left( \frac{r}{m}\right) = \mathop{\sum }\limits_{{j = 0}}^{k}\left( \begin{array}{l} k \\ j \end{array}\right) {B}_{j}{m}^{j - 1}{S}_{k - j}\left( \chi \right) ,
\]
where \( {S}_{n}\left( \chi \right) = \mathop{\sum }\limits_{{0 \leq r < m}}\chi \left( r\right) {r}^{n} \) .
Proof. The first formula follows from the identity \( E\left( {\chi, t, x}\right) = {e}^{tx}E\left( {\chi, t,0}\right) \) . The second follows from
\[
E\left( {\chi, t, x}\right) = \frac{1}{m}\mathop{\sum }\limits_{{0 \leq r < m}}\chi \left( r\right) \frac{{mt}{e}^{\left( {\left( {x + r}\right) /m}\right) {mt}}}{{e}^{mt} - 1}
\]
\[
= \frac{1}{m}\mathop{\sum }\limits_{{0 \leq r < m}}\chi \left( r\right) \mathop{\sum }\limits_{{k \geq 0}}{B}_{k}\left( \frac{x + r}{m}\right) \frac{{m}^{k}}{k!}{t}^{k}
\]
by definition of Bernoulli polynomials. The third formula follows from \( {B}_{k}\left( z\right) = \) \( \mathop{\sum }\limits_{{j = 0}}^{k}\left( \begin{array}{l} k \\ j \end{array}\right) {B}_{j}{z}^{k - j} \), and the last two are obtained by specializing to \( x = 0 \) .
For example,
\[
{B}_{0}\left( \chi \right) = \frac{1}{m}\mathop{\sum }\limits_{{0 \leq r < m}}\chi \left( r\right) ,\;{B}_{1}\left( \chi \right) = \frac{1}{m}\mathop{\sum }\limits_{{0 \leq r < m}}\chi \left( r\right) \left( {r - \frac{m}{2}}\right) ,\;\text{ and }
\]
\[
{B}_{2}\left( \chi \right) = \frac{1}{m}\mathop{\sum }\limits_{{0 \leq r < m}}\chi \left( r\right) \left( {{r}^{2} - {mr} + \frac{{m}^{2}}{6}}\right) .
\]
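These displayed values can be checked against the formula \( {B}_{k}\left( \chi \right) = {m}^{k - 1}\mathop{\sum }\limits_{{0 \leq r < m}}\chi \left( r\right) {B}_{k}\left( {r/m}\right) \) from Proposition 9.4.5. The sketch below is not from the text; the quadratic character modulo 3 is an assumption made only for the test.

```python
# Sketch (not from the text): compare the displayed B_0(chi), B_1(chi), B_2(chi) with
# B_k(chi) = m^{k-1} sum_{0<=r<m} chi(r) B_k(r/m).
from sympy import bernoulli, Rational

m = 3
chi = {0: 0, 1: 1, 2: -1}                                   # quadratic character mod 3 (test data)

def B_chi(k):
    return Rational(m)**(k - 1) * sum(chi[r] * bernoulli(k, Rational(r, m)) for r in range(m))

S = lambda f: sum(chi[r] * f(r) for r in range(m))          # sums of the displayed shape
B0 = Rational(1, m) * S(lambda r: 1)
B1 = Rational(1, m) * S(lambda r: r - Rational(m, 2))
B2 = Rational(1, m) * S(lambda r: r**2 - m * r + Rational(m**2, 6))

assert (B_chi(0), B_chi(1), B_chi(2)) == (B0, B1, B2)
print(B_chi(0), B_chi(1), B_chi(2))                          # 0, -1/3, 0
```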
For future reference, note the following results.
Corollary 9.4.6. If \( x \in {\mathbb{Z}}_{ \geq 0} \) we have
\[
{m}^{k - 1}\mathop{\sum }\limits_{{0 \leq r < m}}\chi \left( {x + r}\right) {B}_{k}\left( \frac{x + r}{m}\right) = {B}_{k}\left( \chi \right) + k\mathop{\sum }\limits_{{0 \leq r < x}}\chi \left( r\right) {r}^{k - 1}.
\]
Proof. Indeed, for any function \( f \) and \( x \in {\mathbb{Z}}_{ \geq 0} \) we have
\[
\mathop{\sum }\limits_{{0 \leq r < m}}f\left( {x + r}\right) = \mathop{\sum }\limits_{{x \leq r < m + x}}f\left( r\right) = \mathop{\sum }\limits_{{0 \leq r < m}}f\left( r\right) + \mathop{\sum }\limits_{{0 \leq r < x}}\left( {f\left( {r + m}\right) - f\left( r\right) }\right) ,
\]
so the formula follows from \( {B}_{k}\left( {z + 1}\right) - {B}_{k}\left( z\right) = k{z}^{k - 1} \) and the proposition.
Lemma 9.4.7. If \( m \mid M \) then
\[
{B}_{k}\left( {\chi, x}\right) = {M}^{k - 1}\mathop{\sum }\limits_{{0 \leq r < M}}\chi \left( r\right) {B}_{k}\left( \frac{x + r}{M}\right) .
\]
Proof. Write \( n = M/m \), and for \( 0 \leq r < M \) let \( r = {qm} + s \) with \( 0 \leq \) \( s < m \) and \( 0 \leq q < n \) . By the distribution formula for Bernoulli polynomials (Proposition 9.1.3) we have
\[
{M}^{k - 1}\mathop{\sum }\limits_{{0 \leq r < M}}\chi \left( r\right) {B}_{k}\left( \frac{x + r}{M}\right) = {M}^{k - 1}\mathop{\sum }\limits_{{0 \leq s < m}}\chi \left( s\right) \mathop{\sum }\limits_{{0 \leq q < n}}{B}_{k}\left( {\frac{x + s}{M} + \frac{q}{n}}\right)
\]
\[
= \frac{{M}^{k - 1}}{{n}^{k - 1}}\mathop{\sum }\limits_{{0 \leq s < m}}\chi \left( s\right) {B}_{k}\left( \frac{n\left( {x + s}\right) }{M}\right)
\]
\[
= {m}^{k - 1}\mathop{\sum }\limits_{{0 \leq s < m}}\chi \left( s\right) {B}_{k}\left( \frac{x + s}{m}\right) = {B}_{k}\left( {\chi, x}\right)
\]
as claimed.
Proposition 9.4.8. We have
\[
{B}_{k}\left( {\chi, x + m}\right) = {B}_{k}\left( {\chi, x}\right) + k\mathop{\sum }\limits_{{0 \leq r < m}}\chi \left( r\right) {\left( x + r\right) }^{k - 1}.
\]
Proof. Follows from the formula
\[
E\left( {\chi, t, x + m}\right) - E\left( {\chi, t, x}\right) = t\mathop{\sum }\limits_{{0 \leq r < m}}\chi \left( r\right) {e}^{\left( {x + r}\right) t}.
\]
Proposition 9.4.9. (1) We have
\[
{B}_{k}\left( {\chi , - x}\right) = {\left( -1\right) }^{k}\left( {{B}_{k}\left( {{\chi }^{ - }, x}\right) + \chi \left( 0\right) k{x}^{k - 1}}\right) ,
\]
or equivalently,
\[
{B}_{k}\left( {{\chi }^{ - }, x}\right) = {\left( -1\right) }^{k}{B}_{k}\left( {\chi , - x}\right) - \chi \left( 0\right) k{x}^{k - 1}.
\]
In particular, \( {B}_{k}\left( {\chi }^{ - }\right) = {\left( -1\right) }^{k}{B}_{k}\left( \chi \right) - \chi \left( 0\right) {\delta }_{k,1} \), where we recall that \( {\delta }_{k,1} = 1 \) if \( k = 1 \), and \( {\delta }_{k,1} = 0 \) otherwise.
(2) In particular, if \( \chi \) is an even function then \( {B}_{k}\left( \chi \right) = 0 \) for \( k \geq 3 \) odd and \( {B}_{1}\left( \chi \right) = - \chi \left( 0\right) /2 \), while if \( \chi \) is an odd function then \( {B}_{k}\left( \chi \right) = 0 \) for all \( k \geq 0 \) even.
Proof. An easy computation shows that
\[
E\left( {{\chi }^{ - }, t, x}\right) - E\left( {\chi , - t, - x}\right) = - \chi \left( 0\right) t{e}^{xt},
\]
which is clearly equivalent to the first formula, and the other statements follow by specializing to \( x = 0 \) .
The above proposition will be used in particular when \( \chi \) is a Dirichlet character.
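A symbolic check of Proposition 9.4.9 is straightforward. The sketch below is not from the text; the periodic test function modulo 3 (chosen with \( \chi \left( 0\right) \neq 0 \) so that the correction term is visible) is an arbitrary assumption, and \( {B}_{k}\left( {\chi, x}\right) \) is computed through Proposition 9.4.5.

```python
# Sketch (not from the text): symbolic verification of Proposition 9.4.9.
from sympy import bernoulli, expand, symbols, Rational

x = symbols('x')
m = 3
chi = {0: 2, 1: 1, 2: -1}                                   # arbitrary periodic test values
chim = {r: chi[(-r) % m] for r in range(m)}                 # chi^-(n) = chi(-n)

def B(ch, k, z):
    return Rational(m)**(k - 1) * sum(ch[r] * bernoulli(k, (z + r) / m) for r in range(m))

for k in range(6):
    corr = chi[0] * k * x**(k - 1) if k >= 1 else 0
    assert expand(B(chi, k, -x) - (-1)**k * (B(chim, k, x) + corr)) == 0
print("B_k(chi,-x) = (-1)^k (B_k(chi^-,x) + chi(0) k x^(k-1)) for k = 0..5")
```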
## 9.4.2 \( \chi \) -Bernoulli Functions
Definition 9.4.10. We define the \( \chi \) -Bernoulli functions, denoted by \( {B}_{k}\left( {\chi ,\{ x{\} }_{\chi }}\right) \), for \( x \in \mathbb{R} \) by
\[
{B}_{k}\left( {\chi ,\{ x{\} }_{\chi }}\right) = {m}^{k - 1}\mathop{\sum }\limits_{{r{\;\operatorname{mod}\;m}}}\chi \left( r\right) {B}_{k}\left( \left\{ \frac{x + r}{m}\right\} \right) .
\]
Note that since \( \chi \left( r\right) \) and \( \{ \left( {x + r}\right) /m\} \) are periodic functions in \( r \) of period dividing \( m \) it is not necessary to specify the precise range of summation for \( r \), so we simply write \( r{\;\operatorname{mod}\;m} \) . It is clear that \( {B}_{k}\left( {\chi ,\{ x{\} }_{\chi }}\right) \) generalizes the function \( {B}_{k}\left( {\{ x\} }\right) \), and that \( {B}_{k}\left( {\chi ,\{ 0{\} }_{\chi }}\right) = {B}_{k}\left( \chi \right) \) .
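The definition can be evaluated directly. The sketch below (not from the text) checks the periodicity with period \( m \) asserted in Proposition 9.4.11(9) below and the value at \( x = 0 \); the odd character modulo 4 is an assumption made only for the test.

```python
# Sketch (not from the text): evaluate B_k(chi, {x}_chi) from Definition 9.4.10.
from sympy import bernoulli, Rational, floor

m = 4
chi = {0: 0, 1: 1, 2: 0, 3: -1}                                 # odd character mod 4 (test data)

frac = lambda u: u - floor(u)                                   # fractional part {u}

def B_func(k, x):                                               # B_k(chi, {x}_chi)
    return Rational(m)**(k - 1) * sum(chi[r] * bernoulli(k, frac((Rational(x) + r) / m)) for r in range(m))

def B_num(k):                                                   # B_k(chi) = m^{k-1} sum_r chi(r) B_k(r/m)
    return Rational(m)**(k - 1) * sum(chi[r] * bernoulli(k, Rational(r, m)) for r in range(m))

x0 = Rational(7, 5)
for k in range(5):
    assert B_func(k, x0) == B_func(k, x0 + m)                   # periodicity with period m
    assert B_func(k, 0) == B_num(k)                             # specializes to B_k(chi) at x = 0
print("period-m and x = 0 checks pass for k = 0..4")
```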
Proposition 9.4.11. The \( \chi \) -Bernoulli functions satisfy the following properties:
(1) We have \( {B}_{0}\left( {\chi ,\{ x{\} }_{\chi }}\right) = {B}_{0}\left( \chi \right) = {S}_{0}\left( \chi \right) /m \) .
(2) We have
\[
{B}_{1}^{\prime }\left( {\chi ,\{ x{\} }_{\chi }}\right) = {B}_{0}\left( {\chi ,\{ x{\} }_{\chi }}\right) - \mathop{\sum }\limits_{{r \in \mathbb{Z}}}{\chi }^{ - }\left( r\right) {\delta }_{r}\left( x\right) ,
\]
where \( {\delta }_{r} \) is the Dirac distribution concentrated at the point \( r \) .
(3) We have \( {B}_{k}^{\prime }\left( {\chi ,\{ x{\} }_{\chi }}\right) = k{B}_{k - 1}\left( {\chi ,\{ x{\} }_{\chi }}\right) \) for all \( k \geq 1 \) and all \( x \notin \mathbb{Z} \) .
(4) The function \( {B}_{k}\left( {\chi ,\{ x{\} }_{\chi }}\right) \) is continuous for \( k \geq 2 \) .
(5) We have \( {\int }_{0}^{m}{B}_{k}\left( {\chi ,\{ x{\} }_{\chi }}\right) {dx} = 0 \) for \( k \geq 1 \) .
(6) If \( n \in \mathbb{Z} \) we have
\[
\mathop{\lim }\limits_{\substack{{x \rightarrow n} \\ {x > n} }}{B}_{1}\left( {\chi ,\{ x{\} }_{\chi }}\right) = {B}_{1}\left( {\chi ,\{ n{\} }_{\chi }}\right) \;\text{ and }
\]
\[
\mathop{\lim }\limits_{\substack{{x \rightarrow n} \\ {x < n} }}{B}_{1}\left( {\chi ,\{ x{\} }_{\chi }}\right) = {B}_{1}\left( {\chi ,\{ n{\} }_{\chi }}\right) + {\chi }^{ - }\left( n\right) .
\]
(7) On any interval \( \rbrack r, r + 1\lbrack \) with \( r \in \mathbb{Z} \) the function \( {B}_{k}\left( {\chi ,\{ x{\} }_{\chi }}\right) \) is a polynomial of degree less than or equal to \( k \) .
(8) For \( k \geq 2 \) we have \( {B}_{k}\left( {\chi ,\{ x{\} }_{\chi }}\right) \in {C}^{k - 2}\left( \mathbb{R}\right) \) .
(9) \( {B}_{k}\left( {\chi ,\{ x + m{\} }_{\chi }}\right) = {B}_{k}\left( {\chi ,\{ x{\} }_{\chi }}\right) \) for all \( x \in \mathbb{R} \) and \( k \geq 0 \) .
Conversely, the sequence of \( \chi \) -Bernoulli functions is the only sequence satisfying properties (1) to (5) above.
Proof. All these properties are essentially clear from the definition and the basic properties of ordinary Bernoulli polynomials. For instance, let us prove (2) and (8). An easy exercise in distributions (Exercise 50) shows that
\[
{\left\{ \frac |
108_The Joys of Haar Measure | Definition 2.5.12 |
Definition 2.5.12. Let \( {\chi }_{1},\ldots ,{\chi }_{k} \) be multiplicative characters on \( {\mathbb{F}}_{q} \) .
(1) We define the Jacobi sum with parameter \( a \in {\mathbb{F}}_{q} \) associated with these characters by the formula
\[
{J}_{k}\left( {{\chi }_{1},\ldots ,{\chi }_{k};a}\right) = \mathop{\sum }\limits_{\substack{{{x}_{i} \in {\mathbb{F}}_{q}} \\ {{x}_{1} + \cdots + {x}_{k} = a} }}{\chi }_{1}\left( {x}_{1}\right) \cdots {\chi }_{k}\left( {x}_{k}\right) .
\]
(2) We simply write \( {J}_{k}\left( {{\chi }_{1},\ldots ,{\chi }_{k}}\right) \) instead of \( {J}_{k}\left( {{\chi }_{1},\ldots ,{\chi }_{k};1}\right) \), and call it the Jacobi sum associated with the \( {\chi }_{i} \) ’s.
(3) For notational simplicity, by abuse of notation we will often write \( {J}_{k}\left( a\right) \) instead of \( {J}_{k}\left( {{\chi }_{1},\ldots ,{\chi }_{k};a}\right) \), the characters \( {\chi }_{i} \) being implicit.
Remarks. (1) We have \( {J}_{1}\left( {\chi }_{1}\right) = 1 \) for any character \( {\chi }_{1} \), and more generally \( {J}_{1}\left( {{\chi }_{1};a}\right) = {\chi }_{1}\left( a\right) \)
(2) It is clear that the value of \( {J}_{k}\left( {{\chi }_{1},\ldots ,{\chi }_{k};a}\right) \) does not depend on the ordering of the characters \( {\chi }_{i} \) .
(3) As desired we have \( {J}_{k}\left( {{\chi }_{1},\ldots ,{\chi }_{k};a}\right) \in \mathbb{Q}\left( {\zeta }_{q - 1}\right) \), which is a much smaller number field.
(4) The introduction of a parameter \( a \) is analogous to that of \( \tau \left( {\chi, a}\right) \) for Gauss sums associated with a Dirichlet character. In fact, as for Gauss sums, the following lemma shows that there is a close link between \( {J}_{k}\left( {{\chi }_{1},\ldots ,{\chi }_{k};a}\right) \) and \( {J}_{k}\left( {{\chi }_{1},\ldots ,{\chi }_{k}}\right) \) .
Lemma 2.5.13. For \( a \neq 0 \) we have
\[
{J}_{k}\left( {{\chi }_{1},\ldots ,{\chi }_{k};a}\right) = \left( {{\chi }_{1}\cdots {\chi }_{k}}\right) \left( a\right) {J}_{k}\left( {{\chi }_{1},\ldots ,{\chi }_{k}}\right) ,
\]
while (abbreviating as above \( {J}_{k}\left( {{\chi }_{1},\ldots ,{\chi }_{k};0}\right) \) to \( {J}_{k}\left( 0\right) \) ) we have
\[
{J}_{k}\left( 0\right) = \left\{ \begin{array}{ll} {q}^{k - 1} & \text{ if }{\chi }_{j} = \varepsilon \text{ for all }j, \\ 0 & \text{ if }{\chi }_{1}\cdots {\chi }_{k} \neq \varepsilon , \\ {\chi }_{k}\left( {-1}\right) \left( {q - 1}\right) {J}_{k - 1}\left( {{\chi }_{1},\ldots ,{\chi }_{k - 1}}\right) & \text{ if }{\chi }_{1}\cdots {\chi }_{k} = \varepsilon \text{ and }{\chi }_{k} \neq \varepsilon \end{array}\right.
\]
Proof. The formula for \( a \neq 0 \) is clear by setting \( {y}_{k} = {x}_{k}/a \), so assume \( a = 0 \) . If all the \( {\chi }_{j} \) are equal to \( \varepsilon \) then \( {J}_{k}\left( 0\right) \) is equal to the number of \( \left( {{x}_{1},\ldots ,{x}_{k}}\right) \in {\mathbb{F}}_{q}^{k} \) such that \( {x}_{1} + \cdots + {x}_{k} = 0 \), hence to \( {q}^{k - 1} \), which is the
first formula, so we may assume that not all the \( {\chi }_{j} \) are equal to \( \varepsilon \), and since \( {J}_{k}\left( a\right) \) is invariant under permutation of the indices we assume that \( {\chi }_{k} \neq \varepsilon \) . We thus have \( {\chi }_{k}\left( 0\right) = 0 \), hence
\[
{J}_{k}\left( 0\right) = {\chi }_{k}\left( 0\right) {J}_{k - 1}\left( 0\right) + \mathop{\sum }\limits_{\substack{{{x}_{i} \in {\mathbb{F}}_{q},{x}_{k} \neq 0} \\ {\left( {-{x}_{1}/{x}_{k}}\right) + \cdots + \left( {-{x}_{k - 1}/{x}_{k}}\right) = 1} }}{\chi }_{1}\left( {x}_{1}\right) \cdots {\chi }_{k}\left( {x}_{k}\right)
\]
\[
= {\chi }_{k}\left( {-1}\right) \mathop{\sum }\limits_{{{x}_{k} \in {\mathbb{F}}_{q}^{ * }}}\left( {{\chi }_{1}\cdots {\chi }_{k}}\right) \left( {-{x}_{k}}\right) \mathop{\sum }\limits_{\substack{{{y}_{i} \in {\mathbb{F}}_{q}} \\ {{y}_{1} + \cdots + {y}_{k - 1} = 1} }}{\chi }_{1}\left( {y}_{1}\right) \cdots {\chi }_{k - 1}\left( {y}_{k - 1}\right)
\]
\[
= {\chi }_{k}\left( {-1}\right) {J}_{k - 1}\left( {{\chi }_{1},\ldots ,{\chi }_{k - 1}}\right) \mathop{\sum }\limits_{{y \in {\mathbb{F}}_{q}^{ * }}}\left( {{\chi }_{1}\cdots {\chi }_{k}}\right) \left( y\right) ,
\]
and since \( \left| {\mathbb{F}}_{q}^{ * }\right| = q - 1 \) the result follows from Proposition 2.1.20.
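The lemma lends itself to a brute-force check over a small field. The sketch below is not from the text; the prime \( p = 7 \), the primitive root 3, and the character exponents are assumptions made only for the test, and the final line anticipates the Gauss-sum relation of Proposition 2.5.14 (3) below.

```python
# Sketch (not from the text): brute-force check of Lemma 2.5.13 over F_7.
import cmath
from itertools import product

p = 7
g = 3                                                        # a primitive root mod 7
dlog = {pow(g, k, p): k for k in range(p - 1)}               # discrete logarithm table

def char(j):                                                 # chi_j(x) = e(j*dlog(x)/(p-1)), chi_j(0) = 0
    return lambda x: 0 if x % p == 0 else cmath.exp(2j * cmath.pi * j * dlog[x % p] / (p - 1))

def J(chars, a):                                             # J_k(chi_1, ..., chi_k; a)
    total = 0
    for xs in product(range(p), repeat=len(chars) - 1):
        term = chars[-1]((a - sum(xs)) % p)
        for chi, x in zip(chars, xs):
            term *= chi(x)
        total += term
    return total

chi1, chi2, chi12 = char(1), char(2), char(3)                # chi12 = chi1 * chi2, nontrivial here
J1 = J([chi1, chi2], 1)
for a in range(1, p):                                        # Lemma 2.5.13, case a != 0
    assert abs(J([chi1, chi2], a) - chi1(a) * chi2(a) * J1) < 1e-9
assert abs(J([chi1, chi2], 0)) < 1e-9                        # J_k(0) = 0 since chi1*chi2 != eps

psi = lambda x: cmath.exp(2j * cmath.pi * x / p)             # a nontrivial additive character
tau = lambda chi: sum(chi(x) * psi(x) for x in range(p))     # Gauss sum tau(chi, psi)
assert abs(J1 - tau(chi1) * tau(chi2) / tau(chi12)) < 1e-9   # Proposition 2.5.14 (3)
print("Lemma 2.5.13 and the Gauss-sum relation check out over F_7")
```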
The main result concerning Jacobi sums is the following close link with Gauss sums:
Proposition 2.5.14. Let \( \psi \) be a nontrivial additive character and let \( {\chi }_{1} \) , \( \ldots ,{\chi }_{k} \) be multiplicative characters of \( {\mathbb{F}}_{q} \) . Denote by \( t \) the number of such \( {\chi }_{i} \) equal to the trivial character \( \varepsilon \) .
(1) If \( t = k \) then \( {J}_{k}\left( {{\chi }_{1},\ldots ,{\chi }_{k}}\right) = {q}^{k - 1} \) .
(2) If \( 1 \leq t \leq k - 1 \) then \( {J}_{k}\left( {{\chi }_{1},\ldots ,{\chi }_{k}}\right) = 0 \) .
(3) If \( t = 0 \) and \( {\chi }_{1}\cdots {\chi }_{k} \neq \varepsilon \) then
\[
{J}_{k}\left( {{\chi }_{1},\ldots ,{\chi }_{k}}\right) = \frac{\tau \left( {{\chi }_{1},\psi }\right) \cdots \tau \left( {{\chi }_{k},\psi }\right) }{\tau \left( {{\chi }_{1}\cdots {\chi }_{k},\psi }\right) }.
\]
(4) If \( t = 0 \) and \( {\chi }_{1}\cdots {\chi }_{k} = \varepsilon \) then
\[
{J}_{k}\left( {{\chi }_{1},\ldots ,{\chi }_{k}}\right) = - \frac{\tau \left( {{\chi }_{1},\psi }\right) \cdots \tau \left( {{\chi }_{k},\psi }\right) }{q}
\]
\[
= - {\chi }_{k}\left( {-1}\right) \frac{\tau \left( {{\chi }_{1},\psi }\right) \cdots \tau \left( {{\chi }_{k - 1},\psi }\right) }{\tau \left( {{\chi }_{1}\cdots {\chi }_{k - 1},\psi }\right) }
\]
\[
= - {\chi }_{k}\left( {-1}\right) {J}_{k - 1}\left( {{\chi }_{1},\ldots ,{\chi }_{k - 1}}\right) .
\]
In particular, in this case we have
\[
\frac{\tau \left( {{\chi }_{1},\psi }\right) \cdots \tau \left( {{\chi }_{k},\psi }\right) }{q} = {\chi }_{k}\left( {-1}\right) {J}_{k - 1}\left( {{\chi }_{1},\ldots ,{\chi }_{k - 1}}\right) .
\]
Proof. (1) and (2). For \( k = 1 \) the result is trivial, so assume that \( k \geq 2 \) . We can write
\[
{J}_{k}\left( {{\chi }_{1},\ldots ,{\chi }_{k}}\right) = \mathop{\sum }\limits_{{{x}_{1},\ldots ,{x}_{k - 1} \in {\mathbb{F}}_{q}}}{\chi }_{1}\left( {x}_{1}\right) \cdots {\chi }_{k - 1}\left( {x}_{k - 1}\right) {\chi }_{k}\left( {1 - \left( {{x}_{1} + \cdots + {x}_{k - 1}}\right) }\right) .
\]
If \( t = k \), in other words if all the \( {\chi }_{i} \) are equal to \( \varepsilon \), this is evidently equal to \( {q}^{k - 1} \), which is (1). If \( 1 \leq t \leq k - 1 \), then since \( {J}_{k} \) is invariant by permutation of the characters we may assume that \( {\chi }_{k} = \varepsilon \) . Thus the above sum decomposes into a product:
\[
{J}_{k}\left( {{\chi }_{1},\ldots ,{\chi }_{k}}\right) = \mathop{\prod }\limits_{{1 \leq i \leq k - 1}}\mathop{\sum }\limits_{{{x}_{i} \in {\mathbb{F}}_{q}}}{\chi }_{i}\left( {x}_{i}\right) .
\]
Since \( t \leq k - 1 \), at least one of the \( {\chi }_{i} \) for \( i \leq k - 1 \) is a nontrivial character, hence \( \mathop{\sum }\limits_{{{x}_{i} \in {\mathbb{F}}_{q}}}{\chi }_{i}\left( {x}_{i}\right) = 0 \), proving (2).
(3) and (4). We therefore assume that all the \( {\chi }_{i} \) are nontrivial characters. For simplicity, write \( \tau \left( \chi \right) \) instead of \( \tau \left( {\chi ,\psi }\right) \), since in the present proof the additive character \( \psi \) does not change. Thus \( \tau \left( {\chi }_{i}\right) = \mathop{\sum }\limits_{{{x}_{i} \in {\mathbb{F}}_{q}}}{\chi }_{i}\left( {x}_{i}\right) \psi \left( {x}_{i}\right) \) , where we may include \( {x}_{i} = 0 \), since \( {\chi }_{i}\left( 0\right) = 0,{\chi }_{i} \) being nontrivial. Hence grouping terms with given \( {x}_{1} + \cdots + {x}_{k} = a \) and using the above lemma we have
\[
\tau \left( {\chi }_{1}\right) \cdots \tau \left( {\chi }_{k}\right) = \mathop{\sum }\limits_{{{x}_{i} \in {\mathbb{F}}_{q}}}{\chi }_{1}\left( {x}_{1}\right) \cdots {\chi }_{k}\left( {x}_{k}\right) \psi \left( {{x}_{1} + \cdots + {x}_{k}}\right)
\]
\[
= \mathop{\sum }\limits_{{a \in {\mathbb{F}}_{q}}}\psi \left( a\right) \mathop{\sum }\limits_{\substack{{{x}_{i} \in {\mathbb{F}}_{q}} \\ {{x}_{1} + \cdots + {x}_{k} = a} }}{\chi }_{1}\left( {x}_{1}\right) \cdots {\chi }_{k}\left( {x}_{k}\right)
\]
\[
= {J}_{k}\left( 0\right) + \mathop{\sum }\limits_{{a \in {\mathbb{F}}_{q}^{ * }}}\psi \left( a\right) \left( {{\chi }_{1}\cdots {\chi }_{k}}\right) \left( a\right) {J}_{k}\left( {{\chi }_{1},\ldots ,{\chi }_{k}}\right)
\]
\[
= {J}_{k}\left( 0\right) + \tau \left( {{\chi }_{1}\cdots {\chi }_{k}}\right) {J}_{k}\left( {{\chi }_{1},\ldots ,{\chi }_{k}}\right) .
\]
If \( {\chi }_{1}\cdots {\chi }_{k} \neq \varepsilon \) the lemma tells us that \( {J}_{k}\left( 0\right) = 0 \), proving (3). If \( {\chi }_{1}\cdots {\chi }_{k} = \varepsilon \) then on the one hand by the lemma we have \( {J}_{k}\left( 0\right) = {\chi }_{k}\left( {-1}\right) \left( {q - 1}\right) {J}_{k - 1}\left( {{\chi }_{1},\ldots ,{\chi }_{k - 1}}\right) \) . However, since \( {\chi }_{k} \neq \varepsilon \) while \( {\chi }_{1}\cdots {\chi }_{k} = \varepsilon \) we have \( {\chi }_{1}\cdots {\chi }_{k - 1} \neq \varepsilon \) (and all the \( {\chi }_{i} \) still different from \( \varepsilon \) ), so by (3) and Proposition 2.5.8 we have
\[
{J}_{k - 1}\left( {{\chi }_{1},\ldots ,{\chi }_{k - 1}}\right) = \frac{\tau \left( {\chi }_{1}\right) \cdots \tau \left( {\chi }_{k - 1}\right) }{\tau \left( {\chi }_{k}^{-1}\right) } = {\chi }_{k}\left( {-1}\right) \frac{\tau \left( {\chi }_{1}\right) \cdots \tau \left( {\chi }_{k - 1}\right) }{\overline{\tau \left( {\chi }_{k}\right) }}
\]
\[
= {\chi }_{k}\left( {-1}\right) \frac{\tau \left( {\chi }_{1}\right) \cdots \tau \left( {\chi }_{k - 1}\right) \tau \left( {\chi }_{k}\right) }{q}
\]
by Proposition 2.5.9, so \( {J}_{k}\left( 0\right) = \left( {1 - 1/q}\right) \tau \left( {\chi }_{1}\right) \cdots \tau \left( { |
1088_(GTM245)Complex Analysis | Definition 10.14 |
Definition 10.14. Euler’s \( \Gamma \) -function is defined by
\[
\Gamma \left( z\right) = \frac{1}{{zH}\left( z\right) } = \frac{{\mathrm{e}}^{-{\gamma z}}}{z}\mathop{\prod }\limits_{{n = 1}}^{\infty }{\left( 1 + \frac{z}{n}\right) }^{-1}{\mathrm{e}}^{\frac{z}{n}}, z \in \mathbb{C}.
\]
(10.7)
Note that \( \Gamma \) is a meromorphic function on \( \mathbb{C} \), with simple poles at \( z = \) \( 0, - 1, - 2,\ldots \), and that it has no zeros. The \( \Gamma \) -function satisfies a number of useful functional equations. We derive some of these, which will lead us to (10.12). The first of these functional equations is
\[
\Gamma \left( {z + 1}\right) = \frac{1}{\left( {z + 1}\right) H\left( {z + 1}\right) } = \frac{1}{H\left( z\right) } = {z\Gamma }\left( z\right) .
\]
(10.8)
Further, it follows from (10.4) that
\[
\Gamma \left( z\right) \Gamma \left( {1 - z}\right) = \frac{\pi }{\sin {\pi z}}.
\]
(10.9)
A simple calculation shows that
\[
\Gamma \left( 1\right) = {\mathrm{e}}^{-\gamma }\mathop{\prod }\limits_{{n = 1}}^{\infty }{\left( 1 + \frac{1}{n}\right) }^{-1}{\mathrm{e}}^{\frac{1}{n}} = 1.
\]
Together with (10.8), this implies that
\[
\Gamma \left( n\right) = \left( {n - 1}\right) !,\text{ for all }n \in {\mathbb{Z}}_{ > 0}.
\]
Also, it follows from (10.9) that \( {\left( \Gamma \left( \frac{1}{2}\right) \right) }^{2} = \frac{\pi }{\sin \frac{\pi }{2}} = \pi \), and hence
\[
\Gamma \left( \frac{1}{2}\right) = \sqrt{\pi }
\]
since \( \Gamma \left( \frac{1}{2}\right) > 0 \) from (10.7).
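These basic identities are easy to verify numerically. The sketch below is not from the text; the sample points, the truncation level of the product, and the hard-coded value of the Euler-Mascheroni constant \( \gamma \) are assumptions made only for the check.

```python
# Sketch (not from the text): numerical checks of Gamma(n) = (n-1)!, Gamma(1/2) = sqrt(pi),
# the reflection formula (10.9), and a partial product of (10.7).
import math

EULER_GAMMA = 0.5772156649015329                     # Euler-Mascheroni constant (hard-coded)

for n in range(1, 8):
    assert abs(math.gamma(n) - math.factorial(n - 1)) < 1e-8

assert abs(math.gamma(0.5) - math.sqrt(math.pi)) < 1e-12

z = 0.3                                              # reflection formula at a sample point
assert abs(math.gamma(z) * math.gamma(1 - z) - math.pi / math.sin(math.pi * z)) < 1e-9

def gamma_product(z, N=200_000):                     # partial product of (10.7), slow convergence
    val = math.exp(-EULER_GAMMA * z) / z
    for n in range(1, N + 1):
        val *= math.exp(z / n) / (1 + z / n)
    return val

print(gamma_product(0.5), math.gamma(0.5))           # both close to 1.7724538...
```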
We derive some other properties of Euler’s \( \Gamma \) -function that we will need. We start with a calculation, from (10.7):
\[
\frac{\mathrm{d}}{\mathrm{d}z}\frac{{\Gamma }^{\prime }\left( z\right) }{\Gamma \left( z\right) } = \frac{\mathrm{d}}{\mathrm{d}z}\frac{\mathrm{d}}{\mathrm{d}z}\log \Gamma \left( z\right)
\]
\[
= \frac{\mathrm{d}}{\mathrm{d}z}\left( {-\gamma - \frac{1}{z} - \mathop{\sum }\limits_{{n = 1}}^{\infty }\left( {\frac{1}{z + n} - \frac{1}{n}}\right) }\right)
\]
\[
= \mathop{\sum }\limits_{{n = 0}}^{\infty }{\left( \frac{1}{z + n}\right) }^{2}
\]
(10.10)
Both functions
\[
z \mapsto \Gamma \left( {2z}\right) \text{ and }z \mapsto \Gamma \left( z\right) \Gamma \left( {z + \frac{1}{2}}\right)
\]
have simple poles precisely at the points \( 0, - 1, - 2,\ldots \) and \( - \frac{1}{2}, - \frac{3}{2},\ldots \) . The ratio of the two functions is hence entire without zeros. The next calculation will show more:
\[
\frac{\mathrm{d}}{\mathrm{d}z}\left( \frac{{\Gamma }^{\prime }\left( z\right) }{\Gamma \left( z\right) }\right) + \frac{\mathrm{d}}{\mathrm{d}z}\left( \frac{{\Gamma }^{\prime }\left( {z + \frac{1}{2}}\right) }{\Gamma \left( {z + \frac{1}{2}}\right) }\right) = \mathop{\sum }\limits_{{n = 0}}^{\infty }\frac{1}{{\left( z + n\right) }^{2}} + \mathop{\sum }\limits_{{n = 0}}^{\infty }\frac{1}{{\left( z + n + \frac{1}{2}\right) }^{2}}
\]
\[
= 4\left( {\mathop{\sum }\limits_{{n = 0}}^{\infty }\frac{1}{{\left( 2z + 2n\right) }^{2}} + \mathop{\sum }\limits_{{n = 0}}^{\infty }\frac{1}{{\left( 2z + 2n + 1\right) }^{2}}}\right)
\]
\[
= 4\left( {\mathop{\sum }\limits_{{m = 0}}^{\infty }\frac{1}{{\left( 2z + m\right) }^{2}}}\right)
\]
\[
= 2\frac{\mathrm{d}}{\mathrm{d}z}\left( \frac{{\Gamma }^{\prime }\left( {2z}\right) }{\Gamma \left( {2z}\right) }\right)
\]
Therefore, there exists a constant \( A \) such that
\[
2\frac{{\Gamma }^{\prime }\left( {2z}\right) }{\Gamma \left( {2z}\right) } = \frac{{\Gamma }^{\prime }\left( z\right) }{\Gamma \left( z\right) } + \frac{{\Gamma }^{\prime }\left( {z + \frac{1}{2}}\right) }{\Gamma \left( {z + \frac{1}{2}}\right) } - A
\]
or, equivalently,
\[
\frac{\mathrm{d}}{\mathrm{d}z}\log \Gamma \left( {2z}\right) = \frac{\mathrm{d}}{\mathrm{d}z}\log \left\lbrack {\Gamma \left( z\right) \Gamma \left( {z + \frac{1}{2}}\right) }\right\rbrack - A.
\]
Thus there exists a constant \( B \) such that
\[
\log \Gamma \left( {2z}\right) = \log \left\lbrack {\Gamma \left( z\right) \Gamma \left( {z + \frac{1}{2}}\right) }\right\rbrack - {Az} - B
\]
or, equivalently,
\[
\Gamma \left( {2z}\right) {\mathrm{e}}^{{Az} + B} = \Gamma \left( z\right) \Gamma \left( {z + \frac{1}{2}}\right) .
\]
(10.11)
Next work backward to determine \( A \) and \( B \) . Setting \( z = \frac{1}{2} \) in (10.11), we obtain \( 1 \cdot {\mathrm{e}}^{\frac{1}{2}A + B} = \sqrt{\pi } \cdot 1 \) ; that is, \( \frac{1}{2}A + B = \frac{1}{2}\log \pi \) . Setting \( z = 1 \) in (10.11), we obtain \( {\mathrm{e}}^{A + B} = \frac{1}{2}\sqrt{\pi } \) ; that is, \( A + B = \frac{1}{2}\log \pi - \log 2 \) . Thus \( A = - 2\log 2 \) and \( B = \frac{1}{2}\left( {\log \pi }\right) + \log 2 \), and we have established
## 10.4.1.1 Legendre's Duplication Formula
\[
\sqrt{\pi }\Gamma \left( {2z}\right) = {2}^{{2z} - 1}\Gamma \left( z\right) \Gamma \left( {z + \frac{1}{2}}\right) .
\]
(10.12)
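A quick numerical spot check of (10.12) at a few arbitrary sample points (a sketch, not from the text):

```python
# Sketch (not from the text): spot check of Legendre's duplication formula (10.12).
import math

for z in (0.37, 1.0, 2.5, 4.2):
    lhs = math.sqrt(math.pi) * math.gamma(2 * z)
    rhs = 2 ** (2 * z - 1) * math.gamma(z) * math.gamma(z + 0.5)
    assert abs(lhs - rhs) < 1e-9 * max(1.0, abs(lhs))
print("duplication formula verified at sample points")
```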
## 10.4.2 Estimates for \( \Gamma \left( z\right) \)
The estimate of \( \Gamma \left( z\right) \) for large values of \( \left| z\right| \) that is found in this section is known as Stirling's formula. To derive this formula, we first express the partial sums \( \mathop{\sum }\limits_{{k = 0}}^{n}{\left( \frac{1}{z + k}\right) }^{2} \) of \( \frac{\mathrm{d}}{\mathrm{d}z}\frac{{\Gamma }^{\prime }\left( z\right) }{\Gamma \left( z\right) } \) in (10.10) as a convenient line integral. View \( z = x + \imath y \) as a (fixed) parameter and \( \zeta = \xi + {\iota \eta } \) as a variable, and define
\[
\Phi \left( \zeta \right) = \frac{\pi \cot {\pi \zeta }}{{\left( z + \zeta \right) }^{2}},\text{ for }\zeta \in \mathbb{C}.
\]
The function \( \Phi \) has singularities at \( \zeta = - z \) and at \( \zeta \in \mathbb{Z} \) ; if \( z \notin \mathbb{Z},\Phi \) has a double pole at \( - z \) and simple poles at the integers. Let \( Y \) be a positive real number, \( n \) be a nonnegative integer, and \( K \) be the rectangle in the \( \zeta \) -plane described by \( - Y \leq \eta \leq Y \) and \( 0 \leq \xi \leq n + \frac{1}{2} \) ; see Fig. 10.1. Then the residue theorem yields the next result.
Lemma 10.15. If \( \Re z > 0 \), then
\[
\frac{1}{{2\pi }\imath }\text{ pr. v. }{\int }_{\partial K}\Phi \left( \zeta \right) \mathrm{d}\zeta = - \frac{1}{2{z}^{2}} + \mathop{\sum }\limits_{{v = 0}}^{n}\frac{1}{{\left( z + v\right) }^{2}},
\]
![a50267de-c956-4a7f-8c2e-850adafcee65_299_0.jpg](images/a50267de-c956-4a7f-8c2e-850adafcee65_299_0.jpg)
Fig. 10.1 The rectangle \( K \)
where as usual \( \partial K \) is positively oriented.
We plan to let \( Y \rightarrow \infty \) and \( n \rightarrow \infty \) . We thus have to study several line integrals, as follows.
## 10.4.2.1 The Integral Over the Horizontal Sides: \( \eta = \pm Y \)
On the horizontal sides \( \eta = \pm Y \) , \( \cot {\pi \zeta } \) converges uniformly to \( \pm \iota \) as \( Y \) goes to \( \infty \) . Thus \( \frac{\cot {\pi \zeta }}{{\left( z + \zeta \right) }^{2}} \) converges to 0 on each of the line segments \( \xi \geq 0,\eta = \pm Y \) as \( Y \rightarrow \infty \) . We need to show that
\[
\mathop{\lim }\limits_{{n \rightarrow \infty }}\mathop{\lim }\limits_{{Y \rightarrow \infty }}{\int }_{0}^{n + \frac{1}{2}}\frac{\cot \pi \left( {\xi \pm \imath Y}\right) }{{\left( z + \xi + \imath Y\right) }^{2}}\mathrm{\;d}\xi = 0.
\]
Since we are able to control the speed with which \( Y \) and \( n \) approach infinity, this presents a small challenge; for example, we can set \( Y = {n}^{2} \) and then let \( n \rightarrow \infty \) .
## 10.4.2.2 The Integral Over the Vertical Side \( \xi = n + \frac{1}{2} \)
On the vertical line \( \xi = n + \frac{1}{2} \), \( \cot {\pi \zeta } \) is bounded since \( \cot \) is a periodic function. Thus we conclude that for some constant \( c \) ,
\[
\left| {{\int }_{\xi = n + \frac{1}{2}}\Phi \left( \zeta \right) \mathrm{d}\zeta }\right| \leq c{\int }_{\xi = n + \frac{1}{2}}\frac{\mathrm{d}\eta }{{\left| \zeta + z\right| }^{2}}.
\]
On \( \xi = n + \frac{1}{2} \), we have \( \bar{\zeta } = n + \frac{1}{2} - {\iota \eta } = {2n} + 1 - \zeta \) . We then use residue calculus (as in case 1 of Sect. 6.6) to conclude that
\[
\frac{1}{\iota }{\int }_{\xi = n + \frac{1}{2}}\frac{\mathrm{d}\zeta }{{\left| \zeta + z\right| }^{2}} = \frac{1}{\iota }{\int }_{\xi = n + \frac{1}{2}}\frac{\mathrm{d}\zeta }{\left( {\zeta + z}\right) \left( {{2n} + 1 - \zeta + \bar{z}}\right) } = \frac{2\pi }{{2n} + 1 + {2x}}.
\]
Therefore,
\[
\mathop{\lim }\limits_{{n \rightarrow \infty }}{\int }_{\xi = n + \frac{1}{2}}\frac{\mathrm{d}\eta }{{\left| \zeta + z\right| }^{2}} = 0.
\]
## 10.4.2.3 The Integral Over the Imaginary Axis
We now turn to the computation of the principal value of the integral over the imaginary axis, which may be written as follows.
\[
\frac{1}{2}{\int }_{0}^{\infty }\cot \left( {\pi \imath \eta }\right) \left\lbrack {\frac{1}{{\left( \imath \eta + z\right) }^{2}} - \frac{1}{{\left( \imath \eta - z\right) }^{2}}}\right\rbrack \mathrm{d}\eta = - {\int }_{0}^{\infty }\coth \left( {\pi \eta }\right) \frac{2\eta z}{{\left( {\eta }^{2} + {z}^{2}\right) }^{2}}\mathrm{\;d}\eta .
\]
We now return to the main task. We let \( Y \) and \( n \) tend to \( \infty \) in Lemma 10.15 to conclude that
\[
\frac{\mathrm{d}}{\mathrm{d}z}\frac{{\Gamma }^{\prime }\left( z\right) }{\Gamma \left( z\right) } = \mathop{\sum }\limits_{{n = 0}}^{\infty }\frac{1}{{\left( z + n\right) }^{2}}
\]
\[
= \frac{1}{2{z}^{2}} + \text{ pr. v. }{\int }_{+\infty }^{-\infty }\Phi \left( \zeta \right) \mathrm{d}\zeta
\]
\[
= \frac{1}{2{z}^{2}} + {\int }_{0}^{\infty }\coth \left( {\pi \eta }\right) \frac{2\eta z}{{\left( {\eta }^{2} + {z}^{2}\right) }^{2}}\mathrm{\;d}\eta .
\]
Replacing \( \coth {\pi \eta } \) by \( 1 + \frac{2}{{\mathrm{e}}^{2\pi \eta } - 1} \) in the above expression and noting that
\[
{\int }_{0}^{\infty }\frac{2\eta z}{{\left( {\eta }^{2} + {z}^{2}\right) }^{2}}\mathrm{\;d}\eta = \frac{1}{z}
\]
we obtain
\[
\frac{\mathrm{d}}{\mathrm{d}z}\frac{{\Gamma }^{\prime }\left( z\r |
1119_(GTM273)Homotopical Topology | Definition 2 |
Definition 2. Let \( \xi = \left( {E, B, F, p}\right) \) be a fibration, and let \( f : {B}^{\prime } \rightarrow B \) be a continuous map. Denote by \( {E}^{\prime } \) the subset of \( E \times {B}^{\prime } \) consisting of all points \( \left( {e,{b}^{\prime }}\right) \) such that \( f\left( {b}^{\prime }\right) = p\left( e\right) \) . Then define a map \( {p}^{\prime } : {E}^{\prime } \rightarrow {B}^{\prime } \) by the formula \( {p}^{\prime }\left( {e,{b}^{\prime }}\right) = {b}^{\prime } \) . The locally trivial fibration \( \left( {{E}^{\prime },{B}^{\prime }, F,{p}^{\prime }}\right) \) (EXERCISE 6; check the local triviality of this fibration) is called the fibration induced by \( \xi \) by means of \( f \) and is denoted as \( {f}^{ * }\xi \) .
Clarification of Definition 2. Obviously,
\[
{\left( {p}^{\prime }\right) }^{-1}\left( {b}^{\prime }\right) = {p}^{-1}\left( {f\left( {b}^{\prime }\right) }\right)
\]
[we mean the canonical homeomorphism established by the map \( {E}^{\prime } \rightarrow E \) , \( \left( {e,{b}^{\prime }}\right) \mapsto e\rbrack \) . Thus, we can say that the fibered space \( {E}^{\prime } \) is made out of fibers of the fibration \( \xi \) in such a way that the fiber over \( b \) is used as a fiber over \( {b}^{\prime } \) whenever \( f\left( {b}^{\prime }\right) = b \) ; if \( f \) is not one-to-one, the same fiber of \( \xi \) can be used many times as a fiber of \( {f}^{ * }\xi \) .
Remark. The notions introduced by Definitions 1 and 2 are interrelated. First, \( {\left. \xi \right| }_{{B}^{\prime }} \) is \( {i}^{ * }\xi \), where \( i : {B}^{\prime } \rightarrow B \) is the inclusion map. Second, \( {f}^{ * }\xi = {\left. \left( {B}^{\prime } \times \xi \right) \right| }_{\operatorname{graph}\left( f\right) } \), where \( {B}^{\prime } \times \xi = \left( {{B}^{\prime } \times E,{B}^{\prime } \times B, F,{\operatorname{id}}_{{B}^{\prime }} \times p}\right) \) and \( \operatorname{graph}\left( f\right) = \left\{ {\left( {{b}^{\prime }, b}\right) \in {B}^{\prime } \times B \mid b = f\left( {b}^{\prime }\right) }\right\} ; \) obviously, \( \operatorname{graph}\left( f\right) \) is canonically homeomorphic to \( {B}^{\prime } \) .
EXERCISE 7. Let \( \xi = \left( {E, B, F, p}\right) \) be a fibration, and let \( f : {B}^{\prime } \rightarrow B \) be a continuous map. Prove that if \( {\xi }^{\prime } = \left( {{E}^{\prime },{B}^{\prime }, F,{p}^{\prime }}\right) \) is a fibration for which there exists a continuous map \( h : {E}^{\prime } \rightarrow E \) which maps every fiber \( {\left( {p}^{\prime }\right) }^{-1}\left( {b}^{\prime }\right) \) of \( {\xi }^{\prime } \) homeomorphically onto the fiber \( {p}^{-1}\left( {f\left( {b}^{\prime }\right) }\right) \) of \( \xi \), then the fibration \( {\xi }^{\prime } \) is equivalent to \( {f}^{ * }\xi \) .
Lemma (Feldbau's Theorem). Every locally trivial fibration whose base is a cube (of any dimension) is trivial.
Proof. Let \( \xi = \left( {E,{I}^{n}, F, p}\right) \) be our fibration.
Step 1. Let
\[
{I}_{1}^{n} = \left\{ {\left( {{x}_{1},\ldots ,{x}_{n}}\right) \in {I}^{n} \mid {x}_{n} \leq \frac{1}{2}}\right\} ,
\]
\[
{I}_{2}^{n} = \left\{ {\left( {{x}_{1},\ldots ,{x}_{n}}\right) \in {I}^{n} \mid {x}_{n} \geq \frac{1}{2}}\right\} ,
\]
and let \( {\xi }_{1} = {\left. \xi \right| }_{{I}_{1}^{n}},{\xi }_{2} = {\left. \xi \right| }_{{I}_{2}^{n}} \) . We will prove that if \( {\xi }_{1},{\xi }_{2} \) are trivial, then \( \xi \) is trivial. Let \( {h}_{1} : {p}^{-1}\left( {I}_{1}^{n}\right) \rightarrow {I}_{1}^{n} \times F,{h}_{2} : {p}^{-1}\left( {I}_{2}^{n}\right) \rightarrow {I}_{2}^{n} \times F \) be trivializations of \( {\xi }_{1},{\xi }_{2} \) . The maps \( {h}_{1},{h}_{2} \) do not form any map of \( E = {p}^{-1}\left( {I}_{1}^{n}\right) \cup {p}^{-1}\left( {I}_{2}^{n}\right) \) into \( {I}^{n} \times F \), because they are not compatible on \( {p}^{-1}\left( {I}_{1}^{n}\right) \cap {p}^{-1}\left( {I}_{2}^{n}\right) = {p}^{-1}\left( {{I}^{n - 1} \times \frac{1}{2}}\right) \) . Actually, for \( x \in {I}^{n - 1} \), there arises a homeomorphism \( {\varphi }_{x} : F = x \times F\xrightarrow[]{{h}_{2}^{-1}}{p}^{-1}\left( x\right) \xrightarrow[]{{h}_{1}}x \times F = F \) . We define \( h : E \rightarrow {I}^{n} \times F \) by the formula
\[
h\left( e\right) = \left\{ \begin{array}{ll} {h}_{1}\left( e\right) , & \text{ if }p\left( e\right) \in {I}_{1}^{n}, \\ \left( {{\operatorname{id}}_{{I}_{2}^{n}} \times {\varphi }_{x}}\right) \circ {h}_{2}\left( e\right) , & \text{ if }p\left( e\right) \in x \times \left\lbrack {\frac{1}{2},1}\right\rbrack \subset {I}_{2}^{n} \end{array}\right.
\]
(see Fig. 44). This is a trivialization of \( \xi \) .
![ca3d23e8-d3f0-4fbc-a946-2e04586ccbf2_121_0.jpg](images/ca3d23e8-d3f0-4fbc-a946-2e04586ccbf2_121_0.jpg)
Fig. 44 Proof of Feldbau's theorem, step 1
![ca3d23e8-d3f0-4fbc-a946-2e04586ccbf2_121_1.jpg](images/ca3d23e8-d3f0-4fbc-a946-2e04586ccbf2_121_1.jpg)
Fig. 45 Proof of Feldbau's theorem, step 2
Step 2. Cut the cube \( {I}^{n} \) into \( {N}^{n} \) small cubes with the side \( \frac{1}{N} \), where \( N \) is so big that the fibration is trivial over every small cube, and numerate these small cubes as \( {I}_{1},{I}_{2},\ldots ,{I}_{{N}^{n}} \) in lexicographical order. For \( 1 \leq m \leq {N}^{n} \), let \( {J}_{m} = {I}_{1} \cup \cdots \cup {I}_{m} \) . Then every \( {J}_{m} \) is homeomorphic to \( {I}^{n} \), and for \( m \geq 2 \), the homeomorphism \( {q}_{m} : {J}_{m} \rightarrow {I}^{n} \) can be chosen in such a way that \( {q}_{m}\left( {J}_{m - 1}\right) = {I}_{1}^{n} \) and \( {q}_{m}\left( {I}_{m}\right) = {I}_{2}^{n} \) . Then, according to step 1, if the fibration is trivial over \( {J}_{m - 1} \), it is also trivial over \( {J}_{m} \) .
Since the fibration is trivial over \( {J}_{1} = {I}_{1} \), the induction shows that it is trivial over \( {J}_{{N}^{n}} = {I}^{n} \) (Fig. 45).
## 9.3 Proof of CHP
In this section we will prove (the relative version of) the theorem of Sect. 6.2. We will successively consider four cases.
Case 1: The given fibration is trivial. In this case we can assume that \( E = B \times F \) and identify maps \( X \rightarrow E \) with pairs of maps \( X \rightarrow B, X \rightarrow F \) . We are given a pair of maps \( {\varphi }_{1} : X \rightarrow B,{\varphi }_{2} : X \rightarrow F \), a homotopy \( {\Phi }_{1} : X \times I \rightarrow B \) of \( {\varphi }_{1} \), and, in addition to \( {\Psi }_{1} = {\left. {\Phi }_{1}\right| }_{Y \times I} \), a homotopy \( {\Psi }_{2} : Y \times I \rightarrow F \) of \( {\left. {\varphi }_{2}\right| }_{Y} \) . We need to extend the homotopy \( {\Psi }_{2} \) to a homotopy \( {\Phi }_{2} : X \times I \rightarrow F \) of \( {\varphi }_{2} \) . But this is precisely what Borsuk’s theorem (Sect. 2.5) provides.
Case 2: The fibration is arbitrary, \( \left( {X, Y}\right) = \left( {{D}^{n},{S}^{n - 1}}\right) \) . The induced fibration \( {\Phi }^{ * }\left( {E, B, F, p}\right) = \left( {{E}^{\prime },{D}^{n} \times I, F,{p}^{\prime }}\right) \) is trivial by Feldbau’s theorem \( \left( {{D}^{n} \times I}\right. \) is homeomorphic to \( {I}^{n + 1} \) ). Recall that \( {E}^{\prime } \subset \left( {{D}^{n} \times I}\right) \times E \) . The map \( \widetilde{\omega } : {D}^{n} \rightarrow {E}^{\prime },\widetilde{\omega }\left( x\right) = \) \( \left( {\left( {x,0}\right) ,\widetilde{\varphi }\left( x\right) }\right) \) and homotopies \( \Omega = \) id: \( {D}^{n} \times I \rightarrow {D}^{n} \times I \) and \( \widetilde{\Lambda } : {S}^{n - 1} \times I \rightarrow \) \( {E}^{\prime },\widetilde{\Lambda }\left( {x, t}\right) = \left( {\left( {x, t}\right) ,\widetilde{\Psi }\left( {x, t}\right) }\right) \) satisfy the requirements of the theorem, and, by case 1, there exists a homotopy \( \widetilde{\Omega } : {D}^{n} \times I \rightarrow {E}^{\prime } \) of \( \widetilde{\omega } \) which covers \( \Omega \) and extends \( \widetilde{\Lambda } \) . If \( \widetilde{\Omega }\left( {x, t}\right) = \left( {\left( {x, t}\right) ,\widetilde{\Phi }\left( {x, t}\right) }\right) \), then \( \widetilde{\Phi } : {D}^{n} \times I \rightarrow E \) is a homotopy of \( \widetilde{\varphi } \) which covers \( \Phi \) and extends \( \widetilde{\Psi } \) .
Case 3: The fibration is arbitrary, and the CW complex \( X \) is finite. The obvious induction makes it possible to assume that \( X - Y \) is one cell, \( e \) . Let \( f : {D}^{n} \rightarrow X \) be a characteristic map of \( e \) [so \( f\left( {S}^{n - 1}\right) \subset Y \) and \( X = Y{\cup }_{{\left. f\right| }_{{S}^{n - 1}}}{D}^{n} \) ]. The map \( \widetilde{\sigma } = \widetilde{\varphi } \circ f : {D}^{n} \rightarrow E \) and homotopies \( \Sigma = \Phi \circ \left( {f \times I}\right) : {D}^{n} \times I \rightarrow B \) and \( \widetilde{T} = \widetilde{\Psi } \circ \left( {{\left. f\right| }_{{S}^{n - 1}} \times I}\right) : {S}^{n - 1} \times I \rightarrow E \) satisfy the requirements of the theorem, and, by case 2, there exists a homotopy \( \widetilde{\Sigma } : {D}^{n} \times I \rightarrow E \) of \( \widetilde{\sigma } \) which covers \( \Sigma \) and extends \( \widetilde{T} \) . The homotopies \( \widetilde{\Psi } : Y \times I \rightarrow E \) and \( \widetilde{\Sigma } : {D}^{n} \times I \rightarrow E \) compose a homotopy \( \widetilde{\Phi } : X \times I \rightarrow E \) that is required by the CHP. (We leave to the reader to check that \( \widetilde{\Psi } \) and \( \widetilde{\Sigma } \) are compatible with the attaching of \( {D}^{n} \times I \) to \( Y \times I \) by the map \( {\left. f\right| }_{{S}^{n - 1}} \times I \) .)
Case 4: General. If \( X \) has infinitely many cells of one dimension not contained in \( Y \), then we need to apply the construction of case 3 to these cells simultaneously. If \( X - Y \) contains cells of unlimited dimensions, then we have to apply this construction infinitely many times. In |
1329_[肖梁] Abstract Algebra (2022F) | Definition 7.1.1 |
Definition 7.1.1. Fix a prime number \( p \) .
(1) A \( p \) -group is a finite group whose order is a power of \( p \) .
(2) If \( G \) is a finite group of order \( \left| G\right| = {p}^{r}m \) with \( r, m \in \mathbb{N} \) and \( p \nmid m \), a subgroup \( H \) of
\( G \) of order exactly \( {p}^{r} \) is called a Sylow \( p \) -subgroup or \( p \) -Sylow subgroup. Write
\[
{\operatorname{Syl}}_{p}\left( G\right) \mathrel{\text{:=}} \{ \text{ Sylow }p\text{-subgroups of }G\} \;\text{ and }\;{n}_{p} \mathrel{\text{:=}} \left| {{\operatorname{Syl}}_{p}\left( G\right) }\right| .
\]
Theorem 7.1.2 (Sylow’s theorem). Let \( G \) be a finite group with \( \left| G\right| = {p}^{r}m \) with \( r, m \in \mathbb{N} \) and \( p \nmid m \) .
- (First Sylow Theorem) Sylow p-subgroups exist.
- (Second Sylow Theorem) If \( P \) is a Sylow p-subgroup of \( G \), and \( Q \leq G \) is a subgroup of p-power order, then there exists \( g \in G \) such that \( Q \leq {gP}{g}^{-1} \) (note \( {gP}{g}^{-1} \) is also a Sylow p-subgroup).
In other words, we have
- all Sylow p-subgroups are conjugate; and
- all subgroups of p-power order are contained in a Sylow p-subgroup.
- (Third Sylow Theorem) The number \( {n}_{p} = \left| {{\operatorname{Syl}}_{p}\left( G\right) }\right| \) satisfies
(1) \( {n}_{p} \equiv 1{\;\operatorname{mod}\;p} \), and
(2) \( {n}_{p} \mid m \) .
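As a concrete illustration (not from the text), the Third Sylow Theorem can be verified by brute force for a small group. The choice \( G = {S}_{4} \) and \( p = 3 \) below is an assumption made only for the test; since the Sylow 3-subgroups then have order 3, they are exactly the cyclic subgroups generated by elements of order 3.

```python
# Sketch (not from the text): verify n_3 = 1 (mod 3) and n_3 | 8 for G = S_4 (|G| = 24 = 3 * 8).
from itertools import permutations

def compose(a, b):                                   # (a o b)(i) = a[b[i]]
    return tuple(a[b[i]] for i in range(len(b)))

G = list(permutations(range(4)))                     # S_4 as tuples
e = tuple(range(4))

def order(g):
    k, x = 1, g
    while x != e:
        x, k = compose(x, g), k + 1
    return k

# Sylow 3-subgroups of S_4 have order 3, hence are generated by single elements of order 3.
syl3 = {frozenset({e, g, compose(g, g)}) for g in G if order(g) == 3}
n3 = len(syl3)
print(n3)                                            # 4
assert n3 % 3 == 1 and 8 % n3 == 0                   # n_p = 1 (mod p) and n_p | m
```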
## 7.2. Proof of Sylow's theorems and their corollaries.
7.2.1. Proof of First Sylow Theorem. We use an induction on \( \left| G\right| \) . When \( \left| G\right| = 1 \), there is nothing to prove.
Suppose that the First Sylow Theorem is proved for finite groups of order \( < n \) . Let \( G \) be a finite group of order \( n = {p}^{r}m \) with \( r, m \in \mathbb{N} \) and \( p \nmid m \) .
Case 1: \( p \nmid \left| G\right| \), i.e. \( r = 0 \) . Then \( \{ 1\} \subseteq G \) is the Sylow \( p \) -subgroup of \( G \) .
Case 2: If \( p \) divides \( \left| {Z\left( G\right) }\right| \), then \( Z\left( G\right) \) is a finitely generated abelian group; so
\[
Z\left( G\right) = \underset{p\text{-part }}{\underbrace{{\mathbf{Z}}_{{p}^{{r}_{1}}} \times \cdots \times {\mathbf{Z}}_{{p}^{{r}_{s}}}}} \times \cdots
\]
We write \( Z{\left( G\right) }_{p} \) for the \( p \) -part of \( Z\left( G\right) \) ; then \( \left| {Z{\left( G\right) }_{p}}\right| = {p}^{{r}^{\prime }} \) for some \( {r}^{\prime } \geq 1 \) .
Now, we consider the quotient homomorphism \( G\overset{\pi }{ \rightarrow }G/Z{\left( G\right) }_{p} \mathrel{\text{:=}} \bar{G} \), where the quotient \( \bar{G} \) has order \( n/{p}^{{r}^{\prime }} = {p}^{r - {r}^{\prime }}m < n \) . By inductive hypothesis, \( \bar{G} \) contains a Sylow \( p \) -subgroup \( \bar{H} \) of order \( {p}^{r - {r}^{\prime }} \) . Then \( H \mathrel{\text{:=}} {\pi }^{-1}\left( \bar{H}\right) \) is a subgroup of \( G \) of order
\[
\left| \bar{H}\right| \cdot \left| {\ker \pi }\right| = {p}^{r - {r}^{\prime }} \cdot {p}^{{r}^{\prime }} = {p}^{r}.
\]
So \( H \) is a Sylow \( p \) -subgroup of \( G \) .
Case 3: If \( p \) does not divide \( \left| {Z\left( G\right) }\right| \) but \( p \) divides \( \left| G\right| \) (thus \( r \geq 1 \) ).
We use the class equation for the conjugation of \( G \) on itself proved in the previous lecture (Theorem 6.3.1), to deduce that ![97650b70-8b1b-4cc6-91b2-de9112f1d8bc_45_0.jpg](images/97650b70-8b1b-4cc6-91b2-de9112f1d8bc_45_0.jpg)
It follows that there exists one \( i \) such that \( \left\lbrack {G : {C}_{G}\left( {g}_{i}\right) }\right\rbrack \) is not divisible by \( p \) . Thus \( {C}_{G}\left( {g}_{i}\right) \) has order \( {p}^{r}{m}^{\prime } \) for some \( {m}^{\prime } \mid m \) and \( {m}^{\prime } \neq m \) .
By inductive hypothesis applied to \( {C}_{G}\left( {g}_{i}\right) \), there exists a subgroup \( H \) of \( {C}_{G}\left( {g}_{i}\right) \) of order \( {p}^{r} \) . This \( H \) is also a Sylow \( p \) -subgroup of \( G \) .
This completes the inductive proof of First Sylow Theorem.
7.2.2. Proof of Second Sylow Theorem. Now let \( P \leq G \) be a Sylow \( p \) -subgroup, and \( Q \leq G \) a subgroup of \( p \) -power order.
When \( \left| Q\right| = 1 \), i.e. \( Q = \{ 1\} \), clearly, \( Q \leq P \), we are done.
Now we assume that \( \left| Q\right| = {p}^{{r}^{\prime }} \) with \( {r}^{\prime } \geq 1 \) . Consider the left translation action of \( Q \) on \( G/P \) :
\[
Q \subset G/P = \{ {gP} \mid g \in G\} .
\]
More precisely, \( q \cdot {gP} \mathrel{\text{:=}} {qgP} \), for \( q \in Q \) and \( {gP} \in G/P \) .
Then we must have
\[
\underset{\text{not divisible by }p}{\underbrace{\left| {G/P}\right| }} = \mathop{\sum }\limits_{{i = 1}}^{t}\left| {\operatorname{Orbit}}_{i}\right| = \mathop{\sum }\limits_{{i = 1}}^{t}\left| {Q/{\operatorname{Stab}}_{i}}\right| .
\]
Thus, there exists one orbit, say the orbit of \( {gP} \) such that the number of elements in the orbit is not divisible by \( p \) . Let
\[
{Q}^{\prime } \mathrel{\text{:=}} \left\{ {q \in Q \mid {qgP} = {gP}}\right\} = {\operatorname{Stab}}_{Q}\left( {gP}\right) \leq Q
\]
be the stabilizer group. It follows that \( \left| {Q/{Q}^{\prime }}\right| \) is the same as the size of the orbit, and is thus not divisible by \( p \) .
But since \( \left| Q\right| = {p}^{{r}^{\prime }} \) is a power of \( p \), we must have \( Q = {Q}^{\prime } \) . In particular, this says that for any \( q \in Q \), we have
\[
{qgP} = {gP}\; \Rightarrow \;{qg} \in {gP}\; \Rightarrow \;q \in {gP}{g}^{-1}.
\]
So we deduce that \( Q \leq {gP}{g}^{-1} \) .
Corollary 7.2.3. All Sylow subgroups are conjugate.
Proof. Let \( P \) and \( Q \) be two Sylow \( p \) -subgroups. The Second Sylow Theorem implies that there exists \( g \in G \) such that \( Q \leq {gP}{g}^{-1} \) . But \( \left| Q\right| = \left| {{gP}{g}^{-1}}\right| \) . So \( Q = {gP}{g}^{-1} \) .
Corollary 7.2.4. There is only one Sylow p-subgroup if and only if one Sylow p-subgroup \( P \leq G \) is normal.
Proof. " \( \Rightarrow \) ": for any \( g \in G,{gP}{g}^{-1} \) is also a Sylow \( p \) -subgroup. So the uniqueness implies that \( P = {gP}{g}^{-1} \) . Thus \( P \) is normal.
" \( \Leftarrow \) ": Since all Sylow \( p \) -subgroups are conjugate by the above Corollary and all conjugates \( {gP}{g}^{-1} \) are the same as \( P \), there is only one Sylow \( p \) -subgroup, namely \( P \) .
Remark 7.2.5. Usually, when quoting Corollaries 7.2.3 and 7.2.4, we will just say Second Sylow Theorem.
Corollary 7.2.6. If \( P \) is a Sylow \( p \) -subgroup, then \( {N}_{G}\left( {{N}_{G}\left( P\right) }\right) = {N}_{G}\left( P\right) \), and \( {N}_{G}\left( P\right) \) contains a unique Sylow p-subgroup, which is \( P \) .
Proof. Note that \( P \trianglelefteq {N}_{G}\left( P\right) \) tautologically holds; so \( P \) is a normal Sylow \( p \) -subgroup of \( {N}_{G}\left( P\right) \) . By the above Corollary, \( P \) is the unique Sylow \( p \) -subgroup of \( {N}_{G}\left( P\right) \) . (This proves the second statement.)
It is clear that \( {N}_{G}\left( P\right) \subseteq {N}_{G}\left( {{N}_{G}\left( P\right) }\right) \) . Conversely, for \( n \in {N}_{G}\left( {{N}_{G}\left( P\right) }\right) \), we have
\[
{nP}{n}^{-1} \subseteq n{N}_{G}\left( P\right) {n}^{-1} = {N}_{G}\left( P\right) ,
\]
so \( {nP}{n}^{-1} \) is a Sylow \( p \) -subgroup of \( {N}_{G}\left( P\right) \) .
But we have just shown that \( {N}_{G}\left( P\right) \) has a unique Sylow \( p \) -subgroup. So \( {nP}{n}^{-1} = P \), i.e. \( n \in {N}_{G}\left( P\right) \) .
7.2.7. Proof of Third Sylow Theorem. Recall that \( \left| G\right| = n = {p}^{r}m \) with \( r \in {\mathbb{Z}}_{ \geq 0} \) and \( p \nmid m \) . Put \( {n}_{p} \mathrel{\text{:=}} \left| {{\operatorname{Syl}}_{p}\left( G\right) }\right| \) .
(1) Consider the conjugation of \( G \) on \( {\operatorname{Syl}}_{p}\left( G\right) : g \star P \mathrel{\text{:=}} {gP}{g}^{-1} \) for \( g \in G \) and \( P \) a Sylow \( p \) -subgroup of \( G \) .
By Second Sylow Theorem, all Sylow \( p \) -subgroups are conjugate and thus the conjugation \( G \) -action on \( {\operatorname{Syl}}_{p}\left( G\right) \) is transitive. From this, we deduce
\[
{n}_{p} = \left| {{\operatorname{Syl}}_{p}\left( G\right) }\right| = \left| {G/{N}_{G}\left( P\right) }\right| = \frac{\left| G\right| }{\left| {N}_{G}\left( P\right) \right| } = \frac{{p}^{r} \cdot m}{{p}^{r} \cdot \left\lbrack {{N}_{G}\left( P\right) : P}\right\rbrack }.
\]
It is clear from this that \( {n}_{p} \mid m \) .
(2) Consider \( P \) acting on \( {\operatorname{Syl}}_{p}\left( G\right) \) by conjugation. Then we have
\( \left( {7.2.7.1}\right) \)
\[
{n}_{p} = \left| {{\operatorname{Syl}}_{p}\left( G\right) }\right| = \mathop{\sum }\limits_{{\text{orbits }{\operatorname{Ad}}_{P}\left( {P}_{i}\right) }}\left| {P/{\operatorname{Stab}}_{P}\left( {P}_{i}\right) }\right| .
\]
- If \( {\operatorname{Stab}}_{P}\left( {P}_{i}\right) \neq P \), then \( {\operatorname{Stab}}_{P}\left( {P}_{i}\right) \) is a proper subgroup of \( P \) ; Lagrange's theorem implies that \( \left| {P/{\operatorname{Stab}}_{P}\left( {P}_{i}\right) }\right| \) is a nontrivial \( p \) -power and in particular divisible by \( p \) .
- If \( {\operatorname{Stab}}_{P}\left( {P}_{i}\right) = P \), then \( P \subseteq {N}_{G}\left( {P}_{i}\right) \) . But Corollary 7.2.6 implies that \( {N}_{G}\left( {P}_{i}\right) \) contains a unique Sylow \( p \) -subgroup, i.e. \( P = {P}_{i} \) . So there is a unique such orbit.
It follows from the above discussion that \( {n}_{p} \equiv 1\left( {\;\operatorname{mod}\;p}\right) \) .
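The two constraints \( {n}_{p} \mid m \) and \( {n}_{p} \equiv 1\left( {\;\operatorname{mod}\;p}\right) \) are easy to enumerate by machine. The following Python sketch is our own illustration (the function name `sylow_candidates` is not from the text); it lists the values of \( {n}_{p} \) allowed by the Third Sylow Theorem for a given group order.

```python
def sylow_candidates(order, p):
    """Values of n_p permitted by the Third Sylow Theorem: divisors of the
    p'-part m of the group order that are congruent to 1 modulo p."""
    m = order
    while m % p == 0:          # strip the p-part, so that p does not divide m
        m //= p
    return [d for d in range(1, m + 1) if m % d == 0 and d % p == 1]

print(sylow_candidates(12, 2))   # [1, 3]
print(sylow_candidates(12, 3))   # [1, 4]
print(sylow_candidates(15, 3))   # [1]  -> the Sylow 3-subgroup is normal
print(sylow_candidates(15, 5))   # [1]  -> the Sylow 5-subgroup is normal
```

The last two lines preview the order- \( {pq} \) discussion in Section 7.3: for \( \left| G\right| = {pq} \) with \( p < q \), the two constraints already force \( {n}_{q} = 1 \).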
## 7.3. Applications of Sylow's Theorem.
7.3.1. Classification of groups of order \( {pq} \) . Assume that \( G \) is a group of order \( {pq} \), where \( p < q \) are prime numbers.
Let \( Q \) denote a Sylow \( q \) -subgroup of \( G \) . By Sylow’s Third Theorem, we have
\[
{n}_{q} \mid p,\;\text{ and }\;{n}_{q} \equiv 1\;\left( {\;\operatorname{mod}\;q}\right) \; \Rightarrow \;{n}_{q} = 1.
\]
So \( Q \) is a normal subgroup (and it is uniqu |
1042_(GTM203)The Symmetric Group | Definition 1.4.1 |
Definition 1.4.1 Let \( V \) be a \( G \) -module. A submodule of \( V \) is a subspace \( W \) that is closed under the action of \( G \), i.e.,
\[
\mathbf{w} \in W \Rightarrow g\mathbf{w} \in W\text{ for all }g \in G.
\]
We also say that \( W \) is a \( G \) -invariant subspace. Equivalently, \( W \) is a subset of \( V \) that is a \( G \) -module in its own right. We write \( W \leq V \) if \( W \) is a submodule of \( V \) . \( \blacksquare \)
As usual, we illustrate the definition with some examples.
Example 1.4.2 Any \( G \) -module, \( V \), has the submodules \( W = V \) as well as \( W = \{ \mathbf{0}\} \), where \( \mathbf{0} \) is the zero vector. These two submodules are called trivial. All other submodules are called nontrivial. -
Example 1.4.3 For a nontrivial example of a submodule, consider \( G = {\mathcal{S}}_{n} \) , \( n \geq 2 \), and \( V = \mathbb{C}\{ \mathbf{1},\mathbf{2},\ldots ,\mathbf{n}\} \) (the defining representation). Now take
\[
W = \mathbb{C}\{ \mathbf{1} + \mathbf{2} + \cdots + \mathbf{n}\} = \{ c\left( {\mathbf{1} + \mathbf{2} + \cdots + \mathbf{n}}\right) : c \in \mathbb{C}\}
\]
i.e., \( W \) is the one-dimensional subspace spanned by the vector \( \mathbf{1} + \mathbf{2} + \cdots + \mathbf{n} \) . To check that \( W \) is closed under the action of \( {\mathcal{S}}_{n} \), it suffices to show that
\[
\pi \mathbf{w} \in W\text{ for all }\mathbf{w}\text{ in some basis for }W\text{ and all }\pi \in {\mathcal{S}}_{n}\text{.}
\]
(Why?) Thus we need to verify only that
\[
\pi \left( {\mathbf{1} + \mathbf{2} + \cdots + \mathbf{n}}\right) \in W
\]
for each \( \pi \in {\mathcal{S}}_{n} \) . But
\[
\pi \left( {\mathbf{1} + \mathbf{2} + \cdots + \mathbf{n}}\right) = \pi \left( \mathbf{1}\right) + \pi \left( \mathbf{2}\right) + \cdots + \pi \left( \mathbf{n}\right)
\]
\[
= \mathbf{1} + \mathbf{2} + \cdots + \mathbf{n} \in W
\]
because applying \( \pi \) to \( \{ 1,2,\ldots, n\} \) just gives back the same set of numbers in a different order. Thus \( W \) is a submodule of \( V \) that is nontrivial since \( \dim W = 1 \) and \( \dim V = n \geq 2 \) .
Since \( W \) is a module for \( G \) sitting inside \( V \), we can ask what representation we get if we restrict the action of \( G \) to \( W \) . But we have just shown that every \( \pi \in {\mathcal{S}}_{n} \) sends the basis vector \( \mathbf{1} + \mathbf{2} + \cdots + \mathbf{n} \) to itself. Thus \( X\left( \pi \right) = \left( 1\right) \) is the corresponding matrix, and we have found a copy of the trivial representation in \( \mathbb{C}\{ \mathbf{1},\mathbf{2},\ldots ,\mathbf{n}\} \) . In general, for a vector space \( W \) of any dimension, if \( G \) fixes every element of \( W \), we say that \( G \) acts trivially on \( W \) . ∎
Example 1.4.4 Next, let us look again at the regular representation. Suppose \( G = \left\{ {{g}_{1},{g}_{2},\ldots ,{g}_{n}}\right\} \) with group algebra \( V = \mathbb{C}\left\lbrack \mathbf{G}\right\rbrack \) . Using the same idea as in the previous example, let
\[
W = \mathbb{C}\left\lbrack {{\mathbf{g}}_{1} + {\mathbf{g}}_{2} + \cdots + {\mathbf{g}}_{n}}\right\rbrack
\]
the one-dimensional subspace spanned by the vector that is the sum of all the elements of \( G \) . To verify that \( W \) is a submodule, take any \( g \in G \) and compute:
\[
g\left( {{\mathbf{g}}_{1} + {\mathbf{g}}_{2} + \cdots + {\mathbf{g}}_{n}}\right) = g{\mathbf{g}}_{1} + g{\mathbf{g}}_{2} + \cdots + g{\mathbf{g}}_{n}
\]
\[
= {\mathbf{g}}_{1} + {\mathbf{g}}_{2} + \cdots + {\mathbf{g}}_{n} \in W
\]
because multiplying by \( g \) merely permutes the elements of \( G \), leaving the sum unchanged. As before, \( G \) acts trivially on \( W \) .
The reader should verify that if \( G = {\mathcal{S}}_{n} \), then the sign representation can also be recovered by using the submodule
\[
W = \mathbb{C}\left\lbrack {\mathop{\sum }\limits_{{\pi \in {\mathcal{S}}_{n}}}\operatorname{sgn}\left( \pi \right) \mathbf{\pi }}\right\rbrack
\]
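This verification is easy to automate. The sketch below is our own illustration, not from the text: it models \( \mathbb{C}\left\lbrack {\mathcal{S}}_{3}\right\rbrack \) by dictionaries mapping permutations to coefficients and checks that left multiplication by any \( g \) sends \( \mathop{\sum }\limits_{\pi }\operatorname{sgn}\left( \pi \right) \mathbf{\pi } \) to \( \operatorname{sgn}\left( g\right) \) times itself, so its span is a one-dimensional submodule carrying the sign representation.

```python
from itertools import permutations

def compose(g, pi):
    """Group multiplication: (g*pi)(i) = g(pi(i)), permutations stored as tuples."""
    return tuple(g[pi[i]] for i in range(len(pi)))

def sign(pi):
    """Sign of a permutation, computed by counting inversions."""
    inv = sum(1 for i in range(len(pi)) for j in range(i + 1, len(pi)) if pi[i] > pi[j])
    return -1 if inv % 2 else 1

S3 = list(permutations(range(3)))
w = {pi: sign(pi) for pi in S3}          # the vector sum_pi sgn(pi)*pi in C[S3]

for g in S3:
    gw = {}
    for pi, coeff in w.items():
        gw[compose(g, pi)] = gw.get(compose(g, pi), 0) + coeff
    # g.w must equal sgn(g).w, so C[w] is S3-invariant and carries the sign representation
    assert gw == {pi: sign(g) * coeff for pi, coeff in w.items()}
print("the span of sum_pi sgn(pi)*pi is a submodule of C[S3]")
```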
We now introduce the irreducible representations that will be the building blocks of all the others.
Definition 1.4.5 A nonzero \( G \) -module \( V \) is reducible if it contains a nontrivial submodule \( W \) . Otherwise, \( V \) is said to be irreducible. Equivalently, \( V \) is reducible if it has a basis \( \mathcal{B} \) in which every \( g \in G \) is assigned a block matrix
of the form
\[
X\left( g\right) = \left( \begin{matrix} A\left( g\right) & B\left( g\right) \\ 0 & C\left( g\right) \end{matrix}\right)
\]
(1.4)
where the \( A\left( g\right) \) are square matrices, all of the same size, and 0 is a nonempty matrix of zeros. -
To see the equivalence, suppose \( V \) of dimension \( d \) has a submodule \( W \) of dimension \( f,0 < f < d \) . Then let
\[
\mathcal{B} = \left\{ {{\mathbf{w}}_{1},{\mathbf{w}}_{2},\ldots ,{\mathbf{w}}_{f},{\mathbf{v}}_{f + 1},{\mathbf{v}}_{f + 2},\ldots ,{\mathbf{v}}_{d}}\right\}
\]
where the first \( f \) vectors are a basis for \( W \) . Now we can compute the matrix of any \( g \in G \) with respect to the basis \( \mathcal{B} \) . Since \( W \) is a submodule, \( g{\mathbf{w}}_{i} \in W \) for all \( i,1 \leq i \leq f \) . Thus the last \( d - f \) coordinates of \( g{\mathbf{w}}_{i} \) will all be zero. That accounts for the zero matrix in the lower left corner of \( X\left( g\right) \) . Note that we have also shown that the \( A\left( g\right), g \in G \), are the matrices of the restriction of \( G \) to \( W \) . Hence they must all be square and of the same size.
Conversely, suppose each \( X\left( g\right) \) has the given form with every \( A\left( g\right) \) being \( f \times f \) . Let \( V = {\mathbb{C}}^{d} \) and consider
\[
W = \mathbb{C}\left\{ {{\mathbf{e}}_{1},{\mathbf{e}}_{2},\ldots ,{\mathbf{e}}_{f}}\right\}
\]
where \( {\mathbf{e}}_{i} \) is the column vector with a 1 in the \( i \) th row and zeros elsewhere (the standard basis for \( {\mathbb{C}}^{d} \) ). Then the placement of the zeros in \( X\left( g\right) \) assures us that \( X\left( g\right) {\mathbf{e}}_{i} \in W \) for \( 1 \leq i \leq f \) and all \( g \in G \) . Thus \( W \) is a \( G \) -module, and it is nontrivial because the matrix of zeros is nonempty.
Clearly, any representation of degree 1 is irreducible. It seems hard to determine when a representation of greater degree will be irreducible. Certainly, checking all possible subspaces to find out which ones are submodules is out of the question. This unsatisfactory state of affairs will be remedied after we discuss inner products of group characters in Section 1.9.
From the preceding examples, both the defining representation for \( {\mathcal{S}}_{n} \) and the group algebra for an arbitrary \( G \) are reducible if \( n \geq 2 \) and \( \left| G\right| \geq 2 \), respectively. After all, we produced nontrivial submodules. Let us now illustrate the alternative approach via matrices using the defining representation of \( {\mathcal{S}}_{3} \) . We must extend the basis \( \{ \mathbf{1} + \mathbf{2} + \mathbf{3}\} \) for \( W \) to a basis for \( V = \mathbb{C}\{ \mathbf{1},\mathbf{2},\mathbf{3}\} \) . Let us pick
\[
\mathcal{B} = \{ \mathbf{1} + \mathbf{2} + \mathbf{3},\mathbf{2},\mathbf{3}\}
\]
Of course, \( X\left( e\right) \) remains the \( 3 \times 3 \) identity matrix. To compute \( X\left( \left( {1,2}\right) \right) \) , we look at \( \left( {1,2}\right) \) ’s action on our basis:
\[
\left( {1,2}\right) \left( {\mathbf{1} + \mathbf{2} + \mathbf{3}}\right) = \mathbf{1} + \mathbf{2} + \mathbf{3},\;\left( {1,2}\right) \mathbf{2} = \mathbf{1} = \left( {\mathbf{1} + \mathbf{2} + \mathbf{3}}\right) - \mathbf{2} - \mathbf{3},\;\left( {1,2}\right) \mathbf{3} = \mathbf{3}.
\]
So
\[
X\left( \left( {1,2}\right) \right) = \left( \begin{array}{rrr} 1 & 1 & 0 \\ 0 & - 1 & 0 \\ 0 & - 1 & 1 \end{array}\right)
\]
The reader can do the similar computations for the remaining four elements of \( {\mathcal{S}}_{3} \) to verify that
\[
X\left( \left( {1,3}\right) \right) = \left( \begin{array}{rrr} 1 & 0 & 1 \\ 0 & 1 & - 1 \\ 0 & 0 & - 1 \end{array}\right) ,\;X\left( \left( {2,3}\right) \right) = \left( \begin{array}{lll} 1 & 0 & 0 \\ 0 & 0 & 1 \\ 0 & 1 & 0 \end{array}\right) ,
\]
\[
X\left( \left( {1,2,3}\right) \right) = \left( \begin{array}{rrr} 1 & 0 & 1 \\ 0 & 0 & - 1 \\ 0 & 1 & - 1 \end{array}\right) ,\;X\left( \left( {1,3,2}\right) \right) = \left( \begin{array}{rrr} 1 & 1 & 0 \\ 0 & - 1 & 1 \\ 0 & - 1 & 0 \end{array}\right) .
\]
Note that all these matrices have the form
\[
X\left( \pi \right) = \left( \begin{matrix} 1 & * & * \\ 0 & * & * \\ 0 & * & * \end{matrix}\right)
\]
The one in the upper left corner comes from the fact that \( {\mathcal{S}}_{3} \) acts trivially on \( W \) .
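The change-of-basis computations above are easy to check by machine. The following numpy sketch is our own verification, not part of the text: it builds the permutation matrices of \( {\mathcal{S}}_{3} \) in the standard basis, conjugates them into the basis \( \mathcal{B} = \{ \mathbf{1} + \mathbf{2} + \mathbf{3},\mathbf{2},\mathbf{3}\} \), and confirms that every resulting matrix has the displayed block form with a 1 in the upper left corner.

```python
import numpy as np
from itertools import permutations

# Columns of P are the B-basis vectors 1+2+3, 2, 3 expressed in the standard basis 1, 2, 3.
P = np.array([[1, 0, 0],
              [1, 1, 0],
              [1, 0, 1]], dtype=float)

for pi in permutations(range(3)):
    A = np.zeros((3, 3))
    for i in range(3):
        A[pi[i], i] = 1          # permutation matrix: A sends e_i to e_{pi(i)}
    X = np.round(np.linalg.inv(P) @ A @ P).astype(int)   # matrix of pi in the basis B
    assert (X[:, 0] == np.array([1, 0, 0])).all()        # S_3 acts trivially on W = C{1+2+3}
    print(pi, X.tolist())
```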
## 1.5 Complete Reducibility and Maschke's Theorem
It would be even better if we could bring the matrices of a reducible \( G \) -module to the block diagonal form
\[
X\left( g\right) = \left( \begin{matrix} A\left( g\right) & 0 \\ 0 & B\left( g\right) \end{matrix}\right)
\]
for all \( g \in G \) . This is the notion of a direct sum.
Definition 1.5.1 Let \( V \) be a vector space with subspaces \( U \) and \( W \) . Then \( V \) is the (internal) direct sum of \( U \) and \( W \), written \( V = U \oplus W \), if every \( \mathbf{v} \in V \) can be written uniquely as a sum
\[
\mathbf{v} = \mathbf{u} + \mathbf{w},\;\mathbf{u} \in U,\mathbf{w} \in W.
\]
If \( V \) is a \( G \) -module and \( U, W \) are \( G \) -submodules, then we say that \( U \) and \( W \) are complements of each other.
If \( X \) is a matrix, then \( X \) is the direct sum of matrices \( A \) and \( B \), written \( X = A \oplus B \), if \( X \) has the block diagonal form
\[
X = \left( \begin{matrix} A & 0 \\ 0 & B \end{matrix}\right) .
\]
To see the relationship between the module and matrix definitions, let \( V \) be a \( G \) -module with \( V = U \oplus W \), where \( U, W \leq V \) . Since this is a direct sum of ve |
1098_(GTM254)Algebraic Function Fields and Codes | Definition 8.3.1 |
Definition 8.3.1. For \( r \in \mathbb{Z} \) we define the code
\[
{C}_{r} \mathrel{\text{:=}} {C}_{\mathcal{L}}\left( {D, r{Q}_{\infty }}\right)
\]
(8.6)
where
\[
D \mathrel{\text{:=}} \mathop{\sum }\limits_{{{\beta }^{q} + \beta = {\alpha }^{q + 1}}}{P}_{\alpha ,\beta }
\]
(8.7)
is the sum of all places of degree one (except \( {Q}_{\infty } \) ) of the Hermitian function field \( H/{\mathbb{F}}_{{q}^{2}} \) . The codes \( {C}_{r} \) are called Hermitian codes.
Hermitian codes are codes of length \( n = {q}^{3} \) over the field \( {\mathbb{F}}_{{q}^{2}} \) . For \( r \leq s \) we obviously have \( {C}_{r} \subseteq {C}_{s} \) . Let us first discuss some trivial cases. For \( r < 0 \) , \( \mathcal{L}\left( {r{Q}_{\infty }}\right) = 0 \) and therefore \( {C}_{r} = 0 \) . For \( r > {q}^{3} + {q}^{2} - q - 2 = {q}^{3} + \left( {{2g} - 2}\right) \) , Theorem 2.2.2 and the Riemann-Roch Theorem yield
\[
\dim {C}_{r} = \ell \left( {r{Q}_{\infty }}\right) - \ell \left( {r{Q}_{\infty } - D}\right)
\]
\[
= \left( {r + 1 - g}\right) - \left( {r - {q}^{3} + 1 - g}\right) = {q}^{3} = n.
\]
Hence \( {C}_{r} = {\mathbb{F}}_{{q}^{2}}^{n} \) in this case, and it remains to study Hermitian codes with \( 0 \leq r \leq {q}^{3} + {q}^{2} - q - 2. \)
Proposition 8.3.2. The dual code of \( {C}_{r} \) is
\[
{C}_{r}^{ \bot } = {C}_{{q}^{3} + {q}^{2} - q - 2 - r}.
\]
Hence \( {C}_{r} \) is self-orthogonal if \( {2r} \leq {q}^{3} + {q}^{2} - q - 2 \), and \( {C}_{r} \) is self-dual for \( r = \left( {{q}^{3} + {q}^{2} - q - 2}\right) /2 \) .
Proof. Consider the element
\[
t \mathrel{\text{:=}} \mathop{\prod }\limits_{{\alpha \in {\mathbb{F}}_{{q}^{2}}}}\left( {x - \alpha }\right) = {x}^{{q}^{2}} - x.
\]
\( t \) is a prime element for all places \( {P}_{\alpha ,\beta } \leq D \), and its principal divisor is \( \left( t\right) = D - {q}^{3}{Q}_{\infty } \) . Since \( {dt} = d\left( {{x}^{{q}^{2}} - x}\right) = - {dx} \), the differential \( {dt} \) has the divisor \( \left( {dt}\right) = \left( {dx}\right) = \left( {{q}^{2} - q - 2}\right) {Q}_{\infty } \) (Lemma 6.4.4). Now Theorem 2.2.8 and Proposition 8.1.2 imply
\[
{C}_{r}^{ \bot } = {C}_{\Omega }\left( {D, r{Q}_{\infty }}\right) = {C}_{\mathcal{L}}\left( {D, D - r{Q}_{\infty } + \left( {dt}\right) - \left( t\right) }\right)
\]
\[
= {C}_{\mathcal{L}}\left( {D,\left( {{q}^{3} + {q}^{2} - q - 2 - r}\right) {Q}_{\infty }}\right) = {C}_{{q}^{3} + {q}^{2} - q - 2 - r}.
\]
Our next aim is to determine the parameters of \( {C}_{r} \) . We consider the set \( I \) of pole numbers of \( {Q}_{\infty } \) (cf. Definition 1.6.7); i.e.,
\[
I = \left\{ {n \geq 0 \mid \text{ there is an element }z \in H\text{ with }{\left( z\right) }_{\infty } = n{Q}_{\infty }}\right\} .
\]
For \( s \geq 0 \) let
\[
I\left( s\right) \mathrel{\text{:=}} \{ n \in I \mid n \leq s\} .
\]
(8.8)
Then \( \left| {I\left( s\right) }\right| = \ell \left( {s{Q}_{\infty }}\right) \), and the Riemann-Roch Theorem gives
\[
\left| {I\left( s\right) }\right| = s + 1 - q\left( {q - 1}\right) /2\text{ for }s \geq {2g} - 1 = q\left( {q - 1}\right) - 1.
\]
From Lemma 6.4.4 we obtain the following description of \( I\left( s\right) \) :
\[
I\left( s\right) = \{ n \leq s \mid n = {iq} + j\left( {q + 1}\right) \text{with}i \geq 0\text{and}0 \leq j \leq q - 1\} \text{,}
\]
hence
\[
\left| {I\left( s\right) }\right| = \left| \left\{ {\left( {i, j}\right) \in {\mathbb{N}}_{0} \times {\mathbb{N}}_{0};j \leq q - 1\text{ and }{iq} + j\left( {q + 1}\right) \leq s}\right\} \right| .
\]
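For concrete parameters this count is quickly tabulated. The Python sketch below is our own illustration (`pole_numbers_upto` is a name we chose); it enumerates the pairs \( \left( {i, j}\right) \) and checks the equality \( \left| {I\left( s\right) }\right| = s + 1 - q\left( {q - 1}\right) /2 \) for \( s \geq {2g} - 1 \) stated above, for \( q = 2 \) and \( q = 3 \).

```python
def pole_numbers_upto(s, q):
    """I(s): pole numbers n = i*q + j*(q+1) <= s with i >= 0 and 0 <= j <= q-1."""
    return {i * q + j * (q + 1)
            for j in range(q)
            for i in range(s // q + 1)
            if i * q + j * (q + 1) <= s}

for q in (2, 3):
    g = q * (q - 1) // 2                     # genus of the Hermitian function field
    for s in range(2 * g - 1, 2 * g + 20):
        assert len(pole_numbers_upto(s, q)) == s + 1 - g
print("|I(s)| = s + 1 - g for s >= 2g - 1 (checked for q = 2, 3)")
```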
Proposition 8.3.3. Suppose that \( 0 \leq r \leq {q}^{3} + {q}^{2} - q - 2 \) . Then the following hold:
(a) The dimension of \( {C}_{r} \) is given by
\[
\dim {C}_{r} = \left\{ \begin{matrix} \left| {I\left( r\right) }\right| & \text{ for }0 \leq r < {q}^{3}, \\ {q}^{3} - \left| {I\left( s\right) }\right| & \text{ for }{q}^{3} \leq r \leq {q}^{3} + {q}^{2} - q - 2, \end{matrix}\right.
\]
where \( s \mathrel{\text{:=}} {q}^{3} + {q}^{2} - q - 2 - r \) and \( I\left( r\right) \) is defined by (8.8).
(b) For \( {q}^{2} - q - 2 < r < {q}^{3} \) we have
\[
\dim {C}_{r} = r + 1 - q\left( {q - 1}\right) /2.
\]
(c) The minimum distance \( d \) of \( {C}_{r} \) satisfies
\[
d \geq {q}^{3} - r
\]
If \( 0 \leq r < {q}^{3} \) and both numbers \( r \) and \( {q}^{3} - r \) are pole numbers of \( {Q}_{\infty } \), then
\[
d = {q}^{3} - r
\]
Proof. (a) For \( 0 \leq r < {q}^{3} \) Corollary 2.2.3 gives
\[
\dim {C}_{r} = \dim \mathcal{L}\left( {r{Q}_{\infty }}\right) = \left| {I\left( r\right) }\right| .
\]
For \( {q}^{3} \leq r \leq {q}^{3} + {q}^{2} - q - 2 \) we set \( s \mathrel{\text{:=}} {q}^{3} + {q}^{2} - q - 2 - r \) . Then \( 0 \leq s \leq \) \( {q}^{2} - q - 2 < {q}^{3} \) . By Proposition 8.3.2 we obtain
\[
\dim {C}_{r} = {q}^{3} - \dim {C}_{s} = {q}^{3} - \left| {I\left( s\right) }\right| .
\]
(b) For \( {q}^{2} - q - 2 = {2g} - 2 < r < {q}^{3} \), Corollary 2.2.3 gives
\[
\dim {C}_{r} = r + 1 - g = r + 1 - q\left( {q - 1}\right) /2.
\]
(c) The inequality \( d \geq {q}^{3} - r \) follows from Theorem 2.2.2. Now let \( 0 \leq r < {q}^{3} \) and assume that both numbers \( r \) and \( {q}^{3} - r \) are pole numbers of \( {Q}_{\infty } \) . In order to prove the equality \( d = {q}^{3} - r \) we distinguish three cases.
Case 1: \( r = {q}^{3} - {q}^{2} \) . Choose \( i \mathrel{\text{:=}} {q}^{2} - q \) distinct elements \( {\alpha }_{1},\ldots ,{\alpha }_{i} \in {\mathbb{F}}_{{q}^{2}} \) . Then the element
\[
z \mathrel{\text{:=}} \mathop{\prod }\limits_{{\nu = 1}}^{i}\left( {x - {\alpha }_{\nu }}\right) \in \mathcal{L}\left( {r{Q}_{\infty }}\right)
\]
has exactly \( {qi} = r \) distinct zeros \( {P}_{\alpha ,\beta } \) of degree one, and the weight of the corresponding codeword \( {\operatorname{ev}}_{D}\left( z\right) \in {C}_{r} \) is \( {q}^{3} - r \) . Hence \( d = {q}^{3} - r \) .
Case 2: \( r < {q}^{3} - {q}^{2} \) . We write \( r = {iq} + j\left( {q + 1}\right) \) with \( i \geq 0 \) and \( 0 \leq j \leq \) \( q - 1 \), so \( i \leq {q}^{2} - q - 1 \) . Fix an element \( 0 \neq \gamma \in {\mathbb{F}}_{q} \) and consider the set \( A \mathrel{\text{:=}} \left\{ {\alpha \in {\mathbb{F}}_{{q}^{2}} \mid {\alpha }^{q + 1} \neq \gamma }\right\} \) . Then \( \left| A\right| = {q}^{2} - \left( {q + 1}\right) \geq i \), and we can choose distinct elements \( {\alpha }_{1},\ldots ,{\alpha }_{i} \in A \) . The element
\[
{z}_{1} \mathrel{\text{:=}} \mathop{\prod }\limits_{{\nu = 1}}^{i}\left( {x - {\alpha }_{\nu }}\right)
\]
has \( {iq} \) distinct zeros \( {P}_{\alpha ,\beta } \leq D \) . Next we choose \( j \) distinct elements \( {\beta }_{1},\ldots ,{\beta }_{j} \in \) \( {\mathbb{F}}_{{q}^{2}} \) with \( {\beta }_{\mu }^{q} + {\beta }_{\mu } = \gamma \) and set
\[
{z}_{2} \mathrel{\text{:=}} \mathop{\prod }\limits_{{\mu = 1}}^{j}\left( {y - {\beta }_{\mu }}\right)
\]
\( {z}_{2} \) has \( j\left( {q + 1}\right) \) zeros \( {P}_{\alpha ,\beta } \leq D \), and all of them are distinct from the zeros of \( {z}_{1} \) because \( {\beta }_{\mu }^{q} + {\beta }_{\mu } = \gamma \neq {\alpha }_{\nu }^{q + 1} \) for \( \mu = 1,\ldots, j \) and \( \nu = 1,\ldots, i \) . Hence
\[
z \mathrel{\text{:=}} {z}_{1}{z}_{2} \in \mathcal{L}\left( {\left( {{iq} + j\left( {q + 1}\right) }\right) {Q}_{\infty }}\right) = \mathcal{L}\left( {r{Q}_{\infty }}\right)
\]
has \( r \) distinct zeros \( {P}_{\alpha ,\beta } \leq D \) . The corresponding codeword \( {\operatorname{ev}}_{D}\left( z\right) \in {C}_{r} \) has weight \( {q}^{3} - r \) .
Case 3: \( {q}^{3} - {q}^{2} < r < {q}^{3} \) . By assumption, \( s \mathrel{\text{:=}} {q}^{3} - r \) is a pole number and \( 0 < s < {q}^{2} \leq {q}^{3} - {q}^{2} \) . By case 2 there exists an element \( z \in H \) with principal divisor \( \left( z\right) = {D}^{\prime } - s{Q}_{\infty } \) where \( 0 \leq {D}^{\prime } \leq D \) and \( \deg {D}^{\prime } = s \) . The element \( u \mathrel{\text{:=}} {x}^{{q}^{2}} - x \in H \) has the divisor \( \left( u\right) = D - {q}^{3}{Q}_{\infty } \), hence
\[
\left( {{z}^{-1}u}\right) = \left( {D - {D}^{\prime }}\right) - \left( {{q}^{3} - s}\right) {Q}_{\infty } = \left( {D - {D}^{\prime }}\right) - r{Q}_{\infty }.
\]
The codeword \( {\operatorname{ev}}_{D}\left( {{z}^{-1}u}\right) \in {C}_{r} \) has weight \( {q}^{3} - r \) .
We mention that the minimum distance of \( {C}_{r} \) is known also in the remaining cases (where \( r \geq {q}^{3} \), or one of the numbers \( r \) or \( {q}^{3} - r \) is a gap of \( {Q}_{\infty } \) ).
One can easily specify a generator matrix for the Hermitian codes \( {C}_{r} \) . We fix an ordering of the set \( T \mathrel{\text{:=}} \left\{ {\left( {\alpha ,\beta }\right) \in {\mathbb{F}}_{{q}^{2}} \times {\mathbb{F}}_{{q}^{2}} \mid {\beta }^{q} + \beta = {\alpha }^{q + 1}}\right\} \) . For \( s = {iq} + j\left( {q + 1}\right) \) (where \( i \geq 0 \) and \( 0 \leq j \leq q - 1 \) ) we define the vector
\[
{u}_{s} \mathrel{\text{:=}} {\left( {\alpha }^{i}{\beta }^{j}\right) }_{\left( {\alpha ,\beta }\right) \in T} \in {\left( {\mathbb{F}}_{{q}^{2}}\right) }^{{q}^{3}}.
\]
Then we have:
Corollary 8.3.4. Suppose that \( 0 \leq r < {q}^{3} \) . Let \( 0 = {s}_{1} < {s}_{2} < \ldots < {s}_{k} \leq r \) be all pole numbers \( \leq r \) of \( {Q}_{\infty } \) . Then the \( k \times {q}^{3} \) matrix \( {M}_{r} \) whose rows are \( {u}_{{s}_{1}},\ldots ,{u}_{{s}_{k}} \), is a generator matrix of \( {C}_{r} \) .
Proof. Corollary 2.2.3.
In the same manner we obtain a parity check matrix for \( {C}_{r} \) (for \( r > {q}^{2} - q - 2 \) ), since the dual of \( {C}_{r} \) is the code \( {C}_{s} \) with \( s = {q}^{3} + {q}^{2} - q - 2 - r \) .
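For \( q = 2 \) all of this can be written out explicitly. The sketch below is our own illustration (the field \( {\mathbb{F}}_{4} \) is coded by hand as \( \{ 0,1, a, a + 1\} \) with \( {a}^{2} = a + 1 \), and the helper names are ours): it enumerates the \( {q}^{3} = 8 \) places \( {P}_{\alpha ,\beta } \) with \( {\beta }^{2} + \beta = {\alpha }^{3} \), forms the rows \( {u}_{s} \) for the pole numbers \( s \leq r \), and prints the generator matrix \( {M}_{r} \) of \( {C}_{r} \) for \( r = 4 \).

```python
# F_4 = {0, 1, a, a+1} encoded as the integers 0..3 (bit 1 is the coefficient of a);
# addition is XOR, multiplication reduces a^2 to a + 1.
def f4_mul(x, y):
    r = 0
    for bit in range(2):
        if (y >> bit) & 1:
            r ^= x << bit
    if r & 4:                    # reduce a^2 -> a + 1
        r ^= 0b111
    return r & 3

def f4_pow(x, n):
    r = 1
    for _ in range(n):
        r = f4_mul(r, x)
    return r

q = 2
# The q^3 affine places P_{alpha,beta}: pairs with beta^q + beta = alpha^(q+1).
T = [(a, b) for a in range(4) for b in range(4)
     if f4_pow(b, q) ^ b == f4_pow(a, q + 1)]
assert len(T) == q ** 3

def u_row(s):
    """u_s = (alpha^i * beta^j) over the points of T, where s = i*q + j*(q+1), 0 <= j <= q-1."""
    j = s % q
    i = (s - j * (q + 1)) // q
    return [f4_mul(f4_pow(a, i), f4_pow(b, j)) for (a, b) in T]

r = 4
pole_numbers = [s for s in range(r + 1)
                if any(s == i * q + j * (q + 1)
                       for j in range(q) for i in range(s // q + 1))]
M_r = [u_row(s) for s in pole_numbers]   # k = r + 1 - g = 4 rows: C_4 is an [8, 4] code over F_4
for row in M_r:
    print(row)
```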
Finally we study automorphisms of Hermitian codes. Let \( H = {\mathbb{F}}_{{q}^{2}}\left( {x, y}\right) \) as before, cf. (8.5). Let
\[
\varepsilon \in {\mathbb{F}}_{{q}^{2}} \smallsetminus \{ 0\} ,\delta \in {\mathbb{F}}_{{q}^{2}}\text{ and }{\mu }^{q} + \mu = {\delta }^{ |
1089_(GTM246)A Course in Commutative Banach Algebras | Definition 5.6.5 |
Definition 5.6.5. A measure \( \mu \in M\left( G\right) \) is said to be canonical if \( \mu \) is of the
form
\[
\mu = \mathop{\sum }\limits_{{j = 1}}^{m}{n}_{j}\left( {{\gamma }_{j} \cdot {m}_{H}}\right)
\]
where \( H \) is a compact subgroup of \( G,{n}_{1},\ldots ,{n}_{m} \) are integers, and \( {\gamma }_{1},\ldots ,{\gamma }_{m} \) are characters of \( G \) .
Two measures \( \mu \) and \( \nu \) in \( M\left( G\right) \) are called mutually singular if they are concentrated on disjoint sets (we then write \( \mu \bot \nu \) ). In that case, \( \parallel \mu + \nu \parallel = \) \( \parallel \mu \parallel + \parallel \nu \parallel \)
With Lemma 5.6.4 at our disposal, we can now prove Cohen's theorem. We remind the reader that assuming \( G \) to be compact is no restriction since the support group of any \( \mu \in F\left( G\right) \) is compact by Proposition 5.6.3.
Theorem 5.6.6. Let \( G \) be a compact Abelian group. Then each \( \mu \in F\left( G\right) \) is a finite sum of mutually singular canonical measures.
Proof. Let \( 0 \neq \mu \in F\left( G\right) \) and set
\[
M = \left\{ {\gamma \cdot \mu : \gamma \in \widehat{G},{\int }_{G}\gamma \left( x\right) {d\mu }\left( x\right) \neq 0}\right\} .
\]
Let \( \bar{M} \) denote the \( {w}^{ * } \) -closure of \( M \) in \( M\left( G\right) \) . Then \( 0 \notin \bar{M} \), because
\[
\left| \left\langle {\gamma \cdot \mu ,{1}_{G}}\right\rangle \right| = \left| {{\int }_{G}\gamma \left( x\right) {d\mu }\left( x\right) }\right| \geq 1
\]
for all \( \gamma \cdot \mu \in M \) since \( \widehat{\mu } \) is integer-valued. Let \( \delta = \inf \{ \parallel \lambda \parallel : \lambda \in \bar{M}\} \) . Then
the sets
\[
\left\{ {\lambda \in \bar{M} : \parallel \lambda \parallel \leq \delta + \frac{1}{n}}\right\}
\]
\( n \in \mathbb{N} \), are all nonempty and \( {w}^{ * } \) -compact. It follows that there exists \( \nu \in \bar{M} \) such that \( \parallel \nu \parallel = \delta \) . In particular, \( \delta > 0 \) since \( 0 \notin \bar{M} \) . Let
\[
N = \left\{ {\gamma \cdot \nu : \gamma \in \widehat{G},{\int }_{G}\gamma \left( x\right) {d\nu }\left( x\right) \neq 0}\right\} .
\]
Then \( N \) is contained in \( \bar{M} \) . Indeed, if \( \gamma \cdot \nu \in N \) and \( {\left( {\gamma }_{\alpha }\right) }_{\alpha } \) is a net in \( \widehat{G} \) such that \( {\gamma }_{\alpha } \cdot \mu \rightarrow \nu \), then \( \left( {\gamma {\gamma }_{\alpha }}\right) \cdot \mu \rightarrow \gamma \cdot \nu \) in the \( {w}^{ * } \) -topology, and since \( {\int }_{G}\gamma \left( x\right) {d\nu }\left( x\right) \neq 0 \), we have \( \left\langle {\left( {\gamma {\gamma }_{\alpha }}\right) \cdot \mu ,{1}_{G}}\right\rangle \neq 0 \) and hence \( {\gamma }_{\alpha } \cdot \mu \in M \) eventually. Consequently, \( \nu \in \bar{M} \) . But now \( N \) must be finite, because otherwise \( N \) has a \( {w}^{ * } \) -accumulation point \( \sigma \), and then \( \parallel \sigma \parallel < \parallel \nu \parallel \) by Lemma 5.6.4 and \( \sigma \in \bar{M} \) , contradicting the fact that \( \nu \) is an element of \( \bar{M} \) with minimal norm.
We now construct the support group of \( \nu \) . Let
\[
\Gamma = \left\{ {\gamma \in \widehat{G} : {\int }_{G}\gamma \left( x\right) {d\nu }\left( x\right) \neq 0}\right\} ,
\]
and define an equivalence relation on \( \Gamma \) by \( {\gamma }_{1} \sim {\gamma }_{2} \) if and only if \( {\gamma }_{1} \cdot \nu = \) \( {\gamma }_{2} \cdot \nu \) . Because \( N \) is finite, \( \Gamma \) consists of only finitely many equivalence classes, \( {\Gamma }_{1},\ldots ,{\Gamma }_{m} \) say. For \( 1 \leq i \leq m \), let
\[
{H}_{i} = \left\{ {x \in G : \gamma \left( x\right) = {\gamma }^{\prime }\left( x\right) \text{ for all }\gamma ,{\gamma }^{\prime } \in {\Gamma }_{i}}\right\} .
\]
Then \( {H}_{i} \) is a closed subgroup of \( G \) . We show next that \( \left| \nu \right| \left( {G \smallsetminus {H}_{i}}\right) = 0 \) . Since
\[
{H}_{i} = \left\{ {x \in G : \left( {{\delta }^{-1}\gamma }\right) \left( x\right) = 1\text{ for all }\gamma ,\delta \in {\Gamma }_{i}}\right\}
\]
\( \widehat{G/{H}_{i}} \) is the subgroup of \( \widehat{G} \) generated by all these elements \( {\delta }^{-1}\gamma ,\gamma ,\delta \in {\Gamma }_{i} \) . Thus, if \( \lambda \in \widehat{G/{H}_{i}} \) and hence \( \lambda = \mathop{\prod }\limits_{{l = 1}}^{r}{\delta }_{l}^{-1}{\gamma }_{l} \) with \( {\gamma }_{l},{\delta }_{l} \in {\Gamma }_{i} \) and \( \alpha \) is an arbitrary element of \( \widehat{G} \), then by definition of \( {\Gamma }_{i} \) ,
\[
\widehat{\nu }\left( {\alpha \lambda }\right) = {\int }_{G}\alpha \left( x\right) \mathop{\prod }\limits_{{l = 1}}^{r}\left( {{\delta }_{l}^{-1}{\gamma }_{l}}\right) \left( x\right) {d\nu }\left( x\right)
\]
\[
= {\int }_{G}\alpha \left( x\right) \mathop{\prod }\limits_{{l = 1}}^{{r - 1}}\left( {{\delta }_{l}^{-1}{\gamma }_{l}}\right) \left( x\right) d\left( {\left( {{\delta }_{r}^{-1}{\gamma }_{r}}\right) \cdot \nu }\right) \left( x\right)
\]
\[
= {\int }_{G}\alpha \left( x\right) \mathop{\prod }\limits_{{l = 1}}^{{r - 1}}\left( {{\delta }_{l}^{-1}{\gamma }_{l}}\right) \left( x\right) {d\nu }\left( x\right)
\]
\[
= \ldots = {\int }_{G}\alpha \left( x\right) {d\nu }\left( x\right)
\]
\[
= \widehat{\nu }\left( \alpha \right)
\]
Thus, for every \( \lambda \in \widehat{G/{H}_{i}} \), the Fourier-Stieltjes transform of \( \left( {{1}_{G} - \lambda }\right) \cdot \nu \) vanishes on all of \( \widehat{G} \) . This implies that \( \left| \nu \right| \left( {G \smallsetminus {H}_{i}}\right) = 0,1 \leq i \leq m \) . Now, let \( H = \mathop{\bigcap }\limits_{{i = 1}}^{m}{H}_{i} \) . It follows that \( \left| \nu \right| \left( {G \smallsetminus H}\right) = 0 \) .
We claim that \( \nu \) has only a finite number of nonzero Fourier coefficients on \( H \) . For that, observe that \( {\left. \Gamma \right| }_{H} \) is finite since \( \Gamma = \mathop{\bigcup }\limits_{{i = 1}}^{m}{\Gamma }_{i} \) and \( {\Gamma }_{i} \subseteq {\gamma }_{i} \cdot \widehat{G/{H}_{i}} \subseteq \) \( {\gamma }_{i} \cdot \widehat{G/H} \) for every \( {\gamma }_{i} \in {\Gamma }_{i} \) . Now, if \( \chi \in \widehat{H} \) is such that \( {\int }_{H}\chi \left( x\right) {d\nu }\left( x\right) \neq 0 \), then \( \chi \) is the restriction to \( H \) of some \( \gamma \in \Gamma \) since \( \left| \nu \right| \left( {G \smallsetminus H}\right) = 0 \) . This proves the claim. It follows that \( \nu \) is a finite sum
\[
\nu = \mathop{\sum }\limits_{{i = 1}}^{m}{n}_{i}\left( {{\bar{\chi }}_{i} \cdot {m}_{H}}\right) = \mathop{\sum }\limits_{{i = 1}}^{m}{n}_{i}\left( {{\bar{\gamma }}_{i} \cdot {m}_{H}}\right)
\]
where \( {n}_{i} \in \mathbb{Z} \) and \( {\gamma }_{i} \in {\Gamma }_{i}, i = 1,\ldots, m \) .
To prove that \( \mu \) is a finite sum of mutually singular canonical measures, we distinguish two cases. First, if \( \nu \) is not an accumulation point of \( M \), then \( \nu = \gamma \cdot \mu \) for some \( \gamma \in \widehat{G} \), so \( \mu = \bar{\gamma } \cdot \nu \) is canonical. Thus we can assume that \( \nu \) is an accumulation point of \( M \) . Because \( \nu \) is absolutely continuous with respect to \( {m}_{H} \), it follows that \( \nu = {\left. \gamma \cdot \mu \right| }_{H} \) for some \( \gamma \in \widehat{G} \) . Let \( {\mu }_{1} = \bar{\gamma } \cdot \nu \) . Then \( \mu = {\mu }_{1} + \left( {\mu - {\mu }_{1}}\right) \) and \( {\mu }_{1} \bot \left( {\mu - {\mu }_{1}}\right) \) since \( {\mu }_{1} = {\left. \mu \right| }_{H} \) . Thus
\[
\parallel \mu \parallel = \begin{Vmatrix}{\mu }_{1}\end{Vmatrix} + \begin{Vmatrix}{\mu - {\mu }_{1}}\end{Vmatrix} \geq 1 + \begin{Vmatrix}{\mu - {\mu }_{1}}\end{Vmatrix}
\]
so \( \begin{Vmatrix}{\mu - {\mu }_{1}}\end{Vmatrix} \leq \parallel \mu \parallel - 1 \) . Since \( \mu - {\mu }_{1} \in F\left( G\right) \), the same argument can now be applied to \( \mu - {\mu }_{1} \), and so on. Because the norm is decreased by at least one at each step, this process must stop after a finite number of steps.
In order to describe the support set \( S\left( \mu \right) \) of \( \mu \in F\left( G\right) \) in the most convenient manner, we have to introduce the coset ring of an Abelian group.
Definition 5.6.7. The coset ring of an Abelian group \( G \), denoted \( \mathcal{R}\left( G\right) \), is the smallest Boolean algebra of subsets of \( G \) containing the cosets of all subgroups of \( G \) . That is, \( \mathcal{R}\left( G\right) \) is the smallest family of subsets of \( G \) which contains all the cosets of subgroups of \( G \) and which is closed under forming finite unions, finite intersections, and complements.
Let \( G \) be a topological Abelian group. The closed coset ring of \( G,{\mathcal{R}}_{c}\left( G\right) \) , is defined to be
\[
{\mathcal{R}}_{c}\left( G\right) = \{ E \in \mathcal{R}\left( G\right) : E\text{ is closed in }G\} .
\]
The following proposition is a first indication of the importance of the coset ring in our context.
Proposition 5.6.8. Let \( G \) be a compact Abelian group and \( \mu \in F\left( G\right) \) . Then the set
\[
S\left( \mu \right) = \{ \alpha \in \widehat{G} : \widehat{\mu }\left( \alpha \right) \neq 0\}
\]
belongs to \( \mathcal{R}\left( \widehat{G}\right) \) .
Proof. For \( k \in {\mathbb{Z}}^{ * } = \mathbb{Z} \smallsetminus \{ 0\} \), let \( S{\left( \mu \right) }_{k} = \{ \alpha \in \widehat{G} : \widehat{\mu }\left( \alpha \right) = k\} \) . Since the range of \( \widehat{\mu } \) is finite, it suffices to show that \( S{\left( \mu \right) }_{k} \in \mathcal{R}\left( \widehat{G}\right) \) for each \( k \) .
Assume first that \( \mu \) is a canonical measure. Thus \( \mu \) is of the form \( \mu = \) \( \mathop{\sum }\limits_{{j = 1}}^{n}{n}_{j}\left( {{\gamma }_{j} \cdot {m}_{H}}\right) \), where \( H \) is a closed subgroup of \( G \), the \( {n}_{j} \) are nonzero integers, and the \( |
1139_(GTM44)Elementary Algebraic Geometry | Definition 4.5 |
Definition 4.5. The determinant in (14) is called the resultant of \( f \) and \( g \) ; we denote it by \( \mathcal{R}\left( {f, g}\right) \) . If \( f, g \in D\left\lbrack {{X}_{1},\ldots ,{X}_{t}}\right\rbrack \), then for any \( i \) ,
\[
f, g \in D\left\lbrack {{X}_{1},\ldots ,{X}_{i - 1},{X}_{i + 1},\ldots ,{X}_{t}}\right\rbrack \left\lbrack {X}_{i}\right\rbrack = {D}^{\prime }\left\lbrack {X}_{i}\right\rbrack ;
\]
the corresponding resultant is called the resultant of \( f \) and \( g \) with respect to \( {X}_{i} \), denoted by \( {\mathcal{R}}_{{X}_{i}}\left( {f, g}\right) \) . For any \( f \in D\left\lbrack X\right\rbrack \), one can define the formal derivative \( {df}/{dX} \in D\left\lbrack X\right\rbrack \) using the relations
\[
\frac{d\left( {au}\right) }{dX} = a\frac{du}{dX},\;\frac{d\left( {uv}\right) }{dX} = u\frac{dv}{dX} + v\frac{du}{dX}\;\left( {a \in D, u, v \in D\left\lbrack X\right\rbrack }\right) .
\]
Then the resultant \( \mathcal{R}\left( {f,{f}^{\prime }}\right) \) of \( f \in D\left\lbrack X\right\rbrack \) and its derivative \( {df}/{dX} = \) \( {f}^{\prime } \in D\left\lbrack X\right\rbrack \) is called the discriminant of \( f \), denoted \( \mathcal{D}\left( f\right) \) ; if \( f \in D\left\lbrack {{X}_{1},\ldots ,{X}_{t}}\right\rbrack \) , then \( {\mathcal{R}}_{{X}_{i}}\left( {f,\partial f/\partial {X}_{i}}\right) \) is called the discriminant of \( f \) with respect to \( {X}_{i} \) , denoted \( {\mathcal{D}}_{{X}_{i}}\left( f\right) \) . If \( f \in \mathbb{C}\left\lbrack {{X}_{1},\ldots ,{X}_{t}}\right\rbrack \), then the variety
\[
\mathrm{V}\left( {{\mathcal{D}}_{{X}_{i}}\left( f\right) }\right) \subset {\mathbb{C}}_{{X}_{1},\ldots ,{X}_{i - 1},{X}_{i + 1},\ldots ,{X}_{t}} = {\mathbb{C}}^{t - 1}
\]
is called the discriminant variety of \( {\mathcal{D}}_{{X}_{i}}\left( f\right) \) .
Remark 4.6. It is easily checked that \( {\mathcal{D}}_{X}\left( {a{X}^{2} + {bX} + c}\right) \) is essentially the familiar " \( {b}^{2} - {4ac} \) " (Exercise 4.2).
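This check (including the sign convention) can be reproduced with a computer algebra system. The sympy sketch below is our own illustration: it computes the Sylvester resultant \( \mathcal{R}\left( {f,{f}^{\prime }}\right) \) for \( f = a{X}^{2} + {bX} + c \) and compares it with \( {b}^{2} - {4ac} \).

```python
from sympy import symbols, resultant, discriminant, diff, factor

X, a, b, c = symbols('X a b c')
f = a*X**2 + b*X + c

# The Sylvester resultant of f and f'; it equals -a*(b**2 - 4*a*c),
# i.e. "b^2 - 4ac" up to the factor -a.
print(factor(resultant(f, diff(f, X), X)))

# sympy's discriminant divides out the leading coefficient and a sign,
# recovering exactly b**2 - 4*a*c.
print(discriminant(f, X))
```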
The following will be used frequently in the sequel:
Lemma 4.7. Let \( D \) be any unique factorization domain of characteristic zero. Then \( f \in D\left\lbrack X\right\rbrack \) has a repeated (nonconstant) factor iff \( f \) and \( {f}^{\prime } \) have a common factor. Thus
\[
f\text{ has a repeated factor iff }\mathcal{D}\left( f\right) = 0\text{.}
\]
In particular:
\[
\text{If }D = \mathbb{C}\left\lbrack {{X}_{1},\ldots ,{X}_{i - 1},{X}_{i + 1},\ldots ,{X}_{t}}\right\rbrack \text{, then }f\left( {X}_{i}\right) \in D\left\lbrack {X}_{i}\right\rbrack \text{ has}
\]
\[
\text{a repeated factor (involving }{X}_{i}\text{) iff }{\mathcal{D}}_{{X}_{i}}\left( f\right) = 0\text{.}
\]
Proof. First, suppose that \( f \) has no repeated factors. Then \( f = {p}_{1}{p}_{2}\cdots {p}_{r} \) , where the \( {p}_{i} \) are distinct irreducible polynomials. Differentiating, we obtain
\[
{f}^{\prime } = {p}_{1}^{\prime }{p}_{2}\cdots {p}_{r} + {p}_{1}{p}_{2}^{\prime }\cdots {p}_{r} + \ldots + {p}_{1}{p}_{2}\cdots {p}_{r}^{\prime }.
\]
All terms except the \( i \) th are divisible by \( {p}_{i} \), but the \( i \) th term is not divisible by \( {p}_{i} \) . Indeed, \( {p}_{i} \nmid {p}_{i}^{\prime } \) in characteristic zero, since \( {p}_{i}^{\prime } \neq 0 \) and \( \deg {p}_{i}^{\prime } < \deg {p}_{i} \) . Hence \( {p}_{i} \nmid {f}^{\prime } \), so \( f \) and \( {f}^{\prime } \) have no common factors.
Conversely, suppose that \( f \) has a repeated factor, say \( f = {g}^{s}h \), where \( s \geq 2 \) . Then \( {f}^{\prime } = s{g}^{s - 1}{g}^{\prime }h + {g}^{s}{h}^{\prime } \), so \( g \) is a common factor of \( f \) and \( {f}^{\prime } \) .
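A quick computational illustration of Lemma 4.7 (our own, using sympy): a polynomial with a repeated factor shares that factor with its derivative and has vanishing discriminant, while a squarefree polynomial does not.

```python
from sympy import symbols, gcd, diff, discriminant

X = symbols('X')
f = (X - 1)**2 * (X + 2)          # repeated factor X - 1
g = (X - 1) * (X + 2) * (X - 3)   # squarefree

print(gcd(f, diff(f, X)))          # X - 1: the repeated factor also divides f'
print(discriminant(f, X))          # 0
print(gcd(g, diff(g, X)))          # 1: no common factor
print(discriminant(g, X) != 0)     # True
```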
Lemma 4.8. Suppose \( p\left( {X, Y}\right) \in \mathbb{C}\left\lbrack {X, Y}\right\rbrack \) satisfies Assumption 4.2, p having (total) degree \( n \) . Then the points \( {x}_{0} \in {\mathbb{C}}_{X} \) at which \( p\left( {{x}_{0}, Y}\right) \) has fewer than \( n \) zeros are precisely the zeros of the polynomial \( {\mathcal{D}}_{Y}\left( p\right) \in \mathbb{C}\left\lbrack X\right\rbrack \) .
Proof. Let \( {x}_{0} \in {\mathbb{C}}_{X} \) . Then \( \deg p\left( {{x}_{0}, Y}\right) = n \), and from the form of the resultant in (14) it is evident that
\[
{\mathcal{D}}_{Y}{\left( p\left( X, Y\right) \right) }_{X = {x}_{0}} = \mathcal{D}\left( {p\left( {{x}_{0}, Y}\right) }\right) .
\]
This, together with Lemma 4.7, gives the result.
Remark 4.9. Note that the conclusion of Lemma 4.8 need not hold if Assumption 4.2 on \( p\left( {X, Y}\right) \) is not satisfied. For instance, \( p\left( {X, Y}\right) = Y - {X}^{2} \) does not satisfy the condition, and
\[
{\mathcal{D}}_{Y}\left( p\right) = {\mathcal{R}}_{Y}\left\lbrack {{1Y} + \left( {-{X}^{2}}\right) {Y}^{0},1{Y}^{0}}\right\rbrack \equiv 1.
\]
And for each \( X = {x}_{0}, p\left( {{x}_{0}, Y}\right) \) has only one zero \( \left( {Y = {x}_{0}{}^{2}}\right) \), not two. (One can think of "the other zero" as lying on the line at infinity.)
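The contrast in this remark is easy to reproduce (again our own sympy check, with the caveat that sympy normalizes the discriminant by the leading coefficient): for \( p = {Y}^{2} - X \), which satisfies Assumption 4.2, the discriminant is a nonzero multiple of \( X \), and its zero \( X = 0 \) is exactly the point where the two \( Y \) -roots collide; for \( p = Y - {X}^{2} \) the discriminant is the nonzero constant 1.

```python
from sympy import symbols, discriminant, solve

X, Y = symbols('X Y')

p1 = Y**2 - X                       # satisfies Assumption 4.2 (monic of degree 2 in Y)
d1 = discriminant(p1, Y)
print(d1, solve(d1, X))             # 4*X (the raw resultant is -4*X), vanishing only at X = 0

p2 = Y - X**2                       # does not satisfy Assumption 4.2
print(discriminant(p2, Y))          # 1: the discriminant never vanishes, as in Remark 4.9
```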
This completes our detour into resultants and discriminants. We now return to the proof of \( \left( {4.1.2}^{\prime }\right) \) . We are almost done.
Let us write, in accordance with Assumption 4.2,
\[
p\left( {X, Y}\right) = {Y}^{n} + \ldots + {a}_{n}\left( X\right)
\]
\[
\frac{\partial p}{\partial Y}\left( {X, Y}\right) = {p}_{Y}\left( {X, Y}\right) = {b}_{0}\left( X\right) {Y}^{n - 1} + \ldots + {b}_{n - 1}\left( X\right) ,
\]
where \( {a}_{i}\left( X\right) ,{b}_{i}\left( X\right) \in \mathbb{C}\left\lbrack X\right\rbrack \), deg \( {a}_{i}\left( X\right) \leq i \) (or \( {a}_{i}\left( X\right) = 0 \) ), and \( {b}_{0}\left( X\right) = n \neq 0 \) .
If \( p \) and \( \partial p/\partial Y \) have a common zero at \( X = {x}_{0} \), the determinant \( {\mathcal{D}}_{Y}\left( p\right) \in \mathbb{C}\left\lbrack X\right\rbrack \) must vanish at \( X = {x}_{0} \) . If this discriminant polynomial is not the zero polynomial, there are, of course, only finitely many values \( {x}_{0} \) for which \( p\left( {{x}_{0}, Y}\right) \) and \( \left( {\partial p/\partial Y}\right) \left( {{x}_{0}, Y}\right) \) could possibly possess a common zero. Thus, to prove there are only finitely many points \( \left( {{x}_{0},{y}_{0}}\right) \in {\mathbb{C}}_{XY} \) satisfying
\[
p\left( {{x}_{0},{y}_{0}}\right) = \frac{\partial p}{\partial Y}\left( {{x}_{0},{y}_{0}}\right) = 0,
\]
there remain only these two things to clear up:
(4.10) \( {\mathcal{D}}_{Y}\left( p\right) \) is not the zero polynomial.
(4.11) At any zero \( {x}_{0} \in {\mathbb{C}}_{X} \) of \( {\mathcal{D}}_{Y}\left( p\right) \), there are not infinitely many
solutions to \( p\left( {{x}_{0}, Y}\right) = \left( {\partial p/\partial Y}\right) \left( {{x}_{0}, Y}\right) = 0 \) .
First,(4.10) follows at once from the assumption that \( p \) has no repeated irreducible factors (Lemma 4.7).
Second,(4.11) holds since for any \( {x}_{0}, p\left( {{x}_{0}, Y}\right) \) is a nonzero polynomial in \( Y \) having at most \( n \) zeros.
We have thus completed the proof of \( \left( {4.1.2}^{\prime }\right) \) .
We now turn to the proof of \( \left( {4.1.3}^{\prime }\right) \) . First recall the following standard fact from complex analysis:
Theorem 4.12 (Riemann extension theorem). Let \( \Omega \) be a nonempty open subset of \( \mathbb{C} \), let \( c \) be an arbitrary point of \( \Omega \) and let \( h\left( X\right) \) be single-valued and analytic at each point of \( \Omega \smallsetminus \{ c\} \) . Then if \( h \) is bounded at \( c \) (i.e., if there is an \( M \in \mathbb{R} \) such that \( \left| {h\left( X\right) }\right| < M \), for all \( X \) near \( c \) ), \( h \) may be uniquely extended to a function holomorphic on all of \( \Omega \) (i.e., there is a unique \( {h}^{ * } \), analytic on \( \Omega \) with restriction \( {h}^{ * } \mid \Omega \smallsetminus \{ c\} = h \) .)
In proving (4.1.3’), we continue to assume that \( p \) is a product of distinct factors.
Let \( \left( {{x}_{0},{y}_{0}}\right) \) be a point of \( {\mathbb{C}}_{XY} \) satisfying, without loss of generality, \( p\left( {{x}_{0},{y}_{0}}\right) \) \( = \left( {\partial p/\partial Y}\right) \left( {{x}_{0},{y}_{0}}\right) = 0 \) . Then \( {y}_{0} \) is a multiple root of \( p\left( {{x}_{0}, Y}\right) = 0 \) . Let \( r > 1 \) be its multiplicity, and let \( \Delta = \Delta \left( {{y}_{0},\varepsilon }\right) \) be a disk in \( {\mathbb{C}}_{Y} \) centered at \( {y}_{0} \) , whose closure contains no other \( {y}_{0i} \) . By the argument principle (see Theorem 3.8), we have
\[
\frac{1}{2\pi i}{\int }_{\partial \Delta }\frac{{p}_{Y}\left( {{x}_{0}, Y}\right) }{p\left( {{x}_{0}, Y}\right) }{dY} = r
\]
We now reason as before in the proof of Theorem 3.6.
Since \( p\left( {{x}_{0}, Y}\right) \) is never zero on \( \partial \Delta \), a small change in \( {x}_{0} \) to \( {x}_{1} \), yields a small change in the integrand, hence in the integral. Thus the integral
\[
\frac{1}{2\pi i}{\int }_{\partial \Delta }\frac{{p}_{Y}\left( {{x}_{1}, Y}\right) }{p\left( {{x}_{1}, Y}\right) }{dY}
\]
has value \( r \) for all \( {x}_{1} \in {\mathbb{C}}_{X} \) sufficiently near \( {x}_{0} \) . We then see that for a sufficiently small disk \( \Delta \left( {{y}_{0},\varepsilon }\right) \subset {\mathbb{C}}_{Y} \) centered at \( {y}_{0} \), there is a sufficiently small disk \( {\Delta }^{\prime }\left( {{x}_{0},\delta }\right) \subset {\mathbb{C}}_{X} \) about \( {x}_{0} \) so that for each \( {x}_{1} \in {\Delta }^{\prime }\left( {{x}_{0},\delta }\right) \smallsetminus \left\{ {x}_{0}\right\} \) there are exactly \( r \) zeros of \( p\left( {{x}_{1}, Y}\right) \) in \( \Delta \left( {{y}_{0},\varepsilon }\right) \), counted with multiplicity. But for \( {\Delta }^{\prime }\left( {{x}_{0},\delta }\right) \) sufficiently small, \( {x}_{1} \in {\Delta }^{\prime }\left( {{x}_{0},\delta }\right) \smallsetminus \left\{ {x}_{0}\right\} \) is never in the discrimi |
1234_[丁一文] Number Theory 1 | Definition 2.2.4 |
Definition 2.2.4. An additive valuation \( v \) on \( K \) is a map \( v : K \rightarrow \mathbb{R} \cup \{ + \infty \} \) such that
1. \( v\left( x\right) = + \infty \) if and only if \( x = 0 \) ,
2. \( v\left( {xy}\right) = v\left( x\right) + v\left( y\right) \) ,
3. \( v\left( {x + y}\right) \geq \min \{ v\left( x\right), v\left( y\right) \} \) .
Two additive valuations \( {v}_{1},{v}_{2} \) are called equivalent if there exists \( r > 0 \) such that \( {v}_{2} = r{v}_{1} \) .
The following lemma is clear.
Lemma 2.2.5. Let \( q > 1 \) . The map \( v \mapsto \left\lbrack {x \mapsto {q}^{-v\left( x\right) }}\right\rbrack \) gives a bijection between the set of additive valuations on \( K \) and the set of non-archimedean norms on \( K \), where the inverse is given by \( \left| \cdot \right| \mapsto \left\lbrack {x \mapsto - {\log }_{q}\left( \left| x\right| \right) }\right\rbrack \) .
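A concrete instance of this correspondence (our own sketch, not from the text) is the \( p \) -adic valuation \( {v}_{p} \) on \( \mathbb{Q} \) together with the norm \( {\left| x\right| }_{p} = {p}^{-{v}_{p}\left( x\right) } \) obtained by taking \( q = p \) in the lemma; the script below checks properties 2 and 3 of Definition 2.2.4 and the strong triangle inequality on random rationals.

```python
from fractions import Fraction
import math, random

def v_p(x, p):
    """p-adic additive valuation on Q, with v_p(0) = +infinity."""
    if x == 0:
        return math.inf
    n, d, v = x.numerator, x.denominator, 0
    while n % p == 0:
        n //= p; v += 1
    while d % p == 0:
        d //= p; v -= 1
    return v

def norm_p(x, p):
    """The non-archimedean norm |x|_p = p**(-v_p(x)) attached to v_p."""
    return 0.0 if x == 0 else p ** (-v_p(x, p))

p = 5
random.seed(0)
for _ in range(200):
    x = Fraction(random.randint(-50, 50), random.randint(1, 50))
    y = Fraction(random.randint(-50, 50), random.randint(1, 50))
    assert v_p(x * y, p) == v_p(x, p) + v_p(y, p)            # v(xy) = v(x) + v(y)
    assert v_p(x + y, p) >= min(v_p(x, p), v_p(y, p))        # v(x+y) >= min(v(x), v(y))
    assert norm_p(x + y, p) <= max(norm_p(x, p), norm_p(y, p))
print("v_5 and |.|_5 satisfy Definition 2.2.4 on the sampled rationals")
```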
Theorem 2.2.6 (Ostrowski). Let \( \left| \cdot \right| \) be a non-trivial norm on \( \mathbb{Q} \) .
(a) If \( \left| \cdot \right| \) is archimedean, then \( \left| \cdot \right| \) is equivalent to the standard absolute value \( {\left| \cdot \right| }_{\infty } \) .
(b) If \( \left| \cdot \right| \) is non-archimedean, then it is equivalent to \( {\left| \cdot \right| }_{p} \) for some prime number \( p \) .
## 2.3 Non-archimedean valuation field
Let \( \left( {K,\left| \cdot \right| }\right) \) be a non-archimedean valuation field, let \( v \) be an associated additive valuation.
Lemma 2.3.1. \( \left| {x + y}\right| = \max \{ \left| x\right| ,\left| y\right| \} \) if \( \left| x\right| \neq \left| y\right| \) .
Proof. Suppose \( \left| x\right| > \left| y\right| \) ; then \( \left| x\right| = \left| {\left( {x + y}\right) - y}\right| \leq \max \{ \left| {x + y}\right| ,\left| {-y}\right| \} \) . Note \( \left| {-y}\right| = \left| {-1}\right| \left| y\right| = \left| y\right| \) \( \left( {\left| {1 \cdot x}\right| = \left| x\right| = \left| 1\right| \left| x\right| \Rightarrow \left| 1\right| = 1\text{ and }\left| {\left( {-1}\right) \left( {-1}\right) }\right| = \left| 1\right| \Rightarrow \left| {-1}\right| = 1}\right) \) . Since \( \left| y\right| < \left| x\right| \), this forces \( \left| {x + y}\right| \geq \left| x\right| \), while \( \left| {x + y}\right| \leq \max \{ \left| x\right| ,\left| y\right| \} = \left| x\right| \) . The lemma follows.
Lemma 2.3.2. Let \( r > 0 \), then \( \{ x \in K \mid \left| x\right| \leq r\} \) is an open subset of \( K \) .
Proof. For any \( a \in \{ x \in K \mid \left| x\right| \leq r\} \) and \( 0 < s < r \), we have \( \{ x \in K \mid \left| {x - a}\right| < s\} \subset \{ x \in K \mid \left| x\right| \leq r\} \) .
Put \( {\mathcal{O}}_{K} \mathrel{\text{:=}} \{ x \in K \mid \left| x\right| \leq 1\} = \{ x \in K \mid v\left( x\right) \geq 0\} \) .
Lemma 2.3.3. \( {\mathcal{O}}_{K} \) is a subring of \( K \) .
Proof. Let \( x, y \in {\mathcal{O}}_{K} \), then \( \left| {xy}\right| \leq 1 \), and \( \left| {x \pm y}\right| \leq 1 \) .
We call \( {\mathcal{O}}_{K} \) the valuation ring of \( \left( {K,\left| \cdot \right| }\right) \) .
Proposition 2.3.4. Two non-trivial non-archimedean norms \( {\left| \cdot \right| }_{1} \) and \( {\left| \cdot \right| }_{2} \) on a field \( K \) are equivalent if and only if their valuation rings are the same.
Proof. "Only if" is clear. Suppose for any \( x \in K,{\left| x\right| }_{1} \leq 1 \) if and only if \( {\left| x\right| }_{2} \leq 1 \) . Let \( b \in K \) such that \( {\left| b\right| }_{1} > 1 \), there exists \( r > 0 \) such that \( {\left| b\right| }_{2} = {\left| b\right| }_{1}^{r} \) . For any \( x \in K \smallsetminus \{ 0\} \), there exists \( \rho \) such that \( {\left| x\right| }_{1} = {\left| b\right| }_{1}^{\rho } \) . For any \( u, v,{u}^{\prime },{v}^{\prime } \in \mathbb{Z} \) such that \( v,{v}^{\prime } \neq 0 \) and \( \frac{u}{v} \leq \rho \leq \frac{{u}^{\prime }}{{v}^{\prime }} \) , we have \( {\left| b\right| }_{1}^{\frac{u}{v}} \leq {\left| x\right| }_{1} \leq {\left| b\right| }_{1}^{\frac{{u}^{\prime }}{{v}^{\prime }}} \), that is equivalent to \( {\left| {b}^{u}/{x}^{v}\right| }_{1} \leq 1 \) and \( {\left| {x}^{{v}^{\prime }}/{b}^{{u}^{\prime }}\right| }_{1} \leq 1 \) . Hence \( {\left| {b}^{u}/{x}^{v}\right| }_{2} \leq 1 \) and \( {\left| {x}^{{v}^{\prime }}/{b}^{{u}^{\prime }}\right| }_{2} \leq 1 \), and thus \( {\left| b\right| }_{2}^{\frac{u}{v}} \leq {\left| x\right| }_{2} \leq {\left| b\right| }_{2}^{\frac{{u}^{\prime }}{{v}^{\prime }}} \) . Let \( \frac{u}{v} \) and \( \frac{{u}^{\prime }}{{v}^{\prime }} \) converge to \( \rho \) , we see \( {\left| x\right| }_{2} = {\left| b\right| }_{2}^{\rho } = {\left| x\right| }_{1}^{r} \) and hence \( {\left| \cdot \right| }_{1} \sim {\left| \cdot \right| }_{2} \) .
Definition 2.3.5. If \( {\left. v\right| }_{{\mathcal{O}}_{K}\smallsetminus \{ 0\} } \) has discrete image, then we call \( {\mathcal{O}}_{K} \) a discrete valuation ring.
Remark 2.3.6. Let \( {\mathcal{O}}_{K} \) be a discrete valuation ring, then \( v\left( {K\smallsetminus \{ 0\} }\right) \) is a lattice in \( \mathbb{R} \) . We call the valuation \( v \) normalized if \( v\left( {K\smallsetminus \{ 0\} }\right) = \mathbb{Z} \) .
Example 2.3.7. \( {\mathbb{Z}}_{p} \) is a discrete valuation ring.
Proposition 2.3.8. Let \( \left( {K,\left| \cdot \right| }\right) \) be a non-archimedean valuation field.
(1) \( {\mathcal{O}}_{K} \) is integrally closed and is a local ring with maximal ideal \( {\mathfrak{m}}_{K} \mathrel{\text{:=}} \{ x \in K \mid \left| x\right| < 1\} \) (a ring is called local if it has a unique maximal ideal).
(2) Let \( \widehat{K} \) be the completion of \( K \), then \( {\mathcal{O}}_{\widehat{K}} \) is the completion of \( {\mathcal{O}}_{K} \) under \( \left| \cdot \right| \) . If \( \pi \in {\mathfrak{m}}_{K} \) is a non-zero element, then one has a canonical isomorphism (where \( {\mathcal{O}}_{K}/{\pi }^{n} \) is equipped with the discrete topology)
\[
{\mathcal{O}}_{\widehat{K}} \cong \varprojlim_{n}{\mathcal{O}}_{K}/{\pi }^{n} = \left\{ {\left( {x}_{n}\right) \in \mathop{\prod }\limits_{{n \geq 1}}{\mathcal{O}}_{K}/{\pi }^{n} \mid {x}_{n + 1} \equiv {x}_{n}\;\left( {\;\operatorname{mod}\;{\pi }^{n}}\right) }\right\} .
\]
(3) If \( \left| \cdot \right| \) is a discrete valuation, then \( {\mathfrak{m}}_{K} \) is principal and all the non-zero ideals of \( {\mathcal{O}}_{K} \) are of the form \( {\mathfrak{m}}_{K}^{n} \) with \( n \in {\mathbb{Z}}_{ \geq 0} \) .
Proof. (1) Let \( x \in K \) and suppose \( {x}^{n} + {a}_{n - 1}{x}^{n - 1} + \cdots + {a}_{0} = 0,{a}_{i} \in {\mathcal{O}}_{K} \) . If \( \left| x\right| > 1 \), then \( \left| {x}^{n}\right| > \left| {{a}_{i}{x}^{i}}\right| \) for all \( i = 0,\cdots, n - 1 \) . So \( 0 = \left| {{x}^{n} + \cdots + {a}_{0}}\right| = \left| {x}^{n}\right| > 1 \), a contradiction.
(2) Let \( x \in {\mathcal{O}}_{\widehat{K}} \subset \widehat{K} \), thus \( x = \mathop{\lim }\limits_{{n \rightarrow \infty }}{a}_{n} \) with \( {a}_{n} \in K \) . Since \( \left| x\right| \leq 1 \), we have \( \left| {a}_{n}\right| \leq 1 \) for \( n \) sufficiently large. This implies \( x \in \widehat{{\mathcal{O}}_{K}} \) (i.e. the completion of \( {\mathcal{O}}_{K} \) under \( \left| \cdot \right| ) \) . Conversely if \( x \in \widehat{{\mathcal{O}}_{K}} \), then it is clear that \( {\left| x\right| }_{\widehat{K}} \leq 1 \) .
For any \( x \in {\mathcal{O}}_{\widehat{K}} \cong \widehat{{\mathcal{O}}_{K}} \), let \( \left( {a}_{n}\right) \) be a Cauchy sequence in \( {\mathcal{O}}_{K} \) that converges to \( x \) . By removing certain terms, we can and do assume \( \left| {x - {a}_{n}}\right| \leq {\left| \pi \right| }^{n} \) that implies \( \left| {{a}_{m} - {a}_{n}}\right| \leq {\left| \pi \right| }^{n} \)
for \( m \geq n \) . Thus \( \left| {{\pi }^{-n}\left( {{a}_{m} - {a}_{n}}\right) }\right| \leq 1 \Leftrightarrow {\pi }^{-n}\left( {{a}_{m} - {a}_{n}}\right) \in {\mathcal{O}}_{K} \Leftrightarrow {a}_{m} - {a}_{n} \in {\pi }^{n}{\mathcal{O}}_{K} \) . If \( \left( {a}_{n}^{\prime }\right) \) is another Cauchy sequence that converges to \( x \) satisfying the same condition, we see \( \left| {{a}_{n}^{\prime } - {a}_{n}}\right| \leq {\left| \pi \right| }^{n} \) and hence \( {a}_{n}^{\prime } - {a}_{n} \in {\pi }^{n}{\mathcal{O}}_{K} \) by the same argument as above. In particular, we obtain a well-defined map
\[
{\mathcal{O}}_{\widehat{K}} \rightarrow \varprojlim_{n}{\mathcal{O}}_{K}/{\pi }^{n},\; x \mapsto \left( {a}_{n}\right) .
\]
It is straightforward to check that this map is a bijective ring homomorphism and a homeomorphism (Exercise).
(3) Suppose \( \left| \cdot \right| \) is discrete, and let \( v \) be the corresponding normalized additive valuation. Let \( \alpha \in {\mathcal{O}}_{K} \) be such that \( v\left( \alpha \right) = 1 \) . For \( \beta \in {\mathfrak{m}}_{K}, v\left( \beta \right) \in {\mathbb{Z}}_{ > 0} \) and hence \( v\left( {\beta /\alpha }\right) \geq 0 \Rightarrow \beta \in \alpha {\mathcal{O}}_{K} \) . Thus \( {\mathfrak{m}}_{K} = \alpha {\mathcal{O}}_{K} \) . Let \( I \) be a non-zero ideal, and let \( m \mathrel{\text{:=}} \inf \{ v\left( x\right) \mid x \in I\} \) . For \( x \in I \) we have \( v\left( {x/{\alpha }^{m}}\right) \geq 0 \), hence \( x \in {\alpha }^{m}{\mathcal{O}}_{K} = {\mathfrak{m}}_{K}^{m} \) . If \( x \in I \) satisfies \( v\left( x\right) = m \), then \( {\alpha }^{m} \in x{\mathcal{O}}_{K} \subset I \) . Thus \( I = {\mathfrak{m}}_{K}^{m} \) .
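Part (2) of Proposition 2.3.8 describes elements of \( {\mathcal{O}}_{\widehat{K}} \) as compatible sequences \( \left( {x}_{n}\right) \) with \( {x}_{n + 1} \equiv {x}_{n}\;\left( {\;\operatorname{mod}\;{\pi }^{n}}\right) \) . A minimal numerical sketch of this (ours, for \( K = \mathbb{Q} \) with the 7-adic valuation and \( \pi = 7 \) ): a square root of 2 in \( {\mathbb{Z}}_{7} \) arises as such a compatible sequence, produced by Newton (Hensel) lifting.

```python
# Build x_n in Z/7^n with x_n^2 = 2 and compatible reductions,
# i.e. an element of the inverse limit of the rings Z/7^n.
p = 7
x = 3                      # 3^2 = 9 = 2 (mod 7): a square root of 2 modulo p
xs = []
for n in range(1, 11):
    mod = p ** n
    # Newton step x <- x - (x^2 - 2)/(2x), carried out modulo p^n
    x = (x - (x * x - 2) * pow(2 * x, -1, mod)) % mod
    xs.append(x)
    assert (x * x - 2) % mod == 0          # x_n^2 = 2 in Z/7^n

for k in range(len(xs) - 1):
    # compatibility: the reduction of x_{n+1} modulo 7^n equals x_n
    assert xs[k + 1] % p ** (k + 1) == xs[k]
print(xs)
```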
Definition 2.3.9. Let \( \left( {K,\left| \cdot \right| }\right) \) be a non-archimedean valuation field. We call \( {\mathcal{O}}_{K}/{\mathfrak{m}}_{K} \) the residue field of \( {\mathcal{O}}_{K} \) . If \( {\mathcal{O}}_{K} \) is a discrete valuation ring, we call a generator \( \pi \) of \( {\mathfrak{m}}_{K} \) a uniformizer of \( K \) (or of \( {\mathcal{O}}_{K} \) ).
Proposition 2.3.10. For a domain \( R, R \) is a discrete valuation ring if and only if \( R \) is a local Dedekind domain.
Proof. The "only if" part follows from the above proposition. Suppose \( R \) is a local Dedekind domain, and let \( \mathfrak{m} \) be the unique maximal ide |
111_Three Dimensional Navier-Stokes Equations-James_C._Robinson,_Jos_L._Rodrigo,_Witold_Sadows(z-lib.org | Definition 1.34 |
Definition 1.34. A multimap \( F : T \rightrightarrows X \) between two topological spaces is said to be compactly outward continuous at some point \( {t}_{0} \) of \( T \) if for every compact subset \( K \) of \( X \) the multimap \( {F}_{K} : t \rightrightarrows F\left( t\right) \cap K \) is outward continuous at \( {t}_{0} \) .
A multimap \( F : T \rightrightarrows X \) between two topological spaces is said to be closed at some point \( {t}_{0} \) of \( T \) if for all \( x \in X \smallsetminus F\left( {t}_{0}\right) \) there exist neighborhoods \( U \) of \( {t}_{0}, V \) of \( x \) such that \( F\left( t\right) \cap V = \varnothing \) for all \( t \in U \) .
Clearly outward continuity implies compact outward continuity. In fact, \( F \) is outward continuous (resp. compactly outward continuous) at \( {t}_{0} \) if and only if for every closed (resp. compact) subset \( C \) of \( X \) contained in \( X \smallsetminus F\left( {t}_{0}\right) \), there exists a neighborhood \( U \) of \( {t}_{0} \) such that for all \( t \in U, F\left( t\right) \) and \( C \) are disjoint. The terminology " \( F \) is closed at \( {t}_{0} \) " is justified by its rephrasing in terms of closure: \( F \) is closed at \( {t}_{0} \) if and only if
\[
\operatorname{cl}\left( F\right) \cap \left( {\left\{ {t}_{0}\right\} \times X}\right) = \left\{ {t}_{0}\right\} \times F\left( {t}_{0}\right)
\]
In fact, it is easy to see that \( F \) is closed at \( {t}_{0} \) if and only if for every \( x \in X \) and nets \( {\left( {t}_{i}\right) }_{i \in I} \rightarrow {t}_{0},{\left( {x}_{i}\right) }_{i \in I} \rightarrow x \), one has \( x \in F\left( {t}_{0}\right) \) whenever \( {x}_{i} \in F\left( {t}_{i}\right) \) for all \( i \in I \) . This property implies that \( F\left( {t}_{0}\right) \) is closed in \( X \), but is more demanding than closedness of \( F\left( {t}_{0}\right) \) in general (Exercise 2). Also, one can check that \( F \) is closed at every point of \( T \) if and only if its graph is closed in \( T \times X \) . In many cases, outward continuity is more stringent than closedness.
Proposition 1.35. (a) If \( X \) is a regular (resp. Hausdorff) space, if \( F : T \rightrightarrows X \) is outward continuous at some point \( {t}_{0} \in T \), and if \( F\left( {t}_{0}\right) \) is closed (resp. compact), then \( F \) is closed at \( {t}_{0} \) .
(b) If \( F \) is closed at \( {t}_{0} \) and if every open neighborhood \( W \) of \( F\left( {t}_{0}\right) \) is such that \( X \smallsetminus W \) is compact, then \( F : T \rightrightarrows X \) is outward continuous at \( {t}_{0} \) . In particular, if for some neighborhood \( U \) of \( {t}_{0} \) the set \( F\left( U\right) \) is contained in a compact subset \( Y \) of \( X \), then \( F \) is closed at \( {t}_{0} \) if and only if \( F\left( {t}_{0}\right) \) is closed and \( F \) is outward continuous at \( {t}_{0} \) .
Proof. (a) When \( F\left( {t}_{0}\right) \) is closed and \( X \) is regular, given \( x \in X \smallsetminus F\left( {t}_{0}\right) \) there exist neighborhoods \( V \) of \( x, W \) of \( F\left( {t}_{0}\right) \) that are disjoint (take for \( V \) a closed neighborhood of \( x \) contained in \( X \smallsetminus F\left( {t}_{0}\right) \) and \( W \mathrel{\text{:=}} X \smallsetminus V \) ). If \( U \in \mathcal{N}\left( {t}_{0}\right) \) is such that \( F\left( t\right) \subset W \) for all \( t \in U \), we get \( F\left( t\right) \cap V = \varnothing \) for all \( t \in U \) . When \( X \) is just Hausdorff but \( F\left( {t}_{0}\right) \) is compact, one can also find disjoint neighborhoods \( V, W \) of \( x \) and \( F\left( {t}_{0}\right) \) respectively.
(b) Suppose \( F \) is closed at \( {t}_{0} \) and for every open neighborhood \( W \) of \( F\left( {t}_{0}\right), X \smallsetminus W \) is compact. If \( F \) is not outward continuous at \( {t}_{0} \) one can find an open neighborhood \( W \) of \( F\left( {t}_{0}\right) \) and a net \( {\left( {t}_{i}\right) }_{i \in I} \rightarrow {t}_{0} \) such that for all \( i \in I, F\left( {t}_{i}\right) \smallsetminus W \) is nonempty. Since \( X \smallsetminus W \) is compact, taking \( {x}_{i} \in F\left( {t}_{i}\right) \smallsetminus W \) there exists a subnet \( {\left( {x}_{j}\right) }_{j \in J} \) of \( {\left( {x}_{i}\right) }_{i \in I} \) that converges. Its limit is in \( X \smallsetminus W \), hence in \( X \smallsetminus F\left( {t}_{0}\right) \), a contradiction to the closedness of \( F \) at \( {t}_{0} \) . The last assertion stems from the fact that one can replace \( T \) with \( U \) and \( X \) with \( Y \) .
Corollary 1.36. If \( F : T \rightrightarrows X \) is closed at \( {t}_{0} \), then \( F \) is compactly outward continuous at \( {t}_{0} \) and \( F\left( {t}_{0}\right) \) is closed. The converse holds when \( {t}_{0} \) and the points of \( X \) have a countable base of neighborhoods.
Proof. If \( F : T \rightrightarrows X \) is closed at \( {t}_{0} \), then \( F\left( {t}_{0}\right) \) is closed, and for every compact subset \( K \) of \( X \) the multimap \( {F}_{K} : t \rightrightarrows F\left( t\right) \cap K \) is closed at \( {t}_{0} \), hence is outward continuous at \( {t}_{0} \) by Proposition 1.35 (b).
Suppose \( {t}_{0} \) and the points of \( X \) have a countable base of neighborhoods, \( F\left( {t}_{0}\right) \) is closed, and \( F \) is compactly outward continuous at \( {t}_{0} \) . Let \( \left( {U}_{n}\right) \) be a countable base of neighborhoods of \( {t}_{0} \) . If \( F \) is not closed at \( {t}_{0} \), then there exist \( x \in X \smallsetminus F\left( {t}_{0}\right) \) and a countable base of neighborhoods \( \left( {V}_{n}\right) \) of \( x \) such that \( {V}_{n} \) meets \( F\left( {U}_{n}\right) \) for all \( n \in \mathbb{N} \) . Let \( {t}_{n} \in {U}_{n} \) and \( {x}_{n} \in F\left( {t}_{n}\right) \cap {V}_{n} \) . Then \( K \mathrel{\text{:=}} \{ x\} \cup \left\{ {{x}_{n} : n \in \mathbb{N}}\right\} \) is compact. Since \( {F}_{K} \) is not closed, by Proposition 1.35 (a), it cannot be outward continuous. Thus we get a contradiction to the assumption that \( F \) is compactly outward continuous at \( {t}_{0} \) .
Examples. (a) The multimap \( D : \mathbb{R} \rightrightarrows {\mathbb{R}}^{2} \) of the preceding examples is closed at every point of \( \mathbb{R} \) but is nowhere outward continuous.
(b) The multimap \( G \) of these examples is not outward continuous at 0 .
(c) The multimap \( H \) of these examples is everywhere outward continuous.
(d) If \( U \) is a compact topological space, if \( g : T \times U \rightarrow X \) is continuous, then \( F\left( \cdot \right) \mathrel{\text{:=}} \) \( g\left( {\cdot, U}\right) \) is outward continuous.
(e) Given \( f : T \rightarrow \mathbb{R} \), its hypograph multifunction \( {H}_{f} : T \rightrightarrows \mathbb{R} \) is outward continuous at \( {t}_{0} \in T \) if and only if \( f \) is outward continuous at \( {t}_{0} \) .
Again, the easy proofs of the following properties are left as exercises.
Lemma 1.37. (a) If \( F \) is the multimap \( t \rightrightarrows \{ f\left( t\right) \} \) associated with a map \( f : T \rightarrow \) \( X, F \) is outward continuous at \( {t}_{0} \) if and only if \( f \) is continuous at \( {t}_{0} \) .
(b) \( F : T \rightrightarrows X \) is outward continuous (resp. compactly outward continuous) if and only if for every closed (resp. compact) subset \( C \) of \( X \), the set \( {F}^{-1}\left( C\right) \) is closed in \( T \) .
(c) If \( F : T \rightrightarrows X \) is outward continuous at \( {t}_{0} \), if \( Y \) is another topological space, and if \( G : X \rightrightarrows Y \) is outward continuous at every \( {x}_{0} \in F\left( {t}_{0}\right) \), then \( H \mathrel{\text{:=}} G \circ F : T \rightrightarrows Y \) is outward continuous at \( {t}_{0} \) .
(d) If \( F : S \rightrightarrows X \) and \( G : T \rightrightarrows Y \) are two multimaps that are closed at \( {s}_{0},{t}_{0} \) respectively, then their product \( H \) given by \( H\left( {s, t}\right) \mathrel{\text{:=}} F\left( s\right) \times G\left( t\right) \) is closed at \( \left( {{s}_{0},{t}_{0}}\right) \) . If \( F\left( {s}_{0}\right) \) and \( G\left( {t}_{0}\right) \) are compact, one can replace "closed" by "outward continuous" in this assertion.
(e) If \( F \) and \( G \) are two multimaps from \( T \) to \( X \) and \( Y \) respectively that are closed (resp. compactly outward continuous) at \( {t}_{0} \), then the multimap \( \left( {F, G}\right) \) given by \( \left( {F, G}\right) \left( t\right) \mathrel{\text{:=}} F\left( t\right) \times G\left( t\right) \) is closed (resp. compactly outward continuous) at \( {t}_{0} \) . If \( F\left( {t}_{0}\right) \) and \( G\left( {t}_{0}\right) \) are compact, one can replace "closed" by "outward continuous" in this assertion.
(f) If \( F \) and \( G \) are two multimaps from \( T \) to \( X \) that are outward continuous at \( {t}_{0} \) , then their union \( H \) given by \( H\left( t\right) \mathrel{\text{:=}} F\left( t\right) \cup G\left( t\right) \) is outward continuous at \( {t}_{0} \) .
The following two results are easy extensions of known properties for continuous maps. The first one can be established by an easy covering argument. For the second one, we recall that a topological space \( X \) is said to be connected if it cannot be split into two nonempty disjoint open subsets.
Proposition 1.38. If \( T \) is a compact space, if \( X \) is a Hausdorff space, and if \( F \) : \( T \rightrightarrows X \) is outward continuous with compact values, then \( F\left( T\right) \) is compact.
Proposition 1.39. If \( T \) is a connected topological space and \( F : T \rightrightarrows X \) has connected values and is either inward continuous or outward continuous, then \( F\left( T\right) \) is connected.
Let us end this subsection with the quotation of a famous and useful result.
Theorem 1. |
1096_(GTM252)Distributions and Operators | Definition 3.4 |
Definition 3.4. We say that \( \Lambda \in {\mathcal{D}}^{\prime }\left( \Omega \right) \) is of order \( N \in {\mathbb{N}}_{0} \) when the inequalities (2.15) hold for \( \Lambda \) with \( {N}_{j} \leq N \) for all \( j \) (but the constants \( {c}_{j} \) may very well depend on \( j \) ). \( \Lambda \) is said to be of infinite order if it is not of order \( N \) for any \( N \) ; otherwise it is said to be of finite order. The order of \( \Lambda \) is the smallest \( N \) that can be used, resp. \( \infty \) .
In all the examples we have given, the order is finite. Namely, \( {L}_{1,\mathrm{{loc}}}\left( \Omega \right) \) and \( \mathcal{M}\left( \Omega \right) \) define distributions of order 0 (cf. (3.3), (3.5) and (3.12)), whereas \( {\Lambda }_{\alpha } \) and \( {\Lambda }_{f,\alpha } \) in (3.13) and (3.15) are of order \( \left| \alpha \right| \). To see an example of a distribution of infinite order we consider the distribution \( \Lambda \in {\mathcal{D}}^{\prime }\left( \mathbb{R}\right) \) defined by
\[
\langle \Lambda ,\varphi \rangle = \mathop{\sum }\limits_{{N = 1}}^{\infty }\left\langle {{1}_{\left\lbrack N,2N\right\rbrack },{\varphi }^{\left( N\right) }\left( x\right) }\right\rangle
\]
(3.18)
cf. (A.27). (As soon as we have defined the notion of support of a distribution it will be clear that when a distribution has compact support in \( \Omega \), its order is finite, cf. Theorem 3.12 below.)
The theory of distributions was introduced systematically by L. Schwartz; his monograph [S50] is still a principal reference in the literature on distributions.
## 3.2 Rules of calculus for distributions
When \( T \) is a continuous linear operator in \( {C}_{0}^{\infty }\left( \Omega \right) \), and \( \Lambda \in {\mathcal{D}}^{\prime }\left( \Omega \right) \), then the composition defines another element \( {\Lambda T} \in {\mathcal{D}}^{\prime }\left( \Omega \right) \), namely, the functional
\[
\left( {\Lambda T}\right) \left( \varphi \right) = \langle \Lambda ,{T\varphi }\rangle
\]
The map \( {T}^{ \times } : \Lambda \mapsto {\Lambda T} \) in \( {\mathcal{D}}^{\prime }\left( \Omega \right) \) is simply the adjoint map of the map \( \varphi \mapsto {T\varphi } \) . (We write \( {T}^{ \times } \) to avoid conflict with the notation for taking adjoints of operators in complex Hilbert spaces, where a certain conjugate linearity has to be taken into account. The notation \( {T}^{\prime } \) may also be used, but the prime could be misunderstood as differentiation.)
As shown in Theorem 2.6, the following simple maps are continuous in \( {C}_{0}^{\infty }\left( \Omega \right) \) :
\[
{M}_{f} : \varphi \mapsto {f\varphi },\;\text{ when }f \in {C}^{\infty }\left( \Omega \right) ,
\]
\[
{D}^{\alpha } : \varphi \mapsto {D}^{\alpha }\varphi
\]
They induce two maps in \( {\mathcal{D}}^{\prime }\left( \Omega \right) \) that we shall temporarily denote \( {M}_{f}^{ \times } \) and \( {\left( {D}^{\alpha }\right) }^{ \times } \) :
\[
\left\langle {{M}_{f}^{ \times }\Lambda ,\varphi }\right\rangle = \langle \Lambda ,{f\varphi }\rangle
\]
\[
\left\langle {{\left( {D}^{\alpha }\right) }^{ \times }\Lambda ,\varphi }\right\rangle = \left\langle {\Lambda ,{D}^{\alpha }\varphi }\right\rangle
\]
for \( \Lambda \in {\mathcal{D}}^{\prime }\left( \Omega \right) \) and \( \varphi \in {C}_{0}^{\infty }\left( \Omega \right) \) .
How do these new maps look when \( \Lambda \) itself is a function? If \( \Lambda = v \in \) \( {L}_{1,\text{ loc }}\left( \Omega \right) \), then
\[
\left\langle {{M}_{f}^{ \times }v,\varphi }\right\rangle = \langle v,{f\varphi }\rangle = \int v\left( x\right) f\left( x\right) \varphi \left( x\right) {dx} = \langle {fv},\varphi \rangle
\]
hence
\[
{M}_{f}^{ \times }v = {fv},\;\text{ when }v \in {L}_{1,\text{ loc }}\left( \Omega \right) .
\]
(3.19)
When \( v \in {C}^{\infty }\left( \Omega \right) \) ,
\[
\left\langle {{\left( {D}^{\alpha }\right) }^{ \times }v,\varphi }\right\rangle = \left\langle {v,{D}^{\alpha }\varphi }\right\rangle = \int v\left( x\right) \left( {{D}^{\alpha }\varphi }\right) \left( x\right) {dx}
\]
\[
= {\left( -1\right) }^{\left| \alpha \right| }\int \left( {{D}^{\alpha }v}\right) \left( x\right) \varphi \left( x\right) {dx} = \left\langle {{\left( -1\right) }^{\left| \alpha \right| }{D}^{\alpha }v,\varphi }\right\rangle ,
\]
so that
\[
{\left( -1\right) }^{\left| \alpha \right| }{\left( {D}^{\alpha }\right) }^{ \times }v = {D}^{\alpha }v,\;\text{ when }v \in {C}^{\infty }\left( \Omega \right) .
\]
(3.20)
These formulas motivate the following definition.
Definition 3.5. \( {1}^{ \circ } \) When \( f \in {C}^{\infty }\left( \Omega \right) \), we define the multiplication operator \( {M}_{f} \) in \( {\mathcal{D}}^{\prime }\left( \Omega \right) \) by
\[
\left\langle {{M}_{f}\Lambda ,\varphi }\right\rangle = \langle \Lambda ,{f\varphi }\rangle \;\text{ for }\varphi \in {C}_{0}^{\infty }\left( \Omega \right) .
\]
Instead of \( {M}_{f} \) we often just write \( f \) .
\( {2}^{ \circ } \) For any \( \alpha \in {\mathbb{N}}_{0}^{n} \), the differentiation operator \( {D}^{\alpha } \) in \( {\mathcal{D}}^{\prime }\left( \Omega \right) \) is defined by
\[
\left\langle {{D}^{\alpha }\Lambda ,\varphi }\right\rangle = \left\langle {\Lambda ,{\left( -1\right) }^{\left| \alpha \right| }{D}^{\alpha }\varphi }\right\rangle \text{ for }\varphi \in {C}_{0}^{\infty }\left( \Omega \right) .
\]
Similarly, we define the operator \( {\partial }^{\alpha } \) in \( {\mathcal{D}}^{\prime }\left( \Omega \right) \) by
\[
\left\langle {{\partial }^{\alpha }\Lambda ,\varphi }\right\rangle = \left\langle {\Lambda ,{\left( -1\right) }^{\left| \alpha \right| }{\partial }^{\alpha }\varphi }\right\rangle \text{ for }\varphi \in {C}_{0}^{\infty }\left( \Omega \right) .
\]
In particular, these extensions still satisfy: \( {D}^{\alpha }\Lambda = {\left( -i\right) }^{\left| \alpha \right| }{\partial }^{\alpha }\Lambda \) .
The definition really just says that we denote the adjoint of \( {M}_{f} : \mathcal{D}\left( \Omega \right) \rightarrow \) \( \mathcal{D}\left( \Omega \right) \) by \( {M}_{f} \) again (usually abbreviated to \( f \) ), and that we denote the adjoint of \( {\left( -1\right) }^{\left| \alpha \right| }{D}^{\alpha } : \mathcal{D}\left( \Omega \right) \rightarrow \mathcal{D}\left( \Omega \right) \) by \( {D}^{\alpha } \) ; the motivation for this "abuse of notation" lies in the consistency with classical formulas shown in (3.19) and (3.20). As a matter of fact, the abuse is not very grave, since one can show that \( {C}^{\infty }\left( \Omega \right) \) is a dense subset of \( {\mathcal{D}}^{\prime }\left( \Omega \right) \), when the latter is provided with the weak* topology, cf. Theorem 3.18 below, so that the extension of the operators \( f \) and \( {D}^{\alpha } \) from elements \( v \in {C}^{\infty }\left( \Omega \right) \) to \( \Lambda \in {\mathcal{D}}^{\prime }\left( \Omega \right) \) is uniquely determined.
Observe also that when \( v \in {C}^{k}\left( \Omega \right) \), the distribution derivatives \( {D}^{\alpha }v \) coincide with the usual partial derivatives for \( \left| \alpha \right| \leq k \), because of the usual formulas for integration by parts. We may write \( {\left( -1\right) }^{\left| \alpha \right| }{D}^{\alpha } \) as \( {\left( -D\right) }^{\alpha } \) .
The exciting aspect of Definition 3.5 is that we can now define derivatives of distributions - hence, in particular, derivatives of functions in \( {L}_{1,\text{ loc }} \) which were not differentiable in the original sense.
Note that \( {\Lambda }_{\alpha } \) and \( {\Lambda }_{f,\alpha } \) defined in (3.13) and (3.15) satisfy
\[
\left\langle {{\Lambda }_{\alpha },\varphi }\right\rangle = \left\langle {{\left( -D\right) }^{\alpha }{\delta }_{{x}_{0}},\varphi }\right\rangle ,\;\left\langle {{\Lambda }_{f,\alpha },\varphi }\right\rangle = \left\langle {{\left( -D\right) }^{\alpha }f,\varphi }\right\rangle
\]
(3.21)
for \( \varphi \in {C}_{0}^{\infty }\left( \Omega \right) \) . Let us consider an important example (already mentioned in Chapter 1):
By \( H\left( x\right) \) we denote the function on \( \mathbb{R} \) defined by
\[
H\left( x\right) = {1}_{\{ x > 0\} }
\]
(3.22)
(cf. (A.27)); it is called the Heaviside function. Since \( H \in {L}_{1,\operatorname{loc}}\left( \mathbb{R}\right) \), we have that \( H \in {\mathcal{D}}^{\prime }\left( \mathbb{R}\right) \) . The derivative in \( {\mathcal{D}}^{\prime }\left( \mathbb{R}\right) \) is found as follows:
\[
\left\langle {\frac{d}{dx}H,\varphi }\right\rangle = \left\langle {H, - \frac{d}{dx}\varphi }\right\rangle = - {\int }_{0}^{\infty }{\varphi }^{\prime }\left( x\right) {dx}
\]
\[
= \varphi \left( 0\right) = \left\langle {{\delta }_{0},\varphi }\right\rangle \;\text{ for }\varphi \in {C}_{0}^{\infty }\left( \mathbb{R}\right) .
\]
We see that
\[
\frac{d}{dx}H = {\delta }_{0}
\]
(3.23)
the delta-measure at \( 0 \)! \( H \) and \( \frac{d}{dx}H \) are distributions of order 0, while the higher derivatives \( \frac{{d}^{k}}{d{x}^{k}}H \) are of order \( k - 1 \). As shown already in Example 1.1, there is no \( {L}_{1,\text{ loc }}\left( \mathbb{R}\right) \) -function that identifies with \( {\delta }_{0} \).
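The identity (3.23) is also easy to test numerically. The following Python sketch (an illustration added here; the particular bump function is my own choice) evaluates \( \left\langle {\frac{d}{dx}H,\varphi }\right\rangle = - {\int }_{0}^{\infty }{\varphi }^{\prime }\left( x\right) {dx} \) for \( \varphi \left( x\right) = {e}^{-1/\left( {1 - {x}^{2}}\right) } \) on \( \left( {-1,1}\right) \) (and \( \varphi = 0 \) outside) and compares the result with \( \varphi \left( 0\right) \).

```python
import numpy as np

# Numerical sketch (illustration only): pair dH/dx with the bump function
# phi(x) = exp(-1/(1 - x^2)) on (-1, 1), phi = 0 outside.  By definition,
#   < dH/dx, phi > = < H, -phi' > = -int_0^1 phi'(x) dx = phi(0) - phi(1) = phi(0).
x = np.linspace(0.0, 1.0, 20001)
s = 1.0 - x**2
phi = np.zeros_like(x)
dphi = np.zeros_like(x)
m = s > 1e-12
phi[m] = np.exp(-1.0 / s[m])
dphi[m] = phi[m] * (-2.0 * x[m]) / s[m] ** 2      # phi'(x), computed by hand

pairing = -np.sum(dphi) * (x[1] - x[0])            # Riemann sum for -int_0^1 phi'
print(pairing, np.exp(-1.0))                       # both approximately 0.36788
```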
There is a similar calculation in higher dimensions, based on the Gauss formula (A.18). Let \( \Omega \) be an open subset of \( {\mathbb{R}}^{n} \) with \( {C}^{1} \) -boundary. The function \( {1}_{\Omega } \) (cf. (A.27)) has distribution derivatives described as follows: For \( \varphi \in {C}_{0}^{\infty }\left( {\mathbb{R}}^{n}\right) \)
\[
\left\langle {{\partial }_{j}{1}_{\Omega },\varphi }\right\rangle \equiv - {\int }_{\Omega }{\partial }_{j}{\varphi dx} = {\int }_{\partial \Omega }{\nu }_{j}\left( x\right) \varphi \left( x\right) {d\sigma }.
\]
(3.24)
Since
\[
\left| {{\int }_{\partial \Omega }{\nu }_{j}\left( x\right) \varphi \left( x\right) {d\sigma }}\right| \leq {\int }_{\partial \Omega \cap K}{ |
1116_(GTM270)Fundamentals of Algebraic Topology | Definition 2.2.18 |
Definition 2.2.18. The degree of the cover is the cardinality of \( {p}^{-1}\left( {x}_{0}\right) \) .
\( \diamond \)
Theorem 2.2.19. Under Hypotheses 2.2.17:
(i) To each subgroup \( H \) of \( {\pi }_{1}\left( {X,{x}_{0}}\right) \) there corresponds a covering projection \( p \) : \( Y \rightarrow X \) and a point \( {y}_{0} \in Y \) with \( p\left( {y}_{0}\right) = {x}_{0} \) such that
\[
{p}_{ * }\left( {{\pi }_{1}\left( {Y,{y}_{0}}\right) }\right) = H \subseteq {\pi }_{1}\left( {X,{x}_{0}}\right)
\]
and \( \left( {Y,{y}_{0}}\right) \) is unique up to equivalence.
(ii) The points in \( {p}^{-1}\left( {x}_{0}\right) \) are in 1-1 correspondence with the right cosets of \( H \) in \( {\pi }_{1}\left( {X,{x}_{0}}\right) \) . Thus the degree of the cover is the index of \( H \) in \( {\pi }_{1}\left( {X,{x}_{0}}\right) \) .
(iii) \( H \) is normal in \( {\pi }_{1}\left( {X,{x}_{0}}\right) \) if and only if \( Y \) is a regular cover. In this case the group of covering translations is isomorphic to the quotient group \( {\pi }_{1}\left( {X,{x}_{0}}\right) /H \) .
Remark 2.2.20. By Corollary 2.2.10, this is a 1-1 correspondence.
\( \diamond \)
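As a concrete illustration of part (ii) of Theorem 2.2.19 (an example added here, not taken from the text), consider the \( n \) -fold cover \( p\left( z\right) = {z}^{n} \) of \( {S}^{1} \) : the corresponding subgroup is \( n\mathbb{Z} \subseteq \mathbb{Z} = {\pi }_{1}\left( {S}^{1}\right) \), of index \( n \), and lifting the generating loop permutes the \( n \) points of \( {p}^{-1}\left( 1\right) \) cyclically, just as \( \mathbb{Z} \) permutes the \( n \) right cosets of \( n\mathbb{Z} \). The short Python sketch below checks the lifting statement numerically.

```python
import cmath

# Sketch (my own example): for the n-fold cover p(z) = z^n of S^1, the loop
# t -> exp(2*pi*i*t) lifts, starting at a fiber point w in p^{-1}(1), to
# t -> w * exp(2*pi*i*t/n); its endpoint is the "next" n-th root of unity, so
# pi_1(S^1) acts on the fiber exactly as Z acts on the n cosets of nZ.
n = 5
fiber = [cmath.exp(2j * cmath.pi * k / n) for k in range(n)]   # p^{-1}(1)

def lift_endpoint(w):
    return w * cmath.exp(2j * cmath.pi / n)

for k, w in enumerate(fiber):
    moved_to = lift_endpoint(w)
    print(k, "->", (k + 1) % n, abs(moved_to - fiber[(k + 1) % n]) < 1e-12)
```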
Corollary 2.2.21. Under Hypotheses 2.2.17:
Every \( X \) has a simply-connected cover \( p : \widetilde{X} \rightarrow X \), unique up to equivalence. \( \widetilde{X} \) is the universal cover of \( X \), and \( X \) is the quotient of \( \widetilde{X} \) by the group of covering translations. Also, if \( Y \) is any cover of \( X \), then \( \widetilde{X} \) is a cover of \( Y \) .
Proof. This is a direct consequence of Theorem 2.2.19, and our earlier results, taking \( H \) to be the trivial subgroup of \( {\pi }_{1}\left( {X,{x}_{0}}\right) \) .
Remark 2.2.22. This shows that, in the situation where Hypotheses 2.2.17 hold, the covering projection \( p : \widetilde{X} \rightarrow X \) from the universal cover \( \widetilde{X} \) to \( X \) is exactly the quotient map under the action of the group \( {G}_{p} \) of covering translations, isomorphic to \( {\pi }_{1}\left( {X,{x}_{0}}\right) \), considered in Theorem 2.2.6.
The only difference is that we have reversed our point of view: In Theorem 2.2.6 we assumed \( {G}_{p} \) was known, and used it to find \( {\pi }_{1}\left( {X,{x}_{0}}\right) \), while in Theorem 2.2.19 we assumed \( {\pi }_{1}\left( {X,{x}_{0}}\right) \) was known, and used it to find \( {G}_{p} \) . ◇
## 2.3 van Kampen's Theorem and Applications
van Kampen's theorem allows us, under suitable circumstances, to compute the fundamental group of a space from the fundamental groups of subspaces.
Theorem 2.3.1. Let \( X = {X}_{1} \cup {X}_{2} \) and suppose that \( {X}_{1},{X}_{2} \), and \( A = {X}_{1} \cap {X}_{2} \) are all open, path connected subsets of \( X \) . Let \( {x}_{0} \in A \) . Then \( {\pi }_{1}\left( {X,{x}_{0}}\right) \) is the free product with amalgamation
\[
{\pi }_{1}\left( {X,{x}_{0}}\right) = {\pi }_{1}\left( {{X}_{1},{x}_{0}}\right) { * }_{{\pi }_{1}\left( {A,{x}_{0}}\right) }{\pi }_{1}\left( {{X}_{2},{x}_{0}}\right) .
\]
In other words, if \( {i}_{1} : A \rightarrow {X}_{1} \) and \( {i}_{2} : A \rightarrow {X}_{2} \) are the inclusions, then \( {\pi }_{1}\left( {X,{x}_{0}}\right) \) is the free product \( {\pi }_{1}\left( {{X}_{1},{x}_{0}}\right) * {\pi }_{1}\left( {{X}_{2},{x}_{0}}\right) \) modulo the relations \( {\left( {i}_{1}\right) }_{ * }\left( \alpha \right) = {\left( {i}_{2}\right) }_{ * }\left( \alpha \right) \) for every \( \alpha \in {\pi }_{1}\left( {A,{x}_{0}}\right) \) .
As important special cases we have:
Corollary 2.3.2. Under the hypotheses of van Kampen's theorem:
(i) If \( {X}_{1} \) and \( {X}_{2} \) are simply connected, then \( X \) is simply connected.
(ii) If \( A \) is simply connected, then \( {\pi }_{1}\left( {X,{x}_{0}}\right) = {\pi }_{1}\left( {{X}_{1},{x}_{0}}\right) * {\pi }_{1}\left( {{X}_{2},{x}_{0}}\right) \) .
(iii) If \( {X}_{2} \) is simply connected, then \( {\pi }_{1}\left( {X,{x}_{0}}\right) = {\pi }_{1}\left( {{X}_{1},{x}_{0}}\right) /\left\langle {{\pi }_{1}\left( {A,{x}_{0}}\right) }\right\rangle \) where \( \left\langle {{\pi }_{1}\left( {A,{x}_{0}}\right) }\right\rangle \) denotes the subgroup normally generated by \( {\pi }_{1}\left( {A,{x}_{0}}\right) \) .
Corollary 2.3.3. For \( n > 1 \), the \( n \) -sphere \( {S}^{n} \) is simply connected.
Proof. We regard \( {S}^{n} \) as the unit sphere in \( {\mathbb{R}}^{n + 1} \) . Let \( {X}_{1} = {S}^{n} - \{ \left( {0,0,\ldots ,0,1}\right) \} \) and \( {X}_{2} = {S}^{n} - \{ \left( {0,0,\ldots ,0, - 1}\right) \} \) . Then \( {X}_{1} \) and \( {X}_{2} \) are both homeomorphic to \( {\mathring{D}}^{n} \), so are path connected and simply connected, and \( {X}_{1} \cap {X}_{2} \) is path connected, as \( n > 1 \), so by Corollary 2.3.2(i) \( {S}^{n} \) is simply connected.
Example 2.3.4. (i) Regard \( {S}^{n} \) as the unit sphere in \( {\mathbb{R}}^{n + 1} \) and let \( {\mathbb{Z}}_{2} \) act on \( {S}^{n} \), where the nontrivial element \( g \) of \( {\mathbb{Z}}_{2} \) acts via the antipodal map, \( g\left( {{z}_{1},\ldots ,{z}_{n + 1}}\right) = \) \( \left( {-{z}_{1},\ldots , - {z}_{n + 1}}\right) \) . The quotient \( \mathbb{R}{P}^{n} = {S}^{n}/{\mathbb{Z}}_{2} \) is real projective \( n \) -space. Note that \( p : {S}^{0} \rightarrow \mathbb{R}{P}^{0} \) is the map from the space of two points to the space of one point, and \( p : {S}^{1} \rightarrow \mathbb{R}{P}^{1} \) may be identified with the cover in Example 2.2.3(iib) for \( n = 2 \) . But for \( n > 1 \), by Corollary 2.3.3 and Theorem 2.2.6 we see that \( {\pi }_{1}\left( {\mathbb{R}{P}^{n},{x}_{0}}\right) = {\mathbb{Z}}_{2}. \)
(ii) For \( n = {2m} - 1 \) odd, regard \( {S}^{n} \) as the unit sphere in \( {\mathbb{C}}^{m} \) . Fix a positive integer \( k \) and integers \( {j}_{1},\ldots ,{j}_{m} \) relatively prime to \( k \) . Let the group \( {\mathbb{Z}}_{k} \) act on \( {S}^{n} \) where a fixed generator \( g \) acts by \( g\left( {{z}_{1},\ldots ,{z}_{m}}\right) = \) \( \left( {\exp \left( {{2\pi i}{j}_{1}/k}\right) {z}_{1},\ldots ,\exp \left( {{2\pi i}{j}_{m}/k}\right) {z}_{m}}\right) \) . The quotient \( L = {L}^{{2m} - 1}\left( {k;{j}_{1},\ldots ,{j}_{m}}\right) \) is a lens space. For \( m = 1 \) the projection \( p : {S}^{{2m} - 1} \rightarrow L \) may be identified with the cover in Example 2.2.3(iib) with (in the notation there) \( n = k \) . But for \( m > 1 \) , by Corollary 2.3.3 and Theorem 2.2.6 we see that \( {\pi }_{1}\left( {L,{x}_{0}}\right) = {\mathbb{Z}}_{k} \) .
Example 2.3.5. Regard \( {S}^{1} \) as the unit circle in \( \mathbb{C} \) . Let \( n \) be a positive integer. For \( k = 1,\ldots, n \) let \( {\left( {S}^{1}\right) }_{k} \) be a copy of \( {S}^{1} \) . The \( n \) -leafed rose is the space \( {R}_{n} \) obtained from the disjoint union of \( {\left( {S}^{1}\right) }_{1},\ldots ,{\left( {S}^{1}\right) }_{n} \) by identifying the point 1 in each copy of \( {S}^{1} \) .
Let \( {r}_{0} \in {R}_{n} \) be the common identification point. Let \( {\left( {S}^{1}\right) }_{k} \) be coordinated by \( {\left( z\right) }_{k} \) , and let \( {i}_{k} : {\left( {S}^{1}\right) }_{k} \rightarrow {R}_{n} \) be the inclusion.
Corollary 2.3.6. The fundamental group \( {\pi }_{1}\left( {{R}_{n},{r}_{0}}\right) \) is the free group on the \( n \) elements \( {\alpha }_{k} = {\left( {i}_{k}\right) }_{ * }\left( {g}_{k}\right) \), where \( {g}_{k} \) is a generator of \( {\pi }_{1}\left( {{\left( {S}^{1}\right) }_{k},{\left( 1\right) }_{k}}\right) \), for \( k = 1,\ldots, n \) .
Proof. We proceed by induction on \( n \) .
For \( n = 1 \) this is Example 2.2.7.
Now suppose that \( n \geq 1 \) and that the theorem is true for \( n \) . Write \( {R}_{n + 1} = {X}_{1} \cup {X}_{2} \) where:
\[
{X}_{1} = \mathop{\bigcup }\limits_{{k = 1}}^{n}{\left( {S}^{1}\right) }_{k} \cup \left\{ {{\left( z\right) }_{n + 1} \in {\left( {S}^{1}\right) }_{n + 1} \mid \operatorname{Re}\left( z\right) > 0}\right\} ,
\]
\[
{X}_{2} = \mathop{\bigcup }\limits_{{k = 1}}^{n}\left\{ {{\left( z\right) }_{k} \in {\left( {S}^{1}\right) }_{k} \mid \operatorname{Re}\left( z\right) > 0}\right\} \cup {\left( {S}^{1}\right) }_{n + 1},
\]
Then \( {X}_{1} \cap {X}_{2} \) is contractible. (It has the point 1 as a strong deformation retract.) Also, \( {X}_{1} \) has \( {R}_{n} \) as a strong deformation retract, and \( {X}_{2} \) has \( {\left( {S}^{1}\right) }_{n + 1} \) as a strong deformation retract. By the induction hypothesis \( {\pi }_{1}\left( {{R}_{n},{r}_{0}}\right) \) is the free group with generators \( {\alpha }_{1},\ldots ,{\alpha }_{n} \), and by the \( n = 1 \) case \( {\pi }_{1}\left( {{\left( {S}^{1}\right) }_{n + 1},1}\right) \) is the free group on \( {\alpha }_{n + 1} \), so, by Corollary 2.3.2(ii), \( {\pi }_{1}\left( {{R}_{n + 1},{r}_{0}}\right) \) is the free group with generators \( {\alpha }_{1},\ldots ,{\alpha }_{n + 1} \), so the theorem is true for \( n + 1 \) .
Thus by induction we are done.
Here is a picture for the case \( n = 3 \), showing \( {R}_{3} \), \( {X}_{1} \), \( {X}_{2} \), and \( {X}_{1} \cap {X}_{2} \):

![21ef530b-1e09-406a-b041-cf4539af5c14_26_0.jpg](images/21ef530b-1e09-406a-b041-cf4539af5c14_26_0.jpg) ![21ef530b-1e09-406a-b041-cf4539af5c14_26_1.jpg](images/21ef530b-1e09-406a-b041-cf4539af5c14_26_1.jpg) ![21ef530b-1e09-406a-b041-cf4539af5c14_26_2.jpg](images/21ef530b-1e09-406a-b041-cf4539af5c14_26_2.jpg)
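Corollary 2.3.6 can also be explored with a computer algebra system. The sketch below (an illustration added here, assuming SymPy's free-group implementation is available) builds the free group on three generators, which is \( {\pi }_{1}\left( {{R}_{3},{r}_{0}}\right) \) : words reduce only by cancelling a generator against its inverse, and the commutator of two generators is a nontrivial reduced word, so the group is nonabelian.

```python
from sympy.combinatorics.free_groups import free_group

# Sketch (assuming sympy): pi_1(R_3, r_0) is free on three generators a, b, c
# by Corollary 2.3.6; free reduction is the only simplification of words.
F, a, b, c = free_group("a, b, c")

w = a * b * a**-1 * b**-1        # the commutator [a, b]
print(w)                         # stays a nontrivial reduced word
print(w == F.identity)           # False: the group is nonabelian

print(a * c * c**-1 * b)         # free reduction gives a*b
```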
## 2.4 Applications to Free Groups
We now show how to use the topological methods we have developed so far to easily derive purely algebraic results about subgroups of free groups.
Definition 2.4.1. A 1-complex is an identification space \( C = \left( {V, E}\right) / \sim \) where \( V = \) \( \left\{ {v}_{i}\right\} \) is a collection of points, the vertices of \( C \), and \( E = \left\{ {I}_{j}\right\} \) is a collection of intervals, \( I = \left\lbrack {0,1}\right\rbrack \), the edges of \( C \), with \( 0 \in {E}_{j} |
1124_(GTM30)Lectures in Abstract Algebra I Basic Concepts | Definition 3 |
Definition 3. A lattice is called modular (Dedekind) if it satisfies the condition \( {L}_{5} \) .
The importance of these lattices for the applications to other branches of algebra stems from the following
Theorem 2. The lattice of invariant subgroups of any group is modular.
Proof. Let \( \mathfrak{G} \) be the given group and let \( {\mathfrak{H}}_{1},{\mathfrak{H}}_{2},{\mathfrak{H}}_{3} \) be invariant subgroups such that \( {\mathfrak{H}}_{1} \geq {\mathfrak{H}}_{2}\left( {{\mathfrak{H}}_{1} \supseteq {\mathfrak{H}}_{2}}\right) \) . Consider the intersection \( {\mathfrak{H}}_{1} \cap \left( {{\mathfrak{H}}_{2} \cup {\mathfrak{H}}_{3}}\right) \) where \( {\mathfrak{H}}_{2} \cup {\mathfrak{H}}_{3} \) now denotes the l.u.b. of \( {\mathfrak{H}}_{2} \) and \( {\mathfrak{H}}_{3} \) in the lattice of subgroups. Thus \( {\mathfrak{H}}_{2} \cup {\mathfrak{H}}_{3} \) is the subgroup generated by \( {\mathfrak{H}}_{2} \) and \( {\mathfrak{H}}_{3} \) . Since the \( {\mathfrak{H}}_{i} \) are invariant, we know that \( {\mathfrak{H}}_{2} \cup {\mathfrak{H}}_{3} = {\mathfrak{H}}_{2}{\mathfrak{H}}_{3} = {\mathfrak{H}}_{3}{\mathfrak{H}}_{2} \) . Hence, if \( {a\varepsilon }{\mathfrak{H}}_{1} \cap \) \( \left( {{\mathfrak{H}}_{2} \cup {\mathfrak{H}}_{3}}\right), a = {h}_{1}\varepsilon {\mathfrak{H}}_{1} \) and \( a = {h}_{2}{h}_{3} \) where \( {h}_{2}\varepsilon {\mathfrak{H}}_{2} \) and \( {h}_{3}\varepsilon {\mathfrak{H}}_{3} \) . From \( {h}_{1} = {h}_{2}{h}_{3} \) we obtain \( {h}_{2}{}^{-1}{h}_{1} = {h}_{3} \) . Since \( {\mathfrak{H}}_{1} \geq {\mathfrak{H}}_{2} \) the left-hand side of this equation represents an element of \( {\mathfrak{H}}_{1} \) . Hence \( {h}_{3}\varepsilon {\mathfrak{H}}_{1} \) and so \( {h}_{3}\varepsilon {\mathfrak{H}}_{1} \cap {\mathfrak{H}}_{3} \) . We have therefore proved the essential inequality
\[
{\mathfrak{H}}_{1} \cap \left( {{\mathfrak{H}}_{2} \cup {\mathfrak{H}}_{3}}\right) \leq {\mathfrak{H}}_{2} \cup \left( {{\mathfrak{H}}_{1} \cap {\mathfrak{H}}_{3}}\right)
\]
Previously we had noted that the reverse inequality is a general lattice theoretic property. Hence
\[
{\mathfrak{H}}_{1} \cap \left( {{\mathfrak{H}}_{2} \cup {\mathfrak{H}}_{3}}\right) = {\mathfrak{H}}_{2} \cup \left( {{\mathfrak{H}}_{1} \cap {\mathfrak{H}}_{3}}\right)
\]
and the theorem is proved.
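Theorem 2 is easy to test by brute force in a small case. The Python sketch below (a check added here, not from the text) takes \( \mathbb{Z}/{12} \), where every subgroup is invariant, and verifies \( {\mathrm{L}}_{5} \) for all triples of subgroups, with the meet taken to be the intersection and the join the subgroup generated by the union.

```python
# Brute-force sketch (my own check): in Z/12 every subgroup is invariant, so
# by Theorem 2 the subgroup lattice satisfies L5:
#     H1 >= H2  implies  H1 ^ (H2 v H3) = H2 v (H1 ^ H3).
n = 12
subgroups = [frozenset((d * k) % n for k in range(n)) for d in (1, 2, 3, 4, 6, 12)]

def join(H, K):
    S = set(H) | set(K)
    while True:                               # close up under addition mod n
        T = {(a + b) % n for a in S for b in S}
        if T <= S:
            return frozenset(S)
        S |= T

for H1 in subgroups:
    for H2 in subgroups:
        if not H2 <= H1:                      # L5 only assumes H1 >= H2
            continue
        for H3 in subgroups:
            lhs = H1 & join(H2, H3)
            rhs = join(H2, H1 & H3)
            assert lhs == rhs
print("L5 holds for all subgroups of Z/12")
```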
It is clear that any sublattice of a modular lattice is modular. Hence the lattice of invariant \( M \) -subgroups of any \( M \) -group is modular. Hence, also the lattice of submodules of any module and the lattices of ideals (left, right, two-sided) of any ring are modular. On the other hand, the lattice of all subgroups of a group is generally not modular. This fact makes it somewhat unnatural to try to subsume all of group theory under the theory of lattices.*
We note that the principle of duality holds in modular lattices; for the dual of \( {\mathrm{L}}_{5} \) reads: if \( a \leq b \), then \( a \cup \left( {b \cap c}\right) = b \cap \left( {a \cup c}\right) \) , and this clearly means the same thing as \( {\mathrm{L}}_{5} \) . An alternative useful definition of a modular lattice can be extracted from the following
Theorem 3. A lattice \( L \) is modular if and only if \( a \geq b \) and \( a \cup c = b \cup c, a \cap c = b \cap c \) for any \( c \) imply that \( a = b \) .
Proof. Let \( L \) be modular and let \( a, b, c \) be elements of \( L \) such that \( a \geq b \) and \( a \cup c = b \cup c, a \cap c = b \cap c \) . Then
\[
a = a \cap \left( {a \cup c}\right) = a \cap \left( {b \cup c}\right) = b \cup \left( {a \cap c}\right)
\]
\[
= b \cup \left( {b \cap c}\right) = b\text{.}
\]
Conversely suppose that \( L \) is any lattice that satisfies the condition of the theorem. Let \( a \geq b \) . Then we know that \( a \cap \) \( \left( {b \cup c}\right) \geq b \cup \left( {a \cap c}\right) \) . Also
\[
\left( {a \cap \left( {b \cup c}\right) }\right) \cap c = a \cap \left( {\left( {b \cup c}\right) \cap c}\right) = a \cap c
\]
and
\[
a \cap c = \left( {a \cap c}\right) \cap c \leq \left( {b \cup \left( {a \cap c}\right) }\right) \cap c \leq a \cap c
\]
* See the remarks on the Jordan-Hölder theorem on p. 200.
so that
\[
\left( {b \cup \left( {a \cap c}\right) }\right) \cap c = a \cap c.
\]
By duality we have
\[
\left( {a \cap \left( {b \cup c}\right) }\right) \cup c = b \cup c
\]
\[
\left( {b \cup \left( {a \cap c}\right) }\right) \cup c = b \cup c.
\]
Hence,
\[
a \cap \left( {b \cup c}\right) = b \cup \left( {a \cap c}\right)
\]
and \( L \) is modular.
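For contrast, here is a Python sketch (a non-example added here, not from the text) of the five-element lattice with \( 0 < b < a < 1 \) and a fifth element \( c \) comparable only to \( 0 \) and \( 1 \) . It violates \( {\mathrm{L}}_{5} \), and, in agreement with Theorem 3, it also violates the cancellation condition: \( a \geq b \), \( a \cup c = b \cup c \), \( a \cap c = b \cap c \), yet \( a \neq b \) .

```python
# Sketch (my own non-example): the "pentagon" lattice 0 < b < a < 1 with c
# comparable only to 0 and 1; it is not modular and it fails the condition of
# Theorem 3.
elems = ["0", "b", "a", "c", "1"]
above = {                                     # above[x] = {y : x <= y}
    "0": {"0", "b", "a", "c", "1"},
    "b": {"b", "a", "1"},
    "a": {"a", "1"},
    "c": {"c", "1"},
    "1": {"1"},
}

def leq(x, y):
    return y in above[x]

def join(x, y):
    ubs = [z for z in elems if leq(x, z) and leq(y, z)]
    return next(z for z in ubs if all(leq(z, w) for w in ubs))   # least upper bound

def meet(x, y):
    lbs = [z for z in elems if leq(z, x) and leq(z, y)]
    return next(z for z in lbs if all(leq(w, z) for w in lbs))   # greatest lower bound

a, b, c = "a", "b", "c"
print(meet(a, join(b, c)), join(b, meet(a, c)))                    # 'a' vs 'b': L5 fails
print(join(a, c) == join(b, c), meet(a, c) == meet(b, c), a == b)  # True True False
```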
We establish next an analogue for modular lattices of the second isomorphism theorem for groups, namely,
Theorem 4. If a and \( b \) are any two elements of a modular lattice, then the intervals \( I\left\lbrack {a \cup b, a}\right\rbrack \) and \( I\left\lbrack {b, a \cap b}\right\rbrack \) are isomorphic.
Proof. Let \( x \) be in the interval \( I\left\lbrack {a \cup b, a}\right\rbrack \), so that \( a \cup b \geq \) \( x \geq a \) . Then \( b \geq x \cap b \geq a \cap b \) and \( x \cap b \) is in the interval \( I\lbrack b \) , \( a \cap b\rbrack \) . Similarly, if \( y \) is in \( I\left\lbrack {b, a \cap b}\right\rbrack \), then \( y \cup a \) is in \( I\left\lbrack {a \cup b, a}\right\rbrack \) . We therefore have a mapping \( x \rightarrow x \cap b \) of \( I\left\lbrack {a \cup b, a}\right\rbrack \) into \( I\left\lbrack {b, a \cap b}\right\rbrack \) and a mapping \( y \rightarrow y \cup a \) of \( I\left\lbrack {b, a \cap b}\right\rbrack \) into \( I\left\lbrack {a \cup b, a}\right\rbrack \) . We shall now show that these are inverses of each other so that either one defines a 1-1 correspondence of one of the intervals onto the other. Let \( {x\varepsilon I}\left\lbrack {a \cup b, a}\right\rbrack \) . Then since \( x \geq a \) ,
\[
\left( {x \cap b}\right) \cup a = x \cap \left( {a \cup b}\right) \text{.}
\]
Since \( x \leq a \cup b \), this gives \( \left( {x \cap b}\right) \cup a = x \) . Dually we can prove that if \( {y\varepsilon I}\left\lbrack {b, a \cap b}\right\rbrack \), then \( \left( {y \cup a}\right) \cap b = y \) . This proves our assertion. Since our mappings are evidently order preserving they are lattice isomorphisms.
This theorem leads us to introduce a notion of equivalence for intervals that is stronger than isomorphism. First we define \( I\left\lbrack {u, v}\right\rbrack \) and \( I\left\lbrack {w, t}\right\rbrack \) to be transposes (similar) if there exist elements \( a, b \) in \( L \) such that one of the pairs can be represented as \( I\left\lbrack {a \cup b, a}\right\rbrack \) while the other has the form \( I\left\lbrack {b, a \cap b}\right\rbrack \) . The intervals \( I\left\lbrack {u, v}\right\rbrack \) and \( I\left\lbrack {w, t}\right\rbrack \) are called projective if there exists a finite sequence
\[
I\left\lbrack {u, v}\right\rbrack = I\left\lbrack {{u}_{1},{v}_{1}}\right\rbrack, I\left\lbrack {{u}_{2},{v}_{2}}\right\rbrack ,\cdots, I\left\lbrack {{u}_{n},{v}_{n}}\right\rbrack = I\left\lbrack {w, t}\right\rbrack
\]
beginning with \( I\left\lbrack {u, v}\right\rbrack \) and ending with \( I\left\lbrack {w, t}\right\rbrack \) such that consecutive pairs are transposes. It is immediate that the relation that we have defined is an equivalence. Also by Theorem 4 projective intervals are isomorphic.
We observe now that in the lattice of invariant \( M \) -subgroups of any \( M \) -group \( \mathfrak{G} \) projectivity of a pair of intervals \( I\left\lbrack {\mathfrak{H},\mathfrak{K}}\right\rbrack, I\left\lbrack {\mathfrak{M},\mathfrak{N}}\right\rbrack \) implies \( M \) -isomorphism of the factor groups \( \mathfrak{H}/\mathfrak{K},\mathfrak{M}/\mathfrak{N} \) . It suffices to consider a pair of transposed intervals, say, \( I\left\lbrack {{\mathfrak{H}}_{1} \cup {\mathfrak{H}}_{2}}\right. \) , \( \left. {\mathfrak{H}}_{1}\right\rbrack \) and \( I\left\lbrack {{\mathfrak{H}}_{2},{\mathfrak{H}}_{1} \cap {\mathfrak{H}}_{2}}\right\rbrack \) . For these, the isomorphism of \( \left( {{\mathfrak{H}}_{1} \cup }\right. \) \( \left. {\mathfrak{H}}_{2}\right) /{\mathfrak{H}}_{1} \) and \( {\mathfrak{H}}_{2}/\left( {{\mathfrak{H}}_{1} \cap {\mathfrak{H}}_{2}}\right) \) follows directly from the second isomorphism theorem for groups. This remark will enable us to translate some of the lattice theoretic results to results on group isomorphisms.
## EXERCISES
1. Show that, if a lattice is not distributive, then it has a sublattice of order 5 whose diagram is either the first or the second on p. 188. Show also that a non-modular lattice contains a sublattice whose diagram is the first on p. 188.
2. Show that the lattice of subgroups of \( {A}_{4} \) is not modular.
3. Prove that, if \( \mathfrak{G} \) is a group that is generated by two elements \( a \) and \( b \) such that \( {a}^{{p}^{m}} = 1,{b}^{{p}^{r}} = 1,{b}^{-1}{ab} = {a}^{n} \) where \( {n}^{{p}^{r}} \equiv 1\left( {\;\operatorname{mod}\;{p}^{m}}\right) \), then any two subgroups of \( \mathfrak{G} \) commute. Use this to show that the lattice of subgroups of \( \mathfrak{G} \) is modular.
4. Show that if \( a \) covers \( a \cap b \) in a modular lattice \( L \) then \( a \cup b \) covers \( b \) . A lattice that has this property is called semi-modular. Verify that the lattice whose diagram is
![9c7d47d5-24bb-4360-bb03-9c6a5458d669_208_0.jpg](images/9c7d47d5-24bb-4360-bb03-9c6a5458d669_208_0.jpg)
is semi-modular but not modular.
4. Schreier’s theorem. The chain conditions. Let \( a \) and \( b \) be two elements of a modular lattice satisfying \( a \geq b \) . We consider now the finite descending chains
(2)
\[
a = {a}_{1} \geq {a}_{2} \geq {a}_{3} \geq \cdots \geq {a}_{n + 1} = b
\]
connecting \( a \) and \( b \) . One such chain is called a refinement of a second if its terms include all the terms of the other chain. Two chains are said to be equivalent if it is possible to set up a 1-1 correspondence between the intervals \( I\left\ |
1282_[张恭庆] Methods in Nonlinear Analysis | Definition 5.5.22 |
Definition 5.5.22 Let \( S \) be a compact invariant set for \( \varphi \) ; a subset \( A \subset S \) is called an attractor in \( S \), if there exists a neighborhood \( U \) of \( A \) such that \( \omega \left( {U \cap S}\right) = A \) . The dual repeller of \( A \) in \( S \) is defined by
\[
{A}^{ * } = \{ x \in S \mid \omega \left( x\right) \cap A = \varnothing \} .
\]
The pair \( \left( {A,{A}^{ * }}\right) \) is called an attractor-repeller pair.
The set
\[
C\left( {{A}^{ * }, A, S}\right) = \left\{ {x \in S \mid \omega \left( x\right) \subset A,{\omega }^{ * }\left( x\right) \subset {A}^{ * }}\right\}
\]
is called the set of connecting orbits from \( {A}^{ * } \) to \( A \) in \( S \) .
The following properties hold.
(1) \( A \) and \( {A}^{ * } \) are disjoint compact invariant sets.
Proof. In the following, we use the notation \( U \) as in Definition 5.5.22. In fact, \( A = \omega \left( {U \cap S}\right) \) is invariant. Since \( \omega \left( x\right) \) and \( A \) are invariant, so is \( {A}^{ * } \) . If \( \exists x \in A \cap {A}^{ * } \), then \( \omega \left( x\right) \subset A \) . But by definition of \( {A}^{ * },\omega \left( x\right) \cap A = \varnothing \) . Since \( S \) is compact, \( \omega \left( x\right) \neq \varnothing \) . This is a contradiction. Therefore \( A \cap {A}^{ * } = \varnothing \) .
By definition, \( A \) is closed.
It remains to verify the closedness of \( {A}^{ * } \) .
If \( \left\{ {x}_{n}\right\} \subset {A}^{ * } \) with \( {x}_{n} \rightarrow x \in S \), and if \( \omega \left( x\right) \cap A \neq \varnothing \), then \( \varphi \left( {t, x}\right) \in U \) for some \( t > 0 \) . Consequently \( \varphi \left( {t,{x}_{n}}\right) \in U \) for \( n \) large. Therefore \( \omega \left( {x}_{n}\right) \subset \) \( \omega \left( {U \cap S}\right) = A \) . This is impossible. Therefore \( \omega \left( x\right) \cap A = \varnothing \), i.e., \( x \in {A}^{ * } \) .
(2) For any open neighborhood \( V \) of \( A,\exists T = T\left( V\right) \) such that \( \underset{t \geq T}{ \cup }\varphi (t, U \cap \) \( S) \subset V \) .
Proof. If not, \( \exists V \supset A \) open, \( \exists {x}_{n} \in U \cap S\exists {t}_{n} \rightarrow + \infty \) such that \( \varphi \left( {{t}_{n},{x}_{n}}\right) \notin V \) . Since \( S \) is a compact invariant set, \( \varphi \left( {{t}_{n},{x}_{n}}\right) \rightarrow y \in S \smallsetminus V \) . But \( y \in \omega \left( {U \cap S}\right) = \) \( A \) . This is a contradiction.
(3) If \( B \) is a closed set disjoint from \( A \), then \( \forall \epsilon > 0\exists T = {T}_{\epsilon } > 0 \) such that \( d\left( {x,{A}^{ * }}\right) < \epsilon \), whenever \( x \in S \) and \( t \geq T \) such that \( \varphi \left( {t, x}\right) \in B \) .
Proof. If not, \( \exists \epsilon > 0\exists {t}_{n} \rightarrow \infty \exists {x}_{n} \in S \) such that \( \varphi \left( {{t}_{n},{x}_{n}}\right) \in B \) but \( d\left( {{x}_{n},{A}^{ * }}\right) \geq \epsilon \) .
Since \( S \) is compact and invariant, \( \exists x \in S \) such that \( {x}_{n} \rightarrow x \) . Then \( d\left( {x,{A}^{ * }}\right) \geq \epsilon \), which implies that \( x \notin {A}^{ * } \), i.e., \( \omega \left( x\right) \cap A \neq \varnothing \) . Thus for the neighborhood \( U \) of \( A \) ; we have \( {t}_{1} \in {R}^{1} \) such that \( \varphi \left( {{t}_{1}, x}\right) \in U \cap S \) and then \( \varphi \left( {{t}_{1},{x}_{n}}\right) \in U \cap S \) for \( n \) large. Let \( V = X \smallsetminus B \), which is an open neighborhood of \( A \), from (2), \( \exists T > 0 \) such that \( \varphi \left( {t,{x}_{n}}\right) \in V\forall t \geq T \) . This contradicts \( \varphi \left( {{t}_{n},{x}_{n}}\right) \in B\forall n. \)
For any \( x \in X \), we denote the orbit passing through \( x \) by \( o\left( x\right) = \) \( \left\{ {\varphi \left( {t, x}\right) \mid t \in {R}^{1}}\right\} \) .
(4) \( \omega \left( y\right) \cap {A}^{ * } \neq \varnothing \Rightarrow o\left( y\right) \subset {A}^{ * } \) .
Proof. Let \( B \) be a closed neighborhood of \( {A}^{ * } \) such that \( A \cap B = \varnothing \) . Since \( \omega \left( y\right) \cap {A}^{ * } \neq \varnothing \), \( \exists {t}_{n} \rightarrow + \infty \) such that \( \varphi \left( {{t}_{n}, y}\right) \in B \) . For every \( t \in {R}^{1} \), let \( z = \varphi \left( {t, y}\right) \) ; we have \( \varphi \left( {{t}_{n} - t, z}\right) = \varphi \left( {{t}_{n}, y}\right) \in B \), thus from (3), \( \forall \epsilon > 0 \), we have \( d\left( {z,{A}^{ * }}\right) < \epsilon \) . Since \( \epsilon > 0 \) is arbitrarily small, \( o\left( y\right) \subset {A}^{ * } \) .
(5) \( {\omega }^{ * }\left( y\right) \cap A \neq \varnothing \Rightarrow o\left( y\right) \subset A \) .
Proof. By the assumption that \( \exists {t}_{n} \rightarrow + \infty \) such that \( \varphi \left( {-{t}_{n}, y}\right) \in U \cap S \), then \( \forall t \in {R}^{1}, t + {t}_{n} \geq 0 \) for \( n \) large, from \( \varphi \left( {t, y}\right) = \varphi \left( {{t}_{n} + t,\varphi \left( {-{t}_{n}, y}\right) }\right) \) we have \( \varphi \left( {t, y}\right) \in \omega \left( {U \cap S}\right) \), i.e., \( o\left( y\right) \subset A \) .
Lemma 5.5.23 Let \( \left( {A,{A}^{ * }}\right) \) be an attractor-repeller pair of a compact invariant set \( S \), then
\[
S = A \cup {A}^{ * } \cup C\left( {{A}^{ * }, A, S}\right) .
\]
Proof. It is sufficient to prove that \( \forall x \in S \smallsetminus \left( {A \cup {A}^{ * }}\right) ,\omega \left( x\right) \subset A \), and \( {\omega }^{ * }\left( x\right) \subset \) \( {A}^{ * } \) .
1. \( \omega \left( x\right) \subset A : \forall y \in \omega \left( x\right) \), i.e., \( \exists {t}_{n} \rightarrow + \infty \) such that \( \varphi \left( {{t}_{n}, x}\right) \rightarrow y \) . Let \( B = S \smallsetminus U \), where \( U \) is as in Definition 5.5.22, then either \( \exists {n}_{0} \in \mathbb{N} \) such that \( z = \varphi \left( {{t}_{{n}_{0}}, x}\right) \in U \) or \( \varphi \left( {{t}_{n}, x}\right) \in B,\forall n \) . But the latter case is impossible, because from (3) we would have \( x \in {A}^{ * } \) . In the former case, \( y \in \omega \left( {S \cap U}\right) = A \) .
2. \( {\omega }^{ * }\left( x\right) \subset {A}^{ * } : \forall y \in {\omega }^{ * }\left( x\right) ,\exists {t}_{n} \rightarrow + \infty \) such that \( {z}_{n} = \varphi \left( {-{t}_{n}, x}\right) \rightarrow y \) . Let \( B = \{ x\} \) . Since \( \{ x\} \cap A = \varnothing \), we have \( d\left( {{z}_{n},{A}^{ * }}\right) \rightarrow 0 \), provided by (3). Thus \( y \in {A}^{ * } \) .
Combining the conclusions of Lemma 5.5.23 and properties (4) and (5), we see that \( \forall x \in C\left( {{A}^{ * }, A, S}\right) ,\omega \left( x\right) \subset A \) and \( {\omega }^{ * }\left( x\right) \subset {A}^{ * } \), and there are connecting orbits from \( {A}^{ * } \) to \( A \) but none from \( A \) to \( {A}^{ * } \).
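A one-dimensional toy example may help to visualize Lemma 5.5.23 (the example and the code below are added for illustration and are not from the text): for the flow of \( \dot{x} = x\left( {1 - x}\right) \) on \( S = \left\lbrack {0,1}\right\rbrack \) we have \( A = \{ 1\} \), \( {A}^{ * } = \{ 0\} \), and every point of \( \left( {0,1}\right) \) lies on a connecting orbit from \( {A}^{ * } \) to \( A \) .

```python
# Toy example (not from the text): x' = x(1 - x) on S = [0, 1] has the
# attractor-repeller pair A = {1}, A* = {0}; interior points lie on connecting
# orbits from A* to A, in accordance with Lemma 5.5.23.
def flow(x0, t, dt=1e-3):
    """Crude forward-Euler approximation of phi(t, x0); t may be negative."""
    x = x0
    sign = 1.0 if t >= 0 else -1.0
    for _ in range(int(abs(t) / dt)):
        x += sign * dt * x * (1.0 - x)
    return x

for x0 in [0.05, 0.3, 0.7, 0.95]:
    # forward orbit approaches A = {1}, backward orbit approaches A* = {0}
    print(x0, round(flow(x0, 25.0), 4), round(flow(x0, -25.0), 6))
```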
Let us define the Morse decomposition for a compact invariant set.
Definition 5.5.24 Let \( S \) be a compact invariant set of \( X \) with respect to the flow \( \varphi \) . An ordered collection \( \left( {{M}_{1},\ldots ,{M}_{n}}\right) \) of invariant subsets \( {M}_{j} \subset S \) is called a Morse decomposition of \( S \), if there exists an increasing sequence of attractors in \( S \) :
\[
\varnothing = {A}_{0} \subset {A}_{1} \subset \cdots \subset {A}_{n} = S
\]
such that \( {M}_{j} = {A}_{j} \cap {A}_{j - 1}^{ * },1 \leq j \leq n \) .
Example 1. An attractor-repeller pair \( \left( {A,{A}^{ * }}\right) \) of a compact invariant set \( S \) is a Morse decomposition, where \( {A}_{0} = \varnothing ,{A}_{1} = A,{A}_{2} = S \) .
Example 2. Suppose \( f \in {C}^{1}\left( {M,{\mathbb{R}}^{1}}\right) \), where \( M \) is a compact manifold. Assume that \( {f}^{-1}\left\lbrack {a, b}\right\rbrack \cap K = \left\{ {{p}_{1},\ldots ,{p}_{n}}\right\} \), where \( a, b \) are regular values of \( f \) with \( f\left( {p}_{i}\right) \leq f\left( {p}_{i + 1}\right), i = 1,\ldots, n - 1 \) . Then \( \left( {\left\{ {p}_{1}\right\} ,\ldots ,\left\{ {p}_{n}\right\} }\right) \) is a Morse decomposition of \( S = I\left( {{f}^{-1}\left( \left\lbrack {a, b}\right\rbrack \right) }\right) \) .
In fact, by setting \( {A}_{0} = \varnothing \), and \( {A}_{i} = \left\{ {x \in S \mid {\omega }^{ * }\left( x\right) \subset \left\{ {{p}_{1},\ldots ,{p}_{i}}\right\} }\right\}, i = \) \( 1,2,\ldots, n \) . We shall verify that this is an increasing sequence of attractors in \( S \) with \( {A}_{i}^{ * } = \left\{ {x \in S \mid \omega \left( x\right) \subset \left\{ {{p}_{i + 1},\ldots ,{p}_{n}}\right\} }\right\} \), and then \( {A}_{i} \cap {A}_{i - 1}^{ * } = \left\{ {p}_{i}\right\} \) .
It is proved by induction. Let \( {S}_{k} = I\left( {{f}^{-1}\left\lbrack {a,{a}_{k}}\right\rbrack }\right) \), where \( {a}_{k} \in \left( {f\left( {p}_{k}\right) }\right. \) , \( \left. {f\left( {p}_{k + 1}\right) }\right), k = 1,\ldots, n - 1 \), and \( {a}_{n} = b \) . Thus \( {S}_{n} = S \) . We verify that \( {A}_{i} = \) \( \left\{ {x \in {S}_{k} \mid {\omega }^{ * }\left( x\right) \subset \left\{ {{p}_{1},{p}_{2},\ldots ,{p}_{i}}\right\} }\right\}, i = 1,2,\ldots, k \), is an increasing sequence of attractors in \( {S}_{k} \), and \( {A}_{i}^{ * } = \left\{ {x \in {S}_{k} \mid \omega \left( x\right) \subset \left\{ {{p}_{i + 1},\ldots ,{p}_{k}}\right\} }\right\} \) .
For \( n = 1 \) . Obviously, \( {A}_{1} = \left\{ {p}_{1}\right\} \) is an attractor in \( {S}_{1} = \left\{ {p}_{1}\right\} \), with \( {A}_{1}^{ * } = \varnothing \) and \( {A}_{0}^{ * } = {S}_{1} \) . Thus \( {A}_{1} \cap {A}_{0}^{ * } = \left\{ {p}_{1}\right\} \) .
If the conclusion holds for \( n = k \), i.e., \( {A}_{i} = \left\{ {x \in {S}_{k} \mid {\omega }^{ * }\left( x\right) \subset }\right. \) \( \left. \left\{ {{p}_{1},\ldots ,{p}_{i}}\right\} \right\}, i = 1,\ldots, k \), is an increasing sequence of attra |
1129_(GTM35)Several Complex Variables and Banach Algebras | Definition 5.4 |
Definition 5.4. Choose \( {\omega }^{k} \) in \( { \land }^{k}\left( \Omega \right) \) ,
\[
{\omega }^{k} = \mathop{\sum }\limits_{{I, J}}{a}_{IJ}d{z}_{I} \land d{\bar{z}}_{J}
\]
\[
\partial {\omega }^{k} = \mathop{\sum }\limits_{{I, J}}\partial {a}_{IJ} \land d{z}_{I} \land d{\bar{z}}_{J}
\]
and
\[
\bar{\partial }{\omega }^{k} = \mathop{\sum }\limits_{{I, J}}\bar{\partial }{a}_{IJ} \land d{z}_{I} \land d{\bar{z}}_{J}
\]
Observe that, by (1), if \( {\omega }^{k} \) is as above,
\[
\bar{\partial }{\omega }^{k} + \partial {\omega }^{k} = \mathop{\sum }\limits_{{I, J}}d{a}_{IJ} \land d{z}_{I} \land d{\bar{z}}_{J} = d{\omega }^{k},
\]
so we have
(2)
\[
\bar{\partial } + \partial = d
\]
as operators from \( { \land }^{k}\left( \Omega \right) \rightarrow { \land }^{k + 1}\left( \Omega \right) \) . Note that if \( \omega \in { \land }^{r, s},\partial \omega \in { \land }^{r + 1, s} \) and \( \bar{\partial }\omega \in { \land }^{r, s + 1} \) .
Lemma 5.3. \( {\bar{\partial }}^{2} = 0 \), \( {\partial }^{2} = 0 \), and \( \partial \bar{\partial } + \bar{\partial }\partial = 0 \) .
Why is the \( \bar{\partial } \) -operator of interest to us? Consider \( \bar{\partial } \) as the map from \( {C}^{\infty } \rightarrow \) \( { \land }^{1}\left( \Omega \right) \) . What is its kernel?
Let \( f \in {C}^{\infty } \) . Then \( \bar{\partial }f = 0 \) if and only if
(3)
\[
\frac{\partial f}{\partial {\bar{z}}_{j}} = 0\text{ in }\Omega ,\;j = 1,2,\ldots, n.
\]
For \( n = 1 \) and \( \Omega \) a domain in the \( z \) -plane,(3) reduces to
\[
\frac{\partial f}{\partial \bar{z}} = 0\;\text{ or }\;\frac{\partial f}{\partial x} + i\frac{\partial f}{\partial y} = 0.
\]
For \( f = u + {iv}, u \) and \( v \) real-valued, this means that
\[
\frac{\partial u}{\partial x} = \frac{\partial v}{\partial y},\;\frac{\partial v}{\partial x} = - \frac{\partial u}{\partial y},
\]
or \( u \) and \( v \) satisfy the Cauchy-Riemann equations. Thus here
\[
\bar{\partial }f = 0\text{ in }\Omega \text{ is equivalent to }f \in H\left( \Omega \right) \text{.}
\]
Definition 5.5. Let \( \Omega \) be an open subset of \( {\mathbb{C}}^{n}.H\left( \Omega \right) \) is the class of all \( f \in {C}^{\infty } \) with \( \bar{\partial }f = 0 \) in \( \Omega \), or, equivalently,(3).
We call the elements of \( H\left( \Omega \right) \) holomorphic in \( \Omega \) . Note that, by (3), \( f \in H\left( \Omega \right) \) if and only if \( f \) is holomorphic in each variable \( {z}_{j} \) separately (as a function of a single complex variable), when the remaining variables are held fixed.
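This can be checked symbolically for a concrete function. The SymPy sketch below (an illustration added here; the function \( f \) is an arbitrary choice) verifies that \( f\left( {{z}_{1},{z}_{2}}\right) = {z}_{1}^{2}{z}_{2} + {e}^{{z}_{1}{z}_{2}} \) satisfies \( \partial f/\partial {\bar{z}}_{j} = \frac{1}{2}\left( {\partial f/\partial {x}_{j} + i\partial f/\partial {y}_{j}}\right) = 0 \) for \( j = 1,2 \), so that \( f \in H\left( {\mathbb{C}}^{2}\right) \) .

```python
import sympy as sp

# Symbolic sketch (assuming sympy): both Wirtinger derivatives
# d f / d conj(z_j) = (d/dx_j + i d/dy_j) f / 2 vanish for
# f(z1, z2) = z1**2 * z2 + exp(z1 * z2), so f is holomorphic in the sense of
# Definition 5.5.
x1, y1, x2, y2 = sp.symbols("x1 y1 x2 y2", real=True)
z1, z2 = x1 + sp.I * y1, x2 + sp.I * y2

f = z1**2 * z2 + sp.exp(z1 * z2)

for xj, yj in [(x1, y1), (x2, y2)]:
    dbar = (sp.diff(f, xj) + sp.I * sp.diff(f, yj)) / 2
    print(sp.simplify(dbar))         # prints 0 for each of the two variables
```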
Let now \( \Omega \) be the domain
\[
\left\{ {z \in {\mathbb{C}}^{n} : \left| {z}_{j}\right| < {R}_{j}, j = 1,\ldots, n}\right\}
\]
where \( {R}_{1},\ldots ,{R}_{n} \) are given positive numbers. Thus \( \Omega \) is a product of \( n \) open plane disks. Let \( f \) be a once-differentiable function on \( \Omega \) ; i.e., \( \partial f/\partial {x}_{j} \) and \( \partial f/\partial {y}_{j} \) exist and are continuous in \( \Omega, j = 1,\ldots, n \) .
Lemma 5.4. Assume that \( \partial f/\partial {\bar{z}}_{j} = 0, j = 1,\ldots, n \), in \( \Omega \) . Then there exist constants \( {A}_{v} \) in \( \mathbb{C} \) for each tuple \( v = \left( {{v}_{1},\ldots ,{v}_{n}}\right) \) of nonnegative integers such that
\[
f\left( z\right) = \mathop{\sum }\limits_{v}{A}_{v}{z}^{v}
\]
where \( {z}^{v} = {z}_{1}^{{v}_{1}} \cdot {z}_{2}^{{v}_{2}}\cdots {z}_{n}^{{v}_{n}} \), the series converging absolutely in \( \Omega \) and uniformly on every compact subset of \( \Omega \) .
For a proof of this result, see, e.g., [Hö, Th. 2.2.6].
This result then applies in particular to every \( f \) in \( H\left( \Omega \right) \) . We call \( \mathop{\sum }\limits_{\nu }{A}_{\nu }{z}^{\nu } \) the Taylor series for \( f \) at 0 .
We shall see that the study of the \( \bar{\partial } \) -operator, to be undertaken in the next section and in later sections, will throw light on the holomorphic functions of several complex variables.
For further use, note also
Lemma 5.5. If \( {\omega }^{k} \in { \land }^{k}\left( \Omega \right) \) and \( {\omega }^{l} \in { \land }^{l}\left( \Omega \right) \), then
\[
\bar{\partial }\left( {{\omega }^{k} \land {\omega }^{l}}\right) = \bar{\partial }{\omega }^{k} \land {\omega }^{l} + {\left( -1\right) }^{k}{\omega }^{k} \land \bar{\partial }{\omega }^{l}.
\]
## 6 The Equation \( \bar{\partial }u = f \)
As before, fix an open set \( \Omega \subset {\mathbb{C}}^{n} \) . Given \( f \in { \land }^{r, s + 1}\left( \Omega \right) \), we seek \( u \in { \land }^{r, s} \) such that
(1)
\[
\bar{\partial }u = f\text{.}
\]
Since \( {\bar{\partial }}^{2} = 0 \) (Lemma 5.3), a necessary condition on \( f \) is
(2)
\[
\bar{\partial }f = 0.
\]
If (2) holds, we say that \( f \) is \( \bar{\partial } \) -closed. What is a sufficient condition on \( f \) ? It turns out that this will depend on the domain \( \Omega \) .
Recall the analogous problem for the operator \( d \) on a domain \( \Omega \subset {\mathbb{R}}^{n} \) . If \( {\omega }^{k} \) is a \( k \) -form in \( { \land }^{k}\left( \Omega \right) \), the condition
(3)
\[
d{\omega }^{k} = 0\;\left( {\omega \text{ is "closed" }}\right)
\]
is necessary in order that we can find some \( {\tau }^{k - 1} \) in \( { \land }^{k - 1}\left( \Omega \right) \) with
(4)
\[
d{\tau }^{k - 1} = {\omega }^{k}
\]
However,(3) is, in general, not sufficient. (Think of an example when \( k = 1 \) and \( \Omega \) is an annulus in \( {\mathbb{R}}^{2} \) .) If \( \Omega \) is contractible, then (3) is sufficient in order that (4) admit a solution.
For the \( \bar{\partial } \) -operator, a purely topological condition on \( \Omega \) is inadequate. We shall find various conditions in order that (1) will have a solution. Denote by \( {\Delta }^{n} \) the closed unit polydisk in \( {\mathbb{C}}^{n} : {\Delta }^{n} = \left\{ {z \in {\mathbb{C}}^{n} : \left| {z}_{j}\right| \leq 1, j = 1,\ldots, n}\right\} \) .
Theorem 6.1 (Complex Poincaré Lemma). Let \( \Omega \) be a neighborhood of \( {\Delta }^{n} \) . Fix \( \omega \in { \land }^{p, q}\left( \Omega \right), q > 0 \), with \( \bar{\partial }\omega = 0 \) . Then there exists a neighborhood \( {\Omega }^{ * } \) of \( {\Delta }^{n} \) and there exists \( {\omega }^{ * } \in { \land }^{p, q - 1}\left( {\Omega }^{ * }\right) \) such that
\[
\bar{\partial }{\omega }^{ * } = \omega \text{ in }{\Omega }^{ * }.
\]
We need some preliminary work.
Lemma 6.2. Let \( \phi \in {C}^{1}\left( {\mathbb{R}}^{2}\right) \) and assume that \( \phi \) has compact support. Put
\[
\Phi \left( \zeta \right) = - \frac{1}{\pi }{\int }_{{\mathbb{R}}^{2}}\phi \left( z\right) \frac{dxdy}{z - \zeta }.
\]
Then \( \Phi \in {C}^{1}\left( {\mathbb{R}}^{2}\right) \) and \( \partial \Phi /\partial \bar{\zeta } = \phi \left( \zeta \right) \), all \( \zeta \) .
Proof. Choose \( R \) with \( \operatorname{supp}\phi \subset \{ z : \left| z\right| \leq R\} \) .
\[
{\pi \Phi }\left( \zeta \right) = {\int }_{\left| z\right| \leq R}\phi \left( z\right) \frac{1}{\zeta - z}{dxdy} = {\int }_{\left| {{z}^{\prime } - \zeta }\right| \leq R}\phi \left( {\zeta - {z}^{\prime }}\right) \frac{d{x}^{\prime }d{y}^{\prime }}{{z}^{\prime }}
\]
\[
= {\int }_{{\mathbb{R}}^{2}}\phi \left( {\zeta - {z}^{\prime }}\right) \frac{d{x}^{\prime }d{y}^{\prime }}{{z}^{\prime }}.
\]
Since \( 1/{z}^{\prime } \in {L}^{1}\left( {d{x}^{\prime }d{y}^{\prime }}\right) \) on compact sets, it is legal to differentiate the last integral under the integral sign. We get
\[
\pi \frac{\partial \Phi }{\partial \bar{\zeta }}\left( \zeta \right) = {\int }_{{\mathbb{R}}^{2}}\frac{\partial }{\partial \bar{\zeta }}\left\lbrack {\phi \left( {\zeta - {z}^{\prime }}\right) }\right\rbrack \frac{d{x}^{\prime }d{y}^{\prime }}{{z}^{\prime }} = {\int }_{{\mathbb{R}}^{2}}\frac{\partial \phi }{\partial \bar{z}}\left( {\zeta - {z}^{\prime }}\right) \frac{d{x}^{\prime }d{y}^{\prime }}{{z}^{\prime }}
\]
\[
= {\int }_{{\mathbb{R}}^{2}}\frac{\partial \phi }{\partial \bar{z}}\left( z\right) \frac{dxdy}{\zeta - z}.
\]
On the other hand, Lemma 2.5 gives that
\[
- {\pi \phi }\left( \zeta \right) = {\int }_{{\mathbb{R}}^{2}}\frac{\partial \phi }{\partial \bar{z}}\left( z\right) \frac{dxdy}{z - \zeta }.
\]
Hence \( \partial \Phi /\partial \bar{\zeta } = \phi \) .
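The identity from Lemma 2.5 used above, \( - {\pi \phi }\left( \zeta \right) = {\int }_{{\mathbb{R}}^{2}}\frac{\partial \phi }{\partial \bar{z}}\left( z\right) \frac{dxdy}{z - \zeta } \), can also be checked numerically. The Python sketch below is an illustration added here (the bump function, grid, and test point are my own choices); it evaluates the right-hand side for \( \phi \left( z\right) = {e}^{-1/\left( {1 - {\left| z\right| }^{2}}\right) } \) on \( \left| z\right| < 1 \), \( \phi = 0 \) outside, and compares with \( \phi \left( \zeta \right) \) at a test point.

```python
import numpy as np

# Rough numerical check (illustration only) of the Cauchy-Pompeiu identity
#     phi(zeta) = -(1/pi) * iint (d phi / d zbar)(z) / (z - zeta) dx dy
# for the bump phi(z) = exp(-1/(1 - |z|^2)) on |z| < 1, phi = 0 outside.
h = 0.01
x = np.arange(-1.2, 1.2, h)
X, Y = np.meshgrid(x, x)
Z = X + 1j * Y
S = 1.0 - np.abs(Z) ** 2

phi = np.zeros_like(S)
dphi = np.zeros(Z.shape, dtype=complex)
m = S > 1e-12
phi[m] = np.exp(-1.0 / S[m])
dphi[m] = -Z[m] * phi[m] / S[m] ** 2          # d phi / d zbar, computed by hand

zeta = 0.3 + 0.4j
K = Z - zeta
mask = np.abs(K) > 0.5 * h                     # drop the one (nearly) singular cell
approx = -(1.0 / np.pi) * np.sum(dphi[mask] / K[mask]) * h * h

exact = np.exp(-1.0 / (1.0 - abs(zeta) ** 2))
print(approx, exact)                           # agree to within a few percent
```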
Lemma 6.3. Let \( \Omega \) be a neighborhood of \( {\Delta }^{n} \) and fix \( f \) in \( {C}^{\infty }\left( \Omega \right) \) . Fix \( j,1 \leq j \) \( \leq n \) . Assume that
(5)
\[
\frac{\partial f}{\partial {\bar{z}}_{k}} = 0\text{ in }\Omega, k = {k}_{1},\ldots ,{k}_{s}\text{, each }{k}_{i} \neq j.
\]
Then we can find a neighborhood \( {\Omega }_{1} \) of \( {\Delta }^{n} \) and \( F \) in \( {C}^{\infty }\left( {\Omega }_{1}\right) \) such that
(a) \( \partial F/\partial {\bar{\zeta }}_{j} = f \) in \( {\Omega }_{1} \) .
(b) \( \partial F/\partial {\bar{\zeta }}_{k} = 0 \) in \( {\Omega }_{1}, k = {k}_{1},\ldots ,{k}_{s} \) .
Proof. Choose \( \varepsilon > 0 \) so that if \( z = \left( {{z}_{1},\ldots ,{z}_{n}}\right) \in {\mathbb{C}}^{n} \) and \( \left| {z}_{v}\right| < 1 + {2\varepsilon } \) for all \( v \), then \( z \in \Omega \) .
Choose \( \psi \in {C}^{\infty }\left( {\mathbb{R}}^{2}\right) \), having support contained in \( \{ z : \left| z\right| < 1 + {2\varepsilon }\} \), with \( \psi \left( z\right) = 1 \) for \( \left| z\right| < 1 + \varepsilon \) . Put
\[
F\left( {{\zeta }_{1},\ldots ,{\zeta }_{j},\ldots ,{\zeta }_{n}}\right)
\]
\[
= - \frac{1}{\pi }{\int }_{{\mathbb{R}}^{2}}\psi \left( z\right) f\left( {{\zeta }_{1},\ldots ,{\zeta }_{j - 1}, z,{\zeta }_{j + 1},\ldots ,{\zeta }_{n}}\right) \frac{dxdy}{z - {\zeta }_{j}}.
\]
For fixed \( {\zeta }_{1},\ldots ,{\zeta }_{j - 1},{\zeta }_{j + 1},\ldots ,{\zeta }_{n} \) with \( \left| {\zeta }_{\nu }\right| < 1 + \varepsilon \), all \( \nu \), we now apply Lemma 6.2 with
\[
\phi \left( z\r |
1064_(GTM223)Fourier Analysis and Its Applications | Definition 6.1 |
Definition 6.1 An operator \( A : {\mathcal{D}}_{A} \rightarrow V \) is said to be symmetric, if
\[
\langle {Au}, v\rangle = \langle u,{Av}\rangle \;\text{ for all }u, v \in {\mathcal{D}}_{A}.
\]
Example 6.7. Let \( V = {L}^{2}\left( \mathbf{T}\right) ,{\mathcal{D}}_{A} = V \cap {C}^{2}\left( \mathbf{T}\right) \) and let \( A \) be the operator \( - {D}^{2} \), so that \( {Au} = - {u}^{\prime \prime } \) . Since \( u \in {C}^{2}\left( \mathbf{T}\right) \), the image \( {Au} \) is a continuous function and thus belongs to \( V \) . We have
\[
\langle {Au}, v\rangle = - {\int }_{\mathbf{T}}{u}^{\prime \prime }\left( x\right) \overline{v\left( x\right) }{dx} = - {\left\lbrack {u}^{\prime }\left( x\right) \overline{v\left( x\right) }\right\rbrack }_{-\pi }^{\pi } + {\int }_{\mathbf{T}}{u}^{\prime }\left( x\right) \overline{{v}^{\prime }\left( x\right) }{dx}
\]
\[
= {\left\lbrack u\left( x\right) \overline{{v}^{\prime }\left( x\right) }\right\rbrack }_{-\pi }^{\pi } - {\int }_{\mathbf{T}}u\left( x\right) \overline{{v}^{\prime \prime }\left( x\right) }{dx} = \langle u,{Av}\rangle .
\]
The integrated parts are zero, because all the functions are periodic and thus have the same values at \( - \pi \) and \( \pi \) .
Definition 6.2 An operator \( A : {\mathcal{D}}_{A} \rightarrow V \) is said to have an eigenvalue \( \lambda \) , if there exists a vector \( u \in {\mathcal{D}}_{A} \) such that \( u \neq 0 \) and \( {Au} = {\lambda u} \) . Such a vector \( u \) is called an eigenvector, more precisely, an eigenvector belonging to the eigenvalue \( \lambda \) . The set of eigenvectors belonging to a particular eigenvalue \( \lambda \) (together with the zero vector) make up the eigenspace belonging to \( \lambda \) .
Example 6.8. We return to the situation in Example 6.7. If \( u\left( x\right) = \) \( a\cos {nx} + b\sin {nx} \), where \( a \) and \( b \) are arbitrary constants and \( n \) is an integer \( \geq 0 \), then clearly \( {Au} = {n}^{2}u \) . We thus have the eigenvalues \( \lambda = 0,1,4,9,\ldots \) . For \( \lambda = 0 \), the eigenspace has dimension 1 (it consists of the constant functions); for the other eigenvalues the dimension is 2. (The fact that this is the complete story of the eigenvalues of this operator was shown in Sec. 6.3, "Method 2.")
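As a quick sanity check (my own, not the book's, assuming sympy is available), one can verify symbolically that every function of this form is an eigenvector of \( A = - {D}^{2} \) with eigenvalue \( {n}^{2} \) :

```python
# Symbolic check that u(x) = a*cos(n x) + b*sin(n x) satisfies Au = -u'' = n^2 u.
import sympy as sp

x, a, b = sp.symbols('x a b')
n = sp.symbols('n', integer=True, nonnegative=True)
u = a*sp.cos(n*x) + b*sp.sin(n*x)
Au = -sp.diff(u, x, 2)            # the operator A = -D^2 applied to u
print(sp.simplify(Au - n**2*u))   # prints 0, i.e. Au = n^2 u
```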
For symmetric operators on a finite-dimensional space there is a spectral theorem, which is a simple adjustment to the case of complex scalars of the theorem from real linear algebra: If \( A \) is a symmetric operator defined on all of \( {\mathbf{C}}^{n} \) (for example), then there is an orthogonal basis for \( {\mathbf{C}}^{n} \), consisting of eigenvectors for \( A \) . The proof of this can be performed as a replica of the corresponding proof for the real case (if anything, the complex case is rather easier to do than a purely "real" proof). In infinite dimensions things are more complicated, but in many cases similar results do hold there as well.
First we give a couple of simple results that do not depend on dimension.
Lemma 6.1 A symmetric operator has only real eigenvalues, and eigenvectors corresponding to different eigenvalues are orthogonal.
Proof. Suppose that \( {Au} = {\lambda u} \) and \( {Av} = {\mu v} \), where \( u \neq 0 \) and \( v \neq 0 \) . Then we can write
\[
\lambda \langle u, v\rangle = \langle {\lambda u}, v\rangle = \langle {Au}, v\rangle = \langle u,{Av}\rangle = \langle u,{\mu v}\rangle = \bar{\mu }\langle u, v\rangle .
\]
\( \left( {6.15}\right) \)
First, choose \( v = u \), so that also \( \mu = \lambda \), and we have that \( \lambda \parallel u{\parallel }^{2} = \bar{\lambda }\parallel u{\parallel }^{2} \) . Because of \( u \neq 0 \) we conclude that \( \lambda = \bar{\lambda } \), and thus \( \lambda \) is real. It follows that all eigenvalues must be real. But then we can return to (6.15) with the information that \( \mu \) is also real, and thus \( \left( {\lambda - \mu }\right) \langle u, v\rangle = 0 \) . If now \( \lambda - \mu \neq 0 \) , then we must have that \( \langle u, v\rangle = 0 \), which proves the second assertion.
Regrettably, it is not easy to prove in general that there are "sufficiently many" eigenvectors (to make it possible to construct a "basis," as in finite dimensions). We shall here mention something about one situation where this does hold, the study of which was initiated by STURM and LIOUVILLE during the nineteenth century. As special cases of this situation we shall recognize some of the boundary value problems studied in this text, starting in Sec. 1.4.
We settle on a compact interval \( I = \left\lbrack {a, b}\right\rbrack \) . Let \( p \in {C}^{1}\left( I\right) \) be a real-valued function such that \( p\left( a\right) \neq 0 \neq p\left( b\right) \) ; let \( q \in C\left( I\right) \) be another real-valued function; and let \( w \in C\left( I\right) \) be a positive function on the same interval (i.e., \( w\left( x\right) > 0 \) for \( x \in I \) ). We are going to study the ordinary differential equation
(E)
\[
{\left( p{u}^{\prime }\right) }^{\prime } + {qu} + {\lambda wu} = 0 \Leftrightarrow
\]
\[
\frac{d}{dx}\left( {p\left( x\right) \frac{du}{dx}}\right) + q\left( x\right) u\left( x\right) + {\lambda w}\left( x\right) u\left( x\right) = 0,\;x \in I.
\]
Here, \( \lambda \) is a parameter and \( u \) the "unknown" function. Furthermore, we shall consider boundary conditions, initially of the form
(B)
\[
{A}_{0}u\left( a\right) + {A}_{1}{u}^{\prime }\left( a\right) = 0,\;{B}_{0}u\left( b\right) + {B}_{1}{u}^{\prime }\left( b\right) = 0.
\]
Here, \( {A}_{j} \) and \( {B}_{j} \) are real constants such that \( \left( {{A}_{0},{A}_{1}}\right) \neq \left( {0,0}\right) \neq \left( {{B}_{0},{B}_{1}}\right) \) .

Remark. If we take \( p\left( x\right) = w\left( x\right) = 1, q\left( x\right) = 0,{A}_{0} = {B}_{0} = 1 \) and \( {A}_{1} = {B}_{1} = 0 \) , we recover the problem studied in Sec. 1.4.
The problem (E)+(B) is called a regular Sturm-Liouville problem. We introduce the space \( {L}^{2}\left( {I, w}\right) \), where \( w \) is the function occurring in (E). This means that we have an inner product
\[
\langle u, v\rangle = {\int }_{I}u\left( x\right) \overline{v\left( x\right) }w\left( x\right) {dx}.
\]
In particular, all functions \( u \in C\left( I\right) \) will belong to \( {L}^{2}\left( {I, w}\right) \), since the interval is compact.
We define an operator \( A \) by the formula
\[
{Au} = - \frac{1}{w}\left( {{\left( p{u}^{\prime }\right) }^{\prime } + {qu}}\right)
\]
\[
{\mathcal{D}}_{A} = \left\{ {u \in {C}^{2}\left( I\right) : {Au} \in {L}^{2}\left( {I, w}\right) \text{ and }u\text{ satisfies }\left( \mathrm{B}\right) }\right\} .
\]
Then, (E) can be written simply as \( {Au} = {\lambda u} \) . The problem of finding nontrivial solutions of the problem (E)+(B) has been rephrased as the problem of finding eigenvectors of the operator \( A \) . (The fact that \( {\mathcal{D}}_{A} \) is a linear space is a consequence of the homogeneity of the boundary conditions.)
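To make this eigenvalue problem concrete, here is a minimal numerical sketch (mine, not the book's) for the special case of the Remark above: \( p = w = 1, q = 0 \) on \( I = \left\lbrack {0,\pi }\right\rbrack \) with \( u\left( 0\right) = u\left( \pi \right) = 0 \), where the exact eigenvalues are \( 1,4,9,\ldots \) The grid size \( N \) is an arbitrary choice.

```python
# Finite-difference approximation of Au = -u'' with Dirichlet boundary conditions.
import numpy as np

N = 500                                   # number of interior grid points
h = np.pi / (N + 1)
main = 2.0 * np.ones(N) / h**2
off = -1.0 * np.ones(N - 1) / h**2
A = np.diag(main) + np.diag(off, 1) + np.diag(off, -1)   # symmetric tridiagonal

eigvals = np.linalg.eigvalsh(A)           # real and sorted, since A is symmetric
print(eigvals[:5])                        # approximately 1, 4, 9, 16, 25
```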
The symmetry of \( A \) can be shown as a slightly more complicated parallel of Example 6.7 above. On the one hand,
\[
\langle {Au}, v\rangle = - {\int }_{a}^{b}\frac{1}{w}\left( {{\left( p{u}^{\prime }\right) }^{\prime } + {qu}}\right) \bar{v}{wdx} = - {\int }_{a}^{b}\left( {{\left( p{u}^{\prime }\right) }^{\prime } + {qu}}\right) \bar{v}{dx}
\]
\[
= - {\int }_{a}^{b}{\left( p{u}^{\prime }\right) }^{\prime }\bar{v}{dx} - {\int }_{a}^{b}{qu}\bar{v}{dx} = - {\left\lbrack p{u}^{\prime }\bar{v}\right\rbrack }_{a}^{b} + {\int }_{a}^{b}\left( {p{u}^{\prime }\overline{{v}^{\prime }} - {qu}\bar{v}}\right) {dx}.
\]
On the other hand (using the fact that \( p, q \) and \( w \) are real-valued),
\[
\langle u,{Av}\rangle = {\int }_{a}^{b}u \cdot \left( {-\frac{1}{\bar{w}}}\right) \overline{\left( {\left( p{v}^{\prime }\right) }^{\prime } + qv\right) }{wdx} = - {\int }_{a}^{b}u{\left( p{\bar{v}}^{\prime }\right) }^{\prime }{dx} - {\int }_{a}^{b}{uq}\bar{v}{dx}
\]
\[
= - {\left\lbrack up{\bar{v}}^{\prime }\right\rbrack }_{a}^{b} + {\int }_{a}^{b}\left( {{u}^{\prime }p\overline{{v}^{\prime }} - {uq}\bar{v}}\right) {dx}.
\]
We see that
\[
\langle {Au}, v\rangle - \langle u,{Av}\rangle = {\left\lbrack pu{\bar{v}}^{\prime } - p{u}^{\prime }\bar{v}\right\rbrack }_{a}^{b} = {\left\lbrack p\left( x\right) \left| \begin{matrix} u\left( x\right) & {u}^{\prime }\left( x\right) \\ \overline{v\left( x\right) } & \overline{{v}^{\prime }\left( x\right) } \end{matrix}\right| \right\rbrack }_{x = a}^{x = b}.
\]
But the determinant in this expression, for \( x = a \), must be zero: indeed, we assume that both \( u \) and \( v \) satisfy the boundary condition (B) at \( a \), which
means that
\[
\left\{ \begin{array}{l} {A}_{0}u\left( a\right) + {A}_{1}{u}^{\prime }\left( a\right) = 0 \\ {A}_{0}\overline{v\left( a\right) } + {A}_{1}\overline{{v}^{\prime }\left( a\right) } = 0 \end{array}\right.
\]
This can be considered to be a homogeneous linear system of equations with (the real numbers) \( {A}_{0} \) and \( {A}_{1} \) as unknowns, and it has a nontrivial solution (since we assume that \( \left. {\left( {{A}_{0},{A}_{1}}\right) \neq \left( {0,0}\right) }\right) \) . Thus the determinant is zero. In the same way it follows that the determinant is zero at \( x = b \) . We conclude then that
\[
\langle {Au}, v\rangle = \langle u,{Av}\rangle
\]
so that \( A \) is symmetric.
In this case, the symmetry is achieved by the fact that a certain substitution of values results in zero at each end of the interval. Clearly, this is not necessary. An operator can be symmetric for other reasons, too. We shall not delve deeper into this in this text, but refer the reader to texts on ordinary differential equations.
For the case we have sketched above, the following result holds.
Theorem 6.1 (Sturm-Liouville’s theorem) The operator \( A \), belonging to the problem (E)+(B), has infinitely many eigenvalues, which can be arranged in an increasing |
18_Algebra Chapter 0 | Definition 7.6 |
Definition 7.6. The \( i \) -th homology of a complex
\[
{M}_{ \bullet } : \cdots \overset{{d}_{i + 2}}{ \rightarrow }{M}_{i + 1}\xrightarrow[]{{d}_{i + 1}}{M}_{i}\overset{{d}_{i}}{ \rightarrow }{M}_{i - 1}\xrightarrow[]{{d}_{i - 1}}\cdots
\]
of \( R \) -modules is the \( R \) -module
\[
{H}_{i}\left( {M}_{ \bullet }\right) \mathrel{\text{:=}} \frac{\ker {d}_{i}}{\operatorname{im}{d}_{i + 1}}.
\]
That is, \( {H}_{i}\left( {M}_{ \bullet }\right) \) is a module capturing the ’light gray annulus’ in my heuristic picture of a complex. Of course
\[
{H}_{i}\left( {M}_{ \bullet }\right) = 0 \Leftrightarrow \operatorname{im}{d}_{i + 1} = \ker {d}_{i} \Leftrightarrow \text{the complex }{M}_{ \bullet }\text{ is exact at }{M}_{i}\text{:}
\]
that is, the homology modules are a measure of the 'failure of a complex to be exact'.
Example 7.7. In fact, homology should be thought of as a (vast) generalization of the notions of kernel and cokernel. Indeed, consider the (very) particular case in which \( {M}_{ \bullet } \) is the complex
\[
0 \rightarrow {M}_{1}\overset{\varphi }{ \rightarrow }{M}_{0} \rightarrow 0.
\]
Then
\[
{H}_{1}\left( {M}_{ \bullet }\right) \cong \ker \varphi ,\;{H}_{0}\left( {M}_{ \bullet }\right) \cong \operatorname{coker}\varphi .
\]
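For complexes of finite-dimensional vector spaces these computations are mechanical; here is a tiny numerical sketch (my example, using numpy) for a two-term complex whose only map is the boundary matrix of a hollow triangle, so that \( {H}_{1} \cong \ker {d}_{1} \) is spanned by the loop and \( {H}_{0} \cong \operatorname{coker}{d}_{1} \) counts connected components.

```python
# Homology dimensions of 0 -> R^3 --d1--> R^3 -> 0, with d1 the boundary map of a
# hollow triangle (columns: edges, rows: vertices).
import numpy as np

d1 = np.array([[-1.,  0.,  1.],
               [ 1., -1.,  0.],
               [ 0.,  1., -1.]])

rank = np.linalg.matrix_rank(d1)
dim_H1 = d1.shape[1] - rank        # dim ker d1   = 1 (one independent loop)
dim_H0 = d1.shape[0] - rank        # dim coker d1 = 1 (one connected component)
print(dim_H1, dim_H0)              # 1 1
```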
I will end this very brief excursion into more abstract territories by indicating how a commutative diagram involving two short exact sequences generates a 'long exact sequence' in homology. This is actually a particular case of a more general construction-according to which a suitable commutative diagram involving three complexes yields a really long 'long exact homology sequence'. We will come back to this general construction when we deal more extensively with homological algebra in Chapter IX. The reader is also likely to learn about it in a course on algebraic topology, where this fact is put to impressive use in studying invariants of manifolds.
In the simple form we will analyze, this is affectionately known as the snake lemma. Consider two short exact sequences linked by homomorphisms, so as to form a commutative diagram\( {}^{34} \) :
![23387543-548b-40c2-8595-200756212a0f_202_0.jpg](images/23387543-548b-40c2-8595-200756212a0f_202_0.jpg)
Lemma 7.8 (The snake lemma). With notation as above, there is an exact sequence
\[
0 \rightarrow \ker \lambda \rightarrow \ker \mu \rightarrow \ker \nu \overset{\delta }{ \rightarrow }\operatorname{coker}\lambda \rightarrow \operatorname{coker}\mu \rightarrow \operatorname{coker}\nu \rightarrow 0.
\]
Remark 7.9. Most of the homomorphisms in this sequence are induced in a completely straightforward way from the corresponding homomorphisms \( \lambda ,\mu ,\nu \) . The one ’surprising’ homomorphism is the one denoted \( \delta \) ; I will discuss its definition below.
Remark 7.10. In view of Example 7.7, we could have written the sequence in this statement as
\[
0 \rightarrow {H}_{1}\left( {L}_{ \bullet }\right) \rightarrow {H}_{1}\left( {M}_{ \bullet }\right) \rightarrow {H}_{1}\left( {N}_{ \bullet }\right) \overset{\delta }{ \rightarrow }{H}_{0}\left( {L}_{ \bullet }\right) \rightarrow {H}_{0}\left( {M}_{ \bullet }\right) \rightarrow {H}_{0}\left( {N}_{ \bullet }\right) \rightarrow 0
\]
where \( {L}_{ \bullet } \) is the complex \( 0 \rightarrow {L}_{1}\overset{\lambda }{ \rightarrow }{L}_{0} \rightarrow 0 \), etc. The snake lemma generalizes to arbitrary complexes \( {L}_{ \bullet },{M}_{ \bullet },{N}_{ \bullet } \), producing a ’long exact homology sequence’ of which this is just the tail end. As mentioned above, we will discuss this rather straightforward generalization later (§IX.3.3).
Remark 7.11. A popular version of the snake lemma does not assume that \( {\alpha }_{1} \) is injective and \( {\beta }_{0} \) is surjective: that is, we could consider a commutative diagram of
---
\( {}^{34} \) In fact, it is better to view this diagram as three (very short) complexes linked by \( R \) - module homomorphisms \( {\alpha }_{i},{\beta }_{i} \) so that ’the rows are exact’. In fact, one can define a category of complexes, and this diagram is nothing but a 'short exact sequence of complexes'; this is the approach we will take in Chapter IX
---
exact sequences
![23387543-548b-40c2-8595-200756212a0f_203_0.jpg](images/23387543-548b-40c2-8595-200756212a0f_203_0.jpg)
The lemma will then state that there is 'only' an exact sequence
\[
\ker \lambda \rightarrow \ker \mu \rightarrow \ker \nu \overset{\delta }{ \rightarrow }\operatorname{coker}\lambda \rightarrow \operatorname{coker}\mu \rightarrow \operatorname{coker}\nu .
\]
Proving the snake lemma is something that should not be done in public, and it is notoriously useless to write down the details of the verification for others to read: the details are all essentially obvious, but they lead quickly to a notational quagmire. Such proofs are collectively known as the sport of diagram chase, best executed by pointing several fingers at different parts of a diagram on a blackboard, while enunciating the elements one is manipulating and stating their fate.\( {}^{35} \)
Nevertheless, I should explain where the ’connecting’ homomorphism \( \delta \) comes from, since this is the heart of the statement of the snake lemma and of its proof. Here is the whole diagram, including kernels and cokernels; thus, columns are exact (as well as the two original sequences, placed horizontally):
![23387543-548b-40c2-8595-200756212a0f_203_1.jpg](images/23387543-548b-40c2-8595-200756212a0f_203_1.jpg)
By the way, I trust that the reader now sees why this lemma is called the snake lemma.
Definition of the snaking homomorphism \( \delta \) . Let \( a \in \ker \nu \) . I claim that \( a \) can be mapped through the diagram all the way to coker \( \lambda \), along the solid arrows marked
---
\( {}^{35} \) Real purists chase diagrams in arbitrary categories, thus without the benefit of talking about 'elements', and we will practice this skill later on (Chapter IX). For example, the snake lemma can be proven by appealing to universal property after universal property of kernels and cokernels, without ever choosing elements anywhere. But the performing technique of pointing fingers at a board while monologuing through the argument remains essentially the same.
---
here:
![23387543-548b-40c2-8595-200756212a0f_204_0.jpg](images/23387543-548b-40c2-8595-200756212a0f_204_0.jpg)
Indeed,
- \( \ker \nu \subseteq {N}_{1} \) ; so view \( a \) as an element \( b \) of \( {N}_{1} \) .
- \( {\beta }_{1} \) is surjective, so \( \exists c \in {M}_{1} \), mapping to \( b \) .
- Let \( d = \mu \left( c\right) \) be the image of \( c \) in \( {M}_{0} \) .
- What is the image of \( d \) in the spot marked \( * \) ? By the commutativity of the diagram, it must be the same as \( \nu \left( b\right) \) . However, \( b \) was the image in \( {N}_{1} \) of \( a \in \ker \nu \), so \( \nu \left( b\right) = 0 \) . Thus, \( d \in \ker {\beta }_{0} \) . Since rows are exact, \( \ker {\beta }_{0} = \operatorname{im}{\alpha }_{0} \) ; therefore, \( \exists e \in {L}_{0} \), mapping to \( d \) .
- Finally, let \( f \in \operatorname{coker}\lambda \) be the image of \( e \) .
I want to set \( \delta \left( a\right) \mathrel{\text{:=}} f \) .
Is this legal? At two steps in the chase we have taken preimages:
- \( \exists c \in {M}_{1} \) such that \( {\beta }_{1}\left( c\right) = b \) ,
- \( \exists e \in {L}_{0} \) such that \( {\alpha }_{0}\left( e\right) = d \) .
The second step does not involve a choice: \( {\alpha }_{0} \) is injective by assumption, so the element \( e \) mapping to \( d \) is uniquely determined by \( d \) . But there was a choice involved in the first step: in order to verify that \( \delta \) is well-defined, we have to show that choosing some other \( c \) would not affect the proposed value \( f \) for \( \delta \left( a\right) \) .
This is proved by another chase. Here is the relevant part of the diagram:
![23387543-548b-40c2-8595-200756212a0f_204_1.jpg](images/23387543-548b-40c2-8595-200756212a0f_204_1.jpg)
Suppose we choose a different \( {c}^{\prime } \) mapping to the same \( b \) :
Then \( {\beta }_{1}\left( {{c}^{\prime } - c}\right) = 0 \) ; by exactness, \( \exists g \in {L}_{1} \) such that \( \left( {{c}^{\prime } - c}\right) = {\alpha }_{1}\left( g\right) \) :
\[
0\cdots \cdots \cdots \cdots g\xrightarrow[]{{\alpha }_{1}}\left( {{c}^{\prime } - c}\right) \xrightarrow[]{{\beta }_{1}}0\cdots \cdots \cdots \cdots 0.
\]
Now the point is that, since columns form complexes, \( g \) dies in coker \( \lambda \) :
![23387543-548b-40c2-8595-200756212a0f_205_0.jpg](images/23387543-548b-40c2-8595-200756212a0f_205_0.jpg)
and it follows (by the commutativity of the diagram and the injectivity of \( {\alpha }_{0} \) ) that changing \( c \) to \( {c}^{\prime } \) modifies \( e \) to \( e + \lambda \left( g\right) \) and \( f \) to \( f + 0 = f \) . That is, \( f \) is indeed independent of the choice.
Thus \( \delta \) is well-defined!
This is a tiny part of the proof of the snake lemma, but it probably suffices to demonstrate why reading a written-out version of a diagram chase may be supremely uninformative.
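Still, a concrete instance (my example, not the book’s) may help fix ideas. Take both rows to be \( 0 \rightarrow \mathbb{Z}\overset{ \cdot 2}{ \rightarrow }\mathbb{Z} \rightarrow \mathbb{Z}/2\mathbb{Z} \rightarrow 0 \) and let \( \lambda \) and \( \mu \) both be multiplication by 2, so that the induced map \( \nu \) on \( \mathbb{Z}/2\mathbb{Z} \) is the zero map. Chasing \( a = 1 \in \ker \nu \) as above: lift it to \( c = 1 \in {M}_{1} = \mathbb{Z} \), push down to \( d = \mu \left( c\right) = 2 \), pull it back along \( {\alpha }_{0} = \cdot 2 \) to \( e = 1 \), and read off \( f = 1 \) in \( \operatorname{coker}\lambda = \mathbb{Z}/2\mathbb{Z} \) . Thus \( \delta \) is the identity of \( \mathbb{Z}/2\mathbb{Z} \), and the resulting sequence

\[
0 \rightarrow 0 \rightarrow 0 \rightarrow \mathbb{Z}/2\mathbb{Z}\overset{\delta }{ \rightarrow }\mathbb{Z}/2\mathbb{Z}\overset{0}{ \rightarrow }\mathbb{Z}/2\mathbb{Z}\overset{ \sim }{ \rightarrow }\mathbb{Z}/2\mathbb{Z} \rightarrow 0
\]

is indeed exact, as the lemma predicts.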
The rest of the proof (left to the reader(!) but I am not listing this as an official exercise for fear that someone might actually turn a solution in for grading) amounts to many, many similar arguments. The definition of the maps induced on kernels and cokernels is substantially less challenging than the definition of the connecting morphism \( \delta \) described above. Exactness at most spots in the sequence
\[
0 \rightarrow \ker \lambda \rightarrow \ker \mu \rightarrow \ker \nu \overset{\delta }{ \rightarrow }\operatorname{coker}\lambda \rightarrow \operatorname{coker}\mu \rightarrow \operatorname{coker}\nu \rightarrow 0
\]
is also reasonably straightforward; most |
1143_(GTM48)General Relativity for Mathematicians | Definition 1.3.1 |
Definition 1.3.1. A spacetime \( \left( {M, g, D}\right) \) is a connected 4-dimensional, oriented, and time-oriented Lorentzian manifold \( \left( {M, g}\right) \) together with the Levi-Civita connection \( D \) of \( g \) on \( M \) .
In context, we shall sometimes write \( \left( {M, g}\right) \) or just \( M \) for \( \left( {M, g, D}\right) \) . A general relativistic gravitational field \( \left\lbrack \left( {M, g}\right) \right\rbrack \) is an equivalence class of spacetimes where the equivalence is defined by orientation and time-orientation-preserving isometries (Exercise 1.2.6). Each \( \left( {M,\mathbf{g}, D}\right) \in \left\lbrack \left( {M,\mathbf{g}}\right) \right\rbrack \) is a representative of \( \left\lbrack \left( {M,\mathbf{g}}\right) \right\rbrack \) . Physically, all representatives of \( \left\lbrack \left( {M,\mathbf{g}}\right) \right\rbrack \) model the same situation. We shall normally work with one representative, but focus attention on properties shared by all representatives in the same gravitational field.
We discuss some motivations.
The spacetimes of significance in physics are all models of (a part of) the history of (some portion of) the universe. The dimension of a spacetime is intuitively accounted for by the three spatial dimensions of the known universe and an extra dimension of time. Since spacetimes model histories, "disconnected" would connote "always was, is, and always will be disconnected." Thus one assumes \( M \) connected. The requirement of time orientability is suggested by our knowledge of thermodynamical processes on the earth, now. The second law of thermodynamics implies that one can distinguish past directions from future directions on earth by measuring the increase in entropy. It seems somewhat reasonable to assume that thermodynamics will smoothly determine future directions in the whole universe. No one knows if this is true, but if we ever really met beings going the wrong way in time, trying to communicate with them would presumably be as confusing as trying to talk to some of the regents of the University of California. Orientability of \( M \) is also a plausible condition to impose because the nonconservation of parity is now established for a whole class of experiments (the so-called "weak interactions"). On earth, we can thus intrinsically distinguish between right-handed and left-handed coordinate systems in ordinary 3-space. Thus \( \left( {M, g, D}\right) \) can at least be oriented in the region surrounding the earth, now, in the following way: in each coordinate neighborhood, the 4-form \( d{x}^{1} \land d{x}^{2} \land d{x}^{3} \land d{x}^{4} \) is consistent with the orientation iff (a) each \( d{x}^{1}, d{x}^{2}, d{x}^{3} \) is spacelike and \( \left\{ {d{x}^{1}, d{x}^{2}, d{x}^{3}}\right\} \) is dual to a right-handed spatial coordinate system of the tangent space at each point, and (b) \( d{x}^{4} \) is future pointing and timelike.
To a geometer, that \( M \) should be a \( {C}^{\infty } \) manifold is perhaps the most acceptable and the most obvious requirement. However, this is probably the most mystifying requirement on a deeper level. Why should all macroscopic physical phenomena-past, present and future-be regarded as occurring on a smooth structure? Offhand, one would think that nature might use something logically simpler-say, piecewise linear manifolds or more general topological spaces. Perhaps she does. The internal contradictions of present special relativistic quantum theory are severe. These contradictions may stem from trying to force a "jumpy" quantum world into a \( {C}^{\infty } \) manifold.
Many modifications of Definition 1.3.1 have been suggested. For example, one might use a metric connection with torsion in place of the Levi-Civita connection. There are perhaps a thousand such modifications of various kinds which have appeared in print. We shall not consider them here.
Although we have not done so, many physicists would include stable causality (Hawking-Ellis [1]) in the definition of a spacetime. On the other hand, a geometer approaching the same subject would most likely require \( M \) to be complete. This we have not done for the simple reason that even the weaker requirement of infinite extendibility of all non-spacelike geodesics would exclude most of the spacetimes of current interest (Section 1.4 and Chapters 6 and 7). For example, in the standard cosmological models particles enter the universe with a big bang (Chapter 7) and the history of such a particle is represented by an inextendible timelike geodesic whose parameter is bounded from below (compare Corollary 1.4.6 following). Whether incompleteness is a property of nature or a misleading feature of current models is a highly controversial question. We remark that infinite extendibility of spacelike geodesics has no direct physical interpretation.
Newtonian analogue. Let \( \phi \left( \overrightarrow{x}\right) \) be a time-independent Newtonian gravitational potential (Section 0.1.8). In our units (Section 0.1.4), \( \max \left| \phi \right| \cong {10}^{-6} \) within the solar system. Whenever \( \max \left| \phi \right| \ll 1 \) , Newtonian space, time, and gravitational potential can be replaced by a crude spacetime model as follows. Let \( M = {\mathbb{R}}^{4} \) . Define \( \phi : M \rightarrow \mathbb{R} \) by \( {\phi x} = \phi \left( {{u}^{1}x,{u}^{2}x,{u}^{3}x}\right) \forall x \in M \) . Let \( g = \left( {1 - {2\phi }}\right) \mathop{\sum }\limits_{{\mu = 1}}^{3}d{u}^{\mu } \otimes d{u}^{\mu } - \left( {1 + {2\phi }}\right) d{u}^{4} \otimes d{u}^{4} \) . Take \( {\partial }_{4} \) as future pointing and orient \( M \) by \( d{u}^{1} \land \cdots \land d{u}^{4} \) . Then \( \left( {M, g, D}\right) \) is a spacetime. Using it for a general relativistic model, and following the rules of Chapters 2 and 3, gives results at worst as inaccurate as the corresponding Newtonian model (cf. Section 9.3). Roughly, \( g \) replaces \( \phi \) and \( D \) replaces the Newtonian gravitational field \( - \overrightarrow{\nabla }\phi \) . However, even when \( \left| \phi \right| \ll 1 \) in Newtonian theory, more accurate general relativistic models are sometimes needed. Moreover, some spacetimes model situations altogether beyond the scope of Newtonian physics, such as black holes (Example 1.4.2 and Section 7.5) and gravitational waves (Section 7.6).
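To illustrate the last remark (my own sketch, not from the text), one can compute symbolically, in a \( 1 + 1 \) -dimensional analogue \( g = \left( {1 - {2\phi }}\right) {dx} \otimes {dx} - \left( {1 + {2\phi }}\right) {dt} \otimes {dt} \), that the Christoffel symbol \( {\Gamma }_{tt}^{x} \) equals \( {\phi }^{\prime }/\left( {1 - {2\phi }}\right) \approx {\phi }^{\prime } \) when \( \left| \phi \right| \ll 1 \), so the geodesic equation reduces to Newton’s \( \ddot{x} \approx - {\phi }^{\prime }\left( x\right) \) at leading order.

```python
# Christoffel symbol Gamma^x_{tt} for g = (1 - 2 phi(x)) dx^2 - (1 + 2 phi(x)) dt^2.
import sympy as sp

x, t = sp.symbols('x t')
phi = sp.Function('phi')(x)
coords = [x, t]
g = sp.diag(1 - 2*phi, -(1 + 2*phi))
ginv = g.inv()

def christoffel(i, j, k):
    """Gamma^i_{jk} of the metric g in the coordinates (x, t)."""
    return sp.simplify(sum(ginv[i, m] * (sp.diff(g[m, j], coords[k])
                                         + sp.diff(g[m, k], coords[j])
                                         - sp.diff(g[j, k], coords[m])) / 2
                           for m in range(2)))

print(christoffel(0, 1, 1))   # equal to phi'(x)/(1 - 2*phi(x))
```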
Let \( \left( {M, g}\right) \) and \( \left( {N, h}\right) \) be spacetimes. Define \( \left( {N, h}\right) \) to contain \( \left( {M, g}\right) \) iff \( M \) is an open submanifold of \( {\left. N,\mathbf{h}\right| }_{M} = \mathbf{g} \), and \( \left( {M,\mathbf{g}}\right) \) has the induced orientation and time orientation. Define \( \left( {M, g}\right) \) as maximal iff each spacetime that contains \( \left( {M,\mathbf{g}}\right) \) is \( \left( {M,\mathbf{g}}\right) \) . In physics, one prefers in principle to work with maximal spacetimes. However, one is sometimes too lazy to work out the properties of spacetime in regions where "matter" is present; moreover, one sometimes suspects that in some regions conditions may be so extreme that current physics cannot adequately describe them. Then one works with a spacetime that is not maximal. Compare Sections 7.3 to 7.5.
Proposition 1.3.2. Suppose \( \left( {N, h}\right) \) contains \( \left( {M, g}\right) \) but, \( \forall \) lightlike geodesic \( \lambda : \mathcal{E} \rightarrow N \) such that \( \left( {\lambda \mathcal{E}}\right) \cap M \neq \phi ,\lambda \mathcal{E} \subset M \) . Then \( M = N \) .
Roughly, the proposition says that a spacetime is maximal iff one cannot see into it or out of it. Like many other results, it indicates the key role played by lightlike geodesics. The proof uses techniques more advanced than have been discussed here. The idea is to assume a point \( p \) on the boundary of \( M \) and show that to each point in a sufficiently small neighborhood of \( p \) there is a once-broken lightlike geodesic from \( p \) . We omit the details (but see Exercise 5.2.7).
## EXERCISE 1.3.3
Show that a complete spacetime is maximal.
## EXERCISE 1.3.4
Suppose \( M = {\mathbb{R}}^{2}, g = d{u}^{1} \otimes d{u}^{1} - d{u}^{2} \otimes d{u}^{2}, h = d{u}^{1} \otimes d{u}^{1} - \left( {\exp {u}^{2}}\right) d{u}^{2} \otimes \) \( d{u}^{2} \) . Show \( \left( {M, g}\right) \) is maximal and \( \left( {M, h}\right) \) is not.
Sections 8.2 to 8.4 outline some global properties of spacetimes.
## 1.4 Examples of spacetimes
The spacetimes most important in current physics are given in the next three examples. We define them mathematically now. They will be used to illustrate various mathematical and physical concepts as they arise. We will discuss in detail the physical applications of Schwarzschild spacetimes (Example 1.4.2) in Chapter 7, and of Einstein-de Sitter spacetime (Example 1.4.3) in Chapter 6.
EXAMPLE 1.4.1. MINKOWSKI SPACE. On \( {\mathbb{R}}^{4} \) define \( g = \mathop{\sum }\limits_{{\mu = 1}}^{3}d{u}^{\mu } \otimes d{u}^{\mu } - \) \( d{u}^{4} \otimes d{u}^{4} \) ; time orient \( \left( {{\mathbb{R}}^{4}, g}\right) \) by \( {\partial }_{4} \) and orient \( {\mathbb{R}}^{4} \) by \( d{u}^{1} \land d{u}^{2} \land d{u}^{3} \land d{u}^{4} \) . The Levi-Civita connection of \( g \) is then uniquely determined by \( D{\partial }_{i}{\partial }_{j} = 0 \) \( \forall i, j = 1,\ldots ,4 \) (Bishop-Goldberg 5.6). \( \left( {{\mathbb{R}}^{4}, g, D}\right) \) is a spacetime; it is called Minkowski space. The gravitational field \( \left\lbrack \left( {M, g}\right) \right\rbrack \), which contains Minkowski space, is the trivial gravitational field; (nonquantum) special relativity and (special relativistic) quantum theory use the trivial gravitational field. The trivial gravitatio |
1112_(GTM267)Quantum Theory for Mathematicians | Definition 7.13 |
Definition 7.13 (Functional Calculus) If \( A \in \mathcal{B}\left( \mathbf{H}\right) \) is self-adjoint and \( f : \sigma \left( A\right) \rightarrow \mathbb{C} \) is a bounded measurable function, define an operator \( f\left( A\right) \) by setting
\[
f\left( A\right) = {\int }_{\sigma \left( A\right) }f\left( \lambda \right) d{\mu }^{A}\left( \lambda \right)
\]
where \( {\mu }^{A} \) is the projection-valued measure in Theorem 7.12.
We may extend the projection-valued measure \( {\mu }^{A} \) from \( \sigma \left( A\right) \) to all of \( \mathbb{R} \) by assigning measure 0 to \( \mathbb{R} \smallsetminus \sigma \left( A\right) \) . Then, roughly speaking, \( f\left( A\right) \) is the operator that is equal to \( f\left( \lambda \right) I \) on the range of the projection operator \( {\mu }^{A}\left( {\lbrack \lambda ,\lambda + {d\lambda }}\right) ) \) .
Since the integral with respect to \( {\mu }^{A} \) is multiplicative, it follows from (7.19) that if \( f\left( \lambda \right) = {\lambda }^{m} \) for some positive integer \( m \), then \( f\left( A\right) \) is the \( m \) th power of \( A \) . Further, since the series \( {e}^{a\lambda } = \mathop{\sum }\limits_{{m = 0}}^{\infty }{\left( a\lambda \right) }^{m}/m! \) converges uniformly on the compact set \( \sigma \left( A\right) \), the operator \( {e}^{aA} \) (computed using the functional calculus for the function \( f\left( \lambda \right) = {e}^{a\lambda } \) ) may be computed as a power series.
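In finite dimensions the functional calculus just applies \( f \) to the eigenvalues, and both of these statements are easy to check numerically; a small sketch of my own, with an arbitrarily chosen symmetric \( 2 \times 2 \) matrix:

```python
# Functional calculus f(A) = Q diag(f(lam)) Q^T for a symmetric A = Q diag(lam) Q^T.
import math
import numpy as np

A = np.array([[2.0, 1.0],
              [1.0, 3.0]])
lam, Q = np.linalg.eigh(A)

def f_of_A(f):
    return Q @ np.diag(f(lam)) @ Q.T

# f(lambda) = lambda^3 gives the third power of A ...
print(np.allclose(f_of_A(lambda s: s**3), np.linalg.matrix_power(A, 3)))   # True

# ... and f(lambda) = exp(a*lambda) agrees with the power series for exp(aA).
a = 0.7
series = sum(np.linalg.matrix_power(a * A, m) / math.factorial(m) for m in range(40))
print(np.allclose(f_of_A(lambda s: np.exp(a * s)), series))                # True
```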
Definition 7.14 (Spectral Subspaces) For \( A \in \mathcal{B}\left( \mathbf{H}\right) \), let \( {\mu }^{A} \) be the associated projection-valued measure, extended to be a measure on \( \mathbb{R} \) by setting \( {\mu }^{A}\left( {\mathbb{R} \smallsetminus \sigma \left( A\right) }\right) = 0 \) . Then for each Borel set \( E \subset \mathbb{R} \), define the spectral subspace \( {V}_{E} \) of \( \mathbf{H} \) by
\[
{V}_{E} = \operatorname{Range}\left( {{\mu }^{A}\left( E\right) }\right)
\]
The definition of a projection-valued measure implies that these spectral subspaces satisfy the first four properties listed in Sect. 7.2.1. We now show that (7.19) implies the remaining two properties we anticipated for the spectral subspaces.
Proposition 7.15 If \( A \in \mathcal{B}\left( \mathbf{H}\right) \) is self-adjoint, the spectral subspaces associated with \( A \) have the following properties.
1. Each spectral subspace \( {V}_{E} \) is invariant under \( A \) .
2. If \( E \subset \left\lbrack {{\lambda }_{0} - \varepsilon ,{\lambda }_{0} + \varepsilon }\right\rbrack \) then for all \( \psi \in {V}_{E} \), we have
\[
\begin{Vmatrix}{\left( {A - {\lambda }_{0}I}\right) \psi }\end{Vmatrix} \leq \varepsilon \parallel \psi \parallel .
\]
3. The spectrum of \( {\left. A\right| }_{{V}_{E}} \) is contained in the closure of \( E \) .
4. If \( {\lambda }_{0} \) is in the spectrum of \( A \), then for every neighborhood \( U \) of \( {\lambda }_{0} \) , we have \( {V}_{U} \neq \{ 0\} \), or, equivalently, \( \mu \left( U\right) \neq 0 \) .
Proof. For Point 1, observe that for any bounded measurable functions \( f \) and \( g \) on \( \sigma \left( A\right) \), the operators \( f\left( A\right) \) and \( g\left( A\right) \) commute, since the product in either order is equal to the integral of the function \( {fg} = {gf} \) with respect to \( {\mu }^{A} \) . In particular, \( A \), which is the integral of the function \( f\left( \lambda \right) = \lambda \) , commutes with \( {\mu }^{A}\left( E\right) \), which is the integral of the function \( {1}_{E} \) . Thus, given a vector \( {\mu }^{A}\left( E\right) \phi \) in the range of \( {\mu }^{A}\left( E\right) \), we have
\[
A{\mu }^{A}\left( E\right) \phi = {\mu }^{A}\left( E\right) {A\phi }
\]
which is again in the range of \( {\mu }^{A}\left( E\right) \), establishing the invariance of the spectral subspace.
For Point 2, suppose that \( \psi \in {V}_{E} \), where \( E \subset \left\lbrack {{\lambda }_{0} - \varepsilon ,{\lambda }_{0} + \varepsilon }\right\rbrack \) . Then \( \psi \) is in the range of \( {\mu }^{A}\left( E\right) \), and so
\[
\left( {A - {\lambda }_{0}I}\right) \psi = \left( {A - {\lambda }_{0}I}\right) {\mu }^{A}\left( E\right) \psi .
\]
But \( {\mu }^{A}\left( E\right) = {1}_{E}\left( A\right) \) and \( A - {\lambda }_{0}I = f\left( A\right) \), where \( f\left( \lambda \right) = \lambda - {\lambda }_{0} \) . By the multiplicativity of the integral, then,
\[
\left( {A - {\lambda }_{0}I}\right) \psi = \left( {f{1}_{E}}\right) \left( A\right) \psi .
\]
But \( \left| {f\left( \lambda \right) {1}_{E}\left( \lambda \right) }\right| \leq \varepsilon \) and so by (7.16), the operator \( \left( {f{1}_{E}}\right) \left( A\right) \) has norm at most \( \varepsilon \) .
For Point 3, if \( {\lambda }_{0} \) is not in \( \bar{E} \), then the function \( g\left( \lambda \right) \mathrel{\text{:=}} {1}_{E}\left( \lambda \right) \left( {1/\left( {\lambda - {\lambda }_{0}}\right) }\right) \) is bounded. Thus, \( g\left( A\right) \) is a bounded operator and
\[
g\left( A\right) \left( {A - {\lambda }_{0}I}\right) = \left( {A - {\lambda }_{0}I}\right) g\left( A\right) = {1}_{E}\left( A\right) .
\]
This shows that the restriction to \( {V}_{E} \) of \( g\left( A\right) \) is the inverse of the restriction to \( {V}_{E} \) of \( A \) . Thus, \( {\lambda }_{0} \) is not in the spectrum of \( {\left. A\right| }_{{V}_{E}} \) .
For Point 4, fix \( {\lambda }_{0} \in \sigma \left( A\right) \) and suppose for some \( \varepsilon > 0 \), we have \( \mu \left( {\left( {{\lambda }_{0} - \varepsilon ,{\lambda }_{0} + \varepsilon }\right) }\right) = 0 \) . Consider, then, the bounded function \( f \) defined by
\[
f\left( \lambda \right) = \left\{ \begin{matrix} \frac{1}{\lambda - {\lambda }_{0}} & \left| {\lambda - {\lambda }_{0}}\right| \geq \varepsilon \\ 0 & \left| {\lambda - {\lambda }_{0}}\right| < \varepsilon \end{matrix}\right.
\]
Since \( f\left( \lambda \right) \cdot \left( {\lambda - {\lambda }_{0}}\right) \) equals 1 except on \( \left( {{\lambda }_{0} - \varepsilon ,{\lambda }_{0} + \varepsilon }\right) \), the equation \( f\left( \lambda \right) \cdot \left( {\lambda - {\lambda }_{0}}\right) = 1 \) holds \( \mu \) -almost everywhere. Thus, the integral of this function coincides with the integral of the constant function 1, which is \( I \) . Since the integral is multiplicative, we see that
\[
f\left( A\right) \left( {A - {\lambda }_{0}I}\right) = \left( {A - {\lambda }_{0}I}\right) f\left( A\right) = I,
\]
showing that the bounded operator \( f\left( A\right) \) is the inverse of \( \left( {A - {\lambda }_{0}I}\right) \) . This contradicts the assumption that \( {\lambda }_{0} \in \sigma \left( A\right) \) . ∎
Proposition 7.16 If \( A \in \mathcal{B}\left( \mathbf{H}\right) \) is self-adjoint and \( B \in \mathcal{B}\left( \mathbf{H}\right) \) commutes with \( A \), the following results hold.
1. For all bounded measurable functions \( f \) on \( \sigma \left( A\right) \), the operator \( f\left( A\right) \) commutes with \( B \) .
2. Each spectral subspace for \( A \) is invariant under \( B \) .
The proof of this proposition is deferred until Chap. 8. We conclude this section by fulfilling (at least for bounded self-adjoint operators) one of the goals of the spectral theorem, namely to give a probability measure describing the probabilities for measurements of a self-adjoint operator \( A \) in the state \( \psi \) .
Proposition 7.17 Suppose \( A \in \mathcal{B}\left( \mathbf{H}\right) \) is self-adjoint and \( \psi \in \mathbf{H} \) is a unit vector. Then there exists a unique probability measure \( {\mu }_{\psi }^{A} \) on \( \mathbb{R} \) such that
\[
{\int }_{\mathbb{R}}{\lambda }^{m}d{\mu }_{\psi }^{A}\left( \lambda \right) = \left\langle {\psi ,{A}^{m}\psi }\right\rangle
\]
for all non-negative integers \( m \) .
We will prove a version of Proposition 7.17 for unbounded self-adjoint operators in Chap. 9. In the unbounded case, however, we will not obtain uniqueness of the probability measure, even if \( \psi \) is in the domain of \( {A}^{m} \) for all \( m \) . Even in the unbounded case, however, the spectral theorem provides a canonical choice of the probability measure.
Proof. We define a measure \( {\mu }_{\psi }^{A} \) on \( \sigma \left( A\right) \) as in Sect. 7.2.2 by
\[
{\mu }_{\psi }^{A}\left( E\right) = \left\langle {\psi ,{\mu }^{A}\left( E\right) \psi }\right\rangle
\]
The properties of integration with respect to \( {\mu }^{A} \) then tell us that
\[
\left\langle {\psi ,{A}^{m}\psi }\right\rangle = \left\langle {\psi ,\left( {{\int }_{\sigma \left( A\right) }{\lambda }^{m}d{\mu }^{A}\left( \lambda \right) }\right) \psi }\right\rangle = {\int }_{\sigma \left( A\right) }{\lambda }^{m}d{\mu }_{\psi }^{A}\left( \lambda \right) .
\]
We then extend \( {\mu }_{\psi }^{A} \) to \( \mathbb{R} \) by setting it equal to zero on \( \mathbb{R} \smallsetminus \sigma \left( A\right) \), establishing the existence of the desired probability measure on \( \mathbb{R} \) . Since
\[
\left| \left\langle {\psi ,{A}^{m}\psi }\right\rangle \right| \leq \parallel \psi {\parallel }^{2}\begin{Vmatrix}{A}^{m}\end{Vmatrix} \leq \parallel \psi {\parallel }^{2}\parallel A{\parallel }^{m},
\]
the moments grow only exponentially with \( m \) . Thus, standard uniqueness results for the moment problem (e.g., Theorem 8.1 in Chap. 4 of [18]) give the uniqueness of \( {\mu }_{\psi }^{A} \) .
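In finite dimensions \( {\mu }_{\psi }^{A} \) is simply a sum of point masses \( {\left| \left\langle {v}_{i},\psi \right\rangle \right| }^{2} \) at the eigenvalues \( {\lambda }_{i} \), and the moment identity of Proposition 7.17 can be checked directly; a small numerical sketch of my own, with an arbitrary symmetric matrix and unit vector:

```python
# Moments of mu_psi^A versus <psi, A^m psi> for a symmetric matrix A.
import numpy as np

A = np.array([[0., 1., 0.],
              [1., 0., 2.],
              [0., 2., 1.]])
lam, V = np.linalg.eigh(A)
psi = np.ones(3) / np.sqrt(3.0)

weights = np.abs(V.T @ psi) ** 2          # masses of mu_psi^A at the eigenvalues
for m in range(5):
    moment = np.sum(weights * lam ** m)
    direct = psi @ np.linalg.matrix_power(A, m) @ psi
    print(m, np.isclose(moment, direct))  # True for each m
```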
## 7.3 Spectral Theorem for Bounded Self-Adjoint Operators, II
As we have already noted in Sect. 6.5, one version of the spectral theorem asserts that every self-adjoint operator is unitarily equivalent to a multiplication operator. In the case of a bounded self-adjoint operator \( A \), on a separable Hilbert space \( \mathbf{H} \), this result means that \( A \) is unitarily equivalent to the operator \( {M}_{h} \) on \( {L}^{2}\left( {X,\mu }\righ |
1185_(GTM91)The Geometry of Discrete Groups | Definition 9.2.3 |
Definition 9.2.3. A fundamental domain \( D \) for \( G \) is said to be locally finite if and only if each compact subset of \( \Delta \) meets only finitely many \( G \) -images of \( \widetilde{D} \) .
In order to appreciate the implications of Definition 9.2.3, suppose that \( D \) is locally finite. Each \( z \) in \( \Delta \) has a compact neighbourhood \( N \) and this meets only finitely many \( G \) -images, say \( {g}_{i}\left( \widetilde{D}\right) \), of \( \widetilde{D} \) . By decreasing \( N \) if necessary, we may assume that all these images actually contain \( z \) . Finally, if \( h\left( D\right) \) meets \( N \), then \( h\left( D\right) \) meets the union of the \( {g}_{i}\left( \widetilde{D}\right) \) and so (as \( \partial D \) has measure zero) \( h = {g}_{i} \) for some \( i \) . To summarize, if \( D \) is locally finite, each \( z \) has a compact neighbourhood \( N \) and an associated finite subset \( {g}_{1},\ldots ,{g}_{n} \) of \( G \) with
(1) \( z \in {g}_{1}\left( \widetilde{D}\right) \cap \cdots \cap {g}_{n}\left( \widetilde{D}\right) \) ;
(2) \( N \subset {g}_{1}\left( \widetilde{D}\right) \cup \cdots \cup {g}_{n}\left( \widetilde{D}\right) \) ;
(3) \( h\left( D\right) \cap N = \varnothing \) unless \( h \) is some \( {g}_{j} \) .
We shall use these facts consistently throughout the following discussion.
Theorem 9.2.4. \( D \) is locally finite if and only if \( \theta \) is a homeomorphism of \( \widetilde{D}/G \) onto \( \Delta /G \) .
Proof. First, we suppose that \( \theta \) is a homeomorphism and that \( D \) is not locally finite and we seek a contradiction. As \( D \) is not locally finite there exists some \( w \) in \( \Delta \), points \( {z}_{1},{z}_{2},\ldots \) in \( D \) and distinct \( {g}_{1},{g}_{2},\ldots \) in \( G \) with
\[
{g}_{n}\left( {z}_{n}\right) \rightarrow w\;\text{ as }n \rightarrow \infty .
\]
(9.2.2)
Now write
\[
K = \left\{ {{z}_{1},{z}_{2},\ldots }\right\}
\]
First, \( K \subset D \) . Next, every neighbourhood of \( w \) meets infinitely many of the distinct images \( {g}_{n}\left( D\right) \), thus \( w \notin h\left( D\right) \) for any \( h \) in \( G \) . We deduce that
\[
\pi \left( w\right) \notin \pi \left( K\right)
\]
The contradiction we seek is obtained by proving that
\[
\pi \left( w\right) \in \pi \left( K\right) \text{.}
\]
(9.2.3)
The points \( {g}_{n}^{-1}\left( w\right) \) cannot accumulate in \( \Delta \) as \( G \) is discrete. Because of (9.2.2), the points \( {z}_{n} \) cannot accumulate in \( \Delta \) and this shows that \( K \) is closed in \( D \) . As \( K \subset D \), we have
\[
{\widetilde{\pi }}^{-1}\left( {\widetilde{\pi }K}\right) = K
\]
and the definition of the quotient topology on \( \widetilde{D}/G \) may be invoked to deduce that \( \widetilde{\pi }\left( K\right) \) is closed in \( \widetilde{D}/G \) . By (9.2.1),
\[
\pi \left( K\right) = {\pi \tau }\left( K\right) = \theta \left( {\widetilde{\pi }K}\right)
\]
and as \( \theta \) is a homeomorphism, this is closed in \( \Delta /G \) . We conclude that
\[
\pi \left( w\right) = \lim \pi \left( {{g}_{n}{z}_{n}}\right) = \lim \pi \left( {z}_{n}\right) \in \pi \left( K\right)
\]
and this is (9.2.3).
To complete the proof, we must show that if \( D \) is locally finite, then \( \theta \) is a homeomorphism. We assume, then, that \( D \) is locally finite: by Proposition 9.2.2, we need only prove that \( \theta \) maps open sets to open sets.
Accordingly, we select any non-empty open subset \( A \) of \( \widetilde{D}/G \) . As \( \widetilde{\pi } \) is both surjective and continuous, there exists an open subset \( B \) of \( \Delta \) with
\[
{\widetilde{\pi }}^{-1}\left( A\right) = \widetilde{D} \cap B,\;\widetilde{\pi }\left( {\widetilde{D} \cap B}\right) = A.
\]
Now put
\[
V = \mathop{\bigcup }\limits_{{g \in G}}g\left( {\widetilde{D} \cap B}\right)
\]
Then
\[
\pi \left( V\right) = \pi \left( {\widetilde{D} \cap B}\right)
\]
\[
= {\pi \tau }\left( {\widetilde{D} \cap B}\right)
\]
\[
= \theta \widetilde{\pi }\left( {\widetilde{D} \cap B}\right)
\]
\[
= \theta \left( A\right) \text{.}
\]
We need to prove that \( \theta \left( A\right) \) is open but as \( \pi \) is an open map, it is sufficient to prove that \( V \) is an open subset of \( \Delta \) . This has nothing to do with quotient spaces and depends only on the assumption that \( D \) is locally finite.
Consider any \( z \) in \( V \) : we must show that \( V \) contains an open set \( N \) which contains \( z \) . As \( V \) is \( G \) -invariant, we may assume that
\[
z \in \widetilde{D} \cap B.
\]
As \( D \) is locally finite there exists an open hyperbolic disc \( N \) with centre \( z \) which meets only the images
\[
{g}_{0}\left( \widetilde{D}\right) ,{g}_{1}\left( \widetilde{D}\right) ,\ldots ,{g}_{m}\left( \widetilde{D}\right)
\]
of \( \widetilde{D} \) where \( {g}_{0} = I \) : also, we may suppose that each of these sets contains \( z \) . Then
\[
{g}_{j}^{-1}\left( z\right) \in \widetilde{D},\;j = 0,\ldots, m,
\]
and this means that \( \widetilde{\pi } \) is defined at \( {g}_{j}^{-1}\left( z\right) \) . Clearly \( \widetilde{\pi } \) maps this point to \( \widetilde{\pi }\left( z\right) \) in \( A \) so
\[
{g}_{j}^{-1}\left( z\right) \in {\widetilde{\pi }}^{-1}\left( A\right) = \widetilde{D} \cap B.
\]
It follows that \( z \in {g}_{j}\left( B\right) \) and by decreasing the radius of \( N \) still further, we may assume that
\[
N \subset {g}_{0}\left( B\right) \cap \cdots \cap {g}_{m}\left( B\right)
\]
It is now clear that \( N \subset V \) . Indeed, if \( w \in N \), then for some \( j, w \) is in both \( {g}_{j}\left( \widetilde{D}\right) \) and \( {g}_{j}\left( B\right) \) :
\[
w \in {g}_{j}\left( {\widetilde{D} \cap B}\right) \subset V.
\]
The proof is now complete.
Next, we give an example to show that convexity is not sufficient to ensure local finiteness.
Example 9.2.5. We shall exhibit a convex five-sided polygon which is a fundamental domain for a Fuchsian group \( G \) but which is not locally finite. The group \( G \) is the group acting on \( {H}^{2} \) and generated by
\[
f\left( z\right) = {2z},\;g\left( z\right) = \frac{{3z} + 4}{{2z} + 3}.
\]
Our first task is to show that \( G \) is discrete and to identify a fundamental domain for \( G \) . To do this, consider Figure 9.2.3.
A computation shows that \( f\left( {\gamma }_{1}\right) = {\gamma }_{2} \) and \( g\left( {\sigma }_{1}\right) = {\sigma }_{2} \) and a straightforward application of Theorem 5.3.15 (with \( {G}_{1} = \langle f\rangle ,{D}_{1} \) the region between \( {\gamma }_{1} \) and \( {\gamma }_{2} \) and similarly for \( g \) ) shows that \( G \) is discrete and \( h\left( D\right) \cap D \) \( = \varnothing \) whenever \( h \in G, h \neq I \) ( \( D \) being the region bounded by \( {\gamma }_{1},{\gamma }_{2},{\sigma }_{1} \) and \( \left. {\sigma }_{2}\right) \) .
![32ff4eba-fdcc-4eb0-a03c-f403959f1f6d_223_0.jpg](images/32ff4eba-fdcc-4eb0-a03c-f403959f1f6d_223_0.jpg)
Figure 9.2.3
In fact, \( D \) is a (locally finite) fundamental domain for \( G \) . To see this, take any \( z \) in \( {H}^{2} \) and select an image of \( z \) which is closest to \( i\sqrt{2} \) (this is possible as \( G \) is discrete). By relabelling, we may assume that \( z \) itself has this property. It is now easy to see that
\[
\rho \left( {z, i\sqrt{2}}\right) \leq \rho \left( {z, f\left( {i\sqrt{2}}\right) }\right) = \rho \left( {{f}^{-1}z, i\sqrt{2}}\right)
\]
if and only if \( \left| z\right| \leq 2 \) . Similarly, \( z \) is closer to \( i\sqrt{2} \) than to \( {f}^{-1}\left( {i\sqrt{2}}\right) \) if and only if \( \left| z\right| \geq 1 \) . With a little more computation (Theorem 7.2.1) or geometry we find that \( z \) lies outside or on \( {\sigma }_{1} \) and \( {\sigma }_{2} \) because
\[
\rho \left( {z, i\sqrt{2}}\right) \leq \rho \left( {{gz}, i\sqrt{2}}\right) = \rho \left( {z,{g}^{-1}\left( {i\sqrt{2}}\right) }\right)
\]
and similarly for \( {g}^{-1} \) . We deduce that \( z \in \widetilde{D} \) and this proves that \( D \) is a fundamental domain for \( G \) .
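The first of these equidistance computations is easy to check symbolically (my sketch, not the book’s), using \( \cosh \rho \left( {z, w}\right) = 1 + {\left| z - w\right| }^{2}/\left( {2\operatorname{Im}z\operatorname{Im}w}\right) \) : the inequality \( \rho \left( {z, i\sqrt{2}}\right) \leq \rho \left( {z, f\left( {i\sqrt{2}}\right) }\right) \) becomes \( 2{\left| z - i\sqrt{2}\right| }^{2} \leq {\left| z - 2i\sqrt{2}\right| }^{2} \), which simplifies to \( \left| z\right| \leq 2 \) .

```python
# Check: |z - 2i*sqrt(2)|^2 - 2|z - i*sqrt(2)|^2 = 4 - x^2 - y^2 for z = x + iy.
import sympy as sp

x, y = sp.symbols('x y', real=True)
lhs = 2 * (x**2 + (y - sp.sqrt(2))**2)
rhs = x**2 + (y - 2*sp.sqrt(2))**2
print(sp.expand(rhs - lhs))   # 4 - x**2 - y**2, so the inequality says x^2 + y^2 <= 4
```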
We proceed by modifying \( D \) to obtain a new fundamental domain \( \sum \) . The essential feature of this process is to replace parts of \( D \) by various images of these parts in such a way that the modified domain is still a fundamental domain. First, we replace
\[
{D}_{1} = D \cap \{ z : \operatorname{Re}\left\lbrack z\right\rbrack < 0\}
\]
by \( g\left( {D}_{1}\right) \) : the new domain is illustrated in Figure 9.2.4 and this is still a fundamental domain for \( G \) .
Next, construct the vertical geodesics \( x = 1 \) and \( x = 2 \) and let \( w,\zeta \) and \( {\zeta }^{\prime } \) be as in Figure 9.2.5. We now replace the closed triangle \( T\left( {w,1,{2w}}\right) \) with vertices \( w,1,{2w} \) by the triangle \( T\left( {{2w},2,{4w}}\right) \left( { = f\left( T\right) }\right) \) . Each Euclidean segment \( \left\lbrack {\zeta ,{2\zeta }}\right\rbrack \), where \( \zeta \) lies on \( \left| z\right| = 1 \) and is strictly between \( w \) and \( i \), is replaced by the equivalent segment \( \left\lbrack {{\zeta }^{\prime },2{\zeta }^{\prime }}\right\rbrack \) . Finally, the segment \( \left\lbrack {i,{2i}}\right\rbrack \) is deleted: note, however, that \( \left\lbrack {i,{2i}}\right\rbrack \) is equivalent to the hyperbolic segment \( \left\lbrack {g\left( i\right), g\left( {2i}\right) }\right\rbrack \) on the boundary of \( g\left( {D}_{1}\right) \) and, as this segment is retained, the new domain \( \sum \) still contains in its closure at least one point from every orbit.
![32ff4eba-fdcc-4eb0-a03c-f403959f1f6d_224_0.jpg](images/32ff4eba-fdcc-4eb0-a03c-f403959f1f6d_224_0.jpg)
Figure 9.2.4
![32ff4eba-fdcc-4eb0-a03c-f403959f1f6d_224_1.jpg](images/32ff4eba-fdcc-4eb0-a03c-f403959f1f6d_224_1.jpg)
Figure 9.2.5
The construction given above replaces the quadrilateral \( D \) |
1112_(GTM267)Quantum Theory for Mathematicians | Definition 10.7 |
Definition 10.7 (Measurement Probabilities) If \( A \) is a self-adjoint operator on \( \mathbf{H} \), then for any unit vector \( \psi \in \mathbf{H} \), define a probability measure \( {\mu }_{\psi }^{A} \) on \( \mathbb{R} \) by the formula
\[
{\mu }_{\psi }^{A}\left( E\right) = \left\langle {\psi ,{\mu }^{A}\left( E\right) \psi }\right\rangle
\]
If the operator \( A \) represents some observable in quantum mechanics, then we interpret \( {\mu }_{\psi }^{A} \) to be the probability distribution for the result of measuring \( A \) in the state \( \psi \) .
Proposition 10.8 Let \( A \) be a self-adjoint operator on \( \mathbf{H} \) . Then the spectral subspaces \( {V}_{E} \) associated to \( A \) have the following properties.
1. If \( E \) is a bounded subset of \( \mathbb{R} \), then \( {V}_{E} \subset \operatorname{Dom}\left( A\right) ,{V}_{E} \) is invariant under \( A \), and the restriction of \( A \) to \( {V}_{E} \) is bounded.
2. If \( E \) is contained in \( \left( {{\lambda }_{0} - \varepsilon ,{\lambda }_{0} + \varepsilon }\right) \), then for all \( \psi \in {V}_{E} \), we have
\[
\begin{Vmatrix}{\left( {A - {\lambda }_{0}I}\right) \psi }\end{Vmatrix} \leq \varepsilon \parallel \psi \parallel .
\]
Proof. Point 1 holds because the function \( f\left( \lambda \right) = \lambda \) is bounded on \( E \) . (See the proof of Proposition 10.3.) Point 2 then holds because, as in the proof of Proposition 10.3, the restriction of \( A \) to \( {V}_{E} \) coincides with the restriction to \( {V}_{E} \) of the operator \( f\left( A\right) \), where \( f\left( \lambda \right) = \lambda {1}_{E}\left( \lambda \right) \) . ∎
Theorem 10.9 (Spectral Theorem, Second Form) Suppose \( A \) is a self-adjoint operator on \( \mathbf{H} \) . Then there is a \( \sigma \) -finite measure \( \mu \) on \( \sigma \left( A\right) \) , a direct integral
\[
{\int }_{\sigma \left( A\right) }^{ \oplus }{\mathbf{H}}_{\lambda }{d\mu }\left( \lambda \right)
\]
and a unitary map \( U \) from \( \mathbf{H} \) to the direct integral such that:
\[
U\left( {\operatorname{Dom}\left( A\right) }\right) = \left\{ {s \in {\int }_{\sigma \left( A\right) }^{ \oplus }{\mathbf{H}}_{\lambda }{d\mu }\left( \lambda \right) \left| {\;{\int }_{\sigma \left( A\right) }\parallel {\lambda s}\left( \lambda \right) {\parallel }_{\lambda }^{2}{d\mu }\left( \lambda \right) < \infty }\right. }\right\}
\]
and such that
\[
\left( {{UA}{U}^{-1}\left( s\right) }\right) \left( \lambda \right) = {\lambda s}\left( \lambda \right)
\]
for all \( s \in U\left( {\operatorname{Dom}\left( A\right) }\right) \) .
Theorem 10.10 (Spectral Theorem, Multiplication Operator Form) Suppose \( A \) is a self-adjoint operator on \( \mathbf{H} \) . Then there is a \( \sigma \) -finite measure space \( \left( {X,\mu }\right) \), a measurable, real-valued function \( h \) on \( X \), and a unitary map \( U : \mathbf{H} \rightarrow {L}^{2}\left( {X,\mu }\right) \) such that
\[
U\left( {\operatorname{Dom}\left( A\right) }\right) = \left\{ {\psi \in {L}^{2}\left( {X,\mu }\right) \mid {h\psi } \in {L}^{2}\left( {X,\mu }\right) }\right\}
\]
and such that
\[
\left( {{UA}{U}^{-1}\left( \psi \right) }\right) \left( x\right) = h\left( x\right) \psi \left( x\right)
\]
for all \( \psi \in U\left( {\operatorname{Dom}\left( A\right) }\right) \) .
These theorems are also proved in Sect. 10.4.
## 10.2 Stone's Theorem and One-Parameter Unitary Groups
In this section we explore the notion of one-parameter unitary groups and their connection to self-adjoint operators. We assume here the spectral theorem, the proof of which (in Sect. 10.4) does not use any results from this section.
Definition 10.11 A one-parameter unitary group on \( \mathbf{H} \) is a family \( U\left( t\right), t \in \mathbb{R} \), of unitary operators with the property that \( U\left( 0\right) = I \) and that \( U\left( {s + t}\right) = U\left( s\right) U\left( t\right) \) for all \( s, t \in \mathbb{R} \) . A one-parameter unitary group is said to be strongly continuous if
\[
\mathop{\lim }\limits_{{s \rightarrow t}}\parallel U\left( t\right) \psi - U\left( s\right) \psi \parallel = 0
\]
(10.11)
for all \( \psi \in \mathbf{H} \) and all \( t \in \mathbb{R} \) .
Almost all one-parameter unitary groups arising in applications are strongly continuous.
Example 10.12 Let \( \mathbf{H} = {L}^{2}\left( {\mathbb{R}}^{n}\right) \) and let \( {U}_{\mathbf{a}}\left( t\right) \) be the translation operator given by
\[
\left( {{U}_{\mathbf{a}}\left( t\right) \psi }\right) \left( \mathbf{x}\right) = \psi \left( {\mathbf{x} + t\mathbf{a}}\right)
\]
\( \left( {10.12}\right) \)
Then \( U\left( \cdot \right) \) is a strongly continuous one-parameter unitary group.
Proof. It is easy to see that \( {U}_{\mathbf{a}}\left( \cdot \right) \) is a one-parameter unitary group. To see that \( {U}_{\mathbf{a}}\left( \cdot \right) \) is strongly continuous, consider first the case in which \( \psi \) is continuous and compactly supported. Since a continuous function on a compact metric space is automatically uniformly continuous, it follows that \( \psi \left( {\mathbf{x} + t\mathbf{a}}\right) \) tends uniformly to \( \psi \left( \mathbf{x}\right) \) as \( t \) tends to zero. Since also the support of \( \psi \) is compact and thus of finite measure, it follows that \( \psi \left( {\mathbf{x} + t\mathbf{a}}\right) \) tends to \( \psi \left( \mathbf{x}\right) \) in \( {L}^{2}\left( {\mathbb{R}}^{n}\right) \) as \( t \) tends to zero.
Now, the space \( {C}_{c}\left( {\mathbb{R}}^{n}\right) \) of continuous functions of compact support is dense in \( {L}^{2}\left( {\mathbb{R}}^{n}\right) \) (Theorem A.10). Thus, given \( \varepsilon > 0 \) and \( \psi \in {L}^{2}\left( {\mathbb{R}}^{n}\right) \), we can find \( \phi \in {C}_{c}\left( {\mathbb{R}}^{n}\right) \) such that \( \parallel \psi - \phi {\parallel }_{{L}^{2}\left( \mathbb{R}\right) } < \varepsilon /3 \) . Then choose \( \delta \) so that \( \begin{Vmatrix}{{U}_{\mathbf{a}}\left( a\right) \phi - \phi }\end{Vmatrix} < \varepsilon /3 \) whenever \( \left| a\right| < \delta \) . Then given \( t \in \mathbb{R} \), if \( \left| {t - s}\right| < \delta \), we have
\[
\begin{Vmatrix}{{U}_{\mathbf{a}}\left( t\right) \psi - {U}_{\mathbf{a}}\left( s\right) \psi }\end{Vmatrix}
\]
\[
\leq \begin{Vmatrix}{{U}_{\mathbf{a}}\left( t\right) \psi - {U}_{\mathbf{a}}\left( t\right) \phi }\end{Vmatrix} + \begin{Vmatrix}{{U}_{\mathbf{a}}\left( t\right) \phi - {U}_{\mathbf{a}}\left( s\right) \phi }\end{Vmatrix} + \begin{Vmatrix}{{U}_{\mathbf{a}}\left( s\right) \phi - {U}_{\mathbf{a}}\left( s\right) \psi }\end{Vmatrix}
\]
\[
= \begin{Vmatrix}{{U}_{\mathbf{a}}\left( t\right) \left( {\psi - \phi }\right) }\end{Vmatrix} + \begin{Vmatrix}{{U}_{\mathbf{a}}\left( s\right) \left( {{U}_{\mathbf{a}}\left( {t - s}\right) \phi - \phi }\right) }\end{Vmatrix} + \begin{Vmatrix}{{U}_{\mathbf{a}}\left( s\right) \left( {\phi - \psi }\right) }\end{Vmatrix}.
\]
(10.13)
Since \( {U}_{\mathbf{a}}\left( t\right) \) and \( {U}_{\mathbf{a}}\left( s\right) \) are unitary, we can see that each of the terms on the last line of (10.13) is less than \( \varepsilon /3 \) . ∎
Note that for \( \mathbf{a} \neq 0 \) the unitary group \( {U}_{\mathbf{a}}\left( \cdot \right) \) in Example 10.12 is not continuous in the operator norm topology. After all, given any \( \varepsilon \neq 0 \), we can take a nonzero element \( \psi \) of \( {L}^{2}\left( {\mathbb{R}}^{n}\right) \) that is supported in a very small ball around the origin. Then \( {U}_{\mathbf{a}}\left( \varepsilon \right) \psi \) is orthogonal to \( \psi \) and has the same norm as \( \psi \), so that
\[
\begin{Vmatrix}{{U}_{\mathbf{a}}\left( \varepsilon \right) \psi - {U}_{\mathbf{a}}\left( 0\right) \psi }\end{Vmatrix} = \begin{Vmatrix}{{U}_{\mathbf{a}}\left( \varepsilon \right) \psi - \psi }\end{Vmatrix} = \sqrt{2}\parallel \psi \parallel .
\]
Thus, \( \begin{Vmatrix}{{U}_{\mathbf{a}}\left( \varepsilon \right) - {U}_{\mathbf{a}}\left( 0\right) }\end{Vmatrix} \geq \sqrt{2} \) for all \( \varepsilon \neq 0 \) .
Definition 10.13 If \( U\left( \cdot \right) \) is a strongly continuous one-parameter unitary group, the infinitesimal generator of \( U\left( \cdot \right) \) is the operator \( A \) given by
\[
{A\psi } = \mathop{\lim }\limits_{{t \rightarrow 0}}\frac{1}{i}\frac{U\left( t\right) \psi - \psi }{t}
\]
(10.14)
with \( \operatorname{Dom}\left( A\right) \) consisting of the set of \( \psi \in \mathbf{H} \) for which the limit in (10.14) exists in the norm topology on \( \mathbf{H} \) .
The following result shows that we can construct a strongly continuous one-parameter unitary group from any self-adjoint operator \( A \) by setting \( U\left( t\right) = {e}^{iAt} \) . Furthermore, the original operator \( A \) is precisely the infinitesimal generator of \( U\left( t\right) \) .
Proposition 10.14 Suppose \( A \) is a self-adjoint operator on \( \mathbf{H} \) and let \( U\left( \cdot \right) \) be defined by
\[
U\left( t\right) = {e}^{itA}
\]
where the operator \( {e}^{itA} \) is defined by the functional calculus for \( A \) . Then the following hold.
1. \( U\left( \cdot \right) \) is a strongly continuous one-parameter unitary group.
2. For all \( \psi \in \operatorname{Dom}\left( A\right) \), we have
\[
{A\psi } = \mathop{\lim }\limits_{{t \rightarrow 0}}\frac{1}{i}\frac{U\left( t\right) \psi - \psi }{t}
\]
where the limit is in the norm topology on \( \mathbf{H} \) .
3. For all \( \psi \in \mathbf{H} \), if the limit
\[
\mathop{\lim }\limits_{{t \rightarrow 0}}\frac{1}{i}\frac{U\left( t\right) \psi - \psi }{t}
\]
exists in the norm topology on \( \mathbf{H} \), then \( \psi \in \operatorname{Dom}\left( A\right) \) and the limit is equal to \( {A\psi } \) .
Proof. Since \( \sigma \left( A\right) \subset \mathbb{R} \), the function \( f\left( \lambda \right) \mathrel{\text{:=}} {e}^{it\lambda } \) is bounded on \( \sigma \left( A\right) \) and |
1185_(GTM91)The Geometry of Discrete Groups | Definition 3.1.1 |
Definition 3.1.1. A Möbius transformation acting in \( {\widehat{\mathbb{R}}}^{n} \) is a finite composition of reflections (in spheres or planes).
Clearly, each Möbius transformation is a homeomorphism of \( {\widehat{\mathbb{R}}}^{n} \) onto itself. The composition of two Möbius transformations is again a Möbius transformation and so also is the inverse of a Möbius transformation for if \( \phi = {\phi }_{1}\cdots {\phi }_{m} \) (where the \( {\phi }_{j} \) are reflections) then \( {\phi }^{-1} = {\phi }_{m}\cdots {\phi }_{1} \) . Finally, for any reflection \( \phi \) say, \( {\phi }^{2}\left( x\right) = x \) and so the identity map is a Möbius transformation.
Definition 3.1.2. The group of Möbius transformations acting in \( {\widehat{\mathbb{R}}}^{n} \) is called the General Möbius group and is denoted by \( \operatorname{GM}\left( {\widehat{\mathbb{R}}}^{n}\right) \) .
Let us now consider examples of Möbius transformations. First, the translation \( x \mapsto x + a, a \in {\mathbb{R}}^{n} \), is a Möbius transformation for it is the reflection in \( \left( {x.a}\right) = 0 \) followed by the reflection in \( \left( {x.a}\right) = \frac{1}{2}{\left| a\right| }^{2} \) . Next, the magnification \( x \mapsto {kx}, k > 0 \), is also a Möbius transformation for it is the reflection in \( S\left( {0,1}\right) \) followed by the reflection in \( S\left( {0,\sqrt{k}}\right) \) .
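Both factorizations are easy to confirm numerically from the standard reflection formulas (reflection in the plane \( \left( {x.a}\right) = t \) is \( x \mapsto x - 2\left( {\left( {x.a}\right) - t}\right) a/{\left| a\right| }^{2} \), and reflection in the sphere \( S\left( {a, r}\right) \) is \( x \mapsto a + {r}^{2}\left( {x - a}\right) /{\left| x - a\right| }^{2} \)). The sketch below uses arbitrary test vectors; it is only a numerical confirmation of the two compositions just described.

```python
import numpy as np

def reflect_plane(x, a, t):
    """Reflection in the plane (x.a) = t."""
    return x - 2 * (np.dot(x, a) - t) * a / np.dot(a, a)

def reflect_sphere(x, center, r):
    """Reflection (inversion) in the sphere S(center, r)."""
    d = x - center
    return center + r**2 * d / np.dot(d, d)

rng = np.random.default_rng(0)
x = rng.normal(size=3)
a = rng.normal(size=3)

# Translation x -> x + a as two reflections in parallel planes (x.a) = 0 and (x.a) = |a|^2 / 2.
y = reflect_plane(reflect_plane(x, a, 0.0), a, 0.5 * np.dot(a, a))
print(np.allclose(y, x + a))            # True

# Magnification x -> k x (k > 0) as two reflections in the concentric spheres S(0,1) and S(0,sqrt(k)).
k = 3.7
z = reflect_sphere(reflect_sphere(x, np.zeros(3), 1.0), np.zeros(3), np.sqrt(k))
print(np.allclose(z, k * x))            # True
```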
If \( \phi \) and \( {\phi }^{ * } \) denote reflections in \( S\left( {a, r}\right) \) and \( S\left( {0,1}\right) \) respectively and if \( \psi \left( x\right) = {rx} + a \), then (by computation)
\[
\phi = \psi {\phi }^{ * }{\psi }^{-1}
\]
(3.1.4)
As \( \psi \) is a Möbius transformation, we see that any two reflections in spheres are conjugate in the group \( \operatorname{GM}\left( {\widehat{\mathbb{R}}}^{n}\right) \) .
As further examples of Möbius transformations we have the entire class of Euclidean isometries. Note that each isometry \( \phi \) of \( {\mathbb{R}}^{n} \) is regarded as acting on \( {\widehat{\mathbb{R}}}^{n} \) with \( \phi \left( \infty \right) = \infty \) .
Theorem 3.1.3. Each Euclidean isometry of \( {\mathbb{R}}^{n} \) is a composition of at most \( n + 1 \) reflections in planes. In particular each isometry is a Möbius transformation.
Proof. As each reflection in a plane is an isometry, it is sufficient to consider only those isometries \( \phi \) which satisfy \( \phi \left( 0\right) = 0 \) . Such isometries preserve the lengths of vectors because
\[
\left| {\phi \left( x\right) }\right| = \left| {\phi \left( x\right) - \phi \left( 0\right) }\right| = \left| {x - 0}\right| = \left| x\right|
\]
and also scalar products because
\[
2\left( {\phi \left( x\right) .\phi \left( y\right) }\right) = {\left| \phi \left( x\right) \right| }^{2} + {\left| \phi \left( y\right) \right| }^{2} - {\left| \phi \left( x\right) - \phi \left( y\right) \right| }^{2}
\]
\[
= {\left| x\right| }^{2} + {\left| y\right| }^{2} - {\left| x - y\right| }^{2}
\]
\[
= 2\left( {x.y}\right) \text{.}
\]
This means that the vectors \( \phi \left( {e}_{1}\right) ,\ldots ,\phi \left( {e}_{n}\right) \) are mutually orthogonal and so are linearly independent. As there are \( n \) of them, they are a basis of the vector space \( {\mathbb{R}}^{n} \) and so for each \( x \) in \( {\mathbb{R}}^{n} \) there is some \( \mu \) in \( {\mathbb{R}}^{n} \) with
\[
\phi \left( x\right) = \mathop{\sum }\limits_{{j = 1}}^{n}{\mu }_{j}\phi \left( {e}_{j}\right)
\]
But as the \( \phi \left( {e}_{j}\right) \) are mutually orthogonal,
\[
{\mu }_{j} = \left( {\phi \left( x\right) \cdot \phi \left( {e}_{j}\right) }\right)
\]
\[
= \left( {x.{e}_{j}}\right)
\]
\[
= {x}_{j}\text{.}
\]
Thus
\[
\phi \left( {\mathop{\sum }\limits_{{j = 1}}^{n}{x}_{j}{e}_{j}}\right) = \mathop{\sum }\limits_{{j = 1}}^{n}{x}_{j}\phi \left( {e}_{j}\right)
\]
and this shows that \( \phi \) is a linear transformation of \( {\mathbb{R}}^{n} \) into itself. As any isometry is \( 1 - 1 \), the kernel of \( \phi \) has dimension zero: thus \( \phi \left( {\mathbb{R}}^{n}\right) = {\mathbb{R}}^{n} \) .
If \( A \) is the matrix of \( \phi \) with respect to the basis \( {e}_{1},\ldots ,{e}_{n} \) then \( \phi \left( x\right) = {xA} \) and \( A \) has rows \( \phi \left( {e}_{1}\right) ,\ldots ,\phi \left( {e}_{n}\right) \) . This shows that the \( \left( {i, j}\right) \) th element of the matrix \( A{A}^{t} \) is \( \left( {\phi \left( {e}_{i}\right) .\phi \left( {e}_{j}\right) }\right) \) and as this is \( \left( {{e}_{i}.{e}_{j}}\right) \), it is 1 if \( i = j \) and is zero otherwise. We conclude that \( A \) is an orthogonal matrix.
We shall now show that \( \phi \) is a composition of at most \( n \) reflections in planes. First, put
\[
{a}_{1} = \phi \left( {e}_{1}\right) - {e}_{1}
\]
If \( {a}_{1} \neq 0 \), we let \( {\psi }_{1} \) be the reflection in the plane \( P\left( {{a}_{1},0}\right) \) and a direct computation using (3.1.2) shows that \( {\psi }_{1} \) maps \( \phi \left( {e}_{1}\right) \) to \( {e}_{1} \) . If \( {a}_{1} = 0 \) we let \( {\psi }_{1} \) be the identity so that in all cases, \( {\psi }_{1} \) maps \( \phi \left( {e}_{1}\right) \) to \( {e}_{1} \) . Now put \( {\phi }_{1} = {\psi }_{1}\phi \) : thus \( {\phi }_{1} \) is an isometry which fixes 0 and \( {e}_{1} \) .
In general, suppose that \( {\phi }_{k} \) is an isometry which fixes each of \( 0,{e}_{1},\ldots ,{e}_{k} \) and let
\[
{a}_{k + 1} = {\phi }_{k}\left( {e}_{k + 1}\right) - {e}_{k + 1}.
\]
Again, we let \( {\psi }_{k + 1} \) be the identity (if \( {a}_{k + 1} = 0 \) ) or the reflection in \( P\left( {{a}_{k + 1},0}\right) \) (if \( {a}_{k + 1} \neq 0 \) ) and exactly as above, \( {\psi }_{k + 1}{\phi }_{k} \) fixes 0 and \( {e}_{k + 1} \) . In addition, if \( 1 \leq j \leq k \) then
\[
\left( {{e}_{j} \cdot {a}_{k + 1}}\right) = \left( {{e}_{j} \cdot {\phi }_{k}\left( {e}_{k + 1}\right) }\right) - \left( {{e}_{j} \cdot {e}_{k + 1}}\right)
\]
\[
= \left( {{\phi }_{k}\left( {e}_{j}\right) \cdot {\phi }_{k}\left( {e}_{k + 1}\right) }\right) - 0
\]
\[
= \left( {{e}_{j} \cdot {e}_{k + 1}}\right)
\]
\[
= 0
\]
and so by (3.1.2),
\[
{\psi }_{k + 1}\left( {e}_{j}\right) = {e}_{j}
\]
As \( {\phi }_{k} \) also fixes \( 0,{e}_{1},\ldots ,{e}_{k} \) we deduce that \( {\psi }_{k + 1}{\phi }_{k} \) fixes each of \( 0,{e}_{1} \) , \( \ldots ,{e}_{k + 1} \) . In conclusion, then, there are maps \( {\psi }_{j} \) (each the identity or a reflection in a plane) so that the isometry \( {\psi }_{n}\cdots {\psi }_{1}\phi \) fixes each of \( 0,{e}_{1},\ldots ,{e}_{n} \) . By our earlier remarks, such a map is necessarily a linear transformation and so is the identity: thus \( \phi = {\psi }_{1}\cdots {\psi }_{n} \) . This completes the proof of Theorem 3.1.3 as any isometry composed with a suitable reflection is of the form \( \phi \) .
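The proof is constructive and can be run directly. The sketch below is a numerical illustration (it uses the column-vector convention \( x \mapsto Ax \) rather than the book's row convention \( x \mapsto {xA} \), and a random orthogonal matrix as test data): it builds the reflections \( {\psi }_{k} \) exactly as in the proof, reflecting in \( P\left( {{a}_{k + 1},0}\right) \) with \( {a}_{k + 1} = {\phi }_{k}\left( {e}_{k + 1}\right) - {e}_{k + 1} \) whenever this vector is nonzero, and checks that the product of at most \( n \) reflections recovers \( A \).

```python
import numpy as np

def householder(a):
    """Matrix of the reflection in the plane P(a, 0) = {x : (x.a) = 0}."""
    n = len(a)
    return np.eye(n) - 2 * np.outer(a, a) / np.dot(a, a)

def reflection_factors(A, tol=1e-12):
    """Factor an orthogonal matrix A (acting as x -> A x) into at most n plane reflections."""
    n = A.shape[0]
    M = A.copy()
    factors = []
    for k in range(n):
        e = np.eye(n)[:, k]
        a = M @ e - e                    # a_{k+1} = phi_k(e_{k+1}) - e_{k+1}
        if np.linalg.norm(a) > tol:
            R = householder(a)
            factors.append(R)
            M = R @ M                    # the new map fixes e_1, ..., e_{k+1}
    assert np.allclose(M, np.eye(n))     # after at most n steps the composite fixes every e_j
    return factors                       # A = factors[0] @ factors[1] @ ...; each factor is its own inverse

rng = np.random.default_rng(1)
A, _ = np.linalg.qr(rng.normal(size=(5, 5)))   # a random orthogonal matrix as test data
Rs = reflection_factors(A)
product = np.eye(5)
for R in Rs:
    product = product @ R
print(len(Rs), np.allclose(product, A))        # at most 5 reflections; their product equals A
```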
There is an alternative formulation available.
Theorem 3.1.4. A function \( \phi \) is a Euclidean isometry if and only if it is of the form
\[
\phi \left( x\right) = {xA} + {x}_{0}
\]
where \( A \) is an orthogonal matrix and \( {x}_{0} \in {\mathbb{R}}^{n} \) .
Proof. As an orthogonal matrix preserves lengths, it is clear that any \( \phi \) of the given form is an isometry. Conversely, if \( \phi \) is an isometry, then \( \phi \left( x\right) - \phi \left( 0\right) \) is an isometry which fixes the origin and so is given by an orthogonal matrix (as in the proof of Theorem 3.1.3).
More detailed information on Euclidean isometries is available: for example, we have the following result.
Theorem 3.1.5. Given any real orthogonal matrix \( A \) there is a real orthogonal matrix \( Q \) such that
\[
{QA}{Q}^{-1} = \left( \begin{matrix} {A}_{1} & & & & \\ & \ddots & & & 0 \\ & & {A}_{r} & & \\ & & & {I}_{s} & \\ 0 & & & & - {I}_{t} \end{matrix}\right) ,
\]
where \( r, s, t \) are non-negative integers and
\[
{A}_{k} = \left( \begin{array}{rr} \cos {\theta }_{k} & - \sin {\theta }_{k} \\ \sin {\theta }_{k} & \cos {\theta }_{k} \end{array}\right)
\]
Any Euclidean isometry which fixes the origin can therefore be represented (with a suitable choice of an orthonormal basis) by such a matrix and this explicitly displays all possible types of isometries.
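Theorem 3.1.5 can be illustrated in the forward direction: build the block-diagonal matrix from rotation angles and \( \pm 1 \) entries, conjugate by an arbitrary orthogonal \( Q \), and observe that the result is again orthogonal with eigenvalues \( {e}^{\pm i{\theta }_{k}},1, - 1 \). The sketch below only illustrates the statement with arbitrary test data; recovering \( Q \) from a given \( A \) would require a real Schur or eigenvalue decomposition, which is not shown.

```python
import numpy as np

def rotation_block(theta):
    c, s = np.cos(theta), np.sin(theta)
    return np.array([[c, -s], [s, c]])

# Block-diagonal model: two rotation blocks, one +1 and one -1 on the diagonal (r = 2, s = 1, t = 1).
thetas = [0.7, 2.1]
D = np.zeros((6, 6))
D[0:2, 0:2] = rotation_block(thetas[0])
D[2:4, 2:4] = rotation_block(thetas[1])
D[4, 4] = 1.0
D[5, 5] = -1.0

rng = np.random.default_rng(2)
Q, _ = np.linalg.qr(rng.normal(size=(6, 6)))   # an arbitrary orthogonal change of basis
A = Q.T @ D @ Q                                # so that Q A Q^{-1} = D

print(np.allclose(A @ A.T, np.eye(6)))         # A is orthogonal
print(np.sort_complex(np.linalg.eigvals(A)))   # e^{+-0.7i}, e^{+-2.1i}, 1, -1 (in some order)
```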
We now return to discuss again the general reflection \( \phi \) . It seems clear that \( \phi \) is orientation-reversing and we shall now prove that this is so.
## Theorem 3.1.6. Every reflection is orientation-reversing and conformal.
Proof. Let \( \phi \) be the reflection in \( P\left( {a, t}\right) \) . Then we can see directly from (3.1.2) that \( \phi \) is differentiable and that \( {\phi }^{\left( 1\right) }\left( x\right) \) is the constant symmetric matrix \( \left( {\phi }_{ij}\right) \) where
\[
{\phi }_{ij} = {\delta }_{ij} - \frac{2{a}_{i}{a}_{j}}{{\left| a\right| }^{2}}
\]
( \( {\delta }_{ij} \) is the Kronecker delta and is 1 if \( i = j \) and is zero otherwise). We prefer to write this in the form
\[
{\phi }^{\prime }\left( x\right) = I - 2{Q}_{a}
\]
where \( {Q}_{a} \) has elements \( {a}_{i}{a}_{j}/{\left| a\right| }^{2} \) . Now \( {Q}_{a} \) is symmetric and \( {Q}_{a}^{2} = {Q}_{a} \), so
\[
{\phi }^{\prime }\left( x\right) \cdot {\phi }^{\prime }{\left( x\right) }^{t} = {\left( I - 2{Q}_{a}\right) }^{2} = I.
\]
This shows that \( {\phi }^{\prime }\left( x\right) \) is an orthogonal matrix and so establishes the conformality of \( \phi \) .
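A short numerical check (the particular vector \( a \) is arbitrary test data) confirms that \( I - 2{Q}_{a} \) is symmetric and orthogonal, and also that its determinant is \( -1 \), consistent with the sign argument that follows.

```python
import numpy as np

rng = np.random.default_rng(3)
a = rng.normal(size=4)
Q_a = np.outer(a, a) / np.dot(a, a)
M = np.eye(4) - 2 * Q_a                      # phi'(x) for the reflection in P(a, t)

print(np.allclose(M, M.T))                   # symmetric
print(np.allclose(M @ M.T, np.eye(4)))       # orthogonal, hence phi is conformal
print(np.linalg.det(M))                      # approximately -1: phi is orientation-reversing
```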
Now let \( D = \det {\phi }^{\prime }\left( x\right) \) . As \( {\phi }^{\prime }\left( x\right) \) is orthogonal, \( D \neq 0 \) (in fact, \( D = \pm 1 \) ). Moreover, \( D \) is a continuous function of the vector \( a \) in \( {\mathbb{R}}^{n} - \{ 0\} \) and so is a continuous map of \( {\mathbb{R}}^{n} - \{ 0\} \) into \( {\mathbb{R}}^{1} - \{ 0\} \) . As \( {\mathbb{R}}^{n} - \{ 0\} \) is connected (we assume that \( n \geq 2 \) ), \( D \) is either positive for all non-zero \( a \) or is negative for all non-zero \( a \) . If \( a |
1059_(GTM219)The Arithmetic of Hyperbolic 3-Manifolds | Definition 7.7.6 |
Definition 7.7.6 A quaternion algebra A over a number field \( k \) is said to satisfy the Eichler condition if there is at least one infinite place of \( k \) at which \( A \) is not ramified.
One immediate consequence is the following result, which we have already used in calculating the type number of a quaternion algebra in \( §{6.7} \) .
Theorem 7.7.7 (Eichler) Let \( A \) be a quaternion algebra over a number field \( k \) where \( A \) satisfies the Eichler condition. Let \( \mathcal{O} \) be a maximal order and let \( I \) be an ideal such that \( {\mathcal{O}}_{\ell }\left( I\right) = \mathcal{O} \) . Then \( I \) is principal; that is, \( I = \mathcal{O}\alpha \) for some \( \alpha \in {A}^{ * } \) if and only if \( n\left( I\right) \) is principal (i.e. \( n\left( I\right) = {R}_{k}x \) for some \( x \in {k}_{\infty }^{ * } \) ).
Proof: Clearly, if \( I = \mathcal{O}\alpha \), then \( n\left( I\right) = {R}_{k}n\left( \alpha \right) \) and \( n\left( \alpha \right) \in {k}_{\infty }^{ * } \) . Now suppose that \( n\left( I\right) = {R}_{k}x \), where \( x \in {k}_{\infty }^{ * } \) . By the Norm Theorem 7.4.1, there exists \( \alpha \in {A}^{ * } \) such that \( n\left( \alpha \right) = x \) . Consider the ideal \( I{\alpha }^{-1} \) . For all but a finite set \( S \) of prime ideals, \( {\left( I{\alpha }^{-1}\right) }_{\mathcal{P}} = {\mathcal{O}}_{\mathcal{P}} \), and for \( \mathcal{P} \in S \) , \( {\left( I{\alpha }^{-1}\right) }_{\mathcal{P}} = {\mathcal{O}}_{\mathcal{P}}{\beta }_{\mathcal{P}} \) by Lemma 6.6.3. Now \( n\left( {\left( I{\alpha }^{-1}\right) }_{\mathcal{P}}\right) = {R}_{\mathcal{P}} = n\left( {\beta }_{\mathcal{P}}\right) {R}_{\mathcal{P}} \) . So \( n\left( {\beta }_{\mathcal{P}}\right) \in {R}_{\mathcal{P}}^{ * } \) . Furthermore, since locally \( n\left( {\mathcal{O}}_{\mathcal{P}}^{ * }\right) = {R}_{\mathcal{P}}^{ * } \) (see Exercise 6.7, No. 1), we can assume that \( n\left( {\beta }_{\mathcal{P}}\right) = 1 \) . By the Strong Approximation Theorem, there exists \( \gamma \in {A}_{k}^{1} \) such that \( \gamma \) is arbitrarily close to \( {\beta }_{\mathcal{P}} \) for \( \mathcal{P} \in S \) and lies in \( {\mathcal{O}}_{\mathcal{P}}^{1} \) for all other \( \mathcal{P} \) . Then \( {\left( \mathcal{O}\gamma \right) }_{\mathcal{P}} = {\mathcal{O}}_{\mathcal{P}} = {\left( I{\alpha }^{-1}\right) }_{\mathcal{P}} \) for \( \mathcal{P} \notin S \) . If \( \mathcal{P} \in S \), then \( {\left( \mathcal{O}\gamma \right) }_{\mathcal{P}} = {\mathcal{O}}_{\mathcal{P}}{\beta }_{\mathcal{P}} = {\left( I{\alpha }^{-1}\right) }_{\mathcal{P}} \) . Thus since ideals are uniquely determined by their localisations, \( \mathcal{O}\gamma = I{\alpha }^{-1} \) and \( I = \mathcal{O}{\gamma \alpha } \) .
## Exercise 7.7
1. Show that when \( X \) has no divisors of zero, then \( {X}_{\mathcal{A}}^{ * }/{X}_{k}^{ * } \) is a direct product of \( {\mathbb{R}}^{ + } \) and a compact group.
2. Prove the following extension of the Norm Theorem 7.4.1: Let \( A \) be a quaternion algebra over the number field \( k \) where \( A \) satisfies the Eichler condition. Let \( x \in {R}_{k} \cap {k}_{\infty }^{ * } \) . Show that there is an integer \( \alpha \in A \) such that \( n\left( \alpha \right) = x \) .
## 7.8 Further Reading
The lines of argument throughout this chapter were strongly influenced by the exposition in Vignéras (1980a). The use of adèle rings and idèle groups in studying the arithmetic of algebraic number fields is covered in several number theory texts [e.g., Cassels and Frölich (1967), Hasse (1980), Lang (1970), Weiss (1963)]. The extensions to quaternion algebras are treated in Vignéras (1980a) and lean heavily on the discussion in Weil (1967). As a special case of central simple algebras, the adèle method is applied to quaternion algebras in Weil (1982) in the more general setting of algebraic groups. For this, also see the various articles in Borel and Mostow (1966) discussing adèles, Tamagawa numbers and Strong Approximation. The wider picture is well covered in Platonov and Rapinchuk (1994). The elements of abstract harmonic analysis which are assumed here, notably in Theorem 7.2.1 and in \( §{7.5} \), can be found, for example, in Folland (1995), Hewitt and Ross (1963) and Reiter (1968).
Tamagawa measures on local fields or quaternion algebras over local fields are described in Vignéras (1980a) and the computations of the related local Tamagawa volumes are given in Vignéras (1980a) and Borel (1981). The details of the proof that the Tamagawa number of the algebraic group given by \( {A}^{1} \) where \( A \) is a quaternion algebra over a number field, is one, stated in Theorem 7.6.3 can be found in Vignéras (1980a) and, in a more general setting, in Weil (1982). The Strong Approximation Theorem for the cases considered here was one of the foundational results, due to Eichler (Eichler (1938a)), in the general problem of establishing the Strong Approximation Theorem in certain algebraic groups, discussed for example by Kneser in Borel and Mostow (1966). This result, and indeed many others particularly in Chapters 8 and 10, have their natural setting in a wider context than is discussed in this book, but can be found in Platonov and Rapinchuk (1994).
## 8 Arithmetic Kleinian Groups
In this chapter, arithmetic Kleinian groups are described in terms of quaternion algebras. An almost identical description leads to arithmetic Fuchsian groups. Both of these are special cases of discrete groups which arise from the group of elements of norm 1 in an order in a quaternion algebra over a number field. Such groups are discrete subgroups of a finite product of locally compact groups, which will be shown, using the results of the preceding chapter, to give quotient spaces of finite volume. Suitable arithmetic restrictions on the quaternion algebras then yield discrete subgroups of \( \mathrm{{SL}}\left( {2,\mathbb{C}}\right) \) and \( \mathrm{{SL}}\left( {2,\mathbb{R}}\right) \) of finite covolume and in this way, the existence of arithmetic Kleinian and arithmetic Fuchsian groups is obtained.
The general definition of discrete arithmetic subgroups of semi-simple Lie groups will be discussed in Chapter 10, where it will also be shown that in the cases of \( \mathrm{{SL}}\left( {2,\mathbb{C}}\right) \) and \( \mathrm{{SL}}\left( {2,\mathbb{R}}\right) \), the classes of discrete arithmetic groups which arise from this general definition coincide with those which are described here via quaternion algebras.
It will be shown in this and subsequent chapters that for these classes of arithmetic Kleinian groups and arithmetic Fuchsian groups, many important features - topological, geometric, group-theoretic - can be determined from the arithmetic data going into the definition of the group. Thus it is important to be able to identify, among all Kleinian groups, those that are arithmetic. This also holds for Fuchsian groups. This is carried out here and the result is termed the identification theorem. This theorem shows that for an arithmetic Kleinian group, the number field and quaternion algebra used to define the arithmetic structure coincide with the invariant trace field and the invariant quaternion algebra as defined in Chapter 3. Thus the methods developed earlier to determine the invariant trace field and the invariant quaternion algebra of a Kleinian group can be employed and taken a stage farther to determine whether or not the group is arithmetic. Additionally, the identification theorem shows that for arithmetic Kleinian groups and arithmetic Fuchsian groups, the invariant trace field and the invariant quaternion algebra form a complete commensurability invariant of these groups.
## 8.1 Discrete Groups from Orders in Quaternion Algebras
Let \( A \) be a quaternion algebra over a number field \( k \) where \( k \) has \( {r}_{1} \) real places and \( {r}_{2} \) complex places so that \( n = \left\lbrack {k : \mathbb{Q}}\right\rbrack = {r}_{1} + 2{r}_{2} \) . Let the embeddings of \( k \) in \( \mathbb{C} \) be denoted \( {\sigma }_{1},{\sigma }_{2},\ldots ,{\sigma }_{n} \) . Let \( {k}_{v} \) denote the completion of \( k \) at the Archimedean place \( v \) which corresponds to \( \sigma \) . Then \( {A}_{v} = A{ \otimes }_{k}{k}_{v} \cong {M}_{2}\left( \mathbb{C}\right) \) if \( \sigma \) is complex and \( \cong \mathcal{H} \) or \( {M}_{2}\left( \mathbb{R}\right) \) if \( \sigma \) is real.
Theorem 8.1.1 If \( A \) is ramified at \( {s}_{1} \) real places, then
\[
A{ \otimes }_{\mathbb{Q}}\mathbb{R} \cong \oplus {s}_{1}\mathcal{H} \oplus \left( {{r}_{1} - {s}_{1}}\right) {M}_{2}\left( \mathbb{R}\right) \oplus {r}_{2}{M}_{2}\left( \mathbb{C}\right) .
\]
Proof: Let \( A = \left( \frac{a, b}{k}\right) \) with standard basis \( \{ 1, i, j,{ij}\} \) . Let us order the embeddings so that the first \( {s}_{1} \) correspond to the real ramified places, the next \( {r}_{1} - {s}_{1} \) to the remaining real places and the remainder to complex conjugate pairs. Let \( {A}_{i} = \left( \frac{{\sigma }_{i}\left( a\right) ,{\sigma }_{i}\left( b\right) }{K}\right) \), where \( K = \mathbb{R} \) for \( i = 1,2,\ldots ,{r}_{1} \) and \( \mathbb{C} \) otherwise. If we denote the standard basis of \( {A}_{i} \) by \( \left\{ {1,{i}_{i},{j}_{i},{i}_{i}{j}_{i}}\right\} \) , then defining \( {\widehat{\sigma }}_{i} : A \rightarrow {A}_{i} \) by
\[
\widehat{{\sigma }_{i}}\left( {{x}_{0} + {x}_{1}i + {x}_{2}j + {x}_{3}{ij}}\right) = {\sigma }_{i}\left( {x}_{0}\right) + {\sigma }_{i}\left( {x}_{1}\right) {i}_{i} + {\sigma }_{i}\left( {x}_{2}\right) {j}_{i} + {\sigma }_{i}\left( {x}_{3}\right) {i}_{i}{j}_{i}
\]
gives a ring homomorphism extending the embedding \( {\sigma }_{i} : k \rightarrow K \) . Then define
\[
\phi : A{ \otimes }_{\mathbb{Q}}\mathbb{R} \rightarrow \oplus \mathop{\sum }\limits_{{i = 1}}^{n}{A}_{i}
\]
(8.1)
by \( \phi \left( {\alpha \otimes b}\right) = \left( {b\wide |
1079_(GTM237)An Introduction to Operators on the Hardy-Hilbert Space | Definition 1.2.13 |
Definition 1.2.13. By a subspace of a Hilbert space, we mean a subset of the space that is closed in the topological sense in addition to being closed under the vector space operations. By a linear manifold we mean a subset that is closed under the vector operations but is not necessarily closed in the topology.
We will often have occasion to consider the smallest subspace containing a given collection of vectors.
Definition 1.2.14. If \( \mathcal{S} \) is any nonempty subset of a Hilbert space, then the span of \( \mathcal{S} \), often denoted by
\[
\bigvee \{ f : f \in \mathcal{S}\} \;\text{ or }\;\bigvee \mathcal{S},
\]
is the intersection of all subspaces containing \( \mathcal{S} \) . It is obvious that \( \bigvee \mathcal{S} \) is always a subspace.
Definition 1.2.15. If \( A \) is an operator and \( \mathcal{M} \) is a subspace, we say that \( \mathcal{M} \) is an invariant subspace of \( A \) if \( A\mathcal{M} \subset \mathcal{M} \) . That is, \( \mathcal{M} \) is invariant under \( A \) if \( f \in \mathcal{M} \) implies \( {Af} \in \mathcal{M} \) .
The trivial subspaces, \( \{ 0\} \) and \( \mathcal{H} \), are invariant under every operator. One of the most famous unsolved problems in analysis (the invariant subspace problem) is the question whether every bounded linear operator on an infinite-dimensional Hilbert space has a nontrivial invariant subspace.
Notation 1.2.16. If \( \mathcal{M} \) is an invariant subspace of the operator \( A \), then \( {\left. A\right| }_{\mathcal{M}} \) is the restriction of the operator \( A \) to \( \mathcal{M} \) .
Definition 1.2.17. Given a vector \( f \) and a bounded linear operator \( A \), the invariant subspace generated by \( f \) is the subspace
\[
\mathop{\bigvee }\limits_{{n = 0}}^{\infty }\left\{ {{A}^{n}f}\right\}
\]
We say that an invariant subspace \( \mathcal{M} \) of \( A \) is cyclic if there is a vector \( g \) such that \( \mathcal{M} = \mathop{\bigvee }\limits_{{n = 0}}^{\infty }\left\{ {{A}^{n}g}\right\} \) . If
\[
\mathop{\bigvee }\limits_{{n = 0}}^{\infty }\left\{ {{A}^{n}g}\right\} = \mathcal{H}
\]
we say that \( g \) is a cyclic vector for \( A \) .
Clearly, the invariant subspace problem can be rephrased: does every bounded linear operator on Hilbert space have a noncyclic vector other than zero?
It turns out that the collection of subspaces invariant under an operator (or any family of operators) is a lattice.
Definition 1.2.18. A lattice is a partially ordered set in which every pair of elements has a least upper bound and a greatest lower bound. A lattice is complete if every nonempty subset of the lattice has a least upper bound and a greatest lower bound.
It is easily seen that the collection of all subspaces invariant under a given bounded linear operator is a complete lattice under inclusion, where the least upper bound of a subcollection is its span and the greatest lower bound of a subcollection is its intersection.
Notation 1.2.19. For \( A \) a bounded linear operator, we use the notation Lat \( A \) to denote the lattice of all invariant subspaces of \( A \) .
Theorem 1.2.20. Let \( A \) be a bounded linear operator. Then \( \mathcal{M} \in \operatorname{Lat}A \) if and only if \( {\mathcal{M}}^{ \bot } \in \operatorname{Lat}{A}^{ * } \) .
Proof. This follows immediately from the fact that, for \( f \in \mathcal{M} \) and \( g \in {\mathcal{M}}^{ \bot } \) , \( \left( {{Af}, g}\right) = \left( {f,{A}^{ * }g}\right) . \)
Recall that, if \( \mathcal{M} \) is a subspace of \( \mathcal{H} \), every vector \( f \in \mathcal{H} \) can be written uniquely in the form \( f = m + n \), where \( m \in \mathcal{M} \) and \( n \in {\mathcal{M}}^{ \bot } \) .
Notation 1.2.21. If \( \mathcal{M} \) and \( \mathcal{N} \) are subspaces of a Hilbert space, the notation \( \mathcal{M} \oplus \mathcal{N} \) is used to denote \( \{ m + n : m \in \mathcal{M} \) and \( n \in \mathcal{N}\} \) when every vector in \( \mathcal{M} \) is orthogonal to every vector in \( \mathcal{N} \) . The expression \( \mathcal{M} \ominus \mathcal{N} \) denotes \( \mathcal{M} \cap {\mathcal{N}}^{ \bot } \) .
Definition 1.2.22. If \( \mathcal{M} \) is a subspace then the projection onto \( \mathcal{M} \) is the operator defined by \( {Pf} = g \), where \( f = g + h \) with \( g \in \mathcal{M} \) and \( h \in {\mathcal{M}}^{ \bot } \) .
It is easy to see that every projection is a bounded self-adjoint operator of norm at most one. Also, since \( P\mathcal{H} = \mathcal{M}, P\mathcal{H} \) is always a subspace.
Theorem 1.2.23. If \( \mathcal{M} \in \operatorname{Lat}A \) and \( P \) is the projection onto \( \mathcal{M} \), then \( {AP} = \) PAP. Conversely, if \( P \) is a projection and \( {AP} = {PAP} \), then \( P\mathcal{H} \in \operatorname{Lat}A \) .
Proof. Let \( \mathcal{M} \in \operatorname{Lat}A \) and \( P \) be the projection onto \( \mathcal{M} \) . If \( f \in \mathcal{H} \) then \( {Pf} \in \mathcal{M} \) and therefore \( {APf} \) is contained in \( A\mathcal{M} \) . Since \( A\mathcal{M} \subset \mathcal{M} \) it follows that \( P\left( {APf}\right) = {APf} \) .
Conversely, let \( P \) be a projection and assume that \( {AP} = {PAP} \) . If \( f \in P\mathcal{H} \) , then \( {Pf} = f \) and therefore \( {APf} = {PAPf} \) simplifies to \( {Af} = {PAf} \) . Thus \( {Af} \in P\mathcal{H} \) and \( P\mathcal{H} \in \operatorname{Lat}A \) .
Recall that a decomposition of a Hilbert space \( \mathcal{H} \) in the form \( \mathcal{M} \oplus {\mathcal{M}}^{ \bot } \) leads to a block matrix representation of operators on \( \mathcal{H} \) . If \( P \) is the projection of \( \mathcal{H} \) onto \( \mathcal{M} \) and \( {A}_{1} \) is the restriction of \( {PA} \) to \( \mathcal{M},{A}_{2} \) is the restriction of \( {PA} \) to \( {\mathcal{M}}^{ \bot },{A}_{3} \) is the restriction of \( \left( {I - P}\right) A \) to \( \mathcal{M} \), and \( {A}_{4} \) is the restriction of \( \left( {I - P}\right) A \) to \( {\mathcal{M}}^{ \bot } \), then \( A \) can be represented as
\[
A = \left( \begin{array}{ll} {A}_{1} & {A}_{2} \\ {A}_{3} & {A}_{4} \end{array}\right)
\]
with respect to the decomposition \( \mathcal{M} \oplus {\mathcal{M}}^{ \bot } \) . That is, if \( f = g + h \) with \( g \in \mathcal{M} \) and \( h \in {\mathcal{M}}^{ \bot } \), we have
\[
{Af} = \left( \begin{array}{ll} {A}_{1} & {A}_{2} \\ {A}_{3} & {A}_{4} \end{array}\right) \left( \begin{array}{l} g \\ h \end{array}\right) = \left( \begin{array}{l} {A}_{1}g + {A}_{2}h \\ {A}_{3}g + {A}_{4}h \end{array}\right) = \left( {{A}_{1}g + {A}_{2}h}\right) + \left( {{A}_{3}g + {A}_{4}h}\right) .
\]
If the subspace \( \mathcal{M} \) is invariant under \( A \), then Theorem 1.2.23 implies that \( {A}_{3} = 0 \) . Thus each nontrivial invariant subspace of \( A \) yields an upper triangular representation of \( A \) .
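In finite dimensions these statements can be checked concretely. In the sketch below (an illustration only: \( \mathcal{M} \) is taken to be the span of the first two standard basis vectors of a four-dimensional space, and \( A \) is built with \( {A}_{3} = 0 \)), the criterion \( {AP} = {PAP} \) of Theorem 1.2.23 holds for \( A \) and fails for a generic operator \( B \), whose lower-left block \( \left( {I - P}\right) {BP} \) is nonzero.

```python
import numpy as np

# M = span{e1, e2} in a 4-dimensional space; P is the orthogonal projection onto M.
P = np.diag([1.0, 1.0, 0.0, 0.0])

# An operator that leaves M invariant: its lower-left 2x2 block (A_3) is zero.
A1 = np.array([[1.0, 2.0], [0.0, 3.0]])
A2 = np.array([[5.0, 0.0], [1.0, 1.0]])
A4 = np.array([[2.0, 1.0], [1.0, 2.0]])
A = np.block([[A1, A2], [np.zeros((2, 2)), A4]])

print(np.allclose(A @ P, P @ A @ P))     # True: the criterion AP = PAP of Theorem 1.2.23

# For a generic B the criterion fails, and the block (I - P) B P is nonzero.
rng = np.random.default_rng(4)
B = rng.normal(size=(4, 4))
print(np.allclose(B @ P, P @ B @ P))     # False (almost surely)
print((np.eye(4) - P) @ B @ P)           # the A_3 block of B, generally nonzero
```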
Definition 1.2.24. The subspace \( \mathcal{M} \) reduces the operator \( A \) if both \( \mathcal{M} \) and \( {\mathcal{M}}^{ \bot } \) are invariant under \( A \) .
Theorem 1.2.25. Let \( P \) be the projection onto the subspace \( \mathcal{M} \) . Then \( \mathcal{M} \) is a reducing subspace for \( A \) if and only if \( {PA} = {AP} \) . Also, \( \mathcal{M} \) reduces \( A \) if and only if \( \mathcal{M} \) is invariant under both \( A \) and \( {A}^{ * } \) .
Proof. If \( \mathcal{M} \) is a reducing subspace, then \( \mathcal{M} \) and \( {\mathcal{M}}^{ \bot } \) are invariant under \( A \) . If \( P \) is the projection onto \( \mathcal{M} \), it is easily seen that \( I - P \) is the projection onto \( {\mathcal{M}}^{ \bot } \) . The previous theorem then implies \( A\left( {I - P}\right) = \left( {I - P}\right) A\left( {I - P}\right) \) . Expanding the latter equation gives \( A - {AP} = A - {PA} - {AP} + {PAP} \), which simplifies to \( {PAP} = {PA} \) . Since \( \mathcal{M} \in \operatorname{Lat}A \) we also have that \( {AP} = {PAP} \) and thus \( {PA} = {AP} \) .
Conversely, assume \( {AP} = {PA} \) . Let \( f \in \mathcal{M} \) ; to prove \( \mathcal{M} \) is invariant we need to show that \( {Af} \in \mathcal{M} \) . By hypothesis, \( {PAf} = {APf} \) and, since \( {Pf} = f \) , it follows that \( {PAf} = {Af} \), which is equivalent to \( {Af} \in \mathcal{M} \) . Thus \( \mathcal{M} \in \operatorname{Lat}A \) . We also have that \( \left( {I - P}\right) A = A\left( {I - P}\right) \) and thus an analogous argument shows that if \( f \in {\mathcal{M}}^{ \bot } \), then \( {Af} \in {\mathcal{M}}^{ \bot } \) . Hence \( {\mathcal{M}}^{ \bot } \in \operatorname{Lat}A \) and \( \mathcal{M} \) reduces \( A \) .
For the second part of the theorem notice that, since \( P \) is self-adjoint, \( {PA} = {AP} \) if and only if \( P{A}^{ * } = {A}^{ * }P \) . This means that \( \mathcal{M} \) is reducing for \( A \) if and only if \( \mathcal{M} \) is reducing for \( {A}^{ * } \) . In particular, \( \mathcal{M} \) is invariant for both \( A \) and \( {A}^{ * } \) .
For the converse of the second part, observe that \( {PAP} = {AP} \) and \( P{A}^{ * }P = \) \( {A}^{ * }P \) . If we take the adjoint of the latter equation it follows that \( {AP} = {PA} \) and thus \( \mathcal{M} \) is reducing, by the first part of the theorem.
It is easily seen that the subspace \( \mathcal{M} \) reduces \( A \) if and only if the decomposition of \( A \) with respect to \( \mathcal{M} \oplus {\mathcal{M}}^{ \bot } \) has the form
\[
A = \left( \begin{matrix} {A}_{1} & 0 \\ 0 & {A}_{4} \end{matrix}\right)
\]
where \( {A}_{1} \) is an operator on \( \mathcal{M} \) and \( {A}_{4} \) is an operator on \( {\mathcal{M}}^{ \bot } \) . This matrix representation shows why the word "reducing" is used.
Definition 1.2.26. The rank of the operator \( A \) is the dimension of its range.
Finite-rank operators (i.e., those operators whose rank is a natural number) share many properties with operators on finite-dimensional spaces and thus are particularly tractable. Operators whose rank is 1 are o |
1185_(GTM91)The Geometry of Discrete Groups | Definition 5.3.6 |
Definition 5.3.6. Let \( G \) be a non-elementary subgroup of \( \mathcal{M} \) ( \( G \) need not be discrete) and let \( {\Lambda }_{0} \) denote the set of points fixed by some loxodromic element in \( G \) . The limit set \( \Lambda \left( G\right) \) of \( G \) is the closure of \( {\Lambda }_{0} \) in \( \widehat{\mathbb{C}} \) : the ordinary set \( \Omega \left( G\right) \) of \( G \) is the complement of \( \Lambda \) in \( \widehat{\mathbb{C}} \) .
In general, we shall write \( \Lambda \) and \( \Omega \) without explicit mention of \( G \) . Note that if \( G \subset {G}_{1} \) then
\[
\Lambda \left( G\right) \subset \Lambda \left( {G}_{1}\right) ,\;\Omega \left( G\right) \supset \Omega \left( {G}_{1}\right) .
\]
We shall study \( \Lambda \) first and then \( \Omega \) .
Theorem 5.3.7. For any non-elementary group \( G \), the limit set \( \Lambda \) is the smallest non-empty \( G \) -invariant closed subset of \( \widehat{\mathbb{C}} \) . In addition, \( \Lambda \) is a perfect set and is therefore uncountable.
Proof. As \( {\Lambda }_{0} \) is \( G \) -invariant, so is \( \Lambda \) . By definition, \( \Lambda \) is closed and by Theorem 5.1.3, \( \Lambda \neq \varnothing \) . Now let \( E \) be any non-empty, closed \( G \) -invariant subset of \( \widehat{\mathbb{C}} \) . As \( G \) is non-elementary, every orbit is infinite, thus \( E \) is infinite. Now take any point \( v \) fixed by a loxodromic element \( g \) in \( G \) . There is some \( w \) in \( E \) not fixed by \( g \) and the set \( \left\{ {{g}^{n}\left( w\right) : n \in \mathbb{Z}}\right\} \) accumulates at \( v \) (and at the other fixed point of \( g \) ). As \( E \) is closed, \( v \in E \) . This shows that \( {\Lambda }_{0} \subset E \) ; hence \( \Lambda \subset E \) .
This argument also shows that \( {\Lambda }_{0} \) has no isolated points (we simply choose \( w \) in \( {\Lambda }_{0} \) but not fixed by \( g \) ): hence \( \Lambda \) has no isolated points. A set is perfect if it is closed and without isolated points and as is well known any non-empty perfect set is uncountable. As \( \Lambda \) is perfect, the proof is complete.
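The mechanism in this proof — the orbit \( \left\{ {{g}^{n}\left( w\right) }\right\} \) of a loxodromic \( g \) accumulating at its two fixed points — can be watched numerically. The sketch below uses an arbitrary loxodromic transformation (multiplier of modulus 3, fixed points \( \pm 1 \)); it is an illustration only and is not taken from the text.

```python
import numpy as np

def mobius(M, z):
    a, b, c, d = M.ravel()
    return (a * z + b) / (c * z + d)

# g = h^{-1} o (z -> k z) o h with h(z) = (z - 1)/(z + 1) and |k| = 3, a loxodromic
# transformation with attractive fixed point -1 and repulsive fixed point +1.
k = 3.0 * np.exp(0.5j)
G = np.array([[1 + k, 1 - k], [1 - k, 1 + k]])
G_inv = np.linalg.inv(G)                     # the Mobius action is unchanged by scaling the matrix

w = 0.3 + 0.7j                               # an arbitrary point, not fixed by g
z = w
for _ in range(30):
    z = mobius(G, z)
print(z)                                     # close to -1: g^n(w) accumulates at the attractive fixed point

z = w
for _ in range(30):
    z = mobius(G_inv, z)
print(z)                                     # close to +1: g^{-n}(w) accumulates at the repulsive fixed point
```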
Theorem 5.3.7 shows that the countable set \( {\Lambda }_{0} \) is dense in the uncountable set \( \Lambda \) but we can say even more than this.
Theorem 5.3.8. Let \( G \) be a non-elementary subgroup of \( \mathcal{M} \) and let \( {O}_{1} \) and \( {O}_{2} \) be disjoint open sets both meeting \( \Lambda \) . Then there is a loxodromic \( g \) in \( G \) with a fixed point in \( {O}_{1} \) and a fixed point in \( {O}_{2} \) .
Proof. Recall that if \( f \) is loxodromic with an attractive fixed point \( \alpha \) and a repulsive fixed point \( \beta \), then as \( n \rightarrow + \infty ,{f}^{n} \rightarrow \alpha \) uniformly on each compact subset of \( \widehat{\mathbb{C}} - \{ \beta \} \) and \( {f}^{-n} \rightarrow \beta \) uniformly on each compact subset of \( \widehat{\mathbb{C}} - \) \( \{ \alpha \} \) (Theorem 4.3.10). The repulsive fixed point of \( f \) is the attractive fixed point of \( {f}^{-1} \) .
Now consider \( G,{O}_{1} \) and \( {O}_{2} \) as in the theorem. It follows (Definition 5.3.6) that there is a loxodromic \( p \) with attractive fixed point in \( {O}_{1} \) and a loxodromic \( q \) with attractive fixed point in \( {O}_{2} \) . By Theorem 5.1.3, there is a loxodromic \( f \) with attractive fixed point \( \alpha \) and repulsive fixed point \( \beta \) , neither fixed by \( p \) . Now choose (and then fix) some sufficiently large value of \( m \) so that
\[
g = {p}^{m}f{p}^{-m}
\]
Figure 5.3.1
has its attractive fixed point \( {\alpha }_{1}\left( { = {p}^{m}\alpha }\right) \) and repulsive fixed point \( {\beta }_{1}\left( { = {p}^{m}\beta }\right) \) in \( {O}_{1} \) . Then choose (and fix) some sufficiently large value of \( r \) so that
\[
h = {q}^{r}
\]
maps \( {\alpha }_{1} \) into \( {O}_{2} \) : put \( {\alpha }_{2} = h\left( {\alpha }_{1}\right) \) . See Figure 5.3.1
Next, construct open discs \( E \) and \( K \) with the properties
\[
{\beta }_{1} \in E \subset \bar{E} \subset {O}_{1}
\]
\[
{\alpha }_{2} \in K \subset \bar{K} \subset {O}_{2}
\]
As \( {\beta }_{1} \notin \bar{K} \) we see that \( {g}^{n} \rightarrow {\alpha }_{1} \) uniformly on \( \bar{K} \) as \( n \rightarrow + \infty \) . As \( {h}^{-1}\left( K\right) \) is an open neighbourhood of \( {\alpha }_{1} \) we see that for all sufficiently large \( n \) ,
\[
{g}^{n}\left( \bar{K}\right) \subset {h}^{-1}\left( K\right)
\]
and so
\[
h{g}^{n}\left( \bar{K}\right) \subset K\text{.}
\]
(5.3.9)
As \( h\left( {\alpha }_{1}\right) \notin \bar{E} \) so \( {\alpha }_{1} \) is not in \( {h}^{-1}\left( \bar{E}\right) \) and so \( {g}^{-n} \rightarrow {\beta }_{1} \) uniformly on \( {h}^{-1}\left( \bar{E}\right) \) as \( n \rightarrow + \infty \) . Thus for all sufficiently large \( n \) ,
\[
{g}^{-n}{h}^{-1}\left( \bar{E}\right) \subset E
\]
(5.3.10)
Choose a value of \( n \) for which (5.3.9) and (5.3.10) hold. By Lemma 5.3.5, \( h{g}^{n} \) is loxodromic with a fixed point in \( K \) : also, \( {g}^{-n}{h}^{-1} \), which is \( {\left( h{g}^{n}\right) }^{-1} \) , has a fixed point in \( E \), hence so does \( h{g}^{n} \) .
Theorems 5.3.7 and 5.3.8 do not require \( G \) to be discrete. If we add the extra condition that \( G \) is discrete, we can describe \( \Lambda \) in terms of any one orbit. For any \( z \) in \( \widehat{\mathbb{C}} \), let \( \Lambda \left( z\right) \) be the set of \( w \) with the property that there are distinct \( {g}_{n} \) in \( G \) with \( {g}_{n}\left( z\right) \rightarrow w \) (the points \( {g}_{n}\left( z\right) \) need not be distinct).
Theorem 5.3.9. Let \( G \) be a non-elementary discrete subgroup of \( \mathcal{M} \) . Then for all \( z \) in \( \widehat{\mathbb{C}} \), we have \( \Lambda = \Lambda \left( z\right) \) .
Remark. The group generated by \( z \mapsto {2z} \) shows that the conclusion may fail if \( G \) is only discrete. The group of Möbius transformations preserving the unit disc shows that the conclusion may fail if \( G \) is only non-elementary.
Proof of Theorem 5.3.9. Each \( \Lambda \left( z\right) \) is closed, non-empty and \( G \) -invariant so by Theorem 5.3.7, we have
\[
\Lambda \subset \Lambda \left( z\right)
\]
If \( z \in \Lambda \), then \( G\left( z\right) \subset \Lambda \) and so
\[
\Lambda \left( z\right) \subset \overline{G\left( z\right) } \subset \Lambda
\]
in this case, then we have \( \Lambda = \Lambda \left( z\right) \) .
Now suppose that \( z \) is in \( \Omega \) and select any \( w \) in \( \Lambda \left( z\right) \) : we must show that \( w \in \Lambda \) . Suppose not, then \( w \in \Omega \) and there is a disc \( Q \) with centre \( w \) whose closure \( \bar{Q} \) lies in \( \Omega \) . We may suppose that 0 and \( \infty \) are in \( \Lambda \) so taking \( K = \bar{Q} \cup \{ z\} \) we deduce from Theorem 4.5.6 that for all \( g \) in \( G \) and all \( {z}^{\prime } \) in \( Q \) ,
\[
d\left( {{gz}, g{z}^{\prime }}\right) \leq m/\parallel g{\parallel }^{2}.
\]
As \( w \in \Lambda \left( z\right) \), there are distinct \( {g}_{n} \) with \( {g}_{n}\left( z\right) \rightarrow w \) : as \( {\begin{Vmatrix}{g}_{n}\end{Vmatrix}}^{2} \rightarrow + \infty \), we deduce that \( {g}_{n} \rightarrow w \) uniformly on \( \bar{Q} \) . This implies that for large \( n \) ,
\[
{g}_{n}\left( \bar{Q}\right) \subset Q
\]
hence by Lemma 5.3.5 we have \( Q \cap \Lambda \neq \varnothing \) and this contradicts \( Q \subset \Omega \) .
We now turn our attention to the open set \( \Omega \) .
Theorem 5.3.10. Suppose that \( G \) is a discrete non-elementary subgroup of \( \mathcal{M} \) . Then \( \Omega \) is the maximal domain of discontinuity in \( \widehat{\mathbb{C}} \) of \( G \) : precisely,
(i) \( G \) acts discontinuously in \( \Omega \) ; and
(ii) if \( G \) acts discontinuously in an open subset \( D \) of \( \widehat{\mathbb{C}} \), then \( D \subset \Omega \) .
Remark. Traditionally, a discrete group \( G \) was called Kleinian if \( \Omega \neq \varnothing \) . More recently, Kleinian is used synonymously with discrete.
Proof of Theorem 5.3.10. If \( G \) does not act discontinuously in \( \Omega \), then there is a compact subset \( K \) of \( \Omega \) and distinct \( {g}_{1},{g}_{2},\ldots \) in \( G \) such that \( {g}_{n}\left( K\right) \cap K \neq \varnothing \) . Thus there are points \( {z}_{1},{z}_{2},\ldots \) in \( K \) with \( {g}_{n}\left( {z}_{n}\right) \in K \) . By taking a subsequence, we may assume that \( {g}_{n}\left( {z}_{n}\right) \rightarrow w \) in \( K \) and so \( w \in \Omega \) . However, exactly as in the proof of Theorem 5.3.9, we now see that \( {g}_{n} \rightarrow w \) uniformly on \( K \) and so \( w \in \Lambda \), a contradiction. This proves (i).
It is easy to prove (ii). By Lemma 5.3.3, \( D \cap {\Lambda }_{0} = \varnothing \) . As \( D \) is open, this implies that \( D \cap \Lambda = \varnothing \) so \( D \subset \Omega \) .
Theorem 5.3.10 has an interesting corollary.
Corollary. Let \( G \) be discrete and non-elementary. Then \( \Omega \neq \varnothing \) if and only if for some \( z, G\left( z\right) \) is not dense in \( \widehat{\mathbb{C}} \) .
Proof. By Theorem 5.3.9, \( \Omega \neq \varnothing \) if and only if \( \Lambda \left( z\right) \left( { = \Lambda }\right) \) is not \( \widehat{\mathbb{C}} \) and this is the assertion in the corollary.
Lemma 5.3.3 shows that the fixed points of parabolic and loxodromic elements of \( G \) lie in \( \Lambda \) and hence not in \( \Omega \) . It is not hard to see that there can be fixed points of elliptic elements of \( G \) both in \( \Lambda \) and in \( \Omega \) . However, if an elliptic fixed point lies in \( \Omega \), the stabilizer of that point must be cyclic.
Theore |
1069_(GTM228)A First Course in Modular Forms | Definition 7.7.3 |
Definition 7.7.3. Let \( E \) be an elliptic curve over \( \mathbb{Q} \) . The smallest \( N \) that can occur in Version \( {X}_{\mathbb{Q}} \) of the Modularity Theorem is called the analytic conductor of \( E \) .
Thus the analytic conductor is well defined on isomorphism classes over \( \mathbb{Q} \) of elliptic curves over \( \mathbb{Q} \) . The next section will show that isogeny over \( \mathbb{Q} \) of elliptic curves over \( \mathbb{Q} \) is an equivalence relation, so in fact the analytic conductor is well defined on isogeny classes of elliptic curves over \( \mathbb{Q} \) . We will say more about the analytic conductor in Chapter 8.
Similar algebraic refinements of Versions \( {J}_{\mathbb{C}} \) and \( {A}_{\mathbb{C}} \) of the Modularity Theorem from Chapter 6 require the notion of a variety, the higher-dimensional analog of an algebraic curve. As in Section 7.2, let \( m \) and \( n \) be positive integers, consider a set of \( m \) polynomials over \( \mathbb{Q} \) in \( n \) variables, and suppose that the ideal they generate in the ring of polynomials over \( \overline{\mathbb{Q}} \) ,
\[
I = \left\langle {{\varphi }_{1},\ldots ,{\varphi }_{m}}\right\rangle \subset \overline{\mathbb{Q}}\left\lbrack {{x}_{1},\ldots ,{x}_{n}}\right\rbrack
\]
is prime. Then the set \( V \) of simultaneous solutions of the ideal,
\[
V = \left\{ {P \in {\overline{\mathbb{Q}}}^{n} : \varphi \left( P\right) = 0\text{ for all }\varphi \in I}\right\}
\]
is a variety over \( \mathbb{Q} \) . The coordinate ring of \( V \) over \( \overline{\mathbb{Q}} \) is the integral domain
\[
\overline{\mathbb{Q}}\left\lbrack V\right\rbrack = \overline{\mathbb{Q}}\left\lbrack {{x}_{1},\ldots ,{x}_{n}}\right\rbrack /I
\]
and the function field of \( V \) over \( \overline{\mathbb{Q}} \) is the quotient field of the coordinate ring,
\[
\overline{\mathbb{Q}}\left( V\right) = \{ F = f/g : f, g \in \overline{\mathbb{Q}}\left\lbrack V\right\rbrack, g \neq 0\} .
\]
The dimension of \( V \) is the transcendence degree of \( \overline{\mathbb{Q}}\left( V\right) \) over \( \overline{\mathbb{Q}} \), i.e., the number of algebraically independent transcendentals in \( \overline{\mathbb{Q}}\left( V\right) \), the fewest transcendentals one can adjoin to \( \overline{\mathbb{Q}} \) so that the remaining extension up to \( \overline{\mathbb{Q}}\left( V\right) \) is algebraic. In particular a curve is a 1-dimensional variety. If \( V \) is a variety over \( \mathbb{Q} \) and for each point \( P \in V \) the \( m \) -by- \( n \) derivative matrix \( \left\lbrack {{D}_{j}{\varphi }_{i}\left( P\right) }\right\rbrack \) has \( \operatorname{rank}n - \dim \left( V\right) \) then \( V \) is nonsingular. As with curves, homogenizing the defining polynomials leads to a homogeneous version \( {V}_{\text{hom }} \) of the variety, and the definition of nonsingularity extends to \( {V}_{\text{hom }} \) . Morphisms between nonsingular projective varieties are defined as between curves.
The Jacobians from Chapter 6 have algebraic descriptions as nonsingular projective varieties, defined over \( \mathbb{Q} \) since modular curves are defined over \( \mathbb{Q} \) , and having the same dimensions algebraically as they do as complex tori. We will see in Section 7.9 that the Hecke operators are defined over \( \mathbb{Q} \) as well. Consequently the construction of Abelian varieties in Definition 6.6.3 as quotients of Jacobians, defined by the Hecke operators, also gives varieties defined over \( \mathbb{Q} \), and again the two notions of dimension agree. Versions \( {J}_{\mathbb{C}} \) and \( {A}_{\mathbb{C}} \) of the Modularity Theorem modify accordingly.
Theorem 7.7.4 (Modularity Theorem, Version \( {J}_{\mathbb{Q}} \) ). Let \( E \) be an elliptic curve over \( \mathbb{Q} \) . Then for some positive integer \( N \) there exists a surjective morphism over \( \mathbb{Q} \) of varieties over \( \mathbb{Q} \)
\[
{\mathrm{J}}_{0}{\left( N\right) }_{\text{alg }} \rightarrow E
\]
Theorem 7.7.5 (Modularity Theorem, Version \( {A}_{\mathbb{Q}} \) ). Let \( E \) be an elliptic curve over \( \mathbb{Q} \) . Then for some positive integer \( N \) and some newform \( f \in {\mathcal{S}}_{2}\left( {{\Gamma }_{0}\left( N\right) }\right) \) there exists a surjective morphism over \( \mathbb{Q} \) of varieties over \( \mathbb{Q} \)
\[
{A}_{f,\text{ alg }}^{\prime } \rightarrow E
\]
Our policy will be to give statements of Modularity involving varieties but, since we have not studied varieties, to argue using only curves. In particular, we never use the variety structure of the Abelian variety of a modular form. We mention in passing that rational maps between nonsingular projective varieties are defined as for curves, but a rational map between nonsingular projective varieties need not be a morphism; this is particular to curves. The Varieties-Fields Correspondence involves rational maps rather than morphisms, and the rational maps are dominant rather than surjective, where dominant means mapping to all of the codomain except possibly a proper subvariety.
Applications of Modularity to number theory typically rely on the algebraic versions of the Modularity Theorem given in this section. A striking example is the construction of rational points on elliptic curves, meaning points whose coordinates are rational. The key idea is that a natural construction of Heegner points on modular curves, points with algebraic coordinates, follows from the moduli space point of view. Taking the images of these points on elliptic curves under the map of Version \( {X}_{\mathbb{Q}} \) and then symmetrizing under conjugation gives points with rational coordinates. As mentioned early in this chapter, the Mordell-Weil Theorem states that the group of rational points of an elliptic curve over \( \mathbb{Q} \) takes the form \( T \oplus {\mathbb{Z}}^{r} \) where \( T \) is the torsion subgroup, the group of points of finite order, and \( r \geq 0 \) is the rank. The Birch and Swinnerton-Dyer Conjecture, to be discussed at the end of Chapter 8, provides a formula for \( r \) . Gross and Zagier showed that when the conjectured \( r \) is 1 , the Heegner point construction in fact yields points of infinite order. See Henri Darmon's article in [CSS97] for more on this subject.
## Exercises
7.7.1. Show that the images of \( {K}_{0} \) and \( {K}_{1} \) under \( \rho \) are given by (7.13). (A hint for this exercise is at the end of the book.)
7.7.2. Let \( {p}_{1,\mathbb{C}} \in \mathbb{C}\left( j\right) \left\lbrack x\right\rbrack \) be the minimal polynomial of \( {f}_{1} \) over \( \mathbb{C}\left( j\right) \) and let \( {p}_{1,\mathbb{Q}} \in \mathbb{Q}\left( j\right) \left\lbrack x\right\rbrack \) be the minimal polynomial of \( {f}_{1} \) over \( \mathbb{Q}\left( j\right) \), a multiple of \( {p}_{1,\mathbb{C}} \) in \( \overline{\mathbb{C}}\left( j\right) \left\lbrack x\right\rbrack \) . Let \( {\epsilon }_{N} \) be 2 if \( N \in \{ 1,2\} \) and 1 if \( N > 2 \) . Show that by Galois theory and Figure 7.2,
\[
\deg \left( {p}_{1,\mathbb{C}}\right) = {\epsilon }_{N}\left| {{\mathrm{{SL}}}_{2}\left( {\mathbb{Z}/N\mathbb{Z}}\right) }\right| /\left( {2N}\right)
\]
and show similarly that this is also \( \deg \left( {p}_{1,\mathbb{Q}}\right) \) . Thus the polynomials are equal since both are monic.
7.7.3. The relevant field for the algebraic model of \( X\left( N\right) \) is
\[
\mathbb{K} = \mathbb{Q}\left( {j,{f}_{1,0},{f}_{0,1}}\right)
\]
analogous to the function field \( \mathbb{C}\left( {j,{f}_{1,0},{f}_{0,1}}\right) \) of \( X\left( N\right) \) as a complex algebraic curve. Show that the corresponding subgroup \( K \) of \( {H}_{\mathbb{Q}} \) satisfies \( \rho \left( K\right) = \{ \pm I\} \) , so \( \mathbb{K} \cap \overline{\mathbb{Q}} = \mathbb{Q}\left( {\mathbf{\mu }}_{N}\right) \) . The algebraic model of the modular curve for \( \Gamma \left( N\right) \) is the curve \( X{\left( N\right) }_{\text{alg }} \) over \( \mathbb{Q}\left( {\mathbf{\mu }}_{N}\right) \) with function field \( \mathbb{Q}\left( {\mathbf{\mu }}_{N}\right) \left( {X{\left( N\right) }_{\text{alg }}}\right) = \mathbb{K} \) . Connect this to the complex modular curve \( X\left( N\right) \) .
## 7.8 Isogenies algebraically
Let \( N \) be a positive integer. Recall the moduli space \( {\mathrm{S}}_{1}\left( N\right) \) from Section 1.5,
\( {\mathrm{S}}_{1}\left( N\right) = \{ \) equivalence classes \( \left\lbrack {E, Q}\right\rbrack \) of enhanced elliptic curves for \( {\Gamma }_{1}\left( N\right) \} \) ,
and recall the moduli space interpretation of the Hecke operator \( {T}_{p} \) on the divisor group of \( {\mathrm{S}}_{1}\left( N\right) \) from Section 5.2,
\[
{T}_{p}\left\lbrack {E, Q}\right\rbrack = \mathop{\sum }\limits_{C}\left\lbrack {E/C, Q + C}\right\rbrack
\]
where the sum is taken over all order \( p \) subgroups \( C \) of \( E \) with \( C \cap \langle Q\rangle = \{ 0\} \) . So far we understand the quotient \( E/C \) only when \( E \) is a complex torus. This section uses both parts of the Curves-Fields Correspondence to construct the quotient when \( E \) is an algebraic elliptic curve instead. Thus \( {T}_{p} \) can be viewed in algebraic terms. The algebraic formulation of \( {T}_{p} \) will be elaborated in the next section and used in Chapter 8.
Section 1.3 defined an isogeny of complex tori as a nonzero holomorphic homomorphism, which thus surjects and has finite kernel. Any such isogeny is the quotient map by a finite subgroup followed by an isomorphism. Every isogeny \( \varphi \) of complex tori has a dual isogeny \( \psi \) in the other direction such that \( \psi \circ \varphi = \left\lbrack N\right\rbrack \) where \( \left\lbrack N\right\rbrack = \deg \left( \varphi \right) = \left| {\ker \left( \varphi \right) }\right| \) . Similarly, define an isogeny between algebraic elliptic curves to be a nonzero holomorphic morphism.
For an algebraic construction of the q |
109_The rising sea Foundations of Algebraic Geometry | Definition 8.41 |
Definition 8.41. Given \( \alpha ,\beta \in \Phi \), the pair \( \{ \alpha ,\beta \} \) is called prenilpotent if \( \alpha \cap \beta \) and \( \left( {-\alpha }\right) \cap \left( {-\beta }\right) \) each contain at least one chamber. In this case we set
\[
\left\lbrack {\alpha ,\beta }\right\rbrack \mathrel{\text{:=}} \{ \gamma \in \Phi \mid \gamma \supseteq \alpha \cap \beta \text{ and } - \gamma \supseteq \left( {-\alpha }\right) \cap \left( {-\beta }\right) \} \text{.}
\]
We record for future reference some immediate consequences of the definition. The proofs are easy and are left to the reader:
Lemma 8.42. Let \( \alpha \) and \( \beta \) be roots.
(1) \( \{ \alpha ,\beta \} \) is prenilpotent if and only if \( \{ - \alpha , - \beta \} \) is prenilpotent. In this case \( \left\lbrack {-\alpha , - \beta }\right\rbrack = - \left\lbrack {\alpha ,\beta }\right\rbrack \mathrel{\text{:=}} \{ - \gamma \mid \gamma \in \left\lbrack {\alpha ,\beta }\right\rbrack \} . \)
(2) If the pair \( \{ \alpha ,\beta \} \) is nested (i.e., \( \alpha \subseteq \beta \) or \( \beta \subseteq \alpha \) ), then \( \{ \alpha ,\beta \} \) is prenilpotent.
(3) \( \{ \alpha ,\beta \} \) is not prenilpotent if and only if the pair \( \{ \alpha , - \beta \} \) is nested. In particular, if \( \{ \alpha ,\beta \} \) is not prenilpotent, then \( \{ \alpha , - \beta \} \) is prenilpotent.
Example 8.43. If \( W \) is finite, then \( \{ \alpha ,\beta \} \) is prenilpotent if and only if \( \beta \neq - \alpha \) . (This follows for instance from Lemma 3.53 and part (3) of Lemma 8.42.) And the definition of \( \left\lbrack {\alpha ,\beta }\right\rbrack \) in this case coincides with the one given in Definition 7.71, since we can apply the opposition involution to the inclusion \( \gamma \supseteq \alpha \cap \beta \) .
Example 8.44. Suppose \( W \) is a Euclidean reflection group, so that \( \sum \) can be identified with a Euclidean space \( V \) decomposed into simplices by affine hyperplanes. [Readers who are not familiar with Euclidean reflection groups, which will not be formally introduced until Chapter 10, can think about the case in which \( V \) is the plane tiled by equilateral triangles.] The closed half-spaces determined by the hyperplanes can be identified with the roots. For each root \( \alpha \), let \( {e}_{\alpha } \) be the unit vector orthogonal to \( \partial \alpha \) and pointing to the side containing \( \alpha \) . Thus \( \alpha \) is defined by an inequality \( \left\langle {{e}_{\alpha }, - }\right\rangle \geq c \) for some \( c \in \mathbb{R} \) . The result, then, is that two roots \( \alpha ,\beta \) form a prenilpotent pair if and only if \( {e}_{\alpha } \neq - {e}_{\beta } \) . In more detail, we have the following possibilities for \( \alpha ,\beta \) :
(1) \( {e}_{\alpha } = - {e}_{\beta } \) and \( \alpha = - \beta \) . Then \( \alpha \cap \beta = \left( {-\alpha }\right) \cap \left( {-\beta }\right) = \partial \alpha \) . Neither intersection contains a chamber, and \( \{ \alpha ,\beta \} \) is not prenilpotent.
(2) \( {e}_{\alpha } = - {e}_{\beta } \) but \( \alpha \neq - \beta \) . Then \( \partial \alpha \) and \( \partial \beta \) are distinct and parallel. Of the two intersections \( \alpha \cap \beta \) and \( \left( {-\alpha }\right) \cap \left( {-\beta }\right) \), one contains a chamber but the other is empty. The pair \( \{ \alpha ,\beta \} \) is not prenilpotent.
(3) \( {e}_{\alpha } = {e}_{\beta } \) . Then \( \partial \alpha \) and \( \partial \beta \) are parallel and the pair \( \{ \alpha ,\beta \} \) is nested. In particular, it is prenilpotent.
(4) \( {e}_{\alpha } \neq \pm {e}_{\beta } \) . Then \( \partial \alpha \) and \( \partial \beta \) intersect and divide \( V \) into four quadrants corresponding to the four intersections \( \pm \alpha \cap \pm \beta \), each of which contains a chamber. In particular, \( \{ \alpha ,\beta \} \) is prenilpotent.
We now describe the interval \( \left\lbrack {\alpha ,\beta }\right\rbrack \) in each of the prenilpotent cases. In case (3), suppose \( \alpha \subseteq \beta \) . Then \( \left\lbrack {\alpha ,\beta }\right\rbrack \) consists of the roots \( \gamma \) such that \( \alpha \subseteq \gamma \subseteq \beta \) . There are finitely many such \( \gamma \), one for each wall parallel to \( \partial \alpha \) and \( \partial \beta \) and between them. See Figure 8.2. In case (4), choose a maximal
Fig. 8.2. A root \( \gamma \in \left( {\alpha ,\beta }\right) \) ; the parallel-walls case.
simplex \( A \) in \( \partial \alpha \cap \partial \beta \) . As in Lemma 7.81, one can then set up a bijection between \( \left\lbrack {\alpha ,\beta }\right\rbrack \) and \( \left\lbrack {{\alpha }^{\prime },{\beta }^{\prime }}\right\rbrack \), where \( {\alpha }^{\prime },{\beta }^{\prime } \) are the roots in the spherical rank-2 Coxeter complex \( {\sum }^{\prime } \mathrel{\text{:=}} {\operatorname{lk}}_{\sum }A \) obtained by intersecting \( \alpha \) and \( \beta \) with \( {\sum }^{\prime } \) . [See Lemma 8.45 below.] In particular, the bounding wall of every \( \gamma \in \left\lbrack {\alpha ,\beta }\right\rbrack \) must contain \( A \), so that \( \gamma \) corresponds to a root \( {\gamma }^{\prime } \) of \( {\sum }^{\prime } \) . See Figure 8.3 for an example; we leave it to the reader to locate \( {\alpha }^{\prime },{\beta }^{\prime },{\gamma }^{\prime } \) in the picture. Note in both cases that \( \{ \gamma \in \Phi \mid \gamma \supseteq \alpha \cap \beta \} \) is much bigger than \( \left\lbrack {\alpha ,\beta }\right\rbrack \) . In fact, it is infinite,
![85b011f4-34bf-48b4-8882-cd79e6f4beb0_480_0.jpg](images/85b011f4-34bf-48b4-8882-cd79e6f4beb0_480_0.jpg)
Fig. 8.3. A root \( \gamma \in \left( {\alpha ,\beta }\right) \) ; the intersecting-walls case.
whereas \( \left\lbrack {\alpha ,\beta }\right\rbrack \) is finite. Thus it is crucial that we require \( - \gamma \supseteq \left( {-\alpha }\right) \cap \left( {-\beta }\right) \) in the definition of \( \left\lbrack {\alpha ,\beta }\right\rbrack \) .
We come now to the main point of this subsection, which is a simple description of the interval \( \left\lbrack {\alpha ,\beta }\right\rbrack \) when \( \{ \alpha ,\beta \} \) is prenilpotent (cf. Lemma 7.81).
Lemma 8.45. Let \( \{ \alpha ,\beta \} \) be a prenilpotent pair of roots. Then one of the following holds:
(a) \( \{ \alpha ,\beta \} \) is nested, say \( \alpha \subseteq \beta \), in which case
\[
\left\lbrack {\alpha ,\beta }\right\rbrack = \{ \gamma \in \Phi \mid \alpha \subseteq \gamma \subseteq \beta \} .
\]
(b) Every maximal simplex \( A \) of \( \partial \alpha \cap \partial \beta \) has codimension 2, and the link \( {L}_{A} \) is spherical. For any such \( A \), let \( {\alpha }^{\prime } \mathrel{\text{:=}} \alpha \cap {L}_{A} \) and \( {\beta }^{\prime } \mathrel{\text{:=}} \beta \cap {L}_{A} \) be the roots of \( {L}_{A} \) corresponding to \( \alpha \) and \( \beta \) . Then there is a bijection
\[
\left\lbrack {\alpha ,\beta }\right\rbrack \sim \left\lbrack {{\alpha }^{\prime },{\beta }^{\prime }}\right\rbrack
\]
given by \( \gamma \mapsto {\gamma }^{\prime } \mathrel{\text{:=}} \gamma \cap {L}_{A} \) for \( \gamma \in \left\lbrack {\alpha ,\beta }\right\rbrack \) .
Proof. If each of the four intersections \( \left( {\pm \alpha }\right) \cap \left( {\pm \beta }\right) \) contains a chamber, then we are in case (b). Indeed, the first assertion follows from Lemma 3.164, and the rest is proved exactly as in the proof of Lemma 7.81. [One of course uses the definition of prenilpotence instead of the opposition involution in the proof.] Otherwise, either \( \alpha \cap \left( {-\beta }\right) \) or \( \left( {-\alpha }\right) \cap \beta \) contains no chamber, which says precisely that \( \{ \alpha ,\beta \} \) is nested.
Exercise 8.46. If \( W \) is infinite, show that there is always a pair of positive roots \( \{ \alpha ,\beta \} \) that is not prenilpotent.
## 8.6 General RGD Systems
In this section \( \left( {W, S}\right) \) will be an arbitrary Coxeter system and \( \Sigma \) will denote the standard thin building \( \left( {W,{\delta }_{W}}\right) \) or, equivalently, the W-metric building associated to the Coxeter complex \( \Sigma \left( {W, S}\right) \) . We denote by \( \Phi \) the set of roots of \( \Sigma \) and by \( {\Phi }_{ + } \) (resp. \( {\Phi }_{ - } \) ) the set of positive (resp. negative) roots. Thus
\[
{\Phi }_{ + } = \{ \alpha \in \Phi \mid 1 \in \alpha \}
\]
and
\[
{\Phi }_{ - } = \{ \alpha \in \Phi \mid 1 \notin \alpha \} .
\]
Our goal in this section is to write down group-theoretic axioms that will ultimately lead to a Moufang twin building. It might therefore seem more natural to work with the standard thin twin building and its set of twin roots. It turns out, however, that we will have to do a considerable amount of work with the \( {\mathcal{C}}_{ + } \) half of the desired twin building before we can complete the construction. It is therefore more convenient to work with just the ordinary thin building \( \Sigma \) .
## 8.6.1 The RGD Axioms
To define an RGD system \( \left( {G,{\left( {U}_{\alpha }\right) }_{\alpha \in \Phi }, T}\right) \) of type \( \left( {W, S}\right) \), one simply repeats Definition 7.82 verbatim with the exception of (RGD1). This is rewritten as follows:
(RGD1) For all \( \alpha \neq \beta \) in \( \Phi \) such that \( \{ \alpha ,\beta \} \) is prenilpotent in the sense of Definition 8.41,
\[
\left\lbrack {{U}_{\alpha },{U}_{\beta }}\right\rbrack \leq {U}_{\left( \alpha ,\beta \right) }
\]
The interval \( \left( {\alpha ,\beta }\right) \), of course, now has to be interpreted as in Definition 8.41. In view of Example 8.43, the new (RGD1) is equivalent to the original one if \( W \) is finite, so there is no harm in continuing to |
108_The Joys of Haar Measure | Definition 3.6.9 |
Definition 3.6.9. Let \( m \mid \left( {q - 1}\right) \) be such that \( m > 1 \) .
(1) For notational simplicity we will set \( d = \left( {q - 1}\right) /m \) .
(2) If \( t \) is coprime to \( m \) we denote by \( {\sigma }_{t} \) the element of \( \operatorname{Gal}\left( {{L}_{m}/K}\right) \) such that \( {\sigma }_{t}\left( {\zeta }_{m}\right) = {\zeta }_{m}^{t} \) (and of course leaving fixed \( {\zeta }_{p} \) ).
Although strictly speaking \( {\sigma }_{t} \) depends on \( m \), since these maps are compatible under restriction, there is no possibility of confusion.
Proposition 3.6.10. Let \( m \mid \left( {q - 1}\right) \), set \( d = \left( {q - 1}\right) /m \), and recall that \( {L}_{m} = \mathbb{Q}\left( {{\zeta }_{m},{\zeta }_{p}}\right) \) and that \( {\mathfrak{P}}_{m} \) is the prime ideal of \( {L}_{m} \) below \( \mathfrak{P} \) . Then
\[
\tau \left( {\omega }^{-{rd}}\right) {\mathbb{Z}}_{{L}_{m}} = \mathop{\prod }\limits_{{t \in {\left( \mathbb{Z}/m\mathbb{Z}\right) }^{ * }/\langle p\rangle }}{\sigma }_{t}^{-1}{\left( {\mathfrak{P}}_{m}\right) }^{s\left( {rtd}\right) }.
\]
Proof. First note that the values of \( {\omega }^{-{rd}} \) are in \( {K}_{m} \), so that \( \tau \left( {\omega }^{-{rd}}\right) \in {L}_{m} \) . Since \( {\mathfrak{P}}_{m} \) is a prime ideal of \( {L}_{m} \) above \( \mathfrak{p} \), by Galois theory all the prime ideals of \( {L}_{m} \) above \( \mathfrak{p} \) have the form \( \sigma \left( {\mathfrak{P}}_{m}\right) \) for \( \sigma \in \operatorname{Gal}\left( {{L}_{m}/K}\right) \simeq \operatorname{Gal}\left( {{K}_{m}/\mathbb{Q}}\right) \simeq \) \( {\left( \mathbb{Z}/m\mathbb{Z}\right) }^{ * } \) . By definition of the Gauss sum we have \( {\sigma }_{t}\left( {\tau \left( {\omega }^{-{rd}}\right) }\right) = \tau \left( {\omega }^{-{rtd}}\right) \) . Thus by the above corollary
\[
{v}_{{\sigma }_{t}^{-1}\left( {\mathfrak{P}}_{m}\right) }\left( {\tau \left( {\omega }^{-{rd}}\right) }\right) = {v}_{{\mathfrak{P}}_{m}}\left( {{\sigma }_{t}\left( {\tau \left( {\omega }^{-{rd}}\right) }\right) }\right) = {v}_{\mathfrak{P}}\left( {\tau \left( {\omega }^{-{rtd}}\right) }\right) = s\left( {rtd}\right) ,
\]
since \( \mathfrak{P} \) is unramified over \( {\mathfrak{P}}_{m} \) . Furthermore, the decomposition group \( D\left( {{\mathfrak{P}}_{m}/\mathfrak{p}}\right) \) is isomorphic to \( D\left( {{\mathfrak{p}}_{m}/p}\right) \) ; hence by Proposition 3.5.18 it is isomorphic to the cyclic subgroup of \( \operatorname{Gal}\left( {{L}_{m}/K}\right) \) generated by \( {\sigma }_{p} \), using the same notation as above. This means that the ideals of \( {L}_{m} \) above \( \mathfrak{p} \) are obtained once and only once as \( {\sigma }_{t}^{-1}\left( {\mathfrak{P}}_{m}\right) \) for \( {\sigma }_{t} \in \operatorname{Gal}\left( {{L}_{m}/K}\right) /D\left( {{\mathfrak{P}}_{m}/\mathfrak{p}}\right) \), in other words for \( t \in {\left( \mathbb{Z}/m\mathbb{Z}\right) }^{ * }/\langle p\rangle \) . Finally, since \( \tau \left( {\omega }^{-{rd}}\right) \overline{\tau \left( {\omega }^{-{rd}}\right) } = q = {p}^{f} \) for \( r \neq 0 \), it follows that the only ideals of \( {L}_{m} \) dividing \( \tau \left( {\omega }^{-{rd}}\right) \) are the ideals above \( p \), hence the ideals above \( \mathfrak{p} \), in other words the \( {\sigma }_{t}^{-1}\left( {\mathfrak{P}}_{m}\right) \) for \( t \in {\left( \mathbb{Z}/m\mathbb{Z}\right) }^{ * }/\langle p\rangle \), so the proposition follows.
Corollary 3.6.11. Let \( m \mid \left( {q - 1}\right) \), and recall that \( {K}_{m} = \mathbb{Q}\left( {\zeta }_{m}\right) \) and that \( {\mathfrak{p}}_{m} \) is the prime ideal of \( {K}_{m} \) below \( \mathfrak{P} \) . Then
\[
\tau {\left( {\omega }^{-d}\right) }^{m}{\mathbb{Z}}_{{K}_{m}} = \mathop{\prod }\limits_{{t \in {\left( \mathbb{Z}/m\mathbb{Z}\right) }^{ * }/\langle p\rangle }}{\sigma }_{t}^{-1}{\left( {\mathfrak{p}}_{m}\right) }^{{v}_{t}},
\]
where
\[
{v}_{t} = \frac{m}{p - 1}s\left( {t\frac{q - 1}{m}}\right) = m\mathop{\sum }\limits_{{0 \leq i < f}}\left\{ \frac{{p}^{i}t}{m}\right\} .
\]
Proof. By Corollary 2.5.15 we know that \( \tau {\left( {\omega }^{-d}\right) }^{m} \in {K}_{m} \) . Since \( e\left( {{\mathfrak{P}}_{m}/{\mathfrak{p}}_{m}}\right) = \) \( e\left( {\mathfrak{p}/p}\right) = p - 1 \) we have
\[
{v}_{{\mathfrak{p}}_{m}}\left( {\tau {\left( {\omega }^{-d}\right) }^{m}}\right) = \left( {m/\left( {p - 1}\right) }\right) {v}_{{\mathfrak{P}}_{m}}\left( {\tau \left( {\omega }^{-d}\right) }\right) ,
\]
so the corollary immediately follows from the proposition, the second formula for \( {v}_{t} \) coming from Lemma 3.6.7.
The above results are expressed much more nicely in the language of group rings, which we recall for the convenience of the reader.
Definition 3.6.12. Let \( A \) be a commutative ring and let \( G \) be a finite abelian group. We define the group algebra \( A\left\lbrack G\right\rbrack \) by
\[
A\left\lbrack G\right\rbrack = \left\{ {\mathop{\sum }\limits_{{g \in G}}{a}_{g}g,{a}_{g} \in A}\right\}
\]
in other words the set of formal linear combinations with coefficients in \( A \) of the elements of \( G \), the addition law coming from that of \( A \) and the multiplication law from that of \( A \) and of the group law of \( G \), in other words,
\[
\left( {\mathop{\sum }\limits_{{g \in G}}{a}_{g}g}\right) \left( {\mathop{\sum }\limits_{{h \in G}}{b}_{h}h}\right) = \mathop{\sum }\limits_{{g, h \in G}}{a}_{g}{b}_{h}{gh} = \mathop{\sum }\limits_{{g \in G}}\left( {\mathop{\sum }\limits_{{h \in G}}{a}_{h}{b}_{{h}^{-1}g}}\right) g.
\]
It is immediately checked that \( A\left\lbrack G\right\rbrack \) is a commutative ring. If \( E \) is a group (written multiplicatively, say) on which the group \( G \) acts, then \( E \) has a natural structure of a (left) \( \mathbb{Z}\left\lbrack G\right\rbrack \) -module through the formula
\[
\left( {\mathop{\sum }\limits_{{g \in G}}{a}_{g}g}\right) \cdot v = \mathop{\prod }\limits_{{g \in G}}{\left( g \cdot v\right) }^{{a}_{g}}.
\]
If \( \theta = \mathop{\sum }\limits_{{g \in G}}{a}_{g}g \), it is customary to write \( {v}^{\theta } \) instead of \( \theta \cdot v \), so that the formula \( {\theta }_{1} \cdot \left( {{\theta }_{2} \cdot v}\right) = \left( {{\theta }_{1}{\theta }_{2}}\right) \cdot v \) translates into the identity \( {v}^{{\theta }_{1}{\theta }_{2}} = {\left( {v}^{{\theta }_{1}}\right) }^{{\theta }_{2}} \) , where it is essential that \( G \) be an abelian group.
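For readers who like to experiment, the multiplication law in \( A\left\lbrack G\right\rbrack \) is easy to code; the sketch below (not from the text) stores an element \( \mathop{\sum }\limits_{t}{a}_{t}{\sigma }_{t} \) of \( \mathbb{Z}\left\lbrack {\left( \mathbb{Z}/m\mathbb{Z}\right) }^{ * }\right\rbrack \) as a dictionary \( \{ t : {a}_{t}\} \).

```python
from collections import defaultdict

def group_ring_mul(x, y, m):
    """Product in Z[G] for G = (Z/mZ)^*, elements stored as {t: a_t}."""
    z = defaultdict(int)
    for g, a in x.items():
        for h, b in y.items():
            z[g * h % m] += a * b          # sigma_g * sigma_h = sigma_{gh}
    return dict(z)

# (sigma_2 + 3 sigma_3) * (2 sigma_4) = 2 sigma_8 + 6 sigma_12 in Z[(Z/13Z)^*]
print(group_ring_mul({2: 1, 3: 3}, {4: 2}, 13))   # {8: 2, 12: 6}
```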
The situation considered in practice will be the following. We will be given an Abelian extension \( K/\mathbb{Q} \) with commutative Galois group \( G \) . The group \( G \) acts naturally on all natural objects linked to \( K \), for instance on elements, units, ideals, ideal classes, etc., so by the above we have an action of \( \mathbb{Z}\left\lbrack G\right\rbrack \) , written in an exponential manner.
As a first application of this notation we have the following proposition, which is a restatement of Corollary 3.6.11.
Proposition 3.6.13. Set
\[
\Theta = \mathop{\sum }\limits_{{t \in {\left( \mathbb{Z}/m\mathbb{Z}\right) }^{ * }}}\left\{ \frac{t}{m}\right\} {\sigma }_{t}^{-1} = \frac{1}{m}\mathop{\sum }\limits_{\substack{{1 \leq t \leq m - 1} \\ {\gcd \left( {t, m}\right) = 1} }}t{\sigma }_{t}^{-1}.
\]
With the same notation as above we have \( \tau {\left( {\omega }^{-d}\right) }^{m}{\mathbb{Z}}_{{K}_{m}} = {\mathfrak{p}}_{m}^{m\Theta } \) .
Proof. Let \( T \) be a system of representatives of \( {\left( \mathbb{Z}/m\mathbb{Z}\right) }^{ * } \) modulo the cyclic subgroup \( \langle p\rangle \) generated by \( p \) . Corollary 3.6.11 can be restated by saying that \( \tau {\left( {\omega }^{-d}\right) }^{m}{\mathbb{Z}}_{{K}_{m}} = {\mathfrak{p}}_{m}^{m\theta } \) with
\[
\theta = \mathop{\sum }\limits_{{t \in T}}\mathop{\sum }\limits_{{0 \leq i < f}}\left\{ {{p}^{i}t/m}\right\} {\sigma }_{t}^{-1}
\]
(note that it would not make sense to replace \( T \) by \( {\left( \mathbb{Z}/m\mathbb{Z}\right) }^{ * }/\langle p\rangle \) in the above expression since \( {\sigma }_{t} \) would not be defined). By definition, as \( t \) ranges in \( T \) and \( i \) ranges from 0 to \( f - 1 \) the elements \( {p}^{i}t \) modulo \( m \) range through \( {\left( \mathbb{Z}/m\mathbb{Z}\right) }^{ * } \) , so that
\[
\theta = \mathop{\sum }\limits_{{t \in {\left( \mathbb{Z}/m\mathbb{Z}\right) }^{ * }}}\{ t/m\} {\sigma }_{{t}_{1}}^{-1},
\]
where \( {t}_{1} \) is the representative in \( T \) of the class of \( t \) modulo \( \langle p\rangle \) . Since the decomposition group of \( {\mathfrak{p}}_{m}/p \) is the group generated by \( {\sigma }_{p} \), it follows that \( {\mathfrak{p}}_{m}^{m\theta } = {\mathfrak{p}}_{m}^{m\Theta } \), where \( \Theta \) is as in the proposition.
The following corollary is one of the most important consequences.
Corollary 3.6.14. Let \( \mathbb{Q}\left( {\zeta }_{m}\right) \) be a cyclotomic field and let \( \Theta \) be defined as above. For all fractional ideals \( \mathfrak{a} \) of \( \mathbb{Q}\left( {\zeta }_{m}\right) \) the ideal \( {\mathfrak{a}}^{m\Theta } \) is a principal ideal.
Proof. We know that in any ideal class there exists an integral ideal co-prime to any fixed ideal, in particular to \( m \) . In other words, there exists an element \( \alpha \) such that \( \mathfrak{b} = \alpha \mathfrak{a} \) is an integral ideal coprime to \( m \) . If \( {\mathfrak{p}}_{m} \) is a prime ideal dividing \( \mathfrak{b} \) above some prime \( p \) we have \( p \nmid m \) ; hence the proposition implies that \( {\mathfrak{p}}_{m}^{m\Theta } \) is a principal ideal. Since this is true for every prime ideal dividing \( \mathfrak{b} \), by multiplicativity it is true for \( \mathfrak{b} \) itself, and hence for \( \mathfrak{a} \) .
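For example, for \( m = 5 \) the element \( {5\Theta } \) lies in the group ring \( \mathbb{Z}\left\lbrack {\operatorname{Gal}\left( {\mathbb{Q}\left( {\zeta }_{5}\right) /\mathbb{Q}}\right) }\right\rbrack \) and equals
\[
{5\Theta } = {\sigma }_{1}^{-1} + 2{\sigma }_{2}^{-1} + 3{\sigma }_{3}^{-1} + 4{\sigma }_{4}^{-1},
\]
so Corollary 3.6.14 says that \( {\mathfrak{a}}^{5\Theta } \) is a principal ideal for every fractional ideal \( \mathfrak{a} \) of \( \mathbb{Q}\left( {\zeta }_{5}\right) \) .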
## 3.6.4 The Stickelberger Ideal
Although the above results, and in particular Corollary 3.6.14, are remarkable, they suffer from the presence of \( m |
1063_(GTM222)Lie Groups, Lie Algebras, and Representations | Definition 7.25 |
Definition 7.25. For each root \( \alpha \in R \), define a linear map \( {s}_{\alpha } : \mathfrak{h} \rightarrow \mathfrak{h} \) by the formula
\[
{s}_{\alpha } \cdot H = H - 2\frac{\langle \alpha, H\rangle }{\langle \alpha ,\alpha \rangle }\alpha .
\]
(7.14)
The Weyl group of \( R \), denoted \( W \), is then the subgroup of \( \mathrm{{GL}}\left( \mathfrak{h}\right) \) generated by all the \( {s}_{\alpha } \) ’s with \( \alpha \in R \) .
Note that since each root \( \alpha \) is in \( i\mathfrak{t} \) and our inner product is real on \( i\mathfrak{t} \), if \( H \) is in \( i\mathfrak{t} \), then \( {s}_{\alpha } \cdot H \) is also in \( i\mathfrak{t} \) . As a map of \( i\mathfrak{t} \) to itself, \( {s}_{\alpha } \) is the reflection about the hyperplane orthogonal to \( \alpha \) . That is to say, \( {s}_{\alpha } \cdot H = H \) whenever \( H \) is orthogonal to \( \alpha \), and \( {s}_{\alpha } \cdot \alpha = - \alpha \) . Since each reflection is an orthogonal linear transformation, we see that \( W \) is a subgroup of the orthogonal group \( \mathrm{O}\left( {i\mathfrak{t}}\right) \) .
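Concretely, on \( {\mathbb{R}}^{2} \) the formula (7.14) can be tested numerically (an illustrative sketch, not part of the text):

```python
def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

def s(alpha, H):
    """Formula (7.14): s_alpha . H = H - 2<alpha, H>/<alpha, alpha> alpha."""
    c = 2 * dot(alpha, H) / dot(alpha, alpha)
    return [h - c * a for a, h in zip(alpha, H)]

alpha = [1.0, 0.0]
print(s(alpha, alpha))          # [-1.0, 0.0]: s_alpha . alpha = -alpha
print(s(alpha, [0.0, 2.0]))     # [0.0, 2.0]: the hyperplane orthogonal to alpha is fixed
H = [3.0, -2.0]
assert s(alpha, s(alpha, H)) == H                      # s_alpha is an involution
assert dot(s(alpha, H), s(alpha, H)) == dot(H, H)      # and preserves the inner product
```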
Theorem 7.26. The action of \( W \) on \( i\mathfrak{t} \) preserves \( R \) . That is to say, if \( \alpha \) is a root, then \( w \cdot \alpha \) is a root for all \( w \in W \) .
Proof. For each \( \alpha \in R \), consider the invertible linear operator \( {S}_{\alpha } \) on \( \mathfrak{g} \) given by
\[
{S}_{\alpha } = {e}^{{\operatorname{ad}}_{{X}_{\alpha }}}{e}^{-{\operatorname{ad}}_{{Y}_{\alpha }}}{e}^{{\operatorname{ad}}_{{X}_{\alpha }}}.
\]
Now, if \( H \in \mathfrak{h} \) satisfies \( \langle \alpha, H\rangle = 0 \), then \( \left\lbrack {H,{X}_{\alpha }}\right\rbrack = \langle \alpha, H\rangle {X}_{\alpha } = 0 \) . Thus, \( H \) and \( {X}_{\alpha } \) commute, which means that \( {\operatorname{ad}}_{H} \) and \( {\operatorname{ad}}_{{X}_{\alpha }} \) also commute, and similarly for \( {\operatorname{ad}}_{H} \) and \( {\operatorname{ad}}_{{Y}_{\alpha }} \) . Thus, if \( \langle \alpha, H\rangle = 0 \), the operator \( {S}_{\alpha } \) will commute with \( {\operatorname{ad}}_{H} \), so that
\[
{S}_{\alpha }{\operatorname{ad}}_{H}{S}_{\alpha }^{-1} = {\operatorname{ad}}_{H},\;\langle \alpha, H\rangle = 0.
\]
(7.15)
On the other hand, if we apply Point 3 of Theorem 4.34 to the adjoint action of \( {\mathfrak{s}}^{\alpha } \cong \mathrm{{sl}}\left( {2;\mathbb{C}}\right) \) on \( \mathfrak{g} \), we see that
\[
{S}_{\alpha }{\operatorname{ad}}_{{H}_{\alpha }}{S}_{\alpha }^{-1} = - {\operatorname{ad}}_{{H}_{\alpha }}
\]
(7.16)
By combining (7.15) and (7.16), we see that for all \( H \in \mathfrak{h} \), we have
\[
{S}_{\alpha }{\operatorname{ad}}_{H}{S}_{\alpha }^{-1} = {\operatorname{ad}}_{{s}_{\alpha } \cdot H}
\]
(7.17)
Now if \( \beta \) is any root and \( X \) is an associated root vector, consider the vector \( {S}_{\alpha }^{-1}\left( X\right) \in \mathfrak{g} \) . We compute that
\[
{\operatorname{ad}}_{H}\left( {{S}_{\alpha }^{-1}\left( X\right) }\right) = {S}_{\alpha }^{-1}\left( {{S}_{\alpha }{\operatorname{ad}}_{H}{S}_{\alpha }^{-1}}\right) \left( X\right)
\]
\[
= {S}_{\alpha }^{-1}{\operatorname{ad}}_{{s}_{\alpha } \cdot H}\left( X\right)
\]
\[
= \left\langle {\beta ,{s}_{\alpha } \cdot H}\right\rangle {S}_{\alpha }^{-1}\left( X\right)
\]
\[
= \left\langle {{s}_{\alpha }^{-1} \cdot \beta, H}\right\rangle {S}_{\alpha }^{-1}\left( X\right) .
\]
Thus, \( {S}_{\alpha }^{-1}\left( X\right) \) is a root vector with root \( {s}_{\alpha }^{-1} \cdot \beta = {s}_{\alpha } \cdot \beta \) . This shows that the set of roots is invariant under each \( {s}_{\alpha } \) and, thus, under \( W \) .
Actually, since \( {s}_{\alpha } \cdot {s}_{\alpha } \cdot \beta = \beta \), each reflection maps \( R \) onto \( R \) . It follows that each \( w \in W \) also maps \( R \) onto \( R \) .
Corollary 7.27. The Weyl group is finite.
Proof. Since the roots span \( \mathfrak{h} \), each \( w \in W \) is determined by its action on \( R \) . Since, also, \( w \) maps \( R \) onto \( R \), we see that \( W \) may be thought of as a subgroup of the permutation group on the roots.
## 7.5 Root Systems
In this section, we record several important properties of the roots, using results from the two previous sections. Recall that for each root \( \alpha \), we have an element \( {H}_{\alpha } \) of \( \mathfrak{h} \) contained in \( \left\lbrack {{\mathfrak{g}}_{\alpha },{\mathfrak{g}}_{-\alpha }}\right\rbrack \) as in Theorem 7.19. As we saw in (7.8) and (7.9), \( {H}_{\alpha } \) satisfies \( \left\langle {\alpha ,{H}_{\alpha }}\right\rangle = 2 \) and is related to \( \alpha \) by the formula \( {H}_{\alpha } = {2\alpha }/\langle \alpha ,\alpha \rangle \) . In particular, the element \( {H}_{\alpha } \) is independent of the choice of \( {X}_{\alpha } \) and \( {Y}_{\alpha } \) in Theorem 7.19.
Definition 7.28. For each root \( \alpha \), the element \( {H}_{\alpha } \in \mathfrak{h} \) given by
\[
{H}_{\alpha } = 2\frac{\alpha }{\langle \alpha ,\alpha \rangle }
\]
is the coroot associated to the root \( \alpha \) .
Proposition 7.29. For all roots \( \alpha \) and \( \beta \), we have that
\[
\left\langle {\beta ,{H}_{\alpha }}\right\rangle = 2\frac{\langle \alpha ,\beta \rangle }{\langle \alpha ,\alpha \rangle }
\]
(7.18)
is an integer.
We have actually already made use of this result in the proof of Lemma 7.24.
Proof. If \( {\mathfrak{s}}^{\alpha } = \left\langle {{X}_{\alpha },{Y}_{\alpha },{H}_{\alpha }}\right\rangle \) is as in Theorem 7.19 and \( X \) is a root vector associated to the root \( \beta \), then \( \left\lbrack {{H}_{\alpha }, X}\right\rbrack = \left\langle {\beta ,{H}_{\alpha }}\right\rangle X \) . Thus, \( \left\langle {\beta ,{H}_{\alpha }}\right\rangle \) is an eigenvalue for the adjoint action of \( {\mathfrak{s}}^{\alpha } \cong \mathrm{{sl}}\left( {2;\mathbb{C}}\right) \) on \( \mathfrak{g} \) . Point 1 of Theorem 4.34 then shows that \( \left\langle {\beta ,{H}_{\alpha }}\right\rangle \) must be an integer.
Recall from elementary linear algebra that if \( \alpha \) and \( \beta \) are elements of an inner product space, the orthogonal projection of \( \beta \) onto \( \alpha \) is given by
\[
\frac{\langle \alpha ,\beta \rangle }{\langle \alpha ,\alpha \rangle }\alpha
\]
The quantity on the right-hand side of (7.18) is thus twice the coefficient of \( \alpha \) in the projection of \( \beta \) onto \( \alpha \) . We may therefore interpret the integrality result in Proposition 7.29 in the following geometric way:
If \( \alpha \) and \( \beta \) are roots, the orthogonal projection of \( \beta \) onto \( \alpha \) must be an integer or half-integer multiple of \( \alpha \) .
Alternatively, we may think about Proposition 7.29 as saying that \( \beta \) and \( {s}_{\alpha } \cdot \beta \) must differ by an integer multiple of \( \alpha \) [compare (7.14)].
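For instance, if \( \alpha \) and \( \beta \) are roots of the same length making an angle of \( {120}^{ \circ } \), then \( 2\langle \alpha ,\beta \rangle /\langle \alpha ,\alpha \rangle = 2\cos {120}^{ \circ } = - 1 \), so \( {s}_{\alpha } \cdot \beta = \beta + \alpha \) ; this is exactly what happens for two roots of \( \mathrm{{sl}}\left( {3;\mathbb{C}}\right) \) at that angle.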
If we think of the set \( R \) of roots as a subset of the real inner product space \( E \mathrel{\text{:=}} i\mathfrak{t} \), we may summarize the properties of \( R \) as follows.
Theorem 7.30. The set \( R \) of roots is a finite set of nonzero elements of a real inner product space \( E \), and \( R \) has the following additional properties.
1. The roots span \( E \) .
2. If \( \alpha \in R \), then \( - \alpha \in R \) and the only multiples of \( \alpha \) in \( R \) are \( \alpha \) and \( - \alpha \) .
3. If \( \alpha \) and \( \beta \) are in \( R \), so is \( {s}_{\alpha } \cdot \beta \), where \( {s}_{\alpha } \) is the reflection defined by (7.14).
4. For all \( \alpha \) and \( \beta \) in \( R \), the quantity
\[
2\frac{\langle \alpha ,\beta \rangle }{\langle \alpha ,\alpha \rangle }
\]
is an integer.
Any such collection of vectors is called a root system. We will look in detail at the properties of root systems in Chapter 8.
## 7.6 Simple Lie Algebras
Every semisimple Lie algebra decomposes as a direct sum of simple algebras (Theorem 7.8). In this section, we give a criterion for a semisimple Lie algebra to be simple. We will eventually see (Sect. 8.11) that most of the familiar examples of semisimple Lie algebras are actually simple.
Proposition 7.31. Suppose \( \mathfrak{g} \) is a real Lie algebra and that the complexification \( {\mathfrak{g}}_{\mathbb{C}} \) of \( \mathfrak{g} \) is simple. Then \( \mathfrak{g} \) is also simple.
Proof. Since \( {\mathfrak{g}}_{\mathbb{C}} \) is simple, the dimension of \( {\mathfrak{g}}_{\mathbb{C}} \) over \( \mathbb{C} \) is at least 2, so that the dimension of \( \mathfrak{g} \) over \( \mathbb{R} \) is also at least 2 . If \( \mathfrak{g} \) had a nontrivial ideal \( \mathfrak{h} \), then the complexification \( {\mathfrak{h}}_{\mathbb{C}} \) of \( \mathfrak{h} \) would be a nontrivial ideal in \( {\mathfrak{g}}_{\mathbb{C}} \), contradicting the simplicity of \( {\mathfrak{g}}_{\mathbb{C}} \) .
The converse of Proposition 7.31 is false in general. The Lie algebra \( \mathfrak{so}\left( {3;1}\right) \), for example, is simple as a real Lie algebra, and yet its complexification is isomorphic to \( \mathfrak{{so}}\left( {4;\mathbb{C}}\right) \), which in turn is isomorphic to \( \mathfrak{{sl}}\left( {2;\mathbb{C}}\right) \oplus \mathfrak{{sl}}\left( {2;\mathbb{C}}\right) \) . See Exercise 14.
Theorem 7.32. Suppose \( K \) is a compact matrix Lie group whose Lie algebra \( \mathfrak{k} \) is simple as a real Lie algebra. Then the complexification \( \mathfrak{g} \mathrel{\text{:=}} {\mathfrak{k}}_{\mathbb{C}} \) of \( \mathfrak{k} \) is simple as a complex Lie algebra.
For more results about simple algebras over \( \mathbb{R} \), see Exercises 12 and 13. Before proving Theorem 7.32, we introduce a definition.
Definition 7.33. If \( \mathfrak{g} \) is a real Lie algebra, \( \mathfrak{g} \) admits a complex structure if there exists a "multiplication by \( i \) " map \( J : \mathfrak{g} \rightarrow \mathfrak{g} \) that makes \( \mathfrak{g} \) into a complex vector space in such a way that the bracket map \( \left\lbrack {\cdot , \cdot }\right\rbra |
113_Topological Groups | Definition 2.35 |
Definition 2.35. \( a \) is the binary operation on \( \omega \) given by the following conditions: for any \( m, n \in \omega \) ,
\[
a\left( {m,0}\right) = m
\]
\[
a\left( {m, n + 1}\right) = {m}^{a\left( {m, n}\right) }.
\]
Thus \( a\left( {m, n}\right) \) is the iterated exponential: an exponential tower of \( n + 1 \) copies of \( m \), so that \( a\left( {m,1}\right) = {m}^{m} \) and \( a\left( {m,2}\right) = {m}^{{m}^{m}} \) . Although exponentiation is elementary by \( {2.8}\left( v\right) \), we shall see that iterated exponentiation is not. The reason is that it grows faster than any elementary function; see 2.44. Obviously, we have:
Lemma 2.36. a is primitive recursive.
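The defining recursion is immediate to program, which also makes the violent growth visible (a minimal sketch; only tiny arguments are feasible):

```python
def a(m, n):
    """Definition 2.35: a(m, 0) = m and a(m, n + 1) = m ** a(m, n)."""
    return m if n == 0 else m ** a(m, n - 1)

print([a(2, n) for n in range(4)])   # [2, 4, 16, 65536]
print(a(3, 2))                       # 3**27 = 7625597484987
```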
Lemma 2.37. \( m \leq a\left( {m, n}\right) \) for all \( m, n \) .
Proof. We may assume that \( m \neq 0 \) . Now we prove 2.37 by induction on \( n \) : \( a\left( {m,0}\right) = m \) . Assuming \( m \leq a\left( {m, n}\right) \) ,
\[
a\left( {m, n + 1}\right) = {m}^{a\left( {m, n}\right) } \geq {m}^{m} \geq m.
\]
Lemma 2.38. \( a\left( {m, n}\right) < a\left( {m, n + 1}\right) \) for all \( m > 1 \) and all \( n \in \omega \) .
Proof. \( a\left( {m, n + 1}\right) = {m}^{a\left( {m, n}\right) } > a\left( {m, n}\right) \) .
Lemma 2.39. \( a\left( {m, n}\right) < a\left( {m + 1, n}\right) \) for all \( m \neq 0 \) and all \( n \in \omega \) .
Proof. We proceed by induction on \( n : a\left( {m,0}\right) = m < m + 1 = a\left( {m + 1,0}\right) \) . Assuming our result for \( n \) ,
\[
a\left( {m, n + 1}\right) = {m}^{a\left( {m, n}\right) } \leq {\left( m + 1\right) }^{a\left( {m, n}\right) }
\]
\[
< {\left( m + 1\right) }^{a\left( {m + 1, n}\right) } = a\left( {m + 1, n + 1}\right) .
\]
Lemma 2.40. \( a\left( {m, n}\right) + a\left( {m, p}\right) \leq a\left( {m,\max \left( {n, p}\right) + 1}\right) \) for all \( m > 1 \) and all \( n, p \in \omega \) .
Proof. \( a\left( {m, n}\right) + a\left( {m, p}\right) \leq {2a}\left( {m,\max \left( {n, p}\right) }\right) \) by 2.38
\[
\leq {2}^{a\left( {m,\max \left( {n, p}\right) }\right) } \leq {m}^{a\left( {m,\max \left( {n, p}\right) }\right) }
\]
\[
= a\left( {m,\max \left( {n, p}\right) + 1}\right) \text{.}
\]
Lemma 2.41. \( a\left( {m, n}\right) \cdot a\left( {m, p}\right) \leq a\left( {m,\max \left( {n, p}\right) + 1}\right) \) for all \( m > 1 \) and all \( n, p \in \omega \) .
Proof. If \( n = p = 0 \) then the inequality is obvious. Hence assume that \( n \neq 0 \) or \( p \neq 0 \) . Then
\[
a\left( {m, n}\right) \cdot a\left( {m, p}\right) \leq a{\left( m,\max \left( n, p\right) \right) }^{2}\text{ by }{2.38}
\]
\[
= {\left( {m}^{a\left( {m,\max \left( {n, p}\right) - 1}\right) }\right) }^{2} = {m}^{{2a}\left( {m,\max \left( {n, p}\right) - 1}\right) }
\]
\[
\leq {m}^{\exp \left( {2, a\left( {m,\max \left( {n, p}\right) - 1}\right) }\right) } \leq {m}^{a\left( {m,\max \left( {n, p}\right) }\right) }
\]
\[
= a\left( {m,\max \left( {n, p}\right) + 1}\right) \text{.}
\]
Lemma 2.42. \( a{\left( m, n\right) }^{a\left( {m, p}\right) } \leq a\left( {m,\max \left( {p + 2, n + 1}\right) }\right) \) for all \( m > 1 \) and all \( n, p \in \omega \) .
Proof. For \( n = 0 \) we have
\[
a{\left( m, n\right) }^{a\left( {m, p}\right) } = {m}^{a\left( {m, p}\right) } = a\left( {m, p + 1}\right) \leq a\left( {m,\max \left( {p + 2, n + 1}\right) }\right)
\]
(using 2.38). If \( n \neq 0 \) we have
\[
a{\left( m, n\right) }^{a\left( {m, p}\right) } = {m}^{a\left( {m, n - 1}\right) \cdot a\left( {m, p}\right) } \leq {m}^{a\left( {m,\max \left( {n - 1, p}\right) + 1}\right) }
\]
by 2.41
\[
= a\left( {m,\max \left( {p + 2, n + 1}\right) }\right) \text{.}
\]
Lemma 2.43. \( a\left( {a\left( {m, n}\right), p}\right) \leq a\left( {m, n + {2p}}\right) \) for all \( m > 1 \) and all \( n, p \in \omega \) .
Proof. We proceed by induction on \( p \) :
\[
a\left( {a\left( {m, n}\right) ,0}\right) = a\left( {m, n}\right) = a\left( {m, n + 2 \cdot 0}\right) .
\]
Assuming our result for \( p \), we then have
\[
a\left( {a\left( {m, n}\right), p + 1}\right) = a{\left( m, n\right) }^{a\left( {a\left( {m, n}\right), p}\right) } \leq a{\left( m, n\right) }^{a\left( {m, n + {2p}}\right) }
\]
\[
\leq a\left( {m,\max \left( {n + {2p} + 2, n + 1}\right) }\right)
\]
by 2.42
\[
= a\left( {m, n + 2\left( {p + 1}\right) }\right) \text{.}
\]
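Before turning to 2.44, the inequalities 2.40-2.43 can be spot-checked numerically for the few arguments that are small enough to evaluate (an illustrative sketch only, with \( m = 2 \)):

```python
def a(m, n):
    return m if n == 0 else m ** a(m, n - 1)

m = 2
for n in (0, 1):
    for p in (0, 1):
        assert a(m, n) + a(m, p) <= a(m, max(n, p) + 1)        # 2.40
        assert a(m, n) * a(m, p) <= a(m, max(n, p) + 1)        # 2.41
        assert a(m, n) ** a(m, p) <= a(m, max(p + 2, n + 1))   # 2.42
        assert a(a(m, n), p) <= a(m, n + 2 * p)                # 2.43
```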
Lemma 2.44. If \( g \) is a \( k \) -ary elementary function then there is an \( m \in \omega \) such that for all \( {x}_{0},\ldots ,{x}_{k - 1} \in \omega \), if \( \max \left( {{x}_{0},\ldots ,{x}_{k - 1}}\right) > 1 \) then \( g\left( {{x}_{0},\ldots }\right. \) , \( \left. {x}_{k - 1}\right) < a\left( {\max \left( {{x}_{0},\ldots ,{x}_{k - 1}}\right), m}\right) . \)
Proof. Let \( A \) be the set of all functions \( g \) (of any rank) for which there is such an \( m \) . To prove the lemma it suffices to show that \( A \) is closed under elementary recursive operations.
(1)
\[
+ \in A\text{.}
\]
In fact, let \( m = 2 \) : for any \( {x}_{0},{x}_{1} \in \omega \) with \( \max \left( {{x}_{0},{x}_{1}}\right) > 1 \) ,
\[
{x}_{0} + {x}_{1} \leq \max \left( {{x}_{0},{x}_{1}}\right) + \max \left( {{x}_{0},{x}_{1}}\right)
\]
\[
= a\left( {\max \left( {{x}_{0},{x}_{1}}\right) ,0}\right) + a\left( {\max \left( {{x}_{0},{x}_{1}}\right) ,0}\right)
\]
\[
< a\left( {\max \left( {{x}_{0},{x}_{1}}\right) ,1}\right) + a\left( {\max \left( {{x}_{0},{x}_{1}}\right) ,1}\right) \;\text{by 2.38}
\]
\[
\leq a\left( {\max \left( {{x}_{0},{x}_{1}}\right) ,2}\right)
\]
by 2.40
Thus (1) holds. Analogously,
(2)
\[
\cdot \in A\text{.}
\]
(3)
\[
f \in A,\;\text{ where }f\left( {m, n}\right) = \left| {m - n}\right| \text{ for all }m, n \in \omega .
\]
For if \( \max \left( {{x}_{0},{x}_{1}}\right) > 1 \), then \( \left| {{x}_{0} - {x}_{1}}\right| \leq \max \left( {{x}_{0},{x}_{1}}\right) = a\left( {\max \left( {{x}_{0},{x}_{1}}\right) ,0}\right) < \) \( a\left( {\max \left( {{x}_{0},{x}_{1}}\right) ,1}\right) \) . Similarly, the next two statements hold:
(4)
\[
f \in A,\;\text{ where }f\left( {m, n}\right) = \left\lbrack {m/n}\right\rbrack \text{ for all }m, n \in \omega \text{.}
\]
(5)
\[
{\mathrm{U}}_{i}^{n} \in A,\;\text{ for any positive }n \in \omega \text{ and any }i < n\text{.}
\]
(6) \( A \) is closed under composition.
For, suppose \( f \) is \( m \) -ary, \( {g}_{0},\ldots ,{g}_{m - 1} \) are \( n \) -ary, and \( f,{g}_{0},\ldots ,{g}_{m - 1} \in A \) . Choose \( p,{q}_{0},\ldots ,{q}_{m - 1} \in \omega \) such that \( \max \left( {{x}_{0},\ldots ,{x}_{m - 1}}\right) > 1 \) implies that \( f\left( {{x}_{0},\ldots ,{x}_{m - 1}}\right) < a\left( {\max \left( {{x}_{0},\ldots ,{x}_{m - 1}}\right), p}\right) \), and such that for each \( i < m \) , \( \max \left( {{x}_{0},\ldots ,{x}_{n - 1}}\right) > 1 \) implies that \( {g}_{i}\left( {{x}_{0},\ldots ,{x}_{n - 1}}\right) < a\left( {\max \left( {{x}_{0},\ldots ,{x}_{n - 1}}\right) ,{q}_{i}}\right) . \) Let \( h = {\mathrm{K}}_{n}^{m}\left( {f;{g}_{0},\ldots ,{g}_{m - 1}}\right) \) . Let
\( s = \max \left\{ {{q}_{i} : i < m}\right\} + {2p} + \max \left\{ {f\left( {{x}_{0},\ldots ,{x}_{m - 1}}\right) : {x}_{0},\ldots ,{x}_{m - 1} \leq 1}\right\} + 1. \)
Now suppose that \( \max \left( {{x}_{0},\ldots ,{x}_{n - 1}}\right) > 1 \) . Then if \( {g}_{0}\left( {{x}_{0},\ldots ,{x}_{n - 1}}\right) ,\ldots \) , \( {g}_{m - 1}\left( {{x}_{0},\ldots ,{x}_{n - 1}}\right) \leq 1 \), we obviously have
\[
h\left( {{x}_{0},\ldots ,{x}_{n - 1}}\right) = f\left( {{g}_{0}\left( {{x}_{0},\ldots ,{x}_{n - 1}}\right) ,\ldots ,{g}_{m - 1}\left( {{x}_{0},\ldots ,{x}_{n - 1}}\right) }\right)
\]
\[
< s \leq a\left( {\max \left( {{x}_{0},\ldots ,{x}_{n - 1}}\right), s}\right)
\]
by 2.38
Assume now that \( \max \left\{ {{g}_{i}\left( {{x}_{0},\ldots ,{x}_{n - 1}}\right) : i < m}\right\} > 1 \) . Then
\[
h\left( {{x}_{0},\ldots ,{x}_{n - 1}}\right) = f\left( {{g}_{0}\left( {{x}_{0},\ldots ,{x}_{n - 1}}\right) ,\ldots ,{g}_{m - 1}\left( {{x}_{0},\ldots ,{x}_{n - 1}}\right) }\right)
\]
\[
< a\left( {\max \left\{ {{g}_{i}\left( {{x}_{0},\ldots ,{x}_{n - 1}}\right) : i < m}\right\}, p}\right)
\]
\[
< a\left( {\max \left\{ {a\left( {\max \left\{ {{x}_{0},\ldots ,{x}_{n - 1}}\right\} ,{q}_{i}}\right) : i < m}\right\}, p}\right) \;\text{ by 2.39 }
\]
\[
= a\left( {a\left( {\max \left\{ {{x}_{0},\ldots ,{x}_{n - 1}}\right\} ,\max \left\{ {{q}_{0},\ldots ,{q}_{m - 1}}\right\} }\right), p}\right) \text{by 2.38}
\]
\[
\leq a\left( {\max \left\{ {{x}_{0},\ldots ,{x}_{n - 1}}\right\} ,\max \left\{ {{q}_{0},\ldots ,{q}_{m - 1}}\right\} + {2p}}\right) \text{by 2.43}
\]
\[
< a\left( {\max \left\{ {{x}_{0},\ldots ,{x}_{n - 1}}\right\}, s}\right)
\]
(7)
\[
A\text{ is closed under }\sum \text{.}
\]
In fact, suppose \( f \in A \), say \( f \) is \( m \) -ary, and let \( g = \sum f \) . Since \( f \in A \), choose \( p \in \omega \) such that \( \max \left( {{x}_{0},\ldots ,{x}_{m - 1}}\right) > 1 \) implies that \( f\left( {{x}_{0},\ldots ,{x}_{m - 1}}\right) < \) \( a\left( {\max \left( {{x}_{0},\ldots ,{x}_{m - 1}}\right), p}\right) \) . Let
\[
q = p + 1 + \max \left\{ {f\left( {{x}_{0},\ldots ,{x}_{m - 1}}\right) : {x}_{0},\ldots ,{x}_{m - 1} \leq 1}\right\} .
\]
Then for any \( {x}_{0},\ldots ,{x}_{m - 1} \in \omega, f\left( {{x}_{0},\ldots ,{x}_{m - 1}}\right) < a\left( {\max \left( {{x}_{0},\ldots ,{x}_{m - 1},2}\right), q}\right) \) , using 2.38. Thus if \( \max \left( {{x}_{0},\ldots ,{x}_{m - 1}}\right) > 1 \) we have
\[
g\left( {{x}_{0},\ldots ,{x}_{m - 1}}\right) = \mathop{\sum }\limits_{{y < {x}_{m - 1}}}f\left( {{x}_{0},\ldots ,{x}_{m - 2}, y}\right)
\]
\[
< \mathop{\sum }\limits_{{y < {x}_{m - 1}}}a\left( {\max \left( {{x}_{0},\ldots ,{x}_{m - 2}, y,2}\right), q}\right)
\]
\[
\leq \mathop{\sum }\limits_{{y < {x}_{m - 1}}}a\left( {\max \left( {{x}_{0},\ldots ,{x}_{m - 1}}\right), q}\right)
\]
by 2.39
\[
= a\left( {\max \left( {{x}_{0},\ldots ,{x}_{m - 1}}\right), q}\right) \cdot {x}_{m - 1}
\]
\[
\leq a\left( {\max \left( {{x}_{0},\ldots ,{x}_{m - |
1009_(GTM175)An Introduction to Knot Theory | Definition 10.3 |
Definition 10.3. The Arf invariant \( \mathcal{A}\left( L\right) \) of an oriented link \( L \) having the property \( \left( \star \right) \) is \( c\left( q\right) \), where \( q : {H}_{1}\left( {F;\mathbb{Z}/2\mathbb{Z}}\right) /{\iota }_{ \star }{H}_{1}\left( {\partial F;\mathbb{Z}/2\mathbb{Z}}\right) \rightarrow \mathbb{Z}/2\mathbb{Z} \) is the quadratic form described above.
Proposition 10.4. The Arf invariant \( \mathcal{A}\left( L\right) \) for an oriented link \( L \) having property \( \left( \star \right) \) is well defined.
Proof. It is necessary to check that \( \mathcal{A}\left( L\right) \) does not depend on the choice of Seifert surface \( F \) . By Theorem 8.2, it is only necessary to check what happens when \( F \) is changed to \( {F}^{\prime } \) by embedded surgery along an arc in \( {S}^{3} \) . Suppose that \( \left\{ {{e}_{1},{f}_{1},{e}_{2},{f}_{2},\ldots ,{e}_{n},{f}_{n}}\right\} \) is a symplectic base for \( {H}_{1}\left( {F;\mathbb{Z}/2\mathbb{Z}}\right) /{\iota }_{ \star }{H}_{1}\left( {\partial F;\mathbb{Z}/2\mathbb{Z}}\right) \) represented by simple closed curves (for example the first \( {2g} \) curves, renamed, of Figure 6.1). That base can be augmented by \( \left\{ {{e}_{n + 1},{f}_{n + 1}}\right\} \) to give a symplectic base for \( {H}_{1}\left( {{F}^{\prime };\mathbb{Z}/2\mathbb{Z}}\right) /{\iota }_{ \star }{H}_{1}\left( {\partial {F}^{\prime };\mathbb{Z}/2\mathbb{Z}}\right) \) : Choose \( {e}_{n + 1} \) to be represented by a simple closed curve encircling once the solid cylinder defining the embedded surgery, that curve being met at exactly one point by a simple closed curve representing \( {f}_{n + 1} \) . Note that an isotopy of the end points of the surgery arc \( \alpha \) ensures that the two points of \( \partial \alpha \) are not separated by any base curve. Then \( q\left( {e}_{n + 1}\right) = 0 \), and so \( \mathop{\sum }\limits_{{i = 1}}^{n}q\left( {e}_{i}\right) q\left( {f}_{i}\right) = \mathop{\sum }\limits_{{i = 1}}^{{n + 1}}q\left( {e}_{i}\right) q\left( {f}_{i}\right) \) .
Note that \( \mathcal{A} \) (the unknot) \( = 0 \) and \( \mathcal{A} \) (the trefoil) \( = 1 \), for as shown in Figure 6.3 (when \( n = 1 \) ), the trefoil has a symplectic base \( \left\{ {{e}_{1},{f}_{1}}\right\} \) for which \( q\left( {e}_{1}\right) = \) \( q\left( {f}_{1}\right) = 1 \) . Note, too, that the addition formula for the Arf invariant of the direct sum of two quadratic forms implies that \( \mathcal{A}\left( {L + {L}^{\prime }}\right) = \mathcal{A}\left( L\right) + \mathcal{A}\left( {L}^{\prime }\right) \) for any two links \( L \) and \( {L}^{\prime } \) having property \( \left( \star \right) \) (whatever components are chosen for the summing operation).
Lemma 10.5. Suppose that \( L \) and \( {L}^{\prime } \) are oriented links having property \( \left( \star \right) \) which are the same except near one point, where they are as shown in Figure 10.1; then \( \mathcal{A}\left( L\right) = \mathcal{A}\left( {L}^{\prime }\right) \)
Proof. The two segments shown on one of the two sides of Figure 10.1 must belong to the same component of the link. Suppose, without loss of generality, it is the two segments on the left side. Then using the Seifert circuit method of Theorem 2.2, a Seifert surface can be constructed for the left link that meets the neighbourhood of the point in question in the way indicated by the shading. Adding a band to that produces a Seifert surface for the right link as indicated. Now, as these two surfaces just differ by a band added to the boundary, the \( \mathbb{Z}/2\mathbb{Z} \) -homology of the second surface is just that of the first surface with an extra \( \mathbb{Z}/2\mathbb{Z} \) summand. However, that summand is in the image of the homology of the boundary of the surface; this image is disregarded (by means of the quotienting) in construction of the quadratic form that gives the Arf invariant.
![5aaec141-7895-41cf-bdc1-c8a33b18f96f_116_0.jpg](images/5aaec141-7895-41cf-bdc1-c8a33b18f96f_116_0.jpg)
Figure 10.1
Note that elementary consideration of linking numbers shows the following: If the two segments of the link \( L \) shown on one side of Figure 10.1 belong to distinct components, and if \( L \) has the property \( \left( \star \right) \), then \( {L}^{\prime } \) also has the property \( \left( \star \right) \) .
With the definition of the Arf invariant and its elementary properties now established, its relevance to the Jones polynomial can now be considered. The result linking the two topics is as follows:
Theorem 10.6. The Jones polynomial of any oriented link \( L \) in \( {S}^{3} \), evaluated at \( t = i \) (with \( {t}^{1/2} = {e}^{{i\pi }/4} \)), is given by
\[
V{\left( L\right) }_{\left( t = i\right) } = \left\{ \begin{array}{ll} {\left( -\sqrt{2}\right) }^{\# L - 1}{\left( -1\right) }^{\mathcal{A}\left( L\right) } & \text{ if }L\text{ has property }\left( \star \right) , \\ 0 & \text{ otherwise,} \end{array}\right.
\]
where \( \# L \) is the number of components of \( L \) and \( \mathcal{A}\left( L\right) \) is its Arf invariant.
Proof. Define \( A\left( L\right) \) to be the integer given by
\[
A\left( L\right) = \left\{ \begin{array}{ll} {\left( -1\right) }^{\mathcal{A}\left( L\right) } & \text{ if }L\text{ has property }\left( \star \right) , \\ 0 & \text{ otherwise. } \end{array}\right.
\]
Now suppose that \( {L}_{ + },{L}_{ - } \) and \( {L}_{0} \) are three oriented links that are exactly the same except near a point where they are as shown in Figure 3.2 (the usual relationship). The proof considers two cases as follows:
Suppose first that the two segments of \( {L}_{ + } \) near the point in question are parts of the same component of \( {L}_{ + } \) . (Then either both \( {L}_{ + } \) and \( {L}_{ - } \) have property \( \left( \star \right) \) or neither of them does.) If \( {L}_{0} \) has property \( \left( \star \right) \) so, by the above remark, do \( {L}_{ + } \) and \( {L}_{ - } \), and by Lemma 10.5, \( \mathcal{A}\left( {L}_{0}\right) = \mathcal{A}\left( {L}_{ + }\right) = \mathcal{A}\left( {L}_{ - }\right) \) . Thus certainly
\[
A\left( {L}_{ + }\right) + A\left( {L}_{ - }\right) - {2A}\left( {L}_{0}\right) = 0,
\]
an equation that also, trivially, holds if none of \( {L}_{ + },{L}_{ - } \) or \( {L}_{0} \) has property \( \left( \star \right) \) . There remains the possibility that \( {L}_{ + } \) and \( {L}_{ - } \) have property \( \left( \star \right) \) but that \( {L}_{0} \) does not. Consider the two links shown in Figure 10.2. It is easy to check that the first link, \( X \) say, has property \( \left( \star \right) \), and so its Arf invariant exists and by Lemma 10.5, \( \mathcal{A}\left( {L}_{ + }\right) = \mathcal{A}\left( X\right) \) . The second link is just \( {L}_{ - } \) in disguise. It can also be thought of as \( X \) first summed with a trefoil knot and then having two components banded together. Thus, again using Lemma \( {10.5},\mathcal{A}\left( X\right) + 1 = \mathcal{A}\left( {L}_{ - }\right) \) modulo 2. Hence again it is true that \( A\left( {L}_{ + }\right) + A\left( {L}_{ - }\right) - {2A}\left( {L}_{0}\right) = 0 \) .
![5aaec141-7895-41cf-bdc1-c8a33b18f96f_117_0.jpg](images/5aaec141-7895-41cf-bdc1-c8a33b18f96f_117_0.jpg)
Figure 10.2
Secondly, suppose that the two segments of \( {L}_{ + } \) near the point in question are parts of different components of \( {L}_{ + } \) . If \( {L}_{0} \) does not have property \( \left( \star \right) \) then neither do \( {L}_{ + } \) and \( {L}_{ - } \), and so trivially
\[
A\left( {L}_{ + }\right) + A\left( {L}_{ - }\right) - A\left( {L}_{0}\right) = 0.
\]
Otherwise \( {L}_{0} \) and one of \( {L}_{ + } \) and \( {L}_{ - } \) have property \( \left( \star \right) \), and this formula is again true (using Lemma 10.5).
If \( \widehat{A}\left( L\right) \) denotes \( {\left( -\sqrt{2}\right) }^{\# L - 1}A\left( L\right) \), then the two preceding displayed formulae both become
\[
\widehat{A}\left( {L}_{ + }\right) + \widehat{A}\left( {L}_{ - }\right) + \sqrt{2}\widehat{A}\left( {L}_{0}\right) = 0,
\]
and of course if \( L \) is the unknot, \( \widehat{A}\left( L\right) = 1 \) . However, as discussed in Chapter 3, the Jones polynomial \( V\left( L\right) \in \mathbb{Z}\left\lbrack {{t}^{-1/2},{t}^{1/2}}\right\rbrack \) is characterised by being 1 on the unknot and by satisfying
\[
{t}^{-1}V\left( {L}_{ + }\right) - {tV}\left( {L}_{ - }\right) + \left( {{t}^{-1/2} - {t}^{1/2}}\right) V\left( {L}_{0}\right) = 0.
\]
Substituting \( {t}^{\frac{1}{2}} = {e}^{{i\pi }/4} \) reduces this to exactly the above formula for \( \widehat{A} \) .
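As a concrete check (using the standard value \( V\left( \text{trefoil}\right) = - {t}^{-4} + {t}^{-3} + {t}^{-1} \) for the right-handed trefoil, which is not derived here), substituting \( t = i \) gives
\[
- {i}^{-4} + {i}^{-3} + {i}^{-1} = - 1 + i - i = - 1 = {\left( -\sqrt{2}\right) }^{0}{\left( -1\right) }^{1},
\]
in agreement with the theorem, since the trefoil is a knot (\( \# L = 1 \)) with \( \mathcal{A} = 1 \) .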
If, in the notation used in the above proof, \( {L}_{ + } \) is a knot, then so is \( {L}_{ - } \), and \( {L}_{0} \) is a link of two components. Of course, \( {L}_{0} \) has the property \( \left( \star \right) \) if and only if \( \operatorname{lk}\left( {L}_{0}\right) \) , the linking number of the two components of \( {L}_{0} \), is even. The second paragraph of the above proof shows that \( \mathcal{A}\left( {L}_{ + }\right) - \mathcal{A}\left( {L}_{ - }\right) \equiv \operatorname{lk}\left( {L}_{0}\right) \) modulo 2 .
Theorem 10.7. Let \( K \) be a knot. Then \( \mathcal{A}\left( K\right) \equiv {a}_{2}\left( K\right) \) modulo 2, where \( {a}_{2}\left( K\right) \) is the coefficient of \( {z}^{2} \) in the Conway polynomial \( {\nabla }_{K}\left( z\right) \) . The Arf invariant of \( K \) is related to the Alexander polynomial by
\[
\mathcal{A}\left( K\right) = \left\{ \begin{array}{ll} 0 & \text{ if }{\Delta }_{K}\left( {-1}\right) \equiv \pm 1\text{ modulo }8, \\ 1 & \text{ if }{\Delta }_{K}\left( {-1}\right) \equiv \pm 3\text{ modulo }8. \end{array}\right.
\]
If \( K \) is a slice knot, then \( \mathcal{A}\left( K\right) = 0 \) .
Proof. The formula \( \mathcal{A}\left( {L}_{ + }\right) - \mathcal{A}\left( {L}_{ - }\right) \equiv \operatorname{lk}\ |
1009_(GTM175)An Introduction to Knot Theory | Definition 10.2 |
Definition 10.2. The Arf invariant \( c\left( \psi \right) \) of the non-singular quadratic form \( \psi : V \rightarrow \mathbb{Z}/2\mathbb{Z} \) is the value, 0 or 1, taken more often by \( \psi \left( u\right) \) as \( u \) varies over the \( {2}^{2n} \) elements of \( V \) .
It is easy to show, by induction on \( n \), that the value 1 is taken \( {2}^{{2n} - 1} - {2}^{n - 1} \) times by \( \psi \left( u\right) \) if \( \psi \) is of Type 1 and \( {2}^{{2n} - 1} + {2}^{n - 1} \) times if \( \psi \) is of Type 2 . Hence \( c\left( \psi \right) \) is always defined, no choice is involved in its definition and
\[
c\left( \psi \right) = \left\{ \begin{array}{ll} 0 & \text{ if }\psi \text{ is of Type }1 \\ 1 & \text{ if }\psi \text{ is of Type }2 \end{array}\right.
\]
Note that if \( {\psi }_{1} \) and \( {\psi }_{2} \) are quadratic forms on \( {V}_{1} \) and \( {V}_{2} \), respectively, then the form \( {\psi }_{1} \oplus {\psi }_{2} \) on \( {V}_{1} \oplus {V}_{2} \) has \( c\left( {{\psi }_{1} \oplus {\psi }_{2}}\right) = c\left( {\psi }_{1}\right) + c\left( {\psi }_{2}\right) \) modulo 2 . This follows by checking the possible Types. Note also that if \( {e}_{1},{f}_{1},{e}_{2},{f}_{2},\ldots ,{e}_{n},{f}_{n} \) is any symplectic base, then
\[
c\left( \psi \right) = \mathop{\sum }\limits_{{i = 1}}^{n}\psi \left( {e}_{i}\right) \psi \left( {f}_{i}\right)
\]
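Both descriptions of \( c\left( \psi \right) \) — the majority count of Definition 10.2 and the symplectic-basis formula just displayed — are easy to compare by brute force; the following sketch (not from the text) encodes a form by its values \( \left( {\psi \left( {e}_{i}\right) ,\psi \left( {f}_{i}\right) }\right) \) on a symplectic base.

```python
from itertools import product

def psi(v, vals):
    """Quadratic form on (Z/2Z)^(2n) in a symplectic base e1,f1,...,en,fn:
    v = (a1,b1,...,an,bn), vals = [(psi(e1),psi(f1)), ...], and on each plane
    psi(a e + b f) = a psi(e) + b psi(f) + ab (mod 2)."""
    pairs = zip(zip(v[0::2], v[1::2]), vals)
    return sum(a * pe + b * pf + a * b for (a, b), (pe, pf) in pairs) % 2

def arf_count(vals):
    """Definition 10.2: the value taken more often by psi over all 2^(2n) vectors."""
    n = len(vals)
    ones = sum(psi(v, vals) for v in product((0, 1), repeat=2 * n))
    return 1 if 2 * ones > 4 ** n else 0

def arf_formula(vals):
    """c(psi) = sum_i psi(e_i) psi(f_i) mod 2, as in the display above."""
    return sum(pe * pf for pe, pf in vals) % 2

# a Type 1 plane, a Type 2 plane, and two rank-4 forms: the two computations agree
for vals in ([(0, 0)], [(1, 1)], [(1, 1), (1, 1)], [(1, 0), (0, 1)]):
    assert arf_count(vals) == arf_formula(vals)
```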
The above theory of \( \mathbb{Z}/2\mathbb{Z} \) quadratic forms is applied to links in the following way: Let \( L \) be an oriented link in \( {S}^{3} \) with Seifert surface \( F \), the orientation being needed to define \( F \) . Define \( q : {H}_{1}\left( {F;\mathbb{Z}/2\mathbb{Z}}\right) \rightarrow \mathbb{Z}/2\mathbb{Z} \) by \( q\left( x\right) = {\alpha }_{2}\left( {x, x}\right) \in \) \( \mathbb{Z}/2\mathbb{Z} \), where \( {\alpha }_{2} \) is the Seifert form \( \alpha \) (see Chapter 6) reduced modulo 2 . Thus if \( x \) is (represented by) a simple closed curve on \( F, q\left( x\right) \) is the number, modulo 2, of twists in an annular neighbourhood of \( x \) in \( F \) . Then
\[
q\left( {x + y}\right) + q\left( x\right) + q\left( y\right) = {\alpha }_{2}\left( {x, y}\right) + {\alpha }_{2}\left( {y, x}\right) = \mathcal{F}\left( {x, y}\right) ,
\]
where \( \mathcal{F} \) is the intersection form (which just counts the number of intersection points of transverse curves) modulo 2 on \( {H}_{1}\left( {F;\mathbb{Z}/2\mathbb{Z}}\right) \) . However, a glance at the base shown in Figure 6.1 reveals that \( \mathcal{F} \) is non-singular only when \( L \) has one component. A second glance shows that \( \mathcal{F} \) induces a non-singular form on the quotient \( {H}_{1}\left( {F;\mathbb{Z}/2\mathbb{Z}}\right) /{\iota }_{ \star }{H}_{1}\left( {\partial F;\mathbb{Z}/2\mathbb{Z}}\right) \), where \( \iota \) is the inclusion map. Suppose that \( L \) has components \( \left\{ {L}_{i}\right\} \) and that \( L \) has the property that
\( \left( \star \right) \)
\[
\operatorname{lk}\left( {{L}_{i}, L - {L}_{i}}\right) \equiv 0{\;\operatorname{modulo}\;2}.
\]
Then \( q\left( \left\lbrack {L}_{i}\right\rbrack \right) \equiv \operatorname{lk}\left( {{L}_{i}^{ - },{L}_{i}}\right) = \operatorname{lk}\left( {{L}_{i}, L - {L}_{i}}\right) \equiv 0 \) modulo 2, as \( {L}_{i} \) is homologous to \( L - {L}_{i} \) in the complement of \( {L}_{i}^{ - } \) . For any \( x \in {H}_{1}\left( {F;\mathbb{Z}/2\mathbb{Z}}\right) \), clearly \( \mathcal{F}\left( {x,\left\lbrack {L}_{i}\right\rbrack }\right) = 0 \), so \( q\left( {x + \left\lbrack {L}_{i}\right\rbrack }\right) = q\left( x\right) \), and hence \( q \) induces a well-defined non-singular quadratic form \( q : {H}_{1}\left( {F;\mathbb{Z}/2\mathbb{Z}}\right) /{\iota }_{ \star }{H}_{1}\left( {\partial F;\mathbb{Z}/2\mathbb{Z}}\right) \rightarrow \mathbb{Z}/2\mathbb{Z} \) .
Definition 10.3. The Arf invariant \( \mathcal{A}\left( L\right) \) of an oriented link \( L \) having the property \( \left( \star \right) \) is \( c\left( q\right) \), where \( q : {H}_{1}\left( {F;\mathbb{Z}/2\mathbb{Z}}\right) /{\iota }_{ \star }{H}_{1}\left( {\partial F;\mathbb{Z}/2\mathbb{Z}}\right) \rightarrow \mathbb{Z}/2\mathbb{Z} \) is the quadratic form described above.
Proposition 10.4. The Arf invariant \( \mathcal{A}\left( L\right) \) for an oriented link \( L \) having property \( \left( \star \right) \) is well defined.
Proof. It is necessary to check that \( \mathcal{A}\left( L\right) \) does not depend on the choice of Seifert surface \( F \) . By Theorem 8.2, it is only necessary to check what happens when \( F \) is changed to \( {F}^{\prime } \) by embedded surgery along an arc in \( {S}^{3} \) . Suppose that \( \left\{ {{e}_{1},{f}_{1},{e}_{2},{f}_{2},\ldots ,{e}_{n},{f}_{n}}\right\} \) is a symplectic base for \( {H}_{1}\left( {F;\mathbb{Z}/2\mathbb{Z}}\right) /{\iota }_{ \star }{H}_{1}\left( {\partial F;\mathbb{Z}/2\mathbb{Z}}\right) \) represented by simple closed curves (for example the first \( {2g} \) curves, renamed, of Figure 6.1). That base can be augmented by \( \left\{ {{e}_{n + 1},{f}_{n + 1}}\right\} \) to give a symplectic base for \( {H}_{1}\left( {{F}^{\prime };\mathbb{Z}/2\mathbb{Z}}\right) /{\iota }_{ \star }{H}_{1}\left( {\partial {F}^{\prime };\mathbb{Z}/2\mathbb{Z}}\right) \) : Choose \( {e}_{n + 1} \) to be represented by a simple closed curve encircling once the solid cylinder defining the embedded surgery, that curve being met at exactly one point by a simple closed curve representing \( {f}_{n + 1} \) . Note that an isotopy of the end points of the surgery arc \( \alpha \) ensures that the two points of \( \partial \alpha \) are not separated by any base curve. Then \( q\left( {e}_{n + 1}\right) = 0 \), and so \( \mathop{\sum }\limits_{{i = 1}}^{n}q\left( {e}_{i}\right) q\left( {f}_{i}\right) = \mathop{\sum }\limits_{{i = 1}}^{{n + 1}}q\left( {e}_{i}\right) q\left( {f}_{i}\right) \) .
Note that \( \mathcal{A} \) (the unknot) \( = 0 \) and \( \mathcal{A} \) (the trefoil) \( = 1 \), for as shown in Figure 6.3 (when \( n = 1 \) ), the trefoil has a symplectic base \( \left\{ {{e}_{1},{f}_{1}}\right\} \) for which \( q\left( {e}_{1}\right) = \) \( q\left( {f}_{1}\right) = 1 \) . Note, too, that the addition formula for the Arf invariant of the direct sum of two quadratic forms implies that \( \mathcal{A}\left( {L + {L}^{\prime }}\right) = \mathcal{A}\left( L\right) + \mathcal{A}\left( {L}^{\prime }\right) \) for any two links \( L \) and \( {L}^{\prime } \) having property \( \left( \star \right) \) (whatever components are chosen for the summing operation).
Lemma 10.5. Suppose that \( L \) and \( {L}^{\prime } \) are oriented links having property \( \left( \star \right) \) which are the same except near one point, where they are as shown in Figure 10.1; then \( \mathcal{A}\left( L\right) = \mathcal{A}\left( {L}^{\prime }\right) \)
Proof. The two segments shown on one of the two sides of Figure 10.1 must belong to the same component of the link. Suppose, without loss of generality, it is the two segments on the left side. Then using the Seifert circuit method of Theorem 2.2, a Seifert surface can be constructed for the left link that meets the neighbourhood of the point in question in the way indicated by the shading. Adding a band to that produces a Seifert surface for the right link as indicated. Now, as these two surfaces just differ by a band added to the boundary, the \( \mathbb{Z}/2\mathbb{Z} \) -homology of the second surface is just that of the first surface with an extra \( \mathbb{Z}/2\mathbb{Z} \) summand. However, that summand is in the image of the homology of the boundary of the surface; this image is disregarded (by means of the quotienting) in construction of the quadratic form that gives the Arf invariant.
![5aaec141-7895-41cf-bdc1-c8a33b18f96f_116_0.jpg](images/5aaec141-7895-41cf-bdc1-c8a33b18f96f_116_0.jpg)
Figure 10.1
Note that elementary consideration of linking numbers shows the following: If the two segments of the link \( L \) shown on one side of Figure 10.1 belong to distinct components, and if \( L \) has the property \( \left( \star \right) \), then \( {L}^{\prime } \) also has the property \( \left( \star \right) \) .
With the definition of the Arf invariant and its elementary properties now established, its relevance to the Jones polynomial can now be considered. The result linking the two topics is as follows:
Theorem 10.6. The Jones polynomial of any oriented link \( L \) in \( {S}^{3} \), evaluated at \( t = i \) (with \( {t}^{1/2} = {e}^{{i\pi }/4} \)), is given by
\[
V{\left( L\right) }_{\left( t = i\right) } = \left\{ \begin{array}{ll} {\left( -\sqrt{2}\right) }^{\# L - 1}{\left( -1\right) }^{\mathcal{A}\left( L\right) } & \text{ if }L\text{ has property }\left( \star \right) , \\ 0 & \text{ otherwise,} \end{array}\right.
\]
where \( \# L \) is the number of components of \( L \) and \( \mathcal{A}\left( L\right) \) is its Arf invariant.
Proof. Define \( A\left( L\right) \) to be the integer given by
\[
A\left( L\right) = \left\{ \begin{array}{ll} {\left( -1\right) }^{\mathcal{A}\left( L\right) } & \text{ if }L\text{ has property }\left( \star \right) , \\ 0 & \text{ otherwise. } \end{array}\right.
\]
Now suppose that \( {L}_{ + },{L}_{ - } \) and \( {L}_{0} \) are three oriented links that are exactly the same except near a point where they are as shown in Figure 3.2 (the usual relationship). The proof considers two cases as follows:
Suppose first that the two segments of \( {L}_{ + } \) near the point in question are parts of the same component of \( {L}_{ + } \) . (Then either both \( {L}_{ + } \) and \( {L}_{ - } \) have property \( \left( \star \right) \) or neither of them does.) If \( {L}_{0} \) has property \( \left( \star \right) \) so, by the above remark, do \( {L}_{ + } \) and \( {L}_{ - } \), and by Lemma 10.5, \( \mathcal{A}\left( {L}_{0}\right) = \mathcal{A}\left( {L}_{ + }\right) = \mathcal{A}\left( {L}_{ - |
1077_(GTM235)Compact Lie Groups | Definition 6.34 |
Definition 6.34. Let \( G \) be a compact connected Lie group with maximal torus \( T \) . Let \( N = N\left( T\right) \) be the normalizer in \( G \) of \( T, N = \left\{ {g \in G \mid {gT}{g}^{-1} = T}\right\} \) . The Weyl group of \( G, W = W\left( G\right) = W\left( {G, T}\right) \), is defined by \( W = N/T \) .
If \( {T}^{\prime } \) is another maximal torus of \( G \), Corollary 5.10 shows that there is a \( g \in G \), so \( {c}_{g}T = {T}^{\prime } \) . In turn, this shows that \( {c}_{g}N\left( T\right) = N\left( {T}^{\prime }\right) \), so that \( W\left( {G, T}\right) \cong W\left( {G,{T}^{\prime }}\right) \) . Thus, up to isomorphism, the Weyl group is independent of the choice of maximal torus.
Given \( w \in N, H \in \mathfrak{t} \), and \( \lambda \in {\mathfrak{t}}^{ * } \), define an action of \( N \) on \( \mathfrak{t} \) and \( {\mathfrak{t}}^{ * } \) by
(6.35)
\[
w\left( H\right) = \operatorname{Ad}\left( w\right) H
\]
\[
\left\lbrack {w\left( \lambda \right) }\right\rbrack \left( H\right) = \lambda \left( {{w}^{-1}\left( H\right) }\right) = \lambda \left( {\operatorname{Ad}\left( {w}^{-1}\right) H}\right) .
\]
As usual, extend this to an action of \( N \) on \( {\mathfrak{t}}_{\mathbb{C}}, i\mathfrak{t},{\mathfrak{t}}_{\mathbb{C}}^{ * } \), and \( {\left( i\mathfrak{t}\right) }^{ * } \) by \( \mathbb{C} \) -linearity. As \( \operatorname{Ad}\left( T\right) \) acts trivially on \( \mathfrak{t} \), the action of \( N \) descends to an action of \( W = N/T \) .
Theorem 6.36. Let \( G \) be a compact connected Lie group with a maximal torus \( T \) .
(a) The action of \( W \) on \( i\mathfrak{t} \) and on \( {\left( i\mathfrak{t}\right) }^{ * } \) is faithful, i.e., a Weyl group element acts trivially if and only if it is the identity element.
(b) For \( w \in N \) and \( \alpha \in \Delta \left( {\mathfrak{g}}_{\mathbb{C}}\right) \cup \{ 0\} ,\operatorname{Ad}\left( w\right) {\mathfrak{g}}_{\alpha } = {\mathfrak{g}}_{w\alpha } \) .
(c) The action of \( W \) on \( {\left( i\mathfrak{t}\right) }^{ * } \) preserves and acts faithfully on \( \Delta \left( {\mathfrak{g}}_{\mathbb{C}}\right) \) .
(d) The action of \( W \) on \( i\mathfrak{t} \) preserves and acts faithfully on \( \left\{ {{h}_{\alpha } \mid \alpha \in \Delta \left( {\mathfrak{g}}_{\mathbb{C}}\right) }\right\} \) . Moreover, \( w{h}_{\alpha } = {h}_{w\alpha } \) .
(e) \( W \) is a finite group.
(f) Given \( {t}_{i} \in T \), there exists \( g \in G \) so \( {c}_{g}{t}_{1} = {t}_{2} \) if and only if there exists \( w \in N \), so \( {c}_{w}{t}_{1} = {t}_{2} \)
Proof. For part (a), suppose \( w \in N \) acts trivially on \( \mathfrak{t} \) via Ad. Since \( \exp \mathfrak{t} = T \) and since \( {c}_{w} \circ \exp = \exp \circ \operatorname{Ad}\left( w\right) \), this implies that \( w \in {Z}_{G}\left( T\right) \) . However, Corollary 5.13 shows that \( {Z}_{G}\left( T\right) = T \) so that \( w \in T \), as desired.
For part (b), let \( w \in N, H \in {\mathfrak{t}}_{\mathbb{C}} \), and \( {X}_{\alpha } \in {\mathfrak{g}}_{\alpha } \) and calculate
\[
\left\lbrack {H,\operatorname{Ad}\left( w\right) {X}_{\alpha }}\right\rbrack = \operatorname{Ad}\left( w\right) \left\lbrack {\operatorname{Ad}\left( {w}^{-1}\right) H,{X}_{\alpha }}\right\rbrack = \alpha \left( {\operatorname{Ad}\left( {w}^{-1}\right) H}\right) \operatorname{Ad}\left( w\right) {X}_{\alpha } = \left\lbrack {\left( {w\alpha }\right) \left( H\right) }\right\rbrack \operatorname{Ad}\left( w\right) {X}_{\alpha },
\]
which shows that \( \operatorname{Ad}\left( w\right) {\mathfrak{g}}_{\alpha } \subseteq {\mathfrak{g}}_{w\alpha } \) . Since \( \dim {\mathfrak{g}}_{\alpha } = 1 \) and since \( \operatorname{Ad}\left( w\right) \) is invertible, \( \operatorname{Ad}\left( w\right) {\mathfrak{g}}_{\alpha } = {\mathfrak{g}}_{w\alpha } \) and, in particular, \( {w\alpha } \in \Delta \left( {\mathfrak{g}}_{\mathbb{C}}\right) \) . Noting that \( W \) acts trivially on \( \mathfrak{z}\left( \mathfrak{g}\right) \cap \mathfrak{t} \), we may reduce to the case where \( \mathfrak{g} \) is semisimple. As \( \Delta \left( {\mathfrak{g}}_{\mathbb{C}}\right) \) spans \( {\left( i\mathfrak{t}\right) }^{ * } \) , parts (b) and (c) are therefore finished.
For part (d), calculate
\[
B\left( {{u}_{w\alpha }, H}\right) = \left( {w\alpha }\right) \left( H\right) = \alpha \left( {{w}^{-1}H}\right) = B\left( {{u}_{\alpha },{w}^{-1}H}\right) = B\left( {w{u}_{\alpha }, H}\right) ,
\]
so that \( {u}_{w\alpha } = w{u}_{\alpha } \) . Since the action of \( w \) preserves the Killing form, it follows that \( w{h}_{\alpha } = {h}_{w\alpha } \), which finishes part (d). As \( \Delta \left( {\mathfrak{g}}_{\mathbb{C}}\right) \) is finite and the action is faithful, part (e) is also done.
For part (f), suppose \( {c}_{g}{t}_{1} = {t}_{2} \) for \( g \in G \) . Consider the connected compact Lie subgroup \( {Z}_{G}{\left( {t}_{2}\right) }^{0} = {\left\{ h \in G \mid {c}_{{t}_{2}}h = h\right\} }^{0} \) of \( G \) with Lie algebra \( {\mathfrak{z}}_{\mathfrak{g}}\left( {t}_{2}\right) = \) \( \left\{ {X \in \mathfrak{g} \mid \operatorname{Ad}\left( {t}_{2}\right) X = X}\right\} \) (Exercise 4.22). Clearly \( \mathfrak{t} \subseteq {\mathfrak{z}}_{\mathfrak{g}}\left( {t}_{2}\right) \) and \( \mathfrak{t} \) is still a Cartan subalgebra of \( {\mathfrak{z}}_{\mathfrak{g}}\left( {t}_{2}\right) \) . Therefore \( T \subseteq {Z}_{G}\left( {t}_{2}\right) \) and \( T \) is a maximal torus of \( {Z}_{G}\left( {t}_{2}\right) \) . On the other hand, \( \operatorname{Ad}\left( {t}_{2}\right) \operatorname{Ad}\left( g\right) H = \operatorname{Ad}\left( g\right) \operatorname{Ad}\left( {t}_{1}\right) H = \operatorname{Ad}\left( g\right) H \) for \( H \in \mathfrak{t} \) . Thus \( \operatorname{Ad}\left( g\right) \mathfrak{t} \) is also a Cartan subalgebra in \( {\mathfrak{z}}_{\mathfrak{g}}\left( {t}_{2}\right) \), and so \( {c}_{g}T \) is a maximal torus in \( {Z}_{G}{\left( {t}_{2}\right) }^{0} \) . By Corollary 5.10, there is a \( z \in {Z}_{G}\left( {t}_{2}\right) \), so that \( {c}_{z}\left( {{c}_{g}T}\right) = T \), i.e., \( {zg} \in N\left( T\right) \) . Since \( {c}_{zg}{t}_{1} = {c}_{z}{t}_{2} = {t}_{2} \), the proof is finished.
## 6.4.2 Classical Examples
Here we calculate the Weyl group for each of the compact classical Lie groups. The details are straightforward matrix calculations and are mostly left as an exercise (Exercise 6.27).
6.4.2.1 \( U\left( n\right) \) and \( {SU}\left( n\right) \) For \( U\left( n\right) \) let \( {T}_{U\left( n\right) } = \left\{ {\operatorname{diag}\left( {{e}^{i{\theta }_{1}},\ldots ,{e}^{i{\theta }_{n}}}\right) \mid {\theta }_{i} \in \mathbb{R}}\right\} \) be a maximal torus. Write \( {\mathcal{S}}_{n} \) for the set of \( n \times n \) permutation matrices. Recall that an element of \( {GL}\left( {n,\mathbb{C}}\right) \) is a permutation matrix if the entries of each row and column consist of a single one and \( \left( {n - 1}\right) \) zeros. Thus \( {\mathcal{S}}_{n} \cong {S}_{n} \) where \( {S}_{n} \) is the permutation group on \( n \) letters. Since the set of eigenvalues is invariant under conjugation, any \( w \in N \) must permute, up to scalar, the standard basis of \( {\mathbb{R}}^{n} \) . In particular, this shows that
\[
N\left( {T}_{U\left( n\right) }\right) = {\mathcal{S}}_{n}{T}_{U\left( n\right) }
\]
\[
W \cong {S}_{n}
\]
\[
\left| W\right| = n!\text{.}
\]
Write \( \left( {\theta }_{i}\right) \) for the element \( \operatorname{diag}\left( {{\theta }_{1},\ldots ,{\theta }_{n}}\right) \in \mathfrak{t} \) and \( \left( {\lambda }_{i}\right) \) for the element \( \mathop{\sum }\limits_{i}{\lambda }_{i}{\epsilon }_{i} \in \) \( {\left( i\mathfrak{t}\right) }^{ * } \) . It follows that \( W \) acts on \( i{\mathfrak{t}}_{U\left( n\right) } = \left\{ {\left( {\theta }_{i}\right) \mid {\theta }_{i} \in \mathbb{R}}\right\} \) and on \( {\left( i{\mathfrak{t}}_{U\left( n\right) }\right) }^{ * } = \) \( \left\{ {\left( {\lambda }_{i}\right) \mid {\lambda }_{i} \in \mathbb{R}}\right\} \) by all permutations of the coordinates.
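Since the Weyl group action here is just conjugation of diagonal matrices by permutation matrices, it can be checked numerically. The following minimal sketch (not from the text; the permutation and the angles are arbitrary choices) verifies that \( {c}_{w}t = {wt}{w}^{-1} \) is again diagonal with permuted entries, which is exactly the action of \( W \cong {S}_{n} \) on \( \left( {\theta }_{i}\right) \) described above.

```python
import numpy as np

n = 4
theta = np.array([0.3, 1.1, 2.0, -0.7])          # arbitrary angles (theta_i)
t = np.diag(np.exp(1j * theta))                   # element of the maximal torus T

sigma = [2, 0, 3, 1]                              # an arbitrary permutation of {0,...,3}
w = np.zeros((n, n))                              # the corresponding permutation matrix
for i, j in enumerate(sigma):
    w[j, i] = 1.0                                 # w sends e_i to e_{sigma(i)}

conj = w @ t @ np.linalg.inv(w)                   # c_w(t) = w t w^{-1}

# The conjugate is again diagonal, with the angles permuted.
assert np.allclose(conj, np.diag(np.diag(conj)))
print(np.angle(np.diag(conj)))                    # a permutation of theta
```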
For \( {SU}\left( n\right) \), let \( {T}_{{SU}\left( n\right) } = {T}_{U\left( n\right) } \cap {SU}\left( n\right) = \left\{ {\operatorname{diag}\left( {{e}^{i{\theta }_{1}},\ldots ,{e}^{i{\theta }_{n}}}\right) \mid {\theta }_{i} \in \mathbb{R},\mathop{\sum }\limits_{i}{\theta }_{i} = }\right. \) \( 0\} \) be a maximal torus. Note that \( U\left( n\right) \cong \left( {{SU}\left( n\right) \times {S}^{1}}\right) /{\mathbb{Z}}_{n} \) with \( {S}^{1} \) central, so that \( W\left( {{SU}\left( n\right) }\right) \cong W\left( {U\left( n\right) }\right) \) . In particular for the \( {A}_{n - 1} \) root system,
\[
N\left( {T}_{{SU}\left( n\right) }\right) = \left( {{\mathcal{S}}_{n}{T}_{U\left( n\right) }}\right) \cap {SU}\left( n\right)
\]
\[
W \cong {S}_{n}
\]
\[
\left| W\right| = n!\text{.}
\]
As before, \( W \) acts on \( i{\mathfrak{t}}_{{SU}\left( n\right) } = \left\{ {\left( {\theta }_{i}\right) \mid {\theta }_{i} \in \mathbb{R},\mathop{\sum }\limits_{i}{\theta }_{i} = 0}\right\} \) and \( {\left( i{\mathfrak{t}}_{{SU}\left( n\right) }\right) }^{ * } = \) \( \left\{ {\left( {\lambda }_{i}\right) \mid {\lambda }_{i} \in \mathbb{R},\mathop{\sum }\limits_{i}{\lambda }_{i} = 0}\right\} \) by all permutations of the coordinates.
6.4.2.2 \( {Sp}\left( n\right) \) For \( {Sp}\left( n\right) \) realized as \( {Sp}\left( n\right) \cong U\left( {2n}\right) \cap {Sp}\left( {n,\mathbb{C}}\right) \), let
\[
T = \left\{ {\operatorname{diag}\left( {{e}^{i{\theta }_{1}},\ldots ,{e}^{i{\theta }_{n}},{e}^{-i{\theta }_{1}},\ldots ,{e}^{-i{\theta }_{n}}}\right) \mid {\theta }_{i} \in \mathbb{R}}\right\} .
\]
For \( 1 \leq i \leq n \), write \( {s}_{1, i} \) for the matrix realizing the linear transformation that maps \( {e}_{i} \), the \( {i}^{\text{th }} \) standard basis vector of \( {\mathbb{R}}^{2n} \), to \( - {e}_{i + n} \), maps \( {e}_{i + n} \) to \( {e}_{i} \), and fixes the remaining standard basis vectors. In particular, \( {s}_{1, i} \) is just the natural emb |
1077_(GTM235)Compact Lie Groups | Definition 6.4 |
Definition 6.4. Let \( V \) and \( W \) be representations of a Lie algebra \( \mathfrak{g} \) of a Lie subgroup of \( {GL}\left( {n,\mathbb{C}}\right) \) .
(1) \( \mathfrak{g} \) acts on \( V \oplus W \) by \( X\left( {v, w}\right) = \left( {{Xv},{Xw}}\right) \) .
(2) \( \mathfrak{g} \) acts on \( V \otimes W \) by \( X\sum {v}_{i} \otimes {w}_{j} = \sum X{v}_{i} \otimes {w}_{j} + \sum {v}_{i} \otimes X{w}_{j} \) (a matrix sketch of this action is given after the definition).
(3) \( \mathfrak{g} \) acts on \( \operatorname{Hom}\left( {V, W}\right) \) by \( \left( {XT}\right) \left( v\right) = XT\left( v\right) - T\left( {Xv}\right) \) .
(4) \( \mathfrak{g} \) acts on \( {\bigotimes }^{k}V \) by \( X\sum {v}_{{i}_{1}} \otimes \cdots \otimes {v}_{{i}_{k}} = \sum \left( {X{v}_{{i}_{1}}}\right) \otimes \cdots \otimes {v}_{{i}_{k}} + \cdots + \sum {v}_{{i}_{1}} \otimes \cdots \otimes \left( {X{v}_{{i}_{k}}}\right) \) .
(5) \( \mathfrak{g} \) acts on \( \mathop{\bigwedge }\limits^{k}V \) by \( X\sum {v}_{{i}_{1}} \land \cdots \land {v}_{{i}_{k}} = \sum \left( {X{v}_{{i}_{1}}}\right) \land \cdots \land {v}_{{i}_{k}} + \cdots + \sum {v}_{{i}_{1}} \land \cdots \land \left( {X{v}_{{i}_{k}}}\right) \) .
(6) \( \mathfrak{g} \) acts on \( {S}^{k}\left( V\right) \) by \( X\sum {v}_{{i}_{1}}\cdots {v}_{{i}_{k}} = \sum \left( {X{v}_{{i}_{1}}}\right) \cdots {v}_{{i}_{k}} + \cdots + \sum {v}_{{i}_{1}}\cdots \left( {X{v}_{{i}_{k}}}\right) \) .
(7) \( \mathfrak{g} \) acts on \( {V}^{ * } \) by \( \left( {XT}\right) \left( v\right) = - T\left( {Xv}\right) \) .
(8) \( \mathfrak{g} \) acts on \( \bar{V} \) by the same action as it does on \( V \) .
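In matrix terms, item (2) of Definition 6.4 says that if \( X \) acts on \( V \) through a matrix \( {X}_{V} \) and on \( W \) through \( {X}_{W} \), then it acts on \( V \otimes W \) through the Kronecker sum \( {X}_{V} \otimes {I}_{W} + {I}_{V} \otimes {X}_{W} \) . The sketch below (an illustration, not from the text; the matrices are random stand-ins for the images of two Lie algebra elements) checks numerically that this recipe turns brackets on \( V \) and \( W \) into brackets on \( V \otimes W \), so the tensor product is again a representation.

```python
import numpy as np

rng = np.random.default_rng(0)
dV, dW = 3, 2

def act_tensor(XV, XW):
    """Matrix of X on V (x) W per Definition 6.4(2): X(v (x) w) = Xv (x) w + v (x) Xw."""
    return np.kron(XV, np.eye(dW)) + np.kron(np.eye(dV), XW)

def bracket(A, B):
    return A @ B - B @ A

# Random stand-ins for the matrices of two Lie algebra elements X, Y acting on V and on W.
XV, YV = rng.standard_normal((dV, dV)), rng.standard_normal((dV, dV))
XW, YW = rng.standard_normal((dW, dW)), rng.standard_normal((dW, dW))

# The bracket of the two Kronecker sums equals the Kronecker sum of the brackets,
# so the tensor-product action respects the Lie bracket.
lhs = bracket(act_tensor(XV, XW), act_tensor(YV, YW))
rhs = act_tensor(bracket(XV, YV), bracket(XW, YW))
assert np.allclose(lhs, rhs)
print("tensor-product action respects brackets")
```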
## 6.1.2 Complexification of Lie Algebras
Definition 6.5. (a) Let \( \mathfrak{g} \) be the Lie algebra of a Lie subgroup of \( {GL}\left( {n,\mathbb{C}}\right) \) . The complexification of \( \mathfrak{g},{\mathfrak{g}}_{\mathbb{C}} \), is defined as \( {\mathfrak{g}}_{\mathbb{C}} = \mathfrak{g}{ \otimes }_{\mathbb{R}}\mathbb{C} \) . The Lie bracket on \( \mathfrak{g} \) is extended to \( {\mathfrak{g}}_{\mathbb{C}} \) by \( \mathbb{C} \) -linearity.
(b) If \( \left( {\psi, V}\right) \) is a representation of \( \mathfrak{g} \), extend the domain of \( \psi \) to \( {\mathfrak{g}}_{\mathbb{C}} \) by \( \mathbb{C} \) -linearity. Then \( \left( {\psi, V}\right) \) is said to be irreducible under \( {\mathfrak{g}}_{\mathbb{C}} \) if there are no proper \( \psi \left( {\mathfrak{g}}_{\mathbb{C}}\right) \) -invariant subspaces.
Writing a matrix in terms of its skew-Hermitian and Hermitian parts, observe that \( \mathfrak{{gl}}\left( {n,\mathbb{C}}\right) = \mathfrak{u}\left( n\right) \oplus i\mathfrak{u}\left( n\right) \) . It follows that if \( \mathfrak{g} \) is the Lie algebra of a compact Lie group \( G \) realized with \( G \subseteq U\left( n\right) ,{\mathfrak{g}}_{\mathbb{C}} \) may be identified with \( \mathfrak{g} \oplus i\mathfrak{g} \) equipped with the standard Lie bracket inherited from \( \mathfrak{{gl}}\left( {n,\mathbb{C}}\right) \) (Exercise 6.3). We will often make this identification without comment. In particular, \( \mathfrak{u}{\left( n\right) }_{\mathbb{C}} = \mathfrak{{gl}}\left( {n,\mathbb{C}}\right) \) . Similarly, \( \mathfrak{{su}}{\left( n\right) }_{\mathbb{C}} = \mathfrak{{sl}}\left( {n,\mathbb{C}}\right) ,\mathfrak{{so}}{\left( n\right) }_{\mathbb{C}} \) is realized by
\[
\mathfrak{{so}}\left( {n,\mathbb{C}}\right) = \left\{ {X \in \mathfrak{{sl}}\left( {n,\mathbb{C}}\right) \mid {X}^{t} = - X}\right\} ,
\]
and, realizing \( \mathfrak{{sp}}\left( n\right) \) as \( \mathfrak{u}\left( {2n}\right) \cap \mathfrak{{sp}}\left( {n,\mathbb{C}}\right) \) as in \( §{4.1.3},\mathfrak{{sp}}{\left( n\right) }_{\mathbb{C}} \) is realized by \( \mathfrak{{sp}}\left( {n,\mathbb{C}}\right) \) (Exercise 6.3).
Lemma 6.6. Let \( \mathfrak{g} \) be the Lie algebra of a Lie subgroup of \( {GL}\left( {n,\mathbb{C}}\right) \) and let \( \left( {\psi, V}\right) \) be a representation of \( \mathfrak{g} \) . Then \( V \) is irreducible under \( \mathfrak{g} \) if and only if it is irreducible under \( {\mathfrak{g}}_{\mathbb{C}} \) .
Proof. Simply observe that since a subspace \( W \subseteq V \) is a complex subspace, \( W \) is \( \psi \left( \mathfrak{g}\right) \) -invariant if and only if it is \( \psi \left( {\mathfrak{g}}_{\mathbb{C}}\right) \) -invariant.
For example, \( \mathfrak{{su}}{\left( 2\right) }_{\mathbb{C}} = \mathfrak{{sl}}\left( {2,\mathbb{C}}\right) \) is equipped with the standard basis
\[
E = \left( \begin{array}{ll} 0 & 1 \\ 0 & 0 \end{array}\right) ,\;H = \left( \begin{matrix} 1 & 0 \\ 0 & - 1 \end{matrix}\right) ,\;F = \left( \begin{array}{ll} 0 & 0 \\ 1 & 0 \end{array}\right)
\]
(c.f. Exercise 4.21). Since \( E = \frac{1}{2}\left( \begin{matrix} 0 & 1 \\ - 1 & 0 \end{matrix}\right) - \frac{i}{2}\left( \begin{matrix} 0 & i \\ i & 0 \end{matrix}\right) \), Equation 6.3 shows that the resulting action of \( E \) on \( {V}_{n}\left( {\mathbb{C}}^{2}\right) \) is given by
\[
E \cdot \left( {{z}_{1}^{k}{z}_{2}^{n - k}}\right) = \frac{1}{2}\left\lbrack {-k{z}_{1}^{k - 1}{z}_{2}^{n - k + 1} - \left( {k - n}\right) {z}_{1}^{k + 1}{z}_{2}^{n - k - 1}}\right\rbrack
\]
\[
- \frac{i}{2}\left\lbrack {-{ik}{z}_{1}^{k - 1}{z}_{2}^{n - k + 1} + i\left( {k - n}\right) {z}_{1}^{k + 1}{z}_{2}^{n - k - 1}}\right\rbrack
\]
\[
= - k{z}_{1}^{k - 1}{z}_{2}^{n - k + 1}\text{.}
\]
Similarly (Exercise 6.4), the action of \( H \) and \( F \) on \( {V}_{n}\left( {\mathbb{C}}^{2}\right) \) is given by
(6.7)
\[
H \cdot \left( {{z}_{1}^{k}{z}_{2}^{n - k}}\right) = \left( {n - {2k}}\right) {z}_{1}^{k}{z}_{2}^{n - k}
\]
\[
F \cdot \left( {{z}_{1}^{k}{z}_{2}^{n - k}}\right) = \left( {k - n}\right) {z}_{1}^{k + 1}{z}_{2}^{n - k - 1}.
\]
Irreducibility of \( {V}_{n}\left( {\mathbb{C}}^{2}\right) \) is immediately apparent from these formulas (Exercise 6.7).
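In the monomial basis \( {z}_{1}^{k}{z}_{2}^{n - k}, k = 0,\ldots, n \), the displayed formulas make \( E, H, F \) into explicit \( \left( {n + 1}\right) \times \left( {n + 1}\right) \) matrices, and one can verify the \( \mathfrak{{sl}}\left( {2,\mathbb{C}}\right) \) relations \( \left\lbrack {H, E}\right\rbrack = {2E} \), \( \left\lbrack {H, F}\right\rbrack = - {2F} \), \( \left\lbrack {E, F}\right\rbrack = H \) directly. A minimal numerical sketch (it encodes nothing beyond the formulas above; the degree \( n = 5 \) is an arbitrary choice):

```python
import numpy as np

n = 5                                   # degree of the homogeneous polynomials
E = np.zeros((n + 1, n + 1))
H = np.zeros((n + 1, n + 1))
F = np.zeros((n + 1, n + 1))

# Basis vector k <-> monomial z1^k z2^(n-k), k = 0, ..., n.
for k in range(n + 1):
    H[k, k] = n - 2 * k                 # H . z1^k z2^(n-k) = (n-2k) z1^k z2^(n-k)
    if k >= 1:
        E[k - 1, k] = -k                # E . z1^k z2^(n-k) = -k z1^(k-1) z2^(n-k+1)
    if k <= n - 1:
        F[k + 1, k] = k - n             # F . z1^k z2^(n-k) = (k-n) z1^(k+1) z2^(n-k-1)

def bracket(A, B):
    return A @ B - B @ A

assert np.allclose(bracket(H, E), 2 * E)
assert np.allclose(bracket(H, F), -2 * F)
assert np.allclose(bracket(E, F), H)
print("sl(2) relations hold on V_n(C^2)")
```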
## 6.1.3 Weights
Let \( G \) be a compact Lie group and \( \left( {\pi, V}\right) \) a finite-dimensional representation of \( G \) . Fix a Cartan subalgebra \( \mathfrak{t} \) of \( \mathfrak{g} \) and write \( {\mathfrak{t}}_{\mathbb{C}} \) for its complexification. By Theorem 5.6, there exists an inner product, \( \left( {\cdot , \cdot }\right) \), on \( V \) that is \( G \) -invariant and for which \( {d\pi } \) is skew-Hermitian on \( \mathfrak{g} \) and is Hermitian on \( i\mathfrak{g} \) . Thus \( {\mathfrak{t}}_{\mathbb{C}} \) acts on \( V \) as a family of commuting normal operators and so \( V \) is simultaneously diagonalizable under the action of \( {\mathrm{t}}_{\mathbb{C}} \) . In particular, the following definition is well defined.
Definition 6.8. Let \( G \) be a compact Lie group, \( \left( {\pi, V}\right) \) a finite-dimensional representation of \( G \), and \( \mathfrak{t} \) a Cartan subalgebra of \( \mathfrak{g} \) . There is a finite set \( \Delta \left( V\right) = \Delta \left( {V,{\mathfrak{t}}_{\mathbb{C}}}\right) \subseteq \) \( {\mathfrak{t}}_{\mathbb{C}}^{ * } \), called the weights of \( V \), so that
\[
V = {\bigoplus }_{\alpha \in \Delta \left( V\right) }{V}_{\alpha }
\]
where
\[
{V}_{\alpha } = \left\{ {v \in V \mid {d\pi }\left( H\right) v = \alpha \left( H\right) v, H \in {\mathrm{t}}_{\mathbb{C}}}\right\}
\]
is nonzero. The above displayed equation is called the weight space decomposition of \( V \) with respect to \( {\mathfrak{t}}_{\mathbb{C}} \) .
As an example, take \( G = {SU}\left( 2\right), V = {V}_{n}\left( {\mathbb{C}}^{2}\right) \), and \( \mathfrak{t} \) to be the diagonal matrices in \( \mathfrak{{su}}\left( 2\right) \) . Define \( {\alpha }_{m} \in {\mathfrak{t}}_{\mathbb{C}}^{ * } \) by requiring \( {\alpha }_{m}\left( H\right) = m \) . Then Equation 6.7 shows that the weight space decomposition for \( {V}_{n}\left( {\mathbb{C}}^{2}\right) \) is \( {V}_{n}\left( {\mathbb{C}}^{2}\right) = {\bigoplus }_{k = 0}^{n}{V}_{n}{\left( {\mathbb{C}}^{2}\right) }_{{\alpha }_{n - {2k}}} \), where \( {V}_{n}{\left( {\mathbb{C}}^{2}\right) }_{{\alpha }_{n - {2k}}} = \mathbb{C}{z}_{1}^{k}{z}_{2}^{n - k} \) .
Theorem 6.9. (a) Let \( G \) be a compact Lie group, \( \left( {\pi, V}\right) \) a finite-dimensional representation of \( G, T \) a maximal torus of \( G \), and \( V = {\bigoplus }_{\alpha \in \Delta \left( {V,{\mathfrak{t}}_{\mathbb{C}}}\right) }{V}_{\alpha } \) the weight space decomposition. For each weight \( \alpha \in \Delta \left( V\right) ,\alpha \) is purely imaginary on \( \mathfrak{t} \) and is real valued on \( i\mathfrak{t} \) .
(b) For \( t \in T \), choose \( H \in \mathfrak{t} \) so that \( {e}^{H} = t \) . Then \( t{v}_{\alpha } = {e}^{\alpha \left( H\right) }{v}_{\alpha } \) for \( {v}_{\alpha } \in {V}_{\alpha } \) .
Proof. Part (a) follows from the facts that \( {d\pi } \) is skew-Hermitian on \( \mathfrak{t} \) and is Hermitian on \( i\mathrm{t} \) . Part (b) follows from the fact that \( \exp \mathrm{t} = T \) and the relation \( {e}^{d\pi H} = \pi \left( {e}^{H}\right) . \)
By \( \mathbb{C} \) -linearity, \( \alpha \in \Delta \left( V\right) \) is completely determined by its restriction to either \( \mathfrak{t} \) or \( {it} \) . Thus we permit ourselves to interchangeably view \( \alpha \) as an element of any of the dual spaces \( {\mathfrak{t}}_{\mathbb{C}}^{ * },{\left( i\mathfrak{t}\right) }^{ * } \) (real valued), or \( {\mathfrak{t}}^{ * } \) (purely imaginary valued). In alternate notation (not used in this text), \( {it} \) is sometimes written \( {\mathfrak{t}}_{\mathbb{C}}\left( \mathbb{R}\right) \) .
## 6.1.4 Roots
Let \( G \) be a compact Lie group. For \( g \in G \), extend the domain of \( \operatorname{Ad}\left( g\right) \) from \( \mathfrak{g} \) to \( {\mathfrak{g}}_{\mathbb{C}} \) by \( \mathbb{C} \) -linearity. Then \( \left( {\mathrm{{Ad}},{\mathfrak{g}}_{\mathbb{C}}}\right) \) is a representation of \( G \) with differential given by ad (extended by \( \mathbb{C} \) -linearity). It has a weight space decomposition
\[
{\mathfrak{g}}_{\mathbb{C}} = {\bigoplus }_{\alpha \in \Delta \left( {{\mathfrak{g}}_{\mathbb{C}},{\mathfrak{t}}_{\mat |
1116_(GTM270)Fundamentals of Algebraic Topology | Definition 5.6.10 |
Definition 5.6.10. The cup product
\[
\cup : {H}^{j}\left( X\right) \otimes {H}^{k}\left( X\right) \rightarrow {H}^{j + k}\left( X\right)
\]
is the map defined as follows: For \( x \in {H}^{j}\left( X\right) \) and \( y \in {H}^{k}\left( X\right) \) ,
\[
x \cup y = {\bigtriangleup }^{ * }\left( {x \times y}\right)
\]
where \( x \times y \) is the cross product of \( x \) and \( y \), an element of \( {H}^{j + k}\left( {X \times X}\right) \), and \( {\bigtriangleup }^{ * } \) : \( {H}^{j + k}\left( {X \times X}\right) \rightarrow {H}^{j + k}\left( X\right) \) is the map induced on cohomology by the diagonal map
\[
\bigtriangleup : X \rightarrow X \times X
\]
given by \( \bigtriangleup \left( p\right) = \left( {p, p}\right) \) .
Definition 5.6.11. The cap product
\[
\cap : {H}^{j}\left( X\right) \otimes {H}_{j + k}\left( X\right) \rightarrow {H}_{k}\left( X\right)
\]
is the map defined as follows: For \( x \in {H}^{j}\left( X\right) \) and \( y \in {H}_{j + k}\left( X\right) \) ,
\[
x \cap y = x \smallsetminus {\bigtriangleup }_{ * }\left( y\right)
\]
where \( {\bigtriangleup }_{ * } : {H}_{j + k}\left( X\right) \rightarrow {H}_{j + k}\left( {X \times X}\right) \) is the map induced on homology by the diagonal map \( \bigtriangleup \) .
We have defined the cup and cap products from the cross and slant products. But in fact we can recover cross and slant products from a knowledge of cup and cap products.
Lemma 5.6.12. Let \( {\pi }_{1} : X \times Y \rightarrow X \) and \( {\pi }_{2} : X \times Y \rightarrow Y \) be projection on the first and second factors respectively.
(1) Let \( \alpha \in {H}^{j}\left( X\right) \) and \( \beta \in {H}^{k}\left( Y\right) \) . Then
\[
\alpha \times \beta = {\pi }_{1}{}^{ * }\left( \alpha \right) \cup {\pi }_{2}{}^{ * }\left( \beta \right) = \left( {\alpha \times {1}^{Y}}\right) \cup \left( {{1}^{X} \times \beta }\right) .
\]
(2) Let \( \alpha \in {H}^{j}\left( X\right) \) and \( \beta \in {H}_{j + k}\left( Y\right) \) . Then
\[
\alpha \smallsetminus \beta = {\pi }_{1 * }\left( {{\pi }_{2}{}^{ * }\left( \alpha \right) \cap \beta }\right) .
\]
Proof. We prove the first of these. The last equality is just Lemma 5.5.23. To prove the first, let \( \bigtriangleup : X \times Y \rightarrow \left( {X \times Y}\right) \times \left( {X \times Y}\right) \) be the diagonal. Then, by definition,
\[
{\pi }_{1}{}^{ * }\left( \alpha \right) \cup {\pi }_{2}{}^{ * }\left( \beta \right) = {\bigtriangleup }^{ * }\left( {{\pi }_{1}{}^{ * }\left( \alpha \right) \times {\pi }_{2}{}^{ * }\left( \beta \right) }\right)
\]
\[
= {\bigtriangleup }^{ * }\left( {{{\pi }_{1}}^{ * } \times {{\pi }_{2}}^{ * }}\right) \left( {\alpha \times \beta }\right)
\]
\[
= {\left( \left( {\pi }_{1} \times {\pi }_{2}\right) \bigtriangleup \right) }^{ * }\left( {\alpha \times \beta }\right)
\]
\[
= \alpha \times \beta
\]
as \( \left( {{\pi }_{1} \times {\pi }_{2}}\right) \bigtriangleup \) is the identity map.
We now summarize some properties of cup and cap products. These follow fairly directly from our previous work and from the properties of cross and slant products.
Theorem 5.6.13. (1) The cup product is associative.
(2) Cup and cap products are bilinear with respect to addition of cohomology and homology classes.
(3) If \( \alpha \in {H}^{j}\left( X\right) \) and \( \beta \in {H}^{k}\left( X\right) \) ,
\[
\beta \cup \alpha = {\left( -1\right) }^{jk}\alpha \cup \beta .
\]
(4) For any \( \alpha \in {H}^{k}\left( X\right) ,{1}^{X} \cup \alpha = \alpha \), and for any \( \gamma \in {H}_{j + k}\left( X\right) ,{1}^{X} \cap \gamma = \gamma \) .
(5) For any \( \alpha \in {H}^{j}\left( X\right) ,\beta \in {H}^{k}\left( X\right) \), and \( \gamma \in {H}_{l}\left( X\right) \) ,
\[
\alpha \cap \left( {\beta \cap \gamma }\right) = \left( {\alpha \cup \beta }\right) \cap \gamma
\]
(6) For any \( \alpha \in {H}^{j}\left( X\right) \) and \( \gamma \in {H}_{j}\left( X\right) \) ,
\[
{\varepsilon }_{ * }\left( {\alpha \cap \gamma }\right) = e\left( {\alpha ,\gamma }\right)
\]
(7) For any \( \alpha \in {H}^{j}\left( X\right) ,\beta \in {H}^{k}\left( X\right) \), and \( \gamma \in {H}_{j + k}\left( X\right) \) ,
\[
e\left( {\alpha ,\beta \cap \gamma }\right) = e\left( {\alpha \cup \beta ,\gamma }\right)
\]
(8) Let \( f : X \rightarrow Y \) . For any \( \zeta \in {H}^{j}\left( Y\right) \) and \( \alpha \in {H}_{j + k}\left( X\right) \) ,
\[
{f}_{ * }\left( {{f}^{ * }\left( \zeta \right) \cap \alpha }\right) = \zeta \cap {f}_{ * }\left( \alpha \right) \in {H}_{k}\left( Y\right) .
\]
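For instance, taking \( \beta = \alpha \) in (3) with \( j = k \) odd gives
\[
\alpha \cup \alpha = {\left( -1\right) }^{{j}^{2}}\alpha \cup \alpha = - \alpha \cup \alpha ,
\]
so \( 2\left( {\alpha \cup \alpha }\right) = 0 \) ; in particular, when 2 is invertible in the coefficient ring \( R \), every odd-degree cohomology class has square zero.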
We also record the following more complicated relationship between cup, cap, and cross products, which includes Lemma 5.6.12(1) as a very special case.
Theorem 5.6.14. (1) Let \( \alpha \in {H}^{j}\left( X\right) ,\beta \in {H}^{k}\left( X\right) ,\gamma \in {H}^{l}\left( Y\right) \), and \( \delta \in {H}^{m}\left( Y\right) \) .
Then, if \( n = j + k + l + m \) ,
\[
\left( {\alpha \cup \beta }\right) \times \left( {\gamma \cup \delta }\right) = {\left( -1\right) }^{kl}\left( {\alpha \times \gamma }\right) \cup \left( {\beta \times \delta }\right) \in {H}^{n}\left( {X \times Y}\right) .
\]
(2) Let \( \alpha \in {H}^{j}\left( X\right) ,\beta \in {H}_{j + k}\left( X\right) ,\gamma \in {H}^{l}\left( Y\right) \), and \( \delta \in {H}_{l + m}\left( Y\right) \) . Then, if \( n = k + m \) ,
\[
\left( {\alpha \cap \beta }\right) \times \left( {\gamma \cap \delta }\right) = {\left( -1\right) }^{\left( {j + k}\right) l}\left( {\alpha \times \gamma }\right) \cap \left( {\beta \times \delta }\right) \in {H}_{n}\left( {X \times Y}\right) .
\]
This follows from the previous properties we have obtained with enough careful attention to detail (including signs). Note in (2) that the first and third cross products are in homology and the second is in cohomology.
Many of the properties of the cup and cap product that we have stated can be subsumed under the following theorem. (See Definition A.1.13 for the definition of a graded algebra and module.)
Theorem 5.6.15. (1) For any nonempty space \( X,\mathcal{S} = {\bigoplus }_{i}{H}^{i}\left( X\right) \) is a graded commutative \( R \) -algebra and \( \mathcal{N} = {\bigoplus }_{i}{H}_{i}\left( X\right) \) is a left \( \mathcal{S} \) -module.
(2) Let \( X \) and \( Y \) be nonempty spaces and let \( f : X \rightarrow Y \) be a map. Let \( \mathcal{T} = {\bigoplus }_{i}{H}^{i}\left( X\right) \) and \( \mathcal{S} = { \oplus }_{i}{H}^{i}\left( Y\right) \) . Then \( {f}^{ * } : \mathcal{S} \rightarrow \mathcal{T} \) is an R-algebra homomorphism.
Corollary 5.6.16. If \( X \) and \( Y \) are homotopy equivalent, then \( { \oplus }_{i}{H}^{i}\left( X\right) \) and \( { \oplus }_{i}{H}^{i}\left( Y\right) \) are isomorphic as graded \( R \) -algebras.
We now consider homology and cohomology of pairs. Again we can define cup and cap products (though with some mild restrictions) and the results we obtain are almost the same.
Theorem 5.6.17. Let \( X \) be a space and let \( C \) and \( D \) be subspaces of \( X \) . Assume that \( \{ X \times C, D \times X\} \) and \( \{ C, D\} \) are both excisive couples. Then there is a cup product
\[
\cup : {H}^{j}\left( {X, C}\right) \otimes {H}^{k}\left( {X, D}\right) \rightarrow {H}^{j + k}\left( {X, C \cup D}\right)
\]
and a cap product
\[
\cap : {H}^{j}\left( {X, C}\right) \otimes {H}_{j + k}\left( {X, C \cup D}\right) \rightarrow {H}_{k}\left( {X, D}\right) .
\]
Proof. The condition that \( \{ X \times C, D \times X\} \) be excisive is necessary in order to apply the Eilenberg-Zilber theorem, and the condition that \( \{ C, D\} \) be excisive is necessary to obtain the analog of Lemma 5.6.12. Otherwise, the constructions are entirely analogous (though more complicated).
The following particularly useful special cases of this theorem are worth pointing out.
Corollary 5.6.18. Let \( \left( {X, A}\right) \) be a pair. In part (1), assume that \( \{ X \times A, A \times X\} \) is an excisive couple.
(1) There is a cup product
\[
\cup : {H}^{j}\left( {X, A}\right) \otimes {H}^{k}\left( {X, A}\right) \rightarrow {H}^{j + k}\left( {X, A}\right)
\]
and a cap product
\[
\cap : {H}^{j}\left( {X, A}\right) \otimes {H}_{j + k}\left( {X, A}\right) \rightarrow {H}_{k}\left( {X, A}\right) .
\]
(2) There is a cup product
\[
\cup : {H}^{j}\left( X\right) \otimes {H}^{k}\left( {X, A}\right) \rightarrow {H}^{j + k}\left( {X, A}\right)
\]
and a cap product
\[
\cap : {H}^{j}\left( X\right) \otimes {H}_{j + k}\left( {X, A}\right) \rightarrow {H}_{k}\left( {X, A}\right) .
\]
(3) There is a cup product
\[
\cup : {H}^{j}\left( {X, A}\right) \otimes {H}^{k}\left( X\right) \rightarrow {H}^{j + k}\left( {X, A}\right)
\]
and a cap product
\[
\cap : {H}^{j}\left( {X, A}\right) \otimes {H}_{j + k}\left( {X, A}\right) \rightarrow {H}_{k}\left( X\right) .
\]
Proof. (1) This is the special case \( C = D = A \) of Theorem 5.6.17.
(2) This is the special case \( C = \varnothing, D = A \) of Theorem 5.6.17.
(3) This is the special case \( C = A, D = \varnothing \) of Theorem 5.6.17.
In (1), we need the hypothesis that \( \{ X \times A, A \times X\} \) is excisive. But in these special cases, all the other excisiveness hypotheses are automatic.
Remark 5.6.19. Note that if \( A \) is nonempty then \( {\bigoplus }_{i}{H}^{i}\left( {X, A}\right) \) is a graded commutative ring without 1. For example, if \( X \) is a path connected space and \( A \) is a nonempty subspace of \( X \), then \( {H}^{0}\left( {X, A}\right) = \{ 0\} \) .
Otherwise the analogous statements all hold and we will not bother to explicitly formulate them.
## 5.7 Some Applications of the Cup Product
In this section we give several applications of the cup product. We begin by considering a pair of spaces that have the same (co)homology groups, and show they have different cohomology ring structures, so they cannot be homotopy equivalent. Then we compute the ring structure on the cohomology |
1059_(GTM219)The Arithmetic of Hyperbolic 3-Manifolds | Definition 6.6.6 |
Definition 6.6.6 Let \( \mathcal{O} \) be an Eichler order in the quaternion algebra \( A \) over the number field \( k \) . Then the level of \( \mathcal{O} \) is the ideal \( N \) of \( {R}_{k} \) such that \( {N}_{\mathcal{P}} \) is the level of \( {\mathcal{O}}_{\mathcal{P}} \) for each prime ideal \( \mathcal{P} \) .
Theorem 6.6.7 If \( \mathcal{O} \) is an Eichler order of level \( N \), then its discriminant is given by \( d\left( \mathcal{O}\right) = {N}^{2}\Delta {\left( A\right) }^{2} \) .
It should be noted that the discriminant does not, in general, characterise Eichler orders (see Exercise 6.6, No. 6).
In the last section, we briefly discussed principal congruence subgroups of the group \( \operatorname{SL}\left( {2,{R}_{\mathcal{P}}}\right) = {\mathcal{O}}_{\mathcal{P}}^{1} \) for \( \mathcal{O} \) a maximal order in the cases where \( \mathcal{P} \) is unramified in \( A \) . Let us now consider principal congruence subgroups at the global level.
Definition 6.6.8 Let \( \mathcal{O} \) be a maximal order in a quaternion algebra \( A \) over a number field \( k \) . Let \( I \) be a two-sided integral ideal of \( A \) in \( \mathcal{O} \) . The principal congruence subgroup of \( {\mathcal{O}}^{1} \) is
\[
{\mathcal{O}}^{1}\left( I\right) = \left\{ {\alpha \in {\mathcal{O}}^{1} \mid \alpha - 1 \in I}\right\}
\]
Thus \( {\mathcal{O}}^{1}\left( I\right) \) is the kernel of the natural map \( {\mathcal{O}}^{1} \rightarrow {\left( \mathcal{O}/I\right) }^{ * } \) . Since \( \mathcal{O}/I \) is a finite ring, the group \( {\mathcal{O}}^{1}\left( I\right) \) is of finite index in \( {\mathcal{O}}^{1} \) . The groups can be described locally as
\[
{\mathcal{O}}^{1}\left( I\right) = \left\{ {\alpha \in {\mathcal{O}}^{1} \mid \alpha - 1 \in {I}_{\mathcal{P}}\forall \mathcal{P} \in {\Omega }_{f}}\right\}
\]
For all but a finite set \( S \) of primes \( \mathcal{P},{I}_{\mathcal{P}} = {\mathcal{O}}_{\mathcal{P}} \) . If \( \mathcal{P} \in S \), and \( \mathcal{P} \) is unramified in \( A \), then \( {I}_{\mathcal{P}} = {\pi }^{{n}_{\mathcal{P}}}{\mathcal{O}}_{\mathcal{P}} \) by Theorem 6.5.3. In that case, under the embedding \( {\mathcal{O}}^{1} \rightarrow {\mathcal{O}}_{\mathcal{P}}^{1} \), the image of \( {\mathcal{O}}^{1}\left( I\right) \) will lie in the principal congruence subgroup of level \( {\mathcal{P}}^{{n}_{\mathcal{P}}} \), as described in \( §{6.5} \) . If \( \mathcal{P} \) is ramified in \( A \), then \( {I}_{\mathcal{P}} = {j}^{{n}_{\mathcal{P}}}{\mathcal{O}}_{\mathcal{P}} \), as described in \( §{6.4} \) . In the particular case where \( {n}_{\mathcal{P}} = 1 \), the corresponding subgroup of \( {\mathcal{O}}_{\mathcal{P}}^{1} \) under the description of the unique maximal order given in Exercise 6.4, No. 1 is the kernel of the reduction map \( {\mathcal{O}}_{\mathcal{P}}^{1} \rightarrow {\left( {R}_{F}/\pi {R}_{F}\right) }^{ * } \) given by \( \left( \begin{matrix} a & b \\ \pi {b}^{\prime } & {a}^{\prime } \end{matrix}\right) \mapsto a + \pi {R}_{F} \) .
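To make the finite-index statement concrete in the simplest split case (an illustration not carried out in the text, under the assumption \( A = {M}_{2}\left( \mathbb{Q}\right) ,\mathcal{O} = {M}_{2}\left( \mathbb{Z}\right) \) and \( I = N{M}_{2}\left( \mathbb{Z}\right) \) ): here \( {\mathcal{O}}^{1} = \operatorname{SL}\left( {2,\mathbb{Z}}\right) \) and \( {\mathcal{O}}^{1}\left( I\right) \) is the classical principal congruence subgroup \( \Gamma \left( N\right) \), whose index is the order of the image of the reduction map to \( \operatorname{SL}\left( {2,\mathbb{Z}/N\mathbb{Z}}\right) \) ; reduction is in fact surjective (a standard fact, not proved here), so the index is \( \left| {\operatorname{SL}\left( {2,\mathbb{Z}/N\mathbb{Z}}\right) }\right| \) . A brute-force count in Python for small \( N \) :

```python
from itertools import product

def sl2_order(N):
    """Brute-force count of |SL(2, Z/NZ)|, i.e. the index of the principal
    congruence subgroup of level N in SL(2, Z) in the split case O = M_2(Z)."""
    count = 0
    for a, b, c, d in product(range(N), repeat=4):
        if (a * d - b * c) % N == 1:
            count += 1
    return count

for N in (2, 3, 4, 5):
    print(N, sl2_order(N))   # prints 6, 24, 48, 120
```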
Theorem 6.6.9 If \( \mathcal{O} \) is a maximal order in a quaternion algebra \( A \) over a number field \( k \), there are infinitely many principal congruence subgroups \( {\mathcal{O}}^{1}\left( I\right) \) which are torsion free.
The proof of this is left as an exercise (see Exercise 6.6, No. 7).
## Examples 6.6.10
1. Consider the Bianchi groups \( \operatorname{PSL}\left( {2,{O}_{d}}\right) \) where \( {O}_{d} \) is the ring of integers in \( \mathbb{Q}\left( \sqrt{-d}\right) \) . Since \( 2\cos \pi /n \in {O}_{d} \) if and only if \( n = 2,3 \), these groups can only contain elements of order 2 and 3 . Thus if \( J \) is an ideal of \( {O}_{d} \) such that \( \left( {J,2}\right) = 1 \) and, in addition, \( \left( {J,3}\right) = 1 \) in those cases where 3 is ramified in \( \mathbb{Q}\left( \sqrt{-d}\right) \mid \mathbb{Q} \), then the principal congruence subgroup of level \( J \) is torsion free by Lemma 6.5.6. In the notation at Definition 6.6.8, this is the group \( {\mathcal{O}}^{1}\left( I\right) \), where \( \mathcal{O} = {M}_{2}\left( {O}_{d}\right) \) and \( I = J{M}_{2}\left( {O}_{d}\right) \) .
2. Let \( k \) denote the cyclotomic field \( \mathbb{Q}\left( \zeta \right) \), where \( \zeta = {e}^{{2\pi i}/p} \) with \( p \) prime, and let \( A = {M}_{2}\left( k\right) \) . Then \( \alpha = \left( \begin{matrix} \zeta & 0 \\ 0 & {\zeta }^{-1} \end{matrix}\right) \) has order \( p \) and lies in the principal congruence subgroup of level \( \mathcal{P} = \langle \zeta - 1\rangle \) .
## Exercise 6.6
1. Let \( A = \left( \frac{3,5}{\mathbb{Q}}\right) \) . Obtain a \( \mathbb{Z} \) -basis for a maximal order in \( A \) .
2. Show that the order \( \mathcal{O} \) described in Exercise 6.3, No. 2 is maximal.
3. Show that being a principal ideal in a quaternion algebra over a number field is not a local-global property.
4. Let \( I \) be a normal ideal in \( A \) . Prove that \( {\left( {I}^{-1}\right) }^{-1} = I \) .
5. Let \( I \) and \( J \) be ideals in \( A \) such that \( {\mathcal{O}}_{\ell }\left( I\right) \) is maximal and \( {\mathcal{O}}_{r}\left( I\right) = \) \( {\mathcal{O}}_{\ell }\left( J\right) \) . Prove that
\[
n\left( {IJ}\right) = n\left( I\right) n\left( J\right)
\]
(6.11)
6. Let \( \mathcal{P} \) be a prime ideal in \( {R}_{k} \) such that \( \mathcal{P} \) is relatively prime to \( \Delta \left( A\right) \) . Show the following:
(a) For every integer \( n \geq 2 \), there are orders in \( A \) whose discriminant is \( \Delta {\left( A\right) }^{2}{\mathcal{P}}^{2n} \) but they are not Eichler orders.
(b) Every order of \( A \) with discriminant \( \Delta {\left( A\right) }^{2}{\mathcal{P}}^{2} \) is an Eichler order.
7. Complete the proof of Theorem 6.6.9.
## 6.7 The Type Number of a Quaternion Algebra
We have already seen that when \( R \) is a principal ideal domain, then all maximal orders in the quaternion algebra \( {M}_{2}\left( k\right) \) are conjugate to \( {M}_{2}\left( R\right) \) . This does not hold in general and in this section, we determine the number of conjugacy classes of maximal orders, which is finite. The number is measured by the order of a quotient group of a certain ray class group over the number field \( k \) . The proof could be couched in terms of a suitable adèle ring. We have chosen not to do this, but these adèle rings will be discussed and used in the next chapter. Our proof, however, will require results to be proved in the next chapter, namely the Norm Theorem and the Strong Approximation Theorem.
## Definition 6.7.1
- The type number of a quaternion algebra \( A \) is the number of conjugacy classes of maximal orders in \( A \) .
- If \( I \) and \( J \) are two ideals in \( A \), then \( I \) is equivalent to \( J \) if there exists \( t \in {A}^{ * } \) such that \( J = {It} \) .
We first establish some notation. Let \( \mathcal{O} \) be a maximal order. The set of ideals \( I \) such that \( {\mathcal{O}}_{\ell }\left( I\right) = \mathcal{O} \) (respectively \( {\mathcal{O}}_{r}\left( I\right) = \mathcal{O} \) ) will be denoted \( \mathcal{L}\left( \mathcal{O}\right) \) (respectively \( \mathcal{R}\left( \mathcal{O}\right) \) ). Likewise, the set of two sided ideals will be denoted by \( \mathcal{L}\mathcal{R}\left( \mathcal{O}\right) \) . Note that if \( I \) and \( J \) are ideals, then so is \( {IJ} \) . In particular, by Corollary 6.6.5, \( \mathcal{{LR}}\left( \mathcal{O}\right) \) forms a group under this operation.
If we denote the set of equivalence classes of ideals in \( \mathcal{L}\left( \mathcal{O}\right) \) by \( \mathcal{L}\left( \mathcal{O}\right) / \sim , \) then the action of \( \mathcal{L}\mathcal{R}\left( \mathcal{O}\right) \) on \( \mathcal{L}\left( \mathcal{O}\right) \) by \( I \mapsto {XI} \) for \( I \in \mathcal{L}\left( \mathcal{O}\right) \) and \( X \in \) \( \mathcal{L}\mathcal{R}\left( \mathcal{O}\right) \) preserves these equivalence classes.
Lemma 6.7.2 If \( \mathcal{C} \) denotes the set of conjugacy classes of maximal orders in \( A \), there is a bijection from \( \mathcal{C} \) to \( \mathcal{L}\mathcal{R}\left( \mathcal{O}\right) \smallsetminus \left( {\mathcal{L}\left( \mathcal{O}\right) / \sim }\right) \), where \( \mathcal{O} \) is a fixed maximal order.
Proof: Denote equivalence classes of ideals by square brackets and conjugacy classes of orders also by square brackets. If \( I \in \mathcal{L}\left( \mathcal{O}\right) \), let \( {\mathcal{O}}^{\prime } = {\mathcal{O}}_{r}\left( I\right) \) , which is maximal. Define \( \theta : \mathcal{L}\left( \mathcal{O}\right) / \sim \rightarrow \mathcal{C} \) by \( \theta \left( \left\lbrack I\right\rbrack \right) = \left\lbrack {\mathcal{O}}^{\prime }\right\rbrack \) . Note that \( {\mathcal{O}}_{r}\left( {It}\right) = {t}^{-1}{\mathcal{O}}^{\prime }t \), so that \( \theta \) is well-defined. Furthermore, any pair \( \mathcal{O},{\mathcal{O}}^{\prime } \) of maximal orders are linked (see Exercise 6.1, No. 2) [i.e., there exists an ideal \( I \) such that \( {\mathcal{O}}_{\ell }\left( I\right) = \mathcal{O},{\mathcal{O}}_{r}\left( I\right) = {\mathcal{O}}^{\prime } \) (e.g., \( \mathcal{O}{\mathcal{O}}^{\prime } \) will do)]. Thus \( \theta \) is onto.
Now suppose \( \theta \left( \left\lbrack I\right\rbrack \right) = \theta \left( \left\lbrack {I}^{\prime }\right\rbrack \right) \) so that there exists \( t \in {A}^{ * } \) such that \( t{\mathcal{O}}_{r}\left( I\right) {t}^{-1} = {\mathcal{O}}_{r}\left( {I}^{\prime }\right) \) . Thus \( {\mathcal{O}}_{r}\left( {I{t}^{-1}}\right) = {\mathcal{O}}_{r}\left( {I}^{\prime }\right) \) . Let \( J = I{t}^{-1}{{I}^{\prime }}^{-1} \) so that \( J \in \mathcal{L}\mathcal{R}\left( \mathcal{O}\right) \) . Now \( J{I}^{\prime } = I{t}^{-1}{\mathcal{O}}_{r}\left( {I}^{\prime }\right) = I{t}^{-1} \) by Corollary 6.6.5. \( ▱ \)
By taking norms of ideals (see Definition 6 |
1074_(GTM232)An Introduction to Number Theory | Definition 4.10 |
Definition 4.10. If \( I \) and \( J \) are ideals in \( {O}_{\mathbb{K}} \), we write \( I \mid J \) ( \( I \) divides \( J \) ) if there is an ideal \( K \) in \( {O}_{\mathbb{K}} \) with \( J = {IK} \) .
Notice that \( {IK} \subseteq I \), so if \( I \mid J \) then \( J \subseteq I \) .
Lemma 4.11. Given two ideals \( I \) and \( J \) in \( {O}_{\mathbb{K}}, I \mid J \) if and only if \( J \subseteq I \) .
Proof. One direction is already proved, so assume that \( J \subseteq I \) . Then
\[
J{I}^{ * } \subseteq I{I}^{ * } = \left( {N\left( I\right) }\right)
\]
so
\[
K = \frac{1}{N\left( I\right) }J{I}^{ * }
\]
is an ideal contained in \( {O}_{\mathbb{K}} \) . It follows that
\[
{IK} = \frac{1}{N\left( I\right) }I\left( {J{I}^{ * }}\right) = \frac{1}{N\left( I\right) }J\left( {I{I}^{ * }}\right) = \frac{1}{N\left( I\right) }J\left( {N\left( I\right) }\right) = J,
\]
and hence \( I \mid J \) as claimed.
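For a concrete instance of the identity \( I{I}^{ * } = \left( {N\left( I\right) }\right) \) used in this proof (an illustration not in the text, with \( {I}^{ * } \) the conjugate ideal as above; the ring \( \mathbb{Z}\left\lbrack \sqrt{-5}\right\rbrack \) reappears in Exercise 4.16), take \( \mathbb{K} = \mathbb{Q}\left( \sqrt{-5}\right) \) and \( I = \left( {2,1 + \sqrt{-5}}\right) \) . Then \( {I}^{ * } = \left( {2,1 - \sqrt{-5}}\right) = I \) because \( 1 - \sqrt{-5} = 2 - \left( {1 + \sqrt{-5}}\right) \), and
\[
I{I}^{ * } = {I}^{2} = \left( {4,\;2 + 2\sqrt{-5},\; - 4 + 2\sqrt{-5}}\right) = \left( 2\right) ,
\]
since \( 2 = \left( {2 + 2\sqrt{-5}}\right) - \left( {-4 + 2\sqrt{-5}}\right) - 4 \in {I}^{2} \) while every generator is divisible by 2. This matches \( N\left( I\right) = 2 \) : the ideal \( I \) consists of the elements \( a + b\sqrt{-5} \) with \( a \equiv b\left( {\;\operatorname{mod}\;2}\right) \), an index-2 subgroup of \( {O}_{\mathbb{K}} \) .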
In what follows, we see a real duplication of ideas from Chapter 1, worked out in the context of ideals. The interchangeability of inclusion and divisibility for ideals will be used repeatedly.
Definition 4.12. A nonzero ideal \( I \neq R \) in a commutative ring \( R \) is called maximal if for any ideal \( J, J \mid I \) implies that \( J = I \) or \( J = R \) . An ideal \( P \) is prime if \( P \mid {IJ} \) implies that \( P \mid I \) or \( P \mid J \) .
Exercise 4.12. In a commutative ring \( R \), let \( M \) and \( P \) denote ideals.
(a) Show that \( M \) is maximal if and only if the quotient ring \( R/M \) is a field.
(b) Show that \( P \) is prime if and only if \( R/P \) is an integral domain (that is, in \( R/P \) the equation \( {ab} = 0 \) forces either \( a \) or \( b \) to be 0 ). (c) Deduce that every maximal ideal is prime.
Theorem 4.13. [Fundamental Theorem of Arithmetic for Ideals] Any nonzero proper ideal in \( {O}_{\mathbb{K}} \) can be written as a product of prime ideals, and that factorization is unique up to order.
Proof. If \( I \) is not maximal, it can be written as a product of two nontrivial ideals. Comparing norms shows these ideals must have norms smaller than \( N\left( I\right) \) . Keep going: The sequence of norms is descending, so it must terminate, resulting in a finite factorization of \( I \) . By Exercise 4.12, every maximal ideal is prime, so all that remains is to demonstrate that the resulting factorization is unique. This uniqueness follows from Corollary 4.9, which allows cancellation of nonzero ideals common to two products.
## 4.4 The Ideal Class Group
In this section, we are going to see how the nineteenth-century mathematicians interpreted Exercise 3.32 on p. 81 in terms of quadratic fields. The major result we will present is that ideals in \( {O}_{\mathbb{K}} \), for a quadratic field \( \mathbb{K} \), can be described using a finite list of representatives \( {I}_{1},\ldots ,{I}_{h} \) ; any nontrivial ideal \( I \) can be written \( {I}_{i}P \), where \( 1 \leq i \leq h \) and \( P \) is a principal ideal. Thus \( h \), known as the class number, measures the extent to which \( {O}_{\mathbb{K}} \) fails to be a principal ideal domain. This statement was proved for arbitrary algebraic number fields and proved to be influential in the way number theory developed in the twentieth century.
Given two ideals \( I \) and \( J \) in \( {O}_{\mathbb{K}} \), define a relation \( \sim \) by
\[
I \sim J\text{if and only if}I = {\lambda J}\text{for some}\lambda \in {\mathbb{K}}^{ * }\text{.}
\]
Exercise 4.13. Show that \( \sim \) is an equivalence relation.
We are going to outline a proof of the following important theorem.
Theorem 4.14. There are only finitely many equivalence classes of ideals in \( {O}_{\mathbb{K}} \) under \( \sim \) .
One class is easy to spot - namely the one consisting of all principal ideals. Of course, \( {O}_{\mathbb{K}} \) is a principal ideal domain if and only if there is only one class under the relation. One can define a multiplication on classes: If \( \left\lbrack I\right\rbrack \) denotes the class containing \( I \), then one can show that the multiplication defined by
\[
\left\lbrack I\right\rbrack \left\lbrack J\right\rbrack = \left\lbrack {IJ}\right\rbrack
\]
(4.2)
is independent of the representatives chosen.
Corollary 4.15. The set of classes under \( \sim \) forms a finite Abelian group.
The group in Corollary 4.15 is known as the ideal class group of \( \mathbb{K} \) (or just the class group).
Proof of Corollary 4.15. In the class group, associativity of multiplication is inherited from \( {O}_{\mathbb{K}} \) . The element \( \left\lbrack {O}_{\mathbb{K}}\right\rbrack \) acts as the identity. Finally, given any nonzero ideal \( I \), the relation \( I{I}^{ * } = \left( {N\left( I\right) }\right) \) shows that the inverse of the class \( \left\lbrack I\right\rbrack \) is \( \left\lbrack {I}^{ * }\right\rbrack \) .
Lemma 4.16. Given a square-free integer \( d \neq 1 \), there is a constant \( {C}_{d} \) that depends upon \( d \) only such that for any nonzero ideal \( I \) of \( {O}_{\mathbb{K}},\mathbb{K} = \mathbb{Q}\left( \sqrt{d}\right) \) , there is a nonzero element \( \alpha \in I \) with \( \left| {N\left( \alpha \right) }\right| \leq {C}_{d}N\left( I\right) \) .
Exercise 4.14. *Prove Lemma 4.16. The basic idea is a technique similar to that used in the proof of Theorem 3.21 showing that a lattice point must exist in a region constrained by various inequalities. Since the original proof, considerable efforts have gone into decreasing the constant \( {C}_{d} \) for practical application. The best techniques use the geometry of numbers, a theory initiated by Minkowski.
Proof of Theorem 4.14. First show that every class contains an ideal whose norm is bounded by \( {C}_{d} \) . Given a class \( \left\lbrack I\right\rbrack \), apply Lemma 4.16 with \( {I}^{ * } \) replacing \( I \) . Now \( \left( \alpha \right) \subseteq {I}^{ * } \), so we can write \( \left( \alpha \right) = {I}^{ * }J \) for some ideal \( J \) . However, this gives a relation \( \left\lbrack {I}^{ * }\right\rbrack \left\lbrack J\right\rbrack = \left\lbrack \left( \alpha \right) \right\rbrack \) in the class group. This means that \( \left\lbrack J\right\rbrack \) is the inverse of \( \left\lbrack {I}^{ * }\right\rbrack \) . However, we remarked earlier that \( \left\lbrack I\right\rbrack \) and \( \left\lbrack {I}^{ * }\right\rbrack \) are mutual inverses in the class group. Hence \( \left\lbrack I\right\rbrack = \left\lbrack J\right\rbrack \) . Now
\[
\left| {N\left( \alpha \right) }\right| = N\left( \left( \alpha \right) \right) = N\left( {I}^{ * }\right) N\left( J\right)
\]
Since the left-hand side is bounded by \( {C}_{d}N\left( {I}^{ * }\right) \), we can cancel \( N\left( {I}^{ * }\right) \) to obtain \( N\left( J\right) \leq {C}_{d} \) .
Now the theorem follows easily: For any given integer \( k \geq 0 \), there are only finitely many ideals of norm \( k \) ; this is because any ideal must be a product of prime ideals of norm \( p \) or \( {p}^{2} \), where \( p \) runs through the prime factors of \( k \) . There are only finitely many such prime ideals and hence there are only finitely many ideals of norm \( k \) . Now apply this to the integers \( k \leq {C}_{d} \) to deduce that there are only finitely many ideals of norm bounded by \( {C}_{d} \) . Since each class contains an ideal whose norm is thus bounded, by the first part of the proof, it follows that there are only finitely many classes.
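For a concrete instance of the theorem (an illustration not in the text), take \( \mathbb{K} = \mathbb{Q}\left( \sqrt{-5}\right) \) and \( I = \left( {2,1 + \sqrt{-5}}\right) \), the ideal of norm 2 considered in the example following Lemma 4.11. A generator of \( I \) would be an element of norm 2, but \( {a}^{2} + 5{b}^{2} = 2 \) has no integer solutions, so \( I \) is not principal, while \( {I}^{2} = \left( 2\right) \) is. Hence \( \left\lbrack I\right\rbrack \) has order exactly 2 in the class group. In fact the class number of \( \mathbb{Q}\left( \sqrt{-5}\right) \) is 2, so every ideal class is either \( \left\lbrack {O}_{\mathbb{K}}\right\rbrack \) or \( \left\lbrack I\right\rbrack \) .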
Exercise 4.15. Investigate the relationship between quadratic forms and ideals in quadratic fields. In particular, show that Exercise 3.32 on p. 81 is equivalent to Theorem 4.14. (Hint: If \( I \) denotes an ideal with basis \( \{ \alpha ,\beta \} \), show that for \( x, y \in \mathbb{Z}, N\left( {{x\alpha } + {y\beta }}\right) /N\left( I\right) \) is a (binary) integral quadratic form. How does a change of basis for \( I \) relate to the form? What effect does multiplying \( I \) by a principal ideal have on the form?)
## 4.4.1 Prime Ideals
To better understand prime ideals, we close with an exercise that links up the various trains of thought in this chapter and shows that ideal theory better explains the various phenomena encountered in Chapter 3.
Exercise 4.16. Factorize the ideal (6) into prime ideals in \( \mathbb{Z}\left\lbrack \sqrt{-5}\right\rbrack \), expressing each prime factor in the form \( \left( {a, b + c\sqrt{-5}}\right) \) .
Exercise 4.17. Let \( {O}_{\mathbb{K}} \) denote the ring of algebraic integers in the quadratic field \( \mathbb{K} = \mathbb{Q}\left( \sqrt{d}\right) \) for a square-free integer \( d \) .
(a) If \( P \) is a prime ideal in \( {O}_{\mathbb{K}} \), show that \( P \mid \left( p\right) \) for some integer prime \( p \in \mathbb{Z} \) . (b) Show that there are only three possibilities for the factorization of the ideal \( \left( p\right) \) in \( {O}_{\mathbb{K}} \) :
\( \left( p\right) = {P}_{1}{P}_{2} \) where \( {P}_{1} \) and \( {P}_{2} \) are prime ideals in \( {O}_{\mathbb{K}} \) ( \( p \) splits);
\( \left( p\right) = P \), where \( P \) is a prime ideal in \( {O}_{\mathbb{K}} \) ( \( p \) is inert);
\( \left( p\right) = {P}^{2} \), where \( P \) is a prime ideal in \( {O}_{\mathbb{K}} \) ( \( p \) is ramified).
This should be compared with the possible primes in \( \mathbb{Z}\left\lbrack i\right\rbrack \) described in Theorem 2.8(3). The following exercise gives a complete description of splitting types in terms of the Legendre symbol.
Exercise 4.18. Let \( {O}_{\mathbb{K}} \) denote the ring of algebraic integers in the quadratic field \( \mathbb{K} = \mathbb{Q}\left( \sqrt{d}\right) \) for a square-free integer \( d \) . Let \( D = d \) if \( d \equiv 1 \) modulo 4 and let \( D = {4d} \) otherwise. Show that an odd prime \( p \) is inert, ramified, or split as the Legendre symbol \( \left( \frac{D}{p}\right) \) is \( - 1,0 \), or +1, respectively. What are the possibilities when \( p = 2 \) |
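As a quick computational illustration of this criterion (a sketch, not from the text; it takes \( d = - 1 \), so \( D = - 4 \), and should recover the splitting behaviour of odd primes in \( \mathbb{Z}\left\lbrack i\right\rbrack \) mentioned above):

```python
from sympy import primerange, legendre_symbol

D = -4                                   # discriminant for K = Q(sqrt(-1)), i.e. Z[i]

for p in primerange(3, 40):              # odd primes only
    if D % p == 0:
        kind = "ramified"
    else:
        ls = legendre_symbol(D % p, p)   # the Legendre symbol (D/p)
        kind = "split" if ls == 1 else "inert"
    print(p, kind)                       # split exactly when p % 4 == 1, as in Z[i]
```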
111_Three Dimensional Navier-Stokes Equations-James_C._Robinson,_Jos_L._Rodrigo,_Witold_Sadows(z-lib.org | Definition 3.96 |
Definition 3.96. A Banach space \( X \) is called an Asplund space (resp. a Mazur space) if every continuous convex function \( f \) defined on an open convex subset \( W \) of \( X \) is Fréchet (resp. Hadamard) differentiable on a dense \( {\mathcal{G}}_{\delta } \) subset \( D \) of \( W \) .
These terminologies are not the original ones: initially, Asplund spaces were called differentiability spaces, and usually Mazur spaces are called weak Asplund spaces.
By Theorem 3.34, separable Banach spaces are Mazur spaces. A stronger separability assumption ensures that the space is Asplund.
Theorem 3.97. A Banach space \( X \) whose dual is separable is an Asplund space.
Proof. Let \( f \) be a continuous convex function on an open convex subset \( W \) of \( X \) . For all \( x \in W \) let \( {g}_{x} \in \partial f\left( x\right) \) and \( \delta \left( x\right) \mathrel{\text{:=}} d\left( {x, X \smallsetminus W}\right) \) . The set \( A \mathrel{\text{:=}} W \smallsetminus F \) of points of \( W \) at which \( f \) is nondifferentiable is the union over \( m \in \mathbb{N} \smallsetminus \{ 0\} \) of the sets
\[
{A}_{m} \mathrel{\text{:=}} \left\{ {x \in W : \forall r \in \left( {0,\delta \left( x\right) }\right) \exists v \in r{B}_{X}, f\left( {x + v}\right) - f\left( x\right) - {g}_{x}\left( v\right) > \left( {6/m}\right) \parallel v\parallel }\right\} .
\]
Since \( {X}^{ * } \) is separable, for all \( m \in \mathbb{N} \smallsetminus \{ 0\} \) there is a countable cover \( {\mathcal{B}}_{m} \mathrel{\text{:=}} \left\{ {{B}_{m, n} : n \in \mathbb{N}}\right\} \) of \( {X}^{ * } \) by balls with radius \( 1/m \) . Let \( {A}_{m, n} \mathrel{\text{:=}} \left\{ {x \in {A}_{m} : {g}_{x} \in {B}_{m, n}}\right\} \) . Since \( W \) is a Baire space, and since \( A \) is the union of the sets \( {A}_{m, n} \), it suffices to show that the closure of \( {A}_{m, n} \) in \( W \) has an empty interior. Given an element \( w \) of this closure, let us show that for every \( \varepsilon \in \left( {0,\delta \left( w\right) }\right) \), the ball \( B\left( {w,\varepsilon }\right) \) is not contained in the closure of \( {A}_{m, n} \) in \( W \) . We will show that there exists some \( y \in B\left( {w,\varepsilon }\right) \) that has a neighborhood \( V \) disjoint from \( {A}_{m, n} \) . Without loss of generality, taking a smaller \( \varepsilon \) if necessary, we may suppose \( f \) is Lipschitzian on \( B\left( {w,\varepsilon }\right) \) with rate \( k \) for some \( k > 1/m \) . Since \( w \) is in the closure of \( {A}_{m, n} \), we can find some \( x \in {A}_{m, n} \cap B\left( {w,\varepsilon }\right) \) . By definition of \( {A}_{m} \) , taking \( r \in \left( {0,\varepsilon - d\left( {w, x}\right) }\right) \), there exists \( v \in r{B}_{X} \) such that
\[
f\left( {x + v}\right) - f\left( x\right) > {g}_{x}\left( v\right) + \left( {6/m}\right) \parallel v\parallel .
\]
(3.67)
We will show that for \( y \mathrel{\text{:=}} x + v, s \mathrel{\text{:=}} \parallel v\parallel, V \mathrel{\text{:=}} B\left( {y, s/{km}}\right) \cap B\left( {w,\varepsilon }\right) \in \mathcal{N}\left( y\right) \), we have \( V \cap {A}_{m, n} = \varnothing \) . Suppose, to the contrary, that one can find some \( z \in V \cap {A}_{m, n} \) . Then by definition of \( {A}_{m, n} \), we have \( \begin{Vmatrix}{{g}_{x} - {g}_{z}}\end{Vmatrix} < 2/m \), and since \( {g}_{z} \in \partial f\left( z\right) \) ,
\[
f\left( x\right) - f\left( z\right) \geq {g}_{z}\left( {x - z}\right)
\]
Combining this inequality with relation (3.67), we obtain
\[
f\left( {x + v}\right) - f\left( z\right) \geq {g}_{x}\left( {v + x - z}\right) - {g}_{x}\left( {x - z}\right) + {g}_{z}\left( {x - z}\right) + \left( {6/m}\right) \parallel v\parallel .
\]
(3.68)
Since \( \parallel v + x - z\parallel = \parallel y - z\parallel < s/{km} \) and \( \begin{Vmatrix}{g}_{x}\end{Vmatrix} \leq k \), we have \( \left| {{g}_{x}\left( {v + x - z}\right) }\right| < s/m \) . Moreover, the inequalities \( \parallel x - z\parallel = \parallel y - v - z\parallel \leq \parallel y - z\parallel + \parallel v\parallel < s/{km} + s < {2s} \) , \( \left| {{g}_{x}\left( {x - z}\right) - {g}_{z}\left( {x - z}\right) }\right| \leq {2s}\begin{Vmatrix}{{g}_{x} - {g}_{z}}\end{Vmatrix} < {4s}/m \), and (3.68) yield
\[
f\left( {x + v}\right) - f\left( z\right) > - s/m - {4s}/m + \left( {6/m}\right) \parallel v\parallel = s/m,
\]
a contradiction to the inequality \( \parallel v + x - z\parallel < s/{km} \) and the fact that \( f \) is Lipschitz-ian with rate \( k \) on the ball \( B\left( {w,\varepsilon }\right) \), which contains both \( y \mathrel{\text{:=}} x + v \) and \( z \) .
The importance of Asplund spaces for generalized differentiation is illuminated by the following deep result, which is outside the scope of this book.
Theorem 3.98 (Preiss [850]). Every locally Lipschitzian function \( f \) on an open subset \( U \) of an Asplund space is Fréchet differentiable on a dense subset of \( U \) .
The class of Asplund spaces can be characterized in a number of different ways and satisfies interesting stability and duality properties. In particular, it is connected with the Radon-Nikodým property described below. Let us just mention the following facts in this connection, referring to [98, 289, 832] for proofs and additional information.
Proposition 3.99. (a) A Banach space \( X \) is an Asplund space if and only if every separable subspace of \( X \) is an Asplund space.
(b) A Banach space \( X \) is an Asplund space if and only if the dual of every separable subspace of \( X \) is separable.
In particular, every reflexive Banach space is an Asplund space. On the other hand, \( C\left( \left\lbrack {0,1}\right\rbrack \right) ,{L}_{1}\left( \left\lbrack {0,1}\right\rbrack \right) ,{\ell }_{1}\left( \mathbb{N}\right) \), and \( {\ell }_{\infty }\left( \mathbb{N}\right) \) are not Asplund spaces.
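For instance, the failure for \( {\ell }_{1}\left( \mathbb{N}\right) \) can be seen directly on its norm (a short verification not given in the text, using the symmetric difference quotient that also appears in Proposition 3.105 below). Fréchet differentiability of the norm at \( x \) would force \( \parallel x + w\parallel + \parallel x - w\parallel - 2\parallel x\parallel = o\left( {\parallel w\parallel }\right) \) as \( w \rightarrow 0 \) . But for \( x \in {\ell }_{1} \) the coordinates \( {x}_{n} \) tend to 0, so putting \( {w}_{n} \mathrel{\text{:=}} \left( {2\left| {x}_{n}\right| + 2/n}\right) {e}_{n} \) gives \( \begin{Vmatrix}{w}_{n}\end{Vmatrix} \rightarrow 0 \) and
\[
{\begin{Vmatrix}x + {w}_{n}\end{Vmatrix}}_{1} + {\begin{Vmatrix}x - {w}_{n}\end{Vmatrix}}_{1} - 2\parallel x{\parallel }_{1} = 2\left( {\begin{Vmatrix}{w}_{n}\end{Vmatrix} - \left| {x}_{n}\right| }\right) = 2\left( {\left| {x}_{n}\right| + 2/n}\right) \geq \begin{Vmatrix}{w}_{n}\end{Vmatrix},
\]
so the norm of \( {\ell }_{1} \) is Fréchet differentiable at no point.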
Let us state some permanence properties.
Proposition 3.100. (a) The class of Asplund spaces is closed under isomorphisms; that is, if \( X \) and \( Y \) are isomorphic Banach spaces and if \( X \) is Asplund, then \( Y \) is Asplund.
(b) Every closed linear subspace of an Asplund space is an Asplund space.
(c) Every quotient space of an Asplund space is an Asplund space.
(d) The class of Asplund spaces is closed under extensions: if \( X \) is a Banach space and \( Y \) is an Asplund subspace of \( X \) such that the quotient space \( X/Y \) is Asplund, then \( X \) is Asplund.
Corollary 3.101. If \( X \) is an Asplund space, then for all \( n \in \mathbb{N} \smallsetminus \{ 0\} ,{X}^{n} \) is an Asplund space.
Proof. Let us prove the result by induction. Assume that \( {X}^{n} \) is Asplund. The graph \( Y \) of the map \( s : \left( {{x}_{1},\ldots ,{x}_{n}}\right) \mapsto {x}_{1} + \cdots + {x}_{n} \) is isomorphic to \( {X}^{n} \), hence is Asplund by assumption. The quotient of \( {X}^{n + 1} \) by \( Y \) is isomorphic to \( X \), since \( s \) is onto. Thus \( {X}^{n + 1} \) is Asplund.
The following result clarifies the relationships between Asplund spaces and spaces that can be renormed by a Fréchet differentiable norm. It will be explained and proved in the next chapter (Corollary 4.66).
Theorem 3.102 (Ekeland-Lebourg [352]). If a Banach space can be renormed by a norm that is Fréchet differentiable off 0 (or more generally, if it admits a Fréchet differentiable bump function), then it is an Asplund space.
For separable spaces, there is a partial converse, but in general the converse fails: R. Haydon has exhibited a compact space \( T \) such that \( C\left( T\right) \) is Asplund but cannot be renormed by a Fréchet (or even Gâteaux) differentiable norm.
Proposition 3.103. For every separable Asplund space \( X \) there exists a norm inducing the topology of \( X \) that is Fréchet differentiable on \( X \smallsetminus \{ 0\} \) .
Proof. This is a consequence of Proposition 3.99 and Theorem 3.95.
On the other hand, on any Banach space that is not Asplund, one can find a Lipschitzian convex function that is nowhere differentiable. One can even take for it an equivalent norm, as explained in the next proposition. In order to prove this and give a dual characterization of Asplund spaces, let us define a weak* slice of a nonempty set \( A \subset {X}^{ * } \) as a subset of \( A \) of the form
\[
S\left( {x, A,\alpha }\right) = \left\{ {{x}^{ * } \in A : \left\langle {{x}^{ * }, x}\right\rangle > {\sigma }_{A}\left( x\right) - \alpha }\right\} ,
\]
where \( x \in X \smallsetminus \{ 0\} ,\alpha > 0 \), and where \( {\sigma }_{A} \) is the support function of \( A \) :
\[
{\sigma }_{A}\left( x\right) = \sup \left\{ {\left\langle {{x}^{ * }, x}\right\rangle : {x}^{ * } \in A}\right\} .
\]
The subset \( A \) is said to be weak* dentable if it admits weak* slices of arbitrarily small diameter. The space \( {X}^{ * } \) is said to have the dual Radon-Nikodým property if every nonempty bounded subset \( A \) of \( {X}^{ * } \) is weak* dentable. This property is important in functional analysis, in particular for vector measures (see [98,832,941]).
Theorem 3.104 ([832, Theorem 5.7]). A Banach space \( X \) is an Asplund space if and only if its dual space \( {X}^{ * } \) has the dual Radon-Nikodým property.
The following proposition shows the implication that the dual of an Asplund space has the dual Radon-Nikodým property. We omit the reverse implication.
Proposition 3.105. Let \( \left( {X,\parallel \cdot \parallel }\right) \) be a Banach space whose dual space does not have the dual Radon-Nikodým Property. Then there exist \( c > 0 \) and an equivalent norm \( \parallel \cdot {\parallel }^{\prime } \) on \( X \) such that for all \( x \in X \) ,
\[
\mathop{\limsup }\limits_{{w \rightarrow 0, w \neq 0}}\frac{1}{\parallel w{\parallel }^{\prime }}\left( {\parallel x + w{\parallel }^{\prime } + \parallel x - w{\parallel }^{\prime } - 2\parallel x{\parallel }^{\prime }}\right) \geq c.
\] |
1139_(GTM44)Elementary Algebraic Geometry | Definition 6.9 |
Definition 6.9. Two divisors \( {D}_{1} \) and \( {D}_{2} \) are linearly equivalent \( \left( {{D}_{1} \simeq {D}_{2}}\right) \) if they differ by a principal divisor \( \left( {{D}_{1} = {D}_{2} + \operatorname{div}\left( g\right) }\right. \), for some \( \left. {g \in {K}_{C}\smallsetminus \{ 0\} }\right) \) . The set of principal divisors forms a subgroup of the group of all divisors on \( C \), the quotient group being the set of linear equivalence classes of divisors on \( C \) .
A search for further conditions will shed light on possible analogues of (6.4) and (6.5). Of course, the above example shows that (6.5) does not generalize verbatim to \( C \) . (However, note that if \( D \) is a fixed divisor on \( C \), then the zero function together with all functions \( f \) of \( K \) satisfying \( D \leq \operatorname{div}\left( f\right) \) forms a complex vector space \( L\left( D\right) \) ; this follows at once from the definition of \( \operatorname{div}\left( f\right) \) and the fact that \( {\operatorname{ord}}_{P}\left( {f + g}\right) \geq \min \left\{ {{\operatorname{ord}}_{P}f,{\operatorname{ord}}_{P}g}\right\} \) . It follows from Lemma 7.1 that this vector space is finite dimensional.) What are possible natural generalizations? The answers to such questions constitute some of the most central facts about algebraic curves. One generalization of (6.5) will in fact be the Riemann-Roch theorem.
Before beginning a study of this problem, we first look at differentials, which are intimately connected to the above questions. We motivate this discussion by briefly looking at a well-spring of differentials, namely integration.
To integrate on any space \( S \) one needs some sort of measure on \( S \), perhaps given directly, perhaps induced by a metric, perhaps by a system of local coordinates, etc. In complex integration on \( \mathbb{C} \), one customarily uses the canonical measure induced by the coordinate system \( Z = X + {iY} \) . But in contrast to \( \mathbb{C} \), it is easy to see that for any nonsingular projective curve, there is never any one coordinate neighborhood covering the whole curve; we always need several neighborhoods, and each coordinate neighborhood has its own canonical measure.
For instance, \( {\mathbb{C}}_{Z} \) covers all of \( \mathbb{C} \cup \{ \infty \} = {\mathbb{P}}^{1}\left( \mathbb{C}\right) \) except \( \{ \infty \} \) ; one can then choose a second copy \( {\mathbb{C}}_{w} \) of \( \mathbb{C} \) covering all of \( \mathbb{C} \cup \{ \infty \} \) except \( \{ 0\} \) , \( {\mathbb{C}}_{Z} \) and \( {\mathbb{C}}_{W} \) being related by \( W = 1/Z \) in their intersection. In this example, the distance from a point \( P \neq 0 \in {\mathbb{C}}_{Z} \) to \( \{ \infty \} \) is infinite in \( {\mathbb{C}}_{Z} \), but finite in \( {\mathbb{C}}_{W} \) ; hence the metrics in the two coordinate neighborhoods are surely different. One can get around this kind of problem by adjusting the metrics in different neighborhoods so they agree on their common part. Thus at each point common to two neighborhoods on \( C \), coordinatized by, say, \( Z \) and by \( W \), the metric element \( {dZ} \) may be modified to agree with \( {dW} \) by multiplying by a derivative: \( {dW} = \left( {{dW}/{dZ}}\right) \cdot {dZ} \), where \( W = W\left( Z\right) \) . For instance in the example of \( {\mathbb{P}}^{1}\left( \mathbb{C}\right) \) above, where \( W = 1/Z \), we have \( {dW} = \) \( - \left( {1/{Z}^{2}}\right) {dZ} \) .
On \( \mathbb{C} \), when one uses a phrase such as integrating a function \( f\left( Z\right) \), the canonical \( {dZ} \) can be left in the background. But it is the differential \( f\left( Z\right) {dZ} \) which tells a more complete story since it takes into account the underlying measure; it is the natural object to use when coordinate changes are involved.
Aside from the obvious relation to integration, a study of differentials helps to reveal the connection between the topology of \( C \) and the existence of functions with prescribed zeros and poles; this will be our main use of differentials.
We now formally define differentials on an irreducible variety; our definition is purely algebraic and has certain advantages: it affords a clean algebraic development and can be used in a very general setting. In Remark 6.14 we indicate, for nonsingular plane curves, how these differentials may be looked at as geometric objects on a variety.
Definition 6.10. Let \( k \subset K \) be any two fields of characteristic 0, and let \( {V}_{1} \) be the vector space over \( K \) generated by the set of indeterminate objects \( \{ {dx} \mid x \in K\} \) ; let \( {V}_{2} \) be the subspace of \( {V}_{1} \) generated by the set of elements
\[
\{ d\left( {{ax} + {by}}\right) - \left( {{adx} + {bdy}}\right), d\left( {xy}\right) - \left( {{xdy} + {ydx}}\right) \mid x, y \in K\text{ and }a, b \in k\} .
\]
Then \( {V}_{1}/{V}_{2} \) is the vector space of differentials of \( K \) over \( k \), which we denote by \( \Omega \left( {K, k}\right) \) . If \( {K}_{V} \) is the function field of an irreducible variety \( V \) over \( k \) , then the elements \( \omega \in \Omega \left( {{K}_{V}, k}\right) \) are the differentials on \( V \) .
Remark 6.11. One might more accurately call the above differentials differentials of the first order, for on varieties of dimension \( > 1 \) one may consider differentials of higher order. (See, for example, [Lang, Chapter VII].) However we shall use only differentials on curves in this book. Note that the generators of \( {V}_{2} \) simply express familiar algebraic properties of a differential.
Remark 6.12. Definition 6.10 immediately implies that \( {da} = 0 \in {V}_{1}/{V}_{2} \) for each \( a \in k \), and that if \( f \in K = k\left( {{x}_{1},\ldots ,{x}_{n}}\right) \), then \( {df} \) is just the usual total differential \( d\left( {p/q}\right) = \left( {{q\,dp} - {p\,dq}}\right) /{q}^{2} \) evaluated at \( \left( {{x}_{1},\ldots ,{x}_{n}}\right) \), where \( p \) and \( q \) are polynomials in \( \mathbb{C}\left\lbrack {{X}_{1},\ldots ,{X}_{n}}\right\rbrack \) such that \( f = p\left( {{x}_{1},\ldots ,{x}_{n}}\right) /q\left( {{x}_{1},\ldots ,{x}_{n}}\right) \) .
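As a concrete check of this remark, a computer algebra system can confirm the quotient-rule form of the total differential. The sketch below is an illustrative addition (it uses the sympy library; the particular \( p \) and \( q \) are arbitrary choices), comparing \( df \) computed coefficientwise with \( \left( {q\,dp - p\,dq}\right) /{q}^{2} \) .

```python
import sympy as sp

x, y = sp.symbols('x y')

# An arbitrary rational function f = p/q of two variables.
p = x**2 * y + 1
q = x - y
f = p / q

# Coefficients of df = f_x dx + f_y dy.
fx, fy = sp.diff(f, x), sp.diff(f, y)

# Coefficients of (q dp - p dq)/q^2, split into its dx and dy parts.
alt_x = (q * sp.diff(p, x) - p * sp.diff(q, x)) / q**2
alt_y = (q * sp.diff(p, y) - p * sp.diff(q, y)) / q**2

print(sp.simplify(fx - alt_x), sp.simplify(fy - alt_y))   # 0 0
```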
For our applications in this book, we now assume that \( k = \mathbb{C} \) and that \( K \) has transcendence degree one over \( \mathbb{C} \) . Thus \( K \) is the function field of an irreducible curve. In this case, for any two functions \( f \in K \) and \( g \in K \smallsetminus \mathbb{C} \) , there is a well-defined derivative \( {df}/{dg} \in K \) having the properties one would expect of a derivative. We use this fundamental fact (Theorem 6.13) to see the geometric meaning behind Definition 6.10.
Theorem 6.13. Let a field \( K \) have transcendence degree one over \( \mathbb{C} \), and let \( f \in K, g \in K \smallsetminus \mathbb{C} \) . Then there is a unique element \( \kappa \in K \) such that \( {df} = {\kappa dg} \) . (We denote \( \kappa \) by the symbol \( {df}/{dg} \) ; it is called the derivative of \( f \) with respect to \( g \) .)
Proof. Let \( {\dim }_{K}\Omega \) denote the dimension of the \( K \) -vector space \( \Omega \left( {K,\mathbb{C}}\right) \) . Theorem 6.13 will follow easily once we show \( {\dim }_{K}\Omega = 1 \) . By the theorem of the primitive element, we may write \( K = \mathbb{C}\left( {x, y}\right) \) ; suppose \( x \) is transcendental over \( \mathbb{C} \), write \( x = X \), and let a minimal polynomial of \( y \) over \( \mathbb{C}\left\lbrack X\right\rbrack \) be \( p\left( {X, Y}\right) \in \mathbb{C}\left\lbrack {X, Y}\right\rbrack \) . It follows from Remark 6.12 that \( \{ {dx},{dy}\} K \) -generates \( \Omega \left( {K,\mathbb{C}}\right) \), so \( {\dim }_{K}\Omega \leq 2 \) . Now the element \( p\left( {x, y}\right) = 0 \in \mathbb{C} \) has differential zero; but \( p\left( {x, y}\right) \) is \( p\left( {X, Y}\right) \) evaluated at \( \left( {x, y}\right) \), so \( 0 = d\left( {p\left( {x, y}\right) }\right) = {p}_{X}\left( {x, y}\right) {dx} + \) \( {p}_{Y}\left( {x, y}\right) {dy}\left( {{p}_{X},{p}_{Y}}\right. \) being ordinary partials with respect to the indeterminates \( X \) and \( Y) \), that is, \( {p}_{Y}\left( {x, y}\right) {dy} = - {p}_{X}\left( {x, y}\right) {dx} \) . Since \( p \) is minimal, \( {p}_{Y}\left( {x, y}\right) \neq 0 \) , hence
\[
{dy} = \frac{-{p}_{X}\left( {x, y}\right) }{{p}_{Y}\left( {x, y}\right) }{dx}
\]
thus \( {dx} \) generates \( \Omega \left( {K,\mathbb{C}}\right) \), so \( {\dim }_{K}\Omega \leq 1 \) . To get equality, it suffices to show that \( {dx} \neq 0 \) -that is, that \( {dx} \) is not in the subspace \( {V}_{2} \) of Definition 6.10. For this, define a map from \( {V}_{1} \) to \( K \) in this way: Let \( h,{h}^{ * } \) be any two elements of \( K \), and let \( H \) be an element of \( \mathbb{C}\left( {X, Y}\right) \) such that \( h = H\left( {x, y}\right) \) . Our map is then
\[
\phi : {h}^{ * }{dh} \rightarrow {h}^{ * }\left( {{H}_{X}\left( {x, y}\right) {p}_{Y}\left( {x, y}\right) - {H}_{Y}\left( {x, y}\right) {p}_{X}\left( {x, y}\right) }\right) .
\]
It is easily seen that this map is well defined and that any element in \( {V}_{2} \) of Definition 6.10 must map to \( 0 \in K \) . Hence \( \phi \) induces a linear function from \( \Omega \left( {K,\mathbb{C}}\right) = {V}_{1}/{V}_{2} \) into \( K \) . Now \( \phi \left( {dx}\right) = 1 \cdot {p}_{Y}\left( {x, y}\right) \) (since \( {X}_{X} = {dX}/{dX} = 1 \) , and \( \left. {{X}_{Y} = 0}\right) \) . But we know \( {p}_{Y}\left( {x, y}\right) \neq 0 \), so \( \phi \left( {dx}\right) \neq 0 \) . Hence \( {dx} \neq 0 \), and therefore \( {\dim }_{K}\Omega = 1 \) .
Since \( {dxK} \) -generates \( \Omega \left( {K,\mathbb{C}}\right) \), we have \( {df} = {\eta }_{1}{dx} \) and \( {dg} = {\eta }_{2}{dx} \) for unique \( {\eta }_{1},{\eta }_{2} \in K \) . Now \( g \in K \smallsetminus \ma |
1088_(GTM245)Complex Analysis | Definition 4.38 |
Definition 4.38. Let \( c \in \mathbb{C} \) and let \( \gamma \) be a continuous closed path in \( \mathbb{C} - \{ c\} \) . We define the index or winding number of \( \gamma \) with respect to \( c \) as the following integer:
\[
I\left( {\gamma, c}\right) = \frac{1}{{2\pi }\imath }{\int }_{\gamma }\frac{\mathrm{d}z}{z - c}.
\]
Example 4.39 (In polar coordinates). Let \( r = g\left( \theta \right) > 0 \), with \( g \in {\mathbf{C}}^{1}\left( \mathbb{R}\right) \) . Let \( n \in \) \( {\mathbb{Z}}_{ > 0} \) and define \( \gamma \left( \theta \right) = g\left( \theta \right) {\mathrm{e}}^{\iota \theta } \), where \( \theta \in \left\lbrack {0,{2\pi n}}\right\rbrack \) . Assume that \( g\left( 0\right) = g\left( {2\pi n}\right) \) . Observe that the conditions on \( g \) imply that the curve \( \gamma \) winds around the origin \( n \) times in the counterclockwise direction and, as expected,
\[
I\left( {\gamma ,0}\right) = \frac{1}{{2\pi }\imath }{\int }_{\gamma }\frac{\mathrm{d}z}{z} = \frac{1}{{2\pi }\imath }{\int }_{0}^{2\pi n}\frac{d\left( {g\left( \theta \right) {\mathrm{e}}^{\imath \theta }}\right) }{g\left( \theta \right) {\mathrm{e}}^{\imath \theta }}
\]
\[
= \frac{1}{{2\pi }\imath }{\int }_{0}^{2\pi n}\frac{{g}^{\prime }\left( \theta \right) {\mathrm{e}}^{\imath \theta } + \imath g\left( \theta \right) {\mathrm{e}}^{\imath \theta }}{g\left( \theta \right) {\mathrm{e}}^{\imath \theta }}\mathrm{d}\theta
\]
\[
= \frac{1}{{2\pi }\imath }{\int }_{0}^{2\pi n}\left\lbrack {\frac{{g}^{\prime }\left( \theta \right) }{g\left( \theta \right) } + \imath }\right\rbrack \mathrm{d}\theta = n.
\]
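The value of this integral can also be checked numerically. The sketch below is an illustrative addition in Python (using numpy; the radius function \( g \) is an arbitrary positive \( 2\pi \) -periodic choice, so that \( g\left( 0\right) = g\left( {2\pi n}\right) \) ): it approximates \( I\left( {\gamma ,0}\right) \) by a Riemann sum and recovers the winding number \( n \) .

```python
import numpy as np

def winding_number(c, g, n, num=200_000):
    """Riemann-sum approximation of (1/(2*pi*i)) * integral of dz/(z - c) along
    gamma(theta) = g(theta) * exp(i*theta), theta in [0, 2*pi*n]."""
    theta = np.linspace(0.0, 2.0 * np.pi * n, num)
    z = g(theta) * np.exp(1j * theta)        # the path gamma
    dz = np.diff(z)                          # increments along the path
    zm = 0.5 * (z[:-1] + z[1:])              # midpoints of consecutive samples
    return np.sum(dz / (zm - c)) / (2j * np.pi)

g = lambda t: 2.0 + 0.5 * np.sin(3 * t)      # positive, 2*pi-periodic radius

for n in (1, 2, 3):
    print(n, round(winding_number(0.0, g, n).real, 6))   # ~ 1.0, 2.0, 3.0
```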
In general, let \( \gamma : \left\lbrack {a, b}\right\rbrack \rightarrow \mathbb{C} - \{ c\} \) be a continuous closed path and let \( f \) be a primitive of \( \frac{\mathrm{d}z}{z - c} \) on \( \gamma \) . Then \( f\left( t\right) \) agrees with a branch of \( \log \left( {\gamma \left( t\right) - c}\right) \) ; that is,
\[
{\mathrm{e}}^{f\left( t\right) } = \gamma \left( t\right) - c\text{ for all }t \in \left\lbrack {a, b}\right\rbrack .
\]
Hence
\[
I\left( {\gamma, c}\right) = \frac{f\left( b\right) - f\left( a\right) }{{2\pi }\imath }.
\]
We will see that the \( {\mathbf{C}}^{1} \) assumption on \( g \) is unnecessary, since we will also be able to reach the same conclusion using the homotopy of curves discussed in the next section.
## 4.4 Homotopy and Simple Connectivity
In order to give the integration results the clearest formulation (see for instance Corollary 4.52), we introduce the topological concepts of homotopic curves and simply connected domains.
Definition 4.40. Let \( {\gamma }_{0} \) and \( {\gamma }_{1} \) be two continuous paths in a domain \( D \), parameterized by \( I = \left\lbrack {0,1}\right\rbrack \) with the same end points; that is, \( {\gamma }_{0}\left( 0\right) = {\gamma }_{1}\left( 0\right) \) and \( {\gamma }_{0}\left( 1\right) = {\gamma }_{1}\left( 1\right) \) . We say that \( {\gamma }_{0} \) and \( {\gamma }_{1} \) are homotopic on \( D \) (with fixed end points) if there exists a continuous function \( \delta : I \times I \rightarrow D \) such that
(1) \( \delta \left( {t,0}\right) = {\gamma }_{0}\left( t\right) \) for all \( t \in I \)
(2) \( \delta \left( {t,1}\right) = {\gamma }_{1}\left( t\right) \) for all \( t \in I \)
(3) \( \delta \left( {0, u}\right) = {\gamma }_{0}\left( 0\right) = {\gamma }_{1}\left( 0\right) \) for all \( u \in I \)
(4) \( \delta \left( {1, u}\right) = {\gamma }_{0}\left( 1\right) = {\gamma }_{1}\left( 1\right) \) for all \( u \in I \)
We call \( \delta \) a homotopy with fixed end points between \( {\gamma }_{0} \) and \( {\gamma }_{1} \) ; see Fig. 4.3, with \( {\gamma }_{u}\left( t\right) = \delta \left( {t, u}\right) \), for fixed \( u \) in \( I \) .
Let \( {\gamma }_{0} \) and \( {\gamma }_{1} \) be two continuous closed paths in a domain \( D \) parameterized by \( I = \left\lbrack {0,1}\right\rbrack \) ; that is, \( {\gamma }_{0}\left( 0\right) = {\gamma }_{0}\left( 1\right) \) and \( {\gamma }_{1}\left( 0\right) = {\gamma }_{1}\left( 1\right) \) . We say that \( {\gamma }_{0} \) and \( {\gamma }_{1} \) are homotopic as closed paths on \( D \) if there exists a continuous function \( \delta : I \times I \rightarrow D \) such that
(1) \( \delta \left( {t,0}\right) = {\gamma }_{0}\left( t\right) \) for all \( t \in I \)
(2) \( \delta \left( {t,1}\right) = {\gamma }_{1}\left( t\right) \) for all \( t \in I \)
(3) \( \delta \left( {0, u}\right) = \delta \left( {1, u}\right) \) for all \( u \in I \)
The map \( \delta \) is called a homotopy of closed curves or paths; see Fig. 4.4, with \( {\gamma }_{u}\left( t\right) = \) \( \delta \left( {t, u}\right) \) for fixed \( u \) in \( I \) .
A continuous closed path is homotopic to a point if it is homotopic to a constant path (as a closed path).
Fig. 4.3 Homotopy with fixed end points
Fig. 4.4 Homotopy of closed paths
Example 4.41. Continuing with Example 4.39, it is easy to see that for continuous \( g \), the path \( \gamma \) is homotopic as a closed path in \( \mathbb{C} - \{ 0\} \) to the circle \( {S}^{1} \) traversed \( n \) times in the positive direction. Thus both have the same winding number about the origin.
Remark 4.42. Note that the notion of being homotopic (in all its versions) depends on the domain \( D \) . For instance, the closed path \( \gamma \left( t\right) = \exp \left( {2\pi \iota t}\right) \) for \( 0 \leq t \leq 1 \) is homotopic to a point in \( \mathbb{C} \) (set \( \delta \left( {t, u}\right) = {u\gamma }\left( t\right) \) ), but it is not homotopic to a point in \( \mathbb{C} - \{ 0\} \), as we will soon see (Remark 4.55).
Definition 4.43. Let \( \mathrm{I} = \left\lbrack {0,1}\right\rbrack \), let \( \delta : I \times I \rightarrow D \subseteq \mathbb{C} \) be a continuous map, and let \( \omega \) be a closed form on \( D \) .
A function \( f : I \times I \rightarrow \mathbb{C} \) is said to be a primitive for \( \omega \) along \( \delta \) provided for every \( \left( {{t}_{0},{u}_{0}}\right) \in I \times I \), there exists a neighborhood \( V \) of \( \delta \left( {{t}_{0},{u}_{0}}\right) \) in \( D \) and a primitive \( F \) for \( \omega \) on \( V \) such that
\[
f\left( {t, u}\right) = F\left( {\delta \left( {t, u}\right) }\right)
\]
for all \( \left( {t, u}\right) \) in some neighborhood of \( \left( {{t}_{0},{u}_{0}}\right) \) in \( I \times I \) .
Remark 4.44. (1) Such a function \( f \) is automatically continuous on \( I \times I \) .
(2) For fixed \( u \in I, f\left( {\cdot, u}\right) \) is a primitive for \( \omega \) along the path \( t \mapsto \delta \left( {t, u}\right) \) (see Definition 4.31).
Theorem 4.45. If \( \omega \) is a closed form on \( D \) and \( \delta : \left\lbrack {0,1}\right\rbrack \times \left\lbrack {0,1}\right\rbrack \rightarrow D \) is a continuous map, then a primitive \( f \) for \( \omega \) along \( \delta \) exists and is unique up to an additive constant.
Proof. We leave the proof as an exercise for the reader.
We now observe that all integrals of a closed form along homotopic paths coincide.
Theorem 4.46. Let \( {\gamma }_{0} \) and \( {\gamma }_{1} \) be continuous paths in a domain \( D \) and let \( \omega \) be a closed form on \( D \) . If \( {\gamma }_{0} \) is homotopic to \( {\gamma }_{1} \) with fixed end points, then
\[
{\int }_{{\gamma }_{0}}\omega = {\int }_{{\gamma }_{1}}\omega
\]
Proof. We assume that both paths are parameterized by the interval \( I = \left\lbrack {0,1}\right\rbrack \) . Let \( \delta : I \times I \rightarrow D \) be a homotopy between our two paths and let \( f \) be a primitive of \( \omega \) along \( \delta \) . Thus \( u \mapsto f\left( {0, u}\right) \) is a primitive of \( \omega \) along the constant curve \( u \mapsto \) \( \delta \left( {0, u}\right) = {\gamma }_{0}\left( 0\right) \), and hence \( f\left( {0, u}\right) \) is a constant \( \alpha \) independent of \( u \) . Similarly \( f\left( {1, u}\right) = \beta \in \mathbb{C} \) .
But then
\[
{\int }_{{\gamma }_{0}}\omega = f\left( {1,0}\right) - f\left( {0,0}\right) = \beta - \alpha
\]
and
\[
{\int }_{{\gamma }_{1}}\omega = f\left( {1,1}\right) - f\left( {0,1}\right) = \beta - \alpha .
\]
Remark 4.47. A similar result holds for two curves that are homotopic as closed paths (see Exercise 4.10).
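Numerically the invariance is easy to observe for the closed form \( \mathrm{d}z/z \) on \( \mathbb{C} - \{ 0\} \) . The sketch below is an illustrative addition (Python with numpy; the two closed paths, a circle and an off-centre ellipse both winding once around the origin, are arbitrary choices of curves that are homotopic as closed paths in \( \mathbb{C} - \{ 0\} \) ): the two integrals agree, both being \( 2\pi \imath \) .

```python
import numpy as np

def integral_dz_over_z(path, num=200_000):
    """Riemann-sum approximation of the integral of dz/z along a closed path
    given as a function of the parameter t in [0, 1]."""
    t = np.linspace(0.0, 1.0, num)
    z = path(t)
    dz = np.diff(z)
    zm = 0.5 * (z[:-1] + z[1:])
    return np.sum(dz / zm)

circle = lambda t: np.exp(2j * np.pi * t)                                  # unit circle
ellipse = lambda t: 0.25 + 3.0 * np.cos(2 * np.pi * t) + 0.5j * np.sin(2 * np.pi * t)

print(integral_dz_over_z(circle) / (2j * np.pi))    # ~ 1
print(integral_dz_over_z(ellipse) / (2j * np.pi))   # ~ 1
```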
Corollary 4.48. If \( \gamma \) is homotopic to a point in \( D \) and \( \omega \) is a closed form in \( D \), then
\[
{\int }_{\gamma }\omega = 0
\]
This corollary motivates the following.
Definition 4.49. A region \( D \subseteq \mathbb{C} \) is called simply connected if every continuous closed path in \( D \) is homotopic to a point in \( D \) .
Example 4.50. (1) The complex plane \( \mathbb{C} \) is simply connected. More generally,
(2) Discs are simply connected: let \( c \in \mathbb{C} \), and for \( R \in \left( {0, + \infty }\right) \) set \( D = {U}_{c}\left( R\right) = \) \( \{ z \in \mathbb{C};\left| {z - c}\right| < R\} \) . Without loss of generality we assume \( c = 0 \) and \( R = 1 \) . Let \( \gamma \) be a continuous closed path in \( D \) parameterized by \( \left\lbrack {0,1}\right\rbrack \), and define \( \delta \left( {t, u}\right) = {u\gamma }\left( t\right) \) . Since \( \left| {{u\gamma }\left( t\right) }\right| \leq \left| {\gamma \left( t\right) }\right| < 1 \), the map \( \delta \) takes values in \( D \), and it is a homotopy of closed paths from the constant path \( 0 \) (at \( u = 0 \) ) to \( \gamma \) (at \( u = 1 \) ); hence \( \gamma \) is homotopic to a point in \( D \) .
Corollary 4.51. If \( D \) is a simply connected domain and \( \gamma \) is a continuous closed path in \( D \), then \( {\int }_{\gamma }\omega = 0 \) for all closed forms \( \omega \) on \( D \) .
We obtain the simplest formulation of the main result:
Corollary 4.52. In a simply connected domain a differential form is closed if and only if it is exact.
Remark 4.53. The property appearing in the last corollary actually characterizes simply connected domains. We will not prove nor use this fact.
An immediate corollary gives the |
1329_[肖梁] Abstract Algebra (2022F) | Definition 4.2.1 |
Definition 4.2.1. In the permutation group \( {S}_{n} \), recall that for distinct numbers \( {a}_{1},\ldots ,{a}_{m} \in \) \( \{ 1,\ldots, n\} \), one has an \( m \) -cycle \( \sigma = \left( {{a}_{1}{a}_{2}\cdots {a}_{m}}\right) \) :
(that is, \( \sigma \) sends \( {a}_{1} \mapsto {a}_{2} \mapsto \cdots \mapsto {a}_{m} \mapsto {a}_{1} \) and fixes every other element of \( \{ 1,\ldots, n\} \) ).
A 2-cycle \( \left( {xy}\right) \), for \( x, y \in \{ 1,\ldots, n\} \) distinct, is called a transposition.
Remark 4.2.2. As \( \left( {{a}_{1}\ldots {a}_{m}}\right) = \left( {{a}_{1}{a}_{m}}\right) \left( {{a}_{1}{a}_{m - 1}}\right) \cdots \left( {{a}_{1}{a}_{2}}\right) \), every element of \( {S}_{n} \) is a product of transpositions.
Properties 4.2.3. Before proceeding, we point out a key observation: for \( \sigma \in {S}_{n} \), we have
(4.2.3.1)
\[
\sigma \left( {{a}_{1},\ldots ,{a}_{m}}\right) {\sigma }^{-1} = \left( {\sigma \left( {a}_{1}\right) ,\ldots ,\sigma \left( {a}_{m}\right) }\right) .
\]
This can be proved by noting:
\[
\sigma \left( {a}_{i}\right) \overset{{\sigma }^{-1}}{ \mapsto }{a}_{i}\overset{\left( {a}_{1},\ldots ,{a}_{m}\right) }{ \mapsto }{a}_{i + 1}\overset{\sigma }{ \mapsto }\sigma \left( {a}_{i + 1}\right) .
\]
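A mechanical check of (4.2.3.1) is immediate once permutations are composed right to left. The snippet below is an illustrative addition in Python (the representation of permutations as dictionaries and the choice of random examples are ad hoc): it verifies \( \sigma \left( {{a}_{1},\ldots ,{a}_{m}}\right) {\sigma }^{-1} = \left( {\sigma \left( {a}_{1}\right) ,\ldots ,\sigma \left( {a}_{m}\right) }\right) \) on random 4-cycles in \( {S}_{9} \) .

```python
import random

def cycle(elts, n):
    """The cycle (elts[0] elts[1] ...) as a permutation of {1, ..., n}."""
    p = {i: i for i in range(1, n + 1)}
    for a, b in zip(elts, elts[1:] + elts[:1]):
        p[a] = b
    return p

def compose(p, q):
    """(p o q)(i) = p(q(i)): apply q first, then p."""
    return {i: p[q[i]] for i in q}

def inverse(p):
    return {v: k for k, v in p.items()}

n = 9
random.seed(0)
for _ in range(100):
    sigma = dict(zip(range(1, n + 1), random.sample(range(1, n + 1), n)))
    a = random.sample(range(1, n + 1), 4)                 # a 4-cycle (a1 a2 a3 a4)
    lhs = compose(compose(sigma, cycle(a, n)), inverse(sigma))
    rhs = cycle([sigma[x] for x in a], n)
    assert lhs == rhs
print("sigma (a1 ... am) sigma^{-1} = (sigma(a1) ... sigma(am)) checked on 100 examples")
```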
Definition 4.2.4. Define the following for \( \sigma \in {S}_{n} \)
\[
\Delta \mathrel{\text{:=}} \mathop{\prod }\limits_{{1 \leq i < j \leq n}}\left( {{x}_{i} - {x}_{j}}\right) ,\;\sigma \left( \Delta \right) \mathrel{\text{:=}} \mathop{\prod }\limits_{{1 \leq i < j \leq n}}\left( {{x}_{\sigma \left( i\right) } - {x}_{\sigma \left( j\right) }}\right) \in \{ \pm \Delta \} .
\]
For each \( \sigma \in {S}_{n} \), define \( \operatorname{sgn}\left( \sigma \right) \in \{ \pm 1\} \) so that \( \sigma \left( \Delta \right) = \operatorname{sgn}\left( \sigma \right) \Delta \) .
We call \( \operatorname{sgn}\left( \sigma \right) \) the sign of \( \sigma \) .
\[
\sigma \text{ is called an }\left\{ \begin{array}{ll} \text{ even permutation } & \text{ if }\operatorname{sgn}\left( \sigma \right) = 1; \\ \text{ odd permutation } & \text{ if }\operatorname{sgn}\left( \sigma \right) = - 1; \end{array}\right.
\]
Proposition 4.2.5. The map \( \operatorname{sgn} : {S}_{n} \rightarrow \{ \pm 1\} \) is a homomorphism.
Proof. By definition, for \( \sigma ,\tau \in {S}_{n} \), we have
\[
\operatorname{sgn}\left( {\sigma \tau }\right) = \frac{\mathop{\prod }\limits_{{1 \leq i < j \leq n}}\left( {{x}_{{\sigma \tau }\left( i\right) } - {x}_{{\sigma \tau }\left( j\right) }}\right) }{\mathop{\prod }\limits_{{1 \leq i < j \leq n}}\left( {{x}_{i} - {x}_{j}}\right) } = \mathop{\prod }\limits_{{1 \leq i < j \leq n}}\frac{{x}_{{\sigma \tau }\left( i\right) } - {x}_{{\sigma \tau }\left( j\right) }}{{x}_{\tau \left( i\right) } - {x}_{\tau \left( j\right) }} \cdot \frac{\mathop{\prod }\limits_{{1 \leq i < j \leq n}}\left( {{x}_{\tau \left( i\right) } - {x}_{\tau \left( j\right) }}\right) }{\mathop{\prod }\limits_{{1 \leq i < j \leq n}}\left( {{x}_{i} - {x}_{j}}\right) }.
\]
In the first factor we are allowed to make the substitution \( {i}^{\prime } = \tau \left( i\right) \) and \( {j}^{\prime } = \tau \left( j\right) \), because in the product of \( \frac{{x}_{\sigma \left( {i}^{\prime }\right) } - {x}_{\sigma \left( {j}^{\prime }\right) }}{{x}_{{i}^{\prime }} - {x}_{{j}^{\prime }}} \) we can swap the order of \( {i}^{\prime } \) and \( {j}^{\prime } \) (and thus release the constraint that \( {i}^{\prime } < {j}^{\prime } \) ); hence the first factor equals
\[
\mathop{\prod }\limits_{{1 \leq {i}^{\prime } < {j}^{\prime } \leq n}}\frac{{x}_{\sigma \left( {i}^{\prime }\right) } - {x}_{\sigma \left( {j}^{\prime }\right) }}{{x}_{{i}^{\prime }} - {x}_{{j}^{\prime }}} = \operatorname{sgn}\left( \sigma \right) .
\]
The second factor is \( \operatorname{sgn}\left( \tau \right) \) by definition, and thus \( \operatorname{sgn}\left( {\sigma \tau }\right) = \operatorname{sgn}\left( \sigma \right) \operatorname{sgn}\left( \tau \right) \) .
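For small \( n \) the homomorphism property can also be confirmed by brute force directly from Definition 4.2.4. The snippet below is an illustrative addition in Python (the evaluation points substituted for \( {x}_{1},\ldots ,{x}_{4} \) are arbitrary distinct numbers): it computes \( \operatorname{sgn} \) via the product \( \Delta \) and checks \( \operatorname{sgn}\left( {\sigma \tau }\right) = \operatorname{sgn}\left( \sigma \right) \operatorname{sgn}\left( \tau \right) \) for all \( \sigma ,\tau \in {S}_{4} \) .

```python
from itertools import permutations
from math import prod

n = 4
pts = [1, 10, 100, 1000]          # distinct values substituted for x_1, ..., x_n

def delta(perm):
    """Product of (x_{perm(i)} - x_{perm(j)}) over i < j, at the chosen points."""
    return prod(pts[perm[i]] - pts[perm[j]] for i in range(n) for j in range(i + 1, n))

def sgn(perm):
    """sgn(sigma), defined by sigma(Delta) = sgn(sigma) * Delta."""
    return delta(perm) // delta(tuple(range(n)))

perms = list(permutations(range(n)))
for s in perms:
    for t in perms:
        st = tuple(s[t[i]] for i in range(n))       # (sigma tau)(i) = sigma(tau(i))
        assert sgn(st) == sgn(s) * sgn(t)
print("sgn is multiplicative on S_4")
```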
Definition 4.2.6. The normal subgroup \( {A}_{n} \mathrel{\text{:=}} \ker \left( {\operatorname{sgn} : {S}_{n} \rightarrow \{ \pm 1\} }\right) \) is called the alternating group.
Properties 4.2.7. (1) \( {A}_{n} \vartriangleleft {S}_{n} \) and \( {S}_{n}/{A}_{n} \cong \{ \pm 1\} \) . In particular,
\[
\left| {A}_{n}\right| = \left| {S}_{n}\right| /\left| {\{ \pm 1\} }\right| = \frac{n!}{2}.
\]
(2) We claim that \( \operatorname{sgn}\left( \text{transposition}\right) = - 1 \) . Indeed, \( \operatorname{sgn}\left( \left( {12}\right) \right) = - 1 \) because for \( \sigma = \left( {12}\right) \) ,
\[
\sigma \left( \Delta \right) = \mathop{\prod }\limits_{{1 \leq i < j \leq n}}\left( {{x}_{\sigma \left( i\right) } - {x}_{\sigma \left( j\right) }}\right) = \left( {{x}_{\sigma \left( 1\right) } - {x}_{\sigma \left( 2\right) }}\right) \mathop{\prod }\limits_{\substack{{1 \leq i < j \leq n} \\ {j \geq 3} }}\left( {{x}_{i} - {x}_{j}}\right) = \left( {{x}_{2} - {x}_{1}}\right) \mathop{\prod }\limits_{\substack{{1 \leq i < j \leq n} \\ {j \geq 3} }}\left( {{x}_{i} - {x}_{j}}\right) = - \Delta .
\]
For a general transposition \( \left( {ab}\right) \), fix \( \tau \in {S}_{n} \), such that \( \tau \left( 1\right) = a \) and \( \tau \left( 2\right) = b \) . Then (4.2.3.1) implies that \( \tau \left( {12}\right) {\tau }^{-1} = \left( {ab}\right) \) . Thus,
\[
\operatorname{sgn}\left( \left( {ab}\right) \right) = \operatorname{sgn}\left( \tau \right) \cdot \operatorname{sgn}\left( \left( {12}\right) \right) \cdot \operatorname{sgn}{\left( \tau \right) }^{-1} = \operatorname{sgn}\left( \left( {12}\right) \right) = - 1.
\]
Thus, in general, we have for \( \sigma \in {S}_{n} \) ,
\[
\operatorname{sgn}\left( \sigma \right) = {\left( -1\right) }^{\text{number of transpositions in a factorization of }\sigma }.
\]
In particular, \( {A}_{n} = \left\{ {\sigma \in {S}_{n} \mid \sigma }\right. \) is an even permutation \( \} \) .
Theorem 4.2.8. When \( n \geq 5,{A}_{n} \) is a simple group.
Remark 4.2.9. (1) \( {A}_{3} = \langle \left( {123}\right) \rangle \) is a cyclic group of order 3 .
(2) \( {A}_{4} \trianglerighteq \{ 1,\left( {12}\right) \left( {34}\right) ,\left( {14}\right) \left( {23}\right) ,\left( {13}\right) \left( {24}\right) \} \cong {\mathbf{Z}}_{2}^{2} \) .
(3) It is known that a simple group of order 60 is isomorphic to \( {A}_{5} \) . (It is the smallest non-commutative simple group.)
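Before turning to the proof, the smallest case \( n = 5 \) can be confirmed by an independent brute-force count (an illustrative Python addition; the argument used here, via conjugacy class sizes and Lagrange's theorem, is specific to \( {A}_{5} \) and is not the proof given below): the conjugacy classes of \( {A}_{5} \) have sizes \( 1,{12},{12},{15},{20} \), and no union of classes containing the identity has order a proper nontrivial divisor of 60, so \( {A}_{5} \) has no proper nontrivial normal subgroup.

```python
from itertools import combinations, permutations

def sgn(p):
    """Parity of a permutation of {0, ..., 4} via its inversion count."""
    return (-1) ** sum(p[i] > p[j] for i in range(5) for j in range(i + 1, 5))

def compose(p, q):
    """(p o q)(i) = p(q(i))."""
    return tuple(p[q[i]] for i in range(5))

def inverse(p):
    inv = [0] * 5
    for i, v in enumerate(p):
        inv[v] = i
    return tuple(inv)

A5 = [p for p in permutations(range(5)) if sgn(p) == 1]
assert len(A5) == 60

# Conjugacy class sizes in A_5.
class_sizes, seen = [], set()
for g in A5:
    if g not in seen:
        cls = {compose(compose(s, g), inverse(s)) for s in A5}
        seen |= cls
        class_sizes.append(len(cls))
print(sorted(class_sizes))            # [1, 12, 12, 15, 20]

# A normal subgroup is a union of conjugacy classes containing the identity,
# and its order must divide 60; only the orders 1 and 60 survive this test.
rest = [s for s in class_sizes if s > 1]
orders = {1 + sum(c) for k in range(len(rest) + 1)
          for c in combinations(rest, k) if 60 % (1 + sum(c)) == 0}
print(sorted(orders))                 # [1, 60]
```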
Proof of Theorem 4.2.8. Recall that a 3-cycle \( \left( {ijk}\right) \) always belongs to \( {A}_{n} \) . We will prove three statements, which together prove Theorem 4.2.8.
(1) \( {A}_{n} \) is generated by all 3-cycles (true with \( n \geq 3 \) ).
Indeed, \( \left( {a, b}\right) \left( {c, d}\right) = \left( {a, c, b}\right) \left( {a, c, d}\right) \) and \( \left( {a, c}\right) \left( {a, b}\right) = \left( {a, b, c}\right) \) .
(2) If a normal subgroup \( N \trianglelefteq {A}_{n} \) contains a 3-cycle, then it contains all 3-cycles (true for \( n \geq 3 \) ).
Indeed, assume that \( N \) contains the 3-cycle \( \left( {i, j, k}\right) \) . Note that, for every \( \sigma \in {S}_{n} \), either \( \sigma \in {A}_{n} \) or \( \sigma \left( {i, j}\right) \in {A}_{n} \) . Then (4.2.3.1) implies that
- either \( \sigma \left( {i, j, k}\right) {\sigma }^{-1} = \left( {\sigma \left( i\right) ,\sigma \left( j\right) ,\sigma \left( k\right) }\right) \in N \), or
- \( \sigma \left( {i, j}\right) \left( {i, j, k}\right) {\left( \sigma \left( i, j\right) \right) }^{-1} = \sigma \left( {j, i, k}\right) {\sigma }^{-1} = \left( {\sigma \left( j\right) ,\sigma \left( i\right) ,\sigma \left( k\right) }\right) \in N \) (but then we
have \( {\left( \sigma \left( j\right) ,\sigma \left( i\right) ,\sigma \left( k\right) \right) }^{2} = \left( {\sigma \left( i\right) ,\sigma \left( j\right) ,\sigma \left( k\right) }\right) \in N \) ).
So \( N \) always contains \( \left( {\sigma \left( i\right) ,\sigma \left( j\right) ,\sigma \left( k\right) }\right) \) for every \( \sigma \in {S}_{n} \), and thus \( N \) contains all 3-cycles.
(3) If \( \{ 1\} \neq N \vartriangleleft {A}_{n} \) is a nontrivial normal subgroup, then \( N \) contains all 3-cycles.
Take a nontrivial element \( \sigma \in N \) . We separate several cases:
(a) If \( \sigma \) is a product of disjoint cycles, at least one of which has length \( > 3 \), i.e. \( \sigma = \mu \left( {{a}_{1},{a}_{2},\ldots ,{a}_{r}}\right) \) with \( r > 3 \), then we have
\[
\left( {{a}_{1},{a}_{2},{a}_{3}}\right) \sigma {\left( {a}_{1},{a}_{2},{a}_{3}\right) }^{-1} = \mu \left( {{a}_{2},{a}_{3},{a}_{1},{a}_{4},{a}_{5},\ldots ,{a}_{r}}\right) \in N.
\]
So \( {\sigma }^{-1} \circ \left( {{a}_{1},{a}_{2},{a}_{3}}\right) \sigma {\left( {a}_{1},{a}_{2},{a}_{3}\right) }^{-1} \) sends
\( {a}_{1} \mapsto {a}_{3},\;{a}_{3} \mapsto {a}_{r},\;{a}_{r} \mapsto {a}_{1}, \) and fixes every other letter.
It is equal to \( \left( {{a}_{1},{a}_{3},{a}_{r}}\right) \), a 3-cycle.
(b) Suppose that (a) does not hold; then \( \sigma \) is a product of disjoint 3-cycles and 2-cycles. It then follows that \( {\sigma }^{3} \) is a product of disjoint 2-cycles and \( {\sigma }^{2} \) is a product of disjoint 3-cycles (and they cannot both be 1). So, by considering \( {\sigma }^{3} \) or \( {\sigma }^{2} \) instead of \( \sigma \), we are reduced to the case when \( \sigma \) is purely a product of disjoint 3-cycles or a product of disjoint 2-cycles.
(c) If \( \sigma \) is a product of exactly one 3-cycle, we are already done. If \( \sigma \) is a product of more than one disjoint 3-cycle, we write \( \sigma = \mu \left( {{a}_{4},{a}_{5},{a}_{6}}\right) \left( {{a}_{1},{a}_{2},{a}_{3}}\right) \) . Then
\[
\left( {{a}_{1},{a}_{2},{a}_{4}}\right) \sigma {\left( {a}_{1},{a}_{2},{a}_{4}\right) }^{-1}{\sigma }^{-1} \in N
\]
and we compute it as:
\( {a}_{1} \mapsto {a}_{2},\;{a}_{2} \mapsto {a}_{5},\;{a}_{5} \mapsto {a}_{3},\;{a}_{3} \mapsto {a}_{4},\;{a}_{4} \mapsto {a}_{1}, \) with \( {a}_{6} \) and every other letter fixed.
This is \( \left( {{a}_{1},{a}_{2},{a}_{5},{a}_{3},{a}_{4}}\right) \), a 5-cycle, and we are reduced to case (a).
(d) If \( \sigma \) is a product of a (necessarily even) number of disjoint transpositions, we write \( \sigma = \mu \left( {{a}_{1},{a}_{2}}\right) \left( {{a}_{3},{a}_{4}}\right) \) . Then
\[
\left( {{a}_{1},{a}_{2},{a}_{3}}\right) \sigma {\left( {a}_{1},{a}_{2},{a}_{3}\right) }^{-1}{\sigma }^{-1} = \left( {{a}_{1 |
1569_混合相依变量的极限理论(陆传荣) | Definition 6.2.3 |
Definition 6.2.3. The random field \( \left\{ {{\xi }_{n,\mathbf{j}},\mathbf{j} \in {\mathcal{J}}_{n}}\right\} \) is said to be symmetric \( \varphi \) -mixing if \( \varphi \left( {nx}\right) \rightarrow 0 \) as \( n \rightarrow \infty \), where
\[
\varphi \left( {nx}\right) = \mathop{\sup }\limits_{\substack{{I, J \subset {\mathcal{J}}_{n}} \\ {d\left( {I, J}\right) \geq x} }}\mathop{\sup }\limits_{\substack{{A \in \sigma \left( {{\xi }_{n,\mathbf{j}};\mathbf{j} \in I}\right) } \\ {B \in \sigma \left( {{\xi }_{n,\mathbf{j}};\mathbf{j} \in J}\right) } }}\max \left( {\left| {P\left( {A \mid B}\right) - P\left( A\right) }\right| ,\left| {P\left( {B \mid A}\right) - P\left( B\right) }\right| }\right) .
\]
Definition 6.2.4. The random field \( \left\{ {{\xi }_{n\mathbf{j}},\mathbf{j} \in {\mathcal{J}}_{n}}\right\} \) is said to be absolutely regular if \( \beta \left( {nx}\right) \rightarrow 0 \) as \( n \rightarrow \infty \), where
\[
\beta \left( {nx}\right) = \mathop{\sup }\limits_{{I, J \subset {\mathcal{J}}_{n}, d\left( {I, J}\right) \geq x}}\parallel \mathcal{L}\left( {{\xi }_{n,\mathbf{j}},\mathbf{j} \in I \cup J}\right) - \mathcal{L}\left( {{\xi }_{n,\mathbf{j}},\mathbf{j} \in I}\right) \mathcal{L}\left( {{\xi }_{n,\mathbf{j}},\mathbf{j} \in J}\right) {\parallel }_{\mathrm{{Var}}},
\]
where \( \mathcal{L}\left( {\xi \left( \cdot \right) }\right) \) is the distribution law of \( \{ \xi \left( \cdot \right) \} \) and \( \parallel \cdot {\parallel }_{\mathrm{{Var}}} \) is the variation norm.
It is clear that
\[
\alpha \left( {nx}\right) \leq \rho \left( {nx}\right) \leq {2\varphi }\left( {nx}\right) ,\;\alpha \left( {nx}\right) \leq \beta \left( {nx}\right) \leq \varphi \left( {nx}\right) .
\]
\( \left( {6.2.2}\right) \)
Next, we introduce the metric entropy condition. We say that Borel sets \( A, B \) in \( {\mathcal{B}}^{d} \) are equivalent if \( \left| {A\bigtriangleup B}\right| = 0 \), and denote the set of equivalence classes by \( \mathcal{E} \) . Define \( {d}_{L}\left( {A, B}\right) = \left| {A\bigtriangleup B}\right| \) ; it can be proved that \( {d}_{L}\left( {\cdot , \cdot }\right) \) is a metric on \( \mathcal{E} \), and the set \( \mathcal{E} \) forms a complete metric space under \( {d}_{L} \) .
Definition 6.2.5. A subset \( \mathcal{A} \) of \( \mathcal{E} \) is called totally bounded with inclusion, if for every \( \delta > 0 \) there is a finite set \( {\mathcal{A}}_{\delta } \subset \mathcal{E} \), such that for every \( A \in \mathcal{A} \) there exist \( {A}^{ + },{A}^{ - } \in {\mathcal{A}}_{\delta } \) with \( {A}^{ - } \subset A \subset {A}^{ + } \) and \( \left| {{A}^{ + } \smallsetminus {A}^{ - }}\right| \leq \delta . \)
Note that \( {\mathcal{A}}_{\delta } \) is a \( \delta \) -net with respect to \( {d}_{L} \) for \( \mathcal{A} \) .
Let \( \mathcal{A} \) be a totally bounded subset of \( \mathcal{E} \) . Its closure \( \overline{\mathcal{A}} \) is complete and totally bounded, hence compact. Let \( C\left( \overline{\mathcal{A}}\right) \) be the space of continuous functions on \( \overline{\mathcal{A}} \) with the sup norm \( \parallel \cdot \parallel \) . Because \( \overline{\mathcal{A}} \) is compact, \( C\left( \overline{\mathcal{A}}\right) \) is separable. Thus \( C\left( \overline{\mathcal{A}}\right) \) is a complete, separable metric space. Let \( {CA}\left( \overline{\mathcal{A}}\right) \) be the set of everywhere additive elements of \( C\left( \overline{\mathcal{A}}\right) \), namely, elements \( f \) such that \( \;f\left( {A \cup B}\right) = f\left( A\right) + f\left( B\right) - f\left( {A \cap B}\right) \; \) whenever \( \;A, B, A \cup B, A \cap B \in \overline{\mathcal{A}}. \) It can be shown that for fixed \( \omega ,{Z}_{n}\left( \cdot \right) \in {CA}\left( \overline{\mathcal{A}}\right) \), i.e. \( {Z}_{n} \) are random elements of \( {CA}\left( \overline{\mathcal{A}}\right) \) . A standard Wiener process on \( \overline{\mathcal{A}} \) is a random element \( W \) of \( {CA}\left( \overline{\mathcal{A}}\right) \) whose finite dimensional laws are Gaussian with \( {EW}\left( A\right) = \) \( 0,{EW}\left( A\right) W\left( B\right) = \left| {A \cap B}\right| \) . In order that \( W \) should exist it is necessary (see Dudley 1973) that \( \mathcal{A} \) satisfies a metric entropy condition.
Definition 6.2.6. Let \( \mathcal{A} \) be a totally bounded subset of \( \mathcal{E},{\mathcal{A}}_{\delta } \) be the smallest \( \delta \) -net of \( \mathcal{A} \) . Denote
\[
N\left( {\delta ,\mathcal{A}}\right) = \operatorname{Card}{\mathcal{A}}_{\delta },\;H\left( \delta \right) = \log N\left( {\delta ,\mathcal{A}}\right) .
\]
\( \mathcal{A} \) is said to satisfy a metric entropy condition, or a convergent entropy integral, if
\[
{\int }_{0}^{1}{\left( \frac{H\left( \delta \right) }{\delta }\right) }^{1/2}{d\delta } < \infty
\]
\( \left( {6.2.3}\right) \)
Define the exponent of metric entropy of \( \mathcal{A} \) by \( r \mathrel{\text{:=}} \inf \left\{ {s > 0 : H\left( \delta \right) = O\left( {\delta }^{-s}\right) \text{ as }\delta \rightarrow 0}\right\} \) . If \( r < 1 \), then (6.2.3) holds.
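The condition (6.2.3) can be explored numerically for model growth rates of \( H\left( \delta \right) \) . The sketch below is an illustrative addition in Python (numpy; the constants and the crude midpoint rule are ad hoc choices): for \( H\left( \delta \right) = \log \left( {1/\delta }\right) \) (exponent \( r = 0 \) ) and for \( H\left( \delta \right) = {\delta }^{-1/2} \) the truncated integrals stabilize as the lower cutoff tends to 0, while for \( H\left( \delta \right) = {\delta }^{-1} \) they grow without bound, matching the threshold \( s < 1 \) .

```python
import numpy as np

def entropy_integral(H, eps, num=1_000_000):
    """Midpoint-rule approximation of the integral of sqrt(H(d)/d) over [eps, 1]."""
    edges = np.linspace(eps, 1.0, num + 1)
    mid = 0.5 * (edges[:-1] + edges[1:])
    return np.sum(np.sqrt(H(mid) / mid)) * (1.0 - eps) / num

cases = {
    "H = log(1/d)   (r = 0)  ": lambda d: np.log(1.0 / d),
    "H = d^(-1/2)   (s = 1/2)": lambda d: d ** -0.5,
    "H = d^(-1)     (s = 1)  ": lambda d: d ** -1.0,
}

for name, H in cases.items():
    vals = [entropy_integral(H, eps) for eps in (1e-2, 1e-4, 1e-6)]
    print(name, " ".join(f"{v:8.3f}" for v in vals))
```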
Remark 6.2.1. Some examples of classes of sets which satisfy the metric entropy condition are as follows:
If \( {\mathcal{C}}^{d} \) denotes the convex subsets of \( {\left\lbrack 0,1\right\rbrack }^{d} \), then \( r = \left( {d - 1}\right) /2 \) (see Dudley 1974).
If \( {\mathcal{I}}^{d} = \left\{ {(\mathbf{a},\mathbf{b}\rbrack : \mathbf{a},\mathbf{b} \in {\left\lbrack 0,1\right\rbrack }^{d}}\right\} \) as above, then \( r = 0 \) .
If \( {\mathcal{P}}^{d, m} \) denotes the family of all polygonal regions of \( {\left\lbrack 0,1\right\rbrack }^{d} \) with no more than \( m \) vertices, then \( r = 0 \) (see Erickson 1981).
If \( {\mathcal{E}}^{d} \) denotes the set of all ellipsoidal regions in \( {\left\lbrack 0,1\right\rbrack }^{d} \), then \( r = 0 \) (see Gaenssler 1983).
For the Vapnik-Červonenkis class \( \mathcal{V} \), which includes the above three examples, it is known that \( N\left( {\delta ,\mathcal{V},{d}_{\lambda }}\right) = \operatorname{Card}{\mathcal{V}}_{\delta } \leq c{\delta }^{-v} \) for some \( c \) and \( v > 0 \) (Dudley 1978).
When \( \left\{ {{\xi }_{\mathbf{t}},\mathbf{t} \in {\mathbb{Z}}^{d}}\right\} \) is independent, the weak convergence of \( {Z}_{n} \) to \( W \) has been studied by Bass and Pyke \( \left( {{1984},{1985}}\right) \), Alexander and Pyke
(1986), Lu (1992), etc. For the mixing random field \( \left\{ {{\xi }_{n,\mathbf{j}},\mathbf{j} \in {\mathcal{J}}_{n}, n \geq }\right. \) \( 1\} \), the weak convergence of \( {Z}_{n} \) to \( W \) was first discussed by Goldie and Greenwood (1986a, b). They proved the following theorem.
Theorem 6.2.1. Assume that \( E{\xi }_{n,\mathbf{j}} = 0 \), and
(i) for some \( s > 2,\left\{ {{\left| {n}^{d/2}{\xi }_{n,\mathbf{j}}\right| }^{s},\mathbf{j} \in {\mathcal{J}}_{n}, n \geq 1}\right\} \) is uniformly integrable;
(ii) the exponent \( r \) of metric entropy (with inclusion) of \( \mathcal{A} \) satisfies \( r < 1 \)
(iii) \( \beta \left( {nx}\right) = O\left( {\left( nx\right) }^{b}\right) \) as \( {nx} \rightarrow \infty \), the exponent \( b \) of absolute regularity satisfies \( b \geq {ds}/\left( {s - 2}\right) \) and \( b > d\left( {1 + r}\right) /\left( {1 - r}\right) \) ;
(iv) the symmetric \( \varphi \) -mixing coefficients satisfy
\[
\mathop{\sup }\limits_{{n \geq 1}}\mathop{\sum }\limits_{{j = 1}}^{\infty }{\varphi }^{1/2}\left( {{2}^{j}{n}^{-1}}\right) < \infty
\]
(v) for any null family \( \left\{ {{D}_{h},0 < h < {h}_{0}}\right\} \) in \( {\mathcal{I}}^{d} \) (a null family is a collection such that \( {D}_{h} \subseteq {D}_{{h}^{\prime }} \) for \( h \leq {h}^{\prime } \) and \( \left| {D}_{h}\right| = h \) for each \( h \) ),
\[
\mathop{\lim }\limits_{{h \downarrow 0}}\mathop{\limsup }\limits_{{n \rightarrow \infty }}\left| {\frac{E{Z}_{n}^{2}\left( {D}_{h}\right) }{\left| {D}_{h}\right| } - 1}\right| = 0.
\]
Then \( {Z}_{n} \) converges weakly in \( {CA}\left( \overline{\mathcal{A}}\right) \) to \( W \) .
For a random field \( \left\{ {{\xi }_{\mathbf{t}},\mathbf{t} \in {\mathbb{Z}}^{d}}\right\} \), the versions of Definitions 6.2.1,6.2.2, 6.2.3 and 6.2.4 are as follows:
\[
\alpha \left( x\right) = \mathop{\sup }\limits_{\substack{{I, J \subset {\mathbb{Z}}^{d}} \\ {d\left( {I, J}\right) \geq x} }}\;\mathop{\sup }\limits_{\substack{{A \in \sigma \left( {{\xi }_{\mathbf{i}},\mathbf{i} \in I}\right) } \\ {B \in \sigma \left( {{\xi }_{\mathbf{j}},\mathbf{j} \in J}\right) } }}\left| {P\left( {AB}\right) - P\left( A\right) P\left( B\right) }\right| ,
\]
\[
\rho \left( x\right) = \mathop{\sup }\limits_{\substack{{I, J \subset {\mathbb{Z}}^{d}} \\ {d\left( {I, J}\right) \geq x} }}\;\mathop{\sup }\limits_{\substack{{X \in {L}_{2}\left( {\sigma \left( {{\xi }_{\mathbf{i}},\mathbf{i} \in I}\right) }\right) } \\ {Y \in {L}_{2}\left( {\sigma \left( {{\xi }_{\mathbf{j}},\mathbf{j} \in J}\right) }\right) } }}\frac{\left| \operatorname{Cov}\left( {X, Y}\right) \right| }{\sqrt{\operatorname{Var}X\operatorname{Var}Y}},
\]
\[
\beta \left( x\right) = \mathop{\sup }\limits_{{I, J \subset {\mathbb{Z}}^{d}, d\left( {I, J}\right) \geq x}}\parallel \mathcal{L}\left( {{\xi }_{\mathbf{j}},\mathbf{j} \in I \cup J}\right) - \mathcal{L}\left( {{\xi }_{\mathbf{j}},\mathbf{j} \in I}\right) \mathcal{L}\left( {{\xi }_{\mathbf{j}},\mathbf{j} \in J}\right) {\parallel }_{\mathrm{{Var}}},
\]
\[
\varphi \left( x\right) = \mathop{\sup }\limits_{\substack{{I, J \subset {\mathbb{Z}}^{d}} \\ {d\left( {I, J}\right) \geq x} }}\;\mathop{\sup }\limits_{\substack{{A \in \sigma \left( {{\xi }_{\mathbf{i}},\mathbf{i} \in I}\right), B \in \sigma \left( {{\xi }_{\mathbf{j}},\mathbf{j} \in J}\right) } \\ {P\left( A\right) P\left( B\right) > 0} }}\max \left( {\left| {P\left( {A \mid B}\right) - P\left( A\right) }\right| ,\left| {P\left( {B \mid A}\right) - P\left( B\right) }\right| }\right) .
\]
Corollary 6.2.1. Let \( \left\{ {{\xi }_{\mathbf{t}},\mathbf{t} \in {\mathbb{Z}}^{d}}\right\} \) |
1225_(Griffiths) Introduction to Algebraic Curves | Definition 3.1 |
Definition 3.1. Suppose \( C \) is a compact Riemann surface. The first cohomology group of \( C \) is
\[
{H}^{1}\left( {C,\mathbb{C}}\right) = {\operatorname{Hom}}_{\mathbb{C}}\left( {{H}_{1}\left( {C,\mathbb{Z}}\right) ,\mathbb{C}}\right) .
\]
Definition 3.2. The first De Rham cohomology group of \( C \) (denoted \( {H}_{\mathrm{{DR}}}^{1}\left( C\right) \) ) is defined to be the quotient group of all the closed differential one-forms on \( C \) modulo all the exact differential one-forms on \( C \) .
Suppose \( \lambda \) is a closed differential one-form on \( C \) . In Proposition 2.4 of the preceding section we have already defined the linear function \( {\eta }_{\lambda } \in \) \( {H}^{1}\left( {C,\mathbb{C}}\right) \) on \( {H}_{1}\left( {C,\mathbb{Z}}\right) \) . We then have a homomorphism
\[
\eta : \{ \text{all closed differential one-forms on}C\} \rightarrow {H}^{1}\left( {C,\mathbb{C}}\right) \text{,}
\]
\[
\lambda \mapsto {\eta }_{\lambda }
\]
Proposition 2.5 of the preceding section further tells us that \( \eta \) induces an injective homomorphism from \( {H}_{\mathrm{{DR}}}^{1}\left( C\right) \) into \( {H}^{1}\left( {C,\mathbb{C}}\right) \) .
Moreover, by Proposition 2.6 of the preceding section, the homomorphism
\[
{\Omega }^{1}\left( C\right) \oplus \overline{{\Omega }^{1}\left( C\right) } \rightarrow {H}_{\mathrm{{DR}}}^{1}\left( C\right)
\]
\[
\omega \oplus \bar{\varphi } \mapsto \omega + \bar{\varphi }
\]
is also injective. We therefore get a chain of injective homomorphisms:
\[
{\Omega }^{1}\left( C\right) \oplus \overline{{\Omega }^{1}\left( C\right) } \rightarrowtail {H}_{\mathrm{{DR}}}^{1}\left( C\right) \rightarrowtail {H}^{1}\left( {C,\mathbb{C}}\right) .
\]
Noting that the complex vector spaces at both ends of this chain are both of dimension \( {2g} \), we see that these two homomorphisms are both isomorphisms. This then proves the Hodge and De Rham theorems for the case of a compact Riemann surface which we state next:
THEOREM 3.3 (HODGE). For a compact Riemann surface \( C \), we have
\[
{\Omega }^{1}\left( C\right) \oplus \overline{{\Omega }^{1}\left( C\right) } \cong {H}_{\mathrm{{DR}}}^{1}\left( C\right)
\]
THEOREM 3.4 (DE RHAM). For a compact Riemann surface \( C \), we have
\[
{H}_{\mathrm{{DR}}}^{1}\left( C\right) \cong {H}^{1}\left( {C,\mathbb{C}}\right) .
\]
Next, we give yet another result which will be needed later.
Definition 3.5. Suppose \( C \) is a compact Riemann surface of genus \( g \) , with \( {\omega }_{1},\ldots ,{\omega }_{g} \) forming a basis for \( {\Omega }^{1}\left( C\right) \), and \( {\gamma }_{1},\ldots ,{\gamma }_{2g} \) forming a \( \mathbb{Z} \) -basis for \( {H}_{1}\left( {C,\mathbb{Z}}\right) \) . We call the following \( {2g} \) g-dimensional vectors
\[
{\pi }_{i} = \left( \begin{matrix} {\int }_{{\gamma }_{i}}{\omega }_{1} \\ \vdots \\ {\int }_{{\gamma }_{i}}{\omega }_{g} \end{matrix}\right) \;\left( {i = 1,\ldots ,{2g}}\right)
\]
a system of period vectors of \( C \), and we call the matrix
\[
\Omega = {\left( {\pi }_{1},\ldots ,{\pi }_{2g}\right) }_{g \times {2g}},
\]
a period matrix of \( C \) .
Proposition 3.6. The \( {2g} \) period vectors given above are \( \mathbb{R} \) -linearly independent.
Proof. We prove this by contradiction. Suppose \( {\pi }_{1},\ldots ,{\pi }_{2g} \) are \( \mathbb{R} \) - dependent. Then there exist real numbers, \( {a}_{1},\ldots ,{a}_{2g} \), not all zero, such that
\[
{a}_{1}{\pi }_{1} + \cdots + {a}_{2g}{\pi }_{2g} = 0.
\]
(3.1)
Taking the complex conjugate of this, we get
\[
{a}_{1}{\bar{\pi }}_{1} + \cdots + {a}_{2g}{\bar{\pi }}_{2g} = 0,
\]
(3.2)
where
\[
{\bar{\pi }}_{i} = \left( \begin{matrix} {\int }_{{\gamma }_{i}}{\bar{\omega }}_{1} \\ \vdots \\ {\int }_{{\gamma }_{i}}{\bar{\omega }}_{g} \end{matrix}\right) \left( {i = 1,\ldots ,{2g}}\right) .
\]
Consider the matrix
\[
{\Omega }^{ * } = {\left( \begin{array}{lll} {\pi }_{1} & \cdots & {\pi }_{2g} \\ {\bar{\pi }}_{1} & \cdots & {\bar{\pi }}_{2g} \end{array}\right) }_{{2g} \times {2g}}.
\]
Equations (3.1) and (3.2) together imply that the linear combination of the \( {2g} \) columns of \( {\Omega }^{ * } \) with coefficients \( {a}_{1},\ldots ,{a}_{2g} \) is equal to zero, and hence the rank of \( {\Omega }^{ * } \) is smaller than \( {2g} \), so there exist complex numbers, \( {\lambda }^{1},\ldots ,{\lambda }^{g},{\eta }^{1},\ldots ,{\eta }^{g} \), not all equal to zero, such that the linear combination of the \( {2g} \) rows of \( {\Omega }^{ * } \) with these as coefficients is also equal to zero, i.e.,
\[
{\int }_{{\gamma }_{i}}\left( {\mathop{\sum }\limits_{{j = 1}}^{g}{\lambda }^{j}{\omega }_{j} + \mathop{\sum }\limits_{{j = 1}}^{g}{\eta }^{j}{\bar{\omega }}_{j}}\right) = 0\;\left( {i = 1,\ldots ,{2g}}\right) .
\]
Letting \( \omega = \mathop{\sum }\limits_{{j = 1}}^{g}{\lambda }^{j}{\omega }_{j},\varphi = \mathop{\sum }\limits_{{j = 1}}^{g}{\bar{\eta }}^{j}{\omega }_{j} \), then
\[
{\int }_{{\gamma }_{i}}\left( {\omega + \bar{\varphi }}\right) = 0\;\left( {i = 1,\ldots ,{2g}}\right) .
\]
From Propositions 2.5 and 2.6 of the preceding section, we know that \( \omega = \varphi = 0 \) ; since \( {\omega }_{1},\ldots ,{\omega }_{g} \) are linearly independent, this forces \( {\lambda }^{1} = \cdots = {\lambda }^{g} = {\eta }^{1} = \cdots = {\eta }^{g} = 0 \), contradicting the fact that they are not all zero. Q.E.D.
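For a concrete instance of the proposition, consider the genus-one case: a complex torus \( \mathbb{C}/\left( {\mathbb{Z} + \tau \mathbb{Z}}\right) \) with \( \operatorname{Im}\tau > 0 \) has period matrix \( \Omega = \left( {1,\tau }\right) \), and the criterion used in the proof (the matrix \( {\Omega }^{ * } \) obtained by stacking \( \Omega \) on its conjugate must be nonsingular) amounts to \( \bar{\tau } - \tau = - {2i}\operatorname{Im}\tau \neq 0 \) . The snippet below is an illustrative Python addition (numpy; the tolerance and the sample values of \( \tau \) are arbitrary) implementing this check.

```python
import numpy as np

def periods_R_independent(Omega):
    """Omega is a g x 2g period matrix.  The 2g columns are R-linearly
    independent iff the 2g x 2g matrix stacking Omega on its conjugate
    (the matrix Omega* of Proposition 3.6) is nonsingular."""
    stacked = np.vstack([Omega, np.conj(Omega)])
    return abs(np.linalg.det(stacked)) > 1e-12

tau = 0.3 + 1.7j                                         # any tau with Im(tau) > 0
print(periods_R_independent(np.array([[1.0, tau]])))     # True
print(periods_R_independent(np.array([[1.0, 0.5]])))     # False: both periods are real
```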
## §4. The Riemann inequality
The two inequalities we give in this section are of basic importance to the proof of the Riemann-Roch theorem.
Proposition 4.1 (The Riemann inequality). Suppose \( C \) is a compact Riemann surface of genus \( g, D \in \operatorname{Div}\left( C\right) \) and \( \deg D = d \) . Then
\[
l\left( D\right) \geq d - g + 1
\]
Proof. Proceeding in a manner similar to that of the proof of Proposition 2.3 in \( §2 \), we still need to start out by assuming the following basic fact: there exists a holomorphic mapping
\[
\pi : C \rightarrow {C}^{\prime } \subset {\mathbb{P}}^{2},
\]
such that \( C \) is the normalization of \( {C}^{\prime } \) and where \( {C}^{\prime } \) is an algebraic curve containing only \( \delta \) ordinary double points, \( {p}_{1},\ldots ,{p}_{\delta } \) .
FIGURE 3.2
Let \( {\pi }^{-1}\left( {p}_{i}\right) = {p}_{i}^{\prime } + {p}_{i}^{\prime \prime }\left( {i = 1,2,\ldots ,\delta }\right) \), and write
\[
\Delta = \mathop{\sum }\limits_{{i = 1}}^{\delta }\left( {{p}_{i}^{\prime } + {p}_{i}^{\prime \prime }}\right) \in \operatorname{Div}\left( C\right)
\]
With the help of a coordinate change in \( {\mathbb{P}}^{2} \), there is no harm in assuming that neither \( \pi \left( D\right) \) nor \( \pi \left( \Delta \right) \) contains the point at infinity (see Figure 3.2).
Suppose \( {C}^{\prime } \) is given by the equation \( F\left( {{\xi }^{0},{\xi }^{1},{\xi }^{2}}\right) = 0 \) with \( \deg F = \) \( m \) . Let \( D = {D}^{\prime } - {D}^{\prime \prime } \), where \( {D}^{\prime },{D}^{\prime \prime } \geq 0 \), and write
\[
\deg {D}^{\prime } = {d}^{\prime },\;\deg {D}^{\prime \prime } = {d}^{\prime \prime }.
\]
Let \( {S}^{n} \) represent the set of complex homogeneous polynomials in three variables of degree \( n \) . Then \( {S}^{n} \) is a linear space over \( \mathbb{C} \), with
\[
\dim {S}^{n} = \frac{1}{2}\left( {n + 1}\right) \left( {n + 2}\right)
\]
Let \( G\left( {{\xi }^{0},{\xi }^{1},{\xi }^{2}}\right) \in {S}^{n} \) satisfy the following two conditions
a) \( F \nmid G \)
b) \( G \cdot C \geq {D}^{\prime } + \Delta \), where \( G \cdot C = \left( {{\pi }^{ * }\left( {G\left( {{\xi }^{0},{\xi }^{1},{\xi }^{2}}\right) }\right) }\right) \) (i.e., the divisor induced by the zeros of \( {\pi }^{ * }G \) on \( C \) ).
We elaborate on the divisor \( G \cdot C \), which is an element in \( \operatorname{Div}\left( C\right) \) . For \( p \in C \), let us write
\[
{p}^{\prime } = \pi \left( p\right) \in {C}^{\prime }.
\]
A neighborhood of \( p \) gets mapped under \( \pi \) into a curve component of \( {C}^{\prime } \) near \( {p}^{\prime } \), and we denote this curve component by \( {B}_{p} \) . After a coordinate change in \( {\mathbb{P}}^{2} \), we may take \( {p}^{\prime } \) to be \( \left\lbrack {1,0,0}\right\rbrack \) . Under the same change of coordinates \( G \) gets transformed into \( {G}_{p} \) . Since the multiple points of \( {C}^{\prime } \) are all ordinary double points, the equation of \( {B}_{p} \) near \( {p}^{\prime } \) has the form:
\[
y = {a}_{0}x + {a}_{1}{x}^{2} + \cdots
\]
Substituting this expression in \( {G}_{p}\left( {1, x, y}\right) \), we get a power series in terms of \( x \) . Letting \( n\left( p\right) \) denote the degree of the lowest term, then
\[
G \cdot C = \mathop{\sum }\limits_{{p \in C}}n\left( p\right) p
\]
It is easy to see that the right side of the above expression is a finite sum (since \( F \nmid G \) and \( F \) is irreducible, \( {C}^{\prime } \) and \( \{ G = 0\} \) have no common curve components and so \( {C}^{\prime } \cap \{ G = 0\} \) is a finite set; moreover only if \( p \) belongs to the inverse image under \( \pi \) of this finite set do we have \( n\left( p\right) \neq 0 \) ), and the definition of \( G \cdot C \) is independent of the choice of the coordinate transformation on \( {\mathbb{P}}^{2} \) used above.
We claim: the condition \( G \cdot C \geq {D}^{\prime } + \Delta \) implies that the coefficients of \( G \) must satisfy \( {d}^{\prime } + \delta \) linear equations. In fact, suppose
\[
{D}^{\prime } = \mathop{\sum }\limits_{{i = 1}}^{\delta }\left( {{n}_{i}^{\prime }{p}_{i}^{\prime } + {n}_{i}^{\prime \prime }{p}_{i}^{\prime \prime }}\right) + \mathop{\sum }\limits_{{j = 1}}^{\gamma }{m}_{j}{q}_{j},
\]
where the \( {p}_{i}^{\prime },{p}_{i}^{\prime \prime } \) and \( {q}_{j} \) are all mutually distinct, and \( {n}_{i}^{\prime },{n}_{i}^{\prime \prime },{m}_{j} \geq 0 \) . Then
\[
{D}^{\prime } + \Delta = \mathop{\sum }\limits_{{i = 1}}^{\delta }\left( {\left( {{n}_{i}^{\prime } + 1}\ri |
1075_(GTM233)Topics in Banach Space Theory | Definition 14.1.5 |
Definition 14.1.5. Let \( X \) and \( Y \) be metric spaces. A map \( f : X \rightarrow Y \) is Lipschitz with constant \( K \) (also \( K \) -Lipschitz) if
\[
d\left( {f\left( x\right), f\left( y\right) }\right) \leq {Kd}\left( {x, y}\right) ,\;\forall x, y \in X.
\]
(14.4)
The least constant \( K \) in (14.4) will be denoted by \( \operatorname{Lip}\left( f\right) \), i.e.,
\[
\operatorname{Lip}\left( f\right) = \sup \left\{ {\frac{d\left( {f\left( x\right), f\left( y\right) }\right) }{d\left( {x, y}\right) } : x, y \in X, x \neq y}\right\} .
\]
Such a mapping \( f \) is a Lipschitz embedding if \( f \) is one-to-one and Lipschitz, and \( {f}^{-1} : f\left( X\right) \rightarrow X \) is also Lipschitz, i.e., there exist constants \( 0 < {c}_{1} < {c}_{2} < \infty \) such that
\[
{c}_{1}d\left( {x, y}\right) \leq d\left( {f\left( x\right), f\left( y\right) }\right) \leq {c}_{2}d\left( {x, y}\right) ,\;\forall x, y \in X.
\]
The distortion constant of a Lipschitz embedding \( f \) is \( \operatorname{dist}\left( f\right) = \operatorname{Lip}\left( f\right) \operatorname{Lip}\left( {f}^{-1}\right) \) . Obviously, \( f \) is a Lipschitz isomorphism if it is a surjective Lipschitz embedding. Thus we may define two metric spaces \( X \) and \( Y \) to be Lipschitz isomorphic if there is a Lipschitz isomorphism \( f : X \rightarrow Y \) .
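Distortion is easy to compute for maps between finite metric spaces. The snippet below is an illustrative Python addition (the example, the four-point cycle \( {C}_{4} \) with its graph metric mapped to the corners of the unit square, is an ad hoc choice): it computes \( \operatorname{Lip}\left( f\right) = 1,\operatorname{Lip}\left( {f}^{-1}\right) = \sqrt{2} \), and \( \operatorname{dist}\left( f\right) = \sqrt{2} \) .

```python
import itertools
import math

# The 4-cycle C_4 with its graph metric ...
points = [0, 1, 2, 3]
def d_cycle(i, j):
    k = abs(i - j) % 4
    return min(k, 4 - k)

# ... mapped onto the corners of the unit square in the plane.
square = {0: (0.0, 0.0), 1: (1.0, 0.0), 2: (1.0, 1.0), 3: (0.0, 1.0)}
def d_plane(i, j):
    return math.dist(square[i], square[j])

pairs = list(itertools.combinations(points, 2))
lip_f = max(d_plane(i, j) / d_cycle(i, j) for i, j in pairs)      # Lip(f)
lip_inv = max(d_cycle(i, j) / d_plane(i, j) for i, j in pairs)    # Lip(f^{-1})

print(f"Lip(f)    = {lip_f:.4f}")         # 1.0000
print(f"Lip(f^-1) = {lip_inv:.4f}")       # 1.4142
print(f"dist(f)   = {lip_f * lip_inv:.4f}")
```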
Given Banach spaces \( X \) and \( Y \), we will write \( {\operatorname{Lip}}_{0}\left( {X, Y}\right) \) for the Banach space of all Lipschitz maps from \( X \) into \( Y \) that send 0 to 0 with the norm
\[
\operatorname{Lip}\left( f\right) = \sup \left\{ {\frac{\parallel f\left( x\right) - f\left( y\right) \parallel }{\parallel x - y\parallel } : \left( {x, y}\right) \in {X}^{2}, x \neq y}\right\} .
\]
If \( Y = \mathbb{R} \), we call \( {\operatorname{Lip}}_{0}\left( {X,\mathbb{R}}\right) \) the Lipschitz dual of \( X \) and denote it by \( {\operatorname{Lip}}_{0}\left( X\right) \) .
It is easily checked that \( \mathcal{B}\left( {X, Y}\right) \) is a subspace of \( {\operatorname{Lip}}_{0}\left( {X, Y}\right) \), and in particular, the dual space of \( X \) is a linear subspace of \( {\operatorname{Lip}}_{0}\left( X\right) \) with \( \begin{Vmatrix}{x}^{ * }\end{Vmatrix} = \operatorname{Lip}\left( {x}^{ * }\right) \) for all \( {x}^{ * } \in {X}^{ * } \) .
Definition 14.1.6. A map \( f : X \rightarrow Y \) is uniformly continuous if for every \( \epsilon > 0 \) there exists \( \delta = \delta \left( \epsilon \right) > 0 \) such that
\[
d\left( {f\left( x\right), f\left( y\right) }\right) < \epsilon
\]
whenever \( 0 < d\left( {x, y}\right) < \delta \) . If \( f \) is one-to-one and both \( f \) and \( {f}^{-1} : f\left( X\right) \rightarrow X \) are uniformly continuous, then \( f \) is called a uniform embedding. The mapping \( f \) is a uniform homeomorphism if it is an onto uniform embedding. The spaces \( X \) and \( Y \) are uniformly homeomorphic if there is a uniform homeomorphism \( f : X \rightarrow Y \) .
To quantify the uniform continuity of a map \( f : X \rightarrow Y \), it is useful to introduce the modulus of continuity of \( f \) ,
\[
{\omega }_{f}\left( s\right) = \sup \{ d\left( {f\left( x\right), f\left( y\right) }\right) : x, y \in X, d\left( {x, y}\right) \leq s\} ,\;s > 0.
\]
For general \( f \) one has \( 0 \leq {\omega }_{f}\left( s\right) \leq \infty \), and \( f : X \rightarrow Y \) is uniformly continuous if and only if \( {\omega }_{f}\left( s\right) \rightarrow 0 \) when \( s \rightarrow {0}^{ + } \) . When \( X \) is a convex subset of a Banach space, or more generally when \( X \) is a metrically convex space, and \( f : X \rightarrow Y \) is any mapping, then by the triangle inequality we have subadditivity of the function \( {\omega }_{f} \), i.e.,
\[
{\omega }_{f}\left( {s + t}\right) \leq {\omega }_{f}\left( s\right) + {\omega }_{f}\left( t\right) ,\;\forall s, t > 0.
\]
Recall that a metric space \( \left( {X, d}\right) \) is metrically convex if given any \( x, y \) in \( X \) and \( 0 < \lambda < 1 \) there exists \( {z}_{\lambda } \in X \) with
\[
d\left( {x,{z}_{\lambda }}\right) = {\lambda d}\left( {x, y}\right) \;\text{ and }\;d\left( {y,{z}_{\lambda }}\right) = \left( {1 - \lambda }\right) d\left( {x, y}\right) .
\]
The following lemma gathers other basic facts about the modulus of continuity.
Lemma 14.1.7. Let \( f : X \rightarrow Y \) be a map between two metric spaces.
(i) Suppose \( \omega : \lbrack 0,\infty ) \rightarrow \left\lbrack {0,\infty }\right\rbrack \) is a function such that \( d\left( {f\left( x\right), f\left( y\right) }\right) \leq \omega \left( {d\left( {x, y}\right) }\right) \) for every \( x, y \in X \), and \( \omega \left( s\right) \rightarrow 0 \) as \( s \rightarrow {0}^{ + } \) . Then \( f \) is uniformly continuous and \( \omega \geq {\omega }_{f} \) .
(ii) \( f \) is \( K \) -Lipschitz if and only if \( {\omega }_{f}\left( s\right) \leq {Ks} \) for all \( s > 0 \) .
(iii) If \( f \) is uniformly continuous and \( X \) is metrically convex, then \( {\omega }_{f}\left( s\right) < \infty \) for all \( s > 0 \) .
Proof. We do (iii) and leave the other statements as an exercise. We need to show that for \( s > 0 \) there is \( {C}_{s} > 0 \) such that \( d\left( {f\left( x\right), f\left( y\right) }\right) \leq {C}_{s} \) whenever \( d\left( {x, y}\right) \leq s \) . From the definition of uniform continuity there exists \( {\delta }_{1} > 0 \) such that if \( 0 < \) \( d\left( {a, b}\right) < {\delta }_{1} \), then \( d\left( {f\left( a\right), f\left( b\right) }\right) < 1 \) . Let \( N = {N}_{s} \in \mathbb{N} \) be such that \( s/N < {\delta }_{1} \) . By the metric convexity of \( X \) one can find points \( x = {x}_{0},{x}_{1},\ldots ,{x}_{N} = y \) in \( X \) such that \( d\left( {{x}_{j},{x}_{j + 1}}\right) < d\left( {x, y}\right) /N < s/N < {\delta }_{1} \) for \( 0 \leq j \leq N - 1 \) . Therefore, by the triangle inequality,
\[
d\left( {f\left( x\right), f\left( y\right) }\right) \leq \mathop{\sum }\limits_{{j = 0}}^{{N - 1}}d\left( {f\left( {x}_{j}\right), f\left( {x}_{j + 1}\right) }\right) \leq N,
\]
and our claim holds with \( {C}_{s} = {N}_{s} \) .
By Lemma 14.1.7 (ii), the modulus of continuity of a Lipschitz map is controlled by a linear function. Roughly speaking, one could interpret this by saying that the Lipschitz behavior of a map is closer to a linear behavior than the uniform behavior; hence it seems natural to attempt to Lipschitz-ize a uniformly continuous map. The next result [53] does this in an explicit manner.
Proposition 14.1.8 (Corson and Klee [53]). Let \( f : X \rightarrow Y \) be a uniformly continuous map. If \( X \) is metrically convex, then for every \( \theta > 0 \) there exists a constant \( {K}_{\theta } > 0 \) such that \( d\left( {f\left( x\right), f\left( y\right) }\right) \leq {K}_{\theta }d\left( {x, y}\right) \) whenever \( d\left( {x, y}\right) \geq \theta \) .
Proof. Fix \( \theta > 0 \) . Given \( x, y \) in \( X \) with \( d\left( {x, y}\right) \geq \theta \), let \( m \) be the smallest integer such that \( d\left( {x, y}\right) /m < \theta \) . By the metric convexity of \( X \) we may choose points \( x = {x}_{0},{x}_{1},\ldots ,{x}_{m} = y \) in \( X \) with \( d\left( {{x}_{j},{x}_{j + 1}}\right) < \theta \) . The triangle inequality, Lemma 14.1.7(iii), and our choice of \( m \) yield
\[
d\left( {f\left( x\right), f\left( y\right) }\right) \leq \mathop{\sum }\limits_{{j = 0}}^{{m - 1}}d\left( {f\left( {x}_{j}\right), f\left( {x}_{j + 1}\right) }\right) \leq m{\omega }_{f}\left( \theta \right) \leq \frac{2{\omega }_{f}\left( \theta \right) }{\theta }d\left( {x, y}\right) .
\]
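For \( f\left( x\right) = \sqrt{x} \) on \( \lbrack 0,\infty ) \), for example, the proof above gives the admissible constant \( {K}_{\theta } = 2{\omega }_{f}\left( \theta \right) /\theta = 2/\sqrt{\theta } \), whereas the direct estimate \( \left| {\sqrt{x} - \sqrt{y}}\right| /\left| {x - y}\right| = 1/\left( {\sqrt{x} + \sqrt{y}}\right) \leq 1/\sqrt{\theta } \) for \( \left| {x - y}\right| \geq \theta \) (since then \( \max \left( {x, y}\right) \geq \theta \) ) shows that the constant obtained this way need not be optimal.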
Definition 14.1.9. Given a map \( f : X \rightarrow Y \) between two metric spaces \( X \) and \( Y \), for \( \theta > 0 \) let us define the (possibly infinite) number
\[
{\operatorname{Lip}}_{\theta }\left( f\right) = \sup \left\{ {\frac{d\left( {f\left( x\right), f\left( y\right) }\right) }{d\left( {x, y}\right) } : x, y \in X, d\left( {x, y}\right) \geq \theta }\right\} .
\]
Obviously, \( {\operatorname{Lip}}_{\theta }\left( f\right) \) decreases as \( \theta \) increases. Put
\[
{\operatorname{Lip}}_{\infty }\left( f\right) \mathrel{\text{:=}} \mathop{\inf }\limits_{{\theta > 0}}{\operatorname{Lip}}_{\theta }\left( f\right) = \mathop{\lim }\limits_{{\theta \rightarrow \infty }}{\operatorname{Lip}}_{\theta }\left( f\right)
\]
to denote the (possibly zero) asymptotic Lipschitz constant of \( f \) . The map \( f \) is coarse Lipschitz if \( {\operatorname{Lip}}_{\infty }\left( f\right) < \infty \), i.e., there is \( \theta > 0 \) for which \( {\operatorname{Lip}}_{\theta }\left( f\right) < \infty \) . In this case it is said that \( f \) satisfies a Lipschitz condition for large distances. When \( {\operatorname{Lip}}_{\theta }\left( f\right) < \infty \) for all \( \theta > 0 \), we say that \( f \) is Lipschitz at large distances.
Definition 14.1.10. A map \( f : X \rightarrow Y \) between metric spaces is a coarse Lipschitz embedding if there exist constants \( 0 < {c}_{1} < {c}_{2} \) and \( \theta > 0 \) such that
\[
{c}_{1}d\left( {x, y}\right) \leq d\left( {f\left( x\right), f\left( y\right) }\right) \leq {c}_{2}d\left( {x, y}\right) ,\;\forall x, y \in X\text{ with }d\left( {x, y}\right) \geq \theta .
\]
(14.5)
Note that a coarse Lipschitz embedding need not be injective.
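For instance, the integer-part map \( f : \mathbb{R} \rightarrow \mathbb{R} \) given by \( f\left( x\right) = \lfloor x\rfloor \) satisfies \( \left| {x - y}\right| - 1 \leq \left| {f\left( x\right) - f\left( y\right) }\right| \leq \left| {x - y}\right| + 1 \), hence for \( \left| {x - y}\right| \geq 2 \)
\[
\frac{1}{2}\left| {x - y}\right| \leq \left| {f\left( x\right) - f\left( y\right) }\right| \leq \frac{3}{2}\left| {x - y}\right| ,
\]
so (14.5) holds with \( {c}_{1} = 1/2,{c}_{2} = 3/2 \), and \( \theta = 2 \), although \( f \) is neither injective nor continuous.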
Coarse Lipschitz embeddings are the large-scale analogue of Lipschitz embed-dings. Of course, this notion is of interest only for unbounded metric spaces.
Definition 14.1.11. A metric space \( \left( {X, d}\right) \) is said to be bounded if there exists \( r > 0 \) such that \( d\left( {x, y}\right) \leq r \) for all \( x, y \in X \) . Otherwise, it is called unbounded.
A coarse Lipschitz embedding does not observe the fine structure of a metric space in a neighborhood of a point, since it need not be continuous; it captures only the macroscopic structure o |
1112_(GTM267)Quantum Theory for Mathematicians | Definition 23.15 |
Definition 23.15 For any \( z \in N \), a subspace \( P \) of \( {T}_{z}N \) is said to be Lagrangian if \( \dim P = n \) and \( \omega \left( {X, Y}\right) = 0 \) for all \( X, Y \in P \) .
Definition 23.16 A polarization of a symplectic manifold \( N \) is a choice at each point \( z \in N \) of a Lagrangian subspace \( {P}_{z} \subset {T}_{z}^{\mathbb{C}}N \), satisfying the following two conditions.
1. If two complex vector fields \( X \) and \( Y \) lie in \( {P}_{z} \) at each point \( z \), then so does \( \left\lbrack {X, Y}\right\rbrack \) .
2. The dimension of \( {P}_{z} \cap \overline{{P}_{z}} \) is constant.
The first condition is called integrability, and we have motivated this condition in the discussion preceding the definition. The second condition is a technical one that prevents problems with certain constructions, such as the pairing map. (Although, in practice, one sometimes needs to work with "polarizations" in which the second condition is violated, extra care is needed in such cases.)
There is one small inaccuracy in our discussion of polarizations: For purely conventional reasons, the quantum Hilbert space is defined as the space of sections that are covariantly constant in the direction of \( \bar{P} \), rather than \( P \) . Thus, \( P \) should really be the complex conjugate of the space of directions in which the sections are constant. This convention, however, makes no difference to the definition of a polarization, since if \( P \) satisfies the conditions of Definition 23.16, so does \( \bar{P} \) .
Example 23.17 If \( M \) is any smooth manifold, let \( N = {T}^{ * }M \) be the cotangent bundle of \( M \), equipped with the canonical 2-form \( \omega \) (Example 21.2). For each \( z \in {T}^{ * }M \), let \( {P}_{z} \) be the complexification of the tangent space to the fiber \( {T}_{z}^{ * }M \) . Then \( P \) is a polarization on \( {T}^{ * }M \), called the vertical polarization.
Proof. If \( \left\{ {x}_{j}\right\} \) is any local coordinate system on \( M \), let \( \left\{ {{x}_{j},{p}_{j}}\right\} \) be the associated local coordinate system on \( {T}^{ * }M \) . The canonical 2 -form is given by \( \omega = d{p}_{j} \land d{x}_{j} \) . At each point \( z \in {T}^{ * }M \), the vertical subspace \( {P}_{z} \) is spanned by the vectors \( \partial /\partial {p}_{j} \) . Since \( \omega \left( {\partial /\partial {p}_{j},\partial /\partial {p}_{k}}\right) = 0 \), we see that \( {P}_{z} \) is Lagrangian. Furthermore, \( {P}_{z} = {\bar{P}}_{z} \) at every point, and so \( \dim {P}_{z} \cap \overline{{P}_{z}} \) has the constant value \( n = \dim M \) . Finally, the integrability of \( P \) follows by computing the commutator of two vector fields of the form \( {f}_{j}\left( {x, p}\right) \partial /\partial {p}_{j} \) , which will again be a linear combination of the \( \partial /\partial {p}_{j} \) ’s. Integrability also follows from the easy direction of the Frobenius theorem, since the fibers of \( {T}^{ * }M \) are integral submanifolds for \( P \) . ∎
We may identify two special classes of polarizations, those that are purely real (i.e., \( \overline{{P}_{z}} = {P}_{z} \) for all \( z \in N \) ) and those that are purely complex (i.e., \( {P}_{z} \cap \overline{{P}_{z}} = \{ 0\} \) for all \( z \in N) \) . The vertical polarization, for example, is purely real.
If \( P \) is purely real, the integrability of \( P \) implies, by the Frobenius theorem, that every point in \( N \) is contained in a unique submanifold \( R \) that is maximal in the class of connected integral submanifolds for \( P \) . [An integral submanifold \( R \) for \( P \) is a submanifold for which \( {T}_{z}^{\mathbb{C}}\left( R\right) = {P}_{z} \) for all \( z \in R \) .] We will refer to the maximal connected integral submanifolds of a purely real polarization as the leaves of the polarization.
In general, the leaves may not be embedded submanifolds of \( N \) . Suppose, for example, that \( N = {S}^{1} \times {S}^{1} \), with \( \omega = {d\theta } \land {d\phi } \), where \( \theta \) and \( \phi \) are angular coordinates on the two copies of \( {S}^{1} \) . Then the tangent space to \( N \) at any point may be identified with \( {\mathbb{R}}^{2} \) by means of the basis \( \{ \partial /\partial \theta ,\partial /\partial \phi \} \) . We may define a polarization \( P \) on \( N \) by defining \( {P}_{z} \) to be the span of the
vector
\[
\frac{\partial }{\partial \theta } + a\frac{\partial }{\partial \phi }
\]
for some fixed irrational number \( a \) . Each leaf of \( P \) is then a set of the form
\[
\left\{ {\left. {\left( {{e}^{i{\theta }_{0}}{e}^{it},{e}^{iat}}\right) \in {S}^{1} \times {S}^{1}}\right| \;t \in \mathbb{R}}\right\} ,
\]
for some \( {\theta }_{0} \), which is an "irrational line" in \( {S}^{1} \times {S}^{1} \) . Each leaf is then dense in \( {S}^{1} \times {S}^{1} \) and, thus, not embedded. We will need to avoid such pathological examples if we hope to successfully carry out the program of geometric quantization with respect to a real polarization. Much more information about the structure of real polarizations may be found in Sects. 4.5-4.7 of [45].
We now consider some elementary results concerning purely complex polarizations.
Proposition 23.18 Suppose \( P \) is a purely complex polarization on \( N \) . For each \( z \in N \), let \( {J}_{z} : {T}_{z}^{\mathbb{C}}N \rightarrow {T}_{z}^{\mathbb{C}}N \) be the unique linear map such that \( {J}_{z} = \) \( {iI} \) on \( {P}_{z} \) and \( {J}_{z} = - {iI} \) on \( \overline{{P}_{z}} \) . Then \( {J}_{z} \) is real (i.e., it maps the real tangent space to itself) and \( \omega \) is \( {J}_{z} \) -invariant [i.e., \( \omega \left( {{J}_{z}{X}_{1},{J}_{z}{X}_{2}}\right) = \omega \left( {{X}_{1},{X}_{2}}\right) \) for all \( \left. {{X}_{1},{X}_{2} \in {T}_{z}^{\mathbb{C}}N}\right\rbrack \) .
Proof. Since the restriction of \( {J}_{z} \) to \( \overline{{P}_{z}} \) is the complex-conjugate of its restriction to \( {P}_{z} \), the map \( {J}_{z} \) commutes with complex conjugation and thus maps real vectors (those satisfying \( \bar{X} = X \) ) to real vectors. Meanwhile, since \( {P}_{z} \) is Lagrangian and \( \omega \) is real, \( \overline{{P}_{z}} \) is also Lagrangian. Given two vectors \( {X}_{1} = {Y}_{1} + {Z}_{1} \) and \( {X}_{2} = {Y}_{2} + {Z}_{2} \), with \( {Y}_{j} \in {P}_{z} \) and \( {Z}_{j} \in \overline{{P}_{z}} \), we compute that
\[
\omega \left( {{J}_{z}{X}_{1},{J}_{z}{X}_{2}}\right)
\]
\[
= \omega \left( {i{Y}_{1}, i{Y}_{2}}\right) + \omega \left( {i{Y}_{1}, - i{Z}_{2}}\right) + \omega \left( {-i{Z}_{1}, i{Y}_{2}}\right) + \omega \left( {-i{Z}_{1}, - i{Z}_{2}}\right)
\]
\[
= \omega \left( {{Y}_{1},{Z}_{2}}\right) + \omega \left( {{Z}_{1},{Y}_{2}}\right)
\]
A similar calculation gives the same value for \( \omega \left( {{X}_{1},{X}_{2}}\right) \), showing that \( \omega \) is \( {J}_{z} \) -invariant. ∎
A complex structure on a \( {2n} \) -dimensional manifold \( N \) is a collection of "holomorphic" coordinate systems that cover \( N \) and such that the transition maps between coordinate systems are holomorphic as maps between open sets in \( {\mathbb{R}}^{2n} \cong {\mathbb{C}}^{n} \) . At each point \( z \in N \), there is a linear map \( {J}_{z} : {T}_{z}N \rightarrow {T}_{z}N \) defined by the expression
\[
{J}_{z}\left( \frac{\partial }{\partial {x}_{j}}\right) = \frac{\partial }{\partial {y}_{j}};\;{J}_{z}\left( \frac{\partial }{\partial {y}_{j}}\right) = - \frac{\partial }{\partial {x}_{j}},
\]
where the \( {x}_{j} \) ’s and \( {y}_{j} \) ’s are the real and imaginary parts of holomorphic coordinates. This map is independent of the choice of holomorphic coordinates and satisfies \( {J}_{z}^{2} = - I \) . At each point \( z \in N \), the complexified tangent space \( {T}_{z}^{\mathbb{C}}N \) can be decomposed into eigenspaces for \( {J}_{z} \) with eigenvalues \( i \) and \( - i \) ; these are called the \( \left( {1,0}\right) \) - and \( \left( {0,1}\right) \) -tangent spaces, respectively.
Meanwhile, if \( N \) is any \( {2n} \) -dimensional manifold and \( J \) is a smoothly varying family of linear maps on each tangent space satisfying \( {J}_{z}^{2} = - I \) for all \( z \), then \( J \) is called an almost-complex structure. Given an almost complex structure, we can divide the complexified tangent space into \( \pm i \) eigenspaces for \( J \) . The Newlander-Nirenberg theorem asserts that if the family of \( + i \) eigenspaces is integrable (in the sense of Point 1 of Definition 23.16), then there exists a unique complex structure on \( N \) for which these are the \( \left( {1,0}\right) \) - tangent spaces.
A purely complex polarization \( P \) gives rise to a complex structure on \( N \) , as follows. By Proposition 23.18 and the Newlander-Nirenberg theorem, there is a unique complex structure on \( N \) for which \( {P}_{z} \) is the \( \left( {1,0}\right) \) -tangent space, for all \( z \in N \) .
Now, we have already seen in the \( {\mathbb{R}}^{2n} \) case that some purely complex polarizations behave better than others. [Compare (22.11) to (22.13)]. The geometric condition that characterizes the "good" polarizations is the following.
Definition 23.19 For any purely complex polarization \( P \), let \( J \) be the unique almost-complex structure on \( N \) such that \( {J}_{z} = {iI} \) on \( {P}_{z} \) and \( {J}_{z} = \) -iI on \( \overline{{P}_{z}} \) . We say that \( P \) is a Kähler polarization if the bilinear form
\[
g\left( {X, Y}\right) \mathrel{\text{:=}} \omega \left( {X,{J}_{z}Y}\right)
\]
(23.10)
is positive definite for each \( z \in N \) .
For any purely complex polarization, the bilinear form \( g \) in (23.10) is symmetric, as the reader may easily verify using the \( {J}_{z} \) -invariance of \( \omega \) .
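Indeed, writing \( J = {J}_{z} \) and using the \( J \) -invariance and antisymmetry of \( \omega \) together with \( {J}^{2} = - I \),
\[
g\left( {Y, X}\right) = \omega \left( {Y,{JX}}\right) = \omega \left( {{JY},{J}^{2}X}\right) = - \omega \left( {{JY}, X}\right) = \omega \left( {X,{JY}}\right) = g\left( {X, Y}\right) .
\]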
Suppose, for example, that we identify \( {\mathbb{R}}^{2} \) with \( \mathbb{C} \) by the map \( z = x - {i\ |
1094_(GTM250)Modern Fourier Analysis | Definition 1.4.1 |
Definition 1.4.1. Let \( 0 \leq \gamma < 1 \) . A function \( f \) on \( {\mathbf{R}}^{n} \) is said to be Lipschitz of order \( \gamma \) if it is continuous and bounded, and satisfies (1.4.1). In this case, we set
\[
\parallel f{\parallel }_{{\Lambda }_{\gamma }\left( {\mathbf{R}}^{n}\right) } = \parallel f{\parallel }_{{L}^{\infty }} + \mathop{\sup }\limits_{{x \in {\mathbf{R}}^{n}}}\mathop{\sup }\limits_{{h \in {\mathbf{R}}^{n}\smallsetminus \{ 0\} }}\frac{\left| f\left( x + h\right) - f\left( x\right) \right| }{{\left| h\right| }^{\gamma }}
\]
and we define the space
\[
{\Lambda }_{\gamma }\left( {\mathbf{R}}^{n}\right) = \left\{ {f : {\mathbf{R}}^{n} \rightarrow \mathbf{C} : \text{ continuous and }\parallel f{\parallel }_{{\Lambda }_{\gamma }\left( {\mathbf{R}}^{n}\right) } < \infty }\right\} .
\]
We call \( {\Lambda }_{\gamma }\left( {\mathbf{R}}^{n}\right) \) an inhomogeneous Lipschitz space of order \( \gamma \) . We note that \( {\Lambda }_{0}\left( {\mathbf{R}}^{n}\right) = \) \( {L}^{\infty }\left( {\mathbf{R}}^{n}\right) \cap \mathcal{C}\left( {\mathbf{R}}^{n}\right) \), where \( \mathcal{C}\left( {\mathbf{R}}^{n}\right) \) is the space of all continuous functions on \( {\mathbf{R}}^{n} \), and the \( \parallel \cdot {\parallel }_{{\Lambda }_{0}} \) norm is comparable with the \( {L}^{\infty } \) norm; see Exercise 1.4.2.
Obviously, only constants satisfy
\[
\mathop{\sup }\limits_{{x \in {\mathbf{R}}^{n}}}\mathop{\sup }\limits_{{h \in {\mathbf{R}}^{n}\smallsetminus \{ 0\} }}{\left| h\right| }^{-\gamma }\left| {f\left( {x + h}\right) - f\left( x\right) }\right| < \infty
\]
for \( \gamma > 1 \) ; indeed, if this quantity were finite, say equal to \( C \), then splitting the increment into \( N \) equal steps would give \( \left| {f\left( {x + h}\right) - f\left( x\right) }\right| \leq {NC}{\left( \left| h\right| /N\right) }^{\gamma } = C{\left| h\right| }^{\gamma }{N}^{1 - \gamma } \rightarrow 0 \) as \( N \rightarrow \infty \), forcing \( f \) to be constant. Hence the preceding definition is not applicable in this case. To extend Definition 1.4.1 to indices \( \gamma \geq 1 \), for \( h \in {\mathbf{R}}^{n} \) we define the difference operator \( {D}_{h} \) by setting
\[
{D}_{h}\left( f\right) \left( x\right) = f\left( {x + h}\right) - f\left( x\right)
\]
for a continuous function \( f : {\mathbf{R}}^{n} \rightarrow \mathbf{C} \) . One easily verifies that
\[
{D}_{h}^{2}\left( f\right) \left( x\right) = {D}_{h}\left( {{D}_{h}f}\right) \left( x\right) = f\left( {x + {2h}}\right) - {2f}\left( {x + h}\right) + f\left( x\right) ,
\]
\[
{D}_{h}^{3}\left( f\right) \left( x\right) = {D}_{h}\left( {{D}_{h}^{2}f}\right) \left( x\right) = f\left( {x + {3h}}\right) - {3f}\left( {x + {2h}}\right) + {3f}\left( {x + h}\right) - f\left( x\right) ,
\]
and in general, that \( {D}_{h}^{k + 1}\left( f\right) = {D}_{h}^{k}\left( {{D}_{h}\left( f\right) }\right) \) is given by
\[
{D}_{h}^{k + 1}\left( f\right) \left( x\right) = \mathop{\sum }\limits_{{s = 0}}^{{k + 1}}{\left( -1\right) }^{k + 1 - s}\left( \begin{matrix} k + 1 \\ s \end{matrix}\right) f\left( {x + {sh}}\right)
\]
(1.4.2)
for a nonnegative integer \( k \) . See Exercise 1.4.3.
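A minimal sketch of the inductive step behind (1.4.2): if the identity holds for \( k + 1 \), then
\[
{D}_{h}^{k + 2}\left( f\right) \left( x\right) = \mathop{\sum }\limits_{{s = 0}}^{{k + 1}}{\left( -1\right) }^{k + 1 - s}\left( \begin{matrix} k + 1 \\ s \end{matrix}\right) \left\lbrack {f\left( {x + \left( {s + 1}\right) h}\right) - f\left( {x + {sh}}\right) }\right\rbrack ,
\]
and collecting the coefficient of each \( f\left( {x + {sh}}\right) \) and applying Pascal’s rule \( \left( \begin{matrix} k + 1 \\ s - 1 \end{matrix}\right) + \left( \begin{matrix} k + 1 \\ s \end{matrix}\right) = \left( \begin{matrix} k + 2 \\ s \end{matrix}\right) \) yields (1.4.2) with \( k + 1 \) replaced by \( k + 2 \) .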
Definition 1.4.2. For \( \gamma > 0 \) define
\[
\parallel f{\parallel }_{{\Lambda }_{\gamma }} = \parallel f{\parallel }_{{L}^{\infty }} + \mathop{\sup }\limits_{{x \in {\mathbf{R}}^{n}}}\mathop{\sup }\limits_{{h \in {\mathbf{R}}^{n}\smallsetminus \{ 0\} }}\frac{\left| {D}_{h}^{\left\lbrack \gamma \right\rbrack + 1}\left( f\right) \left( x\right) \right| }{{\left| h\right| }^{\gamma }},
\]
where \( \left\lbrack \gamma \right\rbrack \) denotes the integer part of \( \gamma \), and set
\[
{\Lambda }_{\gamma } = \left\{ {f : {\mathbf{R}}^{n} \rightarrow \mathbf{C}\text{ continuous : }\parallel f{\parallel }_{{\Lambda }_{\gamma }} < \infty }\right\} .
\]
We call \( {\Lambda }_{\gamma }\left( {\mathbf{R}}^{n}\right) \) an inhomogeneous Lipschitz space of order \( \gamma \in {\mathbf{R}}^{ + } \) .
We note that functions in \( {\Lambda }_{\gamma } \) and \( {\dot{\Lambda }}_{\gamma } \) are required to be continuous, since this does not necessarily follow from the definition when \( \gamma \geq 1 \) . This is because of the axiom of choice, which implies the existence of a basis \( {\left\{ {v}_{i}\right\} }_{i \in I} \) of the vector space \( \mathbf{R} \) over \( \mathbf{Q} \) . Without loss of generality we may assume that 1 is an element of the basis. Define a function \( f \) by setting \( f\left( 1\right) = 1 \) and \( f\left( {v}_{i}\right) = - 1 \) if \( {v}_{i} \neq 1 \), and extend \( f \) to \( \mathbf{R} \) by linearity. Then \( f \) is everywhere discontinuous \( {}^{1} \) but \( {D}_{h}\left( f\right) \left( x\right) = f\left( h\right) \) for all \( x, h \in \mathbf{R} \), and thus \( {D}_{h}^{k}\left( f\right) = 0 \) for all \( k \geq 2 \) .
\( {}^{1} \) If \( {v}_{i} \neq 1 \), then \( {v}_{i} \) is irrational. Let \( {q}_{k} \in \mathbf{Q} \) such that \( {q}_{k} \rightarrow {v}_{i} \) as \( k \rightarrow \infty \) . Then \( f\left( {q}_{k}\right) = {q}_{k} \rightarrow {v}_{i} \) as \( k \rightarrow \infty \) but \( f\left( {v}_{i}\right) = - 1 \neq {v}_{i} \) ; thus, \( f \) is discontinuous at \( {v}_{i} \) and by linearity everywhere else.
We now define the homogeneous Lipschitz spaces.
Definition 1.4.3. For \( \gamma > 0 \) we define
\[
\parallel f{\parallel }_{{\dot{\Lambda }}_{\gamma }} = \mathop{\sup }\limits_{{x \in {\mathbf{R}}^{n}}}\mathop{\sup }\limits_{{h \in {\mathbf{R}}^{n}\smallsetminus \{ 0\} }}\frac{\left| {D}_{h}^{\left\lbrack \gamma \right\rbrack + 1}\left( f\right) \left( x\right) \right| }{{\left| h\right| }^{\gamma }}
\]
and we let \( {\dot{\Lambda }}_{\gamma } \) be the space of all continuous functions \( f \) on \( {\mathbf{R}}^{n} \) that satisfy \( \parallel f{\parallel }_{{\dot{\Lambda }}_{\gamma }} < \infty \) . We call \( {\dot{\Lambda }}_{\gamma } \) a homogeneous Lipschitz space of order \( \gamma \) .
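For instance, when \( 0 < \gamma < 1 \), the unbounded function \( f\left( x\right) = {\left| x\right| }^{\gamma } \) belongs to \( {\dot{\Lambda }}_{\gamma }\left( {\mathbf{R}}^{n}\right) \) but not to \( {\Lambda }_{\gamma }\left( {\mathbf{R}}^{n}\right) \), since \( \left\lbrack \gamma \right\rbrack + 1 = 1 \) and
\[
\left| {{\left| x + h\right| }^{\gamma } - {\left| x\right| }^{\gamma }}\right| \leq {\left| \left| x + h\right| - \left| x\right| \right| }^{\gamma } \leq {\left| h\right| }^{\gamma },
\]
by the subadditivity of \( t \mapsto {t}^{\gamma } \) on \( \lbrack 0,\infty ) \) .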
We verify that elements of \( {\dot{\Lambda }}_{\gamma } \) have at most polynomial growth at infinity. Indeed, identity (1.4.2) implies for all \( h \in {\mathbf{R}}^{n} \)
\[
{D}_{h}^{k + 1}\left( {f - f\left( 0\right) }\right) \left( 0\right) = \mathop{\sum }\limits_{{s = 1}}^{{k + 1}}{\left( -1\right) }^{k + 1 - s}\left( \begin{matrix} k + 1 \\ s \end{matrix}\right) \left( {f\left( {sh}\right) - f\left( 0\right) }\right)
\]
and thus
\[
\left| {f\left( {\left( {k + 1}\right) h}\right) - f\left( 0\right) }\right| \leq \mathop{\sum }\limits_{{s = 1}}^{k}\left( \begin{matrix} k + 1 \\ s \end{matrix}\right) \left| {f\left( {sh}\right) - f\left( 0\right) }\right| + \parallel f{\parallel }_{{\dot{\Lambda }}_{\gamma }}{\left| h\right| }^{k + 1}
\]
\[
\leq {2}^{k + 1}\left\lbrack {\mathop{\sup }\limits_{{s \in \{ 1,\ldots, k\} }}\left| {f\left( {sh}\right) - f\left( 0\right) }\right| + \parallel f{\parallel }_{{\dot{\Lambda }}_{\gamma }}{\left| h\right| }^{k + 1}}\right\rbrack .
\]
Iterating, we obtain for all \( h \in {\mathbf{R}}^{n} \)
\[
\left| {f\left( {{\left( k + 1\right) }^{2}h}\right) - f\left( 0\right) }\right| \leq {2}^{k + 1}\left\lbrack {\mathop{\sup }\limits_{{s \in \{ 1,\ldots, k\} }}\left| {f\left( {s\left( {k + 1}\right) h}\right) - f\left( 0\right) }\right| + \parallel f{\parallel }_{{\dot{\Lambda }}_{\gamma }}{\left| h\right| }^{k + 1}}\right\rbrack
\]
\[
\leq {2}^{k + 1}\left\lbrack {{2}^{k + 1}\mathop{\sup }\limits_{{s,{s}^{\prime } \in \{ 1,\ldots, k\} }}\left| {f\left( {s{s}^{\prime }h}\right) - f\left( 0\right) }\right| + 2\parallel f{\parallel }_{{\dot{\Lambda }}_{\gamma }}{\left| h\right| }^{k + 1}}\right\rbrack
\]
\[
\leq {\left( {2}^{k + 1}\right) }^{2}\left\lbrack {\mathop{\sup }\limits_{{s \in \left\{ {1,\ldots ,{k}^{2}}\right\} }}\left| {f\left( {sh}\right) - f\left( 0\right) }\right| + \parallel f{\parallel }_{{\dot{\Lambda }}_{\gamma }}{\left| h\right| }^{k + 1}}\right\rbrack ,
\]
and thus by induction for all \( M \in {\mathbf{Z}}^{ + } \) and \( h \in {\mathbf{R}}^{n} \) we deduce
\[
\left| {f\left( {{\left( k + 1\right) }^{M}h}\right) - f\left( 0\right) }\right| \leq {\left( {2}^{k + 1}\right) }^{M}\left\lbrack {\mathop{\sup }\limits_{{s \in \left\{ {1,\ldots ,{k}^{M}}\right\} }}\left| {f\left( {sh}\right) - f\left( 0\right) }\right| + \parallel f{\parallel }_{{\dot{\Lambda }}_{\gamma }}{\left| h\right| }^{k + 1}}\right\rbrack .
\]
It follows from this that
\[
\left| {f\left( h\right) - f\left( 0\right) }\right| \leq {\left( {2}^{k + 1}\right) }^{M}\left\lbrack {\mathop{\sup }\limits_{{s \in \left\{ {1,\ldots ,{k}^{M}}\right\} }}\left| {f\left( {s{\left( k + 1\right) }^{-M}h}\right) - f\left( 0\right) }\right| + \parallel f{\parallel }_{{\dot{\Lambda }}_{\gamma }}}\right\rbrack .
\]
Given \( \left| h\right| > 1 \), there is an \( M \in {\mathbf{Z}}^{ + } \) such that \( {\left( \frac{k + 1}{k}\right) }^{M - 1} < \left| h\right| \leq {\left( \frac{k + 1}{k}\right) }^{M} \) . Then, if \( c\left( k\right) = \left( {k + 1}\right) /{\log }_{2}\left( \frac{k + 1}{k}\right) \), we have
\[
{\left( {2}^{k + 1}\right) }^{M} = {\left( \frac{k + 1}{k}\right) }^{{Mc}\left( k\right) } \leq {\left( \frac{k + 1}{k}\right) }^{c\left( k\right) }{\left| h\right| }^{c\left( k\right) }.
\]
But \( f \) is continuous, so \( \parallel f{\parallel }_{{L}^{\infty }\left( \overline{B\left( {0,1}\right) }\right) } < \infty \), and consequently for all \( \left| h\right| > 1 \) we obtain
\[
\left| {f\left( h\right) - f\left( 0\right) }\right| \leq {\left( \frac{k + 1}{k}\right) }^{c\left( k\right) }\left\lbrack {2\parallel f{\parallel }_{{L}^{\infty }\left( \overline{B\left( {0,1}\right) }\right) } + \parallel f{\parallel }_{{\dot{\Lambda }}_{\gamma }}}\right\rbrack {\left| h\right| }^{c\left( k\right) }.
\]
We conclude that functions in \( {\dot{\Lambda }}_{\gamma } \) have at most polynomial growth at infinity and they can be thought of as elements of \( {\mathcal{S}}^{\prime }\left( {\mathbf{R}}^{n}\right) \) .
Since elements of \( {\dot{\Lambda }}_{\gamma } \) can be viewed as tempered distributions, we extend the definition of \( {D}_{h}^{k}\left( u\right) \) to tempered distributions. For \( u \in {\mathcal{S}}^{\prime }\left( {\mathbf{R}}^{n}\right) \) we define another tempered distribution \( {D}_{h}^{k}\left( u\right) \) via the identity
|
109_The rising sea Foundations of Algebraic Geometry | Definition 12.41 |
Definition 12.41. A hyperbolic building is a building whose Weyl group \( \left( {W, S}\right) \) is a hyperbolic Coxeter system in the sense of Definition 10.57.
Example 12.42. Let \( \Delta \) be a hyperbolic building of type \( \left( {W, S}\right) \) . Choose a realization of \( \left( {W, S}\right) \) as a hyperbolic reflection group acting on \( {\mathbb{H}}^{n} \) with fundamental domain \( P \) . Let \( Z = P \) with its hyperbolic metric, and let \( {Z}_{s} \) for \( s \in S \) be the face of \( P \) fixed by \( s \) . In view of Example 12.14(c), the \( Z \) -realization of any apartment is the hyperbolic space \( {\mathbb{H}}^{n} \), with its canonical metric. The \( Z \) -realization \( X \) of \( \Delta \) is then a (complete) \( \operatorname{CAT}\left( {-1}\right) \) space by Proposition 12.29.
The study of hyperbolic buildings via their metric realizations is a very active area of research at the time of this writing, largely because many examples arise from the theory of Kac-Moody groups (see Remark 8.97). In fact, it is this realization itself that is often called a "hyperbolic building." See Gaboriau-Paulin [104] for an extensive study of hyperbolic buildings from this point of view, along with many examples. See also \( \left\lbrack {{45},{46},{266},{267}}\right\rbrack \) .
Example 12.43. Let \( \Delta \) be an arbitrary building, with Weyl group \( \left( {W, S}\right) \) . Let \( Z \) be the geometric realization of the flag complex of the poset of spherical subsets of \( S \) as in Example 12.2(e). There is a piecewise Euclidean metric on \( Z \) obtained by identifying each simplex with a suitable Euclidean simplex. We will explain this in detail in Section 12.3, where we will quote a theorem of Moussong, according to which the \( Z \) -realization of an apartment is always a \( \operatorname{CAT}\left( 0\right) \) space. The \( Z \) -realization of \( \Delta \) is then a (complete) \( \operatorname{CAT}\left( 0\right) \) space by Proposition 12.29; see Theorem 12.66 below. This is the Davis realization mentioned in the introduction to this chapter.
If \( \Delta \) is spherical, then the Davis realization, as a set, is the cone over the ordinary geometric realization; each apartment is the cone over a sphere and is metrically a ball. If \( \Delta \) is Euclidean, the Davis realization is the same as the usual one that we studied extensively in Chapter 11. If \( \Delta \) is hyperbolic, however, the Davis realization is not in general the same as the one in Example 12.42, even as a set. For example, suppose \( \Delta \) is a single apartment \( \sum \left( {W, S}\right) \) , where \( W = {\mathrm{{PGL}}}_{2}\left( \mathbb{Z}\right) \) . Then the realization in Example 12.42 is the hyperbolic plane with its standard metric, whereas the Davis realization is the subset of the plane obtained by "cutting off the cusps"; see Figure 12.11, where the Davis realization is the shaded part. Moreover, the metric on the Davis realization is not the hyperbolic one, but rather a piecewise Euclidean metric (which is Lipschitz equivalent to the hyperbolic metric).
Example 12.44. Let \( \Delta \) be a building of type \( \left( {W, S}\right) \), where \( W \) is hyperbolic in the sense of Gromov. Moussong has characterized such Coxeter groups \( W \) ; see Section 12.3.9 below. Moussong has also shown that the Davis realization of an apartment admits a different metric in this case, obtained by using the same set \( Z \) but giving it a piecewise hyperbolic metric instead of a piecewise-Euclidean metric. The new metric makes the \( Z \) -realization \( X \) of \( \Delta \) a (complete) \( \operatorname{CAT}\left( {-1}\right) \) space; see Section 12.3.9 below.
Example 12.45. Our final example is actually a nonexample, i.e., it is a metric realization that can be defined for an arbitrary Coxeter group \( W \) but that does not fit into our framework. Namely, Niblo and Reeves [180] have constructed a finite-dimensional \( \operatorname{CAT}\left( 0\right) \) cubical complex on which \( W \) acts, where the cubes are metrically equivalent to the standard cube. (See Section A.2.1 for the notion of "cubical complex.") The Niblo-Reeves complex has had useful applications to the study of Coxeter groups; see \( \left\lbrack {{68},{70}}\right\rbrack \), for example. But the construction works only for Coxeter groups and does not lead to realizations of arbitrary buildings.
The rest of this chapter is devoted to filling in some of the missing details in the construction of the Davis realization.
## 12.3 The Dual Coxeter Complex
In Example 12.43 above we described the choice of \( \left( {Z,{\left\{ {Z}_{s}\right\} }_{s \in S}}\right) \) that will give the Davis realization of a building as soon as we explain the metric on \( Z \) . It would be possible to do that fairly quickly as in [88], but we will instead give a more long-winded treatment that reveals some interesting combinatorial geometry, beginning with the case of a single apartment. Readers who just want to get the main ideas may wish to concentrate on the introductory remarks and examples (Sections 12.3.1 and 12.3.2) and ignore the rigorous construction (Section 12.3.3).
## 12.3.1 Introduction
Given a Coxeter system \( \left( {W, S}\right) \), recall that the Coxeter complex \( \sum = \sum \left( {W, S}\right) \) is the poset of standard cosets in \( W \), ordered by reverse inclusion. It is a simplicial complex on which \( W \) acts, and the stabilizers of the nonempty simplices are the proper parabolic subgroups of \( W \) . Now when one is using geometric methods to study a group, it is often convenient to have an action with finite stabilizers. This leads one to try to construct a modified Coxeter complex, in which the stabilizers are the finite parabolic subgroups.
A natural starting point is the subposet \( {\sum }_{f} = {\sum }_{f}\left( {W, S}\right) \) of \( \sum \) consisting of the finite standard cosets. Note, however, that \( {\sum }_{f} \) is not a subcomplex of \( \sum \) in general, i.e., it is not closed under passage to faces. In fact, it has the dual property that if \( A < B \) in \( \sum \) and \( A \in {\sum }_{f} \), then \( B \in {\sum }_{f} \) . So perhaps we should reverse the ordering, i.e., we should consider the poset of finite standard cosets ordered by inclusion instead of reverse inclusion. This poset does indeed turn out to be the poset of cells of a cell complex (generally not simplicial), which we will call the dual Coxeter complex of \( \left( {W, S}\right) \) and denote by \( {\sum }_{d} \) or \( {\sum }_{d}\left( {W, S}\right) \) .
The upshot of all this will be the following. Given \( \left( {W, S}\right) \), there are two complexes on which \( W \) acts. The first is the ordinary Coxeter complex \( \sum \) , which is a simplicial complex whose nonempty simplices correspond to the proper standard cosets in \( W \), ordered by reverse inclusion. The stabilizers of the nonempty simplices are the proper parabolic subgroups of \( W \) . The second is the dual Coxeter complex \( {\sum }_{d} \), which is a regular cell complex whose (nonempty) cells correspond to the finite standard cosets in \( W \), ordered by inclusion. The stabilizers are the finite parabolic subgroups of \( W \) .
Most of this section will be devoted to a combinatorial discussion of \( {\sum }_{d} \) , independent of Section 12.1. At the end, however, we will make the connection with metric realizations. We proceed now to the details, starting with some motivating examples. The reader may find it helpful to glance first at Section A. 2 for terminology regarding cell complexes.
## 12.3.2 Examples
Example 12.46. Let \( W \) be the rank-1 Coxeter group \( \langle s\rangle \), thought of as a reflection group acting on \( \mathbb{R} \) . Its ordinary Coxeter complex \( \sum \) is combinatorially the 0 -sphere, with two 0 -simplices and the empty simplex. But it is more convenient for us to identify \( \sum \) with the poset of conical cells (two open half-lines, and their common face \( \{ 0\} \) ). If we draw the chamber graph on top of a picture of \( \mathbb{R} \), we see a regular cell complex with one 1-cell and two 0-cells, which are faces of the 1-cell (Figure 12.6). This will be the dual Coxeter complex \( {\sum }_{d} \) . Note that the poset of (nonempty) cells is isomorphic to \( \sum \), with the order
![85b011f4-34bf-48b4-8882-cd79e6f4beb0_631_0.jpg](images/85b011f4-34bf-48b4-8882-cd79e6f4beb0_631_0.jpg)
Fig. 12.6. The dual Coxeter complex of type \( {\mathrm{A}}_{1} \) .
reversed. Note also that the action of \( s \) flips the 1-cell, interchanging its two vertices.
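In terms of the poset description above, the finite standard cosets of \( W = \langle s\rangle \) are the singletons \( \{ 1\} \) and \( \{ s\} \) together with \( \langle s\rangle = \{ 1, s\} \) itself; the two singletons give the 0-cells, the coset \( \langle s\rangle \) gives the 1-cell, and the inclusions \( \{ 1\} ,\{ s\} \subset \langle s\rangle \) record the face relations.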
Example 12.47. Let \( W \) be the dihedral group of order 6, i.e., the Coxeter group of type \( {\mathrm{A}}_{2} \), viewed as a reflection group acting on \( {\mathbb{R}}^{2} \) . Its ordinary Coxeter complex \( \sum \) is a circle decomposed into six arcs by the three reflecting hyperplanes. Equivalently, it is the hexagon pictured in Figure 12.7. It has 13 simplices, counting the empty simplex: six of rank 2, six of rank 1, and one of rank 0 . The dual Coxeter complex \( {\sum }_{d} \) will turn out to be the solid hexagon shown in Figure 12.8, which appears naturally if one draws the chamber graph of \( \sum \) on top of the picture of \( {\mathbb{R}}^{2} \) and then fills it in. It is a regular cell complex with six 0-cells, six 1-cells, and one 2-cell. Note that the stabilizer of a 1-cell is a rank-1 Coxeter group, acting on that edge as in Example 12.46. The stabilizer of a 0-cell, however, is trivial. In fact, the 0-cells correspond to the chambers of \( \sum \), and \( W \) permutes these simply transitively.
This example generalizes in an obvious way to the dihedral group of order \( {2m} \), in which case the dual Coxeter complex is a solid \( {2m} \) -gon, with a \( W \) -action that permutes the vertices simply transitively.
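Concretely, for the dihedral group \( W = \langle s, t\rangle \) of order \( {2m} \) the finite standard cosets are the \( {2m} \) singletons, the \( m \) cosets of \( \langle s\rangle \), the \( m \) cosets of \( \langle t\rangle \), and \( W \) itself; these account for the \( {2m} \) vertices, the \( {2m} \) edges, and the single 2-cell of the solid \( {2m} \) -gon.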
![85b011f4-34bf- |
113_Topological Groups | Definition 12.12 |
Definition 12.12. Let \( \mathfrak{A} \) and \( \mathfrak{B} \) be \( {\mathrm{{CA}}}_{\alpha } \) ’s, with notation as in 12.9. A homomorphism from \( \mathfrak{A} \) into \( \mathfrak{B} \) is a function \( f \) mapping \( A \) into \( B \) such that for all \( x, y \in A \) and \( \kappa ,\lambda < \alpha \) ,
(i) \( f\left( {x + y}\right) = {fx} + {}^{\prime }{fy} \) ;
(ii) \( f\left( {x \cdot y}\right) = {fx}{ \cdot }^{\prime }{fy} \) ;
(iii) \( f\left( {-x}\right) = - {}^{\prime }{fx} \) ;
(iv) \( {f0} = {0}^{\prime } \) ;
(v) \( {f1} = {1}^{\prime } \) ;
(vi) \( f{\mathrm{c}}_{\kappa }x = {\mathrm{c}}_{\kappa }^{\prime }{fx} \) ;
(vii) \( f{\mathrm{\;d}}_{\kappa \lambda } = {\mathrm{d}}_{\kappa \lambda }^{\prime } \) .
The terms homomorphism onto, isomorphism into, and isomorphism onto have the obvious meaning. We write \( \mathfrak{A} \cong \mathfrak{B} \) if there is an isomorphism of \( \mathfrak{A} \) into \( \mathfrak{B} \) .
Proposition 12.13. If \( f \) is a homomorphism from \( \mathfrak{A} \) into \( \mathfrak{B} \) and \( x \in A \), then \( {\Delta fx} \subseteq {\Delta x} \) .
Definition 12.14. Let \( \mathfrak{A} = {\left\langle A,+,\cdot ,-,0,1,{\mathrm{c}}_{\kappa },{\mathrm{d}}_{\kappa \lambda }\right\rangle }_{\kappa ,\lambda < \alpha } \) be a \( {\mathrm{{CA}}}_{\alpha } \) . An ideal of \( \mathfrak{A} \) is an ideal of \( \langle A, + , \cdot , - ,0,1\rangle \) such that for all \( \kappa < \alpha \), if \( x \in I \) then \( {\mathrm{c}}_{\kappa }x \in I \) .
Proposition 12.15. Let \( I \) be an ideal in a \( {\mathrm{{CA}}}_{\alpha }\mathfrak{A} \), and let \( R = \{ \left( {x, y}\right) : x \cdot - y + \) \( - x \cdot y \in I\} \) (cf. 9.16). Then for any \( \kappa < \alpha \) and \( x, y \in A \), if \( {xRy} \) then \( {\mathrm{c}}_{\kappa }{xR}{\mathrm{c}}_{\kappa }y \) .
Proof. Assume that \( {xRy} \) and \( \kappa < \alpha \) . Thus \( x \cdot - y + - x \cdot y \in I \) . Now
\[
{\mathrm{c}}_{\kappa }x = {\mathrm{c}}_{\kappa }\left( {x \cdot - y + x \cdot y}\right) = {\mathrm{c}}_{\kappa }\left( {x \cdot - y}\right) + {\mathrm{c}}_{\kappa }\left( {x \cdot y}\right) \leq {\mathrm{c}}_{\kappa }\left( {x \cdot - y}\right) + {\mathrm{c}}_{\kappa }y
\]
hence
\[
{\mathrm{c}}_{\kappa }x \cdot - {\mathrm{c}}_{\kappa }y \leq {\mathrm{c}}_{\kappa }\left( {x \cdot - y}\right) \leq {\mathrm{c}}_{\kappa }\left( {x \cdot - y + y \cdot - x}\right) \in I.
\]
Thus \( {\mathrm{c}}_{\kappa }x \cdot - {\mathrm{c}}_{\kappa }y \in I \), and by symmetry, \( {\mathrm{c}}_{\kappa }y \cdot - {\mathrm{c}}_{\kappa }x \in I \), so \( {\mathrm{c}}_{\kappa }x \cdot - {\mathrm{c}}_{\kappa }y + - {\mathrm{c}}_{\kappa }x \) . \( {\mathrm{c}}_{\kappa }y \in I \) and \( {\mathrm{c}}_{\kappa }{xR}{\mathrm{c}}_{\kappa }y \) .
Along with 9.16 and 9.17, Proposition 12.15 justifies the following definition:
Definition 12.16. Let \( I \) be an ideal in a \( {\mathrm{{CA}}}_{\alpha }\mathfrak{A} = {\left\langle A,+,\cdot ,-,0,1,{\mathrm{c}}_{\kappa },{\mathrm{d}}_{\kappa \lambda }\right\rangle }_{\kappa ,\lambda < \alpha } \) .
We define \( \mathfrak{A}/I = {\left\langle A/I,{ + }^{\prime },{ \cdot }^{\prime },{ - }^{\prime },{0}^{\prime },{1}^{\prime },{\mathrm{c}}_{\kappa }^{\prime },{\mathrm{d}}_{\kappa \lambda }^{\prime }\right\rangle }_{\kappa ,\lambda < \alpha } \), where
\[
\langle A, + , \cdot , - ,0,1\rangle /I = \left\langle {A/I,{ + }^{\prime },{ \cdot }^{\prime },{ - }^{\prime },{0}^{\prime },{1}^{\prime }}\right\rangle
\]
in accordance with \( {9.17},{\mathrm{c}}_{\kappa }^{\prime }\left\lbrack a\right\rbrack = \left\lbrack {{\mathrm{c}}_{\kappa }a}\right\rbrack \) for all \( a \in A \) and \( \kappa < \alpha \), and \( {\mathrm{d}}_{\kappa \lambda }^{\prime } = \) \( \left\lbrack {\mathrm{d}}_{\kappa \lambda }\right\rbrack \) for all \( \kappa ,\lambda < \alpha \) .
Proposition 12.17. If \( I \) is an ideal in a \( {\mathrm{{CA}}}_{\alpha }\mathfrak{A} \), then \( \mathfrak{A}/I \) is a \( {\mathrm{{CA}}}_{\alpha } \), and \( {I}^{ * } \) is a homomorphism from \( \mathfrak{A} \) onto \( \mathfrak{A}/I \) .
Proposition 12.18. If \( f \) is a homomorphism from a \( {\mathrm{{CA}}}_{\alpha }\mathfrak{A} \) onto a \( {\mathrm{{CA}}}_{\alpha }\mathfrak{B} \) and \( I = \{ x \in A : {fx} = 0\} \), then \( I \) is an ideal of \( \mathfrak{A} \), and \( \mathfrak{B} \cong \mathfrak{A}/I \) .
Proposition 12.19. The intersection of any nonempty family of ideals in a \( {\mathrm{{CA}}}_{\alpha } \) is an ideal.
Definition 12.20. If \( \mathfrak{A} \) is a \( {\mathrm{{CA}}}_{\alpha } \) and \( x \subseteq A \), then the ideal generated by \( X \) is the set
\[
\bigcap \{ I : X \subseteq I, I\text{ an ideal of }\mathfrak{A}\} .
\]
We can directly generalize 9.23 to give a simple expression for the members of the ideal generated by a set:
Proposition 12.21. If \( X \subseteq A \), where \( \mathfrak{A} \) is a \( {\mathrm{{CA}}}_{\alpha } \), then the ideal generated by \( X \) is the collection of all \( y \in A \) such that there exist \( m, n \in \omega \) and \( x \in {}^{m}X,\kappa \in {}^{n}\alpha \) with \( y \leq {\mathrm{c}}_{\kappa 0}\cdots {\mathrm{c}}_{\kappa \left( {n - 1}\right) }\left( {{x}_{0} + \cdots + {x}_{m - 1}}\right) . \)
Proof. Let \( I \) be the collection of all \( y \in A \) such that such \( m, n, x,\kappa \) exist. Clearly \( I \) is contained in the ideal generated by \( X \) . Thus it is enough to show that \( X \subseteq I \) and \( I \) is an ideal. Taking \( m = 1 \) and \( n = 0 \) we easily see that \( X \subseteq I \) . Taking \( m = n = 0 \), we see that \( 0 \in I \) and hence \( I \neq 0 \) . If \( z \leq y \in I \) , obviously also \( z \in I \) . If \( y \in I \), with \( m, n, x,\kappa \) as above, and if \( \lambda < \alpha \), then
\[
{\mathrm{c}}_{\lambda }y \leq {\mathrm{c}}_{\lambda }{\mathrm{c}}_{\kappa 0}\cdots {\mathrm{c}}_{\kappa \left( {n - 1}\right) }\left( {{x}_{0} + \cdots + {x}_{m - 1}}\right) ,
\]
so \( {\mathrm{c}}_{\lambda }y \in I \) . Finally, suppose \( y,{y}^{\prime } \in I \), with \( m, n, x,\kappa \) and \( {m}^{\prime },{n}^{\prime },{x}^{\prime },{\kappa }^{\prime } \) satisfying the corresponding conditions. Then
\[
y + {y}^{\prime } \leq {\mathrm{c}}_{\kappa 0}\cdots {\mathrm{c}}_{\kappa \left( {n - 1}\right) }\left( {{x}_{0} + \cdots + {x}_{m - 1}}\right) + {\mathrm{c}}_{{\kappa }^{\prime }0}\cdots {\mathrm{c}}_{{\kappa }^{\prime }\left( {{n}^{\prime } - 1}\right) }\left( {{x}_{0}^{\prime } + \cdots + {x}_{{m}^{\prime } - 1}^{\prime }}\right)
\]
\[
\leq {\mathrm{c}}_{\kappa 0}\cdots {\mathrm{c}}_{\kappa \left( {n - 1}\right) }{\mathrm{c}}_{{\kappa }^{\prime }0}\cdots {\mathrm{c}}_{{\kappa }^{\prime }\left( {{n}^{\prime } - 1}\right) }\left( {{x}_{0} + \cdots + {x}_{m - 1} + {x}_{0}^{\prime } + \cdots + {x}_{{m}^{\prime } - 1}^{\prime }}\right) ,
\]
so \( y + {y}^{\prime } \in I \) also.
We shall not develop the algebraic theory of \( {\mathrm{{CA}}}_{\alpha } \) ’s any further. Instead, we now turn to the relationships between first-order logic and cylindric algebras. In this regard the following definition is fundamental.
Definition 12.22. For \( \mathcal{L} \) a first-order language and \( \Gamma \) a set of sentences in \( \mathcal{L} \) we set
\[
{ \equiv }_{\Gamma }^{\mathcal{L}} = \{ \left( {\varphi ,\psi }\right) : \varphi \text{ and }\psi \text{ are formulas of }\mathcal{L}\text{ and }\Gamma \vDash \varphi \leftrightarrow \psi \} .
\]
Furthermore, we let \( {\mathfrak{M}}_{\Gamma }^{\mathcal{L}} = {\left\langle {\mathrm{{Fmla}}}_{\mathcal{L}}/{ \equiv }_{\Gamma }^{\mathcal{L}},+,\cdot ,-,0,1,{\mathrm{c}}_{\kappa },{\mathrm{d}}_{\kappa \lambda }\right\rangle }_{\kappa ,\lambda < \omega } \) , where for any \( \varphi ,\psi \in {\operatorname{Fmla}}_{\mathcal{L}} \) and any \( \kappa ,\lambda \in \omega \) ,
\[
\left\lbrack \varphi \right\rbrack + \left\lbrack \psi \right\rbrack = \left\lbrack {\varphi \vee \psi }\right\rbrack
\]
\[
\left\lbrack \varphi \right\rbrack \cdot \left\lbrack \psi \right\rbrack = \left\lbrack {\varphi \land \psi }\right\rbrack ;
\]
\[
- \left\lbrack \varphi \right\rbrack = \left\lbrack {\neg \varphi }\right\rbrack
\]
\[
0 = \left\lbrack {\neg {v}_{0} = {v}_{0}}\right\rbrack
\]
\[
1 = \left\lbrack {{v}_{0} = {v}_{0}}\right\rbrack
\]
\[
{\mathrm{c}}_{\kappa }\left\lbrack \varphi \right\rbrack = \left\lbrack {\exists {\mathrm{v}}_{\kappa }\varphi }\right\rbrack
\]
\[
{\mathrm{d}}_{\kappa \lambda } = \left\lbrack {{\mathrm{v}}_{\kappa } = {\mathrm{v}}_{\lambda }}\right\rbrack .
\]
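For instance, \( \exists {\mathrm{v}}_{0}\left( {{\mathrm{v}}_{0} = {\mathrm{v}}_{1}}\right) \) holds under every assignment in every structure, so \( \Gamma \vDash \exists {\mathrm{v}}_{0}\left( {{\mathrm{v}}_{0} = {\mathrm{v}}_{1}}\right) \leftrightarrow {\mathrm{v}}_{0} = {\mathrm{v}}_{0} \) for any \( \Gamma \), and hence \( {\mathrm{c}}_{0}{\mathrm{\;d}}_{01} = \left\lbrack {\exists {\mathrm{v}}_{0}\left( {{\mathrm{v}}_{0} = {\mathrm{v}}_{1}}\right) }\right\rbrack = 1 \) in \( {\mathfrak{M}}_{\Gamma }^{\mathcal{L}} \) .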
This definition is easily justified (see 9.54-9.56). Routine checking gives:
Proposition 12.23. \( {\mathfrak{M}}_{\Gamma }^{\mathcal{L}} \) is a \( {\mathrm{{CA}}}_{\omega } \) .
As in the case of sentential logic and Boolean algebras, there is a natural correspondence between notions of first-order logic and notions of cylindric algebras. We give two instances of this correspondence. The first one indicates the close relationship between set algebras and models of a theory:
Proposition 12.24. Let \( \Gamma \) be a set of sentences in a first-order language \( \mathcal{L} \), and let \( \mathfrak{A} \) be a model of \( \Gamma \) . Then \( \left\{ {{\varphi }^{\mathfrak{A}} : \varphi \in {\mathrm{{Fmla}}}_{\mathcal{L}}}\right\} \) is an \( \omega \) -dimensional field of sets. Let \( \mathfrak{B} \) be the associated cylindric set algebra. Then the function \( f \) such that \( f\left\lbrack \varphi \right\rbrack = {\varphi }^{\mathfrak{A}} \) for each \( \varphi \in {\mathrm{{Fmla}}}_{\mathcal{L}} \) is a homomorphism of \( {\mathfrak{M}}_{\Gamma }^{\mathcal{L}} \) onto \( \mathfrak{B} \) ( \( f \) is easily seen to be well defined).
This proposition can be routinely checked. The following proposition is established just like 9.59.
Proposition 12.25. Let \( I \) be an ideal in a \( {\mathrm{{CA}}}_{\omega }\mathfrak{M} \), and set \( \Delta = \left\{ {\varphi \in {\operatorname{Sent}}_{\mathcal{L}}}\right. \) : \( - \left. {\left\lbrack \varphi \right\rbrack \in I}\righ |
1048_(GTM209)A Short Course on Spectral Theory | Definition 1.6.1 |
Definition 1.6.1. For every element \( x \in A \), the spectrum of \( x \) is defined as the set
\[
\sigma \left( x\right) = \left\{ {\lambda \in \mathbb{C} : x - \lambda \notin {A}^{-1}}\right\}
\]
We will develop the basic properties of the spectrum, the first being that it is always compact.
Proposition 1.6.2. For every \( x \in A,\sigma \left( x\right) \) is a closed subset of the disk \( \{ z \in \mathbb{C} : \left| z\right| \leq \parallel x\parallel \} \)
Proof. The complement of the spectrum is given by
\[
\mathbb{C} \smallsetminus \sigma \left( x\right) = \left\{ {\lambda \in \mathbb{C} : x - \lambda \in {A}^{-1}}\right\} .
\]
Since \( {A}^{-1} \) is open and the map \( \lambda \in \mathbb{C} \mapsto x - \lambda \in A \) is continuous, the complement of \( \sigma \left( x\right) \) must be open.
To prove the second assertion, we will show that no complex number \( \lambda \) with \( \left| \lambda \right| > \parallel x\parallel \) can belong to \( \sigma \left( x\right) \) . Indeed, for such a \( \lambda \) the formula
\[
x - \lambda = \left( {-\lambda }\right) \left( {1 - {\lambda }^{-1}x}\right)
\]
together with the fact that \( \begin{Vmatrix}{{\lambda }^{-1}x}\end{Vmatrix} < 1 \), implies that \( x - \lambda \) is invertible.
We now prove a fundamental result of Gelfand.
THEOREM 1.6.3. \( \sigma \left( x\right) \neq \varnothing \) for every \( x \in A \) .
Proof. The idea is to show that if \( \sigma \left( x\right) = \varnothing \), the \( A \) -valued function \( f\left( \lambda \right) = {\left( x - \lambda \right) }^{-1} \) is a bounded entire function that tends to zero as \( \lambda \rightarrow \infty \) ; an appeal to Liouville's theorem yields the desired conclusion. The details are as follows.
For every \( {\lambda }_{0} \notin \sigma \left( x\right) ,{\left( x - \lambda \right) }^{-1} \) is defined for all \( \lambda \) sufficiently close to \( {\lambda }_{0} \) because \( \sigma \left( x\right) \) is closed, and we claim that
\( \left( {1.10}\right) \)
\[
\mathop{\lim }\limits_{{\lambda \rightarrow {\lambda }_{0}}}\frac{1}{\lambda - {\lambda }_{0}}\left\lbrack {{\left( x - \lambda \right) }^{-1} - {\left( x - {\lambda }_{0}\right) }^{-1}}\right\rbrack = {\left( x - {\lambda }_{0}\right) }^{-2}
\]
in the norm topology of \( A \) . Indeed, we can write
\[
{\left( x - \lambda \right) }^{-1} - {\left( x - {\lambda }_{0}\right) }^{-1} = {\left( x - \lambda \right) }^{-1}\left\lbrack {\left( {x - {\lambda }_{0}}\right) - \left( {x - \lambda }\right) }\right\rbrack {\left( x - {\lambda }_{0}\right) }^{-1}
\]
\[
= \left( {\lambda - {\lambda }_{0}}\right) {\left( x - \lambda \right) }^{-1}{\left( x - {\lambda }_{0}\right) }^{-1}.
\]
Divide by \( \lambda - {\lambda }_{0} \), and use the fact that \( {\left( x - \lambda \right) }^{-1} \rightarrow {\left( x - {\lambda }_{0}\right) }^{-1} \) as \( \lambda \rightarrow {\lambda }_{0} \) to obtain (1.10).
Contrapositively, assume that \( \sigma \left( x\right) \) is empty, and choose an arbitrary bounded linear functional \( \rho \) on \( A \) . The scalar-valued function
\[
f\left( \lambda \right) = \rho \left( {\left( x - \lambda \right) }^{-1}\right)
\]
is defined everywhere in \( \mathbb{C} \), and it is clear from (1.10) that \( f \) has a complex derivative everywhere satisfying \( {f}^{\prime }\left( \lambda \right) = \rho \left( {\left( x - \lambda \right) }^{-2}\right) \) . Thus \( f \) is an entire function.
Notice that \( f \) is bounded. To see this we need to estimate \( \begin{Vmatrix}{\left( x - \lambda \right) }^{-1}\end{Vmatrix} \) for large \( \lambda \) . Indeed, if \( \left| \lambda \right| > \parallel x\parallel \), then
\[
\begin{Vmatrix}{\left( x - \lambda \right) }^{-1}\end{Vmatrix} = \frac{1}{\left| \lambda \right| }\begin{Vmatrix}{\left( \mathbf{1} - {\lambda }^{-1}x\right) }^{-1}\end{Vmatrix}.
\]
The estimates of Theorem 1.5.2 therefore imply that
\[
\begin{Vmatrix}{\left( x - \lambda \right) }^{-1}\end{Vmatrix} \leq \frac{1}{\left| \lambda \right| \left( {1 - \parallel x\parallel /\left| \lambda \right| }\right) } = \frac{1}{\left| \lambda \right| - \parallel x\parallel },
\]
and the right side clearly tends to zero as \( \left| \lambda \right| \rightarrow \infty \) . Thus the function \( \lambda \mapsto \begin{Vmatrix}{\left( x - \lambda \right) }^{-1}\end{Vmatrix} \) vanishes at infinity. It follows that \( f \) is a bounded entire function, which, by Liouville's theorem, must be constant. The constant value is 0 because \( f \) vanishes at infinity.
We conclude that \( \rho \left( {\left( x - \lambda \right) }^{-1}\right) = 0 \) for every \( \lambda \in \mathbb{C} \) and every bounded linear functional \( \rho \) . The Hahn-Banach theorem implies that \( {\left( x - \lambda \right) }^{-1} = 0 \) for every \( \lambda \in \mathbb{C} \) . But this is absurd because \( {\left( x - \lambda \right) }^{-1} \) is invertible (and \( 1 \neq 0 \) in \( A \) ).
The following application illustrates the power of this result.
Definition 1.6.4. A division algebra (over \( \mathbb{C} \) ) is a complex associative algebra \( A \) with unit 1 such that every nonzero element in \( A \) is invertible.
Definition 1.6.5. An isomorphism of Banach algebras \( A \) and \( B \) is an isomorphism \( \theta : A \rightarrow B \) of the underlying algebraic structures that is also a topological isomorphism; thus there are positive constants \( a, b \) such that
\[
a\parallel x\parallel \leq \parallel \theta \left( x\right) \parallel \leq b\parallel x\parallel
\]
for every element \( x \in A \) .
COROLLARY 1. Any Banach division algebra is isomorphic to the one-dimensional algebra \( \mathbb{C} \) .
Proof. Define \( \theta : \mathbb{C} \rightarrow A \) by \( \theta \left( \lambda \right) = \lambda \mathbf{1} \) . \( \theta \) is clearly an isomorphism of \( \mathbb{C} \) onto the Banach subalgebra \( \mathbb{C}1 \) of \( A \) consisting of all scalar multiples of the identity, and it suffices to show that \( \theta \) is onto \( A \) . But for any element \( x \in A \) Gelfand’s theorem implies that there is a complex number \( \lambda \in \sigma \left( x\right) \) . Thus \( x - \lambda \) is not invertible. Since \( A \) is a division algebra, \( x - \lambda \) must be 0, hence \( x = \theta \left( \lambda \right) \), as asserted.
There are many division algebras in mathematics, especially commutative ones. For example, there is the algebra of all rational functions \( r\left( z\right) = p\left( z\right) /q\left( z\right) \) of one complex variable, where \( p \) and \( q \) are polynomials with \( q \neq 0 \), or the algebra of all formal Laurent series of the form \( \mathop{\sum }\limits_{{-\infty }}^{\infty }{a}_{n}{z}^{n} \) , where \( \left( {a}_{n}\right) \) is a doubly infinite sequence of complex numbers with \( {a}_{n} = 0 \) for sufficiently large negative \( n \) . It is significant that examples such as these cannot be endowed with a norm that makes them into a Banach algebra. Indeed, by Corollary 1 a Banach division algebra is one-dimensional over \( \mathbb{C} \), while these division algebras are infinite-dimensional.
Exercises.
(1) Give an example of a one-dimensional Banach algebra that is not isomorphic to the algebra of complex numbers.
(2) Let \( X \) be a compact Hausdorff space and let \( A = C\left( X\right) \) be the Banach algebra of all complex-valued continuous functions on \( X \) . Show that for every \( f \in C\left( X\right) ,\sigma \left( f\right) = f\left( X\right) \) .
(3) Let \( T \) be the operator defined on \( {L}^{2}\left\lbrack {0,1}\right\rbrack \) by \( {Tf}\left( x\right) = {xf}\left( x\right), x \in \) \( \left\lbrack {0,1}\right\rbrack \) . What is the spectrum of \( T \) ? Does \( T \) have point spectrum?
For the remaining exercises, let \( \left( {{a}_{n} : n = 1,2,\ldots }\right) \) be a bounded sequence of complex numbers and let \( H \) be a complex Hilbert space having an orthonormal basis \( {e}_{1},{e}_{2},\ldots \) .
(4) Show that there is a (necessarily unique) bounded operator \( A \in \) \( \mathcal{B}\left( H\right) \) satisfying \( A{e}_{n} = {a}_{n}{e}_{n + 1} \) for every \( n = 1,2,\ldots \) Such an operator \( A \) is called a unilateral weighted shift (with weight sequence \( \left. \left( {a}_{n}\right) \right) \) .
A unitary operator on a Hilbert space \( H \) is an invertible isometry \( U \in \mathcal{B}\left( H\right) \) .
(5) Let \( A \in \mathcal{B}\left( H\right) \) be a weighted shift as above. Show that for every complex number \( \lambda \) with \( \left| \lambda \right| = 1 \) there is a unitary operator \( U = \) \( {U}_{\lambda } \in \mathcal{B}\left( H\right) \) such that \( {UA}{U}^{-1} = {\lambda A} \) .
(6) Deduce that the spectrum of a weighted shift must be the union of (possibly degenerate) concentric circles about \( z = 0 \) .
(7) Let \( A \) be the weighted shift associated with a sequence \( \left( {a}_{n}\right) \in {\ell }^{\infty } \) .
(a) Calculate \( \parallel A\parallel \) in terms of \( \left( {a}_{n}\right) \) .
(b) Assuming that \( {a}_{n} \rightarrow 0 \) as \( n \rightarrow \infty \), show that
\[
\mathop{\lim }\limits_{{n \rightarrow \infty }}{\begin{Vmatrix}{A}^{n}\end{Vmatrix}}^{1/n} = 0
\]
## 1.7. Spectral Radius
Throughout this section, \( A \) denotes a unital Banach algebra with \( \parallel \mathbf{1}\parallel = 1 \) . We introduce the concept of spectral radius and prove a useful asymptotic formula due to Gelfand, Mazur, and Beurling.
Definition 1.7.1. For every \( x \in A \) the spectral radius of \( x \) is defined by
\[
r\left( x\right) = \sup \{ \left| \lambda \right| : \lambda \in \sigma \left( x\right) \} .
\]
REMARK 1.7.2. Since the spectrum of \( x \) is contained in the central disk of radius \( \parallel x\parallel \), it follows that \( r\left( x\right) \leq \parallel x\parallel \) . Notice too that for every \( \lambda \in \mathbb{C} \) we have \( r\left( {\lambda x}\right) = \left| \lambda \right| r\left( x\right) \) .
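For example, in \( A = {M}_{2}\left( \mathbb{C}\right) \) the nilpotent matrix \( x = \left( \begin{matrix} 0 & 1 \\ 0 & 0 \end{matrix}\right) \) has \( \sigma \left( x\right) = \{ 0\} \), so \( r\left( x\right) = 0 \) while \( \parallel x\parallel = 1 \) ; thus the inequality \( r\left( x\right) \leq \parallel x\parallel \) may be strict.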
We require the following rudimentary form of the spectral mapping theorem. If \( x \) is an elemen |
1189_(GTM95)Probability-1 | Definition 8 |
Definition 8. Let \( \xi \) be a random variable. A function \( F = F\left( {\omega ;x}\right) ,\omega \in \Omega, x \in R \), is a regular distribution function for \( \xi \) with respect to \( \mathcal{G} \) if:
(a) \( F\left( {\omega ;x}\right) \) is, for each \( \omega \in \Omega \), a distribution function on \( R \) ;
(b) \( F\left( {\omega ;x}\right) = \mathrm{P}\left( {\xi \leq x \mid \mathcal{G}}\right) \left( \omega \right) \) (a. s.), for each \( x \in R \) .
Theorem 4. A regular distribution function and a regular conditional distribution always exist for the random variable \( \xi \) with respect to a \( \sigma \) -algebra \( \mathcal{G} \subseteq \mathcal{F} \) .
Proof. For each rational number \( r \in R \), define \( {F}_{r}\left( \omega \right) = \mathrm{P}\left( {\xi \leq r \mid \mathcal{G}}\right) \left( \omega \right) \), where \( \mathrm{P}\left( {\xi \leq r \mid \mathcal{G}}\right) \left( \omega \right) = \mathrm{E}\left( {{I}_{\{ \xi \leq r\} } \mid \mathcal{G}}\right) \left( \omega \right) \) is any version of the conditional probability, with respect to \( \mathcal{G} \), of the event \( \{ \xi \leq r\} \) . Let \( \left\{ {r}_{i}\right\} \) be the set of rational numbers in \( R \) . If \( {r}_{i} < {r}_{j} \), Property B* implies that \( \mathrm{P}\left( {\xi \leq {r}_{i} \mid \mathcal{G}}\right) \leq \mathrm{P}\left( {\xi \leq {r}_{j} \mid \mathcal{G}}\right) \) (a. s.), and therefore if \( {A}_{ij} = \left\{ {\omega : {F}_{{r}_{j}}\left( \omega \right) < {F}_{{r}_{i}}\left( \omega \right) }\right\}, A = \bigcup {A}_{ij} \), we have \( \mathrm{P}\left( A\right) = 0 \) . In other words, the set of points \( \omega \) at which the distribution function \( {F}_{r}\left( \omega \right), r \in \left\{ {r}_{i}\right\} \), fails to be monotonic has measure zero.
Now let
\[
{B}_{i} = \left\{ {\omega : \mathop{\lim }\limits_{{n \rightarrow \infty }}{F}_{{r}_{i} + \left( {1/n}\right) }\left( \omega \right) \neq {F}_{{r}_{i}}\left( \omega \right) }\right\} ,\;B = \mathop{\bigcup }\limits_{{i = 1}}^{\infty }{B}_{i}.
\]
It is clear that \( {I}_{\left\{ \xi \leq {r}_{i} + \left( 1/n\right) \right\} } \downarrow {I}_{\left\{ \xi \leq {r}_{i}\right\} }, n \rightarrow \infty \) . Therefore, by (a) of Theorem 2, \( {F}_{{r}_{i} + \left( {1/n}\right) }\left( \omega \right) \rightarrow {F}_{{r}_{i}}\left( \omega \right) \) (a. s.), and therefore the set \( B \) on which continuity on the right (along the rational numbers) fails also has measure zero, \( \mathrm{P}\left( B\right) = 0 \) .
In addition, let
\[
C = \left\{ {\omega : \mathop{\lim }\limits_{{n \rightarrow \infty }}{F}_{n}\left( \omega \right) \neq 1}\right\} \cup \left\{ {\omega : \mathop{\lim }\limits_{{n \rightarrow - \infty }}{F}_{n}\left( \omega \right) \neq 0}\right\} .
\]
Then, since \( \{ \xi \leq n\} \uparrow \Omega, n \rightarrow \infty \), and \( \{ \xi \leq n\} \downarrow \varnothing, n \rightarrow - \infty \), we have \( \mathrm{P}\left( C\right) = 0 \) .
Now put
\[
F\left( {\omega ;x}\right) = \left\{ \begin{array}{ll} \mathop{\lim }\limits_{{r \downarrow x}}{F}_{r}\left( \omega \right) , & \omega \notin A \cup B \cup C, \\ G\left( x\right) , & \omega \in A \cup B \cup C, \end{array}\right.
\]
where \( G\left( x\right) \) is any distribution function on \( R \) ; we show that \( F\left( {\omega ;x}\right) \) satisfies the conditions of Definition 8.
Let \( \omega \notin A \cup B \cup C \) . Then it is clear that \( F\left( {\omega ;x}\right) \) is a nondecreasing function of \( x \) . If \( x < {x}^{\prime } \leq r \), then \( F\left( {\omega ;x}\right) \leq F\left( {\omega ;{x}^{\prime }}\right) \leq F\left( {\omega ;r}\right) = {F}_{r}\left( \omega \right) \downarrow \) \( F\left( {\omega ;x}\right) \) when \( r \downarrow x \) . Consequently \( F\left( {\omega ;x}\right) \) is continuous on the right. Similarly \( \mathop{\lim }\limits_{{x \rightarrow \infty }}F\left( {\omega ;x}\right) = 1,\mathop{\lim }\limits_{{x \rightarrow - \infty }}F\left( {\omega ;x}\right) = 0 \) . Since \( F\left( {\omega ;x}\right) = G\left( x\right) \) when \( \omega \in A \cup B \cup C \), it follows that \( F\left( {\omega ;x}\right) \) is a distribution function on \( R \) for every \( \omega \in \Omega \), i.e., condition (a) of Definition 8 is satisfied.
By construction, \( \mathrm{P}\left( {\xi \leq r \mid \mathcal{G}}\right) \left( \omega \right) = {F}_{r}\left( \omega \right) = F\left( {\omega ;r}\right) \) . If \( r \downarrow x \), we have \( F\left( {\omega ;r}\right) \downarrow \) \( F\left( {\omega ;x}\right) \) for all \( \omega \in \Omega \) by the continuity on the right that we just established. But by conclusion (a) of Theorem 2, we have \( \mathrm{P}\left( {\xi \leq r \mid \mathcal{G}}\right) \left( \omega \right) \rightarrow \mathrm{P}\left( {\xi \leq x \mid \mathcal{G}}\right) \left( \omega \right) \) (a. s.). Therefore \( F\left( {\omega ;x}\right) = \mathrm{P}\left( {\xi \leq x \mid \mathcal{G}}\right) \left( \omega \right) \) (a. s.), which establishes condition (b) of Definition 8.
We now turn to the proof of the existence of a regular conditional distribution of \( \xi \) with respect to \( \mathcal{G} \) .
Let \( F\left( {\omega ;x}\right) \) be the function constructed above. Put
\[
Q\left( {\omega ;B}\right) = {\int }_{B}F\left( {\omega ;{dx}}\right)
\]
where the integral is a Lebesgue-Stieltjes integral. From the properties of the integral (see Subsection 8 in Sect. 6), it follows that \( Q\left( {\omega ;B}\right) \) is a measure in \( B \) for each given \( \omega \in \Omega \) . To establish that \( Q\left( {\omega ;B}\right) \) is a version of the conditional probability \( \mathrm{P}\left( {\xi \in B \mid \mathcal{G}}\right) \left( \omega \right) \), we use the principle of appropriate sets.
Let \( \mathcal{C} \) be the collection of sets \( B \) in \( \mathcal{B}\left( R\right) \) for which \( Q\left( {\omega ;B}\right) = \mathrm{P}\left( {\xi \in B \mid \mathcal{G}}\right) \left( \omega \right) \) (a.s.). Since \( F\left( {\omega ;x}\right) = \mathrm{P}\left( {\xi \leq x \mid \mathcal{G}}\right) \left( \omega \right) \) (a. s.), the system \( \mathcal{C} \) contains the sets \( B \) of the form \( B = ( - \infty, x\rbrack, x \in R \) . Therefore \( \mathcal{C} \) also contains the intervals of the form \( (a, b\rbrack \), and the algebra \( \mathcal{A} \) consisting of finite sums of disjoint sets of the form \( (a, b\rbrack \) . Then it follows from the continuity properties of \( Q\left( {\omega ;B}\right) \left( {\omega \text{fixed}}\right) \) and from conclusion (b) of Theorem 2 that \( \mathcal{C} \) is a monotonic class, and since \( \mathcal{A} \subseteq \mathcal{C} \subseteq \) \( \mathcal{B}\left( R\right) \), we have, from Theorem 1 of Sect. 2,
\[
\mathcal{B}\left( R\right) = \sigma \left( \mathcal{A}\right) \subseteq \sigma \left( \mathcal{C}\right) = \mu \left( \mathcal{C}\right) = \mathcal{C} \subseteq \mathcal{B}\left( R\right) ,
\]
whence \( \mathcal{C} = \mathcal{B}\left( R\right) \) .
This completes the proof of the theorem.
\( \square \)
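As a computational illustration of the construction just carried out (not part of the original text), the following Python/NumPy sketch builds the conditional distribution function \( F\left( {\omega ;x}\right) = \mathrm{P}\left( {\xi \leq x \mid \mathcal{G}}\right) \left( \omega \right) \) for a finite joint distribution, taking \( \mathcal{G} = \sigma \left( \eta \right) \); the joint probabilities are made-up values chosen only for the example.

```python
import numpy as np

# Finite illustration: xi takes four values, eta takes the values 0, 1, 2, with the
# (made-up) joint pmf below; G = sigma(eta), so a version of P(xi <= x | G) is
# constant on each atom {eta = b}.
xi_vals = np.array([-1.0, 0.0, 1.5, 2.0])
pmf = np.array([[0.05, 0.10, 0.05],
                [0.10, 0.05, 0.15],
                [0.05, 0.20, 0.05],
                [0.10, 0.05, 0.05]])        # rows indexed by xi, columns by eta
assert np.isclose(pmf.sum(), 1.0)

p_eta = pmf.sum(axis=0)                     # P(eta = b) > 0 for each b
cond_cdf = np.cumsum(pmf, axis=0) / p_eta   # column b holds F(omega; xi_k) on {eta = b}

# (a) for each omega (i.e. each column) this is a distribution function in x:
assert np.all(np.diff(cond_cdf, axis=0) >= 0) and np.allclose(cond_cdf[-1], 1.0)

# (b) defining property of conditional probability, checked on the atoms of G:
#     E[ F(.; xi_k) * 1_{eta = b} ] = P(xi <= xi_k, eta = b)
assert np.allclose(cond_cdf * p_eta, np.cumsum(pmf, axis=0))
print("F(omega; x) is a regular conditional distribution function in this example")
```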
By using topological considerations we can extend the conclusion of Theorem 4 on the existence of a regular conditional distribution to random elements with values in what are known as Borel spaces. We need the following definition.
Definition 9. A measurable space \( \left( {E,\mathcal{E}}\right) \) is a Borel space if it is Borel equivalent to a Borel subset of the real line, i.e., there is a one-to-one mapping \( \varphi = \varphi \left( e\right) : \left( {E,\mathcal{E}}\right) \rightarrow \) \( \left( {R,\mathcal{B}\left( R\right) }\right) \) such that
(1) \( \varphi \left( E\right) \equiv \{ \varphi \left( e\right) : e \in E\} \) is a set in \( \mathcal{B}\left( R\right) \) ;
(2) \( \varphi \) is \( \mathcal{E} \) -measurable \( \left( {{\varphi }^{-1}\left( A\right) \in \mathcal{E}, A \in \varphi \left( E\right) \cap \mathcal{B}\left( R\right) }\right) \) ,
(3) \( {\varphi }^{-1} \) is \( \mathcal{B}\left( R\right) /\mathcal{E} \) -measurable \( \left( {\varphi \left( B\right) \in \varphi \left( E\right) \cap \mathcal{B}\left( R\right), B \in \mathcal{E}}\right) \) .
Theorem 5. Let \( X = X\left( \omega \right) \) be a random element with values in the Borel space \( \left( {E,\mathcal{E}}\right) \) . Then there is a regular conditional distribution of \( X \) with respect to \( \mathcal{G} \subseteq \mathcal{F} \) .
Proof. Let \( \varphi = \varphi \left( e\right) \) be the function in Definition 9. By (2) in this definition \( \varphi \left( {X\left( \omega \right) }\right) \) is a random variable. Hence, by Theorem 4, we can define the conditional distribution \( Q\left( {\omega ;A}\right) \) of \( \varphi \left( {X\left( \omega \right) }\right) \) with respect to \( \mathcal{G}, A \in \varphi \left( E\right) \cap \mathcal{B}\left( R\right) \) .
We introduce the function \( \widetilde{Q}\left( {\omega ;B}\right) = Q\left( {\omega ;\varphi \left( B\right) }\right), B \in \mathcal{E} \) . By (3) of Definition 9, \( \varphi \left( B\right) \in \varphi \left( E\right) \cap \mathcal{B}\left( R\right) \) and consequently \( \widetilde{Q}\left( {\omega ;B}\right) \) is defined. Evidently \( \widetilde{Q}\left( {\omega ;B}\right) \) is a measure in \( B \in \mathcal{E} \) for every \( \omega \) . Now fix \( B \in \mathcal{E} \) . By the one-to-one character of the mapping \( \varphi = \varphi \left( e\right) \) ,
\[
\widetilde{Q}\left( {\omega ;B}\right) = Q\left( {\omega ;\varphi \left( B\right) }\right) = \mathsf{P}\{ \varphi \left( X\right) \in \varphi \left( B\right) \mid \mathcal{G}\} \left( \omega \right) = \mathsf{P}\{ X \in B \mid \mathcal{G}\} \left( \omega \right) \;\text{ (a. s.). }
\]
Therefore \( \widetilde{Q}\left( {\omega ;B}\right) \) is a regular conditional distribution of \( X \) with respect to \( \mathcal{G} \) .
This completes the proof of the theorem.
Corollary. Let \( X = X\left( \omega \right) \) be a random |
1358_[陈松蹊&张慧铭] A Course in Fixed and High-dimensional Multivariate Analysis (2020) | Definition 1.2.4 |
Definition 1.2.4 The partial correlation coefficient between \( {\mathbf{X}}_{i} \) and \( {\mathbf{X}}_{j} \) for \( i, j = \) \( 1,\ldots, q \) given \( {\mathbf{X}}^{\left( \mathbf{2}\right) } = {\left( {\mathbf{X}}_{q + 1},\ldots ,{\mathbf{X}}_{p}\right) }^{T} \) is
\[
{\rho }_{{ij} \cdot q + 1,\ldots, p} = \frac{{\sigma }_{{ij} \cdot q + 1,\ldots, p}}{\sqrt{{\sigma }_{{ii} \cdot q + 1,\ldots, p}{\sigma }_{{jj} \cdot q + 1,\ldots, p}}}
\]
Remark: "Partial" should be interpreted as "conditional". So, \( {\rho }_{{ij} \cdot q + 1\ldots, p} \) is really a conditional correlation coefficient between two \( {X}_{i} \) and \( {X}_{j} \) in \( {\mathbf{X}}^{\left( \mathbf{1}\right) } \) given \( {\mathbf{X}}^{\left( \mathbf{2}\right) } \) .
Example: When \( p = 2, q = 1 \), both \( {\mathbf{X}}^{\left( \mathbf{1}\right) } = {\mathbf{X}}_{\mathbf{1}} \) and \( {\mathbf{X}}^{\left( \mathbf{2}\right) } = {\mathbf{X}}_{\mathbf{2}} \) are univariate,
\[
\mathbf{\sum } = \left( \begin{matrix} {\sigma }_{1}^{2} & \rho {\sigma }_{1}{\sigma }_{2} \\ \rho {\sigma }_{1}{\sigma }_{2} & {\sigma }_{2}^{2} \end{matrix}\right) = \left( \begin{matrix} {\mathbf{\sum }}_{11} & {\mathbf{\sum }}_{12} \\ {\mathbf{\sum }}_{21} & {\mathbf{\sum }}_{22} \end{matrix}\right) ,
\]
i.e. \( {\mathbf{\sum }}_{11} = {\sigma }_{1}^{2},{\mathbf{\sum }}_{22} = {\sigma }_{2}^{2},{\mathbf{\sum }}_{12} = \rho {\sigma }_{1}{\sigma }_{2} = {\mathbf{\sum }}_{21} \) . Here the regression coefficient matrix \( \beta = \left( {\beta }_{12}\right) = {\mathbf{\sum }}_{12}{\mathbf{\sum }}_{22}^{-1} = {\sigma }_{1}{\sigma }_{2}\rho \frac{1}{{\sigma }_{2}^{2}} = \frac{\rho {\sigma }_{1}}{{\sigma }_{2}} \) . The covariance matrix of \( {\mathbf{X}}^{\left( \mathbf{1}\right) } \) given \( {\mathbf{X}}^{\left( \mathbf{2}\right) } \) is
\[
{\sigma }_{{11} \cdot 2} = {\mathbf{\sum }}_{{11} \cdot 2} = {\mathbf{\sum }}_{11} - {\mathbf{\sum }}_{12}{\mathbf{\sum }}_{22}^{-1}{\mathbf{\sum }}_{21} = {\sigma }_{1}^{2} - {\rho }^{2}{\sigma }_{1}^{2}{\sigma }_{2}^{2}/{\sigma }_{2}^{2} = {\sigma }_{1}^{2}\left( {1 - {\rho }^{2}}\right) < {\sigma }_{1}^{2}
\]
if and only if \( \rho \neq 0 \) .
The conditional density of \( {\mathbf{X}}^{\left( \mathbf{1}\right) } \) given \( {\mathbf{X}}^{\left( \mathbf{2}\right) } \) is
\[
f\left( {{\mathbf{x}}_{1} \mid {\mathbf{x}}_{2}}\right)
\]
\[
= \frac{1}{\sqrt{2\pi }\sqrt{{\sigma }_{1}^{2}\left( {1 - {\rho }^{2}}\right) }}\exp \left\{ { - \frac{1}{2}{\left\lbrack {\mathbf{x}}_{1} - {\mu }_{1} - \frac{\rho {\sigma }_{1}}{{\sigma }_{2}}\left( {{\mathbf{x}}_{2} - {\mu }_{2}}\right) \right\rbrack }^{2} \times {\sigma }_{1}^{ - 2}{\left( 1 - {\rho }^{2}\right) }^{ - 1}}\right\} \tag{1.2.12}
\]
and \( {\mathbf{X}}_{1} \mid {\mathbf{X}}_{2} \sim N\left( {{\mu }_{1} + \frac{\rho {\sigma }_{1}}{{\sigma }_{2}}\left( {{\mathbf{x}}_{2} - {\mu }_{2}}\right) ,{\sigma }_{1}^{2}\left( {1 - {\rho }^{2}}\right) }\right) \) .
Example: Partition \( \mathbf{X} = \left( \begin{array}{l} {\mathbf{X}}^{\left( 1\right) } \\ {\mathbf{X}}^{\left( 2\right) } \\ {\mathbf{X}}^{\left( 3\right) } \end{array}\right) \sim {N}_{p + q + r}\left( {\mu ,\mathbf{\sum }}\right) ,\mu = \left( \begin{array}{l} {\mu }^{\left( 1\right) } \\ {\mu }^{\left( 2\right) } \\ {\mu }^{\left( 3\right) } \end{array}\right) \) and \( \mathbf{\sum } = \)
\( \left( \begin{matrix} {\mathbf{\sum }}_{11} & {\mathbf{\sum }}_{12} & {\mathbf{\sum }}_{13} \\ {\mathbf{\sum }}_{21} & {\mathbf{\sum }}_{22} & {\mathbf{\sum }}_{23} \\ {\mathbf{\sum }}_{31} & {\mathbf{\sum }}_{32} & {\mathbf{\sum }}_{33} \end{matrix}\right) \) are similar partitions. From (1.2.9) and (1.2.10)
\[
E\left\lbrack {\left( \begin{matrix} {\mathbf{X}}^{\left( \mathbf{1}\right) } \\ {\mathbf{X}}^{\left( \mathbf{2}\right) } \end{matrix}\right) \mid {\mathbf{X}}^{\left( \mathbf{3}\right) } = {\mathbf{x}}^{\left( \mathbf{3}\right) }}\right\rbrack = \left( \begin{matrix} {\mu }^{\left( \mathbf{1}\right) } \\ {\mu }^{\left( \mathbf{2}\right) } \end{matrix}\right) + \left( \begin{matrix} {\mathbf{\sum }}_{13} \\ {\mathbf{\sum }}_{23} \end{matrix}\right) {\mathbf{\sum }}_{33}^{-1}\left( {{\mathbf{x}}^{\left( \mathbf{3}\right) } - {\mu }^{\left( \mathbf{3}\right) }}\right)
\]
\[
= \left( \begin{array}{l} {\mu }^{\left( \mathbf{1}\right) } + {\mathbf{\sum }}_{13}{\mathbf{\sum }}_{33}^{-1}\left( {{\mathbf{x}}^{\left( \mathbf{3}\right) } - {\mu }^{\left( \mathbf{3}\right) }}\right) \\ {\mu }^{\left( \mathbf{2}\right) } + {\mathbf{\sum }}_{23}{\mathbf{\sum }}_{33}^{-1}\left( {{\mathbf{x}}^{\left( \mathbf{3}\right) } - {\mu }^{\left( \mathbf{3}\right) }}\right) \end{array}\right)
\]
\[
\operatorname{Var}\left\lbrack {\left( \begin{matrix} {\mathbf{X}}^{\left( \mathbf{1}\right) } \\ {\mathbf{X}}^{\left( \mathbf{2}\right) } \end{matrix}\right) \mid {\mathbf{X}}^{\left( \mathbf{3}\right) } = {\mathbf{x}}^{\left( \mathbf{3}\right) }}\right\rbrack = \left( \begin{array}{ll} {\mathbf{\sum }}_{11} & {\mathbf{\sum }}_{12} \\ {\mathbf{\sum }}_{21} & {\mathbf{\sum }}_{22} \end{array}\right) - \left( \begin{array}{l} {\mathbf{\sum }}_{13} \\ {\mathbf{\sum }}_{23} \end{array}\right) {\mathbf{\sum }}_{33}^{-1}\left( \begin{matrix} {\mathbf{\sum }}_{31} & {\mathbf{\sum }}_{32} \end{matrix}\right)
\]
\[
= \left( \begin{matrix} {\mathbf{\sum }}_{11} - {\mathbf{\sum }}_{13}{\mathbf{\sum }}_{33}^{-1}{\mathbf{\sum }}_{31} & {\mathbf{\sum }}_{12} - {\mathbf{\sum }}_{13}{\mathbf{\sum }}_{33}^{-1}{\mathbf{\sum }}_{32} \\ {\mathbf{\sum }}_{21} - {\mathbf{\sum }}_{23}{\mathbf{\sum }}_{33}^{-1}{\mathbf{\sum }}_{31} & {\mathbf{\sum }}_{22} - {\mathbf{\sum }}_{23}{\mathbf{\sum }}_{33}^{-1}{\mathbf{\sum }}_{32} \end{matrix}\right)
\]
Similarly we can derive \( E\left\lbrack {{\mathbf{X}}^{\left( \mathbf{1}\right) } \mid \left( \begin{array}{l} {\mathbf{X}}^{\left( \mathbf{2}\right) } \\ {\mathbf{X}}^{\left( \mathbf{3}\right) } \end{array}\right) }\right\rbrack \) and \( \operatorname{Var}\left\lbrack {{\mathbf{X}}^{\left( \mathbf{1}\right) } \mid \left( \begin{array}{l} {\mathbf{X}}^{\left( \mathbf{2}\right) } \\ {\mathbf{X}}^{\left( \mathbf{3}\right) } \end{array}\right) }\right\rbrack \) using the formula
for the inverse of block matrices:
\[
{\left( \begin{array}{ll} {\mathbf{A}}_{11} & {\mathbf{A}}_{12} \\ {\mathbf{A}}_{21} & {\mathbf{A}}_{22} \end{array}\right) }^{-1} = \left( \begin{matrix} {\mathbf{A}}_{11}^{-1} & 0 \\ 0 & 0 \end{matrix}\right) + \left( \begin{matrix} {\mathbf{A}}_{11}^{-1}{\mathbf{A}}_{12} \\ - \mathbf{I} \end{matrix}\right) {\mathbf{B}}^{-1}\left( \begin{matrix} {\mathbf{A}}_{21}{\mathbf{A}}_{11}^{-1} & - \mathbf{I} \end{matrix}\right)
\]
where
\[
\mathbf{B} = {\mathbf{A}}_{22} - {\mathbf{A}}_{21}{\mathbf{A}}_{11}^{-1}{\mathbf{A}}_{12} = {\mathbf{A}}_{{22} \cdot 1}
\]
provided all the inverse matrices exist.
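The block-inverse identity above can be checked numerically; the following sketch (Python/NumPy, not part of the original text) uses a random symmetric positive definite matrix so that all required inverses exist.

```python
import numpy as np

rng = np.random.default_rng(0)
n, k = 5, 2
M = rng.standard_normal((n, n))
A = M @ M.T + n * np.eye(n)            # symmetric positive definite, so all inverses exist
A11, A12 = A[:k, :k], A[:k, k:]
A21, A22 = A[k:, :k], A[k:, k:]

A11_inv = np.linalg.inv(A11)
B = A22 - A21 @ A11_inv @ A12          # B = A_{22.1}

first = np.block([[A11_inv, np.zeros((k, n - k))],
                  [np.zeros((n - k, k)), np.zeros((n - k, n - k))]])
col = np.vstack([A11_inv @ A12, -np.eye(n - k)])      # the column block (A11^{-1} A12 ; -I)
row = np.hstack([A21 @ A11_inv, -np.eye(n - k)])      # the row block (A21 A11^{-1} , -I)
A_inv_formula = first + col @ np.linalg.inv(B) @ row

print(np.allclose(A_inv_formula, np.linalg.inv(A)))   # True: the identity holds
```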
## Multiple correlation
The multiple correlation is a correlation between a component \( {X}_{i} \) of \( {\mathbf{X}}^{\left( \mathbf{1}\right) } = {\left( {X}_{1},\ldots ,{X}_{q}\right) }^{T}, i \leq q \), and the set of components \( {\mathbf{X}}^{\left( \mathbf{2}\right) } = {\left( {X}_{q + 1},\ldots ,{X}_{p}\right) }^{T} \) . In this sub-section we do not need the random variables to be normally distributed; hence the notions covered here apply to any distribution.
Definition 1.2.5 Let \( \mathbf{X} = {\left( {\mathbf{X}}^{\left( \mathbf{1}\right) },{\mathbf{X}}^{\left( \mathbf{2}\right) }\right) }^{T} \) and \( \beta \) be the regression coefficient matrix. The vector \( {\mathbf{X}}^{\left( \mathbf{1} \cdot \mathbf{2}\right) } = {\mathbf{X}}^{\left( \mathbf{1}\right) } - {\mu }^{\left( \mathbf{1}\right) } - \beta \left( {{\mathbf{X}}^{\left( \mathbf{2}\right) } - {\mu }^{\left( \mathbf{2}\right) }}\right) \) is called the vector of residuals of \( {\mathbf{X}}^{\left( \mathbf{1}\right) } \) from its regression on \( {\mathbf{X}}^{\left( \mathbf{2}\right) } \) .
Theorem 1.2.8. \( {\mathbf{X}}^{\left( \mathbf{1} \cdot \mathbf{2}\right) } \) is uncorrelated with \( {\mathbf{X}}^{\left( \mathbf{2}\right) } \) .
Proof: \( \operatorname{Cov}\left( {{\mathbf{X}}^{\left( \mathbf{1} \cdot \mathbf{2}\right) },{\mathbf{X}}^{\left( \mathbf{2}\right) }}\right) = \operatorname{Cov}\left( {{\mathbf{X}}^{\left( \mathbf{1}\right) } - {\mu }^{\left( \mathbf{1}\right) } - \beta \left( {{\mathbf{X}}^{\left( \mathbf{2}\right) } - {\mu }^{\left( \mathbf{2}\right) }}\right) ,{\mathbf{X}}^{\left( \mathbf{2}\right) }}\right) = {\mathbf{\sum }}_{12} - {\mathbf{\sum }}_{12}{\mathbf{\sum }}_{22}^{-1}{\mathbf{\sum }}_{22} = \)
0.
Let \( {X}_{i} \) be a component of \( \mathbf{X} \) with \( i \leq q \), let \( {\sigma }_{\left( i\right) }^{T} \) be the \( i \)-th row of \( {\mathbf{\sum }}_{12} \), i.e. \( {\mathbf{\sum }}_{12} = {\left( {\sigma }_{\left( 1\right) },\ldots ,{\sigma }_{\left( q\right) }\right) }^{T} \), and let \( {\mathbf{\beta }}_{\left( i\right) }^{T} \) be the \( i \)-th row of \( \mathbf{\beta } \), so that \( \mathbf{\beta } = {\mathbf{\sum }}_{12}{\mathbf{\sum }}_{22}^{-1} = {\left( {\beta }_{\left( 1\right) },\ldots ,{\beta }_{\left( q\right) }\right) }^{T} = {\left( {\sigma }_{\left( 1\right) },\ldots ,{\sigma }_{\left( q\right) }\right) }^{T}{\mathbf{\sum }}_{22}^{-1} = {\left( {\mathbf{\sum }}_{22}^{-1}{\sigma }_{\left( 1\right) },\ldots ,{\mathbf{\sum }}_{22}^{-1}{\sigma }_{\left( q\right) }\right) }^{T}. \)
Hence, \( {\beta }_{\left( i\right) }^{T} = {\sigma }_{\left( i\right) }^{T}{\mathbf{\sum }}_{22}^{-1} \) and the \( {i}^{\text{th }} \) component of \( {\mathbf{X}}^{\left( \mathbf{1} \cdot \mathbf{2}\right) } \)
\[
{X}_{i}^{\left( 1 \cdot 2\right) } = {X}_{i} - {\mu }_{i} - {\beta }_{\left( i\right) }^{T}\left( {{\mathbf{X}}^{\left( \mathbf{2}\right) } - {\mu }^{\left( \mathbf{2}\right) }}\right)
\]
where \( {\mu }_{i} = E\left( {X}_{i}\right) \) .
Theorem 1.2.9. For any \( \alpha \in {\mathbb{R}}^{p - q} \), \( \operatorname{Var}\left( {X}_{i}^{\left( 1 \cdot 2\right) }\right) \leq \operatorname{Var}\left( {{X}_{i} - {\alpha }^{T}{\mathbf{X}}^{\left( \mathbf{2}\right) }}\right) \) .
Proof: Clearly \( E\left( |
1075_(GTM233)Topics in Banach Space Theory | Definition 5.2.2 |
Definition 5.2.2. A bounded subset \( \mathcal{F} \subset {L}_{1}\left( \mu \right) \) is called equi-integrable (or uniformly integrable) if given \( \epsilon > 0 \) there is \( \delta = \delta \left( \epsilon \right) > 0 \) such that for every set \( E \subset \Omega \) with \( \mu \left( E\right) < \delta \) we have \( \mathop{\sup }\limits_{{f \in \mathcal{F}}}{\int }_{E}\left| f\right| {d\mu } < \epsilon \), i.e.,
\[
\mathop{\lim }\limits_{{\mu \left( E\right) \rightarrow 0}}\mathop{\sup }\limits_{{f \in \mathcal{F}}}{\int }_{E}\left| f\right| {d\mu } = 0
\]
In this definition we can omit the word bounded if \( \mu \) is nonatomic, since then given any \( \delta > 0 \) it is possible to partition \( \Omega \) into a finite number of sets of measure \( < \delta \) .
Example 5.2.3. (i) For \( h \in {L}_{1}\left( \mu \right) \) with \( h \geq 0 \) the set \( \mathcal{F} = \left\{ {f \in {L}_{1}\left( \mu \right) ;\left| f\right| \leq h}\right\} \) is equi-integrable.
(ii) The closed unit ball of \( {L}_{2}\left( \mu \right) \) is an equi-integrable subset of \( {L}_{1}\left( \mu \right) \) . Indeed, for every \( f \in {B}_{{L}_{2}\left( \mu \right) } \) and measurable set \( E \), by the Cauchy-Schwarz inequality,
\[
{\int }_{E}\left| f\right| {d\mu } \leq {\left( {\int }_{E}1d\mu \right) }^{1/2}{\left( {\int }_{E}{\left| f\right| }^{2}d\mu \right) }^{1/2} \leq {\left( \mu \left( E\right) \right) }^{1/2}.
\]
Then,
\[
\mathop{\lim }\limits_{{\mu \left( E\right) \rightarrow 0}}\mathop{\sup }\limits_{{f \in F}}{\int }_{E}\left| f\right| {d\mu } = 0
\]
(iii) The closed unit ball of \( {L}_{1}\left( \mu \right) \) is not equi-integrable, as one can easily check by taking the subset \( \mathcal{F} = \left\{ {{\delta }^{-1}{\chi }_{\left\lbrack 0,\delta \right\rbrack };0 < \delta < 1}\right\} \) .
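A numerical sketch (Python/NumPy, not from the original text) contrasting (ii) and (iii): for the family \( \left\{ {{\delta }^{-1}{\chi }_{\left\lbrack 0,\delta \right\rbrack }}\right\} \) the integral over \( E = \left\lbrack {0,\delta }\right\rbrack \) stays equal to 1 even though \( \mu \left( E\right) \rightarrow 0 \), while for the unit ball of \( {L}_{2}\left( \mu \right) \) the Cauchy-Schwarz bound \( {\left( \mu \left( E\right) \right) }^{1/2} \) forces the integrals to 0.

```python
import numpy as np

# mu = Lebesgue measure on [0, 1].  Family from (iii): f_delta = delta^{-1} chi_[0, delta].
# Integrals of indicator-type functions are computed exactly as height * interval length.
for delta in [1e-1, 1e-2, 1e-3]:
    int_f_over_E = (1.0 / delta) * delta          # integral of |f_delta| over E = [0, delta]
    l2_ball_bound = np.sqrt(delta)                # bound from (ii) for any f in the L2 unit ball
    print(f"mu(E) = {delta:7.3f}   int_E |f_delta| dmu = {int_f_over_E:.1f}"
          f"   sup over the L2 ball <= {l2_ball_bound:.4f}")
# The first column of integrals stays at 1, so {f_delta} is not equi-integrable,
# while the L2-ball bound tends to 0 with mu(E).
```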
Lemma 5.2.4. Let \( \mathcal{F} \) and \( \mathcal{G} \) be bounded sets of equi-integrable functions in \( {L}_{1}\left( \mu \right) \) . Then the sets \( \mathcal{F} \cup \mathcal{G} \) and \( \mathcal{F} + \mathcal{G} = \{ f + g;f \in \mathcal{F}, g \in \mathcal{G}\} \subset {L}_{1}\left( \mu \right) \) are (bounded and) equi-integrable.
This is a very elementary deduction from the definition, and we leave the proof to the reader. Next we give an alternative formulation of equi-integrability.
Lemma 5.2.5. Suppose \( \mathcal{F} \) is a bounded subset of \( {L}_{1}\left( \mu \right) \) . Then the following are equivalent:
(i) \( \mathcal{F} \) is equi-integrable;
(ii) \( \mathop{\lim }\limits_{{M \rightarrow \infty }}\mathop{\sup }\limits_{{f \in \mathcal{F}}}{\int }_{\{ \left| f\right| > M\} }\left| f\right| {d\mu } = 0 \) .
Proof. (i) \( \Rightarrow \) (ii) Since \( \mathcal{F} \) is bounded, there is a constant \( A > 0 \) such that \( \mathop{\sup }\limits_{{f \in \mathcal{F}}}\parallel f{\parallel }_{1} \leq A \) . Given \( f \in \mathcal{F} \), by Chebyshev’s inequality,
\[
\mu \left( {\{ \left| f\right| > M\} }\right) \leq \frac{\parallel f{\parallel }_{1}}{M} \leq \frac{A}{M}.
\]
Therefore, \( \mathop{\lim }\limits_{{M \rightarrow \infty }}\mu \left( {\{ \left| f\right| > M\} }\right) = 0 \) . Using the equi-integrability of \( \mathcal{F} \), we conclude that
\[
\mathop{\lim }\limits_{{M \rightarrow \infty }}\mathop{\sup }\limits_{{f \in \mathcal{F}}}{\int }_{\{ \left| f\right| > M\} }\left| f\right| {d\mu } = 0
\]
(ii) \( \Rightarrow \) (i) Given \( f \in \mathcal{F} \) and \( E \in \sum \), for every finite \( M > 0 \) we have
\[
{\int }_{E}\left| f\right| {d\mu } = {\int }_{E\cap \{ \left| f\right| \leq M\} }\left| f\right| {d\mu } + {\int }_{E\cap \{ \left| f\right| > M\} }\left| f\right| {d\mu }
\]
\[
\leq {M\mu }\left( E\right) + {\int }_{E\cap \{ \left| f\right| > M\} }\left| f\right| {d\mu }
\]
\[
\leq {M\mu }\left( E\right) + {\int }_{\{ \left| f\right| > M\} }\left| f\right| {d\mu }
\]
\[
\leq {M\mu }\left( E\right) + \mathop{\sup }\limits_{{f \in F}}{\int }_{\{ \left| f\right| > M\} }\left| f\right| {d\mu }
\]
Hence,
\[
\mathop{\sup }\limits_{{f \in \mathcal{F}}}{\int }_{E}\left| f\right| {d\mu } \leq {M\mu }\left( E\right) + \mathop{\sup }\limits_{{f \in F}}{\int }_{\{ \left| f\right| > M\} }\left| f\right| {d\mu }.
\]
Given \( \epsilon > 0 \), let us pick \( M = M\left( \epsilon \right) \) such that
\[
\mathop{\sup }\limits_{{f \in \mathcal{F}}}{\int }_{\{ \left| f\right| > M\} }\left| f\right| {d\mu } < \frac{\epsilon }{2}
\]
Then if \( \mu \left( E\right) < \frac{\epsilon }{2M} \), we obtain
\[
\mathop{\sup }\limits_{{f \in \mathcal{F}}}{\int }_{E}\left| f\right| {d\mu } \leq M\frac{\epsilon }{2M} + \frac{\epsilon }{2} = \epsilon
\]
Note that whenever \( {\left( {f}_{n}\right) }_{n = 1}^{\infty } \) is a sequence whose absolute values are bounded by a single integrable function, it is in particular equi-integrable (see Example 5.2.3(i)). The next lemma establishes that conversely, equi-integrability is a condition that can replace the existence of a dominating function in the dominated convergence theorem:
Lemma 5.2.6. Suppose \( {\left( {f}_{n}\right) }_{n = 1}^{\infty } \) is an equi-integrable sequence in \( {L}_{1}\left( \mu \right) \) that converges a.e. to some \( g \in {L}_{1}\left( \mu \right) \) . Then
\[
\mathop{\lim }\limits_{{n \rightarrow \infty }}{\int }_{\Omega }{f}_{n}{d\mu } = {\int }_{\Omega }{gd\mu }
\]
Proof. For each \( M > 0 \) let us consider the truncations
\[
{f}_{n}^{\left( M\right) } = \left\{ {\begin{array}{ll} M & \text{ if }{f}_{n} > M, \\ {f}_{n} & \text{ if }\left| {f}_{n}\right| \leq M, \\ - M & \text{ if }{f}_{n} < - M, \end{array}\;{g}^{\left( M\right) } = \left\{ \begin{array}{ll} M & \text{ if }g > M, \\ g & \text{ if }\left| g\right| \leq M, \\ - M & \text{ if }g < - M, \end{array}\right. }\right.
\]
and write
\[
\left| {{\int }_{\Omega }\left( {{f}_{n} - g}\right) {d\mu }}\right| \leq \left| {{\int }_{\Omega }\left( {{f}_{n} - {f}_{n}^{\left( M\right) }}\right) {d\mu }}\right| + \left| {{\int }_{\Omega }\left( {{f}_{n}^{\left( M\right) } - {g}^{\left( M\right) }}\right) {d\mu }}\right| + \left| {{\int }_{\Omega }\left( {g - {g}^{\left( M\right) }}\right) {d\mu }}\right| .
\]
Now,
\[
\left| {{\int }_{\Omega }\left( {{f}_{n} - {f}_{n}^{\left( M\right) }}\right) {d\mu }}\right| \leq {\int }_{\left\{ \left| {f}_{n}\right| > M\right\} }\left( {\left| {f}_{n}\right| - M}\right) {d\mu } \leq {\int }_{\left\{ \left| {f}_{n}\right| > M\right\} }\left| {f}_{n}\right| {d\mu } \rightarrow 0
\]
uniformly in \( n \) as \( M \rightarrow \infty \) by Lemma 5.2.5. Analogously, since \( g \in {L}_{1}\left( \mu \right) \) ,
\[
\left| {{\int }_{\Omega }\left( {g - {g}^{\left( M\right) }}\right) {d\mu }}\right| \leq {\int }_{\{ \left| g\right| > M\} }\left( {\left| g\right| - M}\right) {d\mu } \leq {\int }_{\{ \left| g\right| > M\} }\left| g\right| {d\mu }\overset{M \rightarrow \infty }{ \rightarrow }0.
\]
And finally, for each \( M \) we have
\[
\mathop{\lim }\limits_{{n \rightarrow \infty }}{\int }_{\Omega }{f}_{n}^{\left( M\right) }{d\mu } = {\int }_{\Omega }{g}^{\left( M\right) }{d\mu }
\]
by the bounded convergence theorem. The combination of these three facts finishes the proof.
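The role of equi-integrability in Lemma 5.2.6 can be illustrated numerically; the sketch below (Python, not part of the original text) compares two made-up sequences on \( \left\lbrack {0,1}\right\rbrack \) with Lebesgue measure, both converging to \( g = 0 \) a.e.: an equi-integrable one, whose integrals converge to \( {\int }_{\Omega }{gd\mu } = 0 \), and a non-equi-integrable one, whose integrals do not.

```python
# Two sequences on [0, 1] with Lebesgue measure, both tending to g = 0 a.e.:
#   f_n = n * chi_[0, 1/n^2] : equi-integrable, since int_{|f_n| > M} |f_n| = 1/n for n > M,
#                              so sup_n of these integrals is at most 1/(M+1) -> 0 as M -> oo;
#   h_n = n * chi_[0, 1/n]   : not equi-integrable (the same integrals equal 1 for all n > M).
# Integrals of such step functions are exact: height times length of the support interval.
for n in [10, 100, 1000, 10000]:
    int_f = n * (1.0 / n**2)      # -> 0 = integral of the limit, as Lemma 5.2.6 predicts
    int_h = n * (1.0 / n)         # -> 1 != 0: the conclusion fails without equi-integrability
    print(f"n = {n:6d}   int f_n dmu = {int_f:.4f}   int h_n dmu = {int_h:.4f}")
```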
We now come to an important technical lemma that is often referred to as the subsequence splitting lemma. This lemma enables us to take an arbitrary bounded sequence in \( {L}_{1}\left( \mu \right) \) and extract a subsequence that can be split into two sequences, the first disjointly supported and the second equi-integrable. It is due to Kadets and Pełczyński and provides a very useful bridge between sequence space methods (gliding hump techniques) and function spaces.
Lemma 5.2.7 (Subsequence Splitting Lemma [147]). Let \( {\left( {f}_{n}\right) }_{n = 1}^{\infty } \) be a bounded sequence in \( {L}_{1}\left( \mu \right) \) . Then there exist a subsequence \( {\left( {g}_{n}\right) }_{n = 1}^{\infty } \) of \( {\left( {f}_{n}\right) }_{n = 1}^{\infty } \) and a sequence of disjoint measurable sets \( {\left( {A}_{n}\right) }_{n = 1}^{\infty } \) such that if \( {B}_{n} = \Omega \smallsetminus {A}_{n} \), then \( {\left( {g}_{n}{\chi }_{{B}_{n}}\right) }_{n = 1}^{\infty } \) is equi-integrable.
Proof. Without loss of generality we can assume \( {\begin{Vmatrix}{f}_{n}\end{Vmatrix}}_{1} \leq 1 \) for all \( n \) . We will first find a subsequence \( {\left( {f}_{{n}_{s}}\right) }_{s = 1}^{\infty } \) and a sequence of measurable sets \( {\left( {F}_{s}\right) }_{s = 1}^{\infty } \) such that if \( {E}_{s} = \Omega \smallsetminus {F}_{s} \), then \( {\left( {f}_{{n}_{s}}{\chi }_{{E}_{s}}\right) }_{s = 1}^{\infty } \) is equi-integrable and \( \mathop{\lim }\limits_{{s \rightarrow \infty }}{f}_{{n}_{s}}{\chi }_{{F}_{s}} = 0 \) \( \mu \) -a.e.
For every choice of \( k \in \mathbb{N} \), Chebyshev’s inequality gives
\[
0 \leq \mu \left( {\left| {f}_{n}\right| > k}\right) \leq \frac{1}{k},\;\forall n \in \mathbb{N}.
\]
Since \( {\left( \mu \left( \left| {f}_{n}\right| > k\right) \right) }_{n = 1}^{\infty } \) is a bounded sequence, by passing to a subsequence we can assume that \( {\left( \mu \left( \left| {f}_{n}\right| > k\right) \right) }_{n = 1}^{\infty } \) converges for each \( k \) . Let us call \( {\alpha }_{k} \) its limit. Our first goal is to see that the series \( \mathop{\sum }\limits_{{k = 1}}^{\infty }{\alpha }_{k} \) is convergent with sum no bigger than 1 .
For each \( n \) ,
\[
1 \geq {\int }_{\Omega }\left| {f}_{n}\right| {d\mu } = {\int }_{0}^{\infty }\mu \left( {\left| {f}_{n}\right| > t}\right) {dt}
\]
\[
= \mathop{\sum }\limits_{{k = 1}}^{\infty }{\int }_{k - 1}^{k}\mu \left( {\left| {f}_{n}\right| > t}\right) {dt}
\]
\[
\geq \mathop{\sum }\limits_{{k = 1}}^{\infty }\mu \left( {\left| {f}_{n}\right| > k}\right)
\]
Therefore the partial sums of \( \mathop{\sum }\limits_{{k = 1}}^{\infty }{\alpha }_{k} \) are uniformly bounded:
\[
\mathop{\sum }\limits_{{k = 1}}^{N}{\alpha }_{k} = \mathop{\sum }\limits_{{k = 1}}^{N}\mathop{\ |
1167_(GTM73)Algebra | Definition 6.2 |
Definition 6.2. The permutations \( {\sigma }_{1},{\sigma }_{2},\ldots ,{\sigma }_{\mathrm{r}} \) of \( {\mathrm{S}}_{\mathrm{n}} \) are said to be disjoint provided that for each \( 1 \leq \mathrm{i} \leq \mathrm{r} \), and every \( \mathrm{k}\varepsilon {\mathrm{I}}_{\mathrm{n}},{\sigma }_{\mathrm{i}}\left( \mathrm{k}\right) \neq \mathrm{k} \) implies \( {\sigma }_{\mathrm{j}}\left( \mathrm{k}\right) = \mathrm{k} \) for all \( \mathrm{j} \neq \mathrm{i} \) .
In other words \( {\sigma }_{1},{\sigma }_{2},\ldots ,{\sigma }_{r} \) are disjoint if and only if no element of \( {I}_{n} \) is moved by more than one of \( {\sigma }_{1},\ldots ,{\sigma }_{r} \) . It is easy to see that \( {\tau \sigma } = {\sigma \tau } \) whenever \( \sigma \) and \( \tau \) are disjoint.
Theorem 6.3. Every nonidentity permutation in \( {\mathrm{S}}_{\mathrm{n}} \) is uniquely (up to the order of the factors) a product of disjoint cycles, each of which has length at least 2.
SKETCH OF PROOF. Let \( \sigma \in {S}_{n},\sigma \neq \left( 1\right) \) . Verify that the following is an equivalence relation on \( {I}_{n} \) : for \( x, y \in {I}_{n}, x \sim y \) if and only if \( y = {\sigma }^{m}\left( x\right) \) for some \( m \in \mathbf{Z} \) . The equivalence classes \( \left\{ {{B}_{i} \mid 1 \leq i \leq s}\right\} \) of this equivalence relation are called the orbits of \( \sigma \) and form a partition of \( {I}_{n} \) (Introduction, Theorem 4.1). Note that if \( x \in {B}_{i} \) , then \( {B}_{i} = \{ u \mid x \sim u\} = \left\{ {{\sigma }^{m}\left( x\right) \mid m \in \mathbf{Z}}\right\} \) . Let \( {B}_{1},{B}_{2},\ldots ,{B}_{r}\left( {1 \leq r \leq s}\right) \) be those orbits that contain more than one element each \( \left( {r \geq 1\text{since}\sigma \neq \left( 1\right) }\right) \) . For each \( i \leq r \) define \( {\sigma }_{i}\varepsilon {S}_{n} \) by:
\[
{\sigma }_{i}\left( x\right) = \left\{ \begin{array}{lll} \sigma \left( x\right) & \text{ if } & x \in {B}_{i} \\ x & \text{ if } & x \notin {B}_{i} \end{array}\right.
\]
Each \( {\sigma }_{i} \) is a well-defined nonidentity permutation of \( {I}_{n} \) since \( \sigma \mid {B}_{i} \) is a bijection \( {B}_{i} \rightarrow {B}_{i}.{\sigma }_{1},{\sigma }_{2},\ldots ,{\sigma }_{r} \) are disjoint permutations since the sets \( {B}_{1},\ldots ,{B}_{r} \) are mutually disjoint. Finally verify that \( \sigma = {\sigma }_{1}{\sigma }_{2}\cdots {\sigma }_{r} \) ; (note that \( x \in {B}_{i} \) implies \( \sigma \left( x\right) = {\sigma }_{i}\left( x\right) \) if \( i \leq r \) and \( \sigma \left( x\right) = x \) if \( i > r \) ; use disjointness). We must show that each \( {\sigma }_{i} \) is a cycle.
If \( x \in {B}_{i}\left( {i \leq r}\right) \), then since \( {B}_{i} \) is finite there is a least positive integer \( d \) such that \( {\sigma }^{d}\left( x\right) = {\sigma }^{j}\left( x\right) \) for some \( j\left( {0 \leq j < d}\right) \) . Since \( {\sigma }^{d - j}\left( x\right) = x \) and \( 0 < d - j \leq d \), we must have \( j = 0 \) and \( {\sigma }^{d}\left( x\right) = x \) . Hence \( \left( {{x\sigma }\left( x\right) {\sigma }^{2}\left( x\right) \cdots {\sigma }^{d - 1}\left( x\right) }\right) \) is a well-defined cycle of length at least 2 . If \( {\sigma }^{m}\left( x\right) \varepsilon {B}_{i} \), then \( m = {ad} + b \) for some \( a,{b\varepsilon }\mathbf{Z} \) such that \( 0 \leq b < d \) . Hence \( {\sigma }^{m}\left( x\right) = {\sigma }^{b + {ad}}\left( x\right) = {\sigma }^{b}{\sigma }^{ad}\left( x\right) = {\sigma }^{b}\left( x\right) \varepsilon \left\{ {x,\sigma \left( x\right) ,{\sigma }^{2}\left( x\right) ,\ldots ,{\sigma }^{d - 1}\left( x\right) }\right\} \) . Therefore \( {B}_{i} = \left\{ {x,\sigma \left( x\right) ,{\sigma }^{2}\left( x\right) ,\ldots ,{\sigma }^{d - 1}\left( x\right) }\right\} \) and it follows that \( {\sigma }_{i} \) is the cycle
\[
\left( {x\,\sigma \left( x\right) \,{\sigma }^{2}\left( x\right) \cdots {\sigma }^{d - 1}\left( x\right) }\right) .
\]
Suppose \( {\tau }_{1},\ldots ,{\tau }_{t} \) are disjoint cycles such that \( \sigma = {\tau }_{1}{\tau }_{2}\cdots {\tau }_{t} \) . Let \( x \in {I}_{n} \) be such that \( \sigma \left( x\right) \neq x \) . By disjointness there exists a unique \( j\left( {1 \leq j \leq t}\right) \) with \( \sigma \left( x\right) = {\tau }_{j}\left( x\right) \) . Since \( \sigma {\tau }_{j} = {\tau }_{j}\sigma \), we have \( {\sigma }^{k}\left( x\right) = {\tau }_{j}{}^{k}\left( x\right) \) for all \( k \in \mathbf{Z} \) . Therefore, the orbit of \( x \) under \( {\tau }_{j} \) is precisely the orbit of \( x \) under \( \sigma \), say \( {B}_{i} \) . Consequently, \( {\tau }_{j}\left( y\right) = \sigma \left( y\right) \) for every \( y \in {B}_{i} \) (since \( y = {\sigma }^{n}\left( x\right) = {\tau }_{j}^{n}\left( x\right) \) for some \( n \in \mathbf{Z} \) ). Since \( {\tau }_{j} \) is a cycle it has only one nontrivial orbit (verify!), which must be \( {B}_{i} \) since \( x \neq \sigma \left( x\right) = {\tau }_{j}\left( x\right) \) . Therefore \( {\tau }_{j}\left( y\right) = y \) for all \( y \notin {B}_{i} \), whence \( {\tau }_{j} = {\sigma }_{i} \) . A suitable inductive argument shows that \( r = t \) and (after reindexing) \( {\sigma }_{i} = {\tau }_{i} \) for each \( i = 1,2,\ldots, r \) .
Corollary 6.4. The order of a permutation \( \sigma \in {\mathrm{S}}_{\mathrm{n}} \) is the least common multiple of the orders of its disjoint cycles.
PROOF. Let \( \sigma = {\sigma }_{1}\cdots {\sigma }_{r} \), with \( \left\{ {\sigma }_{i}\right\} \) disjoint cycles. Since disjoint cycles commute, \( {\sigma }^{m} = {\sigma }_{1}{}^{m}\cdots {\sigma }_{r}{}^{m} \) for all \( m \in \mathbf{Z} \) and \( {\sigma }^{m} = \left( 1\right) \) if and only if \( {\sigma }_{i}{}^{m} = \left( 1\right) \) for all \( i \) . Therefore \( {\sigma }^{m} = \left( 1\right) \) if and only if \( \left| {\sigma }_{i}\right| \) divides \( m \) for all \( i \) (Theorem 3.4). Since \( \left| \sigma \right| \) is the least such \( m \), the conclusion follows.
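The orbit construction in the proof of Theorem 6.3 and the order formula of Corollary 6.4 translate directly into an algorithm; the following Python sketch (not part of the original text) uses \( \{ 0,\ldots, n - 1\} \) in place of \( {I}_{n} \) and represents a permutation as the list of its images.

```python
from functools import reduce
from math import lcm

def disjoint_cycles(sigma):
    """Disjoint cycles of length >= 2 of a permutation of {0, ..., n-1}, where
    sigma[k] is the image of k; this follows the orbit construction of Theorem 6.3."""
    n, seen, cycles = len(sigma), [False] * len(sigma), []
    for start in range(n):
        if seen[start]:
            continue
        orbit, seen[start] = [start], True
        k = sigma[start]
        while k != start:
            orbit.append(k)
            seen[k] = True
            k = sigma[k]
        if len(orbit) >= 2:
            cycles.append(tuple(orbit))
    return cycles

sigma = [3, 0, 4, 1, 2]                  # the permutation (0 3 1)(2 4), written as a list of images
cycles = disjoint_cycles(sigma)
print(cycles)                            # [(0, 3, 1), (2, 4)]

# Corollary 6.4: |sigma| = lcm of the cycle lengths; verify by computing sigma^order.
order = reduce(lcm, (len(c) for c in cycles), 1)
power = list(range(len(sigma)))
for _ in range(order):
    power = [sigma[p] for p in power]    # compose with sigma once more
print(order, power == list(range(len(sigma))))   # 6 True
```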
Corollary 6.5. Every permutation in \( {\mathrm{S}}_{\mathrm{n}} \) can be written as a product of (not necessarily disjoint) transpositions.
PROOF. It suffices by Theorem 6.3 to show that every cycle is a product of transpositions. This is easy: \( \left( {x}_{1}\right) = \left( {{x}_{1}{x}_{2}}\right) \left( {{x}_{1}{x}_{2}}\right) \) and for \( r > 1,\left( {{x}_{1}{x}_{2}{x}_{3}\cdots {x}_{r}}\right) \) \( = \left( {{x}_{1}{x}_{r}}\right) \left( {{x}_{1}{x}_{r - 1}}\right) \cdots \left( {{x}_{1}{x}_{3}}\right) \left( {{x}_{1}{x}_{2}}\right) . \)
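The factorization \( \left( {{x}_{1}{x}_{2}\cdots {x}_{r}}\right) = \left( {{x}_{1}{x}_{r}}\right) \left( {{x}_{1}{x}_{r - 1}}\right) \cdots \left( {{x}_{1}{x}_{2}}\right) \) used in this proof can likewise be checked mechanically; the sketch below (Python, not from the original text) multiplies permutations as functions, \( \left( {\sigma \tau }\right) \left( k\right) = \sigma \left( {\tau \left( k\right) }\right) \), again on \( \{ 0,\ldots, n - 1\} \).

```python
def cycle_to_transpositions(cycle):
    """(x1 x2 ... xr) = (x1 xr)(x1 x_{r-1}) ... (x1 x2), as in the proof of Corollary 6.5."""
    x1 = cycle[0]
    return [(x1, cycle[i]) for i in range(len(cycle) - 1, 0, -1)]

def transposition(i, j, n):
    p = list(range(n))
    p[i], p[j] = j, i
    return p

def compose(s, t):
    """Product of permutations as functions: (s t)(k) = s(t(k))."""
    return [s[t[k]] for k in range(len(s))]

n = 5
cycle = (0, 3, 1, 4)                          # the 4-cycle 0 -> 3 -> 1 -> 4 -> 0
factors = cycle_to_transpositions(cycle)      # [(0, 4), (0, 1), (0, 3)]
product = list(range(n))
for i, j in factors:                          # multiply left to right; the rightmost acts first
    product = compose(product, transposition(i, j, n))
print(factors, product)                       # product == [3, 4, 2, 1, 0], i.e. the 4-cycle

# Three transpositions are used here, so this 4-cycle is an odd permutation (Definition 6.6).
```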
Definition 6.6. A permutation \( \tau \in {\mathrm{S}}_{\mathrm{n}} \) is said to be even [resp. odd] if \( \tau \) can be written as a product of an even [resp. odd] number of transpositions.
The sign of a permutation \( \tau \), denoted sgn \( \tau \), is 1 or -1 according as \( \tau \) is even or odd. The fact that \( \operatorname{sgn}\tau \) is well defined is an immediate consequence of
Theorem 6.7. A permutation in \( {\mathrm{S}}_{\mathrm{n}}\left( {\mathrm{n} \geq 2}\right) \) cannot be both even and odd.
PROOF. Let \( {i}_{1},{i}_{2},\ldots ,{i}_{n} \) be the integers \( 1,2,\ldots, n \) in some order and define \( \Delta \left( {{i}_{1},\ldots ,{i}_{n}}\right) \) to be the integer \( \prod \left( {{i}_{j} - {i}_{k}}\right) \), where the product is taken over all pairs \( \left( {j, k}\right) \) such that \( 1 \leq j < k \leq n \) . Note that \( \Delta \left( {{i}_{1},\ldots ,{i}_{n}}\right) \neq 0 \) . We first compute \( \Delta \left( {\sigma \left( {i}_{1}\right) ,\ldots ,\sigma \left( {i}_{n}\right) }\right) \) when \( \sigma \in {S}_{n} \) is a transposition, say \( \sigma = \left( {{i}_{c}{i}_{d}}\right) \) with \( c < d \) . We have \( \Delta \left( {{i}_{1},\ldots ,{i}_{n}}\right) = \left( {{i}_{c} - {i}_{d}}\right) {ABCDEFG} \), where
\[
A = \mathop{\prod }\limits_{\substack{{j < k} \\ {j, k \neq c, d} }}\left( {{i}_{j} - {i}_{k}}\right) ;\;B = \mathop{\prod }\limits_{{j < c}}\left( {{i}_{j} - {i}_{c}}\right) ;\;C = \mathop{\prod }\limits_{{j < c}}\left( {{i}_{j} - {i}_{d}}\right) ;
\]
\[
D = \mathop{\prod }\limits_{{c < j < d}}\left( {{i}_{j} - {i}_{d}}\right) ;\;E = \mathop{\prod }\limits_{{c < k < d}}\left( {{i}_{c} - {i}_{k}}\right) ;\;F = \mathop{\prod }\limits_{{d < k}}\left( {{i}_{c} - {i}_{k}}\right) ;
\]
\[
G = \mathop{\prod }\limits_{{d < k}}\left( {{i}_{d} - {i}_{k}}\right)
\]
We write \( \sigma \left( A\right) \) for \( \mathop{\prod }\limits_{\substack{{j < k} \\ {j, k \neq c, d} }}\left( {\sigma \left( {i}_{j}\right) - \sigma \left( {i}_{k}\right) }\right) \) and similarly for \( \sigma \left( B\right) ,\sigma \left( C\right) \), etc. Verify that \( \sigma \left( A\right) = A;\sigma \left( B\right) = C \) and \( \sigma \left( C\right) = B;\sigma \left( D\right) = {\left( -1\right) }^{d - c - 1}E \) and \( \sigma \left( E\right) = {\left( -1\right) }^{d - c - 1}D \) ; \( \sigma \left( F\right) = G \), and \( \sigma \left( G\right) = F \) . Finally, \( \sigma \left( {{i}_{c} - {i}_{d}}\right) = \sigma \left( {i}_{c}\right) - \sigma \left( {i}_{d}\right) = {i}_{d} - {i}_{c} = - \left( {{i}_{c} - {i}_{d}}\right) \) . Consequently,
\[
\begin{aligned} \Delta \left( {\sigma \left( {i}_{1}\right) ,\ldots ,\sigma \left( {i}_{n}\right) }\right) & = \sigma \left( {{i}_{c} - {i}_{d}}\right) \sigma \left( A\right) \sigma \left( B\right) \cdots \sigma \left( G\right) \\ & = {\left( -1\right) }^{1 + 2\left( {d - c - 1}\right) }\left( {{i}_{c} - {i}_{d}}\right) {ABCDEFG} \end{aligned}
\]
\[
= - \Delta \left( {{i}_{1},\ldots ,{i}_{n}}\right) \text{.}
\]
Suppose for some \( \tau \in {S}_{n},\tau = {\tau }_{1}\cdots {\tau }_{r} \) and \( \tau = {\sigma }_{1}\cdots {\sigma }_{s} \) with \( {\tau }_{i},{\sigma }_{j} \) transpositions, \( r \) even and \( s \) odd. Then for \( \left( {{i}_{1},\ldots ,{i}_{n}}\right) = \left( {1,2,\ldots, n}\right) \) the previous paragraph impl |
112_TopicsInAnalysis | Definition 2 |
Definition 2. If \( \mathfrak{A} \) is an ideal in \( R \), the RADICAL of \( \mathfrak{A} \), denoted by \( \sqrt{\mathfrak{A}} \) , consists of all elements \( b \) of \( R \) some power of which belongs to \( \mathfrak{A} \) .
THEOREM 9. The radical of \( \mathfrak{A} \) is an ideal containing \( \mathfrak{A} \) . It satisfies the following rules:
(7)
If \( {\mathfrak{A}}^{k} \subset \mathfrak{B} \), for some positive integer \( k \), then \( \sqrt{\mathfrak{A}} \subset \sqrt{\mathfrak{B}} \) ;
(8)
\[
\sqrt{\mathfrak{{AB}}} = \sqrt{\mathfrak{A} \cap \mathfrak{B}} = \sqrt{\mathfrak{A}} \cap \sqrt{\mathfrak{B}}
\]
(9)
\[
\sqrt{\mathfrak{A} + \mathfrak{B}} = \sqrt{\sqrt{\mathfrak{A}} + \sqrt{\mathfrak{B}}}
\]
(10)
\[
\sqrt{\sqrt{\mathfrak{A}}} = \sqrt{\mathfrak{A}}.
\]
Proof. That \( \mathfrak{A} \subset \sqrt{\mathfrak{A}} \) is obvious. To show that \( \sqrt{\mathfrak{A}} \) is an ideal, let \( b, c \in \sqrt{\mathfrak{A}} \), so that \( {b}^{m} \in \mathfrak{A},{c}^{n} \in \mathfrak{A} \) for some integers \( m, n \) . In the binomial expansion of \( {\left( b - c\right) }^{m + n - 1} \) every term has a factor \( {b}^{i}{c}^{j} \), with \( i + j = m + n - 1 \) . Since either \( i \geqq m \) or \( j \geqq n \), either \( {b}^{i} \) or \( {c}^{j} \) is in \( \mathfrak{A} \), hence \( {\left( b - c\right) }^{m + n - 1} \in \mathfrak{A} \), and \( b - c \in \sqrt{\mathfrak{A}} \) . If \( b \in \sqrt{\mathfrak{A}} \), and \( d \in R \) , then \( {b}^{m} \in \mathfrak{A} \) for some \( m \) ; hence \( {\left( db\right) }^{m} \in \mathfrak{A} \) and \( {db} \in \sqrt{\mathfrak{A}} \) . Thus \( \sqrt{\mathfrak{A}} \) is indeed an ideal.
Suppose \( {\mathfrak{A}}^{k} \subset \mathfrak{B} \) . If \( c \in \sqrt{\mathfrak{A}} \), then \( {c}^{m} \in \mathfrak{A} \) for some \( m,{c}^{mk} \in {\mathfrak{A}}^{k} \subset \mathfrak{B} \) , hence \( c \in \sqrt{\mathfrak{B}} \) .
Since \( \mathfrak{{AB}} \subset \mathfrak{A} \cap \mathfrak{B} \), and \( \mathfrak{A} \cap \mathfrak{B} \) is contained in \( \mathfrak{A} \) and in \( \mathfrak{B} \), we have by (7) (with \( k = 1 \) ) that \( \sqrt{\mathfrak{{AB}}} \subset \sqrt{\mathfrak{A} \cap \mathfrak{B}} \subset \sqrt{\mathfrak{A}} \cap \sqrt{\mathfrak{B}} \) . Now if \( c \in \sqrt{\mathfrak{A}} \cap \sqrt{\mathfrak{B}} \), then there exist integers \( m \) and \( n \) such that \( {c}^{m} \in \mathfrak{A} \) , \( {c}^{n} \in \mathfrak{B} \) . Then \( {c}^{m}{c}^{n} \in \mathfrak{{AB}} \), whence \( c \in \sqrt{\mathfrak{{AB}}} \) . Thus (8) is proved.
Equation (10) is obvious. Since \( \mathfrak{A} + \mathfrak{B} \subset \sqrt{\mathfrak{A}} + \sqrt{\mathfrak{B}} \) ,
\[
\sqrt{\mathfrak{A} + \mathfrak{B}} \subset \sqrt{\sqrt{\mathfrak{A}} + \sqrt{\mathfrak{B}}};
\]
since
\[
\sqrt{\mathfrak{A}} + \sqrt{\mathfrak{B}} \subset \sqrt{\mathfrak{A} + \mathfrak{B}},\sqrt{\sqrt{\mathfrak{A}} + \sqrt{\mathfrak{B}}} \subset \sqrt{\sqrt{\mathfrak{A} + \mathfrak{B}}} = \sqrt{\mathfrak{A} + \mathfrak{B}}.
\]
This proves (9).
If \( \mathfrak{A} = \left( 0\right) \), then \( \sqrt{\mathfrak{A}} \) consists of all nilpotent elements-that is, elements some power of which is 0 . This ideal is sometimes called the radical of the ring \( R \) .
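For the ring of integers these notions are concrete: under the standard correspondences \( \left( a\right) \left( b\right) = \left( {ab}\right) \), \( \left( a\right) \cap \left( b\right) = \left( {\operatorname{lcm}\left( {a, b}\right) }\right) \), \( \left( a\right) + \left( b\right) = \left( {\gcd \left( {a, b}\right) }\right) \) and \( \sqrt{\left( n\right) } = \left( {\operatorname{rad}\left( n\right) }\right) \), where \( \operatorname{rad}\left( n\right) \) is the product of the distinct primes dividing \( n \), rules (8)-(10) and the description of the nilpotent elements of \( \mathbf{Z}/n\mathbf{Z} \) can be verified numerically. The following Python sketch (not part of the original text) does this for sample values.

```python
from math import gcd

def rad(n):
    """Product of the distinct primes dividing n, so that sqrt((n)) = (rad(n)) in Z."""
    r, m, p = 1, n, 2
    while p * p <= m:
        if m % p == 0:
            r *= p
            while m % p == 0:
                m //= p
        p += 1
    return r * m if m > 1 else r

def lcm(a, b):
    return a * b // gcd(a, b)

a, b = 360, 700
assert rad(a * b) == rad(lcm(a, b)) == lcm(rad(a), rad(b))   # rule (8) for (a), (b)
assert rad(gcd(a, b)) == rad(gcd(rad(a), rad(b)))            # rule (9)
assert rad(rad(a)) == rad(a)                                 # rule (10)

# Nilpotent elements of Z/nZ (the radical of the zero ideal) are the multiples of rad(n):
n = 72                                                       # 72 = 2^3 * 3^2, rad(72) = 6
nilpotents = [x for x in range(n) if pow(x, n, n) == 0]      # x^n = 0 mod n iff x is nilpotent
assert nilpotents == list(range(0, n, rad(n)))
print("radical identities verified")
```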
We now consider the effect of a homomorphic mapping on the five operations defined above. Suppose, then, that \( R \) and \( {R}^{\prime } \) are rings, \( T \) a homomorphism of \( R \) onto \( {R}^{\prime } \) with kernel \( N \) . If \( \mathfrak{A} \) and \( \mathfrak{B} \) are ideals in \( R \), then
(11) \( \mathfrak{A} \subset \mathfrak{B} \) implies \( \mathfrak{A}T \subset \mathfrak{B}T \) ;
(12) \( \left( {\mathfrak{A} + \mathfrak{B}}\right) T = \mathfrak{A}T + \mathfrak{B}T \) ;
(13) \( \left( \mathfrak{{AB}}\right) T = \left( {\mathfrak{A}T}\right) \left( {\mathfrak{B}T}\right) \) ;
(14) \( \left( {\mathfrak{A} \cap \mathfrak{B}}\right) T \subset \mathfrak{A}T \cap \mathfrak{B}T \), with equality if \( \mathfrak{A} \supset N \) or if \( \mathfrak{B} \supset N \) ;
(15) \( \left( {\mathfrak{A} : \mathfrak{B}}\right) T \subset \mathfrak{A}T : \mathfrak{B}T \), with equality if \( \mathfrak{A} \supset N \) ;
(16) \( \sqrt{\mathfrak{A}}T \subset \sqrt{\mathfrak{A}T} \), with equality if \( \mathfrak{A} \supset N \) .
If \( {\mathfrak{A}}^{\prime } \) and \( {\mathfrak{B}}^{\prime } \) are ideals in \( {R}^{\prime } \), then:
(17) \( {\mathfrak{A}}^{\prime } \subset {\mathfrak{B}}^{\prime } \) implies \( {\mathfrak{A}}^{\prime }{T}^{-1} \subset {\mathfrak{B}}^{\prime }{T}^{-1} \) ;
(18) \( \left( {{\mathfrak{A}}^{\prime } + {\mathfrak{B}}^{\prime }}\right) {T}^{-1} = {\mathfrak{A}}^{\prime }{T}^{-1} + {\mathfrak{B}}^{\prime }{T}^{-1} \) ;
(19) \( \left( {{\mathfrak{A}}^{\prime }{\mathfrak{B}}^{\prime }}\right) {T}^{-1} \supset \left( {{\mathfrak{A}}^{\prime }{T}^{-1}}\right) \left( {{\mathfrak{B}}^{\prime }{T}^{-1}}\right) \), with equality if the right side contains \( N \) ;
(20) \( \left( {{\mathfrak{A}}^{\prime } \cap {\mathfrak{B}}^{\prime }}\right) {T}^{-1} = {\mathfrak{A}}^{\prime }{T}^{-1} \cap {\mathfrak{B}}^{\prime }{T}^{-1} \) ;
(21) \( \left( {{\mathfrak{A}}^{\prime } : {\mathfrak{B}}^{\prime }}\right) {T}^{-1} = {\mathfrak{A}}^{\prime }{T}^{-1} : {\mathfrak{B}}^{\prime }{T}^{-1} \)
(22) \( \sqrt{{\mathfrak{A}}^{\prime }}{T}^{-1} = \sqrt{{\mathfrak{A}}^{\prime }{T}^{-1}} \)
Most of the statements (11)-(16) are trivial. As an example we prove (15). If \( {c}^{\prime } \in \left( {\mathfrak{A} : \mathfrak{B}}\right) T \), then \( {c}^{\prime } = {cT} \), where \( c \in \mathfrak{A} : \mathfrak{B} \) ; then \( c\mathfrak{B} \subset \mathfrak{A} \) , \( \left( {cT}\right) \left( {\mathfrak{B}T}\right) \subset \mathfrak{A}T,{cT} \in \mathfrak{A}T : \mathfrak{B}T \) . On the other hand, suppose \( {c}^{\prime } \in \) \( \mathfrak{A}T : \mathfrak{B}T \), whence \( {c}^{\prime }\left( {\mathfrak{B}T}\right) \subset \mathfrak{A}T \) ; since \( T \) maps \( R \) onto \( {R}^{\prime } \), there is an element \( c \) in \( R \) such that \( {c}^{\prime } = {cT} \) . Then \( \left( {c\mathfrak{B}}\right) T \subset \mathfrak{A}T, c\mathfrak{B} \subset \left( {\mathfrak{A}T}\right) {T}^{-1} \) ; if \( \mathfrak{A} \supset N \), then \( \left( {\mathfrak{A}T}\right) {T}^{-1} = \mathfrak{A} \), hence \( c\mathfrak{B} \subset \mathfrak{A}, c \in \mathfrak{A} : \mathfrak{B},{c}^{\prime } = {cT} \in \) \( \left( {\mathfrak{A} : \mathfrak{B}}\right) T \) .
To prove (17)-(22), let \( \mathfrak{A} = {\mathfrak{A}}^{\prime }{T}^{-1},\mathfrak{B} = {\mathfrak{B}}^{\prime }{T}^{-1} \), so \( {\mathfrak{A}}^{\prime } = \mathfrak{A}T \), \( {\mathfrak{B}}^{\prime } = \mathfrak{B}T \), and \( \mathfrak{A} \) and \( \mathfrak{B} \) contain \( N \) . To prove (19), for example, we observe that \( {\mathfrak{A}}^{\prime }{\mathfrak{B}}^{\prime } = \left( \mathfrak{{AB}}\right) T,\left( {{\mathfrak{A}}^{\prime }{\mathfrak{B}}^{\prime }}\right) {T}^{-1} = \left( {\left( \mathfrak{{AB}}\right) T}\right) {T}^{-1} \supset \mathfrak{{AB}} \) ; and equality holds at this last point if \( \mathfrak{A}\mathfrak{B} \supset N \) . Again, for (21), \( {\mathfrak{A}}^{\prime } : {\mathfrak{B}}^{\prime } = \left( {\mathfrak{A} : \mathfrak{B}}\right) T \) by \( \left( {15}\right) \), since \( \mathfrak{A} \supset N \) . Thus \( \left( {{\mathfrak{A}}^{\prime } : {\mathfrak{B}}^{\prime }}\right) {T}^{-1} = \left( {\left( {\mathfrak{A} : \mathfrak{B}}\right) T}\right) {T}^{-1} = \mathfrak{A} : \mathfrak{B} \), since \( \mathfrak{A} : \mathfrak{B} \supset \mathfrak{A} \supset N \) . The others are similarly proved.
§ 8. Prime and maximal ideals. So far we have considered ideals in all generality. Now we consider two important special types of ideals.
Definition. Let \( \mathfrak{A} \) be an ideal in a ring \( R \) . \( \mathfrak{A} \) is said to be prime if whenever a product of two elements of \( R \) is in \( \mathfrak{A} \), then at least one of the factors is in \( \mathfrak{A} \) . An ideal \( \mathfrak{A} \) is said to be MAXIMAL if \( \mathfrak{A} \neq R \) and if there is no ideal between \( \mathfrak{A} \) and \( R \) .
Thus \( \mathfrak{A} \) is prime if “ \( b, c \in R,{bc} \in \mathfrak{A} \) ” implies \( b \in \mathfrak{A} \) or \( c \in \mathfrak{A} \) . In particular, \( R \) itself is always prime; (0) is prime if and only if \( R \) has no zero divisors.
We illustrate this definition by some examples.
1) If \( J \) is the ring of integers and \( n \in J, n > 1 \), then the principal ideal \( \left( n\right) \) is prime if and only if \( n \) is a prime number. This follows from the fact that (1) if \( n \) is a prime number then " \( {ab} \) divisible by \( n \) " always implies that either \( a \) or \( b \) is divisible by \( n \) and (2) if \( n \) is not a prime number then, by definition, there exist integers \( a, b \) such that \( {ab} = n \) , \( 0 < a < n,0 < b < n \) .
2) The foregoing reasoning can be repeated without change for any unique factorization domain \( R \), showing that if \( w \in R \) then the principal ideal \( \left( w\right) \) is prime if and only if \( w \) is either a unit or irreducible. (See I, § 14, Theorem 4.)
3) Let \( R = F\left\lbrack {{x}_{1},{x}_{2},\cdots ,{x}_{n}}\right\rbrack \) be a polynomial ring in \( n \) variables \( {x}_{i} \) over a field \( F \) . If \( {a}_{1},{a}_{2},\cdots ,{a}_{n} \) are given elements of \( F \), then the elements \( g\left( {{x}_{1},{x}_{2},\cdots ,{x}_{n}}\right) \) of \( R \) such that \( g\left( {{a}_{1},{a}_{2},\cdots ,{a}_{n}}\right) = 0 \) form a prime ideal \( \mathfrak{p} \) in \( R \) . Since \( R \) is also a polynomial ring in the elements \( {x}_{1} - {a}_{1},{x}_{2} - {a}_{2},\cdots ,{x}_{n} - {a}_{n} \), every polynomial \( f\left( x\right) \) in \( R \) can be written in the form \( b + \mathop{\sum }\limits_{i}{f}_{i}\left( x\right) \left( {{x}_{i} - {a}_{i}}\right) \), where \( b \in F \) and the \( {f}_{i}\left( x\right) \) are elements of \( R |
111_111_Three Dimensional Navier-Stokes Equations-James_C._Robinson,_Jos_L._Rodrigo,_Witold_Sadows(z-lib.org | Definition 15.2 |
Definition 15.2 Let \( f \) be a function on \( \left( {a, b}\right) \) . We say that \( f \) is in \( {L}^{2} \) near \( a \) (resp. \( b) \) if there is a number \( c \in \left( {a, b}\right) \) such that \( f \in {L}^{2}\left( {a, c}\right) \) (resp. \( f \in {L}^{2}\left( {c, b}\right) \) ).
Let \( \lambda \in \mathbb{C} \), and let \( f \) be a solution of (15.5) on \( \left( {a, b}\right) \) . Since then \( f,{f}^{\prime } \in {AC}\left\lbrack {\alpha ,\beta }\right\rbrack \) for any interval \( \left\lbrack {\alpha ,\beta }\right\rbrack \subset \left( {a, b}\right) \), it is clear that \( f \in \mathcal{N}\left( {{T}^{ * } - {\lambda I}}\right) \), or equivalently \( f \in {L}^{2}\left( {a, b}\right) \), if and only if \( f \) is in \( {L}^{2} \) near \( a \) and \( f \) is in \( {L}^{2} \) near \( b \) . This simple fact will be often used without mention when we study the deficiency indices of \( T \) .
Some results concerning regular end points are collected in the next proposition.
Proposition 15.5 Suppose that the end point \( a \) is regular for \( \mathcal{L} \) .
(i) If \( f \in \mathcal{D}\left( {T}^{ * }\right) \), then \( f \) and \( {f}^{\prime } \) can be extended to continuous functions on \( \lbrack a, b) \) .
(ii) Let \( f \) be a solution of (15.5) on \( \left( {a, b}\right) \), where \( \lambda \in \mathbb{C} \) . Then \( f \) and \( {f}^{\prime } \) extend to continuous functions on \( \lbrack a, b) \) . In particular, \( f \) is in \( {L}^{2} \) near \( a \) .
(iii) The vector space \( \left\{ {\left( {f\left( a\right) ,{f}^{\prime }\left( a\right) }\right) : f \in \mathcal{D}\left( {T}^{ * }\right) }\right\} \) is equal to \( {\mathbb{C}}^{2} \) .
(iv) If \( f \in \mathcal{D}\left( \bar{T}\right) \), then \( f\left( a\right) = {f}^{\prime }\left( a\right) = 0 \) .
(v) If \( \mathcal{L} \) is regular at \( a \) and \( b \), then \( f\left( a\right) = {f}^{\prime }\left( a\right) = f\left( b\right) = {f}^{\prime }\left( b\right) = 0 \) for any \( f \in \mathcal{D}\left( \bar{T}\right) \), and the vector space \( \left\{ {\left( {f\left( a\right) ,{f}^{\prime }\left( a\right), f\left( b\right) ,{f}^{\prime }\left( b\right) }\right) : f \in \mathcal{D}\left( {T}^{ * }\right) }\right\} \) is equal to \( {\mathbb{C}}^{4} \) .
Proof Since \( {T}^{ * }f \in {L}^{2}\left( {a, b}\right) \subseteq {L}_{\text{loc }}^{1}\left( {a, b}\right) \), Proposition 15.3(i) applies to \( g \mathrel{\text{:=}} {T}^{ * }f \) and \( \lambda = 0 \) and yields the assertion of (i). (ii) follows also from Proposition 15.3(i) now applied with \( g = 0 \) .
(iii): Let \( \left\{ {{u}_{1},{u}_{2}}\right\} \) be a fundamental system of solutions of \( \mathcal{L}\left( f\right) = 0 \) on \( \left( {a, b}\right) \) . By Proposition 15.3(i), \( {u}_{j} \) and \( {u}_{j}^{\prime } \) extend to continuous functions on \( \lbrack a, b) \) . Then the Wronskian \( W = {u}_{1}{u}_{2}^{\prime } - {u}_{1}^{\prime }{u}_{2} \) is a nonzero constant on \( \lbrack a, b) \) . For \( \left( {{\alpha }_{1},{\alpha }_{2}}\right) \in {\mathbb{C}}^{2} \), set \( g \mathrel{\text{:=}} {\alpha }_{1}{u}_{1} + {\alpha }_{2}{u}_{2} \) . Given \( \left( {{c}_{1},{c}_{2}}\right) \in {\mathbb{C}}^{2} \), the requirements \( g\left( a\right) = {c}_{1},{g}^{\prime }\left( a\right) = {c}_{2} \) lead to a system of two linear equations for \( {\alpha }_{1} \) and \( {\alpha }_{2} \) . It has a unique solution, because its determinant is the nonzero Wronskian \( W \) . Let \( g \) be the corresponding function. Choosing a function \( f \in \mathcal{D}\left( {T}^{ * }\right) \) such that \( f = g \) in some neighborhood of \( a \) and \( f = 0 \) in some neighborhood of \( b \), we have \( f\left( a\right) = {c}_{1},{f}^{\prime }\left( a\right) = {c}_{2} \) .
(iv): Let \( f \in \mathcal{D}\left( \bar{T}\right) \) and \( g \in \mathcal{D}\left( {T}^{ * }\right) \) . By (i), \( f,{f}^{\prime }, g \), and \( {g}^{\prime } \) extend by continuity to \( a \) . From Lemma 15.2 we obtain \( {\left\lbrack f, g\right\rbrack }_{a} = f\left( a\right) {g}^{\prime }\left( a\right) - {f}^{\prime }\left( a\right) g\left( a\right) = 0 \) . Because the values \( g\left( a\right) ,{g}^{\prime }\left( a\right) \) are arbitrary by (iii), the latter implies that \( f\left( a\right) = {f}^{\prime }\left( a\right) = 0 \) .
(v): The first assertion follows at once from (iv) and the corresponding result for \( b \) . Let \( c = \left( {{c}_{1},{c}_{2},{c}_{3},{c}_{4}}\right) \in {\mathbb{C}}^{4} \) be given. By (iii) and its analogue at \( b \), there are \( g, h \in \mathcal{D}\left( {T}^{ * }\right) \) satisfying \( g\left( a\right) = {c}_{1},{g}^{\prime }\left( a\right) = {c}_{2}, h\left( b\right) = {c}_{3},{h}^{\prime }\left( b\right) = {c}_{4} \) . We choose functions \( {g}_{0},{h}_{0} \in \mathcal{D}\left( {T}^{ * }\right) \) such that \( g = {g}_{0} \) and \( {h}_{0} = 0 \) in some neighborhood of \( a \) and \( {g}_{0} = 0 \) and \( h = {h}_{0} \) in some neighborhood of \( b \) . Setting \( f \mathrel{\text{:=}} {g}_{0} + {h}_{0} \), we then have \( f\left( a\right) = {c}_{1},{f}^{\prime }\left( a\right) = {c}_{2} \) , \( f\left( b\right) = {c}_{3},{f}^{\prime }\left( b\right) = {c}_{4}. \)
Proposition 15.6 Let \( \lambda \in \mathbb{C} \smallsetminus \mathbb{R} \) . Then for each end point of the interval \( \left( {a, b}\right) \), there exists a nonzero solution of (15.5) which is in \( {L}^{2} \) near to it.
Proof We carry out the proof for the end point \( b \) ; the case of \( a \) is similar.
Fix \( c \in \left( {a, b}\right) \) and let \( {T}_{c} \) denote the corresponding operator on the interval \( \left( {c, b}\right) \) in \( {L}^{2}\left( {c, b}\right) \) . Choosing functions \( f, g \in {C}_{0}^{\infty }\left( {a, b}\right) \) such that \( f\left( c\right) = {g}^{\prime }\left( c\right) = 0 \) and \( {f}^{\prime }\left( c\right) = g\left( c\right) = 1 \) and applying (15.4) to the operator \( {T}_{c}^{ * } \) we obtain
\[
\left\langle {{T}_{c}^{ * }f, g}\right\rangle - \left\langle {f,{T}_{c}^{ * }g}\right\rangle = - {\left\lbrack f, g\right\rbrack }_{c} = 1.
\]
Hence, \( {T}_{c}^{ * } \) is not symmetric. Therefore, by Proposition 15.4, the symmetric operator \( {T}_{c} \) has nonzero equal deficiency indices. Hence, there exists a nonzero \( {f}_{0} \) in \( \mathcal{N}\left( {{T}_{c}^{ * } - {\lambda I}}\right) \) . Then \( \mathcal{L}\left( {f}_{0}\right) \left( x\right) = \lambda {f}_{0}\left( x\right) \) on \( \left( {c, b}\right) \) . We fix a number \( d \in \left( {c, b}\right) \) . By Proposition 15.3(ii) there exists a solution \( f \) of the equation \( \mathcal{L}\left( f\right) = {\lambda f} \) on \( \left( {a, b}\right) \) such that \( f\left( d\right) = {f}_{0}\left( d\right) \) and \( {f}^{\prime }\left( d\right) = {f}_{0}^{\prime }\left( d\right) \) . From the uniqueness assertion of Proposition 15.3(ii) it follows that \( f\left( x\right) = {f}_{0}\left( x\right) \) on \( \left( {c, b}\right) \) . Then \( f \) is a nonzero solution of (15.5), and \( f \) is in \( {L}^{2} \) near \( b \), because \( {f}_{0} \in {L}^{2}\left( {c, b}\right) \) .
Corollary 15.7 If at least one end point is regular, then the deficiency indices of \( T \) are \( \left( {1,1}\right) \) or \( \left( {2,2}\right) \) . If both end points are regular, then \( T \) has deficiency indices \( \left( {2,2}\right) \) .
Proof Let \( \lambda \in \mathbb{C} \smallsetminus \mathbb{R} \) . If both end points are regular, by Proposition 15.5(ii) all solutions \( f \) of (15.5) are in \( {L}^{2} \) near \( a \) and in \( {L}^{2} \) near \( b \), so \( f \in \mathcal{N}\left( {{T}^{ * } - {\lambda I}}\right) \) . Thus, \( \dim \mathcal{N}\left( {{T}^{ * } - {\lambda I}}\right) = 2 \), and \( T \) has deficiency indices \( \left( {2,2}\right) \) .
Suppose that \( \mathcal{L} \) is regular at one end point, say \( a \) . By Proposition 15.6 there is a nonzero solution \( f \) of (15.5) which is in \( {L}^{2} \) near \( b \) . Since \( \mathcal{L} \) is regular at \( a, f \) is also in \( {L}^{2} \) near \( a \) by Proposition 15.5(ii). Therefore, \( f \in {L}^{2}\left( {a, b}\right) \), and hence \( f \in \mathcal{N}\left( {{T}^{ * } - {\lambda I}}\right) \) . Hence, by Proposition 15.4, the deficiency indices of \( T \) are \( \left( {1,1}\right) \) or \( \left( {2,2}\right) \) .
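As a concrete illustration of Proposition 15.6 and Corollary 15.7 (not taken from the original text, and assuming the special case \( \mathcal{L}\left( f\right) = - {f}^{\prime \prime } \) on \( \left( {0,\infty }\right) \), so that the end point \( 0 \) is regular): for \( \lambda = i \) the solutions of \( - {f}^{\prime \prime } = {\lambda f} \) are \( {e}^{\pm {kx}} \) with \( {k}^{2} = - i \), and a numerical check of the \( {L}^{2} \) norms near \( \infty \) shows that exactly one of them is in \( {L}^{2} \) near \( \infty \), so the deficiency indices are \( \left( {1,1}\right) \).

```python
import numpy as np

# Assumed special case: L(f) = -f'' on (0, infinity); the end point 0 is regular.
# For lambda = i the solutions of -f'' = lambda f are exp(+k x) and exp(-k x), k^2 = -i.
lam = 1j
k = np.sqrt(-lam)                       # principal root, Re k = 1/sqrt(2) > 0
xs = np.linspace(1.0, 200.0, 400_000)
dx = xs[1] - xs[0]

for sign, label in [(-1, "exp(-k x)"), (+1, "exp(+k x)")]:
    f = np.exp(sign * k * xs)
    norm_sq = np.sum(np.abs(f) ** 2) * dx        # Riemann sum for int_1^200 |f|^2 dx
    print(f"{label}:  int_1^200 |f|^2 dx ~ {norm_sq:.3e}")
# Only exp(-k x) has a finite L^2 norm near infinity, so exactly one solution (up to a
# constant factor) is in L^2 near the singular end point, and the deficiency indices
# of the corresponding operator T are (1, 1).
```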
## 15.2 Limit Circle Case and Limit Point Case
The following classical result of \( \mathrm{H} \) . Weyl is crucial for the study of deficiency indices and of self-adjointness questions for Sturm-Liouville operators.
Theorem 15.8 (Weyl’s alternative) Let \( d \) denote an end point of the interval \( \left( {a, b}\right) \) . Then precisely one of the following two possibilities is valid:
(i) For each \( \lambda \in \mathbb{C} \), all solutions of (15.5) are in \( {L}^{2} \) near \( d \) .
(ii) For each \( \lambda \in \mathbb{C} \), there exists one solution of (15.5) which is not in \( {L}^{2} \) near \( d \) .
In case (ii), for any \( \lambda \in \mathbb{C} \smallsetminus \mathbb{R} \), there is a unique (up to a constant factor) nonzero solution of (15.5) which is in \( {L}^{2} \) near \( d \) .
Proof We shall carry out the proof for \( d = b \) . Since the solution space of (15.5) is two-dimensional, the last assertion only restates Proposition 15.6.
To prove the alternative, it suffices to show that if for some \( {\lambda }_{0} \in \mathbb{C} \), all solutions of (15.5) are in \( {L}^{2} \) near \( b \), this is also true for each \( \lambda \in \mathbb{C} \) . Let \( u \) be a solution of \( \mathcal{L}\left( f\right) = {\lambda f} \) . We have to prove that \( u \in {L}^{2}\left( {c, b}\right) \) for some \( c \in \left( {a, b}\right) \) .
We fix a basis \( \left\{ {{u}_{1},{u}_{2}}\right\} \) of the solution space of \( \mathcal{L}\left( f\right) = {\lambda }_{0}f \) . Then the Wronskian \( W\left( {{u}_{1},{u}_{2}}\right) \) is a nonzero constant on \( \left( {a, b}\right) \), so we |
11_2022-An_Analogy_of_the_Carleson-Hunt_Theorem_with_Respe | Definition 2.7 |
Definition 2.7. Let \( \left( {X,{\mathcal{B}}_{X},\mu, T}\right) \) and \( \left( {Y,{\mathcal{B}}_{Y},\nu, S}\right) \) be measure-preserving systems on probability spaces.
(1) The system \( \left( {Y,{\mathcal{B}}_{Y},\nu, S}\right) \) is a factor of \( \left( {X,{\mathcal{B}}_{X},\mu, T}\right) \) if there are sets \( {X}^{\prime } \) in \( {\mathcal{B}}_{X} \) and \( {Y}^{\prime } \) in \( {\mathcal{B}}_{Y} \) with \( \mu \left( {X}^{\prime }\right) = 1,\nu \left( {Y}^{\prime }\right) = 1, T{X}^{\prime } \subseteq {X}^{\prime }, S{Y}^{\prime } \subseteq {Y}^{\prime } \) and a measure-preserving map \( \phi : {X}^{\prime } \rightarrow {Y}^{\prime } \) with
\[
\phi \circ T\left( x\right) = S \circ \phi \left( x\right)
\]
for all \( x \in {X}^{\prime } \) .
(2) The system \( \left( {Y,{\mathcal{B}}_{Y},\nu, S}\right) \) is isomorphic to \( \left( {X,{\mathcal{B}}_{X},\mu, T}\right) \) if there are sets \( {X}^{\prime } \) in \( {\mathcal{B}}_{X},{Y}^{\prime } \) in \( {\mathcal{B}}_{Y} \) with \( \mu \left( {X}^{\prime }\right) = 1,\nu \left( {Y}^{\prime }\right) = 1, T{X}^{\prime } \subseteq {X}^{\prime }, S{Y}^{\prime } \subseteq {Y}^{\prime } \) , and an invertible measure-preserving map \( \phi : {X}^{\prime } \rightarrow {Y}^{\prime } \) with
\[
\phi \circ T\left( x\right) = S \circ \phi \left( x\right)
\]
for all \( x \in {X}^{\prime } \) .
In measure theory it is natural to simply ignore null sets, and we will sometimes loosely think of a factor as a measure-preserving map \( \phi : X \rightarrow Y \) for which the diagram
![8dbccc5c-1b75-4e41-a357-96decb48997c_35_0.jpg](images/8dbccc5c-1b75-4e41-a357-96decb48997c_35_0.jpg)
is commutative, with the understanding that the map is not required to be defined everywhere.
A factor map
\[
\left( {X,{\mathcal{B}}_{X},\mu, T}\right) \rightarrow \left( {Y,{\mathcal{B}}_{Y},\nu, S}\right)
\]
will also be described as an extension of \( \left( {Y,{\mathcal{B}}_{Y},\nu, S}\right) \) . The factor \( \left( {Y,{\mathcal{B}}_{Y},\nu, S}\right) \) is called trivial if as a measure space \( Y \) comprises a single element; the extension is called trivial if \( \phi \) is an isomorphism of measure spaces.
Example 2.8. Define the \( \left( {\frac{1}{2},\frac{1}{2}}\right) \) measure \( {\mu }_{\left( 1/2,1/2\right) } \) on the finite set \( \{ 0,1\} \) by
\[
{\mu }_{\left( 1/2,1/2\right) }\left( {\{ 0\} }\right) = {\mu }_{\left( 1/2,1/2\right) }\left( {\{ 1\} }\right) = \frac{1}{2}.
\]
Let \( X = \{ 0,1{\} }^{\mathbb{N}} \) with the infinite product measure \( \mu = \mathop{\prod }\limits_{\mathbb{N}}{\mu }_{\left( 1/2,1/2\right) } \) (see Sect. A. 2 and Example 2.9 where we will generalize this example). This space is a natural model for the set of possible outcomes of the infinitely repeated toss of a fair coin. The left shift map \( \sigma : X \rightarrow X \) defined by
\[
\sigma \left( {{x}_{0},{x}_{1},\ldots }\right) = \left( {{x}_{1},{x}_{2},\ldots }\right)
\]
preserves \( \mu \) (since it preserves the measure of the cylinder sets described in Example 2.9). The map \( \phi : X \rightarrow \mathbb{T} \) defined by
\[
\phi \left( {{x}_{0},{x}_{1},\ldots }\right) = \mathop{\sum }\limits_{{n = 0}}^{\infty }\frac{{x}_{n}}{{2}^{n + 1}}
\]
is measure-preserving from \( \left( {X,\mu }\right) \) to \( \left( {\mathbb{T},{m}_{\mathbb{T}}}\right) \) and \( \phi \left( {\sigma \left( x\right) }\right) = {T}_{2}\left( {\phi \left( x\right) }\right) \) . The map \( \phi \) has a measurable inverse defined on all but the countable set of dyadic rationals \( \mathbb{Z}\left\lbrack \frac{1}{2}\right\rbrack /\mathbb{Z} \), where
\[
\mathbb{Z}\left\lbrack \frac{1}{2}\right\rbrack = \left\{ {\frac{m}{{2}^{n}} \mid m \in \mathbb{Z}, n \in \mathbb{N}}\right\}
\]
so this shows that \( \left( {X,\mu ,\sigma }\right) \) and \( \left( {\mathbb{T},{m}_{\mathbb{T}},{T}_{2}}\right) \) are measurably isomorphic.
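The conjugacy \( \phi \circ \sigma = {T}_{2} \circ \phi \) can be checked concretely on truncated coin-toss sequences. The following short Python sketch is not from the text (the helper names are ours); it uses exact rational arithmetic, and the identity becomes exact once one digit is dropped from the truncation.

```python
# Check phi(sigma(x)) == T_2(phi(x)) on truncated coin-toss sequences.
# An illustrative sketch; the helper names are ours, not the book's.
from fractions import Fraction
import random

def phi(bits):
    """phi(x) = sum_n x_n / 2^(n+1), evaluated on a finite prefix."""
    return sum(Fraction(b, 2 ** (n + 1)) for n, b in enumerate(bits))

def T2(t):
    """The doubling map T_2(t) = 2t mod 1."""
    return (2 * t) % 1

random.seed(0)
for _ in range(1000):
    x = [random.randint(0, 1) for _ in range(40)]
    # dropping the first digit (the shift sigma) corresponds to doubling mod 1
    assert phi(x[1:]) == T2(phi(x))
print("phi intertwines the shift with the doubling map on all samples")
```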
When the underlying space is a compact metric space, the \( \sigma \) -algebra is taken to be the Borel \( \sigma \) -algebra (the smallest \( \sigma \) -algebra containing all the open sets) unless explicitly stated otherwise. Notice that in both Example 2.8 and Example 2.9 the underlying space is indeed a compact metric space (see Sect. A.2).
Example 2.9. The shift map in Example 2.8 is an example of a one-sided Bernoulli shift. A more general \( {}^{\left( {13}\right) } \) and natural two-sided definition is the following. Consider an infinitely repeated throw of a loaded \( n \) -sided die. The possible outcomes of each throw are \( \{ 1,2,\ldots, n\} \), and these appear with probabilities given by the probability vector \( \mathbf{p} = \left( {{p}_{1},{p}_{2},\ldots ,{p}_{n}}\right) \) (probability vector means each \( {p}_{i} \geq 0 \) and \( \left. {\mathop{\sum }\limits_{{i = 1}}^{n}{p}_{i} = 1}\right) \), so \( \mathbf{p} \) defines a measure \( {\mu }_{\mathbf{p}} \) on the finite sample space \( \{ 1,2,\ldots, n\} \), which is given the discrete topology. The sample space for the die throw repeated infinitely often is
\[
X = \{ 1,2,\ldots, n{\} }^{\mathbb{Z}}
\]
\[
= \left\{ {x = \left( {\ldots ,{x}_{-1},{x}_{0},{x}_{1},\ldots }\right) \mid {x}_{i} \in \{ 1,2,\ldots, n\} \text{ for all }i \in \mathbb{Z}}\right\} .
\]
The measure on \( X \) is the infinite product measure \( \mu = \mathop{\prod }\limits_{\mathbb{Z}}{\mu }_{\mathbf{p}} \), and the \( \sigma \) - algebra \( \mathcal{B} \) is the Borel \( \sigma \) -algebra for the compact metric space* \( X \), or equivalently is the product \( \sigma \) -algebra defined below and in Sect. A.2.
A better description of the measure is given via cylinder sets. If \( I \) is a finite subset of \( \mathbb{Z} \), and \( \mathbf{a} \) is a map \( I \rightarrow \{ 1,2,\ldots, n\} \), then the cylinder set defined by \( I \) and \( \mathbf{a} \) is
\[
I\left( \mathbf{a}\right) = \left\{ {x \in X \mid {x}_{j} = \mathbf{a}\left( j\right) \text{ for all }j \in I}\right\} .
\]
It will be useful later to write \( {\left. x\right| }_{I} \) for the ordered block of coordinates
\[
{x}_{i}{x}_{i + 1}\cdots {x}_{i + s}
\]
when \( I = \{ i, i + 1,\ldots, i + s\} = \left\lbrack {i, i + s}\right\rbrack \) . The measure \( \mu \) is uniquely determined by the property that
\[
\mu \left( {I\left( \mathbf{a}\right) }\right) = \mathop{\prod }\limits_{{i \in I}}{p}_{a\left( i\right) }
\]
and \( \mathcal{B} \) is the smallest \( \sigma \) -algebra containing all cylinders (see Sect. A. 2 for the details).
Now let \( \sigma \) be the (left) shift on \( X : \sigma \left( x\right) = y \) where \( {y}_{j} = {x}_{j + 1} \) for all \( j \) in \( \mathbb{Z} \) . Then \( \sigma \) is \( \mu \) -preserving and \( \mathcal{B} \) -measurable. So \( \left( {X,\mathcal{B},\mu ,\sigma }\right) \) is a measure-preserving system, called the Bernoulli scheme or Bernoulli shift based on \( \mathbf{p} \) . A measure-preserving system measurably isomorphic to a Bernoulli shift is sometimes called a Bernoulli automorphism.
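As a small illustration of the product measure on cylinder sets (a sketch with our own choice of probability vector, not part of the text), one can compare the formula \( \mu \left( {I\left( \mathbf{a}\right) }\right) = \mathop{\prod }_{{i \in I}}{p}_{\mathbf{a}\left( i\right) } \) with a Monte Carlo frequency:

```python
# Monte Carlo check of mu(I(a)) = prod_i p_{a(i)} for one cylinder set of the
# Bernoulli shift with p = (1/2, 1/4, 1/4); an illustrative sketch only.
import random

p = [0.5, 0.25, 0.25]        # probability vector on the symbols 1, 2, 3
I = [0, 1, 2]                # coordinates fixed by the cylinder
a = [1, 3, 3]                # prescribed symbols a(0), a(1), a(2)

exact = 1.0
for j in I:
    exact *= p[a[j] - 1]     # mu(I(a)) = p_1 * p_3 * p_3 = 1/32

random.seed(1)
trials = 200_000
hits = 0
for _ in range(trials):
    block = random.choices([1, 2, 3], weights=p, k=len(I))
    if block == a:
        hits += 1
print(exact, hits / trials)  # the empirical frequency should be close to 1/32
```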
The next example, which we learned from Doug Lind, gives another example of a measurable isomorphism and reinforces the point that being a probability space is a finiteness property of the measure, rather than a metric boundedness property of the space. The measure \( \mu \) on \( \mathbb{R} \) described in Example 2.10 makes \( \left( {\mathbb{R},\mu }\right) \) into a probability space.
Example 2.10. Consider the 2-to-1 map \( T : \mathbb{R} \rightarrow \mathbb{R} \) defined by
\[
T\left( x\right) = \frac{1}{2}\left( {x - \frac{1}{x}}\right)
\]
for \( x \neq 0 \), and \( T\left( 0\right) = 0 \) . For any \( {L}^{1} \) function \( f \), the substitution \( y = T\left( x\right) \) shows that
* The topology on \( X \) is simply the product topology, which is also the metric topology given by the metric defined by \( \mathrm{d}\left( {x, y}\right) = {2}^{-k} \) where
\[
k = \max \left\{ {j \mid {x}_{i} = {y}_{i}\text{ for }\left| i\right| \leq j}\right\}
\]
if \( x \neq y \) and \( \mathrm{d}\left( {x, x}\right) = 0 \) . In this metric, points are close together if they agree on a large block of indices around \( 0 \in \mathbb{Z} \) .
\[
{\int }_{-\infty }^{\infty }f\left( {T\left( x\right) }\right) \frac{\mathrm{d}x}{\pi \left( {1 + {x}^{2}}\right) } = {\int }_{-\infty }^{\infty }f\left( y\right) \frac{\mathrm{d}y}{\pi \left( {1 + {y}^{2}}\right) }
\]
(in this calculation, note that \( T \) is only injective when restricted to \( \left( {0,\infty }\right) \) or \( \left( {-\infty ,0}\right) \)). It follows by Lemma 2.6 that \( T \) preserves the probability measure \( \mu \) defined by
\[
\mu \left( \left\lbrack {a, b}\right\rbrack \right) = {\int }_{a}^{b}\frac{\mathrm{d}x}{\pi \left( {1 + {x}^{2}}\right) }.
\]
The map \( \phi \left( x\right) = \frac{1}{\pi }\arctan \left( x\right) + \frac{1}{2} \) from \( \mathbb{R} \) to \( \mathbb{T} \) is an invertible measure-preserving map from \( \left( {\mathbb{R},\mu }\right) \) to \( \left( {\mathbb{T},{m}_{\mathbb{T}}}\right) \) where \( {m}_{\mathbb{T}} \) denotes the Lebesgue measure on \( \mathbb{T} \) (notice that the image of \( \phi \) is the subset \( \left( {0,1}\right) \subseteq \mathbb{T} \), but this is an invertible map in the measure-theoretic sense).
Define the map \( {T}_{2} : \mathbb{T} \rightarrow \mathbb{T} \) by \( {T}_{2}\left( x\right) = {2x}\left( {\;\operatorname{mod}\;1}\right) \) as in Example 2.4. The map \( \phi \) is a measurable isomorphism from \( \left( {\mathbb{R},\mu, T}\right) \) to \( \left( {\mathbb{T},{m}_{\mathbb{T}},{T}_{2}}\right) \) . Example 2.8 shows in turn that \( \left( {\mathbb{R},\mu, T}\right) \) is isomorphic to the one-sided full 2-shift.
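The conjugacy \( \phi \circ T = {T}_{2} \circ \phi \) behind Example 2.10 can also be tested numerically. The sketch below is ours (floating point, with a tolerance); it compares \( \phi \left( {T\left( x\right) }\right) \) and \( 2\phi \left( x\right) \) as points of \( \mathbb{R}/\mathbb{Z} \) .

```python
# Numerical check that phi(T(x)) = 2*phi(x) (mod 1) for T(x) = (x - 1/x)/2
# and phi(x) = arctan(x)/pi + 1/2; an illustrative float sketch.
import math
import random

def T(x):
    return 0.5 * (x - 1.0 / x)

def phi(x):
    return math.atan(x) / math.pi + 0.5

def circle_dist(s, t):
    """Distance between s and t viewed as points of R/Z."""
    d = (s - t) % 1.0
    return min(d, 1.0 - d)

random.seed(2)
for _ in range(10_000):
    x = random.uniform(-50.0, 50.0)
    if abs(x) < 1e-6:
        continue                       # T is defined separately at 0
    assert circle_dist(phi(T(x)), 2.0 * phi(x)) < 1e-9
print("phi conjugates T to the doubling map at all sampled points")
```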
It is often more convenient to work with an invertible measure-preserving transformation as in Example 2.9 instead of a non-invertible transformation as in Examples 2.4 and 2.8. Exercise 2.1.7 gives a general construction of an invertible system from a non-invertible one.
# |
1042_(GTM203)The Symmetric Group | Definition 4.3.4 |
Definition 4.3.4 The nth power sum symmetric function is
\[
{p}_{n} = {m}_{\left( n\right) } = \mathop{\sum }\limits_{{i \geq 1}}{x}_{i}^{n}
\]
The nth elementary symmetric function is
\[
{e}_{n} = {m}_{\left( {1}^{n}\right) } = \mathop{\sum }\limits_{{{i}_{1} < \cdots < {i}_{n}}}{x}_{{i}_{1}}\cdots {x}_{{i}_{n}}.
\]
The nth complete homogeneous symmetric function is
\[
{h}_{n} = \mathop{\sum }\limits_{{\lambda \vdash n}}{m}_{\lambda } = \mathop{\sum }\limits_{{{i}_{1} \leq \cdots \leq {i}_{n}}}{x}_{{i}_{1}}\cdots {x}_{{i}_{n}}.
\]
As examples, when \( n = 3 \) ,
\[
{p}_{3} = {x}_{1}^{3} + {x}_{2}^{3} + {x}_{3}^{3} + \cdots
\]
\[
{e}_{3} = {x}_{1}{x}_{2}{x}_{3} + {x}_{1}{x}_{2}{x}_{4} + {x}_{1}{x}_{3}{x}_{4} + {x}_{2}{x}_{3}{x}_{4} + \cdots ,
\]
\[
{h}_{3} = {x}_{1}^{3} + {x}_{2}^{3} + \cdots + {x}_{1}^{2}{x}_{2} + {x}_{1}{x}_{2}^{2} + \cdots + {x}_{1}{x}_{2}{x}_{3} + {x}_{1}{x}_{2}{x}_{4} + \cdots .
\]
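In finitely many variables these functions are easy to generate by brute force. The sketch below is ours (it is not in the text); it lists the monomials of \( {e}_{3} \) and counts those of \( {h}_{3} \) in four variables, confirming the expected counts \( \binom{4}{3} = 4 \) and \( \binom{6}{3} = {20} \) .

```python
# Brute-force e_3 and h_3 in m = 4 variables; helper names are ours.
from itertools import combinations, combinations_with_replacement
from math import comb

m, n = 4, 3

e_terms = list(combinations(range(1, m + 1), n))                   # i1 < i2 < i3
h_terms = list(combinations_with_replacement(range(1, m + 1), n))  # i1 <= i2 <= i3

print("e_3 =", " + ".join("*".join(f"x{i}" for i in t) for t in e_terms))
print("h_3 has", len(h_terms), "monomials")

assert len(e_terms) == comb(m, n)           # 4 square-free monomials
assert len(h_terms) == comb(m + n - 1, n)   # 20 monomials of degree 3
```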
The elementary function \( {e}_{n} \) is just the sum of all square-free monomials of degree \( n \) . As such, it can be considered as a weight generating function for partitions with \( n \) distinct parts. Specifically, let
\[
S = \{ \lambda : l\left( \lambda \right) = n\text{ and all parts of }\lambda \text{ are distinct}\}
\]
where \( l\left( \lambda \right) \) is the number of parts of \( \lambda \), known as its length. If \( \lambda = \left( {{\lambda }_{1} > }\right. \) \( \left. {{\lambda }_{2} > \cdots > {\lambda }_{n}}\right) \), we use the weight
\[
\text{wt}\lambda = {x}_{{\lambda }_{1}}{x}_{{\lambda }_{2}}\cdots {x}_{{\lambda }_{n}}
\]
which yields
\[
{e}_{n}\left( \mathbf{x}\right) = {f}_{S}\left( \mathbf{x}\right)
\]
Similarly, \( {h}_{n} \) is the sum of all monomials of degree \( n \) and is the weight generating function for all partitions with \( n \) parts. What if we want to count partitions with any number of parts?
Proposition 4.3.5 We have the following generating functions
\[
E\left( t\right) \overset{\text{ def }}{ = }\mathop{\sum }\limits_{{n \geq 0}}{e}_{n}\left( \mathbf{x}\right) {t}^{n} = \mathop{\prod }\limits_{{i \geq 1}}\left( {1 + {x}_{i}t}\right)
\]
\[
H\left( t\right) \overset{\text{ def }}{ = }\mathop{\sum }\limits_{{n \geq 0}}{h}_{n}\left( \mathbf{x}\right) {t}^{n} = \mathop{\prod }\limits_{{i \geq 1}}\frac{1}{\left( 1 - {x}_{i}t\right) }.
\]
Proof. Work in the ring \( \mathbb{C}\left\lbrack \left\lbrack {\mathbf{x}, t}\right\rbrack \right\rbrack \) . For the elementary symmetric functions, consider the set \( S = \{ \lambda : \lambda \text{ has distinct parts}\} \) with weight
\[
{\mathrm{{wt}}}^{\prime }\lambda = {t}^{l\left( \lambda \right) }\mathrm{{wt}}\lambda
\]
where wt is as before. Then
\[
{f}_{S}\left( {\mathbf{x}, t}\right) = \mathop{\sum }\limits_{{\lambda \in S}}{\mathrm{{wt}}}^{\prime }\lambda
\]
\[
= \mathop{\sum }\limits_{{n \geq 0}}\mathop{\sum }\limits_{{l\left( \lambda \right) = n}}{t}^{n}\operatorname{wt}\lambda
\]
\[
= \mathop{\sum }\limits_{{n \geq 0}}{e}_{n}\left( \mathbf{x}\right) {t}^{n}
\]
To obtain the product, write
\[
S = \left( {\left\{ {1}^{0}\right\} \uplus \left\{ {1}^{1}\right\} }\right) \times \left( {\left\{ {2}^{0}\right\} \uplus \left\{ {2}^{1}\right\} }\right) \times \left( {\left\{ {3}^{0}\right\} \uplus \left\{ {3}^{1}\right\} }\right) \times \cdots ,
\]
so that
\[
{f}_{S}\left( {\mathbf{x}, t}\right) = \left( {1 + {x}_{1}t}\right) \left( {1 + {x}_{2}t}\right) \left( {1 + {x}_{3}t}\right) \cdots .
\]
The proof for the complete symmetric functions is analogous. ∎
While we are computing generating functions, we might as well give one for the power sums. Actually, it is easier to produce one for \( {p}_{n}\left( \mathbf{x}\right) /n \) .
Proposition 4.3.6 We have the following generating function:
\[
\mathop{\sum }\limits_{{n \geq 1}}{p}_{n}\left( \mathbf{x}\right) \frac{{t}^{n}}{n} = \ln \mathop{\prod }\limits_{{i \geq 1}}\frac{1}{\left( 1 - {x}_{i}t\right) }
\]
Proof. Using the Taylor expansion of \( \ln \frac{1}{1 - x} \), we obtain
\[
\ln \mathop{\prod }\limits_{{i \geq 1}}\frac{1}{\left( 1 - {x}_{i}t\right) } = \mathop{\sum }\limits_{{i \geq 1}}\ln \frac{1}{\left( 1 - {x}_{i}t\right) }
\]
\[
= \mathop{\sum }\limits_{{i \geq 1}}\mathop{\sum }\limits_{{n \geq 1}}\frac{{\left( {x}_{i}t\right) }^{n}}{n}
\]
\[
= \mathop{\sum }\limits_{{n \geq 1}}\frac{{t}^{n}}{n}\mathop{\sum }\limits_{{i \geq 1}}{x}_{i}^{n}
\]
\[
= \mathop{\sum }\limits_{{n \geq 1}}{p}_{n}\left( \mathbf{x}\right) \frac{{t}^{n}}{n}\text{. ∎}
\]
In order to have enough elements for a basis of \( {\Lambda }^{n} \), we must have one function for each partition of \( n \) according to Proposition 4.3.3. To extend Definition 4.3.4 to \( \lambda = \left( {{\lambda }_{1},{\lambda }_{2},\ldots ,{\lambda }_{l}}\right) \), let
\[
{f}_{\lambda } = {f}_{{\lambda }_{1}}{f}_{{\lambda }_{2}}\cdots {f}_{{\lambda }_{l}}
\]
where \( f = p, e \), or \( h \) . We say that these functions are multiplicative. To illustrate, if \( \lambda = \left( {2,1}\right) \), then
\[
{p}_{\left( 2,1\right) } = \left( {{x}_{1}^{2} + {x}_{2}^{2} + {x}_{3}^{2} + \cdots }\right) \left( {{x}_{1} + {x}_{2} + {x}_{3} + \cdots }\right) .
\]
Theorem 4.3.7 The following are bases for \( {\Lambda }^{n} \) .
1. \( \left\{ {{p}_{\lambda } : \lambda \vdash n}\right\} \) .
2. \( \left\{ {{e}_{\lambda } : \lambda \vdash n}\right\} \) .
3. \( \left\{ {{h}_{\lambda } : \lambda \vdash n}\right\} \) .
Proof.
1. Let \( C = \left( {c}_{\lambda \mu }\right) \) be the matrix expressing the \( {p}_{\lambda } \) in terms of the basis \( {m}_{\mu } \) . If we can find an ordering of partitions such that \( C \) is triangular with nonzero entries down the diagonal, then \( {C}^{-1} \) exists, and the \( {p}_{\lambda } \) are also a basis. It turns out that lexicographic order will work. In fact, we claim that
\[
{p}_{\lambda } = {c}_{\lambda \lambda }{m}_{\lambda } + \mathop{\sum }\limits_{{\mu \vartriangleright \lambda }}{c}_{\lambda \mu }{m}_{\mu }
\]
(4.6)
where \( {c}_{\lambda \lambda } \neq 0 \) . (This is actually stronger than our claim about \( C \) by Proposition 2.2.6.) But if \( {x}_{1}^{{\mu }_{1}}{x}_{2}^{{\mu }_{2}}\cdots {x}_{m}^{{\mu }_{m}} \) appears in
\[
{p}_{\lambda } = \left( {{x}_{1}^{{\lambda }_{1}} + {x}_{2}^{{\lambda }_{1}} + \cdots }\right) \left( {{x}_{1}^{{\lambda }_{2}} + {x}_{2}^{{\lambda }_{2}} + \cdots }\right) \cdots
\]
then each \( {\mu }_{i} \) must be a sum of \( {\lambda }_{j} \) ’s. Since adding together parts of a partition makes it larger in dominance order, \( {m}_{\lambda } \) must be the smallest term that occurs.
2. In a similar manner we can show that there exist scalars \( {d}_{\lambda \mu } \) such that
\[
{e}_{{\lambda }^{\prime }} = {m}_{\lambda } + \mathop{\sum }\limits_{{\mu \vartriangleleft \lambda }}{d}_{\lambda \mu }{m}_{\mu }
\]
where \( {\lambda }^{\prime } \) is the conjugate of \( \lambda \) .
3. Since there are \( p\left( n\right) = \dim {\Lambda }^{n} \) functions \( {h}_{\lambda } \), it suffices to show that they generate the basis \( {e}_{\mu } \) . Since both sets of functions are multiplicative, we may simply demonstrate that every \( {e}_{n} \) is a polynomial in the \( {h}_{k} \) . From the products in Proposition 4.3.5, we see that
\[
H\left( t\right) E\left( {-t}\right) = 1
\]
Substituting in the summations for \( H \) and \( E \) and picking out the coefficient of \( {t}^{n} \) on both sides yields
\[
\mathop{\sum }\limits_{{r = 0}}^{n}{\left( -1\right) }^{r}{h}_{n - r}{e}_{r} = 0
\]
for \( n \geq 1 \) . So
\[
{e}_{n} = {h}_{1}{e}_{n - 1} - {h}_{2}{e}_{n - 2} + \cdots
\]
which is a polynomial in the \( h \) ’s by induction on \( n \) . ∎
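The recursion at the end of the proof can be tested numerically: evaluate \( {e}_{r} \) and \( {h}_{r} \) at a concrete point and check that \( \mathop{\sum }_{{r = 0}}^{n}{\left( -1\right) }^{r}{h}_{n - r}{e}_{r} = 0 \) for \( n \geq 1 \) . The sketch below is ours; it works with exact rationals in four variables, where the identity still holds coefficientwise since \( H\left( t\right) E\left( {-t}\right) = 1 \) in any number of variables.

```python
# Check sum_{r=0}^n (-1)^r h_{n-r} e_r = 0 for n >= 1 at a rational point.
# An illustrative sketch; helper names are ours.
from fractions import Fraction
from itertools import combinations, combinations_with_replacement
from math import prod

x = [Fraction(1, 2), Fraction(2, 3), Fraction(-3, 5), Fraction(7, 4)]

def e(r):
    """Elementary symmetric function e_r evaluated at the point x."""
    return sum(prod(c) for c in combinations(x, r))

def h(r):
    """Complete homogeneous symmetric function h_r evaluated at x."""
    return sum(prod(c) for c in combinations_with_replacement(x, r))

for n in range(1, 9):
    assert sum((-1) ** r * h(n - r) * e(r) for r in range(n + 1)) == 0
print("H(t)E(-t) = 1 holds coefficientwise at this sample point")
```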
Part 2 of this theorem is often called the fundamental theorem of symmetric functions and stated as follows: Every symmetric function is a polynomial in the elementary functions \( {e}_{n} \) .
## 4.4 Schur Functions
There is a fifth basis for \( {\Lambda }^{n} \) that is very important, the Schur functions. As we will see, they are also intimately connected with the irreducible representations of \( {\mathcal{S}}_{n} \) and tableaux. In fact, they are so protean that there are many different ways to define them. In this section, we take the combinatorial approach.
Given any composition \( \mu = \left( {{\mu }_{1},{\mu }_{2},\ldots ,{\mu }_{l}}\right) \), there is a corresponding monomial weight in \( \mathbb{C}\left\lbrack \left\lbrack \mathbf{x}\right\rbrack \right\rbrack \) :
\[
{x}^{\mu }\overset{\text{ def }}{ = }{x}_{1}^{{\mu }_{1}}{x}_{2}^{{\mu }_{2}}\cdots {x}_{l}^{{\mu }_{l}}
\]
(4.7)
Now consider any generalized tableau \( T \) of shape \( \lambda \) . It also has a weight, namely,
\[
{\mathbf{x}}^{T}\overset{\text{ def }}{ = }\mathop{\prod }\limits_{{\left( {i, j}\right) \in \lambda }}{x}_{{T}_{i, j}} = {\mathbf{x}}^{\mu }
\]
(4.8)
where \( \mu \) is the content of \( T \) . For example, if
\[
T = \begin{array}{lll} 4 & 1 & 4 \\ 1 & 3 & \end{array}
\]
then
\[
{\mathbf{x}}^{T} = {x}_{1}^{2}{x}_{3}{x}_{4}^{2}
\]
Definition 4.4.1 Given a partition \( \lambda \), the associated Schur function is
\[
{s}_{\lambda }\left( \mathbf{x}\right) = \mathop{\sum }\limits_{T}{\mathbf{x}}^{T}
\]
where the sum is over all semistandard \( \lambda \) -tableaux \( T \) .
By way of illustration, if \( \lambda = \left( {2,1}\right) \), then some of the possible tableaux are
\[
T : \begin{array}{ll} 1 & 1 \\ 2 & \end{array},\quad \begin{array}{ll} 1 & 2 \\ 2 & \end{array},\quad \begin{array}{ll} 1 & 1 \\ 3 & \end{array},\quad \begin{array}{ll} 1 & 3 \\ 3 & \end{array},\quad \ldots ,\quad \begin{array}{ll} 1 & 2 \\ 3 & \end{array},\quad \begin{array}{ll} 1 & 3 \\ 2 & \end{array},\quad \ldots ,
\]
so
\[
{s}_{\left( 2,1\right) }\left( \mathbf{x}\right) = {x}_{1}^{2}{x}_{2} + {x}_{1}{x}_{2}^{2} + {x}_{1}^{2}{x}_{3} + {x}_{1}{x}_{3}^{2} + \cdots + 2{x}_{1}{x}_{2}{x}_{3} + \cdots .
\]
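The displayed expansion can be recovered by listing semistandard tableaux directly. The sketch below is ours and is restricted to three variables: it enumerates the fillings of shape \( \left( {2,1}\right) \) with entries in \( \{ 1,2,3\} \), recovering the coefficient 2 of \( {x}_{1}{x}_{2}{x}_{3} \) and the value \( {s}_{\left( 2,1\right) }\left( {1,1,1}\right) = 8 \) .

```python
# Enumerate semistandard tableaux of shape (2,1) with entries in {1,2,3}
# and assemble s_{(2,1)}(x1, x2, x3); an illustrative sketch.
from collections import Counter
from itertools import product

coeffs = Counter()
for a, b, c in product(range(1, 4), repeat=3):
    # filling:  a b      rows weakly increase (a <= b),
    #           c        columns strictly increase (a < c)
    if a <= b and a < c:
        coeffs[tuple(sorted((a, b, c)))] += 1

for content, coeff in sorted(coeffs.items()):
    print(coeff, "*", "*".join(f"x{i}" for i in content))

assert coeffs[(1, 2, 3)] == 2        # the doubled monomial x1*x2*x3
assert sum(coeffs.values()) == 8     # s_{(2,1)}(1,1,1) = 8 tableaux
```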
1139_(GTM44)Elementary Algebraic Geometry | Definition 8.10 |
Definition 8.10. Let \( \mathcal{O} \) be a connected open subset of \( \mathbb{C} = {\mathbb{C}}_{X} \) . The graph in \( \mathcal{O} \times {\mathbb{C}}_{Y} \) of a function single-valued and complex-analytic on \( \mathcal{O} \), is called an analytic function element. Note that an analytic function element describes in a natural way a lifting of \( \mathcal{O} \) ; we therefore write \( \mathcal{Q} \) to denote such a function element. If \( P \in Q \), then \( Q \) is an analytic function element through \( P \) . A chain \( \left( {{\mathcal{Q}}_{1},\ldots ,{\mathcal{Q}}_{m}}\right) \) of analytic function elements lifting a chain \( \left( {{\mathcal{O}}_{1},\ldots ,{\mathcal{O}}_{m}}\right) \) of \( \mathbb{C} \) is called the analytic continuation from \( {\mathcal{Q}}_{1} \) to \( {\mathcal{Q}}_{m} \) along \( \left( {{\mathcal{O}}_{1},\ldots ,{\mathcal{O}}_{m}}\right) \) , or the analytic continuation of \( {\mathcal{Q}}_{1} \) along \( \left( {{\mathcal{O}}_{1},\ldots ,{\mathcal{O}}_{m}}\right) \) ; if \( P \in {\mathcal{Q}}_{1} \) and \( {P}^{\prime } \in {\mathcal{Q}}_{m} \) , \( \left( {{\mathcal{Q}}_{1},\ldots ,{\mathcal{Q}}_{m}}\right) \) is the analytic continuation from \( P \) to \( {P}^{\prime } \) along \( \left( {{\mathcal{O}}_{1},\ldots ,{\mathcal{O}}_{m}}\right) \) .
Relative to the cover of special interest to us, namely \( \left( {\mathrm{V}\left( p\right) \smallsetminus {\pi }_{Y}{}^{-1}\left( \mathcal{D}\right) ,\mathbb{C} \smallsetminus \mathcal{D},{\pi }_{Y}}\right) \), there is about each point of \( \mathbb{C} \smallsetminus \mathcal{D} \) a connected open neighborhood \( \mathcal{O} \) which has a lifting \( \mathcal{Q} \) . Any such lifting is the graph of a function analytic on \( \mathcal{O} \) (from Theorem 3.6)-that is, any such \( \mathcal{Q} \) is an analytic function element.
When considering chains in our proof of Theorem 8.5, it will be of technical convenience to restrict our attention to connected open sets \( {\mathcal{O}}_{i} \) of \( \mathbb{C} \smallsetminus \mathcal{D} \) which are liftable through each point of \( {\pi }_{Y}{}^{-1}\left( {\mathcal{O}}_{i}\right) \) (which means that \( {\pi }_{Y}{}^{-1}\left( {\mathcal{O}}_{i}\right) \) consists of \( n\left( { = {\deg }_{Y}p}\right) \) function elements). Note that there is such an \( \mathcal{O} \) about each point of \( \mathbb{C} \smallsetminus \mathcal{D} \) .
Definition 8.11. Relative to \( \left( {\mathrm{V}\left( p\right) \smallsetminus {\pi }_{Y}{}^{-1}\left( \mathcal{D}\right) ,\mathbb{C} \smallsetminus \mathcal{D},{\pi }_{Y}}\right) \), any connected open set \( \mathcal{O} \) of \( \mathbb{C} \smallsetminus \mathcal{D} \) which lifts through each point of \( {\pi }_{Y}{}^{-1}\left( \mathcal{O}\right) \) is an allowable set. Any chain of allowable open sets is an allowable chain.
Lemma 8.13 below is used in our proof of Theorem 8.5 and gives an important class of allowable open sets.
Definition 8.12. An open set \( \Omega \subset \mathbb{C} \) is simply connected if it is homeomorphic to an open disk.
Examples are: \( \mathbb{C} \) itself; \( \mathbb{C} \smallsetminus \) (nonnegative real axis); \( \mathbb{C} \smallsetminus \Phi \), where \( \Phi \) is any closed, non-self-intersecting polygonal path that goes out to the infinite point of \( {\mathbb{P}}^{1}\left( \mathbb{C}\right) \) (see Figure 20).
![9396b131-9501-41be-b2cf-577fd90ab693_97_0.jpg](images/9396b131-9501-41be-b2cf-577fd90ab693_97_0.jpg)
Figure 20
Lemma 8.13. Relative to \( \left( {\mathrm{V}\left( p\right) \smallsetminus {\pi }_{Y}{}^{-1}\left( \mathcal{D}\right) ,\mathbb{C} \smallsetminus \mathcal{D},{\pi }_{Y}}\right) \), any simply connected open subset of \( \mathbb{C} \smallsetminus \mathcal{D} \) is allowable.
This is an immediate consequence of the familiar "monodromy theorem" (proved in most standard texts on elementary complex analysis). To state it, we use the following ideas: First, let \( U \) be a nonempty open subset of \( \mathbb{C} \) . A polygonal path in \( U \) is the union of closed line segments \( \overline{{P}_{i},{P}_{i + 1}} \subset U\left( {i = 0,\ldots, r - 1}\right) \) connecting finitely many ordered points \( \left( {{P}_{0},\ldots ,{P}_{r}}\right) \left( {{P}_{i} \neq {P}_{i + 1}}\right) \) in \( U \) . Now suppose \( \mathcal{Q} \) is an analytic function element which is a lifting of a connected open set \( \mathcal{O} \subset U \) . We say \( \mathcal{Q} \) can be continued along a polygonal path \( \overline{{P}_{0},{P}_{1}} \cup \ldots \cup \overline{{P}_{r - 1},{P}_{r}} \) in \( U \) if \( {P}_{0} \in \mathcal{O} \) and if there is a chain \( \mathcal{O},{\mathcal{O}}_{1},\ldots ,{\mathcal{O}}_{r} \) in \( U \) such that \( \overline{{P}_{i},{P}_{i + 1}} \subseteq {\mathcal{O}}_{i + 1}\left( {i = 0,\ldots, r - 1}\right) \), and such that there is an analytic continuation of \( \mathcal{Q} \) along \( \mathcal{O},{\mathcal{O}}_{1},\ldots ,{\mathcal{O}}_{r} \) .
Theorem 8.14 (Monodromy theorem). Let \( \Omega \) be a simply connected open set in \( \mathbb{C} \), and suppose an analytic function element \( \mathcal{Q} \) is a lifting of a connected open set \( \mathcal{O} \subset \Omega \) . If \( \mathcal{Q} \) can be analytically continued along any polygonal path in \( \Omega \) , then \( \mathcal{Q} \) has a unique extension to a (single-valued) function which is analytic at each point of \( \Omega \) .
For a proof of Theorem 8.14, see, e.g., [Ahlfors, Chapter VI, Theorem 2].
To prove Lemma 8.13, we need only verify that in our case, the hypothesis of Theorem 8.14 is satisfied, i.e., that for any simply connected open subset \( \Omega \) of \( \mathbb{C} \smallsetminus \mathcal{D} \), we can analytically continue any analytic function element along any polygonal path in \( \Omega \) . The argument is easy, and may be left to the exercises (Exercise 8.1).
We now prove that \( \mathrm{V}\left( p\right) \smallsetminus {\pi }_{Y}{}^{-1}\left( \mathcal{D}\right) \) is chainwise connected by contradiction. Suppose \( P \) and \( Q \) are two points of \( \mathrm{V}\left( p\right) \smallsetminus {\pi }_{Y}{}^{-1}\left( \mathcal{D}\right) \) such that there is no analytic continuation from \( P \) to \( Q \) along any allowable chain in \( \mathbb{C} \smallsetminus \mathcal{D} \) .
Choose a non-self-intersecting polygonal path \( \Phi \) in \( \mathbb{C} \) connecting the finitely many points of \( \mathcal{D} \), and the infinite point of \( {\mathbb{P}}^{1}\left( \mathbb{C}\right) = {\mathbb{C}}_{X} \cup \{ \infty \} \), as suggested by Figure 20. We can obviously choose \( \Phi \) so it does not go through \( {\pi }_{Y}\left( P\right) \) or \( {\pi }_{Y}\left( Q\right) \) . The "slit sphere" \( {\mathbb{P}}^{1}\left( \mathbb{C}\right) \smallsetminus \Phi \) is then topologically an open disk of \( \mathbb{C} \), and is therefore simply connected. Now each point of \( C \) above any point of \( \mathbb{C} \smallsetminus \Phi \) is contained in an analytic function element, and by Lemma 8.13 each such function element extends to an analytic function on \( \mathbb{C} \smallsetminus \Phi \) . Since there are \( n \) points of \( C \) above each point of \( \mathbb{C} \smallsetminus \Phi \), there are just \( n \) such functions \( {f}_{i} \) on \( \mathbb{C} \smallsetminus \Phi \) . Call their graphs \( {F}_{1},\ldots ,{F}_{n} \) . Suppose, to be specific, that \( P \in {F}_{1} \) and \( Q \in {F}_{n} \) .
Now let \( P \) and \( {P}^{\prime } \) be two points of \( C \) lying over \( \mathbb{C} \smallsetminus \Phi \), and suppose that we can analytically continue from \( P \) to \( {P}^{\prime } \) along some allowable chain \( \left( {{\mathcal{O}}_{1},\ldots ,{\mathcal{O}}_{r}}\right) \) in \( \mathbb{C} \smallsetminus \mathcal{D} \) . Choose open sets such that \( {\mathcal{O}}_{0},{\mathcal{O}}_{r + 1} \subset \mathbb{C} \smallsetminus \Phi, P \in {\mathcal{O}}_{0} \subset {\mathcal{O}}_{1} \) and \( {P}^{\prime } \in {\mathcal{O}}_{r + 1} \supset {\mathcal{O}}_{r} \) . Since \( \mathbb{C} \smallsetminus \Phi \) is simply connected, it is allowable by Lemma 8.13. Hence its subsets \( {\mathcal{O}}_{0},{\mathcal{O}}_{r + 1} \) are also allowable, and therefore \( \left( {\mathbb{C} \smallsetminus \Phi ,{\mathcal{O}}_{0},{\mathcal{O}}_{1},\ldots ,{\mathcal{O}}_{r},{\mathcal{O}}_{r + 1},\mathbb{C} \smallsetminus \Phi }\right) \) is an allowable chain. Thus we may assume without loss of generality that any such analytic continuation in \( \mathbb{C} \smallsetminus \mathcal{D} \) from a point \( P \in C \) to any other point \( {P}^{\prime } \in C \), where \( {\pi }_{Y}\left( P\right) \) and \( {\pi }_{Y}\left( {P}^{\prime }\right) \in \mathbb{C} \smallsetminus \Phi \), is the lifting of some allowable chain from \( \mathbb{C} \smallsetminus \Phi \) to \( \mathbb{C} \smallsetminus \Phi \) . If \( P \in {F}_{i} \) and \( {P}^{\prime } \in {F}_{j} \), then this same chain also defines a continuation from any point in \( {F}_{i} \) to any point in \( {F}_{j} \) . Now consider any \( {F}_{k} \) . This chain also lifts to an analytic continuation from \( {F}_{k} \) to some one of \( {F}_{1},\ldots ,{F}_{n} \) -that is, it defines a mapping \( \rho \) from the set \( \left\{ {{F}_{1},\ldots ,{F}_{n}}\right\} \) into itself. Clearly if this chain defines a continuation from \( {F}_{i} \) to \( {F}_{j} \), then the "reverse chain" \( \left( {\mathbb{C} \smallsetminus \Phi ,{\mathcal{O}}_{r + 1},{\mathcal{O}}_{r},\ldots ,{\mathcal{O}}_{1},{\mathcal{O}}_{0},\mathbb{C} \smallsetminus \Phi }\right) \) defines a continuation from \( {F}_{j} \) to \( {F}_{i} \) . Hence \( \rho \) has an inverse and is therefore \( 1 : 1 \) and onto-that is, \( \rho \) is a permutation. The set of all allowable chains in \( \mathbb{C} \smallsetminus \mathcal{D} \) from \( \mathbb{C} \smallsetminus \Phi \) to \( \mathbb{C} \smallsetminus \Phi \) then defines a set of permutations (actually a group of permutations, as the reader may check for himself).
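The permutations \( \rho \) attached to chains can be seen numerically in the simplest example \( p\left( {X, Y}\right) = {Y}^{2} - X \), where \( \mathcal{D} = \{ 0\} \) and the two function elements over a point are the two branches of the square root: continuing one branch around a loop encircling 0 lands on the other, so such a loop induces the transposition of \( {F}_{1} \) and \( {F}_{2} \) . The rough numerical continuation below is our own sketch; it steps along the loop and always chooses the nearer of the two local solutions.

```python
# Continue a branch of sqrt(x) along the loop x(t) = exp(2*pi*i*t) around 0;
# after one full turn the branch has been swapped.  A numerical sketch only.
import cmath

steps = 2000
y = 1.0 + 0.0j                      # start with the branch sqrt(1) = 1
for k in range(1, steps + 1):
    x = cmath.exp(2j * cmath.pi * k / steps)
    r = cmath.sqrt(x)               # the two local solutions are +r and -r
    y = r if abs(r - y) < abs(-r - y) else -r   # stay on the nearby sheet

print(y)                            # approximately -1: the two sheets are swapped
assert abs(y + 1.0) < 1e-6
```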
Let us consider a |
1077_(GTM235)Compact Lie Groups | Definition 3.37 |
Definition 3.37. (1) Let \( G \) be a compact Lie group. The operator valued Fourier transform, \( \mathcal{F} : {L}^{2}\left( G\right) \rightarrow \mathrm{{Op}}\left( \widehat{G}\right) \), is defined by
\[
\mathcal{F}f = {\left( {\left( \dim {E}_{\pi }\right) }^{\frac{1}{2}}\pi \left( f\right) \right) }_{\left\lbrack \pi \right\rbrack \in \widehat{G}}.
\]
(2) For \( {T}_{\pi } \in \operatorname{End}\left( {E}_{\pi }\right) \), write \( \operatorname{tr}\left( {{T}_{\pi } \circ {g}^{-1}}\right) \) for the smooth function on \( G \) defined by \( g \rightarrow \operatorname{tr}\left( {{T}_{\pi } \circ \pi \left( {g}^{-1}\right) }\right) \) . The inverse operator valued Fourier transform, \( \mathcal{I} : \operatorname{Op}\left( \widehat{G}\right) \rightarrow \) \( {L}^{2}\left( G\right) \), is given by
\[
\mathcal{I}{\left( {T}_{\pi }\right) }_{\left\lbrack \pi \right\rbrack \in \widehat{G}} = \mathop{\sum }\limits_{{\left\lbrack \pi \right\rbrack \in \widehat{G}}}{\left( \dim {E}_{\pi }\right) }^{\frac{1}{2}}\operatorname{tr}\left( {{T}_{\pi } \circ {g}^{-1}}\right)
\]
with respect to \( {L}^{2} \) convergence.
It is necessary to check that \( \mathcal{F} \) and \( \mathcal{I} \) are well defined and inverses of each other. These details will be checked in the proof below. In the following theorem, view \( {L}^{2}\left( G\right) \) as an algebra with respect to convolution and remember that \( {L}^{2}\left( G\right) \) is a \( G \times \) \( G \) -module with \( \left( {{g}_{1},{g}_{2}}\right) \in G \times G \) acting as \( {r}_{{g}_{1}} \circ {l}_{{g}_{2}} \) so \( \left( {\left( {{g}_{1},{g}_{2}}\right) f}\right) \left( g\right) = f\left( {{g}_{2}^{-1}g{g}_{1}}\right) \) for \( f \in {L}^{2}\left( G\right) \) and \( {g}_{i}, g \in G \) .
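Before turning to the theorem, it may help to see the identity \( \parallel f{\parallel }_{{L}^{2}\left( G\right) }^{2} = \mathop{\sum }_{\left\lbrack \pi \right\rbrack }\left( {\dim {E}_{\pi }}\right) \parallel \pi \left( f\right) {\parallel }_{HS}^{2} \) in the simplest compact (indeed finite) case \( G = \mathbb{Z}/N\mathbb{Z} \), where every irreducible representation is a one-dimensional character and \( \pi \left( f\right) \) is a scalar. The sketch below is ours (normalized Haar measure \( = \) counting measure divided by \( N \) ) and is only an abelian analogy of the Lie group statement.

```python
# Plancherel on the finite cyclic group Z/NZ: the irreducibles are the
# characters chi_k(n) = exp(2*pi*i*k*n/N), pi_k(f) = (1/N) sum_n f(n) chi_k(n),
# and ||f||^2_{L^2(G)} = sum_k |pi_k(f)|^2.  An illustrative sketch.
import numpy as np

rng = np.random.default_rng(0)
N = 64
n = np.arange(N)
f = rng.standard_normal(N) + 1j * rng.standard_normal(N)

coeffs = np.array([(1 / N) * np.sum(f * np.exp(2j * np.pi * k * n / N))
                   for k in range(N)])

lhs = (1 / N) * np.sum(np.abs(f) ** 2)   # ||f||^2 with normalized Haar measure
rhs = np.sum(np.abs(coeffs) ** 2)        # sum over the dual, each dim E_pi = 1

assert np.isclose(lhs, rhs)
print(lhs, rhs)
```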
Theorem 3.38 (Plancherel Theorem). Let \( G \) be a compact Lie group. The maps \( \mathcal{F} \) and \( \mathcal{I} \) are well defined unitary, algebra, \( G \times G \) -intertwining isomorphisms and inverse to each other so that
\[
\mathcal{F} : {L}^{2}\left( G\right) \overset{ \cong }{ \rightarrow }\operatorname{Op}\left( \widehat{G}\right)
\]
with \( \parallel f{\parallel }_{{L}^{2}\left( G\right) } = \parallel \mathcal{F}f{\parallel }_{\mathrm{{Op}}\left( \widehat{G}\right) },\mathcal{F}\left( {{f}_{1} * {f}_{2}}\right) = \left( {\mathcal{F}{f}_{1}}\right) \left( {\mathcal{F}{f}_{2}}\right) ,\mathcal{F}\left( {\left( {{g}_{1},{g}_{2}}\right) f}\right) = \) \( \left( {{g}_{1},{g}_{2}}\right) \left( {\mathcal{F}f}\right) \), and \( {\mathcal{F}}^{-1} = \mathcal{I} \) for \( f \in {L}^{2}\left( G\right) \) and \( {g}_{i} \in G \) .

Proof. Recall the decomposition \( {L}^{2}\left( G\right) \cong {\widehat{\bigoplus }}_{\left\lbrack \pi \right\rbrack \in \widehat{G}}{E}_{\pi }^{ * } \otimes {E}_{\pi } \) from Corollary 3.26 that maps \( \lambda \otimes v \in {E}_{\pi }^{ * } \otimes {E}_{\pi } \) to \( {f}_{\lambda \otimes v} \) where \( {f}_{\lambda \otimes v}\left( g\right) = \lambda \left( {{g}^{-1}v}\right) \) for \( g \in G \) . Since \( \operatorname{Op}\left( \widehat{G}\right) = {\widehat{\bigoplus }}_{\left\lbrack \pi \right\rbrack \in \widehat{G}}\operatorname{End}{\left( {E}_{\pi }\right) }_{HS} \) and since isometries on dense sets uniquely extend by continuity, it suffices to check that \( \mathcal{F} \) restricts to a unitary, algebra, \( G \times G \) -intertwining isomorphism from \( \operatorname{span}\left\{ {{f}_{\lambda \otimes v} \mid \lambda \otimes v \in {E}_{\pi }^{ * } \otimes {E}_{\pi }}\right\} \) to \( \operatorname{End}\left( {E}_{\pi }\right) \) with inverse \( \mathcal{I} \) . Here \( \operatorname{End}\left( {E}_{\pi }\right) \) is viewed as a subspace of \( \operatorname{Op}\left( \widehat{G}\right) \) under the natural inclusion \( \operatorname{End}\left( {E}_{\pi }\right) \hookrightarrow \operatorname{Op}\left( \widehat{G}\right) \) .
Write \( \left( {\cdot , \cdot }\right) \) for a \( G \) -invariant inner product on \( {E}_{\pi } \) . Any \( \lambda \in {E}_{\pi }^{ * } \) may be uniquely written as \( \lambda = \left( {\cdot, v}\right) \) for some \( v \in {E}_{\pi } \) . Thus the main problem revolves around evaluating \( {\pi }^{\prime }\left( {f}_{\left( {\cdot ,{v}_{1}}\right) \otimes {v}_{2}}\right) \) for \( \left\lbrack {\pi }^{\prime }\right\rbrack \in \widehat{G} \) and \( {v}_{i} \in {E}_{\pi } \) . Therefore choose \( {w}_{i} \in {E}_{{\pi }^{\prime }} \) and a \( G \) -invariant inner product \( {\left( \cdot , \cdot \right) }^{\prime } \) on \( {E}_{{\pi }^{\prime }} \) and calculate
\[
{\left( {\pi }^{\prime }\left( {f}_{\left( {\cdot ,{v}_{1}}\right) \otimes {v}_{2}}\right) \left( {w}_{1}\right) ,{w}_{2}\right) }^{\prime } = {\int }_{G}\left( {\pi \left( {g}^{-1}\right) {v}_{2},{v}_{1}}\right) {\left( {\pi }^{\prime }\left( g\right) {w}_{1},{w}_{2}\right) }^{\prime }{dg}
\]
\[
= {\int }_{G}{\left( {\pi }^{\prime }\left( g\right) {w}_{1},{w}_{2}\right) }^{\prime }\overline{\left( \pi \left( g\right) {v}_{1},{v}_{2}\right) }{dg}.
\]
If \( {\pi }^{\prime } \ncong \pi \), the Schur orthogonality relations imply that \( {\left( {\pi }^{\prime }\left( {f}_{\left( {\cdot ,{v}_{1}}\right) \otimes {v}_{2}}\right) \left( {w}_{1}\right) ,{w}_{2}\right) }^{\prime } = 0 \) , so that \( {\pi }^{\prime }\left( {f}_{\left( {\cdot ,{v}_{1}}\right) \otimes {v}_{2}}\right) = 0 \) . Thus \( \mathcal{F} \) maps \( \operatorname{span}\left\{ {{f}_{\lambda \otimes v} \mid \lambda \otimes v \in {E}_{\pi }^{ * } \otimes {E}_{\pi }}\right\} \) to \( \operatorname{End}\left( {E}_{\pi }\right) \) . On the other hand, if \( {\pi }^{\prime } = \pi \), the Schur orthogonality relations imply that
\[
\left( {\pi \left( {f}_{\left( {\cdot ,{v}_{1}}\right) \otimes {v}_{2}}\right) \left( {w}_{1}\right) ,{w}_{2}}\right) = {\left( \dim {E}_{\pi }\right) }^{-1}\left( {{w}_{1},{v}_{1}}\right) \overline{\left( {w}_{2},{v}_{2}\right) }.
\]
In particular, \( \pi \left( {f}_{\left( {\cdot ,{v}_{1}}\right) \otimes {v}_{2}}\right) = {\left( \dim {E}_{\pi }\right) }^{-1}\left( {\cdot ,{v}_{1}}\right) {v}_{2} \), so
\[
\mathcal{F}{f}_{\left( {\cdot ,{v}_{1}}\right) \otimes {v}_{2}} = {\left( \dim {E}_{\pi }\right) }^{-\frac{1}{2}}\left( {\cdot ,{v}_{1}}\right) {v}_{2}.
\]
Viewed as a map from \( \operatorname{span}\left\{ {{f}_{\lambda \otimes v} \mid \lambda \otimes v \in {E}_{\pi }^{ * } \otimes {E}_{\pi }}\right\} \) to \( \operatorname{End}\left( {E}_{\pi }\right) \), this shows that \( \mathcal{F} \) is surjective and, by dimension count, an isomorphism.
To see that \( \mathcal{I} \) is the inverse of \( \mathcal{F} \), calculate the trace using an orthonormal basis that starts with \( {\begin{Vmatrix}{v}_{2}\end{Vmatrix}}^{-1}{v}_{2} \) :
\[
\operatorname{tr}\left( {\left\lbrack {\left( {\cdot ,{v}_{1}}\right) {v}_{2}}\right\rbrack \circ \pi \left( {g}^{-1}\right) }\right) = \left( {\left\lbrack {\left\lbrack {\left( {\cdot ,{v}_{1}}\right) {v}_{2}}\right\rbrack \circ \pi \left( {g}^{-1}\right) }\right\rbrack \left( \frac{{v}_{2}}{\begin{Vmatrix}{v}_{2}\end{Vmatrix}}\right) ,\frac{{v}_{2}}{\begin{Vmatrix}{v}_{2}\end{Vmatrix}}}\right)
\]
\[
= \left( {\pi \left( {g}^{-1}\right) {v}_{2},{v}_{1}}\right) = {f}_{\left( {\cdot ,{v}_{1}}\right) \otimes {v}_{2}}\left( g\right) .
\]
Thus
(3.39)
\[
\mathcal{I}\left( {{\left( \dim {E}_{\pi }\right) }^{-\frac{1}{2}}\left( {\cdot ,{v}_{1}}\right) {v}_{2}}\right) = {f}_{\left( {\cdot ,{v}_{1}}\right) \otimes {v}_{2}}
\]
and \( \mathcal{I} = {\mathcal{F}}^{-1} \) .
To check unitarity, use the Schur orthogonality relations to calculate
\[
{\left( {f}_{\left( {\cdot ,{v}_{1}}\right) \otimes {v}_{2}},{f}_{\left( {\cdot ,{v}_{3}}\right) \otimes {v}_{4}}\right) }_{{L}^{2}\left( G\right) } = {\int }_{G}\left( {{g}^{-1}{v}_{2},{v}_{1}}\right) \overline{\left( {g}^{-1}{v}_{4},{v}_{3}\right) }{dg}
\]
\[
= {\left( \dim {E}_{\pi }\right) }^{-1}\left( {{v}_{2},{v}_{4}}\right) \overline{\left( {v}_{1},{v}_{3}\right) }\text{.}
\]
To calculate a Hilbert-Schmidt norm, first observe that the adjoint of \( \left( {\cdot ,{v}_{3}}\right) {v}_{4} \in \) \( \operatorname{End}{\left( {E}_{\pi }\right) }_{HS} \) is \( \left( {\cdot ,{v}_{4}}\right) {v}_{3} \) since
\[
\left( {\left( {{v}_{5},{v}_{3}}\right) {v}_{4},{v}_{6}}\right) = \left( {{v}_{5},{v}_{3}}\right) \left( {{v}_{4},{v}_{6}}\right) = \left( {{v}_{5},\left( {{v}_{6},{v}_{4}}\right) {v}_{3}}\right) .
\]
Hence
\[
{\left( \mathcal{F}{f}_{\left( {\cdot ,{v}_{1}}\right) \otimes {v}_{2}},\mathcal{F}{f}_{\left( {\cdot ,{v}_{3}}\right) \otimes {v}_{4}}\right) }_{HS} = {\left( \dim {E}_{\pi }\right) }^{-1}{\left( \left( \cdot ,{v}_{1}\right) {v}_{2},\left( \cdot ,{v}_{3}\right) {v}_{4}\right) }_{HS}
\]
\[
= {\left( \dim {E}_{\pi }\right) }^{-1}\operatorname{tr}\left\lbrack {\left( {\left( {\cdot ,{v}_{1}}\right) {v}_{2},{v}_{4}}\right) {v}_{3}}\right\rbrack
\]
\[
= {\left( \dim {E}_{\pi }\right) }^{-1}\left( {{v}_{2},{v}_{4}}\right) \operatorname{tr}\left\lbrack {\left( {\cdot ,{v}_{1}}\right) {v}_{3}}\right\rbrack
\]
\[
= {\left( \dim {E}_{\pi }\right) }^{-1}\left( {{v}_{2},{v}_{4}}\right) \left( {\left( {\frac{{v}_{3}}{\begin{Vmatrix}{v}_{3}\end{Vmatrix}},{v}_{1}}\right) {v}_{3},\frac{{v}_{3}}{\begin{Vmatrix}{v}_{3}\end{Vmatrix}}}\right)
\]
\[
= {\left( \dim {E}_{\pi }\right) }^{-1}\left( {{v}_{2},{v}_{4}}\right) \left( {{v}_{3},{v}_{1}}\right)
\]
\[
= {\left( {f}_{\left( {\cdot ,{v}_{1}}\right) \otimes {v}_{2}},{f}_{\left( {\cdot ,{v}_{3}}\right) \otimes {v}_{4}}\right) }_{{L}^{2}\left( G\right) },
\]
and so \( \mathcal{F} \) is unitary.
To check that the algebra structures are preserved, simply use Lemma 3.35 to observe that \( \pi \left( {{f}_{1} * {f}_{2}}\right) = \pi \left( {f}_{1}\right) \circ \pi \left( {f}_{2}\right) \) . Thus
\[
\mathcal{F}\left( {{f}_{1} * {f}_{2}}\right) = {\left( \dim {E}_{\pi }\right) }^{\frac{1}{2}}\pi \left( {f}_{1}\right) \circ \pi \left( {f}_{2}\right)
\]
\[
= {\left( \dim {E |
1096_(GTM252)Distributions and Operators | Definition 10.5 |
Definition 10.5. A Poisson operator of order \( d \) is an operator defined by a formula
\[
\left( {Kv}\right) \left( {{x}^{\prime },{x}_{n}}\right) = {\int }_{{\mathbb{R}}^{n - 1}}{e}^{i{x}^{\prime } \cdot {\xi }^{\prime }}\widetilde{k}\left( {{x}^{\prime },{x}_{n},{\xi }^{\prime }}\right) \widehat{v}\left( {\xi }^{\prime }\right) d{\xi }^{\prime }
\]
(10.29)
where the symbol-kernel \( \widetilde{k} \) belongs to \( {S}_{1,0}^{d - 1}\left( {{\mathbb{R}}^{n - 1},{\mathbb{R}}^{n - 1},{\mathcal{S}}_{ + }}\right) \) . See also (10.30), (10.31).
Again, symbol-kernels depending on \( \left( {{x}^{\prime },{y}^{\prime }}\right) \) can be allowed:
\[
\left( {Kv}\right) \left( {{x}^{\prime },{x}_{n}}\right) = {\int }_{{\mathbb{R}}^{2\left( {n - 1}\right) }}{e}^{i\left( {{x}^{\prime } - {y}^{\prime }}\right) \cdot {\xi }^{\prime }}\widetilde{k}\left( {{x}^{\prime },{y}^{\prime },{x}_{n},{\xi }^{\prime }}\right) v\left( {y}^{\prime }\right) d{y}^{\prime }d{\xi }^{\prime }.
\]
\( \left( {10.30}\right) \)
The symbol corresponding to \( \widetilde{k}\left( {x,{\xi }^{\prime }}\right) \) is
\[
k\left( {{x}^{\prime },\xi }\right) = {\mathcal{F}}_{{x}_{n} \rightarrow {\xi }_{n}}{e}^{ + }\widetilde{k}\left( {x,{\xi }^{\prime }}\right) .
\]
(10.31)
In the polyhomogeneous case, it has an expansion in homogeneous terms in \( \xi \) (for \( \left| {\xi }^{\prime }\right| \geq 1 \) ) of degree \( d - 1 - l \) . In this case we often denote \( {\widetilde{k}}_{d - 1} = {\widetilde{k}}^{0} \) and \( {k}_{d - 1} = {k}^{0} \), the principal symbol-kernel or symbol. Again, one can view \( K \) defined in (10.29) (also denoted \( \operatorname{OPK}\left( k\right) \) or \( \operatorname{OPK}\left( \widetilde{k}\right) \) ) as an operator \( K = \) \( {\mathrm{{OP}}}^{\prime }\left( {k\left( {{x}^{\prime },{\xi }^{\prime },{D}_{n}}\right) }\right) \), where \( k\left( {{x}^{\prime },{\xi }^{\prime },{D}_{n}}\right) \) is the boundary symbol operator from \( \mathbb{C} \) to \( \mathcal{S}\left( {\overline{\mathbb{R}}}_{ + }\right) \) :
\[
k\left( {{x}^{\prime },{\xi }^{\prime },{D}_{n}}\right) a = \widetilde{k}\left( {{x}^{\prime },{x}_{n},{\xi }^{\prime }}\right) \cdot a\text{ for }a \in \mathbb{C},
\]
(10.32)
also denoted \( {\operatorname{OPK}}_{n}\left( k\right) \) or \( {\operatorname{OPK}}_{n}\left( \widetilde{k}\right) \) .
Remark 10.6. The above order convention, introduced originally in [B71], may seem a bit strange: Polyhomogeneous Poisson operators of order \( d \) have principal symbols homogeneous of degree \( d - 1 \) . But the convention will fit the purpose that the composition of two operators of order \( d \) resp. \( {d}^{\prime } \) will be of order \( d + {d}^{\prime } \) (valid, e.g., for the \( \psi \) do \( {TK} \) on \( {\mathbb{R}}^{n - 1} \) ).
The trace operators \( {T}^{\prime } \) of class 0 (and order \( d \) ) have as adjoints precisely the Poisson operators (of order \( d + 1 \) ), and vice versa. This is obvious on the boundary-symbol-operator level:
\[
t\left( {D}_{n}\right) : u \in {L}_{2}\left( {\mathbb{R}}_{ + }\right) \mapsto \left( {u,\overline{\widetilde{f}}}\right) \in \mathbb{C}\text{and}k\left( {D}_{n}\right) : v \in \mathbb{C} \mapsto v \cdot \overline{\widetilde{f}} \in {L}_{2}\left( {\mathbb{R}}_{ + }\right)
\]
are adjoints of one another. On the full operator level it is easy to show for symbols depending on \( \left( {{x}^{\prime },{y}^{\prime }}\right) \) instead of \( {x}^{\prime } \) ; here if \( {T}^{\prime } \) has symbol-kernel \( \widetilde{f}\left( {{x}^{\prime },{y}^{\prime },{x}_{n},{\xi }^{\prime }}\right) \), the Poisson operator \( {T}^{\prime * } \) has symbol-kernel \( \overline{\widetilde{f}}\left( {{y}^{\prime },{x}^{\prime },{x}_{n},{\xi }^{\prime }}\right) \) . Details will be given in Theorem 10.29 later.
Trace operators of class \( r > 0 \) do not have adjoints within the calculus.
Example 10.7. The operator \( {K}_{\gamma } \) introduced in Theorem 9.3 is the Poisson operator with symbol-kernel \( \widetilde{k}\left( {{x}_{n},{\xi }^{\prime }}\right) = {e}^{-\left\langle {\xi }^{\prime }\right\rangle {x}_{n}} \) ; it is of order 0 . Its symbol
is
\[
k\left( {{\xi }^{\prime },{\xi }_{n}}\right) = \frac{1}{\left\langle {\xi }^{\prime }\right\rangle + i{\xi }_{n}}
\]
(10.33)
cf. Exercise 5.3. Inserting the expansion (7.11) of \( \left\langle {\xi }^{\prime }\right\rangle \) in (10.33) one can expand in homogeneous terms of falling degree (beginning with degree -1), showing that the symbol and symbol-kernel are polyhomogeneous of degree \( - 1 \) .
The adjoint of \( {K}_{\gamma } \) is the trace operator \( T \) of class 0 with symbol-kernel \( \widetilde{t}\left( {{x}_{n},{\xi }^{\prime }}\right) = {e}^{-\left\langle {\xi }^{\prime }\right\rangle {x}_{n}} \) and symbol \( t\left( {{\xi }^{\prime },{\xi }_{n}}\right) = \frac{1}{\left\langle {\xi }^{\prime }\right\rangle - i{\xi }_{n}} \) ; it is of degree and order -1. Furthermore, a calculation shows that for \( Q = \mathrm{{OP}}\left( {\langle \xi {\rangle }^{-2}}\right) \) as in (9.36)ff., \( {\gamma }_{0}{Q}_{ + } \) is the trace operator with symbol-kernel \( \frac{1}{2\left\langle {\xi }^{\prime }\right\rangle }{e}^{-\left\langle {\xi }^{\prime }\right\rangle {x}_{n}} \) .
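For Example 10.7 one can let the boundary symbol act numerically: replacing \( {\mathbb{R}}^{n - 1} \) by a periodic grid (so \( n = 2 \) and the Fourier transform becomes a DFT), the sketch below, which is ours and uses the usual choice \( \left\langle {\xi }^{\prime }\right\rangle = {\left( 1 + {\left| {\xi }^{\prime }\right| }^{2}\right) }^{1/2} \), applies the multiplier \( {e}^{-\left\langle {\xi }^{\prime }\right\rangle {x}_{n}} \) to \( \widehat{v} \) for a few values of \( {x}_{n} \) ; at \( {x}_{n} = 0 \) it returns the boundary datum, and the output decays as \( {x}_{n} \) grows.

```python
# Periodic-grid realization of K_gamma: multiply the Fourier coefficients of a
# boundary datum by exp(-<xi'> x_n), with <xi'> = (1 + |xi'|^2)^(1/2).
# A numerical sketch of formula (10.29) in one tangential variable.
import numpy as np

N, L = 256, 2 * np.pi
xprime = np.linspace(0.0, L, N, endpoint=False)
xi = 2 * np.pi * np.fft.fftfreq(N, d=L / N)   # discrete dual variable xi'
bracket = np.sqrt(1.0 + xi ** 2)              # <xi'>

v = np.exp(np.cos(xprime))                    # a smooth boundary datum v(x')
vhat = np.fft.fft(v)

for xn in (0.0, 0.5, 2.0):
    u = np.fft.ifft(np.exp(-bracket * xn) * vhat).real   # (K_gamma v)(., xn)
    print(f"x_n = {xn:3.1f}   sup |u| = {np.abs(u).max():.4f}")

u0 = np.fft.ifft(vhat).real
assert np.allclose(u0, v)                     # gamma_0 K_gamma = identity
```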
Poisson operators also arise from the following situation: Let \( v\left( {x}^{\prime }\right) \in \) \( \mathcal{S}\left( {\mathbb{R}}^{n - 1}\right) \), and consider the distribution \( v\left( {x}^{\prime }\right) \otimes \delta \left( {x}_{n}\right) \) (the product of \( v\left( {x}^{\prime }\right) \) and \( \left. {\delta \left( {x}_{n}\right) }\right) \) . When \( P \) is a \( \psi \) do satisfying the transmission condition, one can show that \( {r}^{ + }P\left( {v \otimes \delta }\right) \) makes sense as a function in \( {C}^{\infty }\left( {\overline{\mathbb{R}}}_{ + }^{n}\right) \), and the mapping \( K : v \mapsto {r}^{ + }P\left( {v \otimes \delta }\right) \) is a Poisson operator. See Theorem 10.25 later.
## Singular Green operators
We now get to the most unfamiliar element \( G \) of \( \mathcal{A} \) in (10.18). A singular Green operator (s.g.o.) \( G \) arises, for instance, when we compose a Poisson operator \( K \) with a trace operator \( T \) as \( G = {KT} \) ; this operator acts in \( {\mathbb{R}}_{ + }^{n} \) but is not a \( {P}_{ + } \) . Another situation where s.g.o.s enter is when we compose two \( \psi \) do’s \( {P}_{ + } \) and \( {Q}_{ + } \) (satisfying the transmission condition); then the "leftover operator"
\[
L\left( {P, Q}\right) \equiv {\left( PQ\right) }_{ + } - {P}_{ + }{Q}_{ + } = {r}^{ + }{PQ}{e}^{ + } - {r}^{ + }P{e}^{ + }{r}^{ + }Q{e}^{ + }
\]
(10.34)
\[
= {r}^{ + }P\left( {I - {e}^{ + }{r}^{ + }}\right) Q{e}^{ + }
\]
is an operator acting in \( {\mathbb{R}}_{ + }^{n} \), that is not a \( \psi \) do (more about \( L\left( {P, Q}\right) \) in Section 10.4). It turns out that these cases are covered by operators of the following form (they are in fact convergent series of products of Poisson and trace operators, cf. (10.107) later):
Definition 10.8. A singular Green operator \( G \) of order \( d\left( { \in \mathbb{R}}\right) \) and class \( r \) \( \left( { \in {\mathbb{N}}_{0}}\right) \) is an operator
\[
G = \mathop{\sum }\limits_{{0 \leq j \leq r - 1}}{K}_{j}{\gamma }_{j} + {G}^{\prime }
\]
(10.35)
where the \( {K}_{j} \) are Poisson operators of order \( d - j \), the \( {\gamma }_{j} \) are standard trace operators and \( {G}^{\prime } \) is an operator of the form
\[
\left( {{G}^{\prime }u}\right) \left( x\right) = {\int }_{{\mathbb{R}}^{n - 1}}{e}^{i{x}^{\prime } \cdot {\xi }^{\prime }}{\int }_{0}^{\infty }{\widetilde{g}}^{\prime }\left( {{x}^{\prime },{x}_{n},{y}_{n},{\xi }^{\prime }}\right) \acute{u}\left( {{\xi }^{\prime },{y}_{n}}\right) d{y}_{n}d{\xi }^{\prime },
\]
(10.36)
where \( {\widetilde{g}}^{\prime } \), the symbol-kernel of \( {G}^{\prime } \), is in \( {S}_{1,0}^{d - 1}\left( {{\mathbb{R}}^{n - 1},{\mathbb{R}}^{n - 1},{\mathcal{S}}_{+ + }}\right) \), cf. Definition \( {10.33}^{ \circ } \) . There is a corresponding symbol \( {g}^{\prime } \), defined by
\[
{g}^{\prime }\left( {{x}^{\prime },{\xi }^{\prime },{\xi }_{n},{\eta }_{n}}\right) = {\mathcal{F}}_{{x}_{n} \rightarrow {\xi }_{n}}{\overline{\mathcal{F}}}_{{y}_{n} \rightarrow {\xi }_{n}}{e}_{{x}_{n}}^{ + }{e}_{{y}_{n}}^{ + }{\widetilde{g}}^{\prime }\left( {{x}^{\prime },{x}_{n},{y}_{n},{\xi }^{\prime }}\right) .
\]
(10.37)
The symbol-kernel and symbol of \( G \) itself are
\[
\widetilde{g}\left( {{x}^{\prime },{x}_{n},{y}_{n},{\xi }^{\prime }}\right) = \mathop{\sum }\limits_{{0 \leq j < r}}{\widetilde{k}}_{j}\left( {{x}^{\prime },{x}_{n},{\xi }^{\prime }}\right) {D}_{{y}_{n}}^{j}\delta \left( {y}_{n}\right) + {\widetilde{g}}^{\prime }\left( {{x}^{\prime },{x}_{n},{y}_{n},{\xi }^{\prime }}\right) ,
\]
\[
g\left( {{x}^{\prime },{\xi }^{\prime },{\xi }_{n},{\eta }_{n}}\right) = \mathop{\sum }\limits_{{0 \leq j < r}}{k}_{j}\left( {{x}^{\prime },\xi }\right) {\eta }_{n}^{j} + {g}^{\prime }\left( {{x}^{\prime },{\xi }^{\prime },{\xi }_{n},{\eta }_{n}}\right) .
\]
(10.38)
In the polyhomogeneous case, the principal symbol-kernel and symbol are \( {\widetilde{g}}^{0} = {\widetilde{g}}_{d - 1} \) resp. \( {g}^{0} = {g}_{d - 1} \) . (Both for singular Green symbols and for trace symbols, the definition can be refined further to allow the notion of negative class \( r < 0 \), see [G96, Sect. 2.8].) In some recent works, it has been practical to replace the enumeration \( d - 1 - l \) by \( d - l \), but we here stick to the notation of [G96].
We define the boundary symbol operator \( g\left( {{x}^{\prime },{\xi }^{\prime },{D}_{n}}\right) \) from \( \widetilde{g} \) by
\[
g\left( {{x}^{\prime },{\xi }^{\prime },{D}_{n}}\right) u\left( {x}_{n}\right) = \mathop{\sum }\limits_{{0 \leq j < r}}{\widetilde{k}}_{j}\left( {{x}^{\prime },{x}_{n},{\xi }^{\prime }}\right) {\gamma }_{j}u + {\int }_{0}^{ |
1094_(GTM250)Modern Fourier Analysis | Definition 6.1.6 |
Definition 6.1.6. Let \( N : \mathbf{R} \rightarrow {\mathbf{R}}^{ + } \) be a measurable function, let \( s \in \mathbf{D} \), and let \( E \) be a set of finite measure. Then we introduce the quantity
\[
\mathcal{M}\left( {E;\{ s\} }\right) = \frac{1}{\left| E\right| }\mathop{\sup }\limits_{\substack{{u \in \mathbf{D}} \\ {s < u} }}\mathop{\int }\limits_{{E \cap {N}^{-1}\left\lbrack {\omega }_{u}\right\rbrack }}\frac{{\left| {I}_{u}\right| }^{-1}{dx}}{{\left( 1 + \frac{\left| x - c\left( {I}_{u}\right) \right| }{\left| {I}_{u}\right| }\right) }^{10}}.
\]
Fig. 6.2 A tree of seven tiles, including the darkened top. The top together with the three tiles on the right forms a 1-tree, while the top together with the three tiles on the left forms a 2-tree. ![4d96d124-1e54-43db-8e54-2e727636ea55_442_0.jpg](images/4d96d124-1e54-43db-8e54-2e727636ea55_442_0.jpg)
We call \( \mathcal{M}\left( {E;\{ s\} }\right) \) the mass of \( E \) with respect to \( \{ s\} \) . Given a subset \( \mathbf{P} \) of \( \mathbf{D} \), we define the mass of \( E \) with respect to \( \mathbf{P} \) as
\[
\mathcal{M}\left( {E;\mathbf{P}}\right) = \mathop{\sup }\limits_{{s \in \mathbf{P}}}\mathcal{M}\left( {E;\{ s\} }\right)
\]
We observe that the mass of \( E \) with respect to any set of tiles is at most
\[
\frac{1}{\left| E\right| }{\int }_{-\infty }^{+\infty }\frac{dx}{{\left( 1 + \left| x\right| \right) }^{10}} \leq \frac{1}{\left| E\right| }.
\]
Definition 6.1.7. Given a finite subset \( \mathbf{P} \) of \( \mathbf{D} \) and a function \( g \) in \( {L}^{2}\left( \mathbf{R}\right) \), we introduce the quantity
\[
\mathcal{E}\left( {g;\mathbf{P}}\right) = \frac{1}{\parallel g{\parallel }_{{L}^{2}}}\mathop{\sup }\limits_{\mathbf{T}}{\left( \frac{1}{\left| {I}_{\operatorname{top}\left( \mathbf{T}\right) }\right| }\mathop{\sum }\limits_{{s \in \mathbf{T}}}{\left| \left\langle g \mid {\varphi }_{s}\right\rangle \right| }^{2}\right) }^{\frac{1}{2}},
\]
where the supremum is taken over all 2-trees \( \mathbf{T} \) contained in \( \mathbf{P} \) . We call \( \mathcal{E}\left( {g;\mathbf{P}}\right) \) the energy of the function \( g \) with respect to the set of tiles \( \mathbf{P} \) .
We now state three important lemmas which we prove in the remaining three subsections, respectively.
Lemma 6.1.8. There exists a constant \( {C}_{1} \) such that for any measurable function \( N \) : \( \mathbf{R} \rightarrow {\mathbf{R}}^{ + } \), for any measurable subset \( E \) of the real line with finite measure, and for any finite set of tiles \( \mathbf{P} \) there is a subset \( {\mathbf{P}}^{\prime } \) of \( \mathbf{P} \) such that
\[
\mathcal{M}\left( {E;\mathbf{P} \smallsetminus {\mathbf{P}}^{\prime }}\right) \leq \frac{1}{4}\mathcal{M}\left( {E;\mathbf{P}}\right)
\]
and \( {\mathbf{P}}^{\prime } \) is a union of trees \( {\mathbf{T}}_{j} \) satisfying
\[
\mathop{\sum }\limits_{j}\left| {I}_{\operatorname{top}\left( {\mathbf{T}}_{j}\right) }\right| \leq \frac{{C}_{1}}{\mathcal{M}\left( {E;\mathbf{P}}\right) }
\]
(6.1.33)
Lemma 6.1.9. There exists a constant \( {C}_{2} \) such that for any finite set of tiles \( \mathbf{P} \) and for all functions \( g \) in \( {L}^{2}\left( \mathbf{R}\right) \) there is a subset \( {\mathbf{P}}^{\prime \prime } \) of \( \mathbf{P} \) such that
\[
\mathcal{E}\left( {g;\mathbf{P} \smallsetminus {\mathbf{P}}^{\prime \prime }}\right) \leq \frac{1}{2}\mathcal{E}\left( {g;\mathbf{P}}\right)
\]
and \( {\mathbf{P}}^{\prime \prime } \) is a union of trees \( {\mathbf{T}}_{j} \) satisfying
\[
\mathop{\sum }\limits_{j}\left| {I}_{\operatorname{top}\left( {\mathbf{T}}_{j}\right) }\right| \leq \frac{{C}_{2}}{\mathcal{E}{\left( g;\mathbf{P}\right) }^{2}}
\]
(6.1.34)
Lemma 6.1.10. (The basic estimate) There is a finite constant \( {C}_{3} \) such that for all trees \( \mathbf{T} \), all functions \( g \) in \( {L}^{2}\left( \mathbf{R}\right) \), for any measurable function \( N : \mathbf{R} \rightarrow {\mathbf{R}}^{ + } \), and for all measurable sets \( E \) we have
\[
\mathop{\sum }\limits_{{s \in \mathbf{T}}}\left| {\left\langle {g \mid {\varphi }_{s}}\right\rangle \left\langle {{\chi }_{E \cap {N}^{-1}\left\lbrack {\omega }_{s\left( 2\right) }\right\rbrack } \mid {\varphi }_{s}}\right\rangle }\right|
\]
(6.1.35)
\[
\leq {C}_{3}\left| {I}_{\operatorname{top}\left( \mathbf{T}\right) }\right| \mathcal{E}\left( {g;\mathbf{T}}\right) \mathcal{M}\left( {E;\mathbf{T}}\right) \parallel g{\parallel }_{{L}^{2}}\left| E\right| .
\]
In the rest of this subsection, we conclude the proof of Theorem 6.1.1 assuming Lemmas 6.1.8, 6.1.9, and 6.1.10.
Given a finite set of tiles \( \mathbf{P} \), a measurable set \( E \) of finite measure, a measurable function \( N : \mathbf{R} \rightarrow {\mathbf{R}}^{ + } \), and a function \( f \) in \( \mathcal{S}\left( \mathbf{R}\right) \), we find a very large integer \( {n}_{0} \) such that
\[
\mathcal{E}\left( {f;\mathbf{P}}\right) \leq {2}^{{n}_{0}}
\]
\[
\mathcal{M}\left( {E;\mathbf{P}}\right) \leq {2}^{2{n}_{0}}.
\]
We shall construct by decreasing induction a sequence of pairwise disjoint sets
\[
{\mathbf{P}}_{{n}_{0}},{\mathbf{P}}_{{n}_{0} - 1},{\mathbf{P}}_{{n}_{0} - 2},{\mathbf{P}}_{{n}_{0} - 3},\ldots
\]
such that
\[
\mathop{\bigcup }\limits_{{j = - \infty }}^{{n}_{0}}{\mathbf{P}}_{j} = \mathbf{P}
\]
(6.1.36)
and such that the following properties are satisfied:
(1) \( \mathcal{E}\left( {f;{\mathbf{P}}_{j}}\right) \leq {2}^{j + 1} \) for all \( j \leq {n}_{0} \) ;
(2) \( \mathcal{M}\left( {E;{\mathbf{P}}_{j}}\right) \leq {2}^{{2j} + 2} \) for all \( j \leq {n}_{0} \) ;
(3) \( \mathcal{E}\left( {f;\mathbf{P} \smallsetminus \left( {{\mathbf{P}}_{{n}_{0}} \cup \cdots \cup {\mathbf{P}}_{j}}\right) }\right) \leq {2}^{j} \) for all \( j \leq {n}_{0} \) ;
(4) \( \mathcal{M}\left( {E;\mathbf{P} \smallsetminus \left( {{\mathbf{P}}_{{n}_{0}} \cup \cdots \cup {\mathbf{P}}_{j}}\right) }\right) \leq {2}^{2j} \) for all \( j \leq {n}_{0} \) ;
(5) \( {\mathbf{P}}_{j} \) is a union of trees \( {\mathbf{T}}_{jk} \) such that for all \( j \leq {n}_{0} \) we have
\[
\mathop{\sum }\limits_{k}\left| {I}_{\operatorname{top}\left( {\mathbf{T}}_{jk}\right) }\right| \leq {C}_{0}{2}^{-{2j}}
\]
where \( {C}_{0} = {C}_{1} + {C}_{2} \) and \( {C}_{1} \) and \( {C}_{2} \) are the constants that appear in Lemmas
6.1.8 and 6.1.9, respectively.
Assume momentarily that we have constructed a sequence \( {\left\{ {\mathbf{P}}_{j}\right\} }_{j \leq {n}_{0}} \) with the described properties. Then to obtain estimate (6.1.32) we use (1), (2), (5), the observation that the mass of any set of tiles is always bounded by \( {\left| E\right| }^{-1} \), and Lemma 6.1.10 to obtain
\[
\mathop{\sum }\limits_{{s \in \mathbf{P}}}\left| {\left\langle {f \mid {\varphi }_{s}}\right\rangle \left\langle {{\chi }_{E \cap {N}^{-1}\left\lbrack {\omega }_{s\left( 2\right) }\right\rbrack } \mid {\varphi }_{s}}\right\rangle }\right|
\]
\[
= \mathop{\sum }\limits_{j}\mathop{\sum }\limits_{{s \in {\mathbf{P}}_{j}}}\left| {\left\langle {f \mid {\varphi }_{s}}\right\rangle \left\langle {{\chi }_{E \cap {N}^{-1}\left\lbrack {\omega }_{s\left( 2\right) }\right\rbrack } \mid {\varphi }_{s}}\right\rangle }\right|
\]
\[
\leq \mathop{\sum }\limits_{j}\mathop{\sum }\limits_{k}\mathop{\sum }\limits_{{s \in {\mathbf{T}}_{jk}}}\left| {\left\langle {f \mid {\varphi }_{s}}\right\rangle \left\langle {{\chi }_{E \cap {N}^{-1}\left\lbrack {\omega }_{s\left( 2\right) }\right\rbrack } \mid {\varphi }_{s}}\right\rangle }\right|
\]
\[
\leq {C}_{3}\mathop{\sum }\limits_{j}\mathop{\sum }\limits_{k}\left| {I}_{\operatorname{top}\left( {\mathbf{T}}_{jk}\right) }\right| \mathcal{E}\left( {f;{\mathbf{T}}_{jk}}\right) \mathcal{M}\left( {E;{\mathbf{T}}_{jk}}\right) \parallel f{\parallel }_{{L}^{2}}\left| E\right|
\]
\[
\leq {C}_{3}\mathop{\sum }\limits_{j}\mathop{\sum }\limits_{k}\left| {I}_{\operatorname{top}\left( {\mathbf{T}}_{jk}\right) }\right| {2}^{j + 1}\min \left( {{\left| E\right| }^{-1},{2}^{{2j} + 2}}\right) \parallel f{\parallel }_{{L}^{2}}\left| E\right|
\]
\[
\leq {C}_{3}\mathop{\sum }\limits_{j}{C}_{0}{2}^{-{2j}}{2}^{j + 1}\min \left( {{\left| E\right| }^{-1},{2}^{{2j} + 2}}\right) \parallel f{\parallel }_{{L}^{2}}\left| E\right|
\]
\[
\leq 8{C}_{0}{C}_{3}\mathop{\sum }\limits_{j}\min \left( {{2}^{-j}{\left| E\right| }^{-\frac{1}{2}},{2}^{j}{\left| E\right| }^{\frac{1}{2}}}\right) \parallel f{\parallel }_{{L}^{2}}{\left| E\right| }^{\frac{1}{2}}
\]
\[
\leq C{\left| E\right| }^{\frac{1}{2}}\parallel f{\parallel }_{{L}^{2}}
\]
This proves estimate (6.1.32).
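The final step in the chain above uses only an elementary summation bound, which we record as a side check: splitting the sum over \( j \) at the index where \( {2}^{j} \) crosses \( {\left| E\right| }^{-\frac{1}{2}} \) and summing two geometric series gives
\[
\mathop{\sum }\limits_{j}\min \left( {{2}^{-j}{\left| E\right| }^{-\frac{1}{2}},{2}^{j}{\left| E\right| }^{\frac{1}{2}}}\right) \leq \mathop{\sum }\limits_{{{2}^{j} \leq {\left| E\right| }^{-\frac{1}{2}}}}{2}^{j}{\left| E\right| }^{\frac{1}{2}} + \mathop{\sum }\limits_{{{2}^{j} > {\left| E\right| }^{-\frac{1}{2}}}}{2}^{-j}{\left| E\right| }^{-\frac{1}{2}} \leq 2 + 2 = 4,
\]
so the constant in the last line may be taken to be \( C = {32}{C}_{0}{C}_{3} \) ; its precise value is of no importance.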
It remains to construct a sequence of disjoint sets \( {\mathbf{P}}_{j} \) satisfying properties (1)-(5). The selection of these sets is based on decreasing induction. We start the induction at \( j = {n}_{0} \) by setting \( {\mathbf{P}}_{{n}_{0}} = \varnothing \) . Then (1), (2), and (5) are clearly satisfied, while
\[
\mathcal{E}\left( {f;\mathbf{P} \smallsetminus {\mathbf{P}}_{{n}_{0}}}\right) = \mathcal{E}\left( {f;\mathbf{P}}\right) \leq {2}^{{n}_{0}},
\]
\[
\mathcal{M}\left( {E;\mathbf{P} \smallsetminus {\mathbf{P}}_{{n}_{0}}}\right) = \mathcal{M}\left( {E;\mathbf{P}}\right) \leq {2}^{2{n}_{0}};
\]
hence (3) and (4) are also satisfied for \( {\mathbf{P}}_{{n}_{0}} \) .
Suppose that we have selected pairwise disjoint sets \( {\mathbf{P}}_{{n}_{0}},{\mathbf{P}}_{{n}_{0} - 1},\ldots ,{\mathbf{P}}_{n} \) for some \( n \leq {n}_{0} \) such that (1)-(5) are satisfied for all \( j \in \left\{ {{n}_{0},{n}_{0} - 1,\ldots, n}\right\} \) . We construct a set of tiles \( {\mathbf{P}}_{n - 1} \) disjoint from all \( {\mathbf{P}}_{j} \) with \( j \geq n \) such that (1)-(5) are satisfied for \( j = n - 1 \) .
We define first an auxiliary set \( {\mathbf{P}}_{n - 1}^{\prime } \) . If \( \mathcal{M}\left( {E;\mathbf{P} \smallsetminus \left( {{\mathbf{P}}_{{n}_{0}} \cup \cdots \cup {\mathbf{P}}_{n}}\right) }\right |
109_The rising sea Foundations of Algebraic Geometry | Definition 3.77 |
Definition 3.77. We say that a wall \( H \) strictly separates two simplices if they are in opposite roots determined by \( H \) and neither is in \( H \) . We denote by \( \mathcal{S}\left( {A, B}\right) \) the set of walls that strictly separate two simplices \( A \) and \( B \) .
Proposition 3.78. For any two simplices \( A, B \) in a Coxeter complex \( \sum \), we have
\[
d\left( {A, B}\right) = \left| {\mathcal{S}\left( {A, B}\right) }\right|
\]
i.e., \( d\left( {A, B}\right) \) is equal to the number of walls \( H \) that strictly separate \( A \) from \( B \) . More precisely, the walls crossed by any minimal gallery from \( A \) to \( B \) are distinct and are precisely the walls in \( \mathcal{S}\left( {A, B}\right) \) .
Proof. A proof from the point of view of the Tits cone was sketched in Section 2.7. Here is a combinatorial proof: Let \( \Gamma : {C}_{0},\ldots ,{C}_{d} \) be a minimal gallery from \( A \) to \( B \) . Then it is also a minimal gallery from \( {C}_{0} \) to \( {C}_{d} \), so it crosses \( d \) distinct walls, and these are the walls separating \( {C}_{0} \) from \( {C}_{d} \) . It is immediate that \( \mathcal{S}\left( {A, B}\right) \subseteq \mathcal{S}\left( {{C}_{0},{C}_{d}}\right) \), so \( \Gamma \) crosses all the walls in \( \mathcal{S}\left( {A, B}\right) \) . We must show, conversely, that every wall \( H \) crossed by \( \Gamma \) is in \( \mathcal{S}\left( {A, B}\right) \) . Suppose not. Then there is a root \( \alpha \) bounded by \( H \) that contains both \( A \) and \( B \) . But then we can get a shorter gallery from \( A \) to \( B \) by applying the folding of \( \sum \) onto \( \alpha \) . This contradicts the minimality of \( \Gamma \) .
We close this section by making some remarks that will be useful later, concerning links. Given a simplex \( A \) in a Coxeter complex \( \sum \), recall that its link \( {\sum }^{\prime } \mathrel{\text{:=}} {\operatorname{lk}}_{\sum }A \) is again a Coxeter complex (Proposition 3.16). We wish to explicitly describe its walls and roots. Suppose \( H \) is a wall of \( \sum \) containing \( A \) , and let \( \pm \alpha \) be the corresponding roots. Then one checks immediately from the definitions that \( {H}^{\prime } \mathrel{\text{:=}} H \cap {\sum }^{\prime } \) is a wall of \( {\sum }^{\prime } \), with associated roots \( \pm {\alpha }^{\prime } \mathrel{\text{:=}} \pm \alpha \cap {\sum }^{\prime }. \)
Proposition 3.79. The function \( H \mapsto {H}^{\prime } \mathrel{\text{:=}} H \cap {\sum }^{\prime } \) is a bijection from the set of walls of \( \sum \) containing \( A \) to the set of walls of \( {\sum }^{\prime } \) . Similarly, the function \( \alpha \mapsto {\alpha }^{\prime } \mathrel{\text{:=}} \alpha \cap {\sum }^{\prime } \) is a bijection from the set of roots of \( \sum \) whose boundary contains \( A \) to the set of roots of \( {\sum }^{\prime } \) .
Proof. It suffices to prove the first assertion. Since a wall of \( {\sum }^{\prime } \) is completely determined by any panel that it contains, we can reformulate the assertion as follows: For any panel \( {P}^{\prime } \) of \( {\sum }^{\prime } \), there is a unique wall \( H \) of \( \sum \) with \( A \in H \) and \( {P}^{\prime } \in H \cap {\sum }^{\prime } \) . Equivalently, there is a unique wall of \( \sum \) containing the simplex \( P \mathrel{\text{:=}} {P}^{\prime } \cup A \) . [The equivalence follows from the fact that walls are full subcomplexes by Lemma 3.54.] Since \( P \) is a panel of \( \sum \), the proposition is now immediate.
Remark 3.80. Recall that we may identify \( {\sum }^{\prime } \) with \( {\sum }_{ \geq A} \) via \( {B}^{\prime } \mapsto {B}^{\prime } \cup A \) for \( {B}^{\prime } \in {\sum }^{\prime } \), and \( B \mapsto B \smallsetminus A \) for \( B \in {\sum }_{ \geq A} \) . If we make this identification, then the bijections in the proposition are still given by intersection. In other words, if \( H \) is a wall of \( \sum \) containing \( A \) and \( {H}^{\prime } \mathrel{\text{:=}} H \cap {\sum }^{\prime } \), then
\[
\left\{ {B \in {\sum }_{ \geq A} \mid \left( {B \smallsetminus A}\right) \in {H}^{\prime }}\right\} = H \cap {\sum }_{ \geq A},
\]
and similarly for roots.
## Exercises
Assume throughout these exercises that \( \sum \) is a Coxeter complex.
3.81. Let \( H \) be a wall with associated roots \( \pm \alpha \), and let \( A \) be an arbitrary simplex. Show that \( A \in H \) if and only if there are chambers \( C,{C}^{\prime } \geq A \) with \( C \in \alpha \) and \( {C}^{\prime } \in - \alpha \) .
3.82. With the notation of the previous exercise, if \( A \in H \) show that the chambers \( C,{C}^{\prime } \) can be taken to be adjacent. Thus there is a panel \( P \) in \( \sum \) such that \( A \leq P \in H \) .
3.83. Assume that \( \sum \) is infinite.
(a) Show that \( \sum \) has infinitely many walls.
(b) Assume that \( \sum \) is irreducible (i.e., its Coxeter diagram is connected). For every vertex \( x \) of \( \sum \), show that there are infinitely many walls not containing \( x \) . [See Lemma 2.92 for the same result expressed in terms of the Tits cone.]
## 3.5 The Weyl Distance Function
In this section we introduce an important tool, whose usefulness will become more and more apparent as we develop the theory of buildings. Let \( \sum \) be a Coxeter complex. Choose a type function on \( \sum \) with values in a set \( S \), which is not necessarily given to us as the set of generators of a Coxeter group.
Definition 3.84. The Coxeter matrix of \( \sum \) is the matrix \( M = {\left( m\left( s, t\right) \right) }_{s, t \in S} \) defined by
\[
m\left( {s, t}\right) \mathrel{\text{:=}} \operatorname{diam}\left( {\operatorname{lk}A}\right) ,
\]
where \( A \) is any simplex of cotype \( \{ s, t\} \) (see Remark 3.21). Note that if \( \sum = \sum \left( {W, S}\right) \) for some Coxeter system \( \left( {W, S}\right) \) and we use the canonical type function, then \( M \) is the Coxeter matrix of \( \left( {W, S}\right) \) . It follows that \( M \) is well defined in general. The Weyl group of \( \sum \) is defined to be the Coxeter group \( {W}_{M} \) defined by \( M \) . It has generating set \( S \) and defining relations \( {\left( st\right) }^{m\left( {s, t}\right) } = 1 \) .
Note that if \( \sum \) is given to us as \( \sum \left( {W, S}\right) \) (with its canonical type function), then \( {W}_{M} \) is the group \( W \) that we started with. The following result is therefore not surprising:
Proposition 3.85. There is a type-preserving isomorphism \( \sum \cong \sum \left( {{W}_{M}, S}\right) \) , where \( \sum \left( {{W}_{M}, S}\right) \) is given its canonical type function with values in \( S \) .
Proof. By definition, there is a simplicial isomorphism \( \phi : \sum \rightarrow \sum \left( {{W}^{\prime },{S}^{\prime }}\right) \) for some Coxeter system \( \left( {{W}^{\prime },{S}^{\prime }}\right) \) . Let \( {\phi }_{ * } : S \rightarrow {S}^{\prime } \) be the induced type-change bijection (Proposition A.14), where \( \sum \left( {{W}^{\prime },{S}^{\prime }}\right) \) is given its canonical type function. For any \( s, t \in S \) and any simplex \( A \) of cotype \( \{ s, t\} \), the image \( {A}^{\prime } \mathrel{\text{:=}} \phi \left( A\right) \) has cotype \( \left\{ {{s}^{\prime },{t}^{\prime }}\right\} \), where \( {s}^{\prime } \mathrel{\text{:=}} {\phi }_{ * }\left( s\right) \) and \( {t}^{\prime } \mathrel{\text{:=}} {\phi }_{ * }\left( t\right) \) . Since \( \phi \) induces an isomorphism \( {\operatorname{lk}}_{\sum }A \rightarrow {\operatorname{lk}}_{{\sum }^{\prime }}{A}^{\prime } \), it follows that \( m\left( {s, t}\right) = {m}^{\prime }\left( {{s}^{\prime },{t}^{\prime }}\right) \) , where \( {m}^{\prime }\left( {-, - }\right) \) denotes the Coxeter matrix of \( \left( {{W}^{\prime },{S}^{\prime }}\right) \) . Hence \( {\phi }_{ * } \) extends to an isomorphism \( \left( {{W}_{M}, S}\right) \rightarrow \left( {{W}^{\prime },{S}^{\prime }}\right) \) of Coxeter systems, which in turn induces an isomorphism \( \psi : \sum \left( {{W}_{M}, S}\right) \rightarrow \sum \left( {{W}^{\prime },{S}^{\prime }}\right) \) . Note that the induced type-change bijection \( {\psi }_{ * } : S \rightarrow {S}^{\prime } \) is equal to \( {\phi }_{ * } \) . It follows that the composite isomorphism \( {\psi }^{-1} \circ \phi : \sum \rightarrow \sum \left( {{W}_{M}, S}\right) \) is type-preserving.
This motivates the following terminology.
Definition 3.86. Let \( \left( {W, S}\right) \) be a Coxeter system with Coxeter matrix \( M \) . We say that a Coxeter complex \( \sum \) is of type \( \left( {W, S}\right) \) (or of type \( M \) ) if \( \sum \) comes equipped with a type function having values in \( S \) such that the Coxeter matrix of \( \sum \) is \( M \) or, equivalently, such that there is a type-preserving isomorphism \( \sum \rightarrow \sum \left( {W, S}\right) \) . We can then identify \( W \) with the Weyl group \( {W}_{M} \) of \( \sum \) .
We now wish to define a function \( \delta : \mathcal{C}\left( \sum \right) \times \mathcal{C}\left( \sum \right) \rightarrow {W}_{M} \), called the Weyl distance function, such that
\[
d\left( {{C}_{1},{C}_{2}}\right) = l\left( {\delta \left( {{C}_{1},{C}_{2}}\right) }\right)
\]
(3.5)
for any two chambers \( {C}_{1},{C}_{2} \) . Intuitively, \( \delta \left( {{C}_{1},{C}_{2}}\right) \) is something like a vector pointing from \( {C}_{1} \) to \( {C}_{2} \) ; it tells us the distance from \( {C}_{1} \) to \( {C}_{2} \) as well as what "direction" to go in to get from \( {C}_{1} \) to \( {C}_{2} \) .
To define \( \delta \left( {{C}_{1},{C}_{2}}\right) \), choose an arbitrary gallery from \( {C}_{1} \) to \( {C}_{2} \), let \( \left( {{s}_{1},{s}_{2},\ldots ,{s}_{d}}\right) \) be its type, and set
\[
\delta \left( {{C}_{1},{C}_{2}}\right) \mathrel{\text{:=}} {s}_{1}{s}_{2}\cdots {s}_{d} \in {W}_{M}.
\]
(3.6)
T |
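Before moving on, here is a small computational sketch of (3.5) and (3.6) in the smallest interesting case, the Coxeter complex of \( \left( {W, S}\right) \) with \( W = {S}_{3} \) and \( S \) the two adjacent transpositions. It uses the standard identification of chambers with elements of \( W \) (two chambers are adjacent when they differ by right multiplication by a generator, and \( \delta \left( {{C}_{1},{C}_{2}}\right) = {C}_{1}^{-1}{C}_{2} \) ); that identification is assumed here rather than derived in this section.

```python
from itertools import permutations
from collections import deque

def compose(u, v):                 # permutations as tuples: (u*v)(i) = u(v(i))
    return tuple(u[v[i]] for i in range(len(v)))

def inverse(u):
    inv = [0] * len(u)
    for i, ui in enumerate(u):
        inv[ui] = i
    return tuple(inv)

S = [(1, 0, 2), (0, 2, 1)]         # the adjacent transpositions s1, s2
W = list(permutations(range(3)))   # chambers of the Coxeter complex

def length(w):                     # word length = number of inversions
    return sum(1 for i in range(3) for j in range(i + 1, 3) if w[i] > w[j])

def gallery_distance(c1, c2):      # BFS on the chamber-adjacency graph
    dist, queue = {c1: 0}, deque([c1])
    while queue:
        c = queue.popleft()
        for s in S:
            nb = compose(c, s)     # the chamber s-adjacent to c
            if nb not in dist:
                dist[nb] = dist[c] + 1
                queue.append(nb)
    return dist[c2]

# Check (3.5): gallery distance equals the length of the Weyl distance.
for c1 in W:
    for c2 in W:
        delta = compose(inverse(c1), c2)   # Weyl distance delta(c1, c2)
        assert gallery_distance(c1, c2) == length(delta)
```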
1112_(GTM267)Quantum Theory for Mathematicians | Definition 8.6 |
Definition 8.6 If \( f \) is a bounded measurable (complex-valued) function on \( \sigma \left( A\right) \), define a map \( {Q}_{f} : \mathbf{H} \rightarrow \mathbb{C} \) by the formula
\[
{Q}_{f}\left( \psi \right) = {\int }_{\sigma \left( A\right) }f\left( \lambda \right) d{\mu }_{\psi }\left( \lambda \right)
\]
where \( {\mu }_{\psi } \) is the measure in (8.8).
If \( f \) happens to be real valued and continuous, then \( {Q}_{f}\left( \psi \right) \) is equal to \( \langle \psi, f\left( A\right) \psi \rangle \), in which case \( {Q}_{f} \) is a bounded quadratic form. (See Definition A.60 and Example A.62.) It turns out that \( {Q}_{f} \) is a bounded quadratic form for any bounded measurable \( f \), in which case Proposition A.63 allows us to associate with \( {Q}_{f} \) a bounded operator, which we denote by \( f\left( A\right) \) . Once the relevant properties of \( f\left( A\right) \) are established, we will construct the desired projection-valued measure by setting \( {\mu }^{A}\left( E\right) = {1}_{E}\left( A\right) \) .
Proposition 8.7 For any bounded measurable function \( f \) on \( \sigma \left( A\right) \), the map \( {Q}_{f} \) in Definition 8.6 is a bounded quadratic form.
Proof. Let \( \mathcal{F} \) denote the space of all bounded, Borel-measurable functions \( f \) for which \( {Q}_{f} \) is a quadratic form. Then \( \mathcal{F} \) is a vector space and contains \( \mathcal{C}\left( {\sigma \left( A\right) ;\mathbb{R}}\right) \) . Furthermore, \( \mathcal{F} \) is closed under uniformly bounded pointwise limits, because \( {Q}_{f}\left( \psi \right) \) is continuous with respect to such limits, by dominated convergence. Standard measure-theoretic techniques (Exercise 3) then show that \( \mathcal{F} \) is the space of all bounded Borel-measurable functions on \( X \) .
Meanwhile, it follows from (8.9) that
\[
\left| {{Q}_{f}\left( \psi \right) }\right| \leq \mathop{\sup }\limits_{{\lambda \in \sigma \left( A\right) }}\left| {f\left( \lambda \right) }\right| \parallel \psi {\parallel }^{2},
\]
showing that \( {Q}_{f} \) is always a bounded quadratic form.
Definition 8.8 For a bounded measurable function \( f \) on \( \sigma \left( A\right) \), let \( f\left( A\right) \) be the operator associated to the quadratic form \( {Q}_{f} \) by Proposition A.63. This means that \( f\left( A\right) \) is the unique operator such that
\[
\langle \psi, f\left( A\right) \psi \rangle = {Q}_{f}\left( \psi \right) = {\int }_{\sigma \left( A\right) }{fd}{\mu }_{\psi }
\]
for all \( \psi \in \mathbf{H} \) .
Observe that if \( f \) is real valued, then \( {Q}_{f}\left( \psi \right) \) is real for all \( \psi \in \mathbf{H} \), which means (Proposition A.63) that the associated operator \( f\left( A\right) \) is self-adjoint. We will shortly associate with \( A \) a projection-valued measure \( {\mu }^{A} \), and we will show that \( f\left( A\right) \), as given by Definition 8.8, agrees with \( f\left( A\right) \) as given by \( {\int }_{\sigma \left( A\right) }f\left( \lambda \right) d{\mu }^{A}\left( \lambda \right) \) . [See (8.10) and compare Definition 7.13.]
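As a finite-dimensional illustration of Definitions 8.6 and 8.8 (an analogy only, not the construction of the text, which concerns a general bounded self-adjoint operator on \( \mathbf{H} \) ), take \( A \) to be a Hermitian matrix. Then \( {\mu }_{\psi } \) reduces to point masses \( {\left| \left\langle {v}_{i},\psi \right\rangle \right| }^{2} \) at the eigenvalues \( {\lambda }_{i} \), and the operator associated with \( {Q}_{f} \) is obtained by applying \( f \) to the eigenvalues. The sketch below checks \( \langle \psi, f\left( A\right) \psi \rangle = {Q}_{f}\left( \psi \right) \) numerically.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 4
B = rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))
A = (B + B.conj().T) / 2                    # Hermitian, hence self-adjoint

eigvals, V = np.linalg.eigh(A)              # A = V diag(eigvals) V^*
psi = rng.standard_normal(n) + 1j * rng.standard_normal(n)

def f(lam):                                 # a bounded Borel function, here 1_{(0, inf)}
    return np.where(lam > 0, 1.0, 0.0)

# mu_psi is the sum of point masses |<v_i, psi>|^2 at the eigenvalues, so
# Q_f(psi) = integral of f d(mu_psi) = sum_i f(lambda_i) |<v_i, psi>|^2.
weights = np.abs(V.conj().T @ psi) ** 2
Q_f = np.sum(f(eigvals) * weights)

# The operator associated with the quadratic form Q_f: apply f to the eigenvalues.
fA = V @ np.diag(f(eigvals)) @ V.conj().T

assert np.isclose(Q_f, np.vdot(psi, fA @ psi).real)
```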
Proposition 8.9 For any two bounded measurable functions \( f \) and \( g \), we have
\[
\left( {fg}\right) \left( A\right) = f\left( A\right) g\left( A\right)
\]
Proof. Let \( {\mathcal{F}}_{1} \) denote the space of bounded measurable functions \( f \) such that \( \left( {fg}\right) \left( A\right) = f\left( A\right) g\left( A\right) \) for all \( g \in \mathcal{C}\left( {\sigma \left( A\right) ;\mathbb{R}}\right) \) . Then \( {\mathcal{F}}_{1} \) is a vector space and contains \( \mathcal{C}\left( {\sigma \left( A\right) ;\mathbb{R}}\right) \) . We have already noted that dominated convergence guarantees that the map \( f \mapsto {Q}_{f}\left( \psi \right) ,\psi \in \mathbf{H} \), is continuous under uniformly bounded pointwise convergence. By the polarization identity (Proposition A.59), the same is true for the map \( f \mapsto {L}_{f}\left( {\phi ,\psi }\right) \), where \( {L}_{f} \) is the sesquilinear form associated to \( {Q}_{f} \) . Now, by the polarization identity, \( f \) will be in \( {\mathcal{F}}_{1} \) provided that
\[
\langle \psi ,\left( {fg}\right) \left( A\right) \psi \rangle = \langle \psi, f\left( A\right) g\left( A\right) \psi \rangle
\]
or, equivalently,
\[
{Q}_{fg}\left( \psi \right) = {L}_{f}\left( {\psi, g\left( A\right) \psi }\right)
\]
for all \( \psi \in \mathbf{H} \) and all \( g \in \mathcal{C}\left( {\sigma \left( A\right) ;\mathbb{R}}\right) \) . From this, we can see that \( {\mathcal{F}}_{1} \) is closed under uniformly bounded pointwise limits. Thus, by Exercise \( 3,{\mathcal{F}}_{1} \) consists of all bounded, Borel-measurable functions.
We now let \( {\mathcal{F}}_{2} \) denote the space of all bounded, Borel-measurable functions \( f \) such that \( \left( {fg}\right) \left( A\right) = f\left( A\right) g\left( A\right) \) for all bounded Borel-measurable functions \( g \) . Our result for \( {\mathcal{F}}_{1} \) shows that \( {\mathcal{F}}_{2} \) contains \( \mathcal{C}\left( {\sigma \left( A\right) ;\mathbb{R}}\right) \) . Thus, the same argument as for \( {\mathcal{F}}_{1} \) shows that \( {\mathcal{F}}_{2} \) consists of all bounded, Borel-measurable functions.
Theorem 8.10 Suppose \( A \in \mathcal{B}\left( \mathbf{H}\right) \) is self-adjoint. For any measurable set \( E \subset \sigma \left( A\right) \), define an operator \( {\mu }^{A}\left( E\right) \) by
\[
{\mu }^{A}\left( E\right) = {1}_{E}\left( A\right)
\]
where \( {1}_{E}\left( A\right) \) is given by Definition 8.8. Then \( {\mu }^{A} \) is a projection-valued measure on \( \sigma \left( A\right) \) and satisfies
\[
{\int }_{\sigma \left( A\right) }{\lambda d}{\mu }^{A}\left( \lambda \right) = A
\]
Theorem 8.10 establishes the existence of the projection-valued measure in our first version of the spectral theorem (Theorem 7.12).
Proof. Since \( {1}_{E} \) is real-valued and satisfies \( {1}_{E} \cdot {1}_{E} = {1}_{E} \), Proposition 8.4 tells us that \( {1}_{E}\left( A\right) \) is self-adjoint and satisfies \( {1}_{E}{\left( A\right) }^{2} = {1}_{E}\left( A\right) \) . Thus, \( {\mu }^{A}\left( E\right) \) is an orthogonal projection (Proposition A.57), for any measurable set \( E \subset X \) . If \( {E}_{1} \) and \( {E}_{2} \) are measurable sets, then \( {1}_{{E}_{1} \cap {E}_{2}} = {1}_{{E}_{1}} \cdot {1}_{{E}_{2}} \) and so
\[
{\mu }^{A}\left( {{E}_{1} \cap {E}_{2}}\right) = {\mu }^{A}\left( {E}_{1}\right) {\mu }^{A}\left( {E}_{2}\right) .
\]
If \( {E}_{1},{E}_{2},\ldots \) are disjoint measurable sets, then \( {\mu }^{A}\left( {E}_{j}\right) {\mu }^{A}\left( {E}_{k}\right) = {\mu }^{A}\left( \varnothing \right) = 0 \) , for \( j \neq k \), and so the ranges of the projections \( {\mu }^{A}\left( {E}_{j}\right) \) and \( {\mu }^{A}\left( {E}_{k}\right) \) are
orthogonal. It then follows by an elementary argument that, for all \( \psi \in \mathbf{H} \) , we have
\[
\mathop{\sum }\limits_{{j = 1}}^{\infty }{\mu }^{A}\left( {E}_{j}\right) \psi = {P\psi }
\]
where the sum converges in the norm topology of \( \mathbf{H} \) and where \( P \) is the orthogonal projection onto the smallest closed subspace containing the range of \( {\mu }^{A}\left( {E}_{j}\right) \) for every \( j \) . On the other hand, if \( E \mathrel{\text{:=}} { \cup }_{j = 1}^{\infty }{E}_{j} \), then the sequence \( {f}_{N} \mathrel{\text{:=}} \mathop{\sum }\limits_{{j = 1}}^{N}{1}_{{E}_{j}} \) is uniformly bounded (by 1) and converges pointwise to \( {1}_{E} \) . Thus, using again dominated convergence in (8.8),
\[
\mathop{\lim }\limits_{{N \rightarrow \infty }}\left\langle {\psi ,\mathop{\sum }\limits_{{j = 1}}^{N}{1}_{{E}_{j}}\left( A\right) \psi }\right\rangle = \left\langle {\psi ,{1}_{E}\left( A\right) \psi }\right\rangle .
\]
It follows that \( {1}_{E}\left( A\right) \) coincides with \( P \), which establishes the desired countable additivity for \( {\mu }^{A} \) .
Finally, if \( f = {1}_{E} \) for some Borel set \( E \), then
\[
{\int }_{\sigma \left( A\right) }f\left( \lambda \right) d{\mu }^{A}\left( \lambda \right) = f\left( A\right)
\]
(8.10)
where \( f\left( A\right) \) is given by Definition 8.8. [The integral is equal to \( {\mu }^{A}\left( E\right) \), which is, by definition, equal to \( {1}_{E}\left( A\right) \) .] The equality (8.10) then holds for simple functions by linearity and for all bounded, Borel-measurable functions by taking limits. In particular, if \( f\left( \lambda \right) = \lambda \), then the integral of \( f \) against \( {\mu }^{A} \) agrees with \( f\left( A\right) \) as defined in Definition 8.8, which agrees with \( f\left( A\right) \) as defined in the continuous functional calculus, which in turn agrees with \( f\left( A\right) \) as defined for polynomials - namely, \( f\left( A\right) = A \) . This means that
\[
{\int }_{\sigma \left( A\right) }{\lambda d}{\mu }^{A}\left( \lambda \right) = A
\]
as desired.
We have now completed the existence of the projection-valued measure \( {\mu }^{A} \) in Theorem 7.12. The uniqueness of \( {\mu }^{A} \) is left as an exercise (Exercise 4). We close this section by proving Proposition 7.16, which states that if a bounded operator \( B \) commutes with a bounded self-adjoint operator \( A \) , then \( B \) commutes with \( f\left( A\right) \), for all bounded, Borel-measurable functions \( f \) on \( \sigma \left( A\right) \) .
Proof of Proposition 7.16. If \( B \) commutes with \( A \), then \( B \) commutes with \( p\left( A\right) \), for any polynomial \( p \) . Thus, by taking limits as in the construction of the continuous functional calculus, \( B \) will commute with \( f\left( A\right) \) for any continuous real-valued function \( f \) on \( \sigma \left( A\right) \) . We now let \( \mathcal{F} \) denote the space of all bounded, Borel-measurable functions \( f \) on \( \sigma \left( A\right) \) for which \( f\left( A\right) \) c |
1075_(GTM233)Topics in Banach Space Theory | Definition 2.1.8 |
Definition 2.1.8. A bounded operator \( T \) from a Banach space \( X \) into a Banach space \( Y \) is strictly singular if there is no infinite-dimensional subspace \( E \subset X \) such that \( {\left. T\right| }_{E} \) is an isomorphism onto its range.
Theorem 2.1.9. If \( p \neq r \), every bounded operator \( T : {\ell }_{p} \rightarrow {\ell }_{r} \) is strictly singular.
Proof. This is immediate from Corollary 2.1.6.
## 2.2 Complemented Subspaces of \( {\ell }_{p}\left( {1 \leq p < \infty }\right) \) and \( {c}_{0} \)
The results of this section are due to Pełczyński [241]; they demonstrate the power of basic sequence techniques.
Proposition 2.2.1. Every infinite-dimensional closed subspace \( Y \) of \( {\ell }_{p}(1 \leq p < \) \( \infty ) \) [respectively, \( {c}_{0} \) ] contains a closed subspace \( Z \) such that \( Z \) is isomorphic to \( {\ell }_{p} \) [respectively, \( {c}_{0} \) ] and complemented in \( {\ell }_{p} \) [respectively, \( {c}_{0} \) ].
Proof. Since \( Y \) is infinite-dimensional, for every \( n \) there is \( {x}_{n} \in Y \) with \( \begin{Vmatrix}{x}_{n}\end{Vmatrix} = 1 \) such that \( {e}_{k}^{ * }\left( {x}_{n}\right) = 0 \) for \( 1 \leq k \leq n \) . If not, for some \( N \in \mathbb{N} \) the projection \( {S}_{N}\left( {\mathop{\sum }\limits_{{n = 1}}^{\infty }{a}_{n}{e}_{n}}\right) = \mathop{\sum }\limits_{{n = 1}}^{N}{a}_{n}{e}_{n} \) restricted to \( Y \) would be injective (since \( 0 \neq y \in \) \( Y \) would imply \( {S}_{N}\left( y\right) \neq 0 \) ), and so \( {\left. {S}_{N}\right| }_{Y} \) would be an isomorphism onto its image, which is impossible because \( Y \) is infinite-dimensional. By Proposition 2.1.3 the sequence \( {\left( {x}_{n}\right) }_{n = 1}^{\infty } \) has a subsequence \( {\left( {x}_{{n}_{k}}\right) }_{k = 1}^{\infty } \) that is basic, equivalent to the canonical basis of the space and such that the subspace \( Z = \left\lbrack {x}_{{n}_{k}}\right\rbrack \) is complemented.
Since \( {c}_{0} \) and \( {\ell }_{1} \) are nonreflexive and every closed subspace of a reflexive space is reflexive, using Proposition 2.2.1 we obtain the following result.
Proposition 2.2.2. Let \( Y \) be an infinite-dimensional closed subspace of either \( {c}_{0} \) or \( {\ell }_{1} \) . Then \( Y \) is not reflexive.
Suppose now that \( Y \) is itself complemented in \( {\ell }_{p}\left( {1 \leq p < \infty }\right) \) [respectively, \( {c}_{0} \) ]. Proposition 2.2.1 tells us that \( Y \) contains a complemented copy of \( {\ell }_{p} \) [respectively, \( \left. {c}_{0}\right\rbrack \) . Can we say more? Remarkably, Pełczyński discovered a trick that enables us, by rather "soft" arguments, to do quite a bit better. This trick is nowadays known as the Pelczyński decomposition technique and has proved very useful in different contexts.
The situation is this: we have two Banach spaces \( X \) and \( Y \) such that \( Y \) is isomorphic to a complemented subspace of \( X \), and \( X \) is isomorphic to a complemented subspace of \( Y \) . We would like to deduce that \( X \) and \( Y \) are isomorphic. This is known (by analogy with a similar result for cardinals) as the Schroeder-Bernstein problem for Banach spaces. It was not until 1996 that Gowers [115] showed that it is not true in general, using a space constructed by himself and Maurey that contains no unconditional basic sequences (see the mention of the unconditional basic sequence problem in Section 3.5); one year later, Gowers and Maurey [117] gave an example of a Banach space \( X \) that is isomorphic to \( {X}^{3} \) but not to \( {X}^{2} \) (thereby failing to have the Schroeder-Bernstein property).
The Pelczyński decomposition technique gives two criteria for which the Schroeder-Bernstein problem has a positive solution.
We need to introduce the spaces \( {\ell }_{p}\left( X\right) \) for \( 1 \leq p < \infty \) and \( {c}_{0}\left( X\right) \), where \( X \) is a given Banach space.
For \( 1 \leq p < \infty \), the space \( {\ell }_{p}\left( X\right) = {\left( X \oplus X \oplus \cdots \right) }_{p} \), called the infinite direct sum of \( X \) in the sense of \( {\ell }_{p} \), consists of all sequences \( x = {\left( x\left( n\right) \right) }_{n = 1}^{\infty } \) with values in \( X \) such that \( {\left( \parallel x\left( n\right) {\parallel }_{X}\right) }_{n = 1}^{\infty } \in {\ell }_{p} \), with the norm
\[
\parallel x\parallel = {\begin{Vmatrix}{\left( \parallel x\left( n\right) {\parallel }_{X}\right) }_{n = 1}^{\infty }\end{Vmatrix}}_{p} = {\left( \mathop{\sum }\limits_{{n = 1}}^{\infty }\parallel x\left( n\right) {\parallel }_{X}^{p}\right) }^{1/p}.
\]
Similarly, the space \( {c}_{0}\left( X\right) = {\left( X \oplus X \oplus \cdots \right) }_{{c}_{0}} \), called the infinite direct sum of \( X \) in the sense of \( {c}_{0} \), consists of all \( X \) -valued sequences \( x = {\left( x\left( n\right) \right) }_{n = 1}^{\infty } \) such that \( \mathop{\lim }\limits_{n}\parallel x\left( n\right) {\parallel }_{X} = 0 \) under the norm
\[
\parallel x\parallel = \mathop{\max }\limits_{{1 \leq n < \infty }}\parallel x\left( n\right) {\parallel }_{X}
\]
Notice that \( {\ell }_{p}\left( {\ell }_{p}\right) \) can be identified with \( {\ell }_{p}\left( {\mathbb{N} \times \mathbb{N}}\right) \), and hence it is isometric to \( {\ell }_{p} \) . Analogously, \( {c}_{0}\left( {c}_{0}\right) \) is isometric to \( {c}_{0} \) .
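The identification of \( {\ell }_{p}\left( {\ell }_{p}\right) \) with \( {\ell }_{p}\left( {\mathbb{N} \times \mathbb{N}}\right) \) and hence with \( {\ell }_{p} \) amounts to nothing more than relabelling coordinates along a bijection between \( \mathbb{N} \times \mathbb{N} \) and \( \mathbb{N} \) . A small numerical sketch of this (the Cantor pairing used below is one illustrative choice of bijection; any other would do):

```python
import numpy as np

def cantor_pair(i, j):
    # A bijection N x N -> N used to relabel the double index by a single index.
    return (i + j) * (i + j + 1) // 2 + j

p = 3.0
x = np.random.default_rng(1).standard_normal((5, 7))   # a finitely supported double array

flat = {}
for i in range(x.shape[0]):
    for j in range(x.shape[1]):
        flat[cantor_pair(i, j)] = x[i, j]

norm_double = np.sum(np.abs(x) ** p) ** (1 / p)                   # the l_p(l_p) norm
norm_single = sum(abs(v) ** p for v in flat.values()) ** (1 / p)  # the l_p norm after relabelling

assert np.isclose(norm_double, norm_single)
```

The check is of course immediate once the entries are listed out, which is precisely the point: the isometry is a relabelling of coordinates and nothing else.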
Theorem 2.2.3 (The Pelczyński decomposition technique [241]). Let \( X \) and \( Y \) be Banach spaces such that \( X \) is isomorphic to a complemented subspace of \( Y \), and \( Y \) is isomorphic to a complemented subspace of \( X \) . Suppose further that either
(a) \( X \approx {X}^{2} = X \oplus X \) and \( Y \approx {Y}^{2} \), or
(b) \( X \approx {c}_{0}\left( X\right) \) or \( X \approx {\ell }_{p}\left( X\right) \) for some \( 1 \leq p \leq \infty \) .
Then \( X \) is isomorphic to \( Y \) .
Proof. Let us put \( X \approx Y \oplus E \) and \( Y \approx X \oplus F \) . If (a) holds, then we have
\[
X \approx Y \oplus Y \oplus E \approx Y \oplus X
\]
and by a symmetric argument \( Y \approx X \oplus Y \) . Hence \( Y \approx X \) .
If \( X \) satisfies \( \left( b\right) \), then in particular \( X \approx {X}^{2} \), so as in part \( \left( a\right) \) we obtain \( Y \approx X \oplus Y \) . On the other hand,
\[
{\ell }_{p}\left( X\right) \approx {\ell }_{p}\left( {Y \oplus E}\right) \approx {\ell }_{p}\left( Y\right) \oplus {\ell }_{p}\left( E\right) .
\]
Hence if \( X \approx {\ell }_{p}\left( X\right) \) ,
\[
X \approx Y \oplus {\ell }_{p}\left( Y\right) \oplus {\ell }_{p}\left( E\right) \approx Y \oplus {\ell }_{p}\left( X\right) \approx Y \oplus X.
\]
The proof is analogous if \( X \approx {c}_{0}\left( X\right) \) .
We are ready to prove a beautiful theorem due to Pelczyński [241] that had a profound influence on the development of Banach space theory.
Theorem 2.2.4. Suppose \( Y \) is a complemented infinite-dimensional subspace of \( {\ell }_{p} \) where \( 1 \leq p < \infty \) [respectively, \( {c}_{0} \) ]. Then \( Y \) is isomorphic to \( {\ell }_{p} \) [respectively, \( {c}_{0} \) ].
Proof. Proposition 2.2.1 gives an infinite-dimensional subspace \( Z \) of \( Y \) such that \( Z \) is isomorphic to \( {\ell }_{p} \) [respectively, \( {c}_{0} \) ] and \( Z \) is complemented in \( {\ell }_{p} \) [respectively, \( \left. {c}_{0}\right\rbrack \) . Obviously \( Z \) is also complemented in \( Y \) ; therefore, \( {\ell }_{p} \) [respectively, \( {c}_{0} \) ] is (isomorphic to) a complemented subspace in \( Y \) . Since \( {\ell }_{p}\left( {\ell }_{p}\right) = {\ell }_{p} \) [respectively, \( \left. {{c}_{0}\left( {c}_{0}\right) = {c}_{0}}\right\rbrack \) ], part (b) of Theorem 2.2.3 applies and we are done.
At this point let us discuss where this theorem leads. First, the alert reader may ask whether it is true that every subspace of \( {\ell }_{p} \) is actually complemented. Certainly this is true when \( p = 2 \) ! This is a special case of what is known as the complemented subspace problem.
The complemented subspace problem. If \( X \) is a Banach space such that every closed subspace is complemented, is \( X \) isomorphic to a Hilbert space?
This problem was settled positively by Lindenstrauss and Tzafriri in 1971 [200]. We will later discuss its general solution, but at the moment, let us point out that it is not so easy to demonstrate the answer even for the \( {\ell }_{p} \) -spaces when \( p \neq 2 \) . In this chapter we will show that \( {\ell }_{1} \) has an uncomplemented subspace.
Another way to approach the complemented subspace problem is to demonstrate that \( {\ell }_{p} \) has a subspace that is not isomorphic to the whole space. Here we meet another question dating back to Banach:
The homogeneous space problem. Let \( X \) be a Banach space that is isomorphic to every one of its infinite-dimensional closed subspaces. Is \( X \) isomorphic to a Hilbert space?
This problem was finally solved, again positively, by Komorowski and Tomczak-Jaegermann [175] in 1996 (using an important ingredient by Gowers [114]).
Oddly enough, the \( {\ell }_{p} \) -spaces for \( p \neq 2 \) are not as regular as one would expect. In fact, for every \( p \neq 2 \), \( {\ell }_{p} \) contains a subspace without a basis. For \( p > 2 \) this was proved by Davie in 1973 [55]; for general \( p \) it was obtained by Szankowski [289] a few years later. However, the construction of such subspaces is far from easy and will not be covered in this book. Notice that this provides an example of a separable Banach space without a basis.
One natural idea that comes out of Theorem 2.2.4 is the notion that the \( {\ell }_{p} \) -spaces and \( {c}_{0} \) are the building blocks from which Banach spaces are constructed; by analogy they might play the role of primes in number theory. This thinking is behind the following definition:
|
110_The Schwarz Function and Its Generalization to Higher Dimensions | Definition 5.6 |
Definition 5.6. Let \( C \) be a convex set in a vector space \( E \) such that \( 0 \in \) \( \operatorname{rai}\left( C\right) \) . The (Minkowski) gauge function of \( C \) is the function \( {p}_{C} \) defined on \( E \) by the formula
\[
{p}_{C}\left( x\right) \mathrel{\text{:=}} \inf \{ t > 0 : x \in {tC}\} = \inf \{ t \geq 0 : x \in {tC}\} .
\]
If \( C \) is a convex set in an affine space \( A \) and \( {x}_{0} \in \operatorname{rai}\left( C\right) \), then the gauge function of \( C \) with respect to \( {x}_{0} \) is the function \( p\left( x\right) \mathrel{\text{:=}} {p}_{C - {x}_{0}}\left( {x - {x}_{0}}\right) \) defined on \( A \), that is,
\[
p\left( x\right) \mathrel{\text{:=}} \inf \left\{ {t > 0 : x \in {x}_{0} + t\left( {C - {x}_{0}}\right) }\right\} ,\;\text{ where }\;x \in A.
\]
Theorem 5.7. Let \( C \) be a convex set in a vector space \( E \) such that \( 0 \in \operatorname{rai}\left( C\right) \) . The gauge function \( {p}_{C} \) is a nonnegative extended-valued function, \( {p}_{C} : E \rightarrow \) \( \mathbb{R} \cup \{ + \infty \} \), that is finite-valued precisely on the linear subspace \( \operatorname{span}\left( C\right) \) .
Moreover, \( {p}_{C} \) is a sublinear function, that is, for all \( x, y \in E \) and for all \( t \geq 0 \)
\[
{p}_{C}\left( {tx}\right) = t{p}_{C}\left( x\right) \;\text{ and }\;{p}_{C}\left( {x + y}\right) \leq {p}_{C}\left( x\right) + {p}_{C}\left( y\right) .
\]
Thus, \( {p}_{C} \) is a convex homogeneous function of degree one.
Proof. Evidently, \( p\left( x\right) \mathrel{\text{:=}} {p}_{C}\left( x\right) \geq 0 \) for all \( x \in E \) . Since \( 0 \in \operatorname{rai}\left( C\right) \), there exists \( t > 0 \) such that \( x \in {tC} \) if and only if \( x \in L \mathrel{\text{:=}} \operatorname{span}\left( C\right) \) . Thus \( p\left( x\right) \) is finite if and only if \( x \in L \) .
The homogeneity of \( p \) is obvious from its definition, and the convexity of \( p \) is thus equivalent to its subadditivity. To prove the former, let \( {x}_{1},{x}_{2} \in L \) and \( 0 < t < 1 \) . Given an arbitrary \( \epsilon > 0 \), note that \( {x}_{i} \in \left( {p\left( {x}_{i}\right) + \epsilon }\right) C, i = 1,2 \) . We have
\[
\left( {1 - t}\right) {x}_{1} + t{x}_{2} \in \left( {1 - t}\right) \left\lbrack {\left( {p\left( {x}_{1}\right) + \epsilon }\right) C}\right\rbrack + t\left\lbrack {\left( {p\left( {x}_{2}\right) + \epsilon }\right) C}\right\rbrack
\]
\[
= \left\lbrack {\left( {1 - t}\right) p\left( {x}_{1}\right) + {tp}\left( {x}_{2}\right) + \epsilon }\right\rbrack C
\]
where the equality follows since \( C \) is a convex set. Since \( \epsilon > 0 \) is arbitrary, we have
\[
p\left( {\left( {1 - t}\right) {x}_{1} + t{x}_{2}}\right) \leq \left( {1 - t}\right) p\left( {x}_{1}\right) + {tp}\left( {x}_{2}\right) .
\]
Theorem 5.8. If \( C \) is a convex set in a vector space \( E \) such that \( 0 \in \operatorname{rai}\left( C\right) \) , then
\[
\operatorname{rai}\left( C\right) = \operatorname{rai}\left( {\operatorname{rai}\left( C\right) }\right) = \operatorname{rai}\left( {\operatorname{ac}\left( C\right) }\right) = \{ x \in E : p\left( x\right) < 1\}
\]
(5.1)
\[
\operatorname{ac}\left( C\right) = \operatorname{ac}\left( {{ac}\left( C\right) }\right) = \operatorname{ac}\left( {\operatorname{rai}\left( C\right) }\right) = \{ x \in E : p\left( x\right) \leq 1\} .
\]
Proof. Note that without any loss of generality, we may assume that \( E = \) \( \operatorname{span}\left( C\right) \) . Then \( {p}_{C} \) is finite-valued, and relative algebraic interiors become algebraic interiors.
First, it is clear from the definition of \( p \) that
\[
\{ x \in E : p\left( x\right) < 1\} \subseteq C \subseteq \{ x : p\left( x\right) \leq 1\} .
\]
Let \( x \in E \) be such that \( p\left( x\right) < 1 \) . If \( y \in E \) is arbitrary, then for small enough \( t > 0 \), we have \( p\left( {x + {ty}}\right) \leq p\left( x\right) + {tp}\left( y\right) < 1 \) ; thus \( \left\lbrack {x, x + {ty}}\right\rbrack \subset C \), which proves that \( x \in \operatorname{ai}\left( C\right) \) . Lemma 5.5 then implies that \( x + {sy} \in \operatorname{ai}\left( C\right) \) for \( s \in \lbrack 0, t) \), and we have \( x \in \operatorname{ai}\left( {\operatorname{ai}\left( C\right) }\right) \) . Therefore,
\[
\{ x \in E : p\left( x\right) < 1\} \subseteq \operatorname{ai}\left( {\operatorname{ai}\left( C\right) }\right) ,
\]
(5.2)
and consequently,
\[
\operatorname{ai}\left( {\operatorname{ai}\left( C\right) }\right) \subseteq \operatorname{ai}\left( C\right) \subseteq \{ x \in E : p\left( x\right) < 1\} \overset{\left( {5.2}\right) }{ \subseteq }\operatorname{ai}\left( {\operatorname{ai}\left( C\right) }\right) \subseteq \operatorname{ai}\left( {\operatorname{ac}\left( C\right) }\right) .
\]
Here the second inclusion follows because if \( x \in \operatorname{ai}\left( C\right) \), then there exists \( t > 1 \) such that \( {tx} \in C \), implying \( {tp}\left( x\right) = p\left( {tx}\right) \leq 1 \), and hence \( p\left( x\right) < 1 \) . Note that the first three inclusions above are actually equalities, so that the first line of (5.1) is proved, except for the inclusion \( \operatorname{ai}\left( {\operatorname{ac}\left( C\right) }\right) \subseteq \{ x \in E : p\left( x\right) < 1\} \) .
To prove the second line of (5.1), assume that \( x \in \operatorname{ac}\left( C\right) \), with \( \lbrack u, x) \subset C \) . If \( t \in \lbrack 0,1) \), then
\[
p\left( x\right) = p\left( {u + t\left( {x - u}\right) + \left( {1 - t}\right) \left( {x - u}\right) }\right) \leq p\left( {u + t\left( {x - u}\right) }\right) + \left( {1 - t}\right) p\left( {x - u}\right)
\]
\[
\leq 1 + \left( {1 - t}\right) p\left( {x - u}\right)
\]
Letting \( t \nearrow 1 \), we conclude that \( p\left( x\right) \leq 1 \) ; this proves that
\[
\operatorname{ac}\left( C\right) \subseteq \{ x : p\left( x\right) \leq 1\} .
\]
Now if \( x \in \operatorname{ac}\left( {\operatorname{ac}\left( C\right) }\right) \), then since \( 0 \in \operatorname{ai}\left( C\right) \subseteq \operatorname{ai}\left( {\operatorname{ac}\left( C\right) }\right) \), Lemma 5.5 implies that \( {tx} \in \operatorname{ai}\left( {{ac}\left( C\right) }\right) \subseteq \operatorname{ac}\left( C\right) \) for \( t \in \left( {0,1}\right) \) ; thus \( {tp}\left( x\right) = p\left( {tx}\right) \leq 1 \), and letting \( t \nearrow 1 \) leads to \( p\left( x\right) \leq 1 \), proving
\[
\operatorname{ac}\left( {\operatorname{ac}\left( C\right) }\right) \subseteq \{ x \in E : p\left( x\right) \leq 1\} .
\]
(5.3)
Also, if \( p\left( x\right) = 1 \), we have \( p\left( z\right) < 1 \) for every \( z \in \lbrack 0, x) \), hence \( \lbrack 0, x) \subset \) \( \operatorname{ai}\left( {\operatorname{ai}\left( C\right) }\right) \subset \operatorname{ai}\left( C\right) \), proving \( x \in \operatorname{ac}\left( {\operatorname{ai}\left( C\right) }\right) \), so that
\[
\{ x \in E : p\left( x\right) = 1\} \subseteq \operatorname{ac}\left( {\operatorname{ai}\left( C\right) }\right) .
\]
(5.4)
These give
\[
\operatorname{ac}\left( C\right) \subseteq \operatorname{ac}\left( {\operatorname{ac}\left( C\right) }\right) \overset{\left( {5.3}\right) }{ \subseteq }\{ x \in E : p\left( x\right) \leq 1\} \overset{\left( {5.2}\right) ,\left( {5.4}\right) }{ \subseteq }\operatorname{ac}\left( {\operatorname{ai}\left( C\right) }\right) \subseteq \operatorname{ac}\left( C\right) ,
\]
and hence all inclusions are equalities, proving the second line in (5.1).
Finally, it remains to prove that \( \operatorname{ai}\left( {\operatorname{ac}\left( C\right) }\right) \subseteq \{ x \in E : p\left( x\right) < 1\} \) . If \( x \in \operatorname{ai}\left( {\operatorname{ac}\left( C\right) }\right) \), then there exists \( t > 1 \) such that \( {tx} \in \operatorname{ac}\left( C\right) \) . Then (5.3) implies \( {tp}\left( x\right) = p\left( {tx}\right) \leq 1 \), and thus \( p\left( x\right) < 1 \) . The theorem is proved.
Corollary 5.9. Let \( C \) be a convex algebraic body in an affine space \( A \) . Then
\[
\operatorname{rai}\left( C\right) = \operatorname{rai}\left( {\operatorname{rai}\left( C\right) }\right) = \operatorname{rai}\left( {\operatorname{ac}\left( C\right) }\right) = \{ x \in A : p\left( x\right) < 1\}
\]
\[
\operatorname{ac}\left( C\right) = \operatorname{ac}\left( {\operatorname{ac}\left( C\right) }\right) = \operatorname{ac}\left( {\operatorname{rai}\left( C\right) }\right) = \{ x \in A : p\left( x\right) \leq 1\}
\]
where \( p\left( x\right) \) is the gauge function with respect to any point \( {x}_{0} \in \operatorname{rai}\left( C\right) \) .
Proof. Let \( {x}_{0} \in \operatorname{rai}\left( C\right) \) . Evidently, \( 0 \in \operatorname{rai}\left( {C - {x}_{0}}\right) = \operatorname{rai}\left( C\right) - {x}_{0},\operatorname{ac}\left( {C - {x}_{0}}\right) = \) \( \operatorname{ac}\left( C\right) - {x}_{0} \), and \( {p}_{C}\left( x\right) = {p}_{\left( C - {x}_{0}\right) }\left( {x - {x}_{0}}\right) \) . The corollary follows immediately from Theorem 5.8.
## 5.3 Calculus of Relative Algebraic Interior and Algebraic Closure of Convex Sets
Lemma 5.10. Let \( {\left\{ {C}_{i}\right\} }_{1}^{m} \) be convex sets in a vector space \( E \) . If \( { \cap }_{1}^{m}\operatorname{rai}\left( {C}_{i}\right) \neq \) \( \varnothing \) , then
(a) \( \operatorname{aff}\left( {{ \cap }_{1}^{m}{C}_{i}}\right) = { \cap }_{1}^{m}\operatorname{aff}\left( {C}_{i}\right) \) ,
(b) \( \operatorname{rai}\left( {{ \cap }_{1}^{m}{C}_{i}}\right) = { \cap }_{1}^{m}\operatorname{rai}\left( {C}_{i}\right) \) .
Proof. Write \( C \mathrel{\text{:=}} { \cap }_{1}^{m}{C}_{i} \) and \( D \mathrel{\text{:=}} { \cap }_{1}^{m}\operatorname{aff}\left( {C}_{i}\right) \) . Since \( C \subseteq D \) and \( D \) is an affine set, it follows that \( \operatorname{aff}\left( C\right) \subseteq D \) . To prove the reverse |
1063_(GTM222)Lie Groups, Lie Algebras, and Representations | Definition 8.1 |
Definition 8.1. A root system \( \left( {E, R}\right) \) is a finite-dimensional real vector space \( E \) with an inner product \( \langle \cdot , \cdot \rangle \), together with a finite collection \( R \) of nonzero vectors in \( E \) satisfying the following properties:
1. The vectors in \( R \) span \( E \) .
2. If \( \alpha \) is in \( R \) and \( c \in \mathbb{R} \), then \( {c\alpha } \) is in \( R \) only if \( c = \pm 1 \) .
3. If \( \alpha \) and \( \beta \) are in \( R \), then so is \( {s}_{\alpha } \cdot \beta \), where \( {s}_{\alpha } \) is the linear transformation of \( E \) defined by
\[
{s}_{\alpha } \cdot \beta = \beta - 2\frac{\langle \beta ,\alpha \rangle }{\langle \alpha ,\alpha \rangle }\alpha ,\;\beta \in E.
\]
4. For all \( \alpha \) and \( \beta \) in \( R \), the quantity
\[
2\frac{\langle \beta ,\alpha \rangle }{\langle \alpha ,\alpha \rangle }
\]
is an integer.
The dimension of \( E \) is called the rank of the root system and the elements of \( R \) are called roots.
Note that since \( {s}_{\alpha } \cdot \alpha = - \alpha \), we have that \( - \alpha \in R \) whenever \( \alpha \in R \) . In the theory of symmetric spaces, there arise systems satisfying Conditions 1, 3, and 4, but not Condition 2. These are called "nonreduced" root systems. In the theory of Coxeter groups, there arise systems satisfying Conditions 1, 2, and 3, but not Property 4. These are called "noncrystallographic" or "nonintegral" root systems. In this book, we consider only root systems satisfying all of the conditions in Definition 8.1.
The map \( {s}_{\alpha } \) is the reflection about the hyperplane orthogonal to \( \alpha \) ; that is, \( {s}_{\alpha } \cdot \alpha = \) \( - \alpha \) and \( {s}_{\alpha } \cdot \beta = \beta \) for all \( \beta \) that are orthogonal to \( \alpha \), as is easily verified from the formula for \( {s}_{\alpha } \) . From this description, it should be evident that \( {s}_{\alpha } \) is an orthogonal transformation of \( E \) with determinant -1 .
We can interpret Property 4 geometrically in one of two ways. In light of the formula for \( {s}_{\alpha } \), Property 4 is equivalent to saying that \( {s}_{\alpha } \cdot \beta \) should differ from \( \beta \) by an integer multiple of \( \alpha \) . Alternatively, if we recall that the orthogonal projection of \( \beta \) onto \( \alpha \) is given by \( \left( {\langle \beta ,\alpha \rangle /\langle \alpha ,\alpha \rangle }\right) \alpha \), we note that the quantity in Property 4 is twice the coefficient of \( \alpha \) in this projection. Thus, Property 4 is equivalent to saying that the projection of \( \beta \) onto \( \alpha \) is an integer or half-integer multiple of \( \alpha \) .
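As a concrete check of Definition 8.1, the short sketch below verifies Conditions 1-4 for the \( {B}_{2} \) root system in \( {\mathbb{R}}^{2} \) ; the eight vectors listed are the standard choice, and the example is purely illustrative.

```python
import numpy as np
from itertools import product

R = [np.array(v, dtype=float) for v in
     [(1, 0), (-1, 0), (0, 1), (0, -1),
      (1, 1), (-1, -1), (1, -1), (-1, 1)]]          # the B2 roots

def s(alpha, beta):
    # The reflection s_alpha applied to beta, as in Definition 8.1.
    return beta - 2 * np.dot(beta, alpha) / np.dot(alpha, alpha) * alpha

def in_R(v):
    return any(np.allclose(v, r) for r in R)

# Condition 1: the roots span R^2.
assert np.linalg.matrix_rank(np.array(R)) == 2

# Condition 2: the only multiples of a root lying in R are +-1 times it.
for a, b in product(R, R):
    if np.isclose(a[0] * b[1] - a[1] * b[0], 0):    # b parallel to a
        assert np.allclose(b, a) or np.allclose(b, -a)

# Condition 3: R is stable under every reflection s_alpha.
assert all(in_R(s(a, b)) for a, b in product(R, R))

# Condition 4: 2<beta, alpha>/<alpha, alpha> is always an integer.
for a, b in product(R, R):
    m = 2 * np.dot(b, a) / np.dot(a, a)
    assert np.isclose(m, round(m))
```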
We have shown that one can associate a root system to every complex semisimple Lie algebra. It turns out that every root system arises in this way, although this is far from obvious-see Sect. 8.11.
Definition 8.2. If \( \left( {E, R}\right) \) is a root system, the Weyl group \( W \) of \( R \) is the subgroup of the orthogonal group of \( E \) generated by the reflections \( {s}_{\alpha },\alpha \in R \) .
By assumption, each \( {s}_{\alpha } \) maps \( R \) into itself, indeed onto itself, since each \( \beta \in R \) satisfies \( \beta = {s}_{\alpha } \cdot \left( {{s}_{\alpha } \cdot \beta }\right) \) . It follows that every element of \( W \) maps \( R \) onto itself. Since the roots span \( E \), a linear transformation of \( E \) is determined by its action on \( R \) . Thus, the Weyl group is a finite subgroup of \( \mathrm{O}\left( E\right) \) and may be regarded as a subgroup of the permutation group on \( R \) . We denote the action of \( w \in W \) on \( H \in E \) by \( w \cdot H \) .
Proposition 8.3. Suppose \( \left( {E, R}\right) \) and \( \left( {F, S}\right) \) are root systems. Consider the vector space \( E \oplus F \), with the natural inner product determined by the inner products on \( E \) and \( F \) . Then \( R \cup S \) is a root system in \( E \oplus F \), called the direct sum of \( R \) and \( S \) .
Here, we are identifying \( E \) with the subspace of \( E \oplus F \) consisting of all vectors of the form \( \left( {e,0}\right) \) with \( e \) in \( E \), and similarly for \( F \) . Thus, more precisely, \( R \cup S \) means the elements of the form \( \left( {\alpha ,0}\right) \) with \( \alpha \) in \( R \) together with elements of the form \( \left( {0,\beta }\right) \) with \( \beta \) in \( S \) . (Elements of the form \( \left( {\alpha ,\beta }\right) \) with \( \alpha \in R \) and \( \beta \in S \) are not in \( R \cup S \) .)
Proof. If \( R \) spans \( E \) and \( S \) spans \( F \), then \( R \cup S \) spans \( E \oplus F \), so Condition 1 is satisfied. Condition 2 holds because \( R \) and \( S \) are root systems in \( E \) and \( F \) , respectively. For Condition 3, if \( \alpha \) and \( \beta \) are both in \( R \) or both in \( S \), then \( {s}_{\alpha } \cdot \beta \in \) \( R \cup S \) because \( R \) and \( S \) are root systems. If \( \alpha \in R \) and \( \beta \in S \) or vice versa, then \( \langle \alpha ,\beta \rangle = 0 \), so that
\[
{s}_{\alpha } \cdot \beta = \beta \in R \cup S.
\]
Similarly, if \( \alpha \) and \( \beta \) are both in \( R \) or both in \( S \), then \( 2\langle \alpha ,\beta \rangle /\langle \alpha ,\alpha \rangle \) is an integer because \( R \) and \( S \) are root systems, and if \( \alpha \in R \) and \( \beta \in S \) or vice versa, then \( 2\langle \alpha ,\beta \rangle /\langle \alpha ,\alpha \rangle = 0 \) . Thus, Condition 4 holds for \( R \cup S \) .
Definition 8.4. A root system \( \left( {E, R}\right) \) is called reducible if there exists an orthogonal decomposition \( E = {E}_{1} \oplus {E}_{2} \) with \( \dim {E}_{1} > 0 \) and \( \dim {E}_{2} > 0 \) such that every element of \( R \) is either in \( {E}_{1} \) or in \( {E}_{2} \) . If no such decomposition exists, \( \left( {E, R}\right) \) is called irreducible.
If \( \left( {E, R}\right) \) is reducible, then it is not hard to see that the part of \( R \) in \( {E}_{1} \) is a root system in \( {E}_{1} \) and the part of \( R \) in \( {E}_{2} \) is a root system in \( {E}_{2} \) . Thus, a root system is reducible precisely if it can be realized as a direct sum of two other root systems. In the Lie algebra setting, the root system associated to a complex semisimple Lie algebra \( \mathfrak{g} \) is irreducible precisely if \( \mathfrak{g} \) is simple (Theorem 7.35).
Definition 8.5. Two root systems \( \left( {E, R}\right) \) and \( \left( {F, S}\right) \) are said to be isomorphic if there exists an invertible linear transformation \( A : E \rightarrow F \) such that \( A \) maps \( R \) onto \( S \) and such that for all \( \alpha \in R \) and \( \beta \in E \), we have
\[
A\left( {{s}_{\alpha } \cdot \beta }\right) = {s}_{A\alpha } \cdot \left( {A\beta }\right) .
\]
A map \( A \) with this property is called an isomorphism.
Note that the linear map \( A \) is not required to preserve inner products, but only to preserve the reflections about the roots. If, for example, \( F = E \) and \( S \) consists of elements of the form \( {c\alpha } \) with \( \alpha \in R \), then \( \left( {F, S}\right) \) is isomorphic to \( \left( {E, R}\right) \), with the isomorphism being the map \( A = {cI} \) .
Fig. 8.1 The basic acute angles and length ratios
We now establish a basic result limiting the possible angles and length ratios occurring in a root system.
Proposition 8.6. Suppose \( \alpha \) and \( \beta \) are roots, \( \alpha \) is not a multiple of \( \beta \), and \( \langle \alpha ,\alpha \rangle \geq \) \( \langle \beta ,\beta \rangle \) . Then one of the following holds:
1. \( \langle \alpha ,\beta \rangle = 0 \)
2. \( \langle \alpha ,\alpha \rangle = \langle \beta ,\beta \rangle \) and the angle between \( \alpha \) and \( \beta \) is \( \pi /3 \) or \( {2\pi }/3 \)
3. \( \langle \alpha ,\alpha \rangle = 2\langle \beta ,\beta \rangle \) and the angle between \( \alpha \) and \( \beta \) is \( \pi /4 \) or \( {3\pi }/4 \)
4. \( \langle \alpha ,\alpha \rangle = 3\langle \beta ,\beta \rangle \) and the angle between \( \alpha \) and \( \beta \) is \( \pi /6 \) or \( {5\pi }/6 \)
Figure 8.1 shows the allowed angles and length ratios, for the case of an acute angle. In each case, \( 2\langle \alpha ,\beta \rangle /\langle \alpha ,\alpha \rangle = 1 \), whereas \( 2\langle \beta ,\alpha \rangle /\langle \beta ,\beta \rangle \) takes the values 1, 2, and 3 in the three successive cases. Section 8.2 shows that each of the angles and length ratios permitted by Proposition 8.6 actually occurs in some root system.
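The case analysis behind Proposition 8.6 is also easy to enumerate mechanically. With \( {m}_{1} = 2\langle \alpha ,\beta \rangle /\langle \alpha ,\alpha \rangle \) and \( {m}_{2} = 2\langle \beta ,\alpha \rangle /\langle \beta ,\beta \rangle \) as in the proof below, one has \( {m}_{1}{m}_{2} = 4{\cos }^{2}\theta < 4 \) (since \( \alpha \) is not a multiple of \( \beta \) ) and, when \( \langle \alpha ,\beta \rangle \neq 0 \), \( {m}_{2}/{m}_{1} = \langle \alpha ,\alpha \rangle /\langle \beta ,\beta \rangle \geq 1 \) ; listing the admissible integer pairs reproduces the four cases. The sketch below is only a sanity check of that arithmetic.

```python
from math import acos, degrees, sqrt

for m1 in range(-3, 4):
    for m2 in range(-3, 4):
        prod = m1 * m2
        if prod < 0 or prod >= 4:              # 4 cos^2(theta) lies in [0, 4)
            continue
        if prod == 0:
            if m1 == 0 and m2 == 0:
                print("m1 = m2 = 0 : <alpha, beta> = 0 (orthogonal roots)")
            continue
        if abs(m2) < abs(m1):                  # enforce <alpha, alpha> >= <beta, beta>
            continue
        cos_theta = (1 if m1 > 0 else -1) * sqrt(prod) / 2
        angle = degrees(acos(cos_theta))
        ratio = m2 / m1                        # = <alpha, alpha>/<beta, beta>
        print(f"m1 = {m1:+d}, m2 = {m2:+d} : angle = {angle:5.1f} deg, "
              f"|alpha|^2/|beta|^2 = {ratio:.0f}")
```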
Proof. Suppose that \( \alpha \) and \( \beta \) are roots and let \( {m}_{1} = 2\langle \alpha ,\beta \rangle /\langle \alpha ,\alpha \rangle \) and \( {m}_{2} = \) \( 2\langle \beta ,\alpha \rangle /\langle \beta ,\beta \rangle \), so that \( {m}_{1} \) and \( {m}_{2} \) are integers. Assume \( \langle \alpha ,\alpha \rangle \geq \langle \beta ,\beta \rangle \) and note that
\[
{m}_{1}{m}_{2} = 4\frac{\langle \alpha ,\beta {\rangle }^{2}}{\langle \alpha ,\alpha \rangle \langle \beta ,\beta \rangle } = 4{\cos }^{2}\theta
\]
(8.1)
where \( \theta \) is the angle between \( \alpha \) and \( \beta \), and that
\[
\frac{{m}_{2}}{{m}_{1}} = \frac{\langle \alpha ,\alpha \rangle }{\langle \beta ,\beta \rangle } \geq 1
\]
(8.2)
Fig. 8.2 The projection of \( \beta \) onto \( \alpha \) equals \( \alpha /2 \) and \( {s}_{\alpha } \cdot \beta \) equals \( \beta - \alpha \)
whenever \( \langle \alpha ,\beta \rangle \neq 0 \) . From (8.1), we conclude that \( 0 \leq {m}_{1}{m}_{2} \leq 4 \) . If \( {m}_{1}{m}_{2} = 0 \) , then \( \cos \theta |
1094_(GTM250)Modern Fourier Analysis | Definition 1.2.4 |
Definition 1.2.4. Let \( z \) be a complex number satisfying \( 0 < \operatorname{Re}z < \infty \) . The Bessel potential operator of order \( z \) is
\[
{\mathcal{J}}_{z} = {\left( I - \Delta \right) }^{-z/2}
\]
This operator acts on functions \( f \) as follows:
\[
{\mathcal{J}}_{z}\left( f\right) = {\left( \widehat{f}\widehat{{G}_{z}}\right) }^{ \vee } = f * {G}_{z}
\]
where
\[
{G}_{z}\left( x\right) = {\left( {\left( 1 + 4{\pi }^{2}{\left| \xi \right| }^{2}\right) }^{-z/2}\right) }^{ \vee }\left( x\right) .
\]
The Bessel potential is obtained by replacing \( 4{\pi }^{2}{\left| \xi \right| }^{2} \) in the Riesz potential by the smooth term \( 1 + 4{\pi }^{2}{\left| \xi \right| }^{2} \) . This adjustment creates smoothness, which yields rapid decay for \( {G}_{z} \) at infinity. The next result quantifies the behavior of \( {G}_{z} \) near zero and near infinity.
Proposition 1.2.5. Let \( z \) be a complex number with \( \operatorname{Re}z > 0 \) . Then the function \( {G}_{z} \) is smooth on \( {\mathbf{R}}^{n} \smallsetminus \{ 0\} \) . Moreover, if \( s \) is real, then \( {G}_{s} \) is strictly positive, \( {\begin{Vmatrix}{G}_{s}\end{Vmatrix}}_{{L}^{1}} = 1 \) , and there exist positive finite constants \( C\left( {s, n}\right), c\left( {s, n}\right) \) such that
\[
{G}_{s}\left( x\right) \leq C\left( {s, n}\right) {e}^{-\frac{\left| x\right| }{2}}\;\text{ when }\left| x\right| \geq 2
\]
(1.2.11)
and such that
\[
\frac{1}{c\left( {s, n}\right) } \leq \frac{{G}_{s}\left( x\right) }{{H}_{s}\left( x\right) } \leq c\left( {s, n}\right) \;\text{ when }\left| x\right| \leq 2,
\]
(1.2.12)
where \( {H}_{s} \) is equal to
\[
{H}_{s}\left( x\right) = \left\{ \begin{array}{ll} {\left| x\right| }^{s - n} + 1 + O\left( {\left| x\right| }^{s - n + 2}\right) & \text{ for }0 < s < n, \\ \log \frac{2}{\left| x\right| } + 1 + O\left( {\left| x\right| }^{2}\right) & \text{ for }s = n, \\ 1 + O\left( {\left| x\right| }^{s - n}\right) & \text{ for }s > n, \end{array}\right.
\]
and \( O\left( t\right) \) is a function with the property \( \left| {O\left( t\right) }\right| \leq \left| t\right| \) for \( t \geq 0 \) .
Now let \( z \) be a complex number with \( \operatorname{Re}z > 0 \) . Then there exist finite positive constants \( {C}^{\prime }\left( {\operatorname{Re}z, n}\right) \) and \( {c}^{\prime }\left( {\operatorname{Re}z, n}\right) \) such that when \( \left| x\right| \geq 2 \), we have
\[
\left| {{G}_{z}\left( x\right) }\right| \leq \frac{{C}^{\prime }\left( {\operatorname{Re}z, n}\right) }{\left| \Gamma \left( \frac{z}{2}\right) \right| }{e}^{-\frac{\left| x\right| }{2}}
\]
(1.2.13)
and when \( \left| x\right| \leq 2 \), we have
\[
\left| {{G}_{z}\left( x\right) }\right| \leq \frac{{c}^{\prime }\left( {\operatorname{Re}z, n}\right) }{\left| \Gamma \left( \frac{z}{2}\right) \right| }\left\{ \begin{array}{ll} {\left| x\right| }^{\operatorname{Re}z - n} & \text{ for }\operatorname{Re}z < n, \\ \log \frac{2}{\left| x\right| } & \text{ for }\operatorname{Re}z = n, \\ 1 & \text{ for }\operatorname{Re}z > n. \end{array}\right.
\]
Proof. For \( A > 0 \) and \( z \) with \( \operatorname{Re}z > 0 \) we have the gamma function identity
\[
{A}^{-\frac{z}{2}} = \frac{1}{\Gamma \left( \frac{z}{2}\right) }{\int }_{0}^{\infty }{e}^{-{tA}}{t}^{\frac{z}{2}}\frac{dt}{t}
\]
which we use to obtain
\[
{\left( 1 + 4{\pi }^{2}{\left| \xi \right| }^{2}\right) }^{-\frac{z}{2}} = \frac{1}{\Gamma \left( \frac{z}{2}\right) }{\int }_{0}^{\infty }{e}^{-t}{e}^{-\pi {\left| 2\sqrt{\pi t}\xi \right| }^{2}}{t}^{\frac{z}{2}}\frac{dt}{t}.
\]
Note that the preceding integral converges at both ends. Now take the inverse Fourier transform in \( \xi \) and use the fact that the function \( {e}^{-\pi {\left| \xi \right| }^{2}} \) is equal to its Fourier transform (Example 2.2.9 in [156]) to obtain
\[
{G}_{z}\left( x\right) = \frac{{\left( 2\sqrt{\pi }\right) }^{-n}}{\Gamma \left( \frac{z}{2}\right) }{\int }_{0}^{\infty }{e}^{-t}{e}^{-\frac{{\left| x\right| }^{2}}{4t}}{t}^{\frac{z - n}{2}}\frac{dt}{t}.
\]
This identity shows that \( {G}_{z} \) is smooth on \( {\mathbf{R}}^{n} \smallsetminus \{ 0\} \) . Moreover, taking \( z = s > 0 \) proves that \( {G}_{s}\left( x\right) > 0 \) for all \( x \in {\mathbf{R}}^{n} \) . Consequently, \( {\begin{Vmatrix}{G}_{s}\end{Vmatrix}}_{{L}^{1}} = {\int }_{{\mathbf{R}}^{n}}{G}_{s}\left( x\right) {dx} = \widehat{{G}_{s}}\left( 0\right) = 1 \) .
Now suppose \( \left| x\right| \geq 2 \) . Then \( t + \frac{{\left| x\right| }^{2}}{4t} \geq t + \frac{1}{t} \) and \( t + \frac{{\left| x\right| }^{2}}{4t} \geq \left| x\right| \) . This implies that
\[
- t - \frac{{\left| x\right| }^{2}}{4t} \leq - \frac{t}{2} - \frac{1}{2t} - \frac{\left| x\right| }{2}
\]
from which it follows that when \( \left| x\right| \geq 2 \) ,
\[
\left| {{G}_{z}\left( x\right) }\right| \leq \frac{{\left( 2\sqrt{\pi }\right) }^{-n}}{\left| \Gamma \left( \frac{z}{2}\right) \right| }\left( {{\int }_{0}^{\infty }{e}^{-\frac{t}{2}}{e}^{-\frac{1}{2t}}{t}^{\frac{\operatorname{Re}z - n}{2}}\frac{dt}{t}}\right) {e}^{-\frac{\left| x\right| }{2}} = \frac{{C}^{\prime }\left( {\operatorname{Re}z, n}\right) }{\left| \Gamma \left( \frac{z}{2}\right) \right| }{e}^{-\frac{\left| x\right| }{2}}.
\]
This proves (1.2.13) and (1.2.11) with \( C\left( {s, n}\right) = \Gamma {\left( \frac{s}{2}\right) }^{-1}{C}^{\prime }\left( {s, n}\right) \) when \( s > 0 \) .
Suppose now that \( \left| x\right| \leq 2 \) and \( s > 0 \) . Write \( {G}_{s}\left( x\right) = {G}_{s}^{1}\left( x\right) + {G}_{s}^{2}\left( x\right) + {G}_{s}^{3}\left( x\right) \), where
\[
{G}_{s}^{1}\left( x\right) = \frac{{\left( 2\sqrt{\pi }\right) }^{-n}}{\Gamma \left( \frac{s}{2}\right) }{\int }_{0}^{{\left| x\right| }^{2}}{e}^{-{t}^{\prime }}{e}^{-\frac{{\left| x\right| }^{2}}{4{t}^{\prime }}}{\left( {t}^{\prime }\right) }^{\frac{s - n}{2}}\frac{d{t}^{\prime }}{{t}^{\prime }}
\]
\[
= {\left| x\right| }^{s - n}\frac{{\left( 2\sqrt{\pi }\right) }^{-n}}{\Gamma \left( \frac{s}{2}\right) }{\int }_{0}^{1}{e}^{-t{\left| x\right| }^{2}}{e}^{-\frac{1}{4t}}{t}^{\frac{s - n}{2}}\frac{dt}{t},
\]
\[
{G}_{s}^{2}\left( x\right) = \frac{{\left( 2\sqrt{\pi }\right) }^{-n}}{\Gamma \left( \frac{s}{2}\right) }{\int }_{{\left| x\right| }^{2}}^{4}{e}^{-t}{e}^{-\frac{{\left| x\right| }^{2}}{4t}}{t}^{\frac{s - n}{2}}\frac{dt}{t},
\]
\[
{G}_{s}^{3}\left( x\right) = \frac{{\left( 2\sqrt{\pi }\right) }^{-n}}{\Gamma \left( \frac{s}{2}\right) }{\int }_{4}^{\infty }{e}^{-t}{e}^{-\frac{{\left| x\right| }^{2}}{4t}}{t}^{\frac{s - n}{2}}\frac{dt}{t}.
\]
Since \( t{\left| x\right| }^{2} \leq 4 \), in \( {G}_{s}^{1} \) we have \( {e}^{-t{\left| x\right| }^{2}} = 1 + O\left( {t{\left| x\right| }^{2}}\right) \) by the mean value theorem, where \( O\left( t\right) \) is a function with the property \( \left| {O\left( t\right) }\right| \leq \left| t\right| \) . Thus, we can write
\[
{G}_{s}^{1}\left( x\right) = {\left| x\right| }^{s - n}\frac{{\left( 2\sqrt{\pi }\right) }^{-n}}{\Gamma \left( \frac{s}{2}\right) }{\int }_{0}^{1}{e}^{-\frac{1}{4t}}{t}^{\frac{s - n}{2}}\frac{dt}{t} + O\left( {\left| x\right| }^{s - n + 2}\right) \frac{{\left( 2\sqrt{\pi }\right) }^{-n}}{\Gamma \left( \frac{s}{2}\right) }{\int }_{0}^{1}{e}^{-\frac{1}{4t}}{t}^{\frac{s - n}{2}}{dt}.
\]
Since \( 0 \leq \frac{{\left| x\right| }^{2}}{4t} \leq \frac{1}{4} \) and \( 0 \leq t \leq 4 \) in \( {G}_{s}^{2} \), we have \( {e}^{-\frac{17}{4}} \leq {e}^{-t - \frac{{\left| x\right| }^{2}}{4t}} \leq 1 \) ; thus, we deduce
\[
{G}_{s}^{2}\left( x\right) \approx {\int }_{{\left| x\right| }^{2}}^{4}{t}^{\frac{s - n}{2}}\frac{dt}{t} = \left\{ \begin{array}{ll} \frac{2}{n - s}{\left| x\right| }^{s - n} - \frac{{2}^{s - n + 1}}{n - s} & \text{ for }s < n, \\ 2\log \frac{2}{\left| x\right| } & \text{ for }s = n, \\ \frac{1}{s - n}{2}^{s - n + 1} - \frac{2}{s - n}{\left| x\right| }^{s - n} & \text{ for }s > n. \end{array}\right.
\]
Finally, we have \( {e}^{-\frac{1}{4}} \leq {e}^{-\frac{{\left| x\right| }^{2}}{4t}} \leq 1 \) in \( {G}_{s}^{3} \), which yields that \( {G}_{s}^{3}\left( x\right) \) is bounded above and below by fixed positive constants. Combining the estimates for \( {G}_{s}^{1}\left( x\right) ,{G}_{s}^{2}\left( x\right) \) , and \( {G}_{s}^{3}\left( x\right) \), we obtain (1.2.12).
When \( z \) is complex, with \( \operatorname{Re}z > 0 \), we write as before \( {G}_{z} = {G}_{z}^{1} + {G}_{z}^{2} + {G}_{z}^{3} \) . When \( \left| x\right| \leq 2 \), we have that \( \left| {{G}_{z}^{1}\left( x\right) }\right| \leq {c}_{1}\left( {\operatorname{Re}z, n}\right) {\left| \Gamma \left( \frac{z}{2}\right) \right| }^{-1}{\left| x\right| }^{\operatorname{Re}z - n} \) . For \( {G}_{z}^{2} \) we have
\[
\left| {{G}_{z}^{2}\left( x\right) }\right| \leq \frac{{\left( 2\sqrt{\pi }\right) }^{-n}}{\left| \Gamma \left( \frac{z}{2}\right) \right| }\left\{ \begin{array}{ll} \frac{2}{n - \operatorname{Re}z}{\left| x\right| }^{\operatorname{Re}z - n} - \frac{{2}^{\operatorname{Re}z - n + 1}}{n - \operatorname{Re}z} & \text{ for }\operatorname{Re}z < n, \\ 2\log \frac{2}{\left| x\right| } & \text{ for }\operatorname{Re}z = n, \\ \frac{1}{\operatorname{Re}z - n}{2}^{\operatorname{Re}z - n + 1} - \frac{2}{\operatorname{Re}z - n}{\left| x\right| }^{\operatorname{Re}z - n} & \text{ for }\operatorname{Re}z > n \end{array}\right.
\]
\[
\leq \frac{{c}_{2}\left( {\operatorname{Re}z, n}\right) }{\left| \Gamma \left( \frac{z}{2}\right) \right| }\left\{ \begin{array}{ll} {\left| x\right| }^{\operatorname{Re}z - n} & \text{ for }\operatorname{Re}z < n, \\ \log \frac{2}{\left| x\right| } & \text{ for }\operatorname{Re}z = n, \\ 1 & \text{ for }\operatorname{Re}z > n, \end{array}\right.
\]
when \( \left| x\right| \leq 2 \) . Finally, \( \left| {{G}_{z}^{3}\left( x\right) }\right| \leq {c}_{3}\left( {\operatorname{Re}z, n}\right) {\left| \Gamma \left( \frac{z}{2}\right) \right| }^{-1} \) when \( \left| x\right| \leq 2 \) . Combining these estimates we obtain the |
117_《微积分笔记》最终版_by零蛋大 | Definition 9.5 |
Definition 9.5. For \( f \in C\left( {X, R}\right), n \geq 1 \) and \( \varepsilon > 0 \) put
\[
{P}_{n}\left( {T, f,\varepsilon }\right) = \sup \left\{ {\mathop{\sum }\limits_{{x\; \in \;E}}{e}^{\left( {{S}_{n}f}\right) \left( x\right) } \mid E\text{ is a }\left( {n,\varepsilon }\right) \text{ separated subset of }X}\right\} .
\]
Remarks
(8) If \( {\varepsilon }_{1} < {\varepsilon }_{2} \) then \( {P}_{n}\left( {T, f,{\varepsilon }_{1}}\right) \geq {P}_{n}\left( {T, f,{\varepsilon }_{2}}\right) \) .
(9) \( {P}_{n}\left( {T,0,\varepsilon }\right) = {s}_{n}\left( {\varepsilon, X}\right) \) .
(10) In Definition 9.5 it suffices to take the supremum over all the \( \left( {n,\varepsilon }\right) \) separated sets which fail to be \( \left( {n,\varepsilon }\right) \) separated when any point of \( X \) is added. This is because \( {e}^{\left( {{S}_{n}f}\right) \left( x\right) } > 0 \) .
(11) We have \( {Q}_{n}\left( {T, f,\varepsilon }\right) \leq {P}_{n}\left( {T, f,\varepsilon }\right) \) . This follows from Remark (10) and the fact that an \( \left( {n,\varepsilon }\right) \) separated set which cannot be enlarged to a larger \( \left( {n,\varepsilon }\right) \) separated set must be an \( \left( {n,\varepsilon }\right) \) spanning set for \( X \) .
(12) If \( \delta > 0 \) is such that \( d\left( {x, y}\right) < \varepsilon /2 \) implies \( \left| {f\left( x\right) - f\left( y\right) }\right| < \delta \) then \( {P}_{n}\left( {T, f,\varepsilon }\right) \leq {e}^{n\delta }{Q}_{n}\left( {T, f,\varepsilon /2}\right) . \)
Proof. Let \( E \) be an \( \left( {n,\varepsilon }\right) \) separated set and \( F \) an \( \left( {n,\varepsilon /2}\right) \) spanning set. Define \( \phi : E \rightarrow F \) by choosing, for each \( x \in E \), some point \( \phi \left( x\right) \in F \) with \( {d}_{n}\left( {x,\phi \left( x\right) }\right) \leq \varepsilon /2 \) (using the notation \( {d}_{n}\left( {x, y}\right) = \mathop{\max }\limits_{{0 \leq i \leq n - 1}}d\left( {{T}^{i}\left( x\right) ,{T}^{i}\left( y\right) }\right) \) ). Then \( \phi \) is injective so
\[
\mathop{\sum }\limits_{{y \in F}}{e}^{\left( {{S}_{n}f}\right) \left( y\right) } \geq \mathop{\sum }\limits_{{y \in {\phi E}}}{e}^{\left( {{S}_{n}f}\right) \left( y\right) } \geq \left( {\mathop{\min }\limits_{{x \in E}}{e}^{\left( {{S}_{n}f}\right) \left( {\phi x}\right) - \left( {{S}_{n}f}\right) \left( x\right) }}\right) \mathop{\sum }\limits_{{x \in E}}{e}^{\left( {{S}_{n}f}\right) \left( x\right) }
\]
\[
\geq {e}^{-{n\delta }}\mathop{\sum }\limits_{{x \in E}}{e}^{\left( {{S}_{n}f}\right) \left( x\right) }
\]
Therefore \( {Q}_{n}\left( {T, f,\varepsilon /2}\right) \geq {e}^{-{n\delta }}{P}_{n}\left( {T, f,\varepsilon }\right) \) .
Definition 9.6. For \( f \in C\left( {X, R}\right) \) and \( \varepsilon > 0 \) put
\[
P\left( {T, f,\varepsilon }\right) = \mathop{\limsup }\limits_{{n \rightarrow \infty }}\frac{1}{n}\log {P}_{n}\left( {T, f,\varepsilon }\right) .
\]
Remarks
(13) \( Q\left( {T, f,\varepsilon }\right) \leq P\left( {T, f,\varepsilon }\right) \) (by Remark (11)).
(14) If \( \delta \) is such that \( d\left( {x, y}\right) < \varepsilon /2 \) implies \( \left| {f\left( x\right) - f\left( y\right) }\right| < \delta \) then \( P\left( {T, f,\varepsilon }\right) \leq \) \( \delta + Q\left( {T, f,\varepsilon }\right) \) (by Remark (12)).
(15) If \( {\varepsilon }_{1} < {\varepsilon }_{2} \) then \( P\left( {T, f,{\varepsilon }_{1}}\right) \geq P\left( {T, f,{\varepsilon }_{2}}\right) \) .
Theorem 9.1. If \( f \in C\left( {X, R}\right) \) then \( P\left( {T, f}\right) = \mathop{\lim }\limits_{{\varepsilon \rightarrow 0}}P\left( {T, f,\varepsilon }\right) \) .
Proof. The limit exists by Remark 15. By Remark 13 we have \( P\left( {T, f}\right) \leq \) \( \mathop{\lim }\limits_{{\varepsilon \rightarrow 0}}P\left( {T, f,\varepsilon }\right) \) .
By Remark 14, for any \( \delta > 0 \) we have \( \mathop{\lim }\limits_{{\varepsilon \rightarrow 0}}P\left( {T, f,\varepsilon }\right) \leq \delta + P\left( {T, f}\right) \) so \( \mathop{\lim }\limits_{{\varepsilon \rightarrow 0}}P\left( {T, f,\varepsilon }\right) \leq P\left( {T, f}\right) . \)
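To see Definitions 9.5 and 9.6 at work, here is a standard illustrative computation (not carried out in the text; we fix, for this illustration only, the metric \( d\left( {x, y}\right) = {2}^{-\min \left\{ {j \geq 0 : {x}_{j} \neq {y}_{j}}\right\} } \) on the shift space). Let \( T \) be the full one-sided shift on \( X = {\{ 0,1,\ldots, k - 1\} }^{{\mathbf{Z}}^{ + }} \) and let \( f\left( x\right) = {a}_{{x}_{0}} \) depend only on the zeroth coordinate. For \( 1/2 < \varepsilon < 1 \) a set is \( \left( {n,\varepsilon }\right) \) separated exactly when its points lie in distinct cylinders of length \( n \), and \( \left( {{S}_{n}f}\right) \left( x\right) = {a}_{{x}_{0}} + \cdots + {a}_{{x}_{n - 1}} \) depends only on that cylinder, so
\[
{P}_{n}\left( {T, f,\varepsilon }\right) = \mathop{\sum }\limits_{{{w}_{0},\ldots ,{w}_{n - 1}}}{e}^{{a}_{{w}_{0}} + \cdots + {a}_{{w}_{n - 1}}} = {\left( \mathop{\sum }\limits_{{i = 0}}^{{k - 1}}{e}^{{a}_{i}}\right) }^{n}.
\]
Replacing \( \varepsilon \) by a smaller value only increases \( {P}_{n}\left( {T, f,\varepsilon }\right) \) by a factor of at most \( {k}^{m} \) with \( m \) independent of \( n \) (at most one point of a separated set can lie in each cylinder of length \( n + m \) when \( \varepsilon > {2}^{-\left( {m + 1}\right) } \) ), so \( P\left( {T, f}\right) = \log \mathop{\sum }\limits_{{i = 0}}^{{k - 1}}{e}^{{a}_{i}} \) ; taking \( f = 0 \) recovers the topological entropy \( \log k \), in accordance with Remark (9).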
To obtain definitions of pressure involving open covers we generalise Theorem 7.7. We need the following definitions.
Definition 9.7. If \( f \in C\left( {X, R}\right), n \geq 1 \) and \( \alpha \) is an open cover of \( X \) put
\[
{q}_{n}\left( {T, f,\alpha }\right) = \inf \left\{ {\mathop{\sum }\limits_{{B \in \beta }}\mathop{\inf }\limits_{{x \in B}}{e}^{\left( {{S}_{n}f}\right) \left( x\right) } \mid \beta }\right.
\]
is a finite subcover of \( \left. {\mathop{\bigvee }\limits_{{i = 0}}^{{n - 1}}{T}^{-i}\alpha }\right\} \) and
\[
{p}_{n}\left( {T, f,\alpha }\right) = \inf \left\{ {\mathop{\sum }\limits_{{B \in \beta }}\mathop{\sup }\limits_{{x \in B}}{e}^{\left( {{S}_{n}f}\right) \left( x\right) } \mid \beta }\right.
\]
is a finite subcover of \( \left. {\mathop{\bigvee }\limits_{{i = 0}}^{{n - 1}}{T}^{-i}\alpha }\right\} \) .
Clearly \( {q}_{n}\left( {T, f,\alpha }\right) \leq {p}_{n}\left( {T, f,\alpha }\right) \) .
Theorem 9.2. Let \( T : X \rightarrow X \) be continuous and \( f \in C\left( {X, R}\right) \) .
(i) If \( \alpha \) is an open cover of \( X \) with Lebesgue number \( \delta \) then \( {q}_{n}\left( {T, f,\alpha }\right) \leq \) \( {Q}_{n}\left( {T, f,\delta /2}\right) \leq {P}_{n}\left( {T, f,\delta /2}\right) . \)
(ii) If \( \varepsilon > 0 \) and \( \gamma \) is an open cover with \( \operatorname{diam}\left( \gamma \right) \leq \varepsilon \) then \( {Q}_{n}\left( {T, f,\varepsilon }\right) \leq \) \( {P}_{n}\left( {T, f,\varepsilon }\right) \leq {p}_{n}\left( {T, f,\gamma }\right) . \)
Proof. We know from Remark (11) that \( {Q}_{n}\left( {T, f,\varepsilon }\right) \leq {P}_{n}\left( {T, f,\varepsilon }\right) \) for all \( \varepsilon > 0 \) .
(i) If \( F \) is an \( \left( {n,\delta /2}\right) \) spanning set then \( X = \mathop{\bigcup }\limits_{{x \in F}}\mathop{\bigcap }\limits_{{i = 0}}^{{n - 1}}{T}^{-i}\bar{B}\left( {{T}^{i}x;\delta /2}\right) \) . Since each \( \bar{B}\left( {{T}^{i}x;\delta /2}\right) \) is a subset of a member of \( \alpha \) we have \( {q}_{n}\left( {T, f,\alpha }\right) \leq \) \( \mathop{\sum }\limits_{{x \in F}}{e}^{\left( {{S}_{n}f}\right) \left( x\right) } \) and hence \( {q}_{n}\left( {T, f,\alpha }\right) \leq {Q}_{n}\left( {T, f,\delta /2}\right) \) .
(ii) Let \( E \) be an \( \left( {n,\varepsilon }\right) \) separated subset of \( X \) . Since no member of \( \mathop{\bigvee }\limits_{{i = 0}}^{{n - 1}}{T}^{-i}\gamma \) contains two elements of \( E \) we have \( \mathop{\sum }\limits_{{x \in E}}{e}^{\left( {{S}_{n}f}\right) \left( x\right) } \leq {p}_{n}\left( {T, f,\gamma }\right) \) . Therefore \( {P}_{n}\left( {T, f,\varepsilon }\right) \leq {p}_{n}\left( {T, f,\gamma }\right) . \)
## Remarks
(16) If \( \alpha ,\gamma \) are open covers of \( X \) and \( \alpha < \gamma \) then \( {q}_{n}\left( {T, f,\alpha }\right) \leq {q}_{n}\left( {T, f,\gamma }\right) \) .
(17) If \( d\left( {x, y}\right) < \operatorname{diam}\left( \alpha \right) \) implies \( \left| {f\left( x\right) - f\left( y\right) }\right| \leq \delta \) then \( {p}_{n}\left( {T, f,\alpha }\right) \leq \) \( {e}^{n\delta }{q}_{n}\left( {T, f,\alpha }\right) . \)
Lemma 9.3. If \( f \in C\left( {X, R}\right) \) and \( \alpha \) is an open cover of \( X \) then
\[
\mathop{\lim }\limits_{{n \rightarrow \infty }}\frac{1}{n}\log {p}_{n}\left( {T, f,\alpha }\right)
\]
exists and equals \( \mathop{\inf }\limits_{n}\left( {1/n}\right) \log {p}_{n}\left( {T, f,\alpha }\right) \) .
Proof. By Theorem 4.9 it suffices to show \( {p}_{n + k}\left( {T, f,\alpha }\right) \leq {p}_{n}\left( {T, f,\alpha }\right) \cdot {p}_{k}\left( {T, f,\alpha }\right) \) . If \( \beta \) is a finite subcover of \( \mathop{\bigvee }\limits_{{i = 0}}^{{n - 1}}{T}^{-i}\alpha \) and \( \gamma \) is a finite subcover of \( \mathop{\bigvee }\limits_{{i = 0}}^{{k - 1}}{T}^{-i}\alpha \) then \( \beta \vee {T}^{-n}\gamma \) is a finite subcover of \( \mathop{\bigvee }\limits_{{i = 0}}^{{k + n - 1}}{T}^{-i}\alpha \), and we have
\[
\mathop{\sum }\limits_{{D \in \beta \vee {T}^{-n}\gamma }}\mathop{\sup }\limits_{{x \in D}}{e}^{\left( {{S}_{n + k}f}\right) \left( x\right) } \leq \left( {\mathop{\sum }\limits_{{B \in \beta }}\mathop{\sup }\limits_{{x \in B}}{e}^{\left( {{S}_{n}f}\right) \left( x\right) }}\right) \left( {\mathop{\sum }\limits_{{C \in \gamma }}\mathop{\sup }\limits_{{x \in C}}{e}^{\left( {{S}_{k}f}\right) \left( x\right) }}\right) .
\]
Therefore \( {p}_{n + k}\left( {T, f,\alpha }\right) \leq {p}_{n}\left( {T, f,\alpha }\right) \cdot {p}_{k}\left( {T, f,\alpha }\right) \) .
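In other words (a gloss we add for clarity), taking logarithms shows that \( {a}_{n} \mathrel{\text{:=}} \log {p}_{n}\left( {T, f,\alpha }\right) \) satisfies \( {a}_{n + k} \leq {a}_{n} + {a}_{k} \), and Theorem 4.9 is the standard subadditivity lemma
\[
{a}_{n + k} \leq {a}_{n} + {a}_{k}\; \Rightarrow \;\mathop{\lim }\limits_{{n \rightarrow \infty }}\frac{{a}_{n}}{n} = \mathop{\inf }\limits_{n}\frac{{a}_{n}}{n},
\]
which is what produces the limit asserted in Lemma 9.3.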
The following gives definitions of pressure using open covers.
Theorem 9.4. If \( T : X \rightarrow X \) is continuous and \( f \in C\left( {X, R}\right) \) then each of the following equals \( P\left( {T, f}\right) \) .
(i) \( \mathop{\lim }\limits_{{\delta \rightarrow 0}}\left\lbrack {\mathop{\sup }\limits_{\alpha }\left\{ {\mathop{\lim }\limits_{{n \rightarrow \infty }}\left( {1/n}\right) \log {p}_{n}\left( {T, f,\alpha }\right) \mid \alpha }\right. }\right. \) is an open cover of \( X \) with \( \operatorname{diam}\left( \alpha \right) \leq \delta \} \rbrack \) .
(ii) \( \mathop{\lim }\limits_{{k \rightarrow \infty }}\left\lbrack {\mathop{\lim }\limits_{{n \rightarrow \infty }}\left( {1/n}\right) \log {p}_{n}\left( {T, f,{\alpha }_{k}}\right) }\right\rbrack \) if \( \left\{ {\alpha }_{k}\right\} \) is a sequence of open covers with \( \operatorname{diam}\left( {\alpha }_{k}\right) \rightarrow 0 \) .
(iii) \( \mathop{\lim }\limits_{{\delta \rightarrow 0}}\left\lbrack {\mathop{\sup }\limits_{\alpha }\left\{ {\mathop{\liminf }\limits_{{n \rightarrow \infty }}\left( {1/n}\right) \log {q}_{n}\left( {T, f,\alpha }\right) \mid \alpha }\right. }\right. \) is an open cover of \( X \) with \( \operatorname{diam}\left( \alpha \right) \leq \del |
1097_(GTM253)Elementary Functional Analysis | Definition 6.8 |
Definition 6.8. Let \( \left( {X,\mathcal{F}}\right) \) be any measurable space. A spectral measure on \( X \) is a function \( E : \mathcal{F} \rightarrow \mathcal{B}\left( \mathcal{H}\right) \), where \( \mathcal{H} \) is a Hilbert space, satisfying the following:
(1) For each \( S \) in \( \mathcal{F}, E\left( S\right) \) is an orthogonal projection.
(2) \( \;E\left( \varnothing \right) = 0 \) and \( E\left( X\right) = I \) .
(3) If \( {S}_{1},{S}_{2} \) are in \( \mathcal{F} \), and \( {S}_{1} \cap {S}_{2} = \varnothing \), then
\[
E\left( {S}_{1}\right) \mathcal{H} \bot E\left( {S}_{2}\right) \mathcal{H}
\]
(4) If \( {\left\{ {S}_{k}\right\} }_{1}^{\infty } \) is a sequence of pairwise disjoint sets from \( \mathcal{F} \), then for each \( h \in \mathcal{H} \)
\[
\mathop{\sum }\limits_{{k = 1}}^{n}E\left( {S}_{k}\right) h \rightarrow E\left( {{ \cup }_{k = 1}^{\infty }{S}_{k}}\right) h
\]
as \( n \rightarrow \infty \) .
Note that as a special case of (4), if \( {S}_{1} \cap {S}_{2} = \varnothing \), then \( E\left( {S}_{1}\right) + E\left( {S}_{2}\right) = E\left( {{S}_{1} \cup {S}_{2}}\right) \) .
Typically our interest in spectral measures will be in the case that \( X \) is \( \mathbb{C} \) or a subset of \( \mathbb{C} \), and \( \mathcal{F} \) is the Borel \( \sigma \) -algebra on \( X \) . There are some useful "ordinary" measures, defined on the \( \sigma \) -algebra \( \mathcal{F} \), which can be created from a spectral measure \( E \) ; we will see this in Proposition 6.14 below.
Example 6.9. To find an easy example of a spectral measure, let \( \left( {X,\mathcal{F},\mu }\right) \) be a measure space and set \( \mathcal{H} = {L}^{2}\left( {X,\mu }\right) \) . Define \( E : \mathcal{F} \rightarrow \mathcal{B}\left( \mathcal{H}\right) \) by \( E\left( S\right) = {M}_{{\chi }_{S}} \), the operator of multiplication by the characteristic function \( {\chi }_{S} \), acting on \( {L}^{2}\left( {X,\mu }\right) \) ; the function \( E \) is a spectral measure on \( X \) . The reader is encouraged to check the details verifying properties (1)-(4) in Definition 6.8.
Example 6.10. Suppose that \( T \) is a diagonal operator on \( {\ell }^{2} \) with diagonal sequence \( \left\{ {{\lambda }_{1},{\lambda }_{2},\ldots }\right\} \) . Define \( E \) on the Borel subsets of \( \mathbb{C} \) by setting \( E\left( S\right) \) to be the diagonal operator with diagonal sequence \( \left\{ {{\alpha }_{1},{\alpha }_{2},\ldots }\right\} \), where \( {\alpha }_{j} = 1 \) if \( {\lambda }_{j} \) is in \( S \) and \( {\alpha }_{j} = 0 \) if \( {\lambda }_{j} \) is not in \( S \) . One easily checks that \( E \) is a spectral measure.
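In fact (a connection we point out here; it is not made explicit in the text), Example 6.10 is a special case of the construction in the next example: identify \( {\ell }^{2} \) with \( {L}^{2}\left( {\mathbb{N},\mu }\right) \) for \( \mu \) the counting measure and regard \( T \) as multiplication by the function \( \varphi \left( j\right) = {\lambda }_{j} \) . Then the diagonal projection \( E\left( S\right) \) described above is exactly multiplication by \( {\chi }_{{\varphi }^{-1}\left( S\right) } \), since the \( j \) th diagonal entry \( {\alpha }_{j} \) equals 1 precisely when \( j \in {\varphi }^{-1}\left( S\right) \) .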
Example 6.11. Suppose that \( {M}_{\varphi } \) is a multiplication operator on \( {L}^{2}\left( {X,\mu }\right) \), where \( \left( {X,\mu }\right) \) is a measure space. The function \( E : \mathcal{F} \rightarrow \mathcal{B}\left( {{L}^{2}\left( {X,\mu }\right) }\right) \) given by
\[
E\left( S\right) = {M}_{{\chi }_{{\varphi }^{-1}\left( S\right) }}
\]
for \( S \) a Borel subset of \( \mathbb{C} \) is a spectral measure on \( \mathbb{C} \) . Clearly conditions (1) and (2) of Definition 6.8 hold. For (3) observe that if \( {S}_{1} \cap {S}_{2} = \varnothing \), then for any \( {h}_{1},{h}_{2} \in {L}^{2}\left( {X,\mu }\right) \) we have
\[
\left\langle {E\left( {S}_{1}\right) {h}_{1}, E\left( {S}_{2}\right) {h}_{2}}\right\rangle = \left\langle {{M}_{{\chi }_{{\varphi }^{-1}\left( {S}_{1}\right) }}{h}_{1},{M}_{{\chi }_{{\varphi }^{-1}\left( {S}_{2}\right) }}{h}_{2}}\right\rangle .
\]
(6.4)
Disjointness of \( {S}_{1} \) and \( {S}_{2} \) ensures that \( {\chi }_{{\varphi }^{-1}\left( {S}_{1}\right) }{\chi }_{{\varphi }^{-1}\left( {S}_{2}\right) } = 0 \), so that the inner product in Equation (6.4) is zero. Property (4) in Definition 6.8 is the statement that for each \( h \in {L}^{2}\left( {X,\mu }\right) \)
\[
{\chi }_{{\varphi }^{-1}\left( {{ \cup }_{1}^{n}{S}_{k}}\right) }h \rightarrow {\chi }_{{\varphi }^{-1}\left( {{ \cup }_{1}^{\infty }{S}_{k}}\right) }h
\]
in \( {L}^{2}\left( {X,\mu }\right) \) as \( n \rightarrow \infty \) . This is easily verified by, say, an appeal to the dominated convergence theorem.
Notice that if \( S \) is contained in the complement of the range of \( \varphi \), then \( E\left( S\right) = 0 \) by definition. Sometimes one defines the support of a spectral measure on \( \mathbb{C} \) to be the complement of the union of all open sets \( S \) for which \( E\left( S\right) = 0 \) . In this example, the support of the spectral measure \( E \) is the essential range of \( \varphi \), or equivalently the spectrum of the operator \( {M}_{\varphi } \) (Exercise 6.5). We may think of the spectral measure \( E \) in this example as being defined on the Borel subsets of \( \sigma \left( {M}_{\varphi }\right) \), and extended to all Borel subsets of \( \mathbb{C} \) by setting \( E\left( C\right) = 0 \) if \( C \subseteq \mathbb{C} \smallsetminus \sigma \left( {M}_{\varphi }\right) \) .
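For a concrete instance (an illustration we add, with Lebesgue measure on \( \left\lbrack {0,1}\right\rbrack \) as the underlying measure space), take \( \varphi \left( x\right) = x \) on \( {L}^{2}\left( \left\lbrack {0,1}\right\rbrack \right) \) . Then for every Borel set \( S \subseteq \mathbb{C} \) ,
\[
E\left( S\right) h = {\chi }_{{\varphi }^{-1}\left( S\right) }h = {\chi }_{S \cap \left\lbrack {0,1}\right\rbrack }h,\;h \in {L}^{2}\left( \left\lbrack {0,1}\right\rbrack \right) ,
\]
and the support of \( E \) is \( \left\lbrack {0,1}\right\rbrack \), which is the essential range of \( \varphi \) and the spectrum of \( {M}_{\varphi } \) .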
Example 6.12. Suppose \( \left( {X,\mathcal{F}}\right) \) is a measurable space and \( E : \mathcal{F} \rightarrow \mathcal{B}\left( \mathcal{H}\right) \) is a spectral measure. If \( \mathcal{K} \) is another Hilbert space and \( W : \mathcal{H} \rightarrow \mathcal{K} \) is unitary, then the formula
\[
F\left( S\right) = {WE}\left( S\right) {W}^{-1}
\]
defines a spectral measure \( F : \mathcal{F} \rightarrow \mathcal{B}\left( \mathcal{K}\right) \) . The reader is asked to verify the details in Exercise 6.6.
Property (4) in Definition 6.8 is sometimes described by saying that
\[
\mathop{\sum }\limits_{{k = 1}}^{\infty }E\left( {S}_{k}\right) = E\left( {{ \cup }_{k = 1}^{\infty }{S}_{k}}\right)
\]
with the convergence of the sum of projections taking place in the strong operator topology. This terminology appeared earlier, in Exercise 2.23 of Chapter 2. Recall from that exercise that a sequence \( \left\{ {T}_{n}\right\} \) of operators on a Hilbert space is said to converge in the strong operator topology to an operator \( T \) if for each vector \( h,{T}_{n}h \rightarrow \) \( {Th} \) . The next result serves to describe the operator \( E\left( {{ \cup }_{k = 1}^{\infty }{S}_{k}}\right) = \mathop{\sum }\limits_{1}^{\infty }E\left( {S}_{k}\right) \) . Recall that \( \vee {M}_{k} \) denotes the closed linear span of the sets \( {M}_{k} \), and when the \( {M}_{k} \) are pairwise orthogonal closed subspaces, \( \vee {M}_{k} = \sum \oplus {M}_{k} \) (Exercise 1.33 in Chapter 1).
Lemma 6.13. Suppose that \( \mathcal{H} \) is a Hilbert space and that \( {\left\{ {E}_{k}\right\} }_{k = 1}^{\infty } \) is a sequence of orthogonal projections on \( \mathcal{H} \) with \( {E}_{j}\mathcal{H} \bot {E}_{k}\mathcal{H} \) for all \( j \neq k \) . We have
\[
\mathop{\sum }\limits_{{k = 1}}^{\infty }{E}_{k} = E
\]
in the sense of strong operator convergence, where \( E \) is the projection of \( \mathcal{H} \) onto
\[
\mathop{\bigvee }\limits_{1}^{\infty }{E}_{k}\mathcal{H} = \mathop{\sum }\limits_{1}^{\infty } \oplus {E}_{k}\mathcal{H}
\]
Proof. Let \( {P}_{n} = \mathop{\sum }\limits_{1}^{n}{E}_{k} \) . We want to show that \( {P}_{n}h \rightarrow {Eh} \) for each \( h \in \mathcal{H} \), where \( E \) is defined as in the statement of the lemma. First suppose that \( h \) is in the subspace \( \mathop{\sum }\limits_{1}^{\infty } \oplus {E}_{k}\mathcal{H} \), so that \( h = \mathop{\sum }\limits_{1}^{\infty }{h}_{k} \) with \( {h}_{k} \in {E}_{k}\mathcal{H} \) and \( \sum {\begin{Vmatrix}{h}_{k}\end{Vmatrix}}^{2} < \infty \) . Since \( {P}_{n}h = \mathop{\sum }\limits_{1}^{n}{h}_{k} \) , we have \( {P}_{n}h \rightarrow h = {Eh} \) as \( n \rightarrow \infty \) . If, on the other hand, \( h \bot {E}_{k}\mathcal{H} \) for all \( k \), then \( {P}_{n}h = 0 = {Eh} \) . Given any \( x \in \mathcal{H} \), write \( x = y + z \) where \( y \in \mathop{\sum }\limits_{1}^{\infty } \oplus {E}_{k}\mathcal{H} \) and \( z \bot {E}_{k}\mathcal{H} \) for all \( k \) . We have \( {P}_{n}x \rightarrow y = {Ex} \), as desired.
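A minimal illustration (ours, not the book's): in \( {\ell }^{2} \) let \( {E}_{k} \) be the orthogonal projection onto the span of the \( k \) th standard basis vector \( {e}_{k} \) . Then \( {P}_{n}h = \mathop{\sum }\limits_{1}^{n}{E}_{k}h = \left( {{h}_{1},\ldots ,{h}_{n},0,0,\ldots }\right) \rightarrow h \) for every \( h \in {\ell }^{2} \), so \( \mathop{\sum }\limits_{1}^{\infty }{E}_{k} = I \) in the sense of the lemma, while \( \begin{Vmatrix}{{P}_{n} - I}\end{Vmatrix} = 1 \) for every \( n \) ; the convergence is genuinely in the strong operator topology and not in operator norm.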
By this lemma, we see that in (4) of Definition 6.8, \( \mathop{\sum }\limits_{{k = 1}}^{\infty }E\left( {S}_{k}\right) \) is the projection onto \( \mathop{\sum }\limits_{1}^{\infty } \oplus E\left( {S}_{k}\right) \mathcal{H} \), and moreover, for each \( h \in \mathcal{H} \) ,
\[
{\begin{Vmatrix}E\left( { \cup }_{1}^{\infty }{S}_{k}\right) h\end{Vmatrix}}^{2} = \mathop{\sum }\limits_{1}^{\infty }{\begin{Vmatrix}E\left( {S}_{k}\right) h\end{Vmatrix}}^{2}.
\]
(6.5)
Since a spectral measure \( E \) maps disjoint sets to projections with orthogonal ranges, it follows easily that
\[
{S}_{1} \subseteq {S}_{2} \Rightarrow E\left( {S}_{1}\right) \mathcal{H} \subseteq E\left( {S}_{2}\right) \mathcal{H}
\]
To see this, write \( {S}_{2} \) as the disjoint union of \( {S}_{1} \) and \( {S}_{2} \smallsetminus {S}_{1} \), so that
\[
E\left( {S}_{2}\right) \mathcal{H} = E\left( {S}_{1}\right) \mathcal{H} \oplus E\left( {{S}_{2} \smallsetminus {S}_{1}}\right) \mathcal{H} \supseteq E\left( {S}_{1}\right) \mathcal{H}.
\]
Using this observation we see that for any two measurable sets \( {B}_{1} \) and \( {B}_{2} \) ,
\[
E\left( {{B}_{1} \cap {B}_{2}}\right) = E\left( {B}_{1}\right) E\left( {B}_{2}\right)
\]
and so the operators \( E\left( {B}_{1}\right) \) and \( E\left( {B}_{2}\right) \) commute. The details of this statement are left to the reader in Exercise 6.7. Since we have \( E\left( {S}_{k}\right) \mathcal{H} \subseteq E\left( {{ \cup }_{1}^{\infty }{S}_{j}}\right) \mathcal{H} \) for each \( k \) , and thus
\[
\mathop{\sum }\limits_{1}^{\infty } \oplus E\left( {S}_{k}\right) \mathcal{H} \subseteq E\left( {{ \cup }_{1}^{\infty }{S}_{k}}\right) \mathcal{H}
\]
the significance of condition (4) in Definition 6.8 is to demand the reverse containment.
The property in condition (4) of Definition 6.8 is certainly reminiscent of the defining property for a (scalar-valu |
109_The rising sea Foundations of Algebraic Geometry | Definition 4.43 |
Definition 4.43. Let \( \Delta \) be a graph, i.e., a 1-dimensional simplicial complex. The girth of \( \Delta \) is the smallest integer \( k \geq 3 \) such that \( \Delta \) contains a \( k \) -gon, provided such an integer \( k \) exists. Otherwise, \( \Delta \) is a tree, and we define its girth to be \( \infty \) .
Proposition 4.44. Let \( \Delta \) be a connected bipartite graph in which every vertex is a face of at least two edges. Then \( \Delta \) is a building if and only if \( \Delta \) has diameter \( m \) and girth \( {2m} \) for some \( m \) with \( 2 \leq m \leq \infty \) . In this case \( \Delta \) has type \( {I}_{2}\left( m\right) \) .
("Diameter" here means the supremum of the distances \( d\left( {u, v}\right) \) between vertices \( u, v \) . Here \( d\left( {-, - }\right) \) denotes the usual distance between two vertices, not the gallery distance defined in Section A.1.3. Note that when \( m = \infty \), the content of the proposition is that a building of type \( \circ \infty \) is the same thing as a tree without endpoints, as claimed in Example 4.16.)
Proof. Suppose that \( \Delta \) has diameter \( m \) and girth \( {2m} \) . Then one easily checks the following properties for any two vertices \( u, v \) :
- If \( d\left( {u, v}\right) < m \), then there is a unique path from \( u \) to \( v \) of length \( \leq m \) with no backtracking. ["No backtracking" means that any three consecutive vertices are distinct.]
- If \( 0 < d\left( {u, v}\right) < m \), then there is a unique vertex \( {v}^{\prime } \) adjacent to \( v \) such that \( d\left( {u,{v}^{\prime }}\right) = d\left( {u, v}\right) - 1 \) . For all other vertices \( {v}^{\prime } \) adjacent to \( v, d\left( {u,{v}^{\prime }}\right) = \) \( d\left( {u, v}\right) + 1 \) .
- If \( d\left( {u, v}\right) = m \) (and hence \( m < \infty \) ), then \( d\left( {u,{v}^{\prime }}\right) = m - 1 \) for all vertices \( {v}^{\prime } \) adjacent to \( v \) .

It is now a routine exercise to show that \( \Delta \) is a building of type \( {\mathrm{I}}_{2}\left( m\right) \), with the apartments being all the \( {2m} \) -gons in \( \Delta \) .
Conversely, suppose that \( \Delta \) is a building of type \( {\mathrm{I}}_{2}\left( m\right) \) . Since each apartment is a \( {2m} \) -gon and any two vertices are contained in an apartment, it suffices to show that \( \Delta \) does not contain a \( k \) -gon for \( k < {2m} \) . Let \( Z \) be a \( k \) -gon in \( \Delta \) with \( 3 \leq k < \infty \) . Let \( C \) be a chamber in \( Z \), with vertices \( v \) and \( w \) , let \( \sum \) be any apartment containing \( C \), and let \( \rho \) be the retraction \( {\rho }_{\sum, C} \) . As we traverse \( Z \) starting at \( C \) (thought of as oriented from \( v \) to \( w \), say), the image under \( \rho \) is a closed curve in \( \sum \) passing through the vertices \( v, w,\ldots \) and never traversing \( C \) again before returning to \( v \) [see part (1) of Proposition 4.39]. Since \( \sum \) is a \( {2m} \) -gon, this is possible only if \( k \geq {2m} \) .
Remark 4.45. In view of Definition 4.20, the proposition can be viewed as a characterization of generalized \( m \) -gons; this characterization is often taken as the definition of a generalized \( m \) -gon.
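A concrete check of the numerology (our example, taking for granted the correspondence with projective planes from Example 4.16 and Exercise 4.46 below): the incidence graph of the Fano plane is the Heawood graph, a bipartite graph on \( 7 + 7 \) vertices in which every vertex lies on exactly three edges; its diameter is 3 and its girth is \( 6 = 2 \cdot 3 \), so Proposition 4.44 exhibits it as a building of type \( {\mathrm{I}}_{2}\left( 3\right) = {\mathrm{A}}_{2} \), the smallest thick example.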
## Exercises
4.46. Deduce from Proposition 4.44 with \( m = 3 \) that every building of type \( {\mathrm{A}}_{2} \) is the flag complex of a projective plane, as claimed in Example 4.16.
4.47. Give an alternative proof that a building of type \( {\mathrm{I}}_{2}\left( m\right) \) has girth \( {2m} \) by considering types of galleries.
4.48. Let \( \Delta \) be the incidence graph of a generalized \( m \) -gon (i.e., a building of type \( {\mathrm{I}}_{2}\left( m\right) \) ). Given two vertices \( u, v \) of \( \Delta \), a path of minimal length from \( u \) to \( v \) is called a geodesic. As we noted in the proof of Proposition 4.44, there is a unique such geodesic if \( d\left( {u, v}\right) < m \) . If \( d\left( {u, v}\right) = m < \infty \), however, there are at least two geodesics from \( u \) to \( v \) .
(a) Let \( {\Delta }^{\prime } \) be a convex chamber subcomplex of \( \Delta \) . If \( u \) and \( v \) are vertices of \( {\Delta }^{\prime } \) such that \( d\left( {u, v}\right) < m \) in \( \Delta \), show that the (unique) geodesic joining them is contained in \( {\Delta }^{\prime } \) .
(b) If \( d\left( {u, v}\right) = m < \infty \), show that there is a convex chamber subcomplex \( {\Delta }^{\prime } \) containing \( u \) and \( v \) but only one of the geodesics joining them.
4.49. (a) Where in the proof of Proposition 4.40 did we use the assumption that \( C \) is a chamber?
(b) Give an example to show that this assumption cannot be dropped.
4.50. Show that every building is a flag complex.
4.51. Given an apartment \( \sum \), a chamber \( C \in \sum \), and a chamber \( D \in \Delta \), show that there is a unique type-preserving chamber map \( \rho : \Delta \rightarrow \sum \) such that \( \rho \left( D\right) = C \) and \( \rho \) maps every apartment containing \( D \) isomorphically onto \( \sum \) . For lack of a better name, we will call \( \rho \) the canonical map \( \Delta \rightarrow \sum \) such that \( \rho \left( D\right) = C \) .
4.52. (a) Let \( \Delta \) be a thick building, let \( C \) be a chamber, and let \( {\mathcal{A}}^{\prime } \) be the set of apartments containing \( C \) . Show that \( C \) is the only chamber in \( \mathop{\bigcap }\limits_{{\sum \in {\mathcal{A}}^{\prime }}}\sum \) . (In more intuitive language, every closed chamber is an intersection of apartments. The interested reader can phrase this precisely in terms of the geometric realization.)
(b) Give an example to show that the thickness assumption cannot be dropped.
4.53. (a) Recall that the notions of elementary homotopy and homotopy were defined for galleries in Coxeter complexes in Section 3.2. Generalize to buildings.
(b) Given a nonminimal gallery, how can one modify it to obtain a minimal gallery with the same extremities?
## 4.5 The Complete System of Apartments
We have seen several cases in which something that seemed a priori to depend on a choice of apartment system \( \mathcal{A} \) turned out to be independent of \( \mathcal{A} \) . The next theorem is the ultimate result of this type; it can be viewed as saying that all systems of apartments in a given building are compatible with one another. Although the statement of the result does not refer to colorings, we will continue to assume that \( \Delta \) comes equipped with a fixed type function having values in a set \( S \) . In particular, we have a Coxeter matrix \( M \), which will be used in the proof of the theorem.
Theorem 4.54. If \( \Delta \) is a building, then the union of any family of apartment systems is again an apartment system. Consequently, \( \Delta \) admits a largest system of apartments.
Proof. It is obvious that (B0) and (B1) hold for the union, so the only problem is to prove (B2). We will work with the variant \( \left( {\mathrm{B}{2}^{\prime \prime }}\right) \) . Suppose, then, that \( \sum \) and \( {\sum }^{\prime } \) are apartments in different apartment systems and that \( \sum \cap {\sum }^{\prime } \) contains at least one chamber. We must find an isomorphism \( {\sum }^{\prime } \rightarrow \sum \) that fixes \( \sum \cap {\sum }^{\prime } \) pointwise.
Choose an arbitrary chamber \( C \in \sum \cap {\sum }^{\prime } \) . There are then two obvious candidates for the desired isomorphism \( {\sum }^{\prime } \rightarrow \sum \) . On the one hand, we know by Corollary 4.36 that \( \sum \) and \( {\sum }^{\prime } \) have the same Coxeter matrix \( M \), so we can find a type-preserving isomorphism \( \phi : {\sum }^{\prime } \rightarrow \sum \) by Corollary 4.8. And we can certainly choose \( \phi \) such that \( \phi \left( C\right) = C \), since the group of type-preserving automorphisms of \( \sum \) is transitive on the chambers. It then follows that \( \phi \) fixes \( C \) pointwise. Unfortunately, it is not obvious that \( \phi \) fixes \( \sum \cap {\sum }^{\prime } \) pointwise.
The other candidate is provided by the theory of retractions. Namely, let \( \rho \) be the retraction \( {\rho }_{\sum, C} \) and let \( \psi : {\sum }^{\prime } \rightarrow \sum \) be the restriction of \( \rho \) to \( {\sum }^{\prime } \) . Then \( \psi \) obviously fixes \( \sum \cap {\sum }^{\prime } \) pointwise, simply because \( \rho \) is a retraction onto \( \sum \) .
But it is not obvious that \( \psi \) is an isomorphism. [Readers who are tempted to say that \( \psi \) is an isomorphism by the construction of \( \rho \) should recall that we do not know that \( {\sum }^{\prime } \) is part of an apartment system containing \( \sum \) ; indeed, that is what we are trying to prove!]
To complete the proof, we will show by the standard uniqueness argument that \( \phi \) and \( \psi \) are in fact the same map, which therefore has all the required properties. Since \( \phi \) and \( \psi \) both fix \( C \) pointwise, the standard argument will go through if we can show that if \( \Gamma \) is a minimal gallery in \( {\sum }^{\prime } \) starting at \( C \) , then the pregalleries \( \phi \left( \Gamma \right) \) and \( \psi \left( \Gamma \right) \) are galleries. This is clear for \( \phi \left( \Gamma \right) \) since \( \phi \) is an isomorphism. And it is true for \( \psi \left( \Gamma \right) \) because of two facts proved in the previous section: (a) \( \Gamma \) is still minimal when viewed as a gallery in \( \Delta \) ; and (b) \( \rho \) preserves distances from \( C \) .
Definition 4.55. The maximal apartment system will be called the complete system of apartments. It consists, then, of all subcomplexes \( \sum \subseteq \Delta \) such that \( \sum \) is in some apartment system \( \mathcal{A} \) .
Remark 4.56. This description of the complete apartment sys |
106_106_The Cantor function | Definition 1.1 |
Definition 1.1. A set \( \mathcal{F} \) of subsets of \( I \) satisfying the conditions (i),(ii) and (iii) above is called a filter on \( I \) . A filter which satisfies (iv) is called an ultrafilter.
The filters on \( I \), being subsets of \( \operatorname{Pow}\left( I\right) \), are partially ordered by inclusion. The ultrafilters are the maximal elements of the set of filters.
## Examples
1.2. If \( I \neq \varnothing ,\{ I\} \) is a filter on \( I \) .
1.3. If \( k \) is a fixed element of \( I, F = \{ J \subseteq I \mid k \in J\} \) is an ultrafilter on \( I \) . (Ultrafilters constructed in this way are called principal ultrafilters.)
1.4. If \( I \) is infinite, the complements of the finite subsets of \( I \) form a filter. (When \( I = \mathbf{N} \), this filter is called the Fréchet filter.)
Exercise 1.5. \( \mathcal{F} \) is an ultrafilter on \( I \) and \( J \in \mathcal{F} \) . Prove that \( {\mathcal{F}}_{J} = \) \( \{ A \cap J \mid A \in \mathcal{F}\} \) is an ultrafilter on \( J \), and that for \( A \subseteq I, A \in \mathcal{F} \) if and only if \( A \cap J \in {\mathcal{F}}_{J} \) . ( \( {\mathcal{F}}_{J} \) is called the restriction of \( \mathcal{F} \) to \( J \) .)
Let \( a, b \in \mathop{\prod }\limits_{{i \in I}}{M}_{i} \) and let \( \mathcal{F} \) be an ultrafilter on \( I \) . We write \( a \equiv b{\;\operatorname{mod}\;\mathcal{F}} \) if \( \left\{ {i \in I \mid {a}_{i} = {b}_{i}}\right\} \in \mathcal{F} \), and denote the congruence class containing \( a \) by \( a\mathcal{F} \) . The set of all congruence classes is denoted by \( \mathop{\prod }\limits_{{i \in I}}{M}_{i}/\mathcal{F} \) . For each \( r \in \mathcal{R} \) , we define the relation \( {\psi r} \) on \( \mathop{\prod }\limits_{{i \in I}}{M}_{i}/\mathcal{F} \) by \( a\mathcal{F} \in {\psi r} \) if \( \left\{ {i \in I \mid {a}_{i} \in {\psi }_{i}r}\right\} \in \mathcal{F} \) . (Here, \( a \) is an \( n \) -tuple if \( r \in {\mathcal{R}}_{n} \) .) This definition is clearly independent of the choice of representative of the congruence class. To complete the construction, we define \( v\left( c\right) \) for \( c \in C \) to be the congruence class of the function \( I \rightarrow \mathop{\bigcup }\limits_{{i \in I}}{M}_{i} \) whose \( i \) -component is \( {v}_{i}\left( c\right) \) .
Theorem 1.6. \( \mathop{\prod }\limits_{{i \in I}}{M}_{i}/\mathcal{F} \) is a model of \( \mathcal{T} = \left( {\mathcal{R}, A, C}\right) \) . An element a \( \mathcal{F} \) of \( \mathop{\prod }\limits_{{i \in I}}{M}_{i}/\mathcal{F} \) satisfies \( p\left( x\right) \in P \) (where \( a, x \) may be \( n \) -tuples) if and only if \( \left\{ {i \in I \mid {a}_{i}}\right. \) satisfies \( \left. {p\left( x\right) }\right\} \in \mathcal{F} \) .
Proof: \( \mathop{\prod }\limits_{{i \in I}}{M}_{i}/\mathcal{F} \) is clearly a model of \( {\mathcal{T}}^{\prime } = \left( {\mathcal{R},\varnothing, C}\right) \) . To show that it is a model of \( \mathcal{T} \), we have to show that \( v\left( p\right) = 1 \) for all \( p \in A \) . Since for \( p \in A,\left\{ {i \in I \mid p\text{is true in}{M}_{i}}\right\} = I \in \mathcal{F} \), this will be an immediate consequence of the second assertion of the theorem. We shall prove this latter assertion by induction over the length of \( p \) .
If \( p = r\left( x\right) \), where \( r \in \mathcal{R} \), then the result holds by the definition of \( {\psi r} \) . If \( p = {q}_{1} \Rightarrow {q}_{2} \), then \( v\left( p\right) = 0 \) if and only if we have \( v\left( {q}_{1}\right) = 1 \) and \( v\left( {q}_{2}\right) = 0 \) . By induction, this holds precisely when \( {J}_{1} = \left\{ {i \in I \mid {v}_{i}\left( {q}_{1}\right) = 1}\right\} \) and \( {J}_{2} = \) \( \left\{ {i \in I \mid {v}_{i}\left( {q}_{2}\right) = 0}\right\} \) are both in \( \mathcal{F} \) . Put \( {J}_{3} = {J}_{1} \cap {J}_{2} \) . If \( {J}_{3} \in \mathcal{F} \), then \( {J}_{1} \in \mathcal{F} \) and \( {J}_{2} \in \mathcal{F} \) by condition (ii), while \( {J}_{1} \in \mathcal{F} \) and \( {J}_{2} \in \mathcal{F} \) imply \( {J}_{3} \in \mathcal{F} \) by condition (iii). Thus \( v\left( p\right) = 0 \) if and only if \( {J}_{3} = \left\{ {i \in I \mid {v}_{i}\left( p\right) = 0}\right\} \in \mathcal{F} \) . By condition (iv), \( v\left( p\right) = 1 \) if and only if \( I - {J}_{3} = \left\{ {i \in I \mid {v}_{i}\left( p\right) = 1}\right\} \in \mathcal{F} \) .
If \( p\left( x\right) = \left( {\forall y}\right) q\left( {x, y}\right) \), then \( a\mathcal{F} \) satisfies \( p\left( x\right) \) if and only if for every \( b\mathcal{F} \in \mathop{\prod }\limits_{{i \in I}}{M}_{i}/\mathcal{F},\left( {a\mathcal{F}, b\mathcal{F}}\right) \) satisfies \( q\left( {x, y}\right) \) . By induction, the latter holds if and only if for all \( b\mathcal{F},\left\{ {i \in I \mid \left( {{a}_{i},{b}_{i}}\right) }\right. \) satisfies \( \left. {q\left( {x, y}\right) }\right\} \in \mathcal{F} \) . Let \( J = \left\{ {i \in I \mid {a}_{i}}\right. \) satisfies \( p\left( x\right) \} \) . Suppose \( J \in \mathcal{F} \) . Then for all \( i \in J \) and all \( b\mathcal{F} \), we have \( \left( {{a}_{i},{b}_{i}}\right) \) satisfies \( q\left( {x, y}\right) \) since \( {a}_{i} \) satisfies \( \left( {\forall y}\right) q\left( {x, y}\right) \) . Thus \( a\mathcal{F} \) satisfies \( p\left( x\right) \) . Suppose \( J \notin \mathcal{F} \) . Then for each \( i \in K = I - J \), there exists an element \( {b}_{i} \in {M}_{i} \) such that \( \left( {{a}_{i},{b}_{i}}\right) \) does not satisfy \( q\left( {x, y}\right) \) . Thus there exists \( b \in \mathop{\prod }\limits_{{i \in I}}{M}_{i} \) such that, for all \( i \in K,\left( {{a}_{i},{b}_{i}}\right) \) does not satisfy \( q\left( {x, y}\right) \) . Since \( K \in \mathcal{F},\left( {a\mathcal{F}, b\mathcal{F}}\right) \) does not satisfy \( q\left( {x, y}\right) \) and \( a\mathcal{F} \) does not satisfy \( p\left( x\right) \) .
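For instance (an illustration of the second assertion, supplied here), suppose each \( {M}_{i} \) interprets a binary relation \( \leq \) as a linear order, so that the sentence \( \left( {\forall x}\right) \left( {\forall y}\right) \left( {x \leq y \vee y \leq x}\right) \) holds in every \( {M}_{i} \) . The set of indices on which it holds is all of \( I \), which belongs to \( \mathcal{F} \), so the sentence holds in \( \mathop{\prod }\limits_{{i \in I}}{M}_{i}/\mathcal{F} \) as well: the quotient is again linearly ordered.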
Definition 1.7. The model \( \mathop{\prod }\limits_{{i \in I}}{M}_{i}/\mathcal{F} \) of \( \mathcal{T} \) is called the ultraproduct of the models \( {M}_{i} \) with respect to the ultrafilter \( \mathcal{F} \) .
## Exercises
1.8. Let \( {p}_{i} \) be the \( i \) th prime and let \( {F}_{i} \) be a field of characteristic \( {p}_{i} \) . Let \( \mathcal{F} \) be an ultrafilter on the set \( I \) of positive integers, such that no member of \( \mathcal{F} \) is a singleton. Prove that \( \mathop{\prod }\limits_{{i \in I}}{F}_{i}/\mathcal{F} \) is a field of characteristic zero.
1.9. \( \mathcal{F} \) is an ultrafilter on \( I,{M}_{i}\left( {i \in I}\right) \) are models of the theory \( \mathcal{T} \), and \( J \in \mathcal{F} \) . Prove
\[
\mathop{\prod }\limits_{{i \in I}}{M}_{i}/\mathcal{F} \simeq \mathop{\prod }\limits_{{j \in J}}{M}_{j}/{\mathcal{F}}_{J}
\]
where \( {\mathcal{F}}_{J} \) is the restriction of \( \mathcal{F} \) to \( J \) .
## §2 Non-Principal Ultrafilters
Principal ultrafilters on \( I \), as constructed in Exercise 1.3, are of no use for the construction of new models, because an ultraproduct with respect to a principal ultrafilter is always isomorphic to one of the factors.
## Exercises
2.1. If \( k \in I \) and \( \mathcal{F} = \{ J \subseteq I \mid k \in J\} \), prove that
\[
\mathop{\prod }\limits_{{i \in I}}{M}_{i}/\mathcal{F} \simeq {M}_{k}
\]
2.2. \( \mathcal{F} \) is an ultrafilter on \( I \) and \( A \in \mathcal{F} \) is a finite subset of \( I \) . Prove that \( \mathcal{F} \) is principal.
We now investigate conditions on a set \( S \) of subsets of \( I \) for the existence of an ultrafilter \( \mathcal{F} \supseteq S \) . By an appropriate choice of \( S \), we shall be able to ensure that every such ultrafilter is non-principal.
Definition 2.3. The set \( S \) of subsets of \( I \) is said to have the finite intersection property if every finite subset of \( S \) has non-empty intersection.
Lemma 2.4. Let \( S \) be a set of subsets of \( I \) . There exists a filter on \( I \) containing \( S \) if and only if \( S \) has the finite intersection property.
Proof: The necessity of the condition is immediate, so we prove its sufficiency. Suppose \( S \) has the finite intersection property, and put
\[
T = \left\{ {U \subseteq I \mid U = {J}_{1} \cap \cdots \cap {J}_{n}\text{ for some }n\text{ and some }{J}_{1},\ldots ,{J}_{n} \in S}\right\} .
\]
Let
\[
\mathcal{F} = \{ F \subseteq I \mid F \supseteq U\text{ for some }U \in T\} .
\]
We prove that \( \mathcal{F} \), which clearly contains \( S \), is a filter. By the finite intersection property of \( S,\varnothing \notin T \) and so \( \varnothing \notin \mathcal{F} \) . Also, condition (ii) for a filter is clearly
satisfied by \( \mathcal{F} \) . Finally, if \( {F}_{1},\ldots ,{F}_{n} \in \mathcal{F} \), then for \( i = 1,\ldots, n,{F}_{i} \supseteq \mathop{\bigcap }\limits_{{j = 1}}^{{m}_{i}}{J}_{ij} \)
for some \( {m}_{i} \) and \( {J}_{i1},\ldots ,{J}_{i{m}_{i}} \in S \) . Hence
\[
\mathop{\bigcap }\limits_{{i = 1}}^{n}{F}_{i} \supseteq \mathop{\bigcap }\limits_{{i = 1}}^{n}\mathop{\bigcap }\limits_{{j = 1}}^{{m}_{i}}{J}_{ij}
\]
and so belongs to \( \mathcal{F} \) . Thus condition (iii) is satisfied and \( \mathcal{F} \) is a filter.
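As a simple illustration (ours): take \( I = \mathbf{N} \) and \( S = \{ \{ n, n + 1, n + 2,\ldots \} \mid n \in \mathbf{N}\} \) . Any finitely many of these tails intersect in the tail starting at the largest \( n \) involved, so \( S \) has the finite intersection property and the lemma applies; the filter \( \mathcal{F} \) it produces is exactly the Fréchet filter of Example 1.4, since a subset of \( \mathbf{N} \) contains a tail precisely when its complement is finite.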
Lemma 2.5. Let \( \mathcal{F} \) be a filter on \( I \) . Then there exists an ultrafilter \( {\mathcal{F}}^{ * } \supseteq \mathcal{F} \) on \( I \) .
Proof: The set of filters containing \( \mathcal{F} \) is an inductive set. By Zorn’s Lemma, it has a maximal member \( {\mathcal{F}}^{ * } \) .
## Exercises
2.6. Let \( \alpha = \left| I\right| \) and suppose \( \alpha \geq \beta \geq {\aleph }_{0} \) . Put \( S = \left\{ {J \subseteq I : \left| {I - J}\right| < \beta }\right\} \) . Prove that \( S \) is a filter and that if \( \mathcal{F} \) is an ultrafilter containing \( S \), then no member of \( \mathcal{F} \) has cardinal less than \( \beta \) .
2.7. An ultrafilter \( \mathcal{F} \) on \( I \) is called uniform if \( \le |
1288_[张芷芬&丁同仁&黄文灶&董镇喜] Qualitative Theory of Differential Equations | Definition 2.3 |
Definition 2.3. A system \( X \) is said to be of type (1) if it satisfies condition (1) in Theorem 2.2; it is said to be of type \( \left( {1,2}\right) \) if it satisfies conditions \( \left( 1\right) ,\left( 2\right) \), etc.
Definition 2.4. For a given set of systems \( \mathcal{C} \subset \mathcal{B} \), we say \( \mathcal{C} \) approximates \( X \) if \( X \in \overline{\mathcal{C}} \) .
In order to prove Theorem 2.2, Peixoto first proved a series of approximating lemmas (i.e., Lemma 2.2 to Lemma 2.9). Theorem 2.2 is then proved by using these lemmas.
LEMMA 2.2. Any system \( X \in \mathcal{B} \) can be approximated by a system \( {Y}_{1} \) of type \( \left( 1\right) \) .
The proof is not difficult. However, it requires some knowledge of differential topology, and it is thus omitted. (See [4].)
As for Lemmas 2.3 to 2.9, we will present them in detail in the following two subsections.
(B) On the elimination of nontrivial minimal sets. A minimal set is a nonempty invariant closed set which does not contain any nonempty invariant closed proper subset. For example, a critical point or a closed orbit of a system is clearly a minimal set of the system. In Chapter VII, we found that an irrational flow on the torus forms a minimal set of the system. Usually, a minimal set which is neither a critical point nor a closed orbit is called a nontrivial minimal set of the system [16].
In this section we essentially prove that the systems of type \( \left( {1,2}\right) \) are dense in the systems of type (1). Thus, by Lemma 2.2, the systems of type \( \left( {1,2}\right) \) are then dense in \( \mathcal{B} \) . The main idea is to show that under \( {C}^{1} \) small perturbations we can eliminate a nontrivial minimal set in system \( {Y}_{1} \), and obtain a new closed-orbit or new orbit connecting a saddle point to a saddle point. This is the method of elmination of nontrivial minimal sets to be presented in this subsection.
Let \( P \) be an ordinary point of the system \( {Y}_{1} \) . For \( P \), we construct a "square" \( R = {abcd} \), where the "horizontal" sides \( {ca} \) and \( {db} \) are two orbit arcs of \( {Y}_{1} \), and the "vertical" sides \( {ab} \) and \( {cd} \) are two arcs orthogonal to the linear field of \( {Y}_{1} \) (as shown in Figure 8.16).
Without loss of generality, we may assume that for the given local coordinate system, \( R \) is chosen sufficiently small such that it is contained in the local coordinate neighborhood of \( P \) . Moreover, we may assume that \( R \) is \( \left| x\right| \leq 1,\left| y\right| \leq 1 \), and \( P = \left( {0,0}\right), a = \left( {1,1}\right), b = \left( {1, - 1}\right), c = \left( {-1,1}\right) \) , \( d = \left( {-1, - 1}\right) \) . Further, inside \( R \), the vector field \( {Y}_{1} \) is always in the direction of the positive \( x \) -axis and is of length 1 .
Let \( q \in \left\lbrack {a, b}\right\rbrack \), and consider the orbit \( \gamma \left( q\right) \) of \( {Y}_{1} \) passing through the point \( q \) . Suppose \( \gamma \left( q\right) \) intersects \( \left\lbrack {c, d}\right\rbrack \) after \( q \) ; denote the first such
![bea09977-be18-4815-a30e-4fa2fe3b219c_462_0.jpg](images/bea09977-be18-4815-a30e-4fa2fe3b219c_462_0.jpg)
Figure 8.16
intersection point by \( {T}_{q} \) . This determines a map \( T : \left\lbrack {a, b}\right\rbrack \rightarrow \left\lbrack {c, d}\right\rbrack \) ; let its domain of definition be \( \Gamma \subset \left\lbrack {a, b}\right\rbrack \), where \( \Gamma \) can possibly be empty. Let \( {c}_{0},{d}_{0} \in \left\lbrack {a, b}\right\rbrack \) be the points, if they exist, such that \( {T}_{{c}_{0}} = c,{T}_{{d}_{0}} = d \) . For simplicity, we always assume that the sides \( {ca} \) and \( {bd} \) of the rectangle \( R \) do not lie on the same orbit.
LEMMA 2.3. Let the set \( \Gamma \subset \left\lbrack {a, b}\right\rbrack \) be the domain of definition of the mapping \( T : \left\lbrack {a, b}\right\rbrack \rightarrow \left\lbrack {c, d}\right\rbrack \) . Then \( \Gamma \) consists of a finite number of intervals. Moreover, if an endpoint \( S \) of these intervals has the property that \( S \notin \Gamma \), then the orbit \( \gamma \left( S\right) \) through \( S \) tends to a saddle point.
Proof. Let \( q \in \Gamma \smallsetminus a \cup b \cup {c}_{0} \cup {d}_{0} \) . The continuity of the system implies that there exists a small neighborhood \( U\left( q\right) \) of \( q \) in \( \left\lbrack {a, b}\right\rbrack \) such that \( U\left( q\right) \subset \Gamma \) . That is, \( \Gamma \smallsetminus a \cup b \cup {c}_{0} \cup {d}_{0} \) is open in \( \left\lbrack {a, b}\right\rbrack \) ; and it is clearly the union of at most countably many disjoint open intervals. Let \( \left( {s,{s}^{\prime }}\right) \) be one of these intervals, and assume that \( s \notin \Gamma \) . Consider all the orbits which start from \( \left( {s,{s}^{\prime }}\right) \) and intersect \( \left\lbrack {c, d}\right\rbrack \) (see Figure 8.17). For all \( q \in \left( {s,{s}^{\prime }}\right) \), the arcs of the orbits \( \overset{⏜}{q{T}_{q}} \) form a "strip" \( \Delta \) . Consider the orbit \( \gamma \left( S\right) \) through \( S \) . Since \( S \notin \Gamma ,\gamma \left( S\right) \) must lie on the boundary \( \partial \Delta \) of \( \Delta \) and \( \omega \left( \gamma \right) \subset \partial \Delta \) . It can be readily shown that \( \Delta \) is homeomorphic to a rectangle in \( {R}^{2} \), excluding two parallel lines (as shown in
![bea09977-be18-4815-a30e-4fa2fe3b219c_462_1.jpg](images/bea09977-be18-4815-a30e-4fa2fe3b219c_462_1.jpg)
FIGURE 8.17
![bea09977-be18-4815-a30e-4fa2fe3b219c_463_0.jpg](images/bea09977-be18-4815-a30e-4fa2fe3b219c_463_0.jpg)
Figure 8.18
Figure 8.18). Clearly, \( \omega \left( \gamma \right) \) can only be a critical point; and since the system \( {Y}_{1} \) is of type (1), this critical point must be a saddle point. On the other hand, since a system of type (1) can only have a finite number of critical points, the set \( \Gamma \smallsetminus a \cup b \cup {c}_{0} \cup {d}_{0} \) must be the union of finitely many disjoint open intervals. Hence, \( \Gamma \) consists of a finite number of open, closed, or half-open, half-closed intervals.
LEMMA 2.4. Consider a point \( P \) in a nontrivial minimal set \( \mu \) of \( {Y}_{1} \) . Suppose that there exists a local coordinate square \( R \) surrounding \( P \), such that no orbit starting from the right side ab of \( R \) tends to a saddle point. Then \( {Y}_{1} \) can be approximated by a system which has a closed orbit passing through \( P \), and this closed orbit does not bound a cell.
Proof. If \( T \) is defined at \( a \) and \( b \), then Lemma 2.3 implies that \( T \) is defined everywhere in \( \left\lbrack {a, b}\right\rbrack \) . If \( T \) is not defined at either \( a \) or \( b \), then the property of nontrivial minimal set \( \mu \) implies that we can find a point \( \bar{P} \) in \( \mu \) with a corresponding square \( \bar{R} \) such that \( T \) is defined at both \( a \) and \( b \) of the right-hand side \( \left\lbrack {a, b}\right\rbrack \) ; and thus \( T \) is defined everywhere in \( \left\lbrack {a, b}\right\rbrack \) . (The proof is left to the reader.) Hence, we may assume that for \( P \in \mu \) and the corresponding square \( R \), the map \( T \) is defined everywhere on the right side \( \left\lbrack {a, b}\right\rbrack \) . (As shown in Figure 8.19).
Let \( \gamma \) denote the orbit through \( P \), and assume that there are infinitely many arcs of \( \gamma \) arbitrarily close to \( P \) . Let \( {q}_{i} \) be the \( i \) th point after \( P \) at which the orbit \( \gamma \) intersects the segment \( \sigma : x = - 1 \) , \( 0 \geq y \geq - 1/2 \) . There are sufficiently large \( i \) such that the \( {q}_{i} \) are arbitrarily
![bea09977-be18-4815-a30e-4fa2fe3b219c_463_1.jpg](images/bea09977-be18-4815-a30e-4fa2fe3b219c_463_1.jpg)
FIGURE 8.19
close to \( \left( {-1,0}\right) \) ; and let \( {P}_{i} \) be the corresponding point on the \( y \) -axis such that \( {P}_{i} \) is arbitrarily close to \( P = \left( {0,0}\right) \) .
Let \( \varphi \) be a differentiable function such that \( \varphi > 0 \) inside \( R \) and \( \varphi = 0 \) outside \( R \) ; and let \( Z = \left( {0,1}\right) \) be the unit upward field in \( R \) . For \( 0 \leq u \leq 1 \) , define a new vector field on \( {M}^{2}, X\left( u\right) = {Y}_{1} + {\varepsilon u\varphi Z} \) . When \( \varepsilon \) is sufficiently small, this field can be made arbitrarily close to \( {Y}_{1} \) . Let \( \gamma \left( u\right) \) be the orbit of \( X\left( u\right) \) passing through \( P \) . Since \( T \) is defined everywhere in \( \left\lbrack {a, b}\right\rbrack \), the orbit \( \gamma \left( u\right) \) must intersect \( R \) infinitely many times, and it does not tend to a critical point.
For each point \( y \in \sigma \), consider the orbits of \( X\left( 0\right) \) and \( X\left( 1\right) \) passing through the point \( y \) . Let \( \delta \left( y\right) > 0 \) denote the length of the arc segment on the \( y \) -axis determined by these two orbits. \( \delta \left( y\right) \) is continuous with respect to \( y \), and thus by compactness we conclude that there exists a constant \( \delta > 0 \) such that \( \delta \left( y\right) > \delta \) for all \( y \in \sigma \) .
Choose a sufficiently large \( i \) such that \( \rho \left( {{P}_{i}, P}\right) < \delta \) . From the orientability of \( {M}^{2} \) it follows that we can find a sufficiently small \( {u}_{0} \) such that for \( u \leq {u}_{0} \), the \( i \) th intersection of \( \gamma \left( u\right) \) with \( \sigma \) is a point \( {q}_{i}\left( u\right) \) above \( {q}_{i} \) . Hence, the corresponding \( {P}_{i}\left( u\right) \), when \( \gamma \left( u\right) \) intersects the \( y \) -axis, also lie above \( {P}_{i} |
1189_(GTM95)Probability-1 | Definition 4 |
Definition 4. The sequence \( {\xi }_{1},{\xi }_{2},\ldots \) of random variables converges in distribution to the random variable \( \xi \) (notation: \( {\xi }_{n}\overset{d}{ \rightarrow }\xi \) or \( {\xi }_{n}\overset{\text{ law }}{ \rightarrow }\xi \) ) if
\[
\mathrm{E}f\left( {\xi }_{n}\right) \rightarrow \mathrm{E}f\left( \xi \right) ,\;n \rightarrow \infty ,
\]
(4)
for every bounded continuous function \( f = f\left( x\right) \) . The reason for the terminology is that, according to what will be proved in Sect. 1 of Chap. 3, condition (4) is equivalent to the convergence of the distribution functions \( {F}_{{\xi }_{n}}\left( x\right) \) to \( {F}_{\xi }\left( x\right) \) at each point \( x \) of continuity of \( {F}_{\xi }\left( x\right) \) . This convergence is denoted by \( {F}_{{\xi }_{n}} \Rightarrow {F}_{\xi } \) .
We emphasize that the convergence of random variables in distribution is defined only in terms of the convergence of their distribution functions. Therefore it makes sense to discuss this mode of convergence even when the random variables are defined on different probability spaces. This convergence will be studied in detail in Chapter 3, where, in particular, we shall explain why in the definition of \( {F}_{{\xi }_{n}} \Rightarrow {F}_{\xi } \) we require only convergence at points of continuity of \( {F}_{\xi }\left( x\right) \) and not at all \( x \) .
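A simple example (added here for illustration) shows why this mode of convergence involves only the distributions: let \( \xi \) be a standard Gaussian random variable and put \( {\xi }_{n} = - \xi \) for every \( n \) . Then \( {F}_{{\xi }_{n}} = {F}_{\xi } \) for all \( n \), so (4) holds trivially and \( {\xi }_{n}\overset{d}{ \rightarrow }\xi \), yet
\[
\mathrm{P}\left\{ {\left| {{\xi }_{n} - \xi }\right| > \varepsilon }\right\} = \mathrm{P}\left\{ {\left| \xi \right| > \varepsilon /2}\right\} > 0
\]
for every \( n \), so \( {\xi }_{n} \) does not converge to \( \xi \) in probability.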
2. In solving problems of analysis on the convergence (in one sense or another) of a given sequence of functions, it is useful to have the concept of a fundamental sequence (or Cauchy sequence). We can introduce a similar concept for each of the first three kinds of convergence of a sequence of random variables.
Let us say that a sequence \( {\left\{ {\xi }_{n}\right\} }_{n \geq 1} \) of random variables is fundamental in probability, or with probability 1, or in the mean of order \( p,0 < p < \infty \), if the corresponding one of the following properties is satisfied: \( \mathrm{P}\left\{ {\left| {{\xi }_{n} - {\xi }_{m}}\right| > \varepsilon }\right\} \rightarrow 0 \) as \( m, n \rightarrow \infty \)
for every \( \varepsilon > 0 \) ; the sequence \( {\left\{ {\xi }_{n}\left( \omega \right) \right\} }_{n \geq 1} \) is fundamental for almost all \( \omega \in \Omega \) ; the sequence \( {\left\{ {\xi }_{n}\left( \omega \right) \right\} }_{n \geq 1} \) is fundamental in \( {L}^{p} \), i.e., \( \mathrm{E}{\left| {\xi }_{n} - {\xi }_{m}\right| }^{p} \rightarrow 0 \) as \( n, m \rightarrow \infty \) .
## 3. Theorem 1.
(a) A necessary and sufficient condition that \( {\xi }_{n} \rightarrow \xi \) (P-a.s.) is that
\[
\mathrm{P}\left\{ {\mathop{\sup }\limits_{{k \geq n}}\left| {{\xi }_{k} - \xi }\right| \geq \varepsilon }\right\} \rightarrow 0,\;n \rightarrow \infty ,
\]
(5)
for every \( \varepsilon > 0 \) .
(b) The sequence \( {\left\{ {\xi }_{n}\right\} }_{n \geq 1} \) is fundamental with probability 1 if and only if
\[
\mathrm{P}\left\{ {\mathop{\sup }\limits_{{k \geq n, l \geq n}}\left| {{\xi }_{k} - {\xi }_{l}}\right| \geq \varepsilon }\right\} \rightarrow 0,\;n \rightarrow \infty ,
\]
(6)
for every \( \varepsilon > 0 \) ; or equivalently
\[
\mathrm{P}\left\{ {\mathop{\sup }\limits_{{k \geq 0}}\left| {{\xi }_{n + k} - {\xi }_{n}}\right| \geq \varepsilon }\right\} \rightarrow 0,\;n \rightarrow \infty .
\]
(7)
Proof. (a) Let \( {A}_{n}^{\varepsilon } = \left\{ {\omega : \left| {{\xi }_{n} - \xi }\right| \geq \varepsilon }\right\} ,{A}^{\varepsilon } = \lim \sup {A}_{n}^{\varepsilon } \equiv \mathop{\bigcap }\limits_{{n = 1}}^{\infty }\mathop{\bigcup }\limits_{{k \geq n}}{A}_{k}^{\varepsilon } \) . Then
\[
\left\{ {\omega : {\xi }_{n} \nrightarrow \xi }\right\} = \mathop{\bigcup }\limits_{{\varepsilon > 0}}{A}^{\varepsilon } = \mathop{\bigcup }\limits_{{m = 1}}^{\infty }{A}^{1/m}.
\]
But
\[
\mathrm{P}\left( {A}^{\varepsilon }\right) = \mathop{\lim }\limits_{n}\mathrm{P}\left( {\mathop{\bigcup }\limits_{{k \geq n}}{A}_{k}^{\varepsilon }}\right)
\]
hence (a) follows from the following chain of implications:
\[
\mathrm{P}\left\{ {\omega : {\xi }_{n} \nrightarrow \xi }\right\} = 0 \Leftrightarrow \mathrm{P}\left( {\mathop{\bigcup }\limits_{{\varepsilon > 0}}{A}^{\varepsilon }}\right) = 0 \Leftrightarrow \mathrm{P}\left( {\mathop{\bigcup }\limits_{{m = 1}}^{\infty }{A}^{1/m}}\right) = 0
\]
\[
\Leftrightarrow \mathrm{P}\left( {A}^{1/m}\right) = 0, m \geq 1 \Leftrightarrow \mathrm{P}\left( {A}^{\varepsilon }\right) = 0,\varepsilon > 0
\]
\[
\Leftrightarrow \mathrm{P}\left( {\mathop{\bigcup }\limits_{{k \geq n}}{A}_{k}^{\varepsilon }}\right) \rightarrow 0, n \rightarrow \infty ,\varepsilon > 0
\]
\[
\Leftrightarrow \mathrm{P}\left( {\mathop{\sup }\limits_{{k \geq n}}\left| {{\xi }_{k} - \xi }\right| \geq \varepsilon }\right) \rightarrow 0, n \rightarrow \infty ,\varepsilon > 0.
\]
(b) Let
\[
{B}_{k, l}^{\varepsilon } = \left\{ {\omega : \left| {{\xi }_{k} - {\xi }_{l}}\right| \geq \varepsilon }\right\} ,\;{B}^{\varepsilon } = \mathop{\bigcap }\limits_{{n = 1}}^{\infty }\mathop{\bigcup }\limits_{\substack{{k \geq n} \\ {l \geq n} }}{B}_{k, l}^{\varepsilon }.
\]
Then \( \left\{ {\omega : {\left\{ {\xi }_{n}\left( \omega \right) \right\} }_{n \geq 1}\text{ is not fundamental}}\right\} = \mathop{\bigcup }\limits_{{\varepsilon > 0}}{B}^{\varepsilon } \), and it can be shown as in (a) that \( \mathrm{P}\left\{ {\omega : {\left\{ {\xi }_{n}\left( \omega \right) \right\} }_{n \geq 1}\text{ is not fundamental}}\right\} = 0 \Leftrightarrow \left( 6\right) \) . The equivalence of (6) and (7) follows from the obvious inequalities
\[
\mathop{\sup }\limits_{{k \geq 0}}\left| {{\xi }_{n + k} - {\xi }_{n}}\right| \leq \mathop{\sup }\limits_{\substack{{k \geq 0} \\ {l \geq 0} }}\left| {{\xi }_{n + k} - {\xi }_{n + l}}\right| \leq 2\mathop{\sup }\limits_{{k \geq 0}}\left| {{\xi }_{n + k} - {\xi }_{n}}\right|
\]
This completes the proof of the theorem.
\( \square \)
Corollary. Since
\[
\mathrm{P}\left\{ {\mathop{\sup }\limits_{{k \geq n}}\left| {{\xi }_{k} - \xi }\right| \geq \varepsilon }\right\} = \mathrm{P}\left\{ {\mathop{\bigcup }\limits_{{k \geq n}}\left( {\left| {{\xi }_{k} - \xi }\right| \geq \varepsilon }\right) }\right\} \leq \mathop{\sum }\limits_{{k \geq n}}\mathrm{P}\left\{ {\left| {{\xi }_{k} - \xi }\right| \geq \varepsilon }\right\}
\]
a sufficient condition for \( {\xi }_{n}\overset{\text{ a. s. }}{ \rightarrow }\xi \) is that
\[
\mathop{\sum }\limits_{{k = 1}}^{\infty }\mathrm{P}\left\{ {\left| {{\xi }_{k} - \xi }\right| \geq \varepsilon }\right\} < \infty
\]
(8)
is satisfied for every \( \varepsilon > 0 \) .
It is appropriate to observe at this point that the reasoning used in obtaining (8) lets us establish the following simple but important result, which is essential in studying properties that are satisfied with probability 1.
Let \( {A}_{1},{A}_{2},\ldots \) be a sequence of events in \( \mathcal{F} \) . Let (see Table 2.1 in Sect. 1) \( \left\{ {{A}_{n}\text{ i.o. }}\right\} \) denote the event \( \lim \sup {A}_{n} \) that consists in the realization of infinitely many of \( {A}_{1},{A}_{2},\ldots \)
Borel-Cantelli Lemma.
(a) If \( \sum \mathrm{P}\left( {A}_{n}\right) < \infty \) then \( \mathrm{P}\left\{ {{A}_{n}\text{i.o.}}\right\} = 0 \) .
(b) If \( \sum \mathrm{P}\left( {A}_{n}\right) = \infty \) and \( {A}_{1},{A}_{2},\ldots \) are independent, then \( \mathrm{P}\left\{ {{A}_{n}\text{ i.o.}}\right\} = 1 \) .
Proof. (a) By definition \( \left\{ {{A}_{n}\text{i.o.}}\right\} = \lim \sup {A}_{n} = \mathop{\bigcap }\limits_{{n = 1}}^{\infty }\mathop{\bigcup }\limits_{{k \geq n}}{A}_{k} \) . Consequently
\[
\mathrm{P}\left\{ {{A}_{n}\text{ i.o. }}\right\} = \mathrm{P}\left( {\mathop{\bigcap }\limits_{{n = 1}}^{\infty }\mathop{\bigcup }\limits_{{k \geq n}}{A}_{k}}\right) = \lim \mathrm{P}\left( {\mathop{\bigcup }\limits_{{k \geq n}}{A}_{k}}\right) \leq \lim \mathop{\sum }\limits_{{k \geq n}}\mathrm{P}\left( {A}_{k}\right)
\]
and (a) follows.
(b) If \( {A}_{1},{A}_{2},\ldots \) are independent, so are \( {\bar{A}}_{1},{\bar{A}}_{2},\ldots \) . Hence for \( N \geq n \) we have
\[
\mathrm{P}\left( {\mathop{\bigcap }\limits_{{k = n}}^{N}{\bar{A}}_{k}}\right) = \mathop{\prod }\limits_{{k = n}}^{N}\mathrm{P}\left( {\bar{A}}_{k}\right)
\]
and it is then easy to deduce that
\[
\mathrm{P}\left( {\mathop{\bigcap }\limits_{{k = n}}^{\infty }{\bar{A}}_{k}}\right) = \mathop{\prod }\limits_{{k = n}}^{\infty }\mathrm{P}\left( {\bar{A}}_{k}\right)
\]
(9)
Since \( \log \left( {1 - x}\right) \leq - x,0 \leq x < 1 \) ,
\[
\log \mathop{\prod }\limits_{{k = n}}^{\infty }\left\lbrack {1 - \mathrm{P}\left( {A}_{k}\right) }\right\rbrack = \mathop{\sum }\limits_{{k = n}}^{\infty }\log \left\lbrack {1 - \mathrm{P}\left( {A}_{k}\right) }\right\rbrack \leq - \mathop{\sum }\limits_{{k = n}}^{\infty }\mathrm{P}\left( {A}_{k}\right) = - \infty .
\]
Consequently
\[
\mathrm{P}\left( {\mathop{\bigcap }\limits_{{k = n}}^{\infty }{\bar{A}}_{k}}\right) = 0
\]
for all \( n \), and therefore \( \mathrm{P}\left\{ {{A}_{n}\text{ i.o.}}\right\} = 1 \) .
This completes the proof of the lemma.
\( \square \)
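A Monte Carlo sketch of the lemma (our own illustration; the probabilities and the horizon \( N \) are our choices): with independent events and \( \mathrm{P}\left( {A}_{n}\right) = 1/{n}^{2} \) the series converges, so by (a) only finitely many events occur a.s.; with \( \mathrm{P}\left( {A}_{n}\right) = 1/n \) the series diverges, so by (b) occurrences keep accumulating. A finite simulation can of course only suggest this behavior.

```python
import random

def count_occurrences(prob, N, seed=0):
    # simulate independent events A_1, ..., A_N with P(A_n) = prob(n)
    rng = random.Random(seed)
    return sum(1 for n in range(1, N + 1) if rng.random() < prob(n))

N = 100_000
print("P(A_n) = 1/n^2:", count_occurrences(lambda n: 1.0 / n ** 2, N))  # convergent series: a handful of events
print("P(A_n) = 1/n  :", count_occurrences(lambda n: 1.0 / n, N))       # divergent series: the count keeps growing with N
```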
Corollary 1. If \( {A}_{n}^{\varepsilon } = \left\{ {\omega : \left| {{\xi }_{n} - \xi }\right| \geq \varepsilon }\right\} \) then (8) means that \( \mathop{\sum }\limits_{{n = 1}}^{\infty }\mathrm{P}\left( {A}_{n}^{\varepsilon }\right) < \infty \) ,
\( \varepsilon > 0 \), and then by the Borel-Cantelli lemma we have \( \mathrm{P}\left( {A}^{\varepsilon }\right) = 0,\varepsilon > 0 \), where \( {A}^{\varepsilon } = \lim \sup {A}_{n}^{\varepsilon }\left( { = \left\{ {{A}_{n}^{\varepsilon }\text{ i.o. }}\right\} }\right) \) . Therefore
\[
\mathop{\sum }\limits_{{k = 1}}^{\infty }\mathrm{P}\left\{ {\left| {{\xi }_{k} - \xi }\right| \geq \varepsilon }\right\} < \infty ,\varepsilon > 0 \Rightarrow \mathrm{P}\left( {A}^{\varepsilon }\right) = 0,\varepsilon > 0
\]
\[
\Leftrightarrow \mathrm{P}\left\{ {\omega : {\xi }_{n} \nrightarrow \x |
1057_(GTM217)Model Theory | Definition 1.3.1 |
Definition 1.3.1 Let \( \mathcal{M} = \left( {M,\ldots }\right) \) be an \( \mathcal{L} \) -structure. We say that \( X \subseteq \) \( {M}^{n} \) is definable if and only if there is an \( \mathcal{L} \) -formula \( \phi \left( {{v}_{1},\ldots ,{v}_{n},{w}_{1},\ldots ,{w}_{m}}\right) \) and \( \bar{b} \in {M}^{m} \) such that \( X = \left\{ {\bar{a} \in {M}^{n} : \mathcal{M} \vDash \phi \left( {\bar{a},\bar{b}}\right) }\right\} \) . We say that \( \phi \left( {\bar{v},\bar{b}}\right) \) defines \( X \) . We say that \( X \) is \( A \) -definable or definable over \( A \) if there is a formula \( \psi \left( {\bar{v},{w}_{1},\ldots ,{w}_{l}}\right) \) and \( \bar{b} \in {A}^{l} \) such that \( \psi \left( {\bar{v},\bar{b}}\right) \) defines \( X \) .
We give a number of examples using \( {\mathcal{L}}_{\mathrm{r}} \), the language of rings.
- Let \( \mathcal{M} = \left( {R,+,-,\cdot ,0,1}\right) \) be a ring. Let \( p\left( X\right) \in R\left\lbrack X\right\rbrack \) . Then, \( Y = \{ x \in R : p\left( x\right) = 0\} \) is definable. Suppose that \( p\left( X\right) = \mathop{\sum }\limits_{{i = 0}}^{n}{a}_{i}{X}^{i}. \) Let \( \phi \left( {v,{w}_{0},\ldots ,{w}_{n}}\right) \) be the formula
\[
{w}_{n} \cdot \underset{n - \text{ times }}{\underbrace{v\cdots v}} + \ldots + {w}_{1} \cdot v + {w}_{0} = 0
\]
(in the future, when no confusion arises, we will abbreviate such a formula as " \( {w}_{n}{v}^{n} + \ldots + {w}_{1}v + {w}_{0} = 0 \) "). Then, \( \phi \left( {v,{a}_{0},\ldots ,{a}_{n}}\right) \) defines \( Y \) . Indeed, \( Y \) is \( A \) -definable for any \( A \supseteq \left\{ {{a}_{0},\ldots ,{a}_{n}}\right\} \) .
- Let \( \mathcal{M} = \left( {\mathbb{R},+,-,\cdot ,0,1}\right) \) be the field of real numbers. Let \( \phi \left( {x, y}\right) \) be the formula
\[
\exists z\left( {z \neq 0 \land y = x + {z}^{2}}\right) .
\]
Because \( a < b \) if and only if \( \mathcal{M} \vDash \phi \left( {a, b}\right) \), the ordering is \( \varnothing \) -definable.
- Let \( \mathcal{M} = \left( {\mathbb{Z},+,-,\cdot ,0,1}\right) \) be the ring of integers. Let \( X = \{ \left( {m, n}\right) \in \) \( {\mathbb{Z}}^{2} : m < n\} \) . Then, \( X \) is definable (indeed \( \varnothing \) -definable). By Lagrange’s Theorem, every nonnegative integer is the sum of four squares. Thus, if we let \( \phi \left( {x, y}\right) \) be the formula
\[
\exists {z}_{1}\exists {z}_{2}\exists {z}_{3}\exists {z}_{4}\left( {{z}_{1} \neq 0 \land y = x + {z}_{1}^{2} + {z}_{2}^{2} + {z}_{3}^{2} + {z}_{4}^{2}}\right) ,
\]
then \( X = \left\{ {\left( {m, n}\right) \in {\mathbb{Z}}^{2} : \mathcal{M} \vDash \phi \left( {m, n}\right) }\right\} \) .
- Let \( F \) be a field and \( \mathcal{M} = \left( {F\left\lbrack X\right\rbrack ,+,-,\cdot ,0,1}\right) \) be the ring of polynomials over \( F \) . Then \( F \) is definable in \( \mathcal{M} \) . Indeed, \( F \) is the set of units of \( F\left\lbrack X\right\rbrack \) together with 0 and is defined by the formula \( x = 0 \vee \exists y\,{xy} = 1 \) .
- Let \( \mathcal{M} = \left( {\mathbb{C}\left( X\right) ,+,-,\cdot ,0,1}\right) \) be the field of complex rational functions in one variable. We claim that \( \mathbb{C} \) is defined in \( \mathbb{C}\left( X\right) \) by the formula
\[
\exists x\exists y\left( {{y}^{2} = v \land {x}^{3} + 1 = v}\right) .
\]
For any \( z \in \mathbb{C} \) we can find \( x \) and \( y \) such that \( {y}^{2} = {x}^{3} + 1 = z \) . Suppose that \( h \) is a nonconstant rational function and that there are nonconstant rational functions \( f \) and \( g \) such that \( h = {g}^{2} = {f}^{3} + 1 \) . Then \( t \mapsto \left( {f\left( t\right), g\left( t\right) }\right) \) is a nonconstant rational function from an open subset of \( \mathbb{C} \) into the curve \( E \) given by the equation \( {y}^{2} = {x}^{3} + 1 \) . But \( E \) is an elliptic curve and it is known (see for example [95]) that there are no such functions.
A similar argument shows that \( \mathbb{C} \) is the set of rational functions \( f \) such that \( f \) and \( f + 1 \) are both fourth powers. These ideas generalize to show that \( \mathbb{C} \) is definable in any finite algebraic extension of \( \mathbb{C}\left( X\right) \) .
- Let \( \mathcal{M} = \left( {{\mathbb{Q}}_{p},+,-,\cdot ,0,1}\right) \) be the field of \( p \) -adic numbers. Then \( {\mathbb{Z}}_{p} \), the ring of \( p \) -adic integers, is definable. Suppose \( p \neq 2 \) (we leave \( {\mathbb{Q}}_{2} \) for Exercise 1.4.13) and \( \phi \left( x\right) \) is the formula \( \exists y{y}^{2} = p{x}^{2} + 1 \) . We claim that \( \phi \left( x\right) \) defines \( {\mathbb{Z}}_{p} \) .
First, suppose that \( {y}^{2} = p{a}^{2} + 1 \) . Let \( v \) denote the \( p \) -adic valuation. Because \( v\left( {p{a}^{2}}\right) = {2v}\left( a\right) + 1 \), if \( v\left( a\right) < 0 \), then \( v\left( {p{a}^{2}}\right) \) is an odd negative integer and \( v\left( {y}^{2}\right) = v\left( {p{a}^{2} + 1}\right) = v\left( {p{a}^{2}}\right) \) . On the other hand, \( v\left( {y}^{2}\right) = {2v}\left( y\right) \) , an even integer. Thus, if \( \mathcal{M} \vDash \phi \left( a\right) \), then \( v\left( a\right) \geq 0 \) so \( a \in {\mathbb{Z}}_{p} \) .
On the other hand, suppose that \( a \in {\mathbb{Z}}_{p} \) . Let \( F\left( X\right) = {X}^{2} - \left( {p{a}^{2} + 1}\right) \) and let \( \bar{F} \) be the reduction of \( F{\;\operatorname{mod}\;p} \) . Because \( v\left( a\right) \geq 0 \), we have \( v\left( {p{a}^{2}}\right) > 0 \), so \( \bar{F}\left( X\right) = {X}^{2} - 1 \) and \( {\bar{F}}^{\prime } = {2X} \) . Thus, \( \bar{F}\left( 1\right) = 0 \) and \( {\bar{F}}^{\prime }\left( 1\right) \neq 0 \), so, by Hensel’s Lemma, there is \( b \in {\mathbb{Z}}_{p} \) such that \( F\left( b\right) = 0 \) . Hence \( \mathcal{M} \vDash \phi \left( a\right) \) .
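The Hensel's Lemma step in this argument can be carried out numerically: starting from the root \( X = 1 \) of \( \bar{F} \), Newton's iteration lifts it to a root of \( F\left( X\right) = {X}^{2} - \left( {p{a}^{2} + 1}\right) \) modulo arbitrarily high powers of \( p \) . The sketch below is our own illustration; the particular \( p \), \( a \), and precision are arbitrary choices.

```python
def hensel_sqrt(p, a, k):
    """Lift the root X = 1 of F(X) = X^2 - (p*a^2 + 1) from mod p to mod p^k.

    Assumes p is an odd prime and a is an ordinary integer standing in for a
    p-adic integer, so F(1) = -p*a^2 = 0 (mod p) and F'(1) = 2 is a unit mod p,
    exactly the hypotheses of Hensel's Lemma used above."""
    c = p * a * a + 1
    r = 1
    for j in range(2, k + 1):
        m = p ** j
        # Newton/Hensel step: r <- r - F(r) / F'(r)  (mod p^j)
        r = (r - (r * r - c) * pow(2 * r, -1, m)) % m
    return r

p, a, k = 7, 5, 8
y = hensel_sqrt(p, a, k)
print((y * y - (p * a * a + 1)) % p ** k)  # 0: y^2 = p*a^2 + 1 holds mod p^8
```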
- Let \( \mathcal{M} = \left( {\mathbb{Q},+,-,\cdot ,0,1}\right) \) be the field of rational numbers. Let \( \phi \left( {x, y, z}\right) \) be the formula
\[
\exists a\exists b\exists c{xy}{z}^{2} + 2 = {a}^{2} + x{y}^{2} - y{c}^{2}
\]
and let \( \psi \left( x\right) \) be the formula
\[
\forall y\forall z\left( {\left\lbrack {\phi \left( {y, z,0}\right) \land \left( {\forall w\left( {\phi \left( {y, z, w}\right) \rightarrow \phi \left( {y, z, w + 1}\right) }\right) }\right) }\right\rbrack \rightarrow \phi \left( {y, z, x}\right) }\right) .
\]
A remarkable result of Julia Robinson (see [34]) shows that \( \psi \left( x\right) \) defines the integers in \( \mathbb{Q} \) .
- Consider the natural numbers \( \mathbb{N} \) as an \( \mathcal{L} = \{ + , \cdot ,0,1\} \) structure. The definable sets are quite complex. For example, there is an \( \mathcal{L} \) -formula \( T\left( {e, x, s}\right) \) such that \( \mathbb{N} \vDash T\left( {e, x, s}\right) \) if and only if the Turing machine with program coded by \( e \) halts on input \( x \) in at most \( s \) steps (see, for example, [51]). Thus, the Turing machine with program \( e \) halts on input \( x \) if and only if \( \mathbb{N} \vDash \exists {sT}\left( {e, x, s}\right) \), so the set of halting computations is definable. It is well known that this set is not computable (see, for example, [94]). This leads to an interesting conclusion.
Proposition 1.3.2 The full \( \mathcal{L} \) -theory of the natural numbers is undecidable (i.e., there is no algorithm that when given an \( \mathcal{L} \) -sentence \( \psi \) as input will always halt answering "yes" if \( \mathbb{N} \vDash \psi \) and "no" if \( \mathbb{N} \vDash \neg \psi \) ).
Proof For each \( e \) and \( x \), let \( {\phi }_{e, x} \) be the \( \mathcal{L} \) -sentence
\[
\exists {sT}\left( {\underset{e - \text{ times }}{\underbrace{1 + \ldots + 1}},\underset{x - \text{ times }}{\underbrace{1 + \ldots + 1}}, s}\right) .
\]
If there were such an algorithm we could decide whether the program coded by \( e \) halts on input \( x \) by asking whether \( \mathbb{N} \vDash {\phi }_{e, x} \) .
Recursively enumerable sets have simple mathematical definitions. By the Matijasevič-Robinson-Davis-Putnam solution to Hilbert's 10th Problem (see [24]),
for any recursively enumerable set \( A \subseteq {\mathbb{N}}^{n} \) there is a polynomial \( p\left( {{X}_{1},\ldots ,{X}_{n},{Y}_{1},\ldots ,{Y}_{m}}\right) \in \mathbb{Z}\left\lbrack {\bar{X},\bar{Y}}\right\rbrack \) such that
\[
A = \left\{ {\bar{x} \in {\mathbb{N}}^{n} : \mathbb{N} \vDash \exists {y}_{1}\ldots \exists {y}_{m}p\left( {\bar{x},\bar{y}}\right) = 0}\right\} .
\]
The following example will be useful later.
Lemma 1.3.3 Let \( {\mathcal{L}}_{\mathrm{or}} \) be the language of ordered rings and \( \left( {\mathbb{R}, + , - , \cdot , < ,0,1}\right) \) be the ordered field of real numbers. Suppose that \( X \subseteq {\mathbb{R}}^{n} \) is \( A \) -definable. Then the topological closure of \( X \) is also \( A \) -definable.
Proof Let \( \phi \left( {{v}_{1},\ldots ,{v}_{n},\bar{a}}\right) \) define \( X \) . Let \( \psi \left( {{v}_{1},\ldots ,{v}_{n},\bar{w}}\right) \) be the formula
\[
\forall \epsilon \left\lbrack {\epsilon > 0 \rightarrow \exists {y}_{1},\ldots ,{y}_{n}\left( {\phi \left( {\bar{y},\bar{w}}\right) \land \mathop{\sum }\limits_{{i = 1}}^{n}{\left( {v}_{i} - {y}_{i}\right) }^{2} < \epsilon }\right) }\right\rbrack .
\]
Then, \( \bar{b} \) is in the closure of \( X \) if and only if \( \mathcal{M} \vDash \psi \left( {\bar{b},\bar{a}}\right) \) .
We can give a more concrete characterization of the definable sets.
Proposition 1.3.4 Let \( \mathcal{M} \) be an \( \mathcal{L} \) -structure. Suppose that \( {D}_{n} \) is a collection of subsets of \( {M}^{n} \) for all \( n \geq 1 \) and \( \mathcal{D} = \left( {{D}_{n} : n \geq 1}\right) \) is the smallest collection such that:
i) \( {M}^{n} \in {D}_{n} \) ;
ii) for all \( n \) -ary function symbols \( f \) of \( \mathcal{L} \), the graph of \( {f}^{\mathcal{M}} \) is in \( {D}_{n + 1} \) ;
iii) for all \( n \) -ary relation symbols \( R \) of \( \mathcal{L},{R}^{\mathcal{M}} \i |
106_106_The Cantor function | Definition 4.2 |
Definition 4.2. An algebra is relatively free if it is a free algebra of some variety.
Theorem 4.3. For any type \( T \), and any set \( L \) of laws, let \( V \) be the variety of \( T \) -algebras defined by \( L \) . For any set \( X \), there exists a free \( T \) -algebra of \( V \) on \( X \) .
Proof: Let \( \left( {F,\rho }\right) \) be the free \( T \) -algebra on \( X \) . A congruence relation on \( F \) is defined by putting \( u \sim v \) (where \( u, v \in F \) ) if \( \varphi \left( u\right) = \varphi \left( v\right) \) for every homomorphism \( \varphi \) of \( F \) into an algebra in \( V \) . Clearly \( \sim \) is an equivalence relation on \( F \) . If now \( t \in {T}_{k} \) and \( {u}_{i} \sim {v}_{i}\left( {i = 1,\ldots, k}\right) \), then for every such homomorphism \( \varphi ,\varphi \left( {u}_{i}\right) = \varphi \left( {v}_{i}\right) \), and so
\[
\varphi \left( {t\left( {{u}_{1},\ldots ,{u}_{k}}\right) }\right) = t\left( {\varphi \left( {u}_{1}\right) ,\ldots ,\varphi \left( {u}_{k}\right) }\right) = t\left( {\varphi \left( {v}_{1}\right) ,\ldots ,\varphi \left( {v}_{k}\right) }\right) = \varphi \left( {t\left( {{v}_{1},\ldots ,{v}_{k}}\right) }\right) ,
\]
verifying that \( \sim \) is a congruence relation.
We define \( R \) to be the set of congruence classes of elements of \( F \) with respect to this congruence relation. Denoting the congruence class containing \( u \) by \( \bar{u} \), we define the action of \( t \in {T}_{k} \) on \( R \) by putting \( t\left( {{\bar{u}}_{1},\ldots ,{\bar{u}}_{k}}\right) = \overline{t\left( {{u}_{1},\ldots ,{u}_{k}}\right) } \) . This definition is independent of the choice of representatives \( {u}_{1},\ldots ,{u}_{k} \) of the classes \( {\bar{u}}_{1},\ldots ,{\bar{u}}_{k} \), and makes \( R \) a \( T \) -algebra. Also, the map \( u \rightarrow \bar{u} \) is clearly a homomorphism \( \eta : F \rightarrow R \) . Finally, we define \( \sigma : X \rightarrow R \) by \( \sigma \left( x\right) = \overline{\rho \left( x\right) } \) .
We now prove that \( \left( {R,\sigma }\right) \) is relatively free on \( X \) . Let \( A \) be any algebra in \( V \), and let \( \tau : X \rightarrow A \) be any function from \( X \) into \( A \) . Because \( \left( {F,\rho }\right) \) is free, there exists a unique homomorphism \( \psi : F \rightarrow A \) such that \( {\psi \rho } = \tau \) .
![aa35aa61-3413-461b-96f9-006ca0282e6b_18_0.jpg](images/aa35aa61-3413-461b-96f9-006ca0282e6b_18_0.jpg)
For \( \bar{u} \in R \), we define \( \varphi \left( \bar{u}\right) = \psi \left( u\right) \) . This is independent of the choice of representative \( u \) of the element \( \bar{u} \), since if \( \bar{u} = \bar{v} \), then \( \psi \left( u\right) = \psi \left( v\right) \) . The map \( \varphi : R \rightarrow A \) is clearly a homomorphism, and \( {\varphi \sigma } = {\varphi \eta \rho } = {\psi \rho } = \tau \) . If \( {\varphi }^{\prime } : R \rightarrow A \) is another homomorphism such that \( {\varphi }^{\prime }\sigma = \tau \), then \( {\varphi }^{\prime }{\eta \rho } = \tau \) and therefore \( {\varphi }^{\prime }\eta = \psi \) . Consequently for each element \( \bar{u} \in R \) we have
\[
{\varphi }^{\prime }\left( \bar{u}\right) = {\varphi }^{\prime }\eta \left( u\right) = \psi \left( u\right) = \varphi \left( \bar{u}\right)
\]
and hence \( {\varphi }^{\prime } = \varphi \) .
When considering only the algebras of a given variety \( V \), we may redefine variables and words accordingly. Thus we define a \( V \) -variable as an element of the free generating set of a free algebra of \( V \), and a \( V \) -word in the \( V \) -variables \( {x}_{1},\ldots ,{x}_{n} \) as an element of the free algebra of \( V \) on the free generators \( \left\{ {{x}_{1},\ldots ,{x}_{n}}\right\} \)
## Examples
4.4. \( T \) consists of a single binary operation which we shall write as juxtaposition. Let \( V \) be the variety of associative \( T \) -algebras. Then all products in the free \( T \) -algebra obtained by any bracketing of \( {x}_{1},\ldots ,{x}_{n} \) , taken in that order, are congruent under the congruence relation used in our construction of the relatively free algebra, and correspond to the one word \( {x}_{1}{x}_{2}\cdots {x}_{n} \) of \( V \) . We observe that in this example, all elements of the absolutely free algebra \( F \), which map to a given element \( {x}_{1}{x}_{2}\cdots {x}_{n} \) of the relatively free algebra, come from the same layer \( {F}_{n - 1} \) of \( F \) .
4.5. \( T \) consists of a 0-ary, a 1-ary and a 2-ary operation. \( V \) is the variety of abelian groups, defined by the laws given in Example 3.5 together with the law \( \left( {{x}_{1}{x}_{2},{x}_{2}{x}_{1}}\right) \) . In this case, the relatively free algebra on \( \left\{ {{x}_{1},\ldots ,{x}_{n}}\right\} \) is the set of all \( {x}_{1}^{{r}_{1}}{x}_{2}^{{r}_{2}}\cdots {x}_{n}^{{r}_{n}} \) (or equivalently the set of all \( n \) -tuples \( \left( {{r}_{1},\ldots ,{r}_{n}}\right) \) ) with \( {r}_{i} \in \mathbb{Z} \) . Here the layer property of Example 4.4 does not hold, because, for example, we have the identity \( e \in {F}_{0},{x}_{1}^{-1} \in {F}_{1},{x}_{1}^{-1} * {x}_{1} \in {F}_{2} \) and yet \( \bar{e} = \overline{{x}_{1}^{-1} * {x}_{1}} \) .
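A minimal computational sketch of Example 4.5 (ours, not from the text): representing \( {x}_{1}^{{r}_{1}}{x}_{2}^{{r}_{2}}\cdots {x}_{n}^{{r}_{n}} \) by the exponent vector \( \left( {{r}_{1},\ldots ,{r}_{n}}\right) \), the binary operation becomes componentwise addition, the 0-ary operation is the zero vector, and the 1-ary operation is negation; the collapse \( \bar{e} = \overline{{x}_{1}^{-1} * {x}_{1}} \) is then immediate.

```python
n = 3  # number of free generators x_1, ..., x_n

e = (0,) * n  # the 0-ary operation: the identity element

def inv(u):
    # the 1-ary operation: inversion, i.e. negate every exponent
    return tuple(-a for a in u)

def op(u, v):
    # the 2-ary operation: x_1^{r_1+s_1} ... x_n^{r_n+s_n}
    return tuple(a + b for a, b in zip(u, v))

x1 = tuple(1 if j == 0 else 0 for j in range(n))  # the generator x_1 as an exponent vector

print(op(inv(x1), x1) == e)  # True: x_1^{-1} * x_1 and e have the same image in the relatively free algebra
```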
## Exercises
4.6. \( K \) is a field. Show that vector spaces over \( K \) form a variety \( V \) of algebras, and that every vector space over \( K \) is a free algebra of \( V \) .
4.7. \( R \) is a commutative ring with 1 and \( V \) is the variety of commutative rings \( S \) which contain \( R \) as a subring and in which \( {1}_{R} \) is a multiplicative identity of \( S \) . Show that the free algebra of \( V \) on the set \( X \) of variables is the polynomial ring over \( R \) in the elements of \( X \) .
## Chapter II
## Propositional Calculus
## §1 Introduction
Mathematical logic is the study of logic as a mathematical theory. Following the usual procedure of applied mathematics, we construct a mathematical model of the system to be studied, and then conduct what is essentially a pure mathematical investigation of the properties of our model. Since this book is intended for mathematicians, the system we propose to study is not general logic but the logic used in mathematics. By this restriction, we achieve considerable simplification, because we do not have to worry about precise meanings of words-in mathematics, words have precisely defined meanings. Furthermore, we are free of reasoning based on things such as emotive argument, which must be accounted for in any theory of general logic. Finally, the nature of the real world need not concern us, since the world we shall study is the purely conceptual one of pure mathematics.
In any formal study of logic, the language and system of reasoning needed to carry out the investigation is called the meta-language or meta-logic. As we are constructing a mathematical model of logic, our meta-language is mathematics, and so all our existing knowledge of mathematics is available for possible application to our model. We shall make specific use of informal set theory (including cardinal numbers and Zorn's lemma) and of the universal algebra developed in Chapter I.
For the purpose of our study, it suffices to describe mathematics as consisting of assertions that if certain statements are true then so are certain other statements, and of arguments justifying these assertions. Hence a model of mathematical reasoning must include a set of objects which we call statements or propositions, some concept of truth, and some concept of a proof. Once a model is constructed, the main subject of investigation is the relationship between truth and proof. We shall begin by constructing a model of the simpler parts of mathematical reasoning. This model is called the Propositional Calculus. Later, we shall construct a more refined model (known as the First-Order Predicate Calculus), copying more complicated parts of the reasoning used in mathematics.
## §2 Algebras of Propositions
The Propositional Calculus considers ways in which simple statements may be combined to form more complex statements, and studies how the truth or falsity of complex statements is related to that of their component statements. Some of the ways in which statements are combined in mathematics are as follows. We often use "and" to combine statements, and we write \( p \land q \) for the statement " \( p \) and \( q \) ", which is regarded as true if and only if both the statements \( p, q \) are true. We frequently assert that (at least) one of two possibilities is true, and we write \( p \vee q \) for the statement " \( p \) or \( q \) ", which we consider to be true if at least one of \( p, q \) is true and false if both \( p \) and \( q \) are false. We often assert that some statement is false, and we write \( \sim p \) (read "not \( p \) ") for the statement " \( p \) is false", which is regarded as true if and only if \( p \) is false. Another common way of linking two statements is through an assertion "if \( p \) is true, then so is \( q \) ". For this we write " \( p \Rightarrow q \) " (read " \( p \) implies \( q \) "), which, in mathematical usage, is true unless \( q \) is false and \( p \) is true.
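A short sketch (our own, not from the text) tabulating the truth values just described, with Python's True/False standing in for the two truth values; only the reading of \( \Rightarrow \) as "false exactly when \( p \) is true and \( q \) is false" is taken from the paragraph above.

```python
from itertools import product

def implies(p, q):
    # "p => q" is false exactly when p is true and q is false
    return (not p) or q

header = ("p", "q", "p and q", "p or q", "not p", "p => q")
print(("{:<9}" * len(header)).format(*header))
for p, q in product([True, False], repeat=2):
    row = (p, q, p and q, p or q, not p, implies(p, q))
    print(("{:<9}" * len(row)).format(*map(str, row)))
```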
We want our simple model to imitate the above constructions, so we want our set of propositions to be an algebra with respect to the four operations given above. This could be done by taking the free algebra with these operations, but we know that in ordinary usage, the four operations are not independent. Thus a simpler system is suggested, in which we choose some basic operations which will enable us to define all the above operations. This may be done in many ways, some of which are explored in exercises at the end of Chapter III, where they may be studied more thoroughly. We choose a way which is perhaps not the natural one, but which has advantages in that it simplifies the development of the t |
1065_(GTM224)Metric Structures in Differential Geometry | Definition 4.1 |
Definition 4.1. A curve \( c \) in \( M \) is said to be a geodesic if \( {\nabla }_{D}\dot{c} = 0 \) .
We will see below that the tangent fields of geodesics are integral curves of a certain vector field on \( {TM} \), and therefore enjoy the usual existence, uniqueness, and smooth dependence on initial conditions properties. These properties can also be shown to hold locally with the help of a chart \( \left( {U, x}\right) \) : Let \( {X}_{i} = \partial /\partial {x}^{i} \) , and define the Christoffel symbols to be the functions \( {\Gamma }_{ij}^{k} \in \mathcal{F}U \) given by
\[
{\Gamma }_{ij}^{k} = d{x}^{k}\left( {{\nabla }_{{X}_{i}}{X}_{j}}\right) ,\;1 \leq i, j, k \leq n.
\]
Since \( \dot{c} = {c}_{ * }D = \mathop{\sum }\limits_{i}{c}_{ * }D\left( {x}^{i}\right) {X}_{i} \circ c = \mathop{\sum }\limits_{i}D\left( {{x}^{i} \circ c}\right) {X}_{i} \circ c = \mathop{\sum }\limits_{i}{\left( {x}^{i} \circ c\right) }^{\prime }{X}_{i} \circ c \) ,
\[
{\nabla }_{D}\dot{c} = \mathop{\sum }\limits_{i}D{\left( {x}^{i} \circ c\right) }^{\prime }{X}_{i} \circ c + {\left( {x}^{i} \circ c\right) }^{\prime }{\nabla }_{D}\left( {{X}_{i} \circ c}\right)
\]
\[
= \mathop{\sum }\limits_{i}{\left( {x}^{i} \circ c\right) }^{\prime \prime }{X}_{i} \circ c + {\left( {x}^{i} \circ c\right) }^{\prime }{\nabla }_{\dot{c}}{X}_{i}
\]
\[
= \mathop{\sum }\limits_{i}{\left( {x}^{i} \circ c\right) }^{\prime \prime }{X}_{i} \circ c + {\left( {x}^{i} \circ c\right) }^{\prime }\mathop{\sum }\limits_{j}{\left( {x}^{j} \circ c\right) }^{\prime }\left( {{\nabla }_{{X}_{j}}{X}_{i}}\right) \circ c
\]
\[
= \mathop{\sum }\limits_{i}{\left( {x}^{i} \circ c\right) }^{\prime \prime }{X}_{i} \circ c + {\left( {x}^{i} \circ c\right) }^{\prime }\mathop{\sum }\limits_{{j, k}}{\left( {x}^{j} \circ c\right) }^{\prime }\left( {{\Gamma }_{ji}^{k} \circ c}\right) {X}_{k} \circ c.
\]
Thus, \( c \) is a geodesic iff
\[
{\left( {x}^{k} \circ c\right) }^{\prime \prime } + \mathop{\sum }\limits_{{i, j}}{\left( {x}^{i} \circ c\right) }^{\prime }{\left( {x}^{j} \circ c\right) }^{\prime }{\Gamma }_{ij}^{k} \circ c = 0,\;1 \leq k \leq n.
\]
Existence and uniqueness of geodesics for initial conditions in \( c \) and \( \dot{c} \) are then guaranteed by classical theorems on differential equations. A connection is said to be complete if its geodesics are defined on all of \( \mathbb{R} \) .
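The geodesic system above can be integrated numerically once the Christoffel symbols are given. The sketch below is our own illustration: it uses an explicit Euler step (a serious implementation would use a higher-order scheme), and the Christoffel symbols entered for \( {S}^{2} \) in coordinates \( \left( {\theta ,\varphi }\right) \) are those of the Levi-Civita connection of the round metric, an assumption not derived in this section. The computed geodesic starting on the equator stays on it, as Example (iv) below predicts.

```python
import math

def integrate_geodesic(Gamma, x0, v0, h=1e-3, steps=2000):
    """Integrate (x^k)'' + sum_{i,j} (x^i)'(x^j)' Gamma^k_ij(x) = 0 with explicit Euler steps.

    Gamma(x) must return a nested list G with G[k][i][j] = Gamma^k_ij at the point x."""
    x, v = list(x0), list(v0)
    n = len(x)
    for _ in range(steps):
        G = Gamma(x)
        a = [-sum(G[k][i][j] * v[i] * v[j] for i in range(n) for j in range(n))
             for k in range(n)]
        x = [x[k] + h * v[k] for k in range(n)]
        v = [v[k] + h * a[k] for k in range(n)]
    return x, v

def sphere_Gamma(x):
    # Christoffel symbols of the round metric on S^2 in coordinates (theta, phi);
    # this particular connection is an assumption here, not taken from this section.
    theta, _ = x
    G = [[[0.0, 0.0], [0.0, 0.0]], [[0.0, 0.0], [0.0, 0.0]]]
    G[0][1][1] = -math.sin(theta) * math.cos(theta)               # Gamma^theta_{phi phi}
    G[1][0][1] = G[1][1][0] = math.cos(theta) / math.sin(theta)   # Gamma^phi_{theta phi}
    return G

# start on the equator (theta = pi/2) heading in the phi-direction:
# the resulting geodesic is the equator itself, a great circle
x, v = integrate_geodesic(sphere_Gamma, x0=(math.pi / 2, 0.0), v0=(0.0, 1.0))
print(x)  # theta stays (approximately) pi/2 while phi has advanced by about steps*h
```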
EXAMPLES AND REMARKS 4.1. (i) If \( M = {\mathbb{R}}^{n} \) with the standard flat connection, then \( {\Gamma }_{ij}^{k} = 0 \), and the geodesics are the straight lines \( t \mapsto {at} + b, a \) , \( b \in M \) . The connection is complete.
(ii) More generally, let \( M \) be a parallelizable manifold, \( {X}_{1},\ldots ,{X}_{k} \in \mathfrak{X}M \) a parallelization of \( M \) . Any \( X \in \mathfrak{X}M \) can be written \( X = \mathop{\sum }\limits_{i}{X}^{i}{X}_{i},{X}^{i} \in \mathcal{F}M \) . The formula
\[
{\nabla }_{u}X = \mathop{\sum }\limits_{i}u\left( {X}^{i}\right) {X}_{i}\left( {\pi \left( u\right) }\right) ,\;u \in {TM},
\]
defines a connection on \( M \) such that \( X \in \mathfrak{X}M \) is a parallel section of \( {\tau M} \) iff \( X \) is a constant linear combination \( \sum {a}_{i}{X}_{i},{a}_{i} \in \mathbb{R} \) . This can be seen by checking axioms (1)-(4) of Theorem 2.1. Alternatively, one can define a horizontal distribution \( \mathcal{H} \) whose value at \( u = \sum {a}_{i}{X}_{i}\left( {\pi \left( u\right) }\right) \) is \( {\mathcal{H}}_{u} \mathrel{\text{:=}} {X}_{*\pi \left( u\right) }{M}_{\pi \left( u\right) } \), where \( X \mathrel{\text{:=}} \sum {a}_{i}{X}_{i} \in \mathfrak{X}M \) . Since \( \pi \circ X = {1}_{M},{\pi }_{ * }{\mathcal{H}}_{u} = {\pi }_{ * }{X}_{ * }{M}_{\pi \left( u\right) } = {M}_{\pi \left( u\right) } \) . Furthermore, \( {\mathcal{H}}_{au} = {\left( aX\right) }_{ * }{M}_{\pi \left( u\right) } = {\left( {\mu }_{a} \circ X\right) }_{ * }{M}_{\pi \left( u\right) } = {\mu }_{a * }{\mathcal{H}}_{u} \) for \( a \in \mathbb{R} \), so that \( \mathcal{H} \) is a connection.
Given \( p \in M, u = \sum {a}_{i}{X}_{i}\left( p\right) \in {M}_{p} \), the geodesic \( c \) of \( M \) with \( \dot{c}\left( 0\right) = u \) is the integral curve of the vector field \( X \mathrel{\text{:=}} \sum {a}_{i}{X}_{i} \) passing through \( p \) when \( t = 0 \) : Notice that \( {\nabla }_{X}X = 0 \), so that if \( \gamma \) is an integral curve of \( X \), then
\[
{\nabla }_{D}\dot{\gamma } = {\nabla }_{D}\left( {X \circ \gamma }\right) = {\nabla }_{\dot{\gamma }}X = \left( {{\nabla }_{X}X}\right) \circ \gamma = 0.
\]
(iii) In case \( M \) is a Lie group \( G \), the connection from (ii) obtained by choosing as parallelization a basis of the Lie algebra \( \mathfrak{g} \) is called the left-invariant connection of \( G \) . It is independent of the chosen basis, because for \( u \in {TG} \) , \( {\mathcal{H}}_{u} = {X}_{*\pi \left( u\right) }{G}_{\pi \left( u\right) } \), where \( X \) is the element of \( \mathfrak{g} \) with \( {X}_{\pi \left( u\right) } = u \) .
The geodesics \( c \) with \( c\left( 0\right) = e \) coincide with the Lie group homomorphisms \( c : \mathbb{R} \rightarrow G \) (and in particular the connection is complete): If \( c : I \rightarrow G \) is an integral curve of \( X \) with \( c\left( 0\right) = e \), let \( s, t \) be numbers such that \( s, t, s + t \in I \) . Then the curve \( \gamma \) defined by \( \gamma \left( t\right) = c\left( s\right) c\left( t\right) \) is an integral curve of \( X \), since
\[
\dot{\gamma }\left( t\right) = {L}_{c\left( s\right) * }X\left( {c\left( t\right) }\right) = X\left( {{L}_{c\left( s\right) }c\left( t\right) }\right) = X\left( {\gamma \left( t\right) }\right) .
\]
But \( t \mapsto c\left( {s + t}\right) \) is also an integral curve of \( X \) which coincides with \( \gamma \) when \( t = 0 \), so that \( c\left( {s + t}\right) = c\left( s\right) c\left( t\right) \) . A similar argument shows that if \( c : I \rightarrow G \) is a maximal integral curve of \( X \), then \( s + t \in I \) whenever \( s, t \in I \) . In particular, \( I = \mathbb{R} \) .
Conversely, suppose \( c : \mathbb{R} \rightarrow G \) is a homomorphism, and let \( X \) be the left-invariant vector field with \( X\left( e\right) = \dot{c}\left( 0\right) \) . If \( {t}_{0} \in \mathbb{R} \), and \( \gamma \) is the curve given by \( \gamma \left( t\right) = c\left( {{t}_{0} + t}\right) = c\left( {t}_{0}\right) c\left( t\right) \), then as above,
\[
\dot{c}\left( {t}_{0}\right) = \dot{\gamma }\left( 0\right) = {L}_{c\left( {t}_{0}\right) * }\dot{c}\left( 0\right) = {L}_{c\left( {t}_{0}\right) * }X\left( e\right) = X\left( {c\left( {t}_{0}\right) }\right) ,
\]
so that \( c \) is an integral curve of \( X \) .
The curvature tensor of a left-invariant connection on a Lie group is zero, since the connection is flat.
(iv) (Geodesics on \( {S}^{n} \) ) Let \( p, q \in {S}^{n}, p \bot q \) . Then the great circle \( t \mapsto \) \( c\left( t\right) = \left( {\cos t}\right) p + \left( {\sin t}\right) q \) is a geodesic: In fact, \( \dot{c}\left( t\right) = {\mathcal{J}}_{c\left( t\right) }{c}^{\prime }\left( t\right) = - \left( {\sin t}\right) {\mathcal{J}}_{c\left( t\right) }p + \) \( \left( {\cos t}\right) {\mathcal{J}}_{c\left( t\right) }q \), and \( t \mapsto {\mathcal{J}}_{c\left( t\right) }p \) is parallel along \( c \) for the connection \( \widetilde{\nabla } \) on \( {\mathbb{R}}^{n + 1} \) . Thus,
\[
{\widetilde{\nabla }}_{D\left( t\right) }\dot{c} = - \left( {\cos t}\right) {\mathcal{J}}_{c\left( t\right) }p - \left( {\sin t}\right) {\mathcal{J}}_{c\left( t\right) }q = - P \circ c\left( t\right) ,
\]
where \( P \) is the position vector field, and \( {\imath }_{ * }{\nabla }_{D}\dot{c} = {\left( {\widetilde{\nabla }}_{D}\dot{c}\right) }^{ \bot } = 0 \) . Since \( {\imath }_{*p}{S}_{p}^{n} = \) \( {\mathcal{J}}_{p}\left( {p}^{ \bot }\right) \), this describes, up to reparametrization, all geodesics passing through \( p \) at \( t = 0 \), cf. Exercise 96.
Definition 4.2. A vector field \( S \) on \( {TM} \) is called a spray on \( M \) if
(1) \( {\pi }_{ * } \circ S = {1}_{TM} \), and
(2) \( S \circ {\mu }_{a} = a{\mu }_{a * }S \), for \( a \in \mathbb{R} \) .
By Theorem 7.5 in Chapter 1, the maximal flow \( \Phi : W \rightarrow {TM} \) of \( S \) is defined on an open set \( W \subset \mathbb{R} \times {TM} \) containing \( 0 \times {TM} \), and if \( {\Phi }_{v} : {I}_{v} \rightarrow {TM} \) denotes the maximal integral curve of \( S \) with \( {\Phi }_{v}\left( 0\right) = v \), then \( {\Phi }_{v}\left( t\right) = \Phi \left( {t, v}\right) \) .
Let \( \widetilde{TM} \) denote the open subset of \( {TM} \) consisting of all \( v \) such that \( 1 \in {I}_{v} \) , and define the exponential map \( \exp : \widetilde{TM} \rightarrow M \) of the spray \( S \) by
(4.1)
\[
\exp \left( v\right) \mathrel{\text{:=}} \pi \circ \Phi \left( {1, v}\right)
\]
For \( p \in M,{\exp }_{p} \) will denote the restriction of exp to \( \widetilde{{M}_{p}} \mathrel{\text{:=}} \widetilde{TM} \cap {M}_{p} \) .
THEOREM 4.1. Let \( S \) be a spray on \( {M}^{n} \) with exponential map \( \exp : \widetilde{TM} \rightarrow \) \( M \) . Then for any \( p \in M \) ,
(1) \( {\widetilde{M}}_{p} \) is a star-shaped neighborhood of \( 0 \in {M}_{p} \) : If \( v \in {\widetilde{M}}_{p} \), then \( {tv} \in {\widetilde{M}}_{p} \) for \( 0 \leq t \leq 1 \), and \( \exp \left( {tv}\right) = \pi \circ {\Phi }_{v}\left( t\right) \) .
(2) \( {\exp }_{p} \) has rank \( n \) at \( 0 \in {M}_{p} \), and therefore maps a neighborhood of 0 in \( {M}_{p} \) diffeomorphically onto a neighborhood of \( p \) in \( M \) .
(3) \( \left( {\pi ,\exp }\right) : \widetilde{TM} \rightarrow M \times M \) has rank \( {2n} \) at \( 0 \in {M}_{p} \), and therefore maps a neighborhood of 0 in TM diffeomorphically onto a neighborhood of \( \left( {p, p}\right) \) in \( M \times M \) . If \( s : M \rightarrow {TM} \) denotes th |
1329_[肖梁] Abstract Algebra (2022F) | Definition 22.3.1 |
Definition 22.3.1. Let \( I \) be a filtered poset. Let \( {\left( {A}_{i}\right) }_{i \in I} \) be an inverse system of sets. We provide each \( {A}_{i} \) with discrete topology.
We may define a topology on the inverse limit \( A = \mathop{\lim }\limits_{\substack{ \leftarrow \\ {i \in I} }}{A}_{i} \) by
(1) An open subset is the union of basic opens: for each \( i \in I \) and each \( {a}_{i} \in {A}_{i} \), we require \( {\pi }_{i}^{-1}\left( {a}_{i}\right) \subseteq A \) to be open, where \( {\pi }_{i} : A \rightarrow {A}_{i} \) is the projection.
(2) Embed ![97650b70-8b1b-4cc6-91b2-de9112f1d8bc_145_0.jpg](images/97650b70-8b1b-4cc6-91b2-de9112f1d8bc_145_0.jpg)
so that the right hand side is endowed with the product topology and the inverse limit is defined by the subspace topology.
Lemma 22.3.2. The two definitions above of the topology on \( \mathop{\lim }\limits_{\substack{ \leftarrow \\ {i \in I} }}{A}_{i} \) are equivalent.
Proof. Clearly, an open subset in the sense of (1) is open in the sense of (2). Conversely, we note that a basic open subset for the topology defined in (2) takes the form \( {\pi }_{{i}_{1}}^{-1}\left( {a}_{{i}_{1}}\right) \cap \cdots \cap {\pi }_{{i}_{n}}^{-1}\left( {a}_{{i}_{n}}\right) \), where \( {i}_{1},\ldots ,{i}_{n} \in I \) and \( {a}_{{i}_{j}} \in {A}_{{i}_{j}} \) . We need to show that such a set is open in the sense of (1).
By induction, it is enough to show that for \( i, j \in I,{a}_{i} \in {A}_{i} \), and \( {a}_{j} \in {A}_{j} \), the intersection \( {\pi }_{i}^{-1}\left( {a}_{i}\right) \cap {\pi }_{j}^{-1}\left( {a}_{j}\right) \) is open in the topology defined in (1). As \( I \) is filtered, there exists \( k \in I \) such that \( i < k \) and \( j < k \) .
\[
{A}_{i}\overset{{\varphi }_{ki}}{ \leftarrow }{A}_{k}\overset{{\varphi }_{kj}}{ \rightarrow }{A}_{j}
\]
Put \( {B}_{k} \mathrel{\text{:=}} {\varphi }_{ki}^{-1}\left( {a}_{i}\right) \cap {\varphi }_{kj}^{-1}\left( {a}_{j}\right) \) . Then we have
\[
{\pi }_{i}^{-1}\left( {a}_{i}\right) \cap {\pi }_{j}^{-1}\left( {a}_{j}\right) = \mathop{\bigcup }\limits_{{b \in {B}_{k}}}{\pi }_{k}^{-1}\left( b\right) .
\]
The latter union is open in the topology defined in (1).
Theorem 22.3.3. If each \( {A}_{i} \) is finite, then \( \mathop{\lim }\limits_{\substack{ \leftarrow \\ {i \in I} }}{A}_{i} \) is compact and Hausdorff. In this case, we say that \( \mathop{\lim }\limits_{\substack{ \leftarrow \\ {i \in I} }}{A}_{i} \) is profinite.
Proof. We consider the subspace ![97650b70-8b1b-4cc6-91b2-de9112f1d8bc_145_1.jpg](images/97650b70-8b1b-4cc6-91b2-de9112f1d8bc_145_1.jpg)
The latter space is compact and Hausdorff (as a product of compact Hausdorff spaces). The subspace is cut out by the conditions \( {\varphi }_{ji}\left( {a}_{j}\right) = {a}_{i} \) for any \( j > i \), and each such condition is closed. This exhibits the left hand side as a closed subspace.
So \( \mathop{\lim }\limits_{\substack{ \leftarrow \\ {i \in I} }}{A}_{i} \) is compact and Hausdorff.
Example 22.3.4. The topological rings \( {\mathbb{Z}}_{p} \) and \( \widehat{\mathbb{Z}} \) are profinite, and thus compact and Hausdorff.
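Concretely, an element of \( {\mathbb{Z}}_{p} = \mathop{\lim }\limits_{\substack{ \leftarrow \\ n }}\mathbb{Z}/{p}^{n}\mathbb{Z} \) is a sequence of residues \( {a}_{n} \in \mathbb{Z}/{p}^{n}\mathbb{Z} \) with \( {a}_{n + 1} \equiv {a}_{n}\left( {\;\operatorname{mod}\;{p}^{n}}\right) \) . The small sketch below (our own illustration; the prime and the element are arbitrary choices) lists the truncations of \( -1 \) in \( {\mathbb{Z}}_{7} \) and checks the compatibility condition.

```python
def truncations(x, p, N):
    """The image of the integer x in Z/pZ, Z/p^2Z, ..., Z/p^N Z:
    a compatible sequence representing x as an element of Z_p."""
    return [x % p ** n for n in range(1, N + 1)]

def is_compatible(seq, p):
    # compatibility condition of the inverse system: a_{n+1} = a_n (mod p^n)
    return all(seq[n] % p ** n == seq[n - 1] for n in range(1, len(seq)))

p = 7
minus_one = truncations(-1, p, 6)
print(minus_one)                    # [6, 48, 342, 2400, 16806, 117648]
print(is_compatible(minus_one, p))  # True
```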
Definition 22.3.5. A topological group is a group \( G \) with a topology on the underlying subset such that the two maps
\[
\iota : G \rightarrow G,\;g \mapsto {g}^{-1}\;\text{ and }\;m : G \times G \rightarrow G,\;\left( {g, h}\right) \mapsto {gh}
\]
are continuous.
So if \( U \subseteq G \) is an open subset, then \( {gUh} \subseteq G \) is an open subset for any \( g, h \in G \) .
The following are interesting properties of topological groups.
Lemma 22.3.6. If \( H \leq G \) is an open subset of a topological group \( G \), then \( H \) is also closed!
Proof. Note that we have a disjoint union
\[
G = \mathop{\coprod }\limits_{{{gH} \in G/H}}{gH}
\]
of subsets. But each \( {gH} \) is open. It implies that
\[
H = G \smallsetminus \left( {\mathop{\coprod }\limits_{{{gH} \neq H}}{gH}}\right)
\]
is closed.
Lemma 22.3.7. If \( G \) is a compact topological group, then a subgroup \( H \leq G \) is open if and only if it is closed and of finite index in \( G \) .
Proof. " \( \Rightarrow \) " Lemma 22.3.6 implies that \( H \) is closed. To see that \( H \) has finite index in \( G \), we note that \( G = \coprod {gH} \) is an open disjoint cover. Yet \( G \) is compact, so the number of subsets in the disjoint union is finite, i.e. \( \left\lbrack {G : H}\right\rbrack < \infty \) .
" \( \Leftarrow \) " If \( H \leq G \) is a closed subgroup of finite index, we write \( G = \coprod {gH} \), then we have
\[
H = G \smallsetminus \left( {\mathop{\bigcup }\limits_{{{gH} \neq H}}{gH}}\right)
\]
The union on the right hand side is a finite union, so the union is closed, and thus the complement \( H \) is open.
Definition 22.3.8. A profinite group is an inverse limit of finite groups, with the inverse limit topology.
Lemma 22.3.9. For a profinite group \( G \), we have
\[
G \cong \mathop{\lim }\limits_{\substack{ \leftarrow \\ {H \leq G\text{ open normal}} }}G/H.
\]
Proof. There is an obvious map \( G \rightarrow \mathop{\lim }\limits_{\substack{ \leftarrow \\ {H \leq G\text{ open normal}} }}G/H = : {G}^{\prime } \) .
By definition, \( G = \mathop{\lim }\limits_{\substack{ \leftarrow \\ {i \in I} }}{G}_{i} \) . We want to construct the reverse arrow:
\[
{G}^{\prime } = \mathop{\lim }\limits_{\substack{ \leftarrow \\ {H \leq G\text{ open normal }} }}G/H \rightarrow \mathop{\lim }\limits_{\substack{ \leftarrow \\ {i \in I} }}{G}_{i}.
\]
For this, it is enough to provide a compatible system of homomorphisms \( {G}^{\prime } \rightarrow {G}_{i} \) for each \( i \) . But note that the natural map \( {\pi }_{i} : G \rightarrow {G}_{i} \) has kernel \( \ker {\pi }_{i} \vartriangleleft G \), which is open and of finite index since \( {G}_{i} \) is finite and discrete. Thus we can define the corresponding map
\[
{G}^{\prime } = \mathop{\lim }\limits_{\substack{ \leftarrow \\ {H \leq G\text{ open normal}} }}G/H \rightarrow \frac{G}{\ker {\pi }_{i}} \rightarrow {G}_{i}.
\]
This defines a compatible system of maps \( {G}^{\prime } \rightarrow {G}_{i} \) (for each \( i \in I \) ). Then by Fact 22.2.3, we get a map \( {G}^{\prime } \rightarrow G \) . By tracing back the definition, we see that this gives the inverse of the map \( G \rightarrow {G}^{\prime } \) we constructed above.
22.4. Infinite Galois theory. Let us recall that a Galois extension \( K \) over \( F \) is an extension that is separable and normal, i.e.
(1) any intermediate field \( E \) that is finite over \( F \) is separable over \( F \) ,
(2) any irreducible polynomial \( f\left( x\right) \in F\left\lbrack x\right\rbrack \) having one root in \( K \) splits in \( K\left\lbrack x\right\rbrack \) .
Definition 22.4.1. Let \( K \) be a Galois extension of \( F \) . Define
\[
\operatorname{Gal}\left( {K/F}\right) \mathrel{\text{:=}} \mathop{\lim }\limits_{\substack{ \leftarrow \\ {E/F\text{ finite Galois}} }}\operatorname{Gal}\left( {E/F}\right) .
\]
The connecting maps are given by restriction: if \( {E}_{1} \supseteq {E}_{2} \supseteq F \) are finite Galois extensions, we have the restriction map \( \operatorname{Gal}\left( {{E}_{1}/F}\right) \rightarrow \operatorname{Gal}\left( {{E}_{2}/F}\right) \) .
Example 22.4.2. (1) Write \( \mathbb{Q}\left( {\mu }_{{p}^{\infty }}\right) \mathrel{\text{:=}} \mathbb{Q}\left( {{\zeta }_{{p}^{n}};n \in \mathbb{N}}\right) \) . We have
\[
\operatorname{Gal}\left( {\mathbb{Q}\left( {\mu }_{{p}^{\infty }}\right) /\mathbb{Q}}\right) = \mathop{\lim }\limits_{\substack{ \leftarrow \\ n }}\operatorname{Gal}\left( {\mathbb{Q}\left( {\zeta }_{{p}^{n}}\right) /\mathbb{Q}}\right) \cong \mathop{\lim }\limits_{\substack{ \leftarrow \\ n }}{\left( \mathbb{Z}/{p}^{n}\mathbb{Z}\right) }^{ \times } = {\mathbb{Z}}_{p}^{ \times }.
\]
(2) Write \( \mathbb{Q}\left( {\mu }_{\infty }\right) \mathrel{\text{:=}} \mathbb{Q}\left( {{\zeta }_{n};n \in \mathbb{N}}\right) \) . We have
\[
\operatorname{Gal}\left( {\mathbb{Q}\left( {\mu }_{\infty }\right) /\mathbb{Q}}\right) = \mathop{\lim }\limits_{\substack{ \leftarrow \\ n }}\operatorname{Gal}\left( {\mathbb{Q}\left( {\zeta }_{n}\right) /\mathbb{Q}}\right) \cong \mathop{\lim }\limits_{\substack{ \leftarrow \\ n }}{\left( \mathbb{Z}/n\mathbb{Z}\right) }^{ \times } = {\widehat{\mathbb{Z}}}^{ \times } \cong \mathop{\prod }\limits_{p}{\mathbb{Z}}_{p}^{ \times }.
\]
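Behind these isomorphisms are the connecting maps of the inverse systems: restriction \( \operatorname{Gal}\left( {\mathbb{Q}\left( {\zeta }_{N}\right) /\mathbb{Q}}\right) \rightarrow \operatorname{Gal}\left( {\mathbb{Q}\left( {\zeta }_{n}\right) /\mathbb{Q}}\right) \) for \( n \mid N \) corresponds to reduction \( {\left( \mathbb{Z}/N\mathbb{Z}\right) }^{ \times } \rightarrow {\left( \mathbb{Z}/n\mathbb{Z}\right) }^{ \times } \) . The snippet below is our own brute-force check, for a few small moduli, that reduction really is a surjective group homomorphism between the unit groups.

```python
from math import gcd

def units(n):
    # the unit group (Z/nZ)^x, listed as representatives in {1, ..., n}
    return [a for a in range(1, n + 1) if gcd(a, n) == 1]

def check_reduction(N, n):
    """For n | N, reduction mod n should map (Z/NZ)^x onto (Z/nZ)^x and respect
    multiplication; these are the connecting maps of the inverse systems above."""
    assert N % n == 0
    U_N, U_n = units(N), set(units(n))
    surjective = {a % n for a in U_N} == U_n
    homomorphism = all((a * b % N) % n == (a % n) * (b % n) % n for a in U_N for b in U_N)
    return surjective and homomorphism

print(check_reduction(49, 7), check_reduction(343, 49), check_reduction(60, 12))  # True True True
```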
Definition 22.4.3. The topology on \( \operatorname{Gal}\left( {K/F}\right) \) is given as follows: if \( E \) is an intermediate finite Galois extension of \( F \) and if \( {a}_{E} \in \operatorname{Gal}\left( {E/F}\right) \) is an element, then
\[
\left\{ {g = {\left( {g}_{{E}^{\prime }}\right) }_{{E}^{\prime }} \in \operatorname{Gal}\left( {K/F}\right) : {\left. g\right| }_{E} = {a}_{E}}\right\}
\]
is a standard open subset of \( \operatorname{Gal}\left( {K/F}\right) \) .
The open subsets of \( \operatorname{Gal}\left( {K/F}\right) \) are unions of such standard opens.
Theorem 22.4.4 (Galois theory for infinite extensions). Let \( K \) be a Galois extension of \( F \) .
Then there is a one-to-one inclusion-reversing correspondence
\[
\{ \text{closed subgroups}\;H\;\text{of}\;\text{Gal}\left( {K/F}\right) \} \;\longleftrightarrow \;\{ \text{intermediate}\;\text{fields}\;E\;\text{of}\;K/F\}
\]
Proof. This essentially follows from the Galois theory. But we explain where the closedness condition comes from. Put
\[
{G}^{\prime } \mathrel{\text{:=}} \left\{ {g \in \operatorname{Gal}\left( {K/F}\right) : {\left. g\right| }_{E} = {\operatorname{id}}_{E}}\right\} \subseteq \operatorname{Gal}\left( {K/F}\right) .
\]
The above condition is a closed condition and hence defines a closed subgroup.
We need to show that we have a homeomorphism
(22.4.4.1)
\[
{G}^{\prime } \cong \operatorname{Gal}\left( {K/E}\right) \mathrel{\text{:=}} \mathop{\lim }\limits_{\substack{ \leftarrow \\ {{E}^{\prime }/E\text{ finite Galois}} }}\operatorname{Gal}\left( {{E}^{\prime }/E}\right) .
\]
(1) We first construct a map \( \operatorname{Gal}\left( {K/E}\right) \rightarrow {G}^{\prime } \) . For this, it is enough to construct a compatible family of maps \( \operatorname{Gal}\left( {K/E}\right) \rightarrow \operatorname{Gal}\left( {{F}^{\prime }/F}\right) \) .
As \( {F}^{\prime } \) is Galois |
1088_(GTM245)Complex Analysis | Definition 6.4 |
Definition 6.4. We consider the special case of functions holomorphic on a degenerate annulus with \( {R}_{1} = 0 \) and \( {R}_{2} \in (0, + \infty \rbrack \) ; that is, we fix a point \( c \in \mathbb{C} \) and a holomorphic function \( f \) on the punctured disc \( A = \left\{ {z \in \mathbb{C};0 < \left| {z - c}\right| < {R}_{2}}\right\} \) . In this case \( c \) is called an isolated singularity of \( f \) .
We know that then \( f \) has a Laurent series expansion (6.1). There are three possibilities for the coefficients \( {\left\{ {a}_{n}\right\} }_{n \in {\mathbb{Z}}_{ < 0}} \) ; we now analyze each possibility.
1. If \( {a}_{n} = 0 \) for all \( n \) in \( {\mathbb{Z}}_{ < 0} \), then \( f \) has a removable singularity at \( z = c \), and \( f \) can be extended to be a holomorphic function in the disc \( \left| {z - c}\right| < {R}_{2} \) by defining \( f\left( c\right) = {a}_{0}. \)
Shortly (in Theorem 6.7), we will establish a useful criterion for proving that such an isolated singularity is removable.
2. Only finitely many, at least one, nonzero coefficients with negative indices appear in the Laurent series; that is, there exists \( N \) in \( {\mathbb{Z}}_{ > 0} \) such that \( {a}_{-n} = 0 \) for all \( n > N \) and \( {a}_{-N} \neq 0 \) . We can hence write
\[
f\left( z\right) = \mathop{\sum }\limits_{{n = - N}}^{{-1}}{a}_{n}{\left( z - c\right) }^{n} + \mathop{\sum }\limits_{{n = 0}}^{\infty }{a}_{n}{\left( z - c\right) }^{n}
\]
for \( 0 < \left| {z - c}\right| < {R}_{2} \) . In this case, \( \mathop{\sum }\limits_{{n = - N}}^{{-1}}{a}_{n}{\left( z - c\right) }^{n} \) is called the principal or singular part of \( f \) at \( c \) . Observe that
\[
\mathop{\lim }\limits_{{z \rightarrow c}}{\left( z - c\right) }^{N}f\left( z\right) = {a}_{-N} \neq 0
\]
and \( N \) is characterized by this property (that the limit exists and is different from zero). Note that in this case the function \( z \mapsto {\left( z - c\right) }^{N}f\left( z\right) \) has a removable singularity at \( c \), and hence can be extended to be holomorphic in the disc \( \left| {z - c}\right| < {R}_{2} \) . Therefore \( f \) is meromorphic in the disc \( \left| {z - c}\right| < {R}_{2} \), and \( f \) has a pole of order \( N \) at \( z = c \) .
3. Infinitely many nonzero coefficients with negative indices appear in the Laurent series. Then \( c \) is called an essential singularity of \( f \) .
We extend the definition of isolated singularity to include the point at \( \infty \) .
Definition 6.5. A function \( f \) holomorphic in a deleted neighborhood of \( \infty \) (see Definition 3.52) has an isolated singularity at \( \infty \) if \( g\left( z\right) = f\left( \frac{1}{z}\right) \) has an isolated singularity at \( z = 0 \) .
If \( f \) has a pole of order \( N \) at \( \infty \), the principal part of \( f \) at \( \infty \) is the polynomial \( \mathop{\sum }\limits_{{n = 1}}^{N}{a}_{n}{z}^{n} \), where the Laurent series expansion for \( g \) near zero is given by
\[
g\left( z\right) = f\left( \frac{1}{z}\right) = \mathop{\sum }\limits_{{n = 1}}^{N}{a}_{n}{z}^{-n} + \mathop{\sum }\limits_{{n = 0}}^{\infty }{a}_{-n}{z}^{n}
\]
for \( \left| z\right| > 0 \) small.
Example 6.6. We have
\[
\exp \left( \frac{1}{z}\right) = \mathop{\sum }\limits_{{n = 0}}^{\infty }\frac{{z}^{-n}}{n!}
\]
for \( \left| z\right| > 0 \) . Here \( {R}_{1} = 0 \) and \( {R}_{2} = + \infty ;0 \) is an essential singularity of the function and \( \infty \) is a removable singularity \( \left( {f\left( \infty \right) = 1}\right) \) .
Theorem 6.7. Let \( c \in \mathbb{C} \) and \( f \) be a holomorphic function on \( A = \{ 0 < \left| {z - c}\right| < \) \( \left. {R}_{2}\right\} \) . Assume that (6.1) is the Laurent series of \( f \) on the punctured disc A. If there exist an \( M > 0 \) and \( 0 < {r}_{0} < {R}_{2} \) such that
\[
\left| {f\left( z\right) }\right| \leq M\text{ for }0 < \left| {z - c}\right| < {r}_{0},
\]
then \( f \) has a removable singularity at \( z = c \) .
Proof. We know that \( {a}_{n} = \frac{1}{{2\pi }\imath }{\int }_{{\gamma }_{r}}\frac{f\left( t\right) }{{\left( t - c\right) }^{n + 1}}\mathrm{\;d}t \) for all \( n \in \mathbb{Z} \), where \( {\gamma }_{r}\left( \theta \right) = \) \( c + r{\mathrm{e}}^{\iota \theta } \), for \( \theta \in \left\lbrack {0,{2\pi }}\right\rbrack \) and \( 0 < r < {r}_{0} \) . We estimate \( \left| {a}_{n}\right| \leq \frac{M}{{r}^{n}} \) . For \( n < 0 \), we let \( r \rightarrow 0 \) and conclude that \( {a}_{n} = 0 \) .
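The contour-integral formula for \( {a}_{n} \) used in this proof can be evaluated numerically. The sketch below is our own illustration (the sample functions, radius, and number of quadrature points are arbitrary choices): it approximates \( {a}_{n} \) by a Riemann sum on \( \left| t\right| = r \); for \( \sin z/z \), which is bounded near 0, the negative-index coefficients come out \( \approx 0 \), while for \( \exp \left( {1/z}\right) \) they do not.

```python
import cmath

def laurent_coeff(f, n, c=0.0, r=0.5, m=4000):
    """Approximate a_n = (1/(2 pi i)) * integral over |t - c| = r of f(t)/(t - c)^(n+1) dt.

    Parametrizing t = c + r*e^(i*theta) turns this into
    (1/(2 pi)) * integral of f(t) * (t - c)^(-n) d theta,
    approximated here by a plain Riemann sum over m points of the circle."""
    total = 0j
    for k in range(m):
        t = c + r * cmath.exp(2j * cmath.pi * k / m)
        total += f(t) * (t - c) ** (-n)
    return total / m

f_removable = lambda z: cmath.sin(z) / z   # removable singularity at 0
f_essential = lambda z: cmath.exp(1 / z)   # essential singularity at 0

for n in (-3, -2, -1, 0, 1, 2):
    print(n, abs(laurent_coeff(f_removable, n)), abs(laurent_coeff(f_essential, n)))
```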
Theorem 6.8 (Casorati-Weierstrass). If \( f \) is holomorphic on \( \{ 0 < \left| {z - c}\right| < \) \( \left. {R}_{2}\right\} \) and has an essential singularity at \( z = c \), then for all \( w \in \mathbb{C} \) the function
\[
g\left( z\right) = \frac{1}{f\left( z\right) - w}
\]
is unbounded in any punctured neighborhood of \( z = c \) . Therefore the range of \( f \) restricted to any such neighborhood is dense in \( \mathbb{C} \) .
Proof. Assume that, for some \( w \in \mathbb{C} \), the function \( g \) is unbounded in some punctured neighborhood \( N \) of \( z = c \) . Then there is \( \varepsilon > 0 \) such that \( N = \) \( U\left( {c,\varepsilon }\right) - \{ c\} \), and, for any \( M > 0 \), there exists a \( z \in N \) such that \( \left| {g\left( z\right) }\right| > M \) ; that is, such that \( \left| {f\left( z\right) - w}\right| < \frac{1}{M} \) . Thus \( w \) is a limit point of \( f\left( N\right) \) and the last statement in the theorem is proved. Now it suffices to prove that for all \( w \in \mathbb{C} \) and all \( \varepsilon > 0 \), the function \( g \) is unbounded in \( U\left( {c,\varepsilon }\right) - \{ c\} \) . If \( g \) were bounded in such a neighborhood, it would have a removable singularity at \( z = c \) and thus would extend to a holomorphic function on \( U\left( {c,\varepsilon }\right) \) ; therefore \( f \) would be meromorphic there.
A much stronger result can be established. We state it without proof. \( {}^{2} \)
Theorem 6.9 (Picard). If \( f \) is holomorphic in \( 0 < \left| {z - c}\right| < {R}_{2} \) and has an essential singularity at \( z = c \), then there exists a \( {w}_{0} \in \mathbb{C} \) such that for all \( w \in \) \( \mathbb{C} - \left\{ {w}_{0}\right\}, f\left( z\right) = w \) has infinitely many solutions in \( 0 < \left| {z - c}\right| < {R}_{2} \) .
Example 6.10. The function \( \exp \left( \frac{1}{z}\right) \) shows the above theorem is sharp, with \( c = \) 0 and \( {w}_{0} = 0 \) .
\( {}^{2} \) For a proof, see Conway’s book listed in the bibliography.
We now have a complete description of the behavior of a holomorphic function near an isolated singularity.
Theorem 6.11. Assume that \( f \) is a holomorphic function in a punctured disc \( {U}^{\prime } = \) \( U\left( {c, R}\right) - \{ c\} \) around the isolated singularity \( c \in \mathbb{C} \) . Then
(1) \( c \) is a removable singularity if and only if \( f \) is bounded in \( {U}^{\prime } \) if and only if \( \mathop{\lim }\limits_{{z \rightarrow c}}f\left( z\right) \) exists and is finite.
(2) \( c \) is a pole of \( f \) if and only if \( \mathop{\lim }\limits_{{z \rightarrow c}}f\left( z\right) = \infty \), in the sense of Definition 3.52.
(3) \( c \) is an essential singularity of \( f \) if and only if \( f\left( {U}^{\prime }\right) \) is dense in \( \mathbb{C} \) .
Proof. (1) follows from Theorem 6.7 and the definition of removable singularity.
The Casorati-Weierstrass Theorem 6.8 shows that if \( c \) is an essential singularity then \( f\left( {U}^{\prime }\right) \) is dense in \( \mathbb{C} \) . We complete the proof by showing (2). If \( c \) is a pole of \( f \) of order \( N \geq 1 \), then we know from the Laurent series expansion for \( f \) that there exists \( 0 < r < R \) and a holomorphic function \( g : U\left( {c, r}\right) \rightarrow \mathbb{C} \) such that \( f\left( z\right) = \frac{g\left( z\right) }{{\left( z - c\right) }^{N}} \) on \( U\left( {c, r}\right) - \{ c\} \) and \( g\left( c\right) \neq 0 \) . By continuity of \( g \) we may assume that \( \left| {g\left( z\right) }\right| \geq M > 0 \) for all \( z \) in \( U\left( {c, r}\right) \) . Then \( \left| {f\left( z\right) }\right| \geq \frac{M}{{\left| z - c\right| }^{N}} \), and it follows that \( \mathop{\lim }\limits_{{z \rightarrow c}}f\left( z\right) = \infty \) . Conversely, if \( \mathop{\lim }\limits_{{z \rightarrow c}}f\left( z\right) = \infty \), then \( c \) is not a removable singularity of \( f \) since for every \( M > 0 \) there exists \( \delta > 0 \) such that
\[
\left| {f\left( z\right) }\right| \geq M\text{ for }0 < \left| {z - c}\right| < \delta ,
\]
and the Casorati-Weierstrass theorem implies that \( c \) is not an essential singularity of \( f \) . Thus it must be a pole.
Example 6.12. For an entire function \( f\left( z\right) = \mathop{\sum }\limits_{{n = 0}}^{\infty }{a}_{n}{z}^{n} \) (we know that its radius of convergence \( \rho = + \infty \) ), there are two possibilities:
(a) Either there exists an \( N \) such that \( {a}_{n} = 0 \) for all \( n > N \), in which case \( f \) is a polynomial of degree \( \leq N \) . If \( \deg f = N \geq 1 \), then \( f \) has a pole of order \( N \) at \( \infty \), and \( f\left( z\right) - f\left( 0\right) = \mathop{\sum }\limits_{{i = 1}}^{N}{a}_{i}{z}^{i} \) is the principal part of \( f \) at \( \infty \) . If \( \deg f = 0 \) , then \( f \) is constant, of course.
(b) Or \( f \) has an essential singularity at \( \infty \) .
We can now establish the following result.
Theorem 6.13. Let \( f : \mathbb{C} \cup \{ \infty \} \rightarrow \mathbb{C} \cup \{ \infty \} \) . Then,
(a) If \( f \) is holomorphic, it is constant, and
(b) If \( f \) is meromorphic, it is a rational function.
Proof. If \( f \) is holomorphic on \( \mathbb{C} \cup \{ \infty \} \), a compact set, it must be bounded. Since it is also an entire function, it must be constant, by Liouville's theorem.
1329_[肖梁] Abstract Algebra (2022F) | Definition 10.2.1 |
Definition 10.2.1. For two complex valued functions \( \chi ,\eta \) on \( G \), we put
\[
\langle \chi ,\eta \rangle \mathrel{\text{:=}} \frac{1}{\left| G\right| }\mathop{\sum }\limits_{{g \in G}}\chi \left( g\right) \eta \left( {g}^{-1}\right)
\]
Note that setting \( h = {g}^{-1} \), we deduce that
\[
\langle \chi ,\eta \rangle = \frac{1}{\left| G\right| }\mathop{\sum }\limits_{{g \in G}}\chi \left( g\right) \eta \left( {g}^{-1}\right) = \frac{1}{\left| G\right| }\mathop{\sum }\limits_{{h \in G}}\chi \left( {h}^{-1}\right) \eta \left( h\right) = \langle \eta ,\chi \rangle .
\]
This pairing is symmetric.
Remark 10.2.2. When both functions are characters of representations \( \left( {\rho, V}\right) \) and \( \left( {{\rho }^{\prime }, W}\right) \) of \( G \), Proposition 10.1.3(3) implies that \( \overline{{\chi }_{W}\left( g\right) } = {\chi }_{W}\left( {g}^{-1}\right) \) . Then this pairing is Hermitian:
\[
\left\langle {{\chi }_{V},{\chi }_{W}}\right\rangle = \frac{1}{\left| G\right| }\mathop{\sum }\limits_{{g \in G}}{\chi }_{V}\left( g\right) {\chi }_{W}\left( {g}^{-1}\right) = \frac{1}{\left| G\right| }\mathop{\sum }\limits_{{g \in G}}{\chi }_{V}\left( g\right) \overline{{\chi }_{W}\left( g\right) }.
\]
Recall that an irreducible representation of \( G \) is a representation whose only subrepresentations are 0 and itself.
Theorem 10.2.3 (Schur’s orthogonality). If \( \left( {{\rho }_{1},{V}_{1}}\right) \) and \( \left( {{\rho }_{2},{V}_{2}}\right) \) are irreducible representations of \( G \), then
(1) when \( \left( {{\rho }_{1},{V}_{1}}\right) ≄ \left( {{\rho }_{2},{V}_{2}}\right) \), then \( \left\langle {{\chi }_{{V}_{1}},{\chi }_{{V}_{2}}}\right\rangle = 0 \) ;
(2) when \( \left( {{\rho }_{1},{V}_{1}}\right) = \left( {{\rho }_{2},{V}_{2}}\right) ,\left\langle {{\chi }_{{V}_{1}},{\chi }_{{V}_{1}}}\right\rangle = 1 \) .
In other words, the characters of irreducible representations are orthonormal functions for the pairing \( \langle - , - \rangle \) .
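Before turning to the proof, here is a concrete sanity check of the theorem for \( G = S_3 \) (an illustrative sketch, not part of the notes; the three irreducible characters of \( S_3 \) are the trivial character, the sign character, and the character of the standard 2-dimensional representation, \( \chi(g) = \#\{\text{fixed points of } g\} - 1 \)). The pairing of Definition 10.2.1 is computed directly over the group elements and returns the identity matrix.

```python
from itertools import permutations

# Verify Schur's orthogonality for the three irreducible characters of S_3,
# using the pairing <chi, eta> = (1/|G|) * sum_g chi(g) * eta(g^{-1}).
G = list(permutations(range(3)))

def sign(p):                       # sign character: (-1)^(number of inversions)
    s = 1
    for i in range(3):
        for j in range(i + 1, 3):
            if p[i] > p[j]:
                s = -s
    return s

def inverse(p):
    q = [0, 0, 0]
    for i, pi in enumerate(p):
        q[pi] = i
    return tuple(q)

chars = [
    lambda p: 1,                                    # trivial character
    sign,                                           # sign character
    lambda p: sum(p[i] == i for i in range(3)) - 1  # standard 2-dim character
]

def pairing(chi, eta):
    return sum(chi(g) * eta(inverse(g)) for g in G) / len(G)

print([[pairing(c1, c2) for c2 in chars] for c1 in chars])   # identity matrix
```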
We first establish a (perhaps surprisingly) extremely useful result, called Schur's lemma. It is one of the most important tools in the study of representation theory. (We stick to the version concerning finite dimensional \( \mathbb{C} \) -vector space representations.)
Proposition 10.2.4 (Schur’s lemma). Let \( \left( {{\rho }_{1},{V}_{1}}\right) \) and \( \left( {{\rho }_{2},{V}_{2}}\right) \) be irreducible \( \mathbb{C} \) -representations of \( G \) and let \( \phi : {V}_{1} \rightarrow {V}_{2} \) be a homomorphism. Then
(1) If \( \left( {{\rho }_{1},{V}_{1}}\right) \) and \( \left( {{\rho }_{2},{V}_{2}}\right) \) are not isomorphic, we have \( \phi = 0 \) .
(2) If \( \left( {{\rho }_{1},{V}_{1}}\right) = \left( {{\rho }_{2},{V}_{2}}\right) ,\phi \) is a homothety, i.e. a scalar multiple of the identity.
Proof. The case \( \phi = 0 \) being trivial, we now assume that \( \phi \neq 0 \) .
(a) The kernel of \( \phi \) is a subrepresentation of \( {V}_{1} \) (and \( \phi \) is not zero); so \( \ker \left( \phi \right) = 0 \) . Thus \( \phi \) is injective.
(b) The image of \( \phi \) is a subrepresentation of \( {V}_{2} \) (and \( \phi \) is not zero); so \( \operatorname{Im}\left( \phi \right) = {V}_{2} \), i.e. \( \phi \) is surjective.
This implies that \( \phi \) is an isomorphism. The contrapositive of the above argument is (1).
For (2), we view \( \phi \) as an element of \( \mathrm{{GL}}\left( V\right) \), i.e. a matrix. Take one eigenvalue \( \lambda \in \mathbb{C} \) of \( \phi \), with eigenvector \( v \neq 0 \). (Here we crucially used that \( \mathbb{C} \) is "algebraically closed", i.e. every nonconstant polynomial with coefficients in \( \mathbb{C} \) has a zero.)
Consider another linear map \( {\phi }^{\prime } \mathrel{\text{:=}} \phi - \lambda \cdot {\operatorname{id}}_{V} : V \rightarrow V \) . Since \( \rho \left( g\right) \circ \phi = \phi \circ \rho \left( g\right) \) for every \( g \), we also have \( \rho \left( g\right) \circ {\phi }^{\prime } = {\phi }^{\prime } \circ \rho \left( g\right) \) for every \( g \in G \) ; i.e., \( {\phi }^{\prime } \) is also a homomorphism.
Yet \( \ker {\phi }^{\prime } \) contains the vector \( v \) and hence \( \ker {\phi }^{\prime } \neq 0 \) . Thus by the above discussion applied to \( {\phi }^{\prime } \) instead, we must have \( {\phi }^{\prime } = 0 \) .
This means that \( \phi = \lambda \cdot {\mathrm{{id}}}_{V} \) .
Corollary 10.2.5. Let \( \left( {{\rho }_{1},{V}_{1}}\right) \) and \( \left( {{\rho }_{2},{V}_{2}}\right) \) be two irreducible representations of \( G \) . Let \( \phi : {V}_{1} \rightarrow {V}_{2} \) be a \( \mathbb{C} \) -linear map (not necessarily \( G \) -linear). Then
\[
\widetilde{\phi } \mathrel{\text{:=}} \frac{1}{\left| G\right| }\mathop{\sum }\limits_{{g \in G}}{\rho }_{2}\left( g\right) \circ \phi \circ {\rho }_{1}{\left( g\right) }^{-1} : {V}_{1} \rightarrow {V}_{2}
\]
is a homomorphism (by Construction 9.2.6).
(1) If \( \left( {{\rho }_{1},{V}_{1}}\right) ≄ \left( {{\rho }_{2},{V}_{2}}\right) \), then \( \widetilde{\phi } = 0 \) .
(2) If \( \left( {{\rho }_{1},{V}_{1}}\right) = \left( {{\rho }_{2},{V}_{2}}\right) \), then \( \widetilde{\phi } = \lambda \cdot {\operatorname{id}}_{{V}_{1}} \), with
(10.2.5.1)
\[
\lambda = \frac{1}{\dim {V}_{1}}\operatorname{Tr}\left( \widetilde{\phi }\right) = \frac{1}{\dim {V}_{1}}\operatorname{Tr}\left( \phi \right)
\]
Proof. Construction 9.2.6 implies that \( \widetilde{\phi } \) is a homomorphism. Then Schur’s lemma implies (1) immediately. For (2), Schur’s lemma implies that \( \widetilde{\phi } = \lambda \cdot {\operatorname{id}}_{{V}_{1}} \) . The first equality in (10.2.5.1) holds because \( \widetilde{\phi } \) is scalar multiplication, and the second equality in (10.2.5.1) follows from the fact that
\[
\operatorname{Tr}\left( \widetilde{\phi }\right) = \frac{1}{\left| G\right| }\mathop{\sum }\limits_{{g \in G}}\underset{\operatorname{Tr}\left( \phi \right) }{\underbrace{\operatorname{Tr}\left( {{\rho }_{1}\left( g\right) \circ \phi \circ {\rho }_{1}{\left( g\right) }^{-1}}\right) }} = \operatorname{Tr}\left( \phi \right) .
\]
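Formula (10.2.5.1) can also be observed numerically. The sketch below (an illustration, not part of the notes; the representation matrices and the test map \( \phi \) are chosen here) realizes the 2-dimensional irreducible representation of \( S_3 \cong D_3 \) by the rotation \( r \) through \( 120^{\circ} \) and a reflection \( s \), and checks that averaging the conjugates of an arbitrary linear map \( \phi \) yields \( \frac{\operatorname{Tr}(\phi)}{\dim V_1} \cdot \operatorname{id} \).

```python
import numpy as np

# Average an arbitrary 2x2 matrix over the 2-dimensional irreducible
# representation of S_3 (symmetries of a triangle): the result must be the
# scalar matrix (Tr(phi)/2) * Id, as in Corollary 10.2.5(2).
c, s_ = np.cos(2 * np.pi / 3), np.sin(2 * np.pi / 3)
r = np.array([[c, -s_], [s_, c]])               # rotation by 120 degrees
s = np.array([[1.0, 0.0], [0.0, -1.0]])         # a reflection
group = [np.eye(2), r, r @ r, s, r @ s, r @ r @ s]

phi = np.array([[1.0, 2.0], [3.0, 4.0]])        # an arbitrary, non-equivariant map
avg = sum(g @ phi @ np.linalg.inv(g) for g in group) / len(group)
print(np.round(avg, 10))                        # (Tr(phi)/2) * Id = 2.5 * Id
```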
In fact, we will show a statement stronger than Theorem 10.2.3. We start with Theorem 10.2.3(1).
Proposition 10.2.6. Assume that \( \left( {{\rho }_{1},{V}_{1}}\right) ≄ \left( {{\rho }_{2},{V}_{2}}\right) \) . Identify \( {V}_{1} \) with \( {\mathbb{C}}^{m} \) and \( {V}_{2} \) with \( {\mathbb{C}}^{n} \) , and thus, the two representations can be viewed as
\[
g \mapsto {\left( {\rho }_{1}{\left( g\right) }_{ij}\right) }_{i, j = 1,\ldots, m}\;\text{ and }\;g \mapsto {\left( {\rho }_{2}{\left( g\right) }_{k\ell }\right) }_{k,\ell = 1,\ldots, n}.
\]
Then, for any \( i, j \in \{ 1,\ldots, m\} \) and \( k,\ell \in \{ 1,\ldots, n\} \), we have
(10.2.6.1)
\[
\left\langle {{\rho }_{1}{\left( -\right) }_{ij},{\rho }_{2}{\left( -\right) }_{k\ell }}\right\rangle \mathrel{\text{:=}} \frac{1}{\left| G\right| }\mathop{\sum }\limits_{{g \in G}}{\rho }_{1}{\left( {g}^{-1}\right) }_{ij}{\rho }_{2}{\left( g\right) }_{k\ell } = 0.
\]
Proof. Consider \( \phi : {V}_{1} \rightarrow {V}_{2} \) given by the matrix \( {E}_{\ell i} \), which is zero in all entries except having 1 at the \( \left( {\ell, i}\right) \) -entry. Then Corollary 10.2.5(1) implies that
\[
\mathop{\sum }\limits_{{g \in G}}{\rho }_{2}\left( g\right) {E}_{\ell i}{\rho }_{1}\left( {g}^{-1}\right) = 0
\]
Taking the \( \left( {k, j}\right) \) -entry of this matrix implies that
\[
\mathop{\sum }\limits_{{g \in G}}\mathop{\sum }\limits_{{s, t}}{\rho }_{2}{\left( g\right) }_{k, s}{\left( {E}_{\ell i}\right) }_{s, t}{\rho }_{1}{\left( {g}^{-1}\right) }_{t, j} = 0.
\]
Note that \( {\left( {E}_{\ell i}\right) }_{s, t} = 0 \) unless \( \ell = s, i = t \) . So the above equality reduces to
\[
\mathop{\sum }\limits_{{g \in G}}{\rho }_{2}{\left( g\right) }_{k,\ell }{\rho }_{1}{\left( {g}^{-1}\right) }_{i, j} = 0
\]
which holds for all \( i, j, k,\ell \) .
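Taking \( (\rho_1, V_1) \) to be the trivial representation, (10.2.6.1) says in particular that every matrix entry of a nontrivial irreducible representation sums to zero over \( G \). A quick numerical check with the same 2-dimensional representation of \( S_3 \) as above (again only an illustrative sketch):

```python
import numpy as np

# With rho_1 trivial, (10.2.6.1) reduces to: the matrices of a nontrivial
# irreducible representation sum (entrywise) to zero over the group.
c, s_ = np.cos(2 * np.pi / 3), np.sin(2 * np.pi / 3)
r = np.array([[c, -s_], [s_, c]])
s = np.array([[1.0, 0.0], [0.0, -1.0]])
group = [np.eye(2), r, r @ r, s, r @ s, r @ r @ s]
print(np.round(sum(group) / len(group), 12))    # the 2x2 zero matrix
```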
10.2.7. Proof of Theorem 10.2.3. (1) is an immediate corollary of Proposition 10.2.6. Indeed,
\[
\left\langle {{\chi }_{{V}_{2}},{\chi }_{{V}_{1}}}\right\rangle = \mathop{\sum }\limits_{{i, k}}\left\langle {{\rho }_{2}{\left( -\right) }_{ii},{\rho }_{1}{\left( -\right) }_{kk}}\right\rangle \overset{\text{ (10.2.6.1) }}{ = }0.
\]
(2) The proof is somewhat similar. If \( \left( {{\rho }_{1},{V}_{1}}\right) = \left( {{\rho }_{2},{V}_{2}}\right) = \left( {\rho ,{\mathbb{C}}^{n}}\right) \), then
\( \left( {10.2.7.1}\right) \)
\[
\left\langle {{\chi }_{V},{\chi }_{V}}\right\rangle = \frac{1}{\left| G\right| }\mathop{\sum }\limits_{{g \in G}}\operatorname{Tr}\left( {\rho \left( g\right) }\right) \operatorname{Tr}\left( {\rho \left( {g}^{-1}\right) }\right) = \frac{1}{\left| G\right| }\mathop{\sum }\limits_{{g \in G}}\mathop{\sum }\limits_{{i, j}}\rho {\left( g\right) }_{i, i}\rho {\left( {g}^{-1}\right) }_{j, j}.
\]
Applying Corollary 10.2.5(2) to \( \phi = {E}_{ij} \), we deduce that
\[
\frac{1}{\left| G\right| }\mathop{\sum }\limits_{{g \in G}}\rho \left( g\right) {E}_{ij}\rho \left( {g}^{-1}\right) = \frac{\operatorname{Tr}\left( {E}_{ij}\right) }{\dim V} \cdot {\operatorname{id}}_{V} = \left\{ \begin{array}{ll} \frac{1}{\dim V} \cdot {\operatorname{id}}_{V} & \text{ if }i = j, \\ 0 & \text{ otherwise. } \end{array}\right.
\]
Considering the \( \left( {i, j}\right) \) -entry of the matrix above, we see that
- if \( i \neq j \), then \( \frac{1}{\left| G\right| }\mathop{\sum }\limits_{{g \in G}}\rho {\left( g\right) }_{i, i}\rho {\left( {g}^{-1}\right) }_{j, j} = 0 \) ;
- if \( i = j \), then \( \frac{1}{\left| G\right| }\mathop{\sum }\limits_{{g \in G}}\rho {\left( g\right) }_{i, i}\rho {\left( {g}^{-1}\right) }_{i, i} = \frac{1}{\dim V} \) .
Plugging the two cases above into (10.2.7.1), we see that
\[
\left\langle {{\chi }_{V},{\chi }_{V}}\right\rangle = \mathop{\sum }\limits_{{i = j}}\frac{1}{\dim V} = 1
\]
Remark 10.2.8. (1) The character of an irreducible representation is often called an irreducible character.
(2) The meaning of Schur's orthogonality is that characters of irreducible representations are orthonormal as class functions. In fact we will prove below that th |
1172_(GTM8)Axiomatic Set Theory | Definition 7.30. 1 |
Definition 7.30. 1. If \( \mathbf{A} \) is a structure and \( \varphi \left( {{a}_{0},{a}_{1},\ldots ,{a}_{n}}\right) \) is a formula in the language of \( \mathbf{A} \), then a function \( f : {A}^{n} \rightarrow A \) is a Skolem function for \( \left( {\exists x}\right) \varphi \left( {x,{a}_{1},\ldots ,{a}_{n}}\right) \) with respect to \( \mathbf{A} \) iff
\[
\left( {\forall {x}_{1},\ldots ,{x}_{n} \in A}\right) \left\lbrack {\mathbf{A} \vDash \left( {\exists x}\right) \varphi \left( {x,{x}_{1},\ldots ,{x}_{n}}\right) \leftrightarrow \mathbf{A} \vDash \varphi \left( {f\left( {{x}_{1},\ldots ,{x}_{n}}\right) ,{x}_{1},\ldots ,{x}_{n}}\right) }\right\rbrack .
\]
2. \( \mathbf{B} \) is an elementary substructure of \( \mathbf{A} \) (written \( \mathbf{B} \prec \mathbf{A} \) ) iff \( \mathbf{B} \) is a substructure of \( \mathbf{A} \) and for every formula \( \varphi \) of the language of \( \mathbf{A} \) i.e., \( \mathcal{L}\left( {C\left( A\right) }\right) \), and \( \forall {a}_{1},\ldots ,{a}_{n} \in B \)
\[
\mathbf{B} \vDash \varphi \left( {{a}_{1},\ldots ,{a}_{n}}\right) \leftrightarrow \mathbf{A} \vDash \varphi \left( {{a}_{1},\ldots ,{a}_{n}}\right) .
\]
Remark. We next show how to obtain an elementary substructure of \( \mathbf{A} \) that contains a given subset of \( A \), provided that we have a family of Skolem functions for all formulas of the language of \( \mathbf{A} \) .
Theorem 7.31. If \( \mathbf{A} \) is a structure and \( F \) a set of Skolem functions such that for every formula \( \left( {\exists x}\right) \varphi \left( {x,{a}_{1},\ldots ,{a}_{n}}\right) \) of the language of \( \mathbf{A} \) there exists in \( F \) a Skolem function for that formula with respect to \( \mathbf{A} \), if \( B \subseteq A \) and if \( B \) is closed under the functions of \( F \) then \( \mathbf{B} \triangleq \mathbf{A} \upharpoonright B \) is an elementary substructure of \( \mathbf{A} \) .
* \( \ulcorner \varphi \urcorner \) is the Gödel number of \( \varphi \) .
Proof. By induction on the number of logical symbols in \( \varphi \) . If \( \varphi \) is atomic or of the form \( \neg \psi \) or \( \psi \land \eta \) the conclusion is obvious. If \( \varphi \left( {{a}_{1},\ldots ,{a}_{n}}\right) \) is \( \left( {\exists x}\right) \psi \left( {x,{a}_{1},\ldots ,{a}_{n}}\right) \) and if \( {b}_{1},\ldots ,{b}_{n} \in B \) then
\[
\mathbf{B} \vDash \left( {\exists x}\right) \psi \left( {x,{b}_{1},\ldots ,{b}_{n}}\right) \rightarrow \left( {\exists b \in B}\right) \left\lbrack {\mathbf{B} \vDash \psi \left( {b,{b}_{1},\ldots ,{b}_{n}}\right) }\right\rbrack
\]
\[
\rightarrow \left( {\exists b \in B}\right) \left\lbrack {\mathbf{A} \vDash \psi \left( {b,{b}_{1},\ldots ,{b}_{n}}\right) }\right\rbrack
\]
\[
\rightarrow \mathbf{A} \vDash \left( {\exists x}\right) \psi \left( {x,{b}_{1},\ldots ,{b}_{n}}\right)
\]
\[
\rightarrow \left( {\exists f \in F}\right) \left\lbrack {\mathbf{A} \vDash \psi \left( {f\left( {{b}_{1},\ldots ,{b}_{n}}\right) ,{b}_{1},\ldots ,{b}_{n}}\right) }\right\rbrack
\]
\[
\rightarrow \left( {\exists x \in B}\right) \left\lbrack {\mathbf{A} \vDash \psi \left( {x,{b}_{1},\ldots ,{b}_{n}}\right) }\right\rbrack
\]
\[
\rightarrow \mathbf{B} \vDash \left( {\exists x}\right) \psi \left( {x,{b}_{1},\ldots ,{b}_{n}}\right) .
\]
Lemma. If \( A \) is a set, and if
\[
\left( {\forall x, y \in A}\right) \left\lbrack {x \neq y \rightarrow \left( {\exists z \in A}\right) \neg \left\lbrack {z \in x \leftrightarrow z \in y}\right\rbrack }\right\rbrack
\]
then there exists a transitive set \( a \) and a function \( f \) such that
\[
f : A\xrightarrow[\text{ onto }]{1 - 1}a
\]
and \( \left( {\forall x, y \in A}\right) \left\lbrack {x \in y \leftrightarrow f\left( x\right) \in f\left( y\right) }\right\rbrack \) . Moreover if \( b \) is a transitive subset of \( A \) then \( f \upharpoonright b = I \upharpoonright b \) .*
Proof. We define \( f \) recursively by
\[
f\left( y\right) = \{ f\left( x\right) \mid x \in A \cap y\} .
\]
The conclusion is then immediate from the definition of \( f \), that is,
\[
\left( {\forall x, y \in A}\right) \left\lbrack {x \in y \leftrightarrow f\left( x\right) \in f\left( y\right) }\right\rbrack .
\]
Also, by \( \epsilon \) -induction, it follows that if \( b \) is a transitive subset of \( A \) then \( f \upharpoonright b = I \upharpoonright b \) . (For details see Takeuti and Zaring: Introduction to Axiomatic Set Theory, Springer-Verlag, 1971, p. 19.)
Remark. In the foregoing Lemma both \( f \) and \( a \) are unique.
Theorem 7.32. If \( A \) is a transitive set, if \( k \in A \) and if \( \langle A, \in, k\rangle \) is a model of \( {ZF} + V = {L}_{k} \) then \( \left( {\exists \alpha }\right) \left\lbrack {A = {A}_{\alpha }}\right\rbrack \) where \( {A}_{\alpha } \) is as in Definition 7.24.
Proof. Since \( {L}_{k} = \mathop{\bigcup }\limits_{{\alpha \in {On}}}{A}_{\alpha } \) and \( {A}_{\alpha } \) is absolute with respect to \( A \) for each \( \alpha \in A \cap {On}, A = \mathop{\bigcup }\limits_{{\alpha \in A \cap {On}}}{A}_{\alpha } \) . Furthermore, because \( A \) is transitive, \( A \cap {On} = \mathop{\bigcup }\limits_{{\alpha \in A}}\alpha \triangleq \beta \) . Therefore since \( A \) is a model of \( {ZF},\beta \in {K}_{\mathrm{{II}}} \) and hence
\[
A = \mathop{\bigcup }\limits_{{\alpha \in \beta }}{A}_{\alpha } = {A}_{\beta }
\]
Theorem 7.33. If \( {k}_{0} \) is the transitive closure of \( k \) then
\[
V = {L}_{k} \rightarrow \left( {\forall \alpha }\right) \left\lbrack {{\overline{\bar{k}}}_{0} \leq {\aleph }_{\alpha } \rightarrow {2}^{{\aleph }_{\alpha }} = {\aleph }_{\alpha + 1}}\right\rbrack .
\]
Proof. If \( V = {L}_{k} \) then \( k \in {L}_{k} \) . Let \( F \) be a countable family of Skolem functions, with respect to \( {L}_{k} \), for all formulas of the language \( {\mathcal{L}}_{0}\left( {\{ k\left( \right) \} }\right) \) . If \( a \subseteq {\aleph }_{\alpha } \), \( {\overline{\overline{k}}}_{0} \leq {\aleph }_{\alpha } \), and
\[
b \triangleq \{ a\} \cup {\aleph }_{\alpha } \cup {k}_{0} \cup \{ k\}
\]
* \( I \triangleq \{ \langle x, x\rangle \mid x \in V\}, f \upharpoonright b \triangleq \{ \langle x, y\rangle \in f \mid x \in b\} \) .
then \( b \) is transitive and \( \overline{\overline{b}} = {\aleph }_{\alpha } \) . Let \( A \) be the closure of \( b \) under all of the functions in \( F \) . Then \( \overline{\overline{A}} = \overline{\overline{b}} = {\aleph }_{\alpha } \) and, by Theorem 7.31,
\[
\langle A, \in, k\rangle \vDash {ZF} + V = {L}_{k}.
\]
From the Lemma there exists a transitive set \( {a}_{0} \) and a function \( f \) from \( A \) one-to-one onto \( {a}_{0} \) such that \( f \) preserves the \( \epsilon \) -relation. Since \( b \) is a transitive subset of \( A, f \) is the identity function on \( b \), in particular \( f\left( k\right) = k \) . Therefore \( \left\langle {{a}_{0}, \in, k}\right\rangle \vDash {ZF} + V = {L}_{k} \) . By Theorem 7.32
\[
\left( {\exists \beta }\right) \left\lbrack {{a}_{0} = {A}_{\beta }}\right\rbrack \text{.}
\]
But \( {\overline{\overline{a}}}_{0} = \overline{\overline{A}} = {\aleph }_{\alpha } \) . Hence \( \overline{\overline{\beta }} \leq {\aleph }_{\alpha } \), i.e., \( \beta < {\aleph }_{\alpha + 1} \) . Since \( a = f\left( a\right) \in {f}^{\prime \prime }A = {a}_{0} \) , this proves that \( \left( {\forall a \subseteq {\aleph }_{\alpha }}\right) \left( {\exists \beta < {\aleph }_{\alpha + 1}}\right) \left\lbrack {a \in {A}_{\beta }}\right\rbrack \) . Therefore
\[
\mathcal{P}\left( {\aleph }_{\alpha }\right) \subseteq {A}_{{\aleph }_{\alpha + 1}}
\]
But \( \overline{\overline{{A}_{{\aleph }_{\alpha + 1}}}} = {\aleph }_{\alpha + 1} \) . Hence \( \overline{\overline{\mathcal{P}\left( {\aleph }_{\alpha }\right) }} = {\aleph }_{\alpha + 1} \) .
Remark. Note that \( V = {L}_{k} \) can be expressed as a simple sentence \( V = \mathop{\bigcup }\limits_{{\alpha \in {On}}}{A}_{\alpha } \) in the language \( \mathcal{L}\left( {\{ k\left( \right) \} }\right) \) . To prove the preceding theorem assuming the axioms of \( {ZF} \) and \( V = {L}_{k} \) we note that in fact we used only finitely many axioms \( {\varphi }_{0},\ldots ,{\varphi }_{n} \) . Let \( {F}_{0} \) be the family of Skolem functions for the finitely many subformulas of \( {\varphi }_{0},\ldots ,{\varphi }_{n} \) . Then \( {F}_{0} \) can be defined in the language of \( {ZF} \) . The proof can then be carried out with \( F \) replaced by \( {F}_{0} \) .
As a corollary we have \( V = L \rightarrow {GCH} \) and hence the following theorem.
Theorem 7.34. If there exists a standard transitive model of \( {ZF} \) then there exists a standard transitive model of \( {ZF} + {AC} + {GCH} \) .
Remark. For our second application of our general theory we define \( L\left\lbrack A\right\rbrack \) .
Definition 7.35. If \( K \) is a transitive class, if \( F \subseteq K \), if \( \mathcal{L} = \mathcal{L}\left( {\{ K\left( \;\right), F\left( \;\right) \} }\right) \) and if \( {\mathbf{B}}_{\alpha } = \left\langle {{B}_{\alpha },{\bar{K}}_{\alpha },{\bar{F}}_{\alpha }}\right\rangle \) are structures for \( \mathcal{L} \) defined recursively by
1. \( {B}_{0} \triangleq 0 \) .
2. \( {\bar{K}}_{\alpha } \triangleq R\left( \alpha \right) \cap K \land {\bar{F}}_{\alpha } = R\left( \alpha \right) \cap F \) .
3. \( {B}_{\alpha + 1} \triangleq {Df}\left( {\mathbf{B}}_{\alpha }\right) \cup {\bar{K}}_{\alpha + 1} \) .
4. \( {B}_{\alpha } \triangleq \mathop{\bigcup }\limits_{{\beta \in \alpha }}{B}_{\beta },\alpha \in {K}_{\mathrm{{II}}} \)
then
\[
L\left\lbrack {K;F}\right\rbrack \triangleq \mathop{\bigcup }\limits_{{\alpha \in {On}}}{B}_{\alpha }
\]
Remark. Since \( K \) is transitive, \( {B}_{\alpha } \) is transitive for each \( \alpha \) . Then \( \left\langle {{\mathbf{B}}_{\alpha } \mid \alpha \in {On}}\right\rangle \) satisfies the conditions 1-3 of page 68. Consequently we can prove the following
Theorem 7.36. \( L\left\lbrack {K;F}\right\rbrack \) is a standard transitive model of \( {ZF} \) and \( {On} \subseteq L\left\lbrack {K;F}\right\rbrack \) .
Definition 7.37. |
1329_[肖梁] Abstract Algebra (2022F) | Definition 11.2.8 |
Definition 11.2.8. Let \( I \) and \( J \) be ideals of a ring \( R \) .
(1) Define the sum of ideals to be
\[
I + J = \{ a + b \mid a \in I, b \in J\} .
\]
(2) Define the product of ideals to be
\[
{IJ} = \{ \text{ finite sums of elements }{ab}\text{ for }a \in I, b \in J\} .
\]
Caveat 11.2.9. In general, it is not true that all elements of \( {IJ} \) can be written as a pure product \( {ab} \) for \( a \in I \) and \( b \in J \) . For example, take \( R = \mathbb{Z}\left\lbrack x\right\rbrack \) and \( I = \left( {2, x}\right) = \{ f\left( x\right) \in \mathbb{Z}\left\lbrack x\right\rbrack \mid f\left( 0\right) \in 2\mathbb{Z}\} \) . Then \( {x}^{2} + 4 \in {I}^{2} \) yet it cannot be written in the form \( {ab} \) with \( a, b \in I \) .
Example 11.2.10. If \( R \) is a commutative ring and \( I = \left( {{a}_{1},\ldots ,{a}_{s}}\right) \) and \( J = \left( {{b}_{1},\ldots ,{b}_{t}}\right) \) , then
\[
I + J = \left( {{a}_{1},\ldots ,{a}_{s},{b}_{1},\ldots ,{b}_{t}}\right) ,\;{IJ} = \left( {{a}_{1}{b}_{1},\ldots ,{a}_{1}{b}_{t},\ldots ,{a}_{i}{b}_{j},\ldots ,{a}_{s}{b}_{t}}\right) .
\]
Remark 11.2.11. It is important to explain the practical meaning of taking quotient rings: it amounts to imposing relations among the generators. We explain this through an example: for \( k \) a field, we show that
\[
k\left\lbrack {x, y, z}\right\rbrack /\left( {x - {y}^{2}, y - {z}^{3}}\right) \cong k\left\lbrack z\right\rbrack .
\]
Indeed, for example, the element \( {x}^{2}y \) can be written as
\[
{x}^{2}y = {\left( x - {y}^{2} + {y}^{2}\right) }^{2}y = \left( {x - {y}^{2}}\right) \cdot * + {y}^{4}y = \left( {x - {y}^{2}}\right) \cdot * + {\left( y - {z}^{3} + {z}^{3}\right) }^{5}
\]
\[
= \left( {x - {y}^{2}}\right) \cdot * + \left( {y - {z}^{3}}\right) \cdot * + {z}^{15}.
\]
So \( {x}^{2}y \) is equivalent to \( {z}^{15} \) in the quotient.
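This kind of rewriting is exactly multivariate polynomial division. A small sketch (assuming SymPy is available; not part of the notes) reproduces the computation above by reducing \( x^2 y \) modulo the ideal \( (x - y^2, y - z^3) \) with respect to the lexicographic order \( x > y > z \):

```python
from sympy import symbols, reduced

x, y, z = symbols('x y z')

# Divide x^2*y by the generators x - y^2 and y - z^3; with the lex order
# x > y > z the variables x and y are eliminated and only z survives.
quotients, remainder = reduced(x**2 * y, [x - y**2, y - z**3],
                               x, y, z, order='lex')
print(remainder)    # expected: z**15, matching the hand computation above
```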
For another example, consider a homomorphism
\[
{\phi }_{i} : \mathbb{R}\left\lbrack x\right\rbrack \rightarrow \mathbb{C}
\]
\[
f\left( x\right) \mapsto f\left( i\right)
\]
The kernel \( \ker {\phi }_{i} = \left( {{x}^{2} + 1}\right) \) . So
\[
\mathbb{R}\left\lbrack x\right\rbrack /\left( {{x}^{2} + 1}\right) \cong \mathbb{C}.
\]
(Namely, we are imposing the relation \( {x}^{2} + 1 = 0 \) in the ring.) We also point out that this is the prototype for field extensions later, describing \( \mathbb{C} \) in terms of \( \mathbb{R} \) .
## Extended readings after Section 11
11.3. Quaternions over \( \mathbb{Q} \) . In the construction of the Hamilton quaternions, we do not really need the coefficients to lie in \( \mathbb{R} \) . In fact, for nonzero numbers \( A, B \in \mathbb{Q} \), we may define a quaternion ring over \( \mathbb{Q} \) :
\[
{D}_{A, B} \mathrel{\text{:=}} \{ a + {bi} + {cj} + {dij} \mid a, b, c, d \in \mathbb{Q}\}
\]
where the multiplications are \( \mathbb{Q} \) -linear and are governed by
\[
{i}^{2} = A,\;{j}^{2} = B,\;{ij} = - {ji}.
\]
When \( A = B = - 1 \) and if we change the coefficients from \( \mathbb{Q} \) to \( \mathbb{R} \), then we recover the Hamilton quaternions.
It is an interesting fact (which is also important in number theory) that such a \( {D}_{A, B} \) is either isomorphic to the matrix ring \( {\operatorname{Mat}}_{2 \times 2}\left( \mathbb{Q}\right) \) or is a division ring. (For example, if both \( A \) and \( B \) are negative then \( {D}_{A, B} \) is a division ring for the "same" reason as for \( \mathbb{H} \) . Yet for each prime \( p \), if \( A \) is exactly divisible by \( p \) and \( B \) is an integer whose reduction modulo \( p \) is not a square, then \( {D}_{A, B} \) is a division ring "for reasons at \( p \) ".)
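The relations \( i^2 = A \), \( j^2 = B \), \( ij = -ji \) determine the multiplication completely; in particular \( (ij)^2 = -AB \). The following sketch (an illustration with made-up test elements, not taken from the notes) models \( D_{A,B} \) by 4-tuples of rationals, checks the defining relations, and verifies that the norm \( a^2 - Ab^2 - Bc^2 + ABd^2 \) is multiplicative, which is the same mechanism that produces inverses in \( \mathbb{H} \).

```python
from fractions import Fraction as F

# An element a + b*i + c*j + d*ij of D_{A,B} is stored as (a, b, c, d); the
# product below is forced by i^2 = A, j^2 = B, ij = -ji (writing k = ij).
A, B = F(-1), F(-1)                 # A = B = -1 gives the Hamilton relations

def mul(p, q):
    a1, b1, c1, d1 = p
    a2, b2, c2, d2 = q
    return (a1*a2 + A*b1*b2 + B*c1*c2 - A*B*d1*d2,
            a1*b2 + b1*a2 - B*c1*d2 + B*d1*c2,
            a1*c2 + c1*a2 + A*b1*d2 - A*d1*b2,
            a1*d2 + d1*a2 + b1*c2 - c1*b2)

def norm(p):                        # p times its conjugate, a rational number
    a, b, c, d = p
    return a*a - A*b*b - B*c*c + A*B*d*d

i = (F(0), F(1), F(0), F(0))
j = (F(0), F(0), F(1), F(0))
print(mul(i, i) == (A, 0, 0, 0), mul(j, j) == (B, 0, 0, 0))    # i^2 = A, j^2 = B
print(mul(i, j) == tuple(-t for t in mul(j, i)))                # ij = -ji
p, q = (F(1), F(2), F(3), F(4)), (F(5), F(-1), F(2), F(7))
print(norm(mul(p, q)) == norm(p) * norm(q))                     # norm is multiplicative
```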
12.1. Chinese remainder theorem. Recall that the classical Chinese remainder theorem can be restated as follows: if \( {n}_{1},\ldots ,{n}_{r} \) are pair-wise coprime integers, then
\[
\mathbb{Z} \rightarrow \mathbb{Z}/{n}_{1}\mathbb{Z} \times \cdots \times \mathbb{Z}/{n}_{r}\mathbb{Z}
\]
is surjective and its kernel is \( {n}_{1}\mathbb{Z} \cap \cdots \cap {n}_{r}\mathbb{Z} = {n}_{1}\cdots {n}_{r}\mathbb{Z} \) .
Definition 12.1.1. Let \( R \) be a commutative ring. We say two ideals \( I \) and \( J \) of \( R \) are comaximal if \( I + J = R \), i.e. \( 1 \in R \) can be written as \( 1 = a + b \) with \( a \in I \) and \( b \in J \) .
(Note that in the case \( R = \mathbb{Z}, m, n \in \mathbb{Z} \) are coprime if and only if \( \left( m\right) + \left( n\right) = \left( {\gcd \left( {m, n}\right) }\right) = \) (1).)
Theorem 12.1.2. Let \( {I}_{1},\ldots ,{I}_{k} \) be ideals of a commutative ring \( R \) . Then the natural map
\[
\phi : R \rightarrow R/{I}_{1} \times \cdots \times R/{I}_{k}
\]
\[
x \mapsto \left( {x{\;\operatorname{mod}\;{I}_{1}},\ldots, x{\;\operatorname{mod}\;{I}_{k}}}\right)
\]
is a ring homomorphism with kernel \( {I}_{1} \cap \cdots \cap {I}_{k} \) .
If \( {I}_{1},\ldots ,{I}_{k} \) are pairwise comaximal, then
(1) \( \phi \) is surjective, and
(2) \( {I}_{1} \cap \cdots \cap {I}_{k} = {I}_{1}\cdots {I}_{k} \) .
In particular, this implies that
\[
\phi : R/{I}_{1}\cdots {I}_{k} \cong R/{I}_{1} \cap \cdots \cap {I}_{k}\overset{ \simeq }{ \rightarrow }R/{I}_{1} \times \cdots \times R/{I}_{k}.
\]
Proof. The first claim on \( \phi \) being a homomorphism with kernel \( {I}_{1} \cap \cdots \cap {I}_{k} \) is clear. We now prove (1) and (2).
We first assume that \( k = 2 \) . As \( {I}_{1}{I}_{2} \subseteq {I}_{1} \) and \( {I}_{1}{I}_{2} \subseteq {I}_{2} \), we have \( {I}_{1}{I}_{2} \subseteq {I}_{1} \cap {I}_{2} \) . Now, if \( R = {I}_{1} + {I}_{2} \), we may write \( 1 = {a}_{1} + {a}_{2} \) with \( {a}_{1} \in {I}_{1} \) and \( {a}_{2} \in {I}_{2} \) . Then for \( b \in {I}_{1} \cap {I}_{2} \) ,
\[
b = \underset{\text{in }{I}_{2}{I}_{1}}{\underbrace{b{a}_{1}}} + \underset{\text{in }{I}_{1}{I}_{2}}{\underbrace{b{a}_{2}}} \in {I}_{1}{I}_{2}
\]
This implies that \( {I}_{1}{I}_{2} = {I}_{1} \cap {I}_{2} \) .
To see that \( \phi \) is surjective in this case, we note that
\[
\phi \left( {a}_{1}\right) = \left( {{a}_{1}{\;\operatorname{mod}\;{I}_{1}},{a}_{1} = 1 - {a}_{2}{\;\operatorname{mod}\;{I}_{2}}}\right) = \left( {0,1}\right) ;
\]
\[
\phi \left( {a}_{2}\right) = \left( {{a}_{2} = 1 - {a}_{1}{\;\operatorname{mod}\;{I}_{1}},{a}_{2}{\;\operatorname{mod}\;{I}_{2}}}\right) = \left( {1,0}\right) ;
\]
Thus, for any \( \left( {{x}_{1}{\;\operatorname{mod}\;{I}_{1}},{x}_{2}{\;\operatorname{mod}\;{I}_{2}}}\right) \in R/{I}_{1} \times R/{I}_{2} \), it is \( \phi \left( {{a}_{1}{x}_{2} + {a}_{2}{x}_{1}}\right) \) .
In general, we use induction to show
\[
\phi : R \rightarrow R/{I}_{1} \times R/{I}_{2}\cdots {I}_{k} \rightarrow R/{I}_{1} \times \cdots \times R/{I}_{k}.
\]
For this, we need to check \( {I}_{1} \) and \( {I}_{2}\cdots {I}_{k} \) are comaximal, i.e. \( {I}_{1} + {I}_{2}\cdots {I}_{k} = R \) . This is because for each \( i = 2,\ldots, k,1 = {a}_{i} + {b}_{i} \) for \( {a}_{i} \in {I}_{1} \) and \( {b}_{i} \in {I}_{i} \) . Thus
\[
1 = \left( {{a}_{2} + {b}_{2}}\right) \cdots \left( {{a}_{k} + {b}_{k}}\right) = \underset{\text{in }{I}_{1}}{\underbrace{\left( \text{sum of products each containing some }{a}_{i}\right) }} + \underset{\text{in }{I}_{2}\cdots {I}_{k}}{\underbrace{{b}_{2}\cdots {b}_{k}}}.
\]
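For \( R = \mathbb{Z} \) the proof is completely constructive: the extended Euclidean algorithm produces the decomposition \( 1 = a_1 + a_2 \) with \( a_1 \in (n_1) \), \( a_2 \in (n_2) \), and \( a_1 x_2 + a_2 x_1 \) is then a simultaneous solution. A minimal sketch of the \( k = 2 \) case (illustrative code, not part of the notes):

```python
# Write 1 = a1 + a2 with a1 in (n1) and a2 in (n2) via the extended Euclidean
# algorithm, then solve x = x1 (mod n1), x = x2 (mod n2) as x = a1*x2 + a2*x1.
def ext_gcd(a, b):
    if b == 0:
        return a, 1, 0
    g, u, v = ext_gcd(b, a % b)
    return g, v, u - (a // b) * v

def crt(x1, n1, x2, n2):
    g, u, v = ext_gcd(n1, n2)
    assert g == 1, "the ideals (n1) and (n2) must be comaximal"
    a1, a2 = u * n1, v * n2          # 1 = a1 + a2
    return (a1 * x2 + a2 * x1) % (n1 * n2)

x = crt(2, 7, 5, 12)
print(x, x % 7, x % 12)              # e.g. 65, which is 2 mod 7 and 5 mod 12
```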
12.2. Quadratic integer rings.
12.2.1. Quadratic integer rings. Fix a square-free integer \( D \) (positive or negative), i.e. \( D = \) \( \pm \) product of distinct primes, and \( D \neq 1 \) . Let
\[
\mathbb{Q}\left( \sqrt{D}\right) = \{ x + y\sqrt{D} \mid x, y \in \mathbb{Q}\}
\]
be the "quadratic field" (it is a two-dimensional \( \mathbb{Q} \) -vector space).
The "correct" analogue of \( \mathbb{Z} \) inside \( \mathbb{Q}\left( \sqrt{D}\right) \) is
\[
\mathcal{O} = {\mathcal{O}}_{\mathbb{Q}\left( \sqrt{D}\right) } \mathrel{\text{:=}} \left\{ \begin{array}{ll} \mathbb{Z}\left\lbrack \sqrt{D}\right\rbrack & \text{ if }D \equiv 2,3{\;\operatorname{mod}\;4}; \\ \mathbb{Z}\left\lbrack \frac{1 + \sqrt{D}}{2}\right\rbrack & \text{ if }D \equiv 1{\;\operatorname{mod}\;4}. \end{array}\right.
\]
This is because \( z = \sqrt{D} \) or \( z = \frac{1 + \sqrt{D}}{2} \) is a zero of \( {z}^{2} - D \) or of \( {z}^{2} - z + \frac{1 - D}{4} \in \mathbb{Z}\left\lbrack z\right\rbrack \), respectively, in the two cases.
One sometimes writes this in a diagram as:
![97650b70-8b1b-4cc6-91b2-de9112f1d8bc_77_0.jpg](images/97650b70-8b1b-4cc6-91b2-de9112f1d8bc_77_0.jpg)
Notation 12.2.2. Define the following norm map:
\[
\mathrm{{Nm}} : \mathbb{Q}\left( \sqrt{D}\right) \rightarrow \mathbb{Q}
\]
\[
\operatorname{Nm}\left( {x + y\sqrt{D}}\right) \mathrel{\text{:=}} \left( {x + y\sqrt{D}}\right) \left( {x - y\sqrt{D}}\right) = {x}^{2} - D{y}^{2}.
\]
(If \( D < 0,\operatorname{Nm}\left( a\right) \geq 0 \) for any \( a \in \mathbb{Q}\left( \sqrt{D}\right) \) .)
Properties 12.2.3. (1) If \( x + y\sqrt{D} \in \mathcal{O} \), then \( \operatorname{Nm}\left( {x + y\sqrt{D}}\right) \in \mathbb{Z} \) .
(2) We can define a conjugation: \( \overline{x + y\sqrt{D}} \mathrel{\text{:=}} x - y\sqrt{D} \) . Then
\[
\operatorname{Nm}\left( a\right) = a\bar{a},\;\text{ with }a \in \mathcal{O}.
\]
When \( D < 0 \), this is nothing but just the complex conjugation. But when \( D > 0 \) , this makes sense purely because of algebraic structure we have on \( \mathcal{O} \) .
(3) \( \mathrm{{Nm}} \) is multiplicative, i.e. \( \mathrm{{Nm}}\left( {ab}\right) = \mathrm{{Nm}}\left( a\right) \mathrm{{Nm}}\left( b\right) \) for \( a, b \in {\mathcal{O}}_{\mathbb{Q}\left( \sqrt{D}\right) } \) . (Again, this is clear when \( D < 0 \) from the usual story of complex numbers, but when \( D > 0 \), it follows from a purely algebraic argument.)
Proof. We leave (1) and (2) as exercises. For (3), we use (2) to note that \( \operatorname{Nm}\left( {ab}\right) = {ab}\overline{ab} = a\bar{a} \cdot b\bar{b} = \operatorname{Nm}\left( a\right) \operatorname{Nm}\left( b\right) \) . (Here we used that \( \overline{ab} = \bar{a}\bar{b} \) .)
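A quick numerical illustration of (1) and (3) (a sketch, not part of the notes; the sample elements are arbitrary): represent \( x + y\sqrt{D} \) as a pair of rationals, multiply via \( (x_1 + y_1\sqrt{D})(x_2 + y_2\sqrt{D}) = (x_1x_2 + Dy_1y_2) + (x_1y_2 + y_1x_2)\sqrt{D} \), and check multiplicativity of the norm, here for \( D = 5 \).

```python
from fractions import Fraction

# Elements of Q(sqrt(D)) as pairs (x, y) standing for x + y*sqrt(D), with D = 5.
D = 5

def mul(a, b):
    (x1, y1), (x2, y2) = a, b
    return (x1 * x2 + D * y1 * y2, x1 * y2 + y1 * x2)

def nm(a):
    x, y = a
    return x * x - D * y * y

a = (Fraction(1, 2), Fraction(3, 2))     # (1 + 3*sqrt(5))/2, an element of O
b = (Fraction(7), Fraction(-2))          # 7 - 2*sqrt(5)
print(nm(a))                             # -11, an integer, as in Property (1)
print(nm(mul(a, b)) == nm(a) * nm(b))    # True: the norm is multiplicative
```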
1079_(GTM237)An Introduction to Operators on the Hardy-Hilbert Space | Definition 5.1.7 |
Definition 5.1.7. For functions \( f \) and \( g \) analytic on \( \mathbb{D}, f \) is subordinate to \( g \) if there exists an analytic function \( \phi : \mathbb{D} \rightarrow \mathbb{D} \) with \( \phi \left( 0\right) = 0 \) such that \( f = g \circ \phi \) .
The following corollary is a well-known classical theorem.
Corollary 5.1.8 (Littlewood’s Subordination Theorem). If \( f \) in \( {\mathbf{H}}^{2} \) is subordinate to \( g \) in \( {\mathbf{H}}^{2} \), then \( \parallel f\parallel \leq \parallel g\parallel \) .
Proof. Apply the previous corollary to \( f = {C}_{\phi }g \) .
Reproducing kernels give a lot of information about composition operators. Recall that (see Definition 1.1.7), for \( \lambda \in \mathbb{D} \), the function \( {k}_{\lambda } \) defined by
\[
{k}_{\lambda }\left( z\right) = \frac{1}{1 - \bar{\lambda }z}
\]
has the property that \( \left( {f,{k}_{\lambda }}\right) = f\left( \lambda \right) \) for every \( f \) in \( {\mathbf{H}}^{2} \) .
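As a quick numerical illustration of the reproducing property (a sketch, not from the book; the function \( f(z) = 1/(1 - z/2) \) and the point \( \lambda \) are arbitrary choices), one can compute \( (f, k_\lambda) \) from Taylor coefficients and compare with \( f(\lambda) \):

```python
import numpy as np

# f(z) = 1/(1 - z/2) has Taylor coefficients a_n = 2^{-n}; k_lambda has
# coefficients conj(lambda)^n, so (f, k_lambda) = sum a_n * lambda^n = f(lambda).
N = 400
n = np.arange(N)
a = 0.5 ** n                          # coefficients of f
lam = 0.3 + 0.4j                      # a point of the unit disc

k = np.conj(lam) ** n                 # Taylor coefficients of k_lambda
inner = np.sum(a * np.conj(k))        # the H^2 inner product (f, k_lambda)
direct = 1 / (1 - lam / 2)            # f(lambda) in closed form
print(abs(inner - direct))            # tiny (truncation error only)
```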
Lemma 5.1.9. If \( {C}_{\phi } \) is a composition operator and \( {k}_{\lambda } \) is a reproducing kernel function, then \( {C}_{\phi }^{ * }{k}_{\lambda } = {k}_{\phi \left( \lambda \right) } \) .
Proof. For each \( f \) in \( {\mathbf{H}}^{2} \) ,
\[
\left( {f,{C}_{\phi }^{ * }{k}_{\lambda }}\right) = \left( {{C}_{\phi }f,{k}_{\lambda }}\right) = \left( {f \circ \phi ,{k}_{\lambda }}\right) = f\left( {\phi \left( \lambda \right) }\right) .
\]
But also
\[
\left( {f,{k}_{\phi \left( \lambda \right) }}\right) = f\left( {\phi \left( \lambda \right) }\right)
\]
and therefore
\[
\left( {f,{C}_{\phi }^{ * }{k}_{\lambda }}\right) = \left( {f,{k}_{\phi \left( \lambda \right) }}\right)
\]
for all \( f \in {\mathbf{H}}^{2} \) . This implies that \( {C}_{\phi }^{ * }{k}_{\lambda } = {k}_{\phi \left( \lambda \right) } \) .
Theorem 5.1.10. For every composition operator \( {C}_{\phi } \) ,
\[
\frac{1}{\sqrt{1 - {\left| \phi \left( 0\right) \right| }^{2}}} \leq \begin{Vmatrix}{C}_{\phi }\end{Vmatrix} \leq \frac{2}{\sqrt{1 - {\left| \phi \left( 0\right) \right| }^{2}}}.
\]
Proof. Using the previous lemma with \( \lambda = 0 \) yields
\[
{C}_{\phi }^{ * }{k}_{0} = {k}_{\phi \left( 0\right) }
\]
Recall (Theorem 1.1.8) that
\[
{\begin{Vmatrix}{k}_{\lambda }\end{Vmatrix}}^{2} = \frac{1}{1 - {\left| \lambda \right| }^{2}}
\]
and therefore \( \begin{Vmatrix}{k}_{0}\end{Vmatrix} = 1 \) and \( \begin{Vmatrix}{k}_{\phi \left( 0\right) }\end{Vmatrix} = \frac{1}{\sqrt{1 - {\left| \phi \left( 0\right) \right| }^{2}}} \) . Since
\[
\begin{Vmatrix}{k}_{\phi \left( 0\right) }\end{Vmatrix} = \begin{Vmatrix}{{C}_{\phi }^{ * }{k}_{0}}\end{Vmatrix} \leq \begin{Vmatrix}{C}_{\phi }^{ * }\end{Vmatrix}\begin{Vmatrix}{k}_{0}\end{Vmatrix}
\]
it follows that
\[
\frac{1}{\sqrt{1 - {\left| \phi \left( 0\right) \right| }^{2}}} \leq \begin{Vmatrix}{C}_{\phi }^{ * }\end{Vmatrix} = \begin{Vmatrix}{C}_{\phi }\end{Vmatrix}
\]
To prove the other inequality, we begin with the result from Theorem 5.1.5:
\[
\begin{Vmatrix}{C}_{\phi }\end{Vmatrix} \leq \sqrt{\frac{1 + \left| {\phi \left( 0\right) }\right| }{1 - \left| {\phi \left( 0\right) }\right| }}
\]
Observe that, for \( 0 \leq r < 1 \), we have the inequality
\[
\sqrt{\frac{1 + r}{1 - r}} = \sqrt{\frac{{\left( 1 + r\right) }^{2}}{1 - {r}^{2}}} = \frac{1 + r}{\sqrt{1 - {r}^{2}}} \leq \frac{2}{\sqrt{1 - {r}^{2}}}.
\]
It follows that
\[
\begin{Vmatrix}{C}_{\phi }\end{Vmatrix} \leq \sqrt{\frac{1 + \left| {\phi \left( 0\right) }\right| }{1 - \left| {\phi \left( 0\right) }\right| }} \leq \frac{2}{\sqrt{1 - {\left| \phi \left( 0\right) \right| }^{2}}}.
\]
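The two bounds in Theorem 5.1.10 can be observed numerically. The sketch below (illustrative only; the self-map \( \phi(z) = (z + 1/2)/(1 + z/2) \), the truncation degree \( N \), and the NumPy computation are choices made here, not taken from the book) builds the compression of \( C_\phi \) to polynomials of degree at most \( N \) in the basis \( \{z^n\} \); its spectral norm is a lower estimate of \( \|C_\phi\| \) and should fall between the two bounds.

```python
import numpy as np

# Column n of the compressed matrix of C_phi holds the Taylor coefficients of
# phi^n (truncated at degree N); the largest singular value then estimates
# ||C_phi|| from below and is compared with the bounds of Theorem 5.1.10.
a, N = 0.5, 60
geom = np.array([(-a) ** k for k in range(N + 1)])     # 1/(1 + a z) as a series
phi = np.convolve([a, 1.0], geom)[: N + 1]             # phi(z) = (z + a)/(1 + a z)

M = np.zeros((N + 1, N + 1))
col = np.zeros(N + 1); col[0] = 1.0                    # phi^0 = 1
for n in range(N + 1):
    M[:, n] = col
    col = np.convolve(col, phi)[: N + 1]               # multiply by phi, truncate

phi0 = abs(phi[0])                                      # |phi(0)| = 1/2
lower = 1 / np.sqrt(1 - phi0 ** 2)
upper = 2 / np.sqrt(1 - phi0 ** 2)
print(lower, np.linalg.norm(M, 2), upper)               # lower <= estimate <= upper
```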
We showed (Corollary 5.1.6) that \( \begin{Vmatrix}{C}_{\phi }\end{Vmatrix} = 1 \) if \( \phi \left( 0\right) = 0 \) . The converse is also true.
Corollary 5.1.11. The norm of the composition operator \( {C}_{\phi } \) is 1 if and only if \( \phi \left( 0\right) = 0 \) .
Proof. As indicated, we have already established (Corollary 5.1.6) that \( \begin{Vmatrix}{C}_{\phi }\end{Vmatrix} = \) 1 if \( \phi \left( 0\right) = 0 \) . Conversely, if \( \begin{Vmatrix}{C}_{\phi }\end{Vmatrix} = 1 \), then the inequality
\[
\frac{1}{\sqrt{1 - {\left| \phi \left( 0\right) \right| }^{2}}} \leq \begin{Vmatrix}{C}_{\phi }\end{Vmatrix}
\]
implies
\[
\frac{1}{\sqrt{1 - {\left| \phi \left( 0\right) \right| }^{2}}} \leq 1
\]
so \( \phi \left( 0\right) = 0 \) .
Composition operators are characterized as those operators whose adjoints map the set of reproducing kernels into itself.
Theorem 5.1.12. An operator \( A \) on \( {\mathbf{H}}^{2} \) is a composition operator if and only if \( {A}^{ * } \) maps the set of reproducing kernels into itself.
Proof. We showed above that \( {A}^{ * }{k}_{\lambda } = {k}_{\phi \left( \lambda \right) } \) when \( A = {C}_{\phi } \) . Conversely, suppose that for each \( \lambda \in \mathbb{D},{A}^{ * }{k}_{\lambda } = {k}_{{\lambda }^{\prime }} \) for some \( {\lambda }^{\prime } \in \mathbb{D} \) . Define \( \phi : \mathbb{D} \rightarrow \mathbb{D} \) by \( \phi \left( \lambda \right) = {\lambda }^{\prime } \)
Notice that, for \( f \in {\mathbf{H}}^{2} \) ,
\[
\left( {{Af},{k}_{\lambda }}\right) = \left( {f,{A}^{ * }{k}_{\lambda }}\right) = \left( {f,{k}_{\phi \left( \lambda \right) }}\right) = f\left( {\phi \left( \lambda \right) }\right) .
\]
If we take \( f\left( z\right) = z \), then \( g = {Af} \) is in \( {\mathbf{H}}^{2} \), and is thus analytic. But then, by the above equation, we have
\[
g\left( \lambda \right) = \left( {g,{k}_{\lambda }}\right) = \left( {{Af},{k}_{\lambda }}\right) = f\left( {\phi \left( \lambda \right) }\right) = \phi \left( \lambda \right) .
\]
Therefore \( g = \phi \) and \( \phi \) is analytic, so the composition operator \( {C}_{\phi } \) is well-defined and bounded.
It follows that \( A = {C}_{\phi } \), since
\[
\left( {Af}\right) \left( \lambda \right) = \left( {{Af},{k}_{\lambda }}\right)
\]
\[
= f\left( {\phi \left( \lambda \right) }\right) \;\text{ (as shown above) }
\]
\[
= \left( {{C}_{\phi }f}\right) \left( \lambda \right)
\]
for all \( f \) in \( {\mathbf{H}}^{2} \) .
There are other interesting characterizations of composition operators. Recall that \( {e}_{n}\left( z\right) = {z}^{n} \) for nonnegative integers \( n \) .
Theorem 5.1.13. An operator \( A \) in \( {\mathbf{H}}^{2} \) is a composition operator if and only if \( A{e}_{n} = {\left( A{e}_{1}\right) }^{n} \) for \( n = 0,1,2,\ldots \) .
Proof. If \( A = {C}_{\phi } \), then \( A{e}_{1} = {C}_{\phi }{e}_{1} = {C}_{\phi }z = \phi \) and \( A{e}_{n} = {C}_{\phi }{e}_{n} = {C}_{\phi }{z}^{n} = \) \( {\phi }^{n} \), and therefore \( {\left( A{e}_{1}\right) }^{n} = A{e}_{n} \) .
Conversely, suppose \( A{e}_{n} = {\left( A{e}_{1}\right) }^{n} \) for all nonnegative integers \( n \) . Define \( \phi \) by \( \phi = A{e}_{1} \) . Since \( A{e}_{1} \) is in \( {\mathbf{H}}^{2},\phi \) is analytic on \( \mathbb{D} \) .
To show that \( A = {C}_{\phi } \), it suffices to prove that \( \left| {\phi \left( z\right) }\right| < 1 \) for all \( z \in \mathbb{D} \) , since then it would follow that the composition operator \( {C}_{\phi } \) is well-defined and bounded. Then
\[
A{e}_{n} = {\left( A{e}_{1}\right) }^{n} = {\phi }^{n} = {C}_{\phi }{z}^{n} = {C}_{\phi }{e}_{n};
\]
thus by linearity and continuity, it would follow that \( A = {C}_{\phi } \) .
To show that \( \left| {\phi \left( z\right) }\right| < 1 \), note that \( {\phi }^{n} = A{e}_{n} \) implies that \( \begin{Vmatrix}{\phi }^{n}\end{Vmatrix} \leq \parallel A\parallel \) for all nonnegative \( n \) . We claim that \( \left| {\widetilde{\phi }\left( {e}^{i\theta }\right) }\right| \leq 1 \) for almost all \( \theta \) . Consider any \( \delta > 0 \) and define the set \( E \) by \( E = \left\{ {{e}^{i\theta } : \left| {\widetilde{\phi }\left( {e}^{i\theta }\right) }\right| \geq 1 + \delta }\right\} \) . Then
\[
{\begin{Vmatrix}{\phi }^{n}\end{Vmatrix}}^{2} = \frac{1}{2\pi }{\int }_{0}^{2\pi }{\left| \widetilde{\phi }\left( {e}^{i\theta }\right) \right| }^{2n}{d\theta }
\]
\[
\geq \frac{1}{2\pi }{\int }_{E}|\overset{\sim }{\phi }({e}^{i\theta }){|}^{2n}\;{d\theta }
\]
\[
\geq {\int }_{E}{\left( 1 + \delta \right) }^{2n}{dm}
\]
\[
= m\left( E\right) {\left( 1 + \delta \right) }^{2n}
\]
where \( m\left( E\right) \) is the measure of \( E \) . If \( m\left( E\right) > 0 \), this would imply that \( \left\{ \begin{Vmatrix}{\phi }^{n}\end{Vmatrix}\right\} \rightarrow \infty \) as \( n \rightarrow \infty \), which contradicts the fact that \( \begin{Vmatrix}{\phi }^{n}\end{Vmatrix} \leq \parallel A\parallel \) for all \( n \) . Hence \( m\left( E\right) = 0 \) and, since \( \delta > 0 \) was arbitrary, \( \left| {\widetilde{\phi }\left( {e}^{i\theta }\right) }\right| \leq 1 \) for almost all \( \theta \) . It follows that \( \left| {\phi \left( z\right) }\right| \leq 1 \) for all \( z \) in \( \mathbb{D} \) (Corollary 1.1.24).
We claim that \( \left| {\phi \left( z\right) }\right| < 1 \) for all \( z \) in \( \mathbb{D} \) . If not, then there exists \( {z}_{0} \in \mathbb{D} \) such that \( \left| {\phi \left( {z}_{0}\right) }\right| = 1 \) . By the maximum modulus principle ([9, pp. 79,128], [47, p. 212]) this implies that \( \phi \) is a constant function; say \( \phi \left( z\right) = \lambda \), with \( \lambda \) of modulus 1 . Since \( A{e}_{n} = {\left( A{e}_{1}\right) }^{n} \), it follows that \( A{e}_{n} = {\lambda }^{n} \) . But then \( \left( {{A}^{ * }{e}_{0},{e}_{n}}\right) = \left( {{e}_{0}, A{e}_{n}}\right) = \left( {{e}_{0},{\lambda }^{n}}\right) = {\bar{\lambda }}^{n} \), so
\[
{\begin{Vmatrix}{A}^{ * }{e}_{0}\end{Vmatrix}}^{2} = \mathop{\sum }\limits_{{n = 0}}^{\infty }{\left| \left( {A}^{ * }{e}_{0},{e}_{n}\right) \right| }^{2} = \mathop{\sum }\limits_{{n = 0}}^{\infty }{\left| \lambda \right| }^{2n} = \infty ,
\]
since \( \left| \lambda \right| = 1 \) . This is a contradiction.
Corollary 5.1.14. The operator \( A \) on \( {\mathbf{H}}^{2} \) is a composition operator if and only if it is multiplicative in the sens |
1359_[陈省身] Lectures on Differential Geometry | Definition 2.1 |
Definition 2.1. The quotient space \( {\mathcal{F}}_{p}/{\mathcal{H}}_{p} \) is called the cotangent space of \( M \) at \( p \), denoted by \( {T}_{p}^{ * } \) (or \( {T}_{p}^{ * }\left( M\right) \) ). The \( {\mathcal{H}}_{p} \) -equivalence class of the function germ \( \left\lbrack f\right\rbrack \) is denoted by \( \left\lbrack \widetilde{f}\right\rbrack \) or \( {\left( df\right) }_{p} \), and is called a cotangent vector on \( M \) at \( p \) .
\( {T}_{p}^{ * } \) is a linear space. It has a linear structure induced from the linear space \( {\mathcal{F}}_{p} \), i.e. for \( \left\lbrack f\right\rbrack ,\left\lbrack g\right\rbrack \in {\mathcal{F}}_{p}, a \in \mathbb{R} \) we have
\[
\begin{cases} \widetilde{\left\lbrack f\right\rbrack } + \widetilde{\left\lbrack g\right\rbrack } & = \left( \widetilde{\left\lbrack f\right\rbrack + \left\lbrack g\right\rbrack }\right) , \\ a \cdot \widetilde{\left\lbrack f\right\rbrack } & = \left( \widetilde{a\left\lbrack f\right\rbrack }\right) . \end{cases}
\]
(2.9)
Theorem 2.2. Suppose \( {f}^{1},\cdots ,{f}^{s} \in {C}_{p}^{\infty } \) and \( F\left( {{y}^{1},\cdots ,{y}^{s}}\right) \) is a smooth function in a neighborhood of \( \left( {{f}^{1}\left( p\right) ,\cdots ,{f}^{s}\left( p\right) }\right) \in {\mathbb{R}}^{s} \) . Then \( f = \) \( F\left( {{f}^{1},\cdots ,{f}^{s}}\right) \in {C}_{p}^{\infty }\; \) and
\[
{\left( df\right) }_{p} = \mathop{\sum }\limits_{{k = 1}}^{s}\left\lbrack {\left( {\frac{\partial F}{\partial {f}^{k}}\left( {{f}^{1}\left( p\right) ,\cdots ,{f}^{s}\left( p\right) }\right) }\right) \cdot {\left( d{f}^{k}\right) }_{p}}\right\rbrack .
\]
(2.10)
Proof. Suppose the domain of \( {f}^{k} \) containing \( p \) is \( {U}_{k} \) . Then \( f \) is defined in \( \mathop{\bigcap }\limits_{{k = 1}}^{s}{U}_{k} \), and for \( q \in \mathop{\bigcap }\limits_{{k = 1}}^{s}{U}_{k} \)
\[
f\left( q\right) = F\left( {{f}^{1}\left( q\right) ,\cdots ,{f}^{s}\left( q\right) }\right) .
\]
Since \( F \) is a smooth function, \( f \in {C}_{p}^{\infty } \) . Let \( {a}_{k} = \frac{\partial F}{\partial {f}^{k}}\left( {{f}^{1}\left( p\right) ,\cdots ,{f}^{s}\left( p\right) }\right) \) . Then for any \( \gamma \in {\Gamma }_{p} \) ,
\[
\ll \gamma ,\left\lbrack f\right\rbrack \gg = {\left. \frac{d}{dt}\right| }_{t = 0}\left( {f \circ \gamma }\right)
\]
\[
= {\left. \frac{d}{dt}\right| }_{t = 0}F\left( {{f}^{1} \circ \gamma \left( t\right) ,\cdots ,{f}^{s} \circ \gamma \left( t\right) }\right)
\]
\[
= {\left. \mathop{\sum }\limits_{{k = 1}}^{s}{a}_{k}\frac{d}{dt}\right| }_{t = 0}\left( {{f}^{k} \circ \gamma \left( t\right) }\right)
\]
\[
= \ll \gamma ,\mathop{\sum }\limits_{{k = 1}}^{s}{a}_{k}\left\lbrack {f}^{k}\right\rbrack \gg
\]
Thus
\[
\left\lbrack f\right\rbrack - \mathop{\sum }\limits_{{k = 1}}^{s}{a}_{k}\left\lbrack {f}^{k}\right\rbrack \in {\mathcal{H}}_{p}
\]
i.e.,
\[
{\left( df\right) }_{p} = \mathop{\sum }\limits_{{k = 1}}^{s}{a}_{k}{\left( d{f}^{k}\right) }_{p}
\]
Corollary 1. For any \( f, g \in {C}_{p}^{\infty }, a \in \mathbb{R} \), we have
\[
d{\left( f + g\right) }_{p} = {\left( df\right) }_{p} + {\left( dg\right) }_{p}
\]
(2.11)
\[
d{\left( af\right) }_{p} = a \cdot {\left( df\right) }_{p},
\]
(2.12)
\[
d{\left( fg\right) }_{p} = f\left( p\right) \cdot {\left( dg\right) }_{p} + g\left( p\right) \cdot {\left( df\right) }_{p}.
\]
\( \left( {2.13}\right) \)
We see that (2.11) and (2.12) are the same as (2.9), and (2.13) follows directly from Theorem 2.2.
Corollary 2. \( \dim {T}_{p}^{ * } = m \) .
Proof. Choose an admissible coordinate chart \( \left( {U,{\varphi }_{U}}\right) \), and define local coordinates \( {u}^{i} \) by
\[
{u}^{i}\left( q\right) = {\left( {\varphi }_{U}\left( q\right) \right) }^{i} = {x}^{i} \circ {\varphi }_{U}\left( q\right) ,\;q \in U,
\]
(2.14)
where \( {x}^{i} \) is a given coordinate system in \( {\mathbb{R}}^{m} \) . Then \( {u}^{i} \in {C}_{p}^{\infty },{\left( d{u}^{i}\right) }_{p} \in {T}_{p}^{ * } \) . We will prove that \( \left\{ {{\left( d{u}^{i}\right) }_{p},1 \leq i \leq m}\right\} \) is a basis for \( {T}_{p}^{ * } \) .
Suppose \( {\left( df\right) }_{p} \in {T}_{p}^{ * } \) . Then \( f \circ {\varphi }_{U}^{-1} \) is a smooth function defined on an open set of \( {\mathbb{R}}^{m} \) . Let \( F\left( {{x}^{1},\ldots ,{x}^{m}}\right) = f \circ {\varphi }_{U}^{-1}\left( {{x}^{1},\ldots ,{x}^{m}}\right) \) . Thus
\[
f = F\left( {{u}^{1},\ldots ,{u}^{m}}\right)
\]
(2.15)
By Theorem 2.2,
\[
{\left( df\right) }_{p} = \mathop{\sum }\limits_{{i = 1}}^{m}\left\lbrack {\left( {\frac{\partial F}{\partial {u}^{i}}\left( {{u}^{1}\left( p\right) ,\cdots ,{u}^{m}\left( p\right) }\right) }\right) \cdot {\left( d{u}^{i}\right) }_{p}}\right\rbrack .
\]
(2.16)
Thus \( {\left( df\right) }_{p} \) is a linear combination of the \( {\left( d{u}^{i}\right) }_{p},1 \leq i \leq m \) .
If there exist real numbers \( {a}_{i},1 \leq i \leq m \), such that
\[
\mathop{\sum }\limits_{{i = 1}}^{m}{a}_{i}{\left( d{u}^{i}\right) }_{p} = 0
\]
(2.17)
i.e.
\[
\mathop{\sum }\limits_{{i = 1}}^{m}{a}_{i}\left\lbrack {u}^{i}\right\rbrack \in {\mathcal{H}}_{p}
\]
then for any \( \gamma \in {\Gamma }_{p} \), we have
\[
\ll \gamma ,\mathop{\sum }\limits_{{i = 1}}^{m}{a}_{i}\left\lbrack {u}^{i}\right\rbrack \gg = {\left. \mathop{\sum }\limits_{{i = 1}}^{m}{a}_{i}\frac{d\left( {{u}^{i} \circ \gamma \left( t\right) }\right) }{dt}\right| }_{t = 0} = 0.
\]
(2.18)
Choose \( {\lambda }_{k} \in {\Gamma }_{p},1 \leq k \leq m \) such that
\[
{u}^{i} \circ {\lambda }_{k}\left( t\right) = {u}^{i}\left( p\right) + {\delta }_{k}^{i}t
\]
(2.19)
where
\[
{\delta }_{k}^{i} = \left\{ \begin{array}{ll} 1, & i = k \\ 0, & i \neq k \end{array}\right.
\]
Then
\[
{\left. \frac{d\left( {{u}^{i} \circ {\lambda }_{k}\left( t\right) }\right) }{dt}\right| }_{t = 0} = {\delta }_{k}^{i}
\]
Let \( \gamma = {\lambda }_{k} \) . By (2.18) \( {a}_{k} = 0,1 \leq k \leq m \), i.e., \( \left\{ {{\left( d{u}^{i}\right) }_{p},1 \leq i \leq m}\right\} \) is linearly independent. Therefore it forms a basis for \( {T}_{p}^{ * } \), called the natural basis of \( {T}_{p}^{ * } \) with respect to the local coordinate system \( {u}^{i} \) . Thus \( {T}_{p}^{ * } \) is an \( m \) -dimensional linear space.
By definition, \( \left\lbrack f\right\rbrack - \left\lbrack g\right\rbrack \in {\mathcal{H}}_{p} \) if and only if \( \ll \gamma ,\left\lbrack f\right\rbrack \gg = \ll \gamma ,\left\lbrack g\right\rbrack \gg \) for all \( \gamma \in {\Gamma }_{p} \), so we can define
\[
\ll \gamma ,{\left( df\right) }_{p} \gg = \ll \gamma ,\left\lbrack f\right\rbrack \gg ,\;\gamma \in {\Gamma }_{p},\;{\left( df\right) }_{p} \in {T}_{p}^{ * }.
\]
(2.20)
Now define a relation \( \sim \) in \( {\Gamma }_{p} \) as follows. Suppose \( \gamma ,{\gamma }^{\prime } \in {\Gamma }_{p} \) . Then \( \gamma \sim {\gamma }^{\prime } \) if and only if for any \( {\left( df\right) }_{p} \in {T}_{p}^{ * } \) ,
\[
\ll \gamma ,{\left( df\right) }_{p} \gg = \ll {\gamma }^{\prime },{\left( df\right) }_{p} \gg .
\]
(2.21)
Obviously this is an equivalence relation. Denote the equivalence class of \( \gamma \) by \( \left\lbrack \gamma \right\rbrack \) . Hence we can define
\[
\left\langle {\left\lbrack \gamma \right\rbrack ,{\left( df\right) }_{p}}\right\rangle = \ll \gamma ,{\left( df\right) }_{p} \gg .
\]
\( \left( {2.22}\right) \)
We will prove that the \( \left\lbrack \gamma \right\rbrack ,\gamma \in {\Gamma }_{p} \), form the dual space of \( {T}_{p}^{ * } \) . For this purpose we will use local coordinate systems.
Under the local coordinates \( {u}^{i} \), suppose \( \gamma \in {\Gamma }_{p} \) is given by the functions
\[
{u}^{i} = {u}^{i}\left( t\right) ,\;1 \leq i \leq m.
\]
(2.23)
Then (2.22) can be written as
\[
\left\langle {\left\lbrack \gamma \right\rbrack ,{\left( df\right) }_{p}}\right\rangle = \mathop{\sum }\limits_{{i = 1}}^{m}{a}_{i}{\xi }^{i}
\]
(2.24)
where
\[
{a}_{i} = {\left( \frac{\partial \left( {f \circ {\varphi }_{U}^{-1}}\right) }{\partial {u}^{i}}\right) }_{{\varphi }_{U}\left( p\right) },\;{\xi }^{i} = {\left( \frac{d{u}^{i}}{dt}\right) }_{t = 0}.
\]
\( \left( {2.25}\right) \)
The coefficients \( {a}_{i} \) are exactly the components of the cotangent vector \( {\left( df\right) }_{p} \) with respect to the natural basis \( {\left( d{u}^{i}\right) }_{p} \) [see (2.16)]. Obviously, \( \left\langle {\left\lbrack \gamma \right\rbrack ,{\left( df\right) }_{p}}\right\rangle \) is a linear function on \( {T}_{p}^{ * } \), which is determined by the components \( {\xi }^{i} \) . Choose \( \gamma \) such that
\[
{u}^{i}\left( t\right) = {u}^{i}\left( p\right) + {\xi }^{i}t
\]
(2.26)
with \( {\xi }^{i} \) arbitrary. Thus the \( \left\langle {\left\lbrack \gamma \right\rbrack ,{\left( df\right) }_{p}}\right\rangle ,\gamma \in {\Gamma }_{p} \), represent the totality of linear functionals on \( {T}_{p}^{ * } \) and form its dual space, \( {T}_{p} \), called the tangent space of \( M \) at \( p \) . Elements in the tangent space are called tangent vectors.
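Formula (2.24) is just the chain rule in the local coordinates, and it is easy to verify symbolically. The sketch below (an illustration, not part of the text; the function \( f \), the point \( p \), and the curve \( \gamma \) are arbitrary choices, and SymPy is assumed to be available) checks that \( \frac{d}{dt}(f \circ \gamma)\big|_{t=0} = \sum_i a_i \xi^i \), with \( a_i \) the partial derivatives of \( f \) at \( p \) and \( \xi^i \) the velocity components of \( \gamma \).

```python
import sympy as sp

# Check (2.24): d/dt (f o gamma) at t = 0 equals sum_i a_i * xi^i, where
# a_i are the partials of f at p and xi^i the components of gamma'(0).
t, u1, u2 = sp.symbols('t u1 u2')
f = u1**2 * u2 + sp.sin(u2)

p = {u1: 1, u2: 2}
xi = (3, -5)
gamma = (1 + 3*t + t**2, 2 - 5*t + 4*t**3)   # any curve with gamma(0) = p, gamma'(0) = xi

lhs = sp.diff(f.subs({u1: gamma[0], u2: gamma[1]}), t).subs(t, 0)
a = [sp.diff(f, v).subs(p) for v in (u1, u2)]
rhs = sum(ai * x for ai, x in zip(a, xi))
print(sp.simplify(lhs - rhs))                # 0
```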
The geometric meaning of tangent vectors is quite simple: if \( {\gamma }^{\prime } \in {\Gamma }_{p} \) is given by functions
\[
{u}^{i} = {u}^{\prime i}\left( t\right) ,\;1 \leq i \leq m
\]
![89cd1142-afa9-47ad-a74a-27b70d90fa5e_26_0.jpg](images/89cd1142-afa9-47ad-a74a-27b70d90fa5e_26_0.jpg)
Figure 4.
then a necessary and sufficient condition for \( \left\lbrack \gamma \right\rbrack = \left\lbrack {\gamma }^{\prime }\right\rbrack \) is
\[
{\left( \frac{d{u}^{i}}{dt}\right) }_{t = 0} = {\left( \frac{d{u}^{\prime i}}{dt}\right) }_{t = 0}.
\]
Hence the equivalence of \( \gamma \) and \( {\gamma }^{\prime } \) means that these two parametrized curves have the same tangent vector at the point \( p \) (see Figure 4). Thus we identify a tangent vector \( X \) of \( M \) at \( p \) with the set of all parametrized curves through \( p \) with a common tangent vector.
By the discussion above, t |
1089_(GTM246)A Course in Commutative Banach Algebras | Definition 2.7.1 |
Definition 2.7.1. A character \( \alpha \) of \( G \) is a continuous homomorphism from \( G \) into the circle group \( \mathbb{T} \) . Clearly, the pointwise product of two characters is again a character and so is \( {\alpha }^{-1} \) defined by \( {\alpha }^{-1}\left( x\right) = \overline{\alpha \left( x\right) } \) for all \( x \in G \) . Thus \( \widehat{G} \), the set of all characters of \( G \), forms a group, the dual group of \( G \) .
We proceed to show that there is a bijection between \( \widehat{G} \) and \( \Delta \left( {{L}^{1}\left( G\right) }\right) \) .
Theorem 2.7.2. For \( \alpha \in \widehat{G} \), let \( {\varphi }_{\alpha } : {L}^{1}\left( G\right) \rightarrow \mathbb{C} \) be defined by
\[
{\varphi }_{\alpha }\left( f\right) = {\int }_{G}f\left( x\right) \overline{\alpha \left( x\right) }{dx},\;f \in {L}^{1}\left( G\right) .
\]
Then \( {\varphi }_{\alpha } \in \Delta \left( {{L}^{1}\left( G\right) }\right) \) and the mapping \( \alpha \rightarrow {\varphi }_{\alpha } \) is a bijection from \( \widehat{G} \) onto \( \Delta \left( {{L}^{1}\left( G\right) }\right) \) .
Proof. Of course, \( {\varphi }_{\alpha } \) is a linear functional. For \( f, g \in {C}_{c}\left( G\right) \), Fubini’s theorem and the invariance of Haar measure yield
\[
{\varphi }_{\alpha }\left( {f * g}\right) = {\int }_{G}\overline{\alpha \left( x\right) }{\int }_{G}f\left( y\right) g\left( {{y}^{-1}x}\right) {dydx}
\]
\[
= {\int }_{G}{\int }_{G}f\left( y\right) \overline{\alpha \left( x\right) }g\left( {{y}^{-1}x}\right) {dxdy}
\]
\[
= {\int }_{G}{\int }_{G}f\left( y\right) \overline{\alpha \left( {yx}\right) }g\left( x\right) {dxdy}
\]
\[
= {\int }_{G}{\int }_{G}\overline{\alpha \left( x\right) }g\left( x\right) \overline{\alpha \left( y\right) }f\left( y\right) {dxdy}
\]
\[
= {\varphi }_{\alpha }\left( f\right) {\varphi }_{\alpha }\left( g\right)
\]
Since \( \left| {{\varphi }_{\alpha }\left( f\right) }\right| \leq \parallel f{\parallel }_{1} \) for all \( f \in {L}^{1}\left( G\right) \), this formula even holds for all \( f, g \in \) \( {L}^{1}\left( G\right) \) . Moreover, \( {\varphi }_{\alpha } \) is nonzero since for any nonnegative function \( f \) in \( {C}_{c}\left( G\right) \) , \( f \neq 0 \), we have
\[
{\varphi }_{\alpha }\left( {\alpha f}\right) = {\int }_{G}f\left( x\right) {\left| \alpha \left( x\right) \right| }^{2}{dx} > 0.
\]
This shows that \( {\varphi }_{\alpha } \in \Delta \left( {{L}^{1}\left( G\right) }\right) \) . Moreover, the map \( \alpha \rightarrow {\varphi }_{\alpha } \) is injective. Indeed, if \( \alpha ,\beta \in \widehat{G} \) are such that
\[
0 = {\varphi }_{\alpha }\left( f\right) - {\varphi }_{\beta }\left( f\right) = {\int }_{G}f\left( x\right) \left( {\overline{\alpha \left( x\right) } - \overline{\beta \left( x\right) }}\right) {dx}
\]
for all \( f \in {L}^{1}\left( G\right) \), then \( \alpha = \beta \) because \( {L}^{1}{\left( G\right) }^{ * } = {L}^{\infty }\left( G\right) \) and \( \alpha \) and \( \beta \) are continuous functions.
It remains to show that given \( \varphi \in \Delta \left( {{L}^{1}\left( G\right) }\right) \), there exists \( \alpha \in \widehat{G} \) such that \( \varphi = {\varphi }_{\alpha } \) . To that end, choose \( g \in {L}^{1}\left( G\right) \) such that \( \varphi \left( g\right) = 1 \) and observe that since \( \varphi \in {L}^{1}{\left( G\right) }^{ * } \), there exists \( \chi \in {L}^{\infty }\left( G\right) \) such that \( \varphi \left( f\right) = {\int }_{G}f\left( x\right) \chi \left( x\right) {dx} \) for all \( f \in {L}^{1}\left( G\right) \) . The function
\[
\left( {x, y}\right) \rightarrow \chi \left( x\right) f\left( y\right) g\left( {{y}^{-1}x}\right)
\]
belongs to \( {L}^{1}\left( {G \times G}\right) \), and hence Fubini’s theorem implies that
\[
\varphi \left( f\right) = \varphi \left( {f * g}\right) = {\int }_{G}\chi \left( x\right) \left( {{\int }_{G}f\left( y\right) g\left( {{y}^{-1}x}\right) {dy}}\right) {dx}
\]
\[
= {\int }_{G}f\left( y\right) \left( {{\int }_{G}g\left( {{y}^{-1}x}\right) \chi \left( x\right) {dx}}\right) {dy}
\]
\[
= {\int }_{G}f\left( y\right) \varphi \left( {{L}_{y}g}\right) {dy}
\]
for all \( f \in {L}^{1}\left( G\right) \) . Now, define \( \alpha : G \rightarrow \mathbb{C} \) by \( \alpha \left( y\right) = \overline{\varphi \left( {{L}_{y}g}\right) }, y \in G \) . The function \( \alpha \) is continuous because the map \( y \rightarrow {L}_{y}g \) from \( G \) into \( {L}^{1}\left( G\right) \) is continuous and
\[
\left| {\alpha \left( x\right) - \alpha \left( y\right) }\right| = \left| {\varphi \left( {{L}_{x}g - {L}_{y}g}\right) }\right| \leq {\begin{Vmatrix}{L}_{x}g - {L}_{y}g\end{Vmatrix}}_{1}
\]
for all \( x, y \in G \) . From \( g * {L}_{xy}g = {L}_{x}g * {L}_{y}g \) it follows that
\[
\alpha \left( {xy}\right) = \overline{\varphi \left( {{L}_{xy}g}\right) } = \overline{\varphi \left( g\right) }\overline{\varphi \left( {{L}_{xy}g}\right) } = \overline{\varphi \left( {g * {L}_{xy}g}\right) }
\]
\[
= \overline{\varphi \left( {{L}_{x}g * {L}_{y}g}\right) } = \overline{\varphi \left( {{L}_{x}g}\right) \varphi \left( {{L}_{y}g}\right) }
\]
\[
= \alpha \left( x\right) \alpha \left( y\right) .
\]
We claim that \( \left| {\alpha \left( x\right) }\right| = 1 \) for all \( x \in G \) . For that, notice that
\[
\left| {\alpha \left( y\right) }\right| = \left| {\varphi \left( {{L}_{y}g}\right) }\right| \leq {\begin{Vmatrix}{L}_{y}g\end{Vmatrix}}_{1} = \parallel g{\parallel }_{1}
\]
for all \( y \in G \), and hence, by the multiplicativity of \( \alpha \) ,
\[
{\left| \alpha \left( x\right) \right| }^{n} = \left| {\alpha \left( {x}^{n}\right) }\right| \leq \parallel g{\parallel }_{1}
\]
for all \( n \in \mathbb{Z} \) . Since \( \alpha \left( e\right) = \overline{\varphi \left( g\right) } = 1 \), we conclude that \( \left| {\alpha \left( x\right) }\right| = 1 \) for every \( x \in G \) . This shows that \( \alpha \in \widehat{G} \) and \( {\varphi }_{\alpha } = \varphi \) .
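In the finite abelian case the theorem can be checked by hand (or by machine). The sketch below (an illustration, not from the book) takes \( G = \mathbb{Z}/n \) with counting measure as Haar measure; the characters are \( \alpha_k(x) = e^{2\pi i kx/n} \), and \( \varphi_{\alpha}(f) = \sum_x f(x)\overline{\alpha(x)} \) is indeed multiplicative for convolution.

```python
import numpy as np

# For G = Z/n, phi_alpha(f) = sum_x f(x) * conj(alpha(x)) turns convolution
# into multiplication, i.e. phi_alpha(f * g) = phi_alpha(f) * phi_alpha(g).
n = 12
rng = np.random.default_rng(1)
f, g = rng.normal(size=n), rng.normal(size=n)

conv = np.array([sum(f[y] * g[(x - y) % n] for y in range(n)) for x in range(n)])
k = 5                                             # pick the character alpha_k
alpha = np.exp(2j * np.pi * k * np.arange(n) / n)

phi = lambda h: np.sum(h * np.conj(alpha))
print(abs(phi(conv) - phi(f) * phi(g)))           # ~ 0 up to rounding
```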
After identifying \( \Delta \left( {{L}^{1}\left( G\right) }\right) \) as a set with \( \widehat{G} \), our next purpose is to describe the Gelfand topology on \( \widehat{G} \) in terms of \( G \) itself rather than \( {L}^{1}\left( G\right) \) .
Lemma 2.7.3. Let \( f \in {L}^{1}\left( G\right) \) and \( \alpha \in \widehat{G} \) .
(i) For all \( x \in G,\left( {f * \alpha }\right) \left( x\right) = \alpha \left( x\right) \widehat{f}\left( \alpha \right) = \widehat{{L}_{{x}^{-1}}f}\left( \alpha \right) \) . In particular, \( \widehat{{L}^{1}\left( G\right) } \) is invariant under multiplication by functions of the form \( \alpha \rightarrow \alpha \left( x\right) \), \( x \in G \) .
(ii) If \( g \in {L}^{1}\left( G\right) \) is defined by \( g\left( x\right) = \alpha \left( x\right) f\left( x\right) \), then \( \widehat{g} = {L}_{\alpha }\widehat{f} \) . In particular, \( \widehat{{L}^{1}\left( G\right) } \subseteq {C}_{0}\left( \widehat{G}\right) \) is translation invariant.
(iii) \( \widehat{{f}^{ * }} = \overline{\widehat{f}} \) and \( \widehat{{L}^{1}\left( G\right) } \subseteq {C}_{0}\left( \widehat{G}\right) \) is norm-dense in \( {C}_{0}\left( \widehat{G}\right) \) .
Proof. (i) \( f * \alpha \) is a continuous function and
\[
\left( {f * \alpha }\right) \left( x\right) = {\int }_{G}f\left( y\right) \alpha \left( {{y}^{-1}x}\right) {dy} = \alpha \left( x\right) \widehat{f}\left( \alpha \right)
\]
for all \( x \in G \) . On the other hand,
\[
\left( {f * \alpha }\right) \left( x\right) = {\int }_{G}f\left( {xy}\right) \overline{\alpha \left( y\right) }{dy} = \widehat{{L}_{{x}^{-1}}f}\left( \alpha \right) .
\]
(ii) For all \( \beta \in \widehat{G} \), we have
\[
\widehat{g}\left( \beta \right) = {\int }_{G}f\left( x\right) \overline{\beta \left( x\right) }\alpha \left( x\right) {dx} = \widehat{f}\left( {{\alpha }^{-1}\beta }\right) = {L}_{\alpha }\widehat{f}\left( \beta \right) ,
\]
so that \( {L}_{\alpha }\widehat{f} = \widehat{g} \in \widehat{{L}^{1}\left( G\right) } \) .
(iii) For each \( \alpha \in \widehat{G} \), we have
\[
\widehat{{f}^{ * }}\left( \alpha \right) = {\int }_{G}\overline{f\left( {x}^{-1}\right) \alpha \left( x\right) }{dx} = {\int }_{G}\overline{f\left( x\right) }\alpha \left( x\right) {dx} = \overline{\widehat{f}\left( \alpha \right) },
\]
so that \( \widehat{{f}^{ * }} = \overline{\widehat{f}} \) . Thus \( \widehat{{L}^{1}\left( G\right) } \) is a self-adjoint subalgebra of \( {C}_{0}\left( \widehat{G}\right) \) which strongly separates the points of \( \widehat{G} \) and therefore is dense in \( \left( {{C}_{0}\left( \widehat{G}\right) ,\parallel \cdot {\parallel }_{\infty }}\right) \) by the Stone-Weierstrass theorem.
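To see what part (ii) says in a familiar case, take \( G = \mathbb{R} \) and \( \alpha \left( x\right) = {e}^{i\eta x} \) ; then modulation of \( f \) corresponds to translation of \( \widehat{f} \) :
\[
g\left( x\right) = {e}^{i\eta x}f\left( x\right) \; \Rightarrow \;\widehat{g}\left( \xi \right) = {\int }_{\mathbb{R}}f\left( x\right) {e}^{-i\left( {\xi - \eta }\right) x}{dx} = \widehat{f}\left( {\xi - \eta }\right) .
\]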
Lemma 2.7.4. Let \( f \in {L}^{1}\left( G\right) \) and \( \epsilon > 0 \) and let \( \sigma \) denote the Gelfand topology on \( \widehat{G} \) . Then there exists a neighbourhood \( W \) of \( e \) in \( G \) with the following property. If \( y, x \in G \) and \( \beta ,\alpha \in \widehat{G} \) are such that \( y \in {Wx},{\varphi }_{\alpha }\left( f\right) = 1 \), and \( \beta \in U\left( {\alpha, f,{L}_{x}f,\epsilon /3}\right) \), then
\[
\left| {\beta \left( y\right) - \alpha \left( x\right) }\right| < \epsilon
\]
In particular, the function \( \left( {x,\alpha }\right) \rightarrow \alpha \left( x\right) \) is continuous on \( G \times \left( {\widehat{G},\sigma }\right) \) .
Proof. For arbitrary \( y, x \in G \) and \( \beta ,\alpha \in \widehat{G} \) such that \( \widehat{f}\left( \alpha \right) = 1 \) we obtain from Lemma 2.7.3,
\[
\left| {\beta \left( y\right) - \alpha \left( x\right) }\right| \leq \left| {\overline{\beta \left( y\right) } - \overline{\beta \left( y\right) }\widehat{f}\left( \beta \right) }\right| + \left| {\overline{\beta \left( y\right) }\widehat{f}\left( \beta \right) - \overline{\beta \left( x\right) }\widehat{f}\left( \beta \right) }\right|
\]
\[
+ \left| {\overline{\beta \left( x\right) }\widehat{f}\left( \beta \right |
1068_(GTM227)Combinatorial Commutative Algebra | Definition 8.12 |
Definition 8.12 Let \( S \) be a polynomial ring multigraded by \( A \) . An \( S \) -module \( M \) is multigraded by \( A \) (sometimes we just say graded) if it has been endowed with a decomposition \( M = {\bigoplus }_{\mathbf{a} \in A}{M}_{\mathbf{a}} \) as a direct sum of graded components such that \( {S}_{\mathbf{a}}{M}_{\mathbf{b}} \subseteq {M}_{\mathbf{a} + \mathbf{b}} \) for all \( \mathbf{a},\mathbf{b} \in A \) . Write \( M\left( \mathbf{a}\right) \) for the \( A \) -graded translate of \( M \) that satisfies \( M{\left( \mathbf{a}\right) }_{\mathbf{b}} = {M}_{\mathbf{a} + \mathbf{b}} \) for all \( \mathbf{a},\mathbf{b} \in A \) .
The convention for \( A \) -graded translates makes the rank 1 free module \( S\left( {-\mathbf{a}}\right) \) into a copy of \( S \) generated in degree \( \mathbf{a} \) . The tensor product \( M{ \otimes }_{S}N \) of two multigraded modules is still multigraded, its degree a component being spanned by all elements \( {m}_{\mathbf{b}} \otimes {n}_{\mathbf{a} - \mathbf{b}} \) such that \( {m}_{\mathbf{b}} \in {M}_{\mathbf{b}} \) and \( {n}_{\mathbf{a} - \mathbf{b}} \in {N}_{\mathbf{a} - \mathbf{b}} \) . Consequently, \( M\left( \mathbf{a}\right) = M{ \otimes }_{S}S\left( \mathbf{a}\right) \) is another way to express an \( A \) -graded translate of \( M \) . The notion of graded homomorphism makes sense as well: a map \( \phi : M \rightarrow N \) is graded (of degree \( \mathbf{0} \) ) if \( \phi \left( {M}_{\mathbf{a}}\right) \subseteq {N}_{\mathbf{a}} \) for all \( \mathbf{a} \in A \) . Graded maps of graded modules have graded kernels, images, and cokernels.
Definition 8.12 does not assume that \( M \) is finitely generated, and indeed, we will see a variety of infinitely generated examples in Chapter 11. In addition, a graded module \( M \) might be nonzero in degrees from \( A \) that lie outside of the subgroup generated by \( Q = \deg \left( {\mathbb{N}}^{n}\right) \) .
Example 8.13 Let \( S = \mathbb{k}\left\lbrack {a, b, c, d}\right\rbrack \) be multigraded by \( {\mathbb{Z}}^{2} \), with \( \deg \left( a\right) = \) \( \left( {4,0}\right) ,\deg \left( b\right) = \left( {3,1}\right) ,\deg \left( c\right) = \left( {1,3}\right) \), and \( \deg \left( d\right) = \left( {0,4}\right) \) . These vectors generate the semigroup \( {Q}^{\prime } \subset {\mathbb{Z}}^{2} \) in Fig. 7.2. Express the semigroup ring \( \mathbb{k}\left\lbrack {Q}^{\prime }\right\rbrack \) as a quotient of \( S \), as in Example 7.14. Although the semigroup \( {Q}^{\prime } \) does not generate \( {\mathbb{Z}}^{2} \) as a group, the \( {\mathbb{Z}}^{2} \) -graded translate \( M = \mathbb{k}\left\lbrack {Q}^{\prime }\right\rbrack \left( {-\left( {1,1}\right) }\right) \) of the semigroup ring \( \mathbb{k}\left\lbrack {Q}^{\prime }\right\rbrack \) is still a valid \( {\mathbb{Z}}^{2} \) -graded \( S \) -module, and its graded components \( {M}_{\mathbf{a}} \) for \( \mathbf{a} \in {Q}^{\prime } \) are all zero.
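The vanishing of these components can be checked directly: each generator of \( {Q}^{\prime } \) has coordinate sum 4, so every \( \mathbf{a} \in {Q}^{\prime } \) has coordinate sum divisible by 4, whereas
\[
{M}_{\mathbf{a}} = \mathbb{k}{\left\lbrack {Q}^{\prime }\right\rbrack }_{\mathbf{a} - \left( {1,1}\right) }
\]
and the coordinate sum of \( \mathbf{a} - \left( {1,1}\right) \) is congruent to \( 2 \) modulo 4, so \( \mathbf{a} - \left( {1,1}\right) \notin {Q}^{\prime } \) and \( {M}_{\mathbf{a}} = 0 \) .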
## 8.2 Hilbert series and \( K \) -polynomials
Given a finitely generated graded module \( M \) over a positively multigraded polynomial ring, the dimensions \( {\dim }_{\mathbb{k}}\left( {M}_{\mathbf{a}}\right) \) are all finite, by Theorem 8.6. In the case where \( M = S \) is the polynomial ring itself, the dimension of \( {S}_{\mathbf{a}} \) is the cardinality of the fiber \( \left( {\mathbf{u} + L}\right) \cap {\mathbb{N}}^{n} \) for any vector \( \mathbf{u} \in {\mathbb{Z}}^{n} \) mapping to \( \mathbf{a} \) under the degree map \( {\mathbb{Z}}^{n} \rightarrow A \), where, again, \( L \) is the kernel of the degree map as in (8.1). Geometrically, this cardinality is the number of lattice points in the polytope \( \left( {\mathbf{u} + \mathbb{R}L}\right) \cap {\mathbb{R}}_{ \geq 0}^{n} \) . Just as in the "coarsely graded" case (where \( A = \mathbb{Z} \) ; see Chapter 2 and Section 12.1, for example) and the "finely graded" case (where \( A = {\mathbb{Z}}^{n} \) ; see Part I), the generating functions for the dimensions of the multigraded pieces of graded modules play a central role.
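For a concrete instance of such a count, take \( S \) as in Example 8.13 and \( \mathbf{a} = \left( {4,4}\right) \) : the only monomials of degree \( \left( {4,4}\right) \) are \( {ad} \) and \( {bc} \), so
\[
{\dim }_{\mathbb{k}}{S}_{\left( 4,4\right) } = 2.
\]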
Definition 8.14 The Hilbert function of a finitely generated module \( M \) over a positively graded polynomial ring is the set map \( A \rightarrow \mathbb{N} \) whose value at each group element \( \mathbf{a} \in A \) is the vector space dimension \( {\dim }_{\mathbb{k}}\left( {M}_{\mathbf{a}}\right) \) . The multigraded Hilbert series of \( M \) is the Laurent series
\[
H\left( {M;\mathbf{t}}\right) = \mathop{\sum }\limits_{{\mathbf{a} \in A}}{\dim }_{\mathbb{k}}\left( {M}_{\mathbf{a}}\right) {\mathbf{t}}^{\mathbf{a}}\;\text{ in the additive group }\;\mathbb{Z}\left\lbrack \left\lbrack A\right\rbrack \right\rbrack = \mathop{\prod }\limits_{{\mathbf{a} \in A}}\mathbb{Z} \cdot {\mathbf{t}}^{\mathbf{a}}.
\]
Elements in the abelian group \( \mathbb{Z}\left\lbrack \left\lbrack A\right\rbrack \right\rbrack \) are not really Laurent series, but just formal elements in the product, and the letter \( \mathbf{t} \) here is a dummy variable. However, when we have an explicitly given inclusion \( A \subseteq {\mathbb{Z}}^{d} \), so that \( {\mathbf{a}}_{1},\ldots ,{\mathbf{a}}_{n} \) are vectors of length \( d \), the symbol \( \mathbf{t} \) can also stand for the list \( {t}_{1},\ldots ,{t}_{d} \) of variables, so that \( {\mathbf{t}}^{\mathbf{a}} = {t}_{1}^{{a}_{1}}\cdots {t}_{d}^{{a}_{d}} \) as usual. This common special case lends the suggestive name to the elements of \( \mathbb{Z}\left\lbrack \left\lbrack A\right\rbrack \right\rbrack \) .
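As a familiar special case, take \( A = \mathbb{Z} \) and \( S = \mathbb{k}\left\lbrack {x, y}\right\rbrack \) with \( \deg \left( x\right) = \deg \left( y\right) = 1 \) . Then \( {\dim }_{\mathbb{k}}{S}_{a} = a + 1 \) for \( a \geq 0 \) and
\[
H\left( {S;t}\right) = \mathop{\sum }\limits_{{a \geq 0}}\left( {a + 1}\right) {t}^{a} = \frac{1}{{\left( 1 - t\right) }^{2}},
\]
in agreement with Lemma 8.16 below.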
We would like to write Hilbert series as rational functions, just as we did in Corollary 4.20 for monomial ideals in the finely graded polynomial ring, where the degree map has image \( Q = {\mathbb{N}}^{n} \) . In order to accomplish this task in the current more general setting, we need an ambient algebraic structure in which to equate Laurent series with rational functions. To start, consider the semigroup ring \( \mathbb{Z}\left\lbrack Q\right\rbrack = {\bigoplus }_{\mathbf{a} \in Q}\mathbb{Z} \cdot {\mathbf{t}}^{\mathbf{a}} \) over the integers \( \mathbb{Z} \) . When \( Q \) is pointed (Definition 7.8), the ideal of \( \mathbb{Z}\left\lbrack Q\right\rbrack \) generated by all monomials \( {\mathbf{t}}^{\mathbf{a}} \neq 1 \) is a proper ideal. The completion of \( \mathbb{Z}\left\lbrack Q\right\rbrack \) at this ideal [Eis95, Chapter 7] is the ring \( \mathbb{Z}\left\lbrack \left\lbrack Q\right\rbrack \right\rbrack \) of power series supported on \( Q \) . Let us justify the name.
Lemma 8.15 Elements in the completion \( \mathbb{Z}\left\lbrack \left\lbrack Q\right\rbrack \right\rbrack \) for a pointed semigroup \( Q \) can be expressed uniquely as formal series \( \mathop{\sum }\limits_{{\mathbf{a} \in Q}}{c}_{\mathbf{a}}{\mathbf{t}}^{\mathbf{a}} \) with \( {c}_{\mathbf{a}} \in \mathbb{Z} \) .
Proof. The lemma is standard when \( Q = {\mathbb{N}}^{n} \), as \( \mathbb{Z}\left\lbrack \left\lbrack {\mathbb{N}}^{n}\right\rbrack \right\rbrack = \mathbb{Z}\left\lbrack \left\lbrack {{x}_{1},\ldots ,{x}_{n}}\right\rbrack \right\rbrack \) is an honest power series ring. For a general pointed semigroup \( Q \), write \( \mathbb{Z}\left\lbrack Q\right\rbrack \) as a module over \( \mathbb{Z}\left\lbrack {\mathbb{N}}^{n}\right\rbrack = \mathbb{Z}\left\lbrack {{x}_{1},\ldots ,{x}_{n}}\right\rbrack \) . Then \( \mathbb{Z}\left\lbrack \left\lbrack Q\right\rbrack \right\rbrack \) is the completion of \( \mathbb{Z}\left\lbrack Q\right\rbrack \) at the ideal \( \mathfrak{m} = \left\langle {{x}_{1},\ldots ,{x}_{n}}\right\rangle \subset \mathbb{Z}\left\lbrack {\mathbb{N}}^{n}\right\rbrack \) . It follows from condition 3 in Theorem 8.6 that every element in \( \mathbb{Z}\left\lbrack \left\lbrack Q\right\rbrack \right\rbrack \) has some expression as a power series \( p\left( \mathbf{t}\right) = \mathop{\sum }\limits_{{\mathbf{a} \in Q}}{c}_{\mathbf{a}}{\mathbf{t}}^{\mathbf{a}} \) . To see that this expression is unique, we need only show that \( p\left( \mathbf{t}\right) \) is nonzero in \( \mathbb{Z}\left\lbrack \left\lbrack Q\right\rbrack \right\rbrack \) whenever \( {c}_{\mathbf{a}} \neq 0 \) for some \( \mathbf{a} \in Q \) . We will use that the natural map \( \mathbb{Z}\left\lbrack Q\right\rbrack /{\mathfrak{m}}^{r}\mathbb{Z}\left\lbrack Q\right\rbrack \rightarrow \mathbb{Z}\left\lbrack \left\lbrack Q\right\rbrack \right\rbrack /{\mathfrak{m}}^{r}\mathbb{Z}\left\lbrack \left\lbrack Q\right\rbrack \right\rbrack \) is an isomorphism for all \( r \in \mathbb{N} \), which follows by definition of completion.
Choose a vector \( \mathbf{u} \in {\mathbb{N}}^{n} \) mapping to \( \mathbf{a} \) . There is a positive integer \( r \) such that \( {c}_{\mathbf{a}}{\mathbf{x}}^{\mathbf{u}} \) lies outside of \( {\mathfrak{m}}^{r} \), and for this choice of \( r \), the image of the series \( p\left( \mathbf{t}\right) \) in \( \mathbb{Z}\left\lbrack \left\lbrack Q\right\rbrack \right\rbrack /{\mathfrak{m}}^{r}\mathbb{Z}\left\lbrack \left\lbrack Q\right\rbrack \right\rbrack \cong \mathbb{Z}\left\lbrack Q\right\rbrack /{\mathfrak{m}}^{r}\mathbb{Z}\left\lbrack Q\right\rbrack \) is a nonzero polynomial.
Any element \( p\left( \mathbf{t}\right) \in \mathbb{Z}\left\lbrack \left\lbrack Q\right\rbrack \right\rbrack \) with constant term \( \pm 1 \) is a unit in \( \mathbb{Z}\left\lbrack \left\lbrack Q\right\rbrack \right\rbrack \) . Indeed, if \( p\left( \mathbf{t}\right) = 1 - q\left( \mathbf{t}\right) \) and \( q \) has constant term 0, then the inverse \( {p}^{-1} = 1 + q + {q}^{2} + \cdots \) is well-defined because \( \mathbb{Z}\left\lbrack \left\lbrack Q\right\rbrack \right\rbrack \) is complete with respect to an ideal containing \( q \) . In particular, \( 1 - {\mathbf{t}}^{\mathbf{a}} \) is invertible for all \( \mathbf{a} \in Q \) .
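Concretely, the inverse of \( 1 - {\mathbf{t}}^{\mathbf{a}} \) is the geometric series supported on the ray \( \mathbb{N}\mathbf{a} \subseteq Q \) :
\[
{\left( 1 - {\mathbf{t}}^{\mathbf{a}}\right) }^{-1} = 1 + {\mathbf{t}}^{\mathbf{a}} + {\mathbf{t}}^{2\mathbf{a}} + {\mathbf{t}}^{3\mathbf{a}} + \cdots \in \mathbb{Z}\left\lbrack \left\lbrack Q\right\rbrack \right\rbrack .
\]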
Lemma 8.16 The Hilbert series for the multigraded polynomial ring \( S \) is
\[
H\left( {S;\mathbf{t}}\right) = \frac{1}{\left( {1 - {\mathbf{t}}^{{\mathbf{a}}_{1}}}\right) \cdots |
1075_(GTM233)Topics in Banach Space Theory | Definition 12.3.8 |
Definition 12.3.8. The spreading sequence space \( \mathcal{X} \) introduced in Theorem 12.3.7 is called a spreading model for the sequence \( {\left( {x}_{n}\right) }_{n = 1}^{\infty } \) .
We now turn to Krivine's theorem. This result was obtained by Krivine in 1976, and although the main ideas of the proof we include here are the same as in Krivine's original proof, we have used ideas from two subsequent expositions of Krivine's theorem by Rosenthal [274] and Lemberg [186].
Krivine's theorem should be contrasted with Tsirelson space, which we constructed in Section 11.3. The existence of Tsirelson space implies that there is a Banach space with a basis such that no (infinite) block basic sequence is equivalent to the canonical basis of any of the spaces \( {\ell }_{p} \), \( 1 \leq p < \infty \), or \( {c}_{0} \) . However, if we are content with finite block basic sequences, then we can always find a good copy of one of these spaces! This difference in behavior between the infinite and the arbitrarily large but finite is a recurrent theme in modern Banach space theory.
Theorem 12.3.9 (Krivine’s Theorem). Let \( {\left( {x}_{n}\right) }_{n = 1}^{\infty } \) be a normalized sequence in a Banach space \( X \) such that \( {\left\{ {x}_{n}\right\} }_{n = 1}^{\infty } \) is not relatively compact. Then, either \( {c}_{0} \) is block finitely representable in \( {\left( {x}_{n}\right) }_{n = 1}^{\infty } \), or there exists \( 1 \leq p < \infty \) such that \( {\ell }_{p} \) is block finitely representable in \( {\left( {x}_{n}\right) }_{n = 1}^{\infty } \) .
In order to simplify the proof of Theorem 12.3.9 let us start by making some observations.
We first claim that it suffices to prove the theorem when \( {\left( {x}_{n}\right) }_{n = 1}^{\infty } \) is replaced by the canonical basis \( {\left( {e}_{n}\right) }_{n = 1}^{\infty } \) of a spreading model \( \mathcal{X} \) ; this is a direct consequence of Theorem 12.3.7. We next claim that we can suppose that the canonical basis \( {\left( {e}_{n}\right) }_{n = 1}^{\infty } \) of the spreading model \( \mathcal{X} \) is unconditional with suppression constant \( {K}_{s} = 1 \) (and hence 2-unconditional). Indeed, if the canonical basis of the spreading model fails to be weakly Cauchy, then it is equivalent to the canonical \( {\ell }_{1} \) -basis, and the fact that \( {\ell }_{1} \) is block finitely representable in \( \mathcal{X} \) is simply the content of James’s distortion theorem (Theorem 11.3.1). If \( {\left( {e}_{n}\right) }_{n = 1}^{\infty } \) is weakly Cauchy but not weakly null, we use Proposition 12.3.6 and replace it by the spreading sequence
\[
{f}_{k} = \frac{{e}_{2k} - {e}_{{2k} + 1}}{\begin{Vmatrix}{e}_{2k} - {e}_{{2k} + 1}\end{Vmatrix}},\;k = 1,2,\ldots
\]
In this way we reduce the proof to showing the result for the canonical basis \( {\left( {e}_{n}\right) }_{n = 1}^{\infty } \) of some spreading sequence space \( \mathcal{X} \) .
We also observe at this point that James's distortion theorem for \( {c}_{0} \) (see Problem 11.3) implies that if the spreading model is isomorphic to \( {c}_{0} \), then \( {c}_{0} \) is finitely representable in it. This reduction will be used later.
Now we will introduce some notation. Suppose \( \mathcal{X} \) is a spreading sequence space whose canonical basis is unconditional with suppression constant \( {K}_{s} = 1 \) . The norm of each \( \xi \in \mathcal{X} \) depends only on its nonzero entries and their order of appearance. We shall say that the sequences \( \xi \) and \( \eta \) in \( {c}_{00} \) are equivalent if their nonzero entries and their order of appearance are identical. We will say that \( \xi \) and \( \eta \) are \( \epsilon \) -equivalent if there exist \( u, v \in {c}_{00} \) such that \( u + \xi \) and \( v + \eta \) are equivalent and \( \parallel u{\parallel }_{\mathcal{X}} + \parallel v{\parallel }_{\mathcal{X}} < \epsilon \) .
If \( \xi ,\eta \in {c}_{00} \), we define \( \xi \oplus \eta \) to be any vector for which nonzero entries of \( \xi \) (in correct order) precede the nonzero entries of \( \eta \) (in correct order). For example, \( \xi \oplus \eta \) could be obtained by writing first the entries of \( \xi \) in order and then the nonzero entries of \( \eta \) in order. Thus, if \( n \) is the largest integer such that \( \xi \left( n\right) \neq 0 \), we could take
\[
\xi \oplus \eta = \mathop{\sum }\limits_{{j = 1}}^{n}\xi \left( j\right) {e}_{j} + \mathop{\sum }\limits_{{j = n + 1}}^{\infty }\eta \left( {j - n}\right) {e}_{j}.
\]
We will say that \( \xi \) is replaceable by \( \eta \) if
\[
\parallel u \oplus \xi \oplus v{\parallel }_{\mathcal{X}} = \parallel u \oplus \eta \oplus v{\parallel }_{\mathcal{X}},\;u, v \in {c}_{00},
\]
and that \( \xi \) is \( \epsilon \) -replaceable by \( \eta \) if
\[
\left| {\parallel u \oplus \xi \oplus v{\parallel }_{\mathcal{X}} - \parallel u \oplus \eta \oplus v{\parallel }_{\mathcal{X}}}\right| < \epsilon ,\;u, v \in {c}_{00}.
\]
Let us notice that if \( \xi \) and \( \eta \) are equivalent, then \( \xi \) is replaceable by \( \eta \) . Similarly, if \( \xi \) and \( \eta \) are \( \epsilon \) -equivalent, then \( \xi \) is \( \epsilon \) -replaceable by \( \eta \) .
To complete the proof of Krivine's theorem we will need the following two lemmas.
Lemma 12.3.10. Suppose \( \mathcal{X} \) is a spreading sequence space. Then there is a spreading sequence space \( \mathcal{Y} \) that is block finitely representable in \( \mathcal{X} \) such that the canonical basis of \( \mathcal{Y} \) is unconditional with unconditional basis constant \( {K}_{u} = 1 \) .
Proof. By the previous remarks we can assume that the canonical basis \( {\left( {e}_{n}\right) }_{n = 1}^{\infty } \) of \( \mathcal{X} \) is 2-unconditional, and that \( \mathcal{X} \) is not isomorphic to \( {c}_{0} \) . Thus, if we let \( {y}_{n} = \) \( \mathop{\sum }\limits_{{j = 1}}^{n}{\left( -1\right) }^{j}{e}_{j} \), we have \( \begin{Vmatrix}{y}_{n}\end{Vmatrix} \rightarrow \infty \) . For each \( k \) let \( {u}_{k} = {y}_{k}/\begin{Vmatrix}{y}_{k}\end{Vmatrix} \) . Then \( {u}_{k} \) is \( {\epsilon }_{k} \) - equivalent to \( - {u}_{k} \) for \( {\epsilon }_{k} = 2/\begin{Vmatrix}{y}_{k}\end{Vmatrix} \) .
If we take a block basic sequence \( {\left( {z}_{n}\right) }_{n = 1}^{\infty } \) with respect to \( {\left( {e}_{n}\right) }_{n = 1}^{\infty } \), where each \( {z}_{n} \) is equivalent to \( {u}_{k} \), we obtain a spreading sequence where \( - {z}_{n} \) is \( {\epsilon }_{k} \) -replaceable by \( {z}_{n} \) . Define \( {\mathcal{Y}}_{k} \) by
\[
\parallel \xi {\parallel }_{{\mathcal{Y}}_{k}} = {\begin{Vmatrix}\mathop{\sum }\limits_{{j = 1}}^{\infty }\xi \left( j\right) {z}_{j}\end{Vmatrix}}_{\mathcal{X}}
\]
We can pass to a subsequence \( {\left( {k}_{m}\right) }_{m = 1}^{\infty } \) in such a way that \( \mathop{\lim }\limits_{{m \rightarrow \infty }}\parallel \xi {\parallel }_{{\mathcal{Y}}_{{k}_{m}}} \) exists for all \( \xi \in {c}_{00} \) . This is done by a standard diagonal argument for those \( \xi \) with rational coefficients, and then extended to all \( \xi \) by a routine approximation argument. The limit norm defines a spreading sequence space \( \mathcal{Y} \), still block finitely representable in \( \mathcal{X} \), but such that \( {e}_{1} \) is replaceable by \( - {e}_{1} \) . This shows that the canonical basis of \( \mathcal{Y} \) is 1-unconditional.
Lemma 12.3.11. Suppose \( \mathcal{X} \) is a spreading sequence space whose canonical basis \( {\left( {e}_{n}\right) }_{n = 1}^{\infty } \) is 1-unconditional.
(i) If \( {2}^{1/p}{e}_{1} \) is replaceable by \( {e}_{1} + {e}_{2} \) for some \( 1 \leq p < \infty \), then the norm on \( \mathcal{X} \) is equivalent to the canonical \( {\ell }_{p} \) -norm.
(ii) If for some \( 1 \leq p < \infty ,{2}^{1/p}{e}_{1} \) is replaceable by \( {e}_{1} + {e}_{2} \), and \( {3}^{1/p}{e}_{1} \) is replaceable by \( {e}_{1} + {e}_{2} + {e}_{3} \), then the norm on \( \mathcal{X} \) coincides with the \( {\ell }_{p} \) -norm.
Proof. (i) Suppose \( {\left( {k}_{j}\right) }_{j = 1}^{\infty } \) is a sequence of nonnegative integers. If for each \( n \) we let \( N = \mathop{\sum }\limits_{{j = 1}}^{n}{2}^{{k}_{j}} \), then we have
\[
{\begin{Vmatrix}\mathop{\sum }\limits_{{j = 1}}^{n}{2}^{{k}_{j}/p}{e}_{j}\end{Vmatrix}}_{\mathcal{X}} = {\begin{Vmatrix}\mathop{\sum }\limits_{{j = 1}}^{N}{e}_{j}\end{Vmatrix}}_{\mathcal{X}}
\]
Notice also that
\[
{\begin{Vmatrix}\mathop{\sum }\limits_{{j = 1}}^{{2}^{r}}{e}_{j}\end{Vmatrix}}_{\mathcal{X}} = {2}^{r/p}
\]
and so
\[
{2}^{-1/p}{N}^{1/p} \leq {\begin{Vmatrix}\mathop{\sum }\limits_{{j = 1}}^{N}{e}_{j}\end{Vmatrix}}_{\mathcal{X}} \leq {2}^{1/p}{N}^{1/p}.
\]
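The two-sided bound follows by choosing \( r \) with \( {2}^{r} \leq N < {2}^{r + 1} \) and using the monotonicity provided by 1-unconditionality:
\[
{2}^{-1/p}{N}^{1/p} \leq {2}^{r/p} = {\begin{Vmatrix}\mathop{\sum }\limits_{{j = 1}}^{{2}^{r}}{e}_{j}\end{Vmatrix}}_{\mathcal{X}} \leq {\begin{Vmatrix}\mathop{\sum }\limits_{{j = 1}}^{N}{e}_{j}\end{Vmatrix}}_{\mathcal{X}} \leq {\begin{Vmatrix}\mathop{\sum }\limits_{{j = 1}}^{{2}^{r + 1}}{e}_{j}\end{Vmatrix}}_{\mathcal{X}} = {2}^{\left( {r + 1}\right) /p} \leq {2}^{1/p}{N}^{1/p}.
\]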
Suppose now that \( \left( {a}_{j}\right) \) are scalars with \( \mathop{\sum }\limits_{{j = 1}}^{n}{\left| {a}_{j}\right| }^{p} = 1 \), and let \( \alpha \) be the least nonzero value of \( \left| {a}_{j}\right| \) . For each \( j \) pick a nonnegative integer \( {k}_{j} \) with \( {2}^{{k}_{j}/p} \leq \left| {a}_{j}\right| {\alpha }^{-1} \leq \) \( {2}^{\left( {{k}_{j} + 1}\right) /p} \) . Then, if \( N = \mathop{\sum }\limits_{{j = 1}}^{n}{2}^{{k}_{j}} \), we have
\[
{\begin{Vmatrix}\mathop{\sum }\limits_{{j = 1}}^{N}{e}_{j}\end{Vmatrix}}_{\mathcal{X}} \leq {\alpha }^{-1}{\begin{Vmatrix}\mathop{\sum }\limits_{{j = 1}}^{n}{a}_{j}{e}_{j}\end{Vmatrix}}_{\mathcal{X}} \leq {\begin{Vmatrix}\mathop{\sum }\limits_{{j = 1}}^{{2N}}{e}_{j}\end{Vmatrix}}_{\mathcal{X}}
\]
and so \( N{\alpha }^{p} \leq 1 \leq {2N}{\alpha }^{p} \) . Thus
\[
{2}^{-1/p}{N}^{1/p}\alpha \leq {\begin{Vmatrix}\mathop{\sum }\limits_{{j = 1}}^{n}{a}_{j}{e}_{j}\end{Vmatrix}}_{\mathcal{X}} \leq {2}^{2/p}{N}^{1/p}\alpha
\]
which implies
\[
{2}^{-2/p} \leq {\begin{Vmatrix}\mathop{\sum }\limits_{{j = 1}}^{n}{a}_{j}{e}_{j}\end{Vmatrix}}_{\mathcal{X}} \leq {2}^{2/p}
\]
The proof of \( \left( {ii}\right) \) is similar to \( \left( i\right) \) . Here we use that the set of real numbers of the form |
1281_[张恭庆] Lecture Notes on Calculus of Variations | Definition 3.1 |
Definition 3.1 (Conjugate points) Let \( {u}_{0} \) be a solution of the E-L equation of the functional \( I\left( u\right) = {\int }_{J}L\left( {t, u\left( t\right) ,\dot{u}\left( t\right) }\right) {dt} \) . We call \( \left( {a,{u}_{0}\left( a\right) }\right) \) and \( \left( {b,{u}_{0}\left( b\right) }\right) \) a pair of conjugate points along the orbit \( \left( {t,{u}_{0}\left( t\right) }\right) \), if there exists a nonzero Jacobi field \( \varphi \in {C}_{0}^{1}\left( {\left\lbrack {a, b}\right\rbrack ,{\mathbb{R}}^{N}}\right) \) along \( {u}_{0}\left( t\right) \) (see Figure 3.1).
![8b13b55d-05be-4695-9a8b-9f585037c671_48_0.jpg](images/8b13b55d-05be-4695-9a8b-9f585037c671_48_0.jpg)
Fig. 3.1
Sometimes, if there are no conjugate points on the orbit \( \left\{ {\left( {t,{u}_{0}\left( t\right) }\right) \mid t \in }\right. \) \( \left. \left( {{t}_{0},{t}_{1}}\right) \right\} \), we simply say that \( {u}_{0} \) has no conjugate points.
Example 3.3 Consider the metric
\[
e\left( {x, y}\right) d{x}^{2} + {2f}\left( {x, y}\right) {dxdy} + g\left( {x, y}\right) d{y}^{2}
\]
on a surface \( S \) in \( {\mathbb{R}}^{3} \) . We choose a geodesic \( \gamma \) in \( S \) and, without loss of generality, we may assume that it is the \( x \) -axis \( \left( {y = 0}\right) \) and that the curves \( x = \) const are perpendicular to \( \gamma \) . We furnish \( S \) with an orthonormal frame, under which the square of the line element of the curve \( y = u\left( x\right) \) is given by
\[
d{s}^{2} = e\left( {x, y}\right) d{x}^{2} + d{y}^{2},
\]
where \( e > 0, e\left( {x,0}\right) = 1 \), and \( {e}_{y}\left( {x,0}\right) = 0 \) . The arclength functional is
\[
I\left( u\right) = {\int }_{a}^{b}\sqrt{e\left( {x, u}\right) + {\dot{u}}^{2}}{dx}
\]
i.e.
\[
L\left( {x, u, p}\right) = \sqrt{e\left( {x, u}\right) + {p}^{2}}.
\]
Hence,
\[
{L}_{pp} = \frac{e}{{\left( e + {p}^{2}\right) }^{\frac{3}{2}}},\;{L}_{up} = {L}_{pu} = - \frac{p{e}_{u}}{2{\left( e + {p}^{2}\right) }^{\frac{3}{2}}},\;{L}_{uu} = \frac{2{e}_{uu}\left( {e + {p}^{2}}\right) - {e}_{u}^{2}}{4{\left( e + {p}^{2}\right) }^{\frac{3}{2}}}.
\]
Along the geodesic \( \gamma : y = 0 \), we have:
\[
\left( \begin{array}{ll} A & B \\ B & C \end{array}\right) = \left( \begin{matrix} 1 & 0 \\ 0 & \frac{1}{2}{e}_{uu} \end{matrix}\right)
\]
In differential geometry, we call the quantity
\[
K\left( x\right) = - \frac{1}{2}{e}_{uu}\left( {x,0}\right)
\]
the Gaussian curvature of \( S \) along \( \gamma \) . The accessory variational integral is then
\[
{Q}_{0}\left( \varphi \right) = \frac{1}{2}{\int }_{a}^{b}\left\lbrack {{\dot{\varphi }}^{2} - K\left( x\right) {\varphi }^{2}}\right\rbrack {dx}.
\]
The Jacobi operator is then
\[
{J}_{0}\left( \varphi \right) = \ddot{\varphi } + {K\varphi }
\]
When \( K \) is a constant, the Jacobi field is of the form
\[
\varphi \left( t\right) = \left\{ \begin{array}{ll} \frac{1}{\sqrt{-K}}\sinh \left( {\sqrt{-K}t}\right) , & K < 0 \\ t, & K = 0 \\ \frac{1}{\sqrt{K}}\sin \left( {\sqrt{K}t}\right) , & K > 0 \end{array}\right.
\]
It follows that if \( K \leq 0 \), then there are no conjugate points. However, when \( K > 0 \), the first conjugate point of \( \left( {0,0}\right) \) along \( \gamma \) is \( \left( {\pi /\sqrt{K},0}\right) \) .
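As a quick check in the case \( K > 0 \), the function \( \varphi \left( t\right) = \frac{1}{\sqrt{K}}\sin \left( {\sqrt{K}t}\right) \) satisfies
\[
\ddot{\varphi } + {K\varphi } = - \sqrt{K}\sin \left( {\sqrt{K}t}\right) + \sqrt{K}\sin \left( {\sqrt{K}t}\right) = 0,\;\varphi \left( 0\right) = 0,\;\dot{\varphi }\left( 0\right) = 1,
\]
and its first positive zero occurs at \( t = \pi /\sqrt{K} \), which is exactly the first conjugate point named above.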
Remark 3.1 For a general Riemannian manifold \( \left( {M, g}\right), g \) is a Riemannian metric on \( M \), the Lagrangian of a geodesic is
\[
L\left( {u, p}\right) = \sum {g}_{ij}\left( u\right) {p}_{i}{p}_{j}
\]
the corresponding Jacobi equation is
\[
\frac{{d}^{2}\varphi }{d{t}^{2}} + R\left( {\dot{u}\left( t\right) ,\varphi \left( t\right) }\right) \dot{u}\left( t\right) = 0
\]
where \( R\left( {\cdot , \cdot }\right) \) denotes the Riemann curvature operator.
Theorem 3.3 Suppose \( {u}_{0} \) is a solution of the \( E \) -L equation of the functional \( I\left( u\right) = {\int }_{J}L\left( {t, u\left( t\right) ,\dot{u}\left( t\right) }\right) {dt} \) and \( {A}_{{u}_{0}} \) is positive definite. If \( {\delta }^{2}I\left( {{u}_{0},\varphi }\right) \geq 0 \) for all \( \varphi \in {C}_{0}^{1}\left( {J,{\mathbb{R}}^{N}}\right) \), then there is no \( a \in \left( {{t}_{0},{t}_{1}}\right) \) such that \( \left( {a,{u}_{0}\left( a\right) }\right) \) is conjugate to \( \left( {{t}_{0},{u}_{0}\left( {t}_{0}\right) }\right) \) .
Proof Suppose not, then \( \exists a \in \left( {{t}_{0},{t}_{1}}\right) \) such that \( \left( {a,{u}_{0}\left( a\right) }\right) \) and \( \left( {{t}_{0},{u}_{0}\left( {t}_{0}\right) }\right) \) are conjugate points, i.e. there exists a nonzero Jacobi field \( \xi \in {C}^{2}\left( {\left\lbrack {{t}_{0}, a}\right\rbrack ,{\mathbb{R}}^{N}}\right) \) along \( {u}_{0}\left( t\right) \) satisfying: \( {J}_{{u}_{0}}\left( \xi \right) = 0 \) and \( \xi \left( {t}_{0}\right) = \xi \left( a\right) = 0 \) . Let
\[
\widetilde{\xi }\left( t\right) = \left\{ \begin{array}{ll} \xi \left( t\right) & t \in \left\lbrack {{t}_{0}, a}\right\rbrack \\ 0 & t \in \left\lbrack {a,{t}_{1}}\right\rbrack \end{array}\right.
\]
then \( \widetilde{\xi } \in \operatorname{Lip}\left( {J,{\mathbb{R}}^{N}}\right) \) with \( \widetilde{\xi }\left( {t}_{0}\right) = \widetilde{\xi }\left( {t}_{1}\right) = 0 \) and
\[
{Q}_{{u}_{0}}\left( \widetilde{\xi }\right) = {\int }_{{t}_{0}}^{a}{\Phi }_{{u}_{0}}\left( {t,\xi \left( t\right) ,\dot{\xi }\left( t\right) }\right) {dt} = 0.
\]
By Theorem 3.2, \( \widetilde{\xi } \in {C}^{2}\left( {J,{\mathbb{R}}^{N}}\right) \) must satisfy the Jacobi equation \( {J}_{{u}_{0}}\left( \widetilde{\xi }\right) = 0 \) . Since \( \widetilde{\xi } \) vanishes identically on \( \left\lbrack {a,{t}_{1}}\right\rbrack \), the uniqueness of solutions of the initial value problem for a second-order ordinary differential equation forces \( \widetilde{\xi } \equiv 0 \) on \( J \), hence \( \xi \equiv 0 \) on \( \left\lbrack {{t}_{0}, a}\right\rbrack \), contradicting the assumption that \( \xi \) is a nonzero Jacobi field. This completes the proof.
For the special case \( N = 1 \), we can also show that the converse of the above theorem is also true.
We shall henceforth assume \( {u}_{0} \) is a solution of the E-L equation. Notice that if \( {u}_{0} \) has no conjugate points on \( \left( {{t}_{0},{t}_{1}}\right\rbrack \), then there exists a positive Jacobi field \( \psi > 0,\forall t \in J \) .
To see this, suppose \( \lambda \) is a Jacobi field with the initial conditions \( \lambda \left( {t}_{0}\right) = 0 \) and \( \dot{\lambda }\left( {t}_{0}\right) = 1 \) . By assumption, the next root \( a \) satisfies \( a > {t}_{1} \) . Since the solution of an ordinary differential equation depends continuously on the initial values, there exist \( \epsilon > 0 \) and a Jacobi field \( \psi \) such that \( \psi \left( {{t}_{0} - \epsilon }\right) = 0 \), \( \dot{\psi }\left( {{t}_{0} - \epsilon }\right) = 1 \), and \( \psi \left( t\right) > 0,\forall t \in J \) .
Lemma 3.3 Suppose \( \psi \) is a Jacobi field along \( {u}_{0} \) with \( \psi \left( t\right) > 0 \) for all \( t \in J \) . Then for all \( \varphi \in {C}_{0}^{1}\left( J\right) \), we have:
\[
{Q}_{{u}_{0}}\left( \varphi \right) = {\int }_{J}{A}_{{u}_{0}}\left( t\right) {\psi }^{2}\left( t\right) {\left( {\left( \frac{\varphi }{\psi }\right) }^{\prime }\left( t\right) \right) }^{2}{dt}.
\]
Proof Let \( \lambda = \frac{\varphi }{\psi } \), then \( \varphi = {\lambda \psi } \) and \( {\varphi }^{\prime } = {\lambda }^{\prime }\psi + \lambda {\psi }^{\prime } \) . Hence,
\[
{A}_{{u}_{0}}{\varphi }^{\prime 2} + 2{B}_{{u}_{0}}{\varphi }^{\prime }\varphi + {C}_{{u}_{0}}{\varphi }^{2}
\]
\[
= {\lambda }^{2}\left( {{A}_{{u}_{0}}{\psi }^{\prime 2} + 2{B}_{{u}_{0}}{\psi }^{\prime }\psi + {C}_{{u}_{0}}{\psi }^{2}}\right) + 2{\lambda }^{\prime }{\lambda \psi }\left( {{A}_{{u}_{0}}{\psi }^{\prime } + {B}_{{u}_{0}}\psi }\right) + {\lambda }^{\prime 2}{A}_{{u}_{0}}{\psi }^{2}.
\]
Now since \( \psi \) satisfies the Jacobi equation, we then have:
\[
{\int }_{J}\left( {{A}_{{u}_{0}}{\varphi }^{\prime 2} + 2{B}_{{u}_{0}}{\varphi }^{\prime }\varphi + {C}_{{u}_{0}}{\varphi }^{2}}\right) {dt}
\]
\[
= {\int }_{J}\left\{ \left\lbrack {{\psi }^{\prime }{\lambda }^{2}\left( {{A}_{{u}_{0}}{\psi }^{\prime } + {B}_{{u}_{0}}\psi }\right) + \frac{d\left( {{A}_{{u}_{0}}{\psi }^{\prime } + {B}_{{u}_{0}}\psi }\right) }{dt}\psi {\lambda }^{2}}\right. \right.
\]
\[
\left. {\left. {+2{\lambda }^{\prime }{\lambda \psi }\left( {{A}_{{u}_{0}}{\psi }^{\prime } + {B}_{{u}_{0}}\psi }\right) }\right\rbrack + {A}_{{u}_{0}}{\lambda }^{\prime 2}{\psi }^{2}}\right\} {dt}
\]
\[
= {\int }_{J}\left( {{A}_{{u}_{0}}{\lambda }^{\prime 2}{\psi }^{2}}\right) {dt} + {\left. \psi {\lambda }^{2}\left( {A}_{{u}_{0}}{\psi }^{\prime } + {B}_{{u}_{0}}\psi \right) \right| }_{{t}_{0}}^{{t}_{1}}
\]
\[
= {\int }_{J}\left( {{A}_{{u}_{0}}{\lambda }^{\prime 2}{\psi }^{2}}\right) {dt},
\]
where the boundary term vanishes because \( \lambda = \varphi /\psi \) satisfies \( \lambda \left( {t}_{0}\right) = \lambda \left( {t}_{1}\right) = 0 \), since \( \varphi \in {C}_{0}^{1}\left( J\right) \) and \( \psi > 0 \) on \( J \) .
Theorem 3.4 Let \( N = 1 \) and assume \( {u}_{0} \in {C}^{1}\left( J\right) \) is a solution of the \( E \) -L equation. If \( \exists \lambda > 0 \) such that \( {A}_{{u}_{0}}\left( t\right) \geq \lambda ,\forall t \in J \) and there exists a Jacobi field \( \psi > 0 \) on \( J \), then \( {u}_{0} \) is a strict minimum.
Proof Denote
\[
\alpha = \mathop{\inf }\limits_{J}\left( {{A}_{{u}_{0}}\left( t\right) {\psi }^{2}\left( t\right) }\right) > 0.
\]
\( \forall \varphi \in {C}_{0}^{1}\left( J\right) \), we use Lemma 3.3 and Poincaré’s inequality to obtain:
\[
{Q}_{{u}_{0}}\left( \varphi \right) = {\int }_{J}{A}_{{u}_{0}}{\psi }^{2}{\left( \frac{\varphi }{\psi }\right) }^{\prime 2}{dt}
\]
\[
\geq \alpha {\int }_{J}{\left( \frac{\varphi }{\psi }\right) }^{\prime 2}{dt}
\]
\[
\geq \alpha \frac{1}{{\left| J\right| }^{2}}{\int }_{J}{\left( \frac{\varphi }{\psi }\right) }^{2}{dt}
\]
\[
\geq \alpha \mathop{\inf }\limits_{J}\left( \frac{1}{{\psi }^{2}}\right) \frac{1}{{\left| J\right| }^{2}}{\int }_{J}{\varphi }^{2}{dt}
\]
Thus, there exists \( \mu > 0 \) such that
\[
{Q}_{{u}_{0}}\left( \varphi \right) \geq \mu {\int }_{J}{\left| \varphi \right| }^{2}{dt}
\]
The assertion now follows from Lemma 3.2.
Example 3.4 Let \( M = \left\{ {u \in {C}^{1}\left( \left\lbrack {0,1}\right\rbrack \right) \mid u\left( 0\right) = a, u\left( 1\right) = b}\right\} \) and consider the functional
\[
I\left( u\ri |
1075_(GTM233)Topics in Banach Space Theory | Definition 6.2.1 |
Definition 6.2.1. The Rademacher functions \( {\left( {r}_{k}\right) }_{k = 1}^{\infty } \) are defined on \( \left\lbrack {0,1}\right\rbrack \) by
\[
{r}_{k}\left( t\right) = \operatorname{sgn}\left( {\sin {2}^{k}{\pi t}}\right) .
\]
Alternatively, the sequence \( {\left( {r}_{k}\right) }_{k = 1}^{\infty } \) can be described as
\[
{r}_{1}\left( t\right) = \left\{ \begin{array}{ll} 1 & \text{ if }t \in \left\lbrack {0,\frac{1}{2}}\right) \\ - 1 & \text{ if }t \in \left\lbrack {\frac{1}{2},1}\right) \end{array}\right.
\]
\[
{r}_{2}\left( t\right) = \left\{ \begin{array}{ll} 1 & \text{ if }t \in \left\lbrack {0,\frac{1}{4}}\right) \cup \left\lbrack {\frac{1}{2},\frac{3}{4}}\right) \\ - 1 & \text{ if }t \in \left\lbrack {\frac{1}{4},\frac{1}{2}}\right) \cup \left\lbrack {\frac{3}{4},1}\right) \end{array}\right.
\]
\[
\vdots
\]
\[
{r}_{k + 1}\left( t\right) = \left\{ \begin{array}{ll} 1 & \text{ if }t \in \mathop{\bigcup }\limits_{{s = 1}}^{{2}^{k}}\left\lbrack {\frac{{2s} - 2}{{2}^{k + 1}},\frac{{2s} - 1}{{2}^{k + 1}}}\right) , \\ - 1 & \text{ if }t \in \mathop{\bigcup }\limits_{{s = 1}}^{{2}^{k}}\left\lbrack {\frac{{2s} - 1}{{2}^{k + 1}},\frac{2s}{{2}^{k + 1}}}\right) . \end{array}\right.
\]
That is,
\[
{r}_{k + 1} = \mathop{\sum }\limits_{{s = 1}}^{{2}^{k}}{h}_{{2}^{k} + s},\;k = 0,1,2,\ldots
\]
Thus \( {\left( {r}_{k}\right) }_{k = 1}^{\infty } \) is a block basic sequence with respect to the Haar basis in every \( {L}_{p} \) for \( 1 \leq p < \infty \) . The key properties we need are the following:
- \( {r}_{k}\left( t\right) = \pm 1 \) a.e. for all \( k \) ,
- \( {\int }_{0}^{1}{r}_{{k}_{1}}\left( t\right) {r}_{{k}_{2}}\left( t\right) \cdots {r}_{{k}_{m}}\left( t\right) {dt} = 0 \), whenever \( {k}_{1} < {k}_{2} < \cdots < {k}_{m} \) (a minimal instance is checked below).
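For instance, for \( {r}_{1} \) and \( {r}_{2} \) one can integrate over the four dyadic quarters of \( \left\lbrack {0,1}\right\rbrack \) :
\[
{\int }_{0}^{1}{r}_{1}\left( t\right) {r}_{2}\left( t\right) {dt} = \frac{1}{4}\left( {\left( 1\right) \left( 1\right) + \left( 1\right) \left( {-1}\right) + \left( {-1}\right) \left( 1\right) + \left( {-1}\right) \left( {-1}\right) }\right) = 0.
\]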
The Rademacher functions were first introduced by Rademacher in 1922 [264] with the idea of studying the problem of finding conditions under which a series of real numbers \( \sum \pm {a}_{n} \), where the signs were assigned randomly, would converge almost surely. Rademacher showed that if \( \sum {\left| {a}_{n}\right| }^{2} < \infty \), then \( \sum \pm {a}_{n} \) converges almost surely. The converse was proved in 1925 by Khintchine and Kolmogorov [171]. Historically, the subject of finding estimates for averages over all choices of signs was initiated in 1923 by the classical Khintchine's inequalities [170], but the usefulness of a probabilistic viewpoint in studying the \( {L}_{p} \) -spaces seems to have been fully appreciated quite late (around 1970).
Theorem 6.2.2 (Khintchine’s Inequalities). For every \( 1 \leq p < \infty \) there exist positive constants \( {A}_{p} \) and \( {B}_{p} \) such that for every finite sequence of scalars \( {\left( {a}_{i}\right) }_{i = 1}^{n} \) and \( n \in \mathbb{N} \) we have
\[
{A}_{p}{\left( \mathop{\sum }\limits_{{i = 1}}^{n}{\left| {a}_{i}\right| }^{2}\right) }^{1/2} \leq {\begin{Vmatrix}\mathop{\sum }\limits_{{i = 1}}^{n}{a}_{i}{r}_{i}\end{Vmatrix}}_{p} \leq {\left( \mathop{\sum }\limits_{{i = 1}}^{n}{\left| {a}_{i}\right| }^{2}\right) }^{1/2}\;\text{ if }1 \leq p < 2,
\]
and
\[
{\left( \mathop{\sum }\limits_{{i = 1}}^{n}{\left| {a}_{i}\right| }^{2}\right) }^{1/2} \leq {\begin{Vmatrix}\mathop{\sum }\limits_{{i = 1}}^{n}{a}_{i}{r}_{i}\end{Vmatrix}}_{p} \leq {B}_{p}{\left( \mathop{\sum }\limits_{{i = 1}}^{n}{\left| {a}_{i}\right| }^{2}\right) }^{1/2}\;\text{ if }p > 2.
\]
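To make the constants concrete in one case (this is not how the inequalities are proved below), for \( p = 4 \) and real scalars, expanding the fourth power and using the orthogonality properties above gives
\[
{\int }_{0}^{1}{\left| \mathop{\sum }\limits_{{i = 1}}^{n}{a}_{i}{r}_{i}\left( t\right) \right| }^{4}{dt} = \mathop{\sum }\limits_{{i = 1}}^{n}{a}_{i}^{4} + 3\mathop{\sum }\limits_{{i \neq j}}{a}_{i}^{2}{a}_{j}^{2} \leq 3{\left( \mathop{\sum }\limits_{{i = 1}}^{n}{a}_{i}^{2}\right) }^{2},
\]
so one may take \( {B}_{4} = {3}^{1/4} \) .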
We will not prove this here, but it will be derived as a consequence of a more general result below. Theorem 6.2.2 was first given in the stated form by Littlewood in 1930 [208], but Khintchine's earlier work (of which Littlewood was unaware) implied these inequalities as a consequence.
Remark 6.2.3. (a) Khintchine’s inequalities tell us that \( {\left( {r}_{i}\right) }_{i = 1}^{\infty } \) is a basic sequence equivalent to the \( {\ell }_{2} \) -basis in every \( {L}_{p} \) for \( 1 \leq p < \infty \) . In \( {L}_{\infty } \), though, one readily checks that \( {\left( {r}_{i}\right) }_{i = 1}^{\infty } \) is isometrically equivalent to the canonical \( {\ell }_{1} \) -basis.
(b) \( {\left( {r}_{i}\right) }_{i = 1}^{\infty } \) is an orthonormal sequence in \( {L}_{2} \), which yields the identity
\[
{\begin{Vmatrix}\mathop{\sum }\limits_{{i = 1}}^{n}{a}_{i}{r}_{i}\end{Vmatrix}}_{2} = {\left( \mathop{\sum }\limits_{{i = 1}}^{n}{\left| {a}_{i}\right| }^{2}\right) }^{1/2},
\]
for any choice of scalars \( \left( {a}_{i}\right) \) . But \( {\left( {r}_{i}\right) }_{i = 1}^{\infty } \) is not a complete system in \( {L}_{2} \), that is, \( \left\lbrack {r}_{i}\right\rbrack \neq {L}_{2} \) (for instance, notice that the function \( {r}_{1}{r}_{2} \) is orthogonal to the subspace \( \left\lbrack {r}_{i}\right\rbrack \) ). However, one can obtain a complete orthonormal system for \( {L}_{2} \) using the Rademacher functions by adding to \( {\left( {r}_{i}\right) }_{i = 1}^{\infty } \) the constant function \( {r}_{0} = 1 \) and the functions of the form \( {r}_{{k}_{1}}{r}_{{k}_{2}}\ldots {r}_{{k}_{n}} \) for any \( {k}_{1} < {k}_{2} < \cdots < {k}_{n} \) . This collection of functions is the system of Walsh functions.
Khintchine's inequalities can also be interpreted by saying that all the norms \( \left\{ {\parallel \cdot {\parallel }_{p} : 1 \leq p < \infty }\right\} \) are equivalent on the linear span of the Rademacher functions in \( {L}_{p} \) . It turns out that in this form, the statement can be generalized to an arbitrary Banach space. This generalization was first obtained by Kahane in 1964 [150].
For our purposes it will be convenient to replace the concrete Rademacher functions by an abstract model. To that end we will use the language and methods of probability theory (see Appendix I for a quick fix on the basics that will be required in this chapter).
Definition 6.2.4. A Rademacher sequence is a sequence of mutually independent random variables \( {\left( {\varepsilon }_{n}\right) }_{n = 1}^{\infty } \) defined on some probability space \( \left( {\Omega ,\mathbb{P}}\right) \) such that \( \mathbb{P}\left( {{\varepsilon }_{n} = 1}\right) = \mathbb{P}\left( {{\varepsilon }_{n} = - 1}\right) = \frac{1}{2} \) for every \( n \) .
The terminology is justified by the fact that the Rademacher functions \( {\left( {r}_{n}\right) }_{n = 1}^{\infty } \) are a Rademacher sequence on \( \left\lbrack {0,1}\right\rbrack \) . Thus,
\[
{\int }_{0}^{1}\begin{Vmatrix}{\mathop{\sum }\limits_{{i = 1}}^{n}{r}_{i}\left( t\right) {x}_{i}}\end{Vmatrix}{dt} = \mathbb{E}\begin{Vmatrix}{\mathop{\sum }\limits_{{i = 1}}^{n}{\varepsilon }_{i}{x}_{i}}\end{Vmatrix} = {\int }_{\Omega }\begin{Vmatrix}{\mathop{\sum }\limits_{{i = 1}}^{n}{\varepsilon }_{i}\left( \omega \right) {x}_{i}}\end{Vmatrix}d\mathbb{P}.
\]
Theorem 6.2.5 (Kahane-Khintchine Inequalities). For each \( 1 \leq p < \infty \) there exists a constant \( {C}_{p} \) such that for every Banach space \( X \) and finite sequence \( {\left( {x}_{i}\right) }_{i = 1}^{n} \) in \( X \), the following inequality holds:
\[
\mathbb{E}\begin{Vmatrix}{\mathop{\sum }\limits_{{i = 1}}^{n}{\varepsilon }_{i}{x}_{i}}\end{Vmatrix} \leq {\left( \mathbb{E}{\begin{Vmatrix}\mathop{\sum }\limits_{{i = 1}}^{n}{\varepsilon }_{i}{x}_{i}\end{Vmatrix}}^{p}\right) }^{1/p} \leq {C}_{p}\mathbb{E}\begin{Vmatrix}{\mathop{\sum }\limits_{{i = 1}}^{n}{\varepsilon }_{i}{x}_{i}}\end{Vmatrix}.
\]
We will prove the Kahane-Khintchine inequalities (and this will imply the Khintchine inequalities by taking \( X = \mathbb{R} \) or \( X = \mathbb{C} \) ), but first we shall establish three lemmas on our way to the proof. To avoid repetitions, in all three lemmas, \( \left( {\Omega ,\Sigma ,\mathbb{P}}\right) \) will be a probability space and \( X \) will be a Banach space. Let us recall that an \( X \) -valued random variable on \( \Omega \) is a function \( f : \Omega \rightarrow X \) such that \( {f}^{-1}\left( B\right) \in \Sigma \) for every Borel set \( B \subset X \) . The random variable \( f \) is symmetric if \( \mathbb{P}\left( {f \in B}\right) = \mathbb{P}\left( {-f \in B}\right) \) for all Borel subsets \( B \) of \( X \) .
Lemma 6.2.6. Let \( f : \Omega \rightarrow X \) be a symmetric random variable. Then for all \( x \in X \) we have
\[
\mathbb{P}\left( {\parallel f + x\parallel \geq \parallel x\parallel }\right) \geq \frac{1}{2}
\]
Proof. Let us take any \( x \in X \) . For every \( \omega \in \Omega \), using the convexity of the norm of \( X \), clearly \( \parallel f\left( \omega \right) + x\parallel + \parallel x - f\left( \omega \right) \parallel \geq 2\parallel x\parallel \) . Then, either \( \parallel f\left( \omega \right) + x\parallel \geq \parallel x\parallel \) or \( \parallel x - f\left( \omega \right) \parallel \geq \parallel x\parallel \) . Hence
\[
1 \leq \mathbb{P}\left( {\parallel f + x\parallel \geq \parallel x\parallel }\right) + \mathbb{P}\left( {\parallel x - f\parallel \geq \parallel x\parallel }\right) .
\]
Since \( f \) is symmetric, \( x + f \) and \( x - f \) have the same distribution, and the lemma follows.
Let \( {\left( {\varepsilon }_{i}\right) }_{i = 1}^{\infty } \) be a Rademacher sequence on \( \Omega \) . Given \( n \in \mathbb{N} \) and vectors \( {x}_{1},\ldots ,{x}_{n} \) in \( X \), we shall consider \( {\Lambda }_{m} : \Omega \rightarrow X\left( {1 \leq m \leq n}\right) \) defined by
\[
{\Lambda }_{m}\left( \omega \right) = \mathop{\sum }\limits_{{i = 1}}^{m}{\varepsilon }_{i}\left( \omega \right) {x}_{i}
\]
Lemma 6.2.7. For all \( \lambda > 0 \) ,
\[
\mathbb{P}\left( {\mathop{\max }\limits_{{m \leq n}}\begin{Vmatrix}{\Lambda }_{m}\end{Vmatrix} > \lambda }\right) \leq 2\mathbb{P}\left( {\begin{Vmatrix}{\Lambda }_{n}\end{Vmatrix} > \lambda }\right) .
\]
Proof. Given \( \lambda > 0 \), for \( m = 1,\ldots, n \) put
\[
{\Omega }_{m}^{\left( \lambda \right) } = \left\{ {\omega \in \Omega : \begin{Vmatrix}{{\Lambda }_{m}\left( \omega \right) }\end{Vmatrix} > \lambda \text{ and }\begin{V |
1059_(GTM219)The Arithmetic of Hyperbolic 3-Manifolds | Definition 7.5.4 |
Definition 7.5.4 The Tamagawa measure on \( H \) is the additive measure on \( H \) which is self-dual in the sense described for the Fourier transform associated to the canonical character \( {\psi }_{H} \) .
These Tamagawa measures can be related to the normalised Haar measures given in Definition 7.5.2 via the discriminant of \( H \) in the \( \mathcal{P} \) -adic cases.
Definition 7.5.5 Let \( K \) be a \( \mathcal{P} \) -adic field and \( H = K \) or a quaternion algebra \( A \) over \( K \) . Suppose \( K \) is a finite extension of \( {\mathbb{Q}}_{p} \) and let \( \mathcal{B} \) be a maximal order in \( H \) . Choose a \( {\mathbb{Z}}_{p} \) -basis \( \left\{ {{e}_{1},{e}_{2},\ldots ,{e}_{n}}\right\} \) of \( \mathcal{B} \) . Then the discriminant of \( H \) is \( {D}_{H} = {\begin{Vmatrix}\det \left( {T}_{H}\left( {e}_{i}{e}_{j}\right) \right) \end{Vmatrix}}_{{\mathbb{Q}}_{p}}^{-1} \) .
Note that when \( H = K \), this notion of discriminant agrees with the discriminant of the field extension \( K \mid {\mathbb{Q}}_{p} \) . (See Definition 0.1.2.) For the connection in the cases where \( H = A \), see Exercise 7.5, No. 2.
## Lemma 7.5.6
1. If \( \mathbb{R} \subset H \), then the Tamagawa measure on \( H \) is \( d{x}_{H} \), as given in Definition 7.5.2.
2. If \( \mathbb{R} ⊄ H \), the Tamagawa measure on \( H \) is \( {D}_{H}^{-1/2}d{x}_{H} \), where \( d{x}_{H} \) is given in Definition 7.5.2.
Proof: The first part is a straightforward calculation (see Exercise 7.5, No. 3).
For the second part, consider first the case where \( H = {\mathbb{Q}}_{p} \) . Let \( \Phi \) denote the characteristic function of \( {\mathbb{Z}}_{p} \) and let \( {dx} \) denote the additive Haar measure. Recall that \( {\psi }_{p}\left( x\right) = {e}^{{2\pi i} < x > } \) where \( < x > \) is the unique rational of the form \( a/{p}^{m} \) in the interval \( (0,1\rbrack \) such that \( x - < x > \in {\mathbb{Z}}_{p} \) . Now
\[
\widehat{\Phi }\left( x\right) = {\int }_{{\mathbb{Z}}_{p}}{\psi }_{p}\left( {xy}\right) {dy} = 1\;\text{ if }x \in {\mathbb{Z}}_{p}.
\]
Now suppose that \( x \notin {\mathbb{Z}}_{p} \) and so \( \langle x\rangle = a/{p}^{m} \in \left( {0,1}\right) \) . Let \( {\mathbb{Z}}_{p} = \) \( { \cup }_{i = 0}^{{p}^{m} - 1}\left( {i + {p}^{m}{\mathbb{Z}}_{p}}\right) \) and let \( \xi = \exp \left( {{2\pi i}/{p}^{m}}\right) \) . Then
\[
\widehat{\Phi }\left( x\right) = {\int }_{{p}^{m}{\mathbb{Z}}_{p}}\left( {1 + \xi + {\xi }^{2} + \cdots + {\xi }^{{p}^{m} - 1}}\right) {dy} = 0.
\]
Thus \( \widehat{\Phi } = \Phi \) and so \( {dx} \) is the Tamagawa measure in this case.
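As the simplest instance, take \( p = 2 \) and \( x = \frac{1}{2} \), so that \( m = 1 \) and \( \xi = {e}^{\pi i} = - 1 \) ; the displayed formula then reads
\[
\widehat{\Phi }\left( \frac{1}{2}\right) = {\int }_{2{\mathbb{Z}}_{2}}\left( {1 + \xi }\right) {dy} = {\int }_{2{\mathbb{Z}}_{2}}\left( {1 - 1}\right) {dy} = 0.
\]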
More generally, let \( \mathcal{B} \) denote a maximal order in \( H \) with the \( {\mathbb{Z}}_{p} \) -basis \( \left\{ {{e}_{1},{e}_{2},\ldots ,{e}_{n}}\right\} \) . Take the dual basis with respect to the trace so that \( {e}_{i}^{ * } \) is defined by \( {T}_{H}\left( {{e}_{i}^{ * }{e}_{j}}\right) = {\delta }_{ij} \) . Thus if
\[
\widetilde{\mathcal{B}} = \left\{ {x \in H \mid {T}_{H}\left( {xy}\right) \in {\mathbb{Z}}_{p}\forall y \in \mathcal{B}}\right\}
\]
then \( \left\{ {{e}_{1}^{ * },{e}_{2}^{ * },\ldots ,{e}_{n}^{ * }}\right\} \) is a \( {\mathbb{Z}}_{p} \) -basis of \( \widetilde{\mathcal{B}} \) and \( \widetilde{\widetilde{\mathcal{B}}} = \mathcal{B} \) . Let \( \Phi \) be the characteristic function of \( \mathcal{B} \) . Then in the same way as for \( {\mathbb{Z}}_{p},\widehat{\Phi } \) is the characteristic function of \( \widetilde{\mathcal{B}} \) . Thus \( \widehat{\widehat{\Phi }} = \operatorname{Vol}\left( \widetilde{\mathcal{B}}\right) \Phi \), so that the Tamagawa measure will be \( \operatorname{Vol}{\left( \widetilde{\mathcal{B}}\right) }^{-1/2}d{x}_{H} \) . Now if \( {e}_{i}^{ * } = \sum {q}_{ji}{e}_{j} \), then \( \operatorname{Vol}\left( \widetilde{\mathcal{B}}\right) = \parallel \det \left( Q\right) {\parallel }_{{\mathbb{Q}}_{p}} \), where \( Q = \left( {q}_{ij}\right) \) . However, \( {Q}^{-1} = \left( {{T}_{H}\left( {{e}_{i}{e}_{j}}\right) }\right) \) and the result follows.
This then normalises the Haar measure on the additive structures of local fields and quaternion algebras over these local fields. We now extend this to multiplicative structures and also to other related locally compact groups. Continuing to use our blanket notation \( H \), the multiplicative Tamagawa measure \( d{x}_{H}^{ * } \) on \( {H}^{ * } \) is obtained from the additive measure as in Definition 7.5.2.
For discrete groups \( G \) which arise, the chosen measure will, in general, assign to each element the value 1. Exceptionally, in the cases where \( \mathbb{R} ⊄ H \) and \( G \) is the discrete group of modules \( \begin{Vmatrix}{H}^{ * }\end{Vmatrix} \), each element is assigned its real value.
All other locally compact groups which will be considered both in this section and the following two are obtained from previously defined ones via obvious exact sequences. In these circumstances, it is required that the measures be compatible. Thus suppose that we have a short exact sequence of locally compact groups
\[
1 \rightarrow Y\overset{i}{ \rightarrow }Z\overset{j}{ \rightarrow }T \rightarrow 1
\]
with Haar measures \( {dy},{dz} \) and \( {dt} \), respectively. These measures are said to be compatible if, for every suitable function \( f \) ,
\[
{\int }_{Z}f\left( z\right) {dz} = {\int }_{T}{\int }_{Y}f\left( {i\left( y\right) z}\right) {dydt}\;\text{ where }t = j\left( z\right) .
\]
It should be noted that this depends not just on the groups involved, but on the particular exact sequence used. Given measures on two of the groups involved in the exact sequence, the measure on the third group will be defined by requiring that it be compatible with the other two and the short exact sequence.
All volumes which are calculated and used subsequently are computed using the Tamagawa measures and otherwise using compatible measures obtained from these. These local volumes will be used to obtain covolumes of arithmetic Kleinian and Fuchsian groups and so are key components going in to the volume calculations in \( §{11.1} \) . Some of the calculations are made here, others are assigned to Exercises 7.5.
Lemma 7.5.7 \( \operatorname{Vol}\left( {\mathcal{H}}^{1}\right) = \operatorname{Vol}\left\{ {x \in {\mathcal{H}}^{ * } \mid n\left( x\right) = 1}\right\} = 4{\pi }^{2} \) .
Proof: For the usual measures on \( {\mathbb{R}}^{4} \), the volume of a ball of radius \( r \) is \( {\pi }^{2}{r}^{4}/2 \) . Thus, for \( x = {x}_{1} + {x}_{2}i + {x}_{3}j + {x}_{4}{ij}, n\left( x\right) = {x}_{1}^{2} + {x}_{2}^{2} + {x}_{3}^{2} + {x}_{4}^{2} \), so that \( \parallel x\parallel = n{\left( x\right) }^{2} \) .
The volume of \( {\mathcal{H}}^{1} \) will be obtained from the short exact sequence
\[
1 \rightarrow {\mathcal{H}}^{1}\overset{i}{ \rightarrow }{\mathcal{H}}^{ * }\overset{n}{ \rightarrow }{\mathbb{R}}^{ + } \rightarrow 1
\]
Now the Tamagawa measure on \( {\mathcal{H}}^{ * } \) is \( n{\left( x\right) }^{-2}{4d}{x}_{1}d{x}_{2}d{x}_{3}d{x}_{4} \) (see Exercise 7.5, No.1), and on \( {\mathbb{R}}_{ + }^{ * } \) it is \( {t}^{-1}{dt} \) . As a suitable function on \( {\mathcal{H}}^{ * } \) choose,
\[
g\left( x\right) = \left\{ \begin{array}{ll} n{\left( x\right) }^{2} & \text{if }1/2 \leq n\left( x\right) \leq 1 \\ 0 & \text{otherwise.} \end{array}\right.
\]
Now if \( t = n\left( x\right) \), we obtain
\[
{\int }_{{\mathcal{H}}^{ * }}g\left( x\right) n{\left( x\right) }^{-2}{4d}{x}_{1}d{x}_{2}d{x}_{3}d{x}_{4} = \frac{4{\pi }^{2}}{2}\left( {1 - \frac{1}{4}}\right) = \frac{3{\pi }^{2}}{2}
\]
\[
= {\int }_{{\mathbb{R}}_{ + }^{ * }}{\int }_{{\mathcal{H}}^{1}}g\left( {i\left( y\right) x}\right) {dydt} = \operatorname{Vol}\left( {\mathcal{H}}^{1}\right) {\int }_{1/2}^{1}\frac{{t}^{2}{dt}}{t} = \frac{3}{8}\operatorname{Vol}\left( {\mathcal{H}}^{1}\right) .
\]
Equating the two expressions gives \( \operatorname{Vol}\left( {\mathcal{H}}^{1}\right) = \frac{8}{3} \cdot \frac{3{\pi }^{2}}{2} = 4{\pi }^{2} \) .
Lemma 7.5.8 Let \( \mathcal{O} \) be a maximal order in the quaternion algebra \( A \) over the \( \mathcal{P} \) - adic field \( K \) . Let \( {D}_{K} \) denote the discriminant of \( K \) and \( q = \left| {R/{\pi R}}\right| \) . Then
\[
\operatorname{Vol}\left( {\mathcal{O}}^{1}\right) = {D}_{K}^{-3/2}\left( {1 - {q}^{-2}}\right) \left\{ \begin{array}{ll} {\left( q - 1\right) }^{-1} & \text{ if }A\text{ is a division algebra } \\ 1 & \text{ if }A = {M}_{2}\left( K\right) . \end{array}\right.
\]
Proof: Note that the reduced norm \( n \) maps \( {\mathcal{O}}^{ * } \) onto \( {R}^{ * } \) (see Exercise 6.7,
No. 1) so there is an exact sequence
\[
1 \rightarrow {\mathcal{O}}^{1}\overset{i}{ \rightarrow }{\mathcal{O}}^{ * }\overset{n}{ \rightarrow }{R}^{ * } \rightarrow 1
\]
Thus for the volume of \( {\mathcal{O}}^{1} \), we have
\[
\frac{\text{ Tamagawa Vol of }{\mathcal{O}}^{ * }}{\text{ Tamagawa Vol of }{R}^{ * }} = \frac{\left( {1 - {q}^{-1}}\right) {D}_{A}^{-1/2}\text{ multiplicative Haar Vol. of }{\mathcal{O}}^{ * }}{\left( {1 - {q}^{-1}}\right) {D}_{K}^{-1/2}\text{ multiplicative Haar Vol. of }{R}^{ * }}
\]
by Lemma 7.5.6 and Definition 7.5.2. The result then follows by Lemma
7.5.3 and Exercise 7.5, No 2.
## Exercise 7.5
1. Show that the additive Haar measures on \( H \), where \( H \supset \mathbb{R} \), are as follows:
(a) \( H = \mathbb{C}, x = {x}_{1} + i{x}_{2}, d{x}_{\mathbb{C}} = {2d}{x}_{1}d{x}_{2} \) .
(b) \( H = \mathcal{H}, x = {x}_{1} + {x}_{2}i + {x}_{3}j + {x}_{4}{ij}, d{x}_{\mathcal{H}} = {4d}{x}_{1}d{x}_{2}d{x}_{3}d{x}_{4} \) .
(c) \( H = {M}_{2}\left( \mathbb{R}\right), x = \left( \begin{array}{ll} {x}_{1} & {x}_{2} \\ {x}_{3} & {x}_{4} \end{array}\right), d{x}_{{M}_{2}\left( \mathbb{R}\right) } = d{x}_{1}d{x}_{2}d{x}_{3}d{x}_{4} \) .
(d) \( H = {M}_{2}\left( \mathbb{C}\right), x = \left( \begin{array}{ll} {x}_{1} + i{x}_{2} & {x}_{3} + i{x}_{4} \\ {x}_{5} + i{x}_{6} & {x}_{7} + i{x}_{8} \end{array}\right), d{x}_{{M}_{2}\left( \mathbb{C}\right) } = d{x}_{1}\cdots d{x}_{8} \) .
2. If \( A \) is a quaternion algebra over the \( \mathcal{P} \) -adic field and \( |
1139_(GTM44)Elementary Algebraic Geometry | Definition 6.8 |
Definition 6.8. A divisor on \( C \) which is the divisor of an element of \( {K}_{C} \smallsetminus \{ 0\} \) is a principal divisor.
Definition 6.9. Two divisors \( {D}_{1} \) and \( {D}_{2} \) are linearly equivalent \( \left( {{D}_{1} \simeq {D}_{2}}\right) \) if they differ by a principal divisor \( \left( {{D}_{1} = {D}_{2} + \operatorname{div}\left( g\right) }\right. \), for some \( \left. {g \in {K}_{C}\smallsetminus \{ 0\} }\right) \) . The set of principal divisors forms a subgroup of the group of all divisors on \( C \), the quotient group being the set of linear equivalence classes of divisors on \( C \) .
A search for further conditions will shed light on possible analogues of (6.4) and (6.5). Of course, the above example shows that (6.5) does not generalize verbatim to \( C \) . (However, note that if \( D \) is a fixed divisor on \( C \), then the zero function together with all functions \( f \) of \( K \) satisfying \( D \leq \operatorname{div}\left( f\right) \) forms a complex vector space \( L\left( D\right) \) ; this follows at once from the definition of \( \operatorname{div}\left( f\right) \) and the fact that \( {\operatorname{ord}}_{P}\left( {f + g}\right) \geq \min \left\{ {{\operatorname{ord}}_{P}f,{\operatorname{ord}}_{P}g}\right\} \) . It follows from Lemma 7.1 that this vector space is finite dimensional.) What are possible natural generalizations? The answers to such questions constitute some of the most central facts about algebraic curves. One generalization of (6.5) will in fact be the Riemann-Roch theorem.
Before beginning a study of this problem, we first look at differentials, which are intimately connected to the above questions. We motivate this discussion by briefly looking at a well-spring of differentials, namely integration.
To integrate on any space \( S \) one needs some sort of measure on \( S \), perhaps given directly, perhaps induced by a metric, perhaps by a system of local coordinates, etc. In complex integration on \( \mathbb{C} \), one customarily uses the canonical measure induced by the coordinate system \( Z = X + {iY} \) . But in contrast to \( \mathbb{C} \), it is easy to see that for any nonsingular projective curve, there is never any one coordinate neighborhood covering the whole curve-we always need several neighborhoods and each coordinate neighborhood has its own canonical measure.
For instance, \( {\mathbb{C}}_{Z} \) covers all of \( \mathbb{C} \cup \{ \infty \} = {\mathbb{P}}^{1}\left( \mathbb{C}\right) \) except \( \{ \infty \} \) ; one can then choose a second copy \( {\mathbb{C}}_{w} \) of \( \mathbb{C} \) covering all of \( \mathbb{C} \cup \{ \infty \} \) except \( \{ 0\} \) , \( {\mathbb{C}}_{Z} \) and \( {\mathbb{C}}_{W} \) being related by \( W = 1/Z \) in their intersection. In this example, the distance from a point \( P \neq 0 \in {\mathbb{C}}_{Z} \) to \( \{ \infty \} \) is infinite in \( {\mathbb{C}}_{Z} \), but finite in \( {\mathbb{C}}_{W} \) ; hence the metrics in the two coordinate neighborhoods are surely different. One can get around this kind of problem by adjusting the metrics in different neighborhoods so they agree on their common part. Thus at each point common to two neighborhoods on \( C \), coordinatized by, say, \( Z \) and by \( W \), the metric element \( {dZ} \) may be modified to agree with \( {dW} \) by multiplying by a derivative: \( {dW} = \left( {{dW}/{dZ}}\right) \cdot {dZ} \), where \( W = W\left( Z\right) \) . For instance in the example of \( {\mathbb{P}}^{1}\left( \mathbb{C}\right) \) above, where \( W = 1/Z \), we have \( {dW} = \) \( - \left( {1/{Z}^{2}}\right) {dZ} \) .
On \( \mathbb{C} \), when one uses a phrase such as integrating a function \( f\left( Z\right) \), the canonical \( {dZ} \) can be left in the background. But it is the differential \( f\left( Z\right) {dZ} \) which tells a more complete story since it takes into account the underlying measure; it is the natural object to use when coordinate changes are involved.
Aside from the obvious relation to integration, a study of differentials helps to reveal the connection between the topology of \( C \) and the existence of functions with prescribed zeros and poles; this will be our main use of differentials.
We now formally define differentials on an irreducible variety; our definition is purely algebraic and has certain advantages: it affords a clean algebraic development and can be used in a very general setting. In Remark 6.14 we indicate for nonsingular plane curves how these differentials may be looked at as geometric objects on a variety.
Definition 6.10. Let \( k \subset K \) be any two fields of characteristic 0, and let \( {V}_{1} \) be the vector space over \( k \) generated by the set of indeterminate objects \( \{ {dx} \mid x \in K\} \) ; let \( {V}_{2} \) be the subspace of \( {V}_{1} \) generated by the set of elements
\[
\{ d\left( {{ax} + {by}}\right) - \left( {{adx} + {bdy}}\right), d\left( {xy}\right) - \left( {{xdy} + {ydx}}\right) \mid x, y \in K\text{ and }a, b \in k\} .
\]
Then \( {V}_{1}/{V}_{2} \) is the vector space of differentials of \( K \) over \( k \), which we denote by \( \Omega \left( {K, k}\right) \) . If \( {K}_{V} \) is the function field of an irreducible variety \( V \) over \( k \) , then the elements \( \omega \in \Omega \left( {{K}_{V}, k}\right) \) are the differentials on \( V \) .
Remark 6.11. One might more accurately call the above differentials differentials of the first order, for on varieties of dimension \( > 1 \) one may consider differentials of higher order. (See, for example, [Lang, Chapter VII].) However we shall use only differentials on curves in this book. Note that the generators of \( {V}_{2} \) simply express familiar algebraic properties of a differential.
Remark 6.12. Definition 6.10 immediately implies that \( {da} = 0 \in {V}_{1}/{V}_{2} \) for each \( a \in k \), and that if \( f \in K = k\left( {{x}_{1},\ldots ,{x}_{n}}\right) \), then \( {df} \) is just the usual total differential \( d\left( {p/q}\right) = \left( {{q\,dp} - {p\,dq}}\right) /{q}^{2} \) evaluated at \( \left( {{x}_{1},\ldots ,{x}_{n}}\right) \), where \( p \) and \( q \) are polynomials in \( \mathbb{C}\left\lbrack {{X}_{1},\ldots ,{X}_{n}}\right\rbrack \) such that \( f = p\left( {{x}_{1},\ldots ,{x}_{n}}\right) /q\left( {{x}_{1},\ldots ,{x}_{n}}\right) \) .
For our applications in this book, we now assume that \( k = \mathbb{C} \) and that \( K \) has transcendence degree one over \( \mathbb{C} \) . Thus \( K \) is the function field of an irreducible curve. In this case, for any two functions \( f \in K \) and \( g \in K \smallsetminus \mathbb{C} \) , there is a well-defined derivative \( {df}/{dg} \in K \) having the properties one would expect of a derivative. We use this fundamental fact (Theorem 6.13) to see the geometric meaning behind Definition 6.10.
Theorem 6.13. Let a field \( K \) have transcendence degree one over \( \mathbb{C} \), and let \( f \in K, g \in K \smallsetminus \mathbb{C} \) . Then there is a unique element \( \kappa \in K \) such that \( {df} = {\kappa dg} \) . (We denote \( \kappa \) by the symbol \( {df}/{dg} \) ; it is called the derivative of \( f \) with respect to \( g \) .)
Proof. Let \( {\dim }_{K}\Omega \) denote the dimension of the \( K \) -vector space \( \Omega \left( {K,\mathbb{C}}\right) \) . Theorem 6.13 will follow easily once we show \( {\dim }_{K}\Omega = 1 \) . By the theorem of the primitive element, we may write \( K = \mathbb{C}\left( {x, y}\right) \) ; suppose \( x \) is transcendental over \( \mathbb{C} \), write \( x = X \), and let a minimal polynomial of \( y \) over \( \mathbb{C}\left\lbrack X\right\rbrack \) be \( p\left( {X, Y}\right) \in \mathbb{C}\left\lbrack {X, Y}\right\rbrack \) . It follows from Remark 6.12 that \( \{ {dx},{dy}\} \) \( K \) -generates \( \Omega \left( {K,\mathbb{C}}\right) \), so \( {\dim }_{K}\Omega \leq 2 \) . Now the element \( p\left( {x, y}\right) = 0 \in \mathbb{C} \) has differential zero; but \( p\left( {x, y}\right) \) is \( p\left( {X, Y}\right) \) evaluated at \( \left( {x, y}\right) \), so \( 0 = d\left( {p\left( {x, y}\right) }\right) = {p}_{X}\left( {x, y}\right) {dx} + {p}_{Y}\left( {x, y}\right) {dy} \) (here \( {p}_{X},{p}_{Y} \) are ordinary partials with respect to the indeterminates \( X \) and \( Y \) ); that is, \( {p}_{Y}\left( {x, y}\right) {dy} = - {p}_{X}\left( {x, y}\right) {dx} \) . Since \( p \) is minimal, \( {p}_{Y}\left( {x, y}\right) \neq 0 \) , hence
\[
{dy} = \frac{-{p}_{X}\left( {x, y}\right) }{{p}_{Y}\left( {x, y}\right) }{dx}
\]
thus \( {dx} \) generates \( \Omega \left( {K,\mathbb{C}}\right) \), so \( {\dim }_{K}\Omega \leq 1 \) . To get equality, it suffices to show that \( {dx} \neq 0 \) -that is, that \( {dx} \) is not in the subspace \( {V}_{2} \) of Definition 6.10. For this, define a map from \( {V}_{1} \) to \( K \) in this way: Let \( h,{h}^{ * } \) be any two elements of \( K \), and let \( H \) be an element of \( \mathbb{C}\left( {X, Y}\right) \) such that \( h = H\left( {x, y}\right) \) . Our map is then
\[
\phi : {h}^{ * }{dh} \rightarrow {h}^{ * }\left( {{H}_{X}\left( {x, y}\right) {p}_{Y}\left( {x, y}\right) - {H}_{Y}\left( {x, y}\right) {p}_{X}\left( {x, y}\right) }\right) .
\]
It is easily seen that this map is well defined and that any element in \( {V}_{2} \) of Definition 6.10 must map to \( 0 \in K \) . Hence \( \phi \) induces a linear function from \( \Omega \left( {K,\mathbb{C}}\right) = {V}_{1}/{V}_{2} \) into \( K \) . Now \( \phi \left( {dx}\right) = 1 \cdot {p}_{Y}\left( {x, y}\right) \) (since \( {X}_{X} = {dX}/{dX} = 1 \) , and \( \left. {{X}_{Y} = 0}\right) \) . But we know \( {p}_{Y}\left( {x, y}\right) \neq 0 \), so \( \phi \left( {dx}\right) \neq 0 \) . Hence \( {dx} \neq 0 \), and therefore \( {\dim }_{K}\Omega = 1 \) .
Since \( {dx} \) \( K \) -generates \( \Omega \left( {K,\mathbb{C}}\right) \), we have \( {df} = \kappa \,{dx} \) and \( {dg} = \lambda \,{dx} \) for some \( \kappa ,\lambda \in K \) . Since \( g \notin \mathbb{C} \), the same argument with \( g \) in place of \( x \) shows \( {dg} \neq 0 \), so \( \lambda \neq 0 \) and \( {df} = \left( {\kappa /\lambda }\right) {dg} \) ; uniqueness of \( {df}/{dg} \mathrel{\text{:=}} \kappa /\lambda \) follows since \( {\dim }_{K}\Omega = 1 \) and \( {dg} \neq 0 \) .
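The derivative produced by the proof is just implicit differentiation: for \( f = y \) one gets \( {dy}/{dx} = - {p}_{X}\left( {x, y}\right) /{p}_{Y}\left( {x, y}\right) \) . A quick symbolic check, for one hypothetical curve of my own choosing, is:

```python
# Check dy/dx = -p_X/p_Y on the hypothetical curve p(X, Y) = Y^2 - X^3 - X (not from the text).
import sympy as sp

X, Y = sp.symbols('X Y')
p = Y**2 - X**3 - X

dy_dx = -sp.diff(p, X) / sp.diff(p, Y)   # the formula from the proof above
check = sp.idiff(p, Y, X)                # sympy's implicit differentiation of p = 0
print(sp.simplify(dy_dx - check))        # expected: 0
```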
1068_(GTM227)Combinatorial Commutative Algebra | Definition 5.62 |
Definition 5.62 The support-regularity of a monomial ideal \( I \) is
\[
\operatorname{supp}.\operatorname{reg}\left( I\right) = \max \left\{ {\left| {\operatorname{supp}\left( \mathbf{b}\right) }\right| - i \mid {\beta }_{i,\mathbf{b}}\left( I\right) \neq 0}\right\} ,
\]
and \( I \) is said to have a support-linear free resolution if there is a \( d \in \mathbb{N} \) such that \( \left| {\operatorname{supp}\left( m\right) }\right| = d = \operatorname{supp.reg}\left( I\right) \) for all minimal generators \( m \) of \( I \) .
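As a quick illustration of the definition, the sketch below evaluates \( \operatorname{supp.reg}\left( I\right) \) directly from a list of nonzero Betti degrees \( \left( {i,\mathbf{b}}\right) \) ; the Betti data here is invented for illustration rather than computed from an actual ideal.

```python
# Minimal sketch of Definition 5.62: supp.reg(I) = max{ |supp(b)| - i : beta_{i,b}(I) != 0 }.
def supp_reg(nonzero_betti):
    """nonzero_betti: iterable of pairs (i, b) with b an exponent vector and beta_{i,b} != 0."""
    return max(sum(1 for bj in b if bj > 0) - i for i, b in nonzero_betti)

# Hypothetical nonzero Betti degrees (i, b), not taken from a real resolution:
betti = [(0, (2, 0, 1)), (0, (0, 3, 1)), (1, (2, 3, 1)), (2, (2, 3, 2))]
print(supp_reg(betti))   # the values |supp(b)| - i above are 2, 2, 2, 1, so this prints 2
```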
For squarefree ideals the notions of regularity and support-regularity coincide, because the only degrees we ever care about are squarefree. In particular, the two sentences in the following result specialize to the Eagon-Reiner Theorem and Theorem 5.59 when \( \mathbf{a} = \left( {1,\ldots ,1}\right) \) .
Theorem 5.63 If a monomial ideal \( I \) is generated in degrees preceding \( \mathbf{a} \) , then \( S/I \) is Cohen-Macaulay if and only if the Alexander dual ideal \( {I}^{\left\lbrack \mathbf{a}\right\rbrack } \) has support-linear free resolution. More generally, \( \operatorname{pd}\left( {S/I}\right) = \operatorname{supp.reg}\left( {I}^{\left\lbrack \mathbf{a}\right\rbrack }\right) \) .
The optimal insight provided by Theorem 5.63 comes in a context combining monomial matrices for free and injective resolutions, the latter of which we will introduce in Chapter 11. For a glimpse of this context, see Exercise 11.2. Essentially, decreases in the dimensions of the indecomposable injective summands in a minimal injective resolution of \( S/I \) correspond precisely to increases in the supports of the degrees in a minimal free resolution of \( {I}^{\left\lbrack \mathbf{a}\right\rbrack } \) . The former detect the projective dimension of \( S/I \) by the Auslander-Buchsbaum formula. Thus, when the supports of syzygy degrees of \( {I}^{\left\lbrack \mathbf{a}\right\rbrack } \) increase as slowly as possible, so that \( {I}^{\left\lbrack \mathbf{a}\right\rbrack } \) has support-linear free resolution, the dimensions of indecomposable summands in a minimal injective resolution of \( S/I \) decrease as slowly as possible. This slowest possible decrease in dimension postpones the occurrence of summands isomorphic to injective hulls of \( \mathbb{k} \) as long as possible, making the depth of \( S/I \) as large as possible. As a result, \( S/I \) must be Cohen-Macaulay (see Theorem 13.37.7).
At the beginning of this section, we noted that Alexander duality interchanges two types of homological invariants, by which we meant projective dimension and regularity. Theorem 5.61 extends this interchange to a flip on a family of refinements of this pair of invariants. In contrast, the crux of Theorem 5.63 is that we could have meant a different interchange: namely the switch of Betti numbers for Bass numbers (Definition 11.37): whereas Betti numbers determine the regularity, the projective dimension can be reinterpreted in terms of depth-and hence in terms of Bass numbers-via the Auslander-Buchsbaum formula.
## Exercises
5.1 Prove Theorem 5.11 directly, by tensoring the coKoszul complex \( {\mathbb{K}}^{ \bullet } \) with \( S/I \) .
5.2 Prove Corollary 5.12 by applying Theorem 5.6 to Corollary 1.40.
5.3 Compute the Alexander dual of \( \left\langle {{x}^{4},{y}^{4},{x}^{3}z,{y}^{3}z,{x}^{2}{z}^{2},{y}^{2}{z}^{2}, x{z}^{3}, y{z}^{3}}\right\rangle \) with respect to \( \mathbf{a} = \left( {5,6,8}\right) \) .
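One way to attack Exercise 5.3 by machine is to intersect the irreducible ideals \( {\mathfrak{m}}^{\mathbf{a} \smallsetminus \mathbf{b}} \) over the minimal generators \( {\mathbf{x}}^{\mathbf{b}} \) of \( I \), where \( {\left( \mathbf{a} \smallsetminus \mathbf{b}\right) }_{i} = {a}_{i} + 1 - {b}_{i} \) if \( {b}_{i} \geq 1 \) and \( 0 \) otherwise. The sketch below assumes this intersection description of \( {I}^{\left\lbrack \mathbf{a}\right\rbrack } \) and encodes monomials as exponent vectors in the variable order \( \left( {x, y, z}\right) \) ; it prints the generators it finds rather than asserting an answer.

```python
# Sketch: Alexander dual I^[a] as an intersection of irreducible monomial ideals.
from itertools import product

def minimalize(gens):
    """Discard generators divisible by another generator."""
    gens = set(gens)
    return [g for g in gens
            if not any(h != g and all(hi <= gi for hi, gi in zip(h, g)) for h in gens)]

def intersect(gens1, gens2):
    """Intersection of two monomial ideals: componentwise max (lcm) over pairs of generators."""
    return minimalize(tuple(max(u, v) for u, v in zip(g1, g2))
                      for g1, g2 in product(gens1, gens2))

def alexander_dual(gens, a):
    dual = None
    for b in gens:
        comp = []                       # generators x_i^(a_i + 1 - b_i) of m^{a \ b}
        for i, (ai, bi) in enumerate(zip(a, b)):
            if bi >= 1:
                e = [0] * len(a)
                e[i] = ai + 1 - bi
                comp.append(tuple(e))
        dual = comp if dual is None else intersect(dual, comp)
    return sorted(dual)

# <x^4, y^4, x^3 z, y^3 z, x^2 z^2, y^2 z^2, x z^3, y z^3>  with  a = (5, 6, 8)
I = [(4, 0, 0), (0, 4, 0), (3, 0, 1), (0, 3, 1), (2, 0, 2), (0, 2, 2), (1, 0, 3), (0, 1, 3)]
print(alexander_dual(I, (5, 6, 8)))
```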
5.4 Resume the notation from Exercise 3.6.
(a) Turning the picture there upside down yields the staircase diagram for an Alexander dual ideal \( {I}^{\left\lbrack \mathbf{a}\right\rbrack } \) . What is \( \mathbf{a} \) ?
(b) On a photocopy of the upside down staircase diagram, draw the Buchberger graph of \( {I}^{\left\lbrack \mathbf{a}\right\rbrack } \) . Compare it to the graph \( \operatorname{Buch}\left( I\right) \) that you drew in Exercise 3.6.
(c) Use the labels on the planar map determined by \( \operatorname{Buch}\left( {I}^{\left\lbrack \mathbf{a}\right\rbrack }\right) \) to relabel the vertices, edges, and regions in the planar map determined by \( \operatorname{Buch}\left( I\right) \) .
(d) Show that this relabeled planar map is colabeled and determines the resolution Alexander dual to the usual one from \( \operatorname{Buch}\left( I\right) \), as in Theorem 5.37.
5.5 For any monomial ideal \( I \), let \( {\mathbf{a}}_{I} \) be the exponent on the least common multiple of all minimal generators of \( I \), and define the tight Alexander dual \( {I}^{ \star } = {I}^{\left\lbrack {\mathbf{a}}_{I}\right\rbrack } \) . Find a monomial ideal \( I \) such that \( {\left( {I}^{ \star }\right) }^{ \star } \neq I \) . Characterize such ideals \( I \) .
5.6 Show that tight Alexander duality commutes with radicals: \( \operatorname{rad}{\left( I\right) }^{ \star } = \operatorname{rad}\left( {I}^{ \star }\right) \) .
5.7 Prove from first principles that a monomial ideal is irreducible as in Definition 5.16 if and only if it cannot be expressed as an intersection of two (perhaps ungraded) ideals strictly containing it.
5.8 The socle of a module \( M \) is the set \( \operatorname{soc}\left( M\right) = \left( {0{ : }_{M}\mathfrak{m}}\right) \) of elements in \( M \) annihilated by every variable. If \( M = S/I \) is artinian, prove that \( {\mathbf{x}}^{\mathbf{b}} \in \operatorname{soc}\left( M\right) \) if and only if \( {\mathfrak{m}}^{\mathbf{b} + \mathbf{1}} \) is an irreducible component of \( I \) . Use Corollary 5.39 and Hochster's formula to construct another proof of Theorem 5.42.
5.9 The monomial localization of a monomial ideal \( I \subseteq \mathbb{k}\left\lbrack \mathbf{x}\right\rbrack \) at \( {x}_{i} \) is the ideal \( {\left. I\right| }_{{x}_{i} = 1} \in \mathbb{k}\left\lbrack {\mathbf{x} \smallsetminus {x}_{i}}\right\rbrack \) that results after setting \( {x}_{i} = 1 \) in all generators of \( I \) . Suppose that a labeled cell complex \( X \) supports a minimal cellular resolution of \( S/\left( {I + {\mathfrak{m}}^{\mathbf{a} + \mathbf{1}}}\right) \) . Explain how to recover a minimal cellular resolution of \( {\left. I\right| }_{{x}_{i} = 1} \) from the faces of \( X \) containing the vertex \( v \in X \) labeled by \( {\mathbf{a}}_{v} = {x}_{i}^{{a}_{i} + 1} \) . This set of faces is called the star of \( v \), and the minimal cellular resolution will be supported on the link of \( v \) (also known as the vertex figure of \( X \) in a neighborhood of \( v \) ).
5.10 Suppose that a colabeled cell complex \( Y \) supports a minimal cocellular resolution of \( S/\left( {I + {\mathfrak{m}}^{\mathbf{a} + \mathbf{1}}}\right) \) . Explain why the set of faces of \( Y \) whose labels have \( {i}^{\text{th }} \) coordinate \( {a}_{i} + 1 \) is another colabeled complex. Show that it supports a minimal cocellular resolution of the monomial localization \( {\left. I\right| }_{{x}_{i} = 1} \) (Exercise 5.9).
5.11 Exhibit an example demonstrating that if the condition of minimality in Theorem 5.42 is omitted, then the intersection given there can fail to be an irreducible decomposition-even a redundant one. Nonetheless, prove that if the intersection is taken over a suitable subset of facets, then the conclusion still holds.
5.12 If \( {\mathcal{F}}_{X} \) is a minimal cellular resolution of an artinian quotient, then a face \( G \in X \) is in the boundary of \( X \) if and only if its label \( {\mathbf{a}}_{G} \) fails to have full support.
5.13 Prove that weakly cellular resolutions (Exercise 4.3) of artinian quotients are cellular if they are minimal.
5.14 Prove that the cohull resolution \( {\mathcal{F}}^{{\operatorname{cohull}}_{\mathbf{a}}\left( I\right) } \) of \( I \) with respect to a can be viewed as a weakly cellular free resolution \( {\mathcal{F}}_{{\text{cohull }}_{\mathbf{a}}\left( I\right) } \) . Hint: Consider the polyhedron dual to \( {\mathcal{P}}_{t} \) from Definition 4.16, and use Theorem 4.31.
5.15 Prove that if \( \operatorname{hull}\left( {{I}^{\left\lbrack \mathbf{a}\right\rbrack } + {\mathfrak{m}}^{\mathbf{a} + \mathbf{1}}}\right) \) is minimal, then \( {\mathcal{F}}_{{\operatorname{cohull}}_{\mathbf{a}}\left( I\right) } \) is a minimal cellular (not weakly cellular) resolution.
\( {\mathbf{{5.16}}}^{ * } \) Open problem: Prove that all cohull resolutions are cellular.
5.17 Replace " \( {\mathcal{F}}_{X} \) a minimal cellular resolution" in Theorem 5.42 by " \( {\mathcal{F}}_{X} \) the (possibly nonminimal) hull resolution", and conclude with these hypotheses that the intersection \( \mathop{\bigcap }\limits_{G}{\mathfrak{m}}^{{\widehat{\mathbf{a}}}_{G}} \) over facets \( G \in X \) is a (possibly redundant) irreducible decomposition of \( I \) . Hint: Use Exercises 4.3 and 5.14.
5.18 Define a vector \( \mathbf{b} \in {\mathbb{N}}^{n} \) to lie on the staircase surface of a monomial ideal \( I \) if \( {\mathbf{x}}^{\mathbf{b}} \in I \) but \( {\mathbf{x}}^{\mathbf{b} - \operatorname{supp}\left( \mathbf{b}\right) } \notin I \) . Prove that every face label on the hull complex hull \( \left( I\right) \) lies on the staircase surface of \( I \) . Hint: This can be done directly, using the convex geometry of hull complexes, or with Exercises 4.3 and 5.14.
5.19 Prove that if a monomial ideal \( I \) is not generated in a single \( \mathbb{N} \) -graded degree, then \( I \) has a minimal first syzygy between two generators of different \( \mathbb{N} \) -degrees. Conclude that if the module \( M \) in Lemma 5.55 is a monomial ideal, then \( M \) can only have linear free resolution if its generators all have the same total degree.
## Notes
In one form or another, Alexander duality has been appearing in the context of commutative algebra for decades. A seminal such use of it came in Hochster's paper [Hoc77]; our proof of Theorem 5.6 more or less constitutes his proof of Corollary 5.12. Sharper focus has been g |
111_Three Dimensional Navier-Stokes Equations-James_C._Robinson,_Jos_L._Rodrigo,_Witold_Sadows(z-lib.org | Definition 2.63 |
Definition 2.63. The map \( f : W \rightarrow Y \) has a Newton approximation at \( \bar{x} \in W \) if there exist \( r > 0,\alpha > 0 \) and a map \( A : B\left( {\bar{x}, r}\right) \rightarrow L\left( {X, Y}\right) \) such that \( B\left( {\bar{x}, r}\right) \subset W \) and
\[
\forall x \in B\left( {\bar{x}, r}\right) ,\;\parallel f\left( x\right) - f\left( \bar{x}\right) - A\left( x\right) \left( {x - \bar{x}}\right) \parallel \leq \alpha \parallel x - \bar{x}\parallel .
\]
(2.15)
A map \( A : V \rightarrow L\left( {X, Y}\right) \) is a slant derivative of \( f \) at \( \bar{x} \) if \( V \) is a neighborhood of \( \bar{x} \) contained in \( W \) and if for every \( \alpha > 0 \) there exists some \( r > 0 \) such that \( B\left( {\bar{x}, r}\right) \subset V \) and relation (2.15) holds.
Thus \( f \) is differentiable at \( \bar{x} \) if and only if \( f \) has a slant derivative at \( \bar{x} \) that is constant on some neighborhood of \( \bar{x} \) . But condition (2.15) is much less demanding, as the next lemma shows.
Lemma 2.64. The following assertions are equivalent:
(a) \( f \) has a Newton approximation A that is bounded near \( \bar{x} \)
(b) \( f \) has a slant derivative A at \( \bar{x} \) that is bounded on some neighborhood of \( \bar{x} \)
(c) \( f \) is stable at \( \bar{x} \), i.e., there exist \( c > 0, r > 0 \) such that
\[
\forall x \in B\left( {\bar{x}, r}\right) ,\;\parallel f\left( x\right) - f\left( \bar{x}\right) \parallel \leq c\parallel x - \bar{x}\parallel .
\]
(2.16)
Proof. (a) \( \Rightarrow \) (c) If for some \( \alpha ,\beta > 0 \) and some \( r > 0 \) a map \( A : B\left( {\bar{x}, r}\right) \rightarrow L\left( {X, Y}\right) \) is such that (2.15) holds with \( \parallel A\left( x\right) \parallel \leq \beta \) for all \( x \in B\left( {\bar{x}, r}\right) \), then by the triangle inequality, relation (2.16) holds with \( c \mathrel{\text{:=}} \alpha + \beta \) .
(c) \( \Rightarrow \) (b) We use a corollary of the Hahn-Banach theorem asserting the existence of some map \( s : X \rightarrow {X}^{ * } \) such that \( s\left( x\right) \left( x\right) = \parallel x\parallel \) and \( \parallel s\left( x\right) \parallel = 1 \) for all \( x \in X \) . Suppose (2.16) holds. Then setting \( A\left( \bar{x}\right) = 0 \) and for \( w \in W \smallsetminus \{ \bar{x}\}, x \in X \) ,
\[
A\left( w\right) \left( x\right) = \langle s\left( {w - \bar{x}}\right), x\rangle \frac{f\left( w\right) - f\left( \bar{x}\right) }{\parallel w - \bar{x}\parallel },
\]
we easily check that \( \parallel A\left( w\right) \parallel \leq c \) for all \( w \in W \) and that \( A\left( x\right) \left( {x - \bar{x}}\right) = f\left( x\right) - f\left( \bar{x}\right) \) for all \( x \in W \), so that (2.15) holds with \( \alpha = 0 \) and \( A \) is a slant derivative of \( f \) at \( \bar{x} \) .
(b) \( \Rightarrow \) (a) is obvious, a slant derivative of \( f \) at \( \bar{x} \) being a Newton approximation of \( f \) at \( \bar{x} \) .
In the elementary Newton method that follows, we first assume that (2.14) has a solution \( \bar{x} \) .
Proposition 2.65. Let \( \bar{x} \) be a solution to (2.14), let \( \alpha ,\beta, r > 0 \) satisfy \( \gamma \mathrel{\text{:=}} {\alpha \beta } < 1 \) , and let \( A : B\left( {\bar{x}, r}\right) \rightarrow L\left( {X, Y}\right) \) be such that (2.15) holds, \( A\left( x\right) \) being invertible with \( \begin{Vmatrix}{A{\left( x\right) }^{-1}}\end{Vmatrix} \leq \beta \) for all \( x \in B\left( {\bar{x}, r}\right) \) . Then the sequence \( \left( {x}_{n}\right) \) given by
\[
{x}_{n + 1} \mathrel{\text{:=}} {x}_{n} - A{\left( {x}_{n}\right) }^{-1}\left( {f\left( {x}_{n}\right) }\right)
\]
(2.17)
is well defined for every initial point \( {x}_{0} \in B\left( {\bar{x}, r}\right) \) and converges linearly to \( \bar{x} \) with rate \( \gamma \) .
The last assertion means that \( \begin{Vmatrix}{{x}_{n + 1} - \bar{x}}\end{Vmatrix} \leq \gamma \begin{Vmatrix}{{x}_{n} - \bar{x}}\end{Vmatrix} \), hence \( \begin{Vmatrix}{{x}_{n} - \bar{x}}\end{Vmatrix} \leq c{\gamma }^{n} \) for some \( c > 0 \) (in fact \( c \mathrel{\text{:=}} \begin{Vmatrix}{{x}_{0} - \bar{x}}\end{Vmatrix} \) ). Thus, if \( A \) is a slant derivative of \( f \) at \( \bar{x} \), then \( \left( {x}_{n}\right) \) converges superlinearly to \( \bar{x} \) : for all \( \varepsilon > 0 \) there is some \( k \in \mathbb{N} \) such that \( \begin{Vmatrix}{{x}_{n + 1} - \bar{x}}\end{Vmatrix} \leq \) \( \varepsilon \begin{Vmatrix}{{x}_{n} - \bar{x}}\end{Vmatrix} \) for all \( n \geq k \) .
Proof. Using the fact that \( f\left( \bar{x}\right) = 0 \), so that
\[
{x}_{n + 1} - \bar{x} = A{\left( {x}_{n}\right) }^{-1}\left( {f\left( \bar{x}\right) - f\left( {x}_{n}\right) + A\left( {x}_{n}\right) \left( {{x}_{n} - \bar{x}}\right) }\right) ,
\]
we inductively obtain that
\[
\begin{Vmatrix}{{x}_{n + 1} - \bar{x}}\end{Vmatrix} \leq \beta \begin{Vmatrix}{f\left( {x}_{n}\right) - f\left( \bar{x}\right) - A\left( {x}_{n}\right) \left( {{x}_{n} - \bar{x}}\right) }\end{Vmatrix} \leq {\alpha \beta }\begin{Vmatrix}{{x}_{n} - \bar{x}}\end{Vmatrix},
\]
so that \( {x}_{n + 1} \in B\left( {\bar{x}, r}\right) \) : the whole sequence \( \left( {x}_{n}\right) \) is well defined and converges to \( \bar{x} \) .
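A one-dimensional numerical illustration of iteration (2.17) (the function and the constant slope below are my own choices, not taken from the text): with a crude constant \( A \) the errors decay geometrically, that is, linearly with some rate \( \gamma < 1 \), while taking \( A\left( x\right) = {f}^{\prime }\left( x\right) \), a slant derivative, gives the familiar superlinear Newton behavior.

```python
# Iteration (2.17) in one dimension: x_{n+1} = x_n - f(x_n) / A(x_n).
import math

def newton_like(f, A, x0, steps=10):
    xs = [x0]
    for _ in range(steps):
        x = xs[-1]
        xs.append(x - f(x) / A(x))
    return xs

f = lambda x: x * x - 2.0          # hypothetical equation f(x) = 0 with root sqrt(2)
root = math.sqrt(2.0)

crude  = newton_like(f, lambda x: 3.0, x0=1.0)      # constant Newton approximation
newton = newton_like(f, lambda x: 2.0 * x, x0=1.0)  # slant derivative f'(x)

print([abs(x - root) for x in crude][:6])   # errors shrink by a roughly constant factor
print([abs(x - root) for x in newton][:6])  # errors shrink much faster (superlinearly)
```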
Under reinforced assumptions, one can show the existence of a solution.
Theorem 2.66 (Kantorovich). Let \( {x}_{0} \in W,\alpha ,\beta > 0, r > 0 \) with \( \gamma \mathrel{\text{:=}} {\alpha \beta } < 1 \) , \( B\left( {{x}_{0}, r}\right) \subset W \) and let \( A : B\left( {{x}_{0}, r}\right) \rightarrow L\left( {X, Y}\right) \) be such that for all \( x \in B\left( {{x}_{0}, r}\right) \) the map \( A\left( x\right) : X \rightarrow Y \) has a right inverse \( B\left( x\right) : Y \rightarrow X \) satisfying \( \parallel B\left( x\right) \left( \cdot \right) \parallel \leq \beta \parallel \cdot \parallel \) and
\[
\forall w, x \in B\left( {{x}_{0}, r}\right) ,\;\parallel f\left( w\right) - f\left( x\right) - A\left( x\right) \left( {w - x}\right) \parallel \leq \alpha \parallel w - x\parallel .
\]
(2.18)
If \( \begin{Vmatrix}{f\left( {x}_{0}\right) }\end{Vmatrix} < {\beta }^{-1}\left( {1 - \gamma }\right) r \) and if \( f \) is continuous, the sequence given by the Newton iteration
\[
{x}_{n + 1} \mathrel{\text{:=}} {x}_{n} - B\left( {x}_{n}\right) \left( {f\left( {x}_{n}\right) }\right)
\]
(2.19)
is well defined and converges to a solution \( \bar{x} \) of (2.14). Moreover, one has \( \begin{Vmatrix}{{x}_{n} - \bar{x}}\end{Vmatrix} \leq \) \( r{\gamma }^{n} \) for all \( n \in \mathbb{N} \) and \( \begin{Vmatrix}{\bar{x} - {x}_{0}}\end{Vmatrix} \leq \beta {\left( 1 - \gamma \right) }^{-1}\begin{Vmatrix}{f\left( {x}_{0}\right) }\end{Vmatrix} < r. \)
Here \( B\left( x\right) \) is a right inverse of \( A\left( x\right) \) if \( A\left( x\right) \circ B\left( x\right) = {I}_{Y};B\left( x\right) \) is not assumed to be linear.
Proof. Let us prove by induction that \( {x}_{n} \in B\left( {{x}_{0}, r}\right) ,\begin{Vmatrix}{{x}_{n + 1} - {x}_{n}}\end{Vmatrix} \leq \beta {\gamma }^{n}\begin{Vmatrix}{f\left( {x}_{0}\right) }\end{Vmatrix} \), and \( \begin{Vmatrix}{f\left( {x}_{n}\right) }\end{Vmatrix} \leq {\gamma }^{n}\begin{Vmatrix}{f\left( {x}_{0}\right) }\end{Vmatrix} \) . For \( n = 0 \) these relations are obvious. Assuming that they are valid for \( n < k \), we get
\[
\begin{Vmatrix}{{x}_{k} - {x}_{0}}\end{Vmatrix} \leq \mathop{\sum }\limits_{{n = 0}}^{{k - 1}}\begin{Vmatrix}{{x}_{n + 1} - {x}_{n}}\end{Vmatrix} \leq \beta \begin{Vmatrix}{f\left( {x}_{0}\right) }\end{Vmatrix}\mathop{\sum }\limits_{{n = 0}}^{\infty }{\gamma }^{n} = \beta \begin{Vmatrix}{f\left( {x}_{0}\right) }\end{Vmatrix}{\left( 1 - \gamma \right) }^{-1} < r,
\]
or \( {x}_{k} \in B\left( {{x}_{0}, r}\right) \), and since \( f\left( {x}_{k - 1}\right) + A\left( {x}_{k - 1}\right) \left( {{x}_{k} - {x}_{k - 1}}\right) = 0 \), from (2.18),(2.19), we have
\[
\begin{Vmatrix}{f\left( {x}_{k}\right) }\end{Vmatrix} \leq \begin{Vmatrix}{f\left( {x}_{k}\right) - f\left( {x}_{k - 1}\right) - A\left( {x}_{k - 1}\right) \left( {{x}_{k} - {x}_{k - 1}}\right) }\end{Vmatrix} \leq \alpha \begin{Vmatrix}{{x}_{k} - {x}_{k - 1}}\end{Vmatrix} \leq {\gamma }^{k}\begin{Vmatrix}{f\left( {x}_{0}\right) }\end{Vmatrix}
\]
and
\[
\begin{Vmatrix}{{x}_{k + 1} - {x}_{k}}\end{Vmatrix} \leq \beta \begin{Vmatrix}{f\left( {x}_{k}\right) }\end{Vmatrix} \leq \beta {\gamma }^{k}\begin{Vmatrix}{f\left( {x}_{0}\right) }\end{Vmatrix}.
\]
Since \( \gamma < 1 \), the sequence \( \left( {x}_{n}\right) \) is a Cauchy sequence, hence converges to some \( \bar{x} \in X \) satisfying \( \begin{Vmatrix}{\bar{x} - {x}_{0}}\end{Vmatrix} \leq \beta \begin{Vmatrix}{f\left( {x}_{0}\right) }\end{Vmatrix}{\left( 1 - \gamma \right) }^{-1} < r \) . Moreover, by the continuity of \( f \), we get \( f\left( \bar{x}\right) = \mathop{\lim }\limits_{n}f\left( {x}_{n}\right) = 0 \) . Finally,
\[
\begin{Vmatrix}{{x}_{n} - \bar{x}}\end{Vmatrix} \leq \mathop{\lim }\limits_{{p \rightarrow + \infty }}\begin{Vmatrix}{{x}_{n} - {x}_{p}}\end{Vmatrix} \leq \mathop{\lim }\limits_{{p \rightarrow + \infty }}\mathop{\sum }\limits_{{k = n}}^{{p - 1}}\begin{Vmatrix}{{x}_{k + 1} - {x}_{k}}\end{Vmatrix} \leq r{\gamma }^{n}.
\]
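For a concrete (and again entirely hypothetical) one-dimensional instance of Theorem 2.66, the snippet below checks the hypothesis \( \begin{Vmatrix}{f\left( {x}_{0}\right) }\end{Vmatrix} < {\beta }^{-1}\left( {1 - \gamma }\right) r \) and then verifies the a priori estimate \( \begin{Vmatrix}{\bar{x} - {x}_{0}}\end{Vmatrix} \leq \beta {\left( 1 - \gamma \right) }^{-1}\begin{Vmatrix}{f\left( {x}_{0}\right) }\end{Vmatrix} \) numerically.

```python
# Kantorovich-style check for the hypothetical f(x) = x^2 - 2 with slope A(x) = 3 on B(x0, r).
f = lambda x: x * x - 2.0
x0, r = 1.3, 0.5                      # the ball B(x0, r) is the interval [0.8, 1.8]
beta = 1.0 / 3.0                      # B(x)(y) = y / 3 is a right inverse of A(x) = 3
# |f(w) - f(x) - 3(w - x)| = |w - x| |w + x - 3|; the factor |w + x - 3| is maximized
# at the endpoints of the interval, giving a constant alpha for (2.18).
alpha = max(abs(w + x - 3.0) for w in (0.8, 1.8) for x in (0.8, 1.8))
gamma = alpha * beta
assert gamma < 1 and abs(f(x0)) < (1 - gamma) * r / beta   # hypotheses of Theorem 2.66

x = x0
for _ in range(50):                   # the Newton iteration (2.19)
    x = x - beta * f(x)
print(abs(x - x0) <= beta * abs(f(x0)) / (1 - gamma))      # expected: True
```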
We deduce from Kantorovich's theorem a result that is the root of important estimates in nonlinear analysis.
Theorem 2.67 (Lyusternik-Graves theorem). Let \( X \) and \( Y \) be Banach spaces, let \( W \) be an open subset of \( X \), and let \( g : W \rightarrow Y \) be circa-differentiable at some \( \bar{x} \in W \) with a surjective derivative \( {Dg}\left( \bar{x}\right) \) . Then \( g \) is open at \( \bar{x} \) . More precisely, there exist some \( \rho ,\sigma ,\kappa > 0 \) such that \( g \) has a right inverse \( h : B\left( {g\left( \bar{x}\right) ,\sigma }\right) \rightarrow W \) satisfying \( \parallel h\left( y\right) - \bar{x}\parallel \leq \kappa \para |
109_The rising sea Foundations of Algebraic Geometry | Definition 1.1 |
Definition 1.1. A hyperplane in \( V \) is a subspace \( H \) of codimension 1. The reflection with respect to \( H \) is the linear transformation \( {s}_{H} : V \rightarrow V \) that is the identity on \( H \) and is multiplication by -1 on the (1-dimensional) orthogonal complement \( {H}^{ \bot } \) of \( H \) . If \( \alpha \) is a nonzero vector in \( {H}^{ \bot } \), so that \( H = {\alpha }^{ \bot } \), we will sometimes write \( {s}_{\alpha } \) instead of \( {s}_{H} \) .
Example 1.2. Let \( s : {\mathbb{R}}^{n} \rightarrow {\mathbb{R}}^{n} \) interchange the first two coordinates, i.e.,
\[
s\left( {{x}_{1},{x}_{2},{x}_{3},\ldots ,{x}_{n}}\right) = \left( {{x}_{2},{x}_{1},{x}_{3},\ldots ,{x}_{n}}\right) .
\]
Equivalently, \( s \) transposes the first two standard basis vectors \( {e}_{1},{e}_{2} \) and fixes the others. Then \( s \) is the identity on the hyperplane \( {x}_{1} - {x}_{2} = 0 \), which is the orthogonal complement of \( \alpha \mathrel{\text{:=}} {e}_{1} - {e}_{2} \), and \( s\left( \alpha \right) = - \alpha \) . So \( s \) is the reflection \( {s}_{\alpha } \) .
For future reference, we derive a formula for \( {s}_{\alpha } \) . Given \( x \in V \), write \( x = \) \( h + {\lambda \alpha } \) with \( h \in H \) and \( \lambda \in \mathbb{R} \) . Taking the inner product of both sides with \( \alpha \) , we obtain \( \lambda = \langle \alpha, x\rangle /\langle \alpha ,\alpha \rangle \), where the angle brackets denote the inner product in \( V \) . Then \( {s}_{\alpha }\left( x\right) = h - {\lambda \alpha } = x - {2\lambda \alpha } \), and hence
\[
{s}_{\alpha }\left( x\right) = x - 2\frac{\langle \alpha, x\rangle }{\langle \alpha ,\alpha \rangle }\alpha .
\]
(1.1)
In words, this says that the mirror image of \( x \) with respect to \( {\alpha }^{ \bot } \) is obtained by subtracting twice the component of \( x \) in the direction of \( \alpha \), thereby changing the sign of that component. Suppose, for instance, that \( \alpha = {e}_{1} - {e}_{2} \) as in Example 1.2; then \( \langle \alpha ,\alpha \rangle = 2 \), so equation (1.1) becomes
\[
{s}_{\alpha }\left( x\right) = x - \langle \alpha, x\rangle \alpha .
\]
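A small numerical check of formula (1.1) (the ambient dimension and the vectors are my own choices): with \( \alpha = {e}_{1} - {e}_{2} \) the reflection should simply swap the first two coordinates, as in Example 1.2, and applying it twice should give the identity.

```python
import numpy as np

def reflect(alpha, x):
    """Formula (1.1): s_alpha(x) = x - 2 <alpha, x> / <alpha, alpha> alpha."""
    return x - 2 * np.dot(alpha, x) / np.dot(alpha, alpha) * alpha

alpha = np.array([1.0, -1.0, 0.0, 0.0])       # e_1 - e_2 in R^4
x = np.array([3.0, 7.0, 2.0, 5.0])
print(reflect(alpha, x))                                   # expected: [7. 3. 2. 5.]
print(np.allclose(reflect(alpha, reflect(alpha, x)), x))   # s_alpha is an involution
```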
Definition 1.3. A finite reflection group is a finite group \( W \) of invertible linear transformations of \( V \) generated by reflections \( {s}_{H} \), where \( H \) ranges over a set of hyperplanes.
The group law is of course composition. We will sometimes refer to the pair \( \left( {W, V}\right) \) as a finite reflection group when it is necessary to emphasize the vector space \( V \) on which \( W \) acts.
The requirement that \( W \) be finite is a very strong one. Suppose, for instance, that \( \dim V = 2 \) and that \( W \) is generated by two reflections \( s \mathrel{\text{:=}} {s}_{H} \) and \( {s}^{\prime } \mathrel{\text{:=}} {s}_{{H}^{\prime }} \) . Then the rotation \( s{s}^{\prime } \in W \) has infinite order (and hence \( W \) is infinite) unless the angle between the lines \( H \) and \( {H}^{\prime } \) is a rational multiple of \( \pi \) . The following criterion is often used to verify that a given group generated by reflections is finite:
Lemma 1.4. Let \( \Phi \) be a finite set of nonzero vectors in \( V \), and let \( W \) be the group generated by the reflections \( {s}_{\alpha }\left( {\alpha \in \Phi }\right) \) . If \( \Phi \) is invariant under the action of \( W \), then \( W \) is finite.
Proof. We will show that \( W \) is isomorphic to a group of permutations of the finite set \( \Phi \) . Let \( {V}_{1} \) be the subspace of \( V \) spanned by \( \Phi \), and let \( {V}_{0} \) be its orthogonal complement. Then \( {V}_{0} = \mathop{\bigcap }\limits_{{\alpha \in \Phi }}{\alpha }^{ \bot } \), which is the fixed-point set \( {V}^{W} \mathrel{\text{:=}} \{ v \in V \mid {wv} = v \) for all \( w \in W\} \) . In view of the orthogonal decomposition \( V = {V}_{0} \oplus {V}_{1} \), it follows that an element of \( W \) is completely determined by its action on \( {V}_{1} \) and hence by its action on \( \Phi \) .
The group \( W \) defined in the lemma will be denoted by \( {W}_{\Phi } \) . Such groups arise classically in the theory of Lie algebras, where \( \Phi \) is the root system associated with a complex semisimple Lie algebra and \( {W}_{\Phi } \) is the corresponding Weyl group. (This explains the use of the letter \( W \) for a finite reflection group.)
We will not need the precise definition of "root system," but the interested reader can find it in Appendix B. For now, we need to know only that a root system satisfies the hypotheses of Lemma 1.4 as well as an integrality condition that forces \( {W}_{\Phi } \) to leave a lattice invariant. It will be convenient to have a name for sets \( \Phi \) as in the lemma that are not necessarily root systems in the classical sense.
Definition 1.5. A set \( \Phi \) satisfying the hypotheses of Lemma 1.4 will be called a generalized root system. The elements of \( \Phi \) will be called roots. We will always assume (without loss of generality) that our generalized root systems are reduced, in the sense that \( \pm \alpha \) (for \( \alpha \in \Phi \) ) are the only scalar multiples of \( \alpha \) that are again roots. Thus there is exactly one pair \( \pm \alpha \) for each generating reflection in the statement of Lemma 1.4.
To emphasize the distinction between generalized root systems and the classical ones that leave a lattice invariant, we will sometimes refer to the classical ones as crystallographic root systems.
It is also convenient to have some terminology for the sort of decomposition of \( V \) that arose in the proof of Lemma 1.4. Let \( W \) be a group generated by reflections \( {s}_{H}\left( {H \in \mathcal{H}}\right) \), where \( \mathcal{H} \) is a set of hyperplanes. Let \( {V}_{0} \) be the fixed-point set
\[
{V}^{W} = \mathop{\bigcap }\limits_{{H \in \mathcal{H}}}H
\]
Definition 1.6. We call \( {V}_{0} \) the inessential part of \( V \), and we call its orthogonal complement \( {V}_{1} \) the essential part of \( V \) . The pair \( \left( {W, V}\right) \) is called essential if \( {V}_{1} = V \), or, equivalently, if \( {V}_{0} = 0 \) . The dimension of \( {V}_{1} \) is called the rank of the finite reflection group \( W \) .
The study of a general \( \left( {W, V}\right) \) is easily reduced to the essential case. Indeed, \( {V}_{1} \) is \( W \) -invariant since \( {V}_{0} \) is, and clearly \( {\left( {V}_{1}\right) }^{W} = 0 \) ; so we have an orthogonal decomposition \( V = {V}_{0} \oplus {V}_{1} \), where the action of \( W \) is trivial on the first summand and essential on the second. We may therefore identify \( W \) with a group acting on \( {V}_{1} \), and as such, \( W \) is essential (and still generated by reflections). If \( W \) is the group \( {W}_{\Phi } \) associated with a generalized root system, then \( W \) is essential if and only if \( \Phi \) spans \( V \) .
Exercise 1.7. Show that every finite reflection group \( W \) has the form \( {W}_{\Phi } \) for some generalized root system \( \Phi \) .
## 1.2 Examples
There are two classical families of examples of finite reflection groups. The first, as we have already indicated, consists of Weyl groups of (crystallographic) root systems. The second consists of symmetry groups of regular solids. We will not assume that the reader knows anything about either of these two subjects. But it will be convenient to use the language of root systems or regular solids informally as we discuss examples. It is a fact that all finite reflection groups can be explained in terms of one or both of these theories; we will return to this in the next section.
Example 1.8. The group \( W \) of order 2 generated by a single reflection \( {s}_{\alpha } \) is a finite reflection group of rank 1. After passing to the essential part of \( V \), we may identify \( W \) with the group \( \{ \pm 1\} \) acting on \( \mathbb{R} \) by multiplication. It is the group of symmetries of the regular solid \( \left\lbrack {-1,1}\right\rbrack \) in \( \mathbb{R} \) . It is also the Weyl group of the root system \( \Phi \mathrel{\text{:=}} \{ \pm \alpha \} \), which is called the root system of type \( {\mathrm{A}}_{1} \) .
Example 1.9. Let \( V \) be 2-dimensional, and choose two hyperplanes (lines) that intersect at an angle of \( \pi /m \) for some integer \( m \geq 2 \) . Let \( s \) and \( t \) be the corresponding reflections and let \( W \) be the group \( \langle s, t\rangle \) they generate. [Here and throughout this book we use angle brackets to denote the group generated by a given set.] Then the product \( \rho \mathrel{\text{:=}} {st} \) is a rotation through an angle of \( {2\pi }/m \) and hence is of order \( m \) . Moreover, \( s \) conjugates \( \rho \) to \( s\left( {st}\right) s = {ts} = {\rho }^{-1} \) and similarly for \( t \), so the cyclic subgroup \( C \mathrel{\text{:=}} \langle \rho \rangle \) of order \( m \) is normal in \( W \) . Finally, the quotient \( W/C \) is easily seen to be of order 2 ; hence \( W \) is indeed a finite reflection group, of order \( {2m} \) .
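The dihedral example is easy to reproduce numerically. The sketch below (with \( m = 5 \), my own choice) builds the two reflections as \( 2 \times 2 \) matrices, closes up the group they generate by brute force, and confirms that it has \( {2m} \) elements with \( \rho = {st} \) satisfying \( {\rho }^{m} = 1 \).

```python
import numpy as np

def reflection_about_line(theta):
    """Matrix of the reflection of R^2 fixing the line through 0 at angle theta."""
    c, s = np.cos(2 * theta), np.sin(2 * theta)
    return np.array([[c, s], [s, -c]])

m = 5
s = reflection_about_line(0.0)
t = reflection_about_line(np.pi / m)          # mirrors meeting at angle pi/m

def close_to(A, B):
    return np.allclose(A, B, atol=1e-9)

elems, frontier = [np.eye(2)], [np.eye(2)]    # naive closure of the generated group
while frontier:
    new = []
    for g in frontier:
        for h in (s, t):
            gh = g @ h
            if not any(close_to(gh, e) for e in elems):
                elems.append(gh)
                new.append(gh)
    frontier = new

print(len(elems))                                              # expected: 2 * m = 10
rho = s @ t
print(close_to(np.linalg.matrix_power(rho, m), np.eye(2)))     # rho^m is the identity
```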
This group \( W \) is called the dihedral group of order \( {2m} \), and we will denote it by \( {D}_{2m} \) . If \( m \geq 3, W \) is the group of symmetries of a regular \( m \) -gon. If \( m = 3,4 \), or 6, then \( W \) can also be described as the Weyl group of a root system \( \Phi \), said to be of type \( {\mathrm{A}}_{2},{\mathrm{\;B}}_{2} \), or \( {\mathrm{G}}_{2} \), respectively. The root system of type \( {\mathrm{A}}_{2}\left( {m = 3}\right) \) consists of 6 equally spaced vectors of the same length, as shown in Figure 1.1, which also shows the three reflecting hyperplanes (lines). There are two oppositely oriented root vectors for each hyperplane. To get
![85b011f4-34bf-48b4-8882-cd79e6f4beb0_33_0.jpg](images/85b011f4-34bf-48b4-8882-cd79e6f4beb0_33_0.jpg)
Fig. 1.1. The root system of type \( {\mathrm{A}}_{2} \)
18_Algebra Chapter 0 | Definition 7.11 |
Definition 7.11. Let \( H \) be a normal subgroup of a group \( G \) . The quotient group of \( G \) modulo \( H \), denoted\( {}^{32} \) \( G/H \), is the group \( G/ \sim \) obtained from the relation \( \sim \) defined above. In terms of (left-) cosets, the product in \( G/H \) is defined by
\[
\left( {aH}\right) \left( {bH}\right) \mathrel{\text{:=}} \left( {ab}\right) H.
\]
The identity element \( {e}_{G/H} \) of the quotient group \( G/H \) is the coset of the identity, \( {e}_{G}H = H \) .
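A tiny computational sketch of this definition (the choice \( G = {S}_{3} \), \( H = {A}_{3} \) is mine): cosets are stored as frozensets of permutations, and the well-definedness of \( \left( {aH}\right) \left( {bH}\right) \mathrel{\text{:=}} \left( {ab}\right) H \), which is exactly where normality of \( H \) enters, is checked by brute force.

```python
from itertools import permutations

def compose(p, q):
    """(p q)(i) = p(q(i)) for permutations stored as tuples of images."""
    return tuple(p[q[i]] for i in range(len(q)))

G = list(permutations(range(3)))                    # S_3
c = (1, 2, 0)                                       # a 3-cycle
H = {(0, 1, 2), c, compose(c, c)}                   # A_3, a normal subgroup of S_3

def coset(a):
    return frozenset(compose(a, h) for h in H)      # the left coset aH

print(len({coset(a) for a in G}))                   # the index [S_3 : A_3] = 2

# (aH)(bH) := (ab)H is well defined: (ab)H depends only on the cosets aH and bH.
ok = all(coset(compose(a2, b2)) == coset(compose(a, b))
         for a in G for b in G
         for a2 in coset(a) for b2 in coset(b))
print(ok)                                           # expected: True
```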
By Proposition 7.3, the quotient function
\[
\pi : G \rightarrow G/H
\]
sending \( g \in G \) to \( {gH} = {Hg} \) is a group homomorphism and is universal with respect to group homomorphisms \( \varphi : G \rightarrow {G}^{\prime } \) such that \( {aH} = {bH} \Rightarrow \varphi \left( a\right) = \varphi \left( b\right) \) . This universal property is extremely useful, so I will grace it with theorem status:
Theorem 7.12. Let \( H \) be a normal subgroup of a group \( G \) . Then for every group homomorphism \( \varphi : G \rightarrow {G}^{\prime } \) such that \( H \subseteq \ker \varphi \) there exists a unique group homomorphism \( \widetilde{\varphi } : G/H \rightarrow {G}^{\prime } \) so that the diagram
![23387543-548b-40c2-8595-200756212a0f_116_0.jpg](images/23387543-548b-40c2-8595-200756212a0f_116_0.jpg)
commutes.
Proof. We only need to match the stated universal property with the one we proved in Proposition 7.3, and indeed,
\[
H \subseteq \ker \varphi \Leftrightarrow \left( {\forall h \in H}\right) : \varphi \left( h\right) = {e}_{{G}^{\prime }}
\]
is equivalent to
\[
\left( {\forall a, b \in G}\right) : a{b}^{-1} \in H \Rightarrow \varphi \left( {a{b}^{-1}}\right) = {e}_{{G}^{\prime }}
\]
---
\( {}^{32} \) In a large display I sometimes use the full ’fraction’ notation \( \frac{G}{H} \) .
---
that is, to
\[
\left( {\forall a, b \in G}\right) : a{b}^{-1} \in H \Rightarrow \varphi \left( a\right) = \varphi \left( b\right)
\]
and finally, keeping in mind how the relation \( \sim \) corresponding to \( H \) is defined,
\[
\left( {\forall a, b \in G}\right) : a \sim b \Rightarrow \varphi \left( a\right) = \varphi \left( b\right) ,
\]
the condition giving the universal property in Proposition 7.3
7.5. Example. The reader is already very familiar with an important class of examples: the cyclic groups \( \mathbb{Z}/n\mathbb{Z} \) . Indeed, in §2.3 we defined \( \mathbb{Z}/n\mathbb{Z} \) as the set of equivalence classes in \( \mathbb{Z} \) with respect to the congruence equivalence relation
\[
\left( {\forall a, b \in \mathbb{Z}}\right) : \;a \equiv b{\;\operatorname{mod}\;n} \Leftrightarrow n \mid \left( {b - a}\right) .
\]
Now we recognize that \( n \mid \left( {b - a}\right) \) is equivalent to
\[
b - a \in n\mathbb{Z}
\]
which is the relation \( { \sim }_{L} \) corresponding (in ’abelian’ notation) to the subgroup \( n\mathbb{Z} \) of \( \mathbb{Z} \) . This subgroup is of course normal, since \( \mathbb{Z} \) is abelian. The ’congruence classes \( {\;\operatorname{mod}\;n} \) ’ are nothing but the cosets of the subgroup \( n\mathbb{Z} \) in \( \mathbb{Z} \) ; using abelian notation for cosets, we could write
\[
{\left\lbrack a\right\rbrack }_{n} = a + \left( {n\mathbb{Z}}\right)
\]
Of course the operation defined on \( \mathbb{Z}/n\mathbb{Z} \) in §2.3 matches precisely the one defined above for quotient groups. This justifies the notation \( \mathbb{Z}/n\mathbb{Z} \) introduced in §2.3.
The reader can already appreciate in this simple context the usefulness of Theorem 7.12 Let \( g \in G \) be an element of order \( n \) and consider the exponential map
\[
{\epsilon }_{g} : \mathbb{Z} \rightarrow G,\;N \mapsto {g}^{N}.
\]
By Corollary 1.11,
\[
\ker {\epsilon }_{g} = \{ N \in \mathbb{Z} \mid N\text{ is a multiple of }\left| g\right| \} = n\mathbb{Z}.
\]
Theorem 7.12 then implies right away that \( {\epsilon }_{g} \) factors through the quotient:
![23387543-548b-40c2-8595-200756212a0f_117_0.jpg](images/23387543-548b-40c2-8595-200756212a0f_117_0.jpg)
That is, there is an induced map
\[
\mathbb{Z}/n\mathbb{Z} \rightarrow \langle g\rangle
\]
In fact, the 'canonical decomposition' of I 2.8 implies that this is an isomorphism (verifying that \( \langle g\rangle \) is cyclic in the sense of Definition 4.7, as the reader should have checked 'by hand' already in Exercise 6.4). We will formalize this observation in general in the next section.
Also note that \( \left| g\right| = n = \left| {\langle g\rangle }\right| \) in this case.
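A quick numerical illustration of this example (the specific group and element are my own choices): take \( g = 2 \) in the multiplicative group \( {\left( \mathbb{Z}/11\mathbb{Z}\right) }^{ * } \) . The code finds \( n = \left| g\right| \), checks that \( {\epsilon }_{g}\left( N\right) = {g}^{N} \) depends only on \( N \) modulo \( n \), and checks that the induced map \( \mathbb{Z}/n\mathbb{Z} \rightarrow \langle g\rangle \) is a bijection.

```python
p, g = 11, 2                           # g = 2 in (Z/11Z)^*, a hypothetical example

x, n = g % p, 1                        # find the order n = |g|
while x != 1:
    x, n = (x * g) % p, n + 1
print(n)                               # expected: 10

# epsilon_g factors through Z/nZ: g^N depends only on the class of N mod n ...
print(all(pow(g, N, p) == pow(g, N % n, p) for N in range(200)))
# ... and the induced map Z/nZ -> <g> is injective, hence an isomorphism onto <g>.
print(len({pow(g, r, p) for r in range(n)}) == n)
```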
7.6. kernel \( \Leftrightarrow \) normal. If \( H \) is a normal subgroup, we have now constructed in gory detail a group \( G/H \) and a surjective homomorphism
\[
\pi : G \rightarrow G/H\text{.}
\]
What is the kernel of \( \pi \) ? The identity of \( G/H \) is the coset \( {e}_{G}H \), that is, \( H \) itself. Therefore
\[
\ker \pi = \{ g \in G \mid {gH} = H\} = H.
\]
This observation completes the circle of ideas begun in §7.1: there we had noticed that every kernel (of a group homomorphism) is a normal subgroup; and now we have verified that every normal subgroup is in fact a kernel (of some group homomorphism). I encapsulate this in the slogan
\[
\text{ kernel } \Leftrightarrow \text{ normal : }
\]
in group theory\( {}^{33} \), 'kernel' and 'normal subgroup' are equivalent concepts.
For example, every subgroup in an abelian group is the kernel of some homomorphism: yet another indication that life is simpler in \( \mathrm{{Ab}} \) than in \( \mathrm{{Grp}} \) .
## Exercises
7.1. \( \vartriangleright \) List all subgroups of \( {S}_{3} \) (cf. Exercise 6.13) and determine which subgroups are normal and which are not normal. [§7.1]
7.2. Is the image of a group homomorphism necessarily a normal subgroup of the target?
7.3. \( \vartriangleright \) Verify that the equivalent conditions for normality given in §7.1 are indeed equivalent. \( \left\lbrack {§{7.1}}\right\rbrack \)
7.4. Prove that the relation defined in Exercise 5.10 on a free abelian group \( F = \) \( {F}^{ab}\left( A\right) \) is compatible with the group structure. Determine the quotient \( F/ \sim \) as a better known group.
7.5. \( \neg \) Define an equivalence relation \( \sim \) on \( {\mathrm{{SL}}}_{2}\left( \mathbb{Z}\right) \) by letting \( A \sim {A}^{\prime } \Leftrightarrow {A}^{\prime } = \) \( \pm A \) . Prove that \( \sim \) is compatible with the group structure. The quotient \( {\mathrm{{SL}}}_{2}\left( \mathbb{Z}\right) / \sim \) is denoted \( {\operatorname{PSL}}_{2}\left( \mathbb{Z}\right) \) and is called the modular group; it would be a serious contender in a contest for 'the most important group in mathematics', due to its role in algebraic geometry and number theory. Prove that \( {\operatorname{PSL}}_{2}\left( \mathbb{Z}\right) \) is generated by the (cosets of the) matrices
\[
\left( \begin{array}{rr} 0 & - 1 \\ 1 & 0 \end{array}\right) \text{ and }\left( \begin{array}{rr} 1 & - 1 \\ 1 & 0 \end{array}\right)
\]
(You will not need to work very hard, if you use the result of Exercise 6.10.) Note that the first has order 2 in \( {\operatorname{PSL}}_{2}\left( \mathbb{Z}\right) \), the second has order 3, and their product has infinite order. [9.14]
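The order claims in this exercise are easy to probe numerically (this is only a finite check, not a proof that the product has infinite order); below, \( S \) and \( T \) are my labels for the two displayed matrices, and "trivial in \( {\operatorname{PSL}}_{2}\left( \mathbb{Z}\right) \)" means equal to \( \pm I \).

```python
import numpy as np

S = np.array([[0, -1], [1, 0]])
T = np.array([[1, -1], [1, 0]])
I2 = np.eye(2, dtype=int)

def trivial_in_psl2(M):
    return np.array_equal(M, I2) or np.array_equal(M, -I2)

print(trivial_in_psl2(S @ S))          # S^2 = -I, so the coset of S has order 2
print(trivial_in_psl2(T @ T @ T))      # T^3 = -I, so the coset of T has order 3

M, hit = I2.copy(), False              # powers of the product never return to +-I ...
for _ in range(100):
    M = M @ (S @ T)
    if trivial_in_psl2(M):
        hit = True
        break
print(hit)                             # expected: False (... at least up to the 100th power)
```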
---
\( {}^{33} \) We will run into analogous observations in ring theory, where we will verify that kernels and ideals coincide, and for modules, as kernels and submodules again coincide.
---
7.6. Let \( G \) be a group, and let \( n \) be a positive integer. Consider the relation
\[
a \sim b \Leftrightarrow \left( {\exists g \in G}\right) a{b}^{-1} = {g}^{n}.
\]
- Show that in general \( \sim \) is not an equivalence relation.
- Prove that \( \sim \) is an equivalence relation if \( G \) is commutative, and determine the corresponding subgroup of \( G \) .
7.7. Let \( G \) be a group, \( n \) a positive integer, and let \( H \subseteq G \) be the subgroup generated by all elements of order \( n \) in \( G \) . Prove that \( H \) is normal.
7.8. \( \vartriangleright \) Prove Proposition 7.6. [§7.3]
7.9. State and prove the 'mirror' statements of Propositions 7.4 and 7.6, leading to the description of relations satisfying \( \left( {\dagger \dagger }\right) \) .
7.10. \( \neg \) Let \( G \) be a group, and \( H \subseteq G \) a subgroup. With notation as in Exercise 6.7, show that \( H \) is normal in \( G \) if and only if \( \forall \gamma \in \operatorname{Inn}\left( G\right) ,\gamma \left( H\right) \subseteq H \) .
Conclude that if \( H \) is normal in \( G \), then there is an interesting homomorphism \( \operatorname{Inn}\left( G\right) \rightarrow \operatorname{Aut}\left( H\right) \) . [8.25]
7.11. \( \vartriangleright \) Let \( G \) be a group, and let \( \left\lbrack {G, G}\right\rbrack \) be the subgroup of \( G \) generated by all elements of the form \( {ab}{a}^{-1}{b}^{-1} \) . (This is the commutator subgroup of \( G \) ; we will return to it in [IV13.3]) Prove that \( \left\lbrack {G, G}\right\rbrack \) is normal in \( G \) . (Hint: With notation as in Exercise 4.8, \( g \cdot {ab}{a}^{-1}{b}^{-1} \cdot {g}^{-1} = {\gamma }_{g}\left( {{ab}{a}^{-1}{b}^{-1}}\right) \) .) Prove that \( G/\left\lbrack {G, G}\right\rbrack \) is commutative. 7.12, IV 3.3
7.12. \( \vartriangleright \) Let \( F = F\left( A\right) \) be a free group, and let \( f : A \rightarrow G \) be a set-function from the set \( A \) to a commutative group \( G \) . Prove that \( f \) induces a unique homomorphism \( F/\left\lbrack {F, F}\right\rbrack \rightarrow G \), where \( \left\lbrack {F, F}\right\rbrack \) is the commutator subgroup of \( F \) defined in Exercise 7.11. (Use Theorem 7.12) Conclude that \( F/\left\lbrack {F, F}\right\rbrack \cong {F}^{ab}\left( A\right) \) . (Use Proposition I 5.4.) [6.4, 7.13, VI 1.20]
7.13. \( \neg \) Let \( A, B \) be sets and \( F\left( A\right), F\left( B\right) \) the corresponding free groups. Assume \( F\left( A\right) \cong F\left( B\right) \) . If |
1042_(GTM203)The Symmetric Group | Definition 3.7.1 |
Definition 3.7.1 If \( \mu \subseteq \lambda \) as Ferrers diagrams, then the corresponding skew diagram, or skew shape, is the set of cells
\[
\lambda /\mu = \{ c : c \in \lambda \text{ and }c \notin \mu \} .
\]
A skew diagram is normal if \( \mu = \varnothing \) . ∎
If \( \lambda = \left( {3,3,2,1}\right) \) and \( \mu = \left( {2,1,1}\right) \), then we have the skew diagram
\[
\lambda /\mu = \begin{array}{ccc}  &  & \bullet \\  & \bullet & \bullet \\  & \bullet &  \\ \bullet &  &  \end{array}
\]
Of course, normal shapes are the left-justified ones we have been considering all along.
The definitions of skew tableaux, standard skew tableaux, and so on, are all as expected. In particular, the definition of the row word of a tableau still makes sense in this setting. Thus we can say that two skew partial tableaux \( P, Q \) are Knuth equivalent, written \( P\overset{K}{ \cong }Q \), if
\[
{\pi }_{P}\overset{K}{ \cong }{\pi }_{Q}
\]
Similar definitions hold for the other equivalence relations that we have introduced. Note that if \( \pi = {x}_{1}{x}_{2}\ldots {x}_{n} \), then we can make \( \pi \) into a skew tableau by putting \( {x}_{i} \) in the cell \( \left( {n - i + 1, i}\right) \) for all \( i \) . This object is called the antidiagonal strip tableau associated with \( \pi \) and is also denoted by \( \pi \) . For example, if \( \pi = {3142} \) (a good approximation, albeit without the decimal point), then
![fe1808d3-ed76-4667-ba97-eb284d29fcc8_126_0.jpg](images/fe1808d3-ed76-4667-ba97-eb284d29fcc8_126_0.jpg)
So \( \pi \overset{K}{ \cong }\sigma \) as permutations if and only if \( \pi \overset{K}{ \cong }\sigma \) as tableaux.
We now come to the definition of a jeu de taquin slide, which is essential to all that follows.
Definition 3.7.2 Given a partial tableau \( P \) of shape \( \lambda /\mu \), we perform a forward slide on \( P \) from cell \( c \) as follows.
F1 Pick \( c \) to be an inner corner of \( \mu \) .
F2 While \( c \) is not an inner corner of \( \lambda \) do
Fa If \( c = \left( {i, j}\right) \), then let \( {c}^{\prime } \) be the cell of \( \min \left\{ {{P}_{i + 1, j},{P}_{i, j + 1}}\right\} \) .
Fb Slide \( {P}_{{c}^{\prime }} \) into cell \( c \) and let \( c \mathrel{\text{:=}} {c}^{\prime } \) .
If only one of \( {P}_{i + 1, j},{P}_{i, j + 1} \) exists in step Fa, then the minimum is taken to be that single value. We denote the resulting tableau by \( {j}^{c}\left( P\right) \) . Similarly, a backward slide on \( P \) from cell \( c \) produces a tableau \( {j}_{c}\left( P\right) \) as follows
B1 Pick \( c \) to be an outer corner of \( \lambda \) .
B2 While \( c \) is not an outer corner of \( \mu \) do
Ba If \( c = \left( {i, j}\right) \), then let \( {c}^{\prime } \) be the cell of \( \max \left\{ {{P}_{i - 1, j},{P}_{i, j - 1}}\right\} \) .
Bb Slide \( {P}_{{c}^{\prime }} \) into cell \( c \) and let \( c \mathrel{\text{:=}} {c}^{\prime } \) . ∎
By way of illustration, let
\[
P = \begin{array}{lllll} & & & 6 & 8 \\ & 2 & 4 & 5 & 9. \\ 1 & 3 & 7 & & \end{array}
\]
We let a dot indicate the position of the empty cell as we perform a forward slide from \( c = \left( {1,3}\right) \) .
<table><tr><td></td><td></td><td>0</td><td>6</td><td>8</td><td></td><td></td><td>4</td><td>6</td><td>8</td><td></td><td></td><td>4</td><td>6</td><td>8</td><td></td><td></td><td>4</td><td>6</td><td>8</td></tr><tr><td></td><td>2</td><td>4</td><td>5</td><td>9</td><td></td><td>2</td><td>-</td><td>5</td><td>9</td><td></td><td>2</td><td>5</td><td>-</td><td>9</td><td></td><td>2</td><td>5</td><td>9</td><td>0</td></tr><tr><td>1</td><td>3</td><td>7</td><td></td><td></td><td>1</td><td>3</td><td>7</td><td></td><td></td><td>1</td><td>3</td><td>7</td><td></td><td></td><td>1</td><td>3</td><td>7</td><td></td><td></td></tr></table>
Thus
\[
{j}^{c}\left( P\right) = \begin{array}{llll} & 4 & 6 & 8 \\ 2 & 5 & 9 & \\ 1 & 3 & 7 & \end{array}
\]
A backward slide from \( c = \left( {3,4}\right) \) looks like the following.
<table><tr><td></td><td></td><td></td><td>6</td><td>8</td><td></td><td></td><td></td><td>6</td><td>8</td><td></td><td></td><td></td><td>6</td><td>8</td><td></td><td></td><td></td><td>6</td><td>8</td></tr><tr><td></td><td>2</td><td>4</td><td>5</td><td>9</td><td></td><td>2</td><td>4</td><td>5</td><td>9</td><td></td><td>2</td><td>0</td><td>5</td><td>9</td><td></td><td>-</td><td>2</td><td>5</td><td>9</td></tr><tr><td>1</td><td>3</td><td>7</td><td>-</td><td></td><td>1</td><td>3</td><td></td><td>7</td><td></td><td>1</td><td>3</td><td>4</td><td>7</td><td></td><td>1</td><td>3</td><td>4</td><td>7</td><td></td></tr></table>
So ![fe1808d3-ed76-4667-ba97-eb284d29fcc8_127_0.jpg](images/fe1808d3-ed76-4667-ba97-eb284d29fcc8_127_0.jpg)
Note that a slide is an invertible operation. Specifically, if \( c \) is a cell for a forward slide on \( P \) and the cell vacated by the slide is \( d \), then a backward slide into \( d \) restores \( P \) . In symbols,
\[
{j}_{d}{j}^{c}\left( P\right) = P.
\]
(3.13)
Similarly,
\[
{j}^{c}{j}_{d}\left( P\right) = P.
\]
(3.14)
if the roles of \( d \) and \( c \) are reversed.
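A minimal computational sketch of a forward slide, storing a skew tableau as a dictionary keyed by cells \( \left( {i, j}\right) \) (my own encoding): run on the illustration above, it reproduces \( {j}^{c}\left( P\right) \) for \( c = \left( {1,3}\right) \).

```python
def forward_slide(P, c):
    """Forward slide (Definition 3.7.2) on the skew partial tableau P from the empty cell c."""
    P = dict(P)
    i, j = c
    while True:
        below, right = P.get((i + 1, j)), P.get((i, j + 1))
        if below is None and right is None:      # the empty cell is an inner corner of lambda
            return P
        if right is None or (below is not None and below < right):
            P[(i, j)] = P.pop((i + 1, j)); i += 1    # slide the smaller neighbor up
        else:
            P[(i, j)] = P.pop((i, j + 1)); j += 1    # slide the smaller neighbor left

# The tableau P illustrated above, rows numbered from the top and columns from the left.
P = {(1, 4): 6, (1, 5): 8,
     (2, 2): 2, (2, 3): 4, (2, 4): 5, (2, 5): 9,
     (3, 1): 1, (3, 2): 3, (3, 3): 7}
Q = forward_slide(P, (1, 3))
for r in (1, 2, 3):                              # prints the rows 4 6 8 / 2 5 9 / 1 3 7
    print(sorted((col, Q[(row, col)]) for (row, col) in Q if row == r))
```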
Of course, we may want to make many slides in succession.
Definition 3.7.3 A sequence of cells \( \left( {{c}_{1},{c}_{2},\ldots ,{c}_{l}}\right) \) is a slide sequence for a tableau \( P \) if we can legally form \( P = {P}_{0},{P}_{1},\ldots ,{P}_{l} \), where \( {P}_{i} \) is obtained from \( {P}_{i - 1} \) by performing a slide into cell \( {c}_{i} \) . Partial tableaux \( P \) and \( Q \) are equivalent, written \( P \cong Q \), if \( Q \) can be obtained from \( P \) by some sequence of slides. ∎
This equivalence relation is the same as Knuth equivalence, as the next series of results shows.
Proposition 3.7.4 ([Schü 76]) If \( P, Q \) are standard skew tableaux, then
\[
P \cong Q \Rightarrow P\overset{K}{ \cong }Q.
\]
Proof. By induction, it suffices to prove the theorem when \( P \) and \( Q \) differ by a single slide. In fact, if we call the operation in steps Fb or Bb of the slide definition a move, then we need to demonstrate the result only when \( P \) and \( Q \) differ by a move. (The row word of a tableau with a hole in it can still be defined by merely ignoring the hole.)
The conclusion is trivial if the move is horizontal because then \( {\pi }_{P} = {\pi }_{Q} \) . If the move is vertical, then we can clearly restrict to the case where \( P \) and \( Q \) have only two rows. So suppose that \( x \) is the element being moved and that
<table><thead><tr><th></th><th>\( {R}_{l} \)</th><th>\( x \)</th><th>\( {R}_{r} \)</th></tr></thead><tr><td>\( {S}_{l} \)</td><td></td><td>\( \bullet \)</td><td>\( {S}_{r} \)</td></tr></table>
<table><thead><tr><th></th><th>\( {R}_{l} \)</th><th>·</th><th colspan="2">\( {R}_{r} \)</th></tr></thead><tr><td>\( {S}_{l} \)</td><td></td><td>\( x \)</td><td>\( {S}_{r} \)</td><td></td></tr></table>
where \( {R}_{l} \) and \( {S}_{l} \) (respectively, \( {R}_{r} \) and \( {S}_{r} \) ) are the left (respectively, right) portions of the two rows.
Now induct on the number of elements in \( P \) (or \( Q \) ). If both tableaux consist only of \( x \), then we are done.
Now suppose \( \left| {R}_{r}\right| > \left| {S}_{r}\right| \) . Let \( y \) be the rightmost element of \( {R}_{r} \) and let \( {P}^{\prime },{Q}^{\prime } \) be \( P, Q \), respectively, with \( y \) removed. By our assumption \( {P}^{\prime } \) and \( {Q}^{\prime } \) are still skew tableaux, so applying induction yields
\[
{\pi }_{P} = {\pi }_{{P}^{\prime }}y\overset{K}{ \cong }{\pi }_{{Q}^{\prime }}y = {\pi }_{Q}
\]
The case \( \left| {S}_{l}\right| > \left| {R}_{l}\right| \) is handled similarly.
Thus we are reduced to considering \( \left| {R}_{r}\right| = \left| {S}_{r}\right| \) and \( \left| {R}_{l}\right| = \left| {S}_{l}\right| \) . Say
\[
{R}_{l} = {x}_{1}\ldots {x}_{j},\;{R}_{r} = {y}_{1}\ldots {y}_{k},
\]
\[
{S}_{l} = {z}_{1}\ldots {z}_{j},\;{S}_{r} = {w}_{1}\ldots {w}_{k}.
\]
By induction, we may assume that one of \( j \) or \( k \) is positive. We will handle the situation where \( j > 0 \), leaving the other case to the reader. The following simple lemma will prove convenient.
Lemma 3.7.5 Suppose \( {a}_{1} < {a}_{2} < \cdots < {a}_{n} \) .
1. If \( x < {a}_{1} \), then \( {a}_{1}\ldots {a}_{n}x\overset{K}{ \cong }{a}_{1}x{a}_{2}\ldots {a}_{n} \) .
2. If \( x > {a}_{n} \), then \( x{a}_{1}\ldots {a}_{n}\overset{K}{ \cong }{a}_{1}\ldots {a}_{n - 1}x{a}_{n} \) . ∎
Since the rows and columns of \( P \) increase, we have \( {x}_{1} < {z}_{i} \) and \( {x}_{1} < {w}_{i} \) for all \( i \) as well as \( {x}_{1} < x \) . Thus
\[
\begin{array}{lll} {\pi }_{P} = & {z}_{1}\ldots {z}_{j}{w}_{1}\ldots {w}_{k}{x}_{1}{x}_{2}\ldots {x}_{j}x{y}_{1}\ldots {y}_{k} & \\  \overset{K}{ \cong } & {z}_{1}{x}_{1}{z}_{2}\ldots {z}_{j}{w}_{1}\ldots {w}_{k}{x}_{2}\ldots {x}_{j}x{y}_{1}\ldots {y}_{k} & \left( {\text{ Lemma }{3.7.5}\text{, part }1}\right) \\  \overset{K}{ \cong } & {z}_{1}{x}_{1}{z}_{2}\ldots {z}_{j}x{w}_{1}\ldots {w}_{k}{x}_{2}\ldots {x}_{j}{y}_{1}\ldots {y}_{k} & \text{ (induction) } \end{array}
\]
![fe1808d3-ed76-4667-ba97-eb284d29fcc8_129_0.jpg](images/fe1808d3-ed76-4667-ba97-eb284d29fcc8_129_0.jpg)
Schützenberger’s teasing game can be described in the following manner.
Definition 3.7.6 Given a partial skew tableau \( P \), we play jeu de taquin by choosing an arbitrary slide sequence that brings \( P \) to normal shape and then applying the slides. The resulting tableau is denoted by \( j\left( P\right) \) . ∎
It is not obvious at first blush that \( j\left( P\right) \) is well defined-i.e., independent of the slide sequence. However, it turns out that we will always get the Robinson-Schensted \( P \) -tableau for the row word of \( P \) .
Theorem 3.7.7 ([Schü 76]) If \( P \) is a partial skew tableau that is brought to a normal tableau \( {P}^{\prime } \) by slides, then \( {P}^{\prime } \) is unique. In fact, \( {P}^{\prime } \) is the insertion tableau for \( {\pi }_{P} \) .
Proof. By the previous proposition, \( {\pi }_{P}\overset{K}{ \cong }{\pi }_{{P}^{\prime }} \) . Thus by Knuth’s theorem on \( P \) -equivalence (Theore |
113_Topological Groups | Definition 13.1 |
Definition 13.1. Let \( \Gamma \) be a theory in an effectivized first-order language \( \mathcal{L} = \left( {L, v,\mathcal{O},\mathcal{R}, g}\right) \) . We say that \( \Gamma \) is decidable if \( {g}^{+ * }\Gamma \) is recursive, and undecidable if \( {g}^{+ * }\Gamma \) is not recursive.
In this chapter we give a few examples of decidable theories. The methods for proving theories decidable are numerous. Some of the easiest methods are model-theoretic, so we shall give more examples of decidable theories in Part IV. For proving theories decidable, the extensive mechanism of recursive function theory is not really needed. Almost all of the work can be done on the intuitive level of recognizing that certain procedures are effective. Everything is made rigorous by applying the weak Church's thesis (see p. 46).
Many of the theories which have been proved to be decidable are rather simple. The table below may help the reader to get an idea of the complexity decidable theories can have, especially when compared with our list of undecidable theories on p. 279. We also list in each case a convenient method of proof for the decidability of the theory.
We shall give a detailed treatment in this chapter for the theories 1, 5, 7 in this table. The method we will use is that of elimination of quantifiers. This method can be described in rough terms as follows. In our given language we single out effectively certain formulas as basic formulas. These will usually not be quantifier free. Then we show (eliminating quantifiers) that any formula is effectively equivalent within our given theory to a sentential combination of basic formulas, i.e., a combination using only \( \vee , \land ,\neg \) . Finally, we give an effective procedure for determining whether or not such a combination is
Some decidable theories
<table><thead><tr><th>Theory of</th><th>Proved by</th><th>A method of proof</th><th>In this book</th></tr></thead><tr><td>1. Equality (no nonlogical constants, no axioms)</td><td>Löwenheim 1915</td><td>Elimination of quantifiers</td><td>p. 241</td></tr><tr><td>2. Finitely many sets ( \( m \) unary relation symbols, no axioms)</td><td></td><td>Elimination of quantifiers</td><td>p. 243</td></tr><tr><td>3. One equivalence relation</td><td>Janiczak 1953</td><td>\( m \) -elementary equivalence</td><td>p. 354</td></tr><tr><td>4. One unary function</td><td>Ehrenfeucht 1959</td><td>Tree automata</td><td>\( - \)</td></tr><tr><td>5. \( \left( {\omega ,\mathrm{s}}\right) \)</td><td>\( - \)</td><td>Elimination of quantifiers</td><td>p. 236</td></tr><tr><td>6. Two successor functions*</td><td>Rabin 1968</td><td>Tree automata</td><td>\( - \)</td></tr><tr><td>7. \( \left( {\mathbb{Z}, + }\right) \)</td><td>Presburger 1929</td><td>Elimination of quantifiers</td><td>p. 240</td></tr><tr><td>8. Simple ordering</td><td>Ehrenfeucht 1959</td><td>Tree automata</td><td>\( - \)</td></tr><tr><td>9. \( \left( {\mathbf{S}{I,}\cup ,\cap , \sim ,0, I}\right) \)</td><td>Skolem 1917</td><td></td><td>\( - \)</td></tr><tr><td>10. Boolean algebras</td><td>Tarski 1949</td><td>Model completeness</td><td>\( - \)</td></tr><tr><td>11. Free groups</td><td>Malcev 1961</td><td></td><td>\( - \)</td></tr><tr><td>12. Absolutely free algebras</td><td>Malcev 1961</td><td></td><td>\( - \)</td></tr><tr><td>13. Abelian groups</td><td>Szmielew 1949</td><td>Model completeness</td><td>\( - \)</td></tr><tr><td>14. Ordered abelian groups</td><td>Gurevich 1964</td><td></td><td>\( - \)</td></tr><tr><td>15. Algebraically closed fields</td><td>Tarski 1949</td><td>Vaught's test</td><td>p. 351</td></tr><tr><td>16. Real-closed fields</td><td>Tarski 1949</td><td>Model completeness</td><td>p. 362</td></tr><tr><td>17. \( p \) -adic fields</td><td>Ax, Kochen; Ershov 1965</td><td></td><td>\( - \)</td></tr><tr><td>18. Euclidean geometry</td><td>Tarski 1949</td><td>Reduction to 16</td><td>\( - \)</td></tr><tr><td>19. Hyperbolic geometry</td><td>Schwabhäuser 1959</td><td>Reduction to 16</td><td>\( - \)</td></tr></table>
* This is the theory of \( \mathfrak{A} = \left( {A,{o}_{0},{o}_{1}}\right) \), where \( A = \mathop{\bigcup }\limits_{{m \in \omega }}{}^{m}2,{o}_{0}w = w\langle 0\rangle ,{o}_{1}w = w\langle 1\rangle \), for all \( w \in A \) .
a consequence of the theory. This method yields much more information than just the decidability of the theory, as we shall see.
## The Theory of \( \left( {\omega, s}\right) \)
For technical reasons, instead of this theory we first consider the theory of \( \left( {\omega, s,0}\right) \) . First we need some notions from sentential logic.
Definition 13.2. Let \( \Gamma \subseteq {\operatorname{Fmla}}_{\mathcal{L}} \) (where \( \mathcal{L} \) is an arbitrary first-order language). The set Qf \( \Gamma \) of quantifier-free combinations of members of \( \Gamma \) is
the intersection of all sets \( \Delta \subseteq {\mathrm{{Fmla}}}_{\mathcal{L}} \) such that \( \Gamma \subseteq \Delta \) and \( \Delta \) is closed under \( \vee , \land \), and \( \neg \) .
Theorem 13.3 (Disjunctive normal form theorem). Let \( \Gamma \subseteq {\mathrm{{Fmla}}}_{\mathcal{L}} \) . Then for any \( \varphi \in \mathrm{{Qf}}\Gamma \) such that \( \neg \varphi \) is not a tautology there exist \( p, m \in \omega \) and a function \( \psi \) such that:
(i) the domain of \( \psi \) is \( \left( {p + 1}\right) \times \left( {m + 1}\right) \) ;
(ii) for each \( i \leq p \) and \( j \leq m,{\psi }_{ij} \in \Gamma \) or \( {\psi }_{ij} = \neg \chi \) with \( \chi \in \Gamma \) ;
(iii) \( \vDash \varphi \leftrightarrow \mathop{\bigvee }\limits_{{i \leq p}}\mathop{\bigwedge }\limits_{{j \leq m}}{\psi }_{ij} \) .
For the proof, see 8.38.
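For instance (a small illustration of ours, not from the text), if \( \Gamma = \left\{ {{\gamma }_{0},{\gamma }_{1}}\right\} \), then \( \neg \left( {{\gamma }_{0} \land \neg {\gamma }_{1}}\right) \in \mathrm{{Qf}}\,\Gamma \) and
\[
\vDash \neg \left( {{\gamma }_{0} \land \neg {\gamma }_{1}}\right) \leftrightarrow \left( {\neg {\gamma }_{0} \land \neg {\gamma }_{1}}\right) \vee \left( {\neg {\gamma }_{0} \land {\gamma }_{1}}\right) \vee \left( {{\gamma }_{0} \land {\gamma }_{1}}\right) ,
\]
which is of the form in the theorem with \( p = 2 \) and \( m = 1 \).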
We now turn to the decidability proof for the theory of \( \left( {\omega, s,0}\right) \) . We work in an effectivized language with a unary operation symbol \( \mathbf{s} \) and an individual constant \( \mathbf{O} \) . By induction we set
\[
\Delta 0 = \mathbf{O};\;\Delta \left( {m + 1}\right) = \mathbf{s}{\Delta m};\;{\mathbf{s}}^{1} = \mathbf{s};\;{\mathbf{s}}^{m + 1} = \mathbf{s}{\mathbf{s}}^{m};\;\mathbf{m} = {\Delta m}.
\]
The terms of this language are just of two kinds: \( {\mathbf{s}}^{m}\alpha \) for some variable \( \alpha \) , and \( \mathbf{m} \) for some \( m \in \omega \), where \( {\mathbf{s}}^{0}\alpha \) is just \( \alpha \) . A formula will be called basic if it has one of the following forms:
\[
{\mathbf{s}}^{m}{v}_{i} = \sigma ,\;\sigma \text{ a term not involving }{v}_{i};
\]
\[
\mathbf{O} = \mathbf{O}.
\]
Clearly there is an effective method for recognizing when a formula is basic. Let \( {\Gamma }_{1} \) be the set of all sentences which hold in \( \left( {\omega, s,0}\right) \) . Obviously \( {\Gamma }_{1} \) is a complete and consistent theory. Two formulas \( \varphi \) and \( \psi \) are equivalent (under \( \left. {\Gamma }_{1}\right) \) provided that \( \varphi \leftrightarrow \psi \) holds in \( \left( {\omega, s,0}\right) \) .
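As a preview (an illustration of ours, not from the text): the formula \( \exists {v}_{1}\left( {\mathbf{s}{v}_{1} = {v}_{0}}\right) \) is equivalent under \( {\Gamma }_{1} \) to the quantifier-free combination \( \neg \left( {{v}_{0} = \mathbf{O}}\right) \) of basic formulas, since in \( \left( {\omega, s,0}\right) \) a number is a successor exactly when it is nonzero. The next lemma shows that such an elimination is always possible, and effectively so.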
Lemma 13.4. For any formula \( \varphi \) one can effectively find a formula \( \psi \) equivalent under \( {\Gamma }_{1} \) to \( \varphi \) such that \( \psi \) is a quantifier-free combination of basic formulas and \( \mathrm{{Fv}}\psi \subseteq \mathrm{{Fv}}\varphi \) .
Proof. We proceed by induction on \( \varphi \) . First suppose \( \varphi \) is atomic; thus \( \varphi \) has the form \( \sigma = \tau \) . If no variable occurs in \( \varphi \), then \( \varphi \) has the form \( \mathbf{m} = \mathbf{n} \) . This is equivalent to \( \mathbf{O} = \mathbf{O} \) or to \( \neg \left( {\mathbf{O} = \mathbf{O}}\right) \) according as \( m = n \) or \( m \neq n \) . Suppose now that a variable occurs in \( \varphi \) . If \( \varphi \) has the form \( {\mathbf{s}}^{m}\alpha = {\mathbf{s}}^{n}\alpha \) for some variable \( \alpha \), then \( \varphi \) is equivalent to \( \mathbf{O} = \mathbf{O} \) or to \( \neg \left( {\mathbf{O} = \mathbf{O}}\right) \) according as \( m = n \) or \( m \neq n \) . The only forms left are \( {\mathbf{s}}^{m}\alpha = \sigma \) or \( \sigma = {\mathbf{s}}^{m}\alpha \) (for some variable \( \alpha \) ), where \( \sigma \) does not involve \( \alpha \) . Both are equivalent to \( {\mathbf{s}}^{m}\alpha = \sigma \) . This takes care of the atomic case.
The induction steps using \( \neg ,\vee , \land \) are trivial. To make the induction step using \( \forall \alpha \) (recall that \( \forall \alpha \varphi \) is equivalent to \( \neg \exists \alpha \neg \varphi \) ), it suffices to show that if \( \varphi \) is a quantifier-free combination of basic formulas, then \( \exists {\alpha \varphi } \) is equivalent to a quantifier-free combination of basic formulas determined effectively from \( \exists {\alpha \varphi } \) . Since
\[
\exists \alpha \left( {\psi \vee \chi }\right) \leftrightarrow \exists {\alpha \psi } \vee \exists {\alpha \chi }
\]
is logically valid, we may by 13.3 assume that \( \varphi \) is a conjunction of basic formulas and their negations. Now in general if \( \alpha \) does not occur in \( \psi \) then
\[
\exists \alpha \left( {\psi \land \chi }\right) \leftrightarrow \psi \land \exists {\alpha \chi }
\]
is logically valid. Hence we may assume that each conjunct of \( \varphi \) actually involves \( \alpha \) . Thus we may assume that \( \exists {\alpha \varphi } \) is the formula
\[
\exists \alpha \left\lbrack {{\mathbf{s}}^{{k}_{0}}\alpha = {\sigma }_{0} \land \cdots \land {\mathbf{s}}^{{k}_{m - 1}}\alpha = {\sigma }_{m - 1} \land \neg \left( {{\mathbf{s}}^{{k}_{m}}\alpha = {\sigma }_{m}}\right) \land \cdots \land \neg \left( {{\mathbf{s}}^{{k}_{n - 1}}\alpha = {\sigma }_{n - 1}}\right) }\right\rbrack
\]
where \( 0 \leq m \leq n \) and \( n > 0 \), and \( {\sigma }_{0},\ldots ,{\sigma }_{n - 1} \) do not involve \( \alpha \) . Noting that \( \sigma = \tau \) is equivalent under \( {\Gamma }_{1} \) to \( \mathbf{s}\sigma = \mathbf{s}\tau \), if we let \( l \) be the maximum of \( {k}_{0},\ldots ,{k}_{n - 1} \) we easily see that \( \exists {\alpha \varphi } \) is equivalent to a formula of the form
\[
\exists \alpha \left\lbrack {{\mathbf{s}}^{l}\alpha = {\tau }_{0} \ |
1079_(GTM237)An Introduction to Operators on the Hardy-Hilbert Space | Definition 2.3.5 |
Definition 2.3.5. For \( f \in {\mathbf{H}}^{2} \), if \( f = {\phi F} \) with \( \phi \) inner and \( F \) outer, we call \( \phi \) the inner part of \( f \) and \( F \) the outer part of \( f \) .
Theorem 2.3.6. The zeros of an \( {\mathbf{H}}^{2} \) function are precisely the zeros of its inner part.
Proof. This follows immediately from Theorem 2.3.2 and Theorem 2.3.4.
To understand the structure of Lat \( U \) as a lattice requires being able to determine when \( {\phi }_{1}{\mathbf{H}}^{2} \) is contained in \( {\phi }_{2}{\mathbf{H}}^{2} \) for inner functions \( {\phi }_{1} \) and \( {\phi }_{2} \) . This will be accomplished by analysis of a factorization of inner functions.
## 2.4 Blaschke Products
Some of the invariant subspaces of the unilateral shift are those consisting of the functions vanishing at certain subsets of \( \mathbb{D} \) . The simplest such subspaces are those of the form, for \( {z}_{0} \in \mathbb{D} \) ,
\[
{\mathcal{M}}_{{z}_{0}} = \left\{ {f \in {\mathbf{H}}^{2} : f\left( {z}_{0}\right) = 0}\right\} .
\]
The subspace \( {\mathcal{M}}_{{z}_{0}} \) is an invariant subspace for \( U \) . Therefore Beurling’s theorem (Corollary 2.2.12) implies that there is an inner function \( \psi \) such that \( {\mathcal{M}}_{{z}_{0}} = \psi {\mathbf{H}}^{2} \)
Theorem 2.4.1. For each \( {z}_{0} \in \mathbb{D} \), the function
\[
\psi \left( z\right) = \frac{{z}_{0} - z}{1 - \overline{{z}_{0}}z}
\]
is an inner function and \( {\mathcal{M}}_{{z}_{0}} = \left\{ {f \in {\mathbf{H}}^{2} : f\left( {z}_{0}\right) = 0}\right\} = \psi {\mathbf{H}}^{2} \) .
Proof. The function \( \psi \) is clearly in \( {\mathbf{H}}^{\infty } \) . Moreover, it is continuous on the closure of \( \mathbb{D} \) . Therefore, to show that \( \psi \) is inner, it suffices to show that \( \left| {\psi \left( z\right) }\right| = 1 \) when \( \left| z\right| = 1 \) . For this, note that \( \left| z\right| = 1 \) implies \( z\bar{z} = 1 \), so that
\[
\left| \frac{{z}_{0} - z}{1 - \overline{{z}_{0}}z}\right| = \left| \frac{{z}_{0} - z}{z\left( {\bar{z} - \overline{{z}_{0}}}\right) }\right| = \frac{1}{\left| z\right| }\left| \frac{{z}_{0} - z}{\bar{z} - \overline{{z}_{0}}}\right| = 1.
\]
To show that \( {\mathcal{M}}_{{z}_{0}} = \psi {\mathbf{H}}^{2} \), first note that \( \psi \left( {z}_{0}\right) f\left( {z}_{0}\right) = 0 \) for all \( f \in {\mathbf{H}}^{2} \) , so \( \psi {\mathbf{H}}^{2} \subset {\mathcal{M}}_{{z}_{0}} \) . For the other inclusion, note that \( f\left( {z}_{0}\right) = 0 \) implies that \( f\left( z\right) = \psi \left( z\right) g\left( z\right) \) for some function \( g \) analytic in \( \mathbb{D} \) .
Let
\[
\varepsilon = \inf \left\{ {\left| {\psi \left( z\right) }\right| : z \in \mathbb{D},\;\left| z\right| \geq \frac{1 + \left| {z}_{0}\right| }{2}}\right\} .
\]
Clearly \( \varepsilon > 0 \) . Thus
\[
\frac{1}{2\pi }{\int }_{0}^{2\pi }{\left| f\left( r{e}^{i\theta }\right) \right| }^{2}{d\theta } \geq {\varepsilon }^{2}\frac{1}{2\pi }{\int }_{0}^{2\pi }{\left| g\left( r{e}^{i\theta }\right) \right| }^{2}{d\theta }
\]
for \( r \geq \frac{1 + \left| {z}_{0}\right| }{2} \) . Therefore
\[
\mathop{\sup }\limits_{{0 < r < 1}}\frac{1}{2\pi }{\int }_{0}^{2\pi }{\left| g\left( r{e}^{i\theta }\right) \right| }^{2}{d\theta } \leq \frac{1}{{\varepsilon }^{2}}\mathop{\sup }\limits_{{0 < r < 1}}\frac{1}{2\pi }{\int }_{0}^{2\pi }{\left| f\left( r{e}^{i\theta }\right) \right| }^{2}{d\theta }.
\]
It follows from Theorem 1.1.12 that \( g \in {\mathbf{H}}^{2} \) . Hence \( f = {\psi g} \) is in \( \psi {\mathbf{H}}^{2} \) .
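The modulus computation above is easy to check numerically. The following sketch (our own illustration; only the formula for \( \psi \) comes from the text) evaluates a single Blaschke factor at \( {z}_{0} \) and on the unit circle.

```python
import cmath

def blaschke_factor(z0):
    """Return the map z -> (z0 - z) / (1 - conj(z0) * z) for a point z0 of the disk."""
    return lambda z: (z0 - z) / (1 - z0.conjugate() * z)

z0 = 0.3 + 0.4j                       # an arbitrary point with |z0| < 1
psi = blaschke_factor(z0)

print(abs(psi(z0)))                   # 0.0: psi vanishes at z0
for k in range(8):                    # |psi(z)| = 1 on the unit circle
    z = cmath.exp(2j * cmath.pi * k / 8)
    print(abs(psi(z)))                # 1.0 up to floating-point rounding
```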
A similar result holds for subspaces of \( {\mathbf{H}}^{2} \) vanishing on any finite subset of \( \mathbb{D} \) .
Theorem 2.4.2. If \( {z}_{1},{z}_{2},\ldots ,{z}_{n} \in \mathbb{D} \) ,
\[
\mathcal{M} = \left\{ {f \in {\mathbf{H}}^{2} : f\left( {z}_{1}\right) = f\left( {z}_{2}\right) = \cdots = f\left( {z}_{n}\right) = 0}\right\} ,
\]
and
\[
\psi \left( z\right) = \mathop{\prod }\limits_{{k = 1}}^{n}\frac{{z}_{k} - z}{1 - \overline{{z}_{k}}z}
\]
then \( \psi \) is an inner function and \( \mathcal{M} = \psi {\mathbf{H}}^{2} \) .
Proof. It is obvious that a product of a finite number of inner functions is inner. Thus Theorem 2.4.1 above implies that \( \psi \) is inner.
It is clear that \( \psi {\mathbf{H}}^{2} \) is contained in \( \mathcal{M} \) . The proof of the opposite inclusion is very similar to the proof of the case of a single factor established in Theorem 2.4.1 above. That is, if \( f\left( {z}_{1}\right) = f\left( {z}_{2}\right) = \cdots = f\left( {z}_{n}\right) = 0 \), then \( f = {\psi g} \) for some function \( g \) analytic on \( \mathbb{D} \) . It follows as in the previous proof (take \( r \) greater than the maximum of \( \frac{1 + \left| {z}_{j}\right| }{2} \) over \( j = 1,\ldots, n \) ) that \( g \) is in \( {\mathbf{H}}^{2} \), so \( f \in \psi {\mathbf{H}}^{2} \) .
It is important to be able to factor out the zeros of inner functions. If an inner function has only a finite number of zeros in \( \mathbb{D} \), such a factorization is implicit in the preceding theorem, as we now show. (We will subsequently consider the case in which an inner function has an infinite number of zeros.) It is customary to distinguish any possible zero at 0 .
Corollary 2.4.3. Suppose that the inner function \( \phi \) has a zero of multiplicity \( s \) at 0 and also vanishes at the nonzero points \( {z}_{1},{z}_{2},\ldots ,{z}_{n} \in \mathbb{D} \) (allowing repetition according to multiplicity). Let
\[
\psi \left( z\right) = {z}^{s}\mathop{\prod }\limits_{{k = 1}}^{n}\frac{{z}_{k} - z}{1 - \overline{{z}_{k}}z}.
\]
Then \( \psi \left( z\right) \) is an inner function and \( \phi \) can be written as a product \( \phi \left( z\right) = \) \( \psi \left( z\right) S\left( z\right) \), where \( S \) is an inner function.
Proof. Since \( \psi \) is a product of inner functions, \( \psi \) is inner. The function \( \phi \) is in the subspace \( \mathcal{M} \) of the preceding Theorem 2.4.2, so that theorem implies that \( \phi = {\psi S} \), where \( S \) is in \( {\mathbf{H}}^{2} \) . Moreover, \( \widetilde{\phi } = \widetilde{\psi }\widetilde{S} \), so \( \left| {\widetilde{S}\left( {e}^{i\theta }\right) }\right| = 1 \) a.e. Therefore \( S \) is an inner function.
Recall that the Weierstrass factorization theorem asserts that, given any sequence \( \left\{ {z}_{j}\right\} \) with \( \left\{ \left| {z}_{j}\right| \right\} \rightarrow \infty \) and any sequence of natural numbers \( \left\{ {n}_{j}\right\} \) , there exists an entire function whose zeros are precisely the \( {z}_{j} \) ’s with multiplicity \( {n}_{j} \) . It is well known that similar techniques establish that, given any sequence \( \left\{ {z}_{j}\right\} \subset \mathbb{D} \) with \( \left\{ \left| {z}_{j}\right| \right\} \rightarrow 1 \) as \( j \rightarrow \infty \) and any sequence of natural numbers \( \left\{ {n}_{j}\right\} \), there is a function \( f \) analytic on \( \mathbb{D} \) whose zeros are precisely the \( {z}_{j} \) ’s with multiplicity \( {n}_{j} \) ([9, p. 169-170],[47, p. 302-303]). For some sequences \( \left\{ {z}_{j}\right\} \) there is no such function in \( {\mathbf{H}}^{2} \) ; it will be important to determine the sequences that can arise as zeros of functions in \( {\mathbf{H}}^{2} \) . By Theorem 2.3.6, this reduces to determining the zeros of the inner functions.
There are many sequences \( \left\{ {z}_{j}\right\} \) with \( \left\{ \left| {z}_{j}\right| \right\} \rightarrow 1 \) that cannot be the set of zeros of a function in \( {\mathbf{H}}^{2} \) . To see this, we begin with a fact about products of zeros of inner functions.
Theorem 2.4.4. If \( \phi \) is an inner function and \( \phi \left( 0\right) \neq 0 \), and if \( \left\{ {z}_{j}\right\} \) is a sequence in \( \mathbb{D} \) such that \( \phi \left( {z}_{j}\right) = 0 \) for all \( j \), then \( \left| {\phi \left( 0\right) }\right| < \mathop{\prod }\limits_{{j = 1}}^{n}\left| {z}_{j}\right| \) for all \( n \) .
Proof. For each natural number \( n \), let
\[
{B}_{n}\left( z\right) = \mathop{\prod }\limits_{{j = 1}}^{n}\frac{{z}_{j} - z}{1 - \overline{{z}_{j}}z}.
\]
As shown in Corollary 2.4.3, each \( {B}_{n} \) is an inner function and, for each \( n \), there is an inner function \( {S}_{n} \) such that \( \phi = {B}_{n}{S}_{n} \) . By Theorem 2.2.10, \( \left| {{S}_{n}\left( z\right) }\right| < 1 \) for all \( z \in \mathbb{D} \) . Thus \( \left| {\phi \left( z\right) }\right| < \left| {{B}_{n}\left( z\right) }\right| \) for \( z \in \mathbb{D} \) . In particular,
\[
\left| {\phi \left( 0\right) }\right| < \left| {{B}_{n}\left( 0\right) }\right| = \mathop{\prod }\limits_{{j = 1}}^{n}\left| {z}_{j}\right|
\]
Example 2.4.5. If \( {z}_{k} = \frac{k}{k + 1} \) for natural numbers \( k \), there is no function \( f \) in \( {\mathbf{H}}^{2} \) whose set of zeros is exactly \( \left\{ {z}_{k}\right\} \) .
Proof. Suppose that \( f \) were such a function and let \( \phi \) be its inner part. In particular, \( \phi \left( 0\right) \neq 0 \) . By the previous theorem,
\[
\left| {\phi \left( 0\right) }\right| < \mathop{\prod }\limits_{{k = 1}}^{n}\left| {z}_{k}\right|
\]
for every natural number \( n \) . But the product telescopes, giving
\[
\mathop{\prod }\limits_{{k = 1}}^{n}\left| {z}_{k}\right| = \frac{1}{n + 1}
\]
Choosing \( n \) large enough so that \( \frac{1}{n + 1} < \left| {\phi \left( 0\right) }\right| \) gives a contradiction.
To describe the zeros of functions in \( {\mathbf{H}}^{2} \) requires some facts about infinite products. We begin with the definition of convergence.
Definition 2.4.6. Given a sequence \( {\left\{ {w}_{k}\right\} }_{k = 1}^{\infty } \) of nonzero complex numbers, we say that \( \mathop{\prod }\limits_{{k = 1}}^{\infty }{w}_{k} \) converges to \( P \) and write
\[
\mathop{\prod }\limits_{{k = 1}}^{\infty }{w}_{k} = P
\]
if \( \left\{ {\mathop{\prod }\limits_{{k = 1}}^{n}{w}_{k}}\right\} \rightarrow P \) as \( n \rightarr |
1077_(GTM235)Compact Lie Groups | Definition 7.2 |
Definition 7.2. Let \( V \) be a representation of \( \mathfrak{g} \) with weight space decomposition \( V = \) \( {\bigoplus }_{\lambda \in \Delta \left( V\right) }{V}_{\lambda }. \)
(a) A nonzero \( v \in {V}_{{\lambda }_{0}} \) is called a highest weight vector of weight \( {\lambda }_{0} \) with respect to \( {\Delta }^{ + }\left( {\mathfrak{g}}_{\mathbb{C}}\right) \) if \( {\mathfrak{n}}^{ + }v = 0 \), i.e., if \( {Xv} = 0 \) for all \( X \in {\mathfrak{n}}^{ + } \) . In this case, \( {\lambda }_{0} \) is called a highest weight of \( V \) .
(b) A weight \( \lambda \) is said to be dominant if \( B\left( {\lambda ,\alpha }\right) \geq 0 \) for all \( \alpha \in \Pi \left( {\mathfrak{g}}_{\mathbb{C}}\right) \), i.e., if \( \lambda \) lies in the closed Weyl chamber corresponding to \( {\Delta }^{ + }\left( {\mathfrak{g}}_{\mathbb{C}}\right) \) .
As an example, recall that the action of \( \mathfrak{{su}}{\left( 2\right) }_{\mathbb{C}} = \mathfrak{{sl}}\left( {2,\mathbb{C}}\right) \) on \( {V}_{n}\left( {\mathbb{C}}^{2}\right), n \in {\mathbb{Z}}^{ \geq 0} \) , from Equation 6.7 is given by
\[
\begin{aligned}
E \cdot \left( {{z}_{1}^{k}{z}_{2}^{n - k}}\right) &= - k{z}_{1}^{k - 1}{z}_{2}^{n - k + 1}, \\
H \cdot \left( {{z}_{1}^{k}{z}_{2}^{n - k}}\right) &= \left( {n - {2k}}\right) {z}_{1}^{k}{z}_{2}^{n - k}, \\
F \cdot \left( {{z}_{1}^{k}{z}_{2}^{n - k}}\right) &= \left( {k - n}\right) {z}_{1}^{k + 1}{z}_{2}^{n - k - 1},
\end{aligned}
\]
and recall that \( \left\{ {{V}_{n}\left( {\mathbb{C}}^{2}\right) \mid n \in {\mathbb{Z}}^{ \geq 0}}\right\} \) is a complete list of irreducible representations for \( {SU}\left( 2\right) \) . Taking \( {it} = \operatorname{diag}\left( {\theta , - \theta }\right) ,\theta \in \mathbb{R} \), there are two roots, \( \pm {\epsilon }_{12} \), where \( {\epsilon }_{12}\left( {\operatorname{diag}\left( {\theta , - \theta }\right) }\right) = {2\theta } \) . Choosing \( {\Delta }^{ + }\left( {\mathfrak{{sl}}\left( {2,\mathbb{C}}\right) }\right) = \left\{ {\epsilon }_{12}\right\} \), it follows that \( {z}_{2}^{n} \) is a highest weight vector of \( {V}_{n}\left( {\mathbb{C}}^{2}\right) \) of weight \( n\frac{{\epsilon }_{12}}{2} \) . Notice that the set of dominant analytically integral weights is \( \left\{ {n\frac{{\epsilon }_{12}}{2} \mid n \in {\mathbb{Z}}^{ \geq 0}}\right\} \) . Thus there is a one-to-one correspondence between the set of highest weights of irreducible representations of \( {SU}\left( 2\right) \) and the set of dominant analytically integral weights. This correspondence will be established for all connected compact groups in Theorem 7.34.
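These formulas are easy to verify by machine. The sketch below (our own illustration, not from the text) builds the matrices of \( E, H, F \) on \( {V}_{n}\left( {\mathbb{C}}^{2}\right) \) in the monomial basis \( {e}_{k} \leftrightarrow {z}_{1}^{k}{z}_{2}^{n - k} \), \( k = 0,\ldots, n \), and checks the commutation relations \( \left\lbrack {H, E}\right\rbrack = {2E} \), \( \left\lbrack {H, F}\right\rbrack = - {2F} \), \( \left\lbrack {E, F}\right\rbrack = H \).

```python
import numpy as np

def sl2_matrices(n):
    """Matrices of E, H, F acting on V_n(C^2) in the basis e_k <-> z_1^k z_2^(n-k)."""
    E = np.zeros((n + 1, n + 1))
    H = np.zeros((n + 1, n + 1))
    F = np.zeros((n + 1, n + 1))
    for k in range(n + 1):
        H[k, k] = n - 2 * k              # H . e_k = (n - 2k) e_k
        if k >= 1:
            E[k - 1, k] = -k             # E . e_k = -k e_{k-1}
        if k <= n - 1:
            F[k + 1, k] = k - n          # F . e_k = (k - n) e_{k+1}
    return E, H, F

E, H, F = sl2_matrices(4)
bracket = lambda A, B: A @ B - B @ A
print(np.allclose(bracket(H, E), 2 * E))   # True
print(np.allclose(bracket(H, F), -2 * F))  # True
print(np.allclose(bracket(E, F), H))       # True
```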
Theorem 7.3. Let \( G \) be a connected compact Lie group and \( V \) an irreducible representation of \( G \) .
(a) \( V \) has a unique highest weight, \( {\lambda }_{0} \) .
(b) The highest weight \( {\lambda }_{0} \) is dominant and analytically integral, i.e., \( {\lambda }_{0} \in A\left( T\right) \) .
(c) Up to nonzero scalar multiplication, there is a unique highest weight vector.
(d) Any weight \( \lambda \in \Delta \left( V\right) \) is of the form
\[
\lambda = {\lambda }_{0} - \mathop{\sum }\limits_{{{\alpha }_{i} \in \Pi \left( {\mathfrak{g}}_{\mathbb{C}}\right) }}{n}_{i}{\alpha }_{i}
\]
for \( {n}_{i} \in {\mathbb{Z}}^{ \geq 0} \) .
(e) For \( w \in W, w{V}_{\lambda } = {V}_{w\lambda } \), so that \( \dim {V}_{\lambda } = \dim {V}_{w\lambda } \) . Here \( W\left( G\right) \) is identified with \( W\left( {\Delta \left( {\mathfrak{g}}_{\mathbb{C}}\right) }\right) \), as in Theorem 6.43 via the Ad-action from Equation 6.35.
(f) Using the norm induced by the Killing form, \( \parallel \lambda \parallel \leq \begin{Vmatrix}{\lambda }_{0}\end{Vmatrix} \) with equality if and only if \( \lambda = w{\lambda }_{0} \) for \( w \in W\left( {\mathfrak{g}}_{\mathbb{C}}\right) \) .
(g) Up to isomorphism, \( V \) is uniquely determined by \( {\lambda }_{0} \) .
Proof. Existence of a highest weight \( {\lambda }_{0} \) follows from the finite dimensionality of \( V \) and Theorem 6.11. Let \( {v}_{0} \) be a highest weight vector for \( {\lambda }_{0} \) and inductively define \( {V}_{n} = {V}_{n - 1} + {\mathfrak{n}}^{ - }{V}_{n - 1} \) where \( {V}_{0} = \mathbb{C}{v}_{0} \) . This defines a filtration on the \( \left( {{\mathfrak{n}}^{ - } \oplus {\mathfrak{t}}_{\mathbb{C}}}\right) \) - invariant subspace \( {V}_{\infty } = { \cup }_{n}{V}_{n} \) of \( V \) . If \( \alpha \in \Pi \left( {\mathfrak{g}}_{\mathbb{C}}\right) \), then \( \left\lbrack {{\mathfrak{g}}_{\alpha },{\mathfrak{n}}^{ - }}\right\rbrack \subseteq {\mathfrak{n}}^{ - } \oplus {\mathfrak{t}}_{\mathbb{C}} \) . Since \( {\mathfrak{g}}_{\alpha }{V}_{0} = 0 \), a simple inductive argument shows that \( {\mathfrak{g}}_{\alpha }{V}_{n} \subseteq {V}_{n} \) . In particular, this suffices to demonstrate that \( {V}_{\infty } \) is \( {\mathfrak{g}}_{\mathbb{C}} \) -invariant. Irreducibility implies \( V = {V}_{\infty } \) and part (d) follows.
If \( {\lambda }_{1} \) is also a highest weight, then \( {\lambda }_{1} = {\lambda }_{0} - \sum {n}_{i}{\alpha }_{i} \) and \( {\lambda }_{0} = {\lambda }_{1} - \sum {m}_{i}{\alpha }_{i} \) for \( {n}_{i},{m}_{i} \in {\mathbb{Z}}^{ \geq 0} \) . Eliminating \( {\lambda }_{1} \) and \( {\lambda }_{0} \) shows that \( - \sum {n}_{i}{\alpha }_{i} = \sum {m}_{i}{\alpha }_{i} \) . Thus \( - {n}_{i} = {m}_{i} \), so that \( {n}_{i} = {m}_{i} = 0 \) and \( {\lambda }_{1} = {\lambda }_{0} \) . Furthermore, the weight decomposition shows that \( {V}_{\infty } \cap {V}_{{\lambda }_{0}} = {V}_{0} = \mathbb{C}{v}_{0} \), so that parts (a) and (c) are complete.
The proof of part (e) is done in the same way as the proof of Theorem 6.36. For part (b), notice that \( {r}_{{\alpha }_{i}}{\lambda }_{0} \) is a weight by part (e). Thus
\[
{\lambda }_{0} - 2\frac{B\left( {{\lambda }_{0},{\alpha }_{i}}\right) }{B\left( {{\alpha }_{i},{\alpha }_{i}}\right) }{\alpha }_{i} = {\lambda }_{0} - \mathop{\sum }\limits_{{{\alpha }_{j} \in \Pi \left( {\mathfrak{g}}_{\mathbb{C}}\right) }}{n}_{j}{\alpha }_{j}
\]
for \( {n}_{j} \in {\mathbb{Z}}^{ \geq 0} \) . Hence \( 2\frac{B\left( {{\lambda }_{0},{\alpha }_{i}}\right) }{B\left( {{\alpha }_{i},{\alpha }_{i}}\right) } = {n}_{i} \), so that \( B\left( {{\lambda }_{0},{\alpha }_{i}}\right) \geq 0 \) and \( {\lambda }_{0} \) is dominant. Theorem 6.27 shows that \( {\lambda }_{0} \) (in fact, any weight of \( V \) ) is analytically integral.
For part (f), Theorem 6.43 shows that it suffices to take \( \lambda \) dominant by using the Weyl group action. Write \( \lambda = {\lambda }_{0} - \sum {n}_{i}{\alpha }_{i} \) . Solving for \( {\lambda }_{0} \) and using dominance in the second line,
\[
\begin{aligned}
{\begin{Vmatrix}{\lambda }_{0}\end{Vmatrix}}^{2} &= \parallel \lambda {\parallel }^{2} + 2\mathop{\sum }\limits_{{{\alpha }_{i} \in \Pi \left( {\mathfrak{g}}_{\mathbb{C}}\right) }}{n}_{i}B\left( {\lambda ,{\alpha }_{i}}\right) + {\begin{Vmatrix}\mathop{\sum }\limits_{{{\alpha }_{i} \in \Pi \left( {\mathfrak{g}}_{\mathbb{C}}\right) }}{n}_{i}{\alpha }_{i}\end{Vmatrix}}^{2} \\
 &\geq \parallel \lambda {\parallel }^{2} + {\begin{Vmatrix}\mathop{\sum }\limits_{{{\alpha }_{i} \in \Pi \left( {\mathfrak{g}}_{\mathbb{C}}\right) }}{n}_{i}{\alpha }_{i}\end{Vmatrix}}^{2} \geq \parallel \lambda {\parallel }^{2}.
\end{aligned}
\]
In the case of equality, it follows that \( \mathop{\sum }\limits_{{{\alpha }_{i} \in \Pi \left( {\mathfrak{g}}_{\mathbb{C}}\right) }}{n}_{i}{\alpha }_{i} = 0 \), so that \( {n}_{i} = 0 \) and \( \lambda = {\lambda }_{0} \) .
For part \( \left( \mathrm{g}\right) \), suppose \( {V}^{\prime } \) is an irreducible representation of \( G \) with highest weight \( {\lambda }_{0} \) and corresponding highest weight vector \( {v}_{0}^{\prime } \) . Let \( W = V \oplus {V}^{\prime } \) and define \( {W}_{n} = {W}_{n - 1} + {\mathfrak{n}}^{ - }{W}_{n - 1} \), where \( {W}_{0} = \mathbb{C}\left( {{v}_{0},{v}_{0}^{\prime }}\right) \) . As above, \( {W}_{\infty } = { \cup }_{n}{W}_{n} \) is a subrepresentation of \( V \oplus {V}^{\prime } \) . If \( U \) is a nonzero subrepresentation of \( {W}_{\infty } \), then \( U \) has a highest weight vector, \( \left( {{u}_{0},{u}_{0}^{\prime }}\right) \) . In turn, this means that \( {u}_{0} \) and \( {u}_{0}^{\prime } \) are highest weight vectors of \( V \) and \( {V}^{\prime } \), respectively. Part (a) then shows that \( \mathbb{C}\left( {{u}_{0},{u}_{0}^{\prime }}\right) = {W}_{0} \) . Thus \( U = {W}_{\infty } \) and \( {W}_{\infty } \) is irreducible. Projection onto each coordinate establishes the \( G \) -intertwining map \( V \cong {V}^{\prime } \) .
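To see parts (d)-(f) concretely in the \( {SU}\left( 2\right) \) example above (an illustration of ours, using the normalization chosen there): for \( {V}_{n}\left( {\mathbb{C}}^{2}\right) \) the highest weight is \( {\lambda }_{0} = n\frac{{\epsilon }_{12}}{2} \), the weights are
\[
\Delta \left( {{V}_{n}\left( {\mathbb{C}}^{2}\right) }\right) = \left\{ {{\lambda }_{0} - k{\epsilon }_{12} : k = 0,1,\ldots, n}\right\} ,
\]
each with one-dimensional weight space, the Weyl group \( \left\{ {1,{r}_{{\epsilon }_{12}}}\right\} \) acts by \( \lambda \mapsto \pm \lambda \), and \( \parallel \lambda \parallel \leq \begin{Vmatrix}{\lambda }_{0}\end{Vmatrix} \) with equality exactly when \( \lambda = \pm {\lambda }_{0} = w{\lambda }_{0} \) for some \( w \) in the Weyl group.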
The above theorem shows that highest weights completely classify irreducible representations. It only remains to parametrize all possible highest weights of irreducible representations. This will be done in §7.3.5, where we will see there is a bijection between the set of dominant analytically integral weights and irreducible representations of \( G \) .
Definition 7.4. Let \( G \) be connected and let \( V \) be an irreducible representation of \( G \) with highest weight \( \lambda \) . As \( V \) is uniquely determined by \( \lambda \), write \( V\left( \lambda \right) \) for \( V \) and write \( {\chi }_{\lambda } \) for its character.
Lemma 7.5. Let \( G \) be connected. If \( V\left( \lambda \right) \) is an irreducible representation of \( G \), then \( V{\left( \lambda \right) }^{ * } \cong V\left( {-{w}_{0}\lambda }\right) \), where \( {w}_{0} \in W\left( {\Delta \left( {\mathfrak{g}}_{\mathbb{C}}\right) }\right) \) is the unique element mapping the positive Weyl chamber to the negative Weyl chamber (cf. Exercise 6.40).
Proof. Since \( V\left( \lambda \right) \) is irreducible, the character theory of Theorems 3.5 and 3.7 show that \( V{\left( \lambda \right) }^{ * } \) is irreducible. It therefore suffices to show that the highest weight of \( V{\left( \lambda \right) }^{ * } \) is \( - {w}_{0}\lambda \) .
Fix a \( G |