Lemma 3.5. The order of any element in

\[ {\left. \operatorname{Ker}\left( {\psi }^{p + 1} - 1\right) \right| }_{B{P}_{q{p}^{j + 1} - 2}\left( {P \land P}\right) } \]

divides \( {p}^{j + 2} \).

Proof of Theorem 3.1. Suppose \( {\theta }_{j} \) exists and lifts to \( {\widetilde{\theta }}_{j} \in {\pi }_{q{p}^{j + 1} - 2}^{s}\left( {P \land P}\right) \); let \( {\mathbf{\Theta }}_{j} \) be the \( {BP} \)-Hurewicz image of \( {\widetilde{\theta }}_{j} \) and \( {n}_{j} \) the order of \( {\mathbf{\Theta }}_{j} \). As \( {\theta }_{j} \) is detected in \( {\operatorname{Ext}}_{{\mathcal{A}}_{p}}^{2, q{p}^{j + 1}}\left( {\mathbb{Z}/p,\mathbb{Z}/p}\right) \) and the double transfer has (classical) Adams filtration 2, the mod \( p \) Hurewicz image of \( {\widetilde{\theta }}_{j} \) is nontrivial. Then Proposition 2.3(ii) implies it must be a nonzero multiple of \( \mathop{\sum }\limits_{{1 \leq i \leq p - 1}}\left( {1/i}\right) {x}_{q{p}^{j}i - 1} \otimes {x}_{q{p}^{j}\left( {p - i}\right) - 1} \). So Lemma 3.4 forces

\[ {p}^{\left( {p}^{j - 1}\right) } \mid {n}_{j}\text{ when }j \geq 2,\;\text{ and }\;{p}^{p - 1} \mid {n}_{1}. \]

On the other hand, as \( {\mathbf{\Theta }}_{j} \in {\left. \operatorname{Ker}\left( {\psi }^{p + 1} - 1\right) \right| }_{B{P}_{q{p}^{j + 1} - 2}\left( {P \land P}\right) } \), Lemma 3.5 implies

\[ {n}_{j} \mid {p}^{j + 2}. \]

Combining these two, we get \( {p}^{j - 1} \leq j + 2 \) when \( j \geq 2 \), and \( p - 1 \leq 3 \) when \( j = 1 \). But this happens only if \( j \leq 2 \) when \( p = 3 \), and \( j = 0 \) when \( p \geq 5 \).

## 4. The odd-primary Mahowald's \( {\eta }_{j} \) elements

As was indicated in the introduction, the problem of whether the odd-primary \( {\eta }_{j} \) elements exist is still completely open. The purpose of this section is to study these mysterious elements.
To state our result, let \( {\left( {S}^{2n}\right) }_{p - 1} \) be the image of \( {\left( {S}^{2n}\right) }^{p - 1} \) in the reduced product \( {\left( {S}^{2n}\right) }_{\infty }\left( {\overset{ \simeq }{ \rightarrow }\Omega {S}^{{2n} + 1}}\right) \left\lbrack {21}\right\rbrack \) of \( {S}^{2n} \) . Then, as was noticed by James and Toda (see Theorem 2.4 in [58]), there is a fiber sequence at \( p \) : \[ {\left( {S}^{2n}\right) }_{p - 1}\overset{E}{ \rightarrow }\Omega {S}^{{2n} + 1}\overset{{H}_{p}}{ \rightarrow }\Omega {S}^{{2pn} + 1}, \] where \( E \) is induced by the inclusion \( {\left( {S}^{2n}\right) }_{p - 1} \rightarrow {\left( {S}^{2n}\right) }_{\infty }\left( {\overset{ \simeq }{ \rightarrow }\Omega {S}^{{2n} + 1}}\right) \), and \( {H}_{p} \) is the James-Hopf map. Just like the usual EHP sequence, denote the connecting homomorphism in homotopy by \[ P : {\pi }_{* + 1}\left( {S}^{{2pn} + 1}\right) \rightarrow {\pi }_{* - 1}\left( {\left( {S}^{2n}\right) }_{p - 1}\right) . \] Now the following is the main result in this section:
Topology and Its Applications, 2000-03-03, Vol. 101, Iss. 3
Theorem 17.1 Under assumptions 1 and 2, the channel capacity of the genetic code is 2.92 Shannon bits.

Proof.

\[ \text{Capacity} = H\left( Y\right) = H\left( \frac{2,6,3,1,4,6,4,4,4,2,2,2,2,2,2,2,2,3,1,6,4}{64}\right) = {2.92} \]

This result is not surprising, since there is much redundancy in the genetic code, mainly arising from the \( {3}^{rd} \) nucleotide. Our model is much too simplistic, but our modest goal was merely to illustrate a first pass at understanding the channel capacity. To modify assumption 1, we might best proceed on the assumption that the source is ergodic. Following on from this, we might want to assume (as has been done) that we can obtain a useful Markov model using an irreducible Markov chain. We refer to Chapter 14 for a discussion of Markov chains.

In his book, Guiasu [Gui77] states that "this capacity seems to be different from the maximum possible quantity \( \log {21} = {4.3920} \)". In practice, the bases A, T, G, C are roughly equiprobable but not independent. Thus, assumption 1 is unrealistic.

Denote the bases A, T, G, C by \( {X}_{1},{X}_{2},{X}_{3},{X}_{4} \). In applications it is easier to calculate the transition probabilities \( \Pr \left( {{X}_{j} \mid {X}_{i}}\right) \). We can then form the 4 by 4 Markov transition matrix \( M = \left( {p}_{ij}\right) \) where \( {p}_{ij} = \Pr \left( {{X}_{j} \mid {X}_{i}}\right) \). We can also think of \( {p}_{ij} \) as the probability of moving from state \( {X}_{i} \) to state \( {X}_{j} \). Then, assuming that the source is stationary, we calculate the fixed probability vector \( \mathbf{w} \) with \( \mathbf{w}M = \mathbf{w} \) and proceed along the lines of Chapter 14. To elaborate, we would find the stationary probabilities \( \Pr \left( \mathrm{A}\right) ,\Pr \left( \mathrm{T}\right) ,\Pr \left( \mathrm{G}\right) ,\Pr \left( \mathrm{C}\right) \) by solving the following system of equations.
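As a quick numerical aside, the entropy evaluation in Theorem 17.1 can be reproduced in a few lines (a sketch; the listed multiplicities sum to 64, and the stated value 2.92 coincides with the natural-logarithm entropy of this distribution, the base-2 entropy being about 4.22 bits):

```python
import math

# Codon multiplicities for the 20 amino acids plus the stop signal,
# as listed in Theorem 17.1; they sum to 64.
counts = [2, 6, 3, 1, 4, 6, 4, 4, 4, 2, 2, 2, 2, 2, 2, 2, 2, 3, 1, 6, 4]
probs = [c / 64 for c in counts]

def entropy(ps, base):
    """Shannon entropy of a probability distribution in the given log base."""
    return -sum(p * math.log(p, base) for p in ps if p > 0)

H_bits = entropy(probs, 2)        # about 4.22 (base-2 entropy)
H_nats = entropy(probs, math.e)   # about 2.92 -- the figure quoted in the theorem
```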
(We assume that the transition probabilities have already been experimentally calculated.) We have

\[ \left( {\Pr \left( A\right) ,\Pr \left( T\right) ,\Pr \left( G\right) ,\Pr \left( C\right) }\right) M = \left( {\Pr \left( A\right) ,\Pr \left( T\right) ,\Pr \left( G\right) ,\Pr \left( C\right) }\right) \tag{17.4} \]

where \( M \) is the following transition matrix:

\[ M = \left( \begin{array}{llll} \Pr \left( {A \mid A}\right) & \Pr \left( {T \mid A}\right) & \Pr \left( {G \mid A}\right) & \Pr \left( {C \mid A}\right) \\ \Pr \left( {A \mid T}\right) & \Pr \left( {T \mid T}\right) & \Pr \left( {G \mid T}\right) & \Pr \left( {C \mid T}\right) \\ \Pr \left( {A \mid G}\right) & \Pr \left( {T \mid G}\right) & \Pr \left( {G \mid G}\right) & \Pr \left( {C \mid G}\right) \\ \Pr \left( {A \mid C}\right) & \Pr \left( {T \mid C}\right) & \Pr \left( {G \mid C}\right) & \Pr \left( {C \mid C}\right) \end{array}\right) \tag{17.5} \]

Having found the stationary probability distribution for the variables \( \left( {{X}_{1},{X}_{2},{X}_{3},{X}_{4}}\right) = \left( {A, T, G, C}\right) \), we can now find the input distribution for the codons to be transmitted over the memoryless channel that constitutes the genetic code. We already know the transition probabilities. Now, by the Markov property, we have:

\[ \Pr \left( {{X}_{i}{X}_{j}{X}_{k}}\right) = \Pr \left( {X}_{i}\right) \Pr \left( {{X}_{j} \mid {X}_{i}}\right) \Pr \left( {{X}_{k} \mid {X}_{j}}\right) \tag{17.6} \]

We also have an expression for the transition probabilities of the codons:

\[ \Pr \left( {{X}_{a}{X}_{b}{X}_{c} \mid {X}_{i}{X}_{j}{X}_{k}}\right) = \Pr \left( {{X}_{a} \mid {X}_{i}{X}_{j}{X}_{k}}\right) \Pr \left( {{X}_{b} \mid {X}_{a}}\right) \Pr \left( {{X}_{c} \mid {X}_{b}}\right) \tag{17.7} \]

Summing up, we have a source, namely a source of codons.
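The fixed-point system (17.4) can be solved numerically, for instance by power iteration (a sketch; the transition probabilities below are purely illustrative placeholders, not measured values):

```python
# Hypothetical base-transition matrix M = (p_ij), rows indexed A, T, G, C;
# each row sums to 1. These numbers are illustrative only.
M = [
    [0.30, 0.20, 0.30, 0.20],  # Pr(A|A), Pr(T|A), Pr(G|A), Pr(C|A)
    [0.25, 0.25, 0.25, 0.25],
    [0.20, 0.30, 0.20, 0.30],
    [0.25, 0.25, 0.25, 0.25],
]

def stationary(M, iters=500):
    """Power iteration for the row vector w with w M = w, sum(w) = 1."""
    n = len(M)
    w = [1.0 / n] * n
    for _ in range(iters):
        w = [sum(w[i] * M[i][j] for i in range(n)) for j in range(n)]
    return w

w = stationary(M)   # (Pr(A), Pr(T), Pr(G), Pr(C)) for this toy matrix
```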
This is a stationary Markov source for which we have calculated the stationary probabilities and the transition probabilities. We can now use it for our calculation of the channel capacity of the genetic code. Of course, we could generalize to n-step Markov processes for better accuracy. The question of deriving a more realistic assumption 2, allowing for channel transcription errors and also incorporating mutation errors, has led to an enormous volume of biological research which lies beyond the scope of this book. The long-term goal would be to discover a general method for regulating biological information in an error-free way, reducing genetic disorders by using various channel capacities by analogy with the classical results of Shannon. But success seems a long way off at this stage.

Part III: Mainly Error-Correction

Chapter 18

## Error-Correction, Hadamard, Block Designs and \( p \) -rank

## 18.1 General Ideas of Error Correction

Error-correcting and error-detecting codes have their origins in the pioneering work of Hamming and Golay around 1950. The general theory is closely connected to topics in combinatorics and statistics, such as block designs, which are also discussed here. On the other hand, we are able to introduce linear codes by the "back door" in order to get an improvement on the (combinatorial) Gilbert-Varshamov bound. Nowadays the theory is applied in all situations involving communication channels. The channel might involve a telephone conversation, an encrypted electronic message, an internet transaction, a deep-space satellite transmission, or a compact disc recording. The basic principle is to introduce redundancy - "good redundancy" - into a message in order to improve reliability; this has been discussed in Chapter 12. We review some basic ideas and give several examples here. We also develop some classical bounds on the size of codes.
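As a toy instance of "good redundancy" (a sketch supplied here, not an example from the chapter): the 3-repetition code with majority decoding corrects any single bit flip per transmitted symbol.

```python
# 3-repetition code: send each bit three times, decode by majority vote.
def encode(bits):
    return [b for bit in bits for b in (bit, bit, bit)]

def decode(received):
    out = []
    for i in range(0, len(received), 3):
        block = received[i:i + 3]
        out.append(1 if sum(block) >= 2 else 0)  # majority vote
    return out

msg = [1, 0, 1, 1]
tx = encode(msg)
tx[4] ^= 1                  # the channel flips one bit of the second block
assert decode(tx) == msg    # a single flip per block is corrected
```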
On the practical side, one of the main problems in coding theory is to construct codes getting close to the Shannon capacity bound of Chapter 12. On the theoretical side, the "main" coding theory problem is discu
Cryptography, Information Theory, and Error-Correction: A Handbook for the 21st Century
Corollary 1.1.9 If \( L, M \) are finite separable extensions of \( k \), their compositum is separable as well.

Proof: By definition of \( {LM} \) there exist finitely many separable elements \( {\alpha }_{1},\ldots ,{\alpha }_{m} \) of \( L \) such that \( {LM} = M\left( {{\alpha }_{1},\ldots ,{\alpha }_{m}}\right) \). As the \( {\alpha }_{i} \) are separable over \( k \), they are separable over \( M \), and so the extension \( {LM} \mid M \) is separable by the previous corollary. But so is \( M \mid k \) by assumption, and we conclude by Corollary 1.1.7.

In view of the above two corollaries, the compositum of all finite separable subextensions of \( \bar{k} \) is a separable extension \( {k}_{s} \mid k \) containing each finite separable subextension of \( \bar{k} \mid k \).

Definition 1.1.10 The extension \( {k}_{s} \) is called the separable closure of \( k \) in \( \bar{k} \). From now on, by "a separable closure of \( k \)" we shall mean its separable closure in some chosen algebraic closure.

The following important property of finite separable extensions is usually referred to as the theorem of the primitive element.
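As a standard illustration (an example supplied here, not taken from the text): for the compositum \( \mathbb{Q}\left( \sqrt{2}\right) \mathbb{Q}\left( \sqrt{3}\right) = \mathbb{Q}\left( {\sqrt{2},\sqrt{3}}\right) \), the single element \( \alpha = \sqrt{2} + \sqrt{3} \) is primitive, since

\[ {\alpha }^{3} = {11}\sqrt{2} + 9\sqrt{3},\;\sqrt{2} = \frac{1}{2}\left( {{\alpha }^{3} - {9\alpha }}\right) ,\;\sqrt{3} = \frac{1}{2}\left( {{11\alpha } - {\alpha }^{3}}\right) , \]

so \( \mathbb{Q}\left( {\sqrt{2},\sqrt{3}}\right) = \mathbb{Q}\left( \alpha \right) \), and \( \alpha \) has minimal polynomial \( {x}^{4} - {10}{x}^{2} + 1 \) over \( \mathbb{Q} \).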
Galois Groups and Fundamental Groups
Proposition 1. Let the function \( u = u\left( x\right) \in {C}^{2}\left( {B}_{R}\right) \) be given, and its Fourier coefficients \( {f}_{kl}\left( r\right) \) are defined due to the formula (26). Then the Fourier coefficients \( {\widetilde{f}}_{kl}\left( r\right) \) of \( {\Delta u} \), namely \[ {\widetilde{f}}_{kl}\left( r\right) \mathrel{\text{:=}} {\int }_{\left| \eta \right| = 1}{\Delta u}\left( {r\eta }\right) {H}_{kl}\left( \eta \right) {d\sigma }\left( \eta \right) ,\;k = 0,1,2,\ldots ,\;l = 1,\ldots, N\left( {k, n}\right) , \] satisfy the identity \[ {\widetilde{f}}_{kl}\left( r\right) = {L}_{k, n}{f}_{kl}\left( r\right) ,\;k = 0,1,2,\ldots ,\;l = 1,\ldots, N\left( {k, n}\right) , \tag{28} \] with \( 0 \leq r < R \) . Proof: We choose \( 0 \leq r < R \), and calculate with the aid of (3) as follows: \[ {\widetilde{f}}_{kl}\left( r\right) = {\int }_{\left| \xi \right| = 1}{\Delta u}\left( {r\xi }\right) {H}_{kl}\left( \xi \right) {d\sigma }\left( \xi \right) \] \[ = {\int }_{\left| \xi \right| = 1}\left\{ {\left( {\frac{{\partial }^{2}}{\partial {r}^{2}} + \frac{n - 1}{r}\frac{\partial }{\partial r} + \frac{1}{{r}^{2}}\mathbf{\Lambda }}\right) u\left( {r\xi }\right) }\right\} {H}_{kl}\left( \xi \right) {d\sigma }\left( \xi \right) \] \[ = \left( {\frac{{\partial }^{2}}{\partial {r}^{2}} + \frac{n - 1}{r}\frac{\partial }{\partial r}}\right) {\int }_{\left| \xi \right| = 1}u\left( {r\xi }\right) {H}_{kl}\left( \xi \right) {d\sigma }\left( \xi \right) \] \[ + \frac{1}{{r}^{2}}{\int }_{\left| \xi \right| = 1}u\left( {r\xi }\right) \mathbf{\Lambda }{H}_{kl}\left( \xi \right) {d\sigma }\left( \xi \right) \] \[ = \left( {\frac{{\partial }^{2}}{\partial {r}^{2}} + \frac{n - 1}{r}\frac{\partial }{\partial r} - \frac{k\left( {k + \left( {n - 2}\right) }\right) }{{r}^{2}}}\right) {\int }_{\left| \xi \right| = 1}u\left( {r\xi }\right) {H}_{kl}\left( \xi \right) {d\sigma }\left( \xi \right) \] \[ = {L}_{k, n}{f}_{kl}\left( r\right) \;\text{ for }\;k = 0,1,2,\ldots 
,\;l = 1,\ldots, N\left( {k, n}\right) . \]

q.e.d.

Remark: The most important partial differential equation of the second order in quantum mechanics, namely the Schrödinger equation, contains the Laplacian as its principal part. Therefore, the investigation of eigenvalues of this operator is of central interest. This will be presented in Chapter VIII.

Portrait of Joseph Antoine Ferdinand Plateau (1801-1883); in possession of the Universitätsbibliothek Bonn; taken from the book by S. Hildebrandt and A. Tromba: Panoptimum - Mathematische Grundmuster des Vollkommenen, Spektrum-Verlag Heidelberg (1986).

## Linear Partial Differential Equations in \( {\mathbb{R}}^{n} \)

In this chapter we become acquainted with the different types of partial differential equations in \( {\mathbb{R}}^{n} \). We treat the maximum principle for elliptic differential equations and prove the uniqueness of the mixed boundary value problem for quasilinear elliptic differential equations. Then we consider the initial value problem of the parabolic heat equation. Finally, we solve the Cauchy initial value problem for the hyperbolic wave equation in \( {\mathbb{R}}^{n} \) and show its invariance under Lorentz transformations. The differential equations presented are situated at the center of mathematical physics.

## §1 The maximum principle for elliptic differential equations

We shall consider a class of differential equations which contains the Laplace operator.

Definition 1. Let \( \Omega \subset {\mathbb{R}}^{n} \) be a domain with \( n \in \mathbb{N} \), on which the continuous coefficient functions \( {a}_{ij},{b}_{i}, c \in {C}^{0}\left( \Omega \right) \), \( {a}_{ij}\left( x\right) ,{b}_{i}\left( x\right), c\left( x\right) : \Omega \rightarrow \mathbb{R} \) for \( i, j = 1,\ldots, n \), are defined.
Furthermore, let the matrix \( {\left( {a}_{ij}\left( x\right) \right) }_{i, j = 1,\ldots, n} \) be symmetric for all \( x \in \Omega \). The linear partial differential operator of second order \( \mathcal{L} : {C}^{2}\left( \Omega \right) \rightarrow {C}^{0}\left( \Omega \right) \) defined by

\[ \mathcal{L}u\left( x\right) \mathrel{\text{:=}} \mathop{\sum }\limits_{{i, j = 1}}^{n}{a}_{ij}\left( x\right) \frac{{\partial }^{2}}{\partial {x}_{i}\partial {x}_{j}}u\left( x\right) + \mathop{\sum }\limits_{{i = 1}}^{n}{b}_{i}\left( x\right) \frac{\partial }{\partial {x}_{i}}u\left( x\right) + c\left( x\right) u\left( x\right) ,\;x \in \Omega , \tag{1} \]

is named elliptic (or, alternatively, degenerate elliptic) if and only if

\[ \mathop{\sum }\limits_{{i, j = 1}}^{n}{a}_{ij}\left( x\right) {\xi }_{i}{\xi }_{j} > 0\;\left( {\text{ or alternatively }\mathop{\sum }\limits_{{i, j = 1}}^{n}{a}_{ij}\left( x\right) {\xi }_{i}{\xi }_{j} \geq 0}\right) \]

holds for all \( \xi = \left( {{\xi }_{1},\ldots ,{\xi }_{n}}\right) \in {\mathbb{R}}^{n} \smallsetminus \{ 0\} \) and all \( x \in \Omega \). When we have ellipticity constants \( 0 < m \leq M < + \infty \) such that

\[ m{\left| \xi \right| }^{2} \leq \mathop{\sum }\limits_{{i, j = 1}}^{n}{a}_{ij}\left( x\right) {\xi }_{i}{\xi }_{j} \leq M{\left| \xi \right| }^{2} \]

holds for all \( \xi = \left( {{\xi }_{1},\ldots ,{\xi }_{n}}\right) \in {\mathbb{R}}^{n} \) and all \( x \in \Omega \), the operator \( \mathcal{L} \) is called uniformly elliptic. In the case \( c\left( x\right) \equiv 0, x \in \Omega \), we use the notation \( \mathcal{M}u\left( x\right) \mathrel{\text{:=}} \mathcal{L}u\left( x\right), x \in \Omega \), for the reduced differential operator.

Remark: A uniformly elliptic differential operator is elliptic, and an elliptic differential operator is degenerate elliptic.
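The two-sided bound defining uniform ellipticity can be checked numerically for a constant-coefficient example (a sketch with an illustrative matrix, not one from the text): for a symmetric matrix, the sharpest constants \( m, M \) are its smallest and largest eigenvalues.

```python
import math

# Illustrative constant symmetric coefficient matrix (n = 2).
a11, a12, a22 = 2.0, 0.5, 1.0

# Eigenvalues of a symmetric 2x2 matrix via trace and determinant.
tr, det = a11 + a22, a11 * a22 - a12 * a12
disc = math.sqrt(tr * tr - 4 * det)
m_const, M_const = (tr - disc) / 2, (tr + disc) / 2  # ellipticity constants

def quad(x1, x2):
    """The quadratic form sum a_ij xi_i xi_j."""
    return a11 * x1 * x1 + 2 * a12 * x1 * x2 + a22 * x2 * x2

# Spot-check m|xi|^2 <= xi^T A xi <= M|xi|^2 on a few directions.
for (x1, x2) in [(1, 0), (0, 1), (1, 1), (-2, 3)]:
    n2 = x1 * x1 + x2 * x2
    assert m_const * n2 - 1e-12 <= quad(x1, x2) <= M_const * n2 + 1e-12
```

Since `m_const > 0` here, this particular operator is uniformly elliptic.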
The Laplace operator appears for \( {a}_{ij}\left( x\right) \equiv {\delta }_{ij},{b}_{i}\left( x\right) \equiv 0, c\left( x\right) \equiv 0 \) with \( i, j = 1,\ldots, n \) and is consequently uniformly el
Partial Differential Equations: Foundations and Integral Representations, Part 1
Proposition 6.9. For an equilibrium point \( \left( {{x}_{e},{y}_{e}}\right) \) as defined by Prop. 6.2, if the following inequalities hold:

\[ \left\{ \begin{array}{l} \left( {1 + \eta }\right) \frac{\partial m}{\partial x} - \frac{\partial m}{\partial y} < \frac{\left( {1 + \eta }\right) C\left( {1 + {x}_{e} + {y}_{e} + F}\right) }{{\lambda }_{y}{\left( F\left( 2 + {y}_{e} + \eta \right) + {x}_{e}\left( 1 + \eta \right) \right) }^{2}} \\ \frac{\partial m}{\partial x} + \left( \frac{{y}_{e} + 1}{{x}_{e} + F}\right) \frac{\partial m}{\partial y} < 0 \end{array}\right. \tag{18} \]

then \( \left( {{x}_{e},{y}_{e}}\right) \) is asymptotically stable.

## Proof (Proof of Prop. 6.9)

To analyse the equilibrium point \( \left( {{x}_{e},{y}_{e}}\right) \), we take the Jacobian \( J \) of the rate equations (2) and (3) at this point. The partial derivatives \( \frac{\partial g}{\partial x} \) and \( \frac{\partial g}{\partial y} \) at \( \left( {{x}_{e},{y}_{e}}\right) \) are obtained from the delay equation, Eq. (7):

\[ \frac{\partial g}{\partial x}\left( {{x}_{e},{y}_{e}}\right) = \frac{\left( {1 + \eta }\right) \left( {{y}_{e} + 1}\right) {FC}}{{\left( F{y}_{e} + {x}_{e}\left( 1 + \eta \right) + F\left( 2 + \eta \right) \right) }^{2}} \tag{19} \]

\[ \frac{\partial g}{\partial y}\left( {{x}_{e},{y}_{e}}\right) = - \frac{\left( {1 + \eta }\right) \left( {{x}_{e} + F}\right) {FC}}{{\left( F{y}_{e} + {x}_{e}\left( 1 + \eta \right) + F\left( 2 + \eta \right) \right) }^{2}} \tag{20} \]

The equilibrium point \( \left( {{x}_{e},{y}_{e}}\right) \) is asymptotically stable if the eigenvalues of \( J \) at \( \left( {{x}_{e},{y}_{e}}\right) \) have strictly negative real parts [7, Ch. \( 2\& 5 \) ].
The characteristic polynomial of \( J \) is:

\[ {\lambda }^{2} + \left( {{\lambda }_{y}F\left( {\frac{\partial m}{\partial y} - \left( {1 + \eta }\right) \frac{\partial m}{\partial x}}\right) + \frac{\partial g}{\partial x} - \frac{\partial g}{\partial y}}\right) \lambda + \eta \left( {{\lambda }_{y}F\left( {\frac{\partial m}{\partial x}\frac{\partial g}{\partial y} - \frac{\partial m}{\partial y}\frac{\partial g}{\partial x}}\right) }\right) \]

From this and equations (19) and (20), the real parts of the roots are strictly negative if and only if:

\[ \left( {1 + \eta }\right) \frac{\partial m}{\partial x} - \frac{\partial m}{\partial y} < \frac{\left( {1 + \eta }\right) C\left( {1 + {x}_{e} + {y}_{e} + F}\right) }{{\lambda }_{y}{\left( F\left( 2 + {y}_{e} + \eta \right) + {x}_{e}\left( 1 + \eta \right) \right) }^{2}} \tag{21} \]

and

\[ \frac{\partial m}{\partial x} + \left( \frac{{y}_{e} + 1}{{x}_{e} + F}\right) \frac{\partial m}{\partial y} < 0 \tag{22} \]

The inequalities of the proposition are obtained by combining equations (21) and (22). Propositions 6.2 and 6.9 give sufficient conditions on \( m \) to define stable equilibria. Next, we prove that there exists an \( m \) which stabilizes the system for any arrival setting.
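The stability criterion used above — a monic quadratic \( {\lambda }^{2} + {a\lambda } + b \) has both roots with strictly negative real part if and only if \( a > 0 \) and \( b > 0 \) (the second-order Routh-Hurwitz condition) — can be spot-checked directly (illustrative coefficients, not the model's):

```python
import cmath

def roots_stable(a, b):
    """True iff both roots of l^2 + a*l + b have strictly negative real part."""
    r1 = (-a + cmath.sqrt(a * a - 4 * b)) / 2
    r2 = (-a - cmath.sqrt(a * a - 4 * b)) / 2
    return r1.real < 0 and r2.real < 0

assert roots_stable(3.0, 2.0)        # a, b > 0: roots -1, -2, stable
assert not roots_stable(-1.0, 2.0)   # a < 0: complex roots with positive real part
assert not roots_stable(3.0, -1.0)   # b < 0: one real root is positive
```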
Analytical and Stochastic Modeling Techniques and Applications: 16th International Conference, ASMTA
Lemma 7.5. Let \( m \) be a nonnegative real number. (a) The following equality holds for every positive real number \( c \) : \[ 2{c}^{2}{\int }_{0}^{1}r{\left| {J}_{m}\left( cr\right) \right| }^{2}{dr} = {c}^{2}{\left| {J}_{m}^{\prime }\left( c\right) \right| }^{2} + \left( {{c}^{2} - {m}^{2}}\right) {\left| {J}_{m}\left( c\right) \right| }^{2}. \tag{7.5} \] (b) The functions \( {J}_{m}\left( x\right) \) and \( {J}_{m}^{\prime }\left( x\right) \) are positive in \( (0, m\rbrack \), so their first positive roots are bigger than \( m \) . (c) If \( \alpha \geq 0 \), then the first positive root of \( \alpha {J}_{m}\left( x\right) + x{J}_{m}^{\prime }\left( x\right) \) is bigger than \( m \) . (d) If \( \alpha < 0 \) and \( m > \left| \alpha \right| \), then the first positive root of \( \alpha {J}_{m}\left( x\right) + x{J}_{m}^{\prime }\left( x\right) \) is bigger than \( \sqrt{{m}^{2} - {\alpha }^{2}} \) . Proof. (a) Using the power series expansion (7.4), we see that \( y \mathrel{\text{:=}} {J}_{m}\left( x\right) \) satisfies the differential equation \[ {x}^{2}{y}^{\prime \prime } + x{y}^{\prime } + \left( {{x}^{2} - {m}^{2}}\right) y = 0\;\text{ in }\;\left( {0,\infty }\right) . \tag{7.6} \] Multiplying this equation by \( 2{y}^{\prime } \) and integrating over \( \left( {0, c}\right) \), we obtain the equality \[ {\int }_{0}^{c}{2x}{y}^{2}{dx} = {\left\lbrack {x}^{2}{\left( {y}^{\prime }\right) }^{2} + \left( {x}^{2} - {m}^{2}\right) {y}^{2}\right\rbrack }_{0}^{c} = {c}^{2}{y}^{\prime }{\left( c\right) }^{2} + \left( {{c}^{2} - {m}^{2}}\right) y{\left( c\right) }^{2} + {m}^{2}y{\left( 0\right) }^{2}. \] Since \( y\left( 0\right) = 0 \) for \( m > 0 \) from the power series expansion (7.4), the last term always vanishes. The equality (7.5) follows by the change of variables \( x = {cr} \) in the integral. 
(b) One can see from the power series expansion (7.4) that \( {J}_{m}\left( x\right) \) and \( {J}_{m}^{\prime }\left( x\right) \) are positive for small positive values of \( x \) . Therefore, we infer from (7.5) that \[ {\left| {J}_{m}^{\prime }\left( c\right) \right| }^{2} \geq 2{\int }_{0}^{1}r{\left| {J}_{m}\left( cr\right) \right| }^{2}{dr} > 0 \] for all \( 0 < c \leq m \) . Hence \( {J}_{m}^{\prime }\left( x\right) \) is positive in \( (0, m\rbrack \) . Consequently, \( {J}_{m}\left( x\right) \) is increasing and thus also positive in \( (0, m\rbrack \) . (c) If \( \alpha \geq 0 \), then \( \alpha {J}_{m}\left( x\right) + x{J}_{m}^{\prime }\left( x\right) \) is positive in \( (0, m\rbrack \) by property (b). (d) If \( \alpha {J}_{m}\left( c\right) + c{J}_{m}^{\prime }\left( c\right) = 0 \) for some \( 0 < c \leq m \), then we obtain from (7.5) the following equality: \[ 2{c}^{2}{\int }_{0}^{1}r{\left| {J}_{m}\left( cr\right) \right| }^{2}{dr} = \left( {{\alpha }^{2} + {c}^{2} - {m}^{2}}\right) {\left| {J}_{m}\left( c\right) \right| }^{2}. \] Since \( {J}_{m} > 0 \) in \( (0, m\rbrack \) by (b), we must have \( {\alpha }^{2} + {c}^{2} - {m}^{2} > 0 \), i.e., \( c > \) \( \sqrt{{m}^{2} - {\alpha }^{2}} \) . Next we recall the classical Sturm oscillation theorem:
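Identity (7.5) can also be verified numerically for integer \( m \) (a sketch: the power series (7.4) is summed directly, the derivative uses the standard recurrence \( {J}_{m}^{\prime } = \left( {{J}_{m - 1} - {J}_{m + 1}}\right) /2 \), and the values \( m = 1, c = 2 \) are chosen purely for illustration):

```python
import math

def J(m, x, terms=40):
    """Bessel function of the first kind, integer order m >= 0, via its power series."""
    return sum((-1) ** k / (math.factorial(k) * math.factorial(k + m))
               * (x / 2) ** (2 * k + m) for k in range(terms))

def Jprime(m, x):
    # Standard recurrence: J_m'(x) = (J_{m-1}(x) - J_{m+1}(x)) / 2 for m >= 1.
    return 0.5 * (J(m - 1, x) - J(m + 1, x))

def simpson(f, a, b, n=2000):
    """Composite Simpson rule on [a, b] with n (even) subintervals."""
    h = (b - a) / n
    s = f(a) + f(b) + sum((4 if i % 2 else 2) * f(a + i * h) for i in range(1, n))
    return s * h / 3

m, c = 1, 2.0
lhs = 2 * c * c * simpson(lambda r: r * J(m, c * r) ** 2, 0.0, 1.0)
rhs = c * c * Jprime(m, c) ** 2 + (c * c - m * m) * J(m, c) ** 2
assert abs(lhs - rhs) < 1e-6   # identity (7.5) holds to quadrature accuracy
```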
Fourier series in control
Lemma 11.3. The operation \( \odot \) on \( {\mathbb{Z}}_{n} \) is associative and commutative and has [1] as an identity element. PROOF. Make the obvious changes in the proof of Theorem 11.1. Let \( {\mathbb{Z}}_{n}^{\# } \) denote the set \( \{ \left\lbrack 1\right\rbrack ,\left\lbrack 2\right\rbrack ,\ldots ,\left\lbrack {n - 1}\right\rbrack \} \), that is, \( {\mathbb{Z}}_{n} \) with \( \left\lbrack 0\right\rbrack \) deleted. Although \( {\mathbb{Z}}_{n} \) with \( \odot \) is never a group, the next example shows that \( {\mathbb{Z}}_{n}^{\# } \) with \( \odot \) can be a group.
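A quick computational illustration (a sketch supplied here, not from the text): \( {\mathbb{Z}}_{n}^{\# } \) under \( \odot \) is a group precisely when \( n \) is prime, since for composite \( n = {ab} \) the product \( \left\lbrack a\right\rbrack \odot \left\lbrack b\right\rbrack = \left\lbrack 0\right\rbrack \) falls outside the set.

```python
def is_group_Zn_sharp(n):
    """Check whether {[1], ..., [n-1]} under multiplication mod n is a group.

    Identity [1] and associativity are inherited from Z_n (Lemma 11.3),
    so only closure and inverses need checking.
    """
    elems = set(range(1, n))
    # closure: no product may land on [0] (or outside the set)
    if any((a * b) % n not in elems for a in elems for b in elems):
        return False
    # every element must have a multiplicative inverse
    return all(any((a * b) % n == 1 for b in elems) for a in elems)

assert is_group_Zn_sharp(5)       # Z_5^# is a group
assert not is_group_Zn_sharp(6)   # [2] * [3] = [0], so Z_6^# is not closed
```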
175_Modern Algebra_ An Introduction_ Sixth Edition
Lemma 11.3. The operation \( \odot \) on \( {\mathbb{Z}}_{n} \) is associative and commutative and has [1] as an identity element.
PROOF. Make the obvious changes in the proof of Theorem 11.1.
2
8
Algebra
Associative rings and algebras
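The closing remark of the record above, that \( {\mathbb{Z}}_{n}^{\# } \) with \( \odot \) can be a group, is checkable by brute force. A minimal sketch (ours, not the book's) confirms that this happens exactly when \( n \) is prime; associativity, commutativity, and the identity [1] are inherited from Lemma 11.3, so only closure and inverses need checking.

```python
def is_group_under_mult_mod(n):
    # Z_n^# = {[1], ..., [n-1]} with [x] ⊙ [y] = [(x * y) mod n]
    elems = set(range(1, n))
    closed = all((x * y) % n in elems for x in elems for y in elems)
    has_inverses = all(any((x * y) % n == 1 for y in elems) for x in elems)
    return closed and has_inverses

def is_prime(n):
    # naive trial division, enough for a small brute-force check
    return n > 1 and all(n % d for d in range(2, n))

for n in range(2, 30):
    assert is_group_under_mult_mod(n) == is_prime(n)
```

For composite n closure already fails (e.g. [2] ⊙ [2] = [0] in \( {\mathbb{Z}}_{4} \)), which is why \( {\mathbb{Z}}_{n} \) with \( \odot \) is never a group but \( {\mathbb{Z}}_{p}^{\# } \) is.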
Lemma 9.1.8. [28] Let \( \left( {X, q{p}_{b}}\right) \) be a quasi-partial \( b \) -metric space and \( \left( {X,{d}_{q{p}_{b}}}\right) \) be the corresponding \( b \) -metric space. Then \( \left( {X,{d}_{q{p}_{b}}}\right) \) is complete if \( \left( {X, q{p}_{b}}\right) \) is complete.
25453_Advances in Mathematical Analysis and its Applications
Lemma 9.1.8. [28] Let \( \left( {X, q{p}_{b}}\right) \) be a quasi-partial \( b \) -metric space and \( \left( {X,{d}_{q{p}_{b}}}\right) \) be the corresponding \( b \) -metric space. Then \( \left( {X,{d}_{q{p}_{b}}}\right) \) is complete if \( \left( {X, q{p}_{b}}\right) \) is complete.
Null
4
19
Geometry and Topology
General topology
Problem 12.3. Let the process \( X\left( t\right), t \in \left\lbrack {0, T}\right\rbrack \), be the geometric Lévy process \[ {dX}\left( t\right) = X\left( {t}^{ - }\right) \left\lbrack {\alpha \left( t\right) {dt} + \beta \left( t\right) {dW}\left( t\right) + {\int }_{{\mathbb{R}}_{0}}\gamma \left( {t, z}\right) \widetilde{N}\left( {{dt},{dz}}\right) }\right\rbrack \] where the involved coefficients are deterministic. Find \( {D}_{t, z}X\left( T\right) \) for \( t \leq T \) .
35342_Malliavin calculus for Lévy processes with applications to finance _ Lévy过程的Malliavin分析及其在金融学中的应用
Problem 12.3. Let the process \( X\\left( t\\right), t \\in \\left\\lbrack {0, T}\\right\\rbrack \\), be the geometric Lévy process\n\n\\[ \n{dX}\\left( t\\right) = X\\left( {t}^{ - }\\right) \\left\\lbrack {\\alpha \\left( t\\right) {dt} + \\beta \\left( t\\right) {dW}\\left( t\\right) + {\\int }_{{\\mathbb{R}}_{0}}\\gamma \\left( {t, z}\\right) \\widetilde{N}\\left( {{dt},{dz}}\\right) }\\right\\rbrack \n\\]\n\nwhere the involved coefficients are deterministic. Find \( {D}_{t, z}X\\left( T\\right) \) for \( t \\leq T \) .
Null
6
35
Differential Equations and Dynamical Systems
Ordinary differential equations
Corollary 7.1 (Odd prime \( p \) ) For idempotents \( e \in {Z}_{p - 1} \) as naturals \( e < p : {e}^{p} \equiv \) \( e{\;\operatorname{mod}\;{p}^{3}} \Rightarrow e = 1 \) . For \( q = p - 1 \) and some carry \( 0 \leq c < q \) : idempotent \( {e}^{2} \equiv e{\;\operatorname{mod}\;q} \) implies \[ {e}^{2} = {cq} + e = c\left( {p - 1}\right) + e < {q}^{2}. \tag{#} \] Notice that carry \( c = 0 \) resp. \( c > 0 \) yield: \[ c = 0 \Leftrightarrow e = 1,\;\text{ and }\;e > 1 \Rightarrow {e}^{2} > q \Rightarrow e > \sqrt{p - 1}. \tag{7.5} \]
22305_Associative Digital Network Theory_ An Associative Algebra Approach to Logic_ Arithmetic and State M
Corollary 7.1 (Odd prime \( p \) ) For idempotents \( e \in {Z}_{p - 1} \) as naturals \( e < p : {e}^{p} \equiv \) \( e{\;\operatorname{mod}\;{p}^{3}} \Rightarrow e = 1 \) .
For \( q = p - 1 \) and some carry \( 0 \leq c < q \) : idempotent \( {e}^{2} \equiv e{\;\operatorname{mod}\;q} \) implies\n\n\[ \n{e}^{2} = {cq} + e = c\left( {p - 1}\right) + e < {q}^{2}. \tag{#} \n\]\n\nNotice that carry \( c = 0 \) resp. \( c > 0 \) yield:\n\n\[ \nc = 0 \Leftrightarrow e = 1,\;\text{ and }\;e > 1 \Rightarrow {e}^{2} > q \Rightarrow e > \sqrt{p - 1}. \tag{7.5} \n\]
3
13
Number Theory
Number theory
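The inequalities in Corollary 7.1 can be verified mechanically for small odd primes. The sketch below is our own code: it enumerates the nonzero idempotents mod \( q = p - 1 \), computes the carry \( c \) from (#), and checks \( c = 0 \) for \( e = 1 \) and \( e > \sqrt{p - 1} \) for \( e > 1 \), as in (7.5).

```python
import math

def idempotents_mod(q):
    # nonzero idempotents of Z_q: e in {1, ..., q-1} with e^2 ≡ e (mod q)
    return [e for e in range(1, q) if (e * e) % q == e]

for p in [3, 5, 7, 11, 13, 29, 31, 37]:   # odd primes
    q = p - 1
    for e in idempotents_mod(q):
        c = (e * e - e) // q              # the "carry" of (#): e^2 = c*q + e
        assert 0 <= c < q and e * e < q * q
        if e == 1:
            assert c == 0                  # zero carry at the trivial idempotent
        if e > 1:
            assert e * e > q and e > math.sqrt(p - 1)
```

For instance p = 13 gives q = 12 with idempotents 1, 4, 9 and carries 0, 1, 6, all satisfying the stated bounds.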
Theorem 7. If (W2), (W3) and (W4) hold, then eq. (50) has a nontrivial, finite energy solution. This solution has radial symmetry, namely \[ \mathbf{A}\left( x\right) = {g}^{-1}\mathbf{A}\left( {gx}\right) \;\forall g \in O\left( 3\right) \] where \( O\left( 3\right) \) is the orthogonal group in \( {\mathbf{R}}^{3} \) . ## 3.3. Solitary waves A solitary wave is a solution of a field equation whose energy travels as a localized packet. Since our equations are invariant for the action of the Poincaré group, given a static solution \[ \left( {\mathbf{A}\left( x\right) ,\varphi \left( x\right) }\right) \] it is possible to produce a solitary wave, moving with velocity \( \mathbf{v} = \left( {v,0,0}\right) \) with \( \left| v\right| < 1 \), just making a Lorentz transformation \[ \left\lbrack \begin{matrix} {\mathbf{A}}_{\mathbf{v}}\left( {x, t}\right) \\ {\varphi }_{\mathbf{v}}\left( {x, t}\right) \end{matrix}\right\rbrack = \left\lbrack \begin{matrix} {\mathbf{A}}^{\prime }\left( {{x}^{\prime }\left( {x, t}\right) }\right) \\ {\varphi }^{\prime }\left( {{x}^{\prime }\left( {x, t}\right) }\right) \end{matrix}\right\rbrack \tag{54} \] where \[ {x}^{\prime } = \left\lbrack \begin{matrix} \frac{{x}_{1} - {vt}}{\sqrt{1 - {v}^{2}}} \\ {x}_{2} \\ {x}_{3} \end{matrix}\right\rbrack ;{\mathbf{A}}^{\prime } = \left\lbrack \begin{matrix} \frac{{A}_{1} + {v\varphi }}{\sqrt{1 - {v}^{2}}} \\ {A}_{2} \\ {A}_{3} \end{matrix}\right\rbrack ;{\varphi }^{\prime } = \frac{\varphi + v{A}_{1}}{\sqrt{1 - {v}^{2}}}. \] Solitary waves have a particle-like behavior. In particular, if \( \left( {\mathbf{A},\varphi }\right) \) is a static solution with radial symmetry, the region filled with matter \[ \Omega = \left\{ {x \in {\mathbf{R}}^{3} : \left| {{\left| \mathbf{A}\left( x\right) \right| }^{2} - \varphi {\left( x\right) }^{2}}\right| \geq 1}\right\} \] is a ball centered at the origin. 
Applying the above Lorentz transformation, \[ {\Omega }_{t} = \left\{ {x \in {\mathbf{R}}^{3} : \left| {{\left| {\mathbf{A}}_{\mathbf{v}}\left( x, t\right) \right| }^{2} - {\varphi }_{\mathbf{v}}{\left( x, t\right) }^{2}}\right| \geq 1}\right\} \] becomes a rotation ellipsoid moving in direction \( {x}_{1} \) with velocity \( \mathbf{v} \) and having the shortest axis of length \( R\sqrt{1 - {v}^{2}} \) (where \( R \) is the radius of \( \Omega \) ). If we let the nonlinear term \[ W\left( t\right) = {W}_{\varepsilon }\left( t\right) \mathrel{\text{:=}} \frac{1}{{\varepsilon }^{2}}{W}_{1}\left( t\right) \] depend on a small parameter \( \varepsilon \), then the radius of \( \Omega \) becomes \( {\varepsilon R} \) . Letting \( \varepsilon \rightarrow 0 \) , the particles approach a pointwise behavior. Moreover, the solitary waves obtained by this method present the following features: - It can be directly verified that the momentum \( \mathbf{P}\left( {{\mathbf{A}}_{\mathbf{v}},{\varphi }_{\mathbf{v}}}\right) \) in (40) of the solitary wave (54) is proportional to the velocity \( \mathbf{v} \) , \[ \mathbf{P}\left( {{\mathbf{A}}_{\mathbf{v}},{\varphi }_{\mathbf{v}}}\right) = m\mathbf{v}, m > 0. \] So the constant \( m \) defines the inertial mass of (54). Moreover, if we calculate the energy \( \mathcal{E}\left( {{\mathbf{A}}_{\mathbf{v}},{\varphi }_{\mathbf{v}}}\right) \) (defined in (39)) of (54), it can be seen that \[ m = \mathcal{E}\left( {{\mathbf{A}}_{\mathbf{v}},{\varphi }_{\mathbf{v}}}\right) \] (Einstein equation; in our model we have set \( c = 1 \) ). - The mass increases with velocity \[ m = \frac{{m}_{0}}{\sqrt{1 - {v}^{2}}} \] where \[ {m}_{0} = \mathcal{E}\left( {\mathbf{A},\varphi }\right) = \frac{1}{3}\int \left( {{\left| \nabla \times \mathbf{A}\right| }^{2} - {\left| \nabla \varphi \right| }^{2}}\right) {dx} = \int W\left( {{\left| \mathbf{A}\right| }^{2} - {\varphi }^{2}}\right) {dx} \tag{55} \] is the rest mass. 
- If we do not take into account the effect of the magnetic moment \( \mu \) (see proposition 3 and remark 4) the barycenter \( \mathbf{q} = \mathbf{q}\left( \mathbf{t}\right) \) of the solitary wave (54) in a given exterior field \( \mathbf{E},\mathbf{H} \) satisfies the equation \[ \frac{d}{dt}\frac{{m}_{0}}{\sqrt{1 - {\left| \dot{\mathbf{q}}\right| }^{2}}}\dot{\mathbf{q}} = e\left( {\mathbf{E} + \mathbf{v} \times \mathbf{H}}\right) \] where \( e \) is the charge (see (41)) of the solitary wave \( \left( {{\mathbf{A}}_{\mathbf{v}},{\varphi }_{\mathbf{v}}}\right) \) , \[ e = \int {W}^{\prime }\left( {{\left| {\mathbf{A}}_{\mathbf{v}}\right| }^{2} - {\varphi }_{\mathbf{v}}^{2}}\right) {\varphi }_{\mathbf{v}}{dx} \] Concluding, our solitary waves behave as relativistic particles except that they have space extension. These facts are a consequence of the invariance of the Lagrangian density with respect to the Poincaré group. For more details we refer to \( \left\lbrack 3\right\rbrack \) . ## 4. The existence result First we write equation (50) in a slightly more general form. 
Let \( f : {\mathbf{R}}^{3} \rightarrow \mathbf{R} \) be a \( {C}^{2} \) function satisfying the assumptions \[ f\left( 0\right) = 0\text{and}f\text{strictly convex.} \tag{56} \] There are positive constants \( {c}_{1},{c}_{2}, p, q \) with \( 2 < p < 6 < q \) such that \[ {c}_{1}{\left| \xi \right| }^{p} \leq f\left( \xi \right) \text{ for }\left| \xi \right| \geq 1 \tag{57} \] \[ {c}_{1}{\left| \xi \right| }^{q} \leq f\left( \xi \right) \text{ for }\left| \xi \right| \leq 1 \tag{58} \] \[ \left| {{f}^{\prime }\left( \xi \right) }\right| \leq {c}_{2}{\left| \xi \right| }^{p - 1}\text{ for }\left| \xi \right| \geq 1 \tag{59} \] \[ \left| {{f}^{\prime }\left( \xi \right) }\right| \leq {c}_{2}{\left| \xi \right| }^{q - 1}\text{ for }\left| \xi \right| \leq 1 \tag{60} \] We are interested in finding nontrivial, finite energy, weak solutions \( \mathbf{A} : {\mathbf{R}}^{3} \rightarrow {\mathbf{R}}^{3} \) of the equation \[ \nabla \times \nabla \times \mathbf{A} = {f}^{\prime }\left( \mathbf
12335_Contributions to Nonlinear Analysis_ A Tribute to D_G_ de Figueiredo on the Occasion of his 70th Bir
Theorem 7. If (W2), (W3) and (W4) hold, then eq. (50) has a nontrivial, finite energy solution. This solution has radial symmetry, namely\n\n\[ \n\mathbf{A}\left( x\right) = {g}^{-1}\mathbf{A}\left( {gx}\right) \;\forall g \in O\left( 3\right) \n\]\n\nwhere \( O\left( 3\right) \) is the orthogonal group in \( {\mathbf{R}}^{3} \) .
Null
5
Unknown
Analysis
Unknown
Example 20.1.2 (Replacing a variable by a constant) Consider again the skeleton class declaration in Table 20.1. If none of \( {\mathrm{C}}_{1},\ldots ,{\mathrm{C}}_{k} \) contains a command of the form \( \mathrm{X} \mathrel{\text{:=}} \mathrm{E} \), then replacing each occurrence of the expression \( \mathrm{X} \) in \( {\mathrm{C}}_{1},\ldots ,{\mathrm{C}}_{k} \) by 0 (the initial value of \( \mathrm{X} \) ) and deleting the declaration of \( \mathrm{X} \) should not change the behaviour of objects of the class, and hence should not affect the behaviour of any program in which the class appears. Let us reuse the notations \( {V}_{j},{M}_{\mathrm{A}} \), etc. from Example 20.1.1, and write \( {\mathrm{A}}^{\prime \prime } \) for the class obtained from A by applying the transformation. So, \[ {\mathcal{O}}_{\mathrm{A}} \equiv \left( {{\nu g}, p}\right) \left( {{V}_{j} \mid \left( {\nu \widetilde{g},\widetilde{p}}\right) \left( {{\Pi }_{i \neq j}{V}_{i} \mid {M}_{\mathrm{A}}}\right) }\right) \] and \[ {\mathcal{O}}_{{\mathbf{A}}^{\prime \prime }} \equiv \left( {\nu \widetilde{g},\widetilde{p}}\right) \left( {{\Pi }_{i \neq j}{V}_{i} \mid {M}_{{\mathbf{A}}^{\prime \prime }}}\right) . \] Suppose there are \( n \) occurrences of \( \mathrm{X} \) in the method bodies. Then there exists an \( n \) -ary context \( D \) such that \( g, p \notin \mathrm{{fn}}\left( D\right) \) and \[ {M}_{\mathbf{A}} = D\left\lbrack {\llbracket \mathrm{x}\rrbracket \left\langle {{h}_{1}, g,\ldots }\right\rangle ,\ldots ,\llbracket \mathrm{x}\rrbracket \left\langle {{h}_{n}, g,\ldots }\right\rangle }\right\rbrack \] and \[ {M}_{{\mathbf{A}}^{\prime \prime }} = D\left\lbrack {\llbracket 0\rrbracket \left\langle {{h}_{1},\ldots }\right\rangle ,\ldots ,\llbracket 0\rrbracket \left\langle {{h}_{n},\ldots }\right\rangle }\right\rbrack \] for some names \( {h}_{1},\ldots ,{h}_{n} \) .
18074_The Pi Calculus
Example 20.1.2 (Replacing a variable by a constant) Consider again the skeleton class declaration in Table 20.1. If none of \( {\mathrm{C}}_{1},\ldots ,{\mathrm{C}}_{k} \) contains a command of the form \( \mathrm{X} \mathrel{\text{:=}} \mathrm{E} \), then replacing each occurrence of the expression \( \mathrm{X} \) in \( {\mathrm{C}}_{1},\ldots ,{\mathrm{C}}_{k} \) by 0 (the initial value of \( \mathrm{X} \) ) and deleting the declaration of \( \mathrm{X} \) should not change the behaviour of objects of the class, and hence should not affect the behaviour of any program in which the class appears.
Let us reuse the notations \( {V}_{j},{M}_{\mathrm{A}} \), etc. from Example 20.1.1, and write \( {\mathrm{A}}^{\prime \prime } \) for the class obtained from A by applying the transformation. So,\n\n\[ \n{\mathcal{O}}_{\mathrm{A}} \equiv \left( {{\nu g}, p}\right) \left( {{V}_{j} \mid \left( {\nu \widetilde{g},\widetilde{p}}\right) \left( {{\Pi }_{i \neq j}{V}_{i} \mid {M}_{\mathrm{A}}}\right) }\right) \n\]\n\nand\n\n\[ \n{\mathcal{O}}_{{\mathbf{A}}^{\prime \prime }} \equiv \left( {\nu \widetilde{g},\widetilde{p}}\right) \left( {{\Pi }_{i \neq j}{V}_{i} \mid {M}_{{\mathbf{A}}^{\prime \prime }}}\right) . \n\]\n\nSuppose there are \( n \) occurrences of \( \mathrm{X} \) in the method bodies. Then there exists an \( n \) -ary context \( D \) such that \( g, p \notin \mathrm{{fn}}\left( D\right) \) and\n\n\[ \n{M}_{\mathbf{A}} = D\left\lbrack {\llbracket \mathrm{x}\rrbracket \left\langle {{h}_{1}, g,\ldots }\right\rangle ,\ldots ,\llbracket \mathrm{x}\rrbracket \left\langle {{h}_{n}, g,\ldots }\right\rangle }\right\rbrack \n\]\n\nand\n\n\[ \n{M}_{{\mathbf{A}}^{\prime \prime }} = D\left\lbrack {\llbracket 0\rrbracket \left\langle {{h}_{1},\ldots }\right\rangle ,\ldots ,\llbracket 0\rrbracket \left\langle {{h}_{n},\ldots }\right\rangle }\right\rbrack \n\]\n\nfor some names \( {h}_{1},\ldots ,{h}_{n} \) .
10
46
Computer Science and Engineering
Computer science
Proposition 7.5. The Lie algebras \( \overline{gl}\left( \infty \right) \) and \( \widehat{gl}\left( \infty \right) \) with the above introduced degree are graded Lie algebras. The algebra \( \overline{gl}\left( \infty \right) \) is also graded as associative algebra. Proof. The Equation (7.26) shows the gradedness for \( \overline{gl}\left( \infty \right) \) . For \( \widehat{gl}\left( \infty \right) \) we only have to recall that the cocycle vanishes if \( r \neq - s \), see (7.19). Hence, only for \( r + s = 0 \) will there be a term coming with the central element \( t \), which has also degree zero. As usual in the graded case, the subspaces of degree zero constitute subalgebras \( \overline{gl}{\left( \infty \right) }_{0} \) and \( \widehat{gl}{\left( \infty \right) }_{0} \) . This graded decomposition is also valid for \( {gl}\left( \infty \right) \) . The elements there correspond to "diagonals" with only a finite number of entries. ## 7.1.2 Semi-infinite wedge representation for \( \widehat{gl}\left( \infty \right) \) Let \( {v}_{s} \in {\mathbb{C}}^{\mathbb{Z}} \) be the infinite sequence given by \( {v}_{s} = {\left( {\delta }_{i}^{s}\right) }_{i \in \mathbb{Z}} \) . Set \[ V = {\bigoplus }_{s \in \mathbb{Z}}\mathbb{C}{v}_{s} \tag{7.27} \] the vector space generated by those. By defining \[ {E}_{ij}{v}_{s} = {\delta }_{j}^{s}{v}_{i} \tag{7.28} \] we obtain an operation of \( \overline{gl}\left( \infty \right) \) on \( V \) which generalizes the operation matrix \( \times \) vector to infinite dimension. Indeed, each of the \( {A}_{r}\left( \mu \right) \) acts in a well-defined manner as \[ {A}_{r}\left( \mu \right) {v}_{s} = \mathop{\sum }\limits_{{i \in \mathbb{Z}}}{\mu }_{i}{E}_{i, i + r}{v}_{s} = {\mu }_{s - r}{v}_{s - r} \tag{7.29} \] Starting from \( V \) we will introduce the semi-infinite higher exterior power - also called semi-infinite wedge space. 
To achieve this goal we have to linearly order the basis elements \( \left\{ {{v}_{s} \mid s \in \mathbb{Z}}\right\} \) by increasing index \( s \) . Let \( H \) be the vector space freely generated by the formal expressions of the kind \[ \Phi = {v}_{{i}_{0}} \land {v}_{{i}_{1}} \land \cdots \land {v}_{{i}_{k}} \land {v}_{{i}_{k + 1}} \land {v}_{{i}_{k + 2}} \land \cdots , \tag{7.30} \] where the indices \( {i}_{j} \) are in strictly increasing order, i.e., \( {i}_{0} \lneqq {i}_{1} \lneqq {i}_{2} \lneqq \cdots \), and they are stabilizing with a certain index. This means that starting with an index \( k \) (which depends on the element \( \Phi \) ), we have \[ {i}_{k + s} = {i}_{k} + s,\;\forall s \in \mathbb{N}. \tag{7.31} \] In the following, forms which do not fulfill the strictly increasing condition will occur. The wedge symbol \( \land \) indicates how to deal with them. If two entries are the same the result will be the zero element. If two entries are not in the correct order we interchange them and change the sign. Moreover, the forms are linear in each entry. In detail, if \( u \) and \( w \) are finite neighboring pieces and \( \psi \) is an infinite neighboring piece, then \[ u \land {v}_{j} \land w \land {v}_{i} \land \psi \mathrel{\text{:=}} - u \land {v}_{i} \land w \land {v}_{j} \land \psi ,\;j > i \] \[ u \land {v}_{i} \land w \land {v}_{i} \land \psi \mathrel{\text{:=}} 0 \tag{7.32} \] \[ u \land \left( {\mathop{\sum }\limits_{{i = 1}}^{r}{c}_{i}{v}_{i}}\right) \land \psi \mathrel{\text{:=}} \mathop{\sum }\limits_{{i = 1}}^{r}{c}_{i}\left( {u \land {v}_{i} \land \psi }\right) . \] Next we want to extend the Lie action of \( {gl}\left( \infty \right) \) and \( \overline{gl}\left( \infty \right) \) on \( V \) to \( H \) by using the Leibniz rule. 
This means that if \( A \in \overline{gl}\left( \infty \right) \), then \( A \) should operate on each factor in \( \Phi \) separately and the results are added up, i.e., \[ A.\Phi \mathrel{\text{:=}} \left( {A.{v}_{{i}_{0}}}\right) \land {v}_{{i}_{1}} \land \ldots + {v}_{{i}_{0}} \land \left( {A.{v}_{{i}_{1}}}\right) \land \ldots \] \[ + \cdots + {v}_{{i}_{0}} \land {v}_{{i}_{1}} \land \ldots \land \left( {A.{v}_{{i}_{k}}}\right) \land {v}_{{i}_{k + 1}} \land \ldots + \ldots \tag{7.33} \] A priori we obtain by this definition an infinite number of summands. The (linear) action will be well-defined only if a finite number of nonvanishing summands appear. This has to be true for all \( \Phi \) . First we observe that \( {E}_{ij} \) . \( \Phi \) is always well-defined, as there can only be a contribution if the element \( {v}_{j} \) appears in \( \Phi \), which happens at most once. The result will be \( \Phi \) with \( {v}_{j} \) exchanged by \( {v}_{i} \) . Hence, \( {gl}\left( \infty \right) \) operates on \( H \) . Next we consider \( \overline{gl}\left( \infty \right) \) . For this we have to study the action of the generators \( {A}_{r}\left( \mu \right) \) .
8322_Krichever-Novikov Type Algebras Theory and Applications
Proposition 7.5. The Lie algebras \( \overline{gl}\left( \infty \right) \) and \( \widehat{gl}\left( \infty \right) \) with the above introduced degree are graded Lie algebras. The algebra \( \overline{gl}\left( \infty \right) \) is also graded as associative algebra.
Proof. The Equation (7.26) shows the gradedness for \( \overline{gl}\left( \infty \right) \) . For \( \widehat{gl}\left( \infty \right) \) we only have to recall that the cocycle vanishes if \( r \neq - s \), see (7.19). Hence, only for \( r + s = 0 \) will there be a term coming with the central element \( t \), which has also degree zero.
4
15
Geometry and Topology
Topological groups, Lie groups
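The action (7.29) of \( {A}_{r}\left( \mu \right) \) on the basis vectors \( {v}_{s} \) can be checked with a few lines of code. The sketch below is ours, not from the book: vectors are sparse dicts, `mu` is an arbitrary coefficient sequence, and `support` truncates the formally infinite sum over \( i \).

```python
# Vectors are represented as sparse dicts {index: coefficient}; v_s is {s: 1}.
def E(i, j, v):
    # matrix unit action (7.28): E_ij v_s = delta_j^s v_i
    return {i: v[j]} if j in v else {}

def A(r, mu, v, support):
    # A_r(mu) = sum_i mu_i E_{i, i+r}, applied to a sparse vector;
    # only finitely many i contribute, so truncating to `support` is harmless
    out = {}
    for i in support:
        for idx, coef in E(i, i + r, v).items():
            out[idx] = out.get(idx, 0) + mu(i) * coef
    return {k: c for k, c in out.items() if c != 0}

mu = lambda i: i * i + 1          # an arbitrary "diagonal" coefficient sequence
support = range(-20, 21)
for r in (-2, 0, 3):
    for s in (-5, 0, 7):
        v_s = {s: 1}
        # (7.29): A_r(mu) v_s = mu_{s-r} v_{s-r}
        assert A(r, mu, v_s, support) == {s - r: mu(s - r)}
```

Only the single index \( i = s - r \) contributes, which is exactly why each \( {A}_{r}\left( \mu \right) \) acts in a well-defined manner despite the infinite sum.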
Theorem 1.10. A finitistic ANRU \( X \) is dominated in uniform homotopy by a uniform polyhedron \( P \) . Moreover, if \( \Delta \mathrm{d}X \leq n \), then \( P \) can be chosen so that \( \dim P \leq n \) . Here \( \Delta \mathrm{d}X \) denotes the uniform dimension of \( X \) (see [3]). For those reasons, throughout the paper, unless otherwise stated, we assume all uniform spaces are finitistic and all polyhedra are finite-dimensional. Let Unif denote the category of finitistic uniform spaces and uniform maps, and let UPol and ANRU denote the full subcategories of Unif whose objects are uniform polyhedra and ANRU's, respectively. Also let HUnif denote the category of finitistic uniform spaces and uniform homotopy classes, and let HUPol and HANRU denote the full subcategories of HUnif whose objects are uniform spaces which have the uniform homotopy types of a uniform polyhedron and an ANRU, respectively. ## 2. Approximate systems For any uniform space \( X,\operatorname{Cov}\left( X\right) \) denotes the family of uniform coverings of \( X \) . 
An approximate system \( \mathbf{X} = \left( {{X}_{a},{\mathcal{U}}_{a},{p}_{a{a}^{\prime }}, A}\right) \) consists of a directed preordered set \( A = \left( {A, < }\right) \) with no maximal element, uniform spaces \( {X}_{a},{\mathcal{U}}_{a} \in \operatorname{Cov}\left( {X}_{a}\right) \) for \( a \in A \), and uniform maps \( {p}_{a{a}^{\prime }} : {X}_{{a}^{\prime }} \rightarrow {X}_{a} \) for \( a < {a}^{\prime } \) with the following three properties: (UA1) \( \left( {{p}_{a{a}^{\prime }}{p}_{{a}^{\prime }{a}^{\prime \prime }},{p}_{a{a}^{\prime \prime }}}\right) < {\mathcal{U}}_{a} \) for \( a < {a}^{\prime } < {a}^{\prime \prime } \) . (UA2) For each \( a \in A \) and \( \mathcal{U} \in \operatorname{Cov}\left( {X}_{a}\right) \) there exists \( {a}^{\prime } > a \) such that \( \left( {{p}_{a{a}_{1}}{p}_{{a}_{1}{a}_{2}},{p}_{a{a}_{2}}}\right) < \mathcal{U} \) for \( {a}^{\prime } < {a}_{1} < {a}_{2} \) . (UA3) For each \( a \in A \) and \( \mathcal{U} \in \operatorname{Cov}\left( {X}_{a}\right) \) there exists \( {a}^{\prime } > a \) such that \( {\mathcal{U}}_{{a}^{\prime \prime }} < {p}_{a{a}^{\prime \prime }}^{-1}\mathcal{U} \) for \( {a}^{\prime } < {a}^{\prime \prime } \) . We say an approximate system \( \mathbf{X} = \left( {{X}_{a},{\mathcal{U}}_{a},{p}_{a{a}^{\prime }}, A}\right) \) is commutative provided \( {p}_{a{a}^{\prime }}{p}_{{a}^{\prime }{a}^{\prime \prime }} = {p}_{a{a}^{\prime \prime }} \) for \( a < {a}^{\prime } < {a}^{\prime \prime } \) . Every approximate system \( \mathbf{X} = \left( {{X}_{a},{\mathcal{U}}_{a},{p}_{a{a}^{\prime }}, A}\right) \) admits an approximate system \( {\mathbf{X}}^{ * } = \left( {{X}_{a},{\mathcal{U}}_{a},{p}_{a{a}^{\prime }},{A}^{ * }}\right) \) with the following condition (proved as in \( \left\lbrack {{12},{1.6}}\right\rbrack \) ): \[ {\mathcal{U}}_{{a}^{\prime }} < {p}_{a{a}^{\prime }}^{-1}{\mathcal{U}}_{a}\text{ for }a < {a}^{\prime }. 
\] An approximate uniform map \( \mathbf{p} = \left( {{p}_{a} \mid a \in A}\right) : X \rightarrow X = \left( {{X}_{a},{\mathcal{U}}_{a},{p}_{a{a}^{\prime }}, A}\right) \) of a uniform space \( X \) to an approximate system consists of uniform maps \( {p}_{a} : X \rightarrow {X}_{a} \) for \( a \in A \) with the following property: (UAS) For each \( a \in A \) and \( \mathcal{U} \in \operatorname{Cov}\left( {X}_{a}\right) \), there exists \( {a}^{\prime } > a \) such that \( \left( {{p}_{a{a}^{\prime \prime }}{p}_{{a}^{\prime \prime }},{p}_{a}}\right) < \) \( \mathcal{U} \) for \( {a}^{\prime \prime } > {a}^{\prime } \) . An approximate uniform map \( \mathbf{p} = \left( {{p}_{a} \mid a \in A}\right) : X \rightarrow X = \left( {{X}_{a},{\mathcal{U}}_{a},{p}_{a{a}^{\prime }}, A}\right) \) is commutative if \( {p}_{a}{p}_{a{a}^{\prime }} = {p}_{{a}^{\prime }} \) for \( a < {a}^{\prime } \) . An approximate uniform map \( \mathbf{p} = \left( {{p}_{a} \mid a \in }\right. \) \( A) : X \rightarrow X = \left( {{X}_{a},{\mathcal{U}}_{a},{p}_{a{a}^{\prime }}, A}\right) \) is a limit of \( X \) provided the following condition is satisfied: (UL) For each approximate uniform map \( \mathbf{q} = \left( {{q}_{a} \mid a \in A}\right) : Y \rightarrow X \) of a uniform space \( Y \) there exists a unique uniform map \( g : Y \rightarrow X \) such that \( {p}_{a}g = {q}_{a} \) for each \( a \in A \) . A limit \( p : X \rightarrow X \) of \( X \) is unique up to a unique uniform isomorphism, and hence we write \( X = \lim X \) . 
A thread of an approximate system \( X = \left( {{X}_{a},{\mathcal{U}}_{a},{p}_{a{a}^{\prime }}, A}\right) \) is a point \( x = \left( {x}_{a}\right) \in \mathop{\prod }\limits_{{a \in A}}{X}_{a} \) with the following property: (UT) For each \( a \in A \) and \( \mathcal{U} \in \operatorname{Cov}\left( {X}_{a}\right) \) there exists \( {a}^{\prime } > a \) such that \( {p}_{a{a}^{\prime \prime }}\left( {x}_{{a}^{\prime \prime }}\right) \in \) \( \operatorname{st}\left( {{x}_{a},\mathcal{U}}\right) \) for \( {a}^{\prime \prime } > {a}^{\prime }. \) Here \( \mathop{\prod }\limits_{{a \in A}}{X}_{a} \) denotes the uniform product. The condition (UT) is equivalent to \( {\left( \mathrm{{UT}}\right) }^{ * } \) For each \( a \in A,\lim \left\{ {{p}_{a{a}^{\prime \prime }}\left( {x}_{{a}^{\prime \prime }}\right) \mid {a}^{\prime \prime } > a}\right\} = {x}_{a} \) . If \( {X}_{a} \) are compact metric spaces, then our definitions of approximate inverse systems and limits respectively coincide with those of Mardešić and Rubin [10]. Similarly to [12, 1.14] and \( \left\lbrack {{12},{1.16}}\right\rbrack \), we have the following two results:
23563_Topology and Its Applications 2001-06-29_ Vol 113 Iss 1-3
Theorem 1.10. A finitistic ANRU \( X \) is dominated in uniform homotopy by a uniform polyhedron \( P \) . Moreover, if \( \Delta \mathrm{d}X \leq n \), then \( P \) can be chosen so that \( \dim P \leq n \) . Here \( \Delta \mathrm{d}X \) denotes the uniform dimension of \( X \) (see [3]).
Null
4
20
Geometry and Topology
Algebraic topology
Corollary 2.11. For any invariant point selection \( \mathbf{X} \), the adjoint functors \( \mathcal{C} \) and \( \mathcal{S} \) restrict to an equivalence between the category XSO of X-sober spaces and the category XVS of X- \( \bigvee \) -spatial lattices. With any invariant point selection \( \mathbf{X} \), there is associated its dual \( {\mathbf{X}}^{d} \) defined by \( {\mathbf{X}}^{d}L = \mathbf{X}{L}^{d} \), which is invariant, too. We say \( L \) is \( \mathbf{X} \) - \( \bigwedge \) -spatial (or dually \( \mathbf{X} \) -based) if \( {L}^{d} \) is \( \mathbf{X} \) - \( \bigvee \) -spatial, in other words, if \( {\mathbf{X}}^{d}L \) is meet-dense in \( L \) . In the classical example, where \( \mathbf{X} \) selects the \( \vee \) -prime elements, the \( \mathbf{X} \) - \( \bigwedge \) -spatial lattices are just the spatial frames or locales (cf. [13,33,36]), whereas the \( \mathbf{X} \) - \( \vee \) -spatial lattices are the cospatial lattices or spatial coframes. We have now the following 12 categories of complete lattices: <table><thead><tr><th>Categories</th><th>Morphisms</th><th>Categories</th><th>Objects</th><th>Categories</th><th>Objects</th></tr></thead><tr><td>XV, VX</td><td>X- and \( \bigvee \) -pres. maps have \( \mathbf{X} \) -pres. coadjoint</td><td>XII, ax</td><td>X-conservative lattices</td><td>XVS, SVX</td><td>X- \( \bigvee \) -spatial lattices</td></tr><tr><td>XA, AX</td><td>\( {\mathbf{X}}^{d} \) - and \( \bigwedge \) -pres. maps have \( {\mathbf{X}}^{d} \) -pres. adjoint</td><td>XD, DX</td><td>dually X-conservative lattices</td><td>XAS, SAX</td><td>X- \( \bigwedge \) -spatial lattices</td></tr></table> Here, "dually X-conservative" means that the dual lattice is X-conservative, but not that the original lattice is \( {\mathbf{X}}^{d} \) -conservative! In the subsequent diagram, \( r \) means "reflective" and \( c \) "coreflective subcategory". 
Recall that \( \mathcal{U} \) (respectively \( \mathcal{L} \) ) sends a join- (meet-)preserving map to its upper (lower) adjoint, \( \mathcal{D} \) dualizes the objects, and \( \mathcal{T} \) has both effects. ![019184cf-8e95-7637-acf6-99b1a61d14a7_149_211910.jpg](images/019184cf-8e95-7637-acf6-99b1a61d14a7_149_211910.jpg)
23554_Topology and its Applications 2004-02-28_ Vol 137 Iss 1-3
Corollary 2.11. For any invariant point selection \( \mathbf{X} \), the adjoint functors \( \mathcal{C} \) and \( \mathcal{S} \) restrict to an equivalence between the category XSO of X-sober spaces and the category XVS of X- \( \bigvee \) -spatial lattices.
Null
0
1
Foundations and Logic
Category theory
Theorem 6.26. Let \( u,{u}^{\prime } \) be primitive elements of \( F\left( {a, b}\right) \), and set \( \bar{u} = {u}^{\pi } \) , \( \overline{{u}^{\prime }} = {u}^{\prime \pi } \) for the projections. Then \[ \bar{u} = \overline{{u}^{\prime }} \Leftrightarrow u,{u}^{\prime }\text{ are conjugate in }F\left( {a, b}\right) . \] Proof. If \( u = w{u}^{\prime }{w}^{-1} \), then \[ \bar{u} = {u}^{\pi } = {w}^{\pi }{u}^{\prime \pi }{\left( {w}^{\pi }\right) }^{-1} = {u}^{\prime \pi } = \overline{{u}^{\prime }}. \] Assume, conversely, \( \bar{u} = \overline{{u}^{\prime }} \), and let \( \{ u, v\} \) be a basis of \( F\left( {a, b}\right) \) . Since \( {u}^{\prime } \) is also primitive, there is a map \( \varphi \in \operatorname{Aut}F\left( {a, b}\right) \) with \( {u}^{\varphi } = {u}^{\prime } \) . We infer (recall that \( {\varphi \pi } = \pi \bar{\varphi } \) ) that \[ {\bar{u}}^{\bar{\varphi }} = {u}^{\pi \bar{\varphi }} = {u}^{\varphi \pi } = {u}^{\prime \pi } = \overline{{u}^{\prime }} = \bar{u}, \] \[ {\bar{v}}^{\bar{\varphi }} = {v}^{\pi \bar{\varphi }} = {v}^{\varphi \pi } = {\left( {v}^{\varphi }\right) }^{\pi } = {\bar{u}}^{\ell }{\bar{v}}^{k}, \tag{6.7} \] with \[ \det \left( \begin{array}{ll} 1 & 0 \\ \ell & k \end{array}\right) = \pm 1 \] and thus \( \ell \in \mathbb{Z}, k \in \{ 1, - 1\} \) . Let \( \psi \in \operatorname{Aut}F\left( {a, b}\right) \) be defined by \[ {u}^{\psi } = u,{v}^{\psi } = {u}^{-k\ell }{v}^{k}. 
\tag{6.8} \] Then we get with (6.7),(6.8), and \( {k}^{2} = 1 \) , \[ {\bar{u}}^{\bar{\psi }\bar{\varphi }} = {u}^{\pi \bar{\psi }\bar{\varphi }} = {u}^{{\psi \pi }\bar{\varphi }} = {u}^{\pi \bar{\varphi }} = {\bar{u}}^{\bar{\varphi }} = \bar{u}, \] \[ \overline{{v}^{\psi \varphi }} = {v}^{\pi \overline{\psi }\overline{\varphi }} = {v}^{{\psi \pi }\overline{\varphi }} = {\left( {u}^{-k\ell }{v}^{k}\right) }^{\pi \overline{\varphi }} = {\left( {\overline{u}}^{-k\ell }{\overline{v}}^{k}\right) }^{\overline{\varphi }} = \left( {\overline{u}}^{-k\ell }\right) \left( {{\overline{u}}^{k\ell }{\overline{v}}^{{k}^{2}}}\right) = \overline{v}. \] We conclude that \( \overline{\psi \varphi } = 1 \), and hence \( {\psi \varphi } \in \operatorname{Inn}F\left( {a, b}\right) \) by Theorem 6.24. So there is some \( w \in F\left( {a, b}\right) \) with \[ {x}^{\psi \varphi } = {wx}{w}^{-1}\text{ for all }x \in F\left( {a, b}\right) , \] and, in particular, \[ {u}^{\psi \varphi } = {u}^{\varphi } = {u}^{\prime } = {wu}{w}^{-1}. \] Thus \( u \) and \( {u}^{\prime } \) are conjugate elements.
27350_Markov_s theorem and 100 years of the uniqueness conjecture_ a mathematical journey from irrational
Theorem 6.26. Let \( u,{u}^{\prime } \) be primitive elements of \( F\left( {a, b}\right) \), and set \( \bar{u} = {u}^{\pi } \) , \( \overline{{u}^{\prime }} = {u}^{\prime \pi } \) for the projections. Then\n\n\[ \n\bar{u} = \overline{{u}^{\prime }} \Leftrightarrow u,{u}^{\prime }\text{ are conjugate in }F\left( {a, b}\right) .\n\]
Proof. If \( u = w{u}^{\prime }{w}^{-1} \), then\n\n\[ \n\bar{u} = {u}^{\pi } = {w}^{\pi }{u}^{\prime \pi }{\left( {w}^{\pi }\right) }^{-1} = {u}^{\prime \pi } = \overline{{u}^{\prime }}.\n\]\n\nAssume, conversely, \( \bar{u} = \overline{{u}^{\prime }} \), and let \( \{ u, v\} \) be a basis of \( F\left( {a, b}\right) \) . Since \( {u}^{\prime } \) is also primitive, there is a map \( \varphi \in \operatorname{Aut}F\left( {a, b}\right) \) with \( {u}^{\varphi } = {u}^{\prime } \) . We infer (recall that \( {\varphi \pi } = \pi \bar{\varphi } \) ) that\n\n\[ \n{\bar{u}}^{\bar{\varphi }} = {u}^{\pi \bar{\varphi }} = {u}^{\varphi \pi } = {u}^{\prime \pi } = \overline{{u}^{\prime }} = \bar{u},\n\]\n\n\[ \n{\bar{v}}^{\bar{\varphi }} = {v}^{\pi \bar{\varphi }} = {v}^{\varphi \pi } = {\left( {v}^{\varphi }\right) }^{\pi } = {\bar{u}}^{\ell }{\bar{v}}^{k}, \tag{6.7}\n\]\n\nwith\n\n\[ \n\det \left( \begin{array}{ll} 1 & 0 \\ \ell & k \end{array}\right) = \pm 1\n\]\n\nand thus \( \ell \in \mathbb{Z}, k \in \{ 1, - 1\} \) . Let \( \psi \in \operatorname{Aut}F\left( {a, b}\right) \) be defined by\n\n\[ \n{u}^{\psi } = u,{v}^{\psi } = {u}^{-k\ell }{v}^{k}. 
\tag{6.8}\n\]\n\nThen, using (6.7), (6.8), and \( {k}^{2} = 1 \), we get\n\n\[ \n{\bar{u}}^{\bar{\psi }\bar{\varphi }} = {u}^{\pi \bar{\psi }\bar{\varphi }} = {u}^{{\psi \pi }\bar{\varphi }} = {u}^{\pi \bar{\varphi }} = {\bar{u}}^{\bar{\varphi }} = \bar{u},\n\]\n\n\[ \n{\bar{v}}^{\bar{\psi }\bar{\varphi }} = {v}^{\pi \overline{\psi }\overline{\varphi }} = {v}^{{\psi \pi }\overline{\varphi }} = {\left( {u}^{-k\ell }{v}^{k}\right) }^{\pi \overline{\varphi }} = {\left( {\overline{u}}^{-k\ell }{\overline{v}}^{k}\right) }^{\overline{\varphi }} = \left( {\overline{u}}^{-k\ell }\right) \left( {{\overline{u}}^{k\ell }{\overline{v}}^{{k}^{2}}}\right) = \overline{v}.\n\]\n\nWe conclude that \( \overline{\psi \varphi } = 1 \), and hence \( {\psi \varphi } \in \operatorname{Inn}F\left( {a, b}\right) \) by Theorem 6.24. So there is some \( w \in F\left( {a, b}\right) \) with\n\n\[ \n{x}^{\psi \varphi } = {wx}{w}^{-1}\text{ for all }x \in F\left( {a, b}\right) ,\n\]\n\nand, in particular,\n\n\[ \n{u}^{\psi \varphi } = {u}^{\varphi } = {u}^{\prime } = {wu}{w}^{-1}.\n\]\n\nThus \( u \) and \( {u}^{\prime } \) are conjugate elements.
2
11
Algebra
Group theory and generalizations
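Theorem 6.26 reduces conjugacy testing for primitive elements of \( F(a, b) \) to comparing their images under the projection \( \pi \), i.e. their exponent-sum vectors in the free abelian group. A minimal computational sketch of that invariant (the word encoding and function names below are illustrative, not from the book):

```python
def abelianize(word):
    """Image of a word of F(a, b) under the projection pi: sum the exponents.

    A word is a list of (generator, exponent) pairs, e.g.
    a b a^-1 b  ->  [("a", 1), ("b", 1), ("a", -1), ("b", 1)].
    """
    sums = {"a": 0, "b": 0}
    for gen, exp in word:
        sums[gen] += exp
    return (sums["a"], sums["b"])

def conjugate(w, u):
    """Return w u w^-1 as a word; the w and w^-1 exponents cancel under pi."""
    w_inv = [(g, -e) for g, e in reversed(w)]
    return w + u + w_inv

u = [("a", 1), ("b", 1)]        # the primitive element ab
w = [("b", 2), ("a", -1)]       # an arbitrary conjugator
wuw = conjugate(w, u)           # w u w^-1, conjugate to u
```

One direction of the theorem is visible immediately: `abelianize(wuw) == abelianize(u)`. The converse (equal projections of *primitive* elements force conjugacy) is the substantial part proved above.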
Lemma 4.1. Let \( C \in {\mathcal{M}}_{d}\left( \mathbb{R}\right) \) be a symmetric positive definite matrix, let \( G \sim \) \( {\mathcal{N}}_{d}\left( {0, C}\right) \), and let \( \phi ,\psi : {\mathbb{R}}^{d} \rightarrow \mathbb{R} \) be two Lipschitz and \( {\mathcal{C}}^{1} \) functions. Then \[ \operatorname{Cov}\left( {\phi \left( G\right) ,\psi \left( G\right) }\right) = {\int }_{0}^{1}E{\left\langle \sqrt{C}\nabla \phi \left( {G}_{\alpha }\right) ,\sqrt{C}\nabla \psi \left( {H}_{\alpha }\right) \right\rangle }_{{\mathbb{R}}^{d}}{d\alpha }, \tag{4.7} \] where \[ \left( {{G}_{\alpha },{H}_{\alpha }}\right) \sim {\mathcal{N}}_{2d}\left( {0,\left( \begin{matrix} C & {\alpha C} \\ {\alpha C} & C \end{matrix}\right) }\right) ,\;0 \leq \alpha \leq 1. \] Proof. By bilinearity and approximation, it is enough to show (4.7) for \( \phi \left( x\right) = \) \( {e}^{i\langle t, x{\rangle }_{{\mathbb{R}}^{d}}} \) and \( \psi \left( x\right) = {e}^{i\langle s, x{\rangle }_{{\mathbb{R}}^{d}}} \) when \( s, t \in {\mathbb{R}}^{d} \) are given (and fixed once and for all). Set \[ {\varphi }_{\alpha }\left( {t, s}\right) = E\left\lbrack {e}^{i{\left\langle t,{G}_{\alpha }\right\rangle }_{{\mathbb{R}}^{d}} + i{\left\langle s,{H}_{\alpha }\right\rangle }_{{\mathbb{R}}^{d}}}\right\rbrack ,\;0 \leq \alpha \leq 1. \] We have \[ {\int }_{0}^{1}E{\left\langle \sqrt{C}\nabla \phi \left( {G}_{\alpha }\right) ,\sqrt{C}\nabla \psi \left( {H}_{\alpha }\right) \right\rangle }_{{\mathbb{R}}^{d}}{d\alpha } = - \langle \sqrt{C}t,\sqrt{C}s{\rangle }_{{\mathbb{R}}^{d}}{\int }_{0}^{1}{\varphi }_{\alpha }\left( {t, s}\right) {d\alpha }. \tag{4.8} \] Observe that \( {G}_{\alpha }\overset{\text{ law }}{ = }{H}_{\alpha }\overset{\text{ law }}{ = }G \sim {\mathcal{N}}_{d}\left( {0, C}\right) \), that \( {H}_{0} \) and \( {G}_{0} \) are independent and that \( {H}_{1} = {G}_{1} \) a.s.
Hence, \[ \operatorname{Cov}\left( {\phi \left( G\right) ,\psi \left( G\right) }\right) = {\varphi }_{1}\left( {t, s}\right) - {\varphi }_{0}\left( {t, s}\right) = {\int }_{0}^{1}\frac{\partial }{\partial \alpha }{\varphi }_{\alpha }\left( {t, s}\right) {d\alpha }. \] Since \[ \left( \begin{matrix} C & {\alpha C} \\ {\alpha C} & C \end{matrix}\right) = \alpha \left( \begin{array}{ll} C & C \\ C & C \end{array}\right) + \left( {1 - \alpha }\right) \left( \begin{array}{ll} C & 0 \\ 0 & C \end{array}\right) \] we have, with \( {\varphi }_{G}\left( t\right) = E\left\lbrack {e}^{i\langle t, G{\rangle }_{{\mathbb{R}}^{d}}}\right\rbrack \) , \[ {\varphi }_{\alpha }\left( {t, s}\right) = {\varphi }_{1}{\left( t, s\right) }^{\alpha }{\varphi }_{0}{\left( t, s\right) }^{1 - \alpha } = {\varphi }_{G}{\left( t + s\right) }^{\alpha }{\varphi }_{G}{\left( t\right) }^{1 - \alpha }{\varphi }_{G}{\left( s\right) }^{1 - \alpha }. \] Consequently, \[ \frac{\partial }{\partial \alpha }{\varphi }_{\alpha }\left( {t, s}\right) = \left( {\log {\varphi }_{G}\left( {t + s}\right) - \log {\varphi }_{G}\left( t\right) - \log {\varphi }_{G}\left( s\right) }\right) {\varphi }_{\alpha }\left( {t, s}\right) , \] implying in turn, since \( {\varphi }_{G}\left( t\right) = {e}^{-\frac{1}{2}\parallel \sqrt{C}t{\parallel }_{{\mathbb{R}}^{d}}^{2}} \) (see (1.1)), \[ {\varphi }_{1}\left( {t, s}\right) - {\varphi }_{0}\left( {t, s}\right) = - \langle \sqrt{C}t,\sqrt{C}s{\rangle }_{{\mathbb{R}}^{d}}{\int }_{0}^{1}{\varphi }_{\alpha }\left( {t, s}\right) {d\alpha }. \tag{4.9} \] The two right-hand sides in (4.8) and (4.9) being the same, the proof of Lemma 4.1 is complete. We are now able to prove Theorem 4.3. Proof of Theorem 4.3. Replacing \( \phi \) by \( \phi - E\left\lbrack {\phi \left( G\right) }\right\rbrack \) if necessary, we may assume that \( E\left\lbrack {\phi \left( G\right) }\right\rbrack = 0 \) without loss of generality.
By Lemma 4.1, we can write \[ E\left\lbrack {\phi \left( G\right) {e}^{{t\phi }\left( G\right) }}\right\rbrack = \operatorname{Cov}\left( {\phi \left( G\right) ,{e}^{{t\phi }\left( G\right) }}\right) \] \[ \leq t{\int }_{0}^{1}E\left\lbrack {{\begin{Vmatrix}\sqrt{C}\nabla \phi \left( {G}_{\alpha }\right) \end{Vmatrix}}_{{\mathbb{R}}^{d}}{\begin{Vmatrix}\sqrt{C}\nabla \phi \left( {H}_{\alpha }\right) \end{Vmatrix}}_{{\mathbb{R}}^{d}}{e}^{{t\phi }\left( {H}_{\alpha }\right) }}\right\rbrack {d\alpha } \] \[ \leq t{M}^{2}{\int }_{0}^{1}E\left\lbrack {e}^{{t\phi }\left( {H}_{\alpha }\right) }\right\rbrack {d\alpha } = t{M}^{2}E\left\lbrack {e}^{{t\phi }\left( G\right) }\right\rbrack , \] where, in the last equality, we used that \( {H}_{\alpha }\overset{\text{ law }}{ = }G \) for all \( \alpha \in \left\lbrack {0,1}\right\rbrack \) . Thus, \[ \frac{\partial }{\partial t}E\left\lbrack {e}^{{t\phi }\left( G\right) }\right\rbrack = E\left\lbrack {\phi \left( G\right) {e}^{{t\phi }\left( G\right) }}\right\rbrack \leq t{M}^{2}E\left\lbrack {e}^{{t\phi }\left( G\right) }\right\rbrack , \] so that, after integration, \[ E\left\lbrack {e}^{{t\phi }\left( G\right) }\right\rbrack \leq {e}^{\frac{{t}^{2}{M}^{2}}{2}},\;t > 0. \] Using Markov's inequality and then setting \( t = x/{M}^{2} \) (which is the optimal choice), we get, for any \( x > 0 \) , \[ P\left( {\phi \left( G\right) \geq x}\right) \leq {e}^{-{tx}}E\left\lbrack {e}^{{t\phi }\left( G\right) }\right\rbrack \leq {e}^{-{tx} + \frac{{t}^{2}{M}^{2}}{2}} \leq \exp \left( {-\frac{{x}^{2}}{2{M}^{2}}}\right) . \] By replacing \( \phi \) by \( - \phi \), we deduce \[ P\left( {\phi \left( G\right) \leq - x}\right) = P\left( {-\phi \left( G\right) \geq x}\right) \leq \exp \left( {-\frac{{x}^{2}}{2{M}^{2}}}\right) \] as well, which concludes the proof of the theorem.
As a corollary of Theorem 4.3, we get the following result.
27353_Selected Aspects of Fractional Brownian Motion
Lemma 4.1. Let \( C \in {\mathcal{M}}_{d}\left( \mathbb{R}\right) \) be a symmetric positive definite matrix, let \( G \sim \) \( {\mathcal{N}}_{d}\left( {0, C}\right) \), and let \( \phi ,\psi : {\mathbb{R}}^{d} \rightarrow \mathbb{R} \) be two Lipschitz and \( {\mathcal{C}}^{1} \) functions. Then\n\n\[ \n\operatorname{Cov}\left( {\phi \left( G\right) ,\psi \left( G\right) }\right) = {\int }_{0}^{1}E{\left\langle \sqrt{C}\nabla \phi \left( {G}_{\alpha }\right) ,\sqrt{C}\nabla \psi \left( {H}_{\alpha }\right) \right\rangle }_{{\mathbb{R}}^{d}}{d\alpha }, \tag{4.7} \n\] \n\nwhere \n\n\[ \n\left( {{G}_{\alpha },{H}_{\alpha }}\right) \sim {\mathcal{N}}_{2d}\left( {0,\left( \begin{matrix} C & {\alpha C} \\ {\alpha C} & C \end{matrix}\right) }\right) ,\;0 \leq \alpha \leq 1. \n\]
Proof. By bilinearity and approximation, it is enough to show (4.7) for \( \phi \left( x\right) = \) \( {e}^{i\langle t, x{\rangle }_{{\mathbb{R}}^{d}}} \) and \( \psi \left( x\right) = {e}^{i\langle s, x{\rangle }_{{\mathbb{R}}^{d}}} \) when \( s, t \in {\mathbb{R}}^{d} \) are given (and fixed once and for all). Set\n\n\[ \n{\varphi }_{\alpha }\left( {t, s}\right) = E\left\lbrack {e}^{i{\left\langle t,{G}_{\alpha }\right\rangle }_{{\mathbb{R}}^{d}} + i{\left\langle s,{H}_{\alpha }\right\rangle }_{{\mathbb{R}}^{d}}}\right\rbrack ,\;0 \leq \alpha \leq 1.\n\]\n\nWe have\n\n\[ \n{\int }_{0}^{1}E{\left\langle \sqrt{C}\nabla \phi \left( {G}_{\alpha }\right) ,\sqrt{C}\nabla \psi \left( {H}_{\alpha }\right) \right\rangle }_{{\mathbb{R}}^{d}}{d\alpha } = - \langle \sqrt{C}t,\sqrt{C}s{\rangle }_{{\mathbb{R}}^{d}}{\int }_{0}^{1}{\varphi }_{\alpha }\left( {t, s}\right) {d\alpha }. \tag{4.8} \n\] \n\nObserve that \( {G}_{\alpha }\overset{\text{ law }}{ = }{H}_{\alpha }\overset{\text{ law }}{ = }G \sim {\mathcal{N}}_{d}\left( {0, C}\right) \), that \( {H}_{0} \) and \( {G}_{0} \) are independent and that \( {H}_{1} = {G}_{1} \) a.s. Hence,\n\n\[ \n\operatorname{Cov}\left( {\phi \left( G\right) ,\psi \left( G\right) }\right) = {\varphi }_{1}\left( {t, s}\right) - {\varphi }_{0}\left( {t, s}\right) = {\int }_{0}^{1}\frac{\partial }{\partial \alpha }{\varphi }_{\alpha }\left( {t, s}\right) {d\alpha }. \n\] \n\nSince\n\n\[ \n\left( \begin{matrix} C & {\alpha C} \\ {\alpha C} & C \end{matrix}\right) = \alpha \left( \begin{array}{ll} C & C \\ C & C \end{array}\right) + \left( {1 - \alpha }\right) \left( \begin{array}{ll} C & 0 \\ 0 & C \end{array}\right) \n\] \n\nwe have, with \( {\varphi }_{G}\left( t\right) = E\left\lbrack {e}^{i\langle t, G{\rangle }_{{\mathbb{R}}^{d}}}\right\rbrack \) ,\n\n\[ \n{\varphi }_{\alpha }\left( {t, s}\right) = {\varphi }_{1}{\left( t, s\right) }^{\alpha }{\varphi }_{0}{\left( t, s\right) }^{1 - \alpha } = {\varphi }_{G}{\left( t + s\right) }^{\alpha }{\varphi }_{G}{\left( t\right) }^{1 - \alpha }{\varphi }_{G}{\left( s\right) }^{1 - \alpha }.
\n\] \n\nConsequently,\n\n\[ \n\frac{\partial }{\partial \alpha }{\varphi }_{\alpha }\left( {t, s}\right) = \left( {\log {\varphi }_{G}\left( {t + s}\right) - \log {\varphi }_{G}\left( t\right) - \log {\varphi }_{G}\left( s\right) }\right) {\varphi }_{\alpha }\left( {t, s}\right) , \n\] \n\nimplying in turn, since \( {\varphi }_{G}\left( t\right) = {e}^{-\frac{1}{2}\parallel \sqrt{C}t{\parallel }_{{\mathbb{R}}^{d}}^{2}} \) (see (1.1)),\n\n\[ \n{\varphi }_{1}\left( {t, s}\right) - {\varphi }_{0}\left( {t, s}\right) = - \langle \sqrt{C}t,\sqrt{C}s{\rangle }_{{\mathbb{R}}^{d}}{\int }_{0}^{1}{\varphi }_{\alpha }\left( {t, s}\right) {d\alpha }. \tag{4.9} \n\] \n\nThe two right-hand sides in (4.8) and (4.9) being the same, the proof of Lemma 4.1 is complete.
9
Unknown
Probability and Statistics
Unknown
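The key identity (4.9) can be checked numerically: writing \( {\varphi }_{G}\left( t\right) = {e}^{-\frac{1}{2}\parallel \sqrt{C}t{\parallel }^{2}} \), the proof shows \( {\varphi }_{1}\left( {t, s}\right) - {\varphi }_{0}\left( {t, s}\right) = - \langle \sqrt{C}t,\sqrt{C}s\rangle {\int }_{0}^{1}{\varphi }_{\alpha }\left( {t, s}\right) {d\alpha } \). A plain-Python sanity check with a toy \( 2 \times 2 \) covariance (all constants below are made up for illustration):

```python
import math

C = [[2.0, 0.6], [0.6, 1.0]]     # symmetric positive definite toy covariance
t = [1.0, -2.0]
s = [0.5, 1.0]

def quad(u, v):
    """u^T C v; note <sqrt(C) u, sqrt(C) v> = u^T C v for the symmetric sqrt(C)."""
    return sum(u[i] * C[i][j] * v[j] for i in range(2) for j in range(2))

def phi_G(u):
    """Characteristic function of N(0, C): exp(-u^T C u / 2)."""
    return math.exp(-0.5 * quad(u, u))

def phi_alpha(alpha):
    """phi_alpha(t, s) = phi_G(t+s)^alpha * (phi_G(t) phi_G(s))^(1-alpha)."""
    ts = [t[i] + s[i] for i in range(2)]
    return phi_G(ts) ** alpha * (phi_G(t) * phi_G(s)) ** (1.0 - alpha)

# Trapezoidal approximation of the alpha-integral in (4.9).
n = 2000
integral = sum(0.5 * (phi_alpha(k / n) + phi_alpha((k + 1) / n)) / n
               for k in range(n))

lhs = phi_alpha(1.0) - phi_alpha(0.0)   # phi_1 - phi_0
rhs = -quad(t, s) * integral            # -<sqrt(C) t, sqrt(C) s> * integral
```

Both sides agree to quadrature accuracy, mirroring the exact computation in the proof.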
Example 5.17 Consider again the Bayesian network in Figure 5.8a. Any of the cluster-trees in Figure 5.9 describes a partition of variables into clusters. We can now place each input function into a cluster that contains its scope, and verify that each is a legitimate tree decomposition. For example, Figure 5.9c shows a cluster-tree decomposition with two vertices, and labeling \( \chi \left( 1\right) = \) \( \{ G, F\} \) and \( \chi \left( 2\right) = \{ A, B, C, D, F\} \) . Any function with scope \( \{ G\} \) must be placed in vertex 1 because vertex 1 is the only vertex that contains variable \( G \) (placing a function having \( G \) in its scope in another vertex would force us to add variable \( G \) to that vertex as well). Any function with scope \( \{ A, B, C, D\} \) or one of its subsets must be placed in vertex 2, and any function with scope \( \{ F\} \) can be placed in either vertex 1 or vertex 2. Notice that the tree decomposition in Figure 5.9a is actually a bucket-tree. We see that for some nodes \( \operatorname{sep}\left( {u, v}\right) = \chi \left( u\right) \) . That is, all the variables in vertex \( u \) belong to an adjacent vertex \( v \) . In this case the number of clusters in the tree decomposition can be reduced by merging vertex \( u \) into \( v \) without increasing the cluster size in the tree decomposition. This is accomplished by moving from Figure 5.9a to Figure 5.9b. Definition 5.18 Minimal tree decomposition. A tree decomposition is minimal if \( \operatorname{sep}\left( {u, v}\right) \subsetneqq \) \( \chi \left( u\right) \) and \( \operatorname{sep}\left( {u, v}\right) \subsetneqq \chi \left( v\right) \) for each pair \( \left( {u, v}\right) \) of adjacent nodes. We can show the following.
16173_Reasoning with probabilistic and deterministic graphical models_ exact algorithms
Example 5.17 Consider again the Bayesian network in Figure 5.8a. Any of the cluster-trees in Figure 5.9 describes a partition of variables into clusters. We can now place each input function into a cluster that contains its scope, and verify that each is a legitimate tree decomposition. For example, Figure 5.9c shows a cluster-tree decomposition with two vertices, and labeling \( \chi \left( 1\right) = \) \( \{ G, F\} \) and \( \chi \left( 2\right) = \{ A, B, C, D, F\} \) . Any function with scope \( \{ G\} \) must be placed in vertex 1 because vertex 1 is the only vertex that contains variable \( G \) (placing a function having \( G \) in its scope in another vertex would force us to add variable \( G \) to that vertex as well). Any function with scope \( \{ A, B, C, D\} \) or one of its subsets must be placed in vertex 2, and any function with scope \( \{ F\} \) can be placed in either vertex 1 or vertex 2. Notice that the tree decomposition in Figure 5.9a is actually a bucket-tree.
Null
10
47
Computer Science and Engineering
Information and communication, circuits
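The minimality test of Definition 5.18 and the merge step taking Figure 5.9a to Figure 5.9b can be sketched directly; the three-cluster example below is illustrative and only loosely modeled on Figure 5.9, not the book's exact figures:

```python
def separator(chi, u, v):
    """sep(u, v) = chi(u) ∩ chi(v) for adjacent clusters u and v."""
    return chi[u] & chi[v]

def is_minimal(chi, edges):
    """Minimal: sep(u, v) is a *proper* subset of chi(u) and of chi(v)
    for every tree edge (u, v)."""
    return all(separator(chi, u, v) < chi[u] and separator(chi, u, v) < chi[v]
               for u, v in edges)

def merge(chi, edges, u, v):
    """Merge cluster u into its neighbor v.  When sep(u, v) == chi(u) we have
    chi(u) ⊆ chi(v), so no cluster grows; u's other neighbors reattach to v."""
    chi = dict(chi)
    chi[v] = chi[v] | chi.pop(u)
    edges = [(a if a != u else v, b if b != u else v)
             for a, b in edges if {a, b} != {u, v}]
    return chi, edges

chi = {1: {"G", "F"}, 2: {"A", "B", "C", "D", "F"}, 3: {"F"}}
edges = [(1, 3), (3, 2)]          # cluster 3 satisfies sep(3, 2) == chi(3)
chi2, edges2 = merge(chi, edges, 3, 2)
```

The original tree fails the minimality test at cluster 3; after the merge the decomposition is minimal and no cluster has grown.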
Theorem 9 [35,36] Suppose \( c \in {\mathbb{R}}^{\ell }\langle \langle X\rangle \rangle \) has coefficients which satisfy \[ \left| \left( {c,\eta }\right) \right| \leq {K}_{c}{M}_{c}^{\left| \eta \right| },\;\forall \eta \in {X}^{ * }. \] Then there exists a real number \( \widehat{R} > 0 \) such that for each \( \widehat{u} \in {B}_{\infty }^{m + 1}\left\lbrack 1\right\rbrack \left( \widehat{R}\right) \), the series (57) converges absolutely for any \( N \geq 1 \) . Proof Fix \( N \geq 1 \) . From the assumed coefficient bound and Lemma 3, it follows that \[ \left| {{\widehat{F}}_{c}\left( \widehat{u}\right) \left( N\right) }\right| \leq \mathop{\sum }\limits_{{j = 0}}^{\infty }\mathop{\sum }\limits_{{\eta \in {X}^{j}}}\left| \left( {c,\eta }\right) \right| \left| {{S}_{\eta }\left\lbrack \widehat{u}\right\rbrack \left( N\right) }\right| \leq \mathop{\sum }\limits_{{j = 0}}^{\infty }{K}_{c}{\left( {M}_{c}\left( m + 1\right) \right) }^{j}{2}^{N - 1}{\left( 2\widehat{R}\right) }^{j} \] \[ = \frac{{K}_{c}{2}^{N - 1}}{1 - 2{M}_{c}\left( {m + 1}\right) \widehat{R}}, \] provided \( \widehat{R} < 1/\left( {2{M}_{c}\left( {m + 1}\right) }\right) \) . The final convergence theorem shows that the restriction on the norm of \( \widehat{u} \) can be removed if an even more stringent growth condition is imposed on \( c \) .
1529_Discrete Mechanics_ Geometric Integration and Lie_Butcher Series_ DMGILBS_ Madrid_ May 2015
Theorem 9 [35,36] Suppose \( c \in {\mathbb{R}}^{\ell }\langle \langle X\rangle \rangle \) has coefficients which satisfy\n\n\[ \left| \left( {c,\eta }\right) \right| \leq {K}_{c}{M}_{c}^{\left| \eta \right| },\;\forall \eta \in {X}^{ * }.\]\n\nThen there exists a real number \( \widehat{R} > 0 \) such that for each \( \widehat{u} \in {B}_{\infty }^{m + 1}\left\lbrack 1\right\rbrack \left( \widehat{R}\right) \), the series (57) converges absolutely for any \( N \geq 1 \) .
Proof Fix \( N \geq 1 \) . From the assumed coefficient bound and Lemma 3, it follows that\n\n\[ \left| {{\widehat{F}}_{c}\left( \widehat{u}\right) \left( N\right) }\right| \leq \mathop{\sum }\limits_{{j = 0}}^{\infty }\mathop{\sum }\limits_{{\eta \in {X}^{j}}}\left| \left( {c,\eta }\right) \right| \left| {{S}_{\eta }\left\lbrack \widehat{u}\right\rbrack \left( N\right) }\right| \leq \mathop{\sum }\limits_{{j = 0}}^{\infty }{K}_{c}{\left( {M}_{c}\left( m + 1\right) \right) }^{j}{2}^{N - 1}{\left( 2\widehat{R}\right) }^{j}\]\n\n\[ = \frac{{K}_{c}{2}^{N - 1}}{1 - 2{M}_{c}\left( {m + 1}\right) \widehat{R}}, \]\n\nprovided \( \widehat{R} < 1/\left( {2{M}_{c}\left( {m + 1}\right) }\right) \) .
5
28
Analysis
Sequences, series, summability
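The estimate in the proof of Theorem 9 is just a geometric series: its partial sums approach the closed form \( {K}_{c}{2}^{N-1}/\left( {1 - 2{M}_{c}\left( {m + 1}\right) \widehat{R}}\right) \) whenever \( \widehat{R} \) is strictly inside the radius \( 1/\left( {2{M}_{c}\left( {m + 1}\right) }\right) \). A quick numerical check with made-up constants:

```python
K_c, M_c, m, N = 1.5, 2.0, 1, 3
R_hat = 0.9 / (2 * M_c * (m + 1))      # strictly inside the radius of convergence
ratio = 2 * M_c * (m + 1) * R_hat      # common ratio of the series: 0.9 < 1

# Partial sum of sum_j K_c * (M_c (m+1))^j * 2^(N-1) * (2 R_hat)^j.
partial = sum(K_c * (M_c * (m + 1)) ** j * 2 ** (N - 1) * (2 * R_hat) ** j
              for j in range(200))

closed_form = K_c * 2 ** (N - 1) / (1 - ratio)   # the bound in the proof
```

After 200 terms the remaining tail is of order \( {0.9}^{200} \) and the partial sum matches the closed form to within the test tolerance.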
Theorem 11.8 Let \( f \) be a self-map and let \( \left( {X, d}\right) \) be a complete metric space. If for each \( k \neq l \in X \) \[ {\int }_{0}^{d\left( {{fk},{fl}}\right) }\omega \left( s\right) {ds} \leq \gamma \left( {d\left( {k, l}\right) }\right) {\int }_{0}^{m\left( {k, l}\right) }\omega \left( s\right) {ds} \tag{11.1} \] and \[ m\left( {k, l}\right) = \max \left\{ {\frac{d\left( {k,{fk}}\right) \cdot d\left( {l,{fl}}\right) }{1 + d\left( {k, l}\right) }, d\left( {k, l}\right) }\right\} , \tag{11.2} \] where \( \omega \in \Psi \) and \( \gamma : {R}^{ + } \rightarrow \lbrack 0,1) \) is a function with \[ \mathop{\lim }\limits_{{\delta \rightarrow t > 0}}\sup \gamma \left( \delta \right) < 1, \tag{11.3} \] then \( f \) has a unique fixed point. Proof. Let \( s \in X \) be an arbitrary point of \( X \) . Now construct a sequence \( \left\{ {s}_{n}\right\} \) in \( X \) such that \( f{s}_{n} = {s}_{n + 1} \) . Step - 1: Claim \( \mathop{\lim }\limits_{{n \rightarrow \infty }}d\left( {{s}_{n},{s}_{n + 1}}\right) = 0 \) For all \( n \geq 1 \), apply (11.1) with \( k = {s}_{n - 1} \) and \( l = {s}_{n} \) .
From Equation (11.1), we get \[ {\int }_{0}^{d\left( {{s}_{n},{s}_{n + 1}}\right) }\omega \left( s\right) {ds} = {\int }_{0}^{d\left( {f{s}_{n - 1}, f{s}_{n}}\right) }\omega \left( s\right) {ds} \tag{11.4} \] \[ \leq \gamma \left( {d\left( {{s}_{n - 1},{s}_{n}}\right) }\right) {\int }_{0}^{m\left( {{s}_{n - 1},{s}_{n}}\right) }\omega \left( s\right) {ds} \] where \[ m\left( {{s}_{n - 1},{s}_{n}}\right) = \max \left\{ {\frac{d\left( {{s}_{n - 1}, f{s}_{n - 1}}\right) d\left( {{s}_{n}, f{s}_{n}}\right) }{1 + d\left( {{s}_{n - 1},{s}_{n}}\right) }, d\left( {{s}_{n - 1},{s}_{n}}\right) }\right\} \] \[ = \max \left\{ {\frac{d\left( {{s}_{n - 1},{s}_{n}}\right) d\left( {{s}_{n},{s}_{n + 1}}\right) }{1 + d\left( {{s}_{n - 1},{s}_{n}}\right) }, d\left( {{s}_{n - 1},{s}_{n}}\right) }\right\} \] Since \( d \) is a metric, for all \( n \) , \[ \frac{d\left( {{s}_{n - 1},{s}_{n}}\right) }{1 + d\left( {{s}_{n - 1},{s}_{n}}\right) } < 1\text{ implies that }\frac{d\left( {{s}_{n - 1},{s}_{n}}\right) d\left( {{s}_{n},{s}_{n + 1}}\right) }{1 + d\left( {{s}_{n - 1},{s}_{n}}\right) } < d\left( {{s}_{n},{s}_{n + 1}}\right) \] and hence \[ m\left( {{s}_{n - 1},{s}_{n}}\right) = \max \left\{ {d\left( {{s}_{n},{s}_{n + 1}}\right), d\left( {{s}_{n - 1},{s}_{n}}\right) }\right\} \tag{11.5} \] If we suppose that \( d\left( {{s}_{n},{s}_{n + 1}}\right) > d\left( {{s}_{n},{s}_{n - 1}}\right) \) then \[ m\left( {{s}_{n - 1},{s}_{n}}\right) = \max \left\{ {d\left( {{s}_{n},{s}_{n + 1}}\right), d\left( {{s}_{n - 1},{s}_{n}}\right) }\right\} = d\left( {{s}_{n},{s}_{n + 1}}\right) \] Combining (11.4) and (11.3) then gives \[ {\int }_{0}^{d\left( {{s}_{n},{s}_{n + 1}}\right) }\omega \left( s\right) {ds} \leq \gamma \left( {d\left( {{s}_{n - 1},{s}_{n}}\right) }\right) {\int }_{0}^{d\left( {{s}_{n},{s}_{n + 1}}\right) }\omega \left( s\right) {ds} < {\int }_{0}^{d\left( {{s}_{n},{s}_{n + 1}}\right) }\omega \left( s\right) {ds} \] This is not possible.
Therefore \( d\left( {{s}_{n},{s}_{n + 1}}\right) \leq d\left( {{s}_{n},{s}_{n - 1}}\right) \) and hence from (11.5), \( m\left( {{s}_{n - 1},{s}_{n}}\right) = d\left( {{s}_{n - 1},{s}_{n}}\right) \) . Therefore, from (11.4) \[ {\int }_{0}^{d\left( {{s}_{n},{s}_{n + 1}}\right) }\omega \left( s\right) {ds} \leq \gamma \left( {d\left( {{s}_{n - 1},{s}_{n}}\right) }\right) {\int }_{0}^{d\left( {{s}_{n - 1},{s}_{n}}\right) }\omega \left( s\right) {ds} \] \[ {\int }_{0}^{d\left( {{s}_{n},{s}_{n + 1}}\right) }\omega \left( s\right) {ds} \leq {\int }_{0}^{d\left( {{s}_{n - 1},{s}_{n}}\right) }\omega \left( s\right) {ds} \tag{11.6} \] Thus, we obtain a monotone decreasing sequence \( \left\{ {{\int }_{0}^{d\left( {{s}_{n},{s}_{n + 1}}\right) }\omega \left( s\right) {ds}}\right\} \) of nonnegative real numbers and so we can find some \( k \geq 0 \) such that \[ \mathop{\lim }\limits_{{n \rightarrow \infty }}{\int }_{0}^{d\left( {{s}_{n},{s}_{n + 1}}\right) }\omega \left( s\right) {ds} = k \tag{11.7} \] Suppose that \( k > 0 \) ; then, using (11.1) and (11.5), we have \[ 0 < {\int }_{0}^{k}\omega \left( s\right) {ds} = \mathop{\lim }\limits_{{n \rightarrow \infty }}\sup {\int }_{0}^{d\left( {{s}_{n},{s}_{n + 1}}\right) }\omega \left( s\right) {ds} \] \[ \leq \mathop{\lim }\limits_{{n \rightarrow \infty }}\sup \left( {\gamma \left( {d\left( {{s}_{n - 1},{s}_{n}}\right) }\right) {\int }_{0}^{m\left( {{s}_{n - 1},{s}_{n}}\right) }\omega \left( s\right) {ds}}\right) \] \[ \leq \mathop{\lim }\limits_{{n \rightarrow \infty }}\sup \gamma \left( {d\left( {{s}_{n - 1},{s}_{n}}\right) }\right) \mathop{\lim }\limits_{{n \rightarrow \infty }}\sup {\int }_{0}^{\max \left\{ {d\left( {{s}_{n},{s}_{n + 1}}\right), d\left( {{s}_{n - 1},{s}_{n}}\right) }\right\} }\omega \left( s\right) {ds} \] \[ \leq \left( {\mathop{\lim }\limits_{{\delta \rightarrow k}}\sup \gamma \left( \delta \right) }\right) {\int }_{0}^{k}\omega \left( s\right) {ds} < {\int }_{0}^{k}\omega \left( s\right) {ds} \] This is not
possible and so \[ \Rightarrow \mathop{\lim }\limits_{{n \rightarrow \infty }}{\int }_{0}^{d\left( {{s}_{n},{s}_{n + 1}}\right) }\omega \left( s\right) {ds} = 0 \] \[ \Rightarrow \mathop{\lim }\limits_{{n \rightarrow \infty }}d\left( {{s}_{n},{s}_{n + 1}}\right) = 0 \tag{11.8} \] Next, we claim that \( \left\{ {s}_{n}\right\} \) is a Cauchy sequence. Suppose it is not. That is, there exist an \( \varepsilon > 0 \) and two subsequences \( \left\{ {s}_{{n}_{k}}\right\} ,\left\{ {s}_{{m}_{k}}\right\} \) of \( \left\{ {s}_{n}\right\} \) subject to \( {m}_{k} > {n}_{k} \geq k, k > 0 \) and satisfying \[ d\left( {{s}_{{m}_{k}},{s}_{{n}_{k}}}\right) \geq \varepsilon \& d\left( {{s}_{{m}_{k - 1}},{s}_{{n}_{k}}}\right) < \varepsilon \tag{11.9} \] For all \( k = 0 \), we have, \[ {\int }_{0}^{\varepsilon }\ome
6842_Advances in Applied Mathematical Analysis and Applications
Theorem 11.8 Let \( f \) be a self-map and let \( \left( {X, d}\right) \) be a complete metric space. If for each \( k \neq l \in X \)\n\n\[{\int }_{0}^{d\left( {{fk},{fl}}\right) }\omega \left( s\right) {ds} \leq \gamma \left( {d\left( {k, l}\right) }\right) {\int }_{0}^{m\left( {k, l}\right) }\omega \left( s\right) {ds} \tag{11.1}\]\n\nand\n\n\[m\left( {k, l}\right) = \max \left\{ {\frac{d\left( {k,{fk}}\right) \cdot d\left( {l,{fl}}\right) }{1 + d\left( {k, l}\right) }, d\left( {k, l}\right) }\right\} , \tag{11.2}\]\n\nwhere \( \omega \in \Psi ,\gamma : {R}^{ + } \rightarrow \lbrack 0,1) \) is a function with\n\n\[\mathop{\lim }\limits_{{\delta \rightarrow t > 0}}\sup \gamma \left( \delta \right) < 1 \tag{11.3}\]\n\nThen \( f \) has a unique fixed point.
Proof. Let us consider \( s \in X \) be any arbitrary point in \( \mathrm{X} \) . Now construct a sequence \( \left\{ {s}_{n}\right\} \) in \( \mathrm{X} \) such that \( f{s}_{n} = {s}_{n + 1} \) .\n\nStep - 1: Claim \( \mathop{\lim }\limits_{{n \rightarrow \infty }}d\left( {{s}_{n},{s}_{n + 1}}\right) = 0 \)\n\nFor all \( n \geq 0 \), construct iteration on \( f \) by putting \( k = {s}_{n} \) and \( l = {s}_{n + 1} \) . From Equation (11.1), we get\n\n\[{\int }_{0}^{d\left( {{s}_{n},{s}_{n + 1}}\right) }\omega \left( s\right) {ds} = {\int }_{0}^{d\left( {f{s}_{n - 1}, f{s}_{n}}\right) }\omega \left( s\right) {ds} \tag{11.4}\]\n\n\[\leq \gamma \left( {d\left( {{s}_{n - 1},{s}_{n}}\right) }\right) {\int }_{0}^{m\left( {{s}_{n - 1},{s}_{n}}\right) }\omega \left( s\right) {ds}\]\n\nwhere\n\n\[m\left( {{s}_{n - 1},{s}_{n}}\right) = \max \left\{ {\frac{d\left( {{s}_{n - 1}, f{s}_{n - 1}}\right) d\left( {{s}_{n}, f{s}_{n}}\right) }{1 + d\left( {{s}_{n - 1},{s}_{n}}\right) }, d\left( {{s}_{n - 1},{s}_{n}}\right) }\right\}\]\n\n\[= \max \left\{ {\frac{d\left( {{s}_{n - 1},{s}_{n}}\right) d\left( {{s}_{n},{s}_{n + 1}}\right) }{1 + d\left( {{s}_{n - 1},{s}_{n}}\right) }, d\left( {{s}_{n - 1},{s}_{n}}\right) }\right\}\]\n\nSince \( \mathrm{d} \) is metric therefore for all \( \mathrm{n} \),\n\n\[\frac{d\left( {{s}_{n - 1},{s}_{n}}\right) }{1 + d\left( {{s}_{n - 1},{s}_{n}}\right) } < 1\text{ implies that }\frac{d\left( {{s}_{n - 1},{s}_{n}}\right) d\left( {{s}_{n},{s}_{n + 1}}\right) }{1 + d\left( {{s}_{n - 1},{s}_{n}}\right) } < d\left( {{s}_{n},{s}_{n + 1}}\right)\]\n\nand hence\n\n\[m\left( {{s}_{n - 1},{s}_{n}}\right) = \max \left\{ {d\left( {{s}_{n},{s}_{n + 1}}\right), d\left( {{s}_{n - 1},{s}_{n}}\right) }\right\} \tag{11.5}\]\n\nIf we suppose that \( d\left( {{s}_{n},{s}_{n + 1}}\right) > d\left( {{s}_{n},{s}_{n - 1}}\right) \) then\n\n\[m\left( {{s}_{n - 1},{s}_{n}}\right) = \max \left\{ {d\left( {{s}_{n},{s}_{n + 1}}\right), d\left( {{s}_{n - 1},{s}_{n}}\right) 
}\right\} = d\left( {{s}_{n},{s}_{n + 1}}\right)\]\n\nCombining (11.4) and (11.3) then gives\n\n\[{\int }_{0}^{d\left( {{s}_{n},{s}_{n + 1}}\right) }\omega \left( s\right) {ds} \leq \gamma \left( {d\left( {{s}_{n - 1},{s}_{n}}\right) }\right) {\int }_{0}^{d\left( {{s}_{n},{s}_{n + 1}}\right) }\omega \left( s\right) {ds} < {\int }_{0}^{d\left( {{s}_{n},{s}_{n + 1}}\right) }\omega \left( s\right) {ds}\]\n\nThis is not possible. Therefore \( d\left( {{s}_{n},{s}_{n + 1}}\right) \leq d\left( {{s}_{n},{s}_{n - 1}}\right) \) and hence from (11.5), \( m\left( {{s}_{n - 1},{s}_{n}}\right) = d\left( {{s}_{n - 1},{s}_{n}}\right) \) . Therefore, from (11.4)\n\n\[{\int }_{0}^{d\left( {{s}_{n},{s}_{n + 1}}\right) }\omega \left( s\right) {ds} \leq \gamma \left( {d\left( {{s}_{n - 1},{s}_{n}}\right) }\right) {\int }_{0}^{d\left( {{s}_{n - 1},{s}_{n}}\right) }\omega \left( s\right) {ds}\]\n\n\[{\int }_{0}^{d\left( {{s}_{n},{s}_{n + 1}}\right) }\omega \left( s\right) {ds} \leq {\int }_{0}^{d\left( {{s}_{n - 1},{s}_{n}}\right) }\omega \left( s\right) {ds} \tag{11.6}\]\n\nThus, we obtain a monotone decreasing sequence \( \left\{ {{\int }_{0}^{d\left( {{s}_{n},{s}_{n + 1}}\right) }\omega \left( s\right) {ds}}\right\} \) of nonnegative real numbers and so we
5
23
Analysis
Measure and integration
Exercise 5.16. Suppose that \( \mathbb{K} \) denotes any field with characteristic not equal to 2 or 3, and \( E : {y}^{2} = {x}^{3} + {ax} + b\left( {a, b \in \mathbb{K}}\right) \) . Assuming the binary operation defined before makes \( E\left( \mathbb{K}\right) \) into a group, prove that \( P = \left( {x, y}\right) \) has order 2 if and only if \( y = 0 \) .
6066_An Introduction to Number Theory
Exercise 5.16. Suppose that \( \mathbb{K} \) denotes any field with characteristic not equal to 2 or 3, and \( E : {y}^{2} = {x}^{3} + {ax} + b\left( {a, b \in \mathbb{K}}\right) \). Assuming the binary operation defined before makes \( E\left( \mathbb{K}\right) \) into a group, prove that \( P = \left( {x, y}\right) \) has order 2 if and only if \( y = 0 \).
Null
4
14
Geometry and Topology
Algebraic geometry
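The criterion of Exercise 5.16 is easy to verify computationally over a prime field of characteristic not 2 or 3: the tangent at \( P = \left( {x,0}\right) \) is vertical, so \( 2P = \mathcal{O} \), while \( 2P \neq \mathcal{O} \) when \( y \neq 0 \). A sketch over \( {\mathbb{F}}_{7} \) with the illustrative curve \( {y}^{2} = {x}^{3} + 1 \) (this curve and the helper names are my own choices, not from the text):

```python
p = 7                  # prime field F_p, characteristic not 2 or 3
a, b = 0, 1            # curve y^2 = x^3 + 1 over F_7; (3, 0) lies on it (27 + 1 = 28)
O = None               # point at infinity, the group identity

def on_curve(P):
    if P is O:
        return True
    x, y = P
    return (y * y - (x * x * x + a * x + b)) % p == 0

def add(P, Q):
    """Chord-and-tangent addition on E(F_p)."""
    if P is O:
        return Q
    if Q is O:
        return P
    x1, y1 = P
    x2, y2 = Q
    if x1 == x2 and (y1 + y2) % p == 0:
        return O       # vertical line: covers P + (-P) and doubling (x, 0)
    if P == Q:
        lam = (3 * x1 * x1 + a) * pow(2 * y1, p - 2, p) % p   # tangent slope
    else:
        lam = (y2 - y1) * pow(x2 - x1, p - 2, p) % p          # chord slope
    x3 = (lam * lam - x1 - x2) % p
    y3 = (lam * (x1 - x3) - y1) % p
    return (x3, y3)
```

Doubling \( \left( {3,0}\right) \) hits the vertical-line case, so it has order 2; doubling \( \left( {0,1}\right) \), whose \( y \)-coordinate is nonzero, does not return \( \mathcal{O} \).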
Example 5.8.12. To compute the variance of a standard normally distributed random variable \( X \), we compute \( {\int }_{-\infty }^{\infty }{x}^{2}{f}_{X}\left( x\right) {dx} = 1 \) . This can be done with integration by parts. Hence \( E\left( {X}^{2}\right) = 1 \), and since \( E\left( X\right) = 0 \), it follows that the variance of \( X \) is 1 . If \( X \) has a normal distribution with parameters \( \mu \) and \( {\sigma }^{2} \), we use the fact that \[ Y = \frac{X - \mu }{\sigma } \] has a standard normal distribution. Since \( X = {\sigma Y} + \mu \), this gives, using Exercise 5.8.10, that \( \operatorname{var}\left( X\right) = {\sigma }^{2}\operatorname{var}\left( Y\right) = {\sigma }^{2} \) . - Exercise 5.8.13. Compute the variance of an exponentially distributed random variable. - Exercise 5.8.14. Show that when \( X \) and \( Y \) are independent and have a joint density, then \[ \operatorname{var}\left( {X + Y}\right) = \operatorname{var}\left( X\right) + \operatorname{var}\left( Y\right) \]
23684_A Natural Introduction to Probability Theory_ Second Edition
Example 5.8.12. To compute the variance of a standard normally distributed random variable \( X \), we compute \( {\int }_{-\infty }^{\infty }{x}^{2}{f}_{X}\left( x\right) {dx} = 1 \) . This can be done with integration by parts. Hence \( E\left( {X}^{2}\right) = 1 \), and since \( E\left( X\right) = 0 \), it follows that the variance of \( X \) is 1 .
If \( X \) has a normal distribution with parameters \( \mu \) and \( {\sigma }^{2} \), we use the fact that \[ Y = \frac{X - \mu }{\sigma } \] has a standard normal distribution. Since \( X = {\sigma Y} + \mu \), this gives, using Exercise 5.8.10, that \( \operatorname{var}\left( X\right) = {\sigma }^{2}\operatorname{var}\left( Y\right) = {\sigma }^{2} \).
9
44
Probability and Statistics
Probability theory and stochastic processes
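Both computations in Example 5.8.12 — \( {\int }_{-\infty }^{\infty }{x}^{2}{f}_{X}\left( x\right) {dx} = 1 \) and \( \operatorname{var}\left( {{\sigma Y} + \mu }\right) = {\sigma }^{2} \) — can be confirmed numerically; the quadrature window \( \left\lbrack {-{10},{10}}\right\rbrack \) and step count below are arbitrary choices (the neglected tails are far below the tolerance):

```python
import math

def std_normal_pdf(x):
    """Density f_X of the standard normal distribution."""
    return math.exp(-x * x / 2) / math.sqrt(2 * math.pi)

def trapezoid(g, lo, hi, n):
    """Composite trapezoidal rule for the integral of g over [lo, hi]."""
    h = (hi - lo) / n
    return h * ((g(lo) + g(hi)) / 2 + sum(g(lo + k * h) for k in range(1, n)))

# E[X^2] = integral of x^2 f_X(x) dx = 1 for X ~ N(0, 1).
second_moment = trapezoid(lambda x: x * x * std_normal_pdf(x), -10.0, 10.0, 100000)

# var(sigma * Y + mu) = sigma^2 * var(Y): the shift mu drops out of the variance.
mu, sigma = 3.0, 2.0
var_X = sigma ** 2 * second_moment      # var(Y) = second_moment since E[Y] = 0
```

With \( \sigma = 2 \) this reproduces \( \operatorname{var}\left( X\right) = {\sigma }^{2} = 4 \) to quadrature accuracy.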
Corollary 8.3.7. Let \( R \) be a ring and let \( S \) be a multiplicatively closed subset of \( R \) . (i) If \( R \) is a directed union of Artinian subrings, then so is \( {S}^{-1}R \) . (ii) If \( \mathcal{D}\mathcal{U}\left( R\right) \neq \varnothing \), then \( \mathcal{D}\mathcal{U}\left( {{S}^{-1}R}\right) \neq \varnothing \) . (iii) We have \( \mathcal{D}\mathcal{U}\left( {R\left( X\right) }\right) \neq \varnothing \) if \( \mathcal{D}\mathcal{U}\left( R\right) \neq \varnothing \), where \( R\left( X\right) \) is the Nagata ring and \( X \) is an indeterminate over \( R \) . Proof. (i) Suppose that \( R = \mathop{\bigcup }\limits_{{\alpha \in A}}{R}_{\alpha } \) is a directed union of Artinian subrings and let \( {S}_{\alpha } = S \cap {R}_{\alpha } \), a multiplicatively closed subset of \( {R}_{\alpha } \) . It is not difficult to show that \( {S}^{-1}R = \mathop{\bigcup }\limits_{{\alpha \in A}}{S}_{\alpha }^{-1}{R}_{\alpha } \) . Since \( {R}_{\alpha } \) is Artinian, the localization \( {S}_{\alpha }^{-1}{R}_{\alpha } \) is also Artinian. The family \( {\left\{ {S}_{\alpha }^{-1}{R}_{\alpha }\right\} }_{\alpha \in A} \) is directed since \( {\left\{ {R}_{\alpha }\right\} }_{\alpha \in A} \) is. It follows that \( {S}^{-1}R = \mathop{\bigcup }\limits_{{\alpha \in A}}{S}_{\alpha }^{-1}{R}_{\alpha } \) is a directed union of Artinian subrings. (ii) Suppose that \( \mathcal{D}\mathcal{U}\left( R\right) \neq \varnothing \), let \( T = \mathop{\bigcup }\limits_{{j \in J}}{T}_{j} \in \mathcal{D}\mathcal{U}\left( R\right) \) and let \( U = S \cap T \), a multiplicatively closed subset of \( T \) . By (i), \( {U}^{-1}T \subseteq {S}^{-1}R \) is a directed union of Artinian subrings. It follows that \( \mathcal{D}\mathcal{U}\left( {{S}^{-1}R}\right) \neq \varnothing \) .
(iii) Note first that \( R\left( X\right) = {S}^{-1}R\left\lbrack X\right\rbrack \) . If \( R = \mathop{\bigcup }\limits_{{i \in I}}{R}_{i} \) is a directed union of Artinian subrings, then \( R\left( X\right) = \) \( \mathop{\bigcup }\limits_{{i \in I}}{R}_{i}\left( X\right) \) . Since each \( {R}_{i} \) is Noetherian, by \( \left\lbrack {{13},\left( {6.17}\right) }\right\rbrack ,{R}_{i}\left( X\right) \) is also Noetherian, and each \( {R}_{i}\left( X\right) \) is zero-dimensional as each \( {R}_{i} \) is zero-dimensional (cf. [2, Proposition 1.21]). By [4, Theorem 8.5], \( {R}_{i}\left( X\right) \) is an Artinian ring for each \( i \in I \) . The family \( {\left\{ {R}_{i}\left( X\right) \right\} }_{i \in I} \) is directed because the family \( {\left\{ {R}_{i}\right\} }_{i \in I} \) is. Then \( R\left( X\right) \) is a directed union of Artinian subrings, and hence \( \mathcal{D}\mathcal{U}\left( {R\left( X\right) }\right) \neq \varnothing \) if \( \mathcal{D}\mathcal{U}\left( R\right) \neq \varnothing \) . Now, we are in a position to answer question \( \left( {q}_{2}\right) \) for the infinite product of rings.
30537_Non-Associative and Non-Commutative Algebra and Operator Theory_ NANCAOT_ Dakar_ Senegal_ May 23_25_
Corollary 8.3.7. Let \( R \) be a ring and let \( S \) be a multiplicatively closed subset of \( R \). (i) If \( R \) is a directed union of Artinian subrings, then so is \( {S}^{-1}R \) .
Proof. (i) Suppose that \( R = \mathop{\bigcup }\limits_{{\alpha \in A}}{R}_{\alpha } \) is a directed union of Artinian subrings and let \( {S}_{\alpha } = S \cap {R}_{\alpha } \), a multiplicatively closed subset of \( {R}_{\alpha } \). It is not difficult to show that \( {S}^{-1}R = \mathop{\bigcup }\limits_{{\alpha \in A}}{S}_{\alpha }^{-1}{R}_{\alpha } \). Since \( {R}_{\alpha } \) is Artinian, the localization \( {S}_{\alpha }^{-1}{R}_{\alpha } \) is also Artinian. The family \( {\left\{ {S}_{\alpha }^{-1}{R}_{\alpha }\right\} }_{\alpha \in A} \) is directed since \( {\left\{ {R}_{\alpha }\right\} }_{\alpha \in A} \) is. It follows that \( {S}^{-1}R = \mathop{\bigcup }\limits_{{\alpha \in A}}{S}_{\alpha }^{-1}{R}_{\alpha } \) is a directed union of Artinian subrings.
2
6
Algebra
Commutative algebra
Theorem 11.12 (Final Value Theorem) If the Laplace transforms of \( f : \lbrack 0,\infty ) \rightarrow \) \( \mathbb{R} \) and of its derivative \( {f}^{\prime } \) exist and \( F\left( s\right) = \mathcal{L}f\left( t\right) \), then \[ \mathop{\lim }\limits_{{s \rightarrow 0}}{sF}\left( s\right) = \mathop{\lim }\limits_{{t \rightarrow \infty }}f\left( t\right) \] provided the two limits exist. Proof Since \[ {sF}\left( s\right) - f\left( 0\right) = \mathcal{L}\left\lbrack {{f}^{\prime }\left( t\right) }\right\rbrack = {\int }_{0}^{\infty }{e}^{-{st}}{f}^{\prime }\left( t\right) \mathrm{d}t \] we have \[ \mathop{\lim }\limits_{{s \rightarrow 0}}{sF}\left( s\right) - f\left( 0\right) = {\int }_{0}^{\infty }{f}^{\prime }\left( t\right) \mathrm{d}t = \mathop{\lim }\limits_{{t \rightarrow \infty }}f\left( t\right) - f\left( 0\right) \] and so \( \mathop{\lim }\limits_{{s \rightarrow 0}}{sF}\left( s\right) = \mathop{\lim }\limits_{{t \rightarrow \infty }}f\left( t\right) \) .
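As a numerical illustration (not part of the book's text), the theorem can be checked on an assumed transform pair: for \( f(t) = 1 - e^{-t} \) the Laplace transform is \( F(s) = 1/s - 1/(s+1) \), so both limits equal 1.

```python
import math

# Hypothetical example pair: f(t) = 1 - exp(-t) has transform
# F(s) = 1/s - 1/(s+1), hence sF(s) = 1 - s/(s+1).
def f(t):
    return 1.0 - math.exp(-t)

def sF(s):
    # s * F(s), simplified algebraically so s = 0 is approachable
    return 1.0 - s / (s + 1.0)

# sF(s) -> 1 as s -> 0, matching f(t) -> 1 as t -> infinity.
print(sF(1e-8), f(40.0))
```

Both printed values agree to many digits, as the Final Value Theorem predicts for this pair.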
21999_Fundamentals of Partial Differential Equations
Theorem 11.12 (Final Value Theorem) If the Laplace transforms of \( f : \lbrack 0,\infty ) \rightarrow \) \( \mathbb{R} \) and of its derivative \( {f}^{\prime } \) exist and \( F\left( s\right) = \mathcal{L}f\left( t\right) \), then\n\n\[ \mathop{\lim }\limits_{{s \rightarrow 0}}{sF}\left( s\right) = \mathop{\lim }\limits_{{t \rightarrow \infty }}f\left( t\right) \]\n\nprovided the two limits exist.
Proof Since\n\n\[ {sF}\left( s\right) - f\left( 0\right) = \mathcal{L}\left\lbrack {{f}^{\prime }\left( t\right) }\right\rbrack = {\int }_{0}^{\infty }{e}^{-{st}}{f}^{\prime }\left( t\right) \mathrm{d}t \]\n\nwe have\n\n\[ \mathop{\lim }\limits_{{s \rightarrow 0}}{sF}\left( s\right) - f\left( 0\right) = {\int }_{0}^{\infty }{f}^{\prime }\left( t\right) \mathrm{d}t = \mathop{\lim }\limits_{{t \rightarrow \infty }}f\left( t\right) - f\left( 0\right) \]\n\nand so \( \mathop{\lim }\limits_{{s \rightarrow 0}}{sF}\left( s\right) = \mathop{\lim }\limits_{{t \rightarrow \infty }}f\left( t\right) \) .
6
35
Differential Equations and Dynamical Systems
Ordinary differential equations
Theorem 5.2. Let \( \alpha \in \left( {0,1}\right), T > 0 \) and \( \Omega \) be a bounded domain in \( {\mathbb{R}}^{d} \) . Let \( {u}_{0} \in {L}_{\infty }\left( \Omega \right) \) and suppose that the assumptions (HA) and (Hf) are satisfied. Let \( u \in {W}_{\alpha } \) be a bounded weak solution of (16) in \( \left( {0, T}\right) \times \Omega \) . Then there holds for any \( Q \subset \left( {0, T}\right) \times \Omega \) separated from the parabolic boundary \( \left( {\{ 0\} \times \Omega }\right) \cup \left( {\left( {0, T}\right) \times \partial \Omega }\right) \) by a positive distance \( D \) , \[ {\left\lbrack u\right\rbrack }_{{C}^{\frac{\alpha \epsilon }{2},\epsilon }\left( \bar{Q}\right) } \leq C\left( {{\left| u\right| }_{{L}_{\infty }\left( {\left( {0, T}\right) \times \Omega }\right) } + {\left| {u}_{0}\right| }_{{L}_{\infty }\left( \Omega \right) } + {\left| f\right| }_{{L}_{r}\left( {\left( {0, T}\right) ;{L}_{q}\left( \Omega \right) }\right) }}\right) \] with positive constants \( \epsilon = \epsilon \left( {{\left| A\right| }_{\infty }, v,\alpha, r, q, d,\operatorname{diam}\Omega ,\mathop{\inf }\limits_{{\left( {\tau, z}\right) \in Q}}\tau }\right) \) and \( C = C\left( {{\left| A\right| }_{\infty }, v,\alpha ,}\right. \) \( r, q, d,\operatorname{diam}\Omega ,{\lambda }_{d + 1}\left( Q\right), D) \) . Theorem 5.2 gives an interior Hölder estimate for bounded weak solutions of (16) in terms of the data and the \( {L}_{\infty } \) -bound of the solution. It can be viewed as the time fractional analogue of the classical parabolic version \( \left( {\alpha = 1}\right) \) of the celebrated De Giorgi-Nash theorem on the Hölder continuity of weak solutions to elliptic equations in divergence form (De Giorgi [8], Nash [33]); see also [14] for the elliptic, and [25] as well as the seminal contribution by Moser [31] for the parabolic case. The proof of Theorem 5.2 is quite involved. 
It uses De Giorgi's technique and the method of nonlocal growth lemmas, which has been developed in [38] for integrodifferential operators like the fractional Laplacian. The fundamental identity is frequently used to derive various a priori estimates for \( u \) and certain logarithmic expressions involving \( u \) . The following result gives conditions on the data which are sufficient for Hölder continuity up to the parabolic boundary of \( \left( {0, T}\right) \times \Omega \) . It has been taken from [52].
20333_Handbook of Fractional Calculus with Applications_ Volume 2_ Fractional Differential Equations
Theorem 5.2. Let \( \alpha \in \left( {0,1}\right), T > 0 \) and \( \Omega \) be a bounded domain in \( {\mathbb{R}}^{d} \) . Let \( {u}_{0} \in {L}_{\infty }\left( \Omega \right) \) and suppose that the assumptions (HA) and (Hf) are satisfied. Let \( u \in {W}_{\alpha } \) be a bounded weak solution of (16) in \( \left( {0, T}\right) \times \Omega \) . Then there holds for any \( Q \subset \left( {0, T}\right) \times \Omega \) separated from the parabolic boundary \( \left( {\{ 0\} \times \Omega }\right) \cup \left( {\left( {0, T}\right) \times \partial \Omega }\right) \) by a positive distance \( D \) ,\n\n\[{\left\lbrack u\right\rbrack }_{{C}^{\frac{\alpha \epsilon }{2},\epsilon }\left( \bar{Q}\right) } \leq C\left( {{\left| u\right| }_{{L}_{\infty }\left( {\left( {0, T}\right) \times \Omega }\right) } + {\left| {u}_{0}\right| }_{{L}_{\infty }\left( \Omega \right) } + {\left| f\right| }_{{L}_{r}\left( {\left( {0, T}\right) ;{L}_{q}\left( \Omega \right) }\right) }}\right)\]\n\nwith positive constants \( \epsilon = \epsilon \left( {{\left| A\right| }_{\infty }, v,\alpha, r, q, d,\operatorname{diam}\Omega ,\mathop{\inf }\limits_{{\left( {\tau, z}\right) \in Q}}\tau }\right) \) and \( C = C\left( {{\left| A\right| }_{\infty }, v,\alpha ,}\right. \) \( r, q, d,\operatorname{diam}\Omega ,{\lambda }_{d + 1}\left( Q\right), D) \) .
Null
6
36
Differential Equations and Dynamical Systems
Partial differential equations
Example 3.20 For the random variable \( \widetilde{X} \), the range set and probability mass function are given as \[ {R}_{\widetilde{X}} = \{ a, b, c\} \] \[ p\left( {x = a}\right) = \frac{1}{4}\;p\left( {x = b}\right) = \frac{2}{4}\;p\left( {x = c}\right) = \frac{1}{4}. \] (a) Let \( {\widetilde{X}}_{1}^{N} \sim p\left( x\right) \) . Find a strongly typical sequence \( {x}_{1}^{N} \) for \( N = {20} \), and verify the inequality \[ {2}^{-N\left( {H\left( \widetilde{X}\right) + \epsilon }\right) } \leq p\left( {{x}_{1},{x}_{2},\ldots ,{x}_{N}}\right) \leq {2}^{-N\left( {H\left( \widetilde{X}\right) - \epsilon }\right) } \tag{3.44} \] for your sequence. (b) Find the total number of strongly typical sequences, and weakly typical sequences. ## Solution 3.20 (a) In a strongly typical sequence \( {x}_{1}^{20} \), the symbol ’ \( a \) ’ appears \( {20} \times \frac{1}{4} = 5 \) times, the symbol ’ \( b \) ’ appears \( {20} \times \frac{2}{4} = {10} \) times, and the symbol ’ \( c \) ’ appears \( {20} \times \frac{1}{4} = 5 \) times. Then, we can form a strongly typical sequence as \[ {x}_{1}^{20} = \left\lbrack {abababababbcbcbcbcbc}\right\rbrack . \tag{3.45} \] The entropy of the generic random variable \( \widetilde{X} \sim p\left( x\right) \) can be calculated as \[ H\left( \widetilde{X}\right) = {1.5}\text{.} \] Let’s choose \( \epsilon = {0.01} \); then the inequality (3.44) for the sequence found in (3.45) becomes \[ {2}^{-{20}\left( {{1.5} + {0.01}}\right) } \leq \underset{{\left( \frac{1}{4}\right) }^{5}{\left( \frac{2}{4}\right) }^{10}{\left( \frac{1}{4}\right) }^{5}}{\underbrace{p\left( {{x}_{1},{x}_{2},\ldots ,{x}_{N}}\right) }} \leq {2}^{-{20}\left( {{1.5} - {0.01}}\right) } \] leading to \[ {8.108} \times {10}^{-{10}} \leq {9.32} \times {10}^{-{10}} \leq {1.07} \times {10}^{-9}\sqrt{} \] which is a correct inequality.
(b) The total number of strongly typical sequences can be found using \[ \left( \begin{matrix} {20} \\ 5 \end{matrix}\right) \left( \begin{array}{l} {15} \\ {10} \end{array}\right) \left( \begin{array}{l} 5 \\ 5 \end{array}\right) \rightarrow {46558512} \] The total number of typical sequences can be calculated as \[ {2}^{{NH}\left( \widetilde{X}\right) } \rightarrow {2}^{30} \rightarrow {1.0737} \times {10}^{9}. \] Then, the total number of weakly typical sequences is \[ {1.0737} \times {10}^{9} - {46558512} = {1.0272} \times {10}^{9} \] from which we see that, most of the typical sequences fall into the category of weakly typical sequences.
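The arithmetic in parts (a) and (b) can be reproduced in a few lines (a sketch, using the pmf and counts stated above):

```python
import math

# pmf of Example 3.20
p = {'a': 0.25, 'b': 0.5, 'c': 0.25}
N = 20

# Entropy H(X) in bits.
H = -sum(q * math.log2(q) for q in p.values())  # = 1.5

# Probability of a strongly typical sequence with 5 a's, 10 b's, 5 c's.
prob = (0.25 ** 5) * (0.5 ** 10) * (0.25 ** 5)  # = 2^-30

# Bounds of inequality (3.44) with eps = 0.01.
eps = 0.01
lower = 2 ** (-N * (H + eps))
upper = 2 ** (-N * (H - eps))

# Number of strongly typical sequences: multinomial 20!/(5! 10! 5!).
strong = math.comb(20, 5) * math.comb(15, 10)  # = 46558512

# Total typical sequences 2^{NH}, and the weakly typical remainder.
total = 2 ** 30
weak = total - strong
```

The computed values match the text: `strong` is 46 558 512 and `weak` is about \( 1.0272 \times 10^9 \).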
3010_Information Theory for Electrical Engineers
Example 3.20 For the random variable \( \widetilde{X} \), the range set and probability mass function are given as\n\n\[ \n{R}_{\widetilde{X}} = \{ a, b, c\}\n\]\n\n\[ \np\left( {x = a}\right) = \frac{1}{4}\;p\left( {x = b}\right) = \frac{2}{4}\;p\left( {x = c}\right) = \frac{1}{4}.\n\]\n\n(a) Let \( {\widetilde{X}}_{1}^{N} \sim p\left( x\right) \) . Find a strongly typical sequence \( {x}_{1}^{N} \) for \( N = {20} \), and verify the inequality\n\n\[ \n{2}^{-N\left( {H\left( \widetilde{X}\right) + \epsilon }\right) } \leq p\left( {{x}_{1},{x}_{2},\ldots ,{x}_{N}}\right) \leq {2}^{-N\left( {H\left( \widetilde{X}\right) - \epsilon }\right) } \tag{3.44}\n\]\n\nfor your sequence.
## Solution 3.20\n\n(a) In a strongly typical sequence \( {x}_{1}^{20} \), the symbol ’ \( a \) ’ appears \( {20} \times \frac{1}{4} = 5 \), the symbol ’ \( b \) ’ appears \( {20} \times \frac{2}{4} = {10} \) times, and the symbol ’ \( c \) ’ appears \( {20} \times \frac{1}{4} = 5 \) times. Then, we can form a strongly typical sequence as\n\n\[ \n{x}_{1}^{20} = \left\lbrack {ababababababbcbcbcbcbc}\right\rbrack . \tag{3.45}\n\]\n\nThe entropy of the generic random variable \( \widetilde{X} \sim p\left( x\right) \) can be calculated as\n\n\[ \nH\left( \widetilde{X}\right) = {1.5}.\n\]\n\nLet’s choose \( \epsilon = {0.01} \), then the inequality (3.44) for the found sequence in (3.45) happens to be as\n\n\[ \n{2}^{-{20}\left( {{1.5} + {0.01}}\right) } \leq \underset{{\left( \frac{1}{4}\right) }^{5}{\left( \frac{2}{4}\right) }^{10}{\left( \frac{1}{4}\right) }^{5}}{\underbrace{p\left( {{x}_{1},{x}_{2},\ldots ,{x}_{N}}\right) }} \leq {2}^{-{20}\left( {{1.5} - {0.01}}\right) }\n\]\n\nleading to\n\n\[ \n{8.108} \times {10}^{-{10}} \leq {9.32} \times {10}^{-{10}} \leq {1.07} \times {10}^{-9}\sqrt{}\n\]\n\nwhich is a correct inequality.
Unknown
Unknown
Unknown
Unknown
Lemma 4.73. Let \( w \in C\left( {\left\lbrack {0, T}\right\rbrack \;;\;{H}_{\mathrm{{per}}}^{s}}\right) \cap {C}^{1}\left( {\left\lbrack {0, T}\right\rbrack \;;\;{H}_{\mathrm{{per}}}^{s - 1}}\right) \; \) satisfy \( \;{\partial }_{t}^{2}w\left( t\right) = \) \( {c}^{2}{\partial }_{x}^{2}w\left( t\right) \) . Define \[ {E}_{s}\left( t\right) = {E}_{s}\left( {t;w}\right) = {\begin{Vmatrix}\frac{1}{c}{\partial }_{t}w\left( t\right) \end{Vmatrix}}_{s - 2}^{2} + {\begin{Vmatrix}{\partial }_{x}w\left( t\right) \end{Vmatrix}}_{s - 2}^{2}. \tag{4.137} \] Then \[ {\partial }_{t}{E}_{s}\left( t\right) = 0\forall t \in \left\lbrack {0, T}\right\rbrack , \tag{4.138} \] so that \[ {E}_{s}\left( t\right) = {E}_{s}\left( 0\right) \;\forall t \in \left\lbrack {0, T}\right\rbrack . \tag{4.139} \] + This could have been done in Section 4.2, but the presence of the term \( {iq}\left( D\right) \) would lead to a system of two real linear equations. Proof. The conditions on \( w \) imply that \( {\partial }_{t}^{2}w\left( t\right) = {c}^{2}{\partial }_{x}^{2}w\left( t\right) \in C(\left\lbrack {0, T}\right\rbrack \) ; \( \left. {H}_{\text{per }}^{s - 2}\right) \) . A simple computation leads to \[ {\partial }_{t}{E}_{s}\left( t\right) = \mathop{\lim }\limits_{{h \rightarrow 0}}\left\lbrack {{h}^{-1}\left( {{E}_{s}\left( {t + h}\right) - {E}_{s}\left( t\right) }\right) }\right\rbrack \] \[ = 2{\left( \frac{1}{c}{\partial }_{t}u\left( t\right) \left| \;\frac{1}{c}{\partial }_{t}^{2}u\left( t\right) \right. \right) }_{s - 2} + 2{\left( {\partial }_{x}u\left( t\right) |{\partial }_{t}{\partial }_{x}u\left( t\right) \right) }_{s - 2}. \tag{4.140} \] Since \[ {\left( \frac{1}{c}{\partial }_{t}u\left( t\right) \left| \;\frac{1}{c}{\partial }_{t}^{2}u\left( t\right) \right. \right) }_{s - 2} = {\left( {\partial }_{t}u\left( t\right) \left| \;{\partial }_{x}^{2}u\left( t\right) \right. 
\right) }_{s - 2} \tag{4.141} \] and \[ {\left( {\partial }_{x}u\left( t\right) \mid {\partial }_{t}{\partial }_{x}u\left( t\right) \right) }_{s - 2} = \mathop{\lim }\limits_{{h \rightarrow 0}}{\left( {\partial }_{x}u\left( t\right) \left| \;\frac{{\partial }_{x}u\left( {t + h}\right) - {\partial }_{x}u\left( t\right) }{h}\right. \right) }_{s - 2} \] \[ = - \mathop{\lim }\limits_{{h \rightarrow 0}}{\left( {\partial }_{x}^{2}u\left( t\right) \left| \;\frac{u\left( {t + h}\right) - u\left( t\right) }{h}\right. \right) }_{s - 2} \] \[ = - {\left( {\partial }_{x}^{2}u\left( t\right) \mid {\partial }_{t}u\left( t\right) \right) }_{s - 2}, \tag{4.142} \] the result follows because \( u \) is real valued. COROLLARY 4.74. There exists at most one solution of (4.136) (and therefore of 4.135). Proof. If \( u \) and \( \widetilde{u} \) are solutions of (4.136), let \( w = u - \widetilde{u} \) . Then \( w\left( 0\right) = 0 \) and \( w \) satisfies the conditions of the previous lemma. Therefore, \[ {\partial }_{t}w\left( t\right) = {\partial }_{x}w\left( t\right) = 0\forall t \in \left\lbrack {0, T}\right\rbrack . \] It follows that \( w \) must be independent of \( t \), that is, \( w\left( t\right) = w\left( 0\right) = 0 \) for all \( t \in \left\lbrack {0, T}\right\rbrack \) . This finishes the proof. The solution itself can be obtained using the Fourier transform. Writing \( \widehat{u}\left( {t, k}\right) = \left( \widehat{u\left( t\right) }\right) \left( k\right) \) and \( \widehat{f}\left( {t, k}\right) = \left( \widehat{f\left( t\right) }\right) \left( k\right) \), as usual, we obtain \[ \widehat{u} \in C\left( {\left\lbrack {0, T}\right\rbrack ;{\ell }_{s}^{2}}\right) \] \[ {\partial }_{t}^{2}\widehat{u}\left( {t, k}\right) + {c}^{2}{k}^{2}\widehat{u}\left( {t, k}\right) = \widehat{F}\left( {t, k}\right) , \] \[ \widehat{u}\left( 0\right) = \widehat{\phi } \in {\ell }_{s}^{2},\;{\partial }_{t}\widehat{u}\left( 0\right) = \widehat{\psi } \in {\ell }_{s - 1}^{2}. 
\tag{4.143} \] This is an uncoupled system of second order linear ODEs, each one of which is provided with the appropriate pair of initial conditions ([37], [45], [72], [106]). Its solution is given by \[ \widehat{u}\left( {t, k}\right) = {\widehat{u}}_{h}\left( {t, k}\right) + {\widehat{u}}_{p}\left( {t, k}\right) \tag{4.144} \] where \( {\widehat{u}}_{h}\left( {t, k}\right) \) is the general solution of the associated homogeneous equation given by \[ {\widehat{u}}_{h}\left( {t, k}\right) = \left\{ \begin{array}{l} \left( {\cos \left( {c\left| k\right| t}\right) }\right) \widehat{\phi }\left( k\right) + \frac{\sin \left( {c\left| k\right| t}\right) }{c\left| k\right| }\widehat{\psi }\left( k\right) ,\text{ if }k \neq 0, \\ \widehat{\phi }\left( 0\right) + t\widehat{\psi }\left( 0\right) ,\text{ if }k = 0, \end{array}\right. \tag{4.145} \] and \( {\widehat{u}}_{p}\left( {t, k}\right) \) is a particular solution of the nonhomogeneous equations, obtained using the method of variation of parameters (see [37], [45], [72], or [106] for example), \[ {\widehat{u}}_{p}\left( {t, k}\right) = \left\{ \begin{array}{l} {\int }_{0}^{t}\frac{\sin \left( {c\left| k\right| \left( {t - {t}^{\prime }}\right) }\right) }{c\left| k\right| }\widehat{F}\left( {{t}^{\prime }, k}\right) d{t}^{\prime },\text{ if }k \neq 0, \\ {\int }_{0}^{t}\left( {t - {t}^{\prime }}\right) \widehat{F}\left( {{t}^{\prime },0}\right) d{t}^{\prime },\text{ if }k = 0. \end{array}\right. \tag{4.146} \] Thus, the solution must be given by \[ u\left( t\right) = \widehat{\phi }\left( 0\right) + t\widehat{\psi }\left( 0\right) + {\int }_{0}^{t}\left( {t - {t}^{\prime }}\right) \widehat{F}\left( {{t}^{\prime },0}\right) d{t}^{\prime } \] \[ + \mathop{\sum }\limits_{{k \neq 0}}\left( {{\widehat{u}}_{h}\left( {t, k}\right) + {\widehat{u}}_{p}\left( {t, k}\right) }\right) \exp \left( {ikx}\right) . 
\tag{4.147} \] To rewrite it in a more concise form, it is convenient to introduce the following functions of the operator \( D = \frac{1}{i}{\partial }_{x} \) . Let \( f \in {\mathcal{P}}^{\prime } \)
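As a numerical sketch (not from the book), one can verify that the homogeneous mode (4.145) solves the mode equation from (4.143), \( \ddot{u} + c^2 k^2 u = 0 \), with the stated initial data. The values of \( c, k \) and of the coefficients \( \widehat{\phi}(k), \widehat{\psi}(k) \) below are assumed sample numbers.

```python
import math

# Assumed sample parameters for one Fourier mode (k != 0).
c, k = 2.0, 3.0
phi_k, psi_k = 0.7, -1.3

def u_h(t):
    # homogeneous mode from (4.145), k != 0 branch
    w = c * abs(k)
    return math.cos(w * t) * phi_k + math.sin(w * t) / w * psi_k

def second_diff(g, t, h=1e-4):
    # centered finite-difference approximation of g''(t)
    return (g(t + h) - 2.0 * g(t) + g(t - h)) / (h * h)

# Residual of u'' + (c k)^2 u = 0 at a sample time, and the initial data.
t = 0.37
residual = second_diff(u_h, t) + (c * k) ** 2 * u_h(t)
dudt0 = (u_h(1e-6) - u_h(-1e-6)) / 2e-6  # approximates u'(0)
```

The residual is at the level of the finite-difference error, and the initial conditions \( u(0) = \widehat{\phi}(k) \), \( u'(0) = \widehat{\psi}(k) \) are recovered.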
13050_Fourier Analysis and Partial Differential Equations
Lemma 4.73. Let \( w \in C\left( {\left\lbrack {0, T}\right\rbrack \;;\;{H}_{\mathrm{{per}}}^{s}}\right) \cap {C}^{1}\left( {\left\lbrack {0, T}\right\rbrack \;;\;{H}_{\mathrm{{per}}}^{s - 1}}\right) \; \) satisfy \( \;{\partial }_{t}^{2}w\left( t\right) = \) \( {c}^{2}{\partial }_{x}^{2}w\left( t\right) \) . Define\n\n\[ \n{E}_{s}\left( t\right) = {E}_{s}\left( {t;w}\right) = {\begin{Vmatrix}\frac{1}{c}{\partial }_{t}w\left( t\right) \end{Vmatrix}}_{s - 2}^{2} + {\begin{Vmatrix}{\partial }_{x}w\left( t\right) \end{Vmatrix}}_{s - 2}^{2}. \tag{4.137} \n\]\n\nThen\n\n\[ \n{\partial }_{t}{E}_{s}\left( t\right) = 0\forall t \in \left\lbrack {0, T}\right\rbrack , \tag{4.138} \n\]\n\nso that\n\n\[ \n{E}_{s}\left( t\right) = {E}_{s}\left( 0\right) \;\forall t \in \left\lbrack {0, T}\right\rbrack . \tag{4.139} \n\]
Proof. The conditions on \( w \) imply that \( {\partial }_{t}^{2}w\left( t\right) = {c}^{2}{\partial }_{x}^{2}w\left( t\right) \in C(\left\lbrack {0, T}\right\rbrack \) ; \( \left. {H}_{\text{per }}^{s - 2}\right) \) . A simple computation leads to\n\n\[ \n{\partial }_{t}{E}_{s}\left( t\right) = \mathop{\lim }\limits_{{h \rightarrow 0}}\left\lbrack {{h}^{-1}\left( {{E}_{s}\left( {t + h}\right) - {E}_{s}\left( t\right) }\right) }\right\rbrack \n\]\n\n\[ \n= 2{\left( \frac{1}{c}{\partial }_{t}u\left( t\right) \left| \;\frac{1}{c}{\partial }_{t}^{2}u\left( t\right) \right. \right) }_{s - 2} + 2{\left( {\partial }_{x}u\left( t\right) |{\partial }_{t}{\partial }_{x}u\left( t\right) \right) }_{s - 2}. \tag{4.140} \n\]\n\nSince\n\n\[ \n{\left( \frac{1}{c}{\partial }_{t}u\left( t\right) \left| \;\frac{1}{c}{\partial }_{t}^{2}u\left( t\right) \right. \right) }_{s - 2} = {\left( {\partial }_{t}u\left( t\right) \left| \;{\partial }_{x}^{2}u\left( t\right) \right. \right) }_{s - 2} \tag{4.141} \n\]\n\nand\n\n\[ \n{\left( {\partial }_{x}u\left( t\right) \mid {\partial }_{t}{\partial }_{x}u\left( t\right) \right) }_{s - 2} = \mathop{\lim }\limits_{{h \rightarrow 0}}{\left( {\partial }_{x}u\left( t\right) \left| \;\frac{{\partial }_{x}u\left( {t + h}\right) - {\partial }_{x}u\left( t\right) }{h}\right. \right) }_{s - 2} \n\]\n\n\[ \n= - \mathop{\lim }\limits_{{h \rightarrow 0}}{\left( {\partial }_{x}^{2}u\left( t\right) \left| \;\frac{u\left( {t + h}\right) - u\left( t\right) }{h}\right. \right) }_{s - 2} \n\]\n\n\[ \n= - {\left( {\partial }_{x}^{2}u\left( t\right) \mid {\partial }_{t}u\left( t\right) \right) }_{s - 2}, \tag{4.142} \n\]\n\nthe result follows because \( u \) is real valued.
6
36
Differential Equations and Dynamical Systems
Partial differential equations
Theorem 6.2 If \( \lambda \in \Lambda \), then any solution of \( J{u}^{\prime } + {qu} = {\lambda wu} \) is identically equal to 0 . Proof Since \( \lambda \in \Lambda \) there is a point \( {x}_{0} \) for which \( {B}_{ \pm }\left( {{x}_{0},\lambda }\right) \) are not invertible. Of course \( {x}_{0} + \omega \) is then also such a point. First we show that \( \operatorname{ran}{B}_{ + }\left( {{x}_{0},\lambda }\right) \) and \( \operatorname{ran}{B}_{ - }\left( {{x}_{0},\lambda }\right) \) intersect only trivially. Let us abbreviate \( {B}_{ \pm }\left( {{x}_{0},\lambda }\right) \) simply by \( {B}_{ \pm } \) . Neither \( {B}_{ + } \) nor \( {B}_{ - } \) is 0 . Hence there are two vectors \( {v}_{ + } \) and \( {v}_{ - } \) spanning, respectively, their kernels. Moreover, \( 0 \neq {2J}{v}_{ + } = {B}_{ - }{v}_{ + } \) and \( 0 \neq {2J}{v}_{ - } = {B}_{ + }{v}_{ - } \) . Assuming now, by way of contradiction and without loss of generality that \( {B}_{ - }{v}_{ + } = \) \( {B}_{ + }{v}_{ - } \) we get \( J{v}_{ + } = J{v}_{ - } \) which contradicts the fact that \( J \) is injective and shows that \( \operatorname{ran}{B}_{ + } \cap \operatorname{ran}{B}_{ - } = \{ 0\} \) . If \( u \) solves the equation \( J{u}^{\prime } + {qu} = {\lambda wu} \) we must have \( {B}_{ + }\left( {{x}_{0},\lambda }\right) {u}^{ + }\left( {x}_{0}\right) = \) \( {B}_{ - }\left( {{x}_{0},\lambda }\right) {u}^{ - }\left( {x}_{0}\right) \) and hence, by the above, that \( {B}_{ \pm }\left( {{x}_{0},\lambda }\right) {u}^{ \pm }\left( {x}_{0}\right) = 0 \) . By the same argument we get \( {B}_{ \pm }\left( {{x}_{0} + \omega ,\lambda }\right) {u}^{ \pm }\left( {{x}_{0} + \omega }\right) = 0 \) . It follows now that \( v = \) \( {\left( u{\chi }_{\left( {x}_{0},{x}_{0} + \omega \right) }\right) }^{\# } \) is also a solution of \( J{u}^{\prime } + {qu} = {\lambda wu} \), in fact a solution of finite norm. 
If the norm were positive we would have a complex eigenvalue of \( {T}_{\min } \) which is impossible. Therefore \( {wv} = 0 \) so that \( v \) also solves \( J{u}^{\prime } + {qu} = {\lambda wu} \) for all \( \lambda \in \mathbb{C} \) including \( \lambda = 0 \) . However, for \( \lambda = 0 \) solutions of initial value problems are unique. Hence \( {\mathcal{L}}_{0} \) is trivial and \( u \) is identically equal to 0 .
25784_From Complex Analysis to Operator Theory_ A Panorama_ In Memory of Sergey Naboko _Operator Theory_ A
Theorem 6.2 If \( \lambda \in \Lambda \), then any solution of \( J{u}^{\prime } + {qu} = {\lambda wu} \) is identically equal to 0 .
Proof Since \( \lambda \in \Lambda \) there is a point \( {x}_{0} \) for which \( {B}_{ \pm }\left( {{x}_{0},\lambda }\right) \) are not invertible. Of course \( {x}_{0} + \omega \) is then also such a point. First we show that \( \operatorname{ran}{B}_{ + }\left( {{x}_{0},\lambda }\right) \) and \( \operatorname{ran}{B}_{ - }\left( {{x}_{0},\lambda }\right) \) intersect only trivially. Let us abbreviate \( {B}_{ \pm }\left( {{x}_{0},\lambda }\right) \) simply by \( {B}_{ \pm } \) . Neither \( {B}_{ + } \) nor \( {B}_{ - } \) is 0 . Hence there are two vectors \( {v}_{ + } \) and \( {v}_{ - } \) spanning, respectively, their kernels. Moreover, \( 0 \neq {2J}{v}_{ + } = {B}_{ - }{v}_{ + } \) and \( 0 \neq {2J}{v}_{ - } = {B}_{ + }{v}_{ - } \) . Assuming now, by way of contradiction and without loss of generality that \( {B}_{ - }{v}_{ + } = \) \( {B}_{ + }{v}_{ - } \) we get \( J{v}_{ + } = J{v}_{ - } \) which contradicts the fact that \( J \) is injective and shows that \( \operatorname{ran}{B}_{ + } \cap \operatorname{ran}{B}_{ - } = \{ 0\} \) . If \( u \) solves the equation \( J{u}^{\prime } + {qu} = {\lambda wu} \) we must have \( {B}_{ + }\left( {{x}_{0},\lambda }\right) {u}^{ + }\left( {x}_{0}\right) = \) \( {B}_{ - }\left( {{x}_{0},\lambda }\right) {u}^{ - }\left( {x}_{0}\right) \) and hence, by the above, that \( {B}_{ \pm }\left( {{x}_{0},\lambda }\right) {u}^{ \pm }\left( {x}_{0}\right) = 0 \) . By the same argument we get \( {B}_{ \pm }\left( {{x}_{0} + \omega ,\lambda }\right) {u}^{ \pm }\left( {{x}_{0} + \omega }\right) = 0 \) . It follows now that \( v = \) \( {\left( u{\chi }_{\left( {x}_{0},{x}_{0} + \omega \right) }\right) }^{\# } \) is also a solution of \( J{u}^{\prime } + {qu} = {\lambda wu} \), in fact a solution of finite norm. If the norm were positive we would have a complex eigenvalue of \( {T}_{\min } \) which is impossible. 
Therefore \( {wv} = 0 \) so that \( v \) also solves \( J{u}^{\prime } + {qu} = {\lambda wu} \) for all \( \lambda \in \mathbb{C} \) including \( \lambda = 0 \) . However, for \( \lambda = 0 \) solutions of initial value problems are unique. Hence \( {\mathcal{L}}_{0} \) is trivial and \( u \) is identically equal to 0 .
6
35
Differential Equations and Dynamical Systems
Ordinary differential equations
Lemma 9.5.29. The following statements are true. (b1) Equation (9.5.89) has solutions of prime period 2 if and only if \( \alpha = 1 \) . (b2) Suppose \( \alpha = 1 \) . Let \( \{ x\left( k\right) {\} }_{k = - 1}^{\infty } \) solve (9.5.89). Then \( \{ x\left( k\right) {\} }_{k = - 1}^{\infty } \) is periodic with period 2 if and only if \( x\left( {-1}\right) \neq 1 \) and \( x\left( 0\right) = x\left( {-1}\right) /\left\lbrack {x\left( {-1}\right) - 1}\right\rbrack \) .
26736_Discrete oscillation theory
Lemma 9.5.29. The following statements are true.\n\n(b1) Equation (9.5.89) has solutions of prime period 2 if and only if \( \alpha = 1 \) .
Null
6
37
Differential Equations and Dynamical Systems
Dynamical systems and ergodic theory
Theorem 3.2. Let \( \left( {\mu }_{n}\right) \) be a sequence of probability measures on \( \mathbb{Z} \) and for \( f : \mathbb{Z} \rightarrow \mathbb{R} \) define the maximal operator \[ \left( {Mf}\right) \left( x\right) = \mathop{\sup }\limits_{n}\left| {\left( {{\mu }_{n}f}\right) \left( x\right) }\right|, x \in \mathbb{Z}. \] We assume (*) (Regularity of coefficients). There is \( 0 < \alpha \leq 1 \) and \( C > 0 \) such that, for each \( n \geq 1 \) , \[ \left| {{\mu }_{n}\left( {x + y}\right) - {\mu }_{n}\left( x\right) }\right| \leq C\frac{{\left| y\right| }^{\alpha }}{{\left| x\right| }^{1 + \alpha }}\text{ for }x, y \in \mathbb{Z},0 < 2\left| y\right| \leq \left| x\right| . \] Then the maximal operator \( M \) is weak-type \( \left( {1,1}\right) \) ; i.e., there is a \( {C}^{\prime } > 0 \) such that for any \( \lambda > 0 \) \[ \left| \left\{ {x \in \mathbb{Z} : \left( {Mf}\right) \left( x\right) > \lambda }\right\} \right| \leq \frac{{C}^{\prime }}{\lambda }\parallel f{\parallel }_{1}\text{ for all }f \in {\ell }^{1} = {\ell }^{1}\left( \mathbb{Z}\right) . \] ## Comments: 1. We proved Theorem 3.2 several years ago, unaware of the existence of Zo's paper (see [2], also [1], 292-295). It turns out that Theorem 3.2 above is a special case of Zo's Theorem. In fact, it is enough to notice that, for \( 0 < 2\left| y\right| \leq \left| x\right| \) , \[ \varphi \left( {x, y}\right) = \mathop{\sup }\limits_{n}\left| {{\mu }_{n}\left( {x + y}\right) - {\mu }_{n}\left( x\right) }\right| \leq C\frac{{\left| y\right| }^{\alpha }}{{\left| x\right| }^{1 + \alpha }} \] and that \[ {\int }_{\{ \left| x\right| \geq 2\left| y\right| \} }C\frac{{\left| y\right| }^{\alpha }}{{\left| x\right| }^{1 + \alpha }}{dx} \leq C\left( \alpha \right) < \infty \] independently of \( y \in \mathbf{R} - \{ 0\} \) . Thus the basic assumption in Zo’s Theorem is satisfied. 
It should be noted also that our proof of Theorem 3.2 and the proof of Zo's Theorem are very similar: they are direct applications of the Calderón-Zygmund decomposition. 2. The maximal operator \( M \) in Theorems 3.1 and 3.2 is obviously strong type \( \left( {\infty ,\infty }\right) \) and hence, by the Marcinkiewicz Interpolation Theorem, also strong type \( \left( {p, p}\right) \), for \( 1 < p < \infty \) . For the sake of completeness we reproduce below our proof. Proof. We start with \( f \in {\ell }_{ + }^{1} \) and \( \lambda > 0 \) . Apply the Calderón-Zygmund decomposition to \( f \) and \( \rho = \frac{\lambda }{4} \) . We get dyadic intervals \( \left( {Q}_{i}\right) \), which are pairwise disjoint, such that \[ \rho < \frac{1}{\left| {Q}_{i}\right| }{\int }_{{Q}_{i}}f\left( y\right) {dy} \leq {2\rho }\text{ for all }i \] and \[ 0 \leq f\left( x\right) \leq \rho \text{ for }x \notin { \cup }_{i}{Q}_{i} \] Set \[ {f}_{1}\left( x\right) = \left\{ \begin{matrix} \frac{1}{\left| {Q}_{i}\right| }{\int }_{{Q}_{i}}f\left( y\right) {dy} & \text{ for }x \in {Q}_{i} \\ f\left( x\right) & \text{ for }x \notin { \cup }_{i}{Q}_{i} \end{matrix}\right. \] and \[ {f}_{2} = f - {f}_{1} \] We have \[ \begin{array}{l} \text{ (1) }\;\left\{ \begin{matrix} \rho \leq {f}_{1}\left( x\right) \leq {2\rho } & \text{ for }x \in { \cup }_{i}{Q}_{i} \\ 0 \leq {f}_{1}\left( x\right) \leq \rho & \text{ for }x \notin { \cup }_{i}{Q}_{i}, \end{matrix}\right. \\ \text{ (2) }\;\left\{ \begin{matrix} \frac{1}{\left| {Q}_{i}\right| }{\int }_{{Q}_{i}}{f}_{2}\left( y\right) {dy} = 0 & \text{ for each }i \\ {f}_{2}\left( x\right) = 0 & \text{ for }x \notin { \cup }_{i}{Q}_{i}, \end{matrix}\right. 
\end{array} \] \( \left( 3\right) \;\left| {{ \cup }_{i}{Q}_{i}}\right| \rho \leq {\int }_{{ \cup }_{i}{Q}_{i}}f\left( y\right) \;{dy} \leq \parallel f{\parallel }_{1}, \) (4) \( {\begin{Vmatrix}{f}_{1}\end{Vmatrix}}_{1} \leq \parallel f{\parallel }_{1} \) and \( {\begin{Vmatrix}{f}_{2}\end{Vmatrix}}_{1} \leq 2\parallel f{\parallel }_{1} \) . Since \( 0 \leq {f}_{1} \leq {2\rho } = \frac{\lambda }{2} \), we have \( M{f}_{1} \leq \frac{\lambda }{2} \) . Also since \( f = {f}_{1} + {f}_{2} \) and \( M \) is sublinear we have \[ \{ {Mf} > \lambda \} \subseteq \left\{ {M{f}_{1} > \frac{\lambda }{2}}\right\} \cup \left\{ {M{f}_{2} > \frac{\lambda }{2}}\right\} = \left\{ {M{f}_{2} > \frac{\lambda }{2}}\right\} . \] Thus we only need to worry about \( {f}_{2} \) . Consider the intervals \( {Q}_{i}^{ * } \) obtained by dilating \( {Q}_{i} \) by a factor of 5 (but with the same center) and note that by (3) (5) \[ \left| {{ \cup }_{i}{Q}_{i}^{ * }}\right| \leq \mathop{\sum }\limits_{i}\left| {Q}_{i}^{ * }\right| \leq \mathop{\sum }\limits_{i}5\left| {Q}_{i}\right| = 5\left| {{ \cup }_{i}{Q}_{i}}\right| \leq \frac{5}{\rho }\parallel f{\parallel }_{1} = \frac{20}{\lambda }\parallel f{\parallel }_{1}. \] Thus when checking the weak-type inequality for \( {f}_{2} \) it is enough to consider the complement of \( { \cup }_{i}{Q}_{i}^{ * } \) . Fix \( x \in S = {\left( { \cup }_{i}{Q}_{i}^{ * }\right) }^{c} \), and choose \( {y}_{i} \in {Q}_{i} \) for each \( i \) .
We have, using (2), for each \( n \) , \[ \left( {{\mu }_{n} * {f}_{2}}\right) \left( x\right) \; = \;\mathop{\sum }\limits_{i}{\int }_{{Q}_{i}}{\mu }_{n}\left( {x - y}\right) {f}_{2}\left( y\right) \;{dy} \] \[ = \mathop{\sum }\limits_{i}{\int }_{{Q}_{i}}\left\lbrack {{\mu }_{n}\left( {x - y}\right) {f}_{2}\left( y\right) - {\mu }_{n}\left( {x - {y}_{i}}\right) {f}_{2}\left( y\right) }\right\rbrack {dy} \] \[ = \mathop{\sum }\limits_{i}{\int }_{{Q}_{i}}\left\lbrack {{\mu }_{n}\left( {x - y}\right) - {\mu }_{n}\left( {x - {y}_{i}}\right) }\right\rbrack {f}_{2}\left( y\right) {dy}. \] But by assumption \( \left( *\right) \) (Regularity of coefficients), since \( \left| {{y}_{i} - y}\right| \leq \left| {Q}_{i}\right| \) and \( \left| {x - {y}_{i}}\right| \geq 2\left| {Q}_{i}\right| \), we have \[ \left| {{\mu }_{n}\left( {x - y}\right) - {\mu }_{n}\left( {x - {y}_{i}}\right) }\right| \leq C\fra
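A discrete sketch of the maximal operator of Theorem 3.2 can be tested numerically. Here \( \mu_n \) is taken to be the uniform probability measure on \( \{-n,\dots,n\} \) (an assumed concrete family; it satisfies a regularity bound of the type (*)), and the weak-type \( (1,1) \) inequality is checked for one finitely supported \( f \) on a finite window. The constant in the assertion is illustrative, not the sharp \( C' \).

```python
# Maximal operator Mf(x) = sup_n |(mu_n * f)(x)| for mu_n uniform on [-n, n].
def Mf(f, x, nmax=50):
    best = 0.0
    for n in range(1, nmax + 1):
        avg = sum(f.get(x - y, 0.0) for y in range(-n, n + 1)) / (2 * n + 1)
        best = max(best, abs(avg))
    return best

f = {0: 5.0, 3: 2.0, -7: 1.0}            # finitely supported f on Z
l1 = sum(abs(v) for v in f.values())      # ||f||_1 = 8

# Size of the superlevel set {Mf > lambda} on a window containing it.
lam = 0.5
bad = sum(1 for x in range(-200, 201) if Mf(f, x) > lam)
```

Since any window of length \( 2n+1 \) carries at most the total mass \( \lVert f\rVert_1 \), the superlevel set is finite and its size is controlled by \( \lVert f\rVert_1/\lambda \), as the theorem asserts.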
18335_Harmonic Analysis and Partial Differential Equations_ Essays in Honor of Alberto P_ Calderon _Chicag
Theorem 3.2. Let \( \left( {\mu }_{n}\right) \) be a sequence of probability measures on \( \mathbb{Z} \) and for \( f : \mathbb{Z} \rightarrow \mathbb{R} \) define the maximal operator\n\n\[ \n\left( {Mf}\right) \left( x\right) = \mathop{\sup }\limits_{n}\left| {\left( {{\mu }_{n}f}\right) \left( x\right) }\right|, x \in \mathbb{Z}. \n\]\n\nWe assume\n\n(*) (Regularity of coefficients). There is \( 0 < \alpha \leq 1 \) and \( C > 0 \) such that, for each \( n \geq 1 \) ,\n\n\[ \n\left| {{\mu }_{n}\left( {x + y}\right) - {\mu }_{n}\left( x\right) }\right| \leq C\frac{{\left| y\right| }^{\alpha }}{{\left| x\right| }^{1 + \alpha }}\text{ for }x, y \in \mathbb{Z},0 < 2\left| y\right| \leq \left| x\right| . \n\]\n\nThen the maximal operator \( M \) is weak-type \( \left( {1,1}\right) \) ; i.e., there is a \( {C}^{\prime } > 0 \) such that for any \( \lambda > 0 \)\n\n\[ \n\left| \left\{ {x \in \mathbb{Z} : \left( {Mf}\right) \left( x\right) > \lambda }\right\} \right| \leq \frac{{C}^{\prime }}{\lambda }\parallel f{\parallel }_{1}\text{ for all }f \in {\ell }^{1} = {\ell }^{1}\left( \mathbb{Z}\right) . \n\]
Proof. We start with \( f \in {\ell }_{ + }^{1} \) and \( \lambda > 0 \). Apply the Calderón-Zygmund decomposition to \( f \) and \( \rho = \frac{\lambda }{4} \). We get dyadic intervals \( \left( {Q}_{i}\right) \), which are pairwise disjoint, such that

\[ 
\rho < \frac{1}{\left| {Q}_{i}\right| }{\int }_{{Q}_{i}}f\left( y\right) {dy} \leq {2\rho }\text{ for all }i 
\]

and

\[ 
0 \leq f\left( x\right) \leq \rho \text{ for }x \notin { \cup }_{i}{Q}_{i}. 
\]

Set

\[ 
{f}_{1}\left( x\right) = \left\{ \begin{matrix} \frac{1}{\left| {Q}_{i}\right| }{\int }_{{Q}_{i}}f\left( y\right) {dy} & \text{ for }x \in {Q}_{i} \\ f\left( x\right) & \text{ for }x \notin { \cup }_{i}{Q}_{i} \end{matrix}\right. 
\]

and

\[ 
{f}_{2} = f - {f}_{1}. 
\]

We have

(1) \( \;\left\{ \begin{matrix} \rho \leq {f}_{1}\left( x\right) \leq {2\rho } & \text{ for }x \in { \cup }_{i}{Q}_{i} \\ 0 \leq {f}_{1}\left( x\right) \leq \rho & \text{ for }x \notin { \cup }_{i}{Q}_{i}, \end{matrix}\right. \)

(2) \( \;\left\{ \begin{matrix} \frac{1}{\left| {Q}_{i}\right| }{\int }_{{Q}_{i}}{f}_{2}\left( y\right) {dy} = 0 & \text{ for each }i \\ {f}_{2}\left( x\right) = 0 & \text{ for }x \notin { \cup }_{i}{Q}_{i}, \end{matrix}\right. \)

(3) \( \;\left| {{ \cup }_{i}{Q}_{i}}\right| \rho \leq {\int }_{{ \cup }_{i}{Q}_{i}}f\left( y\right) \;{dy} \leq \parallel f{\parallel }_{1} \),

(4) \( \;{\begin{Vmatrix}{f}_{1}\end{Vmatrix}}_{1} \leq \parallel f{\parallel }_{1} \) and \( {\begin{Vmatrix}{f}_{2}\end{Vmatrix}}_{1} \leq 2\parallel f{\parallel }_{1} \).

Since \( 0 \leq {f}_{1} \leq {2\rho } = \frac{\lambda }{2} \)
5
23
Analysis
Measure and integration
Theorem 6.14. Euler’s Theorem. If \( m \) is a positive integer and \( a \) is an integer with \( \left( {a, m}\right) = 1 \), then \( {a}^{\phi \left( m\right) } \equiv 1\left( {\;\operatorname{mod}\;m}\right) \) . Before we prove Euler's theorem, we illustrate the idea behind the proof with an example.
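Before the example, here is a brute-force numerical check of the statement (my own sketch, not from the book), computing \( \phi \left( m\right) \) straight from its definition:

```python
from math import gcd

def phi(m):
    """Euler's totient: the number of integers 1 <= k <= m with gcd(k, m) = 1."""
    return sum(1 for k in range(1, m + 1) if gcd(k, m) == 1)

# Verify a^phi(m) ≡ 1 (mod m) for every a coprime to m, for all small m.
for m in range(2, 50):
    for a in range(1, m):
        if gcd(a, m) == 1:
            assert pow(a, phi(m), m) == 1
```

This is, of course, only a check for \( m < 50 \), not a proof.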
31750_Elementary Number Theory and Its Applications_Fourth Edition
Theorem 6.14. Euler’s Theorem. If \( m \) is a positive integer and \( a \) is an integer with \( \left( {a, m}\right) = 1 \), then \( {a}^{\phi \left( m\right) } \equiv 1\left( {\;\operatorname{mod}\;m}\right) \) .
Null
3
13
Number Theory
Number theory
Example 6.3.3. The balance equations for the \( M/M/2/3 \) queueing system with \( \lambda = {2\mu } \) are:

state \( j \): departure rate from \( j \) = arrival rate to \( j \)

\[ 0\;\;{2\mu }{\pi }_{0} = \mu {\pi }_{1} \]

\[ 1\;\;\left( {{2\mu } + \mu }\right) {\pi }_{1} = {2\mu }{\pi }_{0} + \left( {2 \times \mu }\right) {\pi }_{2} \]

\[ 2\;\;\left( {{2\mu } + 2 \times \mu }\right) {\pi }_{2} = {2\mu }{\pi }_{1} + \left( {2 \times \mu }\right) {\pi }_{3} \]

\[ 3\;\;\left( {2 \times \mu }\right) {\pi }_{3} = {2\mu }{\pi }_{2} \]

That is,

\[ 2{\pi }_{0}\overset{\left( 0\right) }{ = }{\pi }_{1} \]

\[ 3{\pi }_{1}\overset{\left( 1\right) }{ = }2{\pi }_{0} + 2{\pi }_{2} \]

\[ 2{\pi }_{2}\overset{\left( 2\right) }{ = }{\pi }_{1} + {\pi }_{3} \]

\[ {\pi }_{3}\overset{\left( 3\right) }{ = }{\pi }_{2} \]

It is a simple matter to solve this system of linear equations. The equations for states 2 and 3 yield that \( {\pi }_{1} = {\pi }_{2} = {\pi }_{3} \). Then, making use of the equation for state 0, we can write that

\[ {\pi }_{0} + 2{\pi }_{0} + 2{\pi }_{0} + 2{\pi }_{0} = 1\; \Rightarrow \;{\pi }_{0} = \frac{1}{7}\;\text{ and }\;{\pi }_{1} = {\pi }_{2} = {\pi }_{3} = \frac{2}{7}. \]

Finally, we find at once that this solution satisfies the equation for state 1 as well. In the case of the \( M/M/2/2 \) queueing system (with \( \lambda = {2\mu } \) ), we deduce from (6.23) and (6.24) that

\[ {\pi }_{0} = {\left( \mathop{\sum }\limits_{{k = 0}}^{2}\frac{{2}^{k}}{k!}\right) }^{-1} = {\left( 1 + 2 + 2\right) }^{-1} = \frac{1}{5}\text{ and }{\pi }_{1} = {\pi }_{2} = \frac{2}{5}. \]

## 6.4 Exercises for Chapter 6

## Solved exercises

## Question no. 1

A system is made up of \( n \) components that operate independently of one another and all have an exponential lifetime, with parameter \( {\mu }_{k} \), for \( k = 1,\ldots, n \) . When the system breaks down, the failed components are replaced by new ones.
Let \( N\left( t\right) \) be the number of system breakdowns in the interval \( \left\lbrack {0, t}\right\rbrack \) . Is the stochastic process \( \{ N\left( t\right), t \geq 0\} \) a continuous-time Markov chain if the components are connected (a) in series? (b) in parallel? (c) in standby redundancy? ## Question no. 2 The particular pure birth process known as the Yule process is such that \( {\lambda }_{n} = {n\lambda } \) , for \( n = 0,1,\ldots \) . It can be shown that \[ {p}_{i, j}\left( t\right) = \left( \begin{array}{l} j - 1 \\ i - 1 \end{array}\right) {e}^{-{i\lambda t}}{\left( 1 - {e}^{-{\lambda t}}\right) }^{j - i}\;\text{ for }j \geq i \geq 1. \] What is the expected value of \( X\left( t\right) \), given that \( X\left( 0\right) = i > 0 \) ? ## Question no. 3 Let \( \{ X\left( t\right), t \geq 0\} \) be a birth and death process with state space \( \{ 0,1,2\} \) and birth and death rates given by \[ {\lambda }_{0} = \lambda ,\;{\lambda }_{1} = {2\lambda }\;\text{ and }\;{\mu }_{1} = \mu ,\;{\mu }_{2} = {2\mu }. \] Find the limiting probabilities of the process from its balance equations. ## Question no. 4 Find the limiting probabilities of an \( M/M/1 \) queue at a (large enough) time instant \( {t}_{0} \), given that there are either two, three, or four customers in the system at \( {t}_{0} \) . Under the same condition, what is the expected time that the first customer who enters the system after \( {t}_{0} \) will spend waiting in line if we assume that the customer who was being served at time \( {t}_{0} \) is still present when the new one arrives? ## Question no. 5 Suppose that the server in an \( M/M/1 \) queueing system works twice as fast when there are at least three customers in the system, so that \( \mu \left\lbrack {X\left( t\right) }\right\rbrack = \mu \) if \( X\left( t\right) = 1 \) or 2, and \( \mu \left\lbrack {X\left( t\right) }\right\rbrack = {2\mu } \) if \( X\left( t\right) \geq 3 \) . 
Write the balance equations for this system. What is the condition for the existence of the limiting probabilities? ## Question no. 6 Consider the \( M/M/1/2 \) queueing system in stationary regime. Suppose that a departure took place at time \( {t}_{0} \) and that the next two arrivals, from \( {t}_{0} \), occurred at \( t = {t}_{0} + 1 \) and \( t = {t}_{0} + 2 \) . What is the probability that the customer who arrived at time \( {t}_{0} + 2 \) was able to enter the system? ## Question no. 7 Suppose that the server in an \( M/M/1/3 \) queueing system decides to work twice as fast, in order to increase the profits of the system. However, after a while, the arrival rate of customers goes from \( \lambda \) to \( \lambda /2 \), because of poor service. Assume that \( \lambda = \mu \) . If every customer who actually enters the system pays \( \$ x \), what is the average amount of money that the system earns per unit of time when the service rate is \( \mu \) ? Is the server better off to serve at rate \( \mu \) or at rate \( {2\mu } \) ? ## Question no. 8 Let \( X\left( t\right) \) be the number of customers at time \( t \) in an \( M/M/2 \) queueing model. Suppose that \( X\left( {t}_{0}\right) \geq 2 \), and let \( {\tau }_{i} \) be the departure time of the customer being served by server no. \( i \), for \( i = 1,2 \) . Calculate the probability \( P\left\lbrack {{\tau }_{2} \leq {\tau }_{1} + 1}\right\rbrack \) . ## Question no. 9 Write the balance equations for the \( M/M/2/3 \) queueing system if we suppose that the service time of server no. \( i \) has an exponential distribution with parameter \( {\mu }_{i} \), for \( i = 1,2 \) . That is, the two servers do not necessarily work at the same speed. Assume that, when the system is empty, an arriving customer heads for server no. 1 with probability 1. In terms of the limiting probabilities of the process, what is the average time that an entering customer spends in the system (in stationary regime)? 
## Question no. 10 Drivers arrive according to a Poisson process with rate \( \lambda \) at a se
8535_Basic Probability Theory with Applications
Example 6.3.3. The balance equations for the \( M/M/2/3 \) queueing system with \( \lambda = {2\mu } \) are:

state \( j \): departure rate from \( j \) = arrival rate to \( j \)

\[ 0\;\;{2\mu }{\pi }_{0} = \mu {\pi }_{1} \]

\[ 1\;\;\left( {{2\mu } + \mu }\right) {\pi }_{1} = {2\mu }{\pi }_{0} + \left( {2 \times \mu }\right) {\pi }_{2} \]

\[ 2\;\;\left( {{2\mu } + 2 \times \mu }\right) {\pi }_{2} = {2\mu }{\pi }_{1} + \left( {2 \times \mu }\right) {\pi }_{3} \]

\[ 3\;\;\left( {2 \times \mu }\right) {\pi }_{3} = {2\mu }{\pi }_{2} \]

That is,

\[ 2{\pi }_{0}\overset{\left( 0\right) }{ = }{\pi }_{1} \]

\[ 3{\pi }_{1}\overset{\left( 1\right) }{ = }2{\pi }_{0} + 2{\pi }_{2} \]

\[ 2{\pi }_{2}\overset{\left( 2\right) }{ = }{\pi }_{1} + {\pi }_{3} \]

\[ {\pi }_{3}\overset{\left( 3\right) }{ = }{\pi }_{2} \]
It is a simple matter to solve this system of linear equations. The equations for states 2 and 3 yield that \( {\pi }_{1} = {\pi }_{2} = {\pi }_{3} \). Then, making use of the equation for state 0, we can write that

\[ 
{\pi }_{0} + 2{\pi }_{0} + 2{\pi }_{0} + 2{\pi }_{0} = 1\; \Rightarrow \;{\pi }_{0} = \frac{1}{7}\;\text{ and }\;{\pi }_{1} = {\pi }_{2} = {\pi }_{3} = \frac{2}{7}. 
\]

Finally, we find at once that this solution satisfies the equation for state 1 as well.
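Since \( \mu \) cancels from every balance equation, the solution can be verified with exact rational arithmetic; the following is an illustrative sketch of mine, not part of the book:

```python
from fractions import Fraction

# Limiting probabilities found in Example 6.3.3: pi0 = 1/7, pi1 = pi2 = pi3 = 2/7.
pi0 = Fraction(1, 7)
pi = [pi0, 2 * pi0, 2 * pi0, 2 * pi0]

# Check the four normalized balance equations and the normalization condition.
assert 2 * pi[0] == pi[1]                    # state 0
assert 3 * pi[1] == 2 * pi[0] + 2 * pi[2]    # state 1
assert 2 * pi[2] == pi[1] + pi[3]            # state 2
assert pi[3] == pi[2]                        # state 3
assert sum(pi) == 1
```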
8
43
Optimization and Control
Operations research, mathematical programming
Problem 472. Describe the sample space, if two indistinguishable coins are tossed simultaneously.

Solution. Since the pairs \( \{ H, T\} \) and \( \{ T, H\} \) are indistinguishable, the sample space consists of three elements \( \{ H, H\} ,\{ T, T\} ,\{ H, T\} \), where the first two can be identified with \( \{ H\} \) and \( \{ T\} \), respectively. However, if the coins are distinguishable, then the sets \( \left( {H, T}\right) \neq \left( {T, H}\right) \) are different, and by the Product Rule, the new sample space contains 4 points. If three different fair dice are rolled simultaneously, then the sample space consists of \( {6}^{3} = {216} \) ordered triples, \( S = \{ \left( {1,1,1}\right) ,\left( {1,1,2}\right) ,\ldots ,\left( {6,6,6}\right) \} \).

## 22.2 PROBABILITY DISTRIBUTIONS

Up to this point, we have discussed only sample spaces. The probability theory originates when a certain specific number \( p\left( s\right) \), called the probability of the outcome \( {s}^{1} \), is assigned to each point \( s \in S \) of the sample space. The set of these values is called a probability distribution on the sample space \( S \), because we distribute a certain given "supply" of probability among the points of \( S \). These values cannot be assigned arbitrarily, though; they must satisfy certain assumptions, the axioms of probability theory; for more on that see, for example, [17]. We consider the following system of axioms:

--- \( {}^{1} \) One can often hear in everyday talk, "It's probable" or "That's unlikely." Based on such individual judgment, some people play lotteries while the others do not, because they do not believe that there are reasonable chances to win. Any discussion of such subjective probabilities is beyond the scope of this book.
--- PA1) \( p\left( s\right) \geq 0 \) for any point \( s \in S \) PA2) if \( E = \left\{ {{s}_{1},{s}_{2},\ldots ,{s}_{k}}\right\} \subset S \), then \( p\left( E\right) = p\left( {s}_{1}\right) + p\left( {s}_{2}\right) + \cdots + p\left( {s}_{k}\right) \) PA3) \( p\left( S\right) = 1 \) . Therefore, we have assumed that probability values are nonnegative, the probability of any event \( E \) is the sum of the probabilities of elementary events composing \( E \), and the total probability is 1 . These axioms immediately imply that if \( {E}_{1},\ldots ,{E}_{k} \) are any pairwise disjoint events, that is, \( {E}_{1},\ldots ,{E}_{k} \subset S \) and \( {E}_{i} \cap {E}_{j} = \varnothing ,1 \leq i, j \leq k \), then \( p\left( {{E}_{1} \cup \cdots \cup {E}_{k}}\right) = p\left( {E}_{1}\right) + \cdots + p\left( {E}_{k}\right) \), that is, the probability is finitely additive. Moreover, for any event \( E \) we have \( p\left( E\right) = p\left( {E \cup \varnothing }\right) = p\left( E\right) + p\left( \varnothing \right) \), thus, \( p\left( \varnothing \right) = 0 \), the empty event must have zero probability. In some cases, we can conduct a random experiment in reality; for instance, we can toss a coin many times and record the numbers of heads, \( n\left( H\right) \), and tails, \( n\left( T\right) \), occurred. If the experiment was repeated \( n \) times, the favorable outcomes for an event \( E \) were observed \( k\left( E\right) \) times among the \( n \) outcomes, and the sequence \( \{ k\left( E\right) \} \) is stabilizing (has the limit) \( f\left( E\right) \) when \( n \rightarrow \infty \), then we can claim that the event \( E \) has the experimental or frequency probability. The ratio \( f\left( E\right) = \frac{k\left( E\right) }{n} \) is called the experimental or frequency probability of the event \( E \) . Clearly, the frequency \( f\left( E\right) \) depends, among other things, on the length \( n \) of the experiment. 
If with \( n \) increasing, \( f\left( E\right) \) is stabilizing to a number \( p\left( E\right) \), we can use \( f\left( E\right) \) as an estimation of the probability \( p\left( E\right) \) of the event \( E \), but this is only a plausible approximation. Indeed, there is nothing unusual to get two heads in a row, thus in this series of \( n = 2 \), the experimental probabilities are \( p\left( H\right) = 1/1 = 1 \) and \( p\left( T\right) = 0 \) . However, if we use this very short sequence to estimate the probability of getting a tail, we have \( p\left( T\right) = f\left( T\right) /2 = 0 \), which obviously makes no sense. Advanced courses in probability theory treat this issue in more detail - namely, it is discussed what is the appropriate length of an experiment. Any set of positive numbers, satisfying axioms PA1)-PA3), can be used as a probability distribution. For example, experimenting with a coin and choosing the sample space \( S = \{ H, T\} \), we can assign \( p\left( H\right) = 1/3 \) and \( p\left( T\right) = 2/3 \) . However, unless we have a specifically tailored (biased) coin, the results of our physical experiments will likely be essentially different from the results predicted by the mathematical model. So that, to assign a probability distribution, we use either some previous experience (the results of real experiments) or a theory, if it exists. It is physically impossible (probably) to make a perfect coin; however, real experiments have confirmed that if a coin was chosen at random, then as the first approximation, it is quite realistic to assign the probabilities \( p\left( H\right) = p\left( T\right) = 1/2 \) . On the other hand, the same experiments show that no real coin satisfies this probability distribution precisely but exhibits slight deviations from the theoretical probability \( 1/2 \) . 
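The stabilization of frequencies can be illustrated with a small simulation (my sketch, not from the book; the seed is fixed so the run is reproducible):

```python
import random

random.seed(0)  # reproducible run
n = 100_000
heads = sum(random.random() < 0.5 for _ in range(n))
f_heads = heads / n  # experimental (frequency) probability of H

# For a fair-coin model, f(H) should be close to p(H) = 1/2 for large n.
assert abs(f_heads - 0.5) < 0.01
```

For tiny \( n \), the frequency can be badly misleading, exactly as in the two-heads-in-a-row example above.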
Nevertheless, it is customary in theoretical studies to accept the hypothesis of equally likely probabilities or equal chances \( {}^{2} \), that is, to assign equal probabilities to each point of the sample space. Definition 131. It is said that the assumption of equally likely probabilities is valid for a given problem with the sample space \( S = \left\{ {{s}_{1},{s}
24820_Discrete Mathematics with Cryptographic Applications_ A Self-Teaching Introduction
Problem 472. Describe the sample space, if two indistinguishable coins are tossed simultaneously.
Solution. Since the pairs \( \{ H, T\} \) and \( \{ T, H\} \) are indistinguishable, the sample space consists of three elements \( \{ H, H\} ,\{ T, T\} ,\{ H, T\} \), where the first two can be identified with \( \{ H\} \) and \( \{ T\} \), respectively. However, if the coins are distinguishable, then the sets \( \left( {H, T}\right) \neq \left( {T, H}\right) \) are different, and by the Product Rule, the new sample space contains 4 points. If three different fair dice are rolled simultaneously, then the sample space consists of \( {6}^{3} = {216} \) ordered triples, \( S = \{ \left( {1,1,1}\right) ,\left( {1,1,2}\right) ,\ldots ,\left( {6,6,6}\right) \} \).
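The counts in Problem 472 can be confirmed by enumeration (an illustrative sketch, not from the book):

```python
from itertools import product

# Distinguishable coins: ordered pairs, 4 outcomes by the Product Rule.
ordered = list(product("HT", repeat=2))
assert len(ordered) == 4

# Indistinguishable coins: identify (H, T) with (T, H), leaving 3 outcomes.
unordered = {frozenset(p) for p in ordered}
assert len(unordered) == 3

# Three distinguishable dice: 6^3 = 216 ordered triples.
dice = list(product(range(1, 7), repeat=3))
assert len(dice) == 216
```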
1
2
Combinatorics
Combinatorics
Example 7.3.8 --- Automobile Emissions. We can apply Theorem 7.3.3 to answer the question at the end of Example 7.3.7. In the notation of the theorem, we have \( n = {46},{\sigma }^{2} = {0.5}^{2} = {0.25} \) , \( {\mu }_{0} = 2 \), and \( {v}^{2} = {1.0} \) . The average of the 46 measurements is \( {\bar{x}}_{n} = {1.329} \) . The posterior distribution of \( \theta \) is then the normal distribution with mean and variance given by \[ {\mu }_{1} = \frac{{0.25} \times 2 + {46} \times 1 \times {1.329}}{{0.25} + {46} \times 1} = {1.333}, \] \[ {v}_{1}^{2} = \frac{{0.25} \times 1}{{0.25} + {46} \times 1} = {0.0054} \] The mean \( {\mu }_{1} \) of the posterior distribution of \( \theta \), as given in Eq. (7.3.1), can be rewritten as follows: \[ {\mu }_{1} = \frac{{\sigma }^{2}}{{\sigma }^{2} + n{v}_{0}^{2}}{\mu }_{0} + \frac{n{v}_{0}^{2}}{{\sigma }^{2} + n{v}_{0}^{2}}{\bar{x}}_{n}. \tag{7.3.3} \] It can be seen from Eq. (7.3.3) that \( {\mu }_{1} \) is a weighted average of the mean \( {\mu }_{0} \) of the prior distribution and the sample mean \( {\bar{x}}_{n} \) . Furthermore, it can be seen that the relative weight given to \( {\bar{x}}_{n} \) satisfies the following three properties: (1) For fixed values of \( {v}_{0}^{2} \) and \( {\sigma }^{2} \), the larger the sample size \( n \), the greater will be the relative weight that is given to \( {\bar{x}}_{n} \) . (2) For fixed values of \( {v}_{0}^{2} \) and \( n \), the larger the variance \( {\sigma }^{2} \) of each observation in the sample, the smaller will be the relative weight that is given to \( {\bar{x}}_{n} \) . (3) For fixed values of \( {\sigma }^{2} \) and \( n \), the larger the variance \( {v}_{0}^{2} \) of the prior distribution, the larger will be the relative weight that is given to \( {\bar{x}}_{n} \) . Moreover, it can be seen from Eq. 
(7.3.2) that the variance \( {v}_{1}^{2} \) of the posterior distribution of \( \theta \) depends on the number \( n \) of observations that have been taken but does not depend on the magnitudes of the observed values. Suppose, therefore, that a random sample of \( n \) observations is to be taken from a normal distribution for which the value of the mean \( \theta \) is unknown, the value of the variance is known, and the prior distribution of \( \theta \) is a specified normal distribution. Then, before any observations have been taken, we can use Eq. (7.3.2) to calculate the actual value of the variance \( {v}_{1}^{2} \) of the posterior distribution. However, the value of the mean \( {\mu }_{1} \) of the posterior distribution will depend on the observed values that are obtained in the sample. The fact that the variance of the posterior distribution depends only on the number of observations is due to the assumption that the variance \( {\sigma }^{2} \) of the individual observations is known. In Sec. 8.6, we shall relax this assumption. ---
7422_Probability and Statistics 4
Automobile Emissions. We can apply Theorem 7.3.3 to answer the question at the end of Example 7.3.7. In the notation of the theorem, we have \( n = {46},{\sigma }^{2} = {0.5}^{2} = {0.25} \) , \( {\mu }_{0} = 2 \), and \( {v}^{2} = {1.0} \) . The average of the 46 measurements is \( {\bar{x}}_{n} = {1.329} \) . The posterior distribution of \( \theta \) is then the normal distribution with mean and variance given by
\[{\mu }_{1} = \frac{{0.25} \times 2 + {46} \times 1 \times {1.329}}{{0.25} + {46} \times 1} = {1.333},\]

\[{v}_{1}^{2} = \frac{{0.25} \times 1}{{0.25} + {46} \times 1} = {0.0054}\]

The mean \( {\mu }_{1} \) of the posterior distribution of \( \theta \), as given in Eq. (7.3.1), can be rewritten as follows:

\[{\mu }_{1} = \frac{{\sigma }^{2}}{{\sigma }^{2} + n{v}_{0}^{2}}{\mu }_{0} + \frac{n{v}_{0}^{2}}{{\sigma }^{2} + n{v}_{0}^{2}}{\bar{x}}_{n}. \tag{7.3.3}\]

It can be seen from Eq. (7.3.3) that \( {\mu }_{1} \) is a weighted average of the mean \( {\mu }_{0} \) of the prior distribution and the sample mean \( {\bar{x}}_{n} \) . Furthermore, it can be seen that the relative weight given to \( {\bar{x}}_{n} \) satisfies the following three properties: (1) For fixed values of \( {v}_{0}^{2} \) and \( {\sigma }^{2} \), the larger the sample size \( n \), the greater will be the relative weight that is given to \( {\bar{x}}_{n} \) . (2) For fixed values of \( {v}_{0}^{2} \) and \( n \), the larger the variance \( {\sigma }^{2} \) of each observation in the sample, the smaller will be the relative weight that is given to \( {\bar{x}}_{n} \) . (3) For fixed values of \( {\sigma }^{2} \) and \( n \), the larger the variance \( {v}_{0}^{2} \) of the prior distribution, the larger will be the relative weight that is given to \( {\bar{x}}_{n} \) .

Moreover, it can be seen from Eq. (7.3.2) that the variance \( {v}_{1}^{2} \) of the posterior distribution of \( \theta \) depends on the number \( n \) of observations that have been taken but does not depend on the magnitudes of the observed values.
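The arithmetic above is easy to reproduce; the snippet below (my own sketch, with variable names chosen here rather than taken from the book) also checks the weighted-average form (7.3.3):

```python
# Data from Example 7.3.8: n = 46 observations, sigma^2 = 0.25,
# prior mean mu0 = 2, prior variance v0^2 = 1.0, sample mean 1.329.
n, sigma2, mu0, v02, xbar = 46, 0.25, 2.0, 1.0, 1.329

mu1 = (sigma2 * mu0 + n * v02 * xbar) / (sigma2 + n * v02)
v12 = (sigma2 * v02) / (sigma2 + n * v02)

assert round(mu1, 3) == 1.333
assert round(v12, 4) == 0.0054

# Weighted-average form (7.3.3): mu1 = (1 - w) * mu0 + w * xbar.
w = n * v02 / (sigma2 + n * v02)
assert abs(mu1 - ((1 - w) * mu0 + w * xbar)) < 1e-12
```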
9
45
Probability and Statistics
Statistics
Corollary 26.6.9 Suppose that \( f \) is a meromorphic function on a domain \( U \), and that \( {S}_{f} \) is the disjoint union of \( A \) and \( B \) . Then there exist meromorphic functions \( g \) and \( h \) such that \( f = g + h,{S}_{g} = A \) and \( {S}_{h} = B \) . Proof By the theorem, there exists \( g \) with \( {S}_{g} = A \) such that \( f - g \) has removable singularities at the points of \( A \) . Remove them, and set \( h = f - g \) .
30436_A Course in Mathematical Analysis_ Volume III_ Complex Analysis_ Measure and Integration
Corollary 26.6.9 Suppose that \( f \) is a meromorphic function on a domain \( U \), and that \( {S}_{f} \) is the disjoint union of \( A \) and \( B \) . Then there exist meromorphic functions \( g \) and \( h \) such that \( f = g + h,{S}_{g} = A \) and \( {S}_{h} = B \) .
Proof By the theorem, there exists \( g \) with \( {S}_{g} = A \) such that \( f - g \) has removable singularities at the points of \( A \) . Remove them, and set \( h = f - g \) .
5
24
Analysis
Functions of a complex variable
Example 4.4.1. Let \( f\left( x\right) = {x}^{2},0 \leq x \leq {10} \) . The function is continuous and strictly monotone increasing. Therefore, its inverse \( g\left( x\right) = \sqrt{x},0 \leq x \leq {100} \) is continuous strictly monotone increasing as well.
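Example 4.4.1 can be checked numerically (a quick sketch of mine, not from the book):

```python
from math import sqrt, isclose

f = lambda x: x * x        # f on [0, 10], strictly increasing
g = lambda x: sqrt(x)      # its inverse on [0, 100]

# g undoes f on a grid of [0, 10].
for k in range(101):
    x = k / 10.0
    assert isclose(g(f(x)), x, abs_tol=1e-12)

# g is strictly increasing on [0, 100].
ys = [g(k) for k in range(101)]
assert all(a < b for a, b in zip(ys, ys[1:]))
```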
27302_Calculus Light
Example 4.4.1. Let \( f\left( x\right) = {x}^{2},0 \leq x \leq {10} \) . The function is continuous and strictly monotone increasing. Therefore, its inverse \( g\left( x\right) = \sqrt{x},0 \leq x \leq {100} \) is continuous strictly monotone increasing as well.
Null
5
22
Analysis
Real functions
Example 3.4.5. Let \( {\bar{B}}_{{c}_{0}}\left( {0,1}\right) \) be the closed unit ball of \( {c}_{0} \), the space of all null sequences \( \mathrm{x} = \left\{ {{x}_{n} : n \in \mathbb{N}}\right\} ,\mathop{\lim }\limits_{n}{x}_{n} = 0 \), endowed with the norm \( \parallel \mathrm{x}\parallel = \mathop{\sup }\limits_{i}\left| {x}_{i}\right| \) . We define \( f : {\bar{B}}_{{c}_{0}}\left( {0,1}\right) \rightarrow {\bar{B}}_{{c}_{0}}\left( {0,1}\right) \), by \( f\left( \mathrm{x}\right) = f\left( {{x}_{1},{x}_{2},\ldots }\right) = \left( {1,{x}_{1},{x}_{2},\ldots }\right) \) . Then \( \parallel f\left( \mathrm{x}\right) - f\left( \mathrm{z}\right) \parallel = \parallel \mathrm{x} - \mathrm{z}\parallel \), for every \( \mathrm{x},\mathrm{z} \in {c}_{0} \), however, the equation \( f\left( \mathrm{x}\right) = \mathrm{x} \) is satisfied only if \( \mathrm{x} = \left( {1,1,\ldots }\right) \) which is not in \( {c}_{0} \) .

\( {}^{9} \) See Theorem 2.6.5( \( {\mathrm{i}}^{\prime } \) ) and apply it to \( {\mathrm{x}}_{1} - f\left( {\mathrm{x}}_{\lambda }\right) \) and \( {\mathrm{x}}_{2} - f\left( {\mathrm{x}}_{\lambda }\right) \) .

## 3.4.2 The Krasnoselskii-Mann algorithm

Given that a fixed point exists for a nonexpansive operator, how can we approximate it? The Krasnoselskii-Mann algorithm provides an iterative scheme that allows this, in the special case where \( X = H \) is a Hilbert space (see, e.g.,[20]). The crucial observation behind this scheme is that even though the iterative procedure for a nonexpansive operator \( f \) may not converge to a fixed point of this operator, \( {}^{10} \) nevertheless the operator \( {f}_{\lambda }\left( \mathrm{x}\right) = \left( {1 - \lambda }\right) \mathrm{x} + {\lambda f}\left( \mathrm{x}\right) \), for any \( \lambda \in \left( {0,1}\right) \) when iterated, weakly converges to a fixed point. \( {}^{11} \) In particular, we introduce the following definition.

Definition 3.4.6 (Averaged operators).
A map \( {f}_{\lambda } : H \rightarrow H \) which can be expressed as \( {f}_{\lambda } = \left( {1 - \lambda }\right) I + {\lambda f} \), for some \( \lambda \in \left( {0,1}\right) \), where \( f \) is a nonexpansive operator is called a \( \lambda \) -averaged operator. \( {}^{12} \)
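A minimal finite-dimensional sketch of the Krasnoselskii-Mann iteration (my own illustration, not from the text): in \( {\mathbb{R}}^{2} \), rotation by \( \pi /2 \) is an isometry, hence nonexpansive, with unique fixed point \( 0 \); plain iteration merely cycles, while the \( \lambda \) -averaged map converges to the fixed point.

```python
from math import hypot

def f(x, y):
    """Rotation by 90 degrees: nonexpansive, with (0, 0) as its only fixed point."""
    return (-y, x)

# Plain iteration x -> f(x) cycles with period 4 and never converges.
p = (1.0, 0.0)
for _ in range(4):
    p = f(*p)
assert p == (1.0, 0.0)

# The averaged map f_lam = (1 - lam) * I + lam * f does converge.
lam = 0.5
x, y = 1.0, 0.0
for _ in range(200):
    fx, fy = f(x, y)
    x, y = (1 - lam) * x + lam * fx, (1 - lam) * y + lam * fy
assert hypot(x, y) < 1e-6
```

In this linear example the averaged map contracts by a factor \( 1/\sqrt{2} \) per step; in a general Hilbert space the theorem only guarantees weak convergence.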
17895_Variational Methods in Nonlinear Analysis_ With Applications in Optimization and Partial Differentia
Example 3.4.5. Let \( {\bar{B}}_{{c}_{0}}\left( {0,1}\right) \) be the closed unit ball of \( {c}_{0} \), the space of all null sequences \( \mathrm{x} = \left\{ {{x}_{n} : n \in \mathbb{N}}\right\} ,\mathop{\lim }\limits_{n}{x}_{n} = 0 \), endowed with the norm \( \parallel \mathrm{x}\parallel = \mathop{\sup }\limits_{i}\left| {x}_{i}\right| \) . We define \( f : {\bar{B}}_{{c}_{0}}\left( {0,1}\right) \rightarrow {\bar{B}}_{{c}_{0}}\left( {0,1}\right) \), by \( f\left( \mathrm{x}\right) = f\left( {{x}_{1},{x}_{2},\ldots }\right) = \left( {1,{x}_{1},{x}_{2},\ldots }\right) \) . Then \( \parallel f\left( \mathrm{x}\right) - f\left( \mathrm{z}\right) \parallel = \parallel \mathrm{x} - \mathrm{z}\parallel \), for every \( \mathrm{x},\mathrm{z} \in {c}_{0} \), however, the equation \( f\left( \mathrm{x}\right) = \mathrm{x} \) is satisfied only if \( \mathrm{x} = \left( {1,1,\ldots }\right) \) which is not in \( {c}_{0} \).
Null
5
32
Analysis
Functional analysis
Proposition 13.4.18. For \( n \geq 1 \) an element \( x \in {G}_{n} \) is thin if and only if \( {\Phi x} = 0 \) . Proof. We have shown that \( \Phi {\varepsilon }_{j}y = 0,\Phi {\Gamma }_{j}y = 0 \) for all \( y \in {G}_{n - 1} \) (see Proposition 13.4.11). It follows from Proposition 13.4.14 that \( {\Phi x} = 0 \) whenever \( x \) is thin. To see the converse, we recall the definition \[ {\Phi }_{j}x = {\left\lbrack -{\varepsilon }_{j}{\partial }_{j}^{ + }x, - {\Gamma }_{j}{\partial }_{j + 1}^{ - }x, x,{\Gamma }_{j}{\partial }_{j + 1}^{ + }x\right\rbrack }_{j + 1} \] which can be rewritten as \[ x = {\left\lbrack {\Gamma }_{j}{\partial }_{j + 1}^{ - }x,{\varepsilon }_{j}{\partial }_{j}^{ + }x,{\Phi }_{j}x, - {\Gamma }_{j}{\partial }_{j + 1}^{ + }x\right\rbrack }_{j + 1}. \] These two equations show that \( {\Phi }_{j}x \) is thin if and only if \( x \) is thin. Hence \( {\Phi x} \) is thin if and only if \( x \) is thin. In particular, if \( {\Phi x} = 0 \) (i.e. \( {\Phi x} = {\varepsilon }_{1}^{n}{\beta x} \) ) then \( {\Phi x} \) is thin, so \( x \) is also thin. ## 13.5 \( n \) -shells: coskeleton and skeleton To work inductively on an \( \omega \) -groupoid, we have at each step \( n \) to restrict our attention to dimensions \( \leq n \) and the minimal part accompanying it. To this end, it is useful to introduce the \( n \) -skeleton of an \( \omega \) -groupoid as the \( \omega \) -subgroupoid generated by the part of dimensions \( \leq n \), analogously to the constructions for crossed complexes in Section 7.1.vi. Again, it is useful to make the construction a bit more categorical. \( {}^{219} \) Definition 13.5.1. If we ignore the elements of dimension higher than \( n \) in an \( \omega \) - groupoid we obtain a cubical \( n \) -groupoid. 
This gives the \( n \) -truncation functor \[ {\operatorname{tr}}_{n} : \omega \text{-Gpds} \rightarrow \omega \text{-Gpds}{}_{n}\text{.} \] We shall show that \( {\operatorname{tr}}_{n} \) has both a right adjoint \( {\operatorname{cosk}}^{n} : \omega \) -Gpds \( {}_{n} \rightarrow \omega \) -Gpds, the \( n \) -coskeleton functor (Definition 13.5.5) and a left adjoint \( {\mathrm{{sk}}}^{n} : \omega \) -Gpds \( {}_{n} \rightarrow \omega \) -Gpds, the \( n \) -skeleton functor (Definition 13.5.14). We will see that both can be described in terms of ’shells’, i.e. families of \( r \) -cubes that fit together as the faces of an \( \left( {r + 1}\right) \) -cube do. A trivial example is the total boundary of an \( n \) -cube. For any cubical \( n \) -groupoid \( G = \left( {{G}_{n},{G}_{n - 1},\ldots ,{G}_{0}}\right) \) we will construct an \( \omega \) -groupoid \( {\operatorname{cosk}}^{n}G \) by adding ’shells’ in all dimensions \( \geq n \) . To check that \( {\operatorname{cosk}}^{n}G \) is an \( \omega \) -groupoid we need to explain how to apply faces, degeneracies and connections to these shells. As a consequence, we describe the result of applying the folding operations \( {\Phi }_{i} \) and \( \Phi \) to these shells. In particular, we prove that \( \Phi \) commutes with the total boundary. All these results may be used to prove the existence and uniqueness of fillers for \( n \) -shells. Associated to any \( n \) -cube \( x \in {G}_{n} \) we have its total boundary \( \partial x \) and its folding \( {\Phi x} \) satisfying \( \partial {\Phi x} = {\Phi x} \) . Conversely, for any \( \mathbf{x} \in {\square }^{\prime }{G}_{n - 1} \) and \( \xi \in \gamma {G}_{n}\left( {\beta \mathbf{x}}\right) \), with \( n \geq 2 \), such that \( {\delta \xi } = {\delta \Phi }\mathbf{x} \), there exists \( x \in {G}_{n} \) with \( \partial x = \mathbf{x} \) and \( {\Phi x} = \xi \) . This \( x \) is unique, and it is denoted \( x = \langle \mathbf{x},\xi \rangle \) .
This property and notation allows the reconstruction of \( G \) from \( {\gamma G} \) . We finish the section by constructing \( {\mathrm{{sk}}}^{n} \) the \( n \) -skeleton functor as an \( \omega \) -subgroupoid of \( {\operatorname{cosk}}^{n} \), and proving that it is the left adjoint of \( {\operatorname{tr}}_{n} \) . Definition 13.5.2. In any cubical set \( K \), an \( n \) -shell is a family \( \mathbf{x} = \left( {x}_{i}^{\alpha }\right) \) of \( n \) -cubes \( \left( {i = 1,2,\ldots, n + 1;\alpha = \pm }\right) \) satisfying \[ {\partial }_{j}^{\beta }{x}_{i}^{\alpha } = {\partial }_{i - 1}^{\alpha }{x}_{j}^{\beta }\;\text{ for }1 \leq j < i \leq n + 1\text{ and }\alpha ,\beta = \pm . \] We denote by \( {\square }^{\prime }{K}_{n} \) the set of all \( n \) -shells of \( K \) . We usually write shells in boldface.
10462_Nonabelian Algebraic Topology_ filtered spaces_ crossed complexes_ cubical higher homotopy groupoids
Proposition 13.4.18. For \( n \geq 1 \) an element \( x \in {G}_{n} \) is thin if and only if \( {\Phi x} = 0 \) .
Proof. We have shown that \( \Phi {\varepsilon }_{j}y = 0,\Phi {\Gamma }_{j}y = 0 \) for all \( y \in {G}_{n - 1} \) (see Proposition 13.4.11). It follows from Proposition 13.4.14 that \( {\Phi x} = 0 \) whenever \( x \) is thin. To see the converse, we recall the definition

\[ 
{\Phi }_{j}x = {\left\lbrack -{\varepsilon }_{j}{\partial }_{j}^{ + }x, - {\Gamma }_{j}{\partial }_{j + 1}^{ - }x, x,{\Gamma }_{j}{\partial }_{j + 1}^{ + }x\right\rbrack }_{j + 1} 
\]

which can be rewritten as

\[ 
x = {\left\lbrack {\Gamma }_{j}{\partial }_{j + 1}^{ - }x,{\varepsilon }_{j}{\partial }_{j}^{ + }x,{\Phi }_{j}x, - {\Gamma }_{j}{\partial }_{j + 1}^{ + }x\right\rbrack }_{j + 1}. 
\]

These two equations show that \( {\Phi }_{j}x \) is thin if and only if \( x \) is thin. Hence \( {\Phi x} \) is thin if and only if \( x \) is thin. In particular, if \( {\Phi x} = 0 \) (i.e. \( {\Phi x} = {\varepsilon }_{1}^{n}{\beta x} \) ) then \( {\Phi x} \) is thin, so \( x \) is also thin.
2
11
Algebra
Group theory and generalizations
Corollary 46. (The Divergence Theorem) If \( X \) is a vector field on \( \left( {M, g}\right) \) with compact support, then \[ {\int }_{M}\operatorname{div}X \cdot d\operatorname{vol} = 0 \] Proof. Just observe \[ \operatorname{div}X \cdot d\mathrm{{vol}} = {L}_{X}d\mathrm{{vol}} \] \[ = {i}_{X}d\left( {d\mathrm{{vol}}}\right) + d\left( {{i}_{X}d\mathrm{{vol}}}\right) \] \[ = d\left( {{i}_{X}d\mathrm{{vol}}}\right) \] and use Stokes' theorem. COROLLARY 47. (Green’s Formulae) If \( {f}_{1},{f}_{2} \) are two compactly supported functions on \( \left( {M, g}\right) \), then \[ {\int }_{M}\left( {\Delta {f}_{1}}\right) \cdot {f}_{2} \cdot d\mathrm{{vol}} = - {\int }_{M}g\left( {\nabla {f}_{1},\nabla {f}_{2}}\right) \cdot d\mathrm{{vol}} = {\int }_{M}{f}_{1} \cdot \left( {\Delta {f}_{2}}\right) \cdot d\mathrm{{vol}}. \] Proof. Just use that \[ \operatorname{div}\left( {{f}_{1} \cdot \nabla {f}_{2}}\right) = g\left( {\nabla {f}_{1},\nabla {f}_{2}}\right) + {f}_{1} \cdot \Delta {f}_{2} \] and apply the divergence theorem to get the desired result. COROLLARY 48. (Integration by Parts) If \( S, T \) are two \( \left( {1, p}\right) \) tensors with compact support on \( \left( {M, g}\right) \), then \[ {\int }_{M}g\left( {{S}^{b},\nabla \operatorname{div}T}\right) \cdot d\operatorname{vol} = - {\int }_{M}g\left( {\operatorname{div}S,\operatorname{div}T}\right) \cdot d\operatorname{vol} \] where \( {S}^{b} \) denotes the \( \left( {0, p + 1}\right) \) -tensor defined by \[ {S}^{b}\left( {X, Y, Z,\ldots }\right) = g\left( {X, S\left( {Y, Z,\ldots }\right) }\right) . \] Proof. For simplicity, first assume that \( S \) and \( T \) are vector fields \( X \) and \( Y \) . Then the formula can be interpreted as \[ {\int }_{M}g\left( {X,\nabla \operatorname{div}Y}\right) \cdot d\operatorname{vol} = - {\int }_{M}\operatorname{div}X \cdot \operatorname{div}Y \cdot d\operatorname{vol}. 
\] We can then use that \[ \operatorname{div}\left( {f \cdot X}\right) = g\left( {\nabla f, X}\right) + f \cdot \operatorname{div}X. \] Therefore, if we define \( f = \operatorname{div}Y \) and use the divergence theorem, we get the desired formula. In general, choose an orthonormal frame \( {E}_{i} \), and observe that we can define a vector field by \[ X = \mathop{\sum }\limits_{{{i}_{1},\ldots ,{i}_{p}}}S\left( {{E}_{{i}_{1}},\ldots ,{E}_{{i}_{p}}}\right) \operatorname{div}T\left( {{E}_{{i}_{1}},\ldots ,{E}_{{i}_{p}}}\right) . \] In other words, if we think of \( g\left( {V, S\left( {{X}_{1},\ldots ,{X}_{p}}\right) }\right) \) as a \( \left( {0, p}\right) \) -tensor, then \( X \) is implicitly defined by \[ g\left( {X, V}\right) = g\left( {g\left( {V, S}\right) ,\operatorname{div}T}\right) . \] Then we have \[ \operatorname{div}X = g\left( {\operatorname{div}S,\operatorname{div}T}\right) + g\left( {{S}^{b},\nabla \operatorname{div}T}\right) , \] and the formula is established as before. It is worthwhile pointing out that usually \[ {\int }_{M}g\left( {{S}^{b},\operatorname{div}\nabla T}\right) \neq - {\int }_{M}g\left( {\nabla S,\nabla T}\right) , \] even when the tensors are vector fields. On Euclidean space, for example, simply define \( S = T = {x}^{1}{\partial }_{1} \) . Then \[ \nabla \left( {{x}^{1}{\partial }_{1}}\right) = d{x}^{1}{\partial }_{1} \] \[ \left| {d{x}^{1}{\partial }_{1}}\right| = 1 \] \[ \operatorname{div}\left( {d{x}^{1}{\partial }_{1}}\right) = 0. \] Of course, the tensors in this example do not have compact support, but that can easily be fixed by multiplying with a compactly supported function. ## 4. Čech Cohomology Before defining de Rham cohomology, we shall briefly mention how Čech cohomology is defined. This is the cohomology theory that seems most natural from a geometric point of view. 
Also, it is the cohomology that is most naturally associated with de Rham cohomology. For a manifold \( M \), suppose that we have a covering of contractible open sets \( {U}_{\alpha } \) such that all possible nonempty intersections \( {U}_{{\alpha }_{1}} \cap \cdots \cap {U}_{{\alpha }_{k}} \) are also contractible. Such a covering is called a good cover. Now let \( {I}^{k} \) be the set of ordered indices that create nontrivial intersections \[ {I}^{k} = \left\{ {\left( {{\alpha }_{0},\ldots ,{\alpha }_{k}}\right) : {U}_{{\alpha }_{0}} \cap \cdots \cap {U}_{{\alpha }_{k}} \neq \varnothing }\right\} . \] Čech cochains with values in a ring \( R \) are defined as a space of alternating maps \[ {\check{Z}}^{k} = \left\{ {f : {I}^{k} \rightarrow R : f \circ \tau = - f\text{ where }\tau \text{ is a transposition of two indices }}\right\} . \] The differential, or coboundary operator, is now defined by \[ d\; : \;{\check{Z}}^{k} \rightarrow {\check{Z}}^{k + 1} \] \[ {df}\left( {{\alpha }_{0},\ldots ,{\alpha }_{k + 1}}\right) = \mathop{\sum }\limits_{{i = 0}}^{{k + 1}}{\left( -1\right) }^{i}f\left( {{\alpha }_{0},\ldots ,{\widehat{\alpha }}_{i},\ldots ,{\alpha }_{k + 1}}\right) . \] Čech cohomology is then defined as \[ {H}^{k}\left( {M, R}\right) = \frac{\ker \left( {d : {\check{Z}}^{k} \rightarrow {\check{Z}}^{k + 1}}\right) }{\operatorname{im}\left( {d : {\check{Z}}^{k - 1} \rightarrow {\check{Z}}^{k}}\right) }. \] The standard arguments with refinements of covers can be used to show that this cohomology theory is independent of the choice of good cover. Below, we shall define de Rham cohomology for forms and prove several properties for that cohomology theory. At each stage one can easily see that Čech cohomology satisfies those same properties. Note that Čech cohomology seems almost purely combinatorial. This feature makes it very natural to work with in many situations. ## 5. De Rham Cohomology Throughout we let \( M \) be an \( n \) -manifold. 
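Before developing de Rham theory, note that the Čech complex of the previous section reduces to finite linear algebra once a good cover is fixed. Here is a sketch for the circle; the three-arc cover and its intersection pattern (all pairwise overlaps nonempty, triple overlap empty) are assumptions made for illustration:

```python
import numpy as np

I0 = [0, 1, 2]                  # indices of the three arcs U0, U1, U2
I1 = [(0, 1), (0, 2), (1, 2)]   # nonempty ordered pairwise overlaps

# coboundary d: Z^0 -> Z^1, (df)(a, b) = f(b) - f(a)
d0 = np.zeros((len(I1), len(I0)))
for r, (a, b) in enumerate(I1):
    d0[r, b] += 1.0
    d0[r, a] -= 1.0

rank_d0 = np.linalg.matrix_rank(d0)
h0 = len(I0) - rank_d0   # dim ker d0
h1 = len(I1) - rank_d0   # triple overlaps are empty, so d1 = 0
print(h0, h1)  # 1 1
```

The answer \( {h}^{0} = {h}^{1} = 1 \) matches the cohomology of \( {S}^{1} \), as the refinement-invariance of the theory predicts.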
Using that \( d \circ d = 0 \), we trivially get that the exact forms \[ {B}^{p}\left( M\right) = d\left( {{\Omega }^{p - 1}\left( M\right) }\right) \] are a subset of the closed forms \[ {Z}^{p}\left( M\right) = \left\{ {\omega \in {\Omega }^{p}\left( M\right) : d\omega = 0}\right\} . \]
18462_Riemannian geometry
Corollary 46. (The Divergence Theorem) If \( X \) is a vector field on \( \left( {M, g}\right) \) with compact support, then

\[ {\int }_{M}\operatorname{div}X \cdot d\operatorname{vol} = 0 \]
Proof. Just observe

\[ \operatorname{div}X \cdot d\mathrm{{vol}} = {L}_{X}d\mathrm{{vol}} \]

\[ = {i}_{X}d\left( {d\mathrm{{vol}}}\right) + d\left( {{i}_{X}d\mathrm{{vol}}}\right) \]

\[ = d\left( {{i}_{X}d\mathrm{{vol}}}\right) \]

and use Stokes' theorem.
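As a symbolic sanity check in the Euclidean plane (where a rapidly decaying field stands in for compact support — an assumption of this sketch), the total integral of a divergence does vanish:

```python
import sympy as sp

x, y = sp.symbols('x y', real=True)
# Gaussian decay replaces compact support for this illustration.
bump = sp.exp(-x**2 - y**2)
X1, X2 = x * bump, y * bump
divX = sp.diff(X1, x) + sp.diff(X2, y)

# integrate div X over all of R^2
total = sp.simplify(sp.integrate(divX, (x, -sp.oo, sp.oo), (y, -sp.oo, sp.oo)))
print(total)  # 0
```

The cancellation is exactly the statement of the corollary: the flux of a field that dies off at infinity sums to zero.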
6
36
Differential Equations and Dynamical Systems
Partial differential equations
Proposition 4. Let \( A \) be a simple algebra over \( K \) . Then every automorphism \( \alpha \) of \( A \) over \( K \) is of the form \( x \rightarrow {a}^{-1}{xa} \) with \( a \in {A}^{ \times } \) . Take a basis \( \left\{ {{a}_{1},\ldots ,{a}_{N}}\right\} \) of \( A \) over \( K \) . Then every element of \( A \otimes {A}^{0} \) can be written in one and only one way as \( \sum {a}_{i} \otimes {b}_{i} \), with \( {b}_{i} \in {A}^{0} \) for \( 1 \leq i \leq N \) . By prop. 3, \( \alpha \) can therefore be written as \( x \rightarrow \sum {a}_{i}x{b}_{i} \) . Writing that \( \alpha \left( {xy}\right) = \alpha \left( x\right) \alpha \left( y\right) \) for all \( x, y \), we get \[ 0 = \sum {a}_{i}{xy}{b}_{i} - \sum {a}_{i}x{b}_{i}\alpha \left( y\right) = \sum {a}_{i}x\left( {y{b}_{i} - {b}_{i}\alpha \left( y\right) }\right) . \] For each \( y \in A \), this is so for all \( x \) ; by prop. 3, we must therefore have \( y{b}_{i} = {b}_{i}\alpha \left( y\right) \) . In particular, since this gives \( y\left( {{b}_{i}z}\right) = {b}_{i}\alpha \left( y\right) z \) for all \( y \) and \( z \) in \( A \), \( {b}_{i}A \) is a two-sided ideal in \( A \), hence \( A \) or \( \{ 0\} \), for all \( i \), so that \( {b}_{i} \) is either 0 or invertible in \( A \) . As \( \alpha \) is an automorphism, the \( {b}_{i} \) cannot all be 0 ; taking \( a = {b}_{i} \neq 0 \), we get the announced result. COROLLARY. Let \( \alpha \) and \( a \) be as in proposition 4, and let \( {a}^{\prime } \in A \) be such that \( {a}^{\prime }\alpha \left( x\right) = x{a}^{\prime } \) for all \( x \in A \) . Then \( {a}^{\prime } = {\xi a} \) with \( \xi \in K \) . In fact, the assumption can be written as \( {a}^{\prime }{a}^{-1}x = x{a}^{\prime }{a}^{-1} \) for all \( x \) ; this means that \( {a}^{\prime }{a}^{-1} \) is in the center \( K \) of \( A \) . 
Proposition 4 is generally known as "the theorem of Skolem-Noether" (although that name is sometimes reserved for a more complete statement involving a simple subalgebra of \( A \) ). One can prove, quite similarly, that every derivation of \( A \) is of the form \( x \rightarrow {xa} - {ax} \), with \( a \in A \) . We will also need a stronger result than corollary 2 of prop. 3 ; this will appear as a corollary of the following:
7839_Basic Number Theory
Proposition 4. Let \( A \) be a simple algebra over \( K \) . Then every automorphism \( \alpha \) of \( A \) over \( K \) is of the form \( x \rightarrow {a}^{-1}{xa} \) with \( a \in {A}^{ \times } \) .
Take a basis \( \left\{ {{a}_{1},\ldots ,{a}_{N}}\right\} \) of \( A \) over \( K \) . Then every element of \( A \otimes {A}^{0} \) can be written in one and only one way as \( \sum {a}_{i} \otimes {b}_{i} \), with \( {b}_{i} \in {A}^{0} \) for \( 1 \leq i \leq N \) . By prop. 3, \( \alpha \) can therefore be written as \( x \rightarrow \sum {a}_{i}x{b}_{i} \) . Writing that \( \alpha \left( {xy}\right) = \alpha \left( x\right) \alpha \left( y\right) \) for all \( x, y \), we get

\[ 0 = \sum {a}_{i}{xy}{b}_{i} - \sum {a}_{i}x{b}_{i}\alpha \left( y\right) = \sum {a}_{i}x\left( {y{b}_{i} - {b}_{i}\alpha \left( y\right) }\right) . \]

For each \( y \in A \), this is so for all \( x \) ; by prop. 3, we must therefore have \( y{b}_{i} = {b}_{i}\alpha \left( y\right) \) . In particular, since this gives \( y\left( {{b}_{i}z}\right) = {b}_{i}\alpha \left( y\right) z \) for all \( y \) and \( z \) in \( A \), \( {b}_{i}A \) is a two-sided ideal in \( A \), hence \( A \) or \( \{ 0\} \), for all \( i \), so that \( {b}_{i} \) is either 0 or invertible in \( A \) . As \( \alpha \) is an automorphism, the \( {b}_{i} \) cannot all be 0 ; taking \( a = {b}_{i} \neq 0 \), we get the announced result.
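The conclusion of prop. 4 can be illustrated in the simple algebra \( {M}_{2}\left( \mathbb{R}\right) \): conjugation by an invertible element is multiplicative, and the commutator map is a derivation. This is only a numeric spot check on random matrices, not a proof:

```python
import numpy as np

a = np.array([[1.0, 2.0], [0.0, 1.0]])   # an invertible element of M_2(R)
a_inv = np.linalg.inv(a)
alpha = lambda m: a_inv @ m @ a          # inner automorphism x -> a^{-1} x a
D = lambda m: m @ a - a @ m              # inner derivation x -> xa - ax

rng = np.random.default_rng(0)
xm, ym = rng.standard_normal((2, 2)), rng.standard_normal((2, 2))

assert np.allclose(alpha(xm @ ym), alpha(xm) @ alpha(ym))   # multiplicative
assert np.allclose(D(xm @ ym), D(xm) @ ym + xm @ D(ym))     # Leibniz rule
print("inner automorphism and inner derivation checks pass")
```

Prop. 4 says that in a simple algebra every automorphism arises this way; the analogous claim for derivations is the remark made after the corollary.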
2
9
Algebra
Nonassociative rings and algebras
Exercise 9.1 Solve the following differential equations. 1. \( \frac{\mathrm{d}y}{\mathrm{\;d}x} = \frac{x + y}{x - y} \) 2. \( \sqrt{{x}^{2} + {y}^{2}}\mathrm{\;d}x = y\mathrm{\;d}y \) 3. \( \left( {{x}^{2} + {y}^{2}}\right) \mathrm{d}x - {xy}\mathrm{\;d}y = 0 \) 4. \( \frac{\mathrm{d}y}{\mathrm{\;d}x} = \frac{1 + y}{1 - x} \) 5. \( {ay}\ln \left( x\right) = \frac{\mathrm{d}y}{\mathrm{\;d}x} - {ay}, a \in \mathbb{R} \) ## 9.5 Exact Equations Let \( F\left( {x, y}\right) \) be a differentiable function; then \( \mathrm{d}F = \frac{\partial F}{\partial x}\mathrm{\;d}x + \frac{\partial F}{\partial y}\mathrm{\;d}y \) . If \( F \) is constant, then \( \mathrm{d}F = 0 \), so \[ M\left( {x, y}\right) \mathrm{d}x + N\left( {x, y}\right) \mathrm{d}y = \frac{\partial F}{\partial x}\mathrm{\;d}x + \frac{\partial F}{\partial y}\mathrm{\;d}y = 0 \tag{9.6} \] hence \( \frac{\partial F}{\partial x} = M\left( {x, y}\right) ,\frac{\partial F}{\partial y} = N\left( {x, y}\right) \) . The differential equation (9.6) is said to be exact if there exists \( F\left( {x, y}\right) \) such that: \[ M\left( {x, y}\right) = \frac{\partial F\left( {x, y}\right) }{\partial x}\text{ and }N\left( {x, y}\right) = \frac{\partial F\left( {x, y}\right) }{\partial y} \] In this case, the Eq. (9.6) can be written as: \[ \mathrm{d}F = M\left( {x, y}\right) \mathrm{d}x + N\left( {x, y}\right) \mathrm{d}y = 0 \] and its solutions are given by \( F\left( {x, y}\right) = \) constant.
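The exactness test \( \partial M/\partial y = \partial N/\partial x \) and the reconstruction of \( F \) can be carried out with a computer algebra system. The pair \( \left( {M, N}\right) \) below is an assumed example chosen for illustration, not one of the exercises:

```python
import sympy as sp

x, y = sp.symbols('x y')
# a sample exact pair (an assumption for illustration)
M, N = 2*x*y + 1, x**2 - 3*y**2
assert sp.diff(M, y) == sp.diff(N, x)        # exactness: M_y == N_x

# reconstruct F with F_x = M and F_y = N
F = sp.integrate(M, x)                       # determines F up to g(y)
F += sp.integrate(N - sp.diff(F, y), y)      # fix the y-dependent part
assert sp.simplify(sp.diff(F, x) - M) == 0
assert sp.simplify(sp.diff(F, y) - N) == 0
print(F)   # F(x, y) = x^2*y + x - y^3; level sets F = constant solve (9.6)
```

The two integration steps mirror the textbook recipe: integrate \( M \) in \( x \), then adjust by a function of \( y \) alone so that \( {F}_{y} = N \).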
22067_Algebraic and Differential Methods for Nonlinear Control Theory_ Elements of Commutative Algebra and
Exercise 9.1 Solve the following differential equations.

1. \( \frac{\mathrm{d}y}{\mathrm{\;d}x} = \frac{x + y}{x - y} \)
Null
6
35
Differential Equations and Dynamical Systems
Ordinary differential equations
Corollary 3.2. Let the notation be as above. If \( \gcd \left( {m, n}\right) = 1 \) then \( \# A\left( {\mathbb{F}}_{{q}^{mn}}\right) = \) \( \# B\left( {\mathbb{F}}_{{q}^{m}}\right) \) . If \( n \mid m \) then \( \# B\left( {\mathbb{F}}_{{q}^{m}}\right) = {\left( \# A\left( {\mathbb{F}}_{{q}^{m/n}}\right) \right) }^{n} \) . Proof. Let \( {\zeta }_{n} = \exp \left( {{2\pi }\sqrt{-1}/n}\right) \) and let \( {P}_{A}\left( T\right) = \mathop{\prod }\limits_{{i = 1}}^{{2g}}\left( {1 - {\alpha }_{i}T}\right) \) . Then \( {P}_{B}\left( T\right) = \) \( \mathop{\prod }\limits_{i}\mathop{\prod }\limits_{{j = 1}}^{n}\left( {1 - {\zeta }_{n}^{j}{\alpha }_{i}^{1/n}T}\right) \) and \( \# B\left( {\mathbb{F}}_{{q}^{m}}\right) = \mathop{\prod }\limits_{i}\mathop{\prod }\limits_{j}\left( {1 - {\zeta }_{n}^{mj}{\alpha }_{i}^{m/n}}\right) \) . When \( \gcd \left( {m, n}\right) = 1 \) it is the case that the groups \( A\left( {\mathbb{F}}_{{q}^{nm}}\right) \) and \( B\left( {\mathbb{F}}_{{q}^{m}}\right) \) are isomorphic. We return to the curves produced by the method of [11]. The original elliptic curve \( E \) over \( {\mathbb{F}}_{{q}^{g}} \) has \( {P}_{E}\left( T\right) = {T}^{2} + {aT} + {q}^{g} \) (with \( \left| a\right| < 2\sqrt{{q}^{g}} \) ) and so, by Proposition 3.1, the abelian variety \( A \) has \( {P}_{A}\left( T\right) = {T}^{2g} + a{T}^{g} + {q}^{g} \) . If \( E \) is non-supersingular then \( a \) is coprime to \( q \) and so \( A \) is also non-supersingular (see [5]). In the construction of [11] we have \( \operatorname{Jac}\left( C\right) \) isogenous to \( A \) and so it has the same polynomial \( P\left( T\right) \) . This proves the following result.
24975_Public-Key Cryptography and Computational Number Theory_ Proceedings of the International Conference
Corollary 3.2. Let the notation be as above. If \( \gcd \left( {m, n}\right) = 1 \) then \( \# A\left( {\mathbb{F}}_{{q}^{mn}}\right) = \) \( \# B\left( {\mathbb{F}}_{{q}^{m}}\right) \) . If \( n \mid m \) then \( \# B\left( {\mathbb{F}}_{{q}^{m}}\right) = {\left( \# A\left( {\mathbb{F}}_{{q}^{m/n}}\right) \right) }^{n} \) .
Proof. Let \( {\zeta }_{n} = \exp \left( {{2\pi }\sqrt{-1}/n}\right) \) and let \( {P}_{A}\left( T\right) = \mathop{\prod }\limits_{{i = 1}}^{{2g}}\left( {1 - {\alpha }_{i}T}\right) \) . Then \( {P}_{B}\left( T\right) = \) \( \mathop{\prod }\limits_{i}\mathop{\prod }\limits_{{j = 1}}^{n}\left( {1 - {\zeta }_{n}^{j}{\alpha }_{i}^{1/n}T}\right) \) and \( \# B\left( {\mathbb{F}}_{{q}^{m}}\right) = \mathop{\prod }\limits_{i}\mathop{\prod }\limits_{j}\left( {1 - {\zeta }_{n}^{mj}{\alpha }_{i}^{m/n}}\right) \) .

When \( \gcd \left( {m, n}\right) = 1 \) it is the case that the groups \( A\left( {\mathbb{F}}_{{q}^{nm}}\right) \) and \( B\left( {\mathbb{F}}_{{q}^{m}}\right) \) are isomorphic.
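Point counts over extensions follow a simple linear recurrence in the Frobenius power sums. The sketch below uses the sign convention \( {P}_{E}\left( T\right) = {T}^{2} + {aT} + q \) from this section and checks the elliptic case against a brute-force count; the sample curve \( {y}^{2} = {x}^{3} + x \) over \( {\mathbb{F}}_{5} \) is an assumption chosen for illustration:

```python
# With P_E(T) = T^2 + a*T + q and roots alpha, beta, the power sums
# s_m = alpha^m + beta^m satisfy s_0 = 2, s_1 = -a,
# s_m = -a*s_{m-1} - q*s_{m-2}, and #E(F_{q^m}) = 1 - s_m + q^m.
def counts(a, q, m_max):
    s = [2, -a]
    for m in range(2, m_max + 1):
        s.append(-a * s[m - 1] - q * s[m - 2])
    return [1 - s[m] + q**m for m in range(1, m_max + 1)]

# brute-force count for y^2 = x^3 + x over F_5 (plus the point at infinity)
q = 5
pts = 1 + sum(1 for x in range(q) for y in range(q)
              if (y * y - (x**3 + x)) % q == 0)
print(pts, counts(-2, q, 3))   # 4 [4, 32, 148]
```

Here \( \# E\left( {\mathbb{F}}_{5}\right) = 4 \) forces \( a = -2 \), and the recurrence then predicts the counts over \( {\mathbb{F}}_{25} \) and \( {\mathbb{F}}_{125} \) without further point enumeration.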
2
5
Algebra
Field theory and polynomials