diff --git "a/stack-exchange/math_stack_exchange/shard_110.txt" "b/stack-exchange/math_stack_exchange/shard_110.txt" deleted file mode 100644--- "a/stack-exchange/math_stack_exchange/shard_110.txt" +++ /dev/null @@ -1,6086 +0,0 @@ -TITLE: Why do ratios of these Fibonacci-type sequences approach $\pi$?
-QUESTION [8 upvotes]: Define $A_n$ by $A_1=12$, $A_2=18$, and $A_n=A_{n-1}+A_{n-2}$ for $n\ge3$. Similarly define $B_n$ by $B_1=5$, $B_2=5$, and $B_n=B_{n-1}+B_{n-2}$ for $n\ge3$.
-Terms of $A_n$: $12, 18, 30, 48, 78,\dots$
-Terms of $B_n$: $5, 5, 10, 15, 25,\dots$
-I found that dividing $A_n$ by $B_n$ as $n$ approaches $\infty$ appears to result in:
-$$\lim\limits_{n\to \infty}\left ( \frac{A_n}{B_n} \right ) = \pi$$
-My question is, why does the ratio appear to converge towards $\pi$, and what is the significance of $5, 12, 18$ as to why this happens?

REPLY [4 votes]: Recurrences of the form
-$$T_n=T_{n-1}+T_{n-2}$$ are linear and known to have a general solution of the form $$T_n=C_0z_0^n+C_1z_1^n,$$ where $z_0,z_1$ are the roots of the "characteristic equation", $z^2=z+1$.
-By the usual formulas,
-$$z_0,z_1=\frac{1\pm\sqrt5}2.$$
-Using the initial conditions,
-$$T_0=C_0+C_1,\\T_1=C_0z_0+C_1z_1.$$
-As $|z_0|>|z_1|$, the first term quickly dominates and
-$$T_n\approx\frac{T_1-z_1T_0}{z_0-z_1}z_0^n.$$
-In your case (re-indexing so that $A_0=12$, $A_1=18$ and $B_0=B_1=5$), for large $n$,
-$$\frac{A_n}{B_n}\approx\frac{A_1-z_1A_0}{B_1-z_1B_0}=\frac{3\sqrt5+9}5=3.1416407864999\cdots$$
-So the ratio does converge, but its limit is only close to $\pi\approx 3.1415926$, not equal to it.<|endoftext|>
-TITLE: What is a direct limit of exact sequences?
-QUESTION [6 upvotes]: (Hatcher Section 3.3, page 243) First, recalling the definition of a directed system of groups: - -Suppose one has abelian groups $G_\alpha$ indexed by some partially ordered index set $I$ having the property that for each pair $\alpha, \beta \in I$ there exists $\gamma \in I$ with $\alpha \leq \gamma$ and $\beta \leq \gamma$. Such an $I$ is called a directed set. 
Suppose also that for each pair $\alpha \leq \beta$ one has a homomorphism $f_{\alpha\beta} : G_\alpha \to G_\beta$, such that $f_{\alpha\alpha} = 1$ for each $\alpha$, and if $\alpha \leq \beta \leq \gamma$ then $f_{\alpha\gamma}$ is the composition of $f_{\alpha\beta}$ and $f_{\beta\gamma}$.... this data... is called a directed system of groups. - -Hatcher then gives two alternate definitions for the direct limit group $\varinjlim G_\alpha$ for a directed system of groups $\{G_\alpha\}_{\alpha \in I}$.
-"The shorter definition": - -... the quotient of the direct sum $\bigoplus_\alpha G_\alpha$ by the subgroup generated by all the elements of the form $a - f_{\alpha\beta}(a)$ for $a \in G_\alpha$, where we are viewing each $G_\alpha$ as a subgroup of $\bigoplus_\alpha G_\alpha$. - -and "the other definition, which is often more convenient to work with": - -Define an equivalence relation on the set $\bigsqcup_\alpha G_\alpha$ by $a \sim b$ if $f_{\alpha\gamma}(a) = f_{\beta\gamma}(b)$ for some $\gamma$, where $a \in G_\alpha$ and $b \in G_\beta$.... It could also be described as the equivalence relation generated by setting $a \sim f_{\alpha\beta}(a)$. Any two equivalence classes $[a]$ and $[b]$ have representatives $a'$ and $b'$ lying in the same $G_\gamma$, so define $[a] + [b] = [a' + b']$. One checks that this is well-defined and gives an abelian group structure to the set of equivalence classes. It is easy to check further that the map sending an equivalence class $[a]$ to the coset of $a$ in $\varinjlim G_\alpha$ is a homomorphism, with an inverse induced by the map $\sum_i a_i \mapsto \sum_i [a_i]$ for $a_i \in G_\alpha$. Thus we can identify $\varinjlim G_\alpha$ with the group of equivalence classes $[a]$. - -Now, Exercise 17 in that section (page 259) asks, - -Show that a direct limit of exact sequences is exact. 
- -The problem continues with - -More generally, show that homology commutes with direct limits: If $\{C_\alpha, f_{\alpha\beta}\}$ is a directed system of chain complexes, with the maps $f_{\alpha\beta} : C_\alpha \to C_\beta$ chain maps, then $H_n(\varinjlim C_\alpha) = \varinjlim H_n(C_\alpha)$. - -For now, I'd like to focus on the first part of the exercise.
-I am having the problem I often have with this text, which is that when I read a longer passage informally giving a definition or a structure, I feel like I follow it fine, sentence by sentence, while reading it - but then, at the end of the paragraph, I don't have a grasp of what was just discussed.
-I feel as though I can individually follow each of the definitions given above, but in response to "Show that the direct limit of exact sequences is exact," I draw a blank.
-My question here: What is a "direct limit of exact sequences"? How do I write it down? (From there, I imagine that showing that structure, whatever it is, is exact, will follow straightforwardly.)

REPLY [5 votes]: A direct limit of exact sequences is like a direct limit of groups, except you replace each individual group by an exact sequence. So you have a directed set $I$, and for each $\alpha\in I$ you have an exact sequence $A_\alpha\stackrel{s_{\alpha}}{\to}B_\alpha\stackrel{t_\alpha}{\to} C_\alpha$, and whenever $\alpha<\beta$ you have maps $f_{\alpha\beta}:A_\alpha\to A_\beta$, $g_{\alpha\beta}:B_\alpha\to B_\beta$, and $h_{\alpha\beta}:C_\alpha\to C_\beta$, such that $g_{\alpha\beta}s_\alpha=s_\beta f_{\alpha\beta}$ and $h_{\alpha\beta}t_\alpha=t_\beta g_{\alpha\beta}$ (i.e., the obvious diagram commutes). Furthermore, we require that $f_{\alpha\gamma}=f_{\beta\gamma}f_{\alpha\beta}$ and similarly for $g$ and $h$, so we have three separate directed systems of groups. 
-You can then take the direct limits $\varinjlim A_\alpha$, $\varinjlim B_\alpha$, and $\varinjlim C_\alpha$, and you can show that the maps $s_\alpha$ and $t_\alpha$ induce maps $s:\varinjlim A_\alpha\to \varinjlim B_\alpha$ and $t:\varinjlim B_\alpha\to \varinjlim C_\alpha$. The question is then asking you to show that $s$ and $t$ also form an exact sequence.
-To put it another way, directed systems of groups indexed by $I$ form a category: a map between a directed system $(A_\alpha, f_{\alpha\beta})$ and a directed system $(B_\alpha,g_{\alpha\beta})$ consists of a map $s_\alpha:A_\alpha\to B_\alpha$ for each $\alpha$ such that $g_{\alpha\beta}s_\alpha=s_\beta f_{\alpha\beta}$ for all $\alpha\leq\beta$. Such a map induces a map between the direct limits (i.e. taking the direct limit is a functor from the category of directed systems to the category of groups). You can then define an exact sequence of directed systems of groups to be a sequence of maps of directed systems such that for each $\alpha$, the $\alpha$-terms of the sequence form an exact sequence of groups. The question is then whether given an exact sequence of directed systems, the induced sequence on their direct limits is also exact.<|endoftext|>
-TITLE: A harmonic function with sublinear growth at infinity is constant
-QUESTION [6 upvotes]: Show that if $u$ is harmonic in $\mathbb R^n$ and $u=o(|x|)$, then $u$ is a constant. (Hint: use the solid version of the mean value property $u(x)=\frac{1}{\omega_n R^n} \int_{B_R(x)} u(y)dy$, and estimate $|u(x)-u(x')|$.)
-Can anyone tell me how to use this hint?

REPLY [5 votes]: The key point is that for fixed $x,x'$ the volume of the symmetric difference $B(x,R)\triangle B(x',R)$ is $O(R^{n-1})$ as $R\to\infty$. Hence, the difference of integrals of $u$ over these two balls is $o(R^n)$. 
Dividing by the volume of $B(x,R)$ we find $u(x)-u(x') = o(1)$, which means $u(x)=u(x')$ as claimed.<|endoftext|>
-TITLE: Is there a geometrical interpretation of this equality $2\cdot 4\cdot 6\cdot\ldots\cdot(2n)=2^nn!$?
-QUESTION [5 upvotes]: $$2\cdot 4\cdot 6\cdot\ldots\cdot(2n)=2^nn!$$
-How can it be seen in a plane?
-I have found many proofs by induction but I wish to understand it geometrically.

REPLY [14 votes]: I've actually been meaning to think about this exact question for a while, in a specific context:

Theorem: The $n$-dimensional cube $Q_n$ has exactly $2^n \cdot n!$ symmetries, including reflections.

I've always seen it listed that the size $\left| \operatorname{Sym}(Q_n)\right|$ of the symmetry group of $Q_n$ is $2^n \cdot n!$, and only recently did I learn this happens to be equal to the double factorial $(2n)!! = 2 \cdot 4 \cdot \ldots \cdot 2n$. But shamefully, I've never known why either should count symmetries of the cube!
-We'll look first at why $\left| \operatorname{Sym}(Q_n)\right| = 2^n \cdot n!$. We'll pick a particular vertex $V$ of $Q_n$, say the front-upper-left one in the following picture.

-Now, the orbit-stabilizer theorem tells us that
-$$\left| \operatorname{Sym}(Q_n)\right| = (\text{number of places $V$ can go}) \cdot (\text{number of symmetries fixing }V),$$
-where it's apparent that $V$ can get sent to any of the $2^n$ vertices. But counting symmetries that fix $V$ can be a bit tricky. More pictures are in order.

-On the left we see vertex $V$ in red, and some rotations implied by the green arrows. On the right, there's a hyperplane (implied in green) through which we can reflect, to obtain another symmetry.
-But what the two pictures really have in common is the blue triangle: it's the key! To get the triangle, take the vertices that are just one step away from $V$ and connect them (see also, vertex figure). 
A moment's thought should convince you of two things:

-That for an $n$-dimensional cube, you won't necessarily get a $2$-dimensional triangle, rather an $(n - 1)$-dimensional simplex.
-Further, symmetries fixing $V$ are exactly symmetries of this simplex; that is, symmetries fixing $V$ are in bijection with symmetries of a regular simplex.

-However, counting symmetries of the $(n-1)$-simplex is quite tractable, and there are $n!$ of these, hence $\left| \operatorname{Sym}(Q_n)\right| = 2^n \cdot n!$.

-Now it remains to show that $\left| \operatorname{Sym}(Q_n)\right| = 2 \cdot 4 \cdot \ldots \cdot 2n = (2n)!!$. Our approach will appeal to the fact that the cube $Q_n$ is a regular polytope, which means that its symmetry group acts transitively on its set of flags.
-A flag, you say, more terminology? Yes, but it's quite manageable. A flag is simply a nested set of faces of the cube, one for each dimension. For example, in the $3$-dimensional cube $Q_3$, a flag looks like
-$$\text{vertex $\subset$ edge $\subset 2$-dimensional face}.$$
-Now, the definition of regularity just says that we can move any such chain of nested faces to any other chain, using one of the symmetries of the cube.
-Put another way, symmetries of the cube are in bijection with flags of the cube! Now, those are quite easy to count, since each face of the cube $Q_n$ is some cube $Q_k$ of dimension $k$, which has exactly $2k$ faces of dimension $k-1$.
-To start specifying a flag in the $n$-dimensional cube $Q_n$, we have $2n$ faces of maximal dimension $n-1$ to choose from (e.g., $6$ faces of dimension $2$ in $Q_3$). For our next step down, there are $2(n - 1)$ faces of dimension $n - 2$, since we're only allowed to choose a maximal face of the face we've already chosen (e.g., having chosen one of the six square faces in $Q_3$, we choose one of its $4$ edges). We continue like this until we've chosen one of the $4$ edges of a $2$-dimensional face, at which point we choose one of its $2$ vertices. 
Now we're done, having made $2n \cdot 2(n - 1) \cdot \ldots \cdot 4 \cdot 2$ choices and completely determined a flag!
-Thus $$2^n \cdot n! = \left| \operatorname{Sym}(Q_n)\right| = 2 \cdot 4 \cdot \ldots \cdot 2n.$$<|endoftext|>
-TITLE: Covariance of multinomial distribution
-QUESTION [5 upvotes]: Let $X = (X_1,\ldots, X_k)$ be multinomially distributed based upon $n$ trials with parameters $p_1,\ldots,p_k$ such that the sum of the parameters is equal to $1$. I am trying to find, for $i \neq j$, $\operatorname{Var}(X_i + X_j)$. Knowing this will be sufficient to find the $\operatorname{Cov}(X_i,X_j)$.
-Now $X_i \sim \text{Bin}(n, p_i)$. The natural thing to say would be that $X_i + X_j\sim \text{Bin}(n, p_i+p_j)$ (and this would, indeed, yield the right result), but I'm not sure if this is indeed so.
-Suggestions for how to go about this are greatly appreciated!
-UPDATE: @grand_chat very nicely answered the question about the distribution of $X_i + X_j$. How would we go about computing the variance of $X_i - X_j$? As @grand_chat correctly points out, this cannot be binomial because it is not guaranteed to be positive. How, then, should one go about computing the variance of this random variable?
-UPDATE 2: The answer in this link answers the question in my UPDATE.

REPLY [4 votes]: There are several ways to do this, but one neat proof of the covariance of a multinomial uses the property you mention that $X_i + X_j \sim \text{Bin}(n, p_i + p_j)$, which some people call the "lumping" property.
-Covariance in a Multinomial
-Given $(X_1,...,X_k) \sim Mult_k(n , \vec{p})$ find $Cov(X_i,X_j)$ for all $i,j$.
-\begin{aligned}
 - & If \ i = j, Cov(X_i, X_i) = Var(X_i) = np_i(1 - p_i)
 - \\ - \\
 - & If \ i \neq j, Cov(X_i, X_j) = C \ \ \text{ i.e. 
what we are trying to find}
 - \\ - \\
 - & Var(X_i + X_j) = Var(X_i) + Var(X_j) + 2Cov(X_i, X_j)
 - \\ - \\
 - & Var(X_i + X_j) = np_i (1 - p_i) + np_j (1 - p_j)+ 2C
 - \\ - \\
 - & \text{By the lumping property } X_i + X_j \sim Bin(n, p_i + p_j)
 - \\ - \\
 - & n(p_i + p_j)(1 - (p_i + p_j)) = np_i(1 - p_i) + np_j(1 - p_j) + 2C
 - \\
 - & (p_i + p_j)(1 - (p_i + p_j)) = p_i(1 - p_i) + p_j(1 - p_j) + \frac{2C}{n}
 - \\
 - & C = - n p_i p_j
-\end{aligned}<|endoftext|>
-TITLE: An elliptic integral?
-QUESTION [7 upvotes]: I ran into an integral a little while ago that looks like an elliptic integral of the first kind; however, I am having trouble seeing how it can be put into the standard form. I've tried messing around with some trig identities but didn't get anywhere. Perhaps there is some other definition that I'm missing. Here it is. If it is solvable by contour integration I would be open to this as well.
-$$ \int_0^{\phi_0} \frac{d\phi}{\sqrt{1-a\sin{\phi}}}$$
-EDIT
-If it makes it any easier, let $a=2$ and $\phi_0=\frac\pi6$. By standard form I mean:
-$$ \int_0^{\phi_0} \frac{d\phi}{\sqrt{1-k^2\sin^2{\phi}}}=F(\phi_0,k) $$
-A Mathematica calculation reveals (using the constants I mentioned):
-$$\int_0^{\pi/6} \frac{d\phi}{\sqrt{1-2\sin{\phi}}}=i\left(2F(\pi/4,4)-K(1/4)\right)$$
-Where $F$ is an incomplete elliptic integral of the first kind and $K$ is a complete elliptic integral of first kind. Note that Mathematica uses a different convention where $k$ is replaced by the parameter $m=k^2$. This answer leads me to believe that the integral should be split up somehow.

REPLY [7 votes]: Let $\displaystyle\;\mathcal{I} = \int_0^{\phi_1} \frac{d\phi}{\sqrt{1-a\sin\phi}}\;$ be the integral at hand. 
Since
-$$\sin\phi = -\cos\left(\phi + \frac{\pi}{2}\right) = 2 \sin^2\left(\frac{\phi}{2} + \frac{\pi}{4}\right) - 1,$$
-changing variables to $\theta = \frac{\phi}{2} + \frac{\pi}{4}$ gives
-$$\begin{align}
-\mathcal{I}
-&= \int_{\theta_0}^{\theta_1} \frac{2d\theta}{\sqrt{(1+a)-2a\sin^2\theta}}
-= \frac{2}{\sqrt{1+a}}\int_{\theta_0}^{\theta_1}
-\frac{d\theta}{\sqrt{1-k^2\sin^2\theta}}\\
-&= \frac{2}{\sqrt{1+a}}\left[ F(\theta_1,k) - F\left(\theta_0, k\right) \right]
-\end{align}
-$$
-where
-$\displaystyle\;\theta_0 = \frac{\pi}{4}\;$,
-$\displaystyle\;\theta_1 = \frac{\phi_1}{2} + \frac{\pi}{4}\;$
-and $\displaystyle\;k = \sqrt{\frac{2a}{1+a}}$.
-If $0 < a < 1$, this is what we need.
-If $a > 1$, the modulus $k$ for the elliptic integral is bigger than $1$.
-This is usually undesirable. Sometimes this will lead to spurious complex numbers in the expression of a real integral.
-We can resolve this by another change of variable $\sin\psi = k\sin\theta$.
-Letting $\psi_0 = \sin^{-1}(k\sin\theta_0)$ and $\psi_1 = \sin^{-1}(k\sin\theta_1)$,
-we have
-$$\frac{\sqrt{1+a}}{2}\mathcal{I}
-= \int_{\theta_0}^{\theta_1} \frac{d\sin\theta}{\sqrt{(1-\sin^2\theta)(1-k^2\sin^2\theta)}}
-= \int_{\psi_0}^{\psi_1} \frac{k^{-1}d\sin\psi}{\sqrt{(1-k^{-2}\sin^2\psi)(1-\sin^2\psi)}}$$
-This leads to an alternate expression for the integral at hand:
-$$\mathcal{I} = \sqrt{\frac{2}{a}}\int_{\psi_0}^{\psi_1} \frac{d\psi}{\sqrt{1-k^{-2}\sin^2\psi}}
-= \sqrt{\frac{2}{a}}\left[F(\psi_1;k^{-1}) - F(\psi_0;k^{-1})\right]
-$$
-For the test case $\phi_1 = \frac{\pi}{6}$, this gives us
-$$\mathcal{I}
-= K\left(\sqrt{\frac34}\right) - F\left(\sin^{-1}\sqrt{\frac23},\sqrt{\frac34}\right)$$
-Using the command EllipticK[3/4] - EllipticF[ArcSin[Sqrt[2/3]],3/4] on WA, one finds $$\mathcal{I} \approx 1.078257823749821617719337499400161014432055108246412680182...$$
-This matches numerically the result i*(2*EllipticF[Pi/4,4]-EllipticK[1/4]) returned by symbolically integrating the integral on 
WA.<|endoftext|>
-TITLE: Planes through the origin are subspaces of $\Bbb{R}^3$
-QUESTION [5 upvotes]: I'm reading the book Elementary Linear Algebra by Anton and Rorres, and the following has me a bit confused:
-"If $\mathbf{u}$ and $\mathbf{v}$ are vectors in a plane $W$ through the origin of $\Bbb{R}^3$, then it is evident geometrically that $\mathbf{u + v}$ and $k\mathbf{u}$ also lie in the same plane $W$ for any scalar $k$ (Figure 4.2.3). Thus $W$ is closed under addition and scalar multiplication."

-It says that it is evident that $k\mathbf{u}$ also lies in the same plane $W$, but I feel like if $k$ is sufficiently large, the vector $k\mathbf{u}$ would extend outside of the vector space $W$. Can someone explain this to me a little more clearly please?

REPLY [2 votes]: You can also approach it algebraically.
-A plane $W\subset\mathbb{R}^{3}$ which passes through the origin has general equation given by
-\begin{align*}
-ax + by + cz = 0
-\end{align*}
-Thus if the vectors $\textbf{u} = (u_{1},u_{2},u_{3})\in W$ and $\textbf{v} = (v_{1},v_{2},v_{3})\in W$, then their coordinates must satisfy
-\begin{align*}
-\begin{cases}
-au_{1} + bu_{2} + cu_{3} = 0\\
-av_{1} + bv_{2} + cv_{3} = 0
-\end{cases}\Longrightarrow a(u_{1} + v_{1}) + b(u_{2} + v_{2}) + c(u_{3} + v_{3}) = 0 \Rightarrow \textbf{u}+\textbf{v}\in W
-\end{align*}
-A similar reasoning applies to $k\textbf{u}$.<|endoftext|>
-TITLE: derivative of a function divided by the same function
-QUESTION [5 upvotes]: I've been trying to understand and look for a proof that for example
-(1) $$\frac{\frac{d}{dx}f(x)}{f(x)}$$
-is equal to
-(2) $$\frac{d}{dx}\ln[f(x)]$$
-Can someone help me understand why 1 & 2 are equal?

REPLY [7 votes]: $$\frac{d}{dx}[\ln f(x)]= \frac{1}{f(x)} \cdot f'(x) = \frac{1}{f(x)}\cdot \frac{d}{dx}f(x) = \frac{\frac{d}{dx}f(x)}{f(x)}$$
-where the first step is true by the chain rule.<|endoftext|>
-TITLE: Why do we say function "parameterized by" vs just function of (x,y,z,...)? 
-QUESTION [28 upvotes]: I'm studying statistics, and in a lot of textbooks, the regression formulas always refer to the functions themselves as f(x) parameterized by a,b,c or something. And they are often written $f(x; a,b,c)$; why can't we just write $f(x, a, b, c)$?
-I would like to know perhaps the history of how this came about, and why there is this seemingly dual method of expressing the same idea. Is there some esoteric mathematical notation that a noob like me hasn't encountered yet, or did mathematicians of old just like doing things in different ways?
-An example: A function that maps age of tree to height:
-$$f(age; growth\_rate) = age * growth\_rate$$ vs $$f(age, growth\_rate) = age * growth\_rate$$

REPLY [7 votes]: Think of something like this: $f(x;\sigma) = e^{-x^2/\sigma^2}$.
-If you read for example "the derivative of $f$", you probably immediately understand that we are speaking of $\partial f/\partial x$, because you recognize $f$ as a function of $x$, whose definition contains some $\sigma$ which is not a "variable".
-Otherwise, if you read:
-$f(x, \sigma) = e^{-x^2/\sigma^2}$
-a sentence like "the derivative of $f$" would be ambiguous: is it $\partial f/\partial x$? $\partial f/\partial \sigma$? $\nabla f = (\partial f/\partial x, \partial f/\partial \sigma)$?
-So without this distinction, one would have to write out explicitly "the derivative of $f$ with respect to the variable $x$" every time.<|endoftext|>
-TITLE: Distributivity of ordinal arithmetic
-QUESTION [8 upvotes]: Let Greek letters denote ordinals. I want to prove $\alpha(\beta + \gamma) = \alpha\beta + \alpha\gamma$ by induction on $\gamma$ and I already know it holds true for $\gamma = \emptyset$ and $\gamma$ a successor ordinal. Let $\gamma$ be a limit ordinal. 
I found
-$$
-\alpha(\beta + \gamma) = \alpha \cdot \sup_{\epsilon < \gamma} (\beta + \epsilon) = \sup_{\epsilon < \gamma} (\alpha(\beta + \epsilon)) = \sup_{\epsilon < \gamma} (\alpha\beta + \alpha\epsilon) = \alpha\beta + \alpha\gamma,
-$$
-but I am suddenly doubting if the second equality is justified.
-Question: Is the second equality correct?

REPLY [7 votes]: Yes, it is correct. It may be more obvious if we arrange things a bit differently: If $\gamma$ is a limit ordinal, then $\beta+\gamma$ is a limit ordinal as well. So by definition of ordinal multiplication we have
-$$
-\begin{align*}
-\alpha \cdot (\beta + \gamma) &= \sup_{\delta < \beta + \gamma} \alpha \cdot \delta \\
-&= \sup_{\epsilon < \gamma} \alpha \cdot (\beta + \epsilon).
-\end{align*}
-$$
-The second step uses that the ordinals $\beta + \epsilon$ with $\epsilon < \gamma$ are cofinal in $\beta + \gamma$.<|endoftext|>
-TITLE: Measurability of product measures $ \{\mu \in M: (\mu \times \mu)(A) \in B\} \in \mathscr{M}$
-QUESTION [5 upvotes]: Let $(X,\mathscr{F})$ be a measurable space, and let $M$ be the set all probability measures $\mu: \mathscr{F} \to [0,1]$. Let us denote with $\mathscr{M}$ the $\sigma$-algebra on $M$ generated by the mappings $\mu \mapsto \mu(F)$, with $F \in \mathscr{F}$.
-Now fix a Borel set $B$ of $\mathbb{R}$ and $A \in \mathscr{F} \otimes \mathscr{F}$. How can we prove that
-$$
-\{\mu \in M: (\mu \times \mu)(A) \in B\} \in \mathscr{M}?
-$$
-[Here $(\mu\times \mu)$ stands for the product measure and $\mathscr{F}\otimes \mathscr{F}$ for the product $\sigma$-algebra]
-[Linked thread on MO: here]

REPLY [2 votes]: Use the $\pi$-$\lambda$ theorem.
-For a set $E \in \mathscr{F}$, define $F_E : M \to [0,1]$ by $F_E(\mu) = \mu(E)$. By definition of $\mathscr{M}$, each $F_E$ is measurable with respect to $\mathscr{M}$.
-For a set $A \in \mathscr{F} \otimes \mathscr{F}$, define $G_A : M \to [0,1]$ by $G_A(\mu) = (\mu \times \mu)(A)$.
-Let $\mathcal{L}$ be the collection of all $A \in \mathscr{F} \otimes \mathscr{F}$ such that $G_A$ is measurable. 
-Let $\mathcal{P}$ be the collection of all sets of the form $A = E_1 \times E_2$, where $E_1, E_2 \in \mathscr{F}$. Note that $\mathcal{P}$ is closed under intersections, i.e. it is a $\pi$-system.
-Now observe:

-If $A = E_1 \times E_2 \in \mathcal{P}$ then $G_A = F_{E_1} F_{E_2}$ which is a measurable function. Thus $\mathcal{P} \subset \mathcal{L}$.
-$G_{X \times X}= 1$ which is a measurable function, so $X \times X \in \mathcal{L}$. (Note $M$ consists only of probability measures)
-If $A \in \mathcal{L}$ then $G_{A}$ is measurable and hence so is $G_{A^c} = 1-G_{A}$. So $A^c \in \mathcal{L}$.
-If $A_1, A_2, \dots \in \mathcal{L}$ are disjoint, then letting $A = \bigcup_n A_n$ we have $G_A = \sum_n G_{A_n}$ which is measurable. So $A \in \mathcal{L}$.

-We have thus shown $\mathcal{L}$ is a $\lambda$-system. By the $\pi$-$\lambda$ theorem, we have $\mathcal{L} \supset \sigma(\mathcal{P}) = \mathscr{F} \otimes \mathscr{F}$.
-That is, every $G_A$ is a measurable function, and we have $$\{\mu \in M: (\mu \times \mu)(A) \in B\} = G_A^{-1}(B) \in \mathscr{M}.$$<|endoftext|>
-TITLE: Why does Wolframalpha think that this sum converges?
-QUESTION [34 upvotes]: Looking at the sum:
-$$\sum_{n=1}^\infty\tan\left(\frac\pi{2^n}\right)$$
-I'd say that it does not converge, because for $n=1$ the tangent $\tan\left(\frac\pi 2\right)$ should be undefined. But WolframAlpha thinks that the sum converges somewhere around $1.63312×10^{16}$.
-What am I missing?

REPLY [52 votes]: For floating point numbers stored in IEEE double precision format, the significand has $53$ bits of accuracy. The most significant bit is implied and is always one. Only $52$ bits are actually stored. 
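(An aside, my own illustration rather than part of the original answer: the artifact is easy to reproduce in any environment that uses IEEE doubles, for instance Python, whose `math.pi` is likewise the nearest double to $\pi$.)

```python
import math

# math.pi/2 is not pi/2: it is the nearest IEEE double, which falls
# short of the true (irrational) value by roughly 6.1e-17, so the
# tangent evaluated there is huge but finite.
t = math.tan(math.pi / 2)
print(t)  # about 1.633e16

# The remaining terms tan(pi/4) + tan(pi/8) + ... contribute only a
# couple of units, so a floating-point evaluation of the series is
# completely dominated by the first term.
s = sum(math.tan(math.pi / 2 ** n) for n in range(1, 60))
print(s)
```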
-Since $1 \le \frac{\pi}{2} < 2$, among those numbers representable by IEEE,
-the closest number to $\frac{\pi}{2}$ is
-$$\left(\frac{\pi}{2}\right)_{fp} \stackrel{def}{=} 2^{-52}\left\lfloor \frac{\pi}{2} \times 2^{52}\right\rfloor$$
-Numerically, we have $$\frac{\pi}{2} - \left(\frac{\pi}{2}\right)_{fp} \approx 6.1232339957\times 10^{-17}$$
-Since for $\theta \approx \frac{\pi}{2}$, $\displaystyle\;\tan\theta \approx \frac{1}{\frac{\pi}{2} - \theta}$, we have
-$$\tan\left(\frac{\pi}{2}\right)_{fp}
-\approx \frac{1}{6.1232339957\times 10^{-17}}
-\approx 1.6331239353 \times 10^{16}$$
-This is approximately the number you observed.<|endoftext|>
-TITLE: Example of generated sigma-algebra
-QUESTION [5 upvotes]: I looked at the definition of a generated $\sigma$-algebra in wikipedia (https://en.wikipedia.org/wiki/Sigma-algebra) and would like to know if this is correct.
-Let $X=\{1,2,3,4\}$ and $F=\{\{1\},\{2\}\}$. Is it correct that $\sigma(F)=\{\emptyset,\{1,2,3,4\},\{1\},\{2,3,4\},\{2\},\{1,3,4\}\}$? Thanks.
-edit: The correct answer is $\sigma(F)=\{\emptyset,\{1,2,3,4\},\{1\},\{2,3,4\},\{2\},\{1,3,4\},\{1,2\},\{3,4\}\}$.

REPLY [4 votes]: You missed
-$$\{ 3,4 \} = \{ 1,3,4 \} \cap \{ 2,3,4 \}$$
-and
-$$\{ 1,2 \} = \{ 1 \} \cup \{ 2 \}$$
-Adding these two, you get a $\sigma$-algebra.<|endoftext|>
-TITLE: Does the equation $x^2+23y^2=2z^2$ have integer solutions?
-QUESTION [10 upvotes]: I would like to show that the image of the norm map $\text N : \mathbb Z \left[\frac{1 + \sqrt{-23}}{2} \right] \to \mathbb Z$ does not include $2.$ I first thought that the norm map $\mathbb Q(\sqrt{-23}) \to \mathbb Q$ does not contain $2$ in its image either, so I tried to solve the Diophantine equation $$x^2 + 23y^2 = 2z^2$$ in integers.

-After taking congruences with respect to several integers, such as $2, 23, 4, 8$ and even $16,$ I still cannot say that this equation has no integer solutions. 
Then I found out that the map $\text{N}$ has a simpler expression and can easily be shown never to take the value $2.$
-But I still want to know about the image of $\text N,$ and any help will be greatly appreciated, thanks in advance.

REPLY [5 votes]: Conclusion: it is likely that the parametrization
-$$ \color{blue}{ \left(7u^2 + 10uv -3 v^2 \right)^2 + 23 \left(u^2 -2uv - v^2 \right)^2 = 8 \left( 3u^2 + uv + 2 v^2 \right)^2} $$
-gives all solutions to $x^2 + 23 y^2 = 8 z^2$ with $\gcd(x,y,z) = 1.$ There is a theorem that all primitive solutions can be found with a small finite number of such parametrizations. This version is better than the one at the end of my discussion: here $z$ is represented by a form of discriminant $-23$ rather than $-92,$ which is why there is no longer a problem about $z \equiv 2 \pmod 4.$ Live and learn.
-If you wish to see how this does by computer (I advise this), with $z = 3u^2 + uv + 2 v^2 \leq M$ for some upper bound $M,$ we can demand
-$$ |u| \leq \sqrt {\frac{8M}{23}}, $$
-$$ |v| \leq \sqrt {\frac{12M}{23}}. $$
-Here is a raw search for primitive solutions with $z \leq 100.$
-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=
-    x    y    z
-    3    1    2
-    7    1    3
-   25    1    9
-    5    7   12
-   15    7   13
-   45    1   16
-   59    7   24
-    9   17   29
-   81    7   31
-   61   17   36
-    1   23   39
-  111    7   41
-  105   17   47
-  147    1   52
-  149    7   54
-   35   31   54
-  135   31   71
-   53   41   72
-   63   41   73
-  205   17   78
-  163   31   78
-   41   47   81
-   73   49   87
-  263    1   93
-  259   17   96
-
-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=
-Next is by parametrization, where it was necessary to use absolute values to get a nice ordering. In order to reduce repetition I took only $u \geq 0.$ I notice how not all $\pm$ signs occur in $x,y$ here. 
To get all types, one would need four slightly different parametrizations, $(x,y)$ as written, then $(x,-y),$ $(-x,y),$ $(-x,-y).$
-    z  |x|  |y|    z    x    y    z    u    v
-    2    3    1    2   -3   -1    2    0    1
-    2    3    1    2   -3   -1    2    0   -1
-    3    7    1    3    7    1    3    1    0
-    9   25    1    9  -25    1    9    1   -2
-   12    5    7   12    5    7   12    2   -1
-   13   15    7   13   15   -7   13    1    2
-   16   45    1   16   45   -1   16    2    1
-   24   59    7   24  -59    7   24    2   -3
-   29    9   17   29   -9   17   29    3   -2
-   31   81    7   31  -81   -7   31    1   -4
-   36   61   17   36   61  -17   36    2    3
-   39    1   23   39   -1  -23   39    1    4
-   41  111    7   41  111   -7   41    3    2
-   47  105   17   47 -105   17   47    3   -4
-   52  147    1   52 -147   -1   52    2   -5
-   54  149    7   54  149    7   54    4    1
-   54   35   31   54  -35   31   54    4   -3
-   71  135   31   71  135  -31   71    3    4
-   72   53   41   72   53  -41   72    2    5
-   73   63   41   73   63   41   73    5   -2
-   78  163   31   78 -163   31   78    4   -5
-   78  205   17   78  205  -17   78    4    3
-   81   41   47   81  -41  -47   81    1    6
-   87   73   49   87  -73   49   87    5   -4
-   93  263    1   93  263    1   93    5    2
-   96  259   17   96 -259  -17   96    2   -7
-    z  |x|  |y|    z    x    y    z    u    v
-
-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=
-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=
-There is another way to talk about the behavior of the norm map. The binary quadratic forms of discriminant $-92$ are in three classes; the reduced forms are $x^2 + 23 y^2,$ $3x^2 + 2xy + 8y^2,$ $3x^2 - 2xy + 8y^2.$
-That is, under Gauss composition, the group is cyclic of order three and $3x^2 + 2xy + 8y^2$ is a generator. It primitively represents $8.$ It also primitively represents infinitely many squares $z^2,$ and composition tells us that $8z^2$ is primitively represented by $x^2 + 23 y^2.$
-The part about squares is best written in Dirichlet's version of composition, which was all we had until Bhargava gave other interpretations of Gauss. The form $3x^2 + 2xy + 8y^2$ is integrally equivalent ($SL_2 \mathbb Z$) to $3x^2 + 4 xy + 9 y^2.$
-Dirichlet's methods tell us (page 49 in Cox, although the first edition has a typo, pasting corrected selection from second edition) that
-$$ \left( 3 u^2 + 4 uv + 9 v^2 \right)^2 = 9 U^2 + 4 UV + 3 V^2, $$
-where $$ U = u^2 - 3 v^2, \; \; \; V = 6uv + 4 v^2. 
$$
-Another arrangement of the symbols $a,a',B,C$ tells us
-$$ \left( 9U^2 + 4 U V + 3 V^2 \right) \left(3 s^2 + 4 st + 9 t^2 \right) = 27 S^2 + 4 ST + T^2, $$
-where
-$$ S = Us-Vt, \; \; \; T = 9Ut + 3Vs +4Vt. $$ Taking lower case $s=1,t=-1$ gives
-$$ 8 \left(9 U^2 + 4 UV + 3 V^2 \right) = 27 S^2 + 4 ST + T^2, $$
-where
-$$ S = U +V, \; \; \; T = -9U + 3V - 4V = -9U -V. $$
-Finally
-$$ (T+2S)^2 + 23 S^2 = 27 S^2 + 4 ST + T^2. $$
-Putting it together, we can, for example with $p$ prime, solve $x^2 + 23 y^2 = 2 p^2$ whenever $p = 3 u^2 + 2 uv + 8 v^2.$ This is possible for $p = 3$ and then for all primes with $(23|p) = 1$ such that
-$ w^3 - w + 1$ is irreducible $\pmod p.$ This last fact is not in Cox, it is class field theory from a 1991 article by Hudson and Williams.
-Maybe I should summarize: we get an explicit formula for $x,y$ in
-$$ x^2 + 23 y^2 = 8 \left( 3 u^2 + 4 uv + 9 v^2 \right)^2 $$
-given $u,v$ arbitrary integers. That is,
-$$ \color{blue}{ \left(-7u^2 + 6uv + 25 v^2 \right)^2 + 23 \left(u^2 + 6uv + v^2 \right)^2 = 8 \left( 3u^2 + 4 uv + 9 v^2 \right)^2}. $$
-Now that I've corrected some errors of mine (I was out of practice) the final formula can be checked easily enough.<|endoftext|>
-TITLE: Using quadratic reciprocity to motivate higher reciprocity laws?
-QUESTION [6 upvotes]: I'm an undergraduate following Neukirch's Algebraic Number Theory; please do not assume much more than chapters $1$ and $2$ of this book to answer. The topics covered are: algebraic number fields, behaviour of ideals, prime splitting, $p$-adic numbers, valuations in general, ... .
-So, to the question:
-There is this famous theorem of number theory, called Gauss' Quadratic Reciprocity, which gives us a criterion for deciding whether an equation $x^{2}\equiv p\pmod{q}$, for given primes $p,q,$ has solutions.
-That's what Quadratic Reciprocity was for me: a computational tool (an interesting one). 
-I'm about to begin Class Field Theory and I've been told that I'll study generalizations of this law of Gauss, namely the so-called Reciprocity Map (I think due to Artin). That being said, it would be nice to be able to look at the classic Quadratic Reciprocity from a new point of view that makes it very explicit what aspects of it we are generalizing. Or perhaps just a way of looking at the classic result as more than a computational tool would be nice already.
-There is a part of Neukirch's book where he discusses the quadratic reciprocity (way before CFT), and he says that "the law of decomposition in the cyclotomic field provides the proper explanation to Gauss' Reciprocity Law".
-The proof itself, however, seems a bit technical and I'm not getting it.
-Can someone help me out? Thanks.

REPLY [5 votes]: There are a few ways to generalize quadratic reciprocity. Let me describe for you how class field theory over $\mathbb{Q}$ works.
-Let $K$ be a finite abelian extension of $\mathbb{Q}$ of degree $n$, Galois group $\Gamma$. One goal of class field theory is the following: give a general rule describing how prime numbers $p$ decompose into a product of primes in $\mathcal O_K$. It's cumbersome to give a succinct rule which describes how all primes split, so the usual workaround is to give a rule which applies to all primes except those in a finite set $S$, usually containing all the ramified primes.
-This generalizes quadratic reciprocity in the following way: let $K = \mathbb{Q}(\beta)$ for some $\beta \in \mathcal O_K$, and let $f \in \mathbb{Z}[X]$ be the minimal polynomial of $\beta$ over $\mathbb{Q}$. 
For almost all primes $p$, you have $(\mathcal O_K)_{(p)} = \mathbb{Z}_{(p)}[\beta]$, and so you can apply the following criterion: if you factor $f$ into a product of irreducibles $p_1^{e} \cdots p_g^{e}$ over $\mathbb{Z}/p\mathbb{Z}[X]$, then $e$ will be the ramification index of $p$ in $K$, the common degree $f$ of the $p_i$ will be the inertial degree, and $p$ will split into $\frac{n}{ef}$ primes.
-So splitting of primes is analogous to factoring certain polynomials over $\mathbb{Z}/p\mathbb{Z}[X]$. In particular, quadratic reciprocity deals with determining for which primes $p$ the polynomial $X^2 - q \in (\mathbb{Z}/p\mathbb{Z})[X]$ is irreducible for a fixed prime $q$ (possibly $q$ is a negative prime). In other words, quadratic reciprocity answers the question of which primes $p$ split in $\mathbb{Q}(\sqrt{q})$, or more generally which primes split in a quadratic extension of $\mathbb{Q}$.
-Now, here is how class field theory gives you an algorithm for determining (with a finite number of exceptions) how primes split in $K$, or at least implies the existence of such an algorithm. Given a prime number $p$, fix a prime $\mathfrak p$ of $K$ lying over $p$. Remember that for $p$ unramified in $K$, the decomposition group $$\Gamma_{p} = \{ \sigma \in \Gamma : \sigma \mathfrak p = \mathfrak p \}$$ is cyclic, and it has a particularly nice generator, commonly denoted $(p, K/\mathbb{Q})$. It does not depend on the choice of $\mathfrak p$, because $K/\mathbb{Q}$ is abelian. It is called the Frobenius at $p$. The order $f = f_p$ of $(p,K/\mathbb{Q})$ is the inertial degree of $p$ in $K$, and $p$ splits into $\frac{n}{f}$ primes in $K$.
-So given a divisor $g$ of $n$, we want an algorithm for determining which primes $p$ split into $g$ primes in $K$, i.e. which primes satisfy $g = \frac{n}{f_p}$.
-The nonconstructive part of the proof is the Kronecker-Weber theorem. It says that there exists some integer $m$ such that $K$ is contained in $\mathbb{Q}(\zeta_m)$. 
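As a quick numerical illustration of the factorization criterion above (a sketch in Python; the choice $K = \mathbb{Q}(\sqrt{5}) \subseteq \mathbb{Q}(\zeta_5)$ and the brute-force helpers are mine, not part of the discussion): for odd primes $p \neq 5$, the polynomial $X^2 - 5$ factors modulo $p$, i.e. $p$ splits in $K$, exactly when $p \equiv \pm 1 \pmod 5$.

```python
def is_prime(n):
    # trial division; fine for the small range used here
    return n >= 2 and all(n % d for d in range(2, int(n**0.5) + 1))

def splits(p):
    # X^2 - 5 has a root mod p  <=>  5 is a quadratic residue mod p
    return any((x * x - 5) % p == 0 for x in range(p))

# start at 7 to skip the finitely many exceptional primes (2 and 5)
for p in (q for q in range(7, 200) if is_prime(q)):
    assert splits(p) == (p % 5 in (1, 4)), p
```

Every prime in the range passes, matching the classical fact that $5$ is a square mod $p$ iff $p \equiv \pm 1 \pmod 5$, i.e. iff the Frobenius at $p$ is trivial on $K$.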
Suppose we have found such an $m$. Let $L = \mathbb{Q}(\zeta_m)$, $G = \textrm{Gal}(L/\mathbb{Q})$, and $H = \textrm{Gal}(L/K)$. There is a canonical identification of $G$ with the group $(\mathbb{Z}/m\mathbb{Z})^{\ast}$, and under this identification we can think of $H$ as a subgroup of $(\mathbb{Z}/m\mathbb{Z})^{\ast}$. For $p$ not dividing $m$, the Frobenius $(p,L/\mathbb{Q})$ has the effect $\zeta_m \mapsto \zeta_m^p$. In particular, $(p,L/\mathbb{Q})$ can be identified with $p$ modulo $m$, and the inertial degree of $p$ in $L$ is then the multiplicative order of $p$ modulo $m$.
-In general, if $\phi: A \rightarrow B$ is a homomorphism of finite groups, and $a \in A$, then the order of $\phi(a)$ is the smallest number $f$ such that $a^f \in \textrm{Ker } \phi$. Now, you can check that the restriction of $(p,L/\mathbb{Q})$ to an automorphism of $K$ is exactly $(p,K/\mathbb{Q})$. Therefore, the inertial degree of $p$ in $K$ (the order of $(p,K/\mathbb{Q})$) is the smallest number $f$ such that $p^f$ is congruent modulo $m$ to a member of $H$.
-So to summarize, here is how to determine how primes split in $K$:
-
-Find an $m$ such that $K \subseteq \mathbb{Q}(\zeta_m)$.
-Identify the Galois group of $\mathbb{Q}(\zeta_m)$ with $(\mathbb{Z}/m\mathbb{Z})^{\ast}$, and identify $H = \textrm{Gal}(\mathbb{Q}(\zeta_m)/K)$ as a subgroup of $(\mathbb{Z}/m\mathbb{Z})^{\ast}$.
-With the exception of the prime factors of $m$, the primes $p$ which split into $g$ factors in $K$ are exactly those primes for which the coset $pH$ has order $\frac{n}{g}$ in $G/H$.<|endoftext|>
-TITLE: Given any nine numbers, prove there exists a subset of five numbers such that its sum is divisible by $5$.
-QUESTION [7 upvotes]: Given any nine integers, prove there exists a subset of five of them whose sum is divisible by $5$.
-I tried to take the numbers in the format $5k+1$, $5l+2$, and so on. 
However, I am stuck in choosing ANY five from them... also, the numbers included in the subset may or may not be of the same format.
-
-REPLY [3 votes]: We can prove the following more general result, which is a classical problem. The proof below is a classical one.
-Lemma. If $p$ is prime, then among any $2p-1$ integers you can find $p$ whose sum is divisible by $p$.
-Proof: Let $a_1,\dots, a_{2p-1}$ be the numbers.
-Consider all $n=\binom{2p-1}{p}$ subsets of $p$ numbers and denote by $S_1,\dots,S_n$ the sums of the subsets.
-Let
-$$S:= S_1^{p-1}+\dots+S_n^{p-1}$$
-Let us observe first that $p\mid S$.
-Note that
-$$S_j^{p-1} = (a_{j_1}+\dots+a_{j_p})^{p-1}$$
-Any term in this sum has the form $a_{k_1}^{b_1}a_{k_2}^{b_2}\cdots a_{k_l}^{b_l}$ with coefficient $\binom{p-1}{b_1,\dots,b_l}$.
-Therefore $S$ is a sum of terms of this form.
-Now, let us check the total coefficient of such a term in $S$. Whenever it appears in some $S_j^{p-1}$, its coefficient there is exactly $\binom{p-1}{b_1,\dots,b_l}$.
-So we need to figure out in how many of the $S_j^{p-1}$ it appears. For this to happen, $a_{k_1},a_{k_2},\dots, a_{k_l}$ need to be $l$ of the $p$ numbers of the $j$th subset; the other $p-l$ members can be anything. Therefore, it appears in exactly $\binom{2p-1-l}{p-l}$ of the sums. But since $p$ is prime and $1\le l\le p-1$,
-$$p\mid \frac{(2p-1-l)!}{(p-l)!(p-1)!}=\binom{2p-1-l}{p-l}$$
-so indeed $p\mid S$.
-Now, if we assume by contradiction that none of the $S_j$ is divisible by $p$, by Fermat's little theorem we have
-$$S= S_1^{p-1}+\dots+S_n^{p-1} \equiv 1+1+\dots+1 \equiv n \pmod{p}$$
-But $n=\binom{2p-1}{p} \not\equiv 0 \pmod{p}$, a contradiction. Taking $p=5$ (so $2p-1=9$) gives exactly the stated problem.<|endoftext|>
-TITLE: When is a function of the largest eigenvalue continuous and/or differentiable?
-QUESTION [12 upvotes]: I want to understand why the following function, the largest eigenvalue of a symmetric linear operator, is continuous and Gâteaux differentiable. 
-
-\begin{equation*}
-\lambda(V)=\sup_{f \in \ell^2(I):\ \lVert f \rVert_2=1} \langle (A+V)f, f \rangle, \qquad V \in \ell^2(I)
-\end{equation*}
-where
-
-$I$ is a finite index set (subset of $\mathbb Z^d$ in fact)
-$A: \ell^2(I) \rightarrow \ell^2(I)$ is a symmetric linear operator that is nonnegative outside its diagonal, so $-A$ is positive definite
-$V \in \ell^2(I)$ multiplies like the diagonal matrix $\mathrm{diag}(V_1, \dots, V_n)$.
-
-I encountered this statement in a probability theory proof where it simply states that this follows easily from the Perron-Frobenius theorem and basic linear algebra.
-So we should have
-
-(1) \begin{equation*}
-\lim_{n\rightarrow\infty} \sup_{\lVert f \rVert_2=1} \langle(A+V_n)f,f\rangle
-=\sup_{\lVert f \rVert_2=1} \langle Af,f\rangle, \qquad \mathrm{\ where\ } V_n \rightarrow 0 \mathrm{\ pointwise}
-\end{equation*}
-
-and the existence of
-
-(2)\begin{equation*}
-\lim_{t \rightarrow 0}\frac{1}{t} \left( \lambda(V+tg)-\lambda(V) \right)
-=\lim_{t \rightarrow 0}\frac{1}{t} \left( \sup_{\lVert f \rVert_2=1} \langle(A+V+tg)f,f\rangle - \sup_{\lVert f \rVert_2=1} \langle(A+V)f,f\rangle \right).
-\end{equation*}
-
-In (1) the problem is that it's not obvious to me that we may swap the limit and the supremum and I don't see any good reason for this to be true. For (2) I'm simply puzzled. The Perron-Frobenius theorem says that the largest eigenvalue of $A+V$ is simple and that it has a positive eigenfunction. But I don't see how to conclude the existence of the Gâteaux derivative from there. I guess there must be some theorem from linear algebra, but so far my research didn't give me an answer either. 
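A quick numerical illustration of the continuity in (1) (a hypothetical check in NumPy, not part of the original problem): for symmetric matrices, $\lambda_{\max}$ is $1$-Lipschitz with respect to the operator norm, by Weyl's inequality; since $I$ is finite, pointwise convergence $V_n \to 0$ forces $\|V_n\| \to 0$, hence $\lambda(V_n)\to\lambda(0)$.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 6

# a fixed symmetric matrix playing the role of A
A = rng.standard_normal((n, n))
A = (A + A.T) / 2

def lam(M):
    """Largest eigenvalue of a symmetric matrix (eigvalsh sorts ascending)."""
    return np.linalg.eigvalsh(M)[-1]

for _ in range(100):
    V = np.diag(rng.standard_normal(n))  # a random diagonal perturbation
    # Weyl's inequality: |lam(A+V) - lam(A)| <= ||V||_op
    assert abs(lam(A + V) - lam(A)) <= np.linalg.norm(V, 2) + 1e-9
```

This only settles continuity, not differentiability; the Gâteaux derivative in (2) is a separate matter.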
-
-Some more context on how I encountered this problem
-The operator $A$ is the generator $\Delta$ of a symmetric random walk $(X_t)_{t\geq0}$ on $\mathbb{Z}^d$, restricted to a finite, connected subset:
-\begin{equation*}
-\Delta_I f(x) = \sum_{y\in\mathbb{Z}^d:\ |x-y|=1} \omega_{xy} [f(y)-f(x)], \qquad x\in I,\ f: \mathbb{Z}^d \rightarrow \mathbb{R},\ \mathrm{supp}(f)\subset I
-\end{equation*}
-where $\omega_{xy}=\omega_{yx}\in(0,\infty)$ are symmetric weights.
-Then I need $\Lambda(V):=\lambda(V)-\lambda(0)$ to be Gâteaux differentiable and continuous with respect to pointwise convergence in order to apply a large deviations principle with rate function $\Lambda$.
-
-REPLY [3 votes]: Proposition. Let $P(x,t)=\sum_{i=0}^na_i(t)x^i$ where the $(a_i(t))$ are $C^k$. We assume that $P(x_0,t_0)=0$ where $x_0$ is a simple root of $P(x,t_0)=0$. Then, in a neighborhood of $(x_0,t_0)$, there is $\phi\in C^k$ s.t. $x_0=\phi(t_0),P(\phi(t),t)=0$.
-Proof. Since $x_0$ is a simple root, $\dfrac{\partial P}{\partial x}(x_0,t_0)\not= 0$. According to the implicit function theorem, we are done.
-Application. Let $A(t)$ be a non-negative irreducible matrix whose entries are $C^k$ functions of $t$. Then $\rho(A(t))$ is a simple positive eigenvalue of $A(t)$ and, consequently, is a $C^k$ function of $t$ (put $P(x,t)=\det(A(t)-xI_n)$).
-Now, when the eigenvalues of $A(t)$ are not simple, it is more complicated. One can prove that there are CONTINUOUS functions $(x_i(t))_{i\leq n}$ (possibly complex) s.t. for every $t$, $\mathrm{spectrum}(A(t))=(x_i(t))_{i\leq n}$. Thus, if $A(t)$ is symmetric, then the greatest eigenvalue of $A(t)$ is $\sup_ix_i(t)$, which is clearly a continuous function of $t$.
-EDIT. Answer to @Amarus. Your question is essentially about the continuity (or differentiability) of the roots of a polynomial. 
The question of continuity is treated here:
-http://www.ams.org/journals/proc/1965-016-01/S0002-9939-1965-0171902-8/S0002-9939-1965-0171902-8.pdf
-Note that all proofs of the previous result use Rouché's theorem.
-There is an intermediate result: if the polynomial has real $C^{\infty}$ coefficients with only real roots (a hyperbolic polynomial), then each root is Lipschitz (due to Bronshtein); the $C^{\infty}$ condition can be weakened.
-Beware. The eigenvectors cannot necessarily be written as continuous functions.<|endoftext|>
-TITLE: Does the functional equation $p(x^2)=p(x)p(x+1)$ have a combinatorial interpretation?
-QUESTION [12 upvotes]: A recent question asked about polynomial solutions to the functional equation $p(x^2)=p(x)p(x+1)$. Subsequently, Robert Israel posted an answer showing that solutions are necessarily of the form $p(x)=x^m(x-1)^m$.
-What I had hoped to do myself was provide a solution by interpreting $p(x)$ as a generating function, with the functional equation leading to some kind of counting argument for the form of $p(x)$. Robert's answer complicates this, however, because $x^m(x-1)^m$ has coefficients of alternating signs. I then thought to route around this by replacing $x\to-x$ in the functional equation. But, while $p(-x)$ has positive coefficients, $p(-x+1)$ does not and so runs into the same issue.
-That's as far as my thinking led me. My question is: Is there some way to understand $p(x)$ (or some transformation of it) so as to provide a useful combinatorial interpretation of the functional equation?
-
-REPLY [11 votes]: Let $q(n)$ be the number of ways to color two of $n$ ordered squares, one blue and one red.
-
-Theorem: The number of ways to color two $1\times 1$ squares of an $n\times n$ square, one blue and one red, is $q(n+1)q(n)$. 
-
-Proof: There is a one-to-one correspondence between colorings of two $1\times 1$ squares in the $n\times n$ square and colorings of two $1\times 1$ squares in the $n\times (n+1)$ square with the condition that the two colored squares are not in the same row or the same column.
-Here is a sketch of this correspondence:
-
-
-If the two colored squares in the $n\times n$ square aren't in the same row or
-column, then we send them to the same squares.
-If they are in the same column, then we keep the blue square in the
-same square, and move the red square to the $(n+1)$th column in the same row.
-If they are in the same row, put $i=$ the blue square's column. Now we keep the red square in the same square, and move the blue square to the $(n+1)$th column in the $i$th or $(i+1)$th row, depending on whether $i<$ the red square's row or $i\geq$ the red square's row (as in the example above).
-
-It's easy to check that this is a one-to-one correspondence.
-Now the number of colorings of the $n\times (n+1)$ square with the mentioned condition is $q(n+1)q(n)$, because we have $q(n+1)$ ordered choices of columns and $q(n)$ ordered choices of rows. $\Box$
-Thus $q(n^2)=q(n+1)q(n)$.
-If we have $m$ copies of the $n\times n$ square, and we want to color two squares of each of them with blue and red, and $p(n)$ is the number of ways to do that, then from the above we can conclude: $$p(n^2) = p(n+1)p(n)$$
-and by elementary counting: $$p(n)=(n(n-1))^m$$<|endoftext|>
-TITLE: Why is $A$ a compact operator?
-QUESTION [5 upvotes]: Let $X$ be a compact space and let $\mu$ be a positive Borel measure on $X$. Let $T\in \mathscr{B}(L^p(\mu),C(X))$ where $1\lt p \lt \infty$.
-Show that if $A:L^p(\mu)\rightarrow L^p(\mu)$ is defined by $Af=Tf$, then $A$ is a compact operator.
-Here is an example: let $X=[0,1]$, $\mu$ the Lebesgue measure on $[0,1]$, $p=2$.
-For $f\in L^2(0,1)$, define $T(f)=\int_0^1{f(x)e^{-ixy}}dx,\ y\in[0,1]$. 
Then $$A:L^2(0,1)\rightarrow L^2(0,1)$$ $$f\mapsto \int_0^1{f(x)e^{-ixy}}dx,\ y\in[0,1]$$
-is a compact operator. (This is an integral operator.)
-Any help would be appreciated!
-
-REPLY [3 votes]: You need to assume that $\mu$ is finite, otherwise take $\mu$ the Lebesgue measure on $\mathbb{R}$, let $X$ be the one-point compactification of $\mathbb{R}$ and (identifying $L^p(X)$ with $L^p(\mathbb{R})$)
-let $Af:=f*\rho$, where $\rho\in C^\infty_c(\mathbb{R})$ and $\int\rho=1$.
-Then $A$ maps $L^p(X)\to C(X)$ continuously (we use the fact that $f*\rho$ vanishes at infinity) but it is not compact from $L^p(X)$ to $L^p(X)$ (why?).
-So let us assume $\mu(X)<\infty$. Let $(f_n)\subseteq L^p(\mu)$ be a bounded sequence. Since $L^p(\mu)$ is reflexive, we have $f_n\rightharpoonup f$ for some $f\in L^p(\mu)$ (up to subsequences), so we also have $Af_n\rightharpoonup Af$. Now for any $x\in X$
-$$ (Af_n)(x)\to (Af)(x), $$
-since evaluation at a point is a continuous functional on $C(X)$ and $Af_n$ is converging weakly to $Af$. Moreover $\|Af_n\|_\infty\le\|A\|\|f_n\|_p$,
-so we can apply the dominated convergence theorem to conclude that
-$$ \int_X |Af_n-Af|^p\,d\mu\to 0 $$
-as $n\to\infty$ (here we are using the finiteness of $\mu$). So $Af_n\to Af$ in $L^p(\mu)$, proving the compactness of $A$.<|endoftext|>
-TITLE: Idempotent ideals in certain commutative rings
-QUESTION [6 upvotes]: Let $R$ be a commutative ring with zero Jacobson radical such that each maximal ideal of $R$ is idempotent. Does it guarantee that each ideal is idempotent?
-
-I know only that if each maximal ideal is generated by an idempotent element then $R$ turns out to be semisimple Artinian. I think this fact is associated with my question, at least if one could show that any maximal ideal is generated by an idempotent element.
-Thanks for any suggestion!
-
-REPLY [6 votes]: In a commutative ring $R$ every ideal is idempotent (iff every ideal is radical) iff $R$ is VNR. 
-
-Then the question asks if a commutative ring $R$ with $J(R)=0$ and $\mathfrak m^2=\mathfrak m$ for every maximal ideal $\mathfrak m$ is VNR.
-
-The answer is negative: the ring of continuous functions $R=\mathcal C[0,1]$ satisfies both conditions and it's not VNR. (Indeed, every maximal ideal has the form $\mathfrak m_x=\{f: f(x)=0\}$, so $J(R)=\bigcap_x\mathfrak m_x=0$, and $\mathfrak m_x^2=\mathfrak m_x$ since each $f\in\mathfrak m_x$ factors as $f=gh$ with $h=\sqrt{|f|}$ and $g=f/\sqrt{|f|}$, extended by $0$, both in $\mathfrak m_x$.)<|endoftext|>
-TITLE: Hartshorne or Vakil's notes
-QUESTION [18 upvotes]: I believe Hartshorne and Vakil's notes are the two most popular texts currently, so my question is about how to choose between them.
-I have worked through the first 4 chapters of Vakil's notes and now I am wondering whether I should continue or try to study Hartshorne.
-Vakil's notes are very well-organized. Especially, the exercises appear just at the right time, and there is more explanation of the exercises, so that I know what I am doing. But the problem is most arguments are given in the form of exercises, which means I am always stuck. The typical situation is after 2 hours' work, maybe I am still on the same page. But the book has almost 800 pages! Hartshorne has some proofs, and the exercises also have some explanation. So maybe I should try to work through Hartshorne?
-Another question is about exercises. How long should I spend on an exercise that has me stuck? Should I look up a solution after maybe struggling for half an hour? There are solutions for Hartshorne, so maybe studying Hartshorne is more convenient since it is easier to look up solutions?
-Also, what is the right pace to learn the stuff? I mean, should I worry if every day I spend 3 hours to learn the stuff but I only finish 1 page? (I know maybe I should spend more time, but unfortunately I am teaching myself algebraic geometry and I have other classes currently.)
-I appreciate any advice, thanks! 
-
-REPLY [23 votes]: In my humble opinion, the Vakil notes (also known as FOAG) are very complete with regard to scheme theory; they include all prerequisites (category theory, commutative algebra, topology, etcetera omissis [e.o.]) to scheme theory, an extensive bibliography, and also information about the state of the art of algebraic geometry.
-But this completeness is an overload of information, so I use FOAG only for when I want a detailed study of some argument. On the other hand, the Hartshorne book (I write about his "Algebraic Geometry") is an underload of information; because it recaps Éléments de géométrie algébrique by Grothendieck and Dieudonné (which is exactly 1800 pages of scheme theory, not a page more, not a page less), it is not very easy to read. In the opinion of someone who has studied it, the essence of Hartshorne's book is in the exercises, and the exposition of the theory is not very clear (for obvious reasons).
-After all this, my recommendation is that you continue your study of algebraic geometry from another textbook; I suggest:
-
-Bosch - Algebraic Geometry and Commutative Algebra,
-Eisenbud and Harris - The Geometry of Schemes,
-Gathmann's lecture notes (Classical Algebraic Geometry and Scheme Theory),
-Görtz and Wedhorn - Algebraic Geometry I,
-Mumford - The Red Book of Varieties and Schemes;
-
-You can use FOAG for some more detailed study as these books complement it well. For example, IMHO the Bosch book is poor on cohomology theory, so I studied cohomology from FOAG; Görtz and Wedhorn's book is poor on commutative algebra; Eisenbud and Harris's book is rich with examples, but less so compared to FOAG; Mumford's book does not contain exercises; e.o.<|endoftext|>
-TITLE: Prove that $| S |\leq5$
-QUESTION [7 upvotes]: Let $S$ be a finite set of real numbers with the following property: for any $a,b \in S$ there exists $c \in S$ such that $a,b,c$ form an arithmetic progression in some order.
-Prove that $| S |\leq5$.
-
-I am struggling to find examples where it works. 
-I found a very simple example of $\{1,2,3\}$. I tried solving the contrapositive but that seemed to complicate things even further. How should one start when going about solving this?
-
-REPLY [4 votes]: Let $S$ be such a set with $|S|>1$.
-As the property of $S$ is invariant under translation and scaling, we may assume wlog. that $\min S=-1$, $\max S=1$.
-For every positive $x\in S$ we must have $\frac{x-1}2\in S$ in order to form a progression containing $-1$ and $x$.
-For every negative $x\in S$ we must have $\frac{x+1}2\in S$ in order to form a progression containing $1$ and $x$.
-We can summarize this as
-$$x\in S\implies f(x) \in S $$
-where $$f(x)=\frac{x-\operatorname{sgn}(x)}{2}.$$
-In particular, $0=f(\pm1)\in S$.
-Note that $f$ restricted to $(0,1)$ is an injective map $(0,1)\to(-1,0)$; likewise it is an injective map $(-1,0)\to (0,1)$, hence an injective map from $(-1,0)\cup(0,1)$ to itself. But $f$ also maps $S\to S$.
-Hence if we let $T=S\setminus\{-1,0,1\}$ then $f$ is an injective map $T\to T$.
-We conclude that $f\colon T\to T$ is bijective (as $T$ is finite), and so is $f\circ f$.
-In particular, for any $x_0\in T$, the sequence defined recursively by $x_{n+1}=f(f(x_n))$ must be periodic.
-If $x>0$ then $$f(f(x))=f(\tfrac{x-1}{2})=\frac{\tfrac{x-1}{2}+1}{2}=\frac{x+1}{4}$$
-Thus for $x_0>0$ the sequence $(x_n)$ converges to the unique fixed point of the map $x\mapsto\frac{x+1}{4}$, i.e., $x_n\to \frac13$. A convergent periodic sequence must be constant, whence $x_0=\frac13$.
-Similarly, for $x<0$ we have $f(f(x))=\frac{x-1}4$ and conclude that the only possible negative element of $T$ is $-\frac13$. It follows that $T\subseteq \left\{-\frac13,\frac13\right\}$ and so $S\subseteq \left\{-1,-\frac13,0,\frac13,1\right\} $
-or (undoing the normalization from the beginning)
-$$S\subseteq \left\{a-3d,a-d,a,a+d,a+3d\right\} $$
-with $a\in\Bbb R$ and $d>0$.
-Remark. 
Revisiting the argument above and noting that $f(\pm\frac13)=\mp\frac13$, there are only the following possibilities (in particular, $|S|$ is neither $2$ nor $4$):
-$$\begin{align}S&=\emptyset\\
-S&=\{a\}\\
-S&=\{a-d,a,a+d\}\\
-S&=\{a-3d,a-d,a,a+d,a+3d\}\end{align} $$
-with $a\in\Bbb R$, $d>0$.<|endoftext|>
-TITLE: Integrating $\int \frac{\sin x-\cos x}{(\sin x+\cos x)\sqrt{(\sin x \cos x + \sin^2x\cos^2x)}}\,dx$
-QUESTION [6 upvotes]: I came across a question today...
-
-Integrate $\int \dfrac{\sin x-\cos x}{(\sin x+\cos x)\sqrt{(\sin x \cos x + \sin^2x\cos^2x)}}\,dx$
-
-How to do it? I tried
-1. to take $\sin x \cos x =t$ but no result
-2. to convert the thing in the square root into $\sin x +\cos x$ so that I could take $\sin x + \cos x = t$ but then something I got is $\int\frac{-2}{t|t+1|\sqrt{t-1}}\,dt$. Now I don't know how to get past it.
-
-REPLY [4 votes]: Notice, $$\int\frac{\sin x-\cos x}{(\sin x+\cos x)\sqrt{\sin x\cos x+\sin^2 x\cos^2x}}\ dx$$
-$$=\int\frac{\sin x-\cos x}{(\sin x+\cos x)\sqrt{\frac{(\sin x+\cos x)^4-1}{4}}}\ dx$$
-$$=2\int\frac{(\sin x-\cos x)dx}{(\sin x +\cos x)\sqrt{(\sin x+\cos x)^4-1}}$$
-let $\sin x+\cos x=t\implies (\cos x-\sin x)\ dx=dt$,
-$$=-2\int\frac{dt}{t\sqrt{t^4-1}}$$
-let $t^4-1=u^2\implies 4t^3\ dt=2u\ du$, $$=-2\int\frac{udu}{2u(u^2+1)} $$
-$$=-\int\frac{du}{1+u^2}$$$$=-\tan^{-1}(u)+C$$
-$$=-\tan^{-1}\left(\sqrt{t^4-1}\right)+C$$
-$$=-\tan^{-1}\left(2\sqrt{\sin x\cos x+\sin^2x\cos^2 x}\right)+C$$<|endoftext|>
-TITLE: Proof using Cauchy Integral Theorem
-QUESTION [7 upvotes]: Suppose that I have the following integral:
-$$ \int_L \frac {dz}{z^2+1} $$
-I need to show that this is equal to $0$ if $L$ is any closed rectifiable simple curve in the outside of the closed unit disc. Simply put, this is where $|z| > 1$.
-Initially, I thought that because "closed rectifiable simple curve" is a crucial part of the Cauchy Integral Theorem, the best way to go about this is to show that the rest of the theorem must hold. 
That is, I would show that the domain $|z|>1$ is a simply connected domain, and that $f(z)$ is analytic in this domain.
-However, the domain does not appear to be a simply connected domain, as everything on the inside of the unit disc would prevent it from being so. By definition, a simply connected domain is one where any simple curve in that domain can be shrunk to a point that's also in the domain, so shrinking a curve in this domain might require passing through the inside of the circle!
-I must be missing a key part to this proof, because as of now, this contradicting statement has me stumped. Any thoughts/ideas?
-
-REPLY [3 votes]: Based on your question, you are using a very limited form of Cauchy's theorem. The general theorem (even when restricted to just curves) does not require the domain to be simply-connected, nor that the curve be simple. It only requires that the winding number of the curve around any point not in the domain be $0$.
-However, this must be done with the tools you have, not with the tools you wish you had, so we'll stick with these restrictions:
-
-Cauchy's theorem only applies to simply-connected domains and to simple closed curves (but at least, $L$ is a simple closed curve).
-No Cauchy Integral Formula. (Residues are a consequence of Cauchy's integral formula, so they are out too.)
-
-First, choose a point $p$ of minimum magnitude on $L$, and for $\epsilon > 0$, another point $q_\epsilon$ on $L$ within $\epsilon$ of $p$. Let $r$ be such that $1 < r < |p|$, and drop parallel line segments $\ell, \ell_\epsilon$ down from $p$ and $q_\epsilon$ respectively to the circle $C$ of radius $r$ about the origin. Then form the curve $L'$ that
-
-Follows $L$ from $p$ the long way around to $q_\epsilon$,
-traverses the line $\ell_\epsilon$ from $q_\epsilon$ down to $C$,
-traverses $C$ in the opposite direction around till it intersects with $\ell$.
-traverses $\ell$ back to $p$ to close.
-
-
-Then $L'$ is a simple closed curve. 
Further, we can split the domain $U$ by a line raising up from the unit circle midway between $\ell$ and $\ell_\epsilon$ until it crosses $L$. After that, we can continue this as a curve that follows $L$ closely without touching until it reaches a point where it can extend to $\infty$ without intersecting $L$. When this curve is removed from $U$, the remainder is simply connected. Thus we can apply Cauchy's theorem to $L'$ to say that $\oint_{L'} f(z)dz = 0$ -As you let $\epsilon \to 0$, $q_\epsilon \to p$ and $\ell_\epsilon \to \ell$, so $$0 = \oint_{L'} f(z)dz \to \oint_L f(z)dz + \int_{\ell} f(z)dz - \oint_C f(z)dz - \int_{\ell} f(z)dz$$ -From which it follows that $$\oint_L f(z)dz = \oint_C f(z)dz$$ -So all that is left is to show that $\oint_C f(z)dz = 0$. But if $C'$ is a circle about the origin of higher radius than $C$, by letting $C'$ take the place of $L$ in the result just shown, they have the same integral. Thus $\oint_C f(z)dz$ does not depend on the radius $r$ of $C$. Now $$\oint_C f(z)dz = i\int_0^{2\pi} \frac{rd\theta}{e^{-i\theta} + r^2e^{i\theta}}$$ -As $r \to \infty$, this converges to $0$. But since it is constant, it had to be $0$ all along, which completes the proof (except for cleaning up all the handwaving).<|endoftext|> -TITLE: Concerning an infinite server queue with Poisson arrivals -QUESTION [9 upvotes]: Here's the statement of the problem (from Ross's Introduction to Probability Models): - -For those unfamiliar with "infinite server queues," they are described here. In this case, however, the service times are not exponentially distributed; rather, they are distributed according to some common distribution $G$. 
It follows that $X(t)$, the number of customers that have arrived by time $t$ and completed service by time $t$, is Poisson distributed with mean
-$$E[X(t)]=\lambda \int_{0}^{t}G(t-s)ds=\lambda \int_{0}^{t}G(y)dy.$$
-Similarly, $Y(t)$, the number of customers that have arrived by time $t$ and are still being served at time $t$, is Poisson distributed with mean
-$$E[Y(t)]=\lambda \int_{0}^{t}\bar G(t-s)ds=\lambda \int_{0}^{t}\bar G(y)dy$$
-where $\bar G(t-s) = 1 - G(t-s)$.
-Now, for part $(a)$, let $A =\{\text{the first customer to arrive is also the first to depart} \}$, i.e., our desired event; and suppose the first customer arrives at time $0$ and departs at time $t$. Then $A$ occurs exactly when $0$ of the later arrivals have completed service by time $t$, i.e.,
-$$ \mathbb P[A \mid \text{first service time}=t] = \exp\left\{ -\lambda \int_{0}^{t}G(y)dy\right\}. $$
-Ok, I get that. But then, for some reason, the following is the answer:
-$$ \mathbb P[A] = \int_{0}^{\infty} \left( \exp\left\{ -\lambda \int_{0}^{t}G(y)dy\right\} \right) dG(t). $$
-And I don't really understand where this comes from. If anybody could shed some light on this, I'd really appreciate it. Thanks.
-
-REPLY [6 votes]: Given an empty system at time $0$, the number of departures by time $t$ is a non-homogeneous Poisson process with mean value function
-$$m(t)=\lambda\int^{t}_{0}G(y)dy$$
-and so the probability of no departures by time $t$ is given by:
-$$\exp\left(-\lambda\int^{t}_{0}G(y)dy\right)$$
-Now if we set time $0$ to be the time of the first arrival and let $G_1$ be the service time of the first arrival, then we have:
-$$P(A|G_1=t)=\exp\left(-\lambda\int^{t}_{0}G(y)dy\right)$$
-now we have
-$$P(A)=\int^{\infty}_{0}P(A|G_1=t)dG_1(t)=$$
-$$\int^{\infty}_{0}P(A|G_1=t)dG(t)=$$
-$$\int^{\infty}_{0}\exp\left(-\lambda\int^{t}_{0}G(y)dy\right)dG(t)$$<|endoftext|>
-TITLE: Is the set of probability density functions convex? 
-QUESTION [7 upvotes]: Given is the set of probability density functions defined as $P:=\left \{ p(x)\mid p(x) \text{ is a probability density function} \right \}$
-Is $P$ a convex set?
-I am not sure whether I have to use the classical definition of a convex set here. In the lecture we saw the following for the general case: suppose $p:\mathbb{R}^{n}\rightarrow \mathbb{R}$ satisfies $p(x) \geq 0$ for all $x\in C$ and $\int _{C} p(x)dx=1$ where $C\subseteq \mathbb{R}^{n}$ is convex. Then $\int_{C}p(x)x\,dx\in C$ if the integral exists.
-I guess I have to use this definition to see if $P$ is convex but I have no idea where to start...
-Can anybody help me with this problem, please?
-Thank you in advance!
-
-REPLY [6 votes]: Take the definition of a convex set (e.g. chapter 2.1.4 from Boyd & Vandenberghe).
-Can you say that $p(x) = \theta \, p_1(x) + (1-\theta) \, p_2(x) \in P$ for any $0 \leq \theta \leq 1$ and any $p_1(x),\,p_2(x) \in P$?
-Yes: $p(x)$ is a valid pdf, since it is non-negative and integrates to 1. BTW, the resulting density is called a mixture distribution (to draw a sample from this distribution, you can imagine that you first flip a loaded coin with heads probability $\theta$, then you sample from $p_1$ if the outcome is a head, and from $p_2$ if a tail).<|endoftext|>
-TITLE: Computing normal closure and Galois group of quintic $x^5 - 3$
-QUESTION [6 upvotes]: Having trouble big time.
-
-I am asked to find the normal closure for the extension $\mathbb{Q}(a):\mathbb{Q}$ where $a$ is the real fifth root of $3$. Then I am asked to find Galois groups for the extension above and $N:\mathbb{Q}$, both.
-
-Now, I found $N$ to be $\mathbb{Q}(a,\omega)$ where $\omega=\cos{\frac{2 \pi}{5}}+i \sin{\frac{2 \pi}{5}}$, a primitive fifth root of unity. 
Problem is, for this
-$N:\mathbb{Q}=\mathbb{Q}(a,\omega):\mathbb{Q}$ extension, finding the Galois group, equivalently finding the automorphisms $\mathrm{Aut}_{\mathbb{Q}}(\mathbb{Q}(a,\omega))$, is looking unbelievably tedious and difficult.
-I mean, the elements of $N=\mathbb{Q}(a,\omega)$ will be something like $p+qa+ra^2+...+ta^4+w\omega+x\omega^2+...+ba\omega+ca^2\omega+...+ga^3\omega^3$ where the coefficients are in $\mathbb{Q}$. Basically, the form of the element is massive. Everything except $\omega^4$ terms appears (since $\omega^4=-(\omega^3+\omega^2+\omega+1)$).
-Am I right so far? If so, finding automorphisms on such elements must be so tedious... say, conjugates (if I should be calling them that), $p-qa+ra^2+...+ta^4+w\omega+x\omega^2+...+ba\omega+ca^2\omega+...+ga^3\omega^3$, $p+qa-ra^2+...+ta^4+w\omega+x\omega^2+...+ba\omega+ca^2\omega+...+ga^3\omega^3$ etc. would still be $\mathbb{Q}$-automorphisms, so any map as above would be in the Galois group, yes?
-I mean, there's just loads of maps (automorphisms) of $\mathbb{Q}(a,\omega)$ that I doubt I have to write them all down for this Galois group.
-Or is that really the answer? Some massive monstrous group?
-
-REPLY [6 votes]: You correctly state that the normal closure of $F = \mathbb{Q}(\sqrt[5]{3})$ is $L = \mathbb{Q}(\sqrt[5]{3}, \zeta)$, where $\zeta$ is a primitive $5^\text{th}$ root of unity. $L$ has two important subfields, $F$ and $K = \mathbb{Q}(\zeta)$. Note that $K/\mathbb{Q}$ is Galois, since the other roots of its minimal polynomial $\frac{x^5 - 1}{x-1} = x^4 + x^3 + x^2 + x +1$ are just the powers of $\zeta$. Also note that $[\mathbb{Q}(\zeta): \mathbb{Q}] = \varphi(5) = 4$ and $[\mathbb{Q}(\sqrt[5]{3}) : \mathbb{Q}] = 5$ are relatively prime, so $20$ divides $[\mathbb{Q}(\sqrt[5]{3}, \zeta) : \mathbb{Q}]$. Since $[\mathbb{Q}(\sqrt[5]{3}, \zeta) : \mathbb{Q}] \leq 20$, this shows that $[\mathbb{Q}(\sqrt[5]{3}, \zeta) : \mathbb{Q}] = 20$. 
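The degree-$20$ group arising here can also be checked concretely by machine (a sketch; modelling the automorphisms as affine maps $x \mapsto ax + b$ on $\mathbb{Z}/5\mathbb{Z}$, with $\sigma$ acting as $x \mapsto 2x$ and $\tau$ as $x \mapsto x+1$, is my identification, not part of the original answer):

```python
MOD = 5

def compose(f, g):
    """(a1,b1) o (a2,b2): first apply g, then f, acting as x -> a*x + b (mod 5)."""
    a1, b1 = f
    a2, b2 = g
    return (a1 * a2 % MOD, (a1 * b2 + b1) % MOD)

sigma = (2, 0)  # x -> 2x,  order 4 (2 generates (Z/5Z)^x)
tau = (1, 1)    # x -> x+1, order 5

# generate the closure of {identity, sigma, tau} under composition
G = {(1, 0), sigma, tau}
while True:
    new = {compose(f, g) for f in G for g in G}
    if new <= G:
        break
    G |= new

assert len(G) == 20                                              # all of F_20
assert compose(sigma, tau) == compose(compose(tau, tau), sigma)  # sigma tau = tau^2 sigma
```

Composing the two generators until nothing new appears recovers all $20$ elements, together with the commutation relation between $\sigma$ and $\tau$.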
Thus we have the following field diagram
-[field diagram]
-and the corresponding Galois group diagram.
-[Galois group diagram]
-Since $K/\mathbb{Q}$ is Galois, $N = \text{Gal}(L/K)$ is normal in $G = \text{Gal}(L/\mathbb{Q})$. Moreover, letting $H = \text{Gal}(L/F)$, since $N \cap H = 1$ by order considerations, then $|NH| = \frac{|N||H|}{|N \cap H|} = \frac{4 \cdot 5}{1} = 20$, so $NH = G$. Then $G \cong N \rtimes H$, i.e., $G$ is the semidirect product of $N$ and $H$. Since $K \cap F = \mathbb{Q}$ and $L = KF$, then
-$$
-H = \text{Gal}(L/F) \cong \text{Gal}(K/\mathbb{Q}) = \text{Gal}(\mathbb{Q}(\zeta)/\mathbb{Q}) \cong (\mathbb{Z}/5\mathbb{Z})^\times \cong \mathbb{Z}/4\mathbb{Z}
-$$
-by results on composite fields and cyclotomic extensions. Now $|N| = 5$, so $N \cong \mathbb{Z}/5\mathbb{Z}$; hence we have (abstractly) $G \cong N \rtimes H \cong \mathbb{Z}/5\mathbb{Z} \rtimes \mathbb{Z}/4\mathbb{Z}$.
-Maybe you're satisfied with that, but we can also get a more concrete description by examining the action of $H$ on $N$ in the semidirect product. Let
-\begin{align*}
-\sigma: L &\to L\\
-\sqrt[5]{3} &\mapsto \sqrt[5]{3}\\
-\zeta &\mapsto \zeta^2
-\end{align*}
-\begin{align*}
-\tau: L &\to L\\
-\sqrt[5]{3} &\mapsto \zeta\sqrt[5]{3}\\
-\zeta &\mapsto \zeta
-\end{align*}
-Note that $\sigma$ fixes $F$ and $\tau$ fixes $K$, so $\sigma \in H$ and $\tau \in N$. Moreover, since $2$ generates $(\mathbb{Z}/5\mathbb{Z})^\times$, then $\sigma$ generates $H$, and since $\tau$ has order $5$, it generates $N$. Since $N \trianglelefteq G$, then $H$ acts on $N$ by conjugation. Note that $\sigma^{-1}$ is $\zeta \mapsto \zeta^3$ since $3 \equiv 2^{-1} \pmod{5}$. 
Computing $\sigma \tau \sigma^{-1}$, we find -\begin{align*} -\sigma \tau \sigma^{-1}: \sqrt[5]{3} &\overset{\sigma^{-1}}{\longmapsto} \sqrt[5]{3} \overset{\tau}{\longmapsto} \zeta \sqrt[5]{3} \overset{\sigma}{\longmapsto} \zeta^2 \sqrt[5]{3}\\ -\zeta &\overset{\sigma^{-1}}{\longmapsto} \zeta^3 \overset{\tau}{\longmapsto} \zeta^3 \overset{\sigma}{\longmapsto} \zeta \, . -\end{align*} -This is exactly the same action as $\tau^2$, so $\sigma \tau \sigma^{-1} = \tau^2$, i.e., $\sigma \tau = \tau^2 \sigma$, which provides us with a commutation relation. This allows us to write every element of $G$ as $\tau^i \sigma^j$ for some $i \in \{0, 1, 2, 3\}$ and $j \in \{0, 1, 2, 3, 4\}$, which accounts for all $20$ of the elements of $G$. Thus we have the presentation -$$ -G = \langle \sigma, \tau \mid \sigma^4 = \tau^5 = 1, \sigma \tau = \tau^2 \sigma \rangle \, . -$$ -As you can see here, this is a presentation for the Frobenius group of order $20$.<|endoftext|> -TITLE: Why is it important to find both solutions to a second order linear differential equation? -QUESTION [8 upvotes]: Given the equation $$y'' + y=0$$ -A solution is $y=\sin(t)$ -Why can't we stop there since we know a way to solve the system? Why should we consider all of the ways to solve the system? -I would really like to see a real world example when having a single solution is inadequate. I know this is asking a lot, but I often find mathematics only becomes easier to understand once I need to use it to solve something and it becomes relate-able to real things. - -REPLY [17 votes]: Consider a point mass $m= 1 \ \mathrm{kg}$ attached to one end of a spring with spring constant $k = 1 \ \mathrm{N/m}$. Suppose that the spring is suspended vertically from an immovable support. 
The oscillations of the mass around its equilibrium position can then be described by the equation
-$$y''+y = 0$$
-where $'$ denotes the time derivative, $y$ the displacement, and where we measure distance in $\mathrm{m}$ and time in $\mathrm{s}$.
-As you have said, $y = \sin t$ is a solution to this equation. For this solution, we have $y(0) = 0$ and $y'(0) = 1$. This means that the mass starts from its equilibrium position with unit speed. Furthermore, the maximum displacement of the mass is $1$.
-But what if the mass doesn't start from equilibrium, and what if it doesn't have unit speed? What if we release the mass $1.5 \ \mathrm{m}$ from its equilibrium at $t=0$ and with zero initial speed? Then the solution would be $y = 1.5 \cos t$. This is physically different from $y = \sin t$.
-In order to be able to account for different initial conditions, you need the general solution, which can be written in many ways, one of which is $y = A\sin (t) + B\cos (t)$. We could also write this as $y = C\sin(t+ \phi)$, as $C\sin(t+\phi) = C\cos(\phi)\sin(t) + C\sin(\phi)\cos(t)$. However, this is still very different from simply $y = \sin t$.<|endoftext|>
-TITLE: every subgroup of the quaternion group is normal
-QUESTION [8 upvotes]: Show that every subgroup of the quaternion group is normal, and find the isomorphism type of the corresponding quotient.
-I know that $Q_8$ has the subgroups $\langle i\rangle=\{1,i,-1,-i\}$, $\langle j\rangle=\{1,j,-1,-j\}$, $\langle k\rangle=\{1,k,-1,-k\}$, $\langle -1\rangle=\{1,-1\}$. So basically, I have to prove that every one of these subgroups is a normal subgroup and find the isomorphism type of the corresponding quotient. Would anyone have an idea how I can get started? Keep in mind that we have not gone over Lagrange's theorem yet and have not proved that every subgroup of index $2$ is normal.
-
-REPLY [6 votes]: It's easy to show that $\langle -1\rangle$ is normal, because the elements $1$ and $-1$ commute with each element. 
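Since $Q_8$ has only eight elements, every one of these conjugation checks can also be brute-forced; a hedged sketch, encoding quaternions as 4-tuples $(w,x,y,z)$ for $w+xi+yj+zk$ (an encoding of my own choosing):

```python
# Quaternions encoded as 4-tuples (w, x, y, z) meaning w + x*i + y*j + z*k.
def qmul(p, q):
    """Hamilton product of two quaternions."""
    w1, x1, y1, z1 = p
    w2, x2, y2, z2 = q
    return (w1*w2 - x1*x2 - y1*y2 - z1*z2,
            w1*x2 + x1*w2 + y1*z2 - z1*y2,
            w1*y2 - x1*z2 + y1*w2 + z1*x2,
            w1*z2 + x1*y2 - y1*x2 + z1*w2)

def qinv(q):
    """Inverse in Q8: every element is a unit quaternion, so conjugate."""
    w, x, y, z = q
    return (w, -x, -y, -z)

def neg(q):
    return tuple(-c for c in q)

one, i, j, k = (1, 0, 0, 0), (0, 1, 0, 0), (0, 0, 1, 0), (0, 0, 0, 1)
Q8 = [one, neg(one), i, neg(i), j, neg(j), k, neg(k)]
subgroups = [
    {one, neg(one), i, neg(i)},   # <i>
    {one, neg(one), j, neg(j)},   # <j>
    {one, neg(one), k, neg(k)},   # <k>
    {one, neg(one)},              # <-1>
]
assert qmul(i, j) == k and qmul(j, i) == neg(k)  # sanity check of the table
for H in subgroups:
    for g in Q8:
        # normality: g H g^{-1} = H for every g in Q8
        assert {qmul(qmul(g, h), qinv(g)) for h in H} == H
```

The loop checks $gHg^{-1}=H$ for each listed subgroup $H$ and every $g\in Q_8$, which is exactly the normality condition verified by hand in this answer.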
-Consider $\langle i\rangle$; you just need to prove $xix^{-1}\in\langle i\rangle$, for every $x\in Q_8$, because you know $x1x^{-1}=1\in \langle i\rangle$ and $x(-1)x^{-1}=-1\in\langle i\rangle$; also $x(-i)x^{-1}=-xix^{-1}$, so $x(-i)x^{-1}\in\langle i\rangle$ as soon as $xix^{-1}\in\langle i\rangle$. -The statement is obviously true for $x=1,-1,i,-i$. Try with $j$: -$$ -jij^{-1}=ji(-j)=-j(ij)=-jk=-i -$$ -Similarly, -$$ -kik^{-1}=ki(-k)=-(ki)k=-jk=-i -$$ -Apply the same for $\langle j\rangle$ and $\langle k\rangle$. - -On the other hand, you don't need to know special theorems for deducing that a subgroup $N$ of index $2$ in a group $G$ is normal. Indeed, the left cosets are $N$ and $gN=G\setminus N$ (where $g\notin N$), whereas the right cosets are $N$ and $Ng=G\setminus N$. So, for any $x\in N$, $xN=N=Nx$, and for $x\notin N$, $xN=G\setminus N=Nx$.<|endoftext|> -TITLE: Problems understanding proof of s-m-n Theorem using Church-Turing thesis -QUESTION [7 upvotes]: I am reading Barry Cooper's Computability Theory and he states the following as the s-m-n theorem: -Let $f:\mathbb{N}^2\mapsto\mathbb{N}$ be a (partial) recursive function. Then there exists a computable function $g(x)$ such that $f(x,y) = \Phi_{g(x)}(y)$ for all $x,y \in \mathbb{N}$. Here, $\Phi_n$ refers to the $n$th recursive function. -The proof goes like this: -For a fixed $x_0$, the function $h_{x_0}(y) = f(x_0,y)$ is computable (this I agree with) and so there exists an index $e_{x_0}$ so that $h_{x_0} = \Phi_{e_{x_0}}$ (this I also agree with). -So, the function $g$ that to each natural $x$ assigns such index $e_x$ (so that $h_x = \Phi_{g(x)}$) is computable (this is the part I don't understand). -When saying that $g$ is computable it means that we can describe an algorithm that takes $x$ as an input and will output the desired $g(x)$. I don't see how such algorithm can be described. (I guess my confusion has to do with the "there exists an" that I placed in bold letters.) 
-If it helps, we are using Godel numberings of Turing Machines to index the recursive functions. - -REPLY [5 votes]: I guess it should be $h_{x_0}(y) = f(x_0, y)$. I'm assuming your Turing machines have unary inputs and use states $q_0, q_1, \dots$. I describe the general idea, and if you're allowed to appeal to Church-Turing thesis, then it should be convincing, otherwise you should write down the program for the Turing machine described below (it depends on $x$) and plug it into your specific Godel numbering function. In the latter case you will obtain $g(x)$ explicitly and see that it is not only computable but also primitive-recursive (provided that your numbering is). -Roughly speaking $\Phi_{g(x)}(y)$ does the following: it prints $x$ on the tape before $y$ and runs the (slightly modified) program for $f(x, y)$ on these inputs. So we need to check that the program for this routine depends computably and uniformly on $x$. Let's go into some more details. -Let $P$ be a program that computes $f(x, y)$. We describe the program that computes $\Phi_{g(x)}(y)$. It has $y$ on the tape. At first it executes simple "insert $x$" program as follows. It moves head left by $x+1$ cells, writes $x$ and places head at the first digit of $x$. So now we have $x$ and $y$ on the tape. This can be done using (not more than) $2x$ states $q_0, q_1, \dots, q_{2x}$ (you can replace $2x$ with any computable function $h(x)$ that bounds the number of states needed to perform these actions). After that it executes $P'$ on these inputs, where $P'$ is obtained from $P$ by adding $2x + 1$ to all state indices (to ensure that its set of states doesn't intersect the set of states for the "inserting $x$" program). -Now $g(x)$ is the number of the Turing machine which does all the stuff described above. To convince yourself that $g$ is computable, you need to check that the program above depends on $x$ in a computable uniform way. 
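The uniformity is perhaps easiest to believe in a modern programming language, where an "index" is just program text and $g$ is plain string manipulation. A toy Python sketch (source strings stand in for Gödel numbers; this analogy is mine, not the book's formalism):

```python
def smn(f_source, x):
    """Given source text for a two-argument function f(x, y) and a fixed x,
    return source text for the one-argument function y |-> f(x, y).
    The new 'index' is produced from x by simple string manipulation,
    mirroring the 'insert x, then run P' machine described above."""
    return f"lambda y: ({f_source})({x!r}, y)"

f_src = "lambda x, y: 10 * x + y"
g_of_3 = smn(f_src, 3)   # an index: just a string, computed from x
h = eval(g_of_3)         # "load" the program with that index
assert h(4) == 34        # h(y) = f(3, y)
```

Here `smn` itself is obviously computable, since it only concatenates strings; that is the whole content of the theorem, and the Turing machine argument below makes the same point for a concrete numbering.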
Indeed, it is easy to write the "insert $x$" program and to calculate its number in terms of $x$. If you have the number of $P$, then the number of $P'$ is obtained from it in a computable way: $P'$ differs only in its state indices, and these are changed using the computable function $(i, x) \mapsto i + 2x + 1$. After that, the number of the program obtained by concatenating the "insert $x$" program and $P'$ (given that they have non-intersecting sets of states) can also be easily computed. Thus, $g(x)$ is computable.<|endoftext|>
-TITLE: Any set of linearly independent and commuting vector fields CAN be realized (locally) as partial derivatives of a local coordinate
-QUESTION [6 upvotes]: I am a beginner in differential geometry. I wonder if the following proposition is true:
-
-
-Let $M$ be an n-dimensional manifold and
- $X_1, \dots ,X_m(m \le n)$ be m commuting and linearly independent vector fields in a neighborhood of a point $p$ in M, then there is a coordinate system $(U, x_1, . . . , x_n)$ around $p$ such that
- $X_1 =\frac{∂}{∂x_1},...,X_m =\frac{∂}{∂x_m}$ on $U$.
-
-
-Since $X_1, \dots ,X_m(m \le n)$ are commuting, we have $[X_i,X_j]=0$ for any $i,j$.
-When $m=1$, this is Proposition 1.53 (page 40) in Warner's book.
-Honestly speaking, I don't even know how to prove the second simplest case, when $m=2.$ If this proposition is true, then I think it should also be a theorem in some book. Every hint, solution, or reference will be appreciated!
-
-REPLY [8 votes]: Yes, your proposition is true. This can be proven using the fact that two vector fields commute iff. their flows commute:
-$$
-[X,Y] = 0 \Leftrightarrow \phi^X_t\circ\phi_s^Y=\phi^Y_s\circ\phi^X_t.
-$$
-First of all, the proof can be found in J. M. Lee's "Introduction to Smooth Manifolds", Thm. 9.46.
-To explain the idea, I will assume that $m=n$ (for the general case, see Lee). 
Given a point $p\in U$ (the "origin"), the flows define maps $\phi^i|_p: I_i\rightarrow U$ onto $U$ which are your coordinate lines through $p$, and the tangent vectors are exactly the $X_i$ by definition. The flows span a coordinate grid on $U$, and the corresponding chart is then defined by the inverse of $\Phi(t^1,\dots,t^n)=\phi^1_{t^1}\circ\dots\circ\phi^n_{t^n}(p)$. To prove that this is a chart it is crucial that the flows commute, because this guarantees that a point $q\in U$ has unique coordinates $(t^1,\dots,t^n)$ and $\Phi$ is invertible.
-To see this, consider the pushforward of the cartesian vector fields $\partial_{t^i}$ at some point $t_0$:
-$$
-(\Phi_*(\partial_{t^i}|_{t_0}))f=\partial_{t^i}|_{t_0}f(\Phi(t^1,\dots,t^n))=\partial_{t^i}|_{t_0}f(\phi^1_{t^1}\circ\dots\circ\phi^n_{t^n}(p))=\partial_{t^i}|_{t_0}f(\phi^i_{t^i}\circ\phi^1_{t^1}\circ\dots\circ\hat{\phi}^i_{t^i}\circ\dots\circ\phi^n_{t^n}(p))=:\partial_{t^i}|_{t_0}f(\phi^i_{t^i}(q))=X_i|_{\Phi(t_0)}f.
-$$
-Here we used the commutativity of the flows and that $t^i\mapsto\phi^i_{t^i}(q)$ is an integral curve of $X_i$. Hence $\Phi_*$ maps the $\partial_{t^i}$ onto $X_i$. In particular $\Phi_*|_0$ maps $\partial_{t^1}|_0,\dots,\partial_{t^n}|_0$ to $X_1|_p,\dots,X_n|_p$, which are bases of $T_0\mathbb{R}^n$ and $T_pM$, respectively. Hence, $\Phi_*|_0$ is an isomorphism and by the inverse function theorem, $\Phi$ is a local diffeomorphism, thus $\Phi$ defines a chart on some neighbourhood of $0\in\mathbb{R}^n$.<|endoftext|>
-TITLE: For what kind of infinite subset A of $\mathbb Z$ and irrational number $\alpha$, is $\{e^{k\alpha \pi i}: k\in A \}$ dense in $S^1 $?
-QUESTION [17 upvotes]: There is a well-known result saying that $\{e^{k\alpha \pi i}: k\in \mathbb Z \}$ is dense in $S^1$. By density, we can select an infinite subset $A$ of $\mathbb Z$ such that $\{e^{k\alpha \pi i}: k\in A \}$ is not dense in $S^1$ any more. 
So there is a natural question: - - -For what kind of infinite subset A of $\mathbb Z$ and irrational number $\alpha$, is $\{e^{k\alpha \pi i}: k\in A \}$ dense in $S^1 $ ? - - -I feel like if A contains a collection of an infinite Arithmetic sequence(like 1,3,5,...), then $\{e^{k\alpha \pi i}: k\in A \}$ should be dense in $S^1$(I don't know how to prove this). But this doesn't look like a necessary condition. I think such a "basic" problem must be studied before, so every solution or partial solution(for specific types of A) or reference will be appreciated! - -REPLY [2 votes]: It is certainly not necessary that $A$ contains an infinite arithmetic sequence, nor even an arbitrarily long arithmetic sequence. In fact, the set $A$ can be as sparse as you like in the following sense: for any irrational $\alpha$ and any sequence of natural numbers $n_1 < n_2 < n_3 < ...$ we can find a subset $A$ of the natural numbers with its elements listed in increasing order as $A=\{a_1,a_2,a_3,...\}$ such that $a_j \ge n_j$ for all $j$ and such that $\{e^{k \alpha\pi i} : k \in A\}$ is dense. -To prove this, choose $B_1,B_2,...$ to be a countable base for the topology on $S^1$. Then define the elements $a_j$ of $A$ by induction subject to the stated conditions plus the additional condition that $e^{a_j \alpha \pi i} \in B_j$. The choice is always possible because at each stage $j$ of the induction, only finitely many values of $k$ are disallowed by the previous stages, and yet what remains after removing those finitely many values of $e^{k \alpha \pi i}$ is still a dense subset of $S^1$ and hence there are infinitely many such values to choose from in $B_j$.<|endoftext|> -TITLE: Simple questions about infinite products -QUESTION [5 upvotes]: We just learned about infinite products in class. There's no textbook for the course so I am struggling with the following two basic problems. -Let $ a_n(z) = 1 + b_n(z), |b_n(z)| \leq \lambda < 1, z \in E $. 
Prove that
-(1) $ \prod (1+|b_n|) $ converges uniformly on $ E $ if and only if $ \sum |b_n| $ does;
-(2) If $ \prod (1+|b_n|) $ converges uniformly on $ E $, then $ \prod (1+b_n) $ does too.
-Part (1) seems to be just taking logarithms, because $ \prod (1+|b_n|) $ converging uniformly is equivalent to $ \sum \log (1+|b_n|) $ converging uniformly, but $ \log (1+|b_n|) \sim |b_n| $ as $ b_n \to 0 $. Is this correct, or does it work only for pointwise convergence? I am not sure how to do part (2).
-Any help is appreciated.
-
-REPLY [2 votes]: Hint for (1):
-Using $1 + x \leqslant e^x$ for $x > 0$ we have
-$$\sum_{k = n+1}^m |b_k(z)| \leqslant \prod_{ k = n+1}^m (1 + |b_k(z)|) \leqslant \exp \left(\sum_{k = n+1}^m |b_k(z)| \right).$$
-Note that by the Cauchy criterion $\sum |b_k(z)|$ converges uniformly if and only if for any $\epsilon > 0$ there exists $N(\epsilon) \in \mathbb{N}$ such that for all $m \geqslant n \geqslant N(\epsilon)$ and all $z \in E$ we have
-$$\sum_{k = n+1}^m |b_k(z)| < \epsilon$$
-Part (2) is a bit tricky.
-Let
-$$P_n = \prod_{k=1}^n [1 + b_k(z)], \\ R_n = \prod_{k=1}^n [1 + |b_k(z)|]. $$
-Then
-$$|P_n - P_{n-1}| = |(1 +b_1(z)) \ldots (1 + b_{n-1}(z))b_n(z)| \leqslant (1 +|b_1(z)|) \ldots (1 + |b_{n-1}(z)|)|b_n(z)| = R_n - R_{n-1}.$$
-If $R_n$ is uniformly convergent, then so too is the telescoping sum $\sum (R_n - R_{n-1})$. By the comparison test, the telescoping sum $\sum (P_n - P_{n-1})$ is uniformly convergent. Hence $P_n$ is uniformly convergent.<|endoftext|>
-TITLE: $|A \cup B| = \mathfrak{c}$, then $A$ or $B$ has cardinality $\mathfrak{c}$.
-QUESTION [8 upvotes]: $A$ and $B$ are sets. $|A \cup B| = \mathfrak{c}$, prove that $A$ or $B$ has cardinality $\mathfrak{c}$.
-This is an exercise problem from my textbook. It's easy if I assume CH to be true. But how can I prove this without it? I don't know how to eliminate the case where $A$ and $B$ both have cardinality larger than that of the natural numbers and strictly smaller than that of the reals. 
- -REPLY [5 votes]: It has been suggested in the comments that you should use the theory of well-ordered sets or Zorn's lemma. Here, instead, is a simple proof using the axiom of choice directly. -I assume that you are familiar with the Cantor-Bernstein theorem. Because of that, it suffices to prove that $|A|\ge\mathfrak c$ or $|B|\ge\mathfrak c.$ I also assume you know that $|\mathbb R\times\mathbb R|=|\mathbb R|=\mathfrak c$; thus we may assume that $A\cup B=\mathbb R\times\mathbb R.$ -Case 1. If $A$ is disjoint from some horizontal line in the plane $\mathbb -R\times\mathbb R,$ then $B$ contains a horizontal line, and so $|B|\ge\mathfrak c.$ -Case 2. If $A\cap L\ne\emptyset$ for every horizontal line $L,$ then by the axiom of choice there is a set $S\subseteq A$ such that $|S\cap L|=1$ for every horizontal line $L.$ Clearly $|S|=\mathfrak c$ and so $|A|\ge\mathfrak c.$ -P.S. This argument shows that, if $a\lt c$ and $b\lt c,$ then $a+b\lt c\cdot c.$ This is a special case of Kőnig's theorem: if $a_i\lt b_i$ for each $i,$ then $\sum_i a_i\lt\prod_i b_i.$<|endoftext|> -TITLE: Fitting a parabola to separate two classes of points in the plane -QUESTION [13 upvotes]: Suppose we have a set of points $(x,y)$ in the plane where each point is either boy or a girl. Does there exists a randomized linear-time algorithm to determine if we can fit a parabola (given by a polynomial $ax^2+bx+c$) that separates the boys from girls in the plane? -In addition to finding such a parabola if one exists, how can the algorithm detect if no such parabola exists and terminate? - -REPLY [6 votes]: A Support Vector Machine might be able to do it. Suppose we map your 2D $(x,y)$ space to 3D $(x,x^2,y)$. An SVM will find a plane in this space, of the form $a x + b x^2 + c y = d$ which optimally separates the classes. Solving for y we get $y = \frac{d}{c} - \frac{a}{c} x - \frac{b}{c} x^2$ which is indeed the equation of a parabola. 
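To make the lifting trick concrete without an SVM library, here is a hedged sketch that runs a plain perceptron in the lifted space $(x, x^2, y)$ on made-up data; any linear separator found there is automatically a parabola in the original plane (a perceptron is not a worst-case linear-time algorithm, so this only illustrates the geometry):

```python
def lift(p):
    """Map a point (x, y) of the plane to the lifted features (x, x^2, y, 1)."""
    x, y = p
    return (x, x * x, y, 1.0)

def perceptron(points, labels, epochs=1000):
    """Find w with sign(w . lift(p)) == label for all points, if the lifted
    data is linearly separable; returns None if no separator was found."""
    w = [0.0, 0.0, 0.0, 0.0]
    for _ in range(epochs):
        changed = False
        for p, t in zip(points, labels):
            f = lift(p)
            if t * sum(wi * fi for wi, fi in zip(w, f)) <= 0:
                w = [wi + t * fi for wi, fi in zip(w, f)]
                changed = True
        if not changed:
            return w
    return None

# made-up data: +1 labels above the parabola y = x^2, -1 labels below it
pts = [(0, 1), (1, 2), (-1, 2), (2, 5), (0, -1), (1, 0), (-1, 0), (2, 3)]
lbl = [1, 1, 1, 1, -1, -1, -1, -1]
w = perceptron(pts, lbl)
assert w is not None
# w = (a, b, c, d) encodes the separating parabola y = -(a*x + b*x**2 + d)/c
```

Afterwards one verifies, in linear time, that every point lands on the correct side of the recovered parabola.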
SVMs can be made robust against the possibility that a separating plane (parabola) does not exist, they will still find the "best" solution in some sense, and afterwards you can check (in linear time) whether the solution actually separates the classes (also see this question).
-I am not certain whether you can train such an SVM in linear time. This paper suggests that it can be done.<|endoftext|>
-TITLE: Analytic function on annulus bounded by $\log 1/|z|$ is zero
-QUESTION [5 upvotes]: Let $f(z)$ be an analytic function on $A(0,1)=\{z\in\mathbb{C}\mid0<|z|<1\}$ such that $$\forall z\in A(0,1)\quad|f(z)|\le\log\bigg(\frac 1 {|z|}\bigg).$$ Prove $f\equiv 0.$
-
-Define $g(z)=e^{f(z)}$ and note that $$\forall z\in A(0,1),\quad |g(z)|=e^{\Re f(z)}\le e^{|f(z)|}\le e^{\log |z|^{-1}}=\frac{1}{|z|}.$$
-Now I don't know how to prove $g\equiv c$. Suppose I did that, so $g\equiv c$ and $f$ is constant; since $|f(z)|\le\log(1/|z|)\to 0$ as $|z|\to 1$, the constant must be $0$, and we get the result.
-As said, I'm struggling with proving that $g$ is constant. I thought of doing it by applying the Cauchy integral formula, but I only succeeded in bounding the derivative.
-How can I prove $g$ is constant?
-
-REPLY [4 votes]: The given bound for $|f(z)|$ implies
-$$
- \lim_{z \to 0} z \, f(z) = 0
-$$
-and therefore (Riemann's theorem)
-that $f$ has a removable singularity at $z=0$,
-i.e. it can be continued to a holomorphic function in the unit disk $\Bbb D$.
-Now you can apply the maximum modulus principle and conclude that
-for all $z \in \Bbb D$ and $|z| < r < 1$,
-$$
- |f(z)| \le \max \{ |f(\zeta)| : |\zeta| = r \} \le \log \frac 1r
-$$
-and with $r \to 1$ it follows that $f(z) = 0$.<|endoftext|>
-TITLE: If a matrix's eigenvalues are all $1$, is the matrix the identity?
-QUESTION [7 upvotes]: It's that time of night when my girl and I bicker about matrices. Tonight we ponder whether a square matrix of dimension $d$ which has a spectrum of $1$'s with multiplicity $d$ must be $I$.
-If such a matrix is diagonalizable, then it must be $I$. 
But we're not sure what would happen if it is not diagonalizable, or whether that is even a possibility.
-Relatedly, we're having a tough time thinking up a matrix other than $I$ whose spectrum is all $1$'s.
-
-REPLY [3 votes]: To close the question, I'll answer referring to gniourf_gniourf's comments.
-No. If the eigenvalues of a matrix are all $1$, then the matrix need not be the identity.
-Counterexample: $\begin{pmatrix}1&1\\0&1\end{pmatrix}$
-If the eigenvalues of a matrix are all $1$ and it is diagonalizable, then it is the identity.<|endoftext|>
-TITLE: Find the ratio of integrals $\int_0^1 (1\pm x^4)^{-1/2}\,dx$
-QUESTION [7 upvotes]: How to find this ratio
-
-$$\frac{\displaystyle \int_{0}^{1}\frac{1}{\sqrt{1+x^{4}}}\mathrm{d}x}{\displaystyle \int_{0}^{1}\frac{1}{\sqrt{1-x^{4}}}\mathrm{d}x}$$
-
-without evaluating each integral? The integrals themselves can be expressed as elliptic integrals, but the ratio may be simpler.
-
-REPLY [14 votes]: For the numerator, change variables to $t = \tan\frac{\theta}{2}$; we get
-$$\int_0^1 \frac{dt}{\sqrt{1+t^4}} =
-\int_0^1 \frac{dt}{\sqrt{(1+t^2)^2 - 2t^2}} =
-\frac12\int_0^1 \frac{1}{\sqrt{1 - \frac12\left(\frac{2t}{1+t^2}\right)^2}}\frac{2dt}{1+t^2}\\
-= \frac12 \int_0^{\pi/2}\frac{d\theta}{\sqrt{1-\frac12\sin^2\theta}}
-= \frac12 K\left(\frac{1}{\sqrt{2}}\right)
-$$
-For the denominator, change variables to $t = \cos\theta$; we have
-$$\int_0^1 \frac{dt}{\sqrt{1-t^4}}
-= \int_{\pi/2}^0\frac{d\cos\theta}{\sqrt{(1-\cos^2\theta)(1+\cos^2\theta)}} = \int_0^{\pi/2}\frac{d\theta}{\sqrt{1+\cos^2\theta}}\\
-= \int_0^{\pi/2}\frac{d\theta}{\sqrt{2-\sin^2\theta}}
-= \frac{1}{\sqrt{2}}K\left(\frac{1}{\sqrt{2}}\right)
-$$
-where $$K(k) = \int_0^1 \frac{dt}{\sqrt{(1-t^2)(1-k^2t^2)}} = \int_0^{\pi/2} \frac{d\theta}{\sqrt{1-k^2\sin^2\theta}}$$
-is the complete elliptic integral of the first kind.
-Comparing the two expressions, it is clear that the desired ratio is $\displaystyle\;\frac{1}{\sqrt{2}}$. 
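The ratio is easy to sanity-check numerically with the standard library; substituting $x=\sin\theta$ in the denominator (the same trigonometric substitution used for the denominator above) removes the endpoint singularity at $x=1$:

```python
import math

def simpson(f, a, b, n=2000):
    """Composite Simpson's rule on [a, b] with n (even) subintervals."""
    h = (b - a) / n
    s = f(a) + f(b)
    s += 4 * sum(f(a + (2 * m - 1) * h) for m in range(1, n // 2 + 1))
    s += 2 * sum(f(a + 2 * m * h) for m in range(1, n // 2))
    return s * h / 3

num = simpson(lambda x: 1 / math.sqrt(1 + x ** 4), 0.0, 1.0)
# denominator after x = sin(theta): integral of d(theta)/sqrt(1 + sin^2(theta)),
# the same value as the d(theta)/sqrt(1 + cos^2(theta)) form above
den = simpson(lambda t: 1 / math.sqrt(1 + math.sin(t) ** 2), 0.0, math.pi / 2)
assert abs(num / den - 1 / math.sqrt(2)) < 1e-9
```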
-Update -Thinking more about this, we can completely get rid of the use of elliptic integral. -For the denominator, if we change variable to $t = \frac{1-s^2}{1+s^2}$, -it becomes -$$ -\int_1^0 \frac{d\left(\frac{1-s^2}{1+s^2}\right)}{\sqrt{1 - \left(\frac{1-s^2}{1+s^2}\right)^4}} -= 4\int_0^1 \frac{sds}{\sqrt{(1+s^2)^4 - (1-s^2)^4}} -= 4\int_0^1 \frac{sds}{\sqrt{8s^2 + 8s^6}} -= \sqrt{2}\int_0^1 \frac{ds}{\sqrt{1+s^4}}$$ -This is nothing but $\sqrt{2}$ times the numerator!<|endoftext|> -TITLE: How to compute the pullback of $(2xy+x^{2}+1)dx+(x^{2}-y)dy$ along $f(u,v,w)=(u-v,v^{2}-w)$? -QUESTION [5 upvotes]: I'm trying to do my first pull-back of a differential form. -I know that $\omega=(2xy+x^{2}+1)dx+(x^{2}-y)dy$ is a differential form on $\mathbb{R}^{2}$. -I have $f : \mathbb{R}^{3} \to \mathbb{R}^{2}$ which is -$$f(u,v,w)=(u-v,v^{2}-w)$$ -and I have to calculate the pullback. I was told that by definition -$$(f^{*}\omega)(X) = \omega(f_{*}(X)),$$ -and so I calculated -$$f_{*}=\begin{pmatrix} -1 & -1 & 0\\ -0 & 2v & 1 -\end{pmatrix}$$ -But then I don't really know how to proceed. Should I take a general vector and calculate the form, should I substitute $x,y$ with $u,v,w$? Do you have a general recipe to proceed? - -REPLY [4 votes]: The nice thing about forms is that you are intuitively doing the right thing, which is: just plug in $x=u-v$ etc. as Andrew already wrote. -To see the connection with the formal definitions, note that the exterior derivative and pullbacks commute and hence $f^*dx=d(f^*x)=d(x\circ f)=d(u-v)$. Furthermore, pullbacks respect wedge products, so e.g. 
-$$f^*(x^2dx)=f^*(x^2 \wedge dx)=(f^*x^2) \wedge (f^*dx)=(u-v)^2d(u-v)$$<|endoftext|>
-TITLE: Integral $I=\int \frac{dx}{(x^2+1)\sqrt{x^2-4}} $
-QUESTION [16 upvotes]: Frankly, I don't have a solution to this, not even an incorrect one, but this integral looks a lot like the standard type of integral $I=\int\frac{Mx+N}{(x-\alpha)^n\sqrt{ax^2+bx+c}}$, which can be solved using the substitution $x-\alpha=\frac{1}{t}$, so I tried to find a substitution that would bring this integral to exactly that standard form, so I could use the substitution I mentioned. I tried the following two substitutions:
-$x^2-4=t^2 \Rightarrow x^2=t^2+4 \Rightarrow x=\sqrt{t^2+4}$ then I had to determine $dx$
-$2xdx=2tdt \Rightarrow dx=\frac{tdt}{\sqrt{t^2+4}}$
-from here I got:
-$\int\frac{dt}{(t^2+5)\sqrt{(t^2+4)}}$ but I have no idea what I could do with this, so I tried a different substitution
-$x^2+1=t^2$ and then, by applying the same pattern I used with the previous substitution, I got this integral
-$\int\frac{dt}{t\sqrt{(t^2-1)(t^2-5)}}$ but again, I don't know what to do with this, so I could use some help.
-
-REPLY [3 votes]: The curve $t^2=x^2-4$ is a hyperbola, which can be parametrized by a single value. Rewrite this equation as $(x+t)(x-t)=4$, and set $y=x+t$. Then $4/y=x-t$, and we have
-$$
-x=\frac{y+\frac{4}{y}}{2},\;\;\;\;t=\frac{y-\frac{4}{y}}{2}.
-$$
-Compute $dx=(1/2-2 y^{-2})dy$, and the integral becomes
-$$
-\int \frac{dx}{(x^2+1)\sqrt{x^2-4}}=\int \frac{\frac{1}{2}-\frac{2}{y^2}}{\left(\left(\frac{y+\frac{4}{y}}{2}\right)^2+1\right)\left(\frac{y-\frac{4}{y}}{2}\right)}dy=\int \frac{4y\, dy}{y^4+12 y^2+16}.
-$$
-This can be computed using partial fractions.<|endoftext|>
-TITLE: Curve-fitting using circles
-QUESTION [5 upvotes]: I'm working for a firm that can only use straight lines and (parts of) circles.
-Now I would like to do the following: imagine a square of size $5\times5$. 
I would like to expand it with $2$ in the $x$-direction and $1$ in the $y$-direction. The expected result is a rectangle of size $7\times9$. Until here, everything is OK. -Now I would like the edges to be rounded, but as the length expanding is different in $x$ and $y$ direction, the rounding should be based on ellipses, not on circles, but I don't have ellipses to my disposal, so I'll need to approximate those ellipses using circular arcs. -I've been looking on the internet for articles about this matter (searching for splines, Bézier, interpolation, ...), but I don't find anything. I have tried myself to invent my own approximation using standard curvature calculations, but I don't find a way to glue the curvature circular arcs together. -Does somebody know a way to approximate ellipses using circular arcs? -Thanks -Dominique - -REPLY [2 votes]: If you want to control the error in the approximation, then biarc interpolation/approximation is what you need, as indicated in the answer from @Paul H. -A biarc curve is usually constructed from two points and two tangent vectors. This actually leaves one degree of freedom unfixed, and there are several different ways to fix this. It can be shown that the "junction" where the two arcs meet lies somewhere on a certain circle, and you can choose where. See this paper for more than you probably want to know. -To get a good approximation, you'll probably have to use several biarc curves, strung end-to-end. Start out with the end-points and end-tangents of your quarter ellipse, and construct a single biarc. Calculate 10 or 15 points along the ellipse, and measure their distance to the biarc, to estimate the approximation error. If the error is too large, compute a point and tangent at some interior location on the ellipse, and approximate with two biarcs. Rinse and repeat until you have an approximation that's satisfactory. More details in this paper. 
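The "sample points and measure distance" step above can be sketched directly. In this hedged example a single circular arc is fitted through the two endpoints and the midpoint of a hypothetical $2{:}1$ quarter ellipse; a real biarc would also match the end tangents, so treat this only as the error-estimation skeleton:

```python
import math

def circle_through(A, B, C):
    """Center and radius of the circle through three non-collinear points."""
    (ax, ay), (bx, by), (cx, cy) = A, B, C
    d = 2 * (ax * (by - cy) + bx * (cy - ay) + cx * (ay - by))
    ux = ((ax**2 + ay**2) * (by - cy) + (bx**2 + by**2) * (cy - ay)
          + (cx**2 + cy**2) * (ay - by)) / d
    uy = ((ax**2 + ay**2) * (cx - bx) + (bx**2 + by**2) * (ax - cx)
          + (cx**2 + cy**2) * (bx - ax)) / d
    return (ux, uy), math.hypot(ax - ux, ay - uy)

# hypothetical 2:1 quarter ellipse to be approximated
def ellipse(t):
    return (2.0 * math.cos(t), 1.0 * math.sin(t))

(cx_, cy_), r = circle_through(ellipse(0.0), ellipse(math.pi / 4),
                               ellipse(math.pi / 2))

# the error-estimation step: sample the ellipse, measure distance to the arc
err = max(abs(math.hypot(x - cx_, y - cy_) - r)
          for x, y in (ellipse(m * math.pi / 28) for m in range(15)))
# if err exceeds the tolerance, split the ellipse and fit more arcs per piece
```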
-If you just want a simple approximation, and you don't care very much about the error, then there is a technique that draftsmen have used to draw ellipses for decades, sometimes called the "four center" method. See section 5.2.0 of this document, or this web page.<|endoftext|>
-TITLE: When does a linear combination of trigonometric functions have an axis of symmetry?
-QUESTION [6 upvotes]: I am trying to find out when a linear combination of $\sin(ax)$ and $\cos(bx)$ has an axis of symmetry.
-Clearly, $\sin(x)+\cos(x)$ has an axis of symmetry at $\pi/4$. It seems as if $\sin(3 x)+\cos(x)$ does not have an axis of symmetry (could not prove it, but a plot suggests it).
-My question is: Is it possible to claim some conditions on the parameters $a$ and $b$ guaranteeing that a linear combination of $\sin(a x)$ and $\cos(b x)$ has an axis of symmetry?
-
-REPLY [2 votes]: Let's see what happens for $\sin(ax)+\cos(x)$, $a\neq1$. For your example, asking for an axis of symmetry is the same as asking for $p \in \mathbb{R}$ such that
-$$ \sin(ax+ap) - \sin(-ax + ap) + \cos(x + p ) - \cos(-x + p) = 0 \quad \forall x \in \mathbb{R} $$
-We can differentiate this as many times as we want, dropping the constants multiplicatively and obtaining more equations; let's differentiate $4n$ times:
-$$ a^{4n}\left( \sin(ax+ap) - \sin(-ax + ap) \right) + \cos(x + p ) - \cos(-x + p) = 0 \quad \forall x \in \mathbb{R} $$
-With this process it is easy to prove that the only way to keep this equality true for all $n \in \mathbb{N},x \in \mathbb{R}$ is that each pair of terms be equal to $0$; thus we need $p$ such that
-$$ \sin(ax + ap) = \sin(-ax + ap), \quad \cos(x + p) = \cos(-x + p) \quad \forall x \in \mathbb{R}$$
-From here it follows that for the sine, $p \in \{ \frac{\pi}{2a} + \frac{\pi k}{a}, k\in \mathbb{N} \}$, and for the cosine, $p \in \{ \pi k, k \in \mathbb{N} \}$. 
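These symmetry conditions are easy to probe numerically; a hedged stdlib sketch checking the two examples from the question ($a=1$ has an axis at $\pi/4$, $a=3$ has none):

```python
import math

def asymmetry(f, p, n=200):
    """max |f(p+x) - f(p-x)| over a grid; it is 0 iff x = p is an axis of symmetry."""
    return max(abs(f(p + x) - f(p - x))
               for x in (2 * math.pi * m / n for m in range(n)))

f1 = lambda x: math.sin(x) + math.cos(x)        # a = 1: axis at pi/4
assert asymmetry(f1, math.pi / 4) < 1e-12

f3 = lambda x: math.sin(3 * x) + math.cos(x)    # a = 3: no axis anywhere
best = min(asymmetry(f3, 2 * math.pi * m / 1000) for m in range(1000))
assert best > 0.5   # every candidate axis stays far from symmetric
```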
Just to have fun, we can search for all $a$'s such that there is an axis of symmetry; after some manipulations you can see there is one if and only if $a \neq 1$ can be written as a fraction with an odd numerator and an even denominator.
-For multiple terms, you can separate each different scale and use the same criteria; in general I would not hope to find symmetry axes unless the constants are chosen very carefully.<|endoftext|>
-TITLE: There is no sequence such that $a_{a_n}=a_{n+1}a_{n-1}-a_{n}^2$
-QUESTION [6 upvotes]: Prove that there is no infinite sequence of natural numbers such that $a_{a_n}=a_{n+1}a_{n-1}-a_{n}^2$ for all $n\geq 2$.
-
-This question is from a Belarusian math contest and any help is appreciated.
-
-REPLY [2 votes]: Taking vadim123's idea:
-We have $a_{n+1}>a_{n}$ or $a_{n-1}>a_{n}$.
-If $a_{n-1}>a_{n}$, then since $a_{a_{n-1}}>0$ we get $a_{n}a_{n-2}>a_{n-1}^2$, from which $a_{n-2}>a_{n-1}>a_{n}$.
-Following this reasoning we have that the sequence $a_{n}$ is strictly decreasing,
-which is a contradiction since all terms of this infinite sequence are natural numbers.
-So $a_{n-1}< a_{n}$.
-As vadim123 showed previously, $a_n\le a_{n+1}$ for all $n$. Let $N>1$ be such that $a_{n}>N$.
-$a_{a_{n}}\geq a_{n}$
-$a_{n+1}a_{n-1}>a_{n}^2+N$
-$a_{n+1}a_{n-1}>Na_{n-1}+N$
-$a_{n+1}>N+N/a_{n-1}$
-$a_{n}>N+N/a_{n-2}$ and since $a_{n-2}<a_{n-1}$, also $a_{n}>N+N/a_{n-1}$.
-Again, from $a_{a_{n}}\geq a_{n}$ we have
-$a_{n+1}a_{n-1}>a_{n}^2+N+N/a_{n-1}$
-$a_{n+1}a_{n-1}>Na_{n-1}+N+N/a_{n-1}$
-$a_{n+1}>N+N/a_{n-1}+N/a_{n-1}^2$
-$a_{n}>N+N/a_{n-2}+N/a_{n-2}^2$
-$a_{n}>N+N/a_{n-1}+N/a_{n-1}^2$.
-Let $L=\sum_{i=0}^{\infty }1/a_{n}^i>1$.
-Repeating the above process over and over we can see that
-$a_{n}\geq NL$.
-Again, from $a_{a_{n}}\geq a_{n}$ we have
-$a_{n+1}a_{n-1}\geq a_{n}^2+NL$
-$a_{n+1}a_{n-1}>NLa_{n-1}+NL$
-$a_{n+1}>NL+NL/a_{n-1}$
-$a_{n}>NL+NL/a_{n-2}$
-$a_{n}>NL+NL/a_{n-1}$. 
-Repeating the above process over and over we have:
-$a_{n}\geq NL^k$ for all positive integers $k$, which is a contradiction since $\lim_{k \to \infty}NL^k=\infty$.<|endoftext|>
-TITLE: Integrating $\int\frac{x^2-1}{(x^2+1)\sqrt{x^4+1}}\,dx$
-QUESTION [5 upvotes]: I came across a question today...
-
-Find $$\displaystyle\int\dfrac{x^2-1}{(x^2+1)\sqrt{x^4+1}}\,dx$$
-
-How to do this? I tried to take $x^4+1=u^2$ but no result. Then I tried to take $x^2+1=\frac{1}{u}$, but even that didn't work. Then I manipulated it to $\int \dfrac{1}{\sqrt{1+x^4}}\,dx-\int\dfrac{2}{(x^2+1)\sqrt{1+x^4}}\,dx$, but I have no idea how to solve it. WolframAlpha gives some imaginary result... but the answer is $\dfrac{1}{\sqrt2}\arccos\dfrac{x\sqrt2}{x^2+1}+C$
-
-REPLY [4 votes]: With (this seems to be the substitution of the day)
-$$
-t=\frac{x}{\sqrt{1+x^4}}
-$$
-you get
-$$
--\int\frac{1}{1+2t^2}\,dt.
-$$
-I'm sure you can take it from here.<|endoftext|>
-TITLE: Projection of Antoine's necklace
-QUESTION [15 upvotes]: Antoine's necklace is a pathological embedding of the Cantor set into $\Bbb R^3$. The second iteration looks like this:
-
-Interestingly, the complement $\Bbb R^3\setminus\rm A$ is not simply connected. This property is preserved by ambient isotopies. (Thanks, @MikeMiller!) Anything with an ambient isotopy to what's defined in the article, then, should be considered to be an Antoine's necklace.
-What happens when you project the necklace onto a plane? That is, if $\pi:\Bbb R^3\to\Bbb R^2$ is a projection, what is $\pi({\rm A})$? I'm sure you don't get another Cantor set. In fact, it seems like it would always be connected. My guess is that it must be homeomorphic to the Sierpiński carpet. Is it?
-
-REPLY [11 votes]: The answer is no - it is quite possible for an Antoine necklace to be configured in such a way that some projection of it is not homeomorphic to the Sierpinski carpet. In particular, it's possible that some such projection has a cut point. 
This is impossible if it's homeomorphic to the Sierpinski carpet, which has no cut points. -To see this, let's take a look at a specific Antoine necklace: - -And here it is from the side in orthographic projection: - -Note that in the middle, there is always a torus on its side. As a result, the height of that torus is decreasing down to zero and the projection in that direction will intersect a vertical line in a single point. That point is a cut point.<|endoftext|> -TITLE: Automorphism group of genus 2 curve -QUESTION [5 upvotes]: Suppose C is a genus 2 curve over a field k such that char k is not 2. Is there an easy way to show that the automorphism group is finite? -If we assume k is algebraically closed then C is hyperelliptic, does this help? - -REPLY [7 votes]: Yes, here is a relatively easy way to do it using a few facts about hyperelliptic curves (where here a curve is a projective, geometrically regular, geometrically integral variety of dimension $1$ over a field $k$). These facts can be found in Ravi Vakil's notes "The Rising Sea: Foundations of Algebraic Geometry" in chapter $19$ and are all relatively simple to work through. Moreover this question is exercise $19.6.B$ from these notes. -Proposition 1): Let $C$ be a curve of genus at least $1$ with a degree two line bundle $\mathcal{L}$ such that $h^0(C,\mathcal{L})\geq 2$. Then $h^0(C,\mathcal{L}) = 2$, $\mathcal{L}$ is basepoint free and the corresponding complete linear system is hyperelliptic. -$$ C \xrightarrow[]{|\mathcal{L}|} \mathbb{P}^1$$ -Proposition 2): Let $C$ be a hyperelliptic curve of genus at least $2$ and $\pi, \pi'$ any two hyperelliptic maps. Then there is an automorphism of $\mathbb{P}^1$ such that the following diagram commutes. 
-
-$$\require{AMScd}
-\begin{CD}
-C @>{1}>> C\\
-@VV{\pi}V @VV{\pi'}V \\
-\mathbb{P}^1 @>{\alpha}>>\mathbb{P}^1\end{CD}$$
-Proposition 3): Let $\pi: C \rightarrow \mathbb{P}^1$ be a hyperelliptic map for $C$ a curve of genus $g$ over a field $k$ of characteristic not $2$. Then $\pi$ has $2g+2$ branch points.
-Now to prove that any genus $2$ curve over any (not necessarily algebraically closed) field of characteristic not $2$ has finite automorphism group:
-Let $C$ be such a curve. Then $\omega_C$ has degree $2g-2 = 2$ and $g = 2$ sections, whence it induces a hyperelliptic map by proposition $1$. Thus any genus $2$ curve is hyperelliptic. Fix some hyperelliptic map $\pi$ and let $\varphi:C \rightarrow C$ be any automorphism. Then by proposition $2$ there is an automorphism $\alpha$ of $\mathbb{P}^1$ such that the following diagram commutes.
-$$\require{AMScd}
-\begin{CD}
-C @>{\varphi}>> C\\
-@VV{\pi}V @VV{\pi}V \\
-\mathbb{P}^1 @>{\alpha}>>\mathbb{P}^1\end{CD}$$
-But since $\pi$ and $\pi \circ \varphi$ have the same branch points (of which there are $6$ by proposition $3$), $\alpha$ must simply permute them. Thus we get a map $\operatorname{Aut}(C) \rightarrow S_6$ that is quickly checked to be a group homomorphism. Finally, notice that any automorphism in the kernel is an automorphism that commutes with $\pi$, since if $\alpha$ fixes all $6$ branch points, it fixes all of $\mathbb{P}^1$ (since the automorphisms act sharply triply transitively). There are exactly two such automorphisms*, which implies that the size of the automorphism group of $C$ is indeed finite.
-
-*Which can be seen either by using an explicit description of what a hyperelliptic curve looks like over the usual open affine cover of $\mathbb{P}^1$, or using that such a map corresponds to an automorphism of the fraction field $\kappa(C)$ that fixes $\kappa(\mathbb{P}^1) \subset \kappa(C)$ and that this field extension is degree two, of characteristic not $2$.<|endoftext|>
-TITLE: Find the 'a' such that $x^{13}+x+90=(x^2-x+a)(Q(x))$
-QUESTION [5 upvotes]: Problem: For what integer $a$ does $x^2-x+a$ divide $x^{13}+x+90=P(x)$?
-I am stuck at this one. I can't think of a good method (using few computations) to do this.
-Of course, we could divide $P(x)$ by $x^2-x+a$ and equate the remainder to $0$, or factor out $x^2-x+a$ and use the remainder theorem (but this may not work, as we would get an equation with surds and degree-13 terms).
-I also tried with the following form:
-$x^{13}+x+90=(x^2-x+a)Q(x)$
-By differentiating in two different ways I got the following,
-$13x^2+1=(2x-1)Q(x)+(x^2-x+a)Q'(x)$
-$\frac{(13x^2+1)(x^2-x+a)-(2x-1)(x^{13}+x+90)}{(x^2-x+a)^2}=Q'(x)$
-I think we can solve for $Q(x)$ from the last two equations and finish the problem (or do we get an identity? I haven't tried, as it is becoming very cumbersome).
-So, can anyone help me find an elegant solution (with less computation)?
-Thanks.
-
-REPLY [4 votes]: Inspired by Ojas's answer (which I think is incomplete because it seems to assume $a$ is positive):
-$x=0\implies a\mid90$
-$x=1\implies a\mid92$
-Thus $a\mid2$, so $a\in\{\pm1,\pm2\}$
-$x=-1\implies(a+2)\mid88\implies a\in\{-1,2\}$
-Finally, we can rule out $a=-1$ because $x^2-x-1$ has a root in $[-1,0]$ while $x^{13}+x+90$ clearly does not. Thus $a=2$ is the only remaining possibility.
-Remark: This doesn't prove that $x^2-x+2$ necessarily is a factor of $x^{13}+x+90$.
All it proves is the following: If $a$ is an integer such that $x^2-x+a$ divides $x^{13}+x+90$, then $a=2$.<|endoftext|>
-TITLE: In a triangle $\Delta ABC$, let $X,Y$ be the feet of the perpendiculars drawn from $A$ to the internal angle bisectors of $B$ and $C$
-QUESTION [5 upvotes]: In a triangle $\Delta ABC$, let $X,Y$ be the feet of the perpendiculars drawn from $A$ to the internal angle bisectors of $B$ and $C$. Prove that $XY$ is parallel to $BC$.
-
-It works for an equilateral triangle because the angular bisector is also the perpendicular bisector.
-I tried drawing a diagram to get some idea,
-
-To prove that $XY$ is parallel to $BC$, I need to show that $\angle AFG=\angle AXY$ and $\angle AYF=\angle AGF$
-
-REPLY [8 votes]: Since $CX$ is a bisector and $AX\perp CX$, $X$ is the midpoint of $AF$.
-In a similar way we have that $Y$ is the midpoint of $AG$. By Thales' theorem, $XY\parallel FG$, hence $XY\parallel BC$.<|endoftext|>
-TITLE: Is there a way to find one irreducible polynomial of degree n in the field Z2
-QUESTION [7 upvotes]: I'm asked to find an irreducible polynomial of a certain degree in the field Z2.
-I won't specify the degree I'm asked here because I'd like to understand the method and know how to apply it to later questions. I only need one irreducible polynomial, I don't need to find them all etc.
-Assume that the degree is much too high to look for 2^n different polynomials, or to use polynomial long division to test against irreducible polynomials of degree < n+1/2
-Your help and simple explanations would be greatly appreciated!
-Edit:
-Okay, thanks for your responses, they're very useful! It seems that to get a direct answer I'll need to give the specific degree (however, if anyone could answer how to solve it for degree n, that would be fantastic).
-I need to find an irreducible polynomial of degree 20 in Z2.
I have an idea but it seems very long-winded: -To find all of the irreducible polynomials of degree 5 then I need to find all the polynomials of degree 5 with no polynomial divisors of degree < 3. -Could I then repeat this to find the irreducible polynomials of degree 10 using the polynomials of degree 5 I found in the previous step, and then repeat again to find the irreducible polynomials of degree 20? -Like I said this seems very long winded, I'm sure there's an easier way. - -REPLY [4 votes]: In general it is difficult, but since testing irreducibility is reasonably fast (for a suitable definition of reasonably fast) finding one by random poking won't take too long because a random degree $n$ polynomial is irreducible with probability approximately $1/n$. -As Stella Biderman said, most methods are specific to the degree, and consequently ad hoc. Degree 20 you said? That is easy, because - -$20=\phi(25)$, and -$2$ generates the group $\Bbb{Z}_{25}^*$. - -These two facts together imply that the characteristic zero cyclotomic polynomial -$$ -\Phi_{25}(x)=\frac{x^{25}-1}{x^5-1}=x^{20}+x^{15}+x^{10}+x^5+1 -$$ -remains irreducible over $\Bbb{Z}_2$. -This is seen as follows. Let $\alpha$ be a primitive root of unity of order $25$ (in some extension field of $\Bbb{Z}_2$). Let $m(x)$ be the minimal polynomial of $\alpha$. By Galois theory we get all the zeros of $m(x)$ by repeatedly applying the Frobenius automorphism, $F:x\mapsto x^2$, to $\alpha$, so the zeros of $m(x)$ include -$$ -\alpha,\alpha^2,\alpha^4,\alpha^8,\alpha^{16},\alpha^{32}=\alpha^7, \alpha^{14},\ldots -$$ -By the second bullet above the list contains exactly the powers $\alpha^k, 1\le k<25, \gcd(k,25)=1$. In other words all the primitive roots of order $25$. Therefore -$$ -m(x)=\Phi_{25}(x) -$$ -is irreducible modulo two.<|endoftext|> -TITLE: How do I show that if $f$ is entire and $\{\lvert f(z)\rvert < M\}$ is connected for all $M$, then $f$ is a power function? 
-QUESTION [6 upvotes]: Let $f$ be a non-constant entire function satisfying the following conditions:
-
-$f(0)=0$
-for every positive real $M$, the set $\{z: \left|f(z)\right|<M\}$ is connected.
-
-Show that $f$ is a power function, i.e. $f(z)=cz^n$ for some constant $c$ and some positive integer $n$.
-
-REPLY: There is $r>0$ such that $f$ is non-zero on $B_r=\{z:|z|<r\}$ except at $z=0$. Let $a=\min_{|z|=r}|f(z)|>0$ and write $D_a=\{z:|f(z)|<a\}$; since then $f(\partial B_r) \cap D_a = \emptyset$ it follows that $D_a \subset B_r$ (as otherwise, if there is $w$ with $|w|>r$ and $|f(w)|<a$, the set $D_a$ would be disconnected). Hence $|f(z)| > a$ for $|z|>r$, and since $f$ vanishes only at $0$ we are done!<|endoftext|>
-TITLE: Construct a sequence of measurable sets $E_1\supseteq E_2 \supseteq E_3 \supseteq \cdots$ such that $\mu(E_n)=\infty$ for each $n$ but ...
-QUESTION [6 upvotes]: Construct a sequence of measurable sets $E_1\supseteq E_2 \supseteq E_3 \supseteq \cdots$ such that $\mu(E_n)=\infty$ for each $n$ but $$\mu\left(\bigcap_{n=1}^\infty E_n\right)=0$$
-Claim: Let
-\begin{equation*}
-\begin{aligned}
-E_1= & \left(\frac{1}{i},1\right]\cup \left(\frac{1}{i+1},2\right] \cup \cdots \\
-E_2= & \left(\frac{1}{i},2\right]\cup \left(\frac{1}{i+1},3\right] \cup \cdots \\
-\vdots & \vdots \\
-E_n= & \left(\frac{1}{i},n\right]\cup \left(\frac{1}{i+1},n+1\right] \cup \cdots \\
-\vdots & \vdots \\
-\end{aligned}
-\end{equation*}
-where $i$ is an arbitrary positive integer.
-I believe that this sequence of sets satisfies the conditions above, but I want to formally write it out.
-
-REPLY [4 votes]: The question is actually whether the example given works. Notice that $(1/i,1]\subset \bigcap_{n=1}^\infty E_n$ and so it doesn't work.<|endoftext|>
-TITLE: Euler's formula for tetrahedral mesh
-QUESTION [7 upvotes]: We all know Euler's formula $V + F = E + 2$, and for a surface triangulation this gives useful estimations of the number of faces and edges from the number of vertices ($F \approx 2V$, and $E \approx 3V$, a discussion is here: Euler's formula for triangle mesh), and vice versa.
-I am wondering, for tetrahedral meshes, if there are similar formulas for estimating the number of faces, edges, and cells, again from the number of vertices.
As those are volume meshes, I don't think Euler's formula in its simplest form ($V + F = E + 2$) applies. Thanks.
-An example of a tetrahedral mesh is here:
-
-REPLY [7 votes]: Let me first address the 2-dimensional version of your question.
-The Euler characteristic of a triangulated surface, which is the quantity $V-E+F$, is known to be a topological invariant. What that means is that its numerical value depends on the topology alone.
-So, for instance, suppose you have any kind of finite, convex, 3-dimensional object whose surface is subdivided as a triangular mesh. In that situation, no matter how many total vertices, edges, and faces there are, the equation $V-E+F=2$ holds. The point here is that the surface of a finite, convex, 3-dimensional object is topologically equivalent to the surface of a nice, smooth, round 3-dimensional ball, and $V-E+F$ is a topological invariant.
-Now, it's a little hard for me to be sure about the topology of the object you have shown. However, my best guess is that as long as there's no tunnel from the ear holes through to the nostril holes (e.g. no Eustachian tube), nor any tubes of any other kind, then the surface of your object is indeed topologically equivalent to the surface of a nice, smooth, round 3-dimensional ball. And so $V-E+F=2$ should still be true.
-The value of $V-E+F$ would indeed change if there were tubes. For example, if there were just one tube, like one hole in a doughnut, then the surface of the object would be topologically equivalent to the surface of the doughnut, in which case $V-E+F=0$.
-Every additional tube would decrement the value of $V-E+F$ by an additional amount of $2$.
-
-Now on to the 3-dimensional version. Your "tetrahedral mesh" would still be called a "triangulation", using the topologist's habit of co-opting lower-dimensional terminology to apply even in higher-dimensional situations.
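The two-dimensional count described above is easy to check on a small concrete mesh. Here is a quick sketch (a toy example for illustration, using the surface of a regular octahedron rather than the model in the question):

```python
# Surface of a regular octahedron as a triangular mesh: 6 vertices, 8 faces.
faces = [
    (0, 2, 4), (2, 1, 4), (1, 3, 4), (3, 0, 4),  # four faces meeting the top apex 4
    (2, 0, 5), (1, 2, 5), (3, 1, 5), (0, 3, 5),  # four faces meeting the bottom apex 5
]
vertices = {v for f in faces for v in f}
# Each face contributes its three sides; frozenset deduplicates shared edges.
edges = {frozenset(e) for f in faces for e in [(f[0], f[1]), (f[1], f[2]), (f[2], f[0])]}

V, E, F = len(vertices), len(edges), len(faces)
print(V, E, F, V - E + F)  # 6 12 8 2
```

Any other triangulation of a sphere-like surface gives the same value $V-E+F=2$; only the topology matters, not the particular mesh.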
-
-If we let $C$ be the number of cells, and if we still count the number $V$ of vertices, $E$ of edges, and $F$ of faces, then in this case the Euler characteristic takes the form $V-E+F-C$. It is still a topological invariant. Any object which is topologically equivalent to a nice, smooth, round, solid 3-dimensional ball has Euler characteristic $V-E+F-C=1$.
-Again, my guess is that your object (assuming no tubes) is indeed topologically equivalent to a nice smooth round solid 3-dimensional ball and therefore $V-E+F-C=1$.<|endoftext|>
-TITLE: Is $\mathbf{Z}[X]/(2,X^2+1)$ a field/PID?
-QUESTION [9 upvotes]: I've been asked to determine whether the following are fields, PIDs, UFDs, integral domains:
-$$\mathbf{Z}[X],\quad \mathbf{Z}[X]/(X^2+1),\quad \mathbf{Z}[X]/(2,X^2+1),\quad \mathbf{Z}[X]/(2,X^2+X+1)$$
-
-The first is a UFD since $\mathbf{Z}$ is, but not a PID since $\mathbf{Z}[X]/(X)\simeq \mathbf{Z}$ is not a field and $X$ is irreducible.
-The second is a field since $X^2+1$ is irreducible in $\mathbf{Z}$.
-I am stuck on this one. I don't think it is a field because:
-$$\mathbf{Z}[X]/(2,X^2+1)\simeq \mathbb{F}_2[X]/(X^2+1)$$
-But $X^2+1$ is not irreducible in $\mathbb{F}_2[X]$ since $(X+1)(X+1)=X^2+1$, so it is not a field. But I can't see how to go further.
-As in 3. this is $\mathbb{F}_2[X]/(X^2+X+1)$, and $X^2+X+1$ is irreducible in $\mathbb{F}_2$ so it is a field.
-
-Are 1, 2, and 4 correct? How can I go further with 3? Thanks for any help.
-
-REPLY [5 votes]: $\mathbb{Z}[X]$ is not a PID because the ideal $(2,X)$ is not principal.
-$\mathbb{Z}[X]/(X^2+1)$ is the ring of Gaussian integers, which is a PID. It is not a field, because $2$ is not invertible (for instance).
-$\mathbb{Z}[X]/(2,X^2+1)$ is not even a domain, because as you noticed $X^2+1=(X+1)^2$ in $\mathbb{F}_2$.
-Your answer is correct.
-QUESTION [10 upvotes]: I am a beginner in mathematics and I was reading a text on Set Theory that talked about how Zermelo's Axiom of Selection "solves" Russell's Paradox.
-I understand that the axiom does not allow constructions of the form
-$$\{x \:: \text S(x) \}$$ and only allows $$\{x \in \text A \:: \text S(x) \}$$
-but how does this change the outcome of the paradox when we have:
-$$S = \{x \in \text A \:: \text x \notin \text x \}$$ where $S$ is still the set of all sets that do not contain themselves.
-Won't we still get the paradox?
-
-REPLY [2 votes]: If we assume the existence of a set $R$ such that $$R=\{x: x\notin x\}$$ then we can obtain the contradiction $R\in R$ and $R\notin R$. So, $R$ cannot exist.
-To avoid Russell's Paradox then, all we need to do is not assume that, for every unary predicate $P$, there exists a set $S$ such that $$S=\{x:P(x)\}$$
-If, however, $A$ is assumed or proven to be a set, then we can assume without fear of contradiction that there exists a subset $S\subset A$ such that $$S=\{x\in A: P(x)\}$$ Or equivalently $$S=\{ x: x\in A \text{ and } P(x)\}$$
-You can think of $P(x)$ as the criterion for selecting elements from the set $A$ for the subset $S$. The only restriction is that the variable $S$ may not occur in the selection criterion. This is the Axiom of Specification (Selection).
-If, for example, we have a set $A$, then we can assume that there exists a subset $S\subset A$ such that $$S=\{x\in A: x\notin x\}$$ Or equivalently $$S=\{x:x\in A \text{ and } x\notin x\}$$ Then we would not obtain a contradiction, but we would have $S\notin A$. (Proof left as an exercise.)<|endoftext|>
-TITLE: Why is multiplication on the space of smooth functions with compact support continuous?
-QUESTION [9 upvotes]: I was reading Terence Tao's post
-https://terrytao.wordpress.com/2009/04/19/245c-notes-3-distributions/
-and I'm not able to prove the last item of exercise 4.
-
-I have a map $F:C_c^{\infty}(\mathbb R^d)\times C_c^{\infty}(\mathbb R^d)\to C_c^{\infty}(\mathbb R^d)$ given by $F(f,g) = fg$.
-The question is: Why is $F$ continuous?
-I proved that if a sequence $(f_n,g_n)$ converges to $(f,g)$ then $F(f_n,g_n) \to F(f,g)$, that is, $F$ is sequentially continuous. But, as far as I know, this does not imply that $F$ is continuous.
-The topology of $C_c^{\infty}(\mathbb R^d)$ is given by seminorms $p:C_c^{\infty}(\mathbb R^d) \to \mathbb R_{\geq 0}$ such that $p\big|_{C_c^{\infty}( K)}:{C_c^{\infty}( K)} \to \mathbb R_{\geq 0}$ is continuous for every $K\subset \mathbb R^d$ compact, the topology of ${C_c^{\infty}( K)}$ is given by the seminorms $ f\mapsto \sup_{x\in K} |\partial^{\alpha} f(x)|$, $\alpha \in \mathbb N^d,$ and $C_c^{\infty}( K)$ is a Fréchet space.
-
-REPLY [3 votes]: Let $B_n$ be the ball with radius $n$, $K_n=C_c^\infty(B_n)$ with its metrizable topology, and $\varphi_n\in K_n$ a function with support contained in $B_{n}$ and $\varphi_n(x)=1$ for $x\in B_{n-1}$. First observe that $$
-F_n\colon K_n\times K_n \to K_n $$ is a continuous map, which can be easily seen by the defining seminorms for these metric spaces.
-Now let $U$ be a convex neighbourhood of $0$, i.e. $U\cap K_n$ is a convex neighbourhood of $0$ in $K_n$ for each $n$. Inductively for each $n$, you can find a $0$-neighbourhood $V_n$ of $K_n$ such that $$
-F[V_n,V_n] \subseteq U\cap K_n $$
-(by the continuity of $F_n$) and $$ \varphi_k V_n \subseteq V_k\,\,\,\,\, (1\leq k < n).$$
-Set $W_n:=V_n\cap K_{n-1}$ and $W$ as the convex hull of $\bigcup_n W_n$. Observe that for each $n$, $W_n$ is a neighbourhood of $0$ in $K_{n-1}$, so $W\cap K_{n-1}\supseteq W_n$ is one too, hence $W$ is a neighbourhood of $0$ in $C_c^\infty(\mathbb{R}^d)$. Now $F[W,W]\subseteq U$ would establish the continuity of $F$.
-Let $\psi, \chi\in W$, i.e.
$\psi=\alpha_1\psi_1+\cdots + \alpha_m\psi_m$ and $\chi=\beta_1 \chi_1 + \cdots + \beta_m \chi_m$ with $\alpha_i, \beta_i\geq 0$, $\sum \alpha_i = \sum \beta_i =1$ and $\psi_i,\chi_i\in V_i$. As $$
-F(\psi,\chi)=\psi\cdot \chi = \sum_{i,j} \alpha_i\beta_j \cdot \psi_i\chi_j $$ and $\sum_{i,j} \alpha_i\beta_j = 1$, it is sufficient to verify $\psi_i\chi_j\in U$. Now if $i=j$,
-$$ \psi_i\chi_i = F(\psi_i,\chi_i)\in F[V_i,V_i]\subseteq U\cap K_i \subseteq U.$$
-If $i\neq j$, say $i<j$, one reduces to the previous case using the cut-off functions $\varphi_k$ and the inclusions $\varphi_k V_n \subseteq V_k$.<|endoftext|>
-TITLE: A category whose classifying space has nontrivial higher homotopy groups
-QUESTION [6 upvotes]: The classifying space of a category $\scr{C}$ is obtained by taking its nerve $N\scr{C}$, which is the simplicial set defined by
-$$
-N\mathscr{C}_n:= \mathrm{Fun}([n],\mathscr{C})
-$$
-and the classifying space is defined as
-$$
-B\mathscr{C}:= |N\mathscr{C}|
-$$
-the geometric realization of the nerve. The only concrete examples I have ever played with are the classifying spaces of groups, $BG$. But these all end up being $K(G,1)$'s.
-Question: What is an explicit example of a category $\mathscr{C}$ so that its classifying space has nontrivial higher homotopy groups?
-I know that such things should exist; it is my understanding that Quillen's Q-construction takes a category $M$ and outputs a category $QM$ whose classifying space is the K-theory $K(M)$.
-Thanks!
-
-REPLY [10 votes]: For a very simple example, consider the poset $\{a,b,c,d,e,f\}$ with $a,b\leq c,d\leq e,f$. The classifying space of this poset is homeomorphic to $S^2$ (you can explicitly list out all the nondegenerate simplices in its nerve and draw a picture of them), which has plenty of higher homotopy groups. More generally, in fact, every simplicial complex is homeomorphic to the classifying space of a poset, namely its poset of faces (indeed, the nerve of the poset of faces is just the barycentric subdivision of the simplicial complex you started with).
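For the six-element poset in the first example, the nondegenerate simplices of the nerve can be enumerated by brute force. A small sketch (assuming only the order relations $a,b\leq c,d\leq e,f$ stated above):

```python
# Nondegenerate n-simplices of the nerve = strictly increasing chains of length n+1.
elements = list("abcdef")
# The strict order: a,b below c,d below e,f (already transitively closed).
less = {(x, y) for x in "ab" for y in "cdef"} | {(x, y) for x in "cd" for y in "ef"}

chains1 = sorted(less)  # 1-simplices: comparable pairs x < y
chains2 = [(x, y, z) for (x, y) in less for z in elements if (y, z) in less]  # 2-simplices

V, E, F = len(elements), len(chains1), len(chains2)
print(V, E, F, V - E + F)  # 6 12 8 2
```

There are no chains of length four, so the nerve is 2-dimensional, and the counts $6$, $12$, $8$ are exactly the face counts of the boundary of an octahedron; its Euler characteristic $V-E+F=2$ matches $\chi(S^2)$.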
Since every space is weak homotopy equivalent to a simplicial complex, this means that every (weak) homotopy type can be realized as the classifying space of a category.<|endoftext|>
-TITLE: In an extension field, is there any difference between the original field and its isomorphic copy in the extension field?
-QUESTION [7 upvotes]: I recently came to the topic of field extensions in my abstract algebra course, and there has been a slight issue which has been bothering me that I was hoping I might be able to clear up.
-We have defined an extension field for a field F to be a field E such that $F \subseteq E$ and that $F$ is a field under the operations of $E$ restricted to $F$.
-Sounds easy enough, and I realize that we have been using objects like this for a long time. For example we know that $\mathbb{C}$ is an extension field of $\mathbb{R}$.
-Something that has been bothering me a little bit though is that we have started proving theorems where we need to construct extension fields, but these extension fields don't seem to contain the original field $F$ but rather an isomorphic copy of $F$.
-For example if our field was $F$, then $F[x]/(p(x))$ is a field if $p(x)$ is irreducible, which contains a subfield isomorphic to $F$. It seems strange that in the theorems (Gallian's text) $F[x]/(p(x))$ is considered an extension field for $F$ even though it doesn't really contain $F$ as a subset, but rather another set which is isomorphic.
-I don't think I would normally have thought of this as being much of a problem, but I remember that earlier in the text Gallian seems to mention that even when structures are isomorphic and behave essentially the same, we need to keep in mind that they are not exactly the same.
-If this distinction does matter, why not make the definition of an extension field just say that $E$ is an extension field of $F$ if $E$ has a subfield isomorphic to $F$? This would seem to include all cases.
Is this largely a historical issue related to how mathematicians thought about isomorphic structures in the past?
-
-REPLY [3 votes]: Firstly, a short answer. If you are just interested in a field extension of $F$, then you must first realise that you should be quite content with a field extension of any other field $F'$, as long as $F$ and $F'$ are isomorphic and if you know the isomorphism $F\to F'$. This is something we do often in mathematics, yes sometimes without sufficient care, namely ignoring the part that says "if you know the isomorphism...". Often you'd hear people say "we'll just identify these two things since they are isomorphic", though this is not really a healthy thing to do (nor do we actually do that). What we do often is "identify these two things since they are isomorphic and we know precisely which isomorphism we mean for the identification". That is healthy. So, for your field $F$ and the somewhat incorrect claim that $F[x]/(p(X))$ is a field extension of it, what is really going on here is that $F[x]/(p(X))$ is a field extension of an isomorphic copy of $F$, and we know precisely which isomorphism we are talking about, so it's ok to identify them. More precisely, we pretend the original $F$ is the isomorphic copy we actually have an extension of.
-As long as you are considering just a few objects of study this is usually quite fine. Trouble starts when you are considering infinitely many objects. For instance, knowing how to obtain the splitting field extension of a polynomial, it is tempting to obtain the algebraic closure of $F$ by 'simply' using a Zorn's lemma argument, every time splitting one more polynomial. It is instructive to try and work out the details and see where it fails (lots of difficulties because of those identifications above).
-
-As for your final suggestion to speak of a field extension of $F$ to mean that $F'$ contains an isomorphic copy of $F$, you can do that, but you'll have to specify the isomorphism explicitly (since there could be different isomorphic copies, and they can be isomorphic in different ways). But that does not really help, or matter much, since any such superficially broader extension can be replaced, by identifying along the given isomorphism, by the good old notion of extension.<|endoftext|>
-TITLE: Norm of a character in a non-unital Banach algebra without approximate identity
-QUESTION [6 upvotes]: As is shown here, the norm of a character in a non-unital Banach algebra with an approximate identity is $1$.
-I wonder if this result still holds for general non-unital Banach algebras.
-Let $A$ be a non-unital Banach algebra and $\phi \in \Omega_A$ ($\Omega_A$ is the set of all nonzero homomorphisms from $A$ to $\mathbb C$) and let $A^+$ be the unitization of $A$. There exists a unique extension $\phi^+$ of $\phi$ to $A^+$, and then $\|\phi\| \le \| \phi^+ \|=1$. But do we have $\|\phi\|=1$?
-
-REPLY [5 votes]: It does not always hold; here is an example with norm $<1$.
-Take $A=\mathscr l^1(\mathbb N)$ with $e_n$ as the standard basis. Define multiplication via:
-$$\sum_{n=1}^\infty a_n e_n \sum_{m=1}^\infty b_m e_m = \sum_{k=1}^\infty \left(\sum_{n=1}^{k-1} a_n b_{k-n}\right) e_k$$
-The well-definedness as a map on $\mathscr l^1 \times \mathscr l^1$ follows from the Cauchy product theorem. Associativity etc. also hold.
-Note that here
-$$\| A \cdot B \|=\sum_{k=1}^\infty \left|\sum_{n=1}^{k-1} a_n b_{k-n}\right|≤\sum_{k=1}^\infty \sum_{n=1}^{k-1} \left|a_n b_{k-n}\right|=\|A\|\cdot \|B\|$$
-for all $A,B$ so we have a Banach algebra.
-By setting $\Phi \left(\sum_{n=1}^\infty a_n e_n \right)=\sum_{n=1}^\infty a_n x^{n}$ for an $x \in \mathbb C$, $|x|≤1$, you get a bounded linear map from $\mathscr l^1 \to \mathbb C$.
Furthermore
-$$\Phi \left(\sum_{n=1}^\infty a_n e_n \right)\Phi \left(\sum_{m=1}^\infty b_m e_m\right) = \sum_{k=1}^\infty \sum_{n=1}^{k-1} a_n b_{k-n} x^{n+(k-n)}=\Phi\left(\sum_{k=1}^\infty \sum_{n=1}^{k-1} a_n b_{k-n} e_k\right)$$
-So it is a homomorphism. Also $\|\Phi\|=|x|$, which can be smaller than $1$.<|endoftext|>
-TITLE: Convincing others that the method of finding the inverse function is valid
-QUESTION [8 upvotes]: The method of finding the inverse of a simple function $y = f(x)$ involves the following steps:-
-1) Change the subject to $x$ instead of $y$.
-2) Interchange $x$ and $y$.
-3) The newly formed function ($y = g(x)$, say) is then the required inverse.
-We know that the method works but why does it work? My question is how to convince others that the interchange part of the above can do the magic and is logically sound? A proof would be even better.
-
-REPLY [4 votes]: $g(x)$ is the inverse of $f(x)$ if it satisfies $x=f(g(x))$ and $x=g(f(x))$.
-On the basis of $x=f(g(x))$ we go hunting for $g(x)$.
-First we abbreviate $g(x)$ by $y$.
-Now let's find $y$ on the basis of the equation $x=f(y)$.
-
-REPLY [2 votes]: I had a (seemingly un-mathematical) practical method.
-Sketch $ y = f(x) $ on a transparent plastic sheet used for projections, the edges serving as x- and y-axes. Flip the sheet, swapping x and y along with the rigid curve, and see. It is so convincing... no questions will be asked...
-EDIT 1:
-...
as the operation makes it visibly obvious, so at each point you can verify:
-1) x and y are interchanged
-2) the slope is now its reciprocal, with no sign change: $ \dfrac{dy}{dx} \rightarrow \dfrac{dx}{dy} $
-3) curvature at any point is invariant except for a sign change, which can be explained by differentials
-$$ \frac{d^2y/dx^2}{(1+y'^{2})^{3/2}} \rightarrow \frac{-d^2x/dy^2}{(1+x'^{2})^{3/2}} $$
-4) even higher order isometric invariants are conserved
-5) one more flip and you are back; any double transformation annuls, i.e., the inverse of an inverse function gives the starting function.<|endoftext|>
-TITLE: Recurrence relations and limits, tough.
-QUESTION [19 upvotes]: I would like a hint for the following, more specifically, what strategy or approach should I take to prove the following?
-Problem: Let $P \geq 2$ be an integer. Define the recurrence
-$$p_n = p_{n-1} + \left\lfloor \frac{p_{n-4}}{2} \right\rfloor$$
-with initial conditions:
-$$p_0 = P + \left\lfloor \frac{P}{2} \right\rfloor$$
-$$p_1 = P + 2\left\lfloor \frac{P}{2} \right\rfloor$$
-$$p_2 = P + 3\left\lfloor \frac{P}{2} \right\rfloor$$
-$$p_3 = P + 4\left\lfloor \frac{P}{2} \right\rfloor$$
-Prove that the following limit converges:
-$$\lim_{n\rightarrow \infty} \frac{p_n}{z^n}$$
-where $z$ is the positive real solution to the equation $x^4 - x^3 - \frac{1}{2} = 0$.
-Note: I've already proven the following:
-$$\lim_{n\rightarrow \infty} \frac{p_n}{p_{n-1}} = z$$
-Any ideas? Not sure if this result helps. Also, the sequence $p_n/z^n$ is bounded above and below. I've attempted to show that $\frac{p_n}{z^n}$ is Cauchy, but had no luck with that. I don't know what the limit converges to either.
-Edit: I believe the limit should converge as $p_n$ achieves an end behaviour of the form $cz^n$ for $c \in \mathbb{R}$ (this comes from the fact that the limit of the ratios of $p_n$ converge to $z$), however I do not know how to make this rigorous.
-Edit 2: Proving the limit exists is equivalent to showing
-$$p_0 \cdot \prod_{n=1}^{\infty} \left( \frac{p_n/p_{n-1}}{z} \right)$$
-converges.
-UPDATED:
-If someone could prove that $|p_n-z \cdot p_{n-1}|$ is bounded above (or converges, or diverges), then the proof is complete.
-
-REPLY [2 votes]: Let us start with the solution of the homogeneous recurrence
-$$\phi_n = \phi_{n-1} + \frac{\phi_{n-4}}{2}$$
-its characteristic equation is
-$$x^4 - x^3 - \frac{1}{2} = 0$$
-This equation has $4$ solutions; two of them are complex and the other two are a negative and a positive real number. Their approximate values, as given by Mathematica, are:
-$$z_1=1.25372, \ \ z_2=-0.669107, \ \ z_3=0.207691 + 0.743573 i, \ \ z_4=0.207691 - 0.743573 i,$$
-(in your question you label $z$ the one labeled $z_1$ above). Notice that the positive real solution $z=z_1=1.25372$ is the one with the greatest magnitude among the $4$ solutions (actually, it is the only one whose magnitude exceeds $1$).
-Now, the general solution to the homogeneous recurrence is:
-$$\phi_n=c_1z^n_1+c_2z^n_2+c_3z^n_3+c_4z^n_4$$
-where $c_1, c_2, c_3, c_4$ are constants to be determined from the initial conditions posed in your question. Since $z=z_1=1.25372$ has the greatest magnitude among the roots of the characteristic equation, the above general solution asymptotically (for $n$ large enough) tends to $c_1z^n$ i.e.
-$$\phi_n\sim c_1z^n$$
-Consequently,
-$$\frac{\phi_n}{z^n}\sim c_1 \ \ \textrm{ i.e. } \ \ \lim_{n\rightarrow\infty}\frac{\phi_n}{z^n}=c_1$$
-where $c_1$ will be determined by the solution of the $4\times 4$ linear system of equations
-$$
-\phi_i=c_1z^i_1+c_2z^i_2+c_3z^i_3+c_4z^i_4
-$$
-for $i=0,1,2,3$, with $\phi_i=p_i$ given by the initial conditions posted in the question, and $z_1=z,z_2,z_3,z_4$ the roots of the characteristic equation given above.
-Let me now try to justify why the convergence of $\frac{\phi_n}{z^n}$ implies also the convergence of $\frac{p_n}{z^n}$.
The recurrence
-$$p_n = p_{n-1} + \left\lfloor \frac{p_{n-4}}{2} \right\rfloor = p_{n-1} + \frac{p_{n-4}}{2} - \epsilon_n$$
-differs from the homogeneous one by a bounded function $0\leq\epsilon_n< 1$ of $n$. Since we are dealing with linear recurrences and increasing sequences $p_n$, $\phi_n$, and we are interested in the asymptotic behaviour of the solutions in the limit of large $n$, the two are essentially the same. The solutions $p_n$ and $\phi_n$ differ by a $O(1)$ special solution (of the posted, non-homogeneous recurrence):
-$$
-p_n=\phi_n+O(1) \Rightarrow p_n\sim\phi_n\Rightarrow\frac{p_n}{z^n}\sim \frac{\phi_n}{z^n}\Rightarrow\lim_{n\rightarrow\infty}\frac{p_n}{z^n}=c_1
-$$
- We can also see that the bigger the value of $P\geq 2$ (given in the initial conditions), the quicker $\frac{p_n}{z^n}$ converges.
-P.S. Regarding the estimate that the general solutions $p_n$ and $ϕ_n$ of the respective recurrences differ by a $O(1)$ special solution of the non-homogeneous one, my argument is the following: when dealing with non-homogeneous linear recurrences with constant coefficients, i.e.
-$$
-p_n+c_1p_{n−1}+...+c_dp_{n−d}=h(n)
-$$
-with $h(n)=const$, it is customary to seek a special solution that is a constant. Since here the non-homogeneous term is the bounded function $0\leq\epsilon_n< 1$, I guess that it is reasonable to conjecture that the corresponding special solution is $O(1)$.<|endoftext|>
-TITLE: Significance of multiplying $-1$ by $-1$
-QUESTION [13 upvotes]: Maybe this is a weird question but it's been bugging me.
-In childhood we were taught that $4 \times 3$ means $4+4+4$ i.e. adding 4, 3 times.
-My question is then how would you explain $-1 \times -1$ using some kind of mathematical logic? I want to know the significance in real life.
-It doesn't have a meaning when I say adding $-1$, $-1$ times.
- -REPLY [2 votes]: I've often seen this proof/explanation -$$\bigg(ab + (-a)b\bigg) + (-a)(-b) = \bigg(ab + (-a)b\bigg) + (-a)(-b)$$ -$$\bigg(ab + (-a)b\bigg) + (-a)(-b) = ab + \bigg((-a)b + (-a)(-b)\bigg)$$ -$$\bigg(a+(-a)\bigg)b+(-a)(-b) =ab +\bigg(b+(-b)\bigg)(-a)$$ -$$0b+(-a)(-b) =ab +0(-a)$$ -$$0+(-a)(-b) =ab +0$$ -$$(-a)(-b) =ab$$<|endoftext|> -TITLE: Evaluate $\int_{-\infty}^{\infty}\frac{\sin x}{x}\mathop{}\! \mathrm dx$ -QUESTION [9 upvotes]: I am trying to evaluate the following: -$$\int_{-\infty}^{\infty}\frac{\sin(x)}{x}\, dx$$ -My first approach was to find the antiderivative, but I can't seem to express it as I have not yet learnt about $\text{Si}(x)$. I then tried replacing $\sin(x)$ with $(e^{ix}-e^{-ix})/(2i)$, but I just ended up with something even more complicated. Does making it go from $0$ to $\infty$ by multiplying by $2$ help? -Please help me in evaluating this integral. -By the way, I am familiar with substitution and integration by parts but not complex analysis or contour integration. However, if this question requires something I don't already know, I am willing to try and understand it. -Thanks. - -REPLY [8 votes]: 1. If you are familiar with the Dirac Delta$$ -\int_{-\infty}^{\infty}{\sin(x) \over x}\,{\rm d}x -= -\int_{-\infty}^{\infty}\left({1 \over 2}\,\int_{-1}^{1}{\rm e}^{{\rm i}kx}\,{\rm d}k\right) -\,{\rm d}x -= -\pi\int_{-1}^{1}{\rm d}k -\int_{-\infty}^{\infty}{{\rm d}x \over 2\pi}\,{\rm e}^{{\rm i}kx} -= -\pi\int_{-1}^{1}{\rm d}k\,\delta(k) = \pi -$$ -2. Trick Calculus way -$$\begin{align*} -\int_{-\infty}^{\infty} \frac{\sin x}{x} \; dx -&= 2 \int_{0}^{\infty} \frac{\sin x}{x} \; dx \\ -&= 2 \int_{0}^{\infty} \sin x \left( \int_{0}^{\infty} e^{-xt} \; dt \right) \; dx \\ -&= 2 \int_{0}^{\infty} \int_{0}^{\infty} \sin x \, e^{-tx} \; dx dt \\ -&= 2 \int_{0}^{\infty} \frac{dt}{t^2 + 1} \\ -&= \vphantom{\int}2 \cdot \frac{\pi}{2} = \pi.
-\end{align*}$$ -You might also like Feynman's differentiation under the integral sign.<|endoftext|> -TITLE: CDF of absolute value of difference in random variables -QUESTION [10 upvotes]: Let $X$ and $Y$ be independent random variables, uniformly distributed in the interval $[0,1]$. Find the CDF and the PDF of $|X - Y|$? -Attempt -Let $Z = |X - Y|$, so for $z \geq 0$, the CDF $F_{Z}(z) = \mathbf{P}(Z \leq z) = \mathbf{P}(|X - Y| \leq z) = \mathbf{P}(-z \leq X - Y \leq z)$, which is where the algebra becomes confusing. Since they are independent, the joint pdf of $X$ & $Y$ is simply 1, as long as $(X,Y)$ belongs to the unit square. -The solution suggests plotting the event of interest as a subset of the unit square and finding its area. Any hints? - -REPLY [11 votes]: The area of the square that is between the two lines is the probability of the event $\{|X-Y|\le t\}$ for $0\le t\le 1$ ($t=0.5$ in the graph). - -The area of the two triangles is the same and the area of one triangle is $(1-t)^2/2$. Hence, the area of the square that is between the two lines is given by $1-2\cdot (1-t)^2/2=1-(1-t)^2$ and the probability -$$ -\Pr\{|X-Y|\le t\}=1-(1-t)^2 -$$ -for $0\le t\le 1$. - -REPLY [3 votes]: The graphical approach is often a good one when dealing with uniform distributions, because we can interpret probabilities as areas. -Draw the unit square, and the line $y=x$ within it. Now let $z\in [0,1]$. What you have to find is the area of the polygon: -$$[0,1]^2\cap \{(x,y)\in\mathbb{R}^2:|x-y|\leq z\}$$ -The set $\{(x,y)\in\mathbb{R}^2:|x-y|\leq z\}$ is just like a band centered around the first diagonal of the plane, can you see this? -More precisely, the polygon you're looking for is delimited by the points $(0,0), (0,z), (1-z,1),(1,1),(1,1-z),(z,0)$. -With a little bit of geometry, it's easy to determine its area. Now that you have the cdf, compute its derivative to get the pdf.
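(A quick Monte Carlo check of the CDF $F_Z(t)=1-(1-t)^2$ derived above; the function name below is mine, just for illustration.)

```python
import random

# Estimate P(|X - Y| <= t) for independent uniform X, Y on [0,1]
# by simulation, and compare with the exact CDF 1 - (1 - t)^2.
def empirical_cdf(t, trials=200_000, seed=1):
    rng = random.Random(seed)
    hits = sum(abs(rng.random() - rng.random()) <= t for _ in range(trials))
    return hits / trials

for t in (0.25, 0.5, 0.75):
    print(t, empirical_cdf(t), 1 - (1 - t) ** 2)
```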
- -REPLY [3 votes]: For fixed $z\in\mathbb{R}$ we have:$$F_Z(z)=P\left(\left|X-Y\right|\leq z\right)=\int\int1_{\left(-\infty,z\right]}\left(\left|x-y\right|\right)f_{X,Y}(x,y)dxdy$$ -where $f_{X,Y}(x,y)$ denotes the density of $\langle X,Y\rangle$. -Substituting this density we arrive at: -$$F_Z(z)=P\left(\left|X-Y\right|\leq z\right)=\int_{0}^{1}\int_{0}^{1}1_{\left(-\infty,z\right]}\left(\left|x-y\right|\right)dxdy$$ -Can you work this out? -If the CDF is found then the PDF can be found by differentiating.<|endoftext|> -TITLE: Nontrivial integral representations for $e$ -QUESTION [12 upvotes]: There are a lot of integral representations for $\pi$ as well as infinite series, limits, etc. For other transcendental constants as well (like $\gamma$ or $\zeta(3)$). -However, for every definite integral that is equal to $e$ I can think of, the integrated function contains the exponent in some way. - -Can you provide some definite integrals that have $e$ as their value (or some elementary function of $e$ that is not a logarithm), without $e$ appearing in any way under the integral or as one of its limits (and without the limits for $e$, or the infinite series for $e$)? - -The example or what I want is the following integral for $\pi$: -$$\int_0^{1} \sqrt{1-x^2} dx=\frac{\pi}{4}$$ - -REPLY [7 votes]: If you are asking whether e is a period, the question remains, technically, still open, but its answer is not expected to be affirmative. As to the non-algebraic integrands, we have $$\int_{-\infty}^\infty\frac{\cos(ax)}{1+x^2}~dx~=~\frac\pi{e^{|a|}}$$<|endoftext|> -TITLE: Semisimplicity of the induced representation of an irreducible representation -QUESTION [6 upvotes]: Let $G$ be an arbitrary group, $H$ be a subgroup of finite index $n$ and $k$ be an algebraically closed field of characteristic prime to $n$. -Suppose that we have an irreducible representation -$$\rho: H\to \mathrm{GL}(V)$$ -where $V$ is a finite-dimensional $k$-vector space. 
- -Is the induced representation $\mathrm{Ind}_H^G(\rho)$ semisimple? - -This will certainly be true if $H$ is finite of order prime to the characteristic, or if we are in characteristic $0$ and $G$ is compact. Can we get away with less in this case? -If $\mathrm{Ind}_H^G(\rho)$ is not semisimple, does the situation change if we assume that $\rho$ is actually the restriction of some irreducible representation $\sigma: G\to\mathrm{GL}(V)$? -Edit: Alternatively, would the situation change if we knew that $H$ was normal in $G$? - -REPLY [6 votes]: Let $k$ have characteristic $p$ and $|G:H|$ prime to $p$. Even inducing the trivial representation of $H$ (which is certainly the restriction of an irreducible representation!) to $G$ does not in general give a semisimple representation. -For example, take $p=2$, $G=A_5$ the alternating group of degree $5$, and $H$ a Sylow $2$-subgroup. Then inducing the trivial representation gives a non-semisimple module. -If $H$ is normal in $G$, then the induced representation is always semisimple. -Let $V$ be an irreducible $kH$-module, and let $\uparrow$ and $\downarrow$ denote induction and restriction between H and G. Then $V\!\!\uparrow\downarrow$, is semisimple, as it's the direct sum of irreducible $kH$-modules $V\otimes g$, where $g$ runs over a set of coset representatives of $H$ in $G$. -Suppose $U$ is a $kG$-submodule of $V\!\!\uparrow$. As a $kH$-module it is a direct summand, so there is a $kH$-module homomorphism $\alpha:V\!\!\uparrow\to U$ projecting onto $U$. As in the usual proof of Maschke's Theorem, -$$\tilde\alpha(v)=\frac{1}{|G:H|}\sum_g\alpha(vg)g^{-1},$$ -where the sum is over a set of coset representatives, is a $kG$-module homomorphism $\tilde{\alpha}:V\!\!\uparrow\to U$ projecting onto $U$. So $U$ is a $kG$-module direct summand of $V\!\!\uparrow$. 
-Since every $kG$-submodule of $V\!\!\uparrow$ is a direct summand, $V\!\!\uparrow$ is a semisimple $kG$-module.<|endoftext|> -TITLE: Lower semi-continuity of one dimensional Hausdorff measure under Hausdorff convergence -QUESTION [11 upvotes]: Let $\mathcal H^1$ be the one-dimensional Hausdorff measure on $ -\mathbb R^n$, and let $d_H$ be the Hausdorff metric on compact subsets of $\mathbb R^n$. If $K_n$ is connected for all $n \in \mathbb N$, and $d(K_n,K) \to 0$, I would like to know if -$$\mathcal H^1 (K) \leq \liminf\limits_{n \to \infty} \mathcal H^1(K_n).$$ -If $K_n$ is not connected, this is not true. One can take $K_n = \bigcup\limits_{i=0}^{2^n-1} [{i \over 2^n}, {i + 1/2 \over 2^n}]$. Then $\mathcal H^1(K_n) = 1/2$, but $K_n \to [0,1]$. Also, for $\mathcal H^k$, $k$ an integer greater than one, this fails spectacularly. See this picture from Frank Morgan's book: -The thing that I am trying to prove is used implicitly in Peter Jones's paper on the analyst's traveling salesman problem, I believe. - -REPLY [9 votes]: This is called Golab's Theorem. Two different proofs can be found in - -The Geometry of Fractal Sets by K. J. Falconer, Theorem 3.18. -Topics on Analysis in Metric Spaces by L. Ambrosio and P. Tilli, Theorem 4.4.17 - -I'll sketch the proof following Falconer. -Step 1: $K$ is connected. Indeed, if $K$ is partitioned into two nonempty compact sets, then their disjoint neighborhoods form a separation of $K_n$ for $n$ large enough. -Step 2: If $\liminf$ is infinite, there is nothing to prove. Otherwise, we may assume $\mathcal H^1(K_n)\le C<\infty$, by passing to a subsequence. -Step 3: Replace each $K_n$ with a topological tree $T_n\subset K_n$ such that $T_n$ still converges to $ K$. To this end, pick a finite $1/n$-net in $K_n$ and join its points by arcs, one at a time, without creating loops. This uses the fact that a continuum of finite $\mathcal H^1$ measure is arcwise connected (Lemma 3.12 in Falconer's book). 
-Step 4: Fix $\delta>0$ and decompose each $T_n$ into the union $\bigcup_{j=1}^k T_{nj}$ of continua with diameter at most $\delta$ such that $\sum_{j=1}^k \mathcal H^1(T_{nj}) = \mathcal H^1(T_n)$. This takes another lemma, proved by repeatedly truncating the longest branches of the tree. It's important that $k$ can be chosen independently of $n$; it depends only on $\delta$ and $\sup_n\operatorname{diam}T_n$. -Step 5: Using the Blaschke selection theorem (a sequence of compact sets within a bounded set has a convergent subsequence), and taking some subsequences, we can arrange that $T_{nj}\to E_j$ as $n\to\infty$, for each $j$. -Step 6: Since $K=\bigcup_{j=1}^k E_j$ and $\operatorname{diam}E_j\le \delta$ for each $j$, it follows that -$$\mathcal H^1_\delta(K)\le \sum_{j=1}^k \operatorname{diam}E_j -= \lim_{n\to\infty} \sum_{j=1}^k \operatorname{diam}T_{nj} -\le \liminf_{n\to\infty} \sum_{j=1}^k \mathcal H^1 (T_{nj}) \\ -\le \liminf_{n\to\infty} \mathcal H^1 (T_{n}) -\le \liminf_{n\to\infty} \mathcal H^1 (K_{n}) $$ -as claimed. This uses the continuity of diameter with respect to the Hausdorff metric.<|endoftext|> -TITLE: Two definitions of Hilbert series/Hilbert function in algebraic geometry -QUESTION [8 upvotes]: In classical algebraic geometry, suppose $I$ is a reduced homogeneous ideal in $k[x_0,\cdots,x_n]$, where $k$ is an algebraically closed field. Then $I$ cuts out a projective variety $X$, whose Hilbert function is defined by -\begin{equation} -\phi(m)=\text{dim}_k~ (k[x_0,\cdots,x_n]/I)_m -\end{equation} -where $()_m$ means the $m$-th homogeneous part of the quotient ring. When $m$ is large, there exists a polynomial $p_X$ such that -\begin{equation} -\phi(m)=p_X(m) -\end{equation} -$p_X$ is the Hilbert polynomial.
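(To make the classical definition concrete: for a hypersurface $V(f)\subset\mathbb{P}^n$ with $f$ of degree $d$, multiplication by $f$ is injective on the polynomial ring, so $\phi(m)=\binom{m+n}{n}-\binom{m-d+n}{n}$, and this is easy to tabulate. The plane-conic example below is my own illustration, not part of the question.)

```python
from math import comb

# Hilbert function of a degree-d hypersurface in P^n:
# dim (k[x_0..x_n]/(f))_m = C(m+n, n) - C(m-d+n, n)   (second term 0 for m < d)
def hilbert_hypersurface(m, n, d):
    return comb(m + n, n) - (comb(m - d + n, n) if m >= d else 0)

# A plane conic (n = 2, d = 2): phi(m) = 2m + 1 for all m >= 0,
# which is already the Hilbert polynomial of a degree-2 rational curve.
print([hilbert_hypersurface(m, 2, 2) for m in range(6)])
```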
However, in the scheme world, from section 18.6 of Ravi Vakil's book, the Hilbert function for a projective scheme $X \hookrightarrow \mathbb{P}^n$ is defined by -\begin{equation} -\phi(m)=h^0(X,\mathcal{O}(m)) -\end{equation} -When $m$ is large, there exists a polynomial $p_X$ such that -\begin{equation} -\phi(m)=\chi(X,\mathcal{O}(m))=p_X(m) -\end{equation} -Are the two definitions of Hilbert functions the same? I guess there are issues with reducedness; could anyone give an overall explanation? - -REPLY [5 votes]: Look at the graded $k$-algebra $A=k[x_0,\ldots,x_n]/I$. The graded $k$-algebra surjection $k[x_0,\ldots,x_n]\to A$ yields a closed $k$-immersion $j:X=\mathrm{Proj}(A)\hookrightarrow\mathbf{P}_k^n$ with image $V_+(I)$. The Hilbert function $p_X$ of $X$ relative to the closed immersion $j$ (i.e. to the very ample invertible sheaf $\mathscr{O}_X(1)=j^*(\mathscr{O}_{\mathbf{P}_k^n}(1))$) is given by $p_X(m)=\dim_k(H^0(X,\mathscr{O}_X(m)))$ for $m\gg 0$, i.e., it is the Hilbert function of the finitely generated $k[x_0,\ldots,x_n]$-module $\bigoplus_{m\geq 0}H^0(X,\mathscr{O}_X(m))=\bigoplus_{m\geq 0}H^0(\mathbf{P}_k^n,(j_*(\mathscr{O}_X))(m))$. There is a canonical graded $k$-algebra map $A\to\bigoplus_{m\geq 0}H^0(X,\mathscr{O}_X(m))$ and it is an isomorphism in sufficiently large degree (see http://stacks.math.columbia.edu/tag/0AG7). Thus the Hilbert functions agree for sufficiently large $m$, so the Hilbert polynomials coincide.
It ought not to be too hard to get a map $\operatorname{cone}(f)\to \operatorname{cone}(g)$ from the existence of a homotopy $H:X\times I\to Y$, but I haven't been able to find a sensible candidate, despite trying for quite a while. - -REPLY [6 votes]: Define the map $k: \text{cone}(f) \to \text{cone}(g)$ by -$$ -\begin{align} -% Y \sqcup \left(X\times\left[0,\frac12\right]\right) \sqcup \left(X\times\left[\frac12,1\right]\right) & \to Y \sqcup \left(X\times I\right) \\ -y & \mapsto y \qquad \text{ for } y\in Y \\ -(x,t) & \mapsto \begin{cases} - H(x,2t) &\text{ if } t\le\frac12\\ - (x,2t-1) &\text{ if } t\ge\frac12 -\end{cases} -\end{align} -$$ -Can you show that $k$ is continuous? -Now define the "inverse" $l: \text{cone}(g) \to \text{cone}(f)$ -$$ -\begin{align} -y & \mapsto y \qquad \text{ for } y\in Y \\ -(x,t) & \mapsto \begin{cases} - H(x,1-2t) &\text{ if } t\le\frac12\\ - (x,2t-1) &\text{ if } t\ge\frac12 -\end{cases} -\end{align} -$$ -Let $m: \text{cone}(f) \to \text{cone}(f)$ be the map which is the identity on $Y$ and which -$$ -(x,t) \mapsto \begin{cases} - f(x) &\text{ if } t\le\frac34\\ - (x,4t-3) &\text{ if } t\ge\frac34 -\end{cases} -$$ -A homotopy between $lk$ and the map $m$ is given by -$$ -(x,t,s) \mapsto \begin{cases} - H(x,2t(1-s)) &\text{ if } t\le\frac12\\ - H(x,(3-4t)(1-s)) &\text{ if } \frac12\le t\le\frac34 \\ - (x,4t-3) &\text{ if } t\ge\frac34 \\ -\end{cases} -$$ -Can you show that $m$ is homotopic to the identity on $\text{cone}(f)$?<|endoftext|> -TITLE: On the theorem "$3$ is everywhere" -QUESTION [12 upvotes]: In this Numberphile video it is stated that "almost all natural numbers have the digit $3$ in their decimal representation", and a proof of this fact is proposed. -A sketch of the proof follows: -Denote by $D_3$ the set of natural numbers having a digit $3$ in their decimal representation. 
For all $n \ge 1$, denote by -$$f(n) = | D_3 \cap \{ 1, \dots , n\} |$$ -it is proved that for all $n$ -$$f(10^n) = 10^n- 9^n $$ -holds (and this is quite clear), hence -$$\lim_{n \to + \infty} \frac{f(10^n)}{10^n} = 1$$ -and this concludes the proof in the video. -Now, this proof is clear and evident to me, but I think that it is incomplete, since we should prove that - -$$\lim_{n \to + \infty} \frac{f(n)}{n} = 1$$ - -while this is not proved in the video. So my question is: how to prove this? -EDIT: Obviously, if the limit exists, then it is equal to $1$: so I am asking how to show that the last limit actually exists. - -REPLY [4 votes]: You are correct that proving $\lim_{n \to + \infty} \frac{f(10^n)}{10^n} = 1$ is not enough. I can define $g(n)=n$ if $n=10^k$ and $g(n)=0$ otherwise. I then have $\lim_{n \to + \infty} \frac{g(10^n)}{10^n} = 1$ but $\lim_{n \to + \infty} \frac{g(n)}{n}$ does not exist. We have shown that if $\lim_{n \to + \infty} \frac{f(n)}{n}$ exists, it is $1$, so all we need now is that it exists. We can use the same argument. Define $h(n)=n-f(n)$ as the number of numbers less than $n$ that are missing $3$. They show $h(10^n)=9^n$. $h(n)$ is monotonically increasing as when you go from $n$ to $n+1$ you either add $1$ or $0$ to $h$. Now for any $k$, let $m=\lfloor \log_{10}k \rfloor$ so that $10^m$ is the power of $10$ just below $k$. $\frac {f(k)}k =1-\frac {h(k)}k \gt 1-\frac {9^{m+1}}{10^m}\to 1$<|endoftext|> -TITLE: Relation between two notions of $BG$ -QUESTION [7 upvotes]: The following is something that's always niggled me a little bit. I usually think about stacks over schemes, so I'm a bit out of my element—I apologize if I say anything silly below. -Let $G$ be a sufficiently topological group (e.g. you can assume a Lie group) and let $\mathsf{Spaces}$ be the category of topological spaces equipped with the obvious Grothendieck topology (i.e. coverings are classical open coverings). 
We then have on $\mathsf{Spaces}$ the usual stack $BG$ of $G$-torsors. -What is the relationship between the stack $BG$ and the space $BG$? Of course, the stack $BG$ is not representable (it's valued in groupoids in a way not equivalent to a stack valued in sets) but one can consider its component stack $\pi_0BG$ (which assigns to $X$ isomorphism classes of $G$-torsors). Now that we have a set-valued stack, it's conceivable that this is representable but, of course, it's not—it's not even a sheaf on $\mathsf{Spaces}$. -That said, $\pi_0 BG$ is 'represented' in the homotopy category in the sense that -$$\pi_0BG(X)=[X,BG]$$ -which is, after all, something. -So my general question is: what really is the rigorous relationship? How does it generalize? -Some related subquestions are: should one/can one think about a topology on the homotopy category of spaces? If so, are spaces sheaves there, and can one literally say, in such a setup, that $BG$ (as a space up to homotopy) is just $\pi_0 BG$ (as a 'stack on the homotopy category')? -Thanks! -EDIT: Just to give an idea, in the theory of stacks over schemes, one can think of $BG$ as being the stack associated to the groupoid in schemes -$$G \rightrightarrows \ast$$ -perhaps, in the same formalism, one can do this as a groupoid in spaces? Then, $BG$ as a space is taking this quotient not in the category of stacks (i.e. the stackification of the obvious groupoid-valued presheaf) but taking the quotient in spaces? Of course, one has to be careful since one has to take $\ast$, in such a context, to mean a contractible space with a free $G$-action (e.g. $EG$). I don't know rigorously how this all fits together. - -REPLY [3 votes]: It occurs to me that the connection between the two notions of $B G$ is more or less the subject of [Moerdijk, Classifying spaces and classifying toposes].
Instead of directly trying to compare the space $B G$ and the stack $B G$, one can try to compare their "representations", or more precisely, the respective (higher) toposes of (higher) sheaves. -I will first discuss the case of a discrete group $G$. The two toposes to be compared are $\mathbf{Sh} (B G)$, the topos of sheaves on the space $B G$, and $\mathcal{B} G$, the topos of $G$-sets. The topos $\mathcal{B} G$ has a universal property with respect to toposes: regarding $G$ as a $G$-set under the regular action, for every Grothendieck topos $\mathcal{E}$, $f \mapsto f^* G$ is an equivalence between the category of geometric morphisms $\mathcal{E} \to \mathcal{B} G$ and the category of $G$-torsors in $\mathcal{E}$. In particular, we have a comparison geometric morphism $\mathbf{Sh} (B G) \to \mathcal{B} G$. Moreover, as is well known, this is a weak homotopy equivalence (in the sense of Artin and Mazur) – this means that the two toposes have isomorphic fundamental groups and cohomology groups with respect to locally constant coefficients. That is one sense in which the space $B G$ and the stack $B G$ represent the same $\infty$-groupoid (or, if you prefer, homotopy type). Of course, there is much more to topos cohomology than just locally constant coefficients, but that disappears when passing to $\infty$-groupoids. -The case of a topological group $G$ is considerably more complicated – after all, what should the analogue of $\mathcal{B} G$ be? If we stick with ordinary 1-toposes, it is no good to look at the topos of $G$-sets (= sets with a continuous $G$-action): if $G$ is connected, then the only possible continuous $G$-action on a set is the trivial one. Ideally, $\mathcal{B} G$ should be the homotopy limit of the cosimplicial diagram $\mathbf{Sh} (B_\bullet G)$, where $B_\bullet G$ is the simplicial bar construction of $G$, whatever $\mathbf{Sh}$ means in this context. 
This is essentially saying that $\mathcal{B} G$, regarded as a kind of generalised space, is the homotopy colimit of the simplicial space $B_\bullet G$, also regarded as a diagram of generalised spaces. On the other hand, the space $B G$ is the homotopy colimit of the simplicial space $B_\bullet G$ regarded as a diagram of $\infty$-groupoids. Thus, it sounds as if it should be easy to compare the two, but unfortunately $\mathbf{Sh}$ is not (usually) a functor of $\infty$-groupoids – after all, spaces $X$ and $Y$ may be homotopy equivalent without $\mathbf{Sh} (X)$ and $\mathbf{Sh} (Y)$ being equivalent. -To be concrete, suppose $\mathbf{Sh} (X)$ is the $\infty$-topos of $\infty$-sheaves on $X$. There is a full $\infty$-subcategory $\mathbf{LC} (X) \subseteq \mathbf{Sh} (X)$ of locally constant $\infty$-sheaves, and when $X$ is nice enough (I think locally contractible suffices), $\mathbf{LC} (X)$ is equivalent to the slice $\infty$-category $\infty \mathbf{Grpd}_{/ X}$. In that case, the homotopy limit of $\mathbf{LC} (B_\bullet G)$ is indeed $\mathbf{LC} (B G)$. On the other hand, the homotopy limit of $\mathbf{Sh} (B_\bullet G)$ is the $\infty$-topos $\mathcal{B} G$ of equivariant $\infty$-sheaves on $G$. Of course, the locally constant objects in $\mathcal{B} G$ are those such that the underlying $\infty$-sheaf on $G$ is locally constant, so the full $\infty$-subcategory of $\mathcal{B} G$ spanned by the locally constant objects is equivalent to $\mathbf{LC} (B G)$. 
So this is a precise sense in which the difference between the space $B G$ and the stack $B G$ corresponds to the difference between locally constant sheaves and general sheaves.<|endoftext|> -TITLE: Closed Form Solution of $ \arg \min_{x} {\left\| x - y \right\|}_{2}^{2} + \lambda {\left\|x \right\|}_{2} $ - Tikhonov Regularized Least Squares -QUESTION [5 upvotes]: The problem is given by: -$$ \arg \min_{x} \frac{1}{2} {\left\| x - y \right\|}_{2}^{2} + \lambda {\left\|x \right\|}_{2} $$ -Where $y$ and $x$ are vectors. $\|\cdot\|_2$ is Euclidean norm. In the paper Convex Sparse Matrix Factorizations, they say the closed form solution is $x=\max\{y-\lambda \frac{y}{\|y\|_2}, 0\}$. I don't know why $x$ need to be non-negative. I think it may come from $\|x\|_2=\sqrt{x^Tx}$. But I cannot derive it. Please help. -The statement appears in the last paragraph line 2 on page 5 of the Paper. - -REPLY [2 votes]: One could see that the Support Function of the Unit Ball of $ {\ell}_{2} $ is given by: -$$ {\sigma}_{C} \left( x \right) = {\left\| x \right\|}_{2}, \; C = {B}_{{\left\| \cdot \right\|}_{2}} \left[0, 1\right] $$ -The Fenchel's Dual Function of $ {\sigma}_{C} \left( x \right) $ is given by the Indicator Function: -$$ {\sigma}_{C}^{\ast} \left( x \right) = {\delta}_{C} \left( x \right) $$ -Now, using Moreau Decomposition (Someone needs to create a Wikipedia page for that) $ x = \operatorname{Prox}_{\lambda f \left( \cdot \right)} \left( x \right) + \lambda \operatorname{Prox}_{ \frac{{f}^{\ast} \left( \cdot \right)}{\lambda} } \left( \frac{x}{\lambda} \right) $ one could see that: -$$ \operatorname{Prox}_{\lambda {\left\| \cdot \right\|}_{2}} \left( x \right) = \operatorname{Prox}_{\lambda {\sigma}_{C} \left( \cdot \right)} \left( x \right) = x - \lambda \operatorname{Prox}_{ \frac{{\delta}_{C} \left( \cdot \right)}{\lambda} } \left( \frac{x}{\lambda} \right) $$ -It is known that $ \operatorname{Prox}_{ {\delta}_{C} \left( \cdot \right) } = \operatorname{Proj}_{C} 
\left( x \right) $, namely the Orthogonal Projection onto the set. -In the case above, of $ C = {B}_{{\left\| \cdot \right\|}_{2}} \left[0, 1\right] $ it is given by: -$$ \operatorname{Proj}_{C} \left( x \right) = \frac{x}{\max \left( \left\| x \right\|, 1 \right)} $$ -Which yields: -$$ \begin{align} \operatorname{Prox}_{\lambda {\left\| \cdot \right\|}_{2}} \left( x \right) & = \operatorname{Prox}_{\lambda {\sigma}_{C} \left( \cdot \right)} \left( x \right) = x - \lambda \operatorname{Prox}_{ \frac{{\delta}_{C} \left( \cdot \right)}{\lambda} } \left( \frac{x}{\lambda} \right) \\ & = x - \lambda \operatorname{Prox}_{ {\delta}_{C} \left( \cdot \right) } \left( \frac{x}{\lambda} \right) \\ & = x - \lambda \operatorname{Proj}_{C} \left( \frac{x}{\lambda} \right) \\ & = x - \lambda \frac{x / \lambda}{ \max \left( {\left\| \frac{x}{\lambda} \right\|}_{2} , 1 \right) } = x \left( 1 - \frac{\lambda}{\max \left( {\left\| x \right\|}_{2} , \lambda \right)} \right) \end{align} $$<|endoftext|> -TITLE: Can a conformal map be turned into an isometry? -QUESTION [10 upvotes]: Let $f: (M, g) \to (M, g)$ be a conformal diffeomorphism of the riemannian manifold $(M, g)$, with -$$ g(f(p))(Df(p) \cdot v_1, Df(p) \cdot v_2) = \mu^2(p) g(p)(v_1, v_2), \quad \forall p \in M, \, \forall v_1, v_2 \in T_p M, $$ -for a certain function $\mu \in C^{\infty}(M)$. Is it possible to conformally change the metric of $M$ so as to $f$ become an isometry? -Explicitly, does there exist a metric $\tilde{g} = \alpha g$ in $M$ such that -$$\tilde{g}(f(p))(Df(p) \cdot v_1, Df(p) \cdot v_2) = \tilde{g}(p)(v_1, v_2), \quad \forall p \in M, \, \forall v_1, v_2 \in T_p M \, \text{ ?}$$ -Plugging $\tilde{g} = \alpha g$ in the above equation, we obtain that $\alpha$ must satisfy -$$ \alpha(p) = \mu^2(p) \alpha(f(p)), \quad \forall p \in M. $$ -Can we continue? - -REPLY [10 votes]: Theorem. Let $C(M)$ be the conformal group of a Riemannian manifold $M$ with $dim(M)=n \ge 2$. 
If $M$ is not conformally equivalent to $S^n$ or $E^n$, then $C(M)$ is inessential, i.e. it can be reduced to a group of isometries by a conformal change of metric. -This theorem has a long and complicated history; you can find its proof and the historical discussion in -J. Ferrand, The action of conformal transformations on a Riemannian manifold. Math. Ann. 304 (1996), no. 2, 277–291. -In view of this theorem, whenever $f: (M,g)\to (M,g)$ is a conformal automorphism without a fixed point, there exists a positive function $\alpha$ on $M$ such that $f: (M,\alpha g)\to (M,\alpha g)$ is an isometry. I will prove it in the case when $(M,g)$ is conformal to the sphere and leave you the case of $E^n$, as it is similar. -Every conformal automorphism $f$ of the standard sphere which does not have a fixed point in $S^n$ has to have a fixed point $p$ in the unit ball $B^{n+1}$. (I am using the Poincare extension of conformal transformations of $S^n$ to the hyperbolic $n+1$-space in its unit ball model.) After conjugating $f$ via an automorphism $q$ of $S^n$ (sending $p$ to the center of the ball), we obtain $h=q f q^{-1}$ fixing the origin in $B^{n+1}$, which implies that $h\in O(n+1)$ and, thus, preserves the standard spherical metric $g_0$ on $S^n$. Now, use the fact that $g_0$ is conformal to $g$ and $q^*(g_0)$ is conformal to $g$ as well. qed<|endoftext|> -TITLE: On a remarkable system of fourth powers using $x^4+y^4+(x+y)^4=2z^4$ -QUESTION [6 upvotes]: The problem is to find four integers $a,b,c,d$ such that, -$$a^4+b^4+(a+b)^4=2{x_1}^4\\a^4+c^4+(a+c)^4=2{x_2}^4\\a^4+d^4+(a+d)^4=2{\color{blue}{x_3}}^4\\b^4+c^4+(b+c)^4=2{x_4}^4\\b^4+d^4+(b+d)^4=2{x_5}^4\\c^4+d^4+(c+d)^4=2{x_6}^4$$ -As W. Jagy pointed out, the form $x^4+y^4+(x+y)^4 = 2z^4$ appears in the context of triangles with integer sides and one $120^\circ$ angle. PM 2Ring discovered that, remarkably, the quadruple -$$a,b,c,d = 195, 264, 325, 440$$ -yields five integer $x_i$ (all except $x_3$).
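(This quadruple is easy to verify with a short script; the integer fourth-root test below, via nested `isqrt`, is just one way to do it and is my own illustration.)

```python
from math import isqrt
from itertools import combinations

def is_2z4(x, y):
    """Check whether x^4 + y^4 + (x+y)^4 = 2 z^4 for some integer z."""
    s = (x**4 + y**4 + (x + y)**4) // 2   # the sum is always even
    r = isqrt(isqrt(s))                   # floor of the fourth root of s
    return r**4 == s

pairs = list(combinations((195, 264, 325, 440), 2))
# Exactly five of the six pairs work; (195, 440) is the exception.
print([(x, y) for x, y in pairs if is_2z4(x, y)])
```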
-I found that, using an elliptic curve, it can be shown that there are infinitely many non-zero integer triples with $\gcd(a,b,c)=1$ such that three of the $x_i$ are integers. - -Q: However, are there infinitely many quadruples with $\gcd(a,b,c,d)=1$ such that at least five of the $x_i$ are integers? - -REPLY [3 votes]: After mulling over the problem, it turned out the same elliptic curve can make five of the $x_i$ integers. To start, note that -$$x^4+y^4+(x+y)^4 = 2(x^2+xy+y^2)^2$$ -Thus, the system is reduced to finding, -$$\color{blue}{a^2+ab+b^2 = x_1^2}\tag1$$ -$$\color{blue}{a^2+ac+c^2 = x_2^2}\tag2$$ -$$b^2+bc+c^2 = x_4^2\tag3$$ -$$\color{brown}{b^2+bd+d^2 = x_5^2}\tag4$$ -$$\color{brown}{c^2+cd+d^2 = x_6^2}\tag5$$ -Treat $b,c$ as constants. Plugging them into $(1),(2)$, a pair $\color{blue}{P_1}$ of quadratics in $\color{blue}a$ must be made a square. Since this has a rational point, it is birationally equivalent to an elliptic curve. -Plugging $b,c$ into $(4),(5)$, a pair $\color{brown}{P_2}$ of quadratics in $\color{brown}d$ must be made a square. But $P_1$ and $P_2$ have the same form. Thus, two different points on the same elliptic curve will yield the $a,d$. Clearing denominators, we then get another quadruple, -$$a,b,c,d = 232538560625,\, 670011598080,\, 824824884000,\, 749417043168$$ -and infinitely many more with $\gcd(a,b,c,d)=1$. (Presumably, smaller ones may exist.)<|endoftext|> -TITLE: Must a minimum weight spanning tree for a graph contain the least weight edge of every vertex of the graph? -QUESTION [8 upvotes]: I am currently learning about spanning trees and Kruskal's algorithm, and I was wondering whether a minimum weight spanning tree of a weighted graph must contain one of the least weight edges of every vertex. -Is that the case? - -REPLY [4 votes]: Yes. -Let's assume that's not true, i.e. there exists a vertex $v$ such that the MST does not use any of its smallest weight edges (there may be more than one).
Let $e$ be any such edge. Adding $e$ to the MST creates a cycle through $v$; remove the other edge of $v$ on that cycle, which by assumption has strictly greater weight. This produces a spanning tree of smaller total weight, contradicting the minimality of the MST. -I hope this helps $\ddot\smile$<|endoftext|> -TITLE: Modular DNA as a way of classifying numbers -QUESTION [6 upvotes]: I'm going to start with a few examples. I may need someone to help correct wording. -I'm going to write what I call the modular fingerprint of the following numbers. It's the list of the remainders of these numbers when divided by all primes smaller than the number itself: -$15\equiv$ -$1\pmod{2}$, $0\pmod{3}$, $0\pmod{5}$, $1\pmod{7}$, $4\pmod{11}$, $2\pmod{13}$ -So 15's list would be $\{1,0,0,1,4,2\}$. -17 -- $\{1,2,2,3,6,4\}$ -51 -- $\{1,0,1,2,7,12,0,13,5,22,20,14,10,8,4\}$ -Question 1: -Are there infinitely many numbers that begin with any certain fingerprint? For instance, there are infinitely many that begin with $\{1\}$: all the odds. There are infinitely many that begin with $\{1,0\}$: odd multiples of $3$. Are there also infinitely many numbers that begin with the same fingerprint as $51$? -Question 2: -If #1 is true, are these fingerprints continuous? For instance, is there at least one (and thus infinitely many) number for each possible starting fingerprint? -$\{0\},$ -$\{0,0\}, \{0,1\}, \{0,2\}, \{1,0\}, \{1,1\}, \{1,2\},$ -$\{0,0,0\}, \{0,0,1\}, \{0,0,2\}, \{0,0,3\}, \{0,0,4\}, \{0,1,0\}, \{0,1,1\}, \ldots, \{1,2,4\}$, etc. - -REPLY [4 votes]: Key phrase: Chinese remainder theorem.<|endoftext|> -TITLE: Irreducibility of $(1+x)^{2^s}+(1-x)^{2^s}$ -QUESTION [7 upvotes]: Let $n=2^s$ with $s \geq 1$. Is the polynomial -$$(1+x)^n+(1-x)^n \in \mathbb{Q}[x]$$ -irreducible? I have checked this for some values of $s$ and it seems to be true. Notice that we can also write it as $2 \sum_{k \geq 0} \binom{n}{2k} x^{2k}$. I have already found an irreducible factorization with substituted cyclotomic polynomials if $n$ is odd.
The even case leads to the case of $n=2^s$. - -REPLY [7 votes]: Your polynomial $P_n(x)$ is irreducible if and only if $Q_n(z) = z^n + 1$ is irreducible (the fractional linear transformation $z = (1+x)/(1-x)$ maps from one to the other). It is well-known that $Q_{2^n}$ is the cyclotomic polynomial $\Phi_{2^{n+1}}$, and thus is irreducible. -EDIT: In general, consider a fractional linear transformation -$$z = \phi(x) = \dfrac{ax+b}{cx+d}$$ -where $a,b,c,d$ are rational, $ad-bc \ne 0$, $c \ne 0$. For a polynomial $P$ of degree $m$ over $\mathbb Q$, we have -$$P(z) = (cx+d)^{-m} R(x)$$ -where $R$ is again a polynomial of degree $\le m$. Since $P(a/c) = \lim_{x \to \infty} P(\phi(x))$, $R(x)$ has degree $m$ as long as -$P(a/c) \ne 0$. In the other direction, $\phi^{-1}(z) = \dfrac{dz-b}{-cz+a}$, and $cx+d = (ad-bc)/(a-cz)$ where $z = \phi(x)$, so that -$$P(z) = (ad-bc)^{-m} (a-cz)^m R(\phi^{-1}(z))$$ -so any polynomial $R$ of degree $m$ such that $R(-d/c) \ne 0$ arises in this way from such a polynomial $P$. Now $P$ factors as $P_1 P_2$ -with degrees $m_1, m_2$ iff $R$ factors as $R_1 R_2$ with degrees $m_1, m_2$, where $P_i(z) = (cx+d)^{-m_i} R_i(x)$.<|endoftext|> -TITLE: Predicate logic: How do you self-check the logical structure of your own arguments? -QUESTION [24 upvotes]: In propositional logic, there are truth tables. So you can check if the logical structure of your argument is, not correct per se, but if it's what you intended it to be. -In predicate logic, I have seen no reference to truth tables, nor have I seen any use (literal use) of truth tables when searching for examples where truth tables are used in PL. -It would be nice to check the logical structure of my own arguments, as I will not always have someone to validate my own work. 
I plan on employing my skills in logic, but I want a sure fire way to ensure that my form is correct :) - -REPLY [33 votes]: Truth tables are not enough to capture first-order logic (with quantifiers), so we use inference rules instead. Each inference rule is chosen to be sound, meaning that if you start with true statements and use the rule you will deduce only true statements. We say that these rules are truth-preserving. If you choose carefully enough, you can make it so that the rules are not just truth-preserving but also allow you to deduce every (well-formed) statement that is necessarily true (in all situations). -What you are probably looking for (namely a practical way to rigorously check the logical validity of your arguments) is natural deduction. There are many different styles, the most intuitive type being Fitch-style, which mark subcontexts using indentation or some related visual demarcation. The following system uses indentation and follows the intuition most closely in my opinion. -$ -\def\block#1{\begin{array}{ll}\ &{#1}\end{array}} -\def\fitch#1#2{\begin{array}{|l}#1\\\hline#2\end{array}} -\def\sub#1#2{\text{#1}:\\\block{#2}} -\def\imp{\Rightarrow} -\def\eq{\Leftrightarrow} -\def\nn{\mathbb{N}} -\def\none{\varnothing} -\def\pow{\mathcal{P}} -$ - -Contexts -Every line is either a header or a statement. We shall put a colon after each header and a full-stop after each statement. Each header specifies some subcontext (contained by the current context), and the lines governed by that header is indicated by the indentation. The full context of each line is specified by all the headers that govern it (i.e. all the nearest headers above it at each lower indentation level). 
-For example a nested case analysis might look like: -  $\sub{If $A$}{\sub{If $B$}{...} \\ \sub{If $¬B$}{...}} \\ \sub{If $\neg A$}{...}$ -And reasoning about an arbitrary member of a collection $S$ would look like: -  $\sub{Given $x{∈}S$}{...}$ -Note that what is stated in some context may be invalid in other contexts. Once you understand the principle behind contexts and the indentation, the following rules are very natural. Also note that for first-order logic these two kinds of context headers (for conditional subcontexts and universal subcontexts respectively) are the only kinds needed. -Syntax rules -A statement must be an atomic (indivisible) proposition or a compound statement formed in the usual way using boolean operations or quantifiers, with the restriction that every variable that is bound by a quantifier is not already used to refer to some object in the current context, and that there are no nested quantifiers that bind the same variable. -Natural deduction rules -Each inference rule is of the form: -  $\fitch{\text{X}}{\text{Y}}$ -which means that if the last lines you have written match "X" then you can write "Y" immediately after that at the same level of indentation. Each application of an inference rule is also tied to the current context, namely the context of "X". We will not mention "current context" all the time. - -Boolean operations -Take any statements $A,B,C$ (in the current context). -restate: If we prove something we can affirm it again in the same context. -  $\fitch{A.\\ ...}{A.}$ -Note that "$...$" denote any number of lines that are at least at the depicted indentation level. In the above rule, this means that all the lines written since the earlier writing of "$A.$" must be in the same context (or some subcontext). -In practice we never actually write the same line twice. To indicate that we can omit a line in a proof, I'll mark it with square-brackets like this: -  $\fitch{A. 
\\ ...}{[A.]}$
-⇒sub       ⇒restate     (We can create a conditional subcontext where $A$ holds.)
-  $\fitch{}{\sub{If $A$}{[A.]}}$
-  $\fitch{B. \\ ... \\ \sub{If $A$}{...}}{\block{[B.]}}$
-⇒intro       ⇒elim
-  $\fitch{\sub{If $A$}{... \\ B.}}{[A \imp B.]}$
-  $\fitch{A \imp B. \\ A.}{B.}$
-∧intro     ∧elim
-  $\fitch{A. \\ B.}{A \land B.}$
-  $\fitch{A \land B.}{[A.] \\ [B.]}$
-∨intro     ∨elim
-  $\fitch{A.}{[A \lor B.] \\ [B \lor A.]}$
-  $\fitch{A \lor B. \\ A \imp C. \\ B \imp C.}{C.}$
-¬intro     ¬elim     ¬¬elim
-  $\fitch{A \imp \bot.}{\neg A.}$
-  $\fitch{A. \\ \neg A.}{\bot.}$
-  $\fitch{\neg \neg A.}{A.}$
-Note that by using ¬intro and ¬¬elim we can get the following additional inference rule:
-  $\fitch{\neg A \imp \bot.}{A.}$
-which corresponds to how one would attempt to prove $A$ by contradiction, namely to show that assuming $\neg A$ implies a falsehood.
-⇔intro       ⇔elim
-  $\fitch{A \imp B. \\ B \imp A.}{A \eq B.}$
-  $\fitch{A \eq B.}{[A \imp B.] \\ [B \imp A.]}$
-Quantifiers and equality
-The rules here are for restricted quantifiers because usually we think in terms of them. First we need some definitions.
-Used variable: A variable that is declared in the header of some containing ∀-context or declared in some previous ∃-elimination ("let") step in some containing context.
-Unused variable: A variable that is not used.
-Fresh variable: A variable that does not appear in any previous statement in any containing context.
-Object expression: An expression that refers to an object (e.g. a used variable, or a function-symbol applied to object expressions).
-Property with $k$ parameters: A string $P$ with some blanks where each blank has some label from $1$ to $k$, such that replacing each blank in $P$ by some object expression yields a statement. If $k = 2$, then $P(E,F)$ is the result of replacing each blank labelled $1$ by $E$ and replacing each blank labelled $2$ by $F$. Similarly for other $k$.
-In this section, $E,F$ (if involved) can be any object expressions (in the current context). -We start with the following rules that provide a type of all objects. -universe: $obj$ is a type. -  $\fitch{}{[E{∈}obj.]}$ -Now take any type $S$ and a 1-parameter property $P$ and an unused variable $x$ that does not appear in $S$ or $P$. -∀sub           ∀restate         (We can create a ∀-subcontext in which $x$ is of type $S$.) -  $\fitch{}{\sub{Given $x{∈}S$}{[x{∈}S.]}}$ -  $\fitch{A. \\ ... \\ \sub{Given $x{∈}S$}{...}}{\block{[A.]}}$ ($x$ must not appear in $A$) -∀intro           ∀elim -  $\fitch{\sub{Given $x{∈}S$}{... \\ P(x).}}{\forall x{∈}S\ ( \ P(x) \ ).}$ -  $\fitch{\forall x{∈}S\ ( \ P(x) \ ). \\ E{∈}S.}{P(E).}$ ($E$ must not share any unused variables with $P$) -∃intro           ∃elim -  $\fitch{E{∈}S. \\ P(E).}{\exists x{∈}S\ ( \ P(x) \ ).}$ -  $\fitch{\exists x{∈}S\ ( \ P(x) \ ).}{\text{Let $y{∈}S$ such that $P(y)$}. \\ [y{∈}S.] \\ [P(y).]}$ (where $y$ is a fresh variable) -=intro       =elim -  $\fitch{}{[E=E.]}$ -  $\fitch{E=F. \\ P(E).}{P(F).}$ ($F$ must not share any unused variable with $P$) -Variable renaming -Finally, the following rules for variable renaming are redundant, but would shorten proofs. -∀rename         ∃rename -  $\fitch{\forall x{∈}S\ ( \ P(x) \ ).}{[\forall y{∈}S\ ( \ P(y) \ ).]}$ -  $\fitch{\exists x{∈}S\ ( \ P(x) \ ).}{[\exists y{∈}S\ ( \ P(y) \ ).]}$ -  (where $y$ is an unused variable that does not appear in $P$) -Short-forms -For convenience we write "$\forall x,y{∈}S\ ( \ P(x,y) \ )$" as short-form for "$\forall x{∈}S\ ( \ \forall y{∈}S\ ( \ P(x,y) \ ) \ )$", and similarly for more variables and for "$\exists$". We shall also compress nested ∀-subcontext headers in the following form: -  $\sub{Given $x{∈}S$}{\sub{Given $y{∈}S$}{...}}$ -to: -  $\sub{Given $x,y{∈}S$}{...}$ -Additionally, "$\exists! x{∈}S\ ( \ P(x) \ )$" is short-form for "$\exists x{∈}S\ ( \ P(x) \land \forall y{∈}S\ ( \ P(y) \imp x=y \ ) \ )$". 
- -Example -Here is an example, where $S,T$ are any types and $P$ is any property with two parameters. -First with all lines shown: -  If $\exists x{∈}S\ ( \ \forall y{∈}T\ ( \ P(x,y) \ ) \ )$:   [⇒sub] -    $\exists x{∈}S\ ( \ \forall y{∈}T\ ( \ P(x,y) \ ) \ )$.   [⇒sub] -    Let $a{∈}S$ such that $\forall y{∈}T\ ( \ P(a,y) \ )$.   [∃elim] -    $a{∈}S$.   [∃elim] -    $\forall y{∈}T\ ( \ P(a,y) \ )$.   [∃elim] -    $\forall z{∈}T\ ( \ P(a,z) \ )$.   [∀rename] -    Given $y{∈}T$:   [∀sub] -      $y{∈}T$.   [∀sub] -      $\forall z{∈}T\ ( \ P(a,z) \ )$.   [∀restate] -      $y{∈}T$.   [restate] -      $P(a,y)$.   [∀elim] -      $a{∈}S$.   [∀restate] -      $\exists x{∈}S\ ( \ P(x,y) \ )$.   [∃intro] -    $\forall y{∈}T\ ( \ \exists x{∈}S\ ( \ P(x,y) \ ) \ )$.   [∀intro] -  $\exists x{∈}S\ ( \ \forall y{∈}T\ ( \ P(x,y) \ ) \ ) \imp \forall y{∈}T\ ( \ \exists x{∈}S\ ( \ P(x,y) \ ) \ )$.   [⇒intro] -Finally with all lines in square-brackets removed: -  If $\exists x{∈}S\ ( \ \forall y{∈}T\ ( \ P(x,y) \ ) \ )$:   [⇒sub] -    Let $a{∈}S$ such that $\forall y{∈}T\ ( \ P(a,y) \ )$.   [∃elim] -    Given $y{∈}T$:   [∀sub] -      $P(a,y)$.   [∀elim] -      $\exists x{∈}S\ ( \ P(x,y) \ )$.   [∃intro] -    $\forall y{∈}T\ ( \ \exists x{∈}S\ ( \ P(x,y) \ ) \ )$.   [∀intro] -  $\exists x{∈}S\ ( \ \forall y{∈}T\ ( \ P(x,y) \ ) \ ) \imp \forall y{∈}T\ ( \ \exists x{∈}S\ ( \ P(x,y) \ ) \ )$.   [⇒intro] -This final proof is clean yet still easily computer-verifiable. -Definitorial expansion -To facilitate definitions, which can significantly shorten proofs, we also have the following definitorial expansion rules. -For each $k$-parameter property $P$ and fresh predicate-symbol $Q$: -  $\fitch{}{\text{Let $Q(x_1,...x_k) ≡ P(x_1,...x_k)$ for each $x_1{∈}S_1$ and ... 
and $x_k{∈}S_k$.} - \\ [∀x_1{∈}S_1\ \cdots ∀x_k{∈}S_k\ ( \ Q(x_1,...x_k) ⇔ P(x_1,...x_k) \ ).]}$ -For each $(k+1)$-parameter property $R$ and fresh function-symbol $f$: -  $\fitch{∀x_1{∈}S_1 \cdots ∀x_k{∈}S_k\ ∃!y{∈}T ( \ R(x_1,...x_k,y) \ ) } -{\text{Let $f : S_1{×}{\cdots}{×}S_k{→}T$ such that $R(x_1,...x_k,f(x_1,...x_k))$ for each $x_1{∈}S_1$ and ... and $x_k{∈}S_k$.} - \\ [∀x_1{∈}S_1\ \cdots ∀x_k{∈}S_k\ ( \ f(x_1,...x_k)∈T ∧ R(x_1,...x_k,f(x_1,...x_k)) \ ).]}$ -These rules are redundant in the sense that any statement you can prove that does not use any of the new symbols can be proven without using definitorial expansion. -Notes -The above rules avoid the usual trouble that many other systems have, where variables used for witnesses of existential statements must be distinguished from variables used for arbitrary objects. The reason is that every variable here is either specified by a ∀-subcontext or by a "let"-statement; in other words there are no free variables. The fact that every variable is bound is strongly related to the fact that this system allows an empty universe, if there are no other axioms. -Also, every variable is specified by a unique header or "let"-statement in the current context; in other words there is no variable shadowing. This is by design, and in actual mathematical practice we also abide by this, though most other formal systems do not. As a consequence, sentences such as "$\exists x\ \forall x\ ( x=x )$." simply cannot be written down in this system. If you wish to permit such kind of terrible sentences, you would have to modify the rules appropriately, but it will most probably cause a headache. -Finally, there were some subtle technical decisions. For the quantifier rules, the reason I required that $x$ does not appear in $S,P$ is that, if we later on include rules for specifying types, we would usually have variable names in its syntax, which would cause problems. 
For example, if we have written in the current context "$x∈\{y:y∈S∧y∈T\}$" and "$x∈U$", it will not be sensible to allow writing "$∃y∈U\ ( y∈\{y:y∈S∧y∈T\} )$". Similarly, if we have written "$x=\{y:P(y)\}$" and "$∃y∈U\ ( Q(x,y) )$", we do not want to allow writing "$∃y∈U\ ( Q(\{y:P(y)\},y) )$". -Also, to allow a variable to become fresh again after leaving the subcontext in which it was declared, I required that the ⇒intro and ∀intro rules can be applied only immediately after the corresponding ⇒-subcontext or ∀-subcontext. It would be simpler to simply define a fresh variable as one that does not appear in any previous line, but then we can easily run out of fresh variable names in a long proof. -~ ~ ~ ~ ~ ~ ~ -To illustrate the flexibility of this system, I will express both Peano Arithmetic and Set Theory as extra rules that can be simply added to the system. -Peano Arithmetic -Add the type $\nn$ and the symbols of PA, namely the constant-symbols $0,1$ and the $2$-input function-symbols $+,·$ and the $2$-input predicate-symbol $<$. -Add the axioms of PA$^-$, adapted from here: - -$\forall x,y{∈}\nn\ ( \ x+y ∈ \nn \ )$. -$\forall x,y{∈}\nn\ ( \ x·y ∈ \nn \ )$. -$\forall x,y{∈}\nn\ ( \ x+y=y+x \ )$. -$\forall x,y{∈}\nn\ ( \ x·y=y·x \ )$. -$\forall x,y,z{∈}\nn\ ( \ x+(y+z)=(x+y)+z \ )$. -$\forall x,y,z{∈}\nn\ ( \ x·(y·z)=(x·y)·z \ )$. -$\forall x,y,z{∈}\nn\ ( \ x·(y+z)=x·y+x·z \ )$. -$\forall x{∈}\nn\ ( \ x+0=x \ )$. -$\forall x{∈}\nn\ ( \ x·1=x \ )$. -$\forall x{∈}\nn\ ( \ \neg x -TITLE: Rolling $n$ $k$-sided dice and discarding the lowest $m$ of them. -QUESTION [16 upvotes]: In this question I will use the notation $\Bbb{E}(n,k,m)$ to refer to the expected average of rolling $n$ $k$-sided dice and discarding the lowest $m$ of them. 
-
-The most trivial response happens when $m = 0$, in which case we discard no dice and we arrive at the result:
-$$\Bbb{E}(n,k,0) = \frac{k}{2} + \frac{1}{2}$$
-
-When $m = 1$, I considered a sample case to give some intuition for this problem. Looking at the case of $\Bbb{E}(2, 6, 1)$, I found the following pattern.
-
-There is one $1$, three $2$s, five $3$s, etc.
-The sum of these outcomes is:
-$$\sum_{i=1}^{6} i(2i-1)$$
-In general, the expected outcome for $\Bbb{E}(2,k,1)$ is:
-$$\frac{\sum_{i=1}^{k} i(2i-1)}{k^2} = \frac{\sum_{i=1}^{k} 2i^2 - \sum_{i=1}^{k} i}{k^2} = \frac{\frac{2k(k+1)(2k+1)}{6} - \frac{k(k+1)}{2}}{k^2} = \frac{2}{3}k + \frac{1}{2} - \frac{1}{6k}$$
-
-Now I want to consider $\Bbb{E}(3,k,2)$. In the previous case each value $i$ occurred $2i - 1$ times. Where did $2i - 1$ come from?
-It looks like the frequency of occurrence is the difference of consecutive squares.
-$$i^2 - (i-1)^2 = 2i - 1$$
-This seems intuitive based on the image I provided. We can reason that in the case of $n = 3$, the frequency of occurrence will be the difference of consecutive cubes.
-$$i^3 - (i-1)^3 = 3i^2 - 3i + 1$$
-The expected outcome of $\Bbb{E}(3,k,2)$ is messy, so I'll just write the initial expression and the simplified expression.
-$$\frac{\sum_{i=1}^{k} i(3i^2-3i+1)}{k^3} = \frac{3}{4}k + \frac{1}{2} - \frac{1}{4k}$$
-
-Let's finally look at the case of $\Bbb{E}(n,k,n-1)$. Based on the previous results, I conjecture that it looks like:
-$$\frac{\sum_{i=1}^{k} i(i^n - (i-1)^n)}{k^n} = \frac{n}{n+1}k + \frac{1}{2} - \mathcal{O}(\frac{1}{k})$$
-My two questions are these:
-
-Is my conjecture for $\Bbb{E}(n,k,n-1)$ correct, and if so, how can I prove this?
-What happens when $m \ne n-1$? How can I adjust my analysis to account for discarding dice such that I leave not just the maximum value?
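The closed forms above are easy to check exactly by brute force. Here is a short sketch (not from the original post; the function name is my own) that enumerates all $k^n$ outcomes with exact rational arithmetic:

```python
from fractions import Fraction
from itertools import product

def expected_avg(n, k, m):
    """Exact E(n, k, m): expected average of the n - m highest values
    when rolling n k-sided dice, by enumerating all k**n outcomes."""
    total = Fraction(0)
    for roll in product(range(1, k + 1), repeat=n):
        kept = sorted(roll)[m:]        # discard the lowest m dice
        total += Fraction(sum(kept), len(kept))
    return total / k**n

# E(n, k, 0) = k/2 + 1/2
assert expected_avg(3, 6, 0) == Fraction(7, 2)
# E(2, k, 1) = (2/3)k + 1/2 - 1/(6k); for k = 6 this is 161/36
assert expected_avg(2, 6, 1) == Fraction(161, 36)
# E(3, k, 2) = (3/4)k + 1/2 - 1/(4k); for k = 4 this is 55/16
assert expected_avg(3, 4, 2) == Fraction(55, 16)
```

All three assertions agree with the derived formulas, which supports the conjectured pattern for small cases.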
- -REPLY [4 votes]: If $M$ is the maximum value on the $n$ dice thrown, then $\mathbb{P}(M\leq x)=(x/k)^n$ so that $\mathbb{E}(M)=\sum_{x=0}^{k-1} \left(1-\left({x\over k}\right)^n\right)=k-{1\over k^n}\sum_{x=0}^{k-1}x^n.$ Using Faulhaber's formula you can show that -$$\sum_{x=0}^{k-1}x^n -={1\over n+1}\left[k^{n+1}-{1\over 2}(n+1)k^n+O(k^{n-1})\right]$$ -and putting these together gives, as you expected, -$$\mathbb{E}(M)={n\over n+1}\,k+{1\over 2}+O(1/k).$$ - -Here is a more general argument. For any $1\leq j\leq n$ let $X_{(j)}$ be the $j$th order statistic of the $n$ -dice rolls. For $0\leq xx$ if and only if $Yx)=\mathbb{P}(Y -TITLE: Is knowledge of PDE useful for SDE? -QUESTION [5 upvotes]: I am a stochastic analysis student and am particularly interested in stochastic differential equations. What always struck me as odd is how little PDE (or even ODE for that matter) seems to have anything to do with SDE. My reasons for thinking so are the following. - -I've read on SDE many times and never encountered a single mention of PDE/ODE -My master's programme offers almost nothing on PDE. -Searching for both tags on math.SE -I encountered PDE literally only once in my life, while dealing with continuous time Markov processes. - -Recently, I've been drifting towards biology and started encountering PDEs more and more. This is not surprising, as they are arguably much more useful in that area than SDEs. It makes me wonder, however, whether I should perhaps devote time to ODE/PDE. This leads me to the following questions: - -Are PDE really so rarely relevant when it comes to SDE? Or possibly stochastic analysis in general? -What could a stochastics student take away from studying ODE/PDE? What areas should he/she focus on (if any)? (e.g. very basics of ODE, at least) -Since I am going into biology and thus might regret not knowing more PDE, how much sense would it make to make them a serious (secondary) area of study? 
Could PDE and SDE sometimes be seen as two approaches to the same problem, or be somehow analogous? Could they complement each other, or would I just be doomed to be studying two mostly unrelated fields?
-
-Thank you.
-
-REPLY [3 votes]: Just my thoughts:
-
-I think SDEs and PDEs are rather deeply intertwined. For instance, the Kolmogorov backward equation and Fokker-Planck equations (see the link in the comments above). Indeed, as expected intuitively, Ito diffusion processes and the classical diffusion (heat) PDE are deeply related:
-on a Riemannian manifold $(M,g)$, if $X$ is Brownian motion on $M$ (i.e. an Ito diffusion process with infinitesimal generator $\Delta_g/2$) with transition density function $p(t,x,y)$, then $p$ solves the following:
-$$
- \frac{\partial p}{\partial t} = \frac{1}{2}\Delta_g\, p,\,\;\;\;
- \lim_{t\rightarrow 0} p(t,x,y) = \delta_x(y)
-$$
-which is of course a heat equation (see e.g. Hsu, Heat Equations on Riemannian Manifolds and Bismut's formula).
-In the simple case of Euclidean space, we get $\Delta=\Delta_g$ and
-$$
- p(t,x,y) = \frac{1}{ (2\pi t)^{n/2} } \exp\left( \frac{-||x-y||^2}{2t} \right)
-$$
-which is the Gaussian heat kernel of the IVP above (see e.g.: Morters & Peres, Brownian Motion).
-Well, beyond PDE/ODE theory itself, which is much more commonly applied (at least for biological models), you get the outlooks from (1) above. :)
-As for what to study, it depends on your goal, but I would look at dynamical systems.
-Dynamical systems, ODEs, and PDEs are important tools for ecological, physiological and cellular (since they are often physically based, more in the biomedical engineering realm), and molecular systems modelling (see "systems biology"). But one area I know with heavy attention paid to stochastic partial differential equations (SPDEs) in particular is in modelling neurons (e.g.
see Tucker, Stochastic partial differential equations in Neurobiology: linear and nonlinear models for spiking neurons) because the ion channels are noisy and membrane voltage is a function of both space and time.<|endoftext|> -TITLE: Closed form for $\prod_{n=0}^\infty (1-z^{2^n})$ -QUESTION [7 upvotes]: Is there a closed form for the product $$f(z) = \prod_{n=0}^\infty 1-z^{2^n}$$ -either as a formal power series or as an analytic function in the disk $|z| < 1$? It's not hard to see that Taylor series coefficients of $f$ about 0 are all $\pm 1$: -$$f(z) = 1 - z - z^2 + z^3 - z^4 + z^5 + z^6 - z^7 + \dotsb$$ -and that they form a pattern in blocks, so to speak: -$$+- \quad\to\quad +--+ \quad\to\quad +--+-++- \quad\to\quad \dotsb$$ -But I don't know much else. I came across this infinite product when I incorrectly transcribed an exercise from a book, wasting a good deal of time before returning to the book to ask it, "are you sure?". (The exercise was to prove that $\prod_{n=0}^\infty 1 + z^{2^n} = (1-z)^{-1}$, which is easy.) - -REPLY [5 votes]: Converting my comment from 2016 into an answer, with elaborations: -The coefficients of $z^n$ in $f(z)$ is $-1$ to the power of the number of $1$s in the binary expansion of $n$; this is a slight variant of the Thue-Morse sequence with $1$ and $-1$ instead of $0$ and $1$ (A106400 on the OEIS). The coefficient is $1$ if $n$ is "evil" (has an even number of $1$s; A001969 on the OEIS) and $-1$ if $n$ is "odious" (has an odd number of $1$s; A000069 on the OEIS). The reciprocal -$$\frac{1}{f(z)} = \prod_{n \ge 0} \frac{1}{1 - z^{2^{n}}}$$ -is the generating function for the sequence $c_n$ counting the number of partitions of $n$ into powers of $2$, which is A018819 on the OEIS. $c_{2n}$ is A000123 on the OEIS because $c_{2n+1} = c_{2n}$. -According to Kachi and Tzermias' On the $m$-ary partition numbers this problem was first studied by Euler. 
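The claimed sign pattern is easy to check numerically. The following short sketch (not part of the original answer; plain Python, with helper names of my own choosing) multiplies out the first few factors of the product and compares each coefficient with the parity of 1-bits in its index:

```python
def product_coeffs(terms):
    """Coefficients of prod_{n=0}^{terms-1} (1 - z^(2^n)), computed exactly."""
    coeffs = [1]
    for n in range(terms):
        step = 2 ** n
        new = coeffs + [0] * step
        for i, c in enumerate(coeffs):
            new[i + step] -= c        # multiply by (1 - z^step)
        coeffs = new
    return coeffs

def sign(n):
    """+1 if n is evil (even number of 1-bits), -1 if n is odious."""
    return -1 if bin(n).count("1") % 2 else 1

coeffs = product_coeffs(5)            # partial product of degree 2^5 - 1 = 31
assert coeffs == [sign(n) for n in range(32)]
```

The first eight signs, `[1, -1, -1, 1, -1, 1, 1, -1]`, reproduce the block pattern $+--+-++-$ quoted in the question; the remaining factors of the infinite product only affect coefficients of degree $\ge 32$, so the check is exact up to that degree.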
Mahler showed in On a special functional equation that
-$$c_n = e^{O(1)} 2^{ -{k \choose 2}} \frac{n^k}{k!}$$
-where $k$ is the unique positive integer satisfying $2^{k-1} k \le n < 2^k (k+1)$. This gives $k \approx \frac{W(n \log 2)}{\log 2}$, where $W(x) \sim \log x$ is the Lambert W function, which lets us write down an asymptotic for the logarithm of $c_n$, the leading term of which is
-$$\log c_n \sim \frac{(\log n)^2}{2 \log 2}.$$
-de Bruijn's On Mahler's partition problem analyzes the asymptotics more precisely.
-These asymptotics imply that $c_n$ grows faster than polynomially but slower than exponentially and so rule out $f(z)$ having a particularly simple closed form; for example it follows that $f(z)$ cannot be rational (although this has an easier proof) or more generally meromorphic or algebraic.<|endoftext|>
-TITLE: Probability of completing a self-avoiding chessboard tour
-QUESTION [36 upvotes]: Someone asked a question about self-avoiding random walks, and it made me think of the following:
-Consider a piece that starts at a corner of an ordinary $8 \times 8$ chessboard. At each turn, it moves one step, either up, down, left, or right, with equal probability, except that it must stay on the board, of course, and it must not return to a square previously visited.
-
-Clarification. On any given step, if the piece has $n$ available moves (excluding those that would put the piece on a previously
- visited square), it chooses randomly and uniformly from those $n$
- moves. Example: Starting from a corner, at first move, $n = 2$, and
- either move is chosen with probability $1/2$. Next move, $n = 2$
- also, because it cannot return to the corner, and so either of the two
- other moves are chosen with probability $1/2$. On the third move, if
- it is on the edge, $n = 2$, while if it is off the edge, $n = 3$. And
- so on.
-
-It is possible for the piece to be deadlocked at some point prior to completing a tour of the chessboard.
For instance, if it starts at lower left, and moves up, right, right, down, left, it is now stuck.
-What is the probability that it completes the tour? Is there a method that answers this question besides exhaustive enumeration? What about $n \times n$ chessboards for $n \geq 3$? (The problem is trivial for $n = 1$ or $2$.)
-
-Analysis for $n = 3$, as another clarification:
-
-Let the chessboard be labelled $(1, 1)$ through $(3, 3)$, and the
- piece starts at $(1, 1)$. Without loss of generality, the piece moves
- to $(2, 1)$. Then:
-
-With probability $1/2$, the piece moves to the center square $(2, 2)$ on its second move. From there, only one move permits completion
- of the tour—the move to $(1, 2)$—and in that case, the tour is
- guaranteed to complete. This move is chosen with probability $1/3$.
-With probability $1/2$, the piece moves to $(3, 1)$ on its second move. It is then forced to move to $(3, 2)$.
-
-With probability $1/2$, it then moves to $(3, 3)$ on its fourth
- move and is guaranteed to complete the tour.
-Otherwise, also with probability $1/2$, it moves to the center
- square $(2, 2)$ on its fourth move. From there, it moves to $(1, 2)$
- with probability $1/2$ (and is then guaranteed to complete the tour),
- or to $(2, 3)$ also with probability $1/2$ (and is then unable to
- complete the tour).
-
-
-Thus, the probability of completing the tour on a $3 \times 3$ board
- is
-$$ p_3 = \frac{1}{2} \times \frac{1}{3}
- + \frac{1}{2} \times \left( \frac{1}{2} + \frac{1}{2} \times \frac{1}{2} \right)
- = \frac{1}{6} + \frac{3}{8} = \frac{13}{24} $$
-
-
-Update. An exhaustive enumeration in floating point yielded the following:
-
-For $n = 8, p_n = 0.000006751027716$
-
-REPLY [2 votes]: Let's solve a more general version of your problem:
-
-Suppose a piece stands at the vertex $v$ of a graph $G = (V, E)$.
At each turn, it moves along any edge starting from the vertex it is in currently, with equal probability, except that it must not return to a vertex previously visited. - -Let’s denote such probability as $P(G, v)$, then the problem is solved by the recurrent formula: -$$P(G, v) = \begin{cases} 1 & \quad |V| = 1 \\ \frac{\Sigma_{(v, w) \in E} P(G\setminus v, w)}{deg(v)} & \quad |V| > 1 \end{cases}$$ -Here $G\setminus v$ stands for a graph that is constructed by removing $v$ and all adjacent edges. -Using that formula the exact probability can always be calculated. -And the probability for $n \times n$ chessboard from your initial question is calculated by this wee piece of code in Python (which is the direct implementation of the aforementioned formula): -import numpy -from fractions import Fraction - -def rec(A, n): - if (A.size == 1): - return Fraction(1, 1) - else: - k = numpy.sum(A[n][:]) - if (k == 0): - return Fraction(0, 1) - else: - res = Fraction(0, 1) - for i in range(A.shape[0]): - if A[n][i] == 1: - if i < n: - res += rec(numpy.delete(numpy.delete(A, (n), axis = 0), (n), axis = 1), i) - else: - res += rec(numpy.delete(numpy.delete(A, (n), axis = 0), (n), axis = 1), i - 1) - res = res*Fraction(1, int(k)) - return res - - -n = int(input()) -A = numpy.zeros((n*n, n*n)) -for i in range(n): - for j in range(n): - for k in range(n): - for l in range(n): - if (i - j == 1 and k == l) or (j - i == 1 and k == l) or (k - l == 1 and i == j) or (l - k == 1 and i == j): - A[n*i + k][n*j + l] = 1 -print(rec(A, 0)) - -I should also mention, that the function "rec" is designed here to solve the generalised problem, with its arguments being exactly the adjacency matrix of $G$ and the number of its column corresponding to $v$<|endoftext|> -TITLE: Why is $(\mathbf{v} \cdot \nabla)\mathbf{v} = (\nabla \times \mathbf{v}) \times \mathbf{v} + \nabla (\frac{1}{2} \mathbf{v}^2)$? 
-QUESTION [5 upvotes]: The Convective Derivative or Material Derivative is usually written as $\frac{D}{Dt}=\frac{\partial}{\partial t} + \mathbf{v} \cdot \nabla$. According to MathWorld, this equation, multiplied with ${\bf{v}}$ equals: -$$ - \frac{D \mathbf{v}}{Dt} = \frac{\partial \mathbf{v}}{\partial t} + (\nabla \times \mathbf{v}) \times \mathbf{v} + \nabla (\frac{1}{2} \mathbf{v}^2) -$$ -Clearly, it must hold that; -$$ - (\mathbf{v} \cdot \nabla)\mathbf{v} = (\nabla \times \mathbf{v}) \times \mathbf{v} + \nabla (\frac{1}{2} \mathbf{v}^2) -$$ -However, I do not spot why this is true. What is the (trivial) identity that I am missing? - -REPLY [4 votes]: Here is another approach using the BAC-CAB identity and the definition of tensor product. I remind you of the BAC-CAB identity -$$\begin{align} -A \times (B \times C) &= B(A \cdot C)-C(A \cdot B) \\ -&= (B \otimes A) \cdot C - A \cdot (B \otimes C) -\end{align}$$ -So we can write -$$\begin{align} -(\nabla \times \mathbf{v}) \times \mathbf{v} &= -\mathbf{v} \times (\nabla \times \mathbf{v}) \\ -&= - [(\nabla \otimes {\bf{v}}) \cdot {\bf{v}} - {\bf{v}} \cdot (\nabla \otimes {\bf{v}})] \\ -&\equiv - [(\nabla {\bf{v}}) \cdot {\bf{v}} - {\bf{v}} \cdot (\nabla {\bf{v}})] \\ -&=-\nabla {\bf{v}} \cdot {\bf{v}} + {\bf{v}} \cdot \nabla {\bf{v}} \\ -&=-\frac{1}{2}\nabla ({\bf{v}} \cdot {\bf{v}}) + {\bf{v}} \cdot \nabla {\bf{v}} -\end{align}$$ -or equivalently we get -$${\bf{v}} \cdot \nabla {\bf{v}}=(\nabla \times \mathbf{v}) \times \mathbf{v}+\frac{1}{2}\nabla ({\bf{v}} \cdot {\bf{v}})$$<|endoftext|> -TITLE: Pi's Recursiveness -QUESTION [7 upvotes]: I don't know if this will make sense, but: -If $\pi$ is infinite and contains all strings of numbers including those of infinite length, then it must contain $\sqrt2$, and if $\sqrt2$ is infinite and contains all strings of numbers including those of infinite length then $\sqrt2$ contains $\pi$. This means that there is $\pi$ inside of $\pi$, and therefore it's recursive. 
-I probably went wrong somewhere (I am a secondary school student) and I don't know everything about these numbers. I am assuming that $\pi$ is infinite and contains all strings, although I know this has been called into question. So can anyone help me out? This just seems wrong, yet the sum of all natural numbers is $-1/12$, so anything is possible.
-
-REPLY [4 votes]: It's a good question and it shows an inquisitive and logical mind that you thought of it.
-So bear that in mind when I point out what is wrong with this.
-1) When people informally say "$\pi$'s decimal expansion is infinite, and given infinite options all possibilities must occur, so every possible string of digits must occur in pi" they mean all finite possible strings occur. Infinite strings never end, so the only infinite strings you can fit in are the ones that happen to be pi itself with some front end cut off. It just doesn't make sense for the reasons you think it wouldn't. There's no fancy mind numbing strange counterintuitive cardinality thinky explanation to make it work.
-2) We strongly suspect pi is "normal", meaning that its digits (in any base) will each occur infinitely often. That is, the digits occur with "normal" frequency and distribution. We don't actually know if pi is normal.
-If pi is normal, then, statistically, all finite strings of digits must occur eventually. If pi isn't normal then that needn't be the case.
-3) You are probably aware of two cardinalities of infinity. There is "countably" infinite, which means an infinite set can be indexed and "counted" or, in other words, there is a one-to-one correspondence between the infinite set and the natural numbers. The digits of pi are, because they are in order of place value, countably infinite.
-Then there is "bigger" uncountable infinity. The real numbers, for instance, are uncountable.
-The set of all infinite countable strings of digits is uncountable. But the set of all finite strings is countable.
The set of strings within pi (even the infinite ones) is countable. So there are "more" infinite strings than can possibly be in pi.
-Maybe that was more than you needed.
-4) The sum 1+2+3+... = -1/12 is kind of a misstatement. 1+2+3+... diverges and has no sum, as is intuitively obvious. Some infinite sums converge, such as 1 + 1/2 + 1/4 + 1/8 + ... = 2, and do have sums, but those that diverge do not. There is a function called the "zeta function" (you can google it) which, via analytic continuation, assigns values to expressions like these. If the infinite sum converges then the zeta function will result in the sum. This is a corresponding overlap. The zeta function is not the sum but something that coincides with it when there is a sum. If the infinite sum does not converge, the zeta function still returns a result, but it isn't the sum. The zeta function, evaluated at the point corresponding to 1+2+3+... (namely $\zeta(-1)$), results in -1/12. But that isn't the same thing as the sum being -1/12, which is obviously absurd. (Clearly each partial sum is positive... and bigger than the previous one...)<|endoftext|>
-TITLE: Proof of (P → Q) from ¬P?
-QUESTION [7 upvotes]: I'm trying to figure out how to prove P → Q from just ¬P. I can deduce it using informal logic. Since the only way a conditional is False is in the case of T → F, if P is False, P → Q must always be true.
-But I can't seem to prove it formally using a combination of these: (negation/conjunction/disjunction/conditional - introduction/elimination).
-Ultimately I want to use this to prove ¬(P → Q) → P by negation elimination.
-
-REPLY [11 votes]: I'm going to answer on the basis of the derivational system suggested by the terminology that you're using for your rules. What I'm doing may not be quite like what you've been taught, but I hope that it's at least close.
-You have $\neg P$ as hypothesis, and from that you want to derive $P\to Q$.
To get $P\to Q$ by conditional introduction, you’ll need to start by taking $P$ as an assumption: -$$\begin{align*} -&1.\quad \neg P\tag{Premise}\\ -&2.\quad|\,P\tag{Assumption} -\end{align*}$$ -We want to end up concluding $Q$ within the scope of the assumption, since conditional introduction will then give us $P\to Q$. Introduce a second assumption layer: -$$\begin{align*} -&1.\quad \neg P\tag{Premise}\\ -&2.\quad|\,P\tag{Assumption}\\ -&3.\quad||\,\neg Q\tag{Assumption}\\ -&4.\quad||\,P\\ -&5.\quad||\,\neg P\\ -&6.\quad||\,\bot\\ -&7.\quad|\,\neg\neg Q\\ -\end{align*}$$ -Here $\bot$ is a standard symbol for a contradiction, and deriving the contradiction from the assumption $\neg Q$ gives me $\neg\neg Q$ by negation introduction, thereby discharging the inner assumption; I’ll leave the other justifications so far to you. -Now you need some steps deriving $Q$ from $\neg\neg Q$; I’ll leave them to you. (You may have done or seen this derivation already.) Once those are in place, you can discharge the outer assumption by conditional introduction to get $P\to Q$.<|endoftext|> -TITLE: Why might Dieudonne have been "begging the question" by appealing to second-order Peano Axioms? -QUESTION [11 upvotes]: Following a comment by Peter Smith, I've been reading A. R. D. Mathias's paper The Ignorance of Bourbaki. -Parts of the paper are above my head, but I understand it well enough for my own amateurish purposes - apart from one sentence. -The context is a quotation from Dieudonne's A Panorama of Pure Mathematics (1982): - -The first axiomatic treatments (Dedekind-Peano arithmetic, Hilbert Euclid geometry) dealt with univalent theories , i.e. theories which are entirely determined by their complete system of axioms, unlike the theory of groups. 
- -Mathias's commentary on this passage ends with the following sentence, which baffles me: - -In saying that Peano arithmetic is univalent, Bourbaki probably has in mind some second-order characterisation of the standard model of arithmetic, - which is, of course, to beg the question. - -I can only imagine that he means that even the second-order axioms cannot stand on their own, because any such version of the Peano Axioms has two hidden prerequisites: - -Reference to a particular (but unmentioned) version of set theory. -Reference to a "standard model of arithmetic", whose existence and uniqueness is silently taken for granted (thus "begging the question" in a simpler sense than item 1). - -But neither of these ideas is really clear to me, nor do I have any idea whether the author is alluding to either of them, both of them, or neither. -(When one is confused, it is hard to explain the precise way in which one is confused!) -If the meaning of the quoted sentence is not obvious to others, I'll consider asking the author himself about it via e-mail, but I'm marginally less nervous about posting a question here - and perhaps an answer to the question here will also interest others. -The paper is about Bourbaki's blind spot in relation to developments in logic since 1929. As my own blind spots are incomparably more severe than any of Bourbaki's, it is entirely possible that I will fail to understand a perfectly good explanation of the meaning of the above sentence! -But I will be well enough satisfied by an answer that reduces my current bafflement, by one sentence in an otherwise intelligible paper, to a more familiar perplexity about mathematics itself. - -REPLY [4 votes]: Comment to Graffitics's answer. -The quote is from: - -Nicolas Bourbaki, L'architecture des mathematiques, into François Le Lionnais (ed.), Les grands courants de la pensée mathématique (1948), page 45. See Engl.transl., page 35. 
-
-The issue is not (according to my understanding) about first-order vs second-order axiomatization.
-Bourbaki is discussing the (seminal) notions of méthode axiomatique and of structure:
-
-page 40 - Engl.transl., page 28 - We can now make precise what is to be understood, in general, by a mathematical structure. [...] To set up the axiomatic theory of a given structure is to deduce the logical consequences of the axioms of the structure, forbidding oneself every other hypothesis about the elements under consideration (in particular, every hypothesis about their own "nature").
-page 41 - The relations which form the starting point of the definition of a structure can be of quite varied natures. The one occurring in group structures is what is called a "law of composition", that is, a relation between three elements which determines the third uniquely as a function of the first two. When the defining relations of a structure are "laws of composition", the corresponding structure is called an algebraic structure.
-page 43 - Guided by the axiomatic conception, let us then try to picture the whole of the mathematical universe. Certainly we shall scarcely recognize in it the traditional order, which, like that of the first nomenclatures of animal species, confined itself to placing side by side the theories which showed the greatest external resemblance. Instead of the well-delimited compartments of Algebra, Analysis, Number Theory and Geometry, we shall see, for example, the theory of prime numbers as a neighbor of the theory of algebraic curves, or Euclidean geometry alongside integral equations; and the ordering principle will be the conception of a hierarchy of structures, going from the simple to the complex, from the general to the particular.
-
-In this context we have to read the final remark about:
-
-page 45 - Engl.transl., page 35 - the first axiomatizations, and those which had the greatest impact (those of arithmetic with Dedekind and Peano, of Euclidean geometry with Hilbert), dealt with univalent theories, that is, theories such that the complete system of their axioms determined them entirely, and which consequently could not be applied to any theory other than the one from which they had been extracted (contrary to what we have seen for the theory of groups, for example).
-
-Dedekind-Peano axiomatization, as well as Euclid-Hilbert's one, aims at the "univocal characterization" of its intended structure.
-According to Bourbaki's point of view, the "general" notion of structure is mathematically more interesting and fruitful.
-
-Note
-Mathias' concern about the meager attention paid by Bourbaki to mathematical logic is correct; see:
-
-page 37 - Engl.transl., page 25 - every mathematical theory is a chain of propositions, each deduced from the others in conformity with the rules of a logic which, in essentials, is the one codified since Aristotle under the name of "formal logic", suitably adapted to the particular aims of the mathematician. It is therefore a banal truism to say that this "deductive reasoning" is a unifying principle for mathematics [...]. The mode of reasoning by chains of syllogisms is nothing but a transforming mechanism, applicable indifferently to all sorts of premises, and which therefore cannot characterize the nature of those premises. [...] To codify this language, to order its vocabulary and to clarify its syntax is a very useful piece of work, which indeed constitutes one facet of the axiomatic method, the one that can properly be called logical formalism (or, as one also says, "logistics"). But - and we insist on this point - it is only one facet, and the least interesting one.
-
-As noted by Mathias, Bourbaki (in 1948) seems totally unaware of Gödel's results of 1931 regarding the incompleteness of arithmetic and of 1940 regarding the consistency of the Continuum Hypothesis and the Axiom of Choice, and of their impact on the subsequent development of mathematical logic as a mathematical discipline: model theory, set theory, computability, etc.<|endoftext|>
-TITLE: Defining homology groups directly from the topology
-QUESTION [12 upvotes]: Both simplicial and singular homology theories rely on 'model objects', simplexes or simplicial complexes, to define the homology groups of a topological space.
-I was wondering if there is a way to define homology groups directly from the open set structure of a space $(X,\mathcal{T})$. Here's a formal formulation of my question:
-Is there a way to associate homology groups to bounded lattices with arbitrary joins so that $H_{n}(X)\simeq H_{n}(\mathcal{T})$, when seeing $\mathcal{T}$ as such a lattice ?
-Of course this needs some hypothesis on $X$, if only to settle on a definition for $H_{n}(X)$. I don't mind if strong assumptions on $X$ (compact Hausdorff, triangulable) are needed.
-Note that if the answer is positive, then $(X,\mathcal{T})$ and $(Y,\mathcal{T}')$ would have the same homology groups whenever $\mathcal{T}\simeq \mathcal{T'}$. But this is not surprising since in fact $\mathcal{T}\simeq \mathcal{T'}$ implies $X\simeq Y$ assuming both spaces are Hausdorff; see my previous question.
-
-REPLY [4 votes]: Here is the construction of a (co)homology theory for lattices which yields the Čech (co)homology in the case of topological spaces when applied to the lattice of open subsets. Recall that the Čech theory works with open coverings. An analogue of an open covering in a lattice $L$ is a subset $C$ of $L$ satisfying the property that the join of the elements of $C$ equals the unique maximal element of $L$. (The assumption is that $L$ is bounded and has arbitrary joins.)
-I will refer to such $C$ as a covering of $L$. I say that a covering $C$ of $L$ refines a covering $C'$ of $L$ if for each $c\in C$ there exists $c'\in C'$ such that $c\le c'$. A refinement of $C'$ is such a $C$ together with a map $f: C\to C'$
-$$
-f: c\mapsto c', \quad c\le c'.
-$$
-Given a covering $C$ of $L$ define its poset $P_C$ by taking meets of finite subsets of $C$. This poset yields a simplicial complex $X_C$, whose vertices are elements of $C$ and $c_0,...,c_n\in C$ define an $n$-simplex iff
-$$
-c_0 \wedge c_1 \wedge \dots \wedge c_n\ne 0
-$$
-where $0$ is the least element of $L$.
-The incidence relation between such simplices is the obvious one. Now, the Čech cohomology $H^*(L)$ (with coefficients in ${\mathbb Z}$) is defined as the direct limit
-$$
-\lim_{C} H^*(X_C)
-$$
-where the direct system is induced by the refinements of coverings $C$ of $L$. More precisely, if $(C,f)$ is a refinement of $C'$, we obtain a natural map of simplicial complexes
-$$
-\tilde f: X_C\to X_{C'}
-$$
-sending $c\in C$ to $f(c)\in C'$. Now, use the pull-back map of the cohomology groups
-$$
-\tilde f^*: H^*(X_{C'})\to H^*(X_C).
-$$
-The direct limit of this direct system can be defined to be the Čech cohomology of $L$. One can define Čech homology similarly, using the homology and the associated inverse system instead of the direct one.<|endoftext|>
-TITLE: If $A$ is the generator of $(P_t)$, then $A+f$ is the generator of $(P_t^f)$
-QUESTION [7 upvotes]: Let $X=(X_t)_{t\geq0}$ be a Markov process on a state space $\Gamma$ (a Hausdorff topological vector space), let $A$ be the infinitesimal generator of $X$ and let $\mathcal C(\Gamma)$ be the space of continuous functions on $\Gamma$.
Then for each continuous function $f \in \mathcal C(\Gamma)$ the operator $A+f$ is the infinitesimal generator of the semigroup $(P_t^f)_{t\geq0}$ defined by -\begin{equation*} -P_t^f g(x) = \mathbb E_x\left[\mathrm \exp\left\{\int_0^t f(X_s) ds\right\} g(X_t) \right], \qquad g \in \mathcal C(\Gamma), x \in \Gamma, -\end{equation*} -where $\mathbb E_x$ denotes the expectation with respect to the process $X$ starting in $x \in \Gamma$. -In other words: - -Show - \begin{equation*} -\lim_{t\rightarrow0} \frac{1}{t} \left( \mathbb E_x\left[\mathrm \exp\left\{\int_0^t f(X_s) ds\right\} g(X_t) \right] - g(x) \right) -= Ag(x) + f(x) g(x).\tag1 -\end{equation*} - -In a proof I'm studying I found this stated as a known fact about Markov processes, but I couldn't find any suitable reference and my knowledge about continuous Markov processes is limited. - -My question: Why and under which assumptions about $X$ and $\Gamma$ is (1) true? - -(In the case I encountered it, $\Gamma$ is actually finite.) -On a further note, does the semigroup $(P_t^f)$ bear any deeper meaning? The definition first seemed quite arbitrary to me, but the fact that this statement should be generally known makes it seem like there's more to it than I realise... - -REPLY [3 votes]: Write -\begin{equation*} - \frac{1}{t} \left( \mathbb E_x\left[\mathrm e^{\,\int_0^t f(X_s) ds} g(X_t) \right] - g(x) \right) -= \mathbb E_x\left[{\mathrm e^{\,\int_0^t f(X_s) ds} -1\over t}\, g(X_t) \right] +\mathbb{E}_x\left[{g(X_t)-g(x)\over t}\right]. -\end{equation*} -Letting $t\downarrow 0$, the right hand side becomes $f(x)g(x)+Ag(x).$ -You will want to assume that $f,g$ are bounded continuous functions, that $g\in{\cal D}(A)$ and that $(X_t)$ has right continuous sample paths, almost surely. 
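For the finite state space mentioned at the end of the question, the claim can also be checked numerically. Below is a rough sketch (the two-state chain, the rates, and the functions $f$, $g$ are all made up for illustration): it compares a Monte Carlo estimate of $P_t^f g$ against $e^{t(A+f)}g$, where $A$ is the rate matrix and $f$ acts as the diagonal matrix $\operatorname{diag}(f)$.

```python
import numpy as np
from scipy.linalg import expm

# Two-state chain on {0, 1}; rows of the rate matrix Q sum to 0.
Q = np.array([[-1.0,  1.0],
              [ 2.0, -2.0]])
f = np.array([0.3, -0.5])   # the "potential" f, used as the diagonal matrix diag(f)
g = np.array([1.0, 2.0])
t, n_paths = 0.4, 40_000
rng = np.random.default_rng(0)

def path_functional(x0):
    """Simulate X on [0, t] from x0; return exp(int_0^t f(X_s) ds) * g(X_t)."""
    s, x, integral = 0.0, x0, 0.0
    while True:
        hold = rng.exponential(1.0 / -Q[x, x])   # exponential holding time
        if s + hold >= t:
            integral += (t - s) * f[x]
            return np.exp(integral) * g[x]
        integral += hold * f[x]
        s += hold
        x = 1 - x                                # two states: jump to the other one

mc = np.array([np.mean([path_functional(x0) for _ in range(n_paths)])
               for x0 in (0, 1)])
exact = expm(t * (Q + np.diag(f))) @ g           # semigroup generated by Q + diag(f)
print(mc, exact)
```

The two printed vectors agree up to Monte Carlo error, consistent with $A+f$ generating $(P_t^f)$.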
-Provided that $f\leq 0,$ $A+f$ generates the Markov process which is the $A$ process with "killing" function $f$.<|endoftext|> -TITLE: Bijection between Extensions and Ext (Weibel Theorem 3.4.3) -QUESTION [8 upvotes]: I was wondering about one step in the proof of surjectivity of $\Theta$ constructed for Theorem 3.4.3 in Weibel's "An introduction to homological Algebra". -For an extension $\xi:0\to B\to X\to A\to0$ of $A$ by $B$, he associates the element $x=\Theta(\xi)\in \operatorname{Ext}^1(A,B) $, by applying $\operatorname{Ext}^*(A,-)$ to form the long exact sequence -$$\cdots\to\operatorname{Hom}(A,X)\to \operatorname{Hom}(A,A)\xrightarrow{\partial}\operatorname{Ext}^1(A,B)\to\cdots, $$ -and setting $x=\partial(\operatorname{id}_A)$. -He then shows that $\Theta$ gives a well defined map from the set of equivalence classes of extensions of $A$ by $B$ to $\operatorname{Ext}^1(A,B)$. -In proving surjectivity of $\Theta$, he considers an exact sequence $0\to M\to P \to A\to 0$ with $P$ projective. Applying $\operatorname{Ext}^*(-,B)$ - gives $\operatorname{Hom}(M,B)\xrightarrow{\partial}\operatorname{Ext}^1(A,B)\to 0$, so for $x\in \operatorname{Ext}^1(A,B)$ he picks $\beta:M\to B$ with $x=\partial(\beta)$. Then he constructs a diagram -\begin{array}{ccccccccc} - 0 & \xrightarrow{} & M & \xrightarrow{} & P & \xrightarrow{} & A & \xrightarrow{} & 0\\ - & & \downarrow & & \downarrow & & \parallel & & \\ - 0 & \xrightarrow{} & B & \xrightarrow{} & X & \xrightarrow{} & A & \xrightarrow{} & 0 -\end{array} -with the map from $M$ to $B$ given by $\beta$ and $X$ is the pushout of $B\leftarrow M\to P$. One shows that the lower row is exact (no problem). -Then he claims that the extension given by the lower row maps to $x$ under $\Theta$. How does this follow? He states that one uses the naturality of $\partial$, so one has to apply $\operatorname{Ext}$ in some way. -Edit:Taking the Ext long exact sequence doesn't solve the problem immediately, see the comments below. 
- -REPLY [15 votes]: The proof as intended by Weibel doesn't seem to work, since it requires one to solve the related question asked here Are those two ways to relate Extensions to Ext equivalent?. However, by using the dual version of his proof we can avoid this issue: -Pick an exact sequence $0\to B \to I\xrightarrow{\pi} N\to 0$, where now $I$ is an injective object. Then apply $Ext(A,-)$ to obtain en exact sequence -$$ ... \to Hom(A,N) \xrightarrow{\partial} Ext(A,B) \to 0,$$ and pick $\gamma \in Hom(A,N)$ with $\partial(\gamma)=x$. Now we let $X$ be the pullback of $A\xrightarrow{\gamma} N \xleftarrow{\pi}I$. This fits into a commutative diagram with exact rows: -\begin{array}{ccccccccc} - 0 & \xrightarrow{} & B & \xrightarrow{} & X & \xrightarrow{} & A & \xrightarrow{} & 0\\ - & & \parallel & & \downarrow & & \downarrow & & \\ - 0 & \xrightarrow{} & B & \xrightarrow{} & I & \xrightarrow{} & N & \xrightarrow{} & 0. -\end{array} -The upper row is now an extension $\xi$ for which one directly sees that $\Theta(\xi)=x$: -Applying $Ext(A,-)$ again gives a long ladder diagram, from which we consider the square -\begin{array}{ccc} -Hom(A,A) & \xrightarrow{\partial'} & Ext^1(A,B) \\ -\downarrow& & \parallel \\ -Hom(A,N) &\xrightarrow{\partial} & Ext^1(A,B) -\end{array} -The $\partial$ here is the same as above, and by the definition in Weibel we have $\Theta(\xi) =\partial'(id_A)$. Finally, the left vertical arrow is composition with $\gamma$ by definition of the $Hom$ functor. So $$\Theta(\xi) =\partial'(id_A)=\partial(\gamma\circ id_A) =x.$$<|endoftext|> -TITLE: Infinitely many prime divisors of $f(a)$ -QUESTION [6 upvotes]: Let $f(x)\in \mathbb{Z}[x]$ be a non constant polynomial with integer coefficients. Show that as $a$ varies over the integers, the set of divisors of $f(a)$ includes infinitely many primes... -To be frank, I have no idea where to start... -Trivial case is when constant term of $f(x)$ is zero. 
-In case of $f(x)=x(a_nx^n+\cdots+a_1)$ we have $p$ divides $f(p)$ for all primes $p$...
-Other than this I have no idea...
-Please give only hints..
-
-REPLY [2 votes]: Let $f(x)=a_nx^n+a_{n-1}x^{n-1}+\cdots+a_1x+a_0\in \mathbb Z[x]$.
-If $a_0=0$ it is evident that $p$ divides $f(p)$ for arbitrary primes, so we may assume $a_0\ne 0$. Assume, for contradiction, that there is only a finite number of prime divisors $p_1,p_2,p_3,\ldots,p_N$ of the values $f(k)$, $k\in \mathbb Z$, and form the product $P=p_1p_2p_3\cdots p_N$.
-We have
-$$f(a_0P)=a_0\left(a_na_0^{n-1}P^n+a_{n-1}a_0^{n-2}P^{n-1}+\cdots+ \space a_2a_0P^2+a_1P+1\right)$$
-It is clear that no prime divisor of the factor $$ a_na_0^{n-1}P^n+a_{n-1}a_0^{n-2}P^{n-1}+\cdots+ \space a_2a_0P^2+a_1P+1$$ can be one of $p_1,p_2,p_3,\ldots,p_N$, since this factor is congruent to $1$ modulo each $p_i$; and, replacing $P$ by a large multiple $mP$ if necessary, the factor has absolute value greater than $1$ and so does have a prime divisor. This is a contradiction.<|endoftext|>
-TITLE: Double pendulum probability distribution
-QUESTION [8 upvotes]: The double pendulum has a very beautiful chaotic trajectory. Is there any way to calculate the probability distribution of finding the end of the pendulum at each point?
-Link to formulations.
-
-REPLY [4 votes]: What you could try is to first write the system in Hamiltonian form. Then quote https://en.wikipedia.org/wiki/Liouville%27s_theorem_(Hamiltonian) which says that the distribution on phase space is uniform. You could assume ergodicity, that is, that the solution path is dense in the energy submanifold $H(q,p) = E$. Then project this onto your 2-D space.
-I think it would be a lot of work, but definitely doable.
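As a quick aside, one can at least get a feel for the empirical $(\theta_1,\theta_2)$ distribution by direct simulation and binning. A rough sketch (equal masses and lengths, the standard textbook equations of motion, RK4 with a fixed step, a crude histogram; all of these choices are mine, not claims about the analysis here):

```python
import math

m1 = m2 = 1.0          # masses
L1 = L2 = 1.0          # rod lengths
g = 9.81

def derivs(state):
    """Standard double-pendulum accelerations for state (th1, w1, th2, w2)."""
    th1, w1, th2, w2 = state
    d = th1 - th2
    den = 2 * m1 + m2 - m2 * math.cos(2 * d)
    a1 = (-g * (2 * m1 + m2) * math.sin(th1)
          - m2 * g * math.sin(th1 - 2 * th2)
          - 2 * math.sin(d) * m2 * (w2 ** 2 * L2 + w1 ** 2 * L1 * math.cos(d))) / (L1 * den)
    a2 = (2 * math.sin(d) * (w1 ** 2 * L1 * (m1 + m2)
          + g * (m1 + m2) * math.cos(th1)
          + w2 ** 2 * L2 * m2 * math.cos(d))) / (L2 * den)
    return (w1, a1, w2, a2)

def rk4_step(state, h):
    k1 = derivs(state)
    k2 = derivs(tuple(s + 0.5 * h * k for s, k in zip(state, k1)))
    k3 = derivs(tuple(s + 0.5 * h * k for s, k in zip(state, k2)))
    k4 = derivs(tuple(s + h * k for s, k in zip(state, k3)))
    return tuple(s + h / 6 * (a + 2 * b + 2 * c + e)
                 for s, a, b, c, e in zip(state, k1, k2, k3, k4))

def energy(state):
    """Total energy T + V, used as a sanity check on the integration."""
    th1, w1, th2, w2 = state
    T = (0.5 * m1 * (L1 * w1) ** 2
         + 0.5 * m2 * ((L1 * w1) ** 2 + (L2 * w2) ** 2
                       + 2 * L1 * L2 * w1 * w2 * math.cos(th1 - th2)))
    V = -(m1 + m2) * g * L1 * math.cos(th1) - m2 * g * L2 * math.cos(th2)
    return T + V

state = (2.0, 0.0, 1.0, 0.0)       # initial angles (rad), zero angular velocity
E0 = energy(state)
hist, h = {}, 1e-3                 # 0.2-rad bins over (th1, th2)
for _ in range(100_000):           # 100 s of simulated time
    state = rk4_step(state, h)
    key = (round(state[0] / 0.2), round(state[2] / 0.2))
    hist[key] = hist.get(key, 0) + 1
print("bins visited:", len(hist), "energy drift:", abs(energy(state) - E0))
```

The near-zero energy drift checks the integrator; the histogram `hist` is the empirical counterpart of the density one would try to predict analytically.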
-So making a start, the potential energy is
-$$ V = - (m_1+m_2) g L_1 \cos(\theta_1) - m_2 g L_2 \cos(\theta_2) ,$$
-and the Kinetic Energy is
-$$ T = \tfrac12 [\dot \theta_1, \dot \theta_2] A [\dot \theta_1, \dot \theta_2]^T = \tfrac12 [\eta_1, \eta_2] A^{-1} [\eta_1, \eta_2]^T $$
-where
-$$ A = \begin{bmatrix} m_1 L_1^2 + m_2 L_1^2 & m_2 L_1 L_2 \cos(\theta_1-\theta_2) \\ m_2 L_1 L_2 \cos(\theta_1-\theta_2) & m_2 L_2^2 \end{bmatrix} $$
-and $\eta_1$ and $\eta_2$ are the generalized momenta.
-See https://physics.stackexchange.com/questions/142238/non-integrability-of-the-2d-double-pendulum
-The Hamiltonian $H = V + T$ is conserved, so calculate its value using the initial conditions. For a given $[\theta_1,\theta_2]$, the probability density will be proportional to the area of the ellipse
-$$ \{[\eta_1,\eta_2] \in \mathbb R^2 : \tfrac12 [\eta_1, \eta_2] A^{-1} [\eta_1, \eta_2]^T \le H - V\}$$
-And this area will be proportional to
-$$ \begin{cases}\sqrt{\det(A)}\, (H-V) & \text{if $H > V$}\\ 0 &\text{if $H \le V$}\end{cases} $$
-Remember both $V$ and $A$ are functions of $\theta_1$ and $\theta_2$. Note that for any place where the pendulum can go, there will generally be two values of $\theta_1$ and $\theta_2$ for that point, where for one point the value of $\theta_1 - \theta_2$ will be exactly the negative of the value for the other position.
-Remember, this will give you the density in $[\theta_1,\theta_2]$ space (the green picture on the left side of the web page you provided), so you will have to divide this by the absolute value of the determinant of the Jacobian of the map that takes $[\theta_1,\theta_2]$ to $[x,y]$ (that is, $L_1L_2|\sin(\theta_1-\theta_2)|$).
-Now my calculations might be all wrong because maybe there is another constant of the motion (which means I lose the ergodicity property). The web page https://physics.stackexchange.com/questions/142238/non-integrability-of-the-2d-double-pendulum suggests there isn't another constant of motion.
But in the green picture in http://www.myphysicslab.com/pendulum/double-pendulum/double-pendulum-en.html, I can plainly see that for any choice of $[\theta_1, \theta_2]$ the velocity has a particular direction (or rather a choice of two directions). Conservation of the Hamiltonian (energy) would only restrict the magnitude of the velocity going through each point!
-Added later: I asked someone more knowledgeable than I about ergodicity. There are even situations where the set of points reached in phase space has positive measure on a level set of the energy function, but the double pendulum is still not ergodic. Apparently many books have been written on the subject. So my final answer is "I don't know." I'll not delete this answer, because I think the discussion is worthwhile. But I don't expect to receive any upvotes.<|endoftext|>
-TITLE: Does H2 depend only on the abelian quotient?
-QUESTION [5 upvotes]: Consider a finite group $G$ and an abelian group $N$. Let $G$ act trivially on $N$. Is $H^{2}(G,N)\cong H^{2}(G^{ab},N)$? ($G^{ab}=G/[G,G]$ the abelianization of $G$)
-I don't get group cohomology so I have no intuitions about this, and computing $H^{2}$ is a pain, so I am not sure how to even look for examples.
-
-REPLY [2 votes]: For trivial action, the relationship between $H^2(G, N)$ and the abelianization of $G$ is given by the universal coefficient theorem, which says that there is a split exact sequence
-$$0 \to \text{Ext}^1(H_1(G), N) \to H^2(G, N) \to \text{Hom}(H_2(G), N) \to 0.$$
-Here $H_1(G)$ is the abelianization. $H_2(G)$ is a group called the Schur multiplier, which can be computed using Hopf's formula, and it is generally nontrivial, so it's possible for the second term to be nontrivial even if the first term vanishes.
-Explicitly, take $G = A_5$. Since the abelianization of this group is trivial, the first term vanishes, and we have
-$$H^2(A_5, N) \cong \text{Hom}(H_2(A_5), N).$$
-Now we need to know $H_2(A_5)$.
Fortunately this was computed by our ancestors long ago: it turns out to be $\mathbb{Z}_2$. So if we take $N = \mathbb{Z}_2$ we get a nontrivial cohomology group. -If you're familiar with the relationship between $H^2(G, N)$ and central extensions of $G$ by $N$, you can also exhibit a nontrivial element of $H^2(A_5, \mathbb{Z}_2)$ by exhibiting a nontrivial $\mathbb{Z}_2$ central extension of $A_5$. There is a unique such extension called the binary icosahedral group $\widetilde{A}_5$, and it's the universal central extension of $A_5$. -Not getting group cohomology is a typical experience, I think. I expect that for many people it's a long sequence of unmotivated definitions and computations. The real meaning behind the subject is in homotopy theory and higher category theory but this requires time, experience, and the right background to appreciate.<|endoftext|> -TITLE: Is every Closed set a Perfect set? -QUESTION [9 upvotes]: From 'baby' Rudin. -I've seen that a set is closed iff it contains all of its limit points. In Rudin, $(d)$ says if every limit point of E is a point of E, then $E$ is closed. He also says $(h)$: $E$ is perfect if $E$ is closed and if every point of $E$ is a limit point of $E$. -But Closed $\implies$ contains all of its limit points. So, is every closed set a perfect set? - -REPLY [7 votes]: Closed means all limit points are in E. But that doesn't mean all points in E are limit points. Any closed set with a point that is not a limit point will not be perfect. -The easiest counter example is a set with a single point. That set is closed but its one point isn't a limit point. -Less trivial and less contrived is $D = \{a + 1/n| a \in \mathbb Z,n \in \mathbb N\} $. Every integer is a limit point. No other point is a limit point. All integers are in D (because $a + 1/1$ is an integer) so D is closed. But for all $n > 1$ then $a + 1/n $ is in D but is not a limit point. 
So D is not perfect.<|endoftext|>
-TITLE: Prove a sequence is Cauchy and thus convergent
-QUESTION [7 upvotes]: Suppose that $0<\alpha<1$ and that $\{ x_n\}$ is a sequence which satisfies $$|x_{n+1}-x_n| \le \alpha^n$$ $$n= 1,2,....$$
-Prove that $\{x_n\}$ is a Cauchy sequence and thus converges.
-Give an example of a sequence $\{ y_n\}$ s.t. $y_n \to \infty $ but $$|y_{n+1}-y_n| \to 0$$ as $n \to \infty$
-
-So here's my take so far. In order to understand from the beginning, the definition of Cauchy is: $\{v_n\}_n$ is a Cauchy sequence if for all $\varepsilon>0$ there exists $N\in \Bbb N$ such that for all natural numbers $n,m\geq N$: $|v_n-v_m|<\varepsilon$.
-and I said if $n > m$ $$|x_n-x_m| \le |x_n-x_{n-1}|+|x_{n-1}-x_{n-2}|+\cdots+|x_{m+1}-x_m|$$ $$\le \alpha^{n-1}+\alpha^{n-2}+\cdots+\alpha^{m}$$
-$$= \alpha^m\,\frac{1-\alpha^{n-m}}{1-\alpha}$$
-but from here I'm not quite convinced how I could proceed further...
-Could I get some help?
-
-REPLY [3 votes]: An example for the second part can be:
-$y_n = \sum_{k=1}^n \frac1k$. Then, $|y_{n+1} - y_n|= \frac1{n+1}$, which tends to 0, but $y_n \rightarrow +\infty$ as $n\rightarrow \infty$<|endoftext|>
-TITLE: Is there a proof for L'Hôpital's Rule for limits approaching infinity?
-QUESTION [6 upvotes]: L'Hopital's Rule states that:
-For two differentiable functions $f$ and $g$, where $g'(x)\neq 0$, such that
-$$\lim_{x\to a} f(x)=0$$
-$$\lim_{x\to a} g(x)=0$$
-We can say that:
-$$\lim_{x\to a} {f(x)\over g(x)}= \lim_{x\to a} {f'(x)\over g'(x)}\\ $$
-NOTE: If you already know the proof and don't want to read all this, skip all the way down to $\blacksquare_{1.1}$
-PROOF 1.1:
-Let $f$ and $g$ be continuous functions on $[a,b]$ and differentiable on $(a,b)$.
-Also, assume $g'(x)\neq 0$ on $(a,b)$ and $g(b)\neq g(a).\\$
-Proposition 1.1.1: There exists some point $c$ within the open interval $(a,b)$ such that:
-${f'(c)\over g'(c)} = {{f(b) - f(a)}\over {g(b) - g(a)}}$
-Proof 1.1.1 $\quad \triangleright$
-Let $$\space h(x)= f(x)-f(a) - {{f(b) - f(a)}\over {g(b) - g(a)}}\cdot (g(x)-g (a))\\$$
-By simply substituting values we can clearly see that $h(a)=h(b)=0$.
-Now because $f(a)$, $f(b)$, $g(a)$ and $g(b)$ are constants, we can therefore say that much like $f$ and $g$, $h$ is also continuous on $[a,b]$ and differentiable on $(a,b).\\$
-If we differentiate $h$ w.r.t. $x$, we get the following:
-$$h'(x) = f'(x) - g'(x)\cdot {{f(b) - f(a)}\over {g(b) - g(a)}}\\$$
-Using Rolle's Theorem (which I purposely won't prove as the question is long enough as it is) we can say that there exists a $c$ in $(a,b)$ such that $h'(c)=0$
-Thus we can say that
-$0 = f'(c) - g'(c)\cdot {{f(b) - f(a)}\over {g(b) - g(a)}} $
-Hence showing us that
-${f'(c)\over g'(c)} = {{f(b) - f(a)}\over {g(b) - g(a)}}$
-$$\blacksquare_{1.1.1}$$
-$\triangleleft \\$
-Remember that $\lim_{x\to a} f(x)= \lim_{x\to a} g(x)=0$.
-Where $a$ is finite.
-We also said $g'(x)\neq 0$.
-Therefore we'll let
-$L:= \lim_{x\to a} {f'(x)\over g'(x)} \\$
-We're also going to define the functions $F$ and $G$:
-$F(x) = f(x)$ for $x\neq a$
-$F(x) = 0$ for $x = a$
-Similarly
-$G(x) = g(x)$ for $x\neq a$
-$G(x) = 0$ for $x = a$
-Because $F$ and $G$ are defined at $x=a$, they are continuous at $a$. (Unlike $f$ and $g$)
-This means that for $x>a$, the functions $F$ and $G$ are differentiable on the open interval $(a,x)$ and continuous on the closed interval $[a,x]. 
\\$
-Using what we showed in Proof 1.1.1, we can state the following equality to be true:
-$${F'(c)\over G'(c)} = {{F(x) - F(a)}\over {G(x) - G(a)}}$$
-Due to the fact that $F(a)=0$ and $G(a)=0$, we can thus say
-$${F'(c)\over G'(c)} = {{F(x)}\over {G(x)}}$$
-Now since $c$ is within the interval $(a,x)$, we can say $a < c < x$. For every $\epsilon > 0 $ there exists $\delta_1 > 0$ such that if $a < x < a + \delta_1$ then
-$$\left|\frac{f'(x)}{g'(x)}-L \right|< C\epsilon,$$
-with $C = [2(1+|L|)]^{-1}$.
-Fix $x_1$ with $a < x_1 < a + \delta_1.$ By the MVT there exists $c$ such that
-$$\frac{f(x)}{g(x)}h(x):=\frac{f(x)}{g(x)}\frac{1- \frac{f(x_1)}{f(x)}}{1-\frac{g(x_1)}{g(x)}}=\frac{f(x) - f(x_1)}{g(x)-g(x_1)}= \frac{f'(c)}{g'(c)}.$$
-Since $x < c < x_1 < a + \delta_1 $ we have
-$$C\epsilon > \left|\frac{f'(c)}{g'(c)}-L \right| = \left|\frac{f(x)}{g(x)}h(x)-L \right|.$$
-Hence, using the reverse triangle inequality
-$$C\epsilon > \left|\frac{f(x)}{g(x)}h(x)-Lh(x) + L h(x)-L \right|\geqslant \left|\frac{f(x)}{g(x)}-L\right||h(x)| - |L| |h(x)-1|, $$
-and
-$$ \left|\frac{f(x)}{g(x)}-L\right||h(x)| < C\epsilon + |L| |h(x)-1|.$$
-Note that $\lim_{x \to a+}h(x) = 1$. Hence, there exists $\delta_2 > 0$ such that for $a < x < a + \delta_2$ we have $|h(x) - 1| < C\epsilon$ and $|h(x)| > 1/2.$
-Whence, if $a < x < a + \min(\delta_1,\delta_2)$ then
-$$\left|\frac{f(x)}{g(x)}-L\right| < 2(1 + |L|)C\epsilon = \epsilon.$$
-Therefore,
-$$\lim_{x \to a+} \frac{f(x)}{g(x)} = \lim_{x \to a+} \frac{f'(x)}{g'(x)} .$$<|endoftext|>
-TITLE: Are similar circles really a thing?
-QUESTION [43 upvotes]: I'm a fifteen-year-old who is currently studying circle geometry (if that is the appropriate term) and our teacher stated that concentric circles are similar. I thought about this, and it doesn't make sense to me. The reason is because of proportionality. For example, similar triangles are similar because they have the same angles and they have proportional sides.
However, circles cannot be compared for angles, so that's out (as they all have the same 360 degree angle at the center) and the only factor is their size, which is directly influenced by their radius. If the radius is the only variable involved in a shape like this, how can a circle be NOT proportional to another circle? If a case of that existed, there would be meaning (at least from my current perspective) to the term "similar circle."
-Help and critique on my logic is requested, and an explanation as to the term "similar circle."
-
-REPLY [12 votes]: Yes indeed, all circles are similar to one another. You can always scale one of them to match the other. Actually, this is the definition of similarity. In the case of triangles, this definition yields the result that the sides are proportional. "The sides of one triangle are proportional to those of the other" is not the actual definition of similarity.
-You may have a look here<|endoftext|>
-TITLE: 2009 Benelux Math Olympiad (BxMO) number theory problem
-QUESTION [7 upvotes]: The following problem is taken from the first Benelux Mathematical Olympiad, which occurred in 2009.
-
-Let $n$ be a positive integer and let $k$ be an odd positive integer. Moreover, let $a$, $b$ and $c$ be integers (not necessarily positive) satisfying the equation $$a^n+kb=b^n+kc=c^n+ka.$$ Prove that $a=b=c$.
-
-I tried to analyze some congruences modulo $k$, $a$, $b$ and $c$, but it seems that these relations will not help sufficiently. Also, I did not find a solution for this problem. You can access all the other problems from other years at the BxMO site.
-
-REPLY [4 votes]: It is clear from the equations that if two among $a, b, c$ are equal, all three must be.
-So suppose they are all distinct. Then we have from the equations
-$$k = \frac{b^n-a^n}{b-c}= \frac{c^n-b^n}{c-a} = \frac{a^n-c^n}{a-b} \tag{1}$$
-Among the three $a, b, c$, we must have two of the same parity. WLOG let $a \equiv b \pmod 2$.
Then, for $k$ to be an odd integer, from $(1)$, we must have $c$ also of the same parity.
-Similarly, now among $a \equiv b \equiv c \pmod 2$, we must have two which are equivalent $\pmod 4$. Again from $(1)$, this would force the third to also be equivalent $\pmod 4$.
-Continuing in this fashion, we obtain $a \equiv b \equiv c \pmod {2^m}$ for some integer $2^m > \max(|a|, |b|, |c|)$, say, which is absurd.<|endoftext|>
-TITLE: perfect riffle shuffle problem
-QUESTION [6 upvotes]: A perfect riffle shuffle, also known as a Faro shuffle, is performed by cutting a deck of cards exactly in half and then perfectly interleaving the two halves. There are two different types of perfect shuffles, depending on whether the top card of the resulting deck comes from the top half or the bottom half of the original deck.
-An out-shuffle leaves the top card of the deck unchanged. After an in-shuffle, the original top card becomes the second card from the top. For example:
-OutShuffle(A♠2♠3♠4♠5♥6♥7♥8♥) = A♠5♥2♠6♥3♠7♥4♠8♥
-InShuffle(A♠2♠3♠4♠5♥6♥7♥8♥) = 5♥A♠6♥2♠7♥3♠8♥4♠
-Consider a deck of $2^n$ distinct cards, for some non-negative integer $n$. What is the effect of performing exactly $n$ perfect in-shuffles on this deck?
-What is the answer, and how can I prove it?
-
-REPLY [5 votes]: Now that you know the answer (it reverses the deck), here's how you can prove it. The shuffle can be written as a cyclic permutation. If I write
-$$( 1 \ 2 \ 4 \ 8 \ 7 \ 5 )(3 \ 6),$$
-this is a way of writing the permutation in which the first card goes to the second place, the second goes to the fourth place, the fourth goes to the eighth place, etc. This is just a different way of writing the result of one in-shuffle.
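Before iterating it, this cycle decomposition is easy to check with a short script (a sketch; the helper name `in_shuffle` is mine, not something from the original post):

```python
def in_shuffle(deck):
    """Perfect in-shuffle: cut in half, interleave, bottom half's first card on top."""
    half = len(deck) // 2
    top, bottom = deck[:half], deck[half:]
    out = []
    for b, t in zip(bottom, top):
        out += [b, t]
    return out

deck = list(range(1, 9))      # card i starts in position i
once = in_shuffle(deck)
print(once)                   # [5, 1, 6, 2, 7, 3, 8, 4]
# card 1 lands in position 2, card 2 in position 4, card 4 in position 8, ...
# exactly the cycles (1 2 4 8 7 5)(3 6) written above

thrice = in_shuffle(in_shuffle(once))
print(thrice)                 # [8, 7, 6, 5, 4, 3, 2, 1]
```

Running it, three in-shuffles of the $8 = 2^3$ card deck do reverse it, matching the claim.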
Now iterating this permutation three times looks like this:
-$$( 1 \ 2 \ 4 \ 8 \ 7 \ 5 )(3 \ 6)( 1 \ 2 \ 4 \ 8 \ 7 \ 5 )(3 \ 6)( 1 \ 2 \ 4 \ 8 \ 7 \ 5 )(3 \ 6),$$
-which reduces to the product of transpositions
-$$(1 \ 8)(2 \ 7)(3 \ 6)(4 \ 5),$$
-which is exactly the permutation that reverses the order.
-Maybe you can try it for 16 and observe if there is a nice pattern which works for all $2^n$.<|endoftext|>
-TITLE: Quadratic equation with one root in $[0,1]$ and other root in $[1,\infty]$
-QUESTION [6 upvotes]: Find the values of $a$ for which $x^2-ax+2=0$ has one root in $[0,1]$ and the other root in $[1,\infty]$.
-The two roots are $$\frac{a\pm\sqrt{a^2-8}}{2}$$
-The smaller root should be less than $1$.
-So $$a-\sqrt{a^2-8}\le 2$$
-$$a-2\le\sqrt{a^2-8}$$
-$$a^2+4-4a\le a^2-8$$
-$$a\ge 3$$
-How will I find the upper bound for $a$? And what is the general approach to solve such problems where the roots are constrained between two values?
-
-REPLY [3 votes]: First, the two roots need to exist, so $$a^2>8.$$
-Then the two conditions are
-$$a-\sqrt{a^2-8}\le2,\\a+\sqrt{a^2-8}\ge2,$$ or
-$$a-2,2-a\le\sqrt{a^2-8}.$$
-This is equivalent to
-$$(a-2)^2\le a^2-8,\\12\le 4a.$$
-This condition is stronger than the first one.<|endoftext|>
-TITLE: How to prove this inequality
-QUESTION [5 upvotes]: I unfortunately can't solve the following problem. Prove that for all integer values $n,p,q>1$ (with $p>q$),
-$$\dfrac{p}{q}(n+1)^{\frac{p}{q}-1}\ge (n+1)\cdot\left(\dfrac{1^p+2^p+\cdots+(n+1)^p}{1^q+2^q+\cdots+(n+1)^q}\right)-n\cdot\left(\dfrac{1^p+2^p+\cdots+n^p}{1^q+2^q+\cdots+n^q}\right)$$also I've tried to simplify that expression and I've found that it's equal to this, but I can't move on after that.
-
-REPLY [2 votes]: You cannot prove it because it is wrong! Take $n=2, p=3, q=2$.
Then obviously $n,p,q>1\;$ and $p>q.$
-Now the LHS is
-$\frac{3}{2}\sqrt{3}\approx 2.598$ but the RHS is
-$$3\frac{1^3+2^3+3^3}{1^2+2^2+3^2} - 2 \frac{1^3+2^3}{1^2+2^2}
-=3\frac{36}{14}-2\frac{9}{5}\approx 4.114$$<|endoftext|>
-TITLE: Problem of determinant when $A^{-1}+B^{-1}=(A+B)^{-1}$
-QUESTION [6 upvotes]: I have two $4\times 4$ real matrices $A$ and $B$, and it is known that $A^{-1}+B^{-1}=(A+B)^{-1}$ ($A$, $B$ and $A+B$ are invertible). How can I prove that $\det (A)=\det (B)$?
-
-REPLY [10 votes]: If we multiply both sides of $A^{-1}+B^{-1}=(A+B)^{-1}$ by $A+B$ on the left, then we have
-$$AB^{-1} + BA^{-1}= -I$$
-Now put $AB^{-1}=C$; then $BA^{-1}=C^{-1}$, so
-$$C^{-1} =-(C+I)$$ And multiplying both sides by $C$,
- $$C+I=-C^2$$
-Hence $$C^{-1}=C^2$$ $$(\det(C))^3=1$$ $$\det(C)=1$$ Thus $\det(A)=\det(B)$.<|endoftext|>
-TITLE: What does this set theory term mean: $x\in \bigcup_{k=1}^{\infty}\bigcap_{n=k}^{\infty}A_{n}.$?
-QUESTION [6 upvotes]: I have a question for an assignment that involves this term.
-$$x\in \bigcup_{k=1}^{\infty}\bigcap_{n=k}^{\infty}A_{n}.$$
-I have a faint idea of what it means, but perhaps someone can tell me if I am wrong.
-First, I'm considering the intersection sign by itself, assuming that $n=k=1$ at first, meaning a family of sets $A_1,A_2,\dots$ all joined by intersection, such that the intersection is nonempty (i.e. not the null set) only if there is some common element in all the sets.
-Then I do the same thing but starting with $n=k=2$, so that you have an identical family of sets with the exception of it not including $A_1$; again $x$ must be in all the sets.
-Then repeat, so you have an infinite family of families of sets, each one set smaller than the last, and take the union of all of them.
-I'm having trouble, though, seeing what the point of the union is. It makes sense if the union and intersection signs were switched, because then they both impose a condition, but having them this way, the union doesn't seem to do anything, or rule anything out?
-if there is something obvious that I'm missing, I'd greatly appreciate your help.
-
-REPLY [7 votes]: This set is called $\liminf A_n$.
-To understand what it means, you have to use the definitions of the union and intersection.
-Let $$x\in\bigcup_{k=1}^{\infty}\bigcap_{n=k}^{\infty}A_n$$
-By definition of the union, there exists an integer $k$ such that
-$$x\in\bigcap_{n=k}^{\infty}A_n$$
-Then by definition of the intersection:
-$$\forall n\geq k,\, x\in A_n$$
-So $$x\in\bigcup_{k=1}^{\infty}\bigcap_{n=k}^{\infty}A_n\iff\exists k\in\mathbb{N},\, \forall n\geq k,\, x\in A_n$$
-This means exactly that $x$ belongs to all the $A_n$ except finitely many.
-If you permute the intersection and the union, you get $\limsup A_n$, which is the set of all $x$ that belong to infinitely many of the $A_n$.
-You can check that $\liminf A_n\subset\limsup A_n$.<|endoftext|>
-TITLE: If $f$ is continuous with $f(x) = f(2x),f(1) = 3$, then what is $ \int_{-1}^{1}f(f(x))\,dx$?
-QUESTION [5 upvotes]: If $f(x)$ is a continuous function such that $f(x) = f(2x)$ and $f(1) = 3\;,$ Then $\displaystyle \int_{-1}^{1}f(f(x))\,dx$
-
-$\bf{My\; Try::}$ Here $-\infty$<|endoftext|>
-TITLE: Spectral radius of "almost" regular graph ?!
-QUESTION [5 upvotes]: The answer to this question could be trivial.
-The Graph: Let $G$ be a graph formed of two $d$-regular connected components. That is, $G= H_1\cup H_2$, where $H_1$ and $H_2$ are $d$-regular and disjoint. Let $x\in H_1$ and $y\in H_2$. Let $G'= G+xy$; then $G'$ is a connected graph. My question is:
-Question: What is the largest positive eigenvalue of $G'$? Or $\lambda_1 (G') = ??$
-Ideas:
--It is obvious that $\lambda_1 (G) = d$ with multiplicity 2 (since it is formed of 2 $d$-regular connected components). But I have no idea how to estimate $\lambda_1$ when a single edge is added between $H_1$ and $H_2$.
--The interlacing theorem could help in computing a bound for the maximal eigenvalue.
-Any idea will be useful!
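A small numerical experiment illustrates the setup (a Python sketch with numpy; taking $d=2$ with two cycles joined by one edge is just the simplest concrete example, my choice, not from the question):

```python
import numpy as np

def cycle_adjacency(m):
    """Adjacency matrix of the m-cycle, the simplest 2-regular graph."""
    A = np.zeros((m, m))
    for i in range(m):
        A[i, (i + 1) % m] = A[(i + 1) % m, i] = 1.0
    return A

m, d = 8, 2
A = np.zeros((2 * m, 2 * m))
A[:m, :m] = cycle_adjacency(m)   # H_1
A[m:, m:] = cycle_adjacency(m)   # H_2
A[0, m] = A[m, 0] = 1.0          # the single added edge xy between the components

# largest adjacency eigenvalue of G' (eigvalsh returns them in ascending order)
lam = np.linalg.eigvalsh(A).max()
print(lam)
```

The computed $\lambda_1(G')$ is strictly larger than $d=2$ but only slightly so, which matches the bounds derived in the answer below.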
-
-REPLY [2 votes]: In your case, the interlacing result you mentioned is that $\lambda_1(G') \geq \lambda_2(G)=d$. For the other direction, Weyl's inequality (the triangle inequality for the spectral radius) tells you
-$$\lambda_1(G') \leq \lambda_1(G) + \lambda_1(G'-G)=d+1,$$
-but I think we can say something stronger.
-Suppose we have
-$$A'v = \lambda v $$
-where $A'$ is the adjacency matrix of $G'$. Assume WLOG that $|v(x)| \geq |v(y)|$, and let $z$ be a vertex having maximal $|v(z)|$ among all vertices other than $x$ and $y$. Using the triangle inequality and the eigenvalue definition, we have
-$$\lambda |v(x)| = |(A' v)(x)| \leq \sum_{w \sim x} |v(w)| \leq |v(x)| + d |v(z)|$$
-(the $d$ neighbors of $x$ in $H_1$ each contribute at most $|v(z)|$, and $y$ contributes at most $|v(x)|$). Similarly,
-$$\lambda |v(z)| = |(A' v)(z)| \leq \sum_{w \sim z} |v(w)| \leq |v(x)| + (d-1) |v(z)|$$
-Let $v'=\left(\begin{array}{c} |v(x)| \\ |v(z)| \end{array}\right)$, and let $M=\left(\begin{array}{cc} 1 & d \\ 1 & d-1\end{array}\right)$. The above inequalities tell us
-$$0 \leq \lambda v' \leq M v'$$
-in each coordinate. Since $M$ has nonnegative entries, iterating this inequality gives $\lambda^n v' \leq M^n v'$ in each coordinate, so $\lambda$ is at most the largest eigenvalue of $M$; as $M$ has trace $d$ and determinant $-1$, that eigenvalue is $\frac{1}{2} (d+\sqrt{d^2+4})$.
-So the spectral radius of $G'$ is at most $\frac{1}{2} (d+\sqrt{d^2+4})$. For large $d$ this is roughly $d+\frac{1}{d}$.
-I have a feeling this may be well-known/classical, but don't have a reference.<|endoftext|>
-TITLE: Is limit of function -1/0 ok?
-QUESTION [5 upvotes]: A quick question: I'm determining the limit of this function:
-$$\lim_{x→1}\frac{x^2 - 2x}{x^2 -2x +1}$$
-When I divide numerator and denominator by $x^2$ and fill in $1$, I get $-1/0$. This is an illegal form, right? Or does it indicate it is going to $∞$ or $-∞$?
-
-REPLY [2 votes]: $$\lim\limits_{x\to 1}\frac{x^2-2x}{x^2-2x+1}$$
-
-$$=\lim\limits_{x\to 1}(x^2-2x)\cdot\lim\limits_{x\to 1}\frac{1}{x^2-2x+1}$$
-$$=-1\cdot\lim\limits_{x\to 1}\frac{1}{\underbrace{x^2-2x+1}_{\to 0+}}=\color{red}{-\infty}$$<|endoftext|>
-TITLE: Integral solutions $(a,b,c)$ for $a^\pi + b^\pi = c^\pi$
-QUESTION [14 upvotes]: We know that $a^n + b^n = c^n$ does not have a solution if $n > 2$ and $a,b,c,n \in \mathbb{N}$, but what if $n \in \mathbb{R}$? Do we have any statement for that?
-I was thinking about this but could not find any immediate counterexamples.
-Specifically, can $a^\pi + b^\pi = c^\pi$ for $a,b,c \in \mathbb{N}$?
-I found this. It has an existential proof that $\exists \ n \in \mathbb{R}$ for any $(a,b,c)$.
-The question remains open for $n = \pi$.
-This question is just for fun, to see if we can come up with some simple proof :)
-
-REPLY [6 votes]: The Wikipedia article on Fermat's last theorem has a full section about it, with plenty of references. Here are a few results (see the article for precise references):
-
-The equation $a^{1/m} + b^{1/m} = c^{1/m}$ has solutions $a = rs^m$, $b = rt^m$ and $c = r(s+t)^m$ with positive integers $r,s,t>0$ and $s,t$ coprime.
-When $n > 2$, the equation $a^{n/m} + b^{n/m} = c^{n/m}$ has integer solutions iff $6$ divides $m$.
-The equation $1/a + 1/b = 1/c$ has solutions $a = mn + m^2$, $b = mn + n^2$, $c = mn$ with $m,n$ positive and coprime integers.
-For $n = -2$, there are again an infinite number of solutions.
-For $n < -2$ an integer, there can be no solution, because that would imply that there are solutions for $|n|$.
-
-I don't know if anything is known for irrational exponents.
But this is not necessarily so. Take $n = 7⋅379$, where $3^7 + 4^7 = 7^2⋅379$. Then, $3^7+4^7$ divides $3^n + 4^n$, and since $n$ divides $3^7+4^7$, we must have $n|3^n+4^n$.
-
-REPLY [5 votes]: First, we note that $3 \nmid 3^n + 4^n$ and $2 \nmid 3^n + 4^n$ (the sum of an odd and an even number is odd). So $3 \nmid n$ and $2 \nmid n$.
-Now, rework $3^n + 4^n \equiv 0 \pmod n$. Since $n$ is odd, we find
-\begin{equation}
-3^n \equiv (-4)^n \pmod n.
-\end{equation}
-Since $\gcd(3, n) = 1$, we can take the inverse of $3 \bmod n$, i. e. there exists $3^{-1}$ so that $3 \cdot 3^{-1} \equiv 1 \pmod n$. Multiplying both sides by $(3^{-1})^n$ gives
-\begin{equation}
-(-4 \cdot 3^{-1})^n \equiv 1 \pmod n.
-\end{equation}
-Now, if $O_m(k)$ denotes the order of $k \bmod m$, we know from algebra that
-\begin{equation}
-O_n(-4 \cdot 3^{-1}) \mid n.
-\end{equation}
-Now, to show that $7 \mid n$ for every solution, let's assume $7 \nmid n$ for some solution $n \in \mathbb{N}$. Let's assume $n > 1$ is the smallest solution so that $7 \nmid n$ and $n \mid 3^n + 4^n$.
-If we can show that either $n = 1$ or $7 \mid n$ or there exists an $m < n$ satisfying these properties, we're done.
-Let's split into 2 cases:
-(Case 1): $O_n(-4 \cdot 3^{-1}) = 1$. Then we're done, since that implies (using uniqueness of inverses):
-\begin{equation}
-(-4 \cdot 3^{-1}) \equiv 1 \pmod n \implies -4 \cdot 3^{-1} \equiv 3 \cdot 3^{-1} \pmod n \\ \implies -4 \equiv 3 \pmod n \implies n \mid 7 \implies n = 1 \vee n = 7.
-\end{equation}
-(Case 2): $O_n(-4 \cdot 3^{-1}) > 1$. Let $d = O_n(-4 \cdot 3^{-1})$.
-Then $(-4 \cdot 3^{-1})^d \equiv 1 \pmod n$, and since $d \mid n$ we have $(-4 \cdot 3^{-1})^d \equiv 1 \pmod d$. $d$ is also odd, and
-\begin{equation}
--4^d (3^{-1})^d \equiv 1 \pmod d \implies -4^d \equiv 3^d \pmod d
-\end{equation}
-If $O_d(-4 \cdot 3^{-1}) = 1$, then by the reasoning above $d \mid 7$. Since $d > 1$, we have $d = 7$. Since $d \mid n$, $7 \mid n$.
-If $O_d(-4 \cdot 3^{-1}) > 1$, let $m = O_d(-4 \cdot 3^{-1})$.
Then $m > 1$, $7 \nmid m$ and $m \mid 3^m + 4^m$. $m < n$, since $m \mid \varphi(d)$ and $\varphi(d) < d \leq n$. So we found an $m < n$ satisfying the above conditions, implying $n$ is not the smallest.<|endoftext|>
-TITLE: In a random selection of three pairs among $6$ people what is the probability that each girl will be matched with her boyfriend?
-QUESTION [5 upvotes]: There are $6$ people, $3$ boys and $3$ girls. Each boy is in a relationship with one girl. Three pairs are randomly drawn. What is the probability that these three pairs will be the actual couples?
-
-My reasoning was
-$$
-P = \frac{3}{\binom{6}{2}} \times \frac{2}{\binom{4}{2}} \times \frac{1}{\binom{2}{2}}$$
-But this does not give me the answer a professor has given me. I'd appreciate some hints or new ways of approaching the problem.
-
-REPLY [4 votes]: Line up the girls.
-There are $3! = 6$ equally likely ways of pairing the boys with the girls, only one of which is correct.
-Thus $Pr = \dfrac16$
-(This reading assumes each drawn pair consists of one boy and one girl; if arbitrary pairs of the six people may be drawn, there are $15$ perfect matchings of the six people and the probability is $\dfrac1{15}$.)<|endoftext|>
-TITLE: Equivalent definitions of a root system.
-QUESTION [6 upvotes]: For studying root systems, many authors start from a vector space $V$ over $\mathbb{R}$ with a positive definite scalar product $(\cdot,\cdot)$, in which a reflection $\sigma_\alpha$ is a linear map that fixes the hyperplane $H_\alpha$ and sends $\alpha$ to its opposite. In formulas,
-\begin{gather}
-\sigma_\alpha(\beta)=\beta- <\beta,\alpha>\alpha
-\end{gather}
-where $<\beta,\alpha>\,=\,{2(\alpha,\beta)\over (\alpha,\alpha)}$. Then a root system is defined as a subset $R$ of $V$ such that
-
-$\langle R \rangle =V$, $R$ finite and $0 \notin R$
-$\mathbb{R}\{ \alpha\} \cap R=\{\pm \alpha \}$ if $\alpha \in R$
-for every $\alpha, \beta \in R$, $R$ is invariant under $\sigma_\alpha$ and $<\beta,\alpha>$ is an integer.
-
-This is quite a strong structure, but all the properties involved arise naturally in the form of the weights $\mu \in H^*$, where $H$ is the Cartan subalgebra of a complex Lie algebra $L$ (we are considering the adjoint representation).
Then we can consider on $H^*$ the dual of the Killing form, which is again symmetric and positive definite. In this environment, we can restrict to $V_\mathbb{Q}$, the $\mathbb{Q}$-span of the nonzero weights in $H^*$ (indeed one can prove that the dual of the Killing form takes rational values there; see the chapter "Integrality Properties" of "Introduction to Lie Algebras and Representation Theory", Humphreys), then consider $V=V_\mathbb{Q} \otimes_\mathbb{Q} \mathbb{R}$ and check that the nonzero weights form a root system in $V$. For proving the characterization of semisimple Lie algebras, this can be a starting point.
-Nevertheless, some authors need to weaken slightly the definition of a root system, starting from a vector space $V$ over $\mathbb{R}$ (without a scalar product) and defining a root system $R$ as a subset of $V$ such that
-
-$R$ is finite, generates $V$ and $0 \notin R$
-$\alpha \in R \implies {\alpha \over 2} \notin R$
-for each $\alpha, \beta \in R$ there is $\alpha^\vee \in V^*$ with $\alpha^\vee (\alpha)=2$ and $\alpha^\vee(\beta) \in \mathbb{Z}$ and $s_{\alpha^\vee,\alpha}(R) \subseteq R$
-
-where for $\lambda \in V^*$ and $v,w \in V$, $s_{\lambda,v}(w)=w-\lambda(w)v$. Then, considering $G$ the finite subgroup of $GL(V)$ that preserves $R$ and $(\cdot,\cdot)$ a generic positive definite scalar product, we define
-\begin{gather}
-(\alpha,\beta)'=\sum_{g\in G}(g\alpha,g\beta)
-\end{gather}
-After realizing that $(\cdot,\cdot)'$ is a positive definite scalar product with respect to which $G$ acts by isometries, we can prove that
-
-$\alpha^\vee$ is uniquely determined by $\alpha$
-$\alpha^\vee(\lambda)=2\frac{(\alpha,\lambda)'}{(\alpha,\alpha)'}$
-
-Here are the identifications: $\alpha^\vee$ is the element $h_\alpha$ such that $x_\alpha,y_\alpha,[x_\alpha,y_\alpha]=h_\alpha$ are the usual generators of a copy of $sl_2$, and $\alpha^\vee(\lambda)=\lambda(h_\alpha)$ after $H^{**}=H$.
-My first question: what is the relation between $(\cdot,\cdot)'$ and the dual of the Killing form?
My guess is that they differ by a scalar, but I can't prove it directly.
-My second question: the second approach makes it easier to prove that the nonzero weights of a Lie algebra via the adjoint representation form a root system (indeed we can avoid the use of the "orthogonality relations"), but we need more work to return to the stronger situation of the first definition. So what is the real advantage of the second way?
-
-REPLY [2 votes]: To whom it may concern: some days ago I found a solution to my first question; I am writing only now since I was busy before. Suppose that there exists $c \in \mathbb{R}$ such that $(\alpha,\alpha)=c(\alpha,\alpha)'$. Then necessarily we have that
-\begin{gather}
-2\frac{(\alpha,\beta)}{(\alpha,\alpha)}=<\beta,\alpha>=2\frac{(\alpha,\beta)'}{(\alpha,\alpha)'}
-\end{gather}
-But this implies that $(\alpha,\beta)=c(\alpha,\beta)'$. In the same way we can prove that $(\beta,\beta)=c(\beta,\beta)'$, and again go further with a root $\gamma$ and prove that $(\beta,\gamma)=c(\beta,\gamma)'$. Proceeding in this way we can extend the proportionality along one connected component of the Dynkin diagram. So we deduce that for an irreducible root system, all the bilinear forms that we can use in the first definition differ by a scalar. This is mainly because the "information" of a root system is only a matter of ratios between vectors.
-I gladly noticed that we can use this argument to prove that on a simple Lie algebra $L$ there is at most one invariant non-degenerate bilinear form, up to constants. First notice that we can replace the Killing form with a generic non-degenerate symmetric form, invariant under the Lie bracket.
Indeed, to arrive at the root space decomposition, all the proofs involve only this property and don't rely on the specific definition of the Killing form (or at least, in the book of Humphreys, "Introduction to Lie Algebras and Representation Theory", every proof works fine for every such form). Then, dualizing any two such forms on $L$, we get two scalar products on the ambient space of the associated root system. By what I wrote before, these two scalar products must be proportional; then, going back to the two forms on $L$ that we started with, they must be proportional as well.
-This is a "bone structure", or just an idea. I don't go into further details because I am very tired and tomorrow I have my Lie Algebras exam. I hope that this may be useful to someone in the future.<|endoftext|>
-TITLE: If a measure only assumes values 0 or 1, is it a Dirac delta?
-QUESTION [14 upvotes]: Let $\mu$ be a probability measure on a metric space $M$ (with the Borel $\sigma$-algebra). If $\mu(A)\in \{0,1\}$ for all measurable sets $A\subset M$, then:
-
-Is it true that $\mu$ is a Dirac measure?
-
-I think the answer should be negative, but I do not know of a counterexample. There is an answer here for the general case where we don't know anything about the topology (metrizable in our case). There you can see also that the result holds for all Polish spaces.
-
-REPLY [6 votes]: As delfonics says, the answer is: yes, if and only if there does not exist a measurable cardinal, and an outline of the proof was given by hot_queen in an answer to a question of mine from 2014 (Consistency strength of 0-1 valued Borel measures), which I somehow completely forgot about. In the meantime, I had written out the argument below, which essentially fills in the details in hot_queen's argument.
-One direction is easy. Suppose $\kappa$ is a measurable cardinal, so that there exists a $\kappa$-additive (in particular, countably additive) $\mu : 2^{\kappa} \to \{0,1\}$ which is not a Dirac mass.
Let $M$ be $\kappa$ equipped with the discrete metric (or for that matter, any other metric that $\kappa$ might happen to admit). Then $\mu$ (or its restriction to the Borel sets, if we are using a non-discrete metric) is the desired counterexample.
-The other direction I found in the paper [1]. The argument, for the current case, goes as follows.
-Working in ZFC, suppose there is a metric space $M$ and a 0-1 valued Borel measure $\mu$ which is not a point mass. Using the axiom of choice, $M$ is in one-to-one correspondence with some cardinal $\kappa$, so we can write $M = \{x_\alpha : \alpha \in \kappa\}$. (In other words, we have well-ordered $M$.) Note each $\alpha \in \kappa$ is itself an ordinal.
-For every $\alpha$, we have $\mu(\{x_\alpha\})=0$ (otherwise $\mu$ would be the point mass at $x_\alpha$), and $\{x_\alpha\}$ is a decreasing intersection of the open balls $B(x_\alpha,1/n)$. So by continuity from above, there is an open ball $B_\alpha$ centered at $x_\alpha$ with $\mu(B_\alpha) = 0$. Set $H_\alpha = B_\alpha \cap \left( \bigcup_{\beta \in \alpha} B_\beta\right)^c$. (Some of the $H_\alpha$ might be empty but that is okay.) Note that $H_\alpha$ is the intersection of an open (hence $F_\sigma$) set and a closed set, so $H_\alpha$ is Borel (indeed $F_\sigma$). And since $H_\alpha \subset B_\alpha$ we have $\mu(H_\alpha) = 0$. By construction, the $H_\alpha$ are pairwise disjoint, and $\bigcup_{\alpha \in \kappa} H_\alpha = M$. Now they cite a result of D. Montgomery [2] which asserts that any arbitrary union of the $H_\alpha$ is in fact an $F_\sigma$; in particular it is Borel.
-Now define a measure $\nu : 2^{\kappa} \to \{0,1\}$ by $\nu(Y) = \mu\left(\bigcup_{\alpha \in Y} H_\alpha\right)$. Since $\mu$ is countably additive and the $H_\alpha$ are disjoint, we have that $\nu$ is countably additive. Moreover, for any $\alpha$ we have $\nu(\{\alpha\}) = \mu(H_\alpha) = 0$, and $\nu(\kappa) = \mu(M) = 1$. Thus $\kappa$ admits a nontrivial countably additive 0-1 valued measure on all its subsets.
-A measurable cardinal $\lambda$ has to have a measure which is not only countably additive but actually $\lambda$-additive, so to finish, we use an argument due to Ulam, mentioned on the Wikipedia page above. Since we have shown there is a cardinal (namely $\kappa$) with a nontrivial countably additive 0-1 valued measure on all its subsets, there is a minimal cardinal with this property; call it $\kappa_1$ and let $\nu_1 : 2^{\kappa_1} \to \{0,1\}$ be the corresponding countably additive measure.
-Suppose $\nu_1$ were not $\kappa_1$-additive; that means there is a collection $\mathcal{C} \subset 2^{\kappa_1}$, having $\kappa_0 := |\mathcal{C}| < \kappa_1$, such that $\mathcal{C}$ consists of pairwise disjoint sets of $\nu_1$-measure zero, and yet $\nu_1\left(\bigcup \mathcal{C}\right) = 1$. Fix a bijection $\phi : \kappa_0 \to \mathcal{C}$ and define a measure $\nu_0 : 2^{\kappa_0} \to \{0,1\}$ by $\nu_0(B) = \nu_1\left(\bigcup_{\beta \in B} \phi(\beta)\right)$. Then $\nu_0$ is countably additive; for any $\beta \in \kappa_0$ we have $\nu_0(\{\beta\}) = \nu_1(\phi(\beta)) = 0$ since every set in $\mathcal{C}$ had measure zero; and $\nu_0(\kappa_0) = \nu_1(\bigcup \mathcal{C}) = 1$. So $\nu_0$ is nontrivial, and this contradicts the minimality of $\kappa_1$.
-We have thus shown that $\kappa_1$ is a measurable cardinal.
-[1] Marczewski, E.; Sikorski, R.
-Measures in non-separable metric spaces.
-Colloquium Math. 1, (1948). 133–139. MR 25548
-[2] Montgomery, D. Non-separable metric spaces. Fundamenta Mathematicae 25, (1935). 527–533.
-Note: I wasn't able to find a copy of Montgomery's paper online, and it doesn't seem to be indexed in MathSciNet. If someone has a copy of this paper, or knows where to find another proof of the result, I would be interested to hear it. I found a number of other references mentioning this result, so it seems to be fairly well established.<|endoftext|>
-TITLE: What is the most general category in which short exact sequences exist?
-QUESTION [10 upvotes]: Let $A,B,C$ be objects, $0$ the final object, and $f:A\to B$ and $g:B\to C$ morphisms in some category.
-Consider the sequence:
-$$
-0 \to A \to B \to C \to 0\;.
-$$
-I would like to say something analogous to:
-
-$f$ is injective (or maybe some kind of kernel is trivial)
-$g$ is surjective (or maybe some kind of cokernel is trivial)
-$fg$ factors through $0$ (or something like $im (f) = ker (g)$).
-
-Of course in the category of modules and in the category of groups all of this makes sense. What about, for example, in Sets? Or in metric spaces?
-In general, which properties must my category have in order to have (a generalization of) exact sequences?
-(For example, I guess we need a terminal object...right?)
-
-REPLY [6 votes]: Short exact sequences make sense in any category enriched in pointed sets, i.e. in which there is a null arrow $0\colon A\to B$ between any two objects, preserved by composition on both sides ($f\circ 0 = 0$ and $0\circ f=0$ for any composable $f$ and null arrow). Examples are the category of pointed sets, pointed topological spaces, and categories of algebraic structures with a unique constant.
-A kernel of $f\colon A\to B$ is then a universal arrow $k\colon K \to A$ such that $f\circ k = 0$. A short exact sequence is then a sequence $A\xrightarrow{f} B\xrightarrow{g} C$ such that $f$ is a kernel for $g$ and $g$ a cokernel for $f$.
-Marco Grandis has generalised this to a setting where there may be more than one null arrow between two objects. This allows, for instance, to consider short exact sequences in the category of topological spaces equipped with a distinguished subspace, which are used in algebraic topology. See On the categorical foundations of homological and homotopical algebra.
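In the concrete case of finite pointed sets, the exactness condition $\operatorname{im} f = \ker g$ can be spelled out directly (a small illustrative sketch; the function names and the example sets are mine):

```python
def image(f, A):
    """Image of a map f defined on the finite set A."""
    return {f(a) for a in A}

def kernel(g, B, base_C):
    """Kernel of a pointed map g: B -> C, i.e. the preimage of C's basepoint."""
    return {b for b in B if g(b) == base_C}

def exact_at_B(f, A, g, B, base_C):
    """A --f--> B --g--> C is exact at B when im(f) = ker(g)."""
    return image(f, A) == kernel(g, B, base_C)

# Example: A = {*, a} includes into B = {*, a, c}; g collapses that image
# to the basepoint '*' of C = {*, c} and keeps c.
A, B, C = {"*", "a"}, {"*", "a", "c"}, {"*", "c"}
f = lambda x: x
g = lambda b: "*" if b in {"*", "a"} else b
print(exact_at_B(f, A, g, B, "*"))   # True
```

Replacing $g$ with the identity would give kernel $\{*\} \neq \operatorname{im} f$, so the sequence would fail to be exact at $B$.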
-Note: you can define another kind of short exact sequence in a general category: a diagram $R\rightrightarrows A\to Q$, where the left-hand side is the kernel pair of the right-hand side and the right-hand side is the coequaliser of the left-hand side.<|endoftext|>
-TITLE: Calculating $\int_0^{\pi/2} (x \sin(x))^n dx$
-QUESTION [6 upvotes]: Define
-$$I_n = \int_0^{\pi/2} (x \sin(x))^n dx$$
-for $n \ge 0$.
-I calculated the values for $n = 0, 1$ and $2$:
-$$I_0 = \frac{\pi}{2}, I_1 = 1, I_2 = \frac{{\pi}^3 + 6 \pi}{48} .$$
-In general, what's the value of $I_n$?
-P.S.
-By WolframAlpha(
-n = 3 (http://www.wolframalpha.com/input/?i=integrate+%5B(x+sin(x))%5E3,+%7Bx,+0,+PI%2F2%7D%5D),
-n = 4 (http://www.wolframalpha.com/input/?i=integrate+%5B(x+sin(x))%5E4,+%7Bx,+0,+PI%2F2%7D%5D),
-n = 5 (http://www.wolframalpha.com/input/?i=integrate+%5B(x+sin(x))%5E5,+%7Bx,+0,+PI%2F2%7D%5D)
-),
-$$I_3 = \frac{7{\pi}^2}{12} - \frac{122}{27},$$
-$$ I_4 = \frac{6{\pi}^5 + 170 {\pi}^3 -975 \pi}{2560},$$
-$$ I_5 = \frac{149{\pi}^4}{720} - \frac{31841{\pi}^2}{3375} + \frac{56992552}{759375}.$$
-
-REPLY [2 votes]: Possible hint, or end of the story.
-$$\int_0^{\pi/2} x^n\sin^n(x)\ \text{d}x = \int_0^{\pi/2} x^n\ \left(\frac{e^{ix} - e^{-ix}}{2i}\right)^n\ \text{d}x$$
-But I warn you, it's not an easy road. You'll have to integrate by parts $n$ times. You'll end up almost surely with some hellish Hypergeometric Function.
-Synthesis: no clue if there really is a general closed form.
-P.s. Binomial expansion may be useful.<|endoftext|>
-TITLE: Infinite intersection of compact, path connected, nested sets is path connected?
-QUESTION [5 upvotes]: I showed that given $A_1\supseteq A_2\supseteq\cdots$ compact, connected sets, $\bigcap_{n=1}^{\infty} A_n$ is connected, but is the statement true if we replace connected with path connected? Is there a counterexample of compact, path connected sets whose arbitrary intersection is not path connected?
-Recall that the closed topologists' sine curve is the set
-$$C := \left\{\left(x, \sin \tfrac{1}{x}\right) : x \in (0, 1]\right\} \cup (\{0\} \times [-1, 1])$$
-Now, let $R_n$, $n = 1, 2, 3, \ldots$, denote the filled rectangle $\left[0, \tfrac{1}{n}\right] \times [-1, 1]$, and set $$A_n := C \cup R_n ,\qquad n = 1, 2, 3, \ldots .$$ Since the $R_n$ are nested, so are the $A_n$. By construction, $A_n$ is the union of the compact, path-connected sets $R_n$ and $\left\{\left(x, \sin \tfrac{1}{x}\right) : \tfrac{1}{n} \leq x \leq 1\right\}$ that share a common point, so each $A_n$ is compact and path-connected. The infinite intersection $\bigcap_{n = 1}^{\infty} R_n$ of the $R_n$ is just $\{0\} \times [-1, 1] \subset C$, so the intersection $\bigcap_{n = 1}^{\infty} A_n$ of the $A_n$ is just $C$, which is not path-connected.<|endoftext|>
-TITLE: Is the Hölder random constant of the Brownian Motion Integrable?
-QUESTION [5 upvotes]: Let $\{B_t:t\in [0,1]\}$ be the standard one-dimensional Brownian motion on the closed unit interval. Fix $\gamma\in (0,1/2)$. It is well known that there is a positive random variable $K\equiv K(\gamma)$ such that for any pair $s,t\in [0,1]$
-we have
-$$
-|B_t-B_s|\leq K|t-s|^{\gamma} \qquad \text{a.s.}
-$$
-I would like to know if $K$ can be chosen so that $\mathbb{E}[K]<+\infty$.
-
-REPLY [2 votes]: In the text Brownian Motion by R. Schilling and L. Partzsch, one can find a proof of the fact that, for fixed $\gamma\in(0,1/2)$, the random variable
-$$
-K:=\sup\{|B_t-B_s|/|t-s|^\gamma: 0\le s,t\le 1,\ s\ne t\}
-$$
-has finite moments of all orders; see Theorem 10.1 on page 150.<|endoftext|>
-TITLE: Is there a continuous function such that $\int_0^{\infty} f(x)dx$ converges, yet $\lim_{x\rightarrow \infty}f(x) \ne 0$?
-QUESTION [6 upvotes]: Is there a continuous function such that $\int_0^{\infty} f(x)dx$ converges, yet $\lim_{x\rightarrow \infty}f(x) \ne 0$?
-I know there are such functions, but I just can't think of any example.
-
-REPLY [10 votes]: Here is a picture (not very accurate, I know), to see how to construct a counter-example:
-The $n$-th triangle, centered at $x=n$, has a base of length $1/n^2$.
-This is Friedrich Philipp's idea.
-
-REPLY [6 votes]: Let
-$$
-f(x)=\begin{cases}n^2(x-n),&\ x\in[n,n+1/n^2], \\ -n^2x+n^3+2,&\ x\in[n+1/n^2,n+2/n^2]\\ 0,&\ x\in[n+2/n^2,n+1)
-\end{cases}
-$$
-Then $f$ is continuous, $f(x)\geq0$ for all $x$, and
-$$
-\int_0^\infty f(x)\,dx=\sum_{n=1}^\infty\frac1{n^2}=\frac{\pi^2}6.
-$$
-(Strictly, the two sloped pieces only fit inside $[n,n+1)$ when $n\ge2$; for $n=1$ one can shrink the first triangle, which does not affect convergence.)
-Note also that, by pushing this idea, we can get $f$ to be unbounded (by making the triangles thinner more quickly and taller).<|endoftext|>
-TITLE: Why would the category of topological spaces be a balanced category (i.e. monic epimorphisms are isomorphisms)?
-QUESTION [7 upvotes]: I've just read on this page that
-
-For example, $\mathsf {Set}$ (the category of sets), $\mathsf {Grp}$ (the category of groups), and $\mathsf {Top}$ (the category of topological spaces) are all balanced.
-
-(Balanced means that all the monic epimorphisms are isomorphisms).
-I clearly understand this for $\mathsf{Set}$ and $\mathsf{Grp}$, but isn't this wrong for $\mathsf{Top}$? For instance,
-$$f:[0,1[ \longrightarrow S^1 \qquad t \longmapsto e^{2πit}$$
-is continuous and bijective but is not an isomorphism in $\mathsf{Top}$. Am I missing something there?
-Thank you for your comments!
-
-REPLY [5 votes]: As it was pointed out in the comments (by Pedro Sánchez Terraf and Rob Arthan), the PlanetMath page is wrong. It is not true that every monic epimorphism in $\sf Top$ is an isomorphism.
-Other examples of such morphisms can be found in the category of Hausdorff spaces $\sf Haus$ (looking at the inclusion $\Bbb Q \hookrightarrow \Bbb R$) or in $\sf Ring$ (looking at $\Bbb Z \hookrightarrow \Bbb Q$).<|endoftext|>
-TITLE: Probability that sum of independent uniform variables is less than 1
-QUESTION [5 upvotes]: I would like to determine the probability $\mathbb{P}(X_1+\dots+X_n\leq 1)$, where $X=(X_i)_{1\leq i\leq n}$ is a family of independent uniform random variables on $[0,1]$. My first idea is to do this by induction. The first three base cases are straightforward to determine and give us $\mathbb{P}(X_1\leq 1)=1$, $\mathbb{P}(X_1+X_2\leq 1)=\frac{1}{2}$ and $\mathbb{P}(X_1+X_2+X_3\leq 1)=\frac{1}{6}$, which suggests that $\mathbb{P}(X_1+\dots+X_n\leq 1)=\frac{1}{n!}$. Supposing this is true for a certain arbitrary integer $n$, I am having difficulties establishing the result for $n+1$, i.e. $\mathbb{P}(X_1+\dots+X_n+X_{n+1}\leq 1)=\frac{1}{(n+1)!}$. I believe the starting point should be:
-$$\mathbb{P}(X_1+\dots+X_n+X_{n+1}\leq 1)=\mathbb{P}(X_1+\dots+X_n\leq 1-X_{n+1}),$$
-and then somehow condition on $X_{n+1}$, but I am stuck at this point of the calculation. Any ideas, references to the literature, or even an alternative direct proof would be greatly appreciated.
-
-REPLY [5 votes]: A geometric argument should suffice. Given that $\{X_k\}_{k=1}^\infty$ are all iid Uniform$(0,1)$ random variables, then:
-$\mathsf P(X_1+X_2\leq 1)$ is the probability that points distributed uniformly over the unit square lie in the lower left triangle; which is $1/2$ the area of the unit square.
-$\mathsf P(X_1+X_2+X_3\leq 1)$ is the probability that points distributed uniformly over the unit cube lie in the $(0,0,0)$-corner pyramid; which is $1/6$ the volume of the unit cube.
-$\mathsf P(X_1+X_2+X_3+X_4\leq 1)$ is the probability that points distributed uniformly over the unit tesseract lie in the $(0,0,0,0)$-corner pentachoron; which is $1/24$ of the hypervolume of the unit tesseract.
-And so forth.
-$\mathsf P(\sum\limits_{k=1}^n X_k\leq 1)$ is the probability that points distributed uniformly over a unit $n$-hypercube lie in a corner $n$-hyperpyramid; which is $1/n!$ of the $n$-hypervolume of the unit $n$-hypercube.<|endoftext|>
-TITLE: Counterexample: Continuous, but not uniformly continuous functions do not preserve Cauchy Sequences
-QUESTION [15 upvotes]: I want to prove this:
-There exists a continuous function $f:\mathbb{Q}\to\mathbb{Q}$, but not uniformly continuous, and a Cauchy sequence $\{x_n\}_{n\in\mathbb{N}}$ of rational numbers such that $\{f(x_n)\}_{n\in\mathbb{N}}$ is not a Cauchy sequence.
-More particularly:
-Does there exist a Cauchy sequence $\{x_n\}_{n\in\mathbb{N}}$ of rational numbers such that $\{x_n^2\}$ is not Cauchy?
-I think that would be weird, and the counterexample should be with some function that is continuous in $\mathbb{Q}$ but not in $\mathbb{R}$. Am I right? Which would be some example of that?
-
-REPLY [4 votes]: Another simple example is given by $f(x)=\frac1{x^2-2}$.<|endoftext|>
-TITLE: Does there exist a $1$-form $\alpha$ with $d\alpha = \omega$?
-QUESTION [5 upvotes]: Let $\omega := dx \wedge dy$ denote the standard area form on $\mathbb{R}^2$. As the question title suggests, does there exist a $1$-form $\alpha$ with $d\alpha = \omega$?
-
-REPLY [12 votes]: Well, $\mathrm d\omega = 0$ and $\omega$ is defined everywhere, so the answer to your question is yes.
-You probably want to see such a 1-form, though. You can either make some educated guesses such as John Ma’s suggestion and try them out, or compute an antiderivative directly. I’ll do the latter here since it’s a simple illustration of the method.
-Step 1.
Substitute $x^i \to tx^i$ and $\mathrm dx^i\to x^i\,\mathrm dt + t\,\mathrm dx^i$; $$(x\,\mathrm dt + t\,\mathrm dx) \land (y\,\mathrm dt + t\,\mathrm dy) = tx\,\mathrm dt\land\mathrm dy + ty\,\mathrm dx\land\mathrm dt + t^2\,\mathrm dx\land\mathrm dy$$ -Step 2. Discard all terms not involving $\mathrm dt$ and move it to the left in the remaining terms:$$tx\,\mathrm dt\land\mathrm dy - ty\,\mathrm dt\land\mathrm dx$$ -Step 3. Treat the result as an ordinary integrand w/r $t$ and integrate from $0$ to $1$:$$\int_0^1 (tx\,\mathrm dy - ty\,\mathrm dx)\,\mathrm dt = \frac12(x\,\mathrm dy-y\,\mathrm dx). $$ This form might look familiar to you. -Note that, just as there’s an arbitrary constant of integration in elementary calculus, you can add $\mathrm df$, where $f$ is a real-valued function, to this and get another antiderivative of $\omega$. E.g., taking $f(x,y)=\frac12xy$ gives $x\,\mathrm dy$. - -For the curious, here’s what’s going on above (asserted without proof, following Bamberg & Sternberg in A Course In Mathematics For Students of Physics). -Let the $k$-form $\omega$ ($k>0$) be defined in a star-shaped region $Q$ of $\mathbb R^n$. Define a function $\beta$ on $[0,1]\times Q$ that for each $p\in Q$ maps the interval $[0,1]$ to the line segment joining $p$ to the origin. For any function $g$ defined on $Q$, $(\beta^*g)(t;x^1,\dots,x^n)=g(tx^1,\dots,tx^n)$. -The pullback $\beta^*\omega$ is a sum of two types of terms: $$\tau_1(t)=A(t;x^1,\dots,x^n)\,\mathrm dx^{i_1}\land\cdots\land\mathrm dx^{i_k} \\ \tau_2(t)=B(t;x^1,\dots,x^n)\,\mathrm dt\land\mathrm dx^{i_1}\land\cdots\land\mathrm dx^{i_{k-1}},$$ i.e., terms that don’t involve $\mathrm dt$ and those that do. 
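As a sanity check on the result of Steps 1-3, one can verify $\mathrm d\alpha=\omega$ symbolically. Writing $\alpha = P\,\mathrm dx + Q\,\mathrm dy$, the exterior derivative is $\mathrm d\alpha = (\partial Q/\partial x - \partial P/\partial y)\,\mathrm dx\land\mathrm dy$. Here is a small sketch in Python with SymPy (my own verification code; the variable names are not from the answer):

```python
import sympy as sp

x, y = sp.symbols('x y')

# alpha = (1/2)(x dy - y dx), i.e. P dx + Q dy with:
P = -y / 2
Q = x / 2

# d(alpha) = (dQ/dx - dP/dy) dx ^ dy, which should be 1 * dx ^ dy = omega
assert sp.simplify(sp.diff(Q, x) - sp.diff(P, y)) == 1

# Adding df changes the antiderivative but not its exterior derivative;
# f = xy/2 turns alpha into x dy, as noted at the end of the computation.
f = x * y / 2
P2, Q2 = P + sp.diff(f, x), Q + sp.diff(f, y)
assert (sp.simplify(P2), sp.simplify(Q2)) == (0, x)
assert sp.simplify(sp.diff(Q2, x) - sp.diff(P2, y)) == 1
```

The same two-line check works for any proposed antiderivative of a $2$-form on $\mathbb R^2$.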
Define a linear operator $L$ as follows: $$\begin{align}
-L\tau_1 &= 0 \\
-L\tau_2 &=\left(\int_0^1 B\,\mathrm dt\right)\,\mathrm dx^{i_1}\land\cdots\land\mathrm dx^{i_{k-1}}
-\end{align}$$ This operator has the property that $$\begin{align}
-\mathrm dL\tau_1+L\mathrm d\tau_1 &= \tau_1(1)-\tau_1(0) \\
-\mathrm dL\tau_2+L\mathrm d\tau_2 &= 0.
-\end{align}$$ So, type $\tau_2$ terms can be ignored in $\mathrm dL(\beta^*\omega)+L\mathrm d(\beta^*\omega)$ and since $\tau_1(0)=0$ because of the factor of $t$ that accompanies each $\mathrm dx^i$, we have $$
-\mathrm dL(\beta^*\omega)+L\mathrm d(\beta^*\omega) = \omega.
-$$ If $\mathrm d\omega=0$, this becomes simply $\mathrm dL\beta^*\omega = \omega$.
-The process at the top of the answer computes $L\beta^*\omega$. Step 1 is just forming the pullback; step 2 is some bookkeeping; and step 3 computes $L\tau_2$. The $\tau_1$ terms can be discarded at any point and the first two steps can of course be combined if you’re careful. This procedure can be extended to work for any region that’s the image of a star-shaped region under a smooth one-to-one mapping.<|endoftext|>
-TITLE: Intuition behind definition of spinor
-QUESTION [12 upvotes]: Some time ago I searched for the definition of spinors and found the Wikipedia page on the subject. Although highly detailed, the page tries to talk about many different constructions and IMHO doesn't give the intuition behind any of them.
-As far as I know physicists prefer to define spinors based on transformation laws (as with vectors and tensors), but with all due respect, I find this kind of definition quite unpleasant. Vectors and tensors can be defined in much more intuitive ways and I believe the same happens with spinors.
-In that case, how does one really define spinors without resorting to transformation properties, and what is the underlying intuition behind the definition? How does the definition relate to the idea of spin from Quantum Mechanics?
-In Wikipedia's page we have two definitions. One is based on spin groups and another on Clifford algebras. I couldn't understand the intuition behind either of them, so I'd really like to get not just the definition but the intuition behind it.
-
-REPLY [4 votes]: Rotations in three-dimensional space can be represented by the usual real 3 x 3 matrices. They work on real 3 x 1 column matrices which represent a vector. By doing so they yield the representation of the rotated vector. But rotations can also be represented by complex 2 x 2 matrices working on complex 2 x 1 column matrices. This is the representation SU(2). Here the complex 2 x 1 column matrices no longer represent vectors but rotations. They contain the information about the Euler angles, but also (by equivalence) the information about the rotation axis and the rotation angle, or the information about the image of the triad of basis vectors under the rotation. In fact, one can consider them as shorthand for the 2 x 2 matrices by writing only their first column, because the second column is unambiguously defined by the first column, and this property is preserved under multiplication with SU(2) matrices. In fact the matrices of SU(2) are of the form
-$$\begin{pmatrix} a & -b^{*} \\ b & a^{*} \end{pmatrix}.$$
-The spinor is just the first column of that matrix.
-The 2 x 1 spinor matrices are normalized to 1, in conformity with the definition of SU(2): $aa^{*} + bb^{*} =1$. They therefore contain the equivalent of three independent real parameters, i.e. the three Euler angles, or the unit vector along the rotation axis plus the rotation angle, etc.
-The question remains how we put the information about the rotation into these 2 x 2 matrices. That is done by writing the rotations as a product of reflections. (It is easy to verify that the product of two reflections is a rotation. The intersection of the reflection planes is the axis of the rotation.
The angle of the rotation is twice the angle between the reflection planes. The reflections are thus generating the group of rotations, reflections and reversals).
-Such a reflection matrix ${\mathbf{A}}$ is easy to find. One uses the unit vector ${\mathbf{a}}$ perpendicular to the reflection plane. The coordinates $a_x, a_y, a_z$ of ${\mathbf{a}}$ must occur somehow in the reflection matrix, but we do not know how. Therefore we write the matrix heuristically as $a_x {\mathbf{M}}_x + a_y {\mathbf{M}}_y + a_z {\mathbf{M}}_z$. The matrix ${\mathbf{M}}_x$ will tell where and with which sign the coefficient $a_x$ occurs in ${\mathbf{A}}$. Analogous statements apply to the matrices ${\mathbf{M}}_y$ and ${\mathbf{M}}_z$. E.g. if the matrix ${\mathbf{M}}_z$ is
-$$\begin{pmatrix} 1 & 0 \\ 0 & -1 \end{pmatrix},$$
-then the matrix ${\mathbf{A}}$ will contain $a_z$ in position (1,1) and $-a_z$ in position (2,2). The same is true, mutatis mutandis, for the matrices ${\mathbf{M}}_x$ and ${\mathbf{M}}_y$. To find the expressions for these three matrices we express that a reflection is its own inverse, i.e. ${\mathbf{A}}^{2}=1$, where $1$ stands for the unit matrix. This condition can be satisfied if the matrices ${\mathbf{M}}_j$ satisfy the conditions ${\mathbf{M}}_{x} {\mathbf{M}}_{y} + {\mathbf{M}}_{y} {\mathbf{M}}_{x} = {\mathbf{0}}$ (cycl.) and ${\mathbf{M}}_{x}^{2} = 1$, ${\mathbf{M}}_{y}^{2} =1$, ${\mathbf{M}}_{z}^{2} =1$. In other words, the matrices ${\mathbf{M}}_j$ can just be taken to be the Pauli matrices $\sigma_{x}, \sigma_{y}, \sigma_{z}$. The matrix then becomes
-$$\begin{pmatrix} a_z & a_{x} -\imath a_{y} \\ a_{x} + \imath a_{y} & -a_{z} \end{pmatrix}.$$
-Once we have the reflection matrices, we can obtain the rotation matrices by multiplication. This leads to the Rodrigues formula. The spinors are just the first columns of these rotation matrices.
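Since everything here is small concrete matrices, the claims are easy to check numerically. A quick NumPy sketch (my own test code; the unit vectors $\mathbf a$, $\mathbf b$ are arbitrary choices, not from the text) verifies that ${\mathbf A}^2 = 1$, that the product of two reflections is an SU(2) matrix of the form shown above, and that the rotation angle is twice the angle between the planes (an SU(2) rotation by $\varphi$ has trace $2\cos(\varphi/2)$, while $\operatorname{tr}({\mathbf A}{\mathbf B}) = 2\,\mathbf a\cdot\mathbf b = 2\cos\theta$):

```python
import numpy as np

# Pauli matrices
sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]], dtype=complex)
sz = np.array([[1, 0], [0, -1]], dtype=complex)

def reflection(a):
    """The matrix a_x sigma_x + a_y sigma_y + a_z sigma_z for a unit vector a."""
    return a[0] * sx + a[1] * sy + a[2] * sz

rng = np.random.default_rng(1)
a = rng.normal(size=3)
a /= np.linalg.norm(a)
b = rng.normal(size=3)
b /= np.linalg.norm(b)
A, B = reflection(a), reflection(b)

# A reflection is its own inverse: A^2 = 1.
assert np.allclose(A @ A, np.eye(2))

# The product of two reflections is unitary with determinant 1 (in SU(2)),
R = A @ B
assert np.allclose(R @ R.conj().T, np.eye(2))
assert np.isclose(np.linalg.det(R), 1.0)

# and it satisfies the SU(2) shape constraints of the form (a, -b*; b, a*).
assert np.isclose(R[1, 1], np.conj(R[0, 0]))
assert np.isclose(R[1, 0], -np.conj(R[0, 1]))

# Rotation angle = twice the angle between the planes: an SU(2) rotation
# by phi has trace 2 cos(phi/2), and here tr R = 2 (a . b) = 2 cos(theta).
assert np.isclose(R.trace(), 2 * np.dot(a, b))
```

The last assertion is exactly the angle-doubling statement: $\cos(\varphi/2) = \cos\theta$, so $\varphi = 2\theta$.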
If you want it in more detail (and exactly along the lines explained here), you can read it in the third chapter of "From Spinors to Quantum Mechanics" by G. Coddens (Imperial College Press). You will there also find the link with isotropic vectors mentioned by KonKan, and with the stereographic projection.
-This shows that a spinor is a way to write a group element. That idea remains valid when you want to develop the group representation theory for spinors of the homogeneous Lorentz group in Minkowski space-time (with a few complications, among others due to the metric; instead of three Pauli matrices you will now need four 4 x 4 gamma matrices. If you know the Dirac theory, you will recognize that this procedure is exactly the way Dirac obtained the gamma matrices. But he used the energy-momentum four-vector $(E,c{\mathbf{p}})$ to define the basis rather than the four unit vectors ${\mathbf{e}}_{\mu}$ of space-time. It is all explained in detail in the reference above).
-As there is a 1-1 correspondence between a set of basis vectors and the rotation that has produced it by operating on the canonical basis, you can thus visualize the spinor as a rotated basis. This remains true in Minkowski space-time (now with a set of four basis vectors and with Lorentz transformations). In ${\mathbb{R}}^{3}$ we have ${\mathbf{e}}_{z} = {\mathbf{e}}_{x} \wedge {\mathbf{e}}_{y}$, such that ${\mathbf{e}}_{x}$ and ${\mathbf{e}}_{y}$ are actually sufficient to reproduce the whole information. You can therefore also represent the complete information by the isotropic vector ${\mathbf{e}}_{x} +\imath {\mathbf{e}}_{y}$. After rotating ${\mathbf{e}}_{x} +\imath {\mathbf{e}}_{y}$ you can just find ${\mathbf{e}}'_{x}$ and ${\mathbf{e}}'_{y}$ by taking the real and imaginary parts of the rotated isotropic vector ${\mathbf{e}}'_{x} +\imath {\mathbf{e}}'_{y}$, and thus reconstruct the full rotated basis.
This is the reason why one can also represent the rotations by using isotropic vectors. The rotated isotropic vectors are in 1-1 correspondence with the rotations.
-It is not any more difficult than that. It is simple geometry. There should not remain any secret or mystery that stands in your way to fully understanding this. It is never well explained, which is probably due to the extremely concise way Cartan introduced them, without explaining what is going on behind the scenes. I hope this helps. Kind regards.<|endoftext|>
-TITLE: Nested radical $\sqrt{x+\sqrt{x^2+\cdots\sqrt{x^n+\cdots}}}$
-QUESTION [12 upvotes]: I am studying $f(x) = \sqrt{x+\sqrt{x^2+\cdots\sqrt{x^n+\cdots}}}$ for $x \in (0,\infty)$ and I am trying to get a closed-form formula for it, or at least some useful series/expansion. Any ideas how to get there?
-So far I've got only trivial values, which are
-$$f(1)=\sqrt{1+\sqrt{1+\cdots\sqrt{1}}}=\frac{\sqrt{5}+1}{2}$$
-$$f(4) = 3$$
-The second one follows from
-$$2^n+1 = \sqrt{4^n+(2^{n+1}+1)} = \sqrt{4^n+\sqrt{4^{n+1}+(2^{n+2}+1)}} = \sqrt{4^n+\sqrt{4^{n+1}+\cdots}}$$
-I have managed to compute several derivatives at $x_0=1$ by using the chain rule recursively on $f_n(x) = \sqrt{x^n + f_{n+1}(x)}$, namely:
-\begin{align*}
-f^{(1)}(1) &= \frac{\sqrt{5}+1}{5}\\
-f^{(2)}(1) &= -\frac{2\sqrt{5}}{25}\\
-f^{(3)}(1) &= \frac{6\sqrt{5}-150}{625}\\
-f^{(4)}(1) &= \frac{1464\sqrt{5}+5376}{3125}\\
-\end{align*}
-These gave me the Taylor expansion around $x_0=1$
-\begin{align*}
-T_4(x) &= \frac{\sqrt{5}+1}{2} + \frac{\sqrt{5}+1}{5} (x-1) - \frac{\sqrt{5}}{25} (x-1)^2 + \frac{6\sqrt{5}-150}{3750} (x-1)^3 \\
-&\ \ \ \ \ + \frac{61\sqrt{5}+224}{3125} (x-1)^4
-\end{align*}
-However this approach seems to be useful only very close to $x=1$. I am looking for something more general in terms of any $x$, but with my limited arsenal I could not get much further than this. Any ideas?
-
-This was inspiring but kind of stopped where I did:
-http://integralsandseries.prophpbb.com/topic168.html
-Edit:
-Thanks for the answers, I will need to go through them. It looks like the main idea is to divide by $\sqrt{2x}$, so then I am getting
-$$\frac{\sqrt{x+\sqrt{x^2+\cdots\sqrt{x^n+\cdots}}}}{\sqrt{2x}} = \sqrt{\frac{1}{2}+\sqrt{\frac{1}{4}+\sqrt{\frac{1}{16x}+\sqrt{\frac{1}{256x^4}+\cdots}}}}$$
-Then I need to make an expansion from this. This is where I am not yet following how to get from this to the final expansion.
-
-REPLY [2 votes]: In the same spirit as Mark Fischler's answer, setting $x=\frac{y^2}2$, for large values of $x$ the Taylor expansion is $$y+\frac{1}{4 \sqrt{2}}-\frac{5}{64}\frac 1 {y}+\frac{85}{256 \sqrt{2}}\frac 1 {y^2}-\frac{1709}{8192}\frac 1 {y^3}+\frac{6399}{32768 \sqrt{2}}\frac 1 {y^4}+O\left(\frac{1}{y^5}\right)$$ which would converge quite fast.<|endoftext|>
-TITLE: Why do we require that a simple Lie algebra be non-abelian?
-QUESTION [10 upvotes]: We say that a Lie $k$-algebra is simple if it is a simple object in the category of Lie algebras, and also nonabelian. The only simple object which we do not consider to be a simple Lie algebra under this definition is the line $k$. Is there any particular reason why $k$ would be problematic if we were to consider it to be a simple Lie algebra?
-
-REPLY [6 votes]: I think it's mainly historical and practical. There is no deep reason why one agrees that groups of prime order are simple groups and 1-dimensional Lie algebras (resp. algebraic groups, Lie groups) are not simple Lie algebras (resp. algebraic groups, Lie groups).
-One difference between the two contexts is that a finite-dimensional Lie algebra (at least in char 0) has a solvable radical and the quotient is a direct product of (non-abelian) simple Lie algebras. This "separates" the abelian part (the solvable radical, which is an iterated extension of abelian guys) and the semisimple part, which is made of non-abelian simple factors.
In finite group theory there is no such separation. The simplest counterexample to such a result is the symmetric group on $\ge 5$ letters, which has no nontrivial solvable normal subgroup but has an abelian Jordan-Hölder factor. In this case there is a separation the other way round, but in general it is just more tangled (complicated examples can be cooked up using wreath products).
-In any case, there are many results for which one has to specify "non-abelian" simple groups. In Lie algebras, I guess that if, for the sake of coherence, abelian ones were allowed as simple, one would often have to specify "non-abelian simple", more often than one has to write "simple or 1-dimensional abelian".<|endoftext|>
-TITLE: Text recommendations for linear algebra (tensors, Jordan forms)
-QUESTION [5 upvotes]: I'm having extreme difficulty trying to understand the topics of tensor products, free spaces, and Jordan forms.
-Are there any textbooks that take an elementary approach to these topics that you may recommend?
-I am currently using Advanced Linear Algebra by Roman, but other sources would be appreciated. Thanks!
-
-REPLY [4 votes]: Peter Lax's book is usually highly recommended for Linear Algebra. I believe it covers Jordan forms and tensor products, but only does so in the appendices.
-For tensors, any good book on general relativity would serve you well (Das, Schutz, to name a few). However, you can also check A Quick Introduction to Tensor Analysis, which is self-contained and quite good.
-I've heard very good things about Matrix Analysis by Horn, which seems to cover Jordan forms in depth.
-I'm sorry to say I don't have anything for free spaces.<|endoftext|>
-TITLE: How to find the shortest path between opposite vertices of a cube, traveling on its surface?
-QUESTION [6 upvotes]: I am stuck with the following problem that says:
-
-Let $A,B$ be the ends of the longest diagonal of the unit cube.
The length of
-	the shortest path from $A$ to $B$ along the surface is:
-
-1.$\,\,\sqrt{3}\,\,$ 2.$\,\,1+\sqrt{2}\,\,$ 3.$\,\,\sqrt{5}\,\,$ 4.$\,\,3$
-
-My Try:
-So, the length of the longest diagonal is $AB=\sqrt{3}$. If I go from $A$ to $B$ along the surface path $AC+CD+BD$, then it gives $3$ units. But the answer is given to be option 3.
-Can someone explain? Thanks in advance for your time.
-
-REPLY [4 votes]: Unfold two adjacent faces of the cube into a plane. The shortest path is then the straight segment from $A$ to $B$ in this unfolding; it passes through the midpoint of the common edge of the two faces and has length $\sqrt{2^2+1^2}=\sqrt{5}$.<|endoftext|>
-TITLE: Solving linear recursive equation $a_n = a_{n-1} + 2 a_{n-2} + 2^n$.
-QUESTION [6 upvotes]: I wish to solve the linear recursive equation:
-
-$a_n = a_{n-1} + 2a_{n-2} + 2^n$, where $a_0 = 2$, $a_1 = 1$.
-
-I have tried using the Ansatz method and the generating function method in the following way:
-Ansatz method
-First, for the homogeneous part, $a_n = a_{n-1} + 2a_{n-2}$, I guess $a_n = \lambda^n$ as the solution, and substituting and solving for the quadratic, I get $\lambda = -1, 2$. So, $a_n = \alpha (-1)^n + \beta 2^n$. Then, for the inhomogeneous part, I guess $a_n = \gamma 2^n$, to get $\gamma 2^n = \gamma 2^{n-1} + 2\gamma 2^{n-2} + 2^n$, whence $2^n=0$, which means, I suppose, that this guess is not valid. These are the kind of guesses that usually work, so I don't know why it fails in this particular case, and what to do otherwise, so I tried the generating function method.
-Generating function method
-Let
-$$
-A(z) = \sum_{k=0}^{\infty} a_k z^k
-$$
-be the generating function for the sequence $\{ a_n \}_{n \in \mathbb{N} \cup \{0\}}$. Then, I try to write down the recursive relation in terms of $A(z)$:
-$$
-A(z) = zA(z) + 2z^2 A(z) + \frac{1}{1-2z} + (1 - 2z),
-$$
-where the last term in the brackets arises because of the given initial conditions.
Then, solving for $A(z)$, -$$ -\begin{align} -A(z) &= \frac{1}{(1+z)(1-2z)^2} + \frac{1}{1+z}\\ -&= \frac{2}{9}\frac{1}{1-2z} + \frac{2}{3}\frac{1}{(1-2z)^2} + \frac{10}{9}\frac{1}{1+z}\\ -&=\frac{2}{9} \sum_{k=0}^{\infty} 2^k z^k + \frac{2}{3} \sum_{k=0}^{\infty} (k+1)2^k z^k + \frac{10}{9} \sum_{k=0}^{\infty} (-1)^k z^k\\ -&= \sum_{k=0}^\infty \frac{(3k+4)2^{k+1} + (-1)^k 10}{9} z^k. -\end{align} -$$ -So, -$$ -a_k = \frac{(3k+4)2^{k+1} + (-1)^k 10}{9}. -$$ -But then, $a_1 = 2$, whereas we started out with $a_1 = 1$. -At first, I thought that maybe the generating function method did not work because some of the series on the right hand side were not converging, but they all look like they're converging for $|z| < 1/2$. I rechecked my calculations several times, so I don't think there is any simple mistake like that. It would be great if someone could explain to me what exactly is going wrong here. - -REPLY [2 votes]: Using the characteristic equation method, we have the homogeneous part of the given equation, -$$g_n = g_{n-1} + 2g_{n-2}$$ -As you have done, the roots of the characteristic equation are $2$ and $-1$, so the solution to the homogeneous part is $c_12^n + c_2(-1)^n$ for some constants $c_1$ and $c_2$. For the nonhomogeneous part, according to the comment by @AndreNicolas, we assume the solution is of the form $c_3n2^n$ and we can write: -$$c_3n2^n = c_3(n-1)2^{n-1} + 2c_3(n-2)2^{n-2} + 2^n \\ -\implies c_3 = \frac{2}{3}$$ -Note: We guess $c_3n2^n$ for the nonhomogeneous part, and not $c_32^n$, because $2$ is already a root of the characteristic equation of the homogeneous part. In the same way, when we have repeated roots for the homogeneous part (say the root $a$ appears thrice), we use $c_1a^n + c_2na^n + c_3n^2a^n$, etc. 
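As an aside, the value $c_3 = \frac{2}{3}$ obtained above is easy to double-check symbolically. Here is a small sketch with SymPy (my own verification code, not part of the original answer), plugging the particular solution $\frac{2}{3}n2^n$ into $a_n - a_{n-1} - 2a_{n-2}$:

```python
import sympy as sp

n = sp.symbols('n')
p = sp.Rational(2, 3) * n * 2**n   # proposed particular solution c3 * n * 2^n

# a_n - a_{n-1} - 2 a_{n-2} should leave exactly the forcing term 2^n
residual = p - p.subs(n, n - 1) - 2 * p.subs(n, n - 2)
assert sp.simplify(residual - 2**n) == 0
```

Any other value of $c_3$ would leave a nonzero multiple of $2^n$ in the residual.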
-
-Now,
-$$\begin{align}
-a_n &= h_n + n\frac{2 \cdot 2^n}{3} \\
-&= c_12^n + c_2(-1)^n + \frac{2^{n+1}n}{3}
-\end{align}$$
-and by substituting for $a_0$ and $a_1$, we get $c_1 = \frac{5}{9}$, $c_2 = \frac{13}{9}$, and
-$$a_n = \frac{5\cdot 2^n}{9} + \frac{13 \cdot (-1)^n}{9} + \frac{2^{n+1}n}{3}$$<|endoftext|>
-TITLE: If $A=\frac{1}{\frac{1}{1980}+\frac{1}{1981}+\frac{1}{1982}+........+\frac{1}{2012}}\;,$ find $\lfloor A \rfloor$
-QUESTION [6 upvotes]: If $$A=\frac{1}{\frac{1}{1980}+\frac{1}{1981}+\frac{1}{1982}+........+\frac{1}{2012}}\;,$$ find $\lfloor A \rfloor\;,$ where $\lfloor x \rfloor $ represents the floor function of $x$.
-
-My Try: Using $\bf{A.M\geq H.M}\;,$ we get
-$$\frac{1980+1981+1982+....+2012}{33}>\frac{33}{\frac{1}{1980}+\frac{1}{1981}+\frac{1}{1982}+........+\frac{1}{2012}}$$
-So $$\frac{1}{\frac{1}{1980}+\frac{1}{1981}+\frac{1}{1982}+........+\frac{1}{2012}}<\frac{1980+1981+....+2012}{(33)^2}=\frac{1996}{33}\approx 60.5<61$$
-Now how can I prove that the above expression $A$ is $>60$?
-Help me, thanks.
-
-REPLY [7 votes]: HINT: There are $33$ terms in the denominator. The smallest is $\frac1{2012}$, and the largest is $\frac1{1980}$, so if $d$ is the denominator, then
-$$\frac{33}{2012}\le d\le\frac{33}{1980}\;.$$
-What does this tell you about $\frac{1}d$?<|endoftext|>
-TITLE: Contractive Operators on Compact Spaces
-QUESTION [6 upvotes]: Suppose that $T: M \to M$ is a compact contractive operator on a nonempty compact subset $M$ of a complete metric space $X$. Show that $T$ has a unique fixed point. Further show that the sequence defined by $x_{n+1}=Tx_n$ converges to the fixed point from an arbitrary point $x_0 \in M$. By a contractive operator I mean there exists a $1 \gt k \ge 0$ such that $d(Tx,Ty) \le k d(x,y)$.
-My try: For the first part, let $S=\{(x,y): 0 \lt a \le d(x,y) \le b\}$. Let $f: M \times M \to \mathbb{R}$ be such that $f(x,y)=\frac{d(Tx,Ty)}{d(x,y)}$.
$f$ is continuous on $S$ and, $S$ being compact, $f$ attains its maximum, say $K(a,b) \lt 1$. Then by the generalized fixed point theorem (Generalized Fixed Point Theorem) $T$ has a unique fixed point.
-I have trouble showing the second part. Since $M$ is compact, every such $x_n$ will have a convergent subsequence, say $x_{n_{k}}$, which goes to say $x'$. I need to show that all convergent subsequences go to $x$, which is the fixed point, and somewhere use $x_{n+1}=Tx_n$. I am unable to do so.
-Thanks for the help!!
-
-REPLY [7 votes]: If I understand correctly, the assumption is that $d(T(x),T(y))\le k\,d(x,y)$ for some fixed $k<1$.<|endoftext|>
-TITLE: Is there a null set that is not a Borel set?
-QUESTION [7 upvotes]: In my module notes, if $A$ is a Borel set and $m(A)=0$, then it is not necessarily true that any subset $B$ of $A$ (with $m(B)=0$) is Borel.
-So I am wondering if there is a null set that is not a Borel set?
-
-REPLY [3 votes]: We can guarantee the existence of one, but I do not know if one can be found in a direct way.
-Let $\phi$ be the Cantor Lebesgue function and define $\psi(x)=\phi(x)+x$. Then $\psi$ is a strictly increasing continuous function mapping $[0,1]$ to $[0,2]$, and moreover, maps the Cantor set onto a set of positive measure.
-Let $C$ be the Cantor set. Then since every set that has positive outer measure contains a non-measurable set, $\psi(C)$ contains a non-measurable set, $A$. Then $\psi^{-1}(A)$ is a subset of $C$, so it is (Lebesgue) measurable with measure zero, and it is not Borel: the image of a Borel set under a continuous map is measurable, so if $\psi^{-1}(A)$ were Borel, then $A=\psi(\psi^{-1}(A))$ would be measurable, a contradiction.<|endoftext|>
-TITLE: Are those two ways to relate Extensions to Ext equivalent?
-QUESTION [11 upvotes]: Given an extension of $R$-modules $0\to B\to X\to A \to 0$, one usually associates $x\in\operatorname{Ext}^1(A,B)$ to this extension by taking the long exact sequence
-$$\dotsb\to \operatorname{Hom}(A,X) \to \operatorname{Hom}(A,A) \xrightarrow{\partial} \operatorname{Ext}^1(A,B)\to \dotsb$$
-and setting $x=\partial(\mathrm{id}_A)$.
Alternatively one could apply $\operatorname{Ext}^{*}(-,B)$ to get -$$\dotsb\to \operatorname{Hom}(X,B) \to \operatorname{Hom}(B,B) \xrightarrow{\partial} \operatorname{Ext}^1(A,B)\to \dotsb$$ -and take $y=\partial(\mathrm{id}_B)$. Do we get the same elements in this way? I.e. is $x=y$? Optimally, can you show this from the standard properties of $\operatorname{Ext}$? I became interested in this because it seems to be necessary to solve a more particular question about a proof I had. - -REPLY [6 votes]: We can compute this in the derived category. -Extensions give distinguished triangles, and are determined by the corresponding morphism $f : A \to B[1]$. -The two methods you describe for associating an element to the extension correspond to pre- and post-composition with $f$: -$$ \hom(A, A) \xrightarrow{g \mapsto f\circ g} \hom(A, B[1]) $$ - $$ \hom(B[1], B[1]) \xrightarrow{h \mapsto h\circ f} \hom(A, B[1]) $$ -and so the same element $f \in \hom(A, B[1])$ is indeed obtained by applying these maps to the respective identity morphisms.<|endoftext|> -TITLE: Verifying the Jacobi identity for the semidirect product of Lie algebras -QUESTION [6 upvotes]: Given Lie algebras $S$ and $I$ and a Lie homomorphism $\theta \colon S\to \operatorname{Der} I$, we have the semidirect product to be the vector space $S\oplus I$ with operation -$$ - (s_{1},x_{1})(s_{2}x_{2}) - := - ([s_{1},s_{2}],[x_{1},x_{2}]+\theta(s_{1})x_{2}-\theta(s_{2})x_{1}). -$$ -Show that this is a Lie algebra. - -So I can easily verify the skew-symmetric but I can't seem to work out a nice way of proving the Jacobi identity. Am I missing a simple trick or must you perform the tedious calculation to show this? Thanks. - -REPLY [7 votes]: The calculation is no longer tedious if you split it up into four cases. 
Since the Jacobi identity is trilinear, we only need to check it for triples of the following four forms:
-$(s_1,0),(s_2,0),(s_3,0)$, or $(s_1,0),(s_2,0),(0,x_3)$, or $(s_1,0),(0,x_2),(0,x_3)$, or $(0,x_1),(0,x_2),(0,x_3)$. The cases themselves are immediate, because they follow from the facts that either $S$ is a Lie algebra, or that $I$ is a Lie algebra, or that the $\theta(s_i)$ are derivations, or that $\theta$ is a Lie algebra homomorphism.<|endoftext|>
-TITLE: closure of inverse image is subset of inverse image of closure, given that $f$ is continuous
-QUESTION [6 upvotes]: Let $f : \mathbb{R} \rightarrow \mathbb{R}$ be a continuous function. Prove then that $$ \overline{f^{-1}(X)} \subset f^{-1} (\overline{X}) $$ for every $X \subset \mathbb{R}$.
-Attempt at proof: Let $a \in \overline{f^{-1}(X)}$ be arbitrary. Then by definition we have $\forall \delta > 0$ that $$ ] a - \delta, a + \delta [ \cap f^{-1}(X) \neq \emptyset. $$ Let $x$ be an element in this intersection. Thus $x \in ]a - \delta, a + \delta [ $ and $x \in f^{-1}(X)$. It follows that $f(x) \in X$. Because $f: \mathbb{R} \rightarrow \mathbb{R}$ is continuous at $a$, we can find $\forall \epsilon > 0$ a $\delta > 0$ such that $\forall x \in \mathbb{R}$ it holds that $$ | f(x) - f(a) | < \epsilon $$ if $| x - a | < \delta$. Now we have $$f^{-1} (\overline{X}) = \left\{a \in \overline{X} \mid f(a) \in f(\overline{X}) \right\}. $$ This means I have to show that $a \in \overline{X}$ and then show that $f(a) \in f(\overline{X})$. This is the part where I'm stuck.
-Help would be appreciated.
-
-REPLY [2 votes]: I finish your idea:
-Let $a \in \overline{f^{-1}(X)}$ be arbitrary. Then by definition we have $\forall \delta > 0$ that $$ ] a - \delta, a + \delta [ \cap f^{-1}(X) \neq \emptyset. $$ Let $\delta_n=\frac{1}{n}$ and let $x_n$ be an element in the intersection $]a-\delta_n,a+\delta_n[\cap f^{-1}(X)$. Thus $x_n \in ]a - \delta_n, a + \delta_n [ $ and $x_n \in f^{-1}(X)$.
It follows that $f(x_n) \in X\subset \overline X$. Because $f: \mathbb{R} \rightarrow \mathbb{R}$ is continuous at $a$ and $x_n\to a$, we get $f(x_n)\to f(a)$. But as $\{f(x_n)\}\subset \overline X$, its limit is also in $\overline X$, i.e. $f(a)\in\overline X$, which means that $a\in f^{-1}(\overline X)$.<|endoftext|>
-TITLE: At what rate does the entropy of shuffled cards converge?
-QUESTION [17 upvotes]: Consider a somewhat primitive method of shuffling a stack of $n$ cards: In every step, take the top card and insert it at a uniformly randomly selected one of the $n$ possible positions above, between or below the remaining $n-1$ cards.
-Start with a well-defined configuration, and then track the entropy of the distribution over the possible permutations of the stack as these shuffling steps are applied. It starts off at $0$. Initially most moves will lead to unique permutations, so we should have roughly $n^k$ equiprobable states after $k$ steps, so the entropy should initially increase as $k\log n$. For $k\to\infty$ it should converge to the entropy corresponding to perfect shuffling, $\log n!\approx n(\log n-1)$.
-What I'd like to know is how this convergence takes place. I have no idea how to approximate the distribution as it approaches perfect shuffling. I computed the entropy for $n=8$ for $k$ up to $50$; here's a plot of the natural logarithm of the deviation from the perfect shuffling entropy $\log n!$:
-
-The red crosses show the computed entropy; the green line is a linear fit to the last $30$ crosses, with slope about $-0.57$. So the entropy converges to its maximal value roughly as $\exp (-0.57k)$. For $n=7$, the slope is about $-0.67$, and for $n=9$ it's about $-0.50$. How can we derive this behaviour?
-
-REPLY [2 votes]: This method of shuffling cards is essentially a random walk on the symmetric group $S_n$ (a "top-to-random" shuffle): the $i$-th step moves the top card to a position $Y_i$ that is uniformly distributed on $\{1, \ldots , n\}$.
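For small $n$ this walk is cheap to check by brute force. The following Python/NumPy sketch (my own test code rather than part of the answer, with the eigenvalue facts taken from the Diaconis-Fill-Pitman paper cited below) builds the $n!\times n!$ transition matrix for $n=4$ and confirms the stationary distribution, the spectrum, and the entropy limit $\ln(n!)$ from the question:

```python
import itertools

import numpy as np

n = 4
perms = list(itertools.permutations(range(n)))
index = {p: i for i, p in enumerate(perms)}
N = len(perms)  # n! = 24 states

# Transition matrix of the top-to-random shuffle: remove the top card
# and reinsert it at a uniformly chosen one of the n positions.
P = np.zeros((N, N))
for p in perms:
    top, rest = p[0], p[1:]
    for j in range(n):  # new position of the former top card
        q = rest[:j] + (top,) + rest[j:]
        P[index[p], index[q]] += 1.0 / n

# Doubly stochastic, so the uniform distribution is stationary.
u = np.full(N, 1.0 / N)
assert np.allclose(P.sum(axis=1), 1.0)
assert np.allclose(u @ P, u)

# Spectrum: eigenvalues j/n for 0 <= j <= n-2, plus 1; the multiplicity
# of j/n equals the number of permutations with exactly j fixed points
# (Diaconis-Fill-Pitman), giving 0 (x9), 1/4 (x8), 1/2 (x6), 1 (x1).
ev = np.sort(np.linalg.eigvals(P).real)
expected = np.sort(np.array([0.0] * 9 + [0.25] * 8 + [0.5] * 6 + [1.0]))
assert np.allclose(ev, expected, atol=1e-6)

# Entropy after k steps, starting from a fixed ordering, tends to ln(n!).
p0 = np.zeros(N)
p0[0] = 1.0
for _ in range(60):
    p0 = p0 @ P
H = float(-np.sum(p0 * np.log(p0)))
assert abs(H - np.log(N)) < 1e-9
```

The multiplicity pattern used in the spectrum check is consistent with $\operatorname{tr} P = n!/n$, since every permutation has exactly one self-loop of weight $1/n$.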
Such a process (as any other random walk on a finite group) is a finite-state Markov chain. Moreover, it is not hard to see that this Markov chain is both aperiodic and irreducible. Therefore its distribution converges to the stationary one with speed $O(\lambda_2^k)$, where $\lambda_2$ is the transition matrix eigenvalue with second largest absolute value and $k$ is the number of transition steps. -It is not hard to see that the stationary distribution under such shuffling is the uniform distribution on $S_n$. Moreover, according to Theorem 4.1 from "Analysis of Top to Random Shuffles" by P. Diaconis, J.A. Fill and J. Pitman, the transition matrix of this process has eigenvalues $\{\frac{j}{n}\mid 0 \leq j \leq n-2 \} \cup \{1\}$, with the second largest eigenvalue being $\frac{n-2}{n}$. Therefore at step $k$ the probability of our process being at any given permutation will be $\frac{1}{n!} + O((\frac{n-2}{n})^k)$ for large $k$. -Now, knowing the distribution we can compute the entropy. It is: -$$-n!\left(\frac{1}{n!} + O\left(\left(\tfrac{n-2}{n}\right)^k\right)\right)\ln\left(\frac{1}{n!} + O\left(\left(\tfrac{n-2}{n}\right)^k\right)\right) = -\ln\left(\frac{1}{n!} + O\left(\left(\tfrac{n-2}{n}\right)^k\right)\right) + O\left(\left(\tfrac{n-2}{n}\right)^k\right) = \ln(n!) + O\left(\left(\tfrac{n-2}{n}\right)^k\right)$$<|endoftext|> -TITLE: Could the Monty-Hall Problem be applied to multiple choice tests? -QUESTION [8 upvotes]: Given a multiple choice test where each question contains 4 possible answers, what would happen if before beginning the test (before reading the questions), someone were to make a random selection for each question? -At this point it seems logical that for a given question the student has a 1/4 chance of their choice being correct and a 3/4 chance of one of the other choices being correct. -Let's say that they now begin to read the questions and in some cases they can deduce that one of the provided answers which was not the one that they picked is not correct (let's assume that there is no error in this deduction).
In the scenario with the Monty Hall Problem, the probabilities did not change once the door was opened; they just shifted. - -By applying the same logic, the original selected answer has a 1/4 chance of being correct and the other three have a 3/4 chance of being correct, except that since one was deduced to be incorrect, the two remaining options have a 3/4 chance of being correct and so switching answers would increase the odds of being correct to $\frac{1}{2} * \frac{3}{4}$. -Is this an accurate assumption or are there pitfalls in doing this? -If this is the case, then what happens if another deduction is made such that their original answer was determined to be incorrect? It seems that there would be no change in the odds, but that seems unlikely. - -REPLY [6 votes]: No, you can't apply the Monty-Hall problem to a multiple choice test. -The difference is that in the Monty-Hall problem there is a person who knows where the winning door is, and who always opens a door which you didn't select and which contains a goat, after your first choice. -In the situation you describe, you assume that you know one wrong answer, which allows you to "open a door with a goat". However, it is possible that this is exactly the answer which you blindly selected first, a scenario which isn't possible in the Monty-Hall problem, because Monty chooses the door he opens depending on your choice, which is not the case in the multiple choice test. -Mathematical explanation -Let's assume we have a multiple choice question with four alternatives. You chose an answer at random, without reading the question. After that you read the question with the answers. -We assume that you can exclude one possible answer with certainty, but you have no knowledge concerning the other three answers; they are all equally likely to be correct. -There are now two possibilities: -Alternative 1: You initially chose the wrong answer (chance of this happening is $\frac14$). -You obviously want to switch.
The chance of getting the right answer is $\frac13$, assuming that each answer is just as likely to be right. -Alternative 2: You initially chose an answer other than the wrong one (chance of this happening is $\frac34$). -Now we can analyze this like the Monty-Hall problem. -Possibility 1: you initially chose the correct answer (chance is $\frac13$). You switch answers and are now wrong. -Possibility 2: you initially chose a wrong answer (chance is $\frac23$). You switch answers and are now correct with chance $\frac12$. -So your chance of being correct for alternative 2 is: $\frac13 * 0 + \frac23 * \frac12 = \frac13$. -Conclusion -Your chances of getting the right answer are always $\frac13$. This is the same as if you had immediately read the question, eliminated the answer you know is wrong, and chosen one of the remaining at random.<|endoftext|> -TITLE: How to show a function can or cannot be extended to a compactification? -QUESTION [11 upvotes]: This comes from Munkres 38.2. Let $Y$ be the compactification of $(0,1)$ induced by $h(x) = (x,\sin(1/x))$. Show that $g(x) = \cos(1/x)$ cannot be extended to this compactification $Y$. -Also I wonder in general how to show a function can or cannot be extended to a compactification induced by some other function. -Here is my attempted solution under the hint of Brian Scott: -Let $Y$ denote the compactification of $(0,1)$ induced by $h(x) = (x,\sin (1/x))$. Let $Y_0$ be $H(Y)$ where $H$ is the extension of $h$. Suppose we can extend $g$ to a continuous function $G: Y\to \mathbb{R}$, then the function $G\circ H^{-1}: Y_0 \to \mathbb{R}$ is also continuous. Consider the sequence $\{h(\frac{1}{k\pi})\}_{k\in\mathbb{Z}_+} = \{(\frac{1}{k\pi}, 0)\}_{k\in\mathbb{Z}_+}$ in $Y_0$; then $h(\frac{1}{k\pi}) \to (0,0)$ in $Y_0$, and by continuity of $G\circ H^{-1}$ we must have $\lim_{k\to\infty}G(H^{-1}(h(\frac{1}{k\pi}))) = \lim_{k\to\infty} G(\frac{1}{k\pi}) = G(0)$.
However, $G(H^{-1}(h(\frac{1}{k\pi}))) = g(\frac{1}{k\pi}) = (-1)^k$ does not converge, which is a contradiction. Hence $g$ cannot be extended to a continuous map on $Y$. - -REPLY [7 votes]: HINT: Consider the sequence $\langle x_n:n\in\Bbb Z^+\rangle$ in $(0,1)$, where $x_n=\frac1{n\pi}$. - -What is the sequence $\langle h(x_n):n\in\Bbb Z^+\rangle$? Does it converge in $Y$? -What is the sequence $\langle g(x_n):n\in\Bbb Z^+\rangle$? - -Remember, if we can extend $g$ to a function $G:Y\to\Bbb R$, we must have $G\big(h(x)\big)=g(x)$ for each $x\in(0,1)$. And in order for $G$ to be continuous, it must be true that if a sequence $\langle y_n:n\in\Bbb Z^+\rangle$ in $Y$ converges to some point $y\in Y$, then $\langle G(y_n):n\in\Bbb Z^+\rangle$ converges to $G(y)$.<|endoftext|> -TITLE: How can I convert this unique string of characters into a unique number? -QUESTION [5 upvotes]: I have an unusual programming problem and the math side of it has me stumped. It's probably a simple answer but math isn't my strongest area. -I've generated a unique string of 7 characters which are each randomly selected from these possibilities: ABCDEFGHIJKLMNOPQRSTUVWXYZ123456789 (for example, A6HJ92B), and I need to convert it to a unique number value. When converted, no two versions of this random string can be the same number. -I could just generate a number rather than including letters in the original id, but of course that means I have to increase the length of my string, and it's possible that the user of my application may want to type this string, as it identifies his "session" in an application, so I want to keep it short. -So my idea was to build a table like this: -A : 1, -B : 2, -C : 3, -D : 4, -E : 5, -F : 6, -G : 7, -H : 8, - -... you get the idea ... - -5 : 31, -6 : 32, -7 : 33, -8 : 34, -9 : 35 - -And then I'd add all of the numbers up... 
-A6HJ92B: -A : 1 -6 : 32 -H : 8 -J : 10 -9 : 35 -2 : 28 -B : 2 - -1+32+8+10+35+28+2 = 116 -...but I realized this is a flawed idea because many possible strings will "collide" or equal the same number. I need each unique string to equal a unique number. -So even if I multiplied each character's value (1*32*8*10*35*28*2 = 5,017,600), I'm thinking there might be possible collisions there too. -Is there a way to calculate this in a way that eliminates collisions? If the collisions can't be eliminated, what methods can I use to minimize them? - -REPLY [2 votes]: Hope this java program helps you (it treats the string as the digits of a base-36 number, so two different strings can never map to the same value): - int i=0; - long total=0; // long, since 36^7 overflows an int - String[] input = new String[35]; - String inputString = "A6HJ92B"; - char[] inputChar = inputString.toCharArray(); - for(char a = 'A' ; a<='Z' ; a++ ){ - i++; - input[i-1] = a+":"+i; - } - for(char b = '1';b<='9';b++){ - i++; - input[i-1] = String.valueOf(b)+":"+i; - } - - for(int k=0;k<inputChar.length;k++){ - // look up the 1..35 value of the k-th character in the table above - int value = 0; - for(int j=0;j<input.length;j++){ - if(input[j].charAt(0)==inputChar[k]){ - value = Integer.parseInt(input[j].substring(2)); - break; - } - } - // positional weighting makes the encoding collision-free - total = total*36 + value; - } - System.out.println(total);<|endoftext|> -TITLE: Relationship Between Connections on a Vector Bundle and a Riemannian Base -QUESTION [6 upvotes]: I'm starting to get acquainted with how to define affine connections on a vector bundle. Suppose $\pi: E \to M$ is a rank $k$ vector bundle over a Riemannian manifold $M$ with metric $g$, where we will denote $g(u,v)|_p = \langle u,v\rangle_p$ for all $p \in M$ and $u,v \in T_pM$. -I know we can define an affine connection on $E$ as a mapping $\nabla: \Gamma(TM) \times \Gamma(E) \to \Gamma(E)$ (where $\Gamma(X)$ denotes the vector space of smooth global sections of a bundle $X$) such that the following conditions hold: - -Denoting $\nabla(X,s) = \nabla_Xs$ we have lower linearity: $\nabla_{fX+Y}s = f\nabla_Xs + \nabla_Ys$ where $f \in C^\infty(M)$. -Derivational property: given $f \in C^\infty(M)$ we have $\nabla_X(fs) = X(f)s + f\nabla_Xs$. - -As a reference I'm looking at some informal papers; primarily this and some of this. I've also checked out the book The Geometry of Jet Bundles.
So here's my question: - -Given that the base of the bundle $M$ is a Riemannian manifold with a unique compatible connection $\nabla^M$, is there a canonical/obvious way to define a metric (and subsequently a connection $\nabla^E$) on the bundle space $E$ such that $\nabla^E$ is an extension of (or in some way compatible with) the connection $\nabla^M$? - -Full explanations or reference suggestions are all welcome. Thanks in advance! - -REPLY [4 votes]: An affine connection on a manifold $M$ is a connection on $TM$, so no affine connection exists on an arbitrary vector bundle. I'd just call it a connection on the vector bundle $E$. -Now, when we have a fixed metric $g$ on a vector bundle $E\to M$, there is a Riemannian connection $\nabla^E$, which actually depends on $g$. -The space of connections on a vector bundle is actually an affine space, that is, if $\nabla_1, \nabla_2$ are connections on $E$ then $\lambda \nabla_1+(1-\lambda)\nabla_2$ is again a connection on $E$. This is easy to check: $C^{\infty}(M)$-linearity on one variable, and Leibniz rule on the other, i.e. $\nabla_{fX}(e)=f\nabla_X(e)$, and $\nabla_X(fe)=f\nabla_X(e)+X(f)e$ (and good behaviour w.r.t. the sum of fields $X_1+X_2$ and sections of $E$, $e_1+e_2$, of course). -Likewise, the vector space associated to the affine space of connections on $E$ is precisely $\Gamma(\Omega^1_M \otimes E)$, i.e. the vector space of $E$-valued differential forms of degree one on $M$. This follows from the fact that, if $\nabla_i$ are connections, then $\nabla_2-\nabla_1$ is $C^{\infty}$-linear on both variables, i.e. $\nabla_2-\nabla_1$ is an $E$-valued tensor, as specified above. -So, you need only fix one metric on $E$ to ensure existence of a connection on $E$. Alternatively, you can do it by piecing together connections you make up in different charts, then use partitions of unity (that's how Chern, Chen, Lam do it, if I'm not mistaken). Then all others are obtained by summing $E$-valued differential forms.
I prefer piecing together metrics you build on different charts into one metric (which is shown in most textbooks), also through partitions of unity, and then to use uniqueness of the metric connection (the proof for $E$ arbitrary is similar to that shown in Do Carmo, Riemannian Geometry in the case where $E=TM$). -One final remark. There are many, many sections to the bundle $\Omega^1_M\otimes E$. It suffices to choose a trivialising open set $U$, then another contractible, well-chosen open subset $V\subset U$ and then one can build sections that take a prescribed value on $V$ and are $0$ outside $U$ (standard partition-of-unity techniques). -To sum it all up: no, unless there is some extra feature in the geometry of $E$, there is no preferred metric on $E$, but you're actually dealing with an affine space of connections with no preferred point (i.e. no origin of your preference unless some other elements enter the picture). The associated vector space is that of the global sections of $\Omega^1_M\otimes E$, and so once you establish the existence of one connection you may produce all the others by the usual procedure $E\to A$ once you fix a point $p\in A$, i.e. $\overrightarrow{v}\mapsto p+\overrightarrow{v}$, as is the case with every affine space $A$ and associated vector space $E$. -Hope this helps.<|endoftext|> -TITLE: What is the geometric relationship between $A$ and $A^T$? -QUESTION [7 upvotes]: Posed a more specific way: -Let $A \in \mathbb R ^ {m \times n}$ and $S_k$ be the unit $k$-sphere. -What is the exact geometric relationship between $E_m = \{ A\vec x \mid \vec x \in S_m \}$ and $E_n = \{ A^T\vec y \mid \vec y \in S_n \}$? - -REPLY [3 votes]: For $A \in \mathbb{R}^{n\times m}$ with $m>n$ define the SVD as -$$A=U[\Sigma\;0]V^T$$ -Since the sphere is unaffected by rotations, -$$E_m =\{U [\Sigma\;0] \vec{x} \mid \vec{x} \in S_m\}= \{U \Sigma \vec{x} \mid \vec{x} \in B_n\} $$ -where $B_n$ is the $n$-ball. 
Therefore $E_m$ is the $n$-ball scaled by $\Sigma$ and rotated by $U$. For $A^T$ -$$E_n =\left\{V \left[{\Sigma \atop 0}\right] \vec{x} \mid \vec{x} \in S_n\right\}$$ -we have a scaling of the $n$-sphere by $\Sigma$, which is then rotated into $m$-dimensional space by $V$. -If we ignore rotation/embedding in space, both objects have the same bounding shape, but $E_m$ is filled while $E_n$ is not.<|endoftext|> -TITLE: A Noetherian integral domain is a UFD iff $(f):(g)$ is principal -QUESTION [6 upvotes]: Let $R$ be a Noetherian integral domain. For $f, g \in R$, define $(f):(g)=\{h \in R \mid hg \in (f) \}$. Show that $R$ is a UFD if and only if $(f):(g)$ is principal for all $f,g \in R$. - -It is easy to show that $(f):(g)$ is an ideal. For the forward direction, I suspect that I need to use the fact that $R$ is Noetherian to show that $(f):(g)$ is principal. For the reverse direction, I know that if every $(f):(g)$ is principal then the ring $R$ is a PID. But I don't know how to proceed. Any ideas? - -REPLY [6 votes]: I don't really understand what you mean by "I know that if every $(f):(g)$ is principal then the ring $R$ is a PID": you seem to imply that every noetherian UFD is a PID, which is false. -The first implication (assuming $R$ is a UFD) is actually true even if $R$ is not noetherian. Indeed, if $R$ is a UFD, then write $f = u\prod p_i^{a_i}$ and $g=v\prod p_i^{b_i}$. Then for any $h = w\prod p_i^{c_i}$, you get $hg\in (f)$ iff $\forall i,$ $b_i + c_i \geqslant a_i$. Then putting $d_i = \max(0 ; a_i-b_i)$ you get $(f):(g) = (\prod p_i^{d_i})$. -As for the second implication, you only need that $R$ admits an irreducible factor decomposition (which of course is always true when $R$ is noetherian). In this case it is well-known that $R$ is a UFD (meaning that the decomposition is unique) if and only if irreducible elements satisfy the Gauss (or Euclid, or whatever) lemma: $p\mid ab$ implies $p\mid a$ or $p\mid b$.
-First observe that if $(f):(g) = (x)$ then since $fg \in (f)$ you get $f\in (f):(g) = (x)$ and hence $x\mid f$. So if $p$ is irreducible, and $(p):(a) = (x)$, you get $x\mid p$, and thus $(x) = (1)$ or $(x) = (p)$. The first case happens if and only if $p\mid a$ by definition. -Now assume $p\mid ab$ and $p\nmid b$. Then $ab\in (p)$ so by definition $a\in (p):(b)$, but $(p):(b)=(p)$ by the previous observation, so $p\mid a$. This is the Gauss lemma.<|endoftext|> -TITLE: Show $\frac{3997}{4001}>\frac{4996}{5001}$ -QUESTION [9 upvotes]: I wish to show that $$\frac{3997}{4001}>\frac{4996}{5001}.$$ -Of course, with a calculator, this is incredibly simple. But is there any way of showing this through pure analysis? So far, I just rewrote the fractions: -$$\frac{4000-3}{4000+1}>\frac{5000-4}{5000+1}.$$ -REPLY [2 votes]: Begin by observing that $\frac{3996}{4000}=\frac{4995}{5000}$, and think of these fractions as $\frac{\mbox{wins}}{\mbox{games played}}$ for chess players $A$ ($3996$ wins) and $B$ ($4995$ wins). One additional win will do more to improve player $A$’s win percentage than it will player $B$’s.<|endoftext|> -TITLE: A complex analysis problem to prove an inequality -QUESTION [11 upvotes]: $\textbf{Problem.}$ Suppose $f$ is a holomorphic function on $\{z\in\mathbb{C}:|z|<1\}$, the open unit disk, with the property that Re$f(z)>0$ for every point $z$ in the disk. Prove that $|f'(0)|\leq 2\text{Re}f(0)$. -This is a problem that appeared on a qualifying exam in some graduate school. What I tried is, define $\displaystyle\varphi(z)=\frac{z-1}{z+1}$, then this sends the open right half plane onto the open unit disk, so let $\hat{f}=\varphi\circ f(z)$, then its image is contained in the open unit disk.
And I wanted to use the Schwarz lemma, but then the origin had to be fixed, so define $\displaystyle\psi(z)=\frac{\hat{f}(0)-z}{1-\overline{\hat{f}(0)}z}$ and let $\tilde{f}=\psi\circ\hat{f}$, then the image of this is still contained in the open unit disk and it fixes the origin so the Schwarz lemma $|(\tilde{f})'(0)|\leq 1$ could be applied. But after all the calculation, I got -$$|f'(0)|\leq\frac{|f(0)+\overline{f(0)}+2|f(0)|^{2}|^{2}}{2|f(0)+1|^{2}|f(0)|}$$ -but, for example, if the magnitude of the imaginary part of $f(0)$ is quite bigger than the magnitude of the real part of $f(0)$, then this doesn't result the desired inequality but actually means that the desired inequality is wrong instead. I also thought about composing some different function to the RHS of $\hat{f}$ and use the Schwarz lemma but it didn't seem to work. -Maybe I should try something else, but what could be tried instead? - -REPLY [4 votes]: Let $w=f(0)$, and consider the map $\phi(z)=\frac{z-w}{z+\overline{w}}$. Since $-\overline{w}$ has the same imaginary part as $w$ and lies in the left half-plane, $\phi$ maps $\{z:\Re z>0\}$ into the unit disk, with $\phi(w)=0$. -Therefore if $g(z)=\phi(f(z))$, then $g$ maps the unit disk into itself, with $g(0)=0$. Therefore $|g^{\prime}(0)|\leq 1$ by the Schwarz lemma. -However, $g^{\prime}(0)=\phi^{\prime}(w)f^{\prime}(0)$ and $$\phi^{\prime}(z)=\frac{z+\overline{w}-(z-w)}{(z+\overline{w})^2}=\frac{2\Re w}{(z+\overline{w})^2}$$ -hence $\phi^{\prime}(w)=\frac{2\Re w}{(2\Re w)^2}=\frac{1}{2\Re w}$. Therefore $|g^{\prime}(0)|\leq 1$ implies that $|f^{\prime}(0)|\leq |2\Re w|=2\Re(f(0))$.<|endoftext|> -TITLE: Number of homomorphisms from a stem field to a given field -QUESTION [9 upvotes]: This is a homework, but I've generalized it as possible in order not to have exact answer rather that to understand the very principle of solution. -The problem is following: consider $\mathbb{K}$ a field and $E$ an extension field of $\mathbb{K}$. 
For a given irreducible polynomial $P(x)$ from the ring $\mathbb K[x]$ find the number of homomorphisms from the stem field for $P(x)$ to the field $E$. -A stem field for an irreducible polynomial $P$ in $\mathbb{K}[x]$ is a pair $(F,\alpha)$, where $\alpha$ is a root of $P$ and $F$ is an extension of $\mathbb{K}$, i.e. $F = \mathbb{K}[\alpha]$ and $P(\alpha)=0$. -My understanding is the following: - -Any stem field $F$ is isomorphic to $\dfrac{\mathbb K[x]}{(P(x))}$ -The number of homomorphisms from $\dfrac{\mathbb K[x]}{(P(x))}$ to $E$ is equal to the number of roots of this particular polynomial in $E$. - -Example: if $\mathbb{K} = \mathbb{Q}$ and $P(x)$ has $n$ roots in $\mathbb R$ (real roots) and $m$ complex (strictly non-real) roots, then the number of homomorphisms to $\mathbb R$ is $n$ and the number of homomorphisms to $\mathbb C$ is $n+m$. -Is my understanding correct at all? If not, can you give me a hint about which direction I should look in? - -REPLY [3 votes]: Your understanding is correct, except for one important assumption you've left out. What you say is correct provided that $\mathbb{K}$ is a subfield of $E$ and you are only considering homomorphisms $F\to E$ which are the identity on $\mathbb{K}$. In general, to define a homomorphism $\mathbb{K}[x]/(P(x))\to E$, you have to first choose a homomorphism $\varphi:\mathbb{K}\to E$, and then choose an element of $E$ which is a root of the polynomial obtained by applying $\varphi$ to the coefficients of $P$. When you require $\varphi$ to be the inclusion map, this means your only choice is an element of $E$ which is a root of $P$. But if you do not assume this, there may be many more homomorphisms $\mathbb{K}[x]/(P(x))\to E$ that do something else on $\mathbb{K}$.<|endoftext|> -TITLE: Finding Maximum Area of a Rectangle in an Ellipse -QUESTION [12 upvotes]: Question: A rectangle and an ellipse are both centred at $(0,0)$.
- The vertices of the rectangle are concurrent with the ellipse as shown - -Prove that the maximum possible area of the rectangle occurs when the x coordinate of - point $P$ is $x = \frac{a}{\sqrt{2}} $ - - -What I have done -Let the equation of the ellipse be -$$ \frac{x^2}{a^2} + \frac{y^2}{b^2} = 1$$ -Solving for $y$, -$$ y = \sqrt{ b^2 - \frac{b^2x^2}{a^2}} $$ -Let the area of the rectangle be $4xy$ -$$ A = 4xy $$ -$$ A = 4x(\sqrt{ b^2 - \frac{b^2x^2}{a^2}}) $$ -$$ A'(x) = 4(\sqrt{ b^2 - \frac{b^2x^2}{a^2}}) + 4x\left( (b^2 - \frac{b^2x^2}{a^2})^{\frac{-1}{2}} \times \frac{-2b^2x}{a^2} \right) $$ -$$ A'(x) = 4\sqrt{ b^2 - \frac{b^2x^2}{a^2}} + \frac{-8x^2b^2}{\sqrt{ b^2 - \frac{b^2x^2}{a^2}}a^2} = 0 $$ -$$ 4a^2\left(b^2 - \frac{b^2x^2}{a^2} \right) - 8x^2b^2 = 0 , \quad a^2\sqrt{ b^2 - \frac{b^2x^2}{a^2}} \neq 0 $$ -$$ 4a^2\left(b^2 - \frac{b^2x^2}{a^2} \right) - 8x^2b^2 = 0 $$ -$$ 4a^2b^2 - 4b^2x^2 - 8x^2b^2 = 0 $$ -$$ 4a^2b^2 - 12x^2b^2 = 0 $$ -$$ 12x^2b^2 = 4a^2b^2 $$ -$$ x^2 = \frac{a^2}{3} $$ -$$ x = \frac{a}{\sqrt{3}} , x>0 $$ -Where did I go wrong? -Edit: The duplicate question is the same, but both posts have different approaches on how to solve it, so I don't think it should be marked as a duplicate. - -REPLY [3 votes]: The ellipse $\dfrac{x^2}{a^2} + \dfrac{y^2}{b^2} = 1$ is a circle of radius $a$ in $(x,\hat y)$ coordinates, where $\hat y=\dfrac{a}{b}y$. This transformation multiplies areas by the constant $\dfrac{a}{b}$, so the problem is equivalent to finding the rectangle of maximum area in a circle, which is well-known to be a square. -Or, looked at another way (pun intended), this ellipse is what you see if you look at the circle of radius $a$ in the $x-y$ plane from just the right angle instead of from directly above. When you see what appears to be an inscribed rectangle in the ellipse of maximum area, what you’re looking at is an inscribed rectangle in the circle of maximum area.<|endoftext|> -TITLE: Are these two sequences the same? 
-QUESTION [11 upvotes]: I was browsing OEIS and came across the largely composite numbers, A067128, defined as the natural numbers that have at least as many divisors as all smaller natural numbers. (They are of course related to the highly composite numbers.) -A comment on the OEIS page asks whether the largely composite numbers are the same as A034287, the numbers $n$ such that the product of the divisors of $n$ is larger than for all smaller natural numbers. In reply, another comment says that the two sequences are the same for all terms less than $10^{150}$, of which there are 105834. -My questions are: - -Are these two sequences the same, or do they differ at some point after the 105834th term? -If they do differ, is there a nice way to see why the two sequences should be the same for such a large range of initial values? - -REPLY [6 votes]: Begun actual work on the thing. The product of divisors has, of course, the same prime factors as the original number. What I did not know is that, if the original exponent is $a$ and the number of divisors of the number is $d(n),$ then the new exponent of that prime (in the product of divisors) is -$$ a \, d(n) / 2. $$ -This gives the first hint of how a large number of divisors tends to give a large product of divisors, in a tightly controlled manner. Put another way, if the original number is $n$ and the product of all divisors is $P,$ then -$$ P = n^{d(n)/2} $$ -Therefore, if $n$ has at least as many divisors as all smaller numbers, then $P$ is guaranteed strictly larger than all previous values for $P.$ So, we have that A067128 is contained in A034287, maybe strictly, or maybe the sequences are equal. -THEOREM a largely composite number has a product of divisors that is strictly larger than such products for all smaller numbers. 
-Approaches for the other direction: if the assumption is that $n$ sets a new record for product of divisors, we are saying that, for all $m < n,$ -$$ d(n) > \left( \frac{\log m}{\log n} \right) d(m). $$ -We do have explicit upper bounds on the size of $d(n)$ due to Nicolas and Robin; the important thing is how very small these bounds are. It is possible that numbers setting new divisor product records are so frequent that, when $m$ is the previous element in that list, $ \left( \frac{\log m}{\log n} \right) d(m) > d(m) - 1. $ That would do it; maybe it is true. I will, at least, experiment with that. OH, WELL. The conjectured inequality does not appear to be true, or even true for sufficiently large numbers. On the other hand, we appear to get the promising $ \left( \frac{\log m}{\log n} \right) d(m) > d(m) - 3. $ Worth playing with this computer conjecture, because $d(m)$ is even unless $m$ itself is a square. NOPE. The $3$ does not hold up either. Here are the smallest numbers where the difference exceeds 2.0. The way this is going, I think either finding a number on one list but not the other, or a proof the lists are the same, would be a fair amount of effort. 
-7560 = ( 3, 3, 1, 1 ) prod = ( 96, 96, 32, 32, ) number of divisors 64 prev 60 57.2759 diff 2.7241 -131040 = ( 5, 2, 1, 1, 1 ) prod = ( 360, 144, 72, 72, 72, ) number of divisors 144 prev 144 141.958 diff 2.04152 -196560 = ( 4, 3, 1, 1, 1 ) prod = ( 320, 240, 80, 80, 80, ) number of divisors 160 prev 160 157.807 diff 2.1929 -262080 = ( 6, 2, 1, 1, 1 ) prod = ( 504, 168, 84, 84, 84, ) number of divisors 168 prev 168 165.751 diff 2.24945 -327600 = ( 4, 2, 2, 1, 1 ) prod = ( 360, 180, 180, 90, 90, ) number of divisors 180 prev 180 177.632 diff 2.36778 -655200 = ( 5, 2, 2, 1, 1 ) prod = ( 540, 216, 216, 108, 108, ) number of divisors 216 prev 216 213.306 diff 2.69428 -831600 = ( 4, 3, 2, 1, 1 ) prod = ( 480, 360, 240, 120, 120, ) number of divisors 240 prev 240 237.48 diff 2.51955 -942480 = ( 4, 2, 1, 1, 1, 1 ) prod = ( 480, 240, 120, 120, 120, 120, ) number of divisors 240 prev 240 237.816 diff 2.18367 -1330560 = ( 7, 3, 1, 1, 1 ) prod = ( 896, 384, 128, 128, 128, ) number of divisors 256 prev 256 252.23 diff 3.76961 -1663200 = ( 5, 3, 2, 1, 1 ) prod = ( 720, 432, 288, 144, 144, ) number of divisors 288 prev 288 285.123 diff 2.87715 - -Sample, just the exponents, not the primes themselves: -2 = ( 1 ) prod = ( 1 ) number of divisors 2 -3 = ( 1 ) prod = ( 1 ) number of divisors 2 -4 = ( 2, ) prod = ( 3, ) number of divisors 3 -6 = ( 1, 1 ) prod = ( 2, 2, ) number of divisors 4 -8 = ( 3, ) prod = ( 6, ) number of divisors 4 -10 = ( 1, 1 ) prod = ( 2, 2, ) number of divisors 4 -12 = ( 2, 1 ) prod = ( 6, 3, ) number of divisors 6 -18 = ( 1, 2, ) prod = ( 3, 6, ) number of divisors 6 -20 = ( 2, 1 ) prod = ( 6, 3, ) number of divisors 6 -24 = ( 3, 1 ) prod = ( 12, 4, ) number of divisors 8 -30 = ( 1, 1, 1 ) prod = ( 4, 4, 4, ) number of divisors 8 -36 = ( 2, 2, ) prod = ( 9, 9, ) number of divisors 9 -48 = ( 4, 1 ) prod = ( 20, 5, ) number of divisors 10 -60 = ( 2, 1, 1 ) prod = ( 12, 6, 6, ) number of divisors 12 -72 = ( 3, 2, ) prod = ( 18, 12, ) number of divisors 12 -84 
= ( 2, 1, 1 ) prod = ( 12, 6, 6, ) number of divisors 12 -90 = ( 1, 2, 1 ) prod = ( 6, 12, 6, ) number of divisors 12 -96 = ( 5, 1 ) prod = ( 30, 6, ) number of divisors 12 -108 = ( 2, 3, ) prod = ( 12, 18, ) number of divisors 12 -120 = ( 3, 1, 1 ) prod = ( 24, 8, 8, ) number of divisors 16 -168 = ( 3, 1, 1 ) prod = ( 24, 8, 8, ) number of divisors 16 -180 = ( 2, 2, 1 ) prod = ( 18, 18, 9, ) number of divisors 18 -240 = ( 4, 1, 1 ) prod = ( 40, 10, 10, ) number of divisors 20 -336 = ( 4, 1, 1 ) prod = ( 40, 10, 10, ) number of divisors 20 -360 = ( 3, 2, 1 ) prod = ( 36, 24, 12, ) number of divisors 24 -420 = ( 2, 1, 1, 1 ) prod = ( 24, 12, 12, 12, ) number of divisors 24 -480 = ( 5, 1, 1 ) prod = ( 60, 12, 12, ) number of divisors 24 -504 = ( 3, 2, 1 ) prod = ( 36, 24, 12, ) number of divisors 24 -540 = ( 2, 3, 1 ) prod = ( 24, 36, 12, ) number of divisors 24 -600 = ( 3, 1, 2, ) prod = ( 36, 12, 24, ) number of divisors 24 -630 = ( 1, 2, 1, 1 ) prod = ( 12, 24, 12, 12, ) number of divisors 24 -660 = ( 2, 1, 1, 1 ) prod = ( 24, 12, 12, 12, ) number of divisors 24 -672 = ( 5, 1, 1 ) prod = ( 60, 12, 12, ) number of divisors 24 -720 = ( 4, 2, 1 ) prod = ( 60, 30, 15, ) number of divisors 30 -840 = ( 3, 1, 1, 1 ) prod = ( 48, 16, 16, 16, ) number of divisors 32 -1080 = ( 3, 3, 1 ) prod = ( 48, 48, 16, ) number of divisors 32 -1260 = ( 2, 2, 1, 1 ) prod = ( 36, 36, 18, 18, ) number of divisors 36 -1440 = ( 5, 2, 1 ) prod = ( 90, 36, 18, ) number of divisors 36 -1680 = ( 4, 1, 1, 1 ) prod = ( 80, 20, 20, 20, ) number of divisors 40 -2160 = ( 4, 3, 1 ) prod = ( 80, 60, 20, ) number of divisors 40 -2520 = ( 3, 2, 1, 1 ) prod = ( 72, 48, 24, 24, ) number of divisors 48 -3360 = ( 5, 1, 1, 1 ) prod = ( 120, 24, 24, 24, ) number of divisors 48 -3780 = ( 2, 3, 1, 1 ) prod = ( 48, 72, 24, 24, ) number of divisors 48 -3960 = ( 3, 2, 1, 1 ) prod = ( 72, 48, 24, 24, ) number of divisors 48 -4200 = ( 3, 1, 2, 1 ) prod = ( 72, 24, 48, 24, ) number of divisors 48 -4320 = ( 5, 3, 1 ) 
prod = ( 120, 72, 24, ) number of divisors 48 -4620 = ( 2, 1, 1, 1, 1 ) prod = ( 48, 24, 24, 24, 24, ) number of divisors 48 -4680 = ( 3, 2, 1, 1 ) prod = ( 72, 48, 24, 24, ) number of divisors 48 -5040 = ( 4, 2, 1, 1 ) prod = ( 120, 60, 30, 30, ) number of divisors 60 -7560 = ( 3, 3, 1, 1 ) prod = ( 96, 96, 32, 32, ) number of divisors 64 -9240 = ( 3, 1, 1, 1, 1 ) prod = ( 96, 32, 32, 32, 32, ) number of divisors 64 -10080 = ( 5, 2, 1, 1 ) prod = ( 180, 72, 36, 36, ) number of divisors 72 -12600 = ( 3, 2, 2, 1 ) prod = ( 108, 72, 72, 36, ) number of divisors 72 -13860 = ( 2, 2, 1, 1, 1 ) prod = ( 72, 72, 36, 36, 36, ) number of divisors 72 -15120 = ( 4, 3, 1, 1 ) prod = ( 160, 120, 40, 40, ) number of divisors 80 -18480 = ( 4, 1, 1, 1, 1 ) prod = ( 160, 40, 40, 40, 40, ) number of divisors 80 -20160 = ( 6, 2, 1, 1 ) prod = ( 252, 84, 42, 42, ) number of divisors 84 -25200 = ( 4, 2, 2, 1 ) prod = ( 180, 90, 90, 45, ) number of divisors 90 -27720 = ( 3, 2, 1, 1, 1 ) prod = ( 144, 96, 48, 48, 48, ) number of divisors 96 -30240 = ( 5, 3, 1, 1 ) prod = ( 240, 144, 48, 48, ) number of divisors 96 -32760 = ( 3, 2, 1, 1, 1 ) prod = ( 144, 96, 48, 48, 48, ) number of divisors 96 -36960 = ( 5, 1, 1, 1, 1 ) prod = ( 240, 48, 48, 48, 48, ) number of divisors 96 -37800 = ( 3, 3, 2, 1 ) prod = ( 144, 144, 96, 48, ) number of divisors 96 -40320 = ( 7, 2, 1, 1 ) prod = ( 336, 96, 48, 48, ) number of divisors 96 -41580 = ( 2, 3, 1, 1, 1 ) prod = ( 96, 144, 48, 48, 48, ) number of divisors 96 -42840 = ( 3, 2, 1, 1, 1 ) prod = ( 144, 96, 48, 48, 48, ) number of divisors 96 -43680 = ( 5, 1, 1, 1, 1 ) prod = ( 240, 48, 48, 48, 48, ) number of divisors 96 - -=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=<|endoftext|> -TITLE: Trouble understanding how this identity is derived: $\sum_{j=0}^{\infty}\binom{a+j}{j}x^j=(1-x)^{-a-1}$ -QUESTION [5 upvotes]: $$\sum_{j=0}^{\infty}\binom{a+j}{j}x^j=(1-x)^{-a-1}$$ -The $-a-1$ is throwing me off. 
Can anyone help me understand this identity? -I have tried letting $m=-a-1$ and then applying the binomial theorem, and letting the sum run up to $\infty$ since anything past $m$ will be $0$. But I didn't get anywhere because we'll still have $(-1)^i$ in the sum. -Since there is the $\infty$ in the sum, I have also tried thinking about it in terms of generating functions but can't get anywhere. - -REPLY [5 votes]: First, in the binomial identity -$$(1+u)^m=\sum_{j=0}^\infty\binom mju^j$$ -set $u=-x$ and $m=-a-1$ to obtain -$$(1-x)^{-a-1}=\sum_{j=0}^\infty\binom{-a-1}j(-x)^j=\sum_{j=0}^\infty(-1)^j\binom{-a-1}jx^j.\tag{1}$$ -Next, use the identity -$$\binom{-t}j=(-1)^j\binom{t+j-1}j\tag{2}$$ -with $t=a+1$ to obtain -$$(-1)^j\binom{-a-1}j=(-1)^j(-1)^j\binom{a+j}j=\binom{a+j}j.\tag{3}$$ -From (1) and (3) we get -$$\boxed{(1-x)^{-a-1}=\sum_{j=0}^\infty\binom{a+j}jx^j.}$$ -P.S. The identity (2) follows directly from the definition of the binomial coefficient: -$$\binom{-t}j=\frac{(-t)(-t-1)(-t-2)\cdots(-t-j+1)}{j!}=(-1)^j\cdot\frac{t(t+1)(t+2)\cdots(t+j-1)}{j!}=(-1)^j\binom{t+j-1}j.$$<|endoftext|> -TITLE: Expected value of $\log(\det(AA^T))$ -QUESTION [6 upvotes]: Consider a uniformly random $n$ by $n$ matrix $A$ where $A_{i,j} \in \{-1,1\}$. We know that with high probability $A$ is non-singular. Are there known estimates or bounds for -$$\mathbb{E}(\log(\det(AA^T)))\;?$$ - -REPLY [3 votes]: For finite $n$, the expected value of the log determinant is infinite as noted by D.A.N. because the matrix is with positive probability singular. Instead, you can talk about things like the second moment of the determinant, the limiting distribution of the log determinant, and tail bounds saying you're unlikely to be far from it. -Second Moment: It turns out (as was first observed in the 1940s by Turan) that $E(\det(A)^2)$ is much easier to analyze than $E(|\det A|)$.
This is because -\begin{eqnarray*} -E(\det(A)^2) &=& E(\sum_{\sigma} \sum_\tau (-1)^{sgn (\sigma) + sgn(\tau)} \prod_{i=1}^n a_{i,\sigma(i)} a_{i, \tau(i)})\\ -&=& \sum_{\sigma} \sum_{\tau} (-1)^{sgn (\sigma) + sgn(\tau)} E( \prod_{i=1}^n a_{i, \sigma(i)} a_{i, \tau(i)}) -\end{eqnarray*} -If $\sigma \neq \tau$, then somewhere in that product there's a variable that appears exactly once. It has mean $0$, so the entire product has expectation $0$. If on the other hand $\sigma=\tau$, then every term in the product is equal to $1$. There are $n!$ choices for $\sigma$, so we have -$$E (\det (A)^2 ) = n!$$ -Limiting Distribution: The asymptotic distribution of the log determinant is known to be Gaussian, in the sense that for any finite $t$, we have -$$P\left( \frac{ \log \det (A A^T ) - \log ( (n-1)! ) }{\sqrt{ 2\log n}} < t \right) \rightarrow \Phi(t)$$ -This was originally published by Girko in 1997, but Girko's proof is opaque and seems to skirt some technical details along the way. Later results of Nguyen and Vu and Bao, Pan, and Zhou fill in the gaps and give a more transparent proof. -One curious thing here is that the Gaussian distribution is centered around $(n-1)!$ instead of $n!$. It follows from the central limit theorems that the determinant of $AA^T$ is with high probability $(n-1)! e^{O(\sqrt{ \log n} )}$. But as we saw above, $E(\det(AA^T))=n!$, which lies outside this interval! The main contribution to the expectation comes from the far upper tail of the determinant distribution. -Tail Bounds: The Nguyen and Vu paper gives a bound on the rate of convergence that says that the difference between the two sides in the above equation is at most $\log^{-1/3+o(1)} n$. However, this doesn't give a very strong bound on the probability the determinant is very far from the mean.
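(Aside on the Second Moment computation above: the identity $E(\det(A)^2)=n!$ can be confirmed exactly by enumerating all sign matrices for small $n$. A brute-force Python sketch, not part of the original answer:)

```python
from itertools import product
from math import factorial

def det(M):
    """Integer determinant via Laplace expansion along the first row."""
    if len(M) == 1:
        return M[0][0]
    return sum(
        (-1) ** j * M[0][j] * det([row[:j] + row[j + 1:] for row in M[1:]])
        for j in range(len(M))
    )

def mean_det_squared(n):
    """Average of det(A)^2 over all 2^(n^2) matrices with entries in {-1, 1}."""
    total, count = 0, 0
    for entries in product((-1, 1), repeat=n * n):
        A = [list(entries[i * n:(i + 1) * n]) for i in range(n)]
        total += det(A) ** 2
        count += 1
    return total / count

# The average matches n! exactly for each small n.
for n in (1, 2, 3):
    assert mean_det_squared(n) == factorial(n)
```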
In this range one slightly stronger bound is due to Vu and myself, who showed that for any $B>0$ there is a $C>0$ such that -$$P\left(|\log \det(AA^T) - \log n!| > C n^{2/3} \log n\right) \leq n^{-B}$$ -This bound is probably very far from optimal -- the limiting distribution above had a scaling window proportional to $\sqrt{\log n}$, but now we're looking at deviations on the order of $n^{2/3}$. I actually suspect that either it should be possible to extract a stronger large deviation result from the proofs of the above central limit theorems, or that someone has already proven such a result, but I don't know of one offhand. Döring and Eichelsbacher give a much stronger bound on the tail in the case where the entries are iid Gaussian instead of $\pm 1$ (see the last section of their paper).<|endoftext|> -TITLE: Does forcing preserve the least undefinable ordinal from a model of ZFC? -QUESTION [6 upvotes]: Let $M$ be a transitive model of ZFC. For convenience let us assume that $M$ is countable. Now let us consider the least undefinable ordinal $\vartheta_M$, i.e. the least ordinal which is not definable from elements of $M$ and $M$ itself as parameters. For example, if $\alpha$ is the height of $M$ then $\alpha+\alpha$ and $\alpha^2$ are definable from $M$, so they are (strictly) less than $\vartheta_M$. -Since there are only countably many possible formulas and $|M|$ possible parameters we only have $|M|$ definable ordinals from $M$, so $\vartheta_M<|M|^+$. -My question is whether forcing preserves the size of $\vartheta_M$. That is, if $M[G]$ is a generic extension of $M$, is $\vartheta_{M[G]} = \vartheta_M$? I guess that $\vartheta_M$ only depends on the height of $M$. Thanks for any help. - -I should provide a more precise notion of definability.
The definition of $\vartheta_M$ I imagine is: for a transitive model $M$ and a definable class $C\subseteq M$ (that is, we have a formula $\ulcorner\phi\urcorner$ and parameters $a_1,\cdots, a_n\in M$ such that $x\in C\iff M\models \ulcorner\phi\urcorner(x, a_1,\cdots, a_n)$), we call a definable ordering $\prec$ over $C$ well-ordered if for every definable $X\subseteq C$ either $X$ is empty or $X$ has a $\prec$-minimum. -$(C, \prec)$ might not be well-ordered in $V$. However if it is well-ordered then we can find the ordinal isomorphic to $(C, \prec)$ and we consider the least ordinal $\vartheta_M$ not isomorphic to any $(C, \prec)$. In that sense I can argue that $\vartheta_M < |M|^+$. - -REPLY [3 votes]: To clarify, I think that what the OP is asking is: - -For $M\in V$ a countable transitive model of ZFC, let $\alpha(M)$ be the supremum of the ordinals in $V$ that are definable in $V$ by a first-order formula with parameters from $M\cup\{M\}$. - -(Note that I write this slightly differently from the OP: the OP asks for the least "undefinable" ordinal, but I think they are tacitly assuming that the "definable" ordinals are closed downwards, which is not at all obvious to me.) -Then the question is, if (according to $V$) $N$ is a generic extension of $M$, is $\alpha(N)=\alpha(M)$? -(Note that since $M\not\in M[G]$, it's not even obvious that $\alpha(M)$ is "increasing" in $M$! In fact, by modifying the below argument I think we can show that it's not.) -If I'm interpreting the question correctly, the answer is no: forcing can definitely change what ordinals are definable in this sense. For example, for $N$ a set model of ZFC (inside $V$), let $\alpha_N$ be the minimum of the ordinals $\alpha$ such that the continuum pattern $$\{i: 2^{\aleph_{\omega_\alpha+i}}=\aleph_{\omega_\alpha+i+1}\}$$ is in $N$.
Then there's no reason we can't have a model $M$ such that $\alpha_M=0$, but a forcing extension $M[G]\in V$ such that $\alpha_{M[G]}=\theta_M$ (maybe the continuum patterns in $V$ look Cohen over $M$, and we happen to pick $G$ to match the relevant pattern exactly). Of course, the obvious way to do this involves a terrible $V$, but there's no reason it can't happen. -Note that it is crucial in this argument that $G$ be generic over $M$, but not $V$. Indeed, it's not hard to show the following: - -Let $M$ be a countable transitive model in $V$, $\mathbb{P}\in M$ a forcing notion, and $G$ $\mathbb{P}$-generic over $V$. Then $\alpha^{V[G]}(M[G])=\alpha^V(M)$. - -Of course, we have to be a bit careful defining "$\alpha^{V[G]}$," but it's not hard.<|endoftext|> -TITLE: Alternating sum of roots of unity $\sum_{k=0}^{n-1}(-1)^k\omega^k$ -QUESTION [5 upvotes]: Consider the roots of unity of $z^n = 1$, say $1, \omega, \ldots, \omega^{n-1}$ where $\omega = e^{i\frac{2\pi}n}$. -It is a well-known result that $\sum_{k=0}^{n-1}\omega^k = 0$, but what if we want to consider the alternating sum? I'm interested in - -Finding the value of $S = 1 - \omega +\omega^2-\ldots +(-1)^{n-1}\omega^{n-1}$ - -Keeping in mind that $1, \omega, \ldots, \omega^{n-1}$ are the vertices of a regular $n$-gon in the plane, it is easy to see that when $n$ is even, $S = 0$. -The problem arises when $n$ is odd. Here's what I've done so far: Take $x=\frac{2\pi}n$, then -$$S = \sum_{k=0}^{n-1}(-1)^k\omega^k = \sum_{k=0}^{n-1}(-1)^k(\cos kx +i \sin kx)$$ -Dealing with the real part, we notice for $k=1,2,\ldots,n-1$ that -$$\cos (n-k)x = \cos (2\pi -kx) = \cos kx$$ -Since $n-k$ and $k$ have different parity (remember $n$ is odd), we can see that $$\sum_{k=0}^{n-1}(-1)^k(\cos kx) = 1 + \sum_{k=1}^{n-1}(-1)^k(\cos kx)=1$$ -But I have no idea how to deal with the imaginary part.
Asking almighty Wolfram, I got that -$$\sum_{k=0}^{n-1}(-1)^k(\sin k\phi)=\sec(\frac \phi2) \sin(\frac {(n-1)(\phi+\pi)}2)\sin(\frac{n(\phi+\pi)}2)$$ -Hence -$$\sum_{k=0}^{n-1}(-1)^k(\sin kx) = \sec(\frac\pi n)\sin(\frac{\pi(n-1)(2+n)}{2n})\sin(\frac{\pi(2+n)}2)$$ -In summary, I've got two questions: -a) How do you deduce the $\sum_{k=0}^{n-1}(-1)^k(\sin k \phi )$ formula? -b) Is there an alternative way to solve the original question? -Thanks in advance - -REPLY [3 votes]: As $(-1)^r\cdot w^r=(-w)^r,$ -$$S_{n-1}=\sum_{r=0}^{n-1}(-w)^r=\dfrac{1-(-w)^n}{1-(-w)}$$ -If $n$ is odd, -$$S_{n-1}=\dfrac{1-(-1)}{1+w}=\dfrac2{1+\cos\dfrac{2\pi}n+i\sin\dfrac{2\pi}n}=\dfrac{\cos\dfrac{\pi}n-i\sin\dfrac{\pi}n}{\cos\dfrac{\pi}n}$$<|endoftext|> -TITLE: Are there any "spaces" that violate symmetry of metric spaces? -QUESTION [6 upvotes]: While reading about metric spaces, the following question struck me. We know the following definition of pseudometric spaces and metric spaces: - -Suppose $d: X \times X \rightarrow \mathbb{R}$ and that for all $x,y,z \in X$: -$1. d(x,y) \geq 0$ -$2. d(x,x)=0$ -$3. d(x,y)=d(y,x)\space\space\space\space\space$ (Symmetry) -$4. d(x,z) \leq d(x,y)+d(y,z)$ (Triangle Inequality) -Such a "distance function" $d$ is called a pseudometric on X. The - pair $(X,d)$ is called a pseudometric space. -If $d$ satisfies: -$5.$ when $x \neq y,$ then $d(x,y)>0$, -then $d$ is called a metric on X and $(X,d)$ is called a metric - space. - -Now, $\ell_2^2$ with $d: \ell_2^2 \times \ell_2^2 \rightarrow \mathbb{R}$ violates the triangle inequality. Any pseudometric space $(X,d)$ that is not a metric space violates property 5 of metric spaces, since it has at least two points $x \neq y$ for which $d(x,y)=0$. -Similarly, are there any "spaces" that violate symmetry of metric spaces? If not, how do we justify this mathematically? -Thank you in advance.
- -REPLY [3 votes]: Metric spaces where distances are in $[0,\infty]$ and which drop symmetry but still satisfy the triangle inequality and $d(x,x)=0$ are called generalised metric spaces in the influential paper by Lawvere http://www.tac.mta.ca/tac/reprints/articles/1/tr1abs.html -A nice example of a space where asymmetry is important is given by finite or infinite words over an alphabet, see Rutten http://www.cwi.nl/~janr/papers/files-of-papers/1996-tcs170.pdf . The idea is that $d(w,v)=0$ means that the word $w$ is a prefix of the word $v$ and that $d(w,v)=2^{-n}$ if the first letter where $w$ differs from $v$ is at position $n+1$. -Ok, this is an ultrametric space. But as the question was about asymmetry, it is still a nice example. And I think that metric space examples can be constructed along the same lines.<|endoftext|> -TITLE: Why doesn't $z^n\cdot\left(\frac{a+b}{z}\right)^n = (a+b)^n$ always hold? -QUESTION [6 upvotes]: When I entered is(z^n*((a+b)/z)^n = (a+b)^n); into Maple, the output was false and I guess Maple assumes that $a,b,n$ and $z$ can be any number in $ℂ$. -I thought -$$z^n\cdot\left(\frac{a+b}{z}\right)^n$$ -was clearly -$$\left(z\cdot\frac{a+b}{z}\right)^n$$ -and therefore -$$(a+b)^n$$ -Why does this not apply? -Edit: -The output is still false when typing assume(z <> 0): is(z^n*((a+b)/z)^n = (a+b)^n); -Edit 2: -assume(z >= 0): is(z^n*((a+b)/z)^n = (a+b)^n); returns true. -Edit 3: -assume(z = 0): is(z^n*((a+b)/z)^n = (a+b)^n); returns false. -Isn't this a contradiction? -Edit 4: -assume(n in ℕ): is(z^n*((a+b)/z)^n = (a+b)^n); returns false. -For $a<0$ and $z<0$, Maple fails to evaluate the expression, although it is true. This means Maple returned false instead of admitting that it is unable to determine the result. -This is a flaw. - -REPLY [2 votes]: It looks like Maple has trouble evaluating the left hand side of your equation.
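That caution is not baseless, though: with principal-branch powers the identity really can fail for general complex values. Here is a small numerical illustration (my own Python sketch with arbitrarily chosen values, not Maple output; the values are picked so that the arguments wrap past $\pi$):

```python
# With principal-branch powers, z**n * (s/z)**n need not equal s**n.
# The values below are illustrative; they are chosen so that
# arg(z) + arg(s/z) leaves the interval (-pi, pi].
n = 0.5
z = complex(-1, 0.1)
s = complex(-1, -0.1)  # s plays the role of a + b

lhs = z ** n * (s / z) ** n
rhs = s ** n

# The two sides differ by a factor of -1 here.
assert abs(lhs + rhs) < 1e-9
assert abs(lhs - rhs) > 1.0
```

For real $z>0$, or for an integer exponent $n$, no such wrap-around can occur and the identity does hold, which is consistent with assume(z >= 0) returning true above (and with the question's point that the false under assume(n in ℕ) is arguably a flaw).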
You can help Maple by asking it to expand (or simplify) the left-hand side of your expression before using the is command from the assume facility. -eqn := z^n*((a+b)/z)^n = (a+b)^n; -assume(z::real,n::integer); - -As we saw in your question, is(eqn); returns false but -is(simplify(eqn)); - -returns true. I suppose this arises from limitations of the Assume Facility; as referred to on the help page, this may be documented in [Corless, Monagan] (I was unable to find an online copy of it). - -Side note -If $n$ is any complex number, we cannot say anything in general (non-uniqueness) but applying the assumption that $(a+b)/z>0$ seems to be enough for the Assume Facility (with a helping hand from the simplify command): -restart; -eqn := z^n*((a+b)/z)^n = (a+b)^n; - -assume((a+b)/z>1); -is(simplify(eqn)); - -returns true although $z^n$ is ambiguous for $z,n \in \mathbb C$. This seems to be a result of the way Maple defines the complex exponential to have a unique solution, see Section 5.1 The Complex Exponential Function (document can only be opened in Maple). -[Corless, Monagan] Corless, Robert, and Monagan, Michael. "Simplification and the Assume Facility." Maple Technical Newsletter, Vol. 1 No. 1. Birkhauser, 1994.<|endoftext|> -TITLE: Please can someone help me to understand stationary distributions of Markov Chains? -QUESTION [6 upvotes]: I'm currently trying to understand (intuitively) what a stationary distribution of a Markov Chain is. In our lecture notes, we're given the following definition: - -This was of little benefit to my understanding, so I've tried searching online for a more useful explanation. I then found the following video, which improved my understanding to the extent that I now understand that stationary distributions are to do with looking at what happens to the probabilities at each state within a Markov Chain when time becomes infinitely large. This is still not a sufficient understanding of the concept though.
-For example, I've been asked to show that -$$ -\pi_{a} = \left( \frac{2}{5}, \frac{3}{5}, 0, 0, 0 \right) \\ -\pi_{b} = \left( 0, 0, 1, 0, 0 \right) \\ -\pi_{c} = \left( 0, 0, 0, \frac{3}{5}, \frac{2}{5} \right) -$$ -are stationary distributions with respect to the Markov Chain with one-step transition matrix -$$ -\mathbf{P} = \left( \begin{array}{ccccc} -\frac{1}{2} & \frac{1}{2} & 0 & 0 & 0 \\ -\frac{1}{3} & \frac{2}{3} & 0 & 0 & 0 \\ -0 & 0 & 1 & 0 & 0 \\ -0 & 0 & 0 & \frac{2}{3} & \frac{1}{3} \\ -0 & 0 & 0 & \frac{1}{2} & \frac{1}{2} -\end{array} \right) -$$ -How would you do this? What is a stationary distribution, with respect to this example? -Also, could someone please confirm that I'm correct in thinking that the notation $p_{ij}$ denotes the probability of the process moving from the state $i$ to the state $j$? - -REPLY [5 votes]: "Also, could someone please confirm that I'm correct in thinking that the notation $p_{ij}$ -denotes the probability of the process moving from the state $i$ to the state $j$?" -$(*)$ -Correct - -"How would you do this? What is a stationary distribution, -with respect to this example?" -If the chain starts in state $3$ it stays there forever because according -to $(*)$ there is zero probability to move to another state. -Therefore -$\pi_{b} = ( 0, 0, 1, 0, 0)$ -is an obvious stationary distribution. -If the chain starts in state $1$ or $2$ it stays in $\{1,2\}$ forever because according -to $(*)$ there is zero probability to move to any other state. -If the chain starts in state $4$ or $5$ it stays in $\{4,5\}$ forever because according -to $(*)$ there is zero probability to move to any other state. -Now you can treat these as two $2 \times 2$ matrices and use the result that a vector which fulfills: -$\hat{\pi} \mathbf{P} = \hat{\pi}$ $\:\:(**)$ -is a stationary distribution. -So you solve these two systems of equations to get the remaining stationary distributions.
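The requested verification amounts to checking $\pi\mathbf{P}=\pi$ componentwise. A short Python sketch with exact rational arithmetic (my own illustration, not part of the course exercise):

```python
from fractions import Fraction as F

# One-step transition matrix from the question.
P = [
    [F(1, 2), F(1, 2), F(0), F(0), F(0)],
    [F(1, 3), F(2, 3), F(0), F(0), F(0)],
    [F(0),    F(0),    F(1), F(0), F(0)],
    [F(0),    F(0),    F(0), F(2, 3), F(1, 3)],
    [F(0),    F(0),    F(0), F(1, 2), F(1, 2)],
]

def row_times_matrix(pi, P):
    """(pi P)_j = sum_i pi_i * p_ij, i.e. a row vector times the matrix."""
    return [sum(pi[i] * P[i][j] for i in range(len(P))) for j in range(len(P))]

pi_a = [F(2, 5), F(3, 5), F(0), F(0), F(0)]
pi_b = [F(0), F(0), F(1), F(0), F(0)]
pi_c = [F(0), F(0), F(0), F(3, 5), F(2, 5)]

for pi in (pi_a, pi_b, pi_c):
    assert row_times_matrix(pi, P) == pi  # stationarity: pi P = pi
    assert sum(pi) == 1                   # components sum to one
```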
Here you also need to use that $\hat{\pi}$ is a probability vector; -that is, its components sum to one. - -"I now understand that stationary distributions are to do with looking at what happens to the probabilities at each state within a Markov Chain when time becomes infinitely large" -You also have this theorem that can be good to know: -If the Markov chain is irreducible and aperiodic then -$\lim \limits_{n \to \infty} P^n = \hat{P}$ -where $\hat{P}$ is a matrix whose rows are identical and -equal to the stationary distribution $\hat{\pi}$ -for the Markov chain defined by equation $(**)$.<|endoftext|> -TITLE: Regarding linear independence on a normed-linear space given a condition -QUESTION [12 upvotes]: Let $(X,\|\cdot\|)$ be a normed linear space and $x_{1}, x_{2}, \ldots, x_{n}$ be $n$ linearly independent vectors in $X$. Show that there exists $\epsilon > 0$ such that if $y_{1}, y_{2}, \cdots, y_{n} \in X$ with $\|y_{i}\| < \epsilon$, $i = 1,2,\ldots,n$, then $x_{1} + y_{1}, x_{2} + y_{2},\ldots,x_{n} + y_{n}$ are also linearly independent vectors in $X$. - -I've been thinking over this for a few days and I'm not really getting anywhere. -I've tried to start with the definition of linear independence for the $x$-vectors and then to work towards the linear independence of $x+y$ given the condition on $y$, but I'm not getting anywhere. -May someone offer a hint or a solution? - -REPLY [2 votes]: Linear independence is an open condition, which is most easily seen by observing that $x_1, \dotsc, x_n$ are independent iff $x_1\wedge \dotsb\wedge x_n \neq 0$. So for any neighborhood $U$ of $x_1\wedge \dotsb\wedge x_n$ in $\Lambda^n X$, there is a neighborhood $V$ in $X\times\dotsb\times X$ mapping into $U$ under the wedge map. If we choose $U$ away from the origin, then any $\epsilon$ that keeps us in the neighborhood $V$ will do the job.<|endoftext|> -TITLE: Another proof of Liouville's theorem.
-QUESTION [5 upvotes]: Let $f(z)=\sum_{n} a_n z^n$ has radius of convergence $R>0$ and $0 -TITLE: Find a mistake in integration -QUESTION [7 upvotes]: So, the integral is: -$$I=\int \frac{3x+5}{x^2+4x+8}dx$$ -and here is how I did it, but in the end, I got a wrong result: -$$x^2+4x+8=(x+2)^2+4=4\bigg[\bigg(\frac{x+2}{2}\bigg)^2+1\bigg]$$ -$$I=\frac{1}{4} \int \frac{3x+5}{\big(\frac{x+2}{2}\big)^2+1}dx$$ -substitution: $\frac{x+2}{2}=u$, $dx=2du$, $x=2u-2$ -$$I=\frac{1}{2}\int\frac{3(2u-2)+5}{u^2+1}du=\frac{1}{2}\int\frac{6u-1}{u^2+1}du=$$ -$$\frac{3}{2}\int\frac{2u-\frac{1}{3}}{u^2+1}du=\frac{3}{2}\int\frac{2u}{u^2+1}du-\frac{1}{2}\int\frac{du}{u^2+1}=$$ -$$\frac{3}{2}\ln|u^2+1|-\frac{1}{2}\arctan(u)+C$$ -$$I=\frac{3}{2}\ln\bigg|\frac{x^2+4x+8}{4}\bigg|-\frac{1}{2}\arctan\bigg(\frac{x+2}{2}\bigg)+C$$ -Thank you for your time. - -REPLY [2 votes]: You're doing good. Maybe some passages can be done more easily (at least according to my tastes): -\begin{align} -\int\frac{3x+5}{x^2+4x+8}\,dx -&=\frac{1}{2}\int\frac{6x+10}{x^2+4x+8}\,dx\\[6px] -&=\frac{1}{2}\int\frac{6x+12-2}{x^2+4x+8}\,dx\\[6px] -&=\frac{3}{2}\int\frac{2x+4}{x^2+4x+8}\,dx- - \int\frac{1}{x^2+4x+8}\,dx -\end{align} -The first integral can be written directly as -$$ -\frac{3}{2}\log(x^2+4x+8) -$$ -and for the second one can do like you did, that is, $2t=x+2$, so $dx=2dt$ and the integral becomes -$$ -\int\frac{2}{4t^2+4}=\frac{1}{2}\arctan t=\frac{1}{2}\arctan\frac{x+2}{2} -$$ -By delaying the completion of the square we have to deal with less fractions. - -Note that -$$ -\log\frac{x^2+4x+8}{4}=-\log4+\log(x^2+4x+8), -$$ -so the result is the same as yours, because a constant can be absorbed in the constant of integration.<|endoftext|> -TITLE: Finding how many 8-bit bytes contain an even number of zeros . . . -QUESTION [15 upvotes]: I believe I'm overthinking this or otherwise confused but I believe that the method to solve this would be $2^n$ where n is the length of the bytes? 
So in this particular case it would be $2^8 = 256$ possibilities. -But then I feel like that isn't right and I'm mixed up. What I thought about is that there are 4 possible ways to have an even number of zeros (i.e. 2 zeros, 4 zeros, 6 zeros, or 8 zeros). -Any insight would be awesome as I'm confused... - -REPLY [3 votes]: Here's another way. The number of bytes with $n$ zeroes is the coefficient of $x^n$ in $$(1+x)^8$$ -Now for any polynomial $p(x)$, the sum of the coefficients of even degree is $\frac{p(1)+p(-1)}{2}$. Hence the answer is: -$$\frac{(1+1)^8+(1-1)^8}{2} =2^7$$<|endoftext|> -TITLE: Characterization of open sets in $R^3$ homeomorphic to $R^3$. -QUESTION [6 upvotes]: Background: -By the Riemann mapping theorem, for any non-empty, simply connected open subset $U \subset \mathbb{C}$, $U \neq \mathbb{C}$, there exists a biholomorphic map (in particular a homeomorphism) $U \rightarrow \mathbb{D}$ where $\mathbb{D}$ is the unit disk. Since $\mathbb{D}$ is homeomorphic to $\mathbb{C}$, any two non-empty, simply connected open subsets of $\mathbb{C}$ are homeomorphic. Trivially the same result holds for $\mathbb{R}^2$. -I'm wondering if a similar result can be found for subsets of $\mathbb{R}^3$. Specifically, I want to know which open subsets of $\mathbb{R}^3$ are homeomorphic to $\mathbb{R}^3$ (or the unit ball). -The answer to this question (and this one) seems to claim that an open subset of $\mathbb{R}^n$ is homeomorphic to $\mathbb{R}^n$ if and only if it is contractible and "simply connected at infinity" (please correct me on this if I'm mistaken). -Thus as far as I can tell, the problem reduces to describing contractible open subsets of $\mathbb{R}^3$ which are simply connected at infinity. Wikipedia leads me to believe that here, contractible simply means "no holes". I have not been able to find a definition for "simply connected at infinity" that I could understand.
-Question: - My question is threefold: - -Is it true that a domain in $\mathbb{R}^3$ is homeomorphic to $\mathbb{R}^3$ iff it is contractible and simply connected at infinity? Is there a simpler sufficient and necessary condition that works in $\mathbb{R}^3$? -Is there a simple sufficient condition that guarantees that a contractible, open subset of $\mathbb{R}^3$ is homeomorphic to $\mathbb{R}^3$? For example, by this question and answer, it seems that convexity is a sufficient condition. Is boundedness a sufficient condition? -Can you give an example of a contractible open subset of $\mathbb{R}^3$ that is not homeomorphic to the ball? The simpler, the better. - -Side note: It appears to me that the usual counterexample to the question "are contractible $n$-manifolds homeomorphic to the $n$-ball" is the Whitehead manifold. However, this appears to me to be a 3-manifold embedded in 4-dimensional Euclidean space and thus not a subspace of $\mathbb{R}^3$. Is this a meaningful distinction? - -REPLY [5 votes]: Pre-answer. Contractible does not mean "no holes". "No holes" is unfortunately completely meaningless. Contractible means that the identity map is null-homotopic (that is, there's a map $f: X \times I \to X$ such that $f(x,0) = x$ and $f(x,1) = c$ for some point $c \in X$.) If you want an intuitive statement of what this means, it's that you can slowly collapse the whole space at once down to a point. - -1) Yes. I do not expect there to be a condition that's nicer/reasonably checkable. This condition is ultimately equivalent to saying that the 1-point compactification is still a manifold, which maybe sounds nicer, but is completely impossible to check. -2) Boundedness is not a sufficient condition - it's not a topological property in any way. Consider the map $\Bbb R^3 \to \Bbb R^3$ given by $x \mapsto x/(1+\|x\|)$. This is a homeomorphism onto its (bounded) image (it's contained in the open 3-ball). 
Restricting it to the Whitehead manifold, we have an open subset of the 3-ball that's homeomorphic to the Whitehead manifold. A mild generalization of convexity that still guarantees you're homeomorphic to $\Bbb R^3$ is being star-shaped. See here. -3) It's a bit much to expect a simple example, since it evaded Whitehead (indeed, he thought that contractible open 3-manifolds were all $\Bbb R^3$ until he later found his counterexample). Whitehead's example is as simple as you're going to get. -By variants on Whitehead's construction, by the way, you get uncountably many pairwise non-homeomorphic contractible open 3-manifolds. See this paper by McMillan.<|endoftext|> -TITLE: Words built from $\{0,1,2\}$ with restrictions which are not so easy to accommodate. -QUESTION [14 upvotes]: We assume a ternary alphabet $V=\{0,1,2\}$ and are looking for a generating function describing the number of words of $V^*$ fulfilling certain restrictions. The words I am interested in do not contain runs of length $k+1$ (with $k\geq 1$) and do contain the string $1^k2$, i.e. a run of $1$ of length $k$ followed by $2$. - -I'm aware of two techniques which are useful for attacking questions of this kind. -Smirnov words: One of them is based upon Smirnov words, which are words with no consecutive equal letters. See e.g. example III.24 in Analytic Combinatorics by P. Flajolet and R. Sedgewick.
The Smirnov words of the three letter alphabet $V$ are represented by the generating function $S(z)$ - \begin{align*} -S(z)=\left(1-\frac{3z}{1+z}\right)^{-1} -\end{align*} - Since we are looking for words having maximal runs of $0,1$ and $2$ of length $k$ we substitute - \begin{align*} -z\rightarrow z+z^2+\cdots+z^k=z\frac{1-z^k}{1-z} -\end{align*} - Words which do not contain runs of length $k+1$ can therefore be obtained as - \begin{align*} -\left(1-\frac{3z\frac{1-z^k}{1-z}}{1+z\frac{1-z^k}{1-z}}\right)^{-1}=\frac{1-z^{k+1}}{1-3z+2z^{k+1}} -\end{align*} - -$$ $$ - -The Goulden-Jackson Cluster Method nicely presented by J. Noonan and D. Zeilberger is well suited if we are looking for words which are not allowed to contain so-called bad words. Applying this technique it is easy to find a generating function $T(z)$ for words which do not contain the bad word $1^k2$. According to the formula on page $7$ of the referred paper we obtain - \begin{align*} -T(z)=\frac{1}{1-3z+z^{k+1}} -\end{align*} - The generating function $S(z)$ can also be easily obtained with this method. Again according to the formula on page $7$ we get - \begin{align*} -S(z)=\frac{1}{1-3z+3\frac{z^{k+1}(1-z)}{1-z^{k+1}}}=\frac{1-z^{k+1}}{1-3z+2z^{k+1}} -\end{align*} - -I have trouble deriving a generating function which meets the combined requirements: counting words which do not contain runs of length $k+1$, but do contain the substring $1^k2$. Any ideas? -Note: This question corresponds to the second part of this question. - -REPLY [3 votes]: Note: The instructive answer from @MarkoRiedel was worth a somewhat in-depth analysis. The outcome of this analysis serves here as supplementary information. We consider his approach in some detail, analyse the connection with Smirnov words and find some simplified representations. - -The essence of his answer is a clever two-step decomposition of the language under consideration.
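Before going through that decomposition, the run-length generating function $\frac{1-z^{k+1}}{1-3z+2z^{k+1}}$ derived in the question can be cross-checked against a brute-force count (an illustrative Python sketch, not part of either post):

```python
from itertools import groupby, product

def count_words(n, k, alphabet=3):
    """Number of length-n words over the alphabet with maximal run length <= k."""
    return sum(
        1
        for w in product(range(alphabet), repeat=n)
        if all(len(list(run)) <= k for _, run in groupby(w))
    )

def series_coefficients(k, n_max):
    """Taylor coefficients of (1 - z^(k+1)) / (1 - 3z + 2 z^(k+1)).
    From (1 - 3z + 2 z^(k+1)) C(z) = 1 - z^(k+1) we get the recurrence
    c_n = [n = 0] - [n = k+1] + 3 c_{n-1} - 2 c_{n-k-1}."""
    c = []
    for n in range(n_max + 1):
        val = (n == 0) - (n == k + 1)
        if n >= 1:
            val += 3 * c[n - 1]
        if n >= k + 1:
            val -= 2 * c[n - k - 1]
        c.append(val)
    return c

# For k = 2 the coefficients agree with the brute-force counts for lengths 0..6.
k = 2
assert series_coefficients(k, 6) == [count_words(n, k) for n in range(7)]
```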
As a small starter we look at a similar decomposition in a simpler context regarding binary words. -Binary words -Non-empty binary words built from an alphabet $\{0,2\}$ and starting with the letter $0$ admit the following characterization. They decompose into blocks, each block starting with $2$ and followed by zero or more $0$s. The word starts with one or more $0$s. Here is an example with $|$ indicating the block decomposition. -\begin{align*} -000|2|20|200|20|20|2|2000|200|2|2|2|200 -\end{align*} -The language $\mathcal{B}_0$ describing these words is -\begin{align*} -\mathcal{B}_0=00^*\left(20^*\right)^* -\end{align*} - with star $^*$ denoting zero or more occurrences. The corresponding generating function is - \begin{align*} -B(z)&=\frac{z}{1-z}\sum_{q=0}^{\infty}\left(\frac{z}{1-z}\right)^q\\ -&=\frac{z}{1-z}\frac{1}{1-\frac{z}{1-z}}\\ -&=\frac{z}{1-2z}\\ -&=z+2z^2+4z^3+8z^4+16z^5+32z^6+O(z^7) -\end{align*} - Now let's exchange the roles of $0$ and $2$ to obtain the language $\mathcal{B}_2$ of words starting with $2$. - \begin{align*} -222|0|02|022|02|02|0|0222|022|0|0|0|022 -\end{align*} -The language $\mathcal{B}_2$ describing these words is -\begin{align*} -\mathcal{B}_2=22^*\left(02^*\right)^* -\end{align*} - and the corresponding generating function is the same as before - \begin{align*} -B(z)=\frac{z}{1-2z} -\end{align*} - Observe that all binary words can be described as - \begin{align*} -\varepsilon\cup\mathcal{B}_0\cup\mathcal{B}_2\tag{1} -\end{align*} - which comprises either the empty word or a binary word starting with $0$ or starting with $2$. The generating function is accordingly - \begin{align*} -1+\frac{2z}{1-2z}&=\frac{1}{1-2z}\\ -&=1+2z+4z^2+8z^3+16z^4+32z^5+O(z^6) -\end{align*} - -We are now well prepared for the first step in the two-step approach of Marko's answer and derive $H(z)$. - -Binary Smirnov words: -We again use the binary alphabet $\{0,2\}$. This time we are looking for words whose runs have length at most $k$.
We use a similar decomposition as we did for the language $\mathcal{B}_0$ above. -We consider zero or more blocks, each block starting with $2$ and followed by one or more $0$s. The words start with at least one zero and end with zero or more $2$s. The overall restriction for these words is that the maximum run length is $k$. This sounds complicated, but when we look at the formal language description, let's denote it $\mathcal{H}_0$, it is not that hard. - \begin{align*} -\mathcal{H}_0=00^{ -TITLE: Definition of convergence of $\sum_{i=-\infty}^\infty a_i$ -QUESTION [5 upvotes]: This is a really basic question, but I'm unsure about the definition for convergence of -$$\sum_{i=-\infty}^\infty a_i$$ -The definition -$$\sum_{i=-\infty}^\infty a_i=\lim_{n\to \infty} \sum_{i=-n}^n a_i$$ -seems too loose to me, but the strongest definition I can think of: -$$\sum_{i=-\infty}^\infty a_i=\lim_{\|A\|\to \infty}\sup_{A\subset\mathbb{Z}}\sum_{i\in A}a_i$$ -seems too strong. There are also others like -$$\sum_{i=-\infty}^\infty a_i=\lim_{n\to\infty}\sum_{i=-n}^{0}a_i+\lim_{n\to\infty}\sum_{i=1}^na_i$$ -but this seems a bit arbitrary. -More specifically: if I'm asked to show that a certain series of functions -$$\sum_{i=-\infty}^\infty f_i$$ -converges, then which sequence of partial sums should I consider? - -edit: yet another possibility is to consider these types of series only if they converge absolutely, in which case these considerations are irrelevant. - -edit2: The problem with the splitting definition is that it does not seem to work when considering different modes of convergence. For example, if I want to show that $\sum_{i=-\infty}^\infty f_i$ converges uniformly, I would need, for a given $\epsilon$, to find an $n(\epsilon)$ s.t. for $N>n(\epsilon)$ the $N$th partial sum is close enough to the limit function for all $x$ in my domain. However, here we have no well defined $N$th partial sum, since we have split our sum in the definition of the limit.
So how do we define uniform convergence in this case? - -REPLY [3 votes]: The usual definition of convergence for doubly infinite series or $\mathbb{Z}$-indexed series is that -$$\sum_{i = -\infty}^{\infty} a_i\tag{1}$$ -is defined as convergent if the series -$$\sum_{i = 0}^{\infty} a_i\quad \text{and}\quad \sum_{k = 1}^{\infty} a_{-k}$$ -both converge, and the value of the doubly infinite series is the sum of the values of these two series. For doubly infinite series of functions one then has uniform convergence if the series with nonnegative indices and the series with negative indices both converge uniformly. -We can also formulate the criterion without splitting the series, using the product partial order on $\mathbb{N}^2$ to make it a directed set, and say that $(1)$ converges if -$$\lim_{(m,n) \to (\infty,\infty)} \sum_{i = -m}^n a_i\tag{2}$$ -exists. For a doubly infinite series of functions, uniform convergence again means uniform convergence of the net -$$S_{m,n} := \sum_{i = -m}^n a_i.$$ -There is an exception, however. For Fourier series -$$\sum_{n = -\infty}^{\infty} c_n e^{inx},$$ -when one is interested in pointwise (or uniform) convergence, one usually only considers the symmetric partial sums -$$\sum_{n = -N}^N c_n e^{inx}$$ -and calls the Fourier series convergent (at $x$) if the limit of the symmetric partial sums at $x$ exists. -The convergence of a doubly infinite series as defined above evidently implies the convergence of the sequence of symmetric partial sums, but the symmetric partial sums can converge when the doubly infinite series doesn't converge, the limit of the symmetric partial sums is then (often) called the principal value of the divergent doubly infinite series. This is all analogous to the situation of improper Riemann integrals.<|endoftext|> -TITLE: Determine the Integral $\int_{-\infty}^{\infty} e^{-x^2} \cos(2bx)dx$ -QUESTION [7 upvotes]: Please do not mark this question as a duplicate. 
I have to solve this with a different method, one that I don't believe has been discussed for this particular question (at least to my knowledge). -I am confronted with computing the integral below: -$$\int_{-\infty}^{\infty} e^{-x^2} \cos(2bx)dx, b \in \mathbb R, b \gt0$$ -I understand that this is a question in which Complex Analysis has applications to Real Analysis. Specifically, Cauchy's Theorem will be used. That being said, the hint I was given (and the method I am attempting to use) is the following: -Integrate $e^{-z^2}$ over this curve: - -The curve is a rectangle such that its length is $2R$ and its width is $b$. Also, the length on the lower side of the rectangle lies on the x-axis. Lastly, the orientation of the curve is counter-clockwise. - -My question is this: why integrate $e^{-z^2}$, and not $e^{-z^2}\cos(2bz)$? That is, the actual integral we're computing? I know that the former would be easier to integrate, but where did the $\cos(2bz)$ term go, and where will it come into play again? -Also, it's important to say that $\int_{-\infty}^{\infty} e^{-t^2}dt = \sqrt\pi$ will be useful here as well. -Lastly, I know that from the hint suggested, there will be four curves to evaluate: the two lengths and the two widths of the rectangle. - -REPLY [12 votes]: If we set: -$$ f(b) = \int_{\mathbb{R}} e^{-x^2}\cos(2bx)\,dx \tag{1}$$ -we have: -$$ f'(b) = -\int_{\mathbb{R}} 2x\,e^{-x^2} \sin(2bx)\,dx \stackrel{\text{IBP}}{=}-2b\int_{\mathbb{R}}e^{-x^2}\cos(2bx)\,dx=-2b\,f(b).\tag{2}$$ -So we have that $f$ is a solution of a separable differential equation and -$$ f(b) = f(0)\, e^{-b^2}.\tag{3}$$ -Since $f(0)=\sqrt{\pi}$, - -$$ \int_{\mathbb{R}} e^{-x^2}\cos(2bx)\,dx = \color{red}{\sqrt{\pi}\, e^{-b^2}}\tag{4}$$ - -follows.<|endoftext|> -TITLE: how to divide a hexagon into regular polygons -QUESTION [6 upvotes]: I want to cut a hexagonal piece of paper into regions of equal area (more precisely, either into squares of side $c$ or into regular hexagons of side $c$).
In both cases some of the paper will be wasted. Is there a way to determine the cutting that wastes the least paper? (Maybe something related to the Honeycomb conjecture?) - -REPLY [4 votes]: Here are a few nontrivial examples of hexagons in hexagons:<|endoftext|> -TITLE: A definite integral that surely needs contour integration: $\int_0^{\infty} \frac{1}{x^2 + a^2}\cos\left(\frac{x(x^2 - b^2)}{x^2 - c^2}\right)\, dx$ -QUESTION [7 upvotes]: During my Master's thesis work I came up with an integral which I am going to consider as a hard challenge. I have been trying for days to crack it, but still nothing. The integral is the following -$$\int_0^{+\infty} \frac{1}{x^2 + a^2}\cos\left(\frac{x(x^2 - b^2)}{x^2 - c^2}\right)\ \text{d}x$$ -Where $a, b, c$ are simplified real constants ("simplified" means that they are constants, well defined, which I wrote $a, b, c$ for simplicity and brevity). -I do believe (but I think it's obvious [?]) that some contour integration is required. However, I haven't obtained anything beyond extending the integration to the whole real axis, since the integrand is an even function. -The solution does exist, and it has been confirmed by one of my professor's colleagues, who used Mathematica I guess. The explicit solution is -$$\frac{\pi}{2a}\exp\left(-\frac{a(a^2 + b^2)}{a^2 + c^2}\right)$$ -Any hint to solve that integral? - -REPLY [6 votes]: Assume that all the parameters are real and nonnegative. -Then the equation $$\int_{0}^{\infty} \frac{1}{x^2 + a^2} \, \cos\left(\frac{x(x^2 - b^2)}{x^2 - c^2}\right) \, dx = \frac{\pi}{2a} \, \exp\left(-\frac{a(a^2 + b^2)}{a^2 + c^2}\right)$$ holds iff $a >0$ and $b \ge c$. -To see why this is the case, consider the function $$f(z) = \frac{1}{a^{2}+z^{2}} \, \exp \left(iz \, \frac{z^{2}-b^{2}}{z^{2}-c^{2}} \right) = \frac{1}{a^{2}+z^{2}} \, \exp(iz) \exp \left(-iz \, \frac{b^2-c^2}{z^2-c^2}\right). $$ -(Daniel Fischer suggested expressing $f(z)$ in that alternative way to make the analysis a bit easier.)
-In the upper half of the complex plane, both $|\exp(iz)|$ and $ \left| \exp\Bigl(-iz \, \frac{b^2-c^2}{z^2-c^2}\Bigr) \right|$ are bounded if $b \ge c$. -The latter is not particularly obvious. But by substituting $x+iy$ for $z$, one finds that the real part of $-iz \, \frac{b^2-c^2}{z^2-c^2} $ is $$-\frac{(b^{2}-c^{2})(c^{2}y+x^{2}y+y^{3})}{(x^2-y^2-c^{2})^{2}+4x^2y^{2}},$$ which is never positive if $y>0$ and $b \ge c$. -So if $b \ge c$, the magnitude of $\exp\Bigl(-iz \, \frac{b^2-c^2}{z^2-c^2}\Bigr)$, like the magnitude of $\exp(iz)$, never exceeds $1$ in the upper half-plane, including near the essential singularities at $z= \pm c$. -Therefore, if $b \ge c$, we can integrate around a closed semicircular contour in the upper half-plane that is indented at $z=\pm c$ and conclude after taking limits that $$ \begin{align} \text{PV}\int_{-\infty}^{\infty} \frac{1}{x^2 + a^2} \, \cos\left(\frac{x(x^2 - b^2)}{x^2 - c^2}\right) \, dx &= \text{Re} \, 2 \pi i \, \text{Res} \left[f(z), ia \right] \\ &= \frac{\pi}{a} \, \exp\left(-\frac{a(a^2 + b^2)}{a^2 + c^2}\right). \end{align}$$ -But since $\cos\left(\frac{x(x^2 - b^2)}{x^2 - c^2}\right)$ is bounded along the real axis, the integral converges in the traditional sense, so we can drop the Cauchy principal value sign. -I don't know how to determine the value of the integral when $b < c$. - -EDIT: -A solution was posted in The Gazette of the Royal Spanish Mathematical Society, but I don't think it's correct. It makes no mention of any restriction on the parameters. -I rechecked with Wolfram Alpha to make sure the equation doesn't hold if $b < c$.<|endoftext|>
-When $n$ is non-integer, the metric space can be seen as a reasonable generalization of $\mathbb{R}^n$. For example, perhaps it has Hausdorff dimension $n$. - -Alternatively, a non-existence result showing that you can't maintain some of the important properties of $\mathbb{R}^n$ in a generalization like this would be interesting to me. - -REPLY [11 votes]: There is no topological space $X$ such that $X\times X\cong\mathbb{R}^n$ if $n$ is an odd integer. You can prove this using homology; see, for instance, this answer on MathOverflow. In particular, this seems like pretty good evidence that there is no reasonable notion of "$\mathbb{R}^{n/2}$" when $n$ is an odd integer. By similar homology arguments you can show that if $n$ is not divisible by $m$ then there is no space $X$ such that $X^m\cong\mathbb{R}^n$, so there is no good topological candidate for $\mathbb{R}^{n/m}$. -These topological obstructions aside, I can say that if there is a "common notion" of $\mathbb{R}^n$ for non-integer $n$, it can't be too common, because I've never heard of it.<|endoftext|> -TITLE: Is calculating the summation of derivatives "mathematically sound"? -QUESTION [17 upvotes]: I have just discovered that if you take the following series: $$1 + x + x^2 + x^3 + x^4 + \cdots = \sum_{n = 0}^\infty x^n$$ and replace each term in the series with its derivative, you'll get: $$1 + 2x + 3x^2 + 4x^3 + 5x^4 + \cdots$$ Which I think could simplify to this: $$\sum_{n = 0}^\infty \frac {d}{dx}x^n$$ The question about this is: Is it [mathematically] sound to compute a summation of derivatives (or differentials)? I'm asking this because it looks like it is sound in this case, since we are adding up the derivatives of $x^n$ for all $n$. So, is it sound to compute sums of derivatives? -Reminders about Question -I have seen a question related to this: infinite summation of derivatives of a convergent function, but it didn't get me to where I am aiming for.
I have also seen Calculus Summations and Help with derivative inside a summation, but they don't answer my question. - -REPLY [20 votes]: Notice that -$$ -\sum_{n=2}^N \left( \frac {\sin((n+1)x)}{n+1} - \frac{\sin(nx)} n \right) = \frac{\sin((N+1)x)}{N+1} - \frac{\sin(2x)} 2 \longrightarrow \frac{-\sin(2x)} 2 \text{ as } N\to\infty -$$ -and so -$$ -\frac d {dx} \sum_{n=2}^\infty \left( \frac {\sin((n+1)x)}{n+1} - \frac{\sin(nx)} n \right) = \frac d {dx} \frac{-\sin(2x)} 2 = -\cos(2x). -$$ -But -\begin{align} -& \sum_{n=2}^N \frac d {dx} \left( \frac {\sin((n+1)x)}{n+1} - \frac{\sin(nx)} n \right) \\[8pt] -= {} & \sum_{n=2}^N \left( \cos((n+1)x) - \cos(nx) \right) \\[8pt] -= {} & \cos((N+1)x) - \cos(2x) -\end{align} -and for most values of $x$ this does not converge as $N\to\infty$. Hence $\dfrac d{dx} \sum\limits_n\cdots$ is not in all cases equal to $\sum\limits_n\dfrac d{dx}\cdots$. -However, if a power series -$$ -\sum_{n=0}^\infty a_n x^n \tag 1 -$$ -converges in an interval $(-R,R)$, then it can be validly differentiated term-by-term in that interval. This follows in part from the fact that the convergence of $(1)$ is uniform, not necessarily in the interval $(-R,R)$, but in every interval $(-R+a,R-a)$, no matter how small $a>0$ is. -(I might have considered attempting to include a proof in this answer, but you've already accepted another answer$\,\ldots$)
For example, it provably fails for $\kappa=\omega_1$ (see https://en.wikipedia.org/wiki/Aronszajn_tree), and getting the tree property at $\omega_2$ has high consistency strength relative to ZFC - it is equiconsistent with the existence of a weakly compact cardinal. In fact, in $ZFC+V=L$, every successor cardinal fails to have the tree property - thus, in $ZFC+V=L+$"there are no inaccessibles," the tree property never holds of any uncountable cardinal at all. - -Second, you've stated the tree property wrong: the point is not that the levels are countable, but rather that they are small (of size $<\kappa$). (Also, we usually demand that the tree itself have cardinality $\kappa$, if I remember correctly, but this actually doesn't matter: if I have a tree of height $\kappa$ with levels of size $<\kappa$ and no $\kappa$-length branches, I can prune it down to such a tree with size $\kappa$ - just fix a set of $\kappa$-many nodes, the supremum of whose heights is $\kappa$, and look at the induced subtree.) - -Finally, here's why the tree property can never hold of $\kappa$ singular. Let $\lambda=cf(\kappa)<\kappa$, let $\{\alpha_\eta: \eta\in\lambda\}$ be a cofinal subset of $\kappa$, and consider the following tree on $\kappa$: -$$T=\{f\in \kappa^{<\kappa}: f(0)<\lambda\mbox{ and }\vert f\vert<\alpha_{f(0)}\mbox{ and } f(\eta)=0 \mbox{ for all }0<\eta<\vert f\vert\}\cup\{\emptyset\}.$$ Basically, $T$ consists of $\lambda$-many branches, with the $\eta$th such branch of height $\alpha_\eta$. Then the height of $T$ is clearly $\sup\{\alpha_\eta: \eta<\lambda\}=\kappa$, but each level is of cardinality at most $\lambda$.<|endoftext|> -TITLE: Show that (p ∧ q) → (p ∨ q) is a tautology? 
-QUESTION [14 upvotes]: I am having a little trouble understanding proofs without truth tables, particularly when it comes to → -Here is a problem I am confused with: -Show that (p ∧ q) → (p ∨ q) is a tautology -The first step shows: (p ∧ q) → (p ∨ q) ≡ ¬(p ∧ q) ∨ (p ∨ q) -I've been reading my textbook and looking at Equivalence Laws. I know the answer to this but I don't understand the first step. -How is (p ∧ q)→ ≡ ¬(p ∧ q)? -If someone could explain this I would be extremely grateful. I'm sure it's something simple and I am overlooking it. -The first thing I want to do when seeing this is -(p ∧ q) → (p ∨ q) ≡ ¬(p → ¬q)→(p ∨ q) -but the answer shows: -¬ (p ∧ q) ∨ (p ∨ q) (by logical equivalence) -I don't see an equivalence law that explains this. - -REPLY [2 votes]: The following is an inference rule approach to showing that $P \to Q \equiv \neg P \lor Q$, using the Constructive Dilemma inference rule: -$$ \large \frac{P \to Q,~ R \to S, ~P \lor R}{ Q \lor S}$$ -It can be shown that $\neg P \lor P$ and $\neg P \to \neg P$ are tautologies, and given that we know $ P \to Q $ , we can substitute into the above inference rule. -$$ \large \frac{ \neg P \to \neg P,~P \to Q,~ ~\neg P \lor P}{ \neg P \lor Q }$$ -So far we have shown that $ (P \to Q) ~\vdash (\neg P \lor Q)$. To finish proving the equivalence $ P \to Q \equiv \neg P \lor Q ~$ we also need to show $ (\neg P \lor Q) \vdash (P \to Q) $. I don't see an obvious inference rule at this point, but we could show it by contradiction.<|endoftext|> -TITLE: Multiple Nested Radicals -QUESTION [12 upvotes]: $\sqrt{9-2\sqrt{23-6\sqrt{10+4\sqrt{3-2\sqrt{2}}}}}$ -I have no idea how to unnest radicals, can anyone help? - -REPLY [5 votes]: Work from the inside out. -Let $\sqrt{3 - 2\sqrt{2}} = \sqrt{a} - \sqrt{b}$, where $a$ and $b$ are rational numbers.
Squaring both sides yields -$$3 - 2\sqrt{2} = a + b - 2\sqrt{ab}$$ -Then -\begin{align*} -a + b & = 3 \tag{1}\\ --2\sqrt{ab} & = -2\sqrt{2} \tag{2} -\end{align*} -Dividing equation (2) by $-2$ yields -$$\sqrt{ab} = \sqrt{2}$$ -Squaring both sides of the equation yields -$$ab = 2$$ -Hence, -$$b = \frac{2}{a}$$ -Substituting for $b$ in equation (1) yields -\begin{align*} -a + \frac{2}{a} & = 3\\ -a^2 + 2 & = 3a\\ -a^2 - 3a + 2 & = 0\\ -(a - 1)(a - 2) & = 0 -\end{align*} -Hence, $a = 1$ or $a = 2$. If $a = 1$, then $b = 2$. However, $\sqrt{1} - \sqrt{2} = 1 - \sqrt{2} < 0$, but $\sqrt{3 - 2\sqrt{2}} > 0$. Thus, $a = 2$ and $b = 1$. Hence, -$$\sqrt{3 - 2\sqrt{2}} = \sqrt{2} - \sqrt{1} = \sqrt{2} - 1$$ -Substituting $\sqrt{2} - 1$ for $\sqrt{3 - 2\sqrt{2}}$ yields -$$\sqrt{10 + 4\sqrt{3 - 2\sqrt{2}}} = \sqrt{10 + 4(\sqrt{2} - 1)} = \sqrt{6 + 4\sqrt{2}}$$ -Let $\sqrt{6 + 4\sqrt{2}} = \sqrt{c} + \sqrt{d}$, where $c$ and $d$ are rational numbers. Continue.<|endoftext|> -TITLE: The category of Lie algebra representations -QUESTION [8 upvotes]: A representation of a Lie algebra $\mathfrak{g}$ on a vector space $V$ is a homomorphism of Lie algebras $\mathfrak{g} \to \mathfrak{gl}(V)$. We define morphisms between representations as intertwining linear maps as usual. Then we have a category $\mathsf{Rep}(\mathfrak{g})$ of representations of $\mathfrak{g}$. -I am wondering what are the essential properties of this category: --is it semi-abelian or even abelian? --Can we describe it as a functor category (as is the case for group representations)? --Do any of the aforementioned properties depend in some way on either the Lie algebras or the vector spaces being finite-dimensional? Do they depend on the choice of field (or commutative ring)? - -REPLY [8 votes]: The category of representations of $\mathfrak{g}$ is not just abelian but is isomorphic to the category of modules over a certain (associative) $k$-algebra (where $k$ is the base field).
Indeed, define $U(\mathfrak{g})$ to be the free associative $k$-algebra on the underlying $k$-vector space $\mathfrak{g}$, modulo relations that say that for each $x,y\in \mathfrak{g}$, $xy-yx=[x,y]$ (here the left-hand side is computed using the multiplication of our associative algebra, and the right-hand side is the bracket in $\mathfrak{g}$). That is, we "freely" construct an associative algebra from $\mathfrak{g}$ in which the bracket becomes the commutator operation on elements of $\mathfrak{g}$. Then it is straightforward to verify that a $U(\mathfrak{g})$-module is the same thing as a $\mathfrak{g}$-representation, giving an isomorphism of categories. The algebra $U(\mathfrak{g})$ is known as the universal enveloping algebra of $\mathfrak{g}$. -For any ring $R$, you can form an $Ab$-enriched category $BR$ with one object whose endomorphisms are $R$ (with addition in $R$ being the $Ab$-enrichment and multiplication in $R$ being composition of maps). An $R$-module is then the same thing as a functor $BR\to Ab$ which preserves the $Ab$-enrichment. In particular, taking $R=U(\mathfrak{g})$, this gives a description of the representation category of $\mathfrak{g}$ as a certain functor category. -None of this depends on finite-dimensionality, or even on $k$ being a field ($k$ could be any commutative ring).<|endoftext|> -TITLE: Where does Gelfand Theory fail for non-commutative algebras. -QUESTION [6 upvotes]: I'm trying to get my head around Gelfand theory, and I can't seem to find the subtleties between commutative and non-commutative algebras. Why is there not a one-to-one correspondence between maximal ideals of a non-commutative algebra and the character homomorphisms from the algebra to the complex plane? Doesn't the Gelfand Mazur theorem apply to these algebras to, so if $\mathfrak{a}$ is maximal, the map -$$ A \to A/\mathfrak{a} \cong \mathbf{C} $$ -is a homomorphism with kernel $\mathfrak{a}$. 
Conversely, if $\phi: A \to \mathbf{C}$ is a homomorphism, then $\phi$ is surjective, so if $\mathfrak{a} = \ker(\phi)$, $\tilde{\phi}: A/\mathfrak{a} \to \mathbf{C}$ is an isomorphism, hence $A/\mathfrak{a}$ is a field, so $\mathfrak{a}$ is maximal. What's going wrong here? - -REPLY [3 votes]: For a noncommutative algebra $A$ with maximal ideal $M$, $A/M$ need not be a field, so it is not always $\Bbb C$. It is just a simple ring, and those can get pretty unusual.<|endoftext|> -TITLE: What is the Galois group of $(t^3-2)(t^3-3)\in \mathbb{Q}[t]$? -QUESTION [7 upvotes]: I know that the Galois group of $f(t)=(t^3-2)(t^3-3)$ over $\mathbb{Q}$ is $Gal(\mathbb{Q}(2^{\frac{1}{3}},3^{\frac{1}{3}},\xi)/\mathbb{Q})$ (where $\xi$ is a primitive 3rd root of unity) and its order is 18. I also know that this group should be some non-abelian subgroup of $S_3 \times S_3$. -(here, $S_n$ is the symmetric group) -But I don't know the exact shape of this Galois group. -How can we know this group? - -REPLY [7 votes]: Given that the Galois group acts on the roots, this gives us intuition for defining automorphisms: let -$\sigma, \tau, \eta : \mathbb{Q}(2^{1/3},3^{1/3},\xi) \to \mathbb{Q}(2^{1/3},3^{1/3},\xi)$, $\sigma(2^{1/3}) = \xi2^{1/3}$, $\tau(3^{1/3}) = \xi3^{1/3}$, $\eta(\xi) = \xi^2$. By composing such maps we generate a group of order $18$ so this must be the whole group. It's easy to see this group is not abelian (as you are aware). -$\sigma$ and $\tau$ generate an abelian subgroup of order $9$, which you know will be normal (index $2$). You can construct an exact sequence $$\langle \sigma, \tau \rangle \to \textrm{Gal}(\mathbb{Q}(2^{1/3},3^{1/3},\xi)/\mathbb{Q}) \to \langle \eta \rangle$$ and from here you'll be able to exhibit the Galois group as a direct product or a semi-direct product (hint: a direct product of abelian groups is abelian). -Here's a more mechanical approach.
By the Sylow theorems and other tricks from Group theory we can identify all non-abelian groups of order 18: $$D_{18}, \hspace{1mm} S_3 \times \mathbb{Z}/3\mathbb{Z}, \hspace{1mm} (\mathbb{Z}/3\mathbb{Z} \times \mathbb{Z}/3\mathbb{Z}) \rtimes \mathbb{Z}/2\mathbb{Z}$$ Now look at the orders of the elements in each group and it should be clear which group you can identify the Galois group with.<|endoftext|> -TITLE: Is there a simple way of proving that $\text{GL}_n(R) \not\cong \text{GL}_m(R)$? -QUESTION [10 upvotes]: Letting $\mathbb{F}_{1}$ and $\mathbb{F}_{2}$ be fields, and letting $n \geq 3$ and $m$ be natural numbers, it is known that $\text{GL}_{m}(\mathbb{F}_{1})$ and $\text{GL}_{n}(\mathbb{F}_{2})$ are elementarily equivalent if and only if $m=n$ and $\mathbb{F}_{1} \equiv \mathbb{F}_{2}$ (as proven in "Elementary Properties of Linear Groups" in the collection "The Metamathematics of Algebraic Systems — Collected Papers: 1936–1967"). -So, given a field $\mathbb{F}$, if $n \neq m$, then $\text{GL}_{m}(\mathbb{F}) \not\equiv \text{GL}_{n}(\mathbb{F})$, and thus $\text{GL}_{m}(\mathbb{F}) \not\cong \text{GL}_{n}(\mathbb{F})$. -Letting $R$ be a commutative ring (with unity), and letting $n, m \in \mathbb{N}$ be such that $n \neq m$, is there a simple "algebraic" way of proving that $\text{GL}_{m}(R)$ and $\text{GL}_{n}(R)$ are not isomorphic (as groups)? Is there a simple group-theoretic way of showing that $\text{GL}_{m}(\mathbb{F}) \not\cong \text{GL}_{n}(\mathbb{F})$ for a field $\mathbb{F}$? -Certain special cases of this problem trivially hold, for example in the case whereby $\mathbb{F}$ is finite, in which case $|\text{GL}_{m}(\mathbb{F})| \neq |\text{GL}_{n}(\mathbb{F})|$. - -REPLY [2 votes]: Let $G=GL_n(K)$ where $K$ is a field of characteristic not 2. Let $A$ be a maximal subgroup of $G$ of exponent 2. 
As every element of order 2 of $GL_n(K)$ is diagonalizable (with $1$ or $-1$ as eigenvalues) and since $A$ is abelian (any group of exponent $2$ is abelian), the elements of $A$ are simultaneously diagonalizable. Hence we may assume that $A$ consists of matrices with 1's and $-1$'s on the diagonal. Therefore $A$ is isomorphic to the direct sum of $n$ copies of $\{1, -1\}$ and hence has order $2^{n}$. This shows that $GL_n(K)$ determines $n$. The same argument works for $SL$ instead of $GL$ if $n \geq 2$. (I just noticed that this same answer but with a different argument was given above.) -Here is a question: Assume characteristics are not 2. I can show in a quite elementary way that if the statement $SL_2(K) \simeq SL_2(L) \implies K \simeq L$ holds, then for $n \geq 2$, the statement $SL_n(K) \simeq SL_n(L) \implies K \simeq L$ holds. But I do not know how to prove this for $n=2$ in its full generality. We can of course assume the groups, i.e. the fields, are infinite. -On the other hand, by using any non-central diagonal element as a parameter, one can define the field $K$ in the group $SL_2(K)$ as follows. Let $t_0$ be one such element. Let $T=C_{SL_2(K)}(t_0) \simeq K^*$ (torus). We may regard $T$ as the group of diagonal matrices of determinant 1. There are exactly two subgroups of $SL_2(K)$ of the form $\langle u^T\cup\{1\} \rangle$ for any $1\neq u$ in the subgroup, the strictly upper and lower triangular matrices, say $U$ and $V$ (unipotent) respectively. (Because $x = (1+x/2)^2 - 1^2 - (x/2)^2$ for any $x\in K$, see below). They are both isomorphic to the additive group of $K$. Choose one of them, say $U$. Denote the elements of $T$ by $t(x)$ where $x\in K^*$ and elements of $U$ by $u(y)$ where $y\in K$. Then $T$ acts on $U$ as follows: $u(y)^{t(x)} = u(x^2y)$. Thus we get the subfield of $K$ generated by the squares. But since $x = (1+x/2)^2 - 1^2 - (x/2)^2$ for any $x\in K$, the subfield generated by the squares is $K$ itself. Thus the field $K$ is definable with one parameter, namely $t_0$.
(Except that the group does not know the unit element 1 of the field: we only get an affine version of a field, something like $K$ with addition and a ternary multiplication $xy^{-1}z$; to fix 1 of the field $K$ we need one more parameter, but this is irrelevant to us.) -It follows that in the group $SL_2(L)$ both fields $K$ and $L$ are definable. -In particular, if the automorphism takes a non-central diagonalizable element of $SL_2(K)$ to a non-central diagonalizable element of $SL_2(L)$, then we will necessarily have $K\simeq L$. This will be so if we can distinguish diagonalizable elements of $SL_2(K)$ from its non-diagonalizable semisimple elements (i.e. diagonalizable in the algebraic closure) in a group-theoretic way.<|endoftext|> -TITLE: possible eigenvalues of $A$ -QUESTION [6 upvotes]: Let $A$ be an $n\times n$ matrix such that $A^2=A^t$. Then prove that the possible real eigenvalues of $A$ are $0,1$. -Let $\lambda$ be an eigenvalue of $A$; then $\lambda^2$ is an eigenvalue of $A^2$. -As $A^2=A^t$, $\lambda^2$ is an eigenvalue of $A^t$. -The eigenvalues of $A$ are the same as the eigenvalues of $A^t$. -So, the real eigenvalues of $A^t$ are $\{\lambda_1,\cdots,\lambda_r,\lambda_1^2,\cdots,\lambda_r^2\}$. -As the number of real eigenvalues is fixed, we must have $\lambda_i^2=\lambda_i$ or $\lambda_j$ or $\lambda_j^2$. -$\lambda_i^2=\lambda_j^2$ and $\lambda_i\neq \lambda_j$ implies $\lambda_i=-\lambda_j$; I do not see any contradiction here. -$\lambda_i^2=\lambda_j$: I do not know what to conclude. -$\lambda_i^2=\lambda_i$: then $\lambda_i=0$ or $1$, which is what I want. -Help me to clear this up. - -REPLY [2 votes]: If $\lambda$ is an eigenvalue, so is $\lambda^2$, and so is, for the same reason, $\lambda^{4}$, $\lambda^{8}$... or any $\lambda^{2^k}$. -Thus, if a real eigenvalue $\lambda$ is not in $\{-1,0,1\}$, it would generate an infinite spectrum, which cannot be.
-It remains to eliminate the case $\lambda=-1$. From $A^2=A^t$ we get $A^3=A\,A^2=A\,A^t$ and also $A^3=A^2A=A^tA$, so $A$ commutes with $A^t$, i.e. $A$ is normal. A real normal matrix satisfies $A^t v=\bar\lambda v$ for every eigenvector $v$ with $Av=\lambda v$, so $A^2=A^t$ forces $\lambda^2=\bar\lambda$; for a real eigenvalue this reads $\lambda^2=\lambda$, which excludes $\lambda=-1$.<|endoftext|> -TITLE: Can we simultaneously freely adjoin both limits and colimits to a category? -QUESTION [11 upvotes]: I'm aware that given a category $C$, it's possible to take the free (co)completion of $C$ in order to freely adjoin (co)limits to $C$, in the sense that we can construct a left adjoint to the forgetful functor from the 2-category of (co)complete categories, (co)continuous functors, and natural transformations to the 2-category of categories, functors, and natural transformations. -We can also consider the forgetful functor $U : \text{Cat}' \to \text{Cat}$ where $\text{Cat}'$ is the 2-category of complete and cocomplete categories, functors which preserve all limits and colimits, and natural transformations. My question is this: can we construct a left adjoint to $U$? If not, can we do so locally for any interesting categories $C$? In other words, when can we find a category $C'$ with a functor $i : C \to U(C')$ which induces an equivalence between $\text{Cat}'(C',D)$ and $\text{Cat}(C, U(D))$ for every complete and cocomplete category $D$? - -REPLY [6 votes]: Yes. This exists for more-or-less general reasons and is the subject of [Joyal, Free bicomplete categories]. Here's a sketch proof. -For simplicity, I will discuss categories with colimits of $\kappa$-small diagrams, where $\kappa$ is a regular cardinal. Specifically, consider the following category $\mathbf{K}$: - -The objects are small categories equipped with chosen $\kappa$-ary coproducts, $\kappa$-ary products, coequalisers of parallel pairs, and equalisers of parallel pairs. -The morphisms are functors that strictly preserve the chosen colimits and limits. - -By standard arguments, $\mathbf{K}$ is a locally $\kappa$-presentable category.
The forgetful functor $U : \mathbf{K} \to \mathbf{Cat}$ preserves colimits of $\kappa$-filtered diagrams and limits of all diagrams, so it has a left adjoint $F : \mathbf{Cat} \to \mathbf{K}$. In particular, for every small category $\mathcal{C}$, there is a small category $F \mathcal{C}$ with colimits and limits of $\kappa$-small diagrams and a functor $\eta : \mathcal{C} \to F \mathcal{C}$ with the following property: - -For every small category $\mathcal{A}$ with colimits and limits of $\kappa$-small diagrams and every functor $h : \mathcal{C} \to \mathcal{A}$, there is a functor $\bar{h} : F \mathcal{C} \to \mathcal{A}$ that preserves colimits and limits of $\kappa$-small diagrams (up to isomorphism) such that $\bar{h} \circ \eta = h$. - -Of course, the above only deals with the 1-dimensional part of the universal property. To get the 2-dimensional part, note that $U : \mathbf{K} \to \mathbf{Cat}$ also preserves cotensors: after all, if $\mathcal{A}$ is an object in $\mathbf{K}$, then $[\mathcal{D}, \mathcal{A}]$ is also an object in $\mathbf{K}$ with limits and colimits constructed componentwise. Thus the adjunction $F \dashv U$ is $\mathbf{Cat}$-enriched. In particular: - -For every small category $\mathcal{A}$ with colimits and limits of $\kappa$-small diagrams and every parallel pair $\bar{h}_0, \bar{h}_1 : F \mathcal{C} \to \mathcal{A}$ that preserves colimits and limits of $\kappa$-small diagrams (up to isomorphism), every natural transformation $\bar{h}_0 \circ \eta \Rightarrow \bar{h}_1 \circ \eta$ extends to a natural transformation $\bar{h}_0 \Rightarrow \bar{h}_1$ uniquely.<|endoftext|> -TITLE: If $X_n$ converges in distribution to $X$, is it true that $\alpha_n X_n$ converges to $ \alpha X$ as well? -QUESTION [6 upvotes]: Suppose we have a sequence of non-negative random variables $(X_n)_n$ converging weakly to the random variable $X$. Let also $(\alpha_n)$ be a sequence of positive numbers converging to $\alpha>0$. 
I'm stuck in proving or disproving that $\alpha_n X_n$ converges in distribution to $\alpha X$. Any suggestions? - -REPLY [4 votes]: A slick way to do it is to use the following fact: If $X_n$ converges in distribution to $X$ then there is a probability space with $Y_n\sim X_n$ (that is, $Y_n$ has the same distribution as $X_n$) and $Y\sim X$ such that $Y_n \to Y$ almost surely. That is, by moving to a new probability space we can "upgrade" convergence in distribution to almost sure convergence. (One proof is to use the Skorohod representation for the $Y_n$ and $Y$, applying the inverse CDFs of these random variables to a commonly chosen uniform random variable on $[0,1]$.) -With this setup, of course $a_n Y_n \to aY$ almost surely as $n\to \infty$ and, hence, $a_n Y_n\to aY$ in distribution. But convergence in distribution doesn't depend on the underlying probability space, so $a_n X_n \to aX$ in distribution. -Note a version of this argument can also prove Slutsky's theorem: if $W_n\to W$ in distribution and $W$ is almost surely constant, then $W_n X_n \to W X$ in distribution.<|endoftext|> -TITLE: Is there a possibility to determine/estimate the topological entropy? -QUESTION [5 upvotes]: By $E$, denote the set of excited states $E=\left\{1,2,\ldots,e\right\}$ and by $R$ the set of refractory states $R=\left\{e+1,e+2,\ldots,e+r\right\}$. By $0$, denote the equilibrium state. The alphabet $A$ is $A=\left\{0,1,\ldots,e,e+1,e+2,\ldots,e+r\right\}$. Let $X=A^{\mathbb{Z}}$ denote the space of all bi-infinite configurations. Let $\eta\in X$. By $\eta_n(x)$, denote the state on position $x$ at time $n$.
Let $T\colon X\to X$ describe the following dynamics: -$$ -\eta_{n+1}(x)=\begin{cases}i+1, & \text{ if }\eta_n(x)=i, ~1\leq i\leq e+r-1\\0, & \text{ if }\eta_n(x)=e+r\\0, & \text{ if }\eta_n(x)=0\text{ and }(\eta_n(x-1)\notin E, \eta_n(x+1)\notin E)\\1, & \text{ if }\eta_n(x)=0\text{ and }(\eta_n(x-1)\in E\text{ or }\eta_n(x+1)\in E)\end{cases} -$$ -My question is if it is possible to compute (or estimate) the topological entropy $h(X,T)$ in case $r\geq e$. -Remark -For $A=\left\{0,1,2\right\}, E=\left\{1\right\}$ and $R=\left\{2\right\}$ it is known that $h(X,T)=2\ln\rho$ where $\rho$ is the largest eigenvalue of $\lambda^3-\lambda^2-1=0$, see "Some Rigorous Results for the Greenberg–Hastings Model" by Steif and Durrett. You can find this paper and the proof of the mentioned result here, pp. 677. There, one essential step in the computation was to determine the set $Y=\bigcap_{n\geq 0}T^nX$ and to use that $h(X,T)=h(Y,T)$. The heart of the idea was that in this special case, each $y\in Y$ has a separating position $n\in\mathbb{Z}\cup\left\{\pm\infty\right\}$ such that to the left of this position, there is a right moving section and, to the right of this position, there is a left moving part. -Hence, my first idea now was to imitate this proof by trying to characterize the set $Y$ for the general setting (without knowing if this might be helpful in the general setting). 
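For experimenting with small cases, the update rule above is easy to implement directly; here is a minimal sketch (my own code, not from the cited paper; a finite ring with periodic boundary conditions stands in for the bi-infinite lattice):

```python
def step(config, e, r):
    """One application of T: states 0 (equilibrium), 1..e (excited),
    e+1..e+r (refractory); periodic boundary conditions."""
    n = len(config)
    new = []
    for x in range(n):
        s = config[x]
        if 1 <= s <= e + r - 1:      # excited or refractory: count up
            new.append(s + 1)
        elif s == e + r:             # last refractory state: relax to 0
            new.append(0)
        else:                        # s == 0: awakened iff a neighbour is excited
            awake = (1 <= config[x - 1] <= e) or (1 <= config[(x + 1) % n] <= e)
            new.append(1 if awake else 0)
    return new

# For e = r = 2, the wave 0,1,2,3,4 is shifted by T, so T^5 acts as the identity on it.
print(step([0, 1, 2, 3, 4], 2, 2))  # [1, 2, 3, 4, 0]
```

Such travelling waves are exactly the "right-moving" and "left-moving" sections that the separating-position argument of the cited paper is built on.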
From my attempts, it seems that $Y$ consists of those configurations for which there is an $n\in\mathbb{Z}\cup\left\{-\infty,+\infty\right\}$ such that - -on position $n$ and to its left - -$0$ has one of $0, 1, 2,\cdots, e$ to its left, -$j$ has one of $j+1,j+2,\cdots,j+e$ to its left for $1\leq j\leq e+r-1$ -$e+r$ has one of $0,1,\cdots,e-1$ to its left - -to the right of position $n$, - -$0$ has one of $0, 1, 2,\cdots, e$ to its right, -$j$ has one of $j+1,j+2,\cdots,j+e$ to its right for $1\leq j\leq e+r-1$ -$e+r$ has one of $0,1,\cdots,e-1$ to its right - - -In case we have $E=\left\{1\right\}$ and any number of refractory states, i.e. $R=\left\{2,3,\ldots,e+r\right\}$, this implies, in my opinion, that we can use the same technique to compute $h(X,T)$ as done in the cited paper, since the separating position again separates a right- from a left-moving part. We only have to modify the involved sets and factor mapping. Hence, one should get $h(X,T)=2\ln\rho$ with $\rho$ being the largest positive root of the polynomial $\lambda^{e+r+1}-\lambda^{e+r}-1=0$. This covers the situation in the paper. -In case we have more than just one excited state, i.e. $e\geq 2$, the set $Y$ no longer consists only of such nice configurations, so one should have $h(X,T)\geq 2\ln\rho$. In order to compute $h(X,T)$ explicitly, I guess, one has to compute $h(X,T)$ in a completely new way, or maybe an explicit computation is not possible at all. I do not know yet. That's why I am searching for some motivation/inspiration here. Maybe one should start with the case $E=\left\{1,2\right\}, R=\left\{3,4\right\}$, i.e. $r=e=2$. Maybe anybody has an idea how to start the computation. I did not find a way yet. Maybe one can again find some factor map in order to go over to an easier system with the same topological entropy. -Thanks in advance for any kind of inspiration! -Edit: I tried to construct a factor map.
Let $E=\left\{1,2\right\}, R=\left\{3,4\right\}, A=\left\{0,1,2,3,4\right\}$, i.e. $e=r=2$. By $Y$ denote the set consisting of all configurations as described above. By $Y'$ denote the same set but for $E=\left\{1\right\}, R=\left\{2,3,4\right\}$. -Define the map $U\colon Y\to Y'$ by specifying the images of triples: -$$ -000\mapsto 000\\ -001\mapsto 001\\ -002\mapsto 0012\\ -012\mapsto 012\\ -013\mapsto 0123\\ -023\mapsto 0123\\ -024\mapsto 01234\\ -123\mapsto 123\\ -124\mapsto 1234\\ -134\mapsto 1234\\ -130\mapsto 12340\\ -234\mapsto 234\\ -230\mapsto 234\\ -240\mapsto 2340\\ -241\mapsto 23401\\ -340\mapsto 340\\ -341\mapsto 3401\\ -300\mapsto 340\\ -301\mapsto 3401\\ -302\mapsto 34012\\ -400\mapsto 400\\ -401\mapsto 401\\ -402\mapsto 4012\\ -412\mapsto 4012\\ -413\mapsto 4123 -$$ -Start at the separating position $n$ of $y\in Y$ and look at the triples to its left and at their images (adding as few linking entries as needed between the images of the triples in order to get a configuration in $Y'$), and similarly at the triples starting at position $n+1$. -Let "$|$" separate the $n$-th from the $(n+1)$-th position. So, for example, -$$ -\ldots 1000420321042003 | 12302413400001\ldots \mapsto \ldots 100043210432104321043|1234012340123400001\ldots -$$ -As far as I can see, this is a surjection satisfying -$$ -U\circ T=T\circ U. -$$ -Since $Y'\subset Y$, and using an estimate of Bowen, we should have -$$ -h(Y',T)=2\ln\rho\leq h(Y,T)\leq h(Y',T)+\sup_{y'\in Y'}h(U^{-1}(\left\{y'\right\}),T)=2\ln\rho + \sup_{y'\in Y'}h(U^{-1}(\left\{y'\right\}),T). -$$ -So, maybe the question is whether we can compute/estimate the supremum. -Unfortunately, determining $U^{-1}(\left\{y'\right\})$ for some $y'\in Y'$ and, hence, $h(U^{-1}(\left\{y'\right\}),T)$ seems to be difficult. Maybe my suggested map $U$ is simply not good enough and there is a better one. Maybe it is problematic that the images of the triples under $U$ have different lengths.
- -Anyway, I was not yet successful in finding a way to handle this. I think it may also be possible that the supremum is not finite. -Edit 2: I think I found a way to show that the entropy is infinite by approximating the maximal number of separated sets. So, from my point of view the question is settled. - -REPLY [2 votes]: I'm just going to focus on proving that for $|A|=5$, the $|E|=1$ system is not isomorphic to the $|E|=2$ system. To do this I will count the total number of period $5$ elements, i.e. points where $T^5(x) = x$, and show that in the $|E|=1$ case there are countably many, whereas in the $|E|=2$ case there are uncountably many. -Observation 1: -Given a period $5$ point, $x$, either all coordinates are $0$, or every coordinate is such that $\{ T^i(x)_n\ :\ 0\leq i \leq 4\} = \{0,1,2,3,4\}$. In the first case we call the coordinate fixed, and in the second varying. -The transformation rule tells us that we are either $0$ and waiting to be awakened, or positive and counting through the relevant values. If a coordinate of a period $5$ point is ever nonzero, then it must be a varying coordinate. A varying coordinate and a fixed coordinate cannot be adjacent; otherwise the fixed coordinate will be awakened at some time. Therefore, a period $5$ point is composed of all varying coordinates, or all fixed coordinates. -So, we should only worry about periodic points with all varying coordinates. -Observation 2: -Coordinates must be awakened, and if coordinate $n$ awakens coordinate $n+1$, $n+1$ cannot awaken $n$. -So, firstly, we say coordinate $n$ is - -right-awakened if when it is $0$, coordinate $n+1$ is excited -left-awakened if when it is $0$, coordinate $n-1$ is excited -bi-awakened if it is both left- and right-awakened. - -All coordinates (of interesting points) must be awakened somehow, by observation 1. So, say $x_n$ is $0$. Either $1\leq x_{n+1} \leq |E|$ or $1\leq x_{n-1} \leq |E|$. So, $n$ is either left-, right- or bi-awakened.
- -But, if $n$ is right-awakened, then when $x'_{n+1}=0$, $|A|-|E|\leq x'_n < |A|$ and by assumption, this must be a refractory state. So, $n+1$ is not left-awakened. -Observation 3: -If we map coordinates to $L, R$ or $B$ according to how they are awakened, we have either all $L$s, all $R$s, or something with an infinite sequence of $L$s, possibly a single $B$, then an infinite string of $R$s. -We call this string the awakening string, and I claim this is relatively clear. -Observation 4: -If $|E|=1$ there are a finite number of points with each of the possible awakening strings. -Let $n$ be the rightmost left-awakened coordinate, and let it be $0$ at time $t$. Coordinate $n-1$ is $1$ at time $t$ (because it is awakening $n$), coordinate $n-2$ must be $2$ (because it awakened $n-1$ at $t-1$), and this chain of implications, determining the coordinate values to the left of $n$, continues. -And, by symmetry, the same thing happens going to the right of the leftmost right-awakened coordinate. Which is hard to say, but I think it makes sense. -This implies that there are only a countable number of period $5$ points when $|E|=1$. -Observation 5: -If $|E|=2$, there is a distinct period $5$ point for every bi-infinite sequence of $1$'s and $2$'s. -Fix the awakening string to be all $L$'s, and then use the string of ones and twos to determine what value awakens the coordinate to the right. That's a slightly hazy description, but I think you should be able to see what I'm saying. -This implies there are uncountably many period $5$ points when $|E|=2$. -Hope that's some kind of help.<|endoftext|> -TITLE: Subgroups of an infinite abelian group with a given index -QUESTION [6 upvotes]: This follows up on the question Subgroups of an infinite group with a given index (which has a counterexample for non-abelian groups). Now, the question is: -Let $G$ be an infinite abelian group and $\alpha$ a cardinal number with $\aleph_0\leq \alpha\leq |G|$.
Is there a subgroup $H$ of $G$ with $|G:H|=\alpha$? - -REPLY [5 votes]: First, if $G$ is countable, you can just take $H$ to be the trivial subgroup of $G$; it will have countably infinite index. Thus, I assume that $G$ is uncountable of cardinality $A$. Then by -W. R. Scott, The Number of Subgroups of Given Index in Nondenumerable Abelian Groups, -Proceedings of the American Mathematical Society, Vol. 5, No. 1 (1954), pp. 19-22, -the group $G$ contains $2^A$ subgroups of every index $\alpha$, -$$ -\aleph_0\leq \alpha\leq |G|. -$$<|endoftext|> -TITLE: How to prove that $\binom{n}{1}\binom{n}{2}^2\binom{n}{3}^3\cdots \binom{n}{n}^n \leq \left(\frac{2^n}{n+1}\right)^{\binom{n+1}{2}}$? -QUESTION [5 upvotes]: How can we prove that $$\binom{n}{1}\binom{n}{2}^2\binom{n}{3}^3\cdots \binom{n}{n}^n \leq \left(\frac{2^n}{n+1}\right)^{\binom{n+1}{2}}$$ - -$\bf{My\; Try::}$ Using $\bf{A.M\geq G.M\;,}$ we get -$$\binom{n}{0}+\binom{n}{1}+\binom{n}{2} + \cdots+\binom{n}{n}\geq (n+1)\cdot \left[\binom{n}{0}\cdot \binom{n}{1}\cdot \binom{n}{2} \cdots \binom{n}{n}\right]^{\frac{1}{n+1}}$$ -So $$2^n\geq (n+1)\left[\binom{n}{1}\cdot \binom{n}{2}\cdots \binom{n}{n}\right]^{\frac{1}{n+1}}$$ -How can I solve it after that? Help me. -Thanks. - -REPLY [2 votes]: You have the right approach; you just need a small detour. -Hint: Start with -$$\sum_k k \binom{n}{k} = n 2^{n-1} \tag{why?}$$ -Now apply AM-GM and watch your exponentiation.<|endoftext|> -TITLE: Is there a name for a group having a normal subgroup for every divisor of the order? -QUESTION [6 upvotes]: Suppose $G$ is a group of order $n$. - -Is there a name (or an easy criterion) for the property that for every divisor $d|n$, there is a normal subgroup of order $d$? - -The abelian groups and the p-groups have this property, but other groups satisfy the property as well. The dihedral groups (excluding the 2-groups) do not have this property.
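As a concrete illustration (my own brute-force sketch, not from the original post): the alternating group $A_4$ has order $12$ but no subgroup of order $6$ at all, normal or otherwise, so it fails the property for the divisor $d=6$. Since a finite subset containing the identity is a subgroup as soon as it is closed under the product, an exhaustive check is feasible for small groups:

```python
from itertools import combinations, permutations

# The alternating group A4: the 12 even permutations of {0,1,2,3}.
def parity(p):
    return sum(p[i] > p[j] for i in range(len(p)) for j in range(i + 1, len(p))) % 2

A4 = [p for p in permutations(range(4)) if parity(p) == 0]
identity = (0, 1, 2, 3)
compose = lambda p, q: tuple(p[q[i]] for i in range(4))

def has_subgroup_of_order(group, k):
    # A finite subset containing the identity is a subgroup iff it is
    # closed under the group product, so brute force over subsets works.
    others = [g for g in group if g != identity]
    for rest in combinations(others, k - 1):
        h = set(rest) | {identity}
        if all(compose(a, b) in h for a in h for b in h):
            return True
    return False

print(has_subgroup_of_order(A4, 6))   # False: no subgroup of order 6 at all
print(has_subgroup_of_order(A4, 4))   # True: the Klein four-group
```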
- -REPLY [2 votes]: If you drop the condition on the normality of the subgroups, then this class of groups is called CLT groups (satisfying the Converse of Lagrange's Theorem). There is a classic paper of Henry Bray that provides the basic properties of these groups. We have the following proper inclusions of classes $$\{\text{Nilpotent Groups}\} \subsetneq \{\text{Supersolvable Groups}\} \subsetneq \{\text{CLT Groups}\} \subsetneq \{\text{Solvable Groups}\}.$$ -CLT groups are neither subgroup- nor quotient-closed, but a finite direct product of CLT groups is again CLT.<|endoftext|> -TITLE: How many Hamiltonian circuits are there in a complete graph with n vertices? -QUESTION [6 upvotes]: How many Hamiltonian circuits are there in a complete, undirected and simple graph with $n$ vertices? -The answer written in my book is: $$\frac{\left(n-1\right)!}{2}$$ -What is the combinatorial explanation of this? -My best shot was to try to count, for each size of Hamiltonian circuit (triangles, quadrilaterals, pentagons and so on), how many of each there are, and to sum them. So I tried to count, for each number of edges, the number of possibilities to complete it to the mentioned shapes. -I mean, for $n$ vertices, I choose any 2 vertices (that's an edge), and for each other vertex, by connecting it to each vertex of my edge by new edges, I can create a triangle, which is a Hamiltonian cycle of size 3, and so on. -But there are a lot of repeats and that's a mess. -Maybe I didn't get the point at all, because the expression -$$\frac{\left(n-1\right)!}{2}$$ -seems to me to be no more than the number of Euler circles (each vertex degree is 2): $(n-1)!$ is the number of ways to order $n$ different elements in a circle, divided by $2$ because of the reflection. -Isn't a Hamiltonian circuit in such a graph an $n$-cycle, so it could be a triangle, quadrilateral, and so on? - -REPLY [4 votes]: Any arrangement of the $n$ vertices yields a Hamiltonian cycle.
In fact we may group the $n!$ possible arrangements into groups of $2n$, as one may choose any of the $n$ vertices to start from and either of the two directions to list the vertices in. It follows that there are precisely $\frac{n!}{2n}$ distinct Hamiltonian cycles. The result follows. -Further edit: Sketch of a more formal argument: - -Let $A=\{(v_1,\dots,v_n):v_i\in V(G),v_i\ne v_j\mbox{ for } i\ne j\}$ be the set of all arrangements of the $n$ vertices. Show that $|A|=n!$. -Show that each $(v_1,\dots,v_n)$ yields a Hamiltonian cycle and all Hamiltonian cycles arise in this manner. (Many of these cycles are duplicates of each other.) -Consider a relation on $A$: $(v_1,\dots,v_n)\sim(w_1,\dots,w_n)$ if and only if $(v_1,\dots,v_n)$ and $(w_1,\dots,w_n)$ correspond to the same cycle. Prove that this is an equivalence relation. -Observe that each equivalence class will have precisely $2n$ elements. (Work out the case $n=5$ by hand.) -Conclude that there are $n!/2n$ equivalence classes. Hence conclude there are $(n-1)!/2$ Hamiltonian cycles.<|endoftext|> -TITLE: An example of prime ideal $P$ such that $\bigcap_{n=1}^{\infty}P^n$ is not prime -QUESTION [8 upvotes]: I am looking for an example of a prime ideal $P$ such that $\bigcap_{n=1}^{\infty}P^n$ is not prime. - -In a Prüfer domain such an intersection is always a prime ideal. - -REPLY [3 votes]: Let $(R,m)$ be a (commutative Noetherian) local ring which is not a domain. By Krull's intersection theorem, $\bigcap_{n=1}^{\infty} m^n=0$ is not a prime. -One can take an appropriate quotient of a local domain to obtain a local ring which is not a domain: $R=K[[X]]/(X^t)$<|endoftext|> -TITLE: The number and amount of dividers - a power of two -QUESTION [5 upvotes]: For a positive integer $n$ it is known that the sum of all divisors of that number is a power of $2$. Prove that the number of these divisors is also a power of $2$.
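Before attempting a proof, the claim can be sanity-checked numerically (my own sketch, not part of the original post):

```python
def is_pow2(m):
    return m >= 1 and m & (m - 1) == 0   # powers of two have exactly one set bit

hits = []
for n in range(1, 100):
    divs = [d for d in range(1, n + 1) if n % d == 0]
    if is_pow2(sum(divs)):
        assert is_pow2(len(divs)), n     # the statement to be proved
        hits.append(n)

print(hits)   # [1, 3, 7, 21, 31, 93]
```

The hits below $100$ are exactly the products of distinct Mersenne primes $3, 7, 31$, in line with the examples worked out next.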
-My work so far: -Several of these numbers I found. -Let $\tau(n)$ be the number of divisors of $n$. -1) $n=3; 1+3=4=2^2$ and $\tau (3)=2$ -2) $n=7; 1+7=8=2^3$ and $\tau (7)=2$ -There are no other such numbers from $1$ to $20$. -3) $n=21=3 \cdot 7; \tau(21)=4$ $(1,3,7,21)$ and $1+3+7+21=32=2^5$ -4) $n=31; 1+31=32=2^5$ and $\tau(31)=2$ - -REPLY [4 votes]: The first two divisor functions of a number with prime factorization $n=\prod_ip_i^{a_i}$ are -$$ -\sigma_0(n)=\prod_i(a_i+1) -$$ -and -$$ -\sigma_1(n)=\prod_i\left(1+p_i+\cdots+p_i^{a_i}\right)\;. -$$ -Since $\sigma_0$ is clearly a power of $2$ if and only if the $a_i+1$ are, we need to show that $a_i+1$ is a power of $2$ if $1+p_i+\cdots+p_i^{a_i}$ is a power of $2$. Now if $p_i=2$, then $1+p_i+\cdots+p_i^{a_i}=2^{a_i+1}-1$, which is not a power of $2$, so we can focus on odd primes. For odd $p_i$, if $1+p_i+\cdots+p_i^{a_i}$ is a power (and hence a multiple) of $2$, then $a_i+1$ is even, and $1+p_i+\cdots+p_i^{a_i}=(1+p_i)(1+p_i^2+\cdots+p_i^{a_i-1})$. For this to be a power of $2$, both factors must be. But then we can apply the same reasoning to the factor $1+p_i^2+\cdots+p_i^{a_i-1}$ and factor out $1+p_i^2$. We can continue to factorize the entire sum like this, and it follows that $a_i+1$, the number of summands, is a power of $2$, as required.<|endoftext|> -TITLE: Cauchy-Schwarz inequality in vector analysis -QUESTION [5 upvotes]: Vectors $x$ and $y$ are related as follows $$\mathbf{x}+\mathbf{y}(\mathbf{x} \cdot \mathbf{y})=\mathbf{a}.$$ -Show $$(\mathbf{x} \cdot \mathbf{y})^2=\frac{|\mathbf{a}|^2-|\mathbf{x}|^2}{2+|\mathbf{y}|^2}$$ -I think we need to proceed using the Cauchy-Schwarz inequality. -$\mathbf{y}(\mathbf{x} \cdot \mathbf{y})=\mathbf{a}-\mathbf{x}$ -$\mathbf{y}(\mathbf{y} \cdot \mathbf{x})(\mathbf{y} \cdot \mathbf{x})=(\mathbf{a}-\mathbf{x})(\mathbf{x} \cdot \mathbf{y})$ -$\mathbf{y}(\mathbf{y} \cdot \mathbf{x})^2=(\mathbf{a}-\mathbf{x})(\mathbf{x} \cdot \mathbf{y})$ -Then, I am lost.
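A quick numerical sanity check of the claimed identity (my own sketch; `dot` is a small helper defined here, not from the post):

```python
import random

def dot(u, v):
    return sum(ui * vi for ui, vi in zip(u, v))

random.seed(1)
x = [random.uniform(-1, 1) for _ in range(3)]
y = [random.uniform(-1, 1) for _ in range(3)]

s = dot(x, y)                                  # the scalar x . y
a = [xi + s * yi for xi, yi in zip(x, y)]      # a = x + y (x . y)

lhs = s ** 2
rhs = (dot(a, a) - dot(x, x)) / (2 + dot(y, y))
assert abs(lhs - rhs) < 1e-12                  # identity holds up to rounding error
```

Random vectors pass the check, which suggests expanding $|\mathbf{a}|^2$ directly is the way to a proof.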
- -REPLY [5 votes]: We have $a = x + \langle x,y\rangle\, y$, hence -\begin{align*} - |a|^2 &= \langle a,a\rangle\\ - &= \langle x+\langle x,y\rangle y,\ x+\langle x,y\rangle y\rangle\\ - &= \langle x,x\rangle + 2\langle x,y\rangle^2 + \langle x,y\rangle^2\langle y,y\rangle\\ - &= |x|^2 + (2 + |y|^2)\langle x,y\rangle^2\\ -\iff |a|^2 - |x|^2 &=(2 + |y|^2)\langle x,y\rangle^2\\ -\iff \langle x,y\rangle^2 &= \frac{ |a|^2 - |x|^2 }{2 + |y|^2} -\end{align*}<|endoftext|> -TITLE: Understanding the definition of the covariance operator -QUESTION [8 upvotes]: Let $\mathbb H$ be an arbitrary separable Hilbert space. The covariance operator $C:\mathbb H\to\mathbb H$ between two $\mathbb H$-valued zero mean random elements $X$ and $Y$ with $\operatorname E\|X\|^2<\infty$ and $\operatorname E\|Y\|^2<\infty$ is defined by -$$ -C(h)=\operatorname E[\langle X,h\rangle Y] -$$ -for each $h\in\mathbb H$. Why is the covariance operator defined in this way? -It seems that we can arrive at this definition if we try to generalise the definition of the covariance matrix. Suppose for the moment that $\mathbb H=\mathbb R^n$. Then the covariance matrix is given by $\operatorname E[XY^T]$ which is a bounded linear operator from $\mathbb R^n$ to $\mathbb R^n$. Also, we have that -$$ -\operatorname E[YX^T](h) -=\operatorname E[YX^Th] -=\operatorname E[Y\langle X,h\rangle] -=\operatorname E[\langle X,h\rangle Y] -$$ -for each $h\in\mathbb R^n$. The rightmost expression only depends on the inner product $\langle\cdot,\cdot\rangle$ and maybe we can say that this is the definition of the covariance operator for the arbitrary $\mathbb H$. Is this the right intuition behind the definition of the covariance operator? -Any help is much appreciated! - -REPLY [4 votes]: Your line of thought is one of the possible intuitions. Another is the following: Note that a matrix does not only represent a linear operator but also a bilinear form.
We define the covariance form of $X,Y \colon \Omega \to H$ by -$$ c(h,k) = \mathbf E [\langle X,h\rangle\langle Y,k\rangle] $$ -that is -$$ c(h,k) = \mathrm{cov}\bigl(\langle X,h\rangle, \langle Y,k\rangle\bigr) $$ -so we reduce it to the covariance of real, mean-zero random variables. This is a bounded, bilinear form. By the Riesz representation theorem, $c$ corresponds to a unique linear operator $C \colon H \to H$ by -$$ \langle Ch,k\rangle = c(h,k). $$ -Then we have -$$ \langle Ch, k\rangle = \mathbf E [\langle X,h\rangle\langle Y,k\rangle] = \bigl\langle{\mathbf E [\langle X,h\rangle Y]}, k \bigr\rangle $$ -so -$$ Ch = \mathbf E [\langle X,h\rangle Y]. $$<|endoftext|> -TITLE: Last $m$ digits of a sum -QUESTION [5 upvotes]: What is an efficient way (not using any computer programs and such) to find the last $m$ digits of some terrible looking sum, for example, I don't know, -$$1^{1000}+2^{1000}+3^{1000}+\ldots+(10^{1000})^{1000}?$$ -And let's say that also $m=1000$. I think I know how to approach this problem, but it seems very hard (rather near impossible) and I would like to find something "nice", if that is even possible. Either way it probably requires some modular arithmetic. -Thanks a lot! - -REPLY [3 votes]: There is no "one-size-fits-all" general algorithm, but there are a few principles that may help. Here are some (please feel free to add more in comments): - -When dealing with a modulus such as $10^m$, it may help to look separately at its prime power factors, in this case $2^m$ and $5^m$, and then use the Chinese Remainder Theorem to combine them. -Euler's theorem: if $\gcd(a,n) = 1$, then $a^{\phi(n)} \equiv 1 \mod n$, where $\phi$ is Euler's totient function. -Repetition: $k^x \equiv j^x \mod n$ if $k \equiv j \mod n$. Thus if $m$ is a multiple of $n$, say $m = c n$, then $\sum_{k=1}^m k^x \equiv c \sum_{j=1}^n j^x \mod n$.
-Rearrangement: if $\gcd(n,a) = 1$, then -$$\sum_{k=1}^n k^x \equiv \sum_{k=1}^n (a k)^x \equiv a^x \sum_{k=1}^n k^x \mod n$$ -so $$(1-a^x) \sum_{k=1}^n k^x \equiv 0 \mod n $$ -Thus if there is $a$ such that $a$ and $a^x-1$ are coprime to $n$, we get $\sum_{k=1}^n k^x \equiv 0 \mod n$. -Faulhaber's formula expresses $\sum_{k=1}^m k^x$ as a polynomial in $m$ of degree $x+1$, with constant term $0$. Of course we need to be careful in using this with modular arithmetic, because the coefficients are rational numbers. - -By the way, in the case at hand, using Faulhaber's formula we can solve the problem: -$$\sum_{j=1}^{10^{1000}} j^{1000} \equiv 3 \times 10^{999} \mod 10^{1000}$$ -EDIT: - Faulhaber expresses $\sum_{j=1}^{10^{1000}} j^{1000}$ as a polynomial in $n = 10^{1000}$ with $502$ nonzero terms, but none of the denominators -are divisible by $2^2$ or $5^2$, so only the term in $n^1$ has a chance to be nonzero mod $n$: this term is -$B_{1000} n$ where $B_{1000}$ is the $1000$'th Bernoulli number. -The result is $10^{1000} B_{1000} \mod 10^{1000} = 3 \times 10^{999}$.<|endoftext|> -TITLE: What is a good notation for an “even falling factorial”? -QUESTION [5 upvotes]: It has been suggested to me that I use this notation: -$$ -\lfloor n \rfloor_2 = 2 \left\lfloor \frac n 2 \right\rfloor = \text{“even floor of $n$''} = \text{largest even integer}\le n. -$$ -I also want to write about an “even falling factorial” that, for example, given the inputs $57$ and $6$, or $56$ and $6$, has this value: -$$ -56\times54\times52\times50\times48\times46, -$$ -i.e. it is -$$ -\lfloor 57 \rfloor_2 \cdot (\lfloor 57 \rfloor_2 - 2) \cdot (\lfloor 57 \rfloor_2 - 4) \cdot (\lfloor 57 \rfloor_2 - 6) \cdot (\lfloor 57 \rfloor_2 - 8) \cdot (\lfloor 57 \rfloor_2 - 10)$$ -so in general, given $n$ and $k$, it is this: -$$ -\prod_{j=0}^{k-1} \lfloor n - 2j \rfloor_2. -$$ -I could just call it $n\mathbin{\sharp}k$ or something like that. 
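For concreteness, the "even floor" and "even falling factorial" defined above can be implemented and cross-checked in a few lines (my own sketch, not from the original post):

```python
def even_floor(n):
    return 2 * (n // 2)                     # largest even integer <= n

def even_falling(n, k):
    # the product of even floors of n, n-2, ..., n-2(k-1)
    prod = 1
    for j in range(k):
        prod *= even_floor(n - 2 * j)
    return prod

def double_factorial(n):                    # n!! for n >= 0
    prod = 1
    while n > 1:
        prod *= n
        n -= 2
    return prod

# the worked example: inputs 57 and 6 (or 56 and 6) give the same value
assert even_falling(57, 6) == 56 * 54 * 52 * 50 * 48 * 46
assert even_falling(56, 6) == even_falling(57, 6)

# equivalently, a ratio of double factorials of even floors
assert even_falling(57, 6) == (double_factorial(even_floor(57))
                               // double_factorial(even_floor(57 - 2 * 6)))
```

The last assertion works because $\lfloor n-2j \rfloor_2 = \lfloor n \rfloor_2 - 2j$ for integer $n$, so the product telescopes into a quotient of double factorials.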
But my questions are: - -Is there some standard notation for this?; and -What notation would be easiest for the reader to follow when the topic is neither the notation nor the concept that it denotes, but rather the notation and the concept are merely being used in the course of discussing a topic for which they are useful? - -REPLY [4 votes]: In analogy with the usual definitions of partial permutation numbers in terms of (normal) factorials, you could compactly express this quantity as -$$ -\frac{\lfloor n \rfloor_2!!}{\lfloor n - 2k \rfloor_2!!}, -$$ -where $!!$ denotes the double factorial.<|endoftext|> -TITLE: Show that $\nabla\cdot\left(\dfrac{\mathbf{e}_r}{r^2}\right)=4\pi\delta(\mathbf{r})$ using the divergence theorem. -QUESTION [6 upvotes]: The book answer goes as follows: - -By the divergence theorem, in spherical coordinates we find $$\color{red}{\iiint\limits_{\large\text{volume}\,\tau}\nabla\cdot\left(\dfrac{\mathbf{e}_r}{r^2}\right)\mathrm{d}\tau}=\color{blue}{\iint\limits_{\large\text{surface enclosing}\, \tau}\dfrac{\mathbf{e}_r}{r^2}\cdot\mathbf{e}_r\,\mathrm{d}\sigma}=\color{#180}{\int_{\phi=0}^{2\pi}\int_{\theta=0}^{\pi}\frac{1}{r^2}r^2\sin\theta\,\mathrm{d}\theta\,\mathrm{d}\phi}=4\pi$$ Thus $\nabla\cdot\left(\dfrac{\mathbf{e}_r}{r^2}\right)$ has the properties that it is zero $\forall\,r\gt 0$ but its integral over any volume including the origin is $4\pi$; this suggests that it is equal to $4\pi\delta(\mathbf{r})$. - -As mentioned in a comment below, $\mathbf{e}_r$ is a unit radial vector. -I know that the $\color{red}{\mathrm{red}}$ and $\color{blue}{\mathrm{blue}}$ integrals are a statement of the divergence theorem. The only thing I can't understand is how the $\color{#180}{\mathrm{green}}$ integral was obtained from the $\color{blue}{\mathrm{blue}}$ integral.
-I know that $$\mathrm{d}\sigma=\left|\frac{\partial \mathbf{r}}{\partial \theta}\times\frac{\partial \mathbf{r}}{\partial \phi}\right|\,\mathrm{d}\theta\,\mathrm{d}\phi\tag{1}$$ I think that equation $(1)$ has been used but I'm not sure how to use it. Could someone please explain how the $\color{#180}{\mathrm{green}}$ integral was reached? - -REPLY [2 votes]: A note on what I think is a misleading trend in some texts, on which I have recently stumbled. -As a proper Riemann integral, $$\iiint_{\tau}\nabla\cdot\left(\frac{\mathbf{e}_r}{r^2}\right)d\tau$$ where $\frac{\mathbf{e}_r}{r^2}=\frac{\mathbf{x}-\mathbf{y}}{\|\mathbf{x}-\mathbf{y}\|^3}$ (it seems that $\mathbf{y}=\mathbf{0}$ in your case), is $0$ if $\mathbf{y}\notin\bar{\tau}$ and does not exist if $\mathbf{y}\in\bar{\tau}$, because the integrand of a Riemann integral, in the usual calculus definitions of it, has to be defined on all of the domain. -As the limit $$\lim_{\varepsilon\to 0}\iiint_{\tau\setminus B(\mathbf{y},\varepsilon)}\nabla\cdot\left(\frac{\mathbf{x}-\mathbf{y}}{\|\mathbf{x}-\mathbf{y}\|^3}\right)dx_1dx_2dx_3$$ it is $0$ because $\lim_{\varepsilon\to 0}0=0$, and the same holds for the Lebesgue integral $$\int_{\tau}\nabla\cdot\left(\frac{\mathbf{x}-\mathbf{y}}{\|\mathbf{x}-\mathbf{y}\|^3}\right)d\mu_{\mathbf{x}}$$ which is calculated as the preceding limit. -This shows that, under these definitions of the integral and the usual definition of the derivative, the divergence theorem, certainly valid if, instead of $\frac{\mathbf{e}_r}{r^2}$, you had a vector field $\mathbf{F}\in C^1(\mathring{A})$ with $\tau\subset\mathring{A}$ satisfying opportune assumptions, cannot be applied.
-Since $\forall\mathbf{x}\in\mathbb{R}^3\setminus\{\mathbf{y}\}\quad\frac{\mathbf{x}-\mathbf{y}}{\|\mathbf{x}-\mathbf{y}\|^3}=-\nabla\left(\frac{1}{\|\mathbf{x}-\mathbf{y}\|}\right)$ and the divergence of the gradient is the Laplacian $\nabla\cdot\nabla=\nabla^2$, we see that $\nabla\cdot\left(\frac{\mathbf{x}-\mathbf{y}}{\|\mathbf{x}-\mathbf{y}\|^3}\right)=-\nabla^2\left(\frac{1}{\|\mathbf{x}-\mathbf{y}\|}\right)$. Then by reading the integral $$\int_{\tau}-\nabla^2\left(\frac{1}{r}\right)\varphi \,d\tau$$ where $\varphi\in C^2(\mathbb{R}^3)$ (typically $\varphi\in C^\infty(\mathbb{R}^3)$) is such that $\forall\mathbf{x}\notin\tau\quad\varphi(\mathbf{x})=0$, in the symbolic way of the Laplacian of the distribution defined by $-\frac{1}{r}$, it can be shown, as it is here, that $-\int_{\tau}\nabla^2\left(\frac{1}{r}\right)\varphi \,d\tau=4\pi\int_{\tau}\delta_{\mathbf{y}}\varphi \,d\tau:=4\pi\varphi(\mathbf{y})$ (where $\delta_{\mathbf{y}}(\mathbf{x}):=\delta(\mathbf{x}-\mathbf{y})$), but that is $$\lim_{\varepsilon\to 0}\iiint_{\tau\setminus B(\mathbf{y},\varepsilon)}\frac{-\nabla^2\varphi(\mathbf{x})}{\|\mathbf{x}-\mathbf{y}\|}dx_1dx_2dx_3=\int_{\tau}\frac{-\nabla^2\varphi(\mathbf{x})}{\|\mathbf{x}-\mathbf{y}\|}d\mu_{\mathbf{x}}$$ if we use the usual Riemann (on the left) or Lebesgue (on the right) integrals, while $$\lim_{\varepsilon\to 0}\iiint_{\tau\setminus B(\mathbf{y},\varepsilon)}\nabla\cdot\left(\frac{\mathbf{x}-\mathbf{y}}{\|\mathbf{x}-\mathbf{y}\|^3}\right)\varphi(\mathbf{x})dx_1dx_2dx_3=\int_{\tau}\nabla\cdot\left(\frac{\mathbf{x}-\mathbf{y}}{\|\mathbf{x}-\mathbf{y}\|^3}\right)\varphi(\mathbf{x})d\mu_{\mathbf{x}}\equiv 0$$ for all $\mathbf{y}$ and all functions $\varphi$.<|endoftext|> -TITLE: Why is the group $[\Sigma\Sigma X, Y]_{\ast}$ commutative? -QUESTION [5 upvotes]: Can anyone give a reference (or explain here) why the group $[\Sigma\Sigma X,Y]_*$ is commutative? How is it related to the fact that $\Sigma X$ is a co-H-space?
- -REPLY [4 votes]: This is not my area of expertise, so this is a rough idea of why $[\Sigma\Sigma X, Y]_{\ast}$ is a commutative group. -First of all, $\Sigma$ is left adjoint to $\Omega$, so $[\Sigma\Sigma X, Y]_{\ast} \cong [\Sigma X, \Omega Y]_{\ast}$. Now $\Sigma X$ is a cogroup object in $\mathsf{hTop}_{\ast}$ (which is even stronger than being a co-H-space) and $\Omega Y$ is a group object in $\mathsf{hTop}_{\ast}$, so $[\Sigma X, \Omega Y]_{\ast}$ has two natural group structures. It should then follow from the Eckmann-Hilton argument that the two group structures coincide and that the group is abelian. -May gives a sketch of a more direct proof which doesn't use any such language in a lemma at the end of Chapter $8$, section $2$ of A Concise Course in Algebraic Topology. - -This fact can be used to show that higher homotopy groups are abelian: for $n \geq 2$, -$$\pi_n(Y) = [S^n, Y]_{\ast} = [\Sigma\Sigma S^{n-2}, Y]_*$$ -so $\pi_n(Y)$ is abelian.<|endoftext|> -TITLE: Is this a characterization of commutative $C^{*}$-algebras -QUESTION [15 upvotes]: Assume that $A$ is a $C^{*}$-algebra such that $\forall a,b \in A, ab=0 \iff ba=0$. - -Is $A$ necessarily a commutative algebra? - -In particular does "$\forall a,b \in A, ab=0 \iff ba=0$" imply that $\parallel ab \parallel$ is uniformly dominated by $\parallel ba \parallel$? In other words, $\parallel ab \parallel \leq k \parallel ba \parallel$ for a uniform constant $k$. Of course the latter implies commutativity. -Note added: As an example we look at the Cuntz algebra $\mathcal {O}_{2}$. There are two elements $a,b$ with $ab=0$ but $ba\neq 0$. This algebra is generated by $x,y$ with $$\begin{cases}xx^{*}+yy^{*}=1\\x^{*}x=y^{*}y=1\end{cases}$$ This implies $x^{*}(yy^{*})=0$ but $(yy^{*})x^{*} \neq 0$.
-This shows that for every properly infinite $C^{*}$-algebra, there are two elements $a,b$ with $ab=0$ but $ba\neq 0$ - -REPLY [6 votes]: The property in the question is equivalent to the nonexistence of nontrivial nilpotent elements, see the elegant answer of Leonel Robert to this question, but the latter is equivalent to commutativity - -So $A$ is commutative if and only if $$\forall a,b \in A, ab=0 \iff ba=0 $$ - -P.S: I asked the moderators to consider this answer as a community wiki. -The following related property is proven here: -$A$ is commutative if and only if $$\forall a,b \in A,\;\; ab\in A_{sa}\iff ba \in A_{sa}$$<|endoftext|> -TITLE: Counter-example: Cauchy-Riemann equations do not imply differentiability -QUESTION [10 upvotes]: I need help with this exercise: - -Let $$ f(z) = \left\{ \begin{align} -&e^{-\frac{1}{z^4}} &\hspace{1mm} \mbox{if} \hspace{1mm} z \neq 0 \\ -&0 &\hspace{1mm} \mbox{if} \hspace{1mm} z = 0 \\ -\end{align} \right. $$ Show that $f$ satisfies the Cauchy-Riemann equations, but $f$ is not differentiable at $z=0$. - -I have to compute $u_x(x,y)$, $u_y(x,y)$, $v_x(x,y)$ and $v_y(x,y)$, so I have to find $u(x,y)$ and $v(x,y)$ explicitly. My attempts: if $z = x+iy$, then $z^4 = (x+iy)^4$; doing the math, I found -$$ z^4 = (x^4 - 6x^2y^2 + y^4)+i(4x^3y - 4xy^3) $$ -I don't know how to find $-\dfrac{1}{z^4}$. My second attempt was this: (trying to find $f(z)$ in polar coordinates) let $z = |z|e^{i\theta}$, then $z^4 = |z|^4e^{i(4\theta)}$, thus -$$ e^{-\frac{1}{z^4}} = e^{-|z|^{-4}e^{-i4\theta}} $$ -I appreciate any idea you have. - -REPLY [12 votes]: The function $f$ is analytic off the origin, hence satisfies the CR equations at all $z\ne0$. For $z=0$ look at -$$f_x(0,0)=\lim_{x\to0}{e^{-1/x^4}\over x}=0\ ,$$ -and similarly -$$f_y(0,0)=\lim_{y\to0}{e^{-1/(iy)^4}\over y}=0\ .$$ -It follows that $u_x(0,0)=v_x(0,0)=u_y(0,0)=v_y(0,0)=0$; so $f$ satisfies the CR equations also at $z=0$.
-But $f$ is not even continuous at $z=0$: Consider the points $z(t):=(1+i)t$ for real $t$ near $0$. I leave the details to you.<|endoftext|> -TITLE: Finding vertical asymptotes of $\frac{3x^4 + 3x^3 - 36x^2}{x^4 - 25x^2 + 144}$ -QUESTION [5 upvotes]: I'm trying to find the vertical asymptotes for $$f(x) = \frac{3x^4 + 3x^3 - 36x^2}{x^4 - 25x^2 + 144}$$ -If I understand correctly, a vertical asymptote exists at $x=a$ when a value $a$ is found such that $f(x)$ increases to $\infty$ as $x$ approaches $a$. -So, we must find a number that makes the denominator infinitely small, or in other words, $0$. -Setting $$x^4 - 25x^2 + 144 = 0$$ -$$(x^2 - 9)(x^2 - 16) = 0$$ -$$(x+3)(x-3)(x+4)(x-4) = 0$$ -So based on my understanding, $[-4, -3, 3, 4]$ should be values of $a$ where $x = a$ is a vertical asymptote. -Graphing this function, I can clearly see that there are only vertical asymptotes at $x = 4$ and $x = -3$. -Where did I go wrong in my theory? - -REPLY [2 votes]: Hint: -By factoring the numerator we have -\begin{align} -3x^4+3x^3-36x^2&=3x^2(x^2+x-12)\\ -&=3x^2(x+4)(x-3) -\end{align} -Then -$$\frac{3x^4+3x^3-36x^2}{x^4 - 25x^2 + 144}=\frac{3x^2(x+4)(x-3)}{(x+3)(x-3)(x+4)(x-4)}=\frac{3x^2}{(x+3)(x-4)}\qquad x\neq -4, x\neq 3$$ -So, the limits of the function as $x\to -4$ and $x\to 3$ are finite, hence there are no asymptotes at these points.<|endoftext|> -TITLE: Variable leaving basis in linear programming - when does it happen? -QUESTION [13 upvotes]: In the simplex algorithm in linear programming, -what are the conditions for a variable to leave a basis (not necessarily the basis for the/an optimal solution)? -I'm supposed to list as many sufficient and necessary conditions as possible for some basic variable $x_q$, which could be slack, artificial, or non-slack and non-artificial. - - -Let $x_q$ be the $s$-th basic variable. Suppose the $s$-th row of some current simplex tableau has 1 in the column of $x_q$ and 0's everywhere else. Under what circumstances, if any, might $x_q$ leave the basis?
Can any of the values in the $s$-th row of the tableau ever change? - - -Well, since it's a basic variable, I'm guessing the $x_q$ column already has 0's everywhere except in the $s$-th row. Now, the $x_q$ row has 0's everywhere except in the column of $x_q$, like: - - - -This is in the context of the Big M Method and artificial variables. I'm not quite sure what the relationship is exactly, though. -Edit: It looks like one of the constraints in the original (maximisation?) problem is something like -$$x_2 = 10$$ -or -$$x_4 = 0$$ -I guess the relationship would be that the Big M Method applies for equality constraints? -But I think an equality constraint like for example -$$x_4 = 0$$ -would lead to -$$x_4 + x_5 = 0$$ -with $z$ being replaced with $z' = z - Mx_5$ -So -$$x_4 + x_5 = 0$$ -doesn't exactly lead to a row of all but one zero entry? There are two non-zero entries? - -What I tried: -$x_q$ leaves if there is some non-basic variable $x_r$ that enters because - -$$z_r - c_r < 0$$ -$$z_r - c_r = \min_j (z_j - c_j)$$ -$$\frac{b_q'}{a_{qr}'} = \min_i \{\frac{b_i'}{a_{ir}'} | a_{ir}' > 0 \}$$ - -Is that right? Any other sufficient or necessary conditions? - -What is the relevance of the 0's in the row? - -Edit: -I guess an example would be something like -$$\begin{bmatrix} -2 & 0 & 10\\ -0 & 1 & 0\\ -5 & 0 & 6 -\end{bmatrix}$$ -If $x_q$ leaves and then $x_r$ enters, where $x_q$'s column is the second column and $x_r$'s is, say, the first column, what would be the EROs? -$$\color{red}{\frac{1}{2}R_1 + R_2 \to R_2}$$ -$$-2R_2 + R_1 \to R_1$$ -$$-5R_2 + R_3 \to R_3$$ -I have never had to make $\color{red}{\text{a zero entry into a non-zero number}}$ in the simplex method. -I find this suspicious. Should I not? -Perhaps the elements in the row can never change because $x_q$ can never leave? -Or $x_q$ can never leave because the row can never change? - -REPLY [3 votes]: 1.
Let's show $x_q$ can't leave the basis
-Intuitively
-If the row $s$ contains all zeros except in the column of $x_q$, it means the problem contains a constraint of the type
-$$x_q = b_s$$
-In that case, only solutions with $x_q = b_s$ are feasible, so the variable $x_q$ never leaves the basis.
-Mathematically
-
-$x_q$ leaves if there is some non-basic variable $x_r$ that enters because
-
-$$z_r - c_r < 0$$
-$$z_r - c_r = \min_j (z_j - c_j)$$
-$$\frac{b_q'}{a_{qr}'} = \min_i \{\frac{b_i'}{a_{ir}'} | a_{ir}' > 0 \}$$
-
-
-This is true, assuming the problem is about maximization.
-Now, let's apply this to the case where the row corresponding to $x_q$ contains only zeros except in the column corresponding to $x_q$. You have
-$$z_q = 1\times c_q = c_q$$
-So $z_q- c_q = 0$, which is normal since $x_q$ is a basic variable.
-Now, let $x_r$ be the entering variable. From the assumptions of the problem, we know that $a'_{sr} = 0$. Since the ratio test
-$$\min_i \left\{\frac{b_i'}{a_{ir}'} \,\middle|\, a_{ir}' > 0 \right\}$$
-only considers rows with $a_{ir}' > 0$, row $s$ can never be the pivot row.
-This means the variable $x_q$ never leaves the basis.
-
-2. Let's show row $s$ can't change
-Intuitively
-Let's call $x_r$ the entering variable, and $x_t$ the exiting variable.
-Now let's see how the pivot will affect the column of $x_q$. We have that
-line $i$ of the new tableau is a linear combination of lines $i$ and $t$ of the old tableau. Since these values are 0 in the column of $x_q$, the column of $x_q$ will never change. The reduced cost of variable $x_q$ will always be 0 and $x_q$ will never leave the basis.
-Mathematically
-Let's call $x_r$ the entering variable, and $x_t$ the exiting variable.
-When making the pivot for row $s$, the formula is (writing $a'_{ij}$ for the new value in the tableau and $a_{ij}$ for the old value):
-$$a'_{si} = a_{si} - \frac{a_{sr}}{a_{tr}} a_{ti}$$
-Since $a_{sr} = 0$, the row $s$ stays unchanged.<|endoftext|>
-TITLE: Does $L^2 = P^2$ imply that $L = P$?
-QUESTION [5 upvotes]: I have encountered a problem when solving this problem:
-Assume that $\alpha \in (\pi, \frac{3}{2} \pi)$, then prove $\sqrt{\frac{1 + \sin \alpha}{1 - \sin \alpha}} - \sqrt{\frac{1 - \sin \alpha}{1 + \sin \alpha}} = -2 \tan \alpha$
-The most popular way to solve this kind of problem is to take the left-hand side of the equation and prove that it is equal to the right-hand side. But this time, it is not so easy, because I can't see any way to transform the LHS into the RHS. My question is - can I square the LHS and prove that the result equals the square of the RHS?
-In the language of mathematics: does $L^2 = P^2$ imply that $L = P$?
-
-REPLY [4 votes]: (I made it a little more general)
-Note that
-$\sqrt{x}-\sqrt{\frac1{x}}
-=\sqrt{x}-\frac1{\sqrt{x}}
-=\frac{x-1}{\sqrt{x}}
-$.
-If
-$x = \frac{1+y}{1-y}
-$,
-this is
-$\begin{array}\\
-\frac{x-1}{\sqrt{x}}
-&=\frac{\frac{1+y}{1-y}-1}{\sqrt{\frac{1+y}{1-y}}}\\
-&=\frac{1+y-(1-y)}{(1-y)\sqrt{\frac{1+y}{1-y}}}\\
-&=\frac{2y}{\sqrt{(1+y)(1-y)}}\\
-&=\frac{2y}{\sqrt{1-y^2}}\\
-\end{array}
-$
-In your case,
-with
-$y = \sin a$,
-this becomes
-$\frac{2\sin a}{\sqrt{1-\sin^2a}}
-=\pm\frac{2\sin a}{\cos a}
-=\pm 2\tan a
-$.
-The restriction of $a$
-then decides the sign.<|endoftext|>
-TITLE: Are all nimbers included in the surreals?
-QUESTION [9 upvotes]: I guess the question says it all. The *nimber* (https://en.wikipedia.org/wiki/Nimber) concept, sometimes called "Sprague-Grundy numbers", embodies the "values" of positions in impartial games which can be added together to form larger games. The quintessential example is that in the game Nim, a pile with $n$ markers has a nim-value of $n$, sometimes denoted (in the notation used by Conway, Berlekamp, and Guy) as $\star n$.
-Non-impartial ("partizan") games can also have values other than nimbers, such as those formed by adding ordinary numbers to nimbers; also, values such as $\uparrow$ occur when a game presents a left-player move to $0$ and a right-player move to $\star$.
So for example, ${\uparrow} > 0$ and ${\uparrow} > \star$, but for any positive real number $p$,
-we have $p > {\uparrow}$.
-But partizan games can also be used to define the surreal numbers (https://en.wikipedia.org/wiki/Surreal_number). These include equivalents of all the reals, as well as infinities (and all the ordinals) and infinitesimals.
-But in the pages about the surreals, there is a conspicuous lack of mention of concepts like $\star$ and $\uparrow$. So are these nimbers also included in the surreals? And if so, how does $\uparrow$ compare with $+\epsilon$? (If they are comparable I think one must say $+\epsilon > {\uparrow}$ but I'm not really sure -- and it is hard for me to construct a game model that would answer the question.)
-
-REPLY [5 votes]: Surreal numbers (or just "numbers", for short) are very special kinds of games, namely those which can be written in the form $x=\{S\mid T\}$, where every element of $S$ and $T$ is a surreal number and $s<t$ for every $s\in S$ and $t\in T$. In the first case, Left can then move to $t+0=t$, and then wins since $t>x>0$ (since $x$ is a number, it satisfies $s<x<t$ for all $s\in S$ and $t\in T$).<|endoftext|>
-TITLE: Correlated joint normal distribution: calculating a probability
-QUESTION [7 upvotes]: Given
-$$
-f_{XY}(x,y) = \frac{1}{2\pi \sqrt{1-\rho^2}} \exp \left( -\frac{x^2 +y^2 - 2\rho xy}{2(1-\rho^2)} \right)
-$$
-$Y = Z\sqrt{1-\rho^2} + \rho X$
-And
-$$
-f_{XZ}(x,z) = \frac{1}{2\pi } \exp \left( -\frac{x^2 +z^2}{2} \right)
-$$
-Show that $P(X>0,Y>0)= \frac{1}{4}+\frac{1}{2\pi}(\arcsin \rho) $
-I'm supposed to use the fact that X and Z are independent standard normal random variables, but I don't quite understand how. Any help would be greatly appreciated.
-
-REPLY [13 votes]: Here's a solution that only uses linear algebra and geometry.
-If $\pmatrix{X\\ Y}$ is bivariate normal with mean $\pmatrix{0\\0}$ and covariance matrix $\Sigma=\pmatrix{1&\rho\\\rho&1}$, then $\pmatrix{U\\V}=\Sigma^{-1/2} \pmatrix{X\\Y}$ is bivariate normal with mean $\pmatrix{0\\0}$ and covariance matrix $\pmatrix{1&0\\ 0&1}.$ -That is, $U$ and $V$ are independent, standard normal random variables. -The illustration below shows that the probability that $\pmatrix{X\\Y}$ lies in the upper quadrant (in blue), is the same as the probability that $\pmatrix{U\\V}$ -lies in the wedge (in orange). Since the distribution of $\pmatrix{U\\V}$ is -rotationally invariant, simple geometry gives $\mathbb{P}(X>0,Y>0)={\theta\over 2\pi}.$ - -With $v=\Sigma^{-1/2}\pmatrix{0\\1}$ and $w=\Sigma^{-1/2}\pmatrix{1\\0},$ -we have $\cos(\theta)=\langle v,w\rangle /\|v\| \|w\|.$ But -\begin{eqnarray*} -\langle v,w\rangle &=& (0\ 1)\,\Sigma^{-1}\pmatrix{1\\0}=-\rho/(1-\rho^2)\\[5pt] -\|v\|^2&=&(0\ 1)\,\Sigma^{-1}\pmatrix{0\\1}=1/(1-\rho^2)\\[5pt] -\|w\|^2&=&(1\ 0)\,\Sigma^{-1}\pmatrix{1\\0}=1/(1-\rho^2), -\end{eqnarray*} -so that $\cos(\theta)=-\rho.$ Putting it all together gives -$$\mathbb{P}(X>0,Y>0)={\arccos(-\rho)\over 2\pi}.$$<|endoftext|> -TITLE: What is the $L^2$ gradient flow? -QUESTION [7 upvotes]: What does $L^2$ gradient flow mean? Here is the Ginzburg–Landau free -energy: -$$\mathcal{E}(\phi):=\int_{\Omega}(F(\phi)+\frac{\epsilon^2}{2}|\nabla\phi|^2)\text{d}\mathbf{x}$$ -Some references say the Allen-Cahn equation -$$\frac{\partial \phi(\mathbf{x},t)}{\partial t} = -{\epsilon^2}\Delta\phi(\mathbf{x},t)-F'(\phi(\mathbf{x},t))$$ -$$\frac{\partial \phi(\mathbf{x},t)}{\partial \mathbf{n}}=0, \ \text{on} \ \partial \Omega \times [0,T]$$ -is the $L^2$-gradient flow of the total free energy $\mathcal{E}(\phi)$. -How to derive this? - -REPLY [19 votes]: Given an energy $\mathcal{E}(\phi)$, the associated gradient flow is given by the equation -\begin{equation} - \frac{\partial \phi}{\partial t} = - \frac{\partial \mathcal{E}}{\partial \phi}. 
\tag{1}
-\end{equation}
-In other words, $\phi$ moves against the gradient of $\mathcal{E}$, so that $\mathcal{E}$ decreases along the flow. The terminology stems from the 'finite dimensional' case, where a function $f(x,y,z)$ produces a vector field $V = \nabla f$, which is called its 'gradient vector field'. Then, as with any vector field, one can study the flow induced by that vector field, i.e. the flow of the dynamical system given by $\dot{x} = V(x)$.
-In $(1)$, the notation $\frac{\partial \mathcal{E}}{\partial \phi}$ denotes the so-called functional derivative of $\mathcal{E}$ with respect to $\phi$, which generalises the 'gradient' notion for functions. There exist multiple versions of the functional derivative, mainly because its definition depends on the function space on which $\mathcal{E}$ acts. Anyway, the idea is to perturb $\phi$ a bit, i.e. to substitute $\phi \to \phi + \delta \psi$ with $0 < \delta \ll 1$, and work out the resulting expression. In the Ginzburg-Landau case, you get
-\begin{align}
- \mathcal{E}(\phi+\delta \psi) &= \int_\Omega F(\phi+\delta \psi) + \frac{\epsilon^2}{2} \left| \nabla \phi + \delta \nabla \psi\right|^2\,\text{d}\mathbf{x}\\
-&= \int_\Omega F(\phi) + \delta F'(\phi)\,\psi + \frac{\epsilon^2}{2} \left| \nabla \phi\right|^2 + \epsilon^2 \delta \nabla \phi \cdot \nabla \psi + \delta^2 \frac{\epsilon^2}{2}\left|\nabla \psi\right|^2\,\text{d}\mathbf{x}\\
-&= \mathcal{E}(\phi) + \delta \int_\Omega F'(\phi)\,\psi + \epsilon^2 \nabla \phi \cdot \nabla \psi\,\text{d}\mathbf{x} + \mathcal{O}(\delta^2)\\
-&= \mathcal{E}(\phi) + \delta \int_\Omega \left[F'(\phi) - \epsilon^2 \Delta \phi\right] \psi\,\text{d}\mathbf{x} + \delta\,\epsilon^2 \int_{\partial \Omega}\psi\,(\nabla \phi \cdot \mathbf{n})\,\text{d}S + \mathcal{O}(\delta^2),
-\end{align}
-where the last equality follows from integration by parts (the divergence theorem), with $\mathbf{n}$ the outward unit normal on $\partial\Omega$.
If you now assume that the normal derivative of $\phi$ vanishes on the boundary of $\Omega$ (which is precisely the boundary condition $\frac{\partial \phi}{\partial \mathbf{n}}=0$ in the problem), the boundary integral vanishes, and you're left with
-\begin{equation}
-\mathcal{E}(\phi+\delta \psi) - \mathcal{E}(\phi) = \delta \int_\Omega \left[F'(\phi) - \epsilon^2 \Delta \phi\right] \psi\,\text{d}\mathbf{x} = \delta \left\langle F'(\phi) - \epsilon^2 \Delta \phi,\psi\right\rangle_2
-\end{equation}
-up to terms of order $\delta^2$, where $\langle \cdot,\cdot \rangle_2$ is the $L_2$ inner product over $\Omega$. The above is now interpreted as the directional derivative of $\mathcal{E}$ in the direction of $\psi$, i.e.
-\begin{equation}
- \left\langle \frac{\partial \mathcal{E}}{\partial \phi},\psi\right\rangle_2 := \lim_{\delta \to 0} \frac{\mathcal{E}(\phi+\delta \psi)-\mathcal{E}(\phi)}{\delta} = \left\langle F'(\phi) - \epsilon^2 \Delta \phi,\psi\right\rangle_2,
-\end{equation}
-and therefore we see that
-\begin{equation}
-\frac{\partial \mathcal{E}}{\partial \phi} = F'(\phi) - \epsilon^2 \Delta \phi. \tag{2}
-\end{equation}
-Using $(2)$ in $(1)$ now gives the Allen-Cahn equation.<|endoftext|>
-TITLE: Axioms of Trigonometry
-QUESTION [6 upvotes]: On Wikipedia it gives a picture of all trigonometric functions of an angle laid atop the unit circle. Obviously there are other trigonometric identities, but what I'm wondering is, does Trigonometry have a list of axioms, or is it just a special case of analytic geometry? And if so, how does it fit into the rest of mathematics, because I seem to see it everywhere.
-
-REPLY [3 votes]: I'd say the modern viewpoint is that the trigonometric functions are best viewed through the lens of complex analysis. From this vantage point, there are no real "axioms of trigonometry."
In particular, we define:
-$$\cos(z) = \frac{1}{2}\left(e^{iz}+e^{-iz} \right) \qquad \sin(z) = \frac{1}{2i}\left(e^{iz}-e^{-iz} \right)$$
-Note that all of the transcendental functions appearing above can be defined as solutions to certain initial value problems:
-
-The exponential function is the unique smooth function $f:\mathbb{C} \rightarrow \mathbb{C}$ satisfying:
-$$f'(z) = f(z), \qquad f(0) = 1$$
-The cosine function is the unique smooth function $f:\mathbb{C} \rightarrow \mathbb{C}$ satisfying:
-
-$$f''(z) = -f(z), \qquad f(0) = 1, \,f'(0) = 0$$
-
-The sine function is the unique smooth function $f:\mathbb{C} \rightarrow \mathbb{C}$ satisfying:
-
-$$f''(z) = -f(z), \qquad f(0) = 0, \,f'(0) = 1$$
-This means that the exponential function is an eigenvector of complex differentiation (with eigenvalue $1$), and that sine and cosine are eigenvectors of twice-iterated complex differentiation (with eigenvalue $-1$).
-That notwithstanding, Euclidean geometry can certainly be given quite an abstract treatment; see my question here, and in particular, be sure to check out Audin's Geometry. This book easily deserves a 5-star rating. But I'd say the trigonometric functions "come first" so to speak, and they exist independent of geometry.<|endoftext|>
-TITLE: Show that there is no subgroup of $S_n$ of order $(n-1)!/n$.
-QUESTION [10 upvotes]: I am trying to show that $S_n$ does not have a subgroup of order $(n-1)!/n$ for any $n$ other than $6$. I have checked it to be true up to $S_{13}$. Any ideas?
-Of course, if $n$ is prime then that order isn't an integer, so obviously there can't be a subgroup of that order. But what about composite $n$?
-
-REPLY [4 votes]: This conjecture is true; here is a sketch of a proof.
-You are looking for a group $G$ of index $n^2$ in $S_n$.
-$G$ is either
-(1) intransitive,
-(2) transitive but imprimitive,
-(3) primitive.
-In the first case, we must have $G\leq S_a \times S_{n-a}$ for some $1\leq a\leq n/2$.
But this implies that $\binom{n}{a}$ divides $n^2$. It is not hard to see that this implies that $a=1$, hence $G$ is contained in $S_{n-1}$. We are now asking about a subgroup of index $n$ in $S_{n-1}$, and we can repeat the whole argument here to find that the only option is $F_{20}$ in $S_5$.
-Suppose now that $G$ is transitive but imprimitive, so $G\leq S_{n/a}^{a} \rtimes S_a$, for some divisor $1<a<n$ of $n$.<|endoftext|>
-TITLE: Finding the intersections between $y = e^x$ and $y = x + 2$ algebraically?
-QUESTION [5 upvotes]: In trying to find the intersections between $y = e^x$ and $y = x + 2$ in terms of $x$, I came up with the equation,
-$e^x = x + 2$
-and subsequently,
-$x = \ln(x+2)$.
-Beyond that point, I am stumped. I am able to solve the equation numerically using a calculator, Newton's method, etc., but need to solve it algebraically. I have done a good deal of research on how to solve this type of problem, but have been unable to find any problems similar enough to be of help.
-Thanks to the StackExchange community for your help. I love your sites and have been happy to find answers to hundreds of my own questions on them.
-
-REPLY [3 votes]: Here again appears the beautiful Lambert function: rewrite $$e^x=x+2\implies e^{x+2}=e^2(x+2)\implies e^y=e^2 y$$ and the solutions are given by $$x_1=-W\left(-\frac{1}{e^2}\right)-2$$ $$x_2=-W_{-1}\left(-\frac{1}{e^2}\right)-2$$ In fact, keep in mind that any equation which can be written as $A+Bx+C\log(D+Ex)=0$ has solutions in terms of the Lambert function.
-The Wikipedia page gives series approximations.
-There is no other closed form for this equation. If you cannot use the Lambert function, only numerical methods will provide the solutions.<|endoftext|>
-TITLE: Consecutive smooth triplets
-QUESTION [5 upvotes]: Consecutive $n$-smooth triplets with no common factors are possible. The sequence 64, 120, 324, 2024, 17576, 248676, 314432, 6571774, 7496644, 116026274, 196512876 isn't in OEIS, but its terms do appear in Koninck's Those Fascinating Numbers.
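The listed terms are easy to sanity-check by factoring their neighbours. A quick sketch (entirely my own illustration, not part of the original question; plain trial division is adequate at this size):

```python
def largest_prime_factor(n):
    # Plain trial division; adequate for the small terms quoted above.
    p, best = 2, 1
    while p * p <= n:
        while n % p == 0:
            best, n = max(best, p), n // p
        p += 1
    return max(best, n) if n > 1 else best

# Smoothness bound of each triplet k-1, k, k+1 for the first few terms.
for k in [64, 120, 324, 2024, 17576]:
    print(k, max(largest_prime_factor(k + d) for d in (-1, 0, 1)))
```

For $k=64$, for example, this reports $13$ (coming from $65=5\cdot13$, with $63=3^2\cdot7$ and $64=2^6$), so the whole triplet is $13$-smooth.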
For each of these, $k-1, k, k+1$ all have a rather low maximum prime factor. For consecutive smooth pairs, Størmer's theorem can be used. That's how I verified these up to 97-smooth, but that took my current program a week. Values for 101 and 113 are unverified. - -If the "no common factors" is dropped, the middle-odd $n$-smooth triples always seem to be higher than the middle-even $n$-smooth triples. - -Can anyone extend, improve, or correct these results? - -REPLY [2 votes]: I found a better result for 113: -1129770949: 41 43 53 107 113 -1129770950: 2 5 5 7 7 11 11 37 103 -1129770951: 3 19 29 67 101 101 - -You can use a fairly simple sieve algorithm to find smallish solutions very quickly if you use a reasonable programming language. My program finds all primes less than the smoothness bound and computes $v_p = \lfloor 4000 \log_{10}(p) \rfloor$ for each such prime. If you are looking for solutions in the interval $[L, L+N]$, add up $v_p$ for each small prime divisor of each number in the interval. Dump out all consecutive triplets such that the sum of relevant $v_p$'s is larger than $\lfloor 4000 \log_{10}(L) \rfloor - 4000$. Postprocess the results manually to weed out any false positives. -EDIT: I left said sieve running while I was at work today. For the common-factors case, the following solution may also be interesting: -138982582998: 2 3 3 3 3 3 3 7 29 47 97 103 -138982582999: 13 31 37 43 43 71 71 -138982583000: 2 2 2 5 5 5 23 23 59 61 73<|endoftext|> -TITLE: What is functional analysis in simple words? -QUESTION [38 upvotes]: To begin with , I am only a secondary school student (17yo) but I am very interested in higher mathematics. However we only learn so little in my school (only single variable calculus and basic linear algebra). In the past I have self-learnt some abstract algebra and very basic topology by finding online resources, but I can never get deep into those subjects. 
-When I read about functional analysis, I encounter objects like function spaces and infinite-dimensional spaces which I can never understand. What exactly does it mean to be a function space, and how do you define a metric on one? I know it is hard and requires much real analysis. Can anyone give me some easy ideas and introductions?
-
-REPLY [3 votes]: Algebraic analysis is finding an unknown [function] in terms of an infinite polynomial. The unknown function is specified by some kind of differential equation.
-Functional analysis is finding an unknown in terms of an infinite series of functions, the simplest example being 'Fourier analysis', which gives a general solution of the empty-space wave equation.
-Why would we do this? Surely it involves much more computation, especially if the end result has to be evaluated numerically, as needed in experimental physics and engineering problems? Remember, an infinite series of transcendental functions, themselves with infinite representations, is slow to evaluate numerically.
-In some cases, the basis functions may be easy - such as Legendre polynomials, which often appear in quantum theory; other times difficult functions such as Bessel functions, which aren't well understood by even many undergraduate level mathematicians.
-Even in the simplest case, Fourier series, in the early days pure mathematicians were unsure whether a Fourier decomposition of an unknown function was an accurate representation of it, and under what conditions - what ranges of the dependent variables were safe? And how many terms should be used for a specified degree of accuracy?
-Later, the mathematicians Sturm and Liouville showed that for ALL second order linear differential equations, which are the vast majority used in science and engineering, the basis functions (also known as eigenfunctions) are always orthogonal and linearly independent, and the functional decomposition of any solution is a unique and accurate representation of the true solution.
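The Fourier case is easy to experiment with numerically. A toy sketch (entirely my own illustration, not from the answer: it takes $f(x)=x$ on $(-\pi,\pi)$, computes sine coefficients with the trapezoid rule, and compares them with the exact values $b_n = 2(-1)^{n+1}/n$):

```python
import math

def fourier_sine_coeff(f, n, samples=2000):
    # b_n = (1/pi) * integral over (-pi, pi) of f(x) sin(n x) dx, trapezoid rule
    h = 2 * math.pi / samples
    total = 0.0
    for i in range(samples + 1):
        x = -math.pi + i * h
        w = 0.5 if i in (0, samples) else 1.0  # half weight at the endpoints
        total += w * f(x) * math.sin(n * x)
    return total * h / math.pi

coeffs = [fourier_sine_coeff(lambda x: x, n) for n in range(1, 6)]
exact = [2 * (-1) ** (n + 1) / n for n in range(1, 6)]
for b, e in zip(coeffs, exact):
    print(round(b, 4), e)
```

The computed coefficients agree closely with the exact ones, and summing $\sum_n b_n \sin(nx)$ over more terms reproduces $f$ increasingly well away from the endpoints.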
-Further, eigenfunctions always have recurrence relations, a good example being the Tchebyshef polynomials, which help in further analysis, both algebraic and numerical.<|endoftext|>
-TITLE: Proof that the empty set is a relation
-QUESTION [23 upvotes]: In the book Naive Set Theory, Halmos mentions that "The least exciting relation is the empty one." and proves that the empty set is a set of ordered pairs because there is no element of the empty set that is not an ordered pair. Since the empty set is a set of ordered pairs, it follows that it is a relation.
-I understand this line of reasoning but couldn't I use that same line of reasoning to prove that the empty set is a set of singletons? And since the empty set is a set of singletons (because it contains no elements which are not singletons) it is not a relation (because a relation is a set of ordered pairs, not singletons). Why is this reasoning invalid?
-
-REPLY [2 votes]: A set of singletons can be a relation as long as the set of singletons doesn't actually have any singletons or anything else that isn't an ordered pair. The empty set is a set of singletons that doesn't have any singletons and doesn't have anything that isn't an ordered pair. So the empty set is a set of singletons that is a relation. It's the only set of singletons that is.<|endoftext|>
-TITLE: If $A^3=A^2$ then $A^2$ is diagonalizable.
-QUESTION [5 upvotes]: Let $A\in \mathbb{k}^{n\times n} $.
-Prove that if $A^3=A^2$ then $A^2$ is diagonalizable.
-Could you give me any hints on how to prove it? I can't use the minimal polynomial, since we haven't seen it in class.
-Thanks.
-
-REPLY [2 votes]: Let $A^2=B$, then $B^2 =B$. (In other words, $B$ is a projection.) Now, if $y=B(x)\in\mathop{\mathrm{im}} B$, then
-$$y = B(x) = B(B(x)) = B(y).$$
-So $y$ is an eigenvector corresponding to the eigenvalue $1$ of $B$. Also, if $y\in \ker B$, then $y$ is an eigenvector corresponding to the eigenvalue $0$.
-On the other hand, for any $x\in k^{n}$ we have
-$$B(x) = B(B(x)),\tag{1}$$
-so $x- B(x)\in \ker(B)$. Thus,
-$$k^{n} = \mathop{\mathrm{im}} B+\ker B.$$
-Lastly, if $x \in \ker B \cap \mathop{\mathrm{im}} B$, then
-$$x=B(x) = B(B(x)) = 0.$$
-So
-$$k^{n} = \mathop{\mathrm{im}} B \oplus \ker B.$$
-Therefore, $B=A^2$ is diagonalizable.<|endoftext|>
-TITLE: If $x>0$ is such that $x^{n}+\frac{1}{x^n}$ and $x^{n+1}+\frac{1}{x^{n+1}}\in \mathbb{Q} \implies x+\frac{1}{x}\in\mathbb{Q}$?
-QUESTION [5 upvotes]: Let $n \in \mathbb{N}$. If $x>0$ is such that $x^{n}+\frac{1}{x^n}$ and $x^{n+1}+\frac{1}{x^{n+1}}\in \mathbb{Q} \implies x+\frac{1}{x}\in\mathbb{Q}$?
-Any thoughts on how to solve the above problem? Working out the case $n=2$ suggests that the result is true, but I am not sure if one can generalize.
-
-REPLY [3 votes]: Proffering the following argument based on elementary properties of algebraic numbers.
-From $x^n+x^{-n}=q_1$ it follows that $x$ satisfies the polynomial equation $x^{2n}-q_1x^n+1=0$. Furthermore, it is obvious that all the zeros of this polynomial are $x^{\epsilon}\zeta_n^k$, where $\epsilon=\pm1, \zeta_n=e^{2\pi i/n}$ and $k=0,1,2,\ldots,n-1$.
-Therefore the zeros of the minimal polynomial $m(T)$ of $x$ (over $\Bbb{Q}$) are among those numbers.
-But, from $x^{n+1}+x^{-(n+1)}=q_2$ it similarly follows that the zeros of the minimal polynomial of $x$ are among the numbers $x^{\epsilon}\zeta_{n+1}^\ell, \ell=0,1,2,\ldots,n$.
-Therefore the zeros of $m(T)$ are either just $x$, or both $x$ and $x^{-1}$.
-In the former case $x$ is rational, and the claim is immediate. In the latter case $x+\dfrac1x$ is rational because it is, up to sign, the coefficient of the linear term of the minimal polynomial $m(T)=(T-x)(T-1/x)\in\Bbb{Q}[T]$.
-
-This argument also proves that if $x^n+x^{-n}$ and
-$x^{n+1}+x^{-(n+1)}$ are both integers, then $x+1/x$ must also be an integer.
-This is because in this case $x$ is an algebraic integer, and hence the coefficients of $m(T)$ are all integers. This old trick then implies that $x^k+x^{-k}\in\Bbb{Z}$ for all $k\in\Bbb{Z}$.<|endoftext|>
-TITLE: If $A$ is diagonal and positive and $B$ is skew-hermitian, does $AB$ have only pure imaginary eigenvalues?
-QUESTION [5 upvotes]: Let $A$ be diagonal with strictly positive (real) entries, and let $B$ be skew hermitian. Can it be shown that the eigenvalues of $AB$ are pure imaginary?
-I suspect this also holds in the more general case that $A$ is symmetric positive definite.
-
-REPLY [3 votes]: Your intuition is right. When $A$ is positive definite and $B$ is skew-Hermitian, $AB$ is similar to $A^{1/2}BA^{1/2}$ (as $A^{1/2}BA^{1/2}=A^{-1/2}(AB)A^{1/2}$), which is skew-Hermitian. Hence it has a purely imaginary spectrum.<|endoftext|>
-TITLE: Equivalence of forcing notions from dense embedding between them
-QUESTION [5 upvotes]: In general I want to prove that if $\mathbb{P}=\left(P,\leq\right)$ and $\mathbb{Q}=\left(Q,\leq\right)$ are forcing notions and there is a dense embedding $h:P\longrightarrow Q$, then $\mathbb{Q\approx P}$.
-Being more specific, if $G\subseteq P$ is $\mathbb{P}$-generic over $\mathbf{V}$, then the set $H=\left\{ q\in Q:\exists p\in G\left(q\leq h\left(p\right)\right)\right\}$ is $\mathbb{Q}$-generic over $\mathbf{V}$ and $\mathbf{V}\left[G\right]=\mathbf{V}\left[H\right]$. Conversely, if a set $H\subseteq Q$ is $\mathbb{Q}$-generic over $\mathbf{V}$, then the set $G=\left\{ p\in P:h\left(p\right)\in H\right\}$ is $\mathbb{P}$-generic over $\mathbf{V}$ and $\mathbf{V}\left[H\right]=\mathbf{V}\left[G\right]$.
-The first parts, about being $\mathbb{Q}$-generic, I have an idea how to prove, but with $\mathbf{V}[G]=\mathbf{V}[H]$ I have run into much trouble without any result.
-P.S. Here a forcing is a pre-order with a minimal element.
-
-REPLY [3 votes]: I assume that you only need help to prove $V[G]=V[H]$ (please correct me if I'm wrong).
-The following Theorem is key:
-Let $\mathbb P$ be a forcing and let $G$ be $\mathbb P$-generic. Then $V[G]$ is the minimal transitive (class) model $M$ of $\operatorname{ZFC}$ such that $\{G\} \cup V \subseteq M$.
-The proof of this is basically trivial. Since $\{G\} \cup V \subseteq M$, we have $\tau \in M$ for every $\mathbb P$-name $\tau$. Furthermore $M$ satisfies $\operatorname{ZFC}$ and $G \in M$. This allows us to build $\tau^G$ in $M$ (where $\tau^G$ is the $G$-interpretation of $\tau$). Thus $\tau^G \in M$ for every $\mathbb P$-name $\tau$ and therefore $V[G] \subseteq M$.
-Now let $G$ be $\mathbb P$-generic and let $H := \{q \in \mathbb Q \mid \exists p \in G \colon h(p) \le q \}$. I'll prove that $V[G] \subseteq V[H]$ and leave the other cases as an exercise for you (should you get stuck, feel free to ask for additional help).
-By the Theorem above it suffices to show that $G \in V[H]$: Work in $V[H]$ and let $G' := \{ p \in \mathbb P \mid \exists q \in H \colon h(p) \ge q \}$. Clearly $G \subseteq G'$. Conversely, let $p' \in G'$. In $V$, consider the set $D := \{ p \in \mathbb P \mid p \perp p' \vee p' \le p \}$. This is a dense set in $\mathbb P$ and since $G$ is $\mathbb P$-generic, we may fix some $p \in D \cap G$. Suppose that $p \perp p'$. Then $h(p) \perp h(p')$, contradicting the fact that $h(p),h(p') \in H$.
-Thus $p' \le p$. Since $p \in G$ and $G$ is a filter, this yields $p' \in G$ and thus $G' \subseteq G$. Hence $G = G' \in V[H]$.<|endoftext|>
-TITLE: prove $RP^3\cong SO(3)$
-QUESTION [8 upvotes]: Suppose $RP^3$ is the real 3-dimensional projective space; prove the rotation group $SO(3)$ is homeomorphic to $RP^3$.
-
-REPLY [10 votes]: The following proposition is incredibly useful here:
-Proposition
-The homomorphism $R: S^{3} \rightarrow SO(3)$ is surjective with $\ker{R}=\{\pm 1\}$ equal to the centre of $S^{3}$.
-In particular, every element of $SO(3)$ is a matrix of the form
-\begin{pmatrix}
- a^{2}+b^{2}-c^{2}-d^{2} & 2(bc-ad) & 2(bd + ac)\\
- 2(bc+ad) & a^{2}-b^{2}+c^{2}-d^{2} & 2(cd-ab) \\
-2(bd-ac) & 2(cd+ab) & a^{2}-b^{2}-c^{2}+d^{2}
-\end{pmatrix}
-for some $(a, b, c, d) \in \mathbb{R}^{4}$ with $a^{2}+b^{2}+c^{2}+d^{2}=1$.
-The cosets of $\{\pm1\}$ in $S^{3}$ are simply pairs of antipodal points. Each pair determines a line in $\mathbb{R}^{4}$. The proposition above then shows that $SO(3)\cong\mathbb{RP}^{3}$ as topological spaces.
-Aside
-You can also regard $\mathbb{RP}^{3}$ as the quotient of a solid ball in $\mathbb{R}^{3}$ by identifying antipodal points on the boundary. Every element in $SO(3)$ is a rotation about some axis by some angle $\theta \in [-\pi, \pi]$.<|endoftext|>
-TITLE: Integrate $\int \frac{1}{1+\arctan(x)}dx$
-QUESTION [11 upvotes]: Consider:
-$$\int \frac{1}{1+\arctan(x)}dx$$
-I have attempted with making $x=\tan(u)$ and $\frac{dx}{du}=\sec^2(u)$
-then ended up with:
-$$\int \frac{\sec^2(u)}{1+u}du$$
-$$\int \frac{1+\tan^2(u)}{1+u}du$$
-$$\int \frac{1}{1+u}du+\int \frac{\tan^2(u)}{1+u}du$$
-$$\ln\left|1+\arctan(x)\right|+\int \frac{\tan^2(u)}{1+u}du$$
-And I do not know what to do from there on.
-
-REPLY [6 votes]: There exists no solution in terms of standard mathematical functions. This can be proven using the Risch algorithm. This is not something I recommend you do by hand.
-The easiest way to check whether an integral has such a solution is to ask Wolfram Alpha, which uses this algorithm (or a refinement of it) and is guaranteed to find a solution if one exists.
-Otherwise, it displays:
-
-(no result found in terms of standard mathematical functions)
-
-To back up my claim:
-
-For indefinite integrals, an extended version of the Risch algorithm
- is used whenever both the integrand and integral can be expressed in
- terms of elementary functions, exponential integral functions,
- polylogarithms, and other related functions.<|endoftext|>
-TITLE: Recovering a quadratic polynomial from three values using calculus
-QUESTION [27 upvotes]: I'm asked to solve this using calculus:
-
-Let $$ f(x) = ax^2 + bx +c .$$ If $ f(1) = 3 $, $f(2) = 7$, $f(3) = 13$, then find $a$, $b$, and $f(0)$.
-
-I know I can solve this by solving three equations simultaneously. And I can also solve this using the Gauss-Jordan or Gaussian elimination method by writing the augmented matrix. But I'm wondering whether there is any other method to solve this.
-Solving by any method it turns out that $a = b = c = 1$.
-
-REPLY [2 votes]: This caught my eye:
-$$
-\begin{eqnarray}
-f(1)=&3&=2\cdot1+1 \\
-f(2)=&7&=3\cdot2+1 \\
-f(3)=&13&=4\cdot3+1 \\
-\end{eqnarray}
-$$
-So, $g(x)=f(x)-((x+1)x+1)$ is a polynomial of degree at most $2$ that has $3$ zeros and so must be the zero polynomial.
-Therefore, $f(x)=(x+1)x+1=x^2+x+1$.<|endoftext|>
-TITLE: An example of prime ideal $P$ in an integral domain such that $\bigcap_{n=1}^{\infty}P^n$ is not prime
-QUESTION [10 upvotes]: I am looking for an example of a prime ideal $P$ in an integral domain such that the ideal $\bigcap_{n=1}^{\infty}P^n$ is not a prime ideal.
-This is a followup to this question where the ring was not assumed to be an integral domain.
-
-REPLY [5 votes]: Let $k$ be any field. Set
-$$R = \bigcup_{n=1}^{\infty} k\left[x,\ y,\ x^{1/n} y^{1/n} \right].$$
-Each one of the terms in the union is a domain, so the rising union is also a domain. Let $P$ be the ideal $\langle x, y, x y, x^{1/2} y^{1/2}, x^{1/3} y^{1/3}, x^{1/4} y^{1/4}, \cdots \rangle$. Then $R/P = k$, so $P$ is prime (and even maximal).
-We have $P^k = \langle x^k , y^k, x y, x^{1/2} y^{1/2}, x^{1/3} y^{1/3}, x^{1/4} y^{1/4}, \cdots \rangle$ and $\bigcap_{k=1}^{\infty} P^k = \langle x y, x^{1/2} y^{1/2}, x^{1/3} y^{1/3}, x^{1/4} y^{1/4}, \cdots \rangle$. So $R/\bigcap_{k=1}^{\infty} P^k = k[x,y]/(xy)$ which is not a domain, and we see that $\bigcap_{k=1}^{\infty} P^k$ is not prime.<|endoftext|>
-TITLE: Ellipsoid but not quite
-QUESTION [13 upvotes]: I have an ellipsoid centered at the origin. Assume $a,b,c$ are expressed in millimeters. Say I want to cover it with a uniform coat/layer that is $d$ millimeters thick (uniformly).
-I just realized that in the general case, the new body/solid is not an ellipsoid. I wonder:
-
-How can I calculate the volume of the new body?
-
-What is the equation of its surface?
-
-
-I guess it's something that can be calculated via integrals but how exactly, I don't know.
-Also, I am thinking that this operation can be applied to any other well-known solid (adding a uniform coat/layer around it). Is there a general approach for finding the volume of the new body (the one that is formed after adding the layer)?
-
-REPLY [3 votes]: Let $(x,y,z)=(a\sin u \cos v, b\sin u \sin v,c\cos u)$ on the ellipsoid $\displaystyle \frac{x^2}{a^2}+\frac{y^2}{b^2}+\frac{z^2}{c^2}=1$, then the unit normal vector is
-$$\mathbf{n}=
-\frac{\displaystyle \left(\frac{x}{a^2},\frac{y}{b^2},\frac{z}{c^2} \right)}
- {\displaystyle \sqrt{\frac{x^2}{a^4}+\frac{y^2}{b^4}+\frac{z^2}{c^4}}}$$
-Then the new surface will have coordinates
-$$(x',y',z')=(x,y,z)+d\mathbf{n}$$
-which is no longer a quadric.
-In particular, if $d <-\frac{1}{\kappa} <0$ where $\kappa$ is one of the principal curvatures, then the inner surface will have self-intersection.
-If we try reducing the dimension from three (ellipsoid) to two (ellipse) and setting $a=1.5,b=1$, the inward unit normal vectors won't all point at the straight line (i.e.
the degenerate ellipse $\displaystyle \frac{x^{2}}{0.5^{2}}+\frac{y^{2}}{0^{2}}=1$). - -And also the discrepancy of another case<|endoftext|> -TITLE: Probability of picking 4 red balls -QUESTION [7 upvotes]: There are $6$ red balls and $5$ blue balls in a jar. You pick any $4$ balls without looking in the jar. What is the probability that you would have $4$ red balls in hand? - -Note that you're picking up all the $4$ balls in one single attempt and not one-by-one. Also, balls of the same colour are to be considered identical. -I tried this and got the answer as $4/11$, which was wrong. I can't figure out what the cases (sample space) would be. - -REPLY [11 votes]: The answer is easily given by considering permutations. Indeed the probability is: -$$P =\frac{7!}{11!} \cdot \frac{6!}{2!}=\frac{1}{22}$$ -where - -$11$ is the total number of balls; -$7$ the number of balls remaining in the jar; -$6$ the number of red balls; -$2$ the number of red balls remaining in the jar.<|endoftext|> -TITLE: Is noetherianity really about cardinals or ordinals? -QUESTION [6 upvotes]: If $\kappa$ is any cardinal, then one may define a "$\kappa$-Noetherian" ring as a ring such that, for any module having a generating set $S$ with $|S|< \kappa$, any submodule also has such a generating set. Then a Noetherian ring is just an $\aleph_0$-Noetherian ring with this definition (and a principal ring is a $2$-Noetherian domain, but I think that probably only infinite cardinals are really meaningful in this setup). If you prefer non-commutative rings, you may add "left" and "right" wherever needed. -Question 1a: Does such a notion exist somewhere? Is it interesting, or even useful? -My main question is based on the following observation: if on the other hand you want to generalize the ascending chain condition, then you will naturally get a condition on ordinals, since it is formulated in terms of an order.
Namely, you can ask whether there are chains of ideals having the order structure of a given ordinal: if $\alpha$ is an ordinal, a ring $R$ will be said to satisfy $\alpha$-$(AC)$ if there is no strictly increasing function $\alpha \to I(R)$, where $I(R)$ is the set of ideals, given the inclusion order. Then a Noetherian ring is just a ring satisfying $\omega$-$(AC)$. Again, you can add "left" or "right" if you want to. -Question 1b: Same as Question 1a, for this other notion. -So then there are two quite objectively natural notions that generalize Noetherian rings, but one is based on cardinals, and the other on ordinals. There is a sort of "coincidence" in the fact that $\omega$ and $\aleph_0$ are pretty much equivalent, in the sense that finite ordinals and finite cardinals are really the same thing (same objects, same operations, etc.). But this will no longer be true for higher cardinals and ordinals. -Question 2: Should one be considered the right one? If so, which one and why? - -REPLY [5 votes]: The thing about chains is that they have cofinal "cardinal chains". Namely, if $\alpha$ is any countable limit ordinal, then there is an $\omega$-sequence which is unbounded in $\alpha$. -If $\alpha$ is a limit ordinal of cardinality $\aleph_1$, then either there is an unbounded chain of order type $\omega$ or there is an unbounded chain of order type $\omega_1$ (but never both when the axiom of choice is assumed). -And so on. If $\alpha$ is a limit ordinal of cardinality $\kappa$, then there is a cardinal $\lambda\leq\kappa$ and an unbounded chain of order type $\lambda$. -So even if you talk about countable order type, it suffices to talk about $\omega$. And so on. So there is really no confusion between the ordinal and cardinal versions.<|endoftext|> -TITLE: Proof that $[\Bbb{Q}(\sqrt{q_1},\dots,\sqrt{q_r}):\Bbb{Q}]=2^r$ -QUESTION [7 upvotes]: Let $2\leq q_1$ -TITLE: What's the difference between a Nash, Correlated, and Extreme equilibrium?
-QUESTION [6 upvotes]: As the title states, what's the difference? As I understand it: - -The Nash Equilibrium (NE) is a solution concept in non-cooperative games where no player has incentive to unilaterally deviate from a given strategy. -The Correlated Equilibrium (CE) is a generalization of the NE where players have no incentive to unilaterally deviate given that all other players play according to a public signal. -The Extreme Equilibrium (EE): I'm having a hard time understanding this concept. There exist several algorithms for enumerating all EEs (e.g. the EEE algorithm and the improved EEE algorithm), but what exactly are they and how are they different from NEs or CEs? - -REPLY [3 votes]: An extreme equilibrium $(x,y)$, where $x$ and $y$ are the mixed strategies of players 1 and 2, is a Nash equilibrium where, for both players, the mixed strategies cannot be described as convex combinations of other mixed strategies that form equilibria. -Fact: There are always finitely many extreme equilibria. -For a mixed strategy $x$, let the support of $x$ be defined as the set of pure strategies that $x$ uses with positive probability, i.e., -$$\text{support}(x) := \{ i : x_i > 0 \}.$$ -A game is called non-degenerate if for all mixed strategies $x$, if $|\text{support}(x)| = k$, then the number of best responses against $x$ is at most $k$. -Fact: Every non-degenerate game has only extreme equilibria. -For a simple example of a game with non-extreme equilibria, consider a $2\times 2$ game where both players get $0$ for all strategy profiles. In this case, all four pure profiles are extreme equilibria and all strict mixtures are non-extreme Nash equilibria. For a less trivial example, consider any $2 \times 2$ bimatrix game where there is no strict dominance for either player but exactly one of the players has a weakly dominant strategy. Then the game is degenerate and will have infinitely many equilibria. As an example, consider the following game.
-$$A = \left( \begin{array}{cc} 1 & 0 \\ 0 & 1 \\ \end{array} \right) \quad B = \left( \begin{array}{cc} 1 & 1 \\ 1 & 0 \\ \end{array} \right)$$ -The game has exactly two extreme equilibria. Here is the output from the online game solver http://banach.lse.ac.uk, which shows these two extreme equilibria: - -2 x 2 Payoff matrix A: - - 1 0 - 0 1 - -2 x 2 Payoff matrix B: - - 1 1 - 1 0 - -EE = Extreme Equilibrium, EP = Expected Payoff - -Decimal Output - - EE 1 P1: (1) 1.0 0.0 EP= 1.0 P2: (1) 1.0 0.0 EP= 1.0 - EE 2 P1: (1) 1.0 0.0 EP= 0.5 P2: (2) 0.5 0.5 EP= 1.0 - -Rational Output - - EE 1 P1: (1) 1 0 EP= 1 P2: (1) 1 0 EP= 1 - EE 2 P1: (1) 1 0 EP= 1/2 P2: (2) 1/2 1/2 EP= 1 - -Connected component 1: -{1} x {1, 2} - -The final line of this output shows that against the unique equilibrium strategy of player 1 (top row), player 2 can play any convex combination of $(1,0)$ and $(1/2,1/2)$. -For a thorough technical exposition, which covers both finding extreme equilibria and then from these all equilibria of bimatrix games, see, e.g., -David Avis, Gabriel D. Rosenberg, Rahul Savani, and Bernhard von Stengel (2010). -Enumeration of Nash equilibria for two-player games. -Economic Theory, 42:9–37. -On the relationship between correlated equilibria and Nash equilibria: -Fact: The set of correlated equilibrium -distributions of an $n$-player noncooperative game is a convex -polytope that includes all the Nash equilibrium distributions. In fact, the Nash equilibria all lie on the boundary of the polytope. -See: -Robert Nau, Sabrina Gomez Canovas, and Pierre Hansen (2004). -On the geometry of Nash equilibria and correlated equilibria. International Journal of Game Theory 32: 443-453.
-QUESTION [16 upvotes]: In his free online book, "Neural Networks and Deep Learning", Michael Nielsen proposes to prove the following result: -If $C$ is a cost function which depends on $v_{1}, v_{2}, ..., v_{n}$, he states that we make a move in the $\Delta v$ direction to decrease $C$ as much as possible, and that's equivalent to minimizing $\Delta C \approx \nabla C \cdot \Delta v$. So if $\lvert\lvert\Delta v\rvert\rvert = \epsilon$ for a small $\epsilon$, it can be proved that the choice of $\Delta v$ that minimizes $\Delta C \approx \nabla C \cdot \Delta v$ is $\Delta v = -\eta \nabla C$ where $\eta = \epsilon / \lvert\lvert \nabla C \rvert\rvert$. And he suggests using the Cauchy-Schwarz inequality to prove this. -Ok, so what I've done is to minimize with respect to $\Delta v$ an equivalent function $0 = \min_{\Delta v} \lvert \nabla C \cdot \Delta v \rvert^{2} \leq \min_{\Delta v}\lvert\lvert \nabla C \rvert\rvert^{2}\lvert\lvert \Delta v\rvert\rvert^{2}$ (using the C-S inequality). I would say this is the correct path to prove the result but I'm stuck and can't arrive at the same result. -Thanks. - -REPLY [2 votes]: Suppose, for the sake of contradiction, that some $\Delta v$ with $\lVert\Delta v\rVert = \epsilon$ does strictly better, i.e. that $$\nabla C \cdot \Delta v \;<\; \nabla C \cdot \left(-\frac{\epsilon\,\nabla C}{\lVert\nabla C\rVert}\right) = -\epsilon\,\lVert\nabla C\rVert.$$ The right side is $\leq 0$, and so the left side is too. Changing signs, $$-\nabla C \cdot \Delta v \;>\; \epsilon\,\lVert\nabla C\rVert \;\geq\; 0,$$ where now both sides are $\geq 0$, so taking absolute values gives $$\lvert \nabla C \cdot \Delta v \rvert \;>\; \epsilon\,\lVert\nabla C\rVert.$$ Since $\lVert\Delta v\rVert = \epsilon$ by hypothesis, the right side equals $\lVert \nabla C \rVert\,\lVert \Delta v \rVert$, hence $$\lvert \nabla C \cdot \Delta v \rvert \;>\; \lVert \nabla C \rVert\,\lVert \Delta v \rVert.$$ This is absurd, because the Cauchy–Schwarz inequality gives $\lvert \nabla C \cdot \Delta v \rvert \leq \lVert \nabla C \rVert\,\lVert \Delta v \rVert$.<|endoftext|> -TITLE: Is it true that $\mathbb{F}_{1}^{\ast} \equiv \mathbb{F}_{2}^{\ast}$ implies $\mathbb{F}_{1} \equiv \mathbb{F}_{2}$? -QUESTION [8 upvotes]: Let $\mathbb{F}_{1}$ and $\mathbb{F}_{2}$ be fields, and let $\mathbb{F}_{1}^{\ast}$ and $\mathbb{F}_{2}^{\ast}$ denote the corresponding groups of units.
If $\mathbb{F}_{1}$ and $\mathbb{F}_{2}$ are not elementarily equivalent in the language of rings, does it follow that $\mathbb{F}_{1}^{\ast}$ and $\mathbb{F}_{2}^{\ast}$ are not elementarily equivalent in the language of groups? For example, $\mathbb{R} \not\equiv \mathbb{C}$, and $\mathbb{R}^{\ast} \not\equiv \mathbb{C}^{\ast}$ since (for example) the sentence $$ \exists x \ x^4 = 1, x^2 \neq 1 $$ holds in $\mathbb{C}^{\ast}$ and does not hold in $\mathbb{R}^{\ast}$. Similarly, $\mathbb{Q} \not\equiv \mathbb{R}$, and $\mathbb{Q}^{\ast} \not\equiv \mathbb{R}^{\ast}$ since (for example) the sentence $$ \exists x \ x^2 = 1, x \neq 1, \forall y \ \exists z \ (y = z^2) \vee (xy = z^2)$$ holds in $\mathbb{R}^{\ast}$, but not in $\mathbb{Q}^{\ast}$. -It is natural to consider the contrapositive of this problem: is it true that $\mathbb{F}_{1}^{\ast} \equiv \mathbb{F}_{2}^{\ast} \Longrightarrow \mathbb{F}_{1} \equiv \mathbb{F}_{2}$? I suspect that this is not in general true, mainly because there are many well-known examples of groups which are elementarily equivalent but not isomorphic. So it seems plausible that one may use groups $G_{1}$ and $G_{2}$ which are known to satisfy $G_{1} \equiv G_{2}$ and $G_{1} \not\cong G_{2}$ to construct elementarily inequivalent fields $\mathbb{F}_{1}$ and $\mathbb{F}_{2}$ such that the underlying multiplicative group of $\mathbb{F}_{1}$ (resp. $\mathbb{F}_{2}$) is $G_{1}$ (resp. $G_{2}$). -For example, it is known that given an abelian group $A$, $A$ has unbounded exponent if and only if $A \equiv A \oplus \mathbb{Q}$ (Lemma A.2.4. in Hodges' "Model Theory"). Therefore, $\mathbb{Q} \oplus \mathbb{Q} \equiv \mathbb{Q}$. But is the additive group $\mathbb{Q}$ the underlying multiplicative group of a field? Is $\mathbb{Q} \oplus \mathbb{Q}$ the group of units of a field? -As early as 1960, Laszlo Fuchs asked which abelian groups can be realized as the underlying multiplicative group of a field. This question remains largely unanswered. 
However, it is known that: -Theorem: A nontrivial torsion-free divisible abelian group $G$ has infinite rank if and only if $G$ can be realized as the underlying multiplicative group of a field (see "Divisible Multiplicative Groups of Fields" by Greg Oman). -Therefore, there exist fields $\mathbb{F}_{1}$ and $\mathbb{F}_{2}$ such that $\mathbb{F}_{1}^{\ast} = \mathbb{Q}$ and $\mathbb{F}_{2}^{\ast} = \mathbb{Q} \oplus \mathbb{Q}$. But it is not obvious whether these fields are elementarily equivalent. - -REPLY [15 votes]: Take $\mathbb{Q}$ and $\mathbb{Q}(\sqrt{2})$. Their unit groups are isomorphic, not just elementarily equivalent, by unique prime factorization, but $\mathbb{Q}$ and $\mathbb{Q}(\sqrt{2})$ are not elementarily equivalent because there exists a solution to $x^2 = 2$ in the latter but not the former.<|endoftext|> -TITLE: How can we prove that a quadratic equation has at most 2 roots? -QUESTION [5 upvotes]: A quadratic equation can be factored into two factors containing $x$, but how can we prove that there are no other sets of different factors yielding OTHER VALUES OF $X$? - -REPLY [2 votes]: Let $\,f(x)\,$ be a polynomial of degree $2$ with coefficients in a field or domain $\,F$ (e.g. $\Bbb Q,\Bbb R,\Bbb C)$ and suppose that $\,f\,$ has $\,2\,$ distinct roots $\,a,b.\,$ By the Bifactor Theorem below we deduce that $\,f(x) = c(x\!-\!a)(x\!-\!b)\,$ for some $\,c\in F.\,$ Thus if $\,d\neq a,b\,$ then $\,f(d) = c(d\!-\!a)(d\!-\!b)\ne 0\,$ since each factor is $\ne 0\,$ (recall $\,x,y\ne 0\,\Rightarrow\,xy\ne 0\,$ in a domain). So $\,f\,$ has at most $2$ distinct roots. -Bifactor Theorem $\ $ Suppose that $\rm\,a,b\,$ are elements of a domain $\rm\,F\,$ and $\rm\:f\in F[x],\,$ i.e. $\rm\,f\,$ is a polynomial with coefficients in $\rm\,F.\,$ If $\rm\ \color{#C00}{a\ne b}\ $ are elements of $\rm\,F\,$ then -$$\rm f(a) = 0 = f(b)\ \iff\ f\, =\, (x\!-\!a)(x\!-\!b)\ h\ \ for\ \ some\ \ h\in F[x]$$ -Proof $\,\ (\Leftarrow)\,$ clear.
$\ (\Rightarrow)\ $ Applying the Factor Theorem twice, while canceling $\rm\: \color{#C00}{a\!-\!b\ne 0},$ $$\begin{eqnarray}\rm\:f(b)= 0 &\ \Rightarrow\ &\rm f(x)\, =\, (x\!-\!b)\,g(x)\ \ for\ \ some\ \ g\in F[x]\\ \rm f(a) = (\color{#C00}{a\!-\!b})\,g(a) = 0 &\Rightarrow&\rm g(a)\, =\, 0\,\ \Rightarrow\,\ g(x) \,=\, (x\!-\!a)\,h(x)\ \ for\ \ some\ \ h\in F[x]\\ &\Rightarrow&\rm f(x)\, =\, (x\!-\!b)\,g(x) \,=\, (x\!-\!b)(x\!-\!a)\,h(x)\end{eqnarray}$$<|endoftext|> -TITLE: Show that if $n$ is a positive integer and $n|2^n-1$, then $n=1$. -QUESTION [6 upvotes]: I have very limited experience with proofs, and I'm having trouble getting started with this. -So far I've been trying to prove it using the division equation $a=bq+r$, to show that the remainder $r+1$ is never zero for $n>1$ for -$2^n=nq+(r+1)$. Trivial cases are just that, and the part I'm guessing I'm having trouble with is how to generalize that, to apply to any $n$ (if this is even a good way to go about this). - -REPLY [5 votes]: Assume that $n \neq 1$. -Let $p$ be the smallest prime which divides $n$. Write $n= p^k \cdot l$. -Now, you know that -$$1 \equiv 2^n \equiv (2^l)^{p^k} \pmod p$$ -Use the fact that $a^p \equiv a \pmod p$ to conclude that $ (2^l)^{p^k} \equiv 2^l \pmod{p}$. -This shows that -$$2^l \equiv 1 \pmod{p}$$ -You also know that -$$2^{p-1} \equiv 1 \pmod{p}$$ -(by Fermat's little theorem; note that $p$ is odd, since $p$ divides $n$ and $n$ divides the odd number $2^n-1$). -This gives that $2^{\gcd (l, p-1)}\equiv 1 \pmod{p}$. -Now, if $\gcd(l, p-1) \neq 1$ it is divisible by a prime $q$. This prime divides $l$ hence $n$, and $q$ also divides $p-1$ and hence is smaller than $p$. But this contradicts that $p$ is the smallest prime dividing $n$. -We can therefore conclude that $\gcd (l, p-1)=1$. -Thus -$2^1 \equiv 1 \pmod{p} \Rightarrow p\mid 2-1=1$.
-This is a contradiction.<|endoftext|> -TITLE: Prove $13^{17} \ne x^2 + y^5$ -QUESTION [10 upvotes]: I've been going in circles trying to prove $$13^{17} \ne x^2 + y^5$$ -A friend of mine hinted to use modular arithmetic, but I'm not familiar with that field of study. -Any suggestions are appreciated. -Edit: $x,y \in\mathbb {N} $ - -REPLY [2 votes]: Check your equation mod 11: -$$13^{17} = 7 \pmod{11}$$ -$$\begin{array} {r|r} X & X^2 \pmod{11} \\ \hline -0 & 0\\ -1 & 1\\ -2 & 4\\ -3 & 9\\ -4 & 5\\ -5 & 3\\ -6 & 3\\ -7 & 5\\ -8 & 9\\ -9 & 4\\ -10 & 1\\ -\end{array}$$ -$$\begin{array} {r|r} Y & Y^5 \pmod{11} \\ \hline -0 & 0\\ -1 & 1\\ -2 & 10\\ -3 & 1\\ -4 & 1\\ -5 & 1\\ -6 & 10\\ -7 & 10\\ -8 & 10\\ -9 & 1\\ -10 & 10\\ -\end{array}$$ -The java program I used to find $11$: -public class ThirteenSeventeen { - - public static void main(String[] args) { - // does exist X,Y : 13^17 = x^2 + y^5 - - for (int ModBase = 1; ModBase < 100; ModBase++) { - boolean ExistFound = false; - for (int X = 0; X < ModBase; X++) { - for (int Y = 0; Y < ModBase; Y++) { - if (ModExpt(13, 17, ModBase) == (ModExpt(X, 2, ModBase) + ModExpt(Y, 5, ModBase)) % ModBase) { - ExistFound = true; - break; - } - } - if (ExistFound) break; - } - - if (!ExistFound) { - System.out.println("No Existance found for modular base: " + ModBase); - System.out.println("13^17 = " + ModExpt(13, 17, 11)); - - System.out.println("\\begin{array} {r|r} X & X^2 \\pmod{" + ModBase + "} \\\\ \\hline"); - for (int X = 0; X < ModBase; X++) { - System.out.println("" + X + " & " + ModExpt(X, 2, ModBase) + "\\\\"); - } - System.out.println("\\end{array}"); - - System.out.println("\\begin{array} {r|r} Y & Y^5 \\pmod{" + ModBase + "} \\\\ \\hline"); - for (int Y = 0; Y < ModBase; Y++) { - System.out.println("" + Y + " & " + ModExpt(Y, 5, ModBase) + "\\\\"); - } - System.out.println("\\end{array}"); - - break; - } - - } - - System.out.println("Done"); - } - - // compute X^Y % P - public static int ModExpt(int X, int Y, int P) { 
- int R = 1; - int S = X % P; - while (Y > 0) { - if (Y % 2 != 0) { - R = (R * S) % P; - } - S = S * S % P; - Y = Y / 2; - } - return R; - } - -}<|endoftext|> -TITLE: How to find the limit of $\lim_{x\to 0} \frac{1-\cos^n x}{x^2}$ -QUESTION [6 upvotes]: How can I show that - -$$ -\lim_{x\to 0} \frac{1-\cos^n x}{x^2} = \frac{n}{2} -$$ - -without using Taylor series $\cos^n x = 1 - \frac{n}{2} x^2 + \cdots\,$? - -REPLY [2 votes]: I have seen several limit problems on MSE where people don't use the standard limit $$\lim_{x \to a}\frac{x^{n} - a^{n}}{x - a} = na^{n - 1}\tag{1}$$ whereas frequent use is made of other standard limits like $$\lim_{x \to 0}\frac{\sin x}{x} = \lim_{x \to 0}\frac{\log(1 + x)}{x} = \lim_{x \to 0}\frac{e^{x} - 1}{x} = 1\tag{2}$$ and this question is also an instance where the limit $(1)$ should be used. -We have -\begin{align} -L &= \lim_{x \to 0}\frac{1 - \cos^{n}x}{x^{2}}\notag\\ -&= \lim_{x \to 0}\frac{1 - \cos^{n}x}{1 - \cos x}\cdot\frac{1 - \cos x}{x^{2}}\notag\\ -&= \lim_{x \to 0}\frac{1 - \cos^{n}x}{1 - \cos x}\cdot\lim_{x \to 0}\frac{1 - \cos x}{x^{2}}\notag\\ -&= \lim_{t \to 1}\frac{1 - t^{n}}{1 - t}\cdot\lim_{x \to 0}\frac{1 - \cos^{2}x}{x^{2}(1 + \cos x)}\text{ (putting }t = \cos x)\notag\\ -&= \lim_{t \to 1}\frac{t^{n} - 1}{t - 1}\cdot\lim_{x \to 0}\frac{\sin^{2}x}{x^{2}}\cdot\frac{1}{1 + \cos x}\notag\\ -&= n\cdot 1\cdot\frac{1}{1 + 1}\notag\\ -&= \frac{n}{2}\notag -\end{align}<|endoftext|> -TITLE: What is the probability that a point chosen randomly from inside an equilateral triangle is closer to the center than to any of the edges? -QUESTION [128 upvotes]: My friend gave me this puzzle: - -What is the probability that a point chosen at random from the interior of an equilateral triangle is closer to the center than any of its edges? - - -I tried to draw the picture and I drew a smaller (concentric) equilateral triangle with half the side length. 
Since area is proportional to the square of side length, this would mean that the smaller triangle had $1/4$ the area of the bigger one. My friend tells me this is wrong. He says I am allowed to use calculus but I don't understand how geometry would need calculus. Thanks for help. - -REPLY [14 votes]: The problem can be solved without calculus by using Archimedes' quadrature of the parabola. To prove this, however, Archimedes used techniques that are closely related to modern calculus. - -According to Archimedes' quadrature of the parabola, the area between the parabola and the edge EF is $4\over3$ the area of triangle EFG. -Let's assume that EK has length $x$; then due to similarity DI has length ${3\over2}x$ and DH has length ${1\over2}x$. Because point G is on the parabola, the length of DG is half the length of DI; hence the length of DG is ${3\over4}x$. Now the length of HG is ${3\over4}x-{1\over2}x={1\over4}x$. So the length of DH is twice the length of HG. -From this it follows that the area of triangle EFG is half the area of triangle EFD. -The area between EF and the parabola is ${4\over3}\cdot{1\over2}={2\over3}$ times the area of triangle EFD. -As the area of triangle EFD is equal to ${1\over27}$ the area of triangle ABC, we now find that the area of points closer to the middle than to the edges is equal to $3\cdot{5\over3}\cdot{1\over27}={5\over27}$ the area of triangle ABC. -Hence P(point is closer to the center)=${5\over27}$.<|endoftext|> -TITLE: Identity involving pentagonal numbers -QUESTION [11 upvotes]: Let $G_n = \tfrac{1}{2}n(3n-1)$ be the pentagonal number for all $n\in \mathbb{Z}$ and $p(n)$ be the partition function.
I was trying to prove one of Ramanujan's congruences: $$p(5n-1) \equiv 0 \pmod 5,$$ and my "brute force" proof reduces to showing the following identity: $$\left(\sum_{G_n \equiv 0 \pmod 5}(-1)^nq^{G_n}\right)\left(\sum_{G_n \equiv 2 \pmod 5}(-1)^nq^{G_n}\right) + \left(\sum_{G_n \equiv 1 \pmod 5}(-1)^nq^{G_n}\right)^2 \equiv 0 \pmod 5,$$ where the sums are over all $n\in \mathbb{Z}$. After expanding a few terms (up to $q^{15000}$) of the left hand side of the identity, I strongly believe the following identity holds: -$$\left(\sum_{G_n \equiv 0 \pmod 5}(-1)^nq^{G_n}\right)\left(\sum_{G_n \equiv 2 \pmod 5}(-1)^nq^{G_n}\right) + \left(\sum_{G_n \equiv 1 \pmod 5}(-1)^nq^{G_n}\right)^2 = 0.$$ -I am in particular interested in a proof of the last identity. -Update: Here is what I tried. Denote by $$F_k(q) := \sum_{G_n\equiv k\pmod 5}(-1)^nq^{G_n}.$$ Notice that $G_n \equiv 1 \pmod 5$ if and only if $n \equiv 1 \pmod 5$, and $G_{5k+1} = 1 + 25G_{-k}$. This would give $F_1(q) = -q\prod_{m=1}^{\infty}(1-q^{25m})$ using the pentagonal number theorem. However, I do not know a way to factor $F_0(q)$ and $F_2(q)$. - -REPLY [5 votes]: It turns out that Ramanujan asserted a result from which the identity in the question follows. In order to state his result, recall the $q$-Pochhammer symbols $(a;q)_\infty = \prod_{n \ge 0}(1-aq^n)$. For convenience, we always suppress the subscript: $(a;q) = (a;q)_\infty$. -Ramanujan asserted that -$$(q;q) = \frac{(q^{10};q^{25})(q^{15};q^{25})(q^{25};q^{25})}{(q^{5};q^{25})(q^{20};q^{25})} - q(q^{25};q^{25})-q^2\frac{(q^{5};q^{25})(q^{20};q^{25})(q^{25};q^{25})}{(q^{10};q^{25})(q^{15};q^{25})}.$$ -The pentagonal number theorem says that the left hand side is $(q;q) = \sum_n (-1)^nq^{G_n}$. However, the right hand side decomposes the sum based on $G_n\bmod 5$.
Using the notation $F_k$ defined in the question, we obtain $$F_0 = \frac{(q^{10};q^{25})(q^{15};q^{25})(q^{25};q^{25})}{(q^{5};q^{25})(q^{20};q^{25})}, F_1 = - q(q^{25};q^{25}), F_2=-q^2\frac{(q^{5};q^{25})(q^{20};q^{25})(q^{25};q^{25})}{(q^{10};q^{25})(q^{15};q^{25})}.$$ -Clearly, $F_0F_2 + F_1^2 = 0$. -Inspired by the proof of the Ramanujan's identity on $(q;q)$ in Ramanujan's "most beautiful identity" by Hirschhorn, here is a slightly direct proof of $F_0F_2 + F_1^2=0$. -Recall the Jacobi triple product: $(a^{-1}; q)(aq; q)(q;q)=\sum_n(-1)^na^nq^{n(n+1)/2}$. If $a \neq 1$, then we have $$(a^{-1}q;q)(aq;q)(q;q)=\sum_n(-1)^n\frac{a^n}{1-a^{-1}}q^{n(n+1)/2}.$$ Replacing $a$ by $a^{-1}$, we get $$(aq;q)(a^{-1}q;q)(q;q)=\sum_n(-1)^n\frac{a^{-n}}{1-a}q^{n(n+1)/2}.$$ -Averaging the last two identities, we obtain $$(a^{-1}q;q)(aq;q)(q;q)=\sum_n(-1)^n\frac{1}{2}\left(\frac{a^n}{1-a^{-1}}+\frac{a^{-n}}{1-a}\right)q^{n(n+1)/2}.$$ -Let $\omega := e^{2\pi i/5}$ be a (primitive) 5th root of unity. Take $a = \omega$, we have -$$(\omega^{-1}q;q)(\omega q;q)(q;q)=\sum_n(-1)^nc_nq^{n(n+1)/2},$$ where $$c_n = \frac{1}{2}\left(\frac{\omega^n}{1-\omega^{-1}}+\frac{\omega^{-n}}{1-\omega}\right) = \begin{cases}\frac{1}{2} & \text{if }n=0\pmod 5 \\ \frac{1+\sqrt{5}}{4} & \text{if }n=1\pmod 5 \\ 0 & \text{if }n=2\pmod 5 \\ -\frac{1+\sqrt{5}}{4} & \text{if }n=3\pmod 5 \\ -\frac{1}{2} & \text{if }n=4\pmod 5\end{cases}.$$ -So $(\omega^{-1}q;q)(\omega q;q)(q;q) = \frac{1}{2}(C_0 + \phi_1 C_1 - \phi_1 C_3 - C_4)$, where $C_i = \sum_{n=i\pmod 5}(-1)^nq^{n(n+1)/2}$ and $\phi_1 = \frac{1+\sqrt{5}}{2}$. Notice that by replacing $n$ by $-(n+1)$ in the sums, one can show that $C_0 = -C_4$ and $C_1 = - C_3$. So we get $$(\omega^{-1}q;q)(\omega q;q)(q;q) = C_0+\phi_1 C_1.$$ -Similarly, take $a = \omega^2$ and obtain $$(\omega^{-2}q;q)(\omega^2 q;q)(q;q) = \frac{1}{2}(C_0+\phi_2C_1-\phi_2C_3-C_4)=C_0+\phi_2C_1,$$ where $\phi_2 = \frac{1-\sqrt{5}}{2}$. 
Multiply the last two identities, we have $$(\omega^{-2}q;q)(\omega^{-1}q;q)(\omega q;q)(\omega^{2}q;q)(q;q)(q;q) = C_0^2-C_0C_1-C_1^2.$$ -Since $(\omega^{-2}q;q)(\omega^{-1}q;q)(\omega q;q)(\omega^{2}q;q)(q;q) = (q^5;q^5),$ we have $$(q;q)=\frac{C_0^2-C_0C_1-C_1^2}{(q^5;q^5)}.$$ By the same reasoning we used at the very beginning, we have $$F_0 = \frac{C_0^2}{(q^5;q^5)}, F_1 = \frac{-C_0C_1}{(q^5;q^5)}, F_2 = \frac{-C_1^2}{(q^5;q^5)}.$$ -Remark: With a bit of extra work (using Jacobi triple product), one can show $C_0 = (q^{10};q^{25})(q^{15};q^{25})(q^{25};q^{25}), C_1 = (q^5;q^{25})(q^{20};q^{25})(q^{25};q^{25})$ and hence derive Ramanujan's identity.<|endoftext|> -TITLE: What's the formula for this series for $\pi$? -QUESTION [16 upvotes]: These continued fractions for $\pi$ were given here, -$$\small -\pi = \cfrac{4} {1+\cfrac{1^2} {2+\cfrac{3^2} {2+\cfrac{5^2} {2+\ddots}}}} -= \sum_{n=0}^\infty \frac{4(-1)^n}{2n+1} -= \frac{4}{1} - \frac{4}{3} + \frac{4}{5} - \frac{4}{7} + \cdots\tag1 -$$ -$$\small -\pi = 3 + \cfrac{1^2} {6+\cfrac{3^2} {6+\cfrac{5^2} {6+\ddots}}} -= 3 - \sum_{n=1}^\infty \frac{(-1)^n} {n (n+1) (2n+1)} -= 3 + \frac{1}{1\cdot 2\cdot 3} - \frac{1}{2\cdot 3\cdot 5} + \frac{1}{3\cdot 4\cdot 7} - \cdots\tag2 -$$ -$$\small -\pi = \cfrac{4} {1+\cfrac{1^2} {3+\cfrac{2^2} {5+\cfrac{3^2} {7+\ddots}}}} -= 4 - 1 + \frac{1}{6} - \frac{1}{34} + \frac {16}{3145} - \frac{4}{4551} + \frac{1}{6601} - \frac{1}{38341} + \cdots\tag3$$ -Unfortunately, the third one didn't include a closed-form for the series. (I tried the OEIS using the denominators, but no hits.) - -Q. What's the series formula for $(3)$? 
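One observation that may help pin down the closed form: the partial sums of the series are exactly the convergents of the continued fraction, so the terms can be generated with exact rational arithmetic as successive differences of convergents. Here is a quick sketch (the helper name `convergent` and the truncation scheme are mine, not from any reference):

```python
from fractions import Fraction

def convergent(m):
    """m-th convergent of 4/(1 + 1^2/(3 + 2^2/(5 + 3^2/(7 + ...)))),
    truncated after the partial denominator 2m+1."""
    v = Fraction(2 * m + 1)
    for k in range(m - 1, -1, -1):  # unwind the fraction from the inside out
        v = (2 * k + 1) + Fraction((k + 1) ** 2) / v
    return 4 / v

# successive differences reproduce the terms 4, -1, 1/6, -1/34, 16/3145, ...
prev = Fraction(0)
for m in range(6):
    c = convergent(m)
    print(c - prev)
    prev = c
```

The printed differences match the terms listed in $(3)$, which at least confirms the data any closed form has to produce.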
-REPLY [6 votes]: Given the symmetric continued fraction found in this post -$$\frac{\displaystyle\Gamma\left(\frac{a+3b}{4(a+b)}\right)\Gamma\left(\frac{3a+b}{4(a+b)}\right)}{\displaystyle\Gamma\left(\frac{3a+5b}{4(a+b)}\right)\Gamma\left(\frac{5a+3b}{4(a+b)}\right)}=\cfrac{4(a+b)}{a+b+\cfrac{(2a)(2b)} {3(a+b)+\cfrac{(3a+b)(a+3b)}{5(a+b)+\cfrac{(4a+2b)(2a+4b)}{7(a+b)+\ddots}}}}$$ -your continued fraction $(3)$ is the special case $a=b=1$. -Moreover, it has a beautiful q-analogue -$$\begin{aligned}\Big(\sum_{n=0}^\infty q^{n(n+1)}\Big)^2 =\cfrac{1}{1-q+\cfrac{q(1-q)^2}{1-q^3+\cfrac{q^2(1-q^2)^2}{1-q^5+\cfrac{q^3(1-q^3)^2}{1-q^7+\ddots}}}}\end{aligned}$$ found here and here.<|endoftext|> -TITLE: $H^{\infty}$ is not separable -QUESTION [5 upvotes]: Consider the Hardy space $(H^{\infty},\|\cdot\|_{\infty})$ where $$ \|f\|_{\infty} = \sup\{|f(z)| : z\in\mathbb{D} \}<\infty,$$ for $f\in H^{\infty}.$ Prove that $H^{\infty}$ is not separable. - -My attempt: -I tried to find functions $\phi^{i}$, $i\in I$, holomorphic and bounded over the unit disk, such that $$B = \{\phi^{i} : i\in I\}$$ has uncountably many members and for every $i,j\in I$ with $i \neq j$, there exists an $M>0$ with $\|\phi^{i} - \phi^{j}\|_{\infty}>M$. -But I can't find a specific example that works. I tried for example $\phi^{t}(z) = e^{it}f(z)$ where $f\in H^{\infty}$. It's not easy, but there must be an elementary way to solve it, without using heavy theorems (like interpolation theorems) or algebraic methods.
-REPLY [5 votes]: We have a nice family of inner functions: -For $\zeta \in \partial \mathbb{D}$, let -$$f_{\zeta}(z) = \exp \biggl( \frac{z+\zeta}{z - \zeta}\biggr).$$ -For $\zeta_1 \neq \zeta_2$, consider the radial limits of $f_{\zeta_k}$ at $\zeta_1$ to see $\lVert f_{\zeta_1} - f_{\zeta_2}\rVert_{\infty} \geqslant 1$.<|endoftext|> -TITLE: On the exactness of the calculus formulas -QUESTION [5 upvotes]: Are calculus formulas, differentiation and integration, exact formulas or are some approximations involved? -That is, is the value of a definite integral of a function the exact value of the (signed) area between the graph and the axis, or is it only an approximation of it? And likewise for the value of the derivative and the slope of the tangent at a point. -The question arises because, when introducing these notions, approximations are often central, say, via Riemann sums. It is then not clear whether the definite integral is just another (sometimes more practical) way to get an approximation, or whether it is exact; and the same for the derivative and the slope of the tangent. - -REPLY [5 votes]: Given $f:[a,b]\to{\mathbb R}$, the integral -$$\int_a^b f(x)\>dx$$ -is a clear-cut real number, i.e., a certain element of the set ${\mathbb R}$. This number can be familiar to you, like ${7\over 13}$, $\sqrt{5}$, or $\pi$. But maybe it has never before "occurred" in mathematics. -It is the definition of this number as a limit of Riemann sums that uses approximations; but once this definition is adopted there is no question of some fishy "approximation" involved anymore. -There remains, however, the following problem: Your function $f$ can be a simple analytic expression, like $f(x):=e^{-x^2/2}$, and your scientific pocket calculator does not have the primitive of this $f$ in store, as it is not "elementary".
As a consequence your calculator can only output a numerical approximation to the integral -$$\int_0^1 e^{-x^2/2}\>dx\ ,\tag{1}$$ -and cannot give a value in terms of "standard" functions and constants, like $\exp$, $\cos$, $\pi$, $\sqrt{2}$, etc. Nevertheless the expression $(1)$ defines a certain real number $\xi$ exactly, i.e., to "infinitely many" decimal places.<|endoftext|> -TITLE: Shortcut for finding number of rational terms in $(a^{\frac{1}{p}}+b^{\frac{1}{q}})^n$ -QUESTION [6 upvotes]: My teacher taught me a shortcut for finding the number of rational terms in $\left(a^{\frac{1}{p}}+b^{\frac{1}{q}}\right)^n$. -For example, find the number of rational terms in $\left(5^{\frac{1}{6}}+2^{\frac{1}{8}}\right)^{100}$. -Algorithm: - -Find LCM of $(p,q)$. In the above example, it's $24$. -Divide $n$ by the LCM obtained. Let quotient be $Q$ and remainder be $R$. -If $R=0$, number of rational terms is $Q+1$. Else it's $Q$. - -In the above example, $R\neq 0$. So the number of rational terms is $4$. -How did he derive this shortcut? - -REPLY [3 votes]: Since the binomial expansion of this expression is -$$ -\left(a^{1/p} + b^{1/q}\right)^n=\sum_{i=0}^{n}{{n}\choose{i}}a^{i/p}b^{(n-i)/q}, -$$ -the $i$-th term is certainly rational (indeed, an integer) when $i\equiv 0$ (mod $p$) and $i\equiv n$ (mod $q$). By the Chinese remainder theorem, all solutions to these two equations are congruent modulo ${\text{lcm}}(p, q)$; i.e., we get one solution every ${\text{lcm}}(p, q)$ steps. Therefore we get $Q$ or $Q+1$ solutions (in the notation of the problem) if the LCM doesn't divide $n$, and $Q+1$ solutions if the LCM divides $n$ (in which case $q$ divides $n$ as well, so the solutions start at $i=0$). When the LCM doesn't divide $n$, you need to find the first solution to decide if the result will be $Q$ or $Q+1$. This is the cause of "exceptions" like $\left(2 + 3^{1/4}\right)^6$. -The count, moreover, depends on there not being any other rational terms.
I think this is guaranteed only if $a$ and $b$ are squarefree and coprime.<|endoftext|> -TITLE: Number of permutations of the word "PERMUTATION" -QUESTION [5 upvotes]: In how many ways we can arrange the letters of the word "PERMUTATION" such that no two vowels occur together and no two T's occur together. -I first arranged consonants including one T as below: -$*P*R*M*T*N*$ -Now in 6 star places i will arrange the vowels $A,E,I,O,U$ which can be done in $\binom{6}{5} \times 5!=6!$ ways. Also $P,R,M,N,T$ can themselves arrange in $5!$ ways. Hence total number of ten letter words now is $5! \times 6!$. -But one $T$ should be placed in eleven places of the ten letter word such that it should not be adjacent to $T$ which is already there. hence the remaining $T$ has $9$ ways to place. -hence total ways is $6! \times 5! \times 9$. -But my answer is not matching with book answer. please correct me - -REPLY [4 votes]: Given that we have two different answers posted thus far (820,800 and 796,800), perhaps I can be forgiven for applying heavy machinery. -We start by replacing all the vowels with Vs and ask how many permutations there are of PVRMVTVTVVN in which no two adjacent letters are equal. According to [1], the answer is -$$N = \int_0^{\infty} e^{-t}\; \ell_1(t)^4 \;\ell_2(t) \; \ell_5(t) \; dt$$ -where $$\begin{align} -\ell_1(t) &= t\\ -\ell_2(t) &= \frac{1}{2} t^2 - t\\ -\ell_5(t) &= \frac{1}{120}t^5 -\frac{1}{6}t^4 +t^3 -2t^2 + t\\ -\end{align}$$ -Mathematica evaluates the integral as $N = 6,840$. -To answer the original problem, we multiply by $5!$ to account for all the ways of replacing the five Vs with the five distinct vowels: -$$5! \; N = 820,800$$ -[1] Theorem 2.1 in "Counting words with Laguerre series" by Jair Taylor, The Electronic Journal of Combinatorics 21(2), 2014. 
-http://www.combinatorics.org/ojs/index.php/eljc/article/view/v21i2p1<|endoftext|>
-TITLE: Use Cauchy's Theorem to show that if $\int_{0}^{\infty}f(x)dx$ exists, then so does $\int_{L}f(z)dz$
-QUESTION [5 upvotes]: Suppose that $f(z)$ is analytic at every point of the closed domain $0 \leq arg z \leq \alpha$ $(0 \leq \alpha \leq 2 \pi)$, and that $\lim_{z \to \infty}z f(z) = 0$. I need to prove that if the integral $\displaystyle J_{1}=\int_{0}^{\infty}f(x) dx$ exists, then the integral $\displaystyle J_{2}=\int_{L}f(z)dz$ also exists, where $L$ is the ray $z=r e^{i \alpha}$, $0 \leq r \leq \infty$. Moreover, I need to show that $J_{1} = J_{2}$.
-I have been given the hint to use Cauchy's Theorem (not the Cauchy integral formula or residues - answers using either of those things are useless to me), and the result of the previous problem, which states as follows:
-
-If $f(z)$ is continuous in the closed domain $|z|\geq R_{0}$, $0 \leq arg z \leq \alpha$ $(0 \leq \alpha \leq 2 \pi)$, and if the limit $\displaystyle \lim_{z \to \infty} zf(z) = A$ exists, then $\displaystyle \lim_{R \to \infty}\int_{\displaystyle \Gamma_{R}}f(z)dz = i A \alpha$, where $\Gamma_{R}$ is the arc of the circle $|z|=R$ lying in the given domain.
-
-So, for this problem, I can use the fact that $\lim_{z \to \infty}zf(z) = 0$ to show that $\displaystyle \lim_{r \to \infty}\int_{\displaystyle \Gamma_{r}}f(z)dz = 0$ at some point, I guess.
-Thus far, I've tried approaching this problem in two different ways.
-The first way was to start out with $J_{2} = \int_{L}f(z)dz$ and then try to get $J_{1}$ to pop out somewhere. Didn't get too far with that, and anyway, I'm not sure that it is correct to write $\int_{L}f(z)dz = \lim_{r \to \infty}\int_{0}^{2\pi}f(re^{i\alpha})ire^{i \alpha}d \alpha$. All of these angles and args are confusing me, and I'm not even entirely sure what the domain on which $f(z)$ is analytic looks like.
-The second way was to start out with $J_{1} = \int_{0}^{\infty}f(x) dx$, and try to parametrize it in terms of $z = re^{i \alpha}$. But, I'm not sure exactly how to do this (again, the domain is confusing. Tried to draw it; didn't help. Maybe I'm just not visualizing it right). Then, at some point, I assume I can apply Cauchy's Theorem and the given limit. -I'm guessing that since Cauchy's Theorem is involved and that the given limit goes to $0$, I'm probably going to wind up with $0 = J_{1} = J_{2}$, but I need a lot of help and guidance to show this. -I'm at my wits end, don't have a lot of time to figure this out, and am starting to panic. Please help. - -REPLY [2 votes]: I drew a picture: - -The yellow line is the part of $J_2$ of length $r$, the green line the part of $J_1$ of length $r$ and the blue line is the circle arc connecting the two lines. Note, since $f$ is analytic on the entire domain enclosed by the path and that the path is contractible in this domain, that the integral along it (going first along the yellow, then the blue, then the green or the other way around) is zero. This is the Cauchy theorem. 
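Before writing this out formally, the claimed equality $J_1 = J_2$ can be sanity-checked numerically for a concrete function satisfying the hypotheses. In the sketch below I pick $f(z)=e^{-z^2}$ and $\alpha=\pi/8$ (my choices, not part of the problem), so the known value $J_1=\sqrt\pi/2$ is available for comparison:

```python
import cmath
import math

def ray_integral(f, alpha, r_max=12.0, steps=200_000):
    """Trapezoidal rule for the integral of f along the ray z = r*e^{i*alpha}."""
    h = r_max / steps
    w = cmath.exp(1j * alpha)          # unit direction of the ray
    total = 0.5 * (f(0) + f(r_max * w))
    for k in range(1, steps):
        total += f(k * h * w)
    return total * h * w

f = lambda z: cmath.exp(-z * z)        # z*f(z) -> 0 in the sector 0 <= arg z <= pi/8
J1 = math.sqrt(math.pi) / 2            # known value of the real integral
J2 = ray_integral(f, math.pi / 8)
print(abs(J2 - J1))                    # tiny: the two integrals agree
```

The truncation at $r=12$ is harmless here because $|e^{-z^2}|=e^{-r^2\cos 2\theta}$ decays rapidly throughout the sector.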
-If we name these paths $J^1_r$, $J^2_r$ and $\Gamma_r$ (where the orientation of $\Gamma_r$ is that it goes from "top to bottom"), then the above text is just the equation:
-$$\int_{J^1_r} f(z)dz = \int_{J^2_r} f(z) dz + \int_{\Gamma_r} f(z) dz$$
-Your previous result is that $\lim_{r \to \infty} \int_{\Gamma_r} f(z) dz =0$ (since $\lim_{z \to \infty} z f(z) = 0$), so taking the limit on both sides gives you
-$$\int_{J_2} f(z) dz = \lim_{r \to \infty} \int_{J^2_r} f(z) dz = \lim_{r \to \infty}\left( \int_{J^1_r} f(z) dz -\int_{\Gamma_r} f(z) dz\right) = \int_{J_1} f(z) dz+0$$
-This shows that the integral over $J_2$ exists if the integral over $J_1$ exists, and that they are equal, which is the result you were looking for.<|endoftext|>
-TITLE: A matrix norm inequality
-QUESTION [5 upvotes]: Given a real $m\times n$ matrix $C$, an $m\times m$ diagonal matrix $p$ whose diagonal entries $p_{ii}$ are either 0 or 1, and an $n\times n$ diagonal matrix $q$ whose diagonal entries $q_{ii}$ are either 0 or 1.
-Let $P(\alpha)=\frac{\exp(i\alpha)}{2}p + \frac{I-p}{2}$, a diagonal matrix whose diagonal elements are either $1/2$ or $\exp(i\alpha)/2$.
-Let $Q(\alpha)=\frac{\exp(i\alpha)}{2}q + \frac{I-q}{2}$, a diagonal matrix whose diagonal elements are either $1/2$ or $\exp(i\alpha)/2$.
-Then we can construct the function
-$$n(\alpha)=\frac{\|P(\alpha) C + C Q(\alpha)\|}{\|C\|}$$
-where the norm is the operator norm.
-The figure below shows all possible $n(\alpha)$ curves for a $7\times 7$ matrix $C$.
-
-We are interested in the behaviour of $n(\alpha)$ for $\alpha\in[0,\pi]$.
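(Curves like the ones in the figure can be reproduced with a short script. The sketch below is mine, not part of the original post: it uses power iteration as a stand-in for the operator 2-norm, draws a random real $C$ and random $0/1$ patterns for $p$ and $q$, and numerically checks the properties conjectured below.)

```python
import cmath
import math
import random

def matvec(A, v):
    return [sum(a * x for a, x in zip(row, v)) for row in A]

def spectral_norm(A, iters=1000):
    """Largest singular value of a small complex matrix via power
    iteration on A^H A -- accurate enough for a numerical demo."""
    rows, cols = len(A), len(A[0])
    AH = [[A[i][j].conjugate() for i in range(rows)] for j in range(cols)]
    v = [complex(random.random(), random.random()) for _ in range(cols)]
    for _ in range(iters):
        w = matvec(AH, matvec(A, v))
        nrm = math.sqrt(sum(abs(x) ** 2 for x in w))
        v = [x / nrm for x in w]
    return math.sqrt(sum(abs(x) ** 2 for x in matvec(A, v)))

random.seed(0)
m = 5
C = [[complex(random.gauss(0, 1)) for _ in range(m)] for _ in range(m)]
p = [random.randint(0, 1) for _ in range(m)]
q = [random.randint(0, 1) for _ in range(m)]
normC = spectral_norm(C)

def n_alpha(alpha):
    e = cmath.exp(1j * alpha) / 2
    P = [e if flag else 0.5 for flag in p]
    Q = [e if flag else 0.5 for flag in q]
    M = [[P[i] * C[i][j] + C[i][j] * Q[j] for j in range(m)] for i in range(m)]
    return spectral_norm(M) / normC

alphas = [k * math.pi / 40 for k in range(41)]
vals = [n_alpha(a) for a in alphas]
print(all(vals[k + 1] <= vals[k] + 1e-3 for k in range(40)))           # non-increasing
print(all(v >= math.cos(a / 2) - 1e-3 for a, v in zip(alphas, vals)))  # >= cos(a/2)
```

The tolerances only absorb power-iteration error; for the induced 2-norm both observed properties appear to hold exactly.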
-We can prove easily that $n(\alpha)\leq 1$: -$$\frac{\|P(\alpha) C + C Q(\alpha)\|}{\|C\|}\leq \frac{\|P(\alpha) C \|+\| C Q(\alpha)\|}{\|C\|}\leq \frac{\|P(\alpha)\|\| C \|+\| C \|\|Q(\alpha)\|}{\|C\|}\\ -\leq \frac{\frac{1}{2}\| C \|+\| C \|\frac{1}{2}}{\|C\|} \leq 1$$ -In the case where $P(\alpha)=I/2$, we can easily prove that $n(\alpha)$ is a nonincreasing function of $\alpha$: -$$n(\alpha+\delta_{\alpha})=\frac{\|C/2 + C Q(\alpha+\delta_{\alpha})\|}{\|C\|}=\frac{\|C (I/2+ Q(\alpha+\delta_{\alpha}))\|}{\|C\|}\\ -=\frac{\|C (I/2+ Q(\alpha))(I/2+ Q(\alpha))^{-1}(I/2+ Q(\alpha+\delta_{\alpha}))\|}{\|C\|}\\ -\leq n(\alpha)\|(I/2+ Q(\alpha))^{-1}(I/2+ Q(\alpha+\delta_{\alpha}))\|$$ -$(I/2+ Q(\alpha))^{-1}(I/2+ Q(\alpha+\delta_{\alpha}))$ is a diagonal matrix with diagonal elements either 1 or $\frac{1+\exp(i(\alpha+\delta_{\alpha}))}{1+\exp(i\alpha)}$. Note $|\frac{1+\exp(i(\alpha+\delta_{\alpha}))}{1+\exp(i\alpha)}|\leq 1$ for relevant parameter values ($\alpha\in[0,\pi],\delta_{\alpha}>0,\alpha+\delta_{\alpha}\leq\pi$), so $\|(I/2+ Q(\alpha))^{-1}(I/2+ Q(\alpha+\delta_{\alpha}))\|\leq 1$ so $n(\alpha+\delta_{\alpha})\leq n(\alpha)$, so $n(\alpha)$ is indeed a non-increasing function of $\alpha$. -So, on to the actual question - -I suspect that $n(\alpha)$ is always a non-increasing function of $\alpha$, not just in the $P=I/2$ case as shown above, but the proof technique used above does not work in the general case. How could I prove this? -It also seems like $n(\alpha)\geq \cos(\alpha/2)$. How could I prove this? - -REPLY [2 votes]: I'm not sure about other induced norms, but your conjectures are true for the induced 2-norm (i.e. the largest singular value), the induced 1-norm (the maximum absolute column sum) and the induced $\infty$-norm (the maximum absolute row sum), owing to the following observation: - -Proposition. Let $p=1,2$ or $\infty$ and $\|\cdot\|$ denotes the matrix norm induced by the vector $p$-norm. 
Let $A(t)=\pmatrix{X&tY\\ tZ&W}$ be a complex partitioned matrix where $t\ge0$ and $X,Y,Z,W$ are fixed (but not necessarily square). Then $\|A(t)\|$ is increasing in $t$. - -Note that the above proposition is also true when $t$ is multiplied to the diagonal blocks rather than to the antidiagonal blocks, because the induced 1-, 2- or $\infty$-norms are permutation invariant. I shall defer the proof of this proposition to the end of the answer. Let's see why your conjectures are true first. - -Presumably $C$ is nonzero. So, we may assume that it has unit norm and we may ignore the denominator in the definition of $n(\alpha)$. -Let $z=e^{i\alpha/2}$. Then both $P$ and $Q$, up to permutations of rows and columns, are of the form $(\frac{z^2}2I)\oplus(\frac12I)$ (but the sizes of the identity matrices may of course be different). -Therefore, we may assume that $C=\pmatrix{X&Y\\ Z&W}$ and $CP+QC=\pmatrix{z^2X&\frac{z^2+1}2Y\\ \frac{z^2+1}2Z&W}$. -Since $\|D_1A(t)D_2\|=\|A(t)\|$ for any diagonal unitary matrices $D_1$ and $D_2$, we have, in turn, -$$ -n(\alpha) -=\left\|\pmatrix{X&\frac{z^2+1}{2z}Y\\ \frac{z^2+1}{2z}Z&W}\right\| -=\left\|\pmatrix{X&\cos(\frac\alpha2)Y\\ \cos(\frac\alpha2)Z&W}\right\|. -$$ -So, by the previous proposition, $n(\alpha)$ is decreasing in $\alpha$. -Also, if we apply the above proposition to the diagonal blocks instead of the antidiagonal ones, we get -$$ -\left\|\pmatrix{X&\cos(\frac\alpha2)Y\\ \cos(\frac\alpha2)Z&W}\right\| -\ge\left\|\pmatrix{\cos(\frac\alpha2)X&\cos(\frac\alpha2)Y\\ \cos(\frac\alpha2)Z&\cos(\frac\alpha2)W}\right\| -=\cos(\frac\alpha2). -$$ - -Remark. -The above shows that your two conjectures are true as long as the boxed proposition is true and the matrix norm in question is induced by some vector norms such that $\|Px\|=\|Dx\|=\|x\|$ for any permutation matrix $P$ and diagonal unitary matrix $D$. However, I'm not sure whether the boxed proposition is really true for every such matrix norm. 
It is true, however, for the induced 1-, 2- and $\infty$-norms, as shown below.
-
-Proof of the proposition.
-The proposition is trivial for $\|\cdot\|_1$ and $\|\cdot\|_\infty$. So, we will consider only the induced 2-norm.
-Ignore the trivial case that $A(0)=0$. Let $F=\pmatrix{X&0\\ 0&W}$ and $G=\pmatrix{0&Y\\ Z&0}$, so that $A(t)=F+tG$. Now, choose a fixed $t>0$ and let $\sigma_t=\|A(t)\|_2$. Let $u$ and $v$ be respectively a left and a right unit singular vector of $A(t)$ corresponding to the singular value $\sigma_t$ (so that $A(t)v=\sigma_t u$ and $u^\ast A(t)v=\sigma_t$).
-Note that the real part of $u^\ast Gv$ must be nonnegative. Suppose the contrary. Then $t(u^\ast Gv)=-p+bi$ for some $p>0$ and $b\in\mathbb R$. Therefore $u^\ast Fv=\sigma_t+p-bi$. Yet, by the variational characterisation of singular values, for any complex matrix $M$, we have
-$$
-\|M\|_2 = \max_{\|x\|_2=\|y\|_2=1} |x^\ast My|.
-$$
-Since $F+tG$ is unitarily equivalent to $F-tG$ (in fact, one can obtain the latter by left and right multiplying the former by matrices of the form $I\oplus-I$), the largest singular value of $F-tG$ must be equal to $\sigma_t$. Therefore, by the variational characterisation of singular values, we get $\sigma_t\ge|u^\ast(F-tG)v|=|(\sigma_t+2p)-2bi|$, which is impossible because $p>0$. Therefore the real part of $u^\ast Gv$ must be nonnegative.
-So, for $T>t$,
-$$
-|u^\ast A(T)v|
-=|u^\ast A(t)v + (T-t)(u^\ast Gv)|
-=|\sigma_t + (T-t)(u^\ast Gv)|
-\ge\sigma_t
-$$
-and hence $\|A(T)\|_2\ge\|A(t)\|_2$ for any $T>t\ge0$. As $\|A(t)\|_2$ is continuous in $t$, we conclude that it is increasing at $t=0$ too. $\square$<|endoftext|>
-TITLE: Cyclic Modules, Characteristic Polynomial and Minimal Polynomial
-QUESTION [6 upvotes]: Suppose that $\mathrm{dim}_{F}M<\infty$ for $F$ a field and $M$ an $F$ vector space. Let $T$ be a linear transformation on $M$.
Show that $M$ is cyclic (as an $F[x]$ module) if and only if $m(x)$ is the characteristic polynomial of $T$, for $m(x)$ being the minimal polynomial of $T$.
-
-How would one be able to show this? I'm not sure how to start with either direction. We know that the torsion of $M$ would just be $M$ (since $m(T)=0$) if we consider $M$ as an $F[x]$ module with $x$ being represented as the action of $T$ (i.e. $p(x) \cdot v=p(T)v$). Would the Cayley-Hamilton theorem help in this case?
-Thanks for the help.
-
-REPLY [3 votes]: Let me also give an elementary proof of the hard direction:
-
-If the minimal polynomial $m \in F[x]$ of $T$ coincides with the
- characteristic polynomial, $M$ is cyclic.
-
-Proof:
-Let us first do the case $m=p$ for some irreducible polynomial $p \in F[x]$ of degree $d$. In this case any $v \neq 0$ will generate $M$, since a proper $T$-invariant subspace of $M$ gives rise to a factorization of $m$.
-Now let us consider the case $m=p^n$ for some irreducible polynomial $p \in F[x]$ of degree $d$. Let $v$ be any vector with $p^{n-1}(T)v \neq 0$. Then
-$$p^j(T)v,p^j(T)Tv, \dotsc, p^j(T)T^{d-1}v, 0 \leq j \leq n-1$$
-are $dn$ linearly independent vectors, hence they form a basis of $M$, which shows that $v$ generates $M$.
-To see the linear independence, apply $p^{n-1}(T)$ to a linear combination of the vectors. Then use the $n=1$ case to see that the remaining coefficients are zero. Then apply $p^{n-2}(T)$ and proceed.
-Finally, the general case uses the Chinese remainder theorem and the decomposition theorem into generalized eigenspaces.
-Let $m = p_1^{n_1} \dotsb p_s^{n_s}$. We have the decomposition
-$$M = \operatorname{ker}(p_1^{n_1}(T)) \oplus \dotsb \oplus \operatorname{ker}(p_s^{n_s}(T))$$
-By the cases already taken care of we obtain that $\operatorname{ker}(p_1^{n_1}(T))$ is cyclic with annihilator $p_1^{n_1}$, i.e. $\operatorname{ker}(p_1^{n_1}(T)) = F[x]/(p_1^{n_1})$.
By the Chinese remainder theorem we now obtain that
-$$M=\operatorname{ker}(p_1^{n_1}(T)) \oplus \dotsb \oplus \operatorname{ker}(p_s^{n_s}(T))=F[x]/(p_1^{n_1}) \oplus \dotsb \oplus F[x]/(p_s^{n_s}) = F[x]/(m)$$
-is cyclic.
-
-Note that in the case where $F$ is algebraically closed, the first two cases collapse into the following very easy statement:
-If $T$ is nilpotent and $n$ minimal with $T^n=0$, then $v,Tv, \dotsc, T^{n-1}v$ are linearly independent for any $v$ with $T^{n-1}v \neq 0$.<|endoftext|>
-TITLE: Probability that one part of a randomly cut equilateral triangle covers the other
-QUESTION [8 upvotes]: If you make a straight cut through a square, one part can always be made to cover the other. (This is true by symmetry if the cut goes through the centre, and if it doesn't, you can shift it to the centre while taking from one part and giving to the other.)
-However, if you cut an equilateral triangle, it may or may not be the case that one part can be made to cover the other. In some cases it may depend on whether we're allowed to flip the parts; I'll leave that to you in case one or the other version has a more elegant solution.
-
-How can the cuts that allow one part to cover the other best be characterized?
-What is the probability that a random cut will allow one part to cover the other?
-
-Of course we need to specify a distribution for the cuts, and again I'll leave you to choose between two plausible distributions in case one yields a nicer result: Either Jaynes' solution to the Bertrand "paradox" (i.e. random straws thrown from afar, with uniformly distributed directions and uniformly distributed coordinates perpendicular to their direction), or a cut defined by two independently uniformly distributed points on two different sides of the triangle.
-Update: I've posted the case without flipping as a separate question.
- -REPLY [5 votes]: If we allow flipping, an answer to the first question will be (as suggested by Thomas Ahle in the comments): - -It is impossible to cover one part by the other if and only if the cut intersects all three altitudes within the triangle. - -The "only if" part -This can be shown by contraposition. So let us assume that not all three altitudes are cut within the triangle and prove that one part covers the other: -If (at minimum) one of the altitudes is left uncut within the triangle, this uncut altitude provides an axis of symmetry over which you can reflect one part of the triangle to cover the other part. -The "if" part -First note that if a cut passes through either a vertex and/or a midpoint (at least) one of the altitudes is not being cut within the triangle. -So the cut must pass between vertices and midpoints in order to cut all three altitudes within the triangle. Thinking a bit more about this we realize that then the situation can be rotated to fit the following diagram of the situation: - -Where the cut has to enter through the red segment at one side and exit through the other red segment at the other side. This means that some kind of cut lying entirely below the blue equilateral at the top of the diagram has to be made. -But then it is clear on one hand that the bottom part cannot in any possible way cover even just the blue triangle, less so the top part of the cut. And the opposite is clearly also not possible since the bottom part has a full side length of the original triangle which is a distance that is nowhere to find in the top part of the cut. -The last part could have been put in more specific and technical terms, but I think that would blur the picture. In case someone disagrees, please suggest improvements or ask for clarification. 
-
-Probability figure
-Let us place an equilateral triangle of side length $1$ (WLOG) inside a circle of diameter $2\cdot\sqrt 3/3$:
-
-Then, using the distribution given as method 2 in the OP slightly re-phrased, a chord can be chosen by rotating the circle by a uniformly randomly chosen angle and then choosing a point $E$ on the vertical diameter (the orange diameter in the diagram above) uniformly at random and drawing a horizontal line through that point.
-Any line in the plane will form an angle $v$ within the interval $[0,\pi/6]$ with one of the altitudes of the randomly tilted equilateral triangle. WLOG assume the altitude in question is $BD$.
-Now, letting the point $E$ traverse the vertical diameter of length $2\cdot\sqrt 3/3$, the horizontal line will cut the triangle iff it cuts the vertical segment $AG$, which will be denoted by $w$ for future reference. So the probability that a horizontal line that cuts the circle (the event $\Omega$) also cuts the equilateral triangle (the event $\Delta$) for the given tilt angle $v$ will be:
-$$
-P(\Delta\mid\Omega,v)=\frac{w}{2\cdot\sqrt 3/3}=\frac{\cos(v)}{2\cdot\sqrt 3/3}
-$$
-where we have used that the side length of the equilateral triangle was $1$ in order to establish that $w=\cos(v)$.
-Next let us consider the probability that the horizontal line intersects all three altitudes (the event $\star$) given that it intersects the triangle. This can be expressed as the probability that it intersects the segment $z=DF$ given that it intersects the side $AC=1$.
A simple use of trigonometry shows that: -$$ -P(\star\mid\Delta,v)=\frac z1=\frac{\sqrt 3}2\cdot\tan(v) -$$ - -Finally, some tedious integration leads to the general statement integrating out $v$ providing the probability figure: -$$ -\begin{align} -P(\star\mid\Delta)&=\dfrac{\int_0^{\pi/6}P(\star\mid\Delta,v)\cdot -P(\Delta\mid\Omega,v)\ dv}{\int_0^{\pi/6}P(\Delta\mid\Omega,v)\ dv}\\ -&=\dfrac{\frac 34-\frac 38\cdot\sqrt 3}{\sqrt 3/4}\\ -&=\sqrt 3-1.5\\ -&\approx 0.2320508 -\end{align} -$$ -So this determines the probability that any one part will fail to cover the other part, and the corresponding probability that one part will cover the other under this distribution (event $\chi$) is: -$$ -P(\chi)=1-P(\star\mid\Delta)=2.5-\sqrt 3\approx 0.76794919 -$$<|endoftext|> -TITLE: Is it true that $\mathbb{C}(x) \equiv \mathbb{C}(x, y)$? -QUESTION [15 upvotes]: It is easily seen that any two consecutive entries in the tower of fields given below are not elementarily equivalent in the language of rings: -$$\mathbb{Q} \subseteq \mathbb{Q}(\sqrt{2}) \subseteq \mathbb{R} \subseteq \mathbb{C} \subseteq \mathbb{C}(t).$$ -For example, $\mathbb{C} \not\equiv \mathbb{C}(t)$ since (for example) the sentence $$\forall x \ \exists y \ x = y^2$$ holds in $\mathbb{C}$ but not in $\mathbb{C}(t)$. So it seems natural to ask: are $\mathbb{C}(t)$ and $\mathbb{C}(t_1,t_2)$ elementarily equivalent? More generally, for what fields $\mathbb{F}$ is it true that $\mathbb{F}(t) \equiv \mathbb{F}(t_1,t_2)$? -There are many well-known results involving the elementary equivalence of fields of rational functions. For example, it is known that given a field $K$ which admits a unique ordering, $K(x) \equiv \mathbb{Q}(x)$ implies $K \cong \mathbb{Q}$. 
-I have considered trying to use the Keisler-Shelah isomorphism theorem to prove or disprove that $\mathbb{C}(t) \equiv \mathbb{C}(t_1,t_2)$, but it is not obvious as to whether or not ultrapowers corresponding to $\mathbb{C}(t)$ and $\mathbb{C}(t_1,t_2)$ are isomorphic.
-
-REPLY [7 votes]: Theorem 2.40 in the book Model Theoretic Algebra by Jensen and Lenzing gives a negative answer. This book is a great reference for questions like this; you'll probably find it very interesting. For those without access to the book, I'll summarize the reason that $\mathbb{C}(x) \not\equiv \mathbb{C}(x,y)$.
-A field $K$ is called a $C_i$-field if every homogeneous polynomial over $K$ of degree $d$ in $n$ variables, such that $n>d^i$, has a nontrivial zero in $K^n\setminus \{(0,\dots,0)\}$. It's not too hard to show that a field is $C_0$ if and only if it is algebraically closed.
-Now it's a fact that $K$ is $C_i$ if and only if $K(x)$ is $C_{i+1}$. Jensen and Lenzing don't prove this, instead citing Chapter XI of Ribenboim's book L'arithmetique des corps, which might be difficult to track down... It would be nice if someone could post a more accessible reference. You can actually find the direction "if $K$ is $C_i$ then $K(x)$ is $C_{i+1}$" as Theorem 3.3.9 in this thing I wrote once, but the converse is crucial here.
-Now $K_1 = \mathbb{C}(x)$ is $C_1$ but not $C_0$ (it's not algebraically closed), so $K_2 = \mathbb{C}(x,y)$ is $C_2$ but not $C_1$. Hence there is some $d^2\geq n > d^1$ and some homogeneous polynomial $p(\overline{x})$ in $n$ variables of degree $d$ over $K_2$ which has no nontrivial zero over $K_2$. Fixing $n$ and $d$ and quantifying over the coefficients, the existence of such a polynomial is described by a first-order sentence which is true in $K_2$ but not in $K_1$.
-This argument generalizes to show that $\mathbb{C}(x_1,\dots,x_n)\not\equiv \mathbb{C}(x_1,\dots,x_m)$ for any $n\neq m$.<|endoftext|>
-TITLE: Simplify Product of sines
-QUESTION [6 upvotes]: Is there a way to simplify this product?
-$$
-\sin\left({n} \frac{\pi}{2}\right) \sin\left({n} \frac{\pi}{3}\right) \sin\left({n} \frac{\pi}{4}\right) ...\sin\left({n} \frac{\pi}{n-1}\right)
-$$
-And, is this the correct way to write it?
-$$
- \prod_{m=2}^{n-1} \sin\left(n \frac{\pi}{m}\right)
-$$
-I'm not a professional so I'd appreciate a simple explanation.
-
-REPLY [2 votes]: I don't think so. Using Euler's formula $e^{ix} = \cos x + i \sin x$ your product is related to all the finite series:
-$$ n\pi \times \left( \pm \frac{1}{2} \pm \frac{1}{3} \dots \pm \frac{1}{n-1} \right) \mod 1$$
-Your product is a weighted sum of $e^{[\dots]}$ for all series of this kind, behaving like the Boltzmann partition function. In fact the numbers in question wrap around the interval $[0,1]$ in a somewhat random fashion.
-
-It's unlikely there is any simplification unless we try to estimate this series.
-$$ \sum e^{n \pi i \cdot \left( \pm \frac{1}{2} \pm \frac{1}{3} \dots \pm \frac{1}{n-1} \right)}$$
-That average is very likely to be close to $0$ since these numbers are equidistributed on the unit circle.
-
-A histogram shows a little bit of variance, but basically the same idea:
-
-It's very difficult to say how close to $0$ this result is. However there may be classical results for estimating random-ish sums of this type.<|endoftext|>
-TITLE: Is the empty string always in a finite alphabet?
-QUESTION [5 upvotes]: Is the empty string always an element of an arbitrary finite alphabet?
-I understand that the empty string is part of the Kleene-Star of any alphabet, but is it intrinsically part of any finite alphabet where I don't explicitly mention it?
-For example, if $A=\{a,b\}$, is $\epsilon$, the empty word, in $A$?
Or, would it only be in $A$ if it were specified that $A=\{\epsilon,a,b\}$?
-
-REPLY [5 votes]: The empty string, often called $\varepsilon$, may be created over any alphabet, by forming the string which contains no characters from the alphabet. An alphabet, by definition, only contains characters, which may later be used to create strings. The Kleene star of an alphabet is a language, which is why it is possible for the empty string to exist in there.
-For any alphabet $A$ we have that $\varepsilon\notin A$; this is because $\varepsilon$ is not a character, but rather a string, and specifically the empty string. If you create the alphabet $\{a,b,\varepsilon\}$, then $\varepsilon$ does not represent the empty string but rather is just a character like $a$ and $b$.
-Summary: The empty string is by definition not a character and thus it is not part of any alphabet.<|endoftext|>
-TITLE: Lower limit topology is normal
-QUESTION [6 upvotes]: How do I prove that the space of real numbers, under the lower limit topology, is a normal space?
-I could prove very easily that it is regular, by using an argument of basic sets, but I haven't been able to generalise that argument.
-
-REPLY [5 votes]: HINT: Prove that it's Lindelöf, because every regular Lindelöf space is paracompact and thus normal.
-Alternatively, you can use the following approach. Let us denote the lower limit topology on $\mathbb{R}$ as $\mathbb{R}_\ell$. Suppose $A$ and $B$ are two disjoint closed sets in $\mathbb{R}_\ell$. Then, note that $\mathbb{R}\setminus A$ and $\mathbb{R}\setminus B$ are open and that $A\subset \mathbb{R}\setminus B$ and $B \subset \mathbb{R}\setminus A$. Given any $a \in A$, there is a basic open set $U_a :=[a,a+\rho_a)\subset \mathbb{R}\setminus B$ for some $\rho_a >0$. Similarly, for each $b \in B$, we can find a $\rho_b >0$ such that $V_b:=[b,b+\rho_b) \subset \mathbb{R}\setminus A$. Let
-$U=\bigcup_{A} U_a $ and $V=\bigcup_B V_b$. Clearly $A \subset U$ and $B \subset V$.
Last, we show that $U$ and $V$ are disjoint. Suppose $U_a\cap V_b= [a,a+\rho_a)\cap[b,b+\rho_b) \not= \emptyset$; then $\max\{a,b\}\in U_a\cap V_b$. W.l.o.g. say $a = \max\{a,b\}$. Then $a \in A$ and $a \in V_b \subset \mathbb{R} \setminus A$, a contradiction.<|endoftext|>
-TITLE: How to evaluate this limit about Bernoulli number?
-QUESTION [11 upvotes]: First, we define $\displaystyle I_{1}\left ( x \right )=\frac{\sin x}{x}$, then $\displaystyle \lim_{x\rightarrow 0^+}I_{1}\left ( x \right )=1$, also we have
-\begin{align*}
-I_2\left ( x \right )&=\frac{I_1\left ( x \right )-1}{x^{2}}~,~\lim_{x\rightarrow 0^+}I_2\left ( x \right )=-\frac{1}{6}\\
-I_3\left ( x \right )&=\frac{I_2\left ( x \right )+\dfrac{1}{6}}{x^2}~,~\lim_{x\rightarrow 0^+}I_3\left ( x \right )=\frac{1}{120}\\
-&\cdots \\
-I_n\left ( x \right )&=\frac{I_{n-1}\left ( x \right )-\displaystyle \lim_{x\rightarrow 0^+}I_{n-1}\left ( x \right )}{x^{2}}
-\end{align*}
-Now we have the following questions.
-(1) $I_n(x)$ is related to the Bernoulli numbers, but how do we find the relation?
-(2)Evaluate $\displaystyle \lim_{k\rightarrow +\infty }\left [ \lim_{x\rightarrow 0^{+}}I_{2k}\left ( x \right ) \right ]~,~\lim_{k\rightarrow +\infty }\left [ \lim_{x\rightarrow 0^{+}}I_{2k+1}\left ( x \right ) \right ]~,~k\in \mathbb{Z}.$ - -REPLY [6 votes]: About the second question: using the Taylor seres $$\sin\left(x\right)=\sum_{k\geq0}\frac{\left(-1\right)^{k}x^{2k+1}}{\left(2k+1\right)!} - $$ we note that $$\frac{\sin\left(x\right)}{x}=\sum_{k\geq0}\frac{\left(-1\right)^{k}x^{2k}}{\left(2k+1\right)!} - $$ and so $$\lim_{x\rightarrow0^{+}}I_{1}\left(x\right)=1 - $$ furthermore $$\sum_{k\geq0}\frac{\left(-1\right)^{k}x^{2k}}{\left(2k+1\right)!}-1=\sum_{k\geq1}\frac{\left(-1\right)^{k}x^{2k}}{\left(2k+1\right)!} - $$ and so $$I_{2}\left(x\right)=\frac{\sum_{k\geq1}\frac{\left(-1\right)^{k}x^{2k}}{\left(2k+1\right)!}}{x^{2}}=\sum_{k\geq1}\frac{\left(-1\right)^{k}x^{2\left(k-1\right)}}{\left(2k+1\right)!}\stackrel{x\rightarrow0^{+}}{\rightarrow}-\frac{1}{6} - $$ and so on, hence we have $$I_{n}\left(x\right)=\sum_{k\geq n-1}\frac{\left(-1\right)^{k}x^{2\left(k-n+1\right)}}{\left(2k+1\right)!}\stackrel{x\rightarrow0^{+}}{\rightarrow}\frac{\left(-1\right)^{n-1}}{\left(2n-1\right)!} - $$ then $$\lim_{n\rightarrow\infty}\left[\lim_{x\rightarrow0^{+}}I_{2n}\left(x\right)\right]=\lim_{n\rightarrow\infty}-\frac{1}{\left(4n-1\right)!}=0 - $$ and $$\lim_{n\rightarrow\infty}\left[\lim_{x\rightarrow0^{+}}I_{2n+1}\left(x\right)\right]=\lim_{n\rightarrow\infty}\frac{1}{\left(4n-3\right)!}=0.$$ -About the first question, Mathematica recognizes the series as an hypergeometric function $$I_{n}\left(x\right)=\sum_{k\geq n-1}\frac{\left(-1\right)^{k}x^{2\left(k-n+1\right)}}{\left(2k+1\right)!}=\frac{\left(-1\right)^{n+1}\,_{1}F_{2}\left(1;n,n+\frac{1}{2};-\frac{x^{2}}{4}\right)}{\left(2n-1\right)!}$$ but I don't know if there are some kind of relations with the Bernoulli numbers, probably would be useful more clarifications and details in the question. 
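The closed form $\lim_{x\to0^{+}}I_{n}(x)=\frac{(-1)^{n-1}}{(2n-1)!}$ can also be confirmed with exact rational arithmetic, since the recursion just strips the constant term of the Taylor series of $\sin x/x$ and shifts by $x^{2}$. A short sketch:

```python
from fractions import Fraction
from math import factorial

# Taylor coefficients of sin(x)/x in powers of x^2:  sum_k (-1)^k x^{2k} / (2k+1)!
series = [Fraction((-1) ** k, factorial(2 * k + 1)) for k in range(10)]

limits = []
for n in range(1, 10):
    limits.append(series[0])   # lim_{x->0+} I_n(x) is the constant term
    series = series[1:]        # (I_n(x) - lim) / x^2 just shifts the series

print(limits[:4])  # 1, -1/6, 1/120, -1/5040
```

Both subsequences of limits clearly tend to $0$, consistent with the computation above.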
-Update: A possible relation can be this. We can observe that $I_{n} - $ is the $(n-1)$th coefficient of the sine's Taylor series. A similar approach can be done for the Bernoulli numbers. We consider the generating function of the Bernoulli numbers $$J_{0}\left(x\right)=\frac{x}{e^{x}-1}=\sum_{k\geq0}\frac{B_{k}}{k!}x^{k} - $$ so obviously $$J_{0}\left(x\right)\underset{x\rightarrow0}{\rightarrow}1 - $$ so now we take $$J_{1}\left(x\right)=\frac{\frac{x}{e^{x}-1}-1}{x}=\sum_{k\geq1}\frac{B_{k}}{k!}x^{k-1} - $$ and so $$J_{1}\left(x\right)\underset{x\rightarrow0}{\rightarrow}-\frac{1}{2} - $$ and so on. Iterating the process we can observe that we have a recursive formula for the Bernoulli numbers $$J_{n}=\lim_{x\rightarrow0}\frac{J_{n-1}\left(x\right)-J_{n-1}}{x} - =\frac{B_{n}}{n!}\Rightarrow n!J_{n}=B_{n}.$$<|endoftext|> -TITLE: Has this chaotic map been studied? -QUESTION [53 upvotes]: I have recently been playing around with the discrete map -$$z_{n+1} = z_n - \frac{1}{z_n}$$ -That is, repeatedly mapping each number to the difference between itself and its reciprocal. It shows some interesting behaviour. This map seems so simple/obvious, I highly doubt this has never been analysed before. However, I (and several people I asked) have been unable to turn up any form of literature on it online (partly because it's hard to google maths and partly because a lot of people on the internet ask about "the difference between 'inverse' and 'reciprocal'"). I also couldn't find it in Wikipedia's List of chaotic maps. Does this map have a name I could look up? -I am vaguely aware of Möbius transformations, but clearly this isn't one (although it might be possible to express it as a combination of two or more Möbius transformations). -I'm listing here some things I have observed about the map (some proven, some conjectured), in case they ring any bells for people in terms of related maps or generalisations: - -The map is chaotic on the real line. 
All orbits are unstable, and the map is highly sensitive to initial conditions. Here are the first 1000 iterations starting from each of $4.000001$ to $4.000009$: - -It's not chaotic anywhere else in the complex plane: all trajectories that don't start on the real line will eventually be attracted by the imaginary axis and increase in magnitude without bound. -Before doing that, any trajectory can jump in and out of the unit circle arbitrarily often. If I colour the complex plane depending on the "inside-outside" pattern of the trajectory, I get a nifty fractal of deformed circles covering the real axis (repeated colours are due to a limited palette): - -I believe that the number of orbits of period $n$ is given by OEIS A001037 (all of which are on the real line), provided you count the fixed points $\pm \infty$. -As pointed out by Mark McClure in the comments, the map -$$ z_{n+1} = z_n + \frac{1}{z_n} $$ -is identical to the one I'm looking at, but with the roles of the real and imaginary axis swapped. This map, as well as $z_{n+1} = 2z - 1/z$, has a short section in Alan F. Beardon's Iteration of Rational Functions, but that doesn't go far beyond what I've mentioned above, and doesn't help me in finding further literature about the maps at this point. - -Whether this has any use or not, analysing this map is a nice exercise in recreational maths for me, but I'm somewhat reaching the limits of my capabilities and would like to find out if anything else is known about this map (or whether I'll have to prove my conjectures myself :)). - -REPLY [7 votes]: This is the "Boole map", described by George Boole (yes, that George Boole) in his 1857 "On the Comparison of Transcendents, with Certain Applications to the Theory of Definite Integrals". A standard modern reference is the 1973 paper by R. Adler and B. Weiss, "The ergodic measure preserving transformation of Boole". 
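The behaviour described in the question is easy to reproduce with a few lines of code; a minimal sketch (function name mine):

```python
def boole_orbit(z, n):
    """Iterate the map z -> z - 1/z, returning the orbit [z_0, ..., z_n]."""
    orbit = [z]
    for _ in range(n):
        z = z - 1 / z
        orbit.append(z)
    return orbit

# Sensitivity on the real line: nearby starting points separate quickly.
a = boole_orbit(4.000001, 200)
b = boole_orbit(4.000002, 200)
print(max(abs(x - y) for x, y in zip(a, b)))   # no longer small

# Off the real axis, the orbit is drawn to the imaginary axis and escapes.
z = boole_orbit(complex(4, 0.001), 400)[-1]
print(z)   # imaginary part has grown large
```

Each close approach to $0$ multiplies the imaginary part by roughly $1+1/|z|^2$, which is why even a tiny imaginary perturbation eventually dominates.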
It shows up in the answer to this MSE question.<|endoftext|>
-TITLE: Good substitution for this integral
-QUESTION [5 upvotes]: What is $$\int \frac{4t}{1-t^4}dt$$ and is there some kind of substitution which might help? Note that here $t=\tan(\theta)$.
-
-REPLY [3 votes]: Doesn't this work?
-$$\int \frac{4t}{1-t^4}dt=\int \frac{2t}{1+t^2}-\frac{-2t}{1-t^2} dt=\ln \left|\frac{1+t^2}{1-t^2}\right|+C$$
-Also the $t=\tan \theta$ substitution works quite well. $$\int \frac{4\tan \theta}{1-\tan^4 \theta}\sec^2 \theta \,d \theta=\int \frac {4 \tan \theta }{1-\tan^2 \theta} d\theta=\int \frac{1+\tan \theta}{1-\tan \theta}-\frac{1-\tan \theta}{1+\tan \theta} \,d\theta$$
-$$\int \frac{1+\tan \theta}{1-\tan \theta}-\frac{1-\tan \theta}{1+\tan \theta} \,d\theta=\int\frac{\cos \theta+\sin \theta}{\cos \theta-\sin \theta}-\frac{\cos \theta-\sin \theta}{\cos \theta+\sin \theta}\,d\theta$$
-Note that $(\sin \theta+\cos \theta)'=\cos \theta-\sin \theta$, $(\cos \theta-\sin \theta)'=-\sin \theta-\cos \theta$.
-I think you can continue from here.<|endoftext|>
-TITLE: Does there exist a computable number that is normal in all bases?
-QUESTION [11 upvotes]: Following up on this exchange with Marty Cohen...
-Almost all numbers are normal in all bases (absolutely normal), but there are only a countable number of computable numbers, so it is plausible that none of them are absolutely normal. Now I don't expect to be able to prove this since it would imply $\pi$, $\sqrt{2}$, etc. are not absolutely normal. Also I don't expect to be able to find a particular computable number that is normal in all bases, since Marty states none are known. But is it possible to show non-constructively that there is some computable number which is absolutely normal?
-
-REPLY [4 votes]: Below are a couple of papers for what you want. For more, google computable absolutely normal.
-Verónica Becher and Santiago Figueira, An example of a computable absolutely normal number, Theoretical Computer Science 270 #1-2 (6 January 2002), 947-958.
-Verónica Becher, Pablo Ariel Heiber, and Theodore A. Slaman, A computable absolutely normal Liouville number, Mathematics of Computation 84 #296 (November 2015), 2939-2952.<|endoftext|> -TITLE: What is a probability distribution? -QUESTION [5 upvotes]: I have a couple of fundamental questions about probability distributions, a term that is thrown around a lot. In my undergraduate courses, the term itself was not actually given a definition; rather, we defined the PMF (and PDF), then said "this is an example of a probability distribution", and "here's another example of a probability distribution" (when referring to the common Normal Distribution, Binomial Distribution, etc.). -The 'definition' according to Wikipedia is that the probability distribution "usually refers to the more complete assignment of probabilities to all measurable subsets of outcomes, not just to specific outcomes or ranges of outcomes". - -When we say the Normal distribution (for example) is a "probability distribution", do we mean to say that the Normal distribution is a family of "probability distributions" with similar properties (shapes)? -Why is the probability distribution defined in this way: when it is completely characterized by the PMF (or PDF)? It seems redundant to me. Do we need a (quote unquote) probability distribution in order to completely assign probabilities to all measurable subsets of outcomes -- or could this be done with just the probability space $(\Omega,\mathcal{F},P)$? - -REPLY [4 votes]: The distribution of a random variable $X : \Omega \to \mathbb{R}$ is simply the probability measure $P\circ X^{-1}$ on $(\mathbb{R}, \mathcal{B}(\mathbb{R}))$, that is the function: -$$(P\circ X^{-1})(A) = P(X^{-1}(A)) =: P(X\in A)$$ -If $X$ is discrete, then it is completely specified by the PMF. If $X$ is continuous, then it is completely specified by a PDF.
And in any case it is completely specified by the cumulative distribution function $x\mapsto P(X\leq x)$. Conversely, the PMF, the PDF (up to "equivalence") and the CDF are all determined by it. In a general context it is more useful than PMFs or PDFs, because it makes sense even for random variables which are neither discrete nor continuous. -You can even be more general: the distribution still makes sense if you replace $(\mathbb{R}, \mathcal{B}(\mathbb{R}))$ by any other measurable space, even if the notion of a CDF makes no sense.<|endoftext|> -TITLE: Norm of a bounded linear functional as the reciprocal of the distance from zero to a hyperplane -QUESTION [5 upvotes]: Let $(X,\|\;\|)$ be a normed vector space over $K$ and let $f\in B(X,K)$ (bounded linear functional, $f\neq0$). Let $L=\{x\in X: f(x)=1\}$. I want to prove that: -$$ -\|f\|=\frac{1}{d(0,L)} -$$ -My attempt goes like this: -Since $f$ is bounded, it is clear that $|f(x)|\le\|f\|\|x\|\;\;\forall x\in X$. So, -$$ -1\le\|f\|\|x\|\;\forall x\in L -$$ -and, -$$ -\frac{1}{\|f\|}\le\|x\|\;\;\forall x\in L -$$ -thus -$$ -\frac{1}{\|f\|}\le\inf_{x\in L}(\|x\|)=\inf_{x\in L}(d(0,x))=d(0,L) -$$ -so -$$ -\|f\|\ge\frac{1}{d(0,L)} -$$ -Any ideas for the other part would be appreciated, since I couldn't figure out how to show the reverse inequality $\|f\|\le\frac{1}{d(0,L)}$. - -REPLY [5 votes]: By definition of $\Vert f \Vert$, for all $\epsilon >0$, one can find $x$ with $\Vert x \Vert=1$ and $$\vert f(x) \vert \ge \frac{\Vert f \Vert}{1+\epsilon}$$ -Now take $y=\frac{x}{f(x)}$. You have $f(y)=1$, hence by definition of $d(0,L)$, $\Vert y \Vert \ge d(0,L)$ or $$1=\Vert x \Vert \ge \vert f(x) \vert d(0,L) \ge \frac{\Vert f \Vert}{1+\epsilon} d(0,L)$$ -As this is true for all $\epsilon >0$, you finally get $$1 \ge \Vert f\Vert d(0,L)$$ as desired.<|endoftext|> -TITLE: Can every element of a finite field be written as a sum of two non-squares?
-QUESTION [7 upvotes]: We know that any element of a finite field $\mathbb{F_{q}}$ ($q$ odd prime power) can be written as a sum of two squares - is the same true for non-squares? Can any element of a (sufficiently large) finite field be written as a sum of two non-squares? -I know that the above is not true in general, e.g. in $\mathbb{F_{3}}$, the only non-square is $2$ and so $2$ itself cannot be written as a sum of two non-squares. However, if $q$ is large enough could the above be true? -If it is true, then could anyone provide any hints on how I could prove it? -Many thanks! - -REPLY [3 votes]: For prime fields it's true for $p=4k+1>5$, and for all elements except $0$ for $p=4k+3$. - -If $p=4k+3$, then $0$ cannot be written in this way. -Now for non-zero elements. It's equivalent to prove that each element is the sum of two non-zero squares. - -2a. For squares use the formula -$$ -a^2=\left(\frac35a\right)^2+\left(\frac45a\right)^2. -$$ -2b. Let $x$ be a non-square without such a decomposition. Then, of course, all non-squares lack such a decomposition. -Since $x-1$ is a non-square (if not, then $x=1+a^2$), then $x-1$, $x-2$, $\dots$, $2$, $1$ are non-squares. Contradiction.<|endoftext|> -TITLE: Is modular multiplication under a prime modulus uniformly distributed? -QUESTION [5 upvotes]: Given a prime $p$ and $m \in Z_p^*$. -Assume we draw $a \stackrel{u}{\in} Z_p^*$ uniformly at random. -Will $a \cdot m \bmod p$ be distributed uniformly over $Z_p^*$? - -REPLY [3 votes]: Yes, $am$ will be distributed uniformly modulo $p$. For any $k \in \mathbb Z^*_p$, the chance of getting $k$ is $\frac1 {p-1}$, because it happens for only one value of $a$, namely when $a \equiv m^{-1}k \pmod p$. -Note that if $p$ is not prime, then the same result holds only when $\gcd(m,p)=1$.<|endoftext|> -TITLE: Suppose $f$ is continuous on $[0,2]$ and $f(0) = f(2)$. For which $a\in(0,2)$ must there exist $x,y\in[0,2]$ so that $|y − x| = a$ and $f(x) = f(y)$?
-QUESTION [12 upvotes]: Suppose $f$ is continuous on $[0,2]$ and $f(0) = f(2)$. For which $a\in(0,2)$ must there exist $x,y\in[0,2]$ so that $\lvert y − x\rvert = a$ and $f(x) = f(y)$? - -I'm really unsure how to approach this problem... we did a similar problem where $a=1$, by defining $g(x) = f(x+1)-f(x)$ on $[0,1]$, then applying the IVT. -Is this problem approached in a similar way? If not, what's a good starting point? Thank you for any ideas! - -REPLY [2 votes]: The answer is $a \in \{\frac 2 n \mid n\in\mathbb N, n\gt 1\}$. -Proof: Take some $a \in \{\frac 2 n \mid n\in\mathbb N, n\gt 1\}$ and suppose for purposes of contradiction that $f$ is continuous on $[0,2]$ with $f(0)=f(2)$ but there are no $x, y \in [0,2]$ such that $|x-y|=a$ and $f(x)=f(y)$. -Consider the function $g(x)=f(x+a)-f(x)$, which is well defined on $[0,2-a]$. -Then $g(x) \neq 0$ throughout $[0,2-a]$, and due to the intermediate value theorem it must either be positive throughout $[0,2-a]$ or negative throughout $[0,2-a]$. -If $g(x)\gt0$ then $f(0)\lt f(a)\lt f(2a)\lt\cdots\lt f(na)=f(2)$, which contradicts the condition $f(0)=f(2)$. Similarly if $g(x)\lt0$ then $f(0)\gt f(a)\gt f(2a)\gt\cdots\gt f(na)=f(2)$. -So no such function $f$ can exist if $a=\frac 2 n$. -On the other hand suppose $a \not \in \{\frac 2 n \mid n\in\mathbb N, n\gt 1\}$. To show the condition need not be satisfied in this case define $f(x) = \cos(\frac{2\pi x} a) +\frac x 2(1-\cos(\frac {4\pi} a))$. -Then if $|x-y|=a$, $\cos(\frac{2\pi x} a)=\cos(\frac{2\pi y}a)$ and $|f(x)-f(y)|=\frac a 2(1-\cos(\frac{4\pi} a)) \ne 0$, since $\cos(\frac{4\pi} a)\ne1$. -But $f(2)=\cos(\frac{4\pi} a) + (1-\cos(\frac{4\pi} a))=1=f(0)$, so $f$ is a counterexample.<|endoftext|> -TITLE: Is it true that $V$ and $H_{\omega_1}$ agree on the truth value of $\Sigma_1$ sentences?
-QUESTION [6 upvotes]: I want to see if the following result is correct or not: - -Let $\varphi(x)$ be a formula in $\mathcal{L}_{\mathrm{ZF}}$ with only bounded quantifiers such that $\exists x\,\varphi(x)$ holds in $V$, then $\exists x\,\varphi(x)$ holds in $H_{\omega_1}$. - -I wrote a proof for it using the following reasoning: - -$\varphi$ is absolute for all transitive models; let $\exists x\,\varphi(x)$ be true, then $\varphi(a)$ holds for some $a$. Let $\vartheta\in\mathbf{ON}$ such that $a\in V_\vartheta$. Let $M\prec V_\vartheta$ be a countable elementary substructure, $\bar M$ be the Mostowski collapse ($\bar M\cong M$), then $\bar M$ is transitive and countable, hence hereditarily countable, so $\bar M\subseteq H_{\omega_1}$, so there is a witness for $\varphi(x)$ in $H_{\omega_1}$. - -My questions are: (1) is the result above correct? (2) Is the proof given correct? (3) If the result is correct and the proof is not correct, what would be a correct proof for the result? - -REPLY [3 votes]: The proof is correct, but I'd change the order of a few things and extend a bit to make it slightly clearer: - -We assume that $\exists x\varphi(x)$ is true in $V$, so there is some $a$ such that $\varphi(a)$ holds. Let $\vartheta$ an ordinal such that $a\in V_\vartheta$, since $V_\vartheta$ is transitive and $\varphi$ is a bounded formula, $V_\vartheta\models\varphi(a)$ and therefore $V_\vartheta\models\exists x\varphi(x)$. Let $M$ be a countable elementary submodel of $V_\vartheta$ and let $\overline M$ be its Mostowski collapse, then $\overline M\in H_{\omega_1}$. -Finally, since $M$ is an elementary submodel of $V_\vartheta$, and $M\cong\overline M$ we have that $\overline M\models\exists x\varphi(x)$. So there is some $a'\in\overline M$ such that $\overline M\models\varphi(a')$, but again by absoluteness of bounded formulas between transitive sets, $H_{\omega_1}\models\varphi(a')$ so $H_{\omega_1}\models\exists x\varphi(x)$ as wanted. 
- -Essentially the same proof can be extended to show that $H_\kappa\prec_{\Sigma_1}V$, as remarked by GME in the comments.<|endoftext|> -TITLE: Example of topological spaces $A \subset X$ with $X \setminus A$ not homeomorphic to $(X/A) \setminus (A/A)$ -QUESTION [5 upvotes]: This is a homework question. I am asked to show that if $A \subset X$ is closed, then $X \setminus A$ is homeomorphic to $(X/A) \setminus (A/A)$. I have done this, and I now have to show by example that this is false if we do not require that $A$ is closed. Could someone point me in the right direction in finding such an example? - -REPLY [5 votes]: HINT: Take $X=\Bbb R$ and $A=\Bbb Q$. (On further thought: it may be a little easier to take $A=\Bbb R\setminus\Bbb Q$.)<|endoftext|> -TITLE: Have humans proved Schinzel's conjecture for one specific rational number? -QUESTION [5 upvotes]: I asked the Tooth Fairy about Schinzel's conjecture that if $x$ is a positive rational number, then it can be represented as $$\frac{p + 1}{q + 1}$$ where $p$ and $q$ are primes, for infinitely many pairs of primes $p$ and $q$. She said she almost came up with a proof two hundred years ago, that Skogsrået beat her to it but she never checked the wood nymph's proof. -Obviously humans have never proved or disproved it. But I can't find any reference in the literature to humans proving a specific case, like for example $$x = \frac{1}{2}.$$ Obviously the denominator doesn't have to be a power of $2$, any even number will do, so we can do $$\frac{1}{2} = \frac{6}{12} = \frac{12}{24} = \frac{24}{48} = \ldots$$ -Have humans proved a specific case like this one? This riddle has me stumped. - -EDIT: I forgot the important little detail about infinitely many pairs of primes. - -REPLY [6 votes]: Yes, for $x = 1$. Not that it matters if you believe me, but I thought of that before I read the comments. Then $p = q$. 
I also thought about $x = -1$, in which case $p = -(q + 2)$; to my pleasant surprise this leads to the twin prime conjecture (I don't suppose you demons actually know if this conjecture is true or false). -But you specifically said "positive". You did not specifically say "nontrivial", which is what $x = 1$ is. Proving a nontrivial case of this conjecture gives you two for the price of one: if it's true for $x$, it's also true for the reciprocal of $x$ (and as you already know, $1$ is its own reciprocal). -If you had specified "nontrivial" then I would be posting all this as a comment rather than an answer. We humans know that there are infinitely many primes of the form $4k + 1$, for $k$ an integer. I believe this fact can be used to prove at least two specific cases of the conjecture. But $x = 1$ is quite sufficient to answer your question as you've worded it.<|endoftext|> -TITLE: Prove that $E(X) = \int_{0}^{\infty} P(X>x)\,dx = \int_{0}^{\infty} (1-F_X(x))\,dx$. -QUESTION [8 upvotes]: Let $X$ be a continuous non-negative random variable (i.e. $R_X$ has only non-negative values). Prove that $$E(X) = \int_{0}^{\infty} P(X>x)\,dx = \int_{0}^{\infty} (1-F_X(x))\,dx$$ where $F_X(x)$ is the CDF for $X$. Using this result, find $E(X)$ for an exponential ($\lambda$) random variable. - -I know that by definition, $F_X(x) = P(X \leq x)$ and so $1 - F_X(x) = P(X>x)$. -The solution is: -$$\int_{0}^{\infty} \int_{x}^{\infty} f(y)\,dy\, dx = \int_{0}^{\infty} \int_{0}^{y} f(y)\,dx\, dy = \int_{0}^{\infty} yf(y)\, dy.$$ -I'm really confused as to where the double integral came from. I'm also rusty on multivariate calc, so I'm confused about the swapping of the limits $x$ and $\infty$ to $0$ and $y$. -Any help would be greatly appreciated!
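For what it's worth, the identity does check out numerically in the exponential case; here is a rough Python sketch I tried (the rate $\lambda=2$, the step size, and the cutoff are arbitrary choices of mine):

```python
import math

lam = 2.0  # rate of the exponential distribution; E(X) = 1/lam

def tail(x):
    # P(X > x) = 1 - F_X(x) = exp(-lam * x) for an exponential(lam) variable
    return math.exp(-lam * x)

# left Riemann sum approximating the tail integral  int_0^inf P(X > x) dx
dx = 1e-4      # step size (arbitrary)
upper = 50.0   # cutoff; the tail is negligible beyond this point
integral = sum(tail(k * dx) for k in range(int(upper / dx))) * dx

print(integral, 1 / lam)  # both are 0.5 up to the discretization error
```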
- -REPLY [6 votes]: First note that \begin{align} -P\{X>x\}&=E(1_{X>x})\\ -&=\int_0^{\infty}1_{y>x}f(y)\,dy -\end{align} -Therefore -\begin{align} -\int_0^{\infty}P\{X>x\}\,dx&=\int_0^{\infty}\int_0^{\infty}1_{y>x}f(y)\,dy\,dx\\ -&=\int_0^{\infty}\int_0^{\infty}1_{y>x}f(y)\,dx\,dy\\ -&=\int_0^{\infty}yf(y)\,dy -\end{align}<|endoftext|> -TITLE: How to show that $(W^\bot)^\bot=W$ (in a finite dimensional vector space) -QUESTION [7 upvotes]: I need to prove that if $V$ is a finite dimensional vector space over a field $K$ with a non-degenerate inner product and $W\subset V$ is a subspace of $V$, then: -$$ -(W^\bot)^\bot=W -$$ -Here is my approach: -If $\langle\cdot,\cdot\rangle$ is the non-degenerate inner product of $V$ and $B=\{w_1, \dots , w_n\}$ is a basis of $V$ where $\{w_1, \dots , w_r\}$ is a basis of $W$, then I showed that -$$ -\langle u,v\rangle=[u]^T_BA[v]_B -$$ -for a symmetric, invertible matrix $A\in\mathbb{R}^{n\times n}$. Then $W^\bot$ is the solution space of $A_rx=0$, where $A_r\in\mathbb{R}^{r\times n}$ is the matrix of the first $r$ rows of $A$. Is all this true? -I tried to exploit this but wasn't able to do so. How to proceed further? - -REPLY [3 votes]: Hint It follows from the definition that $W \subset (W^\perp)^\perp$. -Hint 2: For every subspace $U$ of $V$ you have -$$\dim(U)+ \dim(U^\perp)=\dim(V)$$ -What does this tell you about $\dim(W)$ and $\dim (W^\perp)^\perp$?<|endoftext|> -TITLE: Prove whether series converges or not? -QUESTION [9 upvotes]: Does anyone know how to determine with proof whether the series -$$\sum_{n=1}^\infty\frac{1}{n^{2+\cos(2\pi\ln(n))}}$$ -converges? - -REPLY [3 votes]: I am not completely sure in one argument below, but I think the series diverges. Let $\delta > 0$ be small and let $\varepsilon > 0$ be such that $\cos(\pi\pm\delta)=-1+\varepsilon$. Moreover, put $\tau := \delta/(2\pi)$. For $k\in\mathbb N$ define the set $\Delta_k^\tau := (e^{k+1/2-\tau},e^{k+1/2+\tau})$.
Then we have $n\in\Delta_k^\tau$ if and only if $2+\cos(2\pi\ln(n))\in [1,1+\varepsilon)$, and the series is at least as large as -$$ -\sum_k\sum_{n\in\Delta_k^\tau}\frac 1 {n^{1+\varepsilon}}. -$$ -And now, I am not 100% sure. I claim that -$$ -\sum_{n\in\Delta_k^\tau}\frac 1 {n^{1+\varepsilon}}\,\ge\,\int_{\Delta_k^\tau}\frac 1 {x^{1+\varepsilon}}\,dx. -$$ -If not, then twice the left-hand side should do. Now, -$$ -\int_{\Delta_k^\tau}\frac 1 {x^{1+\varepsilon}}\,dx = \frac 1 {\varepsilon e^{\varepsilon/2}}\left(\frac 1{e^{-\varepsilon\tau}} - \frac 1{e^{\varepsilon\tau}}\right)\cdot e^{-\varepsilon k} = \frac 2 {\varepsilon e^{\varepsilon/2}}\sinh(\varepsilon\tau)e^{-\varepsilon k}. -$$ -Summing over $k$, we get -$$ -\frac 2 {\varepsilon e^{\varepsilon/2}}\sinh(\varepsilon\tau)\frac 1 {1-e^{-\varepsilon}} = \frac 1 \varepsilon \cdot\frac{\sinh(\varepsilon\tau)}{\sinh(\varepsilon/2)} = \frac 1 \pi\cdot\frac{\delta}{\varepsilon}\cdot\frac{\sinh(\varepsilon\tau)}{\varepsilon\tau}\cdot\frac{\varepsilon/2}{\sinh(\varepsilon/2)}. -$$ -Now, we let $\delta\to 0$. Then, of course, also $\tau\to 0$ and $\varepsilon\to 0$. So, the last two factors tend to one. But $\delta/\varepsilon\to\infty$. Indeed, we have $\cos(\delta) = 1-\varepsilon$, so $\delta = \arccos(1-\varepsilon)$ and -$$ -\lim_{x\downarrow 0}\frac{\arccos(1-x)}{x} = \infty. -$$ -This (hopefully) shows that the series diverges.<|endoftext|> -TITLE: Does blow up of subscheme in special fiber change the generic fiber? -QUESTION [5 upvotes]: Let $X\to \mathrm{Spec}(R)$ be a finite type scheme over a DVR $R$; choose a closed subscheme $Y$ of the closed fiber $X_0$ and blow up $Y$ in $X$. Will the generic fiber always remain the same? - -REPLY [4 votes]: Note that passing to the generic fiber is flat (it's a localization), and blowups commute with flat base change. -Therefore, the new generic fiber is the blowup of the old generic fiber $X_\eta$ at the intersection of $X_\eta$ with $Y$.
This is empty, so you're blowing up nothing, and you get an isomorphism. -This is just a long-winded way of saying "when you blow up a subscheme, the complement of that closed set never ever changes."<|endoftext|> -TITLE: Functions with "ugly" inverses -QUESTION [5 upvotes]: Inspired by this post: -I was amazed to see that (at least according to WolframAlpha) the inverse of such a nice and simple function as $f(x)=x^3+x$ is: $$ f^{-1}(x) = \sqrt[3]{\frac{2}{3( \sqrt{81x^2+12}- 9x)}} - \sqrt[3]{\frac{\sqrt{81x^2+12}- 9x}{ -18}} $$ -Now there may be a way to simplify that that I'm not seeing... -But regardless, I was wondering if there are any other seemingly simple functions with crazy, ugly inverses. -Is there any way to know ahead of time whether a function will have a nice inverse? -I know that not all functions even have inverses (over the reals). But is there any rhyme or reason as to why such a simple function would have such a crazy complex inverse? And is there a more general criterion for knowing whether other functions will be similar in this respect? -EDIT: To make this a little easier to answer, I've been told I need a better definition than ugly. Let's go with non-analytic, just because I'm interested. But if anyone has a better idea, please let me know. - -REPLY [4 votes]: If you treat the function $f(x)=x+x^3$ as a series expansion and perform a series reversion then -$$ -f^{-1}(x) = x - x^3 + 3x^5 - 12 x^7 + 55x^9 - 273 x^{11} + \cdots -$$ -alternatively -$$ -f^{-1}(x) = \sum_{n=0}^\infty \binom{3n}{n}\frac{(-1)^n x^{2n+1}}{2n+1} -$$ -with the coefficients identified thanks to OEIS entry A001764. The fact that the inverse series expansion has all integer coefficients given by a simple formula is quite nice in my opinion, but if you look in the OEIS entry there are many combinatorial interpretations of this series. Viewed in the world of generating functions this is a 'nice' inverse.
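These coefficients are easy to check by direct computation: composing the claimed series with $f(t)=t+t^3$ and truncating should give back $x$. A short sketch in plain Python (the truncation order $13$ is an arbitrary choice):

```python
N = 13  # truncate all series at degree N-1 (arbitrary choice)

# claimed inverse series g(x) = x - x^3 + 3x^5 - 12x^7 + 55x^9 - 273x^11,
# stored as g[k] = coefficient of x^k
g = [0] * N
for k, c in [(1, 1), (3, -1), (5, 3), (7, -12), (9, 55), (11, -273)]:
    g[k] = c

def mul(p, q):
    # product of two truncated power series, modulo x^N
    r = [0] * N
    for i, a in enumerate(p):
        if a:
            for j, b in enumerate(q):
                if i + j < N:
                    r[i + j] += a * b
    return r

# f(g(x)) = g(x) + g(x)^3 should reduce to x modulo x^13
fg = [a + b for a, b in zip(g, mul(mul(g, g), g))]
print(fg)  # [0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0], i.e. f(g(x)) = x + O(x^13)
```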
The function can be written in two other ways -$$ -f^{-1}(x) = \frac{2}{\sqrt{3}}\sinh\left(\frac{1}{3}\mathrm{arcsinh}\left(\frac{3\sqrt{3}x}{2}\right)\right) -$$ -and -$$ -f^{-1}(x) = x \;_2F_1\left( \frac{1}{3},\frac{2}{3}; \frac{3}{2};-\frac{3^3 x^2}{2^2}\right) -$$ -In the language of hyperbolic trig functions the $\sinh$ and $\mathrm{arcsinh}$ would almost cancel out if it weren't for the factor of $1/3$; that's at least interesting, but not pretty. In the language of hypergeometric functions, this inverse is quite beautiful and mysterious. Polynomial inverses often generate nice hypergeometric representations, and for higher orders that is one of the few languages capable of expressing the inverses (see my answer to this post for an example; note the $\frac{5^5}{4^4}$ pattern continues). -The reason it looks so ugly in its surd form is probably because the function is not as special when written in that language. (I don't think many things look pretty with roots really.)<|endoftext|> -TITLE: How to show every field is a Euclidean Domain. -QUESTION [16 upvotes]: I'm having trouble proving this. This is what I have so far: -Let $F$ be a field. -Let $v(x) = 1$ for all $x$ not equal to $0$. -So if we let $x$ be in $F$ where $x$ is not zero, then we can write $x$ as: -$x=qy+r$ for some $y$ in $F$. -If $r$ is not zero then $v(r)=1$. -Not sure what to do from here. I know eventually I'm supposed to get no remainder $(r=0)$ but I'm stuck at applying the definition and valuation map. - -REPLY [30 votes]: You're probably overthinking it. Let $q = x \cdot y^{-1}$, which always exists, because $F$ is a field, and $y \ne 0$. So let $r$ be zero, and you don't need to worry at all about the valuation.<|endoftext|> -TITLE: Sequential continuity implies continuity in the weak topology on a normed space -QUESTION [6 upvotes]: Let $X$ be a normed vector space (over $\mathbb{R}$ or $\mathbb{C}$) and let $f$ be a linear functional on $X$ that is not necessarily continuous.
If for any sequence $(x_n)$ that converges to $x$ weakly, we have $\lim f(x_n)=f(x)$, does it follow that $f$ is continuous in the weak topology? -In other words, does sequential continuity imply continuity in the weak topology? -(We know that the weak topology need not be first countable, so a priori we cannot characterise continuity of linear functionals in terms of sequences.) - -REPLY [5 votes]: Every norm-continuous linear map between normed spaces is weakly continuous (e.g. by universal properties of the weak topologies). Norm continuity follows from weak-sequential continuity because the norm and the weak topology have the same bounded sets. -For locally convex spaces one has to be more careful.<|endoftext|> -TITLE: question on equivalent ideas of absolute continuity of measures -QUESTION [6 upvotes]: The measure $\nu$ is absolutely continuous with respect to $\mu$ if for each $A$, $\mu(A)=0$ implies $\nu(A)=0$ (indicated by $\nu \ll \mu$). -There is an $\epsilon$-$\delta$ idea related to this definition: - -If $\nu$ is finite, then: - $\nu \ll \mu$ $\iff$ for every $\epsilon$ there exists a $\delta$ satisfying $\nu(A)<\epsilon$ if $\mu(A)<\delta$ - -What is the necessity of finiteness of $\nu$? -What happens if $\nu$ is not finite? -Is there any example to show that the $\epsilon$-$\delta$ definition does not hold if $\nu$ is infinite, even if $\nu\ll\mu$? - -REPLY [4 votes]: One direction of the equivalence always holds, that is, if the $\epsilon$-$\delta$ condition holds, then $\nu\ll\mu$, because if $\mu(A)=0$, then for all $\epsilon>0$ and its corresponding $\delta>0$ we have $\mu(A)<\delta\implies \nu(A)<\epsilon$, and hence $\nu(A)=0$. -To see why the finiteness of $\nu$ is important, we therefore must examine the other direction.
We can trivially contradict it if we take $\mu$ to be a measure attaining arbitrarily small positive values, such as the Lebesgue measure, because then we can consistently define a $\mu$-absolutely-continuous measure by -$$\nu(A) = \begin{cases}0 & \mu(A)=0\\ \infty & \mu(A)>0\end{cases}$$ -However, to gain more insight, we may examine the proof of the other direction when $\nu$ is finite, as presented, e.g., in this answer based on Folland's Real Analysis. The proof hinges on the continuity from above of finite measures, and unsurprisingly, the measure $\nu$ we defined above is a good counterexample to that property.<|endoftext|> -TITLE: Find a polynomial with integer coefficients -QUESTION [6 upvotes]: Find a polynomial $p$ with integer coefficients for which $a = \sqrt{2} + \sqrt[3]{2}$ is a root. That is, find $p$ such that for some non-negative integer $n$, and integers $a_0$, $a_1$, $a_2$, ..., $a_n$, $p(x) = a_0 + a_1 x + a_2 x^2 + ... + a_n x^n$, and $p(a) = 0$. -I do not know how to solve this. It is very challenging. Also, if you name any theorem please describe it in a way that is easy to understand. If you just name it, I won't be able to understand it. (My math might not be/is not as good as yours.) -Thanks for any help! - -REPLY [4 votes]: Here's a general approach to this kind of problem. -[NOTE: I misread the question as involving $\sqrt{2}$ and $\sqrt[3]{a}$ instead of $\sqrt{2}$ and $\sqrt[3]{2}$, whose sum is being called $a$. This means that some of the specific algebraic calculations that follow aren't exactly the ones the original questioner needs, but the overall structure is the same whatever you're taking the cube root of so I'll leave the mistake in. You could just put $a=2$ in what I wrote to get an answer to the original question.] -Consider those roots -- the square root of 2, and the cube root of $a$. There are actually two square roots of 2 (which we usually write as $\sqrt{2}$ and $-\sqrt{2}$, but as far as equation-solving goes they're interchangeable).
And there are three cube roots of $a$; if we write $\omega=(-1+i\sqrt{3})/2$ they are $\sqrt[3]{a},\omega\sqrt[3]{a},\omega^2\sqrt[3]{a}$. (The point is that $1,\omega,\omega^2$ are the things whose cube is 1, just as $1,-1$ are the things whose square is 1.) -If we have an integer polynomial with $\sqrt{2}+\sqrt[3]{a}$ as a root, then it "can't tell the difference" between that and, say, $-\sqrt{2}+\omega^2\sqrt[3]{a}$, so that had better be a root as well. (If you want more details, the technical term to look up is "conjugate" or, more specifically, "Galois conjugate".) -Therefore, an integer polynomial with $\sqrt{2}+\sqrt[3]{a}$ as a root must have all six of these "conjugates" as roots. So we have -$$(x-(\sqrt{2}+\sqrt[3]{a}))(x-(-\sqrt{2}+\sqrt[3]{a}))(x-(\sqrt{2}+\omega\sqrt[3]{a}))(x-(-\sqrt{2}+\omega\sqrt[3]{a}))(x-(\sqrt{2}+\omega^2\sqrt[3]{a}))(x-(-\sqrt{2}+\omega^2\sqrt[3]{a}))$$ -and now expanding this out gives the polynomial we need. -In practice it may be easier first to consider the cubic polynomial -$$(x-(\sqrt{2}+\sqrt[3]{a}))(x-(\sqrt{2}+\omega\sqrt[3]{a}))(x-(\sqrt{2}+\omega^2\sqrt[3]{a}))$$ -which turns out, if I've done my algebra right, to equal $x^3-3\sqrt{2}x^2+6x-2\sqrt2-a$. And then we multiply this by its conjugate (with the signs of all the $\sqrt2$s flipped) to get the degree-6 polynomial we need. -So the general recipe is: take the root you need, find all its conjugates, and multiply together all the corresponding $(x-\textrm{root})$ factors. The resulting polynomial's coefficients will all come out rational.<|endoftext|> -TITLE: Does the set of diffeomorphisms which are induced by flows form a group? -QUESTION [8 upvotes]: Let $M$ be a smooth manifold. -Consider the set of diffeomorphisms which are induced by flows of vector fields (which are not time-dependent). -Is this set a subgroup of $\text{Diff}(M)$?
-(Note that not every diffeomorphism which is isotopic to the identity is induced by a flow of a vector field, see here for details). -"A naive attempt": -Maybe it's possible to construct a counter-example when taking $M=\mathbb{S}^2$. Every vector field on $\mathbb{S}^2$ vanishes at some point, hence every flow-diffeomorphism has a fixed point. Maybe we can find two vector fields, such that the composition of their flows is a diffeomorphism without fixed points. - -REPLY [8 votes]: The relevant theorem is -Theorem (W.Thurston). Let $M$ be a smooth compact manifold, $Diff_o(M)$ is the identity component of the diffeomorphism group of $M$. Then the group $Diff_o(M)$ is simple (as an abstract group). -You can find a proof in -A. Banyaga, The structure of classical diffeomorphism groups. Mathematics and its Applications, 400. Kluwer Academic Publishers Group, Dordrecht, 1997. -Given this, let $G< Diff_o(M)$ denote the subgroup generated by the set $F_M$ of diffeomorphisms given by flows of time-independent vector fields on $M$. It is clear that this subgroup is normal and nontrivial (provided that $dim(M)>0$). Hence, $G= Diff_o(M)$ by Thurston's theorem. It follows that the set $F_M$ cannot form a subgroup unless $F_M= Diff_o(M)$. But, as you already know (see also Lee Mosher's answer here in the case of surfaces), $F_M\ne Diff_o(M)$. Therefore, $F_M$ does not form a subgroup. -Edit. For a direct proof that $F_M$ is not a subgroup see the answer by Martin M.W. to this MO question.<|endoftext|> -TITLE: Under what circumstances do we have $\partial(A \setminus B) = \partial A \cup (A \cap B)$? -QUESTION [5 upvotes]: Let $A$ and $B$ denote subsets of $\mathbb{R}^n$. 
-Then if $A$ is an open set and $B$ is a "sufficiently small" closed set, then we might expect the following to hold: $$\partial(A \setminus B) = \partial A \cup (A \cap B)$$ -For example, imagine that $X = \mathbb{R}^2$, that $A$ is the unit (open) ball centered at the origin, and that $B$ is a line through the origin. Then $\partial A$ is the unit circle centered at the origin, $A \cap B$ is a line segment of length $2$ through the origin, and $\partial(A \setminus B)$ is the union of these. - -Question. Let $A$ and $B$ denote subsets of $\mathbb{R}^n$, with $A$ open and $B$ closed. Under what assumptions does the identity of interest hold? I'm also interested in generalizing beyond $\mathbb{R}^n$ to e.g. sufficiently well-behaved topological spaces. - -I tried proving this under the assumption that $\mathrm{int}(B) = \emptyset$ but didn't get very far. - -REPLY [2 votes]: I think I have a proof for the case when $B^o:=int(B)=\emptyset$. Let $a\in \partial(A-B)$. Then by definition, $a\in \overline{A-B}$. Since $B$ is closed and $A$ is open, $B^c\cap A=A-B$ is open. It follows that $a\not\in (A-B)$, otherwise $a$ would be an interior point of $(A-B)$. Hence, $a\in (A-B)^c=(A\cap B^c)^c=A^c\cup B$, so either $a\in B$ or $a\in A^c$. Since $\overline{A-B}\subset \overline{A}$, we have $a\in \partial A$ or $a\in A\cap B$. -For the other direction, we need the following: -Lemma -If $B^o=\emptyset$, then $\overline{A-B}=\overline{B^c\cap A}=\overline{A}$. -proof -Since $B^o=\emptyset$, for each point $a\in B\cap A$ and all open balls $a\in B_{\epsilon}$, we have $B_{\epsilon}\cap (A\cap B^c)\neq \emptyset$. Hence, $B^c\cap A$ is dense in $A$ (in the subspace topology). -Now back to our problem...Let $a\in \partial A\cup (A\cap B)$. If $a\in \partial A$, then $a\in \overline{A-B}$ by the lemma. Moreover, $a\not \in A-B$, since then $a\in A=A^o$ would contradict our hypothesis. Hence, $a\in \partial(A-B)$. 
If $a\in (A\cap B)$ then again we have that every open ball intersects points of $A-B$ and hence $a\in \overline{A-B}$. Finally, $a\not\in A-B$, otherwise $a\not\in B$. -Edit -I cleaned up the proof so that it's a bit more readable. -Also, the condition that $(A\cap B)^o=\emptyset$ is both necessary and sufficient. To see that it's necessary, notice that if $(A\cap B)$ has interior points, then so does $\partial A\cup (A\cap B)$. However, $\partial(A-B)$ can't have any interior points. Indeed, $(A-B)=B^c\cap A$ is open and $\partial(A-B)=(\overline{A-B})-(A-B)$. If this set had an interior point, then there would be some open ball $B_{\epsilon}$ in $\overline{A-B}$ which does not intersect $A-B$. But then $\overline{A-B}-B_{\epsilon}=\overline{A-B}\cap B_{\epsilon}^c$ is a closed set containing $A-B$, contradicting the fact that $\overline{A-B}$ is the closure (smallest closed set containing $A-B$). -Finally, this works in any topological space. Just replace $B_{\epsilon}$ by an element in a basis for the topology.<|endoftext|> -TITLE: Common notation for non-vacuous implication -QUESTION [5 upvotes]: Taking the definition of vacuous truth to be an implication where nothing satisfies the antecedent: is there notation commonly used for "non-vacuous implication"? -I could write: -$(\forall x . P(x) \implies Q(x)) \land (\exists x . P(x))$ -But I need to write it a lot, so I would prefer to write some shorthand. - -REPLY [3 votes]: You may want to consider using a $\LaTeX$ command such as \xrightarrow to create a symbol such as $\xrightarrow[]{\exists}$ or $\xrightarrow[]{\text{NV}}$ to denote 'non-vacuous implications'. For example, consider writing $$P(x) \xrightarrow[]{\text{NV}} Q(x)$$ in place of $\left(\forall x \ P(x) \rightarrow Q(x)\right) \wedge \left(\exists x \ P(x) \right)$.
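For example, such a shorthand could be set up once in the preamble; the macro name \nvimplies below is just an illustrative choice, and \xrightarrow is provided by amsmath:

```latex
% In the preamble; \xrightarrow comes from amsmath
\usepackage{amsmath}

% Hypothetical shorthand for a non-vacuous implication
\newcommand{\nvimplies}{\xrightarrow{\exists}}

% In the document body one can then write:
%   $P(x) \nvimplies Q(x)$
```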
-Several Google searches concerning non-vacuous implications suggest to me that there may not be any notation commonly used for non-vacuous implications.<|endoftext|> -TITLE: Focus of the Parabola -QUESTION [6 upvotes]: Find the focus of $$(2x+y-1)^2=5(x-2y-3).$$ -Clearly it's a parabola whose axis is $2x+y-1=0$, and since $x-2y-3=0$ is perpendicular to $2x+y-1=0$, the tangent at the vertex is $x-2y-3=0$. Also the vertex is $(1,-1)$ (the intersection of these two lines), but now how to find its focus? - -REPLY [2 votes]: More generally, if we have a parabola in the form (which all parabolas can be put into) $$(l_{\rm axis})^2=e\cdot l_{\text{tangent at vertex}}$$ or $$(ax+by+c)^2=e(-bx+ay+d),$$ and use the orthonormal change of basis $x'=\frac{ax+by}{\sqrt{a^2+b^2}},y'=\frac{-bx+ay}{\sqrt{a^2+b^2}}$, we get $$(\sqrt{a^2+b^2}x'+c)^2=e(\sqrt{a^2+b^2}y'+d)$$ or $$(a^2+b^2)((x'+\frac{c}{\sqrt{a^2+b^2}})^2-\frac{e}{\sqrt{a^2+b^2}}(y'+\frac{d}{\sqrt{a^2+b^2}}))=0.$$ So using that $$x''^2=4\frac{e}{4\sqrt{a^2+b^2}}y''$$ has focus $$(x'',y'')=(0,\frac{e}{4\sqrt{a^2+b^2}})$$ we nest back to $$(x',y')=(-\frac{c}{\sqrt{a^2+b^2}},\frac{e-4d}{4\sqrt{a^2+b^2}})$$ and get $$(x,y)=\left(-\frac{b(e-4d)+4ac}{4b^2+4a^2},-\frac{a(4d-e)+4bc}{4b^2+4a^2}\right).$$ -In your special case the focus is $(x,y)=(\frac54,-\frac32).$<|endoftext|> -TITLE: How to become good in Mathematics? -QUESTION [8 upvotes]: I don't know if this is the right place to ask this question... but let's try... -I have seen lots of people all around me interested in mathematics, ranging from teachers to friends. But I have seen that some have a different approach towards the subject. Their thinking is different from the rest. Both groups of people may have a good reputation for being good in mathematics, but the approach of some of them is completely different and simple. Let us take an example.
-See this question. The answer provided is good enough and the effort is appreciable. But there's another hint provided to the solution in the comments by Andre Nicolas. Using his method, the problem can be solved in a few lines, but the answer provided extends the problem to about $2-3$ pages and makes things quite complicated. -So I want to know what has caused this difference in approach? Is it only practice, or do some people have a special inborn aptitude towards mathematics? -I think I fall in the first group. I have to improve myself through practice. Can I ever become truly proficient in mathematics through practice alone, with no inborn abilities? -Thanks for any response!! - -REPLY [6 votes]: Like any other ability, it's mostly a combination of inherent talent and obsessive focus and interest in the subject. Some particular ideas that are unique to success in mathematics or a mathematical career: - -A particular kind of laziness: trying to find a clever, deep idea rather than just relying on brute force. -Comfort with abstraction. There are a lot of questions here along the lines of 'What does [definition] really mean?' or 'Draw me a picture to explain [thing]'. Being a mathematician means being able to handle abstract concepts abstractly. You don't have to limit yourself to, say, the most abstract areas of algebraic geometry (or whatever), but you do have to be comfortable dealing with things that have no real-world analogue or for which familiar intuition is misleading. -Working on single problems for long periods of time. Now, it's generally a good idea to have multiple topics at once. Compare the publication rate of mathematicians versus academics in other theoretical fields, though. -An early start to one's career. Mathematicians have a short shelf life, and there's a huge amount to learn to be prepared to be a professional mathematician.
It's not a career you can jump into at a later date, and there's really no entrée into the subject except through climbing the academic ladder. (There are vanishingly few exceptions of people who've gone from industry or non-mathematical subjects into mathematics--- Raoul Bott is the only name that comes to mind--- especially compared to the opposite direction.) Now, there's certainly a difference between being good at mathematics and being good at being a mathematician; my point is that the former is not a particularly useful or relevant ability without the latter. If you want to do real mathematics, or even get enough practice in order to do real mathematics, you'll have to start out strong in your career in academia and remain there. -Sheer luck in having the right contacts. As mentioned in the previous point, it is extraordinarily rare to be a mathematician except by going through a certain series of steps. It's not like, say, computer science, where it's not particularly difficult to jump between academia and industry in either direction; and hardly anyone is working on nontrivial mathematics research in industry. There are gatekeepers at every step of the path, and if you get stuck with a bad advisor, department chair, etc., you're screwed. - -One other thing I'll mention is that while mathematics competitions are certainly not a bad thing, they're not really indicative of what professional mathematicians do; they're more about using elementary methods in clever ways to solve contrived problems in a short amount of time.
(That having been said, feel free to practice math with them, and you should legitimately feel proud if you do well in them.)<|endoftext|> -TITLE: Evaluate the series $\sum_{n=1}^{\infty}\frac{\sin(\frac{\pi a}{a+b}n)}{n^3}+\frac{\sin(\frac{\pi b}{a+b}n)}{n^3}$ -QUESTION [5 upvotes]: I have to evaluate the series: -$$\sum_{n=1}^{\infty}\frac{\sin(\frac{\pi a}{a+b}n)}{n^3}+\frac{\sin(\frac{\pi b}{a+b}n)}{n^3}$$ -where $a$ and $b$ are real numbers. -Since I'm not very good with series, I tried brute force by using a complex analysis method which involves multiplying by the cotangent and calculating the residue at $n=0$ (I don't know the name of this method, sorry!), but I didn't get the result my teacher gave me. -I figured this is the Fourier series expansion of some odd function, but I don't know how to guess said function. -So I was wondering if someone could give me some advice. - -REPLY [3 votes]: Let's rewrite the series as -$$ -S(a,b)=\Im\sum_{n=1}^{\infty}\left[\frac{(e^{\pi i a/(a+b)})^n}{n^3}+\frac{(e^{\pi i b/(a+b)})^n}{n^3}\right] -$$ -Now we can apply the definition of the polylogarithmic functions -$$ -\text{Li}_m(z)=\sum_{n=1}^{\infty}\frac{z^n}{n^m} -$$ -We therefore obtain - -$$ -S(a,b)=\Im\left[\text{Li}_3(e^{\pi i a/(a+b)})+\text{Li}_3(e^{\pi i b/(a+b)})\right] \quad (*) -$$ - -which is, I fear, the best one can do for arbitrary $a,b$. For example, if $a,b$ are natural numbers, we may rewrite this as a finite sum over Hurwitz zeta values.
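As a numerical sanity check of $(*)$, one can compare a partial sum of the original series against a known special value. The sketch below (the cutoff $N$ and the tolerance are my own choices, not from the answer) uses $a=b$, where each term reduces to $2\sin(\pi n/2)/n^3$, so the series sums to $2\beta(3)=\pi^3/16$ with $\beta$ the Dirichlet beta function:

```python
import math

def S_partial(a, b, N=200_000):
    """Partial sum of sum_{n<=N} [sin(pi*a*n/(a+b)) + sin(pi*b*n/(a+b))] / n^3."""
    w_a = math.pi * a / (a + b)
    w_b = math.pi * b / (a + b)
    return sum((math.sin(w_a * n) + math.sin(w_b * n)) / n**3 for n in range(1, N + 1))

# For a = b every term is 2*sin(pi*n/2)/n^3, so the series sums to
# 2*beta(3) = 2 * (pi^3/32) = pi^3/16 (Dirichlet beta function).
print(abs(S_partial(1.0, 1.0) - math.pi**3 / 16))  # close to 0
```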
-An alternative representation is in terms of Clausen functions $\text{Si}_m(z)$ -$$ -S(a,b)=\text{Si}_3(e^{\pi i a/(a+b)})+\text{Si}_3(e^{\pi i b/(a+b)}) -$$ -For the special values $a=b$, $a=0$ we find the particularly nice results -$$ -S(a,a)=\frac{\pi^3}{16} \quad \\ -S(0,b)=0 \quad -$$ -All definitions I used, and much more, can be found here<|endoftext|> -TITLE: How to prove $\sin\frac{A}{2}=\sqrt{\frac{(s-b)(s-c)}{bc}};\cos\frac{A}{2}=\sqrt{\frac{s(s-a)}{bc}}$ -QUESTION [5 upvotes]: I have read these formulae in my book but I could not understand how they are proved. -$\sin\frac{A}{2}=\sqrt{\frac{(s-b)(s-c)}{bc}};\sin\frac{B}{2}=\sqrt{\frac{(s-c)(s-a)}{ca}};\sin\frac{C}{2}=\sqrt{\frac{(s-a)(s-b)}{ab}}$ and -$\cos\frac{A}{2}=\sqrt{\frac{s(s-a)}{bc}};\cos\frac{B}{2}=\sqrt{\frac{s(s-b)}{ca}};\cos\frac{C}{2}=\sqrt{\frac{s(s-c)}{ab}}$ -where $A,B,C$ are the angles of a triangle $ABC$ and $a,b,c$ are the sides opposite to the angles $A,B,C$ respectively. - -I know that the area of the triangle is $\Delta=\frac{1}{2}bc\sin A$ -But the area of the triangle by Heron's formula is $\Delta=\sqrt{s(s-a)(s-b)(s-c)}$ -$\sin A=\frac{2\Delta}{bc}$ -$2\sin\frac{A}{2}\cos\frac{A}{2}=\frac{2\Delta}{bc}$ -$\sin\frac{A}{2}\cos\frac{A}{2}=\frac{\sqrt{s(s-a)(s-b)(s-c)}}{bc}$ -I do not know how to prove it further, or whether there is some other method to prove it. - -REPLY [6 votes]: \begin{aligned} - \cos A &= \frac{b^2+c^2-a^2}{2bc} \\ - \cos A &= 2\cos^2\frac{A}{2} - 1 \\ - 2\cos^2\frac{A}{2} - 1 &= \frac{b^2+c^2-a^2}{2bc}\\ - \cos^2 \frac{A}{2} &= \frac{b^2+c^2-a^2+2bc}{4bc} = \frac{(b+c-a)(b+c+a)}{4bc}\\ - \cos \frac{A}{2} &= \sqrt{\frac{(2s-2a)(2s)}{4bc}} = \sqrt{\frac{s(s-a)}{bc}} -\end{aligned} -$\sin \frac{A}{2}$ can be found in the same way<|endoftext|> -TITLE: Inverse of Perspective Matrix -QUESTION [17 upvotes]: I am trying to calculate an Image to World model for my thesis dealing with road lanes. As a disclaimer I have to say that linear algebra is not my strong suit.
-The idea is - given that I know the yaw, pitch, and position of the camera - I can translate image pixels to real world coordinates, which will be useful in a road recognition algorithm. -I managed to get a working camera pinhole perspective projection. Here are the matrices used. -Extrinsic Matrix -Translates to the camera position and rotates accordingly: -$$\begin{pmatrix} -&1 &0 &0 &-cx \\ -&0 &1 &0 &-cy \\ -&0 &0 &1 &-cz \\ -&0 &0 &0 &1 -\end{pmatrix} -\begin{pmatrix} -&1 &0 &0 &0 \\ -&0 &\cos(\text{yaw}) &-\sin(\text{yaw}) &0 \\ -&0 &\sin(\text{yaw}) &\cos(\text{yaw}) &0 \\ -&0 &0 &0 &1 -\end{pmatrix} -\begin{pmatrix} -&\cos(\text{pitch}) &0 &\sin(\text{pitch}) &0 \\ -&0 &1 &0 &0 \\ -&-\sin(\text{pitch}) &0 &\cos(\text{pitch}) &0 \\ -&0 &0 &0 &1 -\end{pmatrix} -$$ -Projection -$f$ is the focal length of the camera. -Based on https://en.wikipedia.org/wiki/3D_projection -$$\begin{pmatrix} -Fx\\ -Fy\\ -Fz\\ -Fw -\end{pmatrix} = \begin{pmatrix} -&1 &0 &1/f &0 \\ -&0 &1 &1/f &0 \\ -&0 &0 &1 &0 \\ -&0 &0 &1/f &0 -\end{pmatrix} -\begin{pmatrix} -dx\\ -dy\\ -dz\\ -1 -\end{pmatrix}$$ -$$p = \begin{pmatrix} -Fx/Fw\\ -Fy/Fw\\ -1 -\end{pmatrix}$$ -Intrinsic Matrix -Scales to pixel units and moves the origin to the center. $w$ is the width of the screen and $W$ is the width of the sensor; similarly for the height: -$Fx = w/W$ -$Fy = h/H$ -$$\begin{pmatrix} -&Fx &0 &w/2 \\ -&0 &Fy &h/2 \\ -&0 &0 &1 \\ -\end{pmatrix} -$$ -In a typical projection I first multiply the 3D point by the extrinsic matrix, then project it using the projection matrix, and then apply the intrinsic matrix. -But how can I reverse the process? I can use the assumption that all points lie on the road plane (Y == 0). Yet I am not sure how to fit it with all these matrices. I know I can invert the intrinsic and extrinsic matrices, but I can't do it with the projection matrix, because it is singular. -Any lead would be useful. -Thanks - -REPLY [12 votes]: The location on the image plane will give you a ray on which the object lies.
You’ll need to use other information to determine where along this ray the object actually is, though. That information is lost when the object is projected onto the image plane. Assuming that the object is somewhere on the road plane is a huge simplification. Now, instead of trying to find the inverse of a perspective mapping, you only need to find a perspective projection of the image plane onto the road. That’s a fairly straightforward construction similar to the one used to derive the original perspective projection. -Start by working in camera-relative coordinates. A point $\mathbf p_i$ on the image plane has coordinates $(x_i,y_i,f)^T$. The original projection maps all points on the ray $\mathbf p_i t$ onto this point. Now, we’re assuming that the road is a plane, so it can be represented by an equation of the form $\mathbf n\cdot(\mathbf p_o-\mathbf r)=0$, where $\mathbf n$ is a normal to the plane and $\mathbf r$ is some known point on it. We seek the intersection of the ray and this plane, which will satisfy $\mathbf n\cdot(\mathbf p_i t-\mathbf r)=0$. Solving for $t$ and substituting gives $$\mathbf p_o = {\mathbf n\cdot \mathbf r \over \mathbf n\cdot \mathbf p_i}\mathbf p_i.$$ Moving to homogeneous coordinates, this mapping is the linear transformation represented by the matrix $$ -M = \pmatrix{1&0&0&0 \\ 0&1&0&0 \\ 0&0&1&0 \\ {n_x \over \mathbf n\cdot\mathbf r} & {n_y \over \mathbf n\cdot\mathbf r} & {n_z \over \mathbf n\cdot\mathbf r} & 0}, -$$ i.e., $$ -\mathbf p_o = M\pmatrix{x_i \\ y_i \\ f \\ 1}. -$$ Once you have this, it should be obvious how to complete the mapping back to world coordinates. -All that’s left is to find the parameters $\mathbf n$ and $\mathbf r$ that describe the road plane in camera coordinates. That’s also pretty simple. Since we’re taking the road to be the plane $y=0$ in world coordinates, its normal there is $(0,1,0)^T$. As for a known point on the road, the origin will do. 
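A tiny numerical illustration of the formula $\mathbf p_o = \frac{\mathbf n\cdot \mathbf r}{\mathbf n\cdot \mathbf p_i}\mathbf p_i$ in camera-relative coordinates (the function name and the example plane are mine, not from the question):

```python
def intersect_ray_with_plane(p_i, n, r):
    """Scale the camera-frame image point p_i onto the plane {p : n.(p - r) = 0}.

    Implements p_o = (n.r / n.p_i) * p_i; assumes the ray is not parallel
    to the plane (n.p_i != 0)."""
    dot = lambda u, v: sum(a * b for a, b in zip(u, v))
    t = dot(n, r) / dot(n, p_i)
    return tuple(t * c for c in p_i)

# Road plane 2 units below the camera: normal (0,1,0), passing through (0,-2,0).
p_o = intersect_ray_with_plane(p_i=(0.1, -0.5, 1.0), n=(0.0, 1.0, 0.0), r=(0.0, -2.0, 0.0))
print(p_o)  # the result lies on the plane: its y-coordinate is -2
```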
Another reasonable choice is the point at which the camera’s optical axis meets the road, since the camera-relative coordinates of that point will be of the form $(0,0,z)^T$. Convert both of these into camera-relative coordinates, and you’re done. -Note that you don’t necessarily need to know anything about the camera to compute a perspective transformation that will map from the image plane to the road plane. If you can somehow find four pairs of non-collinear points, i.e., a pair of quadrilaterals, that correspond to each other on these two planes, a planar perspective transformation that relates them can be computed fairly easily. See here for details. Essentially, you calibrate the camera view by matching a region of the image to a known region in the road plane. - -Update 2018.10.22: If you have the complete camera matrix $P$, which you do, there’s a fairly straightforward way to construct the back-mapping to points on the road with a few matrix operations. We choose a coordinate system for the road plane, which gives us a $4\times3$ matrix $M$ that maps from these plane coordinates to world coordinates, i.e., $\mathbf X = M\mathbf x$. The image of this point is $PM\mathbf x$. If $PM$ is invertible, which it will be unless the camera center is on the road plane, the matrix $(PM)^{-1}$ maps from image to plane coordinates, and so the back-mapping from image to world coordinates on the road is $M(PM)^{-1}$. For the plane $Y=0$, a natural choice for $M$ is $$M=\begin{bmatrix}1&0&0\\0&0&0\\0&1&0\\0&0&1\end{bmatrix},$$ which simply inserts a $Y$-coordinate of zero to obtain world coordinates.
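A minimal sketch of the $M(PM)^{-1}$ construction in plain Python. The camera here ($R=I$, camera two units above the road, unit focal length) is a made-up example, not values from the question:

```python
def matvec(A, v):
    return [sum(a * x for a, x in zip(row, v)) for row in A]

def matmul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(len(B)))
             for j in range(len(B[0]))] for i in range(len(A))]

def inv3(A):
    # 3x3 inverse via the adjugate formula.
    (a, b, c), (d, e, f), (g, h, i) = A
    det = a * (e * i - f * h) - b * (d * i - f * g) + c * (d * h - e * g)
    adj = [[e * i - f * h, c * h - b * i, b * f - c * e],
           [f * g - d * i, a * i - c * g, c * d - a * f],
           [d * h - e * g, b * g - a * h, a * e - b * d]]
    return [[x / det for x in row] for row in adj]

# Camera matrix P = [R | t] with R = I, t = (0, 2, 0): camera 2 units above Y = 0.
P = [[1, 0, 0, 0],
     [0, 1, 0, 2],
     [0, 0, 1, 0]]
# M lifts road-plane coordinates (X, Z, 1) to world coordinates (X, 0, Z, 1).
M = [[1, 0, 0],
     [0, 0, 0],
     [0, 1, 0],
     [0, 0, 1]]

X_world = [3, 0, 5, 1]                     # a point on the road plane
x_img = matvec(P, X_world)                 # its homogeneous image
H_back = matmul(M, inv3(matmul(P, M)))     # the back-mapping M (P M)^{-1}
X_back = matvec(H_back, x_img)
X_back = [c / X_back[3] for c in X_back]   # dehomogenize; recovers X_world
```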
You can adjust the origin of this coordinate system by changing the last column of $M$.<|endoftext|> -TITLE: Minimal-dimension example of (open) subset of $\mathbb{R}^n$ with trivial first cohomology but nontrivial fundamental group -QUESTION [5 upvotes]: As a follow-up to this question, I was wondering what dimension provides the minimal counterexample to the claims: - -If $U\subseteq\mathbb{R}^n$ is an open connected set with trivial $H^1(U)$, then $\pi_1(U)$ is trivial; -Same as above, but with a more general $U$. - -Let $m$ be the minimal counterexample's ambient space's dimension. -Qiaochu's answer to the above linked question shows $m\leq 4$, since for claim 2, we have $\mathbb{RP}^2$ smoothly imbedded into $\mathbb{R}^4$ providing the example directly, whereas for claim 1 we take a tubular neighborhood of the embedded image of $\mathbb{RP}^2$, which is then open and connected, and deformation retracts (I guess, correct me if I'm wrong) onto $\mathbb{RP}^2$, thus having the same fundamental group and cohomology. -Qiaochu also stated in a comment that the fundamental group of an open subset of $\mathbb{R}^2$ is free, which I found proof-sketched here. He also stated that the Universal Coefficient Theorem (which I will sooner or later be studying for an exam) can be used to prove $H^1(U)=1\iff\mathrm{Hom}(\pi_1(U),\mathbb{R})=1$. These two facts put together imply $m_1\in\{1,3,4\}$, where $m_1$ is the $m$ for claim 1. -In dimension 1, connected subsets are intervals, hence have trivial $\pi_1$. So $m_1\in\{2,3,4\}$ and $m_2\in\{3,4\}$. 
The 3-dimensional case seems to have claim 1 as an open problem, so I was wondering if I could get: - -An answer about 2 dimensions and claim 2; I'm guessing we can once more take a non-open connected set in the plane and say a tubular neighborhood of it would retract onto it, having the same fundamental group, thus concluding by Qiaochu's comment's "nonobvious fact" that we have no counterexamples in the plane because all fundamental groups are free; is that right? -Something about the 3-dimensional case, if it is possible for claim 2; although the phrasing of that comment about the open problem seems to suggest that "open" does make a difference, which would contrast with the tubular neighborhood retraction argument. - -Edit -In case comments to that answer get trimmed (there are a lot of them), here are screenshots 1 and 2. - -REPLY [2 votes]: In the case of open connected subsets $U$ of $R^3$, indeed, non-simply connected implies $b_1(U)>0$. Hint: First prove it for compact 3-dimensional submanifolds $M$ of $R^3$ (with boundary): Use the formula $\chi(\partial M)= 2\chi(M)$. Then use exhaustion and the fact that direct limit commutes with the $\pi_1$-functor and with the Hurewicz homomorphism.<|endoftext|> -TITLE: What is the definition of direct sum of submodules? -QUESTION [7 upvotes]: Given a ring $R$ and $M_1,\ldots,M_n$ $R$-submodules of an $R$-module $M$, what is the definition of this set? -$$\bigoplus_{i=1}^n M_i$$ -From where I am reading it seems that it is: $M_1 + \cdots + M_n$ with $M_i$ mutually disjoint. But I read in many places that it is the direct product $M_1 \times \cdots \times M_n$. -So what is it? Thanks for your help. - -REPLY [7 votes]: The ("external") direct sum of modules $M_i$ is defined as a subset of the Cartesian product of the $M_i$. -Now, there is another thing called the "internal" direct sum of submodules of a module.
This is usually defined as the submodules summing to the whole module, and having the property that each component intersects the sum of the others trivially. It amounts to each element having a unique representation as a sum of elements from each submodule. -The two are related this way: if you decompose $M$ as an internal direct sum of submodules $M_i$, the internal direct sum is isomorphic to an "external" direct sum via the map $m_1+m_2+\ldots \mapsto (m_1,m_2,\ldots)$. -Conversely, every decomposition of a module as a direct sum of other modules corresponds to an internal decomposition. You just look at the images of the components of the decomposition inside your module, and they form a family of submodules that defines an internal decomposition. -So you see the two are basically the same, it's just that one emphasizes working with tuples of elements in the Cartesian product, and the other works with sums of elements inside the module. - -REPLY [2 votes]: A finite direct sum is equivalent to the analogous Cartesian product. This stops being true for infinite sums/products. -As an example, $(1,1,1,1,1,\dots) \in \Bbb{Z} \times \Bbb{Z} \times \cdots$, but $(1,1,1,1,1,\dots) \not\in \Bbb{Z} \oplus \Bbb{Z} \oplus \cdots$, because elements in the direct sum have only a finite number of nonzero entries. -The product topology and the box topology also capture this distinction.<|endoftext|> -TITLE: Integral $\int_0^1 \frac{\ln|1-x^a|}{1+x}\, dx$ -QUESTION [6 upvotes]: Let's consider this integral: -$$g(a)=\int_0^1 \frac{\ln|1-x^a|}{1+x}\, dx$$ -There is a related integral, which is more widely known. See this question, this question and this paper.
-$$f(a)=\int_0^1 \frac{\ln(1+x^a)}{1+x}\, dx$$ -It has a number of interesting properties, namely: -$$f(a)-f(-a)=-\frac{\pi^2}{12} a$$ -$$f(a)+f \left( \frac{1}{a} \right)=\ln^2 2$$ -We can immediately see (by evaluating the integral as well): -$$f(0)=\ln^2 2$$ -$$f(1)=\frac{\ln^2 2}{2}$$ -The related paper shows how to obtain a closed form for infinitely many values of $a$. - -Now we consider $g(a)$. Notice the important limit: -$$\lim_{a \to \pm 0} g(a)=-\infty$$ -In this answer it is shown that: -$$g(1)=\int_0^1 \frac{\ln(1-x)}{1+x}\, dx=\frac{\ln^2 2}{2}-\frac{\pi^2}{12}$$ -I have not been able to find references about the general integral yet. However, the most interesting property is this obvious formula: - -$$g(2a)=g(a)+f(a)$$ - -Using it, we can find some values without explicitly evaluating the integral. -$$g(2)=g(1)+f(1)=\ln^2 2-\frac{\pi^2}{12}$$ -This integral also has the same symmetry relation as $f(a)$: - -$$g(a)-g(-a)=-\frac{\pi^2}{12} a$$ - -However, I have not been able to find any formula for $g(1/a)$ (the proof used for $f(a)$ fails because of the singularity at $a=0$).
-Here is a plot of these two integrals and their common asymptote: - -And here is a list of 'nice' values for $f(a)$ and $g(a)$ for integer $a$: -$$ - \begin{matrix} - a & f(a) & g(a) \\ - -4 & & \dfrac{11 \pi ^2}{48}+\dfrac{21 \ln ^2 2}{12} \\ - -2 & \dfrac{7 \pi ^2}{48}+\dfrac{3 \ln ^2 2}{4} & \dfrac{\pi ^2}{12}+\ln ^2 2 \\ - -1 & \dfrac{ \pi ^2}{12}+\dfrac{\ln ^2 2}{2} & -\dfrac{\ln ^2 2}{2} \\ - -\dfrac{1}{2} & \dfrac{\pi ^2}{16}+\dfrac{\ln ^2 2}{4} & -\dfrac{ \pi ^2}{16}+\dfrac{\ln ^2 2}{4} \\ - 0 & \ln ^2 2 & -\infty \\ - \dfrac{1}{2} & \dfrac{\pi ^2}{48}+\dfrac{\ln ^2 2}{4} & -\dfrac{5 \pi ^2}{48}+\dfrac{\ln ^2 2}{4} \\ - 1 & \dfrac{\ln ^2 2}{2} & -\dfrac{ \pi ^2}{12}+\dfrac{\ln ^2 2}{2} \\ - 2 & -\dfrac{\pi ^2}{48}+\dfrac{3 \ln ^2 2}{4} & -\dfrac{ \pi ^2}{12}+\ln ^2 2 \\ - 4 & & -\dfrac{5 \pi ^2}{48}+\dfrac{21 \ln ^2 2}{12} \\ - \end{matrix} -$$ -I was able to find most of these values using the highlighted relations, only checking with Mathematica after the fact. - -Can we find more closed form expressions for $g(a)$ for some $a$, maybe using the values for $f(a)$ (see the linked paper)? -What other properties of $g(a)$ can we obtain? -And what is the one real solution of $g(a)=0$ (see the plot)? 
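The closed forms above are easy to check numerically; the crude midpoint-rule sketch below (the rule and the cutoff $N$ are arbitrary choices of mine; the endpoint log singularity is integrable) reproduces $g(1)$ and the relation $g(2a)=g(a)+f(a)$ at $a=1$ to several digits:

```python
import math

def midpoint(fn, N=200_000):
    """Crude midpoint-rule quadrature on (0, 1)."""
    h = 1.0 / N
    return h * sum(fn((k + 0.5) * h) for k in range(N))

def g(a):
    return midpoint(lambda x: math.log(abs(1.0 - x**a)) / (1.0 + x))

def f(a):
    return midpoint(lambda x: math.log(1.0 + x**a) / (1.0 + x))

L2 = math.log(2.0) ** 2
print(abs(g(1) - (L2 / 2 - math.pi**2 / 12)))  # small: matches the closed form
print(abs(g(2) - (g(1) + f(1))))               # small: checks g(2a) = g(a) + f(a)
```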
- - -Also for any $a>0$ we can define $g(a)$ in terms of $f$: -$$g(a)=g(2a)-f(a)=g(4a)-f(a)-f(2a)=g(8a)-f(a)-f(2a)-f(4a)$$ -$$\lim_{a \to + \infty} g(a)=\lim_{a \to + \infty} f(a)=0$$ - -$$g(a)=- \sum_{k=0}^{\infty} f(2^k~a), ~~~~~ a>0$$ - -The last relation also directly follows from the well-known identity: -$$\prod_{k=0}^{\infty} \left( 1+x^{2^k} \right)=\frac{1}{1-x}, ~~~~~|x|<1$$ - -We can also use $f(a)$ and $g(a)$ to define a whole family of integrals: -$$\sum_{k=0}^{n-1} x^{ka}=\frac{1-x^{na}}{1-x^a}$$ - -$$I_n (a)=\int_0^1 \ln \left( \sum_{k=0}^{n-1} x^{ka} \right) \frac{dx}{1+x}=g(na)-g(a)$$ -$$J_n (a)=\int_0^1 \ln \left( \sum_{k=0}^{n-1} (-1)^k x^{ka} \right) \frac{dx}{1+x}= \begin{cases}g(na)-f(a) & n=2j \\ f(na)-f(a) & n=2j+1 \end{cases} $$ - -For example: -$$\int_0^1 \frac{\ln(1+x+x^2+x^3)}{1+x}\, dx=g(4)-g(1)=-\dfrac{\pi ^2}{48}+\dfrac{15 \ln ^2 2}{12}$$ - -REPLY [3 votes]: Here are two asymptotics: - -When $a \to 0^+$, we can use the substitution $x = e^{-u}$ to write -$$ g(a) -= \int_{0}^{\infty} \frac{\log (1 - e^{-au})}{e^u + 1} \, du -= \int_{0}^{\infty} \frac{\log a}{e^u + 1} \, du + \int_{0}^{\infty} \frac{\log \left( \frac{ 1 - e^{-au}}{a} \right)}{e^u + 1} \, du. $$ -It is not hard to check that the last integral is dominated by some integrable function, thus in view of the dominated convergence theorem, we have -$$ g(a) = \log 2 \log a + \int_{0}^{\infty} \frac{\log u}{e^u + 1} \, du + o(1) -= \log 2 \log a - \frac{1}{2}\log^2 2 + o(1) -\quad \text{as } a \to 0^+. $$ -When $a \to \infty$, we first perform integration by parts to write -$$ g(a) = \int_{0}^{1} \frac{ax^{a-1}}{1 - x^a} \log\left( \frac{1+x}{2} \right) \, dx. $$ -Applying the substitution $x = e^{-u/a}$ and introducing a new function $h(x) = -\frac{2}{x}\log(\frac{1+e^{-x}}{2})$, we can write -$$ g(a) = -\frac{1}{2a} \int_{0}^{\infty} \frac{u h(u/a)}{e^u - 1} \, du. 
$$ -Since $h$ is uniformly bounded on $[0, \infty)$ and $h(x) \to 1$ as $x \to 0^+$, by the dominated convergence theorem again we have -$$ g(a) = -\frac{1 + o(1)}{2a} \int_{0}^{\infty} \frac{u}{e^u - 1} \, du = -\frac{\zeta(2) + o(1)}{2a} \quad \text{as } a \to \infty. $$<|endoftext|> -TITLE: Closed form for the infinite product $\prod\limits_{k=0}^{\infty} \left( 1-x^{2^k} \right)$ -QUESTION [13 upvotes]: There is a known identity: -$$\prod_{k=0}^{\infty} \left( 1+x^{2^k} \right)=\frac{1}{1-x}, ~~~~~|x|<1$$ -It's easy to derive it by converting it to a telescoping product as shown in this answer. -However, we can't use the same method here. -$$\left( 1-x^{2^k} \right) \left( 1+x^{2^k} \right)=\left( 1-x^{2^{k+1}} \right)$$ -$$\left( 1-x^{2^k} \right) =\frac{\left( 1-x^{2^{k+1}} \right)}{ \left( 1+x^{2^k} \right)}$$ -This product will not telescope. We can't even use this to find something new about it: -$$p(x)=\prod_{k=0}^{\infty} \left( 1-x^{2^k} \right)=\frac{\prod_{k=0}^{\infty} \left( 1-x^{2^{k+1}} \right)}{\prod_{k=0}^{\infty} \left( 1+x^{2^k} \right)}$$ -We only get the obvious recurrence relation: -$$p(x)=(1-x)~p(x^2)$$ -Mathematica gives this plot (for 25 terms). - - -Does this product have a closed form? -REPLY [4 votes]: Not a closed form, but we have the following series representations: -$$\begin{align*} p(x) = \prod_{k=0}^{\infty} \left( 1-x^{2^k} \right) -&= \frac1{1-x}\left(1+2\sum_{k\geq0}\frac{(-1)^kx^{2^k}}{1+x^{2^k}}\right) \\ -&= \frac{1}{1-x} - \frac{4x}{(1-x)^2} + \frac6{1-x} \sum_{k=0}^\infty \frac{x^{2^{2k}}}{1-x^{2^{2k+1}}} -\end{align*}$$ -where the last sum is $$\sum_{\nu_2(n)\text{ even}}x^n$$ -Proof. We have (on the level of formal series, although everything converges absolutely for $|x|<1$): -$$p(x)-1 = \sum_{n=1}^\infty x^n (-1)^{b(n)}$$ -where $b(n)$ is the number of $1$'s in the binary expansion of $n$.
We can write: -$$(-1)^{b(n)} = 1+2\sum_{\substack{k \text{ for which the }\\k\text{th digit is 1,}\\\text{starting from }k=0}}(-1)^k$$ -plug this into the sum and change the order of summation $n \leftrightarrow k$: -$$\begin{align*}p(x)-1 -&= \sum_{n=1}^\infty x^n + 2 \sum_{n=1}^\infty x^n\sum_{\substack{k \text{ for which the }\\k\text{th digit in }n\text{ is 1}}}(-1)^k \\ -&= \frac x{1-x} + 2 \sum_{k=0}^\infty (-1)^k x^{2^k} \prod_{j \neq k}(1+x^{2^j}) \\ -&= \frac x{1-x} + 2 \sum_{k=0}^\infty (-1)^k \frac{x^{2^k}}{(1-x)(1+x^{2^k})} \\ -\end{align*}$$ - -For the series $S = \sum_{k=0}^\infty (-1)^k \frac{x^{2^k}}{1+x^{2^k}}$, we can write each term as a geometric series: -$$S = -\sum_{k=0}^\infty \sum_{n=1}^\infty (-1)^{k+n}x^{2^kn}$$ -Grouping the terms with the same power of $x$ gives: -$$\sum_{n=1}^\infty c_nx^n$$ -where $c_n=1$ if $\nu_2(n)$ is even, and $c_n=-2$ if $\nu_2(n)$ is odd. -We get $$\begin{align*}S -&= -2\frac x{1-x} + 3\sum_{\nu_2(n)\text{ even}}x^n \\ -&= -2\frac x{1-x} + 3\sum_{k=0}^\infty \left( (x^{4^k})^1 + (x^{4^k})^3 + (x^{4^k})^5 + \cdots \right)\\ -&= -2\frac x{1-x} + 3\sum_{k=0}^\infty \frac{x^{2^{2k}}}{1-x^{2^{2k+1}}} -\end{align*}$$<|endoftext|> -TITLE: Computing the monodromy of a local system $\mathcal{L}$ -QUESTION [6 upvotes]: I was trying to learn a little bit about local systems and their monodromy. In the notes I'm following they define the monodromy of a local system in the following way: - -Let $X$ be a topological space together with a local system $\mathcal{L}$. Given $\gamma : I \to X$ a continuous path in $X$, the inverse image of this sheaf $\gamma^{-1} \mathcal{L}$ is a constant sheaf on $I=[0,1]$. The monodromy of $\mathcal{L}$ along $\gamma$ is the composition of the isomorphisms: -$\mathcal{L}_{\gamma(0)}\cong (\gamma^{-1}\mathcal{L})([0,1])\cong \mathcal{L}_{\gamma(1)}$ - -I want to perform some explicit computations, but I don't know how I should start.
The easy case I'm trying to do is $\mathcal{L}=\Gamma$ where $\Gamma$ is the sheaf of sections of an $n$-sheeted connected covering space of $S^1$. My goal is to compute the monodromy of a system of differential (complex) equations with meromorphic coefficients. I guess this should be done with transition functions of the sheaf, but I don't know how to start even in this trivial example. - -REPLY [4 votes]: $\newcommand{F}{\mathscr F} \newcommand{G}{\mathscr G}$I will take $\F$ to be $f_* \G$, where $f : S^1 \to S^1, z \mapsto z^n$ and $\G$ is the constant sheaf of stalk $\mathbb C$, i.e. $\G(U) = \{s : U \to \mathbb C, s \text{ is locally constant }\}$. For a sheaf $G$ on a topological space $X$ and a continuous map $f : X \to Y$, the sheaf $f_*G$ is a sheaf on $Y$ defined by $f_*G(U) = G(f^{-1}(U))$. -We will prove that $\F$ is a local system. Let $U = S^1 \backslash N, V = S^1 \backslash S$, where $N = i$ and $S = -i$. We want isomorphisms of sheaves $\F(W) \to \Bbb C^n(W)$ for $W = U,V$. -For $U$, we are looking at $\{ s : f^{-1}(U) \to \mathbb C, s \text{ is locally constant } \}$. $f^{-1}(U)$ is the disjoint union of the $n$ intervals $I_k = \{e^{i\theta} : \theta \in (\frac{\pi + 2k\pi}{2n}, \frac{\pi +2(k+2)\pi}{2n})\}$. We can write these sections as $s_1, \dots, s_n$, $s_k : I_k \to \Bbb C$. This gives the desired isomorphism $ \F|_U \cong \Bbb C^n_U$. -In fact, the same applies for $V$ : we are looking at $\F(V) = \{ s : f^{-1}(V) \to \mathbb C, s \text{ is locally constant } \}$. Again we can identify this with locally constant functions $ t_r : J_r \to \Bbb C $ where $J_r = \{e^{i\theta} : \theta \in (\frac{3\pi + 2r \pi}{2n}, \frac{3 \pi + 2(r+2) \pi}{2n})\}$. -Now consider $(\lambda_1, \dots, \lambda_n) \in \F_1$. As $\F|_U$ is a constant sheaf, we have a natural isomorphism $(\F|_U)_1 \cong (\F|_U)_{-1}$. This corresponds simply to $s_i(\theta) \mapsto s_i(\theta + \frac{\pi}{n})$.
Now, since $-1 \in V$ we can use $(\F|_V)_{-1} \cong (\F|_V)_1$, and again this isomorphism is $t_r(\theta) \mapsto t_r(\theta + \frac{\pi}{n})$. Finally, we obtain that the composition of these isomorphisms is simply the map $(\lambda_1, \dots, \lambda_n) \to (\lambda_n, \lambda_1, \dots, \lambda_{n-1})$. But this composition is exactly the one you wrote in the definition of monodromy, so we conclude that the monodromy representation of $\pi_1(S^1) \cong \Bbb Z$ is $\rho(1)(\lambda_1, \dots, \lambda_n) = (\lambda_n, \lambda_1, \dots, \lambda_{n-1})$. -Edit : let me compute the sheaf cohomology of $\F$. This is computed by the complex $\mathbb C^n \overset{d}{\to} \mathbb C^n$ where $d(\lambda_1, \dots, \lambda_n) = (\lambda_n - \lambda_1, \lambda_1 - \lambda_2, \dots, \lambda_{n-1} - \lambda_n)$, i.e. $d = \rho(1) - \mathrm{id}$, which has rank $n-1$. We have $H^0(S^1, \F) = \ker d = \{ (t,t, \dots, t) : t \in \Bbb C \} \cong \Bbb C$ and $H^1(S^1, \F) = \operatorname{coker} d \cong \Bbb C$.
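One can confirm the linear algebra at the end numerically; the sketch below (pure Python, all names mine) builds $d=\rho(1)-\mathrm{id}$ for $n=5$ and checks that it has rank $n-1$, so $H^0$ and $H^1$ are each one-dimensional:

```python
from fractions import Fraction

def rho(v):
    """The monodromy map (l_1, ..., l_n) -> (l_n, l_1, ..., l_{n-1})."""
    return [v[-1]] + v[:-1]

def rank(rows):
    """Row rank over the rationals, by Gaussian elimination."""
    rows = [[Fraction(x) for x in row] for row in rows]
    r = 0
    for col in range(len(rows[0])):
        piv = next((i for i in range(r, len(rows)) if rows[i][col] != 0), None)
        if piv is None:
            continue
        rows[r], rows[piv] = rows[piv], rows[r]
        for i in range(len(rows)):
            if i != r and rows[i][col] != 0:
                fac = rows[i][col] / rows[r][col]
                rows[i] = [a - fac * b for a, b in zip(rows[i], rows[r])]
        r += 1
    return r

n = 5
# Matrix of d = rho(1) - id on C^n (column j is the image of the j-th basis vector).
I = [[1 if i == j else 0 for j in range(n)] for i in range(n)]
D = [[rho(I[j])[i] - I[j][i] for j in range(n)] for i in range(n)]
print(rank(D))  # prints 4, i.e. n - 1: ker and coker are both 1-dimensional
```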