-1}.$$
-Then,
-$$\begin{align}
-\mathcal{I}{\left(p\right)}
-&=\int_{0}^{1}\mathrm{d}x\,\frac{2}{1+x^{2}}\left[\arctan{\left(x\right)}-\arctan{\left(px\right)}\right]^{2}\\
-&=\int_{0}^{1}\mathrm{d}x\,\frac{2}{1+x^{2}}\left[\arctan^{2}{\left(x\right)}-2\arctan{\left(x\right)}\arctan{\left(px\right)}+\arctan^{2}{\left(px\right)}\right]\\
-&=\int_{0}^{1}\mathrm{d}x\,\frac{2\arctan^{2}{\left(x\right)}}{1+x^{2}}-\int_{0}^{1}\mathrm{d}x\,\frac{4\arctan{\left(x\right)}\arctan{\left(px\right)}}{1+x^{2}}+\int_{0}^{1}\mathrm{d}x\,\frac{2\arctan^{2}{\left(px\right)}}{1+x^{2}}\\
-&=\frac23\arctan^{3}{\left(1\right)}\\
-&~~~~~-2\arctan^{2}{\left(1\right)}\arctan{\left(p\right)}+\int_{0}^{1}\mathrm{d}x\,\frac{2p\arctan^{2}{\left(x\right)}}{1+p^{2}x^{2}};~~~\small{I.B.P.}\\
-&~~~~~+\int_{0}^{1}\mathrm{d}x\,\frac{2\arctan^{2}{\left(px\right)}}{1+x^{2}}\\
-&=\frac{\pi^{3}}{96}-\frac{\pi^{2}}{8}\arctan{\left(p\right)}+\int_{0}^{1}\mathrm{d}x\,\frac{2p\arctan^{2}{\left(x\right)}}{1+p^{2}x^{2}}+\int_{0}^{1}\mathrm{d}x\,\frac{2\arctan^{2}{\left(px\right)}}{1+x^{2}}.\\
-\end{align}$$
-
-Lengthy aside on the evaluation of $\int_{0}^{z}\mathrm{d}x\,\frac{2b\arctan^{2}{\left(ax\right)}}{1+b^{2}x^{2}}$:
-Define the function $\mathcal{J}:\mathbb{R}_{>0}^{3}\rightarrow\mathbb{R}$ by
-$$\mathcal{J}{\left(a,b,z\right)}:=\int_{0}^{z}\mathrm{d}x\,\frac{2b\arctan^{2}{\left(ax\right)}}{1+b^{2}x^{2}}.$$
-It's a simple matter to show by rescaling the integral that
-$$\forall\left(a,b,z\right)\in\mathbb{R}_{>0}^{3}:\mathcal{J}{\left(a,b,z\right)}=\mathcal{J}{\left(az,bz,1\right)},$$
-so we may assume WLOG that $z=1$ in the general evaluation of $\mathcal{J}{(a,b,z)}$.
-Suppose $\left(a,b\right)\in\mathbb{R}_{>0}^{2}$, and set $\frac{b}{a}=:c\in\mathbb{R}_{>0}\land\arctan{\left(a\right)}=:\alpha\in\left(0,\frac{\pi}{2}\right)$. Then,
-$$\begin{align}
-\mathcal{J}{\left(a,b,1\right)}
-&=\int_{0}^{1}\mathrm{d}x\,\frac{2b\arctan^{2}{\left(ax\right)}}{1+b^{2}x^{2}}\\
-&=\int_{0}^{a}\mathrm{d}y\,\frac{2ab\arctan^{2}{\left(y\right)}}{a^{2}+b^{2}y^{2}};~~\small{\left[x=\frac{y}{a}\right]}\\
-&=\int_{0}^{a}\mathrm{d}y\,\frac{2c\arctan^{2}{\left(y\right)}}{1+c^{2}y^{2}}\\
-&=\int_{0}^{\arctan{\left(a\right)}}\mathrm{d}\varphi\,\frac{2c\varphi^{2}\sec^{2}{\left(\varphi\right)}}{1+c^{2}\tan^{2}{\left(\varphi\right)}};~~\small{\left[\arctan{\left(y\right)}=\varphi\right]}\\
-&=\int_{0}^{\alpha}\mathrm{d}\varphi\,\frac{2c\varphi^{2}\sec^{2}{\left(\varphi\right)}}{1+c^{2}\tan^{2}{\left(\varphi\right)}}\\
-&=\left[2\varphi^{2}\arctan{\left(c\tan{\left(\varphi\right)}\right)}\right]_{\varphi=0}^{\varphi=\alpha}-\int_{0}^{\alpha}\mathrm{d}\varphi\,4\varphi\arctan{\left(c\tan{\left(\varphi\right)}\right)};~~~\small{I.B.P.}\\
-&=2\alpha^{2}\arctan{\left(c\tan{\left(\alpha\right)}\right)}-4\int_{0}^{\alpha}\mathrm{d}\varphi\,\varphi\arctan{\left(c\tan{\left(\varphi\right)}\right)}.\\
-\end{align}$$
-Next, by rewriting the integral as a multiple integral and changing the order of integration in the appropriate way, we obtain the following:
-$$\begin{align}
-\mathcal{J}{\left(a,b,1\right)}
-&=2\alpha^{2}\arctan{\left(c\tan{\left(\alpha\right)}\right)}-4\int_{0}^{\alpha}\mathrm{d}\varphi\,\varphi\arctan{\left(c\tan{\left(\varphi\right)}\right)}\\
-&=2\alpha^{2}\arctan{\left(c\tan{\left(\alpha\right)}\right)}-4\int_{0}^{\alpha}\mathrm{d}\varphi\int_{0}^{\varphi}\mathrm{d}\vartheta\,\arctan{\left(c\tan{\left(\varphi\right)}\right)}\\
-&=2\alpha^{2}\arctan{\left(c\tan{\left(\alpha\right)}\right)}-4\int_{0}^{\alpha}\mathrm{d}\vartheta\int_{\vartheta}^{\alpha}\mathrm{d}\varphi\,\arctan{\left(c\tan{\left(\varphi\right)}\right)}\\
-&=2\alpha^{2}\arctan{\left(c\tan{\left(\alpha\right)}\right)}-4\int_{0}^{\alpha}\mathrm{d}\vartheta\int_{\vartheta}^{\alpha}\mathrm{d}\varphi\int_{0}^{c}\mathrm{d}y\,\frac{d}{dy}\arctan{\left(y\tan{\left(\varphi\right)}\right)}\\
-&=2\alpha^{2}\arctan{\left(c\tan{\left(\alpha\right)}\right)}-4\int_{0}^{\alpha}\mathrm{d}\vartheta\int_{\vartheta}^{\alpha}\mathrm{d}\varphi\int_{0}^{c}\mathrm{d}y\,\frac{\tan{\left(\varphi\right)}}{1+y^{2}\tan^{2}{\left(\varphi\right)}}\\
-&=2\alpha^{2}\arctan{\left(c\tan{\left(\alpha\right)}\right)}-4\int_{0}^{\alpha}\mathrm{d}\vartheta\int_{0}^{c}\mathrm{d}y\int_{\vartheta}^{\alpha}\mathrm{d}\varphi\,\frac{\tan{\left(\varphi\right)}}{1+y^{2}\tan^{2}{\left(\varphi\right)}}\\
-&=2\alpha^{2}\arctan{\left(c\tan{\left(\alpha\right)}\right)}-4\int_{0}^{\alpha}\mathrm{d}\vartheta\int_{0}^{c}\mathrm{d}y\int_{\vartheta}^{\alpha}\mathrm{d}\varphi\,\frac{\sin{\left(\varphi\right)}\cos{\left(\varphi\right)}}{\cos^{2}{\left(\varphi\right)}+y^{2}\sin^{2}{\left(\varphi\right)}}\\
-&=2\alpha^{2}\arctan{\left(c\tan{\left(\alpha\right)}\right)}-4\int_{0}^{\alpha}\mathrm{d}\vartheta\int_{0}^{c}\mathrm{d}y\int_{\vartheta}^{\alpha}\mathrm{d}\varphi\,\frac{\sin{\left(\varphi\right)}\cos{\left(\varphi\right)}}{1+\left(y^{2}-1\right)\sin^{2}{\left(\varphi\right)}}\\
-&=2\alpha^{2}\arctan{\left(c\tan{\left(\alpha\right)}\right)}-4\int_{0}^{\alpha}\mathrm{d}\vartheta\int_{0}^{c}\mathrm{d}y\int_{\sin{\left(\vartheta\right)}}^{\sin{\left(\alpha\right)}}\mathrm{d}t\,\frac{t}{1+\left(y^{2}-1\right)t^{2}};~~~\small{\left[\sin{\left(\varphi\right)}=t\right]}\\
-&=2\alpha^{2}\arctan{\left(c\tan{\left(\alpha\right)}\right)}-2\int_{0}^{\alpha}\mathrm{d}\vartheta\int_{0}^{c}\mathrm{d}y\int_{\sin^{2}{\left(\vartheta\right)}}^{\sin^{2}{\left(\alpha\right)}}\mathrm{d}u\,\frac{1}{1+\left(y^{2}-1\right)u};~~~\small{\left[t^{2}=u\right]}\\
-&=2\alpha^{2}\arctan{\left(c\tan{\left(\alpha\right)}\right)}\\
-&~~~~~-2\int_{0}^{\alpha}\mathrm{d}\vartheta\int_{0}^{c}\mathrm{d}y\,\frac{\ln{\left(1+\left(y^{2}-1\right)\sin^{2}{\left(\alpha\right)}\right)}-\ln{\left(1+\left(y^{2}-1\right)\sin^{2}{\left(\vartheta\right)}\right)}}{\left(y^{2}-1\right)}\\
-&=2\alpha^{2}\arctan{\left(c\tan{\left(\alpha\right)}\right)}+\int_{0}^{\alpha}\mathrm{d}\vartheta\int_{0}^{c}\mathrm{d}y\,\frac{2\ln{\left(\frac{1-\left(1-y^{2}\right)\sin^{2}{\left(\alpha\right)}}{1-\left(1-y^{2}\right)\sin^{2}{\left(\vartheta\right)}}\right)}}{\left(1-y^{2}\right)}\\
-&=2\alpha^{2}\arctan{\left(c\tan{\left(\alpha\right)}\right)}\\
-&~~~~~+\int_{0}^{\alpha}\mathrm{d}\vartheta\int_{\frac{1-c}{1+c}}^{1}\mathrm{d}x\,\frac{1}{x}\ln{\left(\frac{x^{2}+2x+1-4x\sin^{2}{\left(\alpha\right)}}{x^{2}+2x+1-4x\sin^{2}{\left(\vartheta\right)}}\right)};~~~\small{\left[y=\frac{1-x}{1+x}\right]}\\
-&=2\alpha^{2}\arctan{\left(c\tan{\left(\alpha\right)}\right)}\\
-&~~~~~+\int_{0}^{\alpha}\mathrm{d}\vartheta\int_{r}^{1}\mathrm{d}x\,\frac{1}{x}\ln{\left(\frac{1+2x\cos{\left(2\alpha\right)}+x^{2}}{1+2x\cos{\left(2\vartheta\right)}+x^{2}}\right)};~~~\small{\left[r:=\frac{1-c}{1+c}\in\left(-1,1\right)\right]}.\\
-\end{align}$$
-At this point it will be helpful to introduce the following two-variable extension of the dilogarithm:
-$$\operatorname{Li}_{2}{\left(r,\theta\right)}:=-\int_{0}^{r}\mathrm{d}x\,\frac{\ln{\left(1-2x\cos{\left(\theta\right)}+x^{2}\right)}}{2x};~~~\small{\left(r,\theta\right)\in\mathbb{R}^{2}}.$$
-This function gives the real part of the dilogarithm of complex argument inside the unit circle:
-$$\Re{\left(\operatorname{Li}_{2}{\left(re^{i\theta}\right)}\right)}=\operatorname{Li}_{2}{\left(r,\theta\right)};~~~\small{\left(r,\theta\right)\in\mathbb{R}^{2}\land|r|<1}.$$
-The function $\operatorname{Li}_{2}{\left(r,\theta\right)}$ can be shown to have the following special cases:
-$$\operatorname{Li}_{2}{\left(1,\theta\right)}=\frac14\left(\pi-\theta\right)^{2}-\frac{\pi^{2}}{12};~~~\small{0\le\theta\le2\pi},$$
-$$\operatorname{Li}_{2}{\left(r,\frac{\pi}{2}\right)}=\frac14\operatorname{Li}_{2}{\left(-r^{2}\right)};~~~\small{r\in\mathbb{R}}.$$
-Continuing with our evaluation of $\mathcal{J}$, we obtain
-$$\begin{align}
-\mathcal{J}{\left(a,b,1\right)}
-&=2\alpha^{2}\arctan{\left(c\tan{\left(\alpha\right)}\right)}+\int_{0}^{\alpha}\mathrm{d}\vartheta\int_{r}^{1}\mathrm{d}x\,\frac{\ln{\left(\frac{1+2x\cos{\left(2\alpha\right)}+x^{2}}{1+2x\cos{\left(2\vartheta\right)}+x^{2}}\right)}}{x}\\
-&=2\alpha^{2}\arctan{\left(c\tan{\left(\alpha\right)}\right)}+\int_{0}^{\alpha}\mathrm{d}\vartheta\,\bigg{[}2\operatorname{Li}_{2}{\left(r,\pi-2\alpha\right)}-2\operatorname{Li}_{2}{\left(1,\pi-2\alpha\right)}\\
-&~~~~~-2\operatorname{Li}_{2}{\left(r,\pi-2\vartheta\right)}+2\operatorname{Li}_{2}{\left(1,\pi-2\vartheta\right)}\bigg{]}\\
-&=2\alpha^{2}\arctan{\left(c\tan{\left(\alpha\right)}\right)}+\int_{0}^{\alpha}\mathrm{d}\vartheta\,\bigg{[}2\vartheta^{2}-2\alpha^{2}+2\operatorname{Li}_{2}{\left(r,\pi-2\alpha\right)}\\
-&~~~~~-2\operatorname{Li}_{2}{\left(r,\pi-2\vartheta\right)}\bigg{]}\\
-&=2\alpha^{2}\arctan{\left(c\tan{\left(\alpha\right)}\right)}-\frac43\alpha^{3}+2\alpha\operatorname{Li}_{2}{\left(r,\pi-2\alpha\right)}\\
-&~~~~~-2\int_{0}^{\alpha}\mathrm{d}\vartheta\,\operatorname{Li}_{2}{\left(r,\pi-2\vartheta\right)}\\
-&=2\alpha^{2}\arctan{\left(c\tan{\left(\alpha\right)}\right)}-\frac43\alpha^{3}+2\alpha\operatorname{Li}_{2}{\left(r,\pi-2\alpha\right)}\\
-&~~~~~-2\int_{0}^{\alpha}\mathrm{d}\vartheta\,\Re{\left[\operatorname{Li}_{2}{\left(re^{i\left(\pi-2\vartheta\right)}\right)}\right]}\\
-&=2\alpha^{2}\arctan{\left(c\tan{\left(\alpha\right)}\right)}-\frac43\alpha^{3}+2\alpha\operatorname{Li}_{2}{\left(r,\pi-2\alpha\right)}\\
-&~~~~~-2\Re\int_{0}^{\alpha}\mathrm{d}\vartheta\,\operatorname{Li}_{2}{\left(re^{i\left(\pi-2\vartheta\right)}\right)}\\
-&=2\alpha^{2}\arctan{\left(c\tan{\left(\alpha\right)}\right)}-\frac43\alpha^{3}+2\alpha\operatorname{Li}_{2}{\left(r,\pi-2\alpha\right)}\\
-&~~~~~-\Re\int_{\pi-2\alpha}^{\pi}\mathrm{d}\vartheta\,\operatorname{Li}_{2}{\left(re^{i\vartheta}\right)}\\
-&=2\alpha^{2}\arctan{\left(c\tan{\left(\alpha\right)}\right)}-\frac43\alpha^{3}+2\alpha\operatorname{Li}_{2}{\left(r,\pi-2\alpha\right)}\\
-&~~~~~-\Re{\left[\frac{1}{i}\operatorname{Li}_{3}{\left(re^{i\pi}\right)}-\frac{1}{i}\operatorname{Li}_{3}{\left(re^{i\left(\pi-2\alpha\right)}\right)}\right]}\\
-&=2\alpha^{2}\arctan{\left(c\tan{\left(\alpha\right)}\right)}-\frac43\alpha^{3}+2\alpha\operatorname{Li}_{2}{\left(r,\pi-2\alpha\right)}\\
-&~~~~~-\Im{\left[\operatorname{Li}_{3}{\left(-r\right)}-\operatorname{Li}_{3}{\left(re^{i\left(\pi-2\alpha\right)}\right)}\right]}\\
-&=2\alpha^{2}\arctan{\left(c\tan{\left(\alpha\right)}\right)}-\frac43\alpha^{3}+2\alpha\operatorname{Li}_{2}{\left(r,\pi-2\alpha\right)}+\Im{\left[\operatorname{Li}_{3}{\left(-re^{-2i\alpha}\right)}\right]}.\\
-\end{align}$$
-The following pair of integration formulas then follow from special cases of the previous result:
-$$\int_{0}^{1}\mathrm{d}x\,\frac{2p\arctan^{2}{\left(x\right)}}{1+p^{2}x^{2}}=\Im{\left[\operatorname{Li}_{3}{\left(i\frac{1-p}{1+p}\right)}\right]}+\frac{\pi}{8}\operatorname{Li}_{2}{\left(-\left(\frac{1-p}{1+p}\right)^{2}\right)}+\frac{\pi^{2}}{8}\arctan{\left(p\right)}-\frac{\pi^{3}}{48},$$
-$$\begin{align}
-\int_{0}^{1}\mathrm{d}x\,\frac{2\arctan^{2}{\left(px\right)}}{1+x^{2}}
-&=\Im{\left[\operatorname{Li}_{3}{\left(\frac{1-p}{1+p}e^{-2i\arctan{\left(p\right)}}\right)}\right]}+2\arctan{\left(p\right)}\operatorname{Li}_{2}{\left(\frac{1-p}{1+p},2\arctan{\left(p\right)}\right)}\\
-&~~~~~-\frac43\arctan^{3}{\left(p\right)}+\frac{\pi}{2}\arctan^{2}{\left(p\right)},\\
-\end{align}$$
-where $0<p\leq1$.
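As a numerical sanity check of these two formulas, here is a sketch with mpmath; $p=1/2$ is an arbitrary test value, and $\operatorname{Li}_{2}{\left(r,\theta\right)}$ is evaluated as $\Re{\left(\operatorname{Li}_{2}{\left(re^{i\theta}\right)}\right)}$, which is valid here since $|r|<1$:

```python
# Sanity check of the two closed forms above; p = 1/2 is an arbitrary test value.
from mpmath import mp, mpf, pi, exp, atan, re, im, polylog, quad

mp.dps = 30
p = mpf(1) / 2
r = (1 - p) / (1 + p)
a = atan(p)

lhs1 = quad(lambda x: 2*p*atan(x)**2 / (1 + p**2*x**2), [0, 1])
rhs1 = im(polylog(3, 1j*r)) + pi/8*polylog(2, -r**2) + pi**2/8*a - pi**3/48

lhs2 = quad(lambda x: 2*atan(p*x)**2 / (1 + x**2), [0, 1])
# Li_2(r, theta) is computed as Re(Li_2(r e^{i theta})), valid since |r| < 1 here
rhs2 = (im(polylog(3, r*exp(-2j*a))) + 2*a*re(polylog(2, r*exp(2j*a)))
        - mpf(4)/3*a**3 + pi/2*a**2)

print(lhs1 - rhs1)  # ~ 0 to working precision
print(lhs2 - rhs2)  # ~ 0 to working precision
```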
-TITLE: In general, when does it hold that $f(\sup(X)) = \sup f(X)$?
-QUESTION [9 upvotes]: Let $f: [-\infty, \infty] \to [-\infty, \infty]$.
-What conditions should we impose on $f$ so that the following statement becomes true?
-$$\forall \ X \subset [-\infty, \infty], \sup f(X) = f(\sup X)$$
-If that doesn't make much sense, then for some function with certain conditions, what kind of sets $X$ satisfy $f(\sup X) = \sup f(X)$?
-
-Some background to the question:
-While doing a certain proof, I was about to swap $\sqrt{\cdot}$ and $\sup$, but I soon realized that such a step probably needs some scrutiny. I still do not know whether such a step is valid, and I would like to know what sort of functions satisfy the requirement. I supposed that $f$ is an extended real valued function for the possibility of $\sup = \infty$.
-
-REPLY [8 votes]: I believe the necessary and sufficient condition on $f$ is that it is nondecreasing, left-continuous (i.e. for all $x_0$, $\lim\limits_{x\rightarrow x_0^-}f(x)=f(x_0)$) and $f(-\infty)=-\infty$. The last condition is necessary by considering $X=\varnothing$. The first condition is necessary since for $X=\{a,b\},a\leq b$ we need $\max\{f(a),f(b)\}=\sup f(X)=f(\sup X)=f(b)$, i.e. $f(a)\leq f(b)$. The second condition is necessary when we take $X=(-\infty,x_0)$: since $f$ is nondecreasing, $\sup f(X)=\lim\limits_{x\rightarrow x_0^-}f(x)$, and this must equal $f(\sup X)=f(x_0)$.
-As for sufficiency, assume the above three conditions. For $X=\varnothing$ or $X=\{-\infty\}$ this is clear, so assume $\sup X=:x_0>-\infty$. It's easy to see that left continuity implies $\sup f((-\infty,x_0))=f(x_0)$ (thanks to monotonicity of the function), so we only need to prove $\sup f((-\infty,x_0))=\sup f(X)$. If $x_0\in X$, this is obvious from monotonicity. Otherwise, clearly $\sup f((-\infty,x_0))\geq\sup f(X)$, since the former $\sup$ is taken over a (possibly) larger set. Now, take any $a\in (-\infty,x_0)$. Then, since $x_0=\sup X$, there exists a $b\in X$ such that $a<b$, and hence $f(a)\leq f(b)\leq\sup f(X)$. Taking the supremum over all such $a$ gives $\sup f((-\infty,x_0))\leq\sup f(X)$, which completes the proof.
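To see why left continuity cannot be dropped, here is a small sketch (the step function is an arbitrary illustrative choice):

```python
# f is nondecreasing but not left-continuous at 0: lim_{x->0^-} f(x) = 0 != f(0) = 1.
def f(x):
    return 1.0 if x >= 0 else 0.0

X = [-1 + k / 10000 for k in range(10000)]   # a fine sample of X = [-1, 0); sup X = 0

print(max(f(x) for x in X))   # sup f(X) = 0.0
print(f(0))                   # f(sup X) = 1.0, so f(sup X) != sup f(X)
```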
-TITLE: A degree $4$ polynomial whose Galois group is isomorphic to $S_4$.
-QUESTION [8 upvotes]: I am reading an article about Galois groups. The article states that:
-
-It can also be shown that for each degree $d$ there exist polynomials whose Galois group is the fully symmetric group $S_d$.
-
-I think that the Galois group of a quadratic is isomorphic to $S_2$ if the roots are not rational.
-I think that the Galois group of $x^3 - a$ is isomorphic to $S_3$ if $a$ is not a perfect cube.
-Is it difficult to find an example of a degree $4$ polynomial whose Galois group is isomorphic to $S_4$? I am reading a text book and so far most of the examples have been for Galois groups of "small" order.
-I have not progressed to the point where I can appreciate a proof of the statement made in the article but I would like to see an example of such a polynomial of degree $4$ or hear from someone knowledgeable that this is a difficult and/or tedious task.
-
-REPLY [5 votes]: This isn't too hard to do, but the below method requires a little knowledge of the cubic resolvent $h$ of a quartic polynomial $g$, namely that $\operatorname{Gal}(h) \leq \operatorname{Gal}(g)$ and that the discriminants of $g$ and $h$ (essentially) coincide. (I can expand about this some if it would be helpful.)
-If a polynomial $g$ is irreducible (which is a necessary condition for $\operatorname{Gal}(g) \cong S_{\deg g}$, and which we thus henceforth assume), then its Galois group acts transitively. The only transitive subgroups of $S_4$ are (up to conjugacy) $S_4, A_4, D_8, \Bbb Z_4, \Bbb Z_2 \times \Bbb Z_2$. We'll use the fact that the only groups among these whose order is divisible by $6$ are $S_4$ and $A_4$.
-At least when the characteristic of the underlying field $\Bbb F$ is not $2$, we may for simplicity of the below formulae make a linear change of variables so that $g$ has zero coefficient in its $x^3$ term and write (after dividing by the leading coefficient)
-$$g(x) = x^4 + p x^2 + q x + r,$$
-its resolvent cubic is
-$$h(x) = x^3 - 2 p x^2 + (p^2 - 4 r) x + q^2,$$
-and the discriminants of $g$ and $h$ coincide (perhaps up to an overall nonzero multiplicative constant), and are
-$$D = 16 p^4 r - 4 p^3 q^2 - 128 p^2 r^2 + 144 p q^2 r - 27 q^4 + 256 r^3.$$
-If $h$ is irreducible and its discriminant $D$ is not a square, then (1) $\operatorname{Gal}(h) \leq G := \operatorname{Gal}(g)$ is $S_3$, so $G$ has order divisible by $6$ and hence by the above is $S_4$ or $A_4$, and (2) since $D$ (the discriminant of $g$) is not a square, $G \not\leq A_4$ and hence $G \cong S_4$. For $g$ and $h$ to be irreducible, they must have nonzero constant terms and hence $q, r \neq 0$. For $p = 0$, the above formulae simplify to
-$$h(x) = x^3 - 4 r x + q^2$$ and $$D = - 27 q^4 + 256 r^3,$$ so to find an example we can search for $q, r$ for which $h$ is irreducible and $D$ is nonsquare. For $\Bbb F = \Bbb Q$ the simple choice $q = r = 1$ satisfies these criteria (the first by the Rational Root Test), so we have, for example, that $$\operatorname{Gal}(x^4 + x + 1) \cong S_4.$$
-See these notes for more details (using the same notation). Also, note that these sorts of examples are generic in that, in a sense that can be made precise, most irreducible polynomials of degree $n$ in $\Bbb Q[x]$ have Galois group $S_n$.
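The two criteria are easy to verify by computer; a sketch with SymPy (recent SymPy versions can also compute Galois groups of low-degree polynomials directly, but the check below only needs irreducibility tests and the discriminant):

```python
# Check: the resolvent cubic h is irreducible and disc(g) = 229 is not a square.
from sympy import Poly, discriminant, sqrt, symbols

x = symbols('x')
g = Poly(x**4 + x + 1, x)
h = Poly(x**3 - 4*x + 1, x)   # resolvent cubic for p = 0, q = r = 1

print(g.is_irreducible, h.is_irreducible)   # True True
D = discriminant(g.as_expr(), x)
print(D, sqrt(D).is_rational)               # 229 False, so D is not a square
```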
-TITLE: The fundamental group of $\mathbb{R}^3$ with its non-negative half-axes removed
-QUESTION [10 upvotes]: Determine whether the fundamental group of $\mathbb{R}^3$ with its non-negative half-axes removed is trivial, infinite cyclic, or isomorphic to the fundamental group of the figure eight space.
- I found this answer (posted as an image, not reproduced here); it introduces loops $\alpha$, $\beta$, $\gamma$ and claims $\alpha*\beta=\gamma$:
-Why do we have that $\alpha*\beta=\gamma$? I can't see how we have this homotopy or deformation.
-PS: I think we are actually supposed to solve this by showing that we can find that the figure eight space is a deformation retract of this space, or homotopy equivalent. Do you see a way of doing this? I cannot really see how to define the deformations.
-
-REPLY [10 votes]: Here's one approach to the question in the postscript:
-If we denote by $X$ the subspace of $\Bbb R^3 - \{ 0 \}$ whose fundamental group we are computing, one can show that the mapping $\Bbb R^3 - \{ 0 \} \to \Bbb S^2 \subset \Bbb R^3$ defined by $x \mapsto \frac{x}{||x||}$ restricts to a map from $X$ onto a sphere with three points deleted, namely $Y := \Bbb S^2 - \{(1, 0, 0), (0, 1, 0), (0, 0, 1)\}$, and this restriction and the inclusion $Y \hookrightarrow X$ together comprise a homotopy equivalence $X \simeq Y$. The thrice-punctured sphere $Y$ is then homeomorphic (via, e.g., stereographic projection from one of the deleted points) to the plane with two points deleted, $\Bbb R^2 - \{p, q\}$, and this space is in turn homotopy equivalent to the figure eight space (see page 3 of these notes of Munkres for some diagrams that indicate how to write down this latter homotopy equivalence explicitly).
-TITLE: Integral $\int_0^\frac{\pi}{2} \left(\operatorname{chi}(\cot^2x)+\text{shi}(\cot^2x)\right)\csc^2(x)e^{-\csc^2(x)}dx$
-QUESTION [7 upvotes]: The following problem was posted here a while ago by Cornel Ioan Valean.
-
-Evaluate:
- $$\int_0^\frac{\pi}{2} [\text{chi}(\cot^2x)+\text{shi}(\cot^2 x)]\csc^2(x)e^{-\csc^2(x)}dx$$
- where $\operatorname{shi}(x)=\int_{0}^{x}\frac{\sinh t}{t}dt$ and $\operatorname{chi}(x)=\gamma +\log(x)+\int_{0}^{x}\frac{\cosh(t)-1}{t} dt.$
-
-I have tried to use integral by parts but I didn't succeed as I crossed this:
-$$\int_0^\frac{\pi}{2}\csc^2(x)e^{-\csc^2(x)}dx$$
-Which is: $\frac {\sqrt{\pi}}{2e}.$
-I don't know how I can complete integration by parts since it doesn't have a closed form.
-Note: I guess this integral is $0$ (integration over closed path).
-
-REPLY [3 votes]: Two solutions can be found in this pdf. Here is another way:
-$$\operatorname{chi}(x)+\operatorname{shi}(x):=\gamma+\ln x+\int_0^x \frac{\cosh t+\sinh t-1}{t}dt=\gamma +\ln x+\int_0^1 \frac{e^{tx}-1}{t}dt$$
-$$\int_0^\frac{\pi}{2}\left(\operatorname{chi}(\cot^2 x)+\operatorname{shi}(\cot^2 x)\right)\csc^2 x\,e^{-(1+\cot^2 x)}dx\overset{\cot x=y}=\frac{1}{e}\int_0^\infty\left(\operatorname{chi}(y^2)+\operatorname{shi}(y^2)\right)e^{-y^2}dy$$
-$$=\frac{1}{e}\int_0^\infty\left(\gamma +\ln y^2\right)e^{-y^2}dy+\frac1e\int_0^\infty e^{-y^2}\int_0^{1}\frac{e^{ty^2}-1}{t}dtdy=\frac1e\left(-\sqrt \pi \ln 2+\sqrt \pi \ln 2\right)=0$$
-
-$$\int_0^\infty e^{-y^2}\ln y^2 dy = -\frac{\sqrt \pi}{2}\left(\gamma +2\ln 2 \right)\Rightarrow \int_0^\infty\left(\gamma +\ln y^2\right)e^{-y^2}dy=-\sqrt{\pi}\ln 2$$
-$$\int_0^\infty e^{-y^2}\int_0^1 \frac{e^{ty^2}-1}{t}dtdy=\int_0^1 \frac{1}{t}\int_0^\infty \left(e^{-y^2(1-t)}-e^{-y^2}\right)dydt$$
-$$=\frac{\sqrt\pi}{2}\int_0^1\frac{1}{t}\left(\frac{1}{\sqrt{1-t}}-1\right)dt\overset{1-t=x^2}=\sqrt \pi \int_0^1 \frac{1}{1+x}dx=\sqrt \pi \ln 2$$
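Since $\operatorname{chi}(y)+\operatorname{shi}(y)=\operatorname{Ei}(y)$ for $y>0$, the reduced integral is easy to confirm numerically; a sketch with mpmath:

```python
# Numerical check that (1/e) * int_0^inf Ei(y^2) e^{-y^2} dy = 0.
from mpmath import mp, quad, exp, ei, inf

mp.dps = 20
val = quad(lambda y: ei(y**2) * exp(-y**2), [0, 1, inf]) / exp(1)
print(val)   # ~ 0 up to quadrature error
```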
-TITLE: Evaluate $\int \frac{x^2}{x^2 -6x + 10}\,dx$
-QUESTION [5 upvotes]: Evaluate $$\int \frac{x^2}{x^2 -6x + 10} \, dx$$
-I'd love to get a hint on how to deal with that numerator, or make it somehow simpler.
-Before posting this, I've looked into: Solve integral $\int{\frac{x^2 + 4}{x^2 + 6x +10}dx}$
-And I've not understood how they simplified the numerator. I know that it has to match $2x-6$ somehow, but the way they put $(x-6)$, multiplied the integral, and suddenly have $x+1$ in the numerator does not make sense to me.
-
-REPLY [2 votes]: Hint:
-$$\frac{x^2}{x^2-6x+10}=1+\frac{6x-10}{x^2-6x+10}$$
-Now, Partial Fractions.
-
-REPLY [2 votes]: Hint : $$\int\frac{x^2}{x^2-6x+10}dx=\int1+\frac{6x-10}{x^2-6x+10}dx$$
-$\dfrac{d}{dx} \ln{(x^2-6x+10)}=\frac{2x-6}{x^2-6x+10}$ , therefore make
-$$\frac{6x-10}{x^2-6x+10}=\frac{3(2x-6)}{x^2-6x+10}+\frac{8}{x^2-6x+10}=3\frac{2x-6}{x^2-6x+10}+8\frac{1}{(x-3)^2+1}$$
-$$I=x+3\ln(x^2-6x+10)+8\arctan{(x-3)}+C$$
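The antiderivative can be verified by differentiation; a quick SymPy check:

```python
# d/dx [x + 3 ln(x^2 - 6x + 10) + 8 arctan(x - 3)] should equal x^2/(x^2 - 6x + 10).
from sympy import symbols, log, atan, diff, simplify

x = symbols('x')
F = x + 3*log(x**2 - 6*x + 10) + 8*atan(x - 3)
print(simplify(diff(F, x) - x**2/(x**2 - 6*x + 10)))   # 0
```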
-TITLE: Binomial sum gives $4^n$
-QUESTION [9 upvotes]: I was looking at this question:Swapping the $i$th largest card between $2$ hands of cards
-and WolframAlpha gave me this result.
-Why is it so? $$\sum_{k=0}^n{2k\choose k}{2n-2k\choose n-k}=4^n?$$
-
-REPLY [11 votes]: Convolution of the series
-$$
-\frac1{\sqrt{1-4x}}=\sum_{n=0}^\infty\binom{2n}{n}x^n
-$$
-gives
-$$
-\begin{align}
-\sum_{n=0}^\infty\sum_{k=0}^n\binom{2k}{k}\binom{2n-2k}{n-k}x^n
-&=\frac1{\sqrt{1-4x}}\frac1{\sqrt{1-4x}}\\
-&=\frac1{1-4x}\\
-&=\sum_{n=0}^\infty4^nx^n
-\end{align}
-$$
-Equating coefficients of $x^n$ gives
-$$
-\sum_{k=0}^n\binom{2k}{k}\binom{2n-2k}{n-k}=4^n
-$$
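The identity is easy to confirm numerically for small $n$:

```python
# Check sum_{k=0}^n C(2k,k) C(2n-2k,n-k) = 4^n for n = 0..7.
from math import comb

for n in range(8):
    assert sum(comb(2*k, k) * comb(2*n - 2*k, n - k) for k in range(n + 1)) == 4**n
print("identity verified for n = 0..7")
```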
-TITLE: $ \lim \frac{a^x-b^x}{x}$ as $x \to 0$ where $a>b>0$
-QUESTION [6 upvotes]: Calculate $ \displaystyle \lim _{x \to 0} \frac{a^x-b^x}{x}$
- where $a>b>0$
-
-My thoughts: I think L'hopital's rule would apply. But differentiating gives me a way more complicated limit. I tried to see if it's the derivative of some function evaluated at a point, but I can't find such function.
-
-REPLY [3 votes]: Application of L'Hospital's rule gives you:
-$$ \lim _{x \to 0} \frac{a^x-b^x}{x} = \lim _{x \to 0} a^x \cdot \log a-b^x \cdot \log b = \log a - \log b = \log {a \over b}$$
-
-REPLY [3 votes]: Your other suggestion also works. Note that our function is equal to
-$$b^x \frac{(a/b)^x-1}{x}.$$
-One can recognize
-$$\lim_{x\to 0}\frac{(a/b)^x-1}{x}$$
-as a derivative.
-Even more simply, we recognize
-$$\lim_{x\to 0} \frac{a^x-b^x-0}{x}$$
-as a derivative.
-
-REPLY [2 votes]: Hint. Let $\alpha$ be any real number. One may recall that, as $u \to 0$, one has
-$$
-\lim _{u \to 0} \frac{e^{\alpha u}-1}{u}=\alpha
-$$ then write
-$$
- \frac{a^x-b^x}{x}= \frac{e^{x\ln a}-1}{x}- \frac{e^{x\ln b}-1}{x}.
-$$
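A numerical look at the limit, with $a=3$, $b=2$ as arbitrary test values:

```python
# (a^x - b^x)/x should approach log(a/b) as x -> 0.
from math import log

a, b = 3.0, 2.0
for x in (0.1, 0.01, 0.001, 1e-6):
    print(x, (a**x - b**x) / x)
print("log(a/b) =", log(a / b))
```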
-TITLE: The probability that in a game of bridge each of the four players is dealt one ace
-QUESTION [7 upvotes]: The question is to show that the probability that each of the four players in a game of bridge receives one ace is $$ \frac{24 \cdot 48! \cdot13^4}{52!}$$ My explanation so far is that there are $4!$ ways to arrange the 4 aces, $48!$ ways to arrange the other cards, and since each arrangement is equally likely we divide by $52!$. I believe the $13^4$ represents the number of arrangements to distribute 4 aces among 13 cards, but I don't see why we must multiply by this value as well?
-
-REPLY [4 votes]: After the cards have been shuffled and cut, there are $\binom{52}4$ equally likely possibilities for the set of four positions in the deck occupied by the aces. Among those $\binom{52}4$ sets, there are $13^4$ which result in each player getting an ace; namely, make one of the $13$ cards to be dealt to South an ace, and the same for West, North, and East. So the probability is
-$$\frac{13^4}{\binom{52}4}=\frac{4!\cdot48!\cdot13^4}{52!}=\frac{2197}{20825}\approx 0.1055$$
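A quick computational check of this probability:

```python
# Exact probability that each bridge player receives one ace.
from fractions import Fraction
from math import comb

p = Fraction(13**4, comb(52, 4))
print(p, float(p))   # 2197/20825 ~ 0.1055
```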
-TITLE: Classification theorem for vector spaces
-QUESTION [8 upvotes]: As I was going over the classification theorem for closed surfaces today, the text I'm reading gave another example of a classification theorem: finite dimensional vector spaces are classified by their dimension. As a point of fact, I think that this is slightly wrong -- if I understand what the author was trying to say I think that finite dimensional vector spaces are classified by their dimension and their base field. Then he's just talking about that theorem that says that any $\Bbb F$-vector space with dimension $n$ is isomorphic to $\Bbb F^n$.
-His use of the word "finite" though has me wondering, does the same thing hold for infinite dimensional vector spaces? Can we "classify" infinite dimensional vector spaces by their dimensions (meaning $\aleph_0$, $\aleph_1$, etc) and base fields? Or is there something more complex that happens for infinite dimensional spaces?
-
-REPLY [3 votes]: Yes, vector spaces are always classified by their dimension, and the argument is as you would expect: if $\text{dim }V=\text{dim }W$, then we can choose a bijection between a basis of $V$ and a basis of $W$, and this bijection induces an isomorphism $V\cong W$. (Note, however, that the existence of a basis for every (infinite-dimensional) vector space requires the use of the Axiom of Choice/Zorn's Lemma.)
-An interesting application of this fact is that the additive group of $\mathbb{C}$ is isomorphic to the additive group of $\mathbb{R}$ - indeed, both are $\mathfrak{c}$-dimensional $\mathbb{Q}$-vector spaces, where $\mathfrak{c}$ denotes the cardinality of the continuum.
-As @Tryss points out in the comments, there are different notions of isomorphism for infinite-dimensional vector spaces decorated with additional mathematical structure, e.g., normed linear spaces (vector spaces equipped with a norm), Banach spaces (complete normed linear spaces), inner product spaces (vector spaces equipped with an inner product), Hilbert spaces (complete inner product spaces) etc. Hilbert spaces are completely classified up to isomorphism by their dimension (which is at most countable assuming separability) but Banach spaces are not. However, this is a separate mathematical theory, so I won't comment further unless you would like me to (and in the meantime, would direct you to the relevant Wikipedia articles).
-Hope that helps!<|endoftext|>
-TITLE: Small Representations of $2016$
-QUESTION [13 upvotes]: It's the new year at least in my timezone, and to welcome it in, I ask for small representations of the number $2016$.
-Rules: Choose a single decimal digit ($1,2,\dots,9$), and use this chosen digit, combined with operations, to create an expression equal to $2016$. Each symbol counts as a point, and the goal is to minimize the total number of points.
-Example (my best so far):
-$$2016=\frac{(4+4)!}{4!-4}$$
-This expression scores 11 points. That is 2 points for the parentheses, 4 points for the $4$s, 1 point for $+$, 2 points for the $!$s, 1 point for the fraction, and 1 point for the $-$.
-Allowable actions: basic arithmetic operations (addition, subtraction, multiplication, division), exponentiation, factorials, repeated digits (i.e. if you are working with the digit $7$, you can use $77$ for 2 points), and use of parentheses.
-
-What is the minimum number of points for an expression of the above form equaling 2016, and what are those minimum expressions?
-
-Note that by "Use a single decimal digit" I mean you may only use one of the digits $1$ through $9$, so for example, you can't save in the above expression just by using $8!$ instead of $(4+4)!$ because you would still have the $4!-4$ part.
-This question is mostly for fun, but could have some relevance to students who participate in thematic math competitions this year.
-
-REPLY [3 votes]: Here's a devil of an answer:
-$$2016=666+666+666+6+6+6$$
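Both displayed representations are one-liners to check:

```python
# Check the two expressions for 2016 given above.
from math import factorial

assert factorial(4 + 4) // (factorial(4) - 4) == 2016
assert 666 + 666 + 666 + 6 + 6 + 6 == 2016
print("both equal 2016")
```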
-TITLE: Show that the square root of a non-negative operator is unique
-QUESTION [7 upvotes]: Let $H$ be a Hilbert space, and $A\in B(H\to H)$ be a bounded non-negative operator (i.e. $\langle Ax,x\rangle \geq 0$ for all $x\in H$). The square root of $A$ is a bounded non-negative operator $B\geq 0$ such that $B^2=A$.
-First, we can assume without loss of generality that $0\leq A\leq I$. Note that $B^2=A$ if and only if
-$$I-B=\frac{1}{2}((I-A)+(I-B)^2).$$
-Hence, we define inductively a sequence $C_n$ of operators as follows: $C_0:=0$, and $C_{n+1}:=\frac{1}{2}\left((I-A)+C_n^2\right)$. Then it is easy to see that $C_n$ converges in the strong operator topology to a bounded operator $C$ with $0\leq C\leq I$, and $B:=I-C$ is a bounded non-negative operator with $B^2=A$; thus the square root exists, but I don't know how to show that it is unique.
-
-REPLY [6 votes]: Suppose $A$ is a bounded nonnegative operator on a Hilbert space. Let $(p_n)$ be a sequence of polynomials such that
-$$
-p_n(x) \to \sqrt{x}
-$$
-uniformly for $x$ in the interval $[0, \|A\|]$ (the Weierstrass Approximation Theorem implies the existence of such a sequence of polynomials).
-Now suppose $B$ is a nonnegative square root of $A$. Let $\mathcal{B}$ denote the norm closed algebra generated by $B$. Then $\mathcal{B}$ is a commutative $C^*$-algebra that contains $B$ and $A$ (because $A = B^2$). Thus there is a compact Hausdorff space $K$ such that $\mathcal{B}$ is isomorphic as a $C^*$-algebra to $C(K)$. This isomorphism preserves all $C^*$ properties. Thus $A$ corresponds to some nonnegative function $f \in C(K)$ taking values in $[0, \|A\|]$ and $B$ must correspond to the function $\sqrt{f}$ (which is the only nonnegative square root of $f$ in $C(K)$).
-Because $p_n \circ f$ converges uniformly to $\sqrt{f}$ on $K$, we conclude that $p_n(A)$ converges in operator norm to $B$. But the polynomials $(p_n)$ were chosen independently of $B$. Thus $B$ is uniquely determined as the nonnegative square root of $A$.
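The finite-dimensional analogue of this argument is easy to see numerically; below is a sketch (assuming NumPy/SciPy are available) in which a Chebyshev fit of $\sqrt{x}$ on $[0,\|A\|]$ is applied to a random positive semidefinite matrix and compared with its PSD square root:

```python
# Polynomial approximations of sqrt on [0, ||A||], applied to A, approach the
# PSD square root of A, independently of how that square root is presented.
import numpy as np
from numpy.polynomial import Chebyshev
from scipy.linalg import sqrtm

rng = np.random.default_rng(0)
M = rng.standard_normal((4, 4))
A = M @ M.T                              # a random PSD matrix
norm_A = np.linalg.norm(A, 2)

xs = np.linspace(0.0, norm_A, 500)
p = Chebyshev.fit(xs, np.sqrt(xs), deg=60, domain=[0, norm_A])

w, V = np.linalg.eigh(A)                 # evaluate p(A) spectrally (A is symmetric)
pA = (V * p(w)) @ V.T
print(np.linalg.norm(pA - sqrtm(A)))     # small, and shrinks as deg grows
```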
-TITLE: Center of the Quaternions: Proof and Method
-QUESTION [9 upvotes]: I have to calculate the center of the real quaternions, $\mathbb{H}$.
-So, I took two real quaternions, $q_n=a_n+b_ni+c_nj+d_nk$, and computed their products. I assume that, since we are dealing with rings, the thing to check is commutativity under multiplication, so I'm looking at $q_1q_2=q_2q_1$. When I do this, I find that the constant terms are clearly identical, so the subset $\mathbb{R}$ is in the center. So perhaps even $\mathbb{C}\le\mathbb{H}$ lies in the center. However, after direct calculation I ended up with the following system:
-$$c_1d_2=c_2d_1$$
-$$b_1d_2=b_2d_1$$
-$$b_1c_2=b_2c_1$$
-So the determination is then found by solving this system. Intuitively, I felt that this lead to $0$'s everywhere and thus the center of $\mathbb{H}$, $Z(\mathbb{H})=\mathbb{R}$. I then checked online for some confirmation and indeed it seemed to validate my result. However, the proof method used is something I haven't seen. It was pretty straight forward and understandable, but again, I've never seen it. It goes like this;
-Suppose $b_1,c_1,$ and $d_1$ are arbitrary real coefficients and $b_2, c_2,$ and $d_2$ are fixed. Considering the first equation, assume that $d_1=1$ (since it is arbitrary, it's value can be any real...). This leads to
-$$c_1=\frac{c_2}{d_2}$$
-And that this is a contradiction, since $c_1$ is no longer arbitrary (it depends on $c_2$ and $d_2$)
-I really like this proof method, although it is unfamiliar to me. I said earlier that for my own understanding, it seemed intuitively obvious, but that is obviously not proof:
-1) What are some other proof methods for solving this system other than the method of contradiction used below? I was struggling with this and I feel I shouldn't be.
-2) What other proofs can be found in elementary undergraduate courses that use this method of "assume arbitrary stuff", and "fix some other stuff" and get a contradiction? I found this method very clean and fun, but have never seen it used (as far as I know) in any elementary undergraduate courses thus far...
-
-REPLY [14 votes]: I am not sure where the contradiction lies exactly in your proof by contradiction. But here is another method.
-An element $x\in \mathbb H$ belongs to the center if and only if $[x,y]=0$ for all $y\in \mathbb H$, where $[x,y]=xy-yx$ denotes the commutator of two elements.
-We see immediately that $[x,1]=0$, whereas if $x=a+bi+cj+dk$ we have
-$$
-[x,i]=-2ck+2dj.
-$$
-Thus $[x,i]=0$ if and only if $c=d=0$. Similarly $[x,j]=0$ if and only if $b=d=0$. Thus the only elements $x$ which commute with both $i$ and $j$ are $x\in \mathbb R$; in particular, it follows that $Z(\mathbb H)\subset \mathbb R$. Since it is clear that $\mathbb R\subset Z(\mathbb H)$, the result follows.
-Idea behind the proof: There are three special copies of the complex numbers sitting inside $\mathbb H$: the subspaces
-$$
-\mathbb C_i=\mathbb R[i],\qquad \mathbb C_j=\mathbb R[j],\qquad \mathbb C_k=\mathbb R[k].
-$$
-Over $\mathbb H$, all of these subspaces are their own centers: $Z_{\mathbb H}(\mathbb C_i)=\mathbb C_i$ and so forth. Since $$\mathbb H=\mathbb C_i+ \mathbb C_j+ \mathbb C_k,$$
-it follows that $Z(\mathbb H)=Z_{\mathbb H}(\mathbb C_i)\cap Z_{\mathbb H}(\mathbb C_j)\cap Z_{\mathbb H}(\mathbb C_k)=\mathbb R$.
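The commutator computations are easy to reproduce symbolically; a sketch using SymPy's quaternion class:

```python
# [x, i] and [x, j] for x = a + b*i + c*j + d*k.
from sympy import symbols
from sympy.algebras.quaternion import Quaternion

a, b, c, d = symbols('a b c d', real=True)
x = Quaternion(a, b, c, d)
i = Quaternion(0, 1, 0, 0)
j = Quaternion(0, 0, 1, 0)

print(x*i - i*x)   # 2*d*j - 2*c*k  (vanishes iff c = d = 0)
print(x*j - j*x)   # -2*d*i + 2*b*k (vanishes iff b = d = 0)
```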
-TITLE: An Odd Mean Value Theorem Problem
-QUESTION [11 upvotes]: If $f: [x_1,x_2] \to \mathbb{R}$ is differentiable, show for some $c \in (x_1,x_2)$ that
-$$
-\frac{1}{x_1-x_2} \left| \begin{matrix} x_1 & x_2 \\
-f(x_1) & f(x_2) \end{matrix} \right|=f(c)-cf'(c)
-$$
-My attempt: Actually taking the determinant, multiplying by a negative, and carrying across the denominator on the left gives
-$$
-x_1f(x_2)-x_2f(x_1)=(-f(c)+cf'(c)) \cdot (x_2-x_1)
-$$
-and this screams Mean Value Theorem. So I took the function $g(x)=(x_2+x_1-x)f(x)$ which is clearly differentiable on $[x_1,x_2]$, then by the Mean Value Theorem, we know there is a $c \in (x_1,x_2)$ such that
-$$
-g(x_2)-g(x_1)=g'(c)(x_2-x_1)
-$$
-But for our function $g(x)$, we know $g(x_2)=x_1f(x_2)$ and $g(x_1)=x_2f(x_1)$. Moreover, $g'(x)= - f(x) + (x_2+x_1-x)f'(x)$. Then this gives
-$$
-x_1f(x_2)-x_2f(x_1)=(-f(c)+(x_2+x_1-c)f'(c))(x_2-x_1)
-$$
-which is so close to what we wanted to show that I do not see how this could not be the correct approach. Have I missed something or is the result false?
-
-REPLY [8 votes]: Using Cauchy's Mean Value Theorem with functions $\frac{f(x)}x$ and $\frac1x$, we get
-$$
-\frac{x_1f(x_2)-x_2f(x_1)}{x_1-x_2}=\frac{\frac{f(x_2)}{x_2}-\frac{f(x_1)}{x_1}}{\frac1{x_2}-\frac1{x_1}}
-=\frac{\frac{cf'(c)-f(c)}{c^2}}{-\frac1{c^2}}=f(c)-cf'(c)
-$$
-for some $c\in(x_1,x_2)$. For this, I believe we may need that $0\not\in[x_1,x_2]$.
-TITLE: What is the intuition for permuting $n$ objects where $p$ are alike
-QUESTION [6 upvotes]: If we have $n$ objects of which $p$ are alike and the rest are all different, then the number of permutations is $\frac{n!}{p!}$. Is there some intuition for why this is correct? Why do we have to divide by $p!$?
-
-REPLY [4 votes]: Suppose you have $n$ objects where $p$ are alike. If you treat them like they're all different, then the number of permutations is $n!$. But since $p$ of the objects are alike this won't do: the $n!$ counts the same arrangement multiple times, once for each way of ordering the $p$ alike objects among themselves. So to avoid overcounting we divide by the number of ways to arrange $p$ objects, hence $\frac{n!}{p!}$.
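A brute-force count for a small multiset makes this concrete:

```python
# n = 5 objects with p = 3 alike: expect 5!/3! = 20 distinct arrangements.
from itertools import permutations
from math import factorial

items = ['a', 'a', 'a', 'b', 'c']
print(len(set(permutations(items))), factorial(5) // factorial(3))   # 20 20
```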
-TITLE: Prove that $a\sqrt{a^2+bc}+b\sqrt{b^2+ac}+c\sqrt{c^2+ab}\geq\sqrt{2(a^2+b^2+c^2)(ab+ac+bc)}$
-QUESTION [44 upvotes]: Let $a$, $b$ and $c$ be non-negative numbers. Prove that:
- $$a\sqrt{a^2+bc}+b\sqrt{b^2+ac}+c\sqrt{c^2+ab}\geq\sqrt{2(a^2+b^2+c^2)(ab+ac+bc)}.$$
-
-I have a proof, but my proof is very ugly:
-it's enough to prove a polynomial inequality of degree $15$.
-I am looking for an easy proof, or maybe a long but smooth one.
-
-REPLY [6 votes]: $\sum\limits_{cyc}a\sqrt{a^2+bc}\geq\sqrt{2(a^2+b^2+c^2)(ab+ac+bc)}\Leftrightarrow$
-$\Leftrightarrow\sum\limits_{cyc}\left(a^4+a^2bc+2ab\sqrt{(a^2+bc)(b^2+ac)}\right)\geq\sum\limits_{cyc}(2a^3b+2a^3c+2a^2bc)\Leftrightarrow$
-$\sum\limits_{cyc}(a^4-a^3b-a^3c+a^2bc)\geq\sum\limits_{cyc}\left(a^3b+a^3c+2a^2bc-2ab\sqrt{(a^2+bc)(b^2+ac)}\right)\Leftrightarrow$
-$\Leftrightarrow\frac{1}{2}\sum\limits_{cyc}(a-b)^2(a+b-c)^2\geq\sum\limits_{cyc}ab\left(a^2+bc+b^2+ac-2\sqrt{(a^2+bc)(b^2+ac)}\right)\Leftrightarrow$
-$\Leftrightarrow\sum\limits_{cyc}(a-b)^2(a+b-c)^2\geq2\sum\limits_{cyc}ab\left(\sqrt{a^2+bc}-\sqrt{b^2+ac}\right)^2\Leftrightarrow$
-$\Leftrightarrow\sum\limits_{cyc}(a-b)^2(a+b-c)^2\left(1-\frac{2ab}{\left(\sqrt{a^2+bc}+\sqrt{b^2+ac}\right)^2}\right)\geq0$, which is obvious.
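A randomized numerical check of the original inequality (not a proof, just a sanity test of the algebra above):

```python
# Test a*sqrt(a^2+bc) + b*sqrt(b^2+ac) + c*sqrt(c^2+ab) >= sqrt(2(a^2+b^2+c^2)(ab+ac+bc)).
import random
from math import sqrt

random.seed(1)
for _ in range(100000):
    a, b, c = (random.uniform(0, 10) for _ in range(3))
    lhs = a*sqrt(a*a + b*c) + b*sqrt(b*b + a*c) + c*sqrt(c*c + a*b)
    rhs = sqrt(2*(a*a + b*b + c*c)*(a*b + a*c + b*c))
    assert lhs >= rhs - 1e-9
print("no counterexample found")
```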
-TITLE: Difference between "real functions" and "real-valued functions"
-QUESTION [8 upvotes]: According to my textbook:
-
-A function which has either $\mathbb R$ or one of its subsets as its range is called a real valued function. Further, if its domain is also either $\mathbb R$ or a subset of $\mathbb R$, it is called a real function.
-
-As there are 2 definitions here, is there a difference between "real functions" and "real-valued functions"?
-MathWorld says that a real function is also called a real-valued function.
-
-REPLY [6 votes]: According to these definitions, any function $\mathbb C\to\mathbb R$ (for example, $z\mapsto |z|$) will be a real-valued function but not a real function.
-As your research shows, this usage is not universal -- there can't be much disagreement about what a "real-valued function" is, but how the words "real function" are used can depend on the author and field.<|endoftext|>
-TITLE: Mathematical meaning of "may not"
-QUESTION [5 upvotes]: Does "may not" mean that something is never allowed, or that it is sometimes not allowed? For example: the sequence may not converge. Does this mean that the sequence never converges, or that there is no guarantee that the sequence converges?
-
-REPLY [5 votes]: In everyday English, the construction "$X$ may not $Y$" can mean either that it is not allowed for $X$ to do $Y$, or that it is possible/conceivable that $X$ does not do $Y$. One needs to look to context and semantics to find out which of these is the case.
-In mathematics it is unusual to speak about permission at all -- mathematical objects do whatever they do whether we want them to or not -- so generally the only meaning that makes sense in a mathematical context is that it is possible that $X$ does not do $Y$.<|endoftext|>
-TITLE: Prove that $\sum_{n=1}^{\infty}\left(\frac{1}{(8n-7)^3}-\frac{1}{(8n-1)^3}\right)=\left(\frac{1}{64}+\frac{3}{128\sqrt{2}}\right)\pi^3$
-QUESTION [5 upvotes]: Prove that $$\sum_{n=1}^{\infty}\left(\frac{1}{(8n-7)^3}-\frac{1}{(8n-1)^3}\right)=\left(\frac{1}{64}+\frac{3}{128\sqrt{2}}\right)\pi^3$$
-I have no idea how to start.
-
-REPLY [2 votes]: This answer uses the polygamma function
-$$
-\psi^{(2)}(z) = -\int_0^1 \frac{t^{z-1}}{1-t}\ln^2t dt.
-$$
-First, using the expansion of $(1-t)^{-1}$, the polygamma function $\psi^{(2)}(x)$ can be written as
-\begin{align}
-\psi^{(2)}(z) &= -\sum_{n=0}^\infty \int_0^1 t^{n+z-1}\ln^2 tdt \cr
- &= -\sum_{n=0}^\infty \int_0^\infty s^2 e^{-(n+z)s}ds
-\qquad \qquad \qquad (t=e^{-s}) \cr
-&= -2\sum_{n=0}^\infty \frac{1}{(n+z)^3}.
-\end{align}
-Therefore,
-$$
-\sum_{n=1}^{\infty}\left(\frac{1}{(8n-7)^3}-\frac{1}{(8n-1)^3}\right) = \frac{1}{1024}\left(\psi^{(2)}\left(\frac{7}{8}\right)
--\psi^{(2)}\left(\frac{1}{8}\right)\right).
-$$
-Using the reflection relation
-$$
-\psi^{(2)}(1-z)-\psi^{(2)}(z) = \pi\frac{d^2}{dz^2}\cot \pi z
-$$
-with $z=1/8$, the sum can be written as
-$$
-\left.\frac{\pi}{1024}\frac{d^2}{dz^2}\cot \pi z\right|_{z=1/8}
-=\frac{\pi^3}{512}\left(
-\cot\frac{\pi}{8}+\cot^3\frac{\pi}{8}\right).
-$$
-Finally, using trigonometric identities (half angle),we can get $\cot \pi/8 = 1+\sqrt{2}$, and
-$$
-\sum_{n=1}^{\infty}\left(\frac{1}{(8n-7)^3}-\frac{1}{(8n-1)^3}\right)
-=\frac{\pi^3}{512}\Big(
-1+\sqrt{2}+(1+\sqrt{2})^3
-\Big)=\frac{\pi^3}{256}(4+3\sqrt{2}).
-$$<|endoftext|>
-TITLE: Derivative of intersection volume
-QUESTION [6 upvotes]: Let $K$ be a convex body in $\mathbb{R}^n$ and set $f:\textrm{SL}(n)\rightarrow \mathbb{R}$ as $f(T)=\textrm{Vol}_n (TB\cap K)$ where $B$ is the Euclidean unit ball. How can we find extreme points of $f$?
-What I'm looking for is some Taylor expansion of $f$, so I may write for matrices such as $Q=I_n + \epsilon F$ something in the line of $$f(Q)=f(I_n)+\epsilon f'(Q)$$
-where $f'$ is a directional derivative of some sort of $f$. I believe this should amount to something like $f'(T)=\textrm{Vol}_{n-1} (\partial TB\cap K)$, but this is pure intuition, I'm not sure how this can be proven.
-
-REPLY [3 votes]: Let us first formalise the idea of directional derivatives of matrices
-Derivatives of variable matrices are usually expressed as Lie derivatives. The basic object is a Lie group, i.e., a differentiable manifold that has a group structure such that the group operations are differentiable. In our case this is the $(n^2-1)$-dimensional manifold $SL(n)$ consisting of all real $n\times n$ matrices with determinant $1.$ They are also precisely the linear transformations of $\mathbb R^n$ that preserve volume and orientation.
-Lie theory considers $1$-parameter subgroups: differentiable homomorphisms from the simplest possible Lie group $(\mathbb R,+)$ to the Lie group under study:
-$$T:\mathbb R\to SL(n):t\mapsto T_t,\hskip1cm T_{s+t}=T_sT_t.$$
-The derivatives at $0$ of all such possible subgroups form the tangent space of the differentiable manifold at the unit element $T_0=I$, which in this context is called the Lie algebra. Our Lie algebra is denoted ${\mathfrak{sl}(n)}$ and it consists of all $n\times n$ matrices with trace $0.$
-The one-parameter group generated by a matrix $A\in\mathfrak{sl}(n)$ is given by the exponential mapping
-$$\exp:\mathfrak{sl}(n)\to SL(n):A\mapsto\exp(A)=\sum_{i=0}^\infty\frac{A^i}{i!}$$
-which answers your question about a power series expansion.
-The derivative of $\textrm{Vol}_n (T_tB\cap K)$ is more easily evaluated if we replace the indicator functions of the compact sets $B$ and $K$ with differentiable functions $\phi$ and $\psi$ that approximate them. So we are looking at the quantity
-$$V_t=\int_{\mathbb R^n}\phi(T_t^{-1}x)\psi(x)dx$$
-Let us evaluate the derivative of $V_t$ at $t=0.$
-$$\eqalign{
-\frac{dV_t}{dt}(t=0)&=\frac{d}{dt}\int_{\mathbb R^n}\phi(T_t^{-1}x)\psi(x)dx\\
-&=\int_{\mathbb R^n}\frac{d\phi(T_t^{-1}x)}{dt}\psi(x)dx\\
-&=\int_{\mathbb R^n}\nabla\phi\cdot (-A)x\psi(x)dx\\
-}$$
-As $\phi$ approaches the indicator of $K$ its gradient converges to a distribution that is concentrated on $\partial K$ and models the inward normal $(-n)$ of $K$ (since $K$ is convex its boundary has an inward normal almost everywhere). Thus we have
-$$\eqalign{
-\frac{dV_t}{dt}(t=0)&=\int_{\partial K\cap B}Ax\cdot n\ dS\\
-}$$
-Alternatively, notice that $Ax$ is a divergence-free vector field (because the trace of $A$ is $0$) so the integral is also equal to
-$$\eqalign{
-\frac{dV_t}{dt}(t=0)&=\int_{\partial B\cap K}Ax\cdot n\ dS\\
-}$$
-(the reason why these two integrals do not have opposite signs, as one would expect from partial integration, is that the interpretation of $n\ dS$ as an outward normal vector is different according to whether the 'outward' means out of $K$ or out of $B$)
-The second integral is different from your intuitive idea but there is a close resemblance.
-Higher derivatives of $V_t$ are not guaranteed to exist without additional conditions on the shape of $K.$ This can be intuitively understood by noticing that the first derivative is an integral where not only the integrand, but also the area of integration depends on $t.$ In fact the first derivative need not be a continuous function as can be seen in $2$ dimensions by letting $B=B(c=(5;0),r=1),$ $K$ the upper half of $B$ and $A=\left(\begin{matrix}0&1\\-1&0\end{matrix}\right)$ (generator of the rotations around the origin).<|endoftext|>
-TITLE: Determine the Size of a Test Bank
-QUESTION [8 upvotes]: Suppose you have two people take an exam which is composed of 30 questions which are randomly chosen from a test bank of n questions.
-Person A and Person B both take different randomly generated instances of the exam, and then compare the question sets they were given. Person B notices that 7 of their 30 questions were repeated from Person A's question set.
-Is there any way to deduce the likely total number of questions in the test bank given you know 7/30 of them were repeated in a second instance of the exam? Obviously you would not get an exact value, but could you determine a range of probabilities for each different size of the test bank? How would you go about solving this?
-Thank you!
-
-REPLY [4 votes]: The minimum number in the pool must be $53$. Suppose there are $n$ in total.
-So it's like if you had an urn with $n$ balls, $30$ are white and $n-30$ are red. Then you pull $30$ balls at random. You want to know how many of the balls you pulled are white. Or more specifically you want to know the probability that $7$ of the $30$ you pull are white.
-Let $A$ be the number of white balls. Then $P(A=k)$ is hypergeometric and equal to
-$\frac{{{30}\choose{k}}{ {n-30}\choose{30-k}}}{{n}\choose{30}}$
-So in your case:
-$\frac{{{30}\choose{7}}{{n-30}\choose{23}}}{{n}\choose{30}}$
-This is the probability of an overlap of exactly $7$.
-You now need to find the $n$ that maximizes that probability.
-If you start plugging in numbers (using a calculator) starting at $n=53$, you'll probably see that the probability goes up and then soon starts to go back down. Choose the max before it starts going back down. It shouldn't be too much larger than $53$; I'm guessing somewhere around $100$.
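The scan is immediate by computer; as a sketch, the maximizer agrees with the classical capture-recapture estimate $\lfloor 30\cdot 30/7\rfloor = 128$:

```python
# Scan bank sizes n >= 53 for the n maximizing P(exactly 7 repeats).
from math import comb

def p(n, k=7, m=30):
    return comb(m, k) * comb(n - m, m - k) / comb(n, m)

best = max(range(53, 1000), key=p)
print(best, p(best))   # prints 128 and the maximal probability
```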
-TITLE: Equivalent definitions of meromorphic function
-QUESTION [5 upvotes]: My complex analysis course gives the following definition of a meromorphic function:
-"A function $f\colon A \rightarrow \mathbb{C}$ with $A\subset \mathbb{C}$ is meromorphic if it is holomorphic on $A$ except for isolated singularities, which should be poles. "
-Searching through the web, I found that some authors or websites define a meromorphic function as a quotient of two holomorphic functions e.g. the wikipedia page mentions this: https://en.wikipedia.org/wiki/Meromorphic_function
-Could anybody give me a brief outline of the proof why these two definitions are equivalent, or give me an (internet-accessible) reference?
-Thanks in advance!
-
-REPLY [2 votes]: The equivalence of the two "definitions" is a consequence of Weierstrass Factorisation Theorem, which has a relatively long proof.<|endoftext|>
-TITLE: When is a group isomorphic to the product of normal subgroup and quotient group?
-QUESTION [6 upvotes]: Let $H$ be a normal subgroup in $G$. When is $G$ isomorphic to $H\times (G/H)$?
-I think it's always true in the abelian case. Are there other rules?
-
-REPLY [16 votes]: It's not true in the abelian case. The smallest counterexample is $G = \mathbb{Z}_4, H = \mathbb{Z}_2$; the groups $\mathbb{Z}_4$ and $\mathbb{Z}_2 \times \mathbb{Z}_2$ are not isomorphic.
-In general you need $H$ to be normal for this question to make sense. Then $G$ is an extension of $G/H$ by $H$. The classification of these is difficult in general and usually there are interesting extensions other than the trivial extension $H \times G/H$. When all three groups are abelian the classification is in terms of the Ext group $\text{Ext}^1(G/H, H)$.
-TITLE: Absolute Continuity of the sum of two Cantor random variables
-QUESTION [5 upvotes]: If we have two independent random variables each having a Cantor distribution is there an easy way to see that the distribution of their sum is not absolutely continuous?
-I am pretty sure that if we let $S_n$ be the set of positive integers having an $n$ digit ternary expansion (leading zeros included) with $n/2$ or more 1's, and let
-$$T_n = \left\{\frac{2s+1}{3^n}:s\in S_n\right\}$$
- Then our random variable has more than a 50-50 chance of being within $3^{-n-1}$ of a member of $T_n$. As the number of intervals grows as $2^n$, and their width shrinks as $3^{-n}$, the measure of the whole thing goes to 0 as $n$ goes to infinity. (It took some handwaving and arithmetic to get here, so don't trust me.)
-In the best of all possible worlds, there would be an argument that settles the absolute continuity of the sum of three (or any number of) independent Cantor random variables.
-
-REPLY [3 votes]: The characteristic function of Cantor distribution solves functional equation
-$$
-\varphi_C(t) = \frac12(e^{i2t/3} + 1) \varphi_C(t/3)\tag{1}
-$$
-(there is an explicit formula with cosines, but this is enough for us).
-In particular, $\varphi_C(\pi) = \varphi_C(3\pi) = \varphi_C(3^2\pi) = \dots$ If this value were zero, then we would get from (1) that $\varphi_C(3^{-n}\pi) = 0$, $n\ge 1$, which contradicts the continuity of $\varphi_C$ at $0$ (alternatively, you can use the formula with cosines to argue that it is not zero).
-So for each $n\ge 1$ we have $$0\neq \varphi_C(\pi)^n = \varphi_C(3\pi)^n = \varphi_C(3^2\pi)^n = \dots\tag{2}$$ If the sum of $n$ independent Cantor variables had a density, we would have $\varphi_C(t)^n\to 0$, $t\to\infty$ by the Riemann-Lebesgue lemma, contradicting (2).
-
-I prefer the argument using the recursion (1) to the one with the explicit formula, as it (or some clever modification) can be used to prove that other random series also fail to have a density.
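Iterating (1) down from $t$ gives the product representation $\varphi_C(t)=\prod_{k\geq1}\frac{1+e^{2it/3^{k}}}{2}$, which makes the nonvanishing of $\varphi_C(\pi)$ easy to check numerically:

```python
# |phi_C(pi)| via the product phi_C(t) = prod_{k>=1} (1 + exp(2it/3^k)) / 2.
from mpmath import mp, mpc, exp, pi, fprod

mp.dps = 20
t = pi
val = fprod((1 + exp(mpc(0, 2) * t / 3**k)) / 2 for k in range(1, 80))
print(abs(val))   # ~ 0.47, clearly nonzero
```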
-TITLE: Is it acceptable style to mix equalities and inequalities in one line
-QUESTION [11 upvotes]: Is this considered bad style
-$$2 = \sqrt{4} < \sqrt{16} = 4?$$
-It seems as though this is not strictly correct, since $2 = \sqrt{4}$ is a logical proposition which represents a boolean value (true or false). A boolean value cannot be less than $\sqrt{16}$.
-On the other hand, I am sure that most people will correctly interpret this as shorthand for $2 = \sqrt{4},$ $\sqrt{4} < \sqrt{16},$ and $\sqrt{16} = 4$
-
-REPLY [8 votes]: It's fine - people write that way all the time. But don't ever do this: $$1\le b=c>d.$$
-
-Edit: Various people have commented, saying that there's nothing wrong with the above. Perhaps not; it bothers me, but I'm not going to insist that it's wrong. If I claimed I didn't actually say it was wrong people would say I was being pedantic.
-One person points out that if you write the above it certainly is wrong to deduce a relationship between $1$ and $d$. And that's the problem - in my experience in "beginning analysis" classes students who write things like what's above do tend to draw incorrect conclusions. So I'm going to just rephrase what I said: "Wrong or not, don't do that. It's a bad idea."<|endoftext|>
-TITLE: difference between the dual space of $H^1(\Omega)$ and the dual of $H^1_0(\Omega)$
-QUESTION [5 upvotes]: In Partial Differential Equations by Evans (2nd edition, p. 299), $H^{-1}(\Omega)$ denotes the dual space to $H^1_0(\Omega)$, where $\Omega$ is an open subset of $\mathbb{R}^n$ and $H^1(\Omega)=W^{1,2}(\Omega)$, $H^1_0(\Omega)=W^{1,2}_0(\Omega)$:
-$$
-W^{1,2}_0(\Omega)=\overline{C_c^\infty(\Omega)}^{\|\cdot\|_{W^{1,2}(\Omega)}}
-$$
-While in Navier-Stokes Equations by Constantin and Foias (p. 7), $H^{-1}(\Omega)$ denotes the dual space of $H^1(\Omega)$.
-Let $X$ be the (continuous) dual of $H^1(\Omega)$ and $Y$ the dual of $H^1_0(\Omega)$. One has that $X\subset Y$.
-Here is my question:
-
-Could somebody describe the difference between $X$ and $Y$?
-
-REPLY [3 votes]: Every element of $W^{m,p}(\Omega)'$ is the continuous extension of a distribution. However, the extensions are non-unique. By restricting oneself to $W^{m,p}_0(\Omega)$ the extensions are unique. This provides a characterization of the dual of $W^{m,p}_0(\Omega)$ as the space of all distributions $T \in D'(\Omega)$ of the form
-$$T=\sum_{0\leq|\alpha|\leq m}(-1)^{|\alpha|}D^\alpha T_{v_\alpha}$$
-where $v_\alpha \in L^{p'}$ and $T_{v_\alpha}(\phi)=\langle \phi,v_\alpha \rangle$, the duality pairing.
-(This is explained in detail in Adams's book on Sobolev spaces, in the section on "Duality and the spaces $W^{-m,p'}(\Omega)$".)
-TITLE: References request for prerequisites of topology and differential geometry
-QUESTION [11 upvotes]: I am studying differential geometry and topology by myself. Not being a math major person and do not have rigorous background in analysis, manifolds, etc. I have background in intermediate linear algebra and multivariate calculus. To embark on the study, I delved into stackexchange past answers and other websites.
-
-Teaching myself differential topology and differential geometry: lists many good references
-Introductory texts on manifolds
-recommending books for intro to diff. geometry
-Reference for Topology and Geometry
-Good introductory book on Calculus on Manifolds : This answer suggests a 'gentle' book - Topology Without Tears. It seems a good book, however, this book does not cover everything what I am looking for (and certain prerequisites as well).
-
-From these questions and their answers, I found that Milnor's Topology from a Differentiable Viewpoint, Lee's Introduction to Smooth Manifolds, Tu's An Introduction to Manifolds should work for self-study. I am not looking for a theorem and proof style book, but rather getting concepts such as topology, manifold, Lie groups, moving frames, etc.
-When I start reading even the introductory chapters from books, I find that many books simply assume that the reader already knows concepts such as homomorphism, isomorphism, wedge product, cotangent space, etc. This assumption is not true for many readers (like me). As a result, it is not possible to move ahead without knowing this material.
-I further found that there is a large amount of literature devoted to these topics. I found that a branch of mathematics, abstract algebra, deals with homomorphisms and the other listed topics. Learning everything is a daunting task; in fact, only some portion might be needed for my purpose.
-Differential geometry and topology have diverse applications, and many people who come from different areas of science and who are not pure mathematicians may need to learn these subjects. Can someone suggest a 'self contained' introductory book that will sufficiently cover the subject-matter? If no such book exists, can someone mention references that will (quickly and with sufficient depth) cover the assumed prerequisites for learning topology and differential geometry (homomorphism, isomorphism, wedge product, cotangent space, etc.)? That way one does not have to learn all of abstract algebra, which looks like a hard route.
-Inputs are very much appreciated!
-Edit: I believe that this is not a "personal advice" question as the links provided in the question are still valid questions and they belong to category "reference request."
-
-REPLY [2 votes]: http://www.topologywithouttears.net/
-This website and its contents should be useful.
-If you want to learn some basic algebra, but nothing too in-depth, take a look at Fraleigh's Abstract Algebra.
-For linear algebra, Axler's "Linear Algebra Done Right" is a good introduction.
-TITLE: Calculating the value of infinite limit of $2^{-n^2}/\sum_{k=n+1}^\infty 2^{-k^2}$
-QUESTION [6 upvotes]: How to solve the limit?
- $$\lim_{n \to \infty}\frac{2^{-n^2}}{\sum_{k=n+1}^\infty 2^{-k^2}}$$
-My approach: I have used the logarithmic test to check the denominator sum for convergence, as follows:
-$$\lim_{k\to\infty}k\log \frac{u_k}{u_{k+1}}=\lim_{k\to \infty}k\log\frac{2^{-k^2}}{2^{-(k+1)^2}}=\lim_{k\to\infty}(k+2k^2)\log2=\infty$$
-Thus the infinite sum diverges.
-The numerator term also diverges because it is a term from monotonically decreasing sequence which has no lower bound.
-So overall solution is $$\infty$$
-Is my attempt correct or wrong?
-
-REPLY [7 votes]: In a different way,
-$$\frac{2^{-n^2}}{\sum_{k=n+1}^\infty 2^{-k^2}}>\frac{2^{-n^2}}{\sum_{k=(n+1)^2}^\infty 2^{-k}}=\frac{2^{-n^2}}{2^{-(n+1)^2}\cdot2}=2^{-n^2+(n+1)^2-1}=2^{2n}$$
-Therefore it diverges.
-
-REPLY [5 votes]: Unfortunately, your attempt is seriously wrong. The sequence $2^{-n^2}$ is convergent, with limit zero ($0$ is a clear lower bound). Likewise, the sum $\sum_{k = n + 1}^{\infty} 2^{-k^2}$ is convergent by, say, comparison to a geometric series. In fact, both the numerator and denominator tend to zero. Finally, even if your first two conclusions were correct, your final conclusion wouldn't follow from them.
-
-For a different approach, note that the denominator can be estimated by
-$$\sum_{k = n + 1}^{\infty} 2^{-k^2} \approx 2^{-n^2 - 2n}$$ and so your sequence is bounded below by something like
-$$\frac{2^{-n^2}}{2^{-n^2 - 2n}} = 2^{2n}$$
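The growth rate is also visible numerically; a sketch with mpmath:

```python
# The ratio 2^{-n^2} / sum_{k>n} 2^{-k^2} grows like 4^n (ratio/4^n -> ~2).
from mpmath import mp, mpf, nsum, inf

mp.dps = 50
for n in range(1, 8):
    tail = nsum(lambda k: mpf(2)**(-k**2), [n + 1, inf])
    print(n, mpf(2)**(-n**2) / tail / 4**n)
```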
-TITLE: Proving $ z^n + \frac{1}{z^n} = 2\cos(n\theta) $ for $z = \cos (\theta) + i\sin(\theta)$
-QUESTION [9 upvotes]: Question: Prove that if $z = \cos (\theta) + i\sin(\theta)$, then
-$$ z^n + {1\over z^n} = 2\cos(n\theta) $$
-
-
-
-What I have attempted
-If $$ z = \cos (\theta) + i\sin(\theta) $$
-then $$ z^n = \cos (n\theta) + i\sin(n\theta) $$
-$$ z^n + {1\over z^n} $$
-$$ \cos (n\theta) + i\sin(n\theta) + {1\over \cos (n\theta) + i\sin(n\theta)} $$
-$$ (\cos (n\theta) + i\sin(n\theta))\cdot(\cos (n\theta) + i\sin(n\theta)) + 1 $$
-$$ \left[ {(\cos (n\theta) + i\sin(n\theta))\cdot(\cos (n\theta) + i\sin(n\theta)) + 1\over \cos (n\theta) + i\sin(n\theta)} \right] $$
-$$ \left[ {\cos^2(n\theta) + 2i\sin(n\theta)\cos (n\theta) - \sin^2(n\theta) + 1\over \cos (n\theta) + i\sin(n\theta)} \right] $$
-Now this is where I am stuck.. I tried to use a double angle identity but I can't eliminate the imaginary part..
-
-REPLY [6 votes]: If you want to continue in the way you started, you can simply rewrite $1=\cos^2(n\theta)+\sin^2(n\theta)$ and then simplify:
-$$
-\frac{\cos^2(n\theta) + 2i\sin(n\theta)\cos (n\theta) - \sin^2(n\theta) + 1}{ \cos (n\theta) + i\sin(n\theta)}=
-\frac{\cos^2(n\theta) + 2i\sin(n\theta)\cos (n\theta) - \sin^2(n\theta) + \cos^2(n\theta)+\sin^2(n\theta)}{ \cos (n\theta) + i\sin(n\theta)}=
-\frac{2\cos^2(n\theta) + 2i\sin(n\theta)\cos (n\theta)}{ \cos (n\theta) + i\sin(n\theta)}=
-\frac{2\cos(n\theta) (\cos(n\theta)+i\sin(n\theta))}{ \cos (n\theta) + i\sin(n\theta)}= \underline{\underline{2\cos(n\theta)}}
-$$
-Which shows that you were almost there. (But it is still useful that you posted your question here - you have seen other approaches.)<|endoftext|>
-TITLE: Why doesn't the dot product give you the coefficients of the linear combination?
-QUESTION [13 upvotes]: So the setting is $\Bbb R^{2}$.
-Let's pick two unit vectors that are linearly independent. Say: $v_{1}= \begin{bmatrix} \frac{1}{2} \\ \frac{\sqrt{3}}{2}\end{bmatrix}$ and $v_{2} = \begin{bmatrix} \frac{\sqrt{3}}{2} \\ \frac{1}{2}\end{bmatrix}$.
-Now, let's pick another vector with length smaller than $1$, say, $a = \begin{bmatrix} \frac{1}{2} \\ 0\end{bmatrix}$.
-I've been trying to understand the dot product geometrically, and what I've read online has led me to believe that $a \cdot v_{1}$ is the scalar $c$ so that $cv_{1}$ is the "shadow" of $a$ on $v_{1}$. Similarly, $a \cdot v_{2}$ is the scalar $d$ so that $dv_{2}$ is the "shadow" of $a$ on $v_{2}$.
-If this is true, then it should be that $cv_{1} + dv_{2} = a$, right? But this isn't the case.
-We have $a \cdot v_{1} = \frac{1}{4}$ and $a \cdot v_{2} = \frac{\sqrt{3}}{4}$. So $$cv_{1} + d v_{2} = \frac{1}{4}\begin{bmatrix} \frac{1}{2} \\ \frac{\sqrt{3}}{2}\end{bmatrix} + \frac{\sqrt{3}}{4}\begin{bmatrix} \frac{\sqrt{3}}{2} \\ \frac{1}{2}\end{bmatrix} = \begin{bmatrix} \frac{1}{2} \\ \frac{\sqrt{3}}{4}\end{bmatrix} \neq a.$$
-This means something is wrong with my understanding about the intuition of the dot product. I'm not sure what's wrong with it, though. Any help would be appreciated.
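-(A numeric restatement of the discrepancy, and of the fix, as a hypothetical numpy sketch; the "solve" step computes exactly the coefficients that the dual-basis formula in the answer below produces:)
-
-    import numpy as np
-
-    v1 = np.array([0.5, np.sqrt(3) / 2])
-    v2 = np.array([np.sqrt(3) / 2, 0.5])
-    a = np.array([0.5, 0.0])
-
-    c, d = a @ v1, a @ v2                 # naive "shadow" coefficients
-    print(c * v1 + d * v2)                # [0.5, 0.433...] != a
-
-    alpha, beta = np.linalg.solve(np.column_stack([v1, v2]), a)
-    print(alpha * v1 + beta * v2)         # recovers a
-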
-
-REPLY [8 votes]: Your intuition is mostly correct, and you would probably have seen the flaws in your reasoning if you had drawn a picture like this:
-
-We have two linearly-independent unit vectors $\mathbf{U}$ and $\mathbf{V}$, and a third vector $\mathbf{W}$ (the green one). We want to write $\mathbf{W}$ as a linear combination of $\mathbf{U}$ and $\mathbf{V}$. The picture shows the projections $(\mathbf{W} \cdot \mathbf{U})\mathbf{U}$ (in red) and $(\mathbf{W} \cdot \mathbf{V})\mathbf{V}$ (in blue). These are the things you call "shadows", and that's a good name. As you can see, when you add them together using the parallelogram rule, you get the black vector, which is obviously not equal to the original vector $\mathbf{W}$. In other words
-$$
-\mathbf{W} \ne (\mathbf{W} \cdot \mathbf{U})\mathbf{U} + (\mathbf{W} \cdot \mathbf{V})\mathbf{V}
-$$
-You certainly can write $\mathbf{W}$ in the form
-$\mathbf{W} = \alpha\mathbf{U} + \beta\mathbf{V}$, but $\alpha = \mathbf{W} \cdot \mathbf{U}$ and $\beta = \mathbf{W} \cdot \mathbf{V}$ are not the correct coefficients unless $\mathbf{U}$ and $\mathbf{V}$ are orthogonal. And you can even calculate the coefficients $\alpha$ and $\beta$ using dot products, as you expected. It turns out that
-$$
-\mathbf{W} = (\mathbf{W} \cdot \bar{\mathbf{U}})\mathbf{U} + (\mathbf{W} \cdot \bar{\mathbf{V}})\mathbf{V}
-$$
-where $(\bar{\mathbf{U}}, \bar{\mathbf{V}})$ is the so-called dual basis of $(\mathbf{U}, \mathbf{V})$. You can learn more here.<|endoftext|>
-TITLE: Zeroth homotopy group: what exactly is it?
-QUESTION [8 upvotes]: What are the elements in the zeroth homotopy group? Also, why does $\pi_0(X)=0$ imply that the space is path-connected?
-Thanks for the help. I find that zeroth homotopy groups are rarely discussed in the literature, hence I am having some trouble understanding them. I do understand that the elements of $\pi_1(X)$ are loops (homotopy classes of loops), and I am trying to see the relation to $\pi_0$.
-
-REPLY [2 votes]: Just a slight rephrase: you can consider $\pi_0(X)$ as the quotient set of the set of all points in $X$ where you mod out by the equivalence relation that identifies two points if there is a path between them.<|endoftext|>
-TITLE: Functor of section over U is left-exact
-QUESTION [5 upvotes]: I am trying to prove that $\Gamma(U,\cdot)$ is a left-exact functor $\mathfrak{Ab}(X)\to\mathfrak{Ab}$. This is Exercise 1.8 in Hartshorne, Chapter II, or Exercise 2.5.F of Vakil's notes (Nov 28, 2015 version).
-Since a monomorphism of sheaves implies injection on open sets, we have exactness at the left place. For the middle place I really do not have an idea. Actually I have a very silly question: it seems to me that if the kernel of a morphism of sheaves equals the image of another morphism, then the corresponding kernel and image on some $U$ should also be the same. But in this way the functor would be exact. So why is this not true?
-
-REPLY [4 votes]: Method 1: Use Exercise II.1.4b to relate the image presheaf and its sheafification. Then, by exactness of the original sequence, the isomorphism follows on sections over $U$.
-Method 2: Proceed just as in proof of Proposition II.1.1.
-Let $\varphi: \mathscr{F} \to \mathscr{F''}$ and $\psi : \mathscr{F'} \to \mathscr{F}$. We want to show that the kernel and images are equal after applying $\Gamma (U, \cdot)$. This can be checked on the stalks.
-In one direction, we have that
-$$(\varphi _U \circ \psi _U (s))_P = \varphi _P \circ \psi _ P (s_P) = 0$$
-By the sheaf condition, this shows that $\varphi _U \circ \psi _ U = 0$.
-Conversely, let's suppose $t \in \textrm{ker } \varphi _U$, i.e. $\varphi _U (t) = 0$. Again we know that the sequence is exact on stalks, so for each $P \in U$ there is an $s_P$ such that $\psi _P (s_P) = t_P$. Let's represent each $s_P = (V_P , s(P))$ where $s(P) \in \mathscr{F'} (V_P)$. Now, $\psi (s(P))$ and $t \mid _{V_P}$ are elements of $\mathscr{F} (V_P)$ whose stalks at $P$ are the same. Thus, WLOG (shrinking $V_P$ if necessary), assume $\psi (s(P)) = t \mid _{V_P}$ in $\mathscr{F} (V_P)$. $U$ is covered by the $V_P$, and there is a corresponding $s(P)$ on each $V_P$ which, on intersections, are both sent by $\psi$ to the corresponding $t$ on the intersection. Here we apply injectivity (exactness at the left place, which you showed in your OP), which allows us to glue via the sheaf condition to a section $s \in \mathscr{F'} (U)$ such that $s \mid _ {V_P} = s(P)$ for each $P$. Verify that $\psi (s) = t$ and we're done, by applying the sheaf property and the construction to $\psi (s) - t$.<|endoftext|>
-TITLE: What is the significance of this identity relating to partitions?
-QUESTION [7 upvotes]: I was watching a talk given by Prof. Richard Kenyon of Brown University, and I was confused by an equation briefly displayed at the bottom of one slide at 15:05 in the video.
-$$1 + x + x^3 + x^6 + \dots + x^{n(n-1)/2} + \dots = \left(\frac{1-x^2}{1-x}\right) \left(\frac{1-x^4}{1-x^3}\right) \left(\frac{1-x^6}{1-x^5}\right) \dots$$
-On the left we have the power series $\sum_{n=0}^{\infty}x^{T_n}$. On the right we have some sort of infinite product. Can anyone explain what the meaning of this identity is, in relation to integer partitions?
-
-Background: The speaker starts by discussing the generating function of the partition function,
-$$P(x) = \prod_{k=1}^\infty \left(\frac {1}{1-x^k} \right)$$
-He then uses the idea behind this generating function to derive a fun identity:
-$$(1+x)(1+x^2)(1+x^3)\dots = \frac{1}{(1-x)(1-x^3)(1-x^5)\dots}$$
-which shows that the number of partitions into unequal parts equals the number of partitions into odd parts.
-This is the context for the above identity which I fell short of understanding.
-
-Also: I did a bit of searching and came across a 1991 paper by Ono, Robins & Wahl concerning partitions using triangle numbers, which might be related.
-This paper proves that
-$$ \sum_{n=0}^{\infty}{x^{T_n}} = \prod_{n=1}^{\infty}{\frac{(1-x^{2n})^2}{1-x^n}}$$
-which shows that the identity is true.
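-A truncated power-series check of this identity (an illustrative Python sketch; it compares both sides modulo $x^{40}$, with triangular exponents $0, 1, 3, 6, \dots$):
-
-    N = 40  # work modulo x^N
-
-    def mul(p, q):
-        r = [0] * N
-        for i, a in enumerate(p):
-            if a:
-                for j, b in enumerate(q):
-                    if b and i + j < N:
-                        r[i + j] += a * b
-        return r
-
-    lhs = [0] * N
-    t, n = 0, 1
-    while t < N:
-        lhs[t] = 1
-        t += n
-        n += 1
-
-    rhs = [1] + [0] * (N - 1)
-    for k in range(1, N):
-        num = [0] * N
-        num[0] = 1                      # (1 - x^{2k})^2
-        if 2 * k < N: num[2 * k] -= 2
-        if 4 * k < N: num[4 * k] += 1
-        geo = [1 if i % k == 0 else 0 for i in range(N)]   # 1/(1 - x^k)
-        rhs = mul(rhs, mul(num, geo))
-
-    print(lhs == rhs)   # expect: True
-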
-
-REPLY [3 votes]: If both sides are divided by the numerator of the right-hand side, the formula becomes
-$$\frac{\sum x^{T_n}}{(1-x^2)(1-x^4)\cdots}=\frac{1}{(1-x)(1-x^3)\cdots}. \tag{1}$$
-Here the left side, with its numerator replaced by $1$, represents partitions into even parts, while the right side represents partitions into odd parts. Putting the numerator back, the left side counts representations of a number by a single triangular number plus a sum of even parts, while the right side again counts representations by sums of odd parts.
-So in this form, the identity says the number of ways to write $n$ as a triangular number plus a sum of even parts is the same as the number of ways to write $n$ as a sum of odd parts. Note the single triangular number involved here may be $0$ (which is $T_0$). I didn't know the equality of these two counts, but tried it on some small numbers and it seems to be so.
-A slight correction and better explanation of the left side count: Since the Taylor series of $1/[(1-x^2)(1-x^4)\cdots]$ starts out with the term $1\cdot x^0,$ it is clear that the series considers that $0$ is indeed the (only) partition of $0$ into even parts. [The same happens in the generating function for unrestricted partitions.] So when this series is multiplied by the numerator in (1), the result is counting, for a given $n,$ ordered pairs consisting of a triangular number $T$ (which may be zero) followed by a partition of $n-T$ into even parts, and notation such as $(6),2$ (for $n=8$) means it is the entity for which $T$ has been taken to be $6$ and then $n-T=8-6=2$ is to be partitioned into even part(s), here the extra $2$ after the $(6)$ of $(6),2.$ Because one must "tag" these entities by the triangular number used, this is a different entity than the $(0),2,6$ in the count. They come from different powers of $x$ in the numerator of (1). [I think somewhere in the answer or in comments I had erroneously insisted that the partition into even parts which follows the triangular number had to be into positive even parts, but this is so only when $n$ itself is not triangular.]<|endoftext|>
-TITLE: convergence in measure topology
-QUESTION [6 upvotes]: I'm attending a course on measure theory this semester. While proposing different kinds of convergence (in measure, almost everywhere and in $L^{p}$), our professor stressed (and proved) the fact that convergence almost everywhere is not topological, but claimed that convergence in measure is.
-As pointed out in a few questions on this site and wikipedia (1, 2, 3), in the case where $(\Omega, \mathcal{F}, \mu)$ is a finite measure space, convergence in measure can be described by a pseudometric (hence a topology). However, I haven't found an answer to why at least a topology should exist in the case where $\mu$ is an arbitrary measure. Wikipedia (3) claims that their pseudometric works for arbitrary measures, but their proposed function can take $\infty$ as a value, which I believe isn't allowed for metrics.
-To sum up: let $(\Omega, \mathcal{F}, \mu)$ be a (not necessarily finite) measure space, does there exist a topology $\mathcal{T}$ on the set of measurable functions $f : \Omega \to \mathbb{R}$ such that a sequence of measurable functions $(f_{n})_{n}$ converges to a measurable function $f$ in measure if and only if it converges to $f$ in the topology $\mathcal{T}$? Extra: Is this topology unique?
-Thank you for your help! I've had introductory courses in topology (metric spaces), Banach (Hilbert) spaces and now measure theory.
-
-REPLY [4 votes]: The convergence in measure is not just induced by a topology, it is in fact induced by a metric! Admittedly, it is not at all obvious how to come up with it, but here it is:
-$$d(f,g) := \inf_{\delta > 0} \big(\mu(|f-g|>\delta) + \delta\big)$$
-(I found it a while back in this book)
-This is, again, in general a $[0,\infty]$-valued metric, but this is not a problem as previously noted because you could just as well use $d':=d\wedge 1$ or $d'':=\frac{d}{1+d}$ to get the same topology.
-Just a side note: What is quite neat is that you can know that there must be some metric even without having a specific candidate, because the space of measurable functions with convergence in measure is a first countable topological vector space and those are all metrisable.
-EDIT: As to the uniqueness question: It was already noted that convergence of sequences alone does not uniquely determine a topology. Not unless you add other properties. For example there is a unique metrisable/quasi-metrisable/first-countable topology that induces exactly this convergence of sequences.<|endoftext|>
-TITLE: Another way to evaluate $\int_0^{\infty} x^2 e^{-x^2}dx$
-QUESTION [5 upvotes]: In Stewart's Calculus book I came across the following Gaussian integral.
-
-Using $\int_{0}^{\infty}\exp{(-x^2)}dx = \frac{\sqrt{\pi}}{2}$ evaluate
- $$
-\int_0^{\infty} x^2 e^{-x^2}dx
-$$
-
-I read in this pdf that
-$$
- \int_{-\infty}^{\infty} e^{-ax^2}dx =\sqrt{ \frac{\pi}{a}}
-$$
-and how to use differentiation under the integral sign to evaluate it (recreated below for convenience).
-$$
-\begin{align*}
-I(a) &= \int_{-\infty}^{\infty} e^{-ax^2} dx =\sqrt{ \frac{\pi}{a}} \\ I'(a)&= -\int_{-\infty}^{\infty} x^2 e^{-ax^2}dx = -\frac{1}{2}\sqrt{\pi} a^{-3/2} \\ -I'(1) &= \frac{\sqrt{\pi}}{2}
-\end{align*}
-$$
-Using the results above (and Wolfram Alpha), I was able to conclude that
-$$
-\int_0^{\infty} x^2 e^{-x^2}dx = \frac{\sqrt{\pi}}{4}
-$$
-however, I was wondering if there is some substitution or another way to evaluate the aforementioned integral, seeing as Leibniz's rule is not mentioned anywhere in the chapter.
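-(For what it's worth, a quick numeric cross-check — an illustrative sketch assuming scipy is available:)
-
-    import numpy as np
-    from scipy.integrate import quad
-
-    val, err = quad(lambda x: x**2 * np.exp(-x**2), 0, np.inf)
-    print(val, np.sqrt(np.pi) / 4)   # both ~0.443113...
-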
-
-REPLY [2 votes]: For $$\int\limits_{0}^{\infty} x^{2} \mathrm{e}^{-x^{2}} \mathrm{d} x$$ let $y = x^{2}$
-\begin{equation}
-\int\limits_{0}^{\infty} x^{2} \mathrm{e}^{-x^{2}} \mathrm{d} x =
-\frac{1}{2} \int\limits_{0}^{\infty} \mathrm{e}^{-y} y^{\frac{1}{2}} \mathrm{d} y
- = \frac{1}{2} \Gamma\left(\frac{3}{2}\right) = \frac{\sqrt{\pi}}{4}
-\end{equation}<|endoftext|>
-TITLE: Does a connected countable metric space exist?
-QUESTION [7 upvotes]: I'm wondering if a connected countable metric space exists.
-My intuition is telling me no.
-
-For a space to be connected it must not be the union of 2 or more open
- disjoint sets.
-For a set $M$ to be countable there must exist an injective function
- from $\mathbb{N} \rightarrow M$.
-
-I know the Integers and Rationals clearly are not connected. Consider the set $\mathbb{R}$, if we eliminated a single irrational point then that would disconnect the set.
-A similar problem arises if we consider $\mathbb{Q}^2$
-In any dimension it seems by eliminating all the irrational numbers the set will become disconnected. And since $\mathbb{R}$ is uncountable there cannot exist a connected space that is countable.
-My problem is formally proving this. Though a single Yes/No answer will suffice, I would like to know both the intuition and the proof behind this.
-Thanks for any help.
-I haven't looked at cofinite topologies (which I happened to see online). I also don't see where the metric might affect the countability of a space, if we are primarily concerned with an injective function into the set alone.
-
-REPLY [20 votes]: Fix $x_0 \in X $. Then, the continuous(!) map
-$$
-\Phi: X \to \Bbb {R}, x \mapsto d (x,x_0)
-$$
-has an (at most) countable, connected image.
-Thus, the image is a whole (nontrivial!, if $X $ has more than one point) interval, in contradiction to being countable.
-EDIT: On a related note, this even shows that every connected metric space with more than one point has at least the cardinality of the continuum.<|endoftext|>
-TITLE: Chinese remainder theorem as sheaf condition?
-QUESTION [10 upvotes]: The Chinese remainder theorem in its usual version says that for a finite set of pairwise comaximal ideals, $R/\bigcap _jI_j\cong \prod _j R/I_j$.
-In the binary case, the following general statement holds without conditions on the ideals $R/(I\cap J)\cong R/I\times _{R/I+J}R/J$. In this question I wanted to generalize the more general version to several ideals, but got stuck and only contrived an ad hoc justification for pairwise comaximality.
-A few weeks ago I finally thought of $R/(I\cap J)\cong R/I\times _{R/I+J}R/J$ as a sheaf condition for a cover by two elements. Then I told myself the diagram below must be an equalizer, because pairwise comaximality pops out of it so naturally.
-$$R/\bigcap _jI_j\rightarrow \prod _j R/I_j \rightrightarrows \prod _{i,j}R/(I_i+I_j)$$
-Several satisfied days later I stumbled upon this comment, which to my dismay says the diagram above fails to be an equalizer for three or more ideals. But it just seems so perfect...
-Can anyone give some counterexamples which show why the diagram above is not an equalizer and explain why things fail geometrically?
-
-REPLY [8 votes]: Each ideal $I\subset R$ corresponds to a closed subscheme of $\mathrm{Spec}(R)$, intersection of ideals corresponds to union of subschemes, and sum of ideals corresponds to intersection of subschemes. If a finite collection of closed subschemes is an open cover of their union (which is the case if and only if the union is disjoint), then indeed your diagram is an equalizer, precisely because of the sheaf condition. But in general the sheaf condition won't hold for coverings by closed sets.
-A counterexample: let $k$ be a field, $R=k[x,y]$, $I_1=(x)$, $I_2=(y)$, $I_3=(x-y)$. Then $\mathrm{Spec}(R)$ is the affine plane and the ideals $I_1$, $I_2$, and $I_3$ correspond to the lines $L_1:x=0$, $L_2:y=0$, and $L_3:x=y$. The statement that your diagram is an equalizer is equivalent to the statement
-
-the data of a regular function on $L_1\cup L_2\cup L_3$ is the same as the data of a triple of regular function $f_1$, $f_2$ , $f_3$ on $L_1$, $L_2$, $L_3$ resp., such that all three functions take the same value at the origin.
-
-This is false because we need an extra condition on the functions $f_1$, $f_2$, $f_3$, namely for these functions to determine a function on $L_1\cup L_2\cup L_3$, the derivative of $f_3$ in the direction $(1,1)$ needs to equal the sum of the derivative of $f_1$ in the direction $(0,1)$ and the derivative of $f_2$ in the direction $(1,0)$.
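-For a concrete instance of this failure: take $f_1=0$ on $L_1$, $f_2=0$ on $L_2$ and $f_3(t)=t$ on $L_3$ (parametrizing $L_3$ by $t\mapsto (t,t)$). All three vanish at the origin, so they agree on the pairwise intersections; but any $g\in k[x,y]$ restricting to $f_1$ and $f_2$ lies in $(x)\cap(y)=(xy)$, so $g(t,t)=t^2h(t,t)$ can never equal $t$. Equivalently, the derivative condition fails: the derivative of $f_3$ at the origin is $1$, while the other two derivatives sum to $0$.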
-
-The data of $\prod R/I_j$ and the maps to $\prod R/(I_i+I_j)$ corresponds to data of a bunch of closed subschemes and their pairwise intersections. This is not enough to determine a scheme. For instance, the union $L_1\cup L_2\cup L_3$ is not isomorphic to the union of the coordinate axes in $\mathbb{A}^3$, because the tangent space to the former at the singular point is $2$-dimensional, while the tangent space to the latter is $3$-dimensional. But both schemes are the union of three lines, such that the pairwise intersection is a single point. For the example $I_1,I_2,I_3\subset R$ above, the equalizer of your diagram is the coordinate ring of the union of the coordinate axes in $\mathbb{A}^3$, rather than of $L_1\cup L_2\cup L_3$.<|endoftext|>
-TITLE: Sum and Product of continued fraction expansion?
-QUESTION [7 upvotes]: Given the continued fraction expansions of two real numbers $a,b \in \mathbb R$, is there an "easy" way to get the continued fraction expansion of $a+b$ or $a\cdot b$?
-If $a,b$ are rational it is easy, as you can convert them back to 'rational' form, add or multiply, and then convert back to continued fraction form. But is there a way that requires no conversion?
-Other than that I found no clues whether there is an "easy" way to do it for irrational numbers.
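-For what it's worth, here is the naive route for finite continued fractions — convert to a rational, operate, convert back — as an illustrative Python sketch (the helper names are hypothetical):
-
-    from fractions import Fraction
-
-    def cf_to_frac(cf):
-        x = Fraction(cf[-1])
-        for a in reversed(cf[:-1]):
-            x = a + 1 / x
-        return x
-
-    def frac_to_cf(x):
-        cf = []
-        while True:
-            a = x.numerator // x.denominator   # floor
-            cf.append(a)
-            if x == a:
-                return cf
-            x = 1 / (x - a)
-
-    a, b = cf_to_frac([1, 2, 3]), cf_to_frac([0, 1, 4])   # 10/7 and 4/5
-    print(frac_to_cf(a + b), frac_to_cf(a * b))
-
-Gosper's methods, mentioned in the answer below, avoid this detour entirely.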
-
-REPLY [6 votes]: Gosper found efficient ways to do arithmetic with continued fractions (without converting them to ordinary fractions or decimals). Here is a page with links to Gosper's work, but also with an exposition of Gosper's methods.
-See also this older m.se question, Faster arithmetic with finite continued fractions<|endoftext|>
-TITLE: Is every converging sequence the sum of a constant sequence and a null sequence?
-QUESTION [7 upvotes]: Let $a_n$ be any sequence converging to $a$ when $n \to \infty$.
-Can you rewrite $a_n$ so that it is the sum of two other sequences? $$a_n=b_n + c_n,$$ with $b_n=b$ for every $n \in \mathbb{N}$ and $c_n\to 0$ as $n\to \infty$.
-In other words: Is a converging sequence ($a_n$) actually a null sequence ($c_n$) "shifted" by a constant ($b$)?
-Or is there any counterexample where one is not allowed to do so?
-
-REPLY [3 votes]: For a constant $b$, any number $a$ (whether it's a term in a sequence or not) can be written as $a = b + c$ where $c = a - b$. So any sequence $\{a_n\}$ can be written as $\{b + c_n\}$ where $c_n = a_n - b$; in particular, if $\lim a_n = a$ then the sequence can be written as $\{a + c_n\}$ where $c_n = a_n - a$. And clearly $\lim \{a_n\} = \lim \{a + c_n\} = a$.
-So your question boils down to: does $\lim\{b + c_n\} = b + \lim\{c_n\}$? And therefore, if $b = a = \lim a_n$, does $\lim c_n = 0$?
-This should be a basic proposition early on in the study of convergent sequence and the answer is: yes.
-$|a - a_n| = |(a -b) - (a_n - b)| = |(a - b) - c_n|$. So whatever $\epsilon$, $N$, $n > N$ crap that you can say about $a$ and $a_n$ can also be said about $(a-b)$ and $c_n$.
-So if $c_n = a_n - b$ then $\{a_n\} \rightarrow a \iff \{c_n\} \rightarrow (a - b)$.<|endoftext|>
-TITLE: Is there a branch of Mathematics which connects Calculus and Discrete Math / Number Theory?
-QUESTION [29 upvotes]: I am asking this question out of both curiosity and frustration. There are many problems in computer science which require you to perform operations on a finite set of numbers. It always bothers me that there is no way of mapping this discrete problem onto a continuous one and using the power of calculus to solve it, finally extracting out the answer of the original discrete problem.
-Is this not possible? If it is, is there a branch of mathematics which deals with precisely this? Are we confined to solving such problems only by thinking of clever algorithmic techniques?
-Thank you for taking the time to answer, and as always, apologies if the question is silly.
-
-REPLY [2 votes]: See the book Concrete Mathematics by Graham, Knuth, and Patashnik (http://www.amazon.com/Concrete-Mathematics-Foundation-Computer-Science/dp/0201558025) for a wonderful exposition of connections between CONtinuous and disCRETE mathematics, including number theory.<|endoftext|>
-TITLE: Find the summation $\frac{1}{1!}+\frac{1+2}{2!}+\frac{1+2+3}{3!}+ \cdots$
-QUESTION [11 upvotes]: What is the value of the following sum?
-
-$$\frac{1}{1!}+\frac{1+2}{2!}+\frac{1+2+3}{3!}+ \cdots$$
-
-The possible answers are:
-
-A. $e$
-B. $\frac{e}{2}$
-C. $\frac{3e}{2}$
-D. $1 + \frac{e}{2}$
-
-I tried to expand the options using the series representation of $e$ and putting in $x=1$, but I couldn't get back the original series. Any ideas?
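-(A quick numeric experiment — an illustrative Python sketch — already points at option C before any algebra:)
-
-    from math import e, factorial
-
-    s = sum(k * (k + 1) / 2 / factorial(k) for k in range(1, 40))
-    print(s, 3 * e / 2)   # both ~4.07742...
-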
-
-REPLY [8 votes]: $$\frac{k(k+1)}{2\,k!}=\frac{k(k-1)+2k}{2\,k!}=\frac1{2(k-2)!}+\frac1{(k-1)!}$$
-Hence the sum is
-$$\frac e2+e.$$
-
-Note that the first summation in the RHS must be started at $k=2$ (or use $1/(-1)!:=0$).<|endoftext|>
-TITLE: Finding the minimal $n$ such that a given finite group $G$ is a subgroup of $S_n$
-QUESTION [15 upvotes]: It is a theorem that, for every finite group $G$, there exists an $n$ such that $G \leq S_n$. Given a particular group, is there any way to determine or bound the smallest $n$ such that this occurs? The representation of the group is up to you, though bounds that use less information are preferred.
-If this problem is too hard, specific special cases are also interesting, as are bounds on $n$.
-
-REPLY [2 votes]: Given a finite group $G$, consider a subgroup $H$ that contains no nontrivial normal subgroup of $G$, and take such an $H$ of maximum possible order.
-Then $G$ permutes the cosets of $H$ by left multiplication, and hence this gives a homomorphism from $G$ to the permutation group of the cosets $\{gH\colon g\in G\}$. The kernel of this homomorphism is the intersection of all the conjugates of $H$; but, by our assumption, it is trivial. Hence $G$ embeds in the permutation group on $|G/H|$ letters.
-This also explains why $H$ was chosen at the beginning to be of maximum order: it makes the index $|G/H|$, and hence the number of letters, as small as possible.
-This works, as can be seen from the examples in other answers.<|endoftext|>
-TITLE: Sectional curvature in a paraboloid is always positive.
-QUESTION [5 upvotes]: I'm working through Lee's book ''Riemannian Manifolds: An Introduction to Curvature''. One exercise (11.1) asks to show that the paraboloid given by the equation $y=x_1^2+...+x_n^2$ has positive sectional curvature everywhere.
-It is known that $K(\pi)=\frac{\langle R(u,v)v,u \rangle}{\langle u,u \rangle \langle v,v \rangle - \langle u,v \rangle ^2}$, where $\pi$ is the plane generated by $u,v$. Cauchy-Schwarz ensures the denominator is always positive, so it would be enough to check that $\langle R(u,v)v,u \rangle$ is positive at any point for any pair of tangent vectors. That is equivalent to checking that $\langle\nabla_X \nabla_Y Y - \nabla_Y \nabla_X Y -\nabla_{[X,Y]}Y,X\rangle$ is positive, where $\nabla$ is the Levi-Civita connection. The first Christoffel identity (here: https://en.wikipedia.org/wiki/Fundamental_theorem_of_Riemannian_geometry) allows one to compute the Levi-Civita connection via Christoffel symbols. I can compute the Christoffel symbols by finding the metric of the paraboloid given by the evident chart (just by pull-back).
-The question is whether there is an easier way to check that the paraboloid has positive sectional curvature everywhere, or whether one has to follow the path I described; I do not find it complicated, but it requires some work with many calculations.
-
-REPLY [4 votes]: The key is using that the paraboloid is a submanifold of $\mathbb{R}^n$ combined with Gauss Equation.
-Because we are working with a Euclidean submanifold (as we noted in the comments), if $u=\alpha_1 e_1+...+\alpha_{n+1} e_{n+1}$, $v=\beta_1 e_1+...+\beta_{n+1} e_{n+1}$, and $N$ is a unit normal vector field (for example $(2x_1,...,2x_n,-1)/f$, where $f$ is the norm of the numerator), then the shape operator gives $S(u)=\nabla_u N=(2 \alpha_1,..., 2\alpha_n,0)/f + u(1/f)fN$, where $\nabla$ is the Levi-Civita connection of the ambient Euclidean space (so the Christoffel symbols are $0$). Now $K(\pi)$ has the same sign as (we also use $\langle N,u \rangle = \langle N,v\rangle=0$) $$\langle S(u),u \rangle \langle S(v),v \rangle - \langle S(u),v \rangle^2 =4(\alpha_1^2+...+\alpha_n^2)(\beta_1^2+...+\beta_n^2)-4(\alpha_1 \beta_1+...+\alpha_n \beta_n)^2 \geq 0,$$ which is non-negative by Cauchy-Schwarz.
-Finally we should show that equality is impossible. If the equality holds, then $u$ and $v$ are proportional in the first $n$ coordinates. The condition $\langle u, N\rangle=\langle v, N\rangle=0$ then forces the last coordinates to be in the same proportion too (because the last coordinate of $N$ never vanishes). But in that case $u$ and $v$ are proportional and cannot generate a plane.<|endoftext|>
-TITLE: How to factor $x^6+x^5-3\,x^4+2\,x^3+3\,x^2+x-1$ by hand?
-QUESTION [10 upvotes]: I know that
-$x^6+x^5-3\,x^4+2\,x^3+3\,x^2+x-1 = (x^4-x^3+x+1)(x^2+2x-1)$
-but I would not know how to do that factoring without software.
-Any ideas? Thank you!
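-(A CAS confirms the factorization instantly — an illustrative sketch assuming sympy; the question, of course, is how to find it by hand:)
-
-    from sympy import symbols, factor
-
-    x = symbols('x')
-    print(factor(x**6 + x**5 - 3*x**4 + 2*x**3 + 3*x**2 + x - 1))
-    # (x**2 + 2*x - 1)*(x**4 - x**3 + x + 1)
-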
-
-REPLY [6 votes]: The equation is palindromic (well, anti-palindromic: reversing the coefficients negates the polynomial), so:
-We can write it as $$x^3\left[x^3+x^2-3x+2+\frac 3x+\frac{1}{x^2}-\frac{1}{x^3}\right]$$
-$$=x^3\left[(x^3-3x+\frac 3x-\frac{1}{x^3})+\left((x-\frac 1x)^2+2\right)+2\right]$$
-$$=x^3\left[u^3+u^2+4\right],$$ where $u=x-\frac 1x$
-And hence the factorization is $$x^3(u+2)(u^2-u+2)$$ which will give us the expected answer.<|endoftext|>
-TITLE: Application of Fourier Series and Stone Weierstrass Approximation Theorem
-QUESTION [5 upvotes]: If $f \in C[0, \pi]$ and $\int_0^\pi f(x) \cos nx\, \text{d}x = 0$ for all integers $n \geq 0$, then $f = 0$
-
-
-Define $ g(x) = \begin{cases}
- f(-x) & \text{if } -\pi \leq x < 0;\\
- f(x) & \text{if } 0 \leq x \leq \pi. \end{cases}$
-which is an even function
-So $g(x)$ can be written as $\sum_{n=0}^\infty a_n\cos (nx)$ for all $x \in [-\pi , \pi]$
-$$\therefore \int_0^\pi f^2(x) \, dx = \int_0^\pi f(x) \left(\sum_{n=0}^\infty a_n\cos(nx)\right) \, dx = \sum_{n=0}^\infty a_n\int_0^\pi f(x) \cos(nx) \, dx = 0$$
-$$\therefore \int_0^\pi f^2(x) \, dx =0,$$ we get $f(x) = 0$
-I think this first part of my attempt is correct.
-$$\text{If }f \in C[0 , \pi] \text{ and } \int_0^\pi x^n f(x) \, dx = 0 \text{ for all } n\geq0, \text{ then } f = 0$$
-Since $f$ is continuous on a closed interval, by the Stone-Weierstrass approximation theorem, for each $\epsilon > 0$ there is a polynomial $p(x)$ such that $|f(x) - p(x)| < \epsilon$ for all $x$.
-I want to show that $\int_0^\pi f^2(x) \, dx = 0$. Please help me to proceed further, and check the first part. Any help would be appreciated.
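-(Before the answer: a numeric illustration of the strategy it recommends — if $p_n \to f$ uniformly, then $\int f p_n \to \int f^2$. This is an illustrative sketch on $[0,1]$ using Bernstein polynomials, one concrete realization of the Stone-Weierstrass approximants; it is not part of the proof:)
-
-    import numpy as np
-    from math import comb
-
-    def bernstein(f, n, x):
-        # B_n(f)(x) = sum_k f(k/n) C(n,k) x^k (1-x)^(n-k)
-        return sum(f(k / n) * comb(n, k) * x**k * (1 - x)**(n - k)
-                   for k in range(n + 1))
-
-    f = lambda x: np.sin(3 * x)
-    x = np.linspace(0, 1, 4001)
-    target = np.trapz(f(x)**2, x)
-    for n in (4, 16, 64, 256):
-        pn = bernstein(f, n, x)
-        print(n, abs(np.trapz(f(x) * pn, x) - target))   # -> 0
-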
-
-REPLY [2 votes]: I am usually too lazy to critique answers even when the OP asks for it. It is easier to just snipe
-with a comment or two. But @zhw has shamed me into really looking at the answer and maybe offering
-a bit of a tutorial. Since I gave a sloppy comment I will make amends here I hope.
-Here are some things that you know or should know, judging from the title of the problem and your answer.
-
-A continuous even function $f$ on $[-\pi,\pi]$ has a Fourier series of the form
-$$\sum_{n=0}^\infty a_n\cos nx$$ but this series need not converge pointwise or uniformly to $f$ unless you have
-stronger assumptions on $f$. [Here you don't. A course on Fourier series may not prove this negative comment, just proving the positive comment assuming, say, that $f$ is also of bounded variation or continuously differentiable. It is essential to know why 19th century mathematicians had to fuss so much to get convergence.]
-A continuous even function $f$ on $[-\pi,\pi]$ has a uniform approximation
-by a trigonometric polynomial
-of the form
-$$\sum_{n=0}^N a_n\cos nx$$ meaning that, for every $\epsilon>0$ you can select at least one such polynomial so that
-$$\left| \sum_{n=0}^N a_n\cos nx -f(x)\right| < \epsilon$$
-for all $-\pi\leq x \leq \pi$. [Fejer's theorem supplies this as does the Stone-Weierstrass theorem.]
-[Dangerous curve ahead!] If you change $\epsilon$ you may have to choose an entirely different polynomial, so the $a_n$ might change. Thus statement #2 does not give you a series converging to $f$, it gives you a sequence of trigonometric polynomials converging uniformly to $f$. In other words Stone-Weierstrass or Fejer's theorem does not give
-$$f(x)=\sum_{n=0}^\infty a_n\cos nx$$
-either pointwise or uniformly. Don't write it!
-If $f(x)=\sum_{n=0}^\infty f_n(x)$ on $[a,b]$ you cannot write $\int_a^b f =\sum_{n=0}^\infty \int_a^b f_n $ without claiming uniform convergence (or some more advanced property).
-If $f(x)=\lim_{n\to \infty} f_n(x)$ on $[a,b]$ you can write $$\int_a^b fh =\lim_{n\to \infty} \int_a^b f_nh $$
-for any continuous $h$
-if you are sure you have uniform convergence (or some more advanced property).
-
-Now we are in a position to tidy up your solution. Your ideas were fine, just missing some caution. You tried to use a series but that fails--just use a sequence instead!
-For the first problem use Stone-Weierstrass to select an appropriate sequence of functions $p_n \to f$ uniformly on $[-\pi,\pi]$. Check that
-$$\int_{-\pi}^\pi f(x)p_n(x)\,dx =0$$
-for each $n$ and that
-$$\lim_{n\to \infty} \int_{-\pi}^\pi f(x)p_n(x)\,dx =\int_{-\pi}^\pi [f(x)]^2\,dx.$$
-Conclude that $f=0$ since it is continuous.
-For the second problem use Stone-Weierstrass to select an appropriate sequence of functions $p_n \to f$ uniformly on $[0,\pi]$. Check that
- $$\int_{0}^\pi f(x)p_n(x)\,dx =0$$
- for each $n$ and that
- $$\lim_{n\to \infty} \int_{0}^\pi f(x)p_n(x)\,dx =\int_{0}^\pi [f(x)]^2\,dx.$$
- Conclude that $f=0$ since it is continuous.<|endoftext|>
-TITLE: stein and shakarchi complex analysis exercise 3.15 (b)
-QUESTION [5 upvotes]: I can't solve this exercise from the book, can anyone give me a hint?
-
-Show that if $f$ is holomorphic in the unit disc, is bounded, and converges
- uniformly to zero in the sector $\theta < \arg z < \varphi$ as $|z| \to 1$, then $f = 0$.
-(Use the Cauchy inequalities or the maximum modulus principle)
-
-My idea was to extend $f$ continuously to the boundary arc of the domain, $\theta < \arg z < \varphi$, $|z| = 1$; then, since $f=0$ on the border, $f=0$ in the whole domain.
-However I can't show that $f$ is continuously extendable.
-thank you!
-
-REPLY [14 votes]: Here's one way to do it. Let $M$ be a bound for $|f|$ on the unit disc and let $t$ be slightly smaller than $\varphi-\theta$ and define
-$$
-g(z) = f(z)f(ze^{it})f(ze^{2it})\cdots f(ze^{nit})
-$$
-where $n$ is chosen so large that $nt > 2\pi$. Let $\varepsilon > 0$. By assumption there is an $r < 1$ such that $|f(z)| < \varepsilon$
- for $r < |z| < 1$ and $\theta < \arg z < \varphi$. Hence
-$$
-|g(z)| < M^n \varepsilon
-$$
-for all $z$ with $r < |z| < 1$. (One factor has modulus less than $\varepsilon$ and the other factors less than $M$.) By the maximum modulus principle, $|g| < M^n\varepsilon$ on the whole disc, and since $\varepsilon$ was arbitrary, we must have that $g(z) = 0$ for all $z$ in the unit disc.
-Hence one of the factors, and consequently all factors, of $g$ vanishes identically (otherwise $g$ would have at most countably many zeros).<|endoftext|>
-TITLE: Book of integrals
-QUESTION [8 upvotes]: Is there a book which contains just a bunch of integrals to evaluate? I want to learn new integration techniques and I'm open to other suggestions as to how I can go about learning new techniques. Thank you
-
-REPLY [3 votes]: Here is a book of advanced integration techniques, if you are interested:
-http://advancedintegrals.com/wp-content/uploads/2016/12/advanced-integration-techniques.pdf<|endoftext|>
-TITLE: How many groups of order $2058$ are there?
-QUESTION [8 upvotes]: I tried to calculate the number of groups of order $2058=2\times3\times 7^3$ and aborted after more than an hour. I used the (apparently slow) function $ConstructAllGroups$ because $NrSmallGroups$ did not give a result.
-The number $n=2058$ is (besides $2048$) the smallest number $n$ for which I do not know $gnu(n)$.
-The highest exponent is $3$, so it should be possible to calculate $gnu(2058)$ in a reasonable time.
-
-What is $gnu(2058)$? If an exact result is too difficult: is it smaller than, larger than, or equal to $2058$?
-
-REPLY [6 votes]: $\mathtt{ConstructAllGroups(2058)}$ completed for me in a little over two hours (8219 seconds on a 2.6GHz machine) and returned a list of $91$ groups, which confirms Alexander Hulpke's results.
-Many serious computations in group theory take a long time - in some cases I have left programs running for months and got useful answers at the end! So this does not rate for me as a difficult calculation.<|endoftext|>
-TITLE: Continuous map and irrational numbers
-QUESTION [7 upvotes]: My question is the following :
-
-Let $f:\mathbb{R}\to\mathbb{R}$ be a continuous map such that each irrational number is mapped to a rational number (i.e. $f(\mathbb{R}\backslash\mathbb{Q})\subset\mathbb{Q}$). Show that $f$ is a constant map.
-
-What I have done :
-Let's suppose that $f$ is not a constant map, i.e. there exist $x,y\in\mathbb{R}$ such that $f(x)\neq f(y).$ As $f$ is continuous, the intermediate value theorem gives us that $[f(x),f(y)]\subset f([x,y]).$ But, as $$f([x,y])\subset f(\mathbb{R})=f(\mathbb{R}\backslash\mathbb{Q}\,\cup\,\mathbb{Q})=f(\mathbb{R}\backslash\mathbb{Q})\,\cup\,\bigcup_{n\in\mathbb{N}}\{f(p_n)\}\subset\mathbb{Q}\,\cup\,\bigcup_{n\in\mathbb{N}}\{f(p_n)\},$$ where $p_n$ is a sequence which enumerates $\mathbb{Q},$ we would get that $[f(x),f(y)]$ is a subset of a countable set and so is countable, which is a contradiction; hence $f$ is constant.
-My questions :
-Is my proof correct, and if yes, does someone see another way to answer it?
-Thank you for your comments, and happy new year !
-
-REPLY [6 votes]: Looks good! As a side note, as you asked for alternative methods, you do not have to formulate the proof by contradiction. Note that the image of $f$ is
-$$
-f(\mathbb{R})=f(\mathbb{R}\setminus\mathbb{Q}\cup\mathbb{Q})=f(\mathbb{R}\setminus\mathbb{Q})\cup f(\mathbb{Q})=A\cup \{f(q_n):n\geq1\}
-$$
-Where $A\subset \mathbb{Q}$ and $q_n$ is an enumeration of the rationals. Thus $f(\mathbb{R})$ is countable, and a continuous map with a countable image is constant.<|endoftext|>
-TITLE: A Galois theory sanity check about conjugates.
-QUESTION [5 upvotes]: Here is my question...
-If $L/K$ is an algebraic extension and $\alpha,\beta \in L$ are $K$-conjugates (that is, they have the same minimal polynomial), is it always true that there exists some $\sigma \in $ Aut$(L/K)$ such that $\sigma(\alpha)=\beta$?
-I have thought about this for a while and can neither come up with a proof nor a counterexample :( Of course this fails if the algebraicity condition is removed: consider $\mathbb{R}/\mathbb{Q}.$
-Any hints will be much appreciated!
-
-REPLY [7 votes]: Consider the equation $(x^2+x-i)(x^2+x+i)=0$ over $K=\mathbb{Q}$.
-Now take $L=\mathbb{Q}(i, \sqrt{1+4i})$. That is, I have added the roots of one of the factors but not the other. (Of course one must check that $\sqrt{1-4i} \not \in L$.) In any case, $i\mapsto -i$ sends an element to its conjugate. It also switches the two factors in the above equation. It cannot be extended to an automorphism of $L$, since only one of these factors has a root in this field.
-This is a counterexample with $\alpha =i$ and $\beta =-i$.<|endoftext|>
-TITLE: Definite integral $\int_0^1 \frac{\arctan x}{x\,\sqrt{1-x^2}}\,\text{d}x$
-QUESTION [17 upvotes]: Wanting to calculate the integral $\int_0^1 \frac{\arctan x}{x\,\sqrt{1-x^2}}\,\text{d}x$, it is certainly already known to many of you that an interesting way to attack it is the method of integration and differentiation with respect to a parameter, which gives $\frac{\pi}{2}\,\log\left(1+\sqrt{2}\right)$.
-Instead, what is not at all clear is how software such as Wolfram Mathematica can calculate that result exactly, and not only approximately. Can someone enlighten me? Thanks!
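-(A numeric cross-check of the closed form, as an illustrative sketch assuming scipy; the endpoint singularity at $x=1$ is integrable and quad handles it:)
-
-    import numpy as np
-    from scipy.integrate import quad
-
-    val, _ = quad(lambda x: np.arctan(x) / (x * np.sqrt(1 - x**2)), 0, 1)
-    print(val, np.pi / 2 * np.log(1 + np.sqrt(2)))   # both ~1.38445...
-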
-
-REPLY [4 votes]: $\newcommand{\angles}[1]{\left\langle\,{#1}\,\right\rangle}
- \newcommand{\braces}[1]{\left\lbrace\,{#1}\,\right\rbrace}
- \newcommand{\bracks}[1]{\left\lbrack\,{#1}\,\right\rbrack}
- \newcommand{\dd}{\mathrm{d}}
- \newcommand{\ds}[1]{\displaystyle{#1}}
- \newcommand{\expo}[1]{\,\mathrm{e}^{#1}\,}
- \newcommand{\half}{{1 \over 2}}
- \newcommand{\ic}{\mathrm{i}}
- \newcommand{\iff}{\Longleftrightarrow}
- \newcommand{\imp}{\Longrightarrow}
- \newcommand{\Li}[1]{\,\mathrm{Li}_{#1}}
- \newcommand{\mc}[1]{\mathcal{#1}}
- \newcommand{\mrm}[1]{\mathrm{#1}}
- \newcommand{\ol}[1]{\overline{#1}}
- \newcommand{\pars}[1]{\left(\,{#1}\,\right)}
- \newcommand{\partiald}[3][]{\frac{\partial^{#1} #2}{\partial #3^{#1}}}
- \newcommand{\root}[2][]{\,\sqrt[#1]{\,{#2}\,}\,}
- \newcommand{\totald}[3][]{\frac{\mathrm{d}^{#1} #2}{\mathrm{d} #3^{#1}}}
- \newcommand{\ul}[1]{\underline{#1}}
- \newcommand{\verts}[1]{\left\vert\,{#1}\,\right\vert}$
-\begin{align}
-&\color{#f00}{\int_{0}^{1}{\arctan\pars{x} \over x\root{1 - x^{2}}}\,\dd x} =
-\int_{0}^{1}{1 \over \root{1 - x^{2}}}\
-\overbrace{\int_{0}^{1}{\dd t \over 1 + x^{2}t^{2}}}
-^{\ds{\arctan\pars{x} \over x}}\ \,\dd x
-\\[5mm] = &\
-\int_{0}^{1}\int_{0}^{1}{\dd x \over \root{1 - x^{2}}\pars{1 + t^{2}x^{2}}}
-\,\dd t
-\\[5mm] \stackrel{x\ \mapsto\ 1/x}{=}\,\,\,&
-\int_{0}^{1}
-\int_{1}^{\infty}{x\,\dd x \over \root{x^{2} - 1}\pars{x^{2} + t^{2}}}\,\dd t
-\label{1}\tag{1}
-\end{align}
-
-Note that the last substitution $\ds{\pars{~x \mapsto 1/x~}}$ leads to a $\ds{\ul{trivial\ integration}}$.
-
-
-With the substitution $\ds{x^{2} \mapsto x}$ in the last integration $\ds{~\pars{\mbox{see}\ \eqref{1}}~}$,
-\begin{align}
-&\color{#f00}{\int_{0}^{1}{\arctan\pars{x} \over x\root{1 - x^{2}}}\,\dd x} =
-\half\int_{0}^{1}
-\int_{1}^{\infty}{\dd x \over \root{x - 1}\pars{x + t^{2}}}\,\dd t
-\\[5mm] &
-\stackrel{x\ \equiv\ 1 + y^{2}}{=}\,\,\,\
-\int_{0}^{1}\int_{0}^{\infty}{\dd y \over y^{2} + 1 + t^{2}}\,\dd t =
-\int_{0}^{1}{1 \over \root{1 + t^{2}}}
-\int_{0}^{\infty}{\dd y \over y^{2} + 1}\,\dd t
-\\[5mm] = &\
-{\pi \over 2}\int_{0}^{1}{\dd t \over \root{1 + t^{2}}}
-\stackrel{t\ =\ \sinh\pars{\theta}}{=}\,\,\,
-{\pi \over 2}\int_{0}^{\mrm{arcsinh}\pars{1}}\,\dd\theta =
-\color{#f00}{{\pi \over 2}\,\mrm{arcsinh}\pars{1}}
-\\[5mm] = &\
-\color{#f00}{{\pi \over 2}\,\ln\pars{1 + \root{2}}}
-\end{align}<|endoftext|>
-TITLE: What did Whitehead and Russell's "Principia Mathematica" achieve?
-QUESTION [20 upvotes]: In philosophical contexts, the Principia Mathematica is sometimes held in high regard as a demonstration of a logical system.
-But what did Whitehead and Russell's Principia Mathematica achieve for mathematics?
-
-REPLY [14 votes]: I'll try to answer referring to the Introduction to the 1st edition of W&R's Principia (3 vols, 1910-13); see :
-
-Alfred North Whitehead & Bertrand Russell, Principia Mathematica to *56 (2nd ed,1927), page 1:
-
-
-THE mathematical logic which occupies Part I of the present work has
-been constructed under the guidance of three different purposes. In the first
-place, it aims at effecting the greatest possible analysis of the ideas with
-which it deals and of the processes by which it conducts demonstrations,
-and at diminishing to the utmost the number of the undefined ideas and
-undemonstrated propositions (called respectively primitive ideas and primitive
-propositions) from which it starts. In the second place, it is framed with a
-view to the perfectly precise expression, in its symbols, of mathematical
-propositions: to secure such expression, and to secure it in the simplest and
-most convenient notation possible, is the chief motive in the choice of topics.
-In the third place, the system is specially framed to solve the paradoxes
-which, in recent years, have troubled students of symbolic logic and the
-theory of aggregates; it is believed that the theory of types, as set forth in
-what follows, leads both to the avoidance of contradictions, and to the
-detection of the precise fallacy which has given rise to them. [emphasis added.]
-
-Simplifying a little bit, the three purposes of the work are:
-
-the foundation of mathematical logic
-
-the formalization of mathematics
-
-the development of the philosophical project called Logicism.
-
-
-I'll not touch the third point.
-
-Regarding the first one, PM is unquestionably the basic building block of modern mathematical logic.
-Unfortunately, its cumbersome notation and the "intermingling" of technical aspects and philosophical ones prevent using it (at least the initial chapters) as a textbook.
-Compare with the first "fully modern" textbook of mathematical logic :
-
-David Hilbert & Wilhelm Ackermann Principles of Mathematical Logic, the 1950 translation of the 1938 second edition of text Grundzüge der theoretischen Logik, firstly published in 1928.
-
-See the Introduction [page 2] :
-
-symbolic logic received a new impetus from the need of mathematics for an exact foundation and strict axiomatic treatment. G.Frege published his Begriffsschrift in 1879
-and his Grundgesetze der Arithmetik in 1893-1903. G.Peano and his co-workers began in 1894 the publication of the Formulaire
-de Mathématiques, in which all the mathematical disciplines were to be presented in terms of the logical calculus. A high point of this development is the appearance of the Principia Mathematica (1910-1913) by A.N. Whitehead and B. Russell.
-
-H&A's work is a "textbook" because - in spite of Hilbert's deep involvement with his foundational project - it is devoted to a plain exposition of "technical" issues, without philosophical discussions.
-
-Now for my tentative answer to the question :
-
-what did Whitehead and Russell's Principia Mathematica achieve for mathematics?
-
-The first (and unprecedented) fully-fledged formalization of a huge part of mathematics, mainly the Cantorian mathematics of the infinite.
-Unfortunately again, we have a cumbersome symbolism, as well as an axiomatization based on the theory of classes (and not: sets) that has been subsequently "surpassed" by Zermelo's axiomatization.
-But we can find there "perfectly precise expression of mathematical propositions [and concepts]", starting from the elementary ones.
-Some examples regarding operations on classes:
-
-*22.01. $\alpha \subset \beta \ . =_{\text {Df}} . \ x \in \alpha \supset_x x \in \beta$
-[in modern notation : $\forall x \ (x \in \alpha \to x \in \beta)$]
-This defines "the class $\alpha$ is contained in the class $\beta$," or "all $\alpha$'s are $\beta$'s."
-
-and the definition of singleton:
-
-[the] function $\iota 'x$, meaning "the class of terms which are identical with $x$" which is the same thing as "the class whose only member is $x$." We are thus to have
-$$\iota'x = \hat y(y = x).$$
-
-[in modern notation : $\{ x \} = \{ y \mid y=x \}]$
-
-[...] The distinction between $x$ and $\iota'x$ is one of the merits of Peano's symbolic logic, as well as of Frege's. [....] Let $\alpha$ be a class; then the class whose only member is $\alpha$ has only one member, namely $\alpha$, while $\alpha$ may have many members. Hence the class whose only member is $\alpha$ cannot be identical with $\alpha$. [...]
-*51.15. $y \in \iota'x \ . \equiv . \ y = x$
-
-[in modern notation : $y \in \{ x \} \leftrightarrow y=x$].<|endoftext|>
-TITLE: Infinite Series $\sum_{m=0}^\infty\sum_{n=0}^\infty\frac{m!\:n!}{(m+n+2)!}$
-QUESTION [7 upvotes]: Evaluating
-$$\sum_{m=0}^\infty \sum_{n=0}^\infty\frac{m!n!}{(m+n+2)!}$$
-involving binomial coefficients.
-My attempt: $$\frac{1}{(m+1)(n+1)}\sum_{m=0}^\infty \sum_{n=0}^\infty\frac{(m+1)!(n+1)!}{(m+n+2)!}=\frac{1}{(m+1)(n+1)} \sum_{m=0}^\infty \sum_{n=0}^\infty\frac{1}{\binom{m+n+2}{m+1}}=?$$
-Is there any closed form of this expression?
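-(A numeric experiment first — an illustrative Python sketch; the double sum converges slowly, with the partial sums creeping toward $\pi^2/6 \approx 1.6449$, the closed form derived below:)
-
-    from math import lgamma, exp, pi
-
-    M = 500
-    s = sum(exp(lgamma(m + 1) + lgamma(n + 1) - lgamma(m + n + 3))
-            for m in range(M) for n in range(M))
-    print(s, pi**2 / 6)   # ~1.641 vs 1.6449...
-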
-
-REPLY [5 votes]: Another (way less subtle) approach. We have:
-$$ S=\sum_{m\geq 0}\sum_{n\geq 0}\frac{\Gamma(m+1)\,\Gamma(n+1)}{(m+n+2)\,\Gamma(m+n+2)}=\sum_{m,n\geq 0}\iint_{(0,1)^2} x^m(1-x)^n y^{m+n+1}\,dx\,dy \tag{1}$$
-hence:
-$$ S = \iint_{(0,1)^2}\frac{y\,dx\,dy}{(1-xy)(1-y+xy)}=2\int_{0}^{1}\frac{-\log(1-y)}{2-y}\,dy=2\int_{0}^{1}\frac{-\log(t)}{1+t}\,dt\tag{2} $$
-and by expanding $\frac{1}{1+t}$ as a geometric series,
-$$ S = 2\sum_{n\geq 0}\frac{(-1)^n}{(n+1)^2} = \color{red}{\zeta(2)} = \frac{\pi^2}{6}.\tag{3}$$<|endoftext|>
-TITLE: Two Banach spaces, if and only if criterion for range of closed unbounded operator to be closed?
-QUESTION [6 upvotes]: Let $E$ and $F$ be two Banach spaces. Let $A: D(A) \subset E \to F$ be a closed unbounded operator. How do I see that $R(A)$ is closed if and only if there exists a constant $C$ such that$$\text{dist}(u, N(A)) \le C\|Au\| \text{ for all }u \in D(A)?$$Here, $D$ denotes domain, $R$ denotes range, $N$ denotes kernel.
-Idea. We probably want to consider the operator $T: E_0 \to F$, where $E_0 = D(A)$ with the graph norm and $T = A$, in some regard? But I am not quite sure what to do next.
-
-REPLY [4 votes]: The first part of the answer was wrong, so I have removed it. I have written a new version and put it at the end. Also the second part (which is now the first part) has been rewritten.
-Suppose $R$ is closed (with this $R$ becomes a Banach space).
-In general, $D$ with the graph norm $||u||_G := ||u|| + ||A(u)||$ is a Banach space. This is so because, under $u \mapsto (u, A(u))$, it is identified with the graph of $A$, which is a closed subspace of $E \oplus F$ since $A$ is a closed operator.
-$A$ is clearly bounded on this space. Taking the quotient space $D/N$ the map $\tilde A([u]):=A(u)$ is well defined, bounded, injective and has the same range as $A$. As such $\tilde A: D/N \to R$ is a bijective bounded map between two Banach spaces. The inverse $\tilde A^{-1} : R \to D/N$ is then also a bounded linear operator.
-The norm on $D/N$ is given by: $||[u]||_{D/N}=\text{dist}(u,N)_G=\text{dist}(u,N)+||A(u)||$.
-Finally:
-$$\text{dist}(u,N)=||[u]||_{D/N}-||A(u)||=||\tilde A^{-1}(\tilde A([u]))||_{D/N}-||A(u)||≤(||\tilde A^{-1}||-1)\ ||A(u)||$$
-The other direction works as follows:
-Suppose $\text{dist}(u,N)≤C\ ||A(u)||$ $\forall u \in D$.
-As seen before $D/N$ is a Banach space if $D$ is given graph norm. On this space we can consider the bijective bounded function $\tilde A: D/N \to R$.
-The inverse of $\tilde A$ is also bounded since:
-$$\frac{||\tilde A^{-1}(\tilde A([u]))||_{D/N}}{||\tilde A([u])||}=\frac{\text{dist}(u,N)+||A(u)||}{||A(u)||}≤C+1$$
-Now let $A(u_n)$ be Cauchy. Then $||[u_n]-[u_m]||_{D/N}≤||\tilde A ^{-1}||\cdot ||A(u_n)-A(u_m)]]$ and $[u_n]$ is Cauchy. But since $D/N$ is a Banach space the limit $[u]$ exists and $||A(u_n)-A(u)||≤||\tilde A||\cdot ||[u_n]-[u]||_{D/N}$, so as a result $A(u_n)$ converges to $A(u)$.<|endoftext|>
-TITLE: Tensor product of $\mathbb{Q}$ with an infinite product
-QUESTION [6 upvotes]: How can I prove that the tensor $\mathbb{Q} \otimes \left( \prod_n \mathbb{Z}/n\mathbb{Z} \right)$, where the product is taken over all the positive
-integers $n$, is not trivial?
-
-REPLY [9 votes]: Proposition: $\mathbb{Q}$ is flat as a $\mathbb{Z}$-module. (Which is to say, the functor $ \_ \otimes_{\mathbb{Z}} \mathbb{Q}$ is exact.)
-Pf: The easiest way is to observe that tensoring with $\mathbb Q$ is the same as localizing at $\mathbb{Z} \setminus \{0\}$, and localization is exact.
-Fancier proof: $\mathbb{Q}$ is the filtered colimit of the free modules $\mathbb{Z}[1/n]$, for $n = 1,2,3 \ldots$. A free module is flat. Now use that $\text{Tor}$ commutes with filtered colimits (since tensoring commutes with colimits, and taking cohomology commutes with filtered colimits).
-Now suppose that $x \in A$ is an element of infinite order in an abelian group $A$. This gives some $0 \to \mathbb{Z} \to A$. Tensor this exact sequence with $\mathbb{Q}$ to get $0 \to \mathbb{Q} \to A \otimes \mathbb{Q}$, using that $\mathbb{Q}$ is flat to keep injectivity.
-Finally, the element $(1,1,\ldots) \in \Pi_n \mathbb{Z} / n \mathbb{Z}$ has infinite order, so the tensor product contains a copy of $\mathbb{Q}$ and in particular is not trivial.<|endoftext|>
-TITLE: Examples of Induced Representations of Lie Algebras
-QUESTION [9 upvotes]: Given a (finite-dimensional) Lie algebra $\mathfrak{g}$, a subalgebra $\mathfrak{h}\subset\mathfrak{g}$, and a representation $\rho:\mathfrak{h}\rightarrow\mathfrak{gl}(V)$ of $\mathfrak{h}$, one can form (so I'm told) a representation of the whole algebra $\mathfrak{g}$ on
-$$ \text{Ind}^{\mathfrak{g}}_{\mathfrak{h}}(V):=U(\mathfrak{g})\otimes_{U(\mathfrak{h})} V, $$
-where $U(-)$ denotes the universal enveloping algebra of $-$. My question is simple:
-
-How, exactly, is the representation of $\mathfrak{g}$ on $\text{Ind}^{\mathfrak{g}}_{\mathfrak{h}}(V)$ defined? Are there any good references covering this topic (focusing on the Lie algebra side, rather than induced representations of Lie groups, as essentially all references I've found discuss)?
-
-In particular, references going through numerous examples actually computing the universal enveloping algebras and the induced representation would be tremendously appreciated.
-
-REPLY [12 votes]: I assume that we are dealing with Lie algebras over a field $K$. Let me try to give an overview of how this induced representation is constructed, which is an exercise in understanding how representations of the Lie algebra $\mathfrak{g}$ relate to modules over the universal enveloping algebra $\mathcal{U}(\mathfrak{g})$. A book which you might want to look into is James E. Humphreys' Introduction to Lie Algebras and Representation Theory, where the topic is covered from an algebraic point of view and without the use of Lie groups.
-The universal enveloping algebra
-As you probably already know, the universal enveloping algebra $\mathcal{U}(\mathfrak{g})$ of the Lie algebra $\mathfrak{g}$ is an associative, unital $K$-algebra that can be defined as the quotient algebra $T(\mathfrak{g})/I$, where $T(\mathfrak{g})$ is the tensor algebra over $\mathfrak{g}$ and $I$ the two-sided ideal
-$$
- I = \langle x \otimes y - y \otimes x - [x,y] \mid x,y \in \mathfrak{g}\rangle \,.
-$$
-So as a vector space, $\mathcal{U}(\mathfrak{g})$ is generated by the monomials
-$$
- x_1 \dotsm x_n
- \qquad
- \text{with $x_1, \dotsc, x_n \in \mathfrak{g}$}.
-$$
-The inclusion map $\mathfrak{g} \hookrightarrow \mathcal{U}(\mathfrak{g})$ is a homomorphism of Lie algebras, so we can regard $\mathfrak{g}$ as a Lie subalgebra of $\mathcal{U}(\mathfrak{g})$. The multiplication in $\mathcal{U}(\mathfrak{g})$ satisfies the property
-$$
- xy-yx = [x,y]_{\mathfrak{g}}
-$$
-for all $x,y \in \mathfrak{g} \subseteq \mathcal{U}(\mathfrak{g})$.
-An essential property that follows from this construction of $\mathcal{U}(\mathfrak{g})$ is the theorem of Poincaré–Birkhoff–Witt, which decribes a vector space basis of $\mathcal{U}(\mathfrak{g})$.
-
-Theorem (Poincaré–Birkhoff–Witt). Let $\mathfrak{g}$ be a Lie algebra and let $(x_i)_{i \in I}$ be a basis of $\mathfrak{g}$ where $(I, \leq)$ is a totally ordered set. Then the ordered monomials
-$$
- x_{i_1} \dotsm x_{i_n}
-\qquad
-\text{with $n \in \mathbb{N}$, $i_1, \dotsc, i_n \in I$, $i_1 \leq \dotsb \leq i_n$}
-$$
-form a basis of $\mathcal{U}(\mathfrak{g})$.
-
-
-Remark: This basis can also be written as
-$$
- x_{i_1}^{p_1} \dotsm x_{i_m}^{p_m}
-$$
-with $m \in \mathbb{N}$, $i_1, \dotsc, i_m \in I$, $i_1 < \dotsb < i_m$, $p_1, \dotsc, p_m \in \mathbb{N}$.
-
-Let’s look at a specific example:
-
-Example: The Lie algebra $\mathfrak{sl}_2(\mathbb{C})$ admits a basis $(e,h,f)$ given by
-$$
-e =
-\begin{pmatrix}
- 0 & 1 \\
- 0 & 0
-\end{pmatrix},
-\quad
-h =
-\begin{pmatrix}
- 1 & 0 \\
- 0 & -1
-\end{pmatrix},
-\quad
-f =
-\begin{pmatrix}
- 0 & 0 \\
- 1 & 0
-\end{pmatrix}.
-$$
-This basis is already totally ordered by the order in which we write the tuple $(e,h,f)$. By the PBW-theorem it follows that $\mathcal{U}(\mathfrak{sl}_2(\mathbb{C}))$ has a basis consisting of the monomials
-$$
-e^l h^m f^n \quad \text{with $l,m,n \in \mathbb{N}$.}
-$$
-
-One nice thing about the universal enveloping algebra is its universal property:
-
-Proposition (Universal property of the UEA).
-Let $\mathfrak{g}$ be a $K$-Lie algebra and let $A$ be an associative and unital $K$-algebra.
-Any Lie algebra homomorphism $\phi \colon \mathfrak{g} \to A$ extends uniquely to an algebra homomorphism $\Phi \colon \mathcal{U}(\mathfrak{g}) \to A$.
-
-Representations of $\mathfrak{g}$ and $\mathcal{U}(\mathfrak{g})$-modules
-This universal property has nice effects for the representation theory of $\mathfrak{g}$.
-A representation of $\mathfrak{g}$ is, by definition, a Lie algebra homomorphism $\mathfrak{g} \to \mathfrak{gl}(V)$ for some vector space $V$.
-Using the universal property of $\mathcal{U}(\mathfrak{g})$ it follows that the Lie algebra homomorphisms $\mathfrak{g} \to \mathfrak{gl}(V)$ correspond one-to-one to the algebra homomorphisms $\mathcal{U}(\mathfrak{g}) \to \mathrm{End}_K(V)$ by restricting or extending the homomorphisms in question.
-But an algebra homomorphism from $\mathcal{U}(\mathfrak{g})$ to $\mathrm{End}_K(V)$ is the same as a $\mathcal{U}(\mathfrak{g})$-module structure on $V$.
-We have thus found a one-to-one correspondence between representations of $\mathfrak{g}$ and $\mathcal{U}(\mathfrak{g})$-modules.
-Let us be more explicit:
-If $\rho \colon \mathfrak{g} \to \mathfrak{gl}(V)$ is a representation of $\mathfrak{g}$, then we have a multiplication map (i.e. a bilinear map)
-$$
- \mathfrak{g} \times V \to V,
- \quad
- (x,v) \mapsto x.v
-$$
-given by
-$$
- x.v = \rho(x)(v)
-$$
-for all $x \in \mathfrak{g}$ and $v \in V$.
-This multiplication now extends to a multiplication map
-$$
- \mathcal{U}(\mathfrak{g}) \times V \to V,
- \quad
- (y,v) \mapsto y \cdot v
-$$
-that is given with respect to the PBW-basis of $\mathcal{U}(\mathfrak{g})$ by
-$$
- (x_1 \dotsm x_n) \cdot v
- =
- x_1 . (x_2 . ( \dotsm ( x_{n-1} . (x_n.v) ) ) )
-$$
-for all $x_1, \dotsc, x_n \in \mathfrak{g}$ and $v \in V$.
-This multiplication gives $V$ the structure of an $\mathcal{U}(\mathfrak{g})$-module.
-Note also that the restriction of this $\mathcal{U}(\mathfrak{g})$-module structure to $\mathfrak{g}$ (regarded as a subset of $\mathcal{U}(\mathfrak{g})$) coincides with the representation of $\mathfrak{g}$ that we started with.
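-Here is a small computational sketch of this correspondence (illustrative only; it uses the defining representation of $\mathfrak{sl}_2(\mathbb{C})$ on $\mathbb{C}^2$, with numpy assumed available). A PBW monomial acts simply by composing the matrices of its factors:
-
-    import numpy as np
-
-    e = np.array([[0., 1.], [0., 0.]])
-    h = np.array([[1., 0.], [0., -1.]])
-    f = np.array([[0., 0.], [1., 0.]])
-
-    # In U(sl2) we have xy - yx = [x,y]; check ef - fe = h in this rep:
-    assert np.allclose(e @ f - f @ e, h)
-
-    # The monomial e*h*f acts as the composite rho(e) rho(h) rho(f):
-    v = np.array([1., 0.])
-    print(e @ h @ f @ v)   # e.(h.(f.v)) = [-1, 0]
-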
-Extension of scalars
-We will now need the tensor product, and in particular its use for the so-called extension of scalars. A detailed explanation can, for example, be found in Abstract Algebra by Dummit and Foote.
-The basic idea behind the induced representation will be precisely this extension of scalars.
-Let $R$ be a unital ring, let $S$ be a subring of $R$ (also unital) and let $M$ be a (unital) $S$-module.
-Then we want to somehow extend the $S$-module structure of $M$ to an $R$-module structure on $M$.
-The problem is that there is a priori no good way to extend the multiplication map $S \times M \to M$ to a multiplication map $R \times M \to M$.
-So instead, we replace $M$ by another abelian group, namely $R \otimes_S M$.
-Recall that $R \otimes_S M$ is an abelian group generated by elements of the form $r \otimes m$ with $r \in R$ and $m \in M$, under the constraints that
-$$
- (r+r') \otimes m = r \otimes m + r' \otimes m, \\
- r \otimes (m+m') = r \otimes m + r \otimes m', \\
- (rs) \otimes m = r \otimes (sm)
-$$
-where $r, r' \in R$, $m, m' \in M$ and $s \in S$.
-The nice thing about $R \otimes_S M$ is that it naturally carries the structure of an $R$-module via
-$$
- r' \cdot (r \otimes m)
- = (r' r) \otimes m
-$$
-for all $r', r \in R$ and $m \in M$.
-Note that for all $s \in S$ and $m \in M$ we have
-$$
- s \cdot (1 \otimes m)
- = (s \cdot 1) \otimes m
- = s \otimes m
- = (1 \cdot s) \otimes m
- = 1 \otimes (sm),
-$$
-so we can, to some extent, think of $R \otimes_S M$ as extending the original $S$-module structure of $M$. Indeed, the map
-$$
- M \to R \otimes_S M \,,
- \quad
- m \mapsto 1 \otimes m
-$$
-is a homomorphism of $S$-modules. (A word of warning though: it is not always true that this homomorphism is injective, so we cannot necessarily regard $M$ as an $S$-submodule of $R \otimes_S M$.)
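-A standard example of this failure of injectivity: with $S = \mathbb{Z}$, $R = \mathbb{Q}$ and $M = \mathbb{Z}/2\mathbb{Z}$ one gets
-$$
-\mathbb{Q} \otimes_{\mathbb{Z}} \mathbb{Z}/2\mathbb{Z} = 0,
-$$
-since $1 \otimes m = \tfrac{2}{2} \otimes m = \tfrac{1}{2} \otimes 2m = \tfrac{1}{2} \otimes 0 = 0$; here $m \mapsto 1 \otimes m$ is the zero map and nothing of $M$ survives.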
-We refer to this process of “extending” the $S$-module structure of $M$ to the $R$-module structure on $R \otimes_S M$ as the extension of scalars from $S$ to $R$.
-The induced representation
-We are now ready to tackle the induced representation. For this let $\mathfrak{g}$ be a Lie algebra, let $\mathfrak{h}$ be a Lie subalgebra of $\mathfrak{g}$ and let $\rho \colon \mathfrak{h} \to \mathfrak{gl}(V)$ be a representation of $\mathfrak{h}$. We then have for all $x \in \mathfrak{h}$ and $v \in V$ the product $x.v = \rho(x)(v)$.
-Note that the inclusion map $\mathfrak{h} \hookrightarrow \mathfrak{g}$ is a homomorphism of Lie algebras and therefore induces (by the universal property of the universal enveloping algebra) a homomorphism of algebras $\mathcal{U}(\mathfrak{h}) \to \mathcal{U}(\mathfrak{g})$ that is given on the PBW-bases of $\mathcal{U}(\mathfrak{h})$ and $\mathcal{U}(\mathfrak{g})$ by
-$$
- x_1 \dotsm x_n \mapsto x_1 \dotsm x_n
- \quad
- \text{for all $x_1, \dotsc, x_n \in \mathfrak{h}$}.
-$$
-This homomorphism of algebras is injective since it maps the PBW-basis of $\mathcal{U}(\mathfrak{h})$ injectively into the PBW-basis of $\mathcal{U}(\mathfrak{g})$.
-We can therefore regard $\mathcal{U}(\mathfrak{h})$ as a subalgebra of $\mathcal{U}(\mathfrak{g})$.
-We can now apply the extension of scalars from $\mathcal{U}(\mathfrak{h})$ to $\mathcal{U}(\mathfrak{g})$:
-That $V$ is a representation of $\mathfrak{h}$ means that it is a $\mathcal{U}(\mathfrak{h})$-module via
-$$
- (x_1 \dotsm x_n) \cdot v
- =
- x_1.(x_2.(\dotsm(x_{n-1}.(x_n.v))))
-$$
-for all $x_1, \dotsc, x_n \in \mathfrak{h}$ and $v \in V$.
-So by applying extension of scalars we get the $\mathcal{U}(\mathfrak{g})$-module
-$$
- \mathrm{Ind}_\mathfrak{h}^\mathfrak{g}(V)
- =
- \mathcal{U}(\mathfrak{g}) \otimes_{\mathcal{U}(\mathfrak{h})} V.
-$$
-But let’s see what the $\mathcal{U}(\mathfrak{g})$-module structure on $\mathrm{Ind}_\mathfrak{h}^\mathfrak{g}(V)$ really looks like:
-
-As a vector space, $\mathrm{Ind}_\mathfrak{h}^\mathfrak{g}(V)$ is generated by the simple tensors $x \otimes v$ with $x \in \mathcal{U}(\mathfrak{g})$ and $v \in V$, and $\mathcal{U}(\mathfrak{g})$ is generated by the monomials $x_1 \dotsm x_n$ with $x_1, \dotsc, x_n \in \mathfrak{g}$.
-It follows that $\mathrm{Ind}_\mathfrak{h}^\mathfrak{g}(V)$ is, again as a vector space, generated by the elements
-$$
- (x_1 \dotsm x_n) \otimes v
- \quad
- \text{with $x_1, \dotsc, x_n \in \mathfrak{g}$, $v \in V$}.
-$$
-In terms of these vector space generators, the $\mathcal{U}(\mathfrak{g})$-module structure of $\mathrm{Ind}_\mathfrak{h}^\mathfrak{g}(V)$ is given by the multiplication
-$$
- (x_1 \dotsm x_n) \cdot ((y_1 \dotsm y_m) \otimes v)
- = (x_1 \dotsm x_n \cdot y_1 \dotsm y_m) \otimes v.
-$$
-The fact that we are tensoring over $\mathcal{U}(\mathfrak{h})$ has the effect that
-$$
- h \cdot (1 \otimes v)
- = h \otimes v
- = 1 \otimes (h.v)
- \quad
- \text{for all $h \in \mathfrak{h}$, $v \in V$}.
-$$
-
-These two properties can be nicely put together by choosing a suitable basis of $\mathfrak{g}$ and applying the PBW theorem: Let $(b_i)_{i \in I}$ be a basis of $\mathfrak{g}$ where
-
-$(I ,\leq)$ is a totally ordered set,
-we have a partition $I = J' \cup J$ so that $(b_j)_{j \in J}$ is a basis of $\mathfrak{h}$, and
-$j' \leq j$ for all $j' \in J'$ and $j \in J$.
-
-Then by the PBW-theorem, the ordered monomials
-$$
- b_{i_1} \dotsm b_{i_n} b_{j_1} \dotsm b_{j_m}
- \qquad
- \begin{alignedat}{2}
- i_1, \dotsc, i_n &\in J', \,
- &
- i_1 \leq \dotsb &\leq i_n,
- \\
- j_1, \dotsc, j_m &\in J, \,
- &
- j_1 \leq \dotsb &\leq j_m
- \end{alignedat}
-$$
-form a basis of $\mathcal{U}(\mathfrak{g})$, and the ordered monomials
-$$
- b_{j_1} \dotsm b_{j_m}
- \qquad
- \text{with $j_1, \dotsc, j_m \in J$, $j_1 \leq \dotsb \leq j_m$}
-$$
-form a basis of the subalgebra $\mathcal{U}(\mathfrak{h})$.
-With respect to this basis, the $\mathcal{U}(\mathfrak{g})$-module structure on $\mathrm{Ind}_\mathfrak{h}^\mathfrak{g}(V)$ is given by
-$$
- (b_{i_1} \dotsm b_{i_n} b_{j_1} \dotsm b_{j_m}) . (1 \otimes v)
- = (b_{i_1} \dotsm b_{i_n}) \otimes (b_{j_1}.(\dotsm (b_{j_m}.v)))
-$$
-(Roughly speaking: The basis elements of $\mathfrak{h}$ just do what they have always done, and the “new” basis elements are put in front. Just as expected from a formal and general construction.)
-
-Example:
-Let $\mathfrak{g} = \mathfrak{gl}_2(\mathbb{C})$ and $\mathfrak{h} = \mathfrak{sl}_2(\mathbb{C})$, and let $(e,h,f)$ be the basis of $\mathfrak{sl}_2(\mathbb{C})$ as in the previous example. We can extend this basis of $\mathfrak{sl}_2(\mathbb{C})$ to a basis $(b,e,h,f)$ of $\mathfrak{gl}_2(\mathbb{C})$ by choosing
-$$
- b =
- \begin{pmatrix}
- 1 & 0 \\
- 0 & 1
- \end{pmatrix}.
-$$
-This basis $(b, e, h, f)$ of $\mathfrak{gl}_2(\mathbb{C})$ is already totally ordered by the way its elements are ordered in the tuple $(b, e, h, f)$. A basis of $\mathcal{U}(\mathfrak{gl}_2(\mathbb{C}))$ is now given by the monomials
-$$
- b^k e^l h^m f^n
- \quad
- \text{with $k,l,m,n \in \mathbb{N}$}
-$$
-and a basis of $\mathcal{U}(\mathfrak{sl}_2(\mathbb{C}))$ is given by the monomials
-$$
- e^l h^m f^n
- \quad
- \text{with $l,m,n \in \mathbb{N}$}.
-$$
-Consider now the natural representation $V = \mathbb{C}^2$ of $\mathfrak{sl}_2(\mathbb{C})$, which is given by
-$$
- x.v
- =
- xv
- \quad
- \text{for all $x \in \mathfrak{sl}_2(\mathbb{C})$ and $v \in \mathbb{C}^2$},
-$$
-where the right hand side denotes the usual matrix-vector-multiplication.
-Then the induced $\mathcal{U}(\mathfrak{gl}_2(\mathbb{C}))$-module structure on $\mathrm{Ind}_\mathfrak{h}^\mathfrak{g}(V)$ is given by
-$$
- (b^k e^l h^m f^n) \cdot (1 \otimes v)
- = b^k \otimes (e^l h^m f^n v)
- \quad
- \text{for all $v \in \mathbb{C}^2$}.
-$$
-Note that we do not get the natural representation of $\mathfrak{gl}_2(\mathbb{C})$, as $b$ does not act by the identity, but rather via $b.(1 \otimes v) = b \otimes v$ for all $v \in \mathbb{C}^2$.
-
-We are now nearly finished: We have constructed a $\mathcal{U}(\mathfrak{g})$-module structure on $\mathrm{Ind}_\mathfrak{h}^\mathfrak{g}(V)$, which by restriction to $\mathfrak{g}$ (when regarded as a subset of $\mathcal{U}(\mathfrak{g})$) now corresponds to a representation of $\mathfrak{g}$. This representation is given by
-$$
- x . ((x_1 \dotsm x_n) \otimes v)
- = (x \cdot x_1 \dotsm x_n) \otimes v
- \quad
- \text{for all $x, x_1, \dotsc, x_n \in \mathfrak{g}$, $v \in V$},
-$$
-a special case of the previous formulae.
-I hope this helps.
-TITLE: Strange Mean Inequality
-QUESTION [14 upvotes]: This problem was inspired by this question.
-$\sqrt [ 3 ]{ a(\frac { a+b }{ 2 } )(\frac { a+b+c }{ 3 } ) } \ge \frac { a+\sqrt { ab } +\sqrt [ 3 ]{ abc } }{ 3 } $
-The above can be proved using Hölder's inequality.
-$\sqrt [ 3 ]{ a(\frac { a+b }{ 2 } )(\frac { a+b+c }{ 3 } ) } =\sqrt [ 3 ]{ (\frac { a }{ 3 } +\frac { a }{ 3 } +\frac { a }{ 3 } )(\frac { a }{ 3 } +\frac { a+b }{ 6 } +\frac { b }{ 3 } )(\frac { a+b+c }{ 3 } ) } \ge \sqrt [ 3 ]{ (\frac { a }{ 3 } +\frac { a }{ 3 } +\frac { a }{ 3 } )(\frac { a }{ 3 } +\frac { \sqrt { ab } }{ 3 } +\frac { b }{ 3 } )(\frac { a }{ 3 } +\frac { b }{ 3 } +\frac { c }{ 3 } ) } (\because \text{AM-GM})\\ \ge \frac { a+\sqrt { ab } +\sqrt [ 3 ]{ abc } }{ 3 } (\because \text{Hölder's inequality})$
-However, I had trouble generalizing this inequality to
-$\sqrt [ n ]{ \prod _{ i=1 }^{ n }{ { A }_{ i } } } \ge \frac { \sum _{ i=1 }^{ n }{ { G }_{ i } } }{ n } $
-when ${ A }_{ i }=\frac { \sum _{ j=1 }^{ i }{ { a }_{ j } } }{ i } $
-and ${ G }_{ i }=\sqrt [ i ]{ \prod _{ j=1 }^{ i }{ { a }_{ j } } } $ as I could not split the fractions as I did above.
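-(Before hunting for a proof, the conjectured inequality is easy to test numerically. Below is a small Python sketch; the choice $n = 6$ and the sampling range are arbitrary.)
-
-import random
-from math import prod
-
-n = 6
-a = [random.uniform(0.1, 10) for _ in range(n)]
-# A[i] = arithmetic mean of a_1..a_i, G[i] = geometric mean of a_1..a_i
-A = [sum(a[:i]) / i for i in range(1, n + 1)]
-G = [prod(a[:i]) ** (1 / i) for i in range(1, n + 1)]
-print(prod(A) ** (1 / n) >= sum(G) / n)  # prints True in every trial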
-
-REPLY [6 votes]: This result was conjectured by Professor Finbarr Holland, and then it was proved by
-K. Kedlaya in an article in the American Mathematical Monthly that can be found here, and then it was generalized by Professor Holland in an article that can be found here.
-TITLE: Making a proof precise of "Aut$(Q_8)\cong S_4$"
-QUESTION [11 upvotes]: I know, with some machinery, how to prove that Aut$(Q_8)\cong S_4$. My question here is about not how to prove, but is about an incomplete proof (I feel) given by a student to me (it should be possibly somewhere on-line, since it is not so easy to get the idea of his proof, I think.)
-
-Consider a cube, and label $i,-i$ on a pair of opposite faces; similarly put $j,-j$ and $k,-k$ on other faces. Since the group of rotations of cube is $S_4$, Aut$(Q_8)$ is $S_4$.
-
-Can anyone make this argument precise?
-
-REPLY [5 votes]: As SpamIAm says in his solution, $-1$ and $1$ must be fixed by any automorphism $\phi$. Moreover, since $i$ and $j$ generate the group, an automorphism is determined by the images of $i$ and $j$. There are six possibilities left for $\phi(i)$ and at most five for $\phi(j)$. In fact, there are at most four, since $\phi(-i) = \phi(i^3) = \phi(i)^3$ is already decided. Therefore $Q_8$ has at most $24$ automorphisms.
-All that is left now is to see that every rotation of the cube is actually an automorphism of the group. But it is well known that every rotation in $\mathbb{R}^3$ corresponds to some mapping $x \mapsto qxq^{-1}$ restricted to the pure quaternions, and this mapping is an inner automorphism of the ring of quaternions.
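-For a concrete instance of that last step: conjugation by the unit quaternion $q = \tfrac{1}{\sqrt 2}(1+i)$ fixes $\pm 1$ and $\pm i$ and sends
-$$
-j \mapsto qjq^{-1} = k, \qquad k \mapsto qkq^{-1} = -j,
-$$
-which is exactly the $90^\circ$ rotation of the cube about the axis through the faces labelled $i$ and $-i$.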
-TITLE: How to get the complex number out of the polar form
-QUESTION [5 upvotes]: How does one get the complex number out of this equation?
-
-$$\Large{c = M e^{j \phi}}$$
-
-I would like to write a function for this in C but I don't see how I can get the real and imaginary parts out of this equation to store it in a C structure.
-
-REPLY [2 votes]: $$\mathcal{R}(c)=M\cdot\cos{\phi}$$
-$$\mathcal{I}(c)=M\cdot\sin{\phi}$$
-Assuming that $j^2=-1$ (electrical-engineering notation). If we follow a math notational convention where $i^2=-1$ and $j=e^{{2i\pi\over 3}}$ is a complex root of unity we just replace in the above $\phi$ by $\phi+{2i\pi\over 3}$
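-(The question asks for C; here is a sketch in Python for concreteness. The same two formulas carry over verbatim to C using cos and sin from math.h; the function name and return layout here are made up for illustration.)
-
-import cmath, math
-
-def from_polar(M, phi):
-    # real and imaginary parts of c = M * e^(j*phi)
-    return (M * math.cos(phi), M * math.sin(phi))
-
-print(from_polar(2.0, math.pi / 3))   # (1.0000..., 1.7320...)
-print(cmath.rect(2.0, math.pi / 3))   # the same value, as a Python complex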
-TITLE: Conjecture $\sum_{m=1}^\infty\frac{y_{n+1,m}y_{n,k}}{[y_{n+1,m}-y_{n,k}]^3}\overset{?}=\frac{n+1}{8}$, where $y_{n,k}=(\text{BesselJZero[n,k]})^2$
-QUESTION [13 upvotes]: While solving a quantum mechanics problem using perturbation theory I encountered the following sum
-$$
-S_{0,1}=\sum_{m=1}^\infty\frac{y_{1,m}y_{0,1}}{[y_{1,m}-y_{0,1}]^3},
-$$
-where $y_{n,k}=\left(\text{BesselJZero[n,k]}\right)^2$ is square of the $k$-th zero of Bessel function $J_n$ of the first kind.
-Numerical calculation using Mathematica showed that $S_{0,1}\approx 0.1250000$. Although I couldn't verify this with higher precision I found some other cases where analogous sums are close to rational numbers. Specifically, after some experimentation I found that the sums
-$$
-S_{n,k}=\sum_{m=1}^\infty\frac{y_{n+1,m}y_{n,k}}{[y_{n+1,m}-y_{n,k}]^3}
-$$
-are independent of $k$ and have rational values for integer $n$, and made the following conjecture
-
-$\bf{Conjecture:}\ $ for $k=1,2,3,...$ and arbitrary $n\geq 0$
- $$\sum_{m=1}^\infty\frac{y_{n+1,m}y_{n,k}}{[y_{n+1,m}-y_{n,k}]^3}\overset{?}=\frac{n+1}{8},\\ \text{where}\ y_{n,k}=\left(\text{BesselJZero[n,k]}\right)^2.
-$$
-
-How one can prove it?
-It seems this conjecture is correct also for negative values of $n$. For example for $n=-\frac{1}{2}$ one has $y_{\frac{1}{2},m}=\pi^2 m^2$, $y_{-\frac{1}{2},k}=\pi^2 \left(k-\frac{1}{2}\right)^2$ and the conjecture becomes (see Claude Leibovici's answer for more details)
-$$
-\sum_{m=1}^\infty\frac{m^2\left(k-\frac{1}{2}\right)^2}{\left(m^2-\left(k-\frac{1}{2}\right)^2\right)^3}=\frac{\pi^2}{16}.
-$$
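-(A quick numerical sanity check of the conjecture, sketched in Python with SciPy. Truncating the infinite sum at a finite $M$ is of course an approximation, but the terms decay like $m^{-4}$, so a modest $M$ already gives many digits.)
-
-import numpy as np
-from scipy.special import jn_zeros
-
-n, k, M = 0, 1, 5000
-y_next = jn_zeros(n + 1, M) ** 2   # squares of the first M zeros of J_{n+1}
-y_nk = jn_zeros(n, k)[-1] ** 2     # square of the k-th zero of J_n
-S = np.sum(y_next * y_nk / (y_next - y_nk) ** 3)
-print(S, (n + 1) / 8)              # 0.12499999... vs 0.125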
-
-REPLY [5 votes]: There is a rather neat proof of this.
-First, note that there is already an analogue for this:
-DLMF §10.21 says that a Rayleigh
-function $\sigma_n(\nu)$ is defined as a similar power series
-$$ \sigma_n(\nu) = \sum_{m\geq1} y_{\nu, m}^{-n}. $$
-It links to http://arxiv.org/abs/math/9910128v1 among others as an example of how
-to evaluate such things.
-In your case, call $\zeta_m = y_{\nu,m}$ and $z=y_{\nu-1,k}$ ($\nu$ is $n$ shifted by $1$), so that after
-expanding in partial fractions your sum is
-$$ \sum_{m\geq1} \frac{\zeta_m z}{(\zeta_m-z)^3} = \sum_{m\geq1}
-\frac{z^2}{(\zeta_m-z)^3} + \frac{z}{(\zeta_m-z)^2}. $$
-Introduce the function
-$$ y_\nu(z) = z^{-\nu/2}J_\nu(z^{1/2}). $$
-By DLMF 10.6.5 its derivative
-satisfies the two relations
-$$\begin{aligned}
- y'_\nu(z) &= (2z)^{-1} y_{\nu-1}(z) - \nu z^{-1} y_\nu(z)
-\\&=
--\tfrac12 y_{\nu+1}(z).
-\end{aligned} $$
-It also has the infinite product
-expansion
-$$ y_\nu(z) = \frac{1}{2^\nu\nu!}\prod_{k\geq1}(1 - z/\zeta_k). $$
-Therefore, each partial sum of $(\zeta_k-z)^{-s}$, $s\geq1$ can be evaluated in
-terms of derivatives of $y_\nu$:
-$$ \sum_{k\geq1}(\zeta_k-z)^{-s} = \frac{-1}{(s-1)!}\frac{d^s}{dz^s}\log
-y_\nu(z). $$
-When evaluating this logarithmic derivative, the derivative $y'_\nu$
-can be expressed in terms of $y_{\nu-1}$, going down in $\nu$, but the derivative
-$y'_{\nu-1}$ can be expressed in terms of $y_\nu$ using the other
-relation that goes up in the index $\nu$. So even higher-order derivatives contain only $y_\nu$ and $y_{\nu-1}$.
-I calculated your sum using this procedure with a CAS as:
-$$ -\tfrac12z^2(\log y)''' -z(\log y)''
-= \tfrac18\nu + z^{-1} P\big(y_{\nu-1}(z)/y_\nu(z)\big), $$
-where $P$ is the polynomial
-$$ P(q) = -\tfrac18 q^3 + (\tfrac38\nu-\tfrac18) q^2 + (-\tfrac14\nu^2
-+ \tfrac14\nu - \tfrac18)q. $$
-When $z$ is chosen to be any root of $y_{\nu-1}$,
-$z=\mathsf{BesselJZero}[\nu-1, k]^2$, $P(q)=0$, your sum is equal
-to
-$$ \frac{\nu}{8}, $$
-which is $(n+1)/8$ in your notation.
-It is possible to derive a number of such closed forms for sums of
-this type. For example, by differentiating $\log y$ differently
-(going $\nu\to\nu+1\to\nu$), one would get
-$$ \sum_{m\geq1}
-\frac{y_{\nu,m}y_{\nu+1,k}}{(y_{\nu,m}-y_{\nu+1,k})^3} =
--\frac{\nu}{8}. $$
-Some other examples, for which the r.h.s. is independent of $z$ ($\zeta_m=y_{\nu,m}, z=y_{\nu-1,l}$, $l$ arbitrary):
-$$ \begin{gathered}
-\sum_{k\geq1} \frac{\zeta_k}{(\zeta_k-z)^2} = \frac14,\\
-\sum_{k\geq1} \frac{z^2}{(\zeta_k-z)^4} - \frac{1}{(\zeta_k-z)^2} + \frac1{24}\frac{5-\nu}{\zeta_k-z} = \frac{1}{48}, \\
-\sum_{k\geq1} \frac{\zeta_k}{(\zeta_k-z)^4} + \frac1{96}\frac{z-\zeta_k-8+4\nu}{(\zeta_k-z)^2} = 0.
-\end{gathered} $$
-or with $z=y_{\nu+1,l}$, $l$ arbitrary:
-$$ \begin{gathered}
-\sum_{k\geq1} \frac{z^2}{(\zeta_k-z)^3} = -\tfrac18\nu-\tfrac14,
-\end{gathered} $$
-and they get messier with higher degrees.
-TITLE: Prove that there is only one sequence which meets the following conditions
-QUESTION [9 upvotes]: Problem statement is as follows:
-Given $n\geq 2$, prove that you can choose $1 \lt a_1 \lt a_2 \lt ... \lt a_n$ such that $$a_i \mid 1 + a_1a_2...a_{i-1}a_{i+1}...a_n.$$ Prove that the sequence is unique if and only if $n \in \{2, 3, 4\}$.
-I have solved the first part. A sequence that satisfies the conditions is $a_1 = 2$, $a_i = \prod_{j \lt i}{a_j} + 1$. This works because every $a_i$ with $i \gt j$ is congruent to $1$ modulo $a_j$. As for the second part, I proved the case $n = 2$, which seems pretty easy, but I have no clue how to continue. Any help would be appreciated.
-
-REPLY [3 votes]: To prove the only if direction, it suffices to find more than one sequence if $n = 5$ since these can then be extended by the method you give in the problem statement.
-For $n = 5$, we have the three sequences
-$$a_1 = 2,a_2 = 3,a_3 = 7,a_4 = 43,a_5 = 1807$$
-$$a_1 = 2,a_2 = 3,a_3 = 7,a_4 = 47,a_5 = 395$$
-$$a_1 = 2,a_2 = 3,a_3 = 11,a_4 = 23,a_5 = 31$$
-Thus there are at least three sequences for $n \geq 5$.
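-(These three tuples are easy to verify by machine; a one-liner sketch in Python:)
-
-from math import prod
-
-def ok(seq):
-    # each a_i must divide 1 + (product of the other terms)
-    return all((1 + prod(seq) // a) % a == 0 for a in seq)
-
-for seq in [(2, 3, 7, 43, 1807), (2, 3, 7, 47, 395), (2, 3, 11, 23, 31)]:
-    print(seq, ok(seq))  # True, True, True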
-I will now prove that the sequence is unique for $n = 3$ and $n = 4$.
-This completes the if direction since $n = 2$ is trivial.
-Note first that the given condition implies that the $a_i$ are relatively prime.
-The key observation (as stated in the other answer) is the following:
-The above conditions imply that for any sequence of distinct
-indices $i_1,...,i_k$ we have
-$$ a_{i_1}a_{i_2}...a_{i_k} | 1 + \displaystyle\sum_{m = 1}^k a_1a_2...a_{i_m - 1}a_{i_m + 1}...a_n $$
-This follows by multiplying the relations $a_{i_m}|1+a_1a_2...a_{i_m - 1}a_{i_m + 1}...a_n$.
-Now I will show uniqueness if $n = 3$. Note that the above observation implies that
-$$a_2a_3 | 1 + a_1a_3 + a_1a_2$$
-I claim that we must in fact have $a_2a_3 = 1 + a_1a_3 + a_1a_2$. This follows since if $a_2a_3 < 1 + a_1a_3 + a_1a_2$, then in fact
-$$2a_2a_3 \leq 1 + a_1a_3 + a_1a_2$$
-which is impossible since $a_2a_3 > a_1a_3$ and $a_2a_3 > a_1a_2$.
-Also $a_1 | 1 + a_2a_3 = 1 + (1 + a_1a_3 + a_1a_2) = 2 + a_1(a_2 + a_3)$.
-Thus $a_1 | 2$ and $a_1 = 2$.
-Now the other conditions on the sequence imply $a_2 | 1 + 2a_3$ and
-$a_3 | 1 + 2a_2$.
-Notice that we must have $a_3 = 1 + 2a_2$ as otherwise $2a_3 \leq 1 + 2a_2$ which would mean $a_3 \leq a_2$.
-Then $a_2 | 1 + 2(1 + 2a_2) = 3 + 4a_2$ so that $a_2 | 3$ and we have
-$a_2 = 3$, $a_3 = 7$. This completes the case $n = 3$.
-Now I will prove uniqueness for $n = 4$. This proof will be more complicated than for $n = 3$ as there are many more possibilities to eliminate.
-By the observation made at the beginning, we have that the case $k = 3$ implies that
-$$a_2a_3a_4 | 1 + a_1a_2a_3 + a_1a_2a_4 + a_1a_3a_4$$
-Now since $a_2a_3a_4$ is strictly greater than all of the summands on the other side, we have that $$3a_2a_3a_4 > 1 + a_1a_2a_3 + a_1a_2a_4 + a_1a_3a_4$$
-Thus we have two possibilities
-$$2a_2a_3a_4 = 1 + a_1a_2a_3 + a_1a_2a_4 + a_1a_3a_4$$
-or
-$$a_2a_3a_4 = 1 + a_1a_2a_3 + a_1a_2a_4 + a_1a_3a_4$$
-I will proceed to eliminate the first possibility.
-Note that in the first possibility, we cannot have that $a_1$ is even.
-Additionally we see that
-$$a_1|1 + a_2a_3a_4 = (2 + 1 + a_1a_2a_3 + a_1a_2a_4 + a_1a_3a_4)/2$$
-as $a_1$ is odd we get
-$$a_1|3 + a_1a_2a_3 + a_1a_2a_4 + a_1a_3a_4$$
-so that $a_1|3$ and $a_1 = 3$. Using this information we see that
-$$2a_2a_3a_4 = 1 + 3a_2a_3 + 3a_2a_4 + 3a_3a_4 < 1 + 9a_3a_4$$
-consequently we obtain that $a_2 \leq 9/2$ so $a_2 = 4$.
-Plugging this back in we see that
-$$8a_3a_4 = 1 + 12a_3 + 12a_4 + 3a_3a_4$$
-or rearranging
-$$a_4 = \frac{12a_3 + 1}{5a_3 - 12}$$
-Combining this with $a_3 \geq 5$ we see that $a_4 \leq 61/13 < 6$, a contradiction. This proves that in fact we must have
-$$a_2a_3a_4 = 1 + a_1a_2a_3 + a_1a_2a_4 + a_1a_3a_4$$
-Now, we use the fact that $a_1 | 1 + a_2a_3a_4$ to conclude that
-$$a_1 | 2 + a_1a_2a_3 + a_1a_2a_4 + a_1a_3a_4$$
-and so $a_1 = 2$. Again we plug this is to obtain
-$$a_2a_3a_4 = 1 + 2a_2a_3 + 2a_2a_4 + 2a_3a_4 < 1 + 6a_3a_4$$
-So we have $a_2 \leq 6$. Since $a_2$ and $a_1$ are relatively prime we
-get that $a_2 = 3$ or $a_2 = 5$.
-Suppose that $a_2 = 5$ and plug this in to get
-$$5a_3a_4 = 1 + 10a_3 + 10a_4 + 2a_3a_4$$
-This implies that $$a_4 = \frac{10a_3 + 1}{3a_3 - 10}$$
-Combining this with the fact that $a_3 \geq 7$ ($\gcd(a_1,a_3) = 1$)
-we see that $a_4 \leq 71/11 < 7$, a contradiction.
-Thus we must have $a_2 = 3$. Plugging this in we get
-$$3a_3a_4 = 1 + 6a_3 + 6a_4 + 2a_3a_4$$
-Thus we have $$a_4 = \frac{6a_3 + 1}{a_3 - 6}$$ which implies $a_3 \geq 7$.
-If $a_3 > 7$, then $a_3$ must be at least 17 ($a_3$ is relatively prime with $2$ and $3$, and $11$ and $13$ result in $a_4$ not being an integer).
-But if $a_3 \geq 17$, then $a_4 \leq 103/11 < 10$, a contradiction.
-Thus $a_3 = 7$ and $a_4 = 43$. This completes the proof of uniqueness if $n = 4$.
-TITLE: Dividing a unit square into rectangles
-QUESTION [15 upvotes]: I've been given this task:
-
-A unit square is cut into rectangles. Each of them is coloured by either yellow or blue and inside it a number is written. If the color of the rectangle is blue then its number is equal to rectangle’s width divided by its height. If the color is yellow, the number is rectangle’s height divided by its width. Let $x$ be the sum of the numbers in all rectangles. Assuming the blue area is equal to the yellow one, what is the smallest possible $x$?
-
-I've come up with the solution below: I've simply split the unit square in half and assigned the colors. The reasoning behind that is that I want to have the blue side as high as possible (to make $x$ as low as possible) and the yellow side as wide as possible (for the same reason). I didn't divide the square into rectangles with infinitely small height or width, because no matter how small they are, they eventually add up and form the two big rectangles that are on my picture.
-I feel my solution is wrong though, because it is stupidly easy (you have to admit, that often means it's wrong). Is there anything I'm missing here?
-
-REPLY [2 votes]: Here is the full solution. The answer is, indeed, $5/2$. An example was already presented by the OP. Now we need to prove the inequality.
-First of all we notice that for any color (blue or yellow) the sum of height/width (or width/height) ratios is always at least $1/2$.
-Indeed, since all dimensions do not exceed $1$, we have (e.g., for blue color)
-$$
-\sum \frac{w_i}{h_i} \geqslant \sum w_i h_i = \frac 12 \,.
-$$
-as the final sum is the total area of all blue rectangles.
-Second, we observe that either the blue rectangles connect the left and the right sides of the square, or the yellow rectangles connect the top and the bottom sides. We leave that as an exercise for the readers :) (Actually, as you will see below, it would suffice to show that either the sum of all blue widths or the sum of all yellow heights is at least $1$.)
-Without loss of generality, assume that the blue rectangles connect the lateral sides of the large square. Then we intend to prove that
-$$
-\sum \frac{w_i}{h_i} \geqslant 2 \,,
-$$
-where the summation is done over the blue rectangles. Combining that with the inequality $\sum h_i/w_i \geqslant 1/2$ for the yellow rectangles we will have the required result, namely that the overall sum is always at least $5/2$.
-Since the projections of the blue rectangles onto the bottom side must cover it completely, we have $\sum w_i \geqslant 1$. We also have $\sum w_ih_i = 1/2$. Now all we need is the following fact.
-Lemma. Two finite sequences of positive numbers $\{w_i\}$ and $\{h_i\}$, i = $1$, ... , $n$ are such that
-$$
-\sum w_i = W, \qquad \sum w_ih_i = S \,.
-$$
-Then
-$$
-\sum \frac{w_i}{h_i} \geqslant \frac{W^2}S \,.
-$$
-Proof. We will use the well-known Jensen's inequality (which follows from the geometric convexity of the area above the graph of any convex function) for the function $f(x) = 1/x$. That gives us
-$$
-\sum \frac{w_i}W f(h_i) \geqslant f \left( \sum \frac{w_i}W h_i \right) \,.
-$$
-In other words
-$$
-\frac1W \sum \frac{w_i}{h_i} \geqslant \frac1{\sum \frac{w_i}W h_i } =
-\frac{W}{\sum w_i h_i} = \frac WS \,.
-$$
-and the required inequality immediately follows. $\square$
-Applying this lemma to our case where $W \geqslant 1$ and $S = 1/2$ completes our solution.
-TITLE: How to show that $\mathfrak{sl}_n(\mathbb{R})$ and $\mathfrak{sl}_n(\mathbb{C})$ are simple?
-QUESTION [5 upvotes]: We defined a Lie algebra to be simple if it has no proper Lie ideals and is not $k$ (the ground field). We have the proposition that $\mathfrak{sl}_n(\mathbb{R})$ and $\mathfrak{sl}_n(\mathbb{C})$ are simple $\mathbb{R}$-Lie algebras, and $\mathfrak{sl}_n(\mathbb{C})$ is also simple as a $\mathbb{C}$-Lie algebra. We have shown the proposition for the case $n=2$ using an $\mathfrak{sl}_2$-triple.
-How would I show the statement for general $n \in \mathbb{N}$?
-I suspect there's a way to use induction, but don't see how that would work. Also, I have no idea how I would generalize and apply the idea with the $\mathfrak{sl}_2$-triple to higher dimensions. I can't even just do the case $n=3$. How do I generalize?
-
-REPLY [3 votes]: There is a direct proof that $\mathfrak{sl}(n,K)$ is a simple Lie algebra for any field $K$ of characteristic zero, which just uses Lie brackets of traceless matrices to show that a nontrivial ideal $J$ must be $\mathfrak{sl}(n,K)$ itself, see 6.4 in the book "Naive Lie Theory". This works uniformly for all $n\ge 2$. For a proof using a bit more theory, see Lemma $1.3$ here.
-TITLE: Embedding fields into the complex numbers $\mathbb{C}$.
-QUESTION [7 upvotes]: Let $k$ be a field of characteristic $0$ with $\mathrm{trdeg}_\mathbb{Q}(k)$ at most the cardinality of the continuum. I want to prove the existence of a field homomorphism $k\rightarrow\mathbb{C}$. (I hope this statement is even true, I made it up on my own.)
-Let $S$ be a transcendence base for $k/\mathbb{Q}$, $S'$ one for $\mathbb{C}/\mathbb{Q}$. Let $S\rightarrow S'$ be an injection. The induced map $\mathbb{Q}[S]\rightarrow\mathbb{Q}[S']\rightarrow\mathbb{Q}(S')\rightarrow\mathbb{C}$ is injective, and hence (by the mapping property of the fraction field) induces a map $\mathbb{Q}(S)\rightarrow\mathbb{C}$. But as $k/\mathbb{Q}(S)$ is algebraic, we get an induced map $k\rightarrow\mathbb{C}$.
-Is this proof correct?
-
-REPLY [2 votes]: Your proof seems all right to me.
-As an aside, this method can be used to prove that $\mathbb{C}$ admits infinitely many proper subfields isomorphic to itself: simply choose any non-surjective injection $S'\to S'$, which will induce a non-surjective morphism $\mathbb{Q}(S')\to \mathbb{Q}(S')$ and thus a non-surjective morphism $\mathbb{C}\to \mathbb{C}$.
-TITLE: How can I deduce that $\lim\limits_{x\to0} \frac{\ln(1+x)}x=1$ without Taylor series or L'Hospital's rule?
-QUESTION [5 upvotes]: How can I calculate this limit without Taylor series and L'Hospital's rule?
-$$\lim _{x\to0} \frac{\ln(1+x)}x=1$$
-
-REPLY [7 votes]: If the limit exists, say $L$, then you can state that:
-$$\begin{align}
-L&=\lim_{x\to0}\frac{\ln(1+x)}{x}\\
-\therefore e^L&=e^{\lim_{x\to0}\frac{\ln(1+x)}{x}}\\
-&=\lim_{x\to0}e^{\frac{\ln(1+x)}{x}}\\
-&=\lim_{x\to0}(e^{\ln(1+x)})^\frac{1}{x}\\
-&=\lim_{x\to0}(1+x)^\frac{1}{x}\\
-&=e\\
-\therefore L&=1
-\end{align}$$
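-(An even shorter route, using continuity of $\ln$ instead of $e^x$:
-$$
-\lim_{x\to0}\frac{\ln(1+x)}{x}=\lim_{x\to0}\ln\left((1+x)^{1/x}\right)=\ln\left(\lim_{x\to0}(1+x)^{1/x}\right)=\ln e=1.
-$$
-This sidesteps the assumption that the limit exists, since the limit inside the logarithm is a known one.)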
-TITLE: I need help to advance in the resolution of that limit: $ \lim_{n \to \infty}{\sqrt[n]{\frac{n!}{n^n}}} $
-QUESTION [7 upvotes]: How can I continue this limit computation?
-The limit is:
-$$ \lim_{n \to \infty}{\sqrt[n]{\frac{n!}{n^n}}} $$
-This is what I have done:
-I apply this test: $ \lim_{n \to \infty}{\sqrt[n]{a_n}} = \lim_{n \to \infty}\frac{a_{n+1}}{a_n} $ (valid whenever the limit on the right exists).
-Operating and simplifying I arrive at this point:
-$$ \lim_{n \to \infty}{\frac{n^n}{(n+1)^n}} $$
-Have I done something wrong? Thanks!
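-(For what it's worth, the limit reached above is a standard one:
-$$
-\lim_{n \to \infty}{\frac{n^n}{(n+1)^n}}
-=\lim_{n \to \infty}\frac{1}{\left(1+\frac{1}{n}\right)^{n}}
-=\frac{1}{e},
-$$
-so the ratio test already yields the answer $e^{-1}$, consistent with the Stirling argument below.)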
-
-REPLY [3 votes]: By Stirling approximation as $n\to \infty$ we have
-$$
-\left(\frac{n!}{n^n}\right)^{1/n}\sim \left(\sqrt{2\pi n}\frac{(n/e)^n}{n^n}\right)^{1/n} \sim \frac{1}{e}(2\pi n)^{1/2n} \to e^{-1}.
-$$
-As pointed out by Clement, one should justify why one can take the limit directly inside; the reason is that for any $x< 1
-TITLE: Multiplication of algebraic fraction not giving desired result
-QUESTION [5 upvotes]: I am having a try at solving this:
-
-that is supposed to return:
-
-but I get stuck at:
-
-which can be written as
-
-REPLY [2 votes]: Hint:$$x^3+3x^2y+3xy^2+y^3=(x+y)^3$$<|endoftext|>
-TITLE: Obtaining Fourier series of function without calculating the Fourier coefficients
-QUESTION [6 upvotes]: In this question in one of the answers it's shown how to get from $$f\left ( x \right )=\sum_{n=1}^{\infty}\frac{\sin\left ( nx \right )}{10^{n}}$$
-to $$f\left ( x \right )=\frac{10 \sin x}{101-20\cos x}$$
-I recently came across a function similar to $f$ and I needed to compute its Fourier coefficients. That can be done using contour integration. But then I wondered whether it is possible to go backwards and see that the function is an imaginary or real part of some complex valued function which can be expressed as an infinite sum. Basically the whole process in reverse. I hope that isn't just a trial and error process
-EDIT: I might have not expressed myself clearly. I do know how to get from the geometric progression to the function, it is basic precalc. It would be very elegant to be able to go backwards, that is, given the analytic expression for the function, trace it back to a geometric, or some other series. That could save us from a lot of work involving integration. Functions such as the one I posted above seem to be very fitting for this problem.
-
-REPLY [3 votes]: Yes, this is possible. The way to do it is by analogy: With rational functions (that don't have a pole at $x=0$), one may always use a partial fractions decomposition to rewrite a function as a linear combination of the functions $\frac{1}{1-cx}$ (or powers thereof) and a polynomial. Then we use $\frac{1}{1-x}=1+x+x^2+x^3+\ldots$ (for $|x|<1$) to get a sum. So, for instance, one can go from a rational function back to a series with rational functions like this: $$\frac{1}{x^2+1}=\frac{1/2}{1-ix}+\frac{1/2}{1+ix}=\frac{1}2\sum_{n=0}^{\infty}\left((ix)^n + (-ix)^n\right)=\sum_{n=0}^{\infty}(-1)^nx^{2n}.$$
-This is heavily related to the idea of generating functions - in particular, the coefficient of $x^n$ in a rational function will always obey some homogeneous linear recurrence relation after a point.
-Now, let's say we've been given the ratio of two trigonometric polynomials - that is a rational function in $e^{ix}$ such as
-$$f(x)=\frac{10\sin(x)}{101-20\cos(x)}=\frac{5i(e^{-ix}-e^{ix})}{101-10(e^{ix}+e^{-ix})}=\frac{5i(1-e^{2ix})}{101e^{ix}-10(e^{2ix}+1)}.$$
-We then continue rearranging into a convenient form, where the denominator has constant term $1$:
-$$f(x)=\frac{-\frac{1}2i(1-e^{2ix})}{e^{2ix}-\frac{101}{10}e^{ix}+1}$$
-Then, to begin partial fraction decomposition, we factor the denominator into terms of the form $(1-ce^{ix})$ which gives $e^{2ix}-\frac{101}{10}e^{ix}+1=(1-10e^{ix})(1-\frac{1}{10}e^{ix})$. Then we want to find the constants so that
-$$f(x)=c_0+\frac{c_1}{1-\frac{1}{10}e^{ix}}+\frac{c_2}{1-10e^{ix}}$$
-which can be done by putting everything over one numerator and equating the numerator of this with that of our previous expression (other techniques for finding the coefficients of a partial fraction expansion work here too, though). One finds that the form of $f(x)$ is
-$$f(x)=\frac{i}2+\frac{-i/2}{1-\frac{1}{10}e^{ix}}+\frac{-i/2}{1-10e^{ix}}.$$
-Now, we proceed with a little care, since naively expanding these terms gives only terms of the form $e^{ix}$ (so we wouldn't recover real valued numbers in the sum) and also wants to use the series $1+10e^{ix}+100e^{2ix}+1000e^{3ix}+\ldots$ which obviously doesn't converge. To get around this, notice that we can actually expand $\frac{1}{1-cx}$ using two identities:
-$$\frac{1}{1-x}=1+x+x^2+x^3+\ldots$$
-$$1- \frac{1}{1-x}=1+\frac{1}x+\frac{1}{x^2}+\frac{1}{x^3}+\ldots$$
-where the latter is valid when $|x|>1$ and the former when $|x|<1$. Given that $|e^{ix}|=1$, we expand the term $\frac{-i/2}{1-\frac{1}{10}e^{ix}}$ using the first identity and the term $\frac{i}2-\frac{i/2}{1-10e^{ix}}$ using the latter to get:
-$$f(x)=\frac{-i}2\left(1+\frac{1}{10}e^{ix}+\frac{1}{100}e^{2ix}+\frac{1}{1000}e^{3ix}+\ldots\right)+\frac{i}2\left(1+\frac{1}{10}e^{-ix}+\frac{1}{100}e^{-2ix}+\frac{1}{1000}e^{-3ix}+\ldots\right)$$
-and putting them together into one sum:
-$$f(x)=\sum_{n=0}^{\infty}\frac{-i}{2}\cdot \frac{e^{nix}}{10^n}+\frac{i}{2}\cdot \frac{e^{-nix}}{10^n}=\sum_{n=0}^{\infty}\frac{\sin(nx)}{10^n}.$$
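-(A quick numerical confirmation of the round trip, sketched in Python; truncating at $60$ terms is arbitrary but far beyond double precision here, since the terms decay like $10^{-n}$.)
-
-import numpy as np
-
-x = np.linspace(0.1, 3.0, 7)
-closed = 10 * np.sin(x) / (101 - 20 * np.cos(x))
-series = sum(np.sin(n * x) / 10.0**n for n in range(1, 60))
-print(np.max(np.abs(closed - series)))   # ~1e-16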
-
-One might note that repeated factors need special handling since they can result in non-linear denominators in the partial fractions decomposition - but the identity like:
-$$\frac{1}{(1-x)^n}={n-1 \choose n-1}+{n \choose n-1}x+{n+1 \choose n-1}x^2+{n+2 \choose n-1}x^3+\ldots$$
-where, for fixed $n$, the expression ${n+c-1\choose n-1}$ is simply a polynomial in $c$ of degree $n-1$ - so you might get sums like $\sum_{n=0}^{\infty}\frac{n\sin(nx)}{10^n}$ at the end.
-Another restriction is that $(1-ce^{ix})$ shouldn't be a factor of the denominator for $|c|=1$ since then we can use neither expansion to get a convergent power series. I don't see any easy patch for this, but it holds for any function with no poles.
-TITLE: Why $e^{\pi}-\pi \approx 20$, and $e^{2\pi}-24 \approx 2^9$?
-QUESTION [18 upvotes]: This was inspired by this post. Let $q = e^{2\pi\,i\tau}$. Then,
-$$\alpha(\tau) = \left(\frac{\eta(\tau)}{\eta(2\tau)}\right)^{24} = \frac{1}{q} - 24 + 276q - 2048q^2 + 11202q^3 - 49152q^4+ \cdots\tag1$$
-where $\eta(\tau)$ is the Dedekind eta function.
-For example, let $\tau =\sqrt{-1}$ so $\alpha(\tau) = 2^9 =512 $ and $(1)$ "explains" why,
-$$512 \approx e^{2\pi\sqrt1}-24$$
-$$512(1+\sqrt2)^3 \approx e^{2\pi\sqrt2}-24$$
-and so on.
-
-
-Q: Let $q = e^{-\pi}$. Can we find a relation,
- $$\pi = \frac{1}{q} - 20 +c_1 q + c_2 q^2 +c_3 q^3 +\cdots\tag2$$
- where the $c_i$ are well-defined integers or rationals such that $(2)$ "explains" why $\pi \approx e^{\pi}-20$?
-
-Update: For example, we have the rather curious,
-$$\beta_1 := \frac{1}{q} - 20 +\tfrac{1}{48}q - \tfrac{1}{300}q^3 -\tfrac{1}{972}q^5 +\tfrac{1}{2160}q^7+\tfrac{1}{\color{brown}{2841}}q^9-\tfrac{1}{\color{brown}{2369}}q^{11}-\cdots\tag3$$
-$$\beta_2 := \frac{1}{q} - 20 +\tfrac{1}{48}q - \tfrac{1}{300}q^3 -\tfrac{1}{972}q^5 +\tfrac{1}{2160}q^7+\tfrac{1}{\color{brown}{2842}}q^9-\tfrac{1}{\color{brown}{2810}}q^{11}-\cdots\tag4$$
-With $q = e^{-\pi}$,
-$$\pi \approx \beta_1 = e^\pi -20 +\tfrac{1}{48}q-\dots\; (\text{differ:}\; {-4}\times 10^{-22})\\
-\pi \approx \beta_2 = e^\pi -20 +\tfrac{1}{48}q-\dots\; (\text{differ:}\; {-3}\times 10^{-22})$$
-However, there seems to be an indefinite number of formulas, where the choice of a coefficient (say, $2841$ or $2842$) determines an ever-branching tree of formulas. But there might be a subset where the coefficients have a nice closed-form.
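-(The near-misses themselves are quick to reproduce; a Python sketch:)
-
-import math
-
-q = math.exp(-2 * math.pi)
-print(math.exp(math.pi) - math.pi)         # 19.99909997... (the "pi" near-miss)
-print(1 / q - 24 + 276 * q - 2048 * q**2)  # 511.99994..., converging to 512 as in (1)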
-
-REPLY [6 votes]: This does not follow your proposal exactly but it is built on series with rational terms only.
-From expansions
-$$
-e^\pi=\sum_{k=0}^\infty\frac{3\left(e^{3\pi}-\left(-1\right)^ke^{-3\pi}\right)\Gamma\left(\frac{k}{2}+3i\right)\Gamma\left(\frac{k}{2}-3i\right)}{2 \pi k!}=\sum_{k=0}^\infty a(k)
-$$
-http://oeis.org/A166748
-and
-$
-\pi=3+\sum_{k=1}^\infty\frac{1}{4·16^k}\left(-\frac{40}{8k+1}+\frac{56}{8k+2}+\frac{28}{8k+3}+\frac{48}{8k+4}+\frac{10}{8k+5}+\frac{10}{8k+6}-\frac{7}{8k+7}\right)=3+\sum_{k=1}^\infty b(k)
-$
-https://oeis.org/wiki/User:Jaume_Oliver_Lafont/Constants#Series_involving_convergents_to_Pi
-the following representation is obtained
-$$e^\pi-\pi-20 = \frac{1201757159}{10580215726080}+\sum_{k=16}^\infty a(k) -\sum_{k=2}^\infty b(k) \approx -0.00090002 \approx -\left(\frac{3}{100}\right)^2$$
-Cancellation comes from the first three decimal digits:
-$$
-\sum_{k=0}^{15} a(k) = \frac{991388824265291953}{42849873690624000}\approx 23.136(2)
-$$
-$$
-b(1) = \frac{49087}{360360} \approx .136(1)
-$$
-[EDIT] Three digit cancellation may also be obtained by taking 14 terms from the series for $e^\pi$ and 3 terms from the simpler
-$$\pi-3=\sum_{k=1}^{\infty}\frac{3}{(1+k)(1+2k)(1+4k)}$$
-(Lehmer, http://matwbn.icm.edu.pl/ksiazki/aa/aa27/aa27121.pdf, pag 139-140)
-[EDIT]
-Another expression with a "wrong digit" that leads to higher precision when corrected is given by
-$$ e - \gamma-\log\left(\frac{17}{2}\right) \approx 0.0010000000612416$$
-$$ e \approx \gamma+\log\left(\frac{17}{2}\right) +\frac{1}{10^3} +6.12416\cdot10^{-11}$$
-Why is $e$ close to $H_8$, closer to $H_8\left(1+\frac{1}{80^2}\right)$ and even closer to $\gamma+\log\left(\frac{17}{2}\right) +\frac{1}{10^3}$?
-[EDIT] $$\sum_{k=0}^\infty \frac{2400}{(4 k+9) (4 k+15)} = 100 \pi-\frac{58480}{231} \approx 60.99909$$
-TITLE: Is $gnu(2304)$ known?
-QUESTION [6 upvotes]: I wonder whether the number of groups of order $2304=2^8\times 3^2$ is known. GAP exited because it ran out of memory. $gnu(2304)$ must be greater than $1,000,000$ because $gnu(768)=1,090,235$ and $768=2^8\times 3$ divides $2304=2^8\times 3^2$.
-
-Is $gnu(2304)$ known, or at least a tight upper bound?
-What is the smallest number $n$ such that it is infeasible to calculate $gnu(n)$? I think $gnu(2048)$ will be known in at most ten years, probably much earlier.
-Could $n=3072=2^{10}\times 3$ be the smallest too-difficult case?
-
-REPLY [13 votes]: There are indeed $112\,184+1\,953+15\,641\,993 = 15\,756\,130$ groups of order 2304, computed using an algorithm developed by Bettina Eick and myself. As Alexander Konovalov already kindly pointed out, you can find this number in our paper "The construction of finite solvable groups revisited", J. Algebra 408 (2014), 166–182, also available on the arXiv.
-This is part of an on-going project to catalogue all groups up to order 10,000 (with a few orders excepted, e.g. multiples of 1024, as there are simply too many of these). So in particular, we skip groups of order 3072. There are already $49\,487\,365\,422$ groups of order 1024, and I expect the number of groups of order 3072 to be several orders of magnitude larger.
-To maybe slightly motivate why I think so, consider the proportion of the number of (isomorphism classes of) groups of order $2^n$ vs. $3\cdot 2^n$, computed here using GAP:
-gap> List([0..9], n -> NrSmallGroups(3*2^n)/NrSmallGroups(2^n)*1.0);
-[ 1., 2., 2.5, 3., 3.71429, 4.52941, 5.77903, 8.66366, 19.4366, 38.9397 ]
-
-If you plot $n$ against $gnu(3\cdot 2^n)/gnu(2^n)$, you'll see a roughly exponential-looking curve. Of course that is a purely empirical observation, not a proof of anything.
-TITLE: When is the limit of a sum equal to the sum of limits?
-QUESTION [12 upvotes]: I was trying to solve a problem and got stuck at the following step:
-Suppose $n \to \infty$.
-$$\lim \limits_{n \to \infty} \frac{n^3}{n^3} = 1$$
-Let us rewrite $n^3=n \cdot n^2$ as $n^2 + n^2 + n^2 + n^2 \dots +n^2$, $n$ times.
-Now we have
-$$\lim \limits_{n \to \infty} \frac{n^3}{n^3} = \lim \limits_{n \to \infty} \frac {n^2 + n^2 + n^2 + n^2 + n^2 \dots +n^2}{n^3} $$
-As far as I understand, we can always rewrite the limit of a sum as the sum of limits ...
-$$\dots = \lim \limits_{n \to \infty} \left(\frac{n^2}{n^3} + \frac{n^2}{n^3} + \dots + \frac{n^2}{n^3}\right)$$
-...but we can only let ${n \to \infty}$ and calculate the limit if all of the individual limits are of defined form (is this correct?). That would be the case here, so we have:
-$\dots = \lim \limits_{n \to \infty} \left(\frac{1}{n} + \frac{1}{n} + \dots + \frac{1}{n}\right) = $ [letting $n \to \infty$] $= 0 + 0 + \dots + 0 = 0$
-and the results we get are not the same.
-Where did I go wrong?
-
-REPLY [4 votes]: The problem you have described is common and there are three possible scenarios to cover:
-
-Number of terms is finite and independent of $n$: The limit of a sum is equal to sum of limits of terms provided each term has a limit.
-Number of terms is infinite: Some sort of uniform convergence of the infinite series is required and under suitable conditions the limit of a sum is equal to the sum of limits of terms.
-Number of terms is dependent on $n$: This is the case which applies to the question at hand. The problem is difficult compared to the previous two cases and a partial solution is provided by the Monotone Convergence Theorem. Sadly the theorem does not apply to your specific example. But it famously applies to the binomial expansion of $(1+n^{-1})^n$ and gives the result $$e=1+1+\frac{1}{2!}+\frac{1}{3!}+\dots$$
-TITLE: Confusion about contour integration of constant function: intuition vs. Residue Theorem
-QUESTION [9 upvotes]: Let's say we have the holomorphic function
-$$f(z) = 1.$$
-Because $f(z)$ has no poles, according to the Residue Theorem we have
-$$\oint_\gamma f(z)\,dz = 0$$
-for any closed counterclockwise path $\gamma$.
-But let's say that $\gamma$ is a circle around the origin of radius $r$. Then shouldn't we have
-$$\oint_\gamma f(z)\,dz = 2 \pi r$$
-because
-$$\oint_\gamma f(z)\,dz = \oint_\gamma dz = \text{arclength}\,\gamma$$
-?
-I'm pretty sure the result using the Residue Theorem is correct, so then my reasoning must be incorrect for the second way of looking at it.
-Where is my reasoning incorrect?
-
-REPLY [12 votes]: No, because $dz$ does not represent arclength - rather, $|dz|$ does. So the correct statement would be
-$$\oint_{\gamma} dz = 0, \quad\quad \oint_{\gamma} |dz| = 2\pi r$$
-Remember, you can always go back to the Riemann sum; when defining the integral $dz$, you sum things that look like $\Delta z$. If you move in a circular path, you don't travel anywhere - hence, the sum of $\Delta z$ is zero.
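-(The same point, demonstrated numerically on the unit circle — a Riemann-sum sketch in Python:)
-
-import numpy as np
-
-t = np.linspace(0, 2 * np.pi, 200001)
-z = np.exp(1j * t)         # unit circle, r = 1
-dz = np.diff(z)
-print(abs(dz.sum()))       # ~1e-13: the sum of dz closes up to 0
-print(np.abs(dz).sum())    # ~6.2831...: the sum of |dz| is the arclength 2*pi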
-
-REPLY [2 votes]: If $f$ is the constant function $1$, the integral
-$$\int_\gamma f(z)\,dz$$
-does not give you the length of the arc $\gamma$. That would be
-$$\int_\gamma f(z)\,d|z|,$$
-where $d|z|$ is integration with respect to arc-length.
-TITLE: Smallest $n$-digit number $x$ with cyclic permutations multiples of $1989$
-QUESTION [8 upvotes]: Suppose $x=a_1...a_n$, where $a_1...a_n$ are the digits in decimal of $x$ and $x$ is a positive integer. We define $x_1=x$, $x_2=a_na_1...a_{n-1}$, and so on until $x_n=a_2...a_na_1$. Find the smallest $n$ such that each of $x_1,...,x_n$ is divisible by $1989$. Any zero digits are ignored when at the front of a number, e.g. if $x_1 = 1240$ then $x_2 = 124$.
-Defining $T_n = 11\ldots1$ ($n$ ones) and $S_n = \sum 10^{n-k}a_k$, it is easy to see that $1989k=T_nS_n$ is a necessary condition, which is useful for ruling out small $n$.
-But doing this I found $n \geq 6$, and the arithmetic becomes very difficult to do with pen and paper.
-I'm looking for a more algebraic way of solving this question. Can anyone help me?
-
-REPLY [3 votes]: This problem can be solved surprisingly easily, building upon Steven Stadnicki's hint, as follows.
-I write $a=a_1a_2\cdots a_{n-1}=\sum_{i=1}^{n-1}a_i10^{n-1-i}$ and $b=a_n$.
-Then $x_1=10a+b$ and $x_2=10^{n-1}b+a$. These must both be divisible by all of $9,13$ and $17$. Consequently the number $10x_2-x_1=(10^n-1)b$ must similarly be divisible by all three of these factors. As Steve explained, divisibility by nine is automatic. The key observation is that as $b$ is constrained to be in the range $(0)1,\ldots,9$ (we ignore zero for a moment) we can conclude that $10^n-1$ must be divisible by both $13$ and $17$.
-The order of $10$ modulo $13$ is six (ten is a quadratic residue modulo thirteen), but modulo $17$ it has the full order $16$ (in other words ten is a primitive root modulo seventeen). The least common multiple of $6$ and $16$ is $48$, so we conclude that $n$ must be a multiple of $48$.
-But, the above calculation shows that any 48-digit multiple of $1989$ actually has this property! Start with such an $x_1$. The calculation we already did shows that $10x_2-x_1$ is necessarily divisible by $1989$. Therefore so is $10x_2$, and hence $x_2$ as well, since $\gcd(10,1989)=1$. Rinse. Repeat.
-So to find the smallest such $x$ we need
-$$
-m=\left\lceil\frac{10^{47}}{1989}\right\rceil=
-50276520864756158873805932629462041226747110,
-$$
-and the answer is
-$$
-x=1989m=100000000000000000000000000000000000000000001790=10^{47}+1790.
-$$
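-(Python, with its native big integers, confirms this in a few lines — a sketch:)
-
-x = 10**47 + 1790
-assert x % 1989 == 0
-s = str(x)                                  # the 48 digits of x
-shifts = [int(s[i:] + s[:i]) for i in range(len(s))]
-print(all(v % 1989 == 0 for v in shifts))   # True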
-
-The reason we were able to ignore the case $a_n=b=0$ when deriving the condition $48\mid n$ is that the number must have some non-zero digits. So we can simply rotate its digits cyclically to have a non-zero digit in the least significant position.
-
-I did check with Mathematica that all the 48 cyclic shifts of this number are divisible by $1989$ :-) It is not unthinkable to actually do this with pen & paper work. You see, most of those cyclic shifts are of the form $17901\cdot10^\ell$ and $17901=1989\cdot9.$
-TITLE: Infinity-to-one function
-QUESTION [11 upvotes]: Are there continuous functions $f:I\to S^2$ such that $f^{-1}(\{x\})$ is infinite for every $x\in S^2$?
-Here, $I=[0,1]$ and $S^2$ is the unit sphere.
-I have no idea how to do this.
-Note: This is not homework! The question came up when I was thinking about something else.
-
-REPLY [11 votes]: Consider a space filling curve $\gamma: I \rightarrow I^2$, the projection $q: I^2 \rightarrow S^2$ given by the quotient topology on the square that furnishes the sphere, and the projection $\pi: I^2 \rightarrow I$ on the first coordinate.
-The map $q \circ \gamma \circ \pi \circ \gamma$ satisfies what you want: every fiber of $\pi \circ \gamma$ is infinite, since it contains $\gamma^{-1}(\{t\}\times I)$ for the corresponding $t$, and $q \circ \gamma$ is surjective, so every point of $S^2$ has infinitely many preimages.
-TITLE: why don't standard analysis texts relax the injectivity requirement of the multivariable change of variables theorem?
-QUESTION [17 upvotes]: In standard multivariable analysis texts, the change of variables for multivariable integration in Euclidean space is almost always stated for a $C^1$ diffeomorphism $\phi$, giving the familiar equation (for continuous $f$, say)
-$$\int_{\phi(U)}f=\int_U(f\circ\phi)\cdot|\det D\phi|$$
-Of course, this result by itself is not very useful in practice because a diffeomorphism is usually hard to come by. The better advanced calculus and multivariable analysis texts explain explicitly how the hypothesis that $\phi$ is injective with $\det D\phi\neq0$ can be relaxed to handle problems along sets of measure zero -- a result which is necessary for almost all practical applications of the theorem, starting with polar coordinates.
-Despite offering this slight generalization, very few of the standard texts state that the situation can be improved further still: there is an analogous theorem for arbitrary $C^1$ mappings $\phi$, not just those that are injective everywhere except on a set of measure zero. We simply account for how many times a point in the image gets hit by $\phi$, giving
-$$\int_{\phi(U)}f\cdot\,\text{card}(\phi^{-1})=\int_U(f\circ\phi)\cdot|\det D\phi|$$
-where $\text{card}(\phi^{-1})$ measures the cardinality of $\phi^{-1}(x)$.
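-A one-line illustration with the non-injective map $\phi(x)=x^{2}$ on $U=(-1,1)$: every point of $\phi(U)=[0,1)$ except $0$ has exactly two preimages, and indeed for $f\equiv 1$
-$$
-\int_{0}^{1} 2\,dy \;=\; 2 \;=\; \int_{-1}^{1}|2x|\,dx,
-$$
-in agreement with the formula above.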
-I think this theorem is a lot more natural and satisfying than -- and surely just as heuristically plausible as -- the first. For one thing, it removes a huge restriction, bringing the theorem closer to the standard one-variable change of variables for which injectivity is not required (though of course the one-variable theorem is really a theorem about differential forms). In particular, it emphasizes that regularity is what's important, not injectivity. For another thing, it's not a big step from here to the geometric intuition for degree theory or for the "area formula" in geometric measure theory. (Indeed, the factor $\text{card}(\phi^{-1})$ is a special case of what old references in geometric measure theory called the "multiplicity function" or the "Banach indicatrix.") It's also used in multivariate probability to write down densities of non-injective transformations of random variables. And last, it's in the spirit of modern approaches to gesture at the most general possible result. The traditional statement is really just a special case; injectivity only becomes essential when we define the integral over a manifold (rather than a parametrized manifold), which we want to be independent of parametrization. I think teaching the more general result would greatly clarify these matters, which are a constant source of confusion to beginners.
-Yet many of the standard multivariable analysis texts (Spivak, Rudin PMA and RCA, Folland, Loomis/Sternberg, Munkres, Duistermaat/Kolk, Burkill) don't mention this result, even in passing, as far as I can tell. The impression a typical undergraduate gets is that the traditional statement is the final word on the matter, not to be improved upon; after all, the possibility of improvement isn't even hinted at, even when the multivariable result is compared to the single variable result. So I've had to hunt for discussions of the extension; I've found it here:
-
-Zorich, Mathematical Analysis II (page 150, exercise 9, for the Riemann integral)
-Kuttler, Modern Analysis (page 258, for the Lebesgue integral; used later in a discussion of probabilities densities)
-Csikós, Differential Geometry (page 72, for the Lebesgue integral)
-Ciarlet, Linear and Nonlinear Functional Analysis with Applications (page 34, for the Lebesgue integral)
-Bogachev, Measure Theory I (page 381, for the Lebesgue integral)
-the Planet Math page on multivariable change of variables (Theorem 2)
-
-I'm also confident I've seen it in some multivariable probability books, but I can't remember which. But none of these is a standard textbook, except perhaps for Zorich.
-My question: are there standard analysis references with nice discussions of this extension of the more familiar result? Probability references are fine, but I'm especially curious whether I've missed some definitive treatment in one of the classic analysis texts.
-Also feel free to speculate, or explain, why so few texts mention it. (Is there really any good reason for not mentioning it, when failing to do so implicitly trains students to think injectivity is an essential ingredient for this kind of result?) I'm hoping there's a more interesting answer than "most authors don't mention it because the texts they learned from didn't either" or "even an extra sentence alluding to the possibility of a more general result is too much to ask for since the traditional theorem is hard enough to prove on its own."
-(Cross-posted on MSE.)
-
-REPLY [9 votes]: The better advanced calculus and multivariable analysis texts explain explicitly how the hypothesis that $\varphi$ is injective with $\det D \varphi \neq 0$ can be relaxed to handle problems along sets of measure zero -- a result which is necessary for almost all practical applications of the theorem, starting with polar coordinates.
-
-I speculate that most authors don't go beyond the injective immersion condition because it is sufficient for most practical applications of the theorem, polar and spherical coordinates being among the most important examples. The difficulty of formulating and proving the change of variables formula for integration is out of proportion to the rest of the content of an advanced calculus course, so if you are writing a textbook on the subject then there is a strong temptation not to stray too far from what you need to handle the basic examples and applications. This also explains why some authors are happy to live with the assumption that $\phi$ is a diffeomorphism - if your goal is just to prove Stokes' theorem, then why make it harder?
-It wouldn't hurt, I suppose, to allude to more general versions of the theorem in a parenthetical remark or an extended exercise, but I don't think the stakes are very high. Undergraduates are usually accustomed to the fact that they aren't getting the most general possible theorems in their classes.
-
-REPLY [3 votes]: As Paul Siegel says in his answer: The usual formula is sufficient for most practical applications. I would go further and say:
-
-The plain form of the change of variables theorem makes it much more clear that the main motivation for this theorem is just to compute integrals.
-
-The change of variables theorem is really a workhorse-theorem to work with and (as far as I see) is not something that is structurally important. Check Christian Blatter's answer at MSE to see a mathematician with years of experience telling you how often he really used the non-injective form.
-Also, the plain form really shows that the Jacobian is the crucial thing here and also the proofs of the plain form (at least the ones I know) makes that pretty clear. However, if you want to prove the more general form, I don't know anything else than to start from the plain result and add on top of that.
-And my last point: As I said above, the change of variables theorem is a workhorse to do something, namely, to compute integrals. If you would ever calculate an integral of the form
-$$\int_{\phi(U)}f\cdot\,\text{card}(\phi^{-1})=\int_U(f\circ\phi)\cdot|\det D\phi|$$
-what would you do? You would check for each point how often it is reached by $\phi$ and would patch the results together (neglecting the issues with null-sets) using the plain form on each patch. This is something that a student could come up with by himself. Hence, the general form is not at all helpful to do the very thing for which the plain form of the theorem is intended. Balancing how complicated the proof of the more general result is and how intuitive and (most importantly) practically not so useful the result is, it seems clear what you should do when writing a textbook.
-TITLE: Near-integer solutions to $y=x(1+2\sqrt{2})$ given by sequence - why?
-QUESTION [9 upvotes]: EDIT: I've asked the same basic question in its more progressed state. If that one gets answered, I'll probably accept the answer given below (although I'm uncertain of whether or not this is the community standard; if you know, please let me know).
-
-I've found a sequence $x_{i+1}=\|x_i(2+k)\|$, where $k=1+2\sqrt{2}$ and where $||a||$ rounds $a$ to the nearest integer, that seems to minimize the distance $P_n$ to an integer solution (for $x$ and $y$) of the equation in the title. $P_n$ is more rigorously defined as the absolute value of $\|y_n\|-y_n$, where $y_n=nk$ for $n \in \mathbb{N}$.
-Starting with $x_0=1$ the sequence becomes $x_1=\|2+k\|=\|5.83\ldots\|=6,x_2=\|6(2+k)\|=\|34.97\ldots\|=35,204,1189,6930,\ldots$ where $P_i$ very quickly becomes small.
-In Fig. 1 below, I've plotted $P_n$ for $n$. In Fig. 2 only low values of $P$ is shown and it seems that the $P_i$'s from the sequence (in red) are the lowest value up until that $n$ (I've checked this up to $n=10^6$).
-My main question is this: Why does this sequence give the near-integer solutions? (Edit for further clarification:) And why are these the nearest up until that point (see Fig. 2)? Can it be proven, starting from the original equation, that the elements in this sequence give the best approximations to integers up until that point, i.e. that its error dies off faster than that of all other possible sequences?
- Fig. 1
- Fig. 2
-Further questions:
-I've also "found" something that looks like an attractor, see Fig. 3 below. Can someone explain what is going on here? I haven't really studied dynamical systems, so if you could dumb it down a bit, I'd be grateful.
-Also, as seen in Fig. 1, there's a high degree of regularity here, with all the seemingly straight lines. If I remove the absolute value-part of the def. of $P_n$ the cross-pattern in Fig. 1 becomes (seemingly) straight, parallel declining lines. Why do these lines form? Could it be explained via some modular arithmetic?
- Fig. 3
-Thanks!
-EDIT: Changed "Bonus" to "Further", as I would also really like to hear answers to these questions. Should I post a new question with these, so I could accept an answer there that answers just those?
-
-REPLY [5 votes]: $$ \left( 3 + \sqrt 8 \right)^n + \left( 3 - \sqrt 8 \right)^n $$
-is an integer, while
-$$ 3 - \sqrt 8 = \frac{1}{3 + \sqrt 8} $$ has absolute value smaller than one.
-My sequence would be
-$$ x_{n+2} = 6 x_{n+1} - x_n $$
-with $x_0 = 2,$ $ x_1 = 6,$ $x_2 = 34$
-I think I see what you did. Instead of taking powers $\left( 3 + \sqrt 8 \right)^n,$ you took the nearest integer and multiplied by it again. So you get $1,6,35,204,$ but once the error is small enough you also settle into the necessary $ x_{n+2} = 6 x_{n+1} - x_n .$ That is, you have
-$ x_1 = 6,$ $x_2 = 35,$ $x_3 = 204,$ $x_4 = 1189,$ $x_5 = 6930,$ $x_6 = 40391.$ Start it with $x_0 = 1,$ because you have
-$$ \left( \frac{8 + 3 \sqrt 8}{16} \right)\left( 3 + \sqrt 8 \right)^n + \left( \frac{8 - 3 \sqrt 8}{16} \right)\left( 3 - \sqrt 8 \right)^n$$ This is equal to
-$$ \left( \frac{ \sqrt 8}{16} \right) \left( \left( 3 + \sqrt 8 \right)^{n+1} - \left( 3 - \sqrt 8 \right)^{n+1} \right). $$
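-As a quick sanity check, here is a small Python sketch (the number of terms is arbitrary) comparing the rounding iteration with the recurrence:
-from math import sqrt
-k = 1 + 2*sqrt(2)            # so 2 + k = 3 + sqrt(8)
-x = 1
-seq_round = [x]
-for _ in range(8):           # the rounding iteration from the question
-    x = round(x*(2 + k))
-    seq_round.append(x)
-seq_rec = [1, 6]             # the recurrence x_{n+2} = 6 x_{n+1} - x_n
-while len(seq_rec) < len(seq_round):
-    seq_rec.append(6*seq_rec[-1] - seq_rec[-2])
-print(seq_round == seq_rec)  # True: both give 1, 6, 35, 204, 1189, 6930, ...
-print([abs(n*k - round(n*k)) for n in seq_round])  # the distances die off quickly<|endoftext|>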
-TITLE: Question about collapsing cardinals
-QUESTION [6 upvotes]: Suppose, in $M$, $\kappa$ regular, $\lambda>\kappa$ regular. Is there a generic extension of $M$ in which $\kappa^+ = \lambda$ and in which cardinals $\leq \kappa$ and $\geq \lambda$ are preserved?
-I worked out that, assuming GCH, the answer is yes if $\lambda$ is a limit or is the successor of a cardinal of cofinality $\geq\kappa$. The only remaining case is $\lambda = \delta^+$ for some $\delta$ of cofinality $<\kappa$, e.g. $\kappa = \omega_1$ and $\lambda = \omega_\omega^+$.
-I realize that in this remaining case, in $M[G]$ $\kappa^{<\kappa} \geq \lambda$, so the forcing notion cannot be $<\kappa$-distributive. Thus the Levy collapse cannot suffice.
-
-REPLY [5 votes]: Many questions of this sort are open. For example, consider the situation where $\kappa = \aleph_n$ and $\lambda = \aleph_{n+2}$. So you want to know if you can collapse $\aleph_{n+1}$ to $\aleph_n$ while preserving all other cardinals. Now if $n=0$ this is possible thanks to the Levy collapse. For $n=1$, this is again possible, which is due independently to Abraham and Todorcevic. For $n=2$ this was recently answered by Aspero (you will also find references to the other articles there). For $n \geq 3$ this is still open, and I believe it is also open, for example, whether you can collapse $\aleph_{\omega+1}$ to $\aleph_1$ with a stationary set preserving partial order of size $\aleph_{\omega+1}$ (which would hence not collapse any larger cardinals and also preserve $\aleph_1$).
-Of course, this is only considering special cases of your question, so perhaps a negative answer to the general question is already known.<|endoftext|>
-TITLE: Need a hint for this integral
-QUESTION [7 upvotes]: I'm trying to evaluate the following integral
-$$\int_0^{\infty} \frac{1}{x^{\frac{3}{2}}+1}\,dx.$$
-This is an old complex analysis exam question, so I plan to use the residue theorem.
-How can I deal with the $x^{3/2}$ term? I've been trying to find a clever substitution to reduce the problem to a simpler one, but so far I have not found a good one...
-Any ideas are welcome.
-Thanks,
-
-REPLY [9 votes]: Hint: Every time I see an integrand like this integrated over $(0,\infty)$, I transform it into a beta integral and follow my nose.
-Spoiler 1
-
- Letting $\displaystyle\;y = x^{3/2}\;$ and $\displaystyle\;z = \frac{y}{1+y}\;$, we have
-
-Spoiler 2
-
- $$\begin{align} \int_0^\infty \frac{dx}{x^{3/2}+1}&= \int_0^\infty \frac{dy^{2/3}}{y+1} = \frac23 \int_0^\infty \frac{y^{2/3-1} dy}{y+1}\\ &= \frac23 \int_0^\infty \left(\frac{y}{1+y}\right)^{2/3-1}\left(\frac{1}{1+y}\right)^{1/3-1}\frac{dy}{(1+y)^2}\\ &= \frac23 \int_0^1 z^{2/3-1}(1-z)^{1/3-1}dz \\ &= \frac23\frac{\Gamma(2/3)\Gamma(1/3)}{\Gamma(2/3+1/3)} = \frac23 \frac{\pi}{\sin\frac{\pi}{3}}\\ & = \frac{4\pi}{3\sqrt{3}} \end{align} $$
-
-Update (sorry, this part is too hard to set up as a spoiler correctly)
-If you really want to compute the integral using residues, you can
-
-change variable to $y = x^{3/2}$.
-pick the branch cut of $y^{-1/3}$ in the resulting integrand along the positive real axis.
-set up an integral over the contour:
-$$C := +\infty - \epsilon i\quad\to\quad -\epsilon-\epsilon i\quad \to \quad -\epsilon + \epsilon i \quad\to\quad +\infty + \epsilon i$$
-
-If you fix the argument of $y^{-1/3}$ to be zero on the upper branch of $C$, you will have
-$$\left(1 - e^{-\frac{2\pi i}{3}}\right)\int_0^\infty \frac{y^{-1/3}dy}{y+1} = \int_C \frac{y^{-1/3}dy}{y+1} =^{\color{blue}{[1]}} 2\pi i\mathop{\text{Res}}_{y = -1}\left(\frac{y^{-1/3}}{y+1}\right) = 2\pi i e^{-\frac{\pi i}{3}}$$
-This will give you
-$$\int_0^\infty \frac{dx}{x^{3/2}+1} = \frac23 \int_0^\infty \frac{y^{-1/3} dy}{y+1} = \frac{4\pi i}{3}\left(\frac{e^{-\frac{\pi i}{3}}}{1 - e^{-\frac{2\pi i}{3}}}\right) = \frac{2\pi}{3\sin\frac{\pi}{3}} = \frac{4\pi}{3\sqrt{3}}$$
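-A quick numerical check (a sketch assuming scipy is available) agrees with this value:
-from math import pi, sqrt
-from scipy.integrate import quad
-val, err = quad(lambda x: 1/(x**1.5 + 1), 0, float('inf'))
-print(val)               # 2.4183991523...
-print(4*pi/(3*sqrt(3)))  # 2.4183991523...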
-Notes
-
-$\color{blue}{[1]}$ Since the integrand $\frac{y^{-1/3}}{y+1}$ goes to $0$ faster than $\frac{1}{|y|}$ as $|y| \to \infty$, we can complete the contour $C$ by a circle of infinite radius and evaluate the integral over $C$ by taking residue at poles within the extended contour.<|endoftext|>
-TITLE: Evaluate the integral $\int \frac{1}{\sqrt[3]{(x+1)^2(x-2)^2}}\mathrm dx$
-QUESTION [7 upvotes]: What substitution is useful for this integral?
-$$\int \frac{1}{\sqrt[3]{(x+1)^2(x-2)^2}}\mathrm dx$$
-Substitutions $u=x^{\frac{2}{3}},u=(x+1)^{\frac{2}{3}},u=(x-2)^{\frac{2}{3}}$ are not working.
-I can't find a useful trigonometric substitution.
-
-REPLY [2 votes]: What substitution is useful for this integral?
-
-None. The integrand does not possess any elementary anti-derivative. See Liouville's theorem
-and the Risch algorithm for more information. However, all definite integrals of the following
-form: $\displaystyle\int_a^b\Big[(x-x_1)(x-x_2)\Big]^r~dx,$ with $a,b\in\{x_1,x_2,\pm\infty\},$ can be evaluated in terms of the
-beta and $\Gamma$ functions, assuming, of course, that they converge in the first place. This should
-come as no surprise, given the fact that Mufasa has already been able to rewrite the original
-expression as a Wallis integral, whose relation to the aforementioned special functions is well
-known. Not to mention the fact that Claude Leibovici's hypergeometric series can also be
-expressed in terms of the incomplete beta function. Thus, for $x_{1,2}=\{-1,2\}$ and $r=-\dfrac23,$
-we have the following result:
-
-$$\begin{align}
-\int_{-\infty}^\infty\Big[(x+1)(x-2)\Big]^{-\tfrac23}~dx
-~&=~3\int_{-\infty}^{-1}\Big[(x+1)(x-2)\Big]^{-\tfrac23}~dx
-\\\\
-~&=~3\int_{-1}^2\Big[(x+1)(x-2)\Big]^{-\tfrac23}~dx
-\\\\
-~&=~3\int_2^\infty\Big[(x+1)(x-2)\Big]^{-\tfrac23}~dx
-\\\\
-~&=~3\cdot\frac{B\Big(\tfrac13~,~\tfrac16\Big)}{\sqrt[3]{12}}
-\end{align}$$
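-For what it's worth, a numerical sanity check of the middle piece (a scipy sketch; the absolute value reflects that $(x+1)(x-2)<0$ on $(-1,2)$) agrees:
-from scipy.integrate import quad
-from scipy.special import beta
-val, err = quad(lambda x: abs((x + 1)*(x - 2))**(-2/3), -1, 2)
-print(3*val)                        # 11.024...
-print(3*beta(1/3, 1/6)/12**(1/3))   # 11.024...<|endoftext|>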
-TITLE: If a set of integers can be partitioned into 3 subsets with equal sums, if one such subset is identified, can the remaining be partitioned as well?
-QUESTION [7 upvotes]: Specifically, if a set $S$ of integers that sums to $k$ is known to be able to be partitioned into $3$ subsets such that each subset sums to $\dfrac{k}{3}$, if $A$ is one such subset of $S$, is it always possible to partition the remaining integers of $S-A$ into two subsets that both sum to $\dfrac{k}{3}$?
-For example, if $S = \{2, 3, 4, 6, 7, 8\}$ and sums to $30$, it can be divided into sets $\{2, 8\}$, $\{3, 7\}$, and $\{4, 6\}$ which each sum to $\dfrac{30}{3}=10$. This is a simple example, but if an arbitrary set $S$ that sums to $k$ is known to be able to be partitioned into 3 subsets that each sum to $\dfrac{k}{3}$, if one such subset $A$ is identified, is it guaranteed that the remaining integers in $S-A$ can also be partitioned into subsets that each sum to $\dfrac{k}{3}$?
-My gut feeling is that the answer is yes, but I don't know how to prove it.
-
-REPLY [11 votes]: $\{1,3,4,5,8,14,16\}$
-can be partitioned into the sets
-$\{1,16\},\{3,14\},\{4,5,8\}$, each of which add to $17$.
-The subset $A=\{1,3,5,8\}$ also adds to $17$; however, the remaining integers $\{4,14,16\}$ are all even and therefore cannot possibly be partitioned into subsets which add to the odd number $17$.
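-A brute-force check in Python (a small sketch using only the standard library) confirms both claims:
-from itertools import combinations
-S = [1, 3, 4, 5, 8, 14, 16]
-target = sum(S)//3                          # 17
-A = {1, 3, 5, 8}
-print(sum(A) == target)                     # True
-rest = [x for x in S if x not in A]         # [4, 14, 16]
-print(any(sum(c) == target
-          for k in range(1, len(rest))
-          for c in combinations(rest, k)))  # False: no such partition exists<|endoftext|>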
-TITLE: What is the minimum polynomial of $x = \sqrt{2}+\sqrt{3}+\sqrt{4}+\sqrt{6} = \cot (7.5^\circ)$?
-QUESTION [10 upvotes]: Inspired by a previous question, let $x = \sqrt{2}+\sqrt{3}+\sqrt{4}+\sqrt{6} = \cot (7.5^\circ)$. What is the minimal polynomial of $x$?
-The theory of algebraic extensions says the degree is $4$ since we have the degree of the field extension $[\mathbb{Q}(\sqrt{2}, \sqrt{3}, \sqrt{4}, \sqrt{6}): \mathbb{Q}] = [\mathbb{Q}(\sqrt{2}, \sqrt{3}): \mathbb{Q}] =4$
-Does trigonometry help us find the other three conjugate roots?
-
-$+\sqrt{2}-\sqrt{3}+\sqrt{4}-\sqrt{6} = \cot \theta_1$
-$-\sqrt{2}+\sqrt{3}+\sqrt{4}-\sqrt{6} = \cot \theta_2$
-$-\sqrt{2}-\sqrt{3}+\sqrt{4}+\sqrt{6} = \cot \theta_3$
-
-This problem would be easier if we used $\cos$ instead of $\cot$. If I remember the half-angle identity or... double-angle identity:
-$$ \cot \theta = \frac{\cos \theta}{\sin \theta} = \sqrt{\frac{1 - \sin \frac{\theta}{2}}{1 + \sin \frac{\theta}{2}}}$$
-Sorry I am forgetting, but I am asking about the relationship between trigonometry and the Galois theory of this number.
-
-REPLY [4 votes]: By Galois theory the intermediate fields are $\Bbb{Q}(\sqrt2)$, $\Bbb{Q}(\sqrt3)$ and $\Bbb{Q}(\sqrt6)$. Your number is not an element of any of those, so it generates the whole 4-d extension. In particular, we know that the minimal polynomial will be a quartic. Therefore any quartic polynomial with integer coefficients with this number as a root is the minimal polynomial.
-Let $x=2+\sqrt2+\sqrt3+\sqrt6$. Then
-$$
-0=(x-2-\sqrt3)^2-(\sqrt2+\sqrt6)^2=x^2-(4+2\sqrt3)x-1.
-$$
-The idea here is that squaring removes "the $\sqrt2$ content" from $\sqrt2+\sqrt6$ leaving only irrationalities coming from $\sqrt3$.
-Using the obvious algebraic conjugate as an extra factor we see that
-$$
-m(x)=(x^2-(4+2\sqrt3)x-1)(x^2-(4-2\sqrt3)x-1)=x^4-8x^3+2x^2+8x+1
-$$
-fits the bill.
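-One can confirm this with sympy (a quick sketch, assuming its `minimal_polynomial` helper):
-from sympy import sqrt, minimal_polynomial, Symbol
-x = Symbol('x')
-print(minimal_polynomial(sqrt(2) + sqrt(3) + sqrt(4) + sqrt(6), x))
-# x**4 - 8*x**3 + 2*x**2 + 8*x + 1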
-The algebraic conjugates of values of trig functions (at rational multiples of $\pi$) are of the same type:
-$$
-\begin{aligned}
-2+\sqrt2+\sqrt3+\sqrt6&=\cot\frac{\pi}{24},\\
-2+\sqrt2-\sqrt3-\sqrt6&=\cot\frac{17\pi}{24},\\
-2-\sqrt2+\sqrt3-\sqrt6&=\cot\frac{13\pi}{24},\\
-2-\sqrt2-\sqrt3+\sqrt6&=\cot\frac{5\pi}{24}.
-\end{aligned}
-$$
-I haven't checked the details, but I'm fairly sure that when you dig for the integer multiple angle formulas for cotangent, the polynomial $m(x)$ pops out. After all, we can write $\cot 6x$ as a rational function of $\cot x$ with polynomials of degrees $\le6$ as numerators and denominators. When you use that formula in the l.h.s. of the equation
-$$
-\cot 6x=1
-$$
-and clear the denominator, the resulting degree $6$ polynomial equation in $\cot x$ factors as a product of a quartic (obviously $m(\cot x)$) and a quadratic - the latter accounting for solutions $x=3\pi/8$ and $x=7\pi/8$.<|endoftext|>
-TITLE: The smallest symmetric group $S_m$ into which a given dihedral group $D_{2n}$ embeds
-QUESTION [16 upvotes]: Several questions, both here and on MathOverflow, address the issue of determining for a given group $G$ the smallest integer $\mu(G)$ for which there is an embedding (injective homomorphism) $G \hookrightarrow S_{\mu(G)}$. In general this is a difficult problem, but it's not hard to resolve the question for $G$ of small order, and $\mu(G)$ has been determined for some important classes of groups $G$. For example, for $G$ abelian, so that we can write $G$ uniquely (up to reordering) as $\Bbb Z_{a_1} \times \cdots \times \Bbb Z_{a_r}$ for (nontrivial) prime powers $a_1, \ldots, a_r$,
-$$\mu(G) = a_1 + \cdots + a_r .$$ And of course, $\mu(S_m) = m$.
-I have not been able to find, however, where this has been resolved for the dihedral groups; my question is:
-
-For each dihedral group $D_{2n}$ (of order $2 n$), what is the smallest symmetric group into which $D_{2n}$ embeds, that is, what is $\mu_n := \mu(D_{2n})$?
-
-Of course, $D_2 \cong S_2$ and $D_6 \cong S_3$, and so $\mu_1 = 2$ and $\mu_3 = 3$; also, $D_4 \cong \Bbb Z_2 \times \Bbb Z_2$, so by the above result $\mu_2 = \mu(D_4) = 4$.
-For any group $G$ and subgroup $H \leq G$, an embedding $G \hookrightarrow S_m$ determines an embedding $H \hookrightarrow S_m$, and so $\mu(H) \leq \mu(G)$. Thus, since $D_{2n} \cong \Bbb Z_n \rtimes \Bbb Z_2$, we have $\mu_n = \mu(D_{2n}) \geq \mu(\Bbb Z_n)$, which by the above is the sum $\Sigma_n$ of the prime powers in the prime factorization of $n$. On the other hand, for $n > 2$, the usual action by rotations and reflections of $D_{2n}$ on an $n$-gon is faithful and so determines an embedding $D_{2n} \hookrightarrow S_n$; in particular, this gives the upper bound $\mu_n \leq n$.
-Already, these bounds together give $\mu_4 = 4$ (realized by the embedding of the symmetry group of the square into the symmetric group on its vertices) and more generally that $\mu_a = a$ for prime powers $a > 2$.
-This is not sufficient, however, to determine $\mu_n$ for other integers $> 5$; for example, $\mu(\Bbb Z_6) = 5$, so these bounds only give $5 \leq \mu_6 \leq 6$. It turns out that $D_{12}$ can be embedded in $S_5$ (as David points out in a comment under his question, this embedding can be realized explicitly as $\langle(12)(345), (12)(34)\rangle$), and this settles $\mu_6 = 5$. The above results together determine the subsequence $$(\mu_1, \ldots, \mu_9) = (2, 4, 3, 4, 5, 5, 7, 8, 9) ,$$ which in particular does not appear in the OEIS.
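-For instance, a few lines of sympy (a sketch; the points are zero-based, so $(12)(345)$ becomes $(0\,1)(2\,3\,4)$) verify that David's subgroup of $S_5$ is dihedral of order $12$:
-from sympy.combinatorics import Permutation, PermutationGroup
-r = Permutation([1, 0, 3, 4, 2])         # (0 1)(2 3 4), order 6
-s = Permutation([1, 0, 3, 2, 4])         # (0 1)(2 3), order 2
-G = PermutationGroup(r, s)
-print(G.order(), r.order(), s.order())   # 12 6 2
-print(s*r*s == ~r)                       # True: the dihedral relation srs = r^{-1}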
-Edit Per David's answer, the sequence $(\mu_n)$ appears to be given by
-$$
-\mu_n := \left\{\begin{array}{cl}2, & n = 1\\ 4, & n = 2\\ \Sigma_n, & n > 2 \end{array}\right. ,
-$$
-and $(\Sigma_n)$ itself appears in the OEIS as sequence A008475.
-
-REPLY [2 votes]: There is also a following geometric way to explain the solution by David.
-Let us recall that $D_{2n}$ is the group of symmetries of a regular $n$-gon. The idea is that one should consider the action of $D_{2n}$ on regular polygons with a smaller number of vertices that are inscribed in the given regular $n$-gon.
-
-To clarify the construction, let us first consider $n=6$. There are two regular 3-gons and three regular 2-gons whose vertices are the vertices of the given regular 6-gon. Every element of $D_{12}$ acts on the set of two regular 3-gons and acts on the set of three regular 2-gons, thus one has $D_{12} \rightarrow S_2 \times S_3$.
-It is easy to see that this map is injective. Its composition with tautological embedding $S_2 \times S_3 \rightarrow S_{2+3}$ is the necessary map.
-
-Now let us consider a general $n = \prod_{i=1}^s p_i^{k_i}$. A regular $n$-gon contains $p_i^{k_i}$ regular $n/p_i^{k_i}$-gons, which gives us a map $D_{2n} \to S_{p_i^{k_i}}$. Now, by the Chinese remainder theorem and a thoughtful look, one proves that the map $D_{2n} \to \prod_{i=1}^s S_{p_i^{k_i}}$ is an injection.
-(Equivalently, suppose that a symmetry $g \in D_{2n}$ preserves every regular $n/p_i^{k_i}$-gon in the given $n$-gon. Consider any vertex $v$ of the given $n$-gon. The vertex $v$ lies in a certain $n/p_1^{k_1}$-gon, in a certain $n/p_2^{k_2}$-gon, etc. Each of them is preserved by $g$ and their intersection consists of $v$ only, thus $v$ is preserved by $g$).
-Thus $D_{2n} \hookrightarrow S_{\sum_{i=1}^s p_i^{k_i}}$ and $\mu(D_{2n}) \leqslant \sum_{i=1}^s p_i^{k_i}$. But it cannot be smaller, by David's argument.<|endoftext|>
-TITLE: Sequence related to solutions of the equation $x^x=c$
-QUESTION [9 upvotes]: A couple of years ago I remember repeatedly pressing $\sqrt{1+ans}$ on my calculator and being astonished that the answers approached the golden ratio. Astonished, I dug deeper into this problem and realized that the limit of the sequence given by the recurrence:
-$$f(x_n)=x_{n+1}$$
-Is, if the limit exists, a solution to:
-$$f(x)=x$$
-And applying this I realized I could solve almost every algebra problem. So here was one that I came across:
-$$x^x=c$$
-Easily such equation can be rearranged as:
-$$x=\sqrt[x]{c}$$
-Where now the problem becomes finding the limit of the sequence:
-$$x_{n+1}=\sqrt[x_n]{c}$$
-This method worked for some $c$ and $x_1$, like $c=2$ and $x_1=1.5$. But the problem is that for some $c$, like $c=100$, the sequence seems to alternate according to my observations (I tried $x_1=3.5$ and others, but the choice doesn't seem to matter as long as it is positive). Can anyone help me figure out for which $c$ this sequence converges for every given $x_1>0$?
-
-REPLY [9 votes]: What you discovered is called a fixed point iteration.
-
-A fixed point $x$ of a function $f\colon X\rightarrow X$ is a point
- satisfying the equation $$x=f(x).$$
-
-A fixed point iteration is defined by the sequence
-\begin{align*}x_{1}&=\text{given}\\x_{n+1}&=f(x_{n})\text{ for }n>0.\end{align*}
-We say this iteration converges when $x_{n}\rightarrow x$ for some $x$. The Banach fixed point theorem (a.k.a. contraction mapping principle) gives sufficient conditions for when a fixed point exists, is unique, and can be computed by a fixed point iteration (i.e. as the limit of the $x_{n}$). You should read the article on Wikipedia to familiarize yourself with the ideas.
-For simplicity, let's now assume $X=\mathbb{R}$ instead of an arbitrary complete metric space, as is usually treated in the statement of the Banach fixed point theorem. One of the consequences of the theorem is as follows: let $Y\subset X$ such that $f$ is continuously differentiable on $Y$, $\sup_{Y}|f^{\prime}|\leq K$ for some $K<1$, and $f(Y)\subset Y$, then $f$ has a unique fixed point on $Y$ which can be computed by a fixed point iteration initialized in $Y$ (i.e. $x_{1}\in Y$).
-In your first example, you have $f(x)=\sqrt{1+x}$. Let $Y=[0,\infty)$. Noting that $|f^{\prime}(x)|<1$ on $Y$ (check) and $f(Y)\subset Y$ trivially, it follows that for any $x_{1}\in Y$, $x_{n}\rightarrow x$ where $x$ is the unique fixed point of $f$ on $Y$. In fact, this fixed point must satisfy $x=f(x)=\sqrt{1+x}$, or equivalently, since $x\geq0$, $x^{2}=1+x$. This quadratic equation has two roots: $x=1/2\pm\sqrt{5}/2$. The positive root is, as you pointed out, the golden ratio $1.61803\ldots$
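-In Python, for instance, the iteration looks like this (a minimal sketch; $40$ steps is more than enough):
-from math import sqrt
-x = 0.0
-for _ in range(40):      # the fixed point iteration x <- sqrt(1 + x)
-    x = sqrt(1 + x)
-print(x)                 # 1.618033988749895
-print((1 + sqrt(5))/2)   # the golden ratio, equal to machine precision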
-Your other problem involves $f(x)=\sqrt[x]{c}$. It is not so clear under which conditions this satisfies the Banach fixed point theorem. In fact, as you noticed, you can find examples in which the iteration is nonconvergent.
-Do not despair though, there are better ways to compute solutions to your equation. One that comes to mind is Newton's method, which you should take a look at.
-In fact, a solution to your problem is given by Lambert's W function as
-$$x = \frac{\ln c}{W(\ln c)}.$$
-Values of the Lambert W are, in fact, often computed by Newton's method, or higher order Newton's methods.
-
-Addendum
-Newton's method for $g(x)=x^{x}-c$ is given by \begin{align*}x_1&=\text{given}\\x_{n+1}&=f(x_{n})\text{ for }n>0\end{align*} where $$f(x)=x+\frac{cx^{-x}-1}{1+\ln x}. $$ Since $g^{\prime\prime}(x)\geq0$ for $x>0$ and $f(x)>1/e$ for $x>1/e$ (check), it follows that Newton's method converges whenever $x_{1}>1/e$ and $x=f(x)$ has a solution with $x>1/e$ (see this). This is true at least when $c\geq1$, so that we can conclude:
-
-Newton's method for $x^x - c = 0$ converges for $c \geq 1$ and $x_1 > 1/e$.
-
-Here's some MATLAB code:
-c = 100.; % Value of c
-x = 1.; % Initial guess
-while 1
- x_new = x + (c*x^(-x) - 1)/(1 + log(x)); % Newton step for x^x - c = 0
- if abs (x_new - x) < 1e-12; break; end % stop once the update stalls
- x = x_new;
-end
-disp (x) % Solution<|endoftext|>
-TITLE: Why is there only one term on the RHS of this chain rule with partial derivatives?
-QUESTION [13 upvotes]: I know that if $u=u(s,t)$ and $s=s(x,y)$ and $t=t(x,y)$ then the chain rule is $$\begin{align}\color{blue}{\fbox{$\frac{\partial u}{\partial x}=\frac{\partial u}{\partial s}\times \frac{\partial s}{\partial x}+\frac{\partial u}{\partial t}\times \frac{\partial t}{\partial x}$}}\color{#F80}{\tag{A}}\end{align}$$
-A short extract from my book tells me that:
-
-If $u=(x^2+2y)^2 + 4$ and $p=x^2 + 2y$ then $u=p^2 + 4$ therefore $$\frac{\partial u}{\partial x}=\frac{\partial u}{\partial p}\times \frac{\partial p}{\partial x}\tag{1}$$ as $u=u(x,y)$ and $p=p(x,y)$
-
-The book mentions no origin for equation $(1)$, and unlike $\color{#F80}{\rm{(A)}}$ it has only one term on the RHS; so I would like to know how it was formed. Is $(1)$ simply equivalent to $\color{#F80}{\rm{(A)}}$ but with the last term missing? Or is there more to it than that?
-Many thanks,
-BLAZE.
-
-REPLY [11 votes]: To expand a bit on Hagen von Eitzen’s answer and littleO’s comment, there are really two different functions that are both named ‘$u$’. The first is a function of two variables, $u:(x,y)\mapsto (x^2+2y)^2+4$, while the second is a function of only one variable, $u:t\mapsto t^2+4$. Let’s call the former $\bar u$ to keep them straight. We also have $p:(x,y)\mapsto x^2+2y$, so $\bar u=u\circ p$, i.e., $\bar u(x,y)=u(p(x,y))$. By the chain rule, ${\partial\over\partial x}\bar u={\partial\over\partial x}(u\circ p)=\sum{\partial u\over\partial w_i}{\partial w_i\over\partial x}$, the sum taken over all of the parameters $w_i$ of $u$. In this case, $u$ is a function of only one variable, so this sum has only the one term, ${\partial u\over\partial p}{\partial p\over\partial x}$. Because this $u$ is a function of only one variable, you might see this written as ${du\over dp}{\partial p\over\partial x}$ instead.<|endoftext|>
-TITLE: In Borel-Cantelli lemma, what is the limit superior?
-QUESTION [5 upvotes]: In a proof of the Borel-Cantelli lemma in the stochastic process textbook, the author used the following.
-$$\limsup_{n\to\infty}A_n=\bigcap_{n\ge1}\bigcup_{k\ge n} A_k$$
-Can someone explain why lim sup is intersection and union? Thank you
-
-REPLY [13 votes]: I find it very helpful to think of the limit superior and limit inferior of a sequence of real numbers and a sequence of sets as special cases of limit superior and limit inferior in so called complete lattices:
-You probably already know the following notions for the special case of the ordered set $(\mathbb{R},\leq)$:
-
-Definition: Let $(S,\leq)$ be a partially ordered set and $A \subseteq S$ a subset. An element $s \in S$ is called an upper bound of $A$ if $a \leq s$ for all $a\ \in A$. An element $t \in S$ is called a lower bound of $A$ if $t \leq a$ for all $a \in A$.
-An element $s \in S$ is called supremum of $A$ if $s$ is a least upper bound of $A$, i.e. $s$ is an upper bound of $A$ and for every upper bound $s'$ of $A$ we have $s \leq s'$.
-Similarly an element $t \in S$ is called infimum of $A$ if $t$ is a greatest lower bound of $A$, i.e. $t$ is a lower bound of $A$ and for every lower bound $t'$ of $A$ we have $t' \leq t$.
-
-If $(S, \leq)$ is a partially ordered set and $A \subseteq S$ a subset then neither a supremum of $A$, nor an infimum of $A$ need to exist. If it does, however, then it is unique, and is denoted by $\sup A$ and $\inf A$ respectively.
-Notice that in the special case of $(\mathbb{R},\leq)$ the above definition coincides with the usual notion of the supremum and infimum of a set of real numbers. In the case of the extended real numbers $\mathbb{R}\cup \{-\infty,\infty\} = [-\infty,\infty]$ we have the nice property that each subset $S \subseteq [-\infty,\infty]$ has a supremum (possibly $\infty$) and an infimum (possibly $-\infty$). Such ordered sets are called complete lattices.
-
-Definition: An ordered set $(S,\leq)$ is called a complete lattice if for each subset $A \subseteq S$ both $\sup A$ and $\inf A$ exist.
-
-Aside from the extended real numbers $[-\infty,\infty]$ another complete lattice which we commonly encounter are power sets:
-
-Example: Let $X$ be any set and denote by $\mathcal{P}(X) = \{T \mid T \subseteq X\}$ the power set of $X$. With the usual subset relation $\subseteq$ the power set becomes a partially ordered set $(\mathcal{P}(X),\subseteq)$. Let $\mathcal{A} \subseteq P(X)$ (i.e. $\mathcal{A}$ is a collection of subsets of $X$).
-For any subset $S \in \mathcal{P}(X)$ we have that $S$ is an upper bound of $\mathcal{A}$ if and only if $T \subseteq S$ for all $T \in \mathcal{A}$. Therefore $S := \bigcup_{T \in \mathcal{A}} T$ is an upper bound of $\mathcal{A}$. If $S' \in \mathcal{P}(X)$ is any upper bound of $\mathcal{A}$, then we have $T \subseteq S'$ for all $T \in \mathcal{A}$, and thus also $S \subseteq S'$. So $S$ is a supremum of $\mathcal{A}$.
-In the same way we also find that $\bigcap_{T \in \mathcal{A}} T$ is an infimum of $\mathcal{A}$. So $(\mathcal{P}(X), \subseteq)$ is a complete lattice, and for any collection of subsets $\mathcal{A} \subseteq \mathcal{P}(X)$ we have $\sup \mathcal{A} = \bigcup_{T \in \mathcal{A}} T$ and $\inf \mathcal{A} = \bigcap_{T \in \mathcal{A}} T$.
-
-Notice that this result is not very surprising: The smallest set containing all sets of $\mathcal{A}$ is naturally the union of these sets. In the same way the biggest set which is contained in all sets of $\mathcal{A}$ is naturally the intersection of these sets.
-Since in complete lattices we have suprema and infima we have all that we need to define the limit superior and limit inferior.
-
-Definition: Let $(S,\leq)$ be a complete lattice and $(s_n)_{n \in \mathbb{N}}$ a sequence of elements $s_n \in S$. Then the limit superior and limit inferior of this sequence are
- $$
- \limsup_{n \to \infty} s_n = \inf_{n \geq 0} \sup_{k \geq n} s_k
-$$
- and
- $$
- \liminf_{n \to \infty} s_n = \sup_{n \geq 0} \inf_{k \geq n} s_k.
-$$
- If $\limsup_{n \to \infty} s_n = \liminf_{n \to \infty} s_n$ then we also write
- $$
- \lim_{n \to \infty} s_n = \limsup_{n \to \infty} s_n = \liminf_{n \to \infty} s_n
-$$
- and call this the limit of the sequence $(s_n)_{n \in \mathbb{N}}$.
-
-Notice that for the extended real line $[-\infty,\infty]$ this is the usual definition of limit superior and limit inferior. But what about power sets?
-
-Example: Let $X$ be any set and $(A_n)_{n \in \mathbb{N}}$ a sequence of subsets $A_n \subseteq X$. Then $(A_n)_{n \in \mathbb{N}}$ is a sequence in the complete lattice $(\mathcal{P}(X), \subseteq)$. From the previous example we see that the limit superior of this sequence is given by
- $$
- \limsup_{n \to \infty} A_n
- = \inf_{n \geq 0} \sup_{k \geq n} A_k
- = \bigcap_{n \geq 0} \bigcup_{k \geq n} A_k,
-$$
- and the limit inferior is given by
- $$
- \liminf_{n \to \infty} A_n
- = \sup_{n \geq 0} \inf_{k \geq n} A_k
- = \bigcup_{n \geq 0} \bigcap_{k \geq n} A_k.
-$$
-
-So we see that the definition of the limit superior and limit inferior of a sequence of sets really comes down to what the supremum and infimum of a collection of sets is, which naturally is their union and intersection respectively.<|endoftext|>
-TITLE: Can exist an even number greater than $36$ with more even divisors than $36$, all of them being a prime$-1$?
-QUESTION [13 upvotes]: I did a little test today, looking for all the numbers whose even divisors are each a prime number minus $1$, in order to verify possible properties of them. These are the first terms; the sequence is not included in the OEIS:
-
-2, [2]
-4, [2, 4]
-6, [2, 6]
-10, [2, 10]
-12, [2, 4, 6, 12]
-18, [2, 6, 18]
-22, [2, 22]
-30, [2, 6, 10, 30]
-36, [2, 4, 6, 12, 18, 36]
-46, [2, 46]
-58, [2, 58]
-
-I tried to look for the one with the longest list of even divisors, but it seems that the longest one is $36$, at least up to $10^6$:
-
-$36$, even divisors $[2, 4, 6, 12, 18, 36]$, so the primes are $[3, 5, 7, 13, 19, 37]$.
-
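-The search itself is easy to reproduce; here is a short Python sketch of it (assuming sympy's `isprime`; the bound is kept small here):
-from sympy import isprime
-def even_divisors(n):
-    return [d for d in range(2, n + 1, 2) if n % d == 0]
-def good(n):
-    return all(isprime(d + 1) for d in even_divisors(n))
-best = max((n for n in range(2, 10**4, 2) if good(n)),
-           key=lambda n: len(even_divisors(n)))
-print(best, even_divisors(best))   # 36 [2, 4, 6, 12, 18, 36]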
-For instance, in the same exercise with the even divisors being each a prime number plus $1$ (allowing $1$ in the case of the even divisor $2$), the number with the longest list seems to be $24$:
-
-$24$, $[2, 4, 6, 8, 12, 24]$, so the primes are $[3, 5, 7, 11, 23]$.
-
-And, for instance, for the case in which both minus and plus one give a prime (or $1$ for the even divisor $2$), the longest one seems to be $12$: $[2, 4, 6, 12]$.
-I would like to ask the following question:
-
-These are heuristics, but I do not understand why it seems impossible to find a number greater than these small values such that all its even divisors comply with the property and its list of divisors is longer than that of $36$. Is there a theoretical reason behind this, or should it be possible to find a greater number (maybe very big) complying with the property? Is the way of calculating such a possibility somehow related to Diophantine equations?
-
-Probably the reason is very simple, but I cannot see it clearly. Thank you very much in advance!
-
-REPLY [8 votes]: Sieving with small primes reveals the following.
-Assume that
-$$
-n=2^a\cdot3^bp_1^{a_1}p_2^{a_2}\cdots p_k^{a_k}\qquad(*)
-$$
-has the property that $d+1$ is a prime whenever $d$ is an even factor of $n$. Here $(*)$ gives the prime factorization of $n$, so $p_i$ are all distinct primes $>3$. Without loss of generality we can assume that $a>0$ and that $a_i>0$ for all $i$.
-If any of the primes $p_i$ satisfies the congruence $p_i\equiv1\pmod 3$, then $2p_i+1$ is divisible by three, and $d=2p_i\mid n$ is in violation of the assumption. We can conclude that $p_i\equiv-1\pmod 3$ for all $i$.
-If $k\ge 2$ then $2p_1p_2+1$ is divisible by three, so $d=2p_1p_2$ is in violation. Therefore $k\le1$. But also if $a_1\ge2$, then $2p_1^2+1$ is divisible by three and $d=2p_1^2$ is a violating factor.
-At this point we know that either (call this case A)
-$$
-n=2^a\cdot3^bp
-$$
-for some prime $p\equiv-1\pmod3$, or (call this case B)
-$$
-n=2^a3^b.
-$$
-In case A we make the following further observations. First we see that $4p+1$ is again divisible by three, so $d=4p$ is in violation. Therefore in case A we must have $a=1$. Also, $2\cdot 3^3+1$ is a multiple of five, so we similarly conclude that $b\le 2$.
-In case B we observe that $2^3+1$ and $2\cdot 3^3+1$ are not primes, and therefore $a\le2$ and $b\le2$.
-So we are left with the possibilities
-
-$n=2\cdot 3^b p$ with $b\le2$. This $n$ has $2(b+1)$ even factors, so there are at most six of them.
-$n=2^a\cdot 3^b$ with $a,b\le2$. This $n$ has $a(b+1)$ even factors, so again there are at most six of them.<|endoftext|>
-TITLE: in every coloring $1,...,n$ there are distinct integers $a,b,c,d$ such that $a+b+c=d$
-QUESTION [7 upvotes]: Prove that for every $k$ there is a finite integer $n = n(k)$ so that for any coloring of the integers $1, 2, \dots, n$ by $k$ colors there are distinct integers $a, b, c$ and $d$ of the same color satisfying $a + b + c = d$.
-
-REPLY [6 votes]: By Ramsey's theorem we can choose $n=n(k)$ so that, for any coloring of the $2$-element subsets of an $n+1$-element set with $k$ colors, there is a $7$-element subset whose $2$-element subsets all have the same color.
-Now suppose the numbers $1,2,\dots,n$ are colored with $k$ colors. Color the $2$-element subsets of the set $\{0,1,2,\dots,n\}$ as follows: if $x,y\in\{0,1,2,\dots,n\}$ and $x\lt y$, give $\{x,y\}$ the same color as the number $y-x$. Thus there are numbers $x_1\lt x_2\lt x_3\lt x_4\lt x_5\lt x_6\lt x_7$ such that all the differences $x_j-x_i\ (1\le i\lt j\le7)$ have the same color.
-Choose $i\in\{3,4\}$ so that $x_i-x_2\ne x_2-x_1$ and then choose $j\in\{5,6,7\}$ so that $x_j-x_i\ne x_2-x_1,\ x_i-x_2$.
-Let $a=x_2-x_1,\ b=x_i-x_2,\ c=x_j-x_i$ and $d=x_j-x_1$. Then $a,\ b,\ c,\ d$ are distinct, and $a,\ b,\ c,\ a+b,\ b+c$, and $a+b+c=d$ all have the same color.
-Alternatively choose $m=m(k)$ so that, for any coloring of the $2$-element subsets of an $m+1$-element set with $k$ colors, there is a $4$-element subset whose $2$-element subsets all have the same color, and let $n=n(k)=2^m-1$.
-Now suppose the numbers $1,2,\dots,n$ are colored with $k$ colors. Color the $2$-element subsets of the set $\{0,1,\dots,m\}$ as follows: if $x,y\in\{0,1,\dots,m\}$ and $x\lt y$, give $\{x,y\}$ the same color as the number $2^y-2^x$. Thus there are numbers $x_1\lt x_2\lt x_3\lt x_4$ such that all the differences $2^{x_j}-2^{x_i}(1\le i\lt j\le4)$ have the same color.
-Let $a=2^{x_2}-2^{x_1},\ b=2^{x_3}-2^{x_2},\ c=2^{x_4}-2^{x_3},\ d=2^{x_4}-2^{x_1}$. Then $a\lt b\lt c\lt d$ and $a,\ b,\ c,\ a+b,\ b+c$, and $a+b+c=d$ all have the same color.
-This construction can be improved by using a Sidon sequence (also called a Golomb ruler) $a_0,a_1,\dots,a_m$ instead of $2^0,2^1,\dots,2^m.$<|endoftext|>
-TITLE: Which is larger, $\sqrt[2015]{2015!}$ or $\sqrt[2016]{2016!}$?
-QUESTION [18 upvotes]: This was a question in a maths contest, where no calculator was allowed. Also, note that only a relationship ($>$, $<$ or $=$) is being sought, not the values of the numbers themselves.
-
-Which is larger, $\sqrt[2015]{2015!}$ or $\sqrt[2016]{2016!}$ ?
-
-
-What I've done:
-My approach is to divide one number by the other and infer from the result which number is the bigger one;
-WolframAlpha gives $\frac{\sqrt[2016]{2016!}}{\sqrt[2015]{2015!}}=1.00049\ldots$, so clearly $\sqrt[2016]{2016!}>\sqrt[2015]{2015!}$
-Let $a=\sqrt[2016]{2016!}$ and $b=\sqrt[2015]{2015!}$
-$\therefore a=\sqrt[2016]{2016!}={2016!}^{1 \over 2016}=2016^{1 \over 2016}\times2015!^{1\over 2016}=\sqrt[2016]{2016}\cdot \sqrt[2016]{2015!}$
-$\therefore b=\sqrt[2015]{2015!}={2015!}^{1 \over 2015}={2015!}^{\frac{2016}{2015}\cdot\frac{1}{2016}}=\sqrt[2016]{2015!^{2016 \over 2015}}$
-Hence
-$$\begin{align}
-\require{cancel}
-\frac{a}{b}=\frac{\sqrt[2016]{2016!}}{\sqrt[2015]{2015!}}&=\frac{\sqrt[2016]{2016}\cdot \sqrt[2016]{2015!}}{\sqrt[2016]{2015!^{2016 \over 2015}}}\\
-&=\sqrt[2016]{2016}\cdot \sqrt[2016]{2015!^{\frac{-1}{2015}}}= \cancelto{*}{\sqrt[2016]{\frac{2016}{2015!^{2015}}} \quad \text{which appears to be} <1}\\
-&=\sqrt[2016]{\frac{2016}{2015!^\frac{1}{2015}}}.\\
-\end{align}$$
-That is, the cancelled step would give $\frac{a}{b}<1 \implies a<b$, contradicting the numerical value above, while the corrected last line gives $\frac{a}{b}>1$. Indeed, comparing logarithms,
-$$\log{\frac{a}{b}}=\frac {1}{2016}\sum_{n=1}^{2016}\log (n)-\frac {1}{2015}\sum_{n=1}^{2015}\log (n)=$$ $$=\frac {1}{2016}\log (2016)-\frac {1}{(2016)(2015)}\sum_{n=1}^{2015}\log (n)>$$ $$>\frac {1}{2016}\log (2016)-\frac {1}{(2016)(2015)}\sum_{n=1}^{2015}\log (2015)=$$ $$=\frac {1}{2016}\log (2016)-\frac {1}{2016}\log (2015)\;>0\;.$$
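-In Python one can confirm the size of the ratio without overflow (a sketch using `math.lgamma`, since $\log n! = \operatorname{lgamma}(n+1)$):
-from math import lgamma, exp
-log_a = lgamma(2017)/2016    # log of 2016!**(1/2016)
-log_b = lgamma(2016)/2015    # log of 2015!**(1/2015)
-print(log_a - log_b > 0)     # True
-print(exp(log_a - log_b))    # 1.00049..., so a > b<|endoftext|>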
-TITLE: Hypersurfaces meet everything of dimension at least 1 in projective space
-QUESTION [5 upvotes]: The following exercise is taken from ravi vakil's notes on algebraic geometry.
-
-Suppose $X$ is a closed subset of $\mathbb{P}^n_k$ of dimension at least $1$, and $H$ is a nonempty hypersurface in $\mathbb{P}^n_k$. Show that $H\cap X \ne \emptyset$.
-
-The clue suggests to consider the cone over $X$. I'm stuck on this and I realized that i'm at this point again where i'm not sure how a neat formal proof of this should look like.
-Thoughts:
-Does a hypersurface in projective space mean $H=V_+(f)$, the homogeneous primes not containing $f$?
-If $X \hookrightarrow Proj(S_{\bullet})$ is a closed embedding then it corresponds (before taking $Proj$(-)) to a surjection of graded rings $S_{\bullet} \to R_{\bullet}=S_{\bullet}/I_+(X)$ where $I_+(X)$ is the set of all homogeneous elements vanishing on $X$. The cone $C(X)$ over $X$ is then obtained by taking $Spec(-)$ of this morphism, giving $C(X) \hookrightarrow Spec(S_{\bullet})$. Is that right? How can this help me prove the theorem above?
-I got the feeling so far that there's a very elegant way to describe all hypersurfaces in terms of the vanishing of global sections of line bundles. This would help me enormously, since it would enable me to carry my geometric intuition over to this setting. In this context the statement would look like:
-A global section of a non-trivial line bundle on projective space must have a zero on all Zariski closed subsets. This is the same as saying that a nontrivial line bundle on projective space restricts to a nontrivial line bundle on all closed subspaces. And here I have a cohomology problem that feels pretty specific and manageable. This all feels much less ambiguous to me than "take the cone over $X$". Clarifying this would help me a lot.
-
-REPLY [5 votes]: Here is a different but most elementary proof that $H\cap X\neq \emptyset$:
-The complement $ U=\mathbb P^n\setminus H$ of the hypersurface is an affine variety, an easy by-product of the Veronese embedding: see Theorem 1.1 here.
-But it is impossible that $X\subset U$, since an affine variety cannot contain a positive-dimensional projective variety.
-Hence $H\cap X$ is non-empty.
-Edit
-At the OP's request let me recall that given two points on an affine variety $U$ there is a regular function $h\in \mathcal O(U)$ taking different values at them, whereas on a projective variety $X$ all regular functions are constant.
-This is why $U$ cannot contain $X$: consider two points on $X$ and restrict $h$ to $X$ to obtain a contradiction.<|endoftext|>
-TITLE: Countable-infinity-to-one function
-QUESTION [14 upvotes]: Are there continuous functions $f:I\to I$ such that $f^{-1}(\{x\})$ is countably infinite for every $x$? Here, $I=[0,1]$.
-The question "Infinity-to-one function" answers is similar but without the condition that it be countable. (The range was $S^2$, not $I$, but the accepted answer also worked for $I$.)
-I doubt one exists, since I haven't been able to come up with one, but I'm not sure. There's probably some topological reason why this is impossible.
-
-REPLY [6 votes]: Here is an example. Start with the Cantor function $g:[0,1]\to[0,1]$, i.e. the function that sends $x=\sum a_n/3^n$ to $\sum a_n/2^{n+1}$ if every $a_n$ is $0$ or $2$ and is locally constant off of the Cantor set $K$. Note that for each dyadic rational $q\in(0,1)$, there is a unique (nondegenerate) interval $[a_q,b_q]$ with $a_q,b_q\in K$ such that $g(x)=q$ for all $x\in[a_q,b_q]$, and $[0,1]\setminus K$ is the disjoint union of the intervals $(a_q,b_q)$. Define a function $h:[0,1]\to[0,1]$ by saying $h=g$ on $K$, and on each interval $[a_q,b_q]$, $h$ is a finite-to-one continuous surjection $[a_q,b_q]\to[q,q+1/2^n]$, where $2^n$ is the denominator of $q$ (in lowest terms), and $h(a_q)=h(b_q)=q$.
-The function $h$ is clearly continuous when restricted to each interval $[a_q,b_q]$, and in particular is continuous off of $K$. As you approach a point of $K$ without staying in a single interval $[a_q,b_q]$, you pass through infinitely many such intervals $[a_q,b_q]$, with the denominators of the numbers $q$ getting larger and larger, and so $h$ remains continuous because $g$ is continuous. Thus $h$ is continuous on all of $[0,1]$.
-I claim every point of $[0,1]$ except $0$ has countably infinitely many preimages under $h$ (since $h(x)\geq g(x)$ for all $x$, $0$ is the only preimage of $0$). It is clear that every point has countably many preimages: $h$ agrees with $g$ on $K$ and $g$ is finite-to-one on $K$, and off of $K$, $h$ is finite-to-one on each of the countably many intervals $[a_q,b_q]$. Now if $x\in[0,1]$ and $q\in(0,1)$ is any dyadic rational obtained by truncating a binary expansion of $x$ at some point, then by construction, $h$ takes the value $x$ somewhere on the interval $[a_q,b_q]$ (since $q\leq x\leq q+1/2^n$). If $x\neq0$, then there are infinitely many different dyadic rationals $q\in(0,1)$ that can be obtained by truncating a binary expansion of $x$ (if $x$ is a dyadic rational, use the binary expansion of it that ends in $1$s). So we find that $x$ must have infinitely many preimages unless $x=0$.
-Finally, it is easy to modify $h$ to give $0$ infinitely many preimages. For instance, define $f(x)=i(x)$ if $x\in[0,1/2]$ and $f(x)=h(2x-1)$ if $x\in[1/2,1]$, where $i:[0,1/2]\to[0,1]$ is any countable-to-one continuous function that achieves the value $0$ infinitely many times and satisfies $i(1/2)=0$ (it is easy to construct such a function by appropriately modifying the function $x\mapsto x\sin^2(1/x)$). Then $f$ is a continuous function $[0,1]\to[0,1]$ with countably infinitely many preimages for each point.<|endoftext|>
-TITLE: Why is my Monty Hall answer wrong using Bayes Rule?
-QUESTION [8 upvotes]: The Monty Hall problem is described this way:
-
-Suppose you're on a game show, and you're given the choice of three
- doors: Behind one door is a car; behind the others, goats. You pick a
- door, say No. 1, and the host, who knows what's behind the doors,
- opens another door, say No. 3, which has a goat. He then says to you,
- "Do you want to pick door No. 2?" Is it to your advantage to switch
- your choice?
-
-I am interested in finding the probability of winning when you switch. I already know it's $2/3$ but I want to show it with Bayes Rule.
-I tried this:
-$A$ = car behind door $1$
-$B$ = goat is behind door $3$
-$$P(A|B) = \frac{P(B|A)P(A)}{P(B)} = \frac{1 \cdot 1/3}{1-1/3} = \frac{1}{2}$$
-$P(B|A)$ = the probability that a goat is behind door $3$ given that the car is behind door $1$. This is equal to $1$ because if we know where the car is, then any other door must have a goat.
-$P(A)$ = the probability of the car being behind door $1$. Assuming any door is equally likely to contain a car before we open any doors, this is $1/3$.
-$P(B)$ = the probability of a goat behind behind door $3$. This is equal to $1$ minus the probability that the car is behind door $3$, so $1-1/3$.
-Where is my mistake?
-
-REPLY [6 votes]: If you define event $B$ simply as 'there is a goat behind door 3', then of course $P(A|B)=\frac{1}{2}$, for there are two options left for the car. And your use of Bayes' theorem to show $P(A|B)=\frac{1}{2}$ is also correct, for indeed with the $B$ defined this way, you have $P(A)=\frac{1}{3}$, $P(B)=\frac{2}{3}$, and $P(B|A)=1$
-Put differently: asking what the chance is that door 1 has a car given that door 3 has a goat is effectively ignoring the whole 'game play' behind this problem. That is, you are not taking into account that Monty is revealing a door as a result of your choice, and whatever other assumptions are in force (such as: Monty knows where the prize is; Monty is certain to open a door with a goat; if you initially pick a door with the car, Monty will randomly pick one of the remaining two). Instead, event $B$ simply says: "there is a goat behind door $3$'. Indeed, as such, the problem statement might as well be:
-
-Suppose you're on a game show, and you're given the choice of three doors: Behind one door is a car; behind the others, goats. Oh, and another thing: door $3$ contains a goat. You pick door $1$. Is it to your advantage to switch your choice?
-
-OK, what we need to do is take into account Monty's actions: it is indeed Monty's-act-of-picking-and-revealing-door-$3$-to-have-a-goat that is all important here.
-So, instead, define $B$ as: "Monty Hall shows door $3$ to have a goat"
-Notice how this is crucially different: for example, if door $1$ has the car, then door $3$ is certain to have a goat, but Monty is not certain to open door $3$ and reveal that: he might also open door $2$.
-Now, let's use the standard assumptions that Monty is always sure to reveal a door with a goat and that, if both remaining doors have a goat, Monty chooses randomly between them to open.
-OK, now $P(A)$ is still $\frac{1}{3}$, but otherwise things change radically with this new definition of $B$:
-First, $P(B|A)$: Well, as pointed out above, this is no longer $1$, but becomes $\frac{1}{2}$, since Monty randomly chooses between doors 2 and 3 to open up.
-Next, $P(B)$: what is the probability Monty opens door 3 to reveal a goat? There are two cases to consider in which Monty opens door 3: door 1 has the car, or door 2 has the car, each having a probability of $\frac{1}{3}$ Now, if door 1 has the car then, as we saw, there is a probability of $\frac{1}{2}$ of Monty revealing door 3 to have a goat. If door 2 has the car, then Monty is certain to reveal door 3 to have a goat. So: $P(B)=\frac{1}{3}\cdot \frac{1}{2}+\frac{1}{3}\cdot 1 = \frac{1}{2}$
-Plugging this into Bayes' rule:
-$$P(A|B)=\frac{P(B|A)\cdot P(A)}{P(B)}=\frac{\frac{1}{2}\cdot \frac{1}{3}}{\frac{1}{2}}=\frac{1}{3}$$
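-A short Monte Carlo simulation (a sketch; doors are numbered $0,1,2$ and the trial count is arbitrary) reproduces this value:
-import random
-stay_wins = n = 0
-for _ in range(200000):
-    car = random.randrange(3)              # you always pick door 0
-    if car == 0:
-        opened = random.choice([1, 2])     # Monty picks a goat door at random
-    else:
-        opened = 3 - car                   # the only goat door he may open
-    if opened != 2:                        # condition on "Monty opened door 3"
-        continue
-    n += 1
-    stay_wins += (car == 0)
-print(stay_wins / n)                       # ~ 1/3, so switching wins ~ 2/3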
-I believe the difference between these two $B$'s is actually at the heart of the Monty Hall Paradox. Most people will treat Monty opening door 3 and revealing a goat simply as the information that "door 3 has a goat" (i.e. as your initial $B$), in which case switching makes no difference, whereas using the information that "Monty Hall opens door 3 and reveals a goat" (i.e. as the newly defined $B$), switching does turn out to make a difference (again, within the context of the Standard Assumptions regarding this puzzle). And this is hard to grasp.<|endoftext|>
-TITLE: Derivative of multivariate normal distribution wrt mean and covariance
-QUESTION [11 upvotes]: I want to differentiate this wrt $\mu$ and $\Sigma$ :
-$${1\over \sqrt{(2\pi)^k |\Sigma |}} e^{-0.5 (x-\mu)^T \Sigma^{-1} (x-\mu)} $$
-I'm following the Matrix Cookbook here and also this answer. The solution given in the answer (2nd link) doesn't match what I read in the cookbook.
-For example, for this term, if I follow rule 81 from the linked cookbook, I get a different answer (differentiating wrt $\mu$):
-$(x-\mu)^T \Sigma^{-1} (x-\mu)$
-According to the cookbook, the answer should be $-(\Sigma^{-1} + \Sigma^{-T}) (x-\mu)$. Or am I missing something here? Also, how do I differentiate $(x-\mu)^T \Sigma^{-1} (x-\mu)$
- with respect to $\Sigma$?
-
-REPLY [5 votes]: I also had the same question as you. After trying equation 81 from the Matrix cookbook, I got this equation:
-$$
-\frac{\partial{f}}{\partial{\mu}} = -\frac{1}{2}(\Sigma ^{-1} + (\Sigma^{-1})^{T}) (x - \mu)\cdot(-1)
-$$
-Since $ \Sigma $ is the co-variance matrix, it is symmetrical. Inverse of a symmetrical matrix is also symmetric (Is the inverse of a symmetric matrix also symmetric?). Therefore, we have $ (\Sigma^{-1})^{T} = \Sigma ^{-1} $.
-Now, the above equation reduces to
-$$
-\frac{\partial{f}}{\partial{\mu}} = \Sigma ^{-1}(x - \mu). $$
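-A finite-difference check (a numpy sketch with arbitrary numbers, taking $f(\mu)=-\frac12(x-\mu)^T\Sigma^{-1}(x-\mu)$) confirms this:
-import numpy as np
-rng = np.random.default_rng(0)
-A = rng.standard_normal((3, 3))
-Sigma = A @ A.T + 3*np.eye(3)     # a symmetric positive definite covariance
-x, mu = rng.standard_normal(3), rng.standard_normal(3)
-f = lambda m: -0.5*(x - m) @ np.linalg.solve(Sigma, x - m)
-eps = 1e-6
-grad = np.array([(f(mu + eps*e) - f(mu - eps*e))/(2*eps) for e in np.eye(3)])
-print(np.allclose(grad, np.linalg.solve(Sigma, x - mu), atol=1e-5))  # True<|endoftext|>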
-TITLE: Why does the minimum value of $x^x$ equal $1/e$?
-QUESTION [11 upvotes]: The graph of $y=x^x$ looks like this:
-
-As we can see, the graph has a minimum value at a turning point. According to WolframAlpha, this point is at $x=1/e$.
-I know that $e$ is the number for exponential growth and $\frac{d}{dx}e^x=e^x$, but these ideas seem unrelated to the fact that the minimum of $x^x$ occurs at $1/e$. Is this just pure coincidence, or could someone provide an intuitive explanation (i.e. more than just a proof) of why this is?
-
-REPLY [2 votes]: Try to minimize the logarithm of $ y= x^x,$ i.e., $\log y=x \, \log x$
-Its derivative is
-$$ 1 + \log(x) $$
-When equated to zero, it shows that the minimum of $y(x)$ occurs at
-$$ x= \dfrac{1}{e}= 0.36788$$
-and the minimum value is
-$$ y_{min}= \dfrac{1}{{e}^{\frac{1}{e}}} \approx 0.6922$$
-The tiny red dot shown in your graph for minimum is:
-$$ (x,y) = (0.36788, 0.6922) $$
-Note that the minimum value is not $\dfrac{1}{e}\, ! \,$ ... but is the reciprocal of the $e^{\text{th}}$ root of $e.$<|endoftext|>
-TITLE: How to calculate $\sum_{n \in P}\frac{1}{n^2}, P=\{n \in \mathbb{N}: \exists (a,b) \in\ \mathbb{N^+} \times \mathbb{N^+} \mbox{ with } a^2+b^2=n^2\}$
-QUESTION [5 upvotes]: How can I evaluate
-$$\sum_{n \in P}\frac{1}{n^2} \quad \quad P=\{n \in \mathbb{N^+}: \exists (a,b) \in\ \mathbb{N^+} \times \mathbb{N^+} \mbox{ with } a^2+b^2=n^2\}$$
-It's clearly convergent. I thought about seeing the sum as a sum of complex numbers using $(a+ib)(a-ib)=a^2+b^2$.
-
-REPLY [3 votes]: Let $S(x)$ denote the number of positive integers not exceeding $x$ which can be expressed as a sum of two squares. Then, as proved by Landau in 1908 the following limit holds.
-$$
-\lim_{x\to\infty} \frac{\sqrt{\ln x}}{x} S(x) = K,
-$$
-where $K \approx 0.76422365358922$ is a constant. The convergence to the constant $K$, known as the Landau–Ramanujan constant, is very slow. The first ten thousand digits of $K$ are here.
-The exact value of $K$ can be expressed as
-$$
-K = \frac{1}{\sqrt{2}} \prod_{\substack{p \text{ prime } \\ \equiv \, 3 \pmod{4}}} \left(1-\frac{1}{p^2}\right)^{-1/2},
-$$
-so we have that
-$$
-\prod_{\substack{p \text{ prime } \\ \equiv \, 3 \pmod{4}}} \left(1-\frac{1}{p^2}\right) = \frac{1}{2K^2}.
-$$
-From the excellent answer of @Jack D'Aurizio we know that
-$$
-S = \sum_{n\in P}\frac{1}{n^2} = \frac{\pi^2}{6}-\frac{4}{3}\cdot \prod_{\substack{p \text{ prime } \\ \equiv \, 3 \pmod{4}}} \left(1-\frac{1}{p^2}\right)^{-1}. \\
-$$
-Using the exact value of $K$ we could express your sum in term of the Landau–Ramanujan constant.
-
-$$
-S = \sum_{n\in P}\frac{1}{n^2} = \frac{\pi^2}{6}-\frac{8K^2}{3}.
-$$
-
-Numerically
-
-$$
-\begin{align}
-S \approx
-0.08749995296754071824615285056063798739937787259940111394813\\
-9987884926591232919611579752270225245544983905801979851301833\\
-539947996701320476568130203602037004645936371\dots\phantom{0000000000000}
-\end{align}
-$$
-
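-For a rough cross-check, the series can be summed directly (a Python sketch; it uses the standard fact that $n\in P$ iff $n$ has a prime factor $\equiv 1 \pmod 4$, and the truncated tail is $O(1/N)$):
-from math import pi
-def in_P(n):   # n^2 = a^2 + b^2 with a, b >= 1 iff some prime p | n has p % 4 == 1
-    m, p = n, 2
-    while p*p <= m:
-        if m % p == 0:
-            if p % 4 == 1:
-                return True
-            while m % p == 0:
-                m //= p
-        p += 1
-    return m > 1 and m % 4 == 1
-N = 20000
-K = 0.76422365358922
-print(sum(1/n**2 for n in range(1, N + 1) if in_P(n)))   # ~ 0.0875 (tail ~ 1/N)
-print(pi**2/6 - 8*K**2/3)                                # 0.0874999529...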
-Note that the numerical evaluation of the product for $K$ is hopeless in this form. You could find an other expression for $K$ here (eq. 11), which has much faster convergence.<|endoftext|>
-TITLE: Interesting shapes using probability and discrete view of a problem
-QUESTION [9 upvotes]: Suppose we have a circle of radius $r$; we denote the distance between a point and the center of the circle by $d$. We then choose each point inside the circle with probability $\frac{d}{r}$, and turn it black (note that $\frac{d}{r}<1$). With these rules we get shapes like this: (With help of Java)
-
-
-
-The shapes made were pretty interesting to me. I decided to add a few lines of code to my program to keep count of the drawn points, and then divide the count by the area of the circle, to see what percentage of the circle is full. In every test I got a number approaching $2/3$ as the circle got bigger.
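-For reference, here is a minimal Python sketch of the same experiment (sample count and radius are arbitrary):
-import random
-def shaded_fraction(r=1.0, samples=200000):
-    shaded = 0
-    for _ in range(samples):
-        while True:                        # uniform point in the disc (rejection)
-            x, y = random.uniform(-r, r), random.uniform(-r, r)
-            if x*x + y*y <= r*r:
-                break
-        d = (x*x + y*y)**0.5
-        shaded += random.random() < d/r    # blacken with probability d/r
-    return shaded/samples
-print(shaded_fraction())                   # ~ 0.667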
-Problem A: Almost what percentage of the circle is black? I found an answer as follows:
-As all the points with distance $l$ from the center lie on a circle of radius $l$ and the same center, to find the "probable" number of black points, we shall consider all circles with radius $l$ ($00$ full of black points, there exists an infinite number of points so that their distance to the center of the circle is equal, and the probability of all of them being black is $0$ (because there is an infinite number of them, $d$ and $r$ are constant, $\frac{d}{r}<1$ and $\lim_{n\to\infty}{(\frac{d}{r})}^n=0$).
-It gets more interesting with a discrete view of the problem. Suppose we have a grid full of $1\times1$ squares with the size $(2r+1)\times(2r+1)$. We define the reference square with length $2l+1$ (we call $l$ its radius) as the set of squares "around" the reference square with length $2l-1$, and the reference square with length $1$ is the set of $8$ squares around the central square. We define the distance $d$ of a square from the central square by the radius of its reference square. Now we are ready to propose a problem similar to Problem A: Problem C: Suppose each square with distance $d$ to the central square turns black with a probability $\frac{d}{r}$. Prove that almost $2/3$ of the squares turn black, as $r\to \infty$.
-We begin our proof by showing that a reference square of radius $l$ contains exactly $8l$ squares. This is easy because there are $(2l+1)^2-(2l-1)^2=8l$ squares in a reference square with radius $l$. Now, each square in the reference square is black with probability $\frac{l}{r}$, so almost $\frac{l}{r}*8l=\frac{8l^2}{r}$ of the reference square is black (similar to the proof of Problem A). By summing up all the reference squares with radius $l=1$ to $r$ we get: $$\sum_{l=1}^{r} \frac{8l^2}{r} = \frac{8}{r}*\frac{(r)(r+1)(2r+1)}{6}=\frac{4(r+1)(2r+1)}{3}$$ and so the percentage of the whole square that is black tends, as $r\to\infty$, to: $$\lim_{r\to\infty} \frac{\frac{4(r+1)(2r+1)}{3}}{(2r+1)^2}=\lim_{r\to\infty}\frac{4(r+1)}{3(2r+1)}=\frac{2}{3}$$ as expected.
-Some problems, though, still remain.
-First, I would appreciate the verification of my solutions to problems A, B and C. The program I wrote in Java only fills out a finite number of "pixels"; does this represent Problem A or Problem C? Would there be a difference if we drew the circle with an infinite number of "points"? Second, the result of Problem B seems a little strange, because as $X$ tends to $\frac{3}{2}$, $P$ should go to $1$, since $\frac{2}{3}$ of the circle is black (as proved in Problem A). Then why do we get $P=0$ for all values of $X$? How can we explain this?
- Third, what is the connection between the discrete view of the problem and the main problem? Can someone generalize the proof of Problem C to prove Problem A? I think one can easily generalize Problem C to "circles" full of tiny squares and prove the fact similarly, but going to an infinite number of points instead of "pixels" (which are equivalent to tiny squares in Problem C) is still another matter.
-I would appreciate any help.
-
-REPLY [2 votes]: Your approaches to Problems A and C are correct. To eliminate the ambiguity between 0-dimensional points and 2-dimensional pixels and probability, we can determine the expected proportion $s$ of shaded pixels at radius $d$:
-$$E(s)=\lim_{n\to\infty} \sum_{i=1}^n \frac{1}{n} \cdot \frac{d}{r} = \frac{d}{r}$$
-That allows us to integrate from $0$ to $r$ to find the expected shaded area of the entire circle, based on each individual thin ring, just as you demonstrated.
-$$
-\frac{\int_0^r E(s) \cdot 2 \pi d \; dd}{\int_0^r 2 \pi d \; dd}
-=\frac{\int_0^r {d \over r} \cdot 2 \pi d \; dd}{\int_0^r 2 \pi d \; dd}
-=\frac{{2 \over 3} \pi r^2}{\pi r^2}
-={2 \over 3}
-$$
-For Problem B, I think the approach may be flawed. You're trying to determine the probability of the aggregate proportional area, which is a single value and can be determined empirically based on the probability of any given point being shaded. Essentially, $P(E(s)={1 \over x})$. Imagine that you have an urn with a single white ball:
-$$P(white) = 1\\
-P(not\,white) = 0$$
-Just as this theoretical urn has only one potential outcome, a white ball, so does your circle have only one potential value for the overall ratio of shaded area.
-$$P(Shaded \, area = {2 \over 3} \pi r ^2) = 1\\
-P(Shaded \, area \ne {2 \over 3} \pi r^2) = 0$$<|endoftext|>
-TITLE: Has this equation appeared before?
-QUESTION [6 upvotes]: I want to know if the following equation has appeared in mathematical literature before, or if it has any important significance.
-$$\sqrt{\frac{a+b+x}{c}}+\sqrt{\frac{b+c+x}{a}}+\sqrt{\frac{c+a+x}{b}}=\sqrt{\frac{a+b+c}{x}},$$
-where $a,b,c$ are any three fixed positive real and $x$ is the unknown variable.
-
-REPLY [2 votes]: This provides the explicit polynomial in $x$ (for those curious), though I'm not aware if the equation has appeared in the mathematical literature. We get rid of the square roots by multiplying out the $8$ sign changes,
-$$\prod^8 \left(\sqrt{\frac{a+b+x}{c}}\pm\sqrt{\frac{b+c+x}{a}}\pm\sqrt{\frac{a+c+x}{b}}\pm\sqrt{\frac{a+b+c}{x}}\right)=0$$
-then collecting powers of $x$. It turns out the $8$th-degree equation factors into a linear (cubed), a quadratic, and a cubic. For simplicity, let,
-$$\begin{aligned}
-p &= a+b+c\\
-q &= ab+ac+bc\\
-r &= abc
-\end{aligned}$$
-Then,
-$$(p+x)^3=0\tag1$$
-$$r^2 - 2 q r x + (q^2 - 4 p r) x^2 = 0\tag2$$
-$$p r^2 + r (-2 p q + 9 r) x + (p q^2 - 4 p^2 r + 6 q r) x^2 + (q^2 - 4 p r) x^3 = 0\tag3$$
-
-Example:
-
-Let $a,b,c = 1,2,4$, then
-$$(7+x)^3=0\\
--16 + 56 x + 7 x^2 = 0\\
--112 + 248 x - 119 x^2 + 7 x^3 = 0$$
-The roots of the quadratic solve,
-$$\sqrt{\frac{a+b+x}{c}}\pm\sqrt{\frac{b+c+x}{a}}+\sqrt{\frac{a+c+x}{b}}-\sqrt{\frac{a+b+c}{x}}=0$$
-while a root of the cubic solves,
-$$\sqrt{\frac{a+b+x}{c}}-\sqrt{\frac{b+c+x}{a}}-\sqrt{\frac{a+c+x}{b}}+\sqrt{\frac{a+b+c}{x}}=0$$
-and two others, while the linear root takes care of the remaining three sign changes.
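-A quick numerical check of the quadratic factor for this example (a numpy sketch; the sign pattern $(+,+,+,-)$ is the one satisfied by the positive root):
-import numpy as np
-a, b, c = 1.0, 2.0, 4.0
-p, q, r = a + b + c, a*b + a*c + b*c, a*b*c
-x = max(np.roots([q**2 - 4*p*r, -2*q*r, r**2]))   # positive root, ~0.27618
-combo = (np.sqrt((a + b + x)/c) + np.sqrt((b + c + x)/a)
-         + np.sqrt((a + c + x)/b) - np.sqrt((a + b + c)/x))
-print(x, combo)                                   # combo is ~ 0<|endoftext|>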
-TITLE: Why are there more Irrationals than Rationals given the density of $Q$ in $R$?
-QUESTION [5 upvotes]: I'm reading "Understanding Analysis" by Abbott, and I'm confused about the density of $Q$ in $R$ and how that ties to the cardinality of rational vs irrational numbers.
-First, on page 20, Theorem 1.4.3 "Density of $Q$ in $R$", Abbott states:
-
-For every two real numbers a and b
- with a < b, there exists a rational number r satisfying a < r < b.
-
-For which he provides a proof.
-Later, on page 22, in the section titled "Countable and Uncountable Sets" he states:
-
-Mentally, there is a temptation to think of $Q$ and $I$ as being intricately mixed together in equal proportions, but this turns out not to be the case...the irrational numbers far outnumber the rational numbers in making
- up the real line.
-
-My question is: how are these two statements not in direct contradiction? Given any closed interval of irrational numbers of cardinality $X$, $A$, shouldn't it be the case that we would have a corresponding set of $X-1$ rational numbers, $B$, where each rational in $B$ falls "between" two other irrationals in $A$?
-If this is not the case, how do we have so many more irrationals than rationals while still satisfying our theorem that between every two reals there is a rational number?
-I know there are other questions similar to this, but I haven't found an answer that explains this very well, and none that address this (perceived) contradiction.
-
-REPLY [4 votes]: Given any closed interval of irrational numbers of cardinality $X$, $A$, shouldn't be the case that we would have corresponding set of $X-1$ rational numbers, $B$, where each rational in $B$ falls "between" two other irrationals in $A$?
-
-That will certainly be true if you change the word "interval" to "set"
-and stipulate that $X$ is a finite integer.
-Consider a finite set $A$ containing $X$ distinct irrational numbers and nothing else, where $X \in \mathbb Z$.
-Then you can arrange the members of $A$ in increasing sequence, that is,
-write $A = \{a_i\}$, $1 \leq i \leq X$, such that $a_i > a_{i-1}$ when $i > 1$.
-And then you can insert $X - 1$ rational numbers in the "gaps" between
-the consecutive members of $\{a_i\}$.
-The problem with this in the more general case is that there are more than a finite number of irrational numbers in any closed interval in $\mathbb R$.
-In fact, there are more than a countable number of them.
-You can't just go and insert a rational number between each consecutive pair
-of irrational numbers, because there is no such thing as a consecutive pair of irrational numbers in an interval. In fact, take any two irrational numbers $r, s$ in the interval; there will be an uncountably infinite number of irrational numbers between $r$ and $s$.
-We do indeed have a rational number $q$ that falls between $r$ and $s$,
-in fact a countably infinite set of such numbers; but we also have an uncountably infinite set of irrational numbers that fall between $r$ and $s$. There is no way to organize these numbers into an increasing sequence of alternating irrational and rational numbers, like this:
-$$ r_1 < q_1 < r_2 < q_2 < r_3 < \cdots < r_{X-1} < q_{X-1} < r_X, $$
-so any counting argument based on imagining such a sequence is incorrect.<|endoftext|>
-TITLE: There is no Baire bijection between $\mathbb R$ and the set of functions $\mathbb Z\to\mathbb R$ modulo shifts
-QUESTION [5 upvotes]: Let $X$ denote the set $\mathbb{R}^\mathbb{Z}$ (the set of all functions from integers to reals),
-and $\sim$ the equivalence relation on $X$ defined by:
-$f \sim g$ iff there is a $z \in \mathbb{Z}$ such that for all $z' \in \mathbb{Z}: f(z'+z) = g(z')$.
-Consider the quotient set $X/\sim$.
-Some years ago, Mike Oliver made the remark that no bijection between $X/\sim$ and $\mathbb{R}$ could possibly be a Baire function.
-I could do with a hint (or two) on how to prove that.
-
-REPLY [2 votes]: Forget about $\mathbb{R}^\mathbb{Z}$; just look at $2^\mathbb{Z}$.
-Hint 1: any set $A\subseteq 2^\mathbb{Z}$ which has the Baire property and which is $\sim$-saturated (in the sense that $x\in A$, $x\sim y$ implies $y\in A$) must be either meager or comeager.
-Hint 2: if $f : 2^\mathbb{Z}\to \mathbb{R}$ is Baire-measurable and constant on every $\sim$-equivalence class, think about what $f^{-1}(U)$ could be, where $U$ ranges over open sets. Maybe do something with a countable basis for $\mathbb{R}$...
-I hope this is enough of a hint.<|endoftext|>
-TITLE: Symbol for "the greater of the two values"
-QUESTION [11 upvotes]: I'm looking for an operator that returns the greater of two values.
-Here's an example. If $a=5$, $b=6$ and $???$ is the operator, I'd like to have $x$ equal $b$ when I do $x=a???b$, since $b$ is the larger of the two values.
-
-REPLY [3 votes]: For $S=\{a_1,\dots,a_n\}$, define the operator $*$ inductively: for two arguments let $*(a_1,a_2)=\dfrac{|a_2-a_1|+a_2+a_1}{2}$, and for $n+1$ arguments let $*(a_1,\dots,a_n,a_{n+1})=*(*(a_1,\dots,a_n),a_{n+1})$. Thus $*(a_1,\dots,a_n)=\max S$ for any set $S=\{a_1,\dots,a_n\}$.
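-The two-argument identity is easy to test; a tiny illustration (my addition, not from the original answer):
-```python
-def star(a, b):
-    # max of two numbers via (|b - a| + a + b) / 2
-    return (abs(b - a) + a + b) / 2
-
-def star_n(values):
-    # fold the binary operator over a list, as in the inductive definition
-    result = values[0]
-    for v in values[1:]:
-        result = star(result, v)
-    return result
-
-print(star(5, 6))               # 6.0
-print(star_n([3, 1, 4, 1, 5]))  # 5.0
-```<|endoftext|>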
-TITLE: An elementary verification of the equivalence between two expressions for $e^x$
-QUESTION [5 upvotes]: I would appreciate some constructive comments on the following argument for
-\begin{equation*}
-\sum_{n=0}^{\infty} \frac{x^{n}}{n!}
-= \lim_{n\to\infty} \left(1 + \frac{x}{n}\right)^{n} .
-\end{equation*}
-I understand that there are several different arguments for it. I like that this does not involve the natural logarithm function. I looked in many elementary textbooks on real analysis for such an argument. I only found one, but it was in the special case $x = 1$, and it was flawed. The only analysis techniques used are the convergence of the sequence defined by
-\begin{equation*}
-\left(1 + \frac{x}{n}\right)^{n} ,
-\end{equation*}
-and the absolute convergence of
-\begin{equation*}
-\sum_{n=0}^{\infty} \frac{x^{n}}{n!} .
-\end{equation*}
-Here it is.
-Demonstration
-According to the Binomial Theorem, for every positive integer $n$,
-\begin{align*}
-&\sum_{i=0}^{n} \frac{x^{i}}{i!} - \left(1 + \frac{x}{n}\right)^{n} \\
-&\qquad = \sum_{i=0}^{n} \frac{x^{i}}{i!} - \sum_{i=0}^{n} \binom{n}{i} \frac{x^{i}}{n^{i}} \\
-&\qquad = \sum_{i=0}^{n} \left[\frac{x^{i}}{i!} - \binom{n}{i} \frac{x^{i}}{n^{i}} \right] \\
-&\qquad = \sum_{i=0}^{n} \left[\frac{1}{i!} - \frac{1}{n^{i}} \binom{n}{i} \right] x^{i} \\
-&\qquad = \sum_{i=2}^{n} \left[\frac{1}{i!} - \frac{1}{n^{i}} \binom{n}{i} \right] x^{i} .
-\end{align*}
-For each integer $2 \leq i \leq n$,
-\begin{align*}
-\frac{1}{n^{i}} \binom{n}{i} &= \frac{1}{n^{i}} \cdot \frac{n!}{i!(n-i)!} \\
-&= \frac{1}{n^{i}} \cdot \frac{n(n - 1)(n - 2) \cdots (n - i + 1)}{i!} \\
-&= \frac{1}{i!} \cdot \frac{n(n - 1) (n - 2) \cdots (n - (i - 1))}{n^{i}} \\
-&= \frac{1}{i!} \left(1 - \frac{1}{n}\right) \left(1 - \frac{2}{n}\right) \cdots \left(1 - \frac{i - 1}{n}\right) .
-\end{align*}
-So,
-\begin{equation*}
-\sum_{i=0}^{n} \frac{x^{i}}{i!} - \left(1 + \frac{x}{n}\right)^{n}
-= \sum_{i=2}^{n} \left[1 - \left(1 - \frac{1}{n}\right) \left(1 - \frac{2}{n}\right) \cdots \left(1 - \frac{i - 1}{n}\right) \right] \frac{x^{i}}{i!}
-\end{equation*}
-According to the Triangle Inequality, for each pair of positive integers $2 \leq k < n$,
-\begin{align*}
-&\left\vert \sum_{i=0}^{n} \frac{x^{i}}{i!} - \left(1 + \frac{x}{n}\right)^{n} \right\vert \\
-&\qquad \leq \left\vert \sum_{i=2}^{k} \left[1 - \left(1 - \frac{1}{n}\right) \left(1 - \frac{2}{n}\right) \cdots \left(1 - \frac{i - 1}{n}\right) \right] \frac{x^{i}}{i!} \right\vert \\
-&\qquad\qquad + \left\vert \sum_{i=k+1}^{n} \frac{x^{i}}{i!} \right\vert
-+ \left\vert \sum_{i=k+1}^{n} \frac{1}{n^{i}} \binom{n}{i} x^{i} \right\vert \\
-&\qquad \leq \sum_{i=2}^{k} \left[1 - \left(1 - \frac{1}{n}\right) \left(1 - \frac{2}{n}\right) \cdots \left(1 - \frac{i - 1}{n}\right) \right] \frac{\vert x \vert^{i}}{i!} \\
-&\qquad\qquad + \sum_{i=k+1}^{n} \frac{\vert x \vert^{i}}{i!}
-+ \sum_{i=k+1}^{n} \frac{1}{n^{i}}\binom{n}{i} \vert x \vert^{i} .
-\end{align*}
-\begin{align*}
-&\sum_{i=k+1}^{n} \frac{1}{n^{i}} \binom{n}{i} \vert x \vert^{i} \\
-&\qquad = \sum_{i=k+1}^{n} \frac{1}{i!} \left(1 - \frac{1}{n}\right) \left(1 - \frac{2}{n}\right) \cdots \left(1 - \frac{i - 1}{n}\right)
-\vert x \vert^{i} \\
-&\qquad = \frac{1}{(k+1)!} \left(1 - \frac{1}{n}\right) \left(1 - \frac{2}{n}\right) \cdots \left(1 - \frac{k}{n}\right) \vert x \vert^{k+1} \\
-&\qquad\qquad \!\begin{aligned}[t]
-&+ \frac{1}{(k+2)!} \left(1 - \frac{1}{n}\right) \left(1 - \frac{2}{n}\right) \cdots \left(1 - \frac{k+1}{n}\right) \vert x \vert^{k+2} \\
-&+ \frac{1}{(k+3)!} \left(1 - \frac{1}{n}\right) \left(1 - \frac{2}{n}\right) \cdots \left(1 - \frac{k+2}{n}\right) \vert x \vert^{k+3} \\
-&+\ldots
-+ \frac{1}{n!} \left(1 - \frac{1}{n}\right) \left(1 - \frac{2}{n}\right) \cdots \left(1 - \frac{n-1}{n}\right) \vert x \vert^{n} .
-\end{aligned} \\
-&\qquad = \frac{1}{k!} \left(1 - \frac{1}{n}\right) \left(1 - \frac{2}{n}\right) \cdots \left(1 - \frac{k-1}{n}\right) \vert x \vert^{k} \\
-&\qquad\qquad \!\begin{aligned}[t]
-&\biggl[\frac{1}{k+1} \left(1 - \frac{k}{n}\right) \vert x \vert \\
-&\hphantom{\biggl[\vphantom{\frac{1}{k+1}}}+ \frac{1}{(k+1)(k+2)} \left(1 - \frac{k}{n}\right) \left(1 - \frac{k+1}{n}\right) \vert x \vert^{2} \\
-&\hphantom{\biggl[\vphantom{\frac{1}{k+1}}}+ \frac{1}{(k+1)(k+2)(k+3)} \left(1 - \frac{k}{n}\right) \left(1 - \frac{k+1}{n}\right) \left(1 - \frac{k+2}{n}\right) \vert x \vert^{3} \\
-&\hphantom{\biggl[\vphantom{\frac{1}{k+1}}}+ \ldots \\
-&\hphantom{\biggl[\vphantom{\frac{1}{k+1}}}
-+ \frac{1}{(k+1)(k+2) \cdots n} \left(1 - \frac{k}{n}\right) \left(1 - \frac{k+1}{n}\right) \cdots \left(1 - \frac{n-1}{n}\right)
-\vert x \vert^{n-k}
-\biggr]
-\end{aligned} \\
-&\qquad < \frac{\vert x \vert^{k}}{k!} \left[
-\frac{\vert x \vert}{k+1} + \left(\frac{\vert x \vert}{k+1}\right)^{2}
-+ \ldots + \left(\frac{\vert x \vert}{k+1}\right)^{n-k}
-\right] . \\
-\text{So, if $k \geq \vert x \vert$,} \\
-&\sum_{i=k+1}^{n} \frac{1}{n^{i}} \binom{n}{i} \vert x \vert^{i} \\
-&\qquad < \frac{\vert x \vert^{k}}{k!} \sum_{i=1}^{\infty} \left(\frac{\vert x \vert}{k+1} \right)^{i} \\
-&\qquad = \frac{\vert x \vert^{k}}{k!} \cdot \frac{\dfrac{\vert x \vert}{k + 1}}{1 - \dfrac{\vert x \vert}{k+1}} \\
-&\qquad = \frac{\vert x \vert^{k}}{k!} \cdot \frac{\vert x \vert}{k + 1 - \vert x \vert} , \\
-\text{and if $k \geq 2\vert x \vert$,} \\
-&\sum_{i=k+1}^{n} \frac{1}{n^{i}} \binom{n}{i} \vert x \vert^{i}
-< \frac{\vert x \vert^{k}}{k!} .
-\end{align*}
-By the absolute convergence of
-\begin{equation*}
-\sum_{n=0}^{\infty} \frac{x^{n}}{n!} ,
-\end{equation*}
-for every $\epsilon > 0$, there is a big enough positive integer $K$ such that for every integer $k \geq K$,
-\begin{equation*}
-\sum_{i=k}^{\infty} \frac{\vert x \vert ^{i}}{i!}
-< \frac{\epsilon}{3} ,
-\end{equation*}
-and so,
-\begin{equation*}
-\frac{\vert x \vert^{k}}{k!}
-< \sum_{i=k}^{\infty} \frac{\vert x \vert ^{i}}{i!}
-< \frac{\epsilon}{3}
-\qquad \text{and} \qquad
-\sum_{i=k+1}^{\infty} \frac{\vert x \vert ^{i}}{i!}
-< \sum_{i=k}^{\infty} \frac{\vert x \vert ^{i}}{i!}
-< \frac{\epsilon}{3} .
-\end{equation*}
-So, if $k \geq \max\{2\vert x \vert, \, K\}$, and if $n > k$,
-\begin{equation*}
-\sum_{i=k+1}^{n} \frac{1}{n^{i}} \binom{n}{i} \vert x \vert^{i}
-< \frac{\vert x \vert^{k}}{k!}
-< \frac{\epsilon}{3} .
-\end{equation*}
-Likewise, since for each integer $2 \leq i \leq k$,
-\begin{equation*}
-\lim_{n\to\infty} \left[1 - \left(1 - \frac{1}{n}\right) \left(1 - \frac{2}{n}\right) \cdots \left(1 - \frac{i - 1}{n}\right) \right] = 0 ,
-\end{equation*}
-there is a big enough positive integer $N$ such that for every integer $n \geq N$,
-\begin{equation*}
-\sum_{i=2}^{k} \left[ 1 - \left(1 - \frac{1}{n}\right) \left(1 - \frac{2}{n}\right) \cdots \left(1 - \frac{i - 1}{n}\right) \right]
-< \frac{\epsilon}{3 \cdot \max\{\vert x \vert^{k}, \, 1\}} ,
-\end{equation*}
-and so,
-\begin{equation*}
-\sum_{i=2}^{k} \left[1 - \left(1 - \frac{1}{n}\right) \left(1 - \frac{2}{n}\right) \cdots \left(1 - \frac{i - 1}{n}\right) \right] \frac{\vert x \vert^{i}}{i!}
-< \frac{\epsilon}{3} .
-\end{equation*}
-Consequently, for any positive integers $k \geq \max\{2\vert x \vert, \, K\}$ and $n > \max\{k, \, N\}$,
-\begin{equation*}
-\left\vert \sum_{i=0}^{n} \frac{x^{i}}{i!} - \left(1 + \frac{x}{n}\right)^{n} \right\vert < \epsilon .
-\end{equation*}
-Equivalently,
-\begin{equation*}
-\sum_{i=0}^{\infty} \frac{x^{i}}{i!} = \lim_{n\to\infty} \left(1 + \frac{x}{n}\right)^{n} .
-\end{equation*}
-
-REPLY [2 votes]: Here is another simpler approach which does not use logarithms (so that this is not exactly an answer, but it is rather too long for a comment). Let $$E_{n}(x) = 1 + x + \frac{x^{2}}{2!} + \cdots + \frac{x^{n}}{n!}\tag{1}$$ then we know that $$E(x) = \lim_{n \to \infty}E_{n}(x)$$ exists for all $x \in \mathbb{R}$. Using the binomial theorem it is easy to show that $$F_{n}(x) = \left(1 + \frac{x}{n}\right)^{n} \leq E_{n}(x) \leq \left(1 - \frac{x}{n}\right)^{-n} = G_{n}(x)\tag{2}$$ for $x > 0$ and $n > x$. (Clearly, by the binomial theorem, each of the expressions $F_{n}(x), E_{n}(x)$ is a finite series of $(n + 1)$ terms and each term of $F_{n}(x)$ is less than or equal to the corresponding term of $E_{n}(x)$. For $0 < x < n$ the function $G_{n}(x)$ can be expressed as an infinite series via the binomial theorem for general index. And again each term of $E_{n}(x)$ is less than or equal to the corresponding term of $G_{n}(x)$. The restriction $0 < x < n$ is needed for the convergence of the infinite series representation of $G_{n}(x)$.)
-Further it can be shown that both $(1 + x/n)^{n}$ and $(1 - x/n)^{-n}$ tend to the same limit as $n \to \infty$ (you show that the ratio of these two expressions tends to $1$). It thus follows that for $x > 0$ we have $$E(x) = \lim_{n \to \infty}E_{n}(x) = \lim_{n \to \infty}\left(1 + \frac{x}{n}\right)^{n} = \lim_{n \to \infty}\left(1 - \frac{x}{n}\right)^{-n}\tag{3}$$ The relation obviously holds for $x = 0$. For negative $x$ it is easy to see that $E(x)E(-x) = 1$ (by multiplication of infinite series) and the proof is easily extended to negative values of $x$. (The fact that $F_{n}(-x) = 1/G_{n}(x)$ will be of help here because it implies that the common limit of $F_{n}(x)$ and $G_{n}(x)$, say $F(x)$, will satisfy $F(x)F(-x) = 1$ similar to $E(x)E(-x) = 1$.)
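-A quick numeric illustration of the sandwich $(2)$ and the common limit (my addition; here $x = 1.5$, but any $x > 0$ works):
-```python
-import math
-
-x = 1.5
-series = sum(x**k / math.factorial(k) for k in range(60))   # ~ e^x
-for n in (10, 100, 1000, 10000):
-    lower = (1 + x/n)**n          # F_n(x)
-    upper = (1 - x/n)**(-n)       # G_n(x)
-    print(n, lower, series, upper)   # lower <= series <= upper, both -> e^x
-```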
-
-The approach given by the OP is correct but a bit lengthy, with a lot of intermediate steps, and needs some patience to follow. A better approach in the same direction is to appeal to the general theorem called the "Monotone Convergence Theorem":
-If the doubly indexed sequence $a_{j, k}$ is non-negative and $a_{j, k} \leq a_{j + 1, k}$ for all natural numbers $j, k$, then $$\lim_{j \to \infty}\sum_{k = 1}^{\infty}a_{j, k} = \sum_{k = 1}^{\infty}\lim_{j \to \infty}a_{j, k}$$
-For the current case let's set $$a_{j, k} = \binom{j}{k}\left(\frac{x}{j}\right)^{k}$$ if $k \leq j$ and $a_{j, k} = 0$ otherwise. Then we can see that $$\left(1 + \frac{x}{j}\right)^{j} = \sum_{k = 0}^{\infty}a_{j, k}$$ It is easily verified that $a_{j, k} \leq a_{j + 1, k}$ for all $j, k$ if $x > 0$. Hence the monotone convergence theorem applies and clearly $$\lim_{j \to \infty}a_{j, k} = \frac{x^{k}}{k!}$$ hence we have the result $$\lim_{j \to \infty}\left(1 + \frac{x}{j}\right)^{j} = \sum_{k = 0}^{\infty}\frac{x^{k}}{k!}$$ For $x < 0$ we again need the multiplicative properties of the series for $e^{x}$ namely that $e^{x}e^{-x} = 1$.<|endoftext|>
-TITLE: Reflected rays /lines bouncing in a circle?
-QUESTION [5 upvotes]: Consider the following situation.
-You are standing in a room that is perfectly circular with mirrors for walls. You shine a light, a single ray of light, in a random direction. Will the light ever return to its original position (the single point where the light originated from)? If so, will it return to its position infinitely many times or only finitely many times? Will it ever return to its original position in the original direction?
-I thought of this little teaser when reading about a problem concerning rays in a circle and wondered about this question.
-As for my attempts, this is well beyond my skill.
-
-REPLY [4 votes]: This can be solved completely by characterizing the set of points on the ray which are the same distance from the center as the original point. We consider two symmetries which generate all such points. Here is a diagram showing all the points we consider:
-[diagram omitted: the circle with center $O$, the chord $AB$, and the points $C$ and $C'$]
-In particular, if our initial point is $C$, consider the line $AB$ which is coincident with the ray and which has $A$ and $B$ both on the unit circle and $B$ in the forwards direction of the ray. This segment would be visible if you shot a ray in either direction, and it did not bounce. Notice that, if $O$ is the center of the circle, then the angle $\alpha=\angle AOB$ is significant since rotating by $\alpha$ (in the same direction as the rotation that takes $B$ to $A$) takes one segment of the ray to the next segment after it bounces.
-One may notice that this rotation preserves the distance from the center of the circle - so every point ever at this distance in any segment of the light's path is such a rotation of one at that distance on $AB$. There may be at most two such points - the point $C$ itself, and the point $C'$ which is the reflection of $C$ through the perpendicular bisector of $AB$. Let us set $\beta=\angle COC'$ which should be taken as a signed angle - so if the rotation taking $C$ to $C'$ is the opposite direction as the one taking $A$ to $B$, we take $\beta$ to be negative. We will always consider $\alpha$ to be positive.
-Notice that if the ray returns, then there is some non-trivial rotation taking $C$ to itself. That means that one of the following is true (for some integers $n,k$):
-$$n\alpha = 2\pi k$$
-$$n\alpha + \beta = 2\pi k.$$
-This completely characterizes the problem. One should note that any point inside a circle along with a ray extending from it may be described up to rotation and reflection by a pair $(\alpha,\beta)$ with $0\leq \alpha \leq \pi$ and $|\beta|\leq \alpha$.
-The case of $C$ being on the circumference corresponds to the case $|\beta|=\alpha$. One should note that the first set of solutions is just when $\alpha$ is a rational multiple of $\pi$, in which case the ray is truly periodic. Moreover, if $\beta$ is a rational multiple of $\pi$, then the second set of solutions doesn't add any new solutions for $\alpha$ (meaning that if the ray ever returned, it would be bouncing periodically).
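-Not part of the characterization, but here is a small simulation sketch (my addition) for experimenting: it bounces a ray inside the unit circle and reports how close any segment of the path ever comes back to the starting point.
-```python
-import numpy as np
-
-def step(p, d):
-    # intersect the ray p + t*d (t > 0, |d| = 1) with the unit circle
-    b = np.dot(p, d)
-    t = -b + np.sqrt(b*b - (np.dot(p, p) - 1.0))
-    q = p + t * d
-    return q, d - 2 * np.dot(d, q) * q   # reflect d across the normal at q
-
-p0 = np.array([0.5, 0.0])                 # the light source C
-d = np.array([np.cos(1.0), np.sin(1.0)])  # an arbitrary initial direction
-p, best = p0.copy(), np.inf
-for _ in range(2000):
-    q, d = step(p, d)
-    u = q - p                             # distance from p0 to segment [p, q]
-    s = np.clip(np.dot(p0 - p, u) / np.dot(u, u), 0.0, 1.0)
-    best = min(best, np.linalg.norm(p + s*u - p0))
-    p = q
-print(best)   # how close the bouncing ray comes back to the start
-```<|endoftext|>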
-TITLE: Finding the common integer solutions to $a + b = c \cdot d$ and $a \cdot b = c + d$
-QUESTION [14 upvotes]: I find nice that $$ 1+5=2 \cdot 3 \qquad 1 \cdot 5=2 + 3 .$$
-
-Do you know if there are other integer solutions to
- $$ a+b=c \cdot d \quad \text{ and } \quad a \cdot b=c+d$$
- besides the trivial solutions $a=b=c=d=0$ and $a=b=c=d=2$?
-
-REPLY [6 votes]: First, note that if $(a, b, c, d)$ is a solution, so are $(a, b, d, c)$, $(c, d, a, b)$ and the five other reorderings these permutations generate.
-We can quickly dispense with the case that all of $a, b, c, d$ are positive using an argument of @dREaM: If none of the numbers is $1$, we have
-$ab \geq a + b = cd \geq c + d = ab$, so $ab = a + b$ and $cd = c + d$, and we may as well assume $a \geq b$ and $c \geq d$. In particular, since $a, b, c, d > 1$, we have $a b \geq 2a \geq a + b = ab$, so $a = b = 2$ and likewise $c = d = 2$, giving the solution $$(2, 2, 2, 2).$$ On the other hand, if at least one number is $1$, say, $a$, we have $b = c + d$ and $1 + b = cd$, so $1 + c + d = cd$, and we may as well assume $c \leq d$. Rearranging gives $(c - 1)(d - 1) = 2$, so the only solution is $c = 2, d = 3$, giving the solution
-$$(1, 5, 2, 3).$$
-Now suppose that at least one of $a, b, c, d$, say, $a$ is $0$. Then, we have $0 = c + d$ and $b = cd$, so $c = -d$ and $b = -d^2$. This gives the solutions $$A_s := (0, -s^2, -s, s), \qquad s \in \Bbb Z .$$
-We are left with the case for which at least one of $a, b, c, d$, say, $a$, is negative, and none is $0$. Suppose first that none of the variables is $-1$. If $b < 0$, we must have $cd = a + b < 0$, and so we may assume $c > 0 > d$. On the other hand, $c + d = ab > 0$, and so (using a variation of the argument for the positive case) we have
-$$ab = (-a)(-b) \geq (-a) + (-b) = -(a + b) = -cd \geq c > c + d = ab,$$ which is absurd. If $b > 0$, we have $c + d = ab < 0$, so at least one of $c, d$, say, $c$ is negative. Moreover, we have $cd = a + b$, so $d$ and $a + b$ have opposite signs. If $d < 0$, then since $c, d < 0$, we are, by exploiting the appropriate permutation, in the above case in which $a, b < 0$, so we may assume that $d > 0$, and hence that $a + b < 0$. Now,
-$$ab \leq a + b = cd < c + d = ab,$$ which again is absurd, so there are no solutions in this case. This leaves only the case in which at least one of $a, b, c, d$ is $-1$, say, $a$. Then, we have $-b = c + d$ and $-1 + b = cd$, so $-1 + (- c - d) = cd$. Rearranging gives $(c + 1)(d + 1) = 0$, so we may assume $c = -1$, giving (up to permutation) the $1$-parameter family of solutions $$B_t := (-1, t, -1, 1 - t), \qquad t \in \Bbb Z,$$ I mentioned in my comment (this includes two solutions, $B_0$ and $B_1$, which are equivalent by a permutation, that include a zero entry). This exhausts all of the possibilities; in summary:
-
-Any integer solution to the system
- $$\left\{\begin{array}{rcl}a + b \!\!\!\!& = & \!\!\!\! cd \\ ab \!\!\!\! & = & \!\!\!\! c + d \end{array}\right.$$
- is equal (up to the admissible permutations mentioned at the beginning of this answer) to exactly one of
-
-$(1, 5, 2, 3)$
-$(2, 2, 2, 2)$
-$A_s := (0, -s^2, -s, s)$, $s \geq 0$, and
-$B_t := (-1, t, -1, 1 - t)$, $t \geq 2$.
-
-
-The restrictions on the parameters $s, t$ are consequences of the redundancy in the solutions we found: $A_{-s}$ is an admissible permutation of $A_s$, $B_{1 - t}$ an admissible permutation of $B_t$, and $B_1$ one of $A_1$.
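-A brute-force cross-check over a small search box (my addition):
-```python
-from itertools import product
-
-sols = [q for q in product(range(-10, 11), repeat=4)
-        if q[0] + q[1] == q[2]*q[3] and q[0]*q[1] == q[2] + q[3]]
-print(len(sols))
-print([q for q in sols if all(x >= 0 for x in q)])   # the non-negative ones
-```
-Every quadruple found should be an admissible permutation of $(1, 5, 2, 3)$, $(2, 2, 2, 2)$, some $A_s$, or some $B_t$.<|endoftext|>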
-TITLE: how to make a $3$-$(10,4,1)$ design using graphs
-QUESTION [6 upvotes]: A $t$-$(v,k,\lambda)$ design is defined this way :
-We have a set with $v$ elements (called the points). We also have a collection of distinguished subsets each having $k$ elements, we call each of these subsets a block. Each set of $t$ points appear together in exactly $\lambda$ blocks.
-I want to make a $3$-$(10,4,1)$ using graphs. According to this link:
-http://www.sciencedirect.com/science/article/pii/0012365X83901176
-( page 2 )
-I know which graph I should use but I don't know how to convert that graph to a $3$-$(10,4,1)$ design.
-Note: one of my friends solved this problem using a $K_5$ graph. He said that we should see every edge of $K_5$ as a vertex in our new graph. But still we don't know why $K_5$ and why we should use this method and how to build that $3$-$(10,4,1)$ design.
-
-REPLY [6 votes]: You take $\Gamma = K_{5}$, and consider the edges of $\Gamma$ to be the points of your design. Then you have some specified subgraphs, given in the picture on page 2 that you refer to; they all have $4$ edges. The blocks of your design are the subgraphs of $\Gamma$ that are isomorphic to one of these specified subgraphs (considered as sets of edges).
-For example, take any $3$ edges of $\Gamma$. If they all share a common vertex, then they occur together in exactly one block, corresponding to the first picture in the list for these parameters. If they form a triangle, then they occur in exactly one block, corresponding to the second picture. Otherwise, they occur in a block corresponding to the third picture (try to show this). This argument shows that this construction does in fact give a $3$-$(10,4,1)$ design.
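-The counting argument can also be checked by machine. Below is a sketch (my addition); since the paper's pictures are not reproduced here, the three block shapes used (all four edges at a vertex, a triangle plus the edge on the two remaining vertices, and a $4$-cycle) are my reading of the construction and should be treated as an assumption:
-```python
-from itertools import combinations
-
-V = range(5)
-points = list(combinations(V, 2))        # the 10 edges of K5
-
-blocks = []
-for v in V:                              # type 1: the 4 edges at a vertex
-    blocks.append({e for e in points if v in e})
-for T in combinations(V, 3):             # type 2: triangle + opposite edge
-    rest = tuple(sorted(set(V) - set(T)))
-    blocks.append(set(combinations(T, 2)) | {rest})
-for Q in combinations(V, 4):             # type 3: the three 4-cycles on Q
-    a, b, c, d = Q
-    for cyc in ((a, b, c, d), (a, b, d, c), (a, c, b, d)):
-        blocks.append({tuple(sorted((cyc[i], cyc[(i + 1) % 4])))
-                       for i in range(4)})
-
-assert len(blocks) == 30 and all(len(B) == 4 for B in blocks)
-for triple in combinations(points, 3):
-    assert sum(set(triple) <= B for B in blocks) == 1
-print("every 3 points lie in exactly one block")
-```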
-The selection of $K_{5}$ is just related to the construction, in this paper they always start with a complete graph $K_{n}$ for some $n$. Notice that $K_{5}$ has $(5\cdot 4)/2 = 10$ edges, which is the same as the number of points you want for your design. Also notice that all the graphs they give in the table for this parameter set have $5$ vertices, they are all supposed to be taken as subgraphs of $K_{5}$. Because of the way this is constructed, this gives the symmetric group $S_{5}$ as the automorphism group of the graph, and so $S_{5}$ will also act as an automorphism group of the design.<|endoftext|>
-TITLE: Finding the sum of the infinite series whose general term is not easy to visualize: $\frac16+\frac5{6\cdot12}+\frac{5\cdot8}{6\cdot12\cdot18}+\cdots$
-QUESTION [5 upvotes]: I am to find the sum of the infinite series
-$$\frac{1}{6}+\frac{5}{6\cdot12}+\frac{5\cdot8}{6\cdot12\cdot18}+\frac{5\cdot8\cdot11}{6\cdot12\cdot18\cdot24}+\cdots$$
-I cannot figure out the general term of this series. It looks like a power series, as follows:
-$$\frac{1}{6}+\frac{5}{6^2\cdot2!}+\frac{5\cdot8}{6^3\cdot3!}+\frac{5\cdot8\cdot11}{6^4\cdot4!}+\cdots$$
-So how does one solve it, and is there an easy way to find the general term of this type of series?
-
-REPLY [4 votes]: Let us consider $$\Sigma=\frac{1}{6}+\frac{5}{6\times 12}+\frac{5\times8}{6\times12\times18}+\frac{5\times8\times11}{6\times12\times18\times24}+\cdots$$ and let us rewrite it as $$\Sigma=\frac{1}{6}+\frac 16\left(\frac{5}{ 12}+\frac{5\times8}{12\times18}+\frac{5\times8\times11}{12\times18\times24}+\cdots\right)=\frac{1}{6}+\frac 16 \sum_{n=0}^\infty S_n$$ using $$S_n=\frac{\prod_{i=0}^n(5+3i)}{\prod_{i=0}^n(12+6i)}$$ Using the properties of the gamma function, we have $$\prod_{i=0}^n(5+3i)=\frac{5\cdot 3^n\, \Gamma \left(n+\frac{8}{3}\right)}{\Gamma \left(\frac{8}{3}\right)}$$ $$\prod_{i=0}^n(12+6i)=6^{n+1} \Gamma (n+3)$$ which makes $$S_n=\frac{5\cdot 2^{-n-1}\, \Gamma \left(n+\frac{8}{3}\right)}{3\, \Gamma \left(\frac{8}{3}\right) \Gamma (n+3)}$$ $$\sum_{n=0}^\infty S_n=\frac{10 \left(3\cdot 2^{2/3}-4\right) \Gamma \left(\frac{2}{3}\right)}{9\, \Gamma \left(\frac{8}{3}\right)}=3\cdot 2^{2/3}-4$$ $$\Sigma=\frac{1}{\sqrt[3]{2}}-\frac{1}{2}$$
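-A numeric check of the closed form (my addition), using the ratio $\frac{3k+2}{6(k+1)}$ between consecutive terms:
-```python
-t = total = 1/6
-for k in range(1, 80):
-    t *= (3*k + 2) / (6*(k + 1))   # ratio of consecutive terms
-    total += t
-print(total, 2**(-1/3) - 0.5)      # both ~ 0.2937005
-```<|endoftext|>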
-TITLE: Number of $k$-dimensional subspaces in $V$
-QUESTION [6 upvotes]: I know my following question is somewhat similar to this one, but I still need help.
-How many $k$-dimensional subspaces of an $n$-dimensional vector space $V$ over the finite field $F$ with $q$ elements are there?
-
-REPLY [13 votes]: Let me try to answer your question:
-Let us first look at the case of one-dimensional subspaces: Every one-dimensional subspace is spanned by a non-zero vector, of which there are $q^n-1$. Two of these vectors span the same subspace if and only if they are non-zero scalar multiples of each other; we have $q-1$ such scalars. Thus we have $\frac{q^n-1}{q-1}$ one-dimensional subspaces.
-Similarly we can count the number of $k$-dimensional subspaces for $0 \leq k \leq n$. We will need the following formula:
-
-Proposition Let $W$ be an $n$-dimensional vector space over $\mathbb{F}_q$, the finite field with $q$ elements, and let $0 \leq k \leq n$. Then there exist
- $$
- \frac{(q^n-1)(q^n-q)(q^n-q^2) \dotsm (q^n-q^{k-1})}{k!}
-$$
- many linearly independent subsets of $W$ consisting of $k$ elements.
-Proof: We first figure out the number of linearly independent families of $k$ elements: For the first member $b_1$ we can pick any non-zero vector, so we have $q^n-1$ choices. For the second member $b_2$ we can take any vector not in the span of $b_1$. So we have $q^n-q$ choices for $b_2$. Continuing this we can pick $b_i$ arbitrarily outside of the span of the previous members $b_1, \dotsc, b_{i-1}$, so we have $q^n - q^{i-1}$ choices for $b_i$ (the span of $b_1, \dotsc, b_{i-1}$ has $q^{i-1}$ elements). Thus we find that we have
- $$
- (q^n-1)(q^n-q)(q^n-q^2) \dotsm (q^n-q^{k-1})
-$$
- many linearly independent families $(b_1, \dotsc, b_k)$.
- Two of these families represent the same linearly independent subset if and only if they are the same up to reordering of their members. Because there are $k!$ such reorderings we have
- $$
- \frac{(q^n-1)(q^n-q)(q^n-q^2) \dotsm (q^n-q^{k-1})}{k!}
-$$
- many linearly independent subsets consisting of $k$ elements.
-
-We know that every $k$-dimensional subspace of $V$ is spanned by $k$ linearly independent vectors of $V$. So by the above proposition we have at most
-$$
- \frac{(q^n-1)(q^n-q)(q^n-q^2) \dotsm (q^n-q^{k-1})}{k!}
-$$
-$k$-dimensional subspaces.
-Now the same subspace $U$ is spanned by many different linearly independent subsets, say $L$ many. These $L$ subsets are precisely the bases of $U$. By the above formula we know that there are
-$$
- \frac{(q^k-1)(q^k-q) \dotsm (q^k-q^{k-1})}{k!}
-$$
-many such bases.
-Putting this together we find that there are
-$$
- \frac{
- \left( \frac{(q^n-1)(q^n-q)(q^n-q^2) \dotsm (q^n-q^{k-1})}{k!} \right)
- }{
- \left( \frac{(q^k-1)(q^k-q)(q^k-q^2) \dotsm (q^k-q^{k-1})}{k!} \right)
- }
- = \frac{
- (q^n-1)(q^n-q) \dotsm (q^n-q^{k-1})
- }{
- (q^k-1)(q^k-q) \dotsm (q^k-q^{k-1})
- }
-$$
-many different $k$-dimensional subspaces.
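-As a sanity check (my addition), one can enumerate the subspaces of a small example by brute force and compare with the formula, here for $q=2$, $n=4$, $k=2$:
-```python
-from itertools import combinations, product
-
-def gaussian_binomial(n, k, q):
-    num = den = 1
-    for i in range(k):
-        num *= q**n - q**i
-        den *= q**k - q**i
-    return num // den
-
-def span(vectors, q):
-    # all F_q-linear combinations of the given vectors
-    result = set()
-    for coeffs in product(range(q), repeat=len(vectors)):
-        result.add(tuple(sum(c * x for c, x in zip(coeffs, comp)) % q
-                         for comp in zip(*vectors)))
-    return frozenset(result)
-
-q, n, k = 2, 4, 2
-nonzero = [v for v in product(range(q), repeat=n) if any(v)]
-subspaces = {S for S in (span(pair, q) for pair in combinations(nonzero, k))
-             if len(S) == q**k}    # keep only genuinely k-dimensional spans
-print(len(subspaces), gaussian_binomial(n, k, q))   # 35 35
-```<|endoftext|>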
-TITLE: How many Sylow $5$-subgroups and Sylow $3$-subgroups does $G$ have? [CSIR-NET-DEC-2015]
-QUESTION [8 upvotes]: Let $G$ be a simple group of order $60$. Then
-
-$G$ has six Sylow $5$-subgroups.
-$G$ has four Sylow $3$-subgroups.
-$G$ has a cyclic subgroup of order $6$.
-$G$ has a unique element of order $2$.
-
-
-$60=2^2\cdot3\cdot5$. The number of Sylow $5$-subgroups is $n_5=1+5k$ and divides $12$, so $n_5\in\{1,6\}$; since $G$ is a simple group, $n_5=6$.
-Consider $n_3=1+3k$, which divides $20$, so $1+3k\in\{1,4,10\}$, and simplicity rules out $1$, leaving $1+3k=4$ or $10$. If $n_3=4$ then we have $8$ elements of order $3$, but $A_5$ has $20$ elements of order $3$, which is a contradiction. Hence $n_3=10$.
-Since $A_5$ has no element of order $6$, statement 3 is false.
-$A_5$ has many elements of order $2$, viz. $(12)(34), (13)(24), \ldots$ Hence only statement $1$ is correct. Could someone please check whether I am correct or not?
-
-REPLY [3 votes]: As was remarked before, you do not have to assume that $G \cong A_5$.
-(1) You did that one correctly!
-(2) If $|Syl_3(G)|=4$, and $S \in Syl_3(G)$, then $|G:N_G(S)|=4$. Now let $G$ act on the left cosets of $N_G(S)$ by left multiplication, then the kernel of this action is $core_G(N_G(S))=\bigcap_{g \in G}N_G(S)^g$, which is a normal subgroup. Hence, by the simplicity of $G$, it must be trivial and $G$ can be embedded in $S_4$, a contradiction, since $60 \nmid 24$.
-(3) We prove that if a non-abelian simple group $G$, with $|G|=60$, has an abelian subgroup of order $6$, then $G \cong A_5$. This gives a contradiction, since it is easily seen that $A_5$ does not contain any elements of order $6$ (note that an abelian group of order $6$ must be cyclic). So assume $H \lt G$ is abelian and $|H|=6$. $H$ is not normal, so $N_G(H)$ is a proper subgroup (if not then $H$ would be normal) and since $|G:N_G(H)| \mid 10$, we must have $|G:N_G(H)|=5$ ($=2$ is not possible since subgroups of index $2$ are normal). Similarly as in (2), $G/core_G(N_G(H))$ embeds homomorphically in $S_5$ this time. Of course $core_G(N_G(H))=1$, so $G$ is isomorphic to a subgroup of $S_5$ and since it is simple it must be isomorphic to $A_5$ (if we write also $G$ for the image in $S_5$, consider $G \cap A_5 \lhd G$ and use $|S_5:A_5|=2$).
-(4) In general: if $G$ is a group with a unique element $x$ of order $2$, then $x \in Z(G)$. Why? Because for every $g \in G$, $g^{-1}xg$ has also order $2$ and must be equal to $x$. In your case $G$ is non-abelian simple, so $Z(G)=1$.
-So only (1) is the true statement.
-
-Edit For case (3) I forgot the case where $|G:N_G(H)|=10$. I have a proof that is quite sophisticated and maybe there is an easier way. Anyway, in this case $H=N_G(H)$. Consider the subgroup $P$ of order $3$ of $H$. This must be a Sylow $3$-subgroup of $G$, since $3$ is the highest power of $3$ dividing $|G|=60$. Observe that in fact $N_G(P)=H$. This follows from what we showed in (2): $|Syl_3(G)|=|G:N_G(P)|=10$ and of course $H \subseteq N_G(P)$. Trivially, $P \subset Z(N_G(P))$. Now $P$ satisfies the criterion of Burnside's Normal $p$-Complement Theorem, see for example Theorem (5.13) here. But then $P$ has a normal complement $N$, such that $G=PN$ and $P \cap N=1$. Now $G$ is non-abelian simple, so $N=1$ or $N=G$, which both lead to a contradiction.
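-Since $A_5$ is the unique simple group of order $60$, the counts can also be confirmed by brute force. A quick computational sketch (my addition):
-```python
-from itertools import permutations
-
-def sign(p):
-    inv = sum(p[i] > p[j] for i in range(5) for j in range(i + 1, 5))
-    return -1 if inv % 2 else 1
-
-def order_and_group(p):
-    # order of p and the cyclic group it generates
-    e = tuple(range(5))
-    powers, q = [e], p
-    while q != e:
-        powers.append(q)
-        q = tuple(q[i] for i in p)   # q composed with p
-    return len(powers), frozenset(powers)
-
-A5 = [p for p in permutations(range(5)) if sign(p) == 1]
-syl5 = {grp for p in A5 for o, grp in [order_and_group(p)] if o == 5}
-syl3 = {grp for p in A5 for o, grp in [order_and_group(p)] if o == 3}
-print(len(syl5), len(syl3))                          # 6 10
-print(any(order_and_group(p)[0] == 6 for p in A5))   # False: no element of order 6
-```<|endoftext|>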
-TITLE: The greatest integer less than or equal to the number $R=(8+3\sqrt{7})^{20}$
-QUESTION [6 upvotes]: Given $$R=(8+3\sqrt{7})^{20}, $$ if $\lfloor R \rfloor$ is the greatest integer less than or equal to $R$, then which of the following option(s) is/are true?
-
-$\lfloor R \rfloor$ is an even number
-$\lfloor R \rfloor$ is an odd number
-$R-\lfloor R \rfloor=1-\frac{1}{R}$
-$R(R-\lfloor R \rfloor-1)=-1$
-
-
-My try: I wrote $R$ as $$R=8^{20}\left(1+\sqrt{\frac{63}{64}}\right)^{20} \approx8^{20}\left(1+\sqrt{0.98}\right)^{20} \approx8^{20}\left(1.989\right)^{20} .$$
-Now, $8^{20}\left(1.989\right)^{20}$ is slightly less than $8^{20} \times 2^{20}=2^{80}$,
-$$\lfloor 2^{80}\rfloor=2^{80}$$
-hence
-$$\lfloor R \rfloor=2^{80}-1,$$
-so option $2$ is correct.
-How does one figure out whether options $3$ and $4$ are correct or wrong?
-
-REPLY [3 votes]: Hints For (1), (2): Using the binomial expansion twice gives that \begin{align}A := R + (8 - 3 \sqrt{7})^{20} &= (8 + 3 \sqrt{7})^{20} + (8 - 3 \sqrt{7})^{20} \\ &= \sum_{k = 0}^{20} {20 \choose k} 8^{20 - k} (3 \sqrt{7})^k + \sum_{k = 0}^{20} {20 \choose k} 8^{20 - k} (-3 \sqrt{7})^k \\ &= \sum_{k = 0}^{20} {20 \choose k} 8^{20 - k} (1 + (-1)^k) (3 \sqrt{7})^k .\end{align}
-The appearance of the factor $1 + (-1)^k$ means that summands with odd $k$ are zero, while each even-$k$ summand is doubled, so we can rewrite the sum as
-$$A = 2\sum_{j = 0}^{10} {20 \choose 2j} 8^{20 - 2j} (3 \sqrt{7})^{2j} = 2\sum_{j = 0}^{10} {20 \choose 2j} 64^{10 - j} 63^j .$$
-In particular, $A$ is an even integer. On the other hand, since $49 < 63 < 64$, we have $7 < 3 \sqrt{7} < 8$ and hence $0 < 8 - 3 \sqrt{7} < 1$.
-For (3): Note that $(8 + 3 \sqrt{7})(8 - 3 \sqrt{7}) = 64 - 63 = 1.$
-
-Additional hints For (1)-(2): So, the second summand of $A$ satisfies $0 < (8 - 3 \sqrt{7})^{20} < 1$. (In fact, it is very close to zero.) So, $A - 1 < R < A$, and in particular, $\lfloor R \rfloor = A - 1$. Since we can determine the parity of $A$ from the last summation expression, we can also determine that of $\lfloor R \rfloor$. For (3): So $$(8 + 3 \sqrt{7})^{20} (8 - 3 \sqrt{7})^{20} = 1 ,$$ hence $$(8 - 3 \sqrt{7})^{20} = \frac{1}{R} .$$
-
-For (4): given (3), $R - \lfloor R \rfloor - 1 = -\frac{1}{R}$, and so $R\left(R - \lfloor R \rfloor - 1\right) = -1$; the equation in (4) checks out as well.
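-Putting the hints together in code (my addition), with exact integers for $A$ and high-precision decimals for $R$:
-```python
-from math import comb
-from decimal import Decimal, getcontext
-
-# A = (8 + 3*sqrt(7))**20 + (8 - 3*sqrt(7))**20, via the even binomial terms
-A = 2 * sum(comb(20, 2*j) * 64**(10 - j) * 63**j for j in range(11))
-print(A % 2 == 0)              # True: A is even, so floor(R) = A - 1 is odd
-
-getcontext().prec = 80
-R = (8 + 3 * Decimal(7).sqrt())**20
-print(int(R) == A - 1)         # True: floor(R) = A - 1
-print(R - (A - 1), 1 - 1/R)    # option (3): the two sides agree
-```<|endoftext|>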
-TITLE: Is there an equivalent concept of a "variety" for SAT?
-QUESTION [5 upvotes]: Couldn't find anything via google - I was wondering what work is out there looking at SAT problems from a perspective akin to algebraic varieties, e.g. a set of variables $X_1=$true, $X_2=$false, .., $X_k=$true, etc. defines a set of propositions which are all satisfied by that particular set of values, in much the same way that a variety defines a set of polynomials that all happen to vanish at the same points.
-Thanks.
-
-REPLY [2 votes]: For one variety over one finite field, the algebraic geometry translation of a SAT problem is not all that useful. Switching between the two descriptions is computationally trivial. Algebraic geometry doesn't help with finding solutions so much as describing relations between solutions. SAT problems do not lead to structured varieties like elliptic curves or algebraic surfaces where algebraic geometry comes into its own.
-The advantage of geometry is when there are many varieties, or many fields and rings in which to solve the equations, or both.
-This is roughly the idea of Aaronson and Wigderson's paper on Algebraization as a barrier to $P \neq NP$ proofs. General arguments about computation with bits, thought of as facts about polynomials or varieties where we look for solutions mod $2$, also would work over finite fields with more than $2$ elements, or more general rings than that. In those more general settings they can arrange oracles for which $P = NP$, so things are not that easy.
-Geometric Complexity Theory proposed by Mulmuley is an application of algebraic geometry to complexity and an approach to the $P=NP$ problem, but the geometry does not come from mapping SAT problems to algebraic equations mod $2$. The idea is to look for algebraic problems in invariant theory and group representation theory that are close enough analogues of $P=NP$ that solving the algebra problems would have implications for computational complexity theory. Algebraic geometry appears in GCT because it is a tool for the algebra problems that appear.<|endoftext|>
-TITLE: Is this a valid way to think about sheafification?
-QUESTION [5 upvotes]: This all feels like it should be valid, but I just wanted to get more experienced eyes on it in case I've made a mistake.
-Take a presheaf $\mathscr{F}$ on a topological space $X$. In order to be a sheaf, for any set of compatible functions $f_i \in \mathscr{F}(U_i)$, there needs to exist a unique gluing. So the sheafification $\mathscr{F}^+$ of $\mathscr{F}$ can be constructed by doing the "least amount of work" to make this happen.
-Intuitively, I feel like this should mean two things happen:
-First, suppose more than one gluing exist. If $f$ and $g$ are both gluings of $\{f_i\}$, we have no natural way to decide on which to keep. The easiest thing to do is to equate all gluings, and require that $f=g$ in $\mathscr{F}^+$.
-Second, if a gluing for $\{f_i\}$ does not exist, one is freely adjoined. This new section will not be equal to any old sections in $\mathscr{F}$. We can identify this new section with the collection $\{f_i\}$, perhaps even calling it by the name $[f_i]$.
-Doing this, we need to have the understanding that it's very likely another compatible family $\{g_j\}$ will exist which glues together to form the same section. If this is the case, we require $[f_i]=[g_j]$.
-But since no section existed previously, you need a rule to tell when two such compatible families of functions should glue to the same section. In general, they will be defined on different covers, and you need to take a common refinement $\{V_i\}$ of these covers. By pushing each $f_i$ and each $g_j$ through the restriction maps into the appropriate open sets in this refinement, you can compare them directly for equality. And if all the right $f_i$'s and $g_j$'s are equal, then $[f_i] = [g_j]$.
-
-REPLY [4 votes]: That's more or less right. We divide out by the relation "Two sections $f, g$ on an open $U\subseteq X$ are equivalent iff there is some cover $U_i \subseteq U$ such that $f|_{U_i} = g|_{U_i}$ for all $i$" (which turns out to be an equivalence relation, and for any given $U$, the class $[0]$ turns out to be a subgroup / ideal if $\mathscr F$ is a presheaf of groups / rings, which means dividing out by it works out nicely). And we adjoin sections where there is a cover with compatible sections.
-However, that's not the usual way to construct the associated sheaf. One usually lets $\mathscr F'$ be the sheaf given by $U\mapsto \prod_{x \in U}\mathscr F_x$ (which is a very big sheaf), and then $\mathscr F^+$ is set to be the subsheaf consisting of those sections $f\in \mathscr F'(U)$ for which there is some cover $U_i \subseteq U$ and sections $f_i \in \mathscr F(U_i)$ such that $f|_{U_i} = f_i$ (comparing germs in $\prod_{x \in U_i}\mathscr F_x$). In other words, the subsheaf of $\mathscr F'$ that has some "local coherence".
-For instance, say $\mathscr F$ is the presheaf over $\Bbb C$ (with standard topology) of analytic functions (this is actually a sheaf, but nevermind). Then $\mathscr F'$ is the sheaf where a section over some open $U$ is given by, for each point, a power series around that point with some positive radius of convergence. The sheaf $\mathscr F^+$ is the subsheaf consisting of those sections where neighbouring points have power series that comes from the same analytic function. We see that we are back with the usual sheaf of analytic functions.<|endoftext|>
-TITLE: Nontrivial cup product realized in $\Bbb R^4$
-QUESTION [6 upvotes]: Let $A$ be a closed subspace of $[0,1]^4$---let's say, a subcomplex of some triangulation of the cube. I would like to show that the cup product $H^2(A)\times H^2(A)\to H^4(A)$ is trivial (or at least that the square map $x\mapsto x\smile x$ is trivial).
-I tried analyzing simple examples of nontrivial cup product occurrences and it seems to me that the basic "building blocks" are examples such as products of spaces (for example, $S^2\times S^2$) and/or attaching $4$-cells to something $2$-dimensional in a nontrivial way (for example, $\Bbb CP^2$), and none of these can be realized in Euclidean $4$-space; however, I don't know how to prove the claim formally.
-Thanks for possible hint.
-
-REPLY [3 votes]: You may embed your complex $A$ into $S^4$. Then Alexander duality (see Hatcher, th. 3.44) gives you
-$$
-\tilde H^k(A)=\tilde H_{3-k}(S^4\setminus A),
-$$
-so $H^4(A)$ is always $0$.<|endoftext|>
-TITLE: $p$-adic valuation of harmonic numbers
-QUESTION [8 upvotes]: For an integer $m$ let $\nu_p(m)$ be its $p$-valuation i.e. the greatest non-negative integer such that $p^{\nu_p(m)}$ divides $m$. Let now
-$H_n=1+\dfrac{1}{2}+ \cdots+ \dfrac{1}{n}$.
-If $H_n=\dfrac{a_n}{b_n}$ then $\nu_p(H_n)=\nu_p(a_n)-\nu_p(b_n).$
-It is known that if $2^k \leq n < 2^{k+1}$ then $\nu_2(H_n)=-k.$
-Question: For which $n$ (and arbitrary $p$) may we be sure that $\nu_p(H_n)<0$?
-Any estimates and reference, please.
-
-REPLY [2 votes]: Question: For which $n$ (and arbitrary $p$) may we be sure that $v_p(H_n)<0$?
-Any estimates and reference, please.
-
-In $[1]$ it is proved that for all $x \geq 1$ we have $v_p(H_n) \leq 0$ for all positive integers $n \leq x$ with at most $129 p^{2/3} x^{0.765}$ exceptions. Hence $v_p(H_n) \leq 0$ for $100\%$ of the positive integers (with respect to asymptotic density), and I guess that this is true even for the strict inequality $v_p(H_n) < 0$.
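-For small cases this is easy to explore directly; here is a quick experiment (my addition), using exact rational arithmetic:
-```python
-from fractions import Fraction
-
-def vp(m, p):
-    # p-adic valuation of a positive integer m
-    v = 0
-    while m % p == 0:
-        m //= p
-        v += 1
-    return v
-
-def vp_H(n, p):
-    H = sum(Fraction(1, k) for k in range(1, n + 1))
-    return vp(H.numerator, p) - vp(H.denominator, p)
-
-print([vp_H(n, 2) for n in range(1, 17)])   # -k for 2^k <= n < 2^(k+1)
-print([vp_H(n, 3) for n in range(1, 17)])   # usually negative, but not always
-```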
-$[1]$ C. Sanna, On the p-adic valuation of harmonic numbers, J. Number Theory (2016) 166, 41-46. (http://dx.doi.org/10.1016/j.jnt.2016.02.020)<|endoftext|>
-TITLE: How to solve this integral $\int _0^{\infty} e^{-x^3+2x^2+1}\,\mathrm{d}x$
-QUESTION [5 upvotes]: My classmate asked me about this integral:$$\int_0^{\infty} e^{-x^3+2x^2+1}\,\mathrm{d}x$$
-but I have no idea how to do it. What's the closed form of it? I guess it may be related to the Airy function.
-
-REPLY [5 votes]: I guess it may be related to the Airy function.
-
-You guessed well. In general, we have
-$$\int_0^\infty\exp\Big(-x^2(x+3a)\Big)\,\mathrm{d}x=\frac{2}{e^{2a^3}}\cdot\frac{\operatorname{Bi}\Big(3^{2/3}\,a^2\Big)}{3^{4/3}}-a\cdot{}_2F_2\left(\left[\frac12,\,1\right];\left[\frac23,\,\frac43\right];\,-4a^3\right).$$
-In this particular case, $a=-\dfrac23$, since $-x^3+2x^2=-x^2(x-2)$; the extra $+1$ in the exponent just contributes an overall factor of $e$.
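-If you want to check this numerically, here is a comparison sketch (my addition) using mpmath; assuming the closed form above is transcribed correctly, the two printed values should agree (the factor $e$ restores the $+1$ in the exponent):
-```python
-from mpmath import mp, mpf, quad, exp, airybi, hyper, inf, e
-
-mp.dps = 25
-lhs = quad(lambda x: exp(-x**3 + 2*x**2 + 1), [0, inf])
-
-a = mpf(-2) / 3
-rhs = 2 / exp(2*a**3) * airybi(3**(mpf(2)/3) * a**2) / 3**(mpf(4)/3) \
-      - a * hyper([mpf(1)/2, 1], [mpf(2)/3, mpf(4)/3], -4*a**3)
-print(lhs)
-print(e * rhs)
-```<|endoftext|>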
-TITLE: Invariant factors and elementary divisors of an abelian group
-QUESTION [5 upvotes]: I have to find the elementary divisors and invariant factors of :
-$$ \mathbb Z_6\oplus\mathbb Z_{20}\oplus\mathbb Z_{36}$$
-I'm following this.
-I think that the elementary divisors are $\{2,2^2,2^2,3,3^2,5\}$, just using the prime decompositions of $\{6,20,36\}$.
-Using the web I've put above, the invariant factor decomposition is
-$$ \mathbb Z_2\oplus\mathbb Z_{12}\oplus\mathbb Z_{180}$$
-However, I have written in my notes that the invariant factors are $\{2,2,6,6,30\}$.
-I'd like to know which is the right option and where and why I'm wrong.
-Thanks in advance.
-
-REPLY [5 votes]: $\mathbb Z_2\oplus\mathbb Z_{12}\oplus\mathbb Z_{180}$ is right.
-Your notes must be wrong because if the invariant factors were $\{2,2,6,6,30\}$ then there wouldn't be an element of order $36$, but $\mathbb Z_6\oplus\mathbb Z_{20}\oplus\mathbb Z_{36}$ has an element of order $36$ coming from $\mathbb Z_{36}$. The group also has elements of order $4$, $9$, and $12$; none of these divides $30$, so none of them could occur with invariant factors $\{2,2,6,6,30\}$ (every element order divides the largest invariant factor).
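-For what it's worth, both decompositions can be computed mechanically; here is a small sketch (my addition; it assumes sympy for the factorizations):
-```python
-from collections import defaultdict
-from sympy import factorint
-
-moduli = [6, 20, 36]
-by_prime = defaultdict(list)         # elementary divisors, grouped by prime
-for m in moduli:
-    for p, e in factorint(m).items():
-        by_prime[p].append(p**e)
-print({p: sorted(v) for p, v in by_prime.items()})
-# {2: [2, 4, 4], 3: [3, 9], 5: [5]}
-
-invariants = []                      # multiply the largest remaining prime powers
-while any(by_prime.values()):
-    f = 1
-    for p in by_prime:
-        if by_prime[p]:
-            q = max(by_prime[p])
-            by_prime[p].remove(q)
-            f *= q
-    invariants.append(f)
-print(sorted(invariants))            # [2, 12, 180]
-```<|endoftext|>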
-TITLE: Show that $\left(\int_{0}^{1}\sqrt{f(x)^2+g(x)^2}\ dx\right)^2 \geq \left(\int_{0}^{1} f(x)\ dx\right)^2 + \left(\int_{0}^{1} g(x)\ dx\right)^2$
-QUESTION [5 upvotes]: Show that
- $$
-\left( \int_{0}^{1} \sqrt{f(x)^2+g(x)^2}\ \text{d}x \right)^2
-\geq
-\left( \int_{0}^{1} f(x)\ \text{d}x\right)^2
-+ \left( \int_{0}^{1} g(x)\ \text{d}x \right)^2
-$$
- where $f$ and $g$ are integrable functions on $\mathbb{R}$.
-
-That inequality is a particular case. I want to approximate the integral curves using some inequalities that imply this one.
-
-REPLY [2 votes]: Suppose that $f$ and $g$ are continuous functions. Define
-$$\phi(t) = \left(\int_0^t \sqrt{f(s)^2 + g(s)^2} ds\right)^2 -\left(\int_0^t f(s) ds\right)^2 - \left(\int_0^t g(s) ds\right)^2.$$
-It is obvious that $\phi(0) =0$ and
-$$\phi'(t) = 2\left[\int_0^t \sqrt{f(t)^2 + g(t)^2}\sqrt{f(s)^2 + g(s)^2}ds - \int_0^t (f(s)f(t)+g(s) g(t))ds\right].$$
-By the Cauchy-Schwarz inequality, we have
-$$\sqrt{f(t)^2 + g(t)^2}\sqrt{f(s)^2 + g(s)^2} \geq f(t)f(s) + g(t) g(s).$$
-Hence $\phi'(t) \geq 0$. This implies that $\phi(1) \geq \phi(0) =0$. This is the inequality in question.
-In the general case, when $f$ and $g$ are integrable, we can approximate them by continuous functions, and hence finish the proof.
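-A quick numeric sanity check with an arbitrarily chosen pair $f, g$ (my addition, not part of the proof):
-```python
-import numpy as np
-
-x = np.linspace(0, 1, 200001)
-f, g = np.sin(3*x), np.cos(5*x) - x      # any test pair will do
-dx = x[1] - x[0]
-integ = lambda h: np.sum(h) * dx         # crude Riemann sum on [0, 1]
-lhs = integ(np.sqrt(f**2 + g**2))**2
-rhs = integ(f)**2 + integ(g)**2
-print(lhs >= rhs, lhs, rhs)              # True ...
-```<|endoftext|>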
-TITLE: Example of a Borel measure, which is not Borel-regular
-QUESTION [5 upvotes]: I have asked a question to find four types of outer measures here, and I could find three of the four examples.
-We call an outer measure $\mu: \mathcal P(\mathbb R^n) \to [0, \infty]$ Borel, if every Borel set $B \subset \mathbb R^n$ is $\mu$-measurable. We say, that an outer measure $\mu: \mathcal P(\mathbb R^n) \to [0, \infty]$ is Borel-regular, if $\mu$ is Borel and for any subset $A \subset \mathbb R^n$ there is a Borel set $B \supset A$, such that $\mu(A) = \mu(B)$.
-I would like to give an example of a Borel measure, which is not Borel-regular. Can you help me?
-
-REPLY [5 votes]: Take a set $C\subset\mathbb{R}^n$ that is not Borel. For every $X\subset \mathbb{R}^n$, let
-$$
-\mu(X) = \begin{cases}
-0 & \text{if } X\subset C; \\
-\infty & \text{if } X\not\subset C. \\
-\end{cases}
-$$
-This is a measure on the entire $P(\mathbb{R}^n)$. In particular, all Borel sets are measurable.
-For every Borel set $B\supset C$ we have
-$\mu(B)=\infty> 0=\mu(C)$, so $\mu$ is not Borel-regular.<|endoftext|>
-TITLE: Why is this theorem also a proof that matrix multiplication is associative?
-QUESTION [6 upvotes]: The author remarks that this theorem, which is basically all about what happens if we compose linear transformations, also gives a proof that matrix multiplication is associative:
-
-Let $V$, $W$, and $Z$ be finite-dimensional vector spaces over the field $F$; let $T$ be a linear transformation from $V$ into $W$ and $U$ a linear transformation from $W$ into $Z$. If $\mathfrak{B}$, $\mathfrak{B^{'}}$, and $\mathfrak{B^{''}}$ are ordered bases for the spaces $V$, $W$, $Z$, respectively, if $A$ is the matrix of $T$ relative to the pair $\mathfrak{B}$, $\mathfrak{B^{'}}$, and $B$ is the matrix of $U$ relative to the pair $\mathfrak{B^{'}}$, $\mathfrak{B^{''}}$, then the matrix of the composition $UT$ relative to the pair $\mathfrak{B}$, $\mathfrak{B^{''}}$ is the product matrix $C=BA$.
-
-However, I see no reason why that's true...
-
-REPLY [5 votes]: Associativity is a property of function composition, and in fact essentially everything that's associative is just somehow representing function composition. This theorem says that matrix multiplication is just composition of linear transformations, and so it follows that it's associative.
-Of course in reality this is backwards: the "true" definition of matrix multiplication is "compose the linear transformations and write down the matrix," from which you can easily derive the familiar algorithm.
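-To spell the remark out (my phrasing, writing $[\,\cdot\,]$ for "the matrix of" relative to fixed bases): if $A$, $B$, $C$ are the matrices of composable linear maps $S$, $U$, $T$, then two applications of the theorem give
-$$(AB)C=[S\circ U][T]=[(S\circ U)\circ T]=[S\circ (U\circ T)]=[S][U\circ T]=A(BC),$$
-where the middle equality is just the associativity of function composition.<|endoftext|>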
-TITLE: What's the formula for the 365 day penny challenge?
-QUESTION [10 upvotes]: Not exactly a duplicate since this is answering a specific instance popular in social media.
-You might have seen the viral posts about "save a penny a day for a year and make $667.95!" The mathematicians here already get the concept while some others may be going, "what"? Of course, what the challenge is referring to is adding a number of pennies to a jar for what day you're on. So:
-Day 1 = + .01
-Day 2 = + .02
-Day 3 = + .03
-Day 4 = + .04
-
-So that in the end, you add it all up like so:
-1 + 2 + 3 + 4 + 5 + 6 + ... = 66795
-
-The real question is, what's a simple formula for getting a sum of consecutive integers, starting at whole number 1, without having to actually count it all out?!
-
-REPLY [2 votes]: An arithmetic progression is a sequence of numbers such that the difference $d$ between consecutive terms is constant. If the first term is $a_1$, the number of terms is $n$, and the last term is $a_n$, the whole sum is
-$$ S = \frac{n \cdot (a_1 + a_n)}{2} $$
-where
-$$ a_n = a_1 + (n - 1) \cdot d $$
-In your example $a_1 = 0.01$, $d = 0.01$, $n = 365$, so
-$$ a_{365} = 0.01 + (365 - 1) \cdot 0.01 = 3.65 $$
-and
-$$ S = \frac{365 \cdot (0.01 + 3.65)}{2} = 667.95 $$
-If you want to use only $a_1$, $d$ and $n$ then the only one formula from both one above
-$$ S = \frac{n \cdot (2 \cdot a_1 + (n-1) \cdot d)}{2} $$
-$$ S = \frac{365 \cdot (2 \cdot 0.01 + (365-1) \cdot 0.01)}{2} = 667.95 $$
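-In code, the whole challenge is one line (my addition); working in pennies keeps everything exact:
-```python
-n = 365
-print(n * (n + 1) // 2)        # 66795 pennies, i.e. $667.95
-print(sum(range(1, n + 1)))    # brute-force agreement
-```<|endoftext|>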
-TITLE: Is the Zariski topology the same as the cofinite topology?
-QUESTION [6 upvotes]: Let $R$ be a commutative ring and $spec(R)$ the set of all prime ideals of $R$. For any ideal $I$ of $R$, we define $V_I$ to be the set of all prime ideals containing $I$. We define the Zariski topology on $spec(R)$ via the closed sets $\{V_I:I\textrm{ is an ideal of }R\}$.
-I am still wrapping my mind around this topology. Can someone tell me if it is a cofinite topology, i.e. the open sets are complements of finite sets, or not?
-
-REPLY [6 votes]: There can be infinite closed sets besides the whole space.
-In $\Bbb Q[x,y]$, for example, $(y)$ is contained in infinitely many prime ideals, so $V_{(y)}$ is a closed, but not finite set.
-Another way: the cofinite topology always makes its space $T_1$, but the spectrum is $T_1$ in the Zariski topology iff prime ideals are maximal, and that only occurs for certain rings.
-Another way: The cofinite topology on a finite space is the discrete topology, but the Zariski topology on a ring with a finite spectrum need not be discrete. Consider, for example, $Spec(\Bbb Q[[x]])$, where the spectrum is $\{(0), (x)\}$ and the closed sets are $\{\emptyset, \{(x)\}, \{(0),(x)\}\}$
-I've spent some time in the same boat as you on this topic. It's especially disorienting if most of your topological experience is derived from thinking about metrizable spaces.<|endoftext|>
-TITLE: Find sum of series $\sum_{n=1}^{\infty}\frac{1}{n(4n^2-1)}$
-QUESTION [12 upvotes]: I need help with finding sum of this:
-$$
-\sum_{n=1}^{\infty}\frac{1}{n(4n^2-1)}
-$$
-First, I tried to telescope it in some way, but it seems to be a dead end. The only other idea I have is that this might have something to do with logarithms, but really I don't know how to proceed. Any hint would be greatly appreciated.
-
-REPLY [3 votes]: One way forward is to note that
-$$\begin{align}
-\sum_{n=1}^{N}\left(\frac{1}{2n-1}-\frac{1}{2n}\right)&=\sum_{n=1}^{N}\left(\frac{1}{2n-1}+\frac{1}{2n}\right)-\sum_{n=1}^{N}\frac1n \tag 1\\\\
-&=\sum_{n=1}^{2N}\frac1n -\sum_{n=1}^{N}\frac1n \tag 2\\\\
-&=\sum_{n=N+1}^{2N}\frac1n \\\\
-&=\sum_{n=1}^{N}\frac{1}{n+N} \\\\
-&=\frac1N \sum_{n=1}^{N}\frac{1}{1+n/N} \tag 3
-\end{align}$$
-In going from $(1)$ to $(2)$ we simply noted that the sum, $\sum\limits_{n=1}^{2N}\frac1n$, can be written in terms of sums of even and odd indexed terms.
-Now, we observe that $(3)$ is a Riemann sum for the integral $$\int_0^1 \frac{1}{1+x}\,dx=\log(2),$$ and hence tends to $\log(2)$ as $N\to\infty$.
-Similarly, we see that
-$$\begin{align}
-\sum_{n=1}^{N}\left(\frac{1}{2n+1}-\frac{1}{2n}\right)&=-1+\frac{1}{2N+1}+\frac1N \sum_{n=1}^{N}\frac{1}{1+n/N}
-\end{align}$$
-which, since the $\frac{1}{2N+1}$ term vanishes in the limit, tends to $$-1+\int_0^1\frac{1}{1+x}\,dx=-1+\log(2).$$
-Putting all of this together, we recover the expected result
-$$\sum_{n=1}^\infty \frac{1}{n(2n-1)(2n+1)}=\sum_{n=1}^\infty\left(\frac{1}{2n-1}-\frac{1}{2n}\right)+\sum_{n=1}^\infty\left(\frac{1}{2n+1}-\frac{1}{2n}\right)=2\log(2)-1.$$
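-A quick numeric check of the result (my addition):
-```python
-import math
-
-s = sum(1.0 / (n * (4*n*n - 1)) for n in range(1, 200001))
-print(s, 2*math.log(2) - 1)    # both ~ 0.3862943
-```<|endoftext|>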
-TITLE: Proving that there only finitely many minimal prime ideals of any ideal in Noetherian commutative ring
-QUESTION [7 upvotes]: Currently, I'm trying to solve a problem from a textbook:
-Let $R$ be a commutative Noetherian ring with identity, and let $I \subset R$ be a proper ideal of $R$. Then we know that set of prime ideals of $R$ containing $I$ has minimal elements by inclusion (I decided to call this set $\mathrm{Min}(I)$ in sequel). Prove that $\mathrm{Min}(I)$ is finite.
-There is also a hint: Define $\mathcal{F}$ as the set of all ideals $I$ of $R$ such that $\vert \mathrm{Min}(I) \vert = \infty$. Assume that $\mathcal{F} \neq \emptyset$. Then it must have a maximal element $I$. Find ideals $J_1,J_2$ that both strictly include $I$ and satisfy $J_1J_2 \subset I$, and deduce a contradiction.
-So I went along this hint: $I$ can't be a prime, as a prime is the only minimal prime over itself. It means that $\exists a,b \not \in I: ab \in I$. As $R$ is Noetherian there is a finite list of elements that generates $I = (r_1, \dots, r_n)$. Then it's possible to set $J_1 = (r_1, \dots,r_n,a)$, $J_2 =(r_1, \dots, r_n, b )$ with all required properties. As $I$ is maximal in $\mathcal{F}$ the sets $\mathrm{Min}(J_1)$ and $\mathrm{Min}(J_2)$ must be finite.
-I am failing to find the desired contradiction and would be grateful for any help.
-
-REPLY [8 votes]: (To continue your argument.) Let $P\in \mathrm{Min}(I)$. Since $ab\in I\subset P$ and $P$ is prime, $a\in P$ or $b\in P$; this implies that $J_1\subset P$ or $J_2\subset P$. Remark that an element $P$ of $\mathrm{Min}(I)$ which contains $J_l$, $l=1,2$, is in $\mathrm{Min}(J_l)$; thus $\mathrm{Min}(I)\subset \mathrm{Min}(J_1)\cup \mathrm{Min}(J_2)$. This implies that $\mathrm{Min}(J_1)$ or $\mathrm{Min}(J_2)$ is infinite. This is in contradiction with the fact that $I$ is maximal among the ideals such that $\mathrm{Min}(I)$ is infinite.<|endoftext|>
-TITLE: Possible mistake in Rudin's definition of limits in the extended real number system
-QUESTION [5 upvotes]: From Baby Rudin page 98
-
-This seems to be a mistake since we have seemingly absurd results like
-[Rudin's definition of limits in the extended real number system, shown as an image in the original post]
-We define the limit(for $x$ real) only for limit points of $E$ so my initial thinking is to enforce that every neighborhood of $x$ must have infinitely many points of $E$. This would imply that limits at infinity could only happen for unbound $E$ so the previous example would not be true. Is there a more standard way of defining such limits?
-This has been discussed before at Definition of the Limit of a Function for the Extended Reals but I'm more interested in the infinite case and how to fix the definition.
-
-REPLY [2 votes]: Yes, there is a more standard way. If you know topology, what you are doing is attaching two points to $\mathbb{R}$, which we will call $\infty$ and $-\infty$, giving the obvious order and imposing the order topology. What follows is that limits are now well-defined as in any topological space, and your proposed definition is equivalent. Just as in any topological space, limits are defined on limit points only.
-This has the advantage of putting away the "special" feeling and treatment about $\infty$ and $-\infty$, putting them in the same ground as any real number.
-
-I've made this blog post sometime ago about some considerations on the extended real line from a topological viewpoint. You may find it useful.<|endoftext|>
-TITLE: How can we find geodesics on a one sheet hyperboloid?
-QUESTION [17 upvotes]: I am looking at the following exercise:
-Describe four different geodesics on the hyperboloid of one sheet
-$$x^2+y^2-z^2=1$$ passing through the point $(1, 0, 0)$.
-$$$$
-We have that a curve $\gamma$ on a surface $S$ is called a geodesic if $\ddot\gamma(t)$ is zero or perpendicular to the tangent plane of the surface at the point $\gamma (t)$, i.e., parallel to its unit normal, for all values of the parameter $t$.
-Equivalently, $\gamma$ is a geodesic if and only if its tangent vector $\dot\gamma$ is parallel along $\gamma$.
-$$$$
-Could you give me some hints how we can find in this case the geodesics?
-
-REPLY [20 votes]: First, look at some pictures of hyperboloids, to get a feeling for their shape and symmetry.
-There are two ways to think of your hyperboloid. Firstly, it's a surface of revolution. You can form it by drawing the hyperbola $x^2 - z^2 = 1$ in the plane $y=0$, and then rotating this around the $z$-axis.
-Another way to get your hyperboloid is as a "ruled" surface. Take two circles of radius $\sqrt2$. One circle, $C_1$, lies in the plane $z=1$ and has center at the point $(0,0,1)$. The other one, $C_2$, lies in the plane $z=-1$ and has center at the point $(0,0,-1)$. As you can see, $C_1$ lies vertically above $C_2$. Their parametric equations are:
-\begin{align}
-C_1(\theta) &= (\sqrt2\cos\theta, \sqrt2\sin\theta, 1) \\
-C_2(\theta) &= (\sqrt2\cos\theta, \sqrt2\sin\theta, -1)
-\end{align}
-For each $\theta$, draw a line from $C_1(\theta)$ to $C_2(\theta + \tfrac{\pi}{2})$. This gives you the family of blue lines shown in the picture below. Similarly, you can get the red lines by joining $C_1(\theta)$ and $C_2(\theta - \tfrac{\pi}{2})$ for each $\theta$:
-[picture omitted: the hyperboloid ruled by the blue and red lines]
-To identify geodesics, we will use two facts that are fairly well known (they can be found in many textbooks):
-Fact #1: Any straight line lying in a surface is a geodesic. This is because its arclength parameterization will have zero second derivative.
-Fact #2: Any normal section of a surface is a geodesic. A normal section is a curve produced by slicing the surface with a plane that contains the surface normal at every point of the curve. The commonest example of a normal section is a section formed by a plane of symmetry. So, any intersection with a plane of symmetry is always a geodesic.
-There are infinitely many geodesics passing through the point $(1,0,0)$. But, using our two facts, we can identify four of them that are fairly simple. They are the curves G1, G2, G3, G4 shown in the picture below:
-[picture omitted: the four geodesics G1, G2, G3, G4 on the hyperboloid]
-G1: the circle $x^2+y^2 =1$ lying in the plane $z=0$. This is a geodesic by Fact #2, since the plane $z=0$ is a plane of symmetry. At each point along the curve G1, the curve's principal normal must be parallel to the surface normal at the point, by symmetry. If this geometric argument is not convincing, we can confirm by calculations. At any point $P=(x,y,0)$ on G1, the surface normal and the curve's principal normal are both in the direction $(x,y,0)$. This is illustrated in the picture below:
-
-
-
-G2: the hyperbola $x^2 - z^2 = 1$ lying in the plane $y=0$. Again, this is a geodesic by Fact #2, since the plane $y=0$ is a plane of symmetry.
-
-G3: the line through the points $(1,-1,1)$ and $(1, 1, -1)$. This is one of the blue lines mentioned in the discussion of ruled surfaces above. In fact its two defining points are $(1,-1,1) = C_1\big(-\tfrac{\pi}{4}\big)$ and $(1,1,-1) = C_2\big(\tfrac{\pi}{4}\big)$. It has parametric equation
-$$
-G_3(t) = \big(x(t),y(t),z(t)\big) = (1,t,-t)
-$$
-To check that $G_3$ lies on the surface, we observe that
-$$
-x(t)^2 + y(t)^2 -z(t)^2 = 1 +t^2-t^2 = 1 \quad \text{for all } t
-$$
-It's a geodesic by Fact #1.
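-For readers who want a numerical double-check of Facts #1 and #2 (a sketch, assuming NumPy; the step size and sample points are arbitrary choices), one can verify that the acceleration of G1 and G3 is parallel to the surface normal $\nabla F$, where $F(x,y,z) = x^2+y^2-z^2-1$:
-
-    import numpy as np
-
-    # The geodesic condition: gamma''(t) parallel to the surface normal grad F.
-    def grad_F(p):
-        x, y, z = p
-        return np.array([2 * x, 2 * y, -2 * z])
-
-    G1 = lambda t: np.array([np.cos(t), np.sin(t), 0.0])  # circle in z = 0
-    G3 = lambda t: np.array([1.0, t, -t])                 # ruled line
-
-    h = 1e-4
-    for gamma in (G1, G3):
-        for t in np.linspace(-1.0, 1.0, 5):
-            acc = (gamma(t + h) - 2 * gamma(t) + gamma(t - h)) / h**2
-            # cross product with the normal vanishes iff they are parallel
-            print(np.linalg.norm(np.cross(acc, grad_F(gamma(t)))))  # ~0
-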
-
-G4: the line through the points $(1,-1,-1)$ and $(1, 1, 1)$. The reasoning is the same as for G3.<|endoftext|>
-TITLE: If $R[x]$ and $R[[x]]$ are isomorphic, then are they isomorphic to $R$ as well?
-QUESTION [12 upvotes]: There were claimed examples of commutative rings $R \neq 0$ such that $R[x]$ is isomorphic to $R[[x]]$ (see this question; a proposed example was $R=S[x_1, x_2, \ldots][[y_1, y_2, \ldots]]$, with $S \neq 0$ any commutative ring). This claim is false; see Martin Brandenburg's answer.
-The following question was asked as a comment on the thread linked above: if $R$ is such that $R[x]\cong R[[x]]$, must we have that $R\cong R[x] \cong R[[x]]$?
-This is clearly true for the example above, which is (essentially) the only family of examples I could come up with.
-
-REPLY [2 votes]: Yes, because $R[x] \cong R[[x]]$ implies $R=0$ (see my answer to the previous question here).
-(I make this community wiki because this answer is rather trivial now.)<|endoftext|>
-TITLE: What does 'coherent isomorphism' mean in the sense of pseudofunctors?
-QUESTION [5 upvotes]: From what I've been able to find, pseudofunctors are not-quite-functors, in the sense that they preserve the identity morphism and composition of morphisms only up to coherent isomorphism, and not 'on the nose'.
-But I'm struggling to find an explicit definition of what this means anywhere. The nLab has a large section on coherence theorems, but most of it is far above my head.
-I'm assuming that it simply means 'isomorphic in a nice way', but I can't quite pin down what 'nice' means here.
-Could anybody please explain what it does mean here?
-Or even give a different definition of a pseudofunctor.
-
-REPLY [7 votes]: The definition of pseudofunctors on nLab is actually quite explicit, but it's for pseudofunctors between bicategories. Borceux (Handbook of Categorical Algebra, volume 1, 7.5) gives the definition for strict categories.
-As for what a coherent (iso)morphism is, it's a morphism that satisfies coherence laws. These laws ensure that things that ought to be equal (and trivially would be, if the morphisms were identities) really are.
-For example in this case you are given isomorphisms $φ : Ff ∘ Fg ≅ F(f ∘ g)$, but to do anything, you'll obviously need isomorphisms $Ff ∘ Fg ∘ Fh ≅ F(f ∘ g ∘ h)$ too, and similarly for any number of morphisms.
-Of course you can get from $Ff ∘ Fg ∘ Fh$ to $F(f ∘ g ∘ h)$ using $φ$, but you can do so in two ways: via $Ff ∘ F(g ∘ h)$ or via $F(f ∘ g) ∘ Fh$.
-One of the coherence laws for pseudofunctors says that these two ways are the same. Given this, the fact that all paths (constructed from $φ$) from $F(f_1) ∘ ... ∘ F(f_n)$ to $F(f_1 ... f_n)$ are the same is then part of the coherence theorem for pseudofunctors (though such theorems can also be stated in more sophisticated ways).
-As a side-note, monoidal categories are a good way to get familiar with coherence and coherence theorems. In a way, anybody who uses products of sets or tensor products is already working with them.<|endoftext|>
-TITLE: Optimization-like question
-QUESTION [8 upvotes]: Let's say I have a formula like $ax + by + cz = N$. $a, b, c$, and $N$ are known and cannot be changed. $x, y$, and $z$ are known and can be changed.
-The problem is that the equation is not true! My problem (for a program I'm writing) is: how can $x, y$, and $z$ be changed so that $ax+by+cz$ equals $N$, while differing from their previous values as little as possible?
-
-REPLY [2 votes]: Define $A = [a,b,c]$ and $X=[x, y, z]^T$, so your equation is $A*X = N$. Assume your known point to be $ \bar X = [x_0, y_0,z_0]^T$. Define the residual to be $R = N - A*\bar X$, since the equation is not true.
-The question becomes $$\min \| X- \bar X \| \quad \text{s.t. } A*X = N$$
-The solution $X^*$ that minimizes the distance is $X^* = \bar X + A^+R$, where $A^+$ is the Moore–Penrose pseudoinverse of $A$; in this case, since $A$ has linearly independent rows, $A^+ = A^T(AA^T)^{-1}$. The distance is $\|X^* - \bar X \| = \| A^+R\|$, measured in the Euclidean norm (which is the norm this solution minimizes).
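-To make this concrete, here is a minimal sketch (assuming NumPy; the coefficients and starting point are made-up values) of the pseudoinverse update:
-
-    import numpy as np
-
-    # Minimal-change correction: project the current guess onto the
-    # hyperplane a*x + b*y + c*z = N, minimizing the Euclidean distance moved.
-    a, b, c, N = 2.0, 3.0, 5.0, 41.0          # known, fixed coefficients
-    A = np.array([[a, b, c]])                 # 1 x 3 row vector
-    X_bar = np.array([1.0, 2.0, 4.0])         # current (wrong) values
-    R = N - A @ X_bar                         # residual
-    X_star = X_bar + np.linalg.pinv(A) @ R    # Moore-Penrose pseudoinverse step
-    print(X_star, A @ X_star)                 # A @ X_star == [41.]
-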
-This also works if $A$ is a matrix instead of a row vector, in case you have a system of linear equations.<|endoftext|>
-TITLE: Generalization of Urysohn's Lemma
-QUESTION [6 upvotes]: Urysohn's lemma in general topology states:
-
-A topological space $X$ is normal (i.e., $T_4$) iff, for each pair of disjoint closed subsets $C, D \subset X$, there is a function $f : X \to [0, 1]$ such that $f(C) = 0$ and $f(D) = 1$.
-
-Of course Urysohn's proof relies heavily on the structure of $[0, 1]$ to go through, but I wondered how necessary this is. In particular, let's say
-
-A space $Y$ is said to "have property $\mathscr{U}$" if, for every compact Hausdorff space $X$, and every pair of disjoint subsets $C, D \subset X$, there exists a continuous map $f : X \to Y$ such that there are $y_1 \neq y_2 \in Y$ with $f(C) = y_1$ and $f(D) = y_2$.
-
-Question: Can we characterize those spaces with property $\mathscr{U}$?
-Urysohn's lemma says that any space containing a line segment has property $\mathscr{U}$. Is this sufficient condition in fact necessary?
-(Of course we could define a similar property replacing "compact Hausdorff" with "normal." I'm directly interested in the former situation, but I don't have any idea which is the "correct" definition to make.)
-
-Motivation: I've recently learned the following cute result: Let $X$ be a compact Hausdorff space and $F$ be a topological field. Then there is a continuous bijection
-$$\varphi : X \to \operatorname{MaxSpec} C(X, F),$$
-where the space on the right is endowed with the Zariski topology. Of course $\varphi^{-1}$ will usually be far from continuous, since the Zariski topology is fairly weak. However, when $F = \mathbb{R}$ then $\varphi$ is a homeomorphism. To see this, we verify directly that $\varphi$ is a closed map using Urysohn's lemma.
-I'm wondering if this allows us to give a characterization of the real numbers as a topological field that's essentially different from the usual ones.
-
-REPLY [5 votes]: Property $\mathscr U$ is rather trivial: it is equivalent to “the space $Y$ contains a continuous image $f_0([0,1])$ of the unit segment such that $f_0(0)\ne f_0(1)$”. Indeed, the necessity follows from the existence of a continuous map $f_0:[0,1]\to Y$ such that $f_0(0)\ne f_0(1)$, and the sufficiency (even for normal spaces $X$) follows from Urysohn's lemma: the composition $f_0\circ f$ is the required separating map. (Here I assume that you mean both sets $C$ and $D$ to be closed; otherwise, if we take as $C$ and $D$ disjoint dense subsets of $[0,1]$, then $f(C)=f([0,1])=f(D)$ for each continuous map $f$ into a $T_1$-space such that both $f(C)$ and $f(D)$ are one-point sets.)<|endoftext|>
-TITLE: How to compute $\int_0^{\frac{\pi}{2}} \frac{\ln(\sin(x))}{\cot(x)}\ \text{d}x$
-QUESTION [7 upvotes]: I am trying to compute this integral.
-$$\int_0^{\frac{\pi}{2}} \frac{\ln(\sin(x))}{\cot(x)}\ \text{d}x$$
-Any thoughts will help. Thanks.
-
-REPLY [4 votes]: Note that
-\begin{eqnarray}
-&&\int_0^{\frac{\pi}{2}} \frac{\ln(\sin(x))}{\cot(x)}\ \text{d}x\\
-&=&\int_0^{\frac{\pi}{2}}(\cos x)^{-1}\sin x\ln(\sin x)\ \text{d}x=\lim_{a\to0,b\to2}\frac{d}{db}\int_0^{\frac{\pi}{2}}(\cos x)^{a-1}(\sin x)^{b-1}\ \text{d}x\\
-&=&\lim_{a\to0,b\to2}\frac{d}{db}\frac{1}{2}B\left(\frac{a}{2},\frac{b}{2}\right)\\
-&=&\lim_{a\to0,b\to2}\frac{1}{4}B\left(\frac{a}{2},\frac{b}{2}\right)\left(\psi\left(\frac{b}{2}\right)-\psi\left(\frac{a+b}{2}\right)\right)\\
-&=&\lim_{a\to0}\frac{1}{4}B\left(\frac{a}{2},1\right)\left(\psi\left(1\right)-\psi\left(\frac{a+2}{2}\right)\right)\\
-&=&\lim_{a\to0}\frac{1}{4}B\left(\frac{a}{2},1\right)\left(\psi(1)-\psi\left(\frac{a}{2}+1\right)\right)\\
-&=&-\lim_{a\to0}\frac{1}{4}B\left(\frac{a}{2},1\right)\left(\gamma+\psi\left(\frac{a}{2}+1\right)\right)\\
-&=&-\lim_{a\to0}\frac{1}{4}\frac{\Gamma(\frac{a}{2})\Gamma(1)}{\Gamma(\frac{a+2}{2})}\left(\gamma+\psi\left(\frac{a}{2}+1\right)\right)\\
-&=&-\lim_{a\to0}\frac{1}{4}\left(\frac{2}{a}+O(a^2)\right)\left(\frac{\pi^2}{12}a+O(a^2)\right)\\
-&=&-\frac{\pi^2}{24}.
-\end{eqnarray}
-Here we use the following facts, valid as $a\to0$:
-$$ \Gamma(a)\approx\frac{1}{a}, \qquad \gamma+\psi\left(\frac{a}{2}+1\right)\approx\frac{\pi^2}{12}a.$$<|endoftext|>
-TITLE: Is this a way to prove there are infinitely many primes?
-QUESTION [21 upvotes]: Someone gave me the following fun proof of the fact there are infinitely many primes. I wonder if this is valid, if it should be formalized more or if there is a falsehood in this proof that has to do with "fiddling around with divergent series".
-Consider the product
-$\begin{align}\prod_{p\text{ prime}} \frac p{p-1} &= \prod_{p\text{ prime}} \frac 1{1-\frac1p}\\&=\prod_{p\text{ prime}}\left(\sum_{i=0}^\infty\frac1{p^i}\right)\\&=(1+\tfrac12+\tfrac14+\ldots)(1+\tfrac13+\tfrac19+\ldots)\ldots&(a)\\&=\sum_{i=1}^\infty\frac1i&(b)\\&=\infty\end{align}\\\text{So there are infinitely many primes }\blacksquare$
-Especially the step $(a)$ to $(b)$ is quite nice (and can be seen by considering the unique prime factorization of each natural number). Is this however a valid step, or should we be more careful, since we're dealing with infinite, diverging series?
-
-REPLY [19 votes]: The rigorous approach, as noted in comments.
-If there are only finitely many primes, pick an $n$ so that:
-$$H_n=\sum_{m=1}^{n}\frac{1}m > \prod_{p}\frac{1}{1-\frac 1p}$$
-You can do this because the right side is a finite product of positive real numbers, and the series $\sum_{m=1}^{\infty}\frac1m$ diverges.
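-For intuition, here is a small sketch that pretends the primes below $100$ are all the primes and searches for such an $n$ (the bound $100$ is an arbitrary illustrative choice):
-
-    # Pretend the primes below 100 are ALL the primes. The finite Euler
-    # product is then a fixed number, and the harmonic sum eventually beats it.
-    primes = [p for p in range(2, 100) if all(p % d for d in range(2, p))]
-    product = 1.0
-    for p in primes:
-        product *= p / (p - 1)
-    H, n = 0.0, 0
-    while H <= product:
-        n += 1
-        H += 1 / n
-    print(f"product = {product:.3f}, first n with H_n > product: {n}")
-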
-Next, show that:
-$$\prod_p \frac{1}{1-\frac{1}{p}} > \prod_p \sum_{k=0}^{\lfloor \log_{p}n\rfloor} \frac{1}{p^k}>\sum_{m=1}^{n}\frac{1}{m}$$
-The first inequality holds because each factor $\frac{1}{1-\frac1p}$ is the full geometric series $\sum_{k\geq0}\frac{1}{p^k}$, which strictly exceeds its truncation; the second holds because, by unique factorization, expanding the finite product produces every term $\frac{1}{m}$ with $m\leq n$ (and more). Reaching a contradiction.<|endoftext|>
-TITLE: Combinatorial proof that $\frac{({10!})!}{{10!}^{9!}}$ is an integer
-QUESTION [8 upvotes]: I need help to prove that the quantity of this division :
-$\dfrac{({10!})!}{{10!}^{9!}}$
-is an integer, using a combinatorial proof.
-
-REPLY [17 votes]: You have $10!$ students. You want to divide them into groups of $10$ and line up the groups; in how many different ways can you do this?
-You can line up the students in $(10!)!$ different orders. Now imagine that you count them off by tens and mark a line on the ground between groups of $10$; you have $\frac{10!}{10}=9!$ groups of $10$. Since all you care about is the order of the groups, you can allow the students in each group of $10$ to rearrange themselves within the group as they please, and you’ll still have the same lineup of groups. Each group can rearrange itself in $10!$ different orders, and there are $9!$ groups, so there are $10!^{9!}$ different lineups of students that produce the same lineup of groups. The number of lineups of groups is therefore $\frac{(10!)!}{10!^{9!}}$, which of course must be an integer.
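-As a sanity check, here is a sketch using Python's exact integer arithmetic for the smaller case $n=4$, in the spirit of the generalization below:
-
-    from math import factorial
-
-    # For n = 4: 4! students, groups of 4, 3! groups.
-    # (4!)! / (4!)^(3!) should be an exact integer.
-    n = 4
-    numerator = factorial(factorial(n))
-    denominator = factorial(n) ** factorial(n - 1)
-    print(numerator % denominator == 0)   # True
-    print(numerator // denominator)       # the number of lineups of groups
-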
-You can replace $10$ by $n$ and $9$ by $n-1$ and repeat the argument to show that $\dfrac{(n!)!}{n!^{(n-1)!}}$ is an integer.<|endoftext|>
-TITLE: What's the definition of a "collection"?
-QUESTION [7 upvotes]: I cannot seem to find a formal definition for the following.
-What's a "collection" in the context of set theory?
-
-REPLY [6 votes]: "Collections" have no formal existence in set theory. The word is deliberately left without a technical meaning, such that it is available for speaking about our intuitive non-rigorous idea about, erm, collections of things -- without implying that the collection we're talking about satisfies the formal conditions for being considered a "set" or "class", both of which are often technical terms.<|endoftext|>
-TITLE: Extension of vector bundles on $\mathbb{CP}^1$
-QUESTION [6 upvotes]: Let $\lambda\in\text{Ext}^1(\mathcal{O}_{\mathbb{P}^1}(2),\mathcal{O}_{\mathbb{P}^1}(-2))$ and $E_\lambda$ be a vector bundle on $\mathbb{CP}^1$ which is given by the exact sequence
-\begin{equation}0\to\mathcal{O}_{\mathbb{P}^1}(-2)\to E_\lambda\to\mathcal{O}_{\mathbb{P}^1}(2)\to0,\,\,\,\,(1)\end{equation}
-and corresponds to $\lambda$.
-Then, as discussed here, $E_\lambda\cong\mathcal{O}_{\mathbb{P}^1}(a_\lambda)\oplus\mathcal{O}_{\mathbb{P}^1}(-a_\lambda)$ for some $a_\lambda\in\{0,1,2\}$.
-For each $\lambda$ I want to find the explicit value of $a_\lambda$. Can anyone help me?
-I tried the following explicit construction.
-Let $P=\mathcal{O}_{\mathbb{P}^1}(-1)^{\oplus4}$. There is the surjective map $P\to\mathcal{O}_{\mathbb{P}^1}(2)$, which is given by the evaluation map $H^0(\mathbb{P}^1, \mathcal{O}_{\mathbb{P}^1}(3))\otimes\mathcal{O}_{\mathbb{P}^1}\to\mathcal{O}_{\mathbb{P}^1}(3)$ twisted by $-1$. Let $F=\mathcal{O}_{\mathbb{P}^1}(-2)^{\oplus3}$ be the kernel of this map so we have the exact sequence
-\begin{equation}0\to F\to P\to\mathcal{O}_{\mathbb{P}^1}(2)\to0.\,\,\,\,(2)\end{equation}
-Note that $\text{Ext}^1(P,\mathcal{O}_{\mathbb{P}^1}(-2))=0$. Thus applying $\text{Hom}(-,\mathcal{O}_{\mathbb{P}^1}(-2))$ to $(2)$ we obtain the surjective connecting map
-$$\delta:\text{Hom}(F,\mathcal{O}_{\mathbb{P}^1}(-2))\to\text{Ext}^1(\mathcal{O}_{\mathbb{P}^1}(2),\mathcal{O}_{\mathbb{P}^1}(-2)).$$
-Now from the surjectivity of $\delta$ there exists $f\in\text{Hom}(F,\mathcal{O}_{\mathbb{P}^1}(-2))$ such that $\delta(f)=\lambda$ and $E_\lambda$ is given as the push-out of $F\to P$ and $F\to\mathcal{O}_{\mathbb{P}^1}(-2)$.
-I don't know how to compute $f$ explicitly for a given $\lambda$ and then how to compute the push-out.
-Maybe there exists a different approach to solve this problem?
-
-REPLY [2 votes]: I think that you can use duality (Hartshorne, III Thm 7.1),
-$$ \mathrm{Ext}^1(\mathcal{O}_{\mathbb{P}^1}(2),\mathcal{O}_{\mathbb{P}^1}(-2)) \simeq \mathrm{H}^0(\mathbb{P}^1, \mathcal{O}_{\mathbb{P}^1}(2))^{\vee} $$
-Identify $\mathcal{O}_{\mathbb{P}^1}(-2) = \mathcal{O}_{\mathbb{P}^1}(-p -q)$, where $p$ and $q$ are distinct points. Let $A_{\lambda}$ be a homogeneous quadratic polynomial corresponding to $\lambda$.
-You can use $p$, $q$ and $A_{\lambda}$ to construct $E_{\lambda}$.
-If $A_{\lambda}(p)=A_{\lambda}(q)=0$, then $a_{\lambda}=0$.
-If $A_{\lambda}(p)=0 $ and $A_{\lambda}(q)\neq 0$, then $a_{\lambda}=1$.
-If $A_{\lambda}(p)\neq 0$ and $A_{\lambda}(q)\neq 0$, then $a_{\lambda}=2$.
-Note that choosing $p$ and $q$ is the same as choosing a basis for $\mathrm{H}^0(\mathbb{P}^1, \mathcal{O}_{\mathbb{P}^1}(1))$, which determines a basis for $\mathrm{H}^0(\mathbb{P}^1, \mathcal{O}_{\mathbb{P}^1}(2))$ that you have to fix to construct an isomorphism
-$$ \mathrm{H}^0(\mathbb{P}^1, \mathcal{O}_{\mathbb{P}^1}(2)) \simeq \mathrm{H}^0(\mathbb{P}^1, \mathcal{O}_{\mathbb{P}^1}(2))^{\vee} $$<|endoftext|>
-TITLE: A property of the series $\sum_{i=1}^{\infty}\frac{(1-x)x^i}{1+x^i}$
-QUESTION [7 upvotes]: I think this is a hard question:
-$$f(x)=\sum_{i=1}^{\infty}\frac{(1-x)x^i}{1+x^i}~~~\text{for}~~~x\in(0,1)$$
-Prove: $$\lim_{x\to 1^-}f(x)=\ln(2)$$
-Find the maximum value of $f(x)$.
-
-REPLY [2 votes]: We start by expanding $\frac{1}{1+x^i} = \frac{1-x^i}{1-x^{2i}}$ in a geometrical series to get the double sum
-$$f(x)=\sum_{i=1}^{\infty}\sum_{n=0}^\infty (1-x)(1-x^i)x^{(2n+1)i}$$
-Since the summands above are non-negative we can, by Tonelli's theorem, interchange the order of summation to get
-$$f(x) = \sum_{n=0}^\infty \frac{x^{2n+1}(1-x)^2}{(1-x^{2n+1})(1-x^{2n+2})}$$
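-As a quick numerical sanity check of this rearrangement (a sketch; the truncation lengths are arbitrary), both series agree within truncation error and creep toward $\log 2$ as $x\to1^-$:
-
-    from math import log
-
-    # Compare the original series with the rearranged one, and with log 2.
-    def f_original(x, terms=4000):
-        return sum((1 - x) * x**i / (1 + x**i) for i in range(1, terms))
-
-    def f_rearranged(x, terms=4000):
-        return sum(x**(2 * n + 1) * (1 - x)**2
-                   / ((1 - x**(2 * n + 1)) * (1 - x**(2 * n + 2)))
-                   for n in range(terms))
-
-    for x in (0.5, 0.9, 0.99):
-        print(x, f_original(x), f_rearranged(x), log(2))
-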
-The function
-$$f_n(x) = \frac{x^{2n+1}(1-x)^2}{(1-x^{2n+1})(1-x^{2n+2})}$$
-is monotonically increasing on $[0,1]$ and therefore satisfies
-$$f_n(x) \leq \lim_{x\to 1^-} f_n(x) = \frac{1}{(2n+1)(2n+2)}$$
-Since $\sum_{n=0}^\infty \frac{1}{(2n+1)(2n+2)}$ converges, it follows from the Weierstrass M-test that the series $\sum_{n=0}^\infty f_n(x)$ converges uniformly on $[0,1]$ and therefore
-$$\lim_{x\to 1^-} f(x) = \lim_{x\to 1^-}\sum_{n=0}^\infty f_n(x) = \sum_{n=0}^\infty \lim_{x\to 1^-} f_n(x) = \sum_{n=0}^\infty \frac{1}{(2n+1)(2n+2)} = \log(2)$$
-This is also the maximum value of $f(x)$ on $[0,1]$. The last equality above follows from $\sum_{n=0}^\infty \frac{1}{(2n+1)(2n+2)} = \sum_{n=0}^\infty \left(\frac{1}{2n+1} - \frac{1}{2n+2}\right) = \sum_{n=0}^\infty \frac{(-1)^n}{n+1} = \log(2)$.<|endoftext|>
-TITLE: Find the range of $k$ for which the inequality $k\cos^2x-k\cos x+1\geq0 ,\forall x\in(-\infty,\infty)$ holds.
-QUESTION [5 upvotes]: Find the range of $k$ for which the inequality $k\cos^2x-k\cos x+1\geq0 ,\forall x\in(-\infty,\infty)$ holds.
-
-This is an inequality involving trigonometric function $\cos x$ which varies from $-1$ to $1$.
-If the question had been $kx^2-kx+1\geq 0$ for all real $x$, I would have easily solved it by using the discriminant property of quadratics: if a quadratic is everywhere positive then its discriminant is negative. But I am not able to find the range of $k$ in this question.
-
-REPLY [7 votes]: Just another way: Equivalently, you have to find when
-$$4k(\cos x -\tfrac12)^2 \ge k-4$$
-Now $\cos x \in [-1, 1] \implies t := (\cos x - \tfrac12)^2 \in [0, \tfrac94]$. The inequality $4kt \ge k-4$ is linear in $t$, so it suffices to check the endpoints: $t=0$ forces $k\le 4$ and $t=\tfrac94$ forces $k\ge-\tfrac12$, and so $ k \in [-\frac12, 4]$.<|endoftext|>
-TITLE: Does sequence always give a square
-QUESTION [5 upvotes]: Show that there exists a non-constant positive integer sequence $\{a_{n}\}$, $a_{0}=1$, such that
-$$\dfrac{a^2_{n}+a_{n}}{2}-\dfrac{a^2_{n-1}+a_{n-1}}{2}$$ is a perfect square for all $n\in\mathbb{N}$.
-
-REPLY [3 votes]: Let $a_n=\frac{3^n-1}2$ (so indeed $a_1=\frac{3^1-1}2=1$). Then
-$$ \frac{a_n^2+a_n}2-\frac{a_{n-1}^2+a_{n-1}}2=\frac{3^{2n}-1}8-\frac{3^{2(n-1)}-1}8=3^{2(n-1)}=(3^{n-1})^2.$$<|endoftext|>
-TITLE: Constructing a Convergent/Divergent Series from a Positive Sequence
-QUESTION [7 upvotes]: Suppose that $\{x_n\}$ is a positive sequence such that $x_n \to 0$. Construct a positive sequence $s_n$ such that $\displaystyle \sum_{n=1}^\infty s_n$ diverges while $\displaystyle \sum_{n=1}^\infty x_ns_n$ converges.
-It is clear that if $\sum x_n$ converges, then taking $s_n=1$ for all $n$ will work. We are then reduced to the case where $\sum x_n$ diverges. My only thought is that as $\sum x_n$ does not converge, its partial sums do not form a Cauchy sequence, hence for some $\epsilon>0$, we have
-$$
-\sum_{n=m}^\infty x_n >\epsilon
-$$
-Any hint as to how I should proceed?
-
-REPLY [3 votes]: Because $x_n\to 0,$ we can say: i) the $x_n$'s are bounded above by some $M$; and ii) there are $n_1 < n_2 < \cdots $ such that $x_{n_k} < 1/k^2$ for each $k.$
-Let $E = \{n_1,n_2,\dots \}.$ For $n \in E,$ define $s_n = 1.$ For $n\notin E,$ define $s_n=1/n^2.$ Because $s_n = 1$ for infinitely many $n,$ $\sum s_n =\infty.$ We have
-$$\sum_{n=1}^{\infty} x_ns_n = \sum_{n\in E} x_ns_n + \sum_{n\notin E} x_ns_n.$$
-The first sum on the right is
-$$\sum_{k=1}^\infty x_{n_k}s_{n_k}< \sum_{k=1}^\infty \frac{1}{k^2}\cdot 1 < \infty.$$
-The second sum on the right is no more than
-$$\sum_{n\notin E} M\cdot \frac{1}{n^2} < \infty.$$
-Thus $\sum_{n=1}^{\infty} x_ns_n$ converges as desired.<|endoftext|>
-TITLE: Can Isomorphism between categories be defined only if both categories are small?
-QUESTION [6 upvotes]: In wikipedia page, "Isomorphism of categories
-",
-
-A functor F : C → D yields an isomorphism of categories if and only if
- it is bijective on objects and on morphism sets.
-
-I have heard that bijections of "morphism sets" can be defined only if the category is small. Can "category isomorphism" then be defined only if both categories are small?
-
-REPLY [7 votes]: No. But the answer in detail depends on the foundations you are using. For instance, in $\mathsf{ZFC}$ we may interpret classes using formulas (up to equivalence). Then, a category $C$ has a formula $\mathrm{Ob}(C)$ with one variable such that "$x$ is an object of $C$" means that $\mathrm{Ob}(C)(x)$ holds. Also, we have formulas $\mathrm{Mor}(C)$, $\mathrm{dom}(C)$, $\mathrm{cod}(C)$, $\mathrm{Comp}(C)$, $\mathrm{Id}(C)$ such that the category axioms are satisfied. Now, a functor $F : C \to D$ consists of two formulas $F_O$ and $F_M$ such that
-
-$\forall x ( \mathrm{Ob}(C)(x) \Rightarrow \exists ! y (\mathrm{Ob}(D)(y) \wedge F_O(x,y)))$
-
-i.e., every object of $C$ is mapped to some specified object of $D$; one usually writes $F(x)=y$ instead of $F_O(x,y)$. Likewise for morphisms:
-
-$\forall f (\mathrm{Mor}(C)(f) \Rightarrow \exists ! g ( \mathrm{Mor}(D)(g) \wedge F_M(f,g)))$
-$\forall f,g,x,x',y,y' (F_M(f,g) \wedge \mathrm{dom}(C)(f,x) \wedge \mathrm{cod}(C)(f,x') \wedge F_O(x,y) \wedge F_O(x',y') \Rightarrow \mathrm{dom}(D)(g,y) \wedge \mathrm{cod}(D)(g,y'))$
-
-i.e., if $f$ is a morphism in $C$ from $x$ to $x'$, then $F(f)$ is a morphism in $D$ from $F(x)$ to $F(x')$,
-
-compatibilities with respect to identities and composition (which I won't write down here).
-
-The composition of two functors $F : C \to D$, $G : D \to E$ is the functor $G \circ F : C \to E$ defined by $(G \circ F)_O(x,z) :\Leftrightarrow \exists y (F_O(x,y) \wedge G_O(y,z))$, likewise for $(G \circ F)_M$.
-Now, $F$ is an isomorphism iff $F_O$ and $F_M$ are bijective, i.e.
-
-$\forall y ( \mathrm{Ob}(D)(y) \Rightarrow \exists ! x (\mathrm{Ob}(C)(x) \wedge F_O(x,y)))$
-$\forall g (\mathrm{Mor}(D)(g) \Rightarrow \exists ! f ( \mathrm{Mor}(C)(f) \wedge F_M(f,g)))$
-
-In this case, the inverse functor $F^{-1}$ is defined by $F^{-1}_O(y,x) :\Leftrightarrow F_O(x,y)$ and $F^{-1}_M(g,f) :\Leftrightarrow F_M(f,g)$.<|endoftext|>
-TITLE: Marginal Distribution of Uniform Vector on Sphere
-QUESTION [5 upvotes]: Suppose that $U=(U_1,\ldots,U_n)$ has the uniform distribution on the unit sphere $S_{n-1}=\{x\in\mathbb R^n:\|x\|_2=1\}$.
-I'm trying to understand the marginal distributions of individual components of the vector $U$, say, without loss of generality, $U_1$.
-So far, this is what I have: I know that $U$ is equal in distribution to $Z/\|Z\|_2$, where $Z$ is an $n$-dimensional standard gaussian vector. Thus, $U_1$ is equal in distribution to $Z_1/\|Z\|_2$. Then, we could in principle compute the distribution of $U_1$ as
-$$P[U_1\leq t]=P\big[Z_1\leq\|Z\|_2t\big],$$
-but computing the right-hand side above by conventional means is horribly messy (i.e., nested integrals over the set $[x_1\leq\|x\|_2 t]$).
-Is there a more practical/intuitive way of computing this distribution?
-
-REPLY [4 votes]: I hope that this is still relevant to you.
-The answer can be found in eq. 1.26:
-Fang, Bi-Qi; Fang, Kai-Tai, Symmetric multivariate and related distributions, (2017). (https://books.google.co.il/books?hl=iw&lr=&id=NL1HDwAAQBAJ&oi=fnd&pg=PT10&ots=u5cnMFtVxP&sig=ZgnWbGkG8qdVARoJ64mfm9fcag0&redir_esc=y#v=onepage&q&f=false).
-The marginal density of $(z_1/\Vert \textbf{z} \Vert, ..., z_k /\Vert \textbf{z} \Vert )$ is
-$$\frac{\Gamma(n/2)}{\Gamma((n-k)/2) \pi^{k/2}} \left(1 - \sum_{i=1}^k z_i^2 \right)^{(n-k)/2 -1}$$
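-For what it's worth, a quick Monte Carlo sketch (assuming NumPy; the sample size and bin count are arbitrary) agrees with the $k=1$ case of this density:
-
-    import numpy as np
-    from math import gamma, pi
-
-    # Spot-check the k = 1 marginal for n = 5: the density should be
-    # c * (1 - z^2)^((n-3)/2) on [-1, 1], c = Gamma(n/2)/(Gamma((n-1)/2)*sqrt(pi)).
-    rng = np.random.default_rng(0)
-    n, N = 5, 200_000
-    Z = rng.standard_normal((N, n))
-    U1 = Z[:, 0] / np.linalg.norm(Z, axis=1)          # first coordinate of U
-    hist, edges = np.histogram(U1, bins=20, range=(-1, 1), density=True)
-    mids = (edges[:-1] + edges[1:]) / 2
-    c = gamma(n / 2) / (gamma((n - 1) / 2) * pi**0.5)
-    print(np.max(np.abs(hist - c * (1 - mids**2) ** ((n - 3) / 2))))  # small
-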
-The density above can be derived from the Dirichlet distribution.<|endoftext|>
-TITLE: Subgroup of $\mathbb{C}^*$ such that $\mathbb{C}^*/H \cong \mathbb{R}^*$
-QUESTION [9 upvotes]: Define $\mathbb{C}^* = \mathbb{C}\setminus \{0\}$ and $\mathbb{R}^* = \mathbb{R}\setminus \{0\}$
-
-Does there exist a subgroup $H$ of $\mathbb{C}^*$ such that $\mathbb{C}^*/H$ is isomorphic to $\mathbb{R^*}$?
-
-I knew that $\mathbb{C}^*/{U}\cong \mathbb{R}^+$, with $U = \{ z \in \mathbb{C}:|z| = 1\}$, but how about $\mathbb{R}^*$?
-Thanks a lot.
-
-REPLY [14 votes]: Suppose there exists a group epimorphism $\phi \colon \mathbb{C}^\times \to \mathbb{R}^\times$. Then there exists some $z \in \mathbb{C}^\times$ with $\phi(z) = -1$. If $w$ is a square root of $z$, then $\phi(w)^2 = -1$, which is not possible in $\mathbb{R}^\times$.
-This shows more generally that the image of every group homomorphism $\mathbb{C}^\times \to \mathbb{R}^\times$ already lies in $\mathbb{R}_{>0}$.<|endoftext|>
-TITLE: Modules over associative algebras are just special cases of "ordinary" modules over rings?
-QUESTION [5 upvotes]: By module over a ring, I mean always a right-module. All rings are supposed to be unital, and the module fulfills $m\cdot 1 = m$. If $R$ is commutative and $M$ a right-module, we can define $rx := xr$ and get also a left-module. In this situation we call $M$ a bimodule, and if not otherwise said this natural definition is applied if I speak of bimodules.
-An algebra $A$ over a commutative ring $R$ is itself a ring with $1$, which is also a $R$-module, and such that for all $r \in R$ and $x,y \in A$ we have
-$$
- (rx)y = r(xy) = x(ry).
-$$
-As the ring is assumed to be commutative, we essentially have a bimodule over $R$ as written above.
-A module $M$ over an algebra $A$ which is defined over some ring $R$ is itself an $R$-module, for which we have an operation $M \times A \to M$ such that
-for each $u, v \in M$ and $x,y \in A$ and $r \in R$ we have
-(1) $(u+v)x = ux + vx$
-(2) $v(x+y) = vx + vy$
-(3) $(vx)y = v(xy)$
-(4) $v1 = v$
-(5) $(rv)x = r(vx) = v(rx)$.
-This is the standard definition I see everywhere. But I guess I observed the following. For an algebra we can embed $R$ into $A$ by identifying it with $R\cdot 1$ for $1 \in A$. So the assertion that $M$ must be an $R$-module is implied by (1), (2), (3) and (4). So we can just define a module over an algebra as an ordinary module over $A$ seen as a ring with $1$. As $A$ is in general not commutative, this is just a right-module. But by the embedding, and the additional requirement for an algebra, every element of $R$ is in the center $Z(A) = \{ x : xy = yx \mbox{ for all y} \in A \}$, restricted to $R$ everything is fine and we again have a bimodule. So the only thing that makes modules over algebras special here is (5). But here we have
-\begin{align*}
- (rv)x & = (vr)x & \mbox{definition of $M$ as bimodule over $R$} \\
- & = v(rx) & \mbox{by (3)} \\
- & = v(xr) & \mbox{$R$ is central in $A$} \\
- & = (vx)r & \mbox{by (3)} \\
- & = r(vx) & \mbox{definition of $M$ as bimodule over $R$}
-\end{align*}
-so we see that in the third and last line we have recovered everything from (5).
-So as I see it, a module over an algebra is just an ordinary module over the algebra seen as a ring, so why bother with this extra definition? I guess if we generalise further, for example look at non-associative algebras, then they are no longer rings and we cannot define modules over them as special cases of modules over rings. But such generalizations are not considered most of the time (and not in the textbooks I am reading right now), and nevertheless modules over algebras are usually defined separately from modules over rings.
-So why that? Or have I overlooked something and my computations are wrong?
-Remark: These modules over algebras come from the representation theory of (finite) groups.
-
-REPLY [2 votes]: Indeed, your proof seems to work fine, assuming that in the definition you require $M$ to be a right $R$-module. In that case one could simplify the definition by requiring that a module over an algebra $A$ is just an $A$-module (the $R$-module structure can be derived in the way you showed).
-On the other hand, if we regard $M$ as a left $R$-module, things change.
-If $M$ is just a left $R$-module, the condition $(5)$ is required in order to ensure that the left action of $R$ and the right action of $A$ (and so also the induced right action of $R$) are compatible: that is, that $M$ is an $(R,A)$-bimodule.
-Note that in general even for $(R,R)$-bimodules (i.e. left and right $R$-modules where the two actions are compatible) it is not required that the left and right actions coincide.
-Hope this helps.<|endoftext|>
-TITLE: Example of a function differentiable in a point, but not continuous in a neighborhood of the point?
-QUESTION [15 upvotes]: Is there a function that is differentiable in a point $x_0$ (and so continuous of course in $x_0$) but not continuous in a neighborhood of $x_0$ (as said, besides the point $x_0$ itself)?
-Can anyone suggest me an example of that ?
-Thanks a lot in advance
-
-REPLY [11 votes]: To begin with we take $x_0 = 0$ : we want $f(0) = 0$ and $f'$ defined at $0$, with $f'(0)=1$.
-Then you take :
-
-$f(x) = x$ if $x \in \mathbb{Q}$
-$f(x) = x + x^2$ if $x \notin \mathbb{Q}$
-
-$f$ is continuous only at $0$. Moreover, you can check that $f'(0) = 1$: indeed we have $f(0) = 0$, so $\mid \frac{f(x) - f(0)}{x} - 1\mid = \mid \frac{f(x)}{x} - 1\mid$. This quantity equals either $0$ if $x \in \mathbb{Q}$ or $\mid x \mid$ if $x \notin \mathbb{Q}$. Either way, when $x$ approaches $0$, $\mid \frac{f(x) - f(0)}{x} -1 \mid $ approaches $0$. Although $f$ is almost nowhere continuous, the derivative at $0$ exists and $f'(0) = 1$.
-
-More generally, if you want $g(x_0) = y_0$, $g'(x_0) = a$ and $g$ continuous only at $x_0$, you can take :
-$g(x) = y_0 + a\times f(x-x_0)$
-A quite equivalent (but slightly different) explicit version would be :
-
-$g(x) = y_0 + a(x-x_0)$ if $x\in \mathbb{Q}$
-$g(x) = y_0 + a(x-x_0) +(x-x_0)^2$ if $x\notin \mathbb{Q}$<|endoftext|>
-TITLE: Is it true that a ring has no zero divisors iff the right and left cancellation laws hold?
-QUESTION [9 upvotes]: This is the definition of zero divisor in Hungerford's Algebra:
-
-A zero divisor is an element of $R$ which is BOTH a left and a right zero divisor.
-
-It follows a statement:
-It is easy to verify that a ring $R$ has no zero divisors
-if and only if the right and left cancellation laws hold in $R$;
-that is,
-for all $a,b,c\in R$ with $a\neq 0$,
-$$ab=ac~~~\text{ or }~~~ba=ca~~~\Rightarrow~~~ b=c.$$
-I think it is not true.
-But I can't find a counterexample.
-
-REPLY [6 votes]: Lemma: A ring has a left (or right) zero-divisor if and only if it has a zero divisor.
-Proof: Assume $ab=0$ for $a,b\neq 0$.
-If $ba=0$, you are done - $a$ is both a left and right zero divisor.
-If $ba\neq 0$, then $a(ba)=(ab)a=0$ and $(ba)b=b(ab)=0$, so $ba$ is a left and right zero divisor.
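-To see the lemma in a concrete ring, here is a small sketch (assuming NumPy) in the ring of $2\times2$ real matrices, where $ab=0$ but $ba\neq0$:
-
-    import numpy as np
-
-    # a is a left zero divisor (ab = 0) but ba != 0; the lemma then says
-    # that ba is a two-sided zero divisor, which we check directly.
-    a = np.array([[0, 1], [0, 0]])
-    b = np.array([[1, 0], [0, 0]])
-    print(a @ b)            # zero matrix
-    print(b @ a)            # nonzero matrix
-    print(a @ (b @ a))      # zero: ba is a right zero divisor
-    print((b @ a) @ b)      # zero: ba is a left zero divisor
-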
-
-Now it is much easier to prove your theorem.
-If $ax=ay$ and $R$ has no zero-divisors, then $a(x-y)=0$. But, by the lemma, $R$ also has no left zero divisors, so either $a=0$ or $x-y=0$; since $a\neq 0$, we conclude $x=y$.
-Similarly for $xa=ya$.
-On the other hand, if cancellation is true, then $a\cdot b=0=a\cdot 0$ means that either $a=0$ or $b=0$. So there can't be any left zero divisors, and thus no zero divisors.
-
-REPLY [2 votes]: Suppose $ab = 0$ with $a, b \ne 0$. Either $ba = 0$ (which means $a$ and $b$ are zero-divisors), or $ba \ne 0$, in which case $ba$ is a zero-divisor because $a(ba) = 0$ and $(ba)b = 0$.<|endoftext|>
-TITLE: How can I visualize independent and dependent set of vectors?
-QUESTION [9 upvotes]: Can someone help me visualize those concepts? It will also help me understand it better.
-Thanks :)
-
-REPLY [2 votes]: Two parallel vectors are linearly dependent. Three vectors lying in a plane are linearly dependent. Four vectors lying in the same three-dimensional hyperplane are linearly dependent.
-In n-dimensional space, you can find at most n linearly independent vectors.
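-As shown in the sketch below (assuming NumPy), this can be checked mechanically: a set of vectors is linearly independent exactly when the matrix having them as rows has full rank.
-
-    import numpy as np
-
-    # Three "rods" in R^3; the third lies in the plane of the first two,
-    # so the set is linearly dependent and the rank is only 2.
-    rods = np.array([[1.0, 0.0, 0.0],
-                     [0.0, 1.0, 0.0],
-                     [1.0, 1.0, 0.0]])
-    print(np.linalg.matrix_rank(rods))   # 2
-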
-Think of the vectors as rods with which you want to span up a tent: one rod gives you just a line, two rods give you a face. you need a third rod outside of that plane (linearly independent) to span up a volume. Any additional rods cannot span into a fourth dimension, so four rods in three dimensions must be linearly dependent.<|endoftext|>
-TITLE: Number of different ordered pairs
-QUESTION [7 upvotes]: Let $X = \{1,2,3,4,5\}$. What is the number of different ordered pairs $(Y,Z)$ that can be formed such that
-$$Y\subseteq X$$
-$$Z\subseteq X$$
-$$Y\cap Z=\emptyset$$
-How should I split this question into cases?
-
-REPLY [3 votes]: Here's another way to think about the problem.
-Each element can be 'colored' with the color $Y$, $Z$ or $0$, based on whether it belongs to $Y$, to $Z$, or to neither. Since $Y \cap Z = \emptyset$, each element can be assigned exactly one color for every possible ordered pair. For example, $Y = \{ 1,2,3\}$ and $Z= \{4 \}$ would give the coloring $YYYZ0$.
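-A brute-force enumeration (a sketch) confirms the count before we derive it in closed form:
-
-    from itertools import chain, combinations
-
-    # Enumerate all ordered pairs (Y, Z) of subsets of X and keep the
-    # disjoint ones; the total should match the coloring count below.
-    def subsets(s):
-        return chain.from_iterable(combinations(s, r) for r in range(len(s) + 1))
-
-    X = {1, 2, 3, 4, 5}
-    count = sum(1 for Y in subsets(X) for Z in subsets(X)
-                if not set(Y) & set(Z))
-    print(count)   # 243 == 3**5
-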
-There are 3 different colors and 5 symbols, so $3^5$ possible colorings. Each coloring corresponds to a pair $(Y,Z)$, so there are $3^5$ such ordered pairs.<|endoftext|>
-TITLE: How do you evaluate $\int_{0}^{\frac{\pi}{2}} \frac{(\sec x)^{\frac{1}{3}}}{(\sec x)^{\frac{1}{3}}+(\tan x)^{\frac{1}{3}}} \, dx ?$
-QUESTION [12 upvotes]: Problem:
-$$\int_{0}^{\frac{\pi}{2}} \frac{(\sec x)^{\frac{1}{3}}}{(\sec x)^{\frac{1}{3}}+(\tan x)^{\frac{1}{3}}} dx$$
-My attempt:
-I tried applying the property: $\int_{0}^{a} f(x)dx$ = $\int_{0}^{a} f(a-x)dx$ but got nowhere since the denominator changes. Even on adding the two integrals by taking LCM of the denominators, the final expression got more complicated because the numerator and denominator did not have any common factor.
-I also tried dividing numerator and denominator by $(secx)^{\frac{1}{3}}$ to get
-$$\int_{0}^{\frac{\pi}{2}} \frac{1}{1+(\sin x)^{\frac{1}{3}}} dx$$ and then tried substituting $\sin x = t^3$ to get a complicated integral in $t$, which I couldn't evaluate.
-
-How do you evaluate this integral? (PS: If possible, please evaluate this without using special functions since this is a practice question for an entrance exam and we've only learnt some basic special functions and the gamma function.)
-
-REPLY [8 votes]: $$ \int_{0}^{\pi/2}\frac{1}{1+(\sin x)^{1/3}} = \int_{0}^{\pi/2}\frac{1-(\sin x)^{1/3}+(\sin x)^{2/3}}{1+\sin x}\,dx=I_1-I_2+I_3$$
-where:
-$$ I_1 = \int_{0}^{\pi/2}\frac{dx}{1+\cos x}=\int_{0}^{\pi/2}\frac{1-\cos x}{\sin^2 x}\,dx = \left.\left(\csc x-\cot x\right)\right|_{0}^{\pi/2}=1,$$
-$$ I_2 = \int_{0}^{\pi/2}\frac{(\cos x)^{1/3}-(\cos x)^{4/3}}{\sin^2 x}\,dx,\quad I_3 = \int_{0}^{\pi/2}\frac{(\cos x)^{2/3}-(\cos x)^{5/3}}{\sin^2 x}\,dx $$
-but Euler's beta function gives:
-$$ \int_{0}^{\pi/2}(\sin x)^\alpha (\cos x)^{\beta}\,dx = \frac{\Gamma\left(\frac{\alpha+1}{2}\right)\cdot\Gamma\left(\frac{\beta+1}{2}\right)}{2\cdot\Gamma\left(\frac{2+\alpha+\beta}{2}\right)}$$
-hence, after some simplification:
-
-$$ \int_{0}^{\pi/2}\frac{dx}{1+(\sin x)^{1/3}} = 1-\frac{2^{4/3}\pi^2(\sqrt{3}-1)}{3\cdot\Gamma\left(\frac{1}{3}\right)^3}+\frac{2^{2/3}\pi^2(2-\sqrt{3})}{9\cdot\Gamma\left(\frac{2}{3}\right)^3}. $$<|endoftext|>
-TITLE: I know there are three real roots for cubic however cubic formula is giving me non-real answer. What am I doing wrong?
-QUESTION [6 upvotes]: I want to solve the equation $x^3-x=0$ using this cubic equation. For there to be real roots for the cubic (I know the roots are $x=-1$, $x=0$, $x=1$), I assume there must be a positive inside the inner square root. (Or is that wrong?)
-However, when I substitute in $a=1$, $b=0$, $c=-1$, $d=0$, the square root term inside the cube root terms becomes
-$$\sqrt{\;\left(\;2(0)^3 - 9(1)(0)(-1) + 27(1)^2(0)\;\right)^2 - 4 \left(\;(0)^2 - 3(1)(-1)\;\right)^3\quad}$$
-It gives me $\sqrt{-108}$, which is $10.39i$. Now that I have a non-real number as part of the equation I can't see any way for it to be cancelled or got rid of, even though I know there is a real answer.
-Could somebody please tell me how I can get a real answer and what I am doing wrong? Thanks.
-
-REPLY [7 votes]: Okay, so you're getting that operands of the cube root parts of the formula will look like this:
-$$p + q i \qquad\text{and}\qquad p - q i$$
-with some pesky non-zero $q$ (namely, $\sqrt{108}$). Well, these values are conjugates, so that their respective (principal) cube roots are conjugates, as well. For the $x_1$ value in your formula, these conjugate cube roots add together, and their imaginary parts conveniently cancel. The same kind of cancellation happens for the $x_2$ and $x_3$ values, too, because the factors $\frac{1}{2}(1+i\sqrt{3})$ and $\frac{1}{2}(1-i\sqrt{3})$ are themselves conjugates. When the dust settles, you'll have the three real roots you expect.
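-Here is a small sketch of this cancellation in code (using the depressed cubic $t^3+pt+q=0$; here $b=0$, so $x=t$, $p=-1$, $q=0$):
-
-    import cmath
-
-    # Cardano's formula for t^3 + p*t + q = 0 with p = -1, q = 0 (x^3 - x = 0).
-    # The discriminant term is imaginary, yet the imaginary parts cancel
-    # between the two conjugate cube-root terms.
-    p, q = -1.0, 0.0
-    d = cmath.sqrt(q**2 / 4 + p**3 / 27)     # sqrt(-1/27), purely imaginary
-    u = (-q / 2 + d) ** (1 / 3)              # principal complex cube root
-    v = -p / (3 * u)                         # the conjugate cube root: u*v = -p/3
-    w = complex(-0.5, 3**0.5 / 2)            # primitive cube root of unity
-    for root in (u + v, w * u + w.conjugate() * v, w.conjugate() * u + w * v):
-        print(root)   # 1, -1, 0 up to ~1e-16 imaginary parts
-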
-As @Aloizio points out, this is historically how imaginary numbers snuck into mathematics: as temporary diversions on the way to real solutions of cubic equations. These numbers seemed weird, maybe even scary, but they cancelled in the end so no harm done. Then people started to wonder about a world in which these things didn't always cancel ...<|endoftext|>
-TITLE: List of old books that modern masters recommend
-QUESTION [36 upvotes]: This is a fairly unambigious question but it hasn't been asked before so I thought I would ask it myself:
-Which old books do the modern masters recommend?
-There are old books where the mathematical fields explored there have been so thoroughly plowed by later mathematicians that it would be extremely naive and foolish to think that anything of value can still be salvaged from them. Those books are no longer read for the purposes of inspiring research in that area, but are curiosities which are mainly consulted for historical purposes. These are not the old books I am talking about.
-The type of old book I refer to is one which still has treasures hidden inside it waiting to be explored. Such is for example Gauss's Disquisitiones Arithmeticae, which Manjul Bhargava claimed inspired his work on higher composition laws, for which he won the 2014 Fields medal.
-This is why we need the opinion of the modern masters in the field as to which books are worth consulting today, because only a master in any given field (with his experience of the literature etc) can point us to the fruitful works in that field.
-If you list a book, please include the quote of the master who recommended it.
-Here is my attempt at the first two:
-Fields medallist Alan Baker recommends Gauss's Disquisitiones Arithmeticae in his book A Comprehensive Course of Number Theory: "The theory of numbers has a long and distinguished history, and indeed the concepts and problems relating to the field have been instrumental in the foundation of a large part of mathematics. It is very much to be hoped that our exposition will serve to stimulate the reader to delve into the rich literature associated with the subject and thereby to discover some of the deep and beautiful theories that have been created as a result of numerous researches over the centuries. By way of introduction, there is a short account of the Disquisitiones Arithmeticae of Gauss, and, to begin with, the reader can scarcely do better than to consult this famous work."
-Andre Weil recommends Euler's Introductio in Analysin Infinitorum for today's Precalculus students, as quoted by J D Blanton in the preface to his translation of that book: "... our students of mathematics would profit much more from a study of Euler's Introductio in analysin infinitorum, rather than of the available modern textbooks."
-I feel this question will be found useful by many people who are looking to follow Abel's advice in a sensible and efficient manner, and I hope this question is clear-cut enough that it doesn't get voted for closure.
-Edit: Thanks to Bye-World for bringing up the question of who qualifies as an old master. My response is that any great dead mathematician should qualify as an old master, so Grothendieck is an old master for instance.
-
-REPLY [7 votes]: I would like to add another mathematics book to Breven Ellefsen's list of books above. It's also a great guide for grad students or undergrads struggling with math in general, because it goes over general problem-solving techniques along with examples from a variety of math topics, such as geometry, calculus, and proofs both direct and indirect.
-
-G. Polya: How to Solve It
-
-Hermann Weyl, in an article for Mathematical Reviews, had this to say about the book:
-
-"This Elementary textbook on heuristic reasoning, shows anew how keen its author is on questions of method and the formulation of methodological principles. Exposition and illustrative material are of a disarmingly elementary character, but very carefully thought out and selected."<|endoftext|>
-TITLE: Prove that $\frac{a}{b+2c}+\frac{b}{c+2a}+\frac{c}{a+2b} \geq 1$
-QUESTION [5 upvotes]: For three positive real numbers $a,b,$ and $c$, prove that $$\dfrac{a}{b+2c}+\dfrac{b}{c+2a}+\dfrac{c}{a+2b} \geq 1.$$
-
-Attempt
-Rewriting, we obtain $\dfrac{2 a^3+2 a^2 b-3 a^2 c-3 a b^2-3 a b c+2 a c^2+2 b^3+2 b^2 c-3 b c^2+2 c^3}{(a+2b)(2a+c)(b+2c)} \geq 0$. Should I then proceed to use rearrangement, AM-GM, etc. on the numerator?
-
-REPLY [7 votes]: Very similar to Nesbitt's inequality. If we set $A=b+2c,B=c+2a,C=a+2b$, we have $ 4A+B-2C = 9c $ and so on, and the original inequality can be written as:
-$$ \frac{4B+C-2A}{9A}+\frac{4C+A-2B}{9B}+\frac{4A+B-2C}{9C} \geq 1 $$
-or:
-$$ \frac{4B+C}{A}+\frac{4C+A}{B}+\frac{4A+B}{C} \geq 15 $$
-that follows by writing the left-hand side as $3\left(\frac{B}{A}+\frac{C}{B}+\frac{A}{C}\right)+\left(\frac{B}{A}+\frac{A}{B}\right)+\left(\frac{C}{B}+\frac{B}{C}\right)+\left(\frac{A}{C}+\frac{C}{A}\right)$ and combining $\frac{B}{A}+\frac{A}{B}\geq 2$ (a consequence of the AM-GM inequality) with $\frac{B}{A}+\frac{C}{B}+\frac{A}{C}\geq 3$ (a consequence of the AM-GM inequality again): $9+2+2+2=15$.<|endoftext|>
-TITLE: Liouville's theorem and the Wronskian
-QUESTION [5 upvotes]: Liouville's theorem states that under the action of the equations of motion, the phase volume is conserved. The equations of motion are the flow ODE's generated by a Hamiltonian field $X$ and the solutions to Hamilton's equations are the integral curves of the system. The effect of a symplectomorphism $\phi_t$ is that it would take $(p^i,q_i) \to (p^j,q_j)$ or $\omega \to \omega'$ and that would be the contribution of the Jacobian of the transformation. If the flow $\phi_t$ changes variables from time, say, $t$ to time $0$, the Jacobian would be
-$$ J(t) = \left| \begin{array}{ccc}
-\frac{\partial p_1(t) }{\partial p_1(0) } & \ldots & \frac{\partial p_1(t) }{\partial q_{n}(0) } \\
-\vdots & \ddots & \vdots \\
-\frac{\partial q_{n}(t) }{\partial p_1(0) } &\ldots & \frac{\partial q_{n}(t) }{\partial q_{n}(0) }
-\end{array} \right| $$
-Now, the derivatives appearing in the Jacobian satisfy a linear system of ordinary differential equations and moreover, viewed as a system of ordinary differential equations, it is easy to see that the Jacobian we need is also the Wronskian $W$ for the system. My first question is: why is this true? I know that the definition of the Wronskian of two differentiable functions $f,g$ is $W(f,g)=f'g-g'f$. Then I read that Wronskians of linear systems satisfy the vector ODE
-\begin{equation*}
-\frac{d}{dt}{\bf{y}} = {\bf{M}}(t) {\bf{y}}
-\end{equation*}
-What exactly is this equation?
-Then I read that
-\begin{equation*}
-W(t) = W(0) \exp\left( \int_0^t \mathrm{d}t' \, \text{Tr}\,{\bf{M}}(t') \right)
-\end{equation*}
-and for our case this trace vanishes, thus $J(0)=J(t)=1$ which shows that the contribution of the Jacobian over volume integrals is trivial.
-Can you help me to understand the above statements? Unfortunately I have lost the reference out of which I read those.
-
-REPLY [3 votes]: Equations of Motion
-What you are describing is Hamilton's view of the evolution of a
-dynamical system. $\mathbf q$ is the vector describing the system's configuration with generalized coordinates (or degrees of freedom) in some abstract configuration space isomorphic to $\mathbb R^{s}$. For instance, $s=3n$ particle coordinates for a system of $n$ free particles in Euclidean space. $\mathbf p$ is the vector of generalized momenta attached to the instantaneous time rate of the configuration vector. For example, for particles of mass $m$ in non-relativistic mechanics, $\mathbf p= m \frac{d}{dt}\mathbf q$. Classical mechanics postulates that the simultaneous knowledge of both $\mathbf q(t)$ and $\mathbf p(t)$ at time $t$ is required to predict the system's temporal evolution for any time $t'>t$ (causality). So the complete dynamical state of the system is in fact described by the phase $\mathbf y =(\mathbf p, \mathbf q)$, which evolves in an abstract space called the phase space, isomorphic to $\mathbb R^{2s}$. The fact that the knowledge of this phase is sufficient to predict the evolution demands, from a mathematical point of view, that the configuration $\mathbf q$ evolve according to a system of $s$ ODEs of at most second order in time (known as Lagrange equations), or equivalently that the phase evolve according to a system of $2s$ ODEs of first order in time known as Hamilton's equations of motion,
-$\frac{d\mathbf p}{dt}=-\frac{\partial H}{\partial \mathbf q}(\mathbf y,t)$
-$\frac{d\mathbf q}{dt}=\frac{\partial H}{\partial \mathbf p}(\mathbf y,t)$
-given some Hamilton function $H(\mathbf y,t)$ describing the system. Physically it represents the mechanical energy. For non-dissipative systems it does not depend on time explicitly. As a consequence of Liouville's theorem it is a conserved quantity along phase-space curves.
-More generally, the equations of motion can be formulated as a system of Euler–Cauchy ODEs
-$\frac{d\mathbf y}{dt} = \mathbf f(\mathbf y,t)$
-with $\mathbf f$ some vector function over $\mathbb R^{2s}\times\mathbb R$
-In accordance with Hamiltonian formalism, $\mathbf f$ appears to be
-$\mathbf f= (-\frac{\partial H}{\partial \mathbf q}, \frac{\partial H}{\partial \mathbf p}) $
-Liouville's theorem
-Now consider the phase flow $\mathbf y_t$, that is consider the one-parameter ($t\in \mathbb R$) group of transformations $\mathbf y_0\mapsto \mathbf y_t(\mathbf y_0)$ mapping initial phase at time $t=0$ to current one
-in phase space. This is another parametrization of the phase using
-$\mathbf y_0$ as curvilinear coordinates. Suppose now you have hypothetical
-system replicas with initial phase points distributed to fill some volume $\mathcal V_0$ in phase space. Liouville's theorem says that the cloud of points will evolve so as to preserve its density along the curves
-in phase space, like an incompressible fluid flow, keeping the
-filled volume unchanged. Since
-$\mathcal V(t)=\int_{\mathcal V_0} \det \frac{\partial \mathbf y}{\partial \mathbf y_0}(\mathbf y_0,t)\ \underline{dy_0} = \int_{\mathcal V_0} J(\mathbf y_0,t)\ \underline{dy_0}$
-Now compute the volume-change time rate at any instant $t$ introducing Euler's
-form (1),
-$\frac{d\mathcal V}{dt}=\int_{\mathcal V} \frac{\partial \mathbf f}{\partial t}+ \nabla_{\mathbf y_0} \cdot \mathbf f(\mathbf y_0,t)\ \underline{dy_0}$. Setting this time rate to zero gives Liouville's theorem in local form:
-$\frac{\partial \mathbf f}{\partial t}+ \nabla_{\mathbf y_0} \cdot \mathbf f(\mathbf y_0,t)=0$.
-Applying them to Hamilton's form,
-$\frac{\partial }{\partial t}[\frac{\partial H}{\partial \mathbf p}-\frac{\partial H}{\partial \mathbf q}] + \frac{\partial}{\partial \mathbf q}\frac{\partial H}{\partial \mathbf p} - \frac{\partial}{\partial \mathbf p}\frac{\partial H}{\partial \mathbf q}=0$ (1)
-For systems conservative in energy, $H(\mathbf y,t)$ does not depend upon time explicitly and the left-most term in (1) vanishes. Liouville's theorem can be generalized to any physical observable $A(\mathbf y,t)$ depending upon the phase of the system, which is conserved along the curves of the phase space,
-$\frac{dA}{dt}= \frac{\partial A}{\partial t} + \frac{\partial A}{\partial \mathbf q}\frac{\partial H}{\partial \mathbf p} - \frac{\partial A}{\partial \mathbf p}\frac{\partial H}{\partial \mathbf q}=0$
-The Wronskian
-Now consider the situation where Euler's ODE can be linearized around the phase
-$\mathbf y_0$. The vector function $\mathbf f$ is now expressed as a matrix-vector product with the phase. You now have a $2s\times 2s$ square linear system of ODEs.
-$\frac{d\mathbf y}{dt} = {\mathbf f}(\mathbf y_0,0)+M_{\mathbf y_0}(t)(\mathbf y-\mathbf y_0)+\ldots$
-with $M=\frac{\partial f}{\partial \mathbf y_0}$.
-One can solve a new system of the form, $\mathbf y'=M_{\mathbf y_0}(t) \mathbf y$ for any translated phase around $\mathbf y_0$.
-Consider $2s$ phase solutions of this system $(\mathbf y^1, \mathbf y^2,\ldots, \mathbf y^{2s})$. Then the Wronskian
-$W=\det(\mathbf y^1, \mathbf y^2,\ldots, \mathbf y^{2s})$ satisfies
-the first-order ODE,
-$\frac{d}{dt}W= \mathrm{tr}(M_{y_0}) W$
-and can be integrated as
-$W(t) = W_0\exp \int _0^t\mathrm{tr}(M_{\mathbf y_0}(s)) ds$
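-A quick numerical illustration (a sketch, assuming NumPy; step size and duration are arbitrary) for the harmonic oscillator, where $\mathrm{tr}\,M=0$ and the Wronskian should therefore stay constant:
-
-    import numpy as np
-
-    # Phase-space harmonic oscillator: y' = M y with M = [[0, 1], [-1, 0]].
-    # tr(M) = 0, so the formula above predicts W(t) = W(0), i.e. det stays 1.
-    M = np.array([[0.0, 1.0], [-1.0, 0.0]])
-    dt, steps = 1e-3, 5000
-    Y = np.eye(2)                    # fundamental matrix: two solutions as columns
-    for _ in range(steps):
-        # classical RK4 step for Y' = M Y
-        k1 = M @ Y
-        k2 = M @ (Y + 0.5 * dt * k1)
-        k3 = M @ (Y + 0.5 * dt * k2)
-        k4 = M @ (Y + dt * k3)
-        Y += (dt / 6) * (k1 + 2 * k2 + 2 * k3 + k4)
-    print(np.linalg.det(Y))          # ~1.0: the phase volume is conserved
-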
-Hope this helps.<|endoftext|>
-TITLE: Basic understanding of quotients of "things"?
-QUESTION [8 upvotes]: My modern algebra needs some work. Am I right in thinking that $\mathbb{Z}/2\mathbb{Z}$ refers to the two sets $$\{\pm0, \pm2, \pm4, \pm6, \ldots\}$$ and $$\{\pm1, \pm3, \pm5, \pm7, \ldots\}~~?$$ What about $\mathbb{R}/2\mathbb{Z}$ if that makes sense to write? Would that mean $$\{\ldots,[0,1),[2,3),[4,5),\ldots\}$$ and $$\{\ldots,[1,2),[3,4),[5,6),\ldots\}~~?$$ I haven't got round to looking at these "quotients" as I think they're called. It's on my list of things to do. I believe they're to do with equivalence classes? Are there some more "exotic" examples with concrete examples of the sets produced as above? Just asking to see if I'm thinking on the right track. So my original understanding of $\mathbb{R}/2\mathbb{Z}$ was incorrect (as pointed out in answer(s) below).
-EDIT
-Not sure if this is a good way to visualise what's going on with e.g. $\mathbb{R}/2\mathbb{Z}$, but imagine the Cartesian plane with $x$ and $y$ axes. For $\mathbb{R}/2\mathbb{Z}$ I can see the left hand side ($\mathbb{R}$) corresponding to the $y$-axis and the right hand side ($2\mathbb{Z}$) corresponding to the integers on the $x$-axis. If I imagine $2\mathbb{Z}$ on the $x$-axis slicing the plane vertically then what I'm left with is an infinite number of slices each of width $2$. The quotient kind of takes all these slices and stacks them on top of each other so that the only information available to me belongs to $[0,2)$. Positioning along the $x$-axis is lost.
-
-REPLY [12 votes]: A good way to think about quotients is to pretend that nothing has changed except your concept of equality.
-You can think of $\mathbb{Z}/2\mathbb{Z}$ as just the integers (under addition) but multiples of 2 (i.e. elements of $2\mathbb{Z}$) are eaten up like they're zero. So in this quotient world $1=3=-5=41$ etc. and $0=6=104=-58$ etc.
-What is $1+1$? Well, $1+1=2$. But in this quotient group $2=0$ so $1+1=0$. Notice that $1=-3=99$ so $1+1 =-3+99=96$ (which also $=0$). Equivalent "representatives" give equivalent answers.
-Formally, yes, $\mathbb{Z}/2\mathbb{Z} = \{0+2\mathbb{Z}, 1+2\mathbb{Z}\}$ where $0+2\mathbb{Z}=2\mathbb{Z}=$ even integers and $1+2\mathbb{Z}=$ odd integers.
-A more formal version of my previous calculation: $(1+2\mathbb{Z})+(1+2\mathbb{Z}) = (-3+2\mathbb{Z})+(99+2\mathbb{Z}) = (-3+99)+2\mathbb{Z} = 96+2\mathbb{Z}=0+2\mathbb{Z}$.
-If we move to $\mathbb{R}/2\mathbb{Z}$, then elements are equivalence classes: $x+2\mathbb{Z} = \{x+2k \;|\; k \in \mathbb{Z}\} = \{\dots,x-4,x-2,x,x+2,x+4,\dots\}$. Addition works exactly the same as it does in $\mathbb{R}$ (except we have enlarged what "equals" means). So $((3+\sqrt{2})+2\mathbb{Z})+((-10+\pi)+2\mathbb{Z}) = (7+\sqrt{2}+\pi)+2\mathbb{Z}$. Of course, here, $7+\sqrt{2}+\pi$ could be replaced by something like $-3+\sqrt{2}+\pi$.
-In fact, every $x+2\mathbb{Z}$ is equal to $x'+2\mathbb{Z}$ where $x' \in [0,2)$ (add an appropriate even integer to $x$ to get within the interval $[0,2)$). So as a set $\mathbb{R}/2\mathbb{Z}$ is essentially $[0,2)$ (each equivalence class in the quotient can be uniquely represented by a real number in $[0,2)$).
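-A tiny sketch of this representative arithmetic in code (using Python's % operator as the canonical-representative map):
-
-    # Arithmetic in R/2Z via representatives in [0, 2).
-    def cls(x):
-        return x % 2.0          # canonical representative of x + 2Z
-
-    def add(x, y):
-        return cls(x + y)
-
-    print(cls(7.3))             # ~1.3, since 7.3 = 1.3 + 3*2
-    print(add(1.5, 1.7))        # ~1.2, since 3.2 wraps around
-    print(add(-3.0, 99.0))      # 0.0, echoing the 1 + 1 = -3 + 99 = 0 example
-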
-Alternatively, think of this group like $[0,2]$ with $0=2$. Take the interval $[0,2]$ and glue the ends together. It's a circle group. Basically $\mathbb{R}/2\mathbb{Z}$ as a group is just like adding angles (but $2=0$ not $2\pi=0$). :)<|endoftext|>
-TITLE: What was the genesis of Hua's identity?
-QUESTION [7 upvotes]: Many resources I have read prove Hua's identity more-or-less mechanically. I have seen there is more than one raison d'être for Hua's identity: e.g. its connection to the fundamental theorem of projective geometry, and also Jordan algebra theory. My impression, though, is that these two things are mostly application rather than inspiration. (I could be wrong, though.)
-
-I would very much like to know how Hua's identity arose, hopefully with motivation/intuition as to how it was discovered.
-
-I have intended to get ahold of the(?) original proof by Hua in hopes that it contained such information, but so far I haven't managed to lay my hands on the original citation(s). This would be a much-appreciated bonus to any solution.
-If it turns out there is a good retroactive motivation/intuition for deriving the identity that beats the original, of course that would be welcome as well.
-
-Happily I've seen the original paper now (thanks Martin). Surprisingly, the identity cited by all authors since the paper is different-looking from the original. I will have to compare the two versions and see if this version gives any more insight. No direct intuition about its origins is apparent, and indeed it is called "nearly trivial", although it seems a bit mystifying, IMO.
-
-REPLY [5 votes]: Well, I can only guess, but if I were Hua this is what I would have thought to derive the identity and say it is "almost trivial":
-1) We want to generalize the theorem of Cartan and Dieudonné (now called Cartan-Brauer-Hua), so we want to express an element $a$ as a combination of sums, products and inverses of conjugates of $a,b$, for any other $b$ (such that $ab\neq ba$).
-2) Immediately we think about proving our luck with the well known formula for multiplicative commutators in division rings, since it can readily give us $a$ as a factor so that we can solve it "as a fraction", and it is close to conjugations: If we denote $(a,b):=a^{-1}b^{-1}ab$ then
-$$a(a,b)=(a-1)(a-1,b)+1.$$
-Therefore $$a((a,b)-(a-1,b))=1-(a-1,b).$$
-3) If we expand we see we don't yet have conjugates due to the $b$ factors at the right of $(a,b)=a^{-1}b^{-1}ab$, etc.; but since it is the same for all terms, we add a right $b^{-1}$ to solve the problem, and get
-$$a((a,b)-(a-1,b))b^{-1}=(1-(a-1,b))b^{-1}$$
-$$a(a^{-1}b^{-1}a-(a-1)^{-1}b^{-1}(a-1))=b^{-1}-(a-1)^{-1}b^{-1}(a-1)$$
-and now Hua's identity follows by solving for $a$.<|endoftext|>
-TITLE: Wikipedia's explanation of the lambda-calclulus form of Curry's paradox makes no sense
-QUESTION [6 upvotes]: Wikipedia gives multiple explanations of Curry's paradox, one of which is expressed via lambda calculus.
-However, the explanation doesn't look like any lambda calculus I've ever seen, and there's an existing discussion-page section that would appear to indicate I'm not alone.
-The proof is given as follows:
-
-Consider a function $r$ defined as
-$$r = ( λx. ((x x) → y) )$$
-Then $(r r)$ $\beta$-reduces to
-$$(r r) → y$$
-If $(r r)$ is true then its reduct $(r r) → y$ is also true, and, by modus ponens, so is $y$. If $(r r)$ is false then $(r r) → y$ is true by the principle of explosion, which is a contradiction. So $y$ is true and as $y$ can be any statement, any statement may be proved true.
-$(r r)$ is a non-terminating computation. Considered as logic, $(r r)$ is an expression for a value that does not exist.
-
-The last sentence appears to be irrelevant (or perhaps it's intended as a corollary), so I'll ignore it.
-The primary question on the talk page is what $→$ means in a lambda expression. I hypothesized that it might be supposed to indicate that $(x → y)$ is an expression that evaluates to $y$ if $x$ is "true" for some meaning of "truth" in lambda calculus (perhaps if $x$ is a Church numeral greater than 0) and to something else otherwise. But there's no "else" expression specified, so that doesn't seem right, unless $\beta$-reduction on $(x → y)$ is non-terminating for $x=\bar 0$.
-The proof then claims that if $(r r)$ is false (again, it's not clear what "true" or "false" means in this context), then the principle of explosion may be invoked, but it's not clear why $(r r)$ being false would imply a contradiction, so I don't know why the principle of explosion can be invoked here.
-Is this demonstration of the paradox correct but simply poorly explained, or is it incorrect?
-
-REPLY [4 votes]: Long Comment
-Here is an outline of Curry's original version of the paradox, using combinatory logic [hoping that someone smarter than me can derive the correct "$\lambda$-version"].
-See Haskell Curry & Robert Feys & William Craig, Combinatory Logic. Volume I (1958), Ch.8A. THE RUSSELL PARADOX, page 258-59 or Katalin Bimbò, Combinatory Logic : Pure Applied and Typed (2012), Ch.8.1 Illative combinatory logic, page 221-on.
-The system is called illative combinatory logic, that extends pure combinatory logic with the inclusion of new constants that, of course, expands the set of CL-terms.
-The new symbol [Bimbò, page 221-22]:
-
-$\mathsf P$ is the symbol for $\to$, that is, for implication. [...] $\mathsf Pxx$ is thought of as $x \to x$, and $\mathsf P(\mathsf Pxy)(\mathsf P(\mathsf Pyz)(\mathsf Pxz))$ is $(x \to y) \to ((y \to z) \to (x \to z))$.
-This formula is a theorem of classical logic (with formulas in place of $x, y$ and $z$).
-DEFINITION 8.1.2. Let the set of constants be expanded by $\mathsf P$. Further, let the set of CL-terms be built from the constants $\{ \mathsf S, \mathsf K, \mathsf P \}$. The set of theorems comprises equalities (in the extended language) provable with the combinatory axioms restricted to $\mathsf S$ and $\mathsf K$, and assertions obtainable from the assertions (A1)–(A3) by rules (R1)–(R2).
-
-(A1) $\mathsf PM(\mathsf PNM)$ [compare with the propositional axiom : $M \to (N \to M)$]
-(A2) $\mathsf P(\mathsf PM(\mathsf PNR))(\mathsf P(\mathsf PMN)(\mathsf PMR))$ [compare with : $(M \to (N \to R)) \to ((M \to N) \to (M \to R))$]
-(A3) $\mathsf P(\mathsf P(\mathsf PMN)M)M$ [Peirce's axiom : $((M \to N) \to M) \to M$]
-(R1) $\mathsf PMN$ and $M$ imply $N$ [i.e. detachment]
-(R2) $M$ and $M = N$ imply $N$.
-
-This system is called the minimal illative combinatory logic. The axioms together with (R1) are equivalent (with $M$ and $N$ thought of as formulas rather than CL-terms) to the implicational fragment of classical logic.
-
-The "implication" is characterized by the following theorems [Bimbò, page 223]:
-
-$\mathsf PMM$ : self-implication [i.e. $M \to M$]
-$\mathsf P(\mathsf PM(\mathsf PMN))(\mathsf PMN)$ : contraction [i.e. $(M \to (M \to N)) \to (M \to N)$].
-
-In addition, we need [Bimbò, page 12 and page 47]
-
-the fixed point combinator, often denoted by $\mathsf Y$. The axiom for $\mathsf Y$ is $\mathsf Yx \rhd x(\mathsf Yx)$.
-
-Now for the proof of Curry’s original paradox [Bimbò, page 224]:
-
-Let the meta-term $M$ stand for $\mathsf C \mathsf PN$ [...] $\mathsf C \mathsf PNx \rhd_w \mathsf PxN$, and so $\mathsf C \mathsf PN(\mathsf Y(\mathsf C \mathsf PN)) \rhd_w \mathsf P(\mathsf Y(\mathsf C \mathsf PN))N$. Of course, $\mathsf Y(\mathsf C \mathsf PN) \rhd_w \mathsf C \mathsf PN(\mathsf Y(\mathsf C \mathsf PN))$. That is, using $M$, we have that $\mathsf YM = \mathsf P(\mathsf YM)N$. Again we construct a proof (with some equations restated and inserted) to show that $N$ is a theorem.
-
-1. $\mathsf P(\mathsf P(\mathsf YM)(\mathsf P(\mathsf YM)N))(\mathsf P(\mathsf YM)N)$ [contraction]
-2. $\mathsf YM = \mathsf P(\mathsf YM)N$ [provable equation above]
-3. $\mathsf P(\mathsf P(\mathsf YM)(\mathsf YM))(\mathsf P(\mathsf YM)N)$ [by (R2) from 1. and 2.]
-4. $\mathsf P(\mathsf YM)(\mathsf YM)$ [self-implication]
-5. $\mathsf P(\mathsf YM)N$ [by (R1) from 3. and 4.]
-6. $\mathsf YM$ [by (R2) from 5. and 2.]
-7. $N$ [by (R1) from 5. and 6.]
-
-Curry's original exposition [Curry, Feys & Craig, page 4] starts from :
-
-the paradox of Russell. This may be formulated as follows: Let $F(f)$ be the property of properties $f$ defined by the equation
-
-(1) $F(f) = \lnot f(f)$,
-
-where "$\lnot$" is the symbol for negation. Then, on substituting $F$ for $f$, we have :
-
-(2) $F(F) = \lnot F(F)$.
-
-If, now, we say that $F(F)$ is a proposition, where a proposition is something which is either true or false, then we have a contradiction at once. But it is an essential step in this argument that $F(F)$ should be a proposition. [...]
- The usual explanations of this paradox are to the effect that $F$, or at
- any rate $F(F)$, is "meaningless". Thus, in the Principia Mathematica the
- formation of $f(f)$ is excluded by the theory of types.
-
-Following this analysis, Curry "manufactured" the paradoxical combinator $\mathsf Y$ [CFC, page 177] :
-
-Let $Neg$ represent negation. Then the definition (1) becomes
-
-$Ff=Neg(ff) = \mathsf B Neg ff = \mathsf W(\mathsf B Neg)f$.
-
-Thus the definition (1) could be made in the form
-
-(3) $F \equiv \mathsf W(\mathsf BNeg)$.
-
-This $F$ really has the paradoxical property. For we have [that] $FF$ reduces to its own negation.
-
-The definition (3) can be generalized to:
-
-$\mathsf Y = \mathsf W \mathsf S(\mathsf B \mathsf W \mathsf B)$.
-
-We have seen that axioms (A1)-(A3) and the two rules define the implicational fragment of classical logic. We have seen also that if $Neg$ stands for negation, then $\mathsf YNeg$ is equal to its own negative.
-
-[CFC, page 258] it is clear that we cannot ascribe to $\mathsf Y Neg$ properties characteristic of propositions; and that we can avoid the paradox by formulating the category of propositions in such a way that $\mathsf YN$ is excluded from it.
-
-Note. Here is a list of combinators, given by their axioms [Bimbò, page 6]:
-
-$\mathsf I x \rhd x$ : identity combinator
-$\mathsf M x \rhd xx$ : duplicator
-$\mathsf K xy \rhd x$ : cancellator
-$\mathsf W xy \rhd xyy$ : duplicator
-$\mathsf B xyz \rhd x(yz)$ : associator
-$\mathsf C xyz \rhd xzy$ : permutator
-$\mathsf S xyz \rhd xz(yz)$
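-
-Below is a minimal executable sketch of these axioms (my addition, not from Bimbò's book): CL-terms are modeled as nested pairs $(f, a)$ meaning "$f$ applied to $a$", with one step of weak head reduction, and $\mathsf Y$ given its axiom $\mathsf Yx \rhd x(\mathsf Yx)$ from above; $\mathsf P$ and $N$ are treated as inert constants.
-
-    def app(*ts):
-        # left-associated application: app(f, a, b) == ((f, a), b)
-        t = ts[0]
-        for u in ts[1:]:
-            t = (t, u)
-        return t
-
-    def unwind(t):
-        # split a term into its head constant and its argument list
-        args = []
-        while isinstance(t, tuple):
-            args.append(t[1])
-            t = t[0]
-        return t, args[::-1]
-
-    RULES = {  # head: (arity, rewrite of the consumed arguments)
-        'I': (1, lambda x: x),
-        'M': (1, lambda x: app(x, x)),
-        'K': (2, lambda x, y: x),
-        'W': (2, lambda x, y: app(x, y, y)),
-        'B': (3, lambda x, y, z: app(x, app(y, z))),
-        'C': (3, lambda x, y, z: app(x, z, y)),
-        'S': (3, lambda x, y, z: app(x, z, app(y, z))),
-        'Y': (1, lambda x: app(x, app('Y', x))),
-    }
-
-    def head_step(t):
-        # one weak head-reduction step, or None if the head is not a redex
-        head, args = unwind(t)
-        if head in RULES:
-            n, rule = RULES[head]
-            if len(args) >= n:
-                return app(rule(*args[:n]), *args[n:])
-        return None
-
-    # the key reduction in the paradox: Y(CPN) > CPN(Y(CPN)) > P(Y(CPN))N
-    M = app('C', 'P', 'N')
-    t = head_step(app('Y', M))               # M(YM)
-    t = head_step(t)                         # the C-redex fires
-    print(t == app('P', app('Y', M), 'N'))   # True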
-
-The other basic definitions are that of one-step reduction and of weak reduction (denoted by $\rhd_w$), i.e. the reflexive transitive closure of the
-one-step reduction relation.<|endoftext|>
-TITLE: What is the integral of log(z) over the unit circle?
-QUESTION [6 upvotes]: I tried three times, after parametrizing, making a substitution and integrating directly, but in the end I keep concluding that it equals infinity (the Residue Theorem is not applicable, since the unit circle encloses an infinite number of non-isolated singularities).
-Any ideas are welcome.
-Thanks,
-
-REPLY [11 votes]: If $\log z$ is interpreted as principal value
-$${\rm Log}z:=\log|z|+i{\rm Arg} z\ ,$$
-where ${\rm Arg}$ denotes the polar angle in the interval $\ ]{-\pi},\pi[\ $, then the integral in question is well defined, and comes out to $-2\pi i$. (This is the case $\alpha:=-\pi$ in the following computations).
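-A quick numerical check of this value (my addition): with the principal branch, $z=e^{it}$ for $t\in(-\pi,\pi)$, so the integrand is ${\rm Log}(z(t))\,z'(t)=it\cdot ie^{it}$.
-
-    import numpy as np
-    from scipy.integrate import quad
-
-    f = lambda t: (1j * t) * (1j * np.exp(1j * t))   # Log(z(t)) z'(t)
-    re, _ = quad(lambda t: f(t).real, -np.pi, np.pi)
-    im, _ = quad(lambda t: f(t).imag, -np.pi, np.pi)
-    print(re + 1j * im)   # ~ -6.2832j, i.e. -2*pi*i
-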
-But in reality the logarithm $\log z$ of a $z\in{\mathbb C}^*$ is, as we all know, not a complex number, but only an equivalence class modulo $2\pi i$. Of course it could be that due to miraculous cancellations the integral in question has a unique value nevertheless. For this to be the case we should expect that for any $\alpha\in{\mathbb R}$ and any choice of the branch of the $\log$ along
-$$\gamma:\quad t\mapsto z(t):=e^{it}\qquad(\alpha\leq t\leq\alpha+2\pi)$$ we obtain the same value of the integral. This boils down to computing
-$$\int_\alpha^{\alpha+2\pi}(it+2k\pi i)\>ie^{it}\>dt=-\int_\alpha^{\alpha+2\pi}t\>e^{it}\>dt=2\pi i\>e^{i\alpha}\ .$$
-During the computation several things have cancelled, but the factor $e^{i\alpha}$ remains. This shows that the integral in question cannot be assigned a definite value without making some arbitrary choices.<|endoftext|>
-TITLE: Extract imaginary part of $\text{Li}_3\left(\frac{2}{3}-i \frac{2\sqrt{2}}{3}\right)$ in closed form
-QUESTION [5 upvotes]: We know that polylogarithms of complex argument sometimes have simple real and imaginary parts, e.g.
-$\mathrm{Re}[\text{Li}_2(i)]=-\frac{\pi^2}{48}$
-Is there a closed form (free of polylogs and imaginary numbers) for the imaginary part of
-$\text{Li}_3\left(\frac{2}{3}-i \frac{2\sqrt{2}}{3}\right)$
-
-REPLY [2 votes]: Inspired in turn by user 153012's answer which is similar to my answer in this post, then more generally, for any real $k>1$,
-$$\Im\left[\operatorname{Li}_3\left(\frac2k\,\big(1\pm\sqrt{1-k}\big)\right)\right] =\color{red}\mp\frac13\arcsin^3\left(\frac1{\sqrt k}\right)\pm\frac2{\sqrt k}\;{_4F_3}\left(\begin{array}c\tfrac12,\tfrac12,\tfrac12,\tfrac12\\ \tfrac32,\tfrac32,\tfrac32\end{array}\middle|\;\frac1k\right)$$
-where the OP's case was just $k=3$.
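-A numerical spot check of the $k=3$ case with mpmath (my addition); the lower signs correspond to the OP's argument $\frac23-i\frac{2\sqrt2}{3}$:
-
-    from mpmath import mp, polylog, asin, sqrt, hyper, im, mpf
-
-    mp.dps = 30
-    k = mpf(3)
-    lhs = im(polylog(3, (2/k) * (1 - sqrt(1 - k))))
-    rhs = asin(1/sqrt(k))**3 / 3 - (2/sqrt(k)) * hyper(
-        [mpf(1)/2]*4, [mpf(3)/2]*3, 1/k)
-    print(lhs - rhs)   # ~ 0, the two sides of the identity agree
-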
-Edit: Courtesy of Oussama Boussif in his answer here, there is also a broad identity for $\rm{Li}_2(x)$ but for the real part,
-$$\Re\left[\rm{Li}_{2}\left(\frac{1}{2}+iq\right)\right]=\frac{{\pi}^{2}}{12}-\frac{1}{8}{\ln{\left(\frac{1+4q^2}{4}\right)}}^{2}-\frac{{\arctan{(2q)}}^{2}}{2}
-$$<|endoftext|>
-TITLE: A curious pattern on primes congruent to $1$ mod $4$?
-QUESTION [8 upvotes]: It is known that every prime $p$ that satisfies the title congruence can be expressed in the form $a^{2} + b^{2}$ for some integers $a,b$, and unique factorisation in $\mathbb{Z}[i]$ ensures exactly one such representation (up to order and signs) for each $p \equiv 1 \bmod 4$.
-It seems at least one of $a-b$, $a+b$ is always prime? Is there any mathematical explanation for this?
-
-REPLY [5 votes]: Let $a+b=35$ and $a-b=9$. Neither is prime.
-Then $a=22$ and $b=13$, and the sum $(22)^2+(13)^2$ is the prime $653$.
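-A short search sketch (my addition, using sympy) that hunts for such counterexamples; the borderline case $a-b=1$ is skipped, since $1$ is a unit rather than a prime:
-
-    from sympy import isprime
-
-    for p in range(5, 1000, 4):              # p = 1 (mod 4)
-        if not isprime(p):
-            continue
-        for b in range(1, int((p/2)**0.5) + 1):
-            a2 = p - b*b
-            a = int(round(a2**0.5))
-            if a*a == a2 and a > b + 1:
-                if not isprime(a + b) and not isprime(a - b):
-                    print(p, a, b)           # 353 = 17^2+8^2, 653 = 22^2+13^2
-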
-Remark: For nice examples of apparent patterns that disappear when we look at larger numbers, please see Richard Guy's The Strong Law of Small Numbers.<|endoftext|>
-TITLE: What is the integral of 1/(z-i) over the unit circle?
-QUESTION [6 upvotes]: At present there is a simple pole on the closed contour, so the Residue Theorem appears to be inapplicable.
-But I want to claim that we can enlarge this circle to make sure that it encloses the pole, and the integral value should not change, primarily because of Cauchy's Theorem.
-So the integral is simply $2\pi i$. (The residue at $z=i$ is 1.)
-What do you think?
-Thanks,
-
-REPLY [4 votes]: The integral does not converge. To show this in the most informative way, you should recall the definition of a contour integral: Let $\gamma(t)$ be a contour defined over $\mathbb{C}$ with $t\in I$. Then
-\begin{align}
-\oint_{\gamma}{f(z)dz}:=\int_I{f(\gamma(t))\gamma'(t)dt}.
-\end{align}
-So applying this to the integral in question (multiplying numerator and denominator by $\overline{e^{\imath t}-\imath}$ to split off real and imaginary parts), we get
-\begin{align}
-\oint_{S^1}\frac{1}{z-\imath}dz&=\int_0^{2\pi}{\frac{\imath e^{\imath t}}{e^{\imath t}-\imath}}dt\\
-&=\int_0^{2\pi}\left(\frac{\imath}{2}-\frac{\cos(t)}{2(1-\sin(t))}\right)dt.
-\end{align}
-Now, one can compute that
-\begin{align}
-\int\left(\frac{\imath}{2}-\frac{\cos(t)}{2(1-\sin(t))}\right)dt=\frac{\imath t}{2}+\frac{1}{2}\log(1-\sin(t))+z_0.
-\end{align}
-This antiderivative blows up at $t=\frac{\pi}{2}$, the parameter value at which the contour passes through the pole at $z=\imath$, so the real part of the integral diverges and the integral has no value, even as an improper integral. (Its Cauchy principal value, for what it is worth, equals $\pi\imath$, the expected "half residue".)<|endoftext|>
-TITLE: How do we compare the size of numbers that are around the size of Graham's number or larger?
-QUESTION [6 upvotes]: When numbers get as large as Graham's number, or somewhere around the point where we can't write them as numerical values, how do we compare them?
-For example:
-$$G>S^{S^{S^{\dots}}}$$
-Where $G$ is Graham's number, $S^{S^{S^{\dots}}}$ is a power tower of $S$'s of height $S$, and $S$ is Skewes' number.
-It appears obvious (I think) that Graham's number is indeed larger, but how does one go about proving that if both numbers are "so large" that they become hard to compare?
-More generally, how do we compare numbers of this general size?
-As a much harder problem than the above, imagine a function $G(x,y)$ where $G(64,3)=$ Graham's number. The function $G(x,y)$ is as follows:
-$$G(x,y)=y\uparrow^{(G(x-1,y))}y$$
-Where $G(0,y)$ is given.
-I ask to compare $G(60,S)$ and $G(64,3)$
-
-REPLY [4 votes]: Basically you want to construct a chain of inequalities that links the smaller expression to the larger expression. Induction is often helpful in these cases.
-A useful theorem for Knuth arrows is $(a \uparrow^n b) \uparrow^n c < a \uparrow^n (b+c)$, proven in this paper. It is also proven that $a \uparrow^n c$ is monotonic in $a,n$, and $c$ when $a,c \ge 3$, which is useful as well.
-For example, one can easily see that $S < 3 \uparrow\uparrow 6$, so
-$$S^{S^{S^\cdots}} = S \uparrow \uparrow S < (3\uparrow\uparrow 6)\uparrow\uparrow(3 \uparrow\uparrow 6) < 3\uparrow\uparrow (6 + 3\uparrow\uparrow 6) < 3 \uparrow\uparrow (3 \uparrow\uparrow 7) < 3 \uparrow\uparrow (3\uparrow\uparrow 3^{3^3}) = 3 \uparrow\uparrow (3\uparrow\uparrow\uparrow 3) = 3\uparrow\uparrow\uparrow 4 < 3\uparrow\uparrow\uparrow (3 \uparrow\uparrow\uparrow 3) = 3\uparrow\uparrow\uparrow\uparrow 3 = G_1$$
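-A tiny recursive model of the arrows (my addition) makes the definition concrete, though it is only usable for very small arguments, since the values explode almost immediately:
-
-    def up(a, n, b):
-        # a ^(n) b in Knuth arrow notation, where ^(1) is exponentiation
-        if n == 1:
-            return a ** b
-        if b == 1:
-            return a
-        return up(a, n - 1, up(a, n, b - 1))
-
-    print(up(3, 1, 3))   # 27
-    print(up(3, 2, 3))   # 3^3^3 = 7625597484987
-    # up(3, 3, 3) = 3^^7625597484987 is already far beyond machine range
-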
-To address your harder question, first we need to know what $G(0,y)$ is. Since we need $G(0,3) =4$ so that $G(64,3)$ is Graham's number, I will assume that $G(0,y)=4$.
-Theorem: $G(n,S) < G(n+1,3)$
-We will prove this by induction. First, observe that $G(0,S) = 4 < 3\uparrow\uparrow\uparrow\uparrow 3 = G(1,3)$.
-Observe that for $n \ge 3$,
-$$S \uparrow^n S < (3\uparrow\uparrow 6)\uparrow^n (3\uparrow\uparrow 6) < (3\uparrow^n 6)\uparrow^n (3\uparrow\uparrow 6) < 3\uparrow^n (6+3\uparrow\uparrow 6) < 3\uparrow^n (3\uparrow\uparrow\uparrow 3) \le 3\uparrow^n (3\uparrow^n 3) = 3\uparrow^{n+1} 3$$
-So if we have $G(n,S) < G(n+1,3)$, then $G(n,S)+1 \le G(n+1,3)$, so
-$$G(n+1,S) = S \uparrow^{G(n,S)} S < 3 \uparrow^{G(n,S)+1} 3 \le 3 \uparrow^{G(n+1,3)} 3 = G(n+2,3)$$
-and the theorem follows by induction.
-So in particular, $G(60,S) < G(61,3) < G(64,3)$.<|endoftext|>
-TITLE: How to find the general solution to $\int f^{-1}(x){\rm d}x$ in terms of $\int f(x){\rm d}x$
-QUESTION [10 upvotes]: I am trying to find a general formula for $\int f^{-1}(x)\,{\rm d}x$ in terms of $\int f(x)\,{\rm d}x$. The first step I took was to unpack what it means for a function to have an inverse. So I know how an inverse function works, but I don't know how it behaves in integration, as in proving this. I am very interested in seeing the solution, because knowing it would help to solve integrals where the integral of the inverse function is known.
-
-REPLY [16 votes]: Note that
-$$\int f^{-1}(x)\,dx=\int 1\cdot f^{-1}(x)\,dx = x\cdot f^{-1}(x)-\int x\cdot (f^{-1})’(x)\,dx=x\cdot f^{-1}(x)-\int \frac{x}{f’(f^{-1}(x))}\,dx$$
-
-Now, we calculate $\int \frac{x}{f'(f^{-1}(x))}\,dx$:
-Let $F=\int f(x)\,dx$, and make the substitutions of $u=f^{-1}(x)\implies x=f(u)\implies dx=f’(u)\,du$. Therefore, we have transformed our integral to:
-$$\int \frac{f(u)}{f’(u)}f’(u)\,du=F(u)=F\left(f^{-1}(x)\right)$$
-
-$$\therefore \int f^{-1}(x)\,dx = x\cdot f^{-1}(x)-F\left(f^{-1}(x)\right)$$
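-For example, taking $f(x)=e^{x}$, so that $f^{-1}(x)=\ln x$ and $F(x)=e^{x}$, the formula gives
-$$\int \ln x\,dx = x\ln x - e^{\ln x} = x\ln x - x,$$
-which matches the standard result.<|endoftext|>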
-TITLE: Deriving the coefficients during fourier analysis
-QUESTION [5 upvotes]: I'm self-studying Fourier transforms, but I'm stuck on a basic point about integration during the derivation of an expression for the coefficients of the Fourier transform.
-For a function of period $1$, the function can be written
-$f(t) = \sum_{k=-n}^{n} C_k e^{2 \pi i k t}$
-Now in order to obtain an expression for a specific $C_k$ (which I shall call $C_m$), I can do:
-$f(t) = \sum_{k=-n,k\neq m}^{k=n} C_k e^{2 \pi i k t} + C_m e^{2 \pi i m t}$
-$C_m e^{2 \pi i m t} = f(t) - \sum_{k=-n,k\neq m}^{k=n} C_k e^{2 \pi i k t}$
-$C_m = e^{-2 \pi i m t} f(t) - \sum_{k=-n,k\neq m}^{k=n} C_k e^{2 \pi i (k-m) t}$
-Integrating over the full period from 0 to 1,
-$C_m = \int_0^1 e^{-2 \pi i m t} f(t)\, dt - \sum_{k=-n,k\neq m}^{k=n} C_k \int_0^1 e^{2 \pi i (k-m) t}\, dt$
-which I understand.
-During evaluation of $\int_0^1 e^{2 \pi i (k-m) t}\, dt$ above, I get
-$[\frac{1}{2 \pi i (k-m)} e^{2 \pi i (k-m) t}]_{0}^{1} = \frac{1}{2 \pi i (k-m)} (e^{2 \pi i (k-m)} - e^0)$
-Because we are "integrating over the whole period", the $e^{2 \pi i (k-m)}$ term evaluates to $1$. Can anyone explain what this means?
-I tried a simple test-case with $n=2$, $m=1$, $A = 2 \pi i$:
-$C_1 = \int_0^1 e^{-1At} f(t) dt - [\frac{C_{-2}}{-3A} (e^{-3A}-1) + \frac{C_{-1}}{-2A} (e^{-2A}-1) + \frac{C_{0}}{-1A} (e^{-1A}-1) + \frac{C_{2}}{1A} (e^{1A}-1)]$
-I was hoping some terms would cancel, but I guess I'm thinking about this in the wrong way somehow.
-Maybe this is a basic question, but any help is appreciated!
-
-REPLY [2 votes]: Some of those terms do cancel. The principal fact which you've not applied to your equation is that $e^{2\pi i}=1$. Written with the terms of your last equation, one has $e^A=1$. Thus, for instance, we have $$e^{-3A}=(e^A)^{-3}=1^{-3}=1.$$
-That means that all of the terms like $e^{-3A}-1$ equal $0$ and so you can cancel that whole part of the expression.
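-A quick numerical check (my addition): $e^{2\pi i n}=1$ for every integer $n$, so each integral $\int_0^1 e^{2\pi i(k-m)t}\,dt$ with $k\neq m$ vanishes.
-
-    import numpy as np
-
-    A = 2j * np.pi
-    print(np.exp(-3 * A))            # ~ (1+0j), hence e^{-3A} - 1 ~ 0
-
-    t = np.arange(10000) / 10000.0   # uniform grid on [0, 1)
-    for k in (-2, -1, 0, 2):         # the k != m terms with m = 1
-        val = np.mean(np.exp(A * (k - 1) * t))
-        print(k, abs(val))           # ~ 0 in every case
-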
-You could also draw this simplification out to where you integrate
-$$\int_{0}^1e^{2\pi i(k-m)t}\,dt=\frac{1}{2\pi i(k-m)}(e^{2\pi i(k-m)}-e^0)=\frac{1}{2\pi i(k-m)}(1-1)=0.$$ The intuitive way to state this (which is alluded to in the phrasing "integrating over the whole period") is that the function $e^{2\pi i(k-m)x}$ traces out a circle $(k-m)$ times as $x$ goes from $0$ to $1$. The integral essentially takes the average value here, and the average value taken over the circle has to be the center of the circle, which is $0$. (One may argue that the "average" has to be fixed by the symmetries of a circle, and the only point satisfying that is the center.)<|endoftext|>
-TITLE: Any other Caltrops?
-QUESTION [13 upvotes]: This question has been edited.
-The regular tetrahedron is a caltrop. When it lands on a face, one vertex points straight up, ready to jab the foot of anyone stepping on it.
-Define a caltrop as a polyhedron with the same number of vertices and faces such that each vertex is at distance 1 from most of the corners of the opposing face. Are there any other caltrops besides the tetrahedron?
-Use these 5 values in the list of vertices that follows.
-$\text{C0}=0.056285130364630088035020091792834$
-$\text{C1}=0.180220007048851841582537343751297$
-$\text{C2}=0.309443563867344767680227839435148$
-$\text{C3}=0.348675924605445651138054435209609$
-$\text{C4}=0.466391197450500551933366795454853$
-
-verts = (
-    (C1,C1,C4),(C1,-C1,-C4),(-C1,-C1,C4),(-C1,C1,-C4),(C4,C1,C1),(C4,-C1,-C1),
-    (-C4,-C1,C1),(-C4,C1,-C1),(C1,C4,C1),(C1,-C4,-C1),(-C1,-C4,C1),(-C1,C4,-C1),
-    (C3,-C0,C3),(C3,C0,-C3),(-C3,C0,C3),(-C3,-C0,-C3),(C3,-C3,C0),(C3,C3,-C0),
-    (-C3,C3,C0),(-C3,-C3,-C0),(C0,-C3,C3),(C0,C3,-C3),(-C0,C3,C3),(-C0,-C3,-C3),
-    (C2,C2,C2),(C2,-C2,-C2),(-C2,-C2,C2),(-C2,C2,-C2));
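-
-A small verification sketch (my addition): count the vertex pairs at unit distance among the 28 points above, with the constants rounded to double precision.
-
-    import itertools
-    import numpy as np
-
-    C0 = 0.056285130364630088035020091792834
-    C1 = 0.180220007048851841582537343751297
-    C2 = 0.309443563867344767680227839435148
-    C3 = 0.348675924605445651138054435209609
-    C4 = 0.466391197450500551933366795454853
-    verts = np.array([
-        (C1,C1,C4),(C1,-C1,-C4),(-C1,-C1,C4),(-C1,C1,-C4),(C4,C1,C1),(C4,-C1,-C1),
-        (-C4,-C1,C1),(-C4,C1,-C1),(C1,C4,C1),(C1,-C4,-C1),(-C1,-C4,C1),(-C1,C4,-C1),
-        (C3,-C0,C3),(C3,C0,-C3),(-C3,C0,C3),(-C3,-C0,-C3),(C3,-C3,C0),(C3,C3,-C0),
-        (-C3,C3,C0),(-C3,-C3,-C0),(C0,-C3,C3),(C0,C3,-C3),(-C0,C3,C3),(-C0,-C3,-C3),
-        (C2,C2,C2),(C2,-C2,-C2),(-C2,-C2,C2),(-C2,C2,-C2)])
-
-    unit = [(i+1, j+1)                  # 1-based indices, as in the post
-            for i, j in itertools.combinations(range(28), 2)
-            if abs(np.linalg.norm(verts[i] - verts[j]) - 1.0) < 1e-6]
-    print(len(unit))                    # expected: 48, per the post
-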
-
-The resulting polyhedron has the following appearance, arranged so that each of the three types of faces is on the bottom:
-
-Here's a transparent picture showing the 48 unit diagonals.
-
-Diagonals $(13-16, 14-15, 17-19, 18-20, 21-22, 23-24)$ have a length of about $0.98620444$. I'm not sure of the maximum length, and I don't have exact values for the coordinates.
-That's one more caltrop. Are there any others?
-Rahul pointed out that some faces of my initial caltrop weren't exactly planar. This new version fixes that error, but I had to sacrifice 6 unit diagonals. A stronger caltrop would have each vertex at distance 1 from all corners of an opposing face, instead of most corners.
-
-REPLY [2 votes]: There is a caltrop on 76 points.
-
-Points 1, 13, 25, 29, 41, and 53 are as follows:
-{0.0833`, 0.0833`, 0.4930122817942774`}
-{0.32530527130128584`, -0.20709494964790603`, 0.32530527130128584`}
-{0.28875291001058745`, 0.28875291001058745`, 0.28875291001058745`}
-{-0.2142`, 0.40369721678726284`, -0.2142`}
-{-0.07272969962634213`, 0.35355339059327373`, -0.35355339059327373`}
-{0.07587339432355446`, 0.44185`, -0.23402687345687453`}
-
-This caltrop generates a solid of constant width. The polyhedron has 150 unit diagonals, is self-dual, and has tetrahedral symmetry.<|endoftext|>
-TITLE: Poincare's lemma for 1-form
-QUESTION [6 upvotes]: Let $\omega=f(x,y,z)dx+g(x,y,z)dy+h(x,y,z)dz$ be a differentiable 1-form in $\mathbb{R}^{3}$ such that $d\omega=0$. Define $\hat{f}:\mathbb{R}^{3}\to\mathbb{R}$ by
-$$\hat{f}(x,y,z)=\int_{0}^{1}{(f(tx,ty,tz)x+g(tx,ty,tz)y+h(tx,ty,tz)z)dt}.$$
-Show that $d\hat{f}=\omega$.
-My approach: If $d\omega=0$, then
-$$\left(\dfrac{\partial g}{\partial x}-\dfrac{\partial f}{\partial y}\right)dx\wedge dy+\left(\dfrac{\partial h}{\partial x}-\dfrac{\partial f}{\partial z}\right)dx\wedge dz+\left(\dfrac{\partial h}{\partial y}-\dfrac{\partial g}{\partial z}\right)dy\wedge dz=0,$$
-therefore $\dfrac{\partial g}{\partial x}=\dfrac{\partial f}{\partial y}, \dfrac{\partial h}{\partial x}=\dfrac{\partial f}{\partial z},\dfrac{\partial h}{\partial y}=\dfrac{\partial g}{\partial z}$.
-For the other hand, note that
-$$f(x,y,z)=\int_{0}^{1}{\dfrac{d}{dt}(f(tx,ty,tz)t)dt}=\int_{0}^{1}{f(tx,ty,tz)dt}+\int_{0}^{1}{t\dfrac{d}{dt}(f(tx,ty,tz))dt}$$
-where
-$$\dfrac{d}{dt}(f(tx,ty,tz))=x\dfrac{\partial f}{\partial x}(tx,ty,tz)+y\dfrac{\partial f}{\partial y}(tx,ty,tz)+z\dfrac{\partial f}{\partial z}(tx,ty,tz).$$
-But now, I have trouble with the differential of $\hat{f}$. Then for the above equations I think we can prove $d\hat{f}=\omega$.
-
-REPLY [5 votes]: Note that $$d\hat{f} = \frac{\partial\hat{f}}{\partial x}dx + \frac{\partial\hat{f}}{\partial y}dy + \frac{\partial\hat{f}}{\partial z}dz.$$
-First we have
-\begin{align*}
-\frac{\partial\hat{f}}{\partial x} &= \frac{\partial}{\partial x}\int_{0}^{1}(f(tx,ty,tz)x+g(tx,ty,tz)y+h(tx,ty,tz)z)dt\\
-&= \int_{0}^{1}\frac{\partial}{\partial x}(f(tx,ty,tz)x+g(tx,ty,tz)y+h(tx,ty,tz)z)dt\\
-&= \int_0^1\left(\frac{\partial f}{\partial x}(tx, ty, tz)tx + f(tx, ty, tz) + \frac{\partial g}{\partial x}(tx, ty, tz)ty + \frac{\partial h}{\partial x}(tx, ty, tz)tz\right)dt\\
-&= \int_0^1\left(\frac{\partial f}{\partial x}(tx, ty, tz)tx + f(tx, ty, tz) + \frac{\partial f}{\partial y}(tx, ty, tz)ty + \frac{\partial f}{\partial z}(tx, ty, tz)tz\right)dt\\
-&= \int_0^1\left(f(tx, ty, tz) + t\left(\frac{\partial f}{\partial x}(tx, ty, tz)x + \frac{\partial f}{\partial y}(tx, ty, tz)y + \frac{\partial f}{\partial z}(tx, ty, tz)z\right)\right)dt\\
-&= \int_0^1\left(f(tx, ty, tz) + t\frac{d}{dt}(f(tx, ty, tz))\right)dt\\
-&= \int_0^1\frac{d}{dt}\left(f(tx, ty, tz)t\right)dt\\
-&= [f(tx, ty, tz)t]_0^1\\
-&= f(x, y, z).
-\end{align*}
-A similar calculation shows $\dfrac{\partial\hat{f}}{\partial y} = g$ and $\dfrac{\partial\hat{f}}{\partial z} = h$, so $d\hat{f} = \omega$.
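-A symbolic spot check with sympy (my addition), applying the construction to the closed 1-form $\omega = yz\,dx + zx\,dy + xy\,dz = d(xyz)$:
-
-    import sympy as sp
-
-    x, y, z, t = sp.symbols('x y z t', real=True)
-    f, g, h = y*z, z*x, x*y
-    sub = {x: t*x, y: t*y, z: t*z}
-    fhat = sp.integrate(f.subs(sub)*x + g.subs(sub)*y + h.subs(sub)*z, (t, 0, 1))
-    print(sp.simplify(fhat))                                   # x*y*z
-    print([sp.simplify(sp.diff(fhat, v)) for v in (x, y, z)])  # [y*z, x*z, x*y]
-
-The partial derivatives reproduce $f$, $g$, $h$, so $d\hat{f}=\omega$ on this example.<|endoftext|>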
-TITLE: Limit $(n - a_n)$ of sequence $a_{n+1} = \sqrt{n^2 - a_n}$
-QUESTION [5 upvotes]: Consider the sequence $\{a_n\}_{n=1}^{\infty}$ defined recursively by
- $$a_{n+1} = \sqrt{n^2 - a_n}$$ with $a_1 = 1$. Compute $$\lim_{n\to\infty} (n-a_n)$$
-
-I am having trouble with this. I am not even sure how to show the limit exists. I think if we know the limit exists, it is just algebra, but I'm not sure.
-
-REPLY [3 votes]: Here is a briefer answer which illustrates that the difficulty of the problem lies in bounding the growth of $a_n$.
-All we need is $a_n \sim n$, in the sense that $$\lim_{n\to\infty} \frac{a_n}{n} = 1.$$
-This would follow if we knew e.g. that $n - a_n$ is bounded. JimmyK's sharp result that $0 \le n - a_n \le 2$ is more than enough!
-So, indeed, $a_n \sim n$. From here, using difference of squares,
-$$(n+1) - a_{n+1} = \frac{(n+1)^2 - (n^2 - a_n)}{(n+1) + a_{n+1}} = \frac{2n + 1 + a_n}{n + 1 + a_{n+1}} = \frac{2 + \frac{1}{n} + \frac{a_n}{n}}{1 + \frac{1}{n} + \frac{a_{n+1}}{n}} \to \frac{2 + 0 + 1}{1 + 0 + 1} = \frac{3}{2}.$$
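-A quick numerical check of this limit (my addition):
-
-    import math
-
-    a, n = 1.0, 1
-    while n < 10**6:
-        a = math.sqrt(n*n - a)   # a_{n+1} = sqrt(n^2 - a_n)
-        n += 1
-    print(n - a)                 # ~ 1.5
-
-The printed value is approximately $1.5$, as expected.<|endoftext|>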
-TITLE: Is $(\#^k \Bbb{RP}^2) \times I$ an $\mathbb{RP}^2$-irreducible 3-manifold?
-QUESTION [7 upvotes]: Consider $S$ a surface homeomorphic to a connected sum of $n$ projective planes, $n \geq 2$. Can there be a two-sided projective plane embedded in $[-\epsilon,\epsilon]\times S$?
-
-REPLY [3 votes]: Call your surface $\Sigma$ so as to avoid confusion with spheres. Our first course of action is to note that $\Sigma$ has no 2-torsion in its fundamental group. (Actually, there's a much stronger true fact: a finite CW complex with contractible universal cover has no torsion in its fundamental group. I will not need or prove this.) To see this, note that if it did, it has as a covering space a noncompact surface with fundamental group $\Bbb Z/2$; but the only simply connected surfaces are $S^2$ and $\Bbb R^2$, and $\Bbb R^2$ does not have any continuous involutions with no fixed point. (It's easier to see that neither $\Bbb R^2$ or the unit disc with hyperbolic metric have isometric involutions with no fixed point; we only need to work in the case of isometric quotients by the uniformization theorem.)
-Something slightly stronger than your question is true: there's not even a 2-sided $\Bbb{RP}^2$ in $\Sigma \times S^1$. (Indeed, there's no embedded $\Bbb{RP}^2$ at all, since a manifold with a 1-sided $\Bbb{RP}^2$ has $\Bbb{RP}^3$ as a connected summand; and we have no 2-torsion in our fundamental group, so that's not possible.)
-To see this, note that a 2-sided $\Bbb{RP}^2$ cannot possibly disconnect the manifold: this would imply that $\Bbb{RP}^2$ bounds a compact 3-manifold, which it does not. The fact that it does not disconnect implies that it is a homologically nontrivial submanifold (meaning it represents a nonzero class in $H_2$): you can find a loop that intersects with $\Bbb{RP}^2$ in precisely one point, and mod 2 intersection numbers are defined on the level of homology classes.
-Now recall that $\pi_1(S^1 \times \Sigma)$ had no 2-torsion, so the map $\pi_1(\Bbb{RP}^2) \to \pi_1(S^1 \times \Sigma)$ is trivial, and the map from $\Bbb{RP}^2$ factors through the universal cover of $S^1 \times \Sigma$: that is, through $\Bbb R^3$. But $\Bbb R^3$ is contractible, which implies your 2-sided embedding of $\Bbb{RP}^2$ was null-homotopic. This contradicts the fact above that it was a homologically nontrivial submanifold.<|endoftext|>
-TITLE: Loewner ordering of symmetric positive definite matrices and their inverses
-QUESTION [5 upvotes]: $M_1$ and $M_2$ are symmetric positive definite matrices and $M_2>M_1$ in the Loewner ordering ($M_2-M_1$ is positive definite). Does this imply that $M_1^{-1}>M_2^{-1}$?
-
-REPLY [7 votes]: The answer is yes.
-Two facts first:
-(1) The statement $M_2>M_1$ is equivalent to $x^TM_2x>x^TM_1x$ for any $x\neq 0$;
-(2) For any symmetric positive definite matrix $M$, there exists a positive definite matrix $L$ such that $M=L^2$ (called the square root of $M$).
-We can show it is true when $M_1$ is the identity matrix $I$: for $M_2=L_2^2$,
-$$
-x^TM_2^{-1}x=x^TL_2^{-T}L_2^{-1}x=(L_2^{-1}x)^T(L_2^{-1}x)
-\leq (L_2^{-T}x)^TM_2(L_2^{-T}x)=x^Tx.
-$$
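-Before the general case, here is a quick randomized numerical check of the full claim (my addition):
-
-    import numpy as np
-
-    rng = np.random.default_rng(0)
-    for _ in range(100):
-        A = rng.standard_normal((4, 4)); M1 = A @ A.T + np.eye(4)       # SPD
-        B = rng.standard_normal((4, 4)); M2 = M1 + B @ B.T + np.eye(4)  # M2 > M1
-        d = np.linalg.eigvalsh(np.linalg.inv(M1) - np.linalg.inv(M2))
-        assert d.min() > 0       # M1^{-1} - M2^{-1} is positive definite
-    print("all checks passed")
-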
-In the general case for $M_1=L_1^2$, the condition $M_2>M_1$ is equivalent to
-$L_1^{-1}M_2L_1^{-1}>I$, which implies that
-$
-I>(L_1^{-1}M_2L_1^{-1})^{-1}=L_1M_2^{-1}L_1
-$
-or $M_1^{-1}>M_2^{-1}$.<|endoftext|>
-TITLE: Showing $\sum_{n = 0}^\infty \int f^n$ converges
-QUESTION [9 upvotes]: I am having trouble solving a real analysis qualifying exam problem.
-
-The question assumes $\mu(X) < \infty$ and $\left| f \right| < 1$ (EDIT: Assume $f$ is real-valued). We are to show that $$ \lim_{n \to \infty} \int_X 1 + f + \dots + f^n d\mu$$ exists, possibly equal to $\infty$.
-
-My work so far. Each integral in the sequence makes sense since $\int 1 + \left| f \right| + \dots + \left| f \right|^n < (n+1) \mu(X) < \infty$. Rephrasing the problem, we want to show $\sum_{n = 0}^\infty \int f^n$ converges. It is immediate by the Monotone Convergence Theorem that the result is true for nonnegative functions $f$. Considering absolute convergence, we have $$\sum \left| \int f^n \right| \leq \sum \int \left| f \right|^n$$ where the series on the right converges by what we just said. If said series is finite, then $\sum \int f^n$ converges absolutely, hence converges.
-Question. I am stuck on the case that $$ \sum \int \left| f \right|^n = \infty. \;\;\;\;\;\;\;\;\; (*)$$
-I know from the statement of the problem that we are allowing for $\sum \int f^n = \infty$, but it is not clear to me whether this should follow from $(*)$. We do know $$\sum \int \left| f \right|^n = \int \sum \left| f \right|^n = \int \frac{1}{1 - \left| f \right|}\,d\mu.$$ So if this equals $\infty$, then $\mu \left\{ x \colon \left| f(x) \right| > 1 - \frac{1}{n} \right\} > 0$ for all $n$. And of course $\sum f^n = \frac{1}{1 - f}$ pointwise as well. But I can't see how to put this all together.
-Any help would be much appreciated. Thanks.
-
-REPLY [2 votes]: For $f \geq 0$, you showed the claim yourself. Now, for the general case, by splitting $f = f_+ - f_-$, it suffices to show the claim for $f\leq 0$. For this, also note $f^n = (f_+)^n + (-f_-)^n $, since the supports of $f_+, f_-$ are disjoint.
-Now, for $-1 < x\leq 0$, we have
-$$
-\bigg | \sum_{k=0 }^n x^k \bigg | = \frac {1-x^{n+1}}{1-x} \leq 1,
-$$
-so that you can apply the dominated convergence theorem. This easily shows that
-$$
-\lim_n \int 1+\dots + f^n d\mu = \int \frac {1}{1-f}d\mu
-$$
-is finite (remember $f \leq 0$). Together with the case $f\geq 0$, we get the claim.
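-To see the mechanism concretely, here is a quick numerical illustration (my addition) with $f(x)=-x$ on $X=[0,1)$ and Lebesgue measure, where the limit should be $\int_0^1 \frac{dx}{1+x} = \log 2$:
-
-    import numpy as np
-
-    # s_n = \int_0^1 (1 + f + ... + f^n) dx with f(x) = -x, computed exactly
-    # term by term: \int_0^1 (-x)^k dx = (-1)^k / (k+1)
-    s = np.cumsum([(-1)**k / (k + 1) for k in range(10000)])
-    print(s[-1], np.log(2))   # 0.69309..., 0.69314...
-
-The partial sums indeed converge to $\log 2$.<|endoftext|>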
-TITLE: How many closed subsets of $\mathbb R$ are there up to homeomorphism?
-QUESTION [11 upvotes]: I know there are lists of convex subsets of $\mathbb{R}$ up to homeomorphism, and closed convex subsets of $\mathbb{R}^2$ up to homeomorphism, but what about just closed subsets in general of $\mathbb{R}$?
-
-REPLY [3 votes]: Inasmuch as there are just $2^{\aleph_0}$ closed subsets of $\mathbb R$ all told, it will suffice to exhibit $2^{\aleph_0}$ nonhomeomorphic nowhere dense closed subsets of $\mathbb R.$
-For $S\subseteq\mathbb R$ and $n\in\omega$ let $S^{(n)}$ denote the $n^{\text{th}}$ Cantor-Bendixson derivative of $S,$ i.e., $S^{(0)}=S,\ S^{(1)}=S',\ S^{(n+1)}=(S^{(n)})'.$
-For $X\subseteq\mathbb R$ let $A(X)$ denote the set of all positive integers $n$ for which there exists a relatively open set $U\subseteq X$ such that $X^{(n-1)}\cap U\ne X^{(n)}\cap U=X^{(n+1)}\cap U\ne\emptyset.$
-It will suffice to show that, for every set $A$ of positive integers, there is a nowhere dense closed set $X\subseteq\mathbb R$ with $A(X)=A;$ in fact, it will suffice to show this for a one-point set $A=\{n\}$ where $n$ is a positive integer, since placing such a set for each $n\in A$ in pairwise disjoint open intervals realizes an arbitrary $A$.
-Given a positive integer $n,$ construct a closed set $X\subseteq\mathbb R$ of order type $\omega^n+\varphi$ where $\varphi$ is the order type of the Cantor set; then $A(X)=\{n\}.$