diff --git "a/stack-exchange/math_stack_exchange/shard_101.txt" "b/stack-exchange/math_stack_exchange/shard_101.txt" deleted file mode 100644--- "a/stack-exchange/math_stack_exchange/shard_101.txt" +++ /dev/null @@ -1,5822 +0,0 @@ -TITLE: Find the value of $\int_{0}^{\pi}e^{\cos\theta}\cos(\sin\theta)d\theta$. -QUESTION [6 upvotes]: Find the value of $\int_{0}^{\pi}e^{\cos\theta}\cos(\sin\theta)d\theta$. - -$e^{\cos\theta}\cos(\sin\theta)=Re(e^{e^{i\theta}})$,where $Re()$ represents real part of. -So our integral becomes $$ -\int_{0}^{\pi}Re(e^{e^{i\theta}})d\theta -$$ -Now by Cauchy integral formula, -$$ -\int_{0}^{2\pi}e^zdz=2\pi i -$$ -But in Cauchy formula, the limits of integration are from $0$ to $2\pi$ and in my question, limits of integration are from $0$ to $\pi$. So I think I cannot apply Cauchy formula as such. What should I do? Please help me, Thanks. - -REPLY [2 votes]: $$I(b)=\int_0^\pi e^{b\cos x}\cos (b\sin x)dx\\ -=\frac{1}{2}\int_{-\pi}^\pi e^{b\cos x}\cos (b\sin x)dx\\ -=\frac{1}{2}\int_{0}^{2\pi} e^{b\cos x}\cos (b\sin x)dx\\ -=Re\left(\frac{1}{2}\int_{0}^{2\pi} e^{be^{ix}}dx\right)\\ -I'(b)=Re\left(\frac{1}{2}\int_{0}^{2\pi} \frac{\partial}{\partial b} e^{be^{ix}}dx\right)\\ -=Re\left(\frac{1}{2}\int_{0}^{2\pi} ibe^{be^{ix}}e^{ix}dx\right)\\ -=Re\left(\left.\frac{1}{2}e^{be^{ix}}\right]_0^{2\pi}\right)=0\\ -\therefore I(b)=I(0)=\frac{1}{2}\int_0^{2\pi}dx=\pi\\ -I(1)=\int_0^\pi e^{\cos x}\cos (\sin x)dx=\pi -$$<|endoftext|> -TITLE: How to show that the series of $\frac{\sin(n)}{\log(n)}$ converges? -QUESTION [11 upvotes]: Edit: I am seeking a solution that uses only calculus and real analysis methods -- not complex analysis. This is an old advanced calculus exam question, and I think we are not allowed to use any complex analysis that could make the problem statement a triviality. -Show that the series -$$\sum_{n=2}^{\infty} \frac{\sin(n)}{\log(n)}$$ -converges. -Any hints or suggestions are welcome. -Some thoughts: -The integral test is not applicable here, since the summands are not positive. -The Dirichlet test does seem applicable either, since if I let 1/log(n) be the decreasing sequence, then the series of sin(n) does not have bounded partial sums for every interval. -Thanks, - -REPLY [7 votes]: Note that -$$\left|\sum_{k=1}^n \sin k\right|= \frac{|\sin(n/2)\sin[(n+1)/2]|}{\sin(1/2)}\leqslant \frac{1}{\sin(1/2)}.$$ -Derivation: -$$\begin{align} 2 \sin(1/2)\sum_{k=1}^n\sin k &= \sum_{k=1}^n2 \sin(1/2)\sin k \\ &= \sum_{k=1}^n2 \sin(1/2)\cos (k +\pi/2) \\ &= \sum_{k=1}^n[\sin(k + 1/2 + \pi/2)-\sin (k -1/2 + \pi/2)] \\ &= \sin(n + 1/2 + \pi/2) - \sin(1/2 + \pi/2) \\ &= 2 \sin(n/2)\cos[(n+1)/2 +\pi/2] \\ &= 2 \sin(n/2)\sin[(n+1)/2] \end{align} \\ \implies \sum_{k=1}^n\sin k = \frac{\sin(n/2)\sin[(n+1)/2]}{\sin(1/2)}$$<|endoftext|> -TITLE: Prove that $f(A)$ is an open set and $f^{-1}:f(A)\to A$ is differentiable. -QUESTION [5 upvotes]: Let $A\subset \mathbb{R}^n$ an open set, and $f:A\to \mathbb{R}^n$ a one to one and continuously differentiable function so that $\det f'(x)\ne 0$ for all $x\in A$ Prove that $f(A)$ is an open set and $f^{-1}:f(A)\to A$ is differentiable. -Any idea on how to prove this? I´m totally lost. (This seems related to the inverse function theorem). -Any help or hint will be appreciated, thanks. - -REPLY [3 votes]: The inverse function theorem tells you that for each $x\in A$, there is an open neighborhood $U$ of $x$ and an open neighborhood $W$ of $f(x)$ contained in $f(U)$ such that $f^{-1}:W\to U$ is differentiable. 
So $f(A)$, being the union of these open neighborhoods $W$ around each point $f(x)\in f(A)$, is open. Since differentiability is a local property, it also follows that $f^{-1}$ is differentiable as a function from all of $f(A)$ to $A$.

-The original version of the question left out the assumption that the derivative of $f$ is continuous. In fact, the result is still true without that hypothesis, and can be proven using more advanced techniques as follows. The invariance of domain theorem (typically proved using methods from algebraic topology) states the following:

-Suppose $A\subseteq\mathbb{R}^n$ is open and $f:A\to\mathbb{R}^n$ is a continuous injective map. Then the image $f(A)$ is an open subset of $\mathbb{R}^n$, and the inverse map $f^{-1}:f(A)\to A$ is continuous.

-In particular, this theorem applies to your function $f$, giving you that $f(A)$ is open and $f^{-1}$ is continuous. To get that $f^{-1}$ is differentiable (and its derivative is the inverse of the derivative of $f$), you can use the same argument as in the proof of the inverse function theorem.
-In fact, the result is true even if you drop the assumption that $f$ is injective, as long as you only ask for the conclusion to hold locally (i.e., every $x\in A$ has an open neighborhood $U$ such that $f(U)$ is open, $f|_U:U\to f(U)$ is injective, and $f|_U^{-1}:f(U)\to U$ is differentiable). This result is even harder, and references to proofs are given in the answers to this question on MathOverflow.<|endoftext|>
-TITLE: Roots of Complex Polynomial in Disc
-QUESTION [5 upvotes]: This is a theorem that I have encountered before and proved using some homotopy-theoretic arguments, but I tried to prove this statement again without such tools and ended up failing. It seems to be Rouche's Theorem, but I can't figure out how to jimmy it just right.
-The statement is very nice:
-All roots of $p(z) = z^n + a_{n-1}z^{n-1} + \cdots + a_0$ lie in the disc centered at zero with radius
-$R = \sqrt{1 + |a_{n-1}|^2 + |a_{n-2}|^2 + \cdots + |a_0|^2}$
-I keep thinking I have done it correctly, only to find an error. Does anyone have a slick proof of this that just uses complex-analytic techniques? I would really like to see just a sketch, or just the first step to start it off, but hey, I won't stop you from finishing it up haha
-For those who would think it's useful to see the homotopy-theoretic version, check Munkres Topology, Ch. 9.56 and the first exercise, but it seems impossible to state this proof without using homotopy.
-Thanks a lot!

-REPLY [5 votes]: Let $|z|>1,$ and by the Cauchy-Schwarz inequality
-\begin{align}
-|p(z)|&=|z|^n\left|1+\sum_{j=1}^{n}\frac{a_{n-j}}{z^j} \right|\geq |z|^n\left(1-\left|\sum_{j=1}^{n}\frac{a_{n-j}}{z^j}\right| \right)\\&\geq |z|^n\left[1-\left(\sum_{j=1}^{n}|a_{n-j}|^2\right)^{1/2}\left(\sum_{j=1}^{n}\frac{1}{|z|^{2j}} \right)^{1/2}\right]\\&>|z|^n\left[1-\left(\sum_{j=1}^{n}|a_{n-j}|^2\right)^{1/2}\left(\sum_{j=1}^{\infty}\frac{1}{|z|^{2j}} \right)^{1/2}\right]\\&=|z|^n\left[1-\left(\sum_{j=1}^{n}|a_{n-j}|^2\right)^{1/2}\left(\frac{1}{|z|^2-1} \right)^{1/2} \right]
-\end{align}
-Therefore, $|p(z)|>0$ if $1-\left(\sum_{j=1}^{n}|a_{n-j}|^2\right)^{1/2}\left(\frac{1}{|z|^2-1} \right)^{1/2}\geq 0$ i.e., if $|z|\geq \sqrt{1+|a_{n-1}|^2+|a_{n-2}|^2+\ldots+|a_0|^2}=R.$
-Thus, all those zeros whose modulus is greater than 1 lie in $|z|\leq R,$ and those zeros whose modulus is less than or equal to 1 already lie in $|z|\leq R.$<|endoftext|>
-TITLE: Is this "snake lemma" true in derived category? 
-QUESTION [12 upvotes]: Suppose there is a diagram of cochain complexes -$$ -\begin{array}{c} -0 & \to & X_1\phantom\alpha & \to & Y_1\phantom\beta & \to & Z_1\phantom\gamma & \to & 0 \\ -& & \downarrow\alpha & & \downarrow\beta & & \downarrow\gamma \\ -0 & \to & X_2\phantom\alpha & \to & Y_2\phantom\beta & \to & Z_2\phantom\gamma & \to & 0 -\end{array} -$$ -where the two rows are short exact sequences of cochain complexes, and the diagram is commutative up to homotopy. -I'm trying to prove that -(a) If $\alpha$ is a quasi-isomorphism, then $\mathrm{Cone}(\beta)$ -and $\mathrm{Cone}(\gamma)$ are quasi-isomorphic. (This should be true; Nekovar's book Selmer Complexes uses this result implicitly. [EDIT] In fact the proof in it can be fixed so that the diagram actually commutes, not only up to homotopy.) -(b) More generally, there exists a distinguished triangle -$$ -\mathrm{Cone}(\alpha)\to\mathrm{Cone}(\beta)\to\mathrm{Cone}(\gamma)\to -$$ -in the derived category. (I'm not sure if it is true.) -I can only prove the case when the diagram itself actually commutes; when homotopy involved, - -I can only construct natural chain maps $\mathrm{Cone}(\alpha)\to\mathrm{Cone}(\beta)\to\mathrm{Cone}(\gamma)$, with the first map injective and the second map surjective, but the composition of these two maps is not zero. -Another attempt is trying to replace the second row by $0\to\mathrm{Cyl}(\alpha)\to\mathrm{Cyl}(\beta)\to\mathrm{Cyl}(\gamma)\to 0$ such that the diagram actually commutes; but again, I can't find a way to make this sequence exact. -The axiom (TR4) seems not enough even for (a): in this case there exists a distinguished triangle $Z_1\to Z_2\to\mathrm{Cone}(\beta)\to{}$ by (TR4), but it can't tell that this $Z_1\to Z_2$ is equal to $\gamma$. - -Any ideas? - -REPLY [5 votes]: In general, neither (a) nor (b) is true. I'll give three examples to illustrate various points. In all of these, $k$ will be a field, which I'll take to have characteristic two (just so that I don't need to be careful with signs; it's not really important!). -(1) In general, your diagram doesn't even have to induce a morphism of triangles in the derived category, as the "connecting" square -$$\require{AMScd} -\begin{CD} - Z_1 @>>> X_1[1]\\ - @VVV @VVV\\ - Z_2 @>>> X_2[1] -\end{CD}$$ -may not commute. -If you don't insist that it does commute, then there are very easy counterexamples to (a). For example, take $Y$ to be a contractible complex ($\dots\to0\to k\stackrel{1}{\to}k\to0\to\dots$, say) and $X$ a subcomplex such that $Y/X$ is not acyclic ($\dots\to0\to 0\to k\to0\to\dots$, say). Then -$$\begin{CD} - 0 @>>> X @>>> Y @>>> Y/X @>>>0\\ - @.@VV1V @VV1V @VV0V @.\\ - 0 @>>> X @>>> Y @>>> Y/X @>>>0 -\end{CD}$$ -commutes up to homotopy, but the cone of the middle vertical arrow is acyclic, whereas the cone of the third arrow is not. -However, even if you insist that the connecting square commutes, (a) and (b) can still be false. -(2) Fairly simple counterexamples to (b) can be constructed by taking a map from a triangle to a rotation of itself, as in: -$$\begin{CD} -X @>>> Y @>>> Z @>\gamma >> X[1]\\ -@VV0V @VV0V @VV\gamma V @VV0V\\ -Y @>>> Z @>>> X[1] @>>> Y[1] -\end{CD}$$ -The cones are $Y\oplus X[1]$, $Z\oplus Y[1]$ and $Y[1]$, and there are fairly easy examples where there is no triangle -$$Y\oplus X[1]\to Z\oplus Y[1]\to Y[1]\to Y[1]\oplus X[2].$$ -For example, let $R=\mathbb{Z}[t]$, and $I$ the ideal $(2,t)$, and start with the obvious triangle -$$I\to R\to R/I\to I[1]$$ -in the derived category of $R$-modules. 
Then there can't be a triangle -$$R\oplus I[1]\to R/I\oplus R[1]\to R[1]\to R[1]\oplus I[2],$$ -as its cohomology exact sequence would be -$$\dots\to 0\to I\to R\to R\to R\to R/I\to0\to\dots,$$ -and no such exact sequence exists, as the kernel of $R\to R/I$ would have to be $I$, and there is no surjective map $R\to I$. -(3) Let $S=k[x,t]/(xt^2)$, and in the derived category of $S$-modules form the "homotopy Cartesian square" -$$\begin{CD} -S @>xt >> S\\ -@VxVV @V\alpha VV\\ -S @>\beta >> Y -\end{CD}$$ -(i.e., $Y$ is the complex $\dots\to0\to S\stackrel{ -\begin{pmatrix}xt\\x\end{pmatrix}}{\to} S\oplus S\to0\to\dots$, with $S\oplus S$ in degree zero, -and $\alpha$ and $\beta$ are inclusions of the first and second summands). Completing to a morphism of triangles gives -$$\begin{CD} -S @>xt >> S @>>> Z @>>>S[1]\\ -@VxVV @V\alpha VV @V1VV @VxVV\\ -S @>\beta >> Y @>>> Z @>>> S[1] -\end{CD}$$ -where the cones of the first two vertical maps are both quasiisomorphic to $\dots\to0\to S\stackrel{x}{\to}S\to0\to\dots$, so this is not yet a counterexample to (a). However, we can replace the first vertical arrow by multiplication by $x+xt$: -$$\begin{CD} -S @>xt >> S @>>> Z @>>>S[1]\\ -@Vx+xt VV @V\alpha VV @V1VV @Vx+xt VV\\ -S @>\beta >> Y @>>> Z @>>> S[1] -\end{CD}$$ -and this is still a morphism of triangles, but now the cone of the first vertical map is $\dots\to0\to S\stackrel{x+xt}{\to}S\to0\to\dots$, whereas the cone of the second vertical map is still $\dots\to0\to S\stackrel{x}{\to}S\to0\to\dots$, and it can be checked that there is no quasiisomorphism between these.<|endoftext|> -TITLE: Eigenvalues of $\sum_i A_i P_i$ with $(A_i)_{jk}=\delta_{ij}\delta_{ik}$ and $P_i$ orthogonal projections -QUESTION [7 upvotes]: Assume $$C=\sum_{i=1}^nA_iP_i$$ where each $P_i$ is an $n\times n$ orthogonal projection matrix ($P_i=P_i^\top$ and $P_i=P^2_i$) and $A_i$ is an $n\times n$ matrix of zero elements except the $(i,i)$-th element equal to one. -Can I say something about the eigenvalues of $C$? - -REPLY [3 votes]: The matrix $A_iP_i$, is a matrix containing a lot of zeros, and its $i$-th row is equal to the $i$-th row of $P_i$. That is, the resulting matrix has rows all with Euclidean norm equal to $1$. This implies $\|C\|_2 \le \sqrt n$: -$$ -\|Cx\|_2^2 =\sum_i (c_i^Tx)^2\le \sum_i \|c_i\|^2 \|x\|^2 = n \|x\|^2, -$$ -where $c_i$ is the $i$-th row of $C$. -This bound is realized if all rows of $C$ are the same up to factors with absolute value $1$. -This shows that all eigenvalues of $C$ are in the circle $\{z\in \mathbb C:\ |z|\le \sqrt n\}$. -This bound is sharp as the matrix $C=(c_{ij})$ with $c_{ij}=\frac1{\sqrt n}$ shows. -Using Gershgorin theorem, one can prove inclusions for eigenvalues. -The radius of the Gershgorin circle $i$ can be estimated by -$$ -\sum_{j\ne i} |c_{ij}| \le \sqrt{n-1} \left(\sum_{j\ne i} |c_{ij}|^2 \right)^{1/2} = \sqrt{(n-1)(1-|c_{ii}|^2)}. -$$ -Then the eigenvalues of $C$ are inside the circles with center $c_{ii}$ and radius as above. If more is known about the entries of $C$, the bound can be improved.<|endoftext|> -TITLE: Limit of a sequence using Hölder's inequality -QUESTION [5 upvotes]: Let $a_1,\ldots,a_p$ be positive real numbers. Find the limit of $$\left(\frac{a_1^n+\cdots+a_p^n}{p}\right)^{1/n}$$ -My attempt: -I applied Hölder's and have obtained that this term is bounded below by $\frac{a_1+\cdots+a_p}{p}$ and since $a_1^p+\cdots+a_n^p \le (a_1+\cdots+a_p)^n$ is valid, applying log of limits technique, I get that it is bounded above by $a_1+\cdots+a_p$. 
However, I haven't obtained an actual limit. In fact I don't know the actual answer. I think it could be the lower bound I obtained, because that answer is validated for certain examples, like, by putting all values of $a_i$'s as a constant $k$, but I am not able to get a suitable idea to conclude that. Any help? - -REPLY [2 votes]: Note that -$$ -\left(\frac1p\sum_{i=1}^pa_i^n\right)^{1/n}\le\left(\frac1p\sum_{i=1}^p\max_k(a_k)^n\right)^{1/n}=\max_k(a_k) -$$ -and -$$ -\left(\frac1p\sum_{i=1}^pa_i^n\right)^{1/n}\ge\left(\frac1p\max_k(a_k)^n\right)^{1/n}=\left(\frac1p\right)^{1/n}\max_k(a_k) -$$ -Therefore, -$$ -\left(\frac1p\right)^{1/n}\max_k(a_k)\le\left(\frac1p\sum_{i=1}^pa_i^n\right)^{1/n}\le\max_k(a_k) -$$ -By the Squeeze Theorem, we get -$$ -\lim_{n\to\infty}\left(\frac1p\sum_{i=1}^pa_i^n\right)^{1/n}=\max_k(a_k) -$$<|endoftext|> -TITLE: Mathematicians who overcame academic failure to achieve success -QUESTION [17 upvotes]: Does anyone have any story of mathematicians who overcame "academic failure" or setbacks to achieve success later as a result of their perseverance? This is a soft question, that hopefully can inspire aspiring students. -New edit: I am especially interested in academic failure (e.g. failure of exams, failure in proof/ wrong proof, failure in getting academic jobs). This is to narrow down the question so it is not too broad. There is another related question on blind/disabled mathematicians which is very good: Who are some blind or otherwise disabled mathematicians who have made important contributions to mathematics? -My ideal accepted answer is a relatively less well known answer (so that we all learn something new), supported by factual evidence (e.g. a hyperlink to a page or a quote). -Some that I can list are: -1) Zhang Yitang, who worked in Subway (arguably a sort of a setback) but later proved a result related to the Twin Prime Conjecture. -2) Robion Kirby, who failed his oral Ph.D. qualifying examination (http://www-groups.dcs.st-and.ac.uk/history/Biographies/Kirby.html) but later proved the "torus trick". -Thanks! (Hope this question is on topic for Math Stackexchange..) - -REPLY [3 votes]: John Nash struggled with significant mental health issues when he should have been in the twilight of his mathematical journey. He and his wife spent many difficult years battling with this illness. Slowly, Nash started to get back in touch with the mathematical community in Princeton; engaging with the students, his passion for mathematics never died. It was in his latter years that he was awarded the nobel prize in economics for his contribution to game theory.<|endoftext|> -TITLE: Calculating decimal digits by hand for extremely large numbers -QUESTION [27 upvotes]: On the most recent Seton Hall Joseph W. Andrushkiw Competition, the final question was as follows: - -Let $A = (\sqrt{3}+\sqrt{2})^{2016}$. When A is written in decimal - form, what is its $31^{st}$ digit after the decimal point? - -Brute forcing it via wolfram alpha reveals that the answer is [edit: I found the 31st number from the start, not the 31st after the decimal point] zero, yet this competition does not allow the use of a calculator. It seems to me that as irrational numbers are in the base of the exponent, there should not be an identifiable pattern in the digits. 
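For reference, here is the kind of high-precision check one can run locally (an mpmath sketch of my own; the working precision is padded well past the roughly $1004$ digits that $A$ has before the decimal point):

```python
from mpmath import mp, sqrt, floor, nstr

mp.dps = 1100  # A is about 10^1003.6, so leave ~100 digits of headroom
A = (sqrt(3) + sqrt(2)) ** 2016
print(nstr(A - floor(A), 40))
# prints 0.99999999... : the fractional part is a long run of 9s,
# so in particular the 31st digit after the decimal point is a 9
```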
-Searching this site has made me think that perhaps the answer has something to do with the Euler phi function (something which I will admit up front I have never been acquainted with), but I can't find anything which I understand enough to give me a concrete way to start to approach this. Any help on this frustrating problem would be appreciated. Thanks! - -REPLY [44 votes]: Hmm. Pretty sure that the answer is $9$. -The key observation to this problem is noticing that $(\sqrt{3}-\sqrt{2})^{2016}+(\sqrt{3}+\sqrt{2})^{2016}$ is an integer. -The proof of this is expansion using Binomial Theorem. The odd powers of the square roots get canceled out. -Now we have $(\sqrt{3}+\sqrt{2})^{2016} = N - (\sqrt{3}-\sqrt{2})^{2016}$, where $N$ is a positive integer. -Now this is easy. Since $(\sqrt{3}-\sqrt{2})^{2016} < (0.4)^{2016} = (0.064)^{\frac{2016}{3}} < (0.1)^{\frac{2016}{3}} < (0.1)^{600}$, we have $(\sqrt{3}+\sqrt{2})^{2016}= (N-1)+0.99\cdots 99$, and there are at least $500$ $9$'s there. The answer is $\boxed{9}$.<|endoftext|> -TITLE: Group of $r$ people at least three people have the same birthday? -QUESTION [15 upvotes]: What is the probability that in a randomly chosen group of $r$ people at least three people have the same birthday? - -$\displaystyle 1- \frac{365\cdot364 \cdots(365-r+1)}{365^r}$ -$\displaystyle \frac{365\cdot364 - \cdots(365-r+1)}{365^r} +{r\choose 2}\cdot \frac{365\cdot364\cdot363 \cdots - (364-(r-2) +1)}{364^{r-2}}$ -$\displaystyle 1- \frac{365\cdot364 - \cdots(365-r+1)}{365^r} +{r\choose 2}\cdot \frac{365\cdot364\cdot363 \cdots (364-(r-2) +1)}{364^{r-2}}$ -$\displaystyle\frac{365\cdot364 \cdots(365-r+1)}{365^r} - $ - - -My attempt : -(May be typo in option $(3)$ of question !) -$$P(\text{at least 3 persons have same birthday})$$ -$$= 1 - \{P\text{(no one has same birthday) + P(any 2 have same birthday)\}}$$ -So, option $(3)$ is true. - -Can you explain it, please? - -It asked here before, but I'm not satisfied by explanation. - -REPLY [14 votes]: Given $2k$ items, there are $(2k-1)!!$ ways to arrange them into pairs: the first item can be paired with $2k-1$ possibilities; the first unpaired item can be matched with $2k-3$ items; the new first unpaired item can be paired with $2k-5$ items; etc. -The number of functions from $n$ people to $365$ dates with $n-2k$ singles and $k$ pairs is -$$ -\begin{array}{cc} -&\displaystyle\underbrace{ -\overbrace{\binom{365}{n-k}}^{\substack{\text{ways to choose}\\\text{$n-k$ dates}\\\text{for birthdays}}} -\overbrace{\binom{n-k}{k}}^{\substack{\text{ways to choose}\\\text{$k$ dates}\\\text{for pairs}}} -}&\displaystyle\underbrace{ -\overbrace{\ \ \binom{n}{2k}\ \ }^{\substack{\text{ways to choose}\\\text{$2k$ people}\\\text{for pairs}}} -\overbrace{(n-2k)!\vphantom{\binom{n}{k}}}^{\substack{\text{ways to arrange}\\\text{$n-2k$ singles}}} -\overbrace{(2k-1)!!\vphantom{\binom{n}{k}}}^{\substack{\text{ways to pair}\\\text{$2k$ people}}} -\overbrace{\ \ \ \ \ k!\ \ \ \ \ \vphantom{\binom{n}{k}}}^{\substack{\text{ways to arrange}\\\text{$k$ pairs}}} -}\\ -\displaystyle= -&\displaystyle\frac{365!}{(365-n+k)!\,(n-2k)!\,k!} -&\displaystyle\frac{n!}{2^k} -\end{array} -$$ -Thus, the probability of getting at least one triple is -$$ -1-\frac{365!\,n!}{365^n}\sum_{k=0}^{365}\frac1{(365-n+k)!\,(n-2k)!\,k!\,2^k} -$$ -where we take $\frac1{n!}=0$ for $n\lt0$. -Here is a plot from $n=0$ to $n=730$. 
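The formula is easy to evaluate exactly if you want to check it (a Python sketch; `p_triple` is just an ad hoc name):

```python
from fractions import Fraction
from math import comb, factorial

def p_triple(n, days=365):
    """P(some date is shared by at least 3 of n people), via the count above:
    k runs over the number of dates used by exactly two people."""
    total = 0
    for k in range(n // 2 + 1):
        if n - k > days:
            continue  # would need more distinct dates than exist
        total += (comb(days, n - k) * comb(n - k, k)       # choose the dates
                  * comb(n, 2 * k) * factorial(n - 2 * k)  # place the singles
                  * (factorial(2 * k) // 2 ** k))          # (2k-1)!! * k! pairings
    return 1 - Fraction(total, days ** n)

print(float(p_triple(87)), float(p_triple(88)))  # crosses 1/2 at n = 88
```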
For $n\lt3$, the probability of getting a triple is $0$, and for $n\gt730$, the probability is $1$.<|endoftext|>
-TITLE: Continuous functions that attain local extrema at every point
-QUESTION [8 upvotes]: Let $f:[0,1]\to\mathbb R$ be a continuous function such that for all $x\in [0,1]$, $f(x)$ is either a local maximum or a local minimum. Prove that $f$ is a constant.
-Here is what I have tried; I don't know whether it is correct:
-Assume $f$ is not a constant. By continuity, there exist $x_1$, $x_2$ such that $f(x_1)=m$ is a global minimum and $f(x_2)=M$ is a global maximum, with $m\ne M$. Choose any $c\in(m,M)$ and define
-$$x_0=\sup\,\{x\in[x_1,x_2]:f(x)<c\}$$
-(so that, by continuity, $f(x_0)=c$, while every neighborhood of $x_0$ contains points with $f(x)<c$ and points with $f(x)\ge c$). Contradiction arises and so $f$ must be a constant.

-REPLY [6 votes]: One reason why I quite like the problem is that it starts with a nice magic trick (a misdirection if you like). Since it
-is posed for continuous functions one immediately starts by thinking about that, when in fact you should be simply investigating the
-nature of the set of local extrema. As others have pointed out it is just a countable vs. uncountable situation that resolves the
-problem. Continuity just jumps in at the last moment to finish the argument.
-I don't have any seriously new ideas but I would like to remind readers of a standard device in real variable arguments that works quite
-simply here too. It is worth keeping in your toolkit and has been used many, many times. So this is essentially the same proof as the others but with more of a real-variable feel than a topological one.

-Lemma. Let $f:\mathbb{R}\to\mathbb{R}$ and let $E$ be the set of
 points at which $f$ has a local maximum or a local minimum. Then there
 is a denumerable decomposition of $E$ into a sequence of sets
 $\{E_n\}$ such that $f$ is constant on each $E_n$.

-Proof. Let $A$ be the set of points where $f$ has a local maximum. For each $x\in A$ there is a $ \delta(x)>0$ so
-that $f(z)\leq f(x)$ for all $z\in (x-\delta(x),x+\delta(x))$.
-Define the collection
-$$
-A_{nj} = \left\{
 x\in A: \frac{1}{n} \leq \delta(x) < \frac1{n-1}
 \right\}
-\cap \left[ \frac{j}n, \frac{j+1}n \right)
-$$
-for $n=1,2,3 \dots$ and $j=0, \pm 1, \pm 2, \pm 3, \dots$ (read $\frac10$ as $+\infty$).
-Suppose that $x$ and $y$ with $x<y$ belong to the same set $A_{nj}$. Then $0<y-x<\frac1n\leq\delta(x)$ and $0<y-x<\frac1n\leq\delta(y)$, so that $f(y)\leq f(x)$ and $f(x)\leq f(y)$, i.e., $f(x)=f(y)$. Thus $f$ is constant on each $A_{nj}$; the same decomposition works for the set of points of local minimum, which proves the lemma.
-For the problem at hand, the lemma shows that $f([0,1])$ is countable; but a continuous image of $[0,1]$ is an interval, and a countable interval is a single point, so $f$ is constant.
-The same device proves, for example, that the set of points where the one-sided derivatives satisfy $f_-'(x)<f_+'(x)$ is countable. For each rational number $r$ let $E^r$ be the set of points $x$ for which there is a $\delta(x)>0$ so
 that $[f(z)-f(x)]/[z-x]> r $ for all $z\in (x,x+\delta(x))$
 and at the same time
 $[f(z)-f(x)]/[z-x]< r $
 for all $z\in (x-\delta(x),x)$.
-Define the collection
 $$
 A_{nj} = \left\{
 x\in E^r: \frac{1}{n} \leq \delta(x) < \frac1{n-1}
 \right\}
 \cap \left[ \frac{j}n, \frac{j+1}n \right)
 $$
 for $n=1,2,3 \dots$ and $j=0, \pm 1, \pm 2, \pm 3, \dots$.
-Now show that there cannot be two or more points in any set
 $ A_{nj}$. That means that $E^r$ is countable and hence (since the rationals are countable) the set
 $\{x\in E: f_-'(x) < f_+'(x)\}$ is countable.
-A very nice simple proof but, more importantly, a technique that can be and has been used a great many times.<|endoftext|>
-TITLE: Is the inverse of an absolutely continuous function with almost everywhere positive derivative absolutely continuous?
-QUESTION [14 upvotes]: Suppose $f$ is absolutely continuous on $[0,1]$ and $f'>0$ almost everywhere.
-Is the inverse of $f$ necessarily absolutely continuous on $[f(0),f(1)]$?
-Thank you very much!

-REPLY [12 votes]: It is always useful to have more than one answer to a problem. One learns quite a bit more.
-Even better, however, is to have two contradictory answers to the same problem. That is both more entertaining and more educational. 
(The contradiction is just about what one wants to prove, not about errors.) -A friend of mine often tells an anecdote about a similar situation. A well-known mathematician had posed an open problem at the end of one of his papers. My friend sent him a solution. He wrote back quite pleased and informed my friend that someone else had also submitted a solution using a completely different method. He proposed that both should be published back-to-back in the Revue Roumaine to which, as an editor, he would submit them. And they were published. The methods were indeed completely different. My friend had solved the problem positively and the adjacent paper "proved" the opposite. - -Problem. Suppose that $f:[0,1]\to\mathbb{R}$ is absolutely continuous and that $f'(x)>0$ for a.e. $x\in [0,1]$. Prove that $f$ - has an absolutely continuous inverse. - -Proof. Clearly $f$ is continuous and strictly increasing on $[0,1]$ and -so it has a continuous and strictly increasing inverse $f^{-1}$. Let $P$ be the set of points at which $f$ has a finite, positive derivative. Let $Z$ be the remaining points which we know is a set of measure zero. Since $f$ is AC it satisfies Lusin's condition (N) so $f(Z)$ is also a set of measure zero. -Let $E_1= f(P)$ and $E_2=f(Z)$. These sets exhaust $[f(0),f(1)]$. We know that $f^{-1}$ has a finite positive derivative at each point of $E_1$ and that $E_2$ has measure zero. The function $f^{-1}$ maps any measure zero subset of $E_1$ to a set of measure zero (because it has a finite derivative there). Also $f^{-1}$ maps every subset of $E_2$ to a set of measure zero (i.e., a subset of $Z$). Thus $f^{-1}$ satisifies Lusin's condition (N) on -$[f(0),f(1)]$. It follows that $f^{-1}$ is absolutely continuous. -[Note added: The OP remembered a proof of this statement: - -"Suppose $f'(x)$ exists at each point $x\in E$ and that $|f'(x)|\leq - M$ there. Then $m(f(E))\leq Mm(E)$. Hence $f(E)$ is - measure zero if $E$ is measure zero. - -But this can be pushed. If $E$ has measure zero and $f'(x)$ is finite at every point of $E$ ($|f'(x)|$ not necessarily bounded) then simply write $E_n=\{x\in E: |f'(x)|\leq n\}$ and use the fact that -$$m(f(E)) \leq \sum_{n=1}^\infty m(f(E_n))\leq \sum_{n=1}^\infty nm( E_n)=0.$$ -In fact, with bit more work, one can prove that -$$m(f(D)) \leq \int_D |f'(x)|\,dx$$ -for any measurable set $D$ assuming that $f$ has a finite derivative at every point of $D$.<|endoftext|> -TITLE: Equivalence of Quadratic Forms that represent the same values -QUESTION [6 upvotes]: An integer quadratic form is a function $Q(x,y) = ax^2 + bxy + cy^2$ where the numbers $a,b,c \in \mathbb Z$. -Call the set of values a quadratic forms takes on $V(Q) = \{ Q(x,y) \in \mathbb Z | x,y \in \mathbb Z \}$. -Two quadratic forms $Q,R$ are said to be equivalent if there is a $SL_2(\mathbb Z)$ matrix $M$ such that $R(x,y) = Q((x,y)M)$. -This is the definition used in for example, 1.1 page 4 of http://www.rzuser.uni-heidelberg.de/~hb3/publ/bf.pdf . Under that definition two quadratic forms maybe be "opposite" but not equal, and take on the same set of values. -I'm interested in the equivalence we get with $GL_2(\mathbb Z)$ matrices, We'll say $R \sim Q$ if there is a $GL_2(\mathbb Z)$ matrix $M$ such that $R(x,y) = Q((x,y)M)$. - -Let $Q,R$ be two integer quadratic forms: Does $V(Q) = V(R)$ imply $Q \sim R$? - If it's true how would it be proved? If false when does it fail? - -I'm not assuming that the QFs have the same discriminant or are positive definite. 
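For experimenting, this is the brute-force comparison I have been using (a rough sketch; truncating the $(x,y)$ search box is only guaranteed exhaustive for positive definite forms, so for indefinite forms treat the output as a heuristic):

```python
def values(a, b, c, bound, box=60):
    """Values v = a*x^2 + b*x*y + c*y^2 with |v| <= bound and |x|, |y| <= box."""
    found = set()
    for x in range(-box, box + 1):
        for y in range(-box, box + 1):
            v = a * x * x + b * x * y + c * y * y
            if abs(v) <= bound:
                found.add(v)
    return found

# e.g. the two classes of discriminant -20 are separated by their values:
print(values(1, 0, 5, 50) == values(2, 2, 3, 50))  # False (1 is represented
                                                   # only by the first form)
```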
- -REPLY [6 votes]: As you mentioned on Chat, Kap and I wrote a note on forms of different discriminants (but positive definite forms, meaning negative discriminants). This was corrected and extended by John Voight, now at Dartmouth; also published. -The best known examples are the pair $x^2 + xy + y^2$ and $x^2 + 3 y^2.$ The proof that these represent the same numbers is some 2 by 2 matrices, some things mod 2. Same for the indefinite pair $x^2 + xy - y^2$ and $x^2 - 5 y^2.$ -Probably worth pointing out that the forms $x^2 + xy + 2ky^2$ and $x^2 + (8k-1)y^2$ represent all the same odd numbers, including any odd primes. The latter form does not represent $2$ or $-2,$ if you can say the same about the former form they agree on primes. We called these "Trivial Pairs." Um; as with Gauss, we discard these if the discriminant is square, meaning we demand $8k - 1 \neq -w^2,$ or $k \neq \frac{1 - w^2}{8}.$ -The question changes if you allow square discriminants. -There may be infinitely many other indefinite pairs, we did not check. -If the discriminant is not a square, two forms of the same discriminant that share even a single prime are $GL_2 \mathbb Z$ equivalent. In traditional terms, they are either equivalent or opposite. -Forms with square discriminant, such as $xy$ or $x^2 - y^2,$ are unusual in representing entire arithmetic progressions. Primes do not control things. -For self study, I recommend Buell, Binary Quadratic Forms. I find it easier reading than Buchmann and Vollmer. I also recommend L. E. Dickson Introduction to the Theory of Numbers. For just the first section, I also like Cox, Primes of the Form $x^2 + n y^2.$ Cox does a good job on positive forms, genera, composition. No indefinite forms, though, no Pell. As you can see from my answers, I like the first chapter in Conway, The Sensual Quadratic Form. The wonderful thing there is the "Topograph" construction. I have written a bunch of software to tell me how to avoid arithmetic mistakes in drawing those. These give the best way for talking about a fixed indefinite form $A x^2 + B x y + C y^2$ with $B^2 - 4 AC > 0 $ but not a square. The "cycle" method of Lagrange does not do well when $|n|$ is too large, in finding all solutions to $A x^2 + B x y + C y^2 = n.$ Lagrange's method gives all answers when $|n| < \frac{1}{2} \sqrt{B^2 - 4 AC};$ this result is Theorem 85 in Dickson. Oh, both Lagrange and Conway are talking about primitive representations, $\gcd(x,y) = 1.$<|endoftext|> -TITLE: Tightest proven upper bounds for the smallest prime of the for $an + b$ -QUESTION [5 upvotes]: If $\gcd(a, b) = 1$, it is known that there are infinitely many primes of the form $an + b$. -Denote $n(a, b)$ to be the smallest $n$ such that $an + b$ is prime. -$n(a, b) \leq \max(a, b)$ holds for $1 \leq a, b\leq 10000$. - -What is the tightest PROVEN upper bound for $n(a, b)$ ? - -REPLY [3 votes]: This is a well-studied problem, a key-word is "Linnik's theorem." -Let us denote the least prime itself by $p(a,b)$, so $p(a,b) = b + n(a,b)a$. -It is more common to express results in that way, and one can pass from on to the other easily. -Then Linnik proved $p(a,b) \le c a^L$ for some constants $c$ and $L$. -Meanwhile the best constant $L$ for which this is known is $L=5$. -At least for $L=5.5$ the constant $c$ is effectively computable (I am not sure an actual value was determined though). -It is conjecture that $L=2$ is true, and more precisely that $p(a,b) < a^2$. 
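For small moduli the conjecture is easy to test by brute force (a quick sketch; `least_prime` is an ad hoc name and plain trial division suffices at this scale):

```python
from math import gcd

def is_prime(m):
    if m < 2:
        return False
    d = 2
    while d * d <= m:
        if m % d == 0:
            return False
        d += 1
    return True

def least_prime(a, b):
    """p(a,b) = b + n(a,b)*a, the least prime in the progression b mod a."""
    p = b
    while not is_prime(p):
        p += a
    return p

print(max(least_prime(a, b) / a ** 2
          for a in range(2, 201)
          for b in range(1, a) if gcd(a, b) == 1))
# stays well below 1, consistent with the conjectured bound p(a,b) < a^2
```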
-Under the generalized Riemann hypothesis it is known that $p(a,b) \le (1+o(1)) \varphi(a)^2 \log(a)^2$. -For further details see the linked page.<|endoftext|> -TITLE: Negative solution to $x^2=2^x$ -QUESTION [7 upvotes]: Just out of curiosity I was trying to solve the equation $x^2=2^x$, initially I thought there would be just the two solutions $x=2$ and $x=4$, but wolfram shows that the two equations intersect at not 2 but 3 locations, the third being a negative value of $x$. The third solution isn't obvious like the other two, so I just have a few questions about the negative solution. Is it rational? is it commonly represented with a greek letter? If it is irrational is there a way to approximate it? - -REPLY [3 votes]: So we have $x^2 = 2^x$. Taking the square root of both sides and assume the solution is negative gives $x=-\sqrt{2}^x$. We can then establish a recursive sequence, $x_n = -\sqrt2^{x_{n-1}}$. Assuming that this converges gives us the answer, $x=-\sqrt2 ^{-\sqrt2 ^ {-\sqrt2 ^\cdots}}$. After five iterations, we get $$x\approx-0.76961847524.$$ Substituting the answer back in we get, $$2^{-0.76961847524} = 0.58657257487 \approx 0.59231259743 =(-0.76961847524)^2.$$<|endoftext|> -TITLE: Are these trigonometric expressions for the ceiling and floor functions correct? -QUESTION [5 upvotes]: I believe that I have found a trigonometric expression for both the ceiling and floor function, and I seek confirmation that it is, indeed, correct. -Update. -$$\begin{align} -\lfloor x \rfloor &= x - \frac12+f(x) \\[4pt] -\lceil x \rceil &= x + \frac12+g(x) -\end{align}$$ -where -$$\begin{align} -f(x) &= \begin{cases} -\frac12, & x\in\Bbb{Z} \\[4pt] -0, &x=\frac12n, n\in\Bbb{Z} \\[4pt] -\frac1\pi \tan^{-1}(\cot(\pi x)), &\text{otherwise} -\end{cases} \\[10pt] -g(x) &= \begin{cases} --\frac12, & x\in\Bbb{Z} \\[4pt] -0, &x=\frac12n, n\in\Bbb{Z} \\[4pt] -\frac1\pi \tan^{-1}(\cot(\pi x)), &\text{otherwise} -\end{cases} -\end{align}$$ - -REPLY [4 votes]: If you're using the single-valued $\arctan$, i.e. $-\frac{\pi}{2} \leq \arctan(x) \leq \frac{\pi}{2}$, which is what all computer languages I've seen use, this works perfectly fine. -Proof: -Let $x = s(a + b)$, where $a \in \mathbb{N}$, $0 \leq b < 1$ and $s \in \{-1,1\}$. We can call $a$ the absolute integer part, $b$ the absolute fractional part, and $s$ is $x$'s sign. -Then: -$$ -\begin{align*} -\frac{\arctan(\cot(\pi x))}{\pi} -&= \frac{\arctan(\cot(\pi s(a + b)))}{\pi}\\[1em] -&= \frac{\arctan(s\cdot\cot(\pi a + \pi b)))}{\pi}\\[1em] -&= \frac{\arctan(s\cdot\cot(\pi b))}{\pi}\\[1em] -&= \frac{\arctan(s\cdot\tan(\frac{\pi}{2} - \pi b))}{\pi}\\[1em] -&= \frac{\arctan(s\cdot\tan(\pi(\frac{1}{2} - b)))}{\pi}\\[1em] -&= \frac{\arctan(\tan(\pi s(\frac{1}{2} - b)))}{\pi}\\[1em] -&= \frac{\pi s(\frac{1}{2} - b)}{\pi}\\[1em] -&= s\bigg(\frac{1}{2} - b\bigg)\\[1em] -\end{align*} -$$ -Note that we can assert that $\arctan(\tan(\pi s(\frac{1}{2} - b)) = \pi s(\frac{1}{2} - b)$ without issues only because -$$ -\begin{alignat*}{5} -0 && \quad \leq && b \quad && < \quad && 1 &&\implies\\[0.5em] -0 && \quad \geq && -b \quad && > \quad && -1 &&\implies\\[0.5em] -\frac{1}{2} && \quad \geq && \quad \frac{1}{2} - b \quad && > \quad && -\frac{1}{2} &&\implies\\[0.5em] -\frac{\pi}{2} && \quad \geq &&\ \ \ \pi\Bigg(\frac{1}{2} - b\Bigg) \quad && > \quad && -\frac{\pi}{2} &&\\ -\end{alignat*} -$$ -$\arctan(\tan(x)) = x$ only holds for $x$ in that range. For example, $\arctan(\tan(x)) = x - \pi$ if $\frac{\pi}{2} < x < \pi$. -With that said... 
-$$ -\begin{align*} -\lfloor x \rfloor &= x - \frac{1}{2} + s\bigg(\frac{1}{2} - b\bigg)\\[0.5em] -&= s(a + b) - \frac{1 - s}{2} - sb\\[0.5em] -&= s(a + b - b) - \frac{1 - s}{2}\\[0.5em] -&= sa - \frac{1 - s}{2}\\[0.5em] -\end{align*} -$$ -If $s=1$, $\lfloor x\rfloor = a - \frac{1 - 1}{2} = a$. -If $s=-1$, $\lfloor x\rfloor = -a - \frac{1 + 1}{2} = -a - 1 = -(a + 1)$. -This works because $\lfloor \pm y.uwv \rfloor$ is y if the number is positive, and $-(y + 1)$ if the number is negative. e.g. $\lfloor 1.2\rfloor = 1$ but $\lfloor -1.2\rfloor = -2$. -Proving that your $ceil$ function works should go about the same way. -I realize I am three years late to this but I initially was going to post my own trigonometric floor: -$$ -\lfloor x \rfloor = x - \text{frac}(x)\\[1.4em] -% -\text{frac}(x) = \text{sgn}\Big(\sin(2\pi x)\Big)\Bigg( \frac{\cos^{-1}(\cos(2\pi x))}{2\pi} - \frac{1}{2} \Bigg) + \frac{1}{2}\\[1.4em] -% -\text{sgn}(x) = \frac{1}{2}\Bigg(\frac{\cot^{-1}(x) - \cot^{-1}(-x)}{\big|\cot^{-1}(x)\big|}\Bigg) -$$ -But yours seems a bit simpler.<|endoftext|> -TITLE: A rank-one matrix is the product of two vectors -QUESTION [36 upvotes]: Let $A$ be an $n\times m$ matrix. Prove that $\operatorname{rank} (A) = 1$ if and only if there exist column vectors $v \in \mathbb{R}^n$ and $w \in \mathbb{R}^m$ such that $A=vw^t$. - -Progress: I'm going back and forth between using the definitions of rank: $\operatorname{rank} (A) = \dim(\operatorname{col}(A)) = \dim(\operatorname{row}(A))$ or using the rank theorem that says $ \operatorname{rank}(A)+\operatorname{nullity}(A) = m$. So in the second case I have to prove that $\operatorname{nullity}(A)=m$-1 - -REPLY [7 votes]: Suppose that $A$ has rank one. Then its image is one dimensional, so there is some nonzero $v$ that generates it. Moreover, for any other $w$, we can write -$$Aw = \lambda(w)v$$ -for some scalar $\lambda(w)$ that depends linearly on $w$ by virtue of $A$ being linear and $v$ being a basis of the image of $A$. This defines then a nonzero functional $\mathbb R^n \longrightarrow \mathbb R$ which must be given by taking dot product by some $w_0$; say $\lambda(w) =\langle w_0,w\rangle$. It follows then that -$$ A(w) = \langle w_0,w\rangle v$$ for every $w$, or, what is the same, that $A= vw_0^t$.<|endoftext|> -TITLE: Question about the proof of Rudin's Theorem 2.30 -QUESTION [5 upvotes]: The theorem states: -Suppose $Y \subset X$. A subset $E$ of $Y$ is open relative to $Y$ if and only if $E = Y \cap G$ for some subset $G$ of $X$. -I think the proof in the forward direction is relatively clear, however I have some problems relating the backward direction. The proof is relatively quick and goes as (Rudin, pg. 36): -If $G$ is open in X and $E = G \cap Y$, every $p \in E$ has a neighborhood $V_p \subset G$ (open ball $B_{r_p}(p) = \{x \in X: d(p, x) < r_p \}$). Then $V_p \cap Y \subset E$, so that $E$ is open relative to Y. -In order to prove that $E$ itself is and open set in $Y$, wouldn't we want to prove that for each $p \in E$, there is an open ball contained in $Y$. Thus would it work to remedy the proof by taking a ball for each $p$ with the following radius: -$r_p' = \min \{ r_p, \sup_{x \in E} d(p, x) \}$ ? -Then we could guarantee that the ball that is guaranteed by the openness of $G$ will let conclude the openness of $E$ relative to $Y$. -Thank you very much. - -REPLY [3 votes]: I initially found this proof very confusing because I was hazy on the meaning of "open relative to." 
See my response to my own confused question here: Suppose we have an open set $E$ such that $E \subset Y \subset X$ for some metric space $X$. When is $E$ *NOT* open relative to $Y$? Rudin Thm 2.30 to develop a little more intuition on what it means for one set to be open relative to another set. -I didn't find Mikhail D's proof of the reverse direction straightforward, so let me present the way that I thought about this. Hopefully someone else who gets stuck on part 2 of Theorem 2.30 will find it illuminating. -Theorem 2.30 Reverse Direction -Suppose that $E = G \cap Y$ for some $G$ that is open in $X$. We must show that $E$ is open relative to $Y$. -Proof: First, note that for every point $p$, there is a neighborhood $V_p \subset G$ (i.e. there is some $r_p > 0$, such that $\forall q$ where $d(p,q) < r_p$, we have that $q \in G$). -To see why this is true, suppose that it were false. Then there would have to be some $p \in E$ (let's call it $p_0$) such that there is NO $r > 0$ such that $N_r(p) \in G$. Now, because $E \subset Y \cap G$, we know that $E \subset G$. Hence $p_0 \in E \Rightarrow p_0 \in G$. Thus, $G$ includes a point $p_0$ such that $p_0$ is NOT an interior point of $G$. But this contradicts our assumption that $G$ was an open subset of $X$! -Hence, we know that for every point $p \in E$, there is a neighborhood $V_p \subset G$. -Now consider a point $q \in V_p \cap Y$. We know that $q \in V_p \Rightarrow q \in G$ (from our conclusion above). Hence $q \in V_p \cap Y \Rightarrow q \in G \cap Y$ (using our assumption that $E = G \cap Y$). Hence, $V_p \cap Y \subset E$. -But this means that for every point $p \in E$, there exists some $r_p > 0$ (namely, the $r_p$ such that $V_p = \{q | d(p,q) < r_p\} \subset G$), such that for all $q$, if it is the case that $d(p, q) < r_p, q \in Y$, then $q \in V_p \cap Y \Rightarrow q \in G \cap Y \Rightarrow q \in E$, which is precisely the condition for $E$ being open relative to $Y$.<|endoftext|> -TITLE: Good reference for string diagrams -QUESTION [6 upvotes]: I try to learn about string diagrams, like explained here. -First question: Is also some good written introduction to this topic? -Second question: I found The Geometry Of Tensor Calculus, which seems to be the right thing for me, but they are concerned with monodial categories, in contrary to the video which is concerned about natural transformations, it seems to me. Is there some easy way to interpret the content of the paper in terms of natural transformations between functors between (non-monoidal) categories? - -REPLY [7 votes]: John Baez, Aaron Lauda, Higher-Dimensional Algebra V, arXiv -Dan Mardsen, Category theory using string diagrams, arXiv -Peter Selinger, A survey of graphical languages for monoidal categories, arXiv<|endoftext|> -TITLE: Is it true that $e^{i\omega}=1$ for any value of $\omega$? -QUESTION [5 upvotes]: I have a problem understanding the polar form of a complex number. We have that -$$e^{i\omega} = \cos{\omega}+i\sin{\omega}.$$ -In particular, if $\omega=2\pi$ then $e^{i2\pi} = \cos{2\pi}+i\sin{2\pi}=1$. -Yet, if I rewrite it like this: $$e^{i\omega} = e^{i2\pi\frac{\omega}{2\pi}},$$ -then -$$e^{i\omega}=1^{\frac{\omega}{2\pi}}=1.$$ -Does this mean that, for any value of $\omega$, $e^{i\omega}$ is always equal to 1? - -REPLY [2 votes]: For complex numbers, we cannot say in general that $(e^z)^w=e^{zw}$. This stems from a problem of uniquely defining the power of one complex number to another. In other words, complex exponentiation is multi-valued. 
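You can watch the trap spring in any environment whose power operator silently returns principal values; for instance, in Python (a small illustration of my own, not part of the original question):

```python
import cmath
from math import pi

w = 1.0
print(cmath.exp(1j * w))                     # exp(iw) = 0.5403... + 0.8414...j
print(cmath.exp(2j * pi) ** (w / (2 * pi)))  # "1^(w/2pi)" = 1.0 (up to rounding)
# z ** w computes the principal value exp(w * log(z)); the other branches
# exp(w * (log(z) + 2*pi*1j*k)) with k != 0 are silently discarded, which is
# exactly where the fallacious "e^{iw} = 1" argument goes wrong.
```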
-To understand the multi-valuedness of complex exponentiation, sometimes I think it is helpful to distinguish between $\exp ix$, the natural exponential function with an imaginary exponent, and $e^{ix}$, the multi-valued imaginary power of a particular real number (which has the exponential function as its principal branch). This clarifies how we define the operation of raising a complex number to a complex power in general. -Defining complex exponentiation: -To start off with we can define the operation of raising a complex number to a natural number power very simply as repeated multiplication. So $z^2=zz$ and $z^3=zzz$ and so on. This gives us a unique answer for any $z^n$ with $z$ complex and $n$ natural. (This can be easily extended to integer powers as well.) -But to go further and define roots of complex numbers (so that we can define rational powers, so that we can define real powers by continuity), we immediately run into the problem that there are $n$ complex numbers $w$ satisfying $w^n=z$, and unlike the real number case, there is no natural choice for the "principal root" in general. Any choice (the root with the least angle from the positive real axis, for instance) is bound to create a discontinuity in the principal root function for a general complex input. -The natural exponential: -But we can define the natural exponential function for complex numbers, because it only needs natural number powers of complex numbers to work: -$$\exp z=\lim_{n\to \infty}\left(1+{z\over n}\right)^n$$ -This gives a unique result for any input. The polar form of a complex number, and the Euler formula, are really referring to the value of the natural exponential function - that is, they only hold if you use the principal branch of $e^{ix}$. -Now to define the result of any complex number raised to the power of another complex number, we define one of the values of $e^{z}$ to be the natural exponential function, and then pretend the exponentiation identities hold. So if $z=r\exp i\theta$ and $w=x+iy$, then: -$$z^w=(r\exp i\theta)^{x+iy}=e^{(\ln r+i\theta)(x+iy)}=\exp(x\ln r -y\theta)\exp i(y\ln r+x\theta)$$ -Multi-valuedness: -Why does this create a multi-valued function? Because it is also true that $z=r\exp i(\theta+2\pi k)$ for any integer $k$. So any of the following values are also valid results for $z^w$: -$$z^w=\exp(x\ln r -y(\theta+2\pi k))\exp i(y\ln r+x(\theta+2\pi k))$$ -In general these values all lie on a logarithmic spiral (unless the exponent is real, in which case the possible values are all on a circle, or imaginary, in which case the possible values are all on a ray from the origin). Note that there are a finite number of unique values only in the case that the exponent is rational, and a single unique value only if the exponent is an integer. -This is how you can generate contradictions like $e^{ix}=1$ for all $x$ using complex exponentiation - by switching between the different possible results. So one possible result for $1^{x/2\pi}$ is $1$ like we would expect for a real exponential. This is from saying $1=1\exp i0$. But we can also write $1=1\exp i2\pi$, and get the result $1^{x/2\pi}=e^{ix}$.<|endoftext|> -TITLE: Is $C[0,1]$ larger than $\mathbb R$? -QUESTION [6 upvotes]: Let $C[0,1]$ be the set of all continuous function from $[0,1]$ to $\mathbb R$. -Is the cardinality of $C[0,1]$ larger than or equal to that of $\mathbb R$? - -REPLY [7 votes]: It is equal, because of continuity. 
-Specifically, let $X\subset [0,1]$ be a countable dense subset (for example, you could take $X$ to be the set of rational numbers in $[0,1]$). -Then since every $f\in C[0,1]$ is continuous, it is determined by its restriction to $X$. Thus, restriction determines an injection $C[0,1]\hookrightarrow C(X)$. To see this, suppose we know the values of $f$ at every $x\in X$. Now let $a\in [0,1]$, then we know by density that there is some sequence $(x_n)_n$ with $a = \lim_n x_n$. But then, we have -$$f(a) = f(\lim_n x_n) = \lim_n f(x_n)$$ -Here, it is continuity that lets us say that "$f$ of a limit is the limit of the $f$'s" (this is a straightforward consequence of the definitions). Thus, $f(a)$ is determined by the values of $f$ at $X$. Hence, any two functions in $C[0,1]$ which agree on $X$ must actually be equal. -On the other hand, if $\mathfrak{c}$ is the cardinality of $\mathbb{R}$, then the cardinality of $C(X)$ is $\mathfrak{c}^{\aleph_0} = \mathfrak{c}$<|endoftext|> -TITLE: Problem involving trace and determinant of symmetric matrices -QUESTION [7 upvotes]: I've stumbled upon this exercise on a linear algebra book that asks me to determine all the ordered pairs $(a,b)$ of real numbers to which there exists an unique symmetric matrix $A\in R^{2\times 2}$ so that $tr(A)=a$ and $Det(A)=b$. -I don't even know how to tackle this problem, I've tried using Laplace expansion, and other properties of the trace and determinants. -Any help will be appreciated. - -REPLY [4 votes]: The characteristic polynomial of a $2 \times 2$ matrix can be written as: -\begin{equation} -p(\lambda) = \lambda^2 - \textrm{tr}(A)\lambda + \textrm{det}(A) -\end{equation} -(Check here). If a matrix $A$ is symmetric then it is diagonalizable such that: -\begin{equation} -A = Q \Lambda Q^T \quad \textrm{ where } \quad \Lambda= \textrm{diag}(\lambda_1,\lambda_2) -\end{equation} -(i.e. $\Lambda$ is the matrix with eigenvalues in its diagonal). Therefore we can assume the characteristic polynomial can be solved for real roots: -\begin{equation} -\lambda^2 - \underbrace{\textrm{tr}(A)}_{a}\lambda + \underbrace{\textrm{det}(A)}_{b} = 0 -\end{equation} -The problem is therefore reduced to finding for which $(a,b)$ there is a solution to: -\begin{equation} -\lambda^2 - a\lambda + b = 0 -\end{equation} -The roots of the equation are given by the usual formula: -\begin{equation} -\lambda = \frac{a \pm \sqrt{a^2 - 4b}}{2} -\end{equation} -Real-valued solutions are the ones where $a^2 \geq 4b$. This solves the problem.<|endoftext|> -TITLE: Can we continue a proof by contradiction even if we get to a contradiction? -QUESTION [5 upvotes]: consider the example below : -our set of premises are $\{ a , b , a \to c , b \to a \}$ and we want to prove $c$ is true . -someone has used proof by contradiction to prove this . -the proof : - -assume $\lnot c $ is true -we also know $ \lnot c \to \lnot a $ is true so -$\lnot a$ is true. since we know $a$ is also true -then $\lnot a \land a $ is also true -but we also know that $\lnot a \land a$ ia false -so we know $(\lnot a \land a) \to \lnot b$ is true -so from 4 and 6 $\lnot b$ is true -so from 7 we know $\lnot b$ is true and we know $b$ is true -so $\lnot b \land b$ is true so we get to a contradiction . - -since my argument was a valid one then one or more of my assumptions must be false since my only assumption (excluding axioms and definitions included in the premises ) is $\lnot c$ then $c$ is true . -the question is : -is the above proof a valid mathematical proof . 
-can a proof by contradiction continue and get to a second contradiction (line 9) based on it's first contradiction (line 5) ? -my teacher says even in the case of a proof by contradiction non of the steps should be a contradiction (except the last one when the proof ends ) !? -EDIT : -assume we use the proof by contradiction technique . it means we add the negation of the proposition we want to prove ($\lnot p$) to our assumptions ($A$) (the set of axioms and definitions we use in the specific field we want to use the proof) and we add all theses to our logical system $\Gamma$ (I'm mainly considering propositional and predicate logic) so we have -$$ \Gamma' = \lnot p \cup A \cup \Gamma $$ -now if $p$ was provable from $A \cup \Gamma$ this makes $\Gamma'$ an inconsistent logical system in which we have no model for it meaning there is no interpretation which gives value True to all the sentences in $\Gamma'$ (in the semantic sense) -BUT we can assign each proposition a true or false value with our semantic rules (for example truth tables) but since we are in an inconsistent system and every proposition is provable in this system we shouldn't expect all the proven propositions to have value true with respect to our semantic valuation (or any other since there is no model for this system) . in other words an inconsistent system is not sound (the meaning of the soundness of a system is different from the soundness of an argument that we will discuss later)! -validating the above proof by means of syntax -one way of talking about a proof by contradiction is to use a system of proof based logical system (by which i mean something like a Proof calculus ) -like what Taroccoesbrocco did below . -in this way we can define a valid mathematical (even a logical) proof as a proof that uses the set of definitions and axioms and possibly some previously proven theories (in the specific area of math we are interested in) and goes from one step of the proof to the other only by using the rules of proof calculus it uses . -in this way there is no problem in continuing a proof by contradiction even if we get to a contradiction as demonstrated by Taroccoesbrocco .( in this case natural deduction is the proof calculus we use and by the use of the rule EFQ we can continue the proof ) -[1] validating the above proof by means of semantics -Here I want to defend the semantic proof I gave above. -If an argument is based on a contradiction it is a valid argument, regardless of whether it follows the rules of logic we use or not! -By definition an argument is valid iff, if the premises are true the conclusion is also true (i.e. every model of premises is also a model for the conclusion) but in the case of a contradiction it is impossible for our premise(s) to be true (there is no model for the premises) so by definition this is a valid argument. -In a proof by contradiction we know our system was consistent before adding $\lnot p$. We add the negation of the proposition we want to prove to our assumptions and logical system and ask whether the system remains consistent or not (i.e. is there a model that gives value true to $ \Gamma' = \lnot p \cup A \cup \Gamma $) if the new system becomes inconsistent (there is no model) then we know that our assumption was wrong and every model for $ A \cup \Gamma $ is also a model for $p$ so p follows from them! -(Note in the semantic viewpoint $\Gamma'$ is the set that contains propositions $\lnot p \cup A$ and all the sentences that are semantically derivable from them. 
For example via a truth table if we know $t \to s$ and $t$ are both true then $s$ must also be true.) -So the proof above considers the logical system $ \Gamma' = \lnot c \cup A \cup \Gamma $ and assumes there is a model for it. if a model really exists then it must give value true to $\lnot c \cup A$ and all their semantic derivations. -but as we continue the proof we want to give value true to both $\lnot b$ and $b$ and thus a true value to $\lnot b \land b$ but from our definition of a model (and semantic rules ) $\lnot b \land b$ is always false. So there does not exist a model for $\Gamma'$. So $p$ is proven. -In this way we can define a valid mathematical (even a logical) proof as a proof that uses the set of definitions and axioms and possibly some previously proven theories (in the specific area of math we are interested in) and gives them value true and goes from one step of the proof to the other only by using the rules of semantics (e.g. truth tables). -The apparent problem some users pointed out was that from step 5 the argument is based on the contradiction $\lnot a \land a$ being true. But there is no problem with that since we have deduced it in a semantic proof from previous steps and all the next steps are like that too. So the proof is a valid one. -Another apparent problem with proof is that it is unsound if we continue in step 5. -The definition of sound argument is one that is valid and all it's premises are true. ue to this definition the above argument is obviously and unsound one . -since some of it's premises are always false (contradictions) . -BUT this is independent of weather the proof is continued passed stage 5 or not because in the premises of our argument in step 4 we have both $\lnot a$ and $a$ so it is impossible for the premises to be true. -There is no problem when an argument becomes unsound in a proof by contradiction. Since at some stage of the proof and for some proposition $t$ we derive $\lnot t$ and in another step we derive $t$ so we always have these two contradictory propositions among our premises. Thus actually a proof by contradiction always contains this unsound argument. -At the end it seems to me that the ideas I gave in the syntax (proof calculus based) version of the proof seem more rigorous than the ones in the semantic version. I would be glad to get some ideas (specially about the semantic argument!). - -REPLY [9 votes]: Differently from Mauro Allegranza and Graham Kemp, I think that your proof is formally correct, even if it is unnecessarily tricky, inelegant and non-efficient. -I formalized your argument in natural deduction (see below), so this proves that your proof is correct. -Let $\pi_0$ be the derivation of $\lnot c \to \lnot a$ from $\{a \to c\}$ (this correspond to the step 2 of your proof) - -Let $\pi_1$ be the derivation of $\lnot b \land b$ from $\{b, a, a \to c, \lnot c\}$ (this correspond to the steps 1, 3-8 of your proof) - -Then, the derivation of $\pi_2$ of $c$ from $\{a, b, a \to c\}$ corresponds to your whole proof. - -PS: To reply to Graham Kemp: I don't think that "once you've obtained a contradiction the proof has ended", I mean the proof is not forced to be ended. 
If you has proved that $\neg c, \Sigma \vdash \bot$ you can apply RAA (reductio ad absurdum) and get -$$\dfrac{\neg c, \Sigma \vdash \bot}{\Sigma \vdash c}$$ -but you can also apply EFQ (ex falso quodlibet) and get -$$\dfrac{\neg c, \Sigma \vdash \bot}{\lnot c, \Sigma \vdash d}$$ -for any formula $d$, -or you can also apply $\lnot_I$ (introduction of negation) and get -$$\dfrac{\neg c, \Sigma \vdash \bot}{\Sigma \vdash \lnot\lnot c}$$<|endoftext|> -TITLE: Does it follow that $Y$ is homotopy equivalent to $S^{n-1}$? -QUESTION [5 upvotes]: Let $M$ be a compact connected $n$-manifold (without boundary), where $n \ge 2$. Suppose that $M$ is homotopy equivalent to $\Sigma Y$ for some connected based space $Y$. Does it follow that $Y$ is homotopy equivalent to $S^{n-1}$? - -REPLY [5 votes]: I'll prove the statement I have in my comment. That any CW-complex with the homology of $S^n$ suspends to a space homotopy equivalent to $S^{n+1}$ for $n\geq 1$. If $X$ is such a CW-complex, by Mayer-Vietoris, $SX$ has the homology of $S^{n+1}$. As $X$ is $(0)$-connected, $\pi_1(SX)=0$ by the Freudenthal suspension theorem. -By applying Hurewicz repeatedly, we have $0=H_k(SX)\cong\pi_k(SX)$ for $1\leq k\leq n+1$, and $H_{n+1}(SX)=\pi_{n+1}(SX)=\Bbb Z$ and the Hurewicz map $$H:\pi_{n+1}(SX)\to H_{n+1}(SX)$$ is an isomorphism. In particular, there exists $\alpha :S^{n+1}\to SX$ with $\alpha_*([S^{n+1}])$ generating $H_{n+1}(SX)$. -Now $\alpha$ is map between simply-connected spaces which induces isomorphism on homology , so it is a weak homotopy equivalence (cf. Spanier 7.6.25) and hence a homotopy equivalence if $X$ is a CW-complex. -In particular if $M$ is $S^4$, and $Y$ is a non-trivial homology 3-sphere (e.g. the Poincaré dodecahedral space), then $S(Y)$ is homotopy equivalent to $S^4$.<|endoftext|> -TITLE: Function defined as a limit -QUESTION [23 upvotes]: Q. If $f(x)=\lim\limits_{n\to\infty}\dfrac{\log(2+x)-x^{2n}\sin(x)}{1+x^{2n}}$, then explain why the function does not vanish anywhere in the interval $[0,\pi/2]$, although $f(0)$ and $f(\pi/2)$ differ in sign. -My solution: -First, we simplify the limit a bit. -$$f(x)=\lim_{n\to\infty}\frac{\log(2+x)-x^{2n}\sin(x)}{1+x^{2n}}=\lim_{n\to\infty}\frac{\log(2+x)-(x^{2n}+1-1)\sin(x)}{1+x^{2n}}\\ \implies f(x)=\lim_{n\to\infty}\frac{\log(2+x)+\sin(x)}{1+x^{2n}}-\sin(x)$$ -Now, we divide the interval $[0,\pi/2]$ into three parts, viz., $x\in [0,1)$, $x=1$ and $x\in (1,\pi/2]$. -For the first part, i.e., when $x\in [0,1)$, we have $|x|\lt 1$ and hence $1+x^{2n}\to 1+0=1$ as $n\to\infty$, hence, by the quotient rule of limits, it reduces to, -$$f(x)=\log(2+x)+\sin(x)-\sin(x)=\log(2+x)~\forall~x\in [0,1)$$ -At $x=1$, we have $f(x)=\lim\limits_{n\to\infty}\frac{\log3+\sin(1)}{1+1^{2n}}-\sin(1)=\frac 12(\log3-\sin1)$ -For $x\in (1,\pi/2]$, we have $|x|\gt 1$, hence $1+x^{2n}\to\infty$ as $n\to\infty$. But $\log(2+x)$ and $\sin(x)$ are finite constants for a particular $x$. Hence, the limit goes to $0$ as $n\to\infty$ and we get $f(x)=0-\sin(x)=-\sin(x)~\forall~x\in (1,\pi/2]$. -So, we write our results collectively as, -$$f(x)=\begin{cases}\begin{align}\log(2+x)&&\forall~x\in [0,1)\\ \frac 12(\log3-\sin1)&&\textrm{at }x=1\\ -\sin(x)&&\forall~x\in (1,\pi/2]\end{align}\end{cases}$$ -It's easy to verify now that $f(x)$ doesn't vanish in $[0,\pi/2]$ since, for $x\in [0,1)$, we have $f(x)=\log(2+x)\gt \log1=0$ since logarithm is strictly increasing. 
At $x=1$, the function value is obviously not $0$ since $\log3\neq\sin1$, and for $x\in (1,\pi/2]$ we have $f(x)=-\sin(x)\in [-1,-\sin 1)$ since sine is strictly increasing in $[0,\pi/2]$.
-Now, the explanation behind why $f(x)$ doesn't vanish even though $f(0)$ and $f(\pi/2)$ differ in sign is that $f(x)$ isn't continuous on $[0,\pi/2]$; more specifically, it's discontinuous at $x=1$ with left-hand limit $\log3$ and right-hand limit $-\sin1$, and hence Bolzano's theorem isn't applicable to $f(x)$ on $[0,\pi/2]$.
-
-Comments about my solution and improvements are welcome. Thanks! :)
-
-REPLY [3 votes]: This is a community wiki answer to remove this question from the unanswered list (once someone upvotes it): The OP's solution is correct, and very nicely written.<|endoftext|>
-TITLE: How to identify lattices in given Hasse diagrams?
-QUESTION [5 upvotes]: Consider the following Hasse diagrams.
-
-
-and given here,
-
-Counter example on wiki:
-
-Says "Non-lattice poset: b and c have common upper bounds d, e, and f, but none of them are the least upper bound."
-
-But my question is: $f$ is the least upper bound, right?
-
-Similarly,
-
-Says "Non-lattice poset: a and b have common lower bounds 0, d, g, h, and i, but none of them are the greatest lower bound."
-
-But my question is: $0$ is the greatest lower bound, right?
-
-
-
-Can you explain lattices so that I can identify the above posets from their Hasse diagrams?
-
-REPLY [5 votes]: For your first question, $f$ is not the least upper bound of $b$ and $c$: $b\le d$ and $c\le d$, so $d$ is an upper bound of $b$ and $c$, and $d<f$, so $f$ cannot be the least upper bound (a least upper bound must lie below every other upper bound). Similarly, $0$ is not the greatest lower bound of $a$ and $b$ in the second diagram, since $d$ is also a lower bound of $a$ and $b$ with $0<d$.<|endoftext|>
-TITLE: Evaluating $\int_0^{\infty} \frac {e^{-x}}{a^2 + \log^2 x}\, \mathrm d x$
-QUESTION [12 upvotes]: I am trying to evaluate this integral
-$$I=\int_0^{\infty} \frac {e^{-x}}{a^2 + \log^2 x}\, \mathrm d x$$
-for $a \in \mathbb R_{>0}$.
-Any ideas?
-In the case $a=\pi$ we have $I= F - e$ where $F$ is the Fransén–Robinson constant.
-
-REPLY [2 votes]: I didn't obtain a closed form but searched for the expansions of $I(a)$ at $+\infty$ and $0$.
-The function is smooth, with some resemblance to a (faster decreasing) rectangular hyperbola.
-Formal expansion as $\;a\to +\infty$ :
-\begin{align}
-I(a)&:=\int_0^{\infty} \frac {e^{-x}}{a^2 + (\log x)^2}\,dx\\
-&=\frac 1{a^2}\int_0^{\infty}\sum_{n=0}^\infty e^{-x}\left(-\left(\frac{\log x}a\right)^2\right)^{n}\,dx\\
-&=\frac 1{a^2}\sum_{n=0}^\infty\frac {(-1)^n}{a^{2n}}\int_0^{\infty} e^{-x}\left(\frac d{ds}\right)^{2n}\left.e^{s\log x}\right|_{s=0}\,dx\\
-&=\sum_{n=0}^\infty\frac {(-1)^n}{a^{2(n+1)}}\left(\frac d{ds}\right)^{2n}\left.\int_0^{\infty} e^{-x} x^s\,dx\right|_{s=0}\\
-&=\sum_{n=0}^\infty\frac {(-1)^n}{a^{2(n+1)}}\left.\Gamma^{(2n)}(1+s)\right|_{s=0}\\
-\end{align}
-Since $\;\Gamma^{(2n)}(1)\sim (2n)!\;$ the equality must be replaced by an asymptotic expansion :
-$$\tag{1}\boxed{\displaystyle I(a)\sim \sum_{n\ge 0}\frac {(-1)^n\,\Gamma^{(2n)}(1)}{a^{2(n+1)}}},\quad a\to +\infty$$
-The integral is thus a generating function for the even derivatives of $\,\Gamma$ at the point $1$ :
-it has the same expansion as $\;\displaystyle \frac{\Gamma(1+i/a)+\Gamma(1-i/a)}{2\,a^2}\;$ for $\,a\to +\infty$ but without the even factorials at the denominators (i.e. without convergence of the whole series for $a>1$).
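-Side note: the boxed expansion $(1)$ is easy to test numerically. Below is a minimal Python sketch (assuming the mpmath library is available; the function names are mine, not from the answer) comparing a quadrature value of $I(a)$ with a partial sum of $(1)$ for a moderately large $a$:
-
-    # Numerical sanity check of the asymptotic expansion (1), using mpmath.
-    from mpmath import mp, mpf, quad, exp, log, gamma, diff
-
-    mp.dps = 30  # working precision (decimal digits)
-
-    def I(a):
-        # I(a) = int_0^oo e^(-x) / (a^2 + (log x)^2) dx;
-        # split the interval at x = 1 where log x changes sign
-        return quad(lambda x: exp(-x) / (a**2 + log(x)**2), [0, 1, mp.inf])
-
-    def partial_sum(a, terms=4):
-        # first few terms of (1); diff(gamma, 1, 2*n) is the (2n)-th
-        # derivative of Gamma at 1, computed numerically
-        return sum((-1)**n * diff(gamma, 1, 2*n) / a**(2*(n + 1))
-                   for n in range(terms))
-
-    a = mpf(10)
-    print(I(a), partial_sum(a))  # the two values agree to several digits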
-Let's add that each derivative $\Gamma^{(n)}(1)$ may be rewritten using the expansion of $\;\displaystyle\Gamma(1+x)=\exp\left({-\gamma\,x+\sum_{ k=2}^\infty\;\zeta(k)\dfrac{(-x)^k}k}\right)\;$ to get : -$$\tag{2}\Gamma(1+x)=\overbrace{1}^{\Gamma(1)}-\gamma x+\overbrace{\left(\gamma^2+\zeta(2)\right)}^{\Gamma^{\large{(2)}}(1)}\frac{x^2}{2!}-\left(\gamma^3+3\gamma\,\zeta(2)+2\zeta(3)\right)\frac{x^3}{3!}+\overbrace{\left(\gamma^4+6\gamma^2\zeta(2)+8\gamma\,\zeta(3)+\frac{27}2\zeta(4)\right)}^{\Gamma^{\large{(4)}}(1)}\frac{x^4}{4!}-\cdots$$ -Update: An asymptotic expansion for a more general Laplace transform was given by Bouwkamp (ref. $1$ or ref. $2$) for $\sigma\ge 0$ : -$$\tag{3}t^{\sigma}\int_0^{\infty} \frac {x^{\sigma-1}e^{-t\,x}}{a^2 + (\log x)^2}\,dx\sim\sum_{n=0}^\infty\frac{\varphi_n(a,\sigma)}{(\log t)^{n+1}}\qquad(t\to\infty)$$ -with the coefficients $\varphi_n$ obtained from the generating function : -$$\tag{4}\frac{\sin(ax)}a\Gamma(\sigma+x)=\sum_{n=0}^\infty \varphi_n(a,\sigma)\frac{x^n}{n!}$$ -At this point it may be observed that if $I(a)$ is generalized to $\;\displaystyle I_{\sigma}(a):=\int_0^{\infty} \frac {x^{\,\sigma-1}e^{-x}}{a^2 + (\log x)^2}\,dx\,$ -then a derivation similar to the obtention of $(1)$ (with an additional factor $x^{\,\sigma-1}$ in the integral) will simply replace $\,\Gamma^{(2n)}(1+s)\,$ by $\,\Gamma^{(2n)}(\sigma+s)$ and produce : -$$\tag{1'}\boxed{\displaystyle I_{\sigma}(a)\sim \sum_{n\ge 0}\frac {(-1)^n\,\Gamma^{(2n)}(\sigma)}{a^{2(n+1)}}},\quad a\to +\infty$$ - -Expansion as $a\to 0$: it is very regular too with a simple pole at $0$. -The substitution $\;x:=e^{-w}\;$ gives : -\begin{align} -I(a)&=\int_0^{\infty} \frac {e^{-x}}{(\log x)^2+a^2}\,dx\\ -&=\int_{-\infty}^{\infty} \frac {e^{\large{-w-e^{-w}}}}{w^2+a^2}\,dw\\ -&=-\frac 1a\;\operatorname{Im}\int_{-\infty}^{\infty} \frac {e^{\large{-w-e^{-w}}}}{w+ia}\,dw\\ -&=-\frac 1a\;\operatorname{Im}\left[\int_{-\infty}^{\infty} \frac {e^{\large{-w-e^{-w}}}-e^{\large{ia-e^{ia}}}}{w+ia}\,dw+\int_{-\infty}^{\infty} \frac {e^{\large{ia-e^{ia}}}}{w+ia}\,dw\right]\\ -&=-\frac 1a\;\operatorname{Im}\left[\int_{-\infty}^{\infty} \frac {e^{\large{-w-e^{-w}}}-e^{\large{ia-e^{ia}}}}{w+ia}\,dw-\pi i\, e^{\large{ia-e^{ia}}}\right]\\ -\tag{5}I(a)&=E(a)+\frac{\pi}a\;\operatorname{Re}\left[e^{\large{ia-e^{\,ia}}}\right]\\ -\end{align} -The odd part of $I(a)$ is given in closed form (at the right) so let's study the even part : -$$\tag{6}E(a):=-\frac 1a\;\operatorname{Im}\int_{-\infty}^{\infty} \frac {e^{\large{-w-e^{-w}}}-e^{\large{ia-e^{ia}}}}{w+ia}\,dw=\sum_{n\ge 0} K_{2n}\, a^{2n}$$ -The integrand is an entire function (since of type $\,h_a(w):=\dfrac {f(w)-f(-ia)}{w-(-ia)},\ h_a(-ia)=f'(-ia)\;$ with $f(w):=e^{\large{-w-e^{-w}}}$ entire) and may thus be expanded in power series of $a$ as : -$$\tag{7}h_a(w)=\frac 1w\left[ e^{\large{-w-e^{-w}}}-e^{-1}\sum_{n\ge 0}\dfrac{\overline{B}_{n+1}}{n!}(ia)^n\right]\sum_{m\ge 0} \left(-\frac{ia}w\right)^m$$ -where $\,\overline{B}_n\,$ is the $n$-th "complementary Bell number" generated by $\,e^{\large{1-e^{x}}}\,$ (with a shift of $1$ from the multiplication by $e^x$). 
-This allows us to obtain integrals for the $K_{2n}$ coefficients of $\, a^{2n}$ (from $(6)$ we need only the imaginary part) :
-$$\tag{8} K_{2n}= \frac{(-1)^n}e\int_{-\infty}^{\infty}\frac{e^{\large{\,1-w-e^{-w}}}-1}{w^{2n+2}}+\sum_{k=0}^{2n-1}\frac{\overline{B}_{2n+3}}{(k+2)!\,(-w)^{2n-k}}\;dw,\quad n>0$$
-while $K_0$ may be integrated by parts and rewritten (using $\;x=e^{-w}\,$) as :
-\begin{align}
-K_0&=\int_{-\infty}^{\infty}\frac{e^{\large{-w-e^{-w}}}-e^{-1}}{w^{2}}dw\\
-&=\left.-\frac 1w\left(e^{\large{-w-e^{-w}}}-e^{-1}\right)\right|_{-\infty}^{\infty}+\int_{-\infty}^{\infty}\frac {e^{-w}-1}w\,e^{\large{-w-e^{-w}}}\,dw\\
-&=-\int_0^{\infty}\frac {x-1}{\log x}\,e^{-x}\,dx,\quad\text{but}\ \frac {x-1}{\log x}=\int_0^1 x^t\,dt\ \ \text{so that}\\
-&=-\int_0^1\int_0^{\infty}x^{t}e^{-x}\,dx\,dt\\
-&=-\int_0^1\Gamma(1+t)\,dt\\
-\end{align}
-Concerning numerical evaluation you may use :
-$$\tag{9}I(a)\approx \sum_{n=0} K_{2n} a^{2n}+\frac{\pi}a\;\operatorname{Re}\left[e^{\large{ia-e^{\,ia}}}\right]$$
-with
-\begin{align}
-K_0&\approx -0.9227459506806306051438804823457555774372343917106859152 \\
-K_2&\approx -0.2818432097003410734482737060285029254823027754542059884\\
-K_4&\approx -0.010603155563385929453669193499488685330976821362\\
-K_6&\approx +0.01391725152703237611983369670383107076359973\\
-K_8&\approx +0.002934676571947851378554083839156926455\\
-K_{10}&\approx -0.000021228852199696308226770734336
-\end{align}
-Note that the coefficients of $\,a^{2n-1}\,$ are also related to the complementary Bell numbers by $\;\displaystyle K_{2n-1}=-(-1)^n\frac{\overline{B}_{2n+1}}{(2n)!}\,\frac {\pi}e\;$ with the odd part expanded as :
-$$\frac {\pi}e\left[\frac 1a+\frac a{2!}+\frac{2\,a^3}{4!}-\frac{9\,a^5}{6!}-\frac{267\,a^7}{8!}-\frac{2180\,a^9}{10!}+\cdots\right]$$
-Ref:
-
-C. J. Bouwkamp $(1972)$ "Note on an asymptotic expansion".
-S. G. Llewellyn Smith $(2000)$ "The asymptotic behaviour of Ramanujan's integral and its application to two-dimensional diffusion-like equations".
-R. Wong $(1975)$ "On Laplace transforms near the origin"<|endoftext|>
-TITLE: Special functions related to $\sum\limits _{n=1}^{\infty } \frac{x^n \log (n!)}{n!}$
-QUESTION [5 upvotes]: While doing some calculation related to von Neumann entropy, I encountered this kind of convergent series.
-$$\text{Exl}(x) \equiv \sum _{n=1}^{\infty } \frac{x^n \log (n!)}{n!}$$
-In my calculation, this function Exl$(x)$ appears in some places where an exponential function would normally appear; for example,
-$$\frac{\cosh (x) \text{Cxl}(x) + \sinh (x)\text{Sxl}(x)}{\cosh(2x)}$$
-appears in my calculation, where
-$$\text{Cxl}(x) \equiv \frac{\text{Exl} (x) +\text{Exl} (-x)}{2}$$
-and
-$$\text{Sxl}(x) \equiv \frac{\text{Exl} (x) -\text{Exl} (-x)}{2}$$
-are defined from the similarity to the hyperbolic functions.
-As the given function Exl$(x)$ looks like some kind of 'augmented' exponential function as the following plot suggests,
-
-I suspect there's a well defined special function related to this series. Is it so? Any kind of suggestion is appreciated.
-
-REPLY [4 votes]: I suspect there's a well defined special function related to this series. Is it so?
-
-Not really. Basically, $\text{Exl}(x)=-F'(1),$ where $F(k)=\displaystyle\sum_{n\ge0}\frac{x^n}{n!^k}~.~$ The only known values
-of $F$ are $F(0)=\dfrac1{1-x},~F(1)=e^x,$ and $F(2)=I_0\big(2\sqrt x\big).$ See Bessel function for more
-information. Neither the function $F$ nor its derivative $F'$ has ever been studied for
-general values of the argument $k$.
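-As a quick sanity check of the identity $\text{Exl}(x)=-F'(1)$ (where the derivative of $F$ is taken in $k$ at $k=1$), here is a minimal Python sketch, assuming the mpmath library; the names Exl and F are just transliterations of the notation above:
-
-    # Check Exl(x) = -F'(1) numerically with mpmath.
-    from mpmath import mp, mpf, factorial, log, diff, nsum, inf
-
-    mp.dps = 25
-    x = mpf('1.5')  # any modest test value
-
-    def Exl(x):
-        # Exl(x) = sum_{n>=1} x^n * log(n!) / n!
-        return nsum(lambda n: x**n * log(factorial(n)) / factorial(n), [1, inf])
-
-    def F(k):
-        # F(k) = sum_{n>=0} x^n / (n!)^k
-        return nsum(lambda n: x**n / factorial(n)**k, [0, inf])
-
-    print(Exl(x), -diff(F, 1))  # the two values agree to working precision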
-Alternatively, we can use Stirling's approximation, but
-neither $\displaystyle\sum_{n\ge1}\frac{x^n}{n!}\cdot\ln n$ nor $\displaystyle\sum_{n\ge0}\frac{x^n}{n!}\cdot n^k$ are expressible in terms of any known functions, be
-they special or elementary. Well, that's not exactly true; the latter series yields the values
-of Bell numbers for $x=1:$ see Dobinski's formula for more information.<|endoftext|>
-TITLE: Proof that the range of a map is determined by its behaviour on the boundary.
-QUESTION [16 upvotes]: Let $f$ be a mapping from an open neighbourhood of the 3-dimensional unit ball to the 2-dimensional plane. Suppose that $f$ is smooth (infinitely continuously differentiable on its domain) and regular (its derivative, as a $2\times 3$ matrix, has rank 2 everywhere).
-I would like a proof, or informed hint, or counterexample, for the claim that $f$ restricted to the closed unit ball has the same range as $f$ restricted to the unit sphere.
-It is not difficult to construct a counterexample when $f$ is allowed to be nonregular.
-I have stated the problem for a mapping from 3 to 2 dimensions but there may be a similar problem going from $n$ to $n-1$ dimensions. For $n=2$ the proof is trivial: a regular differentiable mapping from the closed unit disk to the line reaches its maximum on the boundary circle.
-
-REPLY [5 votes]: This is a very interesting problem. Despite originally thinking the result was true and posting a flawed "proof" here, I can now show that it is false. To do this, I will make use of a published result to imply the existence of a counterexample, although a more explicit construction would be desirable.
-First, a bit of notation. Regular maps as stated in the question are submersions. A knot is a circle embedded in $\mathbb{R}^3$ (we only consider smoothly embedded knots here), and a link $L\subset\mathbb{R}^3$ is a union $L=\bigcup_{i=1}^nL_i$ of a finite pairwise disjoint set of knots, $\{L_1,L_2,\ldots,L_n\}$.
-Given any disjoint pair $L_i,L_j$ of knots, let $lk(L_i,L_j)$ be their linking number. I will use a result by Gilbert Hector and Daniel Peralta-Salas [1]. A subset $L$ of $\mathbb{R}^3$ is strongly integrable (SI) if $L=\Phi^{-1}(0)$ for some smooth submersion $\Phi\colon\mathbb{R}^3\to\mathbb{R}^2$. Now, quoting from Hector & Peralta-Salas:
-Theorem 3.6.11. A link in $\mathbb{R}^3$ is SI if and only if
-$$
-\sum_{j\neq i}lk(L_i,L_j)\equiv 1 \pmod 2
-$$
-for all $i\in\{1,\ldots,n\}$.
-For example, this condition is satisfied for the Hopf link $L=L_1\cup L_2$ for a pair of knots $L_1,L_2$ with linking number 1, so $L$ is SI. For our counterexample, we only require that there does exist a link which is SI. As links are compact, they are bounded in $\mathbb{R}^3$ so, by scaling, we can suppose that $L$ is contained in the open ball of radius $1$. Then, there is a smooth submersion $\Phi\colon\mathbb{R}^3\to\mathbb{R}^2$ such that $L=\Phi^{-1}(0)$. So, $0$ is in the image of $\Phi$ restricted to the closed unit ball, but is not in the image of $\Phi$ restricted to the unit sphere, giving the required counterexample.
-[1] Hector, G., Peralta-Salas, D.: Integrable embeddings and foliations, American Journal of Mathematics, 134, 773-825 (2012), doi: 10.1353/ajm.2012.0018, available at arXiv:1012.4312.<|endoftext|>
-TITLE: Proofs in Linear Algebra via Topology
-QUESTION [17 upvotes]: I'm watching the lecture series by Tadashi Tokieda on Topology and Geometry on YouTube.
In the second lecture he shows how one can prove, using a topological argument, that given two $n\times n$ matrices $P$ and $Q$, the matrix products $PQ$ and $QP$ share the same eigenvalues. Then he claims that one can also prove that $\det(e^L)=e^{\operatorname{tr} L}$ and the Cayley-Hamilton theorem topologically as well.
-This has me interested. Is there some book/reference work that discusses linear algebra in the context of topology? Ideally I'd like a book (at maybe the 1st-year grad level) on linear algebra that uses topological arguments to prove linear algebraic statements. Does anyone know of such a work?
-
-REPLY [15 votes]: These topological arguments involve the same basic idea: it's often easy to prove things for a subset of matrices which is dense in the space of all matrices. Any "continuous" fact (e.g. the assertion that two continuous functions are equal) can be proven for all matrices by proving it for this dense subset.
-For example, if $L$ is diagonalizable with eigenvalues $\lambda_1, \dots, \lambda_n$, then it's clear that $(L - \lambda_1) \dots (L - \lambda_n) = 0$, which is the Cayley-Hamilton theorem for $L$. But the Cayley-Hamilton theorem is a "continuous" fact: for an $n \times n$ matrix it asserts that $n^2$ polynomial functions of the $n^2$ entries of $L$ vanish. And the diagonalizable matrices are dense (over $\mathbb{C}$). Hence we get Cayley-Hamilton in general.
-Similarly, the claim that $PQ$ and $QP$ have the same characteristic polynomial (equivalently, the same eigenvalues, with the same algebraic multiplicities; this is a bit stronger than what you wrote) is clear if, say, $P$ is invertible. But this is a "continuous" fact: for $n \times n$ matrices it asserts that $n$ polynomial functions of the $2n^2$ entries of $P$ and $Q$ vanish. And the invertible matrices are dense. Hence we get the claim in general. See also this blog post for other proofs and generalizations.
-But I think $\det (e^L) = e^{\text{tr}(L)}$ is a bad example; the density reduction doesn't really buy you anything here. It's clear that $\text{tr}(L)$ is the sum of the eigenvalues of $L$ and that $\det (e^L)$ is the product of the exponentials of the eigenvalues of $L$ whether or not $L$ is diagonalizable, because we can upper-triangularize $L$ (e.g. bring it into Jordan normal form) instead of diagonalizing it. Note that this is not good enough to prove Cayley-Hamilton.<|endoftext|>
-TITLE: Why is there no parabolic (on a paraboloid) non-Euclidean geometry?
-QUESTION [5 upvotes]: I have seen in many contexts that Euclidean geometry is also called "parabolic geometry".
-As in many things in mathematics (conics, differential equations, algebraic equations), the terms elliptic, parabolic, and hyperbolic refer to the conics with their corresponding names.
-You could say that a plane is a deformed paraboloid (can you?), but why is it that it is not important to consider geometry over a paraboloid?
-I know Riemannian geometry considers geometry over general surfaces (manifolds) but there might be something uninteresting about paraboloids that mathematicians do not like. What is it?
-Thanks.
-
-REPLY [5 votes]: Hyperbolic geometry is not really geometry on a hyperboloid. It's geometry on an infinite surface of constant negative Gaussian curvature, something which cannot even be realized isometrically in ordinary 3D space (this is Hilbert's theorem). You can model it using a sheet of a hyperboloid, but the metric you get isn't the normal 3D metric you'd intuitively expect.
-Elliptic geometry is not the geometry on an ellipsoid either.
While spherical geometry is what you get as geometry on the sphere, elliptic geometry is what you get from that if you identify antipodal pairs of points. It's the geometry on a surface of constant positive Gaussian curvature.
-Just like the parabola is the singular limiting case between ellipse and hyperbola, the parabolic geometry is the limiting case between elliptic and hyperbolic geometry. And between constant positive and constant negative curvature, that limiting case is zero curvature.
-It might be that you could model parabolic geometry on a paraboloid using some strange metric, but why bother if you can have a flat plane using the normal Euclidean metric, perfectly intuitive?
-For a nice uniform way of looking at these different geometries, I suggest looking into Cayley-Klein metrics. Perspectives on Projective Geometry by Richter-Gebert has some nice chapters on this. Disclaimer: I'm working with that author, so I might be somewhat biased here.<|endoftext|>
-TITLE: Projection of $F$ onto any line nonparallel to coordinate axes has measure zero.
-QUESTION [7 upvotes]: Given a square $K=[a,b] \times [c,d]$, we define a union of four squares $K^*\subset K$ as follows: $I_i=[a_i, a_{i+1}]$, $J_i=[c_i, c_{i+1}]$, where $a_i=a+i\frac{b-a}{4}$, $c_i=c+i\frac{d-c}{4}$, for $i=0,1,2,3$ and we let $$K^*=(I_0\times J_1)\cup(I_1 \times J_0) \cup (I_2\times J_3)\cup(I_3 \times J_2).$$ We define a decreasing sequence of compact sets $F_0, F_1, \ldots$ inductively: $F_0=[0,1]^2$, $F_n$ is a finite union of squares $K$, each two having at most one point in common, and $F_{n+1}$ is obtained from $F_n$ by replacing each square $K$ by $K^*$. Let $F=\bigcap_{n=0}^\infty F_n$.
-
-Let $\mathcal H^1$ be the Hausdorff measure. Let $\ell$ be a line through $(0,0)$ different from the coordinate axes. Let $\pi:\mathbb R^2\to\ell$ be the orthogonal projection onto $\ell$. Show that $\pi(F)$ has $\mathcal H^1$-measure zero.
-I have trouble with this exercise. I tried to calculate $\mathcal H^1(\pi(F_n))$ but I failed. Any hints?
-
-EDIT: Theorem 3.32 in Falconer's "The geometry of fractal sets" states that the projection of $F$ onto almost every line has measure zero. This is weaker than the thesis of this exercise and the proof uses some non-trivial results. I am looking for an elementary argument.
-
-REPLY [2 votes]: A partial answer, giving the result for some countable set of lines.
-For any $n$ write $F_n = \bigcup_{i=1}^{4^n} K_i^n$ where $K_1^n, K_2^n, \ldots, K_{4^n}^n$ are disjoint squares with sides equal to $4^{-n}$. Consider the set $\mathcal L$ of lines $\ell$ with the property that for some $n, i, j$ with $i \ne j$ we have $\pi_\ell(K_i^n)=\pi_\ell(K_j^n)$.
-Fix $\ell \in \mathcal L$ and $n,i,j$ such that $i \neq j$ and $\pi_\ell(K_i^n)= \pi_\ell(K_j^n)$. Observe that $\pi_\ell(K_i^n\cap F) = \pi_\ell(K_j^n \cap F)$. Obviously, for any $k$ we have $\mathcal H^1(\pi_\ell(F \cap K_k^n))=4^{-n} \mathcal H^1(\pi_\ell(F))$. Thus $$\mathcal H^1(\pi_\ell(F)) = \mathcal H^1\left(\pi_\ell\left(\bigcup_{k=1}^{4^n} K_k^n\cap F\right)\right) = \mathcal H^1\left(\bigcup_{k=1}^{4^n} \pi_\ell(K_k^n\cap F)\right) \le \\ \le \mathcal H^1 (\pi_\ell(K_i^n\cap F)) + \sum_{k\notin\{i,j\}}\mathcal H^1 (\pi_\ell(K_k^n\cap F)) = (4^n-1)4^{-n}\mathcal H^1 (\pi_\ell(F)).$$ It follows that $\mathcal H^1 (\pi_\ell(F))=0$.
-~~~~~~~~~~
-I think this may lead to a full proof. I suspect that the set $\mathcal L$ is dense (meaning that the set of angles between $\ell \in \mathcal L$ and the $x$-axis is dense in $[0,\pi]$).
For any line $k \notin \mathcal L$ we should be able to choose $\ell \in \mathcal L$ close enough and $n \in \mathbb N$ big enough to ensure that the measure of $\pi_{k}(F_n)$ is very close to the measure of $\pi_{\ell}(F_n)$, which is close to zero. I did not have time to formalize this argument.<|endoftext|>
-TITLE: $M$-amenable ultrafilters on $\kappa$ are $\kappa$-powerset preserving
-QUESTION [6 upvotes]: Let $M$ be a transitive model of $\operatorname{ZFC}^-$ and let
-$$
-j \colon M \rightarrow N
-$$
-be elementary with $\operatorname{crit}(j) = \kappa \in \operatorname{wfp}(N)$. Let $U_j$ be defined by
-$$
-X \in U_j :\Leftrightarrow X \in \mathcal P(\kappa)^M \text{ and } \kappa \in j(X).
-$$
-We say that $j$ is $\kappa$-pp ($\kappa$-powerset preserving) iff $\mathcal P(\kappa)^M = \mathcal P(\kappa)^N$. And we say that $U_j$ is $M$-amenable iff for all $f \in {}^\kappa \mathcal P(\kappa) \cap M \colon \{ \alpha < \kappa \mid f(\alpha) \in U_j \} \in M$.
-I'd like to see that $U_j$ being $M$-amenable implies that $j$ is $\kappa$-pp.
-
-My thoughts:
-If $N = \operatorname{ult}(M; U_j)$, this is easy: Let $X \in \mathcal P(\kappa)^N$ and fix $h \in {}^\kappa \kappa \cap M$ with $X = [h]$. Then for all $\alpha < \kappa \colon$
-$$\begin{align}
-\alpha \in X & \Leftrightarrow [c_\alpha] \widetilde \in [h] \\
-& \Leftrightarrow \{ \xi < \kappa \mid \alpha \in h(\xi) \} \in U_j
-\end{align}$$
-Thus defining $f \colon \kappa \rightarrow \mathcal P(\kappa), \alpha \mapsto \{\xi < \kappa \mid \alpha \in h(\xi)\}$ yields $X = \{ \alpha < \kappa \mid f(\alpha) \in U_j \}$. As $f \in M$ (provided $\kappa \times \mathcal P(\kappa)^M \in M$), the $M$-amenability of $U_j$ yields $X \in M$.
-So far, I don't see how to adapt the above to the situation where $N \neq \operatorname{ult}(M; U_j)$.
-
-REPLY [6 votes]: I think that your claim is false. I will try to construct a counterexample. Suppose that $\kappa$ is measurable and $U$ is a normal measure on $\kappa$. Let $j:V\to M$ be the ultrapower by $U$. First, we build $M\prec H_{\kappa^+}$ such that $U'=U\cap M$ is $M$-amenable. Let $M_0$ be any transitive elementary substructure of $H_{\kappa^+}$ of size $\kappa$. Let $M_1$ be a transitive elementary substructure of $H_{\kappa^+}$ which contains $M_0$ and $U\cap M_0$ as elements. Continuing in this way, construct a sequence $\langle M_n\mid n<\omega\rangle$ and let $M=\bigcup_{n<\omega}M_n$. It is easy to see that $U'=U\cap M$ is $M$-amenable. Consider the restriction $j:M\to j(M)$. Note that by elementarity $j(M)\prec H_{j(\kappa)^+}^{M}$. Find some $j(M)\prec N\prec H_{j(\kappa)^+}^{M}$ such that $N$ has a subset of $\kappa$ not in $M$. Note that $j:M\to N$ is elementary and clearly the ultrafilter for $M$ generated from $j$ is precisely $U'$, which is $M$-amenable by construction.<|endoftext|>
-TITLE: Complex eigenvalues of a rotation matrix
-QUESTION [6 upvotes]: I am struggling with understanding the meaning of complex eigenvalues of a rotation matrix.
-$1$ is always an eigenvalue - that is clear, since all the vectors on the axis of rotation are not affected by the rotation. But what about complex eigenvalues and the corresponding eigenvectors? A rotation around a "complex axis"?
-
-REPLY [9 votes]: In a sense, those complex eigenvalues are the rotation. One way to think of a real eigenvalue is the amount by which a matrix stretches or shrinks things along a certain axis—the associated eigenvector.
With a pair of complex eigenvalues (they always come in conjugate pairs for a real matrix), there’s no axis along which things are stretched, i.e., no real eigenvector. Instead, there’s a plane in which vectors get rotated along with the stretching/shrinking that might be going on. If the vector being transformed doesn’t lie in that plane, then its component in the plane undergoes the rotation+scaling.
-In two dimensions, there’s only one plane, so a matrix with complex eigenvalues represents a rotation+scaling of the entire space. In three dimensions, there’s only one dimension left after you’ve defined the plane of rotation, so that’s going to be the axis of rotation that corresponds to the eigenvalue of $1$. In higher dimensions, things get wacky. In four dimensions, for instance, you can have simultaneous rotations in two different planes.
-Addendum: The complex eigenvectors associated with the complex eigenvalue pair give you the plane in which the rotation occurs. If you take the real and imaginary parts of any of these eigenvectors, you get a pair of real vectors that span this plane. The action of the matrix in this plane is encoded in the eigenvalues: the argument of the complex number gives the rotation and its norm gives the dilation. So, just as with real eigenvalues and eigenvectors, they describe a subspace of the domain and the action that the matrix has on that subspace.
-Once you go beyond three dimensions, complex eigenvalues can appear with multiplicities greater than one, so the associated generalized eigenspaces can have more than two dimensions. It's hard to say much about what goes on in them beyond the fact that they're invariant with respect to the transformation, i.e., vectors in that subspace get mapped to vectors in the same subspace.<|endoftext|>
-TITLE: Galois group of a polynomial of degree seven
-QUESTION [5 upvotes]: Let $K$ be the splitting field over $\mathbb{Q}$ w.r.t. the polynomial
-$x^7 - 10x^5+15x+5$.
-I think its Galois group is the symmetric group $S_7$. I tried to prove it using a theorem which says:
-"If the degree of the polynomial is a prime $p$, the polynomial is irreducible and it has exactly two non-real roots, then its Galois group is $S_p$."
-In this case, I know that
-$x^7 - 10x^5+15x+5$
-is irreducible (by the Eisenstein criterion). However, I could not study its roots... I tried to study its derivative... My methods were effective in the remaining questions... Does someone know how to solve this problem?
-Thank you.
-
-REPLY [3 votes]: Myself's answer is very good, but I want to show how you can check the maximum number of roots with Descartes's rule of signs:
-$$f(x)=x^7-10x^5+15x+5$$ has two sign changes, so at most $2$ positive roots.
-$$-f(-x)=x^7-10x^5+15x-5$$ has three sign changes, so at most $3$ positive roots, so $f(x)$ has at most $3$ negative roots.
-With Myself's answer you can deduce that there are $5$ real roots.<|endoftext|>
-TITLE: what is a valid mathematical proof?
-QUESTION [8 upvotes]: From what I have seen in my experience with math we can say that
-
-a valid proof is one that uses some form of logic (usually predicate logic) and uses logical rules of deduction and axioms or theorems in its specific field to derive some new sentences that will eventually lead to the proposition we want to prove.
-
-But we know that most of the proofs given in most fields (if not all!) are actually in informal language.
-If we accept the definition above then none of these proofs are valid proofs.
-How can we extend or change the idea of a valid proof to get a better definition of a valid mathematical proof?
-
-REPLY [2 votes]: First of all, the distinction is not as important as one would think. When you use an informal formulation it can be considered a shorthand description of how you would go about creating the formal proof - just like a recipe for gingerbread is not gingerbread, but it allows anyone to produce gingerbread if they're interested in doing so. Note that sharing a proof is a matter of transferring knowledge of the proof to a receiver, and if the receiver is convinced that he can produce a formal proof, actually produces a formal proof, or is convinced that the proof is good enough - then it is good enough.
-Second, the informal language might look just like an informal language, but there's nothing that prevents a formal system from looking like some resemblance of a natural language. You could "simply" create a bijective map between a subset of the natural language and that of the formal system. Then what looks like an informal proof could be an image of a formal proof or even a formal proof itself.<|endoftext|>
-TITLE: Top cohomology group of a "punctured" manifold is zero?
-QUESTION [6 upvotes]: The following question is from my algebraic topology exam which I was unable to solve.
-
-Let $X$ be an orientable connected closed $n$-manifold. Let $ p \in X$. Show that $H^n (X-\{p\},R)=0$, where $R$ is some ring.
-
-I observed that since $X$ is orientable, $X-p$ is also orientable, hence $H_n(X-p,R) \cong R$. I think that Poincaré duality may help us to prove this result but I am unable to prove it. Any ideas?
-
-REPLY [8 votes]: There is the long exact sequence in homology of the pair $(X, X \setminus p)$ (everything is with $\mathbb{Z}$-coefficients unless indicated otherwise):
-$$0 \to H_n(X \setminus p) \to H_n(X) \to H_n(X, X \setminus p) \to H_{n-1}(X \setminus p) \to H_{n-1}(X) \to 0$$
-(recall that the local homology groups satisfy, by excision, $H_n(X, X \setminus p) = \mathbb{Z}$ and $H_k(X, X \setminus p) = 0$ for $k \neq n$).
-The fundamental class $[X] \in H_n(X)$ is sent to a generator of the local homology group $H_n(X, X \setminus p) = \mathbb{Z}$ (either by definition, or cf. Hatcher, Theorem 3.26 for example). Hence $H_n(X) \to H_n(X, X \setminus p)$ is an isomorphism and thus $H_n(X \setminus p) = 0$ and $H_{n-1}(X \setminus p) \cong H_{n-1}(X)$.
-The universal coefficient theorem reads
-$$H^n(X \setminus p; R) \cong \operatorname{Hom}_{\mathbb{Z}}(H_n(X \setminus p), R) \oplus \operatorname{Ext}^1_{\mathbb{Z}}(H_{n-1}(X \setminus p), R).$$
-We already know the first summand vanishes. By Poincaré duality and universal coefficients, $H_{n-1}(X) \cong H^1(X) \cong \hom(H_1(X), \mathbb{Z})$. Thus $H_{n-1}(X)$ is torsion-free, and it is also finitely generated because $X$ is compact, hence it is free as an abelian group (as $\mathbb{Z}$ is a PID). So the second summand vanishes too ($H_{n-1}(X \setminus p) \cong H_{n-1}(X)$ is free) and finally $H^n(X \setminus p) = 0$.<|endoftext|>
-TITLE: Distribution of weighted sum of Bernoulli RVs
-QUESTION [9 upvotes]: Let $x_1,...,x_m$ be drawn from independent Bernoulli distributions with parameters $p_1,...,p_m$.
-I'm interested in the distribution of $t=\sum_i a_ix_i,~a_i\in \mathbb{R}$.
-$m$ is not large, so I cannot use central limit theorems.
-I have the following questions:
-1- What is the distribution of $s=\sum_i x_i$?
-2- What is the distribution of $t=\sum_i a_ix_i$ or $t=\sum_i a_ix_i-\sum_i a_i$ (to ensure non-negative support) for known $a_i$'s? Can I approximate its distribution with a Gamma distribution? If yes, what would be the parameters (as a function of the $p_i$'s and $a_i$'s)?
-3- Is there a truncated Gamma distribution (or any other distribution (except normal)) that approximately fits my problem?
-However, while $m$ is not very large, it is still large enough that I cannot calculate the distribution by convolution.
-Thanks a bunch!
-
-REPLY [9 votes]: I did some searching and this is what I have found:
-
-Bounds for tail probabilities of a weighted sum of Bernoulli trials
-This seminal paper of Raghavan in 1988: "Probabilistic Construction of Deterministic Algorithms: Approximating packing integer programs". In section 1.1 he derives some bounds, which seem to be important as part of a stochastic optimization technique called randomized rounding.
-Learning the distribution of a weighted sum of Bernoulli trials
-In this paper published this year by Daskalakis et al.: "Learning Poisson Binomial Distributions". In Theorem 2 they state that it's possible to construct an algorithm that learns the desired distribution in polynomial time.
-Code to compute the distribution of a weighted sum of Bernoulli trials
-This post on researchgate.net by Dayan Adoniel asks the same question, and it seems that Dayan developed code to compute the desired distribution and that he is willing to share it.
-
-Unfortunately I do not have the time now to go over the details of the first two references but hopefully you can find them useful.
-
-Edit 1: Bounds in the tails of the distribution of a weighted sum of independent Bernoulli trials [Raghavan, 1988]
-Let $a_1, a_2, \ldots, a_r$ be reals in $(0, 1]$. Let $X_1, X_2, \ldots, X_r$ be independent Bernoulli trials with $E[X_j] = p_j$. Define $\Psi = \sum_{j=1}^r a_jX_j$ with $E[\Psi] = m$.
-Theorem 1 (Deviation of $\Psi$ above its mean). Let $\delta > 0$ and $m = E[\Psi] > 0$. Then
-$$\mathbf{P}(\Psi > m(1+\delta)) < \left[\frac{e^\delta}{(1+\delta)^{(1+\delta)}}\right]^m.$$
-Theorem 2 (Deviation of $\Psi$ below its mean). For $\gamma \in (0,1]$,
-$$\mathbf{P}(\Psi-m < -\gamma m) < \left[\frac{e^\gamma}{(1+\gamma)^{(1+\gamma)}}\right]^m.$$<|endoftext|>
-TITLE: Higher math and statistics/probability
-QUESTION [13 upvotes]: So I've heard that certain areas of statistics and probability use manifolds and results from analysis and topology.
-Given that I lack the background to see where manifolds would become useful in these fields, I was wondering if someone could provide me with an example illustrating their application.
-Thanks in advance
-
-REPLY [5 votes]: The most "postmodern" way that mathematics enters statistics/probability is probably via the category of Markov morphisms and chains, as mentioned in
-Cencov, Nikolai Nikolaevich. Statistical decision rules and optimal inference. No. 53. American Mathematical Soc., 2000.
-As @Michael Hardy said, there is a functor between the category of Markov morphisms and the category of stochastic differentials. In fact, the simplest i.i.d. sample in statistics/probability is a Markov chain with constant transition mapping. Also you can derive a corresponding SDE to describe an i.i.d. sample. In statistics, researchers tend to describe i.i.d. samples as outcomes of experiments. In this sense, statistics/probability is a variant of mathematics.
The work on analysis of paths of a stochastic process mentioned by @user190080 was founded as early as the 1990s by Stroock, using some early geometric analysis techniques (arXiv). But later his interest shifted.
-A less elegant approach was developed by Amari et al.: they introduced a geometric connection, called the Amari-$\alpha$ connection, over the collection of statistical models (i.e. probability measures) to make it into a manifold in the sense of classical Riemannian geometry. A related discussion is on MathOverflow.
-This approach seems very elegant at first glance, but later on people figured out that this "information geometry" is difficult to use: it is easy to use geometry to describe statistical models, yet it is hard to transplant results from geometry to statistics/probability.
-Mumford and Michor later applied differential geometry (mainly the study of the manifold of differentiable mappings) and a Hamiltonian approach to a branch of statistical shape analysis. Their work is fruitful, yet still not mainstream.
-A quite interesting introduction of topological techniques (basically the weak topology of Polish spaces) into statistics is the rise of robust statistics around the 1980s-1990s, when Huber and his students spent a whole lot of time at Princeton promoting their robust scheme (the Princeton robustness study). Now this branch is no longer as active, but new work still accumulates.
-Today, in the study of "data analysis", persistent homology has entered as a computationally convenient tool to capture features of the data; it is very popular in multiresolution statistical models. (Wikipedia)
-There is still a long way to go before establishing a complete functorial correspondence between statistics and postmodern mathematics, and I do not think any theoretical step is easy at the current moment given the overly heated trend of "statistical learning" in the community.<|endoftext|>
-TITLE: How many throws of a die until every possible result is obtained?
-QUESTION [6 upvotes]: A die is thrown until every possible result (i.e., every integer from 1 to 6) is obtained. Find the expected value of the number of throws.
-
-How do I do that? I understand that the probabilities of obtaining each successive new result are $1, 5/6, \ldots , 1/6$, but what about the expected value?
-
-REPLY [4 votes]: While waiting for face $k$, the probability of rolling faces already rolled is $\frac{k-1}n$. Therefore, the expected number of rolls to get face $k$, after rolling face $k-1$, is
-$$
-\begin{align}
-\sum_{j=1}^\infty\overbrace{\left(\frac{k-1}n\right)^{j-1}}^{\text{roll $j-1$ already rolled}}\ \ \overbrace{\frac{n-k+1}n\vphantom{\left(\frac kn\right)^1}}^{\text{roll $1$ not rolled}}\,j
-&=\frac{n-k+1}n\frac1{\left(1-\frac{k-1}n\right)^2}\\
-&=\frac{n}{n-k+1}
-\end{align}
-$$
-Thus, the expected number of rolls to get all the faces is
-$$
-\begin{align}
-\sum_{k=1}^n\frac{n}{n-k+1}
-&=n\sum_{k=1}^n\frac1k\\
-&\sim n\log(n)+\gamma n+\frac12-\frac1{12n}+O\left(\frac1{n^3}\right)
-\end{align}
-$$
-where $\gamma$ is the Euler-Mascheroni constant. The asymptotic expansion is obtained using the Euler-Maclaurin Sum Formula.
-For a $6$-sided die, the expected number of rolls is exactly $14.7$.
-Using the terms given in the asymptotic expansion for $n=6$ gives $14.69996$.
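-As a quick check of the exact value $n\sum_{k=1}^n\frac1k$ (which equals $14.7$ for $n=6$), here is a minimal Python simulation sketch (the names are mine, not from the answer above):
-
-    # Estimate the expected number of die rolls needed to see every face.
-    import random
-
-    def rolls_until_all_faces(n=6):
-        seen, count = set(), 0
-        while len(seen) < n:
-            seen.add(random.randint(1, n))
-            count += 1
-        return count
-
-    trials = 100_000
-    avg = sum(rolls_until_all_faces() for _ in range(trials)) / trials
-    exact = 6 * sum(1 / k for k in range(1, 7))  # n * H_n = 14.7
-    print(avg, exact)  # the empirical average should be close to 14.7
-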
The approximation gets better for larger $n$.<|endoftext|> -TITLE: Every element outside the maximal ideal of a local ring is a unit -QUESTION [6 upvotes]: A homework question from my algebra class asks: - -Show that in a local ring $R$ with maximal ideal $M$, every element outside $M$ is a unit. - -My argument is that since $M$ is maximal $R /M $ is a field and so for any $ x \in R \backslash M $, $ x + M $ has a multiplicative inverse, which implies $ x $ is a unit. -I don't see where we need the fact that $R$ is a local ring. - -REPLY [11 votes]: I hope you know the following theorem: - -Let $R$ be a commutative ring with $1$, $a \in R$ a non-unit. Then there exists a maximal ideal $M$ of $R$ such that $a \in M$. - -This is a standard consequence of Zorn's lemma. In particular this implies that the set of units of $R$ coincides with the complement of the union of maximal ideals of $R$. - -REPLY [3 votes]: You need to use the fact that every non-unit is contained in a maximal ideal. To prove it is an easy application of Zorn's lemma, but is probably a theorem in your book. Let $x$ be an element outside of $M$. If it is not a unit, it is contained in a maximal ideal. Since $R$ is local, there is only one maximal ideal, $M$. This is a contradiction, so $x$ must be a unit.<|endoftext|> -TITLE: Higher Ext's vanish over a PID -QUESTION [6 upvotes]: Let $R$ be a PID and $M$, $N$ be $R$-modules. I am trying to show that - $$\forall n\ge 2~: \operatorname{Ext}_{R}^{n}(M,N)=0.$$ - -For example $\forall n\ge 2~: \operatorname{Ext}_{\mathbb Z}^{n}(M,N)=0$. -Here $\operatorname{Ext}^*_R$ is the derived functor of the functor of homomorphisms of $R$-modules $\operatorname{Hom}_R$. It's defined using projective/injective resolutions, and long exact sequences. On the other hand the definition of a PID involves ideals of the ring, and it's not immediately clear how to relate this condition to the definition of $\operatorname{Ext}$. - -REPLY [5 votes]: Let $P_0 = \bigoplus_{m \in M} R_m$ be a direct sum of copies of $R$, one for each element of $M$ (the index is just here for bookkeeping reasons). This is a free $R$-module. This maps to $M$ through $\varepsilon : P_0 \to M$ by defining $\varepsilon_m : R_m \to M$, $x \mapsto x \cdot m$ and extending to the direct sum (coproduct). -The kernel $P_1 = \ker(P_0 \to M)$ is a submodule of the free module $P_0$, hence it is free as $R$ is a PID. Thus you get a free resolution (exact sequence): -$$0 \to P_1 \to P_0 \to M \to 0.$$ -Then $\operatorname{Ext}^n(M,N)$ is the homology of the chain complex you get from the exact sequence above after applying $\operatorname{Hom}_R(-,N)$, but this chain complex is zero in degrees $n \ge 2$, hence $\operatorname{Ext}^n(M,N) = 0$ for $n \ge 2$. -If you don't know that and only know the long exact sequence for $\operatorname{Ext}$, use the fact that $\operatorname{Ext}^n_R(P,N) = 0$ for $n \ge 1$ and $P$ projective (pretty much by definition), and the fact that a free module is projective. The long exact sequence derived from the short one above becomes (where $n \ge 2$): -$$\dots \to \underbrace{\operatorname{Ext}^{n-1}_R(P_1,N)}_{= 0} \to \operatorname{Ext}^n_R(M,N) \to \underbrace{\operatorname{Ext}^n_R(P_0,N)}_{=0} \to \dots$$ -and so $\operatorname{Ext}^n_R(M,N) = 0$.<|endoftext|> -TITLE: Classification of groups of order 12 -QUESTION [5 upvotes]: I have a lot of difficulty understanding this proof we went over in class about classification of groups of order 12. 
-Let $G$ be a finite group of order $n=p^rm$ where $m \nshortmid p$.
-Denote $Syl_p(G)$ as the set of all Sylow $p$-subgroups, and $n_p(G)$ as the cardinality of $Syl_p(G)$.
-(1) $G$ has subgroups of order $p^r$. (How do we know this??)
-(2) We know all Sylow $p$-subgroups are conjugate and their number $n_p(G) \mid m$, by the 2nd and 3rd Sylow Theorems.
-(3) We have that $n_p(G) \equiv 1(p).$
-Then $n=12=2^2*3^1$ so $n_2(G)|3$ gives $n_2(G) \in \{1,3\}$ and $n_3(G)|4$ so $n_3(G) \in \{1,4\}$ since $2 \not\equiv 1(3)$.
-Case 1: $n_3(G) = 4.$
-Then the action of $G$ by conjugation on $Syl_3(G)$ gives a homomorphism $f: G \rightarrow S_{Syl_3(G)} \simeq S_4$.
-$Ker(f)$ consists of $g \in G$ that normalizes all 3-Sylow subgroups. (Why?). Let $P_3$ be a 3-Sylow subgroup. Then the order of $P_3$ is 3. (I don't get that either), and $[G:N_G(P_3)] = 4.$ Thus $P_3 \subset N_G(P_3)$ gives $P_3 = N_G(P_3)$. (I'm lost here...)
-So $Ker(f) = \cap Syl_3(G) = \{e\}$ (the intersection of the Sylow 3-subgroups is trivial). (How did we arrive here...)
-Thus we conclude that $G$ is isomorphic to a subgroup of $S_4$ of order 12.
-To show $G$ is isomorphic to $A_4$, we have that $G$ has 8 elements of order 3: each of the four Sylow 3-subgroups has $3-1=2$ elements of order 3. And $S_4$ has 8 3-cycles, meaning $f(G)$ contains all 3-cycles and hence the group they generate, which is $A_4$, so $A_4 \subset f(G)$. But since $|A_4| = 12 = |f(G)|$ then $f(G) = A_4$ and $G \simeq A_4$.
-(Not understanding the first half of the proof, I do not get this part either).
-Case 2: $n_3(G) = 1.$ The scope of the question is too large, I will need to post separately to understand the second half, unless the organizational rules of math.stackexchange would require that I post here. In that case, I'll edit the question.
-I basically cannot follow the proof because it seems to skip too many steps; it would be helpful if anyone could elaborate case 1 with more reasoning so that I can follow it. Any help would be appreciated; I've spent hours trying to look up and decipher the proof but without any success.
-
-REPLY [3 votes]: I have a lot of difficulty understanding this proof we went over in class about classification of groups of order 12.
-Let $G$ be a finite group of order $n=p^rm$ where $m \nshortmid p$.
-
-It seems that there is a mistake here: one should read $p \nshortmid m$ or $\gcd(p,m)=1$.
-
-Denote $Syl_p(G)$ as the set of all Sylow $p$-subgroups, and $n_p(G)$ as the cardinality of $Syl_p(G)$.
-(1) $G$ has subgroups of order $p^r$. (How do we know this??)
-
-A priori, it has no reason to be true but this is a theorem known as $\textit{the first Sylow theorem}$. It states that for any finite group $G$ and $p$ a prime number dividing $|G|$, if we write $|G|=p^rm$ with $p \nshortmid m$ then $G$ admits a $p$-Sylow (which is defined as a subgroup of $G$ whose cardinal is $p^r$).
-
-(2) We know all Sylow $p$-subgroups are conjugate and their number $n_p(G) \mid m$, by the 2nd and 3rd Sylow Theorems.
-(3) We have that $n_p(G) \equiv 1(p).$
-Then $n=12=2^2*3^1$ so $n_2(G)|3$ gives $n_2(G) \in \{1,3\}$ and $n_3(G)|4$ so $n_3(G) \in \{1,4\}$ since $2 \not\equiv 1(3)$.
-Case 1: $n_3(G) = 4.$
-Then the action of $G$ by conjugation on $Syl_3(G)$ gives a homomorphism $f: G \rightarrow S_{Syl_3(G)} \simeq S_4$.
-$Ker(f)$ consists of $g \in G$ that normalizes all 3-Sylow subgroups. (Why?).
-
-How is $f$ defined?
It is given by the conjugation action $f(g):=S\mapsto gSg^{-1}$, so if $f(g)$ is trivial (the identity function here), it follows that $f(g)$ sends any $S$ to $S$; since $f(g)$ also sends any $S$ to $gSg^{-1}$, this boils down to: for all $S$ we have $S=gSg^{-1}$.
-
-Let $P_3$ be a 3-Sylow subgroup. Then the order of $P_3$ is 3. (I don't get that either)
-
-This is the very definition of a $3$-Sylow in $G$, since $|G|=12=4\times 3$.
-
-, and $[G:N_G(P_3)] = 4.$ Thus $P_3 \subset N_G(P_3)$ gives $P_3 = N_G(P_3)$. (I'm lost here...)
-
-Since $G$ acts transitively by conjugation (this is your point 2: $p$-Sylows are all conjugate to each other) we have by the class formula (orbit-stabilizer):
-$$\frac{|G|}{n_3(G)}=|N_G(P_3)|\text{ whence } |N_G(P_3)|=3$$
-By definition $H\subseteq N_G(H)$, so $P_3\subseteq N_G(P_3)$; since both have cardinality $3$ they are equal.
-
-So $Ker(f) = \cap Syl_3(G) = \{e\}$ (the intersection of the Sylow 3-subgroups is trivial). (How did we arrive here...)
-
-We already saw that $Ker(f)$ is the intersection of the normalizers of the $3$-Sylows. Now we just saw that the normalizer of a $3$-Sylow is exactly the $3$-Sylow itself. Now I claim that if $S$ and $T$ are two different $3$-Sylows they cannot but intersect trivially. Indeed by Lagrange $S\cap T$ has cardinality dividing $|S|=3$. So the cardinality is either $3$ or $1$. If $|S\cap T|=3$ then $S\cap T\subseteq S$ and $|S|=3$ imply that $S\cap T=S$, and then that $S\subseteq T$, which implies (comparing cardinalities) that $S=T$, which is impossible. It follows that $|S\cap T|=1$, i.e. $S\cap T$ is trivial.
-Since $Ker(f)$ is the intersection of $4$ different $3$-Sylows it is necessarily trivial.
-
-Thus we conclude that $G$ is isomorphic to a subgroup of $S_4$ of order 12.
-To show $G$ is isomorphic to $A_4$, we have that $G$ has 8 elements of order 3: each of the four Sylow 3-subgroups has $3-1=2$ elements of order 3. And $S_4$ has 8 3-cycles, meaning $f(G)$ contains all 3-cycles and hence the group they generate, which is $A_4$, so $A_4 \subset f(G)$. But since $|A_4| = 12 = |f(G)|$ then $f(G) = A_4$ and $G \simeq A_4$.
-(Not understanding the first half of the proof, I do not get this part either).
-
-Since $Ker(f)$ has been proven to be trivial it follows that $G$ is isomorphic to $f(G)$, which is a subgroup of $S_4$. Now you would like to identify the subgroup $f(G)$; you know that it contains $4$ $3$-Sylows. A $3$-Sylow is of order $3$: it contains two elements of order $3$ and one of order $1$. Since the intersection of two different $3$-Sylows is trivial and since you have $4$ $3$-Sylows, you finally get exactly $8$ elements of order $3$ in $f(G)$. Since they are elements of order $3$ in $S_4$ they cannot but be $3$-cycles, and since there are $8$ of them in $S_4$ they are all contained in $f(G)$. The fact that they generate $A_4$ is classical, so that $A_4\leq f(G)$ and by a cardinality argument they are equal.
-Actually this part can be simplified: one way to characterize $A_4$ is as the unique subgroup of index $2$ in $S_4$. Since $S_4$ is of order $24$ and $f(G)$ of order $12$ it follows that $f(G)$ is a subgroup of index $2$ in $S_4$, and then $A_4=f(G)$ by uniqueness.
-I basically cannot follow the proof because it seems to skip too many steps; it would be helpful if anyone could elaborate case 1 with more reasoning so that I can follow it. Any help would be appreciated; I've spent hours trying to look up and decipher the proof but without any success.<|endoftext|>
-TITLE: Finding an explicit isomorphism from $\mathbb{Z}^{4}/H$ to $\mathbb{Z} \oplus \mathbb{Z}/18\mathbb{Z}$
-QUESTION [8 upvotes]: There is a past qualifying exam problem I was having trouble with; it is stated as follows:
-
-In the group $G= \mathbb{Z} \times \mathbb{Z}\times \mathbb{Z}\times \mathbb{Z}=\mathbb{Z}^{4}$, let $H$ be the subgroup generated by $[0,0,3,1], [0,6,0,0], [0,1,0,1]$. Find an explicit isomorphism between $G/H$ and a product of cyclic groups.
-
-I truly do not fully understand how to define such a map on $G/H$. I have given some thought to this.
-We have that $H$ consists of integer combinations of the generators above, that is, if $\xi \in H$, then $\xi=a[0,0,3,1]+b[0,6,0,0]+c[0,1,0,1]$. In particular, we have that $\xi$ is in the image of a module homomorphism $\mathbb{Z}^{3} \rightarrow \mathbb{Z}^{4}$ whose matrix with respect to the standard bases $(e_{1},e_{2}, e_{3})$ and $(f_{1}, f_{2}, f_{3}, f_{4})$ for $\mathbb{Z}^{3}$ and $\mathbb{Z}^{4}$ respectively, will be
-$$A=\begin{bmatrix}
-0&0&0\\
-0&6&1\\
-3&0&0\\
-1&0&1\\
-\end{bmatrix}.$$
-We can perform row operations by multiplying on the left by the matrix $P$ and column operations by multiplying on the right by the matrix $Q$
-$$P=
-\begin{bmatrix}
-0&0&0&1\\
-0&1&0&0\\
-0&3&1&-3\\
-1&0&0&0\\
-\end{bmatrix},$$
-$$Q=\begin{bmatrix}
-1&6&-1\\
-0&1&0\\
-0&-6&1\\
-\end{bmatrix}$$
-to obtain the matrix
-$$PAQ=\begin{bmatrix}
-1&0&0\\
-0&0&1\\
-0&18&0\\
-0&0&0\\
-\end{bmatrix}.$$
-I believe this tells us if $\xi \in H$, then $\xi=[a,6b,18c,0]$ for $[a,b,c] \in \mathbb{Z}^{3}$. From here, I think we can conclude that
-$$\mathbb{Z}^{4}/H \cong \mathbb{Z} \times \mathbb{Z}/18\mathbb{Z}.$$
-I do not see how to explicitly write a function between these two objects.
-Should I think of $\mathbb{Z}^{4}/H$ in terms of cosets?
-The matrices $P$ and $Q$ above tell us exactly the change of basis in the domain and codomain; I was wondering if I could use that.
-Thanks
-
-REPLY [7 votes]: This problem hinges on interpreting the Smith normal form of a matrix.
-As you have said, we need to study the homomorphism $\varphi: \mathbb{Z}^3 \to \mathbb{Z}^4$ mapping the standard basis vectors to generators of the subgroup $H$. With respect to the standard bases $\mathcal{E}$ and $\mathcal{F}$ for $\mathbb{Z}^3$ and $\mathbb{Z}^4$, $\varphi$ has matrix $A$. We will find new bases $\mathcal{B}$ and $\mathcal{C}$ with respect to which $\varphi$ is represented by a (much simpler) diagonal matrix.
-Computing the Smith normal form by row and column operations, I find that
-$$
-PAQ =
-\begin{pmatrix}
-1 & 0 & 0\\
-0 & 1 & 0\\
-0 & 0 & 18\\
-0 & 0 & 0
-\end{pmatrix}
-$$
-where
-$$
-P =
-\begin{pmatrix}
-0 & 0 & 0 & 1\\
-0 & 1 & 0 & 0\\
-0 & -15 & 1 & -3\\
-1 & 0 & 0 & 0
-\end{pmatrix}
-\qquad
-\text{and}
-\qquad
-Q =
-\begin{pmatrix}
-1 & 5 & 6\\
-0 & 1 & 1\\
-0 & -5 & -6
-\end{pmatrix} \, .
-$$
-Note that $P$ and $Q$ both have determinant $-1$, hence are invertible over $\mathbb{Z}$. Interpreting $P$ and $Q$ as change of basis matrices, then
-$$
-PAQ = {_\mathcal{C} [\text{id}]_\mathcal{F}} \, {_\mathcal{F}[\varphi]_\mathcal{E}} \, {_\mathcal{E}[\text{id}]_\mathcal{B}}
-$$
-for some bases $\mathcal{B}$ and $\mathcal{C}$ as above.
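-Before reading off the bases, note that the displayed identity and the determinants can be machine-checked; a minimal Python sketch using sympy, with the matrices copied from this answer:
-
-    # Verify P*A*Q and the determinants from the answer, using sympy.
-    from sympy import Matrix
-
-    A = Matrix([[0, 0, 0], [0, 6, 1], [3, 0, 0], [1, 0, 1]])
-    P = Matrix([[0, 0, 0, 1], [0, 1, 0, 0], [0, -15, 1, -3], [1, 0, 0, 0]])
-    Q = Matrix([[1, 5, 6], [0, 1, 1], [0, -5, -6]])
-
-    print(P * A * Q)         # rows (1,0,0), (0,1,0), (0,0,18), (0,0,0)
-    print(P.det(), Q.det())  # -1 and -1, so P and Q are invertible over Z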
-Then $\mathcal{B} = \left\{b_1 = e_1, b_2 = 5e_1 + e_2 - 5e_3, b_3 = 6e_1 + e_2 - 6e_3 \right\}$, and since
-$$
-{_\mathcal{F} [\text{id}]_\mathcal{C}} = P^{-1}=
-\begin{pmatrix}
-0 & 0 & 0 & 1\\
-0 & 1 & 0 & 0\\
-3 & 15 & 1 & 0\\
-1 & 0 & 0 & 0
-\end{pmatrix}
-$$
-we find $\mathcal{C} = \{c_1 = 3f_3 + f_4, c_2 = f_2 + 15f_3, c_3 = f_3, c_4 = f_1\}$. By the form of $PAQ$, then $\varphi(b_1) = c_1$, $\varphi(b_2) = c_2$ and $\varphi(b_3) = 18c_3$ (and $c_4$ is not hit), so $H = \text{img}(\varphi) = \mathbb{Z}c_1 \oplus \mathbb{Z} c_2 \oplus \mathbb{Z}18 c_3$. Thus
-\begin{align*}
-\frac{\mathbb{Z}^4}{H} = \frac{\mathbb{Z} c_1 \oplus \mathbb{Z} c_2 \oplus \mathbb{Z} c_3 \oplus \mathbb{Z} c_4}{\mathbb{Z}c_1 \oplus \mathbb{Z} c_2 \oplus \mathbb{Z}18 c_3} \cong \frac{\mathbb{Z}}{18\mathbb{Z}} \oplus \mathbb{Z} \, .
-\end{align*}
-More explicitly, the isomorphism is induced by the map
-$$
-\alpha_1 c_1 + \alpha_2 c_2 + \alpha_3 c_3 + \alpha_4 c_4 \mapsto (\overline{\alpha_3},\alpha_4)
-$$
-where the bar indicates the residue mod $18$.<|endoftext|>
-TITLE: Prove that $\frac{f(x)}{x}$ is uniformly continuous in $[1, +∞)$ if $f$ is Lipschitz
-QUESTION [6 upvotes]: Let $f(x)$ be a Lipschitz function on $[1, +∞)$, i.e. there exists a
- positive constant $C$ such that $$|f(x) − f(y)| ≤ C|x − y|, ∀x, y ∈ [1, +∞).$$
-Prove that $\frac{f(x)}{x}$ is uniformly continuous in $[1,+\infty)$.
-
-I know that a Lipschitz function is uniformly continuous. What I did so far is:
-let $g(x) = \frac{f(x)}{x}$. Then I assumed $g(x)$ is Lipschitz. (Is the assumption wrong?)
-Then $|g(x)-g(y)| \le K|x-y|$ satisfies the Lipschitz condition.
-Therefore $|\frac{yf(x)-xf(y)}{xy}| \le K|x-y|$.
-How to continue from here?
-
-REPLY [5 votes]: The function $g(x) = \frac{f(x)}{x}$ is indeed Lipschitz: First of all, from
-$$|f(x) - f(y)| \le C|x-y|$$
-putting $y=1$ gives
-$$\tag{1} |f(x)| = |f(x) - f(1)+ f(1)|\le C|x-1| + |f(1)|.$$
-Now using $(1)$ and $x, y\ge 1$,
-$$\begin{split}
-|g(x) - g(y)| &= \left| \frac{f(x)}{x} - \frac{f(y)}{y}\right| \\
-&= \left| \frac{f(x)}{x} - \frac{f(y)}{x} + \frac{f(y)}{x} - \frac{f(y)}{y}\right| \\
-&\le \left| \frac{f(x)-f(y)}{x}\right| + |f(y)| \left|\frac 1x - \frac 1y\right| \\
-&\le |f(x) - f(y)| + |f(y)| \left|\frac{x-y}{xy} \right| \\
-&\le C|x-y| + \frac{C|y-1| + |f(1)|}{|xy|} |x-y| \\
-&\le C|x-y| + \left( C + |f(1)|\right) |x-y| \\
-&= K|x-y|,
-\end{split}$$
-where $K = 2C + |f(1)|$. Thus $g$ is also Lipschitz and so is uniformly continuous.
-
-REPLY [3 votes]: Below is an incomplete proof that might not even be the right way to go.
-But I'm posting it anyway (at least temporarily) so that you see one way to think about/work through these problems in general. The problem at the end of this proof is that $|f(y)|$ is not necessarily bounded by a constant, so we were unable to find a Lipschitz constant.
-$\left | \dfrac{f(x)}{x} - \dfrac{f(y)}{y} \right | $
-$= \left | \dfrac{yf(x) - x f(y)}{xy} \right |$
-$ = \left | \dfrac{yf(x) - yf(y) + yf(y)- x f(y)}{xy} \right |$
-$ \leq \left | \dfrac{yf(x) - yf(y)}{xy} \right | + \left | \dfrac{yf(y)- x f(y)}{xy} \right |$
-$= \left | \dfrac{f(x) - f(y)}{x} \right | + |f(y)|\left | \dfrac{y- x }{xy} \right |$
-$\leq C\left | \dfrac{x - y}{x} \right | + |f(y)|\left | \dfrac{y- x }{xy} \right |$
-$\leq \left (C + \dfrac{|f(y)|}{|y|} \right ) \left | \dfrac{x - y}{x} \right |$
-$\leq \left (C + \dfrac{|f(y)|}{|1|} \right ) \left | \dfrac{x - y}{1} \right |$ (since $x, y \in [1, \infty)$)
-$= (C + |f(y)| ) \cdot |x - y|$<|endoftext|>
-TITLE: Why is rotation about the y-axis in $\mathbb{R}^3$ different from rotation about the x- and z-axes?
-QUESTION [8 upvotes]: In my textbook, for a counterclockwise rotation about the x-axis we have $\begin{pmatrix}
-1 & 0 & 0\\
-0 & \cos\theta & -\sin\theta \\
-0 & \sin\theta & \cos\theta
-\end{pmatrix}$
-For rotation about the z-axis we have
-$\begin{pmatrix}
-\cos\theta & -\sin\theta & 0 \\
-\sin\theta & \cos\theta & 0 \\
-0 & 0 & 1
-\end{pmatrix}
-$. Now for rotation about the y-axis it's listed as $\begin{pmatrix}
-\cos\theta & 0 & \sin\theta \\
-0 & 1 & 0 \\
--\sin\theta & 0 & \cos\theta
-\end{pmatrix} $. I can see that it looks like the rotation matrix for a clockwise rotation, but it says right in front of these 3 matrices that they are all for counterclockwise rotations, so I'm not entirely sure what's going on.
-
-REPLY [6 votes]: They are all three the same rotation direction. The thing at play here is orientations of coordinate systems.
-What a right rotation (if you point your right-hand thumb along the axis and close your other fingers, they will point in the direction of the rotation) around the X-axis does is rotate the Y axis towards the Z axis, if XYZ is a right-oriented coordinate system (if you take your right hand you can point the thumb in the X direction, index finger in the Y direction and middle finger in the Z direction).
-Something similar is true for rotation about the Z axis: it rotates the X axis towards the Y axis, because ZXY is also a right-oriented coordinate system.
-In fact, reordering by cyclically shifting the axes preserves the orientation, so XYZ, YZX and ZXY are all positively oriented systems (but just swapping two axes gives the opposite, left-handed, orientation). You can also note that swapping twice gets back to the original orientation (each of XYZ, YZX and ZXY can be obtained from the others by two swaps).
-Now look at what happens to a matrix when you swap axes: you swap both the corresponding columns and rows. And if you do it twice you see that you actually arrive at the three possible matrices (you can also rotate column- and row-wise if that suits your mind better).<|endoftext|>
-TITLE: Projective limit involving p-adic numbers
-QUESTION [7 upvotes]: Let $p$ and $q$ be distinct primes. What is the projective limit
-$$\varprojlim \mathbb R^2 / (p^n \mathbb Z \times q^n \mathbb Z)?$$
-
-That's an exercise from Robert's book A Course in p-adic Analysis.
-Is it true that this limit is isomorphic to $\varprojlim (\mathbb R / p^n \mathbb Z) \times (\mathbb R / q^n \mathbb Z)$, which is just a product of projective limits of $\mathbb R / p^n \mathbb Z$ (and the respective expression involving $q$), i.e. $\mathbb S_p \times \mathbb S_q$ where $\mathbb S$ denotes a solenoid?
-
-REPLY [2 votes]: Yes, your guess is correct.
For each $n$, we have $\mathbb{R}^2/(p^n\mathbb{Z}\times q^n\mathbb{Z})\cong \mathbb{R}/(p^n\mathbb{Z})\times \mathbb{R}/(q^n\mathbb{Z})$ (just treat each coordinate separately), and these isomorphisms are compatible with the maps in the inverse system. If you have two inverse systems $(A_n)$ and $(B_n)$ and form the inverse system $(A_n\times B_n)$ with the maps just acting separately on each coordinate, then there is a natural isomorphism $\lim (A_n\times B_n)\to\lim A_n\times \lim B_n$ (you can construct the isomorphism quite explicitly in terms of elements, or more categorically, you can observe that both objects have the universal property that a map into them is the same as a family of compatible maps to both $A_n$ and $B_n$ for each $n$). Thus $\lim \mathbb{R}^2/(p^n\mathbb{Z}\times q^n\mathbb{Z})\cong\lim\mathbb{R}/(p^n\mathbb{Z})\times \lim\mathbb{R}/(q^n\mathbb{Z})=\mathbb{S}_p\times\mathbb{S}_q$.<|endoftext|>
-TITLE: Artin's Algebra exercise special case of some theorem/problem?
-QUESTION [6 upvotes]: The following exercise is from Artin's Algebra text:
-
-Show that there is a one to one correspondence between maximal ideals of $ \bf R$$[x]$ and the complex upper half plane.
-
-Solution: Follows from the fact that $ \bf R$$[x]$ is a PID, and any irreducible polynomial of $ \bf R$$[x]$ is either of degree $1$ or of degree $2$.
-Is the above problem a special case of some theorem/problem? In the case of an algebraically closed field $k$, it's well known that maximal ideals of the polynomial ring in $n$ variables over $k$ correspond to points of affine space $ \bf A_k^n$.
-
-REPLY [4 votes]: Let $K$ be a field and $\bar K$ its algebraic closure. Since every element of $\bar K$ has a unique monic minimal polynomial over $K$, we can define an equivalence relation on $\bar K$ by calling two elements of $\bar K$ equivalent iff they have the same minimal polynomial. If you know some Galois theory, you will recognise these equivalence classes as the orbits of $\bar K$ under the action of the Galois group of $\bar K/K$ in the case that $\bar K$ is separable over $K$ (perhaps also in the non-separable case, but I'm not sure).
-There is a bijective correspondence between these equivalence classes and the maximal ideals of $K[x]$. The reason of course is that on one hand every minimal polynomial is irreducible, so it generates a non-zero prime ideal, which is maximal since $K[x]$ is a PID. On the other hand, again using that $K[x]$ is a PID, every maximal ideal is generated by a monic irreducible polynomial, which is the minimal polynomial of some element of $\bar K$.
-Your statement is the special case $K = \mathbb R$, $\bar K=\mathbb C$. The only non-trivial Galois action is complex conjugation, so the equivalence above just identifies every element of $\mathbb C$ with its complex conjugate, which allows you to choose a representative of each equivalence class in the upper half plane.<|endoftext|>
-TITLE: Aggregating standard deviation to a summary point
-QUESTION [12 upvotes]: I have a range of data (server performance statistics) formatted as follows, for each server:
-Time | Average | Min | Max | StdDev | SampleCount |
--------------------------------------------------------------------
-Monday 1st | 125 | 15 | 220 | 12.56 | 5 |
-Tuesday 2nd | 118 | 11 | 221 | 13.21 | 4 |
-Wednesday 3rd | 118 | 11 | 221 | 13.21 | 3 |
-.... | ... | .. | ... | ..... | . |
-and so on...
-
-These data points are calculated from data that has a finer resolution (e.g. hourly data). 
-I need to aggregate this data into a single summary point so the end result is a list of servers and an aggregate average, min, max, standard deviation.
-For average, I take the average of all the averages.
-For min, we take the minimum min.
-For max, we take the maximum max.
-However, I'm not sure what method I should be using to aggregate standard deviation? I've seen various answers including square roots and variance but I really need a concrete answer on this - can anyone help?
-
-REPLY [12 votes]: General Solution
-To compute mean, variance, and standard deviation you only need to keep track of three sums $s_0, s_1, s_2$ defined as follows for a set of values $X$:
-$$(s_0, s_1, s_2) = \sum_{x \in X} (1, x, x^2)$$
-In English, $s_0$ is the number of values, $s_1$ is the sum of the values, and $s_2$ is the sum of the square of each value. Given these sums, we can now derive mean (average) $\mu$, variance (population) $\sigma^2$, and standard deviation (population) $\sigma$:
-$$\mu = \frac{s_1}{s_0} \qquad \sigma^2 = \frac{s_2}{s_0} - \left(\frac{s_1}{s_0}\right)^2 \qquad \sigma = \sqrt{\frac{s_2}{s_0} - \left(\frac{s_1}{s_0}\right)^2}$$
-In English, the variance is the average of the square of each value minus the square of the average value.
-Your particular case
-You have $s_0, \mu, \sigma$, so you need to compute $s_1$ and $s_2$ by solving the above for those variables:
-$$s_1 = s_0\mu \qquad s_2 = s_0\left(\mu^2 + \sigma^2\right)$$
-Once you have $s_0, s_1, s_2$ for each data set, aggregation is just a matter of adding the corresponding sums together and deriving the desired aggregate values from those sums.
-Variance Equation Derivation
-We start with the standard equation for variance (population) and go from there:
-$$\sigma^2 = \frac{1}{n}\sum_{x \in X} \left(x - \mu\right)^2
-= \frac{1}{s_0}\sum_{x \in X} \left(x - \frac{s_1}{s_0}\right)^2$$
-$$= \frac{1}{s_0}\sum_{x \in X} \left(x^2 - 2x\frac{s_1}{s_0} + \left(\frac{s_1}{s_0}\right)^2\right)
-= \frac{1}{s_0}\sum_{x \in X} x^2 - 2\frac{s_1}{s_0^2}\sum_{x \in X} x + \frac{s_1^2}{s_0^3}\sum_{x \in X} 1
-$$
-$$= \frac{1}{s_0}(s_2) - 2\frac{s_1}{s_0^2}(s_1) + \frac{s_1^2}{s_0^3}(s_0)
-= \frac{s_2}{s_0} - 2\frac{s_1^2}{s_0^2} + \frac{s_1^2}{s_0^2}
-= \frac{s_2}{s_0} - \left(\frac{s_1}{s_0}\right)^2
-$$<|endoftext|>
-TITLE: Dominated convergence theorem and uniform convergence
-QUESTION [5 upvotes]: I try to solve the following task:
-Let $(\Omega,\mathfrak{A},\mu)$ be a measure space with $\mu(\Omega)<\infty$. Let $(f_n)_{n\geq1}$ be a sequence of integrable measurable functions $f_n:\Omega \rightarrow [-\infty,\infty]$ converging uniformly on $\Omega$ to a function $f$. Prove that $$\int f d\mu = \lim\limits_{n\rightarrow \infty} \int f_n d\mu$$.
-What I thought: uniform convergence of $f_n \rightarrow f$ implies that $f$ is a limit of measurable functions and therefore measurable. Now I thought that I could use the dominated convergence theorem to show the equality.
-Uniform convergence means $\lim\limits_{n\rightarrow \infty} \sup \{|f_n(x)-f(x)|:x\in \Omega\}=0$, so I think I can define a function $s(x):=\sup \{|f_n(x)|:n\in \mathbb{N}\}$ which dominates all the $f_n$ and apply the theorem. But I'm not sure if this is the correct way. 
-
-REPLY [5 votes]: Fix an $\varepsilon > 0$. By uniform convergence, we know that there exists $N \in \mathbb{N}$ such that for $n \geq N$,
-\begin{equation*}
-|f_n| = |f_n - f + f| \leq |f_n - f| + |f| < |f| + \varepsilon
-\end{equation*}
-Then define the function $g : \Omega \to \mathbb{R}$ by $g(\omega) = |f(\omega)| + \varepsilon$. Then $g$ is integrable, since we work on a finite measure space (note that $|f| \le |f_N| + \varepsilon$, so $|f|$ is integrable). If it were an infinite measure space, the $"+ \varepsilon"$ part would give some difficulties. Next, define $h_n = f_{N + n}$, with $\lim\limits_{n \to \infty} h_n = f$, for which it holds that $|h_n| \leq g$. Then, all conditions of the dominated convergence theorem are satisfied, hence we can conclude
-\begin{equation*}
-\lim_{n \to \infty} \int_\Omega h_n \mathrm{d}\mu= \int_\Omega f \mathrm{d}\mu \tag{$\ast$}
-\end{equation*}
-Lastly, observe that the difference between $\{f_n\}$ and $\{ h_n \}$ is only a shift in indices, hence it immediately follows from ($\ast$) that
-\begin{equation*}
-\lim_{n \to \infty} \int_\Omega f_n \mathrm{d}\mu = \lim_{n \to \infty} \int_\Omega h_n \mathrm{d}\mu= \int_\Omega f \mathrm{d}\mu
-\end{equation*}<|endoftext|>
-TITLE: Property of a polynomial with no positive real roots
-QUESTION [15 upvotes]: The following is an exercise (Exercise #3 (a), Chapter 3, page 28) from Richard Stanley's Algebraic Combinatorics.
-
-Let $P(x)$ be a nonzero polynomial with real coefficients. Show that the following two conditions are equivalent:
-
-There exists a nonzero polynomial $Q(x)$ with real coefficients such that all coefficients of $P(x) Q(x)$ are nonnegative.
-
-There does not exist a real number $a > 0$ such that $P(a) = 0$.
-
-
-
-That the first item implies the second is straightforward (since if $a$ is a positive real root of $P(x)$ then it is a positive real root of $P(x)Q(x)$, but since $P(x)Q(x)$ is nonzero with all coefficients nonnegative, this is impossible). However, I can't seem to find a way to prove that the second item implies the first. I would very much like to see a proof if someone can provide it.
-
-REPLY [3 votes]: copy of answer on duplicate:
-It is not difficult to see that it is enough to prove this for the case
-$$F(x)=x^2+ax+b$$
-where $F$ is irreducible, i.e. $a^2-4b<0$.
-So consider
-$$(x^2+ax+b)\sum_{i=0}^\infty c_ix^i$$
-We want to show that we can choose the $c_i$ to be equal to $0$ for $i>N$ for some $N$, and still have all non-negative coefficients. Now:
-$$c_0b\geq 0\Leftrightarrow c_0\geq 0$$
-we set, to make things easier, $c_0=1$. Then the next condition reads
-$$c_0a+c_1b\geq 0\Leftrightarrow c_1\geq -\frac{a}{b}$$
-Again, to make things simple, set $c_1=-\frac{a}{b}$. The next condition reads
-$$c_2b+c_1a+c_0\geq 0\Leftrightarrow c_2\geq \frac{1}{b}\left(\frac{a^2}{b}-1\right)=\frac{a^2}{b^2}-\frac{1}{b}$$
-Now if $\frac{a^2}{b^2}-\frac{1}{b}\leq 0$ we are done, since we can set $c_2=c_3=\dots=0$. If not, we set $c_2=\frac{a^2}{b^2}-\frac{1}{b}$. The next condition reads
-$$c_3b+c_2a+c_1\geq 0\Leftrightarrow c_3\geq \frac{1}{b}\left(\frac{a}{b}-\frac{a^3}{b^2}+\frac{a}{b}\right)=\frac{2a}{b^2}-\frac{a^3}{b^3}$$
-Again, if $\frac{2a}{b^2}-\frac{a^3}{b^3}\leq 0$ we are done by setting $c_3=c_4=\dots=0$. If not, we set $c_3=\frac{2a}{b^2}-\frac{a^3}{b^3}$. The next condition reads
-$$c_4b+c_3a+c_2\geq 0\Leftrightarrow c_4\geq \frac{1}{b}\left(\frac{1}{b}-\frac{a^2}{b^2}+\frac{a^4}{b^3}-\frac{2a^2}{b^2}\right)=\frac{a^4}{b^4}-\frac{3a^2}{b^3}+\frac{1}{b^2}$$
-This process keeps repeating. 
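-This repetition is easy to automate. Below is a short Python sketch of the greedy scheme just described (the function name and the iteration cap are my own, and it assumes $b>0$ and, as in the interesting case, $a<0$): each $c_n$ is set equal to its lower bound $-(ac_{n-1}+c_{n-2})/b$, and the iteration stops as soon as that bound becomes non-positive, at which point all remaining $c_n$ may be taken to be $0$.
-
-    def greedy_coefficients(a, b, cap=10000):
-        """Greedily pick c_0 = 1 and c_n = -(a*c[n-1] + c[n-2]) / b.
-
-        Stops once the lower bound is <= 0 (all later c_n can be 0).
-        Assumes a^2 - 4b < 0, so in particular b > 0.
-        """
-        c = [1.0, -a / b]
-        for _ in range(cap):
-            bound = -(a * c[-1] + c[-2]) / b
-            if bound <= 0:
-                return c
-            c.append(bound)
-        raise RuntimeError("no non-positive bound found within the cap")
-
-    # The example worked out below: x^2 - 1.9x + 1, i.e. a = -1.9, b = 1.
-    print(greedy_coefficients(-1.9, 1.0))
-    # approximately [1, 1.9, 2.61, 3.059, 3.2021, 3.02499,
-    #                2.545381, 1.8112339, 0.89596341]
-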
-The next step is to find a closed form for this sequence of lower bounds. Therefore we define
-$$g(n,m)=\begin{cases}
-0 &\text{ if } m>n\\
-0 &\text{ if } n\not\equiv m\mod 2\\
-(-1)^{\frac{m+n}{2}}\binom{\frac{m+n}{2}}{m} &\text{ otherwise}
-\end{cases}$$
-Then the conditions read
-$$c_n\geq \sum_{m=0}^n \frac{a^{m}}{b^{m/2+n/2}}g(n,m)=\frac{1}{b^{n/2}}\sum_{m=0}^n \left(\frac{a}{\sqrt{b}}\right)^mg(n,m)$$
-And so it suffices to prove that for all $a,b$ with $a^2-4b<0\Leftrightarrow \left|\frac{a}{\sqrt{b}}\right|<2$, the expression $\sum_{m=0}^n \left(\frac{a}{\sqrt{b}}\right)^mg(n,m)$ will be $\leq 0$ for some $n>0$.
-To prove this, we use inspiration from this paper. Write
-$$H_n(x)=\sum_{m=0}^n x^mg(n,m)$$
-Then using the substitution $x=2\cosh(z)$ we find that
-$$H_n(x)=\pm\frac{\sinh((n+1)z)}{\sinh(z)}$$
-So
-$$H_n(x)=0\Leftrightarrow z=\frac{i\pi k}{n+1}, k\in\{1,\dots,n\}\Leftrightarrow x=2\cosh\left(\frac{i\pi k}{n+1}\right)\in(-2,2)$$
-From here it is easy to finish the proof: for $n$ big enough we will be able to find $k$ such that
-$$\frac{a}{\sqrt{b}}\in \left[2\cosh\left(\frac{i\pi k}{n+1}\right),2\cosh\left(\frac{i\pi (k+1)}{n+1}\right)\right]$$
-and $H_n(x)$ is non-positive on this interval.
-Note that the condition of irreducibility of $x^2+ax+b$ is nicely used, since it ensures that $\frac{a}{\sqrt{b}}$ is in the range $[-2,2]$ where $H_n(x)$ switches sign.
-To see what this looks like in the case mentioned by Arthur in the comments: we first create a small table with values of $g(n,m)$ (rows $n=0,\dots,9$ from top to bottom, columns $m=9,\dots,0$ from left to right):
-$$
-\begin{pmatrix}
-0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 1\\
-0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & -1 & 0\\
-0 & 0 & 0 & 0 & 0 & 0 & 0 & 1 & 0 & -1\\
-0 & 0 & 0 & 0 & 0 & 0 & -1 & 0 & 2 & 0\\
-0 & 0 & 0 & 0 & 0 & 1 & 0 & -3 & 0 & 1\\
-0 & 0 & 0 & 0 & -1 & 0 & 4 & 0 & -3 & 0\\
-0 & 0 & 0 & 1 & 0 & -5 & 0 & 6 & 0 & -1\\
-0 & 0 & -1 & 0 & 6 & 0 & -10 & 0 & 4 & 0\\
-0 & 1 & 0 & -7 & 0 & 15 & 0 & -10 & 0 & 1\\
--1 & 0 & 8 & 0 & -21 & 0 & 20 & 0 & -5 & 0
-\end{pmatrix}
-$$
-So the conditions read
-$$c_0 \geq 1$$
-$$c_1 \geq \frac{1.9}{1}=1.9$$
-$$c_2 \geq 1.9^2-1=2.61$$
-$$c_3 \geq 1.9^3-2\cdot 1.9=3.059$$
-$$c_4\geq 3.2021$$
-$$c_5\geq 3.02499$$
-$$c_6\geq 2.545381$$
-$$c_7\geq 1.8112339$$
-$$c_8\geq 0.89596341$$
-$$c_9\geq -0.108903421$$
-This is smaller than $0$, so we can set $c_9=c_{10}=\dots=0$, and find $$T(x)=1+1.9x+2.61x^2+3.059x^3+3.2021x^4+3.02499x^5+2.545381x^6+1.8112339x^7+0.89596341x^8$$ will work, and indeed
-$$(x^2-1.9x+1)(1+1.9x+2.61x^2+3.059x^3+3.2021x^4+3.02499x^5+2.545381x^6+1.8112339x^7+0.89596341x^8)=1 + 4.440892098500626\cdot 10^{-16} x^3 + 4.440892098500626\cdot 10^{-16} x^6 + 4.440892098500626\cdot 10^{-16} x^7 + 0.108903 x^9 + 0.895963 x^{10}$$
-I think it's safe to say that, without rounding errors, we would find
-$$(x^2-1.9x+1)(1+1.9x+2.61x^2+3.059x^3+3.2021x^4+3.02499x^5+2.545381x^6+1.8112339x^7+0.89596341x^8)=1 + 0.108903 x^9 + 0.895963 x^{10}$$
-So it's not surprising that Arthur conjectured that this is a counterexample, since it requires a rather arbitrary-looking degree $8$ polynomial. 
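-The rounding artifacts above disappear if the computation is redone in exact rational arithmetic. Here is a short Python check of that claim (my own verification script, entering $1.9$ as $19/10$):
-
-    from fractions import Fraction
-
-    a, b = Fraction(-19, 10), Fraction(1)
-
-    # Rebuild the c_i exactly, via the recurrence used above.
-    c = [Fraction(1), -a / b]
-    while True:
-        bound = -(a * c[-1] + c[-2]) / b
-        if bound <= 0:
-            break
-        c.append(bound)
-
-    # Coefficients of (x^2 + a*x + b) * sum_i c_i x^i.
-    prod = [Fraction(0)] * (len(c) + 2)
-    for i, ci in enumerate(c):
-        prod[i] += b * ci
-        prod[i + 1] += a * ci
-        prod[i + 2] += ci
-
-    print([float(p) for p in prod])
-    # [1.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.108903421, 0.89596341]
-
-So with exact arithmetic the middle coefficients vanish exactly and the product is $1 + 0.108903421\,x^9 + 0.89596341\,x^{10}$, as predicted.
-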
-
-For the other test case, $F(x)=x^4-x+1$: we factor over $\mathbb{R}$, and find one quadratic that already has positive coefficients, and the quadratic
-$$G(x)=0.713639-1.45427 x+x^2$$
-Using the algorithm, we find
-$$c_0=1,\ c_1\geq 2.03782,\ c_2\geq 2.75145,\ c_3\geq 2.75144,\ c_4\geq 1.75142,\ c_5\geq -0.286423$$
-so we are done, and
-$$(0.713639-1.45427 x+x^2)(1+2.03782x+2.75145x^2+2.75144x^3+1.75142x^4)=1.75142x^6+0.204402x^5+0.713639$$<|endoftext|>
-TITLE: What are we doing when we raise $10$ to some decimal number?
-QUESTION [7 upvotes]: This is something I am having some difficulty understanding.
-When I have $10^x$ where $x$ is some integer greater than $0$, I understand that this is equivalent to writing $$\underbrace{10\cdot 10 \cdot 10 \cdots 10}_{x \text {times}} $$
-but when I have the case that $10$ is raised to some decimal number $x$, I really lack the intuition of what's going on here.
-I ask this question because, for example, when I read that $10^{0.48}=3.01\ldots$ I simply don't understand how we have that $10$ raised to some number equals $3$. In general, why is it that every positive number can be described as $10^x$?
-P.S. I guess it's important to say that I am still in high school, so you know what kind of answers might be appropriate for me.
-
-REPLY [4 votes]: For the sake of illustration, I have made a plot of the powers of $2$ (those of $10$ grow too fast).
-
-You see that they nicely align on a smooth and regular curve. Then it is a natural thing to define the powers for non-integer numbers too. And by continuity of the curve, for every positive number $y$, there will be some $x$ such that $y=2^x$.
-How exactly we can compute these is another matter, but here is a hint: it is easy to show that $\sqrt{2^{2n}}=2^n$. Then, we can generalize to real $x$ by admitting $\sqrt{2^x}=2^{x/2}$. This tells us $2^{0.5}=\sqrt{2},\ 2^{1.5}=2\sqrt2,\ 2^{0.25}=\sqrt{\sqrt{2}},\ 2^{0.75}=\sqrt{\sqrt{2}}\sqrt{2}\cdots$<|endoftext|>
-TITLE: minimizing the sum of weighted absolute distance
-QUESTION [7 upvotes]: Let $x_1, \ldots, x_n \in \mathbb{R}^d$ denote $n$ points in $d$-dimensional Euclidean space, and $w_1, \ldots, w_n \in \mathbb{R}_{\geq 0}$ any non-negative weights.
-$\arg\min_{\mu \in \mathbb{R}^d} \sum_{i=1}^n w_i \| x_i-\mu\| = \operatorname{median}\{x_1, \ldots, x_n \}?$
-I understand why the median of $\{x_1, \ldots, x_n \}$ minimizes the function without the weights. But I am not sure if the median is what minimizes the function with the weights; should I use Lagrange multipliers to solve this?
-
-REPLY [2 votes]: The problem you have stated is the weighted L1 median (sometimes called the weighted geometric median) of your data. There are many references on the topic:
-https://www.google.com/search?q=l1+median
-A concise reference along with an algorithm is:
-http://www.stat.rutgers.edu/home/cunhui/papers/39.pdf
-However, note that the Vardi-Zhang process needs to be adjusted for data having repeated values. If you scan the literature, there are some references that describe this adjustment.<|endoftext|>
-TITLE: For a martingale $X$ does uniform integrability imply integrability of $\sup |X_{n}|$?
-QUESTION [5 upvotes]: All is in the title: if $(X_{n})$ is a uniformly integrable martingale, is it true that $\sup_{n\in \mathbb{N}} |X_{n}|$ is an integrable variable?
-If I had to take a guess I'd say the answer is no, but I haven't managed to construct a counter-example. 
I know that if you strengthen the hypothesis to $(X_{n})$ being $L^{p}$-bounded for $p>1$ the claim is true (it follows from Doob's maximal inequality), and that if you don't assume that $X_{n}$ is a martingale then the claim is false.
-
-REPLY [4 votes]: Saz pointed out that Exercise II.3.15 in Revuz and Yor's Continuous Martingales and Brownian Motion gives a negative answer to my question. Here's the counterexample, adapted to a discrete martingale for the sake of variety.
-Consider a partition $(A_{k})_{k\geqslant 1}$ of a probability space $\Omega$. Let $(\alpha_{k})$ be any sequence of positive real numbers and set
-\begin{equation*}
-X_{\infty}=\sum_{k} \alpha_{k}\mathbf{1}_{A_{k}}.
-\end{equation*}
-This variable is integrable iff the series $S=\sum \alpha_{k}\mathbf{P}(A_{k})$ converges to a finite limit. Assume this is the case, and write $S_{n}$ for its partial sums; also let $P_{n}=\sum_{k=n}^{+\infty}\mathbf{P}(A_{k})$.
-If we define $\mathcal{F}_{n}=\sigma(A_{k},k\leqslant n)$ and set $X_{n}=\mathbf{E}(X_{\infty}\mid \mathcal{F}_{n})$ we obtain a martingale for which we have the explicit expression
-\begin{equation*}
-X_{n}=\sum_{k=1}^{n}\alpha_{k}\mathbf{1}_{A_{k}} + \frac{S-S_{n}}{P_{n+1}}\mathbf{1}_{\cup_{k> n}A_{k}}
-\end{equation*}
-so that
-\begin{equation*}
-\sup_{n\in\mathbb{N}} X_{n} \geqslant \sum_{k} \frac{S-S_{k}}{P_{k}}\mathbf{1}_{A_{k}}.
-\end{equation*}
-One checks using the convergence criterion for Bertrand series that if $\mathbf{P}(A_{n})=\frac{1}{n(n+1)}$ and $\alpha_{n}=\frac{n}{\ln^{2}(n)}$ then indeed $S < +\infty$ but $||\sup_{n\in\mathbb{N}}X_{n}||_{1}=+\infty$.<|endoftext|>
-TITLE: Can you determine a differential equation from its solutions?
-QUESTION [22 upvotes]: A linear first-order differential equation has two solutions:
- $$y_1(x)=x^2 \\y_2(x)=\frac{1}{x}$$ Determine the differential
- equation
-
-I did some research and I think I can use the Wronskian to determine my original DE, but I don't really get how it works. Can someone show me how it's done? (It would be nice if you could use a different example so I can solve this question myself).
-
-REPLY [4 votes]: In case of finding the linear second order differential equation for which $\{y_1,y_2\}$ is the basis of solutions, you can follow your first intuition and solve it using the Wronskian.
-First you have to check that $W(y_1,y_2)(x) \ne 0$ for any $x$ on the interval of definition of your differential equation.
-Happily, for all $x >0$ or $x<0$:
-$$W(y_1,y_2)(x) =
- \begin{vmatrix}
- y_1 & y_2 \\
- y_1' & y_2'
-\end{vmatrix}
- =
- \begin{vmatrix}
- x^2 & x^{-1} \\
- 2x & -x^{-2}
-\end{vmatrix} = - 3 \ne 0$$
-That means that the following equation
-$$ \begin{vmatrix}
- y & y_1 & y_2 \\
- y' & y_1' & y_2' \\
-y'' & y_1'' & y_2''
-\end{vmatrix} = 0 = y'' \begin{vmatrix}
- y_1 & y_2 \\
- y_1' & y_2'
-\end{vmatrix} -y' \begin{vmatrix}
- y_1 & y_2 \\
- y_1'' & y_2''
-\end{vmatrix} + y \begin{vmatrix}
- y_1' & y_2' \\
- y_1'' & y_2''
-\end{vmatrix} $$
-is a true second order differential equation. And you can see that both $y_1$ and $y_2$ are solutions to this equation by replacing $y$ by $y_1$ or $y_2$ in the determinant expression.<|endoftext|>
-TITLE: Improper integral $\int_{-\infty}^{+\infty}\frac{1}{a x^2 + bx + c} dx$, $a > 0,a x^2 + bx + c>0$
-QUESTION [5 upvotes]: I have a question about an improper integral. If you can help me, I appreciate it. 
-If a > 0 and the graph of $y=a x^2 + bx + c$ lies entirely above the x-axis, show that
-$$
-\int_{-\infty}^{+\infty} \frac{dx}{a x^2 + bx + c}=\frac{2 \pi}{\sqrt{4ac - b^2}}.
-$$
-
-REPLY [4 votes]: Note that if $ax^2+bx+c$ has real roots then the integral doesn't exist, since we will end up with an integral of the form $$\dfrac1{a}\int_{-\infty}^{\infty} \dfrac{dx}{(x-\alpha)(x-\beta)}$$ where $\alpha, \beta \in \mathbb{R}$, which clearly diverges. Hence, we can assume that the discriminant is strictly negative, i.e., $4ac -b^2 > 0$.
-Given that $4ac-b^2>0$, we have $ax^2+bx+c = a\left(x+\dfrac{b}{2a}\right)^2 + c - \dfrac{b^2}{4a}$. Hence, letting $y=x+\dfrac{b}{2a}$, we have
-\begin{align}
-\int_{-\infty}^{\infty} \dfrac{dx}{ax^2+bx+c} & = \int_{-\infty}^{\infty} \dfrac{dy}{ay^2 + c-\dfrac{b^2}{4a}} = \dfrac1{a} \dfrac1{\sqrt{c/a-b^2/(4a^2)}}\left. \arctan\left(\dfrac{y}{\sqrt{c/a-b^2/(4a^2)}}\right)\right \vert_{-\infty}^{\infty}\\
-& = \dfrac{2\pi}{\sqrt{4ac-b^2}}
-\end{align}<|endoftext|>
-TITLE: If $J$ is the tangent point of $GH$ with the incircle of $FGH$ and $D$ is the intersection of the $F$-mixtilinear incircle with $(FGH)$, then $\angle FGH=\angle GDJ$.
-QUESTION [11 upvotes]: Let $FGH$ be a triangle with circumcircle $A$ and incircle $B$, the latter with touchpoint $J$ in side $GH$. Let $C$ be a circle tangent to sides $FG$ and $FH$ and to $A$, and let $D$ be the point where $C$ and $A$ touch, as shown here.
-
-Prove that $\angle FGH = \angle GDJ$.
-
-REPLY [4 votes]: Using Evan Chen's mixtilinear incircle article here, the result becomes trivial.
-I will change some notations.
-In $\triangle ABC$, let the incircle hit $BC$ at $D$ and the $A$-mixtilinear incircle hit the circumcircle of $\triangle ABC$ at $E$. Prove that $\angle DEB = \angle ABC$.
-Since (9) holds, we have $\angle DTM_A = \angle AFB=180-\angle B - \frac{1}{2}\angle A$, where $F = AI \cap BC$.
-Since $\angle BTM_A = 180-\frac{1}{2} \angle A$, we have $$\angle DTB = \angle BTM_A - \angle DTM_A = \angle B = \angle ABC$$ as desired.<|endoftext|>
-TITLE: Find the limit $\lim_\limits{x\to 0^+}{\left( e^{\frac{1}{\sin x}}-e^{\frac{1}{x}}\right)}$
-QUESTION [8 upvotes]: Find the limit:
- $$\lim_\limits{x\to 0^+}{\left( e^{\frac{1}{\sin x}}-e^{\frac{1}{x}}\right)}$$
-
-Using graph inspection, I have found the limit to be $+\infty$, but I cannot prove this in any way (I tried factorizing, using DLH)... Can anyone give a hint about that? The limit should be done without any approximations, because we haven't been taught those yet.
-
-REPLY [3 votes]: By the Mean Value Theorem applied to $f(x)=e^{1/x}$ with $f'(x)=-x^{-2}e^{1/x}$, we have
-$$e^{1/\sin x}-e^{1/x}=f(\sin x)-f(x)=(x-\sin x)f'(\xi)=\frac{x-\sin x}{\xi^2}\cdot e^{1/\xi} $$
-with $\xi$ between $x$ and $\sin x$.
-We can find $a>0$ such that for all small enough positive $x$ we have
-$$\tag1 \sin x < x -ax^3 $$
-and hence
-$$\frac{x-\sin x}{\xi^2}>\frac{x-\sin x}{x^2}>ax.$$
-Thus for small $x$ with $t:=\frac 1x$
-$$e^{1/\sin x}-e^{1/x}=\frac{x-\sin x}{\xi^2}\cdot e^{1/\xi} >axe^{1/x}=a\cdot \frac{e^t}{t}\to +\infty$$
-because the exponential grows stronger than any polynomial (You might use $e^x\ge 1+x$ for all $x$ $\implies e^t=(e^{t/2})^2\ge(1+\frac t2)^2=1+t+\frac14t^2$ for all $t>-2$).
-
-How can we show $(1)$?
-Pick any $a$ with $0<a<\frac16$ and let $g(x)=x-ax^3-\sin x$. Then $g(0)=g'(0)=0$, and since $6a<1=\lim_{x\to 0^+}\frac{\sin x}{x}$, there is an $x_0>0$ such that $g''(x)=\sin x-6ax > g''(0)=0$ for all $x\in (0,x_0]$, so that $g'$ is strictly increasing in $[0,x_0]$. Thus $g'(x)>g'(0)=0$ for all $x\in(0,x_0]$, so that $g$ is strictly increasing on $[0,x_0]$. 
Thus $g(x)>g(0)=0$ for $0<x\leq x_0$, which proves $(1)$.<|endoftext|>
-TITLE: Prove continuity/discontinuity of the Popcorn Function (Thomae's Function).
-QUESTION [8 upvotes]: I have to prove that a function $f:]0,1] \rightarrow \Bbb R$ :
-$$
-f(x) =
-\begin{cases}
-\frac1q, & \text{if $x \in \Bbb Q$ with $ x=\frac{p}q$ for $p,q \in \Bbb N$ coprime} \\
-0, & \text{if $x \notin \Bbb Q $}
-\end{cases}
-$$
-is discontinuous at every point $x \in \ ]0,1] \cap\Bbb Q$.
-And then to consider $x \in \ ]0,1] \backslash \Bbb Q$ and prove that it is continuous.
-So far I have learned different ways to prove continuity (epsilon-delta, sequences), but I'm never sure which would be better to use in each different case.
-I wanted to prove the discontinuity by using sequences:
-$$\forall x_n \quad x_n\rightarrow a \quad \Rightarrow \quad f(x_n) \rightarrow f(a)$$
-I tried creating a sequence $ x_n=\frac1n + a$; we know it converges to $a$ but $f(x_n)$ doesn't converge to $f(a)$ because there would still be some points not in our set (irrational numbers that create gaps).
-But I don't think it works, so I'm asking you if you could help me solve the two questions.
-
-REPLY [13 votes]: This is sometimes called the Popcorn Function, since if you look at a picture of the graph, it looks like kernels of popcorn popping. (It is also called Thomae's function.)
-To prove it is discontinuous at any rational point, you could argue in the following way: Since the irrationals that are in $(0,1)$ are dense in $(0,1)$, given any rational number $p/q$ in $(0,1)$, we know $f(p/q) = 1/q$. But let $a_{n}$ be a sequence of irrationals converging to $p/q$ (which exists by density). Then $\lim \limits_{n \to \infty} f(a_{n}) = 0 \neq f(p/q)$. Thus, $f$ fails to be continuous at $x = p/q$.
-Now, to prove it is continuous at every irrational point, I recommend you do it in the following way:
-
-Prove that for each irrational number $x \in (0,1)$, given $N \in \Bbb N$, we can find $\delta_{N} > 0$ so that the rational numbers in $(x - \delta_{N}, x + \delta_{N})$ all have denominator larger than $N$.
-
-Once you have the result from above, we can use the $\epsilon-\delta$ definition of continuity to prove $f$ is continuous at each irrational point.
-
-
-So, first prove 1. (It's not too hard.) Once you've done that, you can accomplish 2. in the following way:
-
-Let $\epsilon > 0$. Let $x \in (0,1)$ be irrational. Choose $N$ so that $\frac{1}{n} < \epsilon$ for every $n \geq N$ (by the Archimedean property).
-By what you proved in 1., find $\delta_{N} > 0$ so that $(x - \delta_{N}, x + \delta_{N})$ contains rational numbers only with denominators larger than $N$.
-Then if $y \in (x-\delta_{N}, x + \delta_{N})$, $y$ could either be rational or irrational. If $y$ is irrational, we have $|f(x) - f(y)| = |0 - 0| = 0 < \epsilon$ (since $f(z) = 0$ if $z$ is irrational).
-On the other hand, if $y$ is rational, we have $y = p/q$ with $q \geq N$. So $|f(x) - f(y)| = |0 - f(y)| = |f(y)| = |1/q| \leq |1/N| < \epsilon$, and this is true for every $y \in (x-\delta_{N}, x + \delta_{N})$.
-Thus, given any $\epsilon > 0$, if $x \in (0,1)$ is irrational, we found $\delta > 0$ (which was actually $\delta_{N}$ in the proof) so that $|x - y| < \delta$ implies $|f(x) - f(y)| < \epsilon$.<|endoftext|>
-TITLE: Theoretical and applied math help - what and why?
-QUESTION [6 upvotes]: (Edit: If you wish to skip the prologue, you may go straight to the questions in the last few paragraphs.) 
-I'm not very far ahead at the moment (going to begin my undergraduate years after next summer), but there's been something lingering in my mind, and I was hoping someone would be able to help me out. -For most of my life, I've defended "theoretical" and "pure" work, holding a belief that knowledge is the most valuable item one can possibly maintain. I expected myself to grow up and go into theoretical math (as do my parents at the moment), which would entail completing graduate research, eventually earning a PhD, then moving into professorship. Recently, however, I have come to the realization that this isn't the lifestyle that fits my character best, and I've been wondering how worthwhile theoretical math actually is. -I have studied group theory, discrete (graph theory), linear algebra, real analysis, multivariable/vector calculus, probability/statistics, and advanced geometry. To be honest, my favorite bit has probably been discrete (I am also interested in computer science, if it helps), closely followed by multivariable calculus and linear algebra. While I have performed very well across all the topics - and do enjoy the interesting, incredible bits of group theory and geometry - I am not sure where they ultimately end up being used. -Let me translate the question such that it is not specific to me, but so that many can connect. I do acknowledge the fact that theoretical math is very important - I hold absolutely nothing against it - but more so now than before, I have begun asking myself "How does it matter if the Collatz (3n+1) conjecture is proven true? It is an interesting property of all numbers, but why does it matter?" Of course, when people would ask me similar questions in the past, I would defend my stance, making claims along the lines of "It is essential to further our knowledge," or "Just because it's interesting." I do in fact find such random things interesting; I have windows on multilinear algebra and n-ary group theory open right now. Recently, however, I've begun to wonder what we get out of solving these problems and studying these topics, and I've been sensing a stronger attraction towards "applied math." Not to say that this is in any way unfortunate; ultimately, math is math, whether theoretical or applied. But let me move on to the question(s). -What is the point of pursuing theoretical math? Fifty years down the line, how do you reflect on your life? What if, although unlikely, you haven't been able to make any sufficient contribution to the field? -And as for applied: what are the outcomes from pursuing applied math? What are the similarities and differences in topics between applied and theoretical, especially regarding the ones I mentioned before? Does applied math prepare you better than theoretical for fields such as economics/econometrics, physics, engineering, and computer science? Do you ever feel "limited" in your knowledge of math by branching into applied, or is it of equal rigor and level, simply less abstract? -Regarding education: for each of the following, would knowledge in applied math or theoretical math be more useful? 
-
-Applied physics
-Theoretical physics (seems obvious, but worth asking)
-Astronomy and astrophysics
-Engineering: robotics and AI
-Engineering: aeronautical
-Computer science
-
-To clarify, I am not worried about the difficulty of the material; my concern is being able to branch out to other fields (mainly within STEM, but also into my other interests in the arts and humanities), being able to implement the things I'm learning, and most importantly, feeling satisfied with myself at the end of the day.
-Thank you very very much for reading the long post; if you can, please do try to answer any of the questions I asked, as it will help me reach a conclusion.
-
-REPLY [4 votes]: I would argue, as you say, that knowledge is worth something even in the absence of practical applications. However, that is not to say that theoretical mathematics has no practical applications.
-In my opinion the following is the main difference between applications of "theoretical" and "applied" mathematics:
-Pure mathematics is an investment for the far future, not for an immediate application. You are not likely to see a big practical application of your work in your own lifetime. See this interesting question for some examples of unintended practical uses of theoretical mathematics.
-Pure mathematicians' impact is more indirect in the sense that the mathematical results they come up with are used by applied mathematicians to create practical applications - and without the pure mathematicians we probably would not be where we are today.
-You could maybe make an analogy to biology. When Darwin studied animals he probably had no idea what it would lead to; and many would probably say "What is the use of studying different kinds of finches?" But it eventually led to an amazing discovery.
-Similarly, theoretical mathematicians ask "This is interesting, what is there more to know about this?" which eventually may lead to great applications, whereas applied mathematicians ask "This is the problem, how can we solve it?" and they try to solve it.
-Regarding you, it seems like you want a more direct, immediate impact from your work, in which case I would say that you probably are more of an applied mathematician kind of person. But that is just my opinion.<|endoftext|>
-TITLE: Symbol for the set of all partitions of a set
-QUESTION [7 upvotes]: Is there a symbol for the set of all partitions of a set (like P for the power set)?
-P.S. I need such a symbol for writing some mathematics-related notes.
-
-REPLY [2 votes]: See this answer to a similar question on MO. I suggest stealing notation from combinatorics and letting $\mathrm{B}_S$ denote the set of partitions of $S$, since the Bell number $\mathrm{B}_{|S|}$ is the number of partitions of $S$.<|endoftext|>
-TITLE: Sequence of bounded functions or pointwise bounded
-QUESTION [5 upvotes]: We have a sequence of functions $\{f_n(x)\}$ defined on a set $E$, with $n\in \mathbb{N}$. What does "sequence of bounded functions" mean? Is it the same as pointwise bounded?
-Definition: We say that $\{f_n\}$ is pointwise bounded on $E$ if the sequence $\{f_n(x)\}$ is bounded for every $x\in E$, that is, if there exists a finite-valued function $\phi$ defined on $E$ such that $$|f_n(x)|<\phi(x) \quad (x\in E, n\in \mathbb{N}).$$
-Can anyone explain this to me, please?
-
-REPLY [6 votes]: A bounded function is a function whose range is contained in a finite interval. 
A sequence of bounded functions has the property that each of its individual functions $f_n$ has such a limited range (but the overall range of the collection may still be infinitely large).
-The sequence is pointwise bounded if for every $x$ separately the sequence $f_n(x)$ is contained in a finite interval. That is a different condition.<|endoftext|>
-TITLE: What's exactly the deal with differentials? (Confessions of a desperate calculus student)
-QUESTION [12 upvotes]: So I don't know if I'm the only one to feel this, but ever since I was introduced to Calculus, I've had a slight (if not to say major) aversion to differentials.
-This sort of "phobia" started from the very first moment I delved into integrals. Riemann sums seemed to make sense, though for me they were not enough for justifying the use of "dx" after the integral sign and the function. After all, you could still do without it in practice (what's the need for writing down the base of these rectangles over and over?). I was satisfied by thinking it was something merely symbolic to remind students what they were doing when they calculated definite integrals, and/or to help them remember with respect to what variable they were integrating (kinda like the reason why we sometimes use dy/dx to write a derivative). Or so I thought.
-Having now moved on to differential equations, I'm starting to realize I was completely wrong! I find "dy" and "dx" spread out around equations! How could that be possible if they are just a fancy way of transcribing derivatives and integrals? I imagined they had no meaning outside of those particular contexts (i.e.: dy/dx, and to indicate an integration with respect to x or whatever).
-Could anybody help me out? I'm really confused at the moment. I'd really appreciate it :) (P.S.: Sorry to bother you all on Thanksgiving - assuming some of you might be from the US.)
-EDIT: I don't think my question is a duplicate of Is $\frac{\textrm{d}y}{\textrm{d}x}$ not a ratio?, as that one doesn't address its use in integrals and in differential equations. Regardless of whether dy/dx is a ratio or not; what I'm really asking is why we use dx and dy separately for integration and diff. equations. Even if they're numbers, if they tend to 0, then dx (or dy) * whatever = 0. Am I wrong in thinking that way?
-
-REPLY [4 votes]: If you read a real analysis textbook such as Calculus by Spivak, they manage to develop calculus rigorously while avoiding differentials like "$dx$" and "$dy$" entirely. This is the standard way to make calculus rigorous -- you just avoid using differentials. And indeed, in undergrad differential equations classes, arguments that involve manipulating $dx$ and $dy$ as individual quantities can easily be rephrased to avoid doing this.
-For example, if a differential equations textbook says:
-\begin{align}
-& y \, dy = dx \\
-\implies & \int y \, dy = \int \, dx \\
-\implies & \frac{y^2}{2} = x + C \\
-\end{align}
-we can rephrase this argument as
-\begin{align}
-& y \frac{dy}{dx} = 1 \\
-\implies & \frac{y^2}{2} = x + C,
-\end{align}
-where in the second step we simply took antiderivatives of both sides, using the chain rule in reverse to find an antiderivative of $y \frac{dy}{dx}$. 
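-As a quick sanity check of this rephrasing, a computer algebra system will happily solve the equation written in the second, differential-free form. Here is a sketch using SymPy (the exact shape of the output may vary between versions, but it should be equivalent to $y^2/2 = x + C$):
-
-    import sympy as sp
-
-    x = sp.symbols('x')
-    y = sp.Function('y')
-
-    # y * y' = 1, with no separated dy and dx in sight.
-    for sol in sp.dsolve(sp.Eq(y(x) * y(x).diff(x), 1), y(x)):
-        print(sol)
-    # Expected output (up to the form of the constant):
-    # Eq(y(x), -sqrt(C1 + 2*x))
-    # Eq(y(x), sqrt(C1 + 2*x))
-
-Both branches square to $y^2 = 2x + C_1$, i.e. $\frac{y^2}{2} = x + C$ with $C = C_1/2$.
-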
-But note: even though a rigorous approach might avoid using differentials entirely, there is no need to throw "differential intuition" out the window, because it makes perfect sense if we just think of $dx$ and $dy$ as being extremely tiny but finite numbers, and if we replace $=$ with $\approx$ in the equations we derive. Perhaps the word "infinitesimal" could be thought of as meaning "so tiny that the errors in our approximations are utterly negligible". We can plausibly obtain exact equations "in the limit" (if we are careful).
-There is something aesthetically appealing about treating $dx$ and $dy$ symmetrically, which can perhaps in some situations give us a feeling that the approach using differentials is the "right" way or more beautiful way to do these computations. Compare these two ways of writing an "exact" differential equation:
-\begin{equation}
-I(x,y) \,dx + J(x,y)\, dy = 0
-\end{equation}
-vs.
-\begin{equation}
-I(x,y) + J(x,y) \frac{dy}{dx} = 0.
-\end{equation}
-The first version is aesthetically compelling, because it's more symmetrical; this might help explain why the second version is not seen more often (despite its being easier to understand, in my opinion).
-Of course, for any results derived using "differential intuition", we must later find a rigorous proof to confirm there is no mistake.
-Note also: There are other approaches to making calculus rigorous (based on nonstandard analysis I think) that actually make infinitesimals rigorous. So they manage to embrace $dx$ and $dy$ as legitimate quantities, rather than avoiding them.
-Additionally, in differential geometry, quantities like $dx$ are defined precisely as "differential forms", and some treatments of calculus (like Hubbard & Hubbard) embrace differential forms at an early stage. But you can understand calculus rigorously without using differential forms.<|endoftext|>
-TITLE: The term “elliptic”
-QUESTION [23 upvotes]: There are many things which are called “elliptic” in various branches of mathematics:
-
-Elliptic curves
-Elliptic functions
-Elliptic geometry
-Elliptic hyperboloid
-Elliptic integral
-Elliptic modulus
-Elliptic paraboloid
-Elliptic partial differential equation
-
-I take it that most of them relate to ellipses in one way or another, but the relation is often unclear to me. Does anyone know the etymology of these various terms; i.e. which was derived from which, and why? For example, this answer suggests that elliptic curves derive from elliptic integrals, since the first known elliptic curves were found while studying elliptic integrals. How about the rest?
-I don't expect answers to exhaustively address all terms, but hope that eventually enough answers will contribute to form a cohesive picture of the whole term.
-
-REPLY [2 votes]: When mathematical equations/models distinctly trifurcate on the basis of the sign of a discriminant or characteristic quantity ($<0$, $=0$, $>0$) within one family (i.e., when there is a transformation possible between the positive and negative sign types), we have three classes/types (elliptic, parabolic, hyperbolic) which need not generally be associated with an ellipse.
-Regarding the partitioning nomenclature of surfaces of constant Gauss curvature ($K = +1$ and $-1$), the usual labelling/appellation has never left me comfortable. I hope it is appropriate to mention it here. 
-I think we should call them:
-1) Elliptic sphere
-2) Riemann sphere
-3) Hyperbolic sphere
-and
-4) Elliptic pseudosphere
-5) Central or Beltrami pseudosphere
-6) Hyperbolic pseudosphere
-In the following article (to which I have no access at present)
-Felix Klein, "Vorlesungen über nicht-euklidische Geometrie", 3rd ed. (Berlin, 1928). In German.
-iirc the names are like: 1) spindles 2) sphere 3) cheese-like "tires" 4) conoid 5) pseudosphere 6) rings
-Const_Gauss_Curvtr_names<|endoftext|>
-TITLE: Given any finite string of digits, is it true there exists a perfect square whose leading digits are the string?
-QUESTION [5 upvotes]: Given any finite string of digits, is it true that there exists a perfect square whose leading digits are the string? For example, given the string 123456, can I find a perfect square with leading digits 123456?
-I think the answer is positive, because there are infinitely many integers beginning with the string, so one can keep increasing the number until a perfect square is obtained? I appreciate all advice, thank you
-
-REPLY [2 votes]: There is a well-known algorithm for extracting the square root of a number.
-Treating your desired string of digits as an integer, add $1$,
-then use the standard square-root algorithm to compute the square root to
-a sufficient number of digits.
-The result (an approximate square root)
-is a decimal number whose square is the desired
-string of digits followed by some other digits.
-Multiply this approximate square root by a suitable power of $10$ so that the
-result is an integer.
-Now you have an integer whose square starts with the desired string of digits.
-Full details and justification of the algorithm can be found in an earlier answer to a related question.<|endoftext|>
-TITLE: Problem in understanding the proof of Lebesgue-Radon-Nikodym Theorem
-QUESTION [6 upvotes]: On page $90$ of Folland's Real Analysis book, the Lebesgue-Radon-Nikodym Theorem is given as
-Let $\nu$ be a $\sigma$-finite signed measure and $\mu$ a $\sigma$-finite positive measure on $(X,\mathcal{M})$. There exist unique $\sigma$-finite signed measures $\lambda,\rho$ on $(X,\mathcal{M})$ such that $\lambda\perp \mu$, $\rho\ll\mu$, and $\nu=\lambda+\rho$. Moreover, there is an extended $\mu$-integrable function $f: X\to\mathbb{R}$ such that $d\rho=fd\mu$, and any two such functions are equal $\mu$-a.e.
-To prove this theorem, we can first consider the case that $\nu$ and $\mu$ are "finite" and "positive". Then, we can extend that to the case where $\nu$ and $\mu$ are "$\sigma-$finite" and "positive". Finally, since $\nu = \nu^+ - \nu^-$, we can conclude the result for a signed measure $\nu$.
-But I have a problem in understanding the second step. In this step, we can write $X = \cup_j A_j$ where the $A_j$'s are disjoint and $\nu(A_j)< \infty$ and $\mu(A_j) < \infty$. Then, we can define $\nu_j(E) = \nu(E \cap A_j)$ and $\mu_j(E) = \mu(E \cap A_j)$ where $\nu_j$ and $\mu_j$ are finite. So, from the results of the first step, we know that $\lambda_j\perp \mu_j$, $\rho_j\ll\mu_j$, and $\nu_j=\lambda_j+\rho_j$, $d\rho_j=f_jd\mu_j$. But then, it says that if we define $\lambda = \sum_j \lambda_j$ and $f = \sum_j f_j$, we have $\nu=\lambda+\rho$ where $d\rho = fd\mu$.
-We know that $\rho_j\ll\mu_j$ and $\lambda_j\perp \mu_j$. To show that $\rho\ll\mu$ and $\lambda \perp \mu$, is it true to say that since for every $j$, $\rho_j\ll\mu_j$ and $\lambda_j\perp \mu_j$, then
- we can conclude that $\sum_j\rho_j\ll \sum_j\mu_j$ and $\sum_j\lambda_j\perp \sum_j\mu_j$? 
-
-REPLY [2 votes]: The sets $A_{j}$ are disjoint, and so $\nu_j\perp\nu_k$ for $j\ne k$. Then by the definition of the $\nu_j$
-$$\sum_j\nu_j(E)=\sum_j\nu(E\cap A_j).$$
-Again, the sets $E\cap A_j$ are disjoint, so by countable additivity
-$$=\nu\left(E\cap(\cup_jA_j)\right)=\nu(E\cap X)=\nu(E).$$
-Similar results apply to $\lambda_j$ and $\rho_j=f_jd\mu_j$. You can use this to show that $\lambda$, $\rho$ are mutually singular unsigned measures, and that $\lambda\perp \mu$, $\rho\ll \mu$. Just write them out as the sum, remembering that the $A_j$ are disjoint.
-Does that help? The proof that you have written is complete; what in particular about it doesn't make sense to you?
-EDIT: In response to your edits, we will show that $\rho\ll \mu$, which is equivalent to showing $\rho=fd\mu$. By definition of $\rho$
-$$\rho(E)=\sum_j\rho_j(E).$$
-As $\rho_j=f_jd\mu$:
-$$=\sum_j\int_X\chi_Ef_jd\mu.$$
-As the $f_j$ are positive we can use the Monotone Convergence Theorem to bring the sum inside the integral:
-$$=\int_X\chi_E\sum_jf_jd\mu=\int_X\chi_Efd\mu=\int_Efd\mu.$$
-Thus $\rho=fd\mu$.
-To show $\lambda\perp\mu$, suppose $E$ has $\mu$-measure zero, and define $E_i=E\cap A_i$. The result then follows from writing out $\mu=\sum_i\mu_i$, and noting that $\mu_i\perp\mu_j$ for $i\ne j$.<|endoftext|>
-TITLE: When should one learn about $(\infty,1)$-categories?
-QUESTION [11 upvotes]: I've been doing a lot of reading on homotopy theory. I'm very drawn to this subject as it seems to unify a lot of topology under simple principles. The problem seems to be that the deeper I go the more confused I get, since the number of possible formalisms grows exponentially. The more I read the harder it gets to separate the fundamental principles from the formal manipulations.
-For example: I know the basics of cofiber and fiber sequences and why they work concretely, but I still have no clear idea of how to organize them into a clear picture that would work the same in the homological setting.
-Here are things I've been trying to understand axiomatically in terms of first principles and so far have been unsuccessful:
-
-General notion of a derived functor between categories with weak equivalences.
-Homotopy (co-)limits - cofibrant and fibrant replacements (which, as I understand, are a special case of 1).
-Stable homotopy category.
-
-From the reading I've done on nLab it seems a lot of homotopy theory can be expressed neatly in terms of $(\infty ,1)$-categories. For me it's a pretty good argument to learn that formalism.
-
-1. Should I learn $(\infty ,1)$-category theory?
-2. If not, is there a way to gain a formal unified understanding of homotopy theory which feels less like walking around in a dark room and more like climbing a mountain?
-
-REPLY [2 votes]: I've recently gone through a bit of a journey in learning about $(\infty, 1)$ categories, and whilst I don't know whether OP will any longer be interested in my answer, perhaps I can write a few useful words for future readers. What I'll try to provide here is answers which I think I would have found useful if I'd seen them written down elsewhere.
-
-A derived functor between categories is (as I've seen written in several places) a "homotopy theoretic version of the functor that you started with". The issue with this rather concise statement is getting to grips with what it actually means. 
I think the best way to address this difficulty is with a concrete example:
-Consider the category which consists of diagrams of (CW) topological spaces of the shape $$* \leftarrow * \rightarrow *.$$ I'll denote this category $\textbf{Top}^{\mathcal{D}}.$ In particular, the objects in this category are diagrams of the above shape, and morphisms are object-wise morphisms between the spaces in the diagram. For example, consider the following two diagrams (which are objects in this category):
-$$D^2 \hookleftarrow S^1 \hookrightarrow D^2$$ and $$* \leftarrow S^1 \rightarrow *.$$
-There is a morphism from the first to the second of these diagrams in this category which consists of the identity map on the $S^1$, and the constant map on the $D^2$s.
-Now, one can note that given diagrams of this shape, there is an obvious way to define homotopy equivalent diagrams. They're the ones which have homotopy equivalent objects. Alternatively put, they're the ones which have object-wise homotopy equivalences as morphisms between them. In particular, the two examples which I just gave could be said to be homotopy equivalent diagrams.
-The point about all this is that now I do not only have a category, but I've defined a particular class of morphisms to be "weak equivalences". This is a useful thing to do, as we know from doing it in the category Top.
-Now let us consider the functor $\text{colim}: \textbf{Top}^{\mathcal{D}} \rightarrow \textbf{Top}$. This is the functor which sends a diagram to the space which is its colimit. This is fine, no problem. Diagrams are sent to topological spaces, this is a functor, all well defined.
-However, what happens if I want to insist that objects which are weakly equivalent should be considered isomorphic? This is what we (at least, conceptually) do in Top, after all. We consider homotopy equivalent spaces as "basically the same". Of course, homotopy equivalent spaces are not necessarily isomorphic (homeomorphic), but really we like to let things vary up to homotopy. It makes life simpler. Anyway, this idea of "considering weakly equivalent things to be isomorphic" can be formalised, and is referred to as "localising at the weak equivalences" - we, in a category theoretic sense, add in inverse morphisms for all the weak equivalences in our category so that they actually become isomorphisms.
-Anyway, this is all rather formal and difficult in full generality. The actual construction of the category which you get after localising at the weak equivalences is tricky - it's not easy to immediately see what the morphisms are in your new category (strange stuff happens). Mostly (in particular in Top), it suffices to just say "things that are homotopy equivalent should be considered isomorphic" and move on. This avoids a lot of tricky business. This is fine for most purposes.
-This leads to a quandary, though. In the case above, where we have the colimit functor, what happens after we localise at our weak equivalences (in this case, the object-wise homotopy equivalences)? Can we just expect the colimit functor to work as it did before? Objects which were simply homotopy equivalent are now actually isomorphic in our new category. Any well-defined functor must send isomorphic objects to isomorphic objects. This means that (once we've localised at the weak equivalences) weakly equivalent objects should be sent to weakly equivalent/isomorphic objects.
-However, and this is the big thing: there is absolutely no reason to expect an ordinary functor to do that. 
That is, there's no reason to expect a functor to send homotopy equivalent/weakly equivalent things to homotopy equivalent/weakly equivalent things. Why should functors care about your class of morphisms which you've defined to be weak equivalences and then send objects which are weakly equivalent to objects which are either isomorphic or weakly equivalent in another category? The answer is that they don't.
-Concretely (again, a tip with this stuff is that it is always useful to return to concrete examples when things start getting a bit unclear), in the example above, whilst we have two "homotopy equivalent diagrams", their colimits are $S^2$ and $*$, respectively. Once we've localised at the weak equivalences, our colimit functor is sending two isomorphic objects to non-isomorphic objects. It is therefore not a well defined functor on this new category that we get after localising.
-So what do we do? We have our nice homotopy equivalences/weak equivalences, and our functors ignore them. More precisely, the functor which we started with is not a well defined functor on our new category which we get after localising at the weak equivalences.
-This is where derived functors come in. They are essentially designed (via a clever universal property) to be the "best version" of the functor that we started with which is well defined on the homotopy category (the homotopy category being the name for the thing which we get after localising at the weak equivalences). In our case, the derived functor for colim is the hocolim functor. These derived functors are constructed such that they do respect weak equivalences - weakly equivalent objects get sent to weakly equivalent/isomorphic objects.
-Now, motivation? Well, now that we know that the colimit functor ignores the fact of whether diagrams are/are not homotopy equivalent, we realise that it is basically useless for us (as homotopy theorists) to recognise an object as the colimit of a diagram - the homotopy type of the colimit changes even if the homotopy type of the objects in the diagram doesn't (again, this is exactly what happens in the example above). It can however be useful to find that something is the homotopy colimit of a particular diagram - we know that the homotopy type won't change so long as the homotopy type of the objects in the diagram doesn't change. There is an enormous plethora of examples where this is extremely useful.
-Fibrant/cofibrant replacements of objects? In full generality, very difficult. This entirely depends on the so-called "model structure" that you're working with. What the term "model structure" means is simply that you're in a category where you're doing homotopy theory (i.e. you have defined weak equivalences, fibrations and cofibrations, and these classes of morphisms satisfy some nice properties with respect to one another).
-There's no one-size-fits-all construction. The most useful way I can think of to explain what I mean by this is to give a specific example of what I was playing with when I learnt about this stuff. I happen to have diagrams of spaces which are very very special (Reedy diagrams, where all the morphisms are cofibrant inclusions). The examples which I give above are indeed examples of these very very special diagrams. Now, when we did stuff above, the weak equivalences were simple enough to define. 
However, I didn't actually give the full model structure (the cofibrations and fibrations as well as the weak equivalences) - this is because the cofibrations and fibrations are not easy to define in this category. Not only are the definitions tricky, but they also depend on which model structure you pick - there is more than one available choice! The two model structures which are usually available for diagrams of spaces are the projective and injective model structures. The weak equivalences are defined the same way for both, but the cofibrations and fibrations aren't. Anyway, it's hard to identify which objects are fibrant or cofibrant. In the case of Reedy diagrams (my special case), there exists a special model structure called the Reedy model structure - this is neither the injective nor projective model structure. It's different, and often a lot easier to handle. In a very special case of the Reedy model structure, this model structure coincides with the projective model structure (that is, the fibrations and cofibrations are the same for both). Then if an object is cofibrant in the projective model structure, it's cofibrant in the Reedy model structure and vice versa. It is also a theorem that if a diagram is cofibrant in the projective model structure, then its colimit is homotopy equivalent to its homotopy colimit. It is still a little complicated to determine whether an object is cofibrant in the Reedy model structure, or what a cofibrant replacement should be, but it's okay. The point is, after realising about ten million special conditions, things become tractable for me. Still not easy, but tractable.
-TL;DR: Computing the homotopy colimit/homotopy limit is hard unless you have particularly nice stuff.
-
-Finally, in answer to your second two questions: Yes, you should learn $(\infty, 1)$ category theory - I cannot put into words the breadth of understanding which I now have which I did not have before. Seeing the bigger picture is always beneficial in mathematics, even if it's not directly related to your work. In answer to the second, I did not find a quick fix. I spent a lot of time "wandering around in the dark", and talking to a lot of people, before finally seeing the bigger picture a bit more clearly. And I still don't know much.
-I will recommend some sources (all freely available online):
-
-Daniel Dugger's Primer on Homotopy Colimits
-A whole bunch of Emily Riehl's work. In my case, this was very helpful, but all of her work is really excellent
-nLab. It's worth persisting. There are some well-written gems in there amongst some rather more general difficult stuff.
-I'll obnoxiously point you to the questions which I've asked on MathSE about this stuff. As with almost any topic, there are a vast number of experts here who can answer any questions which you have.
-
-<|endoftext|>
-TITLE: Why does $A \circ {B^{ - 1}} + {A^{ - 1}} \circ B \ge 2{I_{n \times n}}$?
-QUESTION [6 upvotes]: Let $A, B \in M_n$ be positive definite and $A \circ B = \left[ {{a_{ij}}{b_{ij}}} \right]$.
-Why does $A \circ {B^{ - 1}} + {A^{ - 1}} \circ B \ge 2{I_{n \times n}}$ ?
-
-REPLY [11 votes]: First observe that for any two elements $X,Y \in M_{n}$, the Hadamard product $X \circ Y$ is a principal submatrix of $X \otimes Y$. And it is also straightforward to observe that any principal submatrix of a positive definite matrix is also positive definite. 
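-Both observations are easy to test numerically before proving them. Here is a small NumPy sketch (the random construction of the positive definite matrices is my own): it checks that $A \circ B$ appears inside $A \otimes B$ as the principal submatrix with row/column indices $in + i$, and that the smallest eigenvalue of $A \circ B^{-1} + A^{-1} \circ B$ is at least $2$.
-
-    import numpy as np
-
-    rng = np.random.default_rng(0)
-    n = 4
-
-    def random_spd(n):
-        # M @ M.T is positive semidefinite; adding n*I makes it definite.
-        M = rng.standard_normal((n, n))
-        return M @ M.T + n * np.eye(n)
-
-    A, B = random_spd(n), random_spd(n)
-
-    # Hadamard product = principal submatrix of the Kronecker product.
-    idx = [i * n + i for i in range(n)]
-    assert np.allclose(A * B, np.kron(A, B)[np.ix_(idx, idx)])
-
-    S = A * np.linalg.inv(B) + np.linalg.inv(A) * B
-    print(np.linalg.eigvalsh(S).min())  # >= 2, up to rounding
-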
-So to prove $A\circ B^{-1} + A^{-1} \circ B \geq 2 I_n$, it is sufficient to prove $$A\otimes B^{-1} + A^{-1} \otimes B \geq 2 I_{n^2}.$$
-Now $A,B$ are given positive definite, so both $A\otimes B^{-1}$ and its inverse $(A \otimes B^{-1})^{-1} = A^{-1}\otimes B$ are positive definite.
-Now, using the spectral theorem, we will prove that for any positive definite matrix $T \in M_k$, one will have $$T+T^{-1} \geq 2I_k.$$
-Since $T$ is positive definite, there exists an orthonormal eigenbasis with respect to which $T$ will be of diagonal form and $T +T^{-1}$ will look like
-\begin{align}
-\begin{pmatrix}
-t_1 + \frac{1}{t_1} &0 &\cdots &0\\
-0 & t_2 + \frac{1}{t_2} &\cdots &0\\
-\vdots & \vdots &\ddots &\vdots\\
-0& 0& \cdots & t_k + \frac{1}{t_k}
-\end{pmatrix}
-\end{align}
-where the $t_i$'s are the positive eigenvalues of $T$. And since $x+\frac{1}{x} \geq 2$ for all $x>0$, we will have $$T+T^{-1} \geq 2I_k.$$<|endoftext|>
-TITLE: Represent total variation of continuous function by integration of counting function
-QUESTION [5 upvotes]: $f : [a,b] \to \mathbb R$ is continuous; let $M(y)$ be the number of points $x$ in $[a,b]$ such that $f(x)=y$. Prove that $M$ is Borel measurable and $\int M(y)dy$ equals the total variation of $f$ on $[a,b]$.
-This is an exercise in "Real Analysis for Graduate Students" (Richard F. Bass).
-At first, I thought $M(y)=\mu(\{x \mid f(x)=y\})$ where $\mu$ is the counting measure. But I cannot find the relation between Borel measurability and $\mu(\{x \mid f(x)=y\})$.
-Is there anyone who would help me?
-
-REPLY [3 votes]: This is the Banach indicatrix Theorem, which he proved in 1925:
-[1] S. Banach, "Sur les lignes rectifiables et les surfaces dont l'aire est finie", Fund. Math., 7 (1925), pp. 225–236
-http://matwbn.icm.edu.pl/ksiazki/fm/fm7/fm7116.pdf
-An exposition in English of Banach's proof is given in
-Banach Indicatrix Function
-Generalizations are in
-[2] S. M. Lozinskii, "On the Banach indicatrix", Vestnik Leningrad. Univ. Math. Mekh. Astr., 7 : 2, pp. 70–87 (in Russian)
-[3] https://projecteuclid.org/journals/real-analysis-exchange/volume-27/issue-2/Generalization-of-the-Banach-Indicatrix-Theorem/rae/1212412867.full<|endoftext|>
-TITLE: Using the Fano plane for octonion multiplication
-QUESTION [5 upvotes]: The Fano plane is the projective plane over the field $\mathbf Z/2$.
-It can be used to remember octonion multiplication, as nicely explained in John Baez's article on octonions (see http://math.ucr.edu/home/baez/octonions/).
-The picture (taken from Baez's website) is as follows:
-[image of the labeled Fano plane omitted]
-It indicates for example, using the cyclic orderings on the lines, that $e_6 \cdot e_1= e_5$ but that $e_6 \cdot e_4= -e_3$.
-Two natural questions arise for me:
-
-Why did Baez label the circles in such a weird order? There is probably a good reason which is implicit. (A priori, if I permute the labels arbitrarily, I'll get something isomorphic.)
-Also, one could decide to choose other orientations for the arrows.
-So again, why did Baez choose these orientations? And what could we get with other orientations?
-
-REPLY [5 votes]: The circles are labeled in such a way that the lines are given by $(i,i+1,i+3)$ modulo $7$ (interpreted in the interval $1,\dots,7$). This is with good reason, see below.
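-(Before the details, a quick sketch in Python checking the combinatorics of this labeling - it verifies only the incidence structure, not the orientations or signs: every pair of distinct labels should lie on exactly one of the seven lines.)
-
-    from itertools import combinations
-
-    def lbl(k):
-        # reduce mod 7 into the interval 1..7
-        return (k - 1) % 7 + 1
-
-    lines = [{lbl(i), lbl(i + 1), lbl(i + 3)} for i in range(1, 8)]
-
-    for pair in combinations(range(1, 8), 2):
-        containing = [L for L in lines if set(pair) <= L]
-        assert len(containing) == 1, (pair, containing)
-    print(lines)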
For instance, invariance under rotations (of order $3$) is immediately visible from the picture; an element of order $7$ in $G$ is given by mapping $e_i\to e_{i+1}$, and since the orientation is always $(i,i+1,i+3)$ the arrows are invariant under these automorphisms too. (The automorphisms of order $2$ correspond to reflections of the Fano plane, but you also have to introduce some signs like $e_i\mapsto -e_j$, so this is less obvious.) This is somewhat useful since it will immediately provide you with a bunch of automorphisms of the octonions.
-You could in principle draw arbitrary arrows and study the resulting algebra generated by the same mechanism. But it is a fact that if you want the outcome to be a composition algebra, then the quadratic form determines the octonion algebra entirely. (This quadratic form is a so-called Pfister form, and I think this theorem is usually attributed to Pfister. [The theorem is actually due to Jacobson, see the comment by Mariano Suárez-Alvarez here.]) So in some sense, if you specify what $e_i^2$ is (usually one studies $e_i^2=-1$ for all $i$; this corresponds to the so-called "compact real octonions") then you have specified the quadratic form entirely and there is essentially only one way to orient the edges to obtain a composition algebra.
-It is quite possible that drawing different orientations on the edges gives rise to other algebras with interesting properties. I would be very interested if anyone knew of results in that direction.<|endoftext|>
-TITLE: Is this a correct/good way to think about/interpret differentials for the beginning calculus student?
-QUESTION [7 upvotes]: I was reading the answers to this question, and I came across the following answer which seems intuitive, but too good to be true:
-
-Typically, the $\frac{dy}{dx}$ notation is used to denote the derivative, which is defined as the limit we all know and love (see Arturo Magidin's answer). However, when working with differentials, one can interpret $\frac{dy}{dx}$ as a genuine ratio of two fixed quantities.
-Draw a graph of some smooth function $f$ and its tangent line at $x=a$. Starting from the point $(a, f(a))$, move $dx$ units right along the tangent line (not along the graph of $f$). Let $dy$ be the corresponding change in $y$.
-So, we moved $dx$ units right, $dy$ units up, and stayed on the tangent line. Therefore the slope of the tangent line is exactly $\frac{dy}{dx}$. However, the slope of the tangent at $x=a$ is also given by $f'(a)$, hence the equation $$\frac{dy}{dx} = f'(a)$$
-holds when $dy$ and $dx$ are interpreted as fixed, finite changes in the two variables $x$ and $y$. In this context, we are not taking a limit on the left hand side of this equation, and $\frac{dy}{dx}$ is a genuine ratio of two fixed quantities. This is why we can then write $dy = f'(a) dx$.
-
-By Brendan Cordy
-
-REPLY [4 votes]: The conclusion is right, but you should not understand $\frac{dy}{dx}$ that way. When you do what you have done it is written $\frac{\Delta y}{\Delta x}$.
-If you do what you have explained and take the fixed values, observe that you can get closer to $a$ on the tangent and do the same again.
-A derivative of a sufficiently nice function is saying that no matter how close you get to $a$ using your tangent principle, the result is going to be the same. In that sense you are right: you can take any fixed value on the tangent, but fixing something is less general than saying no matter what fixed value on the tangent you take.
-Observe as well that the way you construct a tangent is not something that is logically above the definition of the derivative, so you could say: we know how to construct a tangent and then we can argue about the consequences. In general, drawing a tangent and finding the first derivative are equivalent.
-When a function is sufficiently nice all things are clearer, but you must define differentiability so that it is applicable to a wider range of problems.
-You need to notice that turning $\frac{\Delta y}{\Delta x}$ into $\frac{dy}{dx}$ and approaching one and the same value is at the core of the definition of having a derivative.
-Believe it or not, the way you have defined a derivative is applicable in another theory: the theory of chaos, since for many chaotic curves you cannot draw a tangent. Instead you take two close points, find the distance, and calculate $\frac{\Delta y}{\Delta x}$. In many cases you get a fixed value as you approach $\frac{dy}{dx}$, although it is not possible to draw a tangent in the classical sense. Even when you cannot get a fixed value you make some averaging and still get something useful.
-Basically $\frac{dy}{dx} = \lim\limits_{\Delta x \to 0}\frac{\Delta y}{\Delta x}$ and that is the way you should understand it.
-But yes, you can find a derivative the way you have described for many nicely behaved functions.<|endoftext|>
-TITLE: Proof that product topology of subspace is same as induced product topology
-QUESTION [20 upvotes]: Let's assume that $A\subseteq X$ is the product of $A_{i}\subseteq X_{i}$ $(i\in I)$.
-Then the product topology of $A$ is the same as the topology induced by $X$.
-I have proved this a few different times now, and for this one I need help. I like to try all kinds of proofs.
-The collection $\mathcal{B}_{A}$ has sets $V:=\prod_{i\in I} V_i$, where $V_i\subseteq A_{i}$ for every $i\in I$ and $V_i\neq A_i$ for only finitely many $i$.
-The collection $\mathcal{B}_{X}$ has sets $\prod_{i\in I}U_{i}\cap \prod_{i\in I}A_{i}$, where $U_{i}$ is an element of a basis of $X_{i}$. Notation: $U:=\prod_{i\in I}U_{i}$.
-There is a theorem that says $\mathcal{B}$ is a basis for a topology iff every open $U\subseteq X$ can be written in the form
-$$
-U=\bigcup_{a\in A} B_{a},
-$$
-where $B_{a}\in\mathcal{B}$ for all $a\in A$.
-Now what I am trying to do in this proof attempt is that I want to try out the theorem above.
-There are problems with the notation and that is one main thing where I need tips.
-I hope that you get the idea what I am after here.
-Proof:
-$Z$ belongs in the product topology of $A$.
-$\Leftrightarrow$
-$$
-Z=\bigcup_{i\in I} \big(\prod_{i\in I} V_{i} \big)_{i}\quad\text{where } V_{i}\in \mathcal{T}_{A_{i}}\text{ for all }i\in I.
-$$
-$\Leftrightarrow$
-$$
-Z=\bigcup_{i\in I} \big(\prod_{i \in I}U_{i}\cap A_{i} \big)\quad\text{where }U_{i}\in\mathcal{T_{i}}\text{ for all }i\in I.
-$$
-$\Leftrightarrow$
-$$
-Z=\bigcup_{i\in I} (\prod_{i\in I} U_{i}\cap \prod_{i\in I} A_{i})_{i}
-$$
-$\Leftrightarrow$
-$$
-Z=\big(\bigcup_{i\in I}\big(\prod_{i\in I} U_{i} \big)_{i} \big)\cap \prod_{i\in I} A_{i}
-$$
-$\Leftrightarrow$
-$Z$ belongs to the product topology induced by $X$.
-
-REPLY [44 votes]: Let's start with a definition:
-Let $(X,\mathcal{T})$ be a topological space.
-Let $I$ be an index set, and let $Y_i$ $(i \in I)$ be topological spaces, and let $f_i: X \rightarrow Y_i$ be a family of functions.
-Then $\mathcal{T}$ is called the initial topology with respect to the maps $f_i$
-iff
-
-$\mathcal{T}$ makes all $f_i$ continuous.
-If $\mathcal{T}'$ is any other topology on $X$ that makes all $f_i$ continuous, then $\mathcal{T} \subseteq \mathcal{T}'$.
-
-Or put more shortly: $\mathcal{T}$ is the smallest (coarsest) topology that makes all $f_i$ continuous.
-Remark: it is not very useful to ask for the largest topology to make all $f_i$ continuous: we would always get the discrete topology on $X$. This is the largest topology on $X$ and it makes any map continuous.
-This is in fact a common way to construct a topology on a set $X$, based on functions to spaces $Y_i$ that already have a topology.
-E.g. in linear space theory we can consider an alternative topology on a linear space $X$ as the smallest topology that makes all linear maps (in the original topology) from $X$ to $\mathbb{R}$ continuous. This is called the weak topology on the linear space $X$ (as it is generally weaker (fewer open sets) than the original topology on $X$).
-That this works is based on the following simple result:
-Existence theorem for initial topologies:
-Let $X$ be a set and $f_i : X \rightarrow Y_i$ be a collection of topological spaces and maps. Then there is a topology on $X$ that is initial w.r.t. the maps $f_i$.
-Moreover, this topology is unique and a subbase of the topology is given by
-$\mathcal{S} = \{(f_i)^{-1}[O]: i \in I, O \text{ open in } Y_i\}$.
-(a subbase is a collection $\mathcal{S}$ of subsets of $X$ such that all finite intersections of elements of $\mathcal{S}$ form a base for the topology).
-Proof: Let $\mathcal{T}$ be the topology generated by $\mathcal{S}$ as a subbase. This means that $\mathcal{T}$ is the collection of all sets that can be written as unions of finite intersections from $\mathcal{S}$.
-Then all $f_i$ are continuous, as for all open $O \subseteq Y_i$ the inverse image under $f_i$ of $O$ is in $\mathcal{T}$.
-And if $\mathcal{T}'$ is any topology that makes all $f_i$ continuous, $\mathcal{T}'$ must contain all sets of the form $(f_i)^{-1}[O]$ and so $\mathcal{T}'$ must contain $\mathcal{S}$, and as $\mathcal{T}'$ is closed under finite intersections and unions, we have that $\mathcal{T} \subseteq \mathcal{T}'$, as required.
-The unicity is clear, because if $\mathcal{T}$ and $\mathcal{T}'$ are both initial then $\mathcal{T} \subseteq \mathcal{T}'$ (by 2) applied to $\mathcal{T}$) and $\mathcal{T}' \subseteq \mathcal{T}$ (by 2) applied to $\mathcal{T}'$) and thus $\mathcal{T} = \mathcal{T}'$.
-Example 1: if $A$ is a subset of a topological space $X$, and $i: A \rightarrow X$ is the inclusion map from $A$ to $X$ (defined by $i(x) = x$ for all $x \in A$), then the subspace topology on $A$ is just the initial topology w.r.t. $i$.
-Proof: the subspace topology is defined by $\{O \cap A: O \text{ open in } X\}$. But $i^{-1}[O] = O \cap A$ (if $x \in O$ and $x \in A$, then $i(x) = x$ is in $O$, so $x \in i^{-1}[O]$; and if $x \in i^{-1}[O]$ then $x \in A$ by definition and $x = i(x)$ must be in $O$, so $x$ is in $O \cap A$).
-We see that the subbase $\mathcal{S}$ from the existence theorem is just equal to the subspace topology!
-Remark: in general, when we have just one function $f: X \rightarrow Y$, and $X$ has the initial topology w.r.t. $f$, we get the topology and not just a subbase. Also, if $f$ is moreover injective (one-to-one), then $f$ is called an embedding.
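-To make Example 1 concrete, here is a finite toy illustration (a sketch in Python; the topology `T_X` below is an arbitrary made-up example): generating a topology on $A$ from the subbase of preimages $i^{-1}[O]=O\cap A$ recovers exactly the subspace topology.
-
-    X = {0, 1, 2}
-    T_X = [set(), {0}, {0, 1}, X]      # a topology on X
-    A = frozenset({0, 2})              # the subspace
-
-    # subbase of the initial topology w.r.t. the inclusion i: A -> X
-    subbase = {frozenset(O & A) for O in T_X}
-
-    # close under finite intersections and unions (fine for a finite set)
-    T_A = set(subbase)
-    changed = True
-    while changed:
-        changed = False
-        for s in list(T_A):
-            for t in list(T_A):
-                for new in (s & t, s | t):
-                    if new not in T_A:
-                        T_A.add(new)
-                        changed = True
-
-    assert T_A == {frozenset(O & A) for O in T_X}   # the subspace topology
-    print(sorted(map(sorted, T_A)))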
-Example 2: -If $X_i, i \in I$ is a family of topological spaces and $X$ is the Cartesian product of -the spaces $X_i$, then we have the projection maps $p_j: X \rightarrow X_j$ defined -by $p_j( (x_i)_{i \in I}) = x_j$ for all $j \in I$. -Then the product topology on $X$ is just the initial topology w.r.t. the -maps $p_i$ ($i \in I$). -Proof: the sets $(p_j)^{-1}[O]$ are just the product sets of the form $\prod_i O_i$ -where all $O_i = X_i$ except $O_j = O$. So the finite intersections of the subbase -elements are exactly all such sets $\prod_i O_i$ where finitely many -$O_i$ are some open set in their respective coordinate space -and in all other coordinates $O_i$ are equal to $X_i$; this is precisely the -standard base for the product topology. -So two very common ways of making new spaces from old ones, subspaces and products, -are special cases of initial topologies. In the following I will develop some -basic theory that will allow us to formulate and prove general principles that will -apply to all examples of initial topologies. Some well-known facts can then be seen -together in a common framework. - -The fact that a space $X$ has the initial topology w.r.t. a family of mappings, makes it -easy to recognise continuous functions to $X$. -We have the following useful: -Universal theorem of continuity for initial topologies. -Let $X$ be a space and $f_i : X \rightarrow Y_i$ $(i \in I)$ a family of mappings -and spaces $Y_i$, such that $X$ has the initial topology with respect to the $f_i$. -Let $W$ be any space and $g$ a function from $W$ to $X$. -Then $g$ is continuous iff for all $i \in I$: $f_i \circ g$ is continuous from $W$ to $Y_i$. -Proof: if $g$ is continuous then all $f_i \circ g$ are also continuous, -because all $f_i$ are continuous and compositions of continuous maps are continuous. -Suppose now that $f_i \circ g$ is continuous for all $i$. -Let $S \subseteq X$ be any element of the subbase $\mathcal{S}$ (from the existence theorem), so that -$S = (f_i)^{-1}[O]$ for some open subset $O$ of $Y_i$. -Now $$g^{-1}[S] = g^{-1}[(f_i)^{-1}[O]] = (f_i \circ g)^{-1}[O]$$ which is open in $W$ because $f_i \circ g$ is continuous. -This shows that inverse images of elements from $\mathcal{S}$ are open. -But then as $g^{-1}$ preserves (finite) intersections and unions and as all open -subsets of $X$ are unions of finite intersections of elements from $\mathcal{S}$, -we see that $g^{-1}[O]$ is open for all open subsets $O$ of $X$. Or: $g$ is continuous. -There is a converse to this as well: -Characterisation of the initial topology by the continuity theorem. -Let $X$ be a space, and $f_i: X \rightarrow Y_i$ be a family of spaces and functions. -Suppose that $X$ satisfies the universal continuity theorem in the following sense: - -(*) for all spaces $Z$, for any function $g: Z \rightarrow X$: $(f_i \circ g)$ is continuous for all $i$ iff $g$ is continuous. - -Then $X$ has the initial topology w.r.t. the maps $f_i$. -Proof: the identity on X is always continuous, so applying ($\ast$) from right to left with $g = \operatorname{id}$ -gives us that all $f_i$ are continuous. If $\mathcal{T}'$ is another topology on $X$ -that makes all $f_i$ continuous, then consider the map $g: (X, \mathcal{T}') \rightarrow (X, \mathcal{T})$, -defined by $g(x) = x$. Then all maps $f_i \circ g$ are just the maps $f_i$ as seen between -$(X, \mathcal{T}')$ and $Y_i$ which are by assumption continuous. 
-So by the other direction of (*) we see that $g$ is continuous,
-and thus (as $g(x) = x$, and thus $g^{-1}[O] = O$ for all $O$) we have that
-$\mathcal{T} \subseteq \mathcal{T}'$,
-as required for the second property of the initial topology.
-
-Applications:
-Characterisation of continuity into products:
-a map $f$ into a product $\prod_{i \in I} X_i$ is continuous iff $p_i \circ f$ is continuous for all $i \in I$.
-Or suppose that $X$ is any space, $Y'$ a subspace of a space $Y$, and $g: X \rightarrow Y$ is a map such that $g[X] \subseteq Y'$.
-Then there is the "image restriction" $g': X \rightarrow Y'$ of $g$, defined by $g'(x) = g(x)$.
-Note that, if $i: Y' \rightarrow Y$ is the inclusion, then $Y'$ has the initial topology w.r.t. $i$, and moreover, $g = i \circ g'$. So the theorem we just proved says: $g$ continuous iff $g'$ continuous. This has the intuitive meaning that continuity of $g$ is only determined by $Y'$.
-So if $g$ is an embedding (see above), then $g$ is a continuous bijection between $X$ and $g[X]$, which is also open as a map between these spaces, when we give $g[X]$ the subspace topology: let $O$ be open in $X$. Then $O = g^{-1}[O']$ for some open subset $O'$ of $Y$ (by embedding = initial map), and then $O' \cap g[X]$ is open in $g[X]$, and $g[O] = O' \cap g[X]$, by injectivity of $g$. The reverse is also quite easy to see (exercise): if $g:X \rightarrow g[X] \subseteq Y$ is a homeomorphism, then $g$ is an embedding from $X$ into $Y$. Many books actually define embeddings that way.
-Note that the restriction of $f$ to $A$, $f | A$, is just $f \circ i$, where $i$ is the embedding of $A$ into $X$, so that $f | A$ is continuous as a composition of continuous maps.
-
-Application: diagonal product map.
-Let $X$ be a space and let $Y_i$ ($i \in I$) be a family of spaces, and $f_i : X \rightarrow Y_i$ be a family of functions. Let $Y$ be the product of the $Y_i$, with projections $p_i$.
-Define $f:X \rightarrow Y$, the so-called diagonal product of the $f_i$, as follows:
-$f(x) = (f_i(x))_{i \in I}$. Then $f$ is continuous iff for all $i \in I$, $f_i$ is continuous.
-Proof: immediate from the universal continuity theorem, because for all $i$ we have that $p_i \circ f = f_i$.
-
-Application: product maps.
-Let $f_i : X_i \rightarrow Y_i$ be a family of functions between spaces $X_i$ and $Y_i$, let $X = \prod_{i \in I} X_i$, $Y = \prod_{i \in I} Y_i$, and let
-$f:X \rightarrow Y$ be defined by $f((x_i)_i) = (f_i(x_i))_i$, which is called the product map of the $f_i$.
-Then $f$ is continuous iff for all $i \in I$ we have that $f_i$ is continuous.
-Proof: let $p_i$ be the projections from $Y$ to $Y_i$, and let $q_i$ be the projections from $X$ to the $X_i$.
-Then for all $i$ we have $$p_i \circ f = f_i \circ q_i\text{.}$$
-Suppose that all $f_i$ are continuous.
-Then, as all $q_i$ and $f_i$ are continuous, all maps $p_i \circ f$ are continuous.
-As $Y$ has the initial topology w.r.t. the $p_i$, we have by the universal continuity theorem that $f$ is continuous.
-Now let $f$ be continuous. Fix $i$ in $I$. Also take a point $r= (r_i)_{i \in I}$ from $X$.
-Let $(s_i)_{i \in I}$ in $Y$ be its image $f(r)$.
-Then consider the map $k_i: X_i \rightarrow X$ defined as the diagonal product of the identity on $X_i$ and all the constant maps onto the point $r_j$ for $j \neq i$.
-By the previous application, this is continuous. Moreover, $q_i \circ k_i$ is the identity on $X_i$, also denoted by $\operatorname{id}_{X_i}$.
But note that -$$ f_i = f_i \circ \operatorname{id}_{X_i} = f_i \circ (q_i \circ k_i) = -(f_i \circ q_i) \circ k_i = (p_i \circ f) \circ k_i\text{,}$$ which is continuous, as $f$, $p_i$ and $k_i$ are. So $f_i$ is continuous, for all $i \in I$. - -A very useful general fact is the following: -Transitive law of initial topologies. - -Suppose that we have a family of spaces and maps $f_i : X \rightarrow Y_i$ -( $i \in I$) and for each $i \in I$ an index set $I_i$, and a family -of maps $g_{i,j} : Y_i \rightarrow Z_j$ for $j$ in $I_i$. -Assume that each $Y_i$ has the initial topology w.r.t. the $g_{i,j}$ ($j \in I_i$). -Then $X$ has the initial topology w.r.t. the maps $g_{i,j} \circ f_i$ ($i \in I, j \in I_i$) -iff $X$ has the initial topology w.r.t. the $f_i$ ($i \in I$). - -Proof: -Suppose that $X$ has the initial topology w.r.t. the $f_i$. Call this topology $\mathcal{T}$. -All $g_{i,j}$ are continuous (part of being initial of the topology on $Y_i$) -so all $g_{i,j} \circ f_i$ are continuous. Suppose that $\mathcal{T}'$ is another topology on $X$ -that makes all $g_{i,j} \circ f_i$ continuous. -Then consider the maps $f'_i : (X,\mathcal{T}') \rightarrow Y_i$, defined by $f'_i(x) = f_i(x)$. -So we have $g_{i,j} \circ f'_i = g_{i,j} \circ f_i$ for all relevant indices. -By assumption all $g_{i,j} \circ f'_i = g_{i,j} \circ f_i : (X,\mathcal{T}') \rightarrow Z_j$ -are continuous, and as all $Y_i$ -have the initial topology w.r.t. the $g_{i,j}$, we see that all $f'_i$ are (by the universal -continuity theorem) continuous. If $\operatorname{id}$ is the identity map from -$(X,\mathcal{T}') \rightarrow (X,\mathcal{T})$, then -$f_i \circ \operatorname{id} = f'_i$ for all $i$. We have just seen that all $f'_i$ are continuous, and -as $\mathcal{T}$ is initial w.r.t. the $f_i$, we see that $\operatorname{id}$ -is a continuous map by this same universal continuity theorem. -But the identity from $(X,\mathcal{T}') \rightarrow (X,\mathcal{T})$ is continuous iff -$\mathcal{T} \subset \mathcal{T}'$ -(as $O \in \mathcal{T}$ means $\operatorname{id}^{-1}[O] = O \in \mathcal{T}'$), so -$\mathcal{T}$ is indeed minimal w.r.t. the continuity -of all maps $g_{i,j} \circ f_i$, and $X$ has the initial topology w.r.t. these maps. -Suppose on the other hand that $X$ has the topology $\mathcal{T}$, which is initial w.r.t. the maps -$g_{i,j} \circ f_i$. Let $i$ be in $I$. For all $j \in I_i$ we know that $g_{i,j} \circ f_i$ -is continuous. As $Y_i$ has the initial topology w.r.t. the maps $g_{i,j}$ ($j \in I_i$), -we see again by the universal continuity theorem that $f_i$ is continuous. So all $f_i$ -(from $(X,\mathcal{T})$ to $Y_i$) are continuous. -Let $\mathcal{T}'$ be another topology on $X$ that makes all $f_i$ continuous. -This means that all $g_{i,j} \circ f_i$ are continuous, and so by minimality of $\mathcal{T}$ -(by the definition of initial topology w.r.t. the maps $g_{i,j} \circ f_i$) we see that -$\mathcal{T} \subseteq \mathcal{T}'$. -So $\mathcal{T}$ is the initial topology w.r.t. the $f_i$. - -Two useful applications, to make all this less abstract: -Subspaces of subspaces: -Let $A$ be a subspace of $B$ and $B$ a subspace of $X$ ($A \subseteq B \subseteq X$) then -$A$ is a subspace of $X$ (i.e. it has the subspace topology w.r.t. $X$). -Proof: -Apply the above to $i_B: B \rightarrow X$, $i_{A,B}: A \rightarrow B$, -$i_A: A \rightarrow X$, all basically the identity -with different domains and codomains. So by assumption $A$ has the initial topology w.r.t. 
$i_{A,B}$, and $B$ has the initial topology w.r.t. $i_B$.
-Note that $i_A = i_B \circ i_{A,B}$, so by the transitivity theorem (right to left) we see that $A$ has the initial topology w.r.t. $i_A$, or: $A$ has the subspace topology w.r.t. $X$.
-Products and subspaces:
-
-Let $X_i$ ($i \in I$) be a family of spaces, with subspaces $A_i \subseteq X_i$.
-Then $A = \prod_{i \in I} A_i$ (in the product topology of the subspace topologies) is a subspace of $X = \prod_{i \in I} X_i$ (it has the initial topology w.r.t. the inclusion).
-
-Proof:
-Let $k_i$ be the inclusion mapping from $A_i$ to $X_i$.
-Let $k: \prod_i A_i \rightarrow \prod_i X_i$ be the product mapping (as above).
-Note that $k$ is also the inclusion from $A$ into $X$.
-Again let $p_i$ be the projections from $A$ onto the $A_i$, and $q_i$ the projections from $X$ onto the $X_i$. Then
-$$(\ast)\quad q_i \circ k = k_i \circ p_i \text{ for all } i \in I \text{.} $$
-$A_i$ has the initial topology w.r.t. $k_i$, and $A$ has the initial topology w.r.t. the $p_i$.
-So $A$ has the initial topology w.r.t. the maps $k_i \circ p_i$ ($i \in I$) by the right to left implication of the transitivity theorem. So $A$ has the initial topology w.r.t. the maps $q_i \circ k$ by $(\ast)$. But by the transitivity theorem (the other implication) we see that $A$ has the initial topology w.r.t. the map $k$, which is, as said, the inclusion $A \rightarrow X$. So $A$ has the subspace topology.
-So we see that all these general considerations give a nice proof of "a product of subspaces is a subspace", due to the special nature of these topologies as initial topologies with respect to certain maps.
-Remark: for those who know inverse limits, which are subspaces of products of a certain kind, the above theorem also shows that in fact the inverse limit topology is itself an initial topology w.r.t. the restricted projection maps. This gives rise to a canonical subbase for the inverse limit, from the existence theorem, which is sometimes useful as well.
-As a final remark: a similar theory can be developed for so-called "final" topologies. This applies to situations where we have maps $f_i: X_i\rightarrow X$ where we want $X$ to have the largest topology that makes all $f_i$ continuous.
-Special cases include quotient topologies, disjoint sums and weak topologies induced by subspaces.
-Also here we have an existence theorem, a universal continuity theorem and a transitive law. See here for the details.<|endoftext|>
-TITLE: On a generalization for $\sum_{d|n}rad(d)\phi(\frac{n}{d})$ and related questions
-QUESTION [5 upvotes]: Let $\phi(m)$ be Euler's totient function and $rad(m)$ the multiplicative function defined by $rad(1)=1$ and, for integers $m>1$, by $rad(m)=\prod_{p\mid m}p$, the product of the distinct primes dividing $m$ (it is obvious that it is a multiplicative function, since the definition is $\prod_{p\mid m}\text{something}$ and since empty products are defined to be $1$).
-Denoting $r_1(n)=rad(n)$, and $$R_1(n)=\sum_{d|n}rad(d)\phi(\frac{n}{d}),$$
-I claim that it is possible to prove that this Dirichlet product of multiplicative functions (thus itself multiplicative) is computed as $$\frac{n}{rad(n)}\prod_{p\mid n}(2p-1).$$
-
-Question 1. Can you prove or refute that
- $$R_k(n):=\sum_{d|n}r_k(d)\phi(\frac{n}{d})=\frac{n}{rad(n)}r_{k+1}(n)$$
- for $$r_{k+1}(n)=\prod_{p\mid n}((k+1)p-k),$$ with $k\geq 1$? Thanks in advance.
-
-I excuse this question since I've obtained the first examples and I don't know if I have mistakes.
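-A quick computational check of the claimed identity (a sketch in Python using sympy; an exhaustive small-range check, of course, not a proof):
-
-    from sympy import factorint, totient
-
-    def r(k, n):
-        # r_k(n) = product over p | n of (k*p - (k-1)); in particular r_1 = rad
-        out = 1
-        for p in factorint(n):
-            out *= k * p - (k - 1)
-        return out
-
-    def R(k, n):
-        # Dirichlet convolution of r_k with Euler's phi
-        return sum(r(k, d) * totient(n // d) for d in range(1, n + 1) if n % d == 0)
-
-    for k in range(1, 4):
-        for n in range(1, 200):
-            assert R(k, n) == (n // r(1, n)) * r(k + 1, n), (k, n)
-    print("identity verified for n < 200 and k = 1, 2, 3")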
I know that the proof should be by induction. Since the computations are tedious I would like to see a full proof. On this occasion, if you are sure of your computations, you can give me a summary answer. The following is to obtain a better post; otherwise I should write a new post.
-I know the theorem about Dirichlet products versus Dirichlet series that allows us to write
-$$\sum_{n=1}^\infty\frac{\frac{n}{rad(n)}r_2(n)}{n^s}=\left(\sum_{n=1}^\infty\frac{rad(n)}{n^s}\right)\left(\sum_{n=1}^\infty\frac{\phi(n)}{n^s}\right)=\sum_{n=1}^\infty\frac{\sum_{d\mid n}rad(d)\phi(n/d)}{n^s},$$
-for $\Re s=\sigma>2$ (I've read the notes about this in Apostol's book, and this follows [1]). By a copy and paste from [2] we can write
-$$\frac{\zeta(s)^2}{\zeta(2s)} = \cdots$$
-for $\sigma>2$.
-
-Question 2. Can you state and justify the convergence statement corresponding to the Dirichlet series for $r_k(n)$? I mean assuming that Question 1 is true, and aiming to compute these Dirichlet series for $r_k(n)$ as values of the zeta function, or as inequalities involving these values. Thanks in advance.
-
-I excuse this Question 2 to encourage myself to read and understand well the previous references [1] and [2].
-[1] Ethan's answer, this site, Arithmetical Functions Sum, $\sum_{d|n}\sigma(d)\phi(\frac{n}{d})$ and $\sum_{d|n}\tau(d)\phi(\frac{n}{d})$
-[2] LinusL's question, this site, Average order of $\mathrm{rad}(n)$
-
-REPLY [2 votes]: About your first question, you just have to observe that $R_k(n)$ is multiplicative, being a Dirichlet product of multiplicative functions, and as a consequence so is $r_{k}(n)$, so you can compute it for a prime power and then multiply,
-$$ R_k(n) = \prod_{p^j\vert\vert n} R_k(p^j) $$
-For computing $R_k(p^j)$ we treat the divisor $1$ separately from the rest of the divisors of $p^j$ ($p,p^2,\dots,p^j$), obtaining:
-$$ R_k(p^j) = p^j-p^{j-1} + \sum_{i=1}^j (kp-(k-1))\phi(p^{j-i}) = \\
-p^j-p^{j-1} + (kp-(k-1)) \left( (p^{j-1}-p^{j-2})+ \dots + (p-1) + 1\right) = \\
-(k+1)p^j-kp^{j-1} = \frac{p^{j}}{rad(p^{j})}((k+1)p-k)$$
-And you are done.
-I'm not entirely sure what you are asking in the second question; if I understand correctly, you are interested in the convergence of the Dirichlet series
-$$ \sum_n \frac{r_k(n)}{n^s} $$
-Suppose it converges for some $s=\sigma+it$; then it is easy to show that it converges for any $s$ with real part $>\sigma$, and under that hypothesis it will also have an expression as an Euler product:
-$$ \sum_n \frac{r_k(n)}{n^s} = \prod_p \left( 1 + (kp-(k-1))(p^{-s} + p^{-2s}+\dots) \right)= \\
-\prod_p \left( \frac{1+p^{-s}(kp-k)}{1-p^{-s}} \right)=\zeta(s)A(s)$$
-where $A(s)$ has the Euler product
-$$ A(s) = \prod_p (1+p^{-s}(kp-k)) $$
-If $s$ is real then all the factors are positive, so you can bound it above by
-$$ A(s) < \prod_p (1+p^{-s+1})^k = \left(\sum_n \frac{\vert \mu(n) \vert}{n^{s-1}}\right)^k $$
-This implies that the original series converges for $\sigma >2$. To see that it diverges for $\sigma < 2$ is easy for $k > 1$, as we have $kp-k \geq p$, and so again for $s$ real
-$$ A(s) > \prod_p(1+ p^{-s+1})=\sum_n \frac{\vert \mu(n) \vert}{n^{s-1}} $$
-and the right hand series diverges for $s=2$. It still remains to prove that it diverges for $k=1$; I can't see any simple proof right now, but a limiting argument should work.
-I hope this is what you were looking for.<|endoftext|>
-TITLE: Where to learn about model categories?
-QUESTION [8 upvotes]: A model category (introduced by Quillen in the sixties) is a category equipped with three distinguished classes of morphisms (weak equivalences, fibrations and cofibrations) satisfying some axioms designed to mimic the structure of the category of topological spaces. They provide a framework to study homotopy theory which applies to topological spaces (algebraic topology), chain complexes (homological algebra), and much more. In particular the notion of Quillen equivalence allows one to say when two categories are the same "up to homotopy".
-
-What are good places (book, lecture notes, article...) to learn about model categories?
-
-Assume that the reader is reasonably familiar with category theory, and perhaps with the basics of algebraic topology or homological algebra if some motivation is needed. I would also be interested in more advanced books that deal with finer points of model category theory (localizations come to mind).
-This question is inspired by this previous one where the OP asked whether to learn $\infty$-category theory. I believe knowledge of model categories is rather essential to learn $\infty$-category theory, as for example many coherence results are stated in the form of "Such and such categories of $\infty$-categories are Quillen equivalent". As far as I understand, much of $\infty$-category theory is also designed with model categories in mind, basically "what happens if I'm in a category of bifibrant objects and derived hom spaces". See also the MO question Do we still need model categories?
-
-If possible try to argue why the book/paper you're mentioning is a good place to learn (and not just assert so), please. By now I think the subject is settled enough that there are several good books about it, and I think it would be nice to condense information about all these books in one place, instead of relying on hearsay.
-
-REPLY [6 votes]: W. G. Dwyer and J. Spaliński. “Homotopy theories and model categories”. In: Handbook of algebraic topology. Amsterdam: North-Holland, 1995, pp. 73–126. DOI: 10.1016/B978-044481779-2/50003-1. MR1361887.
-
-This paper is an introduction to the theory of model categories. The prerequisites needed to read it are very limited, and most (if not all) the proofs are detailed. All the basics of model categories are explained: basic definitions, homotopy category, derived functors, homotopy pushouts and pullbacks... Two fundamental examples (topological spaces and bounded below chain complexes) are also made explicit.<|endoftext|>
-TITLE: Ordinary Differential Equations used in Cosmology
-QUESTION [5 upvotes]: I'm just reading over some Cosmology notes and there is a little ODE solution that I am not quite understanding.
-I have an equation of the form:
-$$
-\ddot{R}=-\frac{GM}{R^{2}}
-$$
-Integrating gives:
-$$
-\dot{R}^{2}=+\frac{2GM}{R}+C
-$$
-The notes are essentially saying that this can be solved with a parameter $\theta$:
-[image of the notes omitted]
-Could anyone run through the method for solving ODEs such as this?
-In terms of density this can be written as:
-$$
-\dot{R}^{2}=\frac{4\pi{G}}{3}\rho_{0}R
-$$
-
-REPLY [5 votes]: (This integral also comes up in the brachistochrone problem, by the way.)
-Rearranging it into an integrable form gives
-$$ \frac{\sqrt{R}}{\sqrt{-2GM/C-R}} \dot{R} = \sqrt{-C}, $$
-so set $K=-2GM/C$. Integrating both sides,
-$$ \int_0^R \frac{\sqrt{r}}{\sqrt{K-r}} \, dr = \sqrt{-C}t.
$$
-Doing the substitution $r=K(1-u^2)$, so that $dr=-2Ku\, du$, the integral simplifies to
-$$ \int_{\sqrt{1-r/K}}^1 \frac{\sqrt{K}\sqrt{1-u^2}}{\sqrt{Ku^2}} 2u K \, du = 2K \int_{\sqrt{1-r/K}}^1 \sqrt{1-u^2} \, du, $$
-where I have chosen the sign for $\sqrt{1-r/K}$ that gives a positive $t$, for obvious reasons. But this is $2K$ times the area under the unit circle between the vertical lines $u=\sqrt{1-r/K}=:U$ and $u=1$; some simple geometry shows that we can find this as
-$$ \sqrt{-C}t = K \left(- U\sqrt{1-U^2}+\arccos{U} \right), $$
-i.e. the difference of a sector and a triangle, and if we then make the substitution $\theta=2\arccos{U}$ (i.e. twice the angle from the horizontal axis to the radius through $(U,\sqrt{1-U^2})$), this simplifies to
-$$ t=\frac{K}{2\sqrt{-C}}(\theta-\sin{\theta}); $$
-inverting the equation for $r$ then gives
-$$ r=\frac{K}{2}(1-\cos{\theta}), $$
-and you can then check that the relationships between $K,A,B,C,GM$ all work out. There's probably a nicer way to derive this with the $A$ and $B$, but this is basically how the actual calculation works.<|endoftext|>
-TITLE: Is it possible to convert a divergent series by subtracting a constant?
-QUESTION [5 upvotes]: This question came to my mind after learning about the existence of the Euler-Mascheroni constant. I think that if each term of the divergent series is depressed by a certain amount then maybe the series might become convergent. However, this is just my fancy, and if there is a solid argument that proves that this is impossible then I would greatly appreciate it, as I was planning to do research on this question.
-
-REPLY [3 votes]: Not always. Consider $1+2+3+4+\dotsb$; this diverges no matter what constant you subtract from the terms. That is:
-$$(1-c)+(2-c)+(3-c)+(4-c)+\dotsb$$
-always diverges.<|endoftext|>
-TITLE: Find the sum of the series $1+\frac{1}{3}\cdot\frac{1}{4}+\frac{1}{5}\cdot\frac{1}{4^2}+\frac{1}{7}\cdot\frac{1}{4^3}+\cdots$
-QUESTION [6 upvotes]: Find the sum of the series: $$1+\frac{1}{3}\cdot\frac{1}{4}+\frac{1}{5}\cdot\frac{1}{4^2}+\frac{1}{7}\cdot\frac{1}{4^3}+\cdots$$
-
-REPLY [9 votes]: This can be transformed to $$\sum_{n=1}^{\infty} \frac{2}{(2n-1)2^{2n-1}}$$
-Let $$f(x)=\sum_{n=1}^{\infty} \frac{x^{2n-1}}{2n-1}$$
-Then, we have $f'(x)=\sum_{n=1}^{\infty} x^{2n-2} = \frac{1}{1-x^2}$.
-Therefore, we have $$f(x)=\int \frac{dx}{1-x^2} = \frac{1}{2} \ln \frac{x+1}{1-x}+C$$
-It is clear that $C=0$.
-Now plugging $x=\frac{1}{2}$ into this equation, we have $\sum_{n=1}^{\infty} \frac{1}{(2n-1)2^{2n-1}} = \frac{1}{2} \ln 3$, so the desired answer is double that number, or $\ln 3$.<|endoftext|>
-TITLE: Prove the sequence has infinitely many integers
-QUESTION [5 upvotes]: Prove the sequence $a_n=\frac{p^n}{qn+1},(p,q)=1,p\ge 2$ has infinitely many integers.
-Could someone explain the intuitive approach for this problem?
-
-REPLY [3 votes]: Let $a$ be the order of $p$ modulo $q$; hence $p^a$ is congruent to $1$ modulo $q$. Hence for $k\in \mathbb{N}$ we get $p^{ak}=1+m_k q$. Now $p^x>1+xq$ for $x$ large. Hence for large $k$, we have $m_k>ak$. Now put $n=m_k$; we get $\displaystyle \frac{p^n}{1+nq}=p^{m_k-ak}\in \mathbb{N}$.<|endoftext|>
-TITLE: Is there a number $n$, such that there are $22$ groups of order $n$?
-QUESTION [5 upvotes]: Denote by $N(n)$ the number of groups of order $n$.
-
-Is there a number $n$ with $N(n)=22$ ?
-
-Checking the first $2000$ or so numbers, I noticed that there is no $n\in [1,2000]$ with $N(n)=22$. Does such an $n$ exist ? And if not, why ?
-Generalization : Given a number $k$, can we determine whether there is an $n$ with $N(n)=k$ in a reasonable time ?
-
-REPLY [4 votes]: According to Maple, there are a number of values of $n < 50000$ for which $N(n) = 22$:
-> with( GroupTheory ):
-> select( n -> NumGroups( n ) = 22, [seq]( 1 .. 50000 ) );
-
-    [6321, 9075, 9765, 18135, 18669, 19215, 27075, 31017, 31605, 35685, 40053, 45045, 46431, 47565, 49539]
-
-There is a conjecture that $N$ is surjective, but to my knowledge, there is little progress on this problem.<|endoftext|>
-TITLE: Formula for cos(k*x)
-QUESTION [6 upvotes]: I need to prove that:
-\begin{align}
-c_k =&\; \cos(k\!\cdot\!x)\\
-c_k :=&\; c_{k-1} +d_{k-1}\\
-d_k :=&\; 2d_0\!\cdot\!c_k +d_{k-1}\\
-d_0 :=&\; -2\!\cdot\!\sin^2{(x/2)}\\
-\end{align}
-I've got an explicit formula for $d_k$ which should be:
-\begin{align}
-d_k&=d_0+\sum_{i=1}^k{2\!\cdot\!d_0\!\cdot\!c_i}
-&&\implies&
-c_k &=c_{k-1}+ d_0+\sum_{i=1}^{k-1}{2\!\cdot\!d_0\!\cdot\!c_i}
-\end{align}
-Now I want to do a proof by induction, assuming that $c_p=\cos(p\!\cdot\!x)$ for every $p<k$.<|endoftext|>
-TITLE: Sharper Lower Bounds for Binomial/Chernoff Tails
-QUESTION [8 upvotes]: The Wikipedia page for the Binomial Distribution states the following lower bound, which I suppose can also be generalized as a general Chernoff lower bound.
-$$\Pr(X \le k) \geq \frac{1}{(n+1)^2} \exp\left(-nD\left(\frac{k}{n}\middle\|p\right)\right) \quad\quad\mbox{if }p<\frac{k}{n}<1$$
-Clearly this is tight up to the $(n+1)^{-2}$ factor.
-However computationally it seems that $(n+1)^{-1}$ would be tight as well. Even $(n+1)^{-0.7}$ seems to be fine.
-It's not as easy to find lower bounds for tails as it is for upper bounds, but for the Normal Distribution there seems to be a standard bound:
-$$\int_x^\infty e^{-t^2/2}dt \ge (1/x-1/x^3)e^{-x^2/2}$$
-My question is thus: is the $\frac{1}{(n+1)^2}$ factor the best known? Or can $\frac{1}{n+1}$ be shown to be sufficient?
-Update: Here is the region in which the conjecture holds numerically, by Mathematica:
-[plot omitted]
-
-REPLY [10 votes]: Update: I wrote a note surveying different proofs of varying precisions. This gets all the way down to $1+o(1)$ sharpness.
-I can at least show that $(n+1)^2$ can be improved to $\sqrt{2n}$:
-$$\begin{align}
-\sum_{i=0}^k {n \choose i} p^i (1-p)^{n-i} &\ge
-{n \choose k} p^k (1-p)^{n-k}\\
-&= {n \choose k} \exp\left(-n\left(\tfrac{k}{n} \log\tfrac1p+\left(1-\tfrac{k}{n}\right)\log\tfrac{1}{1-p}\right)\right)\\
-&\ge \frac{\exp(n\text{H}(k/n))}{\sqrt{8k(1-k/n)}}\, \exp(-n(\text{D}(k/n||p) + \text{H}(k/n)))\\
-&= \frac{1}{\sqrt{8k(1-k/n)}}\exp(-n\text{D}(k/n||p))\\
-&\ge \frac{1}{\sqrt{2n}}\exp(-n\text{D}(k/n||p))
-\end{align}$$
-Here I've used the lower bound for the binomial ${n\choose an}\ge\frac1{\sqrt{8na(1-a)}}\exp(n\text{H}(a))$ from http://www.lkozma.net/inequalities_cheat_sheet/ineq.pdf . I'd be happy if anyone can provide a better reference.
-We see that it is sharp in the sense that ${2\choose1} = \frac1{\sqrt{2\cdot2}}\exp(2\text{H}{(1/2)})$. Also by A.S.'s comments we see that the bound is asymptotically sharp, up to a constant dependent on $p$ and $k$.
-Update: R. B. Ash, Information Theory, is a reference for the binomial approximation, and in fact they also derive the exact same bound for the distribution.<|endoftext|>
-TITLE: Proof that the solutions are algebraic functions
-QUESTION [8 upvotes]: I am looking at the following:
-[images of the theorem and its proof omitted]
-I haven't really understood the proof...
-Why do we consider the differential equation $y'=P(x)y$ ?
-Why does the sentence "If $(3)_{\mathfrak{p}}$ has a solution in $\overline{K}_{\mathfrak{p}}(x)$, then $(3)_{\mathfrak{p}}$ also has a solution $y_{\mathfrak{p}}$ in $\overline{K}_{\mathfrak{p}}[x]$." hold?
-Also, why do we put $\displaystyle{y_{\mathfrak{p}}=\prod_i (x-\overline{\alpha}_i)^{c_i}}$ ?
-$$$$
-EDIT1:
-I have now found the following sentence:
-The constant field of the differential field $k((x))$ is $k((x^p))$.
-Hence if $(1)_p$ has a solution in $k((x))$, multiplication by a suitable constant yields a solution in $k[[x]]$.
-Do we maybe have the following?
-We suppose that $(3)_{\mathfrak{p}}$ has a solution in $\overline{K}_{\mathfrak{p}}(x)$.
-The constant field of $\overline{K}_{\mathfrak{p}}(x)$ is $\overline{K}_{\mathfrak{p}}(x^p)$.
-So if we multiply the equation by a suitable constant, it follows that $(3)_{\mathfrak{p}}$ has a solution also in $\overline{K}_{\mathfrak{p}}[x]$.
-$$$$
-EDIT2:
-Could you explain to me how exactly we conclude that $\beta_i \in \mathbb{Q}$ ?
-
-REPLY [8 votes]: Why does the sentence "If $(3)_\mathfrak p$ has a solution $y_\mathfrak p$ in $\overline K_\mathfrak p(x)$, then $(3)_\mathfrak p$ also has a solution $\tilde y_\mathfrak p$ in $\overline K_\mathfrak p[x]$." hold?
-
-Assume there is a rational solution, which can always be written $y_\mathfrak p = \prod_i(x-\alpha_i)^{c_i}$ for some $c_i \in \mathbb Z\setminus 0$.
-Assume that $\overline K_\mathfrak p$ has characteristic $p > 0$ (i.e. $p\in\mathfrak p$). The equation $(D-Q)y = 0$ is linear, and the solutions form a module over $\overline{K}_\mathfrak p[x^p]$ just because $D(x^p) = 0$. Then for all $n > 0$, the elements $(x-\alpha_i)^{p^n} = x^{p^n} - \alpha_i^{p^n}$ are in $\overline{K}_\mathfrak p[x^p]$, so you can obtain a new solution $\tilde y_\mathfrak p$ just by clearing the denominators of $y_\mathfrak p$, i.e. by taking
-$$\tilde y_\mathfrak p = y_\mathfrak p\prod_i (x-\alpha_i)^{p^{n_i}}$$ where the $n_i$ are chosen so that $p^{n_i} \geq -c_i$.
-Note: I haven't said anything about what happens when $\mathfrak p$ does not have a residue field of positive characteristic.
-ADDED:
-
-Explanation for $\beta \in \mathbb Q$.
-
-He uses ($\star$) to conclude that $\beta \in \mathbb Q$. The way one should use ($\star$) is to take the number field in ($\star$) to be $\mathbb Q(\beta)$, so that the conclusion of ($\star$) is $\mathbb Q(\beta) = \mathbb Q$, i.e. $\beta \in \mathbb Q$. So, we just need to see why almost all primes in $\mathbb Q(\beta)$ are of degree one. Let's call $F := \mathbb Q(\beta)$.
-Having already passed to a suitable extension field such that $\beta \in K$, there is an inclusion $F \hookrightarrow K$. Also, from Honda's proof it is known that $\beta$ is a rational integer modulo $\mathfrak p_K$ for all but finitely many primes of $K$.
-Denote by $\mathfrak p_F := F \cap \mathfrak p_K$ the prime of $F$ lying under $\mathfrak p_K$. Since the map $\mathcal O_F/\mathfrak p_F \to \mathcal O_K/\mathfrak p_K$ preserves $\beta$ (i.e. sends $\beta +\mathfrak p_F \mapsto \beta + \mathfrak p_K$), $\beta$ is also a rational integer modulo $\mathfrak p_F$ (nothing special happening here: morphisms of fields are always injective, the preimage of an integer is an integer, etc.).
-We have just checked that $\beta$ is a rational integer modulo $\mathfrak p$ for almost all primes $\mathfrak p \subseteq \mathcal O_F$. (The only primes for which $\beta$ is not a rational integer are the ones lying under the finitely many primes in $K$ from above.)
-All that remains is to ask: if $\beta$ is a rational integer modulo $\mathfrak p_F$, then why must $\mathfrak p_F$ have degree one? Basically this is just because every element $a \in F$ (in particular, every element of $\mathcal O_F$) can be written as a sum $$a = \sum_{i=0}^n a_i\beta^i,$$ which, modulo $\mathfrak p_F$, becomes a sum of rational integers. So $\mathcal O_F/\mathfrak p_F$ is equal to its ring of rational integers, i.e. is of degree one.
-ADDED: (miscellaneous questions)
-
-Could you explain to me what a residue field is?
-
-We start with $\mathfrak p$ a prime in $K$ - this actually means a prime ideal in the ring of algebraic integers $\mathcal{O}_K\subseteq K$. To get the residue field, first localize to $(\mathcal O_K)_\mathfrak p$ so that $\mathfrak p_\mathfrak p \subseteq (\mathcal O_K)_\mathfrak p$ is now a maximal ideal, then quotient by $\mathfrak p_\mathfrak p$ to get a field. This is what is usually called the residue field. It looks like Honda is then taking the algebraic closure of this.
-For example, maybe your number field is trivial, i.e. just $\mathbb Q$; then the ring of integers is $\mathbb Z$, and your prime might be $(p)$ for some prime number $p$. Then the residue field is $\mathbb Z_{(p)}/(p)_{(p)} \cong \mathbb F_p$.
-Or maybe your number field is $\mathbb Q(i)$, with algebraic integers $\mathbb Z[i]$. It has primes of degree $1$ like $\mathfrak p = (1+i)$ with residue field $\mathbb Z[x]/(x+1, x^2 +1) = \mathbb Z[x]/(x+1, 2) \cong \mathbb F_2$, and $\mathfrak p = (2+i)$ with residue field $\mathbb F_5$, and primes of degree 2 like $\mathfrak p = 0$ with residue field $\mathbb Q(i)$ and $\mathfrak p = (3)$ with residue field $\mathbb Z[i]/3 = \mathbb F_3(i) \cong \mathbb F_9$.
-
-Why can a rational solution be written $y_\mathfrak p= \prod_i (x-\alpha_i)^{c_i}$ for some $c_i\in \mathbb Z\setminus 0$?
-
-There's no real content here: because we have passed to the algebraic closure $\overline K_\mathfrak p$, the polynomials split into products of linear factors.
-
-What does the equation $(D-Q)y=0$ represent?
-
-I mistakenly called the rational function $P$ from the problem $Q$ instead. If the original equation is $y' = Qy$, and $D$ denotes the differentiation operator, then the equation is $Dy = Qy$, or $(D-Q)y = 0$.
-
-What does it mean that the solutions form a module over $\overline K_\mathfrak p[x^p]$?
-
-It just means that (1) if $y_1, y_2$ are both solutions to $(D-Q)y = 0$, then so are $y_1 \pm y_2$ (the solutions form an abelian group), and (2) if $y_1$ is a solution, then $y_2 := x^py_1$ is also a solution. (This is because $D(x^py) = x^pD(y)$.) This implies that given any $f \in \overline K_\mathfrak p[x^p]$ and solution $y_1$, $y_2 := f\cdot y_1$ is also a solution.<|endoftext|>
-TITLE: Prove that $ \sqrt{\frac{a}{b+3} } +\sqrt{\frac{b}{c+3} } +\sqrt{\frac{c}{a+3} } \leq \frac{3}{2} $
-QUESTION [6 upvotes]: For all $a, b, c>0$ with $a+b+c=3$, prove that $$ \sqrt{\frac{a}{b+3} } +\sqrt{\frac{b}{c+3} } +\sqrt{\frac{c}{a+3} } \leq \frac{3}{2} $$
-I tried the Cauchy-Schwarz inequality for the LHS and I get
-$ [\left( \sqrt{a} \right) ^{2}+\left( \sqrt{b} \right) ^{2}+\left( \sqrt{c} \right) ^{2}]\left[ \left( \frac{1}{\sqrt{b+3} } \right) ^{2}+\left( \frac{1}{\sqrt{c+3} } \right) ^{2}+\left( \frac{1}{\sqrt{a+3} } \right) ^{2}\right] \geq \left( \frac{\sqrt{a} }{\sqrt{b+3} } +\frac{\sqrt{b} }{\sqrt{c+3} } +\frac{\sqrt{c} }{\sqrt{a+3} } \right) ^{2}$
-Then by AM-GM I get the maximum value $abc=1$, and the inequality gives the value $3$...
How can I prove the inequality?
-
-REPLY [6 votes]: First, we use the Cauchy-Schwarz inequality:
-$$
-A^2=\left(\sqrt{\frac{a}{b+3} } +\sqrt{\frac{b}{c+3} } +\sqrt{\frac{c}{a+3} }\right)^2 =\\
-\left(\sqrt{\frac{a}{(a+3)(b+3)}(a+3) } +\sqrt{\frac{b}{(b+3)(c+3)} (b+3)} +\sqrt{\frac{c}{(c+3)(a+3)}(c+3) }\right)^2 \leq \\
-\left({\frac{a}{(a+3)(b+3)} } +{\frac{b}{(b+3)(c+3)} } +{\frac{c}{(c+3)(a+3)} }\right)\times (a+3+b+3+c+3) .
-$$
-Since $a+3+b+3+c+3=12$, it is enough to prove:
-$$
-\left({\frac{a}{(a+3)(b+3)} } +{\frac{b}{(b+3)(c+3)} } +{\frac{c}{(c+3)(a+3)} }\right)\leq \frac 3{16}. (\star)
-$$
-From this step on, I went the ugly way, since the manipulation was not so frightening. With a simple and nice proof of this step, the whole proof would become much easier.
-Anyway, after the simple multiplications, we get to the following inequality:
-$$
-18-7(ab+ac+bc)+3abc\geq 0.
-$$
-This can be further simplified, using facts such as $a^2+b^2+c^2=9-2(ab+ac+bc)$ and the following famous identity:
-$$
-a^3+b^3+c^3-3abc=(a+b+c)(a^2+b^2+c^2-ab-ac-bc).
-$$
-We can further simplify the inequality to the following:
-$$
-a^3+b^3+c^3\geq a^2+b^2+c^2.
-$$
-This follows from the following steps:
-
-Cauchy-Schwarz: $(a+b+c)(a^3+b^3+c^3)\geq (a^2+b^2+c^2)^2 \implies 3(a^3+b^3+c^3)\geq (a^2+b^2+c^2)^2$
-Cauchy-Schwarz: $(1+1+1)(a^2+b^2+c^2)\geq (a+b+c)^2 \implies (a^2+b^2+c^2)\geq 3.$
-Combining the two previous steps:
-$$
-3(a^3+b^3+c^3)\geq (a^2+b^2+c^2)^2\geq 3(a^2+b^2+c^2).
-$$
-And therefore, this proves $(\star)$.<|endoftext|>
-TITLE: This infinitely nested root gives me two answers $ \sqrt{4+\sqrt{8+\sqrt{32+\sqrt{512+\sqrt{\frac{512^2}{2}+\sqrt{...}}}}}} $
-QUESTION [30 upvotes]: I am trying to evaluate
-$$ \sqrt{4+\sqrt{8+\sqrt{32+\sqrt{512+\sqrt{\frac{512^2}{2}+\sqrt{...}}}}}} $$
-where the numbers inside the roots satisfy
-$$ a_{n+1}=\frac{a_n^2}{2}$$
-And I found two ways to solve it that give different answers. I believe one of them is not right, but I don't know which and why. Please help.
-Method-1.
-$$x+1=\sqrt{x^2+2x+1}=\sqrt{x^2+x+\sqrt{x^2+2x+1}}\\
-=\sqrt{x^2+x+\sqrt{x^2+x+\sqrt{x^2+2x+1}}}=...$$
-$x=1\rightarrow$
-$$2=\sqrt{2+\sqrt{2+\sqrt{2+\sqrt{2+\sqrt{2+\sqrt{...}}}}}}$$
-Therefore
-$$\begin{align}
-&4=2\sqrt{2+\sqrt{2+\sqrt{2+\sqrt{2+\sqrt{2+\sqrt{...}}}}}}\\
-&=\sqrt{2^2\cdot2+2^2\cdot\sqrt{2+\sqrt{2+\sqrt{2+\sqrt{2+\sqrt{...}}}}}}\\
-&=\sqrt{8+\sqrt{2^4\cdot2+2^4\cdot\sqrt{2+\sqrt{2+\sqrt{2+\sqrt{...}}}}}}\\
-&=\sqrt{8+\sqrt{32+\sqrt{512+\sqrt{\frac{512^2}{2}+\sqrt{...}}}}}\end{align}$$
-Finally,
-$$\sqrt{4+4}=\sqrt{4+\sqrt{8+\sqrt{32+\sqrt{512+\sqrt{\frac{512^2}{2}+\sqrt{...}}}}}}=2\sqrt2$$
-Method-2.
-$$\begin{align}
-&x+2=\sqrt{x^2+4x+4}=\sqrt{x^2+3x+\sqrt{x^2+8x+16}}\\
-&=\sqrt{x^2+3x+\sqrt{x^2+7x+\sqrt{x^2+32x+256}}}\\
-&=\sqrt{x^2+3x+\sqrt{x^2+7x+\sqrt{x^2+31x+\sqrt{x^2+512x+256^2}}}}...
-\end{align}$$
-$x=1\rightarrow$
-$$3=\sqrt{4+\sqrt{8+\sqrt{32+\sqrt{512+\sqrt{\frac{512^2}{2}+\sqrt{...}}}}}} $$
-Alternative Method-2.
-$$\begin{align}
-&3=\sqrt9=\sqrt{4+5}=\sqrt{4+\sqrt{25}}=\sqrt{4+\sqrt{8+17}}\\
-&=\sqrt{4+\sqrt{8+\sqrt{2\cdot16+16^2+1}}}\\
-&=\sqrt{4+\sqrt{8+\sqrt{32+\sqrt{2\cdot16^2+16^4+1}}}}\\
-&=\sqrt{4+\sqrt{8+\sqrt{32+\sqrt{512+\sqrt{2\cdot16^4+16^8+1}}}}}\\
-&=\sqrt{4+\sqrt{8+\sqrt{32+\sqrt{512+\sqrt{\frac{512^2}{2}+\sqrt{2\cdot16^8+16^{16}+1}}}}}}=...
-\end{align}$$
-So, I have two answers $2\sqrt2$ and $3$. Which one is correct, and what's the problem in the other solution? Thanks.
-
-Now I think I understand. Thanks for all the answers.
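-A quick numeric sanity check of the $2\sqrt2$ value (a sketch in Python; the depth is capped at $10$ because $a_{n+1}=a_n^2/2$ grows doubly exponentially and soon overflows floating point): truncating the nested radical with zeros gives values approaching $2\sqrt2$, not $3$.
-
-    from math import sqrt
-
-    def truncated(depth):
-        a = [4]
-        for _ in range(depth - 1):
-            a.append(a[-1] ** 2 // 2)   # 4, 8, 32, 512, 512^2/2, ...
-        x = 0.0
-        for term in reversed(a):
-            x = sqrt(term + x)
-        return x
-
-    for depth in range(1, 11):
-        print(depth, truncated(depth))
-    print("target:", 2 * sqrt(2))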
-Let me post this method-3 just to show that it could be any number $\geq2\sqrt2$ and conclude this topic.
-Method-3.
-$$\begin{align}
-&\sqrt{10}=\sqrt{4+6}=\sqrt{4+\sqrt{36}}=\sqrt{4+\sqrt{8+28}}\\
-&=\sqrt{4+\sqrt{8+\sqrt{32+752}}}\\
-&=\sqrt{4+\sqrt{8+\sqrt{32+\sqrt{512+564992}}}}\\
-&=\sqrt{4+\sqrt{8+\sqrt{32+\sqrt{512+\sqrt{\frac{512^2}{2}+\sqrt{...}}}}}}=...
-\end{align}$$
-
-REPLY [6 votes]: As Jason said in the comments, infinite radicals are not well defined. Let me show that you can in fact get any $x\geq 2\sqrt{2}$ using an infinite radical looking like yours.
-In other words, for any such $x$, there exists a sequence $(c_n)$ such that
-$$x=\sqrt{4+\sqrt{8+\sqrt{32+...+\sqrt{2^{\alpha_n}+c_n}}}}$$
-where $\alpha_n=2^n+1$ (your sequence if I'm not wrong).
-Well, construct the sequence $(c_n)$ inductively. Put $c_0=x^2-4$, so that
-$x=\sqrt{4+c_0}$.
-Then $c_1=(x^2-4)^2-8$ and so on.
-The particular thing about $2\sqrt{2}$ is that this is the limit of the sequence $(u_n)$ where
-$$u_n=\sqrt{4+\sqrt{8+...+\sqrt{2^{\alpha_n}}}}$$
-so that if $x\geq 2\sqrt{2}$, the $c_i$'s will always be positive, and the sequence $(c_i)$ will be well defined.<|endoftext|>
-TITLE: Complex equation has two roots inside $|z|=1$
-QUESTION [8 upvotes]: Prove that the equation $z^3[\exp(1-z)]=1$ has exactly $2$ roots inside $|z|=1$.
-I have tried applying Rouché's theorem, without any result...
-
-REPLY [6 votes]: Put $g(x)=x^3\exp(1-x)$. We easily show that $g$ is strictly increasing on $[1,3]$, hence $g(x)>1$ for $x>1$ close to $1$.
-Now let $\rho>1$ be close to $1$. Put $h(z)=z^3\exp(1-z)$, $f(z)=h(z)-1$. On $|z|=\rho$, we have $|f(z)-h(z)|=1$, and with $z=x+iy$, $|h(z)|=\rho^3\exp(1-x)\geq \rho^3\exp(1-\rho)>1$. Hence Rouché's theorem shows that $f(z)=0$ has exactly $3$ solutions in $|z|\leq \rho$. But one of them is $z=1$ (it is easy to see that $z=1$ is a simple root), so two solutions remain. As $\rho$ can be taken arbitrarily close to $1$, we see that these two solutions satisfy $|z|\leq 1$. Suppose that for one of them we have $|z|=1$. Then $|z^3\exp(1-z)|=1=\exp(1-x)$ shows that $x=1$, and hence $z=1$. Hence the two solutions are in $|z|<1$. We are done.<|endoftext|>
-TITLE: Do the ZF-provable forcing principles differ from the ZFC-provable forcing principles?
-QUESTION [7 upvotes]: In "The Modal Logic of Forcing", Joel David Hamkins and Benedikt Löwe show that the ZFC-provable forcing principles are exactly those of the modal logic S4.2 (interpreting $\Diamond \phi$ as asserting that $\phi$ is forceable).
-Do the $\mathsf{ZF}$-provable principles of forcing obey a different modal logic?
-
-REPLY [5 votes]: The definition of forcing is the same with and without the axiom of choice. And the truth lemma holds with and without the axiom of choice. Namely,
-$$p\Vdash_\Bbb P\varphi\iff\text{For every }V\text{-generic } G\subseteq\Bbb P\text{ with }p\in G: V[G]\models\varphi$$
-You can also consider iterations without choice; at least a two-step iteration presents absolutely no difficulties (compared to all sorts of non-finite supports and so on). So the proof that $\sf S4.2$ is a subset of the modal logic of forcing in $\sf ZF$ is immediate.
-In the other direction, clearly $\sf ZF$ cannot prove more than $\sf ZFC$. So you get $\sf S4.2$ immediately as a result.<|endoftext|>
-TITLE: Interpretation of $d\phi(z)$ in differential geometry
-QUESTION [5 upvotes]: In "Exercises and Solutions in Mathematics", Ta-Tsien, 2nd Edition, exercise 3343.
-Statement of the exercise
-Let $(\mathbb{H}, g)$ be the two-dimensional hyperbolic space, where
-\begin{equation}
-\mathbb{H} = \{(x, y) \in \mathbb{R}^2 : y > 0\}
-\end{equation}
-is the upper half plane of $\mathbb{R}^2 = \mathbb{C}$ and the metric $g$ is given by
-\begin{equation}
-g = \frac{dx^2 + dy^2}{y^2}
-\end{equation}
-Suppose $a$, $b$, $c$ and $d$ are real numbers such that $ad - bc = 1$. Define
-\begin{equation}
-\phi(z) = \frac{az + b}{cz + d}
-\end{equation}
-for any $z = x + \sqrt{-1} y$. Prove that $\phi$ is an isometry of $(\mathbb{H}^2, g)$.
-Statement of the answer
-To prove that $\phi$ is an isometry, the authors compute:
-\begin{equation}
-d\phi = \frac{a(dz)(cz+d) - c(dz)(az+b)}{(cz + d)^2}
-\end{equation}
-and after some computations, conclude that since:
-\begin{equation}
-\Vert d\phi(z) \Vert^2 = \frac{d\phi(z) d\overline{\phi(z)}}{[\operatorname{Im} \phi(z)]^2}
-= \frac{dx^2 + dy^2}{y^2} = \Vert dz \Vert^2
-\end{equation}
-then $\phi$ is an isometry.
-My question
-What is the mathematical nature of the operator $d$ in this context ? It seems to me that in order to prove that $\phi$ is an isometry, one has to prove that:
-$g = \phi^* g$
-i.e. that the pullback of $g$ by $\phi$ is $g$. In the context of differential geometry, I have only seen $d\phi$ standing for the exterior derivative of $\phi$ or for the differential map associated to $\phi$.
-In a more "intuitive" manner, considering small variations of $\phi(z)$ and $z$, I understand that an isometry maps a small increment $dz$ to a small increment $d\phi(z)$. But I would like to understand the differential geometry meaning.
-Is $dz$ a differential form in the context of this exercise ? Then what is the precise meaning of $\Vert dz \Vert^2$ ? Is $d\phi(z)$ of the same nature as $dz$ ? Is it a vector-valued differential form ?
-
-REPLY [2 votes]: $\newcommand{\vv}{\mathbf{v}}\newcommand{\Cpx}{\mathbf{C}}\DeclareMathOperator{\Im}{Im}$Since $\phi:\Cpx \to \Cpx$ is meromorphic, $d\phi$ may be viewed as the ordinary derivative of a complex-valued function, as the exterior derivative of a complex-valued $0$-form, or as the total differential of a mapping (surely among other interpretations).
-To bridge the classical differential calculus and modern language, let $z$ and $w$ denote complex coordinates, and write $w = \phi(z)$. The coordinate $1$-forms satisfy $dw(\vv) = dz(\vv) = \vv$ for every tangent vector $\vv$.
-
-The "classical" chain rule $dw = \phi'(z)\, dz$ has the modern expression
-$$
-\phi^{*}(dw) = \phi'(z)\, dz.
-$$
-Indeed, for every tangent vector $\vv$,
-$$
-\phi^{*}(dw)(\vv)
- = dw(\phi_{*}\vv)
- = \phi_{*}\vv
- = d\phi(z)(\vv)
- = \phi'(z)\, dz(\vv).
-$$
-If we write $w = u + iv$, then $dw = du + i\, dv$. The product
-$$
-dw\, d\bar{w} = du^{2} + dv^{2}
-$$
-refers to the quadratic form on $\Cpx$ that sends a tangent vector $\vv$ to
-$$
-dw(\vv)\, d\bar{w}(\vv) = \vv\bar{\vv} = \|\vv\|^{2}.
-$$
-(Compare the tensor product
-$$
-dw \otimes d\bar{w} = (du \otimes du + dv \otimes dv) - 2i\, du \wedge dv,
-$$
-which accepts two vectors as input, has non-zero imaginary part, etc.)
-For the mapping at hand, $w = \phi(z) = \dfrac{az + b}{cz + d}$, and therefore $\phi^{*}(dw) = \dfrac{dz}{(cz + d)^{2}}$.
Since $\|dw\|^{2} := g = \dfrac{dw\, d\bar{w}}{(\Im w)^{2}}$ as a quadratic form, the final line of the author's computation asserts -$$ -\phi^{*}g - = \|\phi^{*}(dw)\|^{2} - = \frac{\phi^{*}(dw)\, \phi^{*}(d\bar{w})}{(\Im w)^{2}} - = \frac{\|\phi'(z)\|^{2}\, dz\, d\bar{z}}{(\Im \phi(z))^{2}} - = \frac{dz\, d\bar{z}}{(\Im z)^{2}} - = \|dz\|^{2} - = g. -$$<|endoftext|> -TITLE: Interchanging Malliavin derivative with Lebesgue integral -QUESTION [6 upvotes]: I am reading Oksendal's book "Malliavin calculus for Levy processes with application to finance". In the proof of Lemma 4.9 (page 47), the author interchanges the Malliavin derivative $D_t$ with the Lebesgue integral $ds$. -$$D_t\int_0^T u^2(s)\,ds = 2\int_0^T u(s)D_tu(s)\,ds$$ -Could anyone shed any light? - -REPLY [2 votes]: We'll prove this in two steps. First, we pull the derivative operator $\mathfrak{D}$ inside the (Lebesgue) integral, and then apply a "Chain Rule". Since the book by Øksendal et al. is referenced, I won't prove the "Chain Rule", but establish the interchange of the derivative and the integral through some density arguments. -The proof is not that hard, but the notation annoying though quite intuitive; I'll provide more clarification if needed. -Consider any process $U\in\mathscr{L}^2(\mathopen{[}0,\infty\mathclose{[};\mathscr{W}^{1,2})$, where $\mathscr{W}^{1,2}$ is the usual Watanabe–Sobolev space of Malliavin-differentiable functions (Nualart's and Øksendal's $\mathbb{D}^{1,2}$, but I hate the hollow 'D', and this is what Hairer uses). Note that this is the Hilbert space of processes $U\in\mathscr{L}^2(\varOmega\times\mathopen{[}0,\infty\mathclose{[};\mathbb{R})$ for which - -$U(t)\in \mathscr{W}^{1,2}$, for (almost) all $t\in\mathopen{[}0,\infty\mathclose{[}$; -there exists a version of the process $(\omega,\tau,t)\mapsto\mathfrak{D}_\tau U(\omega,t)$ in $\mathscr{L}^2(\varOmega\times\mathopen{[}0,\infty\mathclose{[}^2;\mathbb{R})$. - -It can be shown that processes $U_\nu$ of the form -$$U_\nu(t):= \sum_{j=1}^\nu X_j h_j(t),$$ -where $X_j$ is a smooth and cylindrical random variable and $h_j\in\mathscr{L}^2(\mathopen{[}0,\infty\mathclose{[};\mathbb{R})$, are dense in $\mathscr{L}^2(\mathopen{[}0,\infty\mathclose{[};\mathscr{W}^{1,2})$. In fact, $\mathscr{L}^2(\mathopen{[}0,\infty\mathclose{[};\mathscr{W}^{1,2})$ is defined as the closure of the space of such process under a suitable norm which we will not need here. -This means that we can find a sequence of such processes $(U_\nu)_{\nu\in\mathbb{N}}$ such that -$$U_\nu\to U~\text{in}~\mathscr{L}^2(\varOmega;\mathscr{L}^2(\mathopen{[}0,\infty\mathclose{[};\mathbb{R})),~\text{as}~\nu\to\infty,$$ -and -$$\mathfrak{D}_\cdot U_\nu\to \mathfrak{D}_\cdot U~\text{in}~\mathscr{L}^2(\varOmega;\mathscr{L}^2(\mathopen{[}0,\infty\mathclose{[};\mathbb{R})^{\otimes 2}),~\text{as}~\nu\to\infty.$$ -A consequence of the above is that -$$\int_0^T U_\nu(t)~\!\mathrm{d}t\to \int_0^T U(t)~\!\mathrm{d}t~\text{in}~\mathscr{L}^2(\varOmega;\mathbb{R}),~\text{as}~\nu\to\infty,$$ -and -$$\mathfrak{D}_\cdot\int_0^T U_\nu(t)~\!\mathrm{d}t\to \mathfrak{D}_\cdot\int_0^T U(t)~\!\mathrm{d}t~\text{in}~\mathscr{L}^2(\varOmega;\mathscr{L}^2(\mathopen{[}0,\infty\mathclose{[};\mathbb{R})),~\text{as}~\nu\to\infty.$$ -And these imply, -$$\mathfrak{D}_\tau\int_0^T U(t)~\!\mathrm{d}t = \int_0^T \mathfrak{D}_\tau U(t)~\!\mathrm{d}t.$$ -Now, set $U(t) = u(t)^2$. 
Then, by the "Chain Rule" for Malliavin derivatives (Theorem 3.5 in Øksendal et al.), -$$\mathfrak{D}_\tau\int_0^T u(t)^2~\!\mathrm{d}t = \int_0^T \mathfrak{D}_\tau \big(u(t)^2\big)~\!\mathrm{d}t = 2\int_0^T u(t)\mathfrak{D}_\tau u(t)~\!\mathrm{d}t.$$<|endoftext|> -TITLE: Is there a standard name for this infinite group? -QUESTION [11 upvotes]: Consider the group of sequences -$$\{(a_1,a_2,\dots): a_i\in\mathbb{Z}/2\mathbb{Z}\}$$ -where the group operation is component-wise addition. Is there a standard name for this group, such as $(\mathbb{Z}/2\mathbb{Z})^{\infty}$, $(\mathbb{Z}/2\mathbb{Z})^{\mathbb{N}}$, or something similar? It is isomorphic to $\mathbb{Z}/2\mathbb{Z}[x]$ under addition, but I want to emphasize the additive group structure and not assign it any multiplicative structure. -EDIT: Silly me, it's not isomorphic to $\mathbb{Z}/2\mathbb{Z}[x]$. See below. - -REPLY [9 votes]: It would be standard to call this group $(\mathbb{Z}/2\mathbb{Z})^\mathbb{N}$ or $(\mathbb{Z}/2\mathbb{Z})^\omega$, or perhaps $\prod_\mathbb{N}\mathbb{Z}/2\mathbb{Z}$. It has probably also been called $(\mathbb{Z}/2\mathbb{Z})^\infty$, but I would recommend avoiding that notation, as it often (possibly more often) refers to the subgroup of your group consisting of sequences which have only finitely many nonzero entries. Incidentally, $\mathbb{Z}/2\mathbb{Z}[x]$ is not isomorphic to your group, rather it is isomorphic to this subgroup of sequences with finite support. The full sequence space is isomorphic instead to (the underlying additive group of) the power series ring $\mathbb{Z}/2\mathbb{Z}[[x]]$.<|endoftext|> -TITLE: Number of Taxicab routes in a triangular city -QUESTION [5 upvotes]: I am assuming a triangle that is "almost" half a rectangular city with taxicab geometry. I am trying to find the number of paths in this triangular city. -Assuming that the ride starts from the corner of the city. If we move p steps in one direction, and q steps in the perpendicular direction, the number of paths in case of a rectangular city is known and is given by: -$$\binom{p+q}{p} ~ or ~ \binom{p+q}{q}$$ -For example, assume the 45-degree rotated city in the following figure (left). -Complete and half cities - -If we start from the point where the arrow is pointing, the numbers at the cross-points refer to the number of possible paths from the start point to each cross-point. -Now, assume the figure on the right side. Again, the numbers at the cross-points refer to the number of possible paths from the start point to each cross-point. I obtained these numbers using a combination of counting and observation. -The main observation is that the number of paths in the triangular city is a ratio of the rectangular city. Take for example the 7'th row, we find the following (Can you explain why?): -924/(7/1)=132 -462/(7/2)=132 -210/(7/3)=90 -84/(7/4)=48 -28/(7/5)=20 -7/(7/6)=6 -1/(7/7)=1 -This applies to all rows. -Now to my question. Assume the following shape of a city, where the inlets are at the left edge, and the outlets are at the bottom edge. -My problem - -What I want to do is to find the number of paths from any of the inlets to any of the outlets. Hopefully a formula, and a proof. -In Figure 2 is what I got so far. If the outlet is less than or equal to the input, this can be directly obtained using the formula for the rectangular case. -Assuming that this is correct, note that up to the diagonal, numbers are following the rules of Pascal's triangle. 
After the diagonal, which represents the boundary of the city, it does not follow the same rules, but there is a pattern. -1st diagonal after the half (subtract 1) -6= (6+1)-1 -20=(6+15)-1 -34=(15+20)-1 -2nd (subtract 6) -20=(20+6)-6 -48=(20+34)-6 -62=(34+34)-6 -3rd (subtract 20) -4th (subtract 48) -5th (subtract 90) -6th (subtract 132) -Which are the numbers in the row corresponding to inlet 7 (Again, can you explain why?). - -REPLY [2 votes]: The sequence $1,6,20,48,90,132,132$ can be generated as $\displaystyle {5+i \choose i-1}\dfrac{8-i}{7}$ with $i$ running from $1$ to $7$, though it would be slightly more conventional to write $\displaystyle {n+k \choose k}\dfrac{n-k+1}{n+1}$ with $n=6$ and $k$ running from $0$ to $6$. -The right hand part of the first diagram is known as Catalan's triangle, with the sequence $1,1,2,5,14,42,132,\ldots$ being known as Catalan numbers: they appear frequently in mathematics.<|endoftext|> -TITLE: $\text{Ext}(H, \mathbb{Z})$ is isomorphic to the torsion subgroup of $H$ if $H$ is finitely generated -QUESTION [5 upvotes]: From Hatcher, page 196, before corollary 3.3. -We are first given these three properties of the Ext functor: -$\text{Ext}(H \oplus H', G) \cong \text{Ext}(H, G)\oplus\text{Ext}(H′, G)$. -$\text{Ext}(H, G) = 0$ if $H$ is free. -$\text{Ext}(Z_n, G) \cong G/nG.$ -Then, Hatcher writes "these three properties imply that $\text{Ext}(H, \mathbb{Z})$ is isomorphic to the torsion subgroup of $H$ if $H$ is finitely generated". -I'm trying to understand why that is. -I know a torsion subgroup is the subgroup composed of all elements of finite order. Also, a finitely generated abelian group $H$ is the direct sum of a free abelian group (which I'll call $H'$) and its torsion subgroup (which I'll call $T$). -So $$\text{Ext}(H)=\text{Ext}(H'\oplus T)$$ -From the first property, this equals -$$\text{Ext}(H')\oplus\text{Ext}(T).$$ -As $H'$ is free, this equals (from the second property) -$$0\oplus \text{Ext(T)}=\text{Ext(T)}.$$ -But why does $\text{Ext}(T)=T$ hold? - -REPLY [2 votes]: To elaborate on my comment, applying $\text{Ext}^{\bullet}(A, -)$ to the short exact sequence $0 \to \mathbb{Z} \to \mathbb{R} \to S^1 \to 0$ produces the long exact sequence -$$0 \to \text{Hom}(A, \mathbb{Z}) \to \text{Hom}(A, \mathbb{R}) \to \text{Hom}(A, S^1) \to \text{Ext}^1(A, \mathbb{Z}) \to 0$$ -where we can ignore the rest of the sequence because $\text{Ext}^1(A, \mathbb{R})$ vanishes, thanks to the fact that $\mathbb{R}$ is divisible and hence injective. (So is $S^1$; in fact what we've written down above is an injective resolution of $\mathbb{Z}$.) -If $A$ is torsion, then $\text{Hom}(A, \mathbb{R}) = 0$, and the long exact sequence above produces a natural isomorphism -$$\text{Hom}(A, S^1) \cong \text{Ext}^1(A, \mathbb{Z}).$$ -The group $\text{Hom}(A, S^1)$ is known as the Pontryagin dual $\widehat{A}$ of $A$. If $A$ is finite it is noncanonically isomorphic to $A$. In general it naturally has a topology making it a profinite abelian group, and every profinite abelian group arises in this way.<|endoftext|> -TITLE: Is $\sqrt{\cos^2(x)} = |\cos x|$? -QUESTION [7 upvotes]: I am taking the principal root of $\cos^2(x)$ so I thought it would be, but when you ask wolfram alpha it says it's only sometimes true, when $x > 0$ (see here). Can someone explain why this is to me and give a value of $x$ for which it is not true? -I'm just an $11$th grader in trig, so I probably won't understand it if you make it too complex. -Thanks. Sorry if the tag is off. 
-
-REPLY [5 votes]: The statement $\sqrt{\cos^2(x)}=|\cos(x)|$ is true for all real $x$. Wolfram|Alpha, on the other hand, has the often annoying habit of considering all complex numbers by default. So, it's not willing to apply the identity
-$$\sqrt{x^2}=|x|$$
-unless it knows that $x$ is real. Given that $\cos(x)$ is real for all real $x$, the identity you cite is true for any real $x$ - which is all you really need to worry about. Wolfram|Alpha even, a little bit down on the page in a section labelled "Real Solutions", writes $\operatorname{Im}(x)=0$, which says that if $x$ has no imaginary part (i.e. is a real number) then it satisfies the equation.
-The main problem in the complex plane is that, although it is generally true that $\sqrt{x^2}=\pm x$, it's possible that (e.g. for $x=i$ where $i$ is the imaginary unit, $\sqrt{-1}$) neither $x$ nor $-x$ is a positive real, whereas $|x|$ is always a positive real. Thus, $\sqrt{x^2}=|x|$ is not true anywhere except on the real line, which means that nearly any complex number (in particular any number $a+bi$ where $b$ is not zero and $a$ is not a multiple of $\pi$) fails to satisfy the original identity with cosine.
-(But really, if you're in a trigonometry class, don't worry about any of this. This has very little to do with trigonometry. The real moral of the story is possibly that Wolfram|Alpha can say things which are useless and confusing, albeit technically true)
-
-REPLY [2 votes]: You want to read the alternate form assuming $x$ is real. (The one about $x>0$ is a red herring, just written there in case that was what a user wanted to know about.) If $x$ is not real but rather complex then the square root is a more complicated animal (see https://en.wikipedia.org/wiki/Square_root for more info).<|endoftext|>
-TITLE: Evaluate $\lim_{x\to \infty} \frac{\tan^{2}(\frac{1}{x})}{(\ln(1+\frac{4}{x}))^2}$
-QUESTION [6 upvotes]: $$\lim_{x\to \infty} \frac{\tan^{2}(\frac{1}{x})}{(\ln(1+\frac{4}{x}))^2}$$
-I came across this problem and I am having trouble evaluating it. I know that the whole limit will probably be $0$ and that both the numerator and denominator approach $0$.
-How do I evaluate it? Using L'Hospital's rule leads to complex expressions, so I don't think that's a good method.
-Thank you for the help.
-
-REPLY [4 votes]: This is also pretty easy without using the Taylor expansion.
-$$L=\lim_{x\to\infty}\frac{\tan^2(1/x)}{\ln^2(1+4/x)}=\left(\lim_{x\to\infty}\frac{\tan(1/x)}{\ln(1+4/x)}\right)^2$$
-Let $u=1/x$.
-$$\sqrt L=\lim_{u\to0}\frac{\tan(u)}{\ln(1+4u)}=\lim_{u\to0}\frac{\sin(u)}{\cos(u)\ln(1+4u)}$$
-$$=\lim_{u\to0}\frac{\sin(u)}{\ln(1+4u)}$$
-Now we can apply L'Hospital.
-$$\sqrt L=\lim_{u\to 0}\frac{1}{4}\cos(u)(4u+1)=\frac 1 4$$
-$$\lim_{x\to\infty}\frac{\tan^2(1/x)}{\ln^2(1+4/x)}=\frac{1}{16}$$<|endoftext|>
-TITLE: lim sup and lim infs of Brownian Motion: $B_t/\sqrt{t}$ as $t \to \infty$ or as $t \to 0$.
-QUESTION [10 upvotes]: Below is my question. Q7.9 is what I'm stuck on. I've done Q7.8; I included it in the picture because I'll use it in Q7.9, and it gives a definition that I'll use.
-Update: This question is now solved, and I've added the details below.
-
-What I've done so far is this:
-By using time-inversion, $(tB_{1/t})_{t\ge0}$, and $(-B_t)_{t\ge0}$, sign-inversion of Brownian motion, we have that the four random variables in question are all equal to each other.
-Further, we see that the first is $\mathcal{F}_{0^+}$ measurable, since
-$$ \limsup_{t \to 0} \frac{B_t}{\sqrt{t}} = \lim_{s \to 0} \sup_{t \le s} \frac{B_t}{\sqrt{t}}, \ \text{and} \ \sup_{t \le s} \frac{B_t}{\sqrt{t}}$$
-is $\mathcal{F}_s$ measurable for all $s > 0$, and thus the limit is $\mathcal{F}_{0^+}$ measurable. Hence by Blumenthal's $0$-$1$ law, it is almost surely constant. Hence we now have that the four random variables in question are equal to each other and almost surely constant. Since $\mathbb{P}(B_{t'} > 0) = 1/2$ for all $t' > 0$, by the Markov property, we have that the almost sure constant must be at least $0$.
-I'm stuck on showing that this constant is in fact $+\infty$. I wanted to use the scaling property, $(cB_{t/c^2})_{t\ge0}$, of Brownian motion, but the issue is that this gives
-$$\frac{cB_{t/c^2}}{\sqrt{t}} = \frac{B_{t/c^2}}{\sqrt{t/c^2}} = \frac{B_s}{\sqrt{s}}$$
-where $s = t/c^2$. When considering just $B_t$ or $B_t/t$, we get a factor $c$ or $1/c$ out the front, and so, since this must hold for all $c$, we know that it must be $0$ or $\infty$. (We can then show which it is.) However, we don't get this nice property when using $B_t/\sqrt{t}$.
-
-Solution: Define $A^x_t$ and $A^x$ as follows:
-$$A^x_t = \left\{ \sup_{s \le t} \frac{B_s}{\sqrt{s}} \le x \right\}, ~
-A^x = \left\{ \lim_{t \downarrow 0}\sup_{s \le t} \frac{B_s}{\sqrt{s}} \le x \right\}
-= \left\{ \limsup_{s \downarrow 0} \frac{B_s}{\sqrt{s}} \le x \right\}.$$
-Observe that $B_t/\sqrt{t} \sim N(0,1)$. Thus, since $\sup_{s \le t} {B_s}/{\sqrt{s}} \ge {B_t}/{\sqrt{t}}$,
-$$P \left( \sup_{s \le t} \frac{B_s}{\sqrt{s}} \le x \right) \le P \left( \frac{B_t}{\sqrt{t}} \le x \right) = \Phi(x),$$
-where $\Phi$ is the cdf for the standard normal. We want to show that $P(A^x) = 0$ for every $x \in \mathbb{R}$ ($\therefore x \neq \infty$); by Blumenthal's $0$-$1$ law, it is enough to show that $P(A^x) < 1$. Now, $A^x_{1/n} \downarrow A^x$ as $n \to \infty$, so by continuity of the measure from above,
-$$P(A^x) = \lim_{n \to \infty}P(A^x_{1/n}) \le \Phi(x) < 1, \ x \in \Bbb R.$$
-Thus $P(A^x) < 1$, i.e. $P(A^x) = 0$, for all $x \in \mathbb{R}$. Thus the almost sure constant must be $+\infty$.
-
-Thank you to Jay.H for helping me with this!
-
-REPLY [2 votes]: Try to use the fact that $B_t/\sqrt{t}$ has the same distribution as $B_1$ and the fact that $\sup_{s\le t }B_s/\sqrt{s}$ is monotonic w.r.t. $t$.
-[Warning: more details below]
-For any constant $C>0$, let
-$$ A = \left\{ \lim_{t\to 0} \sup_{0 < s \le t} \frac{B_s}{\sqrt{s}} \le C \right\}$$<|endoftext|>
-TITLE: Showing that there do not exist uncountably many independent, non-constant random variables on $ ([0,1],\mathcal{B},\lambda) $.
-QUESTION [8 upvotes]: I have this problem in my assignment:
-
-Show that there do not exist uncountably many independent, non-constant random variables on $ ([0,1],\mathcal{B},\lambda) $, where $ \lambda $ is the Lebesgue measure on the Borel $ \sigma $-algebra $ \mathcal{B} $ of $ [0,1] $.
-
-Can someone please help me to solve this?
-
-REPLY [12 votes]: Assume that $(X_i)_{i \in I} $ are independent random variables. Let $Y_i := X_i \cdot 1_{|X_i|\leq C_i} $, where $C_i>0$ is chosen so large that $Y_i$ is not almost surely constant. This is possible since the $X_i $ are non-constant.
-Then the $(Y_i - \Bbb{E}(Y_i))_i $ form a family of independent and hence orthogonal random variables. Note that the $Y_i $ are bounded and thus contained in $L^2$.
-But the separable (!) space $L^2 ([0,1],\lambda) $ can only contain countably many elements which are mutually orthogonal.
-Hence, $I$ is countable.<|endoftext|>
-TITLE: If the Exponential map is a diffeomorphism at a point, can we say something about other points?
-QUESTION [7 upvotes]: Let $M$ be a complete (connected) Riemannian manifold, $p \in M$ some point in $M$.
-Assume $exp_p$ is a diffeomorphism from $T_pM$ onto $M$.
-Is it true that $exp_q$ is a diffeomorphism for all points $q \in M$?
-Of course if $M$ has a transitive isometry group, then the answer is positive, but what about other cases?
-Note that according to this answer this is equivalent to asking whether all geodesics of $M$ are globally length minimizing or all points in $M$ are joined by unique geodesics.
-
-REPLY [12 votes]: No.
-As an example (without being rigorous) consider a manifold which looks like an infinite half cylinder parallel to the positive $z$-axis in Euclidean three-space with a hemisphere attached at the bottom along an equator and smoothed out. (A bit like a hyperboloid or a one-sided infinite cigar.) If you look at the south (bottom) pole of the hemisphere you will get your diffeomorphic exponential map (the geodesics starting from there will move from there to the cylinder and then parallel to the $z$-axis along the cylinder). But if you look at a point on the cylinder, you will even find a closed geodesic running around the cylinder. (I hope I made this clear enough by describing it just in words.)<|endoftext|>
-TITLE: Fundamental Theorem of Arithmetic: why greater than 1?
-QUESTION [5 upvotes]: The theorem, as wikipedia states it, is
-
-Every integer greater than 1[note 1] either is prime itself or is the product of prime numbers, and that this product is unique.
-
-It does have a note there that maybe 1 can be included, but it's still "careful" about 1, and often I see it without a note.
-Why exclude 1? 1 is of course the product formed from the empty set, which is the only possible prime factorization of 1, and it doesn't contain anything that isn't a prime, so this seems fine to me.
-
-REPLY [3 votes]: I'm glad you understand the concept of the empty product. Not everyone does. Wikipedia, despite its many, many flaws (peruse Wikipediocracy sometime), at least recognizes that this has to be dumbed down for the general public; it has to be made less sophisticated. Some people have trouble with the idea of a single integer being a product of one integer; how can they understand an integer being the product of no integers at all?
-This is why the article says "either is prime itself or is the product of prime numbers" rather than "every positive integer is a product of primes." 2 is the product of a single prime, itself, while 1 is the product of no primes at all. You understand that and I understand that. But if you're writing for someone who might not necessarily understand these "subtleties," you have to dumb it down.
-You also have to consider the history of the subject. 1 has never been a prime number, but it took mathematicians a long time to recognize this. In explaining a unique factorization domain, you have to keep in mind that some people may not know that 1 is really a unit, not a prime.
-
-Also, and I am neither the first nor the last to say this, but too many people have this idea that unique factorization is something that needs protection. It is true that in one sense, $1^3 \times 2 \times 5$ and $(-1)^4 \times 5 \times 2$ are different things. But both expressions evaluate to the same number and involve the same prime numbers.
The units and the ordering changed, not the prime numbers.<|endoftext|> -TITLE: How to solve the differential equation: $y'=\sqrt{|y|}$ -QUESTION [5 upvotes]: While trying to solve the differential equation: $y'=\sqrt{|y|}$. I got confused how to deal with the absolute value. I want to draw a sketch for the direction field for that equation, and see for what initial values does this equation fulfill the conditions of the existence and uniqueness Theorem. -I tried to do the integral according to the sign, depends if $(y>0)$ or $(y<0)$. but I'm still not sure if my result is right. -result that I got: - -while $y>0$ : $y=({\frac{x}{2}+c})^{2}$ -while $y<0$ : $y=({-\frac{x}{2}+c})^{2}$ - -if that's the case, Its hard for me to imagine the Direction field by these two equations, because they are always above axis: $x$. How does it look? any hints? - -REPLY [2 votes]: For the $y<0$ case your DE is evaluated incorrectly: -$$\frac{dy}{dx}=\sqrt{-y}$$ -$$(-y)^{-\frac{1}{2}}\frac{dy}{dx}=1$$ -$$-2\sqrt{-y}=x+c$$ -$$-4y=(x+c)^2$$ -$$y=-\frac{(x+c)^2}{4}$$ -This should hopefully help you to visualize the direction field now. Something like this:<|endoftext|> -TITLE: Why is the matrix multiplication defined as it is? -QUESTION [11 upvotes]: Matrix multiplication is defined as: - -Let $A$ be a $n \times m$ matrix and $B$ a $m\times p$ matrix, the product $AB$ is defined as a matrix of size $n\times p$ such that $(AB)_i,_j = \sum\limits_{k=1}^mA_i,_kB_k,_j$. - -For what good reason did mathematicians define it like this? - -REPLY [8 votes]: One reason is that it gives you associativity with a vector: If $A$ and $B$ are matrices and $x$ is a vector, then -$$ -(AB)x = A(Bx) -$$ -ETA: This doesn't say anything more than Thomas's answer, by the way; I thought it might help if it were presented in a more tangible way, though.<|endoftext|> -TITLE: A ''strange'' integral from WolframAlpha -QUESTION [7 upvotes]: I want integrate: -$$ -\int \frac{1}{\sqrt{|x|}} \, dx -$$ -so I divide for two cases -$$ -x>0 \Rightarrow \int \frac{1}{\sqrt{x}} \, dx= 2\sqrt{x}+c -$$ -$$ -x<0 \Rightarrow \int \frac{1}{\sqrt{-x}} \, dx= -2\sqrt{-x}+c -$$ -But WolframAlpha gives: -$$ -\int \frac{1}{\sqrt{|x|}} \, dx=\left(\sqrt{-x}+\sqrt{x} \right)\operatorname{sgn}(x)-\sqrt{-x}+\sqrt{x} +c -$$ -How I can interpret this result? Maybe I'm wrong? - -REPLY [3 votes]: Since the function is not defined for $x=0$, it's not really meaningful to have a single constant of integration for the whole thing. The most general function $F$ (not defined at $0$) for which, at each point $x\ne0$, $F'(x)=\frac{1}{\sqrt{|x|}}$, is -$$ -F(x)=\begin{cases} --2\sqrt{-x}+c_1 & \text{if $x<0$}\\ -2\sqrt{x}+c_2 & \text{if $x>0$} -\end{cases} -$$ -where $c_1$ and $c_2$ are arbitrary constants. -Among these functions there are some that can be extended by continuity at $0$, namely those for which $c_1=c_2$, but they're just a special case. Note that none of these special functions is differentiable at $0$.<|endoftext|> -TITLE: Prove that a bounded, monotone increasing, and continuous function is uniformly continuous -QUESTION [6 upvotes]: I'm trying to prove that a function $f:[0,\infty) \to \mathbb R$ that is continuous, monotone increasing and bounded is uniformly continuous. -Here's a skech of what I've got so far: -$f(x) \to L$ for some $L \in \mathbb R$. Fix $\gamma>0$ then $[f^{-1}(0),L-\gamma]:=[a,b]$ is a compact interval and so $f$ is uniformly continuous on $[a,b]$. 
This is where I'm stuck, If I let $x,y \in [a,b]$ then $\forall \epsilon>0, \exists \delta>0$ such that $0<|x-y|<\delta \implies |f(x)-f(y)|<\epsilon$. If $x,y \notin[a,b]$ then $x,y>b$ and so I know $|f(x)-f(y)| \leq |f(x)-L|+|f(y)-L|$ which are each less than $L-\gamma$. What $\delta$ can I use to bound this expression? And what about the case where $x \in [a,b]$ and $y>b$? - -REPLY [5 votes]: Since $f$ is bounded and monotone increasing $\lim_{x\rightarrow \infty} f(x)=L $ exist. Now suppose $\varepsilon >0 $ and you want to show the uniform continuity requirement for this $\varepsilon$. Choose $R$ such that for $x>R$, $|f(x)-L|< \varepsilon$. Then for $x, y> R$, by monotonicity, $|f( x)-f(y)| < \varepsilon$. -On the other hand, $[0, R+1]$ is compact, so here you get $\delta >0$ such that $|x-y|<\delta$ implies $|f( x)-f(y)| < \varepsilon$ by standard reasoning. Wlog $\delta < 1$, so all of $[0,\infty) $ is covered.<|endoftext|> -TITLE: Cardioid in coffee mug? -QUESTION [102 upvotes]: I've been learning about polar curves in my Calc class and the other day I saw this suspiciously $r=1-\cos \theta$ looking thing in my coffee cup (well actually $r=1-\sin \theta$ if we're being pedantic.) Some research revealed that it's called a caustic. I started working out why it would be like this, but hit a snag. Here's what I did so far: -Consider the polar curves $r=1-\cos \theta$ and $r=1$. Since for a light ray being reflected off a surface (or the inside of my cup) the $\angle$ of incidence =$\angle$ of reflection, a point on the circle $(1,\theta)\to(1,2\theta)$. It looks like this has something to do with the tangent lines, so I pretend there's an $xy$ plane centered at the pole to find the slope of the line connecting the points. Since it's the unit circle the corresponding rectangular coordinates are $(\cos \theta, \sin \theta)$ and $(\cos 2\theta, \sin 2\theta).$ So -$$m={\sin 2\theta-\sin \theta \over \cos 2\theta-\cos \theta}$$ -now we see if it matches up with the slope of the lines tangent to the cardioid -$$\frac{dy}{dx}={r'\sin \theta+r \cos \theta \over r' \cos \theta - r \sin \theta}={\sin^2\theta - \cos^2 \theta +\cos \theta \over 2\sin \theta \cos \theta -\sin\theta}={\cos \theta - \cos 2\theta\over \sin 2\theta-\sin \theta}$$ -They're similar, but not identical. In particular $\frac{dy}{dx}=-\frac1m$. What error have I made, or what have I overlooked conceptually? Thanks in advance. - -REPLY [39 votes]: Light From Infinity -Most likely, the light is from a distance that is large on the scale of the cup. Therefore, we will take the incoming rays to be parallel. If so, the caustic in the coffee cup is a nephroid. 
-Consider the following diagram - -The ray reflected at the point $(-\cos(\theta),\sin(\theta))$ is -$$ -\frac{y-\sin(\theta)}{x+\cos(\theta)}=-\tan(2\theta)\tag{1} -$$ -which is -$$ -x\sin(2\theta)+y\cos(2\theta)=-\sin(\theta)\tag{2} -$$ -Here is a plot of the reflected rays from $(2)$ generated by uniformly distributed parallel incoming rays - -Taking the derivative of $(2)$ with respect to $\theta$: -$$ -2x\cos(2\theta)-2y\sin(2\theta)=-\cos(\theta)\tag{3} -$$ -Solving $(2)$ and $(3)$ simultaneously gives the envelope of the family of lines in $(1)$: -$$ -\begin{align} -\begin{bmatrix} -x\\ -y -\end{bmatrix} -&= -\begin{bmatrix} -\cos(2\theta)&-\sin(2\theta)\\ -\sin(2\theta)&\cos(2\theta) -\end{bmatrix}^{-1} -\begin{bmatrix} --\frac12\cos(\theta)\\ --\sin(\theta) -\end{bmatrix}\\[6pt] -&=\frac14\begin{bmatrix} -\cos(3\theta)-3\cos(\theta)\\ -3\sin(\theta)-\sin(3\theta) -\end{bmatrix}\tag{4} -\end{align} -$$ -The curve from $(4)$ is added in red - -Equation $(4)$ describes a nephroid. - -Light From A Point On The Circle -Since it is mentioned in a comment that a cardoid is formed by light coming from a point on the edge of the cup, we will do the same computation with light from a point on the circle. -Consider the following diagram - -The ray reflected at the point $(-\cos(\theta),\sin(\theta))$ is -$$ -\frac{y-\sin(\theta)}{x+\cos(\theta)}=-\tan\left(\frac{3\theta}2\right)\tag{5} -$$ -which is -$$ -y(1+\cos(3\theta))+x\sin(3\theta)=\sin(\theta)-\sin(2\theta)\tag{6} -$$ -Here is a plot of the reflected rays from $(6)$ reflected at uniformly spaced points on the circle - -Taking the derivative of $(6)$ with respect to $\theta$: -$$ --3y(\sin(3\theta))+3x\cos(3\theta)=\cos(\theta)-2\cos(2\theta)\tag{7} -$$ -Solving $(6)$ and $(7)$ simultaneously gives the envelope of the family of lines in $(5)$: -$$ -\begin{align} -\begin{bmatrix} -x\\ -y -\end{bmatrix} -&= -\begin{bmatrix} -\cos(3\theta)&-\sin(3\theta)\\ -\sin(3\theta)&1+\cos(3\theta) -\end{bmatrix}^{-1} -\begin{bmatrix} -\frac13\cos(\theta)-\frac23\cos(2\theta)\\ -\sin(\theta)-\sin(2\theta) -\end{bmatrix}\\[6pt] -&=\frac13\begin{bmatrix} -\cos(2\theta)-2\cos(\theta)\\ -2\sin(\theta)-\sin(2\theta) -\end{bmatrix}\tag{8} -\end{align} -$$ -The curve from $(8)$ is added in red - -Equation $(8)$ describes a cardoid.<|endoftext|> -TITLE: A closed form of $\int_0^1{\dfrac{1-x}{\log x}(x+x^2+x^{2^2}+x^{2^3}+\cdots)}\:dx$ -QUESTION [5 upvotes]: I need some hint to calculate this integral - -$$\int_{0}^{1}{\dfrac{1-x}{\log x}\left(x+x^{2}+x^{2^2}+x^{2^3}+\cdots\right)}{\rm d} x$$ - -Regards! - -REPLY [12 votes]: We have - -$$ -\int_{0}^{1}{\dfrac{1-x}{\log x}(x+x^{2}+x^{2^{2}}+x^{2^{3}}+\cdots)}\:dx=-\log 3. \tag1 -$$ - -Proof. One may recall that, using Frullani's integral, we have -$$ -\int_{0}^{1}\frac{x^{a-1}-x^{b-1}}{\log x}\:dx=\log\frac ab \quad (a,b>0). \tag2 -$$ -Considering a finite sum in the integrand, we get -$$ -\begin{align} -&\int_{0}^{1}{\dfrac{1-x}{\log(x)}(x+x^{2}+x^{2^{2}}+\cdots+x^{2^N})}\:dx\qquad (N=0,1,2,\cdots) -\\\\&=\sum_{n=0}^N\int_{0}^{1}\frac{x^{2^n}-x^{2^n+1}}{\log x}\:dx\\\\ -&=\sum_{n=0}^N\log\frac{2^n+1}{2^n+2}\qquad \quad (\text{using}\, (2))\\\\ -&=\sum_{n=0}^N\left(\log(2^n+1)-\log(2^{n-1}+1)-\log 2\right) \quad(\text{a telescoping sum})\\\\ -&=\log(2^N+1)-\log(2^{-1}+1)-(N+1)\log 2\\\\ -&=-\log 3+\log\left(1+1/2^N\right), -\end{align} -$$ then, letting $N \to+\infty$, leads to $(1)$. -Remark. 
One may see that we can generalize $(1)$.<|endoftext|> -TITLE: Why are we justified in using the real numbers to do geometry? -QUESTION [43 upvotes]: Context: I'm taking a course in geometry (we see affine, projective, inversive, etc, geometries) in which our basic structure is a vector space, usually $\mathbb{R}^2$. It is very convenient, and also very useful, since I can then use geometry whenever I have a vector space at hand. -However, some of that structure is superfluous, and I'm afraid that we can prove things that are not true in the more modest axiomatic geometry (say in axiomatic euclidian geometry versus the similar geometry in $\mathbb{R}^3$). -My questions are thus, in the context of plane geometry in particular: - -Can we deduce, from some axiomatic geometries, an algebraic structure? -Are some axiomatic geometries equivalent, in some way, to their more algebraic counterparts? - -(Note that by « more algebraic » geometry, I mean geometry in a vector space. The « more algebraic » counterpart of axiomatic euclidian geometry would be geometry in $R^2$ with the usual lines and points, and where we might restrict in some way the figures that we can build.) -I think it is useful to know when the two approaches intersect, first to be able to use the more powerful tools of algebra while doing axiomatic geometry, and second to aim for greater generality. -Another use for this type of considerations could be in the modelisation of geometry in a computer (for example in an application like Geogebra). Even though exact symbolic calculations are possible, an axiomatic formulation could be of use and maybe more economical, or otherwise we might prefer to do calculations rather than keep track of the axiomatic formulation. One of the two approaches is probably better for the computer, thus the need to be able to switch between them. - -REPLY [4 votes]: Euclid certainly did not use real numbers. He showed that the diagonal and side of a square have no "common measure", i.e. there is no segment that can be laid end-to-end an integer number of times to get the length of the side and also to get the length of the diagonal, and he knew how to define what it means to say the ratio of lengths of segment $A$ to segment $B$ is the same as the ratio of lengths of segment $C$ to segment $D$ when the latter two are the side and diagonal of a square. -And as mentioned elsewhere, Hilbert also developed Euclidean geometry while not just not using real numbers, but while making a point of avoiding them.<|endoftext|> -TITLE: When does tensor product have a (exact) left adjoint? -QUESTION [7 upvotes]: Let $A$ be a commutative Noetherian ring, and let $F$ be a flat $A$-module. We can assume $A$ is local, so $F$ is projective. - -Question 1. When does $F\otimes_A-$ preserve injective objects? -Question 2. When does $F\otimes_A-$ have a left adjoint? And when is this adjoint exact? - -The two questions are linked as follows. I learnt from this thread that, given an adjoint pair $F:\mathcal C\leftrightarrows\mathcal D:G$ where $\mathcal C$ and $\mathcal D$ are abelian categories and $\mathcal D$ has enough injectives, then $F$ is exact if and only if $G$ preserve injectives. -I would like to know when the functor $G=F\otimes_A-$ preserve injectives. So the question reduces to find out when $G$ has an exact left adjoint. -Any thoughts or comments are very welcome! - -REPLY [9 votes]: For question 2, $F\otimes_A -$ has a left adjoint iff $F$ is finitely generated, and the left adjoint is always exact. 
-For if $(f_i)\in F^I$ is an element of an infinite product of copies of $F$, then it is easy to see that $(f_i)$ is in the image of the canonical map $F\otimes_A A^I\to F^I$ iff $\{f_i\}$ is contained in a finitely generated submodule of $F$. If $F\otimes_A -$ has a left adjoint, then this canonical map must be an isomorphism, and it follows that $F$ must be finitely generated. Conversely, if $F$ is finitely generated (and hence projective), $F^{\vee}\otimes_A -$ is adjoint to $F\otimes_A -$ on both sides.
-(By a similar argument using instead the injectivity of the map $F\otimes_A A^I\to F^I$, you can show that $F$ must be finitely presented, even if you don't assume $A$ is Noetherian. So for non-Noetherian $A$, you still get that $F\otimes_A -$ has a left adjoint iff $F$ is finitely presented and flat, or equivalently finitely generated and projective.)
-Note that in this case you can also easily see directly that $F\otimes_A -$ preserves injectives, since $F$ is a direct summand of a finitely generated free module and tensoring with a finitely generated free module obviously preserves injectives. In general, however, $F\otimes_A -$ might preserve injectives without having a left adjoint. For instance, it is a well-known theorem that a ring is Noetherian iff any (possibly infinite) direct sum of injective modules is injective. So since you're assuming $A$ is Noetherian, tensoring with any free module preserves injectives, and hence so does tensoring with any projective module.
-
-REPLY [6 votes]: Let me remove most of your assumptions and work with an arbitrary module $M$ over an arbitrary commutative ring $R$. We'd like to know when the functor $M \otimes_R (-)$ has a left adjoint.
-The answer is iff $M$ is finitely presented projective, in which case the left adjoint is $M^{\ast} \otimes_R (-)$ where $M^{\ast} = \text{Hom}_R(M, R)$. You can extract a proof from this blog post. Since $M^{\ast}$ is also finitely presented projective, this functor is always exact.<|endoftext|>
-TITLE: Sufficient condition in proving open mapping theorem for Banach spaces
-QUESTION [5 upvotes]: Statement of the open mapping theorem: Let $X, Y$ be Banach spaces, $T:X\to Y$ be a continuous linear transformation. If $T$ is onto, then $T$ is an open map (i.e. for any open set $O\subset X$, $T(O)\subset Y$ is open).
-[EDITED] Claim: It suffices [in proving that T is an open map] to show that $T(B_X(1))$ contains an open ball centered at the origin, where $B_X(1)$ is the open ball of radius 1 centered at the origin in $X$.
-I'm not seeing how this condition implies that all open sets map to open sets. Linearity of $T$ seems to be important here, but nothing is coming to mind except the definition $T(af+bg)=aT(f)+bT(g)$ for $a,b\in\mathbb{R}$, $f,g\in X$.
-Would greatly appreciate some insight. Thanks!
-
-REPLY [5 votes]: I had the same question, so I will contribute the answer I found useful.
-Consider the note on p. 162 of Folland's Real Analysis: Modern Techniques and Their Applications. The first statement is:
-
-If $X$ and $Y$ are metric spaces, [the requirement that $f: X \to Y$ be open] amounts to requiring that if $B$ is a ball centered at $x \in X$ then $f(B)$ contains a ball centered at $f(x)$.
-
-Clearly if $f$ is an open map this is satisfied. Now we want to show that this statement implies that $f$ is open. Let $U \subset X$ be any open set: we want to show this property implies $f(U)$ is open.
-However, by openness of $U$ we have that for any $x \in U$ we can construct a ball $B_{x}$ with $x \in B_{x} \subset U$, and by assumption we can construct a corresponding ball $B_{x}'$ such that $f(x) \in B_{x}' \subset f(B_{x}) \subset f(U)$. But then $f(x)$ is contained within an open ball contained within $f(U)$. Since the point $x \in U$ (and corresponding $f(x)\in f(U)$) is arbitrary, we conclude that $f(U)$ is open.
-The second statement is:
-
-Specializing still further, if $X$ and $Y$ are normed linear spaces and $f$ is linear, then $f$ commutes with translations and dilations; it follows that $f$ is open iff $f(B)$ contains a ball centered at $0$ in $Y$ when $B$ is the ball of radius $1$ about $0$ in $X$.
-
-If $f$ is open then by Folland's first statement above we have that $f(B)$ contains a ball centered at $f(0)$. But $f(0)=0$ (by homogeneity of $f$, $f(\mathbf{0}) = f( 0 \cdot \mathbf{v}) = 0 f( \mathbf{v}) = 0$). Conversely, let $U$ be any open set and suppose the condition in Folland's second statement is satisfied. Let $x \in U$, and let $B'$ be a ball centered at $x$ with $B' \subset U$ (by openness of $U$). Note that $B'$ can be constructed by translation/dilation of $B$; e.g. $B' = x + c B$. By Folland's first statement it suffices to show that $f(B')$ contains a ball centered at $f(x)$. By linearity of $f$ we have that $f(B') = f(x) + c f(B)$, so that actually it suffices to prove that $f(B)$ contains a ball centered at $f(0)=0$. This last part is satisfied by assumption, so we conclude $f(U)$ is open.<|endoftext|>
-TITLE: (CLT) Number of rolls of two fair dice to be 90% certain that the percentage of times they show the same face is between 5/36 and 7/36
-QUESTION [6 upvotes]: How often do you have to roll two fair six-sided dice to be 90% certain that the percentage of times they show the same face is between 5/36 and 7/36?
-
-I was thinking of applying the central limit theorem between the two bounds, but I have no idea how to set it up.
-First of all, thanks to Henry for your answer.
-
-My professor said,
-First:
-From the statement of the problem one must know how to apply the
- correction factor to correctly obtain the limit at which one evaluates
- the normal, and find the correct n.
-Second: The distribution is binomial, and it will be impossible to have a
- greater value (%) for 486 than 487.
-
-
-I have three questions:
-
-1) How did Henry obtain (5/36n)^(1/2) for the standard deviation.
-2) Where is my mistake in evaluating the probability of 486 and 487.
-3) How to solve it using the CLT.
-
-Thanks.
-
-REPLY [2 votes]: You have a binomial distribution with parameter $p=\frac16$, so the expectation of the number of same-face rolls is $\frac{n}{6}$ and the variance is $\frac{5n}{36}$.
-For the proportion you have expectation $\frac{1}{6}$ and variance $\frac{5}{36n}$, so a standard deviation of $\sqrt{\frac{5}{36n}}$.
-If you are going to use the central limit theorem, then since $\Phi^{-1}(0.95)\approx 1.644854$ the symmetric two-tailed $90\%$ interval is $\pm 1.644854$ standard deviations from the mean. So you want to find $n$ such that $\frac{1}{36} = 1.644854 \sqrt{\frac{5}{36n}}$, suggesting $n$ should be at least $486.9978$. That is not quite an integer, so you will need to round up to $487$.
-Curiously, while that is a good approximation, $487$ in fact fails, with a probability for the interval of about $0.8996$, while $486$ succeeds with a probability of about $0.90002$.
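-For instance, a direct check of just these two cases (a quick R sketch, assuming inclusive bounds as in the sweep code further below):
-# exact binomial probability that the same-face count lies in
-# [ceiling(5n/36), floor(7n/36)], for n = 486 and n = 487
-for (n in c(486, 487)) {
-  p <- pbinom(floor(7/36 * n), n, 1/6) - pbinom(ceiling(5/36 * n) - 1, n, 1/6)
-  cat(n, ":", p, "\n")
-}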
-
-Assuming that your "between" interval is inclusive (though that only matters when $n$ is a multiple of $36$), the exact values to ensure a $90\%$ probability using a binomial distribution seem to be $474, 475, 479, 480, 481, 482, 484, 485, 486, 489, 494, 495, 496, 499, 500,$ and any larger numbers, but not $487$, taken from the R code:
-n <- 1:600
-intprob <- pbinom(7/36 * n, n, 1/6) -
- - pbinom(5/36 * n - 10^(-12), n, 1/6)
-which(intprob >= 0.9)<|endoftext|>
-TITLE: Is it necessary to study things earlier?
-QUESTION [6 upvotes]: I got accepted to a graduate school starting next year, and I will major in algebraic geometry.
-Until now, I had never thought that I had studied less than others my age. However, I heard that some of my colleagues have already studied Hartshorne at least once, and quite a few of them read Rudin's RCA when they were undergraduates. It's kinda unbelievable to me, but it seems like if they really did study and understood, then they will write absolutely a better Ph.D thesis than mine.
-So I'm now worrying a lot. I want to know whether this situation is general. Is it recommendable to study graduate subjects as early as possible? Or are there people here who experienced the same thing too? Was that beneficial?
-Between "studying each thing deep and slow" and "skimming many subjects as fast as possible", which one is better?
-
-REPLY [5 votes]: Your question is very broad, and I'm not sure this fully addresses it; but this is too long for a comment, and hopefully you find it useful nonetheless.
-
-I think there's a couple of false assumptions here.
-First, that there is a "better" way to approach studying mathematics. People vary wildly in how they learn, and ultimately I think it's best to find an approach to math that works well for you. Certainly you shouldn't discount the value of mastering difficult material early (and although rare, it is definitely believable for an undergrad to master such material), but at the same time, just knowing a bunch of stuff doesn't make you a mathematician. Grad school is much more about learning to be a mathematician than it is about learning material.
-Second, and far more importantly, your line
-
-It seems like if they really did study and understood, then they will write absolutely a better Ph.D thesis than mine.
-
-This is something I'm still struggling to understand intuitively, so saying this in answer to your question also helps me internalize it: mathematics is not linear. Even ignoring the fact mentioned above that mathematics $\not=$ a bunch of facts, and even ignoring the fact that one's rate of learning changes over time, it is impossible to guess ahead of time how your thesis will compare with someone else's simply because there are so many different facets of mathematics. As you go through grad school, regardless of where you start relative to your peers, you will eventually find yourself an expert in some small area, just as they will in their own small areas. What contributions you make to this small area will surprise you, and it's pointless to try to guess ahead of time whether they will "match up" (however you might measure that) with someone else's.
-Your thesis is not predetermined; it will be the product of a number of things, including the growth you experience as a mathematician as you go through grad school (as well as a fair amount of chance, let's be honest).
-Certainly knowing more things at the beginning is an advantage, but it's by no means a dispositive one.<|endoftext|>
-TITLE: Has anyone seen this pattern that evaluates to $-\frac{1}{3}$ always?
-QUESTION [6 upvotes]: I was recently doodling and came upon an interesting pattern.
-Beginning with $0$, add $1$, subtract $2$, divide by $3$, and multiply by $4$. Then add $5$, subtract $6$, divide by $7$, and multiply by $8$. Hopefully it's clear what I'm doing.
-$$ \frac {\frac {0+1-2}{3} * 4 +5-6} {7} *8\dots$$
-After each division step (after dividing by $3$, or $7$, or $11$, and so on ...), the function evaluates to $-\frac{1}{3}$.
-Has anyone seen this, and if so, where? Can anyone brainstorm a practical use, or is it simply an interesting quirk?
-Thanks, all.
-
-REPLY [4 votes]: Consider the most recent division, by $4k-1$ (first 3, then 7, then 11, etc.)
-This division yields $-\frac{1}{3}$, after which we multiply by $4k$, add $4k+1$, subtract $4k+2$, and divide by $4k+3$:
-$$
-\frac{-\frac{1}{3}\cdot 4k+(4k+1)-(4k+2)}{4k+3}
-=\frac{-\frac{4}{3}k-1}{4k+3}
-=-\frac{1}{3}
-$$<|endoftext|>
-TITLE: Exists continuous $f_n: [0,1] \to \mathbb{R}$ that converges pointwise, as $n \to \infty$, to $\chi_\mathbb{Q}$?
-QUESTION [5 upvotes]: Does there exist a sequence of continuous $f_n: [0, 1] \to \mathbb{R}$ that converges pointwise, as $n \to \infty$, to $\chi_\mathbb{Q}$, the characteristic function of the rationals in $[0, 1]$?
-
-REPLY [2 votes]: There cannot be such a sequence.
-$\chi_{\Bbb Q}$ is a double limit of continuous functions, cf. the Dirichlet function. Thus, it's a Baire class 2 function. As the article states, it can't be a Baire class 1 function (single pointwise limit of continuous functions) because such functions have a meager set of discontinuities, unlike $\chi_{\Bbb Q}$. For a proof, see the links and reference in the stackexchange Q&A which Joey Zou cites in his comment.<|endoftext|>
-TITLE: What is the empty relation?
-QUESTION [11 upvotes]: I was reading the Wikipedia article on equivalence relations and one section says that "the empty relation R on a non-empty set X is vacuously symmetric and transitive but not reflexive."
-What is the empty relation? And what is vacuously symmetric?
-Thank you very much.
-
-REPLY [11 votes]: A relation on a set $A$ is by definition a subset $R\subseteq A\times A$. Then "$a$ is related to $b$" means "$(a,b)\in R$". The empty relation is then just the empty set, so that "$a$ is related to $b$" is always false.<|endoftext|>
-TITLE: Uses of vector spaces over $\mathbb Q$
-QUESTION [8 upvotes]: I know of two applications of vector spaces over $\mathbb Q$ to problems posed by people not specifically interested in vector spaces over $\mathbb Q$:
-
-Hilbert's third problem; and
-The Buckingham pi theorem.
-
-What others are there?
-
-REPLY [2 votes]: Assuming $\mathsf{CH}$, one can show that $\Bbb R$ is the union of countably many metrically rigid subsets by viewing it as a vector space over $\Bbb Q$. (A set $D$ in a metric space $\langle X,d\rangle$ is metrically rigid if no two distinct two-point subsets of $D$ are isometric.) This is mentioned in Brian M. Scott and Ralph Jones, Metric rigidity in $E^n$, Proceedings of the American Mathematical Society, Vol.
$53$, $1975$, $219{-}222$, though the paper itself contains a different proof of a more general result.<|endoftext|>
-TITLE: Polar decomposition of Bounded Normal Operator on Hilbert Space
-QUESTION [9 upvotes]: It is well known that if $T$ is a bounded linear operator on an infinite-dimensional Hilbert space $H$, then there exists a unique partial isometry $U$ such that $T=U \vert T \vert$, where $\vert T \vert =(T^*T)^{1/2}$. Such a decomposition is called a polar decomposition of $T$. I am trying to solve the following problem:
-
-Suppose $T$ is a bounded normal operator on $H$; then there exists a unique unitary operator $U$ such that $T=U \vert T \vert$
-
-I don't have any clean way to show the existence of such a unitary operator. Can someone please give me some idea how to prove this?
-
-REPLY [7 votes]: For any bounded operator $T$, the operator $T^{\star}T$ is a positive selfadjoint operator with unique positive square root $|T|=(T^{\star}T)^{1/2}$. And,
-$$
- \|Tx\|^{2}=(T^{\star}Tx,x)=(|T|^2x,x)=(|T|x,|T|x)=\||T|x\|^{2},\;\;\; x \in H.
-$$
-The operators $T$ and $|T|$ have the same null spaces, and both maps induce injective linear maps from $H/\mathcal{N}(T)$ into $H$. Therefore, there is a linear map
-$$
- U : \mathcal{R}(|T|) \rightarrow \mathcal{R}(T)
-$$
-such that $U|T|=T$. $U$ is isometric because $\|U|T|x\|=\|Tx\|=\||T|x\|$. So it extends uniquely by continuity to an isometric map from $\overline{\mathcal{R}(|T|)}$ onto $\overline{\mathcal{R}(T)}$.
-Now assume that $T$ is normal. Then $\|Tx\|^{2}=(T^{\star}Tx,x)=(TT^{\star}x,x)=\|T^{\star}x\|^2$, which gives $\mathcal{N}(T)=\mathcal{N}(T^{\star})$. Hence,
-$$
- \mathcal{R}(|T|)^{\perp}
- =\mathcal{N}(|T|)
- =\mathcal{N}(T)
- =\mathcal{N}(T^{\star})
- = \mathcal{R}(T)^{\perp}.
-$$
-So $U$ has a unitary extension to all of $H$ obtained by setting $U=I$ on $\mathcal{N}(T)$. $U$ is not unique because $U$ may be chosen to be any unitary map on $\mathcal{N}(T)$. For example, in the most extreme case where $T=0$, the operator $U$ can be any unitary map of $H$.<|endoftext|>
-TITLE: T compact if and only if $T^*T$ is compact.
-QUESTION [12 upvotes]: I have an operator $T \in B(\mathcal{H})$.
-I need to prove that $T$ is compact if and only if $T^*T$ is compact.
-One way is ok, because if $A$ or $B$ is compact then $AB$ is compact, so I get at once that if $T$ is compact then $T^*T$ is compact.
-But how do I go the other way? If I assume that $T^*T$ is compact I am not quite sure how to see that $T$ is compact. If I assume for contradiction that $T$ is not compact I must also have that $T^*$ is not compact. If I knew that either $T$ or $T^*$ was invertible it would be ok, because then I could find a bounded subsequence that did not converge. But when I do not have invertibility I am not quite sure how to proceed.
-
-REPLY [18 votes]: Let $\{x_n \}$ be a bounded sequence with bound $M$. If $T^{\star}T$ is compact, then there exists $\{ x_{n_{k}}\}$ such that $\{ T^{\star}Tx_{n_{k}}\}$ converges. Then
-\begin{align}
- \|Tx_{n_k}-Tx_{n_j}\|^2 & =(T^{\star}Tx_{n_{k}}-T^{\star}Tx_{n_j},x_{n_k}-x_{n_j}) \\
- & \le 2M\|T^{\star}Tx_{n_k}-T^{\star}Tx_{n_j}\|.
-\end{align}
-This forces $\{Tx_{n_{k}}\}$ to be a Cauchy sequence and, hence, to converge.<|endoftext|>
-TITLE: Hexagon tesselations
-QUESTION [5 upvotes]: A configuration is made of congruent regular hexagons, where each
- hexagon shares a side with another hexagon. What is the largest
- integer $k$, such that the figure cannot have $k$ vertices? For
- example this figure has $13$ vertices.
-
-That's what I've been able to do so far.
-
-A hexagon built on one hexagon adds $4$ vertices to the configuration.
-A hexagon between two hexagons adds $3$ vertices.
-A hexagon between $3$ hexagons adds $2$ vertices.
-A hexagon between $4$ hexagons adds $1$ vertex.
-A hexagon between $5$ hexagons adds $0$ vertices.
-A hexagon between $6$ hexagons adds $0$ vertices.
-
-Now it's clear that whatever number of vertices I have, I get it through the above combinations.
-However I don't know what I have to do now.
-It seems to me that I can add an infinite number of hexagons constructed one on the other and have an infinite number of vertices...
-Can you guys help?
-
-REPLY [6 votes]: Making a chain of $\geq2$ hexagons you can realize all $k$ of the form $k=10+4m$, $m\geq0$. Starting with your figure and attaching a chain of hexagons you can realize all $k=13+4m$ vertices, and starting with a blob of $4$, resp. $5$, hexagons one shows that all $k$ of the form $k=16+4m$ or $k=19+4m$, $m\geq0$, can be realized in this way. Note that $10$, $13$, $16$, $19$ represent all four remainders modulo $4$. Given that, we can realize all of
-$$10,13,14,16,17,18,19, 20,21,22,23,24,\ldots\ .$$
-Therefore it seems that the largest non-realizable $k$ is $19-4=15$. In order to make sure that $k=15$ cannot be realized an impossibility proof is required. Such proofs are notoriously difficult.
-Any configuration can be built up one hexagon at a time, so that it remains connected all the time. Start with two hexagons glued together, making $10$ vertices. Then show that you can never reach $15$ vertices using your list of simple principles.<|endoftext|>
-TITLE: Does there exist $f \in L^1(\mathbb{R})$ where $\lim_{r \to 0} {1\over{r}} \int_{x-r}^{x+r} f(y)\,dy = \infty$?
-QUESTION [6 upvotes]: If $E \subset \mathbb{R}$ has measure $0$, does there exist $f \in L^1(\mathbb{R})$ such that, for every $x \in E$,
-$$\lim_{r \to 0} {1\over{r}} \int_{x-r}^{x+r} f(y)\,dy = \infty?$$
-What if $E$ has positive measure?
-
-REPLY [3 votes]: We will denote by $|S|$ the Lebesgue measure of any measurable $S\subseteq\mathbb{R}$. Suppose $|E|=0$. Since Lebesgue measure is regular, we can find for any positive integer $k$ an open set $U_k\supseteq E$ such that $|U_k|<2^{-k}$. Now put $f=\sum_{k=1}^\infty\mathbf{1}_{U_k}$ (here $\mathbf{1}_S$ denotes the indicator function of $S$). The monotone convergence theorem tells you that $\int_{\mathbb{R}}f=\sum_{k=1}^\infty\int_{\mathbb{R}}\mathbf{1}_{U_k}<\sum_{k=1}^\infty 2^{-k}<\infty$, so $f\in L^1(\mathbb{R})$.
-But now let us fix any $x\in E$ and any integer $N>0$. Since $U_1\cap\cdots\cap U_N$ is an open set containing $x$, we have $(x-r,x+r)\subseteq U_1\cap\cdots\cap U_N$ for $r>0$ sufficiently small, so for such $r$ we have
-$$\int_{x-r}^{x+r}f\ge\int_{x-r}^{x+r}\sum_{k=1}^N \mathbf{1}_{U_k}
-=\int_{x-r}^{x+r}N=2Nr,$$
-so (since $N$ was arbitrary) $\lim_{r \to 0}\frac{1}{r}\int_{x-r}^{x+r} f=\infty$.
-As for the second question, you need a somewhat more advanced tool from analysis, namely the Hardy-Littlewood maximal function.
-We have
-$$\left|\frac{1}{2r}\int_{x-r}^{x+r} f\right|\le\frac{1}{2r}\int_{x-r}^{x+r}|f|\le (Mf)(x),$$
-but the maximal inequality by Hardy and Littlewood implies that $(Mf)(x)<\infty$ for a.e. $x$. So the limit that you wrote cannot be $\infty$ for all $x\in E$ (since $|E|>0$).
In fact, Lebesgue differentiation theorem (which is a consequence of the maximal inequality) tells you more precisely that we have -$$\lim_{r\to 0}\frac{1}{2r}\int_{x-r}^{x+r} f=f(x)$$ -for a.e. $x$ (so in particular the limit exists and is finite a.e.).<|endoftext|> -TITLE: Certain set is dense in $l^p$ if and only if $\{x_n : n \in \mathbb{N}\} \notin l^q$, where $1/p + 1/q = 1$ -QUESTION [6 upvotes]: Assume that $\{x_n : n \in \mathbb{N}\} \subset \mathbb{R}$ is such that $x_n \neq 0$ for some $n$. Let $p \in (1, \infty)$ and$$G := \left\{\{y_n : n \in \mathbb{N}\} \in l^p : \lim_{N \to \infty} \sum_{n=1}^N y_n x_n = 0\right\}.$$Do we have that $G$ is dense in $l^p$ if and only if $\{x_n : n \in \mathbb{N}\} \notin l^q$ where $1/p + 1/q = 1$? - -REPLY [3 votes]: "$\Leftarrow$": Let us assume that $\left(x_{n}\right)_{n\in\mathbb{N}}\notin\ell^{q}$. -We will show that actually $\delta_{n_{0}}\in\overline{G}$ for every -$n_{0}\in\mathbb{N}$. Since $G$ (and hence $\overline{G}$) is obviously -a vector space and since the finitely supported sequences are dense -in $\ell^{p}$ (because of $p<\infty$), this implies density of $G$. -Note that since $\left(x_{n}\right)_{n\in\mathbb{N}}\notin\ell^{q}$, -we also have $\left(x_{n}\right)_{n\geq n_{0}+1}\notin\ell^{q}$. -Now, it is well-known that we can characterize the $\ell^{q}$ norm -by duality (also if it is infinite), i.e. we have -$$ -\infty=\left\Vert \left(x_{n}\right)_{n\geq n_{0}+1}\right\Vert _{\ell^{q}}=\sup_{\substack{\left(y_{n}\right)_{n\geq n_{0}+1}\text{ finitely supported},\\ -\left\Vert \left(y_{n}\right)_{n\geq n_{0}+1}\right\Vert _{\ell^{p}}=1 -} -}\left|\sum_{n=n_{0}+1}^{\infty}x_{n}y_{n}\right|. -$$ -Now, let $\varepsilon>0$ be arbitrary and let $\alpha:=x_{n_{0}}$. -By what we just saw, there is a finitely supported sequence $\left(y_{n}\right)_{n\geq n_{0}+1}$ -with $\left\Vert \left(y_{n}\right)_{n\geq n_{0}+1}\right\Vert _{\ell^{p}}=1$ -and $\left|\gamma\right|>\frac{\left|\alpha\right|}{\varepsilon}$ -for $\gamma:=\sum_{n=n_{0}+1}^{\infty}x_{n}y_{n}$. Now, define $z:=\left(z_{n}\right)_{n\in\mathbb{N}}$ -by $z_{n}:=-\frac{\alpha}{\gamma}\cdot y_{n}$ for $n\geq n_{0}+1$ -and $z_{n}:=0$ otherwise. Note -$$ -\sum_{n=1}^{\infty}x_{n}z_{n}=-\frac{\alpha}{\gamma}\cdot\sum_{n=1}^{\infty}x_{n}y_{n}=-\alpha -$$ -and hence -$$ -\sum_{n=1}^{\infty}\left(\delta_{n_{0}}+z\right)_{n}x_{n}=x_{n_{0}}+\sum_{n=1}^{\infty}z_{n}x_{n}=\alpha-\alpha=0 -$$ -which implies $\delta_{n_{0}}+z\in G$. But since $\left|\gamma\right|>\frac{\left|\alpha\right|}{\varepsilon}$, -we also have -$$ -\left\Vert \delta_{n_{0}}-\left(\delta_{n_{0}}+z\right)\right\Vert _{\ell^{p}}=\left\Vert z\right\Vert _{\ell^{p}}=\left|\frac{\alpha}{\gamma}\right|\cdot\left\Vert \left(y_{n}\right)_{n\geq n_{0}+1}\right\Vert _{\ell^{p}}=\left|\frac{\alpha}{\gamma}\right|<\left|\alpha\right|\cdot\frac{\varepsilon}{\left|\alpha\right|}=\varepsilon. -$$ -Since $\varepsilon>0$ was arbitrary, $\delta_{n_{0}}\in\overline{G}$. -"$\Rightarrow$": Let us assume that $G$ is dense. If we had -$\left(x_{n}\right)_{n\in\mathbb{N}}\in\ell^{q}$, then the functional -$$ -\Phi:\ell^{p}\to\mathbb{R},\left(y_{n}\right)_{n}\mapsto\sum_{n=1}^{\infty}x_{n}y_{n} -$$ -would be well-defined and bounded, so that the kernel ${\rm ker}\,\Phi=G$ -would be a closed subset of $\ell^{p}$. -But by assumption, $x_{n}\neq0$ for some $n\in\mathbb{N}$, so that -${\rm ker}\,\Phi\neq\ell^{p}$. 
All in all, we see that -$$ -\overline{G}=\overline{{\rm ker}\,\Phi}={\rm ker}\,\Phi\neq\ell^{p}, -$$ -in contradiction to density of $G$ in $\ell^{p}$. Hence, $\left(x_{n}\right)_{n\in\mathbb{N}}\notin\ell^{q}$, -as desired.<|endoftext|> -TITLE: Integer solutions of $x^3+y^3=z^3$ using methods of Algebraic Number Theory -QUESTION [10 upvotes]: I'm asked to prove that the famous equation $$x^3+y^3=z^3$$ has no integer (non-trivial) solutions, i.e. FLT for $n=3$ -I'm aware that on this website there are solutions using methods of Number Theory (the infinite descendant proof for example, or well, Wiles' Theorem) But my lecturer told us it can be done by methods of Algebraic Number Theory, i.e. using certain number fields and properties of them. -As an hint, he told us to consider the extension $$\mathbb{Q}(\sqrt{3})$$ and using the result that characterises ramified or non-ramified primes in quadratic fields. -Now I'd be lying saying that I have some idea on how to attack this problem. -I thought that something helpful would come using some analogue of the reasoning of finding roots of $x^2+y^2=z^2$, i.e. reasoning with the norm of a specific quadratic extension, but the norm gives a quadratic relation in this case, and not a cubic one. On the other hand I thought, ok let's consider cubic extension, but for $$\mathbb{Q}(\sqrt[3]{d})$$ the norm of $a+b\sqrt[3]{d}+c\sqrt[3]{d^2} $ is $$a^3+b^3d+c^3d^2-3abc$$ and so I have a kind of cubic relation, BUT I don't know how to get rid of the $abc$ term. -I'm aware that this is not a big effort, but this is what I'm able to think as a strategy to attack this problem. -Instead of full solutions I'd prefer suggestion and reasonings, otherwise I'll never learn how to proceed with these kind of problems :) -Thanks in advance - -REPLY [3 votes]: Fermat's equation for cubes is a common introduction to lecture notes on algebraic number theory, because it motivates to study rings of integers in a number field, and partly has been developed even for such Diophantine problems, e.g., Kummer's work concerning generalizing factorization to ideals. -For the equation $x^3+y^3=z^3$ the number field is $\mathbb{Q}(\zeta)$ with a third primitive root of unity $\zeta=e^{2\pi i/3}$. Its ring of integers is given by $\mathbb{Z}[\zeta]$, which is indeed a factorial ring (because it is Euclidean). Its units are given by $\pm 1,\pm \zeta,\pm\zeta^{-1}$. This is crucial to prove Euler's result: -Theorem(Euler $1770$): The equation $x^3+y^3=z^3$ has no non-trivial integer solutions. -The proof uses divisibility properties of the ring $\mathbb{Z}[\zeta]$, starting from the equation -$$ -z^3=x^3+y^3=(x+y)(x+\zeta y)(x+\zeta^2y). -$$ -The first case is $p=3\nmid xyz$. We may suppose that $x,y,z$ are coprime. We have $z^3\equiv \pm 1\bmod 9$ and $x^3+y^3\equiv -2,0,2 \bmod 9$, so that $x^3+y^3\neq z^3$, a contradiction. Hence we suppose that $3\mid xyz$, i.e., say, $3\mid z$ and $3\nmid xy$. Now we reformulate the equation as -$$ -x^3+y^3=(3^mz)^3, -$$ -with $x,y,z$ pairwise coprime and $3\nmid xyz$, -where we have solved the case $m=0$. The idea is now to use descent, i.e., to reduce it to the case $m=0$. The above equation becomes -$$ -(3^mz)^3=(x+y)(x+\zeta y)(x+\zeta^2y), -$$ -where the three factors are not coprime, because $1-\zeta$ is a common factor, because of $3=(1-\zeta)(1-\zeta^2)$, so that $(1-\zeta)\mid 3\mid (x+y)$. -However, since $\mathbb{Z}[\zeta]$ is factorial, all factors are cubes, i.e. , -$x+y=3^{3m-1}c^3$ with some $c\in \mathbb{Z}$, and so on. 
This finishes the proof, after some computations in this ring.
-Unfortunately, this idea does not work for $x^p+y^p=z^p$ for primes $p$, except for $p\le 19$, because otherwise the ring of integers $\mathbb{Z}[\zeta_p]$ is no longer factorial.<|endoftext|>
-TITLE: How many chains are there in a finite power set?
-QUESTION [7 upvotes]: Let $A$ be a finite set with $n$ elements. How many chains are there in $\mathcal P(A)$ -- that is, how many different subsets of $\mathcal P(A)$ are totally ordered by inclusion?
-It's easy enough to count maximal chains; there are $n!$ of them. But counting all chains gets me into horrible inclusion-exclusion situations even if I try to do it by hand for small $n$.
-Of course this is also the number of chains in a Boolean algebra with $2^n$ elements, or the number of chains in a finite lattice with $2^n$ elements.
-By hand computation, the number of chains for $n$ running from $0$ to $6$ is $2, 4, 12, 52, 300, 2164, 18732$. This sequence appears to be unknown in OEIS.
-
-REPLY [4 votes]: Except for the degenerate $n=0$ case, the number of chains is $4$ times the $n$th Fubini number, A000670, also known as the ordered Bell numbers.
-The linked Wikipedia article lists various formulas for these numbers, the most elementary ones being
-$$ a_n = \sum_{k=0}^n \sum_{j=0}^k (-1)^{k-j} \binom{k}{j}j^n \qquad\text{and}\qquad a_n \approx \frac{n!}{2(\log 2)^{n+1}}$$
-(the number of chains is then $4 a_n$).
-The factor of $4$ makes sense; it represents the fact that each of $\varnothing$ and $A$ can either be omitted or included in the chain without affecting its validity.<|endoftext|>
-TITLE: Proof of "Singular values of a normal matrix are the absolute values of its eigenvalues"
-QUESTION [5 upvotes]: I want a simple proof of this fact using only definitions and basic facts. I've searched for it for some time and I couldn't find a satisfying proof. So I attempted to do it myself.
-Let $A \in \mathbb{C}^{n \times n}$ and $A^H$ be the conjugate transpose of $A$. Let $A$ be normal, i.e. $A A^H = A^H A$. The singular values of $A$ are defined as $\sigma \in \mathbb{R}^{\geq 0}$ such that
-$$ \begin{align*}
-A v &= \sigma u \\
-A^H u &= \sigma v
-\end{align*} $$
-where $u^H u = v^H v = 1$. $u$ and $v$ are called left and right singular vectors respectively.
-Now multiplying the first equation with $A^H$ and the second equation with $A$ from the left we obtain
-$$ \begin{align*}
-A^H A v &= \sigma A^H u = \sigma^2 v \\
-A A^H u &= \sigma A v = \sigma^2 u
-\end{align*} $$
-Since $A$ is normal we obtain
-$$ \begin{align*}
-A A^H v &= \sigma^2 v \\
-A A^H u &= \sigma^2 u
-\end{align*} $$
-
-I believe we can conclude that either $u=v$ or $u=-v$ if singular values are distinct. So by the definition we can see that either $\lambda=\sigma$ or $\lambda=-\sigma$ is also an eigenvalue of $A$. So $\sigma = | \lambda |$.
-
-Edit: As @levap pointed out we can only conclude that $u = e^{i \theta} v, \theta \in [0, 2 \pi)$. Then we see that
-$$ \begin{align*}
-Av &= \sigma e^{i \theta} v \\
-A^H v &= \sigma e^{-i \theta} v
-\end{align*} $$
-So, we can say that $\lambda = \sigma e^{i \theta}$ is an eigenvalue of $A$. Also, by the lemma given below $\sigma^2 \geq 0$, so $\sigma \in \mathbb{R}$ and we can always select $\sigma \geq 0$ (using $-u$ in the definition if $\sigma \leq 0$). Therefore, we can conclude that $\sigma = |\lambda|$. Also, $v$ is an eigenvector of $A^H$ with $\bar{\lambda}$.
-Lemma. Eigenvalues of $AA^H$ are real and non-negative.
-Proof.
$0 \leq \lVert A^H v \rVert_2^2 = v^H A A^H v = \sigma^2 v^H v = \sigma^2$.
-What happens if singular values are not distinct?
-
-REPLY [6 votes]: Since you work over $\mathbb{C}$, if the singular values are distinct you can only conclude that $u = e^{i\theta}v$ but this is enough.
-If the singular values are not distinct, I doubt you'll find a "completely elementary" proof which avoids something like the spectral theorem as you need to know something about the eigenvalues of $AA^H$ and their relation to the eigenvalues of $A$.
-Using the spectral theorem, I can offer the following argument. If $A$ is normal and $v$ is an eigenvector of $A$ with eigenvalue $\lambda$ then $v$ is also an eigenvector of $A^H$ with eigenvalue $\overline{\lambda}$. Denote the eigenvalues of $A$ by $\lambda_1, \ldots, \lambda_n$. Since $A$ is diagonalizable, we can choose a basis $(v_1, \ldots, v_n)$ with $Av_i = \lambda_i v_i$. Then we have
-$$ AA^H(v_i) = A(\overline{\lambda_i} v_i) = \lambda_i \overline{\lambda_i} v_i = |\lambda_i|^2 v_i $$
-which shows that the eigenvalues of $AA^H$ are $|\lambda_1|^2, \ldots, |\lambda_n|^2$. Since you have shown that $\sigma^2$ is an eigenvalue of $AA^H$, we have $\sigma = |\lambda_i|$ for some $1 \leq i \leq n$.<|endoftext|>
-TITLE: Proof that the number $\sqrt[3]{2}$ is irrational using Fermat's Last Theorem
-QUESTION [7 upvotes]: Suppose that $\sqrt[3]{2}=\frac{p}{q}$. Then $2q^3 = p^3$, i.e. $q^3 + q^3 = p^3$, which is a contradiction with Fermat's Last Theorem.
-My question is whether this argument is a correct mathematical proof, since Fermat's Last Theorem is proven, or does it loop on itself somewhere along the proof of the Theorem?
-In other words, does the proof of Fermat's Theorem somehow rely on the fact that $\sqrt[3]{2}$ is irrational?
-UPD:
-As pointed out in comments, this actually is a valid argument, no matter what was used in the proof of Fermat's Last Theorem (which from now on will be referred to as the Proof). What really interests me, is whether the Proof uses on some step the fact that $\sqrt[3]{2}$ is irrational?
-
-REPLY [2 votes]: Your argument is correct but there is no need to use Wiles' proof of Fermat's Last Theorem:
-an elementary proof of the case $n=3$ was given by Euler.<|endoftext|>
-TITLE: Exponential Diophantine equation $7^y + 2 = 3^x$
-QUESTION [5 upvotes]: Find all positive integer solutions to $$7^y + 2 = 3^x.$$
-
-
-ATTENTION: MY SOLUTION HAS A TERRIBLE MISTAKE WHICH I HAVE OVERLOOKED!
-Obviously, $x > y$. Then, we have $3^x = 7^y + 2 \equiv 0 \pmod {3^y}$. Also, $$7^y = (6 + 1)^y = \sum_{k = 0}^{y} {y \choose k} 6^k \equiv \sum_{k = 0}^{y - 1} {y \choose k} 6^k \pmod {3^y}.$$ We claim that the highest power of $3$ that divides ${y \choose k}$ is at most $2$. Indeed, $$\sum_{i = 1}^{\infty} \left [\frac {y} {3^i} \right] - \left (\sum_{i = 1}^{\infty} \left [\frac {y - k} {3^i} \right] + \sum_{i = 1}^{\infty} \left [\frac {k} {3^i} \right] \right) \leqslant 2.$$ Hence, $$7^y \equiv \sum_{k = 0}^{y - 1} {y \choose k} 6^k \leqslant 2 \sum_{k = 0}^{y - 1} 6^k = \frac {2} {5} (6^y - 1).$$ Since $(5, 3^y) = 1$, we have by Euler's Theorem that $$5^{\phi (3^y)} = 5^{3^y - 3^{y - 1}} \equiv 1 \pmod {3^y}.$$ Then, $2 \cdot 3^{y - 1} \equiv \frac {2} {5} (6^y - 1) \pmod {3^y}$ and $$0 \equiv 7^y + 2 \leqslant 2 \cdot 3^{y - 1} + 2 \pmod {3^y}.$$ Take $s > 0$ an integer for which $$3^{y} s \leqslant 2 \cdot 3^{y - 1} + 2.$$ It follows from this that $0 \leqslant 3^{y} (s - 1) \leqslant 2 - 3^{y - 1}$. Hence, $y < 2$.
So the solutions are $$(x, y) = (1, 0), (2, 1).$$ -"Notes on Olympiad Problems", Nima Bavari, Tehran, 2006. - -REPLY [14 votes]: Okay so I got this solution after modular bashing for an hour. This better be right. -Since the cases $x,y \le 2$ are already investigated above easily, we look at $x,y \ge 3$. -Rewrite this equation to $$7(7^{y-1}-1)=9(3^{x-2}-1)$$ -Now, since $7|3^{x-2}-1$, and the order of $3$ modulo $7$ is $6$, we have $6|x-2$. -This gives $13|3^6-1|3^{x-2}-1$, so $13|7^{y-1}-1$. -Now we have $12|y-1$ since the order of $7$ modulo $13$ is $12$. This gives $19|7^{12}-1|7^{y-1}-1$. -Now we have $19|3^{x-2}-1$, so $18|x-2$, since the order of $3$ modulo $19$ is $18$. -Now $37|3^{18}-1|3^{x-2}-1$. This gives $37|7^{y-1}-1$. -Now we have $9|y-1$. Now we have $27|7^9-1|7^{y-1}-1$, since the order of $7$ modulo $37$ is $9$. -However, $9(3^{x-2}-1) \equiv -9 \pmod{27}$, so it cannot be a multiple of $27$. -We now have a contradiction, so the answer is $(x,y)=(1,0),(2,1)$, as desired. GG!!<|endoftext|> -TITLE: Prove that $\lim_{h \to 0}\frac{1}{h}\int_0^h{\cos{\frac{1}{t}}dt} = 0$ -QUESTION [12 upvotes]: I'm trying to prove that $$\lim_{h \to 0}\frac{1}{h}\int_0^h{f(t)dt} = 0$$ where $$f(t) = \begin{cases}\cos{\frac{1}{t}} &\text{ if } t \neq 0\\ 0&\text{otherwise}\end{cases}.$$ Can someone give me a hint where to start? Darboux sums somehow seem to lead me nowhere. -NOTE: I cannot assume that $f$ has an antiderivative $F$. - -REPLY [5 votes]: Let $g=\frac1h$ and $s=\frac1t$. Then integration by parts gives -$$ -\begin{align} -\lim_{h\to0}\frac1h\int_0^h{\cos\!\left(\frac1t\right)\mathrm{d}t} -&=\lim_{g\to\infty}g\int_g^\infty{\frac{\cos(s)}{s^2}\,\mathrm{d}s}\\ -&=\lim_{g\to\infty}g\left[-\frac{\sin(g)}{g^2}+2\int_g^\infty\frac{\sin(s)}{s^3}\,\mathrm{d}s\right]\\ -&=\lim_{g\to\infty}O\left(\frac1g\right)\\[6pt] -&=0 -\end{align} -$$ -Since $\sin(s)=O(1)$.<|endoftext|> -TITLE: What happens when the normal to surface is zero? -QUESTION [8 upvotes]: In Differential Geom we're always given that surfaces should be regular, meaning the partial derivatives at every point are linearly independent, or the normal is non-zero. -I get that the tangent space isn't well defined when the partial derivatives are linearly dependent. But I can't find any explanations as to what is happening to the surface itself. If someone could give a more geometrical/ intuitive reason why surfaces should be regular that would be great! - -REPLY [9 votes]: Consider the "surface" defined by -$$ -S(x, y) = (x^3, y, 1). -$$ -Since the image is just the plane $z = 1$, it's clearly a nice surface, even though $\partial S/\partial x$ is zero at $x = 0$. So in this case your statement "the tangent space isn't well defined when the partials are dependent" is incorrect. It's a bit more subtle than that. But your statement's not TOO far wrong, and is a workable one to go with for now. -Then consider the following: -$$ -S(x, y) = (x^3, |x^3|, y) -$$ -This "surface" looks like an extruded letter "V", but its $x-$ and $y-$ partials are defined and smooth everywhere. It's more or less the last example, with the trouble with the zero-deriv being shown off rather than hidden. And that's why you need a well-defined normal. - -REPLY [5 votes]: Geometrically, the surface has a dent, or it degenerates to a line or curve, losing precisely the $2$-dimensionality that you want to study. 
For example, consider $$f(u) = \begin{cases} e^{-1/u} , &\text{if }u > 0 \\ 0, & \text{otherwise} \end{cases}$$ -and plot using some program: $${\bf x}(u,v) = (f(u)\cos v, f(u)\sin v, u)$$ -This is not regular because $f$ can be zero. Things will get bad in the $z$-axis, which is part of it.<|endoftext|> -TITLE: When is the space $L^\infty(\mu)$ finite-dimensional? -QUESTION [8 upvotes]: There is a theorem that for a given $p\in [1,\infty)$ a space $L^p(\mu)$ is finite dimensional iff the set of values of $\mu$ is finite. -Is a similar theorem for the space $L^\infty(\mu)$ for general positive measure $\mu$ ( finite or infinite)? -Edit. Idea of the proof for $L^p(\mu)$ with $p\in [1,\infty)$. -We show that if the set of values of $\mu$ is infinite then $dim L^p(\mu)=\infty$. -There exists a sequence of pairwise disjoint measurable subsets of $X$ such that $0<\mu(P_n)<\infty$ with the property that the following set is infinite -$$ -\{\mu(D): D \textrm{ is measurable, } D\subset X\setminus \bigcup_{k=1}^n P_k \} -$$ -For, we take measurable -$D_1 $ be such that $0< \mu (D_1)<\mu(X)$ and $D_2=X\setminus D_1$. -Let -$$ -W_1=\{\mu(D): D \textrm{ is measurable}, D \subset D_1 \}, -$$ -$$ -W_2=\{\mu(D): D \textrm{ is measurable} , D \subset D_2 \}. -$$ -At least one among the sets $W_1, W_2$ is infinite, since for arbitrary measurable $D$ we have $D=(D\cap D_1) \cup (D\cap D_2)$. Let $P_1=D_1$ if $W_1$ is finite and $P_2=D_2$ if $W_2$ is finite. -Let pairwise disjoint measurable $P_1,...,P_n $ with the property $0<\mu(P_i)<\infty$ and such that the set $\{\mu(D): D \textrm{ is measurable}, D\subset X\setminus \bigcup_{k=1}^n P_k \}$ is infinite be defined. Let $X_n=X\setminus \bigcup_{k=1}^n P_k$. Then there is a measurable $E_1\subset X_n$ such that $0<\mu(E_1)<\mu(X_n)$. Let $E_2:=X_n\setminus E_1$. -We put -$$ -V_1=\{\mu(D): D \textrm{ is measurable}, D \subset E_1 \}, -$$ -$$ -V_2=\{\mu(D): D \textrm{ is measurable }, D \subset E_2 \}. -$$ -We define $P_{n+1}=E_1$ or $E_2$ depending on $V_1$ or $V_2$ is finite. -The characteristic functions of sets $P_1, P_2,...$ are linearly independent. -We show that if a set of values of $\mu$ is finite then $dim L^p(\mu)<\infty$. -Let $x_1$ be a smallest positive finite value of measure and let $\mu(D_1)=x_1$. Then $D_1$ is an atom. Next we take the smallest positive finite value $x_2$ of the measure on $X\setminus D_1$ and a set $D_2 \subset X\setminus D_1$ with $\mu(D_2)=x_2$-it is an atom, and so on. The procedure have to finish after finite many steps, say $n$ steps. On the set $X \setminus (D_1\cup...\cup D_n)$ the measure takes at most two values: zero and infinity, hence each function from $L^p(\mu)$ is zero a.a on this set. On arbitrary atom measurable function is constant a.a. (because it is true for measurable simple functions). Hence arbitrary function from $L^p(\mu)$ is equal a.a to linear combination of characteristic function of atoms $D_1,...,D_n$. - -REPLY [4 votes]: If we allow $\mu$ to be an infinite measure, then at least precisely the same formulation will not work. Let $\mu = \infty \cdot \lambda$ be the Lebesgue measure "multiplied by $\infty$", i.e. $\mu(E) = 0$ if $\lambda(E) = 0$ and $\mu(E) = \infty$ otherwise. Then $L^\infty (\mu) = L^\infty (\lambda)$ is infinite dimensional, although $\mu$ only assumes two values. -Now, if $\mu$ is a finite measure and $\mu$ assumes only finitely many values, then $L^1$ is finite dimensional. Since $L^\infty \hookrightarrow L^1$ (with an injective(!) 
embedding), $L^\infty$ is finite dimensional as well. -Conversely, if $L^\infty$ is finite dimensional, then so is $L^1$, since $L^\infty \subset L^1$ is dense (approximate $f \in L^1$ by $f \cdot 1_{|f|\leq n}$). Since finite dimensional subsets of a normed vector space are closed, we get $L^1 = L^\infty$, so that $L^1$ is finite dimensional. By your result, this shows that $\mu$ only assumes finitely many values. -BTW: Do you have a convenient source for the result in case of $1 \leq p < \infty$?<|endoftext|> -TITLE: How to solve this integral: $\int_{-\infty}^\infty\frac{x^2 e^x}{(e^x+1)^2}\:dx$ -QUESTION [7 upvotes]: I am trying to solve an integral like this: -$$ -I=\int \frac{x^2 e^x}{(e^x+1)^2} dx -$$ -And I get this answer: -$$ -\int \frac{x^2 e^x}{(e^x+1)^2}dx=x^2-\frac{x^2}{e^x+1}-2x\text{ln}(e^x+1)+2\int\text{ln}(e^x+1)dx -$$ -Where ln denotes natural logarithm. -The last integral can be rewritten as Li$_2(-e^x)$: -\begin{gather} -\int\text{ln}(e^x+1)dx=\begin{cases} u=e^x\\ -du=e^xdx\Rightarrow dx={du\over u}\end{cases}\Rightarrow\\ -\Rightarrow \int\text{ln}(e^x+1)dx=\int\frac{\text{ln}(u+1)}{u} du -\end{gather} -As Li$_1(z)=-\text{ln}(1-z)\Rightarrow \text{ln}(u+1)=\text{ln}(1-(-u))=-\text{Li}_1(-u)$, it yields: -\begin{gather} -\int\frac{\text{ln}(u+1)}{u} du=-\int\frac{\text{Li}_1(-u)}{u}du=-\text{Li}_2(-u)=-\text{Li}_2(-e^x) -\end{gather} -Finally: -\begin{gather} -\int \frac{x^2 e^x}{(e^x+1)^2} dx=\int \frac{x^2 e^x}{(e^x+1)^2}dx=x^2-\frac{x^2}{e^x+1}-2x\text{ln}(e^x+1)-2\text{Li}_2(-e^x) -\end{gather} -Now, I need to evaluate $I$ from $-\infty$ to $\infty$. I have solved it in Wolfram Alpha, but although it often shows the step by step option, in this case, it doesn't. The result is $\pi^2/3$, which is barely $\zeta(2)$, where $\zeta(\cdot)$ is the Riemann's zeta function, closely related to polylogarithms. So the question is how to evaluate this integral from $-\infty$ to $\infty$. Thank you in advance. - -REPLY [4 votes]: Because $\frac{x^2e^x}{(e^x+1)^2}=\frac{x^2}{\left(e^{x/2}+e^{-x/2}\right)^2}$ is even, we can integrate by parts and use the Dirichlet eta function: -$$ -\begin{align} -\int_{-\infty}^\infty\frac{x^2e^x}{(e^x+1)^2}\,\mathrm{d}x -&=2\int_0^\infty\frac{x^2e^x}{(e^x+1)^2}\,\mathrm{d}x\\ -&=-2\int_0^\infty x^2\,\mathrm{d}\frac1{e^x+1}\\ -&=4\int_0^\infty\frac{x}{e^x+1}\,\mathrm{d}x\\ -&=4\Gamma(2)\,\eta(2)\\[6pt] -&=2\zeta(2)\\ -&=\frac{\pi^2}3 -\end{align} -$$<|endoftext|> -TITLE: domain of $x^x$ -QUESTION [15 upvotes]: What will be the domain of $f(x)=x^x$? -I have asked this to some teachers, they say that the domain is set of all nonnegative real numbers. It is true that there are infinite negative numbers for which $f(x)$ is not defined, but at the same time there are also infinite negative numbers (eg. $x=-3, -2/5,$ etc) for which $f(x)$ gives real numbers. So what exactly can be the domain of $f(x)= x^x$? - -REPLY [13 votes]: It depends on the properties you want the function to have. At least for $x\in\mathbb Z$ you are right, as $x^x$ is well-defined for $x\in\mathbb Z,x<0$. When talking about $x\in\mathbb Q$, things get more difficult and I wouldn't argue that you can (easily) calculate $f(x)$ with $x=-\frac{2}{5}$. -If you choose to include all $x\in\mathbb Z$ with $x<0$ in the domain of your function, your domain/function gets "messed up"; your domain is not an intervall anymore, it is not a differentiable function (it is however still continuous) etc. 
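-For a concrete illustration of this dichotomy, here is a minimal Python sketch (the sample points are chosen only for illustration): at negative integers $x^x$ is an exact rational number, while a real-valued evaluation already fails at the nearby point $x=-\frac{1}{2}$.
-
-    from fractions import Fraction
-    import math
-
-    # negative integers: x^x is an exact rational value
-    for x in [-3, -2, -1]:
-        print(x, Fraction(x) ** x)      # -1/27, 1/4, -1
-
-    # a nearby non-integer exponent: no real value exists
-    try:
-        math.pow(-0.5, -0.5)
-    except ValueError as err:
-        print("x = -1/2:", err)         # math domain error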
-Now when we're talking about including some $x\in\mathbb Q$ for which $x^x$ is well-defined, things can get out of control. For $x=-\frac{1}{2}$ we would say that $$x^x=\left(-\frac{1}{2}\right)^{-\frac{1}{2}}=\frac{1}{\sqrt{-\frac{1}{2}}}$$ is undefined, however for $x=-\frac{2}{4}$ we have $$x^x=\left(-\frac{2}{4}\right)^{-\frac{2}{4}}=\frac{1}{\sqrt[4]{\left(-\frac{2}{4}\right)^2}}=\frac{1}{\sqrt[4]{\frac{4}{16}}}$$ which is well-defined. This means, that although we have $-\frac{1}{2}=-\frac{2}{4}$, that $f\left(-\frac{1}{2}\right)\neq f\left(-\frac{2}{4}\right)$ and thus $f$ is no longer a function. -For $x\in\mathbb R\setminus\mathbb Q$ we then finally need $x>0$, as for these $x$ we have $x^x=e^{x\ln(x)}$, which is only defined for $x>0$. -So although one could include all $x\in\mathbb Z$ with $x<0$, one normally chooses not to, to preserve those "nice" properties the function has when we restrict the domain to $\mathbb R_+^*$.<|endoftext|> -TITLE: Generalized pigeonhole principle: 15 workstations and 10 servers -QUESTION [7 upvotes]: Q: Suppose that a computer science laboratory has 15 workstations and 10 servers. A cable can be used to directly connect a workstation - to a server. For each server, only one direct connection to that - server can be active at any time. We want to guarantee that at any - time any set of 10 or fewer workstations can simultaneously access - different servers via direct connections. What is the minimum number - of direct connections needed to achieve this goal? - -This is an example from Rosen's Discrete Mathematics and Its Applications. The given solution to this example consists of 2 parts if I understood correctly. -The first part includes finding the number of connections like the following: - -Suppose that we label the workstations $W_1, W_2, ..., W_{15}$ and the - servers $S_1, S_2, ..., S_{10}$. Furthermore, suppose that we connect - $W_k$ to $S_k$ for $k = 1,2, ..., 10$ and each of $W_{11}, W_{12}$, - $W_{13}, W_{14}$, and $W_{15}$ to all 10 servers. We have a total of 60 - direct connections. - -Then I guess some kind of proof by contradiction is given to show that less than 60 connections is impossible. - -Now suppose there are fewer than 60 direct connections between - workstations and servers. Then some server would be connected to at - most $\lfloor 59/10\rfloor= 5 $ workstations.(If all servers were - connected to at least six workstations, there would be at least $6.10$ - $= 60$ direct connections.) This means that the remaining nine servers are not enough to allow the other 10 workstations to simultaneously - access different servers. Consequently, at least 60 direct connections - are needed. - -But I can't understand this part. How is it concluded that remaining nine servers are insufficient when at most 59 connections are used? - -REPLY [5 votes]: I’m assuming that you understood why there must be a server connected to at most $5$ workstations if there are only $59$ direct connections. -There’s no harm in labelling the servers so that server $S_1$ is connected to $5$ or fewer workstations, and there’s no harm in assuming that the workstations to which $S_1$ is directly connected are among the last five, $W_{11},W_{12},W_{13},W_{14}$, and $W_{15}$, so that $S_1$ is not directly connected to any of the ten workstations $W_1,W_2,\ldots,W_{10}$. In other words, every direct connection from one of these ten workstations is to one of the nine servers $S_2,S_3,\ldots,S_{10}$. 
This means that the ten workstations $W_1,W_2,\ldots,W_{10}$ cannot simultaneously directly access all ten servers: none of them has a direct connection to $S_1$, so if they all directly access a server simultaneously, two of them must access the same server. This contradicts our guarantee that every set of ten (or fewer) workstations can simultaneously directly access all ten servers.<|endoftext|> -TITLE: Continuity of min function -QUESTION [5 upvotes]: I would like to ask a question about the continuity of this min function. -Let $f:\mathbb{R}\rightarrow \mathbb{R}$ be continuous. Define -$$F(x):=\min\{f(t):t \in [-x,x]\}.$$ -I believe that the function $F$ is continuous on $\mathbb{R}.$ -However, I do not know how to use the $\delta$-$\epsilon$ definition to show its continuity. Could somebody help me? -Thank you. -Masih - -REPLY [3 votes]: Given $\epsilon > 0$ and $x$, we want to show that there exists a $\delta>0$ such that $|F(x)-F(y)|<\epsilon$ whenever $|x-y|<\delta$. -Since $[-x,x]$ is compact, the minimum is attained at some point(s) $t_0\in[-x,x]$. -Suppose first that $\pm x$ are not points of minimum, i.e. the minimum is attained strictly inside. Let $\delta$ be such that $|x-y|<\delta$ implies $|f(x)-f(y)| < \frac12|f(x) - f(t_0)|$, by continuity of $f$. This $\delta$ will work for $F$ for any $\epsilon>0$ since $|F(x) - F(y)| = 0$ in this case. -Now suppose the minimum is attained at $\pm x$. Then again by continuity of $f$ at $\pm x$, find $\delta$ such that $|f(x) - f(y)| < \epsilon$ if $|x-y|<\delta$. This $\delta$ will work as well.<|endoftext|> -TITLE: Prove H is in the center Z(G) -QUESTION [7 upvotes]: Exercise from Artin's 2nd edition of Algebra. -Let $H$ be a normal subroup of prime order $p$ in a finite group $G$. Suppose that $p$ is the smallest prime that divides the order of $G$. Prove that $H$ is in the center $Z(G)$. -My attempt: - Choose any $e\neq h \in H$, since $h$ is normal, the conjugacy class including $h$ lies in $H$, so this class has size less than $p$. The only positive integer less than $p$ which divides $|G|$ is one. So the class containing $h$ has size one, and $h \in Z(G)$. -Correct? - -REPLY [4 votes]: Correct. Alternatively stated: $H$ is normal so it is a union of conjugacy classes; all those ccls have order dividing $G$, and hence have size either $1$ or bigger than $p$ (by definition of $p$ as the smallest prime dividing $|G|$). Therefore it is a union of size-1 ccls.<|endoftext|> -TITLE: Prove that $\frac{(2n)!}{2^n} $ is an integer for all $n \geq 0$ -QUESTION [5 upvotes]: I don't really know where to begin with this problem. Should I use induction to solve it or is there an easier way? - -REPLY [5 votes]: Consider the number of ways to arrange the numbers $1,1,2,2,\ldots,n,n$ in a line. Standard combinatorics tells you that it is $\frac{(2n)!}{2^n}$, hence that expression is an integer. -More details for why the number of ways is the desired expression: There are $(2n)!$ ways of arranging the list. Since for every $k$ we can switch the two copies of $k$ and be left with the same list, we have overcounted, so we should divide by $2$ for every $k$ to get the correct count. This gives the $2^n$ in the denominator.<|endoftext|> -TITLE: Geometrical meaning of orientation on vector space -QUESTION [5 upvotes]: Can any one explain, the geometrical meaning of orientation in a vector space? 
-
-REPLY [6 votes]: $\newcommand{\Reals}{\mathbf{R}}\newcommand{\Vec}[1]{\mathbf{#1}}\newcommand{\ve}{\Vec{e}}\newcommand{\vv}{\Vec{v}}\newcommand{\vw}{\Vec{w}}$To ensure we're on the same page algebraically, here are some preliminaries:
-Let $V$ be a finite-dimensional real vector space. If $(\vv_{i})_{i=1}^{n}$ and $(\vw_{i})_{i=1}^{n}$ are ordered bases of $V$, there exists a unique invertible linear operator $T_{\vv}^{\vw}:V \to V$ satisfying
-$$
-T_{\vv}^{\vw}(\vv_{i}) = \vw_{i}\quad\text{for all $i = 1, 2, \dots, n$,}
-$$
-and this operator has a well-defined determinant $\det T$, a non-zero real number. (The usual definition is to express $T$ as a matrix with respect to an arbitrary basis of $V$ and show the determinant of the resulting matrix does not depend on the choice of basis.)
-Definition: We say $(\vv_{i})_{i=1}^{n}$ and $(\vw_{i})_{i=1}^{n}$ are consistently oriented if $\det T_{\vv}^{\vw} > 0$; otherwise $(\vv_{i})_{i=1}^{n}$ and $(\vw_{i})_{i=1}^{n}$ are oppositely oriented.
-Lemma: "Consistently oriented" is an equivalence relation on the set of ordered bases of $V$.
-Fix an orientation of $V$, i.e., an ordered basis $B = (\vv_{i})_{i=1}^{n}$. Let's try to understand the geometry carried by the orientation.
-Imagine fixing the first $(n - 1)$ elements of $B$. The hyperplane $H_{n}$ spanned by $(\vv_{i})_{i=1}^{n-1}$ may be viewed as a "mirror" that separates $V$. Precisely, if $\vv$ is an arbitrary vector in $V$, one of three things happens:
-
-Replacing $\vv_{n}$ by $\vv$ gives a consistently ordered basis.
-$\vv \in H_{n}$, so $\{\vv_{1}, \dots, \vv_{n-1}, \vv\}$ is not a basis of $V$.
-Replacing $\vv_{n}$ by $\vv$ gives an oppositely ordered basis.
-
-You can "interpolate" between these three contingencies by allowing $\vv$ to move continuously along a path from $\vv_{n}$ to $-\vv_{n}$. At some point(s), this path must "cross the hyperplane $H_{n}$" or "travel through the mirror", whereupon the orientation changes, i.e., the resulting basis is oppositely-oriented.
-There's nothing special about fixing the $n$th vector, or even about fixing all the vectors but one. You can allow each $\vv_{i}$ to vary continuously: As long as the resulting set remains a basis "at each instant", that basis is consistently oriented.
-Note that changing the ordering of the basis elements generally changes the orientation. For example, swapping the labels of two basis elements changes the orientation. Generally, a re-ordering defines a consistently-oriented basis if and only if the permutation of indices is an even permutation, i.e., can be written as a composition of evenly many transpositions.
-
-It's worth pointing out a final subtlety in $\Reals^{3}$: If $(\ve_{i})_{i=1}^{3}$ denotes the standard basis, the (algebraic) ordering of the vectors fixes an orientation, but we must still pick a conventional geometric representation, i.e., must choose to depict the standard basis as "right-handed" or "left-handed".<|endoftext|>
-TITLE: Is there a $\nabla^3 f(x)$?
-QUESTION [8 upvotes]: In multivariable calculus we have so far seen the gradient and the Hessian,
-so it is natural to ask whether $\nabla^3 f(x)$ exists.
-Can anyone let me know what comes after the Hessian?
-
-REPLY [6 votes]: Consider a ($k$ times) differentiable function $f: \Bbb R^n \to \Bbb R$. The derivative of this function $Df(\mathbf x)$ is also called the gradient and is given by $$Df(\mathbf x) = \pmatrix{\partial_1 f(\mathbf x) & \cdots & \partial_n f(\mathbf x)}$$
-This is a $1\times n$ row matrix.
However, once you fix the vector $\mathbf x\in \Bbb R^n$, $Df(\mathbf x)$ can also be considered a function from $\Bbb R^n\to \Bbb R$. In particular it is the linear function $$Df(\mathbf x)(\mathbf h) = \pmatrix{\partial_1 f(\mathbf x) & \cdots & \partial_n f(\mathbf x)}\pmatrix{h_1 \\ \vdots \\ h_n} = \sum_{i=1}^n h_i\partial_i f(\mathbf x)$$ -What about the second derivative? Well if $Df(\mathbf x)$ is a function from $\Bbb R^n\to \Bbb R$, then we could just call this function $g$ and take the derivative of it. We know that once we fix a vector $\mathbf y$, the derivative of $g$ is given by $Dg(\mathbf y)(\mathbf h) = \sum_{i=1}^n h_i\partial_i g(\mathbf y)$. Then plugging back in $Df(\mathbf x) = g$, we get $$Dg(\mathbf y)(\mathbf h) = D^2f(\mathbf x)(\mathbf y)(\mathbf h) = \sum_{i=1}^n h_i\partial_i\sum_{j=1}^n y_j\partial_j f(\mathbf x) \stackrel{(*)}= \sum_{i=1}^n\sum_{j=1}^n h_iy_j\partial_i\partial_j f(\mathbf x)$$ -where $(*)$ is possible because $\mathbf y$ was fixed. In matrix notation notice that this is just $$D^2 f(\mathbf x)(\mathbf y)(\mathbf h) = \pmatrix{h_1 & \cdots & h_n}\pmatrix{\partial_1\partial_1 f(\mathbf x) & \cdots & \partial_1\partial_n f(\mathbf x) \\ \partial_2\partial_1 f(\mathbf x) & \cdots & \partial_2\partial_n f(\mathbf x) \\ \vdots & \ddots & \vdots \\ \partial_n\partial_1 f(\mathbf x) & \cdots & \partial_n\partial_n f(\mathbf x)}\pmatrix{y_1 \\ y_2 \\ \vdots \\ y_n} = \mathbf h^T[Hf(\mathbf x)]\mathbf y$$ where $Hf(\mathbf x)$ is the Hessian matrix of $f$ at $\mathbf x$. -Using the summation notation there is a clear way to continue to the third (and even the $k$th) derivative. For instance we can see that $$D^3f(\mathbf x)(\mathbf y)(\mathbf z)(\mathbf h) = \sum_{i=1}^n\sum_{j=1}^n\sum_{k=1}^n h_iz_jy_k\partial_i\partial_j \partial_kf(\mathbf x)$$ However there isn't a way to represent this summation using matrices. What we would need is a way to get a scalar (or equivalently a $1\times 1$ matrix) out of three column matrices and some other type of matrix, but there's no way to do this that produces the correct result. What you need is the concept of a tensor. But this is usually not covered in multivariable calculus courses, so it's unlikely that you'll see the $k$th derivative of a function from $\Bbb R^n\to \Bbb R$. -What you should be able to do now though is to evaluate the third derivative of a (at least $3$ times differentiable) function $f:\Bbb R^n \to \Bbb R$ at the ordered $4$-tuple of points $(\mathbf x, \mathbf y, \mathbf z, \mathbf h)$. - -A little exposition on tensors -A $k$-tensor is a multilinear function from $k$ copies of a vector space to scalars. Thus $T: \underbrace{V\times V \times \cdots \times V}_{k\text{ times}} \to \Bbb R$, where $V$ is a vector space, is a $k$-tensor. (One little note: this isn't the full definition of a tensor, but it'll work for what we're doing). -From this we see that $Df(\mathbf x)$ defined by $[Df(\mathbf x)](\mathbf h) = \nabla f(\mathbf x)\cdot \mathbf h$ is a $1$-tensor and $D^2f(\mathbf x)$ defined by $[D^2f(\mathbf x)](\mathbf h_1,\mathbf h_2) = {\mathbf h_2}^T[Hf(\mathbf x)]\mathbf h_1$ is a $2$-tensor. Note that the matrix expressions make it clear that $Df(\mathbf x)$ is a linear function from $\Bbb R^n\to \Bbb R$ and $D^2f(\mathbf x)$ is a bilinear function from $\Bbb R^n\times\Bbb R^n\to \Bbb R$. 
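-As a concrete illustration, here is a small numerical sketch (assuming numpy; the test function $f(x) = x_0^2 x_1$ and the vectors are chosen only for illustration) of evaluating the triple sum for $D^3f$ as a contraction of a $2\times2\times2$ array, something no single matrix product can express:
-
-    import numpy as np
-
-    # third partials of f(x) = x0^2 * x1 at any point: T[i,j,k] = d_i d_j d_k f(x)
-    T = np.zeros((2, 2, 2))
-    T[0, 0, 1] = T[0, 1, 0] = T[1, 0, 0] = 2.0   # the only nonzero third partials
-
-    y = np.array([1.0, 2.0])
-    z = np.array([3.0, 4.0])
-    h = np.array([5.0, 6.0])
-
-    # D^3 f(x)(y)(z)(h) = sum_{i,j,k} h_i z_j y_k T[i,j,k]
-    print(np.einsum('ijk,i,j,k->', T, h, z, y))  # 136.0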
-Then we know that the third derivative $D^3f(\mathbf x)$ should be defined by $$[D^3f(\mathbf x)](\mathbf h_1, \mathbf h_2, \mathbf h_3) = \sum_{i,j,k} (\mathbf h_3)_i(\mathbf h_2)_j(\mathbf h_1)_k\partial_i\partial_j\partial_k f(\mathbf x)$$ -Using this and continuing in the obvious way, we can see that the $k$th order Taylor polynomial of a $k$-times differential function $f:\Bbb R^n\to \Bbb R$ at the point $\mathbf x+\mathbf h$ is given by $$P_k(\mathbf x + \mathbf h) = f(\mathbf x) + [Df(\mathbf x)](\mathbf h) + \frac{1}{2!}[D^2f(\mathbf x)](\mathbf h,\mathbf h) + \cdots + \frac{1}{k!}[D^kf(\mathbf x)](\underbrace{\mathbf h,\cdots, \mathbf h}_{k \text{ arguments}})$$ where $D^nf(\mathbf x)$ with $n\in\{1,2,\dots, k\}$ is defined as above. -Compare this with the scalar version of Taylor's theorem: Let $f:\Bbb R\to \Bbb R$ be a $k$-times differentiable function. Then the $k$th order Taylor polynomial of $f$ at $x+h$ is given by $$P_k(x+h) = f(x) + f'(x)h + \frac1{2!}f''(x)h^2 + \cdots + \frac{1}{k!}f^{(k)}(x)h^k$$<|endoftext|> -TITLE: Why does A005179 (smallest number with N factors) have spikes at prime numbers of factors? -QUESTION [6 upvotes]: A005179 is a list of the lowest number with n factors, for each n. The list has rather extreme local maxima when n is prime. Why? - n lowest number with n factors - 1 1 - 2 2 - 3 4 - 4 6 - 5 16 - 6 12 - 7 64 - 8 24 - 9 36 -10 48 -11 1024 -12 60 -13 4096 -14 192 -15 144 -16 120 -17 65536 -18 180 -19 262144 -20 240 - -REPLY [4 votes]: There's a nice theorem which is directly applicable: - -If $m=p_1^{a_1}p_2^{a_2}\dots p_k^{a_k}$ where the $p_i$s are distinct primes, then the number of factors of $m$ is $$(a_1+1)(a_2+1)\dots(a_k+1)$$ - -In the above, each $a_i+1$ term represents choosing which power of $p_i$ is used in the factor. -Now, let $m$ be the $q$-th term the A005179 sequence where $q$ is prime, and let $m=p_1^{a_1}p_2^{a_2}\dots p_k^{a_k}$. From the theorem, we have -$$q=(a_1+1)(a_2+1)\dots(a_k+1)$$ -However, since $q$ is prime it can only be written as a product of $1$ and itself - hence for some $i$ we have $a_i+1=q$ while for $j\ne i$ we have $a_j+1=1$. This means the $q$-th term in the sequence is just $m=p_i^{q-1}$ for some prime $p_i$, and since $2$ is the smallest prime, the $q$-th term is $2^{q-1}$. -It is an extreme spike because (in loose terms) the $n$-th term where $n$ is composite can be written with fewer powers of primes. For example the $10$-th entry is $48=2^4\cdot3$ requiring only $5$ powers of primes while the $11$-th entry is $1024=2^{10}$, requiring $10$ powers of a prime. In particular, the $(q-1)$-th entry is $\le 2^{(q-3)/2}\cdot3$ and the $(q+1)$-th entry is $\le 2^{(q-1)/2}\cdot3$ (where $q$ is an odd prime).<|endoftext|> -TITLE: When exponential of a matrix is diagonal? -QUESTION [8 upvotes]: $\newcommand{\C}{\mathbb{C}}$ -$\newcommand{\R}{\mathbb{R}}$ -$\newcommand{\ga}{\gamma}$ -$\newcommand{\al}{\alpha}$ -Let $A$ be an $n \times n$ real matrix. Assume $e^A$ is a diagonal matrix. Does this imply $A$ is diagonal? -If not, then for which matrices, their exponential is diagonal? Can we obtain some nice characterization? -(The complex case might also be interesting) -Update: -Define $G=\C^* = \C \setminus \{0\}$ to be the group of nonzero comlplex numbers, with multiplication. -Let $H=\{\begin{pmatrix}a&b\\-b&a\end{pmatrix}|a,b \in \R \, , \, a^2+b^2 \neq 0 \}$ be a group of $2 \times 2$ real matrices (with the operation of matrix multiplication). 
-Look at the following group isomorphism: -$\phi:G \to H, \phi(a+ib)=aI+bJ$, where $I=\begin{pmatrix}1&0\\0&1\end{pmatrix} \, , \, J = \begin{pmatrix}0&1\\-1&0\end{pmatrix}$ -By the theory of Lie groups, we know: -$$(1): \, \, \phi(\exp^G(v))=\exp^H(\phi_*(v))$$ Since $H$ is a subgroup of $GL_2(\R)$ $\exp^H$ is just the usual matrix exponential. -Note that $T_eG \cong \C$, We claim, $\exp^G(z)=e^z$ (where $e^z$ is the standard complex exponential). -Proof: -Let $v \in T_eG = \C$. Define $\ga:I \to \C^*$, $\ga(t)=e^{tv}$. Then $\ga$ satisfies $\ga(0)=1,\dot \ga (0) = v, \ga(t+s)=\ga(t)\cdot\ga(s)$, so $\ga$ is a one-parameter subgroup in $G=\C^*$ with initial velocity $v$. -By definition, $\exp^G(v)=\ga(1)=e^v$, as required. -Hence, equation $(1)$ becomes $(1'):$ -$$(1'): \, \, \phi(e^v)=\exp(\phi_*(v))$$ -Let us calculate $\phi^*=(d\phi)_e:T_eG \to T_IH \subset M_2$. -Let $v=x+iy \in T_eG=\C$. Define $\al(t)=1+tv=(1+tx)+i(ty),\dot \al(0)=v$, and -$$\phi^*(v)=(d\phi)_e(v)=\frac{d}{dt}\big(\phi(\al(t))\big)|_{t=0}= \frac{d}{dt}\big( \begin{pmatrix}1+tx&ty\\-ty&1+tx\end{pmatrix}\big)|_{t=0}=\begin{pmatrix}x&y\\-y&x\end{pmatrix}$$ -So, Hence, equation $(1')$ becomes $(1''):$ -$$(1''): \, \, \phi(e^{(x+iy)})=\exp(\begin{pmatrix}x&y\\-y&x\end{pmatrix})$$, -Finally, since $\phi(e^{(x+iy)})=\phi(e^x\cos y+e^x\sin y)=\begin{pmatrix}e^x\cos y&e^x\sin y\\-e^x\sin y&e^x\cos y\end{pmatrix} $ -we get the following formula: -$$\exp(\begin{pmatrix}x&y\\-y&x\end{pmatrix}) = \begin{pmatrix}e^x\cos y&e^x\sin y\\-e^x\sin y&e^x\cos y\end{pmatrix} $$ -In particualr, taking $x=0,y=t$ we get: -$$\exp(\begin{pmatrix}0&t\\-t&0\end{pmatrix}) = \begin{pmatrix}\cos t&\sin t\\-\sin t&\cos t\end{pmatrix} $$ - -REPLY [6 votes]: If $e^A$ is diagonal, then $A$ is not necessarily diagonal. -Take $A=\begin{pmatrix}0&\pi\\-\pi&0\end{pmatrix}$. Then $e^A=-I_2$. -EDIT. Anwer to Asaf. Assume that $e^A=diag(\lambda_i)$ where the $(\lambda_i)$ are non-zero distinct; since $A$ and $e^A$ commute, $A$ is diagonal; then necessarily $e^A$ admits at least one double eigenvalue. Moreover, we can prove that if $e^A$ is diagonalizable, then $A$ is also diagonalizable. -Thus, when $n=2$, $e^A=\lambda I_2$ with $\lambda\not= 0$; thus $spectrum(A)=\{u,v\}$ where $u\not= v$ and $e^u=e^v=\lambda$, that implies $v=u+2ki\pi$ with $k\in \mathbb{Z}^*$.<|endoftext|> -TITLE: Combinatorics and trigonometry identity -QUESTION [5 upvotes]: Prove the following: -$\displaystyle\prod_{n=1}^{180}\left(\cos{\left(\dfrac{n\pi}{180}\right)}+2\right)=\displaystyle\sum_{n=0}^{89}\binom{180}{2n+1}\left(\dfrac{3}{4}\right)^n$. -We can state this as $\displaystyle \binom{180}{1}\left(\dfrac{3}{4}\right)^0+ \binom{180}{3}\left(\dfrac{3}{4}\right)^1+\cdots+\binom{180}{179}\left(\dfrac{3}{4}\right)^{89} = \left(\cos\left(\dfrac{\pi}{180}\right)+2\right)\left(\cos\left(\dfrac{2\pi}{180}\right)+2\right) \cdots \left(\cos\left(\dfrac{180\pi}{180}\right)+2\right)$ -The problem I am encountering is that there is no general result we can prove about this for $n$ it seems. I tried using induction on $n$ to prove this but it fails for the base case. Also, combinations and cosines seem unrelated, so how could I prove the equation is true? - -REPLY [3 votes]: First rewrite $\cos(x)$ as $\cos(x) = \dfrac{e^{ix}+e^{-ix}}{2}$. Then we also have $\dfrac{1}{2}(4+e^{ix}+e^{-ix}) = \cos(x) + 2$. 
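-(As an aside, the identity being proved can be sanity-checked numerically before doing any algebra; a quick Python sketch, using math.comb from Python 3.8+:
-
-    import math
-
-    lhs = 1.0
-    for n in range(1, 181):
-        lhs *= math.cos(n * math.pi / 180) + 2
-
-    rhs = sum(math.comb(180, 2*n + 1) * 0.75**n for n in range(90))
-
-    print(lhs, rhs)   # the two values agree to about 12 significant digits
-
-Both sides match up to floating-point error.)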
We can factorize the left-hand side of the last equality as $\dfrac{1}{2}(4+e^{ix}+e^{-ix}) = \dfrac{1}{2}\left(a+\dfrac{e^{ix}}{a}\right)\left(a+\dfrac{e^{-ix}}{a} \right)$ where $a^2+\dfrac{1}{a^2} = 4$. This gives four solutions for $a$, so I will pick the positive one, which is $\dfrac{1+\sqrt{3}}{\sqrt{2}}$. Now notice that this allows us to evaluate the left-hand side of $(1)$. We have that $\dfrac{1}{2a^2}(a^2+e^{ix})(a^2+e^{-ix}) = \cos(x) + 2$. Then make the substitution $x = \dfrac{n\pi}{180}$ to get that $$\dfrac{1}{2a^2}(a^2+e^{\frac{in\pi}{180}})(a^2+e^{-\frac{in\pi}{180}}) = \cos \left ( \dfrac{n\pi}{180} \right) + 2,$$ which we can use to rewrite the left-hand side of $(1)$. Thus, $$\displaystyle \prod_{n=1}^{180} \left (\cos \left( \dfrac{n\pi}{180} \right) + 2 \right) = \prod_{n=1}^{180} \left(\dfrac{1}{2a^2}(a^2+e^{\frac{in\pi}{180}})(a^2+e^{-\frac{in\pi}{180}}) \right).$$ Then see that the numbers $e^{\frac{in\pi}{180}}$ for $n = -180,-179,\ldots,-1,0,1,\ldots,179$ are the roots of the equation $x^{360} = 1$. Now, we see that $$\prod_{n=1}^{180} \left(\dfrac{1}{2a^2}(a^2+e^{\frac{in\pi}{180}})(a^2+e^{-\frac{in\pi}{180}}) \right) = \dfrac{1}{2^{180}a^{360}}\dfrac{(a^{720}-1)(a^2-1)}{a^{2}+1}.$$ To calculate the RHS, we see that $\displaystyle \sum_{n = 0}^{89} \binom{180}{2n+1} \left( \dfrac{3}{4} \right)^n = \dfrac{\left(1+\dfrac{\sqrt{3}}{2}\right)^{180} - \left(1-\dfrac{\sqrt{3}}{2}\right)^{180}}{\sqrt{3}}.$ It remains to show that $$\dfrac{1}{2^{180}a^{360}}\dfrac{(a^{720}-1)(a^2-1)}{a^{2}+1} = \dfrac{\left(1+\dfrac{\sqrt{3}}{2}\right)^{180} - \left(1-\dfrac{\sqrt{3}}{2}\right)^{180}}{\sqrt{3}}.$$ To see this, first observe that $1-\dfrac{\sqrt{3}}{2} = \dfrac{1}{2a^2}$, $a^2 = 1+\dfrac{\sqrt{3}}{2}$, and $$\dfrac{1}{2^{180}a^{360}}\dfrac{(a^{720}-1)(a^2-1)}{a^{2}+1}=\left[\left(\dfrac{a^2}2\right)^{180}-\left(\dfrac1{2a^2}\right)^{180}\right]\cdot\dfrac{a^2-1}{a^2+1}.$$ Then finally we see that $\dfrac{a^2-1}{a^2+1} = \dfrac{1}{\sqrt{3}}$, proving the desired result.<|endoftext|> -TITLE: Prove $n! \leq n^n$ -QUESTION [5 upvotes]: Prove by least counter example for all positive integers n. $$ n! \leq n^n$$ -I keep getting stuck after proving the least element of the set of counterexamples can not equal 1. Any suggestions would be helpful. - -REPLY [2 votes]: $n!$ is the number of permutations of $S=\{1, \dots, n \}$, while $n^n$ is the number of functions from $S$ to itself. Permutations are functions, so clearly $n! \leq n^n$.<|endoftext|> -TITLE: Primes on the form $p^2-1$ -QUESTION [8 upvotes]: Prove that there exists a unique prime number of the form $p^2 − 1$ where $p\geq 2$ is an integer. -I have no idea how to approach the question. any hints will be greatly appreciated - -REPLY [6 votes]: This question is asked so frequently on this website that the answer is immediately and unthinkingly coughed up in its shortest form each time. -What I'd like to do is show you one way you might be able to come up with the answer yourself if you don't already know it. -Does $p$ itself have to be prime? I will assume not, so I prefer to use the letter N instead of the letter P. Alright, then, can $n^2 - 1$ be prime? -Suppose $n$ is coprime to $3$. Then $n^2 \equiv 1 \pmod 3$ and therefore $n^2 - 1$ is a multiple of $3$. In every case except $n = -2$ or $2$, we will see that $n^2 - 1$ is a nontrivial multiple of $3$ (since the squares of negative numbers are positive numbers, I won't say anything further about negative numbers in this answer). 
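-A quick brute-force confirmation of this first step (a Python sketch):
-
-    from math import gcd
-
-    for n in range(2, 200):
-        if gcd(n, 3) == 1:
-            assert (n*n - 1) % 3 == 0   # 3 divides n^2 - 1 whenever 3 does not divide n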
-Then for $n^2 - 1$ to be prime with $n > 2$, $n$ has to be a multiple of $3$. If $n$ is an odd multiple of $3$, then $n^2$ is odd also but $n^2 - 1$ is an even composite number, that is, a nontrivial multiple of $2$. -Now the only possibility left is that $n$ is a multiple of $6$. Checking a few small cases, we see a pattern emerge: - -$6^2 - 1 = 35 = 5 \times 7$ -$12^2 - 1 = 143 = 11 \times 13$ -$18^2 - 1 = 323 = 17 \times 19$ -$24^2 - 1 = 575 = 5^2 \times 23$ but wait! That's $23 \times 25$. - -This suggests $(n - 1)(n + 1)$. Applying FOIL, we get $$(n - 1)(n + 1) = n^2 + n - n - 1 = n^2 - 1.$$<|endoftext|> -TITLE: Another polylog integral -QUESTION [8 upvotes]: In the interest of housekeeping, I recently took a look at what what polylogarithm integrals are still in the unanswered questions list. Some of those questions have probably languished there because the solutions methods are presumably too tedious and too similar to previously answered questions make carrying out a solution worth while. -A few of the integrals with products of four or more logarithms in the numerator gave more trouble than I anticipated. After playing around with substitutions/integration-by-parts/et-cetera, it seems each the unsolved integrals I looked at can be boiled down to the following integral: - -For $\left|z\right|\le1$, define $\mathcal{I}{\left(z\right)}$ via the integral representation - $$\mathcal{I}{\left(z\right)}:=-\frac16\int_{0}^{z}\frac{\ln^{3}{\left(1-x\right)}\ln{\left(1+x\right)}}{x}\,\mathrm{d}x.\tag{1}$$ - -Question: Can integral $(1)$ be evaluated in terms of polylogarithms? - -Notes: My best idea for a place to begin was to somehow reduce the integral to one with a single fourth-power logarithm in the numerator so as to make subsequent substitutions less of a hassle. I succeeded in reducing $\mathcal{I}{\left(z\right)}$ to the following integral: - -$$J_{1}{\left(z\right)}=\int_{0}^{z}\frac{\ln^{4}{\left(\frac{1-y}{\left(1+y\right)^2}\right)}}{y}\,\mathrm{d}y.\tag{2}$$ - -Everything I've tried after that doesn't appear to be going anywhere though. Can somebody perhaps help me out? -Thanks. - -REPLY [2 votes]: Each logarithm in the integrand can be written as an integral of a rational function. If you expand everything out, $\mathcal{I}(z)$ is a five-fold nested integral of rational functions. -It is a theorem of Kummer that three-fold nested integrals of rational functions can be expressed in terms of the logs, dilogs, and trilogs. For four or more nested integral, in general one will need the multiple polylogs -$$ -\mathrm{Li}_{s_1,\ldots,s_k}(z_1,\ldots,z_k):=\sum_{n_1>\ldots>n_k\geq 1}\frac{z_1^{n_1}\ldots z_k^{n_k}}{n_1^{s_1}\ldots n_k^{s_k}}, -$$ -where $s_1,\ldots,s_k$ are positive integers. For $k=1$ this is the ordinary polylog. An $N$-fold nested integral of rational functions can be expressed in terms of polylogs with $s_1+\ldots+s_k\leq N$. -Erik Panzer has written a nice Maple package "HyperInt" than can perform this kind of computation (you can find the package on his webpage). For your integral, HyperInt produces -$$ -\mathcal{I}(z)=\mathrm{Li}_{1,1,1,1,1}\left({\frac {-1+z}{z}},1,1,{\frac {1+z}{-1+z}},{\frac {z}{1+z}}\right)+\mathrm{Li}_{1,1,1,1,1}\left({\frac {-1+z}{z}},1,{\frac {1+z}{-1+z}},{\frac {-1+z}{1+z}},{\frac {z}{-1+z}}\right)\\ -+\mathrm{Li}_{1,1,1,1,1}\left({\frac {-1+z}{z}},{\frac {1+z}{-1+z}},{\frac {-1+z}{1+z}},1,{\frac {z}{-1+z}}\right)+\mathrm{Li}_{1,1,1,1,1}\left({\frac {1+z}{z}},{\frac {-1+z}{1+z}},1,1,{\frac {z}{-1+z}}\right). 
-$$ -This still leaves the question: can $\mathcal{I}(z)$ be expressed in terms of ordinary polylogs? I do not know.<|endoftext|> -TITLE: Fundamental group of real projective plane minus one point -QUESTION [8 upvotes]: I understand that the $\mathbb{R}P^2$ is homeomorphic to the unit disc with boundary points identified with their antipodes. But even if we puncture the disc and stretch it from the origin to let it be retracted to the boundary, how are we gonna justify that the identified boundary is still homeomorphic to $S^1$(or is it?)? - -REPLY [5 votes]: The punctured $\Bbb{RP}^2$ deformation retracts onto the boundary circle of the disc with antipodal points identified, $\Bbb{RP}^1 = S^1/(x \sim -x)$. This is homeomorphic to the circle, by the map $S^1 \to \Bbb{RP}^1$, $z \mapsto z^2$. This is a continuous bijection etc so a homeomorphism, as desired, and thus the punctured projective plane is homotopy equivalent to $S^1$, and has fundamental group $\Bbb Z$. -(One can be far more precise than this: the punctured plane is homeomorphic to the (open) Mobius band.)<|endoftext|> -TITLE: Closed form for $\int_0^{\infty}\sin(x^n)\mathbb{d}x$ -QUESTION [8 upvotes]: I was wondering if anyone knows a closed form for -$$\mathrm{I} = \int_0^{\infty}\sin(x^n)\mathbb{d}x$$ -Preliminary evaluations on Wolfram Alpha seem to yield something like this: -$$\mathrm{I} = k\sin\left(\frac{\pi}{2n}\right)\Gamma\left(\frac an \right)$$ -where $k$ is a proportionality constant, normally $1$, and $a$ is a constant, normally $1$ as well. - -REPLY [8 votes]: It has a closed form - -$$ \int_{0}^{\infty} \exp(i x^{1/\alpha}) \, \Bbb{d}x = \Gamma(1+\alpha)e^{i\pi\alpha/2}, \quad \alpha \in (0, 1). $$ - -Applying the substitution $x \mapsto x^{\alpha}$, we can write -$$ I(\alpha) := \int_{0}^{\infty} \exp(ix^{1/\alpha}) \, \Bbb{d}x = \int_{0}^{\infty} \alpha x^{\alpha-1} e^{ix} \, \Bbb{d}x, \tag{1} $$ -which is much easier to work with. - -Complex-analytic method. Rotate the contour of $\text{(1)}$ by $90^{\circ}$ degrees to have -$$ I(\alpha) = \alpha \int_{0}^{i\infty} z^{\alpha-1} e^{iz} \, dz. $$ -This is a standard technique, and we can justify this by applying the Cauchy integration formula to a quadrant contour and then utilizing the Jordan's lemma. -Then by the substitution $z \mapsto ix$, we have -$$ I(\alpha) = \alpha i^{\alpha} \int_{0}^{\infty} x^{\alpha-1}e^{-x} \, dx = \Gamma(1+\alpha)i^{\alpha}. $$ - -Real-analytic method. Since the right-hand side of $\text{(1)}$ exists as improper-integral sense, it follows from the integral version of the Abel's theorem that -$$ I(\alpha) = \int_{0}^{\infty} \alpha x^{\alpha-1} e^{ix} \, \Bbb{d}x = \lim_{\epsilon \downarrow 0} \int_{0}^{\infty} \alpha x^{\alpha-1} e^{ix}e^{-\epsilon x} \, \Bbb{d}x. \tag{2} $$ -Using the formula -$$ \frac{\Gamma(1-\alpha)}{x^{1-\alpha}} = \int_{0}^{\infty} u^{-\alpha}e^{-xu} \, \Bbb{d}u $$ -and the Fubini's theorem, we can write -\begin{align*} -\int_{0}^{\infty} \alpha x^{\alpha-1} e^{ix}e^{-\epsilon x} \, \Bbb{d}x -&= \frac{\alpha}{\Gamma(1-\alpha)} \int_{0}^{\infty} \left(\int_{0}^{\infty} u^{-\alpha}e^{-xu} \, \Bbb{d}u\right) e^{-(\epsilon-i) x} \, \Bbb{d}x \\ -&= \frac{\alpha}{\Gamma(1-\alpha)} \int_{0}^{\infty} \left(\int_{0}^{\infty} e^{-(u+\epsilon-i) x} \, \Bbb{d}x\right) u^{-\alpha} \, \Bbb{d}u \\ -&= \frac{\alpha}{\Gamma(1-\alpha)} \int_{0}^{\infty} \frac{u^{-\alpha}}{u+\epsilon-i} \, \Bbb{d}u. -\end{align*} -(Notice that we cannot apply the Fubini's theorem directly when $\epsilon = 0$ due to integrability issue. 
This explains why we are considering a regularized version $\text{(2)}$ of the original integral.)
-Taking $\epsilon \downarrow 0$, we have
-$$ I(\alpha) = \frac{\alpha}{\Gamma(1-\alpha)} \int_{0}^{\infty} \frac{u^{1-\alpha} + iu^{-\alpha}}{u^2+1} \, \Bbb{d}u. \tag{3} $$
-Using the beta function identity, the last integral in $\text{(3)}$ can be computed explicitly as
-$$ I(\alpha) = \frac{\alpha}{\Gamma(1-\alpha)} \cdot \frac{\pi}{2}\left( \frac{1}{\sin(\pi\alpha/2)} + \frac{i}{\cos(\pi\alpha/2)} \right). $$
-Finally, from Euler's reflection formula and the sine double-angle formula we have
-$$ I(\alpha) = \Gamma(1+\alpha) e^{i\pi\alpha/2}. $$<|endoftext|>
-TITLE: Are all quintic polynomials of this type not solvable by radicals?
-QUESTION [8 upvotes]: The author of my textbook argues that the quintic polynomial $3x^5-15x+5$ is not solvable by radicals over $\mathbb{Q}$ by showing that the Galois group of $3x^5-15x+5$ over $\mathbb{Q}$ is isomorphic to $S_{5}$ (which is not solvable).
-But the argument given would seemingly apply to any quintic polynomial with integer coefficients that is irreducible over $\mathbb{Q}$ and has 3 distinct real roots and 2 non-real complex roots.
-Are all quintic polynomials of this type not solvable by radicals?
-
-REPLY [6 votes]: Yes, if $f$ is irreducible, then $5\mid [K:\mathbb{Q}]$ where $K$ is the splitting field of $f$. Hence, the Galois group $G$ of $f$ contains a 5-cycle. Furthermore, if it has only two non-real roots, then $G$ also contains a transposition.
-The result now follows from the fact that if $G \leq S_5$ contains both a transposition and a $5$-cycle, then $G = S_5$ (this holds more generally for $S_p$ with $p$ prime).<|endoftext|>
-TITLE: Is there a "computable" countable model of ZFC?
-QUESTION [12 upvotes]: Question
-Assuming ZFC is consistent (has a model), does there exist a set $S$ and a binary relation $\in_S$ on $S$ that satisfy the following?
-
-$S \subseteq \{0,1\}^*$ (this is the Kleene star, and in particular $S$ is countable);
-$S$ is decidable, i.e. there exists a Turing machine $C$ such that on input any binary string $x$, $C$ halts and accepts when $x \in S$ and halts and rejects otherwise;
-$\in_S$ is decidable, i.e. there exists a Turing machine $D$ such that on input any ordered pair $(x,y) \in S \times S$, $D$ halts and accepts when $x \in_S y$, and halts and rejects otherwise;
-$(S, \in_S)$ is a model of the axioms of ZFC.
-
-Details
-Essentially, I'm interested in whether every consistent theory has a "computable" model in the above sense. Of course, it is easy to see that theories like DLOWE (dense linear orders without endpoints) and even first-order Peano Arithmetic have such models--just let $S$ be the set of rational numbers and integers, respectively, encoded in binary in some canonical way, and it is a standard result that $<, +, \cdots$, etc. are computable operations. So I thought ZFC might be a good harder example to try--but I haven't been able to argue for the existence of a model of the required form.
-I posed this question to a friend and they objected that maybe every countable set, with any binary operation, can be considered computable in this sense--after all, you're allowed to assign the elements of the countable set to binary strings in any way you like. However, this is not true. Let $S$ be the vertices of a directed graph which consists of a single directed cycle for each integer $n$, where the cycle has length busy beaver of $n$. Let $\in_S$ be the binary relation corresponding to this directed graph. Then I have proven that $\in_S$ is not a decidable relation, no matter how you assign vertices in $S$ to strings.
So it is conceivable that a countable model of set theory is similarly not computable.
-
-REPLY [11 votes]: Great question! This was already considered at Mathoverflow; the answer is no. See https://mathoverflow.net/questions/12426/is-there-a-computable-model-of-zfc.
-
-I've made this community wiki so I don't get a reputation bonus from others' hard work; I would vote to close as a duplicate, but the system won't let me, since the duplicate question isn't on math.stackexchange.
-
-Note that, despite this, we can ask: how complicated must a countable model of ZFC be? Since ZFC is a computable first-order theory, any PA degree computes such a model, and there are many such degrees; in fact, there are PA degrees which are low, that is, whose jump is Turing reducible to the Halting problem (= as small as possible). Conversely, the arguments at the mathoverflow question cited above show that any countable model of ZFC is of PA degree, so this is an exact characterization.
-Note that very little specifically about ZFC is used here! Specifically, and addressing your general question: Henkin's construction shows that any PA degree computes a model of any computable first-order theory, and that conversely any degree which is not PA fails to compute a model of some computable first-order theory; though of course, for specific theories, the set of degrees of models of that theory might be more complicated. This set, by the way, has been studied a little; it is called the spectrum of the theory. See http://www.math.wisc.edu/~andrews/spectra.pdf.<|endoftext|>
-TITLE: Winning Strategy with Addition to X=0
-QUESTION [5 upvotes]: Problem:
-Two players play the following game. Initially, X=0. The players take turns adding any number between 1 and 10 (inclusive) to X. The game ends when X reaches 100. The player who reaches 100 wins. Find a winning strategy for one of the players.
-This is my solution, which hopefully you can comment on and verify:
-If I have 100 and win, then I must have had a number between 90 and 99 on my last turn. On the turn before that, my opponent must have 89 because then we will have a number between 90 and 99 on our last turn. On the turn before that, I want a number between 79 and 88 so that I could force my opponent to have 89 on their turn. On the turn before that, my opponent should have a 78 so that I can get to a number between 79 and 88. On the turn before that, I want a number between 68 and 77 so that I could force my opponent to have a 78 sum on his/her turn. Continuing in this manner,
-we see that our opponent must have the sums on his/her turn: 89,78,67,56,45,34,23,12, and 1.
-As the winner, I want to be in the following intervals of sums at each of my turns: 90-99,79-88,68-77,57-66,46-55,35-44,24-33,13-22,2-11.
-Thus, the winning strategy is to go first and add 1 to X=0. Then, no matter how our opponent plays, we can always choose a number between 1 and 10 to force our opponent to have one of the losing positions above and so I will win...
-
-REPLY [2 votes]: Community wiki answer so the question can be marked as answered:
-Yes, you are correct.
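-For a machine check, here is a small dynamic-programming sketch in Python; it recovers exactly the losing totals $1, 12, 23, \ldots, 89$ that your argument forces the opponent into:
-
-    # win[x]: the player about to move at total x can force a win
-    win = [False] * 101
-    for x in range(99, -1, -1):
-        win[x] = any(x + m == 100 or not win[x + m]
-                     for m in range(1, 11) if x + m <= 100)
-
-    print([x for x in range(100) if not win[x]])
-    # [1, 12, 23, 34, 45, 56, 67, 78, 89]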
Also note the links in the comments for further reading about similar games and a systematic way of solving them.<|endoftext|> -TITLE: $L^p$ convergence of a bounded sequence which converges almost everywhere -QUESTION [5 upvotes]: I'm having a little trouble with this homework problem: - -Suppose $\mu(X)<\infty$, $f_n\in L^1$, $f_n\to f$ a.e., and there exists $p>1$ and a constant $C>0$ such that $$\|f_n\|_p\leq C$$ for all $n.$ Prove that $f_n\to f$ in $L^p$. - -Here's what I've done so far. -Without loss of generality, suppose that $\mu(X)=1$. Since $\|f_n\|_p\leq C$, we have that $f_n\in L^p$ for all $n$. Furthermore, since $|f_n|^p\to |f|^p$, we have that $$\int |f|^p\leq \liminf \int|f_n|^p\leq C^p$$ by Fatou's lemma, so $f\in L^p$ as well. -Since $|f_n- f|\to 0$, we have that $|f_n-f|^p\to 0$. -I am thinking of applying the Dominated convergence theorem to $|f_n-f|^p$, but I cannot think of a bound. - -REPLY [2 votes]: I'll expand on the comment by Joey Zou: - -This is not true. Just take any sequence of functions which converges to $0$ a.e. and has constant $L^p$ norm. It is true that $f_n\rightarrow f$ in $L^q$ for any $q<p$.<|endoftext|> -TITLE: Proving inequality with constraint $abc=1$ -QUESTION [7 upvotes]: For positive reals $a,b,c$, with $abc=1$, prove that $$\frac{a^3+1}{b^2+1}+ \frac{b^3+1}{c^2+1}+\frac{c^3+1}{a^2+1} \geq 3$$ I tried the substitution $x/y,y/z,z/x$, but it didn't give me anything. What else to do? Thanks. - -REPLY [4 votes]: There is already a full and great answer. This is only an alternative -using AM-GM instead of the rearrangement inequality: -From the AM-GM inequality we have -$$ \frac 13 \left(\frac{a^3+1}{b^2+1}+ \frac{b^3+1}{c^2+1}+\frac{c^3+1}{a^2+1} \right) \ge \left(\frac{a^3+1}{b^2+1} \cdot \frac{b^3+1}{c^2+1} \cdot \frac{c^3+1}{a^2+1} \right)^{1/3} $$ -therefore it suffices to show that -$$ \frac{a^3+1}{a^2+1} \cdot \frac{b^3+1}{b^2+1} \cdot \frac{c^3+1}{c^2+1} \ge 1 \, . $$ -From -$$ 0 \le (a^2 - 1)(a-1) = 2(a^3 +1) - (a^2+1)(a+1) $$ -it follows that -$$ \frac {a^3+1}{a^2+1} \ge \frac{a+1}{2} \, , $$ -this is the crucial estimate given by Macavity in the comments above. -We continue with -$$ \frac{a+1}{2} \ge \sqrt{1 \cdot a} = \sqrt a \, $$ -using AM-GM again. -The same holds for $b$ and $c$, this gives -$$ \frac{a^3+1}{a^2+1} \cdot \frac{b^3+1}{b^2+1} \cdot \frac{c^3+1}{c^2+1} \ge \sqrt {abc} = 1 \, . $$<|endoftext|> -TITLE: Is it possible to characterize completeness of a normed vector space by convergence of Neumann series? -QUESTION [30 upvotes]: If $X$ is a normed vector space and if for each bounded operator $T \in B(X)$ with $\| T\| < 1$, the operator ${\rm id} - T$ is boundedly invertible, does it follow that $X$ is complete? - -Context: -It is well known that if $X$ is a Banach space and if $T \in B(X) = B(X,X)$ is a bounded linear operator on $X$ with $\| T \| <1$, then the Neumann series $\sum_{n=0}^\infty T^n$ converges (in the operator norm) to $({\rm id} - T)^{-1}$. In particular, ${\rm id} - T$ is invertible. -There are counterexamples to this fact if we do not assume $X$ to be complete. For example, we can take $X = \ell_0 (\Bbb{N})$ (the finitely supported sequences) and $T = \frac{1}{2} S$, where $S$ is the right shift operator. In this case, it is easy to see that $\sum_{n=0}^\infty T^n$ does not converge to a well-defined operator from $X$ to $X$.
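To make the divergence concrete, here is a minimal exact-arithmetic sketch in Python (the helper names are mine, not from the original post); it applies the partial sums of the Neumann series to the first standard basis vector of $\ell_0(\Bbb N)$ and watches the supports grow without bound:

    from fractions import Fraction

    def T(v):
        # T = (1/2) * (right shift) on a finitely supported sequence
        return [Fraction(0)] + [x / 2 for x in v]

    def add(u, v):
        n = max(len(u), len(v))
        u = u + [Fraction(0)] * (n - len(u))
        v = v + [Fraction(0)] * (n - len(v))
        return [a + b for a, b in zip(u, v)]

    e0 = [Fraction(1)]
    partial, power = e0, e0
    for n in range(1, 7):
        power = T(power)               # T^n e0
        partial = add(partial, power)  # sum_{k=0}^{n} T^k e0
        print([str(x) for x in partial])

The $n$-th partial sum is $(1, 1/2, \dots, 1/2^n, 0, \dots)$, so the only candidate limit is the full geometric sequence, which is not finitely supported.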
-After I came up with the above counterexample, I wondered if we can characterize completeness of the normed vector space $X$ by the above property, as in the question stated above. -Thoughts on the problem: - -Equivalently, we could require that $\sum_{n=0}^\infty T^n$ converges to a well-defined operator from $X\to X$ as soon as $\|T\|<1$, since in the completion $\overline{X}$, we still know that $T$ extends to a continuous linear operator $\overline{T} : \overline{X} \to \overline{X}$ with $\| \overline{T} \| = \| T\|<1$, so that $S := {\rm id_{\overline{X}}} - \overline{T}$ is invertible with $S^{-1} = \sum_{n=0}^\infty \overline{T}^n$ and the restriction of $S^{-1}$ to $X$ is the inverse of ${\rm id} - T$, so that $({\rm id}_X - T)^{-1} = \sum_{n=0}^\infty T^n$. -I know that $X$ is complete iff $B(X)$ is, so that it would suffice to show that $B(X)$ is complete. -To show that a normed vector space $Y$ is complete, it suffices to show that "absolute convergence" of a series implies convergence, or, even more restrictively, that if $\|x_n\|\leq 2^{-n}$ for all $n$, then the series $\sum_{n=1}^\infty x_n$ converges in $Y$. -My problem with applying observation 3 to $Y = B(X)$ is that we only know that the statement for 3 is true for $x_n = T^n$ with suitable $T$, which seems to be too restrictive. -In fact, I don't know how to construct any kind of nontrivial bounded operators on a general normed vector space $X$, apart from operators of the form $x \mapsto \varphi(x) \cdot x_0$ (and linear combinations of those), where $\varphi $ is a bounded functional on $X$ and $x_0 \in X$. -But for operators as above (i.e. with finite dimensional range), convergence of the series $\sum_{n=0}^\infty T^n$ is always true, since in fact we only need to consider a finite dimensional subspace, which certainly is complete. - -REPLY [12 votes]: The answer is "No". Take your favorite infinite-dimensional Banach space $Y$ and choose (alas, this requires AC, so if you are not a believer, stop reading here) some discontinuous linear functional $\psi$ on $Y$. Let $X=Ker(\psi)$ with the norm inherited from $Y$. Assume now that $A:X\to X$ has norm less than $1$. Then it extends by continuity to $Y$ and enjoys the property $AX\subset X$, so $\psi\circ A$ vanishes on $X=Ker(\psi)$. We want to show that if $y-Ay=x\in X$, then necessarily $y\in X$. Assume not. Then $\psi(y)\ne 0$ and $(\psi\circ A)(y)=\psi(y)$. Hence the linear functionals $\psi\circ A$ and $\psi$ coincide on the entire space $Y$, so $\psi(z-Az)=0$ for all $z\in Y$, which is absurd because $z-Az$ is just an arbitrary element of $Y$ and $\psi$ is certainly not identically $0$. -The trick is, of course, that it is not so easy for a continuous linear operator to preserve the kernel of a discontinuous linear functional, so the operators acting from $X$ to $X$ are rather few in a certain sense, which makes the Neumann series convergence condition vacuous exactly when it starts getting interesting.<|endoftext|> -TITLE: What is the Maximum Value of $x^T Ax$? -QUESTION [7 upvotes]: Let $A$ be the matrix $\begin{bmatrix}3 &1 \\ 1&2\end{bmatrix}$. What is the maximum value of $x^T Ax$ where the maximum is taken over all $x$ that are the unit eigenvectors of $A$? - -$5$ -$\frac{(5 + \sqrt{5})}{2}$ -$3$ -$\frac{(5 - \sqrt{5})}{2}$ - - -Eigenvalues of $A$ are $\frac{(5 \pm \sqrt{5})}{2}$ - -My question is: what is $x^T Ax$ ? Can you explain a little bit, please?
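A remark ahead of the detailed computation below, not from the original thread: if $x$ is a unit eigenvector with $Ax=\lambda x$, then $x^TAx=\lambda x^Tx=\lambda$, so the maximum over unit eigenvectors is just the largest eigenvalue $\frac{5+\sqrt{5}}{2}$. A minimal numerical cross-check in Python:

    import numpy as np

    A = np.array([[3.0, 1.0], [1.0, 2.0]])
    eigvals, eigvecs = np.linalg.eigh(A)   # unit eigenvectors in the columns

    for lam, x in zip(eigvals, eigvecs.T):
        print(lam, x @ A @ x)              # each pair of numbers agrees

    print(eigvals.max(), (5 + np.sqrt(5)) / 2)   # both are ~3.618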
- -REPLY [2 votes]: Here $A=\begin{bmatrix} 3 &1 \\ 1 & 2 \end{bmatrix}_{2\times 2}$ -For finding the eigenvalues: -Characteristic equation: $|A-\lambda I|=0$, where $I$ is the identity matrix. -$\Rightarrow\left|\begin{bmatrix} 3 &1 \\ 1 & 2 \end{bmatrix}-\lambda\begin{bmatrix} 1&0 \\ 0 & 1 \end{bmatrix}\right|=0$ -$\Rightarrow \begin{vmatrix} 3-\lambda &1 \\ 1 & 2-\lambda \end{vmatrix}=0$ -$\Rightarrow(3-\lambda)(2-\lambda)-1=0$ -$\Rightarrow 6-3\lambda-2\lambda+\lambda^{2}-1=0$ -$\Rightarrow \lambda^{2}-5\lambda+5=0$ -Now, $\lambda =\frac{-(-5)\pm \sqrt{25-20}}{2}$, i.e. $\lambda =\frac{5\pm \sqrt{5}}{2}$. -So $\lambda_{1} =\frac{5+\sqrt{5}}{2}$ and $\lambda_{2} =\frac{5-\sqrt{5}}{2}$. -Let the eigenvector be $X=\begin{bmatrix} x_{1}\\x_{2} \end{bmatrix}_{2\times 1}$. -Now find the eigenvectors: -$AX=\lambda X$ -$\Rightarrow AX-\lambda X=\begin{bmatrix} 0 \end{bmatrix}$ -$\Rightarrow(A-\lambda I )X=\begin{bmatrix} 0 \end{bmatrix}$ -$\Rightarrow \begin{bmatrix} 3-\lambda &1 \\ 1 & 2-\lambda \end{bmatrix}\begin{bmatrix} x_{1}\\x_{2} \end{bmatrix}=\begin{bmatrix} 0\\0 \end{bmatrix}\quad(1)$ -Put $\lambda=\frac{5+\sqrt{5}}{2}$: -$\Rightarrow \begin{bmatrix} 3-(\frac{5+\sqrt{5}}{2}) &1 \\ 1 & 2-(\frac{5+\sqrt{5}}{2}) \end{bmatrix}\begin{bmatrix} x_{1}\\x_{2} \end{bmatrix}=\begin{bmatrix} 0\\0 \end{bmatrix}$ -$\Rightarrow \begin{bmatrix}\frac{6-5-\sqrt{5}}{2} &1 \\ 1 &\frac{4-5-\sqrt{5}}{2} \end{bmatrix}\begin{bmatrix} x_{1}\\x_{2} \end{bmatrix}=\begin{bmatrix} 0\\0 \end{bmatrix}$ -$\Rightarrow \begin{bmatrix}\frac{1-\sqrt{5}}{2} &1 \\ 1 &\frac{-1-\sqrt{5}}{2} \end{bmatrix}\begin{bmatrix} x_{1}\\x_{2} \end{bmatrix}=\begin{bmatrix} 0\\0 \end{bmatrix}$ -Perform the row operation $R_{1}\rightarrow\frac{1+\sqrt{5}}{2}R_{1}$: -$\Rightarrow \begin{bmatrix}(\frac{1-\sqrt{5}}{2})\cdot(\frac{1+\sqrt{5}}{2})&\frac{1+\sqrt{5}}{2} \\ 1 &\frac{-1-\sqrt{5}}{2} \end{bmatrix}\begin{bmatrix} x_{1}\\x_{2} \end{bmatrix}=\begin{bmatrix} 0\\0 \end{bmatrix}$ -$\Rightarrow \begin{bmatrix}\frac{1-5}{4}&\frac{1+\sqrt{5}}{2} \\ 1 &\frac{-1-\sqrt{5}}{2} \end{bmatrix}\begin{bmatrix} x_{1}\\x_{2} \end{bmatrix}=\begin{bmatrix} 0\\0 \end{bmatrix}$ -$\Rightarrow \begin{bmatrix}-1&\frac{1+\sqrt{5}}{2} \\ 1 &\frac{-1-\sqrt{5}}{2} \end{bmatrix}\begin{bmatrix} x_{1}\\x_{2} \end{bmatrix}=\begin{bmatrix} 0\\0 \end{bmatrix}$ -Again do the row operation $R_{2}\rightarrow R_{2}+R_{1}$: -$\Rightarrow \begin{bmatrix}-1&\frac{1+\sqrt{5}}{2} \\ 0 &0 \end{bmatrix}\begin{bmatrix} x_{1}\\x_{2} \end{bmatrix}=\begin{bmatrix} 0\\0 \end{bmatrix}$ -Here $\operatorname{rank}=1$ and the number of unknowns is $2$, so clearly the rank is less than the number of unknowns and the system has a one-parameter family of solutions; normalizing any nonzero solution gives a unit eigenvector. Moreover, for a unit eigenvector $x$ with $Ax=\lambda x$ we have $x^{T}Ax=\lambda x^{T}x=\lambda$, so the maximum of $x^{T}Ax$ over unit eigenvectors is the larger eigenvalue $\frac{5+\sqrt{5}}{2}$.<|endoftext|> -TITLE: If $F(x)=\ln{x}\ln{(1-x)}$, prove $F'(x)>0$ -QUESTION [6 upvotes]: Can anyone please help me with the following proof: -Let $$F(x)=\ln{x}\ln{(1-x)},\quad 0<x<\tfrac12,$$ prove that $$F'(x)>0,$$ -because -$$F'(x)=\dfrac{(1-x)\ln{(1-x)}-x\ln{x}}{x(1-x)}$$ -It suffices to show that -$$G(x)=(1-x)\ln{(1-x)}-x\ln{x}>0,\quad 0<x<\tfrac12.$$<|endoftext|> -TITLE: limit of $n \left(e-\sum_{k=0}^{n-1} \frac{1}{k!}\right) = ?$ -QUESTION [6 upvotes]: As in the title: -$$ \lim_{n\to\infty} n \left(e-\sum_{k=0}^{n-1} \frac{1}{k!}\right) = ? $$ -Numerically, it seems 0, but how to prove/disprove it?
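Such a numerical experiment can be done in exact rational arithmetic; here is a minimal sketch in Python, where partial_sum is a name introduced here:

    from fractions import Fraction

    def partial_sum(n):
        # sum_{k=0}^{n-1} 1/k! as an exact rational
        s, term = Fraction(0), Fraction(1)
        for k in range(n):
            s += term
            term /= k + 1
        return s

    E = partial_sum(60)   # stands in for e; its error is far below what we probe
    for n in (5, 10, 15, 20):
        print(n, float(n * (E - partial_sum(n))))

The printed values decay roughly like $n\,e/n!$, consistent with the limit being $0$.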
-I tried to show that the speed of convergence of the sum to e is faster than $1/n$ but with no success. - -REPLY [2 votes]: Late answer because something similar was tagged as "duplicate" (although that is not what a "duplicate" is). -So, I adjust the answer from there to the question here: -Taylor gives - -$e^1 = \sum_{k=0}^{n-1}\frac{1}{k!} + \frac{e^{\xi_n}}{n!}$ with $\xi_n \in (0,1)$ - -It follows -$$0\leq n(e-\sum_{k=0}^{n-1} \frac{1}{k!}) = n\frac{e^{\xi_n}}{n!} \leq \frac{e}{(n-1)!} \stackrel{n \to \infty}{\longrightarrow}0$$<|endoftext|> -TITLE: On discriminants and nature of an equation's roots? -QUESTION [9 upvotes]: Edited: All equations in the post are assumed to have all real coefficients and are minimal polynomials. -While trying to ascertain if the Brioschi quintic $B(x)=x^5-10cx^3+45c^2x-c^2=0$ could ever have $3$ real roots, I was led to the question if one can use the discriminant $D$ to settle this. -For $B(x)$, it is given by $D = 5^5c^8(-1+1728c)^2$ and it seems for quintics, if $D>0$, then there are either $0$ or $4$ complex roots $C=a+bi$ with $b\neq0$. Hence the Brioschi (with real coefficients) can never have $3$ real roots. -For other degrees $n$, by observing the data in the Database of Number Fields, I was able to come up with the table below. The second and third columns give the number of complex roots $C=a+bi$. -$$\begin{array}{|c|c|c|} \hline \text{Degree}\;n&\text{If}\;D>0&\text{If}\;D<0\\ 2&0&2\\ 3&2&0\\ 4&{0,4}&{2}\\ 5&{0,4}&{2}\\ 6&2,6&0,4\\ 7&0,4&2,6\\ 8&{0,4,8}&{2,6}\\ 9&{0,4,8}&{2,6}\\ {10}&2,6,10&0,4,8\\ {11}&0,4,8&2,6,10\\ {12}&{0,4,8,12}&{2,6,10}\\ {13}&{0,4,8,12}&{2,6,10}\\ {14}&{0,4,8,12}&{2,6,10,14}\\ {15}&{0,4,8,12}&{2,6,10,14}\\ \hline \end{array}$$ -Questions: - -Is the table true? -How do we predict the second and third columns for much higher $n$? For example, for $n=163$, does the second column start as $0,4,8,12,\dots$ or $2,6,10,14,\dots$? - -REPLY [3 votes]: I believe that there is a mistake in your table. -Brill's theorem states that the sign of the discriminant of an algebraic number field is $(-1)^{r_2}$ where $r_2$ is the number of complex places. When we have a power basis for our number field, the minimal polynomial of the generator will have $2r_2$ complex roots. Thus the column for $D>0$ should only contain integers divisible by $4$, and when $D<0$ we have a number of complex roots $\equiv 2\pmod{4}$. -For a specific example, $$x^3-x^2-3x+1$$ has three real roots and a discriminant of 148.<|endoftext|> -TITLE: Are all Lie groups Matrix Lie groups? -QUESTION [20 upvotes]: I have heard a bit about so-called matrix Lie groups. From what I understand (and I don't understand it well) a matrix Lie group is a closed subgroup of $GL_n(\mathbb{C})$. -There is also the notion of a Lie group. It is something about a group that is also a smooth submanifold of the manifold $M_n(\mathbb{C})$. -I have also heard something saying that all Lie groups are in fact isomorphic to a matrix Lie group. Is this correct? Could someone give me a bit more detail about this? What, for example, is the isomorphism? Is it of abstract groups, manifolds, or ...? - -REPLY [7 votes]: ...a matrix Lie group is a closed subgroup of $\mathrm{GL}_n(\mathbb{C})$. - -This is correct! - -There is also the notion of a Lie group...
- -A Lie group is a tuple $(G,\cdot, \tau,\mathcal{A})$, where $G$ is a set, $\cdot$ is a group operation on $G$, $\tau$ is a topology on $G$ which turns $G$ into a topological manifold, and $\mathcal{A}$ is a maximal smooth atlas, such that the maps -$$m\colon G\times G\to G,\quad (g,h)\mapsto g\cdot h$$ -$$i\colon G\to G,\quad g\mapsto g^{-1}$$ -are smooth. - -What, for example, is the isomorphism? Is it of abstract groups, manifolds, or ...? - -Two Lie groups are said to be isomorphic if they are isomorphic as sets, groups, topological spaces and smooth manifolds. But conveniently enough, it is actually enough to check that they are isomorphic as topological groups, as one can show that every continuous group homomorphism between Lie groups is automatically smooth. - -I have also heard something saying that all Lie groups are in fact isomorphic to a matrix Lie group. Is this correct? - -This is not true! A classical counterexample is $\widetilde{\mathrm{SL}_2(\mathbb{R})}$ as mentioned in other answers. -Another classical counterexample (discussed in a paper by G. Birkhoff from 1936) is obtained by taking the 3-dimensional Heisenberg group -$$G_3=\Bigg\{\begin{pmatrix}1 & x & z\\ 0 & 1 & y\\ 0 & 0 & 1\end{pmatrix}:x,y,z\in\mathbb{R}\Bigg\}$$ -and quotienting out by the normal subgroup -$$N=\Bigg\{\begin{pmatrix}1 & 0 & n\\ 0 & 1 & 0\\ 0 & 0 & 1\end{pmatrix}:n\in\mathbb{Z}\Bigg\}.$$ -The resulting factor group -$G_3^*:=G_3/N$ can be equipped with the quotient topology. This will turn $G_3^*$ into a topological manifold, homeomorphic to the product $\mathbb{R}\times\mathbb{R}\times S^1$, which in turn is a smooth manifold whose maximal smooth atlas we can let $G_3^*$ inherit. It is easy to verify that the group structure on $G_3^*$ is "compatible" with this atlas, in the sense that both multiplication and inversion become smooth maps. -We conclude that $G_3^*$ equipped with the group operation, topology and smooth maximal atlas described above is a Lie group. However, it is not isomorphic (as Lie groups) to a matrix Lie group. -To prove this, one can show an equivalent statement, namely that $G_3^*$ doesn't admit any faithful finite-dimensional complex representations. This is done in Section 4.8 of Lie Groups, Lie Algebras and Representations by B.C. Hall. The idea behind his proof is to note that every representation $$\Sigma\colon G_3^*\to \mathrm{GL}_n(\mathbb{C})$$ of $G_3^*$ gives rise to a representation $$\Pi=\Sigma\circ \Phi\colon G_3\to \mathrm{GL}_n(\mathbb{C})$$ of $G_3$, where $\Phi\colon G_3\to G_3^*=G_3/N$ is the natural projection. By passing to the Lie algebra level, one can show (this takes some effort) that for every such representation $\Pi$, it holds that $$\ker(\Pi)\supsetneq \ker(\Phi)=N\,,$$ which implies that the representation $\Sigma$ must have a non-trivial kernel and thus cannot be faithful. -So, no, it's not true that every Lie group is isomorphic to a matrix Lie group. Interestingly, though, it actually is true for compact Lie groups: - -Theorem. Every compact Lie group $G$ admits a faithful finite-dimensional representation, and is thus isomorphic to a matrix Lie group. - -This is typically proved as a corollary of the famous Peter-Weyl theorem (see for instance Section IV.3 of Lie Groups Beyond an Introduction by A.W. Knapp).<|endoftext|> -TITLE: $\phi:\mathcal F\rightarrow \mathcal G$ is an isomorphism if there is an open cover on which $\phi(U)$ is an isomorphism. 
-QUESTION [6 upvotes]: Let $\mathcal F$ and $\mathcal G$ be sheaves of abelian groups over a topological space $X$ and let $\phi : \mathcal F\rightarrow \mathcal G$ be a morphism of sheaves. -If $\{U_{\alpha}\}$ is an open cover of $X$ such that $\phi(U_{\alpha}):\mathcal F(U_{\alpha})\rightarrow\mathcal G(U_{\alpha})$ is an isomorphism of abelian groups for every $\alpha$ then is it true that $\phi$ is an isomorphism of sheaves? -While trying to prove this I had to assume that for an open set $U\subseteq X$ the map $\phi(U_{\alpha}\cap U)$ is an isomorphism. However I can't show this unless the restriction map $\mathcal G(U_{\alpha})\rightarrow\mathcal G(U_{\alpha}\cap U)$ is surjective. So is there another way to prove this or is it false? Can someone give me a counter example? -Thank you. - -REPLY [4 votes]: This is not true. Let $X$ be any topological space, equipped with a nonzero sheaf $\mathcal{F}$ that has no global sections except $0$ (such spaces and sheaves exist). Consider the trivial $1$-element cover of $X$, and let $\phi$ be the $0$-morphism $\mathcal{F}\to\mathcal{F}$. Clearly, the morphism $\phi$ induces an isomorphism $\phi(X):\mathcal{F}(X)\to\mathcal{F}(X)$, but $\phi$ is not an isomorphism of sheaves. -An example of such a sheaf is the sheaf $\mathcal{O}(-1)$ on $\mathbb{P}^1_\mathbb{C}$. Indeed, this is the sheaf of sections of the 'tautological' line bundle, which is a vector bundle whose fiber over a point $p\in\mathbb{P}^1_\mathbb{C}$ corresponding to a line $\ell\subset\mathbb{C}^2$ is the line $\ell$ itself. (The specifics of the proof here depend a bit on what category you're working in, algebraic, analytic, topological...).<|endoftext|> -TITLE: Why $\mathbb{R}$ is a subset of $\mathbb{C}$? -QUESTION [6 upvotes]: Let us define $\mathbb{C}$ by -$$\mathbb{C} = \mathbb{R}^2.$$ -Real elements of $\mathbb{C}$ are tuples $(x, 0)$. But $x \neq (x, 0)$, so $\mathbb{R}$ cannot be a subset of $\mathbb{C}$. -I also read the following definition. Let -$$\mathbb{C} = \{ a + bi \mid a, b \in \mathbb{R},\ i^2 = -1 \}.$$ -But there is no $i \in \mathbb{R}$ such that $i^2 = -1$, so where does $i$ come from? If $i \in \mathbb{C}$ then we are using $\mathbb{C}$ in the definition of $\mathbb{C}$. -NOTE: Apparently, there are similar questions on MSE but I think that lhf's answer is the best and most concise. Also, title "R is a subset of C?" is much easier to google than "Is a+0i in every way equal to just a?" I don't want readers to be redirected. - -REPLY [10 votes]: It is not wrong for you to worry, and the same problem occurs elsewhere: - -We might define $\Bbb Z$ from $\Bbb N_0$ as a set of equivalence classes of $\Bbb N_0^2$ where $(a,b)\sim (c,d)\iff a+d=b+c$. But then $\Bbb N\not\subset \Bbb Z$. -We might define $\Bbb Q$ as a set of equivalence classes of $\Bbb Z\times \Bbb N$ where $(a,b)\sim (c,d)\iff ad=bc$. But then $\Bbb Z\not\subset \Bbb Q$. -We might define $\Bbb R$ via Dedekind cuts, which are infinite subsets of $\Bbb Q$; or as equivalence classes of rational Cauchy sequences modulo zero sequences. In both cases, $\Bbb Q\not\subset \Bbb R$. - -But in each of these cases we have a canonical embedding of the smaller object into the larger object which completely respects the algebraic structure.
In the above examples we have - -$\Bbb N_0\to \Bbb Z$, $x\mapsto [(x,0)]$ -$\Bbb Z\to\Bbb Q$, $x\mapsto [(x,1)]$ (where $1$ is the element of $\Bbb Z$ of that name, so it is $[(1,0)]$ according to the previous line; note that the $[\ ]$ denote equivalence classes with respect to totally different equivalence relations though) -$\Bbb Q\to\Bbb R$, $x\mapsto \{\,t\in\Bbb Q\mid t<x\,\}$.<|endoftext|> -TITLE: Is this alternative notion of continuity in metric spaces weaker than, or equivalent to the usual one? -QUESTION [18 upvotes]: I will try to be as clear as possible. -For simplicity I will assume that the function $f$ for which we define continuity at some point is a real function of a real variable $f: \mathbb R \to \mathbb R$, although the line of reasoning should be the same even if we talk about continuity at a point of a function that is defined on some metric space and takes values in some metric space. To define continuity of a function $f$ at some point $x_0$ from its domain we have the following standard and famous definition, the $\varepsilon$-$\delta$ definition of continuity, which goes like this: - -Definition 1: $f$ is continuous at the point $x_0$ from its domain if for every $\varepsilon>0$ there exists $\delta>0$ such that when $|x-x_0|<\delta$ then $|f(x)-f(x_0)|<\varepsilon$. - -It is clear that we could write $\delta (\varepsilon)$ instead of $\delta$ because there really is dependence of $\delta$ on $\varepsilon$. -Now, I was thinking about an alternative definition of continuity that would go like this: - -Definition 2: $f$ is continuous at the point $x_0$ from its domain if there exist two sequences $\varepsilon_n$ and $\delta_n$ such that for every $n \in \mathbb N$ we have $\varepsilon_n>0$ and $\delta_n>0$ and $\lim_{n\to\infty}\varepsilon_n=\lim_{n\to\infty}\delta_n=0$ and when $|x-x_0|<\delta_n$ then $|f(x)-f(x_0)|<\varepsilon_n$. - -We could also write here $\delta_n(\varepsilon_n)$ instead of $\delta_n$ because obviously there is dependence of $\delta_n$ on $\varepsilon_n$. -It is clear that the second definition does not require that for every $\varepsilon$ there exist $\delta(\varepsilon)$ (which includes in itself an uncountable number of choices for $\varepsilon$) but instead requires that for every member of the sequence $\varepsilon_n$ there is some $\delta_n$ (and this includes in itself only a countable number of choices because the set $\{\varepsilon_n : n \in \mathbb N\}$ is countable). -It is obvious that definition 1 implies definition 2, and the real question is: is the converse true? In other words: - -If the function is continuous at some point according to definition 2, is it also continuous at the same point according to definition 1? - -REPLY [14 votes]: Definition 2 indeed implies Definition 1. -However: -There is a superfluous requirement in Definition 2. -Namely, the requirement that $\delta_n\to 0$ is not needed. -The most important part in the definition of continuity is the fact that $\varepsilon>0$ can be as small as possible. On the other hand, apart from the fact that $\delta$ depends on $\varepsilon$, the size of $\delta$ is irrelevant. -So, it could be reworded as follows: - -Definition 2.
The function $f:A\to\mathbb R$ is continuous at $x_0\in A$ if there exist two sequences $\{\varepsilon_n\}_{n\in\mathbb N}$ and $\{\delta_n\}_{n\in\mathbb N}$ of positive numbers, with $\lim_{n\to\infty}\varepsilon_n=0$, such that, whenever $\,\lvert x-x_0\rvert<\delta_n$, then $\,\lvert\,f(x)-f(x_0)\rvert<\varepsilon_n$.<|endoftext|> -TITLE: Algorithmic way to check if a power-conjugate presentation is consistent? -QUESTION [5 upvotes]: Is there an algorithmic way to check if a power-conjugate presentation (for a finite polycyclic group) is consistent? - -Background: A finite solvable group $G$ has a subnormal series -$$ G=G_0 \triangleright G_1\triangleright \dots \triangleright G_n=1$$ -with each $G_{k-1}/G_k$ cyclic of prime order $p_k$. A power-conjugate presentation is, informally, a description of $G$ in terms of this series. Specifically, for each $k$, one picks an element $a_k$ of $G_{k-1}$ whose image in the quotient $G_{k-1}/G_k$ is a generator; then $a_k^{p_k}\in G_k$ and as $G_k$ is normal in $G_{k-1}$, $a_k$ acts on $G_k$ by conjugation. One can construct $G_{k-1}$ from $G_k$ via these two pieces of data: the element of $G_k$ equal to $a_k^{p_k}$, and the conjugation action of $a_k$ on $G_k$. A power-conjugate presentation specifies exactly this data for each $a_k$ and thus has the form -$$\langle a_1,\dots,a_n\mid \{a_k^{p_k} = \mu_k\}_{1\leq k\leq n}, \{a_j^{a_k} = \mu_{kj}\}_{1\leq k<j\leq n}\rangle.$$<|endoftext|> -TITLE: The elliptic integral $\frac{K'}{K}=\sqrt{2}-1$ is known in closed form? -QUESTION [8 upvotes]: Has anybody computed in closed form the elliptic integral of the first kind $K(k)$ when $\frac{K'}{K}=\sqrt{2}-1$? -I tried to search the literature, but nothing has turned up. This page http://mathworld.wolfram.com/EllipticIntegralSingularValue.html cites several cases -$\frac{K'}{K}=\sqrt{r}$, when $r$ is an integer. -Update: This question has been answered here. - -REPLY [3 votes]: If -$$\frac{K'(k)}{K(k)}=\sqrt{2}-1$$ -then (in Mathematica or WolframAlpha syntax), -$$k = \sqrt{\lambda(\tau)}=\sqrt{\text{ModularLambda[}\tau]}=0.9959420044834\dots$$ -where $\tau = \sqrt{-2}-\sqrt{-1},$ and $\lambda(\tau)$ is the elliptic lambda function. See this related answer for more details.<|endoftext|> -TITLE: Showing that similar matrices have the same minimal polynomial -QUESTION [5 upvotes]: I am in the process of proving the title. -The hint says, for any polynomial $f$, we have $$f(P^{-1}AP) = P^{-1}f(A)P.$$ $A$ is an $n \times n$ matrix over $F$ while $P$ is an invertible matrix such that the above matrix multiplication $P^{-1}AP$ makes sense. -Why is this true? -Thank you. - -REPLY [4 votes]: If $f$ is a polynomial, then you have: -$$f(x)=a_nx^n+...+a_1x+a_0$$ -Then you have -$$f(P^{-1}AP)=a_n(P^{-1}AP)^n+...+a_1(P^{-1}AP)+a_0I$$ -which is -$$f(P^{-1}AP)=a_n(P^{-1}APP^{-1}AP...P^{-1}AP)+...+a_1(P^{-1}AP)+a_0P^{-1}IP$$ -or -$$f(P^{-1}AP)=P^{-1}a_nA^nP+...+P^{-1}a_1AP+a_0P^{-1}IP$$ -which finally gives -$$f(P^{-1}AP)=P^{-1}(a_nA^n+...+a_1A+a_0I)P=P^{-1}f(A)P$$<|endoftext|> -TITLE: Free Product of Groups with Presentations -QUESTION [6 upvotes]: There is a highly believable theorem: -Let $A, B$ be disjoint sets of generators and let $F(A), F(B)$ be the corresponding free groups. Let $R_1 \subset F(A)$, $R_2 \subset F(B)$ be sets of relations and consider the quotient groups $\langle A | R_1 \rangle$ and $\langle B | R_2 \rangle$. -Then $\langle A \cup B | R_1 \cup R_2 \rangle \simeq \langle A | R_1 \rangle \ast \langle B | R_2 \rangle$, where $\ast$ is the free product.
-I can supply a horrible proof using normal closures and elements and such, but surely there is a reasonable diagram-chase proof using the universal properties of free groups and the free product. However, I have been unable to construct it; I think there is probably some intermediary but basic categorical statement needed about presentations. -There is a hint given for the problem (it is coming from Lee, Topological Manifolds, problem 9-4) that is: -If $f_1 : G_1 \rightarrow H_1$ and $f_2 : G_2 \rightarrow H_2$ are homomorphisms and for $j = 1,2$ we have $i_j, i_j'$ the injections of $G_j, H_j$ into $G_1 \ast G_2$ and $H_1 \ast H_2$ respectively, there is a unique homomorphism $f_1 \ast f_2 : G_1 \ast G_2 \rightarrow H_1 \ast H_2$ making the square $(i_j, f_j, i_j', f_1 \ast f_2)$ commute for $j = 1,2$. -This is almost trivial to prove with the UMP for free products, but I can't wrangle it into a proof of the top statement. Also, anything legitimately trivial like $F(A) \ast F(B) \simeq F(A \cup B)$ or $F(F(S)) \simeq F(S)$ is fair game. -I can get a whole lot of arrows, but I am having trouble conjuring up any arrows with $\langle A | R_1 \rangle \ast \langle B | R_2 \rangle$ as their domain. We also know that the free product is the coproduct in $\textbf{Grp}$ but admittedly I haven't tried exploiting this yet. -Can anyone help with a hint? Thanks a lot! - -REPLY [3 votes]: This is an old question that recently got bumped, but there's another nice proof that uses the fact that colimits commute with one another and with left adjoints, so I thought I'd add it for potential future visitors. -Let $G_i =\langle X_i\mid R_i\rangle$ for $1\le i \le n$ be groups given by presentations. Another way of saying this is that we have the coequalizer diagram -$$ F(R_i)\rightrightarrows F(X_i)\to G_i, $$ -with the top map of the fork being induced by $R_i\subseteq F(X_i)$, and the bottom map being the trivial map. -Now the coproduct in the category of groups is the free product, $\newcommand\bigast{\mathop{{\Large *}}}\bigast$, so taking the coproduct of these coequalizers, since colimits commute, we get the coequalizer -$$ \bigast_{i=1}^n F(R_i) \rightrightarrows \bigast_{i=1}^n F(X_i)\to \bigast_{i=1}^n G_i, $$ -but the free group functor is left adjoint to the forgetful functor, so it also commutes with colimits, so we get a coequalizer -$$ F\left(\coprod_{i=1}^n R_i\right) \rightrightarrows F\left(\coprod_{i=1}^n X_i\right) \to \bigast_{i=1}^n G_i. $$ -Translating this back into a statement about group presentations, this says that -$\bigast_{i=1}^n G_i$ is given by the presentation $\left\langle \coprod_{i=1}^n X_i\mid \coprod_{i=1}^n R_i \right\rangle$, as desired.<|endoftext|> -TITLE: Convergence in distribution, $X_n \xrightarrow{d} X$ and $|X_n-Y_n| \xrightarrow{P} 0$ implies $Y_n \xrightarrow{d} X$ -QUESTION [5 upvotes]: I found this problem and I'd like to know if my answer is correct. Thank you - -Let $(\Omega, \mathscr{A}, P)$ be a probability space. Suppose that $X$ is a r.v. and $\{ X_n \}$ is a sequence of r.v.'s such that $X_n \xrightarrow{d} X$ (convergence in distribution) and $\{Y_n\}$ a sequence of r.v.'s such that $|X_n-Y_n| \xrightarrow{P} 0$. Then $Y_n \xrightarrow{d} X$ - -Proof: Let $f: \mathbb R \to \mathbb R$ be a bounded and uniformly continuous function. Let $K$ be a constant such that $|f|\le K$ and let $\epsilon>0$ be given, so there is a $\delta>0$ such that $|x-y|<\delta$ implies $|f(x)-f(y)|<\epsilon$.
Thus -\begin{align*} |E f(X_n)- Ef(Y_n)|&\le E |f(X_n)- f(Y_n)|\\ &=\int_{\{|X_n- Y_n|<\delta\}} |f(X_n)- f(Y_n)|\, dP+\int_{\{|X_n- Y_n|\ge \delta\}} |f(X_n)- f(Y_n)|\, dP\\ &\le \epsilon P \{|X_n- Y_n|<\delta\} +2K P\{|X_n- Y_n|\ge\delta\}\\ &\le \epsilon + 2K P\{|X_n- Y_n|\ge\delta\} \end{align*} -Letting $n\to \infty$ we get $\limsup_n |E f(X_n)- Ef(Y_n)|\le \epsilon$, and since $\epsilon$ was arbitrary, $\{Ef(Y_n)\}$ converges to the same value as $\{Ef(X_n)\}$, that is, $\{Ef(Y_n)\}\to Ef(X)$; since this holds for every uniformly continuous and bounded function, $Y_n \xrightarrow{d} X$. - -REPLY [2 votes]: In fact, such a problem generalizes to the case of random elements. More specifically, let $(\Omega,\mathscr{B},p)$ be a probability space and $S$ be a metric space. A mapping $X: \Omega \to S$ is called a random element of $S$ if it is measurable in the sense that $\{ \omega:X(\omega) \in A \} = X^{-}A \in \mathscr{B}$. -If $X_n \xrightarrow{d} X $ and the distance $\rho(X_n,Y_n)\xrightarrow{p} 0$ then we have $Y_n \xrightarrow{d} X$. The proof can be found in Billingsley's Convergence of Probability Measures (first edition), page 25, Theorem 4.1.<|endoftext|> -TITLE: Show that $ \int \liminf f_n \leq \liminf \int f_n \leq \limsup \int f_n \leq \int \limsup f_n$ -QUESTION [5 upvotes]: Let $g$ be a non-negative integrable function over $E$ and suppose $\{f_n\}$ is a sequence of measurable functions on $E$ such that for each $n$, $|f_n| \leq g$ a.e. on $E$. Show that - $$ \int \liminf f_n \leq \liminf \int f_n \leq \limsup \int f_n \leq \int \limsup f_n.$$ - -I know that this problem is an application of the Lebesgue dominated convergence theorem. -Any idea of how to go about it? Thanks, I am really having a hard time with this problem. - -REPLY [7 votes]: By possibly excising a set of measure 0 we can assume that $|f_n| \leq g$ holds on $E$. -Let $g_n =\inf\limits_{k\geq n} f_k$; then $g_n \leq f_n$ and $g_n \rightarrow \liminf\limits_{n\to\infty} f_n$. -Note that from $-g \leq f_n \leq g$ for all $n$ it also follows that $-g \leq g_n \leq g$ for all $n$, and thus $|g_n| \leq g$. -Using the LDCT it follows that -$$\int\limits_{E} \liminf\limits_{n\to\infty} f_n = \lim\limits_{n \to \infty} \int\limits_{E} g_n = \liminf\limits_{n \to \infty} \int\limits_{E} g_n \leq \liminf\limits_{n \to \infty} \int\limits_{E} f_n,$$ -that is, -$$\int\limits_{E} \liminf\limits_{n\to\infty} f_n \leq \liminf\limits_{n \to \infty} \int\limits_{E} f_n. \ \ (*)$$ -Similarly, let $h_n =\sup\limits_{k \geq n} f_k$; then $h_n \geq f_n$, $h_n \rightarrow \limsup\limits_{n\to\infty} f_n$, and $|h_n| \leq g$. -Again using the LDCT we get -$$\int\limits_{E} \limsup\limits_{n\to\infty} f_n = \lim\limits_{n \to \infty} \int\limits_{E} h_n = \limsup\limits_{n \to \infty} \int\limits_{E} h_n \geq \limsup\limits_{n \to \infty} \int\limits_{E} f_n,$$ -that is, -$$\int\limits_{E} \limsup\limits_{n\to\infty} f_n \geq \limsup\limits_{n \to \infty} \int\limits_{E} f_n. \ \ (**)$$ -Since $\liminf\limits_{n \to \infty} \int_E f_n \leq \limsup\limits_{n \to \infty} \int_E f_n$, combining (*) and (**) gives -$$\int\limits_{E} \liminf\limits_{n\to\infty} f_n \leq \liminf\limits_{n \to \infty} \int\limits_{E} f_n \leq \limsup\limits_{n \to \infty} \int\limits_{E} f_n \leq \int\limits_{E} \limsup\limits_{n\to\infty} f_n$$<|endoftext|> -TITLE: How to find $\sup \Pi_{i=0}^{n} (\sin(i)^2 - \frac{25}{16})$?
-QUESTION [5 upvotes]: Let $\sup,\inf,{\rm dif}$ denote respectively supremum, infimum, and ${\rm dif}$ = supremum $-$ infimum. -Do any of the 3 below have a closed form? -$\sup \Pi_{i=0}^{n} (\sin(i)^2 - \frac{25}{16})$ -$\inf \Pi_{i=0}^{n} (\sin(i)^2 - \frac{25}{16})$ -${\rm dif} \Pi_{i=0}^{n} (\sin(i)^2 - \frac{25}{16})$ - -REPLY [2 votes]: Here is a partial answer. Let: -$$A_n := \prod_{k=0}^{n-1} \left( \sin^2 (k)-\frac{25}{16} \right),$$ -and: -$$F(x) := \ln \left( \frac{5}{4}-\sin(x) \right).$$ -Then: -$$\ln (A_n) = (-1)^n \sum_{k=0}^{n-1} \left(F(k)+F(-k)\right).$$ -Now, $F$ has integral $0$ (this comes from the choice of the constant $5/4$), and is analytic. In addition, $1/\pi$ has a finite irrationality measure. Hence, $F$ is a coboundary for the translation by $1$ on $\mathbb{R}_{/2\pi\mathbb{Z}}$. More precisely, we can solve the equation: -$$F(x) = g(x+1)-g(x),$$ -with $g(0)=0$. Taking the Fourier coefficients, we get, for $\xi \neq 0$: -$$\hat{g} (\xi) = \frac{\hat{F} (\xi)}{e^{i\xi}-1},$$ -whence: -$$g(x) = \sum_{\substack{\xi \in \mathbb{Z} \\ \xi \neq 0}} \hat{F} (\xi) \frac{e^{i\xi x}-1}{e^{i\xi}-1}.$$ -Going back to our initial problem, we get: -$$\ln (A_n) = (-1)^n (g(n)+g(-n)).$$ -The translation by $1$ is uniquely ergodic on $\mathbb{R}_{/2\pi\mathbb{Z}}$. Better, the translation by $(1,1)$ is uniquely ergodic on $\mathbb{R}_{/2\pi\mathbb{Z}} \times \mathbb{Z}_{/2\mathbb{Z}}$, so that we don't have to worry about the sign. Hence, -$$-\ln(\inf_n A_n) = \ln(\sup_n A_n) = \max_{x \in \mathbb{R}_{/2\pi\mathbb{Z}}} |g(x)+g(-x)| = 2 \max_{x \in \mathbb{R}_{/2\pi\mathbb{Z}}} \left| \sum_{\substack{\xi \in \mathbb{Z} \\ \xi \neq 0}} \hat{F} (\xi) \frac{1-\cos(\xi x)}{1-e^{i\xi}}\right|.$$ -Disclaimer: sign errors may be hidden somewhere.<|endoftext|> -TITLE: Stochastic Integrals are confusing me; Please explain how to compute $\int W_sdW_s$ for example -QUESTION [15 upvotes]: I have been trying hard to understand this topic, but only failing. I have been reading through my lecture notes and online videos about stochastic integration, but I just can't wrap my head around it. The main reason is the notation/terminology, which I find very confusing and vague; some terms seem to be used interchangeably, some don't, for example "Ito integration" and "stochastic integration." Are they the same thing? Different? -I understand my question is somewhat long, so, if tedious, please skip the bit trying to make me understand definitions and just answer, with detail, the extracted worked example I am having trouble with below -My lecture notes basically just throw things at me out of the blue saying "here's the definition. Stick with it, so here's a worked example" which I don't follow at all even with the definition. -First, here is what my notes define as a "simple process" - -Stochastic process $(g_t)_{t \geq0}$ is said to be simple if it has the following form $g_t(\omega)=\sum_{i=0}^\infty g_i(\omega)1_{[t_i,t_{i+1})}(t)$, where $g_i$ are random variables adapted up to time $t_i$. - -And "the stochastic integral" - -$X_t = \int_0^tg_sdW_s := \sum_{i=0}^\infty g_i \times(W_{t_{i+1}\cdot t}-W_{t_i\cdot t})$ where $a\cdot b$ is $\min\{a,b\}$. - -Here already, I get slightly confused; $X_t$ is denoted as the stochastic integral, but $X_t=X(t)$ as I understand, so this is ALSO a stochastic process itself? And what is $dW_s$? What is this variable $s$? Is it a time variable? Then doesn't it make $W_s$ a function (of $s$)? Then, unlike $dx$, $dy$, I just feel very uneasy with $dW_s$.
Like, I can't express this definition in words sensibly; "$X_t$ is a stochastic process which is an integral of a simple process $g_t$ with respect to Brownian motion $W_s$ over $[0,t]$....."? But what is $\omega $ in $g_t(\omega)$ in the first definition then? $g_t$ isn't a function of $W_s$, it's a function of $\omega$ (whatever it is). -So these are the things that just keep going in my brain when I see this. Now here's a worked example which I am very lost with. - -Find $\int_0^T W_sdW_s$ where $W_s$ is assumed to be a Brownian motion. - -Here's the work interrupted with my voiceover - -The Ito integrable process $g_s$ can be approximated by taking a partition of $[0,T]$ given by $0=t_0<t_1<\dots<t_n=T$.<|endoftext|> -TITLE: GAP Most efficient way to check multiple properties of a group in the small group library -QUESTION [6 upvotes]: In GAP I would like to search the small groups library looking for groups with specific properties (I suppose this is the most common usage). - -If I have a list of properties I want to test, what is the most -efficient and elegant way to do this? -In some languages there is a 'Switch' statement; I couldn't find such a thing in GAP. Is there anything similar? -If I want to check several properties, is it better to iterate over a list checking all properties for each element in the list, or is it better to filter out of the list everything that doesn't have the first property, and then go back over the filtered list removing everything that doesn't have the second property, etc. -If I was iterating over the small groups by - for g in AllSmallGroups(n) do - # check some property - # check another property - # check some attributes - od; - -GAP would have to calculate some information about the group g, in the process perhaps learning some attributes about g. -For how long would GAP 'remember' these, and what is the best way to -search keeping this in mind? - -REPLY [4 votes]: As long as g is the same group, GAP will (in the same session) remember all its properties (and use up memory for it). If you create the group anew (e.g. by SmallGroup(n,index), or by another call to AllSmallGroups), all but the properties it is given from the start will be lost. So running group by group is the better way. However, the way you use the loop, you first create all groups and then remember them all with all their properties and attributes, which can be hard on the memory. Assuming you're looking for a few groups in a large set (or only want to store the identification numbers of the groups anyhow) running by index numbers will be much better: -for i in [1..NrSmallGroups(n)] do - g:=SmallGroup(n,i); - # now do your tests for g; if they are satisfied, remember the index i in a list -od; - -(There is no switch statement, but an elif which will allow the same construct.) - -Added by AK: Also, SmallGroupsInformation is very useful: it may tell you how the groups of a given order are sorted, and which properties and attributes are precomputed for very fast use in selection functions like AllSmallGroups. For example: -gap> SmallGroupsInformation(256); - - There are 56092 groups of order 256. - They are sorted by their ranks. - 1 is cyclic. - 2 - 541 have rank 2. - 542 - 6731 have rank 3. - 6732 - 26972 have rank 4. - 26973 - 55625 have rank 5. - 55626 - 56081 have rank 6. - 56082 - 56091 have rank 7. - 56092 is elementary abelian. - - For the selection functions the values of the following attributes - are precomputed and stored: - IsAbelian, PClassPGroup, RankPGroup, FrattinifactorSize and - FrattinifactorId.
- - This size belongs to layer 2 of the SmallGroups library. - IdSmallGroup is available for this size. - -You can use this knowledge to speed up the search and narrow the search space.<|endoftext|> -TITLE: Check answer, How to find Cov(x,y) and Var(2x-y)? -QUESTION [6 upvotes]: I have the following tableau - x: -1 0 1 total - -y: 1 0 1/8 3/8 1/2 - 2 3/8 1/8 0 1/2 -total: 3/8 2/8 3/8 1 - -*) Find Cov(x,y) and Var(2x-y) -My work: - I use Cov(x,y)= E(XY)-E(X)E(Y) -I have - E(x)= 0 - E(Y)= 3/2 - E(X^2)= 3/4 - E(Y^2)= 5/2 - -After plugging in the values I have: - Cov(xy)=-15/8 - -And, - var(2x-y)= 4 var(x) + var(y) - 2*2 cov(xy) - Var(2x-y)= 10.75 - -Question: did I solve the exercise correctly, or is there a problem? I want to check. Thanks! - -REPLY [2 votes]: You used a correct formula for the covariance, but then you did not compute $E(XY)$. -This is $\sum_{x,y}xy\Pr(X=x, Y=y)$. The computation is easy, since the table has many $0$'s. It turns out that $E(XY)=-\frac{3}{8}$. Since $E(X)=0$ the covariance is $-3/8$. -The second computation is fine in outline, but uses the wrong value for the covariance. I can verify the answer after it is corrected.<|endoftext|> -TITLE: Integral $\int_0^\infty\operatorname{arccot}(x)\,\operatorname{arccot}(2x)\,\operatorname{arccot}(5x)\,dx$ -QUESTION [24 upvotes]: I have to evaluate this definite integral: -$$Z=\int_0^\infty\operatorname{arccot}(x)\,\operatorname{arccot}(2x)\,\operatorname{arccot}(5x)\,dx$$ -My CAS was only able to find its approximate numeric value: -$$Z\approx0.796300956669079523165601562454031588576893734085453548868394...$$ -Is there an approach that would allow one to evaluate it in a closed form? -I looked up this integral in Gradshteyn–Ryzhik, but the closest one I found was formula 4.511: -$$\int_0^\infty\operatorname{arccot}(px)\,\operatorname{arccot}(qx)\,dx=\frac\pi2\left[\frac1p\,\ln\left(1+\frac p q\right)+\frac1q\,\ln\left(1+\frac q p\right)\right]$$ -Is there a way to generalize it to a product of 3 arccotangents? Any help is appreciated.
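As a sanity check on the quoted decimal value, the integral is easy to evaluate numerically, using $\operatorname{arccot}x=\arctan(1/x)$ for $x>0$; a sketch assuming scipy is available:

    import numpy as np
    from scipy.integrate import quad

    def integrand(x):
        x = np.float64(x)   # so 1/x yields inf (not an error) even at x = 0
        return np.arctan(1 / x) * np.arctan(1 / (2 * x)) * np.arctan(1 / (5 * x))

    Z, err = quad(integrand, 0, np.inf)
    print(Z)   # ~0.7963009566690795, matching the value quoted above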
- -REPLY [15 votes]: Using a formula from another answer of mine, we can get: -$$Z=\frac1{960}\Big[96\operatorname{Li}_3\left(\tfrac13\right)+744\operatorname{Li}_3\left(\tfrac23\right)-780\operatorname{Li}_3\left(\tfrac15\right)-1152\operatorname{Li}_3\left(\tfrac25\right)\\ +408\operatorname{Li}_3\left(\tfrac35\right)-60\operatorname{Li}_3\left(\tfrac45\right)-720\operatorname{Li}_3\left(\tfrac16\right)-840\operatorname{Li}_3\left(\tfrac56\right)\\ -48\operatorname{Li}_3\left(\tfrac17\right)-1032\operatorname{Li}_3\left(\tfrac27\right)-192\operatorname{Li}_3\left(\tfrac37\right)-192\operatorname{Li}_3\left(\tfrac47\right)\\ -1200\operatorname{Li}_3\left(\tfrac57\right)+120\operatorname{Li}_3\left(\tfrac67\right)-112\operatorname{Li}_3\left(\tfrac18\right)-168\operatorname{Li}_3\left(\tfrac78\right)\\ -192\operatorname{Li}_3\left(\tfrac3{10}\right)+168\operatorname{Li}_3\left(\tfrac7{10}\right)+120\operatorname{Li}_3\left(\tfrac5{12}\right)\\ -120\ln5\cdot\left[4\operatorname{Li}_2\left(\tfrac13\right)+2\operatorname{Li}_2\left(\tfrac25\right)-\operatorname{Li}_2\left(\tfrac17\right)+4\operatorname{Li}_2\left(\tfrac27\right)-3\operatorname{Li}_2\left(\tfrac37\right)\right]\\ +24\ln2\cdot\left[12\operatorname{Li}_2\left(\tfrac13\right)+20\operatorname{Li}_2\left(\tfrac25\right)-12\operatorname{Li}_2\left(\tfrac17\right)+20\operatorname{Li}_2\left(\tfrac27\right)-8\operatorname{Li}_2\left(\tfrac37\right)\right]\\ +1364\ln^32+100\ln^33+148\ln^35+424\ln^37\\ -228\ln3\cdot\ln^22-168\ln5\cdot\ln^22+1176\ln^23\cdot\ln2-624\ln^25\cdot\ln2-648\ln^27\cdot\ln2\\+108\ln3\cdot\ln^25-36\ln3\cdot\ln^27-600\ln^23\cdot\ln5+564\ln^25\cdot\ln7-600\ln^27\cdot\ln5\\+504\ln3\cdot\ln7\cdot\ln2+48\ln5\cdot\ln7\cdot\ln2-288\ln3\cdot\ln5\cdot\ln7\\ -2\pi^2\cdot(3\ln2-76\ln3+37\ln5+36\ln7)+3151\,\zeta(3)\Big]$$ -Here is the equivalent Mathematica expression.<|endoftext|> -TITLE: Integral representation for Fibonacci's numbers -QUESTION [7 upvotes]: We know that, for example, the Gamma function is a perfect integral representation for the factorial $n!$ for a natural number $n$. -$$\Gamma[n] = \int_0^{+\infty} t^{n-1}e^{-t}\text{d}t = (n-1)!$$ -Is there a similar integral representation through which I might find Fibonacci's numbers? Something like -$$F_n = \int_0^{+\infty} F(x, n)\ \text{d}x$$ -to obtain the $n$-th Fibonacci's number? -P.S. Not necessarily an integration from zero to infinity. - -REPLY [8 votes]: One example can be found on the Wolfram functions site -$$F_{2n}=\frac n2 \left(\frac32\right)^{n-1}\int_0^{\pi} \left(1+\frac{\sqrt 5}{3}\cos x\right)^{n-1} \sin x \,dx,$$ -and another one in this note: -$$F_n=\frac1{\sqrt5}\left(\frac{\sqrt5 +1}{2}\right)^n-\frac2\pi \int_0^{\infty}\frac{\sin\frac{x}2}{x}\frac{\cos n x-2\sin x\sin nx}{5\sin^2 x+\cos^2 x}dx.$$<|endoftext|> -TITLE: Triangle inequality and the square root of a metric space -QUESTION [5 upvotes]: For (i) I know that the square root part is true but I don't know how to put it into words to prove it. -For (ii) I just don't know how to apply the requirements for a metric space to the square root of another metric space. It's just kind of confusing me. - -REPLY [2 votes]: To prove it's a metric you need to show that -0) $\sqrt{\rho(x,y)}$ is a non-negative real-valued function. -1) $\sqrt{\rho(x,y)} = 0 \iff x = y$ -2) $\sqrt{\rho(x,y)} = \sqrt{\rho(y,x)}$ for all $x, y \in M$. -3) $\sqrt{\rho(x,z)} \le \sqrt{\rho(x,y)} + \sqrt{\rho(y, z)}$ for all $x, y, z \in M$.
0) As $\rho(x,y)$ is a non-negative real function (it is a metric), and the square root of any non-negative real number is again a non-negative real number, $\sqrt{\rho(x,y)}$ is a non-negative real function. -1) $\sqrt{v} = 0 \iff v = 0$. $\rho(x,y) = 0 \iff x = y$. Therefore $\sqrt{\rho(x,y)} = 0 \iff \rho(x,y) = 0 \iff x = y$. -2 and 3 I leave to you.<|endoftext|> -TITLE: Calculus (Limits) Doubt: $\theta - \cfrac{\theta^3}{3!} < \sin \theta < \theta$ use to solve limit. -QUESTION [5 upvotes]: Following is the question I've been trying to work on but haven't been able to crack: - -$$\lim_{n\rightarrow \infty} \sin\left(\cfrac{n}{n^2+1^2}\right) + \sin\left(\cfrac{n}{n^2+2^2}\right) + \cdots + \sin\left({\cfrac{n}{n^2+n^2}}\right) $$ - -I'm required to find the value of the above limit. All I could think of was to take $n^2$ common from the numerator and denominator of each term. -$$\lim_{n\rightarrow \infty} \sin\left(\cfrac{1/n}{1+1^2/n^2}\right) + \sin\left(\cfrac{1/n}{1+2^2/n^2}\right) + \cdots + \sin\left({\cfrac{1/n}{1+n^2/n^2}}\right) $$ -Now, since $n \to \infty$, shouldn't each term inside the sine function go to zero, and thus the value of the limit be zero? Where am I going wrong in this approach? -Also, I found a trick for this question specified in my book as: -To use the following inequality: $$\color{blue}{\theta - \cfrac{\theta^3}{3!} < \sin \theta < \theta }$$ -And then to replace $\theta$ with $\cfrac{n}{n^2+k^2}$. I've never seen this inequality before, can anyone refer to the proof of this inequality? (or give the proof). -EDIT: -After working with the suggestions posted in the comments and answers (and the link of the duplicate post), -I do get the following equation: -$$ \lim_{n \to \infty} \sum_{k=1}^{n} \cfrac{1}{n} \left(\cfrac{1}{1+(k/n)^2}\right)$$ -How can I convert this into an integral now? - -REPLY [3 votes]: (Proof without Taylor series, for $x\geq 0$) -First we prove the right side. -$$\begin{align} &f(x)=\sin x, \quad g(x)=x\\ &f(0)=g(0)\\ &f'(x)=\cos x \leq 1=g'(x)\\ &\therefore f(x)\leq g(x) \end{align}$$ -Now we prove the left side. -$$\begin{align} &h(x)=x-\frac{x^3}{3!},\quad f(x)=\sin x\\ &h(0)=f(0)\\ &f'(x)-h'(x)=\cos x-\left(1-\frac{x^2}{2}\right)\\ &f'(0)-h'(0)=0\\ &f''(x)-h''(x)=-\sin x+x\geq 0 \text{ (we proved it above)}\\ &\therefore h'(x)\leq f'(x)\quad\text{and}\quad h(x)\leq f(x) \end{align}$$<|endoftext|> -TITLE: How to deal with an indefinite L'Hôpital operation -QUESTION [7 upvotes]: Today in our AP Calculus class, we learned what is called L'Hôpital's rule for evaluating indeterminate limits of the form $\infty/\infty$ or $0/0$. The operation works by repeatedly taking derivatives of the numerator and denominator until the limit is no longer indeterminate. How would one "identify ahead of time", and if possible solve, a limit for which this differentiation would continue forever without resolving the indeterminate form? -For example something along the lines of $\lim_{x \to \infty} \dfrac{e^{x}}{e^{x}}$ -Wouldn't this example keep differentiating indefinitely without reaching an answer? - -REPLY [3 votes]: The answer by RRL gives very good examples where L'Hospital's Rule fails (or is not useful) so I will not dwell upon them. I would rather like to focus on the technique of applying L'Hospital's Rule to evaluate certain limits. -Before we start to apply this rule, it is important to know the conditions under which it is applicable: - -If "$f(x) \to 0, g(x) \to 0$" or "$g(x) \to \infty$" or "$g(x) \to -\infty$" then one can think of applying the rule to calculate the limit of the ratio $f(x)/g(x)$.
However, the rule will be useful only when, in addition to the first condition above, the limit of the ratio $f'(x)/g'(x)$ also exists. - -One hopes that the evaluation of the limit of $f'(x)/g'(x)$ would be simpler than evaluating the limit of $f(x)/g(x)$. Note however that using L'Hospital's rule successively is almost never a good idea, because repeated differentiation can lead to complicated functions. One expects that after applying the rule, the ratio $f'(x)/g'(x)$ can be simplified (via algebraic manipulation) so as to make use of standard limits and thereby evaluate its limit.<|endoftext|> -TITLE: Does Global Lipschitz Continuous imply boundedness? -QUESTION [5 upvotes]: If a function $h$ were Lipschitz continuous on $\mathbb{R}^n$, would that imply the function is bounded? I would assume so since Lipschitz continuity implies uniform continuity. Then again, uniform continuity implies boundedness when you map a bounded open interval to $\mathbb{R}^n$. -Thank you for your time and I appreciate any feedback. - -REPLY [10 votes]: Consider the function $f:\mathbb R\to\mathbb R$ given by $f(x)=x$. -$f$ is Lipschitz, but unbounded. However, a Lipschitz function is bounded on a bounded domain.<|endoftext|> -TITLE: If $P \in Syl_p(G)$ and $P$ is cyclic, then $N_G(P)=C_G(P)$ -QUESTION [6 upvotes]: Let $G$ be a group such that $|G|=p^a m$ with $p\nmid m$, where $p$ is the smallest prime divisor of $|G|$. - If $P \in Syl_p(G)$ and $P$ is cyclic, then $N_G(P)=C_G(P)$ -Proof -First, note that $C_G(P) \leq N_G(P) \leq G$. Thus, we are done if $|N_G(P)|/|C_G(P)|=1$. - Since $P \leq G$, by Corollary 15, $N_G(P)/C_G(P)$ is isomorphic to a subgroup of $Aut(P)$. - Since $P$ is cyclic of order $p^a$, we have $P \cong \mathbb{Z}/p^a \mathbb{Z}$. - Thus, $|Aut(P)| = |Aut(\mathbb{Z}/ p^a \mathbb{Z})| = |(\mathbb{Z}/p^a \mathbb{Z})^{\times}|$ since - $Aut(\mathbb{Z}/ p^a \mathbb{Z}) \cong (\mathbb{Z}/p^a \mathbb{Z})^{\times}$. - Then, $|Aut(P)| = \phi(p^a) = p^{a-1}(p-1)$ where $\phi$ is Euler's totient function. - Thus, $|N_G(P)/C_G(P)|$ divides $p^{a-1}(p-1)$. -Since $P$ is cyclic, $P$ in particular is abelian, and it follows that $P \leq C_G(P) \leq N_G(P)$. Since $P \leq N_G(P)$, there exist positive integers $k_1, k_2$ such that $|N_G(P)|=k_1 p^a$ and $|C_G(P)|=k_2 p^a$. -Since $k_1 p^a$ divides $p^a m$, $k_1$ must divide $m$. -Since $|N_G(P)|/|C_G(P)|=k_1/k_2$, $|N_G(P)/C_G(P)|$ divides $m$. -Since the prime divisors of $m$ are greater than $p$, while $|N_G(P)/C_G(P)|$ also divides $p^{a-1}(p-1)$, whose prime divisors are at most $p$, it must be that $|N_G(P)/C_G(P)|=1$. This completes the proof. -I don't really like the way I argued the division of $m$. Is there a more succinct and better way to argue this? -Thank you in advance! - -REPLY [4 votes]: Since $P \subseteq C_G(P) \subseteq N_G(P)$ and $P$ is a Sylow subgroup, it follows that $|N_G(P):C_G(P)|$ is not divisible by $p$. Since $N_G(P)/C_G(P)$ embeds into $Aut(P)$, of order $p^{a-1}(p-1)$, it must therefore divide $p-1$. But $p$ is the smallest prime dividing the order $|G|$, and this means that $|N_G(P):C_G(P)|=1$.<|endoftext|> -TITLE: A conceptual problem in group theory -QUESTION [8 upvotes]: As we all know, in the group $S_n$ every pair of distinct disjoint cycles commute. My doubt is: is the reverse also true? That is, if a pair of distinct cycles commute, do they have to be disjoint? I tried to find examples where distinct cycles commute but are not disjoint, but failed to do so. - -REPLY [2 votes]: As suggested in the comments, the answer is that cycles $\sigma$ and $\tau$ commute if and only if either: -(i) $\sigma$ and $\tau$ are disjoint; or -(ii) $\sigma$ is a power of $\tau$ and $\tau$ is a power of $\sigma$.
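A concrete instance of case (ii), checked computationally; a sketch using sympy, where points are labelled from $0$:

    from sympy.combinatorics import Permutation

    sigma = Permutation([1, 2, 3, 4, 0])   # the 5-cycle (0 1 2 3 4)
    tau = sigma**2                         # also a single 5-cycle: (0 2 4 1 3)

    assert sigma * tau == tau * sigma      # they commute, yet are not disjoint
    assert sigma == tau**3                 # and each is a power of the other
    print(sigma.cyclic_form, tau.cyclic_form)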
In case (ii), this implies that they have the same order and hence the same length, and they must both be cycles on the same set of points.<|endoftext|> -TITLE: Name for the module corresponding to a square matrix -QUESTION [5 upvotes]: I recently learned that for each $n \times n$ matrix $A$ with entries in some field $F$, there is a corresponding $F[x]$-module $M_A$. Namely, $M_A$ is the set $F^n$ with vector addition defined as usual, but with scalar multiplication defined by $f(x) \cdot v = f(A) \, v$ for each $f(x) \in F[x]$. My question is whether there is a name for the module $M_A$, and if so, what is it? -I'm sorry that this is a bit of a trivial question, but I've looked around and have been unable to find an answer. If this question is inappropriate for this site please don't hesitate to let me know. Thanks. - -REPLY [2 votes]: One thing we can say is that the map $\Phi_A: \Bbb F[x] \to \Bbb F^{n \times n}$ given by $\Phi_A(f) = f(A)$ is an algebra homomorphism. In fact, we might be able to say something like $\Phi_A$ is the natural homomorphism induced by $A$. With representation theory in mind, we might call this the representation of $\Bbb F[x]$ induced by $A$. -Having said all of that, $f(x)\cdot v$ might be called the multiplication induced by the representation $\Phi_A$. The resulting module is probably best referred to as the image of this representation.<|endoftext|> -TITLE: The order of $ab$ when $a,b$ commute -QUESTION [7 upvotes]: Let $a,b$ be two group elements of finite order that commute. -What can be said about the order of $ab$? - I thought that $|ab| = \text{lcm}(|a|,|b|)$. My proof was that $(ab)^n = a^n b^n =e$ if and only if $a^n = b^n = e$ if and only if $|a|,|b|$ both divide $n$. The smallest $n$ such that both orders divide it is the least common multiple of $|a|$ and $|b|$. -By chance I came across this answer. It has an upvote so it's clearly correct. (?) -But it contradicts what I think: It's clear that -$$ (ab)^{\text{lcm}(|a|,|b|)} = e$$ -hence $|ab| \mid \text{lcm}(|a|,|b|)$. -It seems to me that this is saying more than $|ab| \mid |a| |b|$. Is it not? - -Now my question is: what is the most precise statement that can be made about $|ab|$? Is there anything more than $|ab| \mid \text{lcm}(|a|,|b|)$ that can be said about $|ab|$? - -REPLY [5 votes]: So, just so we're all clear: writing $m$ for the order of $a$ and $n$ for the order of $b$, it's clear that the order divides $\text{lcm}(m, n)$, and that this is a stronger statement than that it divides $mn$ (for example when $m = n$). It's also clear that not all divisors occur as orders (for example when $m$ and $n$ are coprime). -So there's an interesting question about exactly which orders dividing the lcm occur. Let $d$ be such a divisor. If $ab$ has order $d$, then $(ab)^d = e$, or equivalently $a^d = b^{-d}$. Now, $a^d$ is an element of order $\frac{m}{\gcd(m, d)}$, while $b^{-d}$ is an element of order $\frac{n}{\gcd(n, d)}$. So a necessary condition for $d$ to be a possible order is that these orders match: -$$\frac{n}{\gcd(n, d)} = \frac{m}{\gcd(m, d)}.$$ -It's cleanest here to think about everything one prime at a time. Write $\nu_p(n)$ for the exponent of the greatest power of a prime $p$ dividing $n$.
Then the identity above is equivalent to the identity
$$\nu_p(n) - \text{min}(\nu_p(n), \nu_p(d)) = \nu_p(m) - \text{min}(\nu_p(m), \nu_p(d))$$
or equivalently
$$\nu_p(n) - \nu_p(m) = \text{min}(\nu_p(n), \nu_p(d)) - \text{min}(\nu_p(m), \nu_p(d))$$
where the only constraint on $\nu_p(d)$ is that it is at most $\nu_p(\text{lcm}(m, n)) = \text{max}(\nu_p(n), \nu_p(m))$.
From here there are two cases, and different cases may occur for different primes. If $\nu_p(n) = \nu_p(m)$ then there is no constraint on $\nu_p(d)$. But if $\nu_p(n) \neq \nu_p(m)$, then both of the mins above must evaluate to $\nu_p(n)$ and $\nu_p(m)$ respectively (in order to keep their difference the same as the nonzero difference between $\nu_p(n)$ and $\nu_p(m)$), and so we conclude that $\nu_p(d) = \text{max}(\nu_p(n), \nu_p(m))$, as stated by Zoe H in the comments.
I haven't thought about whether this necessary condition is sufficient; if it is then the construction is probably straightforward. You can again work one prime at a time.<|endoftext|>
TITLE: Computing cohomology over projective curve in $\mathbf{P}^3$
QUESTION [5 upvotes]: Let $k$ be an algebraically closed field and let $X\subseteq \mathbf{P}^3:=\mathbf{P}_k^3$ be a smooth, irreducible curve that is not contained in any hyperplane. Let's call $d=\deg(X)$.
A well known theorem of Gruson, Lazarsfeld and Peskine states that such a curve is $d-(3-1)+1=d-1$-regular, that is
$$H^p(\mathbf{P}^3,\mathscr{I}_X(d-1-p))=0$$
for all $p>0$. Now, some simple computations on this definition show us that this reduces to the following conditions:
$$H^1(\mathbf{P}^3,\mathscr{I}_X(d-2))=0,\,\,\,H^1(\mathbf{P}^3,\mathscr{O}_X(d-3))=0$$
Now the actual question. I would like to show that the theorem holds without using it, but I'm a bit stuck. I need in particular the cases of a complete intersection curve and a rational curve of degree $3$.
For example, if you write $X=F_1\cap F_2$ as a complete intersection of hypersurfaces $F_1,F_2$ of degrees $d_1,d_2$ respectively, then the Koszul complex (aka Hilbert-Burch complex) resolves the ideal of $X$:
$$0\to \mathscr O_{\mathbf{P}^3}(-d_1-d_2)\to \mathscr O_{\mathbf{P}^3}(-d_1)\oplus \mathscr O_{\mathbf{P}^3}(-d_2)\to \mathscr{I}_X\to 0$$
Switching to the cohomology sequence, this should imply $H^1(\mathbf{P}^3,\mathscr{I}_X(s))=0$ for all $s$; I'm not so sure this is completely correct.
As for the other condition, we may compute the canonical bundle $$\omega_X=\omega_{\mathbb{P}^3}(d_1+d_2)\big|_X=\mathcal{O}_X(d_1+d_2-4)$$ so when $s\geq d_1+d_2-3$ we have $H^1(\mathscr{O}_X(s))=0$ for sure. Then the inequality $d=d_1d_2\geq d_1+d_2-1$ shows the second condition.
Can somebody help me understand better these computations? How should I proceed to yield a similar result in the case of a rational curve of degree $3$?

REPLY [3 votes]: Your computations in the complete intersection case are correct, I just want to point out that you can work just with the Koszul complex, using the first definition of regularity.
When $X$ is the rational normal curve of degree $3$ we can proceed like this: we can look at $X$ as the embedding of $\mathbb{P}^1$ induced by the complete linear system $\left|\mathcal{O}_{\mathbb{P}^1}(3)\right|$, and then we have the identifications $H^i(\mathbb{P}^3,\mathcal{O}_{X}(d)) = H^i(\mathbb{P}^1,\mathcal{O}_{\mathbb{P}^1}(3d))$. In particular $$H^1(\mathbb{P}^3,\mathcal{O}_X(d-3)) = H^1(\mathbb{P}^3,\mathcal{O}_X) = H^1(\mathbb{P}^1,\mathcal{O}_{\mathbb{P}^1})=0$$.
For the other condition, consider the exact sequence of sheaves
$$ 0 \to \mathscr{I}_X(1) \to \mathcal{O}_{\mathbb{P}^3}(1) \to \mathcal{O}_X(1) \to 0 $$
taking cohomology we get the exact sequence
$$ H^0(\mathbb{P}^3,\mathcal{O}_{\mathbb{P}^3}(1)) \to H^0(\mathbb{P}^1,\mathcal{O}_{\mathbb{P}^1}(3)) \to H^1(\mathscr{I}_X(1)) \to 0 $$
however, the map $H^0(\mathbb{P}^3,\mathcal{O}_{\mathbb{P}^3}(1)) \to H^0(\mathbb{P}^1,\mathcal{O}_{\mathbb{P}^1}(3))$ is an isomorphism, since $X$ is the embedding of $\mathbb{P}^1$ by the complete linear system $\left| \mathcal{O}_{\mathbb{P}^1}(3) \right|$, so that $H^1(\mathscr{I}_X(1))=0$<|endoftext|>
TITLE: How do I prove that among any $5$ integers, you are able to find $3$ such that their sum is divisible by $3$?
QUESTION [6 upvotes]: How do I prove that among any $5$ integers, you are able to find $3$ such that their sum is divisible by $3?$
I realize that this is a number theory question and we use modular arithmetic, but I'm unsure of where to begin with this specific situation.

REPLY [4 votes]: Consider the remainders of the numbers when divided by 3. Clearly there are only three possible remainders: 0, 1 or 2.

If all these remainders occur among the given numbers, then adding together three numbers with different remainders will yield a sum that is divisible by 3 (since 0 + 1 + 2 = 3).
Otherwise, at least one of the three remainders does not occur among the five given numbers. By the pigeonhole principle, at least one of the other two remainders must thus occur more than twice (since 5 > 2 × 2). Adding together three numbers with the same remainder will then yield a sum divisible by 3.<|endoftext|>
TITLE: Subalgebra of $C(X)$ where $X$ is a compact Hausdorff space
QUESTION [5 upvotes]: Proposition Let $X$ be a compact Hausdorff space and $C(X)$ be the set of all continuous real-valued functions on $X$. Let $\mathcal{A}$ be a subalgebra of $C(X)$ that separates points of $X$. Show that either $\overline{\mathcal{A}}= C(X)$ or there is a point $x_{0}$ such that $\overline{\mathcal{A}}= \{\ f\in C(X): f(x_{0})=0 \}\ $.

My attempt:

If the constant function $1$ belongs to $\mathcal{A}$, then the subalgebra $\mathcal{A}$ contains all constant functions and $\mathcal{A}$ is dense in $C(X)$ by the Stone-Weierstrass theorem.
Otherwise, $1\notin \mathcal{A}$. Suppose that for each $x\in X$ there is an $f\in \mathcal{A}$ with $f(x)\neq 0$; then by the continuity of each $f$ and compactness of $X$ I can prove that there is a $g\in \mathcal{A}$ that is positive on $X$. But how can I use this fact to derive a contradiction, i.e. $1\in \mathcal{A}$, so that we prove the existence of such an $x_{0}$? And I was also wondering how to prove $\{\ f\in C(X): f(x_{0})=0 \}\ \subset \overline{\mathcal{A}}$?

Thanks!

REPLY [2 votes]: Assume that $\mathcal{A}$ is itself closed, and suppose $1\notin \mathcal{A}$, then note that
$$\mathcal{A}\oplus \mathbb{R}1 = C(X)$$
by the Stone-Weierstrass theorem. Hence you can define a linear functional $\varphi : C(X)\to \mathbb{R}$ by
$$\varphi(f+ \lambda) = \lambda$$
This is a well-defined bounded linear functional which is also multiplicative.
Hence, $\exists x_0 \in X$ such that
$$\varphi(g) = g(x_0)$$
Since $\ker(\varphi) = \mathcal{A}$ it follows that
$$\mathcal{A} = \{f\in C(X) : f(x_0) = 0\}$$<|endoftext|>
TITLE: Cardinality of $\sigma$-algebra generated by an infinite family of sets
QUESTION [6 upvotes]: Let $\mathcal{F}$ be an infinite family of subsets of $X$ of cardinality $\kappa$ (thus $\kappa$ is an infinite cardinal). From the recursive description of the generated $\sigma$-algebra, I know that the $\sigma$-algebra $\langle \mathcal{F}\rangle$ which is generated by $\mathcal{F}$ has cardinality at most $\kappa^{\aleph_0}$. On the other hand, since $\langle \mathcal{F}\rangle $ contains $\mathcal{F}$, $\langle \mathcal{F}\rangle$ has cardinality at least $\kappa$. Thus we have $\kappa\leq |\langle \mathcal{F}\rangle |\leq \kappa^{\aleph_0}$. Is it true that $|\langle \mathcal{F}\rangle |=\kappa^{\aleph_0}$? I'm not very familiar with cardinal arithmetic, thanks for any help.

REPLY [2 votes]: Yes. If $B$ is a $\sigma$-complete infinite Boolean algebra, then $|B| = |B|^{\aleph_0}$. You can find a proof in the Handbook of Boolean Algebras, Vol. 1, Theorem 12.2.<|endoftext|>
TITLE: Is a continuous function locally uniformly continuous?
QUESTION [10 upvotes]: Assume a function, $f : X \to Y$, mapping between two metric spaces, $X,Y$, is pointwise continuous, i.e. for every $\varepsilon >0$ and $x \in X$ there exists a $\delta>0$ such that
$$\|x-x'\|_X < \delta \implies \|f(x) - f(x')\|_Y < \varepsilon, \qquad \forall x' \in X.$$
Does this imply $f$ is locally uniformly continuous, i.e. for every $x \in X$ there exists a neighbourhood $U \subset X$ such that for every $\varepsilon > 0$ there exists a $\delta > 0$ such that
$$\|x_1-x_2\|_X < \delta \implies \|f(x_1) - f(x_2)\|_Y < \varepsilon, \qquad \forall x_1,x_2 \in U?$$
A positive answer without proof, under the condition that $X$ and/or $Y$ are locally compact, is implied here.

REPLY [6 votes]: If $X$ is locally compact, the result follows from Cantor's theorem. If $X$ is not locally compact, the result is not necessarily true as Stefan's example shows.<|endoftext|>
TITLE: Infinite series equality $\frac{1}{1+x}+\frac{2x}{1+x^2}+\frac{3x^2}{1+x^3}+\frac{4x^3}{1+x^4}+\cdots$
QUESTION [9 upvotes]: Prove the following equality ($|x|<1$).
$$\frac{1}{1+x}+\frac{2x}{1+x^2}+\frac{3x^2}{1+x^3}+\frac{4x^3}{1+x^4}+\cdots\\ =\frac{1}{1-x}+\frac{3x^2}{1-x^3}+\frac{5x^4}{1-x^5}+\frac{7x^6}{1-x^7}+\cdots\\$$

REPLY [4 votes]: Multiplying both sides by $x$, the identity is equivalent to
$$ \sum_{k=1}^{\infty} \frac{kx^k}{1+x^k} = \sum_{k=1}^{\infty} \frac{(2k-1)x^{2k-1}}{1-x^{2k-1}}. $$
Expanding and rearranging, each series can be written as
\begin{align*}
\sum_{k=1}^{\infty} \frac{kx^k}{1+x^k}
&= \sum_{k=1}^{\infty} \sum_{j=1}^{\infty} (-1)^{j-1} k x^{jk} = \sum_{n=1}^{\infty} \Bigg( \sum_{d|n} (-1)^{d-1}\frac{n}{d} \Bigg) x^{n} \\
\sum_{k=1}^{\infty} \frac{(2k-1)x^{2k-1}}{1-x^{2k-1}}
&= \sum_{k=1}^{\infty} \sum_{j=1}^{\infty} (2k-1)x^{j(2k-1)} = \sum_{n=1}^{\infty} \Bigg( \sum_{\substack{d | n \\ d \text{ odd}}} d \Bigg) x^{n}
\end{align*}
So it is sufficient to prove that
$$ \color{blue}{\sum_{d|n} (-1)^{d-1}\frac{n}{d} = \sum_{\substack{d | n \\ d \text{ odd}}} d} \quad \text{for } n = 1, 2, \cdots. \tag{1}$$
To this end, write $n = 2^e m$ with $e \geq 0$ and $m$ odd.
Then
$$ \sum_{d|n} (-1)^{d-1}\frac{n}{d} = \sum_{d'|m} \underbrace{\sum_{i=0}^{e} (-1)^{2^i d'-1} 2^{e-i}}_{=1} \frac{m}{d'} = \sum_{d'|m} \frac{m}{d'} = \sum_{d|m} d = \sum_{\substack{d | n \\ d \text{ odd}}} d $$
and hence $\text{(1)}$ is proved.<|endoftext|>
TITLE: What are the mean and variance of the log of a random variable?
QUESTION [5 upvotes]: Here's the problem.
We have a random variable X that follows a Poisson law. If we take the log of this variable, what are the first two moments (mean and variance) of the law it follows?
This looks like a simple question, but I can't find anything about it. Any idea?
EDIT: In order to prevent the X=0 case, we bias the Poisson law. Our random variable becomes log(X+epsilon) with X~Poisson(λ).

REPLY [2 votes]: This is also called the delta method. If $g(X)$ is a function of a random variable $X$, and $g$ is differentiable, in practice we can use a first-order Taylor expansion at $\bar{X}$, the mean of $X$: $$g(X) \approx g(\bar{X}) + \frac{d}{dX}g(\bar{X})(X-\bar{X})$$
Therefore, $$E[g(X)] \approx g(\bar{X}) + \frac{d}{dX}g(\bar{X})E(X-\bar{X}) = g(\bar{X}),$$ because the second term is 0.
Similarly, the variance is:
$$Var[g(X)] \approx Var\left(\frac{d}{dX}g(\bar{X})X\right) = \left[\frac{d}{dX}g(\bar{X})\right]^2Var(X)$$
In your case $X\sim Poisson(\lambda)$, $g(X) = \log X$; plug in and you get:
$$E(\log X) \approx \log\lambda$$
$$Var(\log X) \approx \frac{1}{\lambda}$$
You have to be careful with the $X=0$ case, as other folks have already stated.<|endoftext|>
TITLE: A solution of a second order homogeneous ODE that has infinitely many zeros on a closed interval is $y=0$
QUESTION [6 upvotes]: We have the ODE $y''+p(x)y'+q(x)y=0$, the functions $p(x)$ and $q(x)$ continuous on a closed interval $[a,b]$. Prove that if the solution $y(x)$ has an infinite number of roots ($x$s such that $y(x)=0$) on the interval $[a,b]$, then $y(x)=0$.

I tried negating the claim, saying there is an $x_0$ such that $y(x_0)\neq 0$, but got nowhere.
A hint or a direction of thought would be appreciated.

REPLY [7 votes]: Choose a sequence $\{x_n\}_{n=1}^{\infty}$ of distinct roots of $y$, so $y(x_n) = 0$ and $x_n \in [a,b]$. Since $[a,b]$ is compact, the sequence $\{ x_n \}_{n=1}^{\infty}$ has a convergent subsequence which we will still denote by $\{ x_n \}_{n=1}^{\infty}$ with $x_n \to x_0$ and $x_0 \in [a,b]$. Since $y$ is continuous, we have $y(x_0) = 0$.
Consider
$$ y'(x_0) = \lim_{x \to x_0} \frac{y(x) - y(x_0)}{x - x_0} = \lim_{x \to x_0} \frac{y(x)}{x - x_0}. $$
The limit exists and so we can calculate it using an arbitrary sequence approaching $x_0$. Since the $x_n$ are distinct, by discarding at most one element of the sequence we can assume that $x_n \neq x_0$ for all $n \in \mathbb{N}$. Calculating the derivative using $\{ x_n \}$, we get
$$ y'(x_0) = \lim_{n \to \infty} \frac{y(x_n)}{x_n - x_0} = \lim_{n \to \infty} \frac{0}{x_n - x_0} = 0. $$
The uniqueness of solutions to the initial value problem then guarantees that $y(x) = 0$ for all $x \in [a,b]$.<|endoftext|>
TITLE: Relation between coefficients of a matrix and its eigenvalues
QUESTION [5 upvotes]: Let $A \in \mathbb{R}^{n\times n}$ be a matrix and $\rho(A)$ its largest eigenvalue (or the largest modulus of its eigenvalues). Let $a_{ij}$ be a typical entry of the matrix $A$ at the $i$-th row and $j$-th column such that $a_{ij} \in \{0,1\}$ and $a_{ii} \equiv 0$.
If the following condition is satisfied for some constant $\alpha > 0$ $$ \alpha \rho(A) < 1 $$ can we deduce that $$\alpha a_{ij} < 1 $$ for all $i$ and $j$ in $\{1, \ldots, n \}$?
For example, the following matrix $A$, \begin{pmatrix} 0 & 0 & 1 \\ 1 & 0 & 1 \\ 1 & 1 & 0 \end{pmatrix} has eigenvalues $-0.6180$, $1.6180$ and $-1.0000$. Since $\rho(A)= 1.618$ it seems to be true for this special case. Can anyone see an obvious counterexample?
We know from the Gershgorin circle theorem that every eigenvalue of the square matrix $A$ lies in at least one of the Gershgorin discs $D(a_{ii} , R_i)$, where $D(a_{ii} , R_i)$ is a closed disc centered at $a_{ii}$ with radius $R_i = \sum_{ j \neq i } |a_{ij} |$. So we have an estimate of the range of the eigenvalues but it doesn't directly answer my question.

REPLY [2 votes]: Question: Let $A=(a_{i,j})\in\{0,1\}^{n\times n}$, find $I\subset [0,\infty)$ such that $$\alpha \in I\qquad \iff \qquad \begin{cases}\alpha\ \rho(A)<1 \\ \alpha\ a_{i,j}<1 & \forall i,j=1,\ldots,n\end{cases}$$
Answer:

If $\rho(A)>0$, then $I=\big[0,\min\{1,\rho(A)^{-1}\}\big).$
If $\rho(A)=0$ and $A\neq 0$, then $I= [0,1)$.
If $A=0$, then $I =[0,\infty)$.

Note that if $\rho(A)> 0$
$$ \alpha \ \rho(A)< 1 \quad\iff\quad \alpha < \rho(A)^{-1} \qquad \text{and}\qquad \alpha \ a_{i,j}<1\quad \forall i,j\quad \iff\quad \alpha <1.$$
Here we have used that $\rho(A)>0$ implies $A\neq 0$ and thus there are $i,j$ such that $a_{i,j}=1$.
If $\rho(A)=0$ and $A\neq 0$, then
$$ 0=\alpha \ \rho(A)< 1 \qquad \text{and}\qquad \alpha \ a_{i,j}<1\quad \forall i,j\quad \iff\quad \alpha <1$$
If $A=0$ then the result is obvious.<|endoftext|>
TITLE: Show that $\int_{\mathbb{R}} |f'(t)|^2+(9t^6+18t^4)|f(t)|^2 dt\ge 3$ for functions with unit $L^2$ norm
QUESTION [5 upvotes]: I want to show that $$g(f):=\int_{\mathbb{R}} |f'(t)|^2+(9t^6+18t^4)|f(t)|^2 dt$$ is bounded from below by $3$ for $f \in C_c^{\infty}(\mathbb{R})$ and $||f||_{L^2}=1.$
What is obvious is that $g$ is bounded below by $0,$ but I don't see how the $3$ comes into the game. Does anybody have an idea?
My ideas so far:
Throw away any of the terms, as they are all positive (does not sound that good to me, as it is a very bold approximation).
Use Sobolev's inequality to eliminate the derivative.
In particular, I think we have to do something about this polynomial there.
Use the Fourier transform (Plancherel) to turn derivatives into polynomials and vice versa.
If anything is unclear, please let me know.

REPLY [3 votes]: First of all, the lower bound you are looking for follows directly if we show that the smallest eigenvalue of the self-adjoint realization of
$$\mathcal L=-\frac{d^2}{dt^2}+9t^6+18t^4$$
in $L^2(\mathbb R)$ is bounded below by $3$. This fact follows by considering the Rayleigh–Ritz quotient for $\mathcal L$, since $C_c^{\infty}(\mathbb R)$ is included in the domain of $\mathcal L$.

Statement The smallest eigenvalue of $\mathcal L$ equals $3$.

Proof
To make a rather long story short, after some failed tries to use spectral bounds (if you consider a gaussian for example, you will see that the lowest eigenvalue is less than $3.09217$; this is the reason for my comment above), I finally came up with looking at functions of the form (this is motivated firstly by gaussians, but also by the fact that we want an even function)
$$\psi(t)=ce^{a_2t^2+a_4t^4},$$
where $c$ is a normalization constant. It turns out that if we let $a_2=-3/2$ and $a_4=-3/4$, i.e.
consider the function
$$\psi(t)=ce^{-\frac{3}{4}t^2(2+t^2)}$$
then a simple differentiation shows that
$$\mathcal L\psi=3\psi.$$
Finally, one must argue that the eigenfunction $\psi$ corresponds to the smallest eigenvalue. But that follows from Sturm–Liouville theory, since $\psi$ does not change sign. $\square$
As a bonus, I give you a plot of the four lowest eigenfunctions of $\mathcal L$ (plotted at the height of their corresponding eigenvalue) together with the potential (the plot was made using a slight modification of this great code).<|endoftext|>
TITLE: Branch of $\sqrt{1-z^2}$
QUESTION [8 upvotes]: Show that a branch of $\sqrt{1-z^2}$ can be defined in any region $\Omega$ where the points $1,-1$ are in the same component of its complement.
This is a question in Ahlfors' Complex Analysis (P.148 Q5) that I came across while trying to self-study the book. I tried to tackle the problem by considering $\Omega=\mathbb{C} \backslash [-1,1]$ first, and tried the approach as in Section 4.4 Corollary 2, namely find a branch of the corresponding log first. For this $\Omega$, the image of $1-z^2$ is $\mathbb{C} \backslash [0,1]$, so a branch of $\log(1-z^2)$ cannot be defined; evidently one needs to construct the branch of $\sqrt{1-z^2}$ directly. Here is where I ran out of ideas... Any help is appreciated!

REPLY [3 votes]: A branch of $\sqrt{1-z^2}$ is a lift of $f(z)=1-z^2$ with respect to the covering map $p:\mathbb{C}^\times\to\mathbb{C}^\times,\,p(z)=z^2 $. Fix a basepoint $z_0\in\Omega$, and let $\tilde{w_0}\in \mathbb{C}^\times$ be a point such that $p(\tilde{w_0})=f(z_0)$. By the theory of covering spaces, $f$ has a lift with respect to $p$ if and only if $f_\ast(\pi_1(\Omega,z_0))\subset p_\ast(\pi _1(\mathbb{C}^\times,\tilde{w_0}))$. Now we may identify $\pi_1(\mathbb{C}^\times,{f(z_0)})$ with $\mathbb{Z}$ via the integration $$\gamma\mapsto \frac{1}{2\pi i}\int_\gamma \frac{1}{z}dz,$$ and under this identification $\operatorname{im}p_\ast$ corresponds to $2\mathbb{Z}$. Therefore, we have $\operatorname{im}f_\ast\subset\operatorname{im}p_\ast$ if and only if $$\frac{1}{2\pi i} \int_{f\ast\gamma}\frac{1}{z}dz\in 2\mathbb{Z}$$ for every (piecewise $C^1$) closed curve $\gamma$ in $\Omega$. On the other hand, $$\frac{1}{2\pi i} \int_{f\ast\gamma}\frac{1}{z}dz=\frac{1}{2\pi i}\int_{\gamma}\frac{2z}{z^2-1}dz=\frac{1}{2\pi i}\int_\gamma\left(\frac{1}{z-1}+\frac{1}{z+1}\right)dz$$ is the sum of the winding numbers of $\gamma$ around $1$ and $-1$. By our hypothesis on $\Omega$, these winding numbers are equal, and we are done.<|endoftext|>
TITLE: What is the agreed upon definition of a "positive definite matrix"?
QUESTION [12 upvotes]: In here: http://ocw.mit.edu/courses/mathematics/18-06sc-linear-algebra-fall-2011/positive-definite-matrices-and-applications/symmetric-matrices-and-positive-definiteness/MIT18_06SCF11_Ses3.1sum.pdf

A positive definite matrix is a symmetric matrix A for which all eigenvalues are positive. Gilbert Strang

I have heard of positive definite quadratic forms, but never heard of definiteness for a matrix.

Definiteness is the higher-dimensional analogue of whether something is convex (opening up) or concave (opening down). It does not make sense to me to say a matrix is opening up, or that a matrix is opening down.

Therefore it does not make sense to say that a matrix has definiteness.

In addition, when we say $M \in \mathbb{R}^{n \times n}$ is positive definite, what is the first thing we do?
We plug $M$ into a function(al) $x^T (\cdot) x$ and check whether the function is positive for all nonzero $x \in \mathbb{R}^n$. Clearly, that means we are defining this definiteness with respect to $x^T (\cdot) x$ and NOT $M$ itself.
Furthermore, when a matrix has complex eigenvalues, we ditch the notion of definiteness altogether. Clearly, definiteness is a flimsy property for matrices if we can just throw it away when it becomes inconvenient.

I will grant you that if we were to define positive definite matrices, we should only define them with respect to symmetric matrices. This is the definition on Wikipedia, the definition used by numerous linear algebra books and many applied math books.
But then when confronted with a matrix of the form
$$\begin{bmatrix} 1 & -1 \\ 0 & 1 \end{bmatrix}$$
I still firmly believe that this matrix is not positive definite because it is not symmetric. Because to me positive definiteness implies symmetry.
To what degree is it widely agreed upon in the math community that a positive definite matrix is defined strictly with respect to symmetric matrices, and why only with respect to symmetric matrices?

REPLY [16 votes]: Positive definiteness (as you already pointed out) is a property of quadratic forms. However, there is a "natural" one-to-one correspondence between symmetric matrices and quadratic forms, so I really cannot see any reason why not to "decorate" symmetric matrices with positive definiteness (and other similar adjectives) just because it is in "reality" the form they define which actually has this property. I can see this one-to-one correspondence as one of the reasons why the symmetry should be implicitly assumed when talking about positive definite matrices.
One can of course devise a different name for this property, but why? In addition, positive definite matrix is a pretty standard term so if you continue reading on matrices I'm sure you will find it more and more often.
Some authors (not only on Math.SE) allow positive definite matrices to be nonsymmetric by saying that $M$ is such that $x^TMx>0$ for all nonzero $x$. In my opinion this adds more confusion than good (not only on Math.SE). Also note that (with a properly "fixed" inner product) such a definition would not even make sense in the complex case if the matrix was allowed to be non-Hermitian ($x^*Mx$ is real for all $x$ if and only if...).
Anyway, for real matrices, it of course makes sense to study nonsymmetric matrices giving a positive definite quadratic form through $x^TMx$ (which effectively means that the symmetric part is positive definite). However, I find denoting them as positive definite quite unlucky.<|endoftext|>
TITLE: Stabilizer Conjugation
QUESTION [5 upvotes]: This may be a straightforward question, but if I have a group $G$ acting on a set $A$, and two elements $a,b\in A$ belong to the same orbit, how do I show that their stabilizers are conjugate?
So far I know that $a=gb$ for some $g\in G$. Do I just need to show that $g$ times some element of the stabilizer of $b$ is equal to an element of the stabilizer of $a$?

REPLY [2 votes]: It is $b \in O(a) \Leftrightarrow \exists \bar g \in G \mid b= \bar g \cdot a$.
Then:
\begin{alignat}{1}
\operatorname{Stab}(b) &= \{g \in G \mid g \cdot b = b\} \\
&= \{g \in G \mid g \cdot (\bar g \cdot a) = \bar g \cdot a\} \\
&= \{g \in G \mid (g \bar g) \cdot a = \bar g \cdot a\} \\
&= \{g \in G \mid \bar g^{-1}\cdot((g \bar g) \cdot a) = \bar g^{-1}\cdot(\bar g \cdot a)\} \\
&= \{g \in G \mid (\bar g^{-1}g \bar g) \cdot a = a\} \\
\tag 1
\end{alignat}
Now, call $g':=\bar g^{-1}g \bar g$; then $g=\bar gg'\bar g^{-1}$ and $(1)$ reads:
\begin{alignat}{1}
\operatorname{Stab}(b) &= \{\bar gg'\bar g^{-1} \in G \mid g' \cdot a = a\} \\
&= \{\bar gg'\bar g^{-1} \in G \mid g' \in \operatorname{Stab}(a)\} \\
&= \bar g \operatorname{Stab}(a)\bar g^{-1}
\end{alignat}<|endoftext|>
TITLE: Show that $\lim_{\epsilon\to0^{+}}\frac{1}{2\pi i}\int_{\gamma_\epsilon}\frac{f(z)}{z-a} \, dz=f(a)$
QUESTION [7 upvotes]: Let $a\in\Bbb C$ and $r>0$ and denote by $B(a,r)\subseteq \Bbb C$ the open ball of center $a$ and radius $r$. Assume that $f:B(a,r)\to\Bbb C$ is a continuous function and for each $\epsilon>0$ let $\gamma_\epsilon:[0,2\pi]\to \Bbb C$ be given by $\gamma_\epsilon(t)=a+\epsilon e^{it}$. Show that
$$\lim_{\epsilon\to0^{+}}\frac{1}{2\pi i}\int_{\gamma_\epsilon}\frac{f(z)}{z-a} \, dz = f(a).$$

I tried the following
$$\lim_{\epsilon\to0^{+}}\frac{1}{2\pi i}\int_{\gamma_\epsilon}\frac{f(z)}{z-a}dz=\lim_{\epsilon\to0^{+}}\frac{1}{2\pi i}\int_0^{2\pi}\frac{f(a+\epsilon e^{it})}{a+\epsilon e^{it}-a}i\epsilon e^{it} \, dt = \lim_{\epsilon\to0^{+}} \frac{1}{2\pi} \int_0^{2\pi}f(a+\epsilon e^{it}) \, dt$$
If I can interchange the limit and the integral, then it is obvious. I tried to use the Lebesgue bounded convergence theorem to argue that, since $f:B(a,r)\to\Bbb C$ is continuous, on $B(a,r)$ we have $|f(a+\epsilon e^{it})|\le M$, where $M$ is the maximum of $|f(x)|$ on $B(a,r)$.
Is that valid?

REPLY [4 votes]: $f$ is continuous in $a$, therefore for all $\eta > 0$ there exists a $\delta > 0$ such that
$$|f(z) - f(a)| < \eta \text{ for all } z \in B(a, \delta) \, .$$
Then for $0 < \epsilon < \delta$
$$\left| \frac{1}{2\pi i}\int_{\gamma_\epsilon}\frac{f(z)}{z-a} \, dz - f(a) \right| = \left | \frac{1}{2\pi}\int_0^{2\pi}(f(a+\epsilon e^{it}) - f(a)) \, dt \right| \le \frac{1}{2\pi} \int_0^{2\pi} \bigl|f(a+\epsilon e^{it}) - f(a) \bigr| \, dt \\ \le \frac{1}{2\pi} \int_0^{2\pi} \eta \, dt = \eta$$
and the conclusion follows.<|endoftext|>
TITLE: Sigma algebra - motivation in measure theory
QUESTION [9 upvotes]: Taken from the Motivation section of the sigma-algebra article:

A measure on $X$ is a function that assigns a non-negative real number to subsets of $X$; this can be thought of as making precise a notion of "size" or "volume" for sets. We want the size of the union of disjoint sets to be the sum of their individual sizes, even for an infinite sequence of disjoint sets.
One would like to assign a size to every subset of $X$, but in many natural settings, this is not possible. For example the axiom of choice implies that when the size under consideration is the ordinary notion of length for subsets of the real line, then there exist sets for which no size exists, for example, the Vitali sets. For this reason, one considers instead a smaller collection of privileged subsets of $X$. These subsets will be called the measurable sets.
They are closed under operations that one would expect for measurable sets, that is, the complement of a measurable set is a measurable set and the countable union of measurable sets is a measurable set. Non-empty collections of sets with these properties are called $\sigma$-algebras.

source
Does it explain the need for sigma-algebras in mathematics or measure theory? I just don't see how this description explains the motivation for sigma-algebras.
First, the article states that there are sets that are not Lebesgue-measurable, i.e. it's impossible to assign a Lebesgue measure to them (one satisfying the property that the measure of an interval is its length, which is very natural). Ok, fine. However, next it says:

For this reason, one considers instead a smaller collection of privileged subsets of $X$. These subsets will be called the measurable sets. They are closed under operations that one would expect for measurable sets, that is, the complement of a measurable set is a measurable set and the countable union of measurable sets is a measurable set.

What are the 'privileged subsets of X'? Borel sets, for instance? Is it because the Borel sets form the smallest sigma algebra containing the open intervals (smallest meaning it has the fewest elements among all sigma algebras containing them), and Borel sets are built up from open intervals, so we can assign each such interval a size equal to its length? That's obviously one of many possible measures we can use.
AFAIK, I can have a sigma-algebra that contains a Vitali set. Everything can be a sigma-algebra, as long as it satisfies its 3 simple axioms. So if there is something you could add to clarify the quoted explanation, I'd be very grateful.
I've seen some amazing answers here on Math SE to related questions, like this one.

REPLY [19 votes]: Let me take a sidestep. Trying to understand $\sigma$ algebras through Vitali sets and Lebesgue measure, etc. is, in my opinion, the wrong approach. Let me offer a much, much simpler example.
A probability space is a measure space $(X, \sigma, \mu)$ where $\mu(X)=1$.
What's the simplest thing you can model with probability theory? Well, the flip of a fair coin. Let's write out all the measure theoretic details.
What are the possible outcomes? Well you can get a heads, $H$, or a tail, $T$. Measure theoretically, this is the set $X$. That is, $X=\{H,T\}$.
What are the possible events? Well first of all nothing can happen or something can happen. This is a given in ANY probability space (or measure space). Also, you can get a heads or you can get a tail. This is the $\sigma$ algebra. That is, $\sigma=\{\emptyset, X, \{H\}, \{T\}\}$.
What is the (probability) measure? Well what are the probabilities? First, what is the probability that NOTHING happens? Well, $0$. That is, $\mu(\emptyset)=0$. What is the probability that SOMETHING happens? Well, $1$. That is, $\mu(X)=1$ (this is what makes something a probability space). What is the probability of a heads or a tail? Well $1 \over 2$. That is, $\mu(\{H\})=\mu(\{T\})=\frac12$.
This is just filling out all the measure theoretic details in a very simple situation.
Okay. In this situation EVERY subset of $X$ is measurable. So still, who gives a damn about $\sigma$ algebras? Why can't you just say "everything is measurable" (which is a perfectly fine $\sigma$ algebra for every set, including $\Bbb{R}$!!) and totally forget this business about $\sigma$ algebras?
Well, let's very slightly modify the previous example.
Let's say you find a coin, and you're not sure if it's fair or not. Let's talk about the flip of a possibly unfair coin and fill in all the measure theoretic details.
Again, what are the possible outcomes? Again $X=\{H,T\}$.
Now here is where things are interesting. What is the $\sigma$ algebra for this coin flip (the events)? This is where things differ. The $\sigma$ algebra is just $\{\emptyset, X\}$. Remember, a $\sigma$ algebra is the DOMAIN of the measure $\mu$. Remember that we don't know if the coin is fair or not. So we don't know what the measure (probability) of a heads is! That is, $\{H\}$ is not a measurable set!
What is the measure? $\mu(\emptyset)=0$ and $\mu(X)=1$. That is, something will happen for sure and nothing won't happen. What a remarkably uninformational measure.
Here is a situation where the $\sigma$ algebras ARE different because they describe two different situations. A good way to think about $\sigma$ algebras is "information", especially with probability spaces.
Let's connect this back to Vitali sets and Lebesgue measure.

AFAIK, I can have a sigma-algebra that contains Vitali set. Everything can be a sigma-algebra, as long as it satisfies its 3 simple axioms. So if there is something you could add to clarify the quoted explanation, I'd be very grateful.

You are quite right. You CAN have a $\sigma$ algebra of $\Bbb{R}$ that contains every Vitali set. Just like you can have a $\sigma$ algebra of $\{H,T\}$ that contains $\{H\}$. But the point is that you might not be able to assign this a measure in a satisfactory way! Meaning that if you have a list of properties that you want Lebesgue measure to satisfy, a Vitali set necessarily can't satisfy them. So you "don't know" what measure to assign to it. It is not measurable with respect to a specified measure. You can create all sorts of measures where Vitali sets are measurable (for example one where the measure of everything is $0$).
Let me motivate the axioms of a $\sigma$ algebra in terms of probability.
The first axiom is that $\emptyset, X \in \sigma$. Well you ALWAYS know the probability of nothing happening ($0$) or something happening ($1$).
The second axiom is closure under complements. Let me offer a stupid example. Again, consider a coin flip, with $X=\{H, T\}$. Pretend I tell you that the $\sigma$ algebra for this flip is $\{\emptyset, X, \{H\}\}$. That is, I know the probability of NOTHING happening, of SOMETHING happening, and of a heads but I DON'T know the probability of a tails. You would rightly call me a moron. Because if you know the probability of a heads, you automatically know the probability of a tails! If you know the probability of something happening, you know the probability of it NOT happening (the complement)!
The last axiom is closure under countable unions. Let me give you another stupid example. Consider the roll of a die, or $X=\{1,2,3,4,5,6\}$. What if I were to tell you the $\sigma$ algebra for this is $\{\emptyset, X, \{1\}, \{2\}\}$. That is, I know the probability of rolling a $1$ and the probability of rolling a $2$, but I don't know the probability of rolling a $1$ or a $2$ (the union). Again, you would justifiably call me an idiot (I hope the reason is clear). What happens when the sets are not disjoint, and what happens with uncountable unions, is a little messier but I hope you can try to think of some examples.
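If it helps to experiment, the axioms are easy to test mechanically on a finite set, where countable unions reduce to finite ones. Here is a small editorial sketch (the helper is_sigma_algebra is mine, not standard library code) confirming that the die family fails precisely because the union $\{1\}\cup\{2\}$ is missing, even after the complements are thrown in:

```python
from itertools import combinations

def is_sigma_algebra(X, F):
    # X: a finite set; F: a collection of frozensets over X.
    # On a finite X, countable unions reduce to finite (pairwise) unions.
    F = set(F)
    if frozenset() not in F or frozenset(X) not in F:
        return False
    if any(frozenset(X) - A not in F for A in F):           # complements
        return False
    if any(A | B not in F for A, B in combinations(F, 2)):  # unions
        return False
    return True

X = frozenset({1, 2, 3, 4, 5, 6})
die = [frozenset(), X, frozenset({1}), X - {1}, frozenset({2}), X - {2}]
print(is_sigma_algebra(X, die))    # False: {1} | {2} = {1, 2} is missing

coin = frozenset({"H", "T"})
events = [frozenset(), coin, frozenset({"H"}), frozenset({"T"})]
print(is_sigma_algebra(coin, events))   # True: the fair-coin sigma algebra
```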
I hope this cleared things up.<|endoftext|>
TITLE: Finding the generating function for the Catalan number sequence
QUESTION [5 upvotes]: I know that the generating function for the Catalan number sequence is $$f(x) = \frac{1 -\sqrt{1-4x}}{2x}$$ but I want to prove it.
So the sequence for the Catalan numbers is $$1,1,2,5,14,\ldots$$ as we all know.
Now I have to find a generating function that generates this sequence.
I read that we can prove it this way: Assume that $f(x)$ is the generating function for the Catalan sequence; then by the Cauchy product rule it can be shown that $xf(x)^2 = f(x) - 1$
And so this implies that $$xf(x)^2 - f(x) + 1 = 0$$ and so we can get that $$f(x) = \frac{1-\sqrt{1-4x}}{2x}$$
But I can't get to understand how that is possible. Like, assume that $f(x)$ is the generating function for the Catalan sequence; then how come by the Cauchy product we have that $xf(x)^2 = f(x) - 1$?
I know that if we multiply the sequence $$1,1,2,5,14,\ldots$$
by itself we would get the resulting sequence $$1,2,5,14,\ldots$$
Because we have that $c_k = a_0b_k + a_1b_{k-1}+ \cdots + a_kb_0$ using the Cauchy product formula, but still how do we have that $xf(x)^2 = f(x) - 1$?
And how did we get that
$$f(x) = \frac{1-\sqrt{1-4x}}{2x}$$
from
$xf(x)^2 = f(x) - 1$? Did we use the quadratic formula somehow?

REPLY [3 votes]: I'd like to resurrect this old post :)
To motivate the equation for the generating function you may do direct calculations (!) on the set of correct bracket sequences.
Let
$$S = \square + () + (()) + ()() + ((())) + (())() + ()(()) + \ldots$$
Multiplication is intuitive, like $() \cdot (()) = ()(())$.
As @Timur vural pointed out: look at the expression that is surrounded by the first bracket ( and its matching pair ).
$$S = \square + (\square) \cdot \square + (())\cdot \square + (\square) \cdot () + ((()))\cdot \square + (())\cdot () + ()\cdot (()) + \ldots$$
And now let the magic happen!
$$S = \square + (\cdot S \cdot ) \cdot S$$
Replace the pair of brackets with $x$ and $S$ with $f(x)$ and get the equation you want:
$$f(x) = 1 + x\cdot f(x)\cdot f(x)$$<|endoftext|>
TITLE: Infinite product equality $\prod_{n=1}^{\infty} \left(1-x^n+x^{2n}\right) = \prod_{n=1}^{\infty} \frac1{1+x^{2n-1}+x^{4n-2}}$
QUESTION [7 upvotes]: Prove the following equation ($|x|<1$)
$$\prod_{n=1}^{\infty} \left(1-x^n+x^{2n}\right) = \prod_{n=1}^{\infty} \frac1{1+x^{2n-1}+x^{4n-2}}$$
I made this question and I have the following answer, but I think it may be incomplete.
If anyone can point out a flaw in my proof or give a better proof then it would be appreciated.
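(Editorial aside, before the attempted proof: a quick numerical sanity check of the claimed identity — a sketch only; the truncation level $N=200$ is an arbitrary choice of mine, and the products converge fast for $|x|<1$:)

```python
import numpy as np

def lhs(x, N=200):
    n = np.arange(1, N + 1)
    return np.prod(1 - x**n + x**(2 * n))

def rhs(x, N=200):
    n = np.arange(1, N + 1)
    return np.prod(1.0 / (1 + x**(2 * n - 1) + x**(4 * n - 2)))

for x in [0.1, -0.3, 0.5, 0.9]:
    print(x, lhs(x), rhs(x))   # the two truncated products agree closely
```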
My solution:
$$\begin{align}
&f(x)=\prod_{n=1}^{\infty} \left(1-x^n+x^{2n}\right)\\
&N(p,q)=\{n\in\mathbb N \mid n\ne (2m-1)2^{k-1};m,k\in\mathbb N,2m-1\leq p,k\leq q\}\\
&f(x)(1+x+x^2)=(1+x+x^2)(1-x+x^2)\prod_{n\in N(1,1)}^{\infty} \left(1-x^n+x^{2n}\right)\\
&=(1+x^2+x^4)\prod_{n\in N(1,1)}^{\infty} \left(1-x^n+x^{2n}\right)=(1+x^2+x^4)(1-x^2+x^4)\prod_{n\in N(1,2)}^{\infty} \left(1-x^n+x^{2n}\right)\\
&=(1+x^4+x^8)\prod_{n\in N(1,2)}^{\infty} \left(1-x^n+x^{2n}\right)=(1+x^4+x^8)(1-x^4+x^8)\prod_{n\in N(1,3)}^{\infty} \left(1-x^n+x^{2n}\right)\\
&=(1+x^8+x^{16})\prod_{n\in N(1,3)}^{\infty} \left(1-x^n+x^{2n}\right)=\cdots\\
&=\lim_{k\to\infty}(1+x^{2^k}+x^{2^{k+1}})\prod_{n\in N(1,k)}^{} \left(1-x^n+x^{2n}\right)=\prod_{n\in N(1,\infty)}^{} \left(1-x^n+x^{2n}\right)\\
&\text{Similarly,}\\
&f(x)(1+x+x^2)(1+x^3+x^6)=\prod_{n\in N(3,\infty)}^{} \left(1-x^n+x^{2n}\right)\\
&f(x)(1+x+x^2)(1+x^3+x^6)(1+x^5+x^{10})=\prod_{n\in N(5,\infty)}^{} \left(1-x^n+x^{2n}\right)\\
&\cdots\\
&f(x)\prod_{m=1}^{\infty} \left(1+x^{2m-1}+x^{2(2m-1)}\right)=\lim_{p\to\infty} \prod_{n\in N(p,\infty)}^{} \left(1-x^n+x^{2n}\right)=1
\end{align}$$
(*) Is it obvious that $\{(2m-1)\cdot2^{k-1}\mid m,k\in \mathbb N\}$ is equal to $\mathbb N$, or should I also prove it?
Thanks.

REPLY [5 votes]: We have
\begin{align}
\prod_{n = 1}^\infty (1 - x^n + x^{2n}) &= \prod_{n = 1}^\infty \frac{1+x^{3n}}{1+x^n} \\
&= \prod_{n = 1}^\infty \frac{1 - x^{6n}}{1-x^{3n}}\prod_{n=1}^\infty\frac{1-x^{n}}{1-x^{2n}}\\
& = \prod_{n = 1}^\infty \frac{1}{1-x^{3(2n-1)}}\prod_{n = 1}^\infty (1 - x^{2n-1})\\
&= \prod_{n = 1}^\infty \frac{1-x^{2n-1}}{1-(x^{2n-1})^3}\\
&= \prod_{n = 1}^\infty \frac{1}{1 + x^{2n-1} + x^{4n-2}}.
\end{align}<|endoftext|>
TITLE: Two term free resolution of an abelian group.
QUESTION [7 upvotes]: This is probably a very easy question but I think I am missing some background regarding free abelian groups to answer it for myself.
In Hatcher's Algebraic Topology, the idea of a free resolution is introduced in the section on cohomology.

A $\textbf{free resolution}$ of an abelian group is an exact sequence $$ \cdots \to F_2 \to F_1 \to F_0 \to H \to 0$$ such that each $F_i$ is free.

Choose a set of generators of $H$, let $F_0$ be the free abelian group with basis in one-to-one correspondence with this set of generators, and let $f_0:F_0 \to H$ be the induced map.
Then we can easily form the two term free resolution
$$\cdots \to 0 \to Ker(f_0) \to F_0 \to H \to 0$$
Why is the assumption that $H$ is abelian necessary for a resolution of the form $0 \to F_1 \to F_0 \to H \to 0$ to exist?
For what non-abelian group does such a free resolution not exist?

REPLY [6 votes]: An abelian group is the same thing as a module over the ring $\mathbb{Z}$ (think about it). The ring $\mathbb{Z}$ is a PID, thus submodules of free $\mathbb{Z}$-modules are free. Reformulated in the context of abelian groups, subgroups of free abelian groups are free abelian. You can use this fact to show that every abelian group has a length 2 free resolution (this works over any PID $R$, in particular $R = \mathbb{Z}$):

Let $P_0 = \bigoplus_{m \in M} R_m$ be a direct sum of copies of $R$, one for each element of $M$ (the index is just here for bookkeeping reasons). This is a free $R$-module. This maps to $M$ through $\varepsilon : P_0 \to M$ by defining $\varepsilon_m : R_m \to M$, $x \mapsto x \cdot m$ and extending to the direct sum (coproduct).
The kernel $P_1 = \ker(P_0 \to M)$ is a submodule of the free module $P_0$, hence it is free as $R$ is a PID. Thus you get a free resolution (exact sequence):
$$0 \to P_1 \to P_0 \to M \to 0.$$

In the above proof, notice that "free $\mathbb{Z}$-module" is also the same thing as "free abelian group", so everything works out. You can also choose a set of generators for your module $M$, but I feel it's cleaner to just take every element of $M$ and be done with it.
As Bernard mentions in the comments, for finitely generated abelian groups it's easy to see from the structure theorem (e.g. $\mathbb{Z}/n\mathbb{Z}$ has the free resolution $0 \to \mathbb{Z} \xrightarrow{\cdot n} \mathbb{Z} \to \mathbb{Z}/n\mathbb{Z} \to 0$).
Other people have already commented on the difference between resolutions of abelian groups and groups in general. Surprisingly enough, subgroups of free groups are free too by the Nielsen–Schreier theorem, and the standard proof even uses algebraic topology! It all comes around. So you can directly adapt the above argument to show that every (not necessarily abelian) group has a free resolution of length at most two:

Let $G$ be a group. Let $P_0 = \bigstar_{g \in G} \mathbb{Z}_g$ be the free product of copies of $\mathbb{Z}$, one for each element of $G$. This is a free group. By the universal property of free groups, this maps to $G$ by sending $1 \in \mathbb{Z}_g$ to $g \in G$. The kernel $P_1 = \ker(P_0 \to G)$ is a subgroup of a free group, thus it is free itself, and you get a free resolution of $G$ of length at most 2:
$$0 \to P_1 \to P_0 \to G \to 0.$$

In the above proof, $\mathbb{Z}$ appears too, but for different reasons: it is the free group on one generator. It also happens to be the free abelian group on one generator, but that's not its role in the proof above. The groups $P_0$ and $P_1$ that appear in the proof are, in general, not free abelian.<|endoftext|>
TITLE: Uniqueness of elements in Klein Four and "Klein Five" group
QUESTION [7 upvotes]: I had a question about the uniqueness of group elements.
Let the Klein Four group be defined as the group generated by the elements $\{1,a,b,c\}$ such that $a^2=b^2=c^2=1$ and $ab=c$, $bc=a$, $ca=b$, and $1$ is the identity element.
Let the "Klein Five" group be defined as the group generated by the elements $\{1,a,b,c,d\}$ such that $a^2=b^2=c^2=d^2=1$ and $ab=c$, $bc=d$, $cd=a$, $da=b$, and $1$ is the identity element.
If I manipulate the symbols of the "Klein Five" group I defined above, I can show every element is equivalent to the identity. From $ab=c=ad$ I can see $a$ is the identity, from $bc=d=ba$ I can see $b$ is the identity, and so on. This gives that $a=b=c=d=1$. In some sense, this group doesn't seem to exist. I can't make a group with $1,a,b,c,d$ pairwise distinct under the constraints above.
How do I know that the same isn't true of the Klein Four group? How do I know there isn't some set of constraints that makes the group "non-existent" for unique elements in the same way as the "Klein Five" group?
Any help would be appreciated!

REPLY [5 votes]: In case you don't relish the idea of writing down a multiplication table and verifying that it satisfies the group axioms (checking associativity in particular is tedious even for a $4 \times 4$ table, and impractical for anything much larger than that), another approach is to exhibit the Klein four group as a subgroup of an existing group.
For example, consider the set $GL_2(\mathbb R)$ of invertible $2\times 2$ matrices, which form a group under multiplication. Then verify that the set
$$\left\{\begin{pmatrix}1 & 0 \\ 0 & 1\end{pmatrix}, \begin{pmatrix}-1 & 0 \\ 0 & 1\end{pmatrix}, \begin{pmatrix}1 & 0 \\ 0 & -1\end{pmatrix}, \begin{pmatrix}-1 & 0 \\ 0 & -1\end{pmatrix} \right\}$$
is closed under multiplication and inverses, and hence is a subgroup of $GL_2(\mathbb R)$. Then check that it satisfies the conditions which define the Klein 4-group.<|endoftext|>
TITLE: Modulus of complex number less than or equal to 1
QUESTION [5 upvotes]: If $|z+1|\le 1 \text{ and } |z^2+1|\le 1$, then we have $$ |z|\le 1.$$
I wrote $z=x+iy, x,y\in \mathbb{R}$ and the inequalities from the hypothesis become
\begin{equation} (x+1)^2+y^2\le 1 \text{ and } (x^2-y^2+1)^2+4x^2y^2\le 1 \end{equation}...and I don't see how to deduce from here $$x^2+y^2\le 1.$$ I plotted the region with Wolfram and I obtained something that didn't help my intuition...
first inequality,
second one

REPLY [5 votes]: Applying the Triangle Inequality: $$2|z| = |z^2+2z+1 - 1 - z^2|\le |z+1|^2 + |z^2+1| \le 2$$<|endoftext|>
TITLE: Prove that the boy cannot escape the teacher
QUESTION [22 upvotes]: I'm struggling with the following problem from Terence Tao's "Solving Mathematical Problems":

Suppose the teacher can run six times as fast as the boy can swim. Now show that the boy cannot escape. (Hint: Draw an imaginary square of sidelength 1/6 unit centred at $O$. Once the boy leaves that square, the teacher gains the upper hand.)

Here $O$ is the center of the swimming pool. This question is a follow-up on the previous one, which is solved in the affirmative in the text

(Taylor 1989, p. 34, Q2). In the centre of a square swimming pool is a boy, while his teacher (who cannot swim) is at one corner of the pool. The teacher can run three times faster than the boy can swim, but the boy can run faster than the teacher can. Can the boy escape from the teacher? (Assume both persons are infinitely manoeuvrable.)

My attempt:
Since the boy can always swim back into the small square of sidelength 1/6 centered at $O$, I can't see how to apply the hint properly. Also, since the student's path need not even be smooth (it was taken as a polygonal chain in the previous question) I'm having difficulties writing data down clearly.
Any help would be appreciated. Thanks.

REPLY [2 votes]: The boy has no incentive to change direction since he loses time.
Which direction should he pick?

In the 6×speed case, no matter which direction the boy chooses, the teacher can get there faster.
In the 3×speed case, there are numerous directions where the boy can escape... just not the opposite diagonal corner.<|endoftext|>
TITLE: How to show $\frac{\mathbb{Z}_m\times \mathbb{Z}_n}{\langle (a,b)\rangle}\simeq \mathbb{Z}_{\frac mc}\times \mathbb{Z}_{\frac nd}$?
QUESTION [5 upvotes]: Suppose we have the group $\mathbb{Z}_m\times\mathbb{Z}_n$, and $(a,b)\in\mathbb{Z}_m\times\mathbb{Z}_n$. We need to justify that

(i) There exist $c, d$ such that $\langle (a,b)\rangle$ is isomorphic to the group $\mathbb{Z}_c\times \mathbb{Z}_d$ with $c\mid m, d\mid n$.
(ii) $\dfrac{\mathbb{Z}_m\times \mathbb{Z}_n}{\langle (a,b)\rangle}\simeq \mathbb{Z}_{\frac mc}\times \mathbb{Z}_{\frac nd}$.

How to show these?
For the first one I tried this: Since $\langle (a,b)\rangle$ is cyclic, there is $\alpha$ such that $\langle (a,b)\rangle\simeq \mathbb{Z}_\alpha$.
And then $\alpha\mid mn$, which means we can find two relatively prime $c,d$ such that $cd=\alpha, c\mid m, d\mid n$ and $\mathbb{Z}_\alpha\simeq \mathbb{Z}_c\times \mathbb{Z}_d$. Then?

REPLY [2 votes]: (i) In fact, $\alpha$ is the order of $(a,b)$ in $\mathbb Z_m\times\mathbb Z_n$, so $\alpha=\operatorname{lcm}(m',n')$, where $m'=\operatorname{ord}(a)\mid m$ and $n'=\operatorname{ord}(b)\mid n$. Set $d'=\gcd(m',n')$ and write $m'=d'm_1'$, $n'=d'n_1'$ with $\gcd(m_1',n_1')=1$. Then $\alpha=d'm_1'n_1'$.
Now your goal is to write $\alpha=\alpha_1\alpha_2$ with $\gcd(\alpha_1,\alpha_2)=1$ and $\alpha_1\mid m'$, $\alpha_2\mid n'$. This can be done as follows. If $\gcd(d'm_1',n_1')=1$ or $\gcd(m_1',d'n_1')=1$ then set $\alpha_1=d'm_1'$ and $\alpha_2=n_1'$, or $\alpha_1=m_1'$ and $\alpha_2=d'n_1'$, respectively. Otherwise, there are some primes in common between $d'$ and $n_1'$, respectively between $d'$ and $m_1'$ (but there are no primes in common between these two sets of primes since $\gcd(m_1',n_1')=1$). Now write $d'=ed_1'd_2'$ where $d_1'$ is the product of all common primes between $d'$ and $m_1'$, $d_2'$ is the product of all common primes between $d'$ and $n_1'$, and $e$ is the product of primes in $d'$ which show up neither in $m_1'$ nor in $n_1'$. Then $\alpha=(ed_1'm_1')(d_2'n_1')$ and notice that $\gcd(ed_1'm_1',d_2'n_1')=1$.
(ii) By using the SNF for the matrix whose rows are $(a,b)$, $(m,0)$, and $(0,n)$ one finds $$\frac{\mathbb{Z}_m\times \mathbb{Z}_n}{\langle (a,b)\rangle}\simeq \mathbb{Z}_{d_1}\times \mathbb{Z}_{d_2},$$
where $d_1=\gcd(a,b,m,n)$ and $d_2=\gcd(bm,an,mn)/d_1$. We have $d_1\mid m$, but I can't see why $d_2\mid n$.<|endoftext|>
TITLE: What is the mathematical notation for Convex Hull?
QUESTION [8 upvotes]: I've been scanning through scientific papers, this site and just googling for it, but I can't find a commonly accepted notation for the convex hull.
So my question is: if there is one, what is the standard notation for the convex hull?
It seems to me people just use their favorite out of a large collection of notations (or invent their own notation) to denote the convex hull of, say $S$, including
\begin{align} \mathrm{conv}(S),\ \mathrm{CH}(S),\ \mathrm{conv.hull}(S),\ \mathrm{Co}(S),\ \mathrm{C}(S), \end{align}
which I find frustrating. :)

REPLY [2 votes]: I've seen $\mathrm{Hull}(\cdot)$ used pretty often; I think Michael Kapovich uses it in his papers. Unless there's another usage of hull in mathematics that I'm not aware of, it's a pretty safe and easy to understand notation.<|endoftext|>
TITLE: Manifold and maximal atlas
QUESTION [7 upvotes]: 1) I didn't really understand what a maximal atlas is. Is it a set of compatible charts, maximal in the sense that adding one more chart would make the atlas incompatible?
2) Take two atlases $\mathcal A$ and $\mathcal A'$. If they are compatible, are they both in a maximal atlas $\hat{\mathcal A}$?
3) And if they are not compatible, are there two maximal atlases $\hat{\mathcal A}$ and $\tilde{\mathcal A}$ such that, for example, $\mathcal A$ is in $\hat{\mathcal A}$ and $\mathcal A'$ is in $\tilde{\mathcal A}$?
4) And if I understood well, $\hat{\mathcal A}$ gives a smooth structure and $\tilde{\mathcal A}$ gives another smooth structure? But both are incompatible?
I hope my questions are clear enough.

REPLY [7 votes]: (1) You are correct. A maximal atlas is maximal in the sense that it contains all possible compatible charts.
(2) Yes.
Every atlas $\mathcal{A}$ is contained in exactly one maximal atlas, and it is easy to describe it: it is the set of all charts compatible with $\mathcal{A}$. Since $\mathcal{A}$ already covers $M$, it can be checked that any two such charts are compatible (i.e. the corresponding transition maps are smooth) via going back and forth through charts in $\mathcal{A}$.
In particular, if ${\mathcal{A}}',{\mathcal{A}}$ are compatible, they are both contained in the same maximal atlas.
(3) Yes, same argument as (2).
(4) Yes.

It's worth noting why we define a smooth structure to be a maximal atlas:
We want each smooth structure (=maximal atlas) to define a unique sense of what it means for a function on the manifold (say from $M \to \mathbb{R}$) to be smooth.
We want a one-to-one correspondence between smooth structures and subsets of smooth functions.
Two compatible atlases are indistinguishable from this point of view, since they give rise to identical notions of smoothness of maps.<|endoftext|>
TITLE: What are the applications of functional analysis?
QUESTION [77 upvotes]: I recently had a course on functional analysis. I was thinking of studying the mathematical applications of functional analysis. I came to know it had some applications in the calculus of variations. I am not specifically interested in applications of functional analysis in pure branches of mathematics, but rather in applied mathematics.
Can anyone give a brief overview of the mathematical applications of functional analysis? Also, please suggest some good books for it.

REPLY [3 votes]: One field where functional analysis is brought close to applications is inverse problems.
This is a branch of mathematics concerning indirect measurements.
For a concrete example, consider X-ray tomography.
The physical problem is to find the (position-dependent) attenuation coefficient from the measured intensity drop along every line through the object.
(A machine shoots an X-ray through the object and compares the initial and final intensity. This is repeated for a great number of trajectories.)
Using the Beer–Lambert law of attenuation brings us to a mathematical formulation: How to reconstruct a function $\mathbb R^n\to\mathbb R$ from its integrals over all lines?
Is the function even uniquely determined by this data?
This problem becomes more tractable within a functional analytic framework.
We need a space $E$ of functions $\mathbb R^n\to\mathbb R$ and a space $F$ of functions $\Gamma\to\mathbb R$, where $\Gamma$ is the set of all lines in the Euclidean space.
The X-ray transform $I:E\to F$ is defined so that $If(\gamma)$ is the integral of $f$ over $\gamma$.
The mathematical X-ray tomography question can be reformulated: Is the X-ray transform injective?
This leads to a number of questions:
Is $I:E\to F$ continuous?
If it is injective, it has a left inverse.
Is it continuous $F\to E$?
How does this depend on the function spaces $E$ and $F$?
What happens if one only has some kind of partial data, perhaps with errors?
Is there perhaps a good pseudoinverse that is optimal in some way?
How can one define $I$ if $F$ and $E$ are distribution spaces or some other "non-classical objects"?
For example, $E=C_c(\mathbb R^n)$ and $F=C(\Gamma)$ makes $I$ continuous, but the inverse is discontinuous.
The same happens when $E$ and $F$ are $L^2$ spaces.
However, with suitable function spaces (Sobolev spaces) $I$ can indeed be an isomorphism (continuous and continuously invertible).
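To make the injectivity question concrete, one can discretize: an $n\times n$ image is a vector in $\mathbb R^{n^2}$, and summing along a family of lines is a linear map whose rank tells how much of the image the data determines. A toy editorial sketch (my own construction, using only horizontal and vertical lines — far less data than the full X-ray transform uses):

```python
import numpy as np

n = 4
rows = []
for i in range(n):            # sums along the n horizontal lines
    E = np.zeros((n, n)); E[i, :] = 1; rows.append(E.ravel())
for j in range(n):            # sums along the n vertical lines
    E = np.zeros((n, n)); E[:, j] = 1; rows.append(E.ravel())
A = np.vstack(rows)

# rank 2n - 1 = 7 < n**2 = 16: row and column sums alone cannot
# determine the image, so many more line directions are needed
print(A.shape, np.linalg.matrix_rank(A))
```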
In many cases it is convenient to study not $I$ directly but the normal operator $I^*I$.
Here $I^*$ is the $L^2$ adjoint, which turns out to be useful even when $I$ is not continuous or even well defined on $L^2$.
A weaker version of the adjoint is needed.
These endeavors can be taken in a number of different directions.
One can study the fine details of stability using microlocal analysis, or extend the theory to geodesics on a manifold (which has applications in seismic imaging, for example), or study convergence of numerical approximation schemes, or find a way to get a decent X-ray image with minimal radiation dose, or…
I wrote introductory lecture notes on the topic with very little prerequisites: Analysis and X-ray tomography.
There are a number of books on different aspects of X-ray tomography.
The classics of the mathematical theory are by Helgason and Natterer.
There are still open problems in this field, and even more so in the whole field of inverse problems.<|endoftext|>
TITLE: Folding a rectangular paper such that one corner moves along an opposite side
QUESTION [5 upvotes]: A rectangular paper is folded such that one corner moves along the opposite side. Prove that all the creases formed are tangent to a parabola.

Attempt:
Let the paper be oriented such that it's in the first quadrant and has one corner at the origin and sides along the axes.
After folding like so:

Let one side of the paper be $a$.
The equation of the crease is $$y=(x-h)\tan\theta$$
Also, in $\Delta LOH$,
$$\cos(\pi-2\theta)=-\cos(2\theta)=\frac{h}{a-h}$$
$$-\frac{1-\tan^2\theta}{1+\tan^2\theta}=\frac{h}{a-h}$$
Substituting the value of $\tan\theta$ from the equation of the crease,
$$ah^2+2(y^2-ax)h+a(x^2-y^2)=0$$
I am not getting anywhere close to proving the statement given.

REPLY [3 votes]: This is a basic question to be solved using Mathematical Origami, which was initially developed to serve as a tangible key to mathematical comprehension of geometric shapes.

Let $L_1$ be the line forming the bottom edge of our paper. Let $P_1$ be a point towards the middle, fairly close to $L_1$, and $P_2$ be a point on the left or right edge of our paper. Fold the paper and call the creased line $L_2$.

From the point $P_1$ construct a line which is $\perp$ to the folded portion of $L_1$. Let $X$ be the point where this line intersects $L_2$.

By opening our paper, we observe that the line segment $XP_1$ and the line segment from $X$ to $L_1$, call it $\overline{XA}$, are equal.

Therefore, $X$ is the point on $L_2$ which is equidistant to both $P_1$ and $L_1$. By definition, this point is on the parabola with focus $P_1$ and directrix $L_1$. $L_2$ is also the perpendicular bisector of the segment $\overline{AP_1}$ (the fold carries $A$ to $P_1$). Therefore, any point on $L_2$ is equidistant to $A$ and $P_1$.
Choose a point $Y$, on $L_2$ between $P_2$ and $X$. Construct the $\perp$ line to $L_1$ passing through $Y$, call it $\overline{YB}$. Note that $\triangle YBA$ is right, thus $\overline{YB}<\overline{YA} = \overline{YP_1}$. Since all points of the parabola must be equidistant to both the directrix, $L_1$, and the focus, $P_1$, we know the parabola lies above $L_2$ at this point.

Similarly, we can show this is true for any point on $L_2$ from $X$ to $L_1$.
Thus, $L_2$ is the tangent line to the parabola. [Source]<|endoftext|>
-TITLE: Functional equation $f(x+y)-f(x)-f(y)=\alpha\big(f(xy)-f(x)f(y)\big)$ is solvable without regularity conditions
-QUESTION [6 upvotes]: I was reviewing this question and got motivated to solve this general problem:
-
-Find all functions $f:\mathbb R\to\mathbb R$ such that for all real numbers $x$ and $y$,
-$$f(x+y)-f(x)-f(y)=\alpha\big(f(xy)-f(x)f(y)\big)$$
-where $\alpha$ is a nonzero real constant.
-
-I found out that it can be solved in a similar way to what I did in my own answer to that question. It was interesting for me that we don't need any regularity conditions like continuity. As I didn't find any questions about the same functional equation on the site, I thought it might be useful to post my own answer to it.
-I appreciate any other ways to solve the problem or any insights helping me understand why there's no need of regularity conditions here while some functional equations like Cauchy's need such additional assumptions to have regular solutions.
-
-REPLY [3 votes]: First, letting $y=1$ in
-$$f(x+y)-f(x)-f(y)=\alpha\big(f(xy)-f(x)f(y)\big)\tag0\label0$$
-and rearranging the terms, we get:
-$$f(x+1)=f(1)+(\beta-1)f(x)\tag1\label1$$
-$$\therefore\:f(2)=\beta f(1)\tag2\label2$$
-where $\beta=2+\alpha-\alpha f(1)$. Next, substituting $x+1$ for $x$ in \eqref{1} and using \eqref{1} and \eqref{2} we have:
-$$f(x+2)-f(2)=(\beta-1)^2f(x)\tag3\label3$$
-Next, letting $y=2$ in \eqref{0} and using \eqref{2} and \eqref{3} we get:
-$$\big((\beta-1)^2-1\big)f(x)=\alpha\big(f(2x)-\beta f(1)f(x)\big)$$
-which by simple calculations leads to
-$$f(2x)=\beta f(x)\tag4\label4$$
-Now, substituting $2x$ for $x$ and $2y$ for $y$ in \eqref{0} and using \eqref{4}, we'll get:
-$$\beta\big(f(x+y)-f(x)-f(y)\big)=\alpha\beta^2\big(f(xy)-f(x)f(y)\big)$$
-Multiplying \eqref{0} by $\beta$ and subtracting the last equation, we'll have:
-$$\beta(\beta-1)\big(f(xy)-f(x)f(y)\big)=0\tag5\label5$$
-If $\beta=0$ then by \eqref{4} we conclude that $f$ is the constant zero function. So this case can only happen when $\alpha=-2$.
-If $\beta=1$ then by \eqref{1} we conclude that $f$ is the constant $1+\frac1\alpha$ function.
-If $\beta\neq0$ and $\beta\neq1$ then by \eqref{0} and \eqref{5} we have:
-$$f(xy)=f(x)f(y)\tag6\label6$$
-$$f(x+y)=f(x)+f(y)\tag7\label7$$
-By letting $y=x$ in \eqref{6} we conclude that $f(x)$ is nonnegative for nonnegative $x$. Combining with \eqref{7} we find out that $f$ is increasing. An increasing additive function is of the form $f(x)=kx$. So by \eqref{6} we conclude that $f$ is the constant zero function or the identity function.<|endoftext|>
-TITLE: Geometric interpretation of a quintic's roots as a pentagon?
-QUESTION [6 upvotes]: According to this subsection, "Geometric interpretation of a cubic's roots", given,
-$$F(x)=x^3+ax^2+bx+c=0$$
-with three real roots, "...then the roots are the projection on the $x$-axis of the vertices $A, B, C$ of an equilateral triangle. The center of the triangle has the same abscissa as the inflection point."
-$$\text{Fig. 1}$$
-Questions:
-
-In general, given $F(x) = 0$ of degree $n>2$ with $n$ real roots, are the roots the projection on the $x$-axis of the vertices of a regular $n$-gon?
-If so (and at least for deg $n=4,5$), what features of the $n$-gon (like $\theta$, center's abscissa, etc.) can be given a closed form in terms of the roots?
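-For what it's worth, the quoted cubic statement is easy to check numerically; in the sketch below (my own, in Python) the triangle is recovered from the roots via a discrete Fourier mode, which is just one convenient normalization:
-import numpy as np
-
-# Cubic with three real roots: check they are the x-projections of the
-# vertices of an equilateral triangle centered at the inflection abscissa.
-coeffs = [1, -6, 11, -6]            # (x-1)(x-2)(x-3), roots 1, 2, 3
-r = np.sort(np.roots(coeffs).real)
-c = -coeffs[1] / 3                  # inflection abscissa = mean of the roots
-w = (2 / 3) * sum(rk * np.exp(-2j * np.pi * k / 3) for k, rk in enumerate(r))
-recon = [c + (w * np.exp(2j * np.pi * k / 3)).real for k in range(3)]
-print(r, recon)                     # the reconstruction reproduces the roots
-Here $|w|$ is the circumradius of the triangle and $\arg(w)$ plays the role of $\theta$.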
-
-REPLY [2 votes]: I'll use a separate answer to address OP's request for handling the specific case where the points are the roots of this quintic:
-$$x^5 + x^4 - 4 x^3 - 3 x^2+3 x+1 = 0$$
-We'll polygon-ize by imposing an order on the roots from least to greatest; that is, we take this to be the coordinate matrix:
-$$P := \left[\begin{matrix}
--1.918\dots & -1.309\dots & -0.284\dots & 0.830\dots & 1.682\dots \\
- 0 & 0 & 0 & 0 & 0\end{matrix}\right]$$
-Here's the graph of the polynomial, showing the (tinted) roots and the (black) $\{5/0\}$ component at the roots' centroid (aka, their average).
-
-Following the process outlined in my other answer, we get an affinely-regular origin-centered $5$-gon and $\{5/2\}$-gon; each of these decomposes into two actually-regular components, as shown:
-
-
-You'll notice that in this special case, where $P$'s vertices are collinear, the regular components of each type are mirror images of each other, in order that their $y$-coordinates in the vector sums cancel. If we choose to introduce projection to zero-out $y$-coordinates, then we can eliminate one of each of the regular components, and double the size of the remaining one. Here's one possibility:
-
-In the above, I took the opportunity to re-center the regular components with the centroid. The extra dots floating around are the vector sums of corresponding vertices of the components; projections of these onto the $x$-axis give the polynomial's roots.<|endoftext|>
-TITLE: Lie groups for beginners: Lie group of hyperbolic geometry
-QUESTION [5 upvotes]: I am trying to understand Lie groups and their relation to (2 dimensional) hyperbolic geometry.
-As far as I understand it (which is not very far, I am pushing my understanding here) the Lie group is the set of all isometric transformations in a geometry.
-So in hyperbolic geometry this is the set of all reflections, translations, rotations, horolations and maybe other (hyperbolic) length preserving transformations.
-But then:
-What does it mean that "the transformation group of hyperbolic geometry is the orthochronous Lorentz group $O(1,n)/O(1)$"? (found for example at https://en.wikipedia.org/wiki/Klein_geometry#Examples )
-As far as I can follow it (and I am pushing my understanding here) it should depend on which model of hyperbolic geometry you use.
-In the Poincaré disk model the transformation group is the set of 1) all circle inversions in circles orthogonal to the boundary circle and 2) their combinations (the first one being reflections in hyperbolic lines, the second one multiple reflections).
-In the Poincaré half plane model they are another transformation group: the set of 1) all circle inversions in circles centered on the boundary circle, 2) reflections in lines orthogonal to the boundary line and 3) their combinations (the first two being reflections in hyperbolic lines, the third one multiple reflections).
-But then I got stumped: what does this have to do with the Lorentz group? Or any other named group $SO(2)$ or $SO^+(2)$ or $O(1,n)/(O(1) \times O(n))$ (from https://en.wikipedia.org/wiki/Hyperbolic_geometry#Homogeneous_structure , I guess $n=2$ here but I don't even understand the formula)?
-I could do with a basic "Introduction to Lie groups for hyperbolic critters" book, recommendations welcome.
-
-REPLY [2 votes]: There are many ways to think about hyperbolic geometry, but for me, the hyperbolic plane is a two-dimensional complete simply-connected Riemannian manifold $(M,g)$ with constant curvature $-1$.
-Given this definition, one can ask two questions. The first is whether such a space exists, and the second is whether those properties determine the space uniquely up to an isometry, in the sense that if $(M_1,g_1)$ and $(M_2,g_2)$ are two-dimensional complete simply-connected Riemannian manifolds with constant curvature $-1$ then $(M_1,g_1)$ and $(M_2,g_2)$ are isometric.
-Treating the existence, one can construct such a space in various ways (which are called models of the hyperbolic plane). The model relevant to your question is called the hyperboloid model, for which $M$ is taken to be the forward-pointing sheet of a hyperboloid sitting inside the Minkowski space $\mathbb{R}^{1+2}$ and the metric $g$ is the Riemannian metric induced on $M$ from the pseudo-Riemannian Minkowski metric on $\mathbb{R}^{1+2}$.
-Since $M$ sits inside $\mathbb{R}^{1+2}$, any isometry $\varphi$ of $\mathbb{R}^{1+2}$ that fixes $M$ (satisfies $\varphi(M) = M$) will descend to an isometry of $M$. The group of linear isometries of $\mathbb{R}^{1+2}$ is a subgroup of $GL_3(\mathbb{R})$ called the Lorentz group and is denoted by $O(1,2)$. The subgroup of $O(1,2)$ fixing $M$ is called the orthochronous Lorentz group and is denoted by $O^{+}(1,2)$. It turns out that $O^{+}(1,2)$ is the full group of isometries of $M$ and $O^{+}(1,2)$ is isomorphic to $O(1,2)/O(1)$, hence the description you ask about.
-Besides the hyperboloid model, one can construct many other explicit models for the hyperbolic plane, show that they satisfy the definition given in the beginning and show explicitly that each pair of models is indeed isometric. In particular, this implies that the isometry groups of different models should be isomorphic. For example, in the Poincaré upper half-plane model, orientation preserving isometries are interpreted as Möbius transformations and the group of orientation preserving isometries is naturally identified with $PSL(2,\mathbb{R})$. Since the hyperboloid model and the upper half-plane model are isometric, the groups of orientation preserving isometries in both models should be isomorphic, and indeed $PSL(2,\mathbb{R}) \cong SO^{+}(1,2)$. Thus, when you read about different models for the hyperbolic plane, you may find different descriptions of the isometry groups but they all will be isomorphic.
-Finally, a non-trivial result states that indeed the definition stated in the beginning determines the space uniquely up to an isometry, so it's not a coincidence that all the models one usually deals with for the hyperbolic plane are isometric.<|endoftext|>
-TITLE: What does the case of $\operatorname{Spec}C^{\infty}(M)$ tell us about the relevance of scheme theory to general rings?
-QUESTION [8 upvotes]: I guess a lot of people with previous exposure to differential geometry have had this naive question pop out in their mind when studying schemes for the first time.
-The category of compact smooth real manifolds is contra-equivalent to the category of smooth rings on them $C^{\infty} (-)$. The inverse functor to the global section functor takes maximal ideals and makes sheaves out of the smooth rings. By a partition of unity argument the ring of global sections determines the sheaf, so we're good. (At least, I hope we are. It's not something I found written explicitly in any book).
-On the other hand, the famous result for schemes gives the equivalence $\mathsf {Aff} \cong (\mathsf{Ring})^{op}$. This suggests that the correct and uniform way to think about rings geometrically is studying the functor $\operatorname{Spec}$.
-There are many reasons why studying $\operatorname{Spec}C^{\infty}(M)$ is not so fruitful. Elaboration on this in answers would be welcome as well, though my question is a more philosophical one.
-
-Question: Given what we know about the unsuitability of scheme theory for the study of smooth rings, why should we believe that it's the right generalization for algebraic geometry over arbitrary rings?
-
-Personally (and I hope it's okay to express my opinion despite my ignorance), I think this is a good argument for studying general locally ringed spaces (of which schemes are a part, but not necessarily the center)... What do you think?
-
-REPLY [3 votes]: The point of scheme theory is not to study arbitrary commutative rings. Scheme theory was invented as a tool to answer geometric questions that people were ultimately asking about varieties.
-Here's an example. The prime spectrum of $\mathbb{F}_2^{\mathbb{N}}$ turns out to be quite complicated: it is the Stone–Čech compactification $\beta \mathbb{N}$, otherwise known as the space of ultrafilters on $\mathbb{N}$. This is an "ungeometric" answer: $\text{Spec } \mathbb{F}_2^{\mathbb{N}}$ is the coproduct, in the category of affine schemes, of $\mathbb{N}$ copies of $\text{Spec } \mathbb{F}_2$, so the geometric answer "should" be that this spectrum just looks like $\mathbb{N}$.
-We get the answer we expect by taking the coproduct in the category of schemes instead: in other words, the inclusion from affine schemes into schemes does not preserve infinite coproducts. So the passage from affine schemes to arbitrary schemes is not totally innocent: it does something to at least some colimits to make them more "geometric."<|endoftext|>
-TITLE: Is the homology class of a compact complex submanifold non-trivial?
-QUESTION [13 upvotes]: Let $X$ be a connected complex manifold (not necessarily compact).
-Let $C \subset X$ be a compact complex $k$-dimensional submanifold (for some $k>0$).
-Is it true, in this generality, that the homology class $[C] \in H_{2k} (X,\mathbb{Z})$ is non-trivial?
-EDIT - some motivating observations: a first striking fact in the study of complex manifolds is that there is no analogue of the Whitney embedding theorem for compact ones; indeed by the maximum modulus principle $\mathbb{C}^n$ has no compact complex submanifolds. I am not very familiar with complex manifolds of dimension $n>1$ (and of course in dimension $1$ this problem is not very interesting). The examples of compact complex submanifolds I have and can handle (as far as the above problem is concerned) are the following
-
-the first factor in the product $K \times X$ where $K$ is any compact complex manifold and $X$ any complex manifold
-the base of a vector bundle over a compact manifold $K$
-complex projective subspaces $\mathbb{CP}^k \subseteq \mathbb{CP}^n$
-
-and in these cases it is easy to see that I get something which is non-trivial in homology, by quite general facts not really related to complex geometry. Moreover I stumbled upon the fact that there exist many (non-algebraic) 2-dimensional tori without compact complex (1-dimensional) submanifolds, as discussed for instance here. This has boosted my impression that if we manage to find a compact complex submanifold, then it must be very special indeed, in some sense.
I would like to know if there is some counterexample to the sentence above, or if it can be proved by general methods in complex geometry. I am asking it in this generality also because I am not very familiar with Kähler or algebraic geometry, but of course I appreciate answers under the additional hypothesis that $X$ is compact/projective/Kähler/...
-
-REPLY [13 votes]: As Mike Miller points out in the comments, if $X$ is a Kähler manifold (not necessarily compact), and $C$ is a $k$-dimensional compact complex submanifold, then $i_*[C] \in H_{2k}(X, \mathbb{Z})$ is non-trivial (here $i : C \to X$ is the inclusion map and $[C] \in H_{2k}(C, \mathbb{Z})$ is the fundamental class of $C$). To see this, let $\omega$ be the Kähler form; then $\int_C\omega^k = \operatorname{Vol}(C)$ by Wirtinger's Theorem (actually, Wirtinger's Theorem is much stronger than this). Now note that $\int_C\omega^k$ is actually a pairing of homology and cohomology classes, namely
-$$\int_C\omega^k = \langle i_*[C], [\omega]^k\rangle.$$
-Keep in mind, this is a pairing of real homology and cohomology classes, not integral ones. Although $i_*[C] \in H_{2k}(X, \mathbb{Z})$, we only have $[\omega] \in H^2(X, \mathbb{R})$ (provided $X$ is compact, finding a Kähler metric with $[\omega]$ integral is equivalent to $X$ being projective). We're identifying $i_*[C] \in H_{2k}(X, \mathbb{Z})$ with its image under the map $H_{2k}(X, \mathbb{Z}) \to H_{2k}(X, \mathbb{R})$ induced by the inclusion $\mathbb{Z} \to \mathbb{R}$.
-If $i_*[C] \in H_{2k}(X, \mathbb{Z})$ were trivial, then its image in $H_{2k}(X, \mathbb{R})$ would also be trivial, in which case the pairing $\langle i_*[C], [\omega]^k\rangle$ would be zero. As $\operatorname{Vol}(C) > 0$, we therefore see that $i_*[C]$ is non-trivial.
-A common misconception with this argument is that if a class in $H_{2k}(X, \mathbb{Z})$ is non-zero, then its image in $H_{2k}(X, \mathbb{R})$ will also be non-zero. At no point of the argument did I make such a claim, which is good because it is false: $H_{2k}(X, \mathbb{Z})$ may have torsion which will necessarily be mapped to zero in $H_{2k}(X, \mathbb{R})$.
-
-As for the non-Kähler case, the result is no longer true. Let $X$ be the standard Hopf surface: $(\mathbb{C}^2\setminus\{(0,0)\})/\mathbb{Z}$ where the $\mathbb{Z}$-action is generated by the map $(z_1, z_2) \mapsto (2z_1, 2z_2)$. The image of $\mathbb{C}^*\times\{0\}$ under the natural projection $\pi : \mathbb{C}^2\setminus\{(0,0)\} \to X$ is
-$$C := \{[(w, 0)] : w \in \mathbb{C}^*\} \cong \mathbb{C}^*/\mathbb{Z}$$
-where the $\mathbb{Z}$-action is given by $w \mapsto 2w$. This is a one-dimensional compact complex submanifold of $X$, namely a torus. To see that the image of the fundamental class of $C$ is trivial in $H_2(X, \mathbb{Z})$, note that $X$ is diffeomorphic to $S^1\times S^3$, so by the Künneth Theorem, $H_2(X, \mathbb{Z}) = 0$.
-Combining the considerations in the Kähler case, together with this example in the non-Kähler case, Donu Arapura gave a nice example of a non-compact complex surface which is not Kähler.<|endoftext|>
-TITLE: Idea behind the definition of different ideal
-QUESTION [6 upvotes]: Let $L/K$ be an extension of number fields. Let $I$ be a fractional ideal in $L$ and
-$$I^*:=\{x\in L \mid \text{Tr}_{L/K}(xI)\subset \mathcal{O}_K\}.$$
-The different of $I$ is the following fractional ideal
-$$\mathcal{D}_{L/K}(I):=(I^*)^{-1}.$$
-I understand the importance of the different ideal in the study of ramification.
For example, we know that if $P$ is a prime in $\mathcal{O}_K$ and $P\mathcal{O}_L=Q^eI$, with $(Q, I)=1$, then $Q^{e-1}\mid \mathcal{D}_{L/K}(\mathcal{O}_L)$.
-But I don't understand what the idea behind its definition is. What (historically) led to that definition?
-
-REPLY [4 votes]: I don't think that the analogy with lattices in $\mathbb{R}^n$ is just a "similarity" as you say. On the contrary, since the trace pairing is a non-degenerate bilinear form when the field extension $E/F$ (not just $K/\mathbb{Q}$) is separable, the definition of the "dual" of a lattice (extending naturally the one given in K. Conrad's notes) is a general one, which is available as soon as one is given a non-degenerate bilinear form together with "integral structures" inside the fields involved (here, the existence of the rings of integers). From the point of view of the geometry of lattices, "the different ideal could be considered as a measure of how much $\mathcal{O}_E$ fails to be self-dual as a lattice in $E$" (op. cit.). As you point out, the interest of the different in number theory proper lies in Dedekind's theorem (= "central theorem" of loc. cit.) which characterizes ramification. The classical discriminant does the same job, but the single fact that the discriminant ideal is the norm of the different ideal shows that the second invariant is a finer one. For example, as you notice, it gives the ramification index in the case of tame ramification (the theory of wild ramification is more complicated; it requires bringing into play the so-called Hasse–Herbrand functions, see e.g. chapter IV of Serre's book "Corps Locaux"). Concerning the origin of the notion, K. Conrad suggests that it could be related to differentiation, and this may actually be the case if you think of Kähler differentials, which provide an adaptation of differential forms to arbitrary commutative rings or schemes. In chapter III, §7 of his book op. cit., Serre shows that the universal module $\Omega(\mathcal{O}_F,\mathcal{O}_E)$ of the $\mathcal{O}_F$-differentials of the ring $\mathcal{O}_E$ is a cyclic $\mathcal{O}_E$-module, the annihilator of which is the different of $E/F$. But since Kähler comes much later than Dedekind, I still wonder about the origin of the name "different".<|endoftext|>
-TITLE: What are the tightest known bounds for the number of groups of order $2048$?
-QUESTION [8 upvotes]: The number of groups of order $2048$ is unknown.
-
-What are the tightest known bounds (lower and upper bounds: I am interested in both) for the number of groups of order $2048$?
-
-I know the asymptotic formula for $p^k$, but I do not think that it gives a useful bound for $p^k=2048$. Somewhere, I read that a subset of the groups (but I do not remember what kind of subset) was calculated to obtain a lower bound.
-
-Can it be estimated how long it will take to determine the number?
-
-REPLY [3 votes]: According to the second paragraph of
-https://www.math.auckland.ac.nz/~obrien/research/gnu.pdf
-(which I posted in another of your questions yesterday)
-"[The number of groups of order 2048] is still not precisely known, but it strictly exceeds 1774274116992170, which is the exact number of groups of order 2048 that have exponent-2 class 2, and can confidently be expected to agree with that number in its first 3 digits."
-This is probably the best answer you can get easily.<|endoftext|>
-TITLE: Showing that a cubic extension of an imaginary quadratic number field is unramified.
-QUESTION [6 upvotes]: Let $\alpha^3-\alpha-1=0$, $K=\mathbb Q(\sqrt{-23})$, $K'=\mathbb Q(\alpha)$, and $L=\mathbb Q(\sqrt{-23},\alpha)$.
-Then I am asked to show that the field extension $L/K$ is unramified.
-I know that if $\mathfrak p\in\operatorname{Max}(\mathcal O_K)$ ramifies in $L$ then $\mathfrak p\mid\mathfrak d$ where $\mathfrak d$ is the discriminant ideal, i.e. the ideal of $\mathcal O_L$ generated by all $\operatorname{disc}(x_1,x_2,x_3)$ such that $(x_1,x_2,x_3)$ is a $K$-basis of $L$ and $x_1,x_2,x_3\in\mathcal O_L$.
-I know that $(1,\alpha,\alpha^2)$ is one such basis with discriminant $-23$, so if $\mathfrak p$ ramifies in $L$ then $23\in\mathfrak p$, and in $\mathcal O_K$ we have $(23)=(\sqrt{-23})^2$. Therefore the only candidate for $\mathfrak p$ is $(\sqrt{-23})$. If I could find a basis with discriminant not divisible by $23$ then I would be done, but that quickly turns messy.
-Now factoring $23$ in $\mathcal O_{K'}$ gives $(23)=(23,\alpha-3)(23,\alpha-10)^2$, so I would like to have two different prime ideals of $\mathcal O_L$ containing $(23,\alpha-10)$; then I will have three different prime ideals of $\mathcal O_L$ containing $\sqrt{-23}$ and I will be done.
-Alternatively I need to show that no prime ideal $\mathfrak q$ of $\mathcal O_L$ has $\sqrt{-23}\in\mathfrak q^2$.
-Any ideas?
-
-REPLY [3 votes]: Here's an approach quite different from what you had in mind, purely local and not ideal-theoretic.
-The field $K$ is ramified only at $23$, and since the $\Bbb Q$-discriminant of $k=\Bbb Q(\alpha)$ is of absolute value $23$ as well, the only possibility for ramification of $K'=Kk$ over $K$ is above the prime $23$, in other words at the unique prime of $K$ lying over the $\Bbb Z$-prime $23$. So we may think of localizing and completing.
-Calling $\,f(X)=X^3-X-1$, we have $f(3)=23$ and $f(10)=989=23\cdot43$. Indeed, over $\Bbb F_{23}$, we have $f(X)\equiv(X-3)(X-10)^2$. Let's examine $g(X)=f(X+10)=X^3+30X^2+299X+989=X^3+30X^2+13\cdot23X+43\cdot23$, which shows that $g$ has two roots of $23$-adic valuation $1/2$ (additive valuation, that is). Now let's go farther, and, calling $\sqrt{-23}=\beta$, look at
-\begin{align}
-h(X)=g(X+4\beta)&=X^3+(30+12\beta)X^2+(-805 + 240\beta)X+(-10051 - 276\beta)\\
-&=X^3+(30+12\beta)X^2+(-35\cdot23+240\beta)X+(-19\cdot23^2-12\cdot23\beta)\,.
-\end{align}
-Look at the $23$-adic valuations of the coefficients: $0$ for the degree-$3$ and degree-$2$ terms, $1/2$ for the linear term, and $3/2$ for the constant term. So the Newton polygon of $h$ has three segments of width one, slopes $0$, $1/2$, and $1$. Thus $h(X)=f(X+10+4\beta)$ has three roots all in $\Bbb Q_{23}(\sqrt{-23}\,)$, and therefore the unique prime of $K$ above $23$ splits completely in $K'$, and the extension is unramified.<|endoftext|>
-TITLE: Proof of $\sum\limits_{n=1}^{\infty} \frac{x^n \log(n!)}{n!} \sim x \log(x) e^x$ as $x \to \infty$
-QUESTION [11 upvotes]: Prove that
-$$\sum_{n=1}^{\infty} \frac{x^n \log(n!)}{n!} \sim x \log(x) e^x \,\,\,\text{as}\,\,\, x \to \infty$$
-and
-$$\sum_{n=1}^{\infty} \frac{(-x)^n \log(n!)}{n!} \to 0 \,\,\,\text{as}\,\,\, x \to \infty$$
-This question is related to my previous question. My heuristic approach is that the sum's major contribution comes from the $n\approx x$ term, so
-$$\sum_{n=1}^{\infty} \frac{x^n \log(n!)}{n!} \sim \frac{x^x\log(x!)}{x!}$$
-but using the Stirling formula twice leads to
-$$\sum_{n=1}^{\infty} \frac{x^n \log(n!)}{n!} \sim \frac{\sqrt{x}\log(x)}{\sqrt{2\pi}}e^x$$
-But this reasoning is flawed and I believe I should not ignore all the other terms.
(Which I believe accounts for the $\sqrt{2\pi x}$ factor.)
-What kind of approach may give the desired asymptotic behavior? Any kind of hint is welcome.
-
-REPLY [2 votes]: Your idea is right that the entries near $n=x$ matter the most. In particular, the entries with $n=x+O(\sqrt{x})$ dominate this sum.
-The following is not a complete answer, but I believe the holes can be filled in.
-Suppose that $N=x + \alpha \sqrt{x},$ where $\alpha$ is fixed. Then, by Stirling's formula
-\begin{align*}
-\frac{\ln(N!)}{N!} x^N &= \left(1+O\left(\frac{1}{\ln x}\right)\right) \frac{(x+\alpha \sqrt{x}) \ln x}{\sqrt{2 \pi x} (x+\alpha \sqrt{x})^{x+\alpha \sqrt{x}}}e^{x+\alpha \sqrt{x}} x^{x+\alpha \sqrt{x}} \\ &=\left(1+O\left(\frac{1}{\ln x}\right)\right) x (\ln x) e^x \left[e^{\alpha \sqrt{x}} \left( \frac{x}{x+\alpha \sqrt{x}}\right)^{x+\alpha \sqrt{x}} \frac{1}{\sqrt{2\pi x}} \right].
-\end{align*}
-Now, note that
-\begin{align*}
-\left( \frac{x}{x+\alpha \sqrt{x}}\right)^{x+\alpha \sqrt{x}}&=\exp \left( -(x+\alpha \sqrt{x}) \cdot \ln\left(1+\frac{\alpha}{\sqrt{x}} \right)\right) \\&=\exp\left( -(x+\alpha \sqrt{x}) \left( \frac{\alpha}{\sqrt{x}} - \frac{\alpha^2}{2x} + O(x^{-3/2}) \right) \right) \\&= \exp \left( -\alpha \sqrt{x} - \alpha^2/2 + O(x^{-1/2}) \right).
-\end{align*}
-Therefore
-$$
-\frac{\ln N!}{N!}x^N = \left(1+O\left(\frac{1}{\ln x}\right)\right) [x \ln(x) e^x] \frac{e^{-\alpha^2/2}}{\sqrt{2\pi}} \frac{1}{\sqrt{x}}
-$$
-Now the sum of these $\frac{e^{-\alpha^2/2}}{\sqrt{2\pi}} \frac{1}{\sqrt{x}}$ over $N \in [x-C \sqrt{x}, x+C \sqrt{x}]$ ($C$ fixed w.r.t. $x$), is a Riemann sum for the integral $\int_{-C}^C \frac{e^{-t^2/2}}{\sqrt{2\pi}}dt$. This integral can be arbitrarily close to 1 for $C$ large enough.
-It remains to be shown that for any $\epsilon >0$ (fixed w.r.t. $x$),
-$$
-\sum_{N \notin [x-C \sqrt{x}, x+C \sqrt{x}]} \frac{\ln(N!)}{N!} x^N < \epsilon \, x \ (\ln x) \ e^x,
-$$
-for $C$ large enough.<|endoftext|>
-TITLE: Does there exist an $n$ such that all groups of order $n$ are Abelian?
-QUESTION [12 upvotes]: I know that all groups of order $\leq$ 5 are Abelian and all groups of prime order are Abelian. Are there any other examples? If so, is there something special about the orders of these groups?
-
-REPLY [3 votes]: The terminology you're looking for is:
-
-Definition. A number $n \in \mathbb{N}$ is said to be
-
-cyclic iff all groups of order $n$ are cyclic.
-abelian iff all groups of order $n$ are abelian.
-nilpotent iff all groups of order $n$ are nilpotent.
-
-
-Using basic group theory, we see that these are related as follows:
-
-Proposition. For natural numbers, we have: $$\mbox{ prime } \rightarrow \mbox{ cyclic } \rightarrow \mbox{ abelian } \rightarrow \mbox{ nilpotent }$$
-
-As Peter explains, your question is essentially: does there exist an abelian number strictly greater than $5$ that isn't prime? Yes, $15$ does the trick. In fact, $15$ has the stronger property of being a cyclic number strictly greater than $5$ that isn't prime.
-You can have a look at this mathoverflow post for more information.<|endoftext|>
-TITLE: Evaluate $\lim_{n\to \infty}{\sqrt[n]\frac{(2n)!}{n^n\times{n!}}}$
-QUESTION [5 upvotes]: $$\lim_{n\to \infty}{\sqrt[n]\frac{(2n)!}{n^n\times{n!}}}$$
-It is a sequence and $n$ is natural.
-It looks like I should use $\lim_{n\to \infty}{\sqrt[n]{a_n}}=x$ but I don't know how.
Does it mean that $a_n=\frac{(2n)!}{n^n\times{n!}}$, and then I work from there, or is $a_n=\sqrt[n]\frac{(2n)!}{n^n\times{n!}}$?
-I have never used this before and I am not sure what to do.
-
-REPLY [4 votes]: Let $a_n = \dfrac{(2n)!}{n^n{n!}}$. Then
-$\displaystyle
-\lim_{n\to \infty}{\sqrt[n]{a_n}}
-=
-\lim_{n\to \infty}\dfrac{a_{n+1}}{a_n}
-$
-if this limit exists.
-Now
-$$
-\dfrac{a_{n+1}}{a_n}
-=
-\dfrac{(2(n+1))!}{(n+1)^{n+1}{(n+1)!}}
-\
-\dfrac{n^n{n!}}{(2n)!}
-=
-\dfrac{(2n+2)(2n+1)}{(n+1)(n+1)}
-\
-\left(\dfrac{n}{n+1}\right)^n
-\to
-\dfrac{4}{e}
-$$<|endoftext|>
-TITLE: Prove that zeros of f are poles of 1/f
-QUESTION [5 upvotes]: Let $f$ be analytic at $z=z_0$ and have a zero of $n$th order at $z=z_0$. Then $1/f(z)$ has a pole of $n$th order at $z=z_0$.
-I want to prove this, and for this I expand $f(z)$ as a power series,
-\begin{align*}
-f(z) = \sum_{k=0}^\infty c_k (z-z_0)^k
-\end{align*}
-Since we know that $(z-z_0)$ is zero at $z=z_0$ all the way up to order $n$, i.e. $(z-z_0)^k = 0$ all the way up to $k=n$ (since it can be rewritten as $(z-z_0)$, and by the definition of a zero $c_0=0$ such that $f(z_0) = 0$), it follows that
-\begin{align*}
-\frac{1}{f(z)} = \frac{1}{\sum_{k=0}^\infty c_k (z-z_0)^k},
-\end{align*}
-such that $1/f \rightarrow \infty$ about the same point. Is this proof complete enough or am I missing something?
-
-REPLY [6 votes]: Yeah, you're probably going to kick yourself. That expansion doesn't prove anything, for a simple reason:
-You're starting these from $k = 0$ but they should start from $k = n$.
-Done.
-Furthermore an easier way to write this is $f(z) = (z - z_0)^n g(z)$ for some analytic $g$ with $g(z_0) \ne 0,$ and then you'll want to say that $1/f(z) = (z - z_0)^{-n} h(z)$ for $h(z) = 1/g(z)$ also analytic in this neighborhood with $h(z_0) \ne 0$, so the lowest term in the Laurent series is $h(z_0) / (z - z_0)^n,$ and the pole exists and is of order $n$.<|endoftext|>
-TITLE: The domain of $x^x$?
-QUESTION [9 upvotes]: This one looks simple, but apparently there is something more to it.
-$$f{(x)=x^x}$$
-I read somewhere that the domain is $\Bbb R_+$, a friend said that $x\lt-1, x\gt0$...
-I'm really confused, because I don't understand why the domain isn't just all the real numbers.
-According to any grapher online the domain is $\Bbb R_+$.
-Any thoughts on the matter?
-Can someone explain what I am missing?
-
-REPLY [6 votes]: Split it into cases:
-
-When $x=p/q$ where $p\in \mathbb Z,q\in\mathbb N_{>1},p\ne0,\gcd(p,q)=1$, then:
-$$x^x=\left(\frac{p}{q}\right)^\frac{p}{q}=\sqrt[q]{\left(\frac{p}{q}\right)^p}$$
-
-
-when $p<0$ then
- $$x^x=\sqrt[q]{\left(-\frac{q}{|p|}\right)^{|p|}}$$
-if $p$ is even, then $\left(-\frac{q}{|p|}\right)^{|p|}$ is positive, otherwise it's negative and the root doesn't exist for even $q$.
-when $p>0$ then
- $$x^x=\sqrt[q]{\left(\frac{|p|}{q}\right)^{|p|}}$$
-and $\left(\frac{|p|}{q}\right)^{|p|}$ is always positive.
-
-When $x\in\mathbb Z$ the value $x^x$ always exists, except at $x=0$.
-When $x$ is irrational then the only way to define $x^x$ is $$x^x=\exp(x\ln x)$$ and for real numbers we need $x>0$.
-
-Summarizing, $x^x$ exists for all
-
-$x\in\mathbb R_+$
-$x\in\mathbb Z_-$
-$x\in\left\{ -\frac{p}{q}\in \mathbb Q\colon p,q\in\mathbb N_+ \land \gcd(p,q)=1\land q\text{ is odd}\right\}$
-
-Why we don't see the negative part of the plot
-
-Technical reason: $x^x$ in programs is usually defined as exp(x*log(x)) and the function log(x) is not defined for negative x.
-Mathematical reason: the set of negative $x$ for which $x^x$ exists is countable. Countably many points are not enough to form a curve.
-
-This function may be plotted with points for negative $x$.<|endoftext|>
-TITLE: Exact Sum of Series
-QUESTION [24 upvotes]: I am a tutor at university, and one of my students brought me this question, which I was unable to work out. It is from a past final exam in calculus II, so any response should be very basic in what machinery it uses, although it may be complicated. The series is: $$\sum \limits_{n=1}^{\infty} \frac{(-1)^n}{(2n+3)(3^n)}.$$
-Normally I'm pretty good with infinite series. It is clear enough to me that this sum converges. None of the kind of obvious rearrangements yielded anything, and I couldn't come up with any smart tricks in the time we had. I put it into Wolfram and got a very striking answer indeed. Wolfram reports the value to be $\frac{1}{6}(16-3\sqrt{3} \pi)$. It does this using something it calls the "Lerch Transcendent" (link here about Lerch). After looking around, I think maybe I can understand how the summing is done, if you knew about this guy and the special values it takes.
-But how could I do it as a calculus II student, never having seen anything like this monstrosity before?
-
-REPLY [22 votes]: Starting from the geometric series
-$$\frac{1}{1+x}=\sum_{n=0}^{\infty }(-1)^nx^n$$
-$$ \frac{1}{1+x}-1=\sum_{n=1}^{\infty }(-1)^nx^n$$
-$$ \frac{-x}{1+x}=\sum_{n=1}^{\infty }(-1)^nx^n$$
-$x\rightarrow x^2$
-$$ \frac{-x^2}{1+x^2}=\sum_{n=1}^{\infty }(-1)^nx^{2n}$$
-multiply by $x^2$
-$$ \frac{-x^4}{1+x^2}=\sum_{n=1}^{\infty }(-1)^nx^{2n+2}$$
-$$ \int_{0}^{x}\frac{-x^4}{1+x^2}dx=\int_{0}^{x}\sum_{n=1}^{\infty }(-1)^nx^{2n+2}dx$$
-$$x-x^3/3-\tan^{-1}(x)=\sum_{n=1}^{\infty }\frac{(-1)^nx^{2n+3}}{2n+3}$$
-divide by $x^3$
-$$1/x^2-1/3-\tan^{-1}(x)/x^3=\sum_{n=1}^{\infty }\frac{(-1)^nx^{2n}}{2n+3}$$
-now let $x=\frac{1}{\sqrt{3}}$
-$$\sum_{n=1}^{\infty }\frac{(-1)^n}{(2n+3)3^n}=\frac{8}{3}-\frac{\sqrt{3}\pi}{2}$$<|endoftext|>
-TITLE: Do the polynomials $(1+z/n)^n$ converge compactly to $e^z$ on $\mathbb{C}$?
-QUESTION [6 upvotes]: The question is
-
-Do the polynomials $p_n(z)=(1+z/n)^n$ converge compactly (or uniformly on compact subsets) to $e^z$ on $\mathbb{C}$?
-
-I thought about expanding
-$$p_n(z)=\sum_{k=0}^n a_k^{(n)}z^k$$
-where
-$$a_k^{(n)}=\binom{n}{k}\frac{1}{n^k}=\frac{1}{k!}\prod_{j=0}^{k-1}\left(1-\frac{j}{n}\right)$$
-and trying to show that $\frac{1}{k!}-a_k^{(n)}$ decreases sufficiently fast on any closed ball. That is, I tried to show
-$$\lim_{n\rightarrow\infty}\max_{z\in\overline{B_0(A)}}\left|\sum_{k=0}^n\frac{z^k}{k!}-p_n(z)\right|=0$$
-for any fixed $A>0$, but I had difficulty with this approach.
-Any help is appreciated.
-
-REPLY [2 votes]: Here's a less computational way to get the result.
-Lemma 1: If $f_n: E \to \mathbb C $ are uniformly bounded and converge uniformly to $f$ on $E,$ then $e^{f_n} \to e^f$ uniformly on $E.$ Proof: Easy.
-Lemma 2: Let $\log $ denote the principal value logarithm. Then there is a constant $C$ such that for $0<|u|< 1/2,$
-$$\left|\frac{\log (1+u)}{u}-1\right| \le C|u|.$$
-Proof: Inside the absolute values on the left we have a function that extends to be analytic in $D(0,1)$ with value $0$ at $0.$ The result follows easily.
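-Incidentally, a quick numerical scan over the disc (a Python sketch of mine; random sampling, so only suggestive) indicates that one can even take $C=1$ in Lemma 2:
-import numpy as np
-
-# Estimate the best constant C in |log(1+u)/u - 1| <= C|u| on 0 < |u| < 1/2,
-# using the principal branch (np.log is principal for complex arguments).
-rng = np.random.default_rng(0)
-u = rng.uniform(-0.5, 0.5, 50000) + 1j * rng.uniform(-0.5, 0.5, 50000)
-u = u[(np.abs(u) > 1e-3) & (np.abs(u) < 0.5)]
-ratio = np.abs(np.log(1 + u) / u - 1) / np.abs(u)
-print(ratio.max())   # about 0.77 in this sample, largest near u = -1/2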
So let $R>0.$ For $0<|z|<R$ and $n>2R,$ Lemma 2 shows
-$$|n\log(1+z/n) - z| = |z|\,\left|\frac{\log(1+z/n)}{z/n}-1\right| \le
-R\cdot C(R/n) = CR^2/n.$$
-So we have uniform convergence of $n\log(1+z/n)$ to $z$ on $D(0,R),$ at least along the sequence of $n$'s greater than $2R.$ Because these functions are uniformly bounded on $D(0,R),$ Lemma 1 shows
-$$\tag 1 \exp (n\log(1+z/n)) \to e^z$$
-uniformly on $D(0,R).$ Since the left side of $(1)$ equals $[\exp (\log(1+z/n))]^n = p_n(z),$ we're done.<|endoftext|>
-TITLE: Is every locally compact Hausdorff space paracompact?
-QUESTION [11 upvotes]: It seems likely that for any open cover, we can construct a locally finite refinement using the local compactness of the space. I can't figure out how to work the construction though, and I'm not yet convinced that there is no counterexample.
-
-REPLY [3 votes]: $\pi$-Base is an online encyclopedia in the spirit of Steen and Seebach's Counterexamples in Topology. According to its database, the following spaces are locally compact Hausdorff but not paracompact. (You can view the search result to learn more.)
-$[0,\Omega) \times I^I$
-Deleted Tychonoff Plank
-Open Ordinal Space $[0,\Omega)$
-Rational Sequence Topology
-The Long Line
-The Open Long Line
-Thomas' Plank<|endoftext|>
-TITLE: How is $-16/4i$ equal to $4i$?
-QUESTION [6 upvotes]: I came across a problem: $-16/4i$. Every time I put it into a calculator, it comes out as $4i$, but when I try to solve it I get $-4i$, because of the negative sign in front of the $16$.
-
-REPLY [2 votes]: Another way of doing it is:
-$\frac{-16}{4i}$. Then reduce to get $\frac{-4}{i}$. We know that $i^2=-1$, therefore we can make it $\frac{4i^2}{i}$, and by reducing again we get $4i$.<|endoftext|>
-TITLE: Show that $\sup_n \mathbb{E}(|S_n|)<\infty$ implies $\mathbb{E} \left( \sup_n |S_n| \right)<\infty$ for $S_n=\sum_{k=1}^n X_k$ with $X_k$ iid
-QUESTION [9 upvotes]: Let $S_n=\sum^n_{k=1}X_k$, where the $X_k$'s are mutually independent. Suppose the $X_k$ are integrable, and $\sup_nE|S_n|<\infty$. Show that $E(\sup_n|S_n|)<\infty$.
-
-I have shown Ottaviani's inequality:
-$P(\max_{k\leq n}|S_k|\geq t+s)\leq P(|S_n|\geq t)+P(\max_{k\leq n}|S_k|\geq t+s)\max_{k\leq n}P(|S_n-S_k|>s)$.
-I think this should be helpful; however, I don't know how.
-
-REPLY [4 votes]: By the triangle inequality,
-$$\mathbb{P}(|S_n-S_k| > s) \leq \mathbb{P}(|S_k|>s/2) + \mathbb{P}(|S_n|>s/2).$$
-Applying Markov's inequality yields
-$$\begin{align*} \max_{k \leq n} \mathbb{P}(|S_n-S_k|>s) \leq 2 \max_{k \leq n} \mathbb{P}(|S_k|>s/2) \leq \frac{4}{s} \max_{k \leq n} \mathbb{E}(|S_k|). \end{align*}$$
-If we choose $s_0$ sufficiently large, then we get for all $s \geq s_0$
-$$\begin{align*}\max_{k \leq n} \mathbb{P}(|S_n-S_k|>s)&\leq \max_{k \leq n} \mathbb{P}(|S_n-S_k|>s_0) \\ &\leq \frac{4}{s_0} \sup_{n \in \mathbb{N}} \mathbb{E}(|S_n|) =: c<1. \tag{1} \end{align*}$$
-Using $(1)$ and Ottaviani's inequality for $s=t$, we find
-$$\mathbb{P} \left( \max_{k \leq n} |S_k| \geq 2s \right) \leq \mathbb{P}(|S_n| \geq s) + c \mathbb{P} \left( \max_{k \leq n} |S_k| \geq 2s \right),$$
-i.e. (as $c \in (0,1)$),
-$$\mathbb{P} \left( \max_{k \leq n} |S_k| \geq 2s \right) \leq \frac{1}{1-c} \mathbb{P}(|S_n| \geq s). \tag{2}$$
-Note that this inequality holds for all $s \geq s_0$ and all $n \in \mathbb{N}$. Now recall that
-$$\mathbb{E}(X) = \int_0^{\infty} \mathbb{P}(X \geq r) \, dr \tag{3}$$
-holds for any non-negative random variable $X$.
Combining this identity with $(2)$, we get
-$$\begin{align*} \mathbb{E} \left( \max_{k \leq n} |S_k| \right) &\stackrel{(3)}{=} \int_0^{\infty} \mathbb{P} \left( \max_{k \leq n} |S_k| \geq r \right) \, dr \\ &\stackrel{(2)}{\leq} 2s_0 + \frac{1}{1-c} \int_{2s_0}^{\infty} \mathbb{P}(|S_n| \geq r/2) \, dr \\ &\stackrel{(3)}{\leq} 2s_0 + \frac{2}{1-c} \mathbb{E}(|S_n|). \tag{4} \end{align*}$$
-Finally, by the monotone convergence theorem,
-$$\mathbb{E} \left( \sup_{n \in \mathbb{N}} |S_n| \right) = \sup_{n \in\mathbb{N}} \mathbb{E} \left( \max_{k \leq n} |S_k| \right) \stackrel{(4)}{\leq} 2s_0 + \frac{2}{1-c} \sup_{n \in \mathbb{N}} \mathbb{E}(|S_n|) < \infty.$$<|endoftext|>
-TITLE: Is the scalar multiple of an eigenvector also an eigenvector for a particular eigenvalue?
-QUESTION [6 upvotes]: I'm working on a problem from my textbook and found that $\left(\frac{1}{2}, \frac{1}{2}, 1\right)$ is an eigenvector for a particular eigenvalue of $4$.
-The textbook solution says that the answer is $(1, 1, 2)$ which is just $2 \times \left(\frac{1}{2}, \frac{1}{2}, 1\right)$
-
-REPLY [12 votes]: Yes it is. Here is a short proof.
-Take $Ax = \lambda x$ and multiply by a scalar $k \ne 0$ (the zero vector is not an eigenvector): we get $kAx = k \lambda x$.
-But $kAx = A(kx)$ and $k \lambda x = \lambda (kx)$, so we see by definition that $kx$ is an eigenvector:
-$$A(kx) = \lambda (kx)$$<|endoftext|>
-TITLE: Proving the intersection of distinct eigenspaces is trivial
-QUESTION [7 upvotes]: Suppose $\lambda_1$ and $\lambda_2$ are different eigenvalues of $T$. Prove $E_{\lambda_1} \cap E_{\lambda_2}= \{\vec0\}$.
-I have a basic idea of what to do. Since the eigenvalues are distinct, doesn't that mean the bases of the two spaces are linearly independent of each other, so that no vector in one is in the span of the basis of the other? I'm just looking for how to formalize these ideas.
-
-REPLY [6 votes]: Here's one way to look at it: Suppose $\vec x$ is in the intersection of the eigenspaces corresponding to the two eigenvalues $\lambda_1$ and $\lambda_2$, where $\lambda_1\ne\lambda_2$. Then
-$$
-\lambda_1 \vec x = T\vec x = \lambda_2\vec x,
-$$
-so
-$$
-(\lambda_1 - \lambda_2) \vec x = \vec 0.
-$$
-Since $\lambda_1-\lambda_2\ne0$, the only way one can have $(\lambda_1 - \lambda_2) \vec x = \vec 0$ is if $\vec x=\vec0$.<|endoftext|>
-TITLE: just another $\pi$ formula
-QUESTION [8 upvotes]: I've found this $\pi$ formula:
-$$
-\pi =\lim_{n\to \infty }4\sum_{k=1}^{n} \frac{2 n^3 (1-2 k)^2 \left((k-1) k+n^2\right)}{\left(k^2+n^2\right)^2\left((k-1)^2+n^2\right)^2}
-$$
-What is interesting is that the formula has a very simple geometric explanation. So it could have been found a long time ago, long before the discovery of series. So this is also my question: there are so many formulas for $\pi$ that it is hard to say if it is a new one or not. Even if the convergence is slow, it's much better than the classic ArcTan series for $x=1$:
-For example, with $\pi=3.14159...$: for $n=100$, the formula gives $3.14144$ against $3.15149$ for the ArcTan series. So 3 correct digits against 1 for the ArcTan series.
-Does anybody know?
-EDIT1: in fact, the same kind of formula can be used to compute $\text{arccos}(\cdot)$, $\text{arcsin}(\cdot)$ and $\text{arctan}(\cdot)$ formulas based on the area of the angle. See my other post Riemann sum formulas for $\text{acos}(x)$, $\text{asin}(x)$ and $\text{atan}(x)$.
-EDIT2: in fact, both formulas given by Lucian are almost equivalent to the one I gave.
Indeed, we can rewrite the formula above as:
-$$
-\frac{\pi }{8}=\lim_{n\to \infty }\sum_{k=1}^{n} \frac{n^3 (2 k-1)^2 \left((k-1) k+n^2\right)}{\left(k^2+n^2\right)^2\left((k-1)^2+n^2\right)^2}
-$$
-Now, for large $n$, we can assume that $k-1\approx k$ and so:
-$$
-\frac{\pi }{8}=\lim_{n\to \infty }\sum_{k=1}^{n} \frac{n^3 (2 k-1)^2 \left(k^2+n^2\right)}{\left(k^2+n^2\right)^2\left(k^2+n^2\right)^2}=\lim_{n\to \infty }\sum_{k=1}^{n} \frac{n^3 (2 k-1)^2 }{\left(k^2+n^2\right)^3}
-$$
-and again assuming that $2k-1\approx 2k$:
-$$
-\frac{\pi }{8}=\lim_{n\to \infty }\sum_{k=1}^{n} \frac{n^3 (2 k)^2 }{\left(k^2+n^2\right)^3}\text{ },\text{thus}\text{ }\frac{\pi }{32}= \lim_{n\to \infty }\sum_{k=1}^{n}\frac{n^3 k^2 }{\left(k^2+n^2\right)^3}
-$$
-But with each simplification, the formula converges more slowly...
-
-REPLY [7 votes]: Precision such as the one employed in this expression is quite a bit overkill. One could simply
-have asked for a proof of $~\displaystyle\lim_{n\to\infty}~\sum_{k=1}^n\dfrac{n^3~(2k-1)^2}{(n^2+k^2)^3}=\dfrac\pi8$, which, come to think of it, looks
-suspiciously similar to a simple Riemann sum...<|endoftext|>
-TITLE: Number of $n^2\times n^2$ permutation matrices with a 1 in each $n\times n$ subgrid
-QUESTION [9 upvotes]: I found the following question in a paper I was trying to solve:
-The following figure shows a $3^2 \times 3^2$ grid divided into $3^2$ subgrids of size $3 \times 3$. This grid has $81$ cells, $9$ in each subgrid.
-
-Now consider an $n^2 \times n^2$ grid divided into $n^2$ subgrids of size $n \times n$. Find the number of ways in which you can select $n^2$ cells from this grid such that there is exactly one cell coming from each subgrid, one from each row and one from each column.
-My try:
-Since we have $n^2$ rows, $n^2$ columns and $n^2$ subgrids in total, we have to choose one and only one cell from each of them. Let's choose them one at a time. We can choose the first cell in $n^4$ many ways. Then, we'll have to avoid the subgrid, the column and the row that we've chosen the first one from when choosing the second cell. So, we have $n^4-n^2-2n(n-1)$ choices. We can continue this to get the total number of possible ways. But, I think there's a hole. Say, we've chosen the first cell from the subgrid of the up-left corner and the second from the subgrid just right to it, so that it doesn't violate any rules. Then, when finding the number of ways we can choose the third cell, we would have subtracted some of the cells twice. I think you get it. Please, if anyone can help me solve this problem, it'd be greatly appreciated.
-
-REPLY [3 votes]: Using the counting system outlined in the comments, we can associate to each subgrid a positive integer: the number of choices in that subgrid. It's fairly clear that the number is independent of the exact method of getting to that subgrid: however you do it, subgrid $(i,j)$ (in the usual matrix suffix notation) has associated with it $(n+1-i)(n+1-j)$ choices.
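-Before writing the general pattern down, a brute-force check for $n=2$ (a Python sketch of mine) is reassuring: it returns $16$, which matches the product $4\cdot2\cdot2\cdot1$ of these subgrid counts.
-from itertools import permutations
-
-# n = 2: a 4x4 grid split into four 2x2 subgrids. Select 4 cells, one per
-# row and column (a permutation), and demand one selected cell per subgrid.
-n = 2
-count = 0
-for perm in permutations(range(n * n)):      # perm[r] = chosen column in row r
-    hits = [[0] * n for _ in range(n)]
-    for r, c in enumerate(perm):
-        hits[r // n][c // n] += 1
-    if all(h == 1 for row in hits for h in row):
-        count += 1
-print(count)                                 # prints 16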
Now the matrices go as follows:
-\begin{eqnarray*}
-n=1 &\mapsto& \begin{pmatrix}1\end{pmatrix}\\
-n=2 &\mapsto& \begin{pmatrix}4&2\\2&1\end{pmatrix}\\
-n=3 &\mapsto& \begin{pmatrix}9&6&3\\6&4&2\\3&2&1\end{pmatrix}\\
-&\vdots&\\
-n=k&\mapsto&
-\begin{pmatrix}
-k^{2} & k(k-1) & k(k-2) & \ldots & 2k & k\\
-k(k-1) & (k-1)^{2} & (k-1)(k-2) & \ldots & 2(k-1) & k-1\\
-k(k-2) & (k-1)(k-2) & (k-2)^{2} & \ldots & 2(k-2) & k-2\\
-\vdots & \vdots & \vdots & \ddots & \vdots & \vdots \\
-2k & 2(k-1) & 2(k-2) & \ldots & 4 & 2\\
-k & k-1 & k-2 & \ldots & 2 & 1
-\end{pmatrix}\\
-&\vdots&
-\end{eqnarray*}
-For each $n$, the answer to the problem is the product of all the entries in the corresponding matrix; denote each of these numbers by $P_{n}$. Now since each matrix contains the preceding matrix as a "submatrix", it is clear that we can find a recursive formula: in particular, some thought gives
-\begin{eqnarray*}
-P_{n} & = & P_{n-1}n^{2}\prod_{i=1}^{n-1}(in)^{2}\\
-& = & P_{n-1}n^{2n}[(n-1)!]^{2},
-\end{eqnarray*}
-with the initial condition $P_{1}=1$.
-Now for each $n\in\mathbb{N}$ let $f(n) = (n!)^{2n}$. Clearly $P_{1}=f(1)$. Further, suppose that, for some $n\in\mathbb{N}$, we know that $P_{n-1} = f(n-1)$; then we have
-\begin{eqnarray*}
-P_{n} & = & P_{n-1}n^{2n}[(n-1)!]^{2}\\
-& = & [(n-1)!]^{2(n-1)}n^{2n}[(n-1)!]^{2}\\
-& = & (n!)^{2n}\\
-& = & f(n).
-\end{eqnarray*}
-By induction, $P_{n} = (n!)^{2n}$ for all $n\in\mathbb{N}$.<|endoftext|>
-TITLE: Please, help to identify this numerical constant
-QUESTION [6 upvotes]: I'm trying to find an answer to this question. Let
-$K(k)$ be the elliptic integral of the first kind and $K'=K(\sqrt{1-k^2})$.
-According to Abel's theorem (see this link) we know that if $\frac{K'}{K}=\frac{a+b\sqrt{n}}{c+d\sqrt{n}}$ where $a,b,c,d,n$ are integers, then $k$ is the root of an algebraic equation with integer coefficients. (As I understand it, by algebraic equation with integer coefficients they mean a polynomial with integer coefficients?) So I solved the equation
-$$
-\displaystyle \frac{K(\sqrt{1-k^2})}{K(k)}=\sqrt{2}-1
-$$
-numerically with 200-digit accuracy and found the corresponding $k$:
-
-$$
-k=0.995942004483485267626221391206685115425856878468293704688032877263852669\\384783141641717390148240933985687938743309287701383005201421919573984016706\\406957193055047393475454459046127151355572762924
-$$
-
-Here is the Mathematica code I used:
-k := Sqrt[m /. FindRoot[EllipticK[1 - m]/EllipticK[m] - Sqrt[2] + 1 == 0, {m, 1}, WorkingPrecision -> 200]];
-N[k, 200]
-(As was pointed out below by ccorn, EllipticK[$k^2$] $= K(k)$.)
-Then I tried to identify this algebraic constant with the Inverse Symbolic Calculator, but it couldn't identify it. Is there some other way to identify this equation with integer coefficients?
-
-REPLY [6 votes]: I think Mathworld's explanation of Abel's Theorem is incomplete.
-Part I.
The Mathematica command is,
-$$\frac{K'(k)}{K(k)}=\frac{\text{EllipticK[1-ModularLambda[}\tau]]}{\text{EllipticK[ModularLambda[}\tau]]}=\sqrt{2}-1\tag1$$
-where the argument $\tau$ is,
-$$\color{brown}{\tau = \sqrt{-2}-\sqrt{-1}}\tag2$$
-Thus, we get your constant,
-$$k = \sqrt{\lambda(\tau)}=\sqrt{\text{ModularLambda[}\tau]}=0.9959420044834\dots\tag3$$
-Other expressions are,
-$$\big(\lambda(\tau)\big)^{1/8} = \frac{\sqrt{2}\,\eta(\tfrac{\tau}{2})\,\eta^2(2\tau)}{\eta^3(\tau)} = \left(\frac{\vartheta_2(0,q)}{\vartheta_3(0,q)}\right)^{1/2} = \cfrac{\sqrt{2}\,q^{1/8}}{1+\cfrac{q}{1+q+\cfrac{q^2}{1+q^2+\cfrac{q^3}{1+q^3+\ddots}}}}$$
-with the nome $q = \exp(i \pi \tau)$, Dedekind eta function $\eta(\tau)$, and Jacobi theta function $\vartheta_n(0,q)$, both built into Mathematica. See also this post.
-Part II. Now as to whether $k$ given by $(3)$ is algebraic, your question is equivalent to asking if the eta quotient,
-$$x =\frac{\eta(2\tau)}{\eta(\tau)}\tag4$$
-is algebraic. If $\tau$ is an imaginary quadratic $\tau_2$, then the answer is yes. However, your $\tau$ is an imaginary quartic root $\tau_4$, a root of,
-$$\color{brown}{\tau^4+6\tau^2+1 = 0}\tag5$$
-Conclusion: I think Abel's Theorem covers only at most $\tau_2$, not $\tau_4$, and Mathworld forgot(?) that crucial detail. Re this comment in an MO post.<|endoftext|>
-TITLE: Do infinite dimensional Hermitian operators admit a complete basis of eigenvectors?
-QUESTION [9 upvotes]: I'm currently taking a quantum mechanics course. We have proven that hermitian operators always have real eigenvalues, that we can choose the eigenvectors to be orthonormal, and that finite dimensional hermitian operators are diagonalizable (i.e., admit a complete basis of eigenvectors). Can this last result be generalized to the infinite dimensional case? The standard proof seems to use induction on the dimension of the operator, so this proof certainly doesn't carry over.
-
-REPLY [9 votes]: Yes and no. No, because hermitian operators need not have eigenvalues at all! However, if you assume that your hermitian operator is compact, then the result is true.
-The correct approach to generalising this fact to infinite dimensions is by using the spectral theorem, where you replace summation with respect to eigenvalues corresponding to orthogonal eigenspaces by integration with respect to the spectral measure.<|endoftext|>
-TITLE: A product of smooth manifolds together with one smooth manifold with boundary is a smooth manifold with boundary
-QUESTION [5 upvotes]: Suppose $M_1, \dots, M_k$ are smooth manifolds and $N$ is a smooth manifold with boundary. Then how do I see that $M_1 \times \dots \times M_k \times N$ is a smooth manifold with boundary, and $$\partial(M_1 \times \dots \times M_k \times N) = M_1 \times \dots \times M_k \times \partial N?$$
-
-REPLY [5 votes]: Using the simpler result that finite products of smooth (boundaryless) manifolds are smooth manifolds, your problem can be reduced to the case where $k=1$.
-Now, given any point $\left(x,y\right) \in M_{1} \times N$, there exists a chart $\left(U,\phi\right)$ at $x$ in the atlas of $M_{1}$ and a chart $\left(V,\psi\right)$ at $y$ in the atlas of $N$. Now, $U \times V$ is a neighborhood of $\left(x,y\right)$ in the product topology, and the map $\phi \times \psi$ is a homeomorphism from $U\times V$ to $\mathbb{R}^{m} \times \mathbb{H}^{n}$.
Noting that $\mathbb{H}^{n} := \left\{\left(x_{1},\ldots,x_{n}\right)\in\mathbb{R}^{n} | x_{n} \geq 0\right\}$, we have $\mathbb{R}^{m} \times \mathbb{H}^{n} = \mathbb{H}^{m+n}$. Use this to show that $M_{1} \times N$ is a manifold with boundary.
-For the second part, given any point $\left(x,y\right)$, either $y \in \partial N$ (that is, $\psi\left(y\right) \in \partial \mathbb{H}^{n}$) or $y \notin \partial N$ (that is, $\psi\left(y\right) \notin \partial \mathbb{H}^{n}$). Now, $\partial \mathbb{H}^{n} := \left\{\left(x_{1},\ldots,x_{n}\right)\in\mathbb{R}^{n} | x_{n} = 0\right\}$. Thus, $\partial \left(\mathbb{R}^{m} \times \mathbb{H}^{n}\right) = \mathbb{R}^{m} \times \partial \mathbb{H}^{n}$. Use this to show that $\partial \left(M_{1} \times N\right) = M_{1} \times \partial N$.<|endoftext|>
-TITLE: $\text{Hom}_k(M,N)\cong M^*\otimes_k N$ as Hopf-algebra modules.
-QUESTION [5 upvotes]: I'm reading Representations and Cohomology by D.J. Benson. At the beginning of the third chapter the following is explained:
-Let $\Lambda$ be a bialgebra over $R$ and $M,N$ left $\Lambda$-modules. We make $M\otimes_R N$ into a $\Lambda$-module as follows: If
-$$\Delta(\lambda)=\sum_i \mu_i\otimes \nu_i$$ then
-$$\lambda(m\otimes n)=\sum_i \mu_i(m)\otimes \nu_i(n).$$
-We can also make $R$ into a $\Lambda$-module via $\lambda(r)=\varepsilon(\lambda)r$.
-If $\Lambda$ is a Hopf-algebra over $R$ and $M,N$ are left $\Lambda$-modules, then we make $\text{Hom}_R(M,N)$ into a $\Lambda$-module as follows: If $$\Delta(\lambda)=\sum_i \mu_i\otimes \nu_i$$ and $\phi\in \text{Hom}_R(M,N)$, then
-$$\lambda(\phi)(m)=\sum_i\mu_i(\phi(\eta(\nu_i)(m))),$$
-where $\eta$ denotes the antipode of $\Lambda$.
-We write $M^*=\text{Hom}_R(M,R)$. Note that we are viewing $M^*$ as a left $\Lambda$-module. Because of the antipode $\eta$, we can regard right $\Lambda$-modules as left $\Lambda$-modules via $\lambda m = m\eta(\lambda)$ and vice-versa.
-Now, suppose that $R=k$ is a field and $\Lambda$ is a cocommutative Hopf-algebra. Suppose that $M,N$ are two left $\Lambda$-modules that are finite-dimensional as $k$-vector spaces; then the natural vector space isomorphism
-$$\text{Hom}_k(M,N)\cong M^*\otimes_kN$$ is a $\Lambda$-module isomorphism.
-I don't understand why this last statement is true.
-I know that the natural bijection is given by
-$$f:M^*\otimes_k N\rightarrow \text{Hom}_k(M,N):\phi\otimes w\mapsto \phi(\cdot)w.$$ I only have to show that this map is a $\Lambda$-module morphism. So let $\lambda\in \Lambda$ such that $\Delta(\lambda)=\sum_i\mu_i\otimes \nu_i=\sum_i\nu_i\otimes \mu_i$ (cocommutativity), and we calculate
-\begin{eqnarray*}
- (\lambda\cdot f(\phi\otimes w))(v) &=& \sum_i \mu_i(f(\phi\otimes w)(\eta(\nu_i)(v)))\\
-&=& \sum_i \mu_i(\phi(\eta(\nu_i)(v))w),\\
-f(\lambda\cdot (\phi\otimes w))(v) &=& f(\sum_i (\mu_i\phi)\otimes (\nu_i w))(v)\\
-&=& \sum_i (\mu_i\phi)(v)\nu_i w.
-\end{eqnarray*}
-By cocommutativity $\mu_i$ and $\nu_i$ are interchangeable in the calculation; however, if I want to proceed, we need to consider $\Delta(\mu_i)$ to calculate $\mu_i\phi$, since $\mu_i\in \Lambda$ and $\phi\in \text{Hom}_k(M,k)$ (which we gave a $\Lambda$-module structure). Now this leads nowhere, so how do I see that this really is a $\Lambda$-module morphism?
-Thank you in advance!
-
-REPLY [4 votes]: I hope I solved my question, though it would be nice if anyone was willing to check this answer.
Using the notation above, we get the following:
-Suppose that $\Delta(\lambda)=\sum_i\mu_i\otimes\nu_i$ and for any $i$, $\Delta(\mu_i)=\sum_j \alpha_{ij}\otimes \beta_{ij}$. Then
-\begin{eqnarray*}
- (\lambda\cdot f(\phi\otimes w))(v) &=& \sum_i\mu_i(f(\phi\otimes w)(\eta(\nu_i)(v)))\\
-&=& \sum_i \mu_i(\phi(\eta(\nu_i)(v))w)\\
-&=& \sum_i \phi(\eta(\nu_i)(v))\mu_iw,\\
-f(\lambda\cdot (\phi\otimes w))(v) &=& f(\sum_i(\mu_i\cdot\phi)\otimes (\nu_iw))(v)\\
-&=& \sum_i f ((\mu_i\cdot\phi)\otimes (\nu_iw))(v)\\
-&=& \sum_i (\mu_i\cdot \phi)(v)\nu_iw\\
-&=& \sum_i \left[ \sum_j \alpha_{ij}(\phi(\eta(\beta_{ij})(v))) \right]\nu_iw\\
-&=& \sum_i \left[\sum_j \varepsilon(\alpha_{ij})\phi(\eta(\beta_{ij})(v))\right]\nu_iw\\
-&=& \sum_i \left[ \sum_j \phi(\varepsilon(\alpha_{ij})\eta(\beta_{ij})(v))\right]\nu_iw\\
-&=& \sum_i \left[\phi\left( \eta\Big(\sum_j\varepsilon(\alpha_{ij})\beta_{ij}\Big)(v)\right)\right]\nu_iw\\
-&=& \sum_i \phi(\eta(\mu_i)(v))\nu_iw\\
-&=& \sum_i \phi(\eta(\nu_i)(v))\mu_iw.
-\end{eqnarray*}
-Hence $f$ is a $\Lambda$-module morphism, I hope.<|endoftext|>
-TITLE: How calculators do trigonometry
-QUESTION [5 upvotes]: I need to use a calculator to find trigonometric ratios like $\sin(41),\cos(32)$.
-How do calculators work for trigonometry? Also, since calculators were invented by the human mind, can we do it mentally? Is there any binary trick or something like that?
-
-REPLY [11 votes]: Generally speaking, for routines that are used over and over like those for trig functions, calculators and computers have a table of precomputed values and use those to generate other values. The details of this depend on the problem. A common scheme is to use polynomial interpolation or piecewise polynomial interpolation of computed values of the desired functions. The CORDIC algorithm is often used for trig functions in particular, and it does something else, which is nicely discussed in the Wikipedia article. Maybe this is of interest to you, but I always find it a bit unsatisfying when I get these answers, because they usually don't say where the precomputed values came from.
-Here is a simple algorithm for computing $\sin$ with little precomputing required. It's based on Taylor approximation. To make it work, you need to already know $\pi$, and you also need to precompute the number of Taylor terms that will be used. The latter is discussed below.
-First we'll use some basic trigonometric identities to reduce to a function sin1 which is restricted to the first quadrant. (This is written in Matlab syntax.)
-function y=mysin(x)
-z=mod(x,2*pi);
-if 3*pi/2<=z
-    y=-sin1(2*pi-z);
-elseif pi<=z
-    y=-sin1(z-pi);
-elseif pi/2<=z
-    y=sin1(pi-z);
-else
-    y=sin1(z);
-end
-
-Now we will reduce that to the first half of the first quadrant. We can do that by just computing cosine after reflecting through the line $y=x$.
-function y=sin1(x)
-if x>pi/4
-    y=cos1(pi/2-x);
-    return
-end
-
-So let's deal with the ones that didn't need to get reduced to cosine. Here Taylor's theorem tells us
-$$\sin(x)=\sum_{n=0}^N \frac{(-1)^n x^{2n+1}}{(2n+1)!} + R_N$$
-where $|R_N| \leq \frac{|x|^{2N+2}}{(2N+2)!}$. So we need to identify $N$ such that $\frac{|\pi/4|^{2N+2}}{(2N+2)!}<10^{-16}$ (around the usual error tolerance for IEEE double precision). Just directly checking some values reveals that it is enough to take $N=8$.
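-To spell that check out: $\frac{(\pi/4)^{16}}{16!} \approx 1.0\times 10^{-15}$ is still too big (that's $N=7$), while $\frac{(\pi/4)^{18}}{18!} \approx 2.0\times 10^{-18} < 10^{-16}$, so $N=8$ is the first value that works.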
So we can finish the second half of our "sin1" function like this:
-n=0:8;                                 % nine terms: N = 8
-m=2*n+1;                               % odd powers 1,3,...,17
-y=sum((-1).^n.*x.^(m)./factorial(m));  % truncated Taylor series for sin
-
-Then you can do basically the same thing to define cos1, to finish the overall problem:
-function y=cos1(x)
-n=0:8;
-m=2*n;                                 % even powers 0,2,...,16
-y=sum((-1).^n.*x.^(m)./factorial(m));  % truncated Taylor series for cos
-
-Note that this can be made considerably more efficient, without actually changing the mathematical character of it, by precomputing the coefficients and evaluating the polynomial using Horner's method.
-Note that not all the reductions that I did here were strictly necessary. We do need to reduce to one period (otherwise the method would need to dynamically choose $N$ and would be very slow for large arguments). But if we do that and don't reduce to the first quadrant, we can just raise $N$ to $20$. Alternately, we could just reduce to the first quadrant itself and stop, provided we raise $N$ to $11$.
-Also, I tested the above a little bit. It deviates a little bit from Matlab's sin function for large arguments; for instance sin(50000)= -0.999840189089790 and mysin(50000)= -0.999840189089824. The problem appears to be entirely caused by arithmetic error in the first step, since (within Matlab) sin(mod(50000,2*pi))==mysin(50000). Perhaps someone could comment on a good way to fix this?<|endoftext|>
-TITLE: If J is a 101×101 matrix with all entries equal to 1 and I denotes the identity matrix of order 101, what is the determinant of J-I?
-QUESTION [5 upvotes]: If $J$ is a $101\times 101$ matrix with all entries equal to $1$ and $I$ denotes the identity matrix of order $101$, what is the determinant of $J-I$?
-
-REPLY [7 votes]: Let $J$ be the $n\times n$ matrix with all entries equal to $1$ and let $I$ be the $n\times n$ identity matrix.
-Furthermore, let $A=-I$ and let $u=v=\left(\begin{array}{c}1\\\vdots\\1\end{array}\right)$. Then $J-I=A+uv^T$, so that by this Lemma:
-$$\det(J-I)=\det\left(A+uv^T\right)=\left(1+v^TA^{-1}u\right)\det(A)$$
-
-$A=-I$, so that $\det(A)=(-1)^n$.
-$A^{-1}=-I$, so that $A^{-1}u=\left(\begin{array}{c}-1\\\vdots\\-1\end{array}\right)$ and $v^TA^{-1}u=-n$, thus
-
-$$\det(J-I)=(1-n)(-1)^n$$
-and for $n=101$ you get
-$$\det(J-I)=(1-101)(-1)^{101}=100.$$<|endoftext|>
-TITLE: Different functions with the same derivative
-QUESTION [12 upvotes]: Today at school I ran into a problem when the professor asked us to differentiate the following function:
-$$f(x)=\arctan\left(\frac {x-1}{x+1}\right)$$
-With the basic rules of differentiation I came to a confusing result:
-$$f'(x)=\frac 1{1+x^2}$$
-And the teacher agreed, and so does Wolfram (I checked at home), but what surprised me is that it's the same derivative as
-$$f(x)=\arctan x$$
-$$f'(x)=\frac 1{1+x^2}$$
-So I'm wondering: is that wrong in some sense? Are the two functions actually equal? If I integrate $\frac 1{1+x^2}$, which of the two should I choose? Are there any other examples of different functions with the same derivative?
-
-REPLY [10 votes]: Are you surprised by $3-2=10-9$? ;-) I guess you aren't.
-Are you surprised by the fact that $f(x)=x^2$ and $g(x)=x^2+1$ have the same derivative? Not at all, I believe.
-The same holds in this case, and it's not the only one! For instance, $f(x)=\arcsin x$ and $g(x)=-\arccos x$ have the same derivative! Also $f(x)=\log x$ and $g(x)=\log(3x)$ do.
-The conclusion you can draw is that the two functions differ by a constant on each interval where they are both defined.
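-You can check this numerically before computing anything exactly; a small Matlab sketch (the sample points are arbitrary):
-x=[-10 -5 -2]; atan((x-1)./(x+1))-atan(x)   % every entry is about 2.3562, i.e. 3*pi/4
-x=[0 1 10]; atan((x-1)./(x+1))-atan(x)      % every entry is about -0.7854, i.e. -pi/4
-So the difference really is constant on each side of $x=-1$, with a different constant on each side; here is how to pin those constants down exactly.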
Since $\arctan\frac{x-1}{x+1}$ is defined for $x\ne-1$, you know that there exist constants $h$ and $k$ such that
-$$
-\begin{cases}
-\arctan\dfrac{x-1}{x+1}=h+\arctan x & \text{for $x<-1$}\\[12px]
-\arctan\dfrac{x-1}{x+1}=k+\arctan x & \text{for $x>-1$}
-\end{cases}
-$$
-You can now compute $h$ and $k$ by evaluating the limits at $-\infty$ and at $\infty$:
-$$
-\frac{\pi}{4}=\lim_{x\to-\infty}\arctan\dfrac{x-1}{x+1}=
-\lim_{x\to-\infty}(h+\arctan x)=h-\frac{\pi}{2}
-$$
-and
-$$
-\frac{\pi}{4}=\lim_{x\to\infty}\arctan\dfrac{x-1}{x+1}=
-\lim_{x\to\infty}(k+\arctan x)=k+\frac{\pi}{2}
-$$<|endoftext|>
-TITLE: Definition of the Tangent Space
-QUESTION [6 upvotes]: I'm watching a series of lectures on differential geometry, and I've run into a bit of a problem on the definition of the tangent space. We first defined a tangent space as $\{(p,v) | v \in \mathbb{R}^n\}$, which makes sense to me: it's the set of all vectors attached at point $p$. We then defined the directional derivative as
-$$
-(Df)(p,v) = \lim_{t \rightarrow 0} \frac{f(p + tv) - f(p)}{t}
-$$
-We expanded that to this:
-$$
-(Df)(p,v) = \left( \sum_{i = 1}^{n}v_i \left.\frac{\partial}{\partial x_i}\right|_{p}
-\right) f$$
-This makes sense to me; we have defined the directional derivative as an operator that is applied to the function.
-Here's the part where I lose the plot. I'm then told that, if I think about it, the portion inside the parentheses is really interchangeable with $(p,v)$. I'm afraid that I've thought about it, and I can't see the equivalence. $\sum_{i = 1}^{n}v_i \left.\frac{\partial}{\partial x_i}\right|_{p}$ is an operator (isn't it?) whereas $(p,v)$ is an ordered pair of elements of $\mathbb{R}^n$. Does that mean that the expression $(p,v)(f)$ makes sense? What would that mean?
-I must be thinking about this the wrong way; can someone clarify?
-
-REPLY [8 votes]: I think the phrase "the portion inside the parentheses is really interchangeable with $(p,v)$" is quite misleading. This is not really true -- as you correctly observed, $(p,v)$ is an ordered pair of elements of $\mathbb R^n$, while the expression in parentheses is an operator on functions.
-What is true is that there is a linear map from the set $\{(p,v)|v\in\mathbb R^n\}$ (let's call that the geometric tangent space) into the set of linear differential operators on functions, which takes the pair $(p,v)$ to the "directional derivative operator" that you wrote down. The image of this map is the set of derivations at $\boldsymbol p$, which is the set of all linear maps $X\colon C^\infty(\mathbb R^n)\to \mathbb R$ that satisfy this product rule:
-$$
-X(fg) = f(p)X(g) + g(p)X(f).
-$$
-The geometric tangent space is thus canonically isomorphic to the set of derivations at $p$.
-Why does this matter? Because on an abstract manifold, the geometric tangent space doesn't have any coordinate-independent meaning, but the space of derivations at $p$ does. So we take the space of derivations at $p$ as our definition of the tangent space to $M$ at $p$.
-Once you get comfortable with the canonical isomorphism between the geometric tangent space and the space of derivations at $p$, then you might start thinking of them as "interchangeable."
But when you're first trying to learn this stuff, it's more productive to think of them as canonically isomorphic.<|endoftext|>
-TITLE: On $e^{5x}+e^{4x}+e^{3x}+e^{2x}+e^{x}+1$
-QUESTION [6 upvotes]: Define the following,
-$$F_2(x) := \frac{1}{2}+\frac{(2x)}{1!} B_2\Big(\tfrac{1}{2}\Big)+\frac{(2x)^2}{2!}B_3\Big(\tfrac{1}{2}\Big)+\frac{(2x)^3}{3!}B_4\Big(\tfrac{1}{2}\Big)+\dots $$
-$$\color{brown}{F_3(x)} := \frac{1}{3}+\frac{(3x)}{1!} B_2\Big(\tfrac{1}{3}\Big)+\frac{(3x)^2}{2!}B_3\Big(\tfrac{1}{3}\Big)+\frac{(3x)^3}{3!}B_4\Big(\tfrac{1}{3}\Big)+\dots $$
-$$F_4(x) := \frac{1}{4}+\frac{(4x)}{1!} B_2\Big(\tfrac{1}{4}\Big)+\frac{(4x)^2}{2!}B_3\Big(\tfrac{1}{4}\Big)+\frac{(4x)^3}{3!}B_4\Big(\tfrac{1}{4}\Big)+\dots $$
-$$\color{brown}{F_6(x)} := \frac{1}{6}+\frac{(6x)}{1!} B_2\Big(\tfrac{1}{6}\Big)+\frac{(6x)^2}{2!}B_3\Big(\tfrac{1}{6}\Big)+\frac{(6x)^3}{3!}B_4\Big(\tfrac{1}{6}\Big)+\dots $$
-From Summation of Series (2nd ed) by L.B. Jolley, page 26, it seems that,
-$$\color{brown}{F_3(x)} \overset{?}= \frac{1}{1+e^x+e^{2x}}\\
-\color{brown}{F_6(x)} \overset{?}= \frac{1}{1+e^{x}+e^{2x}+e^{3x}+e^{4x}+e^{5x}}\tag1$$
-I assume that $B_n(x)$ are the Bernoulli polynomials. However, when I try to numerically evaluate those two $F_k(x)$ using Mathematica, the LHS does not agree with the RHS.
-Questions:
-How to interpret $B_n(x)$ such that $(1)$ holds true?
-And do $F_2(x)$ and $F_4(x)$ evaluate similarly to $(1)$?
-
-REPLY [5 votes]: Ad Question 1: The polynomials $B_n(x)$ defined in L.B.W. Jolley's book Summation of Series are close relatives of the Bernoulli polynomials defined e.g. in L. Comtet's classic Advanced Combinatorics, but not the same.
-To distinguish the polynomials defined in L.B.W. Jolley's book from the more commonly defined Bernoulli polynomials we use, we denote them from now on with $\widetilde{B}_n$.
-
-In number (1128) on p. 228 in Jolley's book and in the subsequent section (1129) we find a representation of $\widetilde{B}_n(x)$ as generating series.
- \begin{align*}
-t\frac{e^{xt}-1}{e^t-1}=\sum_{n\geq 1}n\widetilde{B}_n(x)\frac{t^n}{n!}\tag{1}
-\end{align*}
-A generating function of the Bernoulli polynomials $B_n(x)$ is given as
- \begin{align*}
-t\frac{e^{xt}}{e^t-1}=\sum_{n\geq 0}B_n(x)\frac{t^n}{n!}
-\end{align*}
- Since a generating function of the Bernoulli numbers $B_n=B_n(0)$ is
- \begin{align*}
-\frac{t}{e^t-1}=\sum_{n\geq 0}B_n(0)\frac{t^n}{n!}
-\end{align*}
-we obtain the following relationship between the Bernoulli polynomials $B_n(x)$ and $\widetilde{B}_n(x)$
-\begin{align*}
-\widetilde{B}_n(x)=\frac{B_n(x)-B_n(0)}{n}\qquad\qquad n\geq 1
-\end{align*}
-
-Ad question 2: The answer is affirmative. The nice formulas of $F_t(x)$ are valid for all $t\in \mathbb{N}$.
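-Before the derivation, a numerical sanity check of the claimed closed form for $F_3(x)$ may be reassuring; a sketch for $t=3$, assuming the Symbolic Math Toolbox's bernoulli(n,x) and the relation $\widetilde{B}_n(x)=\frac{B_n(x)-B_n(0)}{n}$ just derived:
-x=0.1; t=3; s=1/t;
-for n=1:30
-    Bt=(bernoulli(n+1,sym(1)/t)-bernoulli(n+1,sym(0)))/(n+1);  % Jolley's Btilde_(n+1) at 1/t
-    s=s+double(Bt)*(t*x)^n/factorial(n);
-end
-[s, 1/(1+exp(x)+exp(2*x))]   % both come out to about 0.3006
-With the numerics agreeing, here is the derivation.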
-
-On one hand, for $t\geq 1$, multiplying numerator and denominator of $F_t(x)$ by $1-e^x$ gives
-\begin{align*}
-F_t(x)&=\frac{1}{1+e^x+e^{2x}+\cdots+e^{(t-1)x}}\\
-&=\frac{1-e^x}{1-e^{xt}}
-\end{align*}
-
-On the other hand, note that the general expression of $F_t(x)$ is
-\begin{align*}
-F_t(x)=\frac{1}{t}+\sum_{n\geq 1}\widetilde{B}_{n+1}\left(\frac{1}{t}\right)\frac{(tx)^n}{n!}\tag{2}
-\end{align*}
-
-We obtain
-\begin{align*}
-F_t(x)&=\frac{1}{t}+\sum_{n\geq 1}\widetilde{B}_{n+1}\left(\frac{1}{t}\right)\frac{(tx)^n}{n!}\\
-&=\frac{1}{t}+\sum_{n\geq 2}\widetilde{B}_n\left(\frac{1}{t}\right)\frac{(tx)^{n-1}}{(n-1)!}\\
-&=\frac{1}{t}+\frac{1}{tx}\sum_{n\geq 2}n\widetilde{B}_n\left(\frac{1}{t}\right)\frac{(tx)^{n}}{n!}\tag{3}\\
-&=\frac{1}{t}+\frac{1}{tx}\left(tx\frac{e^{\frac{1}{t}tx}-1}{e^{tx}-1}-\widetilde{B}_1\left(\frac{1}{t}\right)tx\right)\tag{4}\\
-&=\frac{1}{t}+\frac{1}{tx}\left(tx\frac{e^x-1}{e^{tx}-1}-x\right)\\
-&=\frac{e^x-1}{e^{tx}-1}\\
-\end{align*}
-and the claim follows.
-
-Comment:
-
-In (3) we use the generating series for $n\widetilde{B}_n(x)$ according to (1)
-In (4) note that $\widetilde{B}_1(x)=x$
-
-
-Epilogue:
-Symmetry
-The function $F_t(x)$ and its reciprocal $\frac{1}{F_t(x)}$ admit nice, nearly symmetrical representations with respect to Bernoulli polynomials and their arguments. The following is valid
-\begin{align*}
-F_t(x)&
-=\sum_{n\geq 0}\frac{B_{n+1}\left(\frac{1}{t}\right)-B_{n+1}(0)}{n+1}\frac{(tx)^n}{n!}\\
-\frac{1}{F_t(x)}&
-=\sum_{n\geq 0}\frac{B_{n+1}(t)-B_{n+1}(0)}{n+1}\frac{x^n}{n!}
-\end{align*}
-
-This holds, since according to (2) we get
-\begin{align*}
-F_t(x)&=\frac{1}{\sum_{k=0}^{t-1}e^{kx}}
-=\frac{1}{t}+\sum_{n\geq 1}\frac{B_{n+1}\left(\frac{1}{t}\right)-B_{n+1}(0)}{n+1}\frac{(tx)^n}{n!}\\
-&=\sum_{n\geq 0}\frac{B_{n+1}\left(\frac{1}{t}\right)-B_{n+1}(0)}{n+1}\frac{(tx)^n}{n!}\\
-\end{align*}
-(the $n=0$ summand is $B_1\left(\tfrac{1}{t}\right)-B_1(0)=\tfrac{1}{t}$) and
-\begin{align*}
-\frac{1}{F_t(x)}&=\sum_{k=0}^{t-1}e^{kx}
-=\sum_{k=0}^{t-1}\sum_{n\geq 0}\frac{(kx)^n}{n!}
-=\sum_{n\geq 0}\left(\sum_{k=0}^{t-1}k^n\right)\frac{x^n}{n!}\\
-&=\sum_{n\geq 0}\frac{B_{n+1}(t)-B_{n+1}(0)}{n+1}\frac{x^n}{n!}
-\end{align*}
-
-Historical Note
-The definition of the Bernoulli numbers $\widetilde{B}_n$ in L.B.W. Jolley's book is taken from the British Association Report from 1877. The first few numbers are
-\begin{array}{cccccccccc}
-n&0&1&2&3&4&5&6&7&8\\
-\hline\\
-\widetilde{B}_n&-1&\frac{1}{6}&\frac{1}{30}&\frac{1}{42}&\frac{1}{30}
-&\frac{5}{66}&\frac{691}{2730}&\frac{7}{6}&\frac{3617}{510}\\
-\\
-B_n&1&-\frac{1}{2}&\frac{1}{6}&0&-\frac{1}{30}&0&\frac{1}{42}&0&-\frac{1}{30}
-\end{array}
-This corresponds for $n\geq 1$ to Comtet's statement in section 1.14
-(notation somewhat adapted) ... Bernoulli numbers, denoted by $b_n$ by Bourbaki, are sometimes also defined by:
- \begin{align*}
-\frac{t}{e^t-1}=1-\frac{1}{2}t+\sum_{n\geq 1}(-1)^{n+1}B_n\frac{t^{2n}}{(2n)!}
-\end{align*}
-Each $B_n$ is then $>0$ and equals $(-1)^{n+1}B_{2n}$ as a function of our Bernoulli numbers.<|endoftext|>
-TITLE: Non-diffeomorphic structures on the sphere
-QUESTION [5 upvotes]: How many smooth structures are there on $S^2$, $S^3$, and $S^4$ up to diffeomorphism? I looked around and couldn't find an answer; two books I have say different things on the subject.
-I know one of these should still be an open question.
-
-REPLY [10 votes]: All manifolds of dimension up to 3 have a unique smooth structure up to diffeomorphism. Dimension 1 is essentially trivial, dimension 2 is due to Rado, and dimension 3 is due to Moise.
Whether or not there exist exotic smooth structures on $S^4$ is wide open. This is known as the smooth Poincaré conjecture in 4 dimensions. Some topologists think there should even be infinitely many exotic structures on $S^4$, but this opinion is certainly not uniform.
-The keyword you want is exotic sphere.<|endoftext|>
-TITLE: When is the quotient ring finite?
-QUESTION [8 upvotes]: My question is in the title:
-Given a commutative ring $R$ with identity $1\neq 0$ and an ideal $M$ of $R$, under what conditions on $M$ is the quotient ring $R/M$ finite?
-My question comes from the theorem: if $M$ is a maximal ideal, then $M$ is a prime ideal. In general, the converse is not true. However, we know that if $M$ is a prime ideal then $R/M$ is an integral domain, and if $R/M$ is also finite, then it is a field, and the converse holds.
-Thank you so much for any help.
-
-REPLY [2 votes]: Speaking from my own experience, I'm not aware of a nontrivial characterization of when a quotient ring is finite.
-Actually, you could generalize your question by asking "When is $R/P$ Artinian?" since an Artinian domain is also a field. There is not, as far as I'm aware, any useful characterization of when $R/I$ is Artinian.
-Rings for which $R/J(R)$ is Artinian are called semilocal rings. There is a slightly useful characterization of commutative semilocal rings: they're exactly the commutative rings with finitely many maximal ideals.
-This doesn't generalize your original post, but if $R/P$ is von Neumann regular, $R/P$ is also a field.
-So "when is $R/I$ von Neumann regular?" is another question, but again there does not seem to be any useful characterization. Rings for which $R/J(R)$ is von Neumann regular are called semiregular rings.
-Since you seem to be interested in cases where prime ideals turn out to be maximal, I'll link you to a characterization of commutative rings whose prime ideals are all maximal.<|endoftext|>
-TITLE: If the Hermitian part of a matrix is positive definite, does it follow that the matrix itself is invertible?
-QUESTION [5 upvotes]: I came across this in a paper and I was wondering whether it is true. We have a complex matrix $M$ such that $(M+M^H)$ is positive definite. Now, it is clear that $(M+M^H)$ is invertible, but does that hold for $M$?
-Is there (in general) some connection between the Hermitian part of a matrix and the Hermitian part of its inverse?
-Thanks guys!
-
-REPLY [3 votes]: The implication $\mathcal{Re}M \succ 0 \implies \mathcal{Re}M^{-1} \succ 0$ appeared before on the site, so we might as well repeat the argument. Here $\mathcal{Re}M = \frac{1}{2}(M+M^H)$ denotes the Hermitian part.
-First, if $v$ is a unit eigenvector for $M$ for the eigenvalue $\lambda$, then $\langle (\mathcal{Re}M)v, v \rangle = \frac{1}{2}\left(\langle Mv, v \rangle + \langle v, Mv \rangle\right) = \mathcal{Re}\lambda$. (Note that $v$ need not be an eigenvector of $\mathcal{Re}M$ itself.)
-Next, if $\mathcal{Re}M\succ 0$ then all the eigenvalues of $M$ have real part $>0$ (from the above). In particular, all are nonzero, so $M$ is invertible.
-Now, $\mathcal{Re}M \succ 0$ is equivalent to $\langle M v, v \rangle +\langle v, M v \rangle > 0$ for any nonzero vector $v$. We know from the above that $M$ is invertible. Plug $v := M^{-1} w$ into the previous inequality and get
-$\langle w, M^{-1}w \rangle +\langle M^{-1}w, w \rangle > 0$ for any nonzero vector $w$.
Therefore $\mathcal{Re}M^{-1}\succ 0$.<|endoftext|>
-TITLE: Finding two non-congruent right-angle triangles
-QUESTION [5 upvotes]: The map $g: B \to A, \ (x,y) \mapsto \left(\dfrac {x^2 - 25} y, \dfrac {10x} y, \dfrac {x^2 + 25} y \right)$ is a bijection where $A = \{ (a,b,c) \in \Bbb Q ^3 : a^2 + b^2 = c^2, \ ab = 10 \}$ and $B = \{ (x,y) \in \Bbb Q ^2 : y^2 = x^3 - 25x, \ y \ne 0 \}$.
-Given the Pythagorean triple,
-$$
-\begin{aligned}
-a&=\frac {x^2 - 25} y\\
-b&=\frac {10x} y\\
-c&=\frac {x^2 + 25} y
-\end{aligned}
-$$
-so $a^2+b^2 = c^2$ and $y\neq 0$. We wish to set its area $G = 5$. Thus,
-$$G = \frac{1}{2}ab = \frac{1}2 \frac {10x} y \frac {(x^2 - 25)} y = 5$$
-Or simply,
-$$x^3-25x = y^2\tag1$$
-Can you help find two non-congruent right-angled triangles that have rational side lengths and area equal to $5$?
-
-REPLY [3 votes]: Update: It turns out that for rational $a,b,c$, if,
-$$a^2+b^2 = c^2$$
-$$\tfrac{1}{2}ab = n$$
-then $n$ is a congruent number. The first few are $n = 5, 6, 7, 13, 14, 15, 20, 21, 22,\dots$
-
-After all the terminology, I guess all the OP wanted was to solve $(1)$. He asks, "... find two non-congruent right-angled triangles that have rational side lengths and area equal to 5".
-We'll define congruence as "... two figures are congruent if they have the same shape and size." Since we want non-congruent triangles, two such right triangles are,
-$$a,b,c = \frac{3}{2},\;\frac{20}{3},\;\frac{41}{6}$$
-$$a,b,c = \frac{1519}{492},\;\frac{4920}{1519},\;\frac{3344161}{747348}$$
-However, since the elliptic curve,
-$$x^3-25x = y^2\tag1$$
-has an infinite number of rational points, there is no need to stop at just two. Two small solutions to $(1)$ are $x_1 = 5^2/2^2 =u^2/v^2$ and $x_2 = 45$. We can use the first one as an easy way to generate an infinite subset of solutions. Let $x =u^2/v^2$, so,
-$$\frac{u^2}{v^6}(u^4-25v^4) = y^2$$
-or just,
-$$u^4-25v^4 = w^2\tag2$$
-a curve also discussed in the post cited by J. Lahtonen. Here is a theorem by Lagrange. Given an initial solution $u^4+bv^4 = w^2$, further ones are,
-$$X^4+bY^4 = Z^2$$
-where,
-$$X,Y,Z = u^4-bv^4,\; 2uvw,\; (u^4+bv^4)^2 + 4 b u^4 v^4$$
-Thus, using Lagrange's theorem recursively, we have the infinite sequence,
-$$u,v,w = 5,\;2,\;15$$
-$$u,v,w = 41,\;12,\;1519$$
-$$u,v,w = 3344161,\; 1494696,\; 535583225279$$
-and so on.<|endoftext|>
-TITLE: Product space that is compact, but isn't sequentially compact
-QUESTION [6 upvotes]: I need to find an example of an infinite product space that is compact but not sequentially compact.
-So this is my example:
-$$
-\prod_{i\in [0,1[}\{0,\ldots,9 \}.
-$$
-It sure is compact, but is it sequentially compact?
-Let's define a sequence $(f_{n})_{n\geq 1}$ by
-$$ f_{n}:[0,1[\rightarrow \{0,\ldots ,9\},\ f_{n}(x)=a_{n}, $$ where $x\in [0,1[$ has decimal expansion $0.a_{1}a_{2}a_{3}a_{4}...$.
-Are there simpler examples than this?
-
-REPLY [10 votes]: An example which is homeomorphic to yours, but does not depend on decimal expansions of reals, etc.: take the index set $I = \{0,1\}^{\mathbb{N}}$, the set of all $0$-$1$-sequences (the Cantor set, to a topologist). This has natural functions $\pi_n$ that send an $i \in I$ (which is really a sequence, i.e. itself a function from $\mathbb{N}$ to $\{0,1\}$) to its $n$-th coordinate; this corresponds to the $n$-th decimal of the real number $i$ in your example.
-Now consider $X = \{0,1\}^I$. This is compact by Tychonoff's theorem. Define $f_n \in X$ (for every (fixed for now) $n$) as $f_n(i) = \pi_n(i)$.
This is well-defined, because a member of $X$ is just a function from $I$ to $\{0,1\}$, and for fixed $n$ we send each $i \in I$ to its $n$-th coordinate, which is $0$ or $1$. So we have a sequence $(f_n)$ in $X$.
-A basic fact for the product topology, for any sequence $(x_n)$ in $X$ (so all $x_n$ are functions from $I$ to $\{0,1\}$!): $(x_n) \rightarrow x (\in X)$ iff for all $i \in I$, $x_n(i) \rightarrow x(i)$. So convergence is determined pointwise. In fact we only need the implication from left to right (which just says that all projections are sequentially continuous).
-Suppose that $(f_n)_n$ has a convergent subsequence $f_{n_k}$. Define a point $j \in I$ by $j_{n_{2k}} = 1$ for all $k$, $j_m = 0$ for all other $m$.
-Then $f_{n_k}(j) = \pi_{n_k}(j) = j_{n_k}$, which is just (as $k$ goes through $0,1,\ldots$) the sequence $1, 0, 1, 0, 1, 0, \ldots$ by how $j$ was defined. So this subsequence does not converge pointwise at $j$, so it cannot converge in $X$ at all. Contradiction.
-So $X$ is not sequentially compact.
-It turns out that if $X_i$ are non-trivial (more than 1 point) sequentially compact Tychonoff spaces for $i \in I$, then $\prod_i X_i$ is sequentially compact for countable index set $I$, and never sequentially compact for $I$ of size $\mathfrak{c}$. There is some turning-point cardinal in between, related to the cardinal invariant called the splitting number. If CH holds, of course all is known, but in general we could have that e.g. $\{0,1\}^{\aleph_2}$ is sequentially compact, while $\{0,1\}^{\aleph_3}$ is not, even with $\aleph_3 < \mathfrak{c}$, etc. This is a set-theory aside, but it shows that we need "big" products to kill sequential compactness in products.<|endoftext|>
-TITLE: Hilbert Spaces and Banach Spaces
-QUESTION [6 upvotes]: I have a problem with the definition of Hilbert Space and Banach Space.
-What is the difference between a Hilbert Space and a Banach Space?
-
-REPLY [3 votes]: A Hilbert space is a Banach space. What makes it special is that its norm satisfies a parallelogram law: $$\| x-y \|^2 + \| x + y\|^2 = 2 \|x\|^2 + 2\|y\|^2.$$ When the norm obeys the parallelogram law, there is an inner product $\langle x, y \rangle$ such that $\langle x, x \rangle = \|x\|^2$. That is, the inner product induces the norm.
-The specialness of Hilbert spaces comes from the inner product. There it is possible to define angles in infinite-dimensional spaces in a way that has some resemblance to our intuition about Euclidean spaces. In particular, we say that two vectors are at $90^\circ$ to one another (or that they are orthogonal) if they satisfy $\langle x, y \rangle = 0$.
-Moreover, not all Banach spaces are reflexive, as Hilbert spaces are. Hilbert spaces are in fact more than just reflexive: their dual space is isometrically isomorphic to the Hilbert space itself.
-Recall that the dual of a Banach space is the collection of continuous linear functionals. The Riesz representation theorem declares that for every continuous linear functional $T$ on a Hilbert space $H$, there is a vector $y \in H$ such that $T(x) = \langle x, y \rangle$ and $\| T \| = \|y\|_H$. Thus $T$ can be identified with $y$. If we let $H'$ be the dual of $H$, we then have $H=H' = H''$, and the identity $H=H''$ is called the reflexive property.
-If you consider the Banach space $c_0(\mathbb{N})$, the dual space of $c_0(\mathbb{N})$ is $l^1(\mathbb{N})$, and the dual space of $l^1(\mathbb{N})$ is $l^\infty(\mathbb{N})$. Thus $c_0(\mathbb{N})$ is not reflexive.
This also demonstrates that the norm for $c_0(\mathbb{N})$ does not arise from an inner product, since otherwise the space would be reflexive. The same can be said of $l^1(\mathbb{N})$, since it is not self-dual.
-
-Definitions:
-$c_0(\mathbb{N}) = \{ (a_n)_{n\in \mathbb{N}} \subset \mathbb{C} : a_n \to 0 \}$ and has the norm $\|(a_n)\| = \sup_{n} |a_n|$.
-$l^1(\mathbb{N}) = \{ (a_n)_{n\in \mathbb{N}} \subset \mathbb{C} : \|(a_n)\| = \sum |a_n| < \infty \}$.
-$l^\infty(\mathbb{N}) = \{ (a_n)_{n\in \mathbb{N}} \subset \mathbb{C} : \| (a_n) \| = \sup_{n} |a_n| < \infty \}$.
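-A concrete way to see that the $c_0(\mathbb{N})$ norm cannot come from an inner product is to test the parallelogram law on the first two unit sequences; a minimal Matlab sketch (only the first two coordinates matter, since all later entries are $0$):
-x=[1 0]; y=[0 1];                      % stand-ins for e_1, e_2 in c_0
-max(abs(x-y))^2+max(abs(x+y))^2        % left-hand side:  2
-2*max(abs(x))^2+2*max(abs(y))^2        % right-hand side: 4, so the law fails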