-TITLE: How many different numbers can be written if each used digit symbol is used at least 2 times?
-QUESTION [18 upvotes]: How many different numbers can be written if each used digit symbol is used at least 2 times?
-I would like to find the function $P(n,d)$:
-$P(n,d)$, where $n$ is the base and $d$ is the number of digits;
-Some examples:
- $n=3$, $d=3$;
-$$(000)_3$$
-$$(111)_3$$
-$$(222)_3$$
-$P(3,3)=3$
-
-It is easy to see that, for 3-digit numbers in general, $P(n,3)=n$.
-$P(3,4)=3\cdot\cfrac{4!}{4!}+\cfrac{3\cdot 2}{2!}\cdot\cfrac{4!}{(2!)^2}=21$
-
-$(0000)_3, (1111)_3$: two examples for $3\cdot\cfrac{4!}{4!}$
-$(0011)_3, (1212)_3$: two examples for $\cfrac{3\cdot 2}{2!}\cdot\cfrac{4!}{(2!)^2}$
-
-
-$P(4,4)=4\cdot\cfrac{4!}{4!}+\cfrac{4\cdot 3}{2!}\cdot\cfrac{4!}{2!\,2!}=40$
-
-$(0000)_4, (1111)_4$: two examples in $4\cdot\cfrac{4!}{4!}$
-$(0011)_4, (3232)_4$: two examples in $\cfrac{4\cdot 3}{2!}\cdot\cfrac{4!}{2!\,2!}$
-
-
-$P(3,5)=3\cdot\cfrac{5!}{5!}+3\cdot 2\cdot\cfrac{5!}{3!\,2!}=63$
-
-$(00000)_3, (22222)_3$: two examples in $3\cdot\cfrac{5!}{5!}$
-$(00110)_3, (02020)_3$: two examples in $3\cdot 2\cdot\cfrac{5!}{3!\,2!}$
-
-
-$P(3,6)=3\cdot\cfrac{6!}{6!}+\cfrac{3\cdot 2}{2!}\cdot\cfrac{6!}{3!\,3!}+(3\cdot 2)\cdot\cfrac{6!}{2!\,4!}+\cfrac{3\cdot 2\cdot 1}{3!}\cdot\cfrac{6!}{2!\,2!\,2!}=243$
-
-$(000000)_3, (222222)_3$: two examples in $3\cdot\cfrac{6!}{6!}$
-$(001101)_3, (020220)_3$: two examples in $\cfrac{3\cdot 2}{2!}\cdot\cfrac{6!}{3!\,3!}$
-$(002200)_3, (111122)_3$: two examples in $(3\cdot 2)\cdot\cfrac{6!}{2!\,4!}$
-$(112200)_3, (102021)_3$: two examples in $\cfrac{3\cdot 2\cdot 1}{3!}\cdot\cfrac{6!}{2!\,2!\,2!}$
-
-
-Thanks for your help.
-EDIT: (2/21/2016)
-I have noticed some mistakes in my formulas above and corrected them. Thanks a lot for the answers.
-During my research on $P(n,d)$ I arrived at a conjecture, so the question has taken a very interesting turn.
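The example values above are small enough to check by brute force; here is a quick Python sketch (`P_bruteforce` is just an illustrative helper name), counting all length-$d$ strings over $n$ digit symbols with leading zeros allowed, exactly as in the examples:

```python
from itertools import product
from collections import Counter

def P_bruteforce(n, d):
    """Count length-d digit strings over {0, ..., n-1} (leading zeros
    allowed, as in the examples) in which every digit symbol that is
    used appears at least twice."""
    return sum(
        1
        for s in product(range(n), repeat=d)
        if all(c >= 2 for c in Counter(s).values())
    )

# Reproduces the example values above:
print(P_bruteforce(3, 3), P_bruteforce(3, 4), P_bruteforce(4, 4),
      P_bruteforce(3, 5), P_bruteforce(3, 6))
# 3 21 40 63 243
```

This only scales to small $n$ and $d$ (there are $n^d$ strings), but it is enough to sanity-check the closed forms and the conjecture below for small cases.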
-(I do not know whether this is a known conjecture or not; please let me know if you have heard of it. If it is true, it could let us generalize Fermat's little theorem to an arbitrary positive modulus.)
-Fermat's little theorem:
-$n^{p}\equiv n \pmod {p}$, where $p$ is a prime number and $n$ is any positive integer. (The related form $n^{p-1}\equiv 1 \pmod {p}$ requires $\gcd(n,p)=1$.)
-My conjecture:
-$$n^{d}\equiv P(n,d) \pmod {d}$$
-where $d$ and $n$ are any positive integers.
-The conjecture is true for the $(7\times 7)$ table that @Markus Scheuer gave in his answer. All values in the table were tested with success. I need your contribution to test my conjecture for large numbers. I have not found any counter-example to disprove my conjecture yet.
-Note that I state the conjecture without proof.
-How can the conjecture be proven?
-I would like to share some of my results for $P(n,d)$. Please let me know if there is any fault in my formulas below.
-$$P(n,1)=0$$
-$$P(n,2)=n$$
-$$P(n,3)=n$$
-$$P(n,4)=n+\cfrac{n(n-1)}{2!}\cfrac{4!}{2!2!}=3n^2-2n$$
-$$P(n,5)=n+\cfrac{n(n-1)}{1!}\cfrac{5!}{2!3!}=10n^2-9n$$
-$$P(n,6)=n+\cfrac{n(n-1)}{2!}\cfrac{6!}{3!3!}+\cfrac{n(n-1)}{1!}\cfrac{6!}{4!2!}+\cfrac{n(n-1)(n-2)}{3!}\cfrac{6!}{2!2!2!}$$
-$$P(n,6)=n+25n(n-1)+15n(n-1)(n-2)=15n^3-20n^2+6n$$
-$$P(n,7)=n+\cfrac{n(n-1)}{1!}\cfrac{7!}{4!3!}+\cfrac{n(n-1)}{1!}\cfrac{7!}{2!5!}+\cfrac{n(n-1)(n-2)}{2!}\cfrac{7!}{3!2!2!}$$
-$$P(n,7)=n+56n(n-1)+105n(n-1)(n-2)$$
-Note that the formulas above agree with the $(7\times 7)$ $(n/d)$ table that @Markus Scheuer gave in his answer.
-We can write more terms if we want.
-$$P(n,8)=n+\cfrac{n(n-1)}{1!}\cfrac{8!}{2!6!}+\cfrac{n(n-1)}{1!}\cfrac{8!}{3!5!} +\cfrac{n(n-1)}{2!}\cfrac{8!}{4!4!}+\cfrac{n(n-1)(n-2)}{2!}\cfrac{8!}{2!2!4!}+\cfrac{n(n-1)(n-2)}{2!}\cfrac{8!}{2!3!3!}+\cfrac{n(n-1)(n-2)(n-3)}{4!}\cfrac{8!}{2!2!2!2!}$$
-..
-..
-$$P(n,d)=n+\cfrac{n(n-1)}{1!}\cfrac{d!}{2!(d-2)!}+\cfrac{n(n-1)}{1!}\cfrac{d!}{3!(d-3)!} +\cfrac{n(n-1)}{1!}\cfrac{d!}{4!(d-4)!}+\cdots$$
-(the sum continues over all partitions of $d$ into parts $\geq 2$, with symmetry factors for repeated part sizes as in the examples above)
-
-$$n^{d}\equiv P(n,d) \pmod {d}$$
-If we plug the results above into my conjecture, we get:
-$$n^{2}\equiv n \pmod {2}$$
-$$n^{3}\equiv n \pmod {3}$$
-$$n^{4}\equiv 3n^2-2n \pmod {4}$$
-$$n^{5}\equiv 10n^2-9n\pmod {5}\equiv -9n\pmod {5}\equiv n\pmod {5}$$
-$$n^{6}\equiv 15n^3-20n^2+6n \pmod {6} \equiv 3n^3-2n^2 \pmod {6}$$
-$$n^{7}\equiv n+56n(n-1)+105n(n-1)(n-2) \pmod {7}\equiv n \pmod {7}$$
-$$n^{8}\equiv n^2(n-2)^2 \pmod {8}$$
-EDIT: (2/22/2016)
-An observation:
-$$(n+1)^{d}=\sum_{k=0}^{d}{d\choose k}\>n^k$$
-$$(n+1)^{d}\equiv \sum_{k=0}^{d}{d\choose k}\>n^k \pmod {d}$$
-If $n^{d}\equiv P(n,d) \pmod {d}$ is true, then
-$$P(n+1,d)\equiv \sum_{k=0}^{d}{d\choose k}\>P(n,k) \pmod {d}$$
-Because ${d\choose d-1}=d$,
-$$P(n+1,d)\equiv P(n,d)+ \sum_{k=0}^{d-2}{d\choose k}\>P(n,k) \pmod {d}$$
-This is very similar to the recurrence formula that @Christian Blatter wrote in his answer:
-$$P(n+1,d)=P(n,d)+\sum_{k=0}^{d-2}{d\choose k}\>P(n,k)\qquad(d\geq2)$$
-EDIT
-I have tested my conjecture with big numbers.
-According to the table in sequence A231797 (thanks a lot to @MarkusScheuer for the link),
-there is a data file for big numbers, "Table of n, a(n) for n = 0..410". I tested the case $n=d=410$.
-$P(410,410)$ is a 990-digit integer; you can see it in the table that I linked above.
-I confirmed with an online calculator that my conjecture is still true for that big number:
-$$410^{410}\equiv 0 \pmod {410}$$
-$$P(410,410)\equiv 0 \pmod {410}$$
-$$410^{410}\equiv P(410,410) \pmod {410}$$
-Note: I have not tested my conjecture with big numbers in the case $d \neq n$. I do not know a generating function for the case $d \neq n$, as we have for $d=n$ (as Markus Scheuer noted in his answer). Any idea for a generating function for $d \neq n$?
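For numerical experiments, note that the closed formula derived in the species answer at the end of this thread, $P(n,d)=d!\,[z^d]\left(e^z-z\right)^n$, is not restricted to $d=n$. A small Python sketch with exact rational arithmetic (`P` is just an illustrative helper name):

```python
from fractions import Fraction
from math import factorial

def P(n, d):
    """P(n, d) = d! * [z^d] (e^z - z)^n, via truncated power series
    over exact rationals (d >= 1)."""
    # Coefficients of e^z - z up to degree d: 1, 0, 1/2!, 1/3!, ...
    base = [Fraction(1), Fraction(0)] + [Fraction(1, factorial(k))
                                         for k in range(2, d + 1)]
    poly = [Fraction(1)] + [Fraction(0)] * d      # the constant series 1
    for _ in range(n):                            # multiply by (e^z - z), n times
        poly = [sum(poly[i] * base[j - i] for i in range(j + 1))
                for j in range(d + 1)]
    return int(poly[d] * factorial(d))

# Matches the closed form P(n,7) = n + 56n(n-1) + 105n(n-1)(n-2) above,
# and the conjecture for d != n:
print(P(5, 7))                     # 7425
print(P(5, 7) % 7, pow(5, 7, 7))   # 5 5
```

This makes it easy to test $n^{d}\equiv P(n,d) \pmod d$ for many pairs with $d \neq n$.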
-Generating function for $d = n$:
-\begin{align*}
-P(n,n)=n![x^n]\left(e^x-x\right)^n\qquad\qquad n\geq 0
-\end{align*}
-Thanks for your help and contributions.
-
-EDIT (26/2/2016): I have posted a proof of my conjecture above. You can find it below as an answer. Please feel free to write comments on it.
-
-$$n^{d}\equiv P(n,d) \pmod {d}$$
-where $d$ and $n$ are any positive integers.
-Many special thanks to @ChristianBlatter and @MarkusScheuer for their contributions to proving it. In particular, Christian Blatter's recurrence formula for $P(n,d)$ is the key to the proof of the conjecture. Thanks a lot for sharing the idea with us.
-I have not found a related link for this theorem on the internet. Could you please share reference books or links if you know of any?
-Thanks a lot for your help.
-
-REPLY [3 votes]: We can keep this simple. The labeled species of sequences of $q$ sets
-with at least two elements is given by
-$$\mathfrak{S}_{=q}(\mathfrak{P}_{\ge 2}(\mathcal{Z})).$$
-We get the admissible contributions to $P(n,d)$ by choosing $q$ values
-from the $n$ different digits and letting the first set be the
-positions of the smallest chosen digit, the next one of the second
-smallest, and so on. This yields the species
-$$\sum_{q=1}^n {n\choose q}
-\mathfrak{S}_{=q}(\mathfrak{P}_{\ge 2}(\mathcal{Z})).$$
-Translating to generating functions we have
-$$\sum_{q=1}^n {n\choose q} (\exp(z)-1-z)^q
-= -1 + (\exp(z)-z)^n.$$
-This finally yields the closed formula
-$$d! [z^d] (\exp(z)-z)^n.$$
-Writing this with Stirling numbers we get
-$$d! [z^d] (\exp(z)-1+1-z)^n
-= d! [z^d] \sum_{p=0}^n {n\choose p} (\exp(z)-1)^p (1-z)^{n-p}
-\\ = d! \sum_{p=0}^n {n\choose p}
-\sum_{q=0}^d [z^q] (\exp(z)-1)^p [z^{d-q}] (1-z)^{n-p}
-\\ = d! \sum_{p=0}^n {n\choose p}
-\sum_{q=0}^d \frac{p!}{q!} {q\brace p} (-1)^{d-q} {n-p\choose d-q}.$$<|endoftext|>
-TITLE: How can I check whether a given finite group is a semidirect product of proper subgroups?
-QUESTION [6 upvotes]: Suppose a finite group $G$ is given.
-I want to check whether there is a proper normal subgroup $N$ of $G$ and a subgroup $H$ of $G$ such that $G$ is the semidirect product of the groups $N$ and $H$.
-In the case that the order of $N$ is coprime to the order of $G/N$, a suitable $H$ (with $H\cong G/N$) always exists by the Schur–Zassenhaus theorem, and $G$ is the semidirect product of $N$ and $H$; but how can I find out whether a suitable $H$ exists in general?
-GAP allows one to enumerate the normal subgroups of a given finite group $G$, but I have no idea how to search for $H$ with GAP.
-
-REPLY [8 votes]: In general such a subgroup $H$ is called a complement to $N$. Complements can fall into several conjugacy classes, and so GAP has a function ComplementClassesRepresentatives that returns representatives of these classes. (That is, if an empty list is returned, there is no complement.)
-The methods used build on group cohomology. A description of the algorithm for the case of solvable normal subgroups can be found in:
-Celler, F.; Neubüser, J.; Wright, C. R. B.
-Some remarks on the computation of complements and normalizers in soluble groups.
-Acta Appl. Math. 21 (1990), no. 1-2, 57–76
-and a generalization that also allows one to deal with cases in which only the factor group is solvable is in my paper:
-Hulpke, A.
-Calculation of the subgroups of a trivial-fitting group. ISSAC 2013—Proceedings of the 38th International Symposium on Symbolic and Algebraic Computation, 205–210, ACM, New York, 2013<|endoftext|>
-TITLE: Is this really an open problem? Maximizing angle between $n$ vectors
-QUESTION [9 upvotes]: It is well known that the trigonal planar molecule (with bond angle $\alpha=120^{\circ}$) and the famous tetrahedral molecule (with bond angle $\alpha\approx 109.5^{\circ}$) maximize the angle between the vectors pointing along the bonds. So the question is this:
-
-How do we (analytically) maximize the angle between $n$ vectors in $\mathbb{R^3}$?
-
-Or put differently:
-
-What is the maximum possible distance from each of the $n$ vectors in $\mathbb{R^3}$ to its nearest neighbour among the others, i.e. how do we maximize the minimum pairwise distance?
-
-I know that for $n=\{4,6,\require{cancel} \cancel{8},12,\require{cancel} \cancel{20}\}$ one can simply make use of the Platonic solids inscribed in a sphere to calculate the corresponding angles/distances.
-However, I have not been able to find any resources for the general case. Is this problem really unsolved?
-
-REPLY [9 votes]: This is known as the Tammes problem. For general $n$, it is a hard problem.
-According to Conway and Sloane's book Sphere packings, lattices and groups, the optimal arrangements
-for $n = 4, 6, 8, 12$ and $24$ are known:
-
-4 : regular tetrahedron,
-6 : regular octahedron,
-8 : square anti-prism,
-12: regular icosahedron,
-24: snub cube
-
-The book doesn't have the answer for $n = 20$, but it points out that a regular dodecahedron isn't the answer. Furthermore, it is common that the best known solutions for larger values of $n$ are not the (highly symmetric) candidates.
-If you can get a copy of Conway and Sloane's book, look at the references there.
-The information in this book is not the most up to date. Sloane has another book on a similar topic (called Spherical codes) under preparation. I think I have seen a preliminary copy of it floating around the Web. In any event, look at Sloane's site for spherical codes as a start.
-Update
-As of July 2015, the Tammes problem for $3 \le N \le 14$ and $24$ has been solved. The following is a short list of papers with the solutions.
-
-$N = 3,4,6,12$ by L. Fejes Tóth (1943)
-Über die Abschätzung des kürzesten Abstandes zweier Punkte eines
-auf einer Kugelfläche liegenden Punktsystems, Jber. Deutsch. Math.-Verein. 53 (1943)
-$N = 5,7,8,9$ by Schütte and van der Waerden (1951)
-Auf welcher Kugel haben 5,6,7,8 oder 9 Punkte mit
-Mindestabstand 1 Platz?
-Math. Ann. 123 (1951), 96-124.
-$N = 10,11$ by Danzer (19??)
-Finite point-sets on $S^2$ with minimum distance as large as possible,
-Discr. Math., vol. 60, 1986, pp. 3-66.
-$N = 13,14$ by Musin and A. S. Tarasov (2012,2015?)
-Enumeration of irreducible contact graphs on the sphere,
-Fundam. Prikl. Mat., 18:2 (2013), 125-145 [J. Math. Sci. 203 (2014), 837–850].
-The Tammes problem for N = 14 (2015?) (preprint on arxiv)
-$N = 24$ by Robinson (1961)
-Arrangement of 24 circles on a sphere, Math. Ann. 144 (1961), 17-48
-
-Edith Mooers wrote a survey article on Tammes's problem in 1994
-that summarizes all the solutions for $N \le 12$. An online copy can be found
-here.
-Together with Musin's preprint on $N = 14$, one should have a good
-overview of the current status of the problem.<|endoftext|>
-TITLE: Can $e^{ax}$ be said to be the eigenfunction of the operator $\frac{d^{(n)}}{dx}$?
-QUESTION [6 upvotes]: I'm gradually getting familiar with operators (as they are used in QM) and the terminology surrounding them, and I was wondering whether all the (to me) well-known operators have straightforward, elementary eigenfunctions, as seems to be the case with $\frac{d^n}{dx^n}$, because $$\frac{d^n}{dx^n}e^{ax}=a^n e^{ax}$$
-and could one say that the spectrum of these eigenfunctions is degenerate, since $a$ can vary (I know "spectrum" is usually used for the set of eigenvalues, but it seems appropriate here)? Is this the correct interpretation? If so, what are the eigenfunctions and -values for the following differential operators (if you could point me in the direction of a resource that either collects them or, even better, shows how they are obtained, that would be much appreciated):
-
-$\int dx$
-$\nabla$ (grad)
-$\nabla \cdot $ (div)
-$\nabla^2$ (Laplace)
-If you have a cool one, please do just throw it in there!
-
-Thanks!
-
-REPLY [8 votes]: For each value of $a$, $$\frac{d^n}{dx^n} e^{ax} = a^n e^{ax} ,$$
-and so $e^{ax}$ is an eigenfunction of the linear differential operator $\frac{d^n}{dx^n}$ with eigenvalue $a^n$.
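(As a quick numerical spot check of this eigenvalue relation, one can compare an $n$-th order central finite difference against $a^n e^{ax}$; a stdlib-only Python sketch, where `nth_derivative` is just an illustrative helper:)

```python
from math import comb, exp

def nth_derivative(f, x, n, h=1e-3):
    # n-th order central finite difference:
    #   f^(n)(x) ~= h^{-n} * sum_k (-1)^k C(n, k) * f(x + (n/2 - k) h)
    return sum((-1) ** k * comb(n, k) * f(x + (n / 2 - k) * h)
               for k in range(n + 1)) / h ** n

a, x0 = 3.0, 0.1
f = lambda x: exp(a * x)
for n in (1, 2, 3):
    # both columns should agree to several decimal places
    print(n, nth_derivative(f, x0, n), a ** n * exp(a * x0))
```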
Conversely, the $a^n$-eigenspace of this operator (regarded as a map, e.g., from $C^{\infty}(\Bbb R)$ to itself) has dimension $n$, and it's not too hard to write down an explicit basis thereof. (NB that for $a = 0$, the eigenspace is qualitatively different from the general case, as the solutions of $\frac{d^n}{dx^n} f = 0$ are just the polynomials of degree $< n$.) With a little more work (mostly involving passing to the complex setting), we can show that the spectrum of this operator is $\Bbb R$ itself.
-For most of the other operators you mention, the input and output objects are of different types, so without more structure it doesn't make any sense to talk about eigenfunctions; for example, the gradient maps functions to vector fields. A little more subtly, the operator $f \mapsto \int f \,dx$ maps functions to equivalence classes of functions (where $f \sim \hat{f}$ iff $\hat{f} - f$ is a constant).
-The exception to this is the Laplacian $\Delta := \nabla^2$; in this case, the eigenvalue equation,
-$$\Delta f = -\lambda f$$ is essentially the interesting and well-studied Helmholtz equation. The eigenvalues and eigenfunctions in this case depend on the (fixed, and often bounded) domain $\Omega$ of $f$, or, if you like, compact Riemannian manifold $(\Omega, g)$ with boundary. For general $\Omega$ the eigenfunctions are complicated, but when $\Omega$ is the $n$-sphere, the eigenfunctions are the rather tractable spherical harmonics, which are important, for example, in the quantum mechanics of the hydrogen atom.
-For any such domain $\Omega$, it's particularly interesting to restrict attention to eigenfunctions that vanish on the boundary (i.e., the functions $f$ that, in addition to the above equation, satisfy $f\vert_{\partial \Omega} = 0$). In this setting, we can ask whether the spectrum of eigenvalues for such eigenfunctions determines $\Omega$ (say, up to isometry), or a little more poetically, "Can One Hear the Shape of a Drum?"
This question was posed by Mark Kac in a rightfully celebrated 1966 article by that title (though the origins of this circle of ideas are rather earlier), with emphasis on bounded domains $\Omega \subset \Bbb R^2$, and this question wasn't answered (in this case) until 1992, in the negative. -Another operator familiar from vector calculus that has the same domain and codomain is the curl operator, $\nabla \times\,\cdot\,$, on $\Bbb R^3$ (or any $3$-dimensional Riemannian manifold); its eigenvectors are the subject of this question.<|endoftext|> -TITLE: Basis for $\mathbb Q (\sqrt2 , \sqrt3 )$ over $\mathbb Q$ -QUESTION [8 upvotes]: List a basis for $\mathbb K =\mathbb Q (\sqrt2 , \sqrt3 )$ as a vector space over $\mathbb Q $. -I don't know how people come to the conclusion of a claimed basis. Like I am pretty sure that we just claim $\{1, \sqrt2 , \sqrt3, \sqrt6 \}$ is a basis and then prove that it is LI and it is a spanning set of $\mathbb K$ but how do you even think of this claim? That is my first question. I know that the dimension of $\mathbb K$ is $4$ so the basis should have $4$ elements but it doesn't really tell you which $4$ specifically. -Secondly, to prove that these are LI: -$1$ is LI to all the other elements clearly since the others are irrational and $1$ is rational. Not sure on how the other three are LI to each other though. -Also I am unsure on the spanning part. - -REPLY [7 votes]: You can note that $\sqrt{3}$ has degree $2$ over $\mathbb{Q}(\sqrt{2})$, because -$$ -(a+b\sqrt{2})^2=3 -$$ -leads to -$$ -a^2+2b^2=3,\qquad 2ab=0 -$$ -which has no solution in the rational numbers. Therefore $\{1,\sqrt{3}\}$ is a basis of $\mathbb{Q}(\sqrt{2},\sqrt{3})$ over $\mathbb{Q}(\sqrt{2})$, which in turn has $\{1,\sqrt{2}\}$ as a basis over $\mathbb{Q}$. 
The standard proof of the dimension theorem shows that
-$$
-\{1\cdot 1,1\cdot\sqrt{2},1\cdot\sqrt{3},\sqrt{2}\cdot\sqrt{3}\}
-$$
-is a basis of $\mathbb{Q}(\sqrt{2},\sqrt{3})$ over $\mathbb{Q}$.<|endoftext|>
-TITLE: Simple linear algebra problem: prove a matrix is invertible
-QUESTION [18 upvotes]: I'm preparing for a test in linear algebra and I've come across a problem I'm having trouble with for some reason:
-Given a square matrix $A$ with $A^2=2I$, prove that $A-I$ is invertible.
-I know this is pretty simple but I can't seem to play with the equations to get it so that for some $B$, $B(A-I)=I$.
-It's pretty easy to see that $A^{-1}=\frac{1}{2}A$, but beyond that I haven't been able to get very far.
-Can anyone help with this?
-
-REPLY [2 votes]: Suppose that $A-I$ is not invertible. Then $\lambda = 1$ is an eigenvalue of $A$, so there is an eigenvector $v \neq 0$ with $Av=v$, and hence $A^{2}v=v$. But $A^{2}=2I$ gives $A^{2}v=2v$, so $v=2v$, forcing $v=0$, a contradiction.<|endoftext|>
-TITLE: A matrix is unitary if, and only if, diagonalizable and all eigenvalues are on the unit circle
-QUESTION [5 upvotes]: I would like to know if the class of unitary matrices is the same as the class of diagonalizable matrices which have all their eigenvalues on the unit circle.
-
-REPLY [7 votes]: Every unitary matrix is diagonalizable and its eigenvalues are on the unit circle. However, the converse is not true, since unitary matrices have orthogonal eigenspaces, whereas your condition does not imply this.
-As an example, $\begin{bmatrix} 1 & 1 \\ 0 & -1 \end{bmatrix}$ has the two different eigenvalues $\pm 1$, so it is diagonalizable with eigenvalues on the unit circle.
However, it is not unitary.<|endoftext|>
-TITLE: Intended solution to proving $1994\mid 10^{900}-2^{1000}$ other than $1994\mid 10^{9k}-2^{10k}$
-QUESTION [7 upvotes]: Earlier in the week, while I was tutoring in the math lab, a student came to me asking for assistance in proving the following statement:
-
-$$1994\mid 10^{900}-2^{1000}$$
-
-The numbers were much too large to attempt to brute force; $1994 = 2\cdot 997$ with $997$ prime, so Fermat's little theorem seemed to be of little help here.
-Factoring $10^{900}-2^{1000} = 2^{900}(5^{450}+2^{50})(5^{225}+2^{25})(5^{225}-2^{25})$ reduces the problem to seeing if $997$ divides any of the above terms, but that didn't seem to help much either as, again, the numbers involved are so large.
-I was able to make a claim which, if true, would lead to a solution, but I had no way of knowing it was true ahead of time, namely that $1994\mid 10^{9k}-2^{10k}$ for each $k$. As it so happens, this claim turned out to be true, which yields the desired result by setting $k=100$. Giving him the claim as a clue, he was able to proceed.
-As it required proving a much stronger statement first and taking a stab in the dark, I wondered if this was indeed the intended solution or if there is something that I missed that yields the result a bit more easily.
-
-Is there a better/alternative way to prove this?
-
-
-Short version of my proof:
-Let $k=1$. Then $10^{9}-2^{10} = 999998976=1994\cdot 501504$, so the base case holds.
-Assume the claim holds for some $k\geq 1$. Then for $k+1$ we have:
-$10^{9(k+1)}-2^{10(k+1)} = 10^9\cdot 10^{9k}-2^{10}\cdot 2^{10k} = (10^9+2^{10}-2^{10})\cdot 10^{9k}-2^{10}\cdot 2^{10k}$
-$=2^{10}(10^{9k}-2^{10k})+(10^9-2^{10})\cdot 10^{9k}$, which is a sum of numbers divisible by $1994$, proving the claim.
-
-REPLY [7 votes]: We have
-$$10^{900}=1000^{300}\equiv 3^{300}\pmod{997}.$$
-Also
-$$2^{1000}=(2^{10})^{100}=1024^{100}\equiv 27^{100}=(3^{3})^{100}= 3^{300} \pmod{997}.$$
-Thus $10^{900}-2^{1000}\equiv 0\pmod{997}$.
And of course $10^{900}-2^{1000}\equiv 0 \pmod {2}$.<|endoftext|>
-TITLE: How to show $\zeta (1+\frac{1}{n})\sim n$
-QUESTION [5 upvotes]: How to show $\zeta (1+\frac{1}{n})\sim n$ as $n\rightarrow \infty$, where $\zeta$ is the Riemann zeta function.
-And can we say that $\lceil \zeta (1+\frac{1}{n}) \rceil=n$ for every positive integer $n\geq 1$? How would we prove it?
-
-REPLY [2 votes]: According to this page
-$$\zeta(s)=\frac{1}{s-1}+\sum_{k=0}^\infty \frac{(-1)^k}{k!} \gamma_k \; (s-1)^k$$
-the terms $\gamma_k$ being the Stieltjes constants (with $\gamma_0=\gamma$). So, if $s=1+\frac 1n$, $$\zeta\big(1+\frac 1n\big)=n+\sum_{i=0}^\infty \frac{(-1)^i}{i!} \gamma_i \; \frac 1{n^i}=n+\gamma+\sum_{i=1}^\infty \frac{(-1)^i}{i!} \; \frac{\gamma_i } {n^i}$$<|endoftext|>
-TITLE: What does "Mathematics of Computation" mean?
-QUESTION [10 upvotes]: I visited this link: http://www.ams.org/journals/mcom/1950-04-030/S0025-5718-50-99474-9/
-And I am a bit confused by its title, "Mathematics of Computation".
-
-I am not a native English speaker. Could anyone tell me what this phrase really means? What is the difference between:
-
-Mathematics
-Calculus
-Computation
-
-And how could Mathematics be used together with Computation?
-I think this could help me get a deeper understanding of what math is. Thanks.
-
-REPLY [6 votes]: It seems to me that this journal deals with computational mathematics and numerical methods. We're talking about applied mathematics in the realm of computation. Computation in this sense means calculating something. It does not necessarily imply that we are using a modern, transistor-based PC, but that is often related.
-Mathematics is a rather exact subject, but when it comes to actually computing numerical values, we are faced with challenges. As humans, our solution is to develop tools: stone tablets, the abacus, pencil and paper, the slide rule, the pocket calculator, the computer. These tools have advantages and disadvantages.
-Modern PCs are incredibly fast computers, but there are limitations.
Take speed, for instance. We're always striving for more powerful technology, but part of that research lies in the optimization of preexisting algorithms. For instance, you may be aware that matrices and computer graphics are intimately connected. Image processing and graphics rendering, to my knowledge, boil down to linear algebra. Even a basic operation such as matrix multiplication can turn into a burdensome task for a computer when matrix dimensions grow ever larger. To overcome these limitations, we develop more efficient algorithms.
-Another limitation is in the finite nature of computing calculations. How do we perform, say, integration when non-exact values and non-closed-form problems are involved? Or forget that, how do we handle continuous real variables? We might, for example, employ finite difference methods. Most non-linear systems of differential equations rely on such methods.
-More often than not, we have to sacrifice numerical accuracy for practicality. Anyone who's used $\pi \approx 3.14$ is guilty of this! When employing numerical methods, we often are forced to accept that exact values are pipe dreams. I say this without even broaching the subject of floating point error...
-
-... which would bridge us towards computer science. Anyway, the subject of computational mathematics can also deal with addressing these deficiencies.
-These are the sorts of maths that the journal likely explores, in both a theoretical and an applied sense.<|endoftext|>
-TITLE: Prove that $f$ is a constant function if $f(x)-f(y) \leq (x-y)^2$
-QUESTION [5 upvotes]: Assume $f$ is a function defined on the real numbers for which $f(x)-f(y) \leq (x-y)^2$ for all $x,y \in \mathbb{R}$. Prove that $f$ is a constant function.
-
-Attempt
-We have that $f(x)-f(0) \leq x^2$ and $f(0)-f(y) \leq y^2$, and thus $f(x)-f(y) \leq x^2+y^2$. But we also have that $f(x)-f(y) \leq (x-y)^2 = x^2+y^2 -2xy$. So that seems to imply that $xy$ is positive. I am not sure how much that helps, though.
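One derivative-free way to push the attempt through: applying the hypothesis to both orderings of a pair gives $|f(x)-f(y)| \leq (x-y)^2$; subdividing and telescoping then finishes the proof. A sketch:

```latex
% Split [y, x] into N equal pieces, x_i = y + (i/N)(x - y), and telescope:
|f(x)-f(y)| \;\le\; \sum_{i=0}^{N-1} \bigl|f(x_{i+1})-f(x_i)\bigr|
            \;\le\; N\left(\frac{x-y}{N}\right)^{2}
            \;=\; \frac{(x-y)^{2}}{N}
            \;\xrightarrow{\;N\to\infty\;}\; 0.
```

Since $N$ is arbitrary, $f(x)=f(y)$ for all $x,y$, so $f$ is constant.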
-
-REPLY [10 votes]: Fix any $x\in\mathbb R$ and $h\in\mathbb R\setminus\{0\}$. Then, letting $y\equiv x+h$, applying the hypothesis to both $(x,y)$ and $(y,x)$, and dividing by $|x-y|=|h|$, one has that $$\left|\frac{f(x+h)-f(x)}{h}\right|\leq|h|.$$ Letting $h\to 0$ implies that $f$ is differentiable at $x$ and $f'(x)=0$.<|endoftext|>
-TITLE: Prove that $1280000401$ is Composite
-QUESTION [11 upvotes]: I tried to prove that $N=1280000401$ is composite using complex cube roots of unity:
-we can write $$N=1+400+(128\cdot 10^{7})$$ which gives
-$$N=1+20^2+20^{7}$$
-Now, if $F(x)=1+x^2+x^7$, then $w$ and $w^2$ are roots of $F(x)=0$, where $w=\frac{-1+i\sqrt{3}}{2}$ and $w^2$ is its conjugate.
-Hence $x^2+x+1$ is a factor of $1+x^2+x^7$.
-Hence $1+20+20^2=421$ is a factor of $N$, and hence it is composite.
-But how can we prove that without using complex numbers?
-
-REPLY [8 votes]: You can say $1+x^2+x^7=1+x+x^2+(x^7-x)=1+x+x^2+x(x^3-1)(x^3+1)=(1+x+x^2)(1+x(x-1)(x^3+1))$,
-but your approach is a good one to find this. People have been very clever in finding things to try.<|endoftext|>
-TITLE: Fritz John PDE book chapter 1 exercise: Prove that $u$ vanishes identically if $au_x+bu_y=-u$
-QUESTION [9 upvotes]: I was trying out this question:
-Let $u$ be a solution in $C^1$ of the PDE
-$$ a(x,y)u_x + b(x,y)u_y = -u $$
-on the closed unit disc $\Omega$ in the xy-plane. Let $a(x,y)x + b(x,y)y > 0$ on the boundary of $\Omega$. Prove that $u$ vanishes identically.
-According to the hint, we are supposed to show that $\max_\Omega u \leq 0 $ and $\min_\Omega u \geq 0$. If the maximum/minimum occurs in the interior of $\Omega$, since $u_x = u_y = 0$ there, we get $\max u = \min u = 0 $, and so $u$ vanishes identically. Now, if, say, the maximum occurs on the boundary, we take $f(\theta) = u(\cos\theta,\sin\theta)$ and differentiate to get a condition at the maximum:
-$$ f'(\theta) = -u_xy + u_yx = 0 \implies u_x = \lambda x; \, u_y = \lambda y $$
-This is where I am stuck. To verify that this is a maximum and not a minimum, we need to differentiate $f$ further.
But we aren't allowed to do that since $u \in C^1$. Even if I disregard that constraint and differentiate anyway (and use $u_x,\,u_y$ from above), I get stuck with terms containing $\lambda_x,\,\lambda_y$.
-What am I missing? Or am I doing something completely wrong?
-
-REPLY [7 votes]: A key relevant concept is the directional derivative. The expression $au_x+bu_y$ is the directional derivative of $u$ along the vector $(a,b)$. And this is the direction you should differentiate in, not along the boundary.
-You want to rule out two scenarios: (a) $u$ has a strictly positive maximum; (b) $u$ has a strictly negative minimum. They are similar (and one reduces to the other by considering $-u$), so let's suppose (a) holds.
-You correctly observed that having $u(x_0,y_0)\ne 0$ at an interior stationary point is impossible, so this positive maximum $(x_0,y_0)$ has to be on the boundary. At this point, the directional derivative $au_x+bu_y$ is strictly negative. Crucially, the vector $(a,b)$ points out of the disk. Hence, we can move in the opposite direction and stay in the domain. For small $\epsilon>0$, this yields higher values of $u$: $$u(x_0-\epsilon a, y_0-\epsilon b)>u(x_0,y_0)$$
-A contradiction.<|endoftext|>
-TITLE: Commutative diagrams: including the inverse map
-QUESTION [5 upvotes]: Suppose I have a commutative diagram in which some of the arrows are isomorphisms. It is an interesting fact that the diagram does not necessarily remain commutative if I add the inverses of these arrows, even if they are identity maps. For instance, in the following diagram,
-
-I cannot necessarily travel by $g^{-1}=\text{Id}$, since if I start at the lower left copy of $\mathbb{Z}$, I have that $j \circ g^{-1} \circ f \neq h$.
-Despite this being evident, it still seems sort of surprising to me that I can't always travel by the inverse arrow. If someone could shed some more light on it in any way, I'd appreciate it.
For instance, under what circumstances may I include the inverse arrow and have the diagram remain commutative?

-REPLY [4 votes]: Let $\mathcal{C}$ be a category, $\mathcal{P}$ be a preordered graph (i.e. a subgraph of a preorder), and $D\colon\mathcal{P}\to\mathcal{C}$ be a diagram. Then the diagram $D$ is called commutative iff $D$ lifts to a functor $F(\mathcal{P})\to \mathcal{C}$, where $F(\mathcal{P})$ is the free preorder generated by $\mathcal{P}$. If we add inverses for some set of arrows $S\subset Arr(\mathcal{P})$ to $\mathcal{P}$, obtaining a preordered graph $\mathcal{P}_{S}$, and the diagram $D$ extends to a diagram $D_{S}\colon\mathcal{P}_{S}\to\mathcal{C}$, then $D_{S}$ is commutative iff $D_{S}$ lifts to a functor from a localization $F(\mathcal{P})[S^{-1}]\to\mathcal{C}$.
-If the image $D(Arr(\mathcal{P}))$ consists only of isomorphisms, then for every $S\subset Arr(\mathcal{P})$ the diagram $D_{S}$ lifts to a functor $F(\mathcal{P})[S^{-1}]\to\mathcal{C}$, namely, to a functor $F(\mathcal{P})[S^{-1}]\to F(\mathcal{P})[\mathcal{P}^{-1}]\to\mathcal{C}[D(\mathcal{P})^{-1}]$, because in this case $\mathcal{C}[D(\mathcal{P})^{-1}]=\mathcal{C}$ is a trivial localization!
-Actually, the diagram $D_S$ always commutes in $\mathcal{C}[D(\mathcal{P})^{-1}]$. For example, let $\mathcal{C}=\mathcal{K}(\mathcal{A})$ be the homotopy category of an abelian category $\mathcal{A}$ and $D$ be a diagram of quasi-isomorphisms in $\mathcal{A}$, some of which are isomorphisms in $\mathcal{K}(\mathcal{A})$. If we add all inverses of the isomorphisms to this diagram, then it commutes in the derived category $\mathcal{D}(\mathcal{A})$.
-Your diagram, of course, consists not only of isomorphisms.
The localization of the free preordered graph generated by it (with inverses of the identity morphisms in $\mathbf{Ab}$ added) is the full graph on four vertices, so each $\mathbf{Ab}$-arrow in the image of the original diagram would have to be an isomorphism, but they aren't.<|endoftext|>
-TITLE: Does the logarithm function grow slower than any polynomial?
-QUESTION [24 upvotes]: Does $f(x)=ax^b$ grow faster than $g(x)=\ln x$ for all $a, b > 0$? Can I say that $f(x) > g(x)$ as $x$ approaches infinity?
-I thought the answer was yes, but this graph appears to be telling a different story.
-
-Is the polynomial (the green curve) going to cross the log function (the red curve) and exceed it in value for some large value of $x$?
-If the answer is yes, does it mean that if I subtract the two functions and set the difference to zero, the resulting equation will have two roots? What are the roots?
-
-REPLY [22 votes]: The answer is yes, although in some cases (like the one you have given) it takes a very long time for the polynomial function to catch up to and ultimately dominate the log function.
-A rigorous formulation of what you are saying is:
-$$ \lim_{x \to \infty} \frac{\log(x)}{P(x)}=0$$
-where $P(x)$ is any polynomial of positive degree. The limit tending to zero just means that the bottom term dominates as $x \to \infty$.
-Here is a proof of the limit equality for the case $P(x)=x^b$ for some $b>0$. The case of polynomials follows as an easy corollary.
-$$ \lim_{x \to \infty} \frac{\log(x)}{x^b} = \lim_{x \to \infty} \frac{1/x}{bx^{b-1}} = \lim_{x \to \infty} \frac{1}{bx^b}=0$$
-where the first equality follows from l'Hôpital's rule.<|endoftext|>
-TITLE: Uniformly continuous functions sequence $f_n(x)$ converges uniformly to a uniformly continuous function $f(x)$?
-QUESTION [6 upvotes]: We know that if a sequence of continuous functions $g_n(x)$ converges uniformly to $g(x)$, then $g(x)$ is a continuous function.
-But what if a sequence of uniformly continuous functions $f_n(x)$ converges uniformly to $f(x)$?
Does that imply that $f(x)$ is a uniformly continuous function? -If so, how do we prove it? What $\delta$ should we take for $f(x)$? We can't simply choose $\min\{ \delta_n \}$. -And what if $f_n(x)$ converges pointwise to $f(x)$? Will $f(x)$ be continuous? Uniformly continuous? Or is pointwise convergence not enough? - -REPLY [9 votes]: Claim: Let $f_n$ be a sequence of uniformly continuous functions which converges uniformly to $f$; then $f$ is uniformly continuous. -Proof: Let $\epsilon >0$. Since $f_n \to f$ uniformly, there is an $n$ so that -$$|f_n(x) - f(x) | < \epsilon /3$$ -for all $x$. Since $f_n$ is uniformly continuous, there is $\delta >0$ so that -$$|f_n(x) - f_n(y)|<\epsilon/3$$ -whenever $|x-y|<\delta$. Thus -$$|f(x) - f(y)| \le |f(x) - f_n(x) | + |f_n(x) - f_n(y)| + |f_n(y) - f(y)| < \epsilon$$ -whenever $|x-y|<\delta$. Thus $f$ is uniformly continuous. -If $f_n \to f$ only pointwise, $f$ might not even be continuous (think of $x^n$ on $[0,1]$).
- -And here is one of the three conditions in the definition of a $\Delta$-complex: - -A $\Delta$-complex structure on a space $X$ is a collection of maps $\sigma_\alpha:\Delta^n\to X$, with $n$ depending on the index $\alpha$, such that (i) The restriction $\sigma_\alpha\big|\mathring{\Delta}^n$ is injective, and each point of $X$ is in the image of exactly one such restriction $\sigma_\alpha\big|\mathring{\Delta}^n$. - -Considering that there are so many singular $n$-simplices, how does this condition hold? -In particular, why don't we lose the injectivity condition? -If I get the idea of $S(X)$, then I will see that $H_n^\Delta(S(X))$ is identical with $H_n(X)$ for all $n$. - -REPLY [5 votes]: I am thinking that perhaps your question is about the formalities of forming the quotient space $S(X)$ by gluing together simplices. So I'll write an answer along those lines. -To allay any possible confusion, I am going to imagine that the second yellow box has been rewritten with $X$ replaced by $Y$. -The goal is to construct the $\Delta$-complex -$$Y=S(X) -$$ -starting with any given topological space $X$. -The $\Delta$-complex $Y$ is constructed as a certain quotient space. To do this carefully and rigorously, one needs a domain of the quotient map. I'll explicitly construct this domain, denoted $D$. -Informally, $D$ is a disjoint union of copies of standard simplices, one for each singular simplex of $X$. To do this formally, I'll need notation for the set of singular simplices. -Let $\Sigma$ be the set of all singular simplices $\sigma : \Delta^n \to X$, for all $n=0,1,2,...$. Let me break $\Sigma$ into disjoint subsets, one for each value of $n=0,1,2,...$, where $\Sigma_n$ is the set of all $n$-dimensional singular simplices $\sigma : \Delta^n \to X$.
The domain of the quotient map is a disjoint union of copies of standard simplices, one for each element of the set $\Sigma$, using the $n$-dimensional standard simplex for elements of the subset $\Sigma_n$: -$$D = \sqcup_{n=0}^\infty \,\, \Sigma_n \times \Delta^n -$$ -Using Cartesian products in this manner is a standard formal way to produce disjoint unions, by employing the index set with the discrete topology as one of the Cartesian factors (in this case, $\Sigma_n$). -Now we have to describe the gluing maps used to form $Y$ as a quotient space of $D$. There is one such gluing map for each $n$, each $\sigma \in \Sigma_n$, and each $(n-1)$-dimensional face $K \subset \Delta^n$. Namely, letting $i_K : \Delta^{n-1} \to K \subset \Delta^n$ denote the standard face map, and letting $\sigma | K \in \Sigma_{n-1}$ denote the composition -$$\sigma | K = \Delta^{n-1} \rightarrow^{i_K} K \rightarrow^{\subset} \Delta^n \rightarrow^{\sigma} X -$$ -for each $x \in \Delta^{n-1}$ we identify the point $(\sigma|K,x) \in \Sigma_{n-1} \times \Delta^{n-1}$ with the point $(\sigma,i_K(x)) \in \Sigma_n \times \Delta^n$. What this does is to identify the $(n-1)$-dimensional simplex -$$\{\sigma | K\} \times \Delta^{n-1} \subset \Sigma_{n-1} \times \Delta^{n-1} \subset D -$$ -with the $(n-1)$-dimensional face -$$\{\sigma\} \times K \subset \Sigma_n \times \Delta^n \subset D -$$ -of the $n$-dimensional simplex -$$\{\sigma\} \times \Delta^n \subset \Sigma_n \times \Delta^n \subset D -$$ -Make all those identifications on $D$, one for each $n$, $K$, and $\sigma$, and you get the quotient $\Delta$-complex $Y=S(X)$.<|endoftext|> -TITLE: Is homology of a chain complex a universal delta-functor? -QUESTION [7 upvotes]: Let $\mathcal{A}$ be an abelian category and let $Ch(\mathcal{A})$ be the category of homologically, non-negatively graded chain complexes in $\mathcal{A}$. The sequence of homology functors $H_n:Ch(\mathcal{A})\to \mathcal{A}$ is a (in fact, the prototypical) $\delta$-functor.
My question is: - -Is it true that $(H_n)_{n\in \mathbb{N}}$ is a universal $\delta$-functor? - -Intuitively, this of course should be the case, but I couldn't find a direct argument. What I was able to show is that, if $\mathcal{A}$ has enough projectives, then $Ch(\mathcal{A})$ has enough projectives and we can show that the homology functors are the derived functors of $H_0$ and thus by the general theory a universal $\delta$-functor. This method requires the identification of projectives in $Ch(\mathcal{A})$ and a (simple) spectral sequence argument for the double chain complex obtained from the projective resolution of a complex in $Ch(\mathcal{A})$. It also gives a bit more, as one can extend the argument to show that the total derived functor of $H_0$ is quasi-isomorphic to the identity. -I don't worry much about the "enough projectives" hypothesis, but I would like to see a direct argument for the seemingly tautological fact that homology is a universal $\delta$-functor. - -REPLY [3 votes]: By the effaceability criterion, it suffices to show that any complex $X$ can be written as a quotient of a complex with vanishing higher homology.
In the unbounded setting, you can take the cone over $\Sigma^{-1} X$, which is even contractible, and for non-negative chain complexes, you need to truncate this cone at $0$.<|endoftext|> -TITLE: The logic behind a sequence -QUESTION [5 upvotes]: I am trying to get the logic behind the sequence: -for $n=2,3,\ldots$ -$$\left(\frac{\log (2)}{\log \left(\frac{3}{2}\right)},\frac{\log (3)}{\log \left(\frac{17}{9}\right)},\frac{\log (4)}{\log \left(\frac{71}{32}\right)},\frac{\log (5)}{\log \left(\frac{1569}{625}\right)},\frac{\log (6)}{\log \left(\frac{899}{324}\right)},\frac{\log (7)}{\log \left(\frac{355081}{117649}\right)},\frac{\log (8)}{\log \left(\frac{425331}{131072}\right)},\frac{\log (9)}{\log \left(\frac{16541017}{4782969}\right)},\frac{\log (10)}{\log \left(\frac{5719087}{1562500}\right)},\frac{\log (11)}{\log \left(\frac{99920609601}{25937424601}\right)},\frac{\log (12)}{\log \left(\frac{144619817}{35831808}\right)},\ldots\right)$$ -for $n=30$ it is $$\frac{\log (30)}{\log \left(\frac{53774416559964522337191179}{16}\right)-16 \log (3)-23 \log (5)}$$ -$\textbf{Background}$: -This is part of a project to figure out how slowly the errors of power law distributed sums obey the law of large numbers. So it may not be necessary to find the logic of the sequence above, but these correspond to the exponent $\left\{{\alpha:\frac{MD(n)}{MD(1)}=\left(\frac{1}{n}\right)^{1-\frac{1}{\alpha }}}\right\}$,where $MD(n)$ is the mean absolute deviation of an n-summed Student T distributed variable with tail exponent/degrees of freedom equal 3 (and a mean of $0$). Numerically we get $$\{1.70951,1.72741,1.73951,1.74855,1.7557,1.76158,1.76655,1.77084,1.7746,1.77795,1.78095,1.78367,1.78615,1.78843,1.79054,1.79249,1.79432,1.79602,1.79762,1.79913,1.80056,1.80191,1.80319,1.80441,1.80557,1.80669,1.80775,1.80877,1.80974\},$$ -a slow convergence to 2, which is the case with a Normal Distribution. - -REPLY [3 votes]: Let me call $b(n)$ the $n$-th term of your sequence. -It is easy(?) 
to see that -$$ -b(n) = \frac{\log(n)}{\log a(n)-(n-1)\log(n)} -$$ -where -$$ -a(n)=\sum_{k=1}^n\frac{n!\cdot n^{n-k-1}}{(n-k)!}=\frac{1}{n}\left[n^n+\sum_{k=1}^{n-1}{n\choose k}(n-k)^{n-k}k^k\right]=\frac{e^n\cdot \Gamma(n+1,n)}{n}-n^{n-1}\,, -$$ -where -$$\Gamma(a,z)=\int_z^\infty t^{a-1}e^{-t}\,dt$$ -is the incomplete Gamma function. -Actually $a(n)$ is the sequence A001865 (Number of connected functions on n labeled nodes) in the OEIS, so if you follow the link, you will find more details, asymptotics and other combinatorial interpretations for $a(n)$. Hope this helps.<|endoftext|> -TITLE: Find all real and continuous functions that are a 3-involution. -QUESTION [5 upvotes]: Find all continuous functions $f: \mathbb R \rightarrow \mathbb R$ such that $f(f(f(x)))=x$. -Obviously one solution to this functional equation is $f(x)=x$. -If the function is NOT continuous, there are also other solutions such as $f(x)=\frac{1}{1-x}$, but I'm not sure how to find all solutions that are continuous. - -REPLY [3 votes]: Observe that $f$ must be injective. Now, since $f$ is continuous and injective on the interval $\mathbb{R}$, we conclude that $f$ is monotone. Now the variations of $f\circ f\circ f$ are those of $f$, and we conclude that $f$ must be increasing. -Now, let $x\in\mathbb{R}$. There are three cases: - -$f(x)<x$, -$f(x)>x$, -$f(x)=x$. - -We'll show that cases 1. and 2. are impossible: - -Assume that $f(x)<x$. Then, since $f$ is increasing, $f(f(x))<f(x)$ and $x=f(f(f(x)))<f(f(x))$, from which we conclude that $x<x$, which is impossible. -Assume that $f(x)>x$. Then, $f(f(x))>f(x)$ and $x=f(f(f(x)))>f(f(x))$, from which we conclude that $x>x$, which is impossible. - -Hence we must have $f(x)=x$.
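As an aside, the discontinuous example mentioned in the question, $f(x)=\frac{1}{1-x}$, really is a 3-involution away from the bad points; here is a quick mechanical check in exact rational arithmetic (a sketch, not part of the proof above):

```python
from fractions import Fraction

# f(x) = 1/(1 - x) satisfies f(f(f(x))) = x wherever all three
# applications are defined (i.e. x not in {0, 1}).
def f(x):
    return Fraction(1) / (1 - x)

for x in [Fraction(2), Fraction(1, 3), Fraction(-7, 5)]:
    assert f(f(f(x))) == x
```

For instance, starting from $x=2$ the orbit is $2 \mapsto -1 \mapsto \tfrac12 \mapsto 2$.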
- - -This proof easily generalizes to the case of a continuous function $f:\mathbb{R}\longrightarrow\mathbb{R}$ such that -$$\forall x\in\mathbb{R},\ f^{[2p+1]}(x)=x,$$ -where $p\in\mathbb{N}$ and $f^{[2p+1]}$ stands for the $(2p+1)$-th iterate of $f$.<|endoftext|> -TITLE: Differentiation: Vectors or scalars -QUESTION [6 upvotes]: As you know, we can geometrically interpret the set of all reals $\Bbb R$: every point on the line can be uniquely expressed by a real number x (its coordinate). However, we could go a step further and use the line to geometrically express operations between real numbers, such as addition. In order for us to do this we introduce vectors, and now every point on the line and every coordinate associated with that point represent a vector starting from the origin and ending at the given point. Thus every real number either represents a point on the line or a vector on the line. Now, in the definition of the derivative $$ \frac{{\rm d} }{{\rm d}x} y(x) =\lim_{h\rightarrow0} \frac{1}{h}( y(x+h)-y(x)) $$ we have x+h, which only makes sense geometrically if x and h are both vectors (addition of points on the line makes no sense). My question is: In order to have a meaningful definition of the derivative, should the function y(x) be geometrically interpreted as mapping vectors to vectors? - -REPLY [2 votes]: In a nitpicking sense, the derivative -$$ -f'(x) = \lim_{h \to 0} \frac{f(x + h) - f(x)}{h} -$$ -of a function $f$ is indeed a scaling map that specifies how a (small) displacement (vector) $h$ at a point $x$ gets stretched into a displacement vector $f'(x)\, h$ at $y = f(x)$. -Commonly this relationship is expressed in terms of points as $f(x + h) \approx f(x) + f'(x)\, h$. The derivative itself, $f'(x)$, is neither a point nor a vector. -In more general settings (such as coordinate-free formulations of mechanics), "points" live in a space $M$ (a "manifold").
To formulate differential calculus, one introduces an entirely new space $TM$ (the "tangent bundle"), whose elements may be viewed as pairs $(x, h)$ consisting of a point $x$ of $M$ and a displacement $h$ at $x$. -In this setting, if $f:M \to N$ is a differentiable mapping, its "(total) derivative" is a mapping $f_{*}:TM \to TN$ defined by -$$ -f_{*}(x, h) = \bigl(f(x), f'(x)\, h\bigr). -$$ -The derivative $f'(x):T_{x} M \to T_{f(x)} N$ is a "linear transformation" between tangent spaces, a generalization of a scaling map. -The conceptual asymmetry between points and vectors becomes starkly visible in this setting. Adding points makes no sense at all. The sum "$x + h$" of a point and a vector cannot be viewed as a point of $M$. It can be viewed as a point of $TM$, but doing so doesn't have much use. Instead, one "probes the infinitesimal structure of $M$" with "paths through $x$ whose velocity is $h$".<|endoftext|> -TITLE: Convergence of zeta functions for schemes of finite type over the integers -QUESTION [10 upvotes]: In his lecture "Zeta functions and $L$-functions", Serre presents a very elegant proof of the convergence of the zeta function -$ \zeta (X,s) = \prod_{x \in |X|} (1- N(x)^{-s})^{-1}$ in the half plane $R(s) > dim(X)$, where $X$ is a scheme of finite type over $\mathbb{Z}$, $|X|$ the set of closed points of $X$ and $N(x)$ the number of elements in the residue field $k(x)$. -He reduces the claim to the case where $X = Spec \, A[x_1, \ldots x_n]$ and $A$ is either $\mathbb{Z}$ or $\mathbb{F}_p$. -The decisive input is the following lemma: -a) If $X$ is the finite union of the schemes $X_i$, and the claim holds for all $X_i$, then it holds for $X$. -b) If $f: X \to Y$ is finite and the claim holds for $Y$, then it holds for $X$ as well. -I've been trying to prove b) but I seem to be missing something. Here's what I've tried so far: -I was considering $\zeta(X,s) = \prod_{y \in |Y|} \zeta(X_y \, ,s)$, where $X_y$ is the fiber of $f$ at $y$. 
I know the fibers are finite but I don't know how to connect this with the fact that $\zeta(Y,s)$ converges. Is it true that the residue field $k(x)$ is a finite extension of $k(y)$ for all $x \in X_y$ (of degree at most $\deg f$)? I know this is the case for the function fields. -Any help is much appreciated! - -REPLY [5 votes]: Let's see. The key point is that, if we consider the case of a scheme of finite type over a finite field $X/k$, $k={\mathbb F}_q$, there is an equivalence between the 'absolute' zeta function as you define it and the relative zeta function: -$$Z(X,k;t)=\exp\{\sum_{m\geq 1} \frac{N_m}{m} t^m\},$$ - where $N_m=card(X({\mathbb F}_{q^m}))$ is the number of rational points of $X$ over the unique extension of degree $m$ over the base field $k$. -The fact is that, if we have a rational point over an extension $k_m$, then the image is also defined over such an extension! That's the rationale behind the bound provided (simple as hell!). That's why, if $x\in X(k_m)$, then so is $f(x)$, and there are clearly no more $k_m$-defined points on $X$ above $f(x)$ than $deg(f)$. And that's that! -In fact, the "absolute" zeta function is derived from the former by the substitution $t=q^{-s}.$ -This is key, for in our case, to every finite morphism $f:X\to Y$ (in the case where $X, Y$ are defined over a finite field $k$) corresponds an easy bound -$$N_m(X)\leq deg(f)\cdotp N_m(Y).$$ We do not think in terms of the field generated by the coordinates of our points, but we merely ask that these belong to a fixed field $k_m.$ This facilitates our count enormously, and enables us to use the degree of $f$ efficiently. -With this bound, you can obtain the desired convergence result by taking an Euler product over all (finite) characteristics.
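The bound $N_m(X)\leq \deg(f)\cdot N_m(Y)$ can be seen in a toy computation. The following sketch (my own illustration, not from the answer) counts $\mathbb{F}_p$-points on the affine curve $y^2=x^3+x$, which maps finitely with degree $2$ onto $Y=\mathbb{A}^1$ via $(x,y)\mapsto x$, so the count can never exceed $2\cdot N(\mathbb{A}^1)=2p$:

```python
# Brute-force F_p-point count for the affine curve X: y^2 = x^3 + x.
def count_points(p):
    return sum(1 for x in range(p) for y in range(p)
               if (y * y - (x ** 3 + x)) % p == 0)

for p in (3, 5, 7, 11, 13):
    n_points = count_points(p)
    # Finite map of degree 2 onto the affine line, so N(X) <= 2p.
    assert n_points <= 2 * p
```

For example, over $\mathbb{F}_5$ the curve has $3$ affine points, comfortably below the bound $10$.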
-I am pretty sure that Serre's paper contained this kind of background (I don't have it here with me), but in any case Mircea Mustata has a lovely set of notes on the matter: -http://www.math.lsa.umich.edu/~mmustata/zeta_book.pdf -Needless to say, I'll just remind you that the dimension of an algebraic scheme is its Kronecker dimension, i.e. an elliptic curve over $\mathbb{Z}$ is of dimension $1+1=2$ (that's why it's called an arithmetic surface!). This does indeed count when you write bounds on the product, Euler-style. -Let us deal with the case where $X \to Spec(\mathbb{Z})$ misses a finite number of points of its target. -Taking logarithms, one sees that $\log \zeta(X_p,s)$ is equivalent to $C_p p^{-(s-d)}$, where $d$ is the fibre dimension of the structure map ($C_p$ is controlled essentially by $deg(f)$ and by $Y$, and is $\leq deg(f)$ if our $f$ has the affine space over $\mathbb{Z}$ as its target). It suffices to argue as in the case of the Riemann zeta function so as to establish that the infinite product converges for $Re(s-d)>1$, and since $\dim X=d+1$, we are done. I can imagine, though, that using the existence of a finite $f:X\to Y$ does imply, through the above bounds, that the absolute zeta function of $X$ converges whenever $Re(s)>\dim X$. -In the case where we have a finite morphism $f:X\to \mathbb{A}^n_{\mathbb{Z}}$ (or finite over an open subset of $Spec(\mathbb{Z})$), the zeta function of $Y$ corresponds to $\zeta(s-n)$, and the lower bound for $Re(s)$ is $n+1$, i.e. the Kronecker dimension of the schemes involved. -That's how I did it, way back when. Should you need further clarification, just ask.
I still do not know what exactly to do. -More precisely, it is the description in the exercise and exercise (a) that I do not understand completely; the rest I understand. - - -E2) Sia $f\colon\mathbb{C}^4\to\mathbb{C}^4$ una trasformazione lineare e si supponga che la matrice associata a $f$ rispetto alla base $\mathcal{B} = \{\mathbf{e}_2; \mathbf{e}_1; \mathbf{e}_3+\mathbf{e}_4; \mathbf{e}_3+\mathbf{e}_2\}$ su dominio e codominio ($\mathbf{e}_i$ sono i vettori della base canonica di $\mathbb{C}^4$) sia -$$\mathbf{A} = \begin{bmatrix}3&3&2&2\\ 3&3&2&2\\ 0&0&1&1\\ 0&0&1&1\end{bmatrix}$$ -(a) Si determini la matrice $\mathbf{B}$ associata a $f$ rispetto alle basi canoniche. -(b) Si calcoli la dimensione dell'immagine di $f$. -(c) Si dica se la matrice $\mathbf{B}$ è diagonalizzabile. -(d) Si calcoli una base dello spazio nullo dell'applicazione lineare $f$. - -REPLY [9 votes]: Let $f:\mathbb{C}^4 \rightarrow \mathbb{C}^4$ be a linear map and suppose that the matrix associated with $f$ with respect to the basis $\mathcal{B}=\{ e_2,e_1,e_3 + e_4, e_3 + e_2\}$ on both domain and codomain (where $e_i$ are the canonical basis vectors of $\mathbb{C}^4$) is -(the matrix $A$ written above in the exercise). -(a) Determine the matrix $B$ associated with $f$ with respect to the canonical bases of $\mathbb{C}^{4}$. -(b) Calculate the dimension of the image of $f$. -(c) Say whether the matrix $B$ is diagonalizable. -(d) Calculate a basis of the null space of the linear map $f$.
- -REPLY [5 votes]: Let $C$ be the matrix that represents the linear transformation; the transformation is conformal iff it preserves angles, so we must have, for all vectors $x,y$: -$$ -\frac{\langle Cx,Cy \rangle}{|Cx||Cy|}=\frac{\langle x,y \rangle}{|x||y|} -$$ -Now consider the standard orthonormal basis $\{e_i\}$; we have, for $i\ne j$: -$$ -\langle e_i,e_j \rangle=0 \Rightarrow \langle Ce_i,Ce_j \rangle=0 \Rightarrow e_j^T(C^TC)e_i=0 -$$ -and this means that all the off-diagonal elements of $C^TC$ are null: $(C^TC)_{ij}=0$. -We also have: -$$ -\langle e_i-e_j,e_i+e_j \rangle=0 \Rightarrow \langle C(e_i-e_j),C(e_i+e_j) \rangle=0 -$$ -and, in the same way, this means that the diagonal elements of $C^TC$ are all identical: $(C^TC)_{ii}=(C^TC)_{jj}$, so we have that -$$C^TC=\lambda I $$ -Now let $C=\sqrt{\lambda}R$; we have: -$$ -C^TC=\lambda I \Rightarrow R^TR=I -$$ -and $R$ is a matrix that represents an orthogonal transformation; if by preserving angles we also mean preserving orientation, $R$ is a rotation and $C$ is the product of a scalar transformation and a rotation.<|endoftext|> -TITLE: $52$ cards reciprocal sum probability -QUESTION [17 upvotes]: Imagine a deck of $52$ cards but instead of having suits and ranks, they only have sequential (unique) integer ranks from $1$ to $52$. You could also imagine a standard deck of $52$ cards but convert the ranks and suits to an integer number from $1$ to $52$ so that all the cards have different numbers assigned to them. -So the question is, what is the probability, if you randomly choose exactly $10$ cards from that deck without replacement, that the sum of the reciprocals of the cards' ranks (from $1$ to $52$) totals exactly $1$? For example, if you chose cards $5,10,15,20,25,30,35,40,45$ and $50$, the sum of the reciprocals would only be $0.58579$... so that is too small.
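That example sum is easy to confirm in exact rational arithmetic; a quick sketch using Python's `Fraction`, which avoids any floating-point rounding:

```python
from fractions import Fraction

# Reciprocal sum of the example hand 5, 10, ..., 50.
# This equals (1/5) * H_10, where H_10 is the 10th harmonic number.
cards = [5, 10, 15, 20, 25, 30, 35, 40, 45, 50]
total = sum(Fraction(1, c) for c in cards)

assert total != 1                         # not an exact solution
assert abs(float(total) - 0.58579) < 1e-5
```

The exact value is $\frac{7381}{12600}\approx 0.5857937$, matching the figure quoted above.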
-As a hint, I believe if you sort the cards in ascending order, the first card (the lowest rank card) MUST be between $2$ and $6$ (inclusive) to be a candidate solution. This is because $1/1$ is already a sum of $1$ so any additional cards will make it too large a sum. Also $7$ as a lowest card will not work because $1/7$ + $1/8$... + $1/16$ = $0.93$... so the lowest card MUST be a $2,3,4,5$, or $6$. I am re-running my modified simulation now with that added information to prune the state space it must check. -Also I am seeing multiple solutions so there is not just $1$ solution, but the probability is likely very low, only a small fraction of $1$% I would guesstimate. -Also note that many solutions are very close to $1$ but not exactly $1$, thus making computer simulation of this type of problem more difficult. An example of a "close solution" is $2,4,13,25,35,41,47,50,51,52$ which evaluates to $0.99999995750965$. The closest sum not equal to $1$ happens with $3,4,8,14,17,22,26,29,46,47$ which evaluates to $1.00000000288991$. That is $8$ zeros after the $1$. -An interesting note... That number very close to $1$ is obtained by summing only $10$ terms. This is impressive since even summing negative powers of $2$, which converges to $1$ ($1/2$ + $1/4$ + $1/8$...), it takes $28$ terms to almost match that closeness to $1$ and $29$ terms to beat it. - -REPLY [4 votes]: The a=6 case can be eliminated by pencil and paper. -Consider that 1/1 = 1, 1/2 = 7, 1/3 = 9, and 1/4 = 10, all mod 13. The only multiple of 13 we can get out of sums of these numbers is 7+9+10=26, which shows that 1/26+1/39+1/52=1/12 is feasible, but 1/13 must not appear in the sum. -Also 1/1 = 1, 1/2 = 6, 1/3 = 4, 1/4 = 3, all mod 11. The only multiple of 11 in this case is 1+6+4=11, so 1/44 is excluded, and if 1/11 is in the sum, then 1/22 and 1/33 must also be.
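The inverse computations above are quick to double-check; a small sketch using Python's built-in modular inverse `pow(k, -1, m)`:

```python
# 1/k mod 13 for k = 1..4 should be 1, 7, 9, 10; mod 11 it should be 1, 6, 4, 3.
assert [pow(k, -1, 13) for k in (1, 2, 3, 4)] == [1, 7, 9, 10]
assert [pow(k, -1, 11) for k in (1, 2, 3, 4)] == [1, 6, 4, 3]

# Mod 13, the sum of residues 7 + 9 + 10 = 26 is indeed a multiple of 13,
# and mod 11 the sum 1 + 6 + 4 = 11 is a multiple of 11.
assert (7 + 9 + 10) % 13 == 0
assert (1 + 6 + 4) % 11 == 0
```

(The three-argument `pow` with a negative exponent requires Python 3.8+.)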
-These two considerations alone mean that a=6 is impossible: -sum(1.0/[6,7,8,9,10,11,12,13,14,15]) = 1.034896 -sum(1.0/[6,7,8,9,10,11,12,14,22,33]) = 0.9670634 -sum(1.0/[6,7,8,9,10,12,14,15,16,17]) = 0.9883869 - -Similar considerations exclude a total of 20 cards. With that, along with the exclusion of card 1 by its magnitude, the deck is down to 31 cards and there are only 4 possibilities for the lowest card. Throw in an early-out test at about the seventh card and compiled code gets down to about 30 ms. The interpreted code should go through in tolerable time. Obviously not an improvement on brute force because that is quick enough and less error-prone.<|endoftext|> -TITLE: Convergence of $\int_{0}^{+\infty} \frac{x}{1+e^x}\,dx$ -QUESTION [8 upvotes]: Does this integral converge to any particular value? -$$\int_{0}^{+\infty} \frac{x}{1+e^x}\,dx$$ -If the answer is yes, how should I calculate its value? -I tried to use convergence tests but I failed due to the complexity of the integral itself. - -REPLY [2 votes]: It is possible to generalize the result. -Claim: $$I=\int_{0}^{\infty}\frac{x^{a}}{e^{x-b}+1}dx=-\Gamma\left(a+1\right)\textrm{Li}_{a+1}\left(-e^{-b}\right) - $$ where $\textrm{Li}_{n}\left(x\right) - $ is the polylogarithm and $a>0$. 
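Before the derivation, the claim can be sanity-checked numerically in the special case $a=1$, $b=0$, where it reduces to the original integral: $-\Gamma(2)\operatorname{Li}_{2}(-1)=\pi^{2}/12\approx 0.8224670$. A rough midpoint-rule quadrature (my own sketch; the truncation point and step size are arbitrary choices):

```python
import math

# ∫_0^∞ x/(e^x + 1) dx, truncated at 60 (the tail is about 61·e^{-60}, negligible).
def fermi_integral(h=1e-3, upper=60.0):
    n = int(upper / h)
    return h * sum(((i + 0.5) * h) / (math.exp((i + 0.5) * h) + 1.0)
                   for i in range(n))

print(fermi_integral(), math.pi ** 2 / 12)  # both approximately 0.822467
```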
- -Consider $$\left(-1\right)^{n}\int_{0}^{\infty}x^{a}e^{-n\left(x-b\right)}dx=\frac{\left(-1\right)^{n}e^{-nb}}{n^{a+1}}\int_{0}^{\infty}y^{a}e^{-y}dy=\frac{\Gamma\left(a+1\right)\left(-1\right)^{n}e^{-bn}}{n^{a+1}} - $$ and recalling that $$\textrm{Li}_{k}\left(x\right)=\sum_{n\geq1}\frac{x^{n}}{n^{k}} - $$ we have $$\Gamma\left(a+1\right)\textrm{Li}_{a+1}\left(-e^{-b}\right)=\sum_{n\geq1}\left(-1\right)^{n}\int_{0}^{\infty}x^{a}e^{-n\left(x-b\right)}dx=\int_{0}^{\infty}x^{a}\sum_{n\geq1}\left(-1\right)^{n}e^{-n\left(x-b\right)}dx= - $$ $$=-\int_{0}^{\infty}\frac{x^{a}}{e^{x-b}+1}dx.$$ -Maybe it's interesting to note that the function $x^{a}/\left(e^{x-b}+1\right)$ is the Fermi-Dirac distribution.<|endoftext|> -TITLE: Relative consistency of slightly modified GCH -QUESTION [9 upvotes]: Is it consistent with ZFC that $2^\kappa=\kappa^{++}$ for all regular $\kappa$ and $2^\kappa=\kappa^{+}$ for all singular $\kappa$? - -REPLY [3 votes]: Sure. -Easton's theorem tells us that if $F(\kappa)$ satisfies $\operatorname{cf}(F(\kappa))>\kappa$, then it is consistent that for all regular cardinals $F(\kappa)=2^\kappa$, and for singular cardinals the least possible value is taken. -Starting with a model of $\sf GCH$ the function $F(\kappa)=\kappa^{++}$ satisfies the needed conditions, and it is not hard to check that for singular cardinals the least possible value is indeed $\kappa^+$. -So this theory is indeed equiconsistent with $\sf ZFC$. If, however, you want all cardinals to satisfy $2^\kappa=\kappa^{++}$, you will have to assume the consistency of some large cardinals, as this will imply the violation of $\sf SCH$ at every singular cardinal. But this is possible, if you assume the consistency of enough large cardinals. This MathOverflow thread and the linked questions there are relevant to this.<|endoftext|> -TITLE: Weak convergence problem -QUESTION [5 upvotes]: Let $\Omega \subset \mathbb{R}^d$ be a bounded set and $d\ge 1$.
Consider a sequence of functions $f_n \in L_3(\Omega)$ such that -$$f_n \to f\mbox{ weakly in } L_2(\Omega)\mbox{ and } \|f_n\|_{L_3(\Omega)} \le M$$ -for a fixed constant $M$. We additionally know that -$$ \|f_n\|_{L_2 (\Omega)} \to \|f\|_{L_2 (\Omega)} $$ -Show that -$$ \|f_n - f \|_{L_p(\Omega)} \to 0\mbox{ for every } p \in [2,3) $$ -Does the limit function $f$ have to be an element of $ L_3(\Omega) $? -Thanks for the help. - -REPLY [4 votes]: It's well known that norm convergence in $L^2$ follows from weak convergence plus the convergence of the norms. This gives that $f_n\to f$ pointwise a.e. on a subsequence, and thus $f\in L^3$ by Fatou's Lemma. -Now -$$ -\int_{|f_n-f|\le C} |f_n-f|^p \le C^{p-2}\|f_n-f\|_2^2 -$$ -and -$$ -\int_{|f_n-f|>C} |f_n-f|^p \lesssim C^{p-3} , -$$ -since at least one of $f_n$, $f$ is $>C/2$ on this set and, as we observed, the $L^3$ norms are bounded. So if $2\le p<3$, then we can make $\|f_n-f\|_p$ arbitrarily small by taking both $C$ and $n$ large.<|endoftext|> -TITLE: Integrating with respect to an angle -QUESTION [5 upvotes]: Hello maths community! -One day I was solving a geometry problem and I thought I had found a way of solving it. While solving the problem, I kind of invented a new way of finding the area of a shape related to the circle, but just recently I realized that I was wrong. The method I invented was invalid, but I just don't understand why it wouldn't work... I won't describe the problem here, but I am going to show an example of the method I invented. -Warning. The following methods are invalid! -To find the area of a circle with my method, first split the circle in half, find the area of that, and then multiply it by $2$ in the end to make our life easier. -A half circle is made out of a bunch of vertical "lines" and I thought one could use integrals to add these "lines" up to get the area. To do this we need to find a function to get the length of one "line" and then integrate the function.
If we draw a right triangle inside the half circle, the task will become easy. -(figure: constructing a half circle) -(figure: triangle inside the half circle) -Now, to get $l$, we use sine: $l=r\cdot \sin(φ)$ -Then we take the integral with respect to $φ$. I guess this part is the one causing problems... -$$\int_{0}^\pi l \, dφ=\int_0^\pi r\cdot \sin(φ) \, dφ=r\int_0^\pi \sin(φ) \, dφ = r (-\cos(\pi) +\cos(0))=2r$$ -Well, we know already that the area of a half circle is definitely not $2r$, and if we multiplied it by $2$ we would get $4r$, and $4r \neq \pi r^2$. -With my logic this would seem very valid and I would just like to know why this doesn't work. Thanks! - -REPLY [2 votes]: You need $\displaystyle \int \ell\,dx$, not $\displaystyle\int\ell\,d\varphi$. You have $x=r\cos\varphi$, so $dx = -r\sin\varphi\,d\varphi$.<|endoftext|> -TITLE: Flatness of torus and surfaces of higher genus -QUESTION [7 upvotes]: At first sight it may be a surprise that the ordinary torus $S^1 \times S^1$ is flat: one argument to see this is the following. One can imagine a torus as a square with opposite sides identified, and the square is obviously flat. However, there are surfaces of higher genus (higher numbers of "holes") which can also be represented as polygons (a $4g$-gon for a surface of genus $g$) with sides identified. However, higher genus surfaces are negatively curved. So my question is - -Why are higher genus surfaces negatively curved while they can be represented as flat polygons with sides identified? - -REPLY [12 votes]: If you try to turn those polygon identifications into flat metrics you run into a problem at the vertices: the angles don't add up. This issue can only be addressed for the square.<|endoftext|> -TITLE: Composition series for Verma modules. -QUESTION [6 upvotes]: Let $L$ be a Lie algebra. I need to prove that every Verma module $\Delta(\lambda)$ admits a composition series, i.e. a series of submodules with simple factors.
-I found a proof that is quite short in this script, at Proposition 5.5. -At the end of the proof, when a concrete series is built, the term $M_i$ is given and is taken to be a maximal submodule of $M$. I am not sure why it is obvious that such a maximal submodule exists. -I thought of two ways to proceed. The first is to consider that $M$ is a submodule of a Verma module, that the weights of $M$ are bounded from above, and to deduce (in some way, if it is true) that there is a finite number of maximal weights of $M$, say $\mu_1 \dots \mu_n $. Then since $\text{dim}$ $ M_{\mu_i} < \infty$ we can choose a basis $B_i$ of $M_{\mu_i}$. If $M$ is generated by all $v \in B_i$ for all $i$, then we can conclude using the theory of finitely generated modules (in particular, this question). -The second way is to consider a maximal weight $\mu$ of $M$ and $v \in M_\mu$, then try to prove that $M'=\sum N $, where the sum is over all the submodules $N$ of $M$ that don't contain $v$, is maximal in $M$. -Then, how can I prove that the factors (and their multiplicities) are independent of the choice of the Jordan-Hölder chain? - -REPLY [2 votes]: Ok, I found a proof of the fact that every submodule of a highest weight module has a maximal proper submodule. -According to this question on MathOverflow, the universal enveloping algebra $U(L)$ of a Lie algebra $L$ is noetherian. Now an $L$-module $M$ that is a highest weight module is a finitely generated $U(L)$-module (actually it is generated by a single maximal vector). Recall - -If $R$ is a noetherian ring and $M$ a finitely generated left $R$-module, then $M$ is a noetherian module - -The proof of that can be found in "Basic Algebra II" of Jacobson, Theorem 3.4. Then a highest weight module $M$ is a noetherian module. -Now it remains to prove that a noetherian module $M$ has a maximal proper submodule. Consider $\mathcal{S}$, the set of proper submodules of $M$, which is non-empty since it contains the zero module.
Suppose that a maximal submodule doesn't exist, so for each $N \in\mathcal{S}$ we can define the non-empty set -\begin{gather} -\mathcal{S}(N):=\{L \in \mathcal{S} \colon N < L\} -\end{gather} -By the axiom of choice, for each $N \in \mathcal{S}$ we can choose an $L_N \in \mathcal{S}(N)$; then the mapping $N \to L_N$ defines a map $f$ from $\mathcal{S}$ to itself such that $f(N)>N$ for each $N \in \mathcal{S}$. -Pick $N_1 \in \mathcal{S}$, then define $N_2=f(N_1)>N_1$, $N_3=f(N_2)>N_2$ and so on. Then $N_1<N_2<N_3<\dots$ is a strictly increasing chain of submodules, contradicting the fact that $M$ is noetherian. So a highest weight module has a maximal proper submodule, and by repeatedly taking maximal submodules we obtain a descending series $M=M_0>M_1>M_2> \dots$ of submodules with simple factors. -To finish the proof that a highest weight module admits a composition series, I should prove that the sequence $M_i$ terminates, but I have no idea at the moment. -EDIT: Ok, I'm writing to myself and it seems that nobody has considered this question. Nevertheless I think that an answer can be useful for someone in the future, so I write down how I found a complete (I hope) solution. -Suppose that a highest weight module $M$ has a descending series $M=M_0>M_1>M_2> \dots$ with simple factors. From the general theory of highest weight modules, a simple factor $M_i/M_{i+1}$ must be isomorphic to $L(\mu)$, i.e. the quotient of the Verma module $\Delta(\mu)$ by its maximal (and unique) proper submodule. Studying the value of the Casimir operator, we conclude that the number of $\mu$ such that $L(\mu)$ appears as a simple factor in the series above is finite. -Suppose now that the series $M=M_0>M_1>M_2> \ldots$ is infinite. By the considerations above there exists a $\mu$ such that $L(\mu)$ appears infinitely many times. Then there exists a subsequence $M_{i_j}$ of $M_i$ such that $M_{i_j}/M_{i_j+1}$ is isomorphic to $L(\mu)$. For each $j \in \mathbb{N}$ choose $v_j \in (M_{i_j})_{\mu}$ such that $v_j$ projects to a generator on the factor. The set $\{v_1,v_2,v_3 \dots\}$ is an infinite set of independent eigenvectors of the same eigenvalue $\mu$, which is impossible since the eigenspaces of $M$ are finite dimensional.
-There is a much quicker and easier way that is sketched in the amazing book "Introduction to Lie Algebras and Representation Theory" by J. Humphreys. It is in the appendix at the end of chapter 24. This appendix was added in the later editions, so anyone who wants to read it should take care to get a sufficiently recent edition of the book.<|endoftext|> -TITLE: Construct a triangle with its orthocenter and circumcenter on its incircle. -QUESTION [7 upvotes]: Construct $\triangle ABC$ such that its orthocenter ($H$) and circumcenter ($O$) are on its incircle. - -I've tried inverting everything WRT the circumcircle, but I don't have a proper idea. Also, since $O$ and $H$ are isogonal conjugates, I tried reflecting them WRT the sides of the triangle to find something, but got nothing; I have tried many different approaches. Does anyone have some idea how to do that? - -(Image from @Blue, using proportions calculated algebraically. (See comments, but ignore my non-constructibility nonsense.) It may-or-may-not be helpful to note that points $B$, $C$, $O$, $I$, $H$ lie on a circle congruent to the circumcircle. Specifically, $O$ is the midpoint of $\stackrel{\frown}{BC}$ on that circle, and $I$ is in turn the midpoint of $\stackrel{\frown}{OH}$. One can show that this property is a consequence of $\angle A=60^\circ$ alone. The construction corresponding to the additional geometric condition that causes $O$ and $H$ to lie on the incircle remains elusive.) - -REPLY [5 votes]: We begin by proving the claims made by Blue in the edit to the question. -Let $I$ be the incentre of $ABC$, and let $R$ be its circumradius. Since $O$ and $H$ lie inside $ABC$, the triangle must be acute. -The incircle is divided into three arcs by its points of contact with the sides of $ABC$. At least one of these arcs, say the one nearest vertex $A$, contains neither $O$ nor $H$.
Thus when the rays $AO$ and $AH$ meet the incircle at $O$ and $H$, respectively, each of these rays is intersecting the incircle for the second time. Moreover, as in any triangle, $AI$ bisects angle $OAH$. It follows that the points $O$ and $H$ are symmetric about $AI$. In particular, $AH = AO = R$. -In any triangle, $\overrightarrow{AH} = 2\overrightarrow{OA'}$, where $A'$ is the midpoint of $BC$. Hence $OA' = R/2$. It follows from this that $\angle BOC = 120^{\circ}$, hence that $\angle BAC = 60^{\circ}$. -If we introduce $O'$ as in the figure (the reflection of $O$ through $A'$), then $O$, $B$ and $C$ belong to the circle with radius $R$ centred at $O'$. Since $\overrightarrow{AH} = \overrightarrow{OO'}$, the quadrilateral $OAHO'$ is a rhombus with side $R$. Consequently, $H$ also belongs to circle $BOC$. -If $J$ is the point halfway along arc $OH$ on circle $BOC$, then $BJ$ bisects $\angle OBH$, hence $J$ lies on $BI$. Similarly, $J$ lies on $CI$. Hence $J = I$, and $I$ lies on circle $BOC$. Since $AI$ bisects $\angle BAC$, it meets the circumcircle of $ABC$ again at $O'$, which is midway between $B$ and $C$. -Conversely, we carry out a construction corresponding to the above requirements. Start with a circle centred at $O$ with radius $1$. Mark two points $B$ and $C$ on the circle so that $\angle BOC = 120^{\circ}$. Let $O'$ be the reflection of $O$ through $BC$. Then $O'$ is on the circle. Now let $I$ be any point on circle $BOC$, on the same side of $BC$ as $O$. (We will specify $I$ further below.) Let $O'I$ cut $BO'C$ again at $A$. Let $H$ be the reflection of $O$ through $O'I$. Then reversing the arguments above, we find that $H$ is the orthocentre and $I$ the incentre of triangle $ABC$, and that $IH = IO$. -The only question that remains is how to choose $I$ on circle $BOC$ so that $IO$ is equal to the inradius of $ABC$. If we let $x$ be the inradius, then $x$ is the distance from $I$ to line $BC$. We also have $IO^2 = (1/2 - x)^2 + 1 - (x+1/2)^2 = 1-2x$. 
The condition $OI = x$ is equivalent to $x^2 = 1 - 2x$, or $x = \sqrt{2} - 1$. -Thus the construction can be completed by letting $I$ be a point of intersection of circle $BOC$ with a circle centred at $O$ with radius $\sqrt{2}-1$. -I'm not sure how to motivate this last step geometrically. -Summary of my construction: Given two points $O$ and $O'$, write $R = OO'$. Construct the circles $K$ and $K'$ of radius $R$ centred at $O$ and $O'$, respectively. Let $B$ and $C$ be the points of intersection. Let $I$ be a point of intersection of $K'$ with the circle of radius $(\sqrt{2}-1)R$ centred at $O$. Then let $A$ be the point of intersection of $O'I$ with $K$. -Alternative construction (using $IA = 2IO$, proved by dxiv below): Instead of constructing $I$, construct $A$ directly by intersecting $K$ with the circle of radius $(2\sqrt{2} - 1)R$ centred at $O'$. -Summary of dxiv's construction: Construct a triangle $AIO$ with $IO= r$, $IA = 2r$, $OA = (\sqrt{2}+1)r$. Let $K$ be the circle centred at $O$ passing through $A$. Construct angles of $30^{\circ}$ on either side of $AI$. Let $B$ and $C$ be the intersections with $K$ of the outer sides of these angles.<|endoftext|> -TITLE: Nested Radicals and Continued Fractions -QUESTION [11 upvotes]: Is there some interconnection between these two topics? -A sort of classification of the possible types of nested radicals, and maybe some way (hopefully bijective, in some sense) to pass from a nested radical to a continued fraction and vice versa? -I know this is vague, but I didn't find anything about it. - -REPLY [7 votes]: Yes, there is.
Consider a continued fraction in the form: -$$x=\cfrac{a}{b+\cfrac{a}{b+\cfrac{a}{b+\cdots}}}$$ -Assume the limit exists and find it: -$$x=\cfrac{a}{b+x}$$ -$$x^2+bx-a=0$$ -$$x=\frac{\sqrt{b^2+4a}-b}{2}$$ -Now consider the nested radical: -$$x=\sqrt{c+d\sqrt{c+d\sqrt{c+\cdots}}}$$ -Assume the limit exists and find it: -$$x=\sqrt{c+dx}$$ -$$x^2-dx-c=0$$ - -If we set $d=-b$ and $c=a$ we get exactly the same value of the limit. I assumed that $b>0$, so in this case $d<0$ and we get the radical: -$$x=\sqrt{a-b\sqrt{a-b\sqrt{a-\cdots}}}=\cfrac{a}{b+\cfrac{a}{b+\cfrac{a}{b+\cdots}}}$$ -With the condition $a>b>0$, of course. - -This is the simplest connection we could find, but of course there may be countless others.<|endoftext|> -TITLE: Mirror algorithm for computing $\pi$ and $e$ - does it hint on some connection between them? -QUESTION [59 upvotes]: Benoit Cloitre offered two 'mirror sequences', which allow one to compute $\pi$ and $e$ in similar ways: -$$u_{n+2}=u_{n+1}+\frac{u_n}{n}$$ -$$v_{n+2}=\frac{v_{n+1}}{n}+v_{n}$$ - -$$u_1=v_1=0$$ -$$u_2=v_2=1$$ - -$$\lim_{n \to \infty} \frac{n}{u_n}=e$$ -$$\lim_{n \to \infty} \frac{2n}{v_n^2}=\pi$$ - -The formulation and the proof can be seen here. - -What do you think - is it just a coincidence, or is there some deeper meaning in this mirror algorithm about the connection of the two constants? - -By @EricStucky in the comments, the better question: - -Is there any connection between $e$ and $π$ which is essentially different from Euler's formula? - -Of course, I expect an answer related to my own question about this 'mirror sequence'. If, on the other hand, someone shows a clear relation between this sequence and Euler's formula, that's fine too. - -REPLY [21 votes]: Both the limits can be evaluated in a more general scenario where you put constants $a$ and $b$ in front of the two terms on the right-hand side. The recurrence equations can be solved analytically and the corresponding limits taken in the case $a=1 \wedge b=1$.
They both involve the gamma function and some powers. The difference between the two "algorithms" is then that in the first, the gamma function is trivial while the power $e^1$ remains, while in the second, the power term cancels out with the factor $2$ but the gamma function brings in $\pi$ (as it often does). So the answer is no, these two sequences don't disclose any commonalities between the two constants. They just evaluate to expressions which somehow contain both, and in either, one of the terms becomes trivial, leaving only the other. -Details -Let's solve the two systems in parallel (NB. the equations below come in pairs to show the correspondence, they are not systems.) Using the method of generating functions, we convert the equations -$$\begin{aligned} -u_{n+2} &= a u_{n+1} + b\frac{u_n}n, \\ -v_{n+2} &= a \frac{v_{n+1}}n + b v_n -\end{aligned}$$ -with initial conditions $u_1 = v_1 = 0$, $u_2 = v_2 = 1$, to ordinary differential equations -$$\begin{aligned} -\frac{f'(x)}x - \frac{2f(x)}{x^2} &= a\left(f'(x) - \frac{f(x)}x\right) + b f(x), \\ -\frac{g'(x)}x - \frac{2g(x)}{x^2} &= a \frac{g(x)}x + b x g'(x) -\end{aligned}$$ -where -$$\begin{aligned} -f(x) &= \sum_{n=2}^{+\infty} u_n x^n, \\ -g(x) &= \sum_{n=2}^{+\infty} v_n x^n. -\end{aligned}$$ -The solutions for $a>0, b>0$ with a unit quadratic term (the initial condition) are -$$\begin{aligned} -f(x) &= x^2 e^{-\frac{bx}a} (1 - a x)^{-1-\frac b{a^2}}, \\ -g(x) &= x^2 (1 - \sqrt b x)^{-1 - \frac a{2\sqrt b}} (1 + \sqrt b x)^{-1 + \frac a{2\sqrt b}}.
-\end{aligned}$$ -We can extract the coefficients by writing down and expanding the Taylor series: -$$\begin{aligned} -f(x) &= x^2 \sum_{k=0}^{+\infty} \frac1{k!} \left(\frac{-bx}a\right)^k \sum_{l=0}^{+\infty} \frac{(1+b/a^2)_l}{l!} (ax)^l \\ -&\quad = \sum_{n=0}^{+\infty} \left[ \sum_{k=0}^n \frac{(-1)^k}{k!(n-k)!} \left(\frac ba\right)^k (1+b/a^2)_{n-k} a^{n-k} \right] x^{n+2}, \\ -% -g(x) &= x^2 \sum_{k=0}^{+\infty} \frac{\big(1 + a/(2\sqrt b)\big)_k}{k!} (\sqrt b x)^k \sum_{l=0}^{+\infty} \frac{\big(1 - a/(2\sqrt b)\big)_l}{l!} (-\sqrt b x)^l \\ -&\quad = \sum_{n=0}^{+\infty} \left[ \sum_{l=0}^n \frac{(-1)^l}{l!(n-l)!} \big(1 - a/(2\sqrt b)\big)_l \big(1 + a/(2\sqrt b)\big)_{n-l} (\sqrt b)^n \right] x^{n+2}. -\end{aligned}$$ -The two expansions have some similarities and both allow us to write down and simplify (converting the Pochhammer symbols with negative $k$ to positive $k$ followed by a straightforward application of the definitions, all doable by hand) the $(n+2)$-th coefficient: -$$\begin{aligned} -u_{n+2} &= \binom{n+p}{p} a^n {}_1F_1(-n; -n-p; -p), \\ -v_{n+2} &= \binom{n+q}{q} \sqrt b^n {}_2F_1(-n, 1-q; -q-n; -1), -\end{aligned}$$ -where $p$ and $q$ are shorthand notation for the recurring subexpressions in either formula, -$$\begin{aligned} -p &= b / a^2,\\ -q &= a / (2 \sqrt b). -\end{aligned}$$ -Now the similarity is quite striking (keeping in mind, however, that there's a world of difference between $_1F_1$ and $_2F_1$, and between evaluating something at $-1$ and at a generic point), but that's still not too surprising given how simple the original recurrence equations were. -We can find asymptotic forms of $u_n$ and $v_n$ from here. For that, it's advantageous to rewrite the hypergeometrics so that the dependence on $n$ appears only in the denominator terms. Here are two handy identities just for us: for the $_1F_1$ and for the $_2F_1$.
This brings $u_{n+2}$ and $v_{n+2}$ to their equivalent forms -$$\begin{aligned} -u_{n+2} &= \binom{n+p}{p} a^n e^{-p} {}_1F_1(-p; -n-p; p), \\ -v_{n+2} &= \binom{n+q}{q} \sqrt b^n 2^{q-1} {}_2F_1(-q, 1-q; -q-n; 1/2), -\end{aligned}$$ -Now as $n \to +\infty$, we are guaranteed that the $_pF_q$ terms approach $1$, so we can just trim them to the zeroth term each. For the generalized binomial coefficients, we use Stirling's formula for the "big" factorials, leaving -$$\binom{n+\Delta}{\Delta} \approx \frac{n^\Delta}{\Delta!}.$$ -This gives us the asymptotics -$$\begin{aligned} -u_{n+2} &\approx \frac{a^n n^p e^{-p}}{p!}, \\ -v_{n+2} &\approx \frac{\sqrt b^n n^q 2^{q-1}}{q!}. -\end{aligned}$$ -As mentioned in the introduction, each of these terms has a geometric term and a gamma function (the factorial). They also have one exponential term each, which goes away for $a=1$ and for $b=1$, respectively: -For $a=1$, $p=b$ and the asymptotic behaviour of $u_{n+2}$ is -$$u_{n+2} \approx n^b e^{-b} / b!$$ -and when compensated in a limit procedure, this tends to -$$\large \lim_{n \to +\infty} \frac{n^b}{u_n} = e^b b! = \Gamma(b+1) e^b.$$ -In particular, $b=1$ gives the limit $e^1 1! = e$. For half-integer values of $b$, both $e$ (in some power) and $\pi$ (in a square root) will be present simultaneously. -For $b=1$, $q=a/2$ and the asymptotic behaviour of $v_{n+2}$ is -$$v_{n+2} \approx n^{a/2} 2^{a/2 - 1} / (a/2)!$$ -and when squared and compared to $n^a$, this gives the limit -$$\large \lim_{n \to +\infty} \frac{n^a}{v_n^2} = \frac{(a/2)!^2}{2^{a-2}} = \frac{\Gamma(1+\frac a2)^2}{2^{a-2}}.$$ -In particular, $a=1$ produces $\Gamma(3/2)^2 / 2^{-1}$. Since $\Gamma(3/2) = (1/2)! = \sqrt\pi/2$, this is one half of $\pi$. The residual power of $2$ is cancelled when the limit expression is ${\bf 2}n / v_n^2$. For a generic odd $a$, there would be some multiple of $\pi$ and some power of $2$. 
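Both limits claimed in the question are also easy to confirm numerically. A quick Python sketch (not part of the original answer) that iterates the two recurrences for the special case $a=b=1$:

```python
import math

def mirror_constants(N):
    """Iterate u_{n+2} = u_{n+1} + u_n/n and v_{n+2} = v_{n+1}/n + v_n
    (with u_1 = v_1 = 0, u_2 = v_2 = 1) up to index N, then return the
    two limit approximations N/u_N -> e and 2N/v_N^2 -> pi."""
    u_prev, u_curr = 0.0, 1.0   # u_1, u_2
    v_prev, v_curr = 0.0, 1.0   # v_1, v_2
    for n in range(1, N - 1):   # builds u_3 ... u_N and v_3 ... v_N
        u_prev, u_curr = u_curr, u_curr + u_prev / n
        v_prev, v_curr = v_curr, v_curr / n + v_prev
    return N / u_curr, 2 * N / v_curr ** 2

e_approx, pi_approx = mirror_constants(200_000)
assert abs(e_approx - math.e) < 1e-6
assert abs(pi_approx - math.pi) < 1e-3
```

Consistently with the asymptotics derived above, the $e$ side converges very quickly, while the $\pi$ side converges only at rate $O(1/n)$, which is why a fairly large $N$ is used.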
-Conclusion -Although the two equations can be solved using quite similar methods and even their full solutions share a lot of common features, the results $e$ and $\pi$ come ultimately from different and unrelated parts of the formulas. The former comes from a power term when all the other terms become trivial. The latter comes from a special value of a gamma function when the other terms are reduced to a constant of $2$. The base $e$ is connected to the properties of the Kummer function $_1F_1$ whereas the base $2$ appears in a similar function for the Gauss function $_2F_1$.<|endoftext|> -TITLE: example of a subset of a smooth manifold admitting a unique smooth structure making the inclusion an immersion, which is not a weak embedding. -QUESTION [5 upvotes]: A subset $S$ of a smooth manifold $M$ is called a weakly embedded submanifold (at least in Lee) if it admits a smooth structure making the inclusion an immersion, and such that for any other smooth manifold $N$, a map $N \to S$ is smooth iff its composition with $S \hookrightarrow M$ is smooth. Such a smooth structure on $S$ is clearly unique. To get a better feel for this property I am asking for the following: - -Is there an example of a subset $S$ of a smooth manifold $M$ which admits a unique smooth structure making the inclusion $S \hookrightarrow M$ an immersion, but which is not a weak embedding? - -REPLY [3 votes]: Nice question. Here's an example. -Let $S\subset\mathbb R^2$ be the union of the $x$-axis with the positive half of the $y$-axis, with the smooth structure induced by the immersion $F\colon \mathbb R\times \{0,1\}\to \mathbb R^2$ given by -\begin{align*} -F(x,0) &= (x,0),\\ -F(x,1) &= (0,e^x). -\end{align*} -It's a fairly straightforward exercise to prove that this is the only topology and smooth structure making $S$ into an immersed submanifold of $\mathbb R^2$.
-Now consider the smooth map $\phi\colon \mathbb R\to \mathbb R^2$ given by -\begin{equation*} -\phi(t) = -\begin{cases} -(e^{-1/t^2},0), & t>0,\\ -(0,0), & t=0,\\ -(0,e^{-1/t^2}), & t<0. -\end{cases} -\end{equation*} -Note that the $x$-axis is an open subset of $S$ in its submanifold topology. But -the preimage of the $x$-axis under $\phi$ is $[0,\infty)$, which is not open in $\mathbb R$, so $\phi$ is not continuous as a map into $S$.<|endoftext|> -TITLE: Does the Fundamental Theorem of Algebra hold true for infinite polynomials? -QUESTION [6 upvotes]: I know that by the Fundamental Theorem of Algebra, every polynomial of positive degree has a zero in $\mathbb{C}$. -Does this also hold for polynomials of infinite degree? Like, for example, the Taylor expansion of $e^z$ or $\cos z$? - -REPLY [5 votes]: The fundamental theorem does not hold for infinite polynomials. -Your own example of $\mathrm e^z$ is a good one. We have $\mathrm e^z \neq 0$ for all $z \in \mathbb C$. -This raises a really important question. -All of the non-constant, finite Taylor Polynomials for the exponential function satisfy the fundamental theorem. For example: - -$1+z = 0$ has, when counted with multiplicity, one solution over $\mathbb C$. -$1+z+\frac{1}{2}z^2 = 0$ has, when counted with multiplicity, two solutions over $\mathbb C$. -$1+z+\frac{1}{2}z^2+\frac{1}{3!}z^3 = 0$ has, when counted with multiplicity, three solutions over $\mathbb C$. -$1+z+\frac{1}{2}z^2+\cdots+\frac{1}{n!}z^n = 0$ has, when counted with multiplicity, $n$ solutions over $\mathbb C$. - -For the Taylor Polynomial of degree $n$, let $R_n$ be the set of its roots, for example: - -$R_1 = \{-1\}$ -$R_2 = \{-1-\mathrm i, \ -1+\mathrm i\}$ - -What happens to these roots as $n \to \infty$? -I've calculated (using my computer) $R_1,$ $R_2$, $R_3$ and $R_4$ and found that all of the elements of $R_4$ have larger moduli than the elements of $R_3$, and similarly for $R_3$ and $R_2$, etc.
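This experiment is easy to reproduce. Here is a Python sketch using numpy (an assumption; the original computation is not shown) that computes the root sets $R_n$ and compares their moduli:

```python
import numpy as np
from math import factorial

def section_roots(n):
    """Roots of the degree-n Taylor section 1 + z + z^2/2! + ... + z^n/n!."""
    # numpy.roots expects coefficients listed from the highest degree down
    coeffs = [1 / factorial(k) for k in range(n, -1, -1)]
    return np.roots(coeffs)

moduli = [sorted(abs(r) for r in section_roots(n)) for n in range(1, 5)]
for smaller, larger in zip(moduli, moduli[1:]):
    # every element of R_{n+1} has larger modulus than every element of R_n
    assert max(smaller) < min(larger)
```

For $R_3$ and $R_4$ the comparison is surprisingly tight: the largest modulus in $R_3$ is about $1.939$, while the smallest in $R_4$ is about $1.944$.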
-It seems that the roots are getting bigger in terms of their modulus. It is tempting to think that in the limit, they all retreat to infinity and that is why there are no finite solutions to $\mathrm{e}^{z} = 0$. -Sadly, $\mathrm e^z$ does not seem to behave well at complex infinity - at the North Pole of the Riemann Sphere. If we write $z = x+\mathrm i y$ then we get $\mathrm e^z = \left(\mathrm e^x\cos y\right) + \mathrm i \left(\mathrm e^x \sin y\right)$. -If we let $z \to \infty$ along the positive real axis, i.e. $y=0$, $x>0$ and $x \to +\infty$ then $\mathrm e^z \to + \infty$. -If we let $z \to \infty$ along the negative real axis, i.e. $y=0$, $x<0$ and $x \to -\infty$ then $\mathrm e^z \to 0$. -If we let $z \to \infty$ along either the positive, or the negative imaginary axis, i.e. $x=0$ and $y \to \pm \infty$ then there is no well-defined limit; the value of $\mathrm e^z$ lies in $\{\cos y +\mathrm i \sin y: y \in \mathbb R\}$, but it keeps spiralling around and never settles down. -I would be tempted to say that all of the roots have retreated to infinity and imposed some irreconcilable conditions that cause a discontinuity. -Don't believe a word of this though; it's all personal speculation.<|endoftext|> -TITLE: Handshakes in a party -QUESTION [7 upvotes]: Here's the question which I'm stuck with - - -There are $20$ married couples at a party. Every man shakes hands with everyone except himself and his spouse. Half of the women refuse to shake hands with any other women. The other $10$ women all shake hands with each other (but not with themselves). How many handshakes are there at the party? - -My Solution: - -Handshakes done by men: There are $20$ ways to pick a man and $38$ ways to pick the other person, totaling to $20 \cdot 38 = 760$. 
But since every handshake is counted twice, the answer is $\frac{760}{2} = 380$ -Handshakes done by the women who refuse to shake hands with any other women: these are already counted in the $380$ handshakes (because these women can only shake hands with men, which was taken care of above). -Handshakes done by the other $10$ women and men: These are counted in the $380$ handshakes. -Handshakes done between women in the group of $10$: these are $\binom{10}{2} = 45$. - -Hence the total handshakes are $380 + 45 = 425$. - -However, the total handshakes are $615$ according to my textbook. Can anyone please help me to find out the mistake? - -REPLY [12 votes]: Your mistake can be seen in your first line: you should not divide by $2$, as you did not count the handshakes between men and women twice. -Instead, the number of ways to pick a man is $20$. The number of men that shake hands with him is $19$. Since every handshake is counted twice, the men shake hands $190$ times. -The number of handshakes men had with women is simply $20 \times 19$, thus $380$. -Combining this with your results gives the answer in the textbook, or $570+45=615$. - -REPLY [3 votes]: There are the following types of handshakes - -Woman vs woman: $\dbinom{10}{2}=45$. You were correct on that. -Man vs man: $\dbinom{20}{2}=190$. -Man vs woman: each man performs $19$ handshakes with women. Since there are $20$ men this gives $$19\times20=380$$ such handshakes. - -Summing up $$45+190+380=615$$ Your mistake was in steps $2$ and $3$ because you treated them as $1$ step. While handshakes between man and man are indeed counted twice, handshakes between man and woman are counted only once, so dividing everything by $2$ is wrong. You have to distinguish between these two categories.<|endoftext|> -TITLE: Proof through combinatorial argument -QUESTION [7 upvotes]: I am attempting to solve this counting problem through combinatorial argument.
The following is the equation I am given: -$$\sum_{i=1}^n (i-1)(n-i) = \binom{n}{3}$$ -I understand that the right-hand side of this equation represents a set of $n$ elements out of which we choose $3$. For example, I believe we can suppose we have a group of $n$ people and we want to choose $3$ out of $n$ to be in a committee. However I'm not sure how to express the left-hand side in words. If forming a committee is an appropriate way to tackle this problem, then I know the left-hand side must utilize the addition and multiplication principles, but I don't know how to put it into words. Also my intuition tells me that in solving this we should first flip $$(n-i)(i-1)$$ -Thanks! - -REPLY [9 votes]: Hint: split on the fact that the middle member (in sorted numerical order, the members are numbered $1$ to $n$) is $i$. Then we pick one from before and one from after.<|endoftext|> -TITLE: Does $\sqrt{\cos{\theta} \sqrt{\cos{\theta} \sqrt{\cos{\theta\dots}}}}=\sqrt{1 \sqrt{1 \sqrt{1\dots}}} \implies \cos{\theta}=1$? -QUESTION [7 upvotes]: I was solving this equation: -$$\sqrt{\cos{\theta} \sqrt{\cos{\theta} \sqrt{\cos{\theta\dots}}}}=1$$ -I solved it like this: -The given equation can be written as: -\begin{align*} -\sqrt{\cos{\theta} \sqrt{\cos{\theta} \sqrt{\cos{\theta\dots}}}}&=\sqrt{1 \sqrt{1 \sqrt{1\dots}}} \\ -\cos{\theta}&=1 \\ -\theta&=\arccos {1} -\end{align*} -So the solution is $2n\pi, n \in \mathbb Z$. -Have I solved it the wrong way? -(The title originally contained a more general question: Does $\sqrt{a \sqrt{a \sqrt{a\dots}}}=\sqrt{b \sqrt{b \sqrt{b\dots}}} \implies a=b$? The current title is consistent with the body and the accepted answer.)
REPLY [3 votes]: Another answer: -Suppose that $$\sqrt{\cos{\theta} \sqrt{\cos{\theta} \sqrt{\cos{\theta}\dots}}}=1$$ -Square both sides; you get $$\cos{\theta} \sqrt{\cos{\theta} \sqrt{\cos{\theta}\dots}}=1$$ -We know that $$\sqrt{\cos{\theta} \sqrt{\cos{\theta}\dots}}=1$$ -thus $$\cos{\theta}\cdot 1=1$$ -therefore $$\theta=2k\pi,\ k\in\mathbb Z$$<|endoftext|> -TITLE: Field with four elements -QUESTION [7 upvotes]: If $F=\{0,1,a,b\}$ is a field (where the four elements are distinct), then: -1. What is the characteristic of $F$? -2. Write $b$ in terms of the other elements. -3. What are the multiplication and addition tables of these operations? - -REPLY [4 votes]: A field has characteristic a prime number $p$ or $0$ (consider the homomorphism $\mathbf Z\rightarrow F, \enspace n\mapsto n\cdot 1$). -Any finite field $\mathbf F_{p^n}$ is an extension of degree $n$ of the prime field $\mathbf F_p$ and it is a simple extension of $\mathbf F_p$ generated by a root of any irreducible polynomial of degree $n$ in $\mathbf F_p[x]$. In the present case the only irreducible polynomial of degree $2$ over $\mathbf F_2$ is $x^2+x+1$. Let $\omega$ be one of its roots; the other root is its inverse, $\omega+1$. -Here is its multiplication table: -$$\begin{array}{c|cccc} -&0&1 &\omega&\omega+1\\\hline -0&0&0&0&0\\\hline -1&0&1 &\omega&\omega+1\\\hline -\omega&0&\omega&\omega+1&1\\\hline -\omega+1&0&\omega+1&1&\omega -\end{array}$$
I understand extra structure is always nice, but what are some consequences that are easy to see in this light? - -REPLY [7 votes]: As Zhen Lin says, a really important part of the Eilenberg–Zilber theorem is that the lax-monoidal structure map is a quasi-isomorphism. Nevertheless, reformulating the first part of the theorem as "the functor $C_*(-)$ is lax-monoidal" is helpful for applying general results to it. For example: - -Proposition. Let $\mathsf{C}$ and $\mathsf{D}$ be symmetric monoidal categories, and $F : \mathsf{C} \to \mathsf{D}$ a lax monoidal functor. Then $F$ induces a functor from monoid objects in $\mathsf{C}$ to monoid objects in $\mathsf{D}$. -Proof. Let $M$ be a monoid object in $\mathsf{C}$. The product morphism on $F(M)$ is given by -$$F(M) \otimes F(M) \xrightarrow{\text{lax monoidal}} F(M \otimes M) \xrightarrow{F(\mu)} F(M)$$ -and the unit by -$$1_\mathsf{D} \xrightarrow{\text{lax monoidal}} F(\mathsf{1}_C) \xrightarrow{F(e)} F(M).$$ -Then it's easy to check that this gives $F(M)$ the structure of a monoid object, and that $F$ then defines a functor from monoid objects to monoid objects. -Corollary. If $M$ is a topological monoid, then $C_*(M)$ is a monoid in the category of chain complexes, i.e. a dg-algebra. - -The proof is immediate from the Eilenberg–Zilber theorem, and it's helpful to split the proof in two parts in order not to get bogged down in technical details. More generally, if $\mathtt{P}$ is a topological operad, then $C_*(\mathtt{P})$ is a dg-operad, and if $A$ is a $\mathtt{P}$-algebra, then $C_*(A)$ is a $C_*(\mathtt{P})$-algebra. -Basically, any sort of "product" structure can be transported from topological spaces to chain complexes; in general, this is "the point" of determining if a functor is lax monoidal or not. You would need an oplax monoidal functor to transform topological "coproduct" structures into coproduct dg-structures.
The EZ functor isn't oplax monoidal, so in general you cannot expect it to turn "coalgebras" into "coalgebras".<|endoftext|> -TITLE: Regular sets in set theory -QUESTION [5 upvotes]: I have come across the following definition in different books on set theory: - -A set $x$ is regular iff $(\forall y)(x\in y\implies(\exists w\in y)(w\cap y=\emptyset))$. - -I don't grasp this concept. What does it intuitively mean for $x$ to be regular? Considering $y=\{x \}$ it tells me $x\not\in x$ but does it provide any information beyond that? -Also is this related to regular languages in computer science? - -A set is regular iff it is the language of a regular expression (alternatively: deterministic finite automaton). - -REPLY [6 votes]: No, this has nothing to do with regular languages (and likewise nothing to do with regular cardinals). -As for intuition, I find it much easier to think about what it would mean for a set not to be regular: - -A set $x$ is non-regular if there is an $y$ which contains $x$ as an element, such that for every $w\in y$ there is a $z\in w$ that is also in $y$. - -This means, in particular, that we can take $w$ to be $x$, and find a chain of elements -$$ x \ni w_1 \ni w_2 \ni w_3 \ni \cdots $$ -where at each step $w_i$ can be chosen to be in $y$, which allows us to continue the chain indefinitely. -So if a set $x$ is not regular, there is a way to construct an infinitely descending $\in$-chain starting from $x$. (This can be carried out inside the set theory itself if we have the Axiom of Choice, and at the metalevel if we have choice there). -Conversely, if $x$ is regular, then there cannot be any infinite chain -$$ x \ni x_1 \ni x_2 \ni x_3 \ni \cdots $$ --- or at least such a chain cannot be known inside set theory in the form of a function defined on $\omega$ that maps each $i$ to $x_i$ -- because the range of that function would work as an $y$ that certifies $x$ as non-regular. 
-It is still possible that an element of a model of set theory can be regular, yet there is an infinite chain that we can see at the metalevel, that is, looking at the model from outside. (This will be the case for any model of set theory that contains non-standard integers, for example). But in that case the chain cannot be encoded in toto as an object within the model.<|endoftext|> -TITLE: Independent $\sigma$-algebras using $\pi$-$\lambda$-theorem -QUESTION [6 upvotes]: Let $\mathcal{E}_1, ...,\mathcal{E}_n$ be collections of measurable sets on $(\Omega,\mathcal{F},P)$, each closed under intersection. Suppose -\begin{align*} -P(A_1\cap...\cap\ A_n)=P(A_1)\cdot ... \cdot P(A_n), -\end{align*} -for all $A_i \in \mathcal{E}_i$ for $1 \leq i \leq n$. -Now I want to show that the $\sigma$-algebras $\sigma(\mathcal{E}_i)$ for $1 \leq i \leq n$ are independent, using an application of the $\pi$-$\lambda$-theorem. -Since $\mathcal{E}_i$ for $1 \leq i \leq n$ are closed under intersection, each $\mathcal{E}_i$ is a $\pi$-system. Now, for me it is unclear how to define a $\lambda$-system and how to apply the $\pi$-$\lambda$-theorem. - -REPLY [5 votes]: Fix $A_i \in \mathcal{E}_i$ for $i=2,\cdots,n$, and define $G=\{B\in \sigma(\mathcal{E}_1) \mid P(B\cap A_2\cap\cdots\cap A_n)=P(B)\cdot P(A_2) \cdots P(A_n)\}$. -By assumption $\mathcal{E}_1\subset G$, so it suffices to show that $G$ is a $\lambda$-system, by checking the definition. -Then by the $\pi$-$\lambda$ theorem, we have $\sigma(\mathcal{E}_1)\subset G$. This shows that if $\mathcal{E}_1, \cdots,\mathcal{E}_n$ are independent, then $\sigma(\mathcal{E}_1), \mathcal{E}_2, \cdots,\mathcal{E}_n$ are independent. Then repeat the argument for $\mathcal{E}_2, \cdots, \mathcal{E}_n$ in turn. - -REPLY [2 votes]: Since the $\mathcal E_i$ are $π$-systems and independent, you know (if not, there is a sketch of the proof at the end of this answer) that the induced Dynkin systems $δ(\mathcal E_i)$ are also independent.
Now, since the $\mathcal E_i$'s are $π$-systems, Dynkin's theorem states that $$δ(\mathcal E_i)=σ(\mathcal E_i)$$ which completes the proof. - -Statement: If $\mathcal E_i,\ i \in I$ are independent in $(Ω,\mathcal F, P)$, then the induced Dynkin systems $δ(\mathcal E_i), i\in I$ are also independent. -Sketch of the proof: Define $$\mathsf E_1=\{A\in \mathcal F:P(AA_2\dots A_n)=P(A)P(A_2)\dots P(A_n), \ \forall A_i\in\mathcal E_i \cup \{Ω\},\ i=2,\dots, n\}$$ i.e. $\mathsf E_1$ is the set of all the "good sets" that are independent of $\mathcal E_2, \dots, \mathcal E_n$ (so $\mathsf E_1$ is the maximal class such that $\mathsf E_1$ and $\mathcal E_2, \dots, \mathcal E_n$ are independent). Now, show that $\mathsf E_1$ is a Dynkin system (about two pages of calculations in my textbook...) which by definition contains $\mathcal E_1$. Hence the minimality of the induced Dynkin system $δ(\mathcal E_1)$ yields $δ(\mathcal E_1)\subseteq \mathsf E_1$, which completes the proof.<|endoftext|> -TITLE: Converting decimal fractions to base N -QUESTION [6 upvotes]: I find converting from base N to base 10 very easy. I have no problem converting integers in base 10 to base N, but when I have to find, for example, $0.5$ in base $3$, I am not sure how I am supposed to do it. I can't do division with remainders any more. -How is this done? - -REPLY [9 votes]: Instead of using division with remainder as you do for integers, you can use multiplication for the fractional part: -$0.5 * 3 = 1.5$ -Use the integer part as your first digit after the comma, and repeat with the fractional part: -$0.5 * 3 = 1.5$ -You see that this will continue forever, so we have $0.5_{10} = 0.\overline{1}_3$.
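The repeated-multiplication procedure can also be automated. Here is a small Python sketch (not part of the original answer) that uses exact fractions, so that a repeating expansion is detected rather than looping forever:

```python
from fractions import Fraction

def frac_to_base(x, base):
    """Expand a fraction 0 <= x < 1 in the given base.
    Returns (digits, cycle_start): the digits produced before the
    expansion starts repeating, and the index where the repeating
    cycle begins (None if the expansion terminates)."""
    digits, seen = [], {}
    while x:
        if x in seen:                 # this fractional part occurred before
            return digits, seen[x]
        seen[x] = len(digits)
        x *= base
        d = int(x)                    # the integer part is the next digit
        digits.append(d)
        x -= d
    return digits, None

assert frac_to_base(Fraction(1, 2), 3) == ([1], 0)           # 0.111..._3
assert frac_to_base(Fraction(2, 5), 3) == ([1, 0, 1, 2], 0)  # 0.10121012..._3
```

The first check reproduces $0.5_{10} = 0.\overline{1}_3$ from the answer above.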
-Another example: -$$0.4 * 3 = 1.2\\ -0.2*3 = 0.6\\ -0.6*3=1.8\\ -0.8*3=2.4\\ -0.4*3 = 1.2$$ -And now we're repeating already, so $0.4_{10} = 0.\overline{1012}_3$<|endoftext|> -TITLE: Finding the number of irreducible quadratics in $\Bbb Z_p[x]$, where $p$ is a prime -QUESTION [6 upvotes]: The problem is to find the number of irreducible quadratics in $\Bbb Z_p[x]$, where $p$ is a prime number. -To solve this, I wish to first find the number of reducible quadratics of the form $x^2+ax+b$, then the number of reducible quadratics in general, and subtract this from the total number of quadratics. -I know that each reducible quadratic that is of the form $x^2 +ax+b$ is a product $(x+c)(x+d)$ for $c,d\in \Bbb Z_p$. -But I don't know what to do next. Please help me solve this problem. -Thanks for the help! - -REPLY [3 votes]: MJ. Rivo, your idea is correct. Here is how to do the calculation. -The total number of monic quadratic polynomials in $\Bbb Z_p[x]$ is -$p^2$. -To see this, just note there is a bijection associating each monic quadratic polynomial $x^2 +ax+b$ to $(a,b) \in \Bbb Z_p \times \Bbb Z_p$. -On the other hand, the total number of reducible monic quadratic polynomials in $\Bbb Z_p[x]$ is -$$\frac{p(p+1)}{2}.$$ -To see this, just note that each reducible monic quadratic polynomial bijectively corresponds to a product $(x+c)(x+d)$ for $c,d\in \Bbb Z_p$. How many such products do we have? Since multiplication is commutative, the order does not matter. So we get $\frac{p(p-1)}{2}$ (for $c\neq d$) plus $p$ (for $c= d$). The number then is $\frac{p(p+1)}{2}$. -So, the total number of irreducible monic quadratic polynomials in $\Bbb Z_p[x]$ is -$$p^2-\frac{p(p+1)}{2}=\frac{p(p-1)}{2}.$$ -So the answer for irreducible monic quadratic polynomials is $\frac{p(p-1)}{2}$.
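The count $\frac{p(p-1)}{2}$ can be confirmed by brute force for small primes; the following Python check is my own addition, not part of the argument above:

```python
def count_irreducible_monic_quadratics(p):
    # x^2 + a x + b is reducible over Z_p exactly when it has a root in Z_p,
    # since any factorization must be into two monic linear factors.
    count = 0
    for a in range(p):
        for b in range(p):
            if not any((x * x + a * x + b) % p == 0 for x in range(p)):
                count += 1
    return count

for p in (2, 3, 5, 7, 11, 13):
    assert count_irreducible_monic_quadratics(p) == p * (p - 1) // 2
```

For instance, over $\Bbb Z_3$ the only irreducible monic quadratics are $x^2+1$, $x^2+x+2$ and $x^2+2x+2$, matching $3\cdot 2/2 = 3$.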
-If you want the number of all irreducible quadratic polynomials (not necessarily monic), just note that each irreducible quadratic polynomial bijectively corresponds to $e(x^2 +ax+b)$ where $e \in \Bbb Z_p \setminus \{0\}$ and $x^2 +ax+b$ is an irreducible monic quadratic polynomial in $\Bbb Z_p[x]$. So the total number of all irreducible quadratic polynomials in $\Bbb Z_p[x]$ is -$$\frac{p(p-1)^2}{2}$$<|endoftext|> -TITLE: Why do we use only compatible charts in the Theory of Manifolds? -QUESTION [5 upvotes]: I couldn't find a duplicate, although I think this is a very common question. - -Given two charts, ($U_{1},φ_{1}$), ($U_{2},φ_{2}$), on an $n$-dimensional topological manifold M, such that $U_{1} \cap U_{2}\neq \emptyset$, we get transition maps: -$φ_{1}\circ φ_{2}^{-1} : φ_{2}(U_{1}\cap U_{2}) \rightarrow φ_{1}(U_{1}\cap U_{2})$, and -$φ_{2}\circ φ_{1}^{-1} : φ_{1}(U_{1}\cap U_{2}) \rightarrow φ_{2}(U_{1}\cap U_{2})$ -Two charts, as above, are called compatible if the transition maps, as above, are homeomorphisms. If $U_{1} \cap U_{2} = \emptyset$, then they are compatible. - -My question is, why do we need this behavior? In addition, if we want to define C$^{\infty}$-compatible charts, why do we need to take the transition maps to be smooth, Euclidean mappings? - -REPLY [5 votes]: Yes, each individual chart $\phi \colon U \to \mathbb{R}^n$ is $C^\infty$, but remember that each chart is only defined on one open neighbourhood $U \subset M$. To allow us to consider behaviour on the whole of the manifold, it's important to be able to patch the $U$ neighbourhoods together nicely, going from one chart $\phi$ to another without having any non-differentiable issues. This is why we need the transition maps to be smooth as well as the charts.<|endoftext|> -TITLE: Relation between Cartesian closed category and Lambda Calculus -QUESTION [8 upvotes]: I am a programmer (from the object-oriented world) and currently getting my head around functional programming.
I was looking to get some basics right. -I understand what category theory and lambda calculus try to say. I did read that lambda calculus can be modelled in any Cartesian closed category, and this is how the two ideas can combine. But this is where I lost my way. -1) What is a Cartesian closed category? (I didn't understand the mathematical semantics given, so I wanted to know in simple English, with some examples if possible.) -2) How can lambda calculus express this Cartesian closed category? - -REPLY [9 votes]: I see that this question has an already accepted answer. Nonetheless I would like to give an alternative answer: sometimes seeing -things from a different perspective can help getting a better understanding of the subject. -There are many possible definitions/characterizations of a cartesian closed category (CCC in what follows), one of them is the following. -A CCC is a category $\mathbf C$ with: - -an object $1 \in \mathbf C$ such that for each object $X \in \mathbf C$ there is one and only one morphism $t \colon X \to 1$ -(you can think of $1$ as a generalization of the singleton set, the set with only one element) -for every pair of objects $X$ and $Y$ you have an object $X \times Y$ with two projection morphisms -$$\pi_X \colon X \times Y \to X \qquad \pi_Y \colon X \times Y \to Y$$ -and an operation, called pairing, that for every pair $(f,g) \in \mathbf C[T,X] \times \mathbf C[T,Y]$ gives you a morphism -$$\langle f,g \rangle \in \mathbf C[T,X \times Y]$$ -with the requirement that $\pi_X \circ \langle f,g\rangle=f$ and $\pi_Y \circ \langle f,g \rangle=g$ -for each pair of objects $Y$ and $Z$ an object $Z^Y \in \mathbf C$ with a morphism $\epsilon \colon Z^Y \times Y \to Z$ -such that the mapping -$$\mathbf C[T,Z^Y] \to \mathbf C[T \times Y,Z]$$ -$$f \colon T \to Z^Y \mapsto \epsilon \circ \langle f\circ \pi_T, \pi_Y\rangle \colon T \times Y \to Z$$ -is a bijection.
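In the category of sets and functions (the motivating example of a CCC), the bijection $\mathbf C[T,Z^Y] \cong \mathbf C[T \times Y,Z]$ in the last item is exactly currying. A small Python illustration (my own sketch added for programmers; the names are not standard):

```python
def curry(f):
    """Send f : (T x Y) -> Z to a map T -> Z^Y."""
    return lambda t: lambda y: f(t, y)

def uncurry(g):
    """The inverse direction: g : T -> Z^Y gives a map (T x Y) -> Z."""
    return lambda t, y: g(t)(y)

def ev(pair):
    """The evaluation morphism epsilon : Z^Y x Y -> Z."""
    g, y = pair
    return g(y)

add = lambda t, y: t + y
assert curry(add)(2)(3) == add(2, 3)            # the two directions agree
assert uncurry(curry(add))(2, 3) == add(2, 3)   # round trip is the identity
assert ev((curry(add)(2), 3)) == add(2, 3)      # evaluation recovers application
```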
- -If you regard a category as some sort of generalized system of types and functions between them, then: - -the object $1$ becomes the unit type, the type with only one element, and the unique function $t \colon T \to 1$ is the constant function; -for each pair of objects/types $X$ and $Y$, the object $X \times Y$ is nothing but the product type (the type of tuples), with projections $\pi_X$ and $\pi_Y$ playing the role of, well, projections, and the pairing $\langle,\rangle$ generalizing the pairing constructor for ordered pairs; -the object $Z^Y$ is the type of functions from $Y$ to $Z$, the morphism $\epsilon \colon Z^Y \times Y \to Z$ is the evaluation, and the mapping -$$\mathbf C[X,Z^Y] \to \mathbf C[X \times Y,Z]$$ -is the uncurrying. - -This parallelism gives rise to a notion of interpretation of simply typed $\lambda$-calculus in cartesian closed categories, where an interpretation is nothing but a way to associate objects of a CCC to types and morphisms to $\lambda$-terms (functions). -Giving the details of this construction would be very long, so I would rather avoid giving it here; in any case I think you can find different references on the subject: try googling categorical semantics of simply typed lambda-calculus. -I hope this helps.<|endoftext|> -TITLE: Stochastic processes book suggestions. -QUESTION [8 upvotes]: I would like to find a book that introduces me gently to the subject of stochastic processes without sacrificing mathematical rigor. It would be great if the book has lots of examples and is designed for undergraduates. Just like how there are rigorous undergraduate abstract algebra books (I am thinking of Gallian's Contemporary Abstract Algebra here). -I already studied measure theory and probability theory. -Thank you. - -REPLY [8 votes]: Here is my own (admittedly, personally biased) list.
-I do not claim it is better -than anyone else's list, but at least I do know them all very well, having taught undergraduate stochastic processes courses out of each in various decades. -All are -mathematically rigorous, and all are below the measure-theoretic level. A full dose of lower-division calculus and perhaps a -beginning course in probability would be helpful background. -All of the authors have clearly used much of the material in -real applications, which I think is important for beginners at the undergraduate -level. Listed in order of publication. -1) Appropriate parts of Feller (vol. 1): A classic book with many surprisingly difficult problems, but heavy on intuitive explanations. -2) Parzen: Somewhat unusual approach and collection of topics, but based on real-world concerns, and with intuitive explanations. (Rigorous enough for many years of use at Stanford.) -3) N.T.J. Bailey: Many examples from biological sciences, but -mathematically rigorous. Includes time-continuous processes. -4) Roe Goodman: Meticulously clear, often intuitive. Some attention to simulation, but not as a substitute for rigorous presentation. -More attention to queueing and other time-continuous processes -than in many books accessible to undergraduates.<|endoftext|> -TITLE: Is a finite volume Lie group compact? -QUESTION [5 upvotes]: I know an example of a finite volume homogeneous space which is not compact, $SL_2(\mathbb{R}) / SL_2(\mathbb{Z})$. But what about a Lie group with this property? Can it happen? -(The Lie group is assumed to have the Haar measure.) - -REPLY [6 votes]: Yes, it is true. Here is a more general statement: -Suppose that $G$ is a locally compact group with Haar measure $\mu$. Then $G$ is compact iff it has finite Haar measure. (A.5.1 in Kazhdan's Property (T) by Bekka, de la Harpe, Valette.) -Proof that finite Haar measure implies $G$ is compact: - -Assume $G$ is not compact. -Let $U$ be a compact neighborhood of $e$.
-Since $G$ is not compact, we can inductively build a sequence of $g_i \in G$ so that $g_{n + 1} \not \in \cup_{i = 1}^n g_i U$. -Letting $V$ be a neighborhood of $e$ so that $V^{-1} = V$ and $V^2 \subset U$, then $g_n V \cap g_m V = \emptyset$ if $n \not = m$. (This follows from the construction of the $g_i$ and the choice of $V$.) -Then $\mu(G) \geq \mu ( \bigcup g_n V) = \Sigma \mu(g_n V) = \Sigma \mu(V) = \infty$. -QED - -We use that open sets have positive measure in Haar measure. -The converse is because compact sets have finite Haar measure.<|endoftext|> -TITLE: Prove that there is no element of order $8$ in $SL(2,3)$ -QUESTION [6 upvotes]: Let $SL(2,3)=SL(2,\mathbb{F}_3)$. Prove that there is no element of order $8$ in $SL(2,3)$. - -My attempt: -Let $A$ be a matrix in $SL(2,3)$. -Then $A=U X U^{-1} $ for some invertible $U$, where $X$ is the diagonal matrix of eigenvalues of $A$. Then $A^n=UX^nU^{-1}$, and then we take the cases for different eigenvalues of $A$, but this does not seem to work, as it implies every matrix in $SL(2,3)$ has order at most $2$. -Thanks in advance - -REPLY [8 votes]: Possibly the way that involves the least working with specific matrices and the most group theory: -Lemma1: The $p$-Sylow in $SL_n(\mathbb{F}_q)$ is not normal when $q$ is a power of $p$. -Proof: It is straightforward to check that either of the sets of upper- or lower-triangular matrices with $1$'s on the diagonal is a $p$-Sylow, and since these are distinct, a $p$-Sylow is not normal. -Lemma2: If $|G| = 2^rm$ with $m$ odd and the $2$-Sylow is cyclic, then $G$ has a normal subgroup of order $m$. -Proof: Consider the action of $G$ on itself by left translation as a map to $S_{|G|}$ and note that a generator for a $2$-Sylow corresponds to $m$ disjoint $2^r$-cycles and is hence an odd permutation. This means that the image of $G$ is not contained in $A_{|G|}$ and thus the preimage of $A_{|G|}$ is a subgroup of $G$ of index $2$.
Now the claim follows by induction on $r$ (since a normal subgroup of order $m$ is a Hall subgroup and thus characteristic). -Finally, we see that there is no element of order $8$ in $SL_2(\mathbb{F}_3)$, since this would mean it had a cyclic $2$-Sylow and by Lemma2 thus a normal subgroup of order $3$, i.e. a normal $3$-Sylow, which contradicts Lemma1.<|endoftext|> -TITLE: Is there more than one occurrence of a power of two between twin primes? -QUESTION [9 upvotes]: $2^2$ is between the twin primes $3$ and $5$. Are there any other instances of a power of two between twin primes? If so, how many? -That there are Mersenne primes (primes of the form $2^n-1$) makes this a little more tantalizing, but a brief search didn't spit out any results right away. - -REPLY [2 votes]: Just some basics. Twin primes use the last digits 1 & 3, 7 & 9, and 9 & 1; the prime 5 is an exception to the general rule. The powers of 2 use the last digits 2, 4, 6, and 8. So the only valid powers of 2 will have the last digits 2 or 8, such as $2^3 = 8$; the known case $2^2 = 4$, between 3 and 5, is the exception. Lots of luck trying to find more of them.<|endoftext|> -TITLE: odd prime division -QUESTION [12 upvotes]: Prove that if $p$ is an odd prime then $p$ divides -$\lfloor(2+\sqrt5)^p\rfloor -2^{p+1}$ - -I am struggling to progress with this question. Here is my working out so far: -Page 1 working out -Page 2 working out -I don't know if I'm on the right track or if I'm heading into the abyss. - -REPLY [16 votes]: Let -$$N=(2+\sqrt{5})^p+(2-\sqrt{5})^p.$$ -Note that $N$ is an integer. There are various ways to see this. One can for example expand using the binomial theorem, and observe that the terms involving odd powers of $\sqrt{5}$ cancel. -Because $(2-\sqrt{5})^p$ is a negative number close to $0$, it follows that -$N=\left\lfloor (2+\sqrt{5})^p\right\rfloor$. -In the two binomial expansions, all the binomial coefficients $\binom{p}{k}$ apart from the first and last are divisible by $p$. The first term in each expansion is $2^p$.
-We conclude that $N\equiv 2\cdot 2^p\pmod{p}$, and the result follows.<|endoftext|> -TITLE: Algebraic closure of a perfect field. -QUESTION [5 upvotes]: I don't know if this result is true or not; if it is true, how can I prove it? -$$k \subset \overline{k} \text{ is a Galois extension } \Leftrightarrow k \text{ is a perfect field} $$ -REPLY [5 votes]: This is true. Perfect fields have the nice property that their separable closure (the maximal separable extension of $k$ contained in $\bar{k}$) is exactly their algebraic closure, and both of these are normal extensions, so we have -$k$ perfect $\Rightarrow$ $\bar{k}/k$ Galois. Indeed, perfect fields are the only fields for which the algebraic closure is separable, so if $\bar{k}/k$ is Galois, then $k$ must be perfect.<|endoftext|> -TITLE: "Lifting" fibres of morphism of arithmetic schemes to get rid of "nongeometric" ramification -QUESTION [9 upvotes]: This is a soft question and really a request for pointers towards a certain rigorous formulation of geometric intuition I've had for some "arithmetic schemes". I'm looking for ideas and key references of areas to look into. -Anyway, I've been thinking about the "arithmetic surface" $X = \mathbb{A}^1_{\mathbb{Z}} = \text{Spec}\mathbb{Z}[x]$ and the closed subscheme $Y = \text{Spec}\mathbb{Z}[\sqrt{3}]\subseteq X$ along with the natural morphism $\pi: Y\rightarrow \text{Spec}\mathbb{Z}$. Following the treatment in Eisenbud and Harris' The Geometry of Schemes (p.83) we can draw $Y$ as a scheme fibred over $\text{Spec}\mathbb{Z}$ that looks something like this: - -The fibre above a prime $p\in\mathbb{Z}$ has one of three types: - -Type 1, like the points above $p=11,13$, consists of a fibre with two distinct points.
This occurs in general when the Legendre symbol $(3/p)=1$ and $p$ does not divide the discriminant $12$ of $K=\mathbb{Q}(\sqrt{3})$, and these are reduced points with residue field $\mathbb{F}_p$. This occurs when the ideal $p\mathcal{O}_K$ factors into two distinct prime ideals in the ring of integers $\mathcal{O}_K$ of $K$. I'm not going to talk more about these points - it's the distinction between the following two that counts; -Type 2, like the points above $p=5,7$ and more generally when $(3/p)=-1$ and $p$ does not divide $\text{disc}(K)$, is a fibre consisting of one reduced point when the ideal $p\mathcal{O}_K$ remains prime in $\mathcal{O}_K$. Here the residue field of this point is a quadratic extension $\mathbb{F}_{p^2}$. In my picture - and in Eisenbud/Harris - these are drawn as single points where the generic point of $Y$ ramifies and becomes tangent to a horizontal line; -Type 3, like the fibres above primes $p=2,3$ that divide $\text{disc}(K)$; these are the ramified primes, for obvious geometrical reasons. Here the single point in the fibre is nonreduced but has residue field $\mathbb{F}_p$. For these primes the ideal $p\mathcal{O}_K$ is the square of a prime ideal in $\mathcal{O}_K$. Here in my picture the generic point ramifies and becomes tangent to a vertical line at these points. - -Now I was wondering why, in algebraic number theory, Type 3 primes are called ramified whilst Type 2 ones aren't, given that the fibres "ramify" as point sets above these primes and become singleton sets rather than pairs of distinct points. One intuitive reason to me is that the non-reducedness of the truly ramified points underlies the fact that two distinct fibres are joining together at this point. However, in the Type 2 case these points are reduced, so although these look "ramified" in the picture there is a different phenomenon occurring here.
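For what it's worth, the trichotomy above can be computed mechanically: away from the discriminant, the type is decided by whether $3$ is a square mod $p$, which Euler's criterion detects. A small Python sketch (my own illustration, not part of the geometric question):

```python
def fibre_type(p):
    """Splitting behaviour of the prime p in Z[sqrt(3)] (disc = 12)."""
    if 12 % p == 0:                          # p = 2, 3: ramified
        return "Type 3 (ramified)"
    if pow(3, (p - 1) // 2, p) == 1:         # Euler's criterion for (3/p)
        return "Type 1 (split: two reduced points)"
    return "Type 2 (inert: one point with residue field F_{p^2})"

for p in (2, 3, 5, 7, 11, 13):
    print(p, fibre_type(p))
```

This reproduces the examples drawn in the picture: $p=11,13$ come out as Type 1, $p=5,7$ as Type 2, and $p=2,3$ as Type 3.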
-The way I tried to understand it is this: the generic point of $Y$ consists of two "strands" which truly ramify only at the primes $p=2,3$. When they pass through a Type 2 prime like 5 they each intersect at a pair of distinct Galois-conjugate points $\pm \alpha\in \mathbb{F}_{25}$ each of which is a root of $x^2-3\in\mathbb{F}_5 [x]$. Now in the fibre above $5$, which is isomorphic to $\mathbb{A}^1_5 := \text{Spec}\mathbb{F}_5 [x]$, this pair of elements of $\mathbb{F}_{25}$ constitutes a single point (corresponding to the prime ideal $(x^2-3)$), which is why the fibre is a singleton. However, were we able to "lift" this fibre to $\text{Spec}\mathbb{F}_{25}[x]$ we would see that really there are two branches of this generic point passing through the pair of distinct points $\pm\alpha$, and these only "arithmetically ramify" because they map to the same point under the map $\text{Spec}\mathbb{F}_{25}[x]\rightarrow\text{Spec}\mathbb{F}_5 [x]$. So we could somehow "resolve" this ramification over (the affine line over) a quadratic extension field of $\mathbb{F}_5$ to see that "geometrically" this isn't true ramification. -In contrast, the point in the fibre above a Type 3 prime like $p=2,3$ corresponds to a Galois orbit with one element since it has residue field $\mathbb{F}_p$, and represents the "geometric" branches of the generic point of $Y$ genuinely coming together at this point. To me it seems that these points are where real ramification occurs, because no "lift" to an extension field can separate the branches of the generic point. -So my questions are: - -Is this geometric picture the reason ramified (Type 3) primes are so named?
-For Type 2 primes, is there a formal construction for this "lifting" process that I described, whereby we "resolve" the ramification at Type 2 primes by lifting the whole of Y to another scheme on which the fibres above Type 2 primes are isomorphic to/contain $\text{Spec}\mathbb{F}_{p^2}[x]$ and hence distinguish these Galois-conjugate points? - -REPLY [3 votes]: Yes, this is correct. Whenever we define a property $P$ of a morphism $f:X\to Y$, one of the FIRST things one verifies, to make sure it's a reasonable class, is that - -a) Property $P$ is closed under composition. -b) Property $P$ is closed under base change. -Namely, whatever it should mean for a morphism $f:X\to Y$ to be 'unramified', it certainly should be the case that $f_S:X_S\to S$ is also 'unramified' for any $Y$-scheme $S\to Y$. -In particular, for a morphism $f:X\to\text{Spec}(k)$, it should be true that if $f$ satisfies $P$ then $f_{\overline{k}}:X_{\overline{k}}\to \text{Spec}(\overline{k})$ satisfies $P$. -In particular, for ramifiedness, one can truly get a GEOMETRIC picture (i.e. ramified morphisms are those where strands intersect) only in a GEOMETRIC setting. Namely, working rationally (i.e. over $\text{Spec}(k)$) and expecting to visually be able to 'see' a property is a little misguided—all the points of the situation aren't 'visible' yet, you have to first move to $\text{Spec}(\overline{k})$. -That said, there are many, many beautiful scheme-theoretic definitions of unramifiedness that one can easily verify agree with our intuitive definition when $f:C\to C'$ is a map of curves over an algebraically closed field (the situation that you are most thinking about). -For example, let us make the definition that a map $f:C\to C'$ of curves over $\text{Spec}(\overline{k})$ is unramified if for every closed point $y\in C'$ the fiber $f^{-1}(y)\to \text{Spec}(k(y))$ is reduced—that it's just a disjoint union of copies of $\text{Spec}(\overline{k})$.
This is intuitively what we want since we think that a point $x\in f^{-1}(y)$ of ramification should 'count' for more than just a point, since it's picking up the data of multiple strands, and so we expect that point in the fiber to have global sections of dimension greater than $1$, which, since we're over $\overline{k}$, can only happen if that point is non-reduced. -If we're NOT working over $\overline{k}$, but just $k$, then it's obvious how to fix this. Namely, $f:C\to C'$ non-constant should be unramified when $f^{-1}(y)\to\text{Spec}(k(y))$ is geometrically reduced. Thus, perhaps a point has global sections of dimension greater than $1$, but not because it has multiple strands meeting at that point, but because it's a Galois orbit of points glued together, which would separate into distinct strands geometrically. -So, let us make the preliminary definition that a finite map $f:X\to Y$ of Dedekind schemes (i.e. integral normal schemes of dimension $1$) of finite type over $\text{Spec}(\mathbb{Z})$ is unramified if for all $y\in Y$ the scheme $f^{-1}(y)\to\text{Spec}(k(y))$ is geometrically reduced, carrying the same intuition we had for curves over a field. -Let us then verify that this definition agrees with the one we learn in number theory. Namely, let's assume, for the time being, that $X$ and $Y$ are affine (we can reduce to this case since $f$ is affine). Namely, let's assume that $Y=\text{Spec}(A)$ and $X=\text{Spec}(B)$. Then, for a non-zero prime $\mathfrak{p}\in Y$ we know that $f^{-1}(\mathfrak{p})\subseteq X$ is given by -$$\text{Spec}(B\otimes_{A}(A/\mathfrak{p}))=\text{Spec}(B/\mathfrak{p}B)$$ -Let us then use that $B$ is Dedekind to factor $\mathfrak{p}B$ as follows -$$\mathfrak{p}B=\mathfrak{P}_1^{e_1}\cdots\mathfrak{P}_m^{e_m}$$ -with $\mathfrak{P}_i$ distinct primes of $B$. -We claim that the $k(\mathfrak{p})=A/\mathfrak{p}$-algebra $B/\mathfrak{p}B$ is geometrically reduced if and only if $e_i=1$ for all $i$.
-Indeed, -$$B/\mathfrak{p}B\cong \prod_i B/\mathfrak{P}_i^{e_i}$$ -and so -$$(B/\mathfrak{p}B)\otimes_{k(\mathfrak{p})}\overline{k(\mathfrak{p})}=\prod_i B_{\overline{k(\mathfrak{p})}}/\mathfrak{P}_i^{e_i}B_{\overline{k(\mathfrak{p})}}$$ -Thus, if $e_i>1$ for some $i$, this is evidently still nonreduced. -Suppose now that all the $e_i=1$, so that $B/\mathfrak{p}B$ above is isomorphic to a product with terms $B/\mathfrak{P}_i$. Note that each of these is a finite separable extension of $k(\mathfrak{p})$ (here I am using that we're finite type over $\text{Spec}(\mathbb{Z})$, so these are both finite fields!). Then, by the primitive element theorem -$$B/\mathfrak{P}_i=k(\mathfrak{p})[x]/(f(x))$$ -for some separable polynomial $f(x)\in k(\mathfrak{p})[x]$. Then, $(B/\mathfrak{P}_i)\otimes_{k(\mathfrak{p})}\overline{k(\mathfrak{p})}$ is just $\overline{k(\mathfrak{p})}[x]/(f(x))$. Since $f(x)$ was separable it becomes a product of distinct linear factors in $\overline{k(\mathfrak{p})}$, which implies that $\overline{k(\mathfrak{p})}[x]/(f(x))$ is isomorphic to a product of copies of $\overline{k(\mathfrak{p})}$, and so reduced. -Thus, our geometric definition, that we can geometrically see ramification (the coming together of strands) by reducedness of fibers, agrees with the arithmetic definition! -NB: There are two things that I glossed over in the above. First, I didn't talk about the fiber over the generic points of $X$ and $Y$. What do you think happens there? Secondly, I made the artificial assumption that we were finite type over $\text{Spec}(\mathbb{Z})$ so that I didn't have to talk about assuming that the residue field extensions were separable. So, you can now guess the correct definition for an arbitrary map of Dedekind schemes—that the fibers are a product of separable extensions of $k(y)$ (or $k(\mathfrak{p})$). -Let me give one more possible nice interpretation of unramifiedness, and let you work out the details yourself.
-So, let's suppose that we have a surjective finite map $X\to Y$ of Dedekind schemes. Then, we imagine that $f$ should be unramified at $x\in X$ if 'locally around $x$' the map $f$ is an isomorphism. -Here is one way of interpreting this. Since $X$ is a Dedekind scheme, the ring $\mathcal{O}_{X,x}$ is a DVR and so its maximal ideal $\mathfrak{m}_x\subseteq\mathcal{O}_{X,x}$ is principal, say with uniformizer $t_x$. We then think about $t_x$ as being a COORDINATE of $X$ at $x$—a chart. Then, one way to think about $f$ being a local isomorphism at $x$ is that if we take the chart $t_y$ at $y=f(x)$ (defined in the same way) then '$t_y\circ f$' should be a chart at $x$—that it should generate the same ideal as $t_x$. -Thus, another intuitive definition of unramifiedness is that for all $x\in X$ the map $\mathcal{O}_{Y,f(x)}\to \mathcal{O}_{X,x}$ has the property that -$$\mathfrak{m}_y\mathcal{O}_{X,x}=\mathfrak{m}_x$$ -that the chart at $y$ is still, after being pulled back by $f$, a chart. -I leave it to you to check that this is also equivalent, in the arithmetic situation, to the usual definition of being unramified, that there are no higher powers of primes showing up in the factorization. -Let me end with two more remarks: -a) All of the above, in some sense, has been a ruse. Namely, by using the crutch that we were dealing with Dedekind schemes, we were able to ignore an important property—flatness. Namely, what you're picturing in your head is not really unramifiedness, it's étaleness (google this!). Although, frankly, this doesn't really factor into your picture because every morphism we picture naturally is flat! -So, if you want to learn more about this topic and the beautiful, beautiful details it entails, I HIGHLY recommend picking up the book Galois Groups and Fundamental Groups by Szamuely. -b) I can't resist saying this. Another interpretation of unramifiedness (again, I'm lying!
I really mean étaleness, but this is the same for surjective maps of Dedekind schemes) one might take is that the morphism is `smooth of relative dimension $0$'. If one understands enough algebraic geometry, one can see that this really means that $f$ is flat and that $\Omega_{X/Y}$, the relative cotangent sheaf, is zero. -So, where is this cotangent sheaf showing up arithmetically? Well, if $\Omega_{X/Y}$ being zero is measuring unramifiedness, then ramifiedness has to do with the points $x\in X$ where $(\Omega_{X/Y})_x\ne 0$. Or, thought about differently, we want to know that the annihilator of $\Omega_{X/Y}$ is the whole ring (at least in the case that $X$ and $Y$ are affine). -But, what is the annihilator of $\Omega_{X/Y}$? It's the different ideal! The mysterious ideal which dictates the ramification properties of our map/extension. - -I think this is answered in my above post. Basically, one just wants to move everything to the geometric situation over the algebraic closure of the residue field of the point in the target.<|endoftext|> -TITLE: $A=\{1,2,3,4,5\}$, $B=\{0,1,2,3,4,5\}$. Find the number of one-one functions $f:A\rightarrow B$ such that $f(i)\neq i$ and $f(1)\neq0,1$ -QUESTION [5 upvotes]: $A=\{1,2,3,4,5\}$, $B=\{0,1,2,3,4,5\}$. Find the number of one-one functions $f:A\rightarrow B$ such that $f(i)\neq i$ and $f(1)\neq 0, 1$. - -This is like finding the number of ways of putting $r$ letters in $r$ envelopes such that all letters are in the wrong envelopes. The formula for the number of such arrangements is $$r!\left(1-\frac{1}{1!}+\frac{1}{2!}-\cdots(-1)^r\frac{1}{r!}\right)$$ -Here, $A$ is the set of envelopes and $B$ is the set of letters. - -If $0$ is not included in the range of $f$, the number of functions is $44$ using the formula -If $1$ is not included in the range of $f$, the number of functions is also $44$ using the formula -If $0,1$ both are included in the range, I can't use the formula. How can I find the number of such possibilities?
- -REPLY [5 votes]: I approached it a bit differently, solving a slightly different and more general problem first and using that to get the result. -Let $A_n=\{1,\ldots,n\}$ and $B_n=\{0,1,\ldots,n\}$. Let $a_n$ be the number of one-one functions $f:A_n\to B_n$ such that $f(i)\ne i$ for each $i\in A_n$. - -If $d_n$ is the number of derangements of $A_n$ (so that, for instance, $d_5=44$), show that $$a_n=d_n+na_{n-1}$$ for $n\ge 1$. - -(We take $A_0=\varnothing$ and note that the empty function is a one-one function from $A_0$ to $B_0=\{0\}$ that has no fixed points, so $a_0=1$.) This makes it quite easy to evaluate $a_n$ recursively for small $n$. -Now your problem is a little different. You’ve already seen that there are $d_5$ one-one functions $f:A_5\to B_5$ such that $0\notin\operatorname{ran}f$. Suppose now that $f:A_5\to B_5$ is one-one, and $0$ is in the range of $f$, but $f(1)\ne 0$. There must be a $k\in\{2,3,4,5\}$ such that $f(k)=0$. The rest of $f$ must be a one-one function from $A_5\setminus\{k\}$ to $A_5$ such that $f(i)\ne i$ for $i\in A_5\setminus\{k\}$. - -Explain why there are $a_4$ ways to choose a one-one function from $A_5\setminus\{k\}$ to $A_5$ such that $f(i)\ne i$ for $i\in A_5\setminus\{k\}$. -Conclude that the answer to your question is $d_5+4a_4$, and calculate it numerically. - - -As an aside, the numbers $a_n$ are the sequence OEIS A000255; it turns out that $a_n$ is the integer closest to -$$\frac{(n+2)n!}e\;,$$ -which can be written -$$\left\lfloor\frac{(n+2)n!}e+\frac12\right\rfloor\;.$$ -Added 29 January 2022: It’s not hard to generalize the argument to see that if $b_n$ is the number of one-one functions $f:A_n\to B_n$ such that $f(i)\ne i$ for $i\in A_n$ and $f(1)\ne 0$, then $b_n=d_n+(n-1)a_{n-1}$.
-Since $d_n$ is the integer closest to $\frac{n!}e$, which can be written $\left\lfloor\frac{n!}e+\frac12\right\rfloor$, we have -$$b_n=\left\lfloor\frac{n!}e+\frac12\right\rfloor+(n-1)\left\lfloor\frac{(n+1)(n-1)!}e+\frac12\right\rfloor\,.$$<|endoftext|> -TITLE: If $u \in H^s(\mathbb{R}^n)$ for $s > n/2$, then $u \in L^\infty(\mathbb{R}^n)$? -QUESTION [8 upvotes]: How do I use the Fourier transform to see that if $u \in H^s(\mathbb{R}^n)$ for $s > n/2$, then $u \in L^\infty(\mathbb{R}^n)$, with the bound$$\|u\|_{L^\infty(\mathbb{R}^n)} \le C\|u\|_{H^s(\mathbb{R}^n)},$$the constant $C$ depending only on $s$ and $n$? - -REPLY [5 votes]: I think it is worth mentioning a proof using the Fourier analytic definition of $H^s$, if only for its succinctness. -We have -$$ -\| u \|_{L^\infty} \leq \| \hat{u} \|_1 \leq \| \langle \xi \rangle^{-s} \|_2 \| \langle \xi \rangle^s \hat{u} \|_2 \leq C \| u \|_{H^s}.$$ -Here $\langle \xi \rangle = \sqrt{1 + \lvert \xi \rvert^2}$. Interpolation with $L^2$ implies -$$ -\| u \|_p \leq C(s) \| u \|_{H^s} \quad \forall 2 \leq p \leq \infty. -$$<|endoftext|> -TITLE: Original usage of 'Bénabou cosmos' -QUESTION [6 upvotes]: A (Bénabou) cosmos is a bicomplete closed symmetric monoidal category (see, for example, the $n$Lab). -However, I can't find the paper where Bénabou first uses this term - googling turns up nothing. -Does anybody know where it is first used, or how I could find out? -Edit: having read through all the summaries of papers by Bénabou that I could find, I couldn't find any references to the word cosmos. -This search is not going well...
-The following was kindly pointed out to me by someone else in an email I read today, and, lest I forget this topic because of other things, I'll quickly incorporate the email, lightly redacted, into an answer to the opening poster. -It seems to me, though I cannot quite say why, that the most polite modus operandi is to not acknowledge my correspondent by name, and to trust that they'll let me know if they wish to be so acknowledged. Citing the email seems the right form of acknowledgement here. - -[A relevant reference] is Street's Elementary Cosmoi (https://link.springer.com/chapter/10.1007/BFb0063103), p. 134 (first page): -[Street writes:] "Our use of the word 'cosmos' is presumptuous. To Bénabou the word means 'bicomplete symmetric monoidal closed category', such categories V being rich enough so that the theory of categories enriched in V develops to a large extent as the theory of ordinary categories." -There are three papers (co)authored by Bénabou in the references; I'm not sure which one mentions the word. - -I can offer a minuscule further piece of relevant information for this open question: - -of the three references mentioned in [Street: Elementary cosmoi], the reference [J. Bénabou and J. Roubaud, Monades et descente. C.R. Acad. Sc. Paris 270 (1970) 96–98] is not where you'll find an answer: I translated this article here, and 'cosmos' does not appear therein. - -So it seems it remains for you to read the following two: - -J. Bénabou, Introduction to bicategories. Lecture Notes in Mathematics 47 (1967) 1–77 - -J. Bénabou, Catégories avec multiplication. C.R. Acad. Sc. Paris 256 (1963) 1887–1890 - - -I haven't looked into those. Good luck finding it.<|endoftext|> -TITLE: What has a chain homotopy to do with homotopy? -QUESTION [7 upvotes]: I'm in an intro algebraic topology class.
In the textbook Algebraic Topology (Hatcher), a chain homotopy is defined by saying that a map $P$ is a chain homotopy between two maps $f$ and $g$ if $dP + Pd = g - f$, where $d$ is the boundary map in a chain complex. -What I don't understand is what this has to do with homotopy. In what sense is a chain homotopy a homotopy? - -REPLY [11 votes]: This is a very good question, so let me give a lowbrow answer and a highbrow answer. -The lowbrow answer is that homotopies of topological spaces induce this relation on their associated chain complexes (the ones you use to define singular homology, for example), so it makes sense to call this relation homotopy. See the proof of homotopy invariance of homology in Hatcher's text, for example. -The highbrow answer is that one can axiomatize homotopy theory, which gives the notion of a model category. In a model category, we use an "interval object" to define the notion of homotopy. For example, in the category of topological spaces, the interval object is $[0,1]$. It turns out one can define an interval object for the category of chain complexes, and if one works out the abstract definition of homotopy in the category of chain complexes, we get exactly the definition you gave. 
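To see the identity in action, here is a toy numerical check (a made-up example of mine, not from Hatcher or the model-category literature): take both chain complexes to be $\mathbb Z \xrightarrow{2} \mathbb Z$, so every map is just multiplication by an integer, and the identity $dP+Pd=g-f$ can be verified degree by degree.

```python
# Toy two-term complexes C = D = (Z --d--> Z); all maps are integer scalars.
d = 2            # the boundary map C_1 -> C_0 (the same in both complexes)
f1, f0 = 1, 1    # a chain map f, one component per degree
g1, g0 = 3, 3    # another chain map g
P = 1            # candidate chain homotopy P : C_0 -> D_1

# chain map condition: f and g commute with the boundary
assert f0 * d == d * f1 and g0 * d == d * g1
# the identity dP + Pd = g - f, checked in each degree:
assert d * P == g0 - f0   # degree 0: only the dP term is present
assert P * d == g1 - f1   # degree 1: only the Pd term is present
```

In degree 0 there is no incoming $Pd$ term and in degree 1 no outgoing $dP$ term, which is why each line checks a single product.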
-For more on the interval object, see here.<|endoftext|> -TITLE: Evaluation of $\lim_{x\rightarrow \infty}\left\{\left[(x+1)(x+2)(x+3)(x+4)(x+5)\right]^{\frac{1}{5}}-x\right\}$ -QUESTION [14 upvotes]: Evaluation of $\displaystyle \lim_{x\rightarrow \infty}\left\{\left[(x+1)(x+2)(x+3)(x+4)(x+5)\right]^{\frac{1}{5}}-x\right\}$ - -$\bf{My\; Try::}$ Here $(x+1)\;,(x+2)\;,(x+3)\;,(x+4)\;,(x+5)>0\;,$ when $x\rightarrow \infty$ -So using $\bf{A.M\geq G.M}\;,$ we get $$\frac{x+1+x+2+x+3+x+4+x+5}{5}\geq \left[(x+1)(x+2)(x+3)(x+4)(x+5)\right]^{\frac{1}{5}}$$ -So $$x+3\geq \left[(x+1)(x+2)(x+3)(x+4)(x+5)\right]^{\frac{1}{5}}$$ -So $$\left[(x+1)(x+2)(x+3)(x+4)(x+5)\right]^{\frac{1}{5}}-x\leq 3$$ -and equality holds when $x+1=x+2=x+3=x+4=x+5\;,$ where $x\rightarrow \infty$ -So $$\lim_{x\rightarrow \infty}\left[\left[(x+1)(x+2)(x+3)(x+4)(x+5)\right]^{\frac{1}{5}}-x\right]=3$$ -Can we solve the above limit in that way? If not, then how can we calculate it? -Please also explain where I have gone wrong in the above method. -Thanks - -REPLY [3 votes]: $$\lim _{t\to 0}\left(\left[\left(\frac{1}{t}+1\right)\left(\frac{1}{t}+2\right)\left(\frac{1}{t}+3\right)\left(\frac{1}{t}+4\right)\left(\frac{1}{t}+5\right)\right]^{\frac{1}{5}}-\frac{1}{t}\right) = \lim _{t\to 0}\left(\frac{\sqrt[5]{1+15t+85t^2+225t^3+274t^4+120t^5}-1}{t}\right) $$ -Now we use the Taylor expansion at first order -$$= \lim _{t\to 0}\left(\frac{1+3t-1+o(t)}{t}\right) = \color{red}{3}$$<|endoftext|> -TITLE: Is it possible to find inflection points by setting the first derivative to 0? -QUESTION [5 upvotes]: I have the following -$$y = \frac{x^2}{2}-\ln x$$ -$$y'= x - \frac1x$$ -I learned that inflection points were found by setting the $2^{nd}$ derivative equal to $0$, however, if I do that in this case I would get $i$, and I already checked and such is not possible in this case. However, when I set the $1^{st}$ derivative equal to $0$ I get, -$$x = \pm1$$ -as possible inflection points which makes more sense. 
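(For reference, this is the quick check I did, written as a plain-Python numerical sketch; the sample grid is an arbitrary choice of mine:)

```python
def f1(x): return x - 1 / x         # first derivative of x^2/2 - ln x
def f2(x): return 1 + 1 / x ** 2    # second derivative

# the stationary point x = 1 (only x > 0 lies in the domain of ln x)
assert abs(f1(1.0)) < 1e-12
# the second derivative stays positive on the sampled part of the domain,
# so setting it to 0 gives no real solution
assert all(f2(0.01 * k) > 0 for k in range(1, 1001))
```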
-Do I have a misconception as to how to find inflection points? Or am I missing something? - -REPLY [9 votes]: An inflection point is where the sign of curvature changes, from concave up to down or vice versa, hence the necessity of the vanishing second derivative. -But $f(x) = \frac{x^2}{2} - \ln x$ is concave up (convex) before and after the stationary point $x = 1$. This can be seen from the graph, or by noting that for all $x$ in the domain $x > 0$, $$f''(x) = 1 + \frac{1}{x^2} > 0$$ -So there can't be any inflection point.<|endoftext|> -TITLE: What is the chance that a rabbit won't fall off a table if you put it somewhere and it moves. -QUESTION [45 upvotes]: If you would put a rabbit randomly on a circular table with radius $r= 1$ meter and it moves $1$ meter in a random direction, what is the chance it won't fall off? -I tried to do this using integrals, but then I noticed you need a double integral or something and since I'm in the 5th form I don't know how that works. - -REPLY [2 votes]: Let Roger Rabbit's location on the circular table be defined by the point $(X,Y)$, where random variables $X$ and $Y$ have a bivariate Uniform distribution inside the unit circle, with joint pdf $f(x,y)$: - -By symmetry, we can assume, without loss of generality, that the rabbit jumps say 1m to the east (as Steve Jessop has noted above). Then, the probability that Roger Rabbit still lands on the table is $P[(X+1)^2 + Y^2 \leq 1]$: - -All done. -Notes - -The Prob function used above is from the mathStatica package for Mathematica. As disclosure, I should add that I am one of the authors.<|endoftext|> -TITLE: How to prove $b=c$ if $ab=ac$ (cancellation law in groups)? -QUESTION [23 upvotes]: I want to prove for a group $G$, that if -$$a\circ b =a\circ c$$ then this is true: $$b=c$$ -I started with $b=b\circ e$, but this didn't help me at all. -Next I tried with this: -$$(a\circ b)\circ c=a\circ (b\circ c)$$ but I don't know/understand how to go further. 
How can I prove this equation? - -REPLY [4 votes]: By the group properties each element has an inverse. So you can just multiply your equation on the left by $a^{-1}$.<|endoftext|> -TITLE: Irreducible elements in $\mathbf Z[\sqrt{-3}]$ -QUESTION [6 upvotes]: The ring $A:=\mathbb Z[\sqrt{-3}]$ is the prototype of the rings usually used in a first algebraic number theory course to show the difference between prime and irreducible elements. -I was wondering if I could list all prime and irreducible elements of this ring, instead of just giving examples showing they are different sets. -Since every element $x$ in $A$ divides a nonzero integer (its norm $N(x):=x \cdot \overline{x}$), one can restrict the search to divisors of integers. -For prime elements the situation is easy: almost by definition, we find all prime elements in $A$ as divisors of the primes in $\mathbb Z$. One sees that a prime $p\in \mathbb Z$ remains prime in $A$ iff $-3$ is not a square mod $p$ (this can be made more explicit using quadratic reciprocity...). If $-3$ is a square, we get two prime factors. -For irreducible elements, can we make the list (as explicit as possible) of all such elements? -Of course there are all elements whose norm is not a (non trivial) product of elements of the form $a^2+3b^2$. It's not clear to me that these are the only ones. -Can we make it explicit? - -REPLY [4 votes]: Mr. Brooks was on the right track for the first two paragraphs of his answer. What he needed to do next, after clarifying the part about this subdomain not being integrally closed, was to clarify the difference between irreducibles that are also prime and irreducibles that are not prime. -If whenever $p = ab$ either $a$ or $b$ is a unit, then $p$ is irreducible. But if also whenever $p \mid ab$ either $p \mid a$ or $p \mid b$ holds true (maybe both), then $p$ is also prime. -So we see that in $\mathbb{Z}[\sqrt{-3}]$ (a subdomain of $\mathbb{Z}[\omega]$) the number 2 is irreducible. 
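One can also see this irreducibility by brute force, using the multiplicative norm $N(a+b\sqrt{-3})=a^2+3b^2$: a proper factorization of 2 would need a factor of norm 2, and that norm is never attained. A small Python sketch of my own (the search window is far larger than needed, since $a^2+3b^2=2$ already forces $|a|\le 1$ and $b=0$):

```python
# All norms a^2 + 3b^2 attained in a window around the origin.
norms = {a * a + 3 * b * b for a in range(-10, 11) for b in range(-10, 11)}

assert 2 not in norms   # no element of norm 2, hence 2 is irreducible
assert 4 in norms       # N(2) = 4, and also N(1 - sqrt(-3)) = N(1 + sqrt(-3)) = 4
```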
It is also irreducible in $\mathbb{Z}[\omega]$, as are all primes $p$ of $\mathbb{Z}^+$ satisfying $p \equiv 2 \pmod 3$. -And yet $4 = 2^2 = (1 - \sqrt{-3})(1 + \sqrt{-3})$, but $$\frac{1 - \sqrt{-3}}{2} \not\in \mathbb{Z}[\sqrt{-3}]$$ and likewise $$\frac{1 + \sqrt{-3}}{2} \not\in \mathbb{Z}[\sqrt{-3}].$$ -But both of these numbers are in $\mathbb{Z}[\omega]$, one is $-\omega$, the other is $-\omega^2$. And since $N(\omega) = 1$ (as Mr. Brooks already showed), then $(1 - \sqrt{-3})(1 + \sqrt{-3})$ is an incomplete factorization of 4 in $\mathbb{Z}[\omega]$. -So what I think you're actually looking for is numbers of the form $a^2 + 3b^2$ that are composite in $\mathbb{Z}$ and divisible by some positive purely real integer $p \equiv 2 \pmod 3$. -My hunch is that only 2 fits this bill, since, for example, $5^2 + 3 \times 1^2 = 28 = 2^2 \times 7$, but $7 \equiv 1 \pmod 3$ and $(2 - \sqrt{-3})(2 + \sqrt{-3}) = 7$. I will continue to ponder this as I wait for further clarification from you.<|endoftext|> -TITLE: Is this function nowhere analytic? -QUESTION [39 upvotes]: One usually sees $f(x):=\exp\frac{-1}{x^2}$ as an example of a $C^\infty$ function that is not analytic, having one point of non-analyticity (the point $0$). -The Fabius function is a canonical example of a $C^\infty$ function that is non-analytic on a continuum. -Consider now the (real) function $f(x)=\exp\frac{-1}{x^2}$ from above. With the understanding that $f$ is a bounded function and all derivatives of $f$ are bounded, define -$$g(x):=\sum_n 2^{-n}\ f(x-a_n)$$ -Where $a_n$ is an enumeration of $\mathbb Q$. We get again a $C^\infty$ function, as uniform convergence of the sum and of the sum of derivatives follows from all derivatives (of $f$) being bounded. 
-It looks like $g$ is also nowhere analytic, since the set of points of non-analyticity of all the summands together is $\mathbb Q$, which is dense in $\mathbb R$ (if a function is analytic at $p$, there exists an open neighbourhood of $p$ on which it is also analytic). -But a proof is something different, and maybe, since we are putting non-analyticities arbitrarily close together, the non-analytic parts cancel at some points. -Is $g$ nowhere analytic? - -REPLY [18 votes]: This is not a solution, but rather some history about the problem I uncovered while looking through my personal "library" this morning and from some brief on-line searches just now. [2] appears to give a rigorous proof of a slightly more general result, and I can email anyone interested a .pdf copy of [2]. See my stackexchange profile for my email address. In particular, I encourage someone to use this paper to write up a careful proof of the result, preferably a proof for the specific case that s.harp asked about (which will decrease the notational clutter involved in dealing with additional generalizations given in [2]). -As I mentioned in a comment above, this is basically Problem 1 on p. 2 of Bishop/Crittenden's 1964 book Geometry of Manifolds. -Maury Barbato asked how to do the Bishop/Crittenden problem in a 29 October 2009 sci.math post. That sci.math thread has 16 other posts in it, including posts by several very respectable sci.math participants, and no one was able to come up with a rigorous proof. Maury Barbato made a follow-up post on 12 November 2009 where he said that the problem remains unsolved for him. -This morning I found the following two relevant items in my folders of papers on this topic. -[1] Editorial Note, Infinitely differentiable functions which are nowhere analytic [Solution to Advanced Problem #5061], American Mathematical Monthly 70 #10 (December 1963), 1109. - -Editorial Note. I. N. 
Baker proves that the following function $F(x)$ is infinitely differentiable but not analytic anywhere on the real axis: $\sum_{n=1}^{\infty}2^{-n}f_n(x),$ where $f(x) = \exp(-1/x^2),$ $x \neq 0,$ $f(0)=0,$ and $f_n(x) = f(x - p_n),$ with $p_n$ the rational numbers in a sequence. Many examples are in the literature. The following references are cited by readers: $[[\cdots]]$ - -[2] Paweł Grzegorz Walczak, A proof of some theorem on the $C^{\infty}$-functions of one variable which are not analytic, Demonstratio Mathematica 4 #4 (1972), 209-213. MR 49 #504; Zbl 253.26011 - -(from top of p. 212) As a corollary of the theorem we prove that a well known function defined by formula (2) when $r$ is the set of all rational numbers and -$$\varphi(x)=\begin{cases} e^{-\frac{1}{x}} & \text{ when } x > 0 \\ 0 & \text{ when } x \leq 0 \end{cases}$$ -is a $C^{\infty}$-function which is not analytic at any point of $R.$ - -Walczak's paper has only two references --- the 1950 Russian edition of Markushevich's book "Theory of Analytic Functions" and Bishop/Crittenden's 1964 book "Geometry of Manifolds". Markushevich's book is only cited for a standard fact about bounds on the magnitudes of the derivatives of a function that is analytic on a specified bounded open interval. Bishop/Crittenden's book is not cited anywhere as far as I can tell, which I suspect was an editing oversight in the final draft of the paper. My guess is that Walczak's paper arose from a student project to give a rigorous proof of the claim made in Problem 1 on p. 2 of Bishop/Crittenden's book, although the paper gives no explicit mention of its purpose (aside from stating what is to be proved). I have sent an email to Walczak asking him what led him to write the paper, and I will give an update if/when I get a reply from him. -I am fairly certain that Bishop/Crittenden underestimated the difficulty of their Problem 1. 
Indeed, when their book was reprinted (with corrections) by AMS Chelsea in 2001, Problem 1 was replaced with -$$f(x) \; = \; \sum_{n=1}^{\infty} 2^{-2^{n}}\exp\left(-\csc^2\left(2^{n}x\right)\right)$$ -along with the comment "This replacement of the problem given in the first edition was formulated by Eric Bedford of Indiana University." -(NEXT DAY UPDATE) I have heard back from Paweł Walczak and my guess about the origin and context of his paper was correct. For those who might be interested, below is a description of what is proved in the paper. In what follows I have tried to convey exactly what is done mathematically, but the wording is my own and it differs quite a bit from the original wording. -Walczak's paper proves the following result and then gives a specific illustration of this result. (Here I use ${\mathbb R},$ $Q,$ ${\delta},$ where Walczak uses $R,$ $r,$ ${\delta}_{0}$ but otherwise the notation is essentially the same.) Let $Q = \{r_1,r_2,r_3,\ldots \}$ be an injectively-indexed countably infinite subset of ${\mathbb R}$ (i.e. $i \neq j$ implies $r_i \neq r_{j})$ and let $\{a_n \}$ be a sequence of nonzero real numbers such that $\sum_{n=1}^{\infty}|a_n| < \infty.$ Let $\varphi : {\mathbb R} \rightarrow {\mathbb R}$ be bounded on $\mathbb R$ and $C^{\infty}$ on $\mathbb R$ and real-analytic on ${\mathbb R} - \{0\}.$ Assume there exist $\delta > 0$ and $A > 0$ and $L > 0$ such that, for each $x \in \mathbb R$ with $|x| > A$ and for each $k \in \{0,1,2,\ldots\},$ we have $|\varphi^{(k)}(x)| < L \cdot k! 
\cdot {\delta}^{-k}.$ Finally, define $f: {\mathbb R} \rightarrow {\mathbb R}$ by $f(x) = \sum_{n=1}^{\infty}a_{n}\varphi(x-r_{n}).$ Then $f$ is $C^{\infty}$ at each $x \in {\mathbb R},$ and $f$ is real-analytic at each $x \in {\mathbb R} - \overline{Q},$ and $f$ is NOT real-analytic at each $x \in \overline{Q},$ where $\overline{Q}$ is the topological closure of $Q$ in ${\mathbb R}.$ -As a corollary Walczak shows that the assumptions above hold if we let $Q = \mathbb Q$ and $\varphi(x)=\begin{cases} e^{-\frac{1}{x}} & \text{ when } x > 0 \\ 0 & \text{ when } x \leq 0. \end{cases}$ Doing this gives us a function $f:{\mathbb R} \rightarrow {\mathbb R}$ that is $C^{\infty}$ and nowhere real-analytic. Indeed, as Walczak mentions at the bottom of p. 211, given any (infinite) closed set $E \subseteq \mathbb R$ (the case for finite closed sets is easy without Walczak's result) and letting $Q$ be a countable dense subset of $E,$ we can get a $C^{\infty}$ function $f: {\mathbb R} \rightarrow {\mathbb R}$ that is real-analytic at each $x \in {\mathbb R} - E$ and NOT real-analytic at each $x \in E.$ -(2 DAYS AFTER LAST UPDATE) A couple of days ago, shortly after my last update, I sent an email to Eric Bedford in which I mentioned this stackexchange web page and asked if he had anything to add to what I have written. Bedford said that during Fall 1969 or Spring 1970 (in Fall 1970 he began graduate school at University of Michigan) he worked through a large portion of Bishop/Crittenden's 1964 book in a reading course with Bishop, and at this time he came up with a replacement function for Problem 1. He did not actually say whether the function he came up with then is the same function that appears in the 2001 edition of the book. 
However, I strongly suspect it was the same function, because the function in the 2001 book is essentially the same function I have written on a piece of paper (which took me over an hour to locate this morning, by the way) that he gave me in his office in Fall 1982, a day or two after I had asked him in a class meeting (a first semester graduate complex analysis course I was taking at that time from him) whether there exists a function that is $C^{\infty}$ and nowhere analytic. He had just given us the $\exp(-1/x^2)$ example in class, and it seemed natural to me to wonder whether a $C^{\infty}$ function can actually be nowhere analytic (in analogy with the fact that a continuous function can be nowhere differentiable). I seem to recall that he said, when I asked the question, something to the effect that he was pretty sure he had an example, but he didn't remember the exact formulation and needed to look through his stuff in his office for it. -For what it's worth, here is the exact formulation---the exact same symbols and grouping symbols and such---of what is on this piece of paper from Fall 1982: -$$f(x) \; = \; \sum_{n=0}^{\infty} 2^{-2^{n}}\exp\left[\frac{-1}{\left(\sin\left(2^{n}x\right)\right)^{2}}\right]$$<|endoftext|> -TITLE: If I take a Ring modulo a prime-like element, does it become a field? -QUESTION [6 upvotes]: Let's say we have a ring, $R$. We call an element $p$ of $R$ "prime-like" if for all $a$ and $b$ such that $p=ab$, exactly one of $a$ or $b$ is multiplicatively invertible. For example, in the integers, $-7$ is prime-like, because in $-7=-1\cdot 7$ and $-7=1\cdot(-7)$, exactly one of the factors is invertible. For real polynomials, $x^2+1$ is prime-like, because it can only be expressed as a product $(\frac1ax^2+\frac1a)\cdot a$ for a nonzero constant $a$, and $a$ is invertible and $\frac1ax^2+\frac1a$ is not. 
Note that $0$ is never prime-like since $0=0*0$ (both are noninvertible), and an invertible element $w$ is never prime-like since $w = w*1$ (both $w$ and $1$ are invertible). -We take a ring modulo an element $r$ by taking an equivalence relation where $r_1=r_2$ in the new ring if there is an $r_3$ such that $r_1-r_2=r_3*r$ in the old ring. This new object can be shown to in fact be a ring, and there is a ring homomorphism from the old ring to the new ring ($r$ gets mapped to $0$). -If $r$ is prime-like, is the new ring a field? For example, if you take the integers modulo a prime $p$ (which is the same as modulo $-p$), it becomes a prime finite field. If you take the real polynomials modulo $x^2+1$, you get the complex number field. -Is it true in general that rings modulo prime-like elements are fields? - -REPLY [8 votes]: As the other answer remarked, modding out by an irreducible (aka "prime-like") element doesn't give a field in general. The obvious counterexamples are polynomial rings; either a polynomial ring over another non-field in a single indeterminate, like $\mathbb{Z}[x]$; or a polynomial ring in two variables over a field, like $K[x,y]$ when $K$ is a field. Here, if you mod out by one of the indeterminates - say $y$, which is an irreducible element - then you still have a polynomial ring $K[x]$ which is definitely not a field! There are also other examples of weird rings with this property that are not polynomial rings. -However, there is a subclass of rings which does have the property that taking the quotient by an irreducible element gives a field. These are principal ideal domains, which are rings where every ideal is generated by a single element. If an ideal is generated by an irreducible element then this ideal is maximal and the corresponding quotient is a field. 
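To illustrate the principal-ideal-domain case with $R=\mathbb Z$, here is a brute-force Python sketch (my own illustration, not part of the question): $\mathbb Z/n\mathbb Z$ is a field exactly when every nonzero class has a multiplicative inverse, which happens precisely when $n$ is irreducible in $\mathbb Z$, i.e. up to sign a prime.

```python
def is_field(n):
    """Check whether Z/nZ is a field: every nonzero class must be invertible."""
    return all(any(a * b % n == 1 for b in range(1, n)) for a in range(1, n))

assert is_field(7)        # 7 is irreducible, and Z is a principal ideal domain
assert not is_field(6)    # 6 = 2*3 is not irreducible; 2 has no inverse mod 6
```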
The examples above for polynomials fail because neither $\mathbb{Z}[x]$ nor $K[x,y]$ is a principal ideal domain; for example, in the first case given a prime number $p$ and an irreducible polynomial $f\in \mathbb{Z}[x]$ that remains irreducible modulo $p$ one can show that the ideal $(p,f)$ is not principal - it's not generated by a single element of $\mathbb{Z}[x]$. So modding out by one of the elements generating this ideal still leaves some nontrivial ideal structure - and fields have no nontrivial ideals. In the case where your ring is $K[x,y]$, the ideal $(x,y)$ is also not principal, so modding out by one of these indeterminates doesn't "kill off" the other one, even though both are irreducible. -However, if you are in a principal ideal domain - for example, $K[x]$ for a field $K$, and you mod out by the principal ideal generated by an irreducible polynomial $f\in K[x]$ - which satisfies your definition of "prime-like" - then you get a field, which is a field extension of $K$ obtained by adjoining to $K$ a root of this polynomial.<|endoftext|> -TITLE: On lifting an action of $G$ on $X$ to an action of $G'$ on $\tilde{X}$. -QUESTION [5 upvotes]: I am reading the section on covering actions from Glen Bredon's Tranformation groups. -Let $G$ be a Lie group (not necessarily connected) acting effectively/faithfully on a connected, locally path connected, semi-locally simply connected space $X$, not necessarily with fixed points. Let $p:\tilde{X}\to X$ be the universal covering of $X$. -For any $g\in G$, $\theta_g:X\to X$ is the map given by $x\mapsto g\cdot x$. Now he makes the following statement - - -$\theta_g$ can be covered by a homeomorphism of $\tilde{X}$ since $\tilde{X}$ is simply connected, and any two such liftings differ by a deck transformation. Clearly, all such liftings for all $g$ form a subgroup $G'$ of $\operatorname{Homeo}(\tilde{X})$. - -My question is how do we get such a homeomorphism of $\tilde{X}$? 
-I think it should be by the general lifting theorem. So for a choice of base point $x_0\in X$ and $x'_0\in\tilde{X}$ such that $p(x'_0)=g\cdot x_0$ there will be a unique homeomorphism from $\tilde{X}\to\tilde{X}$ sending $x'_0$ to itself. So why does he talk about any two such liftings? Does he mean for different choices of base points? Also why do they differ by a deck transformation? -Thank you. - -REPLY [4 votes]: First, you ask how to get lifts of $\theta_g$ to $\tilde X$. Here is the construction, an application of the lifting theorem. -Consider the original base point $x_0$, and consider the alternate base point $x'_0 = \theta_g(x_0)$. Fix a lift $\tilde x_0 \in \tilde X$ of $x_0$. -For each choice of a lift $\tilde x'_0$ of $x'_0$, let's consider the following lifting problem: how to lift the map $\theta_g : (X,x_0) \to (X,x'_0)$ to a map $\tilde \theta_g : (\tilde X,\tilde x_0) \to (\tilde X,\tilde x'_0)$. By the lifting lemma, to determine whether this lift $\tilde\theta_g$ exists, we must consider the following two subgroups of $\pi_1(X,x'_0)$, namely: -$$(\theta_g \circ p)_*(\pi_1(\tilde X,\tilde x_0)) -$$ -and -$$p_*(\pi_1(\tilde X,\tilde x'_0)) -$$ -The necessary and sufficient condition for $\tilde\theta_g$ to exist is that the first subgroup is contained in the second, which is obvious because both subgroups are trivial since $\tilde X$ is simply connected. -You also asked "Why does he talk about two such liftings?" Notice that the construction of $\tilde \theta_g$ has a choice, namely: a choice of lift $\tilde x'_0$ of $x'_0$. Two different choices of $\tilde x'_0$ would yield two different lifts of $\theta_g$. -From another point of view, suppose that $\theta_{g,1}, \theta_{g,2} : \tilde X \to \tilde X$ are two lifts of $\theta_g : X \to X$. 
To say that these two lifts "differ by a covering transformation" means that their difference $\theta_{g,1}^{-1} \circ \theta_{g,2} : \tilde X \to \tilde X$ is a covering transformation of the universal covering map $p : \tilde X \to X$. -To prove this, first notice that $\theta_{g,1}^{-1} \circ \theta_{g,2}$ is a lift of $\theta_g^{-1} \circ \theta_g$ which equals the identity map on $X$. Almost by their very definition, the deck transformations are the maps $\tilde X \to \tilde X$ which are lifts of the identity map on $X$. So, $\theta_{g,1}$ and $\theta_{g,2}$ differ by a deck transformation.<|endoftext|> -TITLE: Permutation of ZF model -QUESTION [5 upvotes]: I want to make sure I understand how these work. Can someone please check my answers to the following exercise // answer my questions. :) - -Let $(V, \in)$ be a model of ZF, and let $\sigma$ be a permutation of $V$. We define a new binary relation $\in^\sigma$ on $V$ by $(x\in^\sigma y)\iff(x\in \sigma(y))$. - -Verify that the structure $(V,\in^\sigma)$ satisfies all the axioms of ZF except possibly for Foundation. -By taking $\sigma$ to be the transposition which interchanges $\emptyset$ and $\{\emptyset\}$ (and fixes everything else), show that Foundation may fail. -More generally, let $a$ be a set none of whose members is a singleton, and let $\sigma$ be the permutation which interchanges $x$ and $\{x\}$ for each $x\in a$. Show that $(V,\in^\sigma)$ satisfies a weak version of Foundation which says that every nonempty set $x$ has a member $y$ satisfying either $x\cap y=\emptyset$ or $y=\{y\}$. - - - -Extensionality: -$$(\forall x)(\forall y)[(\forall z)(z\in\sigma(x) \iff z\in \sigma(y))\implies \sigma(x)=\sigma(y)]$$ -And $\sigma(x)=\sigma(y)$ implies $x=y$ as $\sigma$ is a permutation. 
-Separation: -$$(\forall t_1)\ldots(\forall t_n)(\forall x)(\exists y)(\forall z)[z\in\sigma(y)\iff (z\in\sigma(x)\land \phi)]$$ -Empty set: -$$(\exists x)(\forall y)(\neg (y\in\sigma(x)))$$ -Pair set: -$$(\forall x)(\forall y)(\exists z)(\forall t)[t\in\sigma(z)\iff(t=x\lor t=y)]$$ -Union: -$$(\forall x)(\exists y)(\forall z)[z\in\sigma(y)\iff(\exists t)(\sigma(t)\in\sigma(x)\land z\in\sigma(t))]$$ -I get $(\forall x)(\exists y)(\forall z)[z\in^\sigma y\iff(\exists t)(\sigma(t)\in^\sigma x\land z\in^\sigma t)]$ which is not quite the union axiom ($\sigma(t)$ instead of $t$). How do I fix this? -Power set: -$$(\forall x)(\exists y)(\forall z)(z\in\sigma(y)\iff z\subseteq x)$$ -Infinity: -$$(\exists x)\big[(\emptyset \in\sigma(x))\land(\forall y)(y \in\sigma(x)\implies y\cup\{y\}\in \sigma(x))\big]$$ -Replacement: -$$(\forall w_1,\ldots,w_n)\Big[(\forall y,y')\big((\phi\land\phi[y'/y])\implies(y=y')\big)\implies(\forall u)(\exists v)(\forall y)\big(y\in \sigma(v)\iff(\exists x)(x\in \sigma(u)\land\phi)\big)\Big]$$ -For 2: we get $\emptyset\in^\sigma\emptyset$ (since $\emptyset\in\sigma(\emptyset)=\{\emptyset\}$), isn't that enough already? -For 3: for a given set $x$, if none of its members has been in $a$ in the previous structure then there must be a $y$ satisfying $x\cap y=\emptyset$ since $(V,\in)$ satisfies ZF. So we can find a $y\in^\sigma x$ with $y\in a$. But does that mean $y=\{y\}$? I struggle to apply extensionality in the new structure. On the one hand we have $y\in^\sigma y$ and $z\not\in^\sigma y$ for all $z\neq y$ which looks like $y = \{ y\}$. On the other hand there are elements $z\in^\sigma\{y\}$ with $z\neq y$, namely the elements of $y$ in the original structure. Is equality not symmetric any more after applying the permutation? -REPLY [3 votes]: Continuing from my comments, I would say that the other proofs, except for 3, look fine. 
-For 3, I think your argument for sets with empty intersection with $a$ seems incomplete, because $(V,\in)\models \mathsf{regularity}$ seems not to imply your statement, that a set $x$ containing no elements in $a$ has some element $y$ such that $x\cap y = \varnothing$. The "transitive closure" of $x$ might contain some elements of $a$ even though $x$ itself does not. -My suggestion for proving 3 is to define a hierarchy of the universe and a rank from the hierarchy. I will describe the details below: - - - -$V_0(a) = a$ -$V_{\alpha+1}(a) = \mathcal{P}^{(V,\in^\sigma)}(V_\alpha(a))$. -$V_{\alpha}(a) = \bigcup^{(V,\in^\sigma)}\{V_\xi(a) : \xi<\alpha\}$ for limit $\alpha$. - -where $\mathcal{P}^{(V,\in^\sigma)}$ and $\bigcup^{(V,\in^\sigma)}$ are the power set operation and union operation relativized to the model $(V,\in^\sigma)$ ― their definitions are the same as those of the ordinary power set and union, except that $\in$ is replaced by $\in^\sigma$. -For each $x$, define a rank $\rho(x)$ as follows: -$$\rho(x) = 1+\min\{\alpha:x\subseteq^\sigma V_\alpha(a)\}.$$ For example, the ranks of the empty set and of the elements of $a$ are 0. Now divide into cases: - -1. $x$ contains an element of rank 0. -2. $x$ has no element of rank 0. - -In case 2, though every element has non-zero rank, some element of $x$ has minimal rank. Now consider such a set $y$. You can see that if $z\in^\sigma y$ then $\rho(z)\le \rho(y)$ and the inequality is strict if $\rho(y)>0$. From this you can complete the proof. - -I realize that there is a simpler direct proof. Here is the detail: we divide into cases. For a given $x$, -Case 1. If $(x\in a)^{(V,\in)}$, take $y=x$. -Case 2. If $(x\cap a\neq\varnothing)^{(V,\in)}$, we can find some $y$ such that $(y\in x\cap a)^{(V,\in)}$. Take such a $y$. -Case 3. If $x$ satisfies none of the conditions described above, then neither the elements of $x$ nor $x$ itself are moved by $\sigma$. If $y$ is an $\in$-minimal element of $x = \sigma(x)$ then $\sigma(y)=y$. 
Thus if $z\in\sigma(x)$ then $z\notin y = \sigma(y)$.<|endoftext|> -TITLE: Precise definition of free group -QUESTION [9 upvotes]: I have seen the definition of a free group go like this: - -Let $S = \{s_i : i\in \mathbb{N} \}$ be a countable set. Let $S^{-1}$ be the set $\{s_i^{-1}: i\in \mathbb{N}\}$. Here one is to understand $s_i^{-1}$ as a notation for an element of this other set. Let $W(S)$ be the set of words in $S$ and $S^{-1}$. That is, the elements in $W(S)$ are strings of elements from $S$ and $S^{-1}$. One then defines a function $W(S) \times W(S) \to W(S)$. One makes $W(S)$ into a group (the free group) by defining an operation with certain cancellation rules (so $s_is_i^{-1}$ is the empty word). - -All of this makes much sense intuitively. My question is: -Is there a more precise way to define this? -What I mean is, talking about strings, empty word, and cancellation properties doesn't seem very clear (precise). I understand that a more precise definition would be harder to grasp intuitively, but I was just wondering if there, for example, was some extremely precise way to define a free group relying only on language from, for example, set theory. -Edit: I am in beginning Abstract Algebra. - -REPLY [2 votes]: In my opinion, a precise definition of a free group should be based on the concepts word, sub-word, empty word, inverse word, cancellation, reduced word, and juxtaposition. These concepts are easy to grasp and are really clear and precise. -Even if you define a free group in terms of category theory (functors and homomorphisms), via a universal property, you would want to appeal to the notions of words and generators in order to prove the existence of a free group. Of course, there may be a purely categorical proof of the existence of a free group (like here), but it is not simpler than the usual proof using words and generators. This is a good resource explaining this approach. 
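The words-and-cancellation definition is in fact short enough to execute. Here is a minimal Python sketch (my own encoding, with a letter represented as a (generator, exponent) pair where the exponent is $\pm 1$):

```python
def reduce_word(word):
    """Cancel adjacent inverse pairs (s_i s_i^{-1} or s_i^{-1} s_i) until reduced."""
    out = []
    for gen, exp in word:
        if out and out[-1] == (gen, -exp):
            out.pop()            # the new letter cancels the previous one
        else:
            out.append((gen, exp))
    return out

def multiply(u, v):
    """The group operation: juxtaposition followed by reduction."""
    return reduce_word(u + v)

w = [('a', 1), ('b', 1)]           # the word ab
w_inv = [('b', -1), ('a', -1)]     # its inverse word b^{-1} a^{-1}
assert multiply(w, w_inv) == []    # the empty word is the identity
```

A single left-to-right pass with a stack suffices, because a new cancellation can only appear at the end of the already-reduced prefix.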
-Another reason for giving a definition of a free group in terms of words and generators is that it is easier to grasp for computer scientists who usually deal with strings, algorithms or rewrite systems.<|endoftext|> -TITLE: Proof of concavity of log function -QUESTION [9 upvotes]: Does anybody have a proof of the concavity of the $\log{x}$ that does not use calculus? - -REPLY [2 votes]: A proof by Cauchy induction, which does not involve calculus, that the arithmetic mean is no less than the geometric mean is given here. Let $\alpha$ be any real number between $0$ and $1$. Then there is a sequence of natural numbers $m(n)$, with $0\leqslant m(n)\leqslant n$ ($n=1,2,...$ ), such that the rational numbers $m(n)/n$ converge to $\alpha$ as $n\to\infty$. For given $n$ and positive reals $x$ and $y$, consider the arithmetic and geometric means of the $n$ positive reals $x_1,...,x_n$, where $x_1=\cdots=x_{m(n)}=x$ and $x_{m(n)+1}=\cdots=x_n=y$. The AM-GM inequality for $x_1,...,x_n$ is $$\frac{x_1+\cdots+x_n}n\geqslant(x_1\cdots x_n)^{1/n},$$or $$\frac{m(n)}{n}x+\left(1-\frac{m(n)}n\right)y\geqslant x^{m(n)/n}y^{1-m(n)/n}.$$By the continuity of the arithmetical functions involved, we have in the limit as $n\to\infty$ that$$\alpha x +(1-\alpha)y\geqslant x^{\alpha}y^{1-\alpha}.$$Now take the logarithm, and we are home.<|endoftext|> -TITLE: Show that $2$ is a primitive root modulo $13$. -QUESTION [5 upvotes]: Find all primitive roots modulo $13$. - -We show $2$ is a primitive root first. Note that $\varphi(13)=12=2^2\cdot3$. So the order of $2$ modulo $13$ is $2,3,4,6$ or $12$. -\begin{align} -2^2\not\equiv1\mod{13}\\ -2^3\not\equiv1\mod{13}\\ -2^4\not\equiv1\mod{13}\\ -2^6\not\equiv1\mod{13}\\ -2^{12}\equiv1\mod{13} -\end{align} -Hence $2$ has order $12$ modulo 13 and is therefore a primitive root modulo $13$. -Now note all even powers of $2$ can't be primitive roots as they are squares modulo $13$. $(*)$ -There are $\varphi(12)=4$ primitive roots modulo $13$. 
These must therefore be -$$2,2^5=6,2^7=11,2^{11}=7\mod{13}.$$ - -Questions: - -Why do we only check whether the divisors of $\varphi(13)$ lead to $1\mod{13}$? -Does the line marked $(*)$ mean they can be written in the form $a^2$? Why does this mean they can't be a primitive root? -I thought $\varphi(12)$ counts the number of integers coprime to $12$. Why does this now suddenly tell us the number of primitive roots modulo $13$? -How have these powers been plucked out of thin air? I understand even powers can't be primitive roots; also, we have shown $2^3$ can't be a primitive root above, but what about $2^9$? - -REPLY [2 votes]: 1 . This is Lagrange's theorem. If $G$ is the group $(\mathbb{Z}/13\mathbb{Z})^{\ast}$ (the group of units modulo $13$), then the order of an element $a$ (that is, the smallest number $t$ such that $a^t \equiv 1 \pmod{13}$) must divide the order of the group, which is $\varphi(13) = 12$. So we only check the divisors of $12$. -2 . Yes, that is a square mod $13$. To say that $a$ is a primitive root mod $13$ means that $a^{12} \equiv 1 \pmod{13}$, but all lower powers $a, a^2, ... , a^{11}$ are not congruent to $1$. Again use Lagrange's theorem: supposing $a^2$ were a primitive root, then $12$ would be the smallest exponent $t$ such that $(a^2)^{t} \equiv 1$. But note that $b^{12} \equiv 1$ for ANY integer $b$ not divisible by $13$. So $(a^2)^{6} = a^{12} \equiv 1$, and $6 < 12$, contradiction. -3 . It's a general result about finite cyclic groups. A cyclic group of order $m$ is a group of the form $H = \{ 1, g, g^2, ... , g^{m-1}\}$. It is basically the same thing as the group $\mathbb{Z}/m\mathbb{Z}$ with respect to addition. In general, if $d \geq 1$, there exist elements in $H$ with order $d$ (that is, their $d$th power is $1$, all lower powers are not $1$) if and only if $d$ is a divisor of $m$, and there are exactly $\varphi(d)$ such elements.
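The order computations above are easy to check mechanically. A brute-force sketch (my own illustration, not part of the answer):

```python
from math import gcd

def order_mod(a, p):
    """Multiplicative order of a modulo p (assumes gcd(a, p) == 1)."""
    k, x = 1, a % p
    while x != 1:
        x = x * a % p
        k += 1
    return k

p = 13
assert order_mod(2, p) == 12                      # 2 is a primitive root mod 13
prim_roots = sorted(a for a in range(1, p) if order_mod(a, p) == p - 1)
assert prim_roots == [2, 6, 7, 11]                # phi(12) = 4 primitive roots
# they are exactly the powers 2^k with gcd(k, 12) = 1:
assert sorted(pow(2, k, p) for k in range(12) if gcd(k, 12) == 1) == prim_roots
```

The last assertion is the criterion of point 4 of the answer, specialized to $p=13$.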
-In particular, if $p$ is an odd prime number, the result is that $(\mathbb{Z}/p\mathbb{Z})^{\ast}$ is a cyclic group of order $\varphi(p) = p-1$, and the number of primitive roots (that is, the number of elements with order $p-1$) is exactly $\varphi(p-1) = \varphi(\varphi(p))$. -4 . If you have found a primitive root modulo $p$ (where $p$ is an odd prime), then you can easily find the rest of them: if $a$ is a primitive root mod $p$, then the other primitive roots are $a^k$, where $k$ runs through those numbers which don't have any prime factors in common with $p-1$. It's a good exercise to prove this. So $2^9$ wouldn't work; $9$ has prime factors in common with $12$.<|endoftext|> -TITLE: Represent $1^5+2^4+3^3+4^2+5^1$ in sum notation -QUESTION [5 upvotes]: I imagine the sum notation for -$$1^5+2^4+3^3+4^2+5^1$$ -would look something like -$$ \sum x^y,\ x=1 \text{ to } 5,\ y=5 \text{ to } 1 $$ -Is this correct or am I missing something? - -REPLY [10 votes]: Observe that in each term, the sum of the base and the exponent is $6$. Thus, if the base is $k$, the exponent is $6 - k$. Since the base increases from $1$ to $5$, we obtain -$$\sum_{k = 1}^{5} k^{6 - k}$$ - -REPLY [3 votes]: Two ways I can think of: -$$\sum_{k=1}^5 k^{6-k}$$ -$$\sum_{a,b\in\mathbb{N}_{\ge1}: a+b=6}a^b$$ - -REPLY [2 votes]: $\sum_{i=1}^5 i^{6-i}$ is equal to your sum<|endoftext|> -TITLE: Motivation for the mapping cone complexes -QUESTION [12 upvotes]: I was reading some topics in Homological Algebra when I came across the concepts of the cone of a map of complexes and the cylinder. -My knowledge of Algebraic Topology is pretty basic, so I have only used these concepts in a purely algebraic setting. What is the motivation for this? -Does the shape of a cone, for example, appear only in the simplicial context of Algebraic Topology, or is it possible to "see" the cone in algebraic terms?
- -REPLY [29 votes]: One possible motivation for the mapping cone is the fact that a morphism of chain complexes is a quasi-isomorphism iff its mapping cone has vanishing homology. So in this sense, the homology of the mapping cone of $f$ measures the default of $f$ to be a quasi-isomorphism. -From an abstract homotopy theory point of view, one can first consider the mapping cylinder of $f : X_* \to Y_*$. In algebraic topology, the mapping cylinder of $f : X \to Y$ is $Y \cup_{X \times 0} X \times I$. We'll try to see how this could be translated in homological algebra, and hopefully the picture will be clearer. -In homological algebra, the interval $I = [0,1]$ is replaced by the chain complex $I_*$ that has $I_0 = \mathbb{Z} v_+ \oplus \mathbb{Z} v_-$ (this is a free abelian group of rank two) and $I_1 = \mathbb{Z} e$, $I_n = 0$ if $n \neq 0,1$, and $d : I_1 \to I_0$ is given by $d(e) = v_+ - v_-$. It's an acyclic chain complex that represents an interval (in some sense that can be made precise; it is a path object in the model category of chain complexes). Roughly speaking $v_+$ is the vertex $\{1\}$, $v_-$ is the vertex $0$, and $e$ is the edge between the two. -The product $X \times I$ becomes the tensor product $X_* \otimes I_*$, which has: -$$(X \otimes I)_n = X_n \otimes v_+ \oplus X_n \otimes v_- \oplus X_{n-1} \otimes e$$ -and the differential is given by $$d(x \otimes v_\pm) = dx \otimes v_\pm, \\ d(x \otimes e) = dx \otimes e + x \otimes v_+ - x \otimes v_-.$$ -And now the mapping cylinder $Y \cup_{X \times 0} X \times I$ is replaced by $\operatorname{Cyl}(f) = Y \oplus_{X \otimes v_-} X \otimes I$. It is the quotient of $Y \oplus X \otimes I$ where you identify $x \otimes v_- \in X \otimes I$ with $f(x) \in Y$ (recall that $v_-$ represents the vertex $0 \in [0,1]$). 
So concretely we get: -$$\operatorname{Cyl}(f)_n = Y_n \oplus X_n \oplus X_{n-1} \\ -d(y, 0, 0) = (dy, 0, 0) \\ -d(0,x,0) = (0, dx, 0) \\ -d(0,0,x') = (-f(x'), x', dx')$$ -The first factor is the image of $X \otimes v_-$, which is identified with $Y$. The second factor is $X \otimes v_+$, and the last part is $X \otimes e$. -Now to get the mapping cone from the mapping cylinder, in algebraic topology you collapse $X \times 1$. The $X \times 1$ part in homological algebra corresponds to the middle $X_n$ (really $X_n \otimes v_+$) in $\operatorname{Cyl}(f)_n$, so just quotient out by this ideal to get -$$\operatorname{Cone}(f)_n = Y_n \oplus X_{n-1}\\ -d(y,0) = (dy, 0) \\ -d(0,x') = (-f(x'), dx)$$ -And this is exactly the definition of the mapping cone. There are various way to get to this result in a systematic manner. For example you can put what is called a model structure on the category of chain complexes, and then the mapping cone of $f$ becomes its homotopy cokernel. Or you can put a triangulated structure on it (though that's a bit circular, since you need to know what the mapping cone is to get the triangulated structure). -PS: A lot of things that are true in algebraic topology are also true in homological algebra. For example, if you have $A \subset X$, you can consider the cone on $A$ to get $X \cup CA$, then you can cone $X$ inside it to get $(X \cup CA) \cup CX$, and this is homotopy equivalent to the suspension $\Sigma A$ (the beginning of the Puppe sequence). Well, in homological algebra it's exactly the same: say you have a subcomplex $i : A_* \to X_*$, you can take the cone $\operatorname{Cone}(i)$, of which $X_*$ is a subcomplex; if you then take the cone of this inclusion, you get a complex homotopy equivalent to the suspension (shift in degree) of $A_*$. This is because all this can be encoded in the triangulated structure of chain complexes! 
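As a small computational sanity check of the opening remark (the mapping cone of a quasi-isomorphism has vanishing homology), here is a sketch over $\mathbb{Q}$ (my own illustration, not part of the answer). Sign conventions for the cone differential vary; I use $d(x,y)=(-dx,\ dy - f(x))$ on $\operatorname{Cone}(f)_n = X_{n-1}\oplus Y_n$, which squares to zero whenever $f$ is a chain map. The example takes $f$ to be the identity on the complex $0 \to \mathbb{Z} \xrightarrow{2} \mathbb{Z} \to 0$.

```python
from fractions import Fraction

def mat_mul(A, B):
    """Multiply two matrices given as lists of rows."""
    return [[sum(A[i][k] * B[k][j] for k in range(len(B)))
             for j in range(len(B[0]))] for i in range(len(A))]

def rank(M):
    """Rank over Q via Gauss-Jordan elimination with exact arithmetic."""
    M = [[Fraction(x) for x in row] for row in M]
    rows, cols, r = len(M), len(M[0]), 0
    for c in range(cols):
        piv = next((i for i in range(r, rows) if M[i][c] != 0), None)
        if piv is None:
            continue
        M[r], M[piv] = M[piv], M[r]
        inv = M[r][c]
        M[r] = [x / inv for x in M[r]]
        for i in range(rows):
            if i != r and M[i][c] != 0:
                f = M[i][c]
                M[i] = [a - f * b for a, b in zip(M[i], M[r])]
        r += 1
    return r

# f = id on X = Y : 0 -> Z --(2)--> Z -> 0.  Cone degrees: X_1 ; X_0 (+) Y_1 ; Y_0.
d2 = [[-2], [-1]]        # Cone_2 -> Cone_1 : x |-> (-dx, -f(x))
d1 = [[-1, 2]]           # Cone_1 -> Cone_0 : (x, y) |-> dy - f(x)
assert mat_mul(d1, d2) == [[0]]                 # d o d = 0
assert (2 - rank(d1)) - rank(d2) == 0           # H_1(Cone) = ker d1 / im d2 = 0
```

Since the identity is a quasi-isomorphism, the cone is acyclic, which is exactly what the rank count confirms in middle degree.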
-$$ $$<|endoftext|> -TITLE: Is the action of $\mathbb Z$ on $\mathbb R$ by translation the only such action? -QUESTION [5 upvotes]: It is well known that $\mathbb Z$ acts on $\mathbb R$ by translation. That is, by $n\cdot r=n+r$. The quotient space of this action is $S^1$. -Could someone give me an example where $\mathbb Z$ acts on $\mathbb R$ in some other (non-trivial) way to give (possibly) some other quotient? I am only interested in continuous actions. -Specifically, is there an action such that the quotient is a closed interval? -Thank you. - -REPLY [3 votes]: You want continuous actions, i.e., for each $n\in\Bbb Z$, the map $f_n\colon r\mapsto n\cdot r$ should be continuous. -Note that the action is completely determined by $f_1$ and that $f_1$ must have an inverse $f_{-1}$ that is also continuous. In other words: $f_1$ is a homeomorphism. As such, it may either be orientation preserving (i.e., strictly increasing) or not (i.e. strictly decreasing). -Consider first the former case and let $F\subseteq \Bbb R$ be the set of fixed points of $f_1$. As $F$ is a closed subset of $\Bbb R$, its complement is open, hence is the disjoint union of countably many open intervals $(a,b)$ (with infinite ends allowed). Each such $(a,b)$ is not only homeomorphic to $\Bbb R$, but in fact is so in a way that turns the action of $\Bbb Z$ on $(a,b)$ into the standard action of $\Bbb Z$ on $\Bbb R$. -To see this, pick $x_0\in (a,b)$ and let $x_1=f_1(x_0)$. -We start defining our homeomorphism $\phi\colon (a,b)\to\Bbb R$ by setting $\phi(x)=\frac{x-x_0}{x_1-x_0}$ for $x\in[x_0,x_1]$. After that, for any $x\in (a,b)$ we find $n\in \Bbb Z$ with $f_1^{\circ n}(x)\in[x_0,x_1)$ and can let $\phi(x)=\phi(f_1^{\circ n}(x))-n$. -We see that each open inter-fixpoint interval $(a,b)$ contributes a copy of $S^1$ to the quotient space. -The fixed points $x\in F$ themselves "survive" the transition to the quotient space.
It seems that we have a disjoint union of $F$ (with subspace topology) and a couple of copies of $S^1$, -but we have to be a bit careful with the quotient topology: If $a\in F$ is a boundary point of $(a,b)$ (and/or $(c,a)$) then any open neighbourhood of it in $\Bbb R$ contains enough of the adjacent open interval(s) to cover a "full round" of the corresponding $S^1$. In other words, in the quotient space, every open neighbourhood of $a$ contains the one or two adjacent $S^1$'s. -Now back to the beginning - what happens if $f_1$ is strictly decreasing? In that case the action of $2\Bbb Z$ is of the kind described above, and the full action of $\Bbb Z$ introduces some pairwise identifications: The unique fixed point $z$ of $f_1$ is also a fixed point of $f_2$ (so not within one of the $S^1$). The copies of $S^1$ are identified in pairs (which still makes them $S^1$'s), components of $F$ not containing $z$ are also identified in pairs, and only the component of $F$ containing $z$ is glued similarly to $[-a,a]\to [0,a]$, $x\mapsto |x|$; however, if $z$ is an isolated point of $F$, then as above the two adjacent $S^1$'s are identified, with the additional strange effect that the only open neighbourhood of $z$ in this $S^1$ is all of $S^1$.<|endoftext|> -TITLE: Affine scheme obtained from (commutative) group algebra -QUESTION [6 upvotes]: Let $G$ be a finite abelian group (written multiplicatively), $R$ a commutative ring and let $R[G]$ denote the set of all formal linear combinations of elements of $G$ with coefficients in $R$. Then $R[G]$ is an $R$-algebra and in particular a ring with multiplication of elements defined in the obvious way: -$(\sum_{g\in G} a_g g)\cdot (\sum_{h\in G} b_h h) = \sum_{g\in G} \sum_{h\in G} a_g b_h (g h)$ -where the product of group elements occurs in $G$. This object is called the group ring or the group algebra of $G$ (over $R$).
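The multiplication just defined is a convolution product, and can be sketched directly in code (my own illustration; here $G=\mathbb{Z}/4$, written additively for convenience, and $R=\mathbb{Z}$):

```python
from collections import defaultdict

def group_ring_mul(a, b, op):
    """Convolution product in R[G]: elements are {group element: coefficient}
    dicts, and `op` is the group operation of G."""
    out = defaultdict(int)
    for g, ag in a.items():
        for h, bh in b.items():
            out[op(g, h)] += ag * bh
    return {k: v for k, v in out.items() if v != 0}

def op(g, h):
    return (g + h) % 4          # G = Z/4, written additively

one_plus_g = {0: 1, 1: 1}       # the element 1 + g
assert group_ring_mul(one_plus_g, one_plus_g, op) == {0: 1, 1: 2, 2: 1}
g = {1: 1}
g2 = group_ring_mul(g, g, op)
assert group_ring_mul(g2, g2, op) == {0: 1}      # g^4 = identity
```

Representing elements as sparse coefficient dictionaries keyed by group elements makes the displayed formula literal: the coefficients of each product $gh$ simply accumulate.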
-This is just an idle speculative question which occurred to me during my representation theory class, but given that we've got a natural $R$-algebra $R[G]$ here, does the affine scheme $\text{Spec}R[G]$ carry any information about $G$? Particular cases of interest to me are when $R = \mathbb{C}$ (this most closely ties in with my representation theory course) and when $R$ is somehow "arithmetic" e.g. $\mathbb{Z}$ or $\mathbb{F}_p$ for a prime $p$. - -REPLY [3 votes]: This is not an answer, since it doesn't address general connections between $G$ and $\operatorname{Spec} R[G]$, but it's a little long for a comment. My goal is to make some basic observations about the spectrum of a group ring. -First, observe that $R[G\times H] \cong R[G]\otimes_R R[H]$, so $\operatorname{Spec} R[G\times H] \cong \operatorname{Spec} R[G] \times_{\operatorname{Spec} R} \operatorname{Spec} R[H]$. Thus we can restrict our attention to indecomposable groups. -In particular, we can compute this spectrum geometrically for finitely generated abelian groups. For example, we have $R[\mathbb{Z}] \cong R[X,X^{-1}]$, so $\operatorname{Spec} R[\mathbb{Z}^n] \cong \mathbb{G}_m^n$ is the $n$-torus over $R$. -Similarly, $\operatorname{Spec} R[\mathbb{Z}/k] \cong \operatorname{Spec} R[X]/(X^k-1)$ is the closed subscheme of $\mathbb{G}_m$ of $k$-th roots of unity (in characteristic relatively prime to $k$, anyway). -To connect these affine schemes with our original group $G$, we should of course keep track of the action of $G$. Or perhaps coaction? I don't know very much about this.<|endoftext|> -TITLE: How am I to interpret induction/recursion in type theory? -QUESTION [10 upvotes]: This may have been asked before, but I haven't been able to find it if so. -The induction and recursion principles for various types in (for me, at least, homotopy) type theory allow one to define a function despite having what appears to be only partial information. 
-For example one can prove results about (or define functions from) product types by proving those results (defining the function) only for pairs, one can prove results for $x: \textbf1$ by proving them only for $\star:\textbf{1}$, one can prove results concerning equality by only considering reflexivity. -How am I best to interpret the fact that we can do this? I've flip-flopped between three different views so far, given below, for a fair bit. - -Am I supposed to decide that these functions are only partially defined (whence what if I were to find elements of $\textbf{1}$ which weren't definitionally equal to $\star$?)? -Am I supposed to decide that the rules simply express that there exists functions out of, say, $A\times B$ which have such results on pairs (and we just don't know what these functions will do for possible non-pairs, except that they will send them to the target). -Am I supposed to see this as an assertion that the only elements of these types we will ever meet are those equal to whatever we are inducting over? (But this can't be right because the univalence axiom gives more proofs of equality that refl). -Anything else? - -The fact that we can prove the relevant propositional equalities using these principles doesn't really solve the problem for me, because without a proper interpretation this still very much seems as though I'm only proving this for the special objects we care about. -Which is the better, or more correct, interpretation, here? I personally prefer the second one, it seems to me as though that makes all the relevant propositions-as-types identifications more agreeable. - -REPLY [4 votes]: As Mike Shulman says, 3 is the best interpretation. 2, by itself, is true but is arguably not really induction (without 3). I want to go into more detail about this, but first I want to elaborate on Mike's comment with regards to identity types. 
-To use your wording what the elimination rule for identity types says is that for $M : A$, $\langle M,\mathbf{refl}_M \rangle$ is the only value of $\Sigma x:A.(M =_A x)$ that we'll ever see or, to use Mike's wording, that exists. The rule says no such thing about $(M =_A N)$ for given or fixed $M, N : A$. Of course, there may be inhabitants of $\Sigma x:A.(M =_A x)$ that are only propositionally but not definitionally equal to $\langle M,\mathbf{refl}_M \rangle$. But ultimately they would be like different representatives from the same equivalence class, e.g. $\frac{2}{3} = \frac{4}{6}$ even though they look different. (Though generally I think it is better to think in terms of widening what we consider equal rather than forming equivalence classes, something much easier done in type theories than in [classical] set theories with a global notion of equality.) Notice that equality varies with type and this makes a crucial difference in this case. For example, you may retort "$\langle \mathbf{base}, \mathbf{loop} \rangle$ does not equal $\langle \mathbf{base}, \mathbf{refl_{base}} \rangle$ for the circle $\mathbb{S}^1$". To which I would reply, "at what type?" At the type $\mathbb{S}^1\times(\mathbf{base} =_{\mathbb{S}^1} \mathbf{base})$, i.e. $\Sigma x:\mathbb{S}^1.(\mathbf{base} =_{\mathbb{S}^1} \mathbf{base})$, you would be right. But at the type $\Sigma x:\mathbb{S}^1.(\mathbf{base} =_{\mathbb{S}^1} x)$, you would be wrong. (If we looked at the groupoid model, what's happening is that in this case these types as groupoids (may) have the same objects but have different arrows. In particular, the latter type has more isomorphisms.) -That was a bit longer than I was expecting. Moving to induction, it may be useful to look at an example where a nominal induction rule failed to achieve this property. The main example of this is the failure of the first-order induction schema in Peano arithmetic to rule out non-standard models. 
(Note, Peano's original formulation used a second-order induction rule, so didn't have this problem.) The first-order induction schema says $$\frac{P(0) \qquad \forall n.(P(n) \implies P(S(n)))}{\forall n.P(n)}$$ -for each $P$ that you can write down. You could view it as a macro that looks at the syntax of the property $P$ you want to check, and then expands out into a tailor-made axiom for that property. The problem, of course, is not every property in the model need be syntactically expressible. On the other hand, the second-order characterization, i.e. where we quantify over $P$, essentially says that all properties in the model obey this rule. You can think of this as saying the induction rule is "uniform" or "continuous". -The "standard" set-theoretic model for Peano arithmetic is the minimal one. The second-order rule is interpreted as saying that we are looking for the set that is the intersection of all sets $X$ for which $0 \in X$ and $n \in X \implies S(n) \in X$. Categorically, this is stated by defining the natural numbers as the initial algebra of a functor. It's unique up to unique isomorphism. In type theory, if we have a universe, we can directly state what we want to achieve, namely any other type that "behaves" like the natural numbers (i.e. provides the same constructors and induction rule) is isomorphic to the natural numbers. Below is the Agda code. It's easy to change it to have the assumed induction rule on N target a type of propositions which better fits the preceding scenarios. Making that change means the recursor, rec, must now be assumed rather than defined from ind, and we also then need to assume that N is a (h-)set, i.e. that the identity type Id n m is a proposition for any two values n and m of N. Once that is done, Nat-is-unique barely changes and we don't necessarily need to be working in a type theory with a universe. This illustrates what's going on though. 
Among the properties being quantified over there is, for each potential model of the naturals, the property for which the conclusion of the induction rule is that the naturals embed in this alternate model and the premises of that instance of induction are satisfiable. -For inductive families like $=_A$, but also $\leq_{\mathbb{N}}$ and the rules defining type systems and operational semantics and many other things, we are inductively defining a subset of a (product) type. For $=_A$ we are saying the subset $\{ x:A\ |\ M =_A x \}$ is inductively defined. Or equivalently, the subset $\{ \langle x,y \rangle:A\times A\ |\ x =_A y \}$ is inductively defined. In extensional type theories, this is basically exactly what is happening. In intensional type theories like HoTT, our types are not necessarily "sets" (h-sets), and so "subset" becomes a much richer notion. -(I want to point out that there is still a lot of nuance here. In particular, it's critical to keep the distinction between internal and external views of a logic/type theory. Andrej Bauer has a good article showing that (externally) inductive types can look quite different from what we expect. But also see Peter Lumsdaine's comment therein.) 
-data Nat : Set where - Zero : Nat - Succ : Nat → Nat - -data Id {A : Set} (x : A) : A → Set where - Refl : Id x x - -_trans_ : {A : Set}{x y z : A} → Id x y → Id y z → Id x z -Refl trans q = q - -cong : {A B : Set}{x y : A}(f : A → B) → Id x y → Id (f x) (f y) -cong f Refl = Refl - -record Iso (X Y : Set) : Set where - field - to : X → Y - from : Y → X - lInv : (x : X) → Id x (from (to x)) - rInv : (y : Y) → Id y (to (from y)) - -module _ (N : Set) - (zero : N) - (succ : N → N) - (ind : (P : N → Set) → P zero → ({n : N} → P n → P (succ n)) → (n : N) → P n) where - rec : (A : Set) → A → (A → A) → N → A - rec A z s = ind (λ _ → A) z (λ {_} → s) - - module _ (recZ : (A : Set) → (z : A) → (s : A → A) - → Id z (rec A z s zero)) - (recS : (A : Set) → (z : A) → (s : A → A) → (n : N) - → Id (s (rec A z s n)) (rec A z s (succ n))) where - - Nat-is-unique : Iso N Nat - Nat-is-unique = record { to = toNat; from = fromNat; lInv = invN; rInv = invNat } - where toNat : N → Nat - toNat = rec Nat Zero Succ - fromNat : Nat → N - fromNat Zero = zero - fromNat (Succ n) = succ (fromNat n) - invN : (n : N) → Id n (fromNat (toNat n)) - invN = ind (λ n → Id n (fromNat (toNat n))) (cong fromNat (recZ Nat Zero Succ)) - (λ {n} p → cong succ p trans cong fromNat (recS Nat Zero Succ n)) - invNat : (n : Nat) → Id n (toNat (fromNat n)) - invNat Zero = recZ Nat Zero Succ - invNat (Succ n) = cong Succ (invNat n) trans recS Nat Zero Succ (fromNat n)<|endoftext|> -TITLE: Polyhedra with identical faces -QUESTION [12 upvotes]: The isohedra have identical faces. They have symmetries acting transitively on their faces -- any face can be mapped to any other face to give the same figure. -There are also polyhedra where all faces are the same, but the faces are not transitive. For example, take an antiprism and make caps with the same triangles. -I just found that this net seems to work, with all faces identical. The long edges all have length 1, with angles of 60 and 90 degrees. 
- -Have polyhedra like this been explored? Is there a name for non-isohedra where all faces are the same? - -REPLY [7 votes]: I just posted a question about these polyhedra, which are monohedral but not isohedral. I've included your shape in the question, which seems to have been unknown at least prior to 1996. I've included every example I know of there, though I very much doubt it encompasses all known non-isohedral convex monohedra.<|endoftext|> -TITLE: Points in the boundary of a compact set $K\subset\mathbb{R}^2$ reachable by a path in $K^c$ -QUESTION [8 upvotes]: Let $K\subset\mathbb{R}^2$ be compact. Let the path boundary of $K$ denote the set of points in $z\in K$ such that for some point $w\in K^c$, there is a continuous path $\gamma:[0,1]\to\mathbb{R}^2$ such that - -$\gamma(0)=w$. -$\gamma((0,1))\subset K^c$. -$\gamma(1)=z$. - -Of course the path boundary of $K$ is contained in the boundary of $K$, and it is not hard to find a set whose path boundary is a strict subset of its boundary. Take for example the block $[-1,1]\times[-1,1]$, and remove the sets $\left\{(x,y):y>0,\dfrac{1}{n^2} -TITLE: Prove that zero multiplied by zero is equal to zero. -QUESTION [19 upvotes]: This is my proof: -So, $0\cdot0=0$ -And we know that -$a-a=0$ -By substitution, -We have $(a-a)(a-a)=0$ -Then by simplifying, $a^2-a^2+a^2-a^2=0$ -and the we have $0-0=0$, -Therefore, $0=0$. -I am not sure about my answer. Will you please show me another way of proving it or some way to improve my answer? Thank you! - -REPLY [2 votes]: If we take real numbers: -$0\cdot0 = (0 + 0) \cdot 0\qquad$ ($0 = 0+0$ because $0$ is the additive zero element) -$\qquad= 0\cdot0 + 0\cdot0\qquad$ (distributive law, $(a+b)\cdot c = a\cdot c + b\cdot c$) -Since $x = x+a$ with $x = 0\cdot0$ and $a = 0\cdot0$, this makes $0\cdot0$ the additive zero.<|endoftext|> -TITLE: Is geodesic distance equivalent to "norm distance" in $SL_n(\mathbb{R})$? 
-QUESTION [12 upvotes]: Take any norm, $\|\cdot\|$on $\mathbb{R}^n,$ and consider the resulting norm on $SL_n(\mathbb{R})$: -$$\|A\|:= sup\{\|Av\|: \|v\|=1\}.$$ -Now take any left-invariant Riemannian metric, $g$, on $SL_n$. How do the geodesic balls, $B_g(I, r)$ around the identity matrix, $I$, compare with the metric balls, $B_{\|\cdot\|}(I,r)$ coming from $\|\cdot\|$? In particular do there exist $c, C$ such that $$B_{\|\cdot\|}(I,cr)\subset B_g(I, r) \subset B_{\|\cdot\|}(I,Cr)$$ for all sufficiently small $r$? Or anything of the sort? - -REPLY [5 votes]: Here's what I see in Einsiedler-Ward: -Your norm on $SL_n(\mathbb{R})$ is induced by the operator norm on the vector space $M_{n}(\mathbb{R})$. Being a norm on a finite-dimensional vector space, it is equivalent to the euclidean norm $\|\cdot\|$ on $M_{n}(\mathbb{R})$. -Now any left-invariant Riemannian metric $\langle \ ,\rangle$ on $TG=G\times \mathfrak{g}$ is determined by its restriction to $TG_{I}=\{I\}\times \mathfrak{g} \cong \mathfrak{g} \subset M_n(\mathbb{R})$. Without loss of generality, we can assume that the Riemannian metric restricted to $\mathfrak{g}$ is induced by the euclidean norm on $M_n(\mathbb{R}).$ Let $d$ be the distance on $G$ induced by the path integral formula of $\langle \ , \rangle$. -Let $B$ be a pre-compact neighbourhood of $I$ in $G$ where the local inverse ($\log$) of the exponential map is defined. Assume $\log(B)$ is a convex ball in $\mathfrak{g}$. Let $B'$ be another pre-compact neighbourhood containing the closure of $B$. -Say $\phi:[0,1] \to B'$ joins $g_0, g_1 \in B$. 
Then since the norm of $\phi(t)$ is bounded, we get $c>0$ (independent of $g_0,g_1$) such that -$$L(\phi):= \int\langle D\phi(t), D\phi(t) \rangle^{1/2}dt = \int \left\langle DL^{-1}_{\phi(t)}\circ D\phi(t), DL^{-1}_{\phi(t)}\circ D\phi(t)\right\rangle^{1/2} dt = \int \|\phi(t)^{-1}\phi'(t)\| dt -\\ \geq c\int\|\phi'(t)\|dt \geq c\| g_1-g_0\|.$$ -This shows that $c\|g_1-g_0\| \leq d(g_0,g_1)$ if the infimum of path integrals is taken over paths which remain in $B'$. But since $d\left(B,(B')^c\right)>0,$ we can assume that this estimate holds in general. Hence for all $g_0,g_1 \in B$, we have -\begin{equation} -c\|g_1-g_0\| \leq d(g_0,g_1) \qquad (1) -\end{equation} -and it remains to show a reverse inequality. -Consider the path $\phi:[0,1] \to B$ given by $t \mapsto \exp\left(\log g_0 + t(\log g_1-\log g_0)\right)$. This is well defined since we assumed $\log(B)$ was a convex ball in $\mathfrak{g}$. Then, since the norm of $\phi(t)^{-1}$ is bounded, and since $d(\exp)$ is bounded in $\log(B)$ and since $\log$ is Lipschitz (by the mean value theorem) in a neighbourhood of $I$, -$$d(g_0,g_1) \leq \int\langle D\phi(t), D\phi(t) \rangle^{1/2}dt = \int \|\phi(t)^{-1}\phi'(t)\|dt \leq \int C_1\|\phi'(t)\|dt \leq C_1C_2\|g_1-g_0\|$$ -for some $C_1, C_2>0$ (independent of $g_0, g_1$). Hence for all $g_0,g_1\in B$, we have -$$ d(g_0,g_1) \leq C \|g_0-g_1\|. \qquad (2)$$<|endoftext|> -TITLE: How long to do math each day? -QUESTION [37 upvotes]: I have seen some posts math.SE (mkko's answer) indicating that it is the norm for (undergrad?) math majors to study 70-80 hours per week. I'm a little bit shocked by that. For some background on me, I'm not very advanced (only finished calculus 1-3 and taking my first DE class). However, personally if I am trying to solve a tough problem or prove a theorem, I can't work on it for more than an hour at a time without killing my ability to think creatively, which is the most important skill we mathematicians should cultivate, right? 
After about one or two hours, I can't engage the theorem in deep thought, so trying to prove it becomes more of just an unproductive guessing game. If I just come back the next day, I feel like I have digested the problem much better and gained more perspective. When there is new material, I do spend multiple hours just trying to learn the material and become familiar with all the definitions and intricacies. However, I feel that it's most important to focus on problem-solving and proofs. -So my question is, what do these 80 hours a week consist of for typical math students? Is most of that time spent on trying to just learn the material? Is it actually spent on cultivating problem-solving skills, and I just have a really low tolerance for focusing? Is spending no more than 1-2 hours per day on the same problem optimal, but just a luxury that students can't afford once they reach a certain level? - -REPLY [2 votes]: In the book Outliers, author Malcolm Gladwell says that it takes roughly ten thousand hours of practice to achieve mastery in a field. -If you spread this over a Bachelor's degree, graduate school and a PhD, it comes to roughly 5-6 hours per day. That's it. I find this estimate quite reasonable. If you want to accomplish the same result before graduation, you have to switch to the 10-hour mode.<|endoftext|> -TITLE: if $e$ is an idempotent, then $Re$ is a projective module -QUESTION [6 upvotes]: I would like to show that if $R$ is a ring with $1$, and $e$ is an idempotent in $R$, then $Re$ is a projective module. -My idea is to show that $R = Re \oplus R(1-e)$. It's clear to me that $R = Re + R(1-e)$, but I am having trouble showing that $Re \cap R(1-e) = \{0\}$. I tried the following: if $x \in Re \cap R(1-e)$, then $x = re = r'(1-e)$ for some $r,r' \in R$. Then $re = re^2 = r'(1-e)e = r'(e - e^2) = 0$, but how can I show $r$ is $0$ if $R$ need not necessarily be a domain?
-There is an obvious surjective $R$-module homomorphism $\varphi \colon R \to Re$ given by $r \mapsto re$, so we obtain an exact sequence of $R$-modules -$$0 \to \ker(\varphi) \to R \xrightarrow{\varphi} Re \to 0$$ -To show that $Re$ is a direct summand of $R$, it suffices to show that this sequence is split. The inclusion $i \colon Re \to R$ is a splitting, since $\varphi(i(re)) = re\cdot e = re^{2} = re$, so $R \cong \ker(\varphi) \oplus Re$.<|endoftext|> -TITLE: Closure of Interior and Interior of Closure -QUESTION [6 upvotes]: I know questions similar to this have been asked here, but is it possible to find a subset of a topological space such that its closure of interior and interior of closure do not contain each other? -For example, if $X=\mathbb{R}$ and $A=\mathbb{Q}$, the closure of the interior of $A$ would be contained in the interior of the closure of $A$. -Thanks - -REPLY [5 votes]: An example inside $\Bbb{R}$ is -$$A=([0,1] \cap \Bbb{Q}) \cup [2,3]$$ -the interior of the closure of $A$ is $(0,1) \cup (2,3)$, while the closure of the interior is $[2,3]$: these two sets are not comparable by inclusion.<|endoftext|> -TITLE: derivative on endpoints -QUESTION [6 upvotes]: What's the derivative of $f(x)= x^{2}$ ($x\geq 0$) at $x=0$? -From my understanding, it doesn't exist: even though $\lim_{h \to 0^{+}}\frac{f(x+h)-f(x)}{h}$ is $0$, $\lim_{h \to 0^{-}}\frac{f(x+h)-f(x)}{h}$ doesn't exist. Am I correct? - -REPLY [3 votes]: If you look at the rigorous definition of limit, you may see that $f'(0)$ does exist. In high school or a first calculus course, we usually learn limits naively because the rigorous approach is too hard. But this sometimes causes cognitive obstacles, and thinking, as you do, that a limit exists only if both the left and right limits exist is one such obstacle. Let's introduce the rigorous definition of the limit of a function. Let $A\subset \mathbb{R}$ be the domain of $f$.
- -If for every $\epsilon > 0$, there exists $\delta > 0$ such that if for all $x\in A$, $|x-a|<\delta$ then $|f(x)-L|<\epsilon$, then we say $f$ converges to $L$ as $x\to a$. - -Then how about left or right sided limit? - -If for every $\epsilon > 0$, there exists $\delta > 0$ such that if for all $x\in A$, $a < x -TITLE: Prove that $f(x)=8$ for all natural numbers $x\ge{8}$ -QUESTION [10 upvotes]: A function $f$ is such that $$f(a+b)=f(ab)$$ for all natural numbers $a,b\ge{4}$ and $f(8)=8$. Prove that $f(x)=8$ for all natural numbers $x\ge{8}$ - -REPLY [2 votes]: For $x\geq 4$, we have $$f(x+5)=f(5x)=f(4x+x)=f(4x^2)=f(2x\cdot 2x)=f(4x)=f(x+4)$$ so $f$ is constant over $[8,\infty)$ as desired.<|endoftext|> -TITLE: Liouville numbers and continued fractions -QUESTION [11 upvotes]: First, let me summarize continued fractions and Liouville numbers. - -Continued fractions. -We can represent each irrational number as a (simple) continued fraction by $$[a_0;a_1,a_2,\cdots\ ]=a_0+\frac{1}{a_1+\frac{1}{a_2+\frac{1}{\ddots}}}$$ -where for natural numbers $i$ we have $a_i\in\mathbb{N}$, and we also have $a_0\in\mathbb{Z}$. Each irrational number has a unique continued fraction and each continued fraction represents a unique irrational number. - -Liouville numbers. -An irrational number $\alpha$ is a Liouville number such that, for each positive integer $n$, there exist integers $p,q$ (where $q$ is nonzero) with $$\left|\alpha-\frac pq\right|<\frac1{q^n}$$ -The important thing here is that you can approximate Liouville numbers well, and the side effect is that these numbers are transcendental. - -Now if we look at the Liouville's constant, that is, $L=0.1100010\ldots$ (where the $i!$-th digit is a $1$ and the others are $0$), then we can write -$$L=[0;9,1,99,1,10,9,999999999999,1,\cdots\ ]$$ -The large numbers in the continued fraction make the convergents very close to the actual value, so that the number it represents is in that sense "well approximatable". 
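The partial quotients of such a number can be computed with exact rational arithmetic. A sketch (my own, not part of the question), using the truncation of $L$ at the $4!$-th digit; it checks that the expansion begins $[0;9,\ldots]$ and that folding the terms back up recovers the fraction exactly:

```python
from fractions import Fraction
from math import factorial

def cont_frac(x):
    """Continued fraction [a0; a1, a2, ...] of a positive rational, exactly."""
    terms = []
    while True:
        a = x.numerator // x.denominator        # floor of x
        terms.append(a)
        x -= a
        if x == 0:
            return terms
        x = 1 / x

# truncation of Liouville's constant: 10^(-1!) + 10^(-2!) + 10^(-3!) + 10^(-4!)
L4 = sum(Fraction(1, 10 ** factorial(i)) for i in range(1, 5))
cf = cont_frac(L4)
assert cf[0] == 0 and cf[1] == 9                # the expansion begins [0; 9, ...]

# round trip: rebuilding the fraction from its terms recovers L4 exactly
y = Fraction(cf[-1])
for a in reversed(cf[:-1]):
    y = a + 1 / y
assert y == L4
```

Exact `Fraction` arithmetic matters here: the huge partial quotients that make Liouville numbers so well approximable would be destroyed by floating-point rounding.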
- -My question now is, can we bound the numbers in the continued fraction below to be sure that the number it represents is a Liouville number? - -REPLY [8 votes]: Bounding the error. -The error between a continued fraction $[a_0;a_1,a_2,\ldots]$ and its truncation to the rational number $[a_0;a_1,a_2,\ldots,a_n]$ is given by -$$ -|[a_0;a_1,a_2,a_3,\ldots] - [a_0;a_1,a_2,\ldots,a_n]|=\left|\left(a_0+\frac{1}{[a_1;a_2,a_3,\ldots]}\right) - \left(a_0 + \frac{1}{[a_1;a_2,a_3,\ldots,a_n]}\right)\right|=\left|\frac{[a_1;a_2,a_3,\ldots,a_n]-[a_1;a_2,a_3,\ldots]}{[a_1;a_2,a_3,\ldots]\cdot[a_1;a_2,a_3,\ldots,a_n]}\right| \le \frac{\left|[a_1;a_2,a_3,\ldots,a_n]-[a_1;a_2,a_3,\ldots]\right|}{a_1^2}, -$$ -terminating with $\left|[a_0;a_1,a_2,a_3,\ldots] - [a_0;]\right|\le 1/a_1$; by iterating this recursive bound we conclude that -$$ -\left|[a_0;a_1,a_2,a_3,\ldots] - [a_0;a_1,a_2,\ldots,a_n]\right| \le \frac{1}{a_1^2 a_2^2 \cdots a_n^2}\cdot \frac{1}{a_{n+1}}. -$$ -Let $D([a_0;a_1,a_2,\ldots,a_n])$ be the denominator of the truncation $[a_0;a_1,a_2,\ldots,a_n]$ (in lowest terms). Then we have a Liouville number if for any $\mu > 0$, the inequality -$$ -a_{n+1} \ge \frac{D([a_0;a_1,a_2,\ldots,a_n])^\mu}{a_1^2 a_2^2 \cdots a_n^2} -$$ -holds for some $n$. To give a more explicit expression, we need to bound the growth of $D$. - -Bounding the denominator. -Let $D(x)$ and $N(x)$ denote the denominator and numerator of a rational number $x$ in lowest terms. Then -$$ -D([a_0;a_1,a_2,\ldots, a_n])=D\left(a_0+\frac{1}{[a_1;a_2,a_3,\ldots,a_n]}\right)\\ =D\left(\frac{1}{[a_1;a_2,a_3,\ldots,a_n]}\right)=N([a_1;a_2,a_3,\ldots,a_n]), -$$ -and -$$ -N([a_0;a_1,a_2,\ldots, a_n])=N\left(a_0+\frac{1}{[a_1;a_2,a_3,\ldots,a_n]}\right)\\ =N\left(a_0+\frac{D([a_1;a_2,a_3,\ldots,a_n])}{N([a_1;a_2,a_3,\ldots,a_n])}\right) = a_0 N([a_1;a_2,a_3,\ldots,a_n]) + D([a_1;a_2,a_3,\ldots,a_n]) \\ = a_0 D([a_0;a_1,a_2,\ldots, a_n]) + D([a_1;a_2,a_3,\ldots,a_n]). 
-$$
-So
-$$
-D([a_0;a_1,a_2,\ldots,a_n]) = a_1 D([a_1;a_2,a_3,\ldots,a_n]) + D([a_2;a_3,a_4,\ldots,a_n]),
-$$
-and the recursion terminates with $D([a_0;])=1$ and $D([a_0;a_1])=D(a_0+1/a_1)=a_1$. Since we have $D([a_0;a_1,a_2,\ldots,a_n]) \ge a_1 D([a_1;a_2,a_3,\ldots,a_n])$, we can say that $D([a_1;a_2,a_3\ldots,a_n]) \le \frac{1}{a_1}D([a_0;a_1,a_2,\ldots,a_n])$, and so
-$$
-D([a_0;a_1,a_2,\ldots,a_n]) \le \left(a_1 +\frac{1}{a_2}\right) D([a_1;a_2,a_3,\ldots,a_n]) \le (a_1 + 1)D([a_1;a_2,a_3,\ldots,a_n]).
-$$
-An explicit bound on the size of the denominator is therefore
-$$
-D([a_0;a_1,a_2,\ldots,a_n]) \le (a_1+1)(a_2+1)\cdots(a_n+1).
-$$
-
-Conclusion.
-We conclude the following theorem:
-
-The continued fraction $[a_0;a_1,a_2,\ldots]$ is a Liouville number if, for any $\mu > 0$, there is some index $n$ such that $$a_{n+1} \ge \prod_{i=1}^{n}\frac{(a_i + 1)^{\mu}}{a_i^2}.$$<|endoftext|>
-TITLE: Why is it not possible to draw this triangle?
-QUESTION [6 upvotes]: Why is it not possible to draw triangle $DEF$ with $EF=5.5$ cm, $\angle E=75^{\circ}$ and $DE-DF=1.5$ cm? (I used this method for the construction: http://gradestack.com/CBSE-Class-9th-Complete/Construction/Construction-of-a/14905-2953-4044-study-wtw)
-As far as I could check, it satisfies all the triangle inequalities. So why is this triangle not possible to draw?
-
-REPLY [2 votes]: $\DeclareMathOperator{\d}{d}$
-And now for a completely different approach altogether. Not necessarily the recommended one, but it has some interesting ideas in it.
-First, those numbers really are a smidgeon unpleasant. Please allow me the luxury of working in half-centimetres rather than centimetres, so I get some nice wholesome integers to work with. That way $EF = 11$ half-centimetres and $DE - DF = 3$ half-centimetres. (Really I'm just scaling the lengths and keeping the angles the same. Think similarity.) Now let's consider $E$ fixed at $(0,0)$ and $F$ fixed at $(11,0)$.
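Before hunting for the locus analytically, a quick numerical scan along the $75^{\circ}$ ray (a Python sketch of my own, not part of the original construction) already hints at the obstruction: as $D$ moves out along the ray, $DE - DF$ increases towards $11\cos 75^{\circ} \approx 2.85$ and never reaches $3$.

```python
import math

# E at the origin, F at (11, 0); lengths in half-centimetres, as above.
theta = math.radians(75)

def gap(t):
    """DE - DF for the point D at distance t along the 75-degree ray from E."""
    dx, dy = t * math.cos(theta), t * math.sin(theta)
    de = math.hypot(dx, dy)
    df = math.hypot(dx - 11, dy)
    return de - df

sup = max(gap(k / 10) for k in range(1, 100001))  # scan t over (0, 10000]
print(sup)  # creeps up towards 11*cos(75 degrees), strictly below 3
```

One can check by hand that $t \mapsto DE - DF$ is strictly increasing on the ray, so the scanned maximum really does approximate the supremum.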
The locus of all points $D$ such that $\angle DEF = 75^{\circ}$ is just a ray through the origin; its Cartesian equation is $y = (\tan 75^{\circ}) x$. We shall find the locus of points that satisfy $DE - DF = 3$, and see if the two loci intersect. If they do, then any point of intersection satisfies both conditions and we can mark point $D$ of the triangle. If they do not, then there is no way for $D$ to satisfy both conditions simultaneously, and the triangle is impossible to construct.
-What might the locus $\d(D,E) - \d(D,F) = 3$ look like? Well, Wikipedia has a nice article on the Cartesian oval, which is the plane curve traced out by all points $S$ with the same linear combination of distances from foci $P$ and $Q$:
-$$\d(P,S) + m \d(Q,S) = a$$
-So we're in business. Moreover, there's a list of special cases including where $m=-1$, which gives us a hyperbola. You might already have known it would be a hyperbola if you'd learned about conic sections before. Let's find its equation, by letting $D=(x,y)$ and using the Pythagorean distance formula to find $DE$ and $DF$:
-$$\sqrt{x^2 + y^2} - \sqrt{(x-11)^2 + y^2} = 3$$
-What an almighty mess! To render it more user-friendly, we can follow the steps laid out on this Maths SE thread about Cartesian ovals. First, move the square roots to opposite sides, and square.
-$$x^2 + y^2 = \left(3 + \sqrt{(x-11)^2 + y^2} \right)^2 = x^2 + y^2 - 22x + 130 + 6\sqrt{(x-11)^2 + y^2}$$
-Then simplify (e.g. we can cancel the quadratic terms and divide by two), isolate the square root on one side, then square again. Et voilà, no more radicals to be seen.
-$$ \left(11x - 65\right)^2= \left(3\sqrt{(x-11)^2 + y^2}\right)^2 = 9 \left( (x-11)^2 + y^2 \right)$$ -Skipping merrily through some algebra, we obtain something we can actually graph: -$$ 9y^2 = 112x^2 - 1232x + 3136 $$ - -The blue line is the $75^{\circ}$ ray (the angle does not look that size, but that is only because of the disparity in vertical and horizontal scale) and the red curve is the hyperbola. It seems they do intersect after all, near $(2.5,9.3)$, yet this is clearly closer to $E$ than $F$! What has happened? -Unfortunately, squaring the equation makes it "forget" whether we originally had $DE - DF = 3$ or, instead, $DF - DE = 3$. The hyperbola has two branches: the left branch corresponds to $D$ being three units closer to $E$ than $F$ (which is possible, hence the point of intersection) while the right branch corresponds to $D$ being three units closer to $F$ than $E$ (in which case it is apparently impossible to satisfy $\angle DEF = 75^{\circ}$, because the graph suggests the ray misses this branch of the hyperbola entirely). You might have noticed the branches have $x$-intercepts at $(4,0)$ and $(7,0)$ — the former being seven units from $F$ and only four from $E$ (three units closer!), the latter vice versa. -If we extend the ray down into the third quadrant, where $x$ and $y$ are both negative, we see it will intersect the (left branch of) the hyperbola again, about $(-94.7, -353.6)$. This isn't a solution to the original problem: not only would $D$ be three units closer to $E$ than $F$ whereas we seek the reverse, but $\angle DEF = 105^{\circ}$ (the supplement of $75^{\circ}$) because the ray is in the wrong quadrant. So why should we care about this point? Well, the existence of two distinct points of intersection on the left branch of the hyperbola proves that there can be no point of intersection on the right branch. 
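Both intersection points are easy to confirm numerically (my own check; it uses $\tan 75^{\circ} = 2+\sqrt{3}$):

```python
import math

m = 2 + math.sqrt(3)  # tan 75 degrees, the slope of the ray
# Substitute y = m x into 9y^2 = 112x^2 - 1232x + 3136 and solve for x:
a, b, c = 9 * m**2 - 112, 1232, -3136
disc = math.sqrt(b * b - 4 * a * c)
x1, x2 = (-b - disc) / (2 * a), (-b + disc) / (2 * a)
print(sorted([x1, x2]))  # roughly -94.7 and 2.5; both have x < 4
```

Both abscissae lie at or left of $x=4$, i.e. on the left branch, which is exactly the point being made.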
A line can only intersect a hyperbola at most twice — a general result true for all conic sections, except for the degenerate ones. Solving their equations simultaneously would give rise to a quadratic, which can have at most two solutions. This is sufficient to show that the desired construction is impossible. -If you want to have a go at procuring the numerical results for these points of intersection for yourself, it may help you to know that $\tan 75^{\circ} = 2+\sqrt{3}$, so the line has equation $y=(2+\sqrt{3})x$. But rather than do that, I shall investigate for which angles $\theta = \angle DEF$ the construction is possible. We can do this algebraically by checking whether the line $y=mx$ intersects the right branch, where $m=\tan \theta$. It's clear from the graph that any ray through the origin will intersect the left branch, so for any desired $\angle DEF$ we can construct a triangle where $D$ is three units closer to $E$ than $F$. The question is whether the equations will yield a second solution, and if so, on which branch? Substituting $y=mx$ into the equation of the hyperbola gives us -$$ 9m^2x^2 = 112x^2 - 1232x + 3136$$ -This is quadratic with coefficients $a=112-9m^2$, $b=-1232$ and $c=3136$. Note that $b$ is fixed and negative, $c$ is fixed and positive, whereas by suitable choice of $m$ we can attain any value $a \le 112$, positive or negative. Imagine running through the angles $\theta = 0^{\circ}$ to $90^{\circ}$: this means running through gradients $m = \tan \theta$ from zero to positive infinity, and hence $a$ runs through all values from $112$ to negative infinity. Beware this means passing through $a=0$. This creates difficulties with the standard quadratic formula, -$$x=\frac{-b\pm\sqrt{b^2-4ac}}{2a}$$ -When $a$ is near zero, both numerator (when the "plus" is taken at the $\pm$) and denominator are near zero; when $a=0$ we obtain a fraction of the form $\frac{0}{0}$. This makes careful analysis of the roots quite difficult. 
It would be much more convenient if $a$ only appeared in the equation once, and hence affected only one of the numerator and denominator. Fortunately, there is an alternative quadratic formula sometimes used (e.g. in Muller's method) for its numerical properties: -$$x=\frac{-2c}{b\pm\sqrt{b^2-4ac}}$$ -Let's first consider taking the "minus" at the $\pm$. Remembering that $c>0$ and $b<0$, we see the denominator is negative (it is the sum of two negative terms), as is the numerator, so this root is always positive. When $\theta = 0^{\circ}$ and $a=112$, we find the root is -$$x = \frac{-2(3136)}{-1232 - \sqrt{(-1232)^2-4(112)(3136)}} = 4$$ -As we swing our line up through larger $\theta$, the falling value of $a$ makes the denominator even more negative. The $x$ coordinate of our point of intersection remains positive, but smoothly decreases so the point moves to the left. As our line nears the vertical, and $\theta$ approaches a right angle, then $a$ and hence the entire denominator approach negative infinity, $x$ approaches zero, and our point of intersection nears the $y$-axis. Looking at the graph, it's clear we've traced a path along the left branch of the hyperbola. For each angle, we've found a point three units closer to $E$ than $F$. - -Now switch our attention to the root which takes the "plus" at the $\pm$. Starting at $\theta = 0^{\circ}$, we can again substitute $a=112$ to find $x=7$. With an angle of zero, we must be at $(7,0)$, so you can see we are initially on the right branch of the hyperbola, three units closer to $F$ than $E$. As we pivot our line up through greater $\theta$, we must pay careful consideration to signs. Since $c>0$ and $b<0$, the numerator is clearly negative, but the denominator is the sum of positive and negative terms. Its sign will be positive if and only if $\sqrt{b^2-4ac}$ is greater than the magnitude of $b$. -To start with, this is clearly not the case. 
But as $a$ declines, the value of $\sqrt{b^2-4ac}$ rises, the denominator becomes less negative (i.e. closer to zero), and $x$ increases, so our point of intersection moves to the right. When $a$ is only just above zero, $b + \sqrt{b^2-4ac}$ is only just below zero, and $x$ will move very far to the right indeed: in fact, as $a \to 0^+$ we have $x \to +\infty$. But when we reach the critical angle at which $a$ is exactly zero, then $x = \frac{-2c}{0}$ yields no root. What's going on here is that, with $a=0$, our "quadratic" equation is really linear, and only has one root. It's not here, and that's because we found it already: it's sitting happily on the left branch of the hyperbola, where we chose "minus" at the $\pm$ and the special case $a=0$ caused no particular problems. (If we had used the standard quadratic formula, putting $a=0$ gives indeterminate forms $0/0$ in both $\pm$ cases, obfuscating which root has really been "lost". Using the alternative formula avoids this: its drawback is that analogous trouble would break out if $c=0$, but that's not a problem for us here.)
-A negligible increase in $\theta$ will tip $a$ into negative territory, and $\sqrt{b^2-4ac}>|b|$ (albeit only just). A tiny positive denominator ensures that $x$ is large and negative, so we must have switched over to the third quadrant (for $0^{\circ}<\theta<90^{\circ}$ then $m=\tan \theta >0$ so $x<0$ implies $y<0$) and the left branch of the hyperbola. As $\theta$ approaches a right angle, $a$ monotonically approaches negative infinity and the denominator monotonically approaches positive infinity: this means $x$ approaches zero from the left, and our point of intersection is smoothly moving right and nearing the $y$-axis. There is no way for it to return to the right branch of the hyperbola after its jump to the left. For angles above the critical angle, the construction is impossible.
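The journey of both roots can be traced numerically with the alternative quadratic formula (a sketch of my own; the angle values are chosen to straddle the critical angle near $74.17^{\circ}$ identified next):

```python
import math

b, c = -1232.0, 3136.0

def roots(theta_deg):
    """Both intersection abscissae x = -2c/(b +- sqrt(b^2 - 4ac)) at a given angle."""
    a = 112 - 9 * math.tan(math.radians(theta_deg)) ** 2
    disc = math.sqrt(b * b - 4 * a * c)
    minus = -2 * c / (b - disc)
    plus = -2 * c / (b + disc) if b + disc != 0 else float("inf")
    return minus, plus

for t in (0.01, 30, 60, 74.0, 74.5, 80):
    print(t, roots(t))
# The "minus" root stays between 0 and 4 (left branch); the "plus" root is
# large and positive just below the critical angle, and flips to large and
# negative just above it, exactly as described.
```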
-
-In conclusion, for any desired angle $0^{\circ} < \angle DEF < 90^{\circ}$ we can construct a point $D$ such that $D$ lies three units nearer to $E$ than to $F$. But if we want $D$ three units closer to $F$ than to $E$, this can only be achieved below a critical angle for which $a=112-9(\tan \theta)^2=0$. This gives $\tan \theta = \frac{4\sqrt{7}}{3}$ and $\theta \approx 74.17^{\circ}$. If you are trying to see how this ties into other answers, it's interesting that this angle has a cosine of $\frac{3}{11}$. You might want to consider which constructions are possible for an obtuse $\angle DEF$: the hyperbolas are still valid, you just need to consider the intersections with the appropriate rays.
-
-A slightly more general method: consider a hyperbola with foci at $(-c, 0)$ and $(c,0)$, hence centred at $(0,0)$. Consider the locus of a point whose distances from the foci differ by $2a$; then the Cartesian equation of the hyperbola is:
-$$\sqrt{(x-c)^2 + y^2} - \sqrt{(x+c)^2 + y^2} = \pm 2a$$
-Moving the square roots to opposite sides and squaring, we obtain:
-$$(x-c)^2 + y^2 = 4a^2 + (x+c)^2 + y^2 \pm 4a \sqrt{(x+c)^2 + y^2}$$
-Expanding the brackets, simplifying, and making the term with the square root the subject,
-$$\mp a \sqrt{(x+c)^2 + y^2} = cx + a^2$$
-Squaring and simplifying, we obtain
-$$a^2 y^2 + (a^2 - c^2) x^2 = a^4 - a^2 c^2$$
-which can be put into the more familiar form for a hyperbola,
-$$\frac{x^2}{a^2} - \frac{y^2}{c^2 - a^2} = 1$$
-Note that the asymptotes of the hyperbola are at $y = \pm \frac{b}{a} x$ where $b^2 = c^2 - a^2$ is the denominator of the $y^2$ fraction. If we want to change the origin to be the focus on the left, we just translate right by $c$ so that the hyperbola has centre $(c,0)$:
-$$\frac{(x-c)^2}{a^2} - \frac{y^2}{c^2 - a^2} = 1$$
-The asymptotes still have slope $b/a$ but go through $(c,0)$, so have equation $y = \pm \frac{b}{a} (x - c)$.
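The focal-distance property behind this derivation is easy to verify numerically (my own sketch; for concreteness it uses $a=1.5$, $c=5.5$, the values relevant to this problem):

```python
import math

a, c = 1.5, 5.5
b2 = c * c - a * a  # the y^2 denominator, c^2 - a^2

# Sample points on both branches of x^2/a^2 - y^2/(c^2 - a^2) = 1 and check
# that the distances to the foci (-c, 0) and (c, 0) always differ by 2a.
for x in (1.5, 2.0, 3.0, 10.0, -1.5, -2.0, -10.0):
    y = math.sqrt(b2 * (x * x / (a * a) - 1))
    d_left = math.hypot(x + c, y)    # distance to the focus (-c, 0)
    d_right = math.hypot(x - c, y)   # distance to the focus (+c, 0)
    print(round(abs(d_left - d_right), 9))  # 3.0 = 2a every time
```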
The equation of the hyperbola in your case can be rearranged into the desired form, or we can procure it by using $c=5.5$ and $a=1.5$ (since we demanded foci a distance $2c = EF = 11$ apart, and the distances from the foci had to differ by $2a = 3$) in the general formula given:
-$$\frac{(x-5.5)^2}{2.25} - \frac{y^2}{28} = 1$$
-The slope of the upwards asymptote is $\frac{b}{a} = \sqrt{\frac{28}{2.25}} = \frac{4\sqrt{7}}{3}$, which we previously identified as the critical slope. The key idea here is that the right branch of the hyperbola is entirely to the right of the asymptotes through $(c,0)=(5.5,0)$, although it does approach close to them. If the ray from the origin is as steep as or steeper than the upwards asymptote, which will occur when $\tan \angle DEF$ is greater than or equal to the gradient of the asymptote, then the ray can never cross the asymptote and hence never intersects the hyperbola: the triangle cannot be constructed. If it is less steep, then it is bound to cross the asymptote, and this means that it will intersect the hyperbola and the triangle can be constructed.
-The final argument, that the ray crossing the asymptote guarantees it intersects the hyperbola, can be proven either using the algebraic root-finding approach outlined above, or by an argument from analysis: to the right of the point of intersection, the ray lies below the asymptote (by an ever-increasing distance the further right you go), yet the hyperbola is a continuous curve that lies below the asymptote and must become arbitrarily close to it. The height of the ray above the hyperbola must be positive where it crosses the asymptote, but must eventually become negative, so by the Intermediate Value Theorem there must be a point where that height is zero.
-More generally, the critical slope is $\frac{b}{a}$ and so the critical angle satisfies $\tan^2 \theta = \frac{b^2}{a^2}$.
To put this in terms of the original $a$ and $c$, we can write $\tan^2 \theta = \frac{c^2 - a^2}{a^2} = \frac{c^2}{a^2} - 1$. Since $\sec^2 \theta = \tan^2 \theta + 1$ we find that $\sec^2 \theta = \frac{c^2}{a^2}$, and so -$$\cos \theta = \frac{a}{c} = \frac{2a}{2c} $$ -So the cosine of the critical angle is simply the ratio between the difference in distances from $D$ to the foci, and the distance between the two foci. This is why, in our case, we simply obtained $\cos \theta = \frac{3}{11}$.<|endoftext|> -TITLE: $N\rtimes H$ is isomorphic to $N'\rtimes H$ over $H$, then $N\simeq N'$? -QUESTION [5 upvotes]: Let $s:H\hookrightarrow G$ be an inclusion of groups and $f,f':G\to H$ be two morphisms s.t. $f\circ s = f'\circ s = \mathrm{id}_H$. Then can we conclude that $\mathrm{Ker}(f) \simeq \mathrm{Ker}(f')$? (The title is an equivalent version of the question) - -REPLY [6 votes]: Let $G = \langle x,y,z \mid x^3=y^2=z^2=(xy)^2=[x,z]=[y,z]=1 \rangle \cong D_6 \times C_2$ (where $D_6$ means dihedral of order $6$), and $H = \langle y \rangle$. -Define $f_1:G\to H$ by $x \mapsto 1$, $y \mapsto y$, $z \mapsto 1$ and -$f_2:G\to H$ by $x \mapsto 1$, $y \mapsto y$, $z \mapsto y$ -Then $\ker(f_1) = \langle x,z \rangle \cong C_6$ and $\ker(f_2) = \langle x,yz \rangle \cong D_6$.<|endoftext|> -TITLE: Solving Laplace's equation in a sphere with mixed boundary conditions on the surface. -QUESTION [5 upvotes]: Can anyone help point me to a solution method for this problem? -Solve $C(\vec{x})$, where $\vec{x}=(r,\theta,\phi)$ on -$\Omega=\{\vec{x}\in\mathbb{R}^3\ |\ r\in[0,R],\ \phi\in[0,2\pi),\ \theta\in[0,\pi)\}$, where $R>0$. 
We define the boundaries and regions within $\Omega$ as follows: -\begin{align} -\partial\Omega_1 &= -% -\{\vec{x}\in\mathbb{R}^3\ |\ r=R,\ \theta\in[0,\theta_1),\ \phi\in[0,2\pi)\}\\ -% -\partial\Omega_2 &= -% -\{\vec{x}\in\mathbb{R}^3\ |\ r=R,\ \theta\in[\theta_1,\theta_2),\ \phi\in[0,2\pi)\}\\ -% -\partial\Omega_3 &= -% -\{\vec{x}\in\mathbb{R}^3\ |\ r=R,\ \theta\in[\theta_2,\pi),\ \phi\in[0,2\pi)\} -\end{align} -$C(\vec{x})$ is governed by the diffusion equation within $\Omega$ with boundary conditions given below, -\begin{align} -% -0 &= \nabla^2 C -% -\qquad &\text{for}\ \vec{x}\in\Omega \\ -% --\vec{n}\cdot\nabla C &= -\mu -% -\qquad &\text{for}\ \vec{x}\in\partial\Omega_1\\ -% --\vec{n}\cdot\nabla C &= \sigma C -% -\qquad &\text{for}\ \vec{x}\in\partial\Omega_2\\ -% --\vec{n}\cdot\nabla C &= 0 -% -\qquad &\text{for}\ \vec{x}\in\partial\Omega_3 -\end{align} -where $\mu,\sigma>0$. -By symmetry the problem reduces to -\begin{align} -0 =& -% -\frac{\partial }{\partial r}\left( -r^2 -\frac{\partial C}{\partial r} -\right) -% -+ -\frac{1}{\sin{\theta}} -\frac{\partial}{\partial \theta} -\left( -\sin{\theta} -\frac{\partial C}{\partial \theta} -\right) -\end{align} -With the same BC, however I can't find a solution method that does not cause the problem to become badly posed. - -EDIT: I have come across this paper by Mottin, I am unsure of its applicability here due to the piecewise definition of our Robin boundary condition. Does this invalidate the result of this paper? - -REPLY [2 votes]: The paper [Mottin,2016] corresponds to the case where the boundaries are the pure Robin conditions (h is a constant). -For your boundary conditions see the paragraph 8.3 of this paper and the references: -[Alessandrini G. , Piero L. D. , Rondi L., Stable determination of corrosion by a single electrostatic boundary measurement, Inverse Probl. 2003; 19:973-984.] -[Fasino D, Inglese G. An inverse Robin problem for Laplace’s equation: theoretical results and numerical methods. 
Inverse Probl. 1999;15:41–48].<|endoftext|>
-TITLE: An example of a 2D shape with its centre of mass on its boundary
-QUESTION [14 upvotes]: The object has constant density. Could anybody suggest one for me?
-
-REPLY [3 votes]: It's hard to improve on @YvesDaoust 's answer, but this athletic feat suggests another: in a well-executed high jump, the jumper's center of gravity stays well under the bar, so it is outside his (or her) body. At some point in the jump it is on his boundary.
-Pictures here, including one that is a direct answer to the question:
-http://nrich.maths.org/2742<|endoftext|>
-TITLE: Group Operations/ Group Actions
-QUESTION [5 upvotes]: I'm currently taking my first abstract algebra course and am learning about group actions, orbits, and stabilizers. I'm reading the Artin textbook and I am not very clear on what exactly a group action allows us to do, what it looks like, and why it's important. I know the two properties that must be satisfied to be a group action, but I just don't understand its usefulness yet. I have watched some videos on them and read a few sections of other texts but am still not very clear. Does anyone have any simple, clear examples for understanding group actions, stabilizers, and orbits? It would be very much appreciated.
-
-REPLY [5 votes]: Consider for example the action of $\Bbb Z\times\Bbb Z$ on the plane $\Bbb R^2$ given by $(m,n)(x,y)=(x+m,y+n)$. The orbit of a point is a lattice in $\Bbb R\times\Bbb R$. And the unit square $[0,1)\times[0,1)$ is a set of representatives, one point for each orbit. The action identifies a side of that square with the opposite side. Now if you take a square and identify opposite sides like that, you get a torus. So the orbit space of this action is a torus.
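This lattice action is easy to play with computationally (an illustrative sketch of my own): the representative of a point's orbit in the fundamental square is obtained by reducing each coordinate mod 1.

```python
def orbit_rep(x, y):
    """Representative of the (Z x Z)-orbit of (x, y) in the square [0,1) x [0,1)."""
    return (x % 1.0, y % 1.0)

# Acting by any lattice element (m, n) stays in the same orbit, so the
# representative (the "point on the torus") is unchanged:
p = orbit_rep(2.7, -0.4)
q = orbit_rep(2.7 + 5, -0.4 - 3)   # act by the lattice element (5, -3)
print(p, q)  # the same point of the fundamental square, up to rounding
```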
Now having the torus as an orbit space allows us to identify certain structural properties of it, and it gives us a nice continuous map from $\Bbb R^2$ to the torus obtained by mapping a point $p\in\Bbb R^2$ to its orbit.<|endoftext|>
-TITLE: May I know how this integral was evaluated by using the theory of elliptic integrals?
-QUESTION [5 upvotes]: I cannot solve the following integral using the theory of elliptic integrals:
-$$\int_a^b \frac{\sin(x)}{\sqrt{c-\sin(x)}}dx$$
-where $a\geq 0, b>0, c>0$.
-Wolfram$|$Alpha showed the following result:
-http://www.wolframalpha.com/input/?i=integrate+%5B%2F%2Fmath:sinx%2F(a-sinx)%5E(1%2F2)%2F%2F%5D+dx
-But I do not understand how Wolfram$|$Alpha came to this result.
-Thanks in advance.
-P.S.:
-How can the result also be expressed in terms of hypergeometric functions?
-
-REPLY [4 votes]: I will show the relation to the elliptic integrals, as I said in a comment (and Albas in the next comment):
-$$\sin(x)=-\sin(-x)=-\cos \left(x +\frac{\pi}{2} \right)=-1+2 \sin^2 \left( \frac{x}{2}+\frac{\pi}{4} \right)$$
-$$\phi=\frac{x}{2}+\frac{\pi}{4}$$
-$$x=2\phi-\frac{\pi}{2}$$
-$$\alpha=\frac{a}{2}+\frac{\pi}{4}$$
-$$\beta=\frac{b}{2}+\frac{\pi}{4}$$
-$$\gamma=\sqrt{\frac{2}{c+1}}$$
-
-$$\int_a^b \frac{\sin(x)}{\sqrt{c-\sin(x)}}dx=\sqrt{2} \gamma \int_\alpha^\beta \frac{-1+2 \sin^2 \phi}{\sqrt{1-\gamma^2 \sin^2 \phi}}d\phi=$$
-$$=2\sqrt{2} \gamma \int_\alpha^\beta \frac{\sin^2 \phi ~d\phi}{\sqrt{1-\gamma^2 \sin^2 \phi}}-\sqrt{2} \gamma \int_\alpha^\beta \frac{d\phi}{\sqrt{1-\gamma^2 \sin^2 \phi}}$$
-
-The second integral is the incomplete elliptic integral of the first kind, $F(\alpha, \gamma)-F(\beta, \gamma)$, and the first can be calculated in terms of incomplete elliptic integrals of the first and second kind.
-
-I will show the full solution for the easiest case of complete elliptic integrals.
-$$\alpha=0$$
-$$\beta=\frac{\pi}{2}$$
-This means that:
-$$a=-\frac{\pi}{2}$$
-$$b=\frac{\pi}{2}$$
-Now the second integral will just be:
-$$ \int_0^{\frac{\pi}{2}} \frac{d\phi}{\sqrt{1-\gamma^2 \sin^2 \phi}}=K(\gamma)$$
-On the other hand, the complete elliptic integral of the second kind is defined:
-$$\int_0^{\frac{\pi}{2}} \sqrt{1-\gamma^2 \sin^2 \phi} ~~d\phi=E(\gamma)$$
-$$\frac{d E}{d \gamma}=-\gamma \int_0^{\frac{\pi}{2}} \frac{\sin^2 \phi ~d\phi}{\sqrt{1-\gamma^2 \sin^2 \phi}}=\frac{1}{\gamma}(E(\gamma)-K(\gamma))$$
-So the first integral is:
-$$\int_0^{\frac{\pi}{2}} \frac{\sin^2 \phi ~d\phi}{\sqrt{1-\gamma^2 \sin^2 \phi}}=\frac{1}{\gamma^2}(K(\gamma)-E(\gamma))$$<|endoftext|>
-TITLE: Finding limit of $\lim_{x\to 0^+}\{[(1+x)^{1/x}]/e\}^{1/x}$
-QUESTION [5 upvotes]: This is a question given in our weekly test.
-$$f = \lim_{x\to 0^+}\{[(1+x)^{1/x}]/e\}^{1/x}.$$
-Find the value of $f$. I tried to use the $1^{\infty}$ form but I didn't get it. Can anybody please help me?
-
-REPLY [4 votes]: Your limit $f$ exists if and only if its logarithm exists:
-\begin{align}
-\log f
-&=\lim_{x\to0^+}\log\Bigl(\bigl((1+x)^{1/x}/e\bigr)^{1/x}\Bigr)
-\\[6px]
-&=\lim_{x\to0^+}\frac{\dfrac{1}{x}\log(1+x)-1}{x}
-\\[6px]
-&=\lim_{x\to0^+}\frac{\log(1+x)-x}{x^2}
-\\[6px]
-&=\lim_{x\to0^+}\frac{x-x^2/2+o(x^2)-x}{x^2}=-\frac{1}{2}
-\end{align}
-so $f=e^{-1/2}$.
-If you don't trust Taylor expansions (or cannot use them),
-$$
-\lim_{x\to0^+}\frac{\log(1+x)-x}{x^2}
-\overset{*}{=}
-\lim_{x\to0^+}\frac{\dfrac{1}{1+x}-1}{2x}
-=
-\lim_{x\to0^+}\frac{-x}{2x(1+x)}
-$$
-(where $\overset{*}{=}$ denotes an application of l'Hôpital).<|endoftext|>
-TITLE: Show that for any positive integer $n$ there is $x \in [0,1-\frac{1}{n}]$ for which $f(x) = f(x+\frac{1}{n})$
-QUESTION [6 upvotes]: Suppose $f$ is a continuous function over $[0,1]$ such that $f(0) = f(1).$ Show that for any positive integer $n$ there is $x \in [0,1-\frac{1}{n}]$ for which $f(x) = f(x+\frac{1}{n})$.
-
-We seem to be saying that $f(x+\frac{1}{n})$ is periodic with respect to any positive integer $n$. Also this seems to make sense: since we start and end at the same spot, there have to be values of $x$ with the same $f(x)$ as other $x$'s. But I am struggling to see how to prove this specific statement.
-
-REPLY [6 votes]: Consider $g(x) = f(x) - f(x+\frac{1}{n})$. We want to prove that $g(x) = 0$ for some $x\in [0,1-\frac{1}{n}]$. Consider the values $g(0), g(1/n), g(2/n), \ldots, g(1-1/n)$. If one of them is zero, we're done. Otherwise, suppose they were all positive or all negative. The first case would mean $f(0) > f(1/n) > f(2/n) > \cdots > f(1-1/n) > f(1)$, which contradicts $f(0)=f(1)$, and the second case would likewise mean $f(0) < f(1)$. Thus, one of the values is positive and one is negative, and therefore by continuity $g(x) = 0$ for some $x$ in the given interval.<|endoftext|>
-TITLE: Flat non-trivial $U(1)$-bundle? Is it possible?
-QUESTION [11 upvotes]: Maybe this is a very stupid question and I'm missing something very trivial.
-It's well known that $U(1)$-bundles are classified by the Euler class or the first Chern class. More precisely, the isomorphism $$c: \check{H}^1 (X, \mathscr{C}^{\infty} (-, U(1))) \xrightarrow{\sim} H^2 (X, \mathbb{Z}),$$ induced by the exact sequence $0 \rightarrow \mathbb{Z} \rightarrow \mathbb{R} \xrightarrow{exp} U(1) \rightarrow 0$, the sheaf isomorphism $\mathscr{C}^{\infty} (-, \mathbb{Z}) \cong \mathbb{Z}_X$ and the isomorphism from Čech to singular cohomology (or de Rham cohomology with integral periods), is the first Chern class or the Euler class.
-It's well known too that the first Chern class is given by $$\left[\frac{1}{2\pi i}F_{\nabla}\right].$$
This equality can be accomplished by using the facts that the first Chern class is equal to the Euler class, that the global angular form $\psi$ restricts to the Maurer-Cartan form on each fiber, and that every connection is of the form $\psi + \pi^{*} \alpha$ for some $\alpha \in \Omega^1 (X, \mathfrak{u}(1))$, where $\pi: P \twoheadrightarrow X$ is the $U(1)$-bundle.
-1) From these two remarks, one can conclude that if $F_{\nabla} = 0$ on some circle bundle $P$, then $c_1 (P) = 0$ and, therefore, $P$ is trivial.
-However it's well known too that in the theory of Cheeger-Simons differential characters $$H^1 (X, U(1))$$ classifies flat $U(1)$-bundles with connection by assigning the holonomy $$\text{Hol}_{\nabla} (z) = \langle c, z \rangle$$ for every $z \in Z^1 (X, \mathbb{Z})$ to each $[c] \in H^1 (X, U(1))$ and using the canonical pairing coming from the fact that $U (1)$ is divisible and, hence, $H^1 (X, U(1)) = \text{Hom} (H_1 (X, \mathbb{Z}), U(1))$.
-2) Therefore there may exist non-trivial bundles with flat connection whenever the first cohomology does not vanish.
-Having this in mind, why is there a non-trivial flat $U(1)$-bundle?
-In other words, what's wrong with my conclusions in 1 and 2?
-Thanks in advance.
-
-REPLY [3 votes]: Topological line bundles are classified by $H^1(X, U(1)) \cong H^2(X, \mathbb{Z})$ where $U(1)$ has the usual topology. Flat line bundles are classified by $H^1(X, U(1)) \cong \text{Hom}(H_1(X), U(1))$ where $U(1)$ has the discrete topology; this is often denoted by $U(1)_d$. There's a natural map
-$$H^1(X, U(1)_d) \to H^1(X, U(1)) \cong H^2(X, \mathbb{Z})$$
-sending a flat line bundle to its first Chern class, and Chern-Weil theory implies that if $X$ is a smooth manifold then this class has image zero in de Rham cohomology. But as Mike Miller says this only implies that it's torsion, not zero.<|endoftext|>
-TITLE: Proving properties for the Poisson-process.
-QUESTION [12 upvotes]: Define a Poisson process as a Lévy process whose increments have a Poisson distribution with parameter $\lambda$ times the length of the increment.
-I want to prove these properties:
-It has almost surely jumps of value 1.
-It is almost surely increasing.
-When it changes, the change is almost surely integer-valued.
-It is almost surely positive. I think this will follow from the above.
-Earlier I tried to prove that the jumps are almost surely of value 1 here:
-Proving that the Poisson process has a.s. jumps of value 1.
-But looking back on the proof I think it is wrong, because it relies on the claim that if, for instance, $N((k+1)\cdot T/n)-N(k\cdot T/n)$ is either 0 or 1 over our entire partition, then this event is contained in the event that all our jumps are of value either 0 or 1. However, the process could theoretically move up and down more within each interval.
-So do you see how to prove the properties I wrote at the start? My only idea is to somehow look at partitions whose mesh approaches 0, and use that the probability that $N((k+1)\cdot T/n)-N(k\cdot T/n)$ is either 0 or 1 then approaches 1. But the problem is that even if $N((k+1)\cdot T/n)-N(k\cdot T/n)=1$, the process may have had two jumps where it went down 2.4 and then up 3.4.
-Another problem is the increasing part. For any interval, the probability that the value of the process at the end of the interval minus the value at the start is non-negative equals 1 (since the distribution here is Poisson). However, this does not say that the process is increasing throughout the interval.
-How do we show that it behaves the way we know it does, a.s.?
-Or can we maybe only say something about the process if we fix the interval first? If we fix a given interval, we can say with probability 1 that it increases and that the increase is integer-valued? But we can't prove that the process has a.s. increasing sample paths? So that in essence we can only prove things about finitely many points, but not the sample path as a whole?
But then, what happens with jumps of value 1 a.s.? Maybe we cannot even prove this?
-UPDATE: Cleaner version of the question
-Maybe it is not so easy to see what I am wondering about in the question above. I'll try to explain it a little more precisely.
-Assume that you only have the interval $[0,T]$. Let's say that you have a sequence of partitions whose mesh converges to 0. Then you only look at the respective values of the process at the partition points. You can then prove that these values are increasing with probability 1. But if you looked at the entire sample path, could you then prove that the path is increasing with probability 1?
-In each partition you can prove with probability 1 that the function only changes by integer values at the partition points. Can you prove this property for the entire sample path a.s.?
-You can prove that, as the partition intervals go to zero, the probability that the increment between two neighbouring partition points is bigger than 1 converges to 0. (I have only proved this myself for partitions of equal length, but I assume it holds in the general case.) Can this property be extended to the entire sample path, so that when the function changes value, it does so by increasing by 1 (a.s.)?
-So in summary: If you look at the finite-dimensional distributions of a Poisson process, where you look at points close to each other, you have that with probability 1 it is increasing, and the jumps are integer-valued. And you have with a probability close to 1 (depending on the partition) that the jumps are of value 1. Can these properties in some way be extended to the entire sample path?
-
-REPLY [11 votes]: Let $(X_t)_{t \geq 0}$ be a Poisson process with intensity $\lambda$.
-Step 1: $(X_t)_{t \geq 0}$ has almost surely increasing sample paths.
-Proof: Fix $s \leq t$. Since $(X_t)_{t \geq 0}$ has stationary increments and $X_{t-s}$ is Poisson distributed, we have
-$$\mathbb{P}(X_s>X_t) = \mathbb{P}(X_t-X_s < 0) = \mathbb{P}(X_{t-s}<0)=0,$$
-i.e.
$X_s \leq X_t$ almost surely. As $\mathbb{Q}_+$ is countable, this implies
-$$\mathbb{P}(\forall q \leq r, q,r \in \mathbb{Q}_+: X_q \leq X_r)=1.$$
-Since $(X_t)_{t \geq 0}$ has càdlàg sample paths, this already implies
-$$\mathbb{P}(\forall s \leq t: X_s \leq X_t)= 1.$$
-
-Step 2: $(X_t)_{t \geq 0}$ takes almost surely only integer values.
-We have $\mathbb{P}(X_q \in \mathbb{N}_0)=1$ for all $q \in \mathbb{Q}_+$. Hence, $\mathbb{P}(\forall q \in \mathbb{Q}_+: X_q \in \mathbb{N}_0)=1$. As $(X_t)_{t \geq 0}$ has càdlàg sample paths, we get
-$$\mathbb{P}(\forall t \geq 0: X_t \in \mathbb{N}_0)=1.$$
-(Note that $\Omega \backslash \{\forall t \geq 0: X_t \in \mathbb{N}_0\} \subseteq \{\exists q \in \mathbb{Q}_+: X_q \notin \mathbb{N}_0\}$ and that the latter is a $\mathbb{P}$-null set.)
-
-Step 3: $(X_t)_{t \geq 0}$ has almost surely integer-valued jump heights.
-We already know from Steps 1 and 2 that there exists a null set $N$ such that $X_t(\omega) \in \mathbb{N}_0$ and $t \mapsto X_t(\omega)$ is non-decreasing for all $\omega \in \Omega \backslash N$. Consequently, we have
-$$X_s(\omega) - X_t(\omega) \in \mathbb{N}_0$$
-for any $s \geq t$ and $\omega \in \Omega \backslash N$. On the other hand, we know that the limit
-$$\lim_{t \uparrow s} (X_s(\omega)-X_t(\omega)) = \Delta X_s(\omega) $$
-exists. Combining both considerations yields $\Delta X_s(\omega) \in \mathbb{N}_0$. (Check that the following statement is true: if $(a_n)_{n \in \mathbb{N}} \subseteq \mathbb{N}_0$ and the limit $a:=\lim_n a_n$ exists, then $a \in \mathbb{N}_0$.) Since this holds for any $s \geq 0$, we get
-$$\Delta X_s(\omega) \in \mathbb{N}_0 \qquad \text{for all $\omega \in \Omega \backslash N$, $s \geq 0$},$$
-i.e. $$\mathbb{P}(\forall s \geq 0: \Delta X_s \in \mathbb{N}_0)=1.$$
-
-Step 4: $(X_t)_{t \geq 0}$ has almost surely jumps of height $1$. 
-By step 3, it suffices to show that $$\mathbb{P}(\exists t \geq 0: \Delta X_t \geq 2)=0.$$ -Since the countable union of null sets is a null set, it suffices to show -$$p(T) := \mathbb{P}(\exists t \in [0,T]: \Delta X_t \geq 2)=0$$ -for all $T>0$. To this end, we first note that -\begin{align*} \Omega_0 \cap \{\exists t \in [0,T]: \Delta X_t \geq 2\} &\subseteq \Omega_0 \cap \bigcup_{j=1}^{kT} \{X_{\frac{j}{k}}-X_{\frac{j-1}{k}} \geq 2\} \\ &\subseteq \bigcup_{j=1}^{kT} \{X_{\frac{j}{k}}-X_{\frac{j-1}{k}} \geq 2\} \end{align*} -for all $k \in \mathbb{N}$ where $$\Omega_0 := \{\omega; s \mapsto X_s \, \, \text{is non-decreasing}\}.$$ Using that $\mathbb{P}(\Omega_0)=1$ (by Step 1) and the fact that the increments $X_{\frac{j}{k}}-X_{\frac{j-1}{k}}$ are independent Poisson distributed random variables with parameter $\lambda/k$, we get -$$\begin{align*} p(T) &=\mathbb{P}(\Omega_0 \cap \{\exists t \in [0,T]: \Delta X_t \geq 2\}) \\ &\leq \sum_{j=1}^{kT} \mathbb{P}(X_{\frac{j}{k}}-X_{\frac{j-1}{k}} \geq 2) \\ &= kT \mathbb{P}(X_{\frac{1}{k}} \geq 2) = kT \left(1-e^{-\lambda/k} \left[1+\frac{\lambda}{k} \right]\right) \\ &= \lambda T \frac{1-e^{-\lambda/k} \left(1+\frac{\lambda}{k} \right)}{\frac{\lambda}{k}}. \end{align*}$$ -Letting $k \to \infty$, we find -$$p(T) \leq \lambda T \frac{d}{dx} (-e^{-x}(1+x)) \bigg|_{x=0} = 0.$$<|endoftext|> -TITLE: Determinant of a block matrix including non-square matrices -QUESTION [7 upvotes]: I am trying to find a nice way of computing the determinant of the matrix -\begin{equation} -M= -\begin{bmatrix} -A & B \\ C & D -\end{bmatrix} \in \mathbb{R}^{T\times T} -\end{equation} -where $A \in \mathbb{R}^{M\times N}$, $B \in \mathbb{R}^{M\times (T-N)}$, $C \in \mathbb{R}^{(T-M)\times N}$ and $D \in \mathbb{R}^{(T-M)\times (T-N)}$. Furthermore, $(A)_{i,j} = f_i(x_j)$, $(C)_{i,j} = g_i(x_j)$ where $f$ and $g$ are differentiable functions. 
-I know there are nice ways to compute it when either $A$ or $D$ is invertible, but is there a way to do it in the more general case above? -When $A$ is invertible,
-$$|M|=|A||D-CA^{-1}B|$$
- A similar formula holds when $D$ is invertible. The question is specifically if such formulas can be extended to give $|M|$ in the case where neither $A$ nor $D$ is invertible (indeed, both could be non-square).
-
-REPLY [2 votes]: Consider the product of the original matrix with its own transpose:
-$$
-\begin{aligned}
-MM^{\text{T}}=&
-\begin{bmatrix}
-A & B \\
-C & D
-\end{bmatrix}
-\begin{bmatrix}
-A^{\text{T}} & C^{\text{T}} \\
-B^{\text{T}} & D^{\text{T}}
-\end{bmatrix}
-\\=&
-\begin{bmatrix}
-AA^{\text{T}}+BB^{\text{T}} & AC^{\text{T}}+BD^{\text{T}} \\
-CA^{\text{T}}+DB^{\text{T}} & CC^{\text{T}}+DD^{\text{T}}
-\end{bmatrix}
-\end{aligned}
-$$
-Then each diagonal block is square. Therefore, provided $AA^{\text{T}}+BB^{\text{T}}$ is invertible, we can apply the determinant formula for a block matrix:
-$$
-\begin{aligned}
-\det(MM^{\text{T}})=\det(M)^2 =
-\det
-\begin{bmatrix}
-AA^{\text{T}}+BB^{\text{T}} & AC^{\text{T}}+BD^{\text{T}} \\
-CA^{\text{T}}+DB^{\text{T}} & CC^{\text{T}}+DD^{\text{T}}
-\end{bmatrix}
-\geq 0
-\end{aligned}
-$$
-Hence we obtain $\det(M)$ up to sign:
-$$
-\begin{aligned}
-&\det(M)= \\ \pm& \sqrt{
-\det
-\Big(
-AA^{\text{T}}+BB^{\text{T}}
-\Big)
-\det
-\Big(
-(CC^{\text{T}}+DD^{\text{T}})-
-(CA^{\text{T}}+DB^{\text{T}})
-(AA^{\text{T}}+BB^{\text{T}})^{-1}
-(AC^{\text{T}}+BD^{\text{T}})
-\Big)
-}
-\end{aligned}
-$$<|endoftext|>
-TITLE: Why isn't every $\mathcal O_X$-module quasi-coherent?
-QUESTION [6 upvotes]: This might be a stupid question, but I don't understand an easy fact. Let $(X,\mathcal O_X)$ be a ringed space.
-We know that every module $M$ over a ring $R$ has a free presentation, so why isn't every $\mathcal O_X$-module quasi-coherent? 
-Why doesn't the free presentation: - $\mathcal O_X^{(J)}|_U\rightarrow\mathcal O_X^{(I)}|_U\rightarrow\mathcal F|_U\rightarrow 0$ - exist in general? -Many thanks in advance.
-
-REPLY [6 votes]: 1) You write "We know that every module $M$ over a ring $R$ has a free presentation, so why isn't every $\mathcal O_X$-module quasi-coherent?" -But this is begging the question: the correspondence on affine schemes between sheaves and their modules of sections is only valid for quasi-coherent sheaves! -2) Given an arbitrary $\mathcal O_X$-module and a point $x\in X$, there need not even exist a neighbourhood $U$ of $x$ and a surjective morphism of sheaves $\mathcal O_X^{(I)}|_U\rightarrow\mathcal F|_U\rightarrow 0$. -Indeed, from the Stacks Project (section 17.8) we can extract the following -Example: -Let $X=\mathbb R$ with the usual topology, endowed with the constant sheaf $\mathcal O_X=\underline{\mathbb Z}$ to make it a ringed space. -Let $U =\mathbb R^*_+\subset X$ be the open subspace of positive numbers and let $\mathcal Z=\mathcal O_X|_U$ be the constant sheaf associated to $\mathbb Z$ on $U$. -Now if $i:U\hookrightarrow \mathbb R$ is the inclusion, consider the sheaf $\mathcal F:=i_!\mathcal Z$ on $X$. -For any connected neighbourhood $V$ of $0$ in $X$ we have $\Gamma(V,\mathcal F)=0$, so that there can be no surjection $\mathcal O _X^{(I)}|_V\to \mathcal F|_V\to 0$. -[Recall that morphisms $\mathcal O _X^{(I)}|_V\to \mathcal F|_V$ correspond to families $(s_i)$ of sections $s_i\in \Gamma(V,\mathcal F)$.]<|endoftext|>
-TITLE: Fixed points in the enumeration of inaccessible cardinals
-QUESTION [8 upvotes]: Let inaccessible cardinal mean uncountable regular strong limit cardinal. Consider $\mathsf{ZFC}$ with an additional axiom: For every set $x$ there is an inaccessible cardinal $\kappa$ such that $\kappa\notin x$ (in other words, inaccessible cardinals form a proper class). Let $\lambda_\alpha$ be the $\alpha^{\text{th}}$ inaccessible cardinal. 
Note that $\alpha\mapsto\lambda_\alpha$ is not a normal function, because $\lambda_\omega\ne\bigcup\limits_{\alpha<\omega}\lambda_\alpha$ (the rhs is singular). Is it possible to prove the function $\alpha\mapsto\lambda_\alpha$ has a fixed point? a proper class of fixed points?
-
-REPLY [10 votes]: No, of course not. If $\lambda$ is the least fixed point of such a function, then $V_\lambda$ must satisfy that there is a proper class of inaccessible cardinals without a fixed point of the enumeration. -To see this, simply note that if $\lambda$ is a fixed point it has to have $\lambda$ inaccessible cardinals below it. On the other hand, if $\lambda$ is an inaccessible which is a limit of inaccessible cardinals, then it has to be a fixed point. So this gives you that the fixed points are the $1$-inaccessible cardinals (or $2$-, depending on whether or not $0$-inaccessible means an inaccessible). -If there is a proper class of fixed points, it means that in some sense the ordinals are $2$-inaccessible (or $3$-, depending on who taught you how to count). And so on. So from the assumption mentioned by GME that "$\rm Ord$ is Mahlo" you get that there is a proper class of fixed points of every possible order.<|endoftext|>
-TITLE: The function $f (x) = f \left (\frac x2 \right ) + f \left (\frac x2 + \frac 12\right)$
-QUESTION [15 upvotes]: The function $f: [0,1] \to \mathbb R $ satisfies the equation
-$$f (x) = f \left (\frac x2 \right ) + f \left (\frac x2 + \frac 12\right)$$ for every $x$ in $[0,1]$.
-Can we assert that $f (x) = c (1-2x)$ for some real $c$ if:
-a) $f$ is twice continuously differentiable on $[0,1]$;
-b) $f$ is continuously differentiable on $[0,1]$;
-c) $f$ is continuous on $[0,1]$?
-I do not know the answer for any of the parts. Please help.
-
-REPLY [8 votes]: Here is a way to find Julian's example. This functional equation is very well behaved under the Fourier transform. Let $f$ be an integrable solution of the equation. 
For any integer $n$, let:
-$$\hat{f} (n) := \int_0^1 f(x) e^{-2\pi inx} \ dx.$$
-Then:
-$$\hat{f} (n) = \int_0^1 f \left( \frac{x}{2}\right) e^{-2\pi inx} \ dx + \int_0^1 f \left( \frac{x+1}{2}\right) e^{-2\pi inx} \ dx \\
-= 2\int_0^{\frac{1}{2}} f (u) e^{-2\pi in (2u)} \ du + 2 \int_{\frac{1}{2}}^1 f (u) e^{-2\pi in (2u-1)} \ du \\
-= 2 \hat{f} (2n).$$
-Necessarily, $\hat{f} (0) = 0$. To get continuous solutions, take any summable sequence $(a_{2k+1})_{k \in \mathbb{Z}}$ defined on the odd integers. Then extend this sequence to the integers by $b_0 = 0$ and:
-$$b_{2^n (2k+1)} := 2^{-n} a_{2k+1},$$
-and put $f(x) := \sum_{n \in \mathbb{Z}} b_n e^{2 \pi i nx}$. Then $f$ is continuous (its Fourier coefficients are summable), and satisfies the functional equation. If you want real-valued solutions, take the imaginary or real part (or choose $a_{-k} = \overline{a_k}$). -More generally, the Fourier transform is well-defined when the coefficients are square-summable. So you can choose a sequence $(a_{2k+1})_{k \in \mathbb{Z}}$ whose square is summable, extend it in the same way to a sequence $(b_n)_{n \in \mathbb{Z}}$, which will still be square-summable, and take the inverse Fourier transform. For instance, with $a_{2k+1} = -i(2k+1)^{-1}$, you get $b_n = -i n^{-1}$ for $n \neq 0$ (and $b_0 = 0$), so a solution to the functional equation is:
-$$f(x) = 2\sum_{n=1}^{+ \infty} \frac{\sin (2 \pi n x)}{n},$$
-which, up to a multiplicative constant, is $1-2x$. If you try to make this function periodic, it will have a discontinuity at the integers (which is hidden here by the fact that we worked on $[0,1]$), which explains that the coefficients of the Fourier transform are not summable. 
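As a quick numerical sanity check (my addition, not part of the argument): the partial sums of $2\sum_{n\ge1}\sin(2\pi nx)/n$ should approximate $\pi(1-2x)$ on $(0,1)$ and nearly satisfy the functional equation; the truncation $N=4000$ and the sample points are arbitrary choices.

```python
import math

def f_partial(x, N=4000):
    """Truncated Fourier series 2 * sum_{n=1}^N sin(2*pi*n*x) / n."""
    return 2.0 * sum(math.sin(2 * math.pi * n * x) / n for n in range(1, N + 1))

for x in (0.1, 0.3, 0.7):
    # the series should be close to pi*(1 - 2x) on (0, 1)
    assert abs(f_partial(x) - math.pi * (1 - 2 * x)) < 0.02
    # functional-equation residual f(x/2) + f((x+1)/2) - f(x); for the
    # truncated sum this is exactly f_{N/2}(x) - f_N(x), a small tail
    residual = f_partial(x / 2) + f_partial((x + 1) / 2) - f_partial(x)
    assert abs(residual) < 0.05
```

The exact cancellation $f_N(x/2)+f_N((x+1)/2)=f_{N/2}(x)$ used in the comment follows from $\sin(\pi n x)+(-1)^n\sin(\pi n x)$ vanishing for odd $n$.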
-It can be shown that any continuous solution $f$ to the functional equation such that $f(0) = f(1)$ must at best have a modulus of continuity $\omega_f (h) \simeq h |\ln (h)|$ (and I think that this modulus of continuity is optimal almost everywhere), so such solutions cannot be $\mathcal{C}^1$, and they indeed look pathological. This is because their Fourier coefficients do not decay very quickly. -With this method, you can solve other similar functional equations, say, for instance,
-$$f(x) = \frac{2}{3} \left[ f \left( \frac{x}{2}\right) + f \left( \frac{x+1}{2}\right) \right],$$
-or:
-$$f(x) = f \left( \frac{x}{3}\right) + f \left( \frac{x+1}{3}\right) + f \left( \frac{x+2}{3}\right),$$
-as well as some similar functional equations with more variables.<|endoftext|>
-TITLE: open map from a topological space whose connected components aren't open to a connected space
-QUESTION [6 upvotes]: Let $X$ be a connected topological space, and $Y$ a space which is not the disjoint union (as topological spaces) of its connected components (i.e., the connected components of $Y$ are not all open). Can there exist an open continuous surjective map $Y\rightarrow X$ with finite fibers?
-Motivation: I want to argue that any surjective etale map of schemes $Y\rightarrow X$ where $X$ is a connected scheme must have $Y$ be the disjoint union of its connected components.
-EDIT: By disjoint union (as topological spaces), I mean the coproduct in the category of topological spaces, sometimes denoted the "topological sum".
-
-REPLY [5 votes]: I'm not sure if this is what you are looking for (I don't know anything about schemes), but there is at least one $Y$ for which no such map exists for any connected $X$.
-Let $Y = \{1/2,1/3,...,0\}$ be equipped with the subspace topology from $\mathbb{R}$. The connected components of this space are the singletons, and all of them are open except for $\{0\}$.
-Assume we have an open, continuous, surjective map $f:Y \to X$ with finite fibers. 
We will show that $X$ is necessarily disconnected. -Since $f$ is an open map with finite fibers, the set $f(Y - \{0\})$ is infinite and all of its points are open. If $f(Y - \{0\}) = X$ then $X$ is disconnected and we are done. -Assume $f(Y - \{0\}) \neq X$. Since $f$ is finite-to-one there must be a largest $k$ such that $f(1/2) = f(1/k)$. Then $f(\{1/(k+1),1/(k+2),...,0\})$ is an open set which contains $f(0)$ but does not contain $f(1/2)$. In fact the complement of $f(\{1/(k+1),1/(k+2),...,0\})$ is a finite number of points, all of which are open in $X$. Therefore $X$ is disconnected. -On the flip side, there is a space $Y$ and a connected space $X$ for which such a map does exist. -Let $Y = \bigcup_{n \geq 1} \big( (0,1/n) \times \{1-1/n\} \big) \cup \{(0,1)\} \subset \mathbb{R}^2$. Let $X$ be the half-open interval $[0,1) \times \{0\}$. Then the projection map from $Y$ to $X$ is an open, continuous, surjective map with finite fibers.<|endoftext|>
-TITLE: Derivative of $\ln (z), z\in\mathbb{C}$
-QUESTION [5 upvotes]: Let $f(z) = \ln z := \ln |z| + \arg (z)i$. Then the derivative is (if it exists) by definition:
-$$\lim_{h\to 0}\frac{\ln (z+h)-\ln (z)}{h}=\lim_{h\to 0}\frac{\ln |z+h| +\arg(z+h)i-\ln |z| -\arg(z)i }{h}$$
-
-REPLY [6 votes]: Let $ U =\mathbb C \setminus (-\infty,0].$ Then $f(z)=\ln |z| + i\arg z$ is continuous on $U$ and we have $e^{f(z)} = z$ there. This shows $f(z)$ is injective on $U.$ Fix $z\in U.$ Then for small nonzero $h$ we have
-$$1=\frac{e^{f(z+h)}-e^{f(z)}}{h} = \frac{e^{f(z+h)}-e^{f(z)}}{f(z+h) - f(z)}\frac{f(z+h) - f(z)}{h}.$$
-The injectivity of $f$ shows that $f(z+h) - f(z)\ne 0,$ so we're OK dividing by it above. 
As $h\to 0, f(z+h) \to f(z)$ by the continuity of $f.$ So the first difference quotient on the right tends to the derivative of $e^w$ at $w=f(z),$ which is $e^{f(z)} = z \ne 0.$ Knowing all of this, we can now write
-$$\tag 1 \frac{f(z+h) - f(z)}{e^{f(z+h)}-e^{f(z)}} = \frac{f(z+h) - f(z)}{h}.$$
-Since the left side of $(1) \to 1/z,$ we get $f'(z) = 1/z$ (as expected).<|endoftext|>
-TITLE: Does the logistic function have a relation with $\arctan(x)$?
-QUESTION [6 upvotes]: The logistic function is: $$f(x)=\frac{L}{1+e^{-k(x-x_0)}}+B.$$
-Its plot looks similar to the plot of $\arctan(x)$. Therefore, I was wondering whether there is a relationship between these two functions.
-Can one transform the logistic function in such a way that it equals $\arctan(x)$? For example, by giving the constants certain values?
-
-REPLY [9 votes]: One of the differences between a logistic function and the arctan is that the logistic function approaches its asymptotes exponentially, i.e. (if $k > 0$)
-$$\eqalign{f(x) \sim L + B - L \exp(k x_0) \exp(-k x) & \ \text{as $x \to +\infty$}\cr
-f(x) \sim B + L \exp(-k x_0) \exp(k x) & \ \text{as $x \to -\infty$}}$$
-while the arctan approaches its asymptotes much more slowly, like $x^{-1}$:
-$$ \eqalign{\arctan(x) \sim \frac{\pi}{2} - \frac{1}{x} & \ \text{as $x \to +\infty$}\cr
- \arctan(x) \sim -\frac{\pi}{2} - \frac{1}{x} & \ \text{as $x \to -\infty$}\cr}$$<|endoftext|>
-TITLE: Derivative at Endpoint
-QUESTION [7 upvotes]: In Rudin's "Principles of Mathematical Analysis" he defines the limit of a function as follows.
-
-Let $X$ and $Y$ be metric spaces; suppose $E \subset X$, $f$ maps $E$ into $Y$, and $p$ is a limit point of $E$. Then
 $$
-\lim_{x \to p} f(x) = q
-$$
 if there is a point $q \in Y$ with the following property: For every $\epsilon > 0$ there exists a $\delta > 0$ such that
 $$
-d_Y(f(x),q) < \epsilon
-$$
 for all points $x \in E$ for which
 $$
-0 < d_X(x,p) < \delta. 
-$$
-
-The functions $d_X$ and $d_Y$ are the metrics on $X$ and $Y$, respectively. He then defines the derivative of a real function as follows.
-
-Let $f$ be defined on $[a,b]$. For any $x \in [a,b]$ form the quotient
 $$
-\phi(t) = \frac{f(t) - f(x)}{t - x} \qquad a < t < b,\, t \neq x,
-$$
 and define
 $$
-f^\prime(x) = \lim_{t \to x}\, \phi(t), \qquad (1)
-$$
 provided this limit exists in accordance with [the above definition]. We thus associate with the function $f$ a function $f^\prime$ whose domain is the set of points $x$ at which the limit (1) exists.
-
-I can't see what's wrong with taking derivatives at endpoints. As a concrete example, let $f: [0,1] \to \mathbb{R}$ be defined by $f(x) = x^2$. I claim that $f^\prime(0) = 0$. Indeed, the difference quotient is
-$$
-\phi(t) = \frac{t^2 - 0^2}{t - 0} = t
-$$
-and so we need to show
-$$
-\lim_{t \to 0}\, t = 0.
-$$
-Let $\epsilon > 0$ and choose $\delta = \epsilon$. Then for all $t \in [0,1]$ such that $0 < |t| < \delta$ we have $|t| < \epsilon$. Hence $f^\prime(0) = 0$, even though it is at an endpoint of the domain of $f$.
-In particular, there's no need to introduce one-sided derivatives. Am I missing something?
-
-REPLY [2 votes]: I can only say that you are right in everything you write. This is an issue of definition. Rudin simply allows endpoints of the interval of definition of $f$ to be in the domain of the derivative. Others don't, or only consider open intervals.<|endoftext|>
-TITLE: Solve system of simultaneous equations in $3$ variables: $x+y+xy=19$, $y+z+yz=11$, $z+x+zx=14$
-QUESTION [7 upvotes]: Solve the following equation system:
-$$x+y+xy=19$$
-$$y+z+yz=11$$
-$$z+x+zx=14$$
-I've tried substituting, adding, subtracting, multiplying... Nothing works. Could anyone drop me a few hints without actually solving it? Thanks!
-
-REPLY [13 votes]: Add $1$ to both sides of all the equations. 
To get
-\begin{align*}
-(x+1)(y+1) & = 20\\
-(y+1)(z+1) & = 12\\
-(z+1)(x+1) & = 15\\
-\end{align*}
-Now let $u=x+1,v=y+1,w=z+1$, so that
-\begin{align*}
-uv&=20\\
-vw&=12\\
-wu&=15
-\end{align*}
-Multiplying these three equations and taking square roots, you get
-$$uvw=\pm 60.$$
-Now use the above equations to compute $u=\pm 5$ and so on.<|endoftext|>
-TITLE: Is polar coordinates enough to prove that a limit exists
-QUESTION [9 upvotes]: Somewhat of a basic question, but I failed to find an answer or come up with a formal one myself.
-Suppose I want to find the limit $\lim_{{(x,y)} \to {(0,0)}}f(x,y)$ using polar coordinates $x:=r\cos \theta$, $y:= r\sin\theta$. Suppose I found that $\lim_{r \to 0} f(r,\theta)$ exists and is equal to $\alpha$ regardless of $\theta$. Did I really cover every possible path? Can we say for sure that the limit is $\alpha$? Maybe some other limit exists along another path that we didn't cover.
-For example, take $\lim_{(x,y) \to (0,0)}\frac{x^2y}{x^2+y^2} = \lim_{r\to 0}\frac{r^3\cos^2\theta \sin \theta}{r^2} = \lim_{r \to 0}r\cos^2 \theta \sin \theta = 0$.
-I agree that IF the limit exists, it has to be zero. But maybe there is some path we didn't cover along which the limit is something else?
-
-REPLY [6 votes]: The fine point is in the phrase "regardless of $\theta$". Let $f$ be the characteristic function of the set
-$$A:=\left\{{1\over k}\biggl(\cos{1\over k},\sin{1\over k}\biggr)\>\biggm|\>k\in{\mathbb N}_{\geq1}\right\}\ ,$$
-and write $\tilde f(r,\theta):=f(r\cos\theta,r\sin\theta)$. Then $\lim_{r\to0}\tilde f(r,\theta)=0$, regardless of $\theta$, but $\lim_{(x,y)\to(0,0)} f(x,y)$ does not exist. 
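A small numerical illustration of this counterexample (my addition; the indicator is truncated to the first $10^4$ points of $A$, and the sampled ray angles and radii are arbitrary): along any fixed direction the sampled values are $0$, while $f$ equals $1$ on the points of $A$, which approach the origin.

```python
import math

def a_point(k):
    """The k-th point of the set A: (1/k) * (cos(1/k), sin(1/k))."""
    return (math.cos(1.0 / k) / k, math.sin(1.0 / k) / k)

A = [a_point(k) for k in range(1, 10**4 + 1)]  # finite truncation of A

def f(x, y):
    """Indicator function of the (truncated) set A."""
    return 1 if any(abs(x - px) < 1e-12 and abs(y - py) < 1e-12
                    for px, py in A) else 0

# Along fixed rays, f vanishes at every sampled radius: the radial limits are 0.
for theta in (0.123, 0.777, 2.0):
    assert all(f(r * math.cos(theta), r * math.sin(theta)) == 0
               for r in (10.0 ** -j for j in range(1, 8)))

# Yet f = 1 on the points of A, and those points converge to the origin.
assert all(f(*a_point(k)) == 1 for k in (1, 10, 100, 1000))
assert math.hypot(*a_point(1000)) < 2e-3
```

Of course a finite sample proves nothing; it merely shows why checking rays one $\theta$ at a time misses the sequence in $A$.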
-If, however, you can prove that for some $(x,y)\mapsto g(x,y)$ defined in a punctured neighborhood of $(0,0)$ you have
-$$|g(r\cos\theta,r\sin\theta)|\leq q(r)\quad \wedge\quad \lim_{r\to0}q(r)=0$$
-then you may conclude that in fact $\lim_{(x,y)\to(0,0)} g(x,y)=0$.<|endoftext|>
-TITLE: Find the covariances of a multinomial distribution
-QUESTION [14 upvotes]: If $(X_1,\cdots, X_n)$ is a vector with multinomial distribution, prove that $\text{Cov}(X_i,X_j)=-rp_ip_j$, $i\neq j$, where $r$ is the number of trials of the experiment and $p_i$ is the probability of success for the variable $X_i$.
-The pmf is $$f(x_1,...,x_n)={r!\over{x_1!x_2!\cdots x_n!}}p_1^{x_1}\cdots p_n^{x_n} $$ if $ x_1+x_2+\cdots +x_n=r$
-I'm trying to use the property $\text{Cov}(X_i,X_j)=E[X_iX_j]-E[X_i]E[X_j]$; I find that $E[X_i]=rp_i$, but I don't know an efficient way to calculate $E[X_iX_j].$
-
-REPLY [21 votes]: We can use indicator random variables to help simplify the covariance expression. We can interpret the problem as $r$ independent rolls of an $n$-sided die. Let $X_i$ be the number of rolls that result in side $i$ facing up, and let $I_{k}^{(i)}$ be an indicator equal to $1$ when roll $k$ is equal to $i$ and $0$ otherwise. 
Then, we can express $X_i$ and $X_j$ as follows:
-$$\begin{equation}
-X_i = \sum_{k=1}^{r} I_{k}^{(i)}~~~\mathrm{and}~~~X_j = \sum_{k=1}^{r} I_{k}^{(j)}
-\end{equation}$$
-Let's re-write the covariance using indicators:
-$$\begin{equation}
-\mathrm{Cov}(X_i,X_j) = E[X_i X_j] - E[X_i]E[X_j]
-\end{equation}$$
-Let's compute the first term:
-$$\begin{eqnarray}
-E[X_i X_j] &=& E\bigg[(\sum_{k=1}^{r}I_{k}^{(i)}) (\sum_{l=1}^{r}I_{l}^{(j)})\bigg] = \sum_{k=l}E\big[I_{k}^{(i)}I_{l}^{(j)}\big] + \sum_{k\neq l}E\big[I_{k}^{(i)}I_{l}^{(j)}\big] = \\
-&=& 0 + \sum_{k\neq l}E\big[I_{k}^{(i)}\big] E\big[I_{l}^{(j)}\big] = \sum_{k\neq l} p_i p_j = (r^2 - r)p_i p_j
-\end{eqnarray}$$
-where we expanded the product of sums, used linearity of expectation, and used the fact that when $k=l$ we can't simultaneously roll $i$ and $j$ on the same trial (making the product of indicators zero). Finally, we applied independence of the rolls, which enabled us to write the expectation as a product of probabilities. Let's compute the remaining term:
-$$\begin{equation}
-E[X_i] = E[\sum_{k=1}^{r}I_{k}^{(i)}] = \sum_{k=1}^{r}E[I_{k}^{(i)}] = rp_i
-\end{equation}$$
-Therefore, the covariance equals:
-$$\begin{equation}
-\mathrm{Cov}(X_i,X_j) = E[X_i X_j] - E[X_i]E[X_j] = (r^2-r)p_ip_j - r^2p_ip_j = -r p_i p_j
-\end{equation}$$
-Notice that $\mathrm{Cov}(X_i, X_j) = -r p_i p_j < 0$ is negative. This makes sense intuitively: for a fixed number of rolls $r$, rolling many outcomes $i$ reduces the number of possible outcomes $j$, and therefore $X_i$ and $X_j$ are negatively correlated!<|endoftext|>
-TITLE: Tetrahedron packing in Cube
-QUESTION [8 upvotes]: I'm thinking about the following solid geometry problem.
-Q: Suppose you have a box of "cube" shape with edge length 1.
-Then, how many regular tetrahedra (with edge length 1) can fit in the box?
-So, this is a kind of packing problem inside a cube.
-I guess the answer is 3, but I don't know how to prove 3 is the maximum number. 
-Is there any rigorous way to show this?
-Thanks for any help in advance.
-
-REPLY [7 votes]: "3" is possible, as shown in the following diagram.
-
-The vertices of the red tetrahedron are
-$$\left(\frac12,-\frac12,\frac12\right),\;\;
-\left(\frac12,-\frac12,-\frac12\right),\;\;
-\left(\frac{1}{2\sqrt{2}},\frac{1}{2\sqrt{2}},0\right)\;\;\text{ and }\;\;
-\left(-\frac{1}{2\sqrt{2}},-\frac{1}{2\sqrt{2}},0\right)$$
-The green and blue tetrahedra can be obtained from the red one by rotating it
-about the $(1,1,1)$ diagonal by $120^\circ$ and $240^\circ$ respectively.
-I believe "3" is the maximum number. Following is a heuristic argument:
-Let's say we have $n$ tetrahedra inside a cube. There are $\frac{n(n-1)}{2}$ ways of picking a pair of tetrahedra $A, B$ among them.
-Pick a point $a$ from $A$ and a point $b$ from $B$ such that the distance $|a-b|$ is maximized. No matter how I place $A$ and $B$, I always get $|a - b| \ge 2\sqrt{\frac23} \approx 1.633$.
-Since this value is very close to $\sqrt{3} \approx 1.732$, the diameter of the cube, the points $a, b$ will be very close to the two end points of a diagonal. This means each pair of tetrahedra will occupy at least one diagonal of the cube.
-Since a cube has $4$ diagonals and it seems impossible for different pairs of tetrahedra to share a diagonal, we find:
-$$\frac{n(n-1)}{2} \le 4 \implies n \le 3$$<|endoftext|>
-TITLE: The kernel of a representation is a normal subgroup
-QUESTION [5 upvotes]: Let $X$ be a matrix representation.
-Let the kernel of $X$ be defined as $N = \{g \in G: X(g) = I\}$. A representation is faithful if it is one-to-one.
-Show that $N$ is a normal subgroup of $G$ and find a condition on $N$ equivalent to the representation being faithful.
-
-Proof:
-Let $X : G \to GL(V)$ be a group representation. Let $g_1 \in N$ and $g \in G$.
-Then $$X(g^{-1}g_1g) = X(g^{-1})X(g_1)X(g) = X(g)^{-1}(I)X(g) = X(g)^{-1}X(g) = I.$$
-Thus $g^{-1}g_1g \in N$, so $N$ is a normal subgroup of $G$. 
-Further, $X$ is faithful if and only if $N$ is the trivial subgroup of $G$.
-
-Can someone please verify, or give feedback on, this proof?
-
-REPLY [5 votes]: Yes, everything is in working order here. It's the same argument used to prove that the kernel of any homomorphism is a normal subgroup, so I'm a little surprised you didn't just say that the representation $X$ is, among other things, a homomorphism and thus its kernel is a normal subgroup of $G$.
-You also have a perfectly good characterization of a faithful representation, namely that it has a trivial kernel. The non-representation-specific version is that any homomorphism is injective if and only if its kernel is trivial.<|endoftext|>
-TITLE: Calculating $\int_0^{\pi/2} \sqrt{\cot x} + \sqrt{\cos x} dx$
-QUESTION [5 upvotes]: How should I solve the following integral:
-$$\int_0^{\pi/2} (\sqrt{\cot x} + \sqrt{\cos x} )\,\mathrm dx$$
-
-REPLY [3 votes]: Substituting $u=\sin^2(x)$ and $1-u=\cos^2(x)$, we get the same answer as Alexis, but go a bit further. 
-$$
-\begin{align}
-\int_0^{\pi/2}\left(\sqrt{\cot(x)}+\sqrt{\cos(x)}\right)\,\mathrm{d}x
-&=\int_0^{\pi/2}\sqrt{\cot(x)}\,\mathrm{d}x+\int_0^{\pi/2}\sqrt{\cos(x)}\,\mathrm{d}x\\
-&=\int_0^{\pi/2}\sqrt{\frac1{\sin(x)\cos(x)}}\,\mathrm{d}\sin(x)+\int_0^{\pi/2}\sqrt{\frac1{\cos(x)}}\,\mathrm{d}\sin(x)\\
-&=\frac12\int_0^1u^{-3/4}(1-u)^{-1/4}\,\mathrm{d}u+\frac12\int_0^1u^{-1/2}(1-u)^{-1/4}\,\mathrm{d}u\\
-&=\frac12\frac{\Gamma(1/4)\Gamma(3/4)}{\Gamma(1)}+\frac12\frac{\Gamma(1/2)\Gamma(3/4)}{\frac14\Gamma(1/4)}\\
-&=\frac\pi{\sqrt2}+\frac{(2\pi)^{3/2}}{\Gamma(1/4)^2}\\[6pt]
-&\doteq3.4195817038147753309
-\end{align}
-$$<|endoftext|>
-TITLE: Defining weak* convergence of measures using compactly supported continuous functions
-QUESTION [6 upvotes]: I'm reading some lecture notes and the author defines the following:
-Let $\mu_{n},\mu$ be probability measures on $\left(\mathbb{R}^{k},\mathcal{B}\left(\mathbb{R}^{k}\right)\right)$; we say $\mu_{n}$ converge weakly to $\mu$ if $\int fd\mu_{n}\longrightarrow\int fd\mu$ for all continuous compactly supported functions $f:\mathbb{R}^{k}\to\mathbb{R}$.
-An almost identical definition appears in many textbooks, with the change of requiring the same thing for any continuous bounded function. I couldn't find any reference which showed that it actually does suffice to look only at compactly supported functions. Is this actually true?
-
-REPLY [7 votes]: So as I said in the comment, a useful notion here is what's called "tendue" in French. I don't know the equivalent English mathematical word for that, but the English translation of the common French word "tendue" could be taut, tense or tight. I don't want to use the wrong English word, so let's just use tendue for this post.
-edit: as said in the comments, the English word for tendue is "tight".
-A sequence $(\mu_n)$ is said to be tendue if for every $\varepsilon >0$ there is a compact $K$ such that $\mu_n(K^C)<\varepsilon$ for all $n$. 
-If your sequence $(\mu_n)$ is tendue then the two definitions are equivalent; this is because in that case $C_c(\mathbb R)$ is dense in $C_b(\mathbb R)$ for the $L^1(\mathbb R, \mu_n)$-norm, uniformly in $n$. Phrased like this it might seem a little obscure, but try to show it as an exercise; it's not hard. So what we want to show now is that if (using your definition) $(\mu_n)$ converges weakly to the probability measure $\mu$, then $(\mu_n)$ is tendue. This will prove that the two properties are equivalent. -Now suppose that $(\mu_n)$ is not tendue, so there exists an $\eta>0$ such that for every compact $K$ one has $\limsup_n \mu_n(K^C)>\eta$. Since $\mu$ is a probability measure, for every $\varepsilon>0$ there exists a compact set $K$ such that $\mu(K)>1-\varepsilon$ and $\mu(K^C)<\varepsilon$. Take $\varepsilon=\eta/2$; there is a continuous compactly supported function $f$ such that $0\leq f \leq 1$ and $\int fd\mu>1-\varepsilon=1-\eta/2$. But we also have $\liminf \int f d \mu_n\le 1-\eta$ (since $f\le 1$ is supported in some compact set $K'$ and $\limsup_n \mu_n(K'^C)>\eta$), which contradicts the fact that $\int fd\mu_{n}\longrightarrow\int fd\mu$. This is absurd, so $(\mu_n)$ must be tendue. -But the situation is not as nice as you could think: if you only assume that $(\mu_n)$ converges to some measure (not necessarily a probability measure) then the two definitions are not equivalent. The (now deleted) example of nicomezi was a good illustration: take $\mu_n=\delta_n$; according to your definition $\delta_n$ converges weakly to $0$, but with the definition using $C_b$ functions $\delta_n$ doesn't converge. However, if you suppose that $(\mu_n)$ is tendue and converges weakly to some measure $\mu$ (not necessarily a probability measure), then $\mu$ is a probability measure and the two definitions are equivalent. 
So the good notion here is the notion of tendue sequences.<|endoftext|>
-TITLE: How to evaluate this integral $\int_{0}^{\infty }\frac{\ln\left ( 1+x^{3} \right )}{1+x^{2}}\mathrm{d}x$
-QUESTION [9 upvotes]: How to evaluate this integral
-$$\mathcal{I}=\int_{0}^{\infty }\frac{\ln\left ( 1+x^{3} \right )}{1+x^{2}}\mathrm{d}x$$
-Mathematica gave me the answer below
-$$\mathcal{I}=\frac{\pi }{4}\ln 2+\frac{2}{3}\pi \ln\left ( 2+\sqrt{3} \right )-\frac{\mathbf{G}}{3}$$
-where $\mathbf{G}$ is Catalan's constant.
-
-REPLY [5 votes]: Lemma 1: $$\int_{0}^{\infty}\dfrac{\ln{(x^2-x+1)}}{x^2+1}dx=\dfrac{2\pi}{3}\ln{(2+\sqrt{3})}-\dfrac{4}{3}G$$
-Use the well-known identity
-$$\int_{0}^{+\infty}\dfrac{\ln{(x^2+2\sin{a}\cdot x+1)}}{1+x^2}dx=\pi\ln{\left(2\cos{\dfrac{a}{2}}\right)}+a\ln{\left|\tan{\dfrac{a}{2}}\right|}+2\sum_{k=0}^{+\infty}\dfrac{\sin{((2k+1)a)}}{(2k+1)^2}$$
-Its proof is easy: consider the Fourier expansion of $\ln{(x^2+2\sin{a}\cdot x+1)}$ (a Poisson-type Fourier series).
-Then take
-$a=-\dfrac{\pi}{6}$
-so that we have
-$$\pi\ln{\left(2\cos{\dfrac{\pi}{12}}\right)}=\dfrac{\pi}{2}\ln{(2+\sqrt{3})}$$
-$$-\dfrac{\pi}{6}\ln{\tan{\dfrac{\pi}{12}}}=\dfrac{\pi}{6}\ln{(2+\sqrt{3})}$$
-and
-$$2\sum_{k=0}^{3N}\dfrac{\sin{\left(-(2k+1)\pi/6\right)}}{(2k+1)^2}=-\sum_{k=0}^{3N}\dfrac{(-1)^k}{(2k+1)^2}-3\sum_{k=0}^{N-1}\dfrac{(-1)^k}{(6k+3)^2}\to -G-\dfrac{G}{3}=-\dfrac{4}{3}G$$
-so
-$$\int_{0}^{\infty}\dfrac{\ln{(x^2-x+1)}}{x^2+1}dx=\dfrac{2\pi}{3}\ln{(2+\sqrt{3})}-\dfrac{4}{3}G$$
-This proves Lemma 1. 
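As a numerical cross-check of Lemma 1 (my addition, not part of the proof): folding the integral with $x\mapsto 1/x$ gives $\int_0^\infty \frac{\ln(x^2-x+1)}{1+x^2}\,dx = 2\int_0^1 \frac{\ln(x^2-x+1)}{1+x^2}\,dx + 2G$, using $-\int_0^1\frac{\ln x}{1+x^2}\,dx=G$; the remaining integrand is smooth on $[0,1]$ since $x^2-x+1\ge 3/4$ there.

```python
import math

def simpson(g, a, b, n=2000):
    """Composite Simpson's rule on [a, b] with n (even) subintervals."""
    h = (b - a) / n
    s = g(a) + g(b) + sum((4 if i % 2 else 2) * g(a + i * h) for i in range(1, n))
    return s * h / 3

# Catalan's constant via its alternating series sum_k (-1)^k / (2k+1)^2.
G = sum((-1) ** k / (2 * k + 1) ** 2 for k in range(200000))

# I = int_0^infty ln(x^2 - x + 1)/(1 + x^2) dx, folded onto [0, 1].
J = simpson(lambda x: math.log(x * x - x + 1) / (1 + x * x), 0.0, 1.0)
I = 2 * J + 2 * G

lemma1 = 2 * math.pi / 3 * math.log(2 + math.sqrt(3)) - 4 * G / 3
assert abs(I - lemma1) < 1e-6
```

The agreement (to far better than the asserted tolerance) matches the closed form of Lemma 1.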
-Lemma 2: $$\int_{0}^{+\infty}\dfrac{\ln{(1+x)}}{1+x^2}dx=\dfrac{\pi}{4}\ln{2}+G$$
-\begin{align*} \int_{0}^{\infty} \frac{\log (x + 1)}{x^2 + 1} \, dx
-&= \int_{0}^{1} \frac{\log (x + 1)}{x^2 + 1} \, dx + \int_{1}^{\infty} \frac{\log (x + 1)}{x^2 + 1} \, dx \\
-&= \int_{0}^{1} \frac{\log (x + 1)}{x^2 + 1} \, dx + \int_{0}^{1} \frac{\log (x^{-1} + 1)}{x^2 + 1} \, dx \quad (x \mapsto x^{-1}) \\
-&= 2 \int_{0}^{1} \frac{\log (x + 1)}{x^2 + 1} \, dx - \int_{0}^{1} \frac{\log x}{x^2 + 1} \, dx\\
-&=\dfrac{\pi}{4}\ln{2}+G
-\end{align*}
-using the classical values $\int_{0}^{1} \frac{\log (x + 1)}{x^2 + 1} \, dx = \frac{\pi}{8}\ln 2$ and $-\int_{0}^{1} \frac{\log x}{x^2 + 1} \, dx = G$.
-Since $1+x^3=(1+x)(x^2-x+1)$, combining the two lemmas gives
-$$\int_{0}^{+\infty}\dfrac{\ln{(1+x^3)}}{1+x^2}dx=\int_{0}^{+\infty}\dfrac{\ln{(1+x)}}{1+x^2}dx+\int_{0}^{+\infty}\dfrac{\ln{(x^2-x+1)}}{1+x^2}dx=\frac{\pi }{4}\ln 2+\frac{2}{3}\pi \ln\left ( 2+\sqrt{3} \right )-\frac{\mathbf{G}}{3}$$<|endoftext|>
-TITLE: Eigenvalues of the principal submatrix of a Hermitian matrix
-QUESTION [10 upvotes]: This question aims at creating an "abstract duplicate" of various questions that can be reduced to the following:
-
-Let $A$ be an $n\times n$ Hermitian matrix and $B$ be an $r\times r$ principal submatrix of $A$. How are the eigenvalues of $A$ and $B$ related?
-
-Here are some questions on this site that can be viewed as duplicates of this question:
-
-Eigenvalues of $MA$ versus eigenvalues of $A$ for orthogonal projection $M$
-Relationship of eigenvalues of a diagonal matrix D and $\mathbf{VDV}^{T}$, where V is a semi-orthogonal matrix
-
-REPLY [13 votes]: Proposition. Let $\lambda_k(\cdot)$ denote the $k$-th smallest eigenvalue of a Hermitian matrix. Then
 $$
-\lambda_k(A)\le\lambda_k(B)\le\lambda_{k+n-r}(A),\quad 1\le k\le r.
-$$
-
-This is a well-known result in linear algebra. Since the usual proof is just a straightforward application of the celebrated Courant-Fischer minimax principle, we shall not repeat it here. See, e.g. 
theorem 4.3.15 (p.189) of Horn and Johnson, Matrix Analysis, 1/e, Cambridge University Press, 1985.<|endoftext|> -TITLE: Convergence in law implies uniform convergence of cdf's -QUESTION [6 upvotes]: Let $F_n, \ F$ be distribution functions with respect to some variables $X_n,\ X$ (in a not necessarily common probability space). Suppose that $F$ is continuous and $F_n \overset{d}{\rightarrow}F$ (i.e. in law). Prove that $(F_n)$ converges uniformly to $F$, i.e. $$\displaystyle \lim_{n\rightarrow +\infty}\sup_{x\in \mathbb{R}}|F_n(x)-F(x)|=0$$ - -Comments. For a proof by contradiction, I'd suppose that for some $\varepsilon>0$ for all $n$ there is $x_n$ such that $|F_n(x_n)-F(x_n)|\geq \varepsilon.$ But a classic analytic approach through the Bolzano-Weierstrass theorem cannot be applied here since there is no information on the boundedness of $(x_n).$ -Thanks a lot in advance for the help! - -REPLY [7 votes]: Hint: (as stated in Parzen, 1960). Convergence in distribution is defined for points $x$ of continuity of $F$, so here since $F$ is continuous, that is for every $x\in \mathbb R$. Now, to any $ε>0$, choose points $$-\infty=x_0 -TITLE: Showing that the exponential expression $e^x (x-1) + 1$ is positive -QUESTION [11 upvotes]: I'm looking at -$$ f(x) = e^x (x-1) + 1$$ -I'm having the feeling (based on the application where I am using it), that $f(x)$ should be strictly positive for $x > 0$. Indeed, Wolfram Alpha plots it as such, with a global minimum of $f(0)=0$ at $x=0$. -However, I fail to show this. It is trivial for $x \geq 1$, but what for $x < 1$?
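Added: a quick numerical scan in plain Python (my own throwaway check, not a proof) also supports strict positivity for $x>0$:

```python
import math

def f(x):
    return math.exp(x) * (x - 1) + 1

# f(0) = 0 exactly, and a grid scan over (0, 3] finds no nonpositive value;
# this only suggests the claim, it does not prove it.
grid = [i / 1000 for i in range(1, 3001)]
min_on_grid = min(f(x) for x in grid)
```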
- -REPLY [14 votes]: For $x\in(0,1)$, the inequality $e^x (x-1)+1 > 0$ is equivalent to: -$$ e^x < \frac{1}{1-x} \tag{1}$$ -or to: -$$ 1+x+\frac{x^2}{2!}+\frac{x^3}{3!}+\ldots < 1+x+x^2+x^3+\ldots \tag{2} $$ -which holds term by term.<|endoftext|> -TITLE: Universal cover of $\mathbb R^2\setminus\{0\}$ -QUESTION [9 upvotes]: We know that a necessary and sufficient condition for a path-connected, locally path-connected space to have a universal cover is that it is semi-locally simply connected. -Now since $\mathbb R^2\setminus\{0\}$ is such a space, it must have a universal cover. However I can't see what the universal cover of $\mathbb R^2\setminus\{0\}$ actually is. Can someone help me? -Thank you. - -REPLY [7 votes]: Hint: Think of the mapping $z\mapsto e^z$ from $\Bbb{C}$ to $\Bbb{C}\setminus\{0\}$. Its derivative is also $e^z$, which is always non-zero. Therefore the mapping is conformal everywhere, i.e. a local homeomorphism. In fact it is a covering map, and since $\Bbb{C}$ is simply connected, it exhibits $\Bbb{C}$ as the universal cover of $\Bbb{C}\setminus\{0\}\cong\mathbb R^2\setminus\{0\}$.<|endoftext|> -TITLE: Characters group and cocharacters group Hom duality -QUESTION [5 upvotes]: Let $T$ be an algebraic torus over $\mathbb{C}$. For brevity denote $C = \mathbb{C}^\times = \mathbb{G}_{m,\mathbb{C}}$ the multiplicative group of $\mathbb{C}$. Define the character group by -$$ X^*(T) = \operatorname{Hom}(T,C) = \{ f : T \to C \mid f \mbox{ is a homomorphism} \} $$ -and the cocharacter group -$$ X_*(T) = \operatorname{Hom}(C,T) = \{ f^\vee : C \to T \mid f^\vee \mbox{ is a homomorphism} \} $$ -Then - -$\operatorname{Hom}( X^*(T), \mathbb{Z} ) = X_*(T) $ - -I ask: - -How to prove this fact by elementary or straightforward methods? -The equality implies that each homomorphism from $X^*(T)$ to $\mathbb{Z}$ defines a cocharacter $\chi^\vee \in X_*(T)$? I want to find this cocharacter explicitly. -What is the converse statement - i.e. taking Hom on $X_*(T)$ to get $X^*(T)$? - -Thank you.
- -REPLY [5 votes]: As you might expect, this is all rather formal: -Adhering to the functorial spin on algebraic geometry - and varying your notation slightly, if only to make it easier to type - define the $\mathbb C$ algebraic group $G$ by $$ G(A)=\mathbb G_m ( A) = A^*,$$ where $A^*$ is the set of invertible elements of the $\mathbb C$-algebra $A$. -We can also write $G= \mathbb G_m = \mathop{\rm Spec} R$, with $R=\mathbb C[T,T^{-1}]$. In fact, the identification is canonical (i.e. identify $a\in G(A) = A^*$ for any $\mathbb C$-algebra $A$, with $\lambda_a:T\mapsto a\in A$, for $\lambda_a\in {\rm hom}_{\rm alg}(R,A)$ - so the element $T\in G(R)$ corresponds to the identity homomorphism, i.e. $T\mapsto T$). -We get, in the category of algebraic groups, after a bit more formalism, perhaps, that -$${\rm hom} (G,G) = \mathbb Z,\tag{*}$$ canonically, with the identity homomorphism on the left identified with $1$ on the right – at the level of algebras, $T \mapsto T^n$ on the left corresponds to $n\in \mathbb Z$ on the right. -However, if you prefer, identify $G$ with $G(\mathbb C) = \mathbb C^*$, as you have done - the only point that matters here is $(*)$. This is all window dressing to be able to talk about the $n$ in $z \mapsto z^n$, for $n\in \mathbb Z$. -In any event, for any algebraic group $T$ one certainly has a canonical pairing $$X_*(T) \otimes X^*(T) \to {\rm hom} (G,G) = \mathbb Z.$$ -That is, -$$ (f^\vee, f) \mapsto f\circ f^\vee.$$ -We would like to see that the pairing is perfect if $T$ is a torus. -If $T= G^n$, then $X^*(T) = \mathop {\rm hom} (G^n,G) = \mathbb Z^n$, and $X_*(T) =\mathop {\rm hom} (G,G^n) = \mathbb Z^n$. So, if $a = (a_1,\cdots, a_n) \in \mathbb Z^n = X_*(T)$, and $b= (b_1,\cdots,b_n)\in \mathbb Z^n=X^*(T)$, the pairing is -$$ (a,b) \mapsto \sum b_k \,a_k, $$ -and this is clearly perfect.
-[With your identification, the co-character is $z\mapsto (z^{a_1},\cdots, z^{a_n})$, the character is $(z_1,\cdots, z_n)\mapsto z_1^{b_1} \cdots z_n^{b_n}$, and the pairing gives us -$$z \mapsto z^{\sum{b_k a_k}}.$$] -But, by definition, in the general case, if $T$ is a torus, there is an isomorphism $\lambda \colon T \xrightarrow{\sim} G^n$, although the choice is not necessarily canonical. -Still: -$$(f^\vee , f ) \mapsto f\circ f^\vee = f \circ\left( \lambda^{-1} \circ \lambda \right) \circ f^\vee = \left(f\circ \lambda^{-1}\right) \circ \left( \lambda \circ f^\vee\right),$$ -which brings us back to the previous case.<|endoftext|> -TITLE: Finding an $n \in \mathbb{N}$ such that $\frac{\phi(n+1)}{\phi(n)} = 4$ -QUESTION [8 upvotes]: Let $\phi$ denote the Euler Phi Function. -How do I find an $n \in \mathbb{N}$ such that $\frac{\phi(n+1)}{\phi(n)} = 4$? I can find $n \in\mathbb{N}$ such that $\frac{\phi(n+1)}{\phi(n)}=3$; for example, $n=12$ works. -How does one go about constructing such an $n$? - -REPLY [4 votes]: This answer is just a summary of some points mentioned in the comments, a way to numerically find solutions, some references, and a short summary of what I found in the literature about this and similar questions. - -As pointed out by lhf, the solutions to $\frac{\phi(n+1)}{\phi(n)} = 4$ form sequence A172314 in OEIS. The solutions for $n\leq 10^7$ are -$$\{1260, 13650, 17556, 18720, 24510, 42120, 113610, 244530, 266070,\\ 712080, 749910, 795690, 992250, 1080720, 1286730, 1458270, \\1849470, 2271060, 2457690, 3295380, 3370770, 3414840, 3714750,\\ 4061970, 4736490, 5314050, 5827080, 6566910, 6935082, 7303980,\\ 7864080, 7945560, 8062860, 8223150, 8559720, 9389040, 9774030\}$$ -These solutions were found using mathematical software and took a minute on my laptop. See the simple Mathematica code at the end that does this computation.
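(For readers without Mathematica: a plain-Python totient sieve reproduces the smallest entries of the list above; `phi_table` is just my own helper name, not a library routine.)

```python
def phi_table(limit):
    """Euler's totient phi(k) for k = 0..limit, via a prime sieve."""
    phi = list(range(limit + 1))
    for p in range(2, limit + 1):
        if phi[p] == p:                      # p is prime: still untouched
            for m in range(p, limit + 1, p):
                phi[m] -= phi[m] // p        # multiply by (1 - 1/p)
    return phi

LIMIT = 20_000
phi = phi_table(LIMIT)
solutions = [n for n in range(1, LIMIT) if phi[n + 1] == 4 * phi[n]]
# solutions == [1260, 13650, 17556, 18720], matching the list above
```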
-I guess a numerical solution is not what you are after; however, solving it "by brain" seems like a hard problem, since we need information about the prime factors of both $n$ and $n+1$ to do so. There is no simple relationship between these except in very special cases. It's tempting to try some of these special cases, for example $n$ being a prime of the form $2^k\pm 1$ (Fermat or Mersenne primes); however, this does not lead to a solution. Numerically it seems that $n$ has to be a product of at least $4$ distinct primes, which rules out most of the other simple cases we could try. -It should be noted that very little is known about the (I presume) simpler question that asks to find solutions to $\phi(n+1) = \phi(n)$ and more generally $\phi(n+k) = \phi(n)$. Some references are given below (taken from the OEIS page linked above): - -K. Miller, Solutions of $\phi(n) = \phi(n+1)$ for $1 \leq n \leq 500000$. De Pauw University, 1972. -V. L. Klee, Jr., Some remarks on Euler's totient function, Amer. Math. Monthly, 54 (1947), 332. -M. Lal and P. Gillard, On the Equation $\phi(n) = \phi(n+k)$, Math. Comp. 26 (1972), 579-583. -L. Moser, Some equations involving Euler's totient function, Amer. Math. Monthly, 56 (1949), 22-23. - -As you can see from these references, most of the work simply reports solutions found numerically. There are some special cases where analytical solutions are known. Some examples mentioned in the papers above are: - -$\phi(n) = \phi(n+2)$ is satisfied by $n=2(2p-1)$ if both $p$ and $2p-1$ are odd primes, and by $n = 2^{2^a+1}$ if $2^{2^a}+1$ is a Fermat prime. -The equation $\phi(n) + 2 = \phi(n+2)$ is satisfied if $n$ and $n+2$ are primes, if $n$ has the form $4p$ where $p$ and $2p+1$ are primes, and if $n=2M_p$ where $M_p = 2^p-1$ is a Mersenne prime. -If $n=2^a+1$ is a prime and $k=2^{a+1}-n$, then $\phi(n) = \phi(n+k)$. -If $n=2^a3^b+1$ is a prime and $k=2^{a+1}3^{b}-1$, then $\phi(n) = \phi(n+k)$.
-If $n=3k$ and $2$ and $3$ do not divide $k$ then $\phi(3k) = \phi(4k)$. -If $n=2p$ where $p$ is an odd prime and $(p+1)/2$ is a prime other than $3$, and $k=3(p+1)/2-n$ then $\phi(n) = \phi(n+k)$. -If $n=3p$ where $p$ is a prime other than $3$ and $(p+1)/2$ is a prime other than $5$, and $k=5(p+1)/2-n$ then $\phi(n) = \phi(n+k)$. - - -Mathematica code to find all solutions below a given $n$: -n = 1000000; -philist = Table[EulerPhi[i], {i, 1, n+1}]; -Do[ - If[ philist[[i + 1]] == 4 philist[[i]], Print[i] ], - {i, 1, n}]<|endoftext|> -TITLE: Derivation of $\tanh$ solution to $\frac{1}{2}f''=f^3 - f$ -QUESTION [6 upvotes]: I am a mechanical engineering student, and I am trying to solve the following ODE: -$$\frac{1}{2}f''=f^3 - f$$ -where $f=f(x)$ and the boundary conditions are $f(0)=0$ and $f'(\infty)=0$. On the Wolfram MathWorld page for the hyperbolic tangent, it is remarked that the solution to this ODE is given by $f=\tanh(x)$. This can easily be verified by substitution, but I am looking for a step-by-step procedure to get to the solution. -I have tried using the substitution $g=f'$, which yields -$$ g' = \frac{dg}{dx}=\frac{dg}{df} \frac{df}{dx}=\frac{dg}{df}f'=\frac{dg}{df}g$$ -Using $f''=g'$, the substitution of the expression above in the ODE leads to -$$\frac{1}{2} g\, dg = (f^3-f)\, df$$ -which can be integrated to yield -$$\frac{1}{4} (f')^2 = \frac{1}{4}f^4-\frac{1}{2}f^2 + C$$ -At this point, I am stuck and I do not know how to proceed. Any help would be greatly appreciated!
-Best regards, -Nick -Update 1: -Thanks to Mattos' answer, I realised that I can write the final expression as (I already use $C=0$ here, although I am not completely sure that this is allowed): -$$\frac{df}{f\sqrt{f^2-2}} = dx$$ -Using the following expression I found on sosmath (using different symbols to avoid confusion): -$$\int \frac{dh}{h \sqrt{h^2-a^2}}=\frac{1}{a}\sec^{-1}\left|\frac{h}{a}\right|$$ -we can integrate to find: -$$\frac{1}{\sqrt{2}} \sec^{-1}\left|\frac{f}{\sqrt{2}}\right|=x$$ -which is definitely a lot closer to the answer I am looking for. Any help to go from here to $\tanh$ is appreciated. I will make sure to post the answer if I figure it out. Thanks! -Update 2: -I am not sure if the above approach will lead to the correct answer. However, an extensive derivation is given in the answer by LutzL. Thanks for the help! - -REPLY [2 votes]: Multiply through by $4f'$ and integrate to get -$$ -(f')^2=f^4-2f^2+C -$$ -in a shorter way. -For a twice continuously differentiable function, $f'(∞)=0$ implies that all higher derivatives also vanish. This gives for the value at infinity -$$ -0=\frac12f''(∞)=f(∞)(f(∞)^2-1)\text{ and }0=(f'(∞))^2=f(∞)^4-2f(∞)^2+C -$$ -which gives the variants $f(∞)=0$ and the symmetric $f(∞)=\pm1$ (note that for any solution $f$, also $-f$ is a solution). - -Or put in a physical way, $f''=-V'(f)$ with the potential function $V(y)=-\frac12(y^2-1)^2$. The "particle" obeying this law of motion starts in the local minimum at $y=0$ and is supposed to come to rest at infinity. This is only possible if it stays at the minimum or approaches asymptotically one of the maxima at $y=\pm 1$. - -The first variant implies $C=0$, and then the zero solution is the unique solution for these conditions. -$f(∞)=1$ implies $C=1$ and thus -$$ -f'=\pm(f^2-1) -$$ -Since the solution starts at $0$, and $f\equiv\pm1$ are solutions of this first order ODE, $|f|<1$.
To get a solution growing from $0$ to $1$ you need the negative sign, that is -$$ -f'(x)=1-f(x)^2\\\implies\\ \frac{f'(x)}{1+f(x)}+\frac{f'(x)}{1-f(x)}=2 -\\ -\implies\\\ln|1+f(x)|-\ln|1-f(x)|=2x+2C -\\\implies\\ -f(x)=\frac{e^{2x+2C}-1}{e^{2x+2C}+1}=\frac{e^{x+C}-e^{-(x+C)}}{e^{x+C}+e^{-(x+C)}}=\tanh(x+C), -$$ -and the boundary condition $f(0)=0$ forces $C=0$, so $f(x)=\tanh(x)$.<|endoftext|> -TITLE: A map of complexes which is zero on cohomology but not zero in $D(\mathcal{A})$ -QUESTION [5 upvotes]: Yesterday I asked a very similar question about an exercise of Gelfand's book "Methods of Homological Algebra". In the comments it was pointed out that there was an easier version of that exercise but I couldn't solve it. -Given $f: K^{\bullet} \to L^{\bullet}$ given by the two upper rows of: - -I have to show that this map is not zero in $D(\mathcal{A})$, i.e. there is no quasi-isomorphism $s$ such that $sf$ is homotopic to $0$. -I know that the lowest row is acyclic in all degrees except 0 since $s$ is a quasi-isomorphism. Also, if we denote by $d^{A}$ the differential of the last complex, it's clear that $s_1$ factors through $Ker(d^{A}_0)$. -If we denote by $t$ the homotopy map such that $s_1=d^{A}_{-1} t_1 + t_0 \cdot 2$ and $0=d^{A}_0 t_0$, we can also see that $t_0$ factors through the kernel. I don't know how to get more information to find a contradiction. -Any tips to finish this? Also, I don't know if this approach will be useful; should I consider some mapping cones? - -REPLY [3 votes]: Since $s$ is a quasi-isomorphism, you know that $$\bar s:\Bbb Z \to H^0(A)$$ is an isomorphism. Let $x\in A_0$ be $s_1(1)$. Then $\bar s(1)=x+Im(d^{-1})$ generates $H^0(A)\cong \Bbb Z$. -Let $y=t_1(1)$ and $z=t_0(1)$, then $d^{-1}y+2z=x$. As a consequence, you have that $x=2z$ mod $Im(d^{-1})$, i.e., $x=2z$ in $H^0(A)$. But that's a contradiction since $x$ is a generator of $H^0(A)\cong \Bbb Z$ (you can't divide 1 by 2 in $\Bbb Z$). -Therefore there is no such homotopy $t$.<|endoftext|> -TITLE: Strictly Convex Implies Invertible Gradient?
-QUESTION [7 upvotes]: If $f:\mathbb R^n \rightarrow \mathbb R$ is strictly convex and continuously differentiable, does this imply that $\nabla f$ is a one-to-one mapping? -To be precise, can we say that $x, y \in \mathbb R^n$ and $\nabla f(x) = \nabla f(y)$ implies $x = y$? - -REPLY [9 votes]: Suppose that there exist $x, y \in \mathbb R^n$ such that $\nabla f(x) = \nabla f(y)$ and $x \neq y$. -Then, by the strict convexity of $f$, we can write -\begin{equation} -\nabla f(x) \cdot (y-x) < f(y) - f(x) -\end{equation} -and similarly -\begin{equation} -\nabla f(y) \cdot (x-y) < f(x) - f(y). -\end{equation} -Multiplying both sides of the latter inequality by $-1$ and substituting $\nabla f(x)$ in place of $\nabla f(y)$, we obtain -\begin{equation} -\nabla f(x) \cdot (y - x) > f(y) - f(x), -\end{equation} -which contradicts the first inequality. -Thus, if $x, y \in \mathbb R^n$ satisfy $\nabla f(x) = \nabla f(y)$, then $x = y$.<|endoftext|> -TITLE: Construct a continuous function $f$ over $[0,1]$ satisfying $f(0) = f(1)$ but $f(x) \neq f(x+a)$ -QUESTION [8 upvotes]: Suppose $0 < a < 1$ is not of the form $\dfrac{1}{n}$ for a positive integer $n$. Construct a continuous function $f$ over $[0,1]$ satisfying $f(0) = f(1)$ but $f(x) \neq f(x+a)$ for all $x \in [0,1-a]$. - -This is a follow-up question to this. I am wondering how it is possible to construct such a function. I would start by saying $g(x) = f(x)-f(x+a) \neq 0$ for all $x$ in the domain and then doing casework on the values of $g(x)$. But this, unlike the last question, doesn't admit nice casework on the values of $g(x)$, so I am stuck. - -REPLY [4 votes]: Let $ n $ be the largest integer such that $ na < 1 $.
-Let $ g $ be any continuous function on $ [0, a] $ such that -$$ g(0) = 0 $$ -$$ g(1-na) = -n $$ -$$ g(a) = 1 $$ -Then choose $ f(ka+x) = g(x) + k $ for $ k \in \mathbb{N}, x \in [0,a) $ -Edit: I drew a picture of $ f $: - -Basically, $ f $ is found by first setting $ f(x+a) = f(x) + 1$, with $ f(0) = f(1) = 0 $. This gives you all the points in the above drawing. Then choose a continuous $ g $ through the first 3 points, copy it and translate it by $ (a,1) $ a bunch of times to obtain $ f $.<|endoftext|> -TITLE: How did Leibniz prove that $\sin (x)$ is not an algebraic function of $x$? -QUESTION [8 upvotes]: In the Wikipedia article about transcendental numbers we can read the following: -The name "transcendental" comes from Leibniz in his 1682 paper where he proved that sin(x) is not an algebraic function of x. -I would like to know whether someone can reproduce here the proof of Leibniz of the non-algebraicity of $\sin$, or point me to a place on the internet where we have this proof. -I do not know German or Latin, so if someone has a link where this is proved, it would be OK if it is written in English. -Although I am an amateur, it seems to me that in the time of Leibniz the techniques for proving the non-algebraicity of functions were not developed enough, so it would be nice to see how he proved that for $\sin$. - -REPLY [4 votes]: Leibniz's paper with the term "transcendental" is: DE VERA PROPORTIONE CIRCULI AD QUADRATUM CIRCUMSCRIPTUM IN NUMERIS RATIONALIBUS EXPRESSA, Act.Erudit.Lips.1682. -Reprinted in: - -Gottfried Wilhelm von Leibniz, Leibnizens mathematische Schriften, herausgegeben von C.I.Gerhardt (1858), page 118-on; see page 120: - - -transcendens inter alia habetur per aequationes gradus indefiniti. - -(i.e., "the transcendental is obtained, among other ways, through equations of indefinite degree"). -For the proof, see: PRAEFATIO OPUSCULI DE QUADRATURA CIRCULI ARITHMETICA, page 93-on; see page 98.<|endoftext|> -TITLE: Difficulty in understanding a part in a proof from Stein and Shakarchi Fourier Analysis book.
-QUESTION [8 upvotes]: Theorem 2.1 : Suppose that $f$ is an integrable function on the circle with $\hat f(n)=0$ for all $n \in \Bbb Z$. Then $f(\theta_0)=0$ whenever $f$ is continuous at the point $\theta_0$. -Proof : We suppose first that $f$ is real-valued, and argue by contradiction. Assume, without loss of generality, that $f$ is defined on $[-\pi,\pi]$, that $\theta_0=0$, and $f(0) \gt 0$. -Since $f$ is continuous at $0$, we can choose $ 0\lt \delta \le \frac \pi2$, so that $f(\theta) \gt \frac {f(0)}2$ whenever $|\theta| \lt \delta$. Let $$p(\theta)=\epsilon + \cos\theta,$$ - where $\epsilon \gt 0$ is chosen so small that $|p(\theta)| \lt 1 - \frac \epsilon2$, whenever $\delta \le |\theta| \le \pi$. Then, choose a positive $\eta$ with $\eta \lt \delta$, so that $p(\theta) \ge 1 + \frac \epsilon2$, for $|\theta| \lt \eta$. Finally, let $p_k(\theta)=\left[p(\theta)\right]^k$, and select $B$ so that $|f(\theta)| \le B$ for all $\theta$. This is possible since $f$ is integrable, hence bounded. -By construction, each $p_k$ is a trigonometric polynomial, and since $\hat f(n)=0$ for all $n$, we must have $\int_{-\pi}^{\pi} f(\theta)p_k(\theta)\,d\theta=0$ for all $k$. - -I understood the first paragraph clearly. But the rest is not making its way into my head. - -In the beginning of the second paragraph, how does the given range work for choosing $\delta$? If continuity is used to get the range, then how? -How can we choose $\epsilon$ so small that $|p(\theta)| \lt 1 - \frac \epsilon2$ whenever $\delta \le |\theta| \le \pi$? -How can we choose a positive $\eta$ with $\eta \lt \delta$, so that $p(\theta) \ge 1+ \frac \epsilon2$ for $|\theta| \lt \eta$? -Why must we have $\int_{-\pi}^{\pi}f(\theta)p_k(\theta)\,d\theta=0$ for all $k$? - -REPLY [9 votes]: Continuity tells us we can choose $\delta>0$ so that $|f(\theta) - f(0)|< \frac{f(0)}{2}$ if $|\theta - 0| < \delta$ (which in particular implies $f(\theta) > \frac{f(0)}{2}$).
Once the existence of such a $\delta$ is established, we can assume it is as small as we need; in particular, we're free to take it to be less than $\pi/2$, without affecting the inequality $f(\theta) > \frac{f(0)}{2}$. If you want to see this more concretely, then take the first $\delta$ that we obtained through continuity, and set $\delta' = \min(\delta,\frac{\pi}{2})$. Then $0<\delta' \leq \frac{\pi}{2}$, and $|\theta|<\delta'$ implies $|\theta|<\delta$ implies $f(\theta) > \frac{f(0)}{2}$. -Here we're using the fact that $\cos\theta < 1$ when $\theta$ is away from $0$. To make this quantitative, $\cos\theta$ ranges from a maximum of $\cos\delta$ to a minimum of $-1 = \cos\pi$ on the set $\{\delta \leq |\theta|\leq\pi\}$. Therefore -$$ -\epsilon - 1 \leq p(\theta) \leq \epsilon + \cos\delta -$$ -when $\delta \leq |\theta| \leq \pi$. On the left, we can throw out an extra $\epsilon/2$: -$$ -\frac{\epsilon}{2} - 1 \leq \epsilon - 1 \leq p(\theta). -$$ -On the right, we have -$$ -\epsilon + \cos\delta = \epsilon + 1 - (1-\cos\delta) = \epsilon + 1 - \lambda. -$$ -Note $\lambda > 0$. If we choose $\frac{3\epsilon}{2}< \lambda$, then $-\lambda < -\frac{3\epsilon}{2}$, and -$$ -\epsilon + 1 - \lambda < 1 - \frac{\epsilon}{2}. -$$ -Therefore if we choose $\epsilon < \frac{2}{3}(1-\cos\delta)$, and obtain $\delta$ as above, then -$$ -\frac{\epsilon}{2} - 1 \leq p(\theta) \leq 1 - \frac{\epsilon}{2}, -$$ -or equivalently -$$ -|p(\theta)| \leq 1 - \frac{\epsilon}{2} -$$ -whenever $\delta \leq |\theta| \leq \pi$. -This is similar to both 1 and 2. Now we're using the fact that near $\theta = 0$, $p(\theta) \sim \epsilon + 1$. By continuity of $\cos\theta$ at $\theta = 0$, there exists $\eta>0$ such that if $|\theta| < \eta$, then $|1 - \cos\theta| < \frac{\epsilon}{2}$. This inequality implies $\cos\theta > 1 - \frac{\epsilon}{2}$. Therefore -$$ -p(\theta) = \epsilon + \cos\theta > 1 + \frac{\epsilon}{2}.
-$$ -Again, once the existence of such $\eta>0$ is established, then we are free to take it to be as small as we want; in particular we may specify that $\eta<\delta$. - -For example, look at $p_1(\theta) = p(\theta) = \epsilon + \cos\theta$. Then -$$ -\int_{-\pi}^\pi f(\theta)p_1(\theta) d\theta = \epsilon\int_{-\pi}^\pi f(\theta) d\theta + \int_{-\pi}^\pi f(\theta)\cos(\theta) d\theta . -$$ -Note that the first integral on the RHS is just $\epsilon\hat{f}(0)$, so this is $0$. The second integral is just the first Fourier cosine coefficient: in fact, -$$ -\int_{-\pi}^\pi f(\theta)\cos\theta d\theta = \int_{-\pi}^\pi \text{Re}(f(\theta)e^{i\theta} )d\theta = \text{Re}\left(\int_{-\pi}^\pi f(\theta)e^{i\theta} d\theta\right) = \text{Re}\hat{f}(1) = 0. -$$ -(I may be off by a constant prefactor of $\frac{1}{2\pi}$, but this is unimportant.) Here I'm using the assumption that $f$ is real-valued to take $f(\theta)$ under the real part. For higher powers of $p(\theta)$, you'll see the other Fourier coefficients come up just like this, because $p(\theta)^k$ is a trigonometric polynomial (i.e. a linear combination of $\sin(k\theta)$ and $\cos(k\theta)$ over finitely many $k$). So all of these integrals vanish.<|endoftext|> -TITLE: Solve $\lim_{n\to \infty}\left(\frac{1}{n+1}+\frac{1}{n+2}+\cdots+\frac{1}{n+n}\right)$ without using Riemann sums. -QUESTION [5 upvotes]: The natural way (and I think, the easier) to solve -$$\lim_{n\to \infty}\left(\frac{1}{n+1}+\frac{1}{n+2}+\cdots+\frac{1}{n+n}\right)$$ -involves Riemann sums. Multiplying and dividing by $n$, we get -$$\frac{1}{n+1}+\frac{1}{n+2}+\cdots+\frac{1}{n+n}=\frac{1}{n}\sum_{i=1}^{n}\frac{n}{n+i}=\frac{1}{n}\sum_{i=1}^nf\left(\frac{i}{n}\right)$$ -where $f(x)=\frac{1}{1+x}$.
The last expression is a Riemann sum associated to the partition $P_n=\{0, \frac{1}{n},\dots,\frac{n}{n}\}$ of $[0,1]$, so by Darboux's theorem -$$\lim_{n\to \infty}\frac{1}{n}\sum_{i=1}^nf\left(\frac{i}{n}\right)=\int_0^1\frac{1}{1+x}dx=\ln2$$ -However, I'm curious if there are other techniques that can be used to find that limit, so my question is - -Is it possible to find $$\lim_{n\to \infty}\left(\frac{1}{n+1}+\frac{1}{n+2}+\cdots+\frac{1}{n+n}\right)$$ without using Riemann sums? If so, how? - -REPLY [2 votes]: Here is one approach. Write the sum of interest as -$$\begin{align} -\sum_{k=n+1}^{2n}\frac1k&=\sum_{k=1}^n\left(\frac{1}{2k-1}-\frac1{2k}\right)\\\\ -&=\sum_{k=1}^n\left(\int_0^1 (x^{2k-2}-x^{2k-1})\,dx\right)\\\\ -&=\int_0^1 \sum_{k=1}^n \left(x^{2k-2}-x^{2k-1}\right)\,dx\\\\ -&=\int_0^1\frac{1-x^{2n}}{1+x}\,dx -\end{align}$$ -Using the Dominated Convergence Theorem, we have -$$\begin{align} -\lim_{n\to \infty}\sum_{k=n+1}^{2n}\frac1k&=\int_0^1\lim_{n\to \infty}\left(\frac{1-x^{2n}}{1+x}\right)\,dx\\\\ -&=\int_0^1 \frac{1}{1+x}\,dx\\\\ -&=\log(2) -\end{align}$$ -and we are done!<|endoftext|> -TITLE: A question about complete metric spaces. -QUESTION [5 upvotes]: Is there a theorem which states: "Every infinite metric space that is complete, connected and locally connected, is arc-wise connected"? - -REPLY [3 votes]: Yes, as I already stated in this answer. A proof is in Hocking and Young. It's a classical result from the 1920's.<|endoftext|> -TITLE: Proving the Powerset Axiom for hereditarily finite sets -QUESTION [7 upvotes]: Consider $\mathsf{ZF}$, and replace the Axiom of Infinity with its negation. This gives us the theory of hereditarily finite sets. Its universe is $V_\omega$. Intuitively, I feel that I can construct any hereditarily finite set starting from the empty set and using only Pairing and Union. So, my questions are: - -Can I drop the Powerset Axiom and prove it from the remaining axioms? -Can I prove the Axiom of Choice in this theory?
-Assuming I have an explicit axiom postulating the existence of the empty set, can I drop the Axiom Schema of Separation and prove its every instance from the remaining axioms? The same question about Replacement. - -All questions are under the assumption that $\mathsf{ZFC}$ is consistent. - -REPLY [2 votes]: Let $ZF^{¬\infty}$ denote the axioms of $ZF$ with the axiom of infinity replaced by its negation. -Before going into anything else, it is worth pointing out that (if $ZF$ is consistent) then $ZF^{¬\infty}$ has many different models, not just $V_\omega$. This is because without infinity, the axiom of foundation becomes significantly weaker. To see this - consider the following two statements: - -The Axiom of Foundation (Fo): If $X$ is a nonempty set, then there exists $y \in X$ with $y \cap X = \emptyset$. -The Von Neumann Rank Axiom (VNR): If $X$ is a set, then there exists an ordinal $\alpha$ with $X \subseteq V_\alpha$. - -In $(ZF - Fo)$ these two axioms are equivalent, but this equivalence breaks down in the absence of infinity. There are models of $ZF^{¬\infty}$ where VNR fails, and such an example can be seen here. - -As a side note, the claim that "$V_\omega$ is the universe of sets" is equivalent to VNR with respect to $ZF^{¬\infty},$ or in other words $ZF^{¬\infty} \vDash ($VNR $\iff$ "$V_\omega$ is the universe of sets"$)$. -VNR is a very powerful strengthening of foundation with respect to $ZF^{¬\infty}$. If we denote: - -$T = (ZF^{¬\infty} + VNR) - ($"the axiom of replacement/separation" and "the power set axiom"$)$ - -then $T$ is powerful enough to prove all three of these missing axioms - and also the axiom of choice. We can see this as follows: - -Suppose $X$ is a set and $\phi$ is a class-function. Then there exists a natural number $n$ with $X \subseteq V_n$, and we can prove by induction that $|V_n|$ is finite - and therefore $|X|$ is finite. This means that the class $Y = \{y : \exists x \in X(\phi(x) = y) \}$ is finite.
-The rank of each $y \in Y$ (the smallest $m$ such that $y \subseteq V_m$) is finite as $Y \subseteq V_\omega$, and $rank(Y) = \sup\{rank(y) : y \in Y\}$. This means that $rank(Y)$ is finite (as it is the supremum of a finite amount of natural numbers), and therefore there exists a natural number $N$ with $Y \subseteq V_N$ - so that $Y \in V_\omega$ and therefore $Y$ is a set. -This proves the axiom of replacement, as $X$ and $\phi$ are arbitrary. -The axiom of separation follows trivially from the axiom of replacement and the empty set axiom. -Given that $X$ is a set. Then there exists a natural number $n$ with $X \subseteq V_n$. Then we have the set $P_X = \{Y \in V_{n+1} : Y \subseteq X \}$ via the axiom of separation, and moreover $P_X$ qualifies as being the power set of $X$. This proves the power set axiom, as $X$ is arbitrary. -Given that $X$ is a set. Then there exists a natural number $n$ with $X \subseteq V_n$. Note that by induction we can construct a bijection $f_n \colon V_n \to {^{n-1}2}$, and then $f_n$ restricts to an injection $f_n |_X \colon X \to {^{n-1}2}$. (Side note, we define ${^{-1}2} = 0$ and ${^{m+1}2} = 2^{(^{m}2)}$). -This allows us to well-order $X$ via $x \leq y$ if and only if $f_n|_X(x) \leq f_n|_X(y)$, which proves the well-ordering theorem (and therefore the axiom of choice) as $X$ is arbitrary. - - -This shows us the power of the axiom system $T$ where we had strengthened the axiom of foundation to satisfy VNR. However without VNR we don't have such luxury. - -The example from the beginning gives us a model of $ZF^{¬\infty}$ that satisfies the negation of choice. -Define $S = ZF^{¬\infty} - ($"the axiom of replacement" and "the power set axiom"$)$. Then it can be shown that $S \vDash ($"the power set axiom"$ \implies $"the axiom of replacement"$)$. -This is because in $S$, the power set axiom holds if and only if there are no infinite sets - and finite classes are always sets via the axiom of pairs and the axiom of unions. 
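A remark on the final step: the injection of $X$ into the naturals can be made completely explicit via the Ackermann encoding $\mathrm{code}(x)=\sum_{y\in x}2^{\mathrm{code}(y)}$, the standard bijection between $V_\omega$ and $\mathbb{N}$ (not spelled out above; the Python sketch below models hereditarily finite sets as nested `frozenset`s, and the function name is mine).

```python
def ack(s):
    """Ackermann code of a hereditarily finite set: sum of 2**ack(y) over y in s."""
    return sum(2 ** ack(y) for y in s)

# The first von Neumann ordinals as nested frozensets.
zero = frozenset()                     # {}
one = frozenset({zero})                # {{}}
two = frozenset({zero, one})           # {{}, {{}}}
three = frozenset({zero, one, two})

codes = [ack(x) for x in (zero, one, two, three)]  # [0, 1, 3, 11]
```

Comparing codes then well-orders any set of hereditarily finite sets, which is exactly the kind of well-ordering used in the proof above.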
-On the other hand - the reverse implication doesn't hold. For example if we define a set $X$ to be pseudo-finite iff $X, \bigcup X, \bigcup \bigcup X, ...$ are all Dedekind-finite - then for any model $M$ of $ZF$, we have that $P = \{x \in M : x$ is pseudo-finite$\}$ is always a model of $S$ and satisfies the axiom of replacement. -So simply take $M$ to be a model of $ZF$ that contains an infinite pseudo-finite set, and we have our example. -As for a model of $S$ that satisfies both the negation of the power set axiom and the negation the axiom of replacement, start with some infinite pseudo-finite set $A$. Then define the following sets: - -$B_0 = V_\omega \cup A \cup \{A\}$. -$B_{2n+1}$ is the closure of $B_{2n}$ under pairs and unions, for $n \in \mathbb{N}$. -$B_{2n+2}$ is the set of all sets obtainable from $B_{2n+1}$ via the axiom of separation, for $n \in \mathbb{N}$. - - -Then $B = \bigcup_{n \in \mathbb{N}} B_n$ is a model of $S$ and satisfies the negation of the power set axiom, yet the class $\{ \{A,a\} : a \in A\}$ cannot be a set in $B$ despite being definable via a class-function from $A$. -Hence $B$ is a model of $S$ that satisfies both the negation of the power set axiom and the negation of the axiom of replacement.<|endoftext|> -TITLE: Are the additive group of rationals and the multiplicative group of positive rationals isomorphic? -QUESTION [6 upvotes]: This question is a little bit different from Group of positive rationals under multiplication not isomorphic to group of rationals since I was wondering if logarithmic function could solve this or not, thank you. -Consider two groups $(\mathbb Q^+,\cdot)$ and $(\mathbb Q,+)$, does an isomorphism exist between them? 
-My attempt: Let $\varphi:\mathbb Q^+\rightarrow\mathbb Q$ be the isomorphic function, then the below statement must hold true for all $a,b\in\mathbb Q^+$: -$$\varphi(a\cdot b)=\varphi(a)+\varphi(b)$$ -So I guess maybe a logarithmic function would be fine here, since it's bijective, too. -But the problem is I can not show for a specific base like $10$ for example, $\log(\mathbb Q^+)=\mathbb Q$. - -REPLY [3 votes]: So now you know they're not isomorphic, but you can go a bit further with what you already know. For example, $\mathbb{Q}^+$ has a subgroup whose elements are the powers $2^n$ of $2$, including $1$ and $1/2$, etc. Because the map $n \mapsto 2^n$ from $(\mathbb Z,+)$ is injective, this is a cyclic subgroup. You can do the same thing with the powers $3^n$ and see these two subgroups intersect only in $\{1\}$. -Now, you know every natural number has a unique factorization in terms of primes. Can you use that to say more about $(\mathbb Q^+,\cdot)$? -By comparison, you can check for any two elements $g$ and $h$ of $(\mathbb Q,+)$,there are integers $m$ and $n$ such that $mg = nh$. This means that, in contrast to the case of $(\mathbb Q^+,\cdot)$, any two cyclic subgroups have a nontrivial intersection. One says that $(\mathbb Q^+,\cdot)$ has rank equal to 1, meaning there's a $\mathbb Z$ subgroup, but only one independent such $\mathbb Z$ subgroup, in that all other isomorphic copies of $\mathbb Z$ meet it nontrivially.<|endoftext|> -TITLE: How many different functions we have by only use of $\min$ and $\max$? -QUESTION [13 upvotes]: We can making many functions of three variable by only use and combining of $\min$ and $\max$ functions. But many of them are not different , like : -$$\min(x,y,z)=\min(x,\min(y,z)),\quad\min(x,\max(x,y)) = \min(x,x) = \max(x,x)$$ - -How many different functions $\mathbb R ^3 \rightarrow \mathbb R$ of this form we have? - -The upper bound of numbers of this functions is $3^{3!}$ . 
This is because there are only $3!$ orderings of $x, y, z$, like $x < y < z$, $x < z < y$, and so on, and on each ordering such a function returns one of the values in $\{x,y,z\}$.
-And my second question is:
-
-How many different functions $\mathbb R ^n \rightarrow \mathbb R$ of this form do we have?
-
-REPLY [8 votes]: Thanks to Giovanni Resta's answer, I will prove that $M(n)-2$ different functions of $n$ variables exist, where $M(n)$ is the $n$th Dedekind number.
-At first it's simple to check:
-$\max(x,\min(y,z))=\min(\max(x,y),\max(x,z))$
-$\min(x,\max(y,z))=\max(\min(x,y),\min(x,z))$
-Now we can consider $\max(x,y)$ as $x \vee y$ and $\min(x,y)$ as $x \wedge y$, and then the above identities turn into the familiar distributive laws:
-$x \vee (y \wedge z) = (x \vee y) \wedge (x \vee z)$
-$x \wedge(y \vee z) = (x \wedge y) \vee (x \wedge z)$
-Now the number of such functions of $n$ variables is equal to the number of elements of the free distributive lattice on $n$ generators, without empty joins and empty meets.
-For example for $n=3$ from https://oeis.org/A007153 we have:
-a
-b
-c
-a or b
-a or c
-b or c
-a or b or c
-a and b
-a and c
-b and c
-a and (b or c)
-b and (a or c)
-c and (a or b)
-(a or b) and (a or c)
-(b or a) and (b or c)
-(c or a) and (c or b)
-a and b and c
-(a or b) and (a or c) and (b or c)<|endoftext|>
-TITLE: Analytic vs. Analytical
-QUESTION [5 upvotes]: I am trying to write an article.
-Little summary:
-I have developed some tools to analyze the derivative of some function $f$. This characterization leads to better results than previous works that only studied the function itself.
-I am trying to say that:
-"Our analytic view of the problem provides a better characterization of blah blah blah "
-When I say analytic view I mean that we not only look at a function but also at its derivatives.
-My question:
-"Our analytic view of the problem provides a better characterization of blah blah blah "
-or
-"Our analytical view of the problem provides a better characterization of blah blah blah "
-
-REPLY [2 votes]: If you have doubts, most probably some other people will have them too!
-It is always better to be clear, although it is sometimes a difficult decision where to stop (it depends on whether it is for a paper or for a book, whether it is for an abstract or introduction, etc, etc).
-Summing up, much better something like: "By looking not only at the function itself but also at its derivatives, in contrast to what Mr.X did in [Y], we are able to provide a better characterization of".<|endoftext|>
-TITLE: A Successor Cardinal is Regular
-QUESTION [7 upvotes]: Trying to show that for every cardinal $k$, its successor $k^+$ is regular. This is what I've come up with. Thoughts?
-If this does not hold, then a cofinal map $f: \lambda\rightarrow k^+$ where $\lambda\leq k$ would exist. This would mean $k^+$ would be a union of $k$ sets, each of size $\leq k$. This would contradict the following:
-Let $k \in CARD$ and let $X=\cup_{\alpha
-TITLE: Functions that preserve asymptotic equivalence
-QUESTION [8 upvotes]: Is there any notion of preserving asymptotic equivalence by a real-valued function? Any facts known about such functions?
-To clarify what I'm asking I'll introduce one formalization of the idea which has come to my mind.
-
-$\mathbb R_+$ denotes $[0,+\infty)$.
-Consider a class $\mathcal{A}$ of continuous functions $f\,\colon \mathbb R_+ \rightarrow \mathbb R_+$, satisfying the following conditions:
-
-$\lim \limits_{x \rightarrow +\infty} f(x) = +\infty$,
-for any two positive sequences $\{a_n\}_{n=1}^\infty$ and $\{b_n\}_{n=1}^\infty$, such that $$a_n \rightarrow +\infty \;\;\;\text{and}\;\;\; a_n \sim b_n \quad\text{as}\;\;\; n\rightarrow \infty,$$ we have $f(a_n) \sim f(b_n)$ as $n$ goes to infinity.
-
-Some facts about $\mathcal A$:
-
-$\mathcal A$ contains such functions as $\log(1+x)$ and $x^\alpha$ for $\alpha>0$.
-For continuous $f,g\,\colon \mathbb R_+ \rightarrow \mathbb R_+$, if $f \in \mathcal A$ and $f \sim g$ then we have $g \in \mathcal A$. In particular, our class contains the functions $f$ analytic at infinity, which means that $\frac{1}{f(1/x)}$ is analytic at zero. Thanks to Joel Cohen who helped me note it.
-One can show that $\mathcal A$ is closed under composition, multiplication and taking linear combinations with positive coefficients. Moreover, these operations are well-defined on the set $A$ of $\sim$-equivalence classes of $f \in \mathcal A$.
-
-Then, we can state some problems:
-
-Is $\mathcal A$ bounded in the sense of asymptotic growth?
-Can we characterize $f \in \mathcal A$ in some other terms?
-Is $\mathcal A$ closed under taking indefinite integrals? If so, is this operation well-defined on $A$?
-If we modify the definition by replacing equivalent sequences with equivalent $C^k$-smooth functions on $\mathbb R_+$, would we get the same class? What if we restrict these sequences/functions to be strictly increasing?
-
-REPLY [6 votes]: The functions which preserve asymptotic equivalence have turned out to be one of a vast number of generalizations of Karamata's regularly varying functions. The paper "On some extensions of Karamata's theory and their applications" by Buldygin et al. introduces pseudo-regularly varying (PRV) functions, which are, in fact, exactly the measurable functions, not necessarily tending to infinity, which preserve asymptotics at $+\infty$ in my sense.
-The paper contains many properties of PRV functions and conditions for a function to be PRV.
For example, PRV functions are characterized as being eventually ($\forall x>x_0$) of the form
-$$
-f(x) = \exp \bigg( a(x) + \int\limits_{x_0}^x b(t) \frac{dt}{t} \bigg),
-$$
-where $a$ and $b$ are bounded measurable functions with $\lim\limits_{c\rightarrow 1}\limsup\limits_{x\rightarrow +\infty} \lvert a(cx)-a(x) \rvert = 0$.
-
-So the reference request is fulfilled; I'll return to the finer class $\mathcal A$ described in the question and answer the "mini-questions".
-Besides the above, we can characterize $\mathcal A$ in terms of uniform continuity, although this characterization is not so constructive.
-
-Proposition. A continuous function $f\colon \mathbb R_+ \rightarrow \mathbb R_+$ with $\lim \limits_{x \rightarrow +\infty} f(x) = +\infty$ belongs to $\mathcal A$ if and only if the function $F(t) = \log f(e^t)$, correctly defined on some ray $[T,+\infty)$, is uniformly continuous.
-Proof. Note that $\log f(x) = F(\log x)$.
-1) To prove the "if" part, pick $\{a_n\}_{n=1}^\infty$ and $\{b_n\}_{n=1}^\infty$ as in the definition of $\mathcal A$. Since $\lim\limits_{n\rightarrow\infty} \log\tfrac{a_n}{b_n} = 0$ and $F$ is uniformly continuous,
- \begin{align*}
-\left| \log \frac{f(a_n)}{f(b_n)} \right| &= \lvert \log f(a_n) - \log f(b_n) \rvert = \lvert F(\log a_n)-F(\log b_n) \rvert = \\ &= \lvert F(\log b_n+\log\tfrac{a_n}{b_n}) - F(\log b_n) \rvert \leqslant \sup\limits_{t \geqslant T} \lvert F(t+\log\tfrac{a_n}{b_n}) - F(t) \rvert \xrightarrow{\;n\rightarrow\infty\;} 0,
-\end{align*}
- so finally $f(a_n) \sim f(b_n)$ as $n$ tends to infinity.
-2) To prove the "only if" part, suppose that, conversely, $F$ is not uniformly continuous. That means there exist $\varepsilon > 0$ and two sequences $\{t_n\}_{n=1}^\infty, \{s_n\}_{n=1}^\infty \subseteq [T, +\infty)$, such that
- $$
-\lim\limits_{n\rightarrow\infty} (t_n-s_n) = 0 \quad\text{and}\quad \forall n\in \mathbb N\ \; \lvert F(t_n) - F(s_n) \rvert \geqslant \varepsilon.
-$$
- It's necessary that $\{t_n\}_{n=1}^\infty$ is unbounded, because by Cantor's theorem $F$ is uniformly continuous on each finite interval. Hence, taking a subsequence if needed, we can assume that $t_n$ (and so $s_n$) goes to infinity.
-Define $a_n := \exp(t_n), \; b_n := \exp(s_n)$. We have $\lim\limits_{n\rightarrow\infty} \tfrac{a_n}{b_n} = \lim\limits_{n\rightarrow\infty} \exp(t_n-s_n) = 1$ and
- $$
-\bigg\lvert \! \log \frac{f(a_n)}{f(b_n)} \! \bigg\rvert = \lvert F(\log a_n)-F(\log b_n) \rvert = \lvert F(t_n)-F(s_n) \rvert \geqslant \varepsilon \;\;\; \forall n \in \mathbb N,
-$$
- which contradicts $f \in \mathcal A$.
-
-From this characterization it follows that a function from $\mathcal A$ grows at most like a power function. (It also follows from the exponential expression for PRV functions.)
-
-Corollary. If $f \in \mathcal A$, then $f(x) = O(x^\alpha)$ for some $\alpha > 0$ as $x \rightarrow +\infty$.
-Proof. Take $F$ as in the above proposition. Since $F$ is uniformly continuous, $F(t) \leqslant \alpha t \;\;\; \forall t \geqslant T$ with some $\alpha,T \geqslant 0$ (see this question). Then
- $$
-f(x) = \exp\big(F(\log x)\big) \leqslant \exp(\alpha \log x) \leqslant x^\alpha \quad \forall x \geqslant e^T.
-$$
-
-And finally, we can indeed replace equivalent sequences in the definition of $\mathcal A$ with equivalent $C^\infty$-smooth functions on $\mathbb R_+$ and get the same class. It's straightforward: we just need to construct a smooth function with given values in, say, $\mathbb N$.
-We may also restrict the sequences from the definition to be strictly increasing, since each real sequence not bounded above admits a strictly increasing subsequence. And thus the smooth functions from the alternative definition can all be strictly increasing. It's a bit more tricky, and this is crucial for the l'Hopital's rule application to prove
-
-Proposition.
If $f, g \in \mathcal A$, then $F(x)=\int_0^x f(t)dt$ and $G(x) = \int_0^x g(t)dt$ lie in $\mathcal A$ and, moreover, $F \sim G$ at infinity.<|endoftext|>
-TITLE: Express cofundamental weights using coroots.
-QUESTION [5 upvotes]: In the type $A_2$ root system, we have $\alpha_1 = 2 \omega_1 - \omega_2$, $\alpha_2 = - \omega_1 + 2 \omega_2$. How to express the fundamental coweights $\omega_1^{\vee}, \omega_2^{\vee}$ using the coroots $\alpha_1^{\vee}, \alpha_2^{\vee}$? Any help will be greatly appreciated!
-
-REPLY [6 votes]: The fundamental roots are dual to the fundamental coweights and the fundamental weights are dual to the fundamental coroots.
-$$ \alpha_i(\omega_j^\vee) = \delta_{ij} \qquad \qquad
- \omega_i(\alpha_j^\vee) = \delta_{ij}$$
-Using these relationships you can find the relationship between these bases. The Cartan matrix gives a change of basis matrix from weights to roots. So the transpose of the Cartan matrix gives a change of basis between the coroots and the coweights -- that's where I have to double check my own thinking to make sure I didn't get it backwards! :)
-Let $a_{ij}$ be the $(i,j)$-entry of our Cartan matrix. Then...
-$$\alpha_i(\alpha_j^\vee) = \sum\limits_{k} a_{ik}\omega_k(\alpha_j^\vee) = \sum\limits_{k} a_{ik}\delta_{kj}=a_{ij}$$
-Actually $\alpha_i(\alpha_j^\vee)=a_{ij}$ may (essentially) be your definition of the entries of the Cartan matrix (depending on where your starting point is).
-Now suppose that $c_{ij}$ is the $(i,j)$-entry of your change of basis matrix from the coweights to the coroots. This means that $\alpha_j^\vee = \sum\limits_{k} c_{jk}\omega_k^\vee$. Then...
-$$a_{ij}=\alpha_i(\alpha_j^\vee)=\alpha_i\left(\sum\limits_{k} c_{jk}\omega_k^\vee\right)=\sum\limits_k c_{jk} \alpha_i(\omega_k^\vee) = \sum\limits_k c_{jk}\delta_{ik} = c_{ji}$$
-Thus the change of basis matrix is just the transpose. [There's nothing special about roots and weights here.
This is just a general linear algebra fact about switching between dual bases.]
-Now your Cartan matrix $A=\begin{bmatrix} 2 & -1 \\ -1 & 2 \end{bmatrix}$ is symmetric. So you get...
-$$\alpha_1^\vee = 2\omega_1^\vee-\omega_2^\vee \qquad \qquad
- \alpha_2^\vee = -\omega_1^\vee+2\omega_2^\vee$$
-For simple Lie algebras of type ADE (simply laced algebras) the same thing happens (since their Cartan matrices are symmetric). If you play around with the non-simply laced types, be mindful of the transpose.<|endoftext|>
-TITLE: Proof Nehari manifold of semilinear subcritical $-\Delta u = f(u)$ in $\Omega$ is not empty.
-QUESTION [10 upvotes]: Given the problem
-$$
-\left\{
-\begin{array}{rll}
--\Delta u& = f(u) & \text{in }\Omega \\
- u & = 0 & \text{on } \partial\Omega
-\end{array}
-\right.
-$$
-In a bounded domain $\Omega\subset \mathbb{R}^N, N\geq 3$ with $f$ satisfying:
-
-$f\in C^1(\mathbb{R}), f(0)=0$ and $f'(0)<\lambda_1$, $\lambda_1$ the first eigenvalue of $-\Delta$ in $H_0^1(\Omega)$
-There exists $c>0$ and $\sigma \in (1,2^{*}-1)$ such that $$|f'(s)|\leq c(|s|^{\sigma-1}+1) \quad \forall s\in \mathbb{R}$$
-There is some $\alpha\in(0,1)$ such that $$f(s)s-2F(s)\geq \alpha f(s)s \quad \forall s$$ for $F(s)=\int_0^s f$.
-For all $s$ $$f'(s)>\frac{f(s)}{s} \quad \text{and} \quad \lim_{|s|\rightarrow \infty}\frac{f(s)}{s}=\infty$$
-
-I am to show that the Nehari manifold associated with the functional $$J(u)=\frac{1}{2}\| u\|^2-\int_{\Omega} F(u)$$ is not empty. I already proved $u\in H_0^1$ is a solution to the PDE if and only if it is a critical point of $J$. I know from a previous exercise that $J$ is $C^2$ and $$J'(u)v=\langle u,v \rangle -\int f(u)v$$ and $$J''(u)(v,w)=\langle v,w\rangle-\int f'(u)vw$$
-It is hinted that I must show that given $u\not\equiv 0$, the real-valued function $J_u\colon(0,\infty)\rightarrow \mathbb{R}$, $J_u(t)=J(tu)$, satisfies $J^\prime_u (t)=0$ for exactly one $t$ and that this is a maximum.
-I used the Chain rule for generalized (Fréchet) derivatives to obtain
-$$\frac{d J_u}{d t}(t)=\frac{d }{d t} J\circ h \,(t)=J'(h(t))\circ h'(t)=J'(tu)u,$$ where I defined $h(t)=tu$. I must therefore prove $$t^2\|u\|^2-\int f(tu)tu=0$$ has precisely one solution for $t$. I know that the above expression is continuous in $t$, so I thought I'd show the above goes to $0^{+}$ as $t\rightarrow 0^{+}$, and to $-\infty$ as $t\rightarrow \infty$, but I can't quite work it out.
-Am I missing something? Or am I just getting tunnel vision?
-Thank you in advance.
-
-REPLY [2 votes]: If we only want to show that the Nehari manifold is not empty, we can do the following. Let $J_u(t)=J(tu)$ where $u\in H_0^1(\Omega)\cap L^\infty(\Omega)$ and $u>0$. As you already have noted, we have that $$J'_u(t)=t^2\|u\|^2-\int f(tu)tu.$$
-From 3. you have that $(1-\alpha)f(s)s\ge 2 F(s)$, which implies that $$\frac{F'(s)}{F(s)}\ge \frac{2}{(1-\alpha)s}.$$
-For $s\geq1$, we conclude that $$F(s)\geq C_1s^{\frac{2}{1-\alpha}},$$
-for some positive constant $C_1$. Therefore $F(s)\ge C_1s^{\mu}+C_2$ for $s>0$, $C_2\in\mathbb{R}$ a constant and $\mu>2$. By using again the inequality $(1-\alpha)f(s)s\ge 2F(s)$, we have that $$-f(s)s\le -D_1s^\mu-D_2,\ s>0,$$
-where $D_1>0$ and $D_2\in\mathbb{R}$ are constants. So $$J'_u(t)\le t^2\|u\|^2-D_1t^\mu\int u^\mu-D_2|\Omega|.$$
-Since $\mu>2$, we must conclude that $J_u'(t)\to -\infty$ when $t\to \infty$.
-On the other hand, the hypothesis $f'(0)<\lambda_1$ implies that for small $x>0$, there is $\delta>0$ such that $$f(x)<(\lambda_1-\delta)x,$$
-which implies that for small $x>0$ $$-f(x)x> -(\lambda_1-\delta)x^2.$$
-Therefore $$J'_u(t)\ge t^2\|u\|^2-(\lambda_1-\delta)t^2\int u^2.$$
-Since $\|u\|_2^2\le \frac{1}{\lambda_1}\|u\|^2$, we obtain that $$J'_u(t)\ge t^2\|u\|^2-\frac{\lambda_1-\delta}{\lambda_1}t^2\|u\|^2,$$
-which implies that $J'_u(t)$ is positive near the origin.
Because of the continuity of $J'_u$, we conclude that there is $t>0$ such that $J'_u(t)=0$, which is to say that $$\langle J'(tu),tu\rangle =0,$$
-or equivalently that $tu$ belongs to the Nehari manifold.
-Edit: In the above calculations, I have assumed implicitly that $F(s)\ge 0$, which is not true; however, we can overcome this problem by using the hypothesis $f(s)/s\to \infty$ when $s\to \infty$. Indeed, we can conclude that $$F(s)\ge C s^{\mu},$$
-where $\mu$ is as above and $s$ is such that $F(s)=\int_0^s f(t)dt$ is positive (which will be true for large $s$). Therefore $F(s)\ge C s^{\mu}+D$ for all $s>0$, where $C>0$ and $D\in\mathbb{R}$ are constants.<|endoftext|>
-TITLE: How to solve these two simultaneous "divisibilities" : $n+1\mid m^2+1$ and $m+1\mid n^2+1$
-QUESTION [88 upvotes]: Is it possible to find all integers $m>0$ and $n>0$ such that $n+1\mid m^2+1$ and $m+1\,|\,n^2+1$ ?
-I succeeded in proving that there are infinitely many solutions, but I cannot progress any further.
-Thanks !
-
-REPLY [2 votes]: It is convenient to introduce $x:=m+1$ and $y:=n+1$. Then
-$$
-\begin{cases}
-x\ |\ y^2 - 2y + 2,\\
-y\ |\ x^2 - 2x + 2.
-\end{cases}
-$$
-It follows that $\gcd(x,y)\mid 2$, and thus there are two cases to consider:
-
-Case $\gcd(x,y)=1$. We have
-$$xy\ |\ x^2 + y^2 - 2x - 2y + 2.$$
-This is solved via Vieta jumping. Namely, one can show that $\frac{x^2 + y^2 - 2x - 2y + 2}{xy}\in\{ 0, -4, 8 \}$. The value $0$ corresponds to an isolated solution $x=y=1$, while each of the other two produces an infinite series of solutions, where $x,y$ represent consecutive terms of the following linear recurrence sequences:
-$$1,\ -1,\ 5,\ -17,\ 65,\ \dots \qquad(s_k=-4s_{k-1}-s_{k-2}+2)$$
-and
-$$-1, -1, -5, -37, -289,\ \dots, \qquad(s_k=8s_{k-1}-s_{k-2}+2).$$
-However, there are no entirely positive solutions here.
-
-Case $\gcd(x,y)=2$. Letting $x:=2u$ and $y:=2v$ with $\gcd(u,v)=1$, we similarly get
-$$
-\begin{cases}
-u\ |\ 2v^2 - 2v + 1,\\
-v\ |\ 2u^2 - 2u + 1.
-\end{cases} -$$ -Unfortunately, Vieta jumping is not applicable here. Still, if we fix -$$k:=\frac{2u^2 + 2v^2 - 2u - 2v + 1}{uv},$$ -then the problem reduces to the following Pell-Fermat equation: -$$((k^2-16)v + 2k+8)^2 - (k^2-16)(4u - kv - 2)^2 = 8k(k+4).$$ -Example. Value $k=9$ gives -$$z^2 - 65t^2 = 936.$$ -with $z:=65v + 26$ and $t:=4u - 9v - 2$. It has two series of integer solutions in $z,t$: -$$\begin{bmatrix} z_\ell\\ t_\ell\end{bmatrix} = \begin{bmatrix} 129 & -1040\\ -16 & 129\end{bmatrix}^\ell \begin{bmatrix} z_0\\ t_0\end{bmatrix}$$ -with initial values $(z_0,t_0) \in \{(39,-3),\ (-1911,237)\}$. -Not every value of $(z_\ell, t_\ell)$ corresponds to integer $u,v$. Since the corresponding matrix has determinant 260, we need to consider sequences $(z_\ell, t_\ell)$ modulo 260. It can be verified that only first sequence produces integer $u,v$ and only for odd $\ell$, that is -\begin{split} -\begin{bmatrix} v_s\\ u_s\end{bmatrix} &= \begin{bmatrix} 65 & 0\\ -9 & 4\end{bmatrix}^{-1}\left(\begin{bmatrix} 129 & -1040\\ -16 & 129\end{bmatrix}^{2s+1} \begin{bmatrix} 39\\ -3\end{bmatrix} + \begin{bmatrix} -26\\ 2\end{bmatrix}\right)\\ -&=\begin{bmatrix} 70433 & -16512\\ 16512 & -3871\end{bmatrix}^s \begin{bmatrix} 627/5 \\ 147/5\end{bmatrix} - \begin{bmatrix} 2/5 \\ 2/5\end{bmatrix} -\end{split} -or in a recurrence form: -\begin{cases} -v_s = 70433\cdot v_{s-1} -16512\cdot u_{s-1} + 21568,\\ -u_s = 16512\cdot v_{s-1} -3871\cdot u_{s-1} + 5056, -\end{cases} -with the initial value $(v_0,u_0) = (125, 29)$. The next couple of values is $(8346845, 1956797)$ and $(555582723389, 130248348509)$, and it can be seen that the sequence grows quite fast. -UPDATE. 
Series of positive solutions exist for
-$$k\in \{9, 13, 85, 97, 145, 153, 265, 289, 369, \dots\}.$$
-Most likely, this set is infinite, but I do not know how to prove this.<|endoftext|>
-TITLE: Opening and closing convex sets
-QUESTION [6 upvotes]: It seems true that, given $K \subseteq \mathbb{R}^n$ a convex set with $K^\circ \neq \emptyset$, then $\overline{K^{\circ}} = \overline{K}$ and $\left ( \overline{K} \right )^\circ = K^\circ$.
-I am able to prove the first equality by making use of the "segment Lemma", which states that if $y \in K$ and $x \in K^\circ$, then $[x, y[ \subseteq K^\circ$ (here $[x, y[$ is the segment joining $x$ and $y$ without taking $y$).
-However I have not found any correct proof of the second equality, nor has a counter-example come to my mind.
-Thanks all!
-
-REPLY [2 votes]: Here is a more functional-analytic proof, which avoids the argument with simplexes:
-assume by contradiction that there exists $x\in(\overline{K})^\circ\setminus K^\circ$. By the Hahn-Banach theorem (applied to $K^\circ$ and $\{x\}$) and the fact that $(\mathbb{R}^n)^*=\mathbb{R}^n$, you can find $v\neq 0$ such that
-$\langle v,K^\circ\rangle \le \langle v,x\rangle$. So
-$$ \langle v,K^\circ\rangle + \epsilon|v|^2\le\langle v,x+\epsilon v\rangle. $$
-But $x+\epsilon v\in\overline{K}$ if $\epsilon>0$ is small.
-In particular you can find $x'\in K$ such that $|(x+\epsilon v)-x'|<\frac{\epsilon}{2}|v|$, which gives
-$$ \langle v,K^\circ\rangle + \frac{\epsilon}{2}|v|^2\le\langle v,x'\rangle. $$
-But by the "segment lemma", for any fixed $z\in K^\circ$, $[z,x')\subseteq K^\circ$, so $\langle v,[z,x')\rangle\le\langle v,x'\rangle-\frac{\epsilon}{2}|v|^2$ and by continuity $\langle v,x'\rangle\le\langle v,x'\rangle-\frac{\epsilon}{2}|v|^2$, contradiction.
-We assumed $K^\circ\neq \emptyset$, but you can always find an affine subspace containing $K$ where this happens, and you can repeat the proof there.<|endoftext|>
-TITLE: Prove that the Pontryagin dual of $\mathbb{R}$ is $\mathbb{R}$.
-QUESTION [5 upvotes]: From Wikipedia: the group of real numbers $\mathbb{R}$, is isomorphic to its own dual; the characters on $\mathbb{R}$ are of the form $r \to e^{i\theta r}$.
-How can I prove this assertion?
-
-REPLY [5 votes]: This assertion is shown in the appendix on page $10$ of K. Conrad's notes about the character group of $\mathbb{Q}$. His proof for $\mathbb{R}$ is elementary, not using integrals (which is done in Conway's book on functional analysis). For more references see also this MSE question.<|endoftext|>
-TITLE: Bézier curve approximation of a circular arc
-QUESTION [6 upvotes]: I would like to know how I can get the coordinates of four control points of a Bézier curve that represents the best approximation of a circular arc, knowing the coordinates of three points of the corresponding circle. I would like at least to know the solution to this problem in the case where two of the known circle points are the two ends of a diameter of the circle.
-
-REPLY [2 votes]: You can use the following way to find the control points of a cubic Bézier curve approximating a circular arc with end points $P_0$, $P_1$, radius $R$ and angular span $A$:
-Denoting the control points as $Q_0$, $Q_1$, $Q_2$ and $Q_3$, then
-$Q_0=P_0$,
-$Q_3=P_1$,
-$Q_1=P_0 + LT_0$
-$Q_2=P_1 - LT_1$
-where $T_0$ and $T_1$ are the unit tangent vectors of the circular arc at $P_0$ and $P_1$ and $L = \frac{4R}{3}\tan(\frac{A}{4})$.
-Please note that the above formula will give you a pretty good approximation for the circular arc. But it is not "the best" approximation. We can achieve an even better approximation with a more complicated formula for the $L$ value.
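The recipe above is easy to try out. Below is a minimal sketch (my own code, not part of the answer), assuming a circle centered at the origin so that the endpoints and unit tangents come straight from the angles; for a quarter arc the resulting curve even passes exactly through the midpoint of the arc:

```python
import math

def arc_to_bezier(R, theta0, theta1):
    """Control points Q0..Q3 of a cubic Bezier approximating the arc of a
    circle of radius R centered at the origin, from angle theta0 to theta1."""
    A = theta1 - theta0                       # angular span
    L = (4.0 * R / 3.0) * math.tan(A / 4.0)   # handle length L = (4R/3) tan(A/4)
    P0 = (R * math.cos(theta0), R * math.sin(theta0))
    P1 = (R * math.cos(theta1), R * math.sin(theta1))
    T0 = (-math.sin(theta0), math.cos(theta0))   # unit tangent at P0
    T1 = (-math.sin(theta1), math.cos(theta1))   # unit tangent at P1
    return [P0,
            (P0[0] + L * T0[0], P0[1] + L * T0[1]),   # Q1 = P0 + L*T0
            (P1[0] - L * T1[0], P1[1] - L * T1[1]),   # Q2 = P1 - L*T1
            P1]

def bezier_point(Q, t):
    """Evaluate the cubic Bezier with control points Q at parameter t."""
    s = 1.0 - t
    return tuple(s**3 * q0 + 3*s*s*t * q1 + 3*s*t*t * q2 + t**3 * q3
                 for q0, q1, q2, q3 in zip(*Q))

Q = arc_to_bezier(1.0, 0.0, math.pi / 2)   # quarter circle of radius 1
x, y = bezier_point(Q, 0.5)
print(math.hypot(x, y))  # ~1.0: the curve midpoint lies on the circle
```

The maximal radial error occurs between the endpoints and the midpoint and stays tiny for a quarter arc; for larger spans one usually splits the arc into several pieces first.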
But for practical purposes, the above formula is typically good enough.<|endoftext|>
-TITLE: Express the permutation $\sigma = \left({}^1_5\,{}^2_8 \,{}^3_3\,{}^4_6\,{}^5_7\,{}^6_4\,{}^7_1\,{}^8_2\right)$ as a product of transpositions
-QUESTION [5 upvotes]: $\newcommand{\lcm}{\operatorname{lcm}}$
-I was hoping to get some feedback.
-
-Consider the permutation
- $$\sigma = \begin{pmatrix}
-1&2&3&4&5&6&7&8\\
-5&8&3&6&7&4&1&2
-\end{pmatrix}.$$
- (a) Describe the orbits of $\sigma$.
- (b) Express $\sigma$ as a product of disjoint cycles, and then as a product of transpositions.
- (c) What is the order of $\sigma$? Explain.
-
-
-(a) The orbits of $\sigma$ are $\{1,5,7\}, \{2,8\},\{3\},\{4,6\}$.
-(b) As a product of disjoint cycles: $\sigma = (1\;5\;7)(2\;8)(3)(4\;6)$
-As a product of transpositions: $\sigma = (1\;7)(1\;5)(2\;8)\boxed{(3)\,}(4\,6)$
-I am not sure if I should include $(3)$ since transpositions are cycles of length $2$.
-(c) I don't know what the order of a permutation is. My first guess was that it is the number of orbits of $\sigma$. So I would say $4$. But then I saw a post that used the $\lcm$. I infer that it is the $\lcm$ of the lengths of the orbits of $\sigma$; hence $\lcm(2,3) = 6$. Is this correct? If so, why is it the $\lcm$?
-
-REPLY [5 votes]: (a) -- Correct.
-(b) -- No, you don't need to include $(3)$.
-(c) -- The order of a permutation $\sigma$ is the smallest positive integer $\ell$ so that $\sigma^\ell=1$. Since the disjoint cycles of $\sigma$ commute with each other, $\ell$ is divisible by the order of each cycle. Therefore, $\ell$ is the l.c.m. of those orders, which are exactly the lengths of the cycles.<|endoftext|>
-TITLE: Explaining Newton Polygon for proving irreducibility of polynomial in $\mathbb{Z}[x]$ in elementary way
-QUESTION [11 upvotes]: In this question of mine, Proving irreducibility over $\mathbb{Z}$, I was recommended to read about the Newton Polygon. Also this appears to be an interesting topic.
Also I am currently studying irreducibility of polynomials, and I saw somewhere that the Newton Polygon provides motivation for Eisenstein's irreducibility criterion. But I could not understand the content in the links. I am in high school, my scope is up to Olympiad-level math, and I don't know about abstract algebra and $p$-adics.
-So, can anyone explain what Newton Polygons are, or some elementary special case of them? Any links would also be great.
-
-REPLY [24 votes]: I'll try to be as elementary as possible. Take $P\in \mathbb{Q}[X]$ a polynomial of degree $n\in \mathbb{N}^*$. We are interested in its decomposition in irreducible factors in $\mathbb{Q}[X]$.
-Valuation
-Let $p$ be a fixed prime number. I will write $v(x)$ for the $p$-adic valuation of $x\in \mathbb{Q}$, meaning that it is the power of $p$ that appears in its prime factorisation ; note that $v(x)\geqslant 0$ if $x\in \mathbb{Z}$, but in general $v(x)$ may be negative, for instance $v(1/p^2)=-2$ (also, $v(0)=\infty$, more or less by convention).
-The Newton polygon
-Now take the points in the real plane defined by $A_i = (i,v(a_i))$ for $i=0,\dots,n$, where $P=\sum a_k X^k$. If $a_i=0$ for some $i$, then $A_i$ does not appear on the plane (it is consistent with what follows to imagine that it exists as a point at infinity, but it doesn't matter). The Newton polygon of $P$ (with respect to $p$, implicitly) is the lower convex hull of these points. What does it mean ? You start with $A_0$, then take the segment from $A_0$ to some $A_i$ that has the lowest possible slope. Then you take the segment from this $A_i$ to some $A_j$ (with $j>i$) that has the lowest possible slope. You keep going until you arrive at $A_n$. You can also think of it as taking a string from below the points and pulling it upward, then seeing what shape it takes when you can't pull it anymore.
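The string-pulling construction is mechanical enough to code. Here is a small sketch (my own, not part of the answer) that computes the lower convex hull of the points $A_i=(i,v(a_i))$ for a polynomial with integer coefficients; the example polynomial $X^3+2X^2+2X+4$ and the prime $p=2$ are just an illustration:

```python
from fractions import Fraction

def vp(x, p):
    """p-adic valuation of a nonzero integer x."""
    v = 0
    while x % p == 0:
        x //= p
        v += 1
    return v

def newton_polygon(coeffs, p):
    """Vertices of the Newton polygon of sum a_i X^i (coeffs lists a_0..a_n),
    i.e. the lower convex hull of the points (i, v_p(a_i));
    zero coefficients are skipped."""
    pts = [(i, vp(a, p)) for i, a in enumerate(coeffs) if a != 0]
    hull = [pts[0]]
    for q in pts[1:]:
        # pop the last vertex while it lies on or above the new lower edge
        while len(hull) >= 2 and \
              Fraction(hull[-1][1] - hull[-2][1], hull[-1][0] - hull[-2][0]) >= \
              Fraction(q[1] - hull[-1][1], q[0] - hull[-1][0]):
            hull.pop()
        hull.append(q)
    return hull

# P = X^3 + 2X^2 + 2X + 4 at p = 2
print(newton_polygon([4, 2, 2, 1], 2))  # [(0, 2), (1, 1), (3, 0)]
```

Reading off consecutive vertices, this polygon has a segment of slope $-1$ and length $1$, followed by one of slope $-1/2$ and length $2$.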
-Now what you get is not a real polygon, but rather a broken line, made of segments having increasing slopes, starting from $A_0$ and stopping at $A_n$. It is very important to note that drawing it is completely trivial, you just have to find the exponent of $p$ in the coefficients.
-Lots of authors demand that $P(0)=1$, which is always possible by dividing $P$ by an appropriate power of $X$, and then by $P(0)$. It has the effect of translating the Newton polygon so that it starts from the point $A_0 = (0,0)$. It is both harmless and mostly useless except for some calculations.
-Call $P$ pure if its polygon is just one straight line (no slope changes). If $P(0)=1$, it is equivalent to the fact that $v(a_i)/i\geqslant v(a_n)/n$ for all $i>0$.
-The Newton polygon of a product
-The point is the following : if $Q_1,\dots,Q_r\in \mathbb{Q}[X]$ are pure, with increasing slopes, then the Newton polygon of $P=\prod Q_i$ is the concatenation of the Newton polygons of the $Q_i$, meaning that if $Q_i$ has degree $d_i$ and its Newton polygon has slope $s_i$, then the Newton polygon of $P$ is made of a segment of length $d_1$ and slope $s_1$, followed by a segment of length $d_2$ and slope $s_2$ etc.
-Note that if $s_i = s_{i+1}$ then the segments "fuse" in the polygon of $P$ : the product of two pure polynomials with the same slope is pure, with still the same slope.
-The proof of the above statement is an interesting but painless exercise in elementary arithmetic. I strongly encourage you to try to do it, starting with the case of two polynomials of small degree to see what happens.
-Consequences
-So you can hope to use the Newton polygon of $P$ to detect factorizations. There are two drawbacks:
-
-You have no converse : just because there are indeed slope changes in the polygon of $P$ (i.e. it is not pure) does not mean that there is a corresponding decomposition as a product of pure factors.
-If $P$ has a decomposition in pure polynomials each having the same slope, it will not be detected in the Newton polygon (since they will "fuse").
-
-First obstruction
-The first point actually has a very nice solution : you do have a decomposition in pure factors, but not in $\mathbb{Q}[X]$. You have to go find it in $\mathbb{Q}_p[X]$, with $\mathbb{Q}_p$ the field of $p$-adic numbers, which is the completion of $\mathbb{Q}$ for the $p$-adic distance, just as $\mathbb{R}$ is its completion for the ordinary distance. I will assume that $p$-adic numbers are beyond the scope of your question, but this is where stuff really happens (actually it really happens in $\mathbb{C}_p$, but this is an even more complicated field : in this field, $P$ is split, and the Newton polygon gives exactly the valuations of the roots ; this is what the Newton polygon really is about).
-Second obstruction
-Now for the second point, it is almost hopeless : whatever the field you work in, the product of two pure polynomials is pure, you can't change that fact. But you can make some ad hoc observation : suppose $n$ is coprime with $v(a_n)$.
-Assume that $P=QR$ with $Q,R\in\mathbb{Q}[X]$ both pure of slope $s$, and also that $P(0)=Q(0)=R(0)=1$.
-Then if the degree of $Q$ is $d>1$ and if $P=\sum a_k X^k$ and $Q = \sum b_k X^k$, you get $s = v(a_n)/n = v(b_d)/d$, so $dv(a_n)$ is divisible by $n$, and since $n$ is coprime with $v(a_n)$ you get that $n$ divides $d$, so $n=d$ and $R=1$.
-So in this case you can deduce that $P$ is irreducible in $\mathbb{Q}[X]$. This is where you get back to Eisenstein's criterion as a special case : it states that $P$ is irreducible if $v(a_0)=1$, $v(a_i)>0$ for $0<i<n$, and $v(a_n)=0$.<|endoftext|>
-TITLE: Are Sobolev spaces $W^{k,1}(\mathbb R^d)$ and $H^{k,1}(\mathbb R^d)$ the same?
-QUESTION [14 upvotes]: We consider the following spaces $H^{k,p}(\mathbb R^d)$, $k \geq 1$ an integer, $p \geq 1$ (Bessel potential spaces):
-$$
- H^{k,p}(\mathbb R^d) = \bigl\{ f \in L^p(\mathbb R^d) \colon \mathcal F^{-1}[(1+|\xi|^2)^{\frac k 2} \mathcal F f] \in L^p(\mathbb R^d) \bigr\},
-$$
-where $\mathcal F$ is the Fourier transform. We also consider the classical Sobolev spaces
-$$
- W^{k,p}(\mathbb R^d) = \bigl\{ f \in L^p(\mathbb R^d) \colon D^\alpha f \in L^p(\mathbb R^d), \; |\alpha| \leq k \bigr\},
-$$
-where $D^\alpha$ is the derivative of multi-order $\alpha$.
-It is written in Wikipedia that for $p>1$ we have $H^{k,p}(\mathbb R^d) = W^{k,p}(\mathbb R^d)$. My question is what is the relation between the spaces $H^{k,1}(\mathbb R^d)$ and $W^{k,1}(\mathbb R^d)$? Are they still equal?
-For example, if $f \in W^{k,1}(\mathbb R^d)$, then all derivatives of $f$ up to order $k$ are in $L^1(\mathbb R^d)$ and hence $(1+|\xi|^2)^{\frac k 2}\mathcal F f \in L^\infty(\mathbb R^d)$. But it is not clear that the latter function is the Fourier transform of some function from $L^1(\mathbb R^d)$ (if it is the case, we have $W^{k,1}(\mathbb R^d) \subseteq H^{k,1}(\mathbb R^d)$).
-On the other hand, if $f \in H^{k,1}(\mathbb R^d)$ then $f \in L^1(\mathbb R^d)$ and $(1+|\xi|^2)^{\frac k 2}\mathcal F f \in L^\infty(\mathbb R^d)$. I am not sure that it implies that $f$ has distributional derivatives in $L^1(\mathbb R^d)$ up to order $k$ (it would imply $H^{k,1}(\mathbb R^d) \subseteq W^{k,1}(\mathbb R^d)$).
-
-REPLY [7 votes]: The spaces $W^{k,1}(\mathbb{R}^{d})$ and $H^{k,1}(\mathbb{R}^{d})$ coincide for $k$ even and $d=1$, while $W^{k,1}(\mathbb{R}^{d})\subset H^{k,1}(\mathbb{R}^{d})$ for $k$ even and $d>1$. For $k$ odd there is no relation between $W^{k,1}(\mathbb{R}^{d})$ and $H^{k,1}(\mathbb{R}^{d})$. It's a guided exercise in Stein's book on singular integrals (see Exercise 6.6 on page 160).
-The notation is different (he uses $L^{k,p}(\mathbb{R}^{d})=W^{k,p}(\mathbb{R}^{d})$ and $\mathcal{L}^{k,p}(\mathbb{R}^{d})=H^{k,p}(\mathbb{R}^{d})$).<|endoftext|>
-TITLE: Finding the integral closure of $\mathbb{Z}$ in $\mathbb{Q}[i]$
-QUESTION [5 upvotes]: I've just learned what the integral closure is.
-
-I would like to find the integral closure of $\mathbb{Z}$ in $\mathbb{Q}[i]$.
-
-Let $\mathcal{R}$ be the integral closure of $\mathbb{Z}$ in $\mathbb{Q}[i]$. To determine $\mathcal{R}$ I started by noting that
-$$\mathbb{Z}[i]\subset \mathcal{R}.$$
-Indeed if $\alpha = x+iy\in \mathbb{Z}[i]$, $\alpha$ is a root of the monic polynomial
-$$f(X)=X^2-2xX+x^2+y^2\in\mathbb{Z}[X].$$
-And we know that since $\mathbb{Z}$ and $\mathbb{Q}[i]$ are rings, so is $\mathcal{R}$.
-From there I'm not sure: is it true that there exists no ring $A$ such that
-$$\mathbb{Z}[i] \subsetneq A \subsetneq \mathbb{Q}[i] \text{ ?}$$
-I've assumed that and shown that $1/2\notin \mathcal{R}$ for example; otherwise there exists a monic polynomial $f\in \mathbb{Z}[X]$ such that
-$$f\left(\frac{1}{2}\right) =\frac{1}{2^n}+a_{n-1}\frac{1}{2^{n-1}}+\cdots+a_1\frac{1}{2}+a_0 =0$$
-So
-$$2^n f\left(\frac{1}{2}\right) =0 \iff -1=2\underbrace{(a_{n-1}+\cdots +2^{n-2}a_1+2^{n-1}a_0)}_{\in \mathbb{Z}}$$
-which is impossible.
-In conclusion $\mathbb{Q}[i]\not\subset\mathcal{R}$, so $\mathcal{R}=\mathbb{Z}[i]$.
-
-Is my proof correct ? Is there another (easier) way to prove that ? And most importantly, does there exist a general method to find the integral closure, please ?
-
-Thank you for your help.
-
-REPLY [2 votes]: $X = a+ib \in \mathbb Q[i]$ is integral iff it is a root of a monic polynomial in $\mathbb Z[X]$ of degree 1 or 2. Calculating $X^2$ we get
-$X^2 -2aX +a^2 + b^2=0$.
-Thus $-2a$ and $a^2 + b^2$ must be integers. So if $a$ is an integer, $b$ is an integer, giving us $\mathbb Z[i]$.
If $a=\frac k 2$ with $k$ an odd integer, we get $k^2/4 + b^2 \in \mathbb Z$ and so $4b^2 + k^2 \equiv 0 \pmod 4$, which is absurd.
-There are lots of methods to find a ring of integers, but as far as I know there is no standard method, unfortunately. For example the ring of integers of $\mathbb Q[\zeta_n]$, with $\zeta_n$ the $n$-th root of unity, is hard to compute (even if it is the expected $\mathbb Z[\zeta_n]$).<|endoftext|>
-TITLE: What is the value of $\frac 1 {0! + 1! + 2!} +\frac 1 {1! + 2! + 3!} + \frac 1 { 2! + 3! + 4!} + …?$
-QUESTION [7 upvotes]: What is the value of $x$?
-$$e = 1 / 0! + 1 / 1! + 1 / 2! + … .$$
-$$1 = 1 / (0! + 1!) + 1 / (1! + 2!) + 1 / (2! + 3!) + … .$$
-$$(x = ) 1 / (0! + 1! + 2!) + 1 / (1! + 2! + 3!) + 1 / (2! + 3! + 4!) + … .$$
-In my calculation by programming,
-$x$ is about 0.40037967700464134050027862710343065978234584790717558212650643072643052259740811195942853169077421.
-If it is possible to find the value of $x$,
-please tell me the value.
-
-REPLY [8 votes]: Note that $$k! + (k+1)! + (k+2)! = k!\,\bigl(1 + (k+1) + (k+1)(k+2)\bigr) = k!(k+2)^2,$$ so that the sum $$S = \sum_{k=0}^\infty \frac{1}{k!+(k+1)!+(k+2)!} = \sum_{k=0}^\infty \frac{1}{(k+2)^2 k!}.$$ This suggests considering the function $$f(z) = z e^z = \sum_{k=0}^\infty \frac{z^{k+1}}{k!}.$$ Taking the integral gives $$g(z) = \int_{t=0}^z f(t) \, dt = (z-1)e^z + 1 = \sum_{k=0}^\infty \frac{z^{k+2}}{(k+2)k!}.$$ Dividing by $z$ and integrating once again (note that $g(t)/t = e^t - \frac{e^t-1}{t}$), we get $$\sum_{k=0}^\infty \frac{z^{k+2}}{(k+2)^2 k!} = \int_{t=0}^z \frac{g(t)}{t} \, dt = e^z - 1 - \int_{t=0}^z \frac{e^t - 1}{t} \, dt,$$ for which the choice $z = 1$ yields $$S = e - 1 - \int_{t=0}^1 \frac{e^t-1}{t} \, dt.$$ This last integral doesn't have an elementary closed form; Mathematica evaluates it as $$\operatorname{Ei}(1) - \gamma,$$ so that $S = e - 1 + \gamma - \operatorname{Ei}(1)$, where $$\operatorname{Ei}(z) = -\int_{t=-z}^\infty \frac{e^{-t}}{t} \, dt.$$<|endoftext|>
-TITLE: $a,b,c$ are real numbers $>0$.
If $a+b+c=1$, show that $(a+\frac{1}{a})^2+(b+\frac{1}{b})^2+(c+\frac{1}{c})^2\ge\frac{100}{3}$
-QUESTION [5 upvotes]: $a,b,c$ are real numbers $>0$. If $a+b+c=1$, show that
-
-$$(a+\frac{1}{a})^2+(b+\frac{1}{b})^2+(c+\frac{1}{c})^2\geq\frac{100}{3}$$
-
-REPLY [4 votes]: Let $f(x)=(x+{1 \over x})^2$ and note that $f'$ is strictly increasing on $(0,\infty)$.
-Let $\phi(x) = f(x_1)+f(x_2)+f(x_3)$ and consider $\min \{ \phi(x) \mid \sum_k x_k = 1,\ x_i \ge 0 \}$. Since $\phi(x)$ is unbounded if any component of $x$ approaches zero, and the simplex is compact, we see that the problem
-has a solution $\hat{x}$ and $\hat{x}_k >0$ for all $k$. Hence we can apply Lagrange multipliers to get some $\lambda $ such that
-$f'(\hat{x}_k) + \lambda = 0$ for all $k$, and hence
-$f'(\hat{x}_1) = f'(\hat{x}_2) =f'(\hat{x}_3) $.
-Since $f'$ is injective, we see that $\hat{x}_1 = \hat{x}_2 =\hat{x}_3 $, and hence $\hat{x}_1 = \hat{x}_2 =\hat{x}_3 = {1 \over 3}$, from which we get
-$\phi(x) \ge \phi(\hat{x}) = 3 f({1 \over3}) = {100 \over 3}$.<|endoftext|>
-TITLE: How to calculate $\int_0^\pi \ln(1+\sin x)\mathrm dx$
-QUESTION [10 upvotes]: How to calculate this integral
-$$\int_0^\pi \ln(1+\sin x)\mathrm dx$$
-I didn't find this question among the previous questions. With the help of WolframAlpha I got the answer $-\pi \ln 2+4\mathbf{G}$, where $\mathbf{G}$ denotes Catalan's Constant.
-
-REPLY [10 votes]: First we rewrite $\log(1+\sin(x))=\log(2)+2\log(\sin(x/2+\pi/4))$. This yields, after the shift $x/2+\pi/4\rightarrow y$, $dx\rightarrow 2 dy$,
-$$
-I=\pi\log(2)+4\underbrace{\int_{\pi/4}^{3\pi/4}\log(\sin(y))dy}_{J}
-$$
-Now we might employ the Fourier series of $\log(\sin(x))$ to calculate $J$:
-$$
-J=-\log(2)\int_{\pi/4}^{3\pi/4}dy-\sum_{k=1}^{\infty}\frac{1}{k}\int_{\pi/4}^{3\pi/4}\cos(2yk)dy=\\
--\frac{\pi}{2}\log(2)+\sum_{m=0}^{\infty}\frac{(-1)^m}{(2m+1)^2}
-$$
-where we used $\int_{\pi/4}^{3\pi/4}\cos(2yk)dy=\frac{\sin(\frac{3\pi}{2} k)-\sin(\frac{\pi}{2} k)}{2k}$, which vanishes for even $k$ and equals $\frac{(-1)^{m+1}}{k}$ for odd $k=2m+1$, in the second line.
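-As a quick numerical sanity check of the value we are heading for, $-\pi\log 2+4G$ (the answer quoted in the question), here is a short Python sketch. Simpson's rule and the hard-coded decimal value of Catalan's constant are my additions, not part of the original argument.

```python
import math

def simpson(f, a, b, n=2000):
    """Composite Simpson's rule with n (even) subintervals."""
    h = (b - a) / n
    s = f(a) + f(b)
    s += 4 * sum(f(a + (2 * i - 1) * h) for i in range(1, n // 2 + 1))
    s += 2 * sum(f(a + 2 * i * h) for i in range(1, n // 2))
    return s * h / 3

G = 0.9159655941772190  # Catalan's constant, standard decimal expansion
numeric = simpson(lambda x: math.log(1 + math.sin(x)), 0.0, math.pi)
closed_form = -math.pi * math.log(2) + 4 * G
print(numeric, closed_form)  # both print ≈ 1.486276
```

-The integrand is smooth on $[0,\pi]$, so the quadrature agrees with the closed form to many decimal places.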
-Employing the series representation of the Catalan constant we find
-$$
-J=-\frac{\pi}{2}\log(2)+G
-$$
-and therefore
-
-$$
-I=\pi \log(2)+4J=-\pi \log(2)+4G \quad (*)
-$$
-
-
-Just for fun, let's see what contour integration can do.
-We again use $J$, rewriting it with the help of the identity $\log(\sin(x))=\log(i/2)-ix+\log(1-e^{2 i x})$. The integral over the first two terms is trivial, leaving us with
-$$
-J=-\frac{\pi}{2}\log(2)+\underbrace{\int_{\pi/4}^{3\pi/4}\log(1-e^{2 i x})\,dx}_{K}
-$$
-to evaluate $K$ we integrate the complex valued function
-$$
-f(z)=\log(1-e^{2 i z})
-$$
-over a rectangle $C$ in the complex plane with vertices $(\pi/4,3\pi/4,3\pi/4+i R,\pi/4+i R)$, where in the end we want to take the limit $R\rightarrow +\infty$. As one can easily check, the contribution from the top of the rectangle vanishes (the integrand vanishes as $\mathcal{O}(e^{-2R})$ for big $R$). Furthermore the integrand is holomorphic inside the contour of integration and therefore we can express the integral of interest in terms of the vertical pieces of the contour
-$$
-\int_C f(z)dz=K+i\int_{0}^{\infty}\log(1+ie^{-2y})dy-i\int_{0}^{\infty}\log(1-ie^{-2y})dy=0
-$$
-this can be simplified to
-$$
-K=2\int_0^{\infty}\arctan(e^{-2y})dy=\int_0^{\infty}\arctan(e^{-y})dy
-$$
-using the series representation of arctan, we may easily conclude that this integral is equal to $G$ and therefore
-
-$$
-K=G
-$$
-
-from which it follows that
-$$
-J=-\frac{\pi}{2}\log(2)+K=-\frac{\pi}{2}\log(2)+G$$
-from which (*) follows immediately<|endoftext|>
-TITLE: Conormal bundle of Cartier divisors
-QUESTION [6 upvotes]: Given any closed immersion of schemes $i:Z\to X$ defined by a sheaf of ideals $\mathcal{I}$ on $X$, apparently the conormal bundle is $\mathcal{C}_{Z/X}:= {\mathcal{I}}/{\mathcal{I}^2}$ "seen as a sheaf on $Z$". My guess is that it means that in fact $\mathcal{C}_{Z/X}=i^*({\mathcal{I}}/{\mathcal{I}^2})$, but I am not entirely sure (so that's my first question).
-Also, an effective Cartier divisor on a scheme $X$ is a closed subscheme given by a sheaf of ideals $\mathcal{I}$ such that on an affine cover the ideals corresponding to $\mathcal{I}$ are all principal and generated by a non-zero-divisor. With this definition the sheaf $\mathcal{O}(-D)$ is nothing but $\mathcal{I}$ and $\mathcal{O}(D)$ is its "dual" sheaf $\mathcal{Hom}_X(\mathcal{I},\mathcal{O}_X)$ (notice that this way the bidual is not necessarily isomorphic to $\mathcal{I}$). Now I would like to know why $\mathcal{O}(-D)|_D=\mathcal{C}_{Z/X}$ (which most likely means that there exists a canonical isomorphism).
-I tried the obvious thing but I am very confused: locally (say in $\text{Spec}A$) $\mathcal{I}$ comes from an ideal $I=(f)$ with $f$ a non-zero-divisor. Then $\mathcal{O}(-D)|_D$ becomes $I\otimes_A A/I\cong I/I^2$. On the other hand, $\mathcal{I}/\mathcal{I}^2$ is locally $I/I^2$ and the pullback is then $I/I^2\otimes_A A/I\cong I\otimes_A A/I\otimes_A A/I\cong I\otimes A/I\cong I/I^2$. This just can't be right because then it would be true for any closed subscheme, and there are surely many errors in definitions/tensor product. Could you help me correct that (or propose your own solution if you prefer)?
-
-REPLY [6 votes]: You are correct that it is extremely common to abuse notation and write $\mathscr{I}/\mathscr{I}^2$ for the conormal sheaf of a (closed) immersion when what one really intends (and should write to be technically correct) is $i^*(\mathscr{I}/\mathscr{I}^2)$.
The reason this abuse takes place, as you probably know, is that for $i:Z\to X$ a closed immersion, $i_*$ is fully faithful with essential image the $\mathscr{O}_X$-modules annihilated by $\mathscr{I}$ (such modules are necessarily supported in $Z$) and for such an $\mathscr{O}_X$-module $\mathscr{F}$, one indeed has that the unit $\eta_\mathscr{F}:\mathscr{F}\to i_*(i^*\mathscr{F})$ of the adjunction $i^*\dashv i_*$ is an isomorphism, as can be checked on stalks, reducing to the fact that if $M$ is an $A$-module for some ring $A$, killed by an ideal $I\subseteq A$, then the natural map $M\to M\otimes_A(A/I)$ is an isomorphism. -Now let $i:D\to X$ be an effective Cartier divisor, meaning a closed subscheme of $X$ whose ideal sheaf $\mathscr{I}$ is invertible. One denotes $\mathscr{I}$ by $\mathscr{O}_X(-D)$ and defines the ``associated invertible module" to be $\mathscr{O}_X(D):=\mathscr{I}^{-1}:=\mathscr{I}^\vee=\mathscr{H}om_{\mathscr{O}_X}(\mathscr{I},\mathscr{O}_X)$. I'm not sure what you mean when you say "the bidual is not necessarily isomorphic to $\mathscr{I}$." For any finite locally free $\mathscr{O}_X$-module $\mathscr{F}$, the natural evaluation map $\mathscr{F}\to(\mathscr{F}^\vee)^\vee$ is an isomorphism, which can be checked on stalks (for source modules of finite presentation, taking stalks is compatible with formation of $\mathrm{Hom}$ sheaves). -Anyway, for your question: why is it the case that $\mathscr{O}_X(-D)\vert_D\simeq\mathscr{C}_{Z/X}$. You absolutely have the right idea. The LHS is $i^*(\mathscr{I})$, while the RHS is $i^*(\mathscr{I}/\mathscr{I}^2)$. The canonical map $\mathscr{I}\to\mathscr{I}/\mathscr{I}^2$ pulls back to $i^*(\mathscr{I})\to\mathscr{C}_{Z/X}$. Because $i_*$ is fully faithful, that this map is an isomorphism can be verified after applying $i_*$. The stalk of the resulting map at a point $x\in X-Z$ is the zero map between the zero modules, so that's fine. 
At a point $z\in Z$, the stalk is $\mathscr{I}_z\otimes_{\mathscr{O}_{X,z}}\mathscr{O}_{Z,z}\to\mathscr{I}_z/\mathscr{I}_z^2\otimes_{\mathscr{O}_{X,z}}\mathscr{O}_{Z,z}$. But $\mathscr{O}_{Z,z}=\mathscr{O}_{X,z}/\mathscr{I}_z$. Similarly to the first paragraph, in general, for a ring $A$ and an $A$-module $M$, and an ideal $I$, the natural map $M\otimes_A(A/I)\to (M/IM)\otimes_A(A/I)$ is an isomorphism. Indeed, the source has a canonical isomorphism to $M/IM$, and the same fact allows one to identify the target with $(M/IM)/I(M/IM)=M/IM$ since $M/IM$ is killed by $I$. Under these identifications, your map is just the identity, so you win. -Notice that this doesn't actually have anything to do with Cartier divisors (the third paragraph anyway). It works for an arbitrary closed immersion, and indeed, the principle is summarized (admittedly with a tiny deficit of precision) by the phrase "tensoring a module $M$ killed by $I$ with $A/I$ does nothing."<|endoftext|> -TITLE: Uniform continuity, uniform convergence, and translation -QUESTION [40 upvotes]: Let $f:\mathbb R \to \mathbb R$ be a continuous function. Define $f_n:\mathbb R \to \mathbb R$ by -$$ f_n(x) := f(x+1/n). $$ -Suppose that $(f_n)_{n=1}^\infty$ converges uniformly to $f$. Does it follow that $f$ is uniformly continuous? -Note: the answer is clearly no if we don't assume that $f$ is continuous. I suspect there is a counterexample, showing that the answer is no even if $f$ is continuous. -Edit: -The following observation might help. 
-There exists a continuous, non-uniformly continuous function $f:\mathbb R \to \mathbb R$ such that $(f_n)_{n=1}^\infty$ converges uniformly to $f$, if and only if there exists a continuous, non-uniformly continuous function $g:\mathbb N \times \mathbb R \to \mathbb R$, such that $(g_n)_{n=1}^\infty$ converges uniformly to $g$, where $g_n(k,x) = g(k,x+1/n)$ and the metric on $\mathbb N \times \mathbb R$ comes from viewing it as a subspace of $\mathbb R \times \mathbb R$ (with the Euclidean metric, say).
-Proof:
-Given the function $g$, there is some $\epsilon>0$ which witnesses non-uniform continuity.
-That is, for each $n$, there exists $m \in \mathbb N$ and $x,y \in \mathbb R$ such that $|x-y|<1/n$ and $$|g(m,x)-g(m,y)|\geq\epsilon.$$
-By moving things around, we can assume that for each $k$, there exists $y \in (0,1/k)$ such that $$ |g(k,0)-g(k,y)| \geq \epsilon. $$
-Next, we can also assume that $g(k,x)=0$ for all $|x|>2$.
-This is achieved by multiplying $g$ by the function $h$ which is $1$ on $\mathbb N \times [-1,1]$, $0$ on $\mathbb N \times (\mathbb R \setminus (-2,2))$, and linear elsewhere.
-That $(gh)_n \to gh$ uniformly can be proven by the same argument that shows the product of uniformly continuous functions is uniformly continuous.
-Now, we just define $f$ piecewise, by
-$$f(6k+x)=g(k,x)$$
-for $k \in \mathbb N$ and $x \in [-3,3]$, and
-$$f(x) = 0$$
-for $x < -3$.
-
-REPLY [13 votes]: The given conditions do not imply uniform continuity.
-Let us construct our counterexample by first creating a "weight" function on the rational numbers defined as follows:
-$$w(x)=\min\left\{n:x=a_0+\frac{1}{a_1}+\frac{1}{a_2}+\ldots+\frac{1}{a_n},\,a_i\in\mathbb Z\right\}$$
-That is, $w(x)$ is the least number $n$ such that $x$ is an integer plus the reciprocals of $n$ integers. Note that $w$ is $0$ on the integers themselves.
Now, define a family of functions $d_{\alpha,\beta}:\mathbb R\rightarrow\mathbb R$ for $\alpha,\beta>0$ as follows:
-$$d_{\alpha,\beta}(x)=\min\{\alpha|x-q|+\beta w(q):q\in \mathbb Q\}.$$
-One can see that the given set actually has a minimum since the set $Q_n=\{q\in \mathbb Q:w(q)\leq n\}$ is closed, so there is a well-defined minimum distance from $x$ to an element of $Q_n$, and for large $n$ the $\beta w(q)$ term will be too large to be a minimum. We may also note that this function satisfies the triangle inequality:
-$$d_{\alpha,\beta}(x+y)\leq d_{\alpha,\beta}(x)+d_{\alpha,\beta}(y).$$
-This may be proved by first noting that $w$ satisfies the triangle inequality, then doing some algebra.
-We will critically rely on two consequences of the triangle inequality. Firstly, since $d_{\alpha,\beta}(x)\leq \alpha |x|$, we get that $d_{\alpha,\beta}$ is continuous. We can also use the fact that $d_{\alpha,\beta}(1/n)=\min(\beta,\alpha/n)$ as a bound for the difference $d_{\alpha,\beta}(x)-d_{\alpha,\beta}(x+1/n)$, which is important for our application.
-Now, let us choose a sequence $s_i$ of irrational numbers converging to $0$ and try to set our parameters to make $d_{\alpha,\beta}(s_i)$ large, while decreasing $\beta$. Thus, we get that differences $d_{\alpha,\beta}(x)-d_{\alpha,\beta}(x+1/n)$ will be small, but the function will obtain large values near zero. The big picture is that we will stitch together lots of functions like this, causing a big problem for uniform continuity.
-Specifically, let $\varepsilon_i$ be the distance from $s_i$ to any element of $Q_i$. That is, $\varepsilon_i=\min\{|s_i-q|:q\in Q_i\}$. Set $\alpha_i=\frac{1}{\varepsilon_i}$ and $\beta_i=\frac{1}i$. We may note that $d_{\alpha_i,\beta_i}(s_i)\geq 1$ by seeing that if $q\in Q_i$, then $\alpha_i|s_i-q|\geq 1$ by definition of $\alpha_i$, and if $q\not\in Q_i$ then $\beta_i w(q)\geq 1$, thus the expression $\alpha_i|s_i-q|+\beta_i w(q)$ is at least $1$ for all rational $q$.
-Now, we need to come up with a way to get our function to be compactly supported, but still do what we want. Define a window map $g$ by the following relations:
-$$g(x)=\begin{cases}x & \text{if }0\leq x\leq 1 \\ 1 & \text{if } 1\leq x\leq 2 \\ 3-x & \text{if }2\leq x \leq 3 \\ 0 &\text{otherwise}\end{cases}.$$
-This map is a sort of "table" with a flat middle section and two sloping sides. Now, we define
-$$f_i(x)=g(x)\cdot \min(d_{\alpha_i,\beta_i}(x),1).$$
-Let us bound $|f_i(x)-f_i(x+1/n)|$. To do so, note that $\left| d_{\alpha_i,\beta_i}(x)-d_{\alpha_i,\beta_i}(x+1/n)\right|\leq \min(\beta_i,\alpha_i/n)$. Obviously, $\left|g(x)-g(x+1/n)\right|\leq 1/n$. Together, we can use these to get the bound:
-$$\left|f_i(x)-f_i(x+1/n)\right|\leq 1/n + \min(\beta_i,\alpha_i/n).$$
-We also have that $|f_i(1)-f_i(1+s_i)|=1$, which we will use later to contradict uniform continuity.
-Finally, we do the stitching. Define $f$ as follows:
-$$f(x)=\begin{cases}f_i(x-3i) & \text{if }3i \leq x \leq 3i+3,\,i\in \mathbb N \\ 0 &\text{if } x \leq 0 \end{cases}.$$
-Since, in any interval of length at most one, $f$ may be decomposed as a sum of translates of two functions $f_i$ and $f_{i+1}$, we immediately get an inequality
-$$\left|f(x)-f(x+1/n)\right| \leq 2/n + \min(\beta_i,\alpha_i/n) + \min(\beta_{i+1},\alpha_{i+1}/n).$$
-$$\left|f(x)-f(x+1/n)\right| \leq \sup \{2/n + 2\min(\beta_i,\alpha_i/n):i\in\mathbb N\}$$
-One has that $\lim_{n\rightarrow\infty}\sup\{2/n + 2\min(\beta_i,\alpha_i/n)\}=0$ since, for any $\varepsilon$, there is some $I$ such that for all $i>I$ we have $\beta_i<\varepsilon$. Moreover, if we let $N=\max\{\alpha_i/\varepsilon:i\leq I\}$ then we have that $\alpha_i/n<\varepsilon$ for all $i\leq I$ and $n>N$. Thus, the given term is less than $2/n+2\varepsilon$ for all $n>N$, meaning it converges to $0$. This, of course, gives that $f(x+1/n)$ converges uniformly to $f(x)$.
-However, $f$ is not uniformly continuous since $|f(3i+1)-f(3i+1+s_i)|=1$ for all $s_i$, even though $s_i$ goes to zero. Thus, there can be no $\delta$ such that $|x-y|<\delta$ implies $|f(x)-f(y)|<1$. We are therefore done. - -I thought, to help guide intuition, a few pictures might help out. One should note that only the ratio $\alpha/\beta$ actually affects the shape of $d_{\alpha,\beta}$ due to the identity $d_{c\alpha,c\beta}(x)=cd_{\alpha,\beta}(x)$. Here's a couple plots where $\alpha/\beta=12$ in the first picture, $40$ in the second, and $600$ in the third. Each one shows an increase in the number of "tiers" in the function due to bands of rationals where $w(q)=d_{\alpha,\beta}(q)$. - - -The last picture nearly melted my computer to plot. The moral of the story there is that $\alpha_i$ might need to be really big to satisfy what is demanded of it.<|endoftext|> -TITLE: Proof of Glivenko-Cantelli Theorem for non-continuity points -QUESTION [6 upvotes]: I have a question on the proof of the Glivenko-Cantelli Theorem. Consider the i.i.d. random variables $(X_1,\ldots,X_n)$ defined on the probability space $(\Omega, \mathcal{F}, \mathbb{P})$ (they can be discrete or continuous) with cdf $F(t):=\mathbb{P}(X \leq t)$ $\forall t \in \mathbb{R}$. Let $F_n(t):=\frac{1}{n}\sum_{i=1}^n1(X_i\leq t)$ where $1(\cdot)$ is the indicator function taking value $1$ if the condition inside is satisfied and $0$ otherwise. -Theorem: $ \sup_{t \in \mathbb{R}}|F_n(t)-F(t)| \rightarrow_{a.s.}0$ where $\rightarrow_{a.s.}$ denotes almost sure convergence. 
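-Before the proof sketch, here is a small simulation illustrating the statement; this is my own illustrative sketch (not part of the proof), taking $F$ to be the Uniform$(0,1)$ cdf, for which $\sup_t|F_n(t)-F(t)|$ can be computed exactly from the order statistics.

```python
import random

def sup_dist_uniform(sample):
    """sup_t |F_n(t) - F(t)| for F the Uniform(0,1) cdf; since F_n is a step
    function and F(t) = t, the supremum is computed from the sorted sample."""
    xs = sorted(sample)
    n = len(xs)
    return max(max(i / n - x, x - (i - 1) / n) for i, x in enumerate(xs, 1))

rng = random.Random(0)  # fixed seed so the run is reproducible
d_100 = sup_dist_uniform([rng.random() for _ in range(100)])
d_10000 = sup_dist_uniform([rng.random() for _ in range(10_000)])
print(d_100, d_10000)  # the sup distance shrinks as n grows
```

-On each interval between consecutive order statistics $F_n$ is constant while $F(t)=t$ increases, so checking the sample points suffices to get the exact supremum.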
-Proof (sketch)
-(1) Fix $\varepsilon>0$
-(2) Show that we can construct a finite partition of $\mathbb{R}$, $-\infty=t_0<t_1<\cdots<t_K=+\infty$, such that $F(t_j^-)-F(t_{j-1})\leq \varepsilon$ $\forall j$
-(3) Show that $\forall \varepsilon>0$ $\exists n_\varepsilon \in \mathbb{N}$ such that $|F_n(t_j)-F(t_j)|\leq \varepsilon$ $\forall n \geq n_\varepsilon$ $\forall j$
-(4) $\forall t_{j-1}\leq t< t_j$ and $\forall j$
-$$
-F_n(t)-F(t)\leq \lim_{t\rightarrow t_j^-}F_n(t)-\lim_{t\rightarrow t_j^-}F(t)+\varepsilon\overbrace{\underbrace{\leq}_{\text{ wp1, } n\geq n_{\varepsilon}}}^{\star} 2\varepsilon
-$$
-$$
-F_n(t)-F(t)\geq F_n(t_{j-1})-F(t_{j-1})-\varepsilon\underbrace{\geq}_{\text{ wp1, } n\geq n_{\varepsilon}} -2\varepsilon
-$$
-(5) Hence, $\forall \varepsilon>0$, $\forall t$, $|F_n(t)-F(t)|\leq 2\varepsilon$ $\forall n \geq n_\varepsilon$ wp$1$. Done!
-Question: How can I do step $(\star)$ when $t_j$ is not a continuity point of $F(\cdot)$?
-
-REPLY [4 votes]: Clarification of definitions: define $F(t^- )=P(X\lt t)$ and $F_n (t^- )=\frac{1}{n}\sum^n_{i=1} 1(X_i\lt t)$. This is consistent with the source I have linked to below. Given these definitions it follows that $F_n ({t_j}^- )=\lim_{t\rightarrow t_j^{-}} F_n (t)$, where the limit is taken as $t_j$ is approached from the left, since $X_i\lt t_j$ iff $\exists t\lt t_j$ such that $X_i\leq t$, i.e. the sets $(X_i\leq t)\uparrow (X_i\lt t_j)$ as $t\uparrow t_j$. Also, $F({t_j}^- )=\lim_{t\rightarrow t_j^{-}} F(t)$. This is true because, once again, the sets $(X\leq t)\uparrow (X\lt t_j)$ as $t\uparrow t_j$, and you can then use continuity of the probability measure.
-First, a comment about step (2) where you construct a partition of $\mathbb{R}$. You need to allow for the possibility that sometimes we might have $t_j=t_{j-1}$, e.g. for discrete distributions with point masses.
-Next, in step (1) you fixed $\epsilon$ but in (3) you allow $\epsilon$ to vary. It should be possible to rewrite (3) so that this confusion can be avoided.
-Now, on to the question about step $(\star)$.
To see why a step of this kind is possible, it helps to look more closely at how step (3) was achieved. Step (3) is possible because the Strong Law of Large Numbers can be applied to each $t_j$ individually, and the claim in step (3) then follows because there are only a finite number of $t_j$. More precisely, we apply the Strong Law to the sequence of random variables $1(X_n\le t_j)$ to obtain $F_n(t_j)\to F(t_j)$ a.s. However, we can also apply the Strong Law to the random variables $1(X_n\lt t_j)$, which leads to $F_n({t_j}^-)\to F({t_j}^-)$ a.s. So the conclusion of step (3) can be strengthened by adding additional terms of the form $|F_n({t_j}^-)-F({t_j}^-)|$ to the 'sup'. If this is done then the proof in step (4) should be straightforward. -There is a nice presentation of the proof in this handout.<|endoftext|> -TITLE: $n^2 + (n+1)^2 = m^3$ has no solution in the positive integers -QUESTION [9 upvotes]: The problem from Burton: show that the equation $n^2 + (n+1)^2 = m^3$ has no solution in the positive integers. -So far, I can see that gcd($n$,$n+1$)$=1$ and $m \equiv_4 1$ and $m=a^2 + b^2$ for some integers a,b. I'm guessing I need to reach a contradiction. -At this point, I am stuck. Any hints? - -REPLY [7 votes]: The equation is equivalent to $(2(2n+1))^2+4=(2m)^3$. -$a^2+4=b^3$ with $a,b\in\mathbb Z$ is a Mordell's Equation. See this paper (page $6$) for an elementary solution that uses $\Bbb Z[i]$ (the Gaussian integers). The only solutions are $(a,b)=(\pm 2,2), (\pm 11, 5)$. -$2m$ is even, so $2m=2$ and $2(2n+1)=\pm 2$, so the only integer solutions are $(m,n)=(1,-1),(1,0)$.<|endoftext|> -TITLE: Finding prime numbers equal to the sum of squared primes -QUESTION [6 upvotes]: I was doing some work on prime numbers, and I came across this problem: -"For prime numbers $p$ and $q$, determine the greatest prime, $r$ less than $100$ for which $r = p^2 + q^2$." 
-Of course, you can always do it by hand, but I was wondering: are there any faster methods of solving it?
-Thanks for any help.
-
-REPLY [27 votes]: Note that if $p$ and $q$ are odd, then $p^2$ and $q^2$ are also. Then $p^2+q^2$ is even and can't be prime ($2$ can't be written as such a sum). Then necessarily $$r=4+q^2$$
-With $r$ less than $100$ the only possible values for $q$ are the primes up to $10$.<|endoftext|>
-TITLE: Lower bound on the number of faces of a polyhedron of genus g
-QUESTION [5 upvotes]: Is there a lower bound on the number of faces of a polyhedron of topological genus g?
-For example: it seems very reasonable that $g$ < $F$,
-i.e. the genus of a polyhedron is less than the number of faces of the polyhedron, but I can't find a proof.
-To be clear what is meant by polyhedron, let's use the definition from Wikipedia:
-"A polyhedron is a solid in three dimensions with flat polygonal faces, straight edges and sharp corners or vertices."
-The genus can be calculated by $g = \frac{2-\chi}{2}$, where $\chi$ is the Euler characteristic of the polyhedron.
-
-REPLY [2 votes]: Example of a polyhedron with 4096 faces and 4097 holes:
-P. McMullen, C. Schulz, and J.M. Wills. "Polyhedral manifolds in E3 with unusually large genus". Israel Journal of Mathematics, 46 (1983), no. 1-2, pages 127–144.
-https://link.springer.com/article/10.1007%2FBF02760627<|endoftext|>
-TITLE: Why doesn't this operation work?
-QUESTION [15 upvotes]: In my maths class we are learning about indefinite integrals; this is the problem we were working on:
-$$
-\int \frac{1}{2x+1}dx
-$$
-Using u-substitution we obtain:
-$$
-\frac{1}{2}\ln\left | 2x+1 \right | + C
-$$
-But why does it not work to pull out a $\frac{1}{2}$ so that we don't have to do u-substitution:
-$$
-\frac{1}{2}\int \frac{1}{x+\frac{1}{2}}dx
-$$
-This yields a completely different result
-$$
-\frac{1}{2}\ln \left |x+\frac{1}{2}\right | + C_1 \neq \frac{1}{2}\ln\left | 2x+1 \right | + C_2
-$$
-Pulling out the $\frac{1}{2}$ seems like a completely valid move, so why does it get a completely different result?
-
-REPLY [26 votes]: They are the same because
-$$\ln|2x+1|=\ln|2(x+1/2)|=\ln(2)+\ln|x+1/2|$$
-and the constant $\ln(2)$ goes with the constant $C$.<|endoftext|>
-TITLE: Leinster's Category theory model answers
-QUESTION [9 upvotes]: Is there a source of model answers for the examples in Leinster's Basic Category Theory?
-The book has an excellent set of questions but sadly no apparent answers are provided, despite the book being aimed at those with no previous experience, learning outside of lectures. As a graduate moving into mathematics from physics, the ability to see the rigorous formulation of the answers would be a godsend (never mind those that leave me stumped).
-
-REPLY [4 votes]: If anyone is interested, I also wrote down solutions to all exercises in the book (I was not aware of the other set of solutions above).
-You can find them here: https://positron0802.wordpress.com/basic-category-theory-leinster/<|endoftext|>
-TITLE: $\lim_{x \to +\infty}\frac{x+\sqrt{x}}{x-\sqrt{x}}$
-QUESTION [6 upvotes]: Calculate:
-$$\lim_{x \to +\infty}\frac{x+\sqrt{x}}{x-\sqrt{x}}$$
-I tried the substitution $X=\sqrt{x}$, which gives us: when $x \to +\infty$ we have $X \to +\infty$.
-But I really don't know if it's a good idea.
-
-REPLY [3 votes]: You may have noticed the similarity between the numerator and the denominator.
If you choose to make use of this, you can express the ratio in the following way: $$\frac{x+\sqrt{x}}{x-\sqrt{x}}=\frac{(x-\sqrt{x})+2\sqrt{x}}{x-\sqrt{x}} = 1+\frac{2}{\sqrt{x}-1}\ .$$
-If $x$ is a large positive number, then so will its square root be a large positive number; consequently the original ratio will be close to $1$. If you like, you can formalize this using an $\epsilon$-$\delta$-argument, but I will leave that to you.<|endoftext|>
-TITLE: Is a sinc-distance matrix positive semidefinite?
-QUESTION [6 upvotes]: I've been trying to crack this problem for days but I can't find a way around it. Given a set of $N$ distinct points $X = \{x_1,\dots,x_N\}, x_i \in R^3$, the associated sinc-distance matrix $S \in R^{N\times N}$ is $S(i,j) = \sin(|x_i-x_j|)/(|x_i-x_j|)$. My question is whether this matrix is positive semidefinite.
-I've run numerical tests and the matrix always appears PSD, but I haven't been able to prove it formally.
-Things I've tried:
-
-Try to prove it using minors and so on: no luck, sincs are difficult to work with in order to simplify the form of the determinants.
-Try proving that $\forall y$, $y^TSy\geq 0$, which ends up being $\sum_{i=1}^N \sum_{j=1}^N y_i y_j \sin(|x_i-x_j|)/(|x_i-x_j|) \geq 0$. No luck either. My intuition is that the triangle inequality for distances has to play a role somewhere in it, but I can't find it.
-Try a constructive approach. We know that for the $N=2$ case, the matrix is PD, as the diagonal elements are 1 (sinc of 0) and the off-diagonal entries are $<1$, so the matrix is strictly diagonally dominant (also the determinant and trace are both positive), so $S \succ 0$. Then prove that adding an element to the set doesn't make the resulting matrix indefinite. I've tried using Schur complements for this task but no luck yet.
-Also for any set $X$, we can multiply all elements by a scalar $\alpha$. As $\alpha$ goes to 0, all the elements in the set also go to 0 and $S \to \vec{1} \vec{1}^T$, which is PSD rank 1.
As $\alpha$ goes to infinity, so do the points and so do the distances, so $S \to I$. So changing $\alpha$, the matrix moves in a 1D curve in the group $\mathcal{S}$ of symmetric matrices and the curve starts in the boundary of the cone of PSD matrices and ends in the middle of the cone, so it seems natural that it doesn't leave the cone, but I can't prove it. - -Any help would be greatly appreciated! Looking forward to the discussion! -Thanks! - -REPLY [4 votes]: In the arXiv paper "Schoenberg matrices of radial positive definite functions -and Riesz sequences in $L^2(\mathbb{R}^n)$" by L. Golinskii, M. Malamud and L. Oridoroga, Definition 1.1 states - -Let $n \in \mathbb{N}$. A real-valued and continuous function $f$ on $\mathbb{R}_+ = [0,\infty)$ is called a radial positive definite function, if for an arbitrary finite set $\{x_1,\ldots,x_m\}$, $x_k \in \mathbb{R}^n$, and $\{\xi_1,\ldots,\xi_m\} \in \mathbb{C}^m$ $$\sum_{k,j = 1}^{n}f(|x_k-x_j|)\xi_k\overline{\xi}_j \ge 0.$$ - -Note that this definition is equivalent to saying that the matrix $S \in \mathbb{R}^{m \times m}$ defined by $S(k,j) = f(\|x_k-x_j\|)$ is positive semidefinite for any choice of points $x_1,\ldots,x_m \in \mathbb{R}^n$. -Also, Theorem 1.2 states the following: - -A function $f \in \Phi_n$, $f(0) = 1$ if and only if there exists a probability measure $\nu$ on $\mathbb{R}_+$ such that $$f(r) = \int_{0}^{\infty}\Omega_n(rt)\nu(dt), ~~~~ r \in \mathbb{R}_+$$ where $$\Omega_n(s) := \Gamma(q+1)\left(\dfrac{2}{s}\right)^qJ_q(s) = \sum_{j = 0}^{\infty}\dfrac{\Gamma(q+1)}{j!\Gamma(j+q+1)}\left(-\dfrac{s^2}{4}\right)^j, ~~~~ q = \dfrac{n}{2}-1,$$ $J_q$ is the Bessel function of the first kind and order $q$. - -(In their paper $\Phi_n$ denotes the set of radial positive definite functions on $\mathbb{R}^n$.) -Theorem 1 in "Metric Spaces and Completely Monotone Functions" by I. J. Schoenberg gives the proof of this result. 
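-As a numerical illustration of the theorem, here is a quick check (my own pure-Python sketch, not from either paper): for random points in $\mathbb{R}^3$, an attempted Cholesky factorization of the sinc-distance matrix, with a tiny diagonal shift to absorb rounding error, succeeds, which certifies positive (semi)definiteness up to floating-point tolerance.

```python
import math
import random

def sinc_matrix(points):
    """S(i, j) = sin(r)/r with r = |x_i - x_j|, and S(i, i) = 1."""
    def entry(p, q):
        r = math.dist(p, q)
        return 1.0 if r == 0 else math.sin(r) / r
    return [[entry(p, q) for q in points] for p in points]

def is_psd(S, shift=1e-8):
    """Cholesky factorization of S + shift*I: succeeds iff it is PD."""
    n = len(S)
    L = [[0.0] * n for _ in range(n)]
    for i in range(n):
        for j in range(i + 1):
            s = S[i][j] - sum(L[i][k] * L[j][k] for k in range(j))
            if i == j:
                d = s + shift
                if d <= 0:
                    return False  # a nonpositive pivot: not PD
                L[i][i] = math.sqrt(d)
            else:
                L[i][j] = s / L[j][j]
    return True

rng = random.Random(1)
pts = [tuple(rng.uniform(-2, 2) for _ in range(3)) for _ in range(8)]
print(is_psd(sinc_matrix(pts)))  # True, as the theorem predicts for R^3
```

-The same code run on points in $\mathbb{R}^4$ or higher can report a failure, in line with the counterexamples discussed in the answers.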
-By applying this theorem for $n = 3$ and $\nu = \delta_1$, you get that $f(r) = \Omega_3(r) = \dfrac{\sin r}{r}$ is a radial positive definite function in $\mathbb{R}^3$. In other words, for any choice of $m$ points $x_1,\ldots,x_m \in \mathbb{R}^3$, the matrix $S \in \mathbb{R}^{m \times m}$ defined by $S(k,j) = \dfrac{\sin(\|x_k-x_j\|)}{\|x_k-x_j\|}$ is positive semidefinite.
-
-It should be clear that if you choose $m$ points $x_1,\ldots,x_m$ in $\mathbb{R}$ or $\mathbb{R}^2$, then $S$ is still guaranteed to be positive semidefinite. However, that is not the case if the points are chosen in $\mathbb{R}^n$ where $n \ge 4$. joriki's answer gives a nice counterexample for $n \ge 5$. Here is an ugly counterexample (found by randomly generating points in MATLAB) with $6$ points in $\mathbb{R}^4$:
-$x_1 = (-0.7,0.8,-0.3,-0.4)$
-$x_2 = (0.8,-0.8,-0.9,0.7)$
-$x_3 = (-0.1,0.0,0.1,-0.1)$
-$x_4 = (-0.8,-0.7,-0.1,0.2)$
-$x_5 = (0.2,-0.2,0.5,-0.9)$
-$x_6 = (0.4,0.8,0.8,0.9)$
-You can check that the $6 \times 6$ sinc distance matrix $S$ for these points has an eigenvalue of $\approx -0.0024103$, and thus $S$ is not positive semidefinite.<|endoftext|>
-TITLE: correct typesetting for quantifiers
-QUESTION [6 upvotes]: For years I have been typing and writing quantifiers in a certain way. Now that I am writing my thesis, my adviser is taking issue with some of these things. Since he is my adviser I'm going to do what he says, but I am curious about the general consensus on this.
-As an example, let's say I wanted to write symbolically "There exists an element $a$ of $A$ such that $a$ is positive." My habitual way of doing this would be
-$$\exists a\in A: a>0.$$
-My adviser has 2 problems with this. Firstly he says there should be a space between the $\exists$ and the $a$. Secondly he says I cannot assume people will read the colon as "such that." So he would have me change this to:
-$$\exists\ a\in A\text{ such that }a>0.$$
-Which seems correct to you?
-As for the space after the $\exists$,
-it looks funny to me.
-It also seems significant that $\LaTeX$
-does not automatically put a space after the $\exists$
-and I need to write \exists\ instead of \exists.
-As for the colon, it's been a few years but I used to study logic, and I think in the conventions there my usage is fine. I'm not sure of the grammatical terminology, but there is a sense in which the colon indicates that we are done quantifying things and are now going to indicate the property the quantified things have. Not only have I been writing this way for years, I have been teaching students to write this way.
-I was never quite sure, though, about the colon with a universal quantifier. Like, is it conventional, at least for some people, to write "For all elements $a$ of $A$, $f(a)=c$" as follows?
-$$\forall a\in A: f(a)=c.$$
-A lot of people don't write the universal quantifier at the beginning either. I feel that my more hardcore logic professors would never do this, but a topologist would have no problem writing:
-$$f(a)=c, \forall a\in A.$$
-The predicate logic conventions, if I'm remembering them properly, seem just way more ... logical. But I want my writing to be familiar to my audience, which is something my adviser has the best feel for. I guess once I am more established I will have more freedom in how I write. In the meantime I'd like to hear which of these things look correct or incorrect, and please also mention what area of math you work in, because that seems to matter.
-UPDATE: I appreciate the many insightful comments this question has received, but would someone please post their answer as an answer?
-
-REPLY [4 votes]: First of all, I would avoid logical symbols as much as possible. For instance, your initial example could be phrased as "There exists a positive element $a$ in $A$". Now, if you really need to use quantifiers, my advice would be the following:
-
-Avoid any unnecessary symbols like ":" or "."
-Don't hesitate to add spaces, and possibly parentheses and brackets to improve readability. $\LaTeX$ offers a large panel of possibilities to do so. For instance, $\exists x\ \forall y\ \varphi(x,y)$ looks better than $\exists x \forall y \varphi(x,y)$.
-Put quantifiers in the front, not at the end. Although it is acceptable to write "$f(n) > 0$ holds for every integer $n$", if you really need to use quantifiers, it is preferable to write
-$$
-\forall n \in \mathbb{Z}\quad f(n) > 0
-$$
-or, as suggested by Brian M. Scott,
-$$
-\forall n \in \mathbb{Z}\quad (f(n) > 0)
-$$
-Double check again. Do you really need quantifiers? You will not find a single quantifier in Rudin's Real and Complex Analysis. You will not find quantifiers in Bourbaki's chapters on topology either, although Bourbaki's style is usually very formal.<|endoftext|>
-TITLE: Is the sum of the square roots of all natural numbers up to n whole for any value of n other than 1?
-QUESTION [7 upvotes]: For the summation $\sum_{n=0}^x \sqrt{n}$ are there any values of $x$ where the summation equals a whole number other than 1?
-
-REPLY [4 votes]: I doubt there's an elementary proof of this, but the fact that this is irrational for $x>1$ follows from the following theorem:
-
-The set of numbers of the form $\{\sqrt{n}:n\in \mathbb Z,\,n \text{ squarefree}\}$ is linearly independent over $\mathbb Q$.
-
-One can find proofs for this here and in the special case for primes here.
-This is essentially an immediate corollary: Suppose for contradiction that $\sum_{n=1}^x\sqrt{n}=q$ was rational. Every term in the sum may be written in the form $c_n\sqrt{d_n}$ where $d_n$ is square-free and $n=c_n^2d_n$ and both $c_n$ and $d_n$ are integers.
If we regroup the terms to collect coefficients of $\sqrt{d_n}$ we'll get
-$$\sum_{\substack{d=1\\d\text{ squarefree}}}^{x}\left(\sum_{c=1}^{\lfloor\sqrt{x/d}\rfloor}c\right)\sqrt{d}$$
-and if we subtract $q$ from this and pull out the first term, we get
-$$\left(-q+\sum_{c=1}^{\lfloor\sqrt{x}\rfloor}c\right)\cdot \sqrt{1}+\sum_{\substack{d=2\\d\text{ squarefree}}}^{x}\left(\sum_{c=1}^{\lfloor\sqrt{x/d}\rfloor}c\right)\sqrt{d}=0$$
-which contradicts that the set of square roots of square free numbers is linearly independent over $\mathbb Q$. Therefore, the sum $q$ is not rational - and, in particular, is not an integer.
-One should note that this gives a stronger property: If you sum up a bunch of square roots of natural numbers with positive rational coefficients, where at least one of the square roots is irrational, the sum will never be rational.<|endoftext|>
-TITLE: Does the sum of the inverses of the sums of the primes converge?
-QUESTION [36 upvotes]: $$\sum_{m=0}^∞ \frac{1}{\sum_{n=0}^m p_n} = \frac{1}{2} + \frac{1}{5} + \frac{1}{10} + \frac{1}{17} ... $$
-Where $p_n$ is the $n$th prime number, does $\sum_{m=0}^∞ \frac{1}{\sum_{n=0}^m p_n}$ converge?
-
-REPLY [4 votes]: The n-th prime is quite obviously ≥ 2n - 1. Therefore the sum of the first n primes is ≥ $n^2$. The sum in question converges and is at most the sum over $1 / n^2$, which converges to $π^2 / 6$. Actually, it must be less than $π^2 / 6 - 1/2$ because the first element of the sum is 1/2, not 1.
-That bound is about 1.1449, not worlds apart from the actual result.
-I'm just curious why an answer saying "all you need is $p_n ≥ n$" gets voted up while an answer saying "$p_n ≥ 2n - 1$" and giving a reasonable upper bound for the limit doesn't.<|endoftext|>
-TITLE: Proof for the calculation of mean in negative binomial distribution
-QUESTION [13 upvotes]: I am trying to figure out the mean of the negative binomial distribution but have run into mistakes.
-I know there are other posts on deriving the mean but I am attempting to derive it in my own way. I wonder if any of you can point out where my mistake is:
-In the negative binomial distribution, the probability is:
-$$
-p(X=x) = \frac{(x-1)!}{(r-1)!(x-r)!}p^r(1-p)^{x-r},
-$$
-where $X$ is a random variable for the number of trials required, $x$ is the number of trials, $p$ is the probability of success, and $r$ is the number of successes by the $x$th trial. Therefore, to calculate the expectation:
-$$
-E(x) = \sum_{x=r}^{\infty}xp(x)=\sum_{x=r}^{\infty}x\frac{(x-1)!}{(r-1)!(x-r)!}p^r(1-p)^{x-r}=\sum_{x=r}^{\infty}\frac{x!}{(r-1)!(x-r)!}p^r(1-p)^{x-r}
-$$
-Let $k=x-r$, then the formula becomes:
-$$
-E(x)=\sum_{k=0}^{\infty}\frac{(k+r)!}{(r-1)!k!}p^r(1-p)^k=
-r\sum_{k=0}^{\infty}\frac{(k+r)!}{r!k!}p^r(1-p)^k
-$$
-By the binomial theorem, $\sum_{k=0}^{\infty}\frac{(k+r)!}{r!k!}p^r(1-p)^k$ becomes $[p+(1-p)]^{k+r} = 1$, and thus $E(x) = r$, which is obviously wrong.
-I cannot figure out what is wrong with my proof, and thus any help will be appreciated. For reference, someone else has done a similar proof here, but I still have trouble understanding the mistake(s) in my proof:
-Deriving Mean for Negative Binomial Distribution.
-
-REPLY [4 votes]: Your utilization of the Binomial theorem is wrong. In
-$$\sum_{k=0}^{\infty}\frac{(k+r)!}{r!k!}p^r(1-p)^k$$
-the sum $r+k$ isn't constant.
-
-For instance, with $r=2$,
-$$\sum_{k=0}^{\infty}\frac{(k+r)!}{r!k!}p^rq^k=\frac{2!}{2!}p^2+\frac{3!}{2!}p^2q+\frac{4!}{2!2!}p^2q^2+\frac{5!}{2!3!}p^2q^3\cdots\\
-=\frac{p^2}2\left(2\cdot1+3\cdot2q+4\cdot3q^2+5\cdot4q^3+\cdots\right)=\frac{p^2}2\frac2{(1-q)^3}=\frac1p.$$<|endoftext|>
-TITLE: Assume $f$ is a continuous one-to-one function over an interval. Prove that $f$ is strictly monotone
-QUESTION [7 upvotes]: Assume $f$ is a continuous one-to-one function over an interval. Prove that $f$ is strictly monotone.
-
-Attempt
-Since we know that $f$ is one-to-one, for every $f(x)$ there is exactly one element $x_0$ that maps to it. Thus, if $f(x) = f(y)$, then $x=y$. Now let's suppose that $f$ isn't monotone. That is, it goes from nonincreasing to nondecreasing. Thus, on some interval $[a,b]$ we must have $f'(x) < 0$ and on some other interval $[c,d]$ we must have $f'(x) > 0$. As a result we must have that $f'(x) = 0$ at some point. How do I show this contradicts the definition of $f$?
-
-REPLY [12 votes]: It is given that $f$ is one-to-one.
-Within that context, to say that $f$ is not strictly monotone is to say that one can find $a<b<c$ in the interval such that either $f(a)<f(b)$ and $f(b)>f(c)$, or $f(a)>f(b)$ and $f(b)<f(c)$. The two cases are symmetric, so suppose the first one holds. Then $f(b)>\max\{f(a),f(c)\}$. Since $f$ is continuous, the intermediate value theorem can be used. Pick any $u$ with $\max\{f(a),f(c)\}<u<f(b)$. The theorem tells us that there exist $x_1\in[a,b)$ and $x_2\in(b,c]$ such that $f(x_1)=u=f(x_2)$. But $x_1<b<x_2$, so $x_1\neq x_2$, which contradicts the assumption that $f$ is one-to-one.<|endoftext|>
-TITLE: Closed form of $I=\int _{ 0 }^{ 1 }{ \frac { \ln { x } { \left( \ln { \left( 1-{ x }^{ 2 } \right) } \right) }^{ 3 } }{ 1-x } dx }$
-QUESTION [5 upvotes]: While solving a problem, I got stuck at an integral. The integral is as follows:
-
-Find the closed form of: $$I=\int _{ 0 }^{ 1 }{ \frac { \ln { x } { \left( \ln { \left( 1-{ x }^{ 2 } \right) } \right) }^{ 3 } }{ 1-x } dx } $$
-
-I tried using power series but it failed. I tried various substitutions which came to be of no use. Please help.
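For what it's worth, here is a numerical value to test candidate closed forms against. This is a stdlib-only tanh-sinh quadrature sketch I put together myself (the function names are my own, and nothing beyond the standard `math` module is assumed); the substitution clusters nodes doubly exponentially at the endpoints, which absorbs the $\ln^3$ blow-up at $x=1$:

```python
import math

def integrand(x, omx):
    # ln(x) * ln(1 - x^2)^3 / (1 - x); omx = 1 - x is passed separately
    # so the logarithmic singularity at x = 1 is evaluated accurately,
    # using ln(1 - x^2) = ln(1 - x) + ln(1 + x).
    return math.log(x) * (math.log(omx) + math.log1p(x)) ** 3 / omx

def tanh_sinh(h=0.05, a_cap=18.0):
    # tanh-sinh quadrature on (0, 1): x(t) = (1 + tanh((pi/2) sinh t)) / 2,
    # summed over nodes t = k*h until the inner argument exceeds a_cap
    # (beyond which the weights are negligibly small).
    total, k = 0.0, 0
    while True:
        t = k * h
        a = 0.5 * math.pi * math.sinh(t)
        if a > a_cap:
            break
        w = h * 0.25 * math.pi * math.cosh(t) / math.cosh(a) ** 2
        e2a = math.exp(2.0 * a)
        x, omx = e2a / (e2a + 1.0), 1.0 / (e2a + 1.0)  # x and 1-x, stably
        total += w * integrand(x, omx)
        if k > 0:  # the node at -t swaps the roles of x and 1-x
            total += w * integrand(omx, x)
        k += 1
    return total

# The integrand is positive on (0, 1), so the printed value is positive.
print(tanh_sinh())
```

Halving `h` and checking that the value is stable gives a sanity check on the result.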
- -REPLY [4 votes]: We have $$ I=\int_{0}^{1}\frac{\log\left(x\right)\log^{3}\left(1-x^{2}\right)}{1-x}dx=\int_{0}^{1}\frac{\log\left(x\right)\log^{3}\left(1-x^{2}\right)}{1-x^{2}}dx+\int_{0}^{1}\frac{x\log\left(x\right)\log^{3}\left(1-x^{2}\right)}{1-x^{2}}dx - $$ and so if we put $x=\sqrt{y}$ we get $$I=\frac{1}{4}\int_{0}^{1}\frac{y^{-1/2}\log\left(y\right)\log^{3}\left(1-y\right)}{1-y}dy+\frac{1}{4}\int_{0}^{1}\frac{\log\left(y\right)\log^{3}\left(1-y\right)}{1-y}dy$$ and recalling the definition of beta function $$B\left(a,b\right)=\int_{0}^{1}x^{a-1}\left(1-x\right)^{b-1}dx - $$ we have $$\frac{\partial^{h+k}B}{\partial a^{h}\partial b^{k}}\left(a,b\right)=\int_{0}^{1}x^{a-1}\log^{h}\left(x\right)\left(1-x\right)^{b-1}\log^{k}\left(1-x\right)dx - $$ hence $$I=\frac{1}{4}\frac{\partial^{4}B}{\partial a\partial b^{3}}\left(\frac{1}{2},0^{+}\right)+\frac{1}{4}\frac{\partial^{4}B}{\partial a\partial b^{3}}\left(1,0^{+}\right).$$ For the computation of the limit, we can use the asymptotic $\Gamma(x)=\frac{1}{x}+O(1)$ when $x\rightarrow 0$ and the relations between the polygamma terms and zeta.<|endoftext|> -TITLE: Is the quotient $X/G$ homeomorphic to $\tilde{X}/G'$? -QUESTION [5 upvotes]: Let $G$ be a Lie group (not necessarily connected) acting effectively/faithfully on a connected, locally path connected, semi-locally simply connected space $X$ (not necessarily with fixed points). Let $p:\tilde{X}\to X$ be the universal covering of $X$. -For any $g\in G$, $\theta_g:X\to X$ is the map given by $x\mapsto g\cdot x$. -Then $\theta_g$ can be covered by a homeomorphism of $\tilde{X}$ since $\tilde{X}$ is simply connected, and any two such liftings differ by a deck transformation. Clearly, all such liftings for all $g$ form a subgroup $G'$ of $\operatorname{Homeo}(\tilde{X})$. -My question : Are the quotient spaces $X/G$ and $\tilde{X}/G'$ homeomorphic? -My Attempt : -Let $q:X\to X/G$ be the quotient map. 
I am trying to show that $\psi=q\circ p:\tilde{X}\to X/G$ satisfies the universal property. Let $Z$ be any topological space and let $f:\tilde{X}\to Z$ be a continuous map such that for all $x,y\in\tilde{X}$, $x\sim y\Longrightarrow f(x)=f(y)$. We need to show that there exists a unique continuous map $\phi:X/G\to Z$ such that $f=\phi\circ \psi$. Let $\bar{x}\in X/G$. Pick $x\in \psi^{-1}(\bar{x})$ and define $\phi(\bar{x})=f(x)$. If this is a well defined function then it is clear that it is continuous and satisfies $f=\phi\circ \psi$.
-To check it is well defined-
-Suppose $y\in \psi^{-1}(\bar{x})\Longrightarrow \psi(y)=\psi(x)$. From here I am unsure how to proceed. I have to show that $x\sim y$, that is, I need to find a $g\in G'$ such that $y=g\cdot x$. Can someone help?
-Thank you.
-
-REPLY [4 votes]: By definition, $q \circ p(x) = \psi(x)=\psi(y) = q \circ p(y) \in X/G$. So $p(x),p(y)$ are in the same orbit of the action of $G$ on $X$. Pick $g \in G$ such that $g \cdot p(x) = p(y)$. In $\tilde X$, the point $x$ is a lift of $p(x)$ and the point $y$ is a lift of $p(y)$. As shown in my answer to your previous question, there exists $g' \in G'$ which is a lift of $g$ such that $g' \cdot x = y$.<|endoftext|>
-TITLE: Homeomorphism definition
-QUESTION [5 upvotes]: I was told by my professor that homeomorphisms are continuous maps with continuous inverse, but do those conditions also imply that the map is bijective?
-
-REPLY [3 votes]: It was unfortunate that your professor worded it this way. From the get-go, saying that a homeomorphism is a continuous function with a continuous inverse assumes that we have some function $f$, and that its inverse, $f^{-1}$, exists. Off the bat, just because we have a function $f$, it does not mean that it has an inverse.
-In addition, the way your professor phrased it doesn't make it clear whether we are talking about a subset of the range, or the whole range itself: in a homeomorphism, we must have the whole range, which is something your professor's phrasing neglected to capture.
-If I were you, I would just forget about what your professor said. It is kind of circular. Just to make it clear: by definition, a homeomorphism is a function $h$, from a topological space $X$, to a topological space $Y$, such that the following hold:
-
-$h$ is 1-1
-$h$ is onto
-$h$ is continuous
-$h^{-1}$ is continuous
-
-Note, we do not say $h^{-1}$ exists here: this is a consequence of $h$ being 1-1, as we can always create $h^{-1}$ in such a case. ALSO, we imply that the image of $h$ is all of its range: this is captured by saying that $h$ is a bijection.<|endoftext|>
-TITLE: Understanding an exercise from Fulton's Book on Algebraic Curves
-QUESTION [7 upvotes]: I am reading Fulton's book Algebraic Curves.
-Currently I am working on a specific problem (2.43), and I have doubts about my work and would appreciate another opinion (or several).
-
-Assume $p$ is the origin in $\mathbb{A}^n$ and $\mathcal O_p(\mathbb{A}^n)$ is the set of all rational functions defined on $\mathbb A^n$ and $m_p (\mathbb{A} ^n)$ is the set of non-units. Show $I\mathcal O_p = m_p$ so $I^r\mathcal O_p = m_p^r$ where $I$ is the ideal generated by $x_1,...,x_n$.
-
-My proof seems too simple and that is what bothers me.
-$(\supset)$ Let $\phi \in m_p$, thus $\phi = \frac{f}{g} $ such that $f(p)=0$. Well $I \mathcal O_p$ is the set generated by $\frac{r}{u}$ where $r \in I$. Thus because $f(p)=0$ this implies it is in $\mathcal I(V(p))$, thus $f \in I$
-$(\subset)$ Let $\phi \in I\mathcal O_p$, thus $\phi= \frac{k}{h}$ where $k \in I$, thus $\phi \in m_p(\mathbb{A}^n)$
-I am not very sure on the second part, but how does this look?
-
-REPLY [3 votes]: The idea of the proof is correct.
The proof is simple, but conveys the important relationship between $\mathcal{I}(p)$, the maximal ideal of polynomials vanishing on $p$, and $m_p$, the maximal ideal of rational functions vanishing on $p$. They are based on the same property of vanishing on $p$, which is not affected by denominators!
-One important fact to state is that $\mathcal{I}(p) = I$. With this, the logical flow of the end of the first and second part of your proof is clear: $f(p) = 0$ implies $f\in\mathcal{I}(p) = I$ (note: not $\mathcal{I}(V(p))$!) for the first part, and $k\in I = \mathcal{I}(p)$ implies $k(p) = 0$ for the second.
-In fact, the proof can be concisely written as follows:
-$$
-\phi= \frac{f}{g}\in m_p \Leftrightarrow \frac{f(p)}{g(p)} = 0 \Leftrightarrow f(p) = 0 \Leftrightarrow f\in \mathcal{I}(p) = I\Leftrightarrow \phi = \frac{f}{g}\in I\mathcal{O}_p
-$$<|endoftext|>
-TITLE: Stopping time on an asymmetric random walk
-QUESTION [8 upvotes]: Suppose we have an asymmetric random walk whose step $\xi_i$ is distributed as $P(\xi_i = 1) = p$ and $P(\xi_i = -1) = 1-p = q$, where $p >1/2$. The hitting time $T_x$ is defined as $\inf{\{n : S_n = x\}}$ where $S_n$ represents the simple walk $S_n = \sum_{i \leq n} \xi_i$. It can be shown that $$\Bbb E T_1 = (p-q)^{-1}$$ However, how can we deduce from this that $\Bbb ET_b = b(p-q)^{-1}$ for all $b>0$? I tried to use Wald's equation, but it does not seem to work.
-
-REPLY [3 votes]: Let's see what we can do for $T_2$. We have
-$$E[T_2]=E[E[T_2\mid T_1]]$$
-Now what is $E[T_2\mid T_1=t]$? We have $$T_2\mid\{ T_1=t\}=\inf\{n:S_n=2\}\overset{d}{=}t+\inf\{n:S_n=1\}\overset{d}{=}t+T_1$$ Here I used the fact that your random walk is a homogeneous Markov chain: after reaching level $1$ at time $t$, the walk must climb one more level, which takes an independent copy of $T_1$. So $E[T_2\mid T_1]=T_1+E[T_1]$, thus $$E[T_2]=2E[T_1]=2(p-q)^{-1}$$
-Then you can prove the rest by induction.<|endoftext|>
-TITLE: Prove that $8640$ divides $n^9 - 6n^7 + 9n^5 - 4n^3$.
-QUESTION [7 upvotes]: I found this problem in a book, but unfortunately I can't solve it.
-Prove that for all integer values $n$, $n^9 - 6n^7 + 9n^5 - 4n^3$ is divisible by $8640.$
-So far I've noticed that $8640 = 6! \times 12$, also I've tried to simplify that expression and I've found that it's equal to $n^3(n^3-3n-2)(n^3-3n+2)$, but I can't move on after that.
-
-REPLY [7 votes]: $$
-\begin{align}
-&n^9-6n^7+9n^5-4n^3\\
-&\small=362880\binom{n}{9}+1451520\binom{n}{8}+2298240\binom{n}{7}+1814400\binom{n}{6}\\
-&\small+734400\binom{n}{5}+138240\binom{n}{4}+8640\binom{n}{3}\\
-&=\small8640\left[42\binom{n}{9}+168\binom{n}{8}+266\binom{n}{7}+210\binom{n}{6}+85\binom{n}{5}+16\binom{n}{4}+\binom{n}{3}\right]
-\end{align}
-$$<|endoftext|>
-TITLE: Continuity of cartesian product of functions between topological spaces
-QUESTION [7 upvotes]: I want to prove the following theorem:
-If $f:X\rightarrow X'$ and $g:Y\rightarrow Y'$ are continuous functions between topological spaces, then the mapping between product spaces
-$$f\times g:X\times Y\rightarrow X'\times Y', (x,y)\mapsto(f(x),g(y)) $$
-is continuous.
-
-I am using the theorem written below:
-Theorem. Let $X, Y$ be topological spaces and $X\times Y$ their product space. If $Z$ is a topological space and $f:Z\rightarrow X\times Y$ a mapping, then $f$ is continuous iff $p\circ f, q\circ f$ are continuous, where $p:X\times Y\rightarrow X, q:X\times Y\rightarrow Y$ are projections.
-
-Assume that $f:X\rightarrow X'$ and $g:Y\rightarrow Y'$ are continuous functions between topological spaces.
-Let $p':X'\times Y'\rightarrow X',q':X'\times Y'\rightarrow Y'$ be the projections.
-We show directly that the preimage under $f\times g$ of every open set is open. Let $W\subseteq X'\times Y'$ be open. Then there exist open sets $U_{i}\subseteq X'$ and $V_{i}\subseteq Y'$ $(i\in I)$ such that $W=\bigcup_{i\in I} U_i\times V_i$.
-Because $$(f\times g)^{-1}(W)=(f\times g)^{-1}\bigg(\bigcup_{i\in I} U_i\times V_i\bigg)=\bigcup_{i\in I}(f\times g)^{-1}(U_{i}\times V_{i}), $$
-it's enough to show that $(f\times g)^{-1}(U_{i}\times V_{i})$ is open for every $i\in I$.
-Now, $U_{i}\times V_{i} = (U_{i}\times Y')\cap (X'\times V_{i})=p'^{-1}(U_i)\cap q'^{-1}(V_i)$.
-Then,
-$$\begin{align*}(f\times g)^{-1}(U_{i}\times V_{i})&=(f\times g)^{-1}(p'^{-1}(U_i)\cap q'^{-1}(V_i))\\
-&=(p'\circ (f\times g))^{-1}(U_{i})\cap(q'\circ (f\times g))^{-1}(V_i).
-\end{align*}$$
-Now, there are also projections $p:X\times Y\rightarrow X, q:X\times Y\rightarrow Y$. Because $f$ and $g$ are continuous, the compositions $f\circ p$ and $g\circ q$ are continuous.
-Further, because $p'\circ (f\times g) = f\circ p$ and $q'\circ (f\times g) = g\circ q$, we can continue
-$$(p'\circ (f\times g))^{-1}(U_{i})\cap (q'\circ (f\times g))^{-1}(V_i)=(f\circ p)^{-1}(U_i)\cap (g\circ q)^{-1}(V_{i})$$
-which is open.
-
-Fixed.
-
-REPLY [2 votes]: This solution seems too simple to be correct (but I can't find a mistake):
-Take a basic open set in $X'\times Y'$, call it $U\times V$. It's enough to show that $(f\times g)^{-1}(U\times V)$ is open. But $$(f\times g)^{-1}(U\times V)=\{(x,y)\mid (f(x),g(y))\in U\times V\}$$
-$$= \{(x,y)\mid f(x)\in U\}\cap \{(x,y)\mid g(y)\in V\}$$
-$$= f^{-1}(U)\times Y \bigcap X\times g^{-1}(V),$$
-which is the intersection of two open sets and therefore open.<|endoftext|>
-TITLE: Is Gaussian integral the only one that can be easily solved by this double integral trick?
-QUESTION [15 upvotes]: For a lot of people the favorite way of solving the Gaussian integral $I=\int^{\infty}_{-\infty} e^{-x^2} dx$ is to find $I^2$ in polar coordinates and then take a root.
-The trick may be useful in this case, but I struggle to find any other integral it can be applied to. The obvious condition for the integrated function is:
-$$f(x) \cdot f(y)=g(x^2+y^2)=h(|r|)$$
-I don't know any other function aside from $e^{bx^2}$ that meets this condition.
-Moreover, the limits for the argument should be infinite. Otherwise we can't equate integration in the square $x,y \in (-a,a)$ with integration in the circle $r \in (0,a)$.
-
-But maybe this method can be generalized? For example, there may be some functions that give elementary integrals in polar form when multiplied $f(x)f(y)$ even if their product depends on the angle too?
-
-REPLY [3 votes]: The closely related integral
-$\Gamma(1/2)=\int_0^\infty\dfrac{e^{-x}dx}{\sqrt x}=\sqrt\pi$
-can also be solved through a double integration ... with a different sort of coordinate transformation.
-Let $I$ be the target integral and multiply it by itself to get the double integral
-$I^2=\int_0^\infty\int_0^\infty\dfrac{e^{-(x+y)}(dx)(dy)}{\sqrt {xy}}$
-And then rotate the coordinate system by 45°:
-$\xi=(x+y)/\sqrt2,\eta=(y-x)/\sqrt2$
-Therefore, using the algebraic identity $xy=(1/2)(\xi^2-\eta^2)$ with variables defined as above, we may render
-$I^2=\int_0^\infty\int_{-\xi}^\xi\dfrac{\sqrt2e^{-\xi\sqrt2}(d\eta)(d\xi)}{\sqrt{\xi^2-\eta^2}}$
-Where is the Jacobian conversion factor above? That's the beauty part. Unlike conversion to polar coordinates, the rotation of the Cartesian coordinates is area-preserving (both magnitude and sense of rotation about the boundary of the area), so the conversion factor is simply $+1$. We do not have to jump through hoops to identify that "extra" factor of $r$ we get with the polar conversion, and yet the exponential function integration will remain elementary.
-So we separate terms dependent only on $\xi$ to get
-$I^2=\int_0^\infty\sqrt2e^{-\xi\sqrt2}\left[\int_{-\xi}^{\xi}\dfrac{d\eta}{\sqrt{\xi^2-\eta^2}}\right]d\xi$
-Plugging in $u=\eta/\xi$ converts the $\eta$ integral to
-$\int_{-1}^1\dfrac{du}{\sqrt{1-u^2}}=\sin^{-1}u\big| _{-1}^1=\pi$
-The $\xi$ integral gives directly
-$\int_0^{\infty}\sqrt2e^{-\xi\sqrt2}d\xi=-e^{-\xi\sqrt 2}\big|_0^{\infty}=1$
-So
-$I^2=(1)(\pi)=\pi, I>0; \therefore I=\Gamma(1/2)=\sqrt{\pi}\approx 1.772.$
-
-The above result may be generalized to cover the $\Gamma$ function for all positive arguments. Consider the integral
-$\Gamma(a)=\int_0^{\infty}x^{a-1}e^{-x}dx, a>0$
-Square and convert to double integral form, then rotate the coordinates as above:
-$[\Gamma(a)]^2=\int_0^\infty\int_0^\infty (xy)^{a-1}e^{-(x+y)}(dx)(dy)$
-$=\int_0^\infty\int_{-\xi}^\xi2^{1-a}e^{-\xi\sqrt2}(\xi^2-\eta^2)^{a-1}(d\eta)(d\xi)$
-The $\eta$ integration is done by putting in $u=\eta/\xi$. Note that with $a\ne1/2$ we get an extra factor entering the $\xi$ integration:
-$[\Gamma(a)]^2=2^{1-a}\int_0^\infty\xi^{2a-1}e^{-\xi\sqrt2}\,d\xi\int_{-1}^1(1-u^2)^{a-1}du$
-$=2^{1-2a}\Gamma(2a)\int_{-1}^1(1-u^2)^{a-1}du$
-And thus
-$\Gamma(a)=2^{(1/2)-a}\sqrt{\Gamma(2a)\int_{-1}^1(1-u^2)^{a-1}du}$
-Thus $\Gamma(a)$ is rendered in terms of $\Gamma(2a)$ and an algebraic-function integral. If $a$ is half a natural number, then $\Gamma(2a)=(2a-1)!$ is a factorial and the algebraic-function integral is elementary, thus recovering the familiar function values. For instance:
-$\Gamma(1/2)=2^0\sqrt{0!\int_{-1}^1(1-u^2)^{-1/2}du}=1\sqrt{(1)(\pi)}=\sqrt\pi\approx 1.772.$
-$\Gamma(1)=2^{-1/2}\sqrt{1!\int_{-1}^1(1-u^2)^0du}=\sqrt{1/2}\sqrt{(1)(2)}=1.$
-$\Gamma(3/2)=2^{-1}\sqrt{2!\int_{-1}^1(1-u^2)^{1/2}du}=(1/2)\sqrt{(2)(\pi/2)}=(1/2)\sqrt\pi\approx 0.886.$
-For other values of $a$ the integral is nonelementary and the $\Gamma$ function will also be nonelementary, but for some rational arguments expressions are available in terms of elliptic integrals.
Let us explore the simplest such case, $a=1/4$.
-For this case we render
-$\Gamma(1/4)=2^{1/4}\sqrt{(\sqrt\pi)\int_{-1}^1(1-u^2)^{-3/4}du}$
-The integral may be converted to a form matching the complete elliptic integral of the first kind:
-$\int_{-1}^1(1-u^2)^{-3/4}du=2\int_0^1(1-u^2)^{-3/4}du$
-$=2\int_0^1\dfrac{2t^3(dt)}{t^3\sqrt{1-t^4}}, t=(1-u^2)^{1/4}$
-$=4\int_0^1\dfrac{dt}{\sqrt{1-t^4}}=4\int_0^1\dfrac{dt}{\sqrt{1-t^2}\sqrt{1+t^2}}=4K(i)$
-where the complete elliptic integral of the first kind is defined as
-$K(k)=\int_0^1\dfrac{dt}{\sqrt{1-t^2}\sqrt{1-k^2t^2}}$
-(The argument $i$ is a square root of $-1$; the function is real for pure imaginary arguments.) Thereby
-$\Gamma(1/4)=(32\pi)^{1/4}\sqrt{K(i)}\approx 3.626$
-where the elliptic integral may be rendered with high efficiency using (real) arithmetic and geometric means.<|endoftext|>
-TITLE: What do you call the relation between these 2 variables?
-QUESTION [6 upvotes]: Let's say I have a number $X$ and I want to increase it by 50%; I would get
-$X*A=Y$
-Then in order to multiply $Y$ and get $X$ again I would need to do
-$Y*Z=X$
-What do you call the relation between $A$ and $Z$?
-Example with numbers:
-$$5*1.5=7.5$$
-$$7.5*Z=5$$
-$$Z=0.66$$
-What do you call the relation between 1.5 and 0.66 when $X*1.5*0.66=X$?
-
-REPLY [10 votes]: A and Z are multiplicative inverses; i.e., A × Z = 1.
-Therefore, X × A × Z = X
-Usually, we just say that A is the inverse of Z (and vice-versa).<|endoftext|>
-TITLE: Is this property of continuous maps equivalent to properness?
-QUESTION [7 upvotes]: For the purposes of my question, a continuous map $f : X \to Y$ is proper if it is closed and the preimage of every compact subspace of $Y$ is a compact subspace of $X$.
-Say a continuous map $f : X \to Y$ is semiproper if, for every continuous map $y : T \to Y$ where $T$ is compact, the space $T \times_Y X = \{ (t, x) \in T \times X : y (t) = f (x) \}$ is compact.
-It is a fact that a closed map is proper if and only if it is semiproper. -Question. Are semiproper maps always closed? - -If $Y$ is a compactly generated Hausdorff space, then it is easy to check that every semiproper map $f : X \to Y$ is closed – indeed, we only need the defining property for subspace inclusions $y : T \to Y$. On the other hand, if we weaken the definition by restricting to subspace inclusions $y : T \to Y$, then there are easy counterexamples. -That leaves non-(compactly generated Hausdorff) spaces. Perhaps there is a counterexample there? - -REPLY [2 votes]: Let $X$ be an uncountable discrete space, let $Y$ be the same set with the cocountable topology, and let $f:X\to Y$ be the identity map. Then $f$ is not closed, but I claim $f$ is semiproper. Indeed, if $T$ is compact and $y:T\to Y$ is continuous, then the image of $y$ must be compact in $Y$ and hence finite. The topologies of $X$ and $Y$ agree on finite sets and so $T\times_Y X\cong T\times_Y Y\cong T$ is compact. (Explicitly, $T\times_Y X$ is just $T$ with its topology refined so that each fiber of $y$ is clopen, but each fiber of $y$ is already closed in $T$ and thus clopen since there are only finitely many of them.) -More generally, let $Y$ be any space and let $X$ be its CG-ification (i.e., $X$ is $Y$ with the topology generated by its compact subspaces). Then every continuous map $y:T\to Y$ from a compact space is also continuous as a map to $X$, and so $T\times_Y X\cong T$ is compact (since $T\times_Y X$ is just $T$ with its topology refined so that $y$ is continuous as a map to $X$). So the identity map $X\to Y$ is semiproper, but is not closed unless $Y$ is compactly generated.<|endoftext|> -TITLE: Reference on manifolds with corners -QUESTION [11 upvotes]: Is there a systematic treatment of (finite dimensional) manifolds with corners in the literature which carefully introduces all usual differential topological notions (submanifolds, embeddings, etc.) 
and which includes proofs of the usual statements in geometric topology like the existence of collars or isotopy extension theorems in the generality of manifolds with corners?
-Most of the common textbooks treat the case with neither corners nor boundary and mention the case of boundaries. Some of them take care of boundaries more closely, but I am not aware of a detailed reference covering the situation with corners.
-
-REPLY [5 votes]: Differential Topology, by Margalef-Roig and Dominguez, builds the standard smooth manifold theory (inverse function theorem, submanifolds, transversality, etc.) at the level of generality of Banach manifolds with corners.
-I don't actually know a source that deals with isotopy extension and collar neighborhoods. Probably your best bet is just to carefully check the details of what happens in that setting yourself.<|endoftext|>
-TITLE: A continuous function with positive and negative values but never zero?
-QUESTION [11 upvotes]: Well, it is easy to prove that $e^z$ is never zero, where $z$ is any complex number. Also, $e^z$ can be both positive and negative. On the other hand, $e^z$ is continuous. How is it possible that a continuous function can be negative and positive but never meet zero?
-Detailed simple explanations would be much appreciated.
-
-REPLY [21 votes]: The function $f:\mathbb Q\setminus \{0\}\to \mathbb R$ defined by $f(q)=q$ is continuous at each rational number $q\neq 0$, takes positive and negative values, but is never $0$. The intermediate value theorem is valid for functions $f: I\subset \mathbb R\to\mathbb R$, where $I$ is a closed interval (i.e., a connected set in $\mathbb R$).
-The example you give with $e^z:\mathbb C\to\color{red}{\mathbb C}$ actually doesn't show anything, because there is no total ordering on the complex numbers.
Also you can read from Wikipedia:
-The intermediate value theorem generalizes in a natural way: Suppose that $X$ is a connected topological space and $(Y, <)$ is a totally ordered set equipped with the order topology, and let $f : X → Y$ be a continuous map. If $a$ and $b$ are two points in $X$ and $u$ is a point in $Y$ lying between $f(a)$ and $f(b)$ with respect to $<$, then there exists $c$ in $X$ such that $f(c) = u$.
-Edit:
-If $f$ is continuous, then the IVT can fail to apply either because the domain of $f$ is not connected, or because the codomain is not totally ordered:
-In my example, $\mathbb R$ is totally ordered and the IVT fails to apply because $\mathbb Q\setminus \{0\}$ is not connected.
-In the OP example $e^z:\color{blue}{\mathbb C}\to\color{red}{\mathbb C}$, $\quad \color{blue}{\mathbb C}$ is connected and the IVT fails to apply because $\color{red}{\mathbb C}$ is not totally ordered.<|endoftext|>
-TITLE: What are some applications of Chebotarev Density Theorem?
-QUESTION [22 upvotes]: Let $L/K$ be a Galois extension of number fields and let $\mathcal{C}$ be a conjugacy class in $Gal(L/K)$. Let $\mathbb{P}(K)$ be the set of all prime ideals in $K$ and let $\left(\frac{L/K}{\mathfrak{p}} \right)$ correspond to the associated conjugacy class of Frobenius elements living over $\mathfrak{p}$ (of course unramified) and suppose $A=\left\lbrace \mathfrak{p}\in \mathbb{P}(K) \mid \left(\frac{L/K}{\mathfrak{p}} \right)=\mathcal{C} \right\rbrace$.
-Then the Chebotarev Density Theorem states that $\delta(A)=\frac{|\mathcal{C}|}{[L:K]}$.
-This also is a generalisation of the Frobenius density theorem.
-For positive integers $a,n$ such that $\gcd(a,n)=1$, CDT for $K=\mathbb{Q}$ and $L=\mathbb{Q}(\zeta_n)$ and $\mathcal{C}=\lbrace \zeta_n \to \zeta_n^a \rbrace$ gives Dirichlet's theorem on the infinitude of primes in arithmetic progression.
-I wish to ask what other applications there are of this theorem.
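(As a sanity check, the equidistribution in the Dirichlet special case is easy to observe numerically. Here is a stdlib-only sketch I wrote, with function names of my own; among odd primes, each residue class mod $8$ coprime to $8$ should have density $1/\varphi(8)=1/4$:)

```python
def primes_upto(n):
    # simple sieve of Eratosthenes
    sieve = bytearray([1]) * (n + 1)
    sieve[0:2] = b"\x00\x00"
    for i in range(2, int(n ** 0.5) + 1):
        if sieve[i]:
            sieve[i * i :: i] = bytearray(len(sieve[i * i :: i]))
    return [i for i in range(n + 1) if sieve[i]]

# Count odd primes up to 200000 in each invertible residue class mod 8.
primes = primes_upto(200_000)
counts = {a: 0 for a in (1, 3, 5, 7)}
for p in primes:
    if p != 2:
        counts[p % 8] += 1
total = sum(counts.values())
for a in (1, 3, 5, 7):
    print(a, counts[a] / total)  # each ratio should be close to 0.25
```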
-
-REPLY [3 votes]: In 1996 Pascal Koiran showed that given a system $F=0$ of $k$ polynomial equations on $n$ variables, where the maximal degree over all monomials is $D$ and the bit-size of the largest coefficient is $h$, one can determine if the system has a solution $F=0$ over $\mathbb{C}^n$, by showing that it has a solution in $\mathbb{Z}/p \mathbb{Z}$ for many primes $p$.
-In more detail, if the system does not have a solution in $\mathbb{C}^n$, then unconditionally, by the Effective Nullstellensatz, there are at most
-$$A_F=4n(n+1)D^n\bigl(h+\log k + (n+7)\log((n+1)D)\bigr)$$
-primes $p$ modulo which $F=0$ has a solution.
-Likewise, if the system does have a solution in $\mathbb{C}^n$, then by the prime ideal theorem, unconditionally there is a positive density of primes $p$ modulo which there is a solution.
-The denouement is that, conditioned on the Generalized Riemann Hypothesis, by Effective Chebotarev Density, Koiran showed that these primes $p$ are distributed "evenly" enough that one can apply standard tricks of universal hashing to give a small certificate that the system $F=0$ is likely to be satisfiable modulo more than $2A_F$ primes $p$, and thus is likely to be satisfiable in $\mathbb{C}^n$. Essentially, one finds a prime $q$ such that $F$ is satisfiable in $\mathbb{Z}/q \mathbb{Z}$ and $H(q)=0$ for some nice random hash function $H$, thus showing there are likely enough primes $q$ to invert $H$, thus likely enough primes modulo which $F=0$ has a solution.
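To make the dichotomy concrete, here is a toy brute-force illustration I wrote (this is just exhaustive search, not Koiran's algorithm, and all names are mine). A system with a complex solution is solvable modulo every odd prime here, while a system with no complex solution is solvable modulo no prime at all; the zero-versus-positive-density gap is exactly what the hashing argument exploits:

```python
def solvable_mod_p(polys, p):
    # brute force: do all polynomials in the system vanish simultaneously
    # at some point (x, y) of F_p^2?
    return any(all(f(x, y) % p == 0 for f in polys)
               for x in range(p) for y in range(p))

small_primes = [2, 3, 5, 7, 11, 13, 17, 19, 23, 29, 31, 37, 41, 43, 47]

# x + y = 1, x = y: solvable over C (x = y = 1/2), and mod every odd prime.
has_c_solution = [lambda x, y: x + y - 1, lambda x, y: x - y]
# x = 0, x = 1: no solution over C, hence none modulo any prime.
no_c_solution = [lambda x, y: x, lambda x, y: x - 1]

good = [p for p in small_primes if solvable_mod_p(has_c_solution, p)]
bad = [p for p in small_primes if solvable_mod_p(no_c_solution, p)]
print(len(good), len(bad))  # prints 14 0 (all odd primes vs. none)
```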
-As a bonus that's an answer in its own right, in 2011 Kuperberg applied Koiran's results to give a small certificate of knottedness for a knot diagram (again conditioned on Effective Chebotarev by way of the GRH).<|endoftext|>
-TITLE: Separable field extensions *without* using embeddings or automorphisms
-QUESTION [10 upvotes]: If $K\subseteq L$ is a field extension and $x\in L$ is algebraic, we say that $x$ is separable over $K$ iff its minimal polynomial $f$ over $K$ is separable (i.e., $f$ is relatively prime with its derivative). We say that $L$ is a separable algebraic extension iff every element of $L$ is separable algebraic.
-These definitions, of course, are quite standard. Now what I'd like to find is a proof of such standard facts as "if $x$ is algebraic separable over $K$ then $K(x)$ is separable" and "if $L$ is algebraic separable over $K$ and $M$ is such over $L$ then $M$ is such over $K$", or a definition of the separable degree of an extension, all without using field automorphisms or the trick of counting embeddings in an algebraic closure.
-(There can be a number of reasons to want this: for pedagogical purposes, out of a desire to postpone a discussion of Galois theory to a later point, or because embeddings/automorphisms are computationally or logically more complex objects than field extensions, or simply because it seems that the point of view in the first paragraph above should be more natural, or to compare different points of view.)
-Now every textbook I could find on field extensions uses at some point a comparison between the number of embeddings of $L$ in the algebraic closure of $K$ and the degree $[L:K]$. But surely this can be avoided (we can, instead, work explicitly with roots of polynomials and perhaps elementary symmetric functions).
-So, does someone know a place where separable field extensions are introduced without counting embeddings or similar objects, staying as close as possible to the definition I gave above?
-Edit: Maybe the nicest definition of an algebraic $x$ being separable over $K$ of characteristic $p$ is that $K(x) = K(x^p)$. - -REPLY [3 votes]: After giving this some thought, here's how it might work: - -Define an element $x$ of an extension field of $k$ to be algebraic separable over $k$ iff it is algebraic and its minimal polynomial $f$ over $k$ is relatively prime with $f'$, which is equivalent to saying $f' \neq 0$. Since any polynomial $f$ in characteristic $p>0$ can be written uniquely as $f(t) = f_0(t^{p^e})$ for some $e$ with $f_0' \neq 0$, in the context where $x$ has $f$ as minimal polynomial, $f_0$ is irreducible and $x$ is separable iff $e=0$. -Proposition 1: over a field $k$ of characteristic $p>0$, if $f(t) = f_0(t^{p^e})$ with $e>0$ and $f_0$ monic, then $f$ is reducible iff the coefficients of $f_0$ (equivalently, those of $f$) are $p$-th powers, and in this case $f$ is, in fact, a $p$-th power. The "if" part is easy, and the "only if" part can be proved by reducing to $e=1$, considering the factorization of $f_0^p$ inside $k^p[t]$ and using the following lemma: if $h \in k[t]$ satisfies $h^i \in k^p[t]$ for some $1\leq i<p$, then $h \in k^p[t]$. -Proposition 2: if $x$ is algebraic over a field $k$ of characteristic $p>0$ then exactly one of the following statements holds: either (a) $x$ is separable, the minimal polynomial of $x^p$ over $k$ has coefficients in $k^p$ and $\deg(x) = \deg(x^p)$ and $k(x) = k(x^p)$, or (b) $x$ is not separable, the minimal polynomial of $x^p$ over $k$ does not have all its coefficients in $k^p$ and $\deg(x) = p\cdot\deg(x^p)$. (This is easy using proposition 1.) -Proposition 3: if $k \subseteq K$ is a finite extension of fields of characteristic $p>0$ and $K^p$ spans $K$ as a $k$-vector space, then $K$ is separable over $k$ (meaning that every element of $K$ is separable over $k$). This is basically saying that the extensions $K^p$ and $k$ of $k^p$ (inside $K$) are linearly disjoint, but we don't really need all the machinery of linear disjointness to prove it. Proof.
Let $x_1,\ldots,x_d$ be a basis of $K$ as a $k$-vector space (where $d = [K:k]$) and let $y \in K$ have degree $d'$: write $y^j = \sum_{i=1}^{d} c_{i,j} x_i$ for $0\leq j\leq d'-1$ on the chosen basis: since $1,y,\ldots,y^{d'-1}$ are $k$-linearly independent, the matrix $(c_{i,j})$ has rank $d'$; but raising to the $p$-th power, we have $y^{pj} = \sum_{i=1}^{d} c_{i,j}^p x_i^p$. Now the hypothesis that $K^p$ spans $K$ as a $k$-vector space implies that $x_1^p,\ldots,x_d^p$ do so, so they are a basis of $K$ as a $k$-vector space. And the matrix of the $c_{i,j}^p$ has the same rank as that of the $c_{i,j}$ since Frobenius is an isomorphism from $k$ to $k^p$, and the rank of a matrix does not depend on the field where it is computed. So from $y^{pj} = \sum_{i=1}^{d} c_{i,j}^p x_i^p$ we deduce that $1,y^p,\ldots,y^{p(d'-1)}$ are linearly independent over $k$, that is, $y^p$ has degree $d' = \deg(y)$, and by proposition 2 that $y$ is separable. End of proof. -Proposition 4: if $x_1,\ldots,x_n$ are such that $x_i$ is algebraic separable over $k(x_1,\ldots,x_{i-1})$, then $k(x_1,\ldots,x_n)$ is (algebraic) separable over $k$. Proof: in characteristic $0$ there is nothing to prove, and in characteristic $p>0$ we have $k(x_1) = k(x_1^p)$ because $x_1$ is separable over $k$, then $k(x_1)(x_2) = k(x_1)(x_2^p) = k(x_1^p)(x_2^p)$ because $x_2$ is separable over $k(x_1)$, and so on; so $k(x_1,\ldots,x_n) = k(x_1^p,\ldots,x_n^p)$, so it is easy to see that the monomials in $x_1^p,\ldots,x_n^p$ span $k(x_1,\ldots,x_n)$ as a $k$-vector space, so proposition 3 applies. -The following statements are then almost trivial: if $(x_i)_{i\in I}$ are all algebraic separable over $k$ then the extension $k(x_i)_{i\in I}$ they generate is (algebraic) separable. (Proof: it is enough to prove it for a finite subfamily, a case which is contained in proposition 4.)
-And if $k \subseteq K \subseteq L$ is a tower of fields with $K$ algebraic separable over $k$ and $L$ algebraic separable over $K$ then $L$ is (algebraic) separable over $k$. (Proof: take $y \in L$ and $x_1,\ldots,x_n \in K$ the coefficients of its minimal polynomial over $K$; then proposition 4 applies to $k(x_1,\ldots,x_n,y)$.) -It then makes sense to define the relative (algebraic) separable closure of $k$ in some extension field $K$ as the extension generated by all elements of $K$ which are algebraic separable over $k$, and which is, in fact, the set of all such elements by the above. We can say that $K$ is a purely inseparable extension of $k$ when $k$ is equal to its separable closure in $K$, and this is equivalent to the minimal polynomial over $k$ of every element of $K$ being of the form $t^{p^e} - c$ for some $c \in k$. - -I didn't work out the properties of the separable degree in as much detail, but we can define $[K:k]_{\mathrm{sep}}$ as the degree of the relative (algebraic) separable closure of $k$ inside $K$, which makes sense because of the above. The crucial point in showing that the separable degree is multiplicative is to show that if $k \subseteq K$ is purely inseparable and $K \subseteq K'$ is finite separable and $k'$ is the separable closure of $k$ in $K'$ then $[k':k] = [K':K]$, meaning essentially that $K$ and $k'$ are linearly disjoint extensions of $k$ inside $K'$: again, the machinery of linear disjointness can certainly be avoided. -To summarize, I think it's best to think in terms of linear disjointness ("MacLane's criterion"), if not explicitly at least implicitly. -I'm still interested in knowing whether any textbook uses this approach.<|endoftext|> -TITLE: Can a "continuous" convex combination not be an element of the convex hull? -QUESTION [10 upvotes]: Short version of question: can a "continuous" convex combination not be an element of the convex hull? -I am not a mathematician, so please excuse me if I am not precise.
I first consider, e.g., $4$-dimensional real-valued vectors $a \in \mathbb{R}^4$. Now consider a set of $n$ vectors $a_i$, $i\in\{1,2,...,n\}$, and the set containing all convex combinations of these vectors -\begin{equation} -C=\left\{\sum_{i=1}^k \hat{w}_i a_i \,\middle|\, k\in\{1,2,...,n\}, \sum_{i=1}^k \hat{w}_i = 1, \hat{w}_i \geq 0 \ \forall i\right\} \ . -\end{equation} -As far as I understand the definition of the convex hull, see the third definition in Wikipedia, the set $C$ is the convex hull of these vectors and trivially any convex combination of vectors lies in $C$. -Now, I am taking a look at the following problem over a non-convex region $\Omega \subset \mathbb{R}^2$ for vector-valued functions $a(x) \in \mathbb{R}^4$ with $x \in \Omega$ -\begin{equation} -\lambda = \int_\Omega w(x) a(x) dx \in \mathbb{R}^4 -\end{equation} -with real-valued $w(x) \in \mathbb{R}$ with the following properties -\begin{equation} -\int_\Omega w(x) dx = 1 , \quad -w(x) \geq 0 \quad \forall x \in \Omega -\end{equation} -where $w(x)$ may be a distribution. Due to the properties of $w(x)$, I interpret for any $w(x)$ the integral $\lambda$ to be a "continuous" convex combination of the values of $a(x)$ over $\Omega$. The set of all possible $\lambda$ for all distributions $w(x)$ having the properties mentioned above will be denoted as -\begin{equation} -\Lambda = \left\{\lambda \,\middle|\, \lambda = \int_\Omega w(x) a(x) dx , \int_\Omega w(x) dx = 1 , -w(x) \geq 0 \quad \forall x \in \Omega\right\} -\end{equation} -and the convex hull of all values of $a(x)$ as -\begin{equation} -\Gamma = \left\{\sum_{i=1}^k \hat{w}_i a(x_i) \,\middle|\, k\in\mathbb{N}, x_i \in \Omega, \sum_{i=1}^k \hat{w}_i = 1, \hat{w}_i \geq 0 \ \forall i\right\} \ . -\end{equation} -Question: are the sets $\Lambda$ and $\Gamma$ the same or can I find a $w(x)$ such that the resulting $\lambda \not\in \Gamma$? This would be somehow very unintuitive for me, but I am not a mathematician.
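One way to see why nothing new should appear: any Riemann-sum discretization of $\lambda$ is itself a finite convex combination of values $a(x_i)$, hence an element of $\Gamma$. A small numerical sketch (the particular $a(x)$ and $w(x)$ are made up for illustration):

```python
# Discretize lambda = \int w(x) a(x) dx over Omega = [0, 2*pi] with
# a(x) = (cos x, sin x) and an illustrative weight w(x) ~ 1 + cos x.
# The Riemann-sum weights are nonnegative and sum to 1, so each partial
# sum is a finite convex combination, i.e. an element of Gamma.
import math

N = 10_000
xs = [2 * math.pi * (i + 0.5) / N for i in range(N)]
raw = [1 + math.cos(x) for x in xs]       # unnormalized weights >= 0
total = sum(raw)
w_hat = [r / total for r in raw]          # convex-combination weights

lam = (sum(w * math.cos(x) for w, x in zip(w_hat, xs)),
       sum(w * math.sin(x) for w, x in zip(w_hat, xs)))
print(lam)  # close to (1/2, 0), inside the unit disc = conv(a(Omega))
```

Here the exact integral gives $\lambda = (1/2, 0)$, and the discretized $\lambda$ is by construction in $\Gamma$ for every $N$.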
I keep thinking of this with Dirac distributions defined on $\Omega$ and $n$ going to infinity in the case of simple vectors, as sketched at the beginning. Therefore, I cannot imagine any case in which I should be able to combine values of $a(x)$ and end up outside of $\Gamma$. But the more I read about distributions, the more weird things are possible! Any help is very appreciated. Thanks a lot! - -REPLY [5 votes]: As others have said the two sets are the same. The fact that $\Gamma\subset \Lambda$ essentially follows from the fact that a convex combination $\sum_1^k \hat{w}_i a(x_i)$ is equal to $\int a(x)w(x)\, dx$ with $w=\sum_1^k \hat{w}_i\delta_{x_i}$ (Dirac deltas concentrated at the points $x_i$). -The opposite inclusion follows from Jensen's inequality. Consider the function (this is called the characteristic (or indicator) function in convex analysis) -$$I_\Gamma(x)=\begin{cases} 0 & x\in \Gamma \\ +\infty & x\notin \Gamma\end{cases}$$ -This function is convex and $\Gamma=\{x\ :\ I_\Gamma(x)=0\}$. Now let $\lambda = \int a(x)w(x)\, dx\in\Lambda$. By Jensen's inequality -$$ -I_\Gamma(\lambda)\le \int I_\Gamma(a(x))w(x)\, dx=0, $$ -so $I_\Gamma(\lambda)=0$, which means that $\lambda \in \Gamma$.<|endoftext|> -TITLE: In what sense does analyticity guarantee the following equality? -QUESTION [9 upvotes]: I was reading a paper$^1$ on particle physics, and at some point it is stated that, provided $f(x)$ is analytic, we have -$$ -f(x)-f(0)=\frac{x}{\pi}\int_0^\infty \frac{\text{Im}\;f(y)}{y(y-x-i\varepsilon)} \;\mathrm dy\tag{1} -$$ -where the $i\varepsilon$ is supposed to be taken $\varepsilon\to 0^+$ after integrating. -This looks very similar to what we physicists call the Kramers-Kronig relations, though I believe in mathematics it is called the Sokhotski-Plemelj theorem: -$$ -\int_a^b\frac{f(x)}{x-i\varepsilon}\mathrm dx=i\pi f(0)+\mathcal P\!\int_a^b\frac{f(x)}{x}\mathrm dx \tag{2} -$$ -where $\mathcal P$ means Cauchy principal value.
-My questions: is the relation $(1)$ true in general? Under what circumstances? Is it possible to prove $(1)$ from $(2)$? Or is $(2)$ irrelevant here? - -$^1$ The Muon g-2, by F. Jegerlehner and A. Nyffeler, arXiv:0902.3360v1, page 39. - -REPLY [3 votes]: I think I was able to prove this myself: -Write the Kramers-Kronig relations as -$$ -\text{Re}\; g(x)=\frac{1}{\pi}\mathcal P\int \frac{\text{Im}\;g(y)}{y-x} \mathrm dy -$$ -Using the Sokhotski-Plemelj theorem, namely -$$ -\mathcal P\int \frac{\text{Im}\;g(y)}{y-x} \mathrm dy=\int \frac{\text{Im}\;g(y)}{y-x-i\varepsilon} \mathrm dy-i\pi\ \text{Im}\; g(x) -$$ -we find -$$ -g(x)=\frac{1}{\pi}\int \frac{\text{Im}\;g(y)}{y-x-i\varepsilon} \mathrm dy -$$ -Finally, by taking $g(x)\equiv (f(x)-f(0))/x$ we get the expression in the OP. The answer to the question "under what assumptions" is: we must have $\text{Im}\;f(0)=0$.<|endoftext|> -TITLE: Why is exponentiation right associative? -QUESTION [6 upvotes]: From Wikipedia: - -In order to reflect normal usage, addition, subtraction, - multiplication, and division operators are usually left-associative - while an exponentiation operator (if present) is right-associative - -For example, we evaluate $2^{2^2}$ as $2^4$ rather than $4^2$. All other operations besides exponentiation, tetration, etc. are inherently left associative. Is there any reason why it's different for exponents? - -REPLY [10 votes]: As André Nicolas writes, it is because $$ (a^b)^c = \underbrace{(a^b) \cdot (a^b) \cdots (a^b)}_{c\text{-times}} = a^{b \cdot c} \text{,} $$ so there is no need to use iterated exponentiation to represent that idea. -Additionally, consider the evaluation tree for the expression "a^b^c". It is -"^(a, ^(b,c))" because, by order of operations, we evaluate the exponent first. -Finally, addition and multiplication are commutative and associative, so it does not matter in what order they are applied.
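Most programming languages that provide an exponentiation operator adopt the same right-associative convention; a quick check in Python:

```python
# Python's ** is right-associative, matching the mathematical convention.
print(2 ** 2 ** 3)     # parsed as 2 ** (2 ** 3) = 2 ** 8 = 256
print((2 ** 2) ** 3)   # forced left association: 4 ** 3 = 64
print(2 ** (2 * 3))    # (a**b)**c collapses to a**(b*c): also 64
```

The left-associated reading is redundant precisely because it equals `a ** (b * c)`, which is the point made above.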
Claiming that subtraction is left associative may be incomplete, depending on how you define subtraction. If subtraction is not adding the additive inverse, then it does not acquire commutativity and associativity $$\begin{align*} - a - b - c &= (a-b)-c \neq a-(b-c) \\ - &= a-(b+c) \\ - &= a+ (-b) + (-c). -\end{align*}$$ The same thing happens for division: $$\begin{align*} - a / b / c &= (a/b)/c \neq a/(b/c) \\ - &= a/(b \times c) \\ - &= a(b^{-1})(c^{-1}). -\end{align*}$$ (It is, perhaps, instructive to realize I just wrote the same display twice.) If we define subtraction and division as in the third lines of the two displays, then these operations are just as associative and commutative as addition and multiplication. -Is there any hope of doing the same thing with exponentiation? The above come in inverse pairs. Perhaps we should look at logarithms. But neither $ \log_a b = \log_b a$ nor $a^b = b^a$, so we don't have an associative and commutative member of the pair to write both operations in terms of. There is a commutative variation, $a^{\ln b} = \mathrm{e}^{(\ln a) (\ln b)} = b^{\ln a}$, but I can't say this is a common operation or that others would recognize it as rapidly as the six operations discussed so far. (This idea can be generalized to make commutative variants of tetration and higher operations.)<|endoftext|> -TITLE: Is it always possible to find one non-trivial homomorphism between modules? -QUESTION [8 upvotes]: I believe this question is an elementary one, and it may have a very simple answer, of which I'm not aware yet. -Given two non-trivial modules over the same non-trivial ring (or two groups, or two rings, whatever..) is it always possible to find a non-trivial homomorphism between them? Not any special type of homomorphism, just a non-trivial one. If not, could you give me a counter-example?
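A brute-force sanity check (written just for this post) suggests the answer is no: already for the $\mathbb Z$-modules $\mathbb Z/2$ and $\mathbb Z/3$ the only additive map is zero.

```python
# Enumerate additive maps Z/m -> Z/n: each candidate is determined by
# f(1) = v via f(x) = x*v mod n, and well-definedness forces m*v = 0 in Z/n.
def hom_values(m, n):
    maps = []
    for v in range(n):
        f = {x: x * v % n for x in range(m)}
        if all(f[(x + y) % m] == (f[x] + f[y]) % n
               for x in range(m) for y in range(m)):
            maps.append(v)
    return maps

print(hom_values(2, 3))  # [0]: only the zero map Z/2 -> Z/3
print(hom_values(2, 4))  # [0, 2]: a nonzero map exists when gcd(m, n) > 1
```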
-I am thinking if I can make a commutative diagram like this: let $X$, $Y$ and $Z$ be non-trivial modules over the same ring, and let $\lambda: Z \rightarrow X$ be a module homomorphism whose image $\text{Img}(\lambda)$ is a proper submodule of $X$. So the quotient module $X/\text{Img}(\lambda)$ is a non-trivial module. Can I guarantee existence of a non-trivial module homomorphism between $X/\text{Img}(\lambda)$ and $Y$, my arbitrary module? And by that, I have an induced non-trivial homomorphism between $X$ and $Y$. - -REPLY [10 votes]: For commutative rings there is a geometric way to think about these things. Every module $M$ over a commutative ring $R$ has a support, which is the set of prime ideals $P$ such that the localization $M_P$ at $P$ is nonzero. This use of "support" is analogous to the notion of support of a function: it's "where the module is nonzero." -For example, when $R = \mathbb{Z}$ the set of prime ideals consists of the zero ideal $(0)$ and the ideals $(p)$ for $p$ a prime. The support of $\mathbb{Z}/p\mathbb{Z}$ consists only of $(p)$; loosely speaking, this module behaves like a "delta function" which is nonzero only at $p$. - -Proposition: if $M$ and $N$ are $R$-modules with disjoint support such that $M$ is finitely presented, then the only homomorphism $M \to N$ is the zero homomorphism. - -Proof. We want to show that the hom module $\text{Hom}_R(M, N)$ is zero. This condition is local in the sense that a module is zero iff its localizations are, so it suffices to show that the localizations $\text{Hom}_R(M, N)_P$ are zero. Since $M$ is finitely presented, localization commutes with hom in the sense that -$$\text{Hom}_R(M, N)_P \cong \text{Hom}_{R_P}(M_P, N_P)$$ -for all prime ideals $P$. But by hypothesis, $M$ and $N$ have disjoint supports, so for any $P$ either $M_P$ or $N_P$ is zero, and hence so is the localization of the hom module at $P$.
$\Box$ -In other words, again loosely speaking, a homomorphism $M \to N$ must be zero because it is "zero at every point." -The analogy between support for modules and support for functions is tighter if we take tensor products instead of homs: we can drop the finitely presented hypothesis, because localization always commutes with tensor products, and we get that if $M$ and $N$ are $R$-modules with disjoint support then $M \otimes_R N = 0$, an exact analogue of the observation that two functions with disjoint support multiply to zero.<|endoftext|> -TITLE: Arithmetic growth versus exponential decay -QUESTION [6 upvotes]: I have a kilogram of an element that has a long half-life - say, 1 year - and I put it in a container. Now every day after that I add another kilogram of the element to the container. -Does the exponential decay eventually "dominate" or does the amount of the substance in the container increase without bound? -I know this should be a simple answer but it's been too long since college... - -REPLY [2 votes]: Instead of looking at the whole sequence, you can look at a difference equation. Suppose $m(t)$ is the mass you have after $t$ days. Put $\Delta(m)(t)=m(t+1)-m(t)$. What you are asking is, essentially, what is the (approximate) limit for a solution to the initial value problem -$$ -\begin{cases} - \Delta(m)=-(1-(\frac{1}{2})^{1/365})m+1\\ - m(0)=1 -\end{cases} -$$ -Consider the difference equation $\Delta(m)=-(1-(\frac{1}{2})^{1/365})m+1$. This is equal to zero precisely when $m$ is equal to the equilibrium mass $m_e=1/(1-(\frac{1}{2})^{1/365})\approx 527.083$, and moreover, whenever $m<m_e$ we have $\Delta(m)>0$, while whenever $m>m_e$ we have $\Delta(m)<0$. Moreover, if $m$ is very close to $m_e$, the absolute value $\lvert \Delta(m)\rvert$ is very small (so once we are close to $m_e$ we can't jump too far away), while it is relatively large while we are far away (so we can't suddenly slow down far away from $m_e$, so we will eventually get close).
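The recursion $m(t+1) = (\tfrac{1}{2})^{1/365}\, m(t) + 1$ implied by the difference equation is easy to simulate directly (a sketch; the day counts printed are arbitrary):

```python
# m(t+1) = decay * m(t) + 1 with a 365-day half-life: the mass settles at
# the equilibrium m_e = 1/(1 - (1/2)**(1/365)) instead of growing forever.
decay = 0.5 ** (1 / 365)
m_e = 1 / (1 - decay)

m = 1.0
for day in range(1, 20_001):
    m = decay * m + 1
    if day in (365, 3650, 20_000):
        print(day, round(m, 3))
print(round(m_e, 3))  # about 527.083
```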
-This tells us that regardless of the initial value $m(0)$, the masses in consecutive days will either a) get closer and closer to $m_e$ indefinitely or b) get closer and closer and eventually it will go past $m_e$, and immediately start going back, perhaps overshooting $m_e$ again, possibly repeating the pattern indefinitely, and possibly only a few times. -If you were to add the new mass continually as opposed to every day, you would always end up in the case a) instead, and possibly you can rule out b) by analysing the numbers more carefully. Either way, no matter where you start (i.e. even if you start with a huge amount of the stuff, or somehow have it in the negatives to begin with), you will eventually get close to $m_e$ and stay close.<|endoftext|> -TITLE: Why isn't there a formula for $\zeta(k)=\sum_{n=1}^\infty\frac{1}{n^k}$ involving $\pi$ when $k$ is odd? -QUESTION [6 upvotes]: Now we know that $\sum \frac{1}{n}$ diverges and $\sum \frac{1}{n^2}=\frac{\pi^2}{6}$, but for $\sum \frac{1}{n^3}=1.20\ldots$ there is no such closed form, while again $\sum \frac{1}{n^4}=\frac{\pi^4}{90}$. There is a general formula whenever the power is $2k$, where $k$ is a positive integer. Why doesn't there exist a series value involving $\pi$ for odd powers like $n^3$, $n^5$? - -REPLY [9 votes]: In Euler's solution to the Basel problem, he factors sine into an infinite product of binomials, -$$ \frac{\sin( z)}{z} =\prod_{n=1}^\infty \left(1-\frac{z^2}{\pi^2 n^2}\right) , $$ -these binomials have the nice form of a difference of squares because the roots of sine are symmetric about the origin. The roots are also spaced by integer units of $\pi$ which is the key to solving the problem. -Because sine's roots are symmetric, we can rotate the argument of sine in the complex plane to get a difference of $4$'th powers or a difference of $6$'th powers and so on.
For instance we can write, -$$ \frac{\sin(z)}{z}\frac{\sin(iz)}{iz} =\prod_{n=1}^\infty \left(1-\frac{z^4}{\pi^4 n^4}\right) , $$ -this essentially follows from the identity $(1-(iz)^2)(1-z^2)=(1-z^4)$. -If we could somehow get a difference of cubes we could solve $\sum 1/n^3$ using, -$$ \text{(Some function with a known Taylor series)}=\prod_{n=1}^\infty \left(1-\frac{z^3}{n^3} \right).$$ -The problem is that the cube roots of unity are not symmetric about the origin. Because of this we can't use sine. -We could use the reciprocal of the Gamma function, which has roots exactly at zero and the negative integers, -$$ \frac{1}{\Gamma(-z)}\frac{1}{\Gamma(-e^{i2\pi/3}z)} \frac{1}{\Gamma(-e^{-2i\pi/3}z)} \ \propto \ \prod_{n=1}^\infty \left(1-\frac{z^3}{n^3} \right),$$ -but unfortunately the relevant Taylor coefficient is only known in terms of $\zeta(3)$, which is what we would like to discover.<|endoftext|> -TITLE: Mutual information vs Information Gain -QUESTION [11 upvotes]: I always thought that mutual information and information gain refer to the same thing, however looking at Wikipedia: -http://en.wikipedia.org/wiki/Information_gain -https://en.wikipedia.org/wiki/Mutual_information -I see that information gain is something completely different and asymmetrical. What are the differences in practice? When should I choose one or the other? - -REPLY [8 votes]: We know that $H(X)$ quantifies the amount of information that each observation of $X$ provides, or, equivalently, the minimal number of bits that we need to encode $X$ ($L_X \to H(X)$, where $L_X$ is the optimal average codelength, by Shannon's first theorem). -The mutual information -$$I(X;Y)=H(X) - H(X \mid Y)$$ -measures the reduction in uncertainty (or the "information gained") for $X$ when $Y$ is known. -It can be written as $$I(X;Y)=D(p_{X,Y}\mid \mid p_X \,p_Y)=D(p_{X\mid Y} \,p_Y \mid \mid p_X \,p_Y)$$ -where $D(\cdot)$ is the Kullback–Leibler divergence or distance, or relative entropy...
or information gain (this latter term is not so much used in information theory, in my experience). -So, they are the same thing. Granted, $D(\cdot)$ is not symmetric in its arguments, but don't let that confuse you. We are not computing $D(p_X \mid \mid p_Y)$, but $D(p_{X,Y}\mid \mid p_X \,p_Y)$, and this is symmetric in $X,Y$. -A slightly different situation (to connect with this) arises when one is interested in the effect of knowing a particular value of $Y=y$. In this case, -because we are not averaging on $y$, the amount of bits gained [*] would be $ D(p_{X\mid Y} \mid \mid p_X )$... which depends on $y$. -[*] To be precise, that's actually the amount of bits we waste when coding the conditioned source $X\mid Y=y$ as if we didn't know $Y$ (using the unconditioned distribution of $X$)<|endoftext|> -TITLE: Does pointwise convergence imply uniform convergence when the limit is continuous? -QUESTION [8 upvotes]: Suppose we have a sequence of functions $f_n: \mathbb R \rightarrow [0,1]$ and a continuous function $f: \mathbb R \rightarrow [0,1]$. Suppose that $f_n \rightarrow f$ pointwise as $n \rightarrow \infty$. Is it true that $f_n$ converges uniformly also? What is the case if all $f_n$ and $f$ are monotone functions? -Edit1: Consider the case when $f_n$ and $f$ are distribution functions. All are monotone, continuous functions with $$\lim_{x\to -\infty}f(x)=0$$ and $$\lim_{x\to\infty}f(x)=1.$$ -There are numerous questions on the site already, the most relevant is this: -Does pointwise convergence against a continuous function imply uniform convergence? -In the marked answer, there are two counterexamples, but I think both example sequences of functions converge to a $g(x)=\delta(x)$, which is not continuous. Am I right? If yes, how could the original statement be proved? -I am also aware of Dini's Theorem, but that applies only to functions on closed intervals. - -REPLY [3 votes]: Consider an indicator function $f_n(x)=I_{(n,n+1)}(x)$.
This converges pointwise to the zero function, but the convergence is not uniform. You can replace $f_n(x)$ with a continuous function (a bump function) and the same idea works. If you want a monotone function, use $f_n(x)=I_{(n,\infty)}(x)$ (or a continuous version of this).<|endoftext|> -TITLE: Limit definition of curvature and torsion -QUESTION [8 upvotes]: Given two points, $P$ and $Q$, lying on a curve $\gamma: \mathbb{R} \to \mathbb{R}^3$, curvature at $P$ can be defined via the limit $$\kappa (P) = \lim_{Q \to P} \sqrt{\frac{24 (s(P,Q) - d(P,Q))}{s(P,Q)^3}},$$ where $s(P,Q)$ is the arc length of the curve $\gamma$ between the points $P$ and $Q$, and $d(P,Q)$ is the length of a line segment from $P$ to $Q$. -Is there a similar geometrical definition of torsion at $P$? - -REPLY [2 votes]: The example given is for curvature, in the osculating plane, with rotation $\theta$: -Arc $ s = 2 R \theta $ in $ \mathbb R^2$ -Direct Euclidean distance $ d = 2 R \sin \theta$ -Series approximation to third order -$$ s/R = 2 \theta;\, d/R= 2 \sin \theta \approx 2(\theta - \theta^3/3!)= 2\left(s/2R- (s/R)^3/(8\cdot3!)\right) $$ -$$ d \approx s - s^3/(24 R^2 )$$ -$$ \frac1R = \kappa \approx \sqrt \frac{ 24 (s-d)}{s^3}$$ -the same as your result. You see that we dealt with infinitesimal lengths (that later on tend to zero) as small finite lengths that can be sketched to visible proportions. [Incidentally I read somewhere it was similarly handled by Leibniz during the earliest stages of calculus].
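The circle computation above is easy to sanity-check numerically: with the exact arc and chord formulas, the limit expression indeed recovers $1/R$.

```python
# On a circle of radius R: s = 2*R*theta, d = 2*R*sin(theta), and the
# expression sqrt(24*(s - d)/s**3) should tend to the curvature 1/R.
import math

R = 2.5
for theta in (0.1, 0.01, 0.001):
    s = 2 * R * theta
    d = 2 * R * math.sin(theta)
    print(theta, math.sqrt(24 * (s - d) / s ** 3))  # tends to 1/R = 0.4
```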
-We used $ t' = \kappa\, n $ in the Frenet-Serret frame. -Similarly we can now develop -$$ b'= \tau \,n $$ -in the plane of normals (principal and bi-normal). -Let the instantaneous radius of torsion be $\rho$; we have torsion -$$ \tau = \frac{d \theta }{ds} = \frac{\sin \psi }{\rho} $$ -Length $ dl $ extends by twisting to $ ds$ in the normal plane, so that $ \cos \psi = dl/ds$ -$$ \sin \psi \approx 1 - ( dl /ds)^2/ 2 ;\,\, \tau = \frac{1 - ( dl /ds)^2/ 2 }{\rho} $$ -You can write/define, using your nomenclature, -$$ ds = s(P,Q), \quad dl = d(P,Q). $$<|endoftext|> -TITLE: $C_\infty$ analog of the correspondence between $A_\infty$-alg. structures on $A$ and dg coalg. structures on $(\bar T(sA),\Delta)$ -QUESTION [7 upvotes]: There is a 1-1-correspondence between $A_\infty$-algebra structures on a graded vector space $A$ and dg. coalgebra structures on the bar construction $(\bar T(sA),\Delta)$. -My question: -Is there any analogous statement for $C_\infty$-algebras? Recently I heard that a $C_\infty$-structure on $C$ corresponds to a dg. structure on the cofree Lie coalgebra generated by $sC$ but I can't find any reference for that or prove it myself. - -REPLY [7 votes]: As Qiaochu Yuan mentions in a comment, Koszul duality of operads is an answer. I think the standard reference now is Algebraic operads by Loday and Vallette. It's a rather vast subject, so I'll try to make a (quick) sketch of it – read the book for more details. There is also the book Operads in algebra, topology and physics by Markl, Shnider, and Stasheff. -For all this answer, I will consider things happening in dg-modules (chain complexes) over a field. -Koszul duality is something that takes a (quadratic) operad $\mathtt{P}$ and spits out a dual cooperad $\mathtt{P}^¡$ together with a so-called twisting morphism $\kappa : \mathtt{P}^¡ \to \mathtt{P}$. This twisting morphism induces a morphism from the cobar construction $\Omega \mathtt{P}^¡$ (an operad) to $\mathtt{P}$.
-When the operad $\mathtt{P}$ satisfies a special property called "being Koszul", this morphism $\Omega \mathtt{P}^¡ \to \mathtt{P}$ is a quasi-isomorphism, i.e. it induces an isomorphism on homology. The operad $\Omega \mathtt{P}^¡$ is then often denoted $\mathtt{P}_\infty$. -It's sometimes a bit easier to work with operads rather than with cooperads, and it's possible to construct a Koszul dual operad $\mathtt{P}^!$ from $\mathtt{P}$. This is an involution: $(\mathtt{P}^!)^! = \mathtt{P}$. The famous trinity of operads $\mathtt{Ass}$ (associative algebras), $\mathtt{Com}$ (commutative algebras), and $\mathtt{Lie}$ (Lie algebras) are all Koszul, and their duals are $\mathtt{Ass}^! = \mathtt{Ass}$, $\mathtt{Com}^! = \mathtt{Lie}$, and $\mathtt{Lie}^! = \mathtt{Com}$. -Now, what's the link with $A_\infty$- and $C_\infty$-algebras? It turns out that an $A_\infty$-algebra is the same thing as an algebra over $\mathtt{Ass}_\infty = \Omega \mathtt{Ass}^{¡}$, and a $C_\infty$-algebra is the same thing as an algebra over $\mathtt{Com}_\infty = \Omega \mathtt{Com}^¡$. This is not really a coincidence: when $\mathtt{P}$ is a Koszul operad, the operad $\mathtt{P}_\infty$ enjoys very nice properties. If $A$ is a dg-module equipped with a $\mathtt{P}$-algebra structure and $B$ is a dg-module quasi-isomorphic to $A$, then $B$ cannot necessarily be equipped with a $\mathtt{P}$-algebra structure; but it can be equipped with a $\mathtt{P}_\infty$-algebra structure, such that the quasi-isomorphism respects this structure. Moreover, a quasi-isomorphism $X \xrightarrow{\sim} Y$ of $\mathtt{P}$-algebras can always be inverted $X \xleftarrow{\sim} Y$, but the inverse is really an $\infty$-quasi-isomorphism of $\mathtt{P}_\infty$-algebras. (I believe these properties are what initially motivated the definition of $A_\infty$- and $C_\infty$-algebras, even before Koszul duality of operads was discovered.)
-And now, Koszul duality allows one to reformulate the definition of $A_\infty$- and $C_\infty$-algebras. Since $\mathtt{Ass}^! = \mathtt{Ass}$, by some general abstract nonsense, to give $X$ an algebra structure over $A_\infty = \mathtt{Ass}_\infty$ is exactly the same thing as giving a square zero coderivation on $T^c(\Sigma X)$, the cofree coassociative (conilpotent) coalgebra on the suspension of $X$. And similarly, since $\mathtt{Com}^! = \mathtt{Lie}$, to give $X$ an algebra structure over $C_\infty = \mathtt{Com}_\infty$ is exactly the same thing as giving a square zero coderivation on $L^c(\Sigma X)$, the cofree Lie coalgebra on the suspension of $X$. -(And as a bonus, since $\mathtt{Lie}^! = \mathtt{Com}$, an $L_\infty$ structure on $X$ is thus the same thing as a square zero coderivation on $S^c(\Sigma X)$, the cofree cocommutative coalgebra on the suspension of $X$.)<|endoftext|> -TITLE: Uniform limit of Lipschitz functions is a Lipschitz function -QUESTION [5 upvotes]: Let $f_n:[0,1] \rightarrow \mathbb{R}$ be a sequence of Lipschitz functions. Each $f_n$ has a Lipschitz constant equal to $M_n>0$. -Suppose that $f_n$ converges -uniformly to a function $f$. Then $f$ is Lipschitz. -My attempt: -For all $x,y \in [0,1], x \neq y$: -$|f(x)-f(y)| \leq |f(x)-f_n(x)| + |f_n(x)-f_n(y)| + |f_n(y)-f(y)|$ -For $n\geq n_0$, we have $|f(x)-f_n(x)|<|x-y|$ and $|f(y)-f_n(y)|<|x-y|$ -since $f_n$ converges uniformly to $f$. -Thus: -$|f(x)-f(y)| \leq |f(x)-f_{n_{0}}(x)| + |f_{n_{0}}(x)-f_{n_{0}}(y)| + |f_{n_{0}}(y)-f(y)| \leq (2+M_{n_0})|x-y|$ -Then $f$ is Lipschitz with constant equal to $2+M_{n_0}$ -Am I right? Is there an easier way to solve this problem? -Thank you. - -REPLY [6 votes]: Elementary example: Let $f_n(x) = \sqrt {x+1/n}$ for $x\in [0,1].$ Then each $f_n$ is continuously differentiable on $[0,1],$ hence is Lipschitz there.
But $f_n(x)\to f(x)=\sqrt x$ uniformly on $[0,1],$ and $f$ is not Lipschitz there.<|endoftext|> -TITLE: Good "history of mathematical ideas" book? -QUESTION [63 upvotes]: All too often, mathematical history books include far too much material on the private lives of the personalities involved and not enough information on the actual ideas. Mathematics is a living subject in itself, which is not enhanced by knowing about the practitioners themselves (unless it can be shown how their lives link to their ideas, which is far too complex and speculative, and rarely as successful in shedding light on the ideas as a direct analysis of how their new idea grew from previous ones). Besides, can we really claim to know the details of a person's life well enough to be able to draw inferences on why they did something? This is why I'm looking for a good history of maths book that focuses on how the ideas developed through time, also including how (and ideally why) the notation changed, why the new ideas were introduced, and so on. In fact, this isn't too hard, as Lagrange admirably demonstrates in his "lectures on elementary mathematics" with his short and insightful exposition on the development of logarithms, where he ends it by remarking that: - -"Since the calculation of logarithms is now a thing of the past, - except in isolated instances, it may be thought that the details [i.e. - the history/development of the theory of logs] into which we have here - entered are devoid of value. We may, however, justly be curious to - know the trying and tortuous paths which the great inventors have - trodden, the different steps which they have taken to attain their - goal, and the extent to which we are indebted to these veritable - benefactors of the human race. Such knowledge, moreover, is not matter - of idle curiosity. It can afford us guidance in similar inquiries and - sheds an increased light on the subjects with which we are employed."
-
-(Lagrange was known to focus on the history of the ideas involved whenever he wrote a large treatise, such as the excellent history of mechanics with which he opens his Mechanique Analytique.)
-I couldn't sum up the reason for my interest in the history of the development of mathematical ideas any better.
-
-REPLY [3 votes]: A Concise History of Mathematics
-
-This compact, well-written history — first published in 1948, and now in its fourth revised edition — describes the main trends in the development of all fields of mathematics from the first available records to the middle of the 20th century. Students, researchers, historians, specialists — in short, everyone with an interest in mathematics — will find it engrossing and stimulating.
- Beginning with the ancient Near East, the author traces the ideas and techniques developed in Egypt, Babylonia, China, and Arabia, looking into such manuscripts as the Egyptian Papyrus Rhind, the Ten Classics of China, and the Siddhantas of India. He considers Greek and Roman developments from their beginnings in Ionian rationalism to the fall of Constantinople; covers medieval European ideas and Renaissance trends; analyzes 17th- and 18th-century contributions; and offers an illuminating exposition of 19th century concepts. Every important figure in mathematical history is dealt with — Euclid, Archimedes, Diophantus, Omar Khayyam, Boethius, Fermat, Pascal, Newton, Leibniz, Fourier, Gauss, Riemann, Cantor, and many others.
- For this latest edition, Dr. Struik has both revised and updated the existing text, and also added a new chapter on the mathematics of the first half of the 20th century. Concise coverage is given to set theory, the influence of relativity and quantum theory, tensor calculus, the Lebesgue integral, the calculus of variations, and other important ideas and concepts. The book concludes with the beginnings of the computer era and the seminal work of von Neumann, Turing, Wiener, and others.
- "The author's ability as a first-class historian as well as an able mathematician has enabled him to produce a work which is unquestionably one of the best." — Nature Magazine.<|endoftext|>
-TITLE: Probability measure on $\mathbb N$ such that $P(n \mathbb N) =1/n$ for all $n \ge 1$ cannot exist
-QUESTION [8 upvotes]: How to prove that $\mathbb N$ cannot be endowed with a probability space structure $(\mathbb N, \mathcal F, P)$ such that for all integers $n \ge 1$ we have $$P(n \mathbb N)=\frac{1}{n}$$
-I imagine that divergence of the harmonic series and the inclusion-exclusion principle are good ingredients to be used... But I don't know how up to now!
-
-REPLY [16 votes]: Let $p_n$ be the $n$th prime. Then the events $p_n\mathbb N$ are pairwise independent:
-$$P(p_n\mathbb N \cap p_m\mathbb N)=P(p_np_m\mathbb N)=\frac 1{p_np_m}=P(p_n\mathbb N)P(p_m\mathbb N).$$
-The sum of the reciprocals of the primes
-$$\sum_n \frac 1{p_n}$$
-famously diverges. So, by the second Borel-Cantelli lemma, the event that infinitely many of the events $p_n\mathbb N$ occur has probability one. But this cannot be satisfied, since no natural number is divisible by infinitely many primes. (Apart from $0$, but setting $P(0)=1$ is not a solution either.)<|endoftext|>
-TITLE: limit $ \lim \limits_{n \to \infty} {\left(\frac{z^{1/\sqrt n} + z^{-1/\sqrt n}}{2}\right)^n} $
-QUESTION [10 upvotes]: Calculate the limit $ \displaystyle \lim \limits_{n \to \infty} {\left(\frac{z^{1/\sqrt n} + z^{-1/\sqrt n}}{2}\right)^n} $
-I know the answer, it is $ \displaystyle e^\frac{\log^2z}{2} $, but I don't know how to prove it. It seems like this notable limit $\displaystyle \lim \limits_{x \to \infty} {\left(1 + \frac{c}{x}\right)^x} = e^c$ should be useful here.
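As a quick numerical sanity check of the claimed value, here is a minimal Python sketch (my own addition; $z=2$ and the values of $n$ are arbitrary choices):

```python
import math

def lhs(z, n):
    """The expression ((z^{1/sqrt(n)} + z^{-1/sqrt(n)})/2)^n."""
    r = z**(1.0/math.sqrt(n))
    return ((r + 1.0/r)/2.0)**n

z = 2.0
target = math.exp(math.log(z)**2/2.0)   # claimed limit e^{(log^2 z)/2}
for n in (10**2, 10**4, 10**6):
    print(n, lhs(z, n))
print("target", target)
```

The printed values approach the target as $n$ grows, consistent with the claimed limit.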
For example, I tried this way: $$ (z^{1/\sqrt n} + z^{-1/\sqrt n}) = (z^{1/(2 \sqrt n)} - z^{-1/(2 \sqrt n)})^2 + 2 $$
-$$ \displaystyle \lim \limits_{n \to \infty} {\left(\frac{z^{1/\sqrt n} + z^{-1/\sqrt n}}{2}\right)^n} = \displaystyle \lim \limits_{n \to \infty} {\left(1 + \frac{(z^{1/(2 \sqrt n)} - z^{-1/(2 \sqrt n)})^2}{2}\right)^n} $$
-where $ (z^{1/(2 \sqrt n)} - z^{-1/(2 \sqrt n)})^2 $ seems close to $ \frac{\log^2 z}{n} $.
-Also we can say that $$ \left(\frac{z^{1/\sqrt n} + z^{-1/\sqrt n}}{2}\right)^n = e^{n \log {\left(1 + \frac{\left(z^{1/(2 \sqrt n)} - z^{-1/(2 \sqrt n)}\right)^2}{2}\right)}}$$ and $ \log {\left(1 + \frac{(z^{1/(2 \sqrt n)} - z^{-1/(2 \sqrt n)})^2}{2}\right)} $ can be expanded in a Taylor series. But I can't finish either of these approaches.
-Thanks for the help!
-
-REPLY [2 votes]: If $L$ is the desired limit then we have
-\begin{align}
-\log L &= \log\left\{\lim_{n \to \infty}\left(\frac{z^{1/\sqrt{n}} + z^{-1/\sqrt{n}}}{2}\right)^{n}\right\}\notag\\
-&= \lim_{n \to \infty}\log\left(\frac{z^{1/\sqrt{n}} + z^{-1/\sqrt{n}}}{2}\right)^{n}\text{ (via continuity of log)}\notag\\
-&= \lim_{n \to \infty}n\log\left(\frac{z^{1/\sqrt{n}} + z^{-1/\sqrt{n}}}{2}\right)\notag\\
-&= \lim_{n \to \infty}n\cdot\dfrac{\log\left(1 + \dfrac{z^{1/\sqrt{n}} + z^{-1/\sqrt{n}} - 2}{2}\right)}{\dfrac{z^{1/\sqrt{n}}+z^{-1/\sqrt{n}} - 2}{2}}\cdot\dfrac{z^{1/\sqrt{n}}+z^{-1/\sqrt{n}} - 2}{2}\notag\\
-&= \lim_{n \to \infty}n\cdot\dfrac{z^{1/\sqrt{n}}+z^{-1/\sqrt{n}} - 2}{2}\notag\\
-&=\frac{1}{2}\lim_{n \to \infty}n\left(\frac{z^{1/\sqrt{n}} - 1}{z^{1/(2\sqrt{n})}}\right)^{2}\notag\\
-&= \frac{1}{2}\lim_{n \to \infty}\{\sqrt{n}(z^{1/\sqrt{n}} - 1)\}^{2}\notag\\
-&= \frac{(\log z)^{2}}{2}\notag
-\end{align}
-Hence $L = \exp\left\{\dfrac{(\log z)^{2}}{2}\right\}$.<|endoftext|>
-TITLE: If $X$ has non-singular normalization, is $\dim (\mathrm{Sing}(X))=\dim (X)-1$?
-QUESTION [5 upvotes]: Let $X\subseteq\mathbb{P}^{N}$ be an algebraic variety, and let
-$$
-\nu:X^{\nu}\rightarrow X
-$$
-be its normalization. Let us suppose that $X^{\nu}(\neq X)$ is smooth. I wonder if in this case
-$$
-\dim (\mathrm{Sing}(X))=\dim (X)-1.
-$$
-I think Serre's Normality Criterion (see Lemma 12.5 of this text) has something to do with it, but I can't see how.
-Maybe
-$$
-X^{\nu} \text{ smooth }\Rightarrow
-$$
-$$
-\Rightarrow
- X\text{ satisfies the property $(S_{2})$ (i.e. $\forall$ $x\in X, \mathrm{depth}(\mathcal{O}_{X,x})\geq \min\{2,\dim(\mathcal{O}_{X,x})\}$)},
-$$
-in which case the equality follows from the fact that $X$ is reduced, not normal, and would satisfy $(R_{1})$ if
-$$
-\dim (\mathrm{Sing}(X))<\dim (X)-1.
-$$
-Is that implication true? At least, is the equality true?
-
-REPLY [3 votes]: You are on the right track. If the singularity has codimension greater than one, it satisfies $R_1$, so to make it non-normal, it must fail $S_2$. Further you want the normalization to be smooth. So, here is an example. Let $X$ be the spectrum of $k[x^2,xy,y^2,x^3,y^3]$. It is not normal, its fraction field is $k(x,y)$ and its integral closure is $k[x,y]$. It also satisfies $R_1$.<|endoftext|>
-TITLE: On the probability of getting the same number for three dice
-QUESTION [14 upvotes]: I found the probability of having the same number when throwing 3 dice to be $1\times\left(\frac16\right)^2$.
-In addition, I don't understand how people get the equation $\left(\frac16\right)^3\times6=\frac1{36}$, like why do we have to multiply $\left(\frac16\right)^3$ by six?
-
-REPLY [5 votes]: Just a small supplement to the nice answers addressing your question, since I didn't find it mentioned explicitly.
-
-We can calculate the probability of an event as the ratio of the number of successes (favorable choices) of the event to the number of all possible choices.
-This way we obtain
- \begin{align*}
-\frac{\text{number of successes}}{\text{number of all choices}}=\frac{6}{6^3}=\frac{1}{6^2}=\frac{1}{36}
-\end{align*}<|endoftext|>
-TITLE: Calculating $\int_0^\infty \frac{\sin(x)}{x} \frac{\sin(x / 3)}{x / 3} \frac{\sin(x / 5)}{x / 5} \cdots \frac{\sin(x / 15)}{x / 15} \ dx$
-QUESTION [14 upvotes]: I found the following result on this webpage:
-$$\int_0^{\infty } \left(\prod _{k=0}^7 \frac{\sin \left(\frac{x}{2 k+1}\right)}{\frac{x}{2 k+1}}\right) \, dx= \frac{\pi}{2} - \frac{6879714958723010531}{935615849440640907310521750000} \pi $$
-However, I can't determine how to prove it.
-
-REPLY [4 votes]: $\newcommand{\S}{\operatorname{sinc}}$
-A 'brute force' solution to a beautiful problem. I don't claim this answer is as insightful as the others and the question is somewhat old, but I feel it is relevant and unique enough to merit posting.
-As usual, we will let $\S(x) = \sin(x)/x$ with $\S(0)=1$.
-The big idea: use angle-addition, Taylor series, and integration by parts to 'revert' the integral to a bunch ($128$, to be precise) of weighted integrals of the form $\int_0^{\infty}\sin(a x)/x\,dx$, each of which is $\pm\pi/2$, the sign being that of $a$. Then we just add up the weights to produce the answer.
-
-
-Angle-addition.
-
-Recall that
-\begin{align}
-2\sin(\theta)\sin(\phi)=\cos(\theta-\phi)-\cos(\theta+\phi)\\
-2\sin(\theta)\cos(\phi)=\sin(\theta+\phi)+\sin(\theta-\phi)\\
-2\cos(\theta)\cos(\phi)=\cos(\theta-\phi)+\cos(\theta+\phi)\\
-\end{align}
-For instance, this tells us $\S(x)\S(x/3) = \frac{3}{2}x^{-2}(\cos(2x/3)-\cos(4x/3))$. You probably see what lies ahead, even if it's somewhat unappetizing.
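As a quick numerical sanity check of these product-to-sum identities (a Python sketch of my own; the sample points are arbitrary):

```python
import math
import random

random.seed(0)
for _ in range(1000):
    t = random.uniform(-10.0, 10.0)
    p = random.uniform(-10.0, 10.0)
    # 2 sin(t) sin(p) = cos(t-p) - cos(t+p)
    assert math.isclose(2*math.sin(t)*math.sin(p),
                        math.cos(t - p) - math.cos(t + p), abs_tol=1e-12)
    # 2 sin(t) cos(p) = sin(t+p) + sin(t-p)
    assert math.isclose(2*math.sin(t)*math.cos(p),
                        math.sin(t + p) + math.sin(t - p), abs_tol=1e-12)
    # 2 cos(t) cos(p) = cos(t-p) + cos(t+p)
    assert math.isclose(2*math.cos(t)*math.cos(p),
                        math.cos(t - p) + math.cos(t + p), abs_tol=1e-12)
    # sinc(x) sinc(x/3) = (3/2) x^{-2} (cos(2x/3) - cos(4x/3))
    x = random.uniform(0.1, 10.0)
    lhs = (math.sin(x)/x) * (math.sin(x/3)/(x/3))
    rhs = 1.5 * x**-2 * (math.cos(2*x/3) - math.cos(4*x/3))
    assert math.isclose(lhs, rhs, abs_tol=1e-12)
print("identities verified")
```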
We have
-$$
-\S(x)\S(x/3)\cdots \S(x/15) =
-$$
-$$ \frac{1\cdot 3\cdots 15}{2^7 x^8} \sum_{e_i\in\{\pm 1\}} (-1)^{\#(e_i=-1)} \cos\left( x(1+e_1 \cdot 1/3 + e_2\cdot 1/5 +\cdots +e_7 \cdot 1/15)\right)
-$$Thus, if we denote $W=(1\cdot 3\cdot \cdots \cdot 15)/2^7$, we have to evaluate $128$ integrals that look like $\displaystyle{W \int_0^{\infty}\frac{\cos(a x)}{x^8}\,dx}$, for some constant $a$. The trouble is that none of them are improperly integrable near $x=0$, which brings us to the second step in the process.
-
-Taylor series.
-
-The idea here is to add and subtract the same number of lower-order terms of the Taylor series of each $\cos(a x)$ to make the integrals converge. For example, $\displaystyle{\int_0^{\infty}\frac{\cos(x)}{x^8}\,dx}$ does not converge but $\displaystyle{\int_0^{\infty}\frac{\cos(x)-(1-x^2/2+x^4/24-x^6/720)}{x^8}\,dx}$ does. Since, for each $j\le 6$, the signed sum of $a^j$ over all $128$ sign choices vanishes (check! For $j=0$ this is simply because the $+$ and $-$ signs are equal in number), the subtracted Taylor polynomials cancel in total, so we will add and subtract up to the degree six terms to make the integrals convergent. Therefore, we have to evaluate
-$$
-\int_0^{\infty}\frac{\cos(a x)-(1-(ax)^2/2+(ax)^4/24-(ax)^6/720)}{x^8}\,dx
-$$
-
-Integration by parts.
-
-We have
-$$
-\int_0^{\infty}\frac{\cos(a x)-(1-(ax)^2/2+(ax)^4/24-(ax)^6/720)}{x^8}\,dx
-$$
-\begin{align}
-&=\left.-\frac{\cos(a x)-(1-(ax)^2/2+(ax)^4/24-(ax)^6/720)}{7x^7}\right|_0^{\infty}\\ &+\frac{1}{7}\int _0^{\infty} \frac{-a\sin(ax)+a^6 x^5/120 - a^4 x^3/6 + a^2 x }{x^7}\,dx
-\end{align}The boundary terms vanish: at $\infty$, the denominator is a greater power and at $0$ the cosine series is $O(x^8)$. Continue this procedure until the denominator is just $x$:
-$$
-\cdots = \frac{a^7}{7!}\int_0^{\infty}\frac{\sin(ax)}{x}\,dx = \frac{a^7}{7!}\cdot\frac{\pi}{2}\operatorname{sgn}(a)
-$$
-(Note that $\int_0^{\infty}\frac{\sin(ax)}{x}\,dx=\frac{\pi}{2}\operatorname{sgn}(a)$, and the sign matters here: since $1/3+1/5+\cdots+1/15>1$, the constant $a$ is negative for exactly one of the $128$ sign choices, namely $e_1=\cdots=e_7=-1$.)
-
-Add 'em up.
-
-All that's left is to add up these values for all $128$ values of $a$ corresponding to the $2^7$ terms of the sum.
This becomes
-$$
-W \cdot \frac{\pi}{2}\cdot \frac{1}{7!} \cdot \sum_{e_i\in\{\pm 1\}} (-1)^{\#(e_i=-1)} \left|1+e_1 \cdot 1/3 + e_2\cdot 1/5 +\cdots +e_7 \cdot 1/15\right|^7
-$$The absolute value accounts for the factor $\operatorname{sgn}(a)$ in $\int_0^{\infty}\sin(ax)/x\,dx=\frac{\pi}{2}\operatorname{sgn}(a)$; it changes only the single term with every $e_i=-1$ (the one with $a<0$), and that term is precisely the source of the deviation from $\pi/2$. A somewhat tedious but routine calculation gives the final answer of
-$$
-\frac{467807924713440738696537864469 \pi}{935615849440640907310521750000}
-$$
-
-For clarity, I'll do a smaller example (the three sinc case) that will evaluate to $\pi/2$.
-$$
-\int _0^{\infty} \S(x)\S(x/3)\S(x/5)\,dx = 15 \int_0^{\infty} \frac{\sin(x)\sin(x/3)\sin(x/5)}{x^3}\,dx
-$$
-$$
-=\frac{15}{4} \int_0^{\infty} \frac{-\sin(7/15 x)+\sin(13/15 x)+\sin(17/15 x)-\sin(23/15 x)}{x^3}\,dx
-$$
-$$
-=\frac{15}{4} \int_0^{\infty} \frac{-\sin(7/15 x)+\sin(13/15 x)+\sin(17/15 x)-\sin(23/15 x) }{x^3}+\frac{7/15 x-13/15 x-17/15 x+23/15 x}{x^3}\,dx
-$$I'll just pick one of these, say the first one; the others are exactly the same.
-$$
-=\int_0^{\infty} \frac{-\sin(7/15 x)+7/15 x}{x^3}\,dx
-$$
-$$
-= \left.\frac{-\sin(7/15 x)+7/15 x}{-2x^2}\right|_0^{\infty} + \int_0^{\infty} \frac{-7/15 \cos(7/15 x)+7/15}{2x^2}\,dx
-$$
-$$
-=0+0 + \frac{(7/15)^2}{2} \int _0^{\infty}\frac{\sin(7/15 x)}{x}\,dx =\frac{(7/15)^2}{2} \cdot \frac{\pi}{2}
-$$So the total is
-$$
-\frac{15}{4}\cdot \frac{\pi}{2}\left(\frac{(7/15)^2}{2}-\frac{(13/15)^2}{2}-\frac{(17/15)^2}{2}+\frac{(23/15)^2}{2}\right)= \frac{\pi}{2}
-$$
-
-Of course, this method is merely computational: it has none of the theoretical trappings of the other answers. Nevertheless, the method could easily be generalized to other products and is quite amenable to reproduction on software. For instance, Mathematica thought long and hard about the original integral, but was able to perform the $128$ sub-integrals much quicker.
-As a final remark, I'm reminded of the idea of quadrature from numerical analysis. It seems these integral-sinc-products, and more importantly their corresponding series, are 'exact' for the first seven until an error is introduced in the eighth.
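The weighted sum described above can be replayed exactly in rational arithmetic. Here is a short Python sketch (standard library only; variable names are mine) that evaluates the signed sum, taking $\int_0^{\infty}\sin(ax)/x\,dx = \frac{\pi}{2}\operatorname{sgn}(a)$ into account, and reproduces the quoted coefficient of $\pi$:

```python
from fractions import Fraction
from itertools import product
from math import factorial

coeffs = [Fraction(1, 2*k + 1) for k in range(1, 8)]   # 1/3, 1/5, ..., 1/15
W = Fraction(1*3*5*7*9*11*13*15, 2**7)

total = Fraction(0)
for signs in product((1, -1), repeat=7):
    a = 1 + sum(e*c for e, c in zip(signs, coeffs))
    minus = sum(1 for e in signs if e == -1)
    # each term contributes sgn(a)*a^7 = |a|^7,
    # since int_0^oo sin(a x)/x dx = (pi/2) sgn(a)
    total += (-1)**minus * abs(a)**7

coefficient_of_pi = W * total / (2 * factorial(7))
deviation = Fraction(1, 2) - coefficient_of_pi
print(coefficient_of_pi)
print(deviation)
```

The printed fractions match the quoted answer: the coefficient of $\pi$ and the tiny deficit below $\pi/2$.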
Perhaps I will investigate this further.<|endoftext|>
-TITLE: What's the difference between uniformly equicontinuous and uniformly continuous?
-QUESTION [12 upvotes]: I am very confused. Thanks in advance.
-Our definition is that:
-Uniformly Equicontinuous: $\forall \epsilon>0,\exists\delta>0 \ such \ that \ |s-t|< \delta \ and \ n \in \mathbb{N} \ then \ |f_n(t)-f_n(s)|<\epsilon$
-Uniformly continuous: $\forall \epsilon>0,\exists\delta>0 \ such \ that \ \forall s,t \in [a,b], \ |s-t|< \delta \ and \ n \in \mathbb{N} \ then \ |f_n(t)-f_n(s)|<\epsilon$
-
-REPLY [25 votes]: The family of functions $(f_n)$ defined on $[a,b]$ is said to be:
-Uniformly equicontinuous:
-$\forall \epsilon>0,\exists\delta>0,\forall n \in \mathbb{N}, \forall s,t \in [a,b], \ |s-t|< \delta \ \Rightarrow \ |f_n(t)-f_n(s)|<\epsilon$
-Uniformly continuous:
-$\forall \epsilon>0,\forall n \in \mathbb{N},\exists\delta>0, \forall s,t \in [a,b], \ |s-t|< \delta \ \Rightarrow \ |f_n(t)-f_n(s)|<\epsilon$
-Look at the place of $\forall n \in \mathbb{N}$. In the first case, you have the same $\delta$ for the whole family of functions, while in the second case the $\delta$ may depend on the function you are considering. One can remark that uniform equicontinuity implies uniform continuity. So uniform equicontinuity is a stronger condition.<|endoftext|>
-TITLE: Diffeomorphism preserves open set?
-QUESTION [6 upvotes]: This question might be elementary, but I am a little confused. Does diffeomorphism preserve open sets?
-Suppose I have two coordinate charts $(U,\varphi)$, $(V,\psi)$ and atlas $\mathcal{A}=\{(U_{\alpha},\varphi_{\alpha})\}$ such that the two charts are compatible with $\mathcal{A}$. Then clearly $\varphi_{\alpha}(U\cap U_{\alpha})$ and $\varphi_{\alpha}(V\cap U_{\alpha})$ are open. How can I conclude then that $\varphi_{\alpha}(U\cap V\cap U_{\alpha})$ is also open?
-I know that $U\cap V\cap U_{\alpha}$ is open, so is the openness preserved by $\varphi_{\alpha}$?
-REPLY [3 votes]: Diffeomorphisms are homeomorphisms because differentiability implies continuity. So, in order to answer your question, we should prove that homeomorphisms are open maps. Let $\phi:X \rightarrow Y$ be a homeomorphism between two topological spaces $X,Y$. Since $\phi$ is a homeomorphism, consider $g:Y \rightarrow X$, the inverse map of $\phi$, which is continuous because we assumed $\phi$ to be a homeomorphism (namely a bicontinuous map between topological spaces). Then for every open set $V \subset X$, the set $g^{-1}(V) \subset Y$ is open in $Y$; but we can write $g^{-1}(V)=(\phi^{-1})^{-1}(V)=\phi(V)$ $\Rightarrow$ $\phi(V)$ is open in $Y$. This holds for every open set $V$ and proves that $\phi$ is an open map, because it carries open sets to open sets.<|endoftext|>
-TITLE: $\lim_{n\to \infty}\left(\frac{\sqrt[n]{a}+\sqrt[n]{b}}{2}\right)^n\stackrel{?}{=}\sqrt{ab}$
-QUESTION [8 upvotes]: I found this interesting equality, but I could not find a way to prove it. Any (beautiful) idea?
-$$\lim\limits_{n\to \infty}\left(\frac{\sqrt[n]{a}+\sqrt[n]{b}}{2}\right)^n=\sqrt{ab}$$
-
-REPLY [6 votes]: As usual when you see the variable in the exponent, the strategy is to take logs. Thus if $L$ is the desired limit then
-\begin{align}
-\log L &= \log\left\{\lim_{n \to \infty}\left(\frac{a^{1/n} + b^{1/n}}{2}\right)^{n}\right\}\notag\\
-&= \lim_{n \to \infty}\log\left(\frac{a^{1/n} + b^{1/n}}{2}\right)^{n}\text{ (via continuity of log)}\notag\\
-&= \lim_{n \to \infty}n\log\left(\frac{a^{1/n} + b^{1/n}}{2}\right)\notag\\
-&= \lim_{n \to \infty}n\cdot\dfrac{\log\left(1 + \dfrac{a^{1/n} + b^{1/n} - 2}{2}\right)}{\dfrac{a^{1/n} + b^{1/n} - 2}{2}}\cdot\dfrac{a^{1/n} + b^{1/n} - 2}{2}\notag\\
-&= \frac{1}{2}\lim_{n \to \infty}\left[n(a^{1/n} - 1) + n(b^{1/n} - 1)\right]\notag\\
-&= \frac{1}{2}(\log a + \log b)\notag\\
-&= \frac{1}{2}\log ab\notag
-\end{align}
-Hence $L = \sqrt{ab}$.
Here I have used two fundamental limits $$\lim_{x \to 0}\frac{\log(1 + x)}{x} = 1,\,\lim_{n \to \infty}n(x^{1/n} - 1) = \log x$$<|endoftext|>
-TITLE: Why does $\sum_{n=0}^{\infty} \cos^n(n)$ converge?
-QUESTION [8 upvotes]: Consider the series
-$$\sum_{n=0}^{\infty}\cos^n(n)$$
-I think that the root test is inconclusive, because
-$$\limsup_n \sqrt[n]{|\cos^n(n)|}=\limsup_n|\cos(n)|\leq 1$$
-since we can approximate $\pi$ by rational numbers: there will always be some $i$ and $j\in\mathbb{N}$ such that $|j\pi-i|<\varepsilon$, for every $\varepsilon>0$ that we choose. And in this case $|\cos(i)-1|<\delta$.
-Nevertheless, it seems that it converges. I can't think of any other convergent series to compare with it.
-My question is: how can I prove that this series converges?
-Edit: Actually, this series diverges, as you can see in tmyklebu's answer. I made a Fortran program and here are some values of the sequence of the partial sums:
-n S_n
-10 1.5898364866640549
-100 7.8365722183614510
-1000 24.825953005207236
-10000 79.232008037801393
-
-REPLY [16 votes]: It doesn't. That series has a lot of terms near $\pm 1$; in particular, I can show it has infinitely many terms whose absolute value is larger than $\frac12$:
-The inequality $|\pi - p/q| < 1/q^2$ is satisfied by infinitely many pairs of positive integers $(p,q)$. (This is Dirichlet's approximation theorem.) Let $(p,q)$ be one such pair with $q > 8$. Then, for some real $r$ with $|r| < 1/q$, we have $$|\cos(p)| = |\cos(q \pi + r)| \geq 1 - r^2 \geq 1 - 1/q^2.$$ Then
-$$|\cos(p)^p| \geq (1 - 1/q^2)^p \geq 1 - p/q^2 \geq 1 - 4/q > 1/2.$$<|endoftext|>
-TITLE: Can every integer greater than 5 be written as the sum of exactly one prime and one composite?
-QUESTION [17 upvotes]: I worked it out up to 15.
-
-6 = 4 + 2
- 7 = 4 + 3
- 8 = 6 + 2
- 9 = 4 + 5
- 10 = 8 + 2
- 11 = 9 + 2
- 12 = 10 + 2
- 13 = 10 + 3
- 14 = 9 + 5
- 15 = 12 + 3
-
-Does this trend continue forever?
I feel like the answer is obvious but I'm just not seeing it.
-
-REPLY [11 votes]: After seeing @detnvvp's brilliant answer to this question, one is left to wonder if it can be generalized.
-
-Theorem: For a natural number $k$, every sufficiently large number $n$ is a sum of a prime number and a multiple of $k$ iff $k=1$ or $k$ is prime.
-
-Proof: The problem reduces to finding a prime $p_i\equiv i\pmod k$ for each $0\le i$<|endoftext|>
-TITLE: Differences between derivatives and strong derivatives
-QUESTION [5 upvotes]: Definition: Let $f$ be a real valued function. We say $f$ is $\mathbf{strongly}$ $\mathbf{differentiable}$ at $x = a$ if the following limit exists and is finite:
-$$ \lim_{x \to a, y \to a, x \neq y} \frac{ f(x)-f(y)}{x-y} = f^*(a) $$
-and we call $f^*(a)$ the strong derivative of $f$ at $a$. Why is this definition of derivative different from the usual one? What is the main crucial point to understand here that makes it different?
-
-REPLY [6 votes]: Here is roughly what every student of analysis should know.
-This "strong" derivative was introduced by Peano in 1892 as a "strict derivative". This asks rather more of a function than that it merely have an
-(ordinary) derivative, and Peano thought that this was actually better for students and engineers to learn and use. I prefer his terminology since the word "strong" gets rather overused in analysis and interferes with the more popular usages.
-
-PEANO G.: Sur la définition de la dérivée, Mathesis, (2) 2 (1892),
- 12—14.
-
-For a continuous function $f$ the strict (strong) derivative $f^*(x_0)$
-exists at a point if and only if one (and hence all four) of the Dini derivatives $D^+f(x)$, $D_+f(x)$, $D^-f(x)$ or $D_-f(x)$ is continuous at $x_0$.
-In particular, if $f'(x)$ is continuous at a point $x_0$ then the strict derivative $f^*(x_0)$ exists and, of course, is equal to the ordinary derivative $f'(x_0)$.
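To see the contrast concretely, here is a small numerical sketch (in Python; the function and the sample points are my own illustration, not part of the answer above) using the classical example $f(x)=x^2\sin(1/x)$, $f(0)=0$: it has $f'(0)=0$, but $f'$ is not continuous at $0$, so by the criterion above the strict derivative $f^*(0)$ cannot exist. Indeed, difference quotients over pairs $x,y\to 0$ can approach $-2/\pi$ instead of $0$:

```python
import math

def f(x):
    # Classical example: x^2 sin(1/x), extended by f(0) = 0.
    return x*x*math.sin(1.0/x) if x != 0.0 else 0.0

# Ordinary derivative at 0 exists: |f(h)/h| = |h sin(1/h)| <= |h| -> 0.
for h in (1e-2, 1e-4, 1e-6):
    assert abs(f(h)/h) <= abs(h)

# Strict (strong) derivative at 0 fails: both points below tend to 0,
# yet the difference quotient tends to -2/pi rather than f'(0) = 0.
for n in (10**3, 10**6):
    x = 1.0/(2*math.pi*n)               # here sin(1/x) = 0
    y = 1.0/(2*math.pi*n + math.pi/2)   # here sin(1/y) = 1
    q = (f(x) - f(y))/(x - y)
    assert abs(q - (-2/math.pi)) < 1e-2
print("difference quotient near 0:", q)
```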
If $f'(x)$ exists in some neighborhood of the point
-$x_0$ then the strict derivative at $x_0$ exists if and only if $f'$ is continuous at $x_0$.
-For a bibliography of papers on the subject the ever-reliable Dave Renfro has supplied quite a few in his StackExchange answer.
-
-Here are some considerations if you wish to decide whether you would prefer all your derivatives to be strong ones (as Peano did).
-If $f'(x_0)$ exists you can be sure that $f$ is continuous at $x_0$, but it could be discontinuous and quite pathological everywhere else.
-But if $f^*(x_0)$ exists then you can be certain that $f$ is not only continuous at $x_0$, it is continuous in some neighborhood $(x_0-\delta,x_0+\delta)$. But way more than that: it is even Lipschitz in $(x_0-\delta,x_0+\delta)$.
-If $f'(x_0)$ exists then, as already noted, there doesn't have to be a derivative at any other point. But if $f^*(x_0)$ exists then there is
-some neighborhood $(x_0-\delta,x_0+\delta)$ in which $f$ has almost everywhere a derivative and that derivative is continuous at $x_0$ (i.e., continuous relative to the set of points at which it exists).
-If you prefer espresso to green tea, a robust merlot to a sauvignon blanc, and a rare steak to a fillet of sole, you would probably like Peano's idea of using strong derivatives in place of their wimpy cousins (the ordinary derivative) when you have to teach the calculus. Especially to engineers.<|endoftext|>
-TITLE: Prove that $\lim _{x\to \infty \:}(1+\frac{x^x}{x!})^{\frac{1}{x}} = e$
-QUESTION [12 upvotes]: Using a graphing calculator, it seems that $\lim _{x\to \infty \:}(1+\frac{x^x}{x!})^{\frac{1}{x}} = e$. How can this be proven?
-
-REPLY [6 votes]: Using the fact (valid whenever the limit on the right exists)
-
-$$\lim_{n\to \infty} a_n^{1/n}=\lim_{n\to \infty} \frac{a_{n+1}}{a_n} $$
-
-we have
-
-$$\frac{a_{n+1} }{a_n} = \frac{ (n+1)!
+ (1+n)^{1+n} }{(n+1)!+(1+n)n^n} \sim_{n\sim \infty} \frac{ (1+n)^{1+n} }{n^{n+1} } \longrightarrow_{n\to \infty} e $$<|endoftext|>
-TITLE: What's the value of $\sum_{i=1}^\infty \frac{1}{i^2 i!}$?
-QUESTION [5 upvotes]: What's the value of $\sum_{i=1}^\infty \frac{1}{i^2 i!}(= S)$?
-I try to calculate the value as follows.
-$$\frac{e^x - 1}{x} = \sum_{i=1}^\infty \frac{x^{i-1}}{i!}.$$
-Taking the integral gives
-$$ \int_{0}^x \frac{e^t-1}{t}dt = \sum_{i=1}^\infty \frac{x^{i}}{i i!}. $$
-In the same way, we get the following equation
-$$ \int_{s=0}^x \frac{1}{s} \int_{t=0}^s \frac{e^t-1}{t}dt ds= \sum_{i=1}^\infty \frac{x^{i}}{i^2 i!}. $$
-So we have
-$$S = \int_{s=0}^1 \frac{1}{s} \int_{t=0}^s \frac{e^t-1}{t}dt ds.$$
-Does this last integral have an elementary closed form or other expression?
-
-REPLY [3 votes]: Maybe it's interesting to see how to get the "closed form" in terms of the hypergeometric function. Recalling the definition of the generalized hypergeometric function $$_{q}F_{p}\left(a_{1},\dots,a_{q};b_{1},\dots,b_{p};z\right)=\sum_{k\geq0}\frac{\left(a_{1}\right)_{k}\cdots\left(a_{q}\right)_{k}}{\left(b_{1}\right)_{k}\cdots\left(b_{p}\right)_{k}}\frac{z^{k}}{k!}$$ where $\left(a_{i}\right)_{k}$ is the Pochhammer symbol, we note that $\left(2\right)_{k}=\left(k+1\right)!$ and $\left(1\right)_{k}=k!$. Hence $$_{3}F_{3}\left(1,1,1;2,2,2;1\right)=\sum_{k\geq0}\frac{\left(k!\right)^{3}}{\left(\left(k+1\right)!\right)^{3}}\frac{1}{k!}=\sum_{k\geq0}\frac{1}{\left(k+1\right)^{3}}\frac{1}{k!}=\sum_{k\geq1}\frac{1}{k^{2}k!},$$ where the last equality comes from reindexing with $m=k+1$ and using $(m-1)!=m!/m$.<|endoftext|>
-TITLE: Calculate $\pi_2(S^2 \vee S^1)$
-QUESTION [6 upvotes]: I am trying to calculate $\pi_2(S^2 \vee S^1)$ and having trouble fitting the pieces together.
-I know that the universal cover of $S^2 \vee S^1$ is just $\mathbb{R}$ with spheres attached at integral points.
-
-Attempt 1: Let $\tilde{X}$ be the universal cover as above. We have that $\pi_2(S^2 \vee S^1) = \pi_2(\tilde{X})$.
Now, since $\pi_1(\tilde{X}) = 0$, we have that $\tilde{X}$ is 1-connected, and thus by the Hurewicz theorem, $\pi_2(\tilde{X}) = H_2(\tilde{X})$.
-Attempt 2: Consider a map $f: S^2 \rightarrow S^2 \vee S^1$. This map lifts to a map $\tilde{f}:S^2 \rightarrow \tilde{X}$.
-Attempt 3: $H_2(S^2 \vee S^1) = H_2(S^2 / S^0) = H_2(S^2, S^0)$ gives a long exact sequence: $$ \dots \rightarrow H_2(S^0)\rightarrow H_2(S^2)\rightarrow H_2(S^2,S^0)\rightarrow H_1(S^0)\rightarrow \dots$$
-$$= \dots \rightarrow 0\rightarrow \mathbb{Z} \rightarrow H_2(S^2,S^0)\rightarrow 0\rightarrow \dots$$
-So, $H_2(S^2 \vee S^1) = H_2(S^2, S^0) \simeq \mathbb{Z} $, but I am not sure how/if this helps, since $\pi_1(S^2 \vee S^1) \neq 0$.
-
-Any advice would be appreciated.
-
-REPLY [4 votes]: Your first attempt works and is probably the easiest way to go. The $\tilde{X}$ you found is homotopy equivalent to an infinite bouquet of 2-spheres: $\tilde{X} \simeq \bigvee_{n \in \mathbb{Z}} S^2$. There are various ways of computing $H_2(\tilde{X})$:
-
-$\tilde{X}$ is the filtered colimit
-$$* \subset S^2 \subset \bigvee_{n=-1}^1 S^2 \subset \bigvee_{n=-2}^2 S^2 \subset \dots \subset \operatorname{colim}_{k \ge 1} \bigvee_{n=-k}^k S^2 = \tilde{X}$$
-and since homology preserves filtered colimits, $$H_2(\tilde{X}) = \operatorname{colim}_{k \ge 1} \mathbb{Z}^{2k+1} = \bigoplus_{n \in \mathbb{Z}} \mathbb{Z} = \mathbb{Z}^{(\infty)}$$ is a direct sum of an infinite number of copies of $\mathbb{Z}$.
-Using cellular homology: the cellular complex of $\tilde{X}$ has $\mathbb{Z}$ in degree zero, $\mathbb{Z}^{(\infty)}$ in degree 2, and $0$ elsewhere. There cannot be any nontrivial differential for degree reasons, and so $H_2(\tilde{X}) = \mathbb{Z}^{(\infty)}$.
-
-As for the intuition, recall that $\pi_1(X)$ always acts on $\pi_n(X)$ for $n \ge 1$ by "prepending" a loop or something similar. The inclusion $S^2 \subset S^2 \vee S^1$ gives an element $\alpha \in \pi_2(X)$, which generates a cyclic subgroup.
-But for the loop $\gamma$ that goes around the $S^1$ factor once, $\gamma \cdot \alpha \in \pi_2(X)$ is another element that is not homotopic to any other element of the cyclic subgroup generated by $\alpha$. This $\gamma \cdot \alpha$ generates a cyclic subgroup too. And it continues: for every $n \in \mathbb{Z}$, $\gamma^n \cdot \alpha$ is an element of $\pi_2(X)$, and they all generate independent cyclic subgroups in $\pi_2(X)$. And now the argument above shows that these elements generate all of $\pi_2(X)$.<|endoftext|>
-TITLE: Partition of $\{1,2,3,\cdots,3n\}$ into $n$ subsets, each with $3$ numbers, which have equal sum
-QUESTION [5 upvotes]: I want to show that for every odd $n$ $(n\ge3)$, there exists a partition of $\{1,2,3,\cdots,3n\}$ into $n$ disjoint subsets, where each one has $3$ elements and equal sum.
-The first such number is $3$. For $3$ it is obvious: $\{1,6,8\}, \{2,4,9\}, \{3,5,7\}$. I tried to show this using induction, but it seems I have some trouble with it. Please help me, if you can.
-
-REPLY [5 votes]: We let $k$ range from $1$ to $n$. Our sets are $$\begin {cases}
-\{k, \frac{3n-1}2+k,3n+2-2k\} &1\le k \le \frac {n+1}2\\
-\{k,n+k-\frac{n+1}2,4n-2k+2\}&\frac{n+1}2 \lt k \le n
-\end {cases}$$
-These can be seen to add to $\frac {9n+3}2$ and to use the numbers $1$ to $n$ in the first entry, $n+1$ to $2n$ in the second and $2n+1$ to $3n$ in the third. They follow the pattern in Ng Chung Tak's answer for $n=5$.<|endoftext|>
-TITLE: What's the benefit of using strong induction when it's replaceable by weak induction?
-QUESTION [6 upvotes]: Example of a proof of a theorem using weak (ordinary) induction
-
-
-Both types of induction share the same setup: one proves $P(a)$ and concludes "for all integers $n \ge b$, $P(n)$".
-For example, in the proof of the following question, we can use weak induction instead of strong induction, and using weak (ordinary) induction makes the proof simpler and shorter than the strong form of induction.
So what's the benefit of using strong induction when it's replaceable by weak induction? - - -[EDIT] -As requested, here's the weak induction version of the question. Plus, I changed the $s{k_1}$ part in red in the weak induction after reading answers. -I now understand using weak induction doesn't prove the statement. - -Source: Discrete Mathematics with Applications, Susanna S. Epp - -REPLY [4 votes]: Strong Induction is more intuitive in settings where one does not know in advance for which value one will need the induction hypothesis. -Consider the claim: - -Every integer $n \ge 2$ is divisible by a prime number. - -Using strong induction the proof is straightforward. It is true for $n=2$, as $2 \mid 2$ and $2$ is prime. -Assume the statement true for $2 \le a \le n$. -We show $n+1$ is divisible by a prime number. -If $n+1$ is a prime number, then as $(n+1) \mid (n+1)$, the claim is proved. -If $n+1$ is not a prime number then there exists some proper divisor $a\mid (n+1)$, so $2 \le a \le n$. -By induction hypothesis, we know that $a$ is divisible by a prime number $p$. -Since $p \mid a $ and $a \mid (n+1)$ it follows that $p \mid (n+1)$ and the proof is complete. -If you want to do this with weak induction you will have to change the statement you want to prove to something less intuitive.<|endoftext|> -TITLE: A differentiation under the integral sign -QUESTION [7 upvotes]: Let $f:\mathbb{R}^n\to\mathbb{R}$ be a function Lebesgue summable on all $\mu$-measurable and bounded subsets of $\mathbb{R}^n$, where $\mu$ is the usual Lebesgue measure defined on $\mathbb{R}^n$, and let $g:\mathbb{R}^n\to\mathbb{R}$ be a function of class $C^k(\mathbb{R}^n)$ whose support is contained in the compact subset $V\subset\mathbb{R}^n$. That implies that all the derivatives of $g$, up to the $k$-th, are bounded and supported within $V$. 
-I suspect that these conditions are enough to guarantee that the function $$h(\boldsymbol{x}):=\int_V f(\boldsymbol{y}-\boldsymbol{x})g(\boldsymbol{y})\,d\mu_{\boldsymbol{y}}$$is of class $C^k(\mathbb{R}^n)$. In fact, I notice, if I am not wrong, that $$h(\boldsymbol{x})=\int_{V-\boldsymbol{x}} f(\boldsymbol{y})\bar{g}(\boldsymbol{y}+\boldsymbol{x})\,d\mu_{\boldsymbol{y}}=\int_{\mathbb{R}^n} f(\boldsymbol{y})\bar{g}(\boldsymbol{y}+\boldsymbol{x})\,d\mu_{\boldsymbol{y}}$$where $$\bar{g}(\boldsymbol{y}) := \begin{cases} g(\boldsymbol{y}), & \boldsymbol{y}\in V \\ 0, & \boldsymbol{y}\in\mathbb{R}^n\setminus V \end{cases}$$and $V-\boldsymbol{x}=\{\boldsymbol{y}\in\mathbb{R}^n:\boldsymbol{y}+\boldsymbol{x}\in V\}$, and I suppose that the conditions on $g$ may be enough to allow us to differentiate under the integral sign. -Is my intuition that $h\in C^k(\mathbb{R}^n)$ correct and, if it is, how can we prove it? - -A trial of mine: I know a corollary of Lebesgue's dominated convergence theorem that -- if $f:V\times [a,b]\to \mathbb{R}$, $(\boldsymbol{x},t)\mapsto f(\boldsymbol{x},t)$ with $V$ measurable is such that $\forall t\in[a,b]\quad f(-,t)\in L^1(V)$, i.e. the function $\boldsymbol{x}\mapsto f(\boldsymbol{x},t) $ is Lebesgue summable on $V$, -- and if there is a neighbourhood $B(t_0,\delta)$ of $t_0$ such that, for almost all $\boldsymbol{x}\in V$ and for all $t\in B(t_0,\delta)$, $\left|\frac{\partial f(\boldsymbol{x},t)}{\partial t}\right|\le\varphi(\boldsymbol{x})$, where $\varphi\in L^1(V) $, then -$$\frac{d}{dt}\int_V f(\boldsymbol{x},t) d\mu_{\boldsymbol{x}}\bigg|_{t=t_0}=\int_V\frac{\partial f(\boldsymbol{x},t_0)}{\partial t}d\mu_{\boldsymbol{x}}$$but I am not able to find a proper $\varphi$ to use in this context. Nevertheless I would not be amazed if there were an even more straightforward method to prove that the integral and derivative sign can be commutated... 
- -REPLY [5 votes]: Your intuition is correct, if $f\colon \mathbb{R}^n \to \mathbb{R}$ is locally integrable, and if $g \in C_c^k(\mathbb{R}^n)$, then the function $h$ given by -$$h(x) = \int_{\mathbb{R}^n} f(y-x)g(y)\,d\mu_y$$ -belongs to $C^k(\mathbb{R}^n)$, and its partial derivatives of order $\leqslant k$ are given by -$$D^{\alpha} h(x) = \int_{\mathbb{R}^n} f(y-x)D^{\alpha} g(y)\,d\mu_y.$$ -The change of coordinates $z = y-x$ gives us -$$h(x) = \int_{\mathbb{R}^n} f(z)g(z+x)\,d\mu_z,\tag{$\ast$}$$ -and in that form we can apply the dominated convergence theorem to justify differentiation under the integral. -We let $K := \operatorname{supp} g$, and define $L = \{x \in \mathbb{R}^n : \operatorname{dist}(x,K) \leqslant 1\}$. Then $L$ is also compact, hence of finite Lebesgue measure. Now we fix an arbitrary $x_0 \in \mathbb{R}^n$ and show that $h$ is continuously differentiable on $B_1(x_0)$. The integrand in $(\ast)$ is only nonzero for $z$ such that $z+x \in K$, and rearranging gives $z \in K + (x_0 - x) - x_0$. Since we only look at $x$ with $\lVert x-x_0\rVert < 1$, we have $K + (x_0 - x) \subseteq L$, so -$$f(z)g(z+x) \neq 0 \implies z \in L - x_0$$ -for $x\in B_1(x_0)$. As a continuous function with compact support, $g$ is bounded, say $\lvert g(y)\rvert \leqslant M$ for all $y$. Then we have -$$\lvert f(z)g(z+x)\rvert \leqslant M\cdot \lvert f(z)\rvert \cdot \chi_{L - x_0}(z)$$ -for all $z \in \mathbb{R}^n$, and choosing $d(z) := M\cdot \lvert f(z)\rvert \cdot \chi_{L - x_0}(z)$ as the dominating function yields the continuity of $h$ on $B_1(x_0)$. If $k \geqslant 1$, then the partial derivatives of $g$ are all continuous functions with compact support, and hence bounded. We may assume that $\lvert \partial_{i}g(y)\rvert \leqslant M$ for all $y\in \mathbb{R}^n$ and $1 \leqslant i \leqslant n$.
Then $d$ is also a dominating function for -$$f(z)\partial_i g(z+x),$$ -and another application of the dominated convergence theorem - respectively of the mentioned corollary - yields the continuous partial differentiability of $h$ on $B_1(x_0)$, and the formula -$$ \frac{\partial h}{\partial x_i} (x) = \int_{\mathbb{R}^n} f(z) \partial_i g(z+x)\,d\mu_z.$$ -If $k > 1$, we can iterate the argument to obtain the existence and continuity of the higher-order partial derivatives of $h$ on $B_1(x_0)$. Since $x_0$ was arbitrary, it follows that $h \in C^k(\mathbb{R}^n)$.<|endoftext|> -TITLE: Simultaneously diagonalisable iff A,B commute -QUESTION [14 upvotes]: Yes, this is a repeat, however I have not seen anyone explain it fully (or such that I can comprehend it, and believe me, I have searched thoroughly for answers). - -If the (linear) endomorphisms $A,B: V \to V$ are diagonalisable, show that they are simultaneously diagonalisable $\iff AB=BA$ - -The initial implication is trivial. I have shown the case for when all eigenvalues are distinct. It is when the eigenvalues are not necessarily distinct that I cannot seem to get my head around the problem. (For instance, minimal polynomials are too unfamiliar to me to be constructive). -Any links, proofs, hints or explanations are deeply, deeply appreciated. -Thanks! - -REPLY [13 votes]: Let $\lambda_1,\dots,\lambda_r$ be the eigenvalues of $A$ and $E_{\lambda_i}$ the corresponding eigenspace. For any eigenvector $u\in E_{\lambda_i}$, we have -$$A(B(u))=B(A(u))=B(\lambda_i u)=\lambda_i B(u).$$ -This proves the $E_{\lambda_i}$s are stable by $B$. -Now, since $A$ is diagonalisable, the vector space $V$ decomposes as -$$V=\bigoplus_{i=1}^rE_{\lambda_i}$$ -In a basis of eigenvectors for $A$ the matrix A becomes a diagonal matrix, -and the matrix $B$ is a block-diagonal matrix -$$\begin{pmatrix} -B_1\!\\ -&\!\ddots\!\\ -&&\!B_r -\end{pmatrix}$$ -Thus it is enough to observe that the restriction of (the endomorphism associated with) $B$ is diagonalisable.
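As a quick numerical illustration of this block-diagonal picture (a NumPy sketch with matrices of my own choosing, not part of the answer): $A$ below has the repeated eigenvalue $2$, $B$ commutes with $A$, and diagonalising the block of $B$ on that eigenspace produces a common eigenbasis.

```python
import numpy as np

# Hypothetical example: A has the repeated eigenvalue 2, so its eigenbasis is
# not unique; B commutes with A, hence is block-diagonal with respect to the
# eigenspace decomposition E_2 (+) E_5 of A.
A = np.diag([2.0, 2.0, 5.0])
B = np.array([[1.0, 1.0, 0.0],
              [1.0, 1.0, 0.0],
              [0.0, 0.0, 3.0]])
assert np.allclose(A @ B, B @ A)  # A and B commute

# Diagonalise the block of B acting on the eigenspace E_2 of A ...
_, V = np.linalg.eigh(B[:2, :2])
# ... and take as basis of V the union of eigenvectors chosen inside each E_i.
P = np.block([[V, np.zeros((2, 1))],
              [np.zeros((1, 2)), np.eye(1)]])

# P diagonalises A and B simultaneously (P is orthogonal, so P^{-1} = P^T).
for M in (A, B):
    D = P.T @ M @ P
    assert np.allclose(D, np.diag(np.diag(D)))
```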
Take in each $E_{\lambda_i}$ a basis of eigenvectors of $B_i=B\Bigl\lvert_{E_{\lambda_i}}$. The matrix of the restriction of $A$ to this eigenspace remains diagonal, since it is $\lambda_i I_{E_{\lambda_i}}$. Finally, choose as a basis for $V$ the union of the bases of the $E_{\lambda_i}$. You obtain a basis which diagonalises simultaneously $A$ and $B$.<|endoftext|> -TITLE: Solve integral $\int \frac{\sqrt{x+1}+2}{(x+1)^2 - \sqrt{x+1}}dx$ -QUESTION [9 upvotes]: This is how I solved it: -first I used substitution $x+1 = t^2 \Rightarrow dx=2tdt$ -so integral becomes $I=\int \frac{t+2}{t^4-t}2tdt = 2\int \frac{t+2}{t^3-1}dt =2\int\frac{t+2}{(t-1)(t^2-t+1)}dt $ -using partial fraction decomposition I have: -$\frac{t+2}{(t+1)(t^2-t+1)}=\frac{A}{t+1} + \frac{Bt+C}{t^2-t+1}=\frac{At^2-At+A+Bt^2+Bt+Ct+C}{(t+1)(t^2-t+1)}$ -from here, we have that $A=\frac{1}{3} , B=-\frac{1}{3}, C=\frac{5}{3}$ -so integral becomes -$I=\frac{1}{3} \int \frac{dt}{t+1}-\frac{1}{3}\int\frac{(t-5)dt}{t^2-t+1} = \frac{1}{3}ln|t+1|-\frac{1}{3}I_1 $ -Now, for the $I_1$ -$I_1=\int\frac{(t-5)dt}{t^2-t+1} = \int\frac{tdt}{t^2-t+1} - \int\frac{5dt}{t^2-t+1}= \frac{1}{2}\int\frac{2t+1-1}{t^2-t+1}dt - 5\int\frac{dt}{t^2-t+1}=\int\frac{2t+1}{t^2-t+1}dt - \frac{9}{2}\int\frac{dt}{t^2-t+1}= ln|t^2-t+1|-\frac{9}{2}I_2$ -Now, for $I_2$ -$I_2=\int\frac{1}{t^2-t+1}dt= \int\frac{1}{t^2-t+\frac{1}{4} + \frac{3}{4}}dt= \int\frac{1}{(t+\frac{1}{2})^2 + \frac{3}{4}}dt=\frac{4}{3} \int\frac{1}{(\frac{2t+1}{\sqrt{3}})^2 + 1}dt$ -Now, we can use substitution: -$\frac{2t+1}{\sqrt{3}}=z \Rightarrow dt=\frac{\sqrt{3} dz}{2}$ -So we have: -$I_2=\frac{2\sqrt{3}}{3}\int\frac{dz}{1+z^2} =\frac{2\sqrt{3}}{3}\arctan z $ -Now, going back to $I_1$ -$I_1=ln|t^2-t+1|-3\sqrt{3} \arctan \frac{2t+1}{\sqrt{3}}$ -and if we go back to $I$ -$I=\frac{1}{3}ln|t+1|-\frac{1}{3} ln|t^2-t+1|-\sqrt{3} \arctan \frac{2t+1}{\sqrt{3}}$ -in terms of $x$ -$I=\frac{1}{3}ln|\sqrt{x+1}+1|-\frac{1}{3} ln|x+2-\sqrt{x+1}|-\sqrt{3} \arctan
\frac{2\sqrt{x+1}+1}{\sqrt{3}}$ -Yet, in my workbook I have a different solution, but I can't find any mistakes here, any help? - -REPLY [3 votes]: $$\int \frac{\sqrt{x+1}+2}{(x+1)^2 - \sqrt{x+1}}dx$$ - -Set $s=\sqrt{x+1}$ and $ds=\frac{dx}{2\sqrt{1+x}}$ -$$\int\frac{(s+2)2s}{s^4-s}ds=\int\frac{2s^2+4s}{s^4-s}ds=2\int\frac{s+2}{s^3-1}ds\overset{\text{partial fractions}}{=}2\int\frac{-s-1}{s^2+s+1}ds+2\int\frac{ds}{s-1}$$ -$$=-\int\frac{2s+1}{s^2+s+1}ds-\int\frac{ds}{s^2+s+1}+2\int\frac{ds}{s-1}$$ -set $p=s^2+s+1$ and $dp=(2s+1)ds$ -$$=-\int \frac{dp}{p}-\int \frac{ds}{s^2+s+1}+2\int\frac{ds}{s-1}$$ -$$=-\ln|p|-\int\frac{ds}{\left(s+1/2\right)^2+3/4}+2\int\frac{ds}{s-1}$$ -Set $w=s+1/2$ and $dw=ds$ -$$=-\ln|p|-\frac 4 3\int\frac{dw}{\frac{4w^2}{3}+1}+2\ln|s-1|$$ -$$=-\ln|p|-\frac{2\arctan(\frac{2w}{\sqrt 3})}{\sqrt 3}+2\ln|s-1|+\mathcal C$$ -$$=\color{red}{2\ln|1-\sqrt{x+1}|-\ln|x+\sqrt{x+1}+2|-\frac{2\arctan\left(\frac{2\sqrt{x+1}+1}{\sqrt 3}\right)}{\sqrt 3}+\mathcal C}$$<|endoftext|> -TITLE: Does $f\otimes \operatorname{Id} = \operatorname{Id}$ imply $f= \operatorname{Id}$? -QUESTION [8 upvotes]: Let $R$ be a commutative ring, and $X$ an $R$-module. If an $R$-endomorphism of $X$ satisfies $f\otimes \operatorname{Id}_X = \operatorname{Id}_{X\otimes X}$, is it true that $f=\operatorname{Id}_X$ ? - -It is true if $X$ is locally free, or monogenous (i.e. generated by a single element), but I suspect it is false in general ; but I can't seem to come up with any convincing example. -If by any chance it is actually true for module categories, is it also true for any symmetric monoidal category (seems even more unlikely, though) ? - -REPLY [9 votes]: Your guess is right, it's indeed not true in general: Any choice of an $R$-module $X$ such that $X\otimes_R X=0$ and $f\neq\text{id}: X\to X$ gives a counterexample, for example you could take $R := {\mathbb Z}$, $X := {\mathbb Q}/{\mathbb Z}$ and $f = 2\cdot \text{id}$.<|endoftext|> -TITLE: For which integers $a,b$ does $ab-1$ divide $a^3+1$?
-QUESTION [12 upvotes]: A problem I wasn't able to solve: - -For which values of $a,b\in\mathbb{Z}$ does $ab-1$ divide $a^3+1$? - -I am looking for every possible solution. Some of them are trivial, like $a=0,b=0$ or $(a,b)\in\{(1,1),(1,2),(1,3),(2,1),(2,2),(3,1),(3,5),(5,3)\}.$ -You may notice it is very similar to the famous 1988 IMO problem #6, and I bet Vieta jumping is the key, but an elliptic curve seems to be involved. You may also notice that $(ab-1)\mid (a^3+1)$ implies $(ab-1)\mid (a^3 b^3+b^3)$, hence $(ab-1)\mid (a^3+1)$ is equivalent to $(ab-1)\mid (b^3+1)$. - -In order to avoid trivial cases, we may assume $|a|>1$ and $|b|>1$ without loss of generality. Given -$$ a^3+1 = (ab-1)\cdot k \tag{1}$$ -we must have $k\equiv -1\pmod{a}$, i.e. $k=(ac-1)$. The previous identity then becomes: -$$ a^2-(bc)a+(b+c)=0 \tag{2}$$ -hence every solution $(a,b)$ is associated with other solutions $\left(a,\frac{a^2+b}{ab-1}\right),\left(a,\frac{b^2+a}{ab-1}\right)$. -$b^2 c^2-4(b+c)$ has to be a square: that obviously cannot happen if $(bc-1)^2 < b^2 c^2-4(b+c) < (bc)^2$, since then it lies strictly between consecutive squares: that observation leads to the fact that the only solutions in $\mathbb{N}$ are the ones listed above, but what about the other solutions in $\mathbb{Z}$? - -REPLY [6 votes]: A solution can be obtained in a more direct manner than Vieta jumping. -First, if $ab-1\mid a^3+1$, then $ab-1\mid a^2+b$ since $b\cdot(a^3+1)=a^2\cdot(ab-1)+a^2+b$. Similarly, $ab-1\mid b^2+a$ and $ab-1\mid b^3+1$. So, $ab-1$ divides either all or none of $a^3+1$, $a^2+b$, $b^2+a$, and $b^3+1$. -In particular, as has already been noted, if $(a,b)$ is a solution, then so is $(b,a)$, so we may restrict ourselves to solutions with $|a|\le|b|$. For small, fixed $a$, we may simply check all $b=(k+1)/a$ for integers $k\mid a^3+1$.
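That brute-force check is easy to script; the following Python sketch (mine, not part of the answer; the function names are made up) lists, for a fixed $a$, every $b$ in a window with $ab-1 \mid a^3+1$:

```python
def is_solution(a, b):
    """True when a*b - 1 divides a^3 + 1 (excluding the degenerate case a*b = 1)."""
    return a * b != 1 and (a**3 + 1) % (a * b - 1) == 0

def solutions_for(a, bound=50):
    """All b with |b| <= bound such that (a, b) is a solution."""
    return sorted(b for b in range(-bound, bound + 1) if is_solution(a, b))

# For a = 2, the divisors of a^3 + 1 = 9 give b in {-4, -1, 0, 1, 2, 5};
# keeping only |b| >= |a| leaves exactly the listed values -4, 2, 5.
```

The symmetry noted in the question, that $ab-1\mid a^3+1$ if and only if $ab-1\mid b^3+1$, can also be confirmed numerically with this function.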
This gives us the following solutions with $|b|\ge|a|$, assuming I haven't missed any: - -$a=0$ and any $b$ -$a=1$ and $b=-1,2,3$ -$a=-1$ and any $b$ -$a=2$ and $b=-4,2,5$ -$a=-2$ and $b=-4,3$ -$a=3$ and $b=-9,5$ -$a=-3$ and $b=-9,4$ - -In the following, we may therefore assume $4\le|a|\le|b|$. -If $ab-1\mid a^2+b$, either $b=-a^2$, which is always a solution, or $|a^2+b|\ge|ab-1|$. The latter, we split into two cases based on the sign of $a^2+b$. -If $a^2+b>0$, we get $a^2+b\ge|ab-1|$ from which it follows that $|a|^2+|b|\ge|a|\cdot|b|-1$. Still assuming $|b|\ge|a|$, this gives -$$ -\bigl(|b|-|a|-1\bigr)\bigl(|a|-1\bigr)=|a|\cdot|b|-|a|^2-|b|+1\le2 -$$ -which implies that $|b|$ is either $|a|$ or $|a|+1$ when $|a|>3$. We can then check the alternatives $b=a$, $b=-a$, $b=a\pm 1$, $-b=a\pm 1$ (with $\pm$ depending on sign of $a$) to verify these give no additional solutions. -Similarly, if $a^2+b<0$, we have $-b>a^2$, which with $|b|\ge|a|\ge4$ makes -$$|ab-1|\ge|a|\cdot|b|-1>|b|=-b>-b-a^2>0$$ -for which there is no solution. -Although it is quite likely that I've made some mistake along the way, the approach should work.<|endoftext|> -TITLE: How can I draw plane distributions in $\mathbb{R}^3$? -QUESTION [5 upvotes]: I see so many nice pictures of contact structures, integrable plane distributions, etc., in manuscripts and online and I have absolutely zero idea how they're made. For example, the following image was posted on Tumblr (source unknown): - -I was able to gather some information online. -For instance, if I go to the Wikipedia page on Contact geometry, there's an image of a contact structure on $\mathbb{R}^3$, the notes on which say Generated with MetaPost and Inkscape. I have some experience with Inkscape but ideally, I'd like to find a solution that can take a form (e.g. $dz - y dx$ for the Wiki picture) and automate the drawing without me having to draw a hundred little rectangle "planes" and manually position them, etc.
-I also took a look through the online Mathematica documentation to see what was available there...I'm sure there's a solution out there, but without asking someone, I just can't seem to find it on my own. -Any information would be hugely appreciated! - -REPLY [4 votes]: I doubt you'll find a ready-to-go solution for this type of plot - odds are you'll have to roll your own to some extent. Here's a quick starting point in Mathematica: - -(* draw a small square patch through (x, y, z) lying in the plane with slopes xs, ys *) -SlopeSquare[x_, y_, xs_, ys_, size_: 1, z_: 0] := With[{ - x1 = x - size, y1 = y - size, x2 = x + size, y2 = y + size, - pt = ({#1, #2, z + xs (#1 - x) + ys (#2 - y)} &)}, - Polygon[{pt[x1, y1], pt[x1, y2], pt[x2, y2], pt[x2, y1]}]] - -(* one patch per grid point; xs = -y, ys = 0 draws the planes of dz + y dx = 0 *) -Graphics3D[Flatten@Table[ - SlopeSquare[x, y, -y, 0, 0.09], {x, -2, 2, 0.2}, {y, -2, 2, 0.2}], - PlotRange -> {All, All, {-2, 2}}, Boxed -> False] - -It's pretty rough - for example these rectangles are drawn with constant projection onto the x-y plane rather than with constant size, and it only works for forms of the form $dz + \texttt{xs} dx + \texttt{ys} dy$. Hopefully you get the idea and can work out how to tweak it to your liking.
-Many thanks in advance! - -REPLY [4 votes]: Let $f_n(z)=e^{z^n}$. Assume $P(f_1,\ldots,f_n)=0$ with $P\in\Bbb C[X_1,\ldots, X_n]$. -Each monomial $a_{i_1,\ldots, i_n}X_1^{i_1}\cdots X_n^{i_n}$ of $P$ contributes $a_{i_1,\ldots, i_n}e^{i_1z+i_2z^2+\ldots+i_nz^n}$. As the polynomials $i_1X+i_2X^2+\ldots+i_nX^n\in\Bbb Z[X]$ are pairwise distinct and have no constant term, we know (see the claim below) that the $e^{i_1z+i_2z^2+\ldots+i_nz^n}$ are $\Bbb C$-linearly independent, hence all $a_{i_1,\ldots, i_n}$ are $=0$. - -Here's the missing link: -Claim. The family $\left\{t\mapsto e^{tf(t)}\right\}_{f\in\Bbb R[X]}$ of functions $\Bbb R\to\Bbb R$ is $\Bbb R$-linearly independent. -Proof. We can define a total order on $\Bbb R[X]$ by letting $$f\prec g\iff \exists x_0\in\Bbb R\colon \forall x>x_0\colon f(x)<g(x).$$ Moreover, if $f\prec g$, then for every $\epsilon>0\colon\exists x_0\in\Bbb R\colon\forall x>x_0\colon e^{xf(x)}<\epsilon\,e^{xg(x)}$. Hence, in a vanishing linear combination of the $e^{xf(x)}$, dividing by the term whose exponent polynomial is $\prec$-maximal and letting $x\to\infty$ shows that its coefficient is $0$; repeating the argument, all coefficients vanish.<|endoftext|> -TITLE: Possibly rotated parabola from three points -QUESTION [7 upvotes]: I'm looking for a possibly rotated parabola in the plane, i.e., the solution to a quadric like -$$ -Ax^2 + 2Bxy + Cy^2 + Dx + Ey + F = 0 -$$ -where exactly one of the eigenvalues of -$$ -\begin{bmatrix} -A & B \\ B & C -\end{bmatrix} -$$ -is zero, and where the solution set is nonempty. -I know the $xy$ coordinates of the vertex $V$ of the parabola (i.e., the point that lies on the axis of reflectional symmetry) and of two points $P$ and $Q$ of the parabola, one on each side of the vertex, so that going from $P$ to $V$ to $Q$ along the parabola involves no backtracking. -(The corresponding problem where $P$ and $Q$ might both be on the same side has multiple solutions for some configurations, but that's why I've included the "between-ness condition".) -I'd like to know either the coefficients $A, \ldots, F$, or the focus and directrix, or any other unambiguous description of the parabola that I can convert to one of these.
-I keep feeling as if something from projective geometry involving perspectivities and projectivities and (2,2)-correspondences ought to make this simple, but I can't see it. - -REPLY [2 votes]: John, I got qualitatively the same result, by what I think is a somewhat different method. I started with $P=(a,b,1)$, $V=(0,0,1)$, $Q=(c,d,1)$ and an arbitrary point $I=(t,1,0)$ on the line at infinity, and drew the conic passing through these four, but tangent to the line at infinity at $I$. -For a rotation about $V$, the indeterminate $t$ is the tangent of the angle necessary to rotate through to bring $I$ to $I'=(0,1,0)$; the corresponding sine and cosine are $t\big/\sqrt{1+t^2}$ and $1\big/\sqrt{1+t^2}$, respectively. The transformed points $P'$ and $Q'$ I’ll not write down here, but the conic passing through $P'$, $V$, $Q'$, and $I'$ will have equation $y=Ax^2+Bx$, no constant term because it passes through the origin. And $V$ is the vertex of this parabola if and only if $B=0$. -Plugging the coordinates of $P'$ and $Q'$ into this, you get -$$ -B=\frac{\frac{(a-bt)^2}{1+t^2}\frac{ct+d}{\sqrt{1+t^2}} --\frac{(c-dt)^2}{1+t^2}\frac{at+b}{\sqrt{1+t^2}}}\Delta\,, -$$ -of which we only want a condition for zeroness. So the equation for $t$ seems to be the cubic $(a-bt)^2(ct+d)-(c-dt)^2(at+b)=0$. -I tried $P=(-1,1)$, $Q=(5,2)$, and got the equation $-23 + 54t - 12t^2 + 9t^3=0 $, assuming I typed everything in correctly; and of course my hand computations above would need to be checked as well. But I think the principle is valid. For my example, the derivative has negative discriminant, so the polynomial is increasing. At least in this case there is only the one real root.<|endoftext|> -TITLE: A tensor identity - $\text{Hom}_{R}(A,B \otimes_S C) \cong \text{Hom}_{R}(A,B)\otimes _SC$ -QUESTION [5 upvotes]: Let $R,S$ be associative algebras over $\mathbb{C}$. Let $A$, $B$ and $C$ be a left $R$-module, an $(R,S)$-bimodule, and a left $S$-module, respectively.
Assume that $B\otimes_S C$ is finite-dimensional. -Note that $\text{Hom}_{R}(A,B)$ and $B\otimes_S C$ are right $S$-modules and left $R$-modules in a natural way. -$\bf{My \ \ Question:}$ - -Prove that $\text{Hom}_{R}(A,B)\otimes _SC \cong \begin{align} \text{Hom}_{R}(A,B \otimes_S C) \end{align}$. - -My idea is that the map $\Phi:\text{Hom}_{R}(A,B)\otimes _SC \rightarrow \begin{align} \text{Hom}_{R}(A,B \otimes_S C) \end{align}$ given by -$\Phi(f \otimes v)(a) : = f(a)\otimes v$, for all $a\in A, f\in \text{Hom}_{R}(A,B) $ and $v\in C$, is an isomorphism. Thank you very much! - -REPLY [3 votes]: For a simple counterexample, take $R=S=\mathbb{C}[t]$ a polynomial ring, $A=C=\mathbb{C}=R/(t)$, and $B=_R\!\!R_R$. -Then $\operatorname{Hom}_R(\mathbb{C},R)\otimes_R\mathbb{C}=0$ as there are no non-zero $R$-module homomorphisms $\mathbb{C}\to R$, but $\operatorname{Hom}_R(\mathbb{C},R\otimes_R\mathbb{C})\cong -\operatorname{Hom}_R(\mathbb{C},\mathbb{C})$ is non-zero. -Here $B\otimes_SC=R\otimes_R\mathbb{C}\cong\mathbb{C}$ is finite dimensional, and there are even similar examples with everything finite dimensional, as there are finite dimensional algebras $\Lambda$ with a non-zero finite dimensional module $M$ for which $\operatorname{Hom}_\Lambda(M,\Lambda)=0$, so you can take $R=S=B=\Lambda$ and $A=C=M$.<|endoftext|> -TITLE: Is Gaussian curvature intrinsic in higher dimensions? -QUESTION [9 upvotes]: Let $M$ be a hypersurface (a submanifold of codimension 1) in $\mathbb{R}^{n}$. Is it true that its Gaussian curvature is intrinsic? (when $n>3$). -Reminder: -We focus our attention on a small part of $M$ which is orientable, and choose a smooth unit normal vector field $N$. Then we obtain the shape operator $s$, satisfying: $$sX=-\nabla_XN$$ where $\nabla$ is the Levi-Civita connection on $\mathbb{R}^{n}$.
$s$ is self-adjoint, hence there are $n-1$ real eigenvalues $k_1,\dots,k_{n-1}$, and the Gaussian curvature is defined as $\det s=k_1k_2\cdots k_{n-1}$ - -As noted by Ivo Terek, for odd $n$, the answer is positive. -For even $n$, I would still like to know whether the curvature is intrinsic up to sign, i.e if for a given $(M,g)$ only two values are possible? (one is the negative of the other). -Added Clarification -Assume we have an abstract $n−1$ dimensional Riemannian manifold, which can be embedded in $\mathbb{R}^n$. (By abstract I mean we do not have a "preferred" or "canonical" embedding). The Gaussian curvature is defined via an embedding. -The question is whether it's possible to get different values of the curvature when computing it w.r.t different embeddings. (in the case of even $n$, we consider $k$,$−k$ as the same values, since by changing the direction of the normal, we change the curvature's sign). -For $n=3$ Gauss's Theorema Egregium famously asserts the curvature is an intrinsic invariant, i.e depending only on the metric of $M$, and not on the embedding chosen. -The question is whether this remains true in higher dimensions? - -REPLY [8 votes]: To answer the question you asked in the comments: Yes, the Gaussian curvature of a hypersurface is an intrinsic isometry invariant up to sign. One reference for this is Volume 4 of Spivak's Comprehensive Introduction to Differential Geometry. In my second edition it's Corollary 23 in Chapter 7.<|endoftext|> -TITLE: Is this statement true: a set is open if every point has a closed ball contained inside of the set -QUESTION [10 upvotes]: Is this statement true: - -Set $S$ (on a metric space) is open if $\forall x \in S$, $\exists - \delta > 0,$ s.t. $\thinspace \overline B_\delta(x) \subset S$ - -I am a little bit thrown off by the closed ball instead of open ball definition of open set. Can someone verify the above statement and show how it is the same as the open ball version.
- -REPLY [2 votes]: Both other answers are correct, but they are not getting to the metric content of the problem... -Let $X$ be a metric space, $S \subset X$, and $x \in S$. If $S$ is open, then $X \setminus S$ is closed, so $\mathrm{dist}(x, X \setminus S)$ exists and is positive. Otherwise, if $x \in \partial S$, $\mathrm{dist}(x, X \setminus S)$ is only nonnegative. Set $\delta= \frac{1}{2} \mathrm{dist}(x, X \setminus S)$. Then $\overline{B_\delta (x)} \subset S$. -Note that $\delta = \frac{1}{2}\mathrm{dist}(\dots)$ is not essential. We can choose any $\eta \in (0,1)$ and $\delta = \eta \mathrm{dist}(\dots)$ works as well. In any event, if $x$ may be chosen on $\partial S$, $\delta$ is forced to $0$ and the hypothesis ("$\forall x \in S, \exists \delta > 0 \dots$") does not hold.<|endoftext|> -TITLE: Why does the antisymmetrization map factor through $n$-forms? -QUESTION [5 upvotes]: Consider a $k$-algebra $A$ and a bimodule $M$. One can construct two complexes, the Hochschild complex $C_n(A,M)$ and the Chevalley-Eilenberg complex $C'_n(A,M)=M\otimes \Lambda^n(A)$. Given an element $m\otimes a_1\otimes \cdots\otimes a_n$ and a permutation $\sigma\in S_n$, there is an action $$\sigma(m\otimes a_1\otimes\cdots\otimes a_n)=m\otimes a_{\sigma^{-1}1}\otimes\cdots\otimes a_{\sigma^{-1}n}$$ -of $k[S_n]$ on $C_n(A,M)$. The element $\varepsilon_n = \sum_{\sigma\in S_n} (-1)^\sigma \sigma$ induces well defined maps $C_n'(A,M)\to C_n(A,M)$ which turn out to give a chain map $C'(A,M)\to C(A,M)$. When $A$ is commutative and $M$ is symmetric the differential of $C'$ is trivial and taking homology there is a map $\varepsilon : M\otimes \Lambda^n(A)\to H_n(A,M)$. -If $\Omega^1(A)$ is the $A$-module of Kähler differentials of $k\to A$, one defines $\Omega^n(A) = \Lambda_A^n(\Omega^1(A))$ (where the exterior product is taken over $A$), so it is spanned by the elements $a_0da_1\cdots da_n$. There is a canonical $k$-map $d:A\to \Omega^1(A)$ that sends $a\mapsto da$.
In his book Cyclic Homology, Loday claims that $\varepsilon$ factors through $M\otimes_A \Omega^n(A)$, and to check this is suffices to check that any element of the form $$\varepsilon(mx,y,a_3,a_4,\cdots,a_{n+1})+\varepsilon(my,x,a_3,\cdots,a_{n+1})-\varepsilon(m,xy,a_3,\cdots,a_{n+1})$$ -is a boundary. -I have two problems: - -First, I assume the map $\varepsilon$ wants to factor through is the exterior power of $d:A\to \Omega^1(A)$ over $k$ followed by the projection to the exterior power over $A$. -Assuming this, I really cannot see how checking the above suffices. Perhaps he wants to use that $d$ is a universal derivation? For example, one needs to check that one can replace $\otimes=\otimes_k$ with $\otimes_A$, and for this one needs to show an element of the form - -$$\varepsilon(my,x,a_3,\cdots,a_{n+1})-\varepsilon(m,xy,a_3,\cdots,a_{n+1})$$ -also goes to a boundary. I am not sure how this is included in Loday's verification. - -REPLY [3 votes]: The $n$-th exterior power $\Omega^nA=\wedge_A^n(\Omega^1_A)$ represents the functor $$N\mapsto\left\{ d : A^n\to N\ |\ d\text{ antisymmetric, }k\text{-multilinear, derivation in all arguments}\right\}\\\ \ \ \ \ \ \ \ =\left\{ d : A^n\to N\ |\ d\text{ antisymmetric, }k\text{-multilinear, derivation in first argument}\right\}.$$ -In particular, to construct the desired map -$$\tilde{\varepsilon}: M\otimes_A \Omega^n A\to H_n(A,M)$$ it suffices to construct a map $$\hat{\varepsilon}: A^n\to\text{Hom}_A(M,H_n(A,M))$$ which is antisymmetric, $k$-multilinear and a derivation in the first component; then $\tilde{\varepsilon}$ is the unique morphism with $$\tilde{\varepsilon}(m\otimes\text{d}a_1\wedge\ldots\wedge\text{d}a_n)=\hat{\varepsilon}(a_1,\ldots,a_n)(m).$$ Here, for $\hat{\varepsilon}$ you take the adjoint of $\varepsilon$, which we already know to be $k$-multilinear and antisymmetric, and the Leibniz rule for the first argument is what Loday claims needs to be checked.<|endoftext|> -TITLE: Proving an if and only 
if statement -QUESTION [8 upvotes]: Suppose I am trying to prove a statement in the form A if and only if B. I know I need to prove that - -If A, then B -If B, then A - -I know that 1 is equivalent to proving "If not B, then not A". -My question is: When proving A if and only if B, is it permissible to prove "if not B, then not A" and then "if B, then A." -I have seen many people prove A iff B by showing "If not A, then not B" and then "If not B, then not A," but never the way I described, which is why I am asking if it is okay. - -REPLY [6 votes]: Yes, that is perfectly fine. A implies B is logically equivalent to "not B, then not A".<|endoftext|> -TITLE: In how many different ways can $3$ red, $4$ yellow and $2$ blue bulbs be arranged in a row? -QUESTION [6 upvotes]: $1)$ How many different ways can $3$ red, $4$ yellow and $2$ blue bulbs be arranged in a row? -Do I just say $3! 4! 2! = 288$ ? - -$2)$ On a shelf there are $4$ different math books and $8$ different English books. -a. If the books are to be arranged so that the math books are together, how many ways can this be done? -b. What is the probability that all the math books will be together? -For part $a)$, I put $8! 4! = 967680$ and part $b)$, $\dfrac{8!4!}{ 12!}$ - -I'm not too sure if I did these right and I would appreciate some help, thanks. - -REPLY [3 votes]: I think we are to assume bulbs of the same colour are indistinguishable. So we are counting the "words" of length $9$ that have $3$ R, $4$ Y, and $2$ B. -The places for the R's can be chosen in $\binom{9}{3}$ ways. For each of these ways, the places for the Y's can be chosen in $\binom{6}{4}$ ways. Multiply and simplify. -About the books, imagine that the math books are placed in a box, labelled M. Then we have $9$ objects, the English books and the M. These can be arranged on the shelf in $9!$ ways. For each of these ways, the math books can be taken out of the box and arranged in $4!$ ways, for a total of $9!4!$.
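As a sanity check on these counts (a Python sketch of my own, not part of the original answer), brute-force enumeration agrees with $\binom{9}{3}\binom{6}{4}=1260$ for the bulbs, and the book count follows the block argument above:

```python
from itertools import permutations
from math import comb, factorial

# Bulb arrangements: distinct length-9 words with 3 R's, 4 Y's and 2 B's.
bulbs = set(permutations("RRRYYYYBB"))
assert len(bulbs) == comb(9, 3) * comb(6, 4)  # 1260

# Books: treat the 4 math books as one block among the 8 English books,
# giving 9! orders of objects times 4! internal orders of the block.
together = factorial(9) * factorial(4)

# Probability that a uniformly random order of all 12 books keeps the
# math books together.
prob = together / factorial(12)  # 1/55
```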
-For the probability, divide as you did by $12!$.<|endoftext|> -TITLE: Usefulness of the notion of Hilbert scheme in algebraic geometry. -QUESTION [6 upvotes]: Could someone tell me why and how Hilbert schemes and relative Hilbert schemes are important and useful in algebraic geometry? -Could anyone give me some applications of this notion in concrete terms? -Thanks a lot for your answers. - -REPLY [25 votes]: As Alex Youcis said, there are many ways Hilbert schemes can be useful. What follows is my personal favorite: Kollár's counterexample to the Integral Hodge Conjecture. -I will try to be elementary for the motivation, but for the actual example I will have to be more technical. -The Hodge Conjecture -You might've heard of the Hodge Conjecture (one of the Clay Millenium problems). Fix $X$ a non-singular complex projective variety. We can ask: what cohomology classes of $X$ are linear combinations of Poincaré duals of subvarieties of $X$? Only some can be expected to be such linear combinations: these are called Hodge classes and are the elements of $H^{2k}(X,A) \cap H^{k,k}(X)$, where our coefficients $A$ will be either $\mathbf{Q}$ or $\mathbf{Z}$. If a Hodge class is indeed a linear combination of Poincaré duals of homology classes of subvarieties in $X$, we say it is algebraic. -We can now state the conjecture: -Hodge Conjecture. If $X$ is a non-singular complex projective variety, then every rational Hodge class is algebraic. -Now this is not the original form of the Hodge Conjecture: the original conjecture concerns integral Hodge classes, i.e., elements in $H^{2k}(X,\mathbf{Z}) \cap H^{k,k}(X)$: -Integral Hodge Conjecture. If $X$ is a non-singular complex projective variety, then every integral Hodge class is algebraic. -This version of the Hodge Conjecture is false: Atiyah and Hirzebruch in 1962 found examples of torsion integral Hodge classes that are not algebraic. Maybe we can at least ask for all non-torsion Hodge classes to be algebraic? 
But the answer to this question is also "no": Kollár in 1990 constructed non-torsion integral Hodge classes that are not algebraic. These are the first known non-torsion counterexamples to the Integral Hodge Conjecture; see Colliot-Thélène–Voisin, Totaro, Diaz, and Ottem–Suzuki for other examples. -Kollár's idea was the following: for "very general" hypersurfaces $X$ of certain degree, you can degenerate the pair $(C,X)$ where $C$ is any curve in $X$, to a special pair $(C_0,X_0)$ using the relative Hilbert scheme, and then compute the degree of $C$ there to prove certain one-dimensional Hodge classes in $X$ cannot be classes associated to any curve $C$. -Kollár's Counterexample -I'll now present Kollár's counterexample. For simplicity, I will do a very special case of the construction. You can modify the numerics in many ways and still get counterexamples to the integral Hodge conjecture. See Voisin and Soulé for details. -Let $X \subseteq \mathbf{P}^4$ be a hypersurface of degree $125$. The Lefschetz hyperplane theorem together with Hodge duality implies $H^2(X,\mathbf{Z}) = \mathbf{Z} \cdot h$, where $h$ is the restriction of the hyperplane section $H$ on $\mathbf{P}^4$, and $H^4(X,\mathbf{Z}) = \mathbf{Z} \cdot \alpha$ where -$$ - \int_X \alpha \cap h = 1. -$$ -Note that $125\alpha$ satisfies -$$ - \int_X 125\alpha \cap h = 125 = \int_X h^3, -$$ -so $125\alpha = h^2$ in cohomology. We then claim -Theorem (Kollár). Let $X$ be a very general hypersurface of degree $125$ in $\mathbf{P}^4$. Then, any curve $C \subseteq X$ has degree divisible by $5$, and so the Hodge class $\alpha$ from above is not algebraic. -Note that since $125\alpha = h^2$ in cohomology, this does not give a counterexample to the (rational) Hodge Conjecture. -Here "very general" means that $X$ is in the complement of a union of countably many Zariski closed subsets of the space $\mathbf{P}^N$ parametrizing all degree $125$ polynomials on $\mathbf{P}^4$. 
The exact description of this set will be in the proof. -Step 1. For such hypersurfaces $X_0$ of a certain type, any 1-dimensional subscheme $C_0 \subseteq X_0$ has degree divisible by $5$. -Proof of Step 1. We first describe these $X_0$. Consider $Y$ a hyperplane in $\mathbf{P}^4$, and its image $X_0$ via the composition of a $5$-uple embedding, followed by a generic projection back down to $\mathbf{P}^4$. -By the theory of generic projections (the reference I found is Roberts), the map $\phi\colon Y \to X_0$ then satisfies the following properties: - -$X_0$ is a degree $125$ hypersurface in $\mathbf{P}^4$; -$\phi$ is generically one-to-one; -$\phi$ is two-to-one generically over a surface in $X_0$; -$\phi$ is three-to-one generically over a curve in $X_0$. - -Now let $C_0 \subseteq X_0$ be a curve with associated cycle $z_0$. By the properties above, there is a cycle $\tilde{z}_0$ on $Y$ such that $\phi_*(\tilde{z}_0) \in \{z_0,2z_0,3z_0\}$, and so $\deg \phi_*(\tilde{z}_0) \mid 6\deg z_0$. On the other hand, we can compute -$$ - \deg \phi_*(\tilde{z}_0) = \int_{X_0} c_1(\mathcal{O}_{X_0}(1)) \cap \phi_*(\tilde{z}_0) = \int_Y c_1(\phi^*\mathcal{O}_{X_0}(1)) \cap \tilde{z}_0 = \int_Y c_1(\mathcal{O}_{Y}(5)) \cap \tilde{z}_0, -$$ -by the projection formula [Fulton, Thm. 2.5] -and so $5 \mid \deg \phi_*(\tilde{z}_0) \mid 6 \deg z_0$. But $5$ and $6$ are coprime, so $5 \mid \deg z_0$. $\blacksquare$ -Step 2. Any very general hypersurface $X$ of degree $125$ with a chosen curve $C \subseteq X$ can be degenerated into a hypersurface $X_0$ and a curve $C_0 \subseteq X_0$ such that $\deg C = \deg C_0$. -Proof of Step 2. Let $\mathbf{P}^N$ be the space parametrizing all degree $125$ polynomials on $\mathbf{P}^4$. Let $\mathcal{X} \to \mathbf{P}^N$ be the universal hypersurface.
Then, consider the relative Hilbert schemes -$$ - \mathcal{H}_v \to \mathbf{P}^N -$$ -parametrizing pairs $\{(Z,X) \mid Z \subseteq X\}$, where $Z$ is a one-dimensional subscheme with Hilbert polynomial $v$. The Hilbert polynomials $v$ encode all possible values for the degree and genus of $Z$, and so there are only countably many choices of $v$. Now, we use the following facts about the relative Hilbert scheme [Kollár, Thm. 1.4]: - -The morphism $\rho_v\colon \mathcal{H}_v \to \mathbf{P}^N$ is -projective; -There exists a universal subscheme -$$\mathcal{Z}_v \subseteq \mathcal{H}_v \times_{\mathbf{P}^N} \mathcal{X}$$ -which is flat over $\mathcal{H}_v$. - -Now let $U$ be the set -$$\mathbf{P}^N \setminus \bigcup_{v \in I} \rho_v(\mathcal{H}_v)$$ -where $I$ is the set of Hilbert polynomials for which the map $\rho_v$ is not dominant. This set $U$ will parametrize the "very general" hypersurfaces $X$ of degree $125$. -Suppose $X \subseteq \mathbf{P}^4$ is a very general non-singular hypersurface $X$ of degree $125$, parametrized by $x \in U$. Let $C \subseteq X$ be a curve; then, giving $C$ the reduced subscheme structure, $(C,X)$ parametrizes a point $c_x \in \mathcal{H}_v$ over $x$ for some $v$. By definition of $U$, we have $\rho_v(c_x) = x$, hence the map $\rho_v$ is surjective since it is dominant and projective. We then have some point $c_0 \in \mathcal{H}_v$ such that $\rho_v(c_0) = x_0$, where $x_0$ is the point parametrizing the hypersurface $X_0$ constructed above. The fibre $Z_0$ of the universal subscheme $\mathcal{Z}_v$ over $c_0$ gives a subscheme $Z_0 \subseteq X_0$, which by flatness has the same degree as $C$. $\blacksquare$ -Finally, we have shown the integral Hodge class $\alpha$ is not algebraic, since it has degree $1$, but any curve $C$ would have an associated cycle whose degree is divisible by $5$. $\blacksquare$ -An Open Question -I want to conclude with some remarks about what is open (in addition to the Hodge Conjecture). 
Notice that Kollár's example constructed above is a hypersurface of very high degree, hence is of general type. We can ask:
-Open Question. Let $X$ be a Fano (or more generally, rationally connected) variety of dimension $n$. Does the integral Hodge conjecture hold for cohomology classes of degree $4$ or $2n-2$?
-The reason we cannot hope for better is that we can construct rationally connected varieties that fail the integral Hodge conjecture for cohomology classes of degree $2n-2k$ for any $n-2 > k > 1$ by blowing up Kollár's example embedded in a larger projective space; see Voisin and Soulé, p. 113. On the other hand, the question above has a positive answer for rational varieties [Voisin and Soulé, p. 113], uniruled threefolds [Voisin 2006], and cubic fourfolds [Voisin 2007].<|endoftext|>
-TITLE: Results in mathematics whose only proof is model theoretic
-QUESTION [9 upvotes]: What are results in mathematics, for example in algebra, whose only proof so far used model-theoretic arguments?
-
-REPLY [4 votes]: I don't know whether or not there is a non-model-theoretic proof, but the first proof of the "unconditional" André-Oort conjecture for arbitrary products of modular curves was done by Pila using model theory (more explicitly, o-minimality).
-Furthermore, there is a trend of results coming out right now linking the model theory of graphs and combinatorics. For instance, see Regularity lemmas for stable graphs. Since much of this work is current and still ongoing, I would venture to say that this result (or results like this one) only have proofs in model theory (as of now).<|endoftext|>
-TITLE: Picking Multiples of 4
-QUESTION [10 upvotes]: I recently came up with and tried to solve the following problem: If you are randomly picking integers in the range $[1,30]$ out of a hat without replacement, on average, how many integers will you have to pick until you have picked all of the multiples of $4$?
-There are $7$ multiples of $4$ that can be chosen.
I know that the median number of picks until you pick a multiple of $4$ is the smallest value of $n$ such that $1-\displaystyle\prod_{i=0}^{n-1}\dfrac{23-i}{30-i}>0.5$, which is $3$. However, I don't know how to figure out how many picks are needed until all multiples of $4$ have been chosen. Can I please have some assistance?
-
-REPLY [4 votes]: Colour the $23$ numbers not divisible by $4$ blue, and call them $b_1$ to $b_{23}$. Colour the $7$ multiples of $4$ red. For $i=1$ to $23$, let indicator random variable $X_i$ be defined by $X_i=1$ if there is a red number which is chosen later than $b_i$, and by $X_i=0$ otherwise.
-Then the total number $W$ of trials until we get all the red numbers is given by $W=7+\sum_1^{23}X_i$.
-By the linearity of expectation,
-$$E(W)=7+\sum_1^{23}E(X_i).$$
-The probability that $X_i=1$ is $\frac{7}{8}$, for the blue integer $b_i$ is equally likely to be first, second, and so on up to eighth among the $8$ numbers consisting of the red numbers and $b_i$. It follows that
-$$E(W)=7+23\cdot\frac{7}{8}.$$
-This simplifies to $\frac{217}{8}$.
-Remark: The indicator random variable technique bypasses finding the distribution of $W$. That distribution is in this case not difficult to find, but the technique can be valuable when the distribution is less accessible.<|endoftext|>
-TITLE: Tail bounds for maximum of sub-Gaussian random variables
-QUESTION [14 upvotes]: I have a question similar to this one, but am considering sub-Gaussian random variables instead of Gaussian. Let $X_1,\ldots,X_n$ be centered $1$-sub-Gaussian random variables (i.e. $\mathbb{E} e^{\lambda X_i} \le e^{\lambda^2 /2}$), not necessarily independent. I am familiar with the bound $\mathbb{E} \max_i |X_i| \le \sqrt{2 \log (2n)}$, but am looking for an outline of a tail bound for the maximum.
-A union bound would give
-$$\mathbb{P}(\max_i |X_i| > t) \le \sum_i \mathbb{P}(|X_i| > t) \le 2n e^{-t^2/2},$$
-but I am looking for a proof of something of the form
-$$\mathbb{P}(\max_i |X_i| > \sqrt{2 \log (2n)} + t)
-\le \mathbb{P}(\max_i |X_i| > \mathbb{E} \max_i |X_i| + t)
-\le 2e^{-t^2/2}.$$
-Does anyone have any hints?
-
-REPLY [7 votes]: I needed something along those lines recently and didn't have a specific reference to cite, so here is a proof of a self-contained statement implying yours.
-
-Theorem. Let $X_1,...,X_n$ be independent $\sigma^2$-subgaussian random variables. Then
-$$
-\mathbb{E}[\max_{1\leq i\leq n} X_i] \leq \sqrt{2\sigma^2\log n} \tag{1}
-$$
-and, for every $t>0$,
-$$
-\mathbb{P}\!\left\{\max_{1\leq i\leq n} X_i \geq \sqrt{2\sigma^2(\log n + t)}\right\} \leq e^{-t}\,. \tag{2}
-$$
-
-Proof.
-The first part is quite standard: by Jensen's inequality, monotonicity of $\exp$, and $\sigma^2$-subgaussianity, we have, for every $\lambda > 0$,
-$$
-e^{\lambda \mathbb{E}[\max_{1\leq i\leq n} X_i]}
-\leq \mathbb{E}e^{\lambda \max_{1\leq i\leq n} X_i}
-= \mathbb{E}\max_{1\leq i\leq n}e^{\lambda X_i}
-\leq \sum_{i=1}^n\mathbb{E}e^{\lambda X_i}
-\leq n e^{\frac{\sigma^2\lambda^2}{2}}
-$$
-so, taking logarithms and reorganizing, we have
-$$
-\mathbb{E}[\max_{1\leq i\leq n} X_i] \leq \frac{1}{\lambda}\log n + \frac{\lambda \sigma^2}{2}\,.
-$$
-Choosing $\lambda := \sqrt{\frac{2\log n}{\sigma^2}}$ proves (1).
-${}$
-Turning to (2), let $u := \sqrt{2\sigma^2(\log n + t)}$. We have
-$$
-\mathbb{P}\{ \max_{1\leq i\leq n} X_i \geq u \}
-= \mathbb{P}\{ \exists i,\; X_i \geq u \}
-\leq \sum_{i=1}^n \mathbb{P}\{ X_i \geq u \}
-\leq n e^{-\frac{u^2}{2\sigma^2}}
- = e^{-t}
-$$
-the last equality recalling our setting of $u$.
-$\square$
-Here is now an immediate corollary:
-
-Corollary. Let $X_1,...,X_n$ be independent $\sigma^2$-subgaussian random variables.
Then, for every $u>0$,
-$$
-\mathbb{P}\!\left\{\max_{1\leq i\leq n} X_i \geq \sqrt{2\sigma^2\log n}+ u \right\} \leq e^{-\frac{u^2}{2\sigma^2}}\,. \tag{3}
-$$
-
-Proof. For any $u>0$,
-$$
-\mathbb{P}\!\left\{\max_{1\leq i\leq n} X_i \geq \sqrt{2\sigma^2\log n} + u \right\}
-\leq e^{-\frac{u^2}{2\sigma^2} - u\sqrt{\frac{2\log n}{\sigma^2}}}
-\leq e^{-\frac{u^2}{2\sigma^2}},
-$$
-the inequality by choosing $t := \frac{u^2}{2\sigma^2} + u\sqrt{\frac{2\log n}{\sigma^2}}$ and using (2). $\square$
-
-More: the slides of some lecture notes by John Duchi.<|endoftext|>
-TITLE: For which $n$ can $\{1,2,...,n\}$ be rearranged so that the sum of each two adjacent terms is a perfect square?
-QUESTION [19 upvotes]: For which numbers $n$ can the sequence $1$ to $n$ be rearranged such that each pair of consecutive terms adds up to a perfect square?
-Can this be done on the set of natural numbers as well? Integers? Rationals?
-
-REPLY [7 votes]: (Just to summarize things so people don't have to jump between MSE, MO, OEIS, and SO.)
-This is a rather interesting question, but there are two previous MSE posts that have already covered it. Post 1 (MSE) asks for which $n$ we can arrange {$1,2,\dots n$} so that the sum $S^k$ of every two adjacent numbers is a square (i.e., $k=2$). A commenter pointed to A090461; hence,
-$$n = 15,16,17,23,25,26,27,\dots,\infty$$
-so it is conjectured to be possible for all $n>24$. That, in turn, was inspired by Post 2 (MSE) which was the general case, but focused on sums $S^k$ for $k>2$. For $k=3$, the OP gave an example as $n=305$.
-Post 3 (MO) gives an example for $k=4$ as $n=9641$. It was also a cyclic arrangement; that is, the first and last entries also have a sum $S^k$.
-P.S. Re MYXMYX's question here about whether there is a cyclic arrangement for $n=35$ for squares, MJD found there are a whopping $17175$ possible arrangements, so chances are good.
By the update below, OEIS says there are $57$ ways to do it.)<|endoftext|> -TITLE: How to rotate the positions of a matrix by 90 degrees -QUESTION [10 upvotes]: I have a 5x5 matrix of values. I'm looking for a simple formula that I can use to rotate the position of the values (not the values themselves) 90 degrees within the matrix. -For example, here is the original matrix: -01 02 03 04 05 -06 07 08 09 10 -11 12 13 14 15 -16 17 18 19 20 -21 22 23 24 25 - -and then when the position of the values are rotated 90 degrees, it would look like this: -21 16 11 06 01 -22 17 12 07 02 -23 18 13 08 03 -24 19 14 09 04 -25 20 15 10 05 - -I found this post and this one, and I'm sure the answer I'm after is in there somewhere, but I've been out of university for quite a few years and am having trouble following the algorithm. -I need this for a C# program I'm writing and will be using Math.Net Numberics. I'm hoping there is just a simple rotation matrix/vector I can use to multiply my matrix with that will give me the result I'm after. Any suggestions are appreciated. - -REPLY [11 votes]: Transpose the matrix, then reverse the order of the columns. So $$M\mapsto M^T\begin{bmatrix}0&0&\cdots&0&1\\0&0&\cdots&1&0\\\vdots&\vdots&&\vdots&\vdots\\0&1&\cdots&0&0\\1&0&\cdots&0&0\end{bmatrix}$$ -For instance $$\begin{bmatrix}a&b&c\\d&e&f\\g&h&i\end{bmatrix}\mapsto\begin{bmatrix}a&b&c\\d&e&f\\g&h&i\end{bmatrix}^T\begin{bmatrix}0&0&1\\0&1&0\\1&0&0\end{bmatrix}=\begin{bmatrix}a&d&g\\b&e&h\\c&f&i\end{bmatrix}\begin{bmatrix}0&0&1\\0&1&0\\1&0&0\end{bmatrix}=\begin{bmatrix}g&d&a\\h&e&b\\i&f&c\end{bmatrix}$$<|endoftext|> -TITLE: Monad as not trivial adjunctions -QUESTION [5 upvotes]: It is well known that a monad $(T, \mu, \eta)$ can be factorized in multiple ways as adjunctions, and that in some sense, Kleisli is the initial factorization while Eilenberg-Moore is the final factorization. What are examples of factorizations in between? 
-For example, I can picture the monad of groups as being factorized as either the free groups (Kleisli) or all the groups (Eilenberg-Moore). What would be a factorization of the monad of groups in between these two?
-
-REPLY [4 votes]: The identity monad on $\mathrm{Set}$ factors in many different ways. The Kleisli and Eilenberg-Moore categories coincide, but another factorization is given by the free functor to $\mathrm{Top}$ (equipping a set with the discrete topology). This example illustrates that one needs to be careful with the intuition that a generic factorization lies between the Kleisli and Eilenberg-Moore adjunctions. The Kleisli category always embeds, but the comparison functor to the Eilenberg-Moore category is in general only faithful.<|endoftext|>
-TITLE: What's so special about a prime ideal?
-QUESTION [17 upvotes]: An ideal is defined roughly as follows:
-
-Let $R$ be a ring, and $J$ an ideal in $R$. For all $a\in R$ and $b\in J$, $ab\in J$ and $ba\in J$.
-
-Now, $J$ would be considered a prime ideal if
-
-For $a,b\in R$, if $ab\in J$ then $a\in J$ or $b\in J$.
-
-To my (admittedly naive) eyes, this isn't saying much. More or less, I guess it just sounds like a backwards way of describing a regular ideal.
-
-$a,b$ are always elements of $R$, though the prime ideal definition doesn't specify that one has to be in $J$...
-... but the definition of a normal ideal already tells us that the product is in $J$ if one of the elements is in $J$.
-
-So, in both cases, the product is in $J$, and either of the elements is in $J$, making them seem like incredibly similar statements to me, and not saying much about the interesting "prime-like" properties of a prime ideal.
-What makes these two different?
-
-REPLY [12 votes]: If the question is "What's so special about it?", which I might take to mean "Why does the concept matter?", I might mention that the quotient ring of a commutative ring with unit by a prime ideal is an integral domain.
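To make the integral-domain remark concrete, here is a small Python sketch (my own illustration, not part of the original answer) that lists the zero-divisor pairs of $\mathbb{Z}/m$; the ideal of multiples of $m$ in $\mathbb{Z}$ is prime exactly when no such pairs exist, i.e. exactly when the quotient is an integral domain:

```python
def zero_divisor_pairs(m):
    """Pairs (a, b) with 0 < a <= b < m and a*b == 0 (mod m).

    Z/m is an integral domain exactly when this list is empty,
    which happens exactly when m is prime.
    """
    return [(a, b) for a in range(1, m) for b in range(a, m) if (a * b) % m == 0]

print(zero_divisor_pairs(10))  # [(2, 5), (4, 5), (5, 6), (5, 8)] -- (10) is not prime
print(zero_divisor_pairs(11))  # [] -- (11) is prime, so Z/11 is an integral domain
```

The pair $(2, 5)$ in the first output is exactly the factorization $2\times5=10$ used in the answer below to show the ideal of multiples of $10$ is not prime.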
-However, later it looks as if maybe what you meant is that you're trying to understand what the definition says. -The set of all multiples of $10$ is an ideal in $\mathbb Z$. That means that if $a$ is a multiple of $10$ and $b\in\mathbb Z$, then $ab$ is a multiple of $10$. -But that ideal is not a prime ideal: $2$ and $5$ are not in that ideal but $2\times5$ is. The definition tells us that if it were a prime ideal, then if $2\times 5$ is in the ideal, then either $2$ or $5$ is in the ideal. That means that if $2\times5$ is a multiple of $10$ then either $2$ or $5$ is a multiple of $10$. But that is not true, so this ideal is not prime. -It seems to me you're confused about quantifiers. - -$a,b$ are always elements of $R$, though the prime ideal definition doesn't specify that one has to be in $J$ - -The definition of "ideal" does not say anything to the effect that either $a$ or $b$ "has to be" in $J$. It says that if one of them is in $J$, then their product has to be in $J$. The definition of "prime ideal" also includes that, in that it says it's an ideal. -One statement says that if one of two things is in $J$, then so is their product. -The other says that if their product is in $J$, then so is one of them. -There is a big difference between "If P then Q" and "If Q then P". -Now consider the set of all multiples of $11$. That is also an ideal, since if just one of $a,b$ is a multiple of $11$, then so is $ab$. But this time you cannot find two numbers $a,b$ that are not multiples of $11$ but for which $ab$ is a multiple of $11$. With $10$ we were able to do that just by factoring $10$ as $2\times5$, and we could do that because $10$ is not prime. The set of all multiples of a prime number is a prime ideal in $\mathbb Z$; the set of all multiples of a composite number (like $10$) is not. It's easy to see why the latter kind is not a prime ideal, just as we did above. 
The other statement, that if a number is prime, then it divides a product $ab$ only if it divides either $a$ or $b$, is a bit more work to prove, and is called Euclid's lemma.<|endoftext|>
-TITLE: Difference between flow and solution of ODE
-QUESTION [8 upvotes]: I am reading Wikipedia's entry on Flow and the distinction between the solution of an ODE and the flow of an ODE is not clear to me. In particular, it is clearly written that $φ(x_0,t) = x(t)$; what, then, is the purpose of even defining the flow?
-https://en.wikipedia.org/wiki/Flow_(mathematics)
-Can someone please concisely explain the difference between the two concepts and provide some examples showing their differences?
-Much thanks
-
-REPLY [4 votes]: Simple answer:
-Indeed, if you only consider autonomous differential equations, the concept of (local) flow has nothing to add, although, as always, an additional point of view helps in understanding or perhaps even finding properties that otherwise could be missed.
-Not so simple answer:
-However, it also happens that the concept of flow is much more general and need not be related to a differential equation. It can be associated for example to a stochastic differential equation, a delay equation, a partial differential equation, or even be associated to multidimensional time, etc, etc.
-Complicated but more complete answer:
-Having said this, it may seem that the concept of flow is something more general than the set of solutions of a differential equation. This is also not a good perspective, since there are generalizations of an autonomous differential equation, even general nonautonomous differential equations, that don't lead to obvious concepts of flows.
-The trick of adding $t'=1$ is clearly unsatisfactory in many situations (such as when compactness is crucial), leading for example to the study of convex hulls or lifts in the context of ergodic theory (but leading always to infinite-dimensional systems).<|endoftext|>
-TITLE: Is there a closed form or approximation to $\sum_{i=0}^n\binom{\binom{n}{i}}{i}$
-QUESTION [12 upvotes]: I tried to calculate the sum
-$$
-\sum_{i=0}^n\binom{\binom{n}{i}}{i}
-$$
-but it seems that all my known methods are poor for this.
-Not to mention the intricate recursion, that is
-$$
-\sum_{i=0}^n\binom{\binom{{\binom{n}{i}}}{i}}{i}
-$$
-Any ideas?
-
-REPLY [7 votes]: Let's try to find the maximum of
-$$f(i)=\binom {\binom ni}i$$
-Consider $i\in [n/4,3n/4]$. In this range a good approximation (from the central limit theorem) is
-$$\binom ni\simeq\frac{2^n}{\sqrt{\frac 12n\pi}}e^{-\frac{(i-n/2)^2}{n/2}}$$
-This is far larger than $i$, so we have
-$$f(i)=\binom {\binom ni}i\simeq\frac{\binom ni^i}{i!}\simeq\frac{2^{ni}}{i!(\frac 12n\pi)^{i/2}}e^{-\frac{i(i-n/2)^2}{n/2}}$$
-Taking logarithms and using the Stirling approximation
-$$\log f(i)\simeq ni\log 2 -\frac i2\log(\frac 12n\pi)-i\log i+i-\frac 12\log(2\pi i)-\frac{i(i-n/2)^2}{n/2}$$
-The middle terms are all negligible compared to the first and last, so
-$$\log f(i)\simeq ni\log 2 -\frac{i(i-n/2)^2}{n/2}$$
-$$=-\frac 2ni^3+2i^2+(\log2-\frac{1}2)ni$$
-Therefore
-$$\frac{d\log f(i)}{di}\simeq-\frac 6ni^2+4i+(\log 2-\frac 12)n$$
-With roots
-$$i=\frac{-4\pm\sqrt{16+24(\log 2-\frac 12)}}{-12/n}\simeq-0.0452n\text{ and }0.712n$$
-The first is a minimum (and outside the sensible range) but the second is a maximum. Let $\alpha\simeq0.712n$ be this maximum.
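As a quick numerical sanity check on the location of this maximum (my own sketch, not part of the original argument), one can compute the exact argmax of $f(i)=\binom{\binom ni}i$ with integer arithmetic via Python's `math.comb`:

```python
from math import comb

def argmax_f(n):
    """Index i in 0..n maximizing f(i) = C(C(n, i), i), by exact integer arithmetic."""
    return max(range(n + 1), key=lambda i: comb(comb(n, i), i))

for n in (20, 40, 60):
    print(n, argmax_f(n) / n)  # ratios should drift toward ~0.712 as n grows
```

For moderate $n$ the exact argmax already hovers around $0.7n$, consistent with $\alpha\simeq 0.712n$.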
-So using our above approximation
-$$\log f(\alpha)\simeq [-2\times 0.712^3+2\times 0.712^2+(\log2-\frac{1}2)\times 0.712]n^2\simeq0.430n^2$$
-So bounding $\sum_i f(i)$ below by its largest term gives an approximation of about $e^{0.430n^2}$, which is in line with Claude Leibovici's empirical results.
-This isn't a rigorous lower bound. The main problem is that $\alpha$ isn't an integer, and so $f(i)$ might not actually attain this maximum. Since $\alpha$ is within $\frac 12$ of an integer you can fix this by evaluating the second derivative of $\log f$ at $\alpha$ and using this to approximate $f(\alpha\pm\frac 12)$.<|endoftext|>
-TITLE: How is the smoothness of the space of deformations related to unobstructedness?
-QUESTION [12 upvotes]: As a beginning differential geometer, I've been trying to learn about deformation theory. Other than Kodaira's book, I've found virtually no references from the point of view of differential geometry. As such, my understanding is hazy.
-
-I am asking this question in an attempt to clarify my vague understanding, especially in regard to the last bullet point below. Precise definitions of terms are welcome, as are references to books or introductory papers on the subject.
-
-My vague understanding goes like this:
-We are interested in studying the "deformations" of a geometric object $X$ (e.g., a complex manifold $X$). We can associate to $X$ its "deformation exact sequence," which amounts to a kernel-cokernel exact sequence in some cohomology theory (e.g., Cech cohomology) of some map $\varphi$ which I don't understand:
-$$0 \to \text{Ker}(\varphi_*) \to H^1_X \xrightarrow{\varphi_*} H^2_X \to \text{Coker}(\varphi_*) \to 0.$$
-We interpret this as follows:
-
-$\text{Ker}(\varphi_*)$ is the space of infinitesimal (or "first-order") deformations. An object $X$ is rigid iff $X$ has no infinitesimal deformations -- i.e., $\text{Ker}(\varphi_*) = 0$.
-$\text{Coker}(\varphi_*)$ is the space of infinitesimal (or "first-order") obstructions. An object $X$ is unobstructed iff $X$ has no obstructions -- i.e., $\text{Coker}(\varphi_*) = 0$.
-Let $\mathcal{M}$ be the "moduli space of local deformations of $X$." The "formal tangent space" $T_X\mathcal{M}$ is the space of first-order deformations -- i.e., $T_X\mathcal{M} \cong \text{Ker}(\varphi_*)$.
-Somehow, in some cases, the smoothness of $\mathcal{M}$ is related to whether $X$ is unobstructed. In this case, $\dim(\mathcal{M}) = \dim(T_X\mathcal{M}) = \dim(\text{Ker}(\varphi_*))$.
-
-More hazy thoughts: I think in the cases I'm interested in, the map $\varphi_*$ can be regarded as the differential of some map $\varphi \colon \text{Somewhere} \to \text{Somewhere Else}$, and $\mathcal{M} = \varphi^{-1}(\text{point})$, so $\mathcal{M}$ is smooth if $\varphi$ is a submersion (meaning $\varphi_*$ is surjective, so $\text{Coker}(\varphi_*) = 0$), in which case $T_X\mathcal{M} = \text{Ker}(\varphi_*)$. How any of this works precisely, I don't know.
-
-REPLY [5 votes]: Consider the following very basic example. The affine scheme $\text{Spec } k[x, y]/xy$ is singular at the origin. This singularity can be detected as follows: the Zariski tangent space at the origin, which we think of as the space of $k$-algebra maps $k[x, y]/xy \to k[e]/e^2$ which reduce to $x, y \mapsto 0 \bmod e$, is 2-dimensional, since we can take
-$$x \mapsto ae, y \mapsto be$$
-for any $a, b$. But most of these tangent vectors don't lift to 2-jets. The space of 2-jets at the origin is the space of $k$-algebra maps $k[x, y]/xy \to k[e]/e^3$ which reduce to $x, y \mapsto 0 \bmod e$, and if $x \mapsto ae$ with $a \neq 0$ then we must have $y \mapsto be^2$ for some $b$, and vice versa. So the only tangent vectors which lift to 2-jets are those with $a = 0$ or $b = 0$.
(Geometrically this should make sense since this variety is the union of the $x$- and $y$-axis and these are the tangent vectors pointing along those axes.) -This example can be used to motivate the definition of formal smoothness in algebraic geometry. This is a condition weaker than being smooth, and it's usually stated in much more generality than we need here: all we need is that formal smoothness of a space $M$ (variety, scheme, stack, whatever) implies that any tangent vector $\text{Spec } k[e]/e^2 \to M$ lifts to a 2-jet $\text{Spec } k[e]/e^3 \to M$. -Now take $M$ to be a moduli space of whatever sort of objects you're trying to deform. A point of $M$ is an object of that sort. A tangent vector to a point is a first-order deformation of that object. And a 2-jet is a second-order deformation. So if $M$ is smooth at a point, then we expect any tangent vector to lift to a 2-jet, which is precisely the statement that first-order deformations lift to second-order deformations. -If $X$ is an object describing a point of the moduli space $M$, a further question is what any of this business has to do with, say, the tangent bundle of $X$. The short story is that the correct definition of "tangent space of $M$ at $X$" should return an object called the tangent complex of $X$ (shifted by $1$), which is a derived version of the tangent bundle of $X$, and "$M$ is smooth at $X$" means that the tangent complex is concentrated in degree $\pm 1$ (it's one of these, not either, but don't ask me to figure out which), where it is just the usual tangent bundle. 
Obstructions live in the tangent complex in other degrees, so if there aren't any other degrees then there aren't any other obstructions.<|endoftext|>
-TITLE: Calculating radius of circles which are a product of Circle Intersections using Polygons
-QUESTION [6 upvotes]: Let's say you imagine a circle with the radius $R$ and you inscribe a regular polygon with $n$ sides in it, whose side we know will then be: $$a=2R\sin\left(\frac{180^\circ}{n}\right)$$
-Then you draw a set of circles so that in each point of the polygon there is a circle which has the radius $R$. (Blue circles)
-After that, you then draw another set of circles that pass through the intersections of the circles in the first set. (Red circles)
-For $n=3$ it would look like this:
-
-The green circle is twice the size of our imagined circle, and we also see that here there is only one red circle whose radius is the same as our imaginary circle in which the polygon is inscribed.
-Let's start increasing our number of sides:
-
-
-So, every two more sides we get a new circle, and all others increase in size with the outer one approaching the green circle. Let's then set $n$ to $32$, for example:
-
-Question
-How can the radius $r_m$ ($m=1,2,3...$) of the $m$th red circle be calculated for a polygon with $n$ sides inside the imagined circle of radius $R$? (Is there a formula or expression to be used?)
-Thanks JeanMarie for answering this question: $$ r_m= 2R \ \cos{( \frac{m}{n}\pi )}$$
-
-REPLY [3 votes]: Use inversion (see def. below) with inversion circle your green circle. In the $n$th case, your circles will be transformed into a regular $n$-sided circumscribed polygon with a vertex $V$ at a distance (see figure) $\dfrac{10}{\cos\frac{\pi}{n}}$ from the origin.
-The inverse $V'$ of point $V$ will thus be at a distance $10 \ \cos \frac{\pi}{n}$ from the origin.
-Furthermore, $V$, being at the intersection of 2 sides, its image $V'$ will be at the intersection of the images of these sides, i.e.
the intersection of 2 circles.
-The answer is thus that the radius is $10 \ \cos \frac{\pi}{n} \ \ \ (1)$, which tends to 10 when $n$ tends to $\infty$.
-Definition: Inversion with respect to a circle with center $O$ and radius $r$ (here $r=10$) is the nonlinear transform such that the image of a point $P$ is $P'$ such that $O,P,P'$ are aligned and $\vec{OP}.\vec{OP'}=r^2$.
-See http://mathworld.wolfram.com/Inversion.html
-**Edit**: (After a thorough remark by the author of the question, I have rewritten this part.) Let it be clear that formula (1) gives the radius of the circle that contains the outermost intersection points of circles that are internally tangent to the large circle (radius 10).
-The penultimate level of intersection points is obtained by taking successively one out of two of these tangent circles, giving rise, by inversion, to a different circumscribed polygon (hopefully non-convex as in the figure we give); thus the same reasoning as before will end up with a radius $10 \ \cos \frac{2\pi}{n} \ \ \ (2)$. Then, we will take circles 3 by 3, and more generally $k$ by $k$, ...
-Thus the general formula is
-$$R_k=10 \ \cos \frac{k\pi}{n} \ \ \ \ k=1\cdots(n/2-1)$$
-Remark: this formula explains the "3D interpretation" one can have of your last graphics as a view from above of a terrestrial globe with parallels at latitude $\frac{k\pi}{n}.$
-Figure: The case $n=8=2^3$ with the 3 polygons having their vertices in correspondence, through inversion, with the intersections of the families of circles.<|endoftext|>
-TITLE: Distribution of Square of Rician Random Variable?
-QUESTION [5 upvotes]: We know that the square of a Rayleigh random variable has an exponential distribution, i.e.,
-Let the random variable $X$ have Rayleigh distribution with PDF
-$$f_X(x)=\frac{2x}{\alpha}e^{-x^2/{\alpha}}.$$
-Then the random variable $Y=X^2$ has the PDF given by $$f_Y(y)=\frac{1}{\alpha}e^{-y/{\alpha}}.$$
-For an exponentially distributed r.v.
$Y$ with mean $\mathbb{E}[Y]=1$ -$$\mathbb{E}[Y^{\delta}]=\Gamma[1+\delta].$$ ----------------------------------------------------------------------------- -Now, if the random variable $X$ has Rician distribution (unit power in direct and scattered paths), whose PDF is given by -$$f_X(x)=\frac{2x}{\alpha}\text{exp}\left(\frac{-(x^2+v^2)}{\alpha}\right)I_0\left(\frac{2xv}{\alpha}\right)$$ -with $\frac{v^2}{\alpha}=1$ and $I_0(z)$ is the modified Bessel function of the first kind with order zero. -what is the PDF of $Y=X^2$? -And what is $\mathbb{E}[Y^{\delta}]$ when $\delta<1.$ -Note: when $v^2=0$, $X$ has Rayleigh distribution. - -REPLY [2 votes]: Using a Dirac-delta method -$$ -f_Y(y)=\int_0^\infty dx\ \frac{2x}{\alpha}\text{exp}\left(\frac{-(x^2+v^2)}{\alpha}\right)I_0\left(\frac{2xv}{\alpha}\right)\delta(y-x^2) -$$ -$$ -\int_0^\infty dx\ \frac{2x}{\alpha}\text{exp}\left(\frac{-(x^2+v^2)}{\alpha}\right)I_0\left(\frac{2xv}{\alpha}\right)\frac{\delta(x-\sqrt{y})}{2\sqrt{y}}=\frac{1}{\alpha}\exp\left(\frac{-(y+\nu^2)}{\alpha}\right)I_0\left(\frac{2\nu\sqrt{y}}{\alpha}\right)\ , -$$ -which for $\nu=0$ correctly reproduces the exponential PDF. Note that $\int_0^\infty f_Y(y)dy=1$, so the PDF is correctly normalized. 
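The normalization claim is easy to confirm numerically with the standard library alone (my own check, not part of the original answer): evaluate $I_0$ by its power series, $I_0(2\nu\sqrt y/\alpha)=\sum_k (\nu^2 y/\alpha^2)^k/(k!)^2$, which is smooth in $y$, and integrate with composite Simpson's rule.

```python
from math import exp

def f_Y(y, alpha=1.0, nu=1.0):
    """PDF of Y = X^2 for Rician X: (1/alpha) exp(-(y + nu^2)/alpha) I0(2 nu sqrt(y)/alpha).
    I0 is evaluated via its power series in x = nu^2 y / alpha^2."""
    x = nu * nu * y / (alpha * alpha)
    term, total = 1.0, 1.0
    for k in range(1, 200):
        term *= x / (k * k)
        total += term
        if term < 1e-17 * total:
            break
    return exp(-(y + nu * nu) / alpha) * total / alpha

def simpson(f, a, b, n=4000):
    """Composite Simpson's rule on [a, b] with n (even) subintervals."""
    h = (b - a) / n
    s = f(a) + f(b) + sum((4 if i % 2 else 2) * f(a + i * h) for i in range(1, n))
    return s * h / 3

print(simpson(f_Y, 0.0, 60.0))  # close to 1: the density is normalized
```

Setting `nu = 0.0` reduces `f_Y` to the exponential PDF, matching the Rayleigh limit noted above.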
-$$
-\mathbb{E}[Y^\delta]=\int_0^\infty dy\ y^\delta \frac{1}{\nu^2}\exp\left(\frac{-(y+\nu^2)}{\nu^2}\right)I_0\left(\frac{2\sqrt{y}}{\nu}\right)
-=\nu ^{2 \delta } \Gamma (\delta +1) L_\delta(-1)\ ,
-$$
-as computed by Mathematica (for $\nu^2/\alpha=1$), where $L_\delta(x)$ is a Laguerre function [I checked a few numerical values and it seems it works just fine].<|endoftext|>
-TITLE: Constructing a $4$-state Markov chain model that describes the arrival of customers
-QUESTION [6 upvotes]: The times between successive customer arrivals at a facility are independent and identically distributed random variables with the following PMF:
-$$p(1) = 0.2,\qquad p(3) = 0.3,\qquad p(4) = 0.5,\qquad p(k) = 0 \ \text{ for }\ k \notin \{1,3,4\}$$
-Construct a four-state Markov chain model that describes the arrival process. In this model, one of the states should correspond to the times when an arrival occurs.
-
-Can you please explain in simple words how to construct this Markov chain? Because I am totally lost as to how the given distribution can be used in my problem.
-
-REPLY [2 votes]: The state of the Markov chain the question seems to be asking about is the length of time since the last customer arrived. It can have any of the four values, $\ 0,$$\ 1,$$\ 2,$ or $\ 3\ $. The chain is in state $\ 0\ $ at time $\ n\ $ if a customer arrived at that time, and is otherwise in state $\ i\ne0\ $ at that time if the last customer arrived at time $\ n-i\ $.
-If the state at time $\ n\ $ is $\ 0\ $ then there is a probability of $\ 0.2\ $ that the next customer will arrive at time $\ n+1\ $, in which case the chain will remain in state $\ 0\ $, and a probability of $\ 0.8\ $ that the customer will arrive at either time $\ n+3\ $ or $\ n+4\ $, in which case the state at time $\ n+1\ $ will be $\ 1\ $.
Thus we have -\begin{align} -p_{00}&=0.2\\ -p_{01}&=0.8 -\end{align} -If the chain is in state $\ 1\ $ at time $\ n\ $, then the last customer arrived at time $\ n-1\ $ and none arrived at time $\ n\ $, so the next one won't arrive until either time $\ n+2\ $ or $\ n+3\ $, and the state at time $\ n+1\ $ will be $\ 2\ $ with probability $\ 1\ $. That is, -$$ -p_{12}=1\ . -$$ -If the chain is in state $\ 2\ $ at time $\ n\ $, then the last customer arrived at time $\ n-2\ $ and none arrived at time $\ n\ $. There is then a (conditional) probability of $\ \frac{0.3}{0.8}\ $ that the next customer will arrive at time $\ n+1\ $, in which case the state at time $ n+1\ $ will be $\ 0\ $, and a probability of $\ \frac{0.5}{0.8}\ $ that the customer will arrive at time $\ n+2\ $, in which case the state at time $\ n+1\ $ will be $\ 3\ $. Thus we have -\begin{align} -p_{20}&=\frac{0.3}{0.8}\\ -p_{23}&=\frac{0.5}{0.8}\ . -\end{align} -Here, we must condition our probabilities on the event that the time between the arrival of the last customer and the next was not $\ 1\ $ time unit, because we know that that event has occured. -If the chain is in state $\ 3\ $ at time $\ n\ $, then the last customer arrived at time $\ n-3\ $ and none arrived at time $\ n\ $, so the next one must arrive at time $\ n+1\ $, and the chain will then be in state $\ 0\ $. Thus we have -$$ -p_{30}=1\ . -$$ -Putting all this together, we get the following transition matrix for the Markov chain. -$$ -P=\pmatrix{0.2&0.8&0&0\\ - 0&0&1&0\\ - \frac{0.3}{0.8}&0&0&\frac{0.5}{0.8}\\ - 1&0&0&0}\ . -$$<|endoftext|> -TITLE: Game-winning strategy -QUESTION [6 upvotes]: Player A and Player B are playing a turn-based game. At the beginning of the game there are $N(N \ge 3)$ points in a plane. In each turn one of the players chooses exactly $3$ different points and he connects them with a closed curve line (He can leave as many points as he wants inside/outside of the closed curve). 
The only other restriction of the game is that none of the curves may intersect or touch each other (therefore you can't choose a point that's already lying on a curve). After a finite number of moves it is not possible to choose $3$ different points without violating the rules, hence the game ends. The player who drew a closed curve last is the winner. For which $N$ can Player A be sure that he will win, no matter how Player B plays? - -I've managed to partially solve this problem, as I have proved that for every odd number Player A can always win the game. The strategy is to choose $3$ points and connect them with a closed curve such that the number of points encircled by the curve is the same as the number of points outside of the curve. After that, in each turn Player A repeats what Player B does, so by symmetry he is guaranteed to finish last and hence win. -On the other hand this strategy doesn't work for even numbers. By brute force, checking every possible game situation, I found out that Player A can't win when $N=8,14,20$ (I did this up to 20, as the calculations get more complicated as the number of points grows). This makes me conjecture that Player A can't win when $N=6k+2$, unless Player B makes a mistake. -On each turn the points are split into two new "disjoint" sets. Actually after $K$ turns we have $K+1$ "disjoint" sets, although some of them can be empty. These sets of points can be treated as separate games, but the problem is that it is sometimes better to lose in some of those separate games in order to win in the last one. - -REPLY [4 votes]: Disclaimer: A lot of this post was copied from my answer to a similar sort of game. -Summary -It turns out that this game is very closely related to the Octal Game with code "0.007". Using the Sprague-Grundy Theorem combined with a winning strategy for Nim, it is relatively straightforward to analyze small games like this (e.g.
your game for small $n$), and there are theorems which give efficient algorithms to learn about this in the medium range. Some Octal games can be efficiently solved even in large cases, but this particular game is not one of them. - -It turns out that $B$ has a winning strategy for $N\in\{0, 1, 2, 8, 14, 24, 32, 34, 46, 56, 66, 78, 88, 100, 112, 120, 132, 134, 164, 172, 186, 196, 204, 284, 292, 304, 358, 1048, 2504, 2754, 2914, 3054, 3078, 7252, 7358, 7868, 16170\}$ and $A$ has a winning strategy in all other cases up to $2^{28}-1$. But I believe the general result is still open. - - -Connecting this game to existing theory -First note that a curve should not be allowed to pass through points, for otherwise drawing a curve through all of the points would always be a winning move. To connect this game to existing general theory, we can reconceptualize it in the way that Lærne did: After you draw a curve, you remove three points from a component, and split the remainder into two components (where one or both of those may be empty). That is traditionally encoded in a "number" with Octal digits. In this case, the code is $0.007$, where the $0$s indicate that you can't remove only one or two points, and the three bits of the $7$ indicate that you can leave zero or one or two components when you remove three points from a component. - -Who wins? -Since the original question was "who has a winning strategy?", rather than "what is the winning strategy?", I won't re-present an introduction to the relevant theorems in detail. -If you don't already know about the strategy for Nim or the Sprague-Grundy Theorem and how they can be applied to similar games, you can read one or more of the following: - -Tom Ferguson's class notes on the subject -MJD's notes in their tidy blog post about using this theory to solve the game in the question for $n$ up through $23$.
-Lim Chu Wee's series of blog posts which are also class notes building up the relevant theory (posts I through V) -The lecture notes Misère Games and Misère Quotients by Aaron N. Siegel through page 10. -My own long-winded series of blog posts building up the relevant theory (posts I.1 through I.6). -Chapter 7 of the undergraduate textbook "Lessons in Play: An Introduction to Combinatorial Game Theory" by Albert, Nowakowski, and Wolfe (although a bit of reading of other chapters may be needed first). - -Very briefly, positions in games like this act in combinations like single heaps of Nim, where the size of the corresponding Nim heap is the least number that is not the size of a Nim heap corresponding to a position you can move to (the "mex" rule). The strategy for Nim tells you that combining games with known equivalent Nim heaps boils down to bitwise XOR (aka "adding in base $2$ without carrying"). -Because a position in this game is a combination of separate components, and the Grundy values of components combine by XOR, it suffices to know the Grundy/Nim values for a single "heap" (in our context, a single component of points). There are tricks to make this calculation more efficient than the naive method, some of which are covered in the graduate textbook "Combinatorial Game Theory" by Aaron N. Siegel. For some similar games, the Nim values are known to be eventually periodic (by a theorem that says they will be if they look periodic long enough), but according to Flammenkamp, for $0.007$, $2^{28}$ values have been calculated with no periodic behavior, although the values $N$ for which $B$ wins appear to stop at $16170$. -Aaron Siegel's program Combinatorial Game Suite allows efficient calculation of middling values (but probably wouldn't get up to $2^{28}$). For example, the lines hr:=game.heap.HeapRules.MakeRules("0.007") and hr.NimValues(100000) take about a minute and a half on my machine to produce the list of Nim values for $N=0$ to $N=99999$.
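The naive computation can also be sketched directly from the rules: for a component of $n$ points, a move removes $3$ points and leaves zero, one, or two nonempty components, and the Grundy value is the mex over the XORs of the options. A plain Python sketch (far slower than the specialized tools above, but enough to reproduce the small values):

```python
from functools import lru_cache

@lru_cache(maxsize=None)
def grundy(n):
    """Nim value of a single component of n points in octal game 0.007."""
    if n < 3:
        return 0  # fewer than 3 points: no legal move, Nim value 0
    options = set()
    if n == 3:
        options.add(0)  # remove all 3 points, leaving no component
    if n - 3 >= 1:
        options.add(grundy(n - 3))  # leave one component of n - 3 points
    for a in range(1, (n - 3) // 2 + 1):
        # split the remaining n - 3 points into two nonempty components
        options.add(grundy(a) ^ grundy(n - 3 - a))
    g = 0  # mex: least nonnegative integer not among the options
    while g in options:
        g += 1
    return g

print([grundy(n) for n in range(25)])
# [0, 0, 0, 1, 1, 1, 2, 2, 0, 3, 3, 1, 1, 1, 0, 4, 3, 3, 3, 2, 2, 2, 4, 4, 0]
```

$B$ wins a fresh game of $N$ points exactly when grundy(N) == 0, which reproduces $N=0,1,2,8,14,24,\dots$ from the list above.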
-For example, the Nim values from $N=0$ to $N=499$ are as follows (remember that $N=0$ corresponds to a win for $B$) $0,0,0,1,1,1,2,2,0,3,3,1,1,1,0,4,3,3,3,2,2,2,4,4,0,5,5,2,2,2,3,3,0,5,0,1,1,1,3,3,3,5,6,4,4,1,0,5,5,6,6,2,7,7,7,8,0,1,9,2,7,2,3,3,3,9,0,5,4,4,8,6,6,2,7,1,1,1,0,5,5,9,3,1,8,2,8,5,0,1,1,12,2,2,7,3,3,9,4,4,0,11,3,3,3,9,2,2,8,1,3,5,0,9,12,2,6,13,13,5,0,1,1,4,11,7,7,10,3,4,1,4,0,5,0,3,3,6,7,2,14,13,10,4,12,9,2,2,3,3,6,9,9,1,16,4,8,3,3,2,15,1,1,4,0,5,5,16,6,6,6,8,0,16,5,4,4,17,2,2,7,14,6,10,12,1,0,16,13,3,6,2,7,7,8,1,0,5,17,2,12,15,3,11,0,19,18,12,4,16,17,2,2,21,6,9,4,19,5,5,17,10,3,6,19,2,7,8,4,1,9,12,7,2,13,6,3,19,5,9,4,8,8,17,17,2,15,18,1,1,8,5,21,16,21,3,19,19,13,5,18,1,4,17,7,2,7,6,3,19,12,5,5,16,16,6,17,19,7,7,18,1,4,17,0,9,16,3,3,14,13,22,0,1,15,24,17,2,6,18,3,4,19,19,0,8,21,16,3,15,7,26,18,13,1,1,17,9,2,21,2,6,22,19,9,5,16,4,16,20,3,7,18,23,22,8,20,5,16,21,15,6,10,19,18,18,18,4,4,17,17,7,2,3,23,19,9,5,0,16,16,3,17,30,2,18,18,8,4,17,17,9,27,6,10,19,19,14,9,9,4,20,17,14,11,7,18,6,19,19,5,13,16,16,10,6,19,19,23,18,4,4,17,12,12,14,10,6,3,19,5,9,5,21,16,20,6,7,7,18,30,13,13,17,12,21,15,10,3,19,22,18,8,4,32,17,17,11,14,6,26,24,12,5,9,16,16,6,7,7,7,18,18,8,4,17,20,7,16,10,10,22,19,22,9,23,4,13,17,20,7,11,23,23,4,5,9,5,16,16,10,17,10,22,18,23,8,4,17,17,20,16,32,13,19,19,33,5,5,24$<|endoftext|> -TITLE: Every inverse semigroup is a group -QUESTION [5 upvotes]: The Wikipedia page about inverse semigroups defines them as follows: - -In mathematics, an inverse semigroup (occasionally called an inversion semigroup) $S$ is a semigroup in which every element $x$ in $S$ has a unique inverse $y$ in $S$ in the sense that $x = xyx$ and $y = yxy$. - -There is a question on this site (A semigroup $X$ is a group iff for every $g\in X$, $\exists! x\in X$ such that $gxg = g$) whose answer is "A nonempty semigroup $S$ is a group iff for every $x\in S$ there is a unique $y\in S$ such that $xyx=x$." -So it follows that every nonempty inverse semigroup is a group. 
It seems weird that this fact is not listed on Wikipedia, and that some literature seems to exist about inverse semigroups. Am I misunderstanding the definitions? - -REPLY [2 votes]: Let's consider the canonical example of an inverse semigroup that J.-E. Pin mentioned in his answer, the semigroup of partial bijections on a set. For concreteness, let's take our set to be $[3]=\{1,2,3\}$ and call our resulting semigroup $S$. -Consider the element $f:\{1,2\}\rightarrow\{2,3\}$ defined by $f(1)=2$ and $f(2)=3$. Then $f$ has a unique inverse $f^*:\{2,3\}\rightarrow\{1,2\}$ defined by $f^*(2)=1$ and $f^*(3)=2$. You can easily verify that -$$f\circ f^*\circ f = f,\ \ \ \ \ \text{and}\ \ \ \ f^*\circ f \circ f^* = f^*,$$ -and indeed, you can verify that $f^*$ is the unique element of $S$ which satisfies both these conditions. But there are other elements which satisfy each of the two conditions individually. For example, $f^{**}:[3]\rightarrow [3]$ defined by $f^{**}(1)=3$, $f^{**}(2)=1$, and $f^{**}(3)=2$. Then we also have -$$f\circ f^{**} \circ f = f,$$ -but we no longer have -$$f^{**} \circ f \circ f^{**} = f^{**}.$$ -In fact, $f^{**} \circ f \circ f^{**}$ is the restriction of $f^{**}$ to $\{2,3\}$. -So there is a distinct difference between: -i. A semigroup $S$ such that for every $x\in S$, there exists a unique $y\in S$ such that $xyx=x$. -ii. A semigroup $S$ such that for every $x\in S$, there exists a unique $y\in S$ such that both $xyx=x$ and $yxy=y$. -And condition ii is strictly weaker than condition i. You can prove that any semigroup which satisfies condition i is a group. You cannot do the same for condition ii, which defines inverse semigroups. The semigroup of partial bijections is an explicit example of this.<|endoftext|> -TITLE: Probability of having complex eigenvalue? -QUESTION [6 upvotes]: Let $A$ be a real $n \times n$ matrix with coefficients randomly chosen from the uniform distribution on $[-1,1]$.
What's the probability that $A$ has a complex eigenvalue with non-zero imaginary part? -If you'd like to know the motivation, I've been doing QR factorization and I notice that the algorithm seems to fail at times, producing blocks along the main diagonal but zero elsewhere. While I'm sure the fault lies with my code, it got me interested in the question. Props to you if you can generalize from the uniform distribution to any probability distribution. - -REPLY [5 votes]: Let $p_n$ be the probability that all the eigenvalues of $A$ are real (so the required probability is $1-p_n$). The case $n=2$ is not difficult when the entries $a,b,c,e$ follow $U[-1,1]$. The probability that the roots are real is -$p_2=\dfrac{1}{32}\int\int\int\int_{[-1,1]^4} \operatorname{signum}((a-e)^2+4bc)+1\;\; da\; db\; dc\; de\approx 0.6805$. Yet, we will see that $p_n$ decreases very quickly when $n$ increases. -This question is about the number $N$ of real roots of a real polynomial $\sum_{i=0}^na_ix^i$. Let $E_n(N)$ be the expected number of real roots of such a polynomial. In the following reference (dated 1994) -http://www-math.mit.edu/~edelman/publications/how_many_zeros.pdf -it is proved that i) if the $(a_i)$ follow $N(0,1)$, then $E_n(N)\sim \dfrac{2}{\pi}\log(n)$ -In fact it is a result from Kac, and the same estimation of $E_n(N)$ works when the $(a_i)$ follow $U[-1,1]$ (Kac) or follow $U\{-1,1\}$ (Erdős). Note that the proof concerning the law $U[-1,1]$ is much more difficult than the one concerning $N(0,1)$. -ii) if the $(a_i)$ follow $N(0,\binom{n}{i})$, then $E_n(N)\sim \sqrt{n}$. -Note that when we consider $A\in M_n(\mathbb{R})$, where the $(a_{ij})$ follow $N(0,1)$, the polynomial $\det(A-xI)$ satisfies condition ii) rather than condition i). Anyway, $E_n(N)=o(n)$. -Now let $V_n(N)$ be the variance of $N$; Maslova proved, in case i), the following estimate of the variance of $N$: $V_n(N)\sim \dfrac{4}{\pi}(1-2/\pi)\log(n)$. Thus, in case i), $\log(p_n)$ behaves somewhat like $-n^{2}$.
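As an aside, the value $p_2\approx 0.6805$ is easy to check by simulation: the eigenvalues of $\pmatrix{a&b\\c&e}$ are real exactly when the discriminant $(a-e)^2+4bc$ is nonnegative. A quick Monte Carlo sketch (plain Python; the sample size and seed are arbitrary choices of mine):

```python
import random

random.seed(1)  # fixed seed so the run is reproducible
trials = 200_000
real = 0
for _ in range(trials):
    # entries of [[a, b], [c, e]], each uniform on [-1, 1]
    a, b, c, e = (random.uniform(-1, 1) for _ in range(4))
    # eigenvalues are real iff the discriminant (a - e)^2 + 4bc is >= 0
    if (a - e) ** 2 + 4 * b * c >= 0:
        real += 1
print(real / trials)  # close to 0.6805
```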
In case ii), and when we consider the characteristic polynomial of a random matrix, one expects to find similar results. -EDIT. Assume that $A\in M_n(\mathbb{R})$, where the $(a_{ij})$ are iid and follow $N(0,1)$, and let $N$ be the number of real eigenvalues of $A$. Then the estimates of the mean and the variance are $E_n(N)\sim\sqrt{\dfrac{2n}{\pi}}$ and $V_n(N)\sim (2-\sqrt{2})\sqrt{\dfrac{2n}{\pi}}$ (due to Edelman, Kostlan, Forrester, Nagao, cf. Theorem 16 in [1]). -[1] Tao and Vu http://arxiv.org/abs/1206.1893 . A monumental 97-page paper where you will find (among other things) a generalization of the previous result. -Assume that $n$ is large; if one approximates the distribution of $N$ by a normal one, then some calculations seem to show that $p_n$, the probability that $A$ has only real eigenvalues, satisfies $\log(p_n)=-n^{\frac{3}{2}+o(1)}$. -Happy Rounded Pi Day to all.<|endoftext|> -TITLE: Percentile Symbol - does it exist or not? -QUESTION [13 upvotes]: Is there a standard symbol for percentile in mathematics, much like % is used for percentage? I have tried to find the right answer but keep getting conflicting answers and logic. - -REPLY [8 votes]: As Neil mentions in his comment, $P_i$ is a common notation to denote the $i$-th percentile. -The Wikipedia page on Percentile doesn't actually mention the notation as far as I can see but denotes quartiles as $Q_1$, $Q_2$, and $Q_3$ several times, and from this it's logical that percentiles would be denoted by $P_i$ (and likewise other quantiles with their respective character in the same way). -A real world example of this notation (even if it's not subscript) is how you request percentiles in the Amazon CloudWatch API, which follows the pattern p(\d{1,2}(\.\d{0,2})?|100) - or p5 for the 5th percentile, p70 for the 70th percentile, p50.36 for the 50.36th percentile, and p100 for the maximum value.<|endoftext|> -TITLE: How do you "linearize" a differential operator to get its symbol?
-QUESTION [6 upvotes]: Assuming I got some non-linear differential operator $D$, given by $(\partial_xu(x,t))^2-\partial_tu(x,t)=0$, what would its linearization (around a solution $\tilde u$) be? -So far I thought of something like a Fréchet derivative, i.e. $Du = D\tilde u + A(u-\tilde u)+o(u-\tilde u)$, where $D\tilde u$ would vanish because $\tilde u$ is a solution. However, I don't see how this $A$ could be written as a "usual" DE so I can determine its symbol. -After some research, I guess this question could also be seen as a follow-up question to this comment. - -REPLY [9 votes]: If you want to linearise a (partial) differential equation around a solution $\tilde{u}$, your goal is to investigate how the equation (or, to be more precise, its solutions) behave 'around' $\tilde{u}$. Therefore, the most straightforward thing to do is to substitute -\begin{equation} - u = \tilde{u} + \epsilon v, -\end{equation} -where $\epsilon$ is some small number. We call the term $\epsilon v$ a perturbation of $\tilde{u}$. -When you perform this substitution in your PDE, you get -\begin{equation} - (\partial_x \tilde{u})^2 + 2 \epsilon (\partial_x \tilde{u})(\partial_x v) + \epsilon^2 (\partial_x v)^2 - \partial_t \tilde{u} - \epsilon \partial_t v = 0, -\end{equation} -which we can rewrite as -\begin{equation} - \left[(\partial_x \tilde{u})^2 - \partial_t \tilde{u}\right] + \epsilon\left[2 (\partial_x \tilde{u})(\partial_x v) - \partial_t v\right] + \epsilon^2 (\partial_x v)^2 = 0 -\end{equation} -Because $\tilde{u}$ is assumed to be a solution to the original PDE, the first term $\left[(\partial_x \tilde{u})^2 - \partial_t \tilde{u}\right]$ is zero by definition. Now, the 'linearisation' part of the procedure comes from the observation that if $\epsilon$ is very small (i.e. very close to zero), then $\epsilon^2$ is much smaller than $\epsilon$. 
Therefore, it seems a good approximation to neglect the term $\epsilon^2 (\partial_x v)^2$, because it is much smaller than the term $\epsilon\left[2 (\partial_x \tilde{u})(\partial_x v) - \partial_t v\right]$. Actually, you can take $\epsilon$ as small as you want, or in other words, study the system as close to $\tilde{u}$ as you want. The 'linearisation' of the original PDE is therefore given by the PDE -\begin{equation} -2 (\partial_x \tilde{u})(\partial_x v) - \partial_t v = 0. -\end{equation} -If we rewrite this in terms of a differential operator, we get -\begin{equation} - A(\tilde{u}) v = 0, -\end{equation} -with the operator $A(\tilde{u})$ (which depends explicitly on $\tilde{u}$ !) given by -\begin{equation} - A(\tilde{u}) = 2 (\partial_x \tilde{u})\,\partial_x - \partial_t. -\end{equation} -Note that $A$ is a linear operator, in the sense that it acts linearly on $v$.<|endoftext|> -TITLE: Is every infinite set equipotent to a field? -QUESTION [8 upvotes]: For example, $\mathbb N$ is equipotent to $\mathbb Q$ which is a field. -$\mathbb R$ is equipotent to itself, which is a field. -But what about $\mathbb R^{\mathbb R}$, $P(\mathbb R^{\mathbb R})$ etc.? - -REPLY [3 votes]: I think this answer by Gregory Grant is in some ways better than any relying on the Löwenheim–Skolem theorem, since it gives an explicit construction of a field of arbitrary infinite cardinality.<|endoftext|> -TITLE: The category of locally $P$ spaces -QUESTION [8 upvotes]: Let $P$ be a class of topological spaces (for example, compact spaces). -The class of locally $P$ spaces consists of those spaces in which every point has a neighborhood basis consisting of $P$ spaces. The class of weakly locally $P$ spaces consists of those spaces in which every point has a neighborhood in $P$. -Question. What are some categorical properties of the category of locally $P$ spaces which are not shared by the category of weakly locally $P$ spaces, or vice versa? 
If necessary, assume that $P$ is closed under suitable operations. -This could shed some light on the question which of the two definitions of locally $P$ spaces is more "natural". (And you already might guess my preference.) - -REPLY [5 votes]: Let's start with the obvious: if a topological space is $P$ then it is weakly locally $P$ but not necessarily locally $P$. For example, every connected space is weakly locally connected but not necessarily locally connected. On the other hand, more or less by definition, a topological space is locally $P$ if and only if every open subspace of it is weakly locally $P$. Thus, locally $P$ coincides with weakly locally $P$ if and only if every open subspace of every $P$ space is weakly locally $P$. For example, a topological space is locally $T_1$ (resp. Hausdorff) if and only if it is weakly locally $T_1$ (resp. Hausdorff). The best situation is when weakly locally $P$ and locally $P$ coincide. -It is easy to see that finitary products of weakly locally $P$ spaces are weakly locally $P$ if and only if finitary products of $P$ spaces are weakly locally $P$. Moreover, if finitary products of weakly locally $P$ spaces are weakly locally $P$, then finitary products of locally $P$ spaces are locally $P$. The situation with pullbacks seems more complicated. In the case where open subspaces of $P$ spaces are $P$, one can use the usual patching argument to show that the category of locally $P$ spaces (= weakly locally $P$ spaces here) is closed under pullbacks in $\mathbf{Top}$ if the category of $P$ spaces is. A more careful version of this argument allows us to drop the hypothesis that open subspaces of $P$ spaces are $P$, but then we can only conclude for locally $P$ spaces. It is not obvious to me whether or not there is a way to make the argument work for weakly locally $P$ spaces.<|endoftext|> -TITLE: What does strength refer to in mathematics? 
-QUESTION [63 upvotes]: My professors are always saying, "This theorem is strong" or "There is a way to make a much stronger version of this result" or things like that. In my mind, a strong theorem is able to tell you a lot of important information about something, but this does not seem to be what they mean. What is strength? Is it a formal idea? - -REPLY [2 votes]: Terry Tao (Ask yourself dumb questions – and answer them!): - -For instance, given a standard lemma in a subject, you can ask what happens if you delete a hypothesis, or attempt to strengthen the conclusion - -To strengthen a conclusion is to say more. -We could have some lemma (or theorem) that says $p \to q$. To attempt to strengthen the conclusion is to see if we can say more than just $q$, so we would try to see if we could say $p \to q_1$ where $q_1$ is some proposition s.t. $q_1 \to q$. -For instance $x=1$ is stronger than $x=0$ or $x=1$. The former implies the latter. -So if we have some assumption that implies the conclusion '$x=0$ or $x=1$', oh let's say, '$x^2 = x$', we would try to see if we could strengthen the conclusion to '$x=1$'. We cannot, because it is possible that '$x \ne 1$' while '$x^2 = x$' (namely when '$x=0$'). -Let's try using a different assumption: -It is true that '$x+1=2$' implies '$x=0$ or $x=1$'. Here, we can strengthen the conclusion to $x=1$.<|endoftext|> -TITLE: Is $e^e$ irrational? -QUESTION [15 upvotes]: I was surprised to find out that the following question is open: -Is $e^e$ transcendental? -According to Wikipedia, a positive answer to Schanuel's conjecture implies "yes" to the above question. -My questions: -1) Can we at least prove that $e^e$ is irrational? Or is this also open? -2) Given that $e^e$ is irrational, does it follow that $e^e$ is transcendental? -Added comment: For (2) I mean "Does the knowledge that $e^e$ is irrational help with the proof that $e^e$ is transcendental?"
- -REPLY [9 votes]: About (1), it is still unknown whether $e^e$ is irrational or not, according to Wikipedia. -https://en.wikipedia.org/wiki/Irrational_number#Open_questions -Even more interestingly, according to Gelfond's Theorem, $a^b$ is transcendental (therefore irrational) if $a$ is algebraic (and $\not\in\{0,1\}$) and if $b$ is irrational and algebraic. -http://mathworld.wolfram.com/GelfondsTheorem.html -This theorem can be used to prove that $e^\pi$ is transcendental and therefore irrational.<|endoftext|> -TITLE: Proof of every finite group is finitely presented. -QUESTION [8 upvotes]: I'm reading the proof that every finite group is finitely presented from Dummit's Abstract Algebra, but there's a part that I don't understand. In the proof below, what are the elements $\tilde{g_i}$? I think they are the cosets $g_iN$, but how do we know that they generate $\tilde{G}$? And why does $|\tilde{G}|=|G|$ lead to $N=\ker \pi$? And finally, how do we get the sufficient condition (ii) in the final sentence? -I really do not understand these parts and I'd greatly appreciate any explanations. - -REPLY [3 votes]: I'll try to clarify the proof, though you might have done it already (I was struggling with this theorem too, so the answer could be useful for someone). -First of all, let's check that $N$ $\leq$ ker $\pi$. It is true since $N$ is the intersection of all normal subgroups containing $R_0$, and ker $\pi$ is one of such subgroups. Now we apply the following theorem. Let $h:A\rightarrow B$ be a homomorphism and $A$ $\rhd$ $X$ $\leq$ ker $h$; then for the projection $p:A\rightarrow A/X$ there is a unique homomorphism $f$ such that the following diagram commutes ($h=f\circ p$): -$\hskip2.8in$ -Let's say that in our case there is a unique homomorphism $\phi$ such that the following diagram commutes: -$\hskip2.8in$ -Now consider the cosets $g_1N,...,g_nN\in F(S)/N$. Note that they are pairwise distinct.
Indeed, we have $i\neq j\rightarrow\pi(g_i)=g_i\neq g_j=\pi(g_j)$, and the commutativity implies $\pi(g_i)=\phi(p(g_i))=\phi(g_iN)$, $\pi(g_j)=\phi(p(g_j))=\phi(g_jN)$. Thus $i\neq j\rightarrow\phi(g_iN)\neq\phi(g_jN)$. So it's not the case that $g_iN=g_jN$ for $i\neq j$. -It's easy to verify that $F(S)/N=\langle g_1N,...,g_nN\rangle$. With this, if we prove that $\{g_1N,...,g_nN\}$ is closed under multiplication and inversion, then we'll get $F(S)/N=\{g_1N,...,g_nN\}$. -Let's begin with multiplication. We have $g_iNg_jN=g_ig_jN$. Let $g_ig_j=g_k$; then $g_ig_jN=g_kN$. Indeed, $g_ig_jg_k^{-1}\in N$, thus $g_ig_j\in Ng_k=g_kN$. Next we have $(g_jN)^{-1}=g_j^{-1}N$. Let $g_j^{-1}=g_k$; then $g_j^{-1}N=g_kN$. Indeed, let $g_i=e_G$; then $g_ig_kg_j\in N$. So $g_ig_k\in Ng_j^{-1}=g_j^{-1}N$. This implies $g_j^{-1}N=g_ig_kN$. But $g_ig_kN=g_kN$ as we have proved. -We conclude that $F(S)/N$ consists of $n$ distinct elements: $\{g_1N,...,g_nN\}$. The commutativity of our diagram implies $\phi(g_iN)=g_i$. This means $\phi$ is injective. Now, if we assume that ker $\pi\setminus N$ contains some element $a$, then $aN\neq N$ and $\phi(aN)=e_G=\phi(N)$. This gives a contradiction with the injectivity. Thus we have $N$ $=$ ker $\pi$ and $G\cong(S|R_0)$.
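The presentation used in this proof — every element of $G$ as a generator, and every entry $g_ig_j=g_k$ of the multiplication table as a relator $g_ig_jg_k^{-1}$ — can be checked on a tiny example by coset enumeration. A sketch assuming SymPy's fp_groups module (the choice of $G=\mathbb{Z}_3$ and the generator names a0, a1, a2 are mine, not from the proof):

```python
from sympy.combinatorics.free_groups import free_group
from sympy.combinatorics.fp_groups import FpGroup

# G = Z_3 = {g_0, g_1, g_2} under addition mod 3, so g_i g_j = g_{(i+j) % 3}.
n = 3
F, a0, a1, a2 = free_group("a0, a1, a2")
gens = [a0, a1, a2]

# R_0 = { g_i g_j g_k^{-1} : g_i g_j = g_k }, one relator per table entry.
relators = [gens[i] * gens[j] * gens[(i + j) % n] ** -1
            for i in range(n) for j in range(n)]

# F(S)/N is the finitely presented group on these relators; Todd-Coxeter
# coset enumeration confirms the quotient has exactly |G| = 3 elements.
G = FpGroup(F, relators)
print(G.order())  # 3
```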