diff --git "a/stack-exchange/math_stack_exchange/shard_10.txt" "b/stack-exchange/math_stack_exchange/shard_10.txt"
deleted file mode 100644
--- "a/stack-exchange/math_stack_exchange/shard_10.txt"
+++ /dev/null
@@ -1,20537 +0,0 @@
-TITLE: Showing that if $R$ is a commutative ring and $M$ an $R$-module, then $M \otimes_R (R/\mathfrak m) \cong M / \mathfrak m M$.
-QUESTION [5 upvotes]: Let $R$ be a local ring, and let $\mathfrak m$ be the maximal ideal of $R$. Let $M$ be an $R$-module. I understand that $M \otimes_R (R / \mathfrak m)$ is isomorphic to $M / \mathfrak m M$, but I verified this directly by defining a map $M \to M \otimes_R (R / \mathfrak m)$ with kernel $\mathfrak m M$. However I have heard that there is a way to show these are isomorphic using exact sequences and using exactness properties of the tensor product, but I am not sure how to do this. Can anyone explain this approach?
-Also can the statement $M \otimes_R (R / \mathfrak m) \cong M / \mathfrak m M$ be generalised at all to non-local rings?

-REPLY [7 votes]: Moreover, if $I$ is a right ideal of a ring $R$ (possibly noncommutative) and $M$ a left $R$-module, then $M/IM\cong R/I\otimes_R M$.<|endoftext|>
-TITLE: Is a 3D Mandelbrot-esque fractal analogue possible?
-QUESTION [9 upvotes]: I understand that (unlike complex numbers) there's no consistent 3-dimensional number system (even 4D loses some nice properties).
-Regardless, I'm wondering if there might be a 'trick' to create a 3D Mandelbrot which has detail running through each dimension without any discontinuities or 'smeared' sections. In 2008 I wrote a short article discussing the possibility, and went on to help discover the Mandelbulb in 2009.
-As mentioned in those articles, variations on the quaternion Julia 4D fractal unfortunately resemble 'whipped cream' and have detail running through only 1 or 2 dimensions. Other attempts at a 3D analogue are mere extrusions or lathes of a 2D Mandelbrot. The real thing (if it exists) would look MUCH more interesting and beautiful.
-Even the new 'Mandelbulb' isn't perfect as it too contains 'whipped cream' and has less variety than even the 2D Mandelbrot.
-Is a Mandelbrot-looking equivalent in 3D space even remotely possible? Here's an artist's impression (created by Marco Vernaglione):

-REPLY [7 votes]: I was wondering something similar; I was trying to find a 3D mandelbrot. Not a mandelbulb, but a 3D equivalent of the mandelbrot set. When searching for it, I came across some 3D 'representations,' but they didn't really appear to be a true 3D mandelbrot set.
-Then I accidentally made a 3D mandelbrot, in the program Mandelbulber. I made a hybrid fractal of the mandelbulb, and the 'general mandelbox fold' and came up with this:

-The fun thing about this model is I can enter the X and Y coordinates from the traditional mandelbrot set, and it will zoom in on the same portion of the 3D model. You can even see the dendrites, and self-similar mandelbrots on the dendrites, exactly where they would be in the 2D set.
-I'm not completely sure this is what you were looking for; I have been having trouble finding Julia sets in the 3D model where I find them in the 2D model, but that could simply be because of the quality settings I'm using. Either way, this is the closest match to a true 3D mandelbrot I've come across. Any others have strange artifacts or artistic fluff that makes me wonder if it's just the 2D set with a lot of pretty effects.
-Edit: After playing around more, this 3D mandelbrot is puzzling. 
With certain settings, I can see the outline of the 2D mandelbrot from a top-down view. If I use too many iterations, the spiraling branches and arms visible in the 2D set disappear and I only see smaller and smaller self-similar mandelbrots. Then if I set certain settings (mainly "DE detection") to too high a quality on a point where you expect spirals and intricate branches, I am only shown an "oil spill". Then on other combinations of quality settings, I see wispy clouds of semi-transparent smoke over the mandelbrot set itself.

-It seems most obvious that with the best of quality, the features from the 2D mandelbrot set are clouded and usually completely hidden when viewing on a microscopic scale.
-I would really love to hear more input from someone (especially the question asker). I have a feeling this probably isn't what he was looking for since he appears to be somewhat of an expert in this area, and he hasn't found a 3D mandelbrot-esque figure. I only found this by chance, but it doesn't look like anything he has dismissed yet, so I'd love it if my dumb luck happened to find a way to view a 'true' 3D mandelbrot.<|endoftext|>
-TITLE: Positivity of a determinant
-QUESTION [8 upvotes]: I'm stuck trying to prove the following exercise: Given real numbers $x_1,\ldots,x_n$ and $y_1,\ldots,y_n$, show that
-$$
-\det(e^{\large{x_iy_j}})_{i,j=1}^n>0
-$$
-provided that $x_1<\cdots<x_n$ and $y_1<\cdots<y_n$.<|endoftext|>
-TITLE: Integral with Bessel function
-QUESTION [7 upvotes]: Let $n$ be half an odd integer, say $n=k+1/2, k \in \mathbb{N}$.
-Let $q\geq 1$. I would like to calculate (or approximate) the following integral:
-$$
-\int_0^{\infty}\left(\sqrt{\frac{\pi}{2}}\cdot 1\cdot 3\cdot 5\cdots (2k+1) \frac{J_{k+\frac 12}(t)}{t^{k+ \frac 12}}\right)^q t\ dt.
-$$
-Any ideas or references will be very helpful.
-Thank you.

-REPLY [3 votes]: Due to Rayleigh's formulas we have:
-$$\sqrt{\frac{\pi}{2}}\frac{J_{k+1/2}(t)}{t^{k+1/2}}=(-1)^k\left(\frac{1}{t}\frac{d}{dt}\right)^k \frac{\sin t}{t}\tag{1}$$
-and since:
-$$\frac{\sin t}{t}=\sum_{m=0}^{+\infty}\frac{(-1)^m\,t^{2m}}{(2m+1)!}$$
-we have:
-$$\left(\frac{1}{t}\frac{d}{dt}\right)\frac{\sin t}{t}=\sum_{m=1}^{+\infty}\frac{(-1)^m(2m)t^{2m-2}}{(2m+1)!}=(-1)\sum_{m=0}^{+\infty}\frac{(-1)^m (2m+2)t^{2m}}{(2m+3)!},$$
-$$\left(\frac{1}{t}\frac{d}{dt}\right)^k\frac{\sin t}{t}=(-1)^k\sum_{m=0}^{+\infty}\frac{(-1)^m (2m+2k)\cdot\ldots\cdot(2m+2)t^{2m}}{(2m+2k+1)!}$$
-so:
-$$\begin{eqnarray*}(2k+1)!!\cdot\sqrt{\frac{\pi}{2}}\frac{J_{k+1/2}(t)}{t^{k+1/2}}&=&\sum_{m=0}^{+\infty}\frac{(-1)^m (2m+2k)!!(2k+1)!!}{(2m+2k+1)!(2m)!!}\,t^{2m}\\&=&\sum_{m=0}^{+\infty}\frac{(-1)^m \binom{m+k}{m}}{\binom{2m+2k+1}{2m}}\cdot\frac{t^{2m}}{(2m)!}.\end{eqnarray*}\tag{2}$$
-Now a really good approximation for the LHS of $(2)$ is simply given by:
-$$(2k+1)!!\cdot\sqrt{\frac{\pi}{2}}\frac{J_{k+1/2}(t)}{t^{k+1/2}}\approx \exp\left(-\frac{t^2}{4k+6}\right).\tag{3}$$
-[Figure: the two sides of $(3)$ plotted for $k=3$]
-Hence the starting integral can be approximated by:

-$$\int_{0}^{+\infty}t\,\exp\left(-\frac{qt^2}{4k+6}\right)\,dt=\frac{2k+3}{q}.$$<|endoftext|>
-TITLE: Linear independence of roots over Q
-QUESTION [5 upvotes]: Let $p_1,\ldots,p_k$ be $k$ distinct primes (in $\mathbb{N}$) and $n>1$. Is it true that $[\mathbb{Q}(\sqrt[n]{p_1},\ldots,\sqrt[n]{p_k}):\mathbb{Q}]=n^k$? 
(all the roots are in $\mathbb{R}^+$)
-Iurie Boreico proved here that a linear combination $\sum q_i\sqrt[n]{a_i}$ with positive rational coefficients $q_i$ (and no $\sqrt[n]{a_i}\in\mathbb{Q}$) can't be rational, but this question seems to be more difficult.

-REPLY [10 votes]: Below are links to classical proofs. Nowadays such results are usually derived as special cases of results in Kummer Galois theory. See my post here for a very simple proof of the quadratic case.

-Besicovitch, A. S. $\ $ On the linear independence of fractional powers of integers.
-J. London Math. Soc. 15 (1940). 3-6. MR 2,33f 10.0X
-Let $\ a_i = b_i\ p_i,\ i=1,\ldots s,$ where the $p_i$ are $s$ different primes and
-the $b_i$ positive integers not divisible by any of them. The author proves
-by an inductive argument that, if $x_j$ are positive real roots of
-$x^{n_j} - a_j = 0,\ j=1,...,s,$ and $P(x_1,...,x_s)$ is a polynomial with
-rational coefficients and of degree not greater than $n_j - 1$ with respect
-to $x_j,$ then $P(x_1,...,x_s)$ can vanish only if all its coefficients vanish. $\quad$ Reviewed by W. Feller.

-Mordell, L. J. $\ $ On the linear independence of algebraic numbers.
-Pacific J. Math. 3 (1953). 625-630. MR 15,404e 10.0X
-Let $K$ be an algebraic number field and $x_1,\ldots,x_s$ roots of the equations
-$\ x_i^{n_i} = a_i\ (i=1,2,...,s)$ and suppose that (1) $K$ and all $x_i$ are real, or
-(2) $K$ includes all the $n_i$ th roots of unity, i.e. $ K(x_i)$ is a Kummer field.
-The following theorem is proved. A polynomial $P(x_1,...,x_s)$ with coefficients
-in $K$ and of degrees in $x_i$, less than $n_i$ for $i=1,2,\ldots s$, can vanish only if
-all its coefficients vanish, provided that the algebraic number field $K$ is such
-that there exists no relation of the form $\ x_1^{m_1}\ x_2^{m_2}\:\cdots\: x_s^{m_s} = a$, where $a$ is a number in $K$, unless $m_i \equiv 0 \pmod{n_i}\ (i=1,2,...,s)$. When $K$ is of the second type, the theorem was proved earlier by Hasse [Klassenkorpertheorie,
-Marburg, 1933, pp. 187--195] with the help of Galois groups. When $K$ is of the first
-type and $K$ is also the rational number field and the $a_i$ integers, the theorem was proved by Besicovitch in an elementary way. The author here uses a proof analogous to that used by Besicovitch [J. London Math. Soc. 15b, 3--6 (1940) these Rev. 2, 33]. $\quad$ Reviewed by H. Bergstrom.

-Siegel, Carl Ludwig. $\ $ Algebraische Abhaengigkeit von Wurzeln.
-Acta Arith. 21 (1972), 59-64. MR 46 #1760 12A99
-Two nonzero real numbers are said to be equivalent with respect to a real
-field $R$ if their ratio belongs to $R$. Each real number $r \ne 0$ determines
-a class $[r]$ under this equivalence relation, and these classes form a
-multiplicative abelian group $G$ with identity element $[1]$. If $r_1,\cdots,r_h$
-are nonzero real numbers such that $r_i^{n_i}\in R$ for some positive integers $n_i\ (i=1,...,h)$, denote by $G(r_1,...,r_h) = G_h$ the subgroup of $G$ generated by
-$[r_1],\ldots,[r_h]$ and by $R(r_1,\ldots,r_h) = R_h$ the algebraic extension field of
-$R = R_0$ obtained by the adjunction of $r_1,\ldots,r_h$. The central problem
-considered in this paper is to determine the degree and find a basis of $R_h$
-over $R$. Special cases of this problem have been considered earlier by A. S.
-Besicovitch [J. London Math. Soc. 15 (1940), 3-6; MR 2, 33] and by L. J.
-Mordell [Pacific J. Math. 3 (1953), 625-630; MR 15, 404]. 
The principal
-result of this paper is the following theorem: the degree of $R_h$ with respect
-to $R_{h-1}$ is equal to the index $j$ of $G_{h-1}$ in $G_h$, and the powers $r_h^t\ (t=0,1,...,j-1)$ form a basis of $R_h$ over $R_{h-1}$. Several interesting
-applications and examples of this result are discussed. $\quad$ Reviewed by H. S. Butts

-REPLY [6 votes]: Yes, it is true that $[\mathbb{Q}(\sqrt[n]{p_1},\ldots,\sqrt[n]{p_k}):\mathbb{Q}]=n^k$ and it was proved by Besicovitch (a student of A. A. Markov) in 1940.
-Although there are (almost) infinitely many books on field and Galois theory, the only book I know which proves Besicovitch's theorem (but only for odd $n$) is Roman's Field Theory (Theorem 14.3.2, page 305 of the second edition).<|endoftext|>
-TITLE: Solutions of $p!q! = r!$
-QUESTION [10 upvotes]: The title says it all, more or less. Obviously, there are infinitely many "trivial" integral solutions of the form $p=n, q=(n!-1), r= n!$. How many non-trivial solutions are there?
-I came across this about ten years ago; as far as I can tell, it hasn't appeared here before, so I thought that it might be of interest. I'm actually most interested in finding whether there was any progress made since Florian Luca's 2007 article.

-REPLY [4 votes]: The only citation of the Luca paper found by MathSciNet:

-Bhat, K. G.; Ramachandra, K.:
-A remark on factorials that are products of factorials. (Russian. Russian summary) Mat. Zametki 88 (2010), no. 3, 350–354; translation in
-Math. Notes 88 (2010), no. 3–4, 317–320<|endoftext|>
-TITLE: How to present a paper?
-QUESTION [7 upvotes]: I am going to present several papers to an audience. I have read through all the papers and have a clear idea about them.
-But this is the first time I have ever presented papers, and I am guessing that many of you have done this many times. So I am sure I can get a lot of good advice from you on how to do this.
-I have about 80 min to talk about 2-3 papers. They are about operator algebras and my audience are experts who have been working in this field for at least 20 years.
-Any advice would help!
-Thanks!

-REPLY [5 votes]: This sounds like a presentation for an oral examination. I'm going to give you some tips I found helpful in preparing my oral exam presentation, which was over one paper and several supporting papers. (I am assuming that your goal is to present papers, not proofs. I don't mean to say, don't include proofs when you're presenting several papers, but that if you focus on the details of any proofs, you run the risk of losing your audience's interest in a thicket of technicalities. rschwieb has a very good comment below.)
-Rehearse the talk beforehand. Make sure that you don't run over time.
-You have probably spent weeks reading these papers. You need to distill those weeks down into 80 minutes of clearly explained ideas. If I were giving the presentation, I would give the big-picture logical structure of the papers. I wouldn't present any details unless you are asked.
-I'd try as best I could to make that big picture logical structure a story. Mathematicians are just human, after all, and we follow stories better than logic. What motivates these papers? What common themes tie them together? The motivation is the problem you have to solve, the themes are the characters, the plot is how the themes come together to solve the problem.
-I think that finding common themes and turning them into a story is the most important part of this exercise. 
By doing this, you'll get to show off how discerningly you know the math (since you have to know it well in order to distill it down to common themes and then focus on those). You'll also have the pleasure of stepping back and looking at the big picture as you prepare, instead of losing yourself in yucky details. Finally, you'll also be able to tell a (hopefully semi-exciting) story instead of just reciting "theorem ... proof ... lemma ... proof ... theorem ... proof ..." while your audience dozes off.
-One other tip. Before starting, I'd outline the talk narrative and every five or ten minutes, as the flow of the material allows, remind the audience where we are in the narrative. This helps put the story in perspective, is a temporary check for anybody tuning out, and (frankly) reminds the audience that the talk isn't going to last forever. It also helps hammer the chief ideas home. In any speech, you do three things: tell them what you're going to tell them, tell them, and then tell them what you told them.<|endoftext|>
-TITLE: Proving that the halting problem is undecidable without reductions or diagonalization?
-QUESTION [12 upvotes]: I'm currently teaching a class on computability and recently covered the proof that the halting problem is undecidable. I know of three major proof avenues that can be used here:

-Diagonalization - show that if the halting problem were decidable, we could build a machine that, if run on itself, is forced to do the opposite of what it says it will do.
-Recursion theorem - show that if the halting problem were decidable, then we could build a machine that obtains its own description, determines whether it halts on its input, then does the opposite.
-Reduction - show that some other undecidable problem reduces to the halting problem.

-While I completely understand these proofs and think that they're mathematically beautiful, many of my students are having trouble with them because none of the proofs directly address why you can't tell if a program will halt. Nothing in the above proceeds along the lines of "computations can evolve in a way that is so complicated that we can't predict what will happen when we start them up" or "machines can't introspect on themselves at that level of detail." I often give these intuitions to my students when they ask why the result holds, but I'm not aware of any formal proofs of that form.
-Are there any proofs of the undecidability of the halting problem that directly explore why it's impossible for a program to decide what another program will do?
-Thanks!

-REPLY [3 votes]: I agree that the usual proofs are not very "informative", in the sense that they don't usually satisfy a student's desire to know why this is the case. A "fully informative" proof would presumably construct the object in question -- but obviously you can't construct an object to show that it doesn't exist! I wouldn't exclude the possibility of finding a "somewhat informative" proof, but I think such a proof, if it exists, might be complicated enough that a persuasive but non-mathematical argument might be better.
-Appealing to the idea you mentioned, that "computations can evolve in a way that is so complicated that we can't predict what will happen when we start them up", seems to be a good approach (possibly using the Collatz sequence as an example of a computation that, clearly, evolves in ways we don't yet understand very well.)
-A complementary approach might be to ask the student to consider program analyzers. 
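-(A side note first: the diagonal argument from the question's first bullet is compact enough to sketch in Python against a purely hypothetical decider halts(prog, inp) — hypothetical because no such function can exist, which is exactly the point. This sketch is an editorial illustration, with made-up names, not part of the original question or answer.)
-# Assume, for contradiction, someone hands us a total function
-# halts(prog, inp) that returns True iff prog(inp) eventually terminates.
-def paradox(prog):
-    # Do the opposite of whatever halts predicts for prog run on itself.
-    if halts(prog, prog):
-        while True:   # halts said "terminates", so loop forever
-            pass
-    return            # halts said "loops forever", so stop immediately
-
-# Feeding paradox its own source: halts(paradox, paradox) can return
-# neither True nor False without contradicting paradox's actual behavior.
-Returning to program analyzers, which make the same tension concrete with real-world software: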
These are programs that take a program as input, and output "yes" if they can prove it has some property (like halting), "no" if they can prove it doesn't, and "don't know" if they can't prove either. There are real-world examples of these programs, and it can be shown some are more sophisticated than others, and tying the argument to real software may help some students get a better conceptual grip on it.
-Now ask: what happens if we run one of these program analyzers on, say, the standard Python interpreter? The program analyzer can't truthfully say "yes" or "no", because it doesn't know what Python program the interpreter will be asked to run.
-Even if we stipulate that the script that the Python interpreter will be running be provided to the analyzer, the script itself could be an interpreter for some other programming language (ad infinitum).
-Even if we stipulate that the program being analyzed take no input, it might be a Python interpreter with an arbitrary Python script hardcoded in it. (I believe there are actual tools available for Python to do just this.)
-Now, this reasoning is not absolutely watertight, but the idea is to emphasize that you can make programs arbitrarily convoluted, and to try to get across, "oh, no matter how sophisticated my program analyzer is, there will always be some program that is so convoluted that it is out of its reach."<|endoftext|>
-TITLE: Isomorphisms involving localisation of graded rings
-QUESTION [5 upvotes]: I have been trying to establish an isomorphism concerning graded rings, and there is a last step that I'm confused about.
-Let $R$ be a $\Bbb{Z}$ - graded ring. Let $f$ be a homogeneous non-nilpotent element of degree $1$ in $R$. Let $R_f$ denote the localisation $S^{-1}R$ where $S = \{1,f,f^2,\ldots\}$. It is not hard to see that $R_f$ is also a graded ring and we denote by $(R_f)_0$ the degree zero component of $R_f$. Let $t$ be an indeterminate; I have established the following isomorphisms:

- The "?" on the bottom left corner of the diagram is something that I want to establish. Now firstly when I try to identify $\ker \pi \phi = \phi^{-1} (\ker \pi)$, I am curious to know if the ideal $(f-1)$ in $R_f$ is actually the extension of the ideal $(f-1)$ in $R$. Sorry for the bad notation, but perhaps I should call the one in $R_f$ say $(f-1)^e$ and the one in $R$ just $(f-1)$. Perhaps they are actually the same?
-The dotted arrows from "?" to $R_f/(f-1)$ indicate some map that I am trying to establish. I believe such a map is an isomorphism, and I want to prove this using the first isomorphism theorem. The problem is I don't know if $\pi \phi$ is surjective, so how can I conclude such an isomorphism exists? Perhaps it may not be true after all.
-Thanks.

-REPLY [3 votes]: Of course, your idea is right: Passing to the quotient where $f$ is unity isn't affected by localizing at $f$ first. Anyway, localizing commutes with quotients (see comments above) and we have a commutative diagram $$\newcommand{\da}[1]{\left\downarrow{\scriptstyle#1}\vphantom{\displaystyle\sum}\right.}
\begin{array}{c}
R & \longrightarrow & R_f \\
\da{\;}& & \da{\;}\\
R/(f-1) & \xrightarrow{\varphi} & R_f/(f-1)
\end{array},$$
-where $\varphi$ is the composition of canonical maps $R/(f-1)\xrightarrow{\psi} (R/(f-1))_f \;\tilde\to\; R_f/(f-1)$. To see that this is an isomorphism, you indeed could use the homomorphism theorems, but there is an easier way.
-The latter arrow is the canonical isomorphism and the former ($\psi$) is localizing. 
Hence, if we view $(R/(f-1))_f$ as $(R/(f-1))[u]/(fu-1)$, $\psi$ is just inclusion $R/(f-1)\hookrightarrow(R/(f-1))[u]$ and projection. On the other hand we have
-$$(R/(f-1))[u]/(fu-1) = (R/(f-1))[u]/(u-1) \cong R/(f-1),$$ where reading backwards, the isomorphism is nothing but inclusion $R/(f-1)\hookrightarrow(R/(f-1))[u]$ and projection. Hence there is the commuting diagram
-$$\newcommand{\da}[1]{\left\downarrow{\scriptstyle#1}\vphantom{\displaystyle\sum}\right.}
\begin{array}{c}
R/(f-1) & \xrightarrow{\psi} & (R/(f-1))_f \\
\;& \searrow & \da{\;}\\
\; & \; & (R/(f-1))[u]/(fu-1)
\end{array}$$
-in which $\searrow$ and $\downarrow$ are isomorphisms, so that $\psi$ is one as well, as desired.<|endoftext|>
-TITLE: Teenager solves Newton dynamics problem - where is the paper?
-QUESTION [76 upvotes]: From Ottawa Citizen (and all over, really):

-An Indian-born teenager has won a research award for solving a
- mathematical problem first posed by Sir Isaac Newton more than 300
- years ago that has baffled mathematicians ever since.
-The solution devised by Shouryya Ray, 16, makes it possible to
- calculate exactly the path of a projectile under gravity and subject
- to air resistance.

-This subject is of particular interest to me. I have been unable to locate his findings via the Internet. Where can I read his actual mathematical work?
-Edit:
-So has he written an actual paper, and if so, will anyone get to read it?

-REPLY [3 votes]: Today I found this on arxiv.org: Shouryya Ray Paper
-It's the official paper from Shouryya Ray:

-An analytic solution to the equations of the motion of a point
- mass with quadratic resistance and generalizations
- Shouryya Ray · Jochen Frohlich

-Why did he write it two years later than his original solution?<|endoftext|>
-TITLE: Number Game: 31 - Winning Strategy?
-QUESTION [9 upvotes]: My Maths teacher taught us how to play a game called 31 on Friday. Not once did my Maths teacher lose. I want to know why.
-I'll explain the game...
-31 is a game between two people.
-Let's say you've got a grid of numbers - 6 rows, 4 columns. Each column contains the numbers 1-6, such that the first row will contain only 1's, the second will contain only 2's and so on.
-Player one begins by choosing an entry from the grid, crossing it off and recording the number chosen. Player two then chooses another entry from the grid (could be the same number), crosses it off and adds this to the number previously recorded. Once an entry has been crossed off, it cannot be reused. The objective is to make the total reach 31 on your turn.
-I've established that if you make the total reach 24 on your turn, the next turn a number would have to be chosen which would not allow the total to exceed 30, and hence you would win on your next turn, providing that the entry required to reach 31 had not been crossed off. Furthermore, if you make the total reach 17 on your turn, the next turn a number would have to be chosen which would not allow the total to exceed 23, and hence you would reach 24 on your next turn, providing that the entry required to reach 24 had not been crossed off. Further "magic numbers" are hence 10 and 3.
-Using this "magic number" logic, I decided to attempt to beat my Maths teacher. He still beat me - I reached 24 no problem, but ran out of entries in a row (sorry - cannot remember which, think it was row 3), and so he was crowned the winner over yet another student.
-I'm really struggling to devise a foolproof method. Can you please assist me? 
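-(A computational aside before the strategy answer below: the game tree here is tiny, so one can brute-force it. This is a minimal editorial Python sketch with made-up function names; the convention that a player with no legal move loses is my assumption, since the question doesn't say what happens then.)
-from functools import lru_cache
-
-# counts[v-1] = how many copies of the value v (v = 1..6) are still uncrossed.
-@lru_cache(maxsize=None)
-def wins(counts, total):
-    # True iff the player to move can force a win from this position.
-    for v in range(1, 7):
-        if counts[v - 1] == 0 or total + v > 31:
-            continue  # entry used up, or the move would overshoot 31
-        if total + v == 31:
-            return True  # reaching 31 wins immediately
-        rest = list(counts)
-        rest[v - 1] -= 1
-        if not wins(tuple(rest), total + v):
-            return True  # this move leaves the opponent in a losing position
-    return False  # every move loses (or no legal move remains)
-
-print(wins((4, 4, 4, 4, 4, 4), 0))  # can the first player force a win?
-Memoizing on (remaining counts, running total) keeps the search to a modest number of states, so it answers instantly; the answer below explains the winning line itself.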
-REPLY [6 votes]: Most likely it was row $3$: you probably started by taking a $3$, and then he took nothing but $4$'s, to which you responded with $3$'s to hit the 'magic numbers'. Unfortunately, after four moves apiece the total was at $28$, and you'd used all the $3$'s. Your strategy would have been perfect had there been at least five of each number instead of just four.
-You can make the same idea work for you as first player if you start by taking $5$. If the second player takes another $5$ to bring the total to $10$, you take $2$. If he then takes another $5$ to make $17$, you take $2$ again. In order to make $24$, he has to take the last $5$. You then take another $2$; there are no $5$'s left, so he can't hit $31$. No matter whether he takes $1,2,3$, or $4$, you can reach $31$; the closest call is when he takes a $3$, but there's still one $2$ left, so you still win.
-If the second player doesn't take $5$ on his first turn, there are two cases.
-Case 1: He takes $1,2,3$, or $4$, making a total less than $10$. In this case you bring the total to $10$ on your next turn by taking $4,3,2$, or $1$, respectively. At this point there are still at least three of every number left, and only three more numbers to hit ($17,24,31$), so he can't force you to use up what you need.
-Case 2: He takes a $6$, making a total of $11$. Go ahead and take a $6$, making $17$. The worst he can do to you is take a $1$, forcing you to take the third $6$, but that brings the total to $24$, and even if he takes $1$ again, there's still a $6$ that you can take to reach $31$.
-In other words, if he doesn't respond to your opening $5$ by taking a $5$ himself, you just play the 'magic number' strategy.<|endoftext|>
-TITLE: Show that the eigenvalues of ad(A) (adjoint action) are $\{\lambda_i-\lambda_j\}$, if those of $A$ are $\{\lambda_i\}$
-QUESTION [7 upvotes]: The Question: Let $A$ be an $n \times n$ matrix with distinct eigenvalues $\lambda_{1},...,\lambda_{n}$. Show that ad$_{A}$ acting on the space of all $n \times n$ complex matrices has $n^{2}$ eigenvalues $\lambda_{i}-\lambda_{j}$, $1 \leq i,j \leq n$. Then, find the corresponding eigenvectors ("eigenmatrices").
-The Attempt: So we can think of ad$_{A}$ as a matrix of dimension $n^{2} \times n^{2}$, with $n^{2}$ eigenvalues. I can diagonalize $A$ to get the eigenvalues in the diagonal. We have distinct eigenvalues. I think this has to do with weights but I am unsure.
-Any hints would be appreciated.

-REPLY [11 votes]: Let us denote by $u_1,\ldots,u_n$ and $v_1,\ldots,v_n$ the distinct eigenvectors of $A$ and $A^T$ respectively corresponding to the eigenvalues $\lambda_1,\ldots,\lambda_n.$
-If we introduce $X_{ij}=u_i\otimes v_j,$ where $\otimes$ denotes the Kronecker tensor product, then we get $$AX_{ij}=\lambda_iX_{ij},\quad X_{ij}A=\lambda_jX_{ij}.$$
-So $\textrm{ad}_AX_{ij}\equiv[A,X_{ij}]\equiv AX_{ij}-X_{ij}A=(\lambda_i-\lambda_j)X_{ij}.$
-I hope it helps.<|endoftext|>
-TITLE: A counterexample to the going down theorem
-QUESTION [9 upvotes]: I will appreciate any enlightenment on the following, which must be an exercise in a certain textbook. (I don't recognize where it comes from.)
-I understand that the going down property does not hold since $R$ is not integrally closed (in fact, it is not a UFD), but I have no idea how to show that $q$ is such a counterexample.

-Let $k$ be a field, $A = k[X, Y]$ be a polynomial ring, $R = \lbrace f \in A \colon f(0, 0) = f (1, 1) \rbrace \subset A$ be a subring. 
- Define $q = (X)\cap R$, $p = (X - 1, Y - 1) \cap R$, $P = (X - 1, Y - 1)$.
- Show that there is no $Q \in \operatorname{Spec} A$, $Q\subset P$ that goes down to $q$.

-REPLY [3 votes]: I'll show that the existence of $Q\in \operatorname{Spec}(A)$ satisfying $Q\subsetneq P$ and $q= Q\cap R$ leads to a contradiction.
-We have $X\cdot (X-1)\in q$, so $X\cdot (X-1)\in Q$.
-Hence we have $(X-1)\in Q$ (since $X\notin Q$ because $X\notin P$).
-But this forces $Q=(X-1)A$, since $(X-1)A\subset Q\subsetneq P=(X-1,Y-1)$.
-But then $(X-1)Y\in Q\cap R \setminus q$: contradiction.
-Edit
-As wxu remarks in his comment, $q$ as defined by eltonjohn is not included in $p$.
-The above answer remains correct (I am sure of that, because otherwise wxu would have noticed!), but it is not a counterexample to Going Down.
-I advise users to read wxu's post: he modified eltonjohn's question precisely in order to give such a counterexample.<|endoftext|>
-TITLE: From injective map to continuous map
-QUESTION [9 upvotes]: Let $X$ and $Y$ be metric spaces, let $f$ be an injective map from $X$ to $Y$, and suppose $f$ maps every compact set in $X$ to a compact set in $Y$. How can one prove that $f$ is a continuous map?
-Any comments and advice will be appreciated.

-REPLY [5 votes]: Since $X$ and $Y$ are metric spaces, it suffices to show that if $\langle x_n:n\in\Bbb N\rangle$ is a convergent sequence in $X$ with limit $x$, then $\langle f(x_n):n\in\Bbb N\rangle$ is a convergent sequence in $Y$ with limit $f(x)$; in words, $f$ preserves convergent sequences.
-Suppose that $\langle x_n:n\in\Bbb N\rangle$ converges to $x$ in $X$. If there is an $n_0\in\Bbb N$ such that $x_n=x$ for all $n\ge n_0$, it's trivially true that $\langle f(x_n):n\in\Bbb N\rangle\to f(x)$, so assume (by passing to a subsequence if necessary) that $\langle x_n:n\in\Bbb N\rangle$ is a sequence of distinct points. (Since $\langle x_n:n\in\Bbb N\rangle$ converges to $x$ and is not eventually constant at $x$, it cannot have a constant infinite subsequence: for each $n\in\Bbb N$ there must be an $m>n$ such that $x_k\ne x_n$ whenever $k\ge m$.)
-For each $n\in\Bbb N$ set $K_n=\{x\}\cup\{x_k:k\ge n\}$; each $K_n$ is compact and infinite. (Why?) By hypothesis, therefore, each $f[K_n]$ is compact.
-For convenience let $y=f(x)$, and let $y_n=f(x_n)$ and $H_n=f[K_n]$ for $n\in\Bbb N$. By hypothesis each $H_n$ is compact and infinite, so each contains a limit point. Fix $n\in\Bbb N$. For each $k\ge n$, $Y\setminus H_{k+1}$ is an open nbhd of $y_k$ that contains only finitely many points of $H_n$ (why?), so $y_k$ can't be a limit point of $H_n$. Thus, for each $n\in\Bbb N$ the only possible limit point of $H_n$ is $y$ itself. From here you should be able to prove without too much trouble that $\langle y_n:n\in\Bbb N\rangle\to y$ and hence that $f$ is continuous.<|endoftext|>
-TITLE: Prove that the normed space $L^{\infty}$ equipped with $\lVert\cdot\rVert_{\infty}$ is complete.
-QUESTION [9 upvotes]: Possible Duplicate:
-Understanding proof of completeness of $L^{\infty}$

-Most of the materials I have in Real Analysis consider this statement as a trivial one: "The normed space $L^{\infty}$ equipped with $\lVert\cdot\rVert_{\infty}$ is complete".
-But to my surprise I can't see the triviality. I am searching for it now...
-Anybody with a hint?

-REPLY [12 votes]: Let $\left(f_n\right)_{n\geqslant 1}$ be a Cauchy sequence in $L^{\infty}$ endowed with the natural norm. For $n,m$, let $N_{n,m}$ be a set of measure $0$ such that $|f_n(x)-f_m(x)|\leq \lVert f_n-f_m\rVert_{\infty}$ for each $x\notin N_{n,m}$. 
Define $N:=\bigcup_{n,m}N_{n,m}$. Then $N$ is of measure $0$ as a countable union of such sets and the sequence of functions $\left(\widetilde f_n\right)_{n\geqslant 1}$ restricted to $N^c$ is uniformly convergent. Then you can find a uniform limit $f$ (i.e. such that $\lVert f-\widetilde f_n\rVert_{\infty}\to 0$). Then just define $f$ to be $0$ on $N$ to get a limit in $L^{\infty}$ (more precisely the limit will be the equivalence class of this function).<|endoftext|>
-TITLE: Open properties of quasi-compact schemes
-QUESTION [6 upvotes]: I am following Ravi Vakil's Math 216: Foundations of Algebraic geometry notes, and there is a remark following an exercise that I don't understand at all, and if anyone could enlighten me then that would be brilliant.
-The exercise asks one to show that if $X$ is a quasicompact scheme, then every point has a closed point in its closure, which is clear from the preceding exercise asking to show that $X$ is quasicompact if and only if it can be written as a finite union of affine schemes. These I am fine with, as well as the following implication that every nonempty closed subset of $X$ contains a closed point.
-However, the notes then go on to state that this will be used in the following way: If a property $P$ is open (that is, if some point $x$ has $P$, then there exists an open neighbourhood $U$ of $x$ such that all points in $U$ have $P$), then to check that all points of a quasicompact scheme have $P$, it suffices to check only the closed points.
-I do not seem to be able to see how this follows at all. It seems to me that everything in the exercises is regarding closed points being in closures, and to show the remark, I want to show that other points are in (all) open neighbourhood(s) of closed points. These seem relatively distinct to me - is this wrong?
-These comments/exercises are on pages 139-140 of the notes.

-REPLY [7 votes]: Let $\eta$ be a generic point with closure $Y = \overline{\{ \eta \}}$, and let $P$ be an open property. Then, the set of all points of $X$ with property $P$ is open, so its complement is closed; in particular, if $\eta$ does not have property $P$, then no point of $Y$ has property $P$. (This is just point set topology.)
-Now, suppose $X$ is a quasicompact scheme. Then, $Y$ must have a closed point; so if all closed points of $X$ have property $P$, then $\eta$ must have property $P$ as well.

-REPLY [3 votes]: First: Are you sure that your argument about existence of closed points is correct? The reason I ask is that you say that it's clear from writing $X$ as a finite union of open affines, but (while I agree that $X$ is quasi-compact if and only if this is possible) I don't myself see how it follows immediately from this. (The arguments I know, e.g. this one, use the topological property of quasi-compactness in an explicit manner.)
-Secondly: Suppose $U$ is an open subset of $X$ that contains all the closed points of $X$. The complement of $U$ is a closed subset of $X$. Can it contain any closed point of $X$? If not, then taking into account the facts you state in your question, can it contain any points at all?

-REPLY [3 votes]: This is a purely topological statement. Suppose that an open property $P$ holds for all closed points. Let $\eta$ be a non-closed point, and let $x \in \bar{\eta}$ be a closed point in the closure (which exists by the exercise!). Then by assumption $x$ has $P$, and since $P$ is open there exists an open neighborhood $U$ of $x$ so that all points of $U$ have $P$. 
-It is clear that any open neighborhood $U$ of $x$ must contain $\eta$. If $\eta \notin U$, then $U^c:=X-U$ is a closed subset of $X$ containing $\eta$. But $\eta \in U^c$ implies $\bar{\eta} \subset U^c$ which then means that $x \in \bar{\eta} \subset U^c$, a contradiction since $x \in U$.
-Since $\eta \in U$, $\eta$ has property $P$ also. Hopefully this gives you some intuition about the strangeness of non-closed points.<|endoftext|>
-TITLE: Laplace transform identity
-QUESTION [18 upvotes]: Is there a function equal to its Laplace transform?
-I mean
-$$ \int_{0}^{\infty}dt\exp(-st)f(t)= f(s).$$
-Of course I know $f(t)=0$ satisfies the equation.
-For the case of the Fourier transform, I know the Hermite Polynomials are eigenfunctions of the Fourier transform; perhaps it's enough with a shift or rotation into the complex plane ($s \rightarrow i\omega$)?

-REPLY [18 votes]: For $l$ with real part greater than $-1$ a standard computation yields $$ \mathcal{L}(t^l) = s^{-l-1} \Gamma(l+1).$$ Pick $z$ with real part between $0$ and $1$ and let $f(t)=\sqrt{\Gamma(z)} t^{-z} + \sqrt{\Gamma(1-z)} t^{z-1}.$ Then we have $$ \mathcal{L}\left( f \right) = \sqrt{ \Gamma(z) \Gamma(1-z) } \cdot f(s).$$
-Since $\displaystyle \Gamma(z) \Gamma(1-z) = \frac{\pi}{\sin(\pi z)} $ it suffices to find a $z$ with real part between $0$ and $1$ such that $\sin(\pi z)=\pi.$ A solution is $ z= \displaystyle \frac{\sin^{-1} \pi}{\pi} $ where $\sin^{-1}(\pi) = \dfrac{\pi}{2} - i \log(\pi + \sqrt{\pi^2-1}) .$ Therefore the Laplace transform operator has a fixed point.<|endoftext|>
-TITLE: The A^1-localization in the unstable motivic category
-QUESTION [6 upvotes]: I am currently trying to study $\mathbb{A}^1$-homotopy theory and I have a question about the construction of the unstable motivic category.
-Here is roughly the construction I try to understand:
-1) Fix a noetherian scheme of finite Krull dimension $S$, and denote the category of smooth schemes of finite type over it by $\text{Sm}/S$. It is (essentially) small due to the finiteness condition. Endow it with the Nisnevich topology. OK.
-2) The category of simplicial presheaves $[\text{Sm}/S^{\text{op}}, \text{sSet}]$, denoted by $M_S$, has (at least) 3 local model structures, the injective (Jardine), the projective (Dugger, Hollander, Isaksen) and the flasque (Isaksen). The weak equivalences in these structures are the local weak equivalences, which can be characterized either by being the maps inducing isomorphisms on all sheaves of homotopy groups (Jardine), or the maps inducing isomorphisms on all stalks. These models can be seen as a left Bousfield localization of the global model structures. Similar models hold by restricting to simplicial sheaves instead of presheaves, and they are Quillen equivalent by sheafification-embedding. OK.
-3) There still is a localization to be done, and this is the one I don't quite understand. It is the $\mathbb{A}^1$-localization, or the localization with respect to the interval. Here is what I understand:
-In "$\mathbb{A}^1$-homotopy theory of schemes" of Morel and Voevodsky: They do a more general construction for any site with interval. Everything is done "by hand", Theorem 2.2.5 is the left Bousfield localization, Theorem 2.3.2 is the localization of the simplicial sheaves "with respect to the interval", and Definition 3.2.1 is the case of interest, the site $\text{Sm}/S$ with the interval $\mathbb{A}^1$. 
Their model structure seems to be a left Bousfield localization of the category of simplicial sheaves at the unique map $\mathbb{A}^1 \to \ast$. Magically, all the projections $\mathbb{A}^1 \times F \to F$ in $M_S$ are weak equivalences? So the localization can be formally done by applying a left Bousfield localization?
-Moreover, this is done on simplicial sheaves; does a similar result hold for simplicial presheaves? I would be very happy to hear that in the left Bousfield localization of some local model structure on $[\text{Sm}/S^{op},sSet]_{\text{Nis}}$ at the unique map $\mathbb{A}^1 \to \ast$, all the maps $\mathbb{A}^1 \times F \to F$ of simplicial presheaves are weak equivalences. Moreover, is this the property we want in the unstable category of motivic spaces? We could of course try to do a left Bousfield localization at all maps $\mathbb{A}^1 \times F \to F$, but since it is not a set, this does not necessarily exist, a priori.
-Thanks. Feel free to redirect me to any reference.

-REPLY [4 votes]: Markus Severitt wrote a wonderfully readable introduction to the unstable motivic category, and it seems to me he answers all of your questions. In particular, look at section 6.2 of his paper.
-http://www.math.uni-bielefeld.de/~mseverit/mseverittse.pdf<|endoftext|>
-TITLE: How do you find the limit of $\lim\limits_{x\rightarrow 0^-}\frac { \arcsin{ \frac {x^2-1}{x^2+1}}-\arcsin{(-1)}}{x}$?
-QUESTION [5 upvotes]: I'm trying to calculate the following limit, involving $\arcsin$, where $x$ is getting closer to $0$ from the negative side, so far with no success.
-The limit is:
-$$\lim_{x\rightarrow 0^-}\frac { \arcsin{ \frac {x^2-1}{x^2+1}}-\arcsin{(-1)}}{x}$$
-It's part of a bigger question, where I should prove that the same expression, but with $x\rightarrow 0$, has no limit. So, I prove that the limits from both sides are different.
-I already know that the solution to the above question is $(-2)$, but have no idea how to get there.
-Would appreciate your help!

-REPLY [3 votes]: You can use the mean-value theorem on $f(y) = \arcsin({y^2 - 1 \over y^2 + 1})$ on any interval $[x,0]$ for small $x$, since $f(y)$ is continuous on $[x,0]$ and differentiable on $(x,0)$ (you don't need the derivative to exist at $y = 0$ or $y = x$ to apply the mean value theorem). Thus there is some $y \in (x,0)$ (that depends on $x$) such that
-$$ f'(y) = {\arcsin({x^2 - 1 \over x^2 + 1}) - \arcsin(-1) \over x} $$
-Taking the derivative of $f$ using the chain rule, you get
-$$f'(y) = {1 \over \sqrt{1 - \big({y^2 - 1 \over y^2 + 1}\big)^2}}\bigg({y^2 - 1 \over y^2 + 1}\bigg)'$$
-$$= {y^2 + 1 \over \sqrt{4y^2}}{4y \over (y^2 + 1)^2}$$
-$$= -{2 \over y^2 + 1}$$
-The minus sign comes from the fact that $y < 0$. Since $y \in (x,0)$, we conclude $f'(y)$ is between $-2$ and $-{2 \over x^2 + 1}$. Thus as $x$ goes to zero, $f'(y)$ must approach $-2$, which therefore is the limit.

-Since the asker needs an entirely elementary proof, here's another way. 
Since $\arcsin(-1) = {-\pi \over 2}$, we are seeking
-$$\lim_{x \rightarrow 0^-}{\arcsin({x^2 - 1 \over x^2 + 1}) + {\pi \over 2} \over x}$$
-Since $\lim_{\theta \rightarrow 0} {\sin(\theta) \over \theta} = 1$, this can be rewritten as
-$$\lim_{x \rightarrow 0^-} {\sin(\arcsin({x^2 - 1 \over x^2 + 1}) + {\pi \over 2}) \over\arcsin({x^2 - 1 \over x^2 + 1}) + {\pi \over 2}} \times {\arcsin({x^2 - 1 \over x^2 + 1}) + {\pi \over 2} \over x}$$
-$$= \lim_{x \rightarrow 0^-}{\sin(\arcsin({x^2 - 1 \over x^2 + 1}) + {\pi \over 2}) \over x}$$
-Using that $\sin(\theta + {\pi \over 2}) = \cos(\theta)$, this becomes
-$$\lim_{x \rightarrow 0^-}{\cos(\arcsin({x^2 - 1 \over x^2 + 1})) \over x}$$
-Since $\cos(\arcsin(y)) = \sqrt{1 - y^2}$, and $\sqrt{4x^2} = -2x$ for $x < 0$, the above becomes
-$$\lim_{x \rightarrow 0^-} {-{2x \over x^2 + 1} \over x}$$
-$$= \lim_{x \rightarrow 0^-}-{2 \over x^2 + 1}$$
-$$= -2$$
-So this is your limit.<|endoftext|>
-TITLE: Proof that angle-preserving map is conformal
-QUESTION [15 upvotes]: Let $\phi: S \to \bar{S}$ be a diffeomorphism between two surfaces in $\mathbb{R^3}$. Such a map is called conformal if for all $p \in S$, and $v_1, v_2 \in T_p(S)$ (the tangent plane) we have
-$$\langle d\phi_p(v_1), d\phi_p(v_2) \rangle = \lambda^2 \langle v_1, v_2 \rangle_p$$
-for some nowhere-zero function $\lambda$.
-$\phi$ is said to be angle-preserving if
-$$\cos(v_1, v_2) = \cos(d\phi_p(v_1), d\phi_p(v_2)),$$
-which I take to mean
-$$\frac{\langle v_1, v_2\rangle}{\lVert v_1 \rVert \lVert v_2 \rVert} =
\frac{\langle d\phi(v_1), d\phi(v_2)\rangle}{\lVert d\phi(v_1) \rVert \lVert d\phi(v_2) \rVert}
$$
-From do Carmo, "Differential Geometry of Curves and Surfaces", 4.2/14:

-Prove that $\phi$ is locally conformal if and only if it preserves angles.

-The "only if" part is obvious, but how can the "if" portion be proved (i.e. how does preserving angles imply conformality)?

-REPLY [8 votes]: Let $e_1$, $e_2$ be an orthonormal basis of $T_{p}S$. Let:
-\begin{align*}
-\langle d\phi_{p}(e_1), d\phi_{p}(e_1) \rangle &= \lambda_1 \\
-\langle d\phi_{p}(e_1), d\phi_{p}(e_2) \rangle &= \mu \\
-\langle d\phi_{p}(e_2), d\phi_{p}(e_2) \rangle &= \lambda_2
-\end{align*}
-Now take:
-\begin{align*}
-v_1 &= e_1 \\
-v_2 &= \cos\theta\ e_1 + \sin\theta\ e_2
-\end{align*}
-The equation in your question implies that:
-$$
-\cos\theta = \frac{\lambda_1 \cos\theta + \mu \sin\theta}{\sqrt{\lambda_1\left(\lambda_1\cos^2\theta + 2\mu\sin\theta\cos\theta + \lambda_2\sin^2\theta\right)}}
-$$
-Take $\theta = \frac{\pi}{2}$ to get $\mu = 0$. With $\mu = 0$, cancelling $\cos\theta$ and squaring the equation for a general $\theta$ gives:
-$$
-\lambda_1 = \lambda_1 \cos^2\theta + \lambda_2\sin^2\theta
-$$
-so $\lambda_1 = \lambda_2$. 
Hence:
-\begin{align*}
-\langle d\phi_{p}(e_1), d\phi_{p}(e_1) \rangle &= \lambda_1 \langle e_1, e_1 \rangle_{p} \\
-\langle d\phi_{p}(e_2), d\phi_{p}(e_2) \rangle &= \lambda_1 \langle e_2, e_2 \rangle_{p} \\
-\langle d\phi_{p}(e_1), d\phi_{p}(e_2) \rangle &= \lambda_1 \langle e_1, e_2 \rangle_{p} \qquad (= 0)
-\end{align*}
-Since both $\langle, \rangle_{p}$ and $\langle d\phi_{p}(), d\phi_{p}() \rangle$ are bilinear forms, the above is true for all $v_1, v_2 \in T_{p}S$.<|endoftext|>
-TITLE: Using Replacement to prove transitive closure is a set without recursion
-QUESTION [11 upvotes]: In the course on set theory I'm doing, I'm told that one of the main motivations behind the axiom of replacement is that the Axiom of Infinity asserts the existence of an infinite set, namely $\omega = \{\emptyset, \emptyset^+,\emptyset^{++},\dots\}$ where $a^+ = a \cup \{a\}$, but doesn't in general show the existence of other infinite sets that we'd like.
-In particular, I'm told that the transitive closure $TC(x)$ of $x$ is defined as $$TC(x) = \bigcup\{x,\cup x, \cup\cup x,\dots\}$$, and we can use Replacement to be able to say that the set $\{x,\cup x, \cup\cup x,\dots\}$ actually exists.
-My question is: assuming the existence of $\omega$ as given above, what function-class can we use with Replacement to prove the existence of the iterated-union set? It would clearly suffice to come up with a formula $\phi$ such that $\phi(n,y)$ asserts that $y$ is the $n^\mathrm{th}$ iteration of the union operation on $x$.
-But the only ways I can think of to define such a formula are recursive, and that's no good because the proof I've seen that recursion works uses the transitive closure operation. So there must be either some restricted and easier-to-prove form of recursion, or some non-recursive definition that is just as good.
-Edit: here's some legwork to get you started:
-$$\mathrm{Fun}(f) = \forall x.(\exists y.(x,y)\in f\wedge(\forall z. (x,z)\in f \Rightarrow z = y))$$ so $\mathrm{Fun}(f)$ means $f$ is a function.
-$$\mathrm{Rec}(f) = \forall nxy. ((n,x),y)\in f\Rightarrow ((n = \emptyset\wedge x = y) \vee (\exists m. m^+=n\wedge ((m,\cup x),y)\in f))$$ so $\mathrm{Rec}(f)$ means $f$ satisfies a recursion equation for an iterated-union function.
-Then $$\phi((n,x),y) = \exists f. \mathrm{Fun}(f) \wedge \mathrm{Rec}(f) \wedge ((n,x),y) \in f$$ is close to what I want, but it's not completely obvious that $\phi$ itself is a function-class. I think I'd need to prove using Foundation that the recursion terminates, possibly by requiring that some things are members of $\omega$ (which I understand exists by using Separation to take the intersection of all sets of the kind described by Infinity). But normal $\epsilon$-induction is still not available, since it depends on $TC$.

-REPLY [4 votes]: The proof of recursion with which I'm familiar does not use transitive closure. A version of it can be found on-line in Don Monk's Advanced Set Theory notes, specifically, as Theorem 4.12 in Chapter 4:

-Suppose that $\mathbf G$ is a class function with domain the class of all (ordinary) functions. Then there is a unique class function $\mathbf F$ with domain $\mathrm{On}$ such that for every ordinal $\alpha$ we have $\mathbf F(\alpha)=\mathbf G(\mathbf F\upharpoonright\alpha)$.

-This starts on page 19. 
Roughly speaking the proof is by showing that for each ordinal $\alpha$ there is an approximation $f_\alpha$ to $\mathbf F$ with domain $\alpha$, that these approximations are unique, and that they 'fit together' properly, so that one may define $\mathbf F(\alpha)$ as $f_{\alpha+1}(\alpha)$.

-REPLY [3 votes]: You employ the following formula:
-$$\exists f(x\in\omega\land f\textrm{ is a function }\land\textrm{dom}(f)=x\cup\{x\}\land f(\varnothing)=\{A\}\\\land(\forall m< x)( f(m\cup\{m\})=\bigcup f(m))\land y=f(x))$$
-It's easy to check that such an $f$ is unique. To prove that it exists proceed by induction on $x$.<|endoftext|>
-TITLE: Showing $\int\limits_a^b h(x)\sin(nx) dx \rightarrow 0$
-QUESTION [7 upvotes]: Let $h\in C_0([a,b])$ be arbitrary, that is, $h$ is continuous and vanishes on the boundary.
-I want to show that
-$\int\limits_a^b h(x)\sin(nx)dx \rightarrow 0$.
-If $h\in C^1$, integration by parts immediately yields the claim, since $h'$ is continuous and hence bounded on the compact interval, using also the zero boundary condition.
-However, I believe the statement is also true for all $h\in C_0([a,b])$. My idea is to approximate $h$ by functions $h_m \in C_0^1([a,b])$. Then for all $m$,
-$$\begin{equation*}
-\lim_{n \to \infty} \int h_m(x) \sin(nx) dx = 0.
-\end{equation*}$$
-$$\begin{align*}
-\Rightarrow ~~~ \lim_{n \to \infty} \int h(x)\sin(nx) dx &= \lim_{n \to \infty} \int \lim_{m \to \infty} h_m(x)\sin(nx) dx\\ &= \lim_{m \to \infty}(\lim_{n \to \infty} \int h_m(x)\sin(nx) dx)\\ &= \lim 0 = 0.
-\end{align*}$$
-This is fine iff the second equality is. In fact, these are two different steps, as three limiting processes are involved. Hence the questions:
-First, can I make sure that I can interchange the $m$-limit with the integral sign? (Can I assume that $h_m$ converges uniformly? Or use some sort of Dominated Convergence Theorem?)
-And second, may I swap the $n$-limit for the $m$-limit? (The $n$-limit is in fact $C/n \to 0$)
-I hope it's not too messy. Many thanks for any kind of help!

-REPLY [4 votes]: I think this is from Apostol. It is an informal approach to the following Lemma, if I'm not recalling wrongly:
-Let $f$ be integrable in $[a,b]$. Then
-$$\lim \limits_{\lambda \to \infty } \int\limits_a^b f\left( x \right)\sin \lambda x\,dx = 0$$
-$(1)$ Let $f$ be constant. Then
-$$\lim \limits_{\lambda \to \infty } \int\limits_a^b k\sin \lambda x\,dx = \lim \limits_{\lambda \to \infty }\left.-k\frac{\cos \lambda x}{\lambda}\right]_a^b=0$$
-$(2)$ Let $f$ be a step function over $[a,b]$, viz.
-$$f(x) = \begin{cases} k: a< x\leq a_1 \cr k_1: a_1< x\leq a_2 \cr \;\;\vdots \end{cases}$$<|endoftext|>
-TITLE: Probability of a random binary string containing a long run of 1s?
-QUESTION [21 upvotes]: For some fixed $n$, let $p_n$ be the probability that a random infinite binary string contains a run of consecutive $1$s that contains $n$ more $1$s than the total number appearing before the run.
-For example, if $n=2$, the binary string $010010111100...$ is of the sort we're looking for, but the string $01001011100...$ is not (yet). Call a run of $1$s that is sufficiently long, like the four $1$s in the first string, a satisfying run.
-Is there any reasonably nice way to compute $p_n$?
-Here's what I know:

-We can compute the expected number of satisfying runs in a string. Suppose a satisfying run begins after $k$ preliminary bits. If $k=0$, this occurs with probability $2^{-n}$; otherwise, it occurs with probability $\sum_{i=0}^{k-1} {k-1 \choose i} 2^{-n-k-i}=2^{-n-k}\left(\frac{3}{2}\right)^{k-1}$. 
Summing over all $k$, we get that the expected number of satisfying runs is $3(2^{-n})$, and so $p_n<3(2^{-n})$. You could extend this argument to compute $p_n$ by inclusion-exclusion, but it looks to me like it'd be incredibly ugly. In fact, $3(2^{-n})$ is a pretty good estimate for $p_n$, since the probability of a given string having multiple satisfying runs is so low.
-By conditioning on the location of the first $0$ in the string, we can get a recurrence relation: $p_n=2^{-n+1}+\sum_{i=1}^{n-1} 2^{-i}p_{n+i}$. This is not sufficient to compute $p_n$ on its own, even given some initial values -- the largest index that appears in it is $p_{2n-1}$, so if you try to use it recursively you'll never learn anything about $p_n$ for $n$ even. You might be able to get something out of it by repeatedly substituting it into itself (in the form I've given it in, with the smallest coefficient singled out), and using the bound from 1. to make some kind of limit argument, but this also looks nasty to me.

-Thoughts?
-(This originally comes from this Magic: the Gathering scenario, but I hope I've managed to successfully de-Magic it.)

-REPLY [2 votes]: This answer has a "motivational" stage, and afterwards a "calculations" stage. Even though the deduction of the formulas is not as pleasant as the formulas provided by the OP and leonbloy, I think that my answer qualifies as "nice", at least because I obtain a decreasing recurrence and because of its "constructive" flavor. Please prepare yourself to see a lot of summation symbols, but do not get impatient, just don't forget the moral of the reasoning, and feel free to skip the easy steps.
-STAGE 1: MOTIVATION
-I always try to solve problems concerning random binary strings in a constructive manner. Then we can ask: how to "construct" an "admissible" string?
-This is how I proceed: let us say that you want the desired condition to be fulfilled at the $r$-th run of $1$s (a $1$-run, for short) and not before. Then you construct the previous $1$-runs with prescribed lengths $k_1,k_2,\dots,k_{r-1}\geq1$. Now we must interleave $0$-runs between our $1$-runs.
-The first $0$-run, which will be put before the first $1$-run, can have any length $i_1$ including zero (because your sequence may or may not start with $1$). On the other hand, the other dividing $0$-runs must have positive length (because they must separate our $1$-runs).
-Therefore the lengths $i_1,\dots,i_r$ of the $0$-runs satisfy $i_1\geq0$ and $i_2,\dots,i_r\geq1$. Finally, the $r$-th $1$-run must have length at least $n+k_1+\cdots+k_{r-1}$. Actually, we can suppose that this length is exactly $n+k_1+\cdots+k_{r-1}$, because past that point it does not matter how the rest of the sequence looks.
-Of course, the desired condition is not fulfilled before the $r$-th step if and only if $k_j\leq s_{j-1}$ for $j=1,\dots,r-1$, where $s_j=n-1+k_1+\cdots+k_j$ ($s_0=n-1$).
-The set of sequences constructed in the way described above, with the values $k_i$ and $i_j$ fixed, has probability
-$$\underbrace{2^{-(i_1+\cdots+i_r)}}_{0\text{-}\style{font-family:inherit;}{\text{runs}}}\ \underbrace{2^{-(k_1+\cdots+k_{r-1})}}_{\style{font-family:inherit;}{\text{faulty}}\ 1\text{-}\style{font-family:inherit;}{\text{runs}}}\ \underbrace{2^{-(n+k_1+\cdots+k_{r-1})}}_{\style{font-family:inherit;}{\text{successful}}\ 1\text{-}\style{font-family:inherit;}{\text{run}}}\,.$$
-It remains to sum this value over all admissible values of $r,i_1,\dots,i_r,k_1,\dots,k_{r-1}$. 
In other words, the desired probability is equal to
-$$\sum_{r=1}^\infty\Biggl[\,\sum_{i_1=0}^\infty\sum_{i_2=1}^\infty\cdots\sum_{i_r=1}^\infty\Biggr]\sum_{k_1=1}^{s_0}\sum_{k_2=1}^{s_1}\sum_{k_3=1}^{s_2}\cdots\sum_{k_{r-1}=1}^{s_{r-2}}2^{-(i_1+\cdots+i_r)}\,2^{-(k_1+\cdots+k_{r-1})}\,2^{-(n+k_1+\cdots+k_{r-1})}\,.$$
-The part of the sum involving the $i_j$ can be easily solved: recall that $\sum_{i=0}^\infty 2^{-i}=2$ and $\sum_{i=1}^\infty2^{-i}=1$, so our sum simplifies to
-$$2^{1-n}\sum_{r=1}^\infty\ \sum_{k_1=1}^{s_0}\sum_{k_2=1}^{s_1}\sum_{k_3=1}^{s_2}\cdots\sum_{k_{r-1}=1}^{s_{r-2}}4^{-(k_1+\cdots+k_{r-1})}\,.$$
-Now we concentrate on the inner sum, that is, with $r$ fixed. We have
-$$\sum_{k_1=1}^{s_0}\sum_{k_2=1}^{s_1}\sum_{k_3=1}^{s_2}\cdots\sum_{k_{r-1}=1}^{s_{r-2}}4^{-(k_1+\cdots+k_{r-1})}=\sum_{k_1=1}^{s_0}\sum_{k_2=1}^{s_1}\sum_{k_3=1}^{s_2}\cdots\sum_{k_{r-2}=1}^{s_{r-3}}4^{-(k_1+\cdots+k_{r-2})}\sum_{k_{r-1}=1}^{s_{r-2}}4^{-k_{r-1}}\,.$$
-Since $\sum_{k=1}^s4^{-k}=\frac13(1-4^{-s})$, our sum becomes
-$$\frac13\sum_{k_1=1}^{s_0}\sum_{k_2=1}^{s_1}\sum_{k_3=1}^{s_2}\cdots\sum_{k_{r-2}=1}^{s_{r-3}}4^{-(k_1+\cdots+k_{r-2})}(1-4^{-s_{r-2}})$$
-$$=\frac13\sum_{k_1=1}^{s_0}\sum_{k_2=1}^{s_1}\sum_{k_3=1}^{s_2}\cdots\sum_{k_{r-2}=1}^{s_{r-3}}4^{-(k_1+\cdots+k_{r-2})}-\frac13\sum_{k_1=1}^{s_0}\sum_{k_2=1}^{s_1}\sum_{k_3=1}^{s_2}\cdots\sum_{k_{r-2}=1}^{s_{r-3}}4^{-(k_1+\cdots+k_{r-2})}4^{-s_{r-2}}$$
-$$=\frac13\sum_{k_1=1}^{s_0}\sum_{k_2=1}^{s_1}\sum_{k_3=1}^{s_2}\cdots\sum_{k_{r-2}=1}^{s_{r-3}}4^{-(k_1+\cdots+k_{r-2})}-\frac13\,4^{-(n-1)}\sum_{k_1=1}^{s_0}\sum_{k_2=1}^{s_1}\sum_{k_3=1}^{s_2}\cdots\sum_{k_{r-2}=1}^{s_{r-3}}4^{-2(k_1+\cdots+k_{r-2})}$$
-Now I will try to convince you that this is indeed a decreasing recurrence, unlike the OP's and leonbloy's approaches (which of course are very clever and illuminating). This is a good time to make a generalization of the problem and introduce some convenient notation:
-STAGE 2: CALCULATIONS
-Let $p\in(0,1)$ and let $q=1-p$. Now we suppose that the probability of $1$ in each place of a binary string is equal to $p$ (so the probability of $0$ is $q$). In the case of interest we have $p=q=1/2$, but the general case is not harder than this particular case.
-Reasoning as in the previous stage, the probability of the set of desired sequences with the numbers $r,i_1,\dots,i_r,k_1,\dots,k_{r-1}$ fixed is equal to
-$$\underbrace{q^{i_1+\cdots+i_r}}_{0\text{-}\style{font-family:inherit;}{\text{runs}}}\ \underbrace{p^{k_1+\cdots+k_{r-1}}}_{\style{font-family:inherit;}{\text{faulty}}\ 1\text{-}\style{font-family:inherit;}{\text{runs}}}\ \underbrace{p^{n+k_1+\cdots+k_{r-1}}}_{\style{font-family:inherit;}{\text{successful}}\ 1\text{-}\style{font-family:inherit;}{\text{run}}}\,.$$
-It is important to distinguish the case $r=1$ of the probability above: in this case there is no choice of numbers $k_1,\dots,k_{r-1}$, we only choose $i_1\geq0$, so in this case the probability is equal to $p^nq^{i_1}$. 
Thus, the total probability is equal to -$$\sum_{i_1=0}^\infty p^nq^{i_1}+\sum_{r=2}^\infty\Biggl[\,\sum_{i_1=0}^\infty\sum_{i_2=1}^\infty\cdots\sum_{i_r=1}^\infty\Biggr]\sum_ {k_1=1}^{s_0}\sum_{k_2=1}^{s_1}\sum_{k_3=1}^{s_2}\cdots\sum_{k_{r-1}=1}^{s_{r-2}}q^{i_1+\cdots+i_r}\,p^{k_1+\cdots+k_{r-1}}\,p^{n+k_1+\cdots+k_{r-1}}\,.$$ -$$=\frac{p^n}{1-q}+\frac{p^n}{1-q}\sum_{r=2}^\infty\biggl(\frac{q}{1-q}\biggr)^{r-1}\sum_ {k_1=1}^{s_0}\sum_{k_2=1}^{s_1}\sum_{k_3=1}^{s_2}\cdots\sum_{k_{r-1}=1}^{s_{r-2}}\bigl(p^2\bigr)^{k_1+\cdots+k_{r-1}}\,.$$ -Define -$$S(\alpha,1)=1;\ S(\alpha,r)=\sum_ {k_1=1}^{s_0}\sum_{k_2=1}^{s_1}\sum_{k_3=1}^{s_2}\cdots\sum_{k_{r-1}=1}^{s_{r-2}}\alpha^{k_1+\cdots+k_{r-1}}\,,\ \style{font-family:inherit;}{\text{for}}\ r\geq2\,.$$ -Then our probability can be written as -$$\frac{p^n}{1-q}\sum_{r=1}^\infty\biggl(\frac{q}{1-q}\biggr)^{r-1}S(p^2,r)\,.$$ -We have $S(\alpha,1)=1$, and for $r\geq2$: -$$S(\alpha,r)=\sum_ {k_1=1}^{s_0}\sum_{k_2=1}^{s_1}\sum_{k_3=1}^{s_2}\cdots\sum_{k_{r-2}=1}^{s_{r-3}}\alpha^{k_1+\cdots+k_{r-2}}\sum_{k_{r-1}=1}^{s_{r-2}}\alpha^{k_{r-1}}$$ -$$=\sum_ {k_1=1}^{s_0}\sum_{k_2=1}^{s_1}\sum_{k_3=1}^{s_2}\cdots\sum_{k_{r-2}=1}^{s_{r-3}}\alpha^{k_1+\cdots+k_{r-2}}\biggl[\frac{\alpha}{1-\alpha}\,(1-\alpha^{s_{r-2}})\biggr]$$ -$$=\frac{\alpha}{1-\alpha}\sum_ {k_1=1}^{s_0}\sum_{k_2=1}^{s_1}\sum_{k_3=1}^{s_2}\cdots\sum_{k_{r-2}=1}^{s_{r-3}}\alpha^{k_1+\cdots+k_{r-2}}-\frac{\alpha}{1-\alpha}\sum_ {k_1=1}^{s_0}\sum_{k_2=1}^{s_1}\sum_{k_3=1}^{s_2}\cdots\sum_{k_{r-2}=1}^{s_{r-3}}\alpha^{k_1+\cdots+k_{r-2}+s_{r-2}}$$ -$$=\frac{\alpha}{1-\alpha}S(\alpha,r-1)-\frac{\alpha}{1-\alpha}\,\alpha^{n-1}\sum_ {k_1=1}^{s_0}\sum_{k_2=1}^{s_1}\sum_{k_3=1}^{s_2}\cdots\sum_{k_{r-2}=1}^{s_{r-3}}\alpha^{2(k_1+\cdots+k_{r-2})}$$ -$$=\frac{\alpha}{1-\alpha}\,S(\alpha,r-1)-\frac{\alpha^n}{1-\alpha}\,S(\alpha^2,r-1)\,.$$ -Changing $\alpha$ by $\alpha^j$, we obtain, for $r\geq2$: -$$S(\alpha^j,r)=\frac{\alpha^j}{1-\alpha^j}\,S(\alpha^j,r-1)-\frac{\alpha^{jn}}{1-\alpha^j}\,S(\alpha^{2j},r-1)\,.$$ -Why the hell I did this? because defining -$$b_j=\frac{\alpha^j}{1-\alpha^j},\ c_j=-\,\frac{\alpha^{jn}}{1-\alpha^j}\quad \style{font-family:inherit;}{\text{and}}\quad T(j,r)=S(\alpha^j,r)$$ -the recurrence becomes -$$T(j,1)=1;\quad T(j,r)=b_j\,T(j,r-1)+c_j\,T(2j,r-1),\ \style{font-family:inherit;}{\text{for}}\ r\geq2\,.$$ -OK, I agree that this is not a genuine decreasing recurrence, as index $j$ is increasing; fortunately, the index $r$ decreases. In general, these recurrences can be explicitly solved, either by bare hands or using computer algebra systems. Later I will try to solve it step by step, for the benefit of those who believed (or disbelieved?) me and endured up this point.<|endoftext|> -TITLE: Radical Locally Solvable? -QUESTION [6 upvotes]: A group $G$ is locally solvable if all finitely generated subgroups are -solvable. -A group $G$ is locally finite if all finitely generated subgroups are -finite. -A group $G$ is virtually locally solvable if it has a locally solvable -subgroup of the finite index. -Let be $R(S)=\left\langle T\,;\,T\trianglelefteq G\,,\,T\text{ locally solvable }\right\rangle $ -My question are: -1)Is $\,R(S)\,$ locally solvable? -2) If 1) is true: $G$ locally finite, $R(S)$ locally solvable and $G/R(S)$ virtually locally solvable $\Rightarrow G$ -virtually locally solvable? - -REPLY [3 votes]: No, your group $R(S)$ need not be locally solvable. A counterexample due to P. 
Hall is described in Part 2 of Robinson's "Finiteness Conditions and Generalized Soluble Groups". See the development leading up to, and the proof of, Theorem 8.19.1 on page 91 (and the corollary).<|endoftext|> -TITLE: notation of differentiation in differential geometry -QUESTION [11 upvotes]: I can't wrap my head around notation in differential geometry especially the abundant versions of differentiation. -Peter Petersen: Riemannian Geometry defines a lot of notation to be equal but I don't really know when one tends to use which version and how to memorize the definitions and properties/identities. - -Directional derivative or equivalently the action of a vector field $X$ on a function ($f:M\to\mathbb R$): $X\cdot f=D_Xf=df\cdot X\ $, which is also denoted as $L_Xf$ - -This is mostly clear except why the notation $D_Xf\ $ exists. - -$grad(f)=\nabla f\ $ the gradiant of $f:M\to\mathbb R$ - -Has $\nabla$ something to do with the Levi-Civita connection? - -Lie derivative of vector fields: $L_XY:=[X,Y]= X\cdot Y - X\cdot Y\ $, where the action of one vector field on one another is given by: $X\cdot Y:=D_XY\ $ the directional derivative of $Y$ along an integral curve of the vector field $X$. - -Also mostly clear. - -The covariant derivative or Levi-Civita connection $\nabla_XY$ - -Here my understanding stops and my brain starts dripping out of my ears… -Are there mnemonics or other ways to get into all those ways of thinking about differentiating on manifolds. And why do most books use coordinates - are they necessary I rather like not using $X=\sum_ia^i\partial_i$ for vector fields especially if the author (ab)uses Einstein sum convention. - -REPLY [4 votes]: It's not so complicated. Let's do things systematically. -a) The pure manifold case -On a differentiable manifold $M$ a vector field $X$ is the datum of a smoothly varying tangent vector $X(m)\in T_mM$ at each point $m\in M$. -Given a smooth function $f\in C^\infty(M)$ you obtain a differential form $ {d}f$ , that is a linear form $df(m)\in T^*_m(M)=(T_m(M))^*$ at each $m\in M$ , smoothly varying with $m$. It is given by the formula $(df(m))(v)=v(f)$ for $v\in T_m(M)$. -The smooth function $X(f)=L_X(f)=D_X(f)\in C^\infty(M)$ is then the function $M\to \mathbb R:m\mapsto (df(m))(X(m))$ -Let me insist that this does not require any riemannian structure. -b) The riemannian case -If $M,g$ is a riemannian manifold, each vector space $T_x(M)$ has a euclidean structure, which permits us to associate to each linear form $\phi\in T_m^*(M)$ the vector $v\in T_m(M)$ such that $g_x(v,w)=\phi (w)$ for all $w\in T_m(M)$. -If $\phi=df(m)$ the corresponding $v$ is denoted by $grad(f)(m)$. -This gives rise to the required function $grad(f)\in C^\infty(M).$ -c) The Lie derivative -Given two vector fields $X,Y$ on $M$ you can associate to them the following map $$z:C^\infty(M)\to C^\infty(M): f\mapsto X(Y(f))-Y(X(f))$$ A fundamental result is then that to this map corresponds a unique vector field $Z$ such that $z(f)=Z(f)$ for all $f\in C^\infty(M)$. -We then write $Z=[X,Y]$ or $Z=L_X(Y)$ -Again, this does not require a riemannian structure on $M$. -(It is better at this stage not to mention connections, which are supplementary data on vector bundles, related to differential operators. If you like, you can ask another question about them.)<|endoftext|> -TITLE: $L^p$ space question -QUESTION [5 upvotes]: Assume $(X,\mathcal{M},\mu)$ is a measure space and for some $1\leq p<\infty$, $1\leq q<\infty$, $L^p(\mu)\subset L^q(\mu)$. 
Prove there is a constant $C>0$ so that $\|f\|_q\leq C\|f\|_p$ for all $f\in L^p(\mu)$. - -I need help getting started on this. - -REPLY [7 votes]: $L^p(\mu)$ and $L^q(\mu)$ are Banach spaces, so we can apply the closed graph theorem: if $f_n\to f$ in $L^p$ and $f_n\to g$ in $L^q$ then passing to a subsequence $f_{n_k}\to f$ and $g$ almost everywhere. Hence the graph of the identity $\iota\colon L^p\to L^q$ is closed, so $\iota$ is continuous (and linear).<|endoftext|> -TITLE: How demonstrate the Craig representation for the Gaussian probability function? -QUESTION [7 upvotes]: The Q-function is defined by : -$$Q(x) =\frac{1}{\sqrt{2\pi}} \int_{x}^{\infty}\exp(-\frac{u^2}{2}) \ \mathrm{d}u \ \ (1).$$ -According to the wiki page there is an alternative form of the Q-function based on John W. Craig's work that is more useful is expressed as: -$$Q(x) =\frac{1}{\pi} \int_{0}^{\frac{\pi}{2}}\exp\left(-\frac{x^2}{2\sin^2(\theta)}\right) \ \mathrm{d}\theta \ \ (2).$$ -Craig's proove is based on probabilistic approach, there for I look for an analytic one. -any help will be appreciated. -Thanks. - -REPLY [4 votes]: Since both expressions coincide at $x=0$, it suffices to show that their derivatives coincide on $x\geqslant0$. Since the derivative of the first expression of $Q(x)$ is proportional to $\mathrm e^{-x^2/2}$, this happens if $R(x)$ is constant on $x\geqslant0$, where -$$ -R(x)=\mathrm e^{x^2/2}\int_0^{\pi/2}\frac{x}{\sin^2\theta}\mathrm e^{-x^2/(2\sin^2\theta)}\mathrm d\theta=\int_0^{\pi/2}\frac{x}{\sin^2\theta}\mathrm e^{-x^2\cot^2\theta/2}\mathrm d\theta. -$$ -The change of variables $v=x\cot\theta$ yields the range $v\gt0$ and the Jacobian $\mathrm dv=x\mathrm d\theta/\sin^2\theta$, hence -$$ -R(x)=\int_0^{+\infty}\mathrm e^{-v^2/2}\mathrm dv, -$$ -which does not depend on $x$. QED.<|endoftext|> -TITLE: Finding the Dual Basis -QUESTION [9 upvotes]: Define the four vectors in $\mathbb{R}^4$ by -$$v_1=\left( \begin{array}{ccc} -1 \\ -0 \\ -0 \\ -0 \end{array} \right), -v_2=\left( \begin{array}{ccc} -1 \\ -1 \\ -0 \\ -0 \end{array} \right), -v_3=\left( \begin{array}{ccc} -1 \\ -1 \\ -1 \\ -0 \end{array} \right), -v_4=\left( \begin{array}{ccc} -1 \\ -1 \\ -1 \\ -1 \end{array} \right). $$ -I'm now asked to find the basis dual to $\{v_1,v_2,v_3,v_4 \}$ in $\mathbb{R}^4$, wth each vector expressed as a linear combination of the standard basis in $\mathbb{R}^4$. -Now, this is one of those situations where I 'know' all of the bookwork regarding dual bases etc. however, what seems like a simple application presents quite a hurdle. -Any explanation of how to progress would be very appreciated. - -REPLY [6 votes]: Let $\{u_1,\ldots,u_4\}$ be the dual basis for basis $\{v_1,\ldots, v_4\}$, to be written as co-ordinate (column) vectors relative to the standard basis of $\mathbb{R}^4$. By definition of Dual basis, these sets are biorthogonal, that is, $u_i^T v_j = \delta_{ij}$, for all $i,j$ in $\{1,\ldots,4\}$. -Let $$ -U = \left[\begin{matrix} -u_1 &u_2& u_3& u_4 -\end{matrix}\right] -$$ -and -$$ -V = \left[\begin{matrix} -v_1 &v_2& v_3& v_4 -\end{matrix}\right]. -$$ -Now the biorthogonality equations can be expressed as -$$U^T V = \left[\begin{matrix} -1 & 0 & 0 & 0\\ -0 & 1 & 0 & 0\\ -0 & 0 & 1 & 0\\ -0 & 0 & 0 & 1\\ -\end{matrix}\right] -= I. 
-$$ -So, $U^T=V^{-1}$, which you can easily compute, and the dual basis is formed by the columns of U.<|endoftext|> -TITLE: Sigma algebra and algebra difference -QUESTION [26 upvotes]: An algebra is a collection of subsets closed under finite unions and intersections. -A sigma algebra is a collection closed under countable unions and intersections. - -Whats the difference between finite and countable unions and intersections? Does "countable" mean it implies there can be infinitely many unions and intersections? - -Secondly, I was reading a definition -For an algebra on a set: By De Morgan's law, $A \cap B = (A^c \cup B^c)^c$, thus an algebra is a collection of subsets closed under finite unions and intersections. -What law are they using here to get $A \cap B = (A^c \cup B^c)^c$? I thought de morgan's law was $(A\cap B)^c = A^c \cup B^c$? -Finally, what exactly do they mean by "closed under finite unions and intersections? - -REPLY [8 votes]: Ok, am not a mathematician or a student - BUT Doob - measure theory -had a pretty terrific answer for it. -I was actually looking for the answer and got in here, and obviously could not understand the answer. -So, according to Doob, here is the answer:- -The algebra S is $\sigma$ algebra if, S contains the limits of every monotone sequence in S. Notice that this is the same idea - we use for complete metric space. -So, now I believe I get it. There is a one to one similarity between this measure stuff and metric stuff, and what is a complete space there, becomes sigma algebra here. -Hope I do understand it.<|endoftext|> -TITLE: Show a certain group is contained in a Sylow p-group. -QUESTION [15 upvotes]: Statement: Let G be a group and p a prime that divides $|G|$. Prove that if $K\le G$ such that $|K|$ is a power of p, K is contained in at least one Sylow p-group. -I just started studying Sylow p-groups, so although I'm familiar with Sylow theorems and a couple of corollaries, I don't know how to get started with this problem. Any hint is more than welcome. -PS: I looked for something related here at Math.SE but didn't find anything. Sorry if it's a duplicate. - -REPLY [4 votes]: Here's another approach. You want to show for any $p$-subgroup $K$, $K\subseteq aPa^{-1}$ where $P$ is a Sylow $p$-subgroup and $a\in G$ . Let $X=\{aP \mid a\in G\}$ be the set of left cosets of $P$ in $G$. Now let $K$ act on $X$ in the following manner $k \cdot(aP)=(ka)P$ . -Let $|G|=p^n m$ where $p \nmid m$ , we know that $|P|=p^n$ since $P$ is a Sylow $p$-subgroup. Note that $[G:P]=|X|= \displaystyle\frac {|G|}{|P|}=m$, then $p \nmid |X|$. We also know that if a $p$-group acts on a set $X$ then $p\mid |X|-|X_f|$ where $X_f$ is the fixed set under the action (I can supply the proof if needed). But $p \nmid |X|$ then $p \nmid |X_f|$ (otherwise $p \mid |X|-|X_f|+|X_f|=|X|$ which is a contradiction) then $|X_f|\not= 0$ then there is an element $aP\in X$ such that $k(aP)=P \ \forall\ k\in K$. $kaP=aP \implies a^{-1}(ka) \in P \implies k\in aPa^{-1}$ for all $k\in K$ hence $K \subseteq aPa^{-1}$ . -As a side note you can use this as a lemma to prove the second Sylow theorem.<|endoftext|> -TITLE: What does $\log^{2}{x}$ mean? -QUESTION [31 upvotes]: What is it used for and why doesn't it equal $\log{x^2}$? - -REPLY [3 votes]: Already "log" is ambiguous, implying respectively base 10, e, or 2 in elementary applied mathematics, general mathematics, and information theory or theoretical computer science. 
In the second case, particularly in number theory and theoretical statistics, the iterated logarithm arises naturally, and some authors mean $\log \log x$ by $\log^2 x$. The squared logarithm isn't seen much outside school calculus textbooks. The reverse is true for trigonometric functions: the squared functions are ubiquitous, while the iterated functions are mostly confined to examples and exercises for students. The notation $\sin^2 x$ is illogical (Gauss, in particular, complained about it), but so convenient and established by tradition that we are probably stuck with it.<|endoftext|> -TITLE: Is there an algorithm to determine whether rational matrices generate a finite group? -QUESTION [5 upvotes]: This is inspired by this question. Given finitely many invertible rational $n\times n$ matrices $A_{1},\ldots, A_{k}\in\operatorname{GL}(n,\mathbb{Q})$, is there an algorithm (a practical one) to determine whether the group $\langle A_{1},\ldots, A_{k}\rangle$ that they generate is finite? One could, I suppose, use something like Dimino's algorithm to calculate the closure and stop when the size exceeds the maximum order possible (which is the order $2^{n}n!$ of the group of signed permutation matrices, except for some small exceptions, if I recall correctly) but that seems impractical. Is there something better? - -REPLY [4 votes]: There is such an algorithm, and it has allegedly been implemented in GAP and Magma. The only reference I have found is: -László Babai, Robert Beals, and Daniel Rockmore. Deciding finiteness of matrix groups in deterministic polynomial time. In Proc. ISSAC'93 (Internat. Symp. on Symbolic and Algebraic Computation), Kiev 1993, pages 117-126. ACM Press, 1993. -Unfortunately I could not find any open access version of this paper, and I have -not seen it myself yet.<|endoftext|> -TITLE: Can anybody recommend me a topology textbook? -QUESTION [13 upvotes]: Possible Duplicate: -choosing a topology text -Introductory book on Topology - -I'm a graduate student in Math. But I never learnt Topology during my undergraduate study. Next semester, I am going to take Differential Geometry. I assume this course would require a background of Topology. So I would like to take advantage of this summer and learn some topology myself. -I don't need to become an expert in Topology. All I need is that after this summer, my topology knowledge will be enough for my Differential Geometry course. -So can somebody please recommend me a textbook? I'd be really grateful! - -REPLY [8 votes]: Seebach and Steen's book Counterexamples in Topology is not a book you should try to learn topology from. But as a supplemental book, it is a lot of fun, and very useful. Munkres says in introduction of his book that he does not want to get bogged down in a lot of weird counterexamples, and indeed you don't want to get bogged down in them. But a lot of topology is about weird counterexamples. (What is the difference between connected and path-connected? What is the difference between compact, paracompact, and pseudocompact?) Browsing through Counterexamples in Topology will be enlightening, especially if you are using Munkres, who tries hard to avoid weird counterexamples. - -REPLY [6 votes]: I entered my graduate general topology course with no previous background in the field (save what I knew about the real line). Despite this, I had great success with Stephen Willard's General Topology.<|endoftext|> -TITLE: What is the difference between $\omega$ and $\mathbb{N}$? 
-QUESTION [6 upvotes]: What is the difference between $\omega$ and $\mathbb{N}$? -I know that $\omega$ is the "natural ordering" of $\mathbb{N}$. And I know that $\mathbb{N}$ is the set of natural numbers (order doesn't matter?). And so, $\omega$ is a well-ordered set? an ordinal number? and $\mathbb{N}$ is an un-ordered set? -Is this right, is there anything else? -A little context: I'm wondering why people here have been telling me that a set $A$ is countable iff there exists a bijection between $A$ and $\omega$, as opposed to $A$ and $\mathbb{N}$. Does it make a difference? -Thanks. - -REPLY [6 votes]: Outside of set theory $\mathbb N$ is agreed to be the standard model of the Peano Axioms. Indeed this is a countable set. -When approaching foundational set theory (which I am now going to assume is ZFC), one prefers to avoid referencing more theories. In particular theories which we will later interpret within our universe. -On the other hand, the ordinal $\omega$ is a very concrete set in ZFC. It means that if I write $\omega$ I always mean one very concrete set. Of course that $\omega$, along with its natural order and the ordinal arithmetics is a model of the Peano Axioms, even the second-order theory. -Let us see why I take this as important (at least when talking about axiomatic set theory, in naive set theory I will usually let go of this). We often think of the following chain of inclusions: -$$\mathbb N\subseteq\mathbb Z\subseteq\mathbb Q\subseteq\mathbb R\subseteq\mathbb C$$ -On the other hand we think of $\mathbb N$ as the atomic set from which we start working, $\mathbb Z$ is created by an equivalence relation on $\mathbb N$; later $\mathbb Q$ is defined by an equivalence relation over $\mathbb Z$; then $\mathbb R$ is defined by Dedekind cuts (or another equivalence relation); and lastly $\mathbb C$ is again defined by an equivalence relation. -How can we say that $\mathbb N\subseteq\mathbb C$? What we mean is that there is a very natural and canonical embedding of $\mathbb N$ (and all the other levels of the construction) which we can identify as $\mathbb N$ or $\mathbb R$, etc. In many places in mathematics it is enough to identify things up to isomorphism. -Note, however that it is still not the same set. In fact the result of $\mathbb C$ as a set will vary greatly on the choices we made along the way. -What about $\omega$? Well, that is always the smallest set such that $\varnothing\in\omega$ and if $x\in\omega$ then $x\cup\{x\}\in\omega$. Very concrete indeed. -I also find that this distinction helps to somewhat defuse the "how can the continuum hypothesis be independent of ZFC?" question, because $\mathbb N$ is an extremely concrete notion in mathematics, and people see it in a very concrete way. Of course it's not a great solution and it doesn't mean people accept the independence of the cardinality of the power set of $\omega$ instead, it's just easier. - -To Read More: - -Is there an absolute notion of the infinite? -In set theory, how are real numbers represented as sets?<|endoftext|> -TITLE: Question about Riemann integral and total variation -QUESTION [17 upvotes]: Let $g$ be Riemann integrable on $[a,b]$, $f(x)=\int_a^x g(t)dt $ for $x \in[a,b]$. -Can I show that the total variation of $f$ is equal to $\int_a^b |g(x)| dx $? - -REPLY [20 votes]: If $g$ is non-negative, $f$ is non-decreasing and the total variation is $f(b)-f(a)$ which coincides with $\int_a^b{|g(x)|}dx$, so the theorem is true. 
-For arbitrary $g$, write $g=g^+-g^-$, with at least one of $g^+(x)$ and $g^-(x)$ equal to 0. -Fix $\varepsilon>0$. There is a mesh $\delta^+>0$ such that for all partitions $a=x_0<\dots0$, then $g^+$ is always non-zero on this interval and $g^-$ must be identically zero. Then -$$\begin{aligned} -|f(x_i)-f(x_{i-1})| =&\int_I{g^+(x)}dx\\ -\ge&(x_i-x_{i-1})\inf g^+([x_{i-1},x_i])\\ -=&(x_i-x_{i-1})\inf g^+([x_{i-1},x_i]) + (x_i-x_{i-1})\inf g^-([x_{i-1},x_i]) -\end{aligned}$$ -The bound holds similarly when $\inf g^-(I)>0$. Finally when $\inf g^+(I)=\inf g^-(I)=0$, -$$\begin{aligned} -|f(x_i)-f(x_{i-1})| \ge&0\\ -=&(x_i-x_{i-1})\inf g^+([x_{i-1},x_i]) + (x_i-x_{i-1})\inf g^-([x_{i-1},x_i]) -\end{aligned}$$ -So we can write -$$V\ge \sum_{i=1}^n (x_i-x_{i-1})\inf g^+([x_{i-1},x_i]) + (x_i-x_{i-1})\inf g^-([x_{i-1},x_i])$$ -and because the partition is finer than $\delta$: -$$V \ge \left(\int_a^b{g^+(x)}dx - \varepsilon/2\right)+\left(\int_a^b{g^-(x)}dx - \varepsilon/2\right)$$ -that is -$$V \ge \int_a^b{|g(x)|}dx - \varepsilon$$ -We also have the obvious upper bound -$$V=\sum_{i=1}^n \left|\int_{x_{i-1}}^{x_i} {g(x)} dx\right|\le \sum_{i=1}^n \int_{x_{i-1}}^{x_i} {|g(x)|} dx = \int_a^b{|g(x)|}dx$$ -Since this holds for all $\varepsilon$, the total variation (the upper bound of the total variation over all partitions) is precisely $\int_a^b{|g(x)|}dx$.<|endoftext|> -TITLE: Isometry in $\mathbb{R}^n$ -QUESTION [6 upvotes]: I'm trying to prove that if $f\colon\mathbb{R}^n \to \mathbb{R}^n$ is a $\mathcal{C}^1$ mapping such that $f'(x)$ is a (linear) isometry for every $x \in \mathbb{R}^n$, then $f$ is an isometry. By an application of inverse mapping theorem and mean value theorem, we have that $|f(x) - f(y)| = |x-y|$ as long as $x$ and $y$ are sufficiently close. How to extend this to the whole space? - -REPLY [5 votes]: Edit -Here's a much simpler argument. Let $X\subseteq\mathbb{R}^n\times\mathbb{R}^n$ with $$X = \{(x,y)\in\mathbb{R}^n\times \mathbb{R}^n|d(x,y) = d(f(x),f(y)) \}.$$ Note that $(x,x)\in X$ for any $x$, so $X\neq \emptyset$. Your observation is equivalent to the statement that $X$ is open. To see $X$ is closed, note that if $g:\mathbb{R}^n\times\mathbb{R}^n\rightarrow\mathbb{R}$ with $g(x,y) = d(x,y) - d(f(x),g(y))$, then $g$ is continuous since $d$ and $f$ are, and $X = g^{-1}(0)$. Hence, $X$ is a nonempty clopen subset of $\mathbb{R}^n\times\mathbb{R}^n$, so $X = \mathbb{R}^n\times\mathbb{R}^n$, so $f$ is an isometry. -(End edit) -Here's one approach, borrowed from Riemannian geometry. -Let $\gamma:\mathbb{R}\rightarrow\mathbb{R}^n$ be any straight line paramaterized by arclength, meaning that $\|\gamma(t)-\gamma(s)\| = |t-s|$ for any $t$ and $s$. We will show that $f\circ \gamma$ is also a straight line parametrized by arclength. -Believing this for a second, for $x$ and $y$ in $\mathbb{R}^n$, if $\gamma$ is chosen to be the line going through $x$ and $y$ with $\gamma(t) = x$ and $\gamma(s) = y$, then we have \begin{align*} d(x,y) &= \|\gamma(t)-\gamma(s)\|\\\ &= |t-s|\\\ &= \|f(\gamma(t))-f(\gamma(s))\| \\\ &= d(f(x),f(y)), \end{align*} establishing what we want. -So, why is $f\circ \gamma$ a straight line parameterized by arclength? This follows from your observation that $\|f(x)-f(y)\| = \|x-y\|$ for $x$ and $y$ close together. More specifically, looking at the point $\gamma(t)$, we know that for $s$ close to $t$ (and therefore $\gamma(s)$ close to $\gamma(t)$), that $\|f(\gamma(t)) - f(\gamma(s))\| = \|\gamma(t)-\gamma(s)\|$. 
This implies that for the line segment of points near $\gamma(t)$, that $f($segment$)$ is another line segment, parametrized by arclength. -Since $\mathbb{R}$ is connected, so is $f(\gamma(\mathbb{R}^n))$. This implies that $f(\gamma(\mathbb{R}^n))$ is a union of line segments end point to end point where each segment is parameterized by arclength. This is a straight line iff there are no corners. But if $f(\gamma(t_0))$ is a corner, then $f$ brings points near $\gamma(t_0)$, but on either side of it, closer together, contradicting your earlier observation that locally $f$ preserves distance.<|endoftext|> -TITLE: Is this equivalent to bounded variation? -QUESTION [6 upvotes]: Let $\Omega \subset \mathbb{R}^n$ be open and bounded with smooth boundary. For $f \in L^1(\Omega)$, define -$$ \|D_1 f\|_M(\Omega) =\inf\left\{\liminf_{k\to\infty}\int_\Omega |\nabla f_k|\,dx \mid f_k \to f \text{ in } L^1(\Omega),\ f_k \in \text{ Lip }(\Omega)\right\}. $$ Here $\text{Lip}(\Omega)$ is the set of Lipschitz functions on $\Omega$. Note that by Rademacher's Theorem, for $f \in \text{Lip}(\Omega)$, $\nabla f$ exists Lebesgue-a.e. My question is, is $\|D_1 f\|_M(\Omega)$ the same as $\int_\Omega |Df|$ in general? I have a feeling the answer is ''no'', because if it is ''yes'', people would probably use this as the definition of bounded variation instead of the usual definition, which I find more complicated. - -REPLY [2 votes]: Yes, this is another way to introduce the BV norm, sometimes called Miranda's definition. People do use it, but it does not mean the distributional definition can be forgotten. It's not a bad thing to have two or more definition of the same class. For example, they might generalize in different ways when we move beyond Euclidean spaces. This dissertation is relevant.<|endoftext|> -TITLE: A finite graph G is $d$-regular if, and only if, its adjacency matrix has the eigenvalue $λ = d$ -QUESTION [6 upvotes]: Show that a graph $G$ finite with $n$ vertices is $d$-regular if, and only if, the vector with all the coordinates equals to 1 is eigenvetor from eigenvalue $λ = d$ from the adjacency matrix - $A$ from the graph $G$. -The question itself was a little confused for me... - -REPLY [7 votes]: Suppose that $G$ is $d$-regular. Then every vertex of $G$ is adjacent to $d$ other vertices, so each row of its adjacency matrix $A$ will have $d$ $1$’s and $n-d$ $0$’s. Let -$$A=\pmatrix{a_{11}&a_{12}&\dots&a_{1n}\\ -a_{21}&a_{22}&\dots&a_{2n}\\ -\vdots&\vdots&\ddots&\vdots\\ -a_{n1}&a_{n2}&\dots&a_{nn}}\;,$$ -and $\vec v$ be the $n\times 1$ vector whose entries are all $1$: -$$\vec v=\pmatrix{1\\1\\\vdots\\1}\;.$$ -The product $\vec u=A\vec v$ is an $n\times 1$ vector whose $i$-th entry is $$u_i=\sum_{j=1}^na_{ij}\cdot1=a_{i1}+a_{i2}+\ldots+a_{in}\;.$$ Since $d$ of the numbers $a_{i1},\dots,a_{in}$ are $1$ and the rest are $0$, $u_i=d$ for every $i$. Thus, $$\vec u=\pmatrix{d\\d\\\vdots\\d}=d\vec v\;.$$ That is, $A\vec v=d\vec v$, so by definition $\vec v$ is an eigenvector of $A$ for the eigenvalue $d$. -To show the other direction, you can try to reverse this argument; that works. Alternatively, you can assume that $G$ is not $d$-regular and show that $\vec v$ is not an eigenvector for an eigenvalue $d$; that also works and may be even easier. - -REPLY [4 votes]: Funny thing, I just learned this myself this week. It is proposition 3 on page 2 of these excellent lecture notes by Padraic Bartlett. The entire set of lecture notes on spectral graph theory is here. 
-(I promised in 2012 to write up the proof here, but I hereby retract that promise. I would not be able to improve upon Brian Scott's writeup elsewhere in this thread.)<|endoftext|> -TITLE: Example of nonAbelian group with a normal subgroup such that the quotient group is Abelian? -QUESTION [5 upvotes]: Can someone give me an example of a non-Abelian group $G$ with a normal subgroup $H$ such that $G/H$ is Abelian? - -REPLY [16 votes]: Smallest example: take $G=S_3$, $H=\{1,(123),(132)\}$. Then $G/H$ has order $2$, so it is abelian. -(Of course, you can always just take $H=G$; then $G/H$ is trivial, hence abelian; more generally, if $G$ is a group, then there is a smallest normal subgroup $N$ such that $G/N$ is abelian; it's called the "commutator subgroup of $G$", denoted $[G,G]$ or $G'$, and is the subgroup generated by all elements of the form $[x,y] = x^{-1}y^{-1}xy$; $G$ is abelian if and only if $[G,G]=\{1\}$. It is possible that $[G,G]=G$, of course, for example, with $G=A_5$, but since you did not specify that you wanted $H$ to be nontrivial, this subgroup always works and is the smallest one that does)<|endoftext|> -TITLE: Classification of automorphisms of projective space -QUESTION [10 upvotes]: Let $k$ be a field, n a positive integer. -Vakil's notes, 17.4.B: Show that all the automorphisms of the projective scheme $P_k^n$ correspond to $(n+1)\times(n+1)$ invertible matrices over k, modulo scalars. -His hint is to show that $f^\star \mathcal{O}(1) \cong \mathcal{O}(1).$ (f is the automorphism. I don't if $\mathcal{O}(1)$ is the conventional notation; if it's unclear, it's an invertible sheaf over $P_k^n$) I can show what he wants assuming this, but can someone help me find a clean way to show this? - -REPLY [16 votes]: An automorphism of $\mathbb{P}^n_k$ induces an automorphism of the Picard group $\text{Pic}(\mathbb{P}^n_k) \cong \mathbb{Z}$. Such an automorphism must send the generator $\mathcal{O}(1)$ to a generator. Since the only two generators of $\mathbb{Z}$ are $1$ and $-1$, $f^*(\mathcal{O}(1))$ must be $\mathcal{O}(1)$ or $\mathcal{O}(-1)$. But $\mathcal{O}(-1)$ has no nonzero global sections, so it cannot be the pullback of $\mathcal{O}(1)$ (recall that $\mathcal{O}(1)$ pulls back to a line bundle together with $n+1$ global sections which have no common zero). - -REPLY [10 votes]: Well, $f^*(\mathcal{O}(1))$ must be a line bundle on $\mathbb{P}^n$. In fact, $f^*$ gives a group automorphism of $\text{Pic}(\mathbb{P}^n) \cong \mathbb{Z}$, with inverse $(f^{-1})^*$. Thus, $f^*(\mathcal{O}(1))$ must be a generator of $\text{Pic}(\mathbb{P}^n)$, either $\mathcal{O}(1)$ or $\mathcal{O}(-1)$. But $f^*$ is also an automorphism on the space of global sections, again with inverse $(f^{-1})^*$. Since $\mathcal{O}(1)$ has an $(n+1)$-dimensional vector space of global sections, but $\mathcal{O}(-1)$ has no non-zero global sections, it is impossible for $f^*(\mathcal{O}(1))$ to be $\mathcal{O}(-1)$.<|endoftext|> -TITLE: About alternating group $A_4$ -QUESTION [7 upvotes]: This is a simple exercise telling that $A_4$ cannot have a subgroup of order $6$. Here in my way: -Obviously, for any group $G$ and a subgroup $H$ of it with index $2$; we have $∀$$ g\in G$ ,$g^2\in H$. I suppose that $A_4$ has such this subgroup, named $H$, of order 6. Then for any $\sigma\in A_4$; $\sigma^2\in H$. I think maybe the contradiction happens when we enumerate all $\sigma^2$. May I ask if there is another approach for this problem? Thanks. 
- -REPLY [5 votes]: Your approach and many more can be found in this article, where $11$ different proofs are given. - -Michael Brennan. Des Machale. Variations on a Theme: $A_4$ Definitely Has no Subgroup of Order Six!, Mathematics Magazine, Vol. $73$, No. $1$ (2000) JSTOR<|endoftext|> -TITLE: Understanding isometric spaces -QUESTION [9 upvotes]: I have studied that an isometry is a distance-preserving map between metric spaces and two metric spaces $X$ and $Y$ are called isometric if there is a bijective isometry from X to Y. -My questions are related with the understanding of isometric spaces, they are as follows: -Can we say that two isometric spaces are same? If no, in what context they differ? What are the common properties shared by two isometric spaces? -Intuitively what are isometric spaces? -If two spaces are isometric how to find out bijective distance preserving map between them? -Thanks for your help and time. - -REPLY [14 votes]: Homeomorphisms are the maps that preserve all topological properties: from a structural point of view, homeomorphic spaces might as well be identical, though they may have very different underlying sets, and if they’re metrizable, they may carry very different (but equivalent) metrics. Isometries are the analogue for metric spaces, topological spaces carrying a specific metric: they preserve all metric properties, and of course those include the topological properties. Thus, all isometries are homeomorphisms, but the converse is false. -Consider the metric spaces $\langle X,d_X\rangle$ and $\langle Y,d_Y\rangle$ defined as follows: $X=\Bbb N,Y=\Bbb Z$, $$d_X(m,n)=\begin{cases}0,&\text{if }m=n\\1,&\text{if }m\ne n\;,\end{cases}$$ for all $m,n\in X$, and $$d_Y(m,n)=\begin{cases}0,&\text{if }m=n\\1,&\text{if }m\ne n\end{cases}$$ for all $m,n\in Y$. It’s easy to check that $d_X$ and $d_Y$ are metrics on $X$ and $Y$, respectively. -Clearly these are not the same space: they have different underlying sets. However, if $f:X\to Y$ is any bijection1 whatsoever, then $f$ is an isometry between $X$ and $Y$. $\langle X,d_X\rangle$ and $\langle Y,d_Y\rangle$ are structurally identical as metric spaces: if $P$ is any property of metric spaces $-$ not just of metrizable spaces, but of metric spaces with a specific metric $-$ then either $X$ and $Y$ both have $P$, or neither of them has $P$. There is no structural property of metric spaces that distinguishes them. -What I just said about $X$ and $Y$ is true of isometric spaces in general: there is no structural property of metric spaces that distinguishes them. Considered as metric spaces, they are structurally identical, though they may have different underlying sets. -Isometric spaces may even have the same underlying set but different metrics. Consider the following two metrics on $\Bbb N=\{0,1,2,\dots\}$. For any $m,n\in\Bbb N$, -$$d_0(m,n)=\begin{cases} -0,&\text{if }m=n\\\\ -\left|\frac1m-\frac1n\right|,&\text{if }0\ne m\ne n\ne 0\\\\ -\frac1m,&\text{if }n=01\\\\ -1-\frac1m,&\text{if }n=0\text{ and }m>1\\\\ -1-\frac1n,&\text{if }m=0\text{ and }n>1\\\\ -\frac1m,&\text{if }n=1\ne m\\\\ -\frac1n,&\text{if }m=1\ne n\;. -\end{cases}$$ -It’s a good exercise to show that $$f:\Bbb N\to\Bbb N:n\mapsto\begin{cases}n,&\text{if }n>1\\1,&\text{if }n=0\\0,&\text{if }n=1\end{cases}$$ is an isometry between $\langle\Bbb N,d_0\rangle$ and $\langle\Bbb N,d_1\rangle$. (HINT: Both spaces are isometric to the space $\{0\}\cup\left\{\frac1n:n\in\Bbb Z^+\right\}$ with the usual metric.) 
Yet these are clearly not the same space: metric $d_0$ makes $0$ a limit point of the other points, but metric $d_1$ makes $0$ an isolated point. -I don’t know of any general method for finding an isometry between isometric spaces; if you can recognize two spaces as being isometric, you probably already have a good idea of what an isometry between them must look like. - -1 If you want a specific bijection, $$f(n)=\begin{cases}0,&\text{if }n=0\\\\\frac{n}2,&\text{if }n>0\text{ and }n\text{ is even}\\\\-\frac{n+1}2,&\text{if }n\text{ is odd}\end{cases}$$ does the job.<|endoftext|> -TITLE: Can we construct a basis for Hermitian matrices made of positive semidefinite ones? -QUESTION [7 upvotes]: let us consider $n\times n$ hermitian matrices. They form a real space. Now we know that any such matrix $A$ can be written as -$A=A_+-A_-$, -where $A_\pm$ are positive semidefinite matrices. Thus we can say that the (real) linear combination of positive semi-definite matrices spans the space of hermitian matrices. My question is, can we construct any basis for the space of hermitian matrices such that each basis element is a positive semidefinite matrix. please help or refer some literature for it. -ADDED: -seeing the comment of Joriki I have decided to add a few more lines regarding my earlier (failed) approach, in the hope that someone can help me (in completing the line of argument, if possible; or by finding a fault in my argument). I can diagonalise and separate the positive and negative part. now let $A=UDU^*$, where $U$ is an unitary operator and $D$ is the diagonal matrix. This again can be written as $A=UD_1U^*-UD_2U^*$ where $D_i$ are diagonal matrices with all entries $\geq0$. hence, if we take a such a positive diagonal matrices and only consider unitary group action on it, we are going to find all the hermitian operators. in particular, i tried to take diagonal matrix $D_j$ ($j$-th entry $1$, others $0$) and applied unitary group. this method seemed to fail here, as i could not get any meaningful basis out of these actions. - -REPLY [10 votes]: The space of hermitian $n\times n$ matrices is spanned by the $n$ matrices with a single $1$ on the diagonal, the $n(n-1)/2$ matrices with a single pair of $1$s at corresponding off-diagonal elements and the $n(n-1)/2$ matrices with a single pair of $\mathrm i$ and $-\mathrm i$ at corresponding off-diagonal elements. The diagonal matrices are positive semidefinite, and the remaining matrices can be made positive semidefinite by adding $1$ to the two diagonal elements corresponding to the non-zero off-diagonal elements. Since adding one element of a linearly independent set to another doesn't render the set linearly dependent, the result is again a basis of the space of hermitian matrices.<|endoftext|> -TITLE: Does every curve over a number field have infinitely many rational functions of fixed degree -QUESTION [8 upvotes]: Let $X$ be a curve over a number field $K$ of genus $g\geq 2$. Does there exist an integer $d$ such that $X$ has infinitely many rational functions (i.e., finite morphisms $f:X\to \mathbf{P}^1_K$) of degree $d$? -If yes, can we choose/bound $d$ in terms of $K$ and $g$? -Note. This is not the same question as in the title. The answer to the question in the title is in fact negative as the example below shows for $d=2$. -Note: I want rational functions to be really different. So we mod out by the action of the automorphism group of $\mathbf{P}^1_K$. 
That is, $f$ and $\sigma\circ f$ are the same if $\sigma$ is an automorphism of $\mathbf{P}^1_K$. -Example 1. We can not have $d=2$. Hyperelliptic maps are unique. -Question 2. How does the answer to this question change if we replace $K$ by an algebraically closed field $k$? -Example 2. Let $X$ be a general curve of odd genus $g\geq 3$ over an algebraically closed field. Then, it has an infinite number of (really different) gonal morphisms. In fact, this family is one-dimensional. -Idea. I think the question can be reduced to a question on $\mathbf{P}^1_K$. In fact, it suffices to show that there are infinitely many (really different) rational functions on $\mathbf{P}^1_K$ of some degree, say $3$. In fact, once you know this, composing some morphism $f:X\to \mathbf{P}^1_K$ with such a rational function gives infinitely many rational functions of degree $d \leq 3 \deg f$. The only problem is then finding (in a controlled way) a morphism $f:X\to \mathbf{P}^1_K$. This is a hard problem, but let's allow finite base change if necessary... - -REPLY [3 votes]: I think you are looking for rational subextensions of $K(X)$ of index $d$. -First in the field $K(t)$ of rational functions of $\mathbb P^1_K$, there are infinitely many subextensions of index $2$: Consider the $K(t^2+\lambda t+1)$ for $\lambda\in K$. -If $t^2+\lambda t + 1\in K(t^2+\mu t+1)$, then $(\lambda - \mu)t\in K(t^2+\mu t+1)$, hence $\lambda - \mu=0$ because otherwise $t\in K(t^2+\mu t +1)$. Therefore we get infinitely many subextension of index $2$ (this has no much to do with number fields, just need $K$ is infinite). -Now apply your idea and you get a positive answer to your question. I don't understand what you mean by a hard problem. Any inclusion $K(x)\subset K(X)$ corresponds to a unique morphism $X\to \mathbb P^1_K$ of degree $[K(X) : K(x)]$. -For a bound on $d$: first there always exists a morphism $X\to \mathbb P^1_K$ of degree $\le 2g-2$ (use the canonical morphism, recall $g\ge 2$). So a rough upper bound is $2(2g-2)$. -Edit When are there infinitely many gonal morphisms (i.e. morphisms to $\mathbb P^1$ of smallest degree), including the case $g=1$ ? Let $f : X\to \mathbb P^1_K$ be given by an $f\in L(D)$ for some effective divisor $D$ of degree $d$ with $\deg f=d$. Suppose $D$ is base point free and if $\dim L(D)\ge 3$, then there are infinitely many gonal morphisms if $K$ is infinite. -Proof. Write $D=\sum_{1\le i\le n} a_i[x_i]$ with $a_i>0$. As $D$ is base point free, $L(D)\ne L(D-[x_i])$. So there exists $f_1\in L(D)\setminus \cup_i L(D-[x_i])$ because $K$ is inifnite. Let $f_2\in L(D) \setminus (K+Kf_1)$ (recall that $\dim L(D)\ge 3$). -Let's show that -$$K(f_2)\ne K(f_1).$$ -Otherwise $f_2=(\alpha f_1+ \beta)/(\gamma f_2+ \delta)$ with $\alpha, \beta, \gamma, \delta\in K$ and $\gamma\ne 0$. So we can write -$$af_2+b=1/(\gamma f_1+c), \quad a, b, c\in K, \ a\ne 0.$$ -So $1/(\gamma f_1+c)\in L(D)$. As -$$(\gamma f_1+c)_{\infty}=(f_1)_{\infty}=D,$$ -this is impossible. Let it is easy to see that $K(f_1+\lambda f_2)$, when $\lambda$ runs through $K$, form an infinite family of pairwise distinct rational subextension of $K(X)$ of index $d$. -For genus $1$ curves, the gonality is the maximum of $2$ and the index of the curve. 
If the gonality $\gamma$ is at least $3$, the result above plus Riemann-Roch show there are infinitely many morphisms of degree $\gamma$ from $X$ to the projective line.<|endoftext|> -TITLE: Principal prime ideals are minimal among prime ideals in a UFD -QUESTION [6 upvotes]: Fulton, "Algebraic Curves," Exercise 1.39(a): - -Let $R$ be a UFD, and $P = (t)$ a principal, proper, prime ideal. - Show there is no prime ideal $Q$ with $0 \subset Q \subset P$. - -After being stumped for some time, I came up with the following proof while attending a concert (my ears are still ringing an hour later): - -Suppose there is such an ideal. Take some $0 \neq q \in Q$. Since $Q \subset P$, $q = rt$ for some $r \in R$. Since $Q$ is prime, and $t \not \in Q$ by assumption, we must have $r \in Q$. Applying the preceding to $r$ instead of $q$, we have $r = r't$ for some $r' \in Q$. So $q = rt = (r't)t = r't^2$. Proceeding along these lines, $q$ is divisible by $t^n$ for all $n \geq 0$. Notice that $t$ is irreducible, since $P = (t)$ is prime. Now, what possible factorization into irreducibles could $q$ have? - -So far as I can tell, this seems to work, but it also seems exceedingly silly. What's the "right" way to prove this? - -REPLY [11 votes]: Yes, your proof is fine. Here is another way to view it. In a UFD every prime ideal $\rm\:P\:$ may be generated by primes. Indeed, $\rm\:0\ne p_1\cdots p_n\in P\:\Rightarrow\:$ some $\rm\: p_i\in P,\:$ so we may replace each generator by one of its prime factors. Hence if prime $\rm\:(p)\supseteq Q = (p_1,p_2,\ldots)\ne 0\ne p_i\:$ then $\rm\:(p)\supseteq (p_i)\:$ $\Rightarrow$ $\rm\:(p) = (p_i)\:$ $\Rightarrow$ $\rm\: Q = (p).\,$ QED $\, $ A converse is a famous theorem of Kaplansky. -Theorem $\ $ TFAE for an integral domain D -$\rm(1)\ \ \:D\:$ is a UFD (Unique Factorization Domain) -$\rm(2)\ \ $ In $\rm\:D\:$ every prime ideal is generated by primes. -$\rm(3)\ \ $ In $\rm\:D\:$ every prime ideal $\ne 0$ contains a prime $\ne 0.$ -Proof $\ (1 \Rightarrow 2)\ $ Proved above. $\rm\ (2\Rightarrow 3)\ $ Clear. -$(3 \Rightarrow 1)\ $ The set $\rm\:S\subseteq D\:$ of products of units and nonzero primes forms a saturated monoid, i.e. $\rm\:S\:$ is closed under products (clear) and under divisors, since the only nonunit divisors of a prime product are subproducts (up to associates), due to uniqueness of factorization of prime products. Since $\rm\:S\:$ is a saturated monoid, its complement $\rm\:\bar S\:$ is a union of prime ideals. So $\rm\:\bar S = \{0\}\:$ (else it contains some prime ideal $\rm\:P\ne 0\:$ which contains a prime $\rm\:0\ne p\in P\subseteq \bar S,\:$ contra $\rm\:p\in S).\:$ Hence every $\rm\:0\ne d\in D\:$ lies in $\rm S,\:$ i.e. $\rm\:d\:$ is a unit or prime product. Thus $\rm\:D\:$ is a UFD. $\ $ QED -Remark $\ $ The essence of the proof is clearer when one learns localization. Then ones sees from general principles that prime ideals in $\rm\bar S\:$ correspond to maximal ideals of the localization $\rm\:S^{-1} D.\:$ -In fact one can view this as a special case of how UFDs behave under localization. Generally the localization of a UFD remains a UFD. Indeed, such localizations are characterized by the sets of primes that survive (don't become units) in the localizations. -The converse is also true for atomic domains, i.e. domains where nonzero nonunits factor into atoms (irreducibles). 
Namely, if $\rm\:D\:$ is an atomic domain and $\rm\:S\:$ is a saturated submonoid of $\rm\:D^*$ generated by primes, then $\rm\: D_S$ UFD $\rm\:\Rightarrow\:D$ UFD $\:\!$ (popularized by Nagata). This yields a slick proof of $\rm\:D$ UFD $\rm\Rightarrow D[x]$ UFD, viz. $\rm\:S = D^*\:$ is generated by primes, so localizing yields the UFD $\rm\:F[x],\:$ $\rm\:F =\:$ fraction field of $\rm\:D.\:$ Therefore $\rm\:D[x]\:$ is a UFD, by Nagata. This yields a more conceptual, more structural view of the essence of the matter (vs. traditional argument by Gauss' Lemma).<|endoftext|> -TITLE: Integrals conditioning relations -QUESTION [5 upvotes]: Let be $f:[0,1]\to\mathbb R$ a continuous function such that: -$$\int_0^1 f(x) dx = 0 $$ -Prove that there exists $c\in(0,1)$ such that: -$$\int_0^c xf(x) dx = 0 $$ -I tried to go the integration by parts way but got stuck. I'm also curious if the converse is true. - -REPLY [5 votes]: Suppose not. -Then for all $c \in (0,1)$, we can say without loss of generality that $\displaystyle \int_0^c xf(x) > 0$. Let $\displaystyle F(t) := \int_0^t f(x)dx$. Then integrating by parts, as you suggested, yields -$$ 0 < \int_0^t xf(x) dx = tF(t) - \int_0^t F(x)dx \quad \forall t \in (0,1) \tag{1}$$ -One can justify that the limit as $t \to 1$ exists, and going to that limit in $(1)$ we get that -$$\int_0^1 F(x)dx \leq 0 \tag{2}$$ -Now, we get a little witty. Define $G: [0,1] \to \mathbb{R}$ by $\displaystyle G: = \frac{\int_0^tF(x)dx}{t}$ if $t \neq 0$, and $G:= 0$ if $t = 0$. Then you can check that $G$ is differentiable, and $\displaystyle G' = \frac{tF(t) - \int_0^t F(x)dx}{t^2} > 0$. -Thus $G$ is increasing on $(0,1)$, and thus nondecreasing on $[0,1]$. And so, as $G(0) = 0$, we have that $G(t) > 0$ if $t > 0$. But then $\displaystyle \int_0^1 F(x)dx > 0$, contradicting $(2)$. -Thus there exists such a $c$. -With respect to the converse - it's very much false. If you let $f$ be a smooth function supported (and positive) on $[1/2, 1]$, then choosing any $c$ in $(0, 1/2)$ will force $\displaystyle \int_0^c xf(x) dx = 0$, although $\int_0^1 f(x)dx = \int_{1/2}^1 f(x)dx > 0$ as I chose $f$ postive.<|endoftext|> -TITLE: Non-vanishing differential form: what does it mean? -QUESTION [9 upvotes]: A $1$-form $\alpha$ over a smooth manifold is non vanishing if for every $p\in M$, $\alpha_p\neq 0$. -But $\alpha_p$ is a linear map $T_p M\to \mathbb R$ hence $\alpha_p(0)=0$. So confusion arises and the precise question is: -What does non vanishing mean for differential forms? -And what does $\alpha\wedge..\wedge\alpha\neq 0$ mean? - -REPLY [5 votes]: Non vanishing (at, say, $p$) means that there is a vector $v$ in $T_pM$ such that $\alpha_p(v)\neq 0$. Similarly for the $k$-form, it means that there is a set of $k$ vectors such the form is nonzero if evaluated on these vectors.<|endoftext|> -TITLE: Understanding equivalent metric spaces -QUESTION [11 upvotes]: I have studied following definitions of equivalent metric spaces. -Two metrics on a set $X$ are said to be equivalent if and only if they induce the same topology on $X$. -1: Two metrices $d_1$ and $d_2$ in metric space $X$ are equivalent if $d_1(x_n,x_0)\rightarrow 0 $ iff $d_2(x_n,x_0)\rightarrow 0 $. -2: We say that d1 and d2 are equivalent iff there exist positive constants $c$ and $C$ -such that -$c d_1(x, y)\leq d_2(x, y)\leq Cd_1(x, y)$ for all $x, y \in X$. -My questions are as follows: -Is there any other definition of equivalent metrics? I need a proof of how these conditions are equivalent? 
-Is there any connection between homeomorphism and equivalence of metric spaces? -What are the common properties shared by equivalent metric spaces? -I am very much confused with this. Quite often I found myself struggling with what definition should I apply to show the equivalence of given metric spaces. I need help to clear my doubts. -Thanks a lot for helping me - -REPLY [5 votes]: Two definitions on equivalence are not the same. Definition 2 implies definition 1, but not vice versa. -For instance, Let $X = (0, 1]$ and $d_1(x, y) = |x - y|$ and $d_2(x, y) = |\frac 1 x - \frac 1 y|$. Then $d_1$ is equivalent to $d_2$ under Def-1, but not under Def-2. Indeed, one can see $X$ is complete under $d_1$, but not $d_2$.<|endoftext|> -TITLE: Generating Elements of Galois Group -QUESTION [12 upvotes]: I am trying to prove the following: - -Let $K=\mathbb{Q}(\sqrt{p_1},\ldots,\sqrt{p_n})$. Show that $K/\mathbb{Q}$ is Galois with Galois group $(\mathbb{Z}/2\mathbb{Z})^n$. - -I have attached my proof below not as a means of verifying it, only as a means of making my question as clear as possible. -My question is: - -Is it correct to say that that $\tau_i$'s generate $\operatorname{Gal}(K/\mathbb{Q})$, and therefore that we can decompose elements of $\operatorname{Gal}(K/\mathbb{Q})$ into compositions of these functions. - -My proof is as follows: -To show $K/\mathbb{Q}$ is Galois we need to show it is the splitting field of a separable polynomial. The field $\mathbb{Q}(\sqrt{p_1},\ldots,\sqrt{p_n})$ is obtained by an $n$ step process where step $i$ consists of adjoining $\sqrt{p_i}$ to the field $Q_{i-1}=\mathbb{Q}(\sqrt{p_1},\ldots,\sqrt{p_{i-1}})$ and $Q_0$ is defined to be $\mathbb{Q}$. This gives the following tower of extensions -$$\mathbb{Q}\subset \mathbb{Q}(\sqrt{p_1})\subset \mathbb{Q}(\sqrt{p_1},\sqrt{p_2})\subset \cdots\subset \mathbb{Q}(\sqrt{p_1},\ldots,\sqrt{p_n}).$$ -Note that $Q_i\neq Q_{i+1}$, because $\sqrt{p_{i+1}} \not \in Q_{i}$ for all $i$; the fact the $\sqrt{p_1},\ldots,\sqrt{p_n}$ are distinct primes insures this. Since none of the $p_j$ are squares (because they are prime) we know $\sqrt{p_j} \not \in \mathbb{Q}$. Furthermore, the distinctness of the $p_j$ insure that $\sqrt{p_i} \neq \sqrt{p_j}$ for all $i \neq j$. -The minimal polynomial of the extension $Q_{i+1}/Q_i$ is $f_{i+1}(X)=X^2-p_{i+1}$, which is separable with roots $\pm\sqrt{p_{i+1}}$. From this we see that the polynomial we are concerned with is -$$f(X)=f_1(X)\cdots f_n(X)=(X^2-p_1)\cdots(X^2-p_n),$$ -also a separable polynomial. $K$ is a splitting field for this separable polynomial over $\mathbb{Q}$, and thus $K/\mathbb{Q}$ is Galois. From the tower of fields above we see they $[K:\mathbb{Q}]=2^n$, since the degree of each $Q_i/Q_{i-1}$ is 2. From this we can deduce that $\operatorname{Gal}(K/\mathbb{Q})=2^n$. As with before, $\operatorname{Gal}(K/\mathbb{Q})$ is determined by its action on the roots of $f(X)$, which are -$$\{\pm \sqrt{p_1},\ldots,\pm \sqrt{p_n}\},$$ -or, more correctly, on the roots of the minimal polynomial of each of these roots. Consider the map -\begin{equation*} - \tau_{(a_1,\ldots,a_n)}= \begin{cases} - \sqrt{p_1}& \mapsto \pm \sqrt{p_1},\\ - \sqrt{p_2}& \mapsto \pm \sqrt{p_2},\\ - & \vdots\\ - \sqrt{p_n}& \mapsto \pm \sqrt{p_n}, - \end{cases} -\end{equation*} -where $a_i\in \{0,1\}$ and $a_i=0$ means $\sqrt{p_i} \mapsto \sqrt{p_i}$ and $a_i=1$ means $\sqrt{p_i} \mapsto \sqrt{p_i}$. 
-This gives $2n$ possible maps which take roots of the $f_i(X)$ to roots of $f_i(X)$ (note that these are all such maps as well). Since $|\operatorname{Gal}(K/\mathbb{Q})|=2^n$ all of these maps must be automorphisms of $K$. -What remains to be seen is that $\operatorname{Gal}(K/\mathbb{Q}) \cong (\mathbb{Z}/2\mathbb{Z})^n$. We can decompose $\tau_{(a_1,\ldots,a_n)}$ into a composition of the maps $\tau_i:K\rightarrow K$ defined by -\begin{equation*} -\tau_i = \begin{cases} -\tau_i\left(\sqrt{p_j}\right) = -\sqrt{p_i} & \text{for } i=j,\\ -\tau_i\left(\sqrt{p_j}\right) = \sqrt{p_j} & \text{for } i\neq j, -\end{cases} -\end{equation*} -where $\tau_{(a_1,\ldots,a_n)}=\tau_1^{a_1}\circ \cdots \circ \tau_n^{a_n}$. We see that every element of $\operatorname{Gal}(K/\mathbb{Q})$ can be obtained through the composition of $\tau_i$'s and therefore $\{\tau_1,\ldots,\tau_n\}$ generates $\operatorname{Gal}(K/\mathbb{Q})$. -Let $\tau \in \operatorname{Gal}(K/\mathbb{Q})$ such that $\tau_1^{a_1}\circ \cdots \circ \tau_n^{a_n}$. Define the map $\chi: \operatorname{Gal}(K/\mathbb{Q}) \rightarrow \{\pm 1\}^n$ by first defining the action of $\chi$ on each $\tau_i$ by -$$\chi(\tau_i) \mapsto \tau_i(\sqrt{p_i})/\sqrt{p_i},$$ -and then defining -\begin{align*} -\chi(\tau)&=\chi(\tau_1^{a_1}\circ \cdots \circ \tau_n^{a_n})\\ -&=(\chi(\tau_1^{a_1}),\ldots,\chi(\tau_n^{a_n})). -\end{align*} -Once we show this map is injective we are done, as any injective map between finite sets of the same cardinality is surjective. Assume $(a_1,\ldots,a_n)=(b_1,\ldots,b_n)$. Then clearly $\tau_1^{a_1}\circ \cdots\circ \tau_n^{a_n}=\tau_1^{b_1}\circ \cdots \circ \tau_n^{b_n}$, since $a_i=b_i$ for all $i$. -Now, $\{\pm 1\}$ is isomorphic to $\mathbb{Z}/2\mathbb{Z}$, as the map $\varphi: \{\pm 1\} \rightarrow \mathbb{Z}/2\mathbb{Z}$ by $\varphi(-1)=1$ and $\varphi(1)=0$ is an injective homormorphism. Therefore -$$\operatorname{Gal}(K/\mathbb{Q})\cong (\mathbb{Z}/2\mathbb{Z})^n.$$ -Attempted proof of Martin Brandenburg's Claim: -Claim: For all $i1$ (for example distinct primes). Then I claim that $K_r := \mathbb{Q}(\sqrt{m_1},\dotsc,\sqrt{m_r})$ has degree $2^r$ over $\mathbb{Q}$. -It is enough to prove $\sqrt{m_{i+1}} \notin K_i$ for all $i < r$. But it turns out one should show something stronger: For $i -TITLE: Contractible vs. Deformation retract to a point. -QUESTION [17 upvotes]: I have a quick question about the difference between the two concepts in the title. -The question is basically ex.6 (b) in Hatcher's book titled "Algebraic Topology". -Let $X$ be the subspace of $R^2$ consisting of the horizontal segment $[0,1] \times \{0\}$ together with the vertical segments $\{r\} \times [0,1-r]$ for $r$ a rational number in $[0,1]$. Now let $Y$ be the space that is the union of an infinite number of copies of $X$ arranged in a zig zag formation. See below - - -Now my question is why can't one deformation retract $Y$ to a point in the darkened zig zag line? Surely the darkened zig zag line is homeomorphic to $\mathbb{R}$, which is deformation retractable to a point, and each of the vertical lines of each copy of $X$ deformation retracts to its segment of the zig zag line! I must be missing something here as one has to prove that $Y$ does not deformation retract to any point! 
- -REPLY [18 votes]: In exercise 5, that's the one before this one, you showed that "if a space $X$ deformation retracts to a point $x \in X$, then for each neighbourhood $U$ of $x$ in $X$ there exists a neighborhood $V \subset U$ of $x$ such that the inclusion map $V \hookrightarrow U$ is nullhomotopic." -Pick a point $z$ in $Z$ (the zig zag line). Then you can find a neighbourhood $N$ of $z$ that is disconnected and such that every neighbourhood $U$ with $z \in U \subset N$ is also disconnected. Then you apply 5.<|endoftext|> -TITLE: An inequality with positive real numbers and their partial sums -QUESTION [9 upvotes]: Let be $x_1, x_2, \ldots , x_n$ strictly positive real numbers. Prove that the following inequality holds: -$$\frac1{1+x_1}+\frac1{1+x_1+x_2}+\cdots+\frac1{1+x_1+x_2+\cdots+x_n} < \sqrt{\frac1{x_{1}}+\frac1{x_2}+\cdots+\frac1{x_n}}$$ -How may I tackle this inequality? I tried AM-GM, but it seems of no help. - -REPLY [11 votes]: Let $s_j:=\sum_{k=1}^jx_j$. We have by Cauchy-Schwarz inequality that -$$\sum_{j=1}^n\frac 1{1+s_j}\leq \sqrt{\sum_{j=1}^n\frac{x_j}{(1+s_j)^2}}\cdot \sqrt{\sum_{j=1}^n\frac 1{x_j}},$$ -so it remains to show that $\sum_{j=1}^n\frac{x_j}{(1+s_j)^2}<1$. We have -$$\frac{x_j}{(1+s_j)^2}\leq \frac{x_j}{(1+s_j)(1+s_{j-1})}=\frac{(1+s_j)-(1+s_{j-1})}{(1+s_j)(1+s_{j-1})}=\frac 1{1+s_{j-1}}-\frac 1{1+s_j}$$ -hence -$$\sum_{j=1}^n\frac{x_j}{(1+s_j)^2}\leq 1-\frac 1{1+s_n}<1.$$<|endoftext|> -TITLE: Convergence or divergence of $ u_{n}=\left(\sum\limits_{k=1}^n e^{\frac{1}{k+n}}\right)-n$ -QUESTION [5 upvotes]: $$ u_{n}=-n+\sum_{k=1}^n e^{\frac{1}{k+n}}$$ -$$ u_{n+1}-u_{n}=-1+\exp\left(\frac{1}{2n+2}\right)+\exp\left(\frac{1}{2n+1}\right)-\exp\left(\frac{1}{n+1}\right)=O(1/n^3)$$ -So $ \sum u_{n+1}-u_n$ and $u_n$ converge. -$ u_{100000} \approx 0.69 $ -The limit seems to be $\ln2$ - -REPLY [3 votes]: Everything boils down to the n times use of the following elementary limit, namely: -$$\lim_{x\to0}\frac{e^{x}-1}{x}=1$$ -Consequently, by expanding the sum we have that: -$$\lim_{n\to\infty}\frac{e^\frac{1}{n+1}-1}{\frac{1}{n+1}} \frac{1}{n+1} + \lim_{n\to\infty}\frac{e^\frac{1}{n+2}-1}{\frac{1}{n+2}} \frac{1}{n+2}+ \cdots = \lim_{n\to\infty}\frac{1}{n+1}+\frac{1}{n+2}+\cdots+\frac{1}{2n} = $$ -$$\lim_{n\to\infty}{\gamma}+\ln{2n}-{\gamma}-\ln{n}= \ln{2}.$$ -Also notice that i've just applied another well-known limit: $$\lim_{n\to\infty} 1+\frac1{2}+\cdots+\frac{1}{n}-\ln{n}={\gamma}$$ $\tag{$\gamma$ is Euler-Mascheroni constant}$ -The proof is complete.<|endoftext|> -TITLE: algebraic curves with negative (arithmetic) genus? -QUESTION [5 upvotes]: By an algebraic curve I mean a projective reduced connected scheme of pure dimension 1 over a field. -My question is: Is there a lower bound for the arithmetic genus of such curves? If the answer is no, is there a general method to construct such curves with arbitrary negative genus? -Any reference is welcome, thank you! -Notice that, as pointed out by Andrea in the comment, when the curve is geometrically connected, the arithmetic genus is always non-negative. - -REPLY [5 votes]: Let $K / k$ be finite extension of fields of degree $d$. Let $X$ be smooth geometrically connected projective curve over $K$ of genus $g$. The morphism $\mathrm{Spec} K \to \mathrm{Spec} k$ is projective (Exercise 3.3.22 of Liu's Algebraic geometry and arithmetic curves), therefore $X$ is projective over $k$. Then -$$ -p_{a,k}(X) = 1- \chi_k(X) = 1 - d \cdot \chi_K(X) = 1 - d (1-g). -$$ -You can take $g = 0$. -EDIT. 
<|endoftext|>
-TITLE: algebraic curves with negative (arithmetic) genus?
-QUESTION [5 upvotes]: By an algebraic curve I mean a projective reduced connected scheme of pure dimension 1 over a field.
-My question is: Is there a lower bound for the arithmetic genus of such curves? If the answer is no, is there a general method to construct such curves with arbitrarily negative genus?
-Any reference is welcome, thank you!
-Notice that, as pointed out by Andrea in the comment, when the curve is geometrically connected, the arithmetic genus is always non-negative.
-
-REPLY [5 votes]: Let $K / k$ be a finite extension of fields of degree $d$. Let $X$ be a smooth geometrically connected projective curve over $K$ of genus $g$. The morphism $\mathrm{Spec}\, K \to \mathrm{Spec}\, k$ is projective (Exercise 3.3.22 of Liu's Algebraic Geometry and Arithmetic Curves), therefore $X$ is projective over $k$. Then
-$$
-p_{a,k}(X) = 1- \chi_k(X) = 1 - d \cdot \chi_K(X) = 1 - d (1-g).
-$$
-You can take $g = 0$.
-EDIT. More concretely, let $k$ be a field, let $f \in k[t]$ be a monic irreducible polynomial of degree $d$, let $\alpha \in \bar{k}$ be a root of $f$ and let $K = k(\alpha)$. If $f(t) = \sum_{i=0}^d a_i t^i$, then $\mathrm{Spec}\, K$ is isomorphic to
-$$
-\mathrm{Proj}\, k[x_0,x_1] / (\textstyle\sum_{i=0}^d a_i x_0^i x_1^{d-i} ).
-$$
-Then
-$$
-X = \mathbb{P}^1_K = \mathbb{P}^1_k \times_k \mathrm{Spec}\, K = \mathbb{P}^1_k \times_k \mathrm{Proj}\, k[x_0,x_1] / (\textstyle\sum_{i=0}^d a_i x_0^i x_1^{d-i} )
-$$
-is a closed subscheme of $\mathbb{P}^1_k \times_k \mathbb{P}^1_k$, hence it is a closed subscheme of $\mathbb{P}^3_k$ by the Segre embedding, i.e.
-$$
-X = \mathrm{Proj}\, k[z_0, z_1, z_2, z_3] / (z_0 z_3 - z_1 z_2, \textstyle\sum_{i=0}^d a_i z_0^i z_1^{d-i}, \sum_{i=0}^d a_i z_2^i z_3^{d-i}).$$<|endoftext|>
-TITLE: How does one calculate genus of an algebraic curve?
-QUESTION [32 upvotes]: I've been reading about parametrization of algebraic curves recently and the idea of the "genus of a curve" appears quite often (my impression is that a curve is parametrizable exactly when it has genus 0), but I can't seem to find a definition for it, much less an intuitive idea of what this means. I'd appreciate if anyone could explain what the genus of an algebraic curve is.
-More specifically, papers often say something like this (where $\mathcal{C}$ is our curve):
-
-$\mathcal{C}$ has singularities at $P_1=(1:0:0),P_2=(0:1:0),P_3=(0:0:1),P_4=(1:1:1)$, where $P_1,P_2,P_3$ are 5-fold points and $P_4$ is a 4-fold point. So the genus of $\mathcal{C}$ is 0. [For reference, $\mathcal{C}$ is a curve of degree 10 in $\mathbb{P}^2(\mathbb{C})$.]
-$\mathcal{C}$ has a triple point $P_1$ at the origin $(0,0)$, and double points $P_2=(0,1), P_3=(1,1)$, $P_4=(1,0)$. So the genus of $\mathcal{C}$ is 0. [$\mathcal{C}$ is a curve of degree 5 in $\mathbb{A}^2(\mathbb{C})$.]
-
-I don't understand how to make the leap from singular points to genus in these examples. Can someone explain?
-
-REPLY [14 votes]: I am not an algebraic geometer either, but I tried to figure out how to compute the genus of an algebraic curve to satisfy my own curiosity. Although I have yet to succeed, I would like to share what I did learn.
-First, those still getting used to projective space and homogeneous coordinates should read the first two sections of the appendix in Rational Points on Elliptic Curves by Silverman. It provides both motivation and intuition for these concepts.
-Below I attempt to explain how to compute the genus by hand. Alternatively, one can use a computer algebra system like Maple to compute the genus.
-This answer by Vogler on The Math Forum provided by Hans in a comment is indeed helpful. It explains almost everything in a very accessible way. That answer is based on Algebraic Curves by Walker (see sections 7.1 to 7.5). Another reference is Algebraic Curves: An Introduction to Algebraic Geometry by Fulton, which is freely available online (see section 7.5). The sections I point out in both books deal with the most difficult part of computing the genus, which is how to handle non-ordinary singularities.
-With only ordinary singularities, things are much easier. Let $f(x,y) = 0$ define an irreducible algebraic curve $C$ of degree $d$ whose only singularities are $n$ ordinary singular points $p_i$ (for $1 \le i \le n$), where $p_i$ has multiplicity $r_i$.
Then $$\operatorname{genus}(C) = \frac{(d-1)(d-2)}{2} - \sum_{i=1}^n \frac{r_i (r_i - 1)}{2}.$$
-A point (in dehomogenized coordinates) is singular if $$f(x,y) = \partial_x f(x,y) = \partial_y f(x,y) = 0.$$ Let $(a,b)$ be a singular point. To determine the order of $(a,b)$, expand $f(a + x t, b + y t)$ in powers of $t$, say $f(a+xt,b+yt)=\sum_r g_r(x,y)\,t^r$. Then the order of $(a,b)$ is the minimum value of $r$ such that $g_r(x,y)$ is not identically zero. Now write $g_r(x,y)$ as $y^r h(x/y)$. Then $(a,b)$ is an ordinary singularity if $\gcd(h, h') = 1$ and is non-ordinary otherwise.
-(Note that this explanation only mentions the variables $x$ and $y$, but there is another variable $z$ that is implicitly set to 1. Don't forget to consider the points "at infinity", which is when $z=0$. Read the appendix by Silverman if this isn't clear.)
-As Mariano said in a comment, an ordinary singularity is a point with distinct tangents (and is non-ordinary if any tangent appears more than once). To get a feel for this, see the example figures by Fulton on page 32 (or the examples by Walker on page 57).
-I am completely unsure what to do if the curve is reducible.
-To compute the genus of an irreducible algebraic curve with non-ordinary singularities, we transform it into another algebraic curve with the same genus and no non-ordinary singularities using a so-called birational transformation. In contrast to the explanations above, this part is best explained in homogeneous coordinates.
-This transformation is obtained by repeatedly performing two steps. In the first step, we transform $C$ to a new curve $C'$ satisfying several properties. Vogler states these properties (with my paraphrasing) as follows. Let $p=(a,b,c)$ be a non-ordinary singular point of multiplicity $r$. Then
-
-$p=(1,0,0)$ in projective coordinates;
-The points $(0,1,0)$ and $(0,0,1)$ are not on $C'$;
-The line $x = 0$ does not intersect $C'$ at any singular point;
-The lines $y = 0$ and $z = 0$ do not intersect $C'$ at any singular point other than $p=(1,0,0)$ of multiplicity $r$.
-
-Fulton says that a curve satisfying these conditions is in excellent position (see page 90).
-The first condition is easy to satisfy, as the curve $C_1$ defined by $$f'(x,y,z) = f(a x, bx + y, cx + z) = 0$$ has this property (this linear substitution sends $(1,0,0)$ to $(a,b,c)$). However, I am unsure how to systematically obtain further transformations to satisfy the other properties while maintaining the previous ones (even if these last three conditions should typically hold as Vogler points out). Vogler gives one example of such transformations while Fulton and Walker leave this step as an exercise for the reader. If someone could modify my answer by explaining this step, that would be fantastic.
-Now given that our curve $C'$ defined by $f'(x,y,z)=0$ satisfies the above properties, we transform to a new curve $C''$ defined by $f''(x,y,z)=0$, where $$f'(yz,xz,xy) = x^r f''(x,y,z).$$
-Then we repeat this whole process starting with $C''$ until we obtain a curve with no non-ordinary singularities, at which point we can compute the genus using the formula above.
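-As a quick illustration, the two examples quoted in the question can be checked against this formula (a short Python sketch; the degrees and multiplicities are taken from the question):
-    # degree-10 curve with three 5-fold points and one 4-fold point
-    d, mults = 10, [5, 5, 5, 4]
-    print((d - 1)*(d - 2)//2 - sum(r*(r - 1)//2 for r in mults))  # 36 - 36 = 0
-    # degree-5 curve with one triple point and three double points
-    d, mults = 5, [3, 2, 2, 2]
-    print((d - 1)*(d - 2)//2 - sum(r*(r - 1)//2 for r in mults))  # 6 - 6 = 0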
<|endoftext|>
-TITLE: Relation between projective modules over $R$ and $R[T]$
-QUESTION [8 upvotes]: Let $R$ be a commutative ring and $R[U]$ the polynomial ring in one variable. What is the relation between projective modules over $R$ and projective modules over $R[U]$? Is every projective module over $R[U]$ of the form $P[U]$ for a projective $R$-module $P$? If not, what are the obstructions?
-Edit: I realised that this question was too general for what I was actually looking for. Since the question in the current form seems to be interesting on its own, I refrained from editing it and opened a new question instead.
-
-REPLY [3 votes]: If $R$ is a left regular ring, then the canonical map $K_0(R) \to K_0(R[t])$ is an isomorphism. This result is due to Grothendieck, at least when $R$ is commutative. The general case can be found in the paper "The Whitehead group of a polynomial extension" (Bass, Heller, Swan) or in Rosenberg's book on Algebraic K-Theory.
-Of course, this does not imply that every f.g. projective $R[t]$-module has the form $P[t]$ for some f.g. projective $R$-module (but we cannot expect that!); but this turns out to be true "up to exact sequences".<|endoftext|>
-TITLE: Proving that $\mathbb{Z}[\sqrt{2}]$ is a Euclidean domain
-QUESTION [12 upvotes]: We're proving that $\mathbb{Z}[\sqrt{2}]$ is a Euclidean domain, using the norm function $$\nu (a + b\sqrt{2} ) = |a^2 - 2b^2|$$ and the first part says that since $\nu (a + b\sqrt{2} ) = |(a + b\sqrt{2})(a - b\sqrt{2})|$ it's clear that $\nu (xy) = \nu(x) \nu(y)$? ... Can someone please explain to me how this is clear?
-
-REPLY [21 votes]: Let $\alpha = a_1 + a_2 \sqrt{2}$ and $\beta = b_1 + b_2 \sqrt{2}$ be elements of $\mathbb{Z}[\sqrt{2}]$ with $\beta \neq 0$. We wish to show that there exist $\gamma$ and $\delta$ in $\mathbb{Z}[\sqrt{2}]$ such that $\alpha = \gamma\beta + \delta$ and $N(\delta) < N(\beta)$. To that end, note that in $\mathbb{Q}(\sqrt{2})$ we have $\frac{\alpha}{\beta} = c_1 + c_2 \sqrt{2}$, where $c_1 = \dfrac{a_1b_1 - 2a_2b_2}{b_1^2 - 2b_2^2}$ and $c_2 = \dfrac{a_2b_1 - a_1b_2}{b_1^2 - 2b_2^2}$.
-Let $q_1$ be an integer closest to $c_1$ and $q_2$ an integer closest to $c_2$; then $|c_1 - q_1| \leq 1/2$ and $|c_2 - q_2| \leq 1/2$. Now let $\gamma = q_1 + q_2 \sqrt{2}$; certainly $\gamma \in \mathbb{Z}[\sqrt{2}]$. Next, let $\theta = (c_1 - q_1) + (c_2 - q_2) \sqrt{2}$. We have $\theta = \frac{\alpha}{\beta} - \gamma$, so that $\theta\beta = \alpha - \gamma\beta$.
-Letting $\delta = \theta\beta$, we have $\alpha = \gamma\beta + \delta$. It remains to be shown that $N(\delta) < N(\beta)$. To that end, note that $$N(\theta) = |(c_1 - q_1)^2 - 2(c_2 - q_2)^2| \leq |(c_1 - q_1)^2| + |-2(c_2 - q_2)^2|$$ by the triangle inequality. Thus we have $$N(\theta) \leq (c_1 - q_1)^2 + 2(c_2 - q_2)^2 \leq (1/2)^2 + 2(1/2)^2 = 3/4.$$ In particular, $N(\delta) \leq \frac{3}{4}N(\beta)$ as desired.
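-For the curious, here is the division step above as a small Python sketch (pairs $(a_1,a_2)$ stand for $a_1+a_2\sqrt2$; floating-point rounding is fine for small inputs, though an exact implementation would use fractions):
-    def divmod_zsqrt2(a, b):
-        a1, a2 = a; b1, b2 = b
-        d = b1*b1 - 2*b2*b2                # norm form of b (may be negative)
-        c1 = (a1*b1 - 2*a2*b2) / d         # alpha/beta = c1 + c2*sqrt(2)
-        c2 = (a2*b1 - a1*b2) / d
-        q1, q2 = round(c1), round(c2)      # nearest integers, as in the proof
-        r1 = a1 - (q1*b1 + 2*q2*b2)        # r = alpha - q*beta
-        r2 = a2 - (q1*b2 + q2*b1)
-        return (q1, q2), (r1, r2)
-    N = lambda x: abs(x[0]**2 - 2*x[1]**2)
-    q, r = divmod_zsqrt2((7, 5), (2, 1))
-    assert N(r) < N((2, 1))                # the norm strictly drops, as proved above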
<|endoftext|>
-TITLE: When the localization of a ring is a field
-QUESTION [15 upvotes]: Let $R$ be a commutative noetherian ring with no nonzero nilpotents. Let $p$ be a minimal prime of $R$. Could you help me to prove that $R_p$ is a field?
-
-REPLY [4 votes]: Maybe a little bit more clear than the first answer:
-$R_p$ is itself reduced and local (a localization of a reduced ring is reduced), and $\dim R_p=0$ since $p$ is minimal. So we can ask the following: is it true that a reduced local ring of dimension 0 is a field? Answer: yes! In a ring of dimension 0 with a unique prime ideal, the nilradical (the intersection of all primes) equals that unique prime; reduced means the nilradical is $(0)$, so the only prime ideal, which is also the maximal ideal, is $(0)$, which means that the ring is a field.<|endoftext|>
-TITLE: Prove the equation has a root.
-QUESTION [6 upvotes]: Assume that $f$ is a bounded and differentiable function in $(0,1)$. If $f({1\over 2})=0$, prove that the equation,
-$$2f(x)+xf'(x)=0,$$
-has at least one root in $(0,{{1}\over{2}})$.
-I tried to do it using Rolle's Theorem. Because the left side of the equation looks like the derivative of some function. And if I find a function $F$ s.t. $F'(x)=2f(x)+xf'(x)$ and $F(0)=F({1\over 2})$, then I can use Rolle's Theorem to get the conclusion. I've found $F$, which is
-$$F(x)=\int_{0}^{x}f(t)\,dt+xf(x),$$
-s.t. $F'(x)=2f(x)+xf'(x)$, but the only problem is that I can't arrange for $F(0)=F({1\over 2})$.
-Can somebody give me a hint about this problem?
-
-REPLY [5 votes]: Hint: What's the derivative of $F(x)=x^2f(x)$?<|endoftext|>
-TITLE: Infinite cyclic group generated by every single element?
-QUESTION [9 upvotes]: This probably is a stupid question, but is there an infinite cyclic group generated by every single one of its nonidentity elements?
-
-REPLY [13 votes]: The infinite cyclic group is unique up to isomorphism. If $C$ is infinite cyclic and is generated by $x$ and by $y$, then $x=ky$ and $y=hx$ for some $k,h\in\mathbb Z$. But then $x=khx$, so $hk=1$ and either $h=k=1$ or $h=k=-1$. So either $x=y$ or $x=-y$.
-
-REPLY [11 votes]: Up to isomorphism there is only one infinite cyclic group, the integers under addition; it has only two generators, $1$ and $-1$.
-
-REPLY [4 votes]: No. The only infinite cyclic group up to isomorphism is $\mathbb{Z}$, which has only two generators.
-
-REPLY [4 votes]: The only infinite cyclic group is $\mathbb{Z}$. So, no.<|endoftext|>
-TITLE: Does $A$ a UFD imply that $A[T]$ is also a UFD?
-QUESTION [17 upvotes]: I'm trying to prove that $A$ a UFD implies that $A[T]$ is a UFD.
-The only thing I am sure I could try to use is Gauss's lemma.
-Also, how can we deduce that the polynomial rings $\mathbb{Z}[x_1,\ldots,x_n]$ and $k[x_1,\ldots,x_n]$ are UFDs?
-
-REPLY [18 votes]: There is a slick general way to do this by localization (usually credited to Nagata). Suppose $\rm\:D\:$ is an atomic domain, i.e. nonzero nonunits factor into atoms (irreducibles). If $\rm\:S\:$ is a saturated submonoid of $\rm\:D^*$ (i.e. $\rm\,cd\in S\!\iff c,d\in S)\,$ and, furthermore, $\rm\,S\,$ is generated by primes, then $\rm\: D_S$ UFD $\rm\:\Rightarrow\:D$ UFD. $ $ This is often called Nagata's Lemma.
-This yields said slick proof of $\rm\:D$ UFD $\,\rm\Rightarrow D[x]$ UFD, viz. $\rm\:S = D^*\:$ is generated by primes, so localizing yields the UFD $\rm\:F[x],\:$ $\rm F =\:$ fraction field of $\rm\:D.\:$ Hence $\rm\:D[x]\:$ is a UFD, by Nagata.
-This yields a more conceptual / structural view of the essence of the matter (vs. the traditional argument by Gauss' Lemma). Moreover, the proof generalizes to various rings closely related to GCD domains, e.g. Riesz/Schreier rings, which provide refinement-based views of UFDs (which prove more convenient for noncommutative generalizations).<|endoftext|>
-TITLE: Prove that the Lipschitz constant cannot be less than $1$
-QUESTION [6 upvotes]: I was asked to study the following map: let $I=[0,1]$ and let $f(s)=\log(1+s^2)$ for any $s\in\mathbb R$. For every $u\in L^1(I,\mathbb R)$ we set $$(F(u))(x)=\int_0^xf(u(t))\mathrm d t.$$
-First of all I had shown that $F$ maps $L^1(I,\mathbb R)$ into itself. Then the second point was to show that $F$ is Lipschitz continuous from $L^1(I,\mathbb R)$ into itself with Lipschitz constant less than or equal to $1$, and I've done that.
-Finally the problem asked to show that the Lipschitz constant couldn't be less than $1$. My first approach was to use the Caccioppoli-Banach lemma to derive a sort of contradiction. However if you study the integral equation $$\begin{cases}u(x)=\int_0^x\log(1+u(t)^2)\mathrm d t\\ u(0)=0,\end{cases}$$ then by uniqueness of the solution one sees that $u\equiv 0$ is the only fixed point... if I am not mistaken.
-
-Then I tried to find a sequence of functions $u_\lambda\in L^1(I,\mathbb R)$, with $\lambda<1$, such that $\|Fu_\lambda\|_{L^1}>\lambda\|u_\lambda\|_{L^1}$, but I had no success.
-Can anybody help me please? Thank you...
-EDIT: As I have written in the comment after Did's answer, I don't think that his method matches my idea of proceeding in the exercise; as it is written, it sounds troublesome to me. Can anyone help me in finishing the problem?
-Ps: sorry for accepting the answer, but as I wrote in the comment, I rushed in checking the details because that kind of function was also my first candidate to solve the exercise. However, let me again thank Did for answering.
-
-REPLY [3 votes]: Consider $u_{a,b}=a\mathbf 1_{(0,b)}$ with $0\lt b\lt 1\leqslant a$, and $\ell(a)=\log(1+a^2)$.
-Then $\|u_{a,b}-u_{1,b}\|_1=(a-1)b$ and $F(u_{a,b})(x)=\ell(a)\min\{x,b\}$ hence
-$$
-\|F(u_{a,b})-F(u_{1,b})\|_1=(\ell(a)-\ell(1))\int_0^1\min\{x,b\}\mathrm dx=(\ell(a)-\ell(1))b(1-\tfrac12b).
-$$
-Since $\ell'(1)=1$, $\|F(u_{a,b})-F(u_{1,b})\|_1\sim (a-1)b=\|u_{a,b}-u_{1,b}\|_1$ when $a\to1^+$ and $b\to0$. In particular, no inequality $\|F(u)-F(v)\|_1\leqslant c\|u-v\|_1$ may hold for every pair of integrable functions $u$ and $v$, if $c\lt1$.
-On the other hand, $\ell$ is $1$-Lipschitz hence, for every $x$ in $(0,1)$,
-$$
-|F(u)(x)-F(v)(x)|\leqslant\int_0^1|\ell(u)-\ell(v)|\leqslant\int_0^1|u-v|=\|u-v\|_1.
-$$
-This shows that $F$ is $1$-Lipschitz but not $c$-Lipschitz for any $c\lt1$.<|endoftext|>
-TITLE: How can I find an element $x\not\in\mathfrak mM_{\mathfrak m}$ for every maximal ideal $\mathfrak m$
-QUESTION [7 upvotes]: Let $R$ be a commutative ring with finitely many maximal ideals $\mathfrak m_1,\ldots,\mathfrak m_n$. Let $M$ be a finitely generated module. Then there exists an element $x\in M$ such that $\frac{x}{1}\not\in\mathfrak m_iM_{{\mathfrak m}_i}$ for every $i=1,\dots,n$.
-
-I cannot prove that such an element exists. I was trying to prove it by induction on $n$. If $n=1$ it is true by Nakayama, so suppose it is true for $n-1$. Then for every $i$ I can find an $x_i\in M$ such that $\frac{x_i}{1}\not\in \mathfrak m_jM_{\mathfrak m_j}$ for every $j\neq i$. If $\frac{x_i}{1}\not\in \mathfrak m_iM_{\mathfrak m_i}$ for some $i$ we are done, so suppose $\frac{x_i}{1}\in\mathfrak m_iM_{\mathfrak m_i}$ for every $i$. I don't know how to go on, could you help me?
-
-REPLY [2 votes]: The question about the finitely generated projective module is exercise 2.40(1) from Lam, Exercises in Modules and Rings, and a proof can be found there.<|endoftext|>
-TITLE: Proving all primes are 1 or -1 modulo 6
-QUESTION [13 upvotes]: Possible Duplicate:
-Is that true that all the prime numbers are of the form $6m \pm 1$?
-
-Q. Why is it that all primes greater than 3 are either 1 or -1 modulo 6?
-
-Does it suffice to argue as follows:
-Let $p$ be a prime. $p>3 \Rightarrow 3$ does not divide $p$. Clearly $2$ does not divide $p$ either, and so 6 does not divide $p$.
-Now, $p$ is odd, and so $p$ is either 1, 3 or 5 modulo 6. However, if $p$ were 3 (mod 6), that would give us that $3$ divides $p$, which is a contradiction.
-As such, we conclude that $p$ is either 1 or 5 (=-1) mod 6
-
-REPLY [8 votes]: Yes. Generally if $\rm\:p\:$ is prime, then modulo $\rm\,2p,\,$ any prime $\rm\:q\ne 2,p\:$ must lie in one of the $\rm\:\phi(2p) = \phi(p) = p\!-\!1\:$ residue classes that are coprime to $2$ and $\rm p,$ i.e. all odd residue classes excluding $\rm p,\:$ viz.
$\rm\:1,3,5,\ldots,\hat p,\ldots,2p\!-\!1.\:$ Indeed, integers in other classes are divisible by $2$ or $\rm p\:$ hence, if prime, must be $2$ or $\rm p,\:$ resp. More succinctly, exploiting negation reflection symmetry: $\rm\: q\equiv \pm\{1,3,5,\cdots,p\!-\!2\}\ \ (mod\ 2\:\!p),\ $ e.g. $\rm\,\ q\equiv \pm 1\ \ (mod\ 6),\:$ $\,\rm q\equiv \pm\{1,3\}\ \ (mod\ 10),\:$ $\,\rm q\equiv\pm \{1,3,5\}\ \ (mod\ 14),\:$ etc.
-Generally, if $\rm\,q\,$ is any integer coprime to $\rm\:m\:$ then its remainder mod $\rm\:m\:$ lies in one of the $\rm\:\phi(m)\:$ residue classes coprime to $\rm\:m,\:$ where $\phi$ is the Euler totient function.<|endoftext|>
-TITLE: Two-sided Laplace transform
-QUESTION [5 upvotes]: I want to study more formally the properties of the two-sided Laplace transform
-$$
-\hat f(z)=\int_{-\infty}^{\infty} f(t)e^{zt}dt
-$$
-as a kind of generalization of the Fourier transform. I found some good references in the books by LePage and van der Pol, but these are operational calculus books, and I would like a more mathematical approach, with well-defined spaces, etc...
-Where can I find something like that (books, articles)?
-
-REPLY [2 votes]: I found it some time ago. Advanced Mathematical Analysis by Richard Beals has a really good mathematical formulation of the two-sided Laplace transform, its spaces of definition, and more.<|endoftext|>
-TITLE: Show that $11^{n+1}+12^{2n-1}$ is divisible by $133$.
-QUESTION [16 upvotes]: Problem taken from a paper on mathematical induction by Gerardo Con Diaz. Although it doesn't look like anything special, I have spent a considerable amount of time trying to crack this, with no luck. Most likely, due to the late hour, I am missing something very trivial here.
-
-Prove that for any integer $n$, the number $11^{n+1}+12^{2n-1}$ is divisible by $133$.
-
-I have tried multiplying through by $12$ and rearranging, but like I said with meager results. I arrived at $11^{n+2}+12^{2n+1}$, which satisfies the induction hypothesis for the LHS, but for the RHS I got stuck at $12 \times 133m-11^{n+1}-11^{n+3}$ or $12^2 \times 133m-12 \times11^{n+1}-11^{n+3}$ and several other combinations, none of which would let me factor out $133$.
-
-REPLY [18 votes]: In fact, we can prove a stronger result and the proof is easier. The result we will prove is that $$x^2+x+1 \text{ divides }x^{n+1} + (x+1)^{2n-1}$$ for all $n \in \mathbb{N}$. Setting $x=11$ gives the result you are looking for.
-The proof follows immediately from the remainder theorem since $(x^2+x+1) = (x-\omega)(x-\omega^2)$, where $\omega$ is a primitive complex cube root of unity.
-(Remember that $(x-a)$ divides $f(x)$ if and only if $f(a) = 0$)
-Plugging in $\omega$ in $x^{n+1} + (x+1)^{2n-1}$ gives us $$\omega^{n+1} + (\omega+1)^{2n-1} = \omega^{n+1} + (-\omega^2)^{2n-1} = \omega^{n+1} - \omega^{4n-2} = \omega^{n+1} - \omega^{3n} \omega^{n-2}\\
-=\omega^{n+1} - 1 \times \omega^{n-2} = \omega^{n-2} \left( \omega^3 - 1\right) = 0$$
-This gives us that $(x- \omega)$ divides $x^{n+1} + (x+1)^{2n-1}$.
-Similarly, plugging in $\omega^2$ in $x^{n+1} + (x+1)^{2n-1}$ gives us $$\omega^{2n+2} + \left( \omega^2 + 1 \right)^{2n-1} = \omega^{2n+2} + (-\omega)^{2n-1} = \omega^{2n+2} - \omega^{2n-1}\\ = \omega^{2n-1} \left( \omega^3 - 1\right) = 0$$ This gives us that $(x- \omega^2)$ divides $x^{n+1} + (x+1)^{2n-1}$.
-Hence, $(x-\omega)(x-\omega^2) = x^2 + x + 1$ divides $x^{n+1} + (x+1)^{2n-1}$.
-Setting $x=11$ gives us the result you want, i.e. $11^2 + 11 + 1 = 133$ divides $11^{n+1} + 12^{2n-1}$.
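-A quick brute-force check of the divisibility in Python (the range bound is arbitrary):
-    for n in range(1, 1000):
-        assert (11**(n + 1) + 12**(2*n - 1)) % 133 == 0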
-
-Note that the above result can also be proved by inducting on $n$.<|endoftext|>
-TITLE: A formula for the minimum number of generators of a module over a semilocal ring
-QUESTION [6 upvotes]: Let $R$ be a commutative ring with only finitely many maximal ideals $\mathfrak m_1,\ldots,\mathfrak m_r$. Let $M$ be a finitely generated $R$-module. Then
 $$\mu_R(M)=\max\{\dim_{R/\mathfrak m_i}M/\mathfrak m_iM\mid 1\leq i\leq r\},$$
 where $\mu_R(M)$ is the minimum number of generators of $M$ as an $R$-module.
-
-How can I prove this?
-Of course the inequality $\geq$ is trivial; what I want to prove is that $\mu_R(M)\leq\max_i\dim_{R/\mathfrak m_i}M/\mathfrak m_iM$.
-I was trying to prove it first when $R$ is a finite product of fields, but I wasn't successful; any help?
-
-REPLY [4 votes]: Well, for simplicity of writing, we assume $r=2$, that is, $R$ has only two maximal ideals. Pick $x_i,y_j\in M$ and assume the images of $x_1,\ldots, x_n$ form a basis of $M/\mathfrak{m}_1M$, and the images of $y_1,\ldots,y_m$ form a basis of $M/\mathfrak{m}_2M$, and assume $n\leq m$. Now consider the map $$(R/\mathfrak{m}_1\mathfrak{m}_2R)^m\to M/\mathfrak{m}_1M\times M/\mathfrak{m}_2M$$ sending $e_i$ to $(x_i,y_i)$ for $i=1,\ldots,n$, and $e_{k}\to (x_1,y_k)$ for $k\geq n+1$ (if any), where the $e_l$ form a basis of the LHS. This map is surjective: by the Chinese remainder theorem $R/\mathfrak m_1\mathfrak m_2\cong R/\mathfrak m_1\times R/\mathfrak m_2$, so the coefficients can be chosen independently modulo each maximal ideal. Now lifting this map gives a map $R^m\to M$, and by Nakayama's lemma, $R^m\to M$ is surjective.
-Maybe we should think about the baby case: $R=k_1\times k_2$, $M=k_1\times k^2_2$, what is a generating set of $M$ as an $R$-module?<|endoftext|>
-TITLE: Tarski's decidability proof on real closed field and Peano arithmetic
-QUESTION [23 upvotes]: It seems very confusing that the real closed field (which can also be used as a theory of the real numbers) is decidable, while Peano arithmetic, which seems to be a subset of the real closed field, is undecidable.
-What am I getting wrong?
-
-REPLY [6 votes]: As others point out, the main reason for this seemingly paradoxical situation (all natural numbers are real numbers, yet the theory of the first is much more complicated than that of the second) is that the natural numbers are not definable in the language of RCF.
-We can use the same language to talk about them, say $\{0,1,+,\cdot\}$, and the natural numbers are a substructure of the real numbers. However, being a substructure doesn't make things simpler. Take for example prime numbers: do you expect that their theory in the same language is simpler than the theory of natural numbers? It doesn't need to be the case. The main issue, as others have pointed out, is quantification. Quantification over natural numbers can generate very complicated sets (the arithmetical hierarchy), whereas quantification over reals does not; in fact, if we simply add order to the language then every quantified formula is equivalent to a formula with no quantifiers for the reals, i.e. quantifiers do not increase the complexity of the sets we consider for reals and can be eliminated (and that is how it is proven that RCF is decidable). We can show that this is not possible for natural numbers: quantifiers can increase the complexity of sets. The undecidability follows essentially from the fact that we can talk about computation of machines in theories like PA and in much weaker theories of the natural numbers.
-It might be helpful to look at something in between: consider the integers. We can define the natural numbers inside them using the fact that every natural number is a sum of four squares (Lagrange). Therefore the theory of the integers is going to be at least as complex as that of the natural numbers.
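-(Concretely: by Lagrange's four-square theorem, the natural numbers are cut out in $(\mathbb Z,+,\cdot)$ by the first-order formula $$\varphi(x)\;:\quad \exists a\,\exists b\,\exists c\,\exists d\;\bigl(x = a^2+b^2+c^2+d^2\bigr),$$ so a decision procedure for the theory of the integers would immediately yield one for arithmetic.)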
Another, more tricky, set in the middle: consider the rational numbers. Their theory is similarly undecidable; however, showing that we can define the natural numbers in their theory is much more complicated (this nice result is originally due to Julia Robinson). Another example: consider the complex numbers. Their theory, ACF, is in some ways even simpler than that of the real numbers.
-To make it even more clear: if we just add a predicate to the language that tells us which real numbers are natural numbers, then the theory of the real numbers with this predicate becomes undecidable. The main issue is that without such a predicate the natural numbers are indistinguishable from other real numbers in the language.<|endoftext|>
-TITLE: When is $L^1 = (L^\infty)^\ast$?
-QUESTION [17 upvotes]: I found this exercise in Cohn's Measure Theory:
-
-Let $(X, \mathscr A, \mu)$ be a finite measure space. Show that the conditions
-
-the map $T: L^1(X, \mathscr A, \mu) \to (L^\infty(X, \mathscr A, \mu))^\ast$ given by $g\mapsto T_g(f) = \int fg \, d\mu$ is surjective
-$L^1(X, \mathscr A, \mu)$ is finite-dimensional
-$L^\infty(X, \mathscr A, \mu)$ is finite-dimensional
-there is a finite $\sigma$-algebra $\mathscr A_0$ on $X$ such that $\mathscr A_0\subset \mathscr A$ and such that each set in $\mathscr A$ differs from a set in $\mathscr A_0$ by a $\mu$-null set
-
-are equivalent.
-
-I figured out a way to show $2. \implies 4. \implies 3. \implies 2.$ and how these three conditions imply $1.$ What I'm having trouble with is how to get from 1. to either of the other three. If someone could provide a hint, I'd be grateful.
-Thank you.
-
-REPLY [10 votes]: Thanks to t.b.'s hint in the comments, I think I can prove $1. \implies 4.$ now.
-Suppose 4. is not true for $(X, \mathscr A, \mu)$. Then there exists a sequence $B_n$ in $\mathscr A$ of pairwise disjoint sets with positive measure. Now choose a point $b_n \in B_n$ for each $n$ and let
-$$C = \{f\in L^\infty(X, \mathscr A, \mu) \mid f \text{ is constant on $B_n$ for all $ n$ and } \lim_{n\to\infty} f(b_n) \text{ exists}\}$$
-Let $\Lambda_0$ be the linear functional on $C$ given by $$\Lambda_0(f) = \lim_{n\to \infty} f(b_n)$$ Then $\Lambda_0$ is continuous on $C$ (in fact $\Vert \Lambda_0 \Vert = 1$) and we can extend $\Lambda_0$ to a linear functional $\Lambda$ on all of $L^\infty(X, \mathscr A, \mu)$ by an application of Hahn-Banach.
-But this $\Lambda$ cannot be of the form
-$$\Lambda(f) = \int_X fg \, d\mu$$
-for any $g\in L^1(X, \mathscr A, \mu)$. Suppose there were such a $g$. Let $A_n = \bigcup_{k=1}^n B_k$ and $A = \bigcup_{n=1}^\infty B_n$. Then we would have $$\int_X \chi_{A_n} g \, d\mu = \Lambda\left(\chi_{A_n}\right) = \lim_{m\to\infty} \chi_{A_n}(b_m) = 0$$
-for all $n$. And therefore, by the monotone convergence theorem applied to $\chi_{A_n} g^+ \uparrow \chi_A g^+$ and $\chi_{A_n} g^- \uparrow \chi_A g^-$, we obtain
-\begin{align}
-\int_X \chi_A g\, d\mu &= \int_X \chi_{A} g^+\, d\mu - \int_X \chi_{A} g^-\, d\mu \\
-&= \lim_{n\to\infty} \int_X \chi_{A_n} g^+\, d\mu - \lim_{n\to\infty} \int_X \chi_{A_n} g^- \, d\mu \\
-&= \lim_{n\to\infty} \int_X \chi_{A_n} g\, d\mu \\
-&= 0
-\end{align}
-But on the other hand we have $\Lambda(\chi_A) = \lim_{n\to\infty} \chi_A(b_n) = 1$, a contradiction.
-
-So this $\Lambda \in (L^\infty(X, \mathscr A, \mu))^\ast$ is not in the image of $T: L^1(X, \mathscr A, \mu)\to (L^\infty(X, \mathscr A, \mu))^\ast$.<|endoftext|>
-TITLE: The product of two spectral spaces
-QUESTION [6 upvotes]: Notice: the following statements about product topologies all concern the Cartesian product topology; we are in the category of topological spaces, not the category of schemes.
-On this page about sober spaces, it is said that any product of sober spaces is sober. What does "any" mean here? Any index set, or only finite products?
-
-Fact 1. Could anyone give a proof of the fact that the product $X\times Y$ of any two sober spaces $X,Y$ is sober?
-
-I am considering the product topology of two spectral spaces. Let $X=\operatorname{Spec} A$, $Y=\operatorname{Spec} B$ where $A,B$ are commutative rings. Then I wonder:
-
-Does there exist a canonical choice of a ring $C$ such that $\operatorname{Spec} C$ is canonically homeomorphic to $X\times Y$ with the product topology?
-
-The topologies of $X,Y$ are quasicompact, they have bases consisting of quasicompact opens, and the intersection of any two quasicompact opens is quasicompact open. These properties are preserved in $X\times Y$ (if I am right). So by Fact 1, the product topology on $X\times Y$ is a spectral space, thus it can be realized as the spectrum of a commutative ring (see the same wiki article).
-Moreover, if we are considering the product topology $X\times_Z Y$ (the induced topology from the product topology $X\times Y$), where $X,Y,Z$ are affine schemes and the maps $X\to Z$, $Y\to Z$ are induced by ring maps, what will happen in this case; is $X\times_Z Y$ a spectral space, etc.?
-Thanks!
-
-REPLY [3 votes]: Proof of Fact 1 (for any product).
-Let $\{ X_i \}_{i\in I}$ be a family of non-empty sober spaces. Let $F$ be a closed irreducible subset of $X:=\prod_i X_i$. By replacing $X_i$ with the closure of the projection of $F$ in $X_i$, we can suppose $F\to X_i$ has dense image for all $i$. Note that each $X_i$ is then irreducible (being the closure of the image of the irreducible set $F$), so by sobriety it has a generic point.
-I claim that $F=X$. If $\eta_i$ is the generic point of $X_i$, then it is clear that $(\eta_i)_i$ is the generic point of $X$. So let's prove the claim. Suppose the open subset $X\setminus F$ is non-empty. Then it contains a product
-$$X\setminus F \supseteq U_{i_1}\times \cdots \times U_{i_n} \times \prod_{i\ne i_1,..., i_n} X_i$$
-with non-empty open subsets $U_{i_j}\subseteq X_{i_j}$.
-So $F$ is covered by finitely many closed subsets of $X$:
-$$ F\subseteq \cup_{1\le j\le n} Z_{i_j}\times \prod_{i\ne i_j} X_i$$
-where $Z_{i_j}=X_{i_j}\setminus U_{i_j}$. As $F$ is irreducible, it is contained in one of them, say
-$$ F\subseteq Z_{i_1}\times \prod_{i\ne i_1} X_i.$$
-But then the projection of $F$ to $X_{i_1}$ is not dense. Contradiction.<|endoftext|>
-TITLE: What is the symbol to refer to the set of whole numbers
-QUESTION [14 upvotes]: The set of integers and natural numbers have symbols for them:
-
-$\mathbb{Z}$ = integers = {$\ldots, -2, -1, 0, 1, 2, \ldots$}
-$\mathbb{N}$ = natural numbers ($\mathbb{Z^+}$) = {$1, 2, 3, \ldots$}
-
-Even though there appears to be some confusion as to exactly What are the "whole numbers"?, my question is what is the symbol to represent the set $0, 1, 2, \ldots$. I have not seen $\mathbb{W}$ used, so I am wondering if there is another symbol for this set, or if this set does not have an official symbol associated with it.
-
-REPLY [4 votes]: Besides $\mathbb{N}\cup\{0\}$, the symbol $\mathbb{I}$ is also used for $\{0,1,2,3,...\}$; it is called the set of calculating numbers, which has only zero more than the natural numbers. Also, in some books it is denoted by $\mathbb{Z}^{\geq0}$.<|endoftext|>
-TITLE: Countability in first-order logic is relative to what exactly?
-QUESTION [5 upvotes]: Skolem's Paradox tells us that countability in first-order logic is relative.
-Relative to what?
-Below is what I've gathered.
-Countability is relative to:
-1. what a model takes to be $\mathbb N$
-2. what bijections between $\mathbb N$ and some set $A$ a model recognizes.
-Two examples:
-For (1): Let $\mathcal M$ be a model such that $A$ is uncountable in $\mathcal M$. We add a bijection between $A$ and $\mathbb N$ to $\mathcal M$ and call this new model $\mathcal M'$. $A$ is countable in $\mathcal M'$. As mentioned here.
-For (2): The underlying set of a model $\mathcal M$ might be countable from the perspective of a larger model $\mathcal N$, and so the two models might "see" $\mathbb N$ differently. I'm not sure if I've said this right, but here is a post on this.
-
-REPLY [5 votes]: I fear I may have confused you a bit, but this is a confusing topic after all, and it can take quite some time to wrap your head around it completely.
-First let us establish the following fact. We live in a big big universe. This universe, for the sake of conversation, is a model of ZFC. However this universe is not a set, and we do not know any sets outside our universe.
-This universe judges (with extreme prejudice) what is truly countable and what is not; what is countable is what the universe knows has a bijection with $\omega$ (which in our case is "the true $\omega$").
-Suppose that there is a model of ZFC $\mathfrak M$ which is a set in our universe; it may be countable and it might not be. It might know the same true $\omega$, or it might think that some other set is $\omega^\mathfrak M$ (the set which $\mathfrak M$ thinks is $\omega$). It is important to know: $\omega^\mathfrak M$ may not even be countable! In such a case $\mathfrak M$ may think that things are countable even if they are not, as it compares things to its own $\omega$ (which we know is uncountable).
-Since $\mathfrak M$ is small, it may know some sets which are truly countable, but it may not know about the bijections these sets have with the true $\omega$; it may be the case that $\omega^\mathfrak M$ is itself uncountable (but $\mathfrak M$ is unaware of this fact, since it judges countability wrongly) and then $\mathfrak M$ will get "most" things wrong about countability.
-So we end up with the following situation:
-
-There is an absolute notion of countability. This is what the universe decides, or knows, is countable.
-Every model inside the universe has its own version of $\omega$ which may be the true $\omega$, may be a different countable set, and in the worst possible case may not even be a countable set! Inside such a model, $\mathfrak M$, a set $A$ is countable if the model knows about a bijection between the "local" $\omega^\mathfrak M$ and $A$.
-We can then extend such $\mathfrak M$ to a slightly larger $\mathfrak N$ in which some set $A$, which in $\mathfrak M$ was not countable, $\mathfrak N$ thinks is countable (we added the needed bijection).
-
-We separate case 1, where countability is absolute (or "true") even if internally some model $\mathfrak M$ may not know that some set is countable, from cases 2 and 3, in which a certain model thinks of a set as countable, or uncountable, regardless of its true size.<|endoftext|>
-TITLE: GCD of rationals
-QUESTION [47 upvotes]: Disclaimer: I'm an engineer, not a mathematician
-Somebody claimed that $\gcd$ is only applicable for integers, but it seems I'm perfectly able to apply it to rationals also:
-$$ \gcd\left(\frac{13}{6}, \frac{3}{4} \right) = \frac{1}{12} $$
-I can do this for a number of cases on sight, but I need a method. I tried Euclid's algorithm, but I'm not sure about the end condition: a remainder of 0 doesn't seem to work here.
-So I tried the following:
-$$\gcd\left(\frac{a}{b}, \frac{c}{d} \right) = \frac{\gcd(a\cdot d, c \cdot b)}{b \cdot d}$$
-This seems to work, but I'm just following my intuition, and I would like to know if this is a valid equation, maybe the correct method. (It is in any case consistent with $\gcd$ over the natural numbers.)
-I'm not a mathematician, so please type slowly :-)
-
-REPLY [9 votes]: Yes, your formula yields the unique extension of $\rm\:gcd\:$ from integers to rationals (fractions), presuming the natural extension of the divisibility relation from integers to rationals, i.e. for rationals $\rm\:r,s,\:$ we define $\rm\:r\:$ divides $\rm\:s,\:$ if $\rm\ s/r\:$ is an integer, $ $ in symbols $\rm\:r\:|\:s\:$ $\!\iff\!$ $\rm\:s/r\in\mathbb Z.\: $
-[Such divisibility relations induced by subrings are discussed further here]
-Essentially your formula for the gcd of rationals works by scaling the gcd arguments by a factor that yields a known gcd (of integers), then performing the inverse scaling back to rationals.
-Even in more general number systems (integral domains), where gcds need not always exist, this scaling method still works to compute gcds from the value of a known scaled gcd, namely
-$\rm{\bf Lemma}\ \ \ gcd(a,b)\ =\ gcd(ac,bc)/c\ \ \ if \ \ \ gcd(ac,bc)\ $ exists $\rm\quad$
-Therefore $\rm\ \ gcd(a,b)\, c = gcd(ac,bc) \ \ \ \ \ if\ \ \ \ gcd(ac,bc)\ $ exists $\quad$ [GCD Distributive Law]
-The reverse direction fails, i.e. $\rm\:gcd(a,b)\:$ exists does not generally imply that $\rm\:gcd(ac,bc)\:$ exists. $\ $ For a counterexample see my post here, which includes further discussion and references.
-More generally, as proved here, we have these dual formulas for reduced fractions
-$$\rm\ gcd\left(\frac{a}b,\frac{c}d\right) = \frac{gcd(a,c)}{lcm(b,d)}\ \ \ if\ \ \ \gcd(a,b) = 1 = \gcd(c,d)$$
-$$\rm\ lcm\left(\frac{a}b,\frac{c}d\right) = \frac{lcm(a,c)}{gcd(b,d)}\ \ \ if\ \ \ \gcd(a,b) = 1 = \gcd(c,d)$$
-See this answer for the $k$-ary inductive extension of the above gcd formula.
-Some of these ideas date to Euclid, who computed the greatest common measure of line segments by anthyphairesis (continually subtract the smaller from the larger), i.e. the subtractive form of the Euclidean algorithm. The above methods work much more generally since they do not require the existence of a Euclidean (division) algorithm but, rather, only the existence of (certain) gcds.
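-Since the OP is an engineer: the reduced-fraction formula above is a one-liner in Python (a minimal sketch; math.lcm needs Python 3.9+):
-    from fractions import Fraction
-    from math import gcd, lcm
-    def gcd_rational(r, s):
-        r, s = Fraction(r), Fraction(s)  # Fraction automatically reduces to lowest terms
-        return Fraction(gcd(r.numerator, s.numerator), lcm(r.denominator, s.denominator))
-    print(gcd_rational(Fraction(13, 6), Fraction(3, 4)))  # 1/12, matching the example above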
<|endoftext|>
-TITLE: wedge product of differential form
-QUESTION [6 upvotes]: Let $\alpha$ be a one-form over some manifold $M$, $2n-1$ dimensional and real, and let $X= M\times (0,\infty)$, where $r$ is the coordinate on the second factor. Define a two-form on $X$:
-$$\omega= d(r^2\alpha)$$
-Then we have to calculate $\omega^n$. I am sorry if the following doubts are too silly. My doubts are:
-1- I think $\omega^n:= \omega\wedge\cdots\wedge\omega$, $n$ times.
-2- As $\omega$ is a two-form hence $\omega\wedge \omega \neq 0$; but for any one-form $\alpha$ we must have $\alpha\wedge\alpha= 0$ [as $\alpha\wedge\alpha= c(\alpha\otimes \alpha- \alpha\otimes \alpha)$].
-3- What is the guarantee that $\omega^n\neq 0$? As with one-forms (as in the 2nd part above), can we say when $\omega^n=0$ for a two-form?
-
-REPLY [8 votes]: That depends on the convention in your textbook or notes. But I have seen that notation used, so I won't rule it out.
-
-"As $\omega$ is a two-form hence $\omega\wedge\omega\neq 0$" is false. Consider the two-form $\mathrm{d}x\wedge \mathrm{d}y$ on $\mathbb{R}^4 = \{(w,x,y,z)\mid w,x,y,z\in\mathbb{R}\}$. This two-form wedged with itself is zero. What you meant is that "there exist two-forms such that $\omega\wedge \omega \neq 0$", which is true.
-
-There are some cases when $\omega^n = 0$ can be easily guaranteed by algebraic constraints. For starters, if the dimension of $X$ is less than $2n$, then since $\omega^n$ is a $2n$-form, it must be identically 0. This can be generalised using the rank of the differential form, via the notion of form envelope.
-Let $V$ be a vector space and let $\eta \in \wedge^p V$. We can consider the smallest subspace $W\subseteq V$ such that $\eta \in \wedge^p W$. It is clear that if $kp > \mathrm{dim}(W)$ then $\eta^k = 0$. It is easy to see that, since every one-form lives in a one-dimensional subspace, this means that for one-forms $\alpha^k = 0$ if $k > 1$.
-
-For your specific form, you have that
-$$ \omega = 2 r\,\mathrm{d}r \wedge \alpha + r^2 \mathrm{d}\alpha $$
-The main constraint then is what $\mathrm{d}\alpha$ looks like. But noting that
-$$ (\mathrm{d}r \wedge \alpha)^2 = 0 $$
-you can directly compute (using basically the binomial formula) that
-$$ \omega^n = r^{2n} (\mathrm{d}\alpha)^n + 2n r^{2n-1} \mathrm{d} r \wedge \alpha \wedge (\mathrm{d}\alpha)^{n-1} ~.$$
-Whether this vanishes is determined by whether $(\mathrm{d}\alpha)^n$ vanishes and whether $\alpha\wedge(\mathrm{d}\alpha)^{n-1}$ vanishes.
-Now, since per your edit $M$ is $2n-1$ dimensional, necessarily $(\mathrm{d}\alpha)^n$, which is a $2n$-form on $M$, must be zero. So you are reduced to
-$$ \omega^n = 2n r^{2n-1} \mathrm{d}r\wedge\alpha\wedge(\mathrm{d}\alpha)^{n-1} $$
-This is the best you can do in the abstract, unless more information about $\alpha$ is given. As an example, take $n = 2$ and $M = \mathbb{R}^3$ with coordinates $x,y,z$. Let $\alpha = \mathrm{d}z + x\mathrm{d}y$. Then $\mathrm{d}\alpha = \mathrm{d}x\wedge \mathrm{d}y$ and
-$$ \omega^2 = 4 r^3 \mathrm{d}r \wedge \mathrm{d}z \wedge \mathrm{d}x\wedge \mathrm{d}y~.$$<|endoftext|>
-TITLE: Understanding Ramification Points
-QUESTION [39 upvotes]: I really don't understand how to calculate ramification points for a general map between Riemann Surfaces. If anyone has a good explanation of this, would they be prepared to share it? Disclaimer: I'd prefer an explanation that avoids talking about projective space!
-I'll illustrate my problem with an example. The notion of degree for a holomorphic map is not well defined for non-compact spaces, such as algebraic curves in $\mathbb{C}^2$. I've had advice from colleagues not to worry about this and to use the notion of degree anyway, because it works in a different setting (I don't know which). In particular consider the algebraic curve defined by
In particular consider the algebraic curve defined by -$$p(z,w)=w^3-z(z^2-1)$$ -and the first projection map -$$f(z,w)=z$$ -In order to find the ramification points of this we know that generically $v_f(x)=1$ and clearly when $z\neq0,\pm 1$ we have $|f^{-1}(z)|=3$ so the 'degree' should be $3$. Thus $z=0,\pm1$ are ramification points with branching order 3. I've had feedback that this is correct. Why did this work? -Now let's look at an extremely similar example. Consider the algebraic curve defined by -$$p(z,w)=w^2-z^3+z^2+z$$ -and the second projection map -$$g(z,w)=w$$ -Now again we see the 'degree' of $g$ should be $3$. Now $g^{-1}(i)=\{(1,i),(-1,i)\}$. So by the degree argument exactly one of these is a ramification point, of branching order 2. Is this correct? If so, how do I tell which one it is? -Finally in more generality, does this method work for the projection maps of all algebraic curves in $\mathbb{C}^2$? Sorry for the long exposition! -Edit: Here's an idea I just had. If our map $f$ is proper then we don't need $X$ to be compact for $\deg(f)$ to be well defined. Now the projection map is clearly proper (I think) so that's why this works. Am I right? This of course raises the natural question - 'what standard maps are proper'? I guess I should ask this in a separate question though! - -REPLY [50 votes]: Let's look at your second example. Let $p(z, w) = w^2 - z^3 + z^2 + z$, and let $Y = \{ p(z, w) = 0 \}$. Then, -$$p(z, i) = -z^3 + z^2 + z - 1 = -(z - 1)^2 (z+1)$$ -so I claim $(1, i)$ has ramification index $2$ while $(-1, i)$ has ramification index $1$. Indeed, observe that -\begin{align} -\frac{\partial p}{\partial z} & = -3 z^2 + 2 z + 1 \\ -\frac{\partial p}{\partial w} & = 2 w -\end{align} -so by an inverse function theorem argument, we find that $(z, w) \mapsto z$ is locally a chart near both $(-1, i)$ and $(1, i)$. In this chart, your function $g : Y \to \mathbb{C}$ is given by $z \mapsto \sqrt{z^3 - z^2 - z}$. Let us take Taylor expansions around $\pm 1$: -\begin{align} -g(z) - i & = -i (z-1)^2 + O((z-1)^3) \\ -g(z) - i & = -2 i (z+1) + O((z+1)^2) -\end{align} -Hence, the ramification index at $(1, i)$ is indeed $2$ and at $(-1, i)$ it is $1$. - -Morally, what is going on is that your curves are dense open subsets of projective curves. Indeed, your first curve is given in homogeneous coordinates by -$$w^3 - z (z^2 - u^2) = 0$$ -and your second curve is given by -$$w^2 u - z^3 + z^2 u + z u^2 = 0$$ -and one can check by hand that these curves are smooth "at infinity", so we have the desired embedding of the original affine algebraic curves into projective (hence compact) algebraic curves. Degree is well-defined on the latter, so is well-defined on the former by restriction; the only trouble is that there may be "missing" preimages and so the equation relating degrees and ramification indices becomes an inequality: -$$\text{deg}(g) \ge \sum_{x \in g^{-1} \{w\}} \nu_x (g)$$ -For example, take the affine hyperbola $z w - 1 = 0$ and the projection $(z, w) \mapsto w$; this function has degree $1$ (once we embed it in the projective closure), but obviously there are no preimages of $0$ in the affine hyperbola. - -Let's develop a generic method of dealing with affine plane curves. Let $p : \mathbb{C}^2 \to \mathbb{C}$ be a polynomial function in two variables, and suppose $Y = \{ p(z, w) = 0 \}$ is a smooth algebraic curve. Let $f : Y \to \mathbb{C}$ be the projection $(z, w) \mapsto w$. 
-
-Morally, what is going on is that your curves are dense open subsets of projective curves. Indeed, your first curve is given in homogeneous coordinates by
-$$w^3 - z (z^2 - u^2) = 0$$
-and your second curve is given by
-$$w^2 u - z^3 + z^2 u + z u^2 = 0$$
-and one can check by hand that these curves are smooth "at infinity", so we have the desired embedding of the original affine algebraic curves into projective (hence compact) algebraic curves. Degree is well-defined on the latter, so is well-defined on the former by restriction; the only trouble is that there may be "missing" preimages and so the equation relating degrees and ramification indices becomes an inequality:
-$$\text{deg}(g) \ge \sum_{x \in g^{-1} \{w\}} \nu_x (g)$$
-For example, take the affine hyperbola $z w - 1 = 0$ and the projection $(z, w) \mapsto w$; this function has degree $1$ (once we embed it in the projective closure), but obviously there are no preimages of $0$ in the affine hyperbola.
-
-Let's develop a generic method of dealing with affine plane curves. Let $p : \mathbb{C}^2 \to \mathbb{C}$ be a polynomial function in two variables, and suppose $Y = \{ p(z, w) = 0 \}$ is a smooth algebraic curve. Let $f : Y \to \mathbb{C}$ be the projection $(z, w) \mapsto w$. For each fixed complex number $b$, we get a polynomial function $p(-, b)$, say of degree $d$. Now, because $\mathbb{C}$ is algebraically closed, we can write
-$$p(z, b) = c (z - a_1)^{e_1} \cdots (z - a_n)^{e_n}$$
-for some distinct complex numbers $a_1, \ldots, a_n$, $c \ne 0$, and positive integers $e_1, \ldots, e_n$, such that $e_1 + \cdots + e_n = d$. Suppose also that
-$$\frac{\partial p}{\partial w}(a_i, b) \ne 0$$
-for all $a_i$; then an inverse function theorem argument shows that $(z, w) \mapsto z$ is a chart near each $(a_i, b)$. I claim that the ramification index of $f$ at $(a_i, b)$ is $e_i$ under these hypotheses. Indeed, when $z$ is a local parameter, we have
-$$0 = \frac{\partial p}{\partial z} + \frac{\mathrm{d} w}{\mathrm{d} z} \frac{\partial p }{\partial w}$$
-so if $e_i > 1$, we have $\frac{\partial p}{\partial z} (a_i, b) = 0$, so we must have $\frac{\mathrm{d} w}{\mathrm{d} z} (a_i) = 0$ because $\frac{\partial p}{\partial w} (a_i, b) \ne 0 $ by hypothesis, implying $f(z) - b = O((z - a_i)^2)$. Playing around with total derivatives more, we eventually find that the first non-zero coefficient of $f(z) - b$ around $a_i$ is the coefficient of $(z - a_i)^{e_i}$, as required.
-On the other hand, when we have $\frac{\partial p}{\partial w} (a_i, b) = 0$, then by non-degeneracy we must have $\frac{\partial p}{\partial z} (a_i, b) \ne 0$, and we must have $e_i = 1$ and $(z, w) \mapsto w$ is a chart near $(a_i, b)$. But then obviously the ramification index of $f$ at $(a_i, b)$ must be $1$. So in either case the ramification index of $f$ at $(a_i, b)$ is equal to $e_i$. Convenient, no?<|endoftext|>
-TITLE: Proving two torus maps are homotopic
-QUESTION [6 upvotes]: I have the following problem:
-Given two maps $\varphi , \psi :T^2 \rightarrow T^2$ with $\varphi(p)=\psi(p)=p$ such that $\varphi_*=\psi_*$ (the induced homomorphisms on the fundamental groups based at $p$), then the two maps are homotopic.
-Using degree theory of the circle I'm able to construct homotopies between these maps restricted to loops, but I don't see how to extend them to a continuous homotopy defined on all of the torus.
-Thanks in advance.
-
-REPLY [4 votes]: I will give an elementary proof of the problem using the fact that $T^2$ is a topological group and that its universal cover is contractible. We will start with some constructions in the homotopy theory of topological groups, which are required to understand the proof given below and added here for convenience.
-First note that $\mathbb R^2$ forms a topological group under addition: $(x,y) + (x',y') := (x+x',y+y')$, and that $\mathbb Z^2$ is a discrete normal subgroup thereof. We identify $T^2$ with the quotient of $\mathbb R^2$ by $\mathbb Z^2$, so that $T^2$ again becomes a topological group under addition. Moreover the quotient map $p: \mathbb R^2 \to T^2$ becomes a group homomorphism and is easily seen to be the universal covering projection ($\mathbb Z^2$ discrete subgroup $\Rightarrow$ $p$ is a covering projection; $\mathbb R^2$ is contractible $\Rightarrow$ $p$ is universal).
-Next, we observe that for $[f],[g]\in [(X,\ast),(T^2,0)]$ the sum $[f]+[g] := [f+g]$ is well defined, turning $[(X,\ast),(T^2,0)]$ into a group.
The same arguments show that $[(X,\ast),(\mathbb R^2,0)]$ is a group under pointwise addition of representatives as well (as is $[(X,\ast),(G,1)]$ for any topological group $G$ with unit $1$), and that the map $p_\sharp : [(X,\ast),(\mathbb R^2,0)] \to [(X,\ast),(T^2,0)]$ given by $p_\sharp([f]) := [p \circ f]$ is a group homomorphism.
-Now $\pi_1(T^2,0) = [(S^1,1), (T^2,0)]$ is a group in two ways, by means of composition of (representatives of) loops $[\alpha], [\beta] \mapsto [\alpha \ast \beta]$ and by means of pointwise addition of (representatives of) loops $[\alpha], [\beta] \mapsto [\alpha + \beta]$.
-Both operations share the same unit, the (class of the) constant loop sending everything to $0 \in T^2$ and denoted simply by $0: (S^1,1) \to (T^2,0)$. We can also observe that for any loops $\alpha, \beta, \gamma, \delta$ we have the interchange law $(\alpha + \beta) \ast (\gamma + \delta) = (\alpha \ast \gamma) + (\beta \ast \delta)$. Therefore $$[\alpha]+[\beta] = ([\alpha] \ast [0]) + ([0] \ast [\beta]) = ([\alpha] + [0]) \ast ([0] + [\beta]) = [\alpha] \ast [\beta],$$
-hence the two operations are in fact the same on $\pi_1(T^2,0)$. The same argument can be used to show the analogous statement for $\pi_1(\mathbb R^2,0)$ (or $\pi_1(G,1)$ for any topological group $G$ with unit $1$).
-Now back to the problem:
-Given two maps $\varphi, \psi: T^2 \to T^2$, such that for some point $x \in T^2$, we have $\varphi(x) = \psi(x) = x$ and $\pi_1(\varphi) = \pi_1(\psi): \pi_1(T^2,x) \to \pi_1(T^2,x)$, we want to show $\varphi \simeq \psi$, where the homotopy can be taken relative to $x$. Replacing $\varphi$ with $\xi \mapsto \varphi(\xi + x) - x$ and $\psi$ with $\xi \mapsto \psi(\xi + x) - x$ if necessary, we may assume $x=0$. It will then suffice to show that $\chi \simeq 0$, where $\chi := \varphi - \psi$.
-Since the induced map $\pi_1(\chi): \pi_1(T^2,0) \to \pi_1(T^2,0)$ on fundamental groups is trivial (this is where we need all the constructions for topological groups), we can lift $\chi$ to a map $\bar{\chi}: (T^2,0) \to (\mathbb R^2,0)$ with $\chi = p \circ \bar{\chi}$.
-We now define $H: T^2 \times I \to T^2$ by $H(x,t) = p(t\bar\chi(x))$, which is easily checked to be the required homotopy $0 \simeq \chi$.<|endoftext|>
-TITLE: Powers in ASCII text
-QUESTION [5 upvotes]: I have a problem reading a discussion forum post. Namely, in the ASCII text, is 2^3^4 the same as $(2^3)^4$ or $2^{3^4}$?
-
-REPLY [10 votes]: I believe that usually the intended meaning of a^b^c or $a^{b^c}$ is $a^{(b^c)}$.
-The reason is that if someone wants to write $(a^b)^c$, he can use the equivalent expression $a^{bc}$ instead.
-In particular, this seems to be quite common in cardinal arithmetic - I think no one will doubt what is meant when someone writes $2^{2^{\aleph_0}}$ even when it's not indicated by brackets: $2^{(2^{\aleph_0})}$.
-
-REPLY [6 votes]: The reason you have a problem is that the notation is ambiguous. A careful writer will write 2^(3^4) or (2^3)^4, depending on what she means. There is no way of telling what 2^3^4 means, except possibly from context.<|endoftext|>
-TITLE: Space of Germs of Holomorphic Function
-QUESTION [12 upvotes]: A bit of a general question, but here goes. Morally, what is the space of germs of a holomorphic function?
-I know that a germ is simply an equivalence class of function elements, where we regard two function elements as equivalent at a point if they agree on some open neighbourhood of that point.
Moreover I know the definition that the space of germs is simply the union of these equivalence classes over all functions $f$ and points $x$.
-This is all a bit abstract at the moment though. I can't see how germs are useful, or how I might calculate them for a concrete function. Has anyone got any nice examples of calculations of the space of germs? And could someone explain the overarching idea behind them in the theory of Riemann surfaces?
-Many thanks.
-
-REPLY [13 votes]: Consider the set of all germs of holomorphic functions at a specific point $x$. If you know of stalks, then this is just $\mathcal{O}_x$. You can think of this set as a collection of all "possible functions" which are holomorphic at that point.
-To be precise, you can think of it as the set of all power series which converge in a little neighborhood of $x$. So $\mathcal{O}_x$ is isomorphic to the ring $\mathbb{C}\{z-x\}$ of all convergent power series in $z-x$.
-Of course, two functions define the same germ if their power series about $x$ are the same. This gives a method to calculate the germs of a function $f$ at points $x$.
-Now the union of all germs of all functions is useful, for instance, because it allows for the construction of a maximal analytic continuation of a given holomorphic function:
-For a Riemann surface $X$, a point $x\in X$ and a function germ $f$ at $x$, you get a Riemann surface $Y$ together with an unbranched holomorphic map $Y\rightarrow X$ and a holomorphic function $F$ on all of $Y$, such that the germ of $F$ is in some natural way the same as that of $f$.
-Now the space of germs allows for the construction of a "maximal $Y$". You can think of it as the largest possible domain of definition for $f$. Because there may be different continuations of a representative of $f$ in different neighborhoods, you have to consider them all and combine them to get your larger space $Y$.
-The actual construction and proofs can be read, for instance, in O. Forster's "Lectures on Riemann Surfaces".<|endoftext|>
-TITLE: Continuous bijection from $\mathbb{R}^{2} \to \mathbb{R}$
-QUESTION [8 upvotes]: Can anyone give an example of a continuous bijection from $\mathbb{R}^{2} \to \mathbb{R}$
-
-REPLY [2 votes]: It is a well known result from basic topology that a continuous injective (bijective) map $$f:X\rightarrow Y$$
-from a compact space $X$ into a Hausdorff space $Y$ is closed (a homeomorphism).
-If you apply this result in your case to any closed disc in $\mathbb{R}^2$ and its image, you see that your bijection is a local homeomorphism, hence a homeomorphism. Using, e.g., Chandrasekhar's reasoning you get a contradiction.<|endoftext|>
-TITLE: Bounded linear operator on a Hilbert space
-QUESTION [5 upvotes]: I am having a bit of difficulty with the following homework problem.
-
-Let $\{x_n\}$ be an orthonormal basis in a Hilbert space $V$ over $\mathbb{C}$ and let $\{c_n\}_{n \in \mathbb{N}}$ be a fixed bounded sequence of complex numbers. Consider the bounded linear operator $T: V \to V$ defined by $T(x_n) = c_nx_n$.
-
-There are numerous parts to the question, but below are the ones I am having trouble with:
-
-
-Find the adjoint operator $T^*$ and its norm $||T^*||$
-If T is invertible, is its inverse continuous?
-Show that any linear operator on a normed space is continuous if the unit sphere is compact.
-
-
-
-I have managed to find $T^*$. As for the norm, I know that $||T^*|| = ||T||$. But is there an explicit value for $||T||$ that can be found?
I can't think of a way to find $||T||$ explicitly since we don't know what the norm on $V$ is.
-
-I am not really sure how to do this one. Firstly, I know that a linear operator is continuous iff it is bounded, so I need to show that a linear operator $T: V \to V$ is bounded if the unit sphere $\{x \in V : ||x|| = 1\}$ is compact. I have been told to assume that $T$ is unbounded and try to get a contradiction. If $T$ is unbounded then $||T|| = \sup_{||x|| = 1}\{||Tx||\} = \infty$. I don't know what to do from here.
-
-REPLY [4 votes]: We do know the norm on $V$, because we know that $\{x_n\}$ is an orthonormal basis. That means that each $v\in V$ can be written as $v=\sum_n a_nx_n$ with $a_n=\langle v,x_n\rangle$ and $\|v\|^2=\sum_n|a_n|^2$. Using this fact, you should be able to find $\|T\|$ in terms of the sequence $\{c_n\}$.
-This is typically false. If $c_n=0$ for some $n$, the map is not injective. If $0$ is in the closure of $\{c_n\}$, then the map is not surjective. The sum you mention would converge if the sequence $\left\{\frac{1}{c_n}\right\}$ is bounded, so that would be a good condition to focus on. You may also find it useful to note that a bijective bounded linear operator on a Hilbert space automatically has a bounded inverse.
-You could combine the facts that "Every linear mapping on a finite dimensional space is continuous" and the Characterization of normed vector spaces of finite dimension in terms of compactness of the unit sphere.<|endoftext|>
-TITLE: limit of power of fraction of sums of sines
-QUESTION [5 upvotes]: Find the following limit:
-$$\lim_{n\to\infty} \left(\frac{{\sin\frac{2}{2n}+\sin\frac{4}{2n}+\cdots+\sin\frac{2n}{2n}}}{{\sin\frac{1}{2n}+\sin\frac{3}{2n}+\cdots+\sin\frac{2n-1}{2n}}}\right)^{n}$$
-I thought of some $\sin(x)$ approximation formula, but it doesn't seem to work.
-
-REPLY [6 votes]: Let $f : [0, 1] \to [0, \infty)$ be of class $C^1$ and not identically zero. Then by the Mean Value Theorem, we have
-$$ \sum_{k=1}^{n} f \left( \tfrac{2k}{2n} \right) = \sum_{k=1}^{n} \left( f \left( \tfrac{2k-1}{2n} \right) + f' (x_{n,k}) \frac{1}{2n} \right) $$
-for some $x_{n,k} \in \left(\frac{2k-1}{2n}, \frac{2k}{2n} \right)$. Letting
-$$ I_n = \frac{1}{n} \sum_{k=1}^{n} f \left( \tfrac{2k-1}{2n} \right) \quad \text{and} \quad J_n = \frac{1}{n} \sum_{k=1}^{n}f' (x_{n,k}),$$
-we have
-$$I_n \to I := \int_{0}^{1} f(x) \; dx \quad \text{and} \quad J_n \to J := \int_{0}^{1} f'(x) \; dx.$$
-Therefore we obtain
-$$ \left[ \frac{\sum_{k=1}^{n} f \left( \frac{2k}{2n} \right)}{\sum_{k=1}^{n} f \left( \frac{2k-1}{2n} \right)} \right]^{n} = \left( \frac{nI_n + \frac{1}{2}J_n}{n I_n} \right)^{n} = \left( 1 + \frac{1}{n}\frac{J_n}{2I_n} \right)^{n} \xrightarrow[n\to\infty]{} \exp \left( \frac{J}{2I} \right). $$
-Now plugging in $f(x) = \sin x$, the corresponding limit is $\exp \left( \frac{1}{2} \cot \frac{1}{2} \right)$.
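-A quick numerical confirmation (Python; $n=10^5$ is an arbitrary cutoff):
-    import math
-    n = 10**5
-    num = sum(math.sin(k/n) for k in range(1, n + 1))             # sin(2k/(2n)) = sin(k/n)
-    den = sum(math.sin((2*k - 1)/(2*n)) for k in range(1, n + 1))
-    print((num/den)**n, math.exp(0.5/math.tan(0.5)))              # both ~ 2.497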
One says $\mathcal{F}$ is the $\sigma$-algebra generated by $\mathcal{C}$ and writes $\mathcal{F} = \sigma(\mathcal{C})$. - -REPLY [10 votes]: Recall the definition of a $\sigma$-algebra: it is a collection of subsets of some particular $X$, such that: - -It is closed under countable unions; -It is closed under taking complements (relative to $X$). - -From this, closure under countable intersections follows by De Morgan's laws. -Now suppose that $X_i\in\mathcal F$ for $i\in\mathbb N$. So for all $\alpha\in A$ we have $X_i\in\mathcal F_\alpha$. Since $\mathcal F_\alpha$ is a $\sigma$-algebra we have that $\bigcap X_i\in\mathcal F_\alpha$ for all $\alpha\in A$ and therefore $\bigcap X_i\in\bigcap\mathcal F_\alpha=\mathcal F$. -For complements, the principle is the same. -Furthermore, the same idea works to show that $\mathcal F$ contains $\mathcal C$. Lastly we need to show that this is indeed the smallest: -Suppose that $\mathcal S$ is a $\sigma$-algebra which contains $\mathcal C$; then for some $\alpha\in A$ we have $\mathcal S=\mathcal F_\alpha$, so it took part in the intersection which generated $\mathcal F$, therefore $\mathcal F\subseteq\mathcal S$. - -Further Reading: - -The $\sigma$-algebra of subsets of $X$ generated by a set $\mathcal{A}$ is the smallest sigma algebra including $\mathcal{A}$<|endoftext|> -TITLE: How to find irreducible polynomials over $\mathbb{Q}(i)$ with prescribed Galois group? -QUESTION [9 upvotes]: Here is my recent homework question: - -For each of the following five fields $F$ and five groups $G$, find an irreducible polynomial in $F[x]$ whose Galois group is isomorphic to $G$. If no example exists, you must justify that. -Fields $F$: $\mathbb{C}$, $\mathbb{R}$, $\mathbb{F}_{11}$, $\mathbb{Q}$, $\mathbb{Q}(i)$ -Groups $G$: $C_2$, $C_5$, $C_2\times C_2$, $S_3$, $D_4$ - -I've found the polynomials for the first 4 fields; however, I've got no idea about the $\mathbb{Q}(i)$ one. -Can anyone here help me? Thanks, and regards. -Now, I just found I made mistakes in looking for the $C_2\times C_2$ and $C_5$ ones. The polynomials I found are not irreducible (in fact they only have separable irreducible factors). Moreover, I don't know how to check the irreducibility of polynomials in $\mathbb{F}_{11}$. So I can't do this part either. - -REPLY [2 votes]: Let me try to give a fairly complete answer, covering those cases that @Jyrki has not; and let's hope that this will not prove to be too long. -For $\mathbb{C}$, any Galois group is trivial, since the field is algebraically closed. For $\mathbb{R}$, the only nontrivial Galois group is $C_2$, and as we know, the polynomial $X^2+1$ will get the extension. For ${\mathbb{F}}_{11}$, all Galois groups are cyclic, so that $C_2$ and $C_5$ are possible, but the other three are not; and @Jyrki has covered the two possible cases. The remaining two fields are $\mathbb{Q}$ and the Gaussian numbers ${\mathbb{Q}}(i)$, which I will call $k$ from here on. -For $\mathbb{Q}$, all possibilities occur, and as we all know, you can use $X^2+1$ for an extension with Galois group $C_2$; this new field is our $k$. For $C_2\times C_2$, you can take any “biquadratic” field, compositum of two quadratic extensions. Although it's a very special one, I choose as an example the field ${\mathbb{Q}}(\sqrt{2},i)$, and since it's going to be important to us later, I choose to name it $E$. The Galois group consists of the identity, complex conjugation (leaving $\sqrt2$ fixed), and the two automorphisms that send $\sqrt2$ to $-\sqrt2$: one leaving $i$ fixed, the other sending $i$ to $-i$. 
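-(A quick sanity check on $E$ — my own addition, using Python's SymPy, and not needed for the argument. It confirms that $\sqrt2+i$ is a primitive element of degree $4$, so $E$ really is biquadratic of degree $4$ over $\mathbb{Q}$:)

    from sympy import sqrt, I, minimal_polynomial, factor, Symbol

    x = Symbol('x')
    p = minimal_polynomial(sqrt(2) + I, x)
    print(p)                               # x**4 - 2*x**2 + 9, so [E:Q] = 4
    # over E itself this polynomial splits into linear factors, as it must:
    print(factor(p, extension=[sqrt(2), I]))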
Please notice that $(1+i)/\sqrt2$ is in $E$ and it’s a primitive eighth root of unity. I will call this complex number $\zeta$, and observe that it’s a primitive element for $E$, its irreducible polynomial over $\mathbb{Q}$ being $X^4+1$, the eighth cyclotomic polynomial. I will use without proof the fact that $\{1,\zeta,\zeta^2,\zeta^3\}$ form an integral basis for the (algebraic) integers of $E$, that is, an algebraic integer in $E$ is a $\mathbb{Z}$-linear combination of those powers of $\zeta$. -Continuing with $\mathbb{Q}$, the remaining groups $S_3$ and $D_4$ are easy: adjoin the cube root of a squarefree positive integer $m>1$, say $\alpha$, a root of $X^3-m$, and the splitting field has to contain $\omega=(-1+\sqrt{-3})/2$, a primitive cube root of unity. I will not detail the action of the Galois group, nor write down the minimal polynomial for $\alpha+\omega$, which will be a primitive element. For the group $D_4$, the dihedral group of order eight, it’s almost the same. Take an integer $m$ that is neither a square nor the negative of a square, and adjoin $\alpha=\root{4}\of{m}$. Again the extension is not normal, but the normal closure must contain $i$. Two automorphisms of the field are $\sigma\colon \{i\mapsto-i, \alpha\mapsto\alpha\}$ and $\tau\colon\{i\mapsto-i, \alpha\mapsto i\alpha\}$. Note that $\tau\circ\sigma$ leaves $i$ fixed, but sends $\alpha$ to $i\alpha$, so is an automorphism of period four. This is just what we need for dihedral of order eight. Further details of this case I leave as an exercise. -The most interesting case is our field $k$ of Gaussian numbers. Almost everything goes as for $\mathbb{Q}$, except the problem of finding an extension whose Galois group is $D_4$. I have found one example of very special nature, but there must be many more. The problem, you see, is that if you just adjoin the fourth root of something, you’ll get a cyclic extension of degree four, already normal. Kummer theory tells us that if the characteristic does not divide $m$, and if the $m$-th roots of unity are in the base field, then the adjunction of the $m$-th root of an element, say $u$, gives a normal extension, of degree $m'$ dividing $m$, and with cyclic Galois group. In particular, if $m$ is $4$, and $u$ is not a square, then you get a cyclic extension of degree four. -Thus the standard elementary strategy for finding a $D_4$-extension of $\mathbb{Q}$ will not work for the base field $k$, because $i$ is already there. The nature of the dihedral group is that it has a cyclic normal subgroup $C$ of order four, index two, so that an extension of $k$ with group $D_4$ must start with a quadratic extension of $k$, and then jump up to a cyclic quartic extension of that: $k\subset ?\subset L$, where the group of ? over $k$ is $D_4/C$, and the group of $L$ over ? is $C$. My choice for ? is our field $E$, gotten by adjoining $\sqrt2$ to $k$; recall that $E$ is the field of eighth roots of unity. And the nonsquare element of $E$ whose fourth root I will adjoin to get $L$ is the unit $u=1+\sqrt2$. Let’s call $\beta=\root{4}\of{u}$, so that its minimal polynomial over $E$ is simply $X^4-u$. Now $E(\beta)$ is normal over $E$, but $k(\beta)$ is not normal over $k$. Indeed, the normal closure must at least contain a fourth root of $\overline u=1-\sqrt2$. But if $X^4-u$ is the minimal polynomial for $\beta$ over $E$, surely $(X^4-u)(X^4-\overline u)$ must be the minimal polynomial for $\beta$ over $k$. This multiplies out to $X^8-2X^4-1=f(X)$. 
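-(The arithmetic just claimed is easy to machine-check — my own addition, a SymPy sketch on which nothing below depends:)

    from sympy import sqrt, I, symbols, expand, factor

    X = symbols('X')
    f = expand((X**4 - (1 + sqrt(2))) * (X**4 - (1 - sqrt(2))))
    print(f)                              # X**8 - 2*X**4 - 1
    print(factor(f, extension=sqrt(2)))   # the two quartic factors over Q(sqrt(2))
    print(factor(f, extension=I))         # stays irreducible over k = Q(i)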
I’ll show that adjunction of one root of this polynomial to $k$ brings along all other roots, so that we have a normal extension of degree eight, and I’ll show how the Galois group acts. -Consider one root of $f$. Without loss of generality, we may assume that it’s a root of the factor $X^4-(1+\sqrt2)$, even assume that it’s the unique positive real root $\beta$ of this, if you like. Call $L_0=k(\beta)$. With $\beta$ in our field $L_0$, we have its fourth power $u=1+\sqrt2$, and so we have $\sqrt2$, and because $i$ is in the base field, we also have $\zeta$ in $L_0$. That was our eighth root of unity in the first quadrant. Consequently, we find $\zeta/\beta$ in $L_0$. But what is the fourth power of this? It’s $-1/u=\overline u$. So $\zeta/\beta$ is a root of the other factor of $f$. And since $i$ is in $L_0$, we have all roots of both factors of $f$. That is, $L_0$ is our normal extension of $k$ of degree eight. The Galois group? I need to specify two involutions whose composition is of period four. Easy: $\sigma$ sends $\beta$ to $\zeta/\beta$, and $\tau$ sends $\beta$ to $i\zeta/\beta$.<|endoftext|> -TITLE: Non-abelian $p$-group; abelian subgroups of index $p$ -QUESTION [10 upvotes]: I'm trying to prove the following problem: - -(a) Let $G$ be a non-abelian $p$-group with an abelian subgroup of index $p$. Then the number of abelian subgroups of $G$ of index $p$ is either $1$ or $p+1$; in the latter case the center of $G$ has index $p^2$. -(b) A nilpotent group of class $3$ and order $16$ has exactly one cyclic subgroup of index $2$. - -I've made some progress towards (a) and tried to use (a) to show (b). But despite thinking about this for a few days I'm missing something and I can't quite finish off my argument. Here is an outline of the main results that I have managed to show; essentially, I think I've shown that for part (a), if there is more than one then there must be at least $p+1$. -Assume that there is more than one abelian subgroup of index $p$. Let $H$ and $K$ be two of them. Then they are both maximal and hence normal in $G$, and then $G=HK$. Then I was able to show that $H \cap K \leq Z(G)$, and then it follows that $Z(G)$ has index $p^2$ (as $G$ is non-abelian). -So now I want to show that there are exactly $p+1$ abelian subgroups of index $p$. Suppose $L$ is a subgroup of $G$ of index $p$ containing $H \cap K$. Then $L \lhd G$. Since $H \cap K \lhd G$ and it has index $p^2$, then $L/(H \cap K)$ is cyclic of order $p$. From this I was able to show that $L$ is abelian. -Now consider $G/(H \cap K)$, which has order $p^2$; however it can't be cyclic since it has two subgroups of order $p$, namely $H/(H \cap K)$ and $K/(H \cap K)$. So $G/(H \cap K) \cong C_p \times C_p$, and it follows that it has $p+1$ subgroups of order $p$. This gives $p+1$ subgroups of $G$ of index $p$. By the above, each of these is abelian. -Now I need to show that there can't be more than $p+1$ such subgroups... but I'm a bit stuck here. -As for part (b), I thought I might be able to use (a) for this, if I can show that there does exist an abelian subgroup of order 8 and that it's cyclic. Somehow I need to use the fact that the group is of class 3. I tried looking at the upper and lower central series but can't see a way of applying them. I would be grateful for any pointers in the right direction. - -REPLY [6 votes]: Note that in fact any abelian subgroup of index $p$ must contain the center: for if $H$ is maximal and abelian, and $Z(G)$ is not contained in $H$, then $G=HZ(G)$, which would make $G$ abelian. 
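-(Digression, my own addition: for the smallest case $p=2$, claim (a) can be verified by brute force on $G=D_4$, the dihedral group of order $8$, realized as permutations of the square's vertices. The script below finds exactly $p+1=3$ abelian subgroups of index $2$, and a center of index $p^2=4$. Back to the general argument after this sketch.)

    from itertools import product

    def compose(p, q):                  # (p*q)(i) = p(q(i))
        return tuple(p[q[i]] for i in range(4))

    e = (0, 1, 2, 3)
    r = (1, 2, 3, 0)                    # rotation of the square
    s = (1, 0, 3, 2)                    # a reflection

    def closure(gens):
        H = {e} | set(gens)
        while True:
            new = {compose(a, b) for a in H for b in H} - H
            if not new:
                return frozenset(H)
            H |= new

    G = closure({r, s})                 # D_4, order 8
    subgroups = {closure(t) for n in (1, 2) for t in product(G, repeat=n)}

    def is_abelian(H):
        return all(compose(a, b) == compose(b, a) for a in H for b in H)

    abelian_index_p = [H for H in subgroups
                       if 2 * len(H) == len(G) and is_abelian(H)]
    center = [g for g in G if all(compose(g, h) == compose(h, g) for h in G)]
    print(len(abelian_index_p))         # 3 = p + 1
    print(len(G) // len(center))        # 4 = p**2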
Therefore, if $H$ and $K$ are both abelian of index $p$, then $Z(G)\subseteq H\cap K$. If you have proven that $H\cap K\subseteq Z(G)$, then in fact we have that $H\cap K=Z(G)$ for any two distinct abelian subgroups of index $p$. -(And indeed: if $H\neq K$ are both abelian of index $p$, then for every $g\in G$ there exists $h\in H$, $k\in K$ with $g=hk$; given $x\in H\cap K$ we have $gx = (hk)x = x(hk) = xg$, since $x$ commutes with everything in $K$ and with everything in $H$, so $H\cap K\subseteq Z(G)$). -Conversely, if $M$ is a subgroup of $G$ that contains $Z(G)$ and is of index $p$, then it must be abelian (since $M/Z(G)$ is of order $p$). Thus, every abelian subgroup of $G$ of index $p$ corresponds to a subgroup of $G/Z(G)$, and you are done. -For part (b), show that (a) implies that if $G$ has more than one abelian subgroup of index $p$, then $G$ is of class exactly $2$: by the argument in (a), you know that $Z(G)$ has index $p^2$ in $G$, but that means that $G/Z(G)$ is abelian, which means that $[G,G]\subseteq Z(G)$. -So if $G$ is of class $3$ and order $16$, then it has at most one abelian subgroup of order $8$. The center must be of order $2$: if the center were of order $4$, then $G/Z(G)$ would be of order $4$, hence abelian, so again we have that $G$ would be of class $2$. Thus, $G/Z(G)$ is of order $8$, and cannot be abelian. Therefore, $G/Z(G)$ is one of the two nonabelian groups of order $8$. If $G/Z(G)$ were quaternion, then it would have three cyclic subgroups of order $4$: each of them pulls back to a subgroup $H$ of $G$ of order $8$, containing the center; and it is not hard to show that they are all abelian, contradicting the fact that $G$ has at most one abelian subgroup of order $8$. So $G/Z(G)$ must be dihedral. Verify that you get a cyclic subgroup of order $8$ in $G$ in this case. -Added. So, let us suppose that $G$ is of class $3$ and order $16$, and has a unique abelian subgroup of index $2$. We know $Z(G)$ is of order $2$, and that $G/Z(G)$ is dihedral of order $8$. Let $z\in G$ generate $Z(G)$, and let $x\in G$ map to a generator of the cyclic group of order $4$ in $G/Z(G)$. We know that $x$ has order either $4$ or $8$ in $G$. We aim to show that it has order $8$. -If $x$ has order $8$, we are done. Otherwise, $|x|=4$. Let $y\in G$ be such that $y$ maps to an element of order $2$ of $G/Z(G)$ outside the cyclic subgroup of order $4$, so that $yx = x^3y$ or $yx=x^3yz$. In the former case, we have $yx^2 = x^6y = x^2y$, so $x^2$ commutes with $x$, $y$, and $z$; in the latter case we would have $yx^2 = x^3yxz = x^6yz^2 = x^2y$, so again $x^2$ commutes with $x$, $y$, and $z$. Since $x$, $y$, and $z$ generate $G$, it follows that $x^2\in Z(G)$. But then the image of $x$ in $G/Z(G)$ would have order $2$, which is a contradiction. Therefore, $x$ cannot have order $4$, and so must have order $8$, as desired.<|endoftext|> -TITLE: Very special rational points on curves over number fields -QUESTION [5 upvotes]: For some reason, I'm convinced the answer to the following question should be (obviously) negative, but I can't come up with a good reason. -Does there exist a number field $K$, a smooth projective geometrically connected curve $X/K$ of genus $g\geq 2$ with a $K$-rational point $x$ such that, for any number field $L/K$, $x$ does not intersect $ X(L)$? -Let me make the last part of the question more precise. Firstly, let $\mathcal X$ be the minimal regular model of $X$ over $O_K$. 
When I say that $x$ does not intersect $X(L)$ I mean that the intersection product $(x,y)_{\mathcal X}$ on $\mathcal X$ equals zero for all $y\in X(L)-\{x\}$. (Here $x$ and $y$ also denote their Zariski closures in $\mathcal X$.) -I think it could happen that some $K$-rational point does not intersect any other $K$-rational point. Take for example a curve with $X(K) = \{pt\}$. For some reason I do think that there should always be some (other) $L$-rational point which intersects this $K$-rational point. - -REPLY [5 votes]: The answer is no (for any genus). Fix a closed point $s\in \mathrm{Spec}(O_K)$ such that $\mathcal X_s$ is smooth. Let $x_s$ be the intersection point of $\overline{\{ x\}}$ with the fiber $\mathcal X_s$. Look (Zariski) locally at $\mathcal X$ around $x_s$. Lift a generator of the maximal ideal of $O_{\mathcal{X}_s, x_s}$ to $O_{\mathcal X, x_s}$; then you get a horizontal curve in $\mathcal X\otimes O_{S,s}$ passing through $x_s$. Taking the Zariski closure of the horizontal curve in $\mathcal X$ then gives you a horizontal curve in $\mathcal X$ passing through $x_s$. Its intersection with $\overline{\{ x\}}$ is positive, and its generic fiber is a point of $X(\bar{K})$, hence of $X(L)$ for some finite extension $L/K$. -Finally, there are infinitely many local liftings of a given generator (just perturb a lifting by adding a multiple of the uniformizing element of $O_{S,s}$), so at least one of them will give a horizontal curve different from $\overline{\{ x\}}$.<|endoftext|> -TITLE: Looking for intuition behind coin-flipping pattern expectation -QUESTION [8 upvotes]: I was discussing the following problem with my son: -Suppose we start flipping a (fair) coin, and write down the sequence; for example it might come out HTTHTHHTTTTH.... I am interested in the expected number of flips to obtain a given pattern. For example, it takes an expected 30 flips to get HHHH. But here's the (somewhat surprising) thing: it takes only 20 expected flips to get HTHT. -The tempting intuition is to think that any pattern XXXX is equiprobable since, in batches of 4 isolated flips, this is true. But when we are looking for embedded patterns like this, things change. My son wanted to know why HTHT was so much more likely to occur before HHHH but I could not articulate any kind of satisfying explanation. Can you? - -REPLY [6 votes]: The other answers are perfectly good, but I feel a picture is worth a thousand words in this case. - -When waiting for HHHH, you go directly to jail (do not pass go, do not collect $200) a lot more often than when you are waiting for HTHT.<|endoftext|> -TITLE: Every closed set in $\mathbb R^2$ is boundary of some other subset -QUESTION [16 upvotes]: A problem is bugging me many years after I first met it: - -Prove that any closed subset of $\mathbb{R}^2$ is the boundary of some set in $\mathbb{R}^2$. - -I have toyed with this problem several times over the last 20 years, but I have never managed to either prove it, or conversely prove that the question as posed is in some way wrong. -I can't remember which book I found the question in originally (but I am pretty sure that is exactly the question as it appeared in the book). -Any help, with either the topology or the source would be gratefully received! - -REPLY [15 votes]: There is a very elementary way to solve this, that is also much more widely applicable. - -Let $Y$ be a topological space that can be partitioned into dense subsets $D$ and $E$. 
If $X \subset Y$ is closed, then there is a $V \subset X$ such that $\operatorname{Fr} V = X$. - -Take $V = X \setminus (D \cap \operatorname{Int} X)$. Since $V \subset X$ and $X$ is closed, we have $\operatorname{Cl} V \subset X$, so in particular $\operatorname{Fr} V \subset X$; moreover $V$ contains both $X \setminus \operatorname{Int} X$ and $E \cap \operatorname{Int} X$, which is dense in $\operatorname{Int} X$, therefore $\operatorname{Cl} V = X$. On the other hand $Y \setminus X$ is dense in $Y \setminus \operatorname{Int} X$ and $D \cap \operatorname{Int} X$ is dense in $\operatorname{Int} X$, therefore $\operatorname{Int} V = \emptyset$. It follows that $ \operatorname{Fr} V = X$. - -Some additional information: -The question probably came from Willard's General topology, problem 3 B. -As Henno Brandsma pointed out in a comment, a space that can be partitioned into -two dense subsets is called resolvable. For some examples of irresolvable spaces, see the answer to "Is a perfect set a boundary?". -A somewhat stronger result can be found in "Any two disjoint open sets are the interior and exterior of some set".<|endoftext|> -TITLE: Is there a non-cyclic group with every subgroup characteristic? -QUESTION [16 upvotes]: Suppose $G$ is a group such that every subgroup of $G$ is a characteristic subgroup. Does this mean that $G$ is cyclic? I remember reading that this is true in the finite case, is that right? What about the infinite case? - -REPLY [6 votes]: Arturo has proffered the Prüfer $p$-group. However, this is infinitely generated, which leaves us with the following question: - -Does there exist a finitely generated example? - -Answer: No, there does not. -Proof: As has already been pointed out, our group is necessarily abelian, and it is well-known that every finitely generated abelian group is of the form $\mathbb{Z}^n\times C_{m_1} \times\ldots\times C_{m_i}$ for some finite list of natural numbers $(n, m_1, \ldots, m_i)$ with $n$ possibly zero. We can assume $n>0$ here, as we are looking for an infinite example. -Take the given generators for $G$ in terms of its direct-product decomposition, $G=\langle a_1, \ldots, a_n, b_{m_1}, \ldots, b_{m_i}\rangle$. Clearly the map which sends $a_1$ to $a_1c$, for $c$ some generator other than $a_1$, and keeps every other generator fixed is an automorphism of $G$, but $\langle a_1\rangle\neq\langle a_1c\rangle$, so not every subgroup of $G$ is characteristic. -Thus, there can be no generator other than $a_1$, and so $G=\mathbb{Z}$ is cyclic. -We thus have another question: - -If every subgroup of $G$ is characteristic, is $G$ locally cyclic? - -I am not sure of the answer to this, but I suspect it is "yes".<|endoftext|> -TITLE: Examples of mathematicians who lost interest in Math and got interested again? -QUESTION [31 upvotes]: I am looking for some examples, and hopefully some short biographies on mathematicians who lost interest in Math along the way, and somehow got rejuvenated again. (Better still, who managed to do something interesting) -I hope to gather these examples to encourage myself. Any help would be very much appreciated. Thanks! - -REPLY [4 votes]: It's not 100% clear that Grothendieck lost interest in mathematics, but it seems that he didn't do any mathematical research during almost the whole 1970's. -He began to release mathematical research papers in the early 1980's. -From Wikipedia: - -After leaving the IHÉS, Grothendieck became a temporary professor at Collège de France for two years. 
A permanent position became open at the end of his tenure, but the application Grothendieck submitted made it clear that he had no plans to continue his mathematical research. The position was given to Jacques Tits. -He then went to Université de Montpellier, where he became increasingly estranged from the mathematical community. Around this time, he founded a group called Survivre, which was dedicated to antimilitary and ecological issues. His mathematical career, for the most part, ended when he left the IHÉS.<|endoftext|> -TITLE: What's so special about the 4 fundamental subspaces? -QUESTION [14 upvotes]: I was reading Gilbert Strang's book on Linear Algebra (along with his lectures) and I feel that he is emphasizing that the 4 fundamental subspaces (Column Space, Row Space, Null Space and Null Space of $A^T$) form the "crux" or "heart" of Linear Algebra and that the understanding of them is crucial for further study. -But what I don't understand is WHY? Why are they so important? -I understand that : - -Row Space and Null Space are orthogonal. (Same for other two) -Their dimensions are governed by the rank of the matrix. - -but beyond that I didn't find anything that I found profound or interesting. Am I missing something or is Prof. Strang over-rating the 4 fundamental subspaces? - -REPLY [16 votes]: The core of studying matrices is to study linear transformations between vector spaces. These can be realized as matrix multiplication on the left (or right) of column (or row) vectors. -If we are in this setup: $x\mapsto Ax$ for a column vector $x$ and appropriate matrix $A$, then the image of the linear transformation will be spanned by the columns of $A$. -The kernel of the transformation (nullspace), the set of all $x$ such that $Ax=0$, is important for understanding the solutions to some matrix equations. You probably have already learned that if $x_0$ is a solution to $Ax=b$, then every other solution is given by $x_0+k$ where $k$ is in the nullspace. -This all has an analogous explanation on the other side. If we are in this setup: $x\mapsto xA$ for a row vector $x$, then the image of the linear transformation is now spanned by the rows of $A$. -Talking about the nullspace of $A^T$ is just a fancy way of dressing up the "left nullspace" of $A$, since $xA=0$ iff $A^T x^T=0$. The nullspace is now the set of all $x$ such that $xA=0$, and you can draw the same conclusions about solutions to $xA=b$. -In short, these four spaces (really just two spaces, with a left and a right version of the pair) carry all the information about the image and kernel of the linear transformation that $A$ is effecting, whether you are using it on the right or on the left.<|endoftext|> -TITLE: Should one understand a proof of every important theorem in a field of mathematics to research? -QUESTION [18 upvotes]: I guess not. -However, I think I have to understand proofs of some of them, if not all of them. -So what is the criterion, if any? -For what kind of theorem can I get away without knowing the proof? -EDIT -For example, let me take the main theorems of class field theory. -It's rather easy to understand what they say, but it's difficult to understand the proofs. -Do I have to understand those proofs to research on algebraic number theory? -EDIT -Another example: Hironaka's theorem on resolution of singularities of an algebraic variety in characteristic 0. -I guess most algebraic geometers can do their research without understanding its proof. -EDIT -I'll add more examples. 
- -The theorem that singular homology groups satisfy the Eilenberg-Steenrod axioms. -Most of the basic theorems of homological algebra, for example, the theorem that a filtered complex has a spectral sequence. -The Feit-Thompson theorem that every finite group of odd order is solvable. -The classification of finite simple groups (CFSG). - -EDIT -I believe this question is becoming more and more important because mathematics is developing faster and faster than before. I guess you are going to have to give up understanding the proofs of all the important theorems in your field, regardless of whether you want to understand them or not. -EDIT -The existence of the field of real numbers. -I'm almost certain that one doesn't need to know its proof to do analysis. -All one needs to know are its properties. - -REPLY [27 votes]: I think it is easier to deal with your examples than with the general question. -Class field theory: When I teach class field theory, I teach the statements of the results, initially in classical language and then in idelic language. I spend a lot of time discussing examples, special cases, and applications, such as to classical reciprocity laws, the Kronecker--Weber theorem, and CM of elliptic curves. My goal is to explain what a class field is, what an abelian extension is, why a priori they are very different concepts, but that they ultimately turn out to be the same. -In terms of proofs, I explain some arguments, especially those that show how group cohomology can be made to interact with statements from classical algebraic number theory (such as Dirichlet's unit theorem) to get a surprising amount of leverage. (The kind of topics I have in mind are what are classically known as genus theory, which leads to the proof of one of the two main inequalities of class field theory.) The most recent time I taught it, I also sketched the proofs of the basic facts about $L$-functions, following Tate's thesis, and used them to prove the other inequality. -My goal in selecting topics (apart from time limitations) is to illustrate important ideas, focussing on those which recur throughout the theory. -For a self-studier, some of the background I am discussing can be found in Franz Lemmermeyer's on-line book; other parts are in Cox's book. Some of them are hard to find in the standard text-book literature. -If you are reasonably comfortable with group cohomology, then you should read the chapter by Washington in Cornell--Silverman--Stevens. The duality and Euler characteristic theorems that he states there (largely due to Tate) are (non-trivial) consequences of class field theory, but assuming them, you can rederive most of class field theory. Being able to actually carry out this exercise is a reasonable measure of mastery of the algebraic aspects of class field theory, and is probably more valuable than learning all the technicalities in the proof of class field theory itself. -Resolution: At one point, in terms of the combined measures of importance of its statement, and (at least reputed) difficulty of its proof, Hironaka's theorem was an extreme point in algebraic geometry. I remember that when I was a graduate student in the mid 90s, there was one particular faculty member who was legendary for his technical abilities, and one of the rumours about him was that he had actually read Hironaka's proof. -This changed a lot after de Jong proved his theorem on alterations. 
De Jong's argument was much more accessible, and more self-evidently geometric, than Hironaka's, and (or at least this was my impression) it led to a new burst of work on all kinds of questions related to resolution of singularities, semistable degeneration, and factorization of birational maps. Hironaka's proof was revisited, and reworked in a much more accessible fashion. -There is now a wonderful book by Kollar explaining resolution, and it is very geometric and very accessible. I certainly don't require my students to read it, but I've recommended it to those of them who are more geometrically inclined. -The point is that the nature of the proof of resolution, and the way it fits into the rest of algebraic geometry, changed a lot over the last fifteen or so years. With Kollar's book available, one can learn the ideas of resolution as part of a general education in algebraic geometry, and this greatly enhances the value of learning the proof. -One realization (due to Grothendieck, I think) is that resolution could be used as a black-box to prove other things (such as his result on equality of algebraic and analytic de Rham cohomology for all smooth varieties). This kind of application reaches its peak in Deligne's mixed Hodge theory. When I was a student, and Hironaka's argument itself remained shrouded in mystery for most of us, this was the manner by which we came to grips with resolution, and its geometric significance. It remains one of the fundamental applications, and now that we have new insight into resolution and its proof in general, there is hope that we might be able to extend this sort of application to other contexts. -General conclusions: Probably there aren't any specific conclusions. A general lesson that I take away from my own experience of learning math myself, and teaching students, is that it is good to focus on learning arguments, techniques and examples that have significance beyond their immediate context. As you come to work on a specific piece of research, your focus will also have to become correspondingly more specific, but until you know exactly what specific direction you should focus on, I think the sentiments of the previous sentence are fairly sound advice. (See also my advice on learning arithmetic geometry in this thread.) -Added in response to OP's edit: The later examples that you give are of greatly divergent difficulty. The basic theorems of homological algebra are essentially exercises (there is a well-known remark by Lang to this effect), and easily learnt. The proof that singular homology satisfies the Eilenberg--Steenrod axioms is also fairly straightforward (or at least the key idea --- simplicial approximation --- is easily grasped). Treating singular homology and homological algebra as black-boxes is more or less unnecessary, and since the proofs are closely aligned with the basic statements of the theory, I think it would also be a mistake. -The results in group theory that you mention are of a different nature. They also come up less often, but when they do, they are often treated as black-boxes. 
People are also known, though, to go to efforts to avoid appealing to the classification of finite simple groups, because of its black-box nature, and because of lingering questions about the reliability of what's inside the box.<|endoftext|> -TITLE: Product of two ideals doesn't equal the intersection -QUESTION [12 upvotes]: The product of two ideals is defined as the set of all finite sums $\sum f_i g_i$, with $f_i$ an element of $I$, and $g_i$ an element of $J$. I'm trying to think of an example in which $IJ$ does not equal $I \cap J$. -I'm thinking of letting $I = 2\mathbb{Z}$, and $J = \mathbb{Z}$, and $I\cap J = 2\mathbb{Z}$? Can someone point out anything faulty about this logic of working with an even ideal and then an odd ideal? -Thanks in advance. - -REPLY [8 votes]: In a PID, we have $(a) \cap (b) = (\mathrm{lcm}(a,b))$, whereas we have $(a) \cdot (b) = (a \cdot b)$. So this ideal-theoretic question becomes a number-theoretic one and we see: $(a) \cap (b) = (a \cdot b)$ iff $a \cdot b$ is an lcm of $a$ and $b$ iff $a,b$ are coprime.<|endoftext|> -TITLE: A seeming paradox in a coin-flipping game -QUESTION [5 upvotes]: This is related to my other question on a similar topic. -Suppose we play the following game: we flip a coin repeatedly and record the outcomes. For example we might get HHTTTHTTHHTTT.... Now Alice and Bob each choose distinct patterns of the same length, called $A$ and $B$, respectively. Whichever player's pattern appears first wins the game. -Now suppose Alice chooses $A=$HHHH and Bob chooses $B=$HHHT. First, let's notice that the expected number of flips to obtain $A$ is 30, but for $B$ it's only 16. This would seem to imply that Bob is very likely to win this game most of the time. -However, thinking about the game in another way, neither Alice nor Bob can win until HHH occurs. And after this, the game ends on the next flip with each player winning equiprobably. -This seems counter-intuitive to me: we have two events, one expected to occur much sooner than the other, but relative to each other the ordering is 50-50. What am I missing? - -REPLY [12 votes]: The odds of either player winning are 50%. However, imagine for a moment that after that player wins, you were to continue flipping until the other player sees their chosen sequence. -If Bob wins, that means the last four flips were HHHT, which means Alice cannot use those flips as part of her sequence, essentially meaning she's starting over, and it'll be an expected 30 more flips before she sees HHHH. -However, if Alice wins, then the last four flips were HHHH, which means Bob has a 50% chance that the very next flip gives him his sequence. Even if it doesn't, there's then another 50% chance the very next flip after that will complete a HHHT string. If you do the math, I believe Bob would expect to see his finished sequence on average 2 flips after Alice wins. -So the apparent paradox occurs because the expected number of flips each player must wait includes waiting after the other player has already won, in which case Alice would wait much longer than Bob. But if you stop the game as soon as one player wins, each player has a flat 50% chance of seeing their sequence.<|endoftext|> -TITLE: A representation is semisimple if its restriction to a subgroup of index prime to Char(F) is semisimple -QUESTION [5 upvotes]: Let $G$ be a finite group and $H$ a subgroup whose index is prime to $p$. Suppose $V$ is a finite-dimensional representation of $G$ over $\mathbb{F}_p$ whose restriction to $H$ is semisimple. 
Prove that $V$ is semisimple. - -REPLY [5 votes]: (Expanded form of the answers from the comments.) -A module is semisimple iff every submodule has a direct complement. -Aschbacher's Finite Group Theory on pages 39-40 proves that if $U$ is a sub $FG$-module of $V$ and $U$ has an $FP$-module direct complement, then it has an $FG$-module direct complement, where $P$ is a Sylow $p$-subgroup of $G$. The argument is an averaging argument as described. -Since $H$ has index coprime to $p$, $H$ contains a Sylow $p$-subgroup $P$ of $G$. An $FH$-module direct complement is in particular an $FP$-module direct complement (or just replace $n$ in Aschbacher's proof with $[G:P]$ and allow $P=H$ to be any subgroup of index coprime to $p$), and so Aschbacher's result answers your question. - -The same proof is phrased in more complicated language on pages 70, 71, and 72 of Benson's Representations and Cohomology Part 1. -A module is called relatively $H$-projective if $G$-module homomorphisms that split as $H$-module homomorphisms also split as $G$-module homomorphisms. In other words, you just want to show that every $G$-module is relatively $H$-projective when $[G:H]$ is invertible in the ring, which is Corollary 3.6.9.<|endoftext|> -TITLE: Show $\mathbb{Q}[x,y]/\langle x,y \rangle$ is Not Projective as a $\mathbb{Q}[x,y]$-Module. -QUESTION [14 upvotes]: Disclaimer: Though I have been re-reading my notes, and have scanned the relevant texts, my commutative algebra is quite rusty, so I may be overlooking something basic. -I want to show $\mathbb{Q} \simeq \mathbb{Q}[x,y]/\langle x,y \rangle$ is not projective as a $\mathbb{Q}[x,y]$-module. I've tried two methods, neither of which gets me to the conclusion. -I first tried what seems to be sort of standard when proving that something is not projective: show that the lifting of the identity yields a contradiction. So I let $\pi: \mathbb{Q}[x,y] \to \mathbb{Q}[x,y]/\langle x,y \rangle$ be my surjection given by $f \mapsto \bar{f}$ and the identity map is $id: \mathbb{Q}[x,y]/\langle x,y \rangle \to \mathbb{Q}[x,y]/\langle x,y \rangle$. So all I need to show is that a homomorphism $\phi: \mathbb{Q}[x,y]/\langle x,y \rangle \to \mathbb{Q}[x,y]$ such that $\pi \circ \phi =id$ does not exist. But if $$\pi(f) = \bar{f} = \overline{a_0+a_{10}x+a_{01}y+a_{11}xy+\cdots+a_{n0}x^n + a_{0n}y^n} = \bar{a_0}$$ then doesn't the map $\bar{a_0} \mapsto a_0$ work? After all, $$ (\pi\circ \phi)(\bar{a_0}) = \pi(a_0) = \bar{a_0} = id(\bar{a_0}).$$I was concerned at first about this not being well defined, but since every element of a particular coset has the same constant term, it does not depend on choice. So either I have already made a mistake, or this is just the wrong map from which to derive a contradiction. -The next thing I tried used a different characterization of projective modules: that $P$ is a projective $R$-module iff there is a free module $F$ and an $R$-module $K$ such that $F \simeq K\oplus P$. In our case, this means there is a free module $F$ and a $\mathbb{Q}[x,y]$-module $K$ such that -$$
\mathbb{Q}[x,y] \oplus \cdots \oplus \mathbb{Q}[x,y] \simeq F \simeq K \oplus \mathbb{Q}[x,y]/\langle x,y \rangle \simeq K \oplus \mathbb{Q}.$$ -From here, my concern is that I am waving my hand too much when I say: obviously this cannot be true, since every element of the LHS, which is a tuple of polynomials, cannot be broken up with one chunk in $K$ and the other in $\mathbb{Q}$. Do you agree? If so, how can I make this argument more rigorous? 
-One more trouble: nowhere in either of these methods did I explicitly use that the polynomial ring here is only in two variables. The fact that the question did not use $\mathbb{Q}[x_1,\ldots,x_n]$ instead of $\mathbb{Q}[x,y]$ worries me. - -REPLY [10 votes]: A projective module over a domain has no nonzero torsion element, since it is a submodule of a free module. - But every element of your module is a torsion element: it is killed by $x$.<|endoftext|> -TITLE: Standard model of ZFC and existence of model of ZFC -QUESTION [5 upvotes]: The assumption that there exists a standard model of ZFC in a given universe is stronger than the assumption that there exists a model. - -So, are the existence of a model of ZFC and the existence of a standard model of ZFC only assumptions, not theorems? - -REPLY [5 votes]: We usually assume that there is a grand universe of sets which is not a set, and we often assume that this universe is a model of ZFC. It does not prove ZFC is consistent, though, because it is too large for us to capture. -It may be the case, however, that inside our universe of sets there is a set $M$ and a binary relation $E$ over $M$ such that $\langle M,E\rangle$ is a model of set theory. This is to say that ZFC is consistent, or that it has a model. -From the point of view of $M$ the relation $E$ is $\in$. However, we, as all-knowing beings, know that $E$ may be something else. -We say that $M$ is well-founded if $E$ is well-founded as a relation in the universe. It is important to remark that $M$ always thinks that $E$ is well-founded, but it might be the case that $M$ does not know about any decreasing sequence. -Well-founded models are good. They are good because we have a theorem known as the Mostowski collapse lemma which tells us that if $\langle A,R\rangle$ is well-founded then there is some $B$ such that $\langle A,R\rangle\cong\langle B,\in\rangle$ as ordered sets, and $B$ is transitive (i.e. $x\in B$ and $y\in x$ imply that $y\in B$). -Now suppose that $\langle M,E\rangle$ is a well-founded model of ZFC, namely it is a model of ZFC and $E$ is really well-founded; in this case we can collapse it and obtain some $\langle N,\in\rangle$ which is a model of ZFC where $\in$ is the restriction of the true $\in$ to $N$. Such models are standard models of ZFC. -Now comes the punch: the existence of a well-founded model is stronger than simply the existence of a model. (Since the true $\in$ is well-founded, standard models are always well-founded.)<|endoftext|> -TITLE: Difference quotient -QUESTION [5 upvotes]: Where am I going wrong? -Find the difference quotient for: $f(x)=2-x-3x^2$ -$$\frac{[ 2-(x+h)-3(x+h)^2 ] - [ 2-x-3x^2 ]}{h}$$ -$$\frac{2-x-h-3x^2-6hx-3h^2-2+x+3x^2}{h}$$ -$$\frac{-3h^2-6hx-h}{h}$$ -$$-3h-6x-1$$ - -REPLY [5 votes]: Well, nowhere. Everything is OK.<|endoftext|> -TITLE: Help with finding the fixed field of a subgroup of the Galois group -QUESTION [5 upvotes]: So basically the question is: Let $K$ be the splitting field of $t^5-2$ (over the rationals). Let $\theta=\sqrt[5]{2}$, and $\eta$ a primitive $5$-th root of unity. First we note that the splitting field of $t^5-2$ is given by $K=\mathbb{Q}(\eta,\theta)$. Since $\theta$ generates an extension of degree $5$ and $\eta$ one of degree $4$, $K$ is an extension of degree $20$. Define $\tau$ to be the automorphism that fixes $\theta$ and maps $\eta$ to $\eta^2$. Then the order of $\tau$ is $4$. Also, define $\sigma$ to be the automorphism fixing $\eta$ and mapping $\theta$ to $\eta\theta$. Then we see that $\sigma$ is of order $5$. 
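-(As a quick check on this bookkeeping — a little script of my own, encoding the automorphism $\theta\mapsto\eta^a\theta$, $\eta\mapsto\eta^b$ as the pair $(a,b)$ — the orders above and the total count of $20$ automorphisms come out as claimed:)

    def mul(psi, phi):                       # composition psi o phi
        (c, d), (a, b) = psi, phi
        return ((c + d * a) % 5, (d * b) % 5)

    e, sigma, tau = (0, 1), (1, 1), (0, 2)

    def order(g):
        h, n = g, 1
        while h != e:
            h, n = mul(g, h), n + 1
        return n

    def gen(gens):
        H = {e} | set(gens)
        while True:
            new = {mul(a, b) for a in H for b in H} - H
            if not new:
                return H
            H |= new

    print(order(sigma), order(tau), len(gen({sigma, tau})))   # 5 4 20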
I already constructed the lattice for the Galois group, but now I am trying to construct the lattice for the corresponding fixed fields. -By definition we have that $\langle\sigma\rangle$ has $\mathbb{Q}(\eta)$ as a fixed field. Note that $\langle \sigma,\tau^2\rangle$ is a subgroup of $\langle \sigma, \tau\rangle=G(K/\mathbb{Q})$ which contains $\langle \sigma\rangle$, so the fixed field of $\langle \sigma,\tau^2\rangle$ should be an intermediate field contained in $\mathbb{Q}(\eta)$ (since the correspondence between Galois subgroups and fixed fields is order reversing), but I cannot figure out what the fixed field of $\langle \sigma,\tau^2\rangle$ is. At first I thought it was $\mathbb{Q}(\eta^i)$ for some $i$, and we trivially have that $\sigma$ leaves $\eta^i$ unchanged and $\tau^2(\eta^i)=\eta^i$, but this does not work since $\tau^2(\eta)=\eta^4$, $\tau^2(\eta^2)=\eta^3$, $\tau^2(\eta^3)=\eta^2$ and $\tau^2(\eta^4)=\eta$. -So what would then the fixed field of $\langle \sigma,\tau^2\rangle$ be? - -REPLY [3 votes]: Note that $\tau^2$ maps $\eta$ to $\eta^4=\eta^{-1}$. So you might want to consider $\eta+\eta^{-1}$, which is contained in $\mathbb{Q}(\eta)$, and is fixed by both $\tau^2$ and $\sigma$. Now it's just a question of determining if it is the complete fixed field, or just a subfield thereof. But this should get you started.<|endoftext|> -TITLE: Spectrum of Indefinite Integral Operators -QUESTION [29 upvotes]: I've considered the following spectral problems for a long time, but I did not know how to tackle them. Maybe they need some skill with inequalities. -For the first, suppose $T:L^{2}[0,1]\rightarrow L^{2}[0,1]$ is defined by -$$Tf(x)=\int_{0}^{x} \! f(t) \, dt$$ -How can I calculate: - -the radius of the spectrum of $T$? -$T^{*}T$? -the norm of $T$? - -I guess $r(T)$ should be $0$, but I do not know how to prove it. My idea was to use Fourier methods to tackle it; however, that does not seem to work. -The other problem may be very similar to this one. Let $T:C[0,1]\rightarrow C[0,1]$ be defined by -$$Tf(x)=\int_{0}^{1-x}f(t)dt$$ -It is obvious that $T$ is compact and I guess its spectral radius is zero, but I do not know how to prove it. -Any references and advice will be much appreciated. - -REPLY [3 votes]: Thanks to PZZ's answer for the first problem; now I realize why I failed on the second problem. -$$Tf(x)=\int_{0}^{1-x}f(t)dt$$ -This is a compact linear operator by the Arzelà-Ascoli theorem, so we get $\sigma(T)\setminus\{0\} \subset \sigma_{p}(T)$. A simple computation shows that $0\notin \sigma_{p}(T)$, and since $C[0,1]$ is an infinite dimensional space we get $0\in \sigma(T)$. For every $\lambda \in \sigma_{p}(T)$ with $\lambda \neq 0$ we have -$$\int_{0}^{1-x}f(t)dt=\lambda f(x),$$ -which implies that $f \in C^{\infty}[0,1]$; differentiating gives -$$-f(1-x)=\lambda f'(x),$$ -and notice that -$$f(1)=0.$$ -Differentiating once more, we get -$$\lambda^{2}f''(x)=-f(x).$$ -Solving this second order differential equation subject to the conditions above, one can calculate all the eigenvalues: $\lambda_{n}=\frac{2}{(4n+1)\pi}, \qquad n\in \mathbb{Z}$.<|endoftext|> -TITLE: Does the splitting lemma hold without the axiom of choice? -QUESTION [10 upvotes]: In part of the proof of the splitting lemma (a left-split short exact sequence of abelian groups is right-split) it seems necessary to invoke the axiom of choice. 
That is, if $0\to A\overset{f}{\to} B\overset{g}{\to} C\to 0$ is exact and there is a retraction $B\overset{r}{\to}A$, then we can find a section $C\overset{s}{\to}B$ by choosing any right inverse of $g$ and removing the part in the kernel of $f$, which gives a well-defined morphism independent of the choice. -Is this invocation of the axiom of choice essential? I thought I had an example that showed it was: $0\to\mathbb{Q}\to\mathbb{R}\to\mathbb{R}/\mathbb{Q}\to0$ splits on the right if you can choose a basis for $\mathbb{R}$ as a vector space over $\mathbb{Q}$. But actually now I think it doesn't split on the left without a basis either. -The map $C\to B$ is supposed to be a canonical injection map. Can something be "canonical" if it requires choice? - -REPLY [9 votes]: There's no choice involved in the following argument showing that $f$ has a left inverse (if and) only if $g$ has a right inverse: -Consider the morphism $h = 1_B-fr\colon B \to B$. Then $hf = f -frf = 0$, so $h$ factors as $h = sg$ for a unique morphism $s\colon C \to B$ (note that $s(b+A) = h(b)$ is well-defined as a map $s: B/A \to B$). Now $gsg = gh =g-gfr = g = 1_C g$, so $gs = 1_C$ because $g$ is onto, so $s$ is a section of $g$. - -We also have $frsg = frh = fr-frfr = 0$, so $frs = 0$ because $g$ is onto, so $rs = 0$ because $f$ is injective, and by construction of $s$ we have $fr + sg = fr + h = 1_B$. In particular, we have an isomorphism -$$
\begin{bmatrix} f & s \end{bmatrix}\colon A \oplus C \longleftrightarrow {B :} \begin{bmatrix} r \\ g \end{bmatrix}.
$$ -Note that $s$ is uniquely determined by $r$, and similarly one shows that if $s$ is a right inverse of $g$ then there is a uniquely determined retraction $r$ of $f$ such that $fr = 1_B-sg$. - -Concerning the existence of a retraction of the inclusion $\mathbb{Q} \to \mathbb{R}$: assuming such a retraction exists, the previous part gives us a section $s\colon\mathbb{R/Q} \to \mathbb{R}$ of $g\colon \mathbb{R} \to \mathbb{R/Q}$. Modifying this section by setting $t(x) = s(x) - \lfloor s(x) \rfloor$, we get a Vitali set $t(\mathbb{R}/\mathbb{Q}) \subset [0,1]$, whose existence we cannot prove from ZF alone, so we cannot prove from ZF that there is a left inverse of the inclusion $\mathbb{Q} \to \mathbb{R}$. - -A simple example of something “canonical” that requires choice in order to be non-trivial would be a product of an arbitrary collection of sets. -My naïve way of thinking of the axiom of choice is that it is first and foremost an axiom ensuring the existence of things. In my experience it is quite often the case that things can be defined and shown to be unique (hence “canonical”) if they exist (or non-uniqueness is controlled in some tractable way), but their existence requires additional assumptions.<|endoftext|> -TITLE: why are subobjects defined to be equivalence classes of objects, instead of just objects? -QUESTION [15 upvotes]: In category theory, a subobject of an object $A$ is defined to be an equivalence class of isomorphic monomorphisms into $A$. Does this seem weird to anyone else? Isn't it normal to allow something to be only defined "up to isomorphism"? Sure, we could define a product to be the equivalence class of objects satisfying the universal property, but then it wouldn't live in our category. And it may well be a proper class. No one defines limits this way, why do we do this for subobjects and quotient objects? 
-If we just defined a subobject of $A$ to be a monomorphism into $A$, then the class of subobjects of $A$ would only be a preorder, instead of a poset. So what? - -REPLY [11 votes]: The question is ancient, but IMHO the most convincing answer is missing: -The definition of "subobject in a category $\mathcal{C}$" is chosen in such a way that it generalizes the notion of $k$-vector subspaces (when $\mathcal{C} = \mathsf{Vect}_k$), the notion of subsets (when $\mathcal{C} = \mathsf{Set}$), the notion of subrings (when $\mathcal{C} = \mathsf{Ring}$), and lots of other classical "sub-something" notions that appear throughout mathematics (probably not all of them, though). If you were to define "subobjects of $A \in \mathcal{C}$" to mean "monomorphisms with codomain $A$", then the notion of a subobject would not generalize all of these classical notions, because you get too many different subobjects that correspond to the same "sub-something". For instance, in $\mathsf{Set}$, the monomorphisms $\emptyset \to \left\{1\right\}$, $\left\{1\right\} \to \left\{1\right\}$ and $\left\{2\right\} \to \left\{1\right\}$ would be three different subobjects of the object $\left\{1\right\}$, but there are only two subsets of the set $\left\{1\right\}$. So this would be a bad definition. -However, if you define subobjects of $A \in \mathcal{C}$ to be isomorphism classes of monomorphisms with codomain $A$ (where "isomorphism" is to be correctly interpreted: an isomorphism between two monomorphisms $\alpha : S \to A$ and $\alpha^{\prime} : S^{\prime} \to A$ means an isomorphism $s : S \to S^{\prime}$ satisfying $\alpha = \alpha^{\prime} \circ s$), then, in all of the examples listed above, the subobjects of $A$ are in a canonical bijection with the "sub-somethings" (i.e., the $k$-vector subspaces, or the subsets, or the subrings). For instance, in $\mathsf{Set}$, the subobjects of a set $A$ are the isomorphism classes of monomorphisms with codomain $A$. The isomorphism class of such a monomorphism $f : S \to A$ can be identified with the subset $f\left(S\right)$ of $A$. Thus, the subobjects of $A$ are in bijection with the subsets of $A$ here. The same construction works for rings and for $k$-vector spaces. -The good definition of a subobject also has the advantage (compared with the bad definition) that the subobjects of a given object $A \in \mathcal{C}$ often form a set (as opposed to just a class). I don't personally find this vital; it is not usually true in constructive mathematics anyway, and I don't believe that a definition is necessarily bad just because it sometimes returns proper classes.<|endoftext|> -TITLE: Convex functions in integral inequality -QUESTION [11 upvotes]: Let $\mu,\sigma>0$ and define the function $f$ as follows: -$$
f(x) = \frac{1}{\sigma\sqrt{2\pi}}\exp\left(-\frac{(x-\mu)^2}{2\sigma ^2}\right)
$$ -How can I show that -$$
\int\limits_{-\infty}^\infty x\log|x|f(x)\mathrm dx\geq \underbrace{\left(\int\limits_{-\infty}^\infty x f(x)\mathrm dx\right)}_\mu\cdot\left(\int\limits_{-\infty}^\infty \log|x| f(x)\mathrm dx\right)
$$ -which is also equivalent to $\mathsf E[ X\log|X|]\geq \underbrace{\mathsf EX}_\mu\cdot\mathsf E\log|X|$ for a random variable $X\sim\mathscr N(\mu,\sigma^2).$ - -REPLY [3 votes]: Below is a probabilistic and somewhat noncomputational proof. -We ignore the restriction to the normal distribution in what follows. Instead, we consider a mean-zero random variable $Z$ with a distribution symmetric about zero and set $X = \mu + Z$ for $\mu \in \mathbb R$. 
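-(Before the formal claim, here is a quick Monte Carlo illustration — my own addition, with a Gaussian $Z$ of unit variance chosen arbitrarily. It estimates $f(\mu) = \mathbb E Z \log|\mu+Z|$, the key quantity analyzed below, which should be $0$ at $\mu=0$, positive for $\mu>0$, and negative for $\mu<0$:)

    import numpy as np

    rng = np.random.default_rng(0)
    z = rng.normal(0.0, 1.0, 4_000_000)
    for mu in (-3.0, -0.5, 0.0, 0.5, 3.0):
        print(mu, np.mean(z * np.log(np.abs(mu + z))))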
-Claim: Let $X$ be described as above such that $\mathbb E X\log|X|$ is finite for every $\mu$. Then, for $\mu \geq 0$, -$$
\mathbb E X \log |X| \geq \mu \mathbb E \log |X| \>
$$ -and for $\mu < 0$, -$$\mathbb E X \log |X| \leq \mu \mathbb E \log |X| \>.$$ -Proof. Since $X = \mu + Z$, we observe that -$$
\mathbb E X \log |X| = \mu \mathbb E \log |X| + \mathbb E Z \log |\mu + Z| \>,
$$ -and so it suffices to analyze the second term on the right-hand side. -Define -$$
f(\mu) := \mathbb E Z \log|\mu+Z| \>.
$$ -Then, by symmetry of $Z$, we have -$$
f(-\mu) = \mathbb E Z \log|{-\mu}+Z| = \mathbb E Z \log|\mu-Z| = - \mathbb E \tilde Z \log|\mu + \tilde Z| = - f(\mu) \>,
$$ -where $\tilde Z = - Z$ has the same distribution as $Z$ and the last equality follows from this fact. This shows that $f$ is odd as a function of $\mu$. -Now, for $\mu \neq 0$, -$$
\frac{f(\mu) - f(-\mu)}{\mu} = \mathbb E \frac{Z}{\mu} \log \left|\frac{1+ Z/\mu}{1- Z/\mu}\right| \geq 0\>,
$$ -since $x \log\left|\frac{1+x}{1-x}\right| \geq 0$, from which we conclude that $f(\mu) \geq 0$ for all $\mu > 0$. -Thus, for $\mu > 0$, $\mu \mathbb E \log |X|$ is a lower bound on the quantity of interest and for $\mu < 0$, it is an upper bound. -NB. In the particular case of a normal distribution, $X \sim \mathcal N(\mu,\sigma^2)$ and $Z \sim N(0,\sigma^2)$. The moment condition stated in the claim is satisfied.<|endoftext|> -TITLE: If a function $f$ is differentiable on $[a,b]$ and its derivative $f^\prime$ is integrable, must $f$ be of bounded variation? -QUESTION [5 upvotes]: I know there is a theorem saying if $f$ defined on $[a,b]$ is of bounded variation, then it is differentiable on $(a,b)$ a.e and $f'$ is integrable over $[a,b]$. -I wonder whether the converse is true, say, if $f$ defined on $[a,b]$ is differentiable on $(a,b)$ a.e and $f'$ is integrable, must $f$ have bounded variation? -In fact, I ran into such a concrete question: -$$
f=x^\alpha \sin\left(\frac{1}{x^\beta}\right) \text{ for $x\in(0,1]$}
$$ -and $f(0)=0$. -The hint says that we can prove that when $\alpha > \beta$, $f$ is of bounded variation by showing $f'$ is integrable. -So here comes my question above: is such an argument true? - -REPLY [3 votes]: Rudin, "Real & Complex Analysis", Theorem 7.21: If $f$ is differentiable at every point of $[a,b]$ and $f' \in L^1[a,b]$, then $f(x) - f(a) = \int_a^x f'(t) dt$. -Now consider $\psi_+(x) = \int_a^x \max(f'(t),0)\; dt$, $\psi_-(x) = \int_a^x \min(f'(t),0)\; dt$. Both are monotonic, hence of bounded variation. It follows that $f(x) = f(a)+\psi_+(x)+\psi_-(x)$ has bounded variation. -Note: To apply Rudin's theorem, $f$ must be differentiable everywhere, not just a.e.<|endoftext|> -TITLE: Fake proof of the limit of a series -QUESTION [5 upvotes]: Now, I know this to be correct: -$$\begin{align*}
\lim_{n \rightarrow\infty} \left(\frac 1{n^2}+\frac 2{n^2}+\ldots+\frac n{n^2}\right)&=\lim_{n \rightarrow\infty} \left[\frac 1{n^2} \left(\frac n2\right)(1+n)\right]\\
&=\lim_{n \rightarrow\infty} \frac {1+n}{2n}\\
&=\frac 12\;.
\end{align*}$$ -But what is wrong with the following reasoning? -$$\begin{align*}
\lim_{n \rightarrow\infty} \left(\frac 1{n^2}+\frac 2{n^2}+\ldots+\frac n{n^2}\right)&=\lim_{n \rightarrow\infty} \frac 1{n^2} + \lim_{n \rightarrow\infty} \frac 2{n^2} +\ldots+ \lim_{n \rightarrow\infty} \frac n{n^2}\\\\
&=0+0+\ldots+0\\\\
&=0\;? 
-\end{align*}$$ - -REPLY [5 votes]: Look at $$\frac1{n^2}+\frac2{n^2}+\ldots+\frac{n}{n^2}$$ for the first few values of $n$: -$$\begin{array}{c}
\frac11\\
\frac14&+&\frac24\\
\frac19&+&\frac29&+&\frac39\\
\frac1{16}&+&\frac2{16}&+&\frac3{16}&+&\frac4{16}\\
\vdots&&\vdots&&\vdots&&\vdots&&\ddots
\end{array}$$ -It’s true that each column is converging to $0$, but the number of columns is increasing at the same time, and you don’t know until you work it out which effect is going to ‘win’. In this case neither wins: they almost exactly balance out, with an overall limit of $1/2$. -It’s somewhat analogous to $0\cdot\infty$ indeterminate forms, limits of the form $\lim\limits_{x\to\infty}f(x)g(x)$ where $\lim\limits_{x\to\infty}f(x)=0$ and $\lim\limits_{x\to\infty}g(x)=\infty$: here $f$ can win, making the limit $0$, or $g$ can win, making it $\infty$, or they can roughly balance out, making it some positive real number.<|endoftext|> -TITLE: Integral of $\int_0^{\pi/2} \ (\sin x)^7\ (\cos x)^5 \mathrm{d} x$ -QUESTION [8 upvotes]: I am trying to find this by using integration by parts but I am not sure how to do it. -$$\int_0^{\pi/2} (\sin x)^7 (\cos x)^5 \mathrm{d} x$$ -I tried rewriting as -$$\int_0^{\pi/2} \sin x\cdot\ (\sin x)^6\cdot\ (\cos x)^5 \mathrm{d} x = \int_0^{\pi/2}\sin x(1-\ (\cos x)^3)\cdot\ (\cos x)^5 \mathrm{d} x$$ -but that seems to only give me a very, very long loop that doesn't help me at all. How do I proceed? -$$\int_0^{\pi/2} \sin x\cdot (\sin x)^6\cdot (\cos x)^5 \mathrm{d} x = \int_0^{\pi/2}\sin x(1- (\cos x)^3)\cdot (\cos x)^5 \mathrm{d} x$$ -$u = \cos x$, then $du = -\sin x\, dx$ -$\int \frac{-u^6}{6} \mathrm{d} u - \int \frac{-u^9}{9} \mathrm{d} u$ -From here it looks like I have an incredibly long string of $u$ substitutions to make to get to something I can find an antiderivative for. - -REPLY [2 votes]: I'm going to expand on Valentin's answer: -$$I(a,b)=\int_0^{\pi/2}\sin^ax\ \cos^bx\ dx$$ -$t=\sin^2x$: -$\therefore dt=2\sin x\cos x\ dx\\\therefore dx=\frac12t^{-1/2}(1-t)^{-1/2}dt\\\therefore x=0\mapsto t=0\\\therefore x=\pi/2\mapsto t=1$ -$$I(a,b)=\int_0^1 t^{a/2}(1-t)^{b/2}\frac12t^{-1/2}(1-t)^{-1/2}dt$$ -$$I(a,b)=\frac12\int_0^1 t^{\frac{a-1}2}(1-t)^{\frac{b-1}2}dt$$ -$$I(a,b)=\frac12\int_0^1 t^{\frac{a+1}2-1}(1-t)^{\frac{b+1}2-1}dt$$ -Note the definition of the Beta function: -$$B(a,b)=\int_0^1t^{a-1}(1-t)^{b-1}dt=\frac{\Gamma(a)\Gamma(b)}{\Gamma(a+b)}=B(b,a)$$ -Thus -$$I(a,b)=\frac{\Gamma(\frac{a+1}2)\Gamma(\frac{b+1}2)}{2\Gamma(\frac{a+b}2+1)}=I(b,a)$$<|endoftext|> -TITLE: One-parameter subgroup of SL(n) is diagonalizable -QUESTION [7 upvotes]: Given a group homomorphism $\rho\colon \mathbb{C}^\times \rightarrow SL(n,\mathbb{C})$. Why is it true that the subgroup $\rho(\mathbb{C}^\times)$ is diagonalizable, i.e. that there is a basis of $\mathbb{C}^n$ such that $\rho(\mathbb{C}^\times)$ is a subgroup of the group of diagonal matrices in this basis? -I found this in a proof and suppose that it is a special case of a more general fact in representation theory, but unfortunately I do not have enough background in this area. It probably can be derived somehow from the Jordan–Chevalley decomposition. -Thanks! - -REPLY [7 votes]: You need some assumptions on $\rho$ for this to be true. E.g. -$$z \mapsto \begin{pmatrix} 1 & \log |z| \\ 0 & 1 \end{pmatrix}$$ -is a continuous group homomorphism from $\mathbb C^{\times}$ to $SL_2(\mathbb C)$ -with non-diagonalizable image. 
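-(A quick numerical sanity check of this example — my own addition: the matrices multiply like $\rho(z)\rho(w)=\rho(zw)$ since $\log|zw|=\log|z|+\log|w|$, they have determinant $1$, and each $\rho(z)$ with $|z|\neq 1$ has both eigenvalues equal to $1$ while not being the identity, hence is not diagonalizable:)

    import numpy as np

    def rho(z):
        return np.array([[1.0, np.log(abs(z))], [0.0, 1.0]])

    z, w = 2 + 1j, -0.3 + 0.7j
    assert np.allclose(rho(z) @ rho(w), rho(z * w))   # homomorphism
    assert np.isclose(np.linalg.det(rho(z)), 1.0)     # lands in SL_2
    print(np.linalg.eigvals(rho(z)))                  # [1. 1.], but rho(z) != I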
-The standard hypothesis made to ensure that the image be diagonalizable is that $\rho$ be a map of algebraic groups, i.e. be given by expressions that are polynomial in $z^{\pm 1}$. (Polynomials in $z^{\pm 1}$ and $\overline{z}^{\pm 1}$ would also be okay; this corresponds to a map of algebraic groups over $\mathbb R$.)
-The claim follows from a classification of the possible algebraic representations of $\mathbb C^{\times}$: if you let
-$A = \mathbb C[z,z^{-1}]$ (the ring of Laurent polynomials on $\mathbb C^{\times}$) with $\mathbb C^{\times}$ acting via the regular representation, you can easily check that $A$ is semi-simple, i.e. is a direct sum of one-dimensional representations. (More precisely, it is the direct sum of the representations $z \mapsto z^n$, for $n \in \mathbb Z$; this essentially corresponds to interpreting the elements of $A$ as Fourier polynomials.)
-Similarly, $B = \mathbb C[z,\overline{z}, z^{-1},\overline{z}^{-1}]$ is semi-simple, being a direct sum of the representations $z \mapsto z^p\overline{z}^q$, for $p,q \in \mathbb{Z}$.
-Since $A$ and $B$ are the "regular" algebraic representations of $\mathbb C^{\times}$ (thought of either as an algebraic group over $\mathbb C$ or $\mathbb R$) and they are semi-simple, a standard argument shows that any algebraic representation of $\mathbb C^{\times}$ is semi-simple.
-In particular, the algebraic $n$-dimensional representation of $\mathbb C^{\times}$ that arises from an algebraic homomorphism $\mathbb C^{\times} \to SL_n(\mathbb C)$ must be semi-simple, which is to say that the image must consist of simultaneously diagonalizable matrices. This is the claim that you asked about.<|endoftext|>
-TITLE: Proving there are an infinite number of pairs of positive integers $(m,n)$ such that $\frac{m+1}{n}+\frac{n+1}{m}$ is a positive integer
-QUESTION [19 upvotes]: The question is:
-
-Show that there are an infinite number of pairs $(m,n): m, n \in \mathbb{Z}^{+}$, such that: $$\frac{m+1}{n}+\frac{n+1}{m} \in \mathbb{Z}^{+}$$
-
-I started off approaching this problem by examining the fact that whenever the expression was a positive integer, the following must be true:
-$$\frac{m+1}{n}+\frac{n+1}{m} - \left\lfloor\frac{m+1}{n}+\frac{n+1}{m}\right\rfloor = 0$$
-However, I was unable to do much more with this expression, so I abandoned it and started the problem again from a different angle.
-Next I re-arranged the expression to state that:
-$$\frac{m^2+m+n^2+n}{mn} \in \mathbb{Z}^{+} \implies \frac{m(m+1) + n(n+1)}{mn} \in \mathbb{Z}^{+}$$
-And therefore:
-$$mn \mid (m(m+1) + n(n+1))$$
-However, I'm unsure how to demonstrate there are infinitely many occurrences of $(m, n)$ for which this is true. So I'd appreciate any help.
-Thanks in advance
-
-REPLY [2 votes]: Never noticed this one. The viewpoint from Hurwitz (1907) is that there are solutions only if there are "fundamental" solutions. In this case, that would be integer points $(x,y),$ both positive, with
-$$ x^2 - kxy+y^2 + x + y = 0, $$
-$$ y \geq \frac{2x+1}{k}, $$
-$$ x \geq \frac{2y+1}{k}. $$
-I will include some pictures. This is possible only if
-$$ \frac{2k+4 + \sqrt{4k^3+8k^2}}{2k^2-8} \geq 1 \; . $$
-So, actually $$ k = 3,4.
$$ As indicated in answers and comments, for $k=3$ we get consecutive terms in the sequence $$ x_{j+2} = 3 x_{j+1} - x_j - 1, $$ beginning
-$$ 2, 2, 3, 6, 14, 35, 90, 234, 611, 1598, \cdots $$
-For $k=4$ we get consecutive terms in the sequence $$ x_{j+2} = 4 x_{j+1} - x_j - 1, $$ beginning
-$$ 1, 1, 2, 6, 21, 77, 286, 1066, \cdots $$
-The quantity $\frac{2k+4 + \sqrt{4k^3+8k^2}}{2k^2-8}$ for $k=3,\ldots,9$ (note how it drops below $1$ from $k=5$ on):
-3 2.341640786499874
-4 1.316496580927726
-5 0.9632741216820454
-6 0.7803300858899106
-7 0.6666666666666666
-8 0.5883036880224507
-9 0.5305145858856961
-
-Solutions $(k,\ x,\ y)$ of $x^2 - kxy + y^2 + x + y = 0$:
-
- 4 1 1
- 4 2 1
- 3 2 2
- 3 3 2
- 4 6 2
- 3 6 3
- 3 14 6
- 4 21 6
- 3 35 14
- 4 77 21
- 3 90 35
- 3 234 90
- 4 286 77
- 3 611 234
- 4 1066 286
- 3 1598 611
- 4 3977 1066
- 3 4182 1598
- 3 10947 4182
- 4 14841 3977
- 3 28658 10947<|endoftext|>
-TITLE: Proving $\lim_{n\rightarrow\infty} n(2^{1/n} - 1) = \log 2$
-QUESTION [6 upvotes]: How do you prove $\lim_{n\rightarrow\infty} n(2^{1/n} - 1) = \log 2$ ?
-Background: in computer science, if you allocate CPU time to $n$ processes by rate-monotonic scheduling, all the processes get a sufficient amount of time when the quantity $U$ called CPU utilization is less than or equal to $n(2^{1/n} - 1)$. It is monotonically decreasing and tends to $\log 2$ when $n\rightarrow\infty$, so if $U \le \log 2$, you can be sure all the processes will be given sufficient CPU time.
-
-REPLY [7 votes]: Let $h=1/n$. It is enough to show that
-$$\lim_{h\to 0}\frac{2^h-1}{h}=\log 2.$$
-This just says that the derivative of $2^x$ at $x=0$ is $\log 2$, which is easy to check.
-Remark: Equivalently, let $x=1/n$, rewrite our expression as $\frac{e^{(\ln 2) x}-1}{x}$, and find the limit of this as $x\to 0$ using L'Hospital's Rule. But reducing to the definition of the derivative has a more "elementary" feel.
-The mathematically natural approach is the one by Ilya. The Taylor expansion of $2^x$, that is, of $e^{(\ln 2)x}$, tells us about the "local" behaviour of $2^x$ near $x=0$, so it is the right tool to use. The derivative also tells us about local behaviour, but the Taylor expansion is finer-grained.
-
-REPLY [3 votes]: Depends on your knowledge, e.g. with Taylor expansion you have $2^x-1\sim x\log2$ for $x\to 0$, so
-$$
-\lim\limits_{n\to\infty}n(\sqrt[n]{2}-1) = \lim\limits_{x\to 0}\frac1x(2^x-1) = \lim\limits_{x\to 0}\frac1x(x\log2) =\log 2.
-$$
-
-REPLY [3 votes]: We have
-$$\lim_{n\to\infty} n(2^{1/n} - 1) = \lim_{t\downarrow 0}{2^t - 1\over t}.$$
-Put $f(t) = 2^t$; then the limit is just $f'(0)$. Since $f'(x) = 2^x\log(2)$, we are done.<|endoftext|>
-TITLE: How to prove the pullback lemma
-QUESTION [5 upvotes]: I am new to category theory. I am trying to prove the well-known fact that if you have a commutative diagram of the form □□, where each square is a pullback, then the whole diagram is a pullback too, and hence deduce that the pullback of a pullback square is a pullback. Every book I have looked at has this as an exercise, but I (embarrassingly, I know) cannot see the solution. I have tried using the universality property of the two pullbacks but I am lost in calculations. If someone could help, I would really appreciate it.
-
-REPLY [18 votes]: Just for you, and it turns out my answer has to contain at least 30 characters, so let's make it 100.<|endoftext|>
-TITLE: Infinite Degree Algebraic Field Extensions
-QUESTION [18 upvotes]: In I.
Martin Isaacs' Algebra: A Graduate Course, Isaacs uses the field of algebraic numbers $$\mathbb{A}=\{\alpha \in \mathbb{C} \; | \; \alpha \; \text{algebraic over} \; \mathbb{Q}\}$$ as an example of an infinite degree algebraic field extension. I have done a cursory google search and thought about it for a little while, but I cannot come up with a less contrived example.
-My question is
-
-What are some other examples of infinite degree algebraic field extensions?
-
-REPLY [2 votes]: Let $\{n_1,n_2,...\}$ be pairwise coprime, nonsquare positive integers. Then $\mathbb{Q}(\sqrt{n_1},\sqrt{n_2},...)$ is an algebraic extension of infinite degree.<|endoftext|>
-TITLE: Exercise 2.17(d) of Eisenbud's Commutative Algebra
-QUESTION [14 upvotes]: First some notation: Let $P$ be a homogeneous prime ideal of a $\Bbb{Z}$-graded ring $R$, $U$ the multiplicative subset of all homogeneous elements not in $P$. Suppose that there exists a homogeneous element $f$ of degree $1$ that is not in $P$.
-The problem in Eisenbud is to show that the image of $P$ (which we call $Q$) in the ring $R/(f-1)$ is a prime ideal. In fact what I am trying to prove (something stronger actually) is that the complement of $Q$ in $R/(f-1)$ is the multiplicative set $\bar{U}$, that is the image of $U$ under the canonical projection $\pi : R \longrightarrow R/(f-1)$. Recall because $\pi$ is surjective $Q$ is already an ideal and hence
-$$\text{$Q$ is a prime ideal in $R/(f-1)$ $\iff R/(f-1) - Q$ is multiplicatively closed}$$
-Now we want to show that $Q^{c} = \bar{U}$. To show these containments we can assume wlog that we show containment for just homogeneous elements. Now one direction I have already shown, namely that $Q^{c} \subseteq \bar{U}$. The other direction is tantamount to showing that $\bar{U} \subseteq Q^{c}$.
-Now suppose this does not hold. Then there is $x \in \bar{U}$ such that $x$ is also in $Q$. I.e. there exist $u \in U$ and $p \in P$ such that $x = \pi(u) = \pi(p)$ so that
-$$\pi(u - p ) = 0 \implies u- p = (f-1)r$$
-for some $r \in R$. Now if $r \in P$ we have our desired contradiction. If $r$ is not in $P$ then we can write
-$$r = x_{n_1} + \ldots + x_{n_k}$$ where each $x_{n_i}$ for $1 \leq i \leq k$ is homogeneous of degree $n_i$ and there is at least one $x_{n_i}$ (which we can take to be $x_{n_1}$) that is not in $P$. Then rearranging the equation above we have that
-$$p = u - fx_{n_1} - \ldots - fx_{n_k} + x_{n_1} + \ldots + x_{n_k}.$$
-Recall we assumed that $x_{n_1} \notin P$. Now where I am stuck is that I want to make an argument comparing degrees to derive a contradiction. However what if, say, only $u$ and $x_{n_1}$ have the same degree as $p$ so that $u + x_{n_1} = p$? How can I get a contradiction out of something like that?
-This is where I am stuck because the rest of the analysis seems to be bashing out cases like this and I do not get the desired contradictions. Any hints on how to proceed on the problem would be greatly appreciated.
-Thanks.
-
-Edit 1: I believe I have my desired contradiction at least in the case that $r$ is homogeneous. The proof is as follows. Suppose that $u-p = fr - r$. Then rearranging this equation we have $p = u - fr + r$. Now if $u$ has degree different from the other two terms we are stuffed because recall $P$ is a homogeneous prime ideal so then $u \in P$, a contradiction. Furthermore $u$ can only have the same degree as at most 1 of $fr$ or $r$. This is because $\deg fr \neq \deg r$. We divide this into two cases:
-Case 1: $u$ has the same degree as $r$.
-Now because $p = u - fr + r$ and $u$ has the same degree as $r$, we conclude by comparing degrees that $p = u + r$ or $p = -fr$. Now in the former case we must have $-fr = 0 \in P$ contradicting $fr \notin P$ because $f \notin P$ and $r \notin P$. The latter case also gives the same contradiction.
-Case 2: $u$ has the same degree as $-fr$.
-Now $p$ being a homogeneous element means by comparing degrees that
-$$p = u-fr, \hspace{2mm} r = 0 \hspace{3mm} \text{or} \hspace{3mm} p = r, \hspace{3mm} u-fr = 0.$$ In the former case we have a contradiction because $r = 0 \implies u = p$ and in the latter we have a contradiction too because $r \notin P$.
-Edit 2: The one direction that I claimed to have shown, namely that $Q^{c} \subseteq \overline{U}$, was not actually proved: I can't just reduce to showing containment for homogeneous elements only, because $U$ and $\overline{U}$ don't have any additive structure on them.
-
-REPLY [4 votes]: Your claim is not what the exercise asks you to do. In fact, it is not true in general that the image of $\frak p$ in $R/(f-1)$ is a prime ideal. For example, let $R$ be the polynomial ring $\mathbb{Q}[x]$ over the field of rational numbers, $\frak p$ be the principal ideal $(x+1)$ generated by $x+1$, and $f=x$. So the image of $\frak p$ is the whole ring.
-If you assume $\frak p$ is homogeneous, then the claim is correct. For the direction you hope to show, please read my comment below. In fact, it may be helpful to realize that the ring $R/(f-1)$ is canonically isomorphic to the ring $R_{(f)}$.<|endoftext|>
-TITLE: What is the purpose of the $\mp$ symbol in mathematical usage?
-QUESTION [16 upvotes]: Occasionally I see the $\mp$ symbol, but I don't really know what it is for, except in conjunction with the $\pm$ symbol thus: $a \pm b \mp c$ which (I believe) means $a+b-c$ or $a-b+c$ (please correct me if I am wrong). Is there any other mathematical usage for the $\mp$ symbol, particularly on its own?
-
-REPLY [5 votes]: You are correct; $\mp$ only makes sense in a formula that already has $\pm$.
-One simple and useful example is that when $x$ is small, ${1\over{1\pm x}}\approx 1\mp x$.
-
-REPLY [4 votes]: Like the other answerer, I've only seen it used in the same line as a $\pm$, to mean "positive when the other term is negative and negative when the other term is positive." So, for instance, if we were to say
-$\pm a = \mp b$
-that would imply that
-$ a = -b $
-and
-$ -a = b $<|endoftext|>
-TITLE: Conventional ordering of faces of regular polyhedron?
-QUESTION [5 upvotes]: e.g. For an icosahedron defined as follows:
-Diagram: A regular icosahedron (courtesy of Microsoft Visio):
-
-We define position and orientation w.r.t. this body's frame of reference as follows:
-
-Point O at the origin: (0, 0, 0)
-Point P within the half-line: (x=0, y>0, z=0)
-Point P1 within the half-line: (x=0, y<0, z=0)
-Point A within the quarter-plane: (x>0, y<0, z=0)
-
-Question 1: Formal methods/conventions?
-Is there (a) a formal way to index/order the faces of a regular polyhedron?  In other words, are some methods of ordering the faces/vertices etc. more correct, logical or conventional than certain other methods?  If not, then do certain face encodings/orderings facilitate more efficient vector manipulation?  
How then should the faces be ordered, or, in what order should we generate their normal direction vectors? OR, is it (b) a matter of arbitrary discretion according to the requirements of the specific application one is designing for; or otherwise arbitrary due to fundamental constraints on the shape of the data structures & algorithms required to manipulate these data most efficiently?
-Question 2: Ordering for recursive precision
-
-Suppose we break each surface triangle down recursively into four sub-triangles so that:
-
-Any face may be folded if its edge-adjacent neighbours are also folded along common edges and their constituent sub-triangles stretched (each sub-triangle remaining flat but no longer remaining coplanar with each other); so as to more perfectly approximate the curved surface being modelled
-Each successive ≈4× improvement in area precision requires exactly two additional bits of information to store the resulting surface-positional “reference code” or “index number”
-
-Is there (a) a formal mathematical way to order or encode these constituent sub-triangles? Or is this (b) open to the engineer's discretion, to define a convention that he prefers? Do some encodings of the constituent triangles permit more efficient data manipulation, or is this choice arbitrary?  Can this be proven either way?  If the choice is theoretically arbitrary, then does a relevant mathematical convention already exist?
-Typical application: geographical data visualisation
-We will use this for data geocoding, indexing, querying & visualisation; using polyhedral projections from near-spherical shapes (the surface of the Earth) with the best all-round properties of:
-
-computational efficiency (for encoding/ retrieval/ visualisation)
-low spatial distortion, high area uniformity by code space
-information density
-conceptual simplicity
-
-Our general purposes are illustrated by this document:
-http://www.progonos.com/furuti/MapProj/Normal/ProjPoly/projPoly.html#PolyhedralMaps
-Suggested conventions for mapping geographical data
-We will orient our icosahedral mapping so that:
-
-Point O (the origin) corresponds to the centre of mass of the body being modelled
-Point P corresponds to the geographical North Pole at +90° latitude
-Point P1 corresponds to geographical South Pole at −90° latitude
-Point A lies within the half-plane containing the prime meridian i.e. zero longitude in the standard modern polar (latitude, longitude) coordinates.  The icosahedral diagram here: http://www.win.tue.nl/~vanwijk/myriahedral/ (from Jack van Wijk's work on “myriahedral projections”) appears to validate the near-optimality of this convention for planet Earth in the current geological era.
-
-For modelling or display purposes, the model polyhedron should be scaled such that its volume is equivalent to the modelled geoid.  Where the geoid is irregular and the modelling polyhedron is of high resolution, approximations to this rule might be considered acceptable.
-Reference design for encoding surfaces/positions
-Unless advised of a better way, we plan to order the faces first according to their apex orientation with triangle base perpendicular to (P→P1), whether up/down with respect to vector (P→P1); and second (for each orientation) in a spiral/helix, working first anticlockwise around (P→P1), and then clockwise around (P1→P) for the opposite apex orientation.  
This would number the 20 main triangles from 0 to 4, 8 to 12, 16 to 20 and 24 to 28 as follows:
- + 0 1 2 3 4
- 0 P A1 B1; P B1 C1; P C1 D1; P D1 E1; P E1 A1;
- 8 A1 A B ; B1 B C ; C1 C D ; D1 D E ; E1 E A ;
-16 P1 B A ; P1 C B ; P1 D C ; P1 E D ; P1 A E ;
-24 B B1 A1; C C1 B1; D D1 C1; E E1 D1; A A1 E1;
-
-— This coding system has discontinuities to simplify logical functions for calculating surface polygon orientation, which is necessary for translating position data between systems of coordinates (orientation might be considered on the basis of individual triangles, or cumulatively on the basis of recursive N-ary sum style positional calculations).  Note that the rows in this table correspond to horizontally oriented slices of the icosahedron, whereas the columns in this table correspond to diagonally (near-vertically) oriented half-slices.
-We plan to encode each sub-triangle recursively, as per the four-way recursive subdivision described in Question 2 above (diagram omitted):
-
-File format; recursion-conforming data compression
-We plan to support data compression and adaptive rendering by storing at each detail level only the differences from the average values corresponding to the hierarchical parent surface.  In accordance with this objective, location codes will be stored and communicated in a big-endian format.  If modelling a perfect unit sphere, the “unit sphere” should be of unit radius.  The volume of the unit spheroid/geoid etc. should generally approach 4π/3 cubic units as closely as possible (altitudes should be defined in proportion with such a unit geoid, and a scale factor should be specified for the entire geoid).
-Community benefits (public domain, IP considerations)
-To promote industrial standardisation, I now release these plans freely here, and will further consider releasing my work freely when complete as a set of open-source software libraries.  For the record, I (Matthew Slyman) started working independently on this project some time around the year 1993, and first proposed this idea to Iestyn Bleasdale-Shepherd late in 1996 (he implemented the first proof-of-concept as an IBM PC program written in C over the Christmas holidays of 1996–1997).  We subsequently shelved the original application we were discussing; and as the originator of this idea and the sole developer of this specification, I (Matthew Slyman) hereby relinquish any claim to the intellectual property rights for this data structure or any compatible code implementations that might be created by third parties.  In order to better serve the mathematical community, I would like to give priority at the design/implementation stage to any relevant formal mathematical requirements or conventions.  Anyone commenting on this thread thereby places their comments likewise in the public domain, for general community benefit.  In the absence of further instruction, we will number the faces according to our practical requirements or whims (see above).  We would be grateful for any formal improvements that can be offered by anyone better educated in mathematics.
-I propose to give this standardised set of conventions a name: “Icosamap”.  Please register any objections to this name, including links to any evidence of priority within the relevant field of science.
-
-REPLY [2 votes]: I am also interested in knowing if there are any good reasons to pick one ordering of polygons over some other ordering.
-I'll post what little I know, but I'm really hoping someone else will post something much better.
-I have heard of 3 "standards" for ordering these polygons, but as far as I can tell they were arbitrarily picked:
-Your icosahedron-based method,
-the quadrilateralized spherical cube, and
-Tegmark's icosahedron-based method for pixelizing the celestial sphere.
-All three are examples of geodesic grids.
-Ordering polygons on a polyhedron
-icosahedron-based method for pixelizing the celestial sphere
-"What is the best way to pixelize a sphere?" by Max Tegmark
-gives a format that appears extremely similar to your proposal:
-Tegmark starts with an icosahedron, just as you do,
-and recursively subdivides it, just as you do.
-So I find it hard to believe that his method is "computationally and geometrically far more complex than what [you] are proposing" or that it has significantly "greater distortion in either area or geometry".
-Given some specified resolution (the same as your depth of recursion)
-and some point on a sphere, Tegmark's system
-assigns that point to a "pixel number" (the number indicating what major icosahedral face it is on, and which particular minor sub-triangle on that face that point is in).
-quadrilateralized spherical cube
-Wikipedia: "quadrilateralized spherical cube"
-The all-sky, skyward looking, unfolded cube is reassembled by
-arranging the faces in a sideways T with the bar on the right side:
- 0
- 4 3 2 1
- 5
-
-where square face 0 is centered on the North pole,
-square face 5 is centered on the South pole,
-the vernal equinox -- at latitude=0, longitude=0 --
-lies at the center of square face 1.
-Ecliptic longitude increases from face 1 to face 4.
-other polyhedra
-Tegmark mentions that "it is clearly desirable to use as small faces as possible".
-Subdividing each face of a (30-faced) rhombic triacontahedron into two nearly-equilateral isosceles triangles gives the (60-faced) pentakis dodecahedron, as you mentioned,
-which one might expect to give less error than starting with a (20-faced) icosahedron.
-Further subdividing each of those faces into two irregular triangles gives the (120-faced) disdyakis triacontahedron (dual to the truncated icosidodecahedron).
-I've been told that the strictly convex polyhedron with the maximum number of identical faces (exactly equal-shaped and exactly equal-area) is that disdyakis triacontahedron.
-However, apparently none of these shapes gives any improvement over your icosahedron proposal.
-Tegmark's code immediately breaks up each equilateral triangle of the 20-sided icosahedron into 6 identical irregular triangles.
-The resulting 20*6 = 120 identical triangles are the same triangles as the faces of the disdyakis triacontahedron, right?
-Ordering polygons on a recursively subdivided polygon
-The quadrilateralized spherical cube recursively subdivides each square -- each division of 4 by area adding 2 more bits of localization information -- using a Morton code (Z-order curve).
-The Maidenhead Locator System uses a system similar to the Morton code.
-You might also want to look at the military grid reference system.
-Other ways of ordering such a subdivided square (i.e., drawing a path that hits the center of each square exactly once) include the
-Sierpiński curve, the Hilbert curve,
-the Peano curve,
-the Moore curve, etc.
-There exist other space-filling curves based on recursively subdividing a triangle -- each division of 4 by area adding 2 more bits of localization information.
-(Do any of them have popular names?)
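-As a concrete illustration of the "2 bits per subdivision level" idea that both the Morton-code and the triangle-based schemes rely on, here is a minimal Python sketch. The names and the child-numbering in it are my own arbitrary choices (exactly the kind of convention this question asks about), not part of any of the standards above: children 0–2 are the corner triangles nearest vertices A, B, C, and child 3 is the central inverted triangle.
-
-    def triangle_code(alpha, beta, gamma, levels):
-        """Build a 2-bits-per-level index of the sub-triangle containing
-        the point with barycentric coordinates (alpha, beta, gamma),
-        where alpha + beta + gamma == 1."""
-        code = 0
-        for _ in range(levels):
-            if alpha >= 0.5:    # corner child nearest vertex A
-                child, alpha, beta, gamma = 0, 2*alpha - 1, 2*beta, 2*gamma
-            elif beta >= 0.5:   # corner child nearest vertex B
-                child, alpha, beta, gamma = 1, 2*alpha, 2*beta - 1, 2*gamma
-            elif gamma >= 0.5:  # corner child nearest vertex C
-                child, alpha, beta, gamma = 2, 2*alpha, 2*beta, 2*gamma - 1
-            else:               # central, inverted child
-                child, alpha, beta, gamma = 3, 1 - 2*alpha, 1 - 2*beta, 1 - 2*gamma
-            code = (code << 2) | child
-        return code
-
-    # e.g. the centroid falls in the central child at every level:
-    # triangle_code(1/3, 1/3, 1/3, 4) == 0b11111111
-
-Whatever child-numbering one picks, the resulting codes nest by prefix, so truncating a code coarsens the location; that prefix property is what makes the big-endian, differences-from-parent storage proposed in the question work.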
-There are 2 popular ways of subdividing squares and triangles:
-"Class I"/"alternate" method:
-each of the original triangles (or squares) is replaced with n^2 triangles (or squares) that exactly fit inside the original triangle (or square).
-(n=1 gives the original base shape, n=2 gives a shape with 4 times as many faces).
-The small triangles have edges (very close to) parallel to the original big triangles.
-"Class II"/"triacon" method, employed on almost every major geodesic dome project in the 1950s (p. 30):
-each of the original triangles is replaced with 3*n^2 triangles.
-The small triangles have edges (very close to) right angles to the original big triangles.
-(Unreliable sources tell me the triacon method can be applied to the cube or rhombic solids, in which case each of the original squares or rhombuses is replaced with 2*n^2 rhombuses).
-(n=1 converts the icosahedron to the pentakis dodecahedron I mentioned earlier; n=8 converts the icosahedron to an approximation of the Epcot "Spaceship Earth"; n=1 converts the cube into the rhombic dodecahedron, etc.).
-EDIT:
-The most common application of numbering the sides of a Platonic solid is while making dice.
-One particular way of putting numbers on the sides of an icosahedron is shown on DiceCollector's page, but I don't know if that is particularly "standard".
-Have you considered asking
-"What is the proper way to number the sides of a 1d20 die?" or
-"What is the proper way to number the sides of a 1d120 die?",
-or both,
-on the https://rpg.stackexchange.com/ ?<|endoftext|>
-TITLE: Can Markov Chain state space be continuous?
-QUESTION [9 upvotes]: I looked for a formal definition of Markov chain and was confused that all definitions I found restrict the chain's state space to be countable. I don't understand the purpose of such a restriction and I have a feeling that it does not make any sense.
-So my question is: can the state space of a Markov chain be a continuum? And if not, then why?
-Thanks in advance.
-
-REPLY [7 votes]: Yes, it can. In some quarters the "chain" in Markov chain refers to the discreteness of the time parameter. (A notable exception is the work of K.L. Chung.) The evolution in time of a Markov chain $(X_0,X_1,X_2,\ldots)$ taking values in a measurable state space $(E, {\mathcal E})$ is governed by a one-step transition kernel $P(x,A)$, $x\in E$, $A\in{\mathcal E}$:
-$$
-{\bf P}[ X_{n+1}\in A|X_0,X_1,\ldots,X_n] = P(X_n,A).
-$$
-Two fine references for the subject are Markov Chains by D. Revuz and Markov Chains and Stochastic Stability by S. Meyn and R. Tweedie.<|endoftext|>
-TITLE: Equivalent definition of absolutely continuous
-QUESTION [13 upvotes]: That a function $f$ is absolutely continuous on $[a,b]$ is defined by: for each $\varepsilon>0$, there is a $\delta>0$ such that, for each finite collection of disjoint open intervals $\{(c_k,d_k)\}_{k=1}^n$ contained in $[a,b]$, we have
-$$
-\text{if}\,\, \sum_{k=1}^n (d_k-c_k)<\delta, \,\,\text{then}\,\, \sum_{k=1}^n\left|f(d_k)-f(c_k)\right|<\varepsilon.
-$$
-However, in the book I'm reading, it is said that there is an equivalent definition, say
-$$
-\text{if}\,\, \sum_{k=1}^n (d_k-c_k)<\delta,\,\, \text{then}\,\, \left|\sum_{k=1}^n f(d_k)-f(c_k)\right|<\varepsilon.
-$$
-However, I cannot prove it. How?
-
-REPLY [8 votes]: Assume the second condition holds for some $\delta$ and $\varepsilon$ and choose some disjoint intervals $(c_k,d_k)$ with $\sum\limits_kd_k-c_k\lt\delta$.
Then $\sum\limits_+d_k-c_k\lt\delta$ where $\sum\limits_+$ indicates that the sum is restricted to the indices $k$ such that $f(d_k)\gt f(c_k)$. In particular $\sum\limits_+ |f(d_k)-f(c_k)|=\sum\limits_+ f(d_k)-f(c_k)\lt\varepsilon$.
-Likewise $\sum\limits_- |f(d_k)-f(c_k)|=-\sum\limits_- f(d_k)-f(c_k)=\left|\sum\limits_- f(d_k)-f(c_k)\right|\lt\varepsilon$, where $\sum\limits_-$ indicates that the sum is restricted to the indices $k$ such that $f(d_k)\leqslant f(c_k)$.
-Summing these two contributions, one sees that the first condition holds for $\delta$ and $2\varepsilon$.
-The other implication is direct.<|endoftext|>
-TITLE: Specific Calculation of the Germs of a Holomorphic Function
-QUESTION [9 upvotes]: As a specific example of this question and a follow-up to this one, does anyone know a nice way to calculate the germs at $z=1$ of
-$$f(z)=\sqrt{1+\sqrt{z}}$$
-My attempts have been messy at best, and I'd rather avoid trying to wade through Taylor series if I can! Any ideas would be most welcome!
-
-REPLY [5 votes]: We have
-$$ \sqrt{1+\sqrt{z}} = \sqrt{2}\left(1+\sum_{n=1}^\infty \frac{(-1)^{n-1}}{16^n n}\binom{4n - 2}{2n-1} (z-1)^n\right) $$
-Here is a very general method, in the particular case of an algebraic function, to prove the identity.
-I — Notations
-Let $f$ be the function $z \mapsto \sqrt{1+\sqrt{1+z}}/\sqrt 2$, which is holomorphic in a neighbourhood of zero. We take a determination of the square root holomorphic around 1 and such that $\sqrt 1 = 1$. Note that in order to simplify the computation, I shifted the variable and normalized the value at zero.
-II — Algebraic equation
-The function $f$ obviously satisfies the algebraic equation
-$$ (f(z)^2 - 1/2)^2 = \frac{1+z}{4}, $$
-or, equivalently, $P(f(z), z) = 0$, where
-$$ P(Y, z) = 4Y^4 - 4Y^2 -z. $$
-III — Differential equation
-The function $f$ satisfies the following linear differential equation:
-$$ 16 z (z+1) f''(z) + 8(1+2z) f'(z) - f(z) = 0 $$
-It is a general fact that an algebraic function satisfies a linear differential equation with polynomial coefficients.
-IV — Recurrence
-Write $f(z) = \sum_{n\geqslant 0} u_n z^n$. Then the coefficient of $z^n$ in
-$$ 16 z (z+1) f''(z) + 8(1+2z) f'(z) - f(z) $$
-is
-$$ 16n(n-1)u_n + 16 (n+1)n u_{n+1} + 8 (n+1) u_{n+1} + 16 n u_n - u_n, $$
-which implies that
-$$ (4 n - 1)(4n+1)u_n + 8(n+1)(2n+1)u_{n+1} = 0. $$
-V — Resolution
-We check easily that the sequence defined by
-$$v_n = \frac{(-1)^{n-1}}{16^n n}\binom{4n - 2}{2n-1}$$
-if $n>0$ and $v_0 = 1$
-satisfies the first-order recurrence above. Since $u_0 = 1 = v_0$, we can conclude that
-$$u_n = v_n$$
-VI — Automation
-Here is a Maple session showing how to automate the proof of steps II to V.
-
-> with(gfun):
-> f := sqrt(1+sqrt(1+z))/sqrt(2);
-                        1/2 1/2    1/2
-          (1 + (1 + z)    )    2
-     f := ------------------------
-                     2
-
-> holexprtodiffeq(f, y(z));
-                                               / 2    \
-                     /d      \       2        |d      |
- {-y(z) + (8 + 16 z) |-- y(z)| + (16 z + 16 z) |--- y(z)|, y(0) = 1}
-                     \dz     /                | 2     |
-                                              \dz     /
-
-> diffeqtorec(%, y(z), u(n));
-           2                            2
- {(-1 + 16 n ) u(n) + (24 n + 8 + 16 n ) u(n + 1), u(0) = 1}
-> rsolve(%, u(n));
-                                n
-        GAMMA(2 n - 1/2) (-1)
-   -1/2 ---------------------
-           1/2
-         Pi    GAMMA(2 n + 1)
-
-You have to use the duplication formula for $\Gamma$ to retrieve the binomial coefficient.<|endoftext|>
-TITLE: How does $\sigma(T)$ change with respect to $T$?
-QUESTION [14 upvotes]: Consider $\sigma$ as a mapping which maps $T\in\mathcal{L}(X)$ to $\sigma(T)$, the spectrum of $T$, a compact set in the complex plane.
-I wonder whether there is some result concerning how $\sigma(T)$ changes when $T$ changes. For instance, how is the Hausdorff distance between $\sigma(T)$ and $\sigma(S)$ related to $\|T-S\|$? Or something like this. -Thanks! - -REPLY [12 votes]: Without assuming additional properties on $S$ and $T$, such as commutation or self-adjointness (on a Hilbert space) — see points 2. and 3. at the end of the answer — continuity of the spectrum as a map from $\mathcal{L}(X)$ to the compact subsets of $\mathbb{C}$ with the Hausdorff distance $d_H$ doesn't hold, because the spectrum can “collapse” under arbitrarily small perturbations, as illustrated by the following simple example: -Let $X = \ell^p(\mathbb{Z})$ with basis $(e_n)_{n \in \mathbb{Z}}$ and $1 \leq p \leq \infty$. Define the operator $S: X \to X$ by -$$ -S(e_n) = e_{n-1}\text{ if }n\neq 0 \quad\text{and}\quad S(e_0) = 0. -$$ -It is not difficult to check that $\sigma(S) = \overline{\mathbb{D}} = \{\lambda \in \mathbb{C}\,:\,\lvert \lambda\rvert \leq 1\}$: Indeed, $\lVert S \rVert = 1$, so we certainly have the inclusion $\sigma(S) \subset \overline{\mathbb{D}}$ and if $\lvert\lambda\rvert \lt 1$, the vector $v = \sum_{n=0}^\infty \lambda^n e_n$ is an eigenvector of $S$ with eigenvalue $\lambda$, so $\lambda \in \sigma(S)$. It follows from compactness of $\sigma(S)$ that $\sigma(S) \supset \overline{\mathbb{D}}$. -Let now $C$ be the rank one operator defined by $Ce_0 = e_{-1}$ and $C(e_n) = 0$ for $n \neq 0$. Putting $T_\varepsilon = S + \varepsilon C$ we have an invertible operator for $\varepsilon \neq 0$ and $\lVert T_{\varepsilon} - S\rVert = \varepsilon$. -For the spectral radius of $T_\varepsilon$ we have $r(T_{\varepsilon}) =1$, so $\sigma(T_{\varepsilon}) \subset \overline{\mathbb{D}}$. The inverse of $T_{\varepsilon}$ also has spectral radius $r(T_{\varepsilon}^{-1}) = 1$, so $\sigma(T_{\varepsilon}) \subset \partial\mathbb{D}$. -It follows from $\sigma(S) = \overline{\mathbb{D}}$ and $\sigma(T_\varepsilon) \subset \partial \mathbb{D}$ that the Hausdorff distance between $\sigma(S)$ and $\sigma(T_{\varepsilon})$ is at least $1$, while $\lVert S-T_{\varepsilon}\rVert = \varepsilon$ is as small as we wish. - -Added: On the other hand, in some sense this is the worst that can happen: T. Kato, Perturbation Theory for Linear Operators, Springer Classics in Mathematics, 1995, proves in §3, section 1. of chapter IV on pp208f the following: - -The spectrum is upper semicontinuous as a function from $\mathcal{L}(X)$ to the compact subsets of $\mathbb{C}$ with respect to the Hausdorff distance (Remark 3.3). -If $S$ and $C$ commute then there is the estimate $d_{H}(\sigma(S),\sigma(S+C)) \leq r(C)$ on the Hausdorff distance of the spectra (Theorem 3.6). -Much later: To address a question posed in a comment: if $S$ and $T$ are self-adjoint operators on a Hilbert space then $d_H(\sigma(S),\sigma(T)) \leq r(S-T) = \lVert S - T \rVert$ holds as well, mainly because the resolvent satisfies $\lVert R_T(\lambda)\rVert = 1/d(\lambda,\sigma(T))$, which allows us to apply a variant of the Neumann series, see Kato, Theorem V.4.10, page 291 and section II.1.3, in particular formula (1.13) on page 67. Far more precise results can be found at the beginning of chapter VIII, especially §1.2. 
-The example above (which I learned from Edi Zehnder) appears as Example 3.8 on page 210.<|endoftext|>
-TITLE: Dimension of spaces of bi/linear maps
-QUESTION [6 upvotes]: For $V$ a finite dimensional vector space over a field $\mathbb{K}$, I have encountered the claim that
-$$
-\dim(\mathrm{Hom}(V,V)) = \dim(\mathrm{Hom}(V \times V, \mathbb{K}))
-$$
-where $\mathrm{Hom}(V,V)$ and $\mathrm{Hom}(V \times V, \mathbb{K})$ denote the vector spaces, respectively, of all linear maps from $V$ to $V$ and all bilinear maps from $V\times V$ to the ground field $\mathbb{K}$. I'm sure I'm overlooking something elementary, but I don't see this.
-There is a theorem that, in general, for any finite-dimensional vector spaces $V$ and $W$, $$
-\dim(\mathrm{Hom}(V,W)) = \dim(V)\dim(W)
-$$
-But, $\dim(V \times W) = \dim(V) + \dim(W)$ and therefore
-$$
-\dim(\mathrm{Hom}(V \times V, \mathbb{K})) = (\dim(V) + \dim(V))\cdot \dim(\mathbb{K}) = 2\dim(V)\cdot 1
-$$
-which is obviously not equal to $\dim(\mathrm{Hom}(V,V)) = \dim(V)\cdot\dim(V)$
-Where is my mistake?
-
-REPLY [3 votes]: I would not write $\operatorname{Hom}(V \times V, \mathbb K)$ for the space of bilinear maps, since there is nothing to distinguish this from your old notation for the space of linear maps. I've seen $L^2(V, V; \mathbb K)$ used, but $\operatorname{Bilin}(V, V; \mathbb K)$ has the advantage of being obvious. In any case, it never hurts to specify your notation.
-Now to calculate the dimension. Let $\{e_1, \ldots, e_n\}$ be a basis for $V$, and let $\{f_i\}$ be the corresponding dual basis. Then I claim that the set of $n^2$ bilinear maps
-\[
-F_{ij}(x, y) = f_i(x)f_j(y) \qquad i, j = 1, \ldots, n
-\]
-is a basis for $L^2(V, V; \mathbb K)$. To remember this fact, it might help to recall how bilinear forms correspond to matrices after one has chosen a basis.<|endoftext|>
-TITLE: Generalizing the total probability of simultaneous occurrences for independent events
-QUESTION [5 upvotes]: I want to generalize a formula and I need your help with this. This is not my homework or assignment but I need to come up with a concise formula that fits my documentation.
-Background for my problem:
-Considering all events to be independent of each other, let the probability of Event $0$ be $P_{0}$, Event $1$ be $P_{1}$, and so on ...
-Then the probability that two events occur simultaneously is $P_0P_1$. This is nothing but the area of intersection of two circles $P_0$ and $P_1$.
-Continuing the same way,
-The total probability of simultaneous occurrence in case of three events is:
-$P_0 P_1 + P_0P_2 + P_1P_2 - 2 P_{0}P_{1}P_{2}$.
-Also can be visualized by drawing three intersecting circles.
-One clarification here: This gives me the total probability of any two events plus all three occurring at the same time, right?
-I cannot visualize the formula by drawing circles anymore. How can the above formula be generalized to get the probability of simultaneous occurrences when there are 4, 5, ... independent events?
-I have seen that the inclusion-exclusion principle is the answer. But I am not able to get an intuition for it. The inclusion-exclusion principle gives the probability of 2,3,4 sets intersecting but isn't my question different?
-I get this doubt because the probability of four independent events occurring simultaneously is: $P_0P_1P_2P_3$. But what I need is a general formula for the total probability of two or more independent events at the same time.
-Can any of you please throw some light?
-
-Yes, indeed I meant the probability of "two or more events".
The answer you have given is very precise and the one I was looking for. Yes, it is tedious to try to visualize circles when the number exceeds three. Instead, in general, if we use $1 - (w_0 + w_1)$, where $w_0$ and $w_1$ are the probabilities that no event and that exactly one event occurs, then we land at the correct answer, given that the events are independent. Thank you so much.
-
-REPLY [4 votes]: $$1-\left(1+\sum_{i=1}^n\frac{P_i}{1-P_i}\right)\cdot\prod_{k=1}^n(1-P_k)$$<|endoftext|>
-TITLE: Hausdorff dimension of Cantor set
-QUESTION [11 upvotes]: I know this is probably an easy question, but some steps in the proofs I found almost everywhere contain parts or assumptions which I think may not be that trivial, so I would like to make it rigorous and clear enough. Here is the question:
-Let $C$ be the Cantor set obtained by deleting the middle third of the interval and continuing in this way. The general Cantor set can be treated similarly. We want to prove that the Hausdorff dimension of $C$ is $\alpha:=\log2/\log3$. So we calculate the $d$-dimensional Hausdorff measure $H^d(C)$ for all $d$ to determine the Hausdorff dimension. Let $C(k)$ be the collection of $2^k$ intervals with length $1/3^k$ in the $k^{\text{th}}$ step of the construction of the Cantor set.
-It is rather easy to show that $H^{\alpha}(C)<\infty$: the set $C(k)$ covers $C$ for every $k$, so we can bound $H^{\alpha}(C)$ from above, which implies that the Hausdorff dimension of $C$ is at most $\alpha$.
-To show the dimension is actually equal to $\alpha$, it suffices to show $H^{\alpha}(C)>0$.
-Now let $\{E_j\}_{j=1}^{\infty}$ be any covering of $C$ with diameter $diam(E_j)\le \delta$ for all $j$. How do we show that
-$$\sum_j diam(E_j)^{\alpha}>\text{constant}\,?$$
-One author (see this link) made the following assumption: the $E_j$ are open, so one can find the Lebesgue number of this covering, and when $k$ is large enough, any interval in $C(k)$ will be contained in $E_j$ for some $j$. Hence one can bound $\sum_j diam(E_j)^{\alpha}$ from below by the corresponding sum for $C(k)$.
-I got confused here: first, why can we assume the $E_j$ to be open?
-
-REPLY [3 votes]: It is a fact in general (i.e. true in any metric space and for Hausdorff measures of any dimension) that you can assume your covering sets to be open or closed. See Federer. The closed version is easier: the diameter of $\bar{S}$ and the diameter of $S$ are equal, so, if a collection of $S$'s covers your set and each has diam less than $\delta$, then you can instead consider the collection of $\bar{S}$'s, which again have diam bounded by $\delta$ and still cover your set.
-For the open version, you need some sacrifice! At a cost of arbitrarily small $\sigma$, you can enlarge every set of diam less than $\delta$ to an open one with diam less than $(1+\sigma)\delta$. The latter can be used to estimate $\mathcal{H}^s_{(1+\sigma)\delta}$ within $(1+\sigma)^s$ accuracy of $\mathcal{H}^s_{\delta}$. Since for the Hausdorff measure $\mathcal{H}^s$ you will send $\delta \to 0$, and $(1+\sigma)\delta$ will go to zero as well, your sacrifices will not affect the ultimate measure. (However, as expected, for one fixed $\delta$, $\mathcal{H}^s_\delta$ can be different if you only allow open coverings versus all coverings.)<|endoftext|>
-TITLE: $X=\{\cos n + i\sin n : n \in \mathbb{N}\}$ is dense in $\mathbb{S}^1 \subset \mathbb{C}$
-QUESTION [6 upvotes]: I'd like a hint to prove the above assertion. My idea was to find, for each point $z \in \mathbb{S}^1$, a sequence of points of $X$ converging to it, but I don't think it's right.
-REPLY [8 votes]: First solution (longer but more general):
-I propose to show that the set $G=\{n+2k\pi: n\in \mathbb{Z}, k\in \mathbb{Z}\}$ is dense in $\mathbb{R}$.
-You can use the fact that the additive subgroups of $\mathbb{R}$ are either cyclic or dense.
-$G$ is not cyclic, else $\pi$ would be rational. So it is dense.
-Direct and easier solution:
-For $\varepsilon>0$ divide the unit circle into parts of equal size whose length does not exceed $\varepsilon$.
-For $m\ne n$, $e^{im}\ne e^{in}$ (else $\pi$ would be rational).
-Hence there is an infinite number of $e^{in}$. As there is only a finite number of parts, one can find $m<n$ such that $e^{im}$ and $e^{in}$ fall in the same part, so that $0<|e^{i(n-m)}-1|<\varepsilon$.
-Deduce now that any element $\,z=e^{it}\,\,,2k\pi\leq t<2(k+1)\pi\,,\,k\in\mathbb{Z}\,$ in $\, S^1\, $ is within $\varepsilon$ of some power of $e^{i(n-m)}$, and hence of some point of $X$.<|endoftext|>
-TITLE: Integral with 4 radicals-hat
-QUESTION [8 upvotes]: I'd like to find a simple way of calculating the value of:
-
-$$\int_{0}^{1}\sqrt{1+\sqrt{1 + {\sqrt{1+ \sqrt{x}}}}}\,dx .$$
-
-Of course, I thought of some change of variable, but it seems pretty complicated.
-On the other hand, I wonder whether a generalization can be made for the expression with $k$ radicals, $k>1$.
-
-REPLY [4 votes]: Let
-$$\begin{eqnarray*}
-u &=&\sqrt{1+\sqrt{1+\sqrt{x}}} \Leftrightarrow x=\left( \left( u^{2}-1\right) ^{2}-1\right)^{2}=u^{8}-4u^{6}+4u^{4}.
-\end{eqnarray*}$$
-Since $$\begin{equation*}
-dx=\left( 8u^{7}-24u^{5}+16u^{3}\right) du
-\end{equation*}$$ we have
-$$I :=\int_{0}^{1}\sqrt{1+\sqrt{1+\sqrt{1+\sqrt{x}}}}dx\\=\int_{\sqrt{2}}^{\sqrt{1+\sqrt{2}}}\sqrt{1+u}\left(8u^{7}-24u^{5}+16u^{3}\right) du.\quad\textit{(computation below)}^†
-$$ Each term can be integrated using the substitution $t=\sqrt{1+u}$ $$\begin{equation*}
-\int_{a}^{b}\sqrt{1+u}u^{n}du=2\int_{\sqrt{1+a}}^{\sqrt{1+b}}t^{2}\left(
-t^{2}-1\right) ^{n}\,dt,\quad a=\sqrt{2},b=\sqrt{1+\sqrt{2}}.
-\end{equation*}$$
-Generalization to $k=5$ radicals $$\begin{equation*}
-J:=\int_{0}^{1}\sqrt{1+\sqrt{1+\sqrt{1+\sqrt{1+\sqrt{x}}}}}dx.
-\end{equation*}$$ Similarly to above the substitution is now
-$$\begin{eqnarray*} v &=&\sqrt{1+\sqrt{1+\sqrt{1+\sqrt{x}}}}\Leftrightarrow x=\left( \left( \left( v^{2}-1\right) ^{2}-1\right) ^{2}-1\right) ^{2} \\ x &=& v^{16}-8v^{14}+24v^{12}-32v^{10}+14v^{8}+8v^{6}-8v^{4}+1,
-\end{eqnarray*}$$ and
-$$\begin{equation*} dx=\left(
-16v^{15}-112v^{13}+288v^{11}-320v^{9}+112v^{7}+48v^{5}-32v^{3}\right) dv.
-\end{equation*}$$
-
-Hence
-$$\begin{eqnarray*}
-J &=&\int_{\alpha }^{\beta }\sqrt{1+v}\left( 16v^{15}-112v^{13}+288v^{11}-320v^{9}+112v^{7}+48v^{5}-32v^{3}\right) dv \\
-\alpha &=&\sqrt{1+\sqrt{2}},\beta =\sqrt{1+\sqrt{1+\sqrt{2}}}.
-\end{eqnarray*}$$
---
-†In SWP I obtained
-$$\begin{eqnarray*}
-I &=&-\frac{26\,704}{765\,765}\sqrt{1+\sqrt{\sqrt{2}+1}}\sqrt{\sqrt{2}+1}
-\sqrt{2} \\&&+\frac{83\,584}{765\,765}\sqrt{1+\sqrt{\sqrt{2}+1}}\sqrt{\sqrt{2}+1} \\
-&&+\frac{344\,096}{765\,765}\sqrt{1+\sqrt{\sqrt{2}+1}} \\
-&&+\frac{67\,328}{109\,395}\sqrt{\sqrt{2}+1} \\
-&&-\frac{256}{3003}\sqrt{\sqrt{2}+1}\sqrt{2} \\
-&&-\frac{17\,168}{765\,765}\sqrt{1+\sqrt{\sqrt{2}+1}}\sqrt{2} \\
-&\approx &1.584\,9.
-\end{eqnarray*}$$<|endoftext|>
-TITLE: if $f$ is entire and $|f(z)| \leq 1+|z|^{1/2}$, why must $f$ be constant?
-QUESTION [27 upvotes]: How can we prove that if $f:\mathbb{C}\rightarrow\mathbb{C}$ is holomorphic (analytic) and $|f(z)| \leq 1+|z|^{1/2} \forall z$, then $f$ is constant?
-Liouville's theorem springs to mind, but I can't see how to use it since $1+|z|^{1/2}$ is not holomorphic.
The maximum modulus principle doesn't seem easily usable either. And the principle of isolated zeroes can't really be applied since all we know is an inequality, not an equation.
-Many thanks for any help with this!
-
-REPLY [3 votes]: A slightly different way: $|z f(1/z)| \leq |z| + |z|^{1/2}$ for $z \neq 0$ so $z f(1/z)$ extends to an entire function $\sum_{k \geq 1} a_k z^k$ by Riemann's extension theorem. Then $f(z) = \sum_{k \geq 1} a_kz^{1-k}$. This implies that all coefficients $a_k$ vanish except possibly $a_1$.<|endoftext|>
-TITLE: condition for a set to be compact and convex
-QUESTION [6 upvotes]: Is it true that a set in, say, $n$-dimensional Euclidean space is compact and convex iff its intersection with any line is empty, a single point, or a closed line segment?
-
-REPLY [3 votes]: Yes. "Only if" is easy, so suppose the intersection of $K$ with any line is empty, a single point, or a closed line segment. Then $K$ is convex: if $x \in K$ and $y \in K$, consider the intersection of $K$ with the line through $x$ and $y$.
-To show that $K$ is closed:
-we may assume wlog that $K$ has interior (otherwise embed it in ${\mathbb R}^k$ for some $k < n$), and (by translation) that $0$ is in that interior. If $p \in \overline{K} \backslash K$, I claim $tp \in K$ for $0 \le t < 1$; since the intersection of $K$ with the line through $0$ and $p$ is closed by hypothesis, this implies $p \in K$. Namely, suppose $ B_\delta \subseteq K$, where $B_\delta$ is the open ball of radius $\delta$ centred at $0$. Then if $p' \in K$ with $\|p' - p\| < \eta = \delta (1 - t)/t$, we have $t p \in t p' + B_{t \eta} = t p' + (1-t) B_\delta \subseteq K$.
-Finally, to show that $K$ is bounded:
-suppose $x_n \in K$ with $\|x_n\| \ge n$. Then $y_n = x_n/\|x_n\| \in K$ (by convexity, since $0 \in K$) with $\|y_n\| = 1$. By passing to a subsequence, we may assume $y_n \to y$, which is in $K$ because $K$ is closed. Now for any positive integer $m$, $m y_n \in K$ for all sufficiently large $n$, and again since $K$ is closed, $m y = \lim_{n \to \infty} m y_n \in K$. So the intersection of $K$ with the line through $0$ and $y$ is unbounded, which is a contradiction.
-
-REPLY [2 votes]: Convex sets are connected. Note that the connected subsets of $\mathbb{R}$ are: $\emptyset$, singletons, intervals, rays, and the whole line. The intersection of two convex sets is convex (the same holds replacing "convex" with any of "compact," "closed," or "bounded"). Noting that a (half-)open line segment is not closed, and that rays (and whole lines) are not bounded, it follows that if a set in $\mathbb{R}^n$ is convex and compact (so closed and bounded), then since lines are convex and connected its intersection with any line will necessarily be one of the three desired set types.
-It is clear that if $A\subseteq\mathbb{R}^n$ fails to be convex, then there is a line whose intersection with $A$ fails to be connected. If $A$ is convex but fails to be bounded, then it contains a closed ray (this shouldn't be too tricky to justify). Suppose $A$ is convex and bounded but fails to be closed, so there is some limit point $x$ of $A$ with $x\notin A$. Then at least one of the following should hold: (i) there is some hyperplane $P$ of dimension $n-1$ such that $A\cap P=\emptyset$ and $x\in P$ or (ii) every hyperplane $P$ of dimension $n-1$ with $x\in P$ contains a line $L$ whose intersection with $A$ is a (half-)open line segment.
Since "closed and bounded=compact" in $\mathbb{R}^n$, then in case (ii), we're done, and in case (i), the line through $x$ normal to the plane $P$ will have its intersection with $A$ is a (half-)open line segment, and again we're done. Thus, we see that if $A$ has the property that for every line $L$, $A\cap L$ is either empty, a singleton, or a closed line segment, then $A$ is convex and compact. (Play with it a bit to justify the claims made in this paragraph. If you get stuck, make a comment, and I'll be on later to help you out.) -Thus, they are equivalent conditions.<|endoftext|> -TITLE: Theorems about Mersenne numbers -QUESTION [5 upvotes]: The wiki page on Mersenne Primes gives 8 theorems about Mersenne primes. My question relates to number 4. and 7.: - -4.If $p$ is an odd prime, then any prime $q$ that divides $2^p-1$ must be congruent to $\pm 1 (\bmod 8)$. -7.If $p$ and $2p+1$ are both prime (meaning that $p$ is a Sophie Germain prime), and $p$ is congruent to $3 (\bmod 4)$, then $2p+1$ divides $2^p − 1$. - -I have 3 questions: - -Isn't this equivalent, since from 7. we get $2p+1=8n +7$ which is $- 1 (\bmod 8)$? Ok, I see that 7. is more stringent, since 4. also allows $1$ as divisor. If this is answer to 1., my next question is - -How does the property of $p$ and $2p+1$ both prime force that $2p+1$ divides $2^p − 1$? - -Is this extendable? Can a Cunningham chain help to find more divisors of $2^p − 1$? - - -Thanks... - -REPLY [4 votes]: For the second question: Note that $2p+1$ is of the form $8k+7$. So $2$ is a quadratic residue of $2p+1$. Thus $2\equiv a^2\pmod{2p+1}$ for some $a$. But then $2^p\equiv a^{2p}\equiv 1\pmod{2p+1}$ by Fermat's Theorem. -Assertion $4$ just puts restrictions on the form of divisors, without asserting that non-trivial divisors exist. Assertion $7$ says that a very specific number is a divisor, albeit in a very restricted setting (Sophie Germain primes). For example, it enables us to conclude, without any trial divisions, that $2^{11}-1$ is not prime, and that $2^{23}-1$ is not prime.<|endoftext|> -TITLE: Find all non-abelian groups of order 105. -QUESTION [5 upvotes]: Find all non-abelian groups of order 105. -My attempt: $105=3.5.7$. -Consider the $3$-factorization, as $5, 7, 35 \not\equiv 1\mod 3$ we have that the sylow $3$-subgroup is normal. -Consider now the $7$-factorization. By this we get that there is either $15$ sylow $7$-subgroups or one normal sylow $7$-subgroup. -If there are $15$ sylow-$7$-subgroups then $[N_G(P):P]=1$ and then since $P$ is abelian we have that $N_G(P)=C_G(P)$. But this means that we have that P has a normal $p$-complement and so we have that any groups of this form can be represented as: -$$ -But what $\alpha$ are possible? -If there is one normal sylow $7$-subgroup then we have a normal subgroup of order $21$ which is abelian. Hence we have groups of this type represented by: -$$ -But what $\beta$ are possible? - -REPLY [5 votes]: Let $\,Q_p\,$ be a Sylow p-sbgp. of $\,G\,$ . If either $\,Q_5\,,\,Q_7\,$ is normal then the product $\,P:=Q_5Q_7\,$ is a sbgp. of $\,G\,$ and thus it is normal as well, as its index is the minimal prime dividing $\,|G|\,$. In this case, and since the only existing group of order $35$ is the cyclic one, we'd get that $\,|\operatorname{Aut}(P)|=\phi (35)=24$, and thus we can define a homomorphism $\,Q_3=\langle c\rangle\to\operatorname{Aut}(P=\langle d\rangle)\,$ by $\,d^c:= c^{-1}dc\,$ . As this homom. isn't trivial (of course, we assume at least one of the Sylow sbgps. 
would give an abelian product) we get a (non-abelian, of course) semidirect product $$ P\rtimes Q_3. $$
-If both $\,Q_5,Q_7\,$ are not normal, a simple counting argument tells us there are $\,90\,$ elements of order $7$ and $84$ elements of order $5$, which is absurd as we've only $\,105\,$ elements in the group.
-Thus, the above is the only way to get a non-abelian group of the wanted order.<|endoftext|>
-TITLE: What algorithms are there for determining whether a Gaussian integer is prime?
-QUESTION [5 upvotes]: Given a Gaussian integer $z\in\mathbb{Z}[i]$, how can I determine if $z$ is prime? I imagine there exists an algorithm that maps primality in $\mathbb{Z}[i]$ to primality in $\mathbb{Z}$. And for the case when $z\in\mathbb{Z}$ I think we can just check that $z$ is a prime in $\mathbb{Z}$ and $z\equiv 3 \pmod 4$.
-
-REPLY [5 votes]: If $z \in \mathbb{Z}[i]$ is truly complex, then the norm, i.e. $\operatorname{Re}(z)^2+\operatorname{Im}(z)^2$, should be a prime in $\mathbb{N}$. If $z \in \mathbb{Z}[i]$ is real or purely imaginary, then the real or imaginary part (whichever is non-zero) should itself be, up to sign, a prime of the form $3 \bmod 4$ in $\mathbb{Z}$.<|endoftext|>
-TITLE: Freiman homomorphism on generating set
-QUESTION [5 upvotes]: I got stuck on an exercise from Tao and Vu's book Additive Combinatorics. It is ex. 5.3.4. on page 226.
-
-In the following let $(Z,+)$ and $(W,+)$ be two abelian groups and let $A \subset Z$ and $B \subset W$ be two finite subsets.
-The definition of a Freiman hom. goes as follows:
-Let $k \in \mathbb{N}\setminus \{0\}$. We call a map $\varphi:A\rightarrow B$ a $k$-Freiman homomorphism $:\Leftrightarrow \; \sum_{i=1}^{k}{a_{i}}= \sum_{i=1}^{k}{\tilde{a}_{i}} \implies \sum_{i=1}^{k}{\varphi(a_{i})}= \sum_{i=1}^{k}{\varphi(\tilde{a}_{i})}$ for $(a_{i})_{i = 1}^{k}, (\tilde{a}_{i})_{i = 1}^{k}$ elements in $A$.
-
-The exercise I'm stuck on is:
-
-Let $\varphi: A \rightarrow B$ be a Freiman $k$-homomorphism for any $k\geq 2$. Suppose further that $A$ generates $Z$ as a group.
-$\exists!$ group homomorphism $\psi: Z \rightarrow W$ and $\exists! c \in W$ such that $$\varphi(x)=\psi(x)+c \;\; \forall x \in A.$$
-
-Here are my thoughts so far:
-Clearly $c$ should be something like $\varphi(0)$ but in general $0 \notin A.$
-I can define a $\tilde{\varphi}$ on $Z$ to be just $\widetilde{\varphi}(z):=\sum{\varphi(a_i)}$ for $z=\sum{a_i} \in Z.$
-However, I don't see how I can show that this is well-defined:
-If $z=\sum_{i=1}^{n}{a_i}=\sum_{i=1}^{m}{\tilde{a}_i}$ and (WLOG) $\; m-n \geq 0\,$
-I could add just zeroes on the left side to get the same number of summands and then use the property of the Freiman homomorphism. But as I cannot make sense of $\varphi(0)$ this doesn't help for getting the well-definedness.
-I also cannot add any other elements from $A$, can I?
-The remaining parts of the proof I will be able to do, but I am stuck with this problem.
-I am very thankful for any hints or comments.
-
-REPLY [2 votes]: EDIT:
-In errata here I found: In Exercise 5.3.4, the hypothesis "$0 \in A$" is missing.
-I guess after this addition the exercise is relatively simple. (Basically the solution sketched by OP in the question.)
-My original post (before finding the erratum) follows:
-
-I have probably misunderstood something, but I think I have a counterexample. I will post it here; perhaps when someone points out where the mistake is, it might help with the solution. (The other possibility is that the counterexample is indeed correct and some assumption is missing in the exercise.)
-Non-uniqueness
-This example should give a situation where $\psi$ and $c$ exist, but are not determined uniquely.
-Consider $A=B=\{1\}$ as subsets of $(\mathbb Z,+)$.
-Define $\varphi(1)=1$.
-We have several choices for $\psi$ and $c$, e.g. $\psi=0$ and $c=1$ or $\psi=\operatorname{id}_{\mathbb Z}$ and $c=0$.
-Non-existence
-Consider $A=\{(1,0),(0,1),(2,3)\}$ as a subset of $\mathbb Z\oplus\mathbb Z$. Consider $B=\{1,2\}$ as a subset of $\mathbb Z$.
-Define $\varphi(1,0)=1$, $\varphi(0,1)=1$ and $\varphi(2,3)=2$.
-Let us check whether this is a $k$-Freiman homomorphism. We want to find out whether we can obtain some element of $\mathbb Z \oplus \mathbb Z$ in two different ways as a sum of $(1,0)$, $(0,1)$ and $(2,3)$, such that we have the same number of summands. I.e. we are looking for non-negative integers $a$, $b$, $c$, $a'$, $b'$, $c'$, such that $$a+b+c=a'+b'+c'\\a(1,0)+b(0,1)+c(2,3)=a'(1,0)+b'(0,1)+c'(2,3).$$ By solving this system we find out that it is possible only if $a=a'$, $b=b'$ and $c=c'$. (Solving this system is equivalent to checking whether $(1,0,1)$, $(0,1,1)$ and $(2,3,1)$ are linearly independent.) So there are no non-trivial sums of this form, which means that the condition from the definition of a $k$-Freiman homomorphism is fulfilled.
-Now if $\varphi=\psi+c$, we have $\psi=\varphi-c$. Using the fact that $\psi$ is a homomorphism we get $\psi(2,3)=2\psi(1,0)+3\psi(0,1)$, which means
-$$\varphi(2,3)-c=2\varphi(1,0)+3\varphi(0,1)-5c$$
-which implies $2-c=5-5c$, i.e. $4c=3$. So we cannot find $c\in\mathbb Z$.<|endoftext|>
-TITLE: solve equation $y^{\alpha} + y^{1 + \alpha} = x $
-QUESTION [6 upvotes]: How to solve this equation:
-$y^{\alpha} + y^{1 + \alpha} = x $ where $\alpha \in (-1, 0)$
-Is there a trick to solve it?
-EDIT. I want to find $y(x)$.
-
-REPLY [4 votes]: The Lagrange inversion formula, more precisely the Lagrange–Bürmann formula, is made for this.
-The function $y$ solves $y=z\phi(y)$, with $z=x^{1/\alpha}$ and $\phi:t\mapsto(1+t)^{-1/\alpha}$. Since $\phi(0)\ne0$, one knows that a solution is
-$$
-y(x)=\sum_{n\geqslant1}a_nz^n=\sum_{n\geqslant1}a_nx^{n/\alpha},\qquad a_n=\frac1n[t^{n-1}]\phi(t)^n.
-$$
-In the present case, for every $\alpha$ in $(-1,0)$,
-$$
-a_n=\frac1n{-n/\alpha\choose n-1}=-\frac{\alpha}n{-n/\alpha+1\choose n},
-$$
-and one can check that the resulting series converges for every $x$ such that $|x|\gt R_\alpha$ with
-$$
-R_\alpha=\frac{(-\alpha)^\alpha}{(1+\alpha)^{1+\alpha}}.
-$$
-Sanity check: If $\alpha=-\frac12$, one gets $R_\alpha=2$, as was to be expected since an explicit formula for $y(x)$ in this case is
-$$
-y(x)=-1+\frac12x^2-\frac12x^2\sqrt{1-\frac4{x^2}}=\frac{1-\sqrt{1-\frac4{x^2}}}{1+\sqrt{1-\frac4{x^2}}}.
-$$
-Edit:
-For every $\beta\gt0$ and $\gamma$, when $n\to\infty$,
-$$
-\frac{\Gamma(\beta(n+1)+\gamma)}{\Gamma(\beta n+\gamma)}\sim \beta^\beta n^\beta.
-$$
-Applying this to each Gamma function in the expression of
-$$
-a_n=\dfrac{\Gamma(-n/\alpha+1)}{\Gamma(n+1)\Gamma(-n/\alpha-n+2)},
-$$
-one sees that the powers of $n$ cancel out and one is left with
-$$
-\lim\limits_{n\to\infty}\frac{a_{n+1}}{a_n}=\frac{(-1/\alpha)^{-1/\alpha}}{1^1\,(-1/\alpha-1)^{-1/\alpha-1}}=\varrho(\alpha).
-$$
-Hence the series $y(x)$ converges for every $x$ such that $\varrho(\alpha)|x|^{1/\alpha}\lt1$ and diverges for every $x$ such that $\varrho(\alpha)|x|^{1/\alpha}\gt1$.
<|endoftext|>
-TITLE: solve equation $y^{\alpha} + y^{1 + \alpha} = x $
-QUESTION [6 upvotes]: How does one solve this equation:
-$y^{\alpha} + y^{1 + \alpha} = x $ where $\alpha \in (-1, 0)$
-Is there a trick to solve it?
-EDIT. I want to find $y(x)$.
-REPLY [4 votes]: The Lagrange inversion formula, or more precisely the Lagrange–Bürmann formula, is made for this.
-The function $y$ solves $y=z\phi(y)$, with $z=x^{1/\alpha}$ and $\phi:t\mapsto(1+t)^{-1/\alpha}$. (Indeed, raising $y^\alpha(1+y)=x$ to the power $1/\alpha$ yields $y(1+y)^{1/\alpha}=x^{1/\alpha}$.) Since $\phi(0)\ne0$, one knows that a solution is
-$$
-y(x)=\sum_{n\geqslant1}a_nz^n=\sum_{n\geqslant1}a_nx^{n/\alpha},\qquad a_n=\frac1n[t^{n-1}]\phi(t)^n.
-$$
-In the present case, for every $\alpha$ in $(-1,0)$,
-$$
-a_n=\frac1n{-n/\alpha\choose n-1}=-\frac{\alpha}n{-n/\alpha+1\choose n},
-$$
-and one can check that the resulting series converges for every $x$ such that $|x|\gt R_\alpha$ with
-$$
-R_\alpha=\frac{(-\alpha)^\alpha}{(1+\alpha)^{1+\alpha}}.
-$$
-Sanity check: If $\alpha=-\frac12$, one gets $R_\alpha=2$, as was to be expected since an explicit formula for $y(x)$ in this case is
-$$
-y(x)=-1+\frac12x^2-\frac12x^2\sqrt{1-\frac4{x^2}}=\frac{1-\sqrt{1-\frac4{x^2}}}{1+\sqrt{1-\frac4{x^2}}}.
-$$
-Edit:
-For every $\beta\gt0$ and $\gamma$, when $n\to\infty$,
-$$
-\frac{\Gamma(\beta(n+1)+\gamma)}{\Gamma(\beta n+\gamma)}\sim \beta^\beta n^\beta.
-$$
-Applying this to each Gamma function in the expression of
-$$
-a_n=\dfrac{\Gamma(-n/\alpha+1)}{\Gamma(n+1)\Gamma(-n/\alpha-n+2)},
-$$
-one sees that the powers of $n$ cancel out and one is left with
-$$
-\lim\limits_{n\to\infty}\frac{a_{n+1}}{a_n}=\frac{(-1/\alpha)^{-1/\alpha}}{1^1\,(-1/\alpha-1)^{-1/\alpha-1}}=(-1/\alpha)\,(1+\alpha)^{(1+\alpha)/\alpha}=\varrho(\alpha).
-$$
-Hence the series $y(x)$ converges for every $x$ such that $\varrho(\alpha)|x|^{1/\alpha}\lt1$ and diverges for every $x$ such that $\varrho(\alpha)|x|^{1/\alpha}\gt1$.
-Since $\alpha\lt0$, the first condition reads $|x|\gt\varrho(\alpha)^{-\alpha}$ and the second condition reads $|x|\lt\varrho(\alpha)^{-\alpha}$. Since $\varrho(\alpha)^{-\alpha}=R_\alpha$, the proof is complete.
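-The series can be checked numerically against the closed form in the case $\alpha=-\frac12$; a sketch using mpmath (the precision and the truncation at 60 terms are arbitrary choices):
-    from mpmath import mp, binomial, mpf, sqrt
-    mp.dps = 30
-    alpha = mpf(-0.5)
-    def a(n):
-        # a_n = (1/n) * C(-n/alpha, n-1), from the Lagrange-Buermann formula above
-        return binomial(-n / alpha, n - 1) / n
-    def y_series(x, terms=60):
-        return sum(a(n) * x**(n / alpha) for n in range(1, terms + 1))
-    def y_exact(x):
-        s = sqrt(1 - 4 / x**2)
-        return (1 - s) / (1 + s)
-    x = mpf(3)  # inside the region of convergence, since |x| > R_{-1/2} = 2
-    print(y_series(x), y_exact(x))  # the two values should agree to many digits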
<|endoftext|>
-TITLE: If every irreducible representation of a finite group has dimension $1$, why must the group be abelian?
-QUESTION [10 upvotes]: Suppose $G$ is a group and that every irreducible representation of $G$ has dimension $1$. Why does this mean that $G$ is abelian?
-The number of $1$-dimensional representations of $G$ is given by $|G/G'|$, where $G'$ is the derived subgroup of $G$. So if every irreducible representation of $G$ has degree $1$, then the number of conjugacy classes of $G$ is equal to $|G/G'|$. I can't see how to conclude that $G$ is abelian (or if this is the right approach).
-Any help appreciated. Thanks
-REPLY [9 votes]: The following steps lead to a solution. The argument I am giving here is essentially equivalent to the argument given by Qiaochu above (in the comments) but is more general in that it is also applicable for compact Lie groups. The main point in this connection is in the application of the Peter-Weyl theorem in the solution to Exercise 4 below for compact Lie groups.
-Let $G$ be a group and let $G'$ denote the commutator subgroup of $G$. We wish to understand how $G'$ acts on one-dimensional representations of $G$:
-Exercise 1: Let $(\pi,V)$ be a one-dimensional representation of $G$. Prove that the induced representation of $G'$ is trivial.
-The following result is fundamental in representation theory and is applicable to finite groups or, more generally, compact Lie groups:
-Exercise 2: Prove that every finite-dimensional representation of $G$ is a direct sum of irreducible representations.
-We know that every irreducible representation of $G$ is one-dimensional by assumption. In particular, every finite-dimensional representation of $G$ is a direct sum of one-dimensional representations.
-Exercise 3: Prove that $G'$ acts trivially on every finite-dimensional representation of $G$.
-Let us recall that a representation $(\pi,V)$ of $G$ is said to be faithful if the kernel $\text{ker }\pi$ is the trivial subgroup of $G$. The following exercise might be a little difficult; you can assume it on faith if desired:
-Exercise 4: Prove that if $G$ is a finite group or a compact Lie group, then $G$ possesses a finite-dimensional faithful representation. (Hint: If $G$ is finite, then you can prove this using a familiar representation of $G$. If $G$ is an arbitrary compact Lie group, then appeal to the Peter-Weyl theorem if you can.)
-The result should now be clear:
-Exercise 5: Prove that if every irreducible representation of $G$ is one-dimensional, then $G'=\{e\}$, the trivial subgroup of $G$, i.e., $G$ is abelian.
-I hope this helps!<|endoftext|>
-TITLE: Generalized Eigenvalue Problem
-QUESTION [6 upvotes]: Consider a generalized eigenvalue problem $Av = \lambda Bv$
where $A$ and $B$ are square matrices of the same dimension. It is known that $A$ is positive semidefinite, and that $B$ is diagonal with positive entries.
-It is clear that the generalized eigenvalues will be nonnegative. What else can one say about the eigenvalues of the generalized problem in terms of the eigenvalues of $A$ and the diagonals of $B$? Equivalently, what else can one say about the eigenvalues of $B^{-1}A$?
-It seems reasonable (skipping over zero eigenvalues) that
-$$
-\lambda_{min}(B^{-1}A) \geq \lambda_{min}(A)/B_{max}
-$$
-but I am unable to see how one could rigorously show this, and it is perhaps a conservative bound. Equivalently again, what could one say about the eigenvalues of
-$$
-B^{-1/2}AB^{-1/2}\,?
-$$
-REPLY [3 votes]: Let $A$ be a symmetric, real, and positive-semidefinite $n \times n$ matrix. Consider the quadratic form $v^T Av$ on $\mathbb{R}^n$. By restricting if necessary to the complement of the nullspace of $A$, we may assume WLOG that $A$ is positive-definite.
-Lemma: The minimal eigenvalue of $A$ is the minimal value of $v^T A v$ on the unit sphere $||v|| = 1$.
-This is a consequence of the (proof of the) spectral theorem. Now, let $D = B^{-1/2}$ be a diagonal matrix with positive real entries. What is the minimal value of $v^T DAD v = (Dv)^T A (Dv)$ on the unit sphere? Well, $||Dv||$ is at least the smallest diagonal entry $d_{min}$ of $D$, so it follows that the minimal value is at least $d_{min}^2$ times the minimal eigenvalue of $A$. The conclusion follows.
-$D$ being diagonal is not essential to this argument; it just needs to be invertible. In general $d_{min}$ needs to be replaced by the inverse of the operator norm $||D^{-1}||^{-1}$ and we need to consider $v^T D^T AD v = (Dv)^T A (Dv)$.
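-A quick randomized test of the claimed bound, as a sketch with numpy (the matrix size and entry distributions are arbitrary):
-    import numpy as np
-    rng = np.random.default_rng(0)
-    n = 6
-    M = rng.standard_normal((n, n))
-    A = M @ M.T                             # random symmetric positive-semidefinite matrix
-    B = np.diag(rng.uniform(0.5, 3.0, n))   # diagonal with positive entries
-    D = np.diag(1.0 / np.sqrt(np.diag(B)))  # D = B^{-1/2}
-    C = D @ A @ D                           # B^{-1/2} A B^{-1/2}
-    lhs = np.linalg.eigvalsh(C).min()
-    rhs = np.linalg.eigvalsh(A).min() / np.diag(B).max()
-    print(lhs >= rhs - 1e-12)  # the bound above; should print True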
<|endoftext|>
-TITLE: Group isomorphism concerning free group generated by $3$ elements.
-QUESTION [9 upvotes]: From Jacobson's Basic Algebra I, page 70,
-Let $G$ be the group defined by the following relations in $FG^{(3)}$:
 $$x_2x_1=x_3x_1x_2, \qquad x_3x_1=x_1x_3,\qquad x_3x_2=x_2x_3.$$
 Show that $G$ is isomorphic to the group $G'$ defined to be the set of triples of integers $(k,l,m)$ with $$(k_1,l_1,m_1)(k_2,l_2,m_2)=(k_1+k_2+l_1m_2,l_1+l_2,m_1+m_2).$$
-My thoughts: I was able to show that $G'$ is generated by $(1,0,0)$, $(0,1,0)$ and $(0,0,1)$, since $(h,l,m)=(1,0,0)^{h-lm}(0,1,0)^l(0,0,1)^m$. Letting $(0,0,1)=a_1$, $(0,1,0)=a_2$, and $(1,0,0)=a_3$, I calculate that $a_2a_1=a_3a_1a_2$, $a_3a_1=a_1a_3$, $a_3a_2=a_2a_3$. So it looks like they satisfy the same relations as the $x_i$. (I'm not sure if this is necessary.)
-So taking the set $X=\{x_1,x_2,x_3\}$, I have a map $x_i\mapsto a_i$, which gives a homomorphism of $FG^{(3)}$ into $G'$ such that $\bar{x}_i\mapsto a_i$, and this homomorphism is in fact an epimorphism since it maps onto a set of generators for $G'$. Thus $FG^{(3)}/K\simeq G'$ where $K$ is the kernel of the homomorphism. Since $G\simeq FG^{(3)}/K$, $G\simeq G'$.
-I can't quite justify that $G\simeq FG^{(3)}/K$. From the comments, I understand why the generated normal subgroup $K$ is contained in the kernel $\ker\nu$ of the induced homomorphism $FG^{(3)}\to G'$, but I don't follow why $\ker\nu\subset K$. Why does $\ker\nu\subset K$? Thanks.
-REPLY [4 votes]: Since my solution seems to be unnoticed, I edit it in order to make it more formal and complete.
-Let $G = \langle x_1, x_2, x_3\mid x_2x_1 = x_3x_1x_2, x_3x_1 = x_1x_3, x_3x_2 = x_2x_3\rangle\ $ and $G' = (\mathbb{Z}^3, \star)$ where $\star$ is the following operation:
-$$(h,l,m)\star(h',l',m') = (h+h'+lm', l + l', m + m')$$
-What I have to prove is that $G\cong G'$.
-Let $K$ be the normal closure of $\{x_2x_1x_2^{-1}x_1^{-1}x_3^{-1}, x_3x_1x_3^{-1}x_1^{-1}, x_3x_2x_3^{-1}x_2^{-1}\}$ in $FG^{(3)}$; then $G\cong FG^{(3)}/K$ by the definition of presentation (at least the one I use).
-Now, let $a_1$, $a_2$ and $a_3$ denote the elements $(0,0,1)$, $(0,1,0)$ and $(1,0,0)$ of $G'$.
They generate $G'$ since
-$$\begin{align} a_3^h\star a_1^m\star a_2^l &= (1,0,0)^h\star(0,0,1)^m\star(0,1,0)^l \\
-&= (h,0,0)\star(0,0,m)\star(0,l,0) \\
-&= (h,0,m)\star(0,l,0) \\
-&= (h,l,m)\end{align}$$
-Now, we have a set of generators of cardinality $3$ so $G' \cong FG^{(3)}/K'$ for some normal subgroup $K'$ of $FG^{(3)}$.
-Let $\nu\colon FG^{(3)}\to G'$ denote the homomorphism such that $\ker(\nu) = K'$ and $\pi\colon FG^{(3)}\to G$ the homomorphism with $\ker(\pi) = K$. I want to show that there exists a homomorphism $\mu\colon G\to G'$ such that $\nu = \mu\circ \pi$. It is obvious that if such a function exists then $\mu(x_i)=a_i$.
-Since $(1,0,0)\star(0,0,1) = (0,0,1)\star(1,0,0) = (1,0,1)$, $(1,0,0)\star(0,1,0) = (0,1,0)\star(1,0,0) = (1,1,0)$ and $(0,1,0)\star(0,0,1) = (1,0,0)\star(0,0,1)\star(0,1,0) = (1,1,1)$, it follows that the relations of $G$ are in $K'$ too. So, since $K$ is the smallest normal subgroup that contains them, we can conclude that $K\subseteq K'$.
-By the third isomorphism theorem, $FG^{(3)}/K' \cong (FG^{(3)}/K)/(K'/K)$, or in other words $\mu$ exists (actually it is also unique by the universal property of the free groups).
-My last answer started from this point.
-Since $\nu$ is surjective, $\mu$ has to be surjective too. In other words, $\mu^{-1}(a)$ contains at least one element for every $a\in G'$. An obvious choice is the elements $x_3^hx_1^mx_2^l$; in fact $\mu(x_3^hx_1^mx_2^l) = \mu(x_3)^h\star\mu(x_1)^m\star\mu(x_2)^l = a_3^h\star a_1^m\star a_2^l = (h,l,m)$.
-Let's consider an element $w\in G$ and a decomposition $w = \prod x_i^{\varepsilon_i}$ as a product of elements of the set $\{x_1, x_2, x_3\}$. I want to show that there exists a product of the form $x_3^hx_1^mx_2^l$ such that $x_3^hx_1^mx_2^l = w$.
-I do this by considering the product $\prod x_i^{\varepsilon_i}$ as a sequence of elements of $\{x_1, x_2, x_3\}$ and then transforming it into the wanted form in a finite number of steps. Let's define the transformations:
-Since, by the relations, $x_3$ and $x_3^{-1}$ commute with the other generators, the first transformation consists in moving an $x_3$ or an $x_3^{-1}$ to the beginning of the sequence.
-The second one consists in deleting $x_ix_i^{-1}$ or $x_i^{-1}x_i$ from the sequence.
-The third one consists in the application of the first relation $x_2x_1 = x_3x_1x_2$.
-The fourth transformation is simply $x_2^{-1}x_1^{-1} = x_3x_1^{-1}x_2^{-1}$, which is the inverse of the third rule.
-It is obvious that these first four transformations transform a product into an equivalent one (in $G$).
-Let's consider the product $x_2x_1^{-1}x_2^{-1}x_1$. Manipulating it with the first four transformations, we obtain the equation $x_2x_1^{-1}x_2^{-1}x_1 = x_3^{-1}$. So:
-The fifth one consists in the use of the equivalence $x_2x_1^{-1} = x_3^{-1}x_1^{-1}x_2$, which is a direct consequence of $x_2x_1^{-1}x_2^{-1}x_1 = x_3^{-1}$.
-The last transformation is $x_2^{-1}x_1 = x_3^{-1}x_1x_2^{-1}$, which is analogous to the fifth one.
-Like the first four transformations, the last two send products to products with the same result.
-Using them, we can transform every product into a product of the form $x_3^hx_1^mx_2^l$ with the same result.
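-(As a sanity check on these relations and on the normal form, one can model $G'$ directly; a small sketch in which the integer triples play the roles of $a_1,a_2,a_3$:
-    def star(u, v):
-        h1, l1, m1 = u; h2, l2, m2 = v
-        return (h1 + h2 + l1 * m2, l1 + l2, m1 + m2)
-    def inv(g):
-        h, l, m = g
-        return (-h + l * m, -l, -m)
-    a1, a2, a3 = (0, 0, 1), (0, 1, 0), (1, 0, 0)
-    # the three defining relations of G hold in G'
-    assert star(a2, a1) == star(star(a3, a1), a2)
-    assert star(a3, a1) == star(a1, a3)
-    assert star(a3, a2) == star(a2, a3)
-    def pw(g, n):
-        # integer power in G'
-        r, x = (0, 0, 0), (g if n >= 0 else inv(g))
-        for _ in range(abs(n)):
-            r = star(r, x)
-        return r
-    # the normal form: a3^h * a1^m * a2^l == (h, l, m)
-    h, l, m = 4, -2, 5
-    assert star(star(pw(a3, h), pw(a1, m)), pw(a2, l)) == (h, l, m)
-All assertions pass, matching the computation displayed above.)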
Since every such product has a different image in $G'$, we conclude that $\mu$ defines an isomorphism between the two groups.<|endoftext|>
-TITLE: Lower bound on proof length
-QUESTION [7 upvotes]: Given any positive integer $n$, is there a way to quickly construct a statement $S$, such that the shortest proof of $S$, if it exists, must have length at least $n$?
-And such that at least one of $S$ and $\lnot S$ is provable.
-And question 2:
-If $S$, then a proof of $S$ must have length at least $n$.
-If not $S$, then a proof of not $S$ must have length at least $n$.
-Edit:
-I had in mind that the number of symbols in $S$ should be of the same order as the number of symbols required to specify $n$.
-REPLY [16 votes]: There's a famous example that I think is due to Gödel. It is analogous to the so-called "Gödel sentence" $G$ which can be interpreted to mean "$G$ has no proof in PA", and which is therefore true but not provable in PA. (Supposing that PA is consistent.)
-The analogous example is a sentence $S$ whose interpretation is
-$S$ cannot be proved in PA in less than one million steps.
-Suppose $S$ is false. Then $S$ can be proved in PA (in less than one million steps) and therefore PA is inconsistent.
-Suppose $S$ is true. Then $S$ cannot be proved in PA in less than one million steps.
-So the assumption that PA is consistent leads us to the conclusion that $S$ is either unprovable, or provable but not in less than a million steps.
-But there is a proof of $S$: enumerate all proofs of up to a million steps, and check each one to make sure it does not prove $S$. (This proof takes vastly more than a million steps.) So $S$ is both true and provable and its shortest proof requires at least a million steps.
-Aha, I see this is discussed in Wikipedia.<|endoftext|>
-TITLE: Prove $\mathbb{Z}$ is not a vector space over a field
-QUESTION [24 upvotes]: This is an exercise from Chapter 3 of Golan's linear algebra book.
-Problem: Show $\mathbb{Z}$ is not a vector space over a field.
-Solution attempt:
-Suppose there is such a field and proceed by contradiction. I will write multiplication $FV$, where $F$ is in the field and $V$ is an element of $\mathbb{Z}$.
-First we rule out the case where the field has characteristic 2. We would have
-$$0=(1_F+1_F)1=1_F1+1_F1=2$$
-a contradiction.
-Now, consider the case where the field does not have characteristic 2. Then there is an element $2^{-1}_F$ in the field, and
-$1=2_F(2^{-1}_F1)=2^{-1}_F1+2^{-1}_F1$
-Now $2^{-1}_F1\in\mathbb{Z}$ as it is an element of the vector space, but there is no element $a\in\mathbb{Z}$ with $2a=1$, so we have a contradiction.
-Is this correct?
-REPLY [9 votes]: The answer is, "Yes."
-If $V$ is a vector space over a field of positive characteristic, then as an abelian group, every element of $V$ has finite order. If $V$ is a vector space over a field of characteristic $0$, then as an abelian group, $V$ is divisible. The abelian group $\mathbb Z$ has neither of these properties.<|endoftext|>
-TITLE: Continuous images of compact sets are compact
-QUESTION [6 upvotes]: Let $X$ be a compact metric space and $Y$ any metric space. If $f:X \to Y$ is continuous, then $f(X)$ is compact (that is, continuous functions carry compact sets into compact sets).
-Proof:
-Consider an open cover of $f(X)$.
-Then $f(X) \subset \bigcup_{\alpha \in A}V_\alpha$ where each $V_\alpha$ is open in $Y$.
-$X \subset f^{-1}(f(X)) \subset f^{-1}\left(\bigcup_{\alpha \in A}V_\alpha\right) = \bigcup_{\alpha \in A}f^{-1}(V_\alpha)$.
-Hence $\{f^{-1}(V_\alpha)\}_{\alpha\in A}$ is an open cover of $X$. Since $X$ is compact, we can choose a finite subcollection $\{V_i\}_{i=1}^n$ such that $X \subset \bigcup_{i=1}^n f^{-1}(V_i)$.
-So then $f(X) \subset f\left(\bigcup_{i=1}^n f^{-1}(V_i)\right) = \bigcup_{i=1}^n f\left(f^{-1}(V_i)\right) \subset \bigcup_{i=1}^n V_i$, a finite subcover of $f(X)$. $\therefore f(X)$ is compact.
-Does this proof have an error?
-Thanks for your help.
-REPLY [2 votes]: The proof is not good. Knowing that $f$ is continuous does not say much about $f^{-1}$, while the proof assumes that it exists and is continuous. It was not given that $f$ is a homeomorphism.
-The statement is still true though. Take any sequence $x_n$ in compact $X$. Then it has a convergent subsequence $x_q$ (by definition of compactness). Now take the sequence $f(x_n)$ and notice that it has a convergent subsequence $f(x_q)$ (by definition of continuity). Since for any point $y \in f(X)$ there exists $x \in X $ such that $f(x) = y$, every sequence in $f(X)$ can be written as $f(x_n)$ for some sequence $x_n \in X$ and we are done.<|endoftext|>
-TITLE: Polynomials over GF(7)
-QUESTION [6 upvotes]: The following exercise is from Golan's book on linear algebra.
-Problem: Consider the algebra of polynomials over $GF(7)$, the field with 7 elements.
-a) Find a nonzero polynomial such that the corresponding polynomial function is identically equal to zero.
-b) Is the polynomial $6x^4+3x^3+6x^2+2x+5$ irreducible?
-Work so far: The first part is easy. The polynomial $x^7-x$ works by Fermat's little theorem. The second part is trickier. If the polynomial is reducible, it factors into the product of a linear term and something else, or it factors as two quadratics. The first case is easy to exclude; simply plug all seven elements of $\mathbb{Z}_7$ into the polynomial and confirm none of them is a root. The second is harder. Of course, one could just set up the systems of equations resulting from
-$$(ax^2+bx+c)(dx^2+ex+f)$$
-and go through all the possible values of $a,c,d,f$ and see if the resulting values of $b$ and $e$ are permissible, and while I know that would eventually give me the answer, I have no desire to do all of those computations. Is there a slicker way?
-REPLY [3 votes]: Hint $\ $ As suggested you could use trial-and-error, and program a computer to test the $7^4 = 2401$ cases that arise from $4$ undetermined coefficients in a factorization into two quadratics. But, as is often true, a little insight trumps brute force. By exploiting innate symmetry, we can reduce the $2401$ cases to $2$ cases. First, shifting $\rm\:x\to x\!-\!1\:$ to kill the $\rm\:x^3\:$ term yields (note the leading minus sign, since $\rm\,f\,$ has leading coefficient $6\equiv -1$)
$$\rm\begin{eqnarray} -f(x\!-\!1) &\equiv&\rm\ x^4 + 2\,x^2 - 3\,x + 2\pmod 7 \\ &\equiv&\rm\ (x^2\!- a\, x + b)\ (x^2\! + a\, x + c)\\ &\equiv&\rm\ x^4\! + (b\!+\!c\!-\!a^2)\, x^2\! + a(b\!-\!c)\,x + bc\end{eqnarray} $$
-Up to $\rm\, b,c\, $ swaps, $\rm\: bc\equiv 2\!\iff\! (b,c)\, \equiv\, \pm(2,1),\, \pm\:\!(3,3).\:$ $\rm\:b\not\equiv c\:$ else coef of $\rm\,x\,$ is $\,0\not\equiv -3$.
-If $\rm\ (b,c) \equiv\ \ \: (\ 2,\ 1\ )\ $ then $\rm\:-3 \equiv a(b\!-\!c)\equiv\ \: a\:\ $ so $\rm\:b\!+\!c\!-\!a^2\equiv\ \ \ \, 2\!+\!1\!-\!(-3)^2\equiv\ \ 1\:\not\equiv 2$
-If $\rm\ (b,c) \equiv (-2,\!-1)\:$ then $\rm\:-3 \equiv a(b\!-\!c)\equiv -a\:$ so $\rm\:b\!+\!c\!-\!a^2\equiv\, -2\!-\!1\!-\!(+3)^2\equiv -5 \equiv 2$
-So $\rm\:a,b,c \equiv 3,-2,-1,\:$ is a solution, which yields the factorization
-$$\rm -f(x\!-\!1)\, \equiv\, x^4 + 2\,x^2 - 3\,x+2\, \equiv\, (x^2-3\,x-2)(x^2+3\,x-1)\pmod 7$$
-Therefore $\rm\:f(x)\:$ is reducible since $\rm\:x\to x\!+\!1\:$ above yields a factorization of $\rm\:-f(x).\ \ $ QED
-Remark $\ $ Alternatively, you could use the Euclidean algorithm to compute $\rm\:gcd(f(x\!+\!c),x^{24}\!-\!1)\:$ for random $\rm\:c,\:$ which, $\,$ for $\rm\:c=1\:$ quickly yields $\rm\:x^2\!+\!2\:|\:f(x\!+\!1),\:$ hence $\rm\:f(x)\:$ has the factor $\rm\:(x\!-\!1)^2\!+\!2\, =\, x^2-2\,x+3.\:$ This is how some factoring algorithms work.
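-For completeness, the brute-force search dismissed in the question is tiny by machine standards; a sketch, working with the monic polynomial $-f \equiv 6f \pmod 7$:
-    from itertools import product
-    p = 7
-    ftilde = [2, 5, 1, 4, 1]  # -f(x) = x^4+4x^3+x^2+5x+2 mod 7, low degree first
-    def polymul(u, v):
-        r = [0] * (len(u) + len(v) - 1)
-        for i, ui in enumerate(u):
-            for j, vj in enumerate(v):
-                r[i + j] = (r[i + j] + ui * vj) % p
-        return r
-    # linear factors: roots in GF(7)
-    print([x for x in range(p) if sum(c * x**k for k, c in enumerate(ftilde)) % p == 0])
-    # monic quadratic times monic quadratic
-    for a, b, c, d in product(range(p), repeat=4):
-        if polymul([b, a, 1], [d, c, 1]) == ftilde:
-            print(f"(x^2+{a}x+{b})(x^2+{c}x+{d})")
-This reports no roots and (up to swapping the factors) the factorization $(x^2+5x+3)(x^2+6x+3)$, i.e. $(x^2-2x+3)(x^2-x+3)$, consistent with the hint and the remark above.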
<|endoftext|>
-TITLE: Why metrizable group requires continuity of inverse?
-QUESTION [11 upvotes]: A metrizable group is a metric space $(G,d)$ with a binary operation $\cdot$ such that $(G,\cdot)$ is a group and the maps $(\cdot):G\times G\to G$ and $f:G\to G$ given by $(\cdot)(x,y)=xy$ and $f(x)=x^{-1}$ are continuous with respect to $d$.
-Why does the definition require that $f$ be continuous? Is it possible to have continuity of $(\cdot)$ without continuity of $f$?
-REPLY [11 votes]: I made a mistake here before. Sorry!
-Continuity of multiplication is not sufficient for general topological groups.
-Consider the standard additive group of reals with the Sorgenfrey topology.
-Inverse is clearly not continuous, since $-[a,b)=(-b,-a]$.
-Multiplication (or, rather, addition) is continuous, since, intuitively, the open sets in the product topology are the sets which do not have "upper" and "right" edge (but may possibly have "left" and "lower" edge, or some parts of those). More precisely, for any $x,y$ with $x+y=c\in [a,b)$, let $0<\varepsilon<(b-c)/2$. Then for any $(x',y')\in [x,x+\varepsilon)\times [y,y+\varepsilon)$, $a\leq x+y\leq x'+y'\lt x+y+2\varepsilon \lt b$, so $x'+y'\in [a,b)$.<|endoftext|>
-TITLE: Semidirect product of two cyclic groups
-QUESTION [14 upvotes]: Describe all semidirect products of $C_n$ by $C_m$ (i.e. $C_n \rtimes C_m$) where $m,n \in \mathbb{N_+}$
-Note: For the first attempt one needs to find all homomorphisms from $C_m \to U(n)$, but the situation differs a lot for different pairs of $n,m$; is there a better way to find all structures of $C_n \rtimes C_m$?
-Wikipedia provided a general presentation of this product which I do not know how it was worked out.
-REPLY [3 votes]: As any automorphism of $C_n$ is of the form $g \mapsto g^k$ for some $k$ with $\gcd(n, k) = 1$, we have any semidirect product $C_n \rtimes C_m$ being of the form $\langle a, b|a^n = e, b^m = e, b^{-1}ab = a^k\rangle$. From that presentation we get that $a = b^{-m}ab^{m} = a^{k^m}$. Thus, $ord(a) = n$ iff $n|k^m - 1$.
-So any such semidirect product is defined by the presentation $\langle a, b|a^n = e, b^m = e, b^{-1}ab = a^k\rangle$, where $n|k^m - 1$.
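-The admissible exponents $k$ are easy to enumerate; a quick sketch (note that distinct values of $k$ can still yield isomorphic groups):
-    from math import gcd
-    def semidirect_params(n, m):
-        # all k giving a homomorphism C_m -> Aut(C_n), i.e. k^m = 1 (mod n)
-        return [k for k in range(1, n) if gcd(k, n) == 1 and pow(k, m, n) == 1]
-    print(semidirect_params(7, 3))  # [1, 2, 4]; k = 1 is the direct product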
<|endoftext|>
-TITLE: Number Theory in a Choice-less World
-QUESTION [12 upvotes]: I was reading this article on the axiom of choice (AC) and it mentions that a growing number of people are moving into the school of thought that considers AC unacceptable due to its lack of constructive proofs. A discussion with Mariano Suárez-Alvarez clarified that this rejection of AC only occurs when it makes sense.
-This got me thinking.
What are some examples of theorems in number theory that require the axiom of choice or its equivalents (i.e. Zorn's lemma) for their proofs?
-Note: Someone mentioned to me that Fermat's Last Theorem requires AC. Can someone verify this?
-REPLY [7 votes]: My experience, which is among the group of people who are working on automorphic forms, Galois representations, and their interrelations, is that no-one cares about whether or not AC is invoked. I think for some, this is simply because they genuinely don't care. For others (such as myself), it is because AC is a convenient tool for setting up certain frameworks, but they don't believe that it is truly necessary when applied to number theory. (For reasons somewhat related to Asaf Karagila's answer, I guess: there is a sense that all the rings/schemes/etc. that appear are of an essentially finitistic and constructive nature, and so one doesn't need choice to work with them --- although no-one can be bothered to actually build everything up constructively, so, as I said, AC is a convenient formal tool.)
-On a somewhat related note:
-My sense is that most number theorists, at least in the areas I am familiar with, argue with second order logic on the integers, rather than just first order logic, i.e. they are happy to quantify over subsets of the naturals and so on. And they are really working with the actual natural numbers, not just an arbitrary system satisfying PA. So it's not immediately clear as to whether results (such as FLT) which are proved for the natural numbers are actually true for any model of PA. But, as with the use (or not) of AC, it can be hard to tell, because people aren't typically concerned with this issue, and so don't phrase their arguments (even to themselves) in such as way as to make it easily discernible what axiomatic framework they are working in. (I think many have the view that "God made integers ...".) One example of this is the question of determining exactly what axiom strength is really needed to prove FLT. As far as I know, this is not yet definitively resolved.<|endoftext|>
-TITLE: How to show Tukey's Lemma proves Zorn's lemma?
-QUESTION [5 upvotes]: I heard that Zorn's lemma is equivalent to Tukey's lemma.
 Now I've proved that Zorn's lemma implies Tukey's lemma, but I cannot prove that Tukey's lemma implies Zorn's lemma. How can one show this?
-REPLY [4 votes]: You're given a partial order $\langle P,\le\rangle$ in which every chain has an upper bound, and you want to show that $P$ has a maximal element. To apply Tukey's lemma, you need to find a family $\mathscr{F}$ of finite character such that a maximal element of $\mathscr{F}$ with respect to $\subseteq$ will somehow give you a maximal element of $P$ with respect to $\le$. Since your hypothesis on $P$ involves chains, it's reasonable to look for some family connected with them. As it happens, the simplest one works: let $\mathscr{C}$ be the set of chains in $P$.
-Show that $\mathscr{C}$ has finite character.
-Let $C$ be a $\subseteq$-maximal element of $\mathscr{C}$. $C$ is a chain in $P$, so it has an upper bound $p$.
-Show that $p\in C$.
-Conclude that $p$ is a $\le$-maximal element of $P$.
-REPLY [3 votes]: Assume Tukey's lemma holds. Suppose that $(P,\leq)$ is a partially ordered set in which every chain has an upper bound.
-Now consider $\mathcal F$ to be the collection of all chains in $P$.
To see that $\mathcal F$ has finite character, recall that every finite subset of a chain is a chain, and that if $B\subseteq P$ is not a chain then there is a finite subset witnessing that.
-Now $\mathcal F$ has a maximal element $C$, which is a chain and therefore has an upper bound $p$. By the maximality of $C$ we have that $p\in C$ and that $p$ is maximal.<|endoftext|>
-TITLE: Does the assertion that every two cardinalities are comparable imply the axiom of choice?
-QUESTION [5 upvotes]: If, for any two sets $A$ and $B$, either $|A|<|B|$, $|B|<|A|$, or $|A|=|B|$ holds, does the axiom of choice hold? Why?
-REPLY [8 votes]: This is Hartogs' theorem.
-Suppose that $A$ is a set, let $\aleph(A)$ be the minimal ordinal $\alpha$ such that $|\alpha|\nleq|A|$. We cannot have that $\aleph(A)\leq|A|$, so if we assume that all cardinalities are comparable we have to have that $|A|<\aleph(A)$. This means that $A$ can be well ordered, as it can be injected into an ordinal.
-It is also known that if every set can be well ordered then the axiom of choice holds.
-(The ordinal $\aleph(A)$ is known as the Hartogs number of $A$ and it plays an important role in many of these constructions.)
-REPLY [3 votes]: This is the famous Trichotomy. It implies the Axiom of Choice. You can find a partial list of equivalents of AC in Wikipedia.<|endoftext|>
-TITLE: $G$ is a group, $H \cong K$; then is $G/H \cong G/ K$?
-QUESTION [5 upvotes]: $G$ is a group with subgroups $H$ and $K$ such that $H \cong K$; then is $G/H \cong G/ K$?
-REPLY [18 votes]: No. Consider $G = (\mathbb{Z},+)$, $H= (2\mathbb{Z},+)$ and $K= (4\mathbb{Z},+)$. Note that $H$ and $K$ are isomorphic by the mapping $z \to 2z$, while the quotients $G/H \cong \mathbb{Z}_2$ and $G/K \cong \mathbb{Z}_4$ are not isomorphic.
-You might be interested in seeing the following questions as well.
-Finite group with isomorphic normal subgroups and non-isomorphic quotients?
-https://mathoverflow.net/questions/29006/counterexamples-in-algebra/<|endoftext|>
-TITLE: Unnecessary property in definition of topological space
-QUESTION [15 upvotes]: A set $X$ with a subset $\tau\subset \mathcal{P}(X)$ is called a topological space if:
-$X\in\tau$ and $\emptyset\in \tau$.
-Let $L$ be any set. If $\{A_\lambda\}_{\lambda\in L}=\mathcal{A}\subset\tau$ then $\bigcup_{\lambda\in L} A_\lambda\in\tau$.
-Let $M$ be finite set. If $\{A_\lambda\}_{\lambda\in M}=\mathcal{A}\subset\tau$ then $\bigcap_{\lambda\in M} A_\lambda\in\tau$.
-Let $\emptyset=\mathcal{A}=\{A_\lambda\}_{\lambda\in N}$, i.e. $N=\emptyset$. Then by 3:
-$$\bigcap_{\lambda\in N} A_\lambda=\{x\in X; \forall \lambda\in N\text{ we have }x\in A_\lambda\}=X\in\tau,$$
-since $N$ is empty. And by 2:
-$$\bigcup_{\lambda\in N} A_\lambda=\{x\in X; \exists \lambda\in N\text{ such that }x\in A_\lambda\}=\emptyset\in\tau,$$
-since $N$ is empty. Then 2 and 3 imply 1.
-Many books define a topology with 1, 2 and 3. But I think that 1 is not necessary because I have proved that 2, 3 $\Rightarrow$ 1.
-Am I right?
-REPLY [6 votes]: I don't know if there's still interest in this thread, but I'm going to answer anyway.
-As Asaf points out, $\bigcap \emptyset$ may not exist, and it certainly doesn't equal $X$. The problem occurs because subsets don't remember the larger set they were cut from. However, this is a quirk of how subsets are defined. And, the problem can be avoided by redefining the meaning of "subset." This will allow us to rewrite the definition of a topological space.
-Firstly, let's agree that by "function", we will mean an ordered triple $(f,X,Y)$.
Then we can define that a subset of $X$ is a function $A : X \rightarrow 2,$ where $2 = \{0,1\}.$
-Now for a bit of notation. As shorthand for $A(x)=1$, let's write $x \propto A,$ which can be read "$x$ is an element of $A$."
-Let's also write $A \diamond X$ to mean that $A$ is a subset of $X$. Note that the subset relation is no longer transitive, so we also need a containment relation. So if $A,B \diamond X$, let's write $A \subseteq B$ in order to mean that for all $x \in X$ it holds that if $x \propto A$, then $x \propto B$.
-Also, let's write $2^X$ for the powerset of $X$. Formally, we define $$2^X := \{A : X \rightarrow 2 | A \mbox{ is a function}\}.$$
-Thus $A \diamond X$ if and only if $A \in 2^X$.
-Finally, let's write $A = \{x \in X | P(x)\}$ in order to mean that $A \diamond X$, and that $x \propto A$ iff $P(x)$.
-Given these conventions, we can define the intersection of $\mathcal{A} \diamond 2^X$ as follows.
-$$\bigcap \mathcal{A} = \{x \in X|\forall A \propto \mathcal{A} : x \propto A\}$$
-Unions can be defined similarly.
-Finally, the payoff: letting $\bot$ denote the least subset of $2^X$, and letting $\top$ denote the greatest subset of $X$, we see that
-$$\bigcap \bot = \top.$$
-We're now in a position to rewrite the definition of a topological space.
-A set $X$ with a subset $\tau \diamond 2^X$ is called a topological space if:
-For all $\mathcal{A} \subseteq \tau$ it holds that $\bigcup \mathcal{A} \propto \tau$.
-For all finite $\mathcal{B} \subseteq \tau$ it holds that $\bigcap \mathcal{B} \propto \tau$.<|endoftext|>
-TITLE: Set of zeroes of the derivative of a pathological function
-QUESTION [17 upvotes]: For a continuous function $f : [0,1] \to {\mathbb R}$, let us set
-$$
-X_f=\lbrace x \in [0,1] \mid f'(x)=0 \rbrace
-$$
-(for $x\not\in X_f$, $f'(x)$ may be a nonzero value or undefined).
-There are well-known "Cantor staircase" examples where $X_f$ is a dense open set in
-$[0,1]$. Is there a continuous $f$ with $X_f={\mathbb Q}\cap [0,1]$? Is there a continuous $f$ with $X_f=[0,1] \setminus {\mathbb Q}$?
-UPDATE 06/02/2012 Since $X_f$ is dense in $[0,1]$, the continuity set of $f'$ is included in $X_f$. Since $X_f$ has empty interior, the continuity set of $f'$ cannot be a $G_{\delta}$ in any subinterval of $[0,1]$, so $f$ cannot be everywhere differentiable on any subinterval of $[0,1]$.
-All the links proposed so far in the comments are about everywhere differentiable functions, so they do not suffice to answer my question.
-An interesting sub-question is obtained if, in addition, we also require $f$ to be increasing (so that $f$ will be a homeomorphism from $[0,1]$ onto some other interval).
-It is easy enough to construct an $f$ and control the behaviour of $f'$ on a countable set, by the usual step-by-step procedure. But it seems very hard to say anything at all on the behaviour of $f'$ on the other points of $[0,1]$.
-SECOND UPDATE 06/02/2012 As noted in the link provided in a comment below, it follows from Cousin's lemma that if $f$ is a continuous function such that $f'=0$ everywhere except
for a countable set, then $f$ is constant.
-So there is no $f$ such that $X_f=[0,1] \setminus {\mathbb Q}$: the answer to my second question is NO. My first question remains open however.
-REPLY [6 votes]: The answer to the first question is YES. In fact, one can show the following stronger statement:
-THEOREM.
There exists an increasing homeomorphism $f: [0,1] \to [0,1]$, such that $f'(q)=0$ for all $q\in {\mathbb Q}$ and any irrational $x\in [0,1]$ is contained in arbitrarily small intervals $[u,v]$ with $\frac{f(v)-f(u)}{v-u} \geq 1$ (so that $f'(x)$, if defined, is $\geq 1$).
-We use a familiar "inbreeding", piecewise and iterative construction, enumerating the rationals and imposing $f'(q)=0$ for each $q$ one by one, keeping at the same time a tight net of values $x,y$ such that $\frac{f(y)-f(x)}{y-x} \geq 1$. Formally:
-DEFINITION. Let $f,g$ be two functions $[a,b] \to {\mathbb R}$. We say that $(f,g)$ is a claw on $[a,b]$ when
-(1) $f(a)=g(a),f(b)=g(b),f'(a)=g'(a)=f'(b)=g'(b)=0$ and
-(2) $f(x)<|endoftext|>
-TITLE: Ellipse fitting methods.
-QUESTION [6 upvotes]: I have a set of points and want to fit an ellipse to this set.
-I have only found a function which fits an ellipse in the least-squares sense. In this set of points there are some noise points which should not be taken into the ellipse fitting.
-I want some references to algorithms which eliminate noisy points and fit an ellipse to a set of points.
-REPLY [7 votes]: This site has code to fit an ellipse, plus an explanation of the algorithm.
-http://www.geometrictools.com/Documentation/LeastSquaresFitting.pdf
-If the data is just "noisy", this should work fine.
-If there are relatively few "wild" points in the input, they probably won't influence the result very much. You can filter them out after doing a first trial fitting.
-Anyway, try this code, and see if it works for you.
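-As a rough illustration of the approach (not the linked document's exact algorithm): fit the general conic $Ax^2+Bxy+Cy^2+Dx+Ey+F=0$ by algebraic least squares, then drop the worst residuals and refit, which is the trial-fit-then-filter strategy suggested above:
-    import numpy as np
-    def conic_coeffs(x, y):
-        # design matrix for A x^2 + B xy + C y^2 + D x + E y + F = 0
-        return np.column_stack([x*x, x*y, y*y, x, y, np.ones_like(x)])
-    def fit_conic(x, y):
-        # algebraic least squares: minimize ||M p|| subject to ||p|| = 1
-        M = conic_coeffs(x, y)
-        return np.linalg.svd(M)[2][-1]  # right singular vector of smallest singular value
-    def fit_with_trimming(x, y, keep=0.8):
-        # trial fit, drop the worst residuals, refit
-        p = fit_conic(x, y)
-        r = np.abs(conic_coeffs(x, y) @ p)
-        idx = np.argsort(r)[: int(keep * len(x))]
-        return fit_conic(x[idx], y[idx])
-Note that this plain algebraic fit does not enforce the ellipse condition $B^2 < 4AC$; the linked document discusses more careful formulations.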
<|endoftext|>
-TITLE: Sum of $\cos(k x)$
-QUESTION [10 upvotes]: I'm trying to calculate the trigonometric sum: $$\sum\limits_{k=1}^{n}\cos(k x)$$
-This is what I've tried so far: $$\renewcommand\Re{\operatorname{Re}}
\begin{align*}
\sum\limits_{k=1}^{n}\cos(k x) &= \Re\left(\sum\limits_{k=1}^{n}e^{i k x}\right)\\
&= \Re\left(e^{i x}\frac{1 - e^{inx}}{1 - e^{ix}}\right)
\end{align*}$$
-How can I go on?
-REPLY [17 votes]: Here's a slightly different approach from André's; you may find it easier and less error-prone (it doesn't involve common denominators or long expansions). There's also a useful trick in it, so it's not completely uninteresting.
-$$\begin{align}
\sum_{k=1}^n \cos(kx) & = \Re\left(\sum_{k=1}^n e^{ikx}\right)\\
& = \Re\left(e^{ix} {e^{inx}-1 \over e^{ix}-1}\right) \\
& = \Re\left({e^{ix} e^{inx \over 2} \over e^{ix \over 2}} {e^{inx \over 2} - e^{-inx \over 2} \over e^{ix \over 2} - e^{-ix\over2}}\right)\\
& = \Re\left( e^{i(n+1)x \over 2} {\sin{nx\over2} \over \sin{x \over 2}}\right)\\
& = {\sin{nx\over2} \over \sin{x \over 2}} \cos\left({(n+1)x\over2}\right)
\end{align}$$
-The trick is between lines 2 and 3, where you factor by the "half-angle" exponential to make a sine appear. As in André's proof you need to distinguish the case $e^{ix} = 1$.<|endoftext|>
-TITLE: How does one give a mathematical talk?
-QUESTION [56 upvotes]: Sometime tomorrow morning I will be presenting a mathematics talk on something related to commutative algebra. The people present there will probably be two mathematicians (an algebraic geometer and a complex analyst) and some friends of mine.
-Now this is the first time in my life giving a mathematics talk; it will not be a "general" talk aimed at the public but will be more technical. It will involve technical terms (if that's what one calls them) like quotient rings and localisation. As this is my first time I am obviously a little worried! I have read Halmos' advice here but I feel it is more for a talk aimed at the general public.
-Several things bug me, one of them being how much detail in the proofs one puts in a talk. I am thinking obviously one does not check that maps are well-defined on the board but just says "one can check so and so is well-defined".
-Furthermore, what about the speed at which one writes? I am comfortable writing on the board and some people tell me I speak and write too fast. Obviously that is a problem and I need to slow down, but I also don't want to talk so slowly that I bore the audience and seem to lack passion. What is a good indicator of how "fast" or "slow" one should give a talk?
-Besides, is there a way that one should act when up on stage? By that I mean so-called "socially acceptable" do's and don'ts. I think any advice given from those who have "been there done that" would be useful for future wannabe mathematicians like me.
-Thanks
-Edit: Since many people have said it is difficult to give advice not knowing the audience, for the moment the audience will be a complex analyst, an algebraic geometer, one person who has just completed honours in orbifold theory, another friend in third year taking courses in measure theory, Galois theory and differential geometry, and lastly a PhD student in operator/$C^{\ast}$-algebras
-REPLY [18 votes]: A lot of good advice has already been given. However, there is a key question which I'm surprised that no one has yet asked:
-WHY are you giving this talk?
-Every mathematical talk has a purpose, and there are many different purposes for a mathematical talk. For instance:
-1) The job talk: you are being (explicitly or implicitly) screened for an academic job, and the goal of your talk is to make the listeners want to hire you.
-(As a good friend and colleague of mine likes to point out, in some sense every talk that you travel to a new place to deliver is a job talk in that if you do very well you will advance your career and if you do very badly your career may suffer.)
-2) The research seminar talk: In this type of talk the purpose is to convey something about your own recent work to an audience of relative experts. Note that this is formally similar to a job talk (and see the above parenthetical comment), but if it is really not a job talk -- and, in particular, if it's a talk you're giving to your colleagues / peers / students -- it has a quite different purpose: to inform rather than impress.
-3) The colloquium talk: This is somewhat similar to 2) but with a general mathematical audience. (Note that the phrase "general mathematical audience" actually carries relatively little meaning: if you are asked to give such a talk, you should find out more exactly what it means! Sometimes colloquia are attended primarily by research mathematicians with a particular interest in your field, and sometimes they are attended primarily by undergraduates who may or may not be math majors.) The purpose of a colloquium talk is somewhere between information and entertainment. Although you are expected to talk about your research, this can sometimes be in a very general way: for instance, you can give an excellent colloquium talk in which you do not explicitly state a theorem of your own (but it takes some guts and confidence in yourself to do so).
-4) The learning seminar talk: This is a talk that you give as a participant in a seminar.
Generally you have chosen, or been assigned, some specific paper or part of a paper and your job is to present as much of this as possible in the time allotted, starting from an overview but often including key technical details.
-5) The student project talk: This is a talk that a student gives in the context of a particular course. Often, but not always, it is given "in class" and only attended by other members of the class and the instructor. Often there will be an accompanying written paper that you are summarizing. This is similar to 4) but probably at a lower level.
-6) The expository talk: This is a talk with the goal of teaching the audience some material which is (usually) not due to you. Really it is like teaching a course except it all takes place in one sitting (or maybe a small series of sittings), which of course makes it challenging. Generally you have some specific reason to do this, e.g. it functions as background / prelude to some other activity.
-This is probably not a complete list, and if anyone wants to suggest any additions, please do so.
-So...what kind of talk will you be giving? Any of the above? Did someone ask you to give this talk? If so, why?<|endoftext|>
-TITLE: how to calculate this infinite integral of infinite product of cosine
-QUESTION [5 upvotes]: What is the value of this nontrivial integral:
-$$\int_0^{+\infty} \left( \prod_{n = 1}^{+\infty} \cos \frac{x}{n}\right) \, \mbox d x$$
-I don't know if there is a nice closed answer with known constants.
-REPLY [3 votes]: Beginning of an answer. Use these:
-$$\begin{align}
\cos x &= \prod_{k=0}^\infty \left(1-\frac{4x^2}{(2k+1)^2 \pi^2}\right)
\\
\frac{\sin x}{x} &= \prod_{k=1}^\infty \left(1-\frac{x^2}{k^2 \pi^2}\right)
\end{align}$$
-so that
-$$\begin{align}
f(x) &:= \prod_{n=1}^\infty \cos \frac{x}{n} = \prod_{n=1}^\infty \prod_{k=0}^\infty \left(1-\frac{4x^2}{(2k+1)^2n^2\pi^2}\right)
\\ &= \prod_{k=0}^\infty \prod_{n=1}^\infty \left(1-\frac{4x^2}{(2k+1)^2n^2\pi^2}\right) = \prod_{k=0}^\infty \frac{\sin\frac{2x}{2k+1}}{\frac{2x}{2k+1}} .
\end{align}$$
-We have to check that the order can be reversed.
-Now (at least for the first few $K$)$^*$ I get
-$$
\int_0^\infty \prod_{k=0}^K \frac{\sin\frac{2x}{2k+1}}{\frac{2x}{2k+1}}\,dx
= \frac{\pi}{4}
$$
-exactly. If we can find the right limit theorem, perhaps also
-$$
\int_0^\infty f(x)\,dx = \int_0^\infty \prod_{k=0}^\infty \frac{\sin\frac{2x}{2k+1}}{\frac{2x}{2k+1}}\,dx
= \frac{\pi}{4}
$$
-$^*$ Added: No, the answer $\pi/4$ is only true up to $K=6$, but fails for $7$ and up.
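-These partial products are Borwein-style integrals, and they can be probed numerically; a sketch with mpmath (delicate: the defect at $K=7$ is only of order $10^{-11}$, so high precision and a generous quadrature degree are needed to resolve it):
-    from functools import reduce
-    from mpmath import mp, quad, sinc, pi, inf
-    mp.dps = 40
-    def partial_integral(K):
-        # integral over [0, oo) of prod_{k=0}^{K} sinc(2x/(2k+1))
-        f = lambda x: reduce(lambda p, k: p * sinc(2 * x / (2 * k + 1)),
-                             range(K + 1), mp.mpf(1))
-        return quad(f, [0, inf], maxdegree=10)
-    for K in (5, 6, 7):
-        print(K, partial_integral(K) - pi / 4)  # essentially 0 up to K = 6, a tiny defect at K = 7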
<|endoftext|>
-TITLE: Is fibre product of varieties irreducible (integral)?
-QUESTION [20 upvotes]: Let $k$ be an algebraically closed field and $X,Y$ varieties (i.e. integral, separated schemes of finite type over $k$). Is the fibre product $X \times_k Y$ necessarily irreducible or integral?
-I have another more or less related question:
-If $A,B$ are finitely generated $k$-algebras which are also integral domains, is $A \otimes_k B$ an integral domain?
-REPLY [32 votes]: a) Yes, the product $X\times_k Y$ of two varieties over an algebraically closed field $k$ is a variety. To prove it, reduce to affine varieties and for those use:
-b) If the field $k$ is algebraically closed and if $A$ and $B$ are $k$-algebras without zero divisors, then their tensor product $A\otimes_kB$ also has no zero divisors.
 (Whether $A$ or $B$ is finitely generated is irrelevant.)
-This is proved in Iitaka's Algebraic Geometry, page 97 (Lemma 1.54).
-Edit: And if $k$ is not algebraically closed...
-... all is not lost!
-Let $k$ be a field and $R$ be a $k$-algebra which is a domain and with fraction field $K$.
-If the extension $k\to K$ is separable and if $k$ is algebraically closed in $K$, then for all $k$-algebras $S$ which are domains, the $k$-algebra $R\otimes_k S$ will be a domain.
-(Beware that separable means universally reduced. If $k\to K$ is algebraic, separability coincides with the usual notion.)
-With luck, one of $A$ or $B$ may play the role of $R$ and $A\otimes_k B$ will be a domain.
-As an illustration, any intermediate ring $k\subset R\subset k(T_1,\cdots,T_n)$ (where the $T_i$'s are indeterminates) satisfies the conditions above and will remain a domain when tensorized with a domain.
-Bibliography
-For the product of algebraic varieties I recommend Chapter 4 of Milne's online notes.
-For the tensor product of fields, you might look at Bourbaki's Algebra, Chapter V, §17.
-New Edit
-At Li's request I'll show that $X\times Y$ is irreducible.
-Let $$X\times Y=F_1\cup F_2$$ with $F_i$ closed and consider the sets $X_i=\lbrace x\in X\mid \lbrace x \rbrace\times Y\subset F_i\rbrace$.
-Since $$\lbrace x \rbrace\times Y=[(\lbrace x \rbrace\times Y)\cap F_1]\cup [(\lbrace x \rbrace\times Y)\cap F_2]$$ we see, by irreducibility of $\lbrace x \rbrace\times Y$ (isomorphic to $Y$), that each vertical fiber $\lbrace x \rbrace\times Y$ is completely included in (at least) one of the $F_i$'s. In other words, $$X=X_1 \cup X_2.$$ It suffices now to show that the $X_i$'s are closed, because by irreducibility of $X$ we will then have $X_1=X$ (say) and thus $X\times Y=F_1$: this will prove that indeed $X\times Y$ is irreducible.
-Closedness of $X_i$ is proved as follows:
-Choose a point $y_0\in Y$. The intersection $(X\times \lbrace y_0 \rbrace)\cap F_i$ is closed in $X\times \lbrace y_0 \rbrace$ and is sent to $X_i$ by the isomorphism $X\times \lbrace y_0 \rbrace \stackrel {\cong}{\to}X$. Hence $X_i\subset X$ is closed. qed.
-REPLY [7 votes]: As mentioned by Georges Elencwajg, the fiber product is also integral if $k$ is algebraically closed. In general, some authors require that a variety over any field $k$ be geometrically integral. In this way the fiber product of varieties is also a variety. As for the affine case, I just want to mention the example that $\mathbb{C} \otimes_{\mathbb{R}} \mathbb{C}$ is not a domain. In fact it is isomorphic to $\mathbb{C} \times \mathbb{C}$.<|endoftext|>
-TITLE: Double integral $\int_{z=u}^{+\infty}\int_{t=u}^{+\infty}\frac{e^{-Az}}{z+B}\frac{te^{-tD}}{t-zC}\,dtdz$
-QUESTION [5 upvotes]: I am doing research, and while calculating a closed form expression, I got an integral of the following form:
-$$\int_{z=u}^{+\infty}\int_{t=u}^{+\infty}\frac{e^{-Az}}{z+B}\frac{te^{-tD}}{t-zC}dtdz$$
-where $A$, $B$, $C$, $D$ and $u$ are positive reals.
-I don't know if there is a way to get a closed form for it or whether we have to rely on some approximations.
-
-REPLY [2 votes]: $$\begin{align} \int_{z=u}^{+\infty}\int_{t=u}^{+\infty}\frac{e^{-Az}}{z+B}\frac{te^{-tD}}{t-zC} \mathbb{d}t\mathbb{d}z &= \int_{z=u}^{+\infty} \left( \frac{e^{-Az}}{z+B} \int_{t=u}^{+\infty}\frac{te^{-tD}}{t-zC}\mathbb{d}t \right)\mathbb{d}z \\
&=\int_{z=u}^{+\infty} \left( \frac{e^{-Az}}{z+B} \underbrace{\int_{t=u}^{+\infty}\frac{te^{-tD}}{t-zC}\mathbb{d}t}_{J_{D,C}(u,z)} \right)\mathbb{d}z \\
\end{align}$$
-with
-$$\begin{align} J_{D,C}(u,z) &= \int_{t=u}^{+\infty}\frac{te^{-tD}}{t-zC}\mathbb{d}t \\
&= \int_{t=u}^{+\infty}\frac{t-zC+zC}{t-zC}e^{-tD}\mathbb{d}t \\
&= \int_{t=u}^{+\infty}e^{-tD}\mathbb{d}t + zC\int_{t=u}^{+\infty}\frac{e^{-tD}}{t-zC}\mathbb{d}t\\
&= \left[\frac{e^{-tD}}{-D}\right]_u^{+\infty} + zC I_{D,zC}(u)\\
\end{align}$$
-where $$I_{a,b}(u) = \int\limits_{u}^{\infty} \frac{\ \exp\left(-a t\right)}{t-b} \mathrm{d}t \stackrel{s=\frac{t-b}{u-b}}= e^{-ab}\int\limits_{1}^{\infty} \frac{\ e^{-a(u-b)s}}{s} \mathrm{d}s =e^{-ab}E_1(a(u-b))$$
-(the factor $u-b$ coming from $\mathrm{d}t$ cancels against the one in the denominator) $\implies$
-$$J_{D,C}(u,z) =\frac{e^{-uD}}{D} + zC\,e^{-zCD}E_1(D(u-zC))$$
-Now:
-$$\begin{align} \int_{z=u}^{+\infty}\int_{t=u}^{+\infty}\frac{e^{-Az}}{z+B}\frac{te^{-tD}}{t-zC} \mathbb{d}t\mathbb{d}z
&=\frac{e^{-uD}}{D}\int_{z=u}^{+\infty} \frac{e^{-Az}}{z+B} \mathbb{d}z + \int_{z=u}^{+\infty} \frac{zC\,e^{-zCD}E_1(D(u-zC))}{z+B} e^{-Az}\mathbb{d}z\\
&=\frac{e^{-uD}}{D}I_{A,-B}(u) + \underbrace{\int_{z=u}^{+\infty} \frac{zC\,e^{-zCD}E_1(D(u-zC))}{z+B} e^{-Az}\mathbb{d}z}_{(1)}\\
\end{align}$$
-Let $g(z)=D\,e^{D(u-zC)}E_1(D(u-zC))$, so that $zC\,e^{-zCD}E_1(D(u-zC))=\frac{C}{D}e^{-uD}\,z\,g(z)$.
-Since $\frac{z}{z+B}= 1-\frac{B}{z+B}$ then:
-$$\begin{align} (1) &= \frac{C}{D}e^{-uD}\int_{z=u}^{+\infty}{g(z)} e^{-Az}\mathbb{d}z-\frac{BC}{D}e^{-uD}\int_{z=u}^{+\infty} \frac{g(z)}{z+B} e^{-Az}\mathbb{d}z \\
\end{align}$$
-Consider the substitution $s=z-u$ (which produces an extra factor $e^{-Au}$):
-$$\begin{align}(1) &= \frac{C}{D}e^{-u(A+D)}\int_{0}^{+\infty}{g(s+u)} e^{-As}\mathbb{d}s-\frac{BC}{D}e^{-u(A+D)}\int_{0}^{+\infty} \frac{g(s+u)}{s+u+B} e^{-As}\mathbb{d}s\\
&= \frac{C}{D}e^{-u(A+D)}\,\mathcal{L}\{g(s+u)\} (A) -\frac{BC}{D}e^{-u(A+D)}\,\mathcal{S}\{g(s+u)e^{-As}\} (u+B) \\
\end{align} $$
-where $\mathcal{L}\{f(t)\}$ denotes the Laplace transform and $\mathcal{S}\{f(x)\}$ the Stieltjes transform (which is the second iterate of the Laplace transform), given by: $\mathcal{S}\{f(x)\}(y)= \int_{0}^{+\infty} \frac{f(x)}{x+y} \mathbb{d}x$
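-The evaluation of $I_{a,b}(u)$ used above is easy to verify numerically; a sketch with mpmath (the parameter values are arbitrary, chosen with $u>b$ so that the integrand has no pole on the contour):
-    from mpmath import mp, quad, e1, exp, inf
-    mp.dps = 25
-    a, b, u = 1.3, 0.4, 2.0
-    lhs = quad(lambda t: exp(-a * t) / (t - b), [u, inf])
-    rhs = exp(-a * b) * e1(a * (u - b))
-    print(lhs - rhs)  # ~ 0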
<|endoftext|>
-TITLE: How to find a 2D basis within a 3D plane - direct matrix method?
-QUESTION [8 upvotes]: I have a plane equation in 3D, in the form $Ax+By+Cz+D=0$ (or equivalently, $\textbf{x}\cdot\textbf{n} = \textbf{a}\cdot\textbf{n}$), where $\textbf{n}=\left[A\:B\:C\right]^T$ is the plane normal, and $D=-\textbf{a}\cdot\textbf{n}$ with $\textbf{a}$ as a point in the plane, hence $D$ is the negative perpendicular distance from the plane to the origin. I have some points in 3D space that I can project onto the plane, and I wish to express these points as 2D vectors within a local 2D coordinate system in the plane. The 2D coordinate system should have orthogonal axes, so I guess this is a case of finding a 2D orthonormal basis within the plane?
-There are obviously an infinity of choices for the origin of the coordinate system (within the plane), and the in-plane $x$ axis $\textbf{i}$ may be chosen to be any vector perpendicular to the plane normal. The 3D unit vector of the in-plane $y$ axis $\textbf{j}$ could then be computed as the cross product of the $x$ axis 3D unit vector and the plane normal.
One algorithm for choosing these could be:
-Set origin as projection of $\left[0\:0\:0\right]^T$ on plane
-Compute 2D $x$ axis unit vector $\textbf{i}$ direction as $\left[1\:0\:0\right]^T\times\textbf{n}$ (then normalize if nonzero)
-If this is zero (i.e. $\textbf{n}$ is also $\left[1\:0\:0\right]^T$) then use $\textbf{i}$ direction as $\left[0\:1\:0\right]^T\times\textbf{n}$ instead (then normalize)
-Compute $\textbf{j}=\textbf{i}\times\textbf{n}$
-However, this all seems a bit hacky (especially the testing for normal along the 3D $x$ axis, which would need to deal with the near-parallel case on a fixed-precision computer, which is where I'll be doing these sums).
-I'm sure there should be a nice, numerically stable, matrix-based method to find a suitable $\textbf{i}$ and $\textbf{j}$ basis within the plane. My question is: what might this matrix method be (in terms of my plane equation), and could you explain why it works?
-Thanks,
-Eric
-REPLY [2 votes]: Your original approach is fine, except for one small improvement: instead of choosing $\textbf{i}$ by default to obtain the first basis vector, you should choose the one of $\{\textbf{i}, \textbf{j}, \textbf{k}\}$ whose angle with $\textbf{n}$ is closest to 90°. This will be the one whose dot product with $\textbf{n}=(n_1, n_2, n_3)$ is smallest (by absolute value). Since $\textbf{i}\cdot \textbf{n} = n_1$ etc., simply choose the one that corresponds to the smallest $|n_i|$ (e.g. choose $\textbf{j}$ if $|n_2|$ is smallest). This guarantees a minimum angle (ca. 54°) between the two vectors and is therefore numerically robust.
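-In code, this selection rule becomes (a sketch with numpy; the orientation of the axes is a free choice):
-    import numpy as np
-    def plane_basis(n):
-        # n: plane normal (not necessarily unit); returns orthonormal i, j spanning the plane
-        n = np.asarray(n, dtype=float)
-        k = np.argmin(np.abs(n))       # coordinate axis closest to 90 degrees from n
-        e = np.zeros(3); e[k] = 1.0
-        i = np.cross(e, n); i /= np.linalg.norm(i)
-        j = np.cross(n / np.linalg.norm(n), i)  # unit, since n-hat and i are orthonormal
-        return i, j
-A projected point $p$ with in-plane origin $o$ then gets 2D coordinates (np.dot(p - o, i), np.dot(p - o, j)).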
<|endoftext|>
-TITLE: For an $n$-dimensional object, how many types of holes are possible?
-QUESTION [12 upvotes]: Update 2012-06-06: At some point I'll attempt to answer my own question by using a dual-fluid model that places the dimensionality and connectivity of "solids" and "holes" on an equal footing. With that same model, I should also be able to explain how Betti numbers are related. Incidentally, the dual-fluid model seems to provide an interesting alternative way of viewing Poincaré duality.
-Original Question 2012-05-31:
-(Please bear with my non-standard terminology; I am not a topologist.)
-In $n=3$ space an object such as an $n$-ball can have only one type of hole. For the simplest case of one such hole in one $3$-ball, the result is a ring. If the ring is smoothed down and narrowed, the result is a $1$-sphere or circle.
-It is easy to generalize "hole" to mean any topologically distinct, simple subtraction of substance from an $n$-ball. By "simple subtraction" I mean this: Define a removal $n$-ball that is initially identical to the first ball. Expand the removal ball slightly by uniformly expanding the $m$-ball defined by the $m$ axes, and contract it slightly by uniformly shrinking the ($n-m$)-ball defined by the remaining subspace of ($n-m$) axes. Intersect the resulting distorted removal ball with the first one. For the first $n$-ball, erase all substance from the locations where it intersects with the removal ball.
-This procedure results in three additional hole types for $n=3$ space: voids ($m=0$), splits ($m=2$), and erasures ($m=3$). The original definition of a hole for $n=3$ space becomes $m=1$.
-Thus using this generalized hole definition, $n$-space has $(n+1)$ hole types with labels $m=0,1,\ldots,n$. The $m=0$ case is always a void, $m=n-1$ is always a split, and $m=n$ is always an erasure.
The space has $(n-2)$ "true holes" labeled $m=1,\ldots,n-2$, with the true holes being those that (a) leave the $n$-ball connected, and (b) provide a path through it. (If the second criterion (b) is omitted, $m=0$ voids could also be called true holes.)
-The thinning procedure mentioned earlier simplifies the resulting holes and provides an easy way to observe structure that is shared across all $n$-spaces. For example, an ($n=2,m=0$) void looks like a washer or ring drawn on paper, and it thins into a $1$-sphere. An ($n=3,m=1$) hole looks like a bagel, and also thins into a $1$-sphere.
-This pattern produces a very simple rule: Every $n$-space has ($n-2$) true holes, and each of those holes $m$ has the same fundamental topology as an ($n-m-1$)-sphere. Thus our space has one true hole, ($n=3,m=1$), with the fundamental topology of a $1$-sphere or circle, and a void, ($n=3,m=0$), which has the fundamental topology of a $2$-sphere (a hollow shell). For $n=4$ there are two true holes: ($n=4,m=1$) with the form of a $1$-sphere, and ($n=4,m=2$) with the form of a $2$-sphere.
-Addendum 2012-06-05: There's no reason for me to be overly abstract about any of this:
-A "1" indicates a "filled n-cell" rather than a vertex, and "0" an empty n-cell. Each solid shown is assumed to exist in empty space, so surrounding zeros are implied.
-Notice the (x,y) gap at the center of (4,2), which would allow any sufficiently thin plane-like object in 4D to pass through. At the same time, the object itself retains its structural integrity by forming a closed loop in the (z,t) plane. That loop can be seen easily in the 3x3 units surrounding the empty (x,y) unit.
-The (4,2) hole has no analog in 3D, although ironically its fundamental form is the same as a circle or a ring. It's just that in 4D a toroid ends up with two axes of freedom through its hole instead of just one.
-Addendum 2012-06-01: Here is a way to visualize why the different $m$ holes have significantly different properties.
-In 3D, a ring is a compact object containing a single (genus 1) ($n=3,m=1$) hole. Now the interesting thing about a single hole is that it enables a curious but well-defined relationship between two objects residing in the same $n$-space. In particular, if the hole is of size $m$, you can use it to "point to" any $m$-dimensional location on an object that is indefinitely larger in $m$ dimensions than the hole object. I'll call this the pointer theorem for future reference.
-The pointer theorem sounds a bit abstract, but it's really quite easy to visualize. Imagine a ring on a wire, for example. The ring is a compact object with a ($n=3,m=1$) hole in it, and the wire is another 3D object that is indefinitely larger than the ring in exactly one dimension. The pointer theorem therefore applies, so you should be able to "relocate" the ring to any point along the indefinitely long axis of the wire. And it's true! The ring can in fact be moved to "point to" any location on the wire, no matter how long the wire is. Moreover, the relationship is non-trivial, since you cannot remove the ring from the wire without covering at least some other locations on the wire; the ring is "strung" onto the wire.
-While 3D space allows only one type of true hole, notice that a split ($n=3,m=2$), which consists of two "nearby" but separate objects, qualifies as a generalized hole. Does the pointer theorem apply to splits? Yes.
The extended object for an $m=2$ hole, regardless of the space size $n$, is a sheet-like object that is indefinitely larger than the hole object in $m=2$ dimensions. Can two such nearby objects point to any location on such a sheet? Yes. An ordinary sewing machine is a pretty good example, since it has paired parts above and below an indefinitely large sheet of fabric, and those paired parts can be used to stitch any point on a sheet of fabric. The problem in 3D, however, is that unlike the approximation provided by a sewing machine, a real split breaks the compact object apart into two disconnected units.
-Now look at the 4D case. For $n=4$ you have two "true holes," namely ($n=4,m=1$) and ($n=4,m=2$). Unlike the 3D case, either one of these holes can be used to create a continuous (internally connected) compact 4D object with genus 1. The pointer theorem says that an ($n=4,m=1$) hole can be used to point to any location on an indefinitely large rod-like or wire-like $n=4$ object. This makes the ($n=4,m=1$) hole into the 4D equivalent of a ring on a wire. Intriguingly, however, the reduced form of ($n=4,m=1$) is not a $1$-sphere, but a $2$-sphere, since $n-m-1=2$. That makes it topologically equivalent to a hollow ball in 3D space, rather than a ring! What is happening conceptually is that any 4D wire would be orthogonal to our 3D space, and would show up in intersection with our space as a $3$-ball trapped inside the hollow sphere.
-But what about that other 4D hole, ($n=4,m=2$)? It is a "sheet-like" hole, just like the one in the earlier 3D split example. The pointer theorem implies that you can "string" a compact object with a genus 1 ($n=4,m=2$) hole onto an indefinitely larger sheet-like 4D object, and then use that compact object to point to any location on the sheet.
-What's fascinating conceptually is that a compact genus 1 ($n=4,m=2$) object would remain both intact and "stuck" on such a sheet in very much the same way that a ring gets "stuck" on a long wire in 3D. My own phrase for this is the 4D abacus effect. Just as in 3D an abacus bead can be moved anywhere on a wire but remains trapped on that wire, in 4D a "bead" that has a ($n=4,m=2$) hole can be moved anywhere on a sheet, but remains trapped on that sheet.
-Note that this type of genus 1 ($n=4,m=2$) bead is equivalent to a $1$-sphere or ring, just as in the 3D abacus case. If the sheet were rotated into 3D space, this ring would show up as mysteriously connected pieces above and below the sheet. It would look very much like the 3D split in fact, with the addition of a hidden 4D link that keeps the pieces joined.
-There are infinitely many different abaci, incidentally. For every $n$-space the corresponding $n$-abacus uses $1$-sphere type beads -- that is, compact genus 1 objects with ($n=n,m=n-2$) holes -- strung onto ($n-2$)-objects. The $5$-abacus is interesting because it attaches a movable ring to any location within a 3D "object." Thus it might provide an interesting physics model for folks who like to model particles as loops attached to and mobile within a 3D space. You can also get a lot more complicated than that, of course, such as by increasing the genus and/or mixing together abaci of different dimensionality.
-The main point in all these examples of the pointer theorem is this: Holes with different $m$ cannot be equivalent because they cannot be strung onto the same kinds of objects. In 4D for example, an ($n=4,m=1$) hole can be strung onto a wire-like 4D object, but not onto a sheet-like 4D object.
The other true hole type for 4D, ($n=4,m=2$), can be strung onto a sheet-like 4D object, however.
-There's another more subtle difference I should mention. While you can in principle use an ($n=4,m=2$) "bead" to point to any location on a wire-like 4D object, you cannot "string" such a bead onto that object. It simply falls off! To see why, imagine trying to capture a thread between two disconnected objects in 3D. The same problem of too many degrees of freedom applies for trying to string an ($n=4,m=2$) bead onto a wire-like 4D object.
-So, for any given embedding space $n$, the ($n+1$) true and generalized hole types $\{m=0,1,\ldots,n\}$ are all fundamentally different in terms of the types of interactions they provide between objects in that space.
-Finally, an observation: If embedding spaces are not used, the differences between these holes remain, but they get a lot harder to observe and characterize (I think).
-
-So with all of that, here's my simple question: Where do I find all of this in topology?
-I'm assuming there must be a similarly simple and regular way of classifying holes in higher dimensional spaces in terms of their sphere analogs, but I had a lot of trouble finding it the last time I looked. (That was a couple of years ago, so maybe I need to look again.) Most of what I saw dived into the algebraic complexity and special terminology weeds so quickly that I had trouble figuring out whether it was really relevant.
-
-REPLY [6 votes]: I think Alexander Duality is what you are looking for. I gather that you are a non-expert, so I will attempt to describe in fairly informal terms how Alexander duality deals with the questions that you are interested in. Consequently, I'll suppress the inevitable technicalities, since they don't enter into the very geometric situations that you are interested in.
-Alexander duality deals with the following situation. Let $S^n$ denote the $n$-dimensional sphere. Note that you can think of the $n$-sphere as being standard $n$-dimensional space $\mathbb{R}^n$ with an extra point "at infinity" added in (take a look at stereographic projection if this is unfamiliar to you). The upshot of this is that working with the $n$-sphere is not too far away from the situation you are interested in. Now take some subspace $X$, like the $m$-balls you are removing. Or take anything else; a solid torus of whatever genus you like, higher-dimensional manifolds, etc. Let $Y$ denote the complement of $X$ inside $S^n$. Then, in very informal terms, Alexander duality asserts that homologically, $Y$ is exactly as complicated as $X$. Somewhat more technically, Alexander duality asserts that for all $q$, there is an isomorphism
-$$
-\tilde H _q(Y) \cong \tilde H^{n-q-1}(X)
-$$
-between the reduced homology of $Y$ and the reduced cohomology of $X$ (for whatever coefficient group we choose). If the meaning of this is somewhat unfamiliar to you, you might be more interested to know that this says that the Betti numbers of $Y$ can be computed from those of $X$. In the range $1 \le q \le n-2$, a consequence of Alexander duality is that
-$$
-B_q(Y) = B_{n-q-1}(X),
-$$
-where $B_k(Z)$ indicates the $k^{th}$ Betti number of a space $Z$. If the piece $X$ that you are removing has $p$ components, this also implies that $B_{n-1}(Y) = p-1$.
-As I remarked before, Alexander duality says that the topology of a space obtained by cutting a piece out is, from the point of view of homology (which encapsulates Betti numbers) exactly as complicated as the piece being removed.
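-For a concrete instance of these formulas, take $X = K$ to be a knot in $S^3$, so $n = 3$ and $K \cong S^1$ as a topological space, and let $Y$ be the knot complement. Alexander duality gives
-$$
-\tilde H_1(Y) \cong \tilde H^{1}(S^1) \cong \mathbb{Z}, \qquad \tilde H_2(Y) \cong \tilde H^{0}(S^1) = 0,
-$$
-so $B_1(Y) = 1$ and $B_2(Y) = 0$ no matter how the knot is tied: homology alone cannot distinguish one knot complement from another.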
It is interesting to note that from the point of view of homotopy theory, this is incredibly far from being true. A basic technique in knot theory is to study a knot by studying the topology of its complement in $\mathbb R^3$ (or $S^3$). Homologically, Alexander duality says this is very boring, but from the perspective of homotopy theory, the story is very interesting indeed.<|endoftext|> -TITLE: Seeking analytic proof for $\sum_{n=r}^\infty \frac{1}{n!}\left[ n-1 \atop r-1 \right] = 1$ -QUESTION [8 upvotes]: In Blom, Holst, Sandell, "Problems and snapshots from the world of probability", section 9.4, a model of records is discussed: - -Elements are ordered in a sequence of increasing length according to some rule that leads to exchangeability. At each step, the element is inserted at its proper place among those already ordered. If an element comes first, it is called a record. The position of $r$-th record is denoted $N_r$. Naturally $N_1 = 1$. - -The following probability is then derived: -$$ - \mathbb{P}(N_r = n) = \frac{1}{n!} \left[ n-1 \atop r-1 \right] I\left( n \geqslant r\right) -$$ -where $\left[ n \atop r \right]$ denotes the unsigned Stirling number of the first kind. -Question: How does one analytically prove that this probability mass function is normalized for all integer $r \geqslant 2$, i.e. -$$ - \sum_{n=r}^\infty \frac{1}{n!} \left[ n-1 \atop r-1 \right] = 1 -$$ -Using $\left[ n-1 \atop 1 \right] = (n-2)!$, and $\left[ n-1 \atop 2 \right] = H_{n-2} \cdot (n-2)!$, the normalization follows for special cases of $r=2$ and $r=3$. - -REPLY [6 votes]: Define $$g_r(y) = \sum_{n=r}^\infty \frac{1}{n!}\begin{bmatrix} n-1 \cr r-1\end{bmatrix}y^n$$ -Define $$G(y,z) = \sum_{r=1}^\infty (-1)^r g_r(y)z^{r-1}$$ -Swapping terms we get: -$$G(y,z) = \sum_{n=1}^\infty \frac{(-1)^n y^n}{n!}\sum_{r=1}^{n} (-1)^{n-r} \begin{bmatrix} n-1 \cr r-1\end{bmatrix}z^{r-1}$$ -The inner sum is just $(z)_{n-1}=\frac{(1+z)_n}{1+z}$, so we get: -$$G(y,z) = \frac{1}{1+z} \sum_{n=1}^\infty (-1)^ny^n \frac{(1+z)_n}{n!}$$ -Which is just: -$$G(y,z) = \frac{1}{1+z} ((1-y)^{1+z}-1)$$, -where $|y|<1$ and $|z|<1$ -Now, for any real $z\in(-1,1)$, $\lim_{y\to 1} G(y,z) = \frac{-1}{1+z} = \sum_{r=1}^\infty (-1)^{r}z^{r-1}$ so it seems that $g_r(y)\to 1$ as $y\to 1$, but that last step might be hard to formalize. - -REPLY [6 votes]: On the one hand, -$$\binom{u}{n}=\frac{1}{n!}\sum_{l=0}^ns(n,l)u^l=\frac{1}{n!}\sum_{l=0}^n \left[n\atop l\right](-1)^{n-l}u^l. \tag{1}$$ -So on the other hand, -$$-\frac{1}{z}\int_0^z (1-\tau)^{-u}d\tau=\sum_{n=0}^\infty \frac{(-z)^n}{n+1}\binom{-u}{n}=\sum_{n=0}^\infty\sum_{l=0}^n\frac{1}{(n+1)!}\left[n\atop l\right]z^nu^l. \tag{2}$$ -And on your third hand, -$$\frac{1}{z}\int_0^z (1-\tau)^{-u}d\tau=\frac{1}{z}\frac{(1-z)^{1-u}-1}{1-u}. \tag{3}$$ -Plugging in $z=1$, -$$\sum_{k=0}^\infty u^k=\sum_{l=0}^\infty \left(\sum_{n=l}^\infty \frac{1}{(n+1)!}\left[n\atop l\right]\right)u^l. \tag{4}$$ -Equating coefficients gives the result. (I'll have time to clarify/elaborate/salvage later. My hand ... or hands were forced by the answer of Mr. Andrews!)<|endoftext|> -TITLE: Is this function convex when the input vector is positive? 
-QUESTION [5 upvotes]: I am wondering if $f(\mathbf{x})$ is convex as a function of a vector $\mathbf{x}$ of $n$ positive reals:
-$$f(\mathbf{x})=\operatorname{Tr}[(\mathbf{A}+\operatorname{diag}(\mathbf{x}))^{-1}]$$
-where $\mathbf{A}$ is a positive-definite $n\times n$ matrix of reals, $\operatorname{diag}(\mathbf{x})$ returns a diagonal matrix with values from $\mathbf{x}$ placed on the diagonal, and $\operatorname{Tr}[\mathbf{M}]$ is the trace of matrix $\mathbf{M}$.
-I understand that I can prove convexity by taking the second derivative, but I am not sure how to do it in the matrix case. I also know that for positive-definite $\mathbf{A}$, $g(\mathbf{y})=\frac{1}{2}\mathbf{y}^T\mathbf{A}\mathbf{y}+\mathbf{y}^T\mathbf{b}$ is strictly convex, but I am not sure how that applies here (if it applies at all).
-
-REPLY [3 votes]: Consider $g(t) = {\bf u}^T (C + tB)^{-1} \bf u$ for any real vector $\bf u$, where $C$ and $B$ are real symmetric matrices and $C+tB$ is positive definite. Note that the first derivative of $(C + tB)^{-1}$ with respect to $t$ is $- (C + tB)^{-1} B (C + tB)^{-1}$, and the second derivative is $2 (C+tB)^{-1} B (C+tB)^{-1} B (C+tB)^{-1}$. So $$g''(t) = 2 {\bf u}^T (C+tB)^{-1} B (C+tB)^{-1} B (C+tB)^{-1} {\bf u} = 2 {\bf v}^T (C+tB)^{-1} \bf v$$ where ${\bf v} = B (C+tB)^{-1} \bf u$. Since $(C+tB)^{-1}$ is positive definite when $C+tB$ is positive definite, $g''(t) \ge 0$, i.e. $g$ is convex as long as $C+tB$ is positive definite.
-Now $\operatorname{Tr}[(C+tB)^{-1}] = \sum_j {\bf u}_j^T (C+tB)^{-1} {\bf u}_j$ for an orthonormal basis ${\bf u}_j$, so this is also convex. And
-$f(t {\bf x} + (1-t) {\bf y})$ is of this form, with $C = A + \text{diag}({\bf y})$ and $B = \text{diag}({\bf x} - {\bf y})$, when $0 \le t \le 1$ (note that for $0 \le t \le 1$, $t {\bf x} + (1-t) {\bf y}$ has nonnegative entries, and the sum of a positive definite matrix and a nonnegative diagonal matrix is positive definite). That says $f$ is convex.<|endoftext|>
-TITLE: Random Variable and Simple Function Measurability
-QUESTION [5 upvotes]: Problem: As I've mentioned before, I am studying Probability and Measure Theory on my own, and I am using Resnick as the main text. I've been working through Ch. 3 on random variables, measurable maps, etc. and am somewhat stuck on one of the exercises.
-Suppose we have a space $\Omega$ and a countable partition $\{B_n, n\geq 1\}$ of that space. Next, we define the sigma field $\mathcal{B}=\sigma(B_n,n\geq 1)$.
-We are asked to show that a function $X:\Omega\rightarrow (-\infty,\infty]$ is $\mathcal{B}$-measurable iff it is of the form:
-$\sum_{i=1}^\infty c_i 1_{B_i}$, for constants $\{c_i\}$.
-Understanding/Attempt at a solution:
-If I understand the problem (a large, but variable "if"), we need to prove that a random variable is measurable iff it can be expressed as a simple function, because simple functions are measurable. So, in effect, the random variable can only take values that correspond to some combination of disjoint sets that partition the space?
-In my attempt, I start with $B_i \in \mathcal{B}$ and show that $X(\omega)=1_{B_i}(\omega)$ is measurable because $\varnothing, B_i^c, \Omega \in \mathcal{B}$.
-I am not sure what to do next, although it seems like I should be trying to show that the inverse image of $(c,\infty]$ under $X$ should be a union of elements of $\mathcal{B}$.
-Even then, it only feels like I am addressing the "if" part, and I'm not sure what to do with the "only if" part of the exercise.
-As always, thank you for any help you can provide!
- -REPLY [6 votes]: Let's work through this in pieces. Since you've shown your work and are doing this for self-study, I'm going to be overly verbose in the interest of being explicit and complete (hopefully). -What does $\mathcal B$ look like? -Since $\{B_n\}$ is a countable partition then we know intersections are empty, hence in $\sigma(B_n,n \geq 1)$, and by closure under countable unions, we know that for each $I \subset \mathbb N$, $\bigcup_{i \in I} B_i$ must also be in the $\sigma$-algebra. So a good first guess might be to look at the system -$$ -\mathcal F := \big\{ \bigcup_{i \in I} B_i: I \subset \mathbb N \big\} \>. -$$ -Clearly $\mathcal F \subset \mathcal B$ as described above. Check the axioms to see that $\mathcal F$ is, itself, a $\sigma$-algebra and conclude that $\mathcal F = \mathcal B$. -$X$ is a random variable. -Suppose $\newcommand{\o}{\omega} X(\o) = \sum_n c_n 1_{B_n}(\o)$. Then, since $\o$ is in exactly one $B_n$, $X$ has (at most) countable range and we can check measurability by looking at the sets $\{\o:X(\o) = x\}$. But, -$$ -\{\o:X(\o) = x\} = \bigcup_n \{B_n : c_n = x\} \in \mathcal B \>. -$$ -Thus, $X$ is a random variable. -All random variables look like $X$. -Suppose $X$ is a random variable measurable with respect to $\mathcal B$. -The key is to work with the "right" sets. For each $B_n$, choose an $\o_n \in B_n$ and set $x_n = X(\o_n)$. Then $X^{-1}(\{x_n\})$ is measurable since $\{x_n\} \in \mathcal B(\mathbb R)$. But, this implies that there exists $I \subset \mathbb N$ such that -$$ -X^{-1}(\{x_n\}) = \bigcup_{i \in I} B_i \>. -$$ -Since $X(\o_n) = x_n$, we have that $B_n \subset \bigcup_{i \in I} B_i$ and so for all $\o \in B_n$, $X(\o) = X(\o_n) = x_n$. In other words, $X$ is constant on each $B_n$. -There are only countably many $B_n$, so $X$ can take at most a countable number of values. Let these (distinct) values be $a_1, a_2,\ldots$ and take $A_n = X^{-1}(\{a_n\})$. Then, $A_n$ are disjoint, partition $\Omega$ and are composed of the union of sets $B_n$ such that each such union is pairwise disjoint. For example, for $n \neq m$, -$$ -A_n := X^{-1}(\{a_n\}) = \bigcup_{i \in I_n} B_i \>, -$$ -and -$$ -A_m := X^{-1}(\{a_m\}) = \bigcup_{i \in I_m} B_i \>, -$$ -and $I_n \bigcap I_m = \varnothing$. -Now we have enough to explicitly construct $X$. For a given $I_n$, set $c_i = a_n$ for all $i \in I_n$. Then, $X = \sum_i c_i 1_{B_i}$ as desired. -Side note: Resnick is a quite good book, particularly for self-study. However, be aware that there are a nonnegligible number of exercises that appear in chapters for which the necessary theory to solve them has not yet been introduced or developed. (I'm not speaking of this particular exercise, though.)<|endoftext|> -TITLE: What are "Lazard" sheaves? -QUESTION [9 upvotes]: Early in Categories, Allegories, by Freyd and Scedrov (p.12, in the section on basic examples) there appears the following example: - -Let $\mathcal{LH}$ be the category whose objects are topological spaces and whose morphisms are continuous maps that are LOCAL HOMEOMORPHISMS: … $\mathcal{LH}/Y$ is the category of LAZARD SHEAVES over Y…. Its objects may be viewed as continuously $Y$-indexed families of pairwise-disjoint sets, its maps are $Y$-indexed families of functions which, in concert, are continuous, … - -On the following page, the authors opine: - -Lazard sheaves and functor categories provide the two most important families of examples in Geometric Logic. 
- -"Lazard sheaves" are not defined anywhere (except, I think, implicitly as the objects of $\mathcal{LH}/Y$) and I cannot find this terminology used anywhere else. Google Book Search yields this one book and no others. -What are Lazard sheaves? The same as just regular sheaves? Who's Lazard? Are Freyd and Scedrov using nonstandard terminology? - -REPLY [9 votes]: Yes, judging by the Universal Measure of Standard Terminology™, Freyd and Ščedrov use non-standard terminology: Googling for "Lazard sheaves" currently gives me four hits: two of them lead to Freyd and Ščedrov's book, the other two lead to the present thread. -The word “Cartan–Lazard sheaf” appears on page 129 of Dieudonné's A History of Algebraic and Differential Topology, 1900 – 1960, Springer 2009 edition. In this book, Dieudonné distinguishes between Leray's definition and the Cartan–Lazard definition of sheaves and gives details on their relation in §7B The Concept of Sheaf, page 123. - -As I mentioned in a comment, - -[...] it was [very likely] Michel Lazard who introduced the formulation of sheaves in terms of étalé spaces. In the notes to his Séminaire, Faisceaux sur un espace topologique. I., Séminaire Henri Cartan, 3 (1950-1951), Exposé No. 14, Henri Cartan deviates from his earlier approach and attributes the new definition to Lazard. See also the “Prologue” of Mac Lane–Moerdijk, Sheaves in Geometry and Logic on page 1. - -The history of sheaves is covered in many books. Apart from the already mentioned section of Dieudonné, two nice accounts are: - -Christian Houzel, A Short History: Les débuts de la théorie des faisceaux in Kashiwara–Schapira, Sheaves on Manifolds, Springer Grundlehren Vol. 292, 1990, p.7–22. -Ralf Krömer, Development of the Sheaf concept until 1957, Section 3.2 in his book Tool and Object: A History And Philosophy of Category Theory, Springer 2007. - - -Freyd and Ščedrov define Lazard sheaves over a topological space $Y$ via the slice category $\mathscr{LH}/Y$, and as was already pointed out, the resulting notion is better known under the name espace étalé or étalé space, see the nLab entry and the Wikipedia page on sheaves. -An étalé space $(X,p)$ over a topological space $Y$ thus is a local homeomorphism $p\colon X \to Y$ and a morphism $f\colon(X,p) \to (X',p')$ is a local homeomorphism $f\colon X \to X'$ such that $p'f = p$. It is easy to check that it suffices for $f$ to be continuous, the local homeomorphism property is a consequence of continuity. -These are the same as the usual sheaves in that there is an equivalence of categories between étalé spaces and the usual sheaves. This is detailed in (1.37.) of Freyd and Ščedrov. - -Added: There is some remaining uncertainty as to whether it really was Michel Lazard (as opposed to another Lazard) who introduced the notion. -Henri Cartan only attributes the concept to a certain Lazard (it was customary among Bourbakistes to mention other mathematicians by last name only). Unfortunately, the notes for the talks 12–17 of the first Séminaire Cartan (1948–1949) seem to be lost from the public records. The table of contents contains the following note: - -Houzel speaks of M. Lazard in the last paragraph on page 12. Incidentally, it is also mentioned there that Godement coined the term espace étalé. See also the footnotes on page 110 of Krömer's book. -M. 
Lazard participated in the Séminaire Cartan in the early fifties, see his exposé on Algèbres Affines from the eighth seminar (1955–1956).<|endoftext|>
-TITLE: Motivation for stable curves
-QUESTION [15 upvotes]: I was looking at Deligne-Mumford's paper on the irreducibility of the space of curves of a given genus, and it seems that they generalize the notion of a smooth curve to a "stable curve." I'm a little confused why this is a natural thing to do. I understand that this is supposed to be some kind of compactification of the usual space of curves, but I can't see why it is a natural property to consider. Is it obvious that things like having nodal singularities are closed properties? Is there an easier-to-read source than Deligne-Mumford's paper where I can learn more about this?
-(For background: I'm trying to understand a bit about what $M_{1,1}$ has to do with modular forms, which I'm only vaguely familiar with from an analytic perspective.)
-
-REPLY [14 votes]: You have to be careful when you ask about a "closed condition", because the question is "closed in what"? E.g. the plane curves of given degree admitting at worst nodal singularities are not closed in the linear system of all plane curves of that degree. (Consider e.g. the family $y^2 = x^3 + t x^2$, where $t$ is a parameter; this is a nodal family (when $t \neq 0$) which has a cuspidal curve (at $t = 0$) as a limit.)
-The key fact which justifies the definition of stable curve is the semi-stable reduction theorem, which says that any family of smooth projective curves over the punctured open disk (or more generally, over the fraction field of a DVR) can be extended to a semistable family over the unpunctured disk (or, more generally, over the DVR) after making a finite base-change.
-Here semistable means that every geometric singular point in the fibre over the "filled-in point" (which I'll take to be $t = 0$, where $t$ is the coordinate on the disk,
-or more generally the uniformizing parameter in the DVR) has a formal neighbourhood isomorphic to $x y = t$.
-This is a fundamental theorem, which takes time and practice to understand, and which is closely related to resolution of singularities: indeed, you can always extend the family of curves over the puncture in some manner (e.g. by closing up in some ambient projective space), and you can think of semi-stable reduction as telling you that you can resolve the singularities in the total space of the resulting family in a particular way, so that they are well-adapted to the projection map to the disk.
-E.g. Imagine that locally you had the equation $x y = t^2$, which is smooth over $t \neq 0$, but has a singular fibre over $t = 0$. This is not semistable at $t = 0$ (the point $(0,0,0)$ is a singular point of the total space of the family, whereas the total space of a semistable family is smooth), but if you blow up the total space of the family at $(0,0,0)$, you can check that what you get is now semistable (and since the blow-up is occurring in the fibre over $t = 0$, nothing changes along the smooth family where $t \neq 0$).
-E.g. Consider the equation $x^2y = t$. This is not semistable: the fibre over $t = 0$ is non-reduced, whereas the singular fibres of a semistable family are always reduced. To make it semistable, we first base-change to the disk $t = s^2$, to get $x^2y = s^2$.
We then normalize (this doesn't change anything when $s \neq 0$, because the fibres, and so also the total space, are smooth there and hence normal): since $(s/x)^2 = y$, we find that $z:= s/x$ is well-defined on the normalization, and satisfies the equation $x z = s$; thus we have now obtained a semistable model.
-In general, you have to alternate blow-ups and normalizations (after ramified base-changes) to obtain a semistable model.
-The semistable reduction theorem implies that the moduli stack of stable curves is proper (the fact that we might have to make a base-change of our DVR reflects the stacky nature of the situation).
-
-General remarks (maybe these should have come first!):
-We know that moduli stacks of smooth objects are not proper, so to obtain proper moduli stacks, we have to allow some sort of degeneration. Semistable degenerations are a natural class of explicit and simple singular degenerations which give rise to a proper moduli stack.
-Note that there is another class of singular degenerations which sometimes works well, namely those with isolated quadratic singularities (this is the theory of Lefschetz pencils). These coincide with semistable degenerations in the case of curves (which is what your question is about).
-Both semistable degenerations and Lefschetz pencils behave well with regard to understanding the behaviour of cohomology in the family (this is Picard--Lefschetz theory in the second case, and the Rapoport--Zink spectral sequence in the semistable case). Deligne uses Picard--Lefschetz theory in his proof of the Weil conjectures, but semistable degenerations are often more natural from a motivic point of view.<|endoftext|>
-TITLE: $f(m + f(n)) = f(f(m)) + f(n)$
-QUESTION [5 upvotes]: I found this one in the list of IMO'96 (3) problems and decided to have a go at it, but could not complete the solution. So $m$ and $n$ are non-negative integers and $f$ takes values in the same set:
-$$f(m + f(n)) = f(f(m)) + f(n)$$
-Let $m=n=0$:
-$$f(f(0))=f(f(0))+f(0)$$
-hence
-$$f(0)=0$$
-Now let $m=0$:
-$$f(f(n))=f(n)$$
-therefore:
-$$f(m+f(n))=f(m)+f(n)$$
-Now consider the following possibilities
-1) $f(m)=f(n) \implies m=n$
-$$f(m+f(n))=f(m)+f(n)$$
-$$f(n+f(m))=f(n)+f(m)$$
-By symmetry of the RHS:
-$$f(m+f(n))=f(n+f(m))$$
-Hence, by assumption
-$$m+f(n)=n+f(m)$$
-$$f(n)-n=f(m)-m$$
-Since the two sides are independent they must both be equal to a constant
-$$f(n)-n=C$$
-$$f(0)=0 \implies C=0 \implies f(n)=n$$
-2) It is the second part where I got confused. I assumed $f(m)=f(n)$ where $m \ne n$, so that
-$$f(m+f(n))=2f(n)$$
-then figured that since the RHS does not depend on $m$ whereas the LHS does, then $f=C$, and deduced from the original equation that $C$ must be 0. But then I realized that in fact here I have some particular values of $m$ and $n$ and not just something arbitrary. If anyone can point me in the right direction, I would be grateful.
-
-REPLY [5 votes]: If $f$ is a solution to $f(m+f(n)) = f(f(m)) + f(n)$ then either $f \equiv 0$ or there is some $k\in\mathbb N$ and some $a_0,\ldots,a_{k-1} \in \mathbb N_0$ with $a_0 = 0$ s.t. $f(kn+r) = k(n+a_r)$ for all $n\in \mathbb N_0, 0 \leq r < k$.
-It is easy to verify that these functions are indeed solutions of the functional equation. To prove that every solution is of this kind, we first make some observations about fixed points.
-Claim: Let $f$ be a solution and denote by $F$ the set of fixed points of $f$, i.e. $F = \{ n\in \mathbb N_0 : f(n) = n \}$.
Then the following statements hold:
-
-$0 \in F$
-$F$ is closed under addition, i.e. if $x,y\in F$, then also $x+y \in F$.
-If $x \in F$ and $x+y \in F$, then also $y \in F$.
-$F = k\mathbb N_0$ for some $k \in \mathbb N_0$.
-
-Proof:
-(1) follows from $f(0) = 0$. For (2), if $x,y\in F$ then from $f(x+f(y)) = f(x)+f(y)$ we get $f(x+y) = x+y$, hence $x+y \in F$. For (3), if $x,x+y\in F$ then
-$$y+x = f(y+x) = f(y+f(x)) = f(y)+f(x) = f(y) + x,$$
-hence $f(y) = y$. (4) now follows from the previous statements: If $0$ is the only fixed point then $F = \{0\} = 0\mathbb N_0$ and we are finished. Otherwise, there is a smallest non-zero fixed point $k$. By (2), $k\mathbb N_0 \subseteq F$. If $x$ is any other fixed point then write $x = kn+r$ with $n\in \mathbb N_0$ and $0 \leq r < k$. By (3), we have $r \in F$. But $k$ is the smallest non-zero fixed point and $r < k$, so $r = 0$ and $x = kn \in k\mathbb N_0$.
-(This was basically the proof that $\mathbb Z$ is a principal ideal domain.)
-Now observe that $f(\mathbb N_0) \subseteq F$, since $f(f(n)) = f(n)$ for all $n$. Therefore, if $F = \{0\}$ then $f(n) = 0$ for all $n$, i.e. $f$ is constantly zero.
-Otherwise, $F = k\mathbb N_0$ for some non-zero $k$. Now let $a_r' := f(r)$ for $0 \leq r < k$. Then $a_0' = 0$, and for all $n \in \mathbb N_0$ and $0\leq r < k$ we have
-$$f(kn+r) = f(r + f(kn)) = f(r) + f(kn) = f(r) + kn = kn + a_r'.$$
-Furthermore, $kn+a_r' \in f(\mathbb N_0) \subseteq k\mathbb N_0$, hence $k|a_r'$. Write $a_r' = k a_r$, then $f(kn+r) = k(n+a_r)$, as claimed.<|endoftext|>
-TITLE: Are the functions $\sin^n(x)$ linearly independent?
-QUESTION [13 upvotes]: The following problem is from Golan's linear algebra book. I have posted a proposed solution in the answers.
-Problem: For $n\in \mathbb{N}$, consider the function $f_n(x)=\sin^n(x)$ as an element of the vector space $\mathbb{R}^\mathbb{R}$ over $\mathbb{R}$. Is the subset $\{f_n:\ n\in\mathbb{N}\}$ linearly independent?
-
-REPLY [4 votes]: The Wronskian of $\sin(t),\ldots,\sin^n(t)$ is
-$$
-1! \, 2! \, 3! \cdots (n-1)! \, \sin^n(t) \cos^{n(n-1)/2}(t),
-$$
-which is not identically zero.<|endoftext|>
-TITLE: Is $m=n+1$ the largest $m$ such that $S_m$ has a faithful action on $\mathbb{Z}^n$?
-QUESTION [5 upvotes]: I can write down a faithful action of $S_{n+1}$ on $\mathbb{Z}^n$. That is, I know of a way to explicitly give a homomorphism from $S_{n+1}$ to $GL(n,\mathbb{Z})$ that has a trivial kernel. An example of this is below, although I'm sure there are many others.
-Can this ever be done with $S_{n+2}$, maybe for the right value of $n$? Are there larger symmetric groups than $S_{n+1}$ that can act faithfully on $\mathbb{Z}^n$ if $n$ has the right value? Or will $S_{n+2}$ never act faithfully on $\mathbb{Z}^n$?
-The orders of maximal finite subgroups of $GL(n,\mathbb{Z})$ for $n=2,3,4,5$ that are tabulated here imply that for these $n$ values at least, the answer is negative, since $(n+2)!$ never divides the orders of these subgroups.
-(One example of an $S_{n+1}$-action is to identify $\mathbb{Z}^n$ with polynomials of degree $n-1$ or less, permute the $n+1$ coefficients of $p(x)\cdot(x-1)$, and then divide by $(x-1)$, which will still be a factor after the permuting.)
-
-REPLY [3 votes]: Here's a reasonably elementary argument that shows that $m \le 2n+3$. By Bertrand's postulate there's a prime $p$ between $n+2$ and $2n+3$. Then $S_p$ contains an element of order $p$. The characteristic polynomial of an integer matrix of order $p$ is necessarily divisible by the cyclotomic polynomial $\Phi_p(x) = x^{p-1} + \cdots
+ 1$, which is irreducible, so if such a matrix is $n \times n$ then we conclude that $p-1 \le n$, which contradicts $p \ge n+2$. Hence $S_p$ can't embed into $\text{GL}_n(\mathbb{Z})$.
-(Asymptotically we actually expect that there is a prime between $n$ and $n + O(\sqrt{n})$, so $m$ gets reasonably close to $n$ for large $n$.)
-Here's a silly argument that shows that $m \le n+6$ for $n \ge 2$. Conditional on Goldbach's conjecture (never thought I'd see myself write that!), one of $\{ n+3, n+4, n+5, n+6 \}$ is a sum of two distinct primes $p$ and $q$. Write $m = p+q$. Thus $S_m$ contains the product of a cycle of order $p$ and a cycle of order $q$. The characteristic polynomial of an integer matrix with this property is divisible by the product $\Phi_p(x) \Phi_q(x)$, so if such a matrix is $n \times n$ then we conclude that $p + q - 2 \le n$, which contradicts $p + q \ge n+3$. Hence $S_m$ can't embed into $\text{GL}_n(\mathbb{Z})$.
-According to Wikipedia, the smallest irreducible representations of $S_m$ of dimension greater than $1$ have dimension at least $m-1$ for $m \ge 7$, so your conjecture is correct for all $n \ge 5$. In fact this result shows that $S_{n+2}$ cannot embed into $\text{GL}_n(\mathbb{C})$, let alone $\text{GL}_n(\mathbb{Z})$.<|endoftext|>
-TITLE: Check whether sets are open, closed, compact or bounded
-QUESTION [5 upvotes]: For each of the following sets decide whether it is open, closed, compact or bounded:
-(a) $\left\{ (x,y)\in\mathbb{R}^2 : \sin\left( \sin\left( \cos(xy)\right)\right)=\sin\left(\sin(x+y) \right)\cdot y , \ x\ge -1 , \ y\le x \right\}$
-(b) $\left\{ (x,y)\in\mathbb{R}^2 : e^{x+y^2}=\ln \frac{1}{1+x^2+y^2} \right\}$
-The above formulas are so complicated that I guess there is some smart way to test everything without solving equations. Can anybody help?
-
-REPLY [4 votes]: a) closed
-b) closed
-In both cases you have continuous functions $f(x,y),g(x,y)$ on both sides of the "=", and so $f(x,y)-g(x,y)$ is also continuous. You can rewrite your sets as $\{(x,y)\mid f(x,y)-g(x,y)=0\}$. Since the inverse image of $0$ under a continuous map is closed, you have that your sets are closed.
-
-REPLY [3 votes]: To continue answering the other questions, it's clear that the set in (b) is empty since $e^{x+y^2}>0$ but $\ln(\frac{1}{1+x^2+y^2})\leq 0$. That would mean it is open, closed, compact and bounded.
-The only tricks I saw to (a) so far are: Since $\mathbb{R}^2$ only has the two trivial clopen sets, it would not be open.
-(Withdrawing second observation, I don't think it's correct.) If you can show there is an upper bound to the $x$'s, and bounds on the $y$'s, then you will have shown it is bounded, and with closedness, this implies compactness. If it is unbounded it is not compact.<|endoftext|>
-TITLE: How to correctly apply Newton-Raphson method to Backward Euler method?
-QUESTION [5 upvotes]: I'm solving a system of stiff ODEs; at first I wanted to implement BDF, but it seemed to be a quite complicated method, so I decided to start with the Backward Euler method. Basically it says that you can solve an ODE:
-$y'=f(t,y)$, with initial condition $y(t_0)=y_0$
-Using the following approximation:
-$y_{k+1}=y_k+hf(t_{k+1},y_{k+1})$, where $h$ is a step size on parameter $t$
-The Wikipedia article says that you can solve this equation using the Newton-Raphson method, which is basically the following iteration:
-$x_{n+1}=x_n-\frac{g(x_n)}{g'(x_n)}$
-So, the question is how to correctly mix them together? What should the initial guess $x_0$ and the function $g$ be?
-Also $f$ is quite complex in my case and I'm not sure if it is possible to find its derivative analytically. I want to write an implementation of it by myself, so predefined Mathematica functions wouldn't work.
-
-REPLY [7 votes]: Your aim is to solve the following equation for $y_{k+1}$: $$ g(y_{k+1}):=y_{k+1}-y_k-hf(t_{k+1},y_{k+1})=0,$$
-where $f$ is a known function and $y_k, t_{k+1}$ and $h$ are known values. This gives you the $g$ for a Newton-Raphson method. As an initial guess, I'd suggest that you use one explicit (forward-Euler-style) predictor step to calculate
-$$ \hat{y}_{k+1} := y_k+hf(t_{k+1},y_k),$$
-and use $\hat{y}_{k+1}$ as your initial guess for $y_{k+1}$.<|endoftext|>
-TITLE: For an ideal $I$ of $K[x]$, $K[x]/I$ is finitely generated iff $I$ is nonnull
-QUESTION [5 upvotes]: In a book on rational series, a blunt statement is made to the effect that:
-
-For $K$ a field, $I$ an ideal of $K[x]$, $K[x]/I$ is finitely generated iff $I$ is nonnull.
-
-The statement elaborates with the not-so-enlightening (to me) sentence
-
-This is true since a nonnull ideal in $K[x]$ always has a finite codimension ⁽¹⁾, and the latter is equal to the degree of any generator of this ideal ⁽²⁾.
-
-I gather that if (2) is true, then $K[x]$ may be finitely generated if $K$ itself is finitely generated, but this is as far as I can go.
-As for (1), I have a feeling of why this is true, but no proof.
-Thus I need help in proving the whole statement :-) Thanks !
-
-REPLY [2 votes]: When one writes finitely generated, it is always good to specify the particular ring for which this is true.
-E.g. $K[x]$ is always finitely generated as a module over itself (indeed, any ring with unit $R$ is finitely generated --- by $1$ --- as a module over itself).
-The statement in your question is about finite generation over $K$, and one could replace finitely generated by finite dimensional (since a $K$-module is the same as a $K$-vector space, and being finitely generated is the same as being finite-dimensional).
-So the statement could be made simpler and clearer by writing
-
-For $K$ a field and $I$ an ideal in $K[x]$, the quotient $K[x]/I$ is finite-dimensional as a $K$-vector space if and only if $I$ is non-null.
-
-The proof of the if direction is as indicated in the other two answers, and the conclusion they lead to is that if $I$ (which is necessarily principal --- the ring $K[x]$ is a PID) is generated by a degree $n$ polynomial, then $K[x]/I$ has dimension $n$ over $K$.
-The proof of the only if direction is easy: if $I = 0$, then $K[x]/I = K[x]$, and it's easy to see that $K[x]$ is infinite dimensional over $K$.<|endoftext|>
-TITLE: Examples demonstrating that the finitely generated hypothesis in Nakayama's lemma is necessary
-QUESTION [14 upvotes]: Recall that Nakayama's lemma states that
-
-Let $R$ be a commutative ring with unity, and let $J$ be the Jacobson radical of $R$ (the intersection of all the maximal ideals of $R$). For any finitely generated $R$-module $M$, if $IM=M$ for some ideal $I\subseteq J$, then $M=0$.
-
-I was trying to come up with a simple / illustrative example of the above situation where $M$ is non-zero and not finitely generated, but $IM=M$ for some $I\subseteq J$.
-The only example I managed to come up with was
-$$R=k[[\overline{x}_0,\overline{x}_1,\overline{x}_2,\ldots]]=k[[x_0,x_1,x_2,\ldots]]/(x_0^2,x_1^2-x_0,x_2^2-x_1,\ldots)$$
-where $k$ is a field, and $I=M=(\overline{x}_0,\overline{x}_1,\overline{x}_2,\ldots)\neq0$.
Since $k[[x_0,x_1,x_2,\ldots]]$ is a local ring with maximal ideal $(x_0,x_1,x_2,\ldots)$, we have that $R$ is a local ring with maximal ideal $I$, so that $I$ is the Jacobson radical of $R$, and $$IM=I^2=(\overline{x}_0^2,\overline{x}_1^2,\overline{x}_2^2,\ldots,\overline{x}_0\overline{x}_1,\ldots,\overline{x}_1\overline{x}_2,\ldots)=(0,\overline{x}_0,\overline{x}_1,\ldots,\overline{x}_0\overline{x}_1,\ldots,\overline{x}_1\overline{x}_2,\ldots)\supseteq I$$
-so that $IM=I^2=I=M$.
-Is there an easier example - maybe with $R$ Noetherian? Also, what is the geometric picture here? As I understand it, letting $M$ be non-finitely generated corresponds to looking at non-coherent sheaves, which I don't have a whole lot of intuition for - even less than my novice understanding of coherent sheaves.
-
-REPLY [18 votes]: If $(R,\mathfrak m)$ is a local domain which is not a field, then for its fraction field $K$ we have $\mathfrak m K=K$.
-Amusingly, this proves that $K$ is not a finitely generated $R$-module.<|endoftext|>
-TITLE: Integrals and Dirac delta measures
-QUESTION [5 upvotes]: Why is it the case that in general $\int X \,d\delta_{k} = X(k)$?
-I can't seem to work it out but it seems like a useful fact?
-
-REPLY [5 votes]: Here are two methods of how to show it. Someone correct me if I made some mistakes.
-Method 1. It can be proven in steps, starting from simple functions and moving on to measurable functions. (Note that since the whole powerset is the collection of $\delta_{k}$-measurable sets, every function is $\delta_{k}$-measurable.)
-Assume first that $X$ is a simple function, and let $\sum_{i=1}^{n}a_{i}\chi_{A_{i}}$ be its normal representation. Then
-\begin{align*}
-\int X\,d\delta_{k}=\sum_{i=1}^{n}a_{i}\int \chi_{A_{i}}\,d\delta_{k}=\sum_{i=1}^{n}a_{i}\delta_{k}(A_{i})=\sum_{i=1}^{n}a_{i}\chi_{A_{i}}(k)=X(k),
-\end{align*}
-since $\delta_{k}(A_{i})=1$ if $k\in A_{i}$ and $0$ otherwise, which is indeed the same as the indicator function of $A_{i}$ at the point $k$ (which is indeed the key point to notice).
-Assume then that $X$ is non-negative and measurable. There now exists a nondecreasing sequence of simple functions $(\psi_{i})_{i=1}^{\infty}$ so that $\psi_{i}\to X$ pointwise. Using the monotone convergence theorem $(*)$ and the previous step $(**)$ we obtain:
-\begin{align*}
-\int X\,d\delta_{k} &=\int \lim_{i\to\infty}\psi_{i}\,d\delta_{k}\overset{(*)}{=}\lim_{i\to\infty}\int \psi_{i}\,d\delta_{k}\overset{(**)}{=}\lim_{i\to\infty} \psi_{i}(k)=X(k).
-\end{align*}
-The result then finally follows by decomposing any measurable function $X$ as $X=X^{+}-X^{-}$ and applying the previous step to both parts.
-Method 2: Use the fact that $X=X(k)$ holds $\delta_{k}$-almost everywhere and integrate.<|endoftext|>
-TITLE: Why does mathematical convention deal so ineptly with multisets?
-QUESTION [135 upvotes]: Many statements of mathematics are phrased most naturally in terms of multisets. For example:
-
-Every positive integer can be uniquely expressed as the product of a multiset of primes.
-
-But this theorem is usually phrased more clumsily, without multisets:
-
-Any integer greater than 1 can be written as a unique product (up to ordering of the factors) of prime numbers.¹
-Apart from rearrangement of factors, $n$ can be expressed as a product of primes in one way only.²
-Every integer greater than 1 can be expressed as a product of prime numbers in a way that is unique up to order.³
-
-Many similar factorization theorems are most naturally stated in terms of multisets; try a search for the phrase "up to rearrangement" or "disregarding order". Other examples: a monic polynomial is uniquely determined by its multiset of roots, not by its set of roots. The eigenvalues of a matrix are a multiset, not a set.
-Two types that are ubiquitous in mathematics are the set and the sequence. The sequence has both order and multiplicity. The set disregards both. The multiset has multiplicity without order, but is rare in mathematical literature.
-When we do handle a multiset, it's usually by interpreting it as a function into $\Bbb N$. This leads to somewhat strange results. For example, suppose $M$ is the multiset of the prime factors of some integer $n$. We would like to write:
-$$n = \prod_{p\in M} p$$
-or perhaps even just:
-$$n = \prod M$$
-But if we take the usual path and embed multisets in the conventional types as a function $M:\mathrm{Primes}\to\Bbb N$, then we have to write the statement with an infinite product and significantly more notation:
-$$n = \prod_{p\in\mathrm{Primes}}p^{M(p)} $$
-(For comparison, imagine how annoying it would be if sets $S$ were always understood as characteristic functions with codomain $\{0, 1\}$, and if we had to write $\sum_{x} S(x)$ all the time instead of just $|S|$.)
-Interpreting multisets as functions is infelicitous in other ways too. Except in basic set theory, we usually take for granted that the difference between a finite and an infinite set is obvious. But for multisets-as-functions, we have to say something like:
-
-A multiset $M$ is finite if $M(x)=0$ for all but finitely many values of $x$.
-
-The other way that multisets are sometimes handled in mathematical proofs is as (nonstrict) monotonic sequences. One often sees proofs that begin "Let $a_1\le a_2\le\ldots\le a_n$; then…". The intent here is that the $a_i$ are a multiset, and if $b_i$ are a similar sequence of the same length, then the multisets are equal if and only if $a_i = b_i$ for each $i$. Without the monotonicity, we don't get this equality property. With first-class multisets, we would just say $A=B$ and avoid a lot of verbiage.
-Sets and sequences both have a full complement of standard notation and jargon. Multisets don't. There is no standard notation for the union or intersection of multisets. Part of the problem here is that there are two reasonable definitions of multiset union:
-$$(M\uplus N)(x) = M(x) + N(x)$$
-or
-$$(M\Cup N)(x) = \max(M(x), N(x))$$
-For example, if $M$ and $N$ are the prime factorizations of $m$ and $n$, then $M\uplus N$ is the prime factorization of $mn$, and $M\Cup N$ is the prime factorization of $\mathrm{lcm}(m,n)$.
-Similarly there is no standard notation for multiset containment, for the empty multiset, for the natural injection from sets to multisets, or for the forgetful mapping from multisets to sets.
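-(An aside making the two unions concrete: for finite multisets these are exactly the two operations that Python's collections.Counter provides, with + implementing the additive union and | the max union. A minimal sketch, where factors is a small helper written just for this illustration:
-
-    from collections import Counter
-    import math
-
-    def factors(n):
-        # Prime factorization of n, returned as a multiset (Counter) of primes.
-        f, p = Counter(), 2
-        while p * p <= n:
-            while n % p == 0:
-                f[p] += 1
-                n //= p
-            p += 1
-        if n > 1:
-            f[n] += 1
-        return f
-
-    m, n = 12, 18
-    # (M ⊎ N)(x) = M(x) + N(x): adding multiplicities tracks multiplication.
-    assert factors(m) + factors(n) == factors(m * n)
-    # (M ⋓ N)(x) = max(M(x), N(x)): taking maxima tracks the lcm.
-    assert factors(m) | factors(n) == factors(m * n // math.gcd(m, n))
-
-Both assertions pass, matching the two factorization identities above.)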
If there were standard notation for multisets, we could state potentially useful theorems like this one:
-$$ m|n \quad\mbox{if and only if}\quad \mathrm{factors}(m) \prec \mathrm{factors}(n)$$
-Here $\mathrm{factors}(m)$ means the multiset of prime factors of $m$, and $\prec$ means multiset containment. The analogous statement with sets, that $m|n$ if and only if factors$(m)\subset$ factors$(n)$, is completely false.
-It seems to me that multisets are a strangely missing piece of math jargon. Clearly, we get along all right without them, but it seems that a lot of circumlocution would be avoided if we used multisets more freely when appropriate. Is it just a historical accident that multisets are second-class citizens of the mathematical universe?
-
-REPLY [6 votes]: Old question I know, but I wanted to point out that (finite) multisets do appear quite frequently in algebraic geometry/topology in the guise of finite formal sums; i.e., a multiset over a set $A$ is an element of the free abelian group on the elements of $A$. For example:
-$$
-a-2b+4c
-$$
-for $a,b,c\in A$.
-If you want to allow only positive numbers of each element, you work over the free commutative monoid instead; i.e., you require that the coefficients be non-negative:
-$$
-a+2b+4c
-$$
-The fact that we are allowed to reorder is immediately given to us by the commutativity of the $+$ operation.
-In the case of the product of primes it is very easy just to change the operation to multiplication and declare that every number is a unique product of primes, in the sense that it corresponds to a unique formal (commutative) product of (formal symbols corresponding to) primes.
-Finite formal sums should be enough for the examples you gave, but if you want to allow infinitely many 'elements' then you can do that as well:
-$$
-M=\sum_{a\in A} m_a a
-$$
-where $m_a\in\mathbb N$ for each $a\in A$. Semantically, this is still represented as a function from $A$ to $\mathbb N$, but the notation is different.<|endoftext|>
-TITLE: Showing $(C[0,1], d_1)$ is not a complete metric space
-QUESTION [12 upvotes]: I am completely stuck on this problem: $C[0,1] = \{f: f\text{ is a continuous function on } [0,1] \}$ with metric $d_1$ defined as follows:
-$d_1(f,g) = \int_{0}^{1} |f(x) - g(x)|dx $.
-Let the sequence $\{f_n\}_{n =1}^{\infty}\subseteq C[0,1]$ be defined as follows:
-$ f_n(x) = \left\{ \begin{array}{l l} -1 & \quad \text{$x\in [0, 1/2 - 1/n]$}\\ n(x - 1/2) & \quad \text{$x\in [1/2 - 1/n, 1/2 +1/n]$}\\ 1 & \quad \text{$x\in [1/2 +1/n, 1]$}\\ \end{array} \right. $
-Then $f_{n}$ is Cauchy in $(C[0,1], d_1)$ but not convergent in $d_1$.
-I have proved that $f_{n}$ is not convergent in $C[0,1]$ since it converges to the discontinuous function given as follows:
-$ f(x) = \left\{ \begin{array}{l l} -1 & \quad \text{$x\in [0, 1/2 )$}\\ 0 & \quad \text{$x = 1/2$}\\ 1 & \quad \text{$x\in (1/2 , 1]$}\\ \end{array} \right. $
-I am finding it difficult to prove that $f_{n}$ is Cauchy in $(C[0,1], d_1)$.
-I need help to solve this problem.
-Edit: I am sorry, I have to show $f_n$ is Cauchy.
-Thanks for helping me.
-
-REPLY [12 votes]: Suppose $m,n > N$. Then $f_m(x) = f_n(x) = -1 $ when $x \in [0, \frac{1}{2}-\frac{1}{N}]$. Similarly, $f_m(x) = f_n(x) = +1 $ when $x \in [\frac{1}{2}+\frac{1}{N},1]$. And $|f_m(x)-f_n(x)| < 1$ when $x \in (\frac{1}{2}-\frac{1}{N}, \frac{1}{2}+\frac{1}{N})$.
-Hence $d_1(f_m,f_n) = \int_{0}^{1} |f_m(x) - f_n(x)|dx = \int_{\frac{1}{2}-\frac{1}{N}}^{\frac{1}{2}+\frac{1}{N}} |f_m(x) - f_n(x)|dx < \frac{2}{N}$.<|endoftext|> -TITLE: Localisation is isomorphic to a quotient of polynomial ring -QUESTION [20 upvotes]: I am having trouble with the following problem. - -Let $R$ be an integral domain, and let $a \in R$ be a non-zero element. Let $D = \{1, a, a^2, ...\}$. I need to show that $R_D \cong R[x]/(ax-1)$. - -I just want a hint. -Basically, I've been looking for a surjective homomorphism from $R[x]$ to $R_D$, but everything I've tried has failed. I think the fact that $f(a)$ is a unit, where $f$ is our mapping, is relevant, but I'm not sure. Thanks - -REPLY [7 votes]: Here's another answer using the universal property in another way (I know it's a bit late, but is it ever too late ?) -As for universal properties in general, the ring satisfying the universal property described by Arturo Magidin in his answer is unique up to isomorphism. Thus to show that $R[x]/(ax-1) \simeq R_D$, it suffices to show that $R[x]/(ax-1)$ has the same universal property ! -But that is quite easy: let $\phi: R\to T$ be a ring morphism such that $\phi(a) \in T^{\times}$. -Using the universal property of $R[x]$, we get a unique morphism $\overline{\phi}$ extending $\phi$ with $\overline{\phi}(x) = \phi(a)^{-1}$. -Quite obviously, $ax -1 \in \operatorname{Ker}\overline{\phi}$. Thus $\overline{\phi}$ factorizes uniquely through $R[x]/(ax-1)$. -Thus we get a unique morphism $\mathcal{F}: R[x]/(ax-1) \to T$ with $\mathcal{F}\circ \pi = \phi$, where $\pi$ is the canonical map $R\to R[x]/(ax-1)$. This shows that $\pi: R \to R[x]/(ax-1)$ has the universal property of the localization, thus it is isomorphic to the localization. -This is essentially another way of seeing Arturo Magidin's answer<|endoftext|> -TITLE: Real numbers equipped with the metric $ d (x,y) = | \arctan(x) - \arctan(y)| $ is an incomplete metric space -QUESTION [34 upvotes]: I have to show that the real numbers equipped with the metric -$ d (x,y) = | \arctan(x) - \arctan(y)| $ is an incomplete metric space. -Certainly, I have to search for a Cauchy sequence of real numbers with respect to given metric that must not be convergent. But I am unable to figure out that. - Can anybody help me with this. -Thanks for helping me. - -REPLY [19 votes]: Sorry for reviving such an old problem... -Anyways, what is important here is that $\text{arctan}$ is a bijection from $\mathbb{R}$ to $( -\pi/2, \pi/2 )$, and it is an isometry if we give $\left(-\pi/2, \pi/2\right)$ the metric it carries as a subspace of $\mathbb{R}$ with the usual metric. If $f: X \to Y$ is a surjective isometry then the Cauchy sequences in $Y$ are the images of Cauchy sequences under $f$, and the convergent sequences in $Y$ are the images of convergent sequences under $f$, so $Y$ is complete if and only if $X$ is. Thus $(\mathbb{R}, d)$ is complete if and only if $(-\pi/2, \pi/2)$ with the metric $\text{dist}(x, y) = |x - y|$ is complete. But $(-\pi/2, \pi/2)$ is not complete since $\{\pi/2 - 1/n\}_{n=1}^\infty$ is Cauchy and does not converge in $(-\pi/2, \pi/2)$.<|endoftext|> -TITLE: Is the set $\{\frac{1}{a\,-\,\pi}\mid a\in\mathbb{Q}\}$ linearly independent over $\mathbb{Q}$? -QUESTION [8 upvotes]: The following problem is from Golan's linear algebra book. I have posted a proposed solution in the answers. -Problem: Consider $\mathbb{R}$ as a vector space over $\mathbb{Q}$. 
Is the subset $$\left\{\frac{1}{a-\pi}\;\middle\vert\; a\in\mathbb{Q}\right\}$$ linearly independent? - -REPLY [4 votes]: Well, if you are going to assume known that $\pi$ is transcendental over $\mathbb Q$ ... you may as well use any other transcendent instead. For example, prove that -$$ -\frac{1}{a-z},\qquad a \in \mathbb Q -$$ -in the vector space of meromorphic functions is linearly independent over $\mathbb Q$. In fact, more is true: linearly independent over $\mathbb C$. We can see none of these is a linear combination of some others by comparing their poles.<|endoftext|> -TITLE: implicit equation for "double torus" (genus 2 orientable surface) -QUESTION [16 upvotes]: The embedded torus in $\mathbb R^3$ can be described by the set of points in $(x,y,z)\in \mathbb R^3$ satisfying $T(x,y,z)=0$, where $T$ is the polynomial $T(x,y,z)=(x^2+y^2+z^2+R^2-r^2)^2-4R^2(x^2+y^2)$ for $R>r>0$. -Is it possible to find a polynomial that describes the sum of two (or $n$) tori? That is, is there a polynomial (or even a smooth function) $P$ such that the embedded double torus can be described as the set where $P(x,y,z)=0$? - -REPLY [12 votes]: Here's another way to obtain a "double torus": you can start from the implicit equation of a lemniscate, which is a curve shaped like a figure-eight. One could, for instance, choose to use the lemniscate of Gerono: -$$x^4-a^2(x^2-y^2)=0$$ -or the hyperbolic lemniscate, which is the inverse curve of the hyperbola: -$$(x^2+y^2)^2-a^2x^2+b^2y^2=0$$ -(the famous lemniscate of Bernoulli is a special case of this, corresponding to the inversion of an equilateral hyperbola). -Now, to generate a double torus from these lemniscates, if you have the implicit Cartesian equation in the form $F(x,y)=0$, you can perform the "inflation" step of Rahul's approach; that is, form the equation -$$F(x,y)^2+z^2=\varepsilon$$ -where $\varepsilon$ is a tiny number. -For instance, here's a double torus formed from the lemniscate of Bernoulli: $$((x^2+y^2)^2-x^2+y^2)^2+z^2=\frac1{100}$$ - -For surfaces of higher genus, one might want to use sinusoidal spirals instead as the base curve. - -Yet another possibility to generate surfaces of genus $n\geq 2$ is to consider the surface $F_1(x,y,z)F_2(x,y,z)\dots F_n(x,y,z)=0$ where the $F_i$ are the implicit Cartesian equations for the usual torus, suitably translated and/or rotated. One can then replace $0$ with a tiny number $\varepsilon$.<|endoftext|> -TITLE: What do the cosets of $\mathbb{R} / \mathbb{Q}$ look like? -QUESTION [23 upvotes]: $\newcommand{\R}{\Bbb R}\newcommand{\Q}{\Bbb Q}$ -Looking at the group of real numbers under addition $(\R, +)$ it contains the (normal) subgroup of rational numbers $(\Q, +)$. I am wondering how to describe the cosets of $\R / \Q$. -I know from looking at the cardinality of the sets that because $\R$ is uncountable and $\Q$ is countable that $\R / \Q$ is uncountable. I am also thinking of $\R / \Q$ containing a "representative" of each irrational number. I am also aware that $\Q$ is dense in $\R$, so that each member of $\R$ is the limit of a sequence of numbers in $\Q$. -Both $\R$ and $\Q$ are ordered. But is there a natural order on $\R / \Q$? What else can we determine about $\R / \Q$? -Background: I am investigating functions satisfying -$f(a + b) = f(a)f(b)$ for all $a,b\in\R$. 
-If $f$ is required to be continuous then $f(x) = \exp(A x)$,
-but if $f$ is not required to be continuous then I think I can define
-$f(x) = \exp(A_t x)$ for all $x$ in the coset $t + \Q = \{t + q \mid q\in \Q\}$, where the constant $A_t$ is different for each coset in $\R / \Q$. This makes for quite an interesting function!!
-
-REPLY [3 votes]: There is no natural ordering on $\mathbb{R/Q}$. Of course if you choose a set of representatives then you have a natural ordering inherited from $\mathbb R$; however it is easy to see that this order would be very dependent on the choice of the representatives.
-In fact this answer on MathOverflow explains why in some models without the axiom of choice $\mathbb{R/Q}$ cannot be linearly ordered at all.<|endoftext|>
-TITLE: Points and sheaf of functions for some schemes
-QUESTION [6 upvotes]: I am (slowly) reading Eisenbud and Harris and trying to get my head around (affine) schemes.
-Let $R$ be a ring, and $X=\text{Spec}(R)$, as usual with the Zariski topology. We have a basis of open sets; for $f \in R$, $X_f=\{ p \subset R \mid f \notin p \}$ where $p$ is a prime ideal. Then the structure sheaf is $\mathcal{O}(X_f) = R_f$, the localization of the ring $R$ with respect to the multiplicative subset $\{ 1,f,f^2,\ldots \}$.
-Exercise I-20 in E-H is to calculate the points and sheaf of functions for some schemes
-
-1) $X_1 = \text{Spec } \mathbb{C}[x]/(x^2)$
-This shouldn't be too hard - there is exactly one (closed) point corresponding to the maximal ideal $(x)$, in which case $X_1 = \{(x)\}$. Thus the only open sets are $\emptyset \subset X_1$, and then $\mathcal{O}(\emptyset) = 0$ and $\mathcal{O}(X_1) = \mathbb C[x]/(x^2)$.
-Is this correct?
-2) $X_2 = \text{Spec } \mathbb{C}[x]/(x^2-x)$
-Here we should have exactly two (closed) points: $(x),(x-1)$. Call these $\{ a,b \}$.
-The topology should then be $\{\emptyset,\{a \}, \{ b \}, \{a, b\} \}$ (the discrete topology).
-Again we have $\mathcal{O}(\emptyset) =0 $ and $\mathcal{O}(\{ a,b \}) = \mathbb C[x]/(x^2-x)$.
-Now
-$$
-\begin{align}
-\mathcal{O}(\{ a \}) &= [\mathbb C[x]/(x^2-x)]_{(x)} \\
- &\simeq [\mathbb C[x]/(x(x-1))]_{(x)}
-\end{align}
-$$
-Am I now localizing with respect to the multiplicative set $R - \mathfrak{p}$ where $\mathfrak{p}=(x)$ and $R = \mathbb C[x]/(x^2-x)$? And then is this just:
-$$
-\begin{align}
-\mathcal{O}(\{ a \}) &\simeq [\mathbb C[x]/(x(x-1))]_{(x)} \\
- &\simeq [\mathbb C[x]/(x-1)]_{(x)} \\
- &\simeq \mathbb{C}
-\end{align}
-$$
-?
-
-REPLY [2 votes]: Your arguments are correct. I am not sure what your question is, but what you write is correct. In fact, as you already know, as for the first example, the underlying topology is easy: it is clearly irreducible, but note that the ring of global sections is not reduced; as for the second example, please note that the underlying topology is not irreducible, and the ring of global sections is isomorphic to $\mathbb{C}\times \mathbb{C}$, which is reduced.<|endoftext|>
-TITLE: Interpreting the determinant of matrices of dot products
-QUESTION [5 upvotes]: In the Euclidean space $\mathbb{R}^n$ consider two (ordered) sets of vectors $a_1 \ldots a_k$ and $b_1 \ldots b_k$ with $k \le n$.
-
-Question
-
-What is the geometrical interpretation of $\det(a_i \cdot b_j)$?
-Is it true that $\det(a_i\cdot b_j)=\det(a'_p\cdot b'_q)$ if $a_1\wedge \ldots \wedge a_k=a'_1\wedge \ldots \wedge a'_k$ and $b_1\wedge \ldots \wedge b_k=b'_1\wedge \ldots \wedge b'_k$?
-
-Since $\det(a_i\cdot a_j)$ equals the squared $k$-volume spanned by $a_1\ldots a_k$, I guess that $\det(a_i\cdot b_j)$ may be interpreted as the $k$-volume spanned by some kind of projection of $b_1\wedge \ldots \wedge b_k$ (thought of as an oriented $k$-parallelogram) onto $a_1\wedge \ldots \wedge a_k$. For the same reason I would answer affirmatively to the second question.
-
-REPLY [2 votes]: Given a Euclidean structure on $\mathbb R^n$, you deduce a canonical Euclidean structure on each $\Lambda ^k\mathbb R^n$.
-On pairs of decomposable elements $a_1\wedge \ldots \wedge a_k, b_1\wedge \ldots \wedge b_k\in\Lambda ^k\mathbb R^n$ it is given by the formula
-$$ (a_1\wedge \ldots \wedge a_k\mid b_1\wedge \ldots \wedge b_k)= \det(a_i\cdot b_j) $$
-So your determinant is the scalar product of two vectors, but in a new vector space.
-In particular you have the pleasant interpretation that the volume of the parallelepiped spanned by the vectors $a_1, \ldots ,a_k\in \mathbb R^n$ is the length of the vector $a_1\wedge \ldots \wedge a_k\in \Lambda ^k\mathbb R^n$.
-The above interpretation makes it now obvious that given $$\omega =a_1\wedge \ldots \wedge a_k, \:\omega '=a'_1\wedge \ldots \wedge a'_k, \:\eta=b_1\wedge \ldots \wedge b_k, \:\eta'=b'_1\wedge \ldots \wedge b'_k$$ the equalities $\omega=\omega'$ and $\eta=\eta'$ imply that $$(\omega\mid\eta)=\det(a_i\cdot b_j)=(\omega'\mid\eta')=\det(a'_p\cdot b'_q)$$<|endoftext|>
-TITLE: How is the hyperplane bundle cut out of $(\mathbb{C}^{n+1})^\ast \times \mathbb{P}^n$?
-QUESTION [10 upvotes]: [Question has been updated with more context and perhaps a better explanation of my question.]
-Source: Smith et al., Invitation to Algebraic Geometry, Section 8.4 (pages 131 - 133).
-First, a brief set-up, whose purpose will become obvious in a minute. Note that everything here is over $\mathbb{C}$.
-
-The tautological bundle over $\mathbb{P}^n$ is constructed as follows. Consider the incidence correspondence of points in $\mathbb{C}^{n+1}$ lying on lines through the origin, $B = \{(x, \ell) \;|\; x \in \ell \} \subseteq \mathbb{C}^{n+1} \times \mathbb{P}^n$, together with the natural projection $\pi : B \rightarrow \mathbb{P}^n$. [...] The tautological bundle over the projective variety $X \subseteq \mathbb{P}^n$ is obtained by simply restricting the correspondence to the points of $X$...
-
-Next, it's shown that this bundle has no global sections, in the case $X = \mathbb{P}^1$:
-
-A global section of the tautological bundle defines, for each point $p \in \mathbb{P}^1$, a point $(a(p), b(p)) \in \mathbb{C}^2$ lying on the line through the origin corresponding to $p$. Since the assignment $p \mapsto (a(p), b(p))$ must be a morphism, we see that projecting onto either factor, we have morphisms $a, b : \mathbb{P}^1 \rightarrow \mathbb{C}$. But because $\mathbb{P}^1$ admits no nonconstant regular functions, both $a$ and $b$ are constant functions. But then both are zero...
-
-So far, so good.
-Now: - -The hyperplane bundle $H$ on a quasi-projective variety is defined to be the dual of the tautological line bundle: The fiber $\pi^{-1}(p)$ over a point $p \in X \subset \mathbb{P}^n$ is the (one-dimensional) vector space of linear functionals on the line $\ell \subset \mathbb{C}^{n+1}$ that determines $p$ in $\mathbb{P}^n$. The formal construction of $H$ as a subvariety of $(\mathbb{C}^{n+1})^\ast \times \mathbb{P}^n$ parallels that of the tautological line bundle. - -This, I don't understand. Specifically, whereas the set of $v \in \mathbb{C}^{n+1}$ such that $v \in \ell$ was a subspace of $\mathbb{C}^{n+1}$, the set of linear functionals $f : \ell \rightarrow \mathbb{C}$ appears to be a quotient of $(\mathbb{C}^{n+1})^\ast$, rather than a subspace. So I don't see how to perform the parallel construction here. -In particular, any method that cuts something out of $(\mathbb{C}^{n+1})^\ast \times \mathbb{P}^n$ would seemingly allow us to carry out the argument above and show that the hyperplane bundle has no global sections, which is false. - -REPLY [6 votes]: Let's be more canonical: call $V$ the vector space such that $\mathbb P^n=\mathbb P(V)$. - Given a point $p\in W$ (where $W\subset \mathbb P^n$ is your subvariety), you may consider the linear forms $\phi:l\to \mathbb C$ on the line $l$ corresponding to $p$. -These linear forms constitute the fiber at $p$ of the bundle $B^*$ dual to $B$: $\phi\in B^*[p]$. -The terminology hyperplane bundle is due to the fact that if you consider the same construction on $\mathbb P^n$ you get a bundle $H$ whose global sections are identified with $V^*$, i.e. $\Gamma (\mathbb P^n, H)=V^*$, so that the zero set of such a $\Phi\in V^*$ is a hyperplane $\mathbb P(Ker(\Phi))\subset \mathbb P^n$. -The restricted bundle $H\mid W$ is then the bundle you are interested in: $H\mid W=B^*$. -Finally note that you have a canonical vector space morphism of global sections $\Gamma (\mathbb P^n, H)=V^*\to \Gamma (W, B^*)$ which is neither injective nor surjective in general. -Edit -Let me address a subtle point raised by Daniel. -The trivial bundle $V^*\times \mathbb P^n$ and $H$ have the exact same vector space of global sections, namely $V^*$. But are they equal? Of course not: the first bundle has rank $n+1$ and the second one rank $1$. -All right, what then is the fiber of $H$ at $p$? Some dimension $1$ subspace of $V^*$ maybe? -Not at all! It is $l^*$, where $l$ is the line that $p$ represents and, as Daniel correctly notes, $l^*$ is a quotient of $V^*$, not a subspace. -Since we do not have $H\subset V^*\times \mathbb P^n$, we cannot apply the reasoning that led to the conclusion that the tautological vector bundle has only the zero section and the paradox vanishes. -To sum up, we have a canonical surjective (but not injective) morphism of vector bundles on $\mathbb P^n$ $$V^*\times \mathbb P^n\to H\to 0 $$ which induces an isomorphism of ordinary $\mathbb C$-vector spaces $$\Gamma( \mathbb P^n,V^*\times \mathbb P^n)=V^*\stackrel{\cong}{\to} \Gamma(\mathbb P^n,H) $$<|endoftext|> -TITLE: What's the need of defining the notion of distance using a norm function in a metric space? -QUESTION [5 upvotes]: I have started studying normed spaces. I wonder what the need is for defining the notion of distance using a norm function. For example, we know that $\mathbb{R}$ is a metric space with respect to the usual metric defined by $d(x,y) = \mid x-y \mid$.
-Now, I am studying that $\mathbb{R}$ is a metric space with respect to the metric induced by the norm, defined by $d(x,y) = \parallel x - y\parallel$. -Edit 1: I mean, can't we simply study metric spaces using a distance function which doesn't involve norms? Why have we introduced the concept of norms? -I have no problem understanding things related to norms, but this question, which might sound trivial, is troubling me. -Thanks for helping me. - -REPLY [5 votes]: Norms only make sense on vector spaces, whereas metrics can be defined on arbitrary sets. -Supposing we are in a vector space, the principles that distinguish a norm from a metric are 1) translation invariance, and 2) homogeneity. Metrics induced by a norm are always translation invariant and homogeneous, and given a translation invariant homogeneous metric you can build a norm via -$$||x||:=d(x,0).$$ -Thus the core question is: what is the intuition behind translation invariance and homogeneity, and why are they interesting? -1) Translation invariance means that normed spaces look the same everywhere, in some sense. Any property that depends on pairwise distances between points will be the same if you translate all the points over. -$$d(x+h,y+h)=||x-y|| \text{, independent of } h.$$ -2) Homogeneity means that it is meaningful to put "units of measurement" on your space. If you measure two vectors in meters then take the norm distance between them, you get the same result as if you measured them in inches, then took the norm distance between them, then converted that distance from inches to meters. -$$d(ax,ay) = |a|d(x,y).$$<|endoftext|> -TITLE: Ramanujan's approximation to factorial -QUESTION [14 upvotes]: I saw this approximation for the factorial given by Ramanujan as -$$\log(n!) \approx n \log n - n + \frac{\log(n(1+4n(1+2n)))}{6} + \frac{\log(\pi)}{2}$$ on Wikipedia, which claims the approximation is superior to Stirling's approximation. I tried to locate the reference but unfortunately I could not. -I would appreciate it if someone could shed light on how this asymptotic is obtained and the order of the error. - -REPLY [7 votes]: Well, I finally found the formula in -S. Ramanujan, The Lost Notebook and other Unpublished Papers. S. Raghavan and S. S. Rangachari, editors. Narosa, New Delhi, 1987. -page 339.<|endoftext|> -TITLE: A convergence problem in Banach spaces related to ergodic theory -QUESTION [9 upvotes]: Suppose $X$ is a Banach space and $T\in B(X)$ satisfies the following conditions. - -$\sup_n \Big\lVert\frac{1}{n}\sum \limits_{i=0}^{n-1}T^{i}\Big\rVert<\infty$ -$\frac{1}{n}\lVert T\rVert^{n}\rightarrow0$, as $n \rightarrow\infty$ - -For $x \in X$, take $x_{n}=\frac{1}{n} \sum\limits_{i=0}^{n-1}T^{i}x$. If there is a subsequence $\{x_{n_{k}}\}$ which has a weak limit $x^{*}$ (in the weak topology), prove that $x_{n}$ is convergent to $x^{*}$ in the norm topology, and $Tx^{*}=x^{*}$. -This can be seen as a generalization of the von Neumann Ergodic Theorem for Banach spaces. -Any advice and discussions will be appreciated. - -REPLY [5 votes]: For me it is a bit easier to work with the Cesàro averaging operators -$$ -S_n = \frac{1}{n} \sum_{i=0}^{n-1} T^i -$$ -than with the sequences $x_n$. Since $S_nx = x_n$, the translation is straightforward. -Observe that $(1-T)S_n = S_n(1-T) = \frac{1}{n}(1-T^{n})$.
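-(As a quick sanity check of this telescoping identity, not needed for the proof, one can test it numerically with a random matrix in place of $T$; a minimal illustrative Python/NumPy sketch, where the matrix size and $n$ are chosen arbitrarily:)
-
-    import numpy as np
-
-    # Check S_n (1 - T) = (1 - T) S_n = (1/n)(1 - T^n) on a random square matrix T.
-    rng = np.random.default_rng(0)
-    T = rng.standard_normal((5, 5))
-    n, I = 7, np.eye(5)
-    S_n = sum(np.linalg.matrix_power(T, i) for i in range(n)) / n  # Cesaro average
-    rhs = (I - np.linalg.matrix_power(T, n)) / n
-    assert np.allclose(S_n @ (I - T), rhs) and np.allclose((I - T) @ S_n, rhs)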
-I assume that the condition in the second bullet point $\frac{1}{n} \lVert T\rVert^n \to 0$ (which is equivalent to $\Vert T\rVert \leq 1$) is a typo for the weaker condition $\frac{1}{n} \lVert T^n\rVert \to 0$ saying that $\lVert T^n\rVert$ grows slower than linearly. We have -$$\tag{$\ast$} -\lVert S_n(1-T)\rVert = \lVert(1-T)S_n\rVert \leq \frac{1}{n}(1+\lVert T^n\rVert) \xrightarrow{n\to\infty} 0. -$$ -We will also make use of the identity -$$ -\tag{${\ast\ast}$} -1-S_n = \frac{1}{n} \sum_{i=0}^{n-1} (1-T^i) -= (1-T) \frac{1}{n}\sum_{i=0}^{n-1} i \cdot S_i. -$$ - -Now we're ready for the proof. Assume that $x \in X$ and that $n_{k}$ is an increasing sequence such that $S_{n_k} x \to x^\ast$ weakly. We want to show that -$$ -\lVert x^\ast - S_nx\rVert \xrightarrow{n\to\infty} 0. -$$ - -Since bounded operators are weak-weak continuous, we have $TS_{n_k}x \to Tx^\ast$ weakly, and on the other hand for $\varphi \in X^\ast$ we have -$$ -\lvert \varphi((1-T)x^\ast)\rvert -= \lim_{k\to\infty} \lvert\varphi ((1-T)S_{n_k}x)\rvert\leq \lVert\varphi\rVert\lVert x \rVert\lim_{k\to\infty}\lVert(1-T)S_{n_k}\rVert = 0 -$$ -where the first equality follows from weak convergence and last one follows from $(\ast)$. -Thus $\varphi(x^\ast - Tx^\ast) = 0$ for all $\varphi \in X^\ast$ and since $X^\ast$ separates the points of $X$ we conclude that $x^\ast = Tx^\ast$. We also have $S_{n}x^\ast = x^\ast$. -Let $y = x- x^\ast$ and note that -$$\tag{$\ast\ast\ast$} -S_ny = S_n x - x^\ast, -$$ -so that weak convergence $S_{n_k}x \to x^\ast$ implies weak convergence $S_{n_k}y \to 0$. -With $(\ast\ast)$ we get -$$ -y - S_{n_k}y = (1-T)\frac{1}{n_k}\sum_{i=0}^{n_k-1}i\cdot S_iy, -$$ -so $y = x-x^\ast$ is in the weak closure of the range of $(1-T)$. -For convex sets the weak closure and the norm closure coincide, so $y = x- x^\ast$ is in the norm closure of the range of $(1-T)$. -Let $\varepsilon \gt 0$. -There is $z = (1-T)w$ such that $\lVert y-z\rVert \lt \varepsilon$. Again with $(\ast)$ we conclude that $\lVert S_n z\rVert \to 0$. -Finally, the hypothesis $C = \sup_{n \in \mathbb{N}} \lVert S_n \rVert \lt \infty$ (which we haven't used so far) shows that for $n$ large enough we have -$$ -\lVert S_n y \rVert \leq \lVert S_n(y-z)\rVert + \lVert S_n z\rVert \leq (C+1)\varepsilon. -$$ -Recalling $(\ast\ast\ast)$ this gives -$$ -\lVert S_n x - x^\ast\rVert \xrightarrow{n\to\infty} 0, -$$ -as we wanted.<|endoftext|> -TITLE: Finding a Galois extension of $\Bbb{Q}(i)$ isomorphic to $D_4$ -QUESTION [8 upvotes]: This problem has been bothering me for a few days now. This is not homework, just something I do for my own entertainment. -I want to find a Galois extension of $\Bbb{Q}(i)$ which has Galois group isomorphic to $D_4$, the symmetry group of the square (with 8 elements), if one exists. If one does not exist, I would like to know why. -I'd also like to find explicitly an irreducible polynomial in $\Bbb{Q}(i)[x]$ corresponding to this extension. My instincts tell me I might be looking for a "bi-quadratic" polynomial (that is: one of the form $x^4 + ax^2 + b$). I can do this if the base field is $\Bbb{Q}$, but the presence of $i$ throws me off a bit. -Hints are fine with me, I'll gladly do the leg-work if someone points me in the right direction. - -REPLY [6 votes]: The following always gives you a Galois extension with Galois group isomorphic to $D_4$ of order 8. Let $F$ be a field of characteristic not equal to 2, like $\Bbb{Q}(i)$. Choose $c \in F$ such that $c$ is not a square in $F$.
-Then $L = F(\sqrt{c})$ is a degree 2 extension of $F$. -Now adjoin an element $\sqrt{a + b\sqrt{c}}$ to $L$ in such a way that $a + b\sqrt{c}$ is not a square in $L$. Then if you choose $a,b,c$ carefully in such a way that $a^2 - b^2c \neq g^2c$ and is also not equal to $h^2$ for any $g,h \in F$ then $E = F(\sqrt{c},\sqrt{a + b\sqrt{c}})$ will not be a Galois extension of $F$. However if you then consider $M = E(\sqrt{a - b\sqrt{c}})$ then this will be a Galois extension of $F$, because the polynomial $(x^2 -a)^2 - b^2c$ splits completely in here. Furthermore using the degree arguments above this polynomial is irreducible over $F$ and you can check that its derivative is non-zero so that $M$ is a normal and separable extension of $F$. -By a simple computation one can check that $M$ is a degree 8 extension of $F$ with Galois group isomorphic to $D_4$. -Now let's choose a specific example where this works. Let $F = \Bbb{Q}(i)$. Choose $c = 2, a= 2$ and $ b= 3$. Clearly $\sqrt{2} \notin \Bbb{Q}(i)$. Now we then calculate $a^2 - b^2c = 4 - 9\cdot2 = -14$. This is clearly not the square of anything in $\Bbb{Q}(i)$ and furthermore if we attempt to write $-14 = g^2\cdot 2$ we will get a square root of 7 in $\Bbb{Q}(i)$ which is impossible too. It follows by what I said above that -$E= F(\sqrt{2},\sqrt{2 + 3\sqrt{2}})$ is not a Galois extension, but that $M= E(\sqrt{2 - 3\sqrt{2}})$ is a Galois extension of $F$ with Galois group isomorphic to $D_4$. To find an irreducible polynomial of degree $8$ over $\Bbb{Q}(i)$, I suggest you consider the minimal polynomial of $\sqrt{2 + 3\sqrt{2}} + \sqrt{2 - 3\sqrt{2}} + \sqrt{2}$ over $\Bbb{Q}(i)$. You can show it is irreducible by showing that $$\Bbb{Q}(i)\bigg(\sqrt{2 + 3\sqrt{2}} + \sqrt{2 - 3\sqrt{2}} + \sqrt{2}\bigg) = \Bbb{Q}(i)\bigg(\sqrt{2+3\sqrt{2}},\sqrt{2 - 3\sqrt{2}},\sqrt{2}\bigg).$$ -Now one containment is already obvious. To show the other write $x = \sqrt{2 + 3\sqrt{2}} + \sqrt{2 - 3\sqrt{2}} + \sqrt{2}$. Then you find that $x^2 -2\sqrt{2}x = 2 + 2\sqrt{-14}$, i.e. $(x^2-2) - 2\sqrt{2}x = 2\sqrt{-14}$; squaring this and rearranging gives $4\sqrt{2}x(x^2-2) = (x^2-2)^2 + 8x^2 + 56$, and so -$$\sqrt{2} = \frac{(x^2-2)^2 + 8x^2 + 56}{4x(x^2-2)} \in \Bbb{Q}(i)\bigg(\sqrt{2 + 3\sqrt{2}} + \sqrt{2 - 3\sqrt{2}} + \sqrt{2}\bigg)$$ -(note that $x(x^2-2)\neq 0$). -It follows that $y =\sqrt{2 + 3\sqrt{2}} + \sqrt{2 - 3\sqrt{2}}$ is in here too. By a similar computation as above you find that $\sqrt{2 + 3\sqrt{2}}$ is in here and consequently $\sqrt{2 - 3\sqrt{2}}$ finishing the proof. Hence whatever monic polynomial of degree 8 in $\Bbb{Q}(i)[x]$ that we find with $x$ as a root, that is the minimal polynomial of $x$ over $\Bbb{Q}(i)$. -Now from the step above, $(x^2-2) - 2\sqrt{2}x = 2\sqrt{-14}$, squaring gives -$$(x^2-2)^2 + 8x^2 + 56 = 4\sqrt{2}x(x^2-2)$$ -and squaring once more, it follows that $f(y) = \big((y^2-2)^2 + 8y^2 + 56\big)^2 - 32y^2(y^2-2)^2 \in \Bbb{Q}(i)[y]$ is the minimal polynomial for $x$ over $\Bbb{Q}(i)$.<|endoftext|> -TITLE: Inequalities in $l_p$ norm -QUESTION [10 upvotes]: I'm having difficulty with the following problem. Any help would be appreciated. -Problem: Consider the sequence spaces $l_p$ with the usual norm. If $1\le p\le q\le \infty$, I want to show the following inequality for any sequence $a$.
- -$$\|a\|_q\le \|a\|_p$$ -If we restrict to $\mathbb{R}^n$ but still use the $l_p$ norms, I also want to show this: -$$\|a\|_q\le \|a\|_p\le n^{\frac{1}{p}-\frac{1}{q}}\|a\|_q$$ -Work so far: I strongly suspect that a clever application of Hölder is needed here, but I tried the following for the first inequality: -First, we consider the case where a finite number of elements in the sequence are nonzero. We want to prove -$$||x||_q\le ||x||_p \Leftrightarrow \left(\sum_1^n |x_j|^q\right)^{\frac{1}{q}} \le \left(\sum_1^n |x_j|^p\right)^{\frac{1}{p}}.$$ -We induct on $n$. The base case is clear. Because we can multiply all of the variables by a constant without affecting the inequality, we assume $x_n=1$. Assume we have proven the inequality for $n-1$. Then -$$\left(\sum_1^{n-1} |x_j|^q\right) \le \left(\sum_1^{n-1} |x_j|^p\right)^{\frac{q}{p}}$$ -It suffices to show that -$$\left(\sum_1^{n-1} |x_j|^q\right) + 1 \le \left(\sum_1^{n-1} |x_j|^p+1\right)^{\frac{q}{p}}$$ -This is equivalent to -$$\left(\sum_1^{n-1} |x_j|^q\right)\le \left(\sum_1^{n-1} |x_j|^p+1\right)^{\frac{q}{p}}-1$$ -So we need to show that if $f(x)=x^{q/p}$, then $f(x+1)\ge f(x)+1$. But this is clear, as $q\ge p$. Now I think it should be an easy matter to pass to the $l_p$ spaces by taking limits. -I'm not sure what to do about the second inequality yet. - -REPLY [11 votes]: For any $x,y\in\mathbb{R}^n,$ let us define $x\ast y=(x_iy_i)_{i=1,\ldots,n}\in\mathbb{R}^n.$ -For any $p,q,r\in[1,\infty]$ such that $\frac{1}{p}+\frac{1}{q}=\frac{1}{r}$, we have a generalization of Hölder's inequality $$||x\ast y||_r\leq ||x||_p||y||_q\tag{*}.$$ -By applying (*) taking $y=(1,\ldots,1),$ we get $$||x||_r\leq n^{\frac{1}{r}-\frac{1}{p}}||x||_p.$$ - -Edit: About the first inequality $||x||_p\geq ||x||_q,\textrm{ when }1\leq p\leq q\leq\infty.$ Apart from the trivial case $q=\infty,$ a possible derivation is as follows: -$$||x||_p^q=\left(\Sigma_{i}|x_i|^p\right)^{q/p}\geq \Sigma_{i}|x_i|^q=||x||_q^q.$$ -Here we have used the majoration $\left(\Sigma_{i}|x_i|^p\right)^{q/p}\geq \Sigma_{i}|x_i|^q$ which is justified by the remark that, for any $\alpha\in [1,\infty[,$ the function $f(t)=(1+t)^\alpha- 1-t^\alpha$ is nonnegative.<|endoftext|> -TITLE: does $\int_0^\infty x/(1+x^2 \sin^2x) \mathrm dx$ converge or diverge? -QUESTION [7 upvotes]: $$\int_0^\infty x/(1+x^2\sin^2x) \mathrm dx$$ -I'd be very happy if someone could help me out and tell me whether the given integral converges or not (and why?). Thanks a lot. - -REPLY [2 votes]: The answer for the convergence/divergence may be easily obtained by using the following trivial inequality: -$$\int_{0}^{\infty} \frac{x}{1+x^{2}\sin^{2}{x}}\,dx \geq \int_{0}^{\infty} \frac{x}{1+x^{2}}\,dx = \Bigl[\frac{1}{2}\log(x^2+1)\Bigr]_{0}^{\infty} = \infty$$ -The proof is complete.<|endoftext|> -TITLE: Finding the integral of $\int x\ln(1+x)dx$ -QUESTION [5 upvotes]: I know I have to make a u substitution and then do integration by parts. -$$\int x\ln(1+x)dx$$ -$ u = 1 + x$ -$du = dx$ -$$\int (u-1)(\ln u)du$$ -$$\int u \ln u du - \int \ln u du$$ -I will solve the $\ln u$ problem first since it will be easier -$$ \int \ln u du$$ -$u = \ln u$ -$du = 1/u$ -$dz = du$ -$z = u$ -$$-(u\ln u - u)$$ -Now I will do the other part. -$$\int u \ln u du$$ -$u = \ln u$ $du = 1/u$ -$dz = udu$ $z = u^2 / 2$ -$$\frac {u^2 \ln u}{2} - \int u/2$$ -$$\frac {u^2 \ln u}{2} - \frac{1}{2} \int u$$ -$$\frac {u^2 \ln u}{2} - \frac{u^2}{2} $$ -Now add the other part.
- -$$\frac {u^2 \ln u}{2} - \frac{u^2}{2} -u\ln u + u $$ -Now put u back in terms of x. -$$\frac {(1+x)^2 \ln (1+x)}{2} - \frac{(1+x)^2}{2} -(1+x)\ln (1+x) + (1+x) $$ -This is wrong and I am not sure why. - -REPLY [3 votes]: Comment for Jordan: You should not read the solution below, it is somewhat non-standard and at this stage you need to think in terms of standard approaches. -We use integration by parts, $u=\ln(1+x)$, $dv=x\,dx$. So $du =\frac{dx}{1+x}$. -Now we do something cute with $v$. Any antiderivative of $x$ will do. Instead of boring old $\frac{x^2}{2}$, we can take -$$v=\frac{x^2}{2}-\frac{1}{2}=\frac{1}{2}(x+1)(x-1).$$ -Thus -$$\int x\ln(1+x)\,dx=\left(\frac{x^2}{2}-\frac{1}{2}\right)\ln(1+x)-\int \frac{1}{2}(x-1)\,dx.$$<|endoftext|> -TITLE: Inverse of transformation matrix -QUESTION [7 upvotes]: I am preparing for a computer 3D graphics test and have a sample question which I am unable to solve. -The question is as follows: -For the following 3D transformation matrix M, find its inverse. Note that M is a composite matrix built from fundamental geometric affine transformations only. Show the initial transformation sequence of M, invert it, and write down the final inverted matrix of M. -$M =\begin{pmatrix}0&0&1&5\\0&3&0&3\\-1&0&0&2\\0&0&0&1\end{pmatrix} $ -I only know basic linear algebra and I don't think it is the purpose to just invert the matrix but to use the information in the question to solve this. -Can anyone help? -Thanks - -REPLY [2 votes]: I know this is old, but the inverse of a transformation matrix is just the inverse of the matrix. If a transformation matrix $M$ transforms some vector $\mathbf a$ to position $\mathbf v$, then to get a matrix which transforms $\mathbf v$ back to $\mathbf a$ we just multiply by $M^{-1}$: -$M\cdot \mathbf a = \mathbf v \\ -M^{-1} \cdot M \cdot \mathbf a = M^{-1} \cdot \mathbf v \\ -\mathbf a = M^{-1} \cdot \mathbf v$<|endoftext|> -TITLE: Parametric form of a plane -QUESTION [19 upvotes]: Can you please explain to me how to get from a nonparametric equation of a plane like this: -$$ x_1-2x_2+3x_3=6$$ -to a parametric one. In this case the result is supposed to be -$$ x_1 = 6-6t-6s$$ -$$ x_2 = -3t$$ -$$ x_3 = 2s$$ -Many thanks. - -REPLY [15 votes]: Welcome to math.stackexchange! -A plane can be defined by three things: a point, and two non-collinear vectors in the plane (think of them as giving the plane a grid or coordinate system, so you can move from your first point to any other using them). -So first, we need an initial point: since there are many points in the plane, we can pick randomly. I'll just take $x_1=6,x_2=0$ so that $x_3=0$ and we see that the point $(6,0,0)$ solves the equation. -Now I need two vectors in the plane. I can do this by finding two other points in the plane, and subtracting them from this one (the difference of two vectors points from one to the other, so if both points are in the plane their difference will point along it). I'll take the points $(0,-3,0)$, and $(0,0,2)$. Notice the simple construction of all my points: set two variables to zero and find out what the third one should be. You can almost always do this, and it's probably the easiest way to go. -So my vectors are going to be these two points minus the original one I found. $$(0,-3,0)-(6,0,0)=(-6,-3,0)$$ -$$(0,0,2)-(6,0,0)=(-6,0,2)$$ -Now any vector in the plane, when scaled, is still in the plane. So I can define my plane like this: -$$(6,0,0)+(-6,-3,0)t+(-6,0,2)s$$ -I.e.
- start at the first point, and move $t$ amount in one direction and $s$ amount in another, where $t$ and $s$ range over the real numbers, so they cover the whole plane. Note that each of the scaled vectors, when plugged into the equation, gives $0$. So for any point here, we're doing $6+0+0=6$, which solves the original equation. Splitting this up in terms of components $(x_1,x_2,x_3)$ instead of points, we get -$$x_1=6-6t-6s$$ -$$x_2=-3t$$ -$$x_3=2s$$ -There are infinitely many other parameterizations that could have worked, so your answer could look completely different while still being completely correct. But this is probably the logic they used, in case you were wondering.<|endoftext|> -TITLE: System of two Equations: $\sqrt x+y=11$ and $\sqrt y+x=7$ -QUESTION [7 upvotes]: A friend of mine gave me a system of two equations and asked me to solve them $\rightarrow$ -$$\sqrt{x}+y=11~~ ...1$$ -$$\sqrt{y}+x=7~~ ...2$$ -I tried to solve them manually and got this horrendously complicated fourth degree equation $\rightarrow$ -$$\begin{align*} -y &= (7-x)^2 ~...\mbox{(from 2)} \\ -y &= 49 - 14 x + x^2 \\ -\implies 11&= \sqrt{x}+ 49 - 14 x + x^2 ...(\mbox{from 1})\\ -\implies~~ 0&=x^4-28x^3+272x^2-1065x+1444 -\end{align*}$$ -Solving this wasn't exactly a piece of cake, but I could tell that one of the solutions would be $x=4$, $y=9$. -But my friend kept asking for a formal solution. -I tried plotting the equations and here's what I got $\rightarrow$ - -So the equations had two pairs of solutions (real ones). -Maybe, just maybe, these could be solved using approximations. -So how do I solve them using a formal method (calculus, algebra, real analysis...)? -P.S. I'm in high school. - -REPLY [2 votes]: Once you guessed the solutions, you can easily prove that there are no others. Rewrite the equations as $y=11-\sqrt x=F(x)$ and $x=7-\sqrt y=G(y)$. Note that both $x,y\le 11$, so their square roots are at most $4$, which means that $x,y\ge 3$. Now just observe that $z\mapsto \sqrt z$ is a contraction on $[3,\infty)$ (the difference of values is less than the difference of arguments). Thus, $F$ and $G$ are also contractions, whence if we had two different solutions $(x_1,y_1)$ and $(x_2,y_2)$, we would get -$$ -|x_1-x_2|=|G(y_1)-G(y_2)|<|y_1-y_2|=|F(x_1)-F(x_2)|<|x_1-x_2| -$$ -which is absurd.<|endoftext|> -TITLE: Solution in integers to $2^n+n=3^m$ -QUESTION [15 upvotes]: How to find all nonnegative integers $m,n$ such that $2^n+n=3^m$ ? -We have by inspection $(m,n)=(0,0)$ and $(1,1)$ -And there are no more for m and n both less than $100$. - -REPLY [12 votes]: In this paper Lemma 2.2 gives that if $p,q \in \mathbb{Z}^+$ then -$$ -\left|\log_3 2-\frac{p}{q}\right|\ge \frac{1}{1200 q^{14.3}} -$$ -limiting how closely $\log_3 2$ can be approximated by rationals. -It then follows that if $m\log 3 = n\log 2 + \epsilon$ with $\epsilon>0$ then $\epsilon\ge \frac{\log 3}{1200 n^{14}}$ and -$$ -\begin{align} -3^m & > 2^n (1+\epsilon) \\ -& \ge 2^n + \frac{2^n \log 3}{1200 n^{14}} -\end{align} -$$ -The second term is greater than $n$ for $n>112$, and a computer check also excludes $2\le n\le 112$, so there are no other solutions.<|endoftext|> -TITLE: Showing a homomorphism of a field algebraic over $\mathbb{Q}$ to itself is an isomorphism. -QUESTION [10 upvotes]: Suppose $F$ is algebraic over $\mathbb{Q}$ and $\varphi : F\to F$ is a homomorphism. Prove $\varphi$ is an isomorphism. - -Showing injectivity follows from the fact that the only ideals in a field are $(0)$ and $F$. But how do you show surjectivity?
-REPLY [20 votes]: Let $\alpha$ be an element of $F$. -Let $f(X)$ be the minimal polynomial of $\alpha$. -Let $S$ be the set of all the roots of $f(X)$ in $F$. -$\varphi$ induces an injective map $S\to S$. -Since $S$ is a finite set, this map is surjective. -Hence $\varphi$ is surjective. - -REPLY [5 votes]: The possible images of $\alpha \in F$ under $\varphi$ are the conjugates of $\alpha$ in $F$. This is a finite set $A$ because $\alpha$ is algebraic. Since $\varphi$ is injective and takes $A$ into $A$, it must be surjective on $A$. In particular, $\alpha$ is in the image of $\varphi$. Thus, $\varphi$ is surjective on $F$.<|endoftext|> -TITLE: Proof that a set has the Lindelöf property in a metric space -QUESTION [5 upvotes]: I am having some problems with the proof of the following Theorem: -"Let $E$ be a set in a metric space $\mathscr{X}$. Then $E$ has the Lindelöf property provided there exists a countable set $D$ which is dense in $E$ ". -A set $E\subseteq\mathscr{X}$ has the Lindelöf property if every open cover has a countable subcollection that covers $E$. The proof I am using can be found on page 80 of the book "Topological Ideas" by K.G. Binmore. I have a real problem only with one part of the proof, but I show the whole proof as given in the text for completeness (I have made some notational changes). For a proof for $\mathscr{X}=\mathbb{R}^{n}$ see How to prove that if $D$ is countable, then $f(D)$ is either finite or countable?. -Let $\mathscr{U}$ be any collection of open sets that covers $E$. We need to show that a countable subcollection of $\mathscr{U}$ covers $E$. Let $\mathscr{B}$ be the class of open balls $B_{q}(d)$ with centers $d\in D$, and rational radii $q$ such that $B_{q}(d)\subset U$ for at least one $U\in\mathscr{U}$ (I will use $B\in\mathscr{B}$ for a general element). Then $\mathscr{B}$ is countable (I have omitted some stuff which shows this). The following statement is then proven (this is the bit I am having a problem with): -$E\subset\bigcup_{U\in\mathscr{U}}U\subset\bigcup_{B\in\mathscr{B}}B\hspace{200pt}(1)$. -The above is proven in the following way: Let $u\in U$ for any $U\in\mathscr{U}$. Since $U$ is open, there is an open ball $B_{\epsilon}(u)$ such that $B_{\epsilon}(u)\subset U$. Since $D$ is dense in $E$ we have $d(e,D)=0$ for each $e\in E$, and so we can always find a point $d\in D$ such that $d(u,d)<\frac{1}{3}\epsilon$ (my problem is here: I think this only holds if $u\in E$, correct?). We can also choose a rational number $q$ such that $\frac{1}{3}\epsilon<q<\frac{2}{3}\epsilon$. [...] -REPLY: [...] there is $\epsilon>0$ such that $u\in B_\epsilon(u)\subseteq U$. Since $D$ is dense in $E$, there is a point $d\in D\cap B_{\epsilon/3}(u)$. We can also choose a rational number $q$ such that $\frac{\epsilon}3<q<\frac{2\epsilon}3$. [...] $\epsilon>0$ such that $B_\epsilon(p)\cap D=\varnothing$. -From your final question I suspect that you may have misunderstood the nature of the function $f$, which is one reason I've used a different notation above. The domain of $f$ is my $\mathscr{B}_0$, not some set of points of $\mathscr{X}$: the elements of the domain are the sets $B_q(d)$, not the points in those sets. For each $B\in\mathscr{B}$ that's a subset of some member of $\mathscr{U}$, $f$ picks a member of $\mathscr{U}$ containing $B$. I tried to make that a little more clear by using $U$ as the name of the function instead of $f$ and writing the argument as a subscript: his $f(B)$ is my $U_B$, which I think is more obviously a single member of $\mathscr{U}$.<|endoftext|> -TITLE: Why is the topological pressure called pressure?
-QUESTION [38 upvotes]: Let us consider a compact topological space $X$, and a continuous function $f$ acting on $X$. One of the most important quantities related to such a topological dynamical system is the entropy. -For any probability measure $\mu$ on $X$, one can define the measure-theoretic (or Kolmogorov-Sinai) entropy. Without reference to any measure, one can define the topological entropy, which has the good property of being invariant under homeomorphism. These two notions are related via a variational principle: -$$h_\mathrm{top} (f) = \sup_{\{\mu\ \mathrm{inv.}\}} h_\mu (f),$$ -and are also related to the physical notion of entropy of a system (well, the KS entropy is, at least. The case for the topological entropy is less clear for me, although things behave nicely in the cases I know and which have a physical interest). -Given a continuous potential $\varphi:X \to \mathbb{R}$, one can define the topological pressure $P(\varphi, f)$ by mimicking the definition of the topological entropy (other definitions include the following equation, and some extensions for complex potentials). Then one can get another variational principle: -$$P (\varphi, f) = \sup_{\{\mu \ \mathrm{inv.}\}} \left\{ \int_X \varphi \ d \mu + h_\mu (f) \right\}.$$ -The RHS in the variational principle above is the supremum of $\int_X \varphi \ d \mu + h_\mu (f)$, which is, up to a change of sign (1), what is called in physics the free energy of the system. And we try to maximize it, as in physics (modulo the change of sign). -So it would seem logical if, as we have measure-theoretic and topological entropy, we would have measure-theoretic and topological free energy. And I can't find why one would like to call "pressure" what is the maximum of the free energy. I looked at some old works by David Ruelle, but couldn't find how this term was coined, and soon ran into the "not on the Internet nor in the library" wall. It may have something to do with lattice gases, but I emphasize the "may". -So my question is: why is this thing called pressure? - -The first clue is that the entropy has a positive, and not negative, sign. The second is that we try to maximize the quantity, while in physics one tries to minimize it. Other clues include the fact that, in non-compact cases, a good condition is to have $\lim_\infty \varphi = - \infty$, again in opposition with physics. - -Edit: I have asked three people who are familiar with the subject, but none gave me a good answer (actually, I got somewhat conflicting answers). I am starting a bounty to draw some attention, but this might be better suited to MathOverflow... - -REPLY [3 votes]: I think you've essentially found the answer when you say that $P(\varphi, f)$ is called the free energy in physics. That is, the pressure should not be called the pressure, it should be called free energy. -This view is expressed in the paper Regularity Properties and Pathologies of Position-Space Renormalization-Group Transformations by van Enter, Fernandez, and Sokal. They define the "pressure" in Definition 2.55 and go on to prove a version of the variational principle for it in Theorem 2.63. But right below this definition, they state that "this quantity should really be called 'minus the free energy density'". -Now if you want a physical explanation of why the pressure or free energy is called what it's called, I suggest you take a look at the first chapter of Statistical Mechanics of Lattice Systems: -a Concrete Mathematical Introduction by Friedli and Velenik.
-The authors define temperature, pressure, and free energy from a thermodynamic viewpoint in Sections 1.1.1-1.1.5. These quantities are related to the log of the (canonical or grand canonical) partition function in Section 1.3.1.<|endoftext|> -TITLE: Nice riddle - is there an elegant solution -QUESTION [7 upvotes]: Possible Duplicate: -Taking Seats on a Plane - -There are 100 seats on a plane and 100 passengers, each with his ticket. However, the first person to enter the plane discovers he has lost his ticket, so he picks a seat at random. Afterwards, every new passenger sits in his place if it is free, and otherwise picks a vacant seat at random. -You are the last to enter the plane. What is the probability you'll sit in your seat? -I managed to solve this using induction (i.e. marking by $A(n)$ the probability where $n$ is the number of passengers and then finding a recursive formula for $A(n)$ which is quite simple). However, I want to know if there are more "instantly obvious" or one-liner solutions. - -REPLY [11 votes]: Let the first passenger to board have a ticket for seat $p$, and suppose your ticket is for seat $q$. Either (i) $p$ is filled before $q$ is or (ii) $q$ is filled before $p$ is. In case (i), you will get seat $q$, and in case (ii) you won't. -These two cases are equally likely. For it is equally likely that the first passenger will choose $p$ or $q$. And if she chooses neither, then by symmetry $p$ and $q$ remain equally likely to be filled first, since they are the correct seat for none of the remaining passengers. So the required probability is $\frac{1}{2}$.<|endoftext|> -TITLE: Does weak convergence in Sobolev spaces imply pointwise convergence? -QUESTION [8 upvotes]: I encountered a problem when reading Struwe's book Variational Methods (4th ed). On page 38, it is assumed that $u_m$ is a minimizing sequence for a functional $E$ on $L^p(\mathbb{R}^n)$, i.e. $E(u_m)\to I$, -and then it is assumed in addition that - -$u_m\rightharpoonup u$ weakly in $H^{1,2}(\mathbb{R}^n)$ and pointwise almost everywhere. - -My question is: -why is the pointwise convergence assumption reasonable? Since $\mathbb R^n$ is not compact, the embedding theorem is not obviously valid. -Thanks in advance. - -REPLY [6 votes]: For sufficiently small $p$ (more precisely: $p<2n/(n-2)$ for $n\ge 3 $ or $p$ arbitrary otherwise) the space $H^{1,2}(\Omega) $ is compactly embedded in $L^p$ for $\Omega \subset\subset \mathbb{R}^n$ with sufficiently regular boundary (take balls of increasing radius tending to infinity). This implies strong $L^p$ convergence of a subsequence, hence pointwise a.e. on each such $\Omega$, hence a.e. -(If $u_k$ converges pointwise a.e. on each open set with compact closure it obviously converges pointwise almost everywhere. You may need to pass to further subsequences countably often to make this work, but who cares?).<|endoftext|> -TITLE: Finding solutions to equation of the form $1+x+x^{2} + \cdots + x^{m} = y^{n}$ -QUESTION [20 upvotes]: Exercise $12$ in Section $1.6$ of Nathanson's book Methods in Number Theory has the following question. - - -When is the sum of a geometric progression equal to a power? Equivalently, what are the solutions of the exponential diophantine equation $$1+x+x^{2}+ \cdots +x^{m} = y^{n} \qquad \cdots \ (1)$$ in integers $x,m,n,y$ greater than $1$? Check that - \begin{align*} -1 + 3 + 3^{2} + 3^{3} + 3^{4} & = 11^{2}, \\\ 1 + 7 + 7^{2} + 7^{3} &= 20^{2}, \\\ 1 + 18 +18^{2} &= 7^{3}. -\end{align*} - These are the only known solutions of $(1)$.
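-(These three identities are easy to confirm with exact integer arithmetic; a throwaway Python check, where geom_sum is just an illustrative helper:)
-
-    # Verify the three known nontrivial solutions of 1 + x + ... + x^m = y^n.
-    def geom_sum(x, m):
-        return sum(x**i for i in range(m + 1))
-
-    assert geom_sum(3, 4) == 11**2   # 1 + 3 + 9 + 27 + 81 = 121
-    assert geom_sum(7, 3) == 20**2   # 1 + 7 + 49 + 343 = 400
-    assert geom_sum(18, 2) == 7**3   # 1 + 18 + 324 = 343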
-The Wikipedia link doesn't reveal much about the above question. My question here would be to ask the following: - -Are there any other known solutions to the above equation? Can we conjecture that this equation can have only finitely many solutions? - -Added: Alright. I posted this question on MathOverflow some time after I posed it here. A user by the name of Gjergji Zaimi actually gave me a link which tells more about this particular question. Here is the link: - -https://mathoverflow.net/questions/58697/ - -REPLY [3 votes]: I liked your question very much. The cardinality of the solutions to the above equation purely depends upon the values of $m,n$. -Let me break your problem into some cases. There are three cases possible. - -When $m = 1$ and $n = 1$, you know that there are infinitely many solutions. -When $m=2$ and $n=1$ you know that a conic may have infinitely many rational points or finitely many rational points. In a broader sense, these are genus-$1$ curves, where the elliptic curves are also included (when $m=2,n=3$; or hyperelliptic curves when $m=2, n\ge 4$). In this case the number of points on the curve is studied using the conjecture of Birch and Swinnerton-Dyer. It gives you a measure of the cardinality, whether infinite or finite, by considering the $L$-functions associated to the curves. -When $m \ge 2, n \ge 4$ it may represent a curve of higher genus. So by the standard theorem of Faltings, it has finitely many points given that the curve has genus $g \ge 2$. - -Thank you. I will update this answer if I find something more interesting.<|endoftext|> -TITLE: Given the error in the cg-method, calculate a lower bound for the condition number -QUESTION [6 upvotes]: Edit: If any information is missing, please tell me and I'll edit the question. Thanks again! - -The conjugate gradient (cg) method was applied to a positive definite matrix $A$. It is only known that $||e^0||_A=1$ and $||e^{10}||_A=2^{-9}$ (where $e$ is the error $||e^k||_A= ||x-x^k||_A$). Calculate with this information a lower bound for $κ(A)$ (where $κ$ is the condition number) and compare it with the equation - $$k \geq \frac{1}{2}(\sqrt{κ(A)}\ln(2/ε))$$ - where ε is the factor by which the error is reduced, defined as - $$||e^k||_A= ||x-x^k||_A \leq ε||e^0||_A$$ - -Here's what I have so far. If I have understood it correctly $ε=2^{-9}/1=2^{-9}$. I however don't understand how I can calculate the condition number from that alone. Doesn't the condition number require knowing the matrix and its inverse? $κ=||A||||A^{-1}||$ -I have calculated what $κ$ should be using the equation $k \geq \frac{1}{2}(\sqrt{κ(A)}\ln(2/ε))$ and I got -$$10 \geq \frac{6.93}{2}(\sqrt{κ(A)})$$ -$$2.89 \geq \sqrt{κ(A)}$$ -$$8.33 \geq κ(A)$$ -How can I move forward? Thanks in advance! - -REPLY [2 votes]: My attempt ... I'm not a math guy so feel free to throw it in the trash! -I think you should use the relationship between the convergence rate of CG and the condition number of the $A$-matrix. Check these lecture notes for the full details; I refer specifically to formula (52) on page 36: -$$ -\lvert\lvert e_{(i)}\rvert\rvert_A\leq % -2\left(\frac{\sqrt{\kappa}-1}{\sqrt{\kappa}+1}\right)^i% -\lvert\lvert e_{(0)}\rvert\rvert_A -$$ -where $e_{(0)}=x_{(0)}-x$ is the initial error.
In your case $\lvert\lvert e_{(0)}\rvert\rvert_A=1$ and $i=10$: -$$ -\frac{\sqrt{\kappa}-1}{\sqrt{\kappa}+1}\geq% -\left(\frac{\lvert\lvert e_{(10)}\rvert\rvert_A}{2}\right)^{1/10}=\frac{1}{2} -$$ -the latter equation can be solved easily noting that $f(\kappa)=(\sqrt{\kappa}-1)/(\sqrt{\kappa}+1)$ is a monotonically increasing function in $[1,\infty)$, therefore if $f(\bar{\kappa})= 1/2$ then $f(\kappa)\geq 1/2$ for all $\kappa \geq \bar{\kappa}$, implying the lower bound $\kappa\geq\bar{\kappa}$. The value of $\bar{\kappa}$ is obtained by solving -$$ -\frac{\sqrt{\kappa}-1}{\sqrt{\kappa}+1}=\frac{1}{2}\Rightarrow\bar{\kappa}=3^2=9 -$$ -Therefore the lower bound should be $\kappa\geq 9$. Now, in the lecture notes I have used, the direction of your inequality is reversed, so you should be computing a lower bound also... but I'm not sure about that! Cheers!<|endoftext|> -TITLE: Kernel of the Tensor Product of a Linear Map with Itself -QUESTION [6 upvotes]: For two vector spaces, $V$ and $W$, and a map $f: V \to W$, it is clear that: -$$ -\ker(f) \otimes V + V \otimes \ker(f) \subseteq \ker(f \otimes f). -$$ -Does the opposite inclusion hold? If so, I'd like a proof, and if not, a counterexample. -Basically, given an element $\sum_i a_i \otimes b_i \in V \otimes V$, for which it holds that -$$ -\sum_i f(a_i) \otimes f(b_i) = 0 -$$ -can we show that $\sum_i a_i \otimes b_i \in \ker(f) \otimes V + V \otimes -\ker(f)$? - -REPLY [4 votes]: Take a basis $\{x_i\}_{i \in I_0}$ for the kernel of $f$, and extend it to a basis $\{x_i\}_{i \in I}$ for $V$, where $I_0 \subset I$. Thus $\{x_i \otimes x_j : i,j \in I\}$ is a basis for $V \otimes V$. Write out a general element of $\ker(f\otimes f)$ in terms of this basis, and note that the set $\{f(x_i) \otimes f(x_j) : i,j\in I\setminus I_0\}$ is linearly independent.<|endoftext|> -TITLE: Limit of ratios of numbers with $m$ factors and primes -QUESTION [8 upvotes]: This is my first question. -Let $a_1, a_2,\ldots, a_k$ be natural numbers $\leq n$ with $m$ prime factors. -Let $p_1, p_2, \ldots, p_r$ be the prime numbers $\leq n$. -Let $$C_{m,n} = \frac{\sum_{i=1}^{k}a_i}{\sum_{i=1}^{r}p_i}$$ -Does the following limit exist when $m\geq 2$? Obviously, when $m=1$, $C_{1,n}=1$. -$$\lim_{n\to \infty} C_{m,n}$$ -I have some guesses, but none defensible enough. -My best, -The FM. - -REPLY [4 votes]: For $m\gt1$, there are more numbers with $m$ prime factors than there are primes, in the following sense: -Hardy and Wright, Theorem 437, prove that the number of integers up to $x$ with exactly $m$ prime factors (and it doesn't matter whether the primes are distinct or not) is asymptotic to $${x(\log\log x)^{m-1}\over(m-1)!\log x}$$ They attribute the result to Landau, 1900. Now the number of primes is asymptotic to $x/(\log x)$, so the ratio is $(\log\log x)^{m-1}/(m-1)!$, which of course goes to infinity with $x$. -Meanwhile, the sum of the primes up to $x$ is asymptotically $x^2/(2\log x)$ although I haven't found a wholly trustworthy cite for this. I'm sure (although I'm not up to proving it) that the sum of the almost-primes will be bigger by that same factor of $(\log\log x)^{m-1}/(m-1)!$<|endoftext|> -TITLE: What and where in the notebooks of Ramanujan is this series?
-QUESTION [14 upvotes]: The Wikipedia page on Ramanujan contains the following series: -$$ -1 - 5\left(\frac{1}{2}\right)^3 + 9\left(\frac{1\times3}{2\times4}\right)^3 - 13\left(\frac{1\times3\times5}{2\times4\times6}\right)^3 + \cdots = \frac{2}{\pi} -$$ -$$\sum_{n=0}^\infty(-1)^n(4n+1) \left[ \frac{(2n-1)!!}{(2n)!!}\right]^3=\frac{2}{\pi}$$ -Unfortunately, there is no reference as to where I can find this series in his notebooks (or in the 5-volume Springer set, for that matter), which is basically what I am after. -Also, though I do now study math, I am still a freshman, so I don't know enough about infinite series to be able to classify this series. Does it belong to a type of series? Is there a general class of series to which this series belongs? -Generally, I am very keen to know more about this series. - -REPLY [3 votes]: I'm seriously late for this party but, to address one of your questions, there are infinitely many formulas -$$\sum_{n=0}^\infty\frac{An+B}{C^n} \left[ \frac{(2n-1)!!}{(2n)!!}\right]^3=\frac{1}{\pi}$$ -where $A,B,C$ are algebraic numbers. A few others are -$$\begin{aligned}\frac{2\sqrt{2}}{\pi}&=\sum_{n=0}^\infty(-1)^n\frac{6n+1}{2^{3n}} \left[ \frac{(2n-1)!!}{(2n)!!}\right]^3\\ -\frac{4}{\pi}&=\sum_{n=0}^\infty\frac{6n+1}{2^{2n}} \left[ \frac{(2n-1)!!}{(2n)!!}\right]^3\\ -\frac{16}{\pi}&=\sum_{n=0}^\infty\frac{42n+5}{2^{6n}} \left[ \frac{(2n-1)!!}{(2n)!!}\right]^3\end{aligned}$$ -and so on. They belong to Ramanujan's fourth class of pi formulas. (I discussed the Wikipedia example in my blog Ramanujan Once A Day.)<|endoftext|> -TITLE: Question(s) about uniform spaces. -QUESTION [10 upvotes]: I was reviewing questions and notes related to uniform spaces and came across this interesting statement: Every metric space is homeomorphic, as a topological space, to a complete uniform space. -It seems pretty straightforward, but I am having trouble proving it. -Also, there was a follow-up question that seemed interesting: Is it true that every metric space is homeomorphic to a complete metric space? -Can anyone help? Thank you! - -REPLY [14 votes]: As t.b. noted in the comments, every metric space is paracompact, so an even stronger result is: - -Theorem. Every paracompact Hausdorff space is completely uniformizable. - -Let $\langle X,\tau\rangle$ be a paracompact Hausdorff space. The first step of the proof is to show that the collection $\mathfrak{N}$ of all open nbhds of the diagonal of $X\times X$ is a base for a diagonal uniformity on $X$. It's clear that $\mathfrak{N}$ satisfies most of the required conditions; the only one that isn't immediately clear is that for each open nbhd $N$ of the diagonal there is an open nbhd $M$ such that $M\circ M\subseteq N$, i.e., such that $\langle x,z\rangle\in N$ whenever $\langle x,y\rangle,\langle y,z\rangle\in M$. -To see this, let $N\in\mathfrak{N}$. Let $\mathscr{U}=\{U\in\tau\setminus\{\varnothing\}:U\times U\subseteq N\}$; $\mathscr{U}$ is an open cover of $X$. In this answer I showed how to find an open barycentric refinement $\mathscr{V}$ of $\mathscr{U}$, i.e., a refinement with the property that for each $x\in X$ there is a $U\in\mathscr{U}$ such that $\operatorname{st}(x,\mathscr{V})\subseteq U$. Now repeat the process to get a barycentric open refinement $\mathscr{W}$ of $\mathscr{V}$. -Fix $W_0\in\mathscr{W}$ and $x_0\in W_0$ arbitrarily. If $W\in\mathscr{W}$ and $W_0\cap W\ne\varnothing$, we may pick an $x\in W_0\cap W$.
$\mathscr{W}$ is a barycentric refinement of $\mathscr{V}$, so there is some $V_W\in\mathscr{V}$ such that $W_0\cup W\subseteq\operatorname{st}(x,\mathscr{W})\subseteq V_W$. Moreover, $x_0\in W_0$, so $W\subseteq V_W\subseteq\operatorname{st}(x_0,\mathscr{V})$. And $\mathscr{V}$ is a barycentric refinement of $\mathscr{U}$, so there is a $U\in\mathscr{U}$ such that $\operatorname{st}(W_0,\mathscr{W})=\bigcup\{W\in\mathscr{W}:W_0\cap W\ne\varnothing\}\subseteq\operatorname{st}(x_0,\mathscr{V})\subseteq U$. In other words, $\mathscr{W}$ is an open star refinement of $\mathscr{U}$. -Let $M=\bigcup\{W\times W:W\in\mathscr{W}\}$; clearly $M$ is an open nbhd of the diagonal. Suppose that $\langle x,y\rangle,\langle y,z\rangle\in M$ for some $x,y,z\in X$. Then there are $W_0,W_1\in\mathscr{W}$ such that $x,y\in W_0$ and $y,z\in W_1$, and since $W_0\cap W_1\ne\varnothing$, there is a $U\in\mathscr{U}$ such that $W_0\cup W_1\subseteq U$. Then $x,z\in U$, so $\langle x,z\rangle\in U\times U\subseteq N$, as desired. -In the answer to which I linked above I showed that $\mathfrak{N}$ is complete, so it only remains to verify that it generates the topology $\tau$. As usual, for each $N\in\mathfrak{N}$ and $x\in X$ let $N[x]=\{y\in X:\langle x,y\rangle\in N\}$, and let $\mathscr{N}=\{N[x]:N\in\mathfrak{N}\text{ and }x\in X\}$; $\mathscr{N}$ is a base for the topology $\tau_\mathfrak{N}$ generated by $\mathfrak{N}$. The members of $\mathfrak{N}$ are open in $X\times X$, so $\mathscr{N}\subseteq\tau$. On the other hand, suppose that $\varnothing\ne V\in\tau$, and let $x\in V$. $X$ is $T_3$, so there is an open set $U$ such that $x\in U\subseteq\operatorname{cl}U\subseteq V$; let $W=X\setminus\operatorname{cl}U$. Then $\{U,W\}$ is an open cover of $X$, so $N=(U\times U)\cup(W\times W)\in\mathfrak{N}$. Clearly $x\in N[x]=U\subseteq V$, so $V\in\tau_\mathfrak{N}$. Thus, $\tau_\mathfrak{N}=\tau$, and the proof is complete.<|endoftext|> -TITLE: Is every proper nontrivial ideal in a Noetherian ring not flat? -QUESTION [7 upvotes]: I guess my general question is exactly what's in the title, but let me explain why I'm asking and how I came to it. -Consider the ideal $I=\langle x,y \rangle \subset k[x,y]$ for a field $k$. Just to be safe, let's assume $k$ is infinite and of zero characteristic. Then $I$ is not flat. The proof I came up with uses Lemma 6.4 in Eisenbud's "Commutative Algebra", or equivalently Lemma 36.10 in The Stacks Project: just notice that the relation $y(x) + (-x)y=0$ is nontrivial. -But then I thought: it seems this sort of logic extends to any ideal in $k[X_1,\ldots,X_n]$. If $I=\langle f_1,\ldots,f_n\rangle$ then the relation $$(f_2\cdots f_n)f_1 + (-f_1f_3\cdots f_n)f_2 + \cdots + (-f_1\cdots f_{n-1})f_n = 0 $$ is nontrivial for $n$ even and if $n$ is odd, just replace the last coefficient with $0$. But once I wrote this, I thought this could be applied to any Noetherian ring $R$. So in a Noetherian ring, every ideal is not flat. But this sounds too strong. Since Prufer Domains exist, I think it is indeed wrong (one of the characterizations of a Prufer domain $R$ is that every ideal of $R$ is flat). -So I think I'm going wrong in one of a few places. First: my "proof" that $\langle x,y\rangle \subset k[x,y]$ is not flat is wrong and indeed this relation is trivial. I don't think this is true since, using the Stacks Project's notation, the $y_j \in I$ so must be divisible by $x$ or $y$. 
-Then we would want $a_{ij} \in k \subset k[x,y]$ so we get something like $y=2y + x - x - y$, for example, since otherwise $x_i = \sum_j a_{ij} y_j$ is impossible. But if $a_{ij} \in k$ then there's no way for $\sum_ia_{ij} f_i = 0$ since $f_1=y, f_2=-x$. I realize this is sort of handwavy, and indeed this may be why I'm confused since I haven't formalized it: maybe it can't be formalized. -Second, maybe the proof is correct, but extending this and concluding that every ideal in $k[X_1,\ldots,X_n]$ is not flat doesn't hold. If this is the case, can someone give an example of such an ideal which is flat? -Finally, maybe every ideal in the polynomial ring isn't flat for the reasons above, but maybe there are rings which contain finitely generated ideals such that this logic does not extend. If so, can someone provide an example? - -REPLY [12 votes]: An ideal $I$ of a noetherian (commutative) ring $A$ is flat if and only if it is locally free of (local) rank $\le 1$ (i.e. $IA_P=0$ or $\simeq A_P$ for every prime ideal $P$). -The if part is clear. Suppose $I$ is flat. Then it is locally free and finitely generated. Let $P$ be a prime ideal. So $IA_P$ is an ideal free of some rank $r$. Your reasoning shows that $r\le 1$: if $e_1, e_2\in IA_P$ are not zero, then $\{ e_1, e_2\}$ is not free because $e_2.e_1+(-e_1).e_2=0$. -To make a connection with the examples in the comments, it is easy to see that all ideals of $A$ are flat if and only if $A$ is a finite product of Dedekind domains (including fields).<|endoftext|> -TITLE: An interesting sum to infinity -QUESTION [17 upvotes]: Is there any simple way of computing the following sum? -$$\sum_{k=1}^\infty \frac1{k\, k!}$$ - -REPLY [6 votes]: $\def\d{\delta} -\def\e{\epsilon} -\def\g{\gamma} -\def\pv{\mathrm{PV}} -\def\pv{\mathcal{P}} -\def\pv{\mathrm{P}}$We show another way to get the integral representation of the sum and explain its relation to the exponential integral. -Let -$$S(x) = \sum_{k=1}^\infty \frac{x^k}{k k!}.$$ -The sum we are interested in is $S(1)$, but, as is often the case, it is easier to get the sum for any $x>0$. -(There is a straightforward extension to $x<0$.) -Notice that -$$\begin{eqnarray*} -S'(x) &=& \sum_{k=1}^\infty \frac{x^{k-1}}{k!} \\ -&=& \frac{1}{x}\left( \sum_{k=0}^\infty \frac{x^{k}}{k!} - 1\right) \\ -&=& \frac{e^x-1}{x}. -\end{eqnarray*}$$ -Therefore, -$\displaystyle S(x) = \int_a^x dt\, \frac{e^t-1}{t}.$ -To find $a$ just notice that $S(0) = 0$, so $a=0$, -$$S(x) = \int_0^x dt\, \frac{e^t-1}{t}.$$ -The argument of the integral is perfectly well-behaved at $t=0$, so -$$\begin{eqnarray*} -S(x) &=& \lim_{\e\to 0} \int_\e^x dt\, \frac{e^t-1}{t} \\ -&=& \lim_{\e\to 0} \left( - \int_\e^x dt\, \frac{e^t}{t} - \int_\e^x dt\,\frac{1}{t} - \right) \\ -&=& \lim_{\e\to 0} \left( - \pv \int_{-\infty}^x dt\,\frac{e^t}{t} - \pv \int_{-\infty}^\e dt\,\frac{e^t}{t} - -\log x + \log \e - \right) \\ -&=& \lim_{\e\to 0} \left( - \mathrm{Ei}(x) - \mathrm{Ei}(\e) - \log x + \log \e - \right) \\ -&=& \lim_{\e\to 0} \left( - \mathrm{Ei}(x) - (\g + \log \e) - \log x + \log \e - \right) \\ -&=& \mathrm{Ei}(x) - \g - \log x. -\end{eqnarray*}$$ -(See below for a derivation of $\mathrm{Ei}(\e) = \g + \log \e + O(\e)$.)
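-(A quick numerical check of the closed form just obtained; an illustrative Python sketch using SciPy's expi, which evaluates $\mathrm{Ei}$, and NumPy's euler_gamma for $\g$:)
-
-    import numpy as np
-    from math import factorial, log
-    from scipy.special import expi  # expi(x) = Ei(x) for real x
-
-    def S(x, terms=60):
-        # Partial sum of sum_{k>=1} x^k / (k * k!)
-        return sum(x**k / (k * factorial(k)) for k in range(1, terms))
-
-    for x in (0.5, 1.0, 2.0):
-        assert abs(S(x) - (expi(x) - np.euler_gamma - log(x))) < 1e-10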
-Therefore, -$$\sum_{k=1}^\infty \frac{x^k}{k k!} = \mathrm{Ei}(x) - \g - \log x$$ -and so -$$\sum_{k=1}^\infty \frac{1}{k k!} = \mathrm{Ei}(1) - \g.$$ -Some details -Above we use the definition of the exponential integral -$$\mathrm{Ei}(x) = \pv \int_{-\infty}^x dt\,\frac{e^t}{t},$$ -where $\pv\int$ stands for the Cauchy principal value, and the series expansion for $\mathrm{Ei}(x)$ for small $x$, which we derive now. -Split the integral, -$$\begin{eqnarray*} -\mathrm{Ei}(x) &=& \lim_{\d\to0}\left[ -\underbrace{\int_{-\infty}^{-\d} dt\,\frac{e^t}{t}}_{I_1} -+ \underbrace{\int_{\d}^{x} dt\,\frac{e^t}{t}}_{I_2} -\right]. -\end{eqnarray*}$$ -For $I_1$, let $t=-s$ and integrate by parts, -$$I_1 = \log\d - \int_\d^\infty ds\, e^{-s}\log s.$$ -For $I_2$, Taylor expand $e^t$ and integrate, -$$I_2 = \log x - \log\d + O(x).$$ -Thus, -$$\begin{eqnarray*} -\mathrm{Ei}(x) &=& \lim_{\d\to0}\left[ -\left(\log\d - \int_\d^\infty ds\, e^{-s}\log s\right) -+\left(\log x - \log\d + O(x)\right) -\right] \\ -&=& \g + \log x + O(x), -\end{eqnarray*}$$ -where we recognize the integral representation of the Euler-Mascheroni constant, $\g = -\int_0^\infty ds\,e^{-s}\log s$. -Notice that if we kept the higher order terms in the expansion for $I_2$ we would find -$$\mathrm{Ei}(x) = \g + \log x + \sum_{k=1}^\infty \frac{x^k}{k k!},$$ -the correct expansion for the exponential integral for $x>0$. -In fact, this immediately gives our sum, -$$\sum_{k=1}^\infty \frac{1}{k k!} = \mathrm{Ei}(1) - \g.$$ -This is the approach of @Ayman Hourieh.<|endoftext|> -TITLE: Continuous and additive implies linear -QUESTION [15 upvotes]: The following problem is from Golan's linear algebra book. I have posted a solution in the comments. -Problem: Let $f(x):\mathbb{R}\rightarrow \mathbb{R}$ be a continuous function satisfying $f(x+y)=f(x)+f(y)$ for all $x,y\in \mathbb{R}$. Show $f$ is a linear transformation. - -REPLY [5 votes]: Perhaps a clearer answer is ... -Let $f:\mathbb{R}\rightarrow\mathbb{R}$ be an additive continuous function. -(a) Note that $f$ is linear on $\mathbb{Q}$: -(i) if $q\in \mathbb{Z}$ is positive, $f(q)=f\left(\sum_{i=1}^{q} 1\right)$; by the additivity of $f$, -$$ -f(q)=\sum_{i=1}^{q} f(1)=q f(1)=q k -$$ -for some $k\in\mathbb{R}$ (for negative $q$ use $f(-q)=-f(q)$, which also follows from additivity). So $f$ is linear on $\mathbb{Z}$. -(ii) if $q\in \mathbb{Q}\backslash\mathbb{Z}$, $q=\frac{a}{b}$ where $b\neq0$ and $a,b\in \mathbb{Z}$. Note that $f(1)=f\left(\sum_{i=1}^b 1/b\right)$; by the additivity of $f$, -$$ -f(1)=\sum_{i=1}^b f(1/b)=bf(1/b)\Rightarrow \frac{1}{b}f(1)=f(1/b)\Rightarrow f(1/b)=k/b -$$ -with $k=f(1)$ as before. So $f$ is linear on $\mathbb{Q}$. -(b) Let $x\in \mathbb{R}\backslash\mathbb{Q}$ and $\varepsilon>0$. -By the continuity of $f$, there is $\delta>0$ such that $|x-y|<\delta \Rightarrow |f(x)-f(y)|<\varepsilon$. -By the density of $\mathbb{Q}$ in $\mathbb{R}$, there is $j\in\mathbb{Q}$ such that $|x-j|<\min(\delta,\varepsilon)$. -$$ -|f(x)-xf(1)|\leq |f(x) - f(j)| +|f(j) -xf(1)| \leq \varepsilon+|jf(1) -xf(1)| <\varepsilon(1+|f(1)|) -$$ -as $\varepsilon>0$ is arbitrary, we have $f(x)=kx$ for all real numbers $x$; that is, $f$ is linear.<|endoftext|> -TITLE: Evaluate or Simplify $\int_{a}^{+\infty} \frac{\exp(-bx)}{x+c} Ei(x) dx$ -QUESTION [5 upvotes]: I am stuck trying to evaluate or simplify this integral: - -$$I_{a,b,c} = \int_{a}^{+\infty} \frac{\exp(-bx)}{x+c} Ei(x) dx $$ -with $a,b,c \in \mathbb{R}_+^*$.
-and $ Ei(x) =\int_{-\infty}^{x} \frac{\exp(t)}{t} \,\mathrm{d}t $: the exponential integral function. -I already found this result: -$$\int\exp(-bx) \ Ei(x)\, dx = \frac{1}{b} \left[Ei((1-b)x)-\exp(-bx)\ Ei(x) \right] $$ -but I can't see if it is useful. -Any hint? - -REPLY [3 votes]: By using the expression $\text{Ei}(x)=\gamma+\ln x+\int_0^x\dfrac{e^t-1}{t}dt$ mentioned in http://people.math.sfu.ca/~cbm/aands/page_230.htm, -$\int_a^\infty\dfrac{e^{-bx}\text{Ei}(x)}{x+c}dx$ -$=\int_a^\infty\dfrac{e^{-bx}}{x+c}\left(\gamma+\ln x+\int_0^x\dfrac{e^t-1}{t}dt\right)~dx$ -$=\gamma\int_a^\infty\dfrac{e^{-bx}}{x+c}dx+\int_a^\infty\dfrac{e^{-bx}\ln x}{x+c}dx+\int_a^\infty\dfrac{e^{-bx}}{x+c}\int_0^x\dfrac{e^t-1}{t}dt~dx$ -$=\gamma\int_a^\infty\dfrac{e^{-bx}}{x+c}dx+\int_a^\infty\dfrac{e^{-bx}}{x+c}\int_1^x\dfrac{1}{t}dt~dx+\int_a^\infty\dfrac{e^{-bx}}{x+c}\int_0^x\dfrac{e^t-1}{t}dt~dx$ -$=\gamma\int_a^\infty\dfrac{e^{-bx}}{x+c}dx+\int_a^\infty\int_1^x\dfrac{e^{-bx}}{t(x+c)}dt~dx+\int_a^\infty\int_0^x\dfrac{(e^t-1)e^{-bx}}{t(x+c)}dt~dx$ -$=\gamma\int_a^\infty\dfrac{e^{-bx}}{x+c}dx+\int_1^a\int_a^\infty\dfrac{e^{-bx}}{t(x+c)}dx~dt+\int_a^\infty\int_t^\infty\dfrac{e^{-bx}}{t(x+c)}dx~dt+\int_0^a\int_a^\infty\dfrac{(e^t-1)e^{-bx}}{t(x+c)}dx~dt+\int_a^\infty\int_t^\infty\dfrac{(e^t-1)e^{-bx}}{t(x+c)}dx~dt$ -$=\gamma\int_a^\infty\dfrac{e^{-bx}}{x+c}dx+\int_1^a\int_a^\infty\dfrac{e^{-bx}}{t(x+c)}dx~dt+\int_0^a\int_a^\infty\dfrac{(e^t-1)e^{-bx}}{t(x+c)}dx~dt+\int_a^\infty\int_t^\infty\dfrac{e^te^{-bx}}{t(x+c)}dx~dt$ -$=\gamma\int_{a+c}^\infty\dfrac{e^{-b(x-c)}}{x}dx+\int_1^a\int_{a+c}^\infty\dfrac{e^{-b(x-c)}}{tx}dx~dt+\int_0^a\int_{a+c}^\infty\dfrac{(e^t-1)e^{-b(x-c)}}{tx}dx~dt+\int_a^\infty\int_{t+c}^\infty\dfrac{e^te^{-b(x-c)}}{tx}dx~dt$ -$=\gamma e^{bc}E_1(b(a+c))+\int_1^a\dfrac{e^{bc}E_1(b(a+c))}{t}dt+\int_0^a\dfrac{(e^t-1)e^{bc}E_1(b(a+c))}{t}dt+\int_a^\infty\dfrac{e^{bc}e^tE_1(b(t+c))}{t}dt$ -$=\gamma e^{bc}E_1(b(a+c))+e^{bc}E_1(b(a+c))\ln a+e^{bc}E_1(b(a+c))\int_0^a\dfrac{e^t-1}{t}dt+\int_a^\infty e^{bc}e^tE_1(b(t+c))~d(\ln t)$ -$=e^{bc}E_1(b(a+c))\text{Ei}(a)+\int_{\ln a}^\infty e^{bc}e^{e^t}E_1(b(e^t+c))~dt$<|endoftext|> -TITLE: Knuth's up-arrow notation - Is there practical use for the numbers involved? -QUESTION [10 upvotes]: From Wikipedia, Knuth's up-arrow notation begins at exponentiation and continues through the hyperoperations: -$a \uparrow b = a^b$ -$a \uparrow\uparrow b = {\ ^{b}a} = \underbrace{a^{a^{.^{.^{.^{a}}}}}}_b$ (the tetration of a and b; an exponentiation tower of a, b elements high) -This already produces numbers much larger than the number of Planck volumes in the observable universe with very small numbers; $3 \uparrow\uparrow 3$ is a relatively modest 7.6 trillion, but $3 \uparrow\uparrow 4 = 3^{7.6t} = 10^{3.6t}$. -Then there is pentation ($a\uparrow\uparrow\uparrow b = a\uparrow^3b$) and hexation ($a\uparrow\uparrow\uparrow\uparrow b = a\uparrow^4b$). The pentation of 3 and 3 is $\underbrace{3^{3^{.^{.^{.^{3}}}}}}_{\ ^{3}3}$, an exponentiation tower of 3s 7.6 trillion elements in height. Hexation iterates pentation: $3\uparrow^4 3 = 3\uparrow^3(3\uparrow^3 3)$, the pentation of 3 with the pentation of 3 and 3. And that is just $g_1$, the first layer of calculation necessary to compute Graham's number, $g_{64}$, where $g_n = 3\uparrow^{g_{n-1}}3$. -I'm having considerable, and I hope understandable, difficulty simply wrapping my head around a number of this magnitude.
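-(The first steps are still small enough to pin down by direct computation, which makes the scale a little more concrete; a tiny illustrative Python sketch:)
-
-    from math import log10
-
-    t3 = 3 ** (3 ** 3)   # 3↑↑3 = 3^27
-    print(t3)            # 7625597484987, about 7.6 trillion
-    # 3↑↑4 = 3^(3↑↑3) is far too large to evaluate, but its number of
-    # decimal digits is (3↑↑3) * log10(3), about 3.6 * 10^12.
-    print(t3 * log10(3))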
So, the question is, is there value in understanding the scope of numbers produced by Knuth's up-arrow notation, or is this simply a way for mathematicians to make each other's heads explode?
-If it's the latter, I leave you with the following:
-$A(g_{64},g_{64})$
-
-REPLY [4 votes]: Yes, see "Enormous Integers in Real Life" by Harvey Friedman.<|endoftext|>
-TITLE: Hamel basis for $\mathbb{R}$ over $\mathbb{Q}$ cannot be closed under scalar multiplication by $a \ne 0,1$
-QUESTION [19 upvotes]: The following is the problem 206 from Golan's book Linear Algebra a Beginning Graduate Student Ought to Know. I've been unable to make any progress.
-Definition: A Hamel basis is a (necessarily infinite dimensional) basis for $\mathbb{R}$ as a vector space over $\mathbb{Q}$.
-Problem: Let $B$ be a Hamel basis for $\mathbb{R}$ as a vector space over $\mathbb{Q}$ and fix some element $a\in\mathbb{R}$ with $a\neq 0,1$. Show there exists some $y\in B$ with $ay\notin B$.
-
-REPLY [22 votes]: Here is the proof of the exercise:
-Let $B$ be a Hamel basis. Then any real number $r$ can be written uniquely as $\Sigma_{x \in B} {r_x}x$ where the $r_x$ are rational numbers only finitely many of which are nonzero. The function $\alpha : r \mapsto \Sigma_{x \in B} r_x$ is a linear transformation of vector spaces over the rational numbers. Now suppose that $a \ne 0,1$ and $ax \in B$ for all $x \in B$. Then $\alpha(ar)=\alpha(r)$ for all real numbers $r$. In particular, if $x \in B$ and if $r = x(a-1)^{-1}$ then $1 = \alpha(x) = \alpha([a-1]r) = \alpha(ar) - \alpha(r) = 0$. Contradiction!<|endoftext|>
-TITLE: Is $\operatorname{Homeo}([0,1])$ Weil-Complete?
-QUESTION [15 upvotes]: After learning about uniformities on topological groups, we were given several sources to read. I came across the term "Weil-complete." A topological group is Weil-complete if it is complete with respect to the left (or right) uniformity.
-We learned that the sets $\{(x,y) \in G \times G : x^{-1}y \in U \}$, where $U$ is a neighborhood of the neutral element, form a base of entourages for the left uniformity (similarly the sets $\{(x,y) \in G \times G : yx^{-1} \in U \}$ form a base of entourages for the right uniformity).
-If $G$ is the group $\operatorname{Homeo}([0,1])$ of all self-homeomorphisms of $[0,1]$, and it is equipped with the topology of uniform convergence, would $G$ be Weil-complete?
-I've been looking at this for quite some time, but haven't been able to make any progress. Any help would be greatly appreciated!
-
-REPLY [11 votes]: The answer is no. That's simply because a uniform limit of homeomorphisms need not be a homeomorphism (for notational convenience, I shall consider $[0,2]$ instead of $[0,1]$):
-Let $$\varphi_n(x) = \begin{cases} \frac xn & \text{for }x\in \left[0,1\right] \\ \left(2-\frac{1}n \right)(x-1)+\frac1 n & \text{for }x\in [1,2] \end{cases}$$
-and
-$$\varphi(x) = \begin{cases} 0 & \text{for }x\in \left[0,1\right] \\ 2(x-1) & \text{for }x\in [1,2] \end{cases}$$
-Then $\varphi_n \in \text{Homeo}([0,2])$ for all $n$ and $\|\varphi_n - \varphi\|_\infty \le \frac 1n \to 0$. So $\varphi_n$ is Cauchy, but since $\varphi\notin \text{Homeo}([0,2])$, the sequence cannot have a limit in $\text{Homeo}([0,2])$.
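-A quick numerical check (a numpy sketch of my own) that $\varphi_n\to\varphi$ uniformly:
-
-import numpy as np
-
-x = np.linspace(0, 2, 2001)
-
-def phi_n(x, n):
-    # increasing, piecewise linear, fixes 0 and 2
-    return np.where(x <= 1, x / n, (2 - 1 / n) * (x - 1) + 1 / n)
-
-phi = np.where(x <= 1, 0.0, 2 * (x - 1))
-for n in (10, 100, 1000):
-    print(n, np.abs(phi_n(x, n) - phi).max())  # bounded by 1/n
-
-The limit $\varphi$ collapses all of $[0,1]$ to a single point, so it is not injective.<|endoftext|>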
-TITLE: Euclid's Lemma and $\gcd(a,b)=1\Rightarrow \gcd(a^n,b^k)= 1$
-QUESTION [6 upvotes]: I'm solving some problems in Apostol's Intro to Analytic Number Theory.
-He asks to prove
-$$(a,b)=1 \wedge c|a \wedge d|b\Rightarrow (c,d)=1$$
-My take is
-$$\tag{1} (a,b)=1 \Rightarrow 1=ax+by$$
-$$\tag{2} c|a \Rightarrow a=kc$$
-$$\tag{3} d|b \Rightarrow b=jd$$
-$$(1),(2),(3) \Rightarrow 1=ckx+djy=cx'+dy'\Rightarrow (c,d)=1$$
-Is this correct or is something missing?
-I intend to use something similar to prove
-$$\eqalign{
- & \left( {a,b} \right) = 1 \wedge \left( {a,c} \right) = 1 \Rightarrow \left( {a,bc} \right) = 1 \cr
- & \left( {a,b} \right) = 1 \Rightarrow \left( {{a^n},{b^k}} \right) = 1;n \geqslant 1,k \geqslant 1 \cr} $$
-so I'd like to know.
-
-REPLY [2 votes]: I prefer to use factorization into primes. If $a=p_1^{n_1}\cdot\ldots\cdot p_r^{n_r}$ and $b=p_1^{m_1}\cdot\ldots\cdot p_r^{m_r}$ then $(a,b)=p_1^{k_1}\cdot\ldots\cdot p_r^{k_r}$ with $k_i=\min \{n_i,m_i\}$. For example, if $(a,b)=1$ then $k_i=0$ for every $i=1,\ldots,r$, which is equivalent to $n_i=0$ or $m_i=0$. As $a^n=p_1^{n~n_1}\cdot\ldots\cdot p_r^{n~n_r}$ and $b^k=p_1^{k~m_1}\cdot\ldots\cdot p_r^{k~m_r}$, therefore $(a^n,b^k)=p_1^{l_1}\cdot\ldots\cdot p_r^{l_r}$ with $l_i=\min \{n~n_i,k~m_i\}=0$, and $(a^n,b^k)=1$.
-In fact, Apostol's book makes exhaustive use of factorization in the initial chapters.<|endoftext|>
-TITLE: Extending a homeomorphism of a subset of a space to a $G_\delta$ set
-QUESTION [10 upvotes]: I am having trouble figuring out the following question (3.10 in Kechris, Classical Descriptive Set Theory): If $X$ is completely metrizable, and $A\subseteq X$ with $f:A\to A$ a homeomorphism, then there is a $G_\delta$ set $G\subseteq X$ containing $A$ and an extension $h:G\to G$ of $f$ which is a homeomorphism of $G$.
-Lavrentiev's Theorem gets us $G_\delta$ sets $G'$ and $H'$ containing $A$, and a homeomorphism $g:G'\to H'$ extending $f$, but it doesn't seem to me that the proof of that result can be easily adapted to make $G'=H'$. The key fact in all of this is that the set $\bar{A}\cap\{x:\mathrm{osc}_f(x)=0\}$, for any continuous $f:A\to X$, is a $G_\delta$ set in $X$ on which we can continuously extend $f$.
-
-REPLY [10 votes]: By Lavrentiev’s theorem $f$ can be extended to a homeomorphism $f_0:G_0\to H_0$, where $G_0$ and $H_0$ are $G_\delta$-sets in $X$ containing $A$. Let $G_1=G_0\cap H_0$, and let $H_1=f_0[G_1]$; clearly $G_1$ is a $G_\delta$ in $X$. Moreover, $G_1$ is a $G_\delta$ in $G_0$, and $f_0$ is a homeomorphism, so $H_1$ is a $G_\delta$ in $H_0$ and therefore in $X$. Finally, $f_1\triangleq f_0\upharpoonright G_1:G_1\to H_1$ is a homeomorphism extending $f$. Now let $H_2=G_1\cap H_1$, $G_2=f_1^{-1}[H_2]$, and $f_2=f_1\upharpoonright G_2$; $G_2$ and $H_2$ are $G_\delta$-sets containing $A$, and $f_2$ is a homeomorphism between them extending $f$.
-Continue in this fashion. If $n\in\omega$ is even, $G_{n+1}=G_n\cap H_n$ and $H_{n+1}=f_n[G_{n+1}]$, while if $n$ is odd, $H_{n+1}=G_n\cap H_n$ and $G_{n+1}=f_n^{-1}[H_{n+1}]$, and $f_{n+1}=f_n\upharpoonright G_{n+1}$ in both cases. The sets $G_n$ and $H_n$ are $G_\delta$-sets in $X$, and each $f_n$ is a homeomorphism from $G_n$ onto $H_n$ extending $f$.
Note that $$H_0\supseteq G_1\supseteq H_2\supseteq G_3\supseteq H_4\supseteq\dots\;.$$
-Now let $$G=\bigcap_{n\in\omega}G_n=\bigcap_{n\in\omega}H_n\qquad\text{ and }\qquad\bar f=\bigcap_{n\in\omega}f_n=f_0\upharpoonright G\;;$$ clearly $G$ is a $G_\delta$ containing $A$, and $\bar f$ is a homeomorphism of $G$ onto $$\bar f[G]=\bigcap_{n\in\omega}f_n[G_n]=\bigcap_{n\in\omega}H_n=G\;.$$<|endoftext|>
-TITLE: Continuous and Open maps
-QUESTION [13 upvotes]: I was reading through Munkres' Topology and in the section on Continuous Functions, these three statements came up:
-If a function is continuous, open, and bijective, it is a homeomorphism.
-If a function is continuous, open, and injective, it is an imbedding.
-If a function is continuous, open, and surjective, it is a quotient map. (This one isn't a definition, but it is a particular example.)
-So then I wondered: is there a name for functions that are just continuous and open without being 1-1 or onto? Are these special at all? Or does dropping the set-theoretic restrictions give us a class of functions that just isn't very nice.
-EDIT: This question is not asking if continuous implies open or vice versa. I know we can have one of them, both, or neither. The question is about if we suppose we have both of them, but our function isn't 1-1 or onto, what can we say about this function.
-Thanks!
-
-REPLY [4 votes]: A function that is continuous and open is an embedding of a quotient of the original space. This is a very interesting notion, just like subquotients of groups. For instance, if you restrict a covering map to a subset of your domain, you (usually) get a continuous open map that is not one-to-one or surjective. This comes up a lot in geometry, for instance near cusps, or in creating the universal cover of graphs of groups; you look at the preimage of a subspace under a covering map (do it twice for two spaces with homeomorphic subspaces) and then glue together copies of the two spaces along these subspaces... Anyways, I'm rambling, but such maps are interesting and useful and come up a lot, although without any special name that I'm aware of.<|endoftext|>
-TITLE: Definition of Willmore energy
-QUESTION [5 upvotes]: The MAA has posted to its Facebook page a link to an article about a recent proposed proof of what is called the Willmore conjecture, after Thomas Willmore.
-Wikipedia's article titled Willmore conjecture includes the following:
-
-Let $v:M\to\mathbb{R}^3$ be a smooth immersion of a compact, orientable surface (of dimension two). Giving $M$ the Riemannian metric induced by $v$, let $H:M\to\mathbb{R}$ be the mean curvature (the arithmetic mean of the principal curvatures $\kappa_1$ and $\kappa_2$ at each point). Let $K$ be the Gaussian curvature. In this notation, the Willmore energy $W(M)$ of $M$ is given by
- $$W(M) = \int_S H^2 \, dA - \int_S K \, dA.$$
- In the case of the torus, the second integral above is zero.
-
-A little bit of this came from editing by me within the past hour.
-Knowing very little of differential geometry, I hesitate to do much more with this paragraph before clarifying some things. It seems $M$ is a particular parametrization of the surface, but the integrals look like things that should not depend on which suitably well-behaved parametrization is chosen. Yet the definition seems to attribute the Willmore energy to the parametrization $M$, rather than to the surface, which might be parametrized in any of many different ways.
Notice the use of the capital letter $S$ in the expression $\displaystyle\int_S$, when nothing called $S$ was defined! Presumably $S$ means the image of $M$.
-Ought one to write $W(S)$ instead of $W(M)$, to be clear about a lack of dependence on a choice of parametrization?
-
-REPLY [3 votes]: The principal curvatures of $v(M)$ are quantities which do not depend on the parametrization, but only on the geometry of the image of the immersion with respect to the surrounding manifold.
-For this reason you are right: the notation for the Willmore energy should reflect this fact and has been poorly chosen in your example. The notation $W(M)$, or even $W(M, g)$, does not reflect this; a metric on $M$ can be defined without having an immersion into some target manifold, and in that case the principal curvatures are not even defined. The metric on $M$ alone, obtained by pulling back the metric of the target space, does not allow one to define this; only if you know the normal to the image can you define principal curvatures (they are basically the eigenvalues of the derivative of the normal or, without being too specific, of the second derivative of $v$; the mean curvature is the sum of them (trace of the Weingarten map) and the Gauss curvature their product (determinant of the Weingarten map)).
-(As an off-topic side remark: the fact that the integral over $K$ vanishes is a consequence of the Gauss–Bonnet Theorem, which relates this integral to the Euler characteristic of $v(M)$.)<|endoftext|>
-TITLE: Winding number question.
-QUESTION [6 upvotes]: In class we defined the winding number as follows: If $\gamma$ is a loop on $\mathbb{R}^2$ that does not pass through a point $p$, the winding number $W( \gamma, p)$ is the integer $n$ such that $\gamma$ represents $n$ times the canonical generator in the fundamental group $\pi_1(\mathbb{R}^2\setminus \{p\})$. Essentially, it is thought of as the number of turns $\gamma$ makes about $p$.
-I've been working on problems to better my understanding of this topic. I can't figure out this one, and was wondering if anyone could help me out?
-Let $p$ and $q$ be distinct points on the plane and $X = \mathbb{R}^2 \setminus \{p, q\}$. If $\gamma$ is a loop in $X$ such that $W(\gamma, p) = W(\gamma, q) = 0$, does it follow that $\gamma$ represents the trivial element of the fundamental group $\pi_1(X)$?
-Thank you so much!
-
-REPLY [5 votes]: Fix a basepoint $x_0\in X$. Let $\gamma_p$ be a loop at $x_0$ going around $p$ exactly once, and around $q$ zero times. Similarly, let $\gamma_q$ be a loop at $x_0$ passing around $q$ exactly once and around $p$ zero times. Then if for a path $\gamma$, $\gamma^{-1}$ means traversing $\gamma$ backwards, consider the loop $\gamma=\gamma_p\gamma_q\gamma_p^{-1}\gamma_q^{-1}$. Its equivalence class is nontrivial in $\pi_1(X)$ (which is free on the classes of $\gamma_p$ and $\gamma_q$, so this commutator is not trivial), but it has $W(\gamma,p)=W(\gamma,q)=0$.<|endoftext|>
-TITLE: Sample variance derivation
-QUESTION [6 upvotes]: I have quite a simple question but I can't for the life of me figure it out.
-For a set of iid samples $\,\,X_1, X_2, \ldots, X_n\,\,$ from a distribution with mean $\,\mu$,
-if you are given the sample variance as
-$$ S^2 = \frac{1}{n-1}\sum\limits_{i=1}^n \left(X_i - \bar{X}\right)^2, $$
-how can you write the following?
-$$ S^2 = \frac{1}{n-1}\left[\sum\limits_{i=1}^n \left(X_i - \mu\right)^2 - n\left(\mu - \bar{X}\right)^2\right] $$
-All texts that cover this just skip the details but I can't work it out myself.
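-(A quick numerical check, a small numpy sketch of my own, does confirm the identity, so it must be my algebra that is off:)
-
-import numpy as np
-
-rng = np.random.default_rng(0)
-mu = 3.0                                  # true mean of the sampling distribution
-X = rng.normal(mu, 2.0, size=10)
-n, Xbar = len(X), X.mean()
-
-s2_def = ((X - Xbar) ** 2).sum() / (n - 1)
-s2_alt = (((X - mu) ** 2).sum() - n * (mu - Xbar) ** 2) / (n - 1)
-print(s2_def, s2_alt)                     # the two values agree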
I get stuck after expanding like so
-$$ S^2 = \frac{1}{n-1}\sum\limits_{i=1}^n \left(X_i^2 -2X_i\bar{X} + \bar{X}^2\right) $$
-What am I missing?
-Edit:
-A similar, equivalent expression that is often given, which I also can't derive but which may be more obvious, is
-$$ S^2 = \frac{1}{n-1}\left[\sum\limits_{i=1}^n X_i^2 - n\bar{X}^2\right] $$
-
-REPLY [4 votes]: $$\begin{align*}
-\frac{1}{n-1}\left[\sum\limits_{i=1}^{n}\left(X_i - \mu\right)^2 -
-n\left(\mu - \bar{X}\right)^2\right]
-&= \frac{1}{n-1}\sum\limits_{i=1}^{n}\left[\left(X_i - \mu\right)^2 -
-\left(\mu - \bar{X}\right)^2\right]\\
-&= \frac{1}{n-1}\sum\limits_{i=1}^{n}\left(X_i - \bar{X}\right)
-\left(X_i + \bar{X} - 2\mu\right).\end{align*}$$
-Now,
-$$\sum_{i=1}^n (X_i - \bar{X}) = \sum_{i=1}^n X_i - \sum_{i=1}^n \bar{X}
-= n\bar{X} - n\bar{X}= 0 $$
-and so
-$$\begin{align*}
-\frac{1}{n-1}\sum\limits_{i=1}^{n}\left(X_i - \bar{X}\right)
-\left(X_i + \bar{X} - 2\mu\right)
-&= \frac{1}{n-1}\left[\sum\limits_{i=1}^{n}\left(X_i - \bar{X}\right)X_i
-+ \sum\limits_{i=1}^{n}\left(X_i - \bar{X}\right)\left(\bar{X} - 2\mu\right)\right]\\
-&= \frac{1}{n-1}\left[\sum\limits_{i=1}^{n}\left(X_i - \bar{X}\right)X_i
-+ \left(\bar{X} - 2\mu\right)\sum\limits_{i=1}^{n}\left(X_i - \bar{X}\right)
-\right]\\
-&= \frac{1}{n-1}\left[\sum\limits_{i=1}^{n}\left(X_i - \bar{X}\right)X_i\right]\\
-&= \frac{1}{n-1}\left[\sum\limits_{i=1}^{n}\left(X_i - \bar{X}\right)X_i
-- \bar{X}\sum\limits_{i=1}^{n}\left(X_i - \bar{X}\right)\right]\\
-&= \frac{1}{n-1}\sum\limits_{i=1}^{n}\left(X_i - \bar{X}\right)^2\\
-&= S^2
-\end{align*}$$<|endoftext|>
-TITLE: Every Lindelöf topological group is isomorphic to a subgroup of the product of second countable topological groups.
-QUESTION [14 upvotes]: I want to show that every Lindelöf topological group is isomorphic to a subgroup of the product of second countable topological groups. I received an answer using the fact that Lindelöf topological groups are $\omega$-narrow, but I want to show it by using the following theorem.
-Theorem: Every Hausdorff topological group $G$ is topologically isomorphic to a subgroup of the group of isometries $Is(M)$ of some metric space $M$, where $Is(M)$ is taken with the topology of pointwise convergence.
-Any help would be greatly appreciated!
-
-REPLY [2 votes]: Let $G$ be a Lindelöf group. By your theorem, you can assume that $G$ is a subgroup of $\operatorname{Iso}(M)$ for some metric space $M$. Now, consider the decomposition of $M$ into $G$-orbits, call them $M_a$. Since each orbit is a continuous image of $G$, each $M_a$ is Lindelöf. Furthermore, since each $M_a$ is metrizable, they each have a countable base.
-Now, for every $M_a$, since $G$ acts on $M_a$ by isometries, there is a natural homomorphism of topological groups from $G$ to $\operatorname{Iso}(M_a)$, and $\operatorname{Iso}(M_a)$ is second countable because $M_a$ is. Then the diagonal product of these homomorphisms gives you the embedding of $G$ into the product of second countable groups.<|endoftext|>
-TITLE: Understanding open covering definition of compactness
-QUESTION [6 upvotes]: My question is related to understanding the open covering definition of compactness, which can be stated as: a metric space $X$ is called compact iff each of its open covers has a finite subcover.
-For example $[0,1)$ is not compact. I understand what open covers are; for example the set $A = \{(-1,1)\}$ is one open covering of $[0,1)$.
-But I don't understand how to check whether the set $A$ admits a finite subcover or not. I tried many times but I don't know where I am lacking. Please help me with this.
-Edit 1: Let $O$ be the set of open intervals of the form $(-1/2, 1-1/t)$, where $t$ ranges over the positive integers. $O$ is an open cover of $[0,1)$ but it does not admit a finite subcover. My question is about how we can show that $O$ doesn't admit a finite subcover.
-Edit 2: I find it difficult to apply the open covering definition of compactness. If someone can explain this definition with examples it would be good for me.
-Thanks
-
-REPLY [8 votes]: Here's the first task.
-
-Let $U_n = (0, 1 - \frac1n)$ for $n \in \mathbf N$. Show that the open covering $\{U_n\}$ of $(0, 1)$ has no finite subcover.
-
-Some steps you could follow:
-
-A finite subcover is of the form $\{U_n\}_{n \in S}$ for some finite subset $S$ of $\mathbf N$.
-If $S$ is non-empty then let $N$ be the largest element of $S$. Then $U_n \subset U_N$ for all $n \in S$, so it is enough to show that $U_N$ does not contain all of $(0, 1)$.
-$1 - \frac1N$ is an element of $(0, 1) \setminus U_N$.
-
-I can't tell you how to understand the definition in a few words, but here are some thoughts. It's often enough to understand a special case of something, while keeping in mind that exceptions exist. Here, the familiar case is that of $\mathbf R$. More generally, in a metric space compactness is equivalent to sequential compactness. This might be a more visceral notion, and it might help to see the proof of equivalence in section 2 of this handout of Brian Conrad's.
-For me, this justifies thinking of non-compact sets as those in which points can run off, either to a point outside of the set or "to infinity". See also this (closed) question at MO. Try thinking about why you can't construct an example like the one you gave for $[0, 1]$.
-As opposed to applying the open covering definition directly, it's often enough to remember the fundamental fact that a subset of $\mathbf R^n$ is compact if and only if it is closed and bounded, together with standard theorems on compactness.<|endoftext|>
-TITLE: How to find the error in a proof? (that $1=0$)
-QUESTION [10 upvotes]: So I devised this proof that $1=0$. Of course it is false, but I don't know why. Why?
-$$\begin{align*}
-x+1&=y\\
-\frac{x+1}{y}&=1\\
-\frac{x+1}{y}-1&=0\\
-\frac{x+1}{y}-\frac{y}{y}&=0\\
-\frac{x-y+1}{y}&=0\\
-x-y+1&=0\\
-x-y+1&=\frac{x-y+1}{y}\\
-y(x-y+1)&=x-y+1\\
-y&=1\\
-x+1&=1\\
-x&=0\qquad * * * *\\
-y-1&=x\\
-\frac{y-1}{x}&=1\\
-\frac{y-1}{x}-1&=0\\
-\frac{y-1}{x}-\frac{x}{x}&=0\\
-\frac{y-x-1}{x}&=0\\
-y-x-1&=0\\
-y-x-1&=\frac{y-x-1}{x}\\
-x(y-x-1)&=y-x-1\\
-x&=1\qquad * * * *\\
-1&=0\\
-\end{align*}$$
-
-REPLY [6 votes]: Other answerers have already pinpointed the flaw in the argument, which is division by zero. However, your comments indicate you are still not satisfied. Let me offer some thoughts on some further issues in the hopes of addressing what you are dissatisfied about.
-(1) What's wrong with division by zero anyway?
-As Zev Chonoles and Bill Dubuque have both pointed out, the first mistake is in going from
-$$x-y+1 = (x-y+1)y$$
-to
-$$1=y$$
-since $x-y+1$ is zero, because of the starting assumption that $y=x+1$. Let me add some perspective about just why this deduction is not valid. I will avoid the idea that "you can't divide by zero" or any equivalent, in an attempt to clarify the root issue. The basic question is this:
-If $A\cdot b = A\cdot c$, does it follow that $b=c$? In your case, $A$ is $x-y+1$ (which equals $0$), $b$ is $1$, and $c$ is $y$, but the issue at stake is broader.
If you know that two numbers $b$ and $c$ end up being the same after multiplication by some number $A$, do you know that they were the same to begin with?
-I encourage you to give this question some thought for a moment before reading on.
-Here's the conclusion you'll come to:
-Most of the time, the answer is yes. For most choices of $A$, for example $3$, two numbers that are the same after multiplication by $A$ must have been the same to begin with. To put it another way, if two numbers are different, then after multiplication by $A$ they will still be different. To make this concrete, take $A=3$. If $b$ is not $c$, then 3 times $b$ isn't 3 times $c$ either. A technical way to express this is to say that multiplication by $3$ is an injective function.
-However, there is one case where the answer is no: $A=0$. Two different numbers can be made the same by multiplication by $0$. Multiplication by zero is not injective. (In fact, all numbers are made equal after multiplication by zero: multiplication by zero is extremely not injective.) So, from $0\cdot b = 0\cdot c$ it is not safe to conclude that $b=c$. $b$ and $c$ could be different and they would still end up the same after multiplication by zero, so the fact that they ended up the same doesn't tell you that they started the same.
-This discussion is the logical back-end behind the algebraic move of "canceling a factor", i.e. going from an equation of the form $Ab=Ac$ to $b=c$. As long as $A$ is different from zero, multiplication by $A$ is injective (i.e. different before multiplication by $A$ implies different after), and therefore this is valid. But if $A$ is zero, many values are collapsed together by multiplication by $A$, so it's not valid. Thus the algebraic move from $Ab=Ac$ to $b=c$ requires the assumption that $A\neq 0$ in order to be valid.
-To specialize to your case, you have $(x-y+1)\cdot 1 = (x-y+1)\cdot y$; in the language I've been using, "$1$ and $y$ end up the same after multiplication by $x-y+1$." But you can't conclude that $1$ and $y$ were originally the same, because $x-y+1$ is zero, so multiplication by it is not injective.
-(2) Couldn't you have changed the numbers around so that the factors being canceled are not zero after all, thereby rescuing the proof?
-I think it would be a productive exercise to actually attempt to rewrite the proof to avoid canceling zeros.
-Here's what one attempt might look like, based on the comment discussion on Zev's answer.
-Since we are trying to avoid having $x-y+1$ be zero, perhaps we should start with $x+2=y$ instead. Let's run through the proof starting with that:
-$$ x+2 = y$$
-$$ \frac{x+2}{y}=1$$
-$$\frac{x+2}{y}-1 = 0$$
-$$\frac{x+2}{y}-\frac{y}{y}=0$$
-$$\frac{x-y+2}{y}=0$$
-$$x-y+2=0$$
-$$x-y+2=\frac{x-y+2}{y}$$
-$$y(x-y+2) = x-y+2$$
-... hmmm. To conclude from here that $y=1$, we would have to know (à propos of the above) that $x-y+2$ is not zero. But actually $x-y+2$ is definitely zero, as the proof itself showed 2 lines above. So the attempt to rescue the proof by tweaking the numbers was unsuccessful: changing the numbers also changed the troublesome factor in such a way that it was still zero. (This is what Zev was getting at in his comment about $x-y+a$.)
-Now because I know the conclusion $0=1$ is false, and I see that the flaws in the proof all involve cancellations of factors equal to zero, I believe that in any attempt to tweak the proof, factors that have to be canceled will still end up being zero.
But you may get something out of actually trying to make it work and seeing what happens.<|endoftext|>
-TITLE: Advantage of ZF over other set theories such as New Foundations
-QUESTION [13 upvotes]: What would be the advantage of adopting ZF over other set theories such as New Foundations?
-I am very curious, since it seems that there is no reason just to stick with ZF.
-Edit: What about set theories other than NF? And why is the possibility of finitely axiomatizing NF not attractive?
-
-REPLY [6 votes]: On my site http://settheory.net I provide explanations of the main issues and paradoxes at the foundations of mathematics, with the articulation between set theory and model theory. I explain in detail the difference in meaning between sets and classes, and the relative degrees of justification of different axioms.
-These explanations clearly show in particular that there is no universal set in nature, so that axiom systems admitting one, such as New Foundations, may be studied as logical curiosities but cannot be accepted as a "natural" foundation for mathematics.
-I give a justification for the consistency of ZF (showing that the one doubtful axiom is the powerset axiom, which we need anyway as far as I know), and find it relevant as a basis for the work of professional set theorists studying relative consistency issues.
-But I propose another formalization accepting functions as fundamental objects alongside sets, and many symbols instead of one, that I consider more appropriate to start mathematics from scratch: to facilitate the understanding of basic mathematical concepts, make the first developments of set theory simpler and more intuitive, and better fit the common practice of mathematics that uses many symbols. Indeed I see the usual construction of ordinary mathematical tools from the mere membership predicate as unnatural, overcomplicated and irrelevant for beginners.<|endoftext|>
-TITLE: A series with infinitely many logarithms: $\lim_{n\to\infty} \left(\frac{\ln 2}2+\frac{\ln 3}3+\cdots + \frac{\ln n}n \right)^{\frac1n}$
-QUESTION [9 upvotes]: I have to solve the following limit:
-$$\lim_{n\rightarrow\infty} \left(\frac{\ln 2}{2}+\frac{\ln 3}{3}+\cdots + \frac{\ln n}{n} \right)^{\frac{1}{n}}$$
-I'm just curious if there is a simple way to solve it. I think I solved it by using a pretty unusual trick: I approximated the sum under the radical by an integral and got $\approx \frac{\ln^{2}n}{2}$. Then I simply applied Cauchy–d'Alembert and got $1$. Still thinking of a simpler way.
-
-REPLY [3 votes]: We use $\log(1+t)\leq t$, valid for all $t>-1$. We have
-$$a_n:=\frac 1n\log \left(\sum_{k=1}^n\frac{\ln k}k\right)=\frac 1n\log \left(1+\sum_{k=1}^n\frac{\ln k}k-1\right)\leq \frac 1n\left(\sum_{k=1}^n\frac{\ln k}k-1\right).$$
-Since $\frac{\ln n}n\to 0$, the Cesàro means converge to $0$ (use $\varepsilon$), so $\limsup a_n\le 0$; and since the sum exceeds $1$ for $n$ large, $a_n\ge 0$ eventually. Hence $\lim_{n\to +\infty}\frac 1n\log \left(\sum_{k=1}^n\frac{\ln k}k\right)=0$ and the limit of $e^{a_n}$ is $1$.<|endoftext|>
-TITLE: What is the prerequisite knowledge for learning Galois theory?
-QUESTION [17 upvotes]: What is the prerequisite knowledge for learning Galois theory? I don't know what a ring is.
-REPLY [13 votes]: If you don't know a great deal of abstract algebra so far, "A First Course in Abstract Algebra" by Fraleigh might be a good place to start, as it includes all the prerequisites (groups, rings, fields, linear algebra) as well as a very readable treatment of Galois Theory itself.<|endoftext|>
-TITLE: Continuity in two dimensions
-QUESTION [6 upvotes]: How would you prove or disprove that the function given by
-$$f(x,y) = \begin{cases} \frac{x^3y^2}{x^4 + y^4} & (x,y) \neq (0,0) \\ 0 & (x,y) = (0,0) \end{cases}$$
-is continuous at $(0,0)$. I tried to think of paths approaching $(0,0)$ along which the function tends to something other than zero, but along all of them it went to zero! So this makes me think that the function is continuous at $(0,0)$. But how can I prove this? Thanks!
-
-REPLY [3 votes]: You could take advantage of the fact that the numerator and denominator are homogeneous polynomials to write the expression, as far as possible, as a function of $y/x$ (or $x/y$): divide both numerator and denominator by $x^4$ to get $$\frac{z}{1+z^4}\cdot y,\qquad z=\frac{y}{x}.$$
-Now use ordinary single-variable calculus to show that the fraction is bounded, so the whole expression is bounded by a constant times $\lvert y\rvert$. (For $x=0$, the original expression is $0$, so no problem there.)<|endoftext|>
-TITLE: Count the number of group homomorphisms from $S_3$ to $\mathbb{Z}/6\mathbb{Z}$?
-QUESTION [9 upvotes]: I have to count the number of group homomorphisms from $S_3$ to $\mathbb{Z}/6\mathbb{Z}$:
-
-1
-2
-3
-6
-
-I am aware of the formula for counting group homomorphisms defined on cyclic groups, but here $S_3$ is not cyclic. Please suggest how to proceed.
-
-Thank you so much Egbert for the much detailed and patient reply.
-
-REPLY [5 votes]: So we consider $\phi(S_3)$. Note that $|\phi(S_3)|$ divides $6$. So the possibilities are $1,2,3,6$.
-Suppose $|\phi(S_3)| = 6.$ Then $\phi$ maps $S_3$ onto the whole of $\mathbb Z_6$, so $S_3\cong\mathbb Z_6$; this is not possible, as $\mathbb Z_6$ is abelian and cyclic while $S_3$ is not.
-Suppose $|\phi(S_3)| = 3.$ Then $|\ker\phi| = 2$; as there is no normal subgroup of order $2$ in $S_3$, this case is impossible.
-Finally, $|\phi(S_3)| = 2$ occurs with $\ker\phi = A_3$, and $|\phi(S_3)| = 1$ occurs with $\ker \phi = S_3.$ So there are two homomorphisms.<|endoftext|>
-TITLE: How can we produce another geek clock with a different pair of numbers?
-QUESTION [20 upvotes]: So I found this geek clock and I think that it's pretty cool.
-
-I'm just wondering if it is possible to achieve the same but with another number.
-So here is the problem:
-We want to find a number $n \in \mathbb{Z}$ that will be used exactly $k \in \mathbb{N}^+$ times in any mathematical expression to produce results in the range $[1, 12]$. No rounding is allowed, but anything fancy is OK.
-If you're answering with an example then use one pair per answer.
-I just want to see that clock with another pair of numbers :)
-Notes for the current clock:
-1 o'clock: using 9 only twice, but it's easy to use it 3 times in many different ways. See comments.
-5 o'clock: should be $\sqrt{9}! - \frac{9}{9} = 5$
-
-REPLY [3 votes]: For $n=3$ and $k = 3$ (below, $\pi(x)$ is the prime-counting function, so $\pi(3)=2$):
-$1 = 3^{3-3}$
-$2 = 3-\frac{3}{3}$
-$3 = 3+3-3$
-$4 = 3+\frac{3}{3}$
-$5 = 3!-\frac{3}{3}$
-$6 = 3*3-3$
-$7 = 3!+\frac{3}{3}$
-$8 = \pi(3)*\pi(3)*\pi(3)$
-$9 = 3+3+3$
-$10 = 3!+\pi(3)+\pi(3)$
-$11 = 3!+3+\pi(3)$
-$12 = 3*3+3$<|endoftext|>
-TITLE: How many irreducible polynomials of degree $n$ exist over $\mathbb{F}_p$?
-QUESTION [14 upvotes]: I know that for every $n\in\mathbb{N}$, $n\ge 1$, there exists $p(x)\in\mathbb{F}_p[x]$ s.t. $\deg p(x)=n$ and $p(x)$ is irreducible over $\mathbb{F}_p$.
-
-I am interested in counting how many such $p(x)$ there exist (that is, given $n\in\mathbb{N}$, $n\ge 1$, how many irreducible polynomials of degree $n$ exist over $\mathbb{F}_p$).
-
-I don't have a counting strategy and I don't expect a closed formula, but maybe we can find something like "there exist $X$ irreducible polynomials of degree $n$ where $X$ is the number of...".
-What are your thoughts?
-
-REPLY [38 votes]: Theorem: Let $\mu(n)$ denote the Möbius function. The number of monic irreducible polynomials of degree $n$ over $\mathbb{F}_q$ is the necklace polynomial
-$$M_n(q) = \frac{1}{n} \sum_{d | n} \mu(d) q^{n/d}.$$
-(To get the number of irreducible polynomials just multiply by $q - 1$.)
-Proof. Let $M_n(q)$ denote the number in question. Recall that $x^{q^n} - x$ is the product of all the monic irreducible polynomials of degree dividing $n$. By counting degrees, it follows that
-$$q^n = \sum_{d | n} d M_d(q)$$
-(since each polynomial of degree $d$ contributes $d$ to the total degree). By Möbius inversion, the result follows.
-As it turns out, $M_n(q)$ has a combinatorial interpretation for all values of $q$: it counts the number of aperiodic necklaces of length $n$ on $q$ letters, where a necklace is a word considered up to cyclic permutation and an aperiodic necklace of length $n$ is a word which is not invariant under a cyclic permutation by $d$ for any $d < n$. More precisely, the cyclic group $\mathbb{Z}/n\mathbb{Z}$ acts by cyclic permutation on the set of functions $[n] \to [q]$, and $M_n(q)$ counts the number of orbits of size $n$ of this group action. This result also follows from Möbius inversion.
-One might therefore ask for an explicit bijection between aperiodic necklaces of length $n$ on $q$ letters and monic irreducible polynomials of degree $n$ over $\mathbb{F}_q$ when $q$ is a prime power, or at least I did a few years ago and it turns out to be quite elegant.
-Let me also mention that the above closed form immediately leads to the "function field prime number theorem." Let the absolute value of a polynomial of degree $d$ over $\mathbb{F}_q$ be $q^d$. (You can think of this as the size of the quotient $\mathbb{F}_q[x]/f(x)$, so in that sense it is analogous to the norm of an element of the ring of integers of a number field.) Then the above formula shows that the number of monic irreducible polynomials $\pi(n)$ of absolute value less than or equal to $n$ satisfies
-$$\pi(n) \sim \frac{n}{\log_q n}.$$
-
-REPLY [8 votes]: With regard to your question, this paper has a formula for counting the number of monic irreducibles over a finite field.
-
-REPLY [6 votes]: The number of monic irreducible polynomials of degree $n$ over $\mathbb{F}_p$ equals
-$$\frac{1}{n} \cdot \sum_{d|n} p^d \mu\left(\frac{n}{d}\right)$$
-where $\mu$ is the Möbius function. This follows rather easily from the Möbius inversion formula. You can find details here. Note that this in particular implies the existence of one irreducible polynomial and therefore of the field with $p^n$ elements.
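-A quick computational cross-check of this count (a sympy sketch of my own; brute force versus the Möbius formula for tiny cases):
-
-from itertools import product
-from sympy import GF, Poly, divisors, symbols
-from sympy.ntheory import mobius
-
-x = symbols('x')
-
-def necklace(n, p):
-    # M_n(p) = (1/n) * sum over d | n of mu(d) * p^(n/d)
-    return sum(mobius(d) * p ** (n // d) for d in divisors(n)) // n
-
-def brute_force(n, p):
-    # count monic irreducible polynomials of degree n over F_p directly
-    return sum(
-        Poly([1, *coeffs], x, domain=GF(p)).is_irreducible
-        for coeffs in product(range(p), repeat=n)
-    )
-
-for p in (2, 3, 5):
-    for n in (1, 2, 3, 4):
-        assert brute_force(n, p) == necklace(n, p)
-print("brute force matches the necklace polynomial")
-
-The same check passes for any small $p$ and $n$ one cares to wait for.<|endoftext|>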
-TITLE: Definition of Sectional Curvature
-QUESTION [5 upvotes]: do Carmo gives a definition of sectional curvature as follows:
-$$K(x,y) = \frac{\langle R(x,y)x,y\rangle}{|x\times y|^2}$$
-where $x,y \in T_pM$ are linearly independent vectors.
-My question: The curvature of a Riemannian manifold is a correspondence that associates to each pair of vector fields $X,Y$ a linear map $R(X,Y)$, which takes vector fields to vector fields. In the definition above $x,y \in T_pM$, which means they are not vector fields, so how could one interpret $R(x,y)x$?
-
-REPLY [5 votes]: One of the main points about tensor fields is that they can be evaluated pointwise. That is, you don't need vector fields, but only tangent vectors. This applies to all three arguments of the curvature tensor (your $x, y$).
-Edit after reading M.B.'s comment: the point of his remark is that the result will not depend on the choice of vector fields. Too bad he removed the comment. It said (wording may differ): choose vector fields $X, Y$ such that $X(p)=x, Y(p)= y$ and evaluate along these.<|endoftext|>
-TITLE: The Fundamental Group of a Polyhedron Depends Only on its $2$-Skeleton
-QUESTION [5 upvotes]: Does anyone have a good quick proof of this using the Simplicial Approximation Theorem? I'm aware that it comes out as a corollary when considering edge paths and the edge group, but this seems like quite heavy machinery for what should be a simple idea. I haven't been able to put together a convincing argument myself though!
-
-REPLY [3 votes]: Let $X$ be a polyhedron, and consider the inclusion $i : \operatorname{sk}_2 X \to X$. It induces a morphism on fundamental groups $i_* : \pi_1 \operatorname{sk}_2 X \to \pi_1 X$, for some choice of base point $x_0$ in the $0$-skeleton. This morphism is:
-
-Surjective: let $\alpha : I \to X$ be a representative of some class in $\pi_1 X$, $\alpha(0) = \alpha(1) = x_0$. Then by the simplicial approximation theorem, since $I$ is 1-dimensional, $\alpha$ is homotopic to some $\beta : I \to \operatorname{sk}_2 X$; since $\alpha$ was already simplicial on the 0-skeleton of $I$, we can assume the homotopy is constant on the 0-skeleton, hence $\beta(0) = \beta(1) = x_0$ too. Then $i_*[\beta] = [\alpha]$.
-Injective: let $\alpha, \beta : I \to \operatorname{sk}_2 X$ be two loops such that $i_*[\alpha] = i_*[\beta]$, i.e. there is a homotopy $H : I \times I \to X$ such that $H(t,0) = \alpha(t)$, $H(t,1) = \beta(t)$, and $H(0,s) = H(1,s) = x_0$. Then similarly by the simplicial approximation theorem, $H$ is homotopic to some $G : I \times I \to \operatorname{sk}_2 X$ ($I \times I$ is 2-dimensional), and one can moreover take the homotopy to be constant on $\partial(I \times I)$ (on which $H$ is already simplicial). Then $G$ is a homotopy in $\operatorname{sk}_2 X$ between $\alpha$ and $\beta$, hence $[\alpha] = [\beta]$.
-
-It follows that $i_* : \pi_1(\operatorname{sk}_2 X) \to \pi_1(X)$ is an isomorphism. More generally one can immediately adapt the arguments above to show that $\pi_n(\operatorname{sk}_{n+1} X) \cong \pi_n(X)$.<|endoftext|>
-TITLE: Are isomorphic structures really indistinguishable?
-QUESTION [13 upvotes]: I always believed that whatever you could say about one of two isomorphic structures, you could say about the other... is this true? I mean, I've heard about structures that are isomorphic but different with respect to some property and I just wanted to know more about it.
-EDIT: I try to add clearer information about what I want to talk about. In practice, when we talk about some structured set, we can view the structure in several different ways (as lots of you observed).
For example, when someone speaks about $\mathbb{R}$, one could see it as an ordered field with a particular lub property, while others may view it with more structure added (for example as a metric space or a vector space and so on). Analogously (and surprisingly!), even if we say that $G$ is a group and $G^\ast$ is a permutation group, we are talking about different mathematical objects, even if they are isomorphic as groups! In fact there are groups that are isomorphic (wrt group isomorphisms) but have different properties, for example, when seen as permutation groups.
-
-REPLY [9 votes]: You should learn some category theory. It provides some elegant language for discussing the issues you're running into.
-
-In fact there are groups that are isomorphic (wrt group isomorphisms) but have different properties, for example, when seen as permutation groups.
-
-As I said above, if you want to talk about properties of permutation groups which are not captured as properties of groups, then you don't want to talk about isomorphism of groups but isomorphism of permutation groups. More precisely, we can define the category $\text{GrpAct}$ of group actions and talk about isomorphism in this category. This is the category whose objects are triples $(G, X, \rho)$ where $G$ is a group, $X$ is a set, and $\rho : G \to \text{Aut}(X)$ an action of $G$ on $X$; and whose morphisms $(G_1, X_1, \rho_1) \to (G_2, X_2, \rho_2)$ are pairs $(\phi, f)$ where $\phi : G_1 \to G_2$ is a homomorphism and $f : X_1 \to X_2$ is a set map satisfying
-$$f(\rho_1(g)(x)) = \rho_2(\phi(g))(f(x))$$
-for all $g \in G_1, x \in X_1$. Isomorphism in this category captures isomorphism of permutation groups (the special case where $\rho$ is injective), which is a stronger condition than isomorphism of abstract groups. More precisely, there is a forgetful functor
-$$F : \text{GrpAct} \to \text{Grp}$$
-sending a group action $(G, X, \rho)$ to the group $G$, and the basic phenomenon you are observing is that two objects $a, b \in \text{GrpAct}$ may not be isomorphic even if $F(a)$ and $F(b)$ are. This is not surprising and is in some sense typical.
-
-From the perspective of category theory, the problem is that groups and permutation groups live in different categories. To compare them, you need to use the forgetful functor $F$, and then you need to distinguish between a permutation group (which lives in $\text{GrpAct}$) and the corresponding abstract group (which lives in $\text{Grp}$) as mathematical objects.
-To use a computer science analogy (well, more than an analogy, but...), the forgetful functor typecasts between the types GroupAction and Group, and the isEqual operator is defined differently for the two types (and does not compare a GroupAction and a Group; you need to typecast first).
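-A toy computational illustration (a Python sketch of my own, with made-up names): the cyclic group of order $2$ acting on a two-point set by swapping, versus acting trivially, gives two objects of $\text{GrpAct}$ with the same image under $F$ that are not isomorphic in $\text{GrpAct}$:
-
-from itertools import permutations
-
-G = [0, 1]                                   # Z/2Z, written additively
-X = [0, 1]
-swap    = {(g, x): (x + g) % 2 for g in G for x in X}   # faithful action
-trivial = {(g, x): x           for g in G for x in X}   # trivial action
-
-# Z/2Z has only the identity automorphism, so an isomorphism of these actions
-# would be a bijection f with f(swap(g, x)) = trivial(g, f(x)) for all g, x.
-found = any(
-    all(f[swap[(g, x)]] == trivial[(g, f[x])] for g in G for x in X)
-    for f in ({0: a, 1: b} for a, b in permutations(X))
-)
-print(found)  # False: same underlying group, non-isomorphic group actions
-<|endoftext|>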
-TITLE: Galois Groups of Finite Extensions of Fixed Fields
-QUESTION [10 upvotes]: I am trying to prove the following proposition:
-
-Let $L$ be an algebraically closed field, $g \in Aut(L)$ and $K=\{x \in L \; | \; g(x)=x\}$. Show that every finite extension $E/K$ is a cyclic Galois extension.
-
-So far my thoughts have led me to the following:
-If $E=K$, then clearly $E/K$ is Galois and $Gal(E/K)=Aut(K/K)=i$, where $i$ is the identity automorphism, so it is trivially cyclic. Assume $[E:K]=n$ for some $n>1$.
Since $E/K$ is a finite extension, it is algebraic. Note that if $\beta \in E\setminus K$ then we have the minimal polynomial $m_{\beta}(X)$ in $K[X]$. Since $K$ is fixed by $g$, we know that $g(\beta)$ is also a root of $m_{\beta}(X)$. We can also conclude that $g(\beta)\not \in K$ since $\beta \not \in K$.
-Where to take any of these observations, I have yet to determine. I feel I must be missing something elementary, for I cannot see how to conclude this is a Galois extension from here.
-Any tips would be greatly appreciated, especially over full solutions.
-
-REPLY [5 votes]: Hint: Assume that $x(\in L)$ is algebraic over $K$. The set $\{g^i(x)\mid i\in\mathbf{N}\}$ is then finite, as $x$ has only finitely many conjugates over $K$. It follows that, for a suitable $n$, the polynomial
-$$
-\prod_{i=0}^{n-1}(T-g^i(x))
-$$
-lies in $K[T]$. Using this you can hopefully prove that if $N$ is a normal closure of $E/K$, then $Gal(N/K)=\langle g\rangle$. The rest is easy.<|endoftext|>
-TITLE: How can I spot positive recurrence?
-QUESTION [8 upvotes]: Can someone please explain to me the intuition behind positive recurrence? What does it mean and why is it different from ordinary recurrence?
-
-REPLY [9 votes]: If the probability of return or recurrence is $1$ then the process or state is recurrent.
-If the expected recurrence time is finite then this is called positive-recurrent; if the expected recurrence time is infinite then this is called null-recurrent.
-See the Wikipedia article on Markov chains for more details.
-Added as an example:
-In a simple symmetric 1D random walk, the probability of first return after $2n$ steps is $\dfrac{\frac{1}{2n-1}{2n \choose n}}{2^{2n}}$. Since $\sum_{n=1}^\infty \dfrac{\frac{1}{2n-1}{2n \choose n}}{2^{2n}} =1$, the probability of first return in finite time is $1$, so this is recurrent. But since $\sum_{n=1}^\infty 2n \dfrac{\frac{1}{2n-1}{2n \choose n}}{2^{2n}}$ is infinite, the expected time of the first return is infinite, so this is null-recurrent.<|endoftext|>
-TITLE: Inverse Laplace transform computation
-QUESTION [5 upvotes]: Calculate the inverse Laplace transform
-$$\mathcal{L^{-1}} \left\{ s\log \frac{s^2 + a^2}{s^2 - a^2}\right\},$$
-where $a\in\mathbb{C}$ is a constant.
-
-I know that this is boring but I would really appreciate some help.
-Thanks in advance!
-
-REPLY [5 votes]: I would proceed step by step as follows (using $\risingdotseq$ for the correspondence of the original and image):
-$$f\left(x\right)\risingdotseq F=s\log\frac{s^{2}+a^{2}}{s^{2}-a^{2}}$$
-$$\int_{0}^{x}f\left(t\right)dt\risingdotseq\frac{F}{s}=\log\frac{s^{2}+a^{2}}{s^{2}-a^{2}}$$
-$$-x\int_{0}^{x}f\left(t\right)dt\risingdotseq \frac{d}{ds}\left(\frac{F}{s}\right)=\frac{2s}{s^{2}+a^{2}}-\frac{2s}{s^{2}-a^{2}}$$
-inverting the RHS:
-$$-x\int_{0}^{x}f\left(t\right)dt=2\cos ax-2\cosh ax \qquad (*)$$
-EDIT (thanks to the comment by Fabian): differentiate once with respect to $x$,
-$$-\int_{0}^{x}f\left(t\right)dt-xf\left(x\right)=-2a\sin ax-2a\sinh ax$$
-Now multiply by $x$ and subtract from (*):
-$$x^2f(x)=2(ax\sin{ax}+ax\sinh{ax}+\cos{ax}-\cosh{ax})$$
-$$f(x)=\frac{2}{x^2}(ax\sin{ax}+ax\sinh{ax}+\cos{ax}-\cosh{ax})$$<|endoftext|>
-TITLE: In an ordered field, must 1 be positive?
-QUESTION [7 upvotes]: In an ordered field, must the multiplicative identity be positive? Or must it be defined as such?
-
-REPLY [13 votes]: Remember that for a total order,
-
-If $a \leq b$, then $a+c \leq b+c$.
-If $0 \leq a$ and $0 \leq b$, then $0 \leq ab$.
-
-If $1 \leq 0$, then $1+(-1) \leq 0 + (-1)$, i.e. $0 \leq -1$. By ($2$), we need $0 \leq (-1)(-1) = 1$.
-Hence, we get that $1 \leq 0 \leq 1$. For a non-trivial field, $0 \neq 1$. Hence, we get the contradiction $$1 < 0 < 1.$$
-Hence, $0 < 1$.<|endoftext|>
-TITLE: If $G$ is finite and abelian, then every subgroup of $G$ is characteristic if and only if $G$ is cyclic
-QUESTION [7 upvotes]: Suppose $G$ is finite and abelian. Show that every subgroup of $G$ is characteristic if and only if $G$ is cyclic.
-
-I have the 'if' part so far:
-If $G$ is cyclic, then $G = \langle g \rangle $ with $|g| = n$, say. Let $\alpha \in \operatorname{Aut}(G)$; then $\alpha : G \rightarrow G$ with $g \mapsto g^i$ where $(i,n) = 1$. Let $K \leq G$; then $K$ is cyclic with $K = \langle g^k \rangle$ for some $k \in \mathbb{Z}$. $$\alpha (K) = \alpha ( \langle g^k\rangle ) = \langle \alpha(g)^k \rangle = \langle (g^i)^k \rangle = \langle (g^k)^i \rangle =: H.$$ Obviously $H\leq K$. Pick an arbitrary element of $K$, say $(g^k)^j$. Since $g^i$ generates $G$, we have $g^j = (g^i)^{j'}$ for some $j'$, so $$(g^k)^j = (g^j)^k = ((g^i)^{j'})^k = ((g^k)^i)^{j'} \in H,$$ hence $K\leq H$ and hence $H=K$; thus every subgroup is characteristic.
-But the 'only if' part gives me trouble. This is what I've come up with:
-$G$ is finite abelian, hence $$G = C_{a_1}\times C_{a_2} \times \cdots \times C_{a_m}$$ where each $C_{a_i}$ is cyclic and $a_{i} \mid a_{i+1}$ (correct me if I'm wrong but I think this is called Smith normal form?). Suppose $m\geq 2$. From here I'm trying to find an automorphism of $M:=C_{a_1} \times C_{a_2}$ which does not fix every subgroup of $M$, but unfortunately I've had no luck.
-
-REPLY [5 votes]: Hint: If $1$ …<|endoftext|>
-TITLE: Book(s) Request to Prepare for Algebraic Number Theory
-QUESTION [14 upvotes]: I would appreciate suggestions for books to enhance my learning in algebra so as to be able to read Samuel's "Algebraic Theory of Numbers" and eventually at least begin Neukirch's "Algebraic Number Theory."
-By way of background, I have gone through B. Gross's Harvard lectures on algebra several times. They were correlated to Artin, and included factoring and quadratic number fields, but did not cover modules or fields. Nor Galois Theory.
-So I would like to get a good exposure to those areas that are particularly germane to ANT. (E.g. Artin does not have anything on perfect fields and only mentions algebraic closure in a short paragraph before the Fundamental Theorem of Algebra.)
-Thanks very much.
-
-REPLY [3 votes]: Pierre Colmez has written an amazing text, entitled Éléments d'analyse et d'algèbre (et de théorie des nombres). Here's the table of contents. It's so technically precise and well digested that I think this book could probably serve as a nice reference for most of undergraduate analysis and algebra as well. This text together with Samuel's book should be good preparation.
There's also Dino Lorenzini's An invitation to arithmetic geometry, which I personally like quite a lot.<|endoftext|>
-TITLE: Computing $ \int_a^b \frac{x^p}{\sqrt{(x-a)(b-x)}} \mathrm dx$
-QUESTION [12 upvotes]: I would like to compute the integral:
-$$ \int_a^b \frac{x^p}{\sqrt{(x-a)(b-x)}} \mathrm dx$$
-where $$ a \ldots $$<|endoftext|>
-TITLE: A good reference to begin analytic number theory
-QUESTION [28 upvotes]: I know a little bit about basic number theory, much about algebra/analysis, I've read most of Niven & Zuckerman's "Introduction to the theory of numbers" (first 5 chapters), but nothing about analytic number theory. I'd like to know if there is a book I could find (or notes from a teacher online) that would introduce me to analytic number theory's classical results. Any suggestions?
-Thanks for the tips,
-
-REPLY [2 votes]: I'll try to introduce something different from the other answers: Kiran Kedlaya's course notes for Analytic Number Theory seem like a good option.
-An updated version is here.<|endoftext|>
-TITLE: Invertible Matrices are dense
-QUESTION [19 upvotes]: While reading about linear algebra for math olympiads in these notes, I came across the following assertion:
-
-Remark. The set of invertible matrices form a Zariski (dense) open subset, and hence to verify a polynomial identity, it suffices to verify it on this dense subset.
-
-Could someone provide an explanation of what it means to be a "Zariski (dense) open subset"? A proof of this result is sketched in the notes, but I feel there is some deeper theory going on underneath.
-In case anyone is interested, the author has a similar set of notes here.
-
-REPLY [18 votes]: Zariski density means that any polynomial identity in the entries of an $n\times n$ matrix which holds on all invertible matrices holds on all matrices.
-If we are furthermore considering matrices with entries in $\mathbb R$ or $\mathbb C$, then we can also say that any identity between continuous functions on the space of all $n\times n$ matrices which holds on all invertible matrices holds on all matrices.
-The proof for polynomial functions is not hard:
-We can work over any infinite field $k$, which we can take to be $\mathbb R$ or $\mathbb C$ if you like.
-A polynomial in the entries of an $n\times n$ matrix is just a polynomial in $n^2$ variables.
-
-Check that for any non-zero polynomial in $n^2$ variables, there is at least one matrix on which it doesn't vanish. (This is where we use that $k$ is infinite, and it ultimately reduces to the fact that polynomials in one variable have only finitely many zeroes.)
-
-The determinant, which I'll denote $\Delta$, is a non-zero polynomial in the $n^2$ variables.
-
-Suppose that $f$ is a polynomial which vanishes on all invertible matrices. Then the product $f \Delta$ vanishes on all matrices. By the first point, it must be the zero polynomial. Since $\Delta$ is non-zero, we see that $f$ must be the zero polynomial. That is, $f$ vanishes on all matrices.
-
-The proof for continuous functions is similar, but involves some topology as well as algebra: you have to check that any non-empty open subset of $n\times n$ matrices contains an invertible matrix. This is standard, but may not be clear to you if you're not used to making arguments in topology or manifold theory.
-Added: Actually, Georges's comment below gives a nice proof of the statement in the preceding paragraph. The same argument can also be found in Pete Clark's answer here. (This is an answer to the question linked to by Jonas Meyer in a comment above.)
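-A standard application of the density trick, with a quick numerical illustration (a sympy sketch of my own): $AB$ and $BA$ have the same characteristic polynomial. For invertible $A$ this is clear from $BA = A^{-1}(AB)A$; by density it then holds for all $A$, including singular ones:
-
-from sympy import Matrix, expand, symbols
-
-t = symbols('t')
-A = Matrix([[1, 2], [2, 4]])   # singular: det A = 0
-B = Matrix([[0, 1], [5, 3]])
-
-pAB = (A * B).charpoly(t).as_expr()
-pBA = (B * A).charpoly(t).as_expr()
-print(expand(pAB - pBA))       # 0
-<|endoftext|>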
-TITLE: Counting train stops using combinatorics
-QUESTION [6 upvotes]: There are 12 intermediate stations on a railway line between 2 stations. Find the number of ways a train can be made to stop at 4 of these so that no two stopping stations are consecutive.
-
-My attempt:
-Initially I found the maximum allowed stop number for the first stop that satisfies the consecutive station condition.
-$$A \quad 1 \quad 2 \quad 3 \quad 4 \quad 5 \quad 6 \quad 7 \quad 8 \quad 9 \quad 10 \quad 11 \quad 12 \quad B$$
-I can have a stop at stations 8, 10, 12. Hence the train can travel a maximum of 6 stations before coming to its first stop, so the number of ways $= 6 \cdot 1 \cdot 1 \cdot 1 = 6$.
-Then I shift the first stop to station number 5. Now I have 5 options for stop 1 and 2 possible options for any of the next three stops. Hence the number of ways $= 5 \cdot 2 \cdot 3 \cdot 1 = 30$.
-Again, I shift the first stop to station number 4. Now I have 4 options for stop 1 and 3 possible options for any of the next 3 stops. Hence the number of ways $= 4\cdot 3\cdot 3 = 36$.
-Continuing the same logic, I arrive at an answer of 156. But the answer I have with me is 126.
-Help appreciated. Thanks in advance.
-
-REPLY [4 votes]: If the train stops at $4$ stations, then it does not stop at the other $8$ stations. Let us denote halting stations with | and non-halting stations with x.
-Then:
- x x x x x x x x
-Now, around these $8$ non-halting stations there are $9$ gaps (including the two ends), and we choose $4$ of these $9$ gaps as halts, so that no two halts are adjacent.
-Thus the answer should be $\binom{9}{4}=126$.<|endoftext|>
-TITLE: How to prove there are an infinite number of squarefree numbers of the form $2^p-1$?
-QUESTION [23 upvotes]: How to prove there are an infinite number of squarefree numbers of the form $2^p-1$, where $p$ is prime?
-It is conjectured that all numbers of the form $2^p-1$ are squarefree. I've been having trouble proving that there are an infinite number of squarefree numbers of the form $2^p-1$; also I am unable to prove it for numbers of the form $2^n-1$ when the restriction on prime exponents is dropped. I can see that there are an infinite number of primes which divide some number of the form $2^p-1$ (all primes dividing $2^p-1$ are larger than $p$, so if $p$ is the largest known prime dividing any number of this form, there is an even larger prime dividing $2^p-1$). Also I can prove the related statement that there are an infinite number of squarefree numbers of the form $n^2+1$ by overcounting the squareful values according to squares of primes of the form $4k+1$, and after some fiddling, bounding them below a constant fraction, but I can't figure out how to adapt this idea to the $2^p-1$ case.
-Hints as well as full solutions are appreciated.
-
-REPLY [2 votes]: This is an open problem (mentioned also in a comment), as conjectured by Schinzel.
-An interesting consequence, due to Rotkiewicz, is that your open question, if true, would imply there are infinitely many primes $p$ for which $2^{p-1} \not\equiv 1 \pmod{p^2}$.
-This latter statement was shown by Silverman to be a consequence of the $abc$-conjecture, so it's "probably" true (perhaps someday Mochizuki's work will be verified or refuted...).
-Thus, there is little hope of using Rotkiewicz's work to contradict the infinitude conjectured here.
-I am re-tagging this as open, but you might find the citation below of interest.
-Book: Ribenboim, P. (2000). My Numbers, My Friends: Popular Lectures on Number Theory. Springer.
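-(For what it's worth, a quick empirical check, via a small sympy sketch of my own, confirms squarefreeness for every prime exponent small enough to factor instantly:)
-
-from sympy import factorint, primerange
-
-for p in primerange(2, 44):
-    # 2^p - 1 is squarefree iff every prime in its factorization has exponent 1
-    assert all(e == 1 for e in factorint(2**p - 1).values()), p
-print("2^p - 1 is squarefree for every prime p < 44")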
-Edit: See also the MO post here.<|endoftext|>
-TITLE: Matrices with $A^3+B^3=C^3$
-QUESTION [11 upvotes]: Problem: Find infinitely many triples of nonzero $3\times 3$ matrices $(A,B,C)$ over the nonnegative integers with
-$$A^3+B^3=C^3.$$
-My proposed solution is in the answers.
-
-REPLY [13 votes]: OR OR OR, given
-$$ x, y > 0, $$
-let
-$$ R \; = \; \left( \begin{array}{ccc} 0 & 1 & 0 \\ 0 & 0 & 1 \\ x & 0 & 0 \end{array} \right) , \; \; S \; = \; \left( \begin{array}{ccc} 0 & 1 & 0 \\ 0 & 0 & 1 \\ y & 0 & 0 \end{array} \right) , \; \; T \; = \; \left( \begin{array}{ccc} 0 & 1 & 0 \\ 0 & 0 & 1 \\ x + y & 0 & 0 \end{array} \right) , $$
-then
-$$ R^3 = x I, \; \; S^3 = y I, \; \; T^3 = (x+y) I $$
-and
-$$ R^3 + S^3 = T^3. $$
-OR
-$$ S \; = \; \left( \begin{array}{rrr} 0 & 1 & 0 \\ 0 & 0 & 1 \\ 2n^2 & 0 & 0 \end{array} \right) , \; \; T \; = \; \left( \begin{array}{rrr} 0 & 0 & 1 \\ 2 n & 0 & 0 \\ 0 & 2 n & 0 \end{array} \right) . $$
-Then
-$$ S^3 = 2 n^2 I, \; \; T^3 = 4 n^2 I, $$
-and
-$$ S^3 + S^3 = T^3. $$
-OR OR, given a Pythagorean triple
-$$ a^2 + b^2 = c^2, $$
-let
-$$ R \; = \; \left( \begin{array}{rrr} 0 & 1 & 0 \\ 0 & 0 & 1 \\ a^2 & 0 & 0 \end{array} \right) , \; \; S \; = \; \left( \begin{array}{rrr} 0 & 1 & 0 \\ 0 & 0 & 1 \\ b^2 & 0 & 0 \end{array} \right) , \; \; T \; = \; \left( \begin{array}{rrr} 0 & 0 & 1 \\ c & 0 & 0 \\ 0 & c & 0 \end{array} \right) , $$
-then
-$$ R^3 = a^2 I, \; \; S^3 = b^2 I, \; \; T^3 = c^2 I $$
-and
-$$ R^3 + S^3 = T^3. $$<|endoftext|>
-TITLE: Submanifold of $\mathbb R^n$ : projections onto tangent spaces
-QUESTION [6 upvotes]: Let $M$ be a submanifold of $\mathbb R^n$; for all $x$ in $M$, let $\pi_x:\mathbb R^n\rightarrow T_xM$ be the orthogonal projection onto the tangent space $T_xM$ of $M$ at $x$.
-How could you show that for all $x$ in $M$, there exists an open neighborhood $U$ of $x$ such that for all $y$ in $U\cap M$, $\pi_y$ restricted to $U$ is a diffeomorphism from $U$ onto $\pi_y(U)$?
-Thank you.
-
-REPLY [3 votes]: Change the coordinates so that $x=0$ and $\mathbb R^n=T_xM\oplus V=\{(t,w)\}$, so $T_xM:w=0$.
-By the local inverse function theorem there is an open neighborhood $U$ of $x$ such that $\pi_x:U\rightarrow\pi_x(U)$ is a diffeomorphism.
-Denote by $f:\pi_x(U)\rightarrow U$ the inverse function. So $f$ is a local parameterization of $M$ at $x$ by a neighborhood of $0$ in $T_xM$. We have $f(0)=0$ and $d_xf=0$ (since $T_xM:w=0$).
-Let $y=(t,f(t))\in U\cap M$; then $T_yM=\{(t+h,f(t)+d_tf(h))\}$.
-There exists $\varepsilon>0$ such that $\|d_tf\|<\varepsilon$ for all $(t,f(t))\in U$, so $p_y$ is injective on $U$.
-Let $z\in U$; we check that $d_z(p_y):T_zU\rightarrow T_yU=T_yM$ coincides with the orthogonal projection $T_zU\rightarrow T_yM$: $p_y:U\hookrightarrow \mathbb R^n\rightarrow T_yM$, so $d_z(p_y)=T_zU\hookrightarrow \mathbb R^n\rightarrow T_yM$. So $d_z(p_y)$ is invertible.
-So $p_y$ is a diffeomorphism on $U$.<|endoftext|>
-TITLE: How can we find and categorize the subgroups of $\mathbb{R}$?
-QUESTION [19 upvotes]: $\newcommand{\R}{\Bbb R}\newcommand{\Q}{\Bbb Q}\newcommand{\Z}{\Bbb Z}$
-What are all the subgroups of R = $(\R, +)$ and how can we categorize them?
-I started thinking about this question last night after looking at the structure of the cosets of $\R / \Q$ in the question What do the cosets of $\mathbb{R} / \mathbb{Q}$ look like?.
I did some searching on SO and google but didn't find anything giving a full categorization (or even a partial one) of the subgroups of $\R$.
-Here are the subgroups that I came up with so far:
-
-$\Z$ (there are no finite subgroups and $\Z$ is the universal smallest subgroup I think)
-n$\Z$ eg 2$\Z$ all even numbers
-a$\Z$ where a is any real number, including a in $\Q$ which "nest" nicely in each other
-$\Z$[a] - group generated by adding one real a to $\Z$
-n$\Z$[a] which equals $\Z$[na] and so is just a case of the one above
-Dyadic rationals eg numbers of the form $a/2^b$, or similar subgroups such as $a/3^b$, $a/2^b7^c$ etc
-$\Q$
-$\Q$[a]
-$\Q$[a in A] where A is a subset of $\R$ - could be finite, countable or uncountable. Group generated by adding all elements of A to $\Q$
-eg $\Q[\sqrt2]$
-
-It is clear that the "n$\Z$ subgroups" n$\Z$ and m$\Z$ are related according to the gcd(n,m)
-Also when H is a subgroup of R, one can look at the structure of the cosets of R / H. eg for H any of the Z subgroups we get R / H isomorphic to [0,1) or the circle. For H one of the Q subgroups it is more complex and I currently don't have ideas on the larger subgroup cosets
-I am not clear how "big" a subgroup H can get before it becomes the whole of R. I do know that if it contains any interval then it is the whole of R. But what about H with dimension less than 1?
-I am aware of one question on SO about the proper measurable subgroups of R having 0 measure Proper Measurable subgroups of $\mathbb R$, one on dense subgroups Subgroup of $\mathbb{R}$ either dense or has a least positive element? and one on the subgroups of Q How to find all subgroups of $(\mathbb{Q},+)$ but that is all my searching found so far.
-Why is this question interesting? 1) there seem to be so many subgroups and they are related in many groupings 2) I think the subgroups are related to the structure of the reals in some subtle ways 3) I know the complete classification of the finite simple groups was a major result, so I am wondering what has been done in this basic uncountable case.
-If anyone has any insight, intuition, info, papers or theorems on subgroups of R and how they are interrelated that would be interesting.
-
-REPLY [9 votes]: The subgroups of $(\mathbb{R},+)$ are up to isomorphism the torsion-free abelian groups of rank $\alpha$ for every cardinal $\alpha\leq 2^{\aleph_0}$, because $(\mathbb{R},+)$ is isomorphic to $(\mathbb{Q}^{(2^{\aleph_0})},+)$ (weak direct product) as a $\mathbb{Q}$-vector space and thus also as a group.
-The torsion-free abelian groups of rank $2$ have not even been classified yet, and it seems difficult to do so (cf. The classification problem for torsion-free abelian groups of finite rank).
-László Fuchs's book "Infinite abelian group theory" contains a lot of interesting shizzle related to this.<|endoftext|>
-TITLE: Minimal polynomial of the root of algebraic number
-QUESTION [6 upvotes]: I have obtained the minimal polynomial of $9-4\sqrt{2}$ over $\mathbb{Q}$ by algebraic operations:
-$$ (x-9)^2-32 = x^2-18x+49.$$
-I wonder how to calculate the minimal polynomial of $\sqrt{9-4\sqrt{2}}$ with the help of this sub-result? Or is there a smarter way to do this (not necessarily algorithmic)?
-
-REPLY [2 votes]: Maple can calculate minimal polynomials, e.g.:
-
-evala(Norm(convert(sqrt(9-4*sqrt(2)),RootOf) - z));
-
-$-7+2z+z^2$
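-The same computation can be reproduced in Python; a minimal sketch assuming sympy is installed (note that $9-4\sqrt2=(2\sqrt2-1)^2$, which is why the degree drops to $2$):
-
-from sympy import sqrt, Symbol, minimal_polynomial
-
-z = Symbol('z')
-print(minimal_polynomial(sqrt(9 - 4*sqrt(2)), z))   # z**2 + 2*z - 7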
<|endoftext|>
-TITLE: How to prove that $p-1$ is squarefree infinitely often?
-QUESTION [19 upvotes]: How to prove that $p-1$ is squarefree infinitely often, where $p$ is prime?
-I had thought of using Dirichlet's theorem on arithmetic progressions. The number of squareful values of $p-1$ less than or equal to $n$ is bounded by $\sum_{q}{\pi_{q^2,1}(n)}$, each term $\pi_{q^2,1}(n) \approx \frac{\pi(n)}{q \cdot (q-1)}$, and $\sum_{q}{\frac{1}{q \cdot (q-1)}} \lt 1$. But using for example this statement of Dirichlet's theorem, the possibility is open that the error in each approximation for $\pi_{q^2,1}(n)$ may continue to increase with $q$, and since the sum runs up to $\sqrt{n}$, this argument is invalid.
-Is there a more refined version of Dirichlet's theorem that can be used to circumvent this issue? Or an entirely different way to prove this statement?
-
-REPLY [11 votes]: There is a more refined version of Dirichlet's theorem which is immensely useful in these types of problems. It's called the Bombieri-Vinogradov theorem, which says roughly that the error term in Dirichlet's theorem cannot be very large for many values of $q$ simultaneously. While we can't bound any individual error term by $O(n^{1/2+\epsilon})$ (such a bound would be equivalent to a form of RH), we can get this level of control if we average over many values of $q$.
-More precisely, let $E(q)$ denote the error $\left|\pi_{q^2,1}(n) - \frac{\pi(n)}{q(q-1)}\right|$ in Dirichlet's theorem. Then a simple consequence of Bombieri-Vinogradov is that for some constant $B > 0$,
-$$\sum_{q < n^{1/4} \ \log^{-B} n} E(q) \ll \frac{n}{\log^2n}.$$
-Therefore the error terms for $q \le n^{1/4}\ \log^{-B} n$ cannot accumulate to exceed the $\Omega(n/\log n)$ difference between $\pi(n)$ and $\sum\limits_{q\text{ prime}} \frac{\pi(n)}{q(q-1)}$, exactly as you had hoped. The only moduli remaining are $n^{1/4} \ \log^{-B} n \le q \le \sqrt{n}$, but these are very easily controlled using the trivial bound $\pi_{q^2,1}(n) \le \frac{n}{q^2}$ (there aren't many numbers congruent to $1\!\!\pmod{q^2}$ when $q$ is large).
-A slightly more general application of Bombieri-Vinogradov should be able to produce the asymptotic result cited in @GerryMyerson's answer.
-EDIT — Had the wrong upper bound for $q$, since the modulus is $q^2$, not $q$.
-EDIT 2012/06/13: The asymptotic is probably not as immediate as I thought; it would require some subtlety with regard to the number of primes $q$ we can sieve out.<|endoftext|>
-TITLE: Showing a subset of $C([0,1])$ is compact.
-QUESTION [14 upvotes]: Let $${\cal F}=\left\{ f:\left[0,1\right]\to\mathbb{R} : \left|f\left(x\right)-f\left(y\right)\right|\le\left|x-y\right|\mbox{ and }{\displaystyle \int_{0}^{1}f\left(x\right)dx=1}\right\}.$$ Show that ${\cal F}$ is a compact subset of $C\left(\left[0,1\right]\right)$.
-
-When I am trying to show a set is compact, I usually resort to the every open cover has a finite subcover definition. But in this case, we are dealing with functions. So I am having difficulty "visualizing" what's going on. Any help or solutions would be appreciated.
-Edit: I should mention that we are working with respect to the sup norm.
-
-REPLY [14 votes]: According to the Arzelà–Ascoli theorem you only have to show that $\mathcal F$ is
-
-equicontinuous, i.e. $(\forall x\in[0,1])(\forall \varepsilon>0)(\exists \delta>0)(\forall f\in\mathcal F) (\forall y) |y-x|<\delta \Rightarrow |f(y)-f(x)|<\varepsilon$;
-pointwise bounded;
-closed in $C[0,1]$.
-
-Equicontinuity follows immediately from the Lipschitz condition $|f(x)-f(y)|\le |x-y|$; pointwise boundedness needs the integral condition as well.
-To show pointwise boundedness you can notice that $|f(x)-f(0)|\le |x|=x$, which means
-$$f(0)-x \le f(x) \le f(0)+x.$$
-If you apply the integral $\int_0^1$ to the left inequality, you get $f(0)\le\int_0^1 (x+f(x))\,\mathrm{d}x=\frac32$. Now the right inequality implies
-$f(x)\le \frac52$ for each $x$.
-(Thanks to Nate Eldredge, who pointed out in his comment that this was missing in my original answer.)
-To show that it is closed in sup-norm, you only have to show that if $f_n$ converges to $f$ uniformly and $f_n\in\mathcal F$, then the limit is in $\mathcal F$.
-We know that the integral behaves well w.r.t. uniform convergence, see this question. Proof of the fact that the condition $(\forall x,y\in [0,1])|f(x)-f(y)|\le |x-y|$ is preserved by uniform convergence is more-or-less standard. (In fact, in this part we only use pointwise convergence.)<|endoftext|>
-TITLE: Hitting probability of biased random walk on the integer line
-QUESTION [11 upvotes]: Let's say we start at point 1. At each step you have, say, a 2/3 chance of increasing your position by 1 and a 1/3 chance of decreasing your position by 1. The walk ends when you reach 0.
-The question: what is the probability that you will eventually reach 0?
-Also, is there any generalization for different probabilities or different starting positions or different rules (say you increase by 2 and decrease by 1)?
-NOTE: I have never taken a course that considered random walks. So, if possible, could no prior knowledge of random walks be assumed?
-
-REPLY [8 votes]: With fancier tools one can extract a bit more information from the problem. Let $p>\frac12$ be the probability of stepping to the right, and let $q=1-p$. For $n\in\Bbb N$ let $P_n$ be the probability of first hitting $0$ in exactly $n$ steps. Clearly $P_n=0$ when $n$ is even, and $P_1=q$. In order to hit $0$ for the first time on the third step you must go RLL, so $P_3=pq^2$. To hit $0$ for the first time in exactly $2k+1$ steps, you must go right $k$ times and left $k+1$ times, your last step must be to the left, and through the first $2k$ steps you must always have made at least as many right steps as left steps. It’s well known that the number of such paths is $C_k$, the $k$-th Catalan number. Thus,
-$$P_{2k+1}=C_kp^kq^{k+1}=C_kq(pq)^k=\frac{q(pq)^k}{k+1}\binom{2k}k\;,$$
-since $C_k=\dfrac1{k+1}\dbinom{2k}k$. It’s also well known that the generating function for the Catalan numbers is $$c(x)=\sum_{k\ge 0}C_kx^k=\frac{1-\sqrt{1-4x}}{2x}\;,$$ so the probability that the random walk will hit $0$ is
-$$\begin{align*}
-\sum_{k\ge 0}P_{2k+1}&=q\sum_{k\ge 0}C_k(pq)^k\\\\
-&=qc(pq)\\\\
-&=q\left(\frac{1-\sqrt{1-4pq}}{2pq}\right)\\\\
-&=\frac{1-\sqrt{1-4pq}}{2p}\\\\
-&=\frac{1-\sqrt{1-4q(1-q)}}{2p}\\\\
-&=\frac{1-\sqrt{1-4q+4q^2}}{2p}\\\\
-&=\frac{1-(1-2q)}{2p}\\\\
-&=\frac{q}p\\\\
-&=\frac{1-p}p\;.
-\end{align*}$$
-For the present case, $p=\dfrac23$, this yields the probability $\dfrac12$. Rounded to four decimal places, the probabilities of first hitting $0$ in $1,3,5,7,9,11$, and $13$ steps are:
-$$\begin{array}{rcc}
-\text{Steps}:&1&3&5&7&9&11&13\\
-\text{Probability}:&0.3333&0.0747&0.0329&0.0183&0.0114&0.0076&0.0053
-\end{array}$$
-These already account for $0.4829$ (total calculated before rounding) out of the total probability of $0.5$.
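-A quick simulation agrees; a minimal sketch in Python (the step cap is an assumption to truncate walks that drift off to $+\infty$, so the estimate is biased very slightly low):
-
-import random
-
-def hits_zero(p=2/3, start=1, max_steps=10000):
-    pos = start
-    for _ in range(max_steps):
-        pos += 1 if random.random() < p else -1
-        if pos == 0:
-            return True
-    return False   # treat long survivors as having escaped to +infinity
-
-trials = 100000
-print(sum(hits_zero() for _ in range(trials)) / trials)   # approximately 0.5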
<|endoftext|>
-TITLE: Prove $\gcd(a+b,a^2+b^2)$ is $1$ or $2$ if $\gcd(a,b) = 1$
-QUESTION [14 upvotes]: Assuming that $\gcd(a,b) = 1$, prove that $\gcd(a+b,a^2+b^2) = 1$ or $2$.
-
-I tried this problem and ended up with
-$$d\mid 2a^2,\quad d\mid 2b^2$$
-where $d = \gcd(a+b,a^2+b^2)$, but then I am stuck; by these two conclusions how can I conclude $d=1$ or $2$?
-And also is there any other way of proving this result?
-
-REPLY [4 votes]: Since $\text{gcd}(a,b) = 1$, we have that there exists $x,y \in \mathbb{Z}$ such that $$ax+by = 1$$ Hence, we have that $$(a+b)x + b(y-x) = 1$$ and $$a(x-y) + (a+b)y = 1$$
-Squaring and adding the two equations, we get that $$(a+b)^2 x^2 + b^2(y-x)^2 + 2b(a+b)x(y-x) + (a+b)^2 y^2 + a^2(x-y)^2 + 2a(a+b)y(x-y) = 2$$
-Rearranging the above equation, we get that
-$$(a+b) \left( (a+b) (x^2+y^2) + 2 (x-y) (ay-bx) \right) + (a^2 + b^2)(x-y)^2 = 2$$
-Hence, we have that $$(a+b) X + (a^2 + b^2) Y =2$$
-where $X = \left( (a+b) (x^2+y^2) + 2 (x-y) (ay-bx) \right)$ and $Y = (x-y)^2$. This implies that $\text{gcd}(a+b,a^2+b^2) \vert 2$.
-Hence, $$\text{gcd}(a+b,a^2+b^2) = 1 \text{ or } 2$$<|endoftext|>
-TITLE: What is $\lim_{(x,y)\to(0,0)} \frac{(x^3+y^3)}{(x^2-y^2)}$?
-QUESTION [5 upvotes]: In class, we were simply given that this limit is undefined since along the paths $y=\pm x$, the function is undefined.
-Am I right to think that this should be the case for any function, where the denominator is $x^2-y^2$, regardless of what the numerator is?
-Just wanted to see if this is a quick way to identify limits of this form.
-Thanks for the discussion and help!
-
-REPLY [7 votes]: Brian's answer is nice, so it got me thinking: what about approaching the point along other functions, $y=f(x)$, with $f(0)=0$? Let us plug that into the expression to get $\frac{x^3+f^3}{x^2-f^2}$. This will give $$\lim_{x \to 0} \frac{x^2-xf+f^2}{x-f}\;.$$
-We now apply l’Hospital’s rule to get $$\lim_{x \to 0} \frac{2x-xf'-f+2ff'}{1-f'}\;.$$
-If at this point we assume that $f'(0)=1$, we have another $0/0$. If $f'\neq 1$, we get that the limit is zero.
-So let us now hit this with another l’Hospital. This will give us
-$$\lim_{x \to 0} \frac{2-2f'-xf''+2ff''+2(f')^2}{-f''}\;. \qquad (*)$$
-When we evaluate at $x=0$ (where $f=0$ and $f'=1$), we get $$\lim_{x \to 0} -\frac{2}{f''}\;.$$
-And you can decide what you want the limit to be by picking a value for $f''$.
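-A quick numerical sanity check of this path-dependence (a sketch; along $y=x-cx^2$ we have $f''(0)=-2c$, so the formula above predicts the limit $-2/f''(0)=1/c$):
-
-for c in (1.0, 2.0, 4.0):
-    x = 1e-5
-    y = x - c*x**2
-    # expect roughly 1/c: 1.0, 0.5, 0.25
-    print(c, (x**3 + y**3) / (x**2 - y**2))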
<|endoftext|>
-TITLE: What is the shortest function of lambda calculus that generates all functions of lambda calculus?
-QUESTION [11 upvotes]: I suspect there's a good chance the answer to this is unknown and hard (or at least extremely tedious), but I figured it would be worth asking.
-It's well known that the functions $K:=\lambda x.\lambda y.x$ and
-$S:=\lambda x.\lambda y.\lambda z.xz(yz)$ together generate all functions of lambda calculus.
-It's also possible to do it with just a single function, as mentioned here: If we define $U=\lambda x.xSK$, then we can obtain $K=U(U(UU))$, and $S=U(U(U(UU)))$, and thus everything.
-It's also possible to do this with $V:=\lambda x.xKS$, since $S=VVV$, and $K=V(VVVVV)$.
-What I want to know is, picking a reasonable notion of "length", is there any way that is shorter than $U$ or $V$? Let's say for now that the length is the number of occurrences of a variable, including when they're introduced, so e.g., $K$ has a length of 3, $S$ has a length of 7, and $U$ and $V$ each have length 12. (Or is there a usual notion of "length" that's been studied?) Is it possible to do better than 12, and what's the shortest way?
-What if we allow for more than one generator and total the lengths? Then the usual set $\{S,K\}$ does it with 10. (Should we add a penalty for using more than one?
-Well, I guess you could, but I'm not going to define it that way here. I mean, unless people have studied this problem and are already doing it that way...). Can this variant be done in fewer than 10, and what's the shortest?
-I don't expect there's any easy way to answer the "what's the shortest" question, but I'm hoping that at least, if there is a shorter way, someone will know it or find it.
-
-REPLY [8 votes]: I believe this is related to finding a single axiom base for intuitionistic propositional calculus. There is a web page by Ted Ulrich on the subject, which discusses many such axioms. However, trying to find the shortest single axiom corresponds to trying to find a combinator with the shortest type (as opposed to your goal of finding a combinator with the shortest λ-calculus expression).
-Edit: You can take those single axioms and ask Djinn (a Haskell theorem prover) to find functions with corresponding types. For example, taking one of the first axioms in Ted Ulrich's web page, you can ask Djinn:
-Djinn> ? x :: ((p -> q) -> r) -> (s -> ((q -> (r -> t)) -> (q -> t)))
-
-and it answers
-x :: ((p -> q) -> r) -> s -> (q -> r -> t) -> q -> t
-x a _ b c = b c (a (\ _ -> c))
-
-So the expression λazbc.bc(a(λy.c)) has the given type, and it is a candidate for the single combinator you're looking for.
-(It is not obvious how to express S and K from such a combinator, but it can be recovered from the proof that the formulas (p→(q→r))→((p→q)→(p→r)) and p→(q→p) can be derived from the single axiom.)
-This way, you could generate many possible combinators and see how long they are. Most likely you won't find the shortest one, but you might find some that are shorter than the ones you described. If you do, let us know!
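-Incidentally, the identities quoted in the question can be checked mechanically by encoding the combinators as curried functions. A minimal sketch in Python (strict evaluation happens to terminate here; the test arguments are arbitrary):
-
-S = lambda x: lambda y: lambda z: x(z)(y(z))
-K = lambda x: lambda y: x
-U = lambda x: x(S)(K)
-
-K2 = U(U(U(U)))        # this is U(U(UU))
-S2 = U(U(U(U(U))))     # this is U(U(U(UU)))
-assert K2('a')('b') == 'a'
-f = lambda a: lambda b: ('f', a, b)
-g = lambda a: ('g', a)
-assert S2(f)(g)(0) == S(f)(g)(0) == ('f', 0, ('g', 0))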
<|endoftext|>
-TITLE: A characteristic subgroup is a normal subgroup
-QUESTION [7 upvotes]: $\phi(H) = H$ for $\phi$ any automorphism of $G$.
-I tried to find a homomorphism for which $H$ is the kernel, which shows that $H$ is normal. However, I tried to have it map $g$ to $\phi(gH) = \phi(g)H$, but cannot show it preserves the operation because we don't know that $\phi(a)\phi(b) = \phi(a)H\phi(b)H = \phi(a)\phi(b)H$ as we don't know that $H$ is normal.
-Is this the right way to go? Or should I try something else?
-
-REPLY [3 votes]: We are given that $\phi (H)=H$ for every automorphism $\phi$ of $G$.
-We want to show that $H$ is a normal subgroup, that is, $xHx^{-1}=H$ for every $x \in G$. For each $x\in G$ the map $\phi_x(g)=xgx^{-1}$ is an inner automorphism of $G$, so the hypothesis gives $\phi_x(H)=xHx^{-1}=H$. Hence done.<|endoftext|>
-TITLE: Fundamental group as a functor
-QUESTION [21 upvotes]: Is it right to consider assigning a fundamental group to a topological space the same as having a functor from $\mathbf{Top}$ to $\mathbf{Grp}$?
-Are there any other examples of such functors?
-
-REPLY [36 votes]: Assigning the fundamental group to a topological space is definitely a functor. But you have to keep in mind that a fundamental group is always taken with respect to a base point, and hence the functor assigns a pair $(X,x_0)$ consisting of a topological space $X$ and a point $x_0\in X$ to its fundamental group $\pi_1(X,x_0)$. As such, the functor goes from $\mathbf{Top}_\ast$ to $\mathbf{Grp}$.
-In more detail, the fundamental group $\pi_1(X,x_0)$ is the group of homotopy classes of loops starting and ending in the base point $x_0$. It is not so hard to show that the map $[\gamma]\mapsto[f\circ\gamma]$ is well-defined for each loop $\gamma$ from $x_0$ to $x_0$ and each morphism $f:(X,x_0)\to(Y,y_0)$ of the category $\mathbf{Top}_\ast$; we take this map to be $\pi_1(f)$. Roughly, this is a definition by post-composition, so it is immediate that this functor respects identity morphisms and compositions.
-
-This is a side remark, because you have been asking about the fundamental group explicitly. But I feel it is in place here because it is natural to consider a functor with domain $\mathbf{Top}$ that is `like taking the fundamental group'.
-Instead of $\mathbf{Top}_\ast\to\mathbf{Grp}$, one could also work with a functor $\mathbf{Top}\to\mathbf{Grpd}$, where the category $\mathbf{Grpd}$ is the category of groupoids (categories in which all morphisms are isomorphisms). The functor sends a topological space $X$ to the groupoid which has the points of $X$ as objects, and between two points $x$ and $y$ of $X$ the morphisms are the homotopy classes of paths from $x$ to $y$. This gives you the fundamental groupoid rather than the fundamental group.
-The nLab has more information on the fundamental groupoid.
-
-There are many more examples of functors from $\mathbf{Top}$ or related categories. An important one is the singular functor to the category $\mathbf{Sset}$ of simplicial sets. The category of simplicial sets is defined as follows: first you consider the category $\Delta$ consisting of an object $[n]$ for each natural number $n$, where $[n]$ is the partially ordered set $\{0,\ldots,n\}$ with the usual order; the morphisms are the order preserving maps. Then $\mathbf{Sset}$ is the category of contravariant functors from $\Delta$ to $\mathbf{Set}$.
-For each natural number $n$, there is the topological space
-$$
-|\Delta^n|:=\big\{(t_0,\ldots,t_n)\in[0,1]^{n+1}:\textstyle\sum_{i=0}^n t_i=1\big\},
-$$
-which is called the standard $n$-simplex. To test your understanding of these definitions, you can show that the map $[n]\mapsto|\Delta^n|$ is a functor from $\Delta$ to $\mathbf{Top}$. Now we can define the functor $S:\mathbf{Top}\to\mathbf{Sset}$, which is called the singular functor, by assigning to each topological space $X$ the functor
-$$
-n\mapsto\mathbf{Top}(|\Delta^n|,X)
-$$
-It turns out that the simplicial sets $S(X)$ have very nice properties. One of them is that they really are $\infty$-groupoids. Also, the set $\mathbf{Top}(|\Delta^n|,X)$ is used to define the $n$-th homology group of $X$, which gives yet another functor from the category of topological spaces. All of these functors have been (and are) important for the investigation of topological spaces.<|endoftext|>
-TITLE: Double negation elimination in constructive logic
-QUESTION [15 upvotes]: How can I prove that double negation elimination is not provable in constructive logic?
-To clarify, double negation elimination is the following statement:
-$$\neg\neg q \rightarrow q$$
-
-REPLY [3 votes]: A very simple and efficient tree proof method (i.e. a tableau method without signed formulae) for intuitionistic logic has been published by Bell, DeVidi and Solomon in "Logical Options: An Introduction to Classical and Alternative Logics"
-http://books.google.fr/books/about/Logical_Options.html?id=zUVYx-bTLgMC&redir_esc=y
-Here is the tree counter-model for the formula $\neg \neg p \to p$, à la Bell et al.:
$$\underline{?(\neg \neg p \to p)}^{\surd}$$
- $$ ? p $$
- $$ \neg \neg p$$
- $$ \underline{? \neg p }^{\surd}$$
- $$ p $$
-The tree shows a Kripke counter-model where, in a locality expressed by a space between two horizontal lines, $p$ is not known to be true (i.e. is false in the locality), while $\neg \neg p$ is proved, i.e. is known to be true. The symbol $?$ says that the formula is not known to be true, and it sticks the formula in the locality. The symbol $\surd$ says that the formula is deactivated. Every formula without $\surd$ or without $?$ can pass through any horizontal line (truth is persistent). For more details, see Bell et al.'s book.
-This is a very simple proof method, offered to echo the very nice explanation given by Zhen Lin. (I wish to thank Zhen Lin warmly for his post.)
-Reading this page again, I must add that, contrary to what Ben Millwood claimed, double negation elimination is not equivalent to the law of excluded middle: while it is true that $(\neg A \lor A)$ implies $(\neg \neg A \to A)$ in intuitionistic logic, the converse is not intuitionistically provable. The fact that $(\neg \neg A \to A) \to (\neg A \lor A)$ is not an intuitionistic theorem contradicts the claim that double negation elimination is equivalent to the law of excluded middle.<|endoftext|>
-TITLE: Proof of Eberlein–Smulian Theorem for reflexive Banach spaces
-QUESTION [6 upvotes]: Looking for the proof of the Eberlein–Smulian theorem.
-Searching for the proof is what I have been stuck on this morning. Some of my friends recommend Haim Brezis (Functional Analysis, Sobolev Spaces and Partial Differential Equations). After searching the book, I only found the statement of the theorem. Is the proof very difficult to grasp? Why does Haim Brezis skip it in his book?
-Please, I need a reference where I can find the proof in detail.
-
-
-Theorem: (Eberlein-Smul'yan Theorem) A Banach space $E$ is reflexive if and only if every (norm) bounded sequence in $E$ has a subsequence which converges weakly to an element of $E$.
-
-REPLY [10 votes]: I think Megginson's book An Introduction to Banach Space Theory (GTM 183) is worth having a look at. You can preview some parts of the book at Google Books. (I believe the whole section 2.8 Weak Compactness might be interesting for you.)
-Albiac–Kalton, Topics in Banach Space Theory (GTM 233), Corollary 1.6.4, page 24.
-Whitley, An elementary proof of the Eberlein–Šmulian theorem, Mathematische Annalen 172 (2), 1967, 116–118. Freely available from GDZ.
-Zeidler, Nonlinear functional analysis and its applications. II/A: Linear monotone operators, Springer, 1990, Theorem 21.D
-
-I made this answer CW, so that other people can add further references if they think it's suitable.<|endoftext|>
-TITLE: Why do we talk about Trace Operator?
-QUESTION [5 upvotes]: What is the importance of the trace operator in PDE? I have read the Wiki page on this, but I am not able to connect it to the aspect of solving PDEs.
-In particular, why do we define the trace operator as $T\colon W^{1,p}(U) \to L^p(\partial U)$?
-Also, why do we deal with only exponent $1$? What is special about an operator from $W^{1,p}$ to $L^p$?
-Please help me to understand this concept of trace operator. Thank you very much.
-
-REPLY [2 votes]: We take $U$ a smooth open set, regular enough in order to have density of $C^{\infty}_0(\overline U)$ in $W^{1,p}(U)$.
-We define $T$ on $C^{\infty}_0(\overline U)$ by taking the restriction of these functions to the boundary (this is well-defined: these are honest functions, not equivalence classes of functions).
We check that this operator is continuous on this space of functions, then we extend it by continuity to $W^{1,p}(U)$.
-We get functions in $L^p(\partial U)$, which is a space that can be defined using charts. Thanks to that, we can make sense of the boundary values of an element of the Sobolev space, even though the boundary has measure $0$, and this extends the concept of trace to such functions.<|endoftext|>
-TITLE: Calculating the shortest possible distance between points
-QUESTION [8 upvotes]: Question:
-Given the points $A(3,3)$, $B(0,1)$ and $C(x,0)$ where $0 < x < 3$, $AC$ is the distance between $A$ and $C$ and $BC$ is the distance between $B$ and $C$. What is x for the distance $AC + BC$ to be minimal?
-What have I done?
-I defined the function $AC + BC$ as:
-$\mathrm{f}\left( x\right) =\sqrt{{1}^{2}+{x}^{2}}+\sqrt{{3}^{2}+{\left( 3-x\right) }^{2}}$
-And the first derivative:
-$\mathrm{f'}\left( x\right) =\frac{x}{\sqrt{{x}^{2}+1}}+\frac{3-x}{\sqrt{{x}^{2}-6\,x+18}}$
-We need to find the values for $\mathrm{f'}\left( x\right) = 0$, so by summing and multiplying both sides I got to the equation:
-$2x^4 - 12x^3 + 19x^2 -6x+18 = 0$
-But I don't think the purpose should be to solve a fourth-degree equation; there should be another way I'm missing...
-
-REPLY [3 votes]: The following little trick (method?) is useful in many places. Let $y=3-x$. We want to minimize $\sqrt{1+x^2}+\sqrt{3^2+y^2}$, where $x+y=3$.
-Differentiate with respect to $x$. Using the fact that $\frac{dy}{dx}=-1$, we arrive at the equation
-$$\frac{x}{\sqrt{1+x^2}}=\frac{y}{\sqrt{9+y^2}}.$$
-Note that this is (apart from a little minus sign problem) the equation you arrived at, with $3-x$ replaced by $y$.
-Square both sides, cross-multiply. We get
-$$x^2(9+y^2)=y^2(1+x^2),$$
-which simplifies to $9x^2=y^2$.
-No fourth degree equation here! Since $x$ and $y$ are non-negative, we get $y=3x$, a linear equation. Now from $x+y=3$ we obtain $x=3/4$.
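-A brute-force numerical check agrees; a minimal sketch assuming numpy is available (a dense grid search, not part of the argument):
-
-import numpy as np
-
-x = np.linspace(0, 3, 3000001)
-f = np.sqrt(1 + x**2) + np.sqrt(9 + (3 - x)**2)
-print(x[f.argmin()])   # prints 0.75 up to the grid resolution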
<|endoftext|>
-TITLE: A formal name for "smallest" and "largest" partition
-QUESTION [7 upvotes]: Consider a set $A=\{1,2,3,4,5\}$,
-is there any terminology for the following partitions of $A$?
-(1) $A=\{ \{1\},\{2\},\{3\},\{4\},\{5\} \}$
-(2) $A=\{\{1,2,3,4,5\}\}$.
-
-REPLY [6 votes]: Suppose that $A$ is a non-empty set and $P_1,P_2$ are two partitions of $A$. We say that $P_1$ refines $P_2$ if for every $B\in P_1$ there is $C\in P_2$ such that $B\subseteq C$. This means that $P_1$ was a result of partitioning each part of $P_2$.
-We also say that $P_1$ is finer than $P_2$; that $P_1$ is a refinement of $P_2$; or that $P_2$ is coarser than $P_1$.
-It is a nice exercise to verify that the relation $x\prec y\iff x\text{ refines }y$ is a partial order over the partitions of $A$; in fact it is a lattice: any two partitions have a coarsest common refinement and a finest common coarsening.
-One can now see that the partition into singletons is the minimum of this partial order. Indeed it is the finest partition. Similarly the partition into a single part is the coarsest partition, and it is the maximum of the order.<|endoftext|>
-TITLE: Is sheafification always an inclusion?
-QUESTION [18 upvotes]: Let $\mathscr{F}$ be a presheaf and $\mathscr{F}^+$ its sheafification, with the universal morphism $\theta:\mathscr{F}\rightarrow\mathscr{F}^+$. Question is: is $\theta$ always an inclusion? I'm pretty sure it isn't, but in many cases it seems that it is. For example, if $\phi:\mathscr{F}\rightarrow\mathscr{G}$ is a morphism of sheaves, then $\operatorname{ker}\phi$, $\operatorname{im}\phi$, $\operatorname{cok}\phi$ all seem to have this property. If it isn't always the case, then is there any characterization of such presheaves? A "subpresheaf" of a sheaf certainly has this property (e.g., $\operatorname{im}\phi$), but what about $\operatorname{cok}\phi$?
-
-REPLY [12 votes]: The answer to the question is NO, since the global information about a presheaf cannot be determined by its local information. In fact, this is the essential point of the definition of a sheaf. Above, Georges Elencwajg has given a nice counter-example which is quite natural. We can also construct a counter-example directly as follows.
-Take a topological space $X$ which can be covered by proper open subsets (any Hausdorff space with more than one point will do), and define a presheaf $\mathcal{F}$ of abelian groups in the following way: for any proper open subset $U$ define $\mathcal{F}(U)$ to be zero, while defining $\mathcal{F}(X)$ to be any nontrivial abelian group, say $\mathbb{Z}$. Then the sheafification $\mathcal{F}^+$ is zero. Considering global sections, we see that $\theta$ is not an inclusion.<|endoftext|>
-TITLE: Does convergence in distribution implies convergence of expectation?
-QUESTION [35 upvotes]: If we have a sequence of random variables $X_1,X_2,\ldots,X_n$ that converges in distribution to $X$, i.e. $X_n \rightarrow_d X$, then is
-$$
-\lim_{n \to \infty} E(X_n) = E(X)
-$$
-correct?
-I know that convergence in distribution implies $E(g(X_n)) \to E(g(X))$ when $g$ is a bounded continuous function. Can we apply this property here?
-
-REPLY [35 votes]: With your assumptions the best you can get is via Fatou's Lemma:
-$$\mathbb{E}[|X|]\leq \liminf_{n\to\infty}\mathbb{E}[|X_n|]$$
-(where you used the continuous mapping theorem to get that $|X_n|\Rightarrow |X|$).
-For a "positive" answer to your question: you need the sequence $(X_n)$ to be uniformly integrable:
-$$\lim_{\alpha\to\infty} \sup_n \int_{|X_n|>\alpha}|X_n|d\mathbb{P}= \lim_{\alpha\to\infty} \sup_n \mathbb{E} [|X_n|1_{|X_n|>\alpha}]=0.$$
-Then, one gets that $X$ is integrable and $\lim_{n\to\infty}\mathbb{E}[X_n]=\mathbb{E}[X]$.
-As a remark, to get uniform integrability of $(X_n)_n$ it suffices to have for example:
-$$\sup_n \mathbb{E}[|X_n|^{1+\varepsilon}]<\infty,\quad \text{for some }\varepsilon>0.$$<|endoftext|>
-TITLE: Galois group of a degree 5 irreducible polynomial with two complex roots.
-QUESTION [8 upvotes]: Let $f(x)\in \mathbb{Q}[x]$ be an irreducible polynomial of degree five with exactly three real roots and let $K$ be the splitting field of $f$. Prove that Gal$(K/Q) \cong S_5$.
-
-My attempt
-The splitting field of an irreducible polynomial of degree $5$ has Galois group which is a transitive subgroup of $S_5$. Hence, if we can show that the Galois group has elements of cycle type $[2][1][1][1]$ and $[3][1][1][1]$ then $S_5$ is the only such subgroup possible (we are eliminating the possible subgroups of $F_{20}$ with the cycle of order three since $3 \nmid 20$, and $A_5$ since the transposition cannot be an even permutation).
-Conjugation is a field automorphism that fixes $\mathbb{Q}$. It is thus in the Galois group and has cycle type $[2][1][1][1]$.
-My question
-How do I show that there is an element of order three in the Galois group?
-
-REPLY [3 votes]: Let the roots be $a,b,c,d,e$ with $a,b$ non-real. Your subgroup is transitive and has at least one transposition $(a b)$.
Since it is transitive you can conjugate your transposition by some element to get a transposition which moves $c$ (conjugation preserves the cycle structure), and likewise one which moves $d$ or $e$.
-Case 1. If this is $(bc)$ or $(ac)$ you can find your 3-cycle as a product of transpositions.
-Case 2. If it is $(cd)$ then you can use conjugation to find a transposition which moves $e$, and whichever one you get, you can find your 3-cycle by composing it with either $(ab)$ or $(cd)$.<|endoftext|>
-TITLE: Polynomial ring $R[X] $ has zero Jacobson radical, for $R$ a domain
-QUESTION [10 upvotes]: Let $R$ be a commutative domain.
-Prove that the Jacobson radical of $R[X]$, i.e. the intersection of all maximal ideals, is the zero ideal.
-Thank you.
-
-REPLY [9 votes]: Exercise 1.4 of Atiyah–Macdonald tells you that in any polynomial ring $R[x]$, the Jacobson radical and nilradical are equal. For your problem let us throw in the additional hypothesis that $R$ is an integral domain. Then the nilradical of $R[x]$ is zero because $R[x]$ is an integral domain, and hence the Jacobson radical is zero.<|endoftext|>
-TITLE: Krull dimension of local Noetherian ring
-QUESTION [8 upvotes]: Let $R$ be a commutative local Noetherian ring and $\mathfrak{m}$ its maximal ideal.
-Prove that, if $\mathfrak{m}$ is principal, then $\mathrm{dim}(R)\leq 1$ (the Krull dimension of the ring).
-Thank you.
-
-REPLY [5 votes]: First, let me explicitly assume $R$ is a noetherian local domain with a principal maximal ideal $\mathfrak{m}$.
-Proposition. The Krull dimension of $R$ is at most $1$.
-Proof. Let $\mathfrak{m} = (t)$, and let $\mathfrak{p}$ be a prime ideal. We know $\mathfrak{p} \subseteq \mathfrak{m}$, so it is enough to show that either $\mathfrak{p} = \mathfrak{m}$ or $\mathfrak{p} = (0)$. Suppose $\mathfrak{p} \ne \mathfrak{m}$: then $t \notin \mathfrak{p}$. Let $a_0 \in \mathfrak{p}$. For each $a_n$, because $\mathfrak{p}$ is prime, there exists $a_{n+1}$ in $\mathfrak{p}$ such that $a_n = a_{n+1} t$. By the axiom of dependent choice, this yields an infinite ascending sequence of principal ideals:
-$$(a_0) \subseteq (a_1) \subseteq (a_2) \subseteq \cdots$$
-Since $R$ is noetherian, for $n \gg 0$, we must have $(a_n) = (a_{n+1}) = (a_{n+2}) = \cdots$. Suppose, for a contradiction, that $a_0 \ne 0$. Then, $a_n \ne 0$ and $a_{n+1} \ne 0$, and there is $u \ne 0$ such that $a_{n+1} = a_n u$. But then $a_n = a_{n+1} t = a_n u t$, so cancelling $a_n$ (which we can do because $R$ is an integral domain), we get $1 = u t$, i.e. $t$ is a unit. But then $\mathfrak{m} = R$ – a contradiction. So $a_0 = 0$. $\qquad \blacksquare$
-
-Here's an elementary proof which shows why we can reduce to the case where $R$ is an integral domain.
-Proposition. Any non-trivial ring $A$ has a minimal prime.
-Proof. By Krull's theorem, $A$ has a maximal ideal, which is prime. Let $\Sigma$ be the set of all prime ideals of $A$, partially ordered by inclusion. The intersection of a decreasing chain of prime ideals is a prime ideal, so by Zorn's lemma, $\Sigma$ has a minimal element. $\qquad \blacksquare$
-Thus, we can always assume that a maximal chain of prime ideals starts at a minimal prime and ends at a maximal ideal.
But if $R$ is a noetherian local ring with principal maximal ideal $\mathfrak{m}$ and $\mathfrak{p}$ is a minimal prime of $R$, then $R / \mathfrak{p}$ is a noetherian local domain with principal maximal ideal $\mathfrak{m}/\mathfrak{p}$, and $\dim R = \sup_\mathfrak{p} \dim R / \mathfrak{p}$, as $\mathfrak{p}$ varies over the minimal primes.
-
-Update. Georges Elencwajg pointed out in a comment that the first proof actually works without the assumption that $R$ is a domain, because $(1 - u t)$ is always invertible.<|endoftext|>
-TITLE: Sequential characterization of closedness of the set
-QUESTION [5 upvotes]: Let $(X, d)$ be a metric space. A set $F\subset X$ is closed if and only if for every sequence $\left\{x_n\right\}\subset F$, if $x\in X$ and $x_n\rightarrow x$ then $x\in F$.
-
-Definition of closed set: a set is closed if and only if its complement is open. A set $U$ is open if and only if $\forall_{x\in U}\exists_{r>0}B(x,r)\subset U$, where $B(x,r)$ is the ball with center $x$ and radius $r$.
-It's rather a well-known fact that I used many times while solving problems, but just now I realized that I don't know how to prove it.
-
-REPLY [5 votes]: $\Rightarrow$: Suppose that $F$ is closed and let $\{x_{n}\}\subset F$ so that $x_{n}\to x$ for some $x\in X$. We show that $x\in F$. Let $U$ be any nhood of $x$. Since $x_{n}\to x$ there exists $k\in\mathbb{N}$ so that $x_{n}\in U$ for all $n\geq k$. In particular, $U\cap F\neq \emptyset$ (since e.g. $x_{k}\in U\cap F$). Since $U$ was an arbitrary nhood of $x$, this shows that $x$ is in the closure of $F$, which is equal to $F$ since $F$ is a closed set. Hence $x\in F$.
-$\Leftarrow$: We show that $F$ is closed provided the latter property. Let $x$ be any element in the closure of $F$: we show that $x\in F$. Choose $x_{n}\in B(x,\frac{1}{n})\cap F$ for all $n\in\mathbb{N}$ (such $x_{n}$ exists since $x$ is in the closure of $F$, whence every nhood of $x$ intersects $F$). Now $x_{n}\to x$ and by assumption of this direction we have $x\in F$. Hence the closure of $F$ is a subset of $F$, whence they are in fact equal since a set is always a subset of its closure. But this means that $F$ is a closed set.<|endoftext|>
-TITLE: How do I compute mean curvature in cylindrical coordinates?
-QUESTION [5 upvotes]: If I have a surface defined by $ z=f(r, \theta) $, does anyone know the expression for the mean curvature? There is a previous post dealing with Gaussian instead of mean curvature; the answer I'm looking for is similar to that given by J.M. on that post.
-The mentioned post:
-How do I compute Gaussian curvature in cylindrical coordinates?
-Many thanks in advance for your help,
-
-REPLY [4 votes]: Unfortunately, Vittorio gave the mean curvature with respect to the parameters $\theta$ and $z$ in his answer instead of the parameters $r$ and $\theta$ that the OP needed.
To get the mean curvature of $z=f(r,\theta)$, we start with the parametrization
-$$\begin{align*}x&=r\cos\,\theta\\y&=r\sin\,\theta\\z&=f(r,\theta)\end{align*}$$
-Using the usual formula for mean curvature (equation 7 here) and simplifying, we obtain the expression
-$$\small \frac{\frac{\partial f}{\partial r}\left(r^2\left(\left(\frac{\partial f}{\partial r}\right)^2+1\right)+2\frac{\partial f}{\partial \theta}\left(\frac{\partial f}{\partial \theta}-r\frac{\partial^2 f}{\partial r\,\partial \theta}\right)\right)+r\left(\left(\frac{\partial f}{\partial \theta}\right)^2+r^2\right)\frac{\partial^2 f}{\partial r^2}+r\frac{\partial^2 f}{\partial \theta^2}\left(\left(\frac{\partial f}{\partial r}\right)^2+1\right)}{2\left(r^2\left(\left(\frac{\partial f}{\partial r}\right)^2+1\right)+\left(\frac{\partial f}{\partial \theta}\right)^2\right)^{3/2}}$$
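-As a sanity check, the expression can be tested symbolically on a surface of known mean curvature; a minimal sketch with sympy (assuming it is available), using the hemisphere $f=\sqrt{R^2-r^2}$, for which the expression should reduce to $-1/R$ with this orientation:
-
-from sympy import symbols, sqrt, simplify, Rational
-
-r, R = symbols('r R', positive=True)
-f = sqrt(R**2 - r**2)          # upper hemisphere of radius R
-fr, frr = f.diff(r), f.diff(r, 2)
-# f is independent of theta, so all theta-derivatives drop out
-num = fr*r**2*(fr**2 + 1) + r*r**2*frr
-den = 2*(r**2*(fr**2 + 1))**Rational(3, 2)
-print(simplify(num/den))       # expected: -1/R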
<|endoftext|>
-TITLE: "Antisymmetry" among cut points
-QUESTION [13 upvotes]: We might define a ternary relation between points of a topological space $X$ by writing $x|yz$ whenever $y$ is not in the same quasi-component of $X\setminus \{x\}$ as $z$. It is not hard to prove that if $X$ is connected, at most one of $x|yz$, $y|xz$ and $z|xy$ is true for distinct $x, y, z$. Informally, at most one of the points can be "between" the other two.
-What I can't figure out is whether this still holds if quasi-components are replaced by real components. If it did, I think it would simplify the answer to the question Is the configuration space of a connected space connected?
-Any help would be appreciated.
-
-Addendum:
-For comparison, here is the proof I had in mind for the case of quasi-components. Clearly $x|yz$ is symmetric w.r.t. $y$ and $z$, so it suffices to prove that
-
-if $x|yz$ and $y|xz$ for distinct $x, y, z \in X$, then $X$ is disconnected.
-
-If $x|yz$ then there is a clopen neighbourhood $U$ of $z$ in $X \setminus \{x\}$ that does not include $y$.
-This means that one of $U$ and $U \cup \{x\}$ must be open in $X$ and one must be closed. Similarly, if $y|xz$ there must be a set $V$ that includes $z$ but not $x$, such that one of $V$ and $V \cup \{y\}$ is open and one is closed.
-Since $y \notin U \cup \{x\}$ and $x \notin V \cup \{y\}$ we have
-$$
- (U \cup \{x\}) \cap (V \cup \{y\}) = (U \cup \{x\}) \cap V =
- U \cap (V \cup \{y\}) = U \cap V
-$$
-Thus $U \cap V$ is always the intersection of two open sets and the intersection of two closed sets, therefore it is clopen, and since it contains $z$ but not $x$ it shows that $X$ is disconnected. $\square$
-The analogous statement about path-components in a path-connected space is also not hard to prove. This statement is equivalent to
-
-For distinct points $x, y, z$ in a path-connected space $X$, there is a path from $z$ to $x$ that does not pass through $y$, or a path from $z$ to $y$ that does not pass through $x$.
-
-Let $f: [0,1] \to X$ be any path from $z$ to $x$. Consider the path
-$$
- g(t) = \cases{
- y, & if $y \in f([0, t])$ \cr
- f(t), & otherwise. \cr
- }
-$$
-If $g(a) = x$ for some $a \in (0,1]$, then $h: [0,1] \to X: t \mapsto g(t/a)$ is a path from $z$ to $x$ that doesn't pass through $y$. If not, then $g$ is a path from $z$ to $y$ that doesn't pass through $x$. $\square$
-
-REPLY [16 votes]: I have found the following theorem in Kuratowski - Topology (vol. 2, page 140), which is helpful here:
-Theorem. Let $X$ be a connected topological space, $A\subseteq X$ a connected subset and $C$ a component of $X\setminus A$. Then $X\setminus C$ is connected.
-For distinct points $x,y,z$, define $x|yz$ to mean that $y$ and $z$ lie in different connected components of $X\setminus\{x\}$. We shall now show how the theorem helps us prove what we want:
-Proposition. Suppose $X$ is connected and $x,y,z\in X$. Then at most one of $x|yz$, $y|xz$, $z|xy$ holds.
-Proof. Suppose $x|yz$ holds, i.e. $y$ and $z$ lie in different connected components of $X\setminus\{x\}$. Let $M$ be the component of $X\setminus\{x\}$ that contains $y$. By the theorem above, $X\setminus M$ is connected. By definition, $y\notin X\setminus M$, so $X\setminus M$ is a connected subset of $X\setminus\{y\}$. Since $y$ and $z$ lie in different components, we further have $z\in X\setminus M$ and since $M\subseteq X\setminus\{x\}$ we also have $x\in X\setminus M$. Therefore $x$ and $z$ lie in the same connected subset of $X\setminus\{y\}$, so $y|xz$ cannot hold. This completes the proof. $\square$
-Since I cannot find a useful link to the proof of the theorem, I shall just briefly describe the ideas Kuratowski uses to prove it. First, he establishes the following theorem of decomposition:
-Theorem. Let $C$ be a connected subset of a connected topological space $X$. If $M$ and $N$ are separated sets (i.e. $(M\cap\overline{N})\cup(\overline{M}\cap N)=\emptyset$) such that $X\setminus C=M\cup N$, then the sets $C\cup N$ and $C\cup M$ are connected. (Furthermore, if $C$ is closed, $C\cup N$ and $C\cup M$ are also closed.)
-The idea of proof is pretty straightforward: first we suppose that $C\cup M=A\cup B$, where $A$ and $B$ are separated. Without loss of generality we can assume that $A\cap C=\emptyset$ and then we show that in $X=C\cup M\cup N=A\cup B\cup N$ the sets $A$ and $B\cup N$ are separated, so at least one of them must be empty.
-The proof of the first theorem uses a similar idea. Suppose $X\setminus C= M\cup N$, where $M$ and $N$ are separated. Without loss of generality, $A\cap M=\emptyset$. Next we show that $C\subseteq C\cup M\subseteq X\setminus A$. Since by the second theorem $C\cup M$ is connected, $C=C\cup M$ follows, proving that $M$ is empty.
-(I suppose it shouldn't be too hard for the reader to supply the missing details here. In any case, Kuratowski's exposition of connectedness is really good, so I warmly recommend the book to anyone interested in this subject.)<|endoftext|>
-TITLE: Estimates on conjugacy classes of a finite group.
-QUESTION [7 upvotes]: In Character Theory of Finite Groups by I. Martin Isaacs, Exercise 2.18 on page 32.
-
-Theorem:
- Let $A$ be a normal subgroup of $G$ such that $A$ is the centralizer of every non-trivial element in $A$. If further $G/A$ is abelian, then $G$ has $|G:A|$ linear characters, and $(|A|-1)/|G:A|$ non-linear irreducible characters of degree $|G:A|$ which vanish off $A$.
-My Attempt:
- By the hypotheses, every conjugacy class contained in $A$ has order $|G:A|$, except the trivial one. Moreover, we find that if $C$ is a class which contains one element in $\alpha A$, then $C$ is contained in $\alpha A$. Let $A$ act on $C$ by conjugation and partition $C$ into orbits. Again we find that no element in $C$ is fixed by $A$, so that $|C|$ is at least $|A|$; thus $k$, the number of classes in $G$, is $\le 1+(|A|-1)/|G:A|+(|G|-|A|)/|A|$.
- On the other hand, as $G' \subset A$, we find that the number of linear characters is $\ge |G:A|$. Furthermore, by Mackey's irreducibility criterion, there are exactly $(|A|-1)/|G:A|$ irreducible characters induced by linear ones of $A$. Therefore, we conclude as stated.
-
-As is obvious, this approach, if correct, exploits properties of induced characters of Mackey, with which I am still not so familiar, and hence I might ask:
-I: Is my try valid?
-II: How to proceed in an elementary manner?
-
-REPLY [6 votes]: I do not think you need to use Mackey or other induction theorems. You have already observed that (counting the identity), there are exactly $1 + \frac{(|A|-1)}{[G:A]}$ conjugacy classes meeting $A.$ Now choose an element $b \in G \backslash A.$ Notice that $|C_{G}(b)| \geq [G:A]$ because there are at least $[G:A]$ linear characters of $G$, as you have already noted, and for any linear character $\mu$ of $G$ we have $|\mu(b)|^{2} = 1.$ On the other hand, $C_{G}(b) \cap A = 1,$ so $|G| \geq |A| |C_{G}(b)|$ and $|C_{G}(b)| \leq [G:A].$ Hence $|C_{G}(b)| = [G:A]$ for each $b \in G \backslash A.$ Hence there are $[G:A]-1$ conjugacy classes of $G$ which do not meet $A$, and each of these contains $|A|$ elements. Thus $G$ has $[G:A] + \frac{|A|-1}{[G:A]}$ conjugacy classes, hence the same number of complex irreducible characters. Since $|C_{G}(b)| = [G:A]$ for each element $b$ of $G \backslash A,$ there can be no more than $[G:A]$ linear characters of $G,$ so there are exactly $[G:A]$ such linear characters, as we know there are at least that many. Furthermore, from the orthogonality relations, we see that whenever $\chi$ is a non-linear irreducible character of $G$, we must have $\chi(b) = 0$ for all such $b.$ Also, we have $\chi(1) \leq [G:A]$ by other results in Isaacs' book, so you have enough information to deduce that $\chi(1) = [G:A]$ for all such non-linear irreducible $\chi.$<|endoftext|>
-TITLE: Equivalent of $\int_0^{\infty} \frac{\mathrm dx}{(1+x^3)^n},n\rightarrow\infty$
-QUESTION [7 upvotes]: According to my calculations
-$$ \int_0^\infty \frac{\mathrm dx}{(1+x^3)^n}=\frac{(3n-4)\times(3n-7)\times\cdots\times5\times2}{3^{n+1/2}(n-1)!}2\pi$$
-How can an equivalent of $$ \int_0^\infty \frac{\mathrm dx}{(1+x^3)^n}$$ be derived from this formula?
-(Given that my objective is to study the nature of the series $ \sum \int_0^\infty \frac{\mathrm dx}{(1+x^3)^n} $)
-So my question is simply: is there a simple equivalent for $(3n-4)\times(3n-7)\times\cdots\times5\times2$?
-
-REPLY [3 votes]: The integral is the beta function in disguise.
-Let $x=\left(\frac{z}{1-z}\right)^{1/3}$.
-Then
-$$\begin{eqnarray*}
-I_n &=& \int_0^\infty \frac{dx}{(1+x^3)^n} \\
- &=& \frac{1}{3} \int_0^1 dz\, z^{-2/3}(1-z)^{n-4/3} \\
- &=& \frac{1}{3} B(n-1/3,1/3) \\
- &=& \frac{1}{3} \Gamma(1/3) \frac{\Gamma(n-1/3)}{\Gamma(n)} \\
- &=& \Gamma(4/3) \frac{\Gamma(n-1/3)}{\Gamma(n)}.
-\end{eqnarray*}$$
-If the upper limit is finite, a closed expression can be found for your sum,
-$$\sum_{n=1}^N I_n = \frac{3}{2} \Gamma(4/3) \frac{\Gamma(N+2/3)}{\Gamma(N)}. $$
-In the limit $N\to\infty$ the sum is divergent. It goes like $\frac{3}{2} \Gamma(4/3) N^{2/3}$, as hinted at by @RagibZaman.<|endoftext|>
-TITLE: Intermediate fields of cyclotomic splitting fields and the polynomials they split
-QUESTION [7 upvotes]: Consider the splitting field $K$ over $\mathbb Q$ of the cyclotomic polynomial $f(x)=1+x+x^2+x^3+x^4+x^5+x^6$. Find the lattice of subfields of $K$ and for each subfield $F$ find a polynomial $g(x) \in \mathbb Z[x]$ such that $F$ is the splitting field of $g(x)$ over $\mathbb Q$.
-My attempt: We know the Galois group to be the cyclic group of order 6.
It has two proper subgroups of order 2 and 3 and hence we are looking for only two intermediate field extensions of degree 3 and 2. -$\mathbb Q[\zeta_7+\zeta_7^{-1}]$ is a real subfield. -$\mathbb Q[\zeta_7-\zeta_7^{-1}]$ is also a subfield. -How do I calculate the degree and minimal polynomial? - -REPLY [11 votes]: Somehow, the theme of symmetrization often doesn't come across very clearly in many expositions of Galois theory. Here is a basic definition: -Definition. Let $F$ be a field, and let $G$ be a finite group of automorphisms of $F$. The symmetrization function $\phi_G\colon F\to F$ associated to $G$ is defined by the formula -$$ -\phi_G(x) \;=\; \sum_{g\in G} g(x). -$$ -Example. Let $\mathbb{C}$ be the field of complex numbers, and let $G\leq \mathrm{Aut}(\mathbb{C})$ be the group $\{\mathrm{id},c\}$, where $\mathrm{id}$ is the identity automorphism, and $c$ is complex conjugation. Then $\phi_G\colon\mathbb{C}\to\mathbb{C}$ is defined by the formula -$$ -\phi_G(z) \;=\; \mathrm{id}(z) + c(z) \;=\; z+\overline{z} \;=\; 2\,\mathrm{Re}(z). -$$ -Note that the image of $\phi$ is the field of real numbers, which is precisely the fixed field of $G$. This example generalizes: -Theorem. Let $F$ be a field, let $G$ be a finite group of automorphisms of $F$, and let $\phi_G\colon F\to F$ be the associated symmetrization function. Then the image of $\phi_G$ is contained in the fixed field $F^G$. Moreover, if $F$ has characteristic zero, then $\mathrm{im}(\phi_G) = F^G$. -Of course, since $\phi_G$ isn't a homomorphism, it's not always obvious how to compute a nice set of generators for its image. However, in small examples the goal is usually just to produce a few elements of $F^G$, and then prove that they generate. -Let's apply symmetrization to the present example. You are interested in the field $\mathbb{Q}(\zeta_7)$, whose Galois group is cyclic of order $6$. There are two subgroups of the Galois group to consider: -The subgroup of order two: This is the group $\{\mathrm{id},c\}$, where $c$ is complex conjugation. You have already used your intuition to guess that $\mathbb{Q}(\zeta_7+\zeta_7^{-1})$ is the corresponding fixed field. The basic reason that this works is that $\zeta_7+\zeta_7^{-1}$ is the symmetrization of $\zeta_7$ with respect to this group. -The subgroup of order three: This is the group $\{\mathrm{id},\alpha,\alpha^2\}$, where $\alpha\colon\mathbb{Q}(\zeta_7)\to\mathbb{Q}(\zeta_7)$ is the automorphism defined by $\alpha(\zeta_7) = \zeta_7^2$. (Note that this indeed has order three, since $\alpha^3(\zeta_7) = \zeta_7^8 = \zeta_7$.) The resulting symmetrization of $\zeta_7$ is -$$ -\mathrm{id}(\zeta_7) + \alpha(\zeta_7) + \alpha^2(\zeta_7) \;=\; \zeta_7 + \zeta_7^2 + \zeta_7^4. -$$ -Therefore, the corresponding fixed field is presumably $\mathbb{Q}(\zeta_7 + \zeta_7^2 + \zeta_7^4)$. -All that remains is to find the minimal polynomials of $\zeta_7+\zeta_7^{-1}$ and $\zeta_7 + \zeta_7^2 + \zeta_7^4$. This is just a matter of computing powers until we find some that are linearly dependent. 
Using the basis $\{1,\zeta_7,\zeta_7^2,\zeta_7^3,\zeta_7^4,\zeta_7^5\}$, we have
-$$
-\begin{align*}
-\zeta_7 + \zeta_7^{-1} \;&=\; -1 - \zeta_7^2 - \zeta_7^3 - \zeta_7^4 - \zeta_7^5 \\
-(\zeta_7 + \zeta_7^{-1})^2 \;&=\; 2 + \zeta_7^2 + \zeta_7^5 \\
-(\zeta_7 + \zeta_7^{-1})^3 \;&=\; -3 - 3\zeta_7^2 - 2\zeta_7^3 - 2\zeta_7^4 - 3\zeta_7^5
-\end{align*}
-$$
-In particular, $(\zeta_7+\zeta_7^{-1})^3 + (\zeta_7+\zeta_7^{-1})^2 - 2(\zeta_7+\zeta_7^{-1}) - 1 = 0$, so the minimal polynomial for $\zeta_7+\zeta_7^{-1}$ is $x^3 + x^2 - 2x - 1$. Similarly, we find that
-$$
-(\zeta_7 + \zeta_7^2 + \zeta_7^4)^2 \;=\; -2 - \zeta_7 - \zeta_7^2 - \zeta_7^4
-$$
-so the minimal polynomial for $\zeta_7 + \zeta_7^2 + \zeta_7^4$ is $x^2+x+2$.<|endoftext|>
-TITLE: well-defined functions
-QUESTION [6 upvotes]: I am asked to argue whether or not the following two functions are well-defined (textbook definition: a) $y$ is defined for all $x$ in the domain, and b) any $x$ is mapped to exactly one $y$). Both of the functions below are functions from $\mathbb{Q}$ to $\mathbb{Q}$.
-$$f\left(\frac{p}{q}\right) = \frac{p+1}{q}$$
-$$g\left(\frac{p}{q}\right) = \frac{p+q}{p-q}$$
-My argument is that since $0$ is a rational number, we can take, for $f$, $p=0$ and $q=x$ and the function will not be defined. Similarly, we can take $p=q=0$ for $g$, and the function, again, will not be defined.
-But the argument seems to be too easy. Is there something I am missing that won't allow me to use these two counterexamples?
-Thanks!
-
-REPLY [5 votes]: You might want to think of the question like this, with $r =\frac pq$.
-For the first case $f(r) = r + \frac 1q$.
-For the second case note that $q\neq 0$ if $r$ is rational, and $g(r) = \frac {r+1}{r-1}$.
-A well-defined function is one for which you can determine a single defined value for each $r$.
-
-REPLY [3 votes]: First note that in the definition of rationals, $q$ is not allowed to be zero, i.e. the numbers to be considered are of the form $\dfrac{p}{q}$, where $p \in \mathbb{Z}$ and $q \in \mathbb{Z} \backslash \{0\}$.
-Consider the first function $$f \left( \dfrac{p}{q}\right) = \dfrac{p+1}{q}$$ Note that $$\dfrac{kp}{kq} = \dfrac{p}{q}$$ where $k \in \mathbb{Z} \backslash \{0\}$. Hence, for the function to be well defined we need $$f \left( \dfrac{p}{q}\right) = f \left( \dfrac{kp}{kq}\right) $$ i.e. $$\dfrac{p+1}{q} = \dfrac{kp+1}{kq}, \text{ for all }k \in \mathbb{Z} \backslash \{0\},$$ which is clearly not true. Hence, the first function is not well-defined.
-The second function $$g \left( \dfrac{p}{q}\right) = \dfrac{p+q}{p-q}$$ is well-defined except for the rational number $1$ since $$g \left( \dfrac{p}{p}\right) = \dfrac{2p}{0} = \text{ not defined}$$
-For all rationals other than $1$, note that for $g \left( \dfrac{kp}{kq} \right)$ where $p \neq q$ and $k \in \mathbb{Z} \backslash \{0\}$, we get that $$g \left( \dfrac{kp}{kq} \right) = \dfrac{kp+kq}{kp-kq} = \dfrac{p+q}{p-q} = g \left( \dfrac{p}{q} \right)$$
-Hence, the second function is well-defined for $x \in \mathbb{Q} \backslash \{1\}$.
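-The dependence on the chosen representative is easy to see numerically; a minimal sketch in Python using the standard fractions module:
-
-from fractions import Fraction
-
-def f(p, q):                 # evaluated on a particular representative p/q
-    return Fraction(p + 1, q)
-
-def g(p, q):
-    return Fraction(p + q, p - q)
-
-print(f(1, 2), f(2, 4))      # 1 and 3/4: f is not well-defined
-print(g(1, 2), g(2, 4))      # both -3: g agrees on equivalent representatives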
<|endoftext|>
-TITLE: Does the integral of PDF of multi-normal distribution over quarter planes have a closed form?
-QUESTION [5 upvotes]: I am interested in finding a closed form solution (which I suspect does not exist) to the following integral
-$$\displaystyle \int _a^{\infty }\int _b^{\infty } \frac{\exp \left(-\frac{x^2+y^2-2 c x y}{2 \left(1-c^2\right)}\right)}{2 \pi \sqrt{1-c^2}} dy dx$$
-which corresponds to the integral of the PDF$(x,y)$ of a multiNormalDistribution (of covariance coefficient $c$) over the quarter plane $x>a$ and $y>b$. Here $a$ and $b$ are positive and $0<c<1$.
-
-REPLY: The probability $P(X>a,Y>b)$ can be expressed in terms of Owen's T-function.
-By the way, Mathematica v8 has built-in support for the multi-normal distribution with special efficient cases for 2D and 3D Gaussian random vectors; see BinormalDistribution (ref-page), MultinormalDistribution (ref-page), and OwenT (ref-page).<|endoftext|>
-TITLE: Numbers of the form $a^m-b^n$
-QUESTION [5 upvotes]: Can all positive integers $k$ be written as a difference of two perfect powers $k=a^m-b^n$, with $m,n>1$ and $a,b$ positive integers?
-A number is imperfect if it cannot; which numbers are imperfect?
-What is the asymptotics of the number of imperfect numbers less than $x$, as $x\rightarrow\infty$?
-I have proved that all odd numbers are the difference of two squares. How to solve the other cases?
-
-REPLY [5 votes]: See http://oeis.org/A074981 and references there.
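-A bounded search makes the small cases concrete; a minimal sketch in Python (the cutoffs are arbitrary assumptions, and a search like this can only exhibit representations, never prove a number imperfect):
-
-LIMIT = 10**6
-powers = {1}                              # 1 = 1**2
-for b in range(2, 1001):
-    p = b*b
-    while p <= LIMIT:                     # collect b**2, b**3, ...
-        powers.add(p)
-        p *= b
-found = {x - y for x in powers for y in powers if 0 < x - y <= 30}
-print(sorted(k for k in range(1, 31) if k not in found))   # [6, 14]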
The missing two roots $g^0=1$ and $g^5=-1=2$ belong to the prime field, so their minimal polynomials are linear. We have seen that $x^{10}-1$ splits into a product of two linear and two quartic factors in $\mathbb{F}_3[x]$. Hence the irreducible representations of $C_{10}$ over $\mathbb{F}_3$ have dimensions 1,1,4 and 4 respectively. -The previous example generalizes to a study of the so-called cyclotomic cosets. -Also observe that these representations are not absolutely irreducible. As soon as we extend the ground field to contain the appropriate roots of unity, the usual arguments showing that irreducible reps of abelian groups are 1-dimensional kick in. This manifests itself also on the polynomial ring side: over a splitting field the polynomial $x^n-1$ splits into linear factors. -In the case $p=2$ the irreducible modules are extremely well studied objects in coding theory. Namely, they are the minimal cyclic codes of length $n$. - -Oh, an answer is missing! Define a relation $\sim_p$ in $\mathbb{Z}_n$ as follows: $a\sim_p b$ if and only if $ap^k\equiv b\pmod{n}$ for some non-negative integer $k$. This is an equivalence relation (the equivalence classes are the cyclotomic cosets modulo $n$). The number of irreducible representations of $C_n$ over $\mathbb{F}_p$ is equal to the number of equivalence classes $[a]$ of the relation $\sim_p$, and their dimensions are equal to the number of elements $|[a]|$ of the corresponding equivalence class.<|endoftext|> -TITLE: Diophantine equation $x^y-y^x=11$ -QUESTION [10 upvotes]: How can one find all integer solutions to $x^y-y^x=k$, for a given $k$? -Example case: $x^y-y^x=11$. -REPLY [4 votes]: The numbers $x^y - y^x$ blow up when $x$ and $y$ get large. Besides the trivial solutions of the form $(k+1, 1)$ the only values of $k < 1000$ giving solutions are: -$$\begin{align} 3^2 - 2^3 &= 1 \\ 2^5 - 5^2 &= 7 \\ 3^4 - 4^3 &= 17 \\ 2^6 - 6^2 &= 28 \\ 2^7 - 7^2 &= 79 \\ 3^5 - 5^3 &= 118 \\ 2^8 - 8^2 &= 192 \\ 4^5 - 5^4 &= 399 \\ 2^9 - 9^2 &= 431 \\ 3^6 - 6^3 &= 513 \\ 2^{10} - 10^2 &= 924 \end{align}$$ -For a small theoretical analysis, suppose $x < y$ (if $x > y$, the result is generally negative). Then $y^x = x^x (1 + \frac{y - x}{x})^x < x^x e^{y-x}$, so $$x^y - y^x \geq x^x (x^{y-x} - e^{y-x}).$$ So for $x \geq 3$ this blows up enormously. So for small $k$, only small values of $x$ and $y$ are possible.<|endoftext|> -TITLE: How does trigonometric substitution work? -QUESTION [22 upvotes]: I have read my book, watched the MIT lecture and read Paul's Online Notes (which was pretty much worthless, no explanations just examples) and I have no idea what is going on with this at all. -I understand that if I need to find something like $$\int \frac { \sqrt{9-x^2}}{x^2}dx$$ -I can't use any other method except this one. What I do not get is pretty much everything else. -It is hard to visualize the bounds of the substitution that will keep it positive, but I think that is something I can just memorize from a table. -So this is similar to $u$-substitution except that I am not using a single variable but expressing $x$ in the form of a trig function. How does this not change the value of the problem? To me it seems like it would; algebraically, how is something like $$\int \frac { \sqrt{9-x^2}}{x^2}dx$$ the same as $$\int \frac {3\cos x}{9\sin^2 x}3\cos x \, dx$$ -It feels like if I were to put in numbers for $x$ that it would be a different answer. -Anyway, just assuming that works, I really do not understand at all what happens next.
-"Returning" to the original variable to me should just mean plugging back in what you had from before the substitution but for whatever unknown and unexplained reason this is not true. Even though on problems before I could just plug back in my substitution of $u = 2x$, $\sin2u = \sin4x$ that would work fine but for whatever reason no longer works. -I am not expected to do some pretty complex trigonometric manipulation with the use of a triangle which I do not follow at all, luckily though this process is not explained at all in my book so I think I am just suppose to memorize it. -Then when it gets time for the answer there is no explanation at all but out of nowhere inverse sin comes in for some reason. -$$\frac {- \sqrt{9-x^2}}{x} - \sin^{-1} (x/3) +c$$ -I have no idea happened but neither does the author apparently since there is no explanation. - -REPLY [3 votes]: Imagine this. Drop your example for a sec. Is there any way that you can write an equation where $\sin{a} = \sqrt{1-x^{2}}$ ? Plug in an x, and no matter what the x is, you can find an a that satisfies the equation? If so, great! There is only one rule: the two functions must have the same range, or must output the same area of y values, and we use a different variable. For example, sin(a) only outputs numbers between -1 and 1, and the same goes for our second equation. Also, it doesn't use the same variable, so we're good! -If you don't believe me, let's solve for a. Taking the arcsin yields $a = \arcsin{ \sqrt{1-x^{2}}}$. If you pick any value of x, you will get a value fora. If you solved for x, you would get $\sqrt{1-\sin^{2}{a}}=x$. -The important thing to recognize is that we are just finding new functions to represent the same number. -This is all just a different way of looking at the same number. -The new variable ensures a new function, and the equivalent range ensures the new function can still produce the same numbers. -In the equation you give, any x value greater than or equal to 3 would put a negative number under the radical. This is also true for any number less than -3, because squaring gets rid of the negative. This means the domain is all real numbers greater than -3 but less than 3. How convenient! That entire domain is also found in 3 cos(theta), which oscillates between -3 and 3. So they have the same range! Perfect! -Try imagining this a different way. Let's say you want to solve the equation I gave, $\int x^{4} dx$. This is very easy, but sometimes you have to make the easy things hard to make the hard things easy. Let's say that you want to substitute in $x=4y$ and the necessary $dx = dy$. Can you do it? Well, why wouldn't you be able to? They output the same range: real numbers. They have different variables. So let's do it! -We now have -$ -\int \left (4y \right )^{4} dy -$ -which is just -$ -\int \ 256y^{5} dy -$ . Completely pointless but bear with me. This is just equal to -$ -\frac{256y^{5}}{5} -$ -Now let's plug back in x. Our original equation mentioned that $x=y^{2}$. Solving for y we have $y=\frac{x}{4}$. By putting that in our new equation we have our new equation as -$\frac{x^{5}}{5}$. Seem familiar? We have seen that the only two things necessary to do a substitution is that -a. you have the same range in both and -b. you use a different variable. -3 cos(theta) = x is just the same. In the equation referenced you have a range between -3 and 3, as does 3 cos(theta). 
Obviously, they use different variables, so you are good.<|endoftext|> -TITLE: continuous linear functional on a reflexive Banach space attains its norm -QUESTION [12 upvotes]: How does one prove that if $X$ is a reflexive Banach space and $x^*$ a continuous linear functional from $X$ to the underlying field, then $x^*$ attains its norm at some $x$ in $X$ with $\Vert x\Vert = 1$? -My teacher gave us a hint that we should use the statement that if $X$ is a reflexive Banach space, the unit ball is weakly sequentially compact, but I am not sure as to how to construct a sequence in this ball which does not converge. -Thank you. -REPLY [12 votes]: We can use a corollary of the Hahn-Banach theorem, applied to the dual space $X'$ of $X$. We have -$$\lVert x'\rVert=\max_{y\in X'',\lVert y\rVert=1}|y(x')|$$ -(the maximum is reached for a $y_0$ that can be constructed thanks to the Hahn-Banach theorem). -For this $y_0\in X''$, since $X$ is reflexive we can find $u\in X$ such that $J(u)=y_0$, where $J\colon X\to X''$ is the canonical embedding. Hence by definition $J(u)(x')=x'(u)=y_0(x')$, and $u$ (or $-u$) is such that $|x'(u)|=\sup_{\lVert v\rVert=1}|x'(v)|$. -Note that it doesn't follow the hint given by your teacher. Note that the converse is true (if each continuous linear functional attains its norm, the Banach space is reflexive). It's a difficult result, due to James I think.<|endoftext|> -TITLE: If $p$ is prime, does there exist a positive integer $k$ such that $2^k\,p+1$ is also prime? -QUESTION [18 upvotes]: A prime $p$ such that $2\,p+1$ is also prime is called a Sophie Germain prime. It is conjectured that there are infinitely many such primes. This prompted me to ask the following question: given a prime $p$, is there a positive integer $k$ such that $2^k\, p+1$ is prime? Define $\lambda(p)$ to be the smallest such $k$ if it exists, $\infty$ otherwise. If $p$ is a Sophie Germain prime, then $\lambda(p)=1$. The function $\lambda$ doesn't seem to have any regularity, other than the following: $\lambda(p)$ is even if and only if $p\equiv1\mod3$. The values for the first 100 primes: -1, 1, 1, 2, 1, 2, 3, 6, 1, 1, 8, 2, 1, 2, 583, 1, 5, 4, 2, 3, 2, 2, -1, 1, 2, 3, 16, 3, 6, 1, 2, 1, 3, 2, 3, 4, 8, 2, 7, 1, 1, 4, 1, 2, -15, 2, 20, 8, 11, 6, 1, 1, 36, 1, 279, 29, 3, 4, 2, 1, 30, 1, 2, 9, -4, 7, 4, 4, 3, 10, 21, 1, 12, 2, 14, 6393, 11, 4, 3, 2, 1, 4, 1, 2, -6, 1, 3, 8, 5, 6, 19, 3, 2, 1, 2, 5, 1, 5, 4, 8 - -Some extreme values: $\lambda(2\,897)=9\,715$, $\lambda(3\,061)=33\,288$. There are several questions that can be asked about $\lambda$: - -Is $\lambda(p)<\infty$ for all primes $p$? -Is $\lambda$ bounded? -Can something be said about the asymptotic behavior as $N\to\infty$ of the average -$$ -\Lambda(N)= \frac{1}{N}\sum_{n=1}^N\lambda(p_n),\quad N\in\mathbb{N}, -$$ -where $p_n$ is the $n$-th prime? Here is a graph of $\Lambda(N)$ for $1\le N\le700$. - -REPLY [20 votes]: Actually $\lambda$ might not be finite, for example $271129$ is prime and $271129 \cdot 2^k+1$ is never prime. This is a special case of a Sierpinski number. Every number in the set $\{271129 \cdot 2^k+1\}$ is divisible by a number in the set $\{3, 5, 7, 13, 17, 241\}$.<|endoftext|> -TITLE: Examples of non-Riemann surfaces? -QUESTION [6 upvotes]: While studying Complex Analysis, I have come across Riemann Surfaces: -http://mathworld.wolfram.com/RiemannSurface.html -Can anyone please provide some examples of non-Riemannable surfaces? Thanks a lot! -REPLY [13 votes]: Your question doesn't really make a lot of sense. I'll explain why.
-"Riemann" isn't an adjective that's used to classify surfaces. That is, there's not some classification of surfaces into "Riemann surfaces" and "non-Riemann surfaces". -Instead, a Riemann surface is a surface together with some extra structure. In particular, a Riemann surface is a surface with a complex structure, which lets you define things like holomorphic functions on the surface. -Asking for a surface that isn't a Riemann surface is a lot like asking for a set that isn't a group. A group isn't a special kind of set -- it's a set that has been endowed with extra structure, namely a binary operation satisfying certain axioms. Some sets can be a group in several different ways, possibly using several different binary operations. Also, some sets (e.g. the empty set) can't be given the structure of a group. Finally, there's lots of sets that don't have a "natural" or "obvious" group structure, but could be made into a group if you define an appropriate binary operation. -Typical Riemann surfaces include: - -The Riemann sphere -Open subsets of the complex plane -Covers of open subsets of the complex plane or other Riemann surfaces -Quotients of the complex plane by lattices -Hyperbolic surfaces, which can be described as quotients of the unit disk by groups of Möbius transformations. -Nonsingular surfaces in $\mathbb{C}^n$ (or $\mathbb{CP}^n$) defined by polynomial equations (or more generally equations involving holomorphic functions). For example, every complex elliptic curve is a Riemann surface. - -In each case, the way that the surface is constructed gives it a natural complex structure. Other ways of making surfaces (e.g. surfaces you find in $\mathbb{R}^n$) often don't come with a complex structure, so they aren't Riemann surfaces unless you endow them with one. Moreover, some surfaces (such as a torus) can be endowed with a complex structure in several non-equivalent ways. -Finally, as Henry T. Horton points out, non-orientable surfaces cannot be given a complex structure, since holomorphic maps are always orientation-preserving. Every compact orientable surface can be given a complex structure, though in some cases there are several possibilities which lead to different Riemann surfaces.<|endoftext|> -TITLE: Understanding the conductor ideal of a ring. -QUESTION [10 upvotes]: Consider the inclusion of a ring $A$ into its integral closure $B$. The conductor ideal $I$ is defined as $I:=\{a\in A~|~aB\subseteq A\}$. This is supposed to describe the locus where the normalization map $\textrm{Spec}(B)\rightarrow \textrm{Spec}(A)$ fails to be an isomorphism. -Can anyone explain to me why this is the case? -Thanks! - -REPLY [12 votes]: Consider the extension as a short exact sequence of $A$-modules. -$$ 0 \rightarrow A\rightarrow \overline{A}\rightarrow \overline{A}/A\rightarrow 0$$ -This is telling us that, to get an integrally closed ring, we must extend $A$ by $\overline{A}/A$. We can think of $\overline{A}/A$ as the obstruction to $A$ being integrally closed. -Localization commutes with taking integral closures, so for $p$ any prime ideal in $A$, $\overline{(A_p)}=\overline{A}_{\overline{A}p}$. Since localization is flat, we see that -$$ \overline{(A_p)}/A_p = \overline{A}_{\overline{A}p}/A_p = (\overline{A}/A)_p$$ -So $(\overline{A}/A)_p$ is simultaneously measuring... - -the local contribution at $p$ to the global obstruction $\overline{A}/A$, and -the obstruction to $A_p$ being integrally closed. 
- -In particular, $A_p$ is integrally closed (and $Spec(\overline{A}_p)\rightarrow Spec(A_p)$ is an isomorphism) at those primes where $(\overline{A}/A)_p=0$. This is the complement of the support of $\overline{A}/A$ (thought of as a coherent sheaf, if you prefer). -An equivalent definition of the conductor $I$ is the annihilator of the $A$-module $\overline{A}/A$. Thus, $V(I)=Supp(\overline{A}/A)$ is the complement of the set of primes where the normalization map is an isomorphism.<|endoftext|> -TITLE: Epsilon numbers -QUESTION [6 upvotes]: Let $\alpha$ be an ordinal number and define $f_\alpha$ as: - -$f_\alpha(0) = \alpha + 1$ -$f_\alpha(n+1) = \omega^{f_\alpha(n)}$ - -Let $S(\alpha) = \sup\{f_\alpha(n)\ |\ n \in \omega\}$ -Then $S(\alpha)$ is an epsilon number and is the least epsilon number greater than $\alpha$. -Since no natural number is an epsilon number, I think $S(n)=S(m)$ for all natural numbers $n,m$. I know that I'm wrong but I don't know why. Please help.<|endoftext|> -TITLE: Solving $x^4-y^4=z^2$ -QUESTION [11 upvotes]: I have a question: -show that $x^4- y^4 = z^2$ has no nontrivial solution where $x$, $y$ and $z$ are nonzero integers. -I tried infinite descent but I could not make it work. I also tried to use the fact that the square of a number is 1 or 0 mod 4, but got nothing. -Can you help? -thanks -REPLY [22 votes]: Suppose $z^2=y^4-x^4$ with $xyz\not=0$ for the smallest possible value of $y^4$. First we rewrite the -equation as $y^4=x^4+z^2$ so that $\{z,x^2,y^2\}$ is a Pythagorean triple. -It must be primitive, since if some prime $p$ divides $\gcd(x^2,y^2)$, then -$p\,|\,y^2$ implies $p\,|\,y$ which gives $p^4\,|\,y^4$. -Similarly, $p^4\,|\,x^4$, so $p^4\,|\,z^2$. This implies $p^2\,|\,z$, so that -$\left({z/p^2}\right)^2=\left({y/p}\right)^4-\left({x/p}\right)^4$ is a smaller solution. -The Pythagorean triple $z,x^2,y^2$ is primitive and there are two cases: -If $x$ is even, then for some $m>n$, $(m,n)=1$, - and $m\not\equiv n \pmod2$ we have - $$ z=m^2-n^2,\quad x^2=2mn,\quad y^2=m^2+n^2.$$ - Since $m,n$ have opposite parity, we can let $o$ denote the odd number and $e$ the even number among $\{m,n\}$. - The primitive Pythagorean triple $\{n,m,y\}$ gives $$o=t^2-s^2,\quad e=2st,\quad y=t^2+s^2,$$ for some - $t>s$, $(t,s)=1$, and $t\not\equiv s\pmod2$. The formula for $x^2$ now - gives $$(x/2)^2=ts(t^2-s^2)$$ which expresses the product of three relatively prime numbers as a square. - That means all three of them are squares: $s=u^2$, $t=v^2$, and $t^2-s^2=w^2$. - In other words, $v^4-u^4=w^2$ is another - solution to our equation, and it is smaller, since $v^4<y^4$. -If $x$ is odd, then for some $m>n$, $(m,n)=1$, - and $m\not\equiv n\pmod2$ we have - $$ x^2=m^2-n^2,\quad z=2mn,\quad y^2=m^2+n^2.$$ - In this case $m^4-n^4=(xy)^2$ is a smaller solution, since $m^4<(m^2+n^2)^2=y^4$.<|endoftext|> -TITLE: $\frac{\partial f_i}{\partial x_j}=\frac{\partial f_j}{\partial x_i}\implies(f_1,\ldots,f_n)$ is a gradient -QUESTION [5 upvotes]: I was reading a solution when I came across this statement. - -So $$\frac{\partial f_i}{\partial x_j}=\frac{\partial f_j}{\partial x_i}.$$ Then there exists a differentiable function $g$ on $\mathbb{R}^n$ such that $\frac{\partial g}{\partial x_i}=f_i$. - -Why is this true? -REPLY [2 votes]: The following proofs assume 2 variables.
-Proof of necessary condition: -If $(f_i, f_j)$ is the gradient of a function $F$, it means that: -$$ -\frac{\partial{F}}{\partial{x_i}} = f_i \\ -\frac{\partial{F}}{\partial{x_j}} = f_j -$$ -Now, if $F$ has continuous second partial derivatives, then according to Clairaut's theorem: -$$ -\frac{\partial^2{F}}{\partial{x_i}\partial{x_j}} = \frac{\partial^2{F}}{\partial{x_j}\partial{x_i}} -$$ -Therefore: -$$ -\frac{\partial{f_i}}{\partial{x_j}} = \frac{\partial{f_j}}{\partial{x_i}} -$$ -Proof of sufficient condition: -The function $F$, if it exists, has the property: -$$ -\frac{\partial{F}}{\partial{x_i}} = f_i -$$ -By integrating with $x_j$ constant: -$$ -F = \int_{x_{i_0}}^{x_i} f_i \, dx_i + R(x_j) \tag{1} -$$ -Now take partial derivatives of both sides with respect to $x_j$: -$$ -\frac{\partial{F}}{\partial{x_j}} = \frac{\partial}{\partial{x_j}}\int_{x_{i_0}}^{x_i} f_i \, dx_i + R'(x_j) = f_j -$$ -Using differentiation under integral sign: -$$ -\frac{\partial{F}}{\partial{x_j}} = \int_{x_{i_0}}^{x_i} \frac{\partial{f_i}}{\partial{x_j}} \, dx_i + R'(x_j) = f_j -$$ -Using the assumption that $\displaystyle \dfrac{\partial f_i}{\partial x_j} = \dfrac{\partial f_j}{\partial x_i}$: -$$ -\frac{\partial{F}}{\partial{x_j}} = \int_{x_{i_0}}^{x_i} \frac{\partial{f_j}}{\partial{x_i}} \, dx_i + R'(x_j) = f_j -$$ -Which we can write as: -$$ -\left. f_j \right|_{x_{i_0}}^{x_i} + R'(x_j) = f_j -$$ -Therefore: -$$ -R'(x_j) = f_j(x_{i_0}, x_j) -$$ -And: -$$ -R(x_j) = \int_{x_{j_0}}^{x_j} f_j \, dx_j -$$ -Plug in back into (1): -$$ -F = \int_{x_{i_0}}^{x_i} f_i \, dx_i + \int_{x_{j_0}}^{x_j} f_j \, dx_j -$$ -Therefore, we have shown that $F$ exists.<|endoftext|> -TITLE: How do we know that $\exp(x)$ agrees with raising a number to a rational power? -QUESTION [8 upvotes]: This is motivated by an earlier question of mine, in which I realized I was never really presented a definition of $e^x$, or more generally, what it means to raise a (positive) real number to an irrational power. -I know that the definition of $a^b$ with $a \in \mathbb{R}^+, b \in \mathbb{Q}$ is pretty straightforward in terms of repeated multiplication and the property that $a^{bc}=(a^b)^c$. I also know that one can define $a^b$ where $b \in \mathbb{R} - \mathbb{Q}$ using limits. This is stated, for example, in this Math.SE question. -Other way to define exponentiation with real powers is with the function $\exp(x)$ or $e^x$, which has many equivalent definitions. For example, one may define it as $e^x = \lim\limits_{n \to \infty} (1+\frac{x}{n})^n$, or as the unique solution to $y' = y$ with $y(0)=1$. Wikipedia has a whole page stating these definitions and showing that they are equivalent to each other. -What I haven't seen is a proof that this new $e^x$ behaves just like the old way of doing exponentiation when $x \in \mathbb{Q}$. If I were to guess, I'd say it's related to Wikipedia's fifth defintion: it is the unique (with some conditions) function that satisfies $f(1) = e$ and $f(x+y)=f(x)f(y)$. However, that defintion seems to involve more advanced concepts than the other ones, concepts which I don't really understand right now. -Is there a proof of the fact that $\exp(x)$ is equivalent to the definition of exponentiation for rational powers? - -REPLY [5 votes]: Let $\exp(x)$ denote the exponential function. You can define this any way that you like, but we will assume the following facts: -Fact 1. The derivative of $\exp(x)$ is $\exp(x)$, and $\exp(0)=1$. -Fact 2. Let $f(x)$ be any differentiable function. 
If $f(0) = 1$ and $f'(x) = f(x)$ for all $x\in\mathbb{R}$, then $f(x) = \exp(x)$ for all $x\in\mathbb{R}$. -We will also assume the Power Rule for rational exponents. From this, we can prove the following theorem: -Theorem. Let $e = \exp(1)$. Then $e^q = \exp(q)$ for any rational number $q$. -Proof: Let $q\in\mathbb{Q}$, and let $f\colon\mathbb{R}\to\mathbb{R}$ be the function $f(x) = [\exp(x/q)]^q$. Note that $f(0) = 1^q = 1$. Furthermore, by the Power Rule and the Chain Rule, we have -$$ -f'(x) \;=\; q[\exp(x/q)]^{q-1} \exp(x/q)\, (1/q) \;=\; [\exp(x/q)]^q \;=\; f(x) -$$ -It follows that $f(x) =\exp(x)$ for all $x\in\mathbb{R}$, so -$$ -\exp(q) \;=\; f(q) \;=\; [\exp(q/q)]^q \;=\; e^q.\tag*{$\square$} -$$ - -REPLY [2 votes]: Let's use the definition $$e^x=\lim_{n\to\infty}\left(1+{x\over n}\right)^n$$ and prove $e^{pq}=(e^p)^q$. -We have $$e^{pq}=\lim_{n\to\infty}\left(1+{pq\over n}\right)^n$$ and $$(e^p)^q=\left(\lim_{n\to\infty}\left(1+{p\over n}\right)^n\right)^q=\lim_{n\to\infty}\left(1+{p\over n}\right)^{qn}=\lim_{n\to\infty}\left(1+{pq\over qn}\right)^{qn}=\lim_{m\to\infty}\left(1+{pq\over m}\right)^m$$ which is the same thing. -But we've now proved the property that you used to define rational powers.<|endoftext|> -TITLE: Any nonabelian group of order $6$ is isomorphic to $S_3$? -QUESTION [6 upvotes]: I've read a proof at the end of this document that any nonabelian group of order $6$ is isomorphic to $S_3$, but it feels clunky to me. -I want to try the following instead: -Let $G$ be a nonabelian group of order $6$. By Cauchy's theorem or the Sylow theorems, there is a element of order $2$, let it generate a subgroup $H$ of order $2$. Let $G$ act on the quotient set $G/H$ by conjugation. This induces a homomorphism $G\to S_3$. I want to show it's either injective or surjective to get the isomorphism. -I know $n_3\equiv 1\pmod{3}$ and $n_3\mid 2$, so $n_3=1$, so there is a unique, normal Sylow $3$-subgroup. Also, $n_2\equiv 1\pmod{2}$, and $n_2\mid 3$, so $n_2=1$ or $3$. However, if $n_2=1$, then I know $G$ would be a direct product of its Sylow subgroups, but then $G\cong C_2\times C_3\cong C_6$, a contradiction since $G$ is nonabelian. So $n_2=3$. Can this info be used to show the homomorphism is either injective or surjective? Thanks. - -REPLY [9 votes]: You can't talk about the quotient $G/H$ unless you first prove that $H$ is normal (which you won't be able to do, since a group of order $6$ always has a normal $3$-subgroup, and if it has a normal $2$-subgroup then it is abelian). If you are trying to talk about the cosets of $H$ in $G$, then the action by conjugation is not well-defined, since the coset $H$ is not mapped to a coset of $H$ under conjugation by any element not in $H$ (precisely because $H$ is not normal). -If you want to use actions, you can do it: let $H$ be a subgroup of order $2$ and consider the action of $G$ on the left cosets of $H$ in $G$ by left multiplication. This gives you a homomorphism $G\to S_3$; the kernel is contained in $H$, but since $H$ is of order $2$ and not normal, that means that the kernel is trivial, and so the map is an embedding. Since both $G$ and $S_3$ have order $6$, it follows that the map is an isomorphism.<|endoftext|> -TITLE: Probability that the last ball is white? 
-QUESTION [8 upvotes]: A jar contains $m=90$ white balls and $n=10$ red balls; the balls are drawn under the following constraints: - -the ball is thrown away if it is white; -the ball is put back if it is red and another ball is drawn; this time, the ball is thrown away no matter what color it is. - -The question is: what is the probability that, once all balls are exhausted, the last one is white? My guess is $\frac{1}{2}$ but I might be wrong. Please show how you deduce the answer. Thanks! -EDIT: Thanks for posting the solution and simulation, which are all appreciated. B. E. Oakley and R. L. Perry discussed a very similar problem in their paper A Sampling Process, published in The Mathematical Gazette, Vol. 49, No. 367 (Feb. 1965), pp. 42-44. -The problem presented in the paper is: -A bag contains m > 0 black balls and n > 0 white balls. A sequence of balls from the bag is discarded in the following manner: -(i) A ball is chosen at random and discarded. -(ii) Another ball is chosen at random from the remainder. If its colour is different from the last it is replaced in the bag and the process repeated from the beginning (i.e. (i)). If the second ball is the same colour as the first it is discarded and we proceed from -(ii). Thus the balls are sampled and discarded until a change in colour occurs, at which point the last ball is replaced and the process starts afresh. -The question is: what is the probability that the final ball should be black? -Their induction gives $\frac{1}{2}$, which is what I have here. Apparently having no constraint on colors, as in the paper, changes the situation significantly. -But the story hasn't ended. At the end of their paper, they proposed a seemingly more interesting problem: What is the solution if there are balls of 3 different colours and the sampling process is as before? -REPLY [10 votes]: I'll continue from the answer by Ross Millikan, and give an exact solution (I'll just replace $R$ and $W$ by lower case $r$ and $w$, which intimidate me less). So for $r,w\in\mathbb N$, not both zero, let $P(r,w)$ denote the probability of leaving a white ball as the last one when starting with $r$ red and $w$ white balls. One obviously has $P(r,0)=0$ for any $r>0$ and $P(0,w)=1$ for any $w>0$; moreover by the argument Ross gives one has the recurrence relation -$$ - P(r,w) = \frac{r^2}{(r+w)^2}P(r-1,w) + \frac{w^2+2rw}{(r+w)^2}P(r,w-1) \quad\text{for all } r,w>0, -$$ -since the first step reduces the problem defining $P(r,w)$ either to the one defining $P(r-1,w)$ or to the one defining $P(r,w-1)$, with the indicated factors as probabilities for the first step. -Now this recurrence has the (surprisingly simple) solution -$$ - P(r,w)=\frac{w}{(r+1)(r+w)}\quad\text{for }(r,w)\in\mathbb N^2\setminus\{(0,0)\}. -$$ -The proof is by a simple induction on $r+w$: expanding the recurrence relation and factoring out $\frac w{(r+w)^2(r+w-1)(r+1)}$ leaves $r(r+1)+(w+2r)(w-1)$ as the other factor, and rewriting that factor as $(r+w)(r+w-1)$ permits some cancelling, arriving at the desired result. -I did not guess the formula just like that of course. Rather I calculated a number of values $P(1,w)$ with exact rational arithmetic, noticing substantial simplifications and easily guessing $P(1,w)=\frac{w}{2(w+1)}$. I proceeded similarly for $P(2,w)$, again with significant simplifications, after which I guessed the general form. -For the concrete problem one gets $P(10,90)=\frac9{110}\approx0.08181818$, a chance of a bit less than $1$ in $12$ to be left with a white ball.
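-The closed form is easy to check against the recurrence using exact rational arithmetic; here is a minimal Python sketch (the function name P simply mirrors the notation above):
-    from fractions import Fraction
-    from functools import lru_cache
-
-    @lru_cache(maxsize=None)
-    def P(r, w):
-        # Probability that the last discarded ball is white,
-        # starting from r red and w white balls.
-        if w == 0:
-            return Fraction(0)
-        if r == 0:
-            return Fraction(1)
-        return (Fraction(r * r, (r + w) ** 2) * P(r - 1, w)
-                + Fraction(w * w + 2 * r * w, (r + w) ** 2) * P(r, w - 1))
-
-    assert P(10, 90) == Fraction(9, 110)  # matches w / ((r+1)(r+w))
-Running the same assertion over a range of small $(r,w)$ pairs confirms the closed form exactly.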
In fact one sees that throwing in any number of white balls (with $10$ red balls) initially never even raises the chance to $1$ in $11$. And if there are any red balls at all initially, it is always more likely to be left with one of them than with a white ball!<|endoftext|> -TITLE: Real roots of $3^{x} + 4^{x} = 5^{x}$ -QUESTION [5 upvotes]: How do I show that $3^{x}+4^{x} = 5^{x}$ has exactly one real root? -REPLY [3 votes]: Consider our equation as: -$$\left(\frac{3}{5}\right)^x+\left(\frac{4}{5}\right)^x=1.$$ -Notice that the left side is a sum of 2 monotonically decreasing functions, so it is itself monotonically decreasing and takes the value $1$ at most once. Since $x=2$ clearly works ($3^2+4^2=5^2$), it is the only solution. -The proof is complete.<|endoftext|> -TITLE: Saturated ideal -QUESTION [7 upvotes]: Let $k$ be a field, let $I \triangleleft k[X_1,\dots,X_n]=S$ be an ideal and fix $f \in S$. -The saturated ideal of $I$ is $I^{sat}=I:f^\infty=\{g \in S \mid \exists m \in \mathbb{N} \ s.t. \ f^mg \in I \}=\displaystyle\bigcup_{i \geq 1} I:f^i$. -Prove that $I^{sat}=I:f^m \Leftrightarrow I:f^m=I:f^{m+1}$. -My attempt: -"$\Rightarrow$" Since we have the ascending chain $I:f \subseteq I:f^2 \subseteq \dots$ and $S$ is Noetherian, it follows that the $m$ that we are looking for is exactly the one that stops the chain, i.e. the one from which on all ideals in the chain are equal. From $I^{sat}=\displaystyle\bigcup_{i \geq 1} I:f^i$, we have that $I^{sat}=I:f^m$. -"$\Leftarrow$" We have to show that all of the ideals $I:f^q$ are in $I:f^m$, i.e. the chain stops after $m$ steps. We have to prove $\{g \in S \mid f^mg \in I \} = \{h \in S \mid f^{m+1}h \in I \}$. "$\subseteq$" is clear, from the chain. -What about the reverse inclusion? It seems like going around in circles, so it must be something easy that I don't see. -Thank you. -REPLY [3 votes]: Perhaps I am misunderstanding your problem, but here goes: You want to show that if $I:f^m = I:f^{m+1}$ then this implies that $I^{sat} = (I:f^m)$. Now if $I:f^m = I:f^{m+1}$ then by induction it is easily seen that $I:f^m = I:f^{m+k}$ for all non-negative integers $k$ (indeed, if $f^{m+k+1}x \in I$ then $f^kx \in I:f^{m+1} = I:f^m$, so $x \in I:f^{m+k}$). Now we claim that -$$\bigcup_{i \geq 1} (I : f^i) = \bigcup_{i=1}^m (I:f^i).$$ -One inclusion is obvious, for the other suppose that there is $x \in \bigcup_{i \geq 1} (I : f^i)$ such that $x \notin \bigcup_{i=1}^m (I:f^i)$. Now the former assumption gives that there is a positive integer $k$ such that $f^kx \in I$. The latter assumption gives that $k > m$. However we already proved that $I:f^{m+1} = I:f^{m+2} = \ldots = I:f^k$, so that $f^kx \in I \implies f^mx \in I$, i.e. $x \in (I:f^m)$, which is a contradiction. This establishes the equality above. -Now it is clear that $(I:f^m ) \subseteq I^{sat}$. It now remains to show the other inclusion, namely that $\bigcup_{i=1}^m (I:f^i) \subseteq (I:f^m)$. Now take any $x$ in the left hand side, then $x \in (I : f^n)$ for some $1 \leq n \leq m$. This means that $f^nx \in I$ so that $f^{m-n}(f^nx) \in I$ as well. In other words $x$ is such that $f^mx \in I$, i.e. $x \in (I:f^m)$ and your claim is proven.<|endoftext|> -TITLE: Finite ordinal to the power a = a -QUESTION [6 upvotes]: Let $\alpha>\omega$ be an ordinal such that -$2^\alpha$ = $\alpha$. -Is $\alpha$ then an epsilon number? -I have tried many different ways, but I can only work with the left side of $\alpha$ (e.g., I have proved such ordinals satisfy $\omega\alpha = \alpha$, etc.), but I think it's critical to work with the right side of $\alpha$ in the proof and I can't handle this.
Please help. -Plus, I want to know whether $\alpha$ is an epsilon number even when the base is not $2$ but any finite ordinal. -REPLY [3 votes]: Lemma: $2^{\omega\alpha}=\omega^\alpha.$ -Proof: Given $2^{\omega\alpha}=\omega^\alpha,$ we have $2^{\omega(\alpha+1)}=2^{\omega\alpha+\omega}=\omega^{\alpha+1},$ where the first equality is by definition of the function $\omega x$ and the second is by the hypothesis and calculation of the limit of $2^{\omega\alpha+n}.$ -At limit ordinals, it's true because the composition of continuous functions is continuous.$\square$ -Writing $\alpha$ in Cantor normal form to base $\omega,$ if $\alpha$ has any $\omega^n$ terms for finite $n,$ then $2^\alpha>\alpha$ by normality of $2^x$ and inspection: -Since $2^x$ is a normal function, we know $2^\alpha\geq\alpha.$ Addition of 1 in the exponent is multiplication by 2; addition of $\omega$ in the exponent is multiplication by $\omega;$ addition of $\omega^2$ yields multiplication by $\omega^\omega$ (all of these are meant as on the right). All of these produce larger ordinals. -If $\alpha$ does not have any $\omega^n$ terms, then $\alpha=\omega\alpha$ and so if $2^{\alpha}=\alpha$ then $\alpha=\omega^\alpha$ by the lemma; this is the defining property of an $\epsilon$-number. -Note that the same applies to any finite base.<|endoftext|> -TITLE: Is a sphere a closed set? -QUESTION [14 upvotes]: The unit sphere in $\mathbb{R}^3$ is $\{(x,y,z) : x^2 + y^2 + z^2 = 1 \}$. I always hear people say that this is closed and that it has no boundary. But isn't every point on the sphere a boundary point? For every point $x$ on the sphere, any open ball in $\mathbb{R}^3$ (defined by $\{ y : |y-x| < r\}$) contains points both on the sphere and inside/outside the sphere; hence it is a boundary point. -I also sometimes see people say that instead of an open ball in $\mathbb{R}^3$, I should be looking at an open neighbourhood of the sphere, but I don't think this makes any sense, as the definition of the open set is as I gave above. -Or is it that people mean it's a closed surface (analogous to a closed curve)? -REPLY [24 votes]: You are correct that its boundary as a subset of $\mathbb R^3$ is itself. However, there is another meaning of the word "boundary" being used by all the people you hear talking. As a two-dimensional surface, every point on the sphere has a neighborhood that "looks like" an open disk. A boundary point of a surface would be a point where all the neighborhoods have an "edge". -It is true that you have to think of neighborhoods within the sphere itself. This isn't so far fetched from our everyday experience. After all, when you think about the neighborhood you live in, you might think of homes within a quarter-mile "horizontal" radius, but you're probably not thinking about the air a quarter mile above your head or the rock a quarter mile below ground level. The neighborhoods within the sphere itself are analogous to neighborhoods as you would view them on a map, not as balls protruding above and below the sphere's surface. -If $S$ is the sphere, one way to define the open sets of $S$ relative to $S$ is using the subspace topology (or the restricted metric) from $\mathbb R^3$. An open "ball" centered at a point $x\in S$ has the form $\{y\in S:|x-y|<r\}$.<|endoftext|> -TITLE: A group of order $p^2q$ will be abelian -QUESTION [15 upvotes]: This problem is not homework, but I was stuck on it when I reviewed the Sylow theorems and problems.
I am really interested in finding a test by which we can examine whether a finite group of a certain order is abelian or not. The problem states: - -$G$ is a finite group of order $p^2q$ wherein $p$ and $q$ are distinct primes such that $p^2 \not\equiv 1$ (mod $q$) and $q \not\equiv 1$ (mod $p$). Then $G$ is an abelian group. - -We know that $n_p=1+kp$ and it must divide $p^2q$. So $1+kp \mid q$, and because $q \not\equiv 1$ (mod $p$), we get $n_p=1$. This means that we have a unique Sylow $p$-subgroup of $G$, say $P$, which is therefore normal and of course isomorphic to $\mathbb Z_{p^2}$ or $\mathbb Z_p\times\mathbb Z_p$. What should I do next? Thanks for helping me. -REPLY [8 votes]: @Babak, your problem is a special case of the following. A positive integer $n=p_1^{a_1}\cdots p_t^{a_t}$, $p_i$ distinct, is said to have good factorization if and only if $p_i^{k} \not\equiv 1$ (mod $p_j$) for all integers $i$, $j$ and $k$, where $1 \leqslant k \leqslant a_i$. -Theorem: The groups of order $n$ are all abelian if and only if $n$ is cube-free and has good factorization. -See for example the nice paper Nilpotent and Solvable Numbers by J. Pakianathan and K. Shankar<|endoftext|> -TITLE: Tower property of conditional expectation -QUESTION [5 upvotes]: I'm trying to prove the "tower property" of conditional expectations, -$$ -E[V\mid W] = E[\ E[V\mid U,W]\ \mid W\ ], -$$ -where $U$, $V$ and $W$ are any random variables. $E[X \mid Y]$ is itself a random variable $f(Y)$ where $$f(y) = E[X \mid Y = y] = \sum_x x\cdot Pr[X=x\mid Y=y].$$ Keeping this observation in mind, I still don't see why $U$ is "averaged out" when moving from the right hand side to the left side. -REPLY [6 votes]: The last equality in your observation does not apply in general (i.e. if $X$ is not discrete). Let $U,V,W$ be random variables such that $V\in \mathcal{L}^1(P)$. In order to show that -$$ -E[V\mid W]=E[E[V\mid U,W]\mid W] -$$ -we note that the right hand side is indeed $\sigma(W)$-measurable, so we only need to check the defining equation, i.e. check that -$$ -\int_A V\,\mathrm{d}P=\int_A E[V\mid U,W]\,\mathrm{d}P -$$ -for all $A\in\sigma(W)$. Let such an $A$ be given. Then $A\in\sigma(W)\subseteq \sigma(U,W)$ and therefore -$$ -\int_A E[V\mid U,W]\,\mathrm{d}P=\int_A V\,\mathrm{d}P -$$ -and we are done.<|endoftext|> -TITLE: If $d_1, d_2$ are metrics on $X$, is it true that $d_1 +d_2 $, $d_1 - d_2$, $d_1\cdot d_2$, $\sqrt d_1$ are metrics on $X$? -QUESTION [5 upvotes]: If $d_1, d_2$ are metrics on $X$, is it true that $d_1 +d_2 $, $d_1 - d_2$, $d_1\cdot d_2$, $\sqrt d_1$ are metrics on $X$? -Here is my attempt: -If we take $d_1 = d_2 $ = standard metric on the real line, then $d_1\cdot d_2 = d_1^2$ is not a metric. -$d_1 - d_2$ may not be a metric because it may not even be always non-negative. -But I am not sure about the others. I need help. -Thanks for giving me time. -REPLY [5 votes]: To prove that $d_1+d_2$ is a metric, just check that it has each of the properties of a metric. The only one that takes any work is the triangle inequality, and it’s not hard, using the fact that $d_1$ and $d_2$ both satisfy the triangle inequality. That leaves only $\sqrt{d_1}$, and it’s clear that the only question is whether it satisfies the triangle inequality.
-In other words, must it always be true that $$\sqrt{d_1(x,z)}\le\sqrt{d_1(x,y)}+\sqrt{d_1(y,z)}\;?\tag{1}$$ -Since both sides of $(1)$ are non-negative, $(1)$ holds iff the inequality that you get by squaring both sides of $(1)$ holds; does it?<|endoftext|> -TITLE: The Goldbach Conjecture and Hardy-Littlewood Asymptotic -QUESTION [15 upvotes]: A source I am reading refers to the Goldbach conjecture (that every even number is the sum of two primes), and then immediately follows with the "Hardy-Littlewood conjecture" that - -$\sum \limits_{n \leq N} \Lambda(n) \Lambda(N-n) = 2 \prod \limits_{p \geq 3} \left(1-\frac{1}{(p-1)^2}\right) \left( \prod \limits_{p | N} \frac{p-1}{p-2}\right)N^{1+o(1)}$ - -which is termed "Goldbach for almost all even numbers", and apparently this also - -$\Longleftrightarrow \prod \limits_{p | N} (\frac{p-1}{p-2}) N^{1+o(1)} = O(\log N)^c$. Here, $\Lambda(n)$ denotes the Von-Mangoldt function. - -It then states the following theorem: - -Let $A \in \mathbb{R}$. Then for all but $\frac{x}{\log^A x}$ even numbers $N \leq x$, we have $\sum \limits_{n \leq N} \Lambda(n) \Lambda(N-n) = 2 \prod \limits_{p \geq 3} \left(1-\frac{1}{(p-1)^2}\right) \left( \prod \limits_{p | N} \frac{p-1}{p-2}\right)N (1+o_A(1))$, where $o_A$ denotes the fact that the constant in the limiting behaviour may depend on $A$. - -Now I can't see how this is the same as Goldbach at all really. I can see that if $N= p+q$ is the sum of 2 primes then the corresponding term on the LHS will be nonzero, but the Von-Mangoldt function is also nonzero for powers of primes, so it might be nonzero for some $N= P^i + Q^j$. I am beginning to think the RHS may be some sort of probabilistic slant on the conjecture, but can't quite see it. -It may be the case that the Von-Mangoldt function is just negligible on all prime powers except the primes themselves (this is certainly often the case), but at the least it seems like this should have been stated somewhere since it certainly doesn't seem like a trivial deduction to me. Or is this simply an "approximation to Goldbach"; namely a relationship which seems to imply that Goldbach might well be true, but as I have said doesn't remove the problem of the prime powers? (Obviously I am aware that Goldbach is unproved, but the text doesn't clarify whether this is a proved statement weaker than Goldbach, or an unproved statement equivalent to Goldbach: I suspect the latter.) -I also can't see how the "$\Longleftrightarrow$" follows from the first conjecture; I'd be very grateful if someone could help me understand what's going on here, at least heuristically if not formally. -Next, the text goes on to "prove" Vinogradov from the latter theorem which has been stated (I say "prove" because I can't see how the proof works). It says: - -Corollary (Vinogradov) Every sufficiently large odd number is the sum of 3 primes. - Proof: Let N be odd. Then taking $A=2$ in the theorem, there is some prime $p\leq N/2$ for which the Hardy-Littlewood asymptotic holds. In particular, $N-p$ is the sum of 2 primes. - -Now this time I really can't see what's happening: the sum over the Von-Mangoldt function on the LHS will surely just go to zero almost every time $N-n$ is a sum of 2 primes (unless this sum of 2 primes is a prime power of course), and I don't see how the RHS tells us nothing about the LHS except that it is not "extremely small". Could anyone explain what's going on here to me? 
-It is possible I transcribed some of this material wrong, though I do not see where I might have made an error which could have caused all my confusions simultaneously. Again, any insight you could provide would be desperately appreciated; many thanks in advance. - -REPLY [7 votes]: There are a few irregularities in your opening statements that seem to be typos. The statement of the Hardy-Littlewood conjecture should be - -$\sum \limits_{n \leq N} \Lambda(n) \Lambda(N-n) = 2 \prod \limits_{p\ge3} \left(1-\frac{1}{(p-1)^2}\right) \left( \prod \limits_{p \mid N, p\ge 3} \frac{p-1}{p-2}\right)N(1+o(1)),$ - -rather than ending with $N^{1+o(1)}$, as originally stated. The previous version, while not any less true, is no more precise than just saying $\sum \limits_{n \leq N} \Lambda(n) \Lambda(N-n) = N^{1+o(1)}$. In particular, I claim that - -$2 \prod \limits_{p\ge3} \left(1-\frac{1}{(p-1)^2}\right) \left( \prod \limits_{p \mid N, p\ge 3} \frac{p-1}{p-2}\right) = N^{o(1)}.$ - -To see this, note that the infinite product ($p\ge 3$) converges to some positive constant, so the factor $2 \prod \limits_{p \geq 3} \left(1-\frac{1}{(p-1)^2}\right)$ is $N^{c/(\log N)} = N^{o(1)}.$ The remaining factor is at least 1, and smaller than $\prod\limits_{p \mid N, p\ge 3} \left(1 + \frac{2}{p-1}\right) \le \prod\limits_{p\mid N} \left(1 + \frac{1}{p-1}\right)^2 = \left(\frac{N}{\phi(N)}\right)^2,$ which is well-known to be $N^{O(1/\log\log N)} = N^{o(1)}.$ -The “equivalent” formulation in the second equation is a bit baffling: it is very much false as written (the left-hand side is certainly larger than $\sqrt{N}$ and therefore much larger than $O(\log N)^c$), but I don't see an obvious correction. I also cannot understand why you write “the sum over the Von-Mangoldt function on the LHS will surely just go to zero” — can you explain? I think I can address your remaining questions. - -The relation between $\sum \limits_{n \leq N} \Lambda(n) \Lambda(N-n)$ and Goldbach problem: you are correct in that the summation also includes solutions of the form $p^r + q^s = N$. However, the number of solutions containing a prime power is very small, certainly less than $O(\sqrt{N})$ (to see this, note that there are fewer than $\log N/\log 2$ choices for $r > 1$, and only $\pi(\sqrt{N})$ choices for $p$. -Therefore even if we subtract all prime power solutions from $\sum \limits_{n \leq N} \Lambda(n) \Lambda(N-n)$, we lose at most $O(\sqrt{N} \log^2 N) = N^{1/2 + o(1)}$, which is negligible compared to $N(1+o(1))$. The prime powers, as usual, have no effect here (though they do play a role in prime number races). - -Every large odd number is the sum of 3 primes. The Theorem shows that there are very few exceptions to Goldbach's conjecture. In particular, for sufficiently large $N$ there are at most $N/(\log N)^2$ even numbers between $N/2$ and $N$ which are not the sum of two primes. But there are about $N/(2 \log N)$ odd primes $p$ between $1$ and $N/2$, and for each one, $N-p$ is a distinct even number between $N/2$ and $N$. -If you can find just one prime $p$ such that $N-p$ is the sum of two primes $q+r$, you can write $N = p+q+r$. 
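-(For a small illustration of this last step: with $N=31$ one may take $p=3$; then $N-p=28=5+23$ is a sum of two primes, giving $31=3+5+23$.)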
By comparing $N/(2 \log N)$ to $N/(\log N)^2$, we can see that there are more primes $p$ than there are ineligible values of $N-p$, so this is always possible once $N$ is large enough.<|endoftext|> -TITLE: Approximation by $C^1$ path of a Lipschitz continuous path -QUESTION [6 upvotes]: I was wondering if the following equality holds: -$$\inf\left\{\int_0^1 G(\gamma(t))|\gamma'(t)|dt, \gamma \in X \cap (\text{Lipschitz})\right\}\stackrel{??}{=}\inf\left\{ \int_0^1 G(\gamma(t))|\gamma'(t)|dt, \gamma \in X \cap C^1\right\}$$ -where $X=\{ \gamma:[0,1]\to \Bbb{R}^d : \gamma(0)=a,\gamma(1)=b,\ |\gamma'|>0\}$ and $G$ is a continuous function $G :\Bbb{R}^d \to [0,\infty)$ with zeros only at $a,b$. I found a result which states that a Lipschitz continuous function can be uniformly approximated by smooth function in the $L^\infty$ norm, but the result on the derivative is not very strong. - -REPLY [3 votes]: The details are slightly tedious, but it is a fact that given Lipschitz $\gamma \in X$ you can find $\sigma \in X \cap C^1$ with $\|\gamma - \sigma\|_\infty < \epsilon$ and $\|\gamma' - \sigma'\|_{L^1} < \epsilon$. (Idea: Since continuous functions are dense in $L^1$, choose a continuous $\lambda$ with $\|\lambda - \gamma'\|_{L^1} < \epsilon/2$ and look at $\sigma_1(t) = \gamma(0) + \int_0^t \lambda(s)\,ds$. Then tweak $\sigma_1$ a little bit to guarantee it has the correct endpoint and a nonzero derivative.) Now you can check that if $\gamma$ comes close to achieving the infimum then so does $\sigma$. -Essentially this is the fact that $C^1([0,1], \mathbb{R}^d)$ is dense in the Sobolev space $W^{1,1}([0,1], \mathbb{R}^d)$. As Leonid said, you only need $\gamma$ to be absolutely continuous, and if you wanted you could choose $\sigma$ to be $C^\infty$ or even polynomial.<|endoftext|> -TITLE: Baby Rudin Problem 7.16 -QUESTION [8 upvotes]: Problem 7.16: Suppose $\{f_n\}$ is an equicontinuous family of functions on a compact set $K$ and $\{f_n\}$ converges pointwise to some $f$ on $K$. Prove that $f_n \to f$ uniformly. - - -Now for this problem I assume that $f_n,f : K \subset \Bbb{R} \rightarrow \Bbb{R}$ with the usual euclidean metric. Even though this is not assumed in the problem, I assume this to simplify matters first. -Now I believe I have proven that $f_n$ is uniformly cauchy on $K$ as follows. By equicontinuity of the family $\{f_n\}$ I can choose $\delta> 0$ such that $|x-p_i|< \delta$ will imply that $|f_m(x) - f_m(p_i)| < \epsilon$, and $|f_n(x) - f_n(p_i)| < \epsilon$. -Consider the collection $\{B_\delta(x)\}_{x \in K}$ that clearly covers $K$, by compactness of $K$ we get that there are finitely many points $p_1,\ldots p_n \in K$ such that $\{B_\delta(p_i)\}_{i=1}^n$ is a cover for $K$. Furthermore, because $f_n \rightarrow f$ pointwise for each $x \in K$ we get a cauchy sequence of numbers, so in particular given any $\epsilon > 0$, for each $p_i$ there exists $N_i$ such that $m,n \geq N_i$ implies that $|f_m(p_i) - f_n(p_i) | < \epsilon$. Taking -$$N = \max_{1 \leq i \leq n} N_i$$ -gives that $m,n\geq N$ implies that $|f_m(p_i) - f_n(p_i)| < \epsilon$ for all $i$. -Now we can finally put everything together to prove uniform cauchyness, take any $x \in K$ so that $x \in B_\delta(p_j)$ for some $1 \leq j \leq n$. Then -$$\begin{eqnarray*} |f_n(x) - f_m(x)| &\leq& |f_n(x) - f_n(p_i) | + | f_n(p_i) - f_m(p_i)| + |f_m(p_i) - f_m(x)| \\ -&<& \epsilon + \epsilon + \epsilon \\ -&=& 3\epsilon. 
\end{eqnarray*}$$ -The first and last term being less than $\epsilon$ come from equicontinuity, the middle term being less than $\epsilon$ comes from the derivation just before. Now what I am thinking of doing now to prove uniform cauchyness is to take the sup on the left, is this something legal I can do? Also are there are any mistakes in the proof above? - -Here is some context why I want to prove uniform cauchyness: Suppose I know that $\{f_n\}$ is uniformly cauchy. Then I know that given any $\epsilon > 0$, there exists $N \in \Bbb{N}$ such that $m,n\geq N$ implies that $|f_n(x) - f_m(x)| < \epsilon$ for all $x \in K$. Now we do this trick of fixing one of the indices. Fix $n$ to be some integer greater than $N$ and let $m\rightarrow \infty$, we see that -$$\begin{eqnarray*} |f_n - f| &=& \lim_{m\rightarrow \infty} | f_n - f_m| \\ -&\leq& \epsilon \end{eqnarray*} $$ -by the limit comparison test. Recall that $f$ was the pointwise limit of $\{f_n\}$. But then since $n$ was any arbitrary integer greater than $N$ we have that $f_n \rightarrow f$ uniformly. - -REPLY [4 votes]: Your argument is great until the point where you say "Now what I am thinking of doing..." -You have already proven that $\{f_n\}$ is uniformly Cauchy since you've shown that for any $\epsilon>0$, there is an $N$ so that if $n,m>N$, $|f_m(x)-f_n(x)|<3\epsilon$ for all $x\in K$. -To show uniform convergence, we simply need to say that, because of the pointwise convergence, for any $x$, there is some $M>N$ so that if $m>M$, $|f_m(x)-f(x)|<\epsilon$. Then, for any $n>N$, we have that -$$ -\begin{align} -|f_n(x)-f(x)| -&\le|f_n(x)-f_m(x)|+|f_m(x)-f(x)|\\ -&<3\epsilon+\epsilon\\ -&=4\epsilon -\end{align} -$$ -That is, for any $\epsilon>0$, there is an $N$ so that if $n>N$, $|f_n(x)-f(x)|<4\epsilon$ for all $x\in K$.<|endoftext|> -TITLE: A tricky sum to infinity -QUESTION [12 upvotes]: I try to solve the following tricky limit: -$$\lim_{x\rightarrow\infty} \sum_{k=1}^{\infty} \frac{kx}{(k^2+x)^2} $$ -For some large values, W|A shows that its limit tends to $\frac{1}{2}$ but not sure how to prove that. - -REPLY [16 votes]: ETA: These bounds are wrong, as $\frac{kx}{(k^2+x)^2}$ is not monotone in $k$. For a fixed version of this answer, see robjohn's answer here. - -Notice that, for fixed $x$, your sum is less than -$$\int_0^\infty \frac{kx}{(k^2+x)^2} \, dk=\frac{1}{2}\, ,$$ -and greater than -$$\int_1^\infty \frac{kx}{(k^2+x)^2} \, dk=\frac{x}{2(1+x)} \, ,$$ -and then apply the squeeze theorem.<|endoftext|> -TITLE: Krull dimension of local Noetherian ring (2) -QUESTION [7 upvotes]: Let $(A,\mathfrak{m})$ be a local Noetherian ring and $x \in \mathfrak{m}$. - -Prove that $\dim(A/xA) \geq \dim(A)-1$, with equality if $x$ is $A$-regular (i.e. multiplication with $x,$ as a map $A\rightarrow A$ is injective). - -The dimensions are Krull dimensions. -It may have something to do with this question Dimension inequality for homomorphisms between noetherian local rings, but I simply can't figure it out. -Thank you. - -REPLY [9 votes]: Prologue -For any commutative ring $R$ and any ideal $I\subsetneq R$ we have $$ \dim(R/I)+ht(I)\leq \dim (R) \quad (*)$$ This does not assume $R$ noetherian, nor local, nor... but just follows from the definitions. -Inequality -Suppose now that $(A,\mathfrak m)$ is local noetherian. -The trick is to use that $\dim(A)$ is the smallest number of elements in $\mathfrak m$ generating an $\mathfrak m$-primary ideal (cf. Atiyah-Macdonald Theorem 11.14). 
Let's do that for $A/xA$: -If $\bar x_1,...,\bar x_k\in \mathfrak m/xA$ generate an $\mathfrak m/xA$-primary ideal, then $x, x_1,..., x_k$ generate an $\mathfrak m$-primary ideal and this immediately yields the required inequality $$\dim(A)\leq \dim(A/xA)+1 \quad (**)$$ -Equality -The Prologue implies that equality in $(**)$ will hold if $ht(xA)=1$. -The principal ideal theorem says that we always have $ht(xA)\leq 1$. -Now to say that $ht(xA)=0$ means that $x\in \mathfrak p$ for some minimal prime $\mathfrak p$. -But it is well known that minimal primes consist of zero divisors (= non-regular elements). -Hence if $x$ is regular we have $ht(xA)=1$ (since we don't have $ht(xA)=0$ !) and the required equality follows $$ \dim(A)= \dim(A/xA)+1 \quad (***)$$<|endoftext|> -TITLE: Point set topology question: compact Hausdorff topologies -QUESTION [5 upvotes]: $\tau_1,\tau_2,\tau_3$ are topologies on a set such that $\tau_1\subset \tau_2\subset \tau_3$ and $(X,\tau_2)$ is a compact Hausdorff space. Could anyone tell me which of the following are correct? - -$\tau_1=\tau_2$ if $(X,\tau_1)$ is compact Hausdorff. -$\tau_1=\tau_2$ if $(X,\tau_1)$ is compact. -$\tau_2=\tau_3$ if $(X,\tau_3)$ is Hausdorff. -$\tau_2=\tau_3$ if $(X,\tau_3)$ is compact. - -REPLY [11 votes]: Hint: The identity mapping $(X,\tau_{i+1}) \to (X,\tau_i)$ is continuous and a continuous bijection from a compact space to a Hausdorff space is a homeomorphism. This takes care of two statements and the two others are refuted by considering the trivial and the discrete topology on an infinite compact Hausdorff space $(X,\tau_2)$.<|endoftext|> -TITLE: How to prove spherical harmonics are orthogonal -QUESTION [6 upvotes]: A lot of texts and derivations e.g. here simply say: -"The Spherical Harmonics are orthonormal, so: -$$ -\int{ Y_l^m Y_{l'}^{m'} } = \delta_{ll'}\delta_{mm'} -$$" -And if you try any (l,m) pair you will find this always works out. -But how do you prove they are orthonormal for every $l$ and $m$? Where do you start? What principles do you use? -REPLY [6 votes]: Maybe not really an answer but you may get the idea nonetheless: this is true more or less by construction. You get the spherical harmonics (as an example) as eigenfunctions of the angular part of the Laplace Operator, that is, they satisfy -$$\Delta_{S^2} Y_{lm}(\vartheta,\phi) = \lambda Y_{lm} (\vartheta,\phi)$$ -(Actually it turns out that this implies $\lambda = -l(l+1)$ with integer $l$) -If you have such eigenfunctions for different eigenvalues it is a matter of linear algebra to show they are orthogonal, by looking at $$\int_{S^2}\langle \nabla_{S^2}Y_{lm}, \nabla_{S^2}Y_{l'm'}\rangle d\mu_{S^2}= -\int_{S^2}\langle Y_{lm}, \Delta_{S^2}Y_{l'm'}\rangle d\mu_{S^2}$$ -This implies that the functions are orthogonal if $l\neq l'$, since otherwise you could derive $l(l+1) = l'(l'+1)$ from this. For fixed $l$ it turns out that you may solve the equation by a separation approach which leads to an ODE which is known to be solvable by orthogonal polynomials by ODE theory. -You can also write down the $Y_{ml}$ quite explicitly, see e.g. the German Wikipedia page on "Kugelflächenfunktionen" http://de.wikipedia.org/wiki/Kugelfl%C3%A4chenfunktionen. -If you look at these more closely and do have some ODE background you may notice that the $\vartheta$ part is given by well-known orthogonal polynomials in $\cos(\vartheta)$ (Legendre Polynomials), while the $\phi$ part is more or less just $e^{im\phi}$, which is known to be a system of orthogonal functions.
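-Before proving it, one can also see the orthonormality numerically; here is a minimal Python sketch using SciPy (note SciPy's sph_harm convention: theta is the azimuthal and phi the polar angle, and only the real part is integrated since dblquad expects a real integrand):
-    import numpy as np
-    from scipy.integrate import dblquad
-    from scipy.special import sph_harm
-
-    def inner(l1, m1, l2, m2):
-        # <Y_{l1 m1}, Y_{l2 m2}> over the unit sphere.
-        f = lambda phi, theta: (sph_harm(m1, l1, theta, phi)
-                                * np.conj(sph_harm(m2, l2, theta, phi))).real * np.sin(phi)
-        val, _ = dblquad(f, 0, 2 * np.pi, lambda _: 0, lambda _: np.pi)
-        return val
-
-    print(inner(2, 1, 2, 1))  # approximately 1
-    print(inner(2, 1, 3, 1))  # approximately 0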
To then actually prove orthogonality is still a bit of work, but it kind of shows you the direction you should take. -To make them actually orthonormal you have to norm them, of course; that's where the complicated-looking factors come from.<|endoftext|> -TITLE: the Nordhaus-Gaddum problems for chromatic number of graph and its complement -QUESTION [10 upvotes]: Is there any relation between the chromatic number of a graph $G$ and its complement $G'$ that is always true? -I saw these ones: $\chi(G)\chi(G')\geq n$ and $\chi(G)+\chi(G')\geq 2\sqrt{n}$, -but I'm not quite sure about them. -REPLY [5 votes]: $(a)$ Prove that $\chi(G)\cdot \chi(G')\geq n$. -Proof: For every graph $G$ and $G'$ we know that $\chi(G)\geq {n\over \alpha(G)}$ and $\chi(G')\geq \omega(G')=\alpha(G)$ where $\alpha(G)$ and $\omega(G)$ denote the independence number and clique number of $G$. So $\chi(G)\cdot \chi(G')\geq {n\over \alpha(G)}\cdot \alpha(G)=n$. Thus $\chi(G)\cdot \chi(G')\geq n$. -$(b)$ Prove that $\chi(G)+\chi(G')\geq 2\sqrt{n}$. -Proof: Since squares are non-negative, we know that $(\chi(G)-\chi(G'))^2\geq 0$ and so $\chi(G)^2-2\chi(G)\cdot \chi(G')+\chi(G')^2\geq 0$. Adding $4\chi(G)\cdot \chi(G')$ to both sides of the inequality we obtain $\chi(G)^2+2\chi(G)\cdot\chi(G')+\chi(G')^2\geq 4\chi(G)\cdot\chi(G')$ and factoring the left-hand side we now have $(\chi(G)+\chi(G'))^2\geq 4\chi(G)\cdot\chi(G')$. Taking the square root of both sides gives us $\chi(G)+\chi(G')\geq 2\sqrt{\chi(G)\cdot \chi(G')}$ and since $\chi(G)\cdot \chi(G')\geq n$ by the previous result in $(a)$ we can conclude that $\chi(G)+\chi(G')\geq 2\sqrt{n}$.<|endoftext|> -TITLE: Simple geometric proof for Snell's law of refraction -QUESTION [17 upvotes]: Snell's law of refraction can be derived from Fermat's principle -that light travels paths that minimize the time using simple -calculus. Since Snell's law only involves sines I wonder whether -this minimum problem has a simple geometric solution. -REPLY [22 votes]: Perhaps this will help, if you are looking for a non-calculus approach. -Consider two parallel rays $A$ and $B$ coming through medium $1$ (say air) to medium $2$ (say water). Upon arrival at the interface $\mathcal{L}$ between the two media (air and water), they continue their parallel course in the directions $U$ and $V$ respectively. -Let us assume that at time $t=0$, light ray $A$ arrives at the interface $\mathcal{L}$ at point $C$, while ray $B$ is still shy of the surface by a distance $PD$. $B$ travels at the speed $v_{1}=\frac{c}{n_{1}}$ and arrives at $D$ in $t$ seconds. During this time interval, ray $A$ continues its journey through medium $2$ at a speed $v_{2}=\frac{c}{n_{2}}$ and reaches the point $Q$. -We can formulate the rest geometrically (looking at the parallel lines) from the figure. Let $x$ denote the distance between $C$ and $D$. -\begin{eqnarray*} -x \sin\left(\theta_{i}\right) &=& PD \\ -&=& v_{1} t \\ -&=& \frac{c}{n_{1}} t \\ -x \sin\left(\theta_{r}\right) &=& CQ \\ -&=& v_{2} t \\ -&=& \frac{c}{n_{2}} t -\end{eqnarray*} -Thus, -\begin{eqnarray*} -n_{1} \sin\left(\theta_{i}\right) &=& \frac{c}{x} t \\ -n_{2} \sin\left(\theta_{r}\right) &=& \frac{c}{x} t -\end{eqnarray*} -Rearranging this takes us to Snell's law as we know it. -\begin{eqnarray*} -\frac{n_{2} }{n_{1}} &=& \frac{\sin\left(\theta_{i}\right) }{ \sin\left(\theta_{r}\right)} -\end{eqnarray*}<|endoftext|> -TITLE: How to transform normally distributed random sequence N(0,1) to uniformly distributed U(0,1)?
-QUESTION [5 upvotes]: Everybody knows how to convert U(0,1) to N(0,1). However, does anybody know an efficient algorithm solving the opposite task? I mean, how does one generate a U(0,1) sequence from an N(0,1) one?
-I am asking because a group of voice and speech researchers with whom I work are routinely trying to represent results of their measurements, which they believe to be normally distributed, using a linear 0 to 100 scale. As a result they are getting negative values outside of their linear scale, obviously because they are dealing with random noise that can be roughly assumed to be normal and therefore this noise theoretically spans the whole real axis. Though I can imagine that I need to take a logarithm of the normal distribution, then multiply the negative quadratic term by -1 and then take a square root of it to get a linear function, the question is: does anybody know an efficient algorithm for doing that, I mean for generating high-quality U(0,1) random numbers from N(0,1)? I would highly appreciate any feedback on this!!
-
-REPLY [8 votes]: Naively, this seems to just be a problem of remapping the probability densities. I do not understand the answer by vanna since that requires two variates.
-Going from Gaussian to Uniform requires going from an infinite support $[-\infty,+\infty]$ to a finite support, like $[0,1]$.
-I think that the simplest way to achieve this is along the lines of what stefan-hansen was suggesting:
-
-normalize the data to Gaussian(0,1) by subtracting the average and dividing by the standard deviation: $y = \frac{x - \hat\mu}{\hat\sigma}$, and
-transform the values using the CDF: $z=\frac{1}{2}\operatorname{Erfc}(-\frac{y}{\sqrt{2}})$, where $\operatorname{Erfc}$ is the complementary error function.
-
-If $y$ is distributed as Gaussian(0,1), $z$ will be distributed as Uniform(0,1).
-This requires having estimates of $\hat\mu$ and $\hat\sigma$ before proceeding with the transform, but that should not be a problem.
-(I would post histograms illustrating the procedure, but I do not have enough points.)<|endoftext|>
-TITLE: Does obtaining Lie algebras via differentiation work for a general Lie group?
-QUESTION [6 upvotes]: I noticed that the characterizations of the Lie algebras of matrix Lie groups can be obtained by differentiation. For example:
-$$O(n) = \{X : XX^t = \mathbb{1}\} \implies \mathfrak{o}(n) = \{X : X + X^t = \mathbb{0}\}$$
-$$SO(n) = \{X : XX^t = \mathbb{1},\; \det(X) = 1\} \implies \mathfrak{so}(n) = \{X : X + X^t = \mathbb{0},\; \operatorname{tr}(X)=0\}$$
-but it works also for $U(n), SU(n), SL(n,\mathbb{K})$.
-Does this work for a general Lie group?
-
-REPLY [8 votes]: The matrix groups can be defined as regular level sets of functions $f: GL(n) \to M$ where $M = \mathbb R$ (e.g. for $SL(n)$) or $M = n\times n$ matrices (e.g. for $O(n)$). In general, if you have a map $f: M \to N$ and $y \in N$ is a regular value then $f^{-1}(y)$ is a submanifold and $T_xf^{-1}(y) = \ker df_x : T_x M \to T_y N$. So in the case of $O(n)$ for example you're differentiating the function $X \mapsto XX^t$ at the identity to see that its Lie algebra (identifiable with $T_e O(n)$) is the skew-symmetric matrices.
-
-REPLY [6 votes]: A Lie group is also a differentiable manifold; in particular we can define its tangent space at the identity. Intuitively, the tangent space is the set of directions $v$ such that if you start at the identity in $G$ and move infinitesimally in direction $v$, you stay in $G$.
-Let's say that $G$ is defined by the vanishing of some differentiable function $f$: so $G = \{ X: f(X)=0\}$.
The tangent space consists of matrices $M$ such that $f(I + \epsilon M) =0$ to first order in $\epsilon$: using the Taylor expansion and neglecting higher terms, what we want is that $df_I(M)=0$ where $df_I$ is the total derivative at the identity. Elements of the kernel of the total derivative are exactly those vectors which can be realised as $\gamma'(0)$ for some differentiable $\gamma: (-1,1)\to G$: the usual definition of tangent space is the equivalence classes of such functions $\gamma$ under the relation $\gamma_1\sim \gamma_2$ iff $\gamma_1'(0)=\gamma_2'(0)$.
-The Lie algebra associated to $G$ has as its underlying vector space the tangent space at the identity, so it really is obtained by differentiating and setting the total derivative equal to zero.
-Looking at your $O(n)$ example, the "$f$" is $f(X)=XX^t-I$. To find the total derivative at the identity, we ought to have $f(I+\epsilon M) = f(I) + \epsilon\, df_I(M) +$ higher order terms, where here $f(I)=0$. To order one in epsilon we have $f(I+\epsilon M)= \epsilon (M + M^t)$, so the total derivative at the identity is $M+M^t$. The vanishing of this is exactly the condition you gave for being in the Lie algebra of $O(n)$.<|endoftext|>
-TITLE: Showing a uniformity is complete.
-QUESTION [5 upvotes]: I've seen in various textbooks and notes that if $X$ is paracompact, then the collection of all the neighborhoods of the diagonal is a uniformity.
-I am trying to show that this uniformity is complete using Cauchy filters. So far, I let $\mathfrak{F}$ be a filter on $X$ that does not converge. By definition, saying $\mathfrak{F}$ does not converge to any point is to say that for any $x \in X$, there is an open neighborhood $O_{x}$ of $x$ which is not an element of $\mathfrak{F}$.
-With that said, the set $\alpha = \{ X \setminus \overline{F} : F \in \mathfrak{F} \}$ is an open cover of $X$. This is where I'm stuck. Why does it follow from here that $\mathfrak{F}$ is not Cauchy?
-Can anyone help?
-
-REPLY [5 votes]: Let $\mathscr{U}=\{X\setminus\operatorname{cl}F:F\in\mathfrak{F}\}$. Let $\mathscr{V}$ be a locally finite open refinement of $\mathscr{U}$. A paracompact Hausdorff space is normal, so $X$ has an open cover $\mathscr{W}=\{W_V:V\in\mathscr{V}\}$ such that for each $V\in\mathscr{V}$, $\operatorname{cl}W_V\subseteq V$; clearly $\mathscr{W}$ is locally finite. (I’m assuming that your definition of paracompactness, unlike mine, includes Hausdorffness; otherwise, you need to add that hypothesis.)
-Consider any $x\in X$. If $x\in\operatorname{cl}W_V\in\mathscr{W}$, then $x\in V\in\mathscr{V}$, and $\mathscr{V}$ is point-finite, so $$G_x=\bigcap_{x\in\operatorname{cl}W_V}V$$ is open. $\mathscr{W}$ is locally finite and hence closure-preserving, so $$H_x=\bigcup_{x\notin\operatorname{cl}W_V}\operatorname{cl}W_V$$ is closed. Thus, $N_x=G_x\setminus H_x$ is an open neighborhood of $x$. Let $\mathscr{N}=\{N_x:x\in X\}$; $\mathscr{N}$ is an open cover of $X$.
-Fix $x\in X$; there is some $V\in\mathscr{V}$ such that $x\in\operatorname{cl}W_V$, and I claim that $\operatorname{st}(x,\mathscr{N})\subseteq V$. To see this, suppose that $x\in N_y\in\mathscr{N}$; then $x\notin H_y$, so $y\in\operatorname{cl}W_V$, and therefore $N_y\subseteq G_y\subseteq V$. Since $N_y$ was an arbitrary element of $\mathscr{N}$ containing $x$, it follows that $\operatorname{st}(x,\mathscr{N})\subseteq V$. (In other words, $\mathscr{N}$ is a barycentric open refinement of $\mathscr{V}$ and hence also of $\mathscr{U}$.)
-Now let $$D=\bigcup_{x\in X}(N_x\times N_x)\;;$$ clearly $D$ is an open neighborhood of the diagonal. Let $F\in\mathfrak{F}$ be arbitrary, and suppose that $F\times F\subseteq D$. Fix $x\in F$. Then for each $y\in F$, $\langle x,y\rangle\in F\times F\subseteq D$, so there is some $z\in X$ such that $x,y\in N_z$. Thus, $F\subseteq\operatorname{st}(x,\mathscr{N})\subseteq V$ for some $V\in\mathscr{V}$. But $\mathscr{V}$ refines $\mathscr{U}$, so $V$ (and hence $F$) is disjoint from some member of the filter $\mathfrak{F}$. This is impossible, so for all $F\in\mathfrak{F}$ we must have $F\times F\nsubseteq D$, and $\mathfrak{F}$ is therefore not Cauchy.<|endoftext|>
-TITLE: Derivative of a linear transformation.
-QUESTION [16 upvotes]: We define derivatives of functions as linear transformations of $R^n \to R^m$. Now, talking about the derivative of such a linear transformation: as we know, if $x \in R^n$, then $A(x+h)-A(x)=A(h)$ because of the linearity of $A$, which implies that $A'(x)=A$, where $A'$ is the derivative of $A$.
-What does this mean? I am not getting the point I think.
-
-REPLY [29 votes]: This is a fair question, since it is counterintuitive to the way introductory calculus is taught.
-One looks at a typical linear function in calc 1: $f(x)=ax$, $a\neq0$, takes the derivative, $f'(x)=a$, and thinks to themselves, "well clearly the linear function is not equal to the constant function, one has a slope and the other is flat!"
-Since we generalized to higher dimensions, it is wiser to pay closer attention to what we call the derivative. Merely looking at the Jacobian masks a deeper insight: the derivative is the best affine approximation to a function at a particular point. That is, $F(x)= F(a) + F'(x-a)+o(|x-a|)$, which is a good approximation when $x$ is close to $a$. Notice that $F'$ acts as a "factor" on the tangent vector $(x-a)$.
-What if $F$ is already affine? Then $F(x)=Ax+b$. Plug this into the formula above, which has exact equality now, and you get: $Ax+b=Aa+b+F'(x-a)$, which gives $A(x-a)=F'(x-a)$, or if we call $h=x-a$, $Ah=F'h$. Notice what that is telling you: $A$ and $F'$ do the same thing to vectors $h$, hence they're equal. $A$ is also the derivative of $F(x)$.
-When $F$ is linear, $b=0$, and thus $Fh=Ah=F'h$. Makes sense.
-What about our calc 1 example? The confusion stems from naming. The linear transformation is not $ax$, but $a$. View it as $fx=ax$. It's a $1\times1$ matrix with the entry $a$. The derivative (Jacobian), at any point, is also just $a$. Hence, $f'x=ax$ also. Thus the generalized notion of derivative is no longer "the slope function", but a unique linear transformation taking tangent vectors to tangent vectors which best approximates the linear behavior of a function at a particular point. In that light it makes sense that $fx=ax=f'x$ since we're viewing $f$ and $f'$ as "factors" at particular points rather than changing functions. This is why $Df(x)$ (which is just $a$ in our example) is used as notation for the derivative at a particular point $x$ rather than $f'$.
-If you're interested there are notions of higher derivatives that take the derivative of the map that assigns to each point $x$ the matrix $Df(x)$, which differs from taking the derivative of the same matrix $Df(x)$, which is just linear and hence the same. See: http://www.math.pitt.edu/~sph/1540/1540-notes4.pdf<|endoftext|>
-TITLE: continuity of power series
-QUESTION [8 upvotes]: I want to prove that every power series is continuous but I am stuck at one point.
-Let $\sum\limits_{n=0}^\infty a_n(x-x_0)^n$ be a power series with a radius of convergence $r>0$ and let $D:=\{x\in\mathbb R:|x-x_0|<r\}$. Write $S_N$ for the partial sums, $S$ for the limit function and $\phi_N:=S-S_N$ for the tail, so that $S(y)-S(x)=\phi_{N}(y)+\big(S_{N}(y)-S_{N}(x)\big)-\phi_{N}(x)$. Fix $x\in D$ and $\varepsilon>0$. By uniform convergence on a compact neighbourhood of $x$ in $D$ we can choose $N_\varepsilon$ with $|\phi_{N_\varepsilon}|<\frac\varepsilon3$ there, and since $S_{N_\varepsilon}$ is continuous there exists $\delta>0$ such that $|S_{N_\varepsilon}(y)-S_{N_\varepsilon}(x)|<\frac\varepsilon3$. So we get $|S(y)-S(x)|<\frac\varepsilon3+\frac\varepsilon3+\frac\varepsilon3=\varepsilon$ for $|y-x|<\delta$
-
-Why is the estimate $|\phi_{N_\varepsilon}|<\frac\varepsilon3$ (the violet term) correct? Don’t I have two terms in the absolute value? Thanks for helping!
-
-REPLY [4 votes]: Choose $z \in D$ and select $\delta >0$ so that $C = \overline{B}(z,\delta) \subset D$. Let $\rho = |z-x_0|+\delta$, note that $\rho < r$ and that $C \subset \overline{B}(x_0,\rho) \subset D$.
-Then let $M_n = |a_n|\rho^n$, and note that $|a_n(x-x_0)^n | \leq M_n$, $\forall x \in C$, and $\sum M_n < \infty$.
-Hence we can use the Weierstrass M-test to conclude that the series $\sum a_n (x-x_0)^n$ converges uniformly on $C$. Since each of the partial sums $\sum_{n\le N} a_n(x-x_0)^n$ is continuous, the uniform limit is continuous on $C$, in particular at $z$; and since $z\in D$ was arbitrary, the sum is continuous on all of $D$.<|endoftext|>
-TITLE: tough series involving digamma
-QUESTION [9 upvotes]: I ran across a series that is rather challenging. For kicks I ran it through Maple and it gave me a conglomeration involving digamma. Mathematica gave a solution in terms of Lerch Transcendent, which was worse yet. Perhaps residues would be a better method?
-But, it is $$\sum_{k=1}^{\infty}\frac{(-1)^{k}(k+1)}{(2k+1)^{2}-a^{2}}.$$
-The answer Maple spit out was:
-$$\frac{a+1}{16a}\left[\psi\left(\frac{3}{4}-\frac{a}{4}\right)-\psi\left(\frac{-a}{4}+\frac{1}{4}\right)\right]+\frac{a-1}{16a}\left[\psi\left(\frac{3}{4}+\frac{a}{4}\right)-\psi\left(\frac{1}{4}+\frac{a}{4}\right)\right]+\frac{1}{a^{2}-1}.$$
-Is it possible to actually get to something like this by using $\sum_{k=1}^{\infty}\left[\frac{1}{k}-\frac{1}{k+a}\right]=\gamma+\psi(a+1)?$
-I tried, but to no avail. But, then again, maybe it is too cumbersome.
-i.e. I tried expanding it into
-$\frac{k+1}{(2k+1)^{2}-a^{2}}=\frac{-1}{4(a-2k-1)(2k+1)}-\frac{1}{4(a-2k-1)}+\frac{1}{4(a+2k+1)(2k+1)}+\frac{1}{4(a+2k+1)}$
-then using $\sum_{k=1}^{\infty}\left[\frac{1}{k}-\frac{1}{k-\frac{1}{4}-\frac{a}{4}}\right]=\gamma+\psi\left(\frac{3}{4}-\frac{a}{4}\right)$ and so on, but it did not appear to be anywhere close to the given series.
-On another point, can it be done using residues? By using $$\frac{\pi \csc(\pi z)(z+1)}{(2z+1)^{2}-a^{2}}.$$
-This gave me a residue at $\frac{a-1}{2}$ and $\frac{-(a+1)}{2}$ of
-$\frac{-\pi}{a-1}\sec(a\pi/2)$ and $\frac{\pi}{a+1}\sec(\pi a/2)$
-Taking the negative sum of the residues, it is $\frac{2\pi}{(a-1)(a+1)}\sec(a\pi/2)$
-By subbing $k=0$ into the series, it gives $\frac{-1}{a^{2}-1}$.
-I try adding them up and finding the sum, but it does not appear to work out.
-Any suggestions? Perhaps there is another method I am not trying? There probably is. Thanks a million.
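-Edit: For what it's worth, Maple's closed form does seem to check out numerically against the series. Here is a rough sketch of the check (Python with SciPy assumed; the value $a=1/2$ is an arbitrary test point, and averaging two consecutive partial sums is just a cheap way to accelerate the slowly converging alternating series):
-    # Compare the alternating series with Maple's digamma expression at a = 1/2.
-    from scipy.special import digamma as psi
-
-    a = 0.5
-    N = 10**5
-    s = prev = 0.0
-    for k in range(1, N + 1):
-        prev = s
-        s += (-1)**k * (k + 1) / ((2*k + 1)**2 - a**2)
-    series = 0.5 * (s + prev)   # mean of two consecutive partial sums
-
-    closed = ((a + 1) / (16*a) * (psi(3/4 - a/4) - psi(1/4 - a/4))
-              + (a - 1) / (16*a) * (psi(3/4 + a/4) - psi(1/4 + a/4))
-              + 1 / (a**2 - 1))
-
-    print(series, closed)       # the two numbers should agree to many decimals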
- -REPLY [7 votes]: Note that -$$ -\frac{1}{2k+1-a}+\frac{1}{2k+1+a}=\frac{4k+2}{(2k+1)^2-a^2}\tag{1} -$$ -and -$$ -\frac1a\left(\frac{1}{2k+1-a}-\frac{1}{2k+1+a}\right)=\frac{2}{(2k+1)^2-a^2}\tag{2} -$$ -Adding $(1)$ and $(2)$ and dividing by $4$ yields -$$ -\begin{align} -\frac{k+1}{(2k+1)^2-a^2} -&=\frac{1+a}{8a}\frac{1}{k+(1-a)/2}-\frac{1-a}{8a}\frac{1}{k+(1+a)/2}\\ -&=\hphantom{+ }\frac{1-a}{8a}\left(\frac1k-\frac{1}{k+(1+a)/2}\right)\\ -&\hphantom{= }-\frac{1+a}{8a}\left(\frac1k-\frac{1}{k+(1-a)/2}\right)\\ -&\hphantom{= }+\frac{1}{4k}\tag{3} -\end{align} -$$ - -Now, using -$$ -\psi(a+1)+\gamma=\sum_{k=1}^\infty\frac{1}{k}-\frac{1}{k+a}\tag{4} -$$ -we get -$$ -\frac12\psi\left(\frac{a}{2}+1\right)+\frac\gamma2=\sum_{k=1}^\infty\frac{1}{2k}-\frac{1}{2k+a}\tag{5} -$$ -and subtracting twice $(5)$ from $(4)$ gives -$$ -\psi(a+1)-\psi\left(\frac{a}{2}+1\right)=\sum_{k=1}^\infty(-1)^{k-1}\left(\frac{1}{k}-\frac{1}{k+a}\right)\tag{6} -$$ -Furthermore, -$$ -\log(2)=\sum_{k=1}^\infty(-1)^{k-1}\frac1k\tag{7} -$$ - -Using $(3)$, $(6)$, and $(7)$, we get -$$ -\begin{align} -\sum_{k=1}^\infty(-1)^k\frac{k+1}{(2k+1)^2-a^2} -&=-\frac{1-a}{8a}\left(\psi\left(\frac{3+a}{2}\right)-\psi\left(\frac{5+a}{4}\right)\right)\\ -&\hphantom{= }+\frac{1+a}{8a}\left(\psi\left(\frac{3-a}{2}\right)-\psi\left(\frac{5-a}{4}\right)\right)\\ -&\hphantom{= }-\frac14\log(2)\tag{8} -\end{align} -$$ -Equivalence of Forms: -Using $(4)$, $(5)$, and $(7)$, we get -$$ -\begin{align} -\sum_{k=1}^\infty\frac{1}{2k-1}-\frac{1}{2k-1+a} -&=\color{green}{\sum_{k=1}^\infty\frac{1}{2k-1}-\frac{1}{2k}}+\color{red}{\sum_{k=1}^\infty\frac{1}{2k}-\frac{1}{2k-1+a}}\\ -&=\color{green}{\log(2)}+\color{red}{\frac12\psi\left(\frac{a+1}{2}\right)+\frac\gamma2}\tag{9} -\end{align} -$$ -Adding $(5)$ to $(9)$ yields -$$ -\begin{align} -\psi(a+1)+\gamma -&=\hphantom{+}\log(2)+\frac12\psi\left(\frac{a+1}{2}\right)+\frac\gamma2\\ -&\hphantom{= }+\frac12\psi\left(\frac{a}{2}+1\right)+\frac\gamma2\tag{10} -\end{align} -$$ -Rearranging $(10)$ shows that -$$ -\psi(a)=\log(2)+\frac12\psi\left(\frac{a}{2}\right)+\frac12\psi\left(\frac{a+1}{2}\right)\tag{11} -$$ -Applying $(11)$ gives -$$ -\psi\left(\frac{3+a}{2}\right)=\log(2)+\frac12\psi\left(\frac{3+a}{4}\right)+\frac12\psi\left(\frac{5+a}{4}\right)\tag{12} -$$ -and -$$ -\psi\left(\frac{3-a}{2}\right)=\log(2)+\frac12\psi\left(\frac{3-a}{4}\right)+\frac12\psi\left(\frac{5-a}{4}\right)\tag{13} -$$ -Plug $(12)$ and $(13)$ into $(8)$ -$$ -\begin{align} -\sum_{k=1}^\infty(-1)^k\frac{k+1}{(2k+1)^2-a^2} -&=\hphantom{+}\frac{a-1}{16a}\left(\psi\left(\frac{3+a}{4}\right)-\psi\left(\frac{5+a}{4}\right)\right)\\ -&\hphantom{= }+\frac{a+1}{16a}\left(\psi\left(\frac{3-a}{4}\right)-\psi\left(\frac{5-a}{4}\right)\right)\\ -&=\hphantom{+}\frac{a-1}{16a}\left(\psi\left(\frac{3+a}{4}\right)-\psi\left(\frac{1+a}{4}\right)-\frac{4}{1+a}\right)\\ -&\hphantom{= }+\frac{a+1}{16a}\left(\psi\left(\frac{3-a}{4}\right)-\psi\left(\frac{1-a}{4}\right)-\frac{4}{1-a}\right)\\ -&=\hphantom{+}\color{red}{\frac{a-1}{16a}\left(\psi\left(\frac{3+a}{4}\right)-\psi\left(\frac{1+a}{4}\right)\right)}\\ -&\hphantom{= }\color{red}{+\frac{a+1}{16a}\left(\psi\left(\frac{3-a}{4}\right)-\psi\left(\frac{1-a}{4}\right)\right)}\\ -&\hphantom{= }\color{red}{+\frac{1}{a^2-1}}\tag{14} -\end{align} -$$<|endoftext|> -TITLE: Why is the ring of holomorphic functions not a UFD? -QUESTION [22 upvotes]: Am I correct or not? 
I think that a ring of holomorphic functions in one variable is not a UFD, because there are holomorphic functions with an infinite number of $0$'s, and hence it will have an infinite number of irreducible factors! But I am unable to get a concrete example. Please give an example.
-
-REPLY [26 votes]: You are perfectly right: the ring of entire functions $\mathcal O(\mathbb C)$ is not a UFD. Here is why:
-In a UFD a non-zero element has only finitely many irreducible (=prime) divisors and this does not hold for our ring $\mathcal O(\mathbb C)$.
-Indeed the only primes in $\mathcal O(\mathbb C)$ are the affine functions $z-a$, and on the other hand the function $\sin(z)$ is divisible by the infinitely many primes $z-k\pi\; (k\in \mathbb Z)$.
-The same proof shows that for an arbitrary domain $D$ the ring $\mathcal O(D)$ is not a UFD, once you know Weierstrass's theorem, which implies that there exist non identically zero holomorphic functions in $D$ with infinitely many zeros.
-NB
-It is sufficient for the proof above to show that the $z-a$ are irreducible in $\mathcal O(\mathbb C)$ (you don't need that there are no other irreducibles). And that is easy: if $a=0$ for example, write $z=fg$ and you will see that $f$ (say) has no zero and is thus a unit $f\in \mathcal O(\mathbb C)^*$.<|endoftext|>
-TITLE: Who is a Math Historian?
-QUESTION [13 upvotes]: In the context of classes, it is very often that discussion on the history of mathematics arises, whether it be about whom a lemma should be attributed to or a certain event that occurred during the discovery of a proof (the elementary proof of the prime number theorem is one such example).
-My question is:
-
-What does a math historian do? Is he simply a mathematician who dabbles in the history behind his research or does he commit his time fully to investigating past mathematical facts? Also, is it closer in nature to mathematics or is it closer in nature to history (i.e. is the context behind the discovery of the proof emphasized or is the insight that led to the proof emphasized)?
-
-EDIT: Due to the nature of some of the answers, I am now curious as to whether math historians are mathematicians or historians (i.e. do they work in math departments or history departments). Does anyone have an answer to this?
-
-REPLY [8 votes]: First of all, as far as I know, serious historians of mathematics are or were in their majority mathematicians: the reason is that mathematics is very difficult and you can't analyze it in depth without a very serious technical background.
-There might be exceptions for very ancient mathematics, but even there I wouldn't trust a historian studying Diophantus who wouldn't have some knowledge of arithmetic/algebraic geometry and number theory.
-What often happens is that aging mathematicians start writing about the history of the subject they have devoted their life to.
-Some prestigious examples:
-Weil on number theory,
- Dieudonné and his wonderful histories of algebraic geometry and algebraic topology,
- Marcel Berger on differential geometry,
-Dickson and his monumental history of the theory of numbers.
-Younger mathematicians may also be interested:
- Bourbaki has very nice historical surveys at the end of some of his chapters, written at the time by necessarily young members (there was an age limit for participants),
- Schappacher is an excellent research mathematician who already as a young researcher wrote about the history of number theory,
-Krömer has written a great thesis on the genesis of category theory (including the incredible beginnings of sheaf theory in a prisoner of war camp),
-and to finish on a personal note, here is the fairly recent thesis on the birth of group cohomology by Nicolas Babois, whom I taught at the undergraduate level (but I had no rôle in his thesis).
-In conclusion, my point of view is that a historian of mathematics is essentially a mathematician, and historical science in the usual sense is of secondary importance.
-This is certainly controversial.
- My convictions on this subject essentially derive from Dieudonné's and Houzel's points of view. (Houzel is another example of a mathematician with high technical skills attracted very early by the history of mathematics.)<|endoftext|>
-TITLE: Prove: The weak closure of the unit sphere is the unit ball.
-QUESTION [25 upvotes]: I want to prove that in an infinite dimensional normed space $X$, the weak closure of the unit sphere $S=\{ x\in X : \| x \| = 1 \}$ is the unit ball $B=\{ x\in X : \| x \| \leq 1 \}$.
-$\\$
-Here is my attempt with what I know:
-I know that the weak closure of $S$ is a subset of $B$ because $B$ is norm closed and convex, so it is weakly closed, and $B$ contains $S$.
-But I need to show that $B$ is a subset of the weak closure of $S$.
-$\\$
-for small $\epsilon > 0$, and some $x^*_1,...,x^*_n \in X^*$, I let $U=\{ x : \langle x, x^*_i \rangle < \epsilon , i = 1,...,n \} $
-then $U$ is a weak neighbourhood of $0$
-What I think I need to show now is that $U$ intersects $S$, but I don't know how.
-
-REPLY [21 votes]: With the same notation as in your question: Notice that if $x_i^*(x) = 0$ for all $i$, then $x \in U$, and therefore the intersection of the kernels $\bigcap_{i=1}^n \mathrm{ker}(x_i^*)$ is in $U$. Since the codimension of $\mathrm{ker}(x^*_i)$ is at most $1$, the intersection has codimension at most $n$ (exercise: prove this). But since $X$ is infinite dimensional, this means the intersection has infinite dimension, and in particular contains a line. Since any line going through $0$ intersects $S$, then $U$ intersects $S$.
-The same argument can be applied to any point in $B$ (any line going through a point in $B$ intersects $S$), and since you've proved the other inclusion, the weak closure of $S$ is $B$.<|endoftext|>
-TITLE: (Un-)Countable union of open sets
-QUESTION [8 upvotes]: Let $A_i$ be open subsets of $\Omega$. Then $A_0 \cap A_1$ and $A_0 \cup A_1$ are open sets as well.
-It thereby follows that $\bigcap_{i=1}^N A_i$ and $\bigcup_{i=1}^N A_i$ are open sets as well.
-My question is, does it thereby follow that $\bigcap_{i \in \mathbb{N}} A_i$ and $\bigcup_{i \in \mathbb{N}} A_i$ are open sets as well?
-And what about $\bigcap_{i \in I} A_i$ and $\bigcup_{i \in I} A_i$ for uncountable $I$?
-
-REPLY [8 votes]: The union of any collection of open sets is open. Let $x \in \bigcup_{i \in I} A_i$, with $\{A_i\}_{i\in I}$ a collection of open sets. Then, $x$ is an interior point of some $A_k$ and there is an open ball with center $x$ contained in $A_k$, therefore contained in $\bigcup_{i \in I} A_i$, so this union is open.
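-(For intersections the standard counterexample is worth recording explicitly:
-$$\bigcap_{n=1}^{\infty}\left(-\tfrac{1}{n},\tfrac{1}{n}\right)=\{0\},$$
-an intersection of open intervals that is not open, since $\{0\}$ contains no open ball around $0$.)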
Others have given counterexamples like the one above: an infinite intersection of open sets isn't necessarily open.
-By de Morgan's laws, the intersection of any collection of closed sets is closed (try to prove this), but consider the union of $\{x\}_{x\in (0,1)}$, which is $(0,1)$, not closed. The union of an infinite collection of closed sets isn't necessarily closed.<|endoftext|>
-TITLE: History of Modern Mathematics Available on the Internet
-QUESTION [22 upvotes]: I have been meaning to ask this question for some time, and have been spurred to do so by Georges Elencwajg's fantastic answer to this question and the link contained therein.
-In my free time I enjoy reading historical accounts of "recent" mathematics (where, to me, recent means within the last 100 years). A few favorites of mine are Alexander Soifer's The Mathematical Coloring Book, Allyn Jackson's two-part mini-biography of Alexander Grothendieck (Part I and Part II) and Charles Weibel's History of Homological Algebra.
-My question is then:
-
-What freely available resources (i.e. papers, theses, articles) are there concerning the history of "recent" mathematics on the internet?
-
-I would like to treat this question in a manner similar to my question about graph theory resources, namely as a list of links to the relevant materials along with a short description. Perhaps one person (I would do this if necessary) could collect all the suggestions and links into one answer which could be used as a repository for such materials.
-Any suggestions I receive in the comments will be listed below.
-Suggestions in the Comments:
-
-[Gregory H. Moore, The emergence of open sets, closed sets, and limit points in analysis and topology]
-http://mcs.cankaya.edu.tr/~kenan/Moore2008.pdf
-
-REPLY [3 votes]: Things I have found so far:
-
-Who is Alexander Grothendieck? by Winfried Scharlau
-Mathematics at Göttingen under the Nazis by Saunders Mac Lane
-History of Knot Theory by Józef H. Przytycki
-The Life and Works of Raoul Bott by Loring W. Tu<|endoftext|>
-TITLE: On minimizing the area of an enclosing surface subject to nonnegative Gaussian curvature
-QUESTION [5 upvotes]: This is inspired by this previous question on physical processes that might give rise to convex hulls.
-Consider the problem of gift-wrapping a three-dimensional object using an inextensible material, like paper. We can make the material conform to any surface with nonnegative Gaussian curvature by cutting and folding it. (At least, we can make an arbitrarily good polyhedral approximation.) But if we want to have negative Gaussian curvature, we have to make a cut and glue some extra material in there, which is awkward and cumbersome, so we forbid it. Now we want to perform this gift wrapping as tightly as possible, which suggests using the least amount of material.
-Formally, given a set $S$ of points in $\mathbb R^3$, we want to find the surface with minimum area that encloses all the points in $S$, subject to an additional condition that the Gaussian curvature of the surface is nonnegative everywhere. In a comment on the previous question, I conjectured that this would be precisely the convex hull of $S$. But I have no idea if that's actually true, and if so, how to begin proving it.
-
-REPLY [2 votes]: By Hadamard's ovaloid theorem, any positively curved surface (without boundary) in $\mathbb R^3$ is the boundary of a convex set. Therefore, such a surface encloses the convex hull of $S$. The projection onto the convex hull does not increase the area, being a 1-Lipschitz map.
It follows that the area of any positively curved surface enclosing $S$ cannot be smaller than the area of the surface of the convex hull of $S$.<|endoftext|>
-TITLE: Linear algebra and arbitrary fields
-QUESTION [12 upvotes]: The linear algebra course that I took was fairly consistent about assuming that the scalar field is either the reals or the complex numbers. The theory of linear maps, bases, their matrices, eigenvalues and eigenvectors, trace and determinant clearly generalizes to a general field without any changes. Similarly, the Jordan canonical form only seems to require algebraic closedness of a field.
-The definition of an inner-product seems to explicitly require either the reals or the complex numbers, but even then one should be able to replace it by a bilinear pairing $V\times V\to \mathbb{F}$, where $\mathbb{F}$ is our field. However, why do we then want conjugate symmetry in the complex case? Do we need something similar for fields which have a "similar" automorphism? Is there a precise way to formalize this?
-My question is essentially the following: How can we generalize spectral theorems to general fields? What would the results look like and what do we need to assume? What's the right way to generalize inner-product spaces and what can we translate unchanged from the setting of an undergraduate linear algebra class?
-
-REPLY [3 votes]: An inner product is a symmetric positive definite bilinear form. In general fields, you'll happily be able to satisfy symmetric bilinear, but you'll struggle with positive definite: over a field of nonzero characteristic, you will not even be able to make sense of $\langle x, x \rangle \ge 0$, much less find a form for which it holds, since there is no ordering compatible with the field. Note that there is also no ordering on the complex numbers that is compatible with the field, but there is the subfield $\mathbb R$ which can be ordered, and so conjugate symmetry rescues us by ensuring $\langle x, x \rangle$ falls inside there, but fields of characteristic $p$ must contain a subfield $\mathbb F_p$ which is already not orderable.
-Over fields of characteristic zero, like the rationals, you can find a reasonable inner product, but it's not as useful as you'd expect. For example, you can't get an orthonormal basis from a given basis, because you can't do square roots: you can find the norm squared, but not the norm itself. You could go all the way to the algebraics, or just as far as all the square roots, but at this stage I think you gain very little generality over just using $\mathbb R$ in the first place, and if you find you want completeness at any point, you'll be forced into the reals anyway.
-I, too, have found the asymmetry between complex and real inner products to be frustrating. It's possible that there's a unifying theory that I'm unaware of, but I don't think it's unreasonable to view them as basically separate (if highly similar) entities, albeit entities that embed one in the other in a neat way. Essentially, $\mathbb R$ is special: it is, after all, the unique complete totally ordered Archimedean field, so it's not that surprising that we should pay it specific attention.<|endoftext|>
-TITLE: How is this function never decreasing?!
-QUESTION [6 upvotes]: What I'm doing is finding where this function is decreasing or increasing.
-Here is the original function:
-$f(x) = \ln(x+6)-2$
-I take the derivative, which I believe is:
-$f'(x)= \dfrac{1}{x+6}$
-Then I made a sign chart.
-I know right off the bat that there is nothing that makes this derivative equal to zero, because the numerator doesn't have an $x$.
-The denominator can make the derivative undefined, and it's undefined at $-6$. So that's the number I use on my sign chart.
-I plugged the first value $-10$ into the derivative and it gives me a negative value:
-$f'(-10)= \frac{1}{(-10+6)}$
-$ = \frac{-1}{4}$
-Then I plugged the $0$ in and I got
-$f'(0)= 1/6$
-It should look something like this:
-
- n | d +
- ------(-10)------ ((-6)) -------(0)------
-My homework is saying the function is never decreasing.
-
-REPLY [2 votes]: The function $\ln(x+6)$ is the inverse function of the strictly increasing function $g(x)=e^x-6$.
-(We have $y=\ln(x+6)$ if and only if $e^y=x+6$ if and only if $x=e^y-6$.)<|endoftext|>
-TITLE: Axiom of Choice (for example in the Snake Lemma)
-QUESTION [6 upvotes]: If we have to make a choice, but in the end it doesn't matter what choice we made, did we really make a choice to begin with?
-More explicitly, somewhere in the standard diagram-chasing proof of the snake lemma for $R$-modules (see http://mathworld.wolfram.com/SnakeLemma.html) we use the fact that we have a surjective map $A \twoheadrightarrow B$ and an element $b \in B$ to deduce that there is some element $a \in A$ which maps to $b \in B$. We use this element $a$ to define a map $\mathrm{Ker}(\gamma) \to \mathrm{Coker}(\alpha)$. It turns out, of course, that the map we get is independent of the choice of $a$.
-Are we really using the axiom of choice here since the choice we make is irrelevant? I understand that there are proofs of the Snake Lemma in its various forms that avoid selecting an element but I am more interested in what happens here.
-
-REPLY [2 votes]: The specific case in question was answered; however, I wish to add to the general question.
-Yes, there is a use of the axiom of choice even if the actual choice is irrelevant. Consider a Vitali set, for example, that is, a choice of representatives for $\mathbb{R/Q}$. We wish to show that such a set is non-measurable, and the fact it is non-measurable is indeed independent of the choice of representatives. However, without the axiom of choice we may not be able to make a choice at all.<|endoftext|>
-TITLE: Mathematics of computation
-QUESTION [5 upvotes]: What is a good introduction to Turing machines, complexity classes, P=NP, etc. from a purely mathematical viewpoint?
-I want to know how computation relates to provability in mathematics; I need the details of how one relates arithmetical statements to Turing machines.
-I am not interested in practical computing or programming.
-Also I want to know how to prove that some system is Turing complete.
-
-REPLY [2 votes]: A good place to start for the relationship between computability and provability is Computability and Logic by Boolos, Burgess, and Jeffrey. But it's not for the faint of heart - the content is dense, and the book has a lot more content than its thickness suggests.
-For something more CS-focused that will discuss Turing completeness and complexity, another great text is Introduction to Automata Theory, Languages, and Computation by Hopcroft and Ullman. Just skip chapters 2-7, which cover regular and context-free languages. Again, this book has a lot of content and is very well regarded.<|endoftext|>
-TITLE: What does it mean to do MLE with a continuous variable
-QUESTION [14 upvotes]: I am struggling with the semantics of continuous random variables.
-For example, we do maximum likelihood estimation, in which we try to find the parameter $\theta$ which, for some observed data $D$, maximizes the likelihood $P(\theta|D)$.
-But my understanding of this is $$P(\theta = x) = P(x\leq\theta\leq x) = \int_x^xp(t)dt = 0$$ so I am not sure how any $\theta$ can result in a non-zero probability.
-Intuitively I understand what it means to find the "most probable" $\theta$, but I am having trouble uniting it with the formal definition.
-
-EDIT: In my class we defined $L(\theta:D)=P(D|\theta)=\prod_i P(D_i|\theta)$ (assuming i.i.d., where $D_i$ are the observations). Then we want to find $\text{argmax}_\theta \prod_i P(D_i|\theta)$.
-I was incorrect above about finding $P(\theta)$, but it seems to me we're still trying to find the maximal probability, where all probabilities are zero. Some answerers suggested that we're actually trying to find the max probability density but I don't understand why this is true.
-
-REPLY [2 votes]: Here's still another way to view the MLE, one that really helped clarify it for me:
-You're taking the derivative of the likelihood (in the continuous case, a probability density rather than a probability) with respect to the parameter you're trying to estimate, and finding a local maximum by setting that derivative equal to 0.
-That's what the MLE is. To look at it from the viewpoint of a normal distribution, you're finding the exact value (or the formula for it) of the peak (the point of highest probability density, i.e. the mean in the case of the normal), because that's where the derivative changes direction (so, for an instant, it is 0 there).
-The log step works because the logarithm is strictly increasing: it doesn't move the location of the maximum at all, it just lowers the peak -- and it almost always makes the derivative more straightforward to compute.
-Hope this helps!<|endoftext|>
-TITLE: Recurrence relation telescoping
-QUESTION [6 upvotes]: Hi there, I am trying to solve the following recurrence relation using telescoping. How would I go about doing it?
-$$T(n) = \frac 2n \Big(T(0) + T(1) + \ldots+ T(n-1)\Big) + 5n$$
-Assuming $n\ge 1$
-
-REPLY [6 votes]: Rearrange $$T(n) = \frac 2n \Big(T(0) + T(1) + \ldots+ T(n-1)\Big) + 5n\tag{1}$$ to get
-$$T(0)+T(1)+\ldots+T(n-1)=\frac12\Big(nT(n)-5n^2\Big)\;,$$ and hence $$T(0)+T(1)+\ldots+T(n-2)=\frac12\Big((n-1)T(n-1)-5(n-1)^2\Big)\;.$$
-Then substitute this into $(1)$:
-$$\begin{align*}
-T(n)&=\frac2n\left(\frac12\left((n-1)T(n-1)-5(n-1)^2\right)+T(n-1)\right)+5n\\
-&=\frac{n-1}nT(n-1)-\frac{5(n-1)^2}n+\frac2nT(n-1)+5n\\
-&=\frac{n+1}nT(n-1)-\frac5n(n^2-2n+1)+5n\\
-&=\frac{n+1}nT(n-1)+10-\frac5n\;.\tag{2}
-\end{align*}$$
-Now let $a=T(0)$, and calculate a few values of $T$:
-$$\begin{array}{c|l}
-n&T(n)\\ \hline
-0&a\\
-1&2a+5\\
-2&3a+15\\
-3&4a+\frac{85}3\\
-4&5a+\frac{265}6\\
-5&6a+62
-\end{array}$$
-This suggests that $T(n)=c_na+b_n$, where $c_n$ is probably $n+1$. Indeed, if $$T(n-1)=na+b_{n-1}\;,$$ then from $(2)$ we find that
-$$\begin{align*}
-T(n)&=(n+1)a+\left(1+\frac1n\right)b_{n-1}+10-\frac5n\\
-&=(n+1)a+b_{n-1}+10+\frac1n(b_{n-1}-5)\;,
-\end{align*}$$
-confirming that $c_n=n+1$. Thus, the problem reduces to solving the recurrence $$b_n=b_{n-1}+10+\frac1n(b_{n-1}-5)$$ with initial condition $b_0=0$.
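-(Before solving this, here is a quick numerical sanity check of the reduction so far; a sketch in Python, not part of the derivation, with the free initial value $a=T(0)$ set to $1$ purely for illustration:)
-    # Check that T(n) = (n+1)*a + b(n), where b satisfies the reduced
-    # recurrence b_n = b_{n-1} + 10 + (b_{n-1} - 5)/n with b_0 = 0.
-    a = 1.0
-    T = [a]
-    for n in range(1, 20):                 # original recurrence
-        T.append(2.0 / n * sum(T) + 5.0 * n)
-    b = [0.0]
-    for n in range(1, 20):                 # reduced recurrence
-        b.append(b[-1] + 10.0 + (b[-1] - 5.0) / n)
-    assert all(abs(T[n] - ((n + 1) * a + b[n])) < 1e-9 for n in range(20))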
-It appears to be convenient to let $u_n=b_n-5$, so that $u_0=-5$, and
-$$u_n=u_{n-1}+10+\frac1nu_{n-1}=\frac{n+1}nu_{n-1}+10\;.$$ Then
-$$\begin{align*}
-u_n&=\frac{n+1}nu_{n-1}+10\\
-&=\frac{n+1}n\left(\frac{n}{n-1}u_{n-2}+10\right)+10\\
-&=\frac{n+1}{n-1}u_{n-2}+10\left(1+\frac{n+1}n\right)\\
-&=\frac{n+1}{n-1}\left(\frac{n-1}{n-2}u_{n-3}+10\right)+10\left(1+\frac{n+1}n\right)\\
-&=\frac{n+1}{n-2}u_{n-3}+10\left(1+\frac{n+1}n+\frac{n+1}{n-1}\right)\\
-&\qquad\qquad\qquad\vdots\\
-&=(n+1)u_0+10\left(\frac{n+1}{n+1}+\frac{n+1}n+\frac{n+1}{n-1}+\ldots+\frac{n+1}2\right)\\
-&=-5(n+1)+10(n+1)(H_{n+1}-1)\\
-&=-5-5n+10(n+1)(H_{n+1}-1)\;,
-\end{align*}$$
-where $H_n$ is the $n$-th harmonic number. (Properly speaking, this should now be proved by induction on $n$.)
-Finally, then, $b_n=u_n+5=10(n+1)(H_{n+1}-1)-5n$, and
-$$\begin{align*}
-T(n)&=(n+1)\Big(T(0)+10(H_{n+1}-1)\Big)-5n\\
-&=(n+1)\Big(T(0)+10H_{n+1}\Big)-10(n+1)-5n\\
-&=(n+1)\Big(T(0)+10H_{n+1}\Big)-15n-10\;.
-\end{align*}$$<|endoftext|>
-TITLE: How to make sure a proof is correct
-QUESTION [18 upvotes]: If you come up with a proof of a mathematical proposition, how do you verify the proof is correct?
-To put it another way, how do you avoid a wrong proof?
-I guess there is no definitive answer to this.
-However, I believe this question is important.
-Any idea, suggestion or method to help make sure a proof is correct would be appreciated.
-EDIT
-Thanks for all your suggestions.
-I'd like to add my idea, which is perhaps similar to Leslie Lamport's (I haven't read his paper yet).
-My idea is basically "divide and conquer".
-Divide your proof into small propositions or lemmas.
-The smaller, the better.
-Ideally each of these small propositions should be trivial.
-To do this, first divide the main theorem into several propositions.
-The main theorem should be almost trivial assuming each proposition is correct.
-Then divide each proposition into several propositions.
-Repeat this process until each proposition cannot be divided further or is trivial enough.
-Then apply some or all of your ideas to each proposition.
-Generalizing your theorem can help make this process easier (there is an adage(?): if your problem is difficult, generalize it).
-I got this idea from Grothendieck's approach in his EGA.
-See the article "The rising sea; Grothendieck on simplicity and generality" by Colin McLarty.
-
-REPLY [4 votes]: Follow Kolmogorov's advice: if you are not sure about the validity of the statement, try proving and disproving it consecutively.<|endoftext|>
-TITLE: Relation between uniform convergence of curves in a manifold and homotopy
-QUESTION [5 upvotes]: I was working through some things in Riemannian geometry and I had this doubt:
-Let $M$ be a closed Riemannian manifold, $H$ an embedded submanifold and $V$ be its $\varepsilon$-tubular neighborhood. Consider a sequence of continuous curves $\sigma_{l}:[0,1] \rightarrow M\setminus V$. Suppose that the sequence converges uniformly to a continuous curve $\sigma:[0,1]\rightarrow M\setminus V$. Can I assume that $\sigma$ is homotopic to some $\sigma_l$ for $l$ large enough?
-
-REPLY [4 votes]: I was thinking about this differently than Jeremy. My argument is not nearly as general, since it relies explicitly on the metric.
-Let $\sigma_l\to\sigma$ uniformly in $M$, where $M$ is some Riemannian manifold (not necessarily complete) without boundary. Since the convergence is uniform, for every $\epsilon>0$ there is some $N$ so that $l>N$ implies $\sup_t d(\sigma_l(t),\sigma(t)) < \epsilon$.
-Since $\sigma$ is compact, it admits a tubular neighborhood $\nu$ of radius $\epsilon_0 > 0$. (Compactness lets us avoid pathologies such as an infinite spiral toward the "edge" of an incomplete manifold.) Associated to $\epsilon_0$ is an $N_0$ so that for all $l>N_0$, $\sigma_l$ lies entirely within $\nu$.
-The tubular neighborhood is diffeomorphic to the normal bundle of $\sigma$ in $M$, so we may regard $\sigma_l$ as a section of the normal bundle. All sections of a vector bundle are homotopic to its zero section. Pulling this homotopy back by the diffeomorphism between the normal bundle and $\nu$, we have a homotopy between $\sigma_l$ and $\sigma$. The minimal $N_0$ so that all $l > N_0$ have $\sigma_l$ homotopic to $\sigma$ is that given by the radius of the cut locus of $\sigma$.
-The intuition here is that since $\sigma$ is compact, some small $\epsilon$-neighborhood of $\sigma$ must retract onto it (this is the tubular neighborhood). Since the $\sigma_l\to\sigma$ uniformly, for large enough $l$ we can "suck" the $\sigma_l$ onto $\sigma$ via that retraction.<|endoftext|>
-TITLE: Limit of the sequence $\lim_{n\rightarrow\infty}\sqrt[n]n$
-QUESTION [11 upvotes]: Possible Duplicate:
-$\lim_{n \to +\infty} n^{\frac{1}{n}} $
-
-I know that
-$$\lim_{n\rightarrow\infty}\sqrt[n]n=1$$
-and I can imagine that $n$ grows linearly while the $n$th root compresses it exponentially and therefore the result is $1$, but how do I calculate it?
-
-REPLY [5 votes]: Let's see a very elementary proof. Without loss of generality we proceed replacing $n$ by $2^n$ and get that:
-$$ 1\leq\lim_{n\rightarrow\infty} n^{\frac{1}{n}}=\lim_{n\rightarrow\infty} {2^n}^{\frac{1}{{2}^{n}}}=\lim_{n\rightarrow\infty} {2}^{\frac{n}{{2}^{n}}}\leq\lim_{n\rightarrow\infty} {2}^{\frac{n}{\dbinom{n}{2}}}=2^0=1$$
-By the Squeeze Theorem the proof is complete.<|endoftext|>
-TITLE: What's so "shrieky" about this shriek map?
-QUESTION [5 upvotes]: On page 88 of Atiyah-Macdonald's "Introduction to Commutative Algebra" there is an exercise about the Grothendieck group $K(A)$ of a noetherian ring $A$. In this context to every finite ring homomorphism $f: A \rightarrow B$ of noetherian rings there is an associated group homomorphism
-$$f_{!}: K(B) \rightarrow K(A)$$
-which is induced by restricting a finitely generated $B$-module via $f$ to a finitely generated $A$-module. Given two finite ring homomorphisms $A \stackrel{f}\longrightarrow B \stackrel{g} \longrightarrow C$ we get
-$$(g \circ f)_{!} = f_{!} \circ g_{!}$$
-What I am wondering about is: why do they put the "shriek" (i.e. the symbol "$!$") into the subscript when it behaves contravariantly?
-On Wikipedia they say that shrieks are used either to distinguish a functor from another similar functor, or in order to warn the reader that something which intuitively behaves covariantly (contravariantly) behaves instead contravariantly (covariantly).
-So which of the two, if anything, applies in my case above? It's pretty clear to me that in my case the shriek has to "turn arrows around" because we successively restrict scalars, first along $g$ then along $f$. Would one instead, a priori, expect that the Grothendieck group functor is covariant, or is there another, well-known functor, which could easily be confused with this one?
-
-REPLY [3 votes]: In general, $f_!$ is the notation, in cohomological or cycle-theoretic contexts, for pushforward with proper supports (the relative version of cohomology with compact supports).
-In this particular case, one is applying it to the proper morphism
-Spec $B \to $ Spec $A$ (proper because the corresponding extension of rings
-$A \to B$ is finite).
-So the answer to your question is that there is a larger geometric context in which this particular situation considered by AM can be placed, and in that larger context $f_!$ is the traditional notation.<|endoftext|>
-TITLE: the fundamental exact sequence associated to a closed space
-QUESTION [10 upvotes]: Let $(X,\mathcal O_X)$ be an algebraic variety. If $Y\subseteq X$ is a closed subset, then we can equip $Y$ with a structure of algebraic variety $(Y,\mathcal O_Y)$. The function $i:Y\rightarrow X$ is the usual immersion, moreover if $i_*\mathcal O_Y$ is the pushforward of $\mathcal O_Y$ through $i$, we have the following surjective morphism of sheaves:
-$\mathcal O_X(U)\rightarrow i_*\mathcal O_Y(U):=\mathcal O_Y(U\cap Y)$ such that $s\mapsto s|_{U\cap Y}$
-Clearly $i_*\mathcal O_Y$ is an $\mathcal O_X$-module and the kernel of the above morphism, called $\mathcal I_{Y|X}$, is a sheaf of ideals, so an $\mathcal O_X$-module. Finally we have the fundamental exact sequence of $\mathcal O_X$-modules associated to $Y$:
-$$0\longrightarrow \mathcal I_{Y|X}\longrightarrow \mathcal O_X\longrightarrow i_*\mathcal O_Y\longrightarrow 0$$
-I have two questions, one conceptual and one more technical:
-1) Why is it so important to consider closed subspaces of $X$? For example if $U\subseteq X$ is open, then $(U,\mathcal O_X|_U)$ is an algebraic variety, so one can construct the fundamental sequence for $U$.
-2) In some texts there is the identification of $i_*\mathcal O_Y$ with the sheaf $\mathcal O_Y$. How can I prove this identification?
-
-REPLY [8 votes]: Let $X$ be an integral scheme and let $Y$ be a non-empty open subscheme of $X$. $X$ is irreducible, so $Y$ is dense in $X$. I claim the induced homomorphism $i^\flat : \mathscr{O}_X \to i_* \mathscr{O}_Y$ is monic but not necessarily epic. Indeed, the claim is local on $X$ and $Y$, so we may take $X$ to be affine and $Y$ to be a distinguished open subscheme of $X$. But then all we have is a localisation of an integral domain, and this is always injective but not necessarily surjective.
-
-The point is that there isn't one notion of subscheme which gives rise to open and closed subschemes; rather, there are two.
-Open subschemes are a special case of the following construction: if $X$ is a locally ringed space and $Y$ is any subset of $X$, we can make $Y$ into a locally ringed space by pulling back $\mathscr{O}_X$ along the inclusion $i : Y \hookrightarrow X$. Unfortunately, there's no guarantee that $(Y, i^{-1} \mathscr{O}_X)$ is a scheme even if $(X, \mathscr{O}_X)$ is.
-On the other hand, the definition of a closed subscheme depends a lot on the scheme structure.
If $Y$ is a closed subset of $X$, that means $Y \cap U$ is closed in $U$ for every open subset $U$ – but we know that closed subsets of $\operatorname{Spec} A$ are homeomorphic to $\operatorname{Spec} A / \mathfrak{a}$ for some suitable ideal $\mathfrak{a}$, and in essence the structure of $Y$ as a closed subscheme of $X$ is defined so that we have an exact sequence -$$0 \longrightarrow i^{-1} \mathscr{I}_{Y \mid X} \longrightarrow i^{-1} \mathscr{O}_X \longrightarrow \mathscr{O}_Y \longrightarrow 0$$ -or equivalently, so that we have the "fundamental exact sequence": -$$0 \longrightarrow \mathscr{I}_{Y \mid X} \longrightarrow \mathscr{O}_X \longrightarrow i_* \mathscr{O}_Y \longrightarrow 0$$ - -But the story for varieties is more subtle. For the purposes of this discussion, I mean "variety" in the sense of a reduced scheme of finite type over an algebraically closed field $k$. Because varieties have enough closed points, the structure sheaf of a variety $X$ is isomorphic to a subsheaf of the sheaf of continuous functions $X(k) \to \mathbb{A}^1(k)$. (Henceforth, I will pretend non-closed points don't exist.) Thus, there is a canonical way of restricting regular functions on (any open subset of) $X$ to any (not necessarily open or closed!) subset $Y$ of $X$. If $Y$ is open, this recovers the open subscheme structure, and if $Y$ is closed, this recovers the closed subscheme structure. -Let's look at this more closely. We define $\mathscr{I}_{Y \mid X}$ to be the subsheaf of $\mathscr{O}_X$ consisting of those regular functions which vanish on $Y$, i.e. -$$\mathscr{I}_{Y \mid X} (U) = \{ f \in \mathscr{O}_X : \forall y \in Y . \, f (y) = 0 \}$$ -and we define, for each open subset $V$ of $Y$, -$$\mathscr{O}_Y (V) = \varinjlim_{U \supseteq V} \mathscr{O}_X (U) / \mathscr{I}_{Y \mid X} (U)$$ -This is a sheaf because $Y$ is quasicompact. (Every subset of $X$ is quasicompact!) If $Y$ is open, then $V$ is open in $X$, so we are taking the direct limit over a directed system with a terminal object – hence $\mathscr{O}_Y (V) \cong \mathscr{O}_X (V) / \mathscr{I}_{Y \mid X} (V) \cong \mathscr{O}_X (V)$ in this case. -For $Y$ closed something weird happens as well. Let $U$ be an open affine subset of $X$. Then, $V = U \cap Y$ is a closed subset of $U$ and an open subset of $Y$. Suppose $f \in \mathscr{O}_X (U)$ does not vanish on $V$. Then, the Nullstellensatz implies $f$ is already invertible in $\mathscr{O}_X (U) / \mathscr{I}_{Y \mid X} (U)$ – and this implies that the directed system is constant! In particular, we get $i_* \mathscr{O}_Y \cong \mathscr{O}_X / \mathscr{I}_{Y \mid X}$. -In general, however, we don't get anything nice. There is a natural left exact sequence of groups -$$0 \longrightarrow \mathscr{I}_{Y \mid X} (U) \longrightarrow \mathscr{O}_X (U) \longrightarrow \mathscr{O}_Y (U \cap Y)$$ -and therefore a left exact sequence of sheaves on $X$: -$$0 \longrightarrow \mathscr{I}_{Y \mid X} \longrightarrow \mathscr{O}_X \longrightarrow i_* \mathscr{O}_Y$$ -We have just seen that this extends to a short exact sequence when $Y$ is closed. When $Y$ is open and $X$ is irreducible, the homomorphism $\mathscr{O}_X \to i_* \mathscr{O}_Y$ is monic but in general not epic. (Consider a point $x \in X \setminus Y$: the stalk of $i_* \mathscr{O}_Y$ at $x$ gives the fraction field of the local ring $\mathscr{O}_{X, x}$.)<|endoftext|> -TITLE: About a continuous function -QUESTION [7 upvotes]: I'm trying to solve this problem, but I don't have any idea. Can you help me? 
-Let $X$ be a compact metric space and $f:X\times\mathbb{R}\rightarrow \mathbb{R}$ be a continuous function. Consider $m(t_0)=\max_x (f(x,t_0))$. Show that $m$ is continuous.
-Thanks.
-
-REPLY [2 votes]: Fix $t\in\Bbb R$. We just have to show sequential continuity, since we are working in a metric space. Let $\{t_n\}\subset\Bbb R$ be a sequence which converges to $t$. Since $X$ is compact, we can find $x_n$ such that $m(t_n)=f(x_n,t_n)$. We show that for each subsequence of $\{t_n\}$ we can find a further subsequence $\{t_{n_k}\}$ such that $m(t_{n_k})\to m(t)$. This will show that $m(t_n)\to m(t)$.
-Let $\{t_{n'}\}$ be a subsequence of $\{t_n\}$. The sequence $\{x_{n'}\}$ admits a converging subsequence $\{x_{n_k}\}$, say to $x$. Then $(t_{n_k},x_{n_k})\to (t,x)$ and we conclude using the continuity of $f$ (and the fact that $f(x_{n_k},t_{n_k})\geq f(y,t_{n_k})$ for all $y\in X$) to show that $f(x,t)=m(t)$.<|endoftext|>
-TITLE: Can we distinguish $\aleph_0$ from $\aleph_1$ in Nature?
-QUESTION [9 upvotes]: Can we even find examples of infinity in nature?
-
-REPLY [2 votes]: The question as asked is somewhat ill-defined. I'm taking the word "nature" to mean "the thing described by physics", but what does it mean to say that some mathematical object is found "in nature"?
-Such a phrase can only have meaning relative to some scheme of associating mathematical objects to physical "things". It's easy to construct such a scheme: e.g. if I consider an interpretation where I associate "$\aleph_1$" to the apple sitting on the table and don't assign meaning to any other mathematical objects, then I have found $\aleph_1$ in nature: it's sitting on the table.
-Maybe more interesting is a more 'standard' interpretation associated with a physical theory. The interpretations typically seen in a physical theory don't tend to associate cardinal numbers to objects. In particular, they don't assign $\aleph_1$ to any sort of object, and this fact is a banality.
-But, in this case, we can turn to higher-order concepts. E.g. a version of Newtonian physics based upon ZFC+CH says that the number of points in any region of space is $\aleph_1$.<|endoftext|>
-TITLE: Why would the author ask if I used the Associative Law to prove + is not equiv. to *?
-QUESTION [5 upvotes]: I just started reading An Introduction to Mathematical Analysis by H.S. Bear and problem 1 goes as follows:
-
-Problem 1: Show that + and * are necessarily different operations. That is, for any system (F, +, *) satisfying Axioms I, II, and III, it cannot happen that x + y = x * y for all x, y. Hint: You do not know there are any numbers other than 0 and 1, so that your argument should probably involve only these numbers. Did you use Axiom II? If not, state explicitly the stronger result that you actually proved.
-
-In this book, Axiom I is commutativity of + and *, Axiom II is associativity of + and *, and Axiom III is existence of identities (x+0=x, x*1=x, 0 does not equal 1).
-My question: Simply, why would the author specifically ask the reader if he/she used Axiom II (associativity), and what exactly do they mean by "If not, state explicitly the stronger result you actually proved"? Why not just omit those last two sentences?
-FWIW, here is my solution (the two derivations, "To prove" and "Restated", were posted as images in the original and are not reproduced here), and I justified step 5 by citing Axiom III, since Axiom III includes the statement that 0 does not equal 1.
-
-REPLY [4 votes]: The basic idea is as follows.
From the neutral Axiom III and commutativity of addition we have
-$$\begin{array}{rl}\rm x \;=\; & \rm 0 + x \\ & \rm y \,*\, 1 \;=\; \rm y\end{array}$$
-If $\ +\, =\, *\ $ then aligned terms are unified for $\rm\:y = 0,\ x = 1,\:$ yielding
-$$\rm\ 1 = 0 + 1 = 0 * 1 = 0 $$
-contra hypothesis $\rm\:1 \ne 0.\:$ Thus $\rm\: +\: \ne\: *\:$ because they take different values at the point $\rm\:(0,1)$.
-Note that the proof does not use associativity, and doesn't use commutativity if you state the neutral axioms as above. In any case, only one of the commutative axioms is needed, so that the neutral axioms can be ordered so the above unification is possible. In particular, the inference works in noncommutative rings, i.e. rings where multiplication is not necessarily commutative. Further, because the proof did not use associativity, it will also work in nonassociative rings.
-Note $\ $ This method of deriving consequences by unifying terms in identities is a basic method in equational reasoning (term rewriting), e.g. google Knuth-Bendix or Gröbner basis algorithms.<|endoftext|>
-TITLE: Teaching permutations, How to?
-QUESTION [6 upvotes]: I posed this question to my niece while teaching her permutations:
-
-Given four balls of different colours, and four place holders to put those balls, in how many ways can you arrange these four balls in the four place holders?
-
-She replied with $(4!)^2$ while I was expecting to hear $4!$. Her reasoning was as follows:
-
-I can select the first of any of the four balls in four ways. Having picked one, I put this in any of the four place holders in four ways. Now, I select the second ball from the three remaining ones in 3 ways. I can place it in any of the remaining three place holders in 3 ways.
-
-... and so on to produce $4^2 \cdot 3^2 \cdot 2^2 \cdot 1^2$
-How do I explain to her that only $4!$ arrangements are possible, and that this is regardless of which order she picks the balls?
-
-REPLY [7 votes]: Demonstrate it inductively. Clearly there’s only one possible ordering of one object, $A$; that’s $1$. Add a second; it can go before ($BA$) or behind ($AB$), doubling the number of possibilities to $2\cdot1$. Add a third: it can go into any of three slots, tripling the number of possibilities: $3\cdot2\cdot1$. At this stage it’s a good idea still to illustrate everything:
-$$\begin{array}{c}
-&&&\square&B&\square&A&\square\\
-&&&\color{blue}{C}&&\color{red}{C}&&\color{green}{C}\\
-&&\swarrow&&&\downarrow&&&\searrow\\
-\color{blue}{C}&\color{blue}{B}&\color{blue}{A}&&\color{red}{B}&\color{red}{C}&\color{red}{A}&&\color{green}{B}&\color{green}{A}&\color{green}{C}\\ \hline
-\\
-&&&\square&A&\square&B&\square\\
-&&&\color{blue}{C}&&\color{red}{C}&&\color{green}{C}\\
-&&\swarrow&&&\downarrow&&&\searrow\\
-\color{blue}{C}&\color{blue}{A}&\color{blue}{B}&&\color{red}{A}&\color{red}{C}&\color{red}{B}&&\color{green}{A}&\color{green}{B}&\color{green}{C}
-\end{array}$$
-Then each of the $3\cdot2\cdot1$ strings of $3$ objects has $4$ slots in which to insert a new one, for a total of $4\cdot3\cdot2\cdot1$ strings, and so on:
-$$\square\quad X\quad\square\quad Y\quad\square\quad Z\quad\square$$<|endoftext|>
-TITLE: Direct image of double cover.
-QUESTION [6 upvotes]: Let $\mathbb{Z}$ be the constant sheaf on $\mathbb{S}^1$, $f: \mathbb{S}^1 \to\mathbb{R}P^1$ the double cover and $\mathcal{A} = f_*\mathbb{Z}$.
-Then $\mathcal{A}$ has stalks $\mathbb{Z}\oplus\mathbb{Z}$ but I am having difficulties describing the topology of $\mathcal{A}$.
<|endoftext|>
-TITLE: Direct image of double cover.
-QUESTION [6 votes]: Let $\mathbb{Z}$ be the constant sheaf on $\mathbb{S}^1$, $f: \mathbb{S}^1 \to\mathbb{R}P^1$ the double cover and $\mathcal{A} = f_*\mathbb{Z}$.
-Then $\mathcal{A}$ has stalks $\mathbb{Z}\oplus\mathbb{Z}$, but I am having difficulties describing the topology of $\mathcal{A}$.
-That is, I do not understand why the sheaf is 'twisted' via the automorphism $(n,k)\to (k,n)$.
-edit: my idea was flawed so I might as well remove it.
-
-REPLY [3 votes]: Locally, $\mathcal A$ is a constant sheaf with stalks $\mathbb Z \oplus\mathbb Z$.
-More precisely, if $U$ is an open of $S^1$, $\underline {\mathbb Z}(U)$ is the product of one copy of $\mathbb Z$ for each connected component of $U$, and if $V$ is an open of $\mathbb P^1$, then $\mathcal A (V) = \underline{\mathbb Z}(f^{-1} (V))$ = the product of as many copies of $\mathbb Z$ as there are connected components of $f^{-1} (V)$, which is usually twice the number of connected components of $V$.
-For example, if we take two distinct points $x_1,x_2 \in \mathbb P^1$, and take $V_i = \mathbb P^1 - \{x_i\}$, then $\mathcal A |_{V_i}$ is a constant sheaf with stalks $\mathbb Z \times\mathbb Z$. (The connected components of $f^{-1}(U)$ are always twice the connected components of $U$ when $U$ is an open subset of $V_i$.)
-But, as a whole, $\mathcal A$ is not the constant sheaf on $\mathbb P^1$, because $\mathcal A(\mathbb P^1) = \underline {\mathbb Z} (S^1) = \mathbb{Z}$, which is not $\mathbb Z \oplus \mathbb Z$.
-We can describe it instead as a locally constant sheaf, which is constant on $V_1$ and $V_2$, and we need to describe how it glues.
-Let $W = V_1 \cap V_2$. $W$ has two components, call them $W = W_a \cup W_b$. The fact that $\mathcal A$ is constant on $V_i$ means that it gives two different isomorphisms from $\mathcal A(W_a) \to \mathcal A(W_b)$, one going through $V_1$ and the other going through $V_2$.
-Denote by $W_a^1, W_a^2$ the connected components of $f^{-1}(W_a)$, do the same for $W_b^1, W_b^2$, and put the exponents such that $W_a^1$ and $W_b^1$ are in the same connected component of $f^{-1}(V_1)$, so that the connected components of $f^{-1}(V_1)$ correspond to $W_a^1 \cup W_b^1$ and $W_a^2 \cup W_b^2$, and those of $f^{-1}(V_2)$ correspond to $W_a^1 \cup W_b^2$ and $W_a^2 \cup W_b^1$.
-Then, we have restriction isomorphisms $\rho_i^a : \mathcal A(V_i) \to \mathcal A(W_a)$, which look like this: $\rho_1^a(x,y) = (x,y)$ and $\rho_2^a(x,y) = (x,y)$; and $\rho_i^b : \mathcal A(V_i) \to \mathcal A(W_b)$: $\rho_1^b(x,y) = (x,y)$ and $\rho_2^b(x,y) = (y,x)$.
-Then, the twisting map is the isomorphism $\tau = \rho_2^a \circ (\rho_2^b)^{-1} \circ \rho_1^b \circ (\rho_1^a)^{-1} : \mathcal A(W_a) \to \mathcal A(W_a)$. You should get that $\tau(x,y) = (y,x)$.
-This isomorphism describes what happens to sections on $W_a$ when you go once through $\mathbb P^1$: the two connected components of $f^{-1}(W_a)$ get switched around, and you are basically right when you describe it in your comment.
-And in fact, this map is enough to recover all the information you need to describe the sheaf $\mathcal A$ completely, so we can describe $\mathcal A$ as a locally constant sheaf whose stalks are $\mathbb Z \oplus \mathbb Z$, twisted by $\tau$ when we do one loop around $\mathbb P^1$.<|endoftext|>
-TITLE: Eigenvalues of block matrices
-QUESTION [6 votes]: Let $K$ be a field of characteristic 0, and consider the following block matrix
-$$M=\left(\begin{array}{cc} A & B\\ -B & D\end{array}\right),$$
-where each block is an $n\times n$ matrix with coefficients in $K$. I am looking for a relation between the eigenvalues of $M$ and those of $A$ and $D$.
-Context: Here, I'm assuming that $M$ is invertible and semisimple. I was wondering if there is a way to show that both $A$ and $D$ are invertible and semisimple as well.
-Moreover, we can also assume that $B$ is $2\times2$ and has one of the following forms:
-$$\left\{\left(\begin{array}{cc} 1 & 0\\ 0&1\end{array}\right),\left(\begin{array}{cc} 1 & 0\\ 0&0\end{array}\right),\left(\begin{array}{cc} 0 & 0\\ 0&0\end{array}\right)\right\}.$$
-
-REPLY [4 votes]: It's certainly not necessary for $A$ and $D$ to be invertible, e.g. with $B = \pmatrix{1 & 0\cr 0 & 1\cr}$ you could have $A = D = \pmatrix{0 & 0\cr 0 & 0\cr}$, or with $B = \pmatrix{1 & 0\cr 0 & 0\cr}$ you could have $A = \pmatrix{0 & 0\cr 0 & 1\cr}$ and $D = \pmatrix{1 & 1\cr 1 & 1\cr}$.
-Of course with $B = \pmatrix{0 & 0\cr 0 & 0\cr}$ the eigenvalues of $M$ are the union of the eigenvalues of $A$ and of $D$.
-In all cases $\text{Tr}(M) = \text{Tr}(A) + \text{Tr}(D)$, so the sum of the eigenvalues of $M$ is the sum for $A$ plus the sum for $D$.
-In the case $B = \pmatrix{1 & 0\cr 0 & 1\cr}$, the coefficient of $\lambda^2$ in the characteristic polynomial of $M$ (which is $\sum_{i < j} \lambda_i \lambda_j$ where $\lambda_i$ are the eigenvalues of $M$) is $a_1a_2+a_1d_1+a_1d_2+a_2d_1+a_2d_2+d_1d_2+2$, where $a_i$ and $d_i$ are the eigenvalues of $A$ and $D$ respectively.
-In the case $B = \pmatrix{1 & 0\cr 0 & 0\cr}$, that coefficient would be $a_1a_2+a_1d_1+a_1d_2+a_2d_1+a_2d_2+d_1d_2+1$.
-In the case $B = \pmatrix{1 & 0\cr 0 & 0\cr}$, I think these are the only equations linking the eigenvalues of $M$ with those of $A$ and $D$: you can choose $\lambda_1, \lambda_2, \lambda_3, \lambda_4$ arbitrarily satisfying the two constraints on $\sum_i \lambda_i$ and $\sum_{i<j} \lambda_i \lambda_j$ above.
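-A numerical sanity check of the first counterexample (my own Python/numpy sketch, not part of the original answer; it takes the block convention $M=\left(\begin{smallmatrix} A & B\\ -B & D\end{smallmatrix}\right)$ from the question):
-
-    import numpy as np
-
-    I2, Z2 = np.eye(2), np.zeros((2, 2))
-    # A = D = 0, B = I: M is invertible and semisimple (it is normal,
-    # with eigenvalues +-i, each twice), while A and D are neither
-    M = np.block([[Z2, I2], [-I2, Z2]])
-    print(np.linalg.eigvals(M))        # [+1j, -1j, +1j, -1j] up to order
-    print(np.linalg.matrix_rank(M))    # 4, so M is invertible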
<|endoftext|>
-TITLE: Is Koch snowflake a continuous curve?
-QUESTION [9 upvotes]: For the Koch snowflake, does there exist a continuous map from $[0,1]$ to it?
-The actual construction of the map may be impossible, but how can we claim the existence of such a continuous map? Or can we consider the limit of a sequence of continuous maps — but this sequence of continuous maps may not have a continuous limit.
-
-REPLY [12 votes]: Consider the snowflake curve as the limit of the curves $(\gamma_n)_{n\in \mathbb N}$, in the usual way, starting with $\gamma_0$ which is just an equilateral triangle of side length 1. Then each $\gamma_n$ is piecewise linear, consisting of $3\cdot 4^n$ pieces of length $3^{-n}$ each; for definiteness let us imagine that we parameterize it such that $|\gamma_n'(t)| = 3(\frac 43)^n$ whenever it exists.
-Now, it always holds that $|\gamma_{n+1}(t)-\gamma_n(t)|\le 3^{-n}$ for every $t$ (because each step of the iteration just changes the curve between two corners in the existing curve, but keeps each corner and its corresponding parameter value unchanged). This means that the $\gamma_n$'s converge uniformly towards their pointwise limit: At every $t$ the distance between $\gamma_n(t)$ and $\lim_{i\to\infty}\gamma_i(t)$ is at most $\sum_{i=n}^\infty (1/3)^i$, which is independent of $t$ and goes to $0$ as $n\to\infty$.
-Because uniform convergence preserves continuity, the limiting curve is a continuous function from $[0,1]$ to the plane.<|endoftext|>
-TITLE: Locally lipschitz on $[a,b]$ implies lipschitz
-QUESTION [9 upvotes]: Suppose a function $f:\mathbb{R}\rightarrow\mathbb{R}$ is locally Lipschitz. Prove $f$ is Lipschitz on $[a,b]$.
-Here is what I have so far: Let $[a, b]$ be some closed, bounded interval. Since $f$ is locally Lipschitz, for each $x\in[a, b]$ we may find some open neighborhood $U_x$ and some $M_x$ such that $|f(y)-f(z)| \le M_x|y-z|$ for all $y,z\in U_x$.
-Let $\mathcal{U}$ denote the collection of all such open neighborhoods $U_x$. Then $\mathcal{U}$ is an open cover of $[a, b]$. By the Heine-Borel Theorem, $[a, b]$ is compact, so $\mathcal{U}$ has a finite subcover, $\mathcal{V}$. Label the members of $\mathcal{V}$ as $U_{x_1}, U_{x_2},\ldots,U_{x_n}$. Then $[a, b]\subseteq U_{x_1}\cup\cdots\cup U_{x_n}$, and for each $U_{x_k}$ we associate a corresponding $M_{x_k}$ such that if $y, z\in U_{x_k}$, then $|f(y)-f(z)|< M_{x_k}|y-z|$. Let $M = \max\{M_{x_1},\ldots,M_{x_n}\}$. Let $y$ and $z$ be some points in $[a, b]$ with $z < y$. If both $y$ and $z$ lie in the same neighborhood in $\mathcal{V}$, then we are done.
-This is as far as I got. I do not know how to handle the case when $y$ and $z$ are in different neighborhoods.
-
-REPLY [6 votes]: Very good work so far.
-By the Lebesgue covering lemma, there exists $\delta >0$ s.t. if $x,y$ satisfy $|x-y|<\delta$ then there is $k$ s.t. $x, y \in U_{x_k}$.
-Let $x<y$ in $[a,b]$, and pick points $x=s_0<s_1<\cdots<s_m=y$ with $s_{j+1}-s_j<\delta$ for each $j$. Each pair $s_j, s_{j+1}$ then lies in a common $U_{x_k}$, so
-$$|f(x)-f(y)|\le\sum_{j=0}^{m-1}|f(s_{j+1})-f(s_j)|\le M\sum_{j=0}^{m-1}(s_{j+1}-s_j)=M(y-x),$$
-and $f$ is Lipschitz on $[a,b]$ with constant $M$.<|endoftext|>
-TITLE: A free boolean algebra
-QUESTION [7 votes]: Consider the following definition:
-The boolean algebra $A$ is generated freely with the subset $G \subseteq A$ if for every boolean algebra $B$ and map $f:G \mapsto B$ there is precisely one homomorphism $\overline{f}:A \mapsto B$ extending $f.$ That is $\overline{f}(x) = f(x)$ for all $x \in G.$
-I would like to check if $A = P(\mathbb{N})$ is generated freely by some subset $G.$
-Clearly if $G$ generates $A$ then $G$ is not the empty set and it does not contain 0 or 1. Define a mapping $f$ such that a fixed $x \in G$ is mapped to an atom if $x$ is not an atom, and to a non-atom element otherwise.
-Since any automorphism preserves atoms, there is no way to extend $f$ to a homomorphism from $A$ to $A$.
-The above reasoning (if correct) would therefore imply that $A$ is not generated freely.
-Am I mistaken?
-
-REPLY [4 votes]: Indeed $P(\mathbb{N})$ is not free. All infinite freely generated Boolean algebras are atomless, while $A$ is atomic. Furthermore, only the Boolean algebras that are generated by a finite set $G$ are finite (with $2^{2^{|G|}}$ many elements, if I'm not mistaken).
-To see that if $G$ is infinite then $A$ is, is trivial. To see that if $G$ is finite then $A$ is finite you just need to check that $A$ is the Lindenbaum algebra of propositional logic with $|G|$ many atoms (check that this algebra satisfies the requirements).
-To see that if $G$ is infinite then $A$ is atomless you do the following: Take an atomless Boolean algebra $B$ with $|G|$ many elements and define a mapping $f:G\to B$ such that $f[G]$ is dense, i.e. for every element $b\in B$ there is some element $c\in f[G]$ such that $c\leq b$. Then take the homomorphism $\bar{f}$ that extends $f$. If $A$ has an atom $a$, take $\bar{f}(a)$; since $B$ is atomless, there is some $d\in B$ with $0<d<\bar{f}(a)$. By density there is a $g\in G$ with $0\neq f(g)\leq d$; then $\bar{f}(a\wedge g)=\bar{f}(a)\wedge f(g)=f(g)\neq 0$, so $a\wedge g\neq 0$, while $a\wedge g=a$ would give $\bar{f}(a)\leq f(g)\leq d<\bar{f}(a)$. Hence $0<a\wedge g<a$, contradicting the assumption that $a$ is an atom.<|endoftext|>
-TITLE: Impossible to prove vs neither true nor false
-QUESTION [28 upvotes]: First off I am not a logician, so I probably won't use the correct terms. Sorry!
-I have heard, like most mathematicians, about questions like the continuum hypothesis, or the independence of the axiom of choice from ZF. These statements (continuum hypothesis or axiom of choice) were referred to as "neither true nor false", because you could add them or their negation to form two different sets of axioms that would be equally self-consistent.
-On the other hand, I have also heard about statements that "were true but could not be proven", i.e. could not be proven in a finite number of applications of axioms.
-For instance, it could be that the Goldbach conjecture is true, but that there is no other way to "prove" it than to verify it for all integers, which is not really a proof of course.
-Is that distinction correct? (and what is the real terminology?) I fail to understand what the problem would be, for example, if you were to add the negation of a "true but impossible to prove" statement as an axiom. There would be a contradiction, but you could never find it, so...
-
-REPLY [25 votes]: First we need to set up the general framework. We have a language with relation symbols and function symbols and constants, etc. With this language we can write sentences and formulas.
-We say that $T$ is a theory if it is a collection of sentences in a certain language; often we require that $T$ is consistent.
-If $T$ is a first-order theory, whatever that means, then we can apply Gödel's completeness theorem and we know that $T$ is consistent if and only if it has a model, that is an interpretation of the language in such a way that all the sentences in $T$ are true in a specific interpretation.
-The same theorem also tells us that if we have some sentence $\varphi$ in the same language, then $T\cup\{\varphi\}$ is consistent if and only if it has a model. We go further to notice that if we can prove $\varphi$ from $T$ then $\varphi$ is true in every model of $T$.
-On the other hand we know that if $T$ is consistent it cannot prove a contradiction. In particular if it proves $\varphi$ it cannot prove $\lnot\varphi$, and if both $T\cup\{\varphi\}$ and $T\cup\{\lnot\varphi\}$ are consistent then neither $\varphi$ nor $\lnot\varphi$ can be proved from $T$.
-When we say that CH is unprovable from ZFC we mean that there exists a model of ZFC+CH and there exists a model of ZFC+$\lnot$CH [1]. Similarly AC with ZF: there are models of ZF+AC and models of ZF+$\lnot$AC.
-Now we can consider a specific model of $T$. In such a model there are things which are true; for example, in a given model of ZF the axiom of choice is either true or false, and similarly the continuum hypothesis. Both these assertions are true (or false) in a given model, but cannot be proved from ZF itself.
-Some theories, like the Peano axioms treated as the theory of the natural numbers, have a canonical model. It is possible that the Goldbach conjecture is true in the canonical model, and therefore we can regard it as true in some aspects, but the sentence itself is false in a different, non-canonical model. This would cause the Goldbach conjecture to become unprovable from PA, while still being true in the canonical model.
-
-Footnotes:
-
-Of course this is all relative to the consistency of ZFC, namely we have to assume that ZFC is consistent to begin with, but if it is then both ZFC+CH and ZFC+$\lnot$CH are consistent as well.<|endoftext|>
-TITLE: Prove $\int 1-\prod_{i=1}^n (1- \mathbb{I}_{A_i}) d \mu= \mu ( \bigcup_{i=1}^n A_i )$
-QUESTION [6 votes]: Let $(\Omega, \mathcal{A}, \mu)$ be a measure space. $A_1, A_2,...,A_n \in \mathcal{A}$ are sets with finite measure.
-I have to prove $\int 1-\prod_{i=1}^n (1- \mathbb{I}_{A_i}) d \mu= \mu ( \bigcup_{i=1}^n A_i )$. But I am once again totally puzzled how to start.
-
-Result:
-I solved the problem, using the hint by Weltschmerz.
By now I have found an elegant way to prove it, which simply is
-$$\int 1 - \prod_{i=1}^n \left( 1- \mathbb{I}_{A_i} \right) d \mu= \int 1 - \prod_{i=1}^n \left( \mathbb{I}_{A_i^C} \right) d \mu$$ $$ = \int 1 - \mathbb{I}_{\bigcap_{i=1}^n A_i^C} d \mu = \int \mathbb{I}_{\left( \bigcap_{i=1}^n A_i^C \right)^C} d \mu = \int \mathbb{I}_{\bigcup_{i=1}^n A_i} d \mu= \mu \left( \bigcup_{i=1}^n A_i \right)$$
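-The pointwise identity $1-\prod_i(1-\mathbb{I}_{A_i})=\mathbb{I}_{\bigcup_i A_i}$ underlying this can also be confirmed mechanically on a toy example (my own Python sketch, with made-up sets; not part of the original post):
-
-    import math
-
-    Omega = range(10)
-    A = [{1, 2, 3}, {3, 4}, {7}]
-    for w in Omega:
-        # 1 - prod(1 - 1_{A_i}(w)) agrees with 1_{union of the A_i}(w)
-        lhs = 1 - math.prod(1 - (w in Ai) for Ai in A)
-        rhs = int(any(w in Ai for Ai in A))
-        assert lhs == rhs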
Classical Theory". This book is an excellent reference for many different subjects analytic number theory. -Appendix A, "The Riemann Siteltjes integral," deals with precisely your question. It is $8$ pages long, and should answer everything. -Edit: Zev kindly added links to Appendix A, so no need to check your library, or ask your prof.<|endoftext|> -TITLE: A question about modular curves and base change -QUESTION [5 upvotes]: Let $X$ be a smooth projective geometrically connected curve over a number field $K$. -Suppose that the curve $X\times_{K,\sigma} \mathbf{C}$ is a modular curve for some $\sigma:K\to \mathbf{C}$. -Can we conclude that $X\times_{K,\tau} \mathbf{C}$ is a modular curve for ALL $\tau:K\to \mathbf{C}$? -I'm asking this question out of pure curiosity. I see no reason why all base changes of $X$ to $\mathbf{C}$ should be modular provided one of them is. Then again, I wouldn't know how to construct a counter example. Probably you can do something with elliptic curves. -A modular curve is (to me) an algebraic curve isomorphic to the compactification of $\Gamma\backslash \mathbf{H}$ for some congruence subgroup $\Gamma \subset \mathrm{SL}_2(\mathbf{Z})$. - -REPLY [9 votes]: You are asking if any conjugate of a modular curve is again a modular curve, and the answer is yes. This is a very special case of the general theory of conjugation of Shimura varieties, which says that any algebraic conjugate of a Shimura variety is again a Shimura variety, but which in this case can be verified directly. -Firstly, just to explain why I say that you are asking about algebraic conjugates of modular curves: modular curves are defined over $\overline{\mathbb Q}$, so one can replace $\mathbb C$ by $\overline{\mathbb Q}$ in the question (because if two smooth projective curves, both defined over an algebraically closed field $k$ --- in this case $\overline{\mathbb Q}$ --- become isomorphic over a larger algebraically closed field $\Omega$ --- in this case $\mathbb C$ --- then they are already isomorphic over $k$). -Now, regarding fields of definition, we can say more: -The modular curve $X(N)$ is defined over $\mathbb Q(\zeta_N)$, and it has -a natural action of $SL_2(\mathbb Z/N)$ which is also defined over that field. -Let $\sigma_a$ denote the automorphism of $\mathbb Q(\zeta_N)$ which maps $\zeta_N$ to $\zeta_N^a$, for $a \in (\mathbb Z/N)^{\times}$. Then one -can show that the conjugate of $X(N)$ by $\sigma_a$ is again isomorphic to $X(N)$, and that this isomorphism can be chosen so that it takes the action of $\gamma \in SL_2(\mathbb Z/N)$ to the action of $\begin{pmatrix} a & 1 \\0 & 1\end{pmatrix} \gamma \begin{pmatrix} a^{-1} & 1 \\ 0 & 1 \end{pmatrix}.$ -Now any modular curve $X$ is a quotient of $X(N)$ for some level $N$ by some subgroup $H$ of $SL_2(\mathbb Z/N)$, and the preceding paragraph shows that it is also defined over $\mathbb Q(\zeta_N)$, with -its conjugate $X^{\sigma_a}$ being isomorphic to the modular curve obtained -by taking the quotient of $X(N)$ by the conjugated subgroup $\begin{pmatrix}a & 0 \\ 0 & 1 \end{pmatrix} H \begin{pmatrix} a^{-1} & 0 \\ 0 & 1\end{pmatrix}.$<|endoftext|> -TITLE: "8 Dice arranged as a Cube" Face-Sum Equals 14 Problem -QUESTION [10 upvotes]: I found this here: - -Sum Problem -Given eight dice. Build a $2\times 2\times2$ cube, so that the sum of the points on each side is the same. -$\hskip2.7in$ -Here is one of 20 736 solutions with the sum 14. - You find more at the German magazine "Bild der Wissenschaft 3-1980". 
<|endoftext|>
-TITLE: A question about modular curves and base change
-QUESTION [5 votes]: Let $X$ be a smooth projective geometrically connected curve over a number field $K$.
-Suppose that the curve $X\times_{K,\sigma} \mathbf{C}$ is a modular curve for some $\sigma:K\to \mathbf{C}$.
-Can we conclude that $X\times_{K,\tau} \mathbf{C}$ is a modular curve for ALL $\tau:K\to \mathbf{C}$?
-I'm asking this question out of pure curiosity. I see no reason why all base changes of $X$ to $\mathbf{C}$ should be modular provided one of them is. Then again, I wouldn't know how to construct a counterexample. Probably you can do something with elliptic curves.
-A modular curve is (to me) an algebraic curve isomorphic to the compactification of $\Gamma\backslash \mathbf{H}$ for some congruence subgroup $\Gamma \subset \mathrm{SL}_2(\mathbf{Z})$.
-
-REPLY [9 votes]: You are asking if any conjugate of a modular curve is again a modular curve, and the answer is yes. This is a very special case of the general theory of conjugation of Shimura varieties, which says that any algebraic conjugate of a Shimura variety is again a Shimura variety, but which in this case can be verified directly.
-Firstly, just to explain why I say that you are asking about algebraic conjugates of modular curves: modular curves are defined over $\overline{\mathbb Q}$, so one can replace $\mathbb C$ by $\overline{\mathbb Q}$ in the question (because if two smooth projective curves, both defined over an algebraically closed field $k$ --- in this case $\overline{\mathbb Q}$ --- become isomorphic over a larger algebraically closed field $\Omega$ --- in this case $\mathbb C$ --- then they are already isomorphic over $k$).
-Now, regarding fields of definition, we can say more:
-The modular curve $X(N)$ is defined over $\mathbb Q(\zeta_N)$, and it has a natural action of $SL_2(\mathbb Z/N)$ which is also defined over that field.
-Let $\sigma_a$ denote the automorphism of $\mathbb Q(\zeta_N)$ which maps $\zeta_N$ to $\zeta_N^a$, for $a \in (\mathbb Z/N)^{\times}$. Then one can show that the conjugate of $X(N)$ by $\sigma_a$ is again isomorphic to $X(N)$, and that this isomorphism can be chosen so that it takes the action of $\gamma \in SL_2(\mathbb Z/N)$ to the action of $\begin{pmatrix} a & 0 \\0 & 1\end{pmatrix} \gamma \begin{pmatrix} a^{-1} & 0 \\ 0 & 1 \end{pmatrix}.$
-Now any modular curve $X$ is a quotient of $X(N)$ for some level $N$ by some subgroup $H$ of $SL_2(\mathbb Z/N)$, and the preceding paragraph shows that it is also defined over $\mathbb Q(\zeta_N)$, with its conjugate $X^{\sigma_a}$ being isomorphic to the modular curve obtained by taking the quotient of $X(N)$ by the conjugated subgroup $\begin{pmatrix}a & 0 \\ 0 & 1 \end{pmatrix} H \begin{pmatrix} a^{-1} & 0 \\ 0 & 1\end{pmatrix}.$<|endoftext|>
-TITLE: "8 Dice arranged as a Cube" Face-Sum Equals 14 Problem
-QUESTION [10 upvotes]: I found this here:
-
-Sum Problem
-Given eight dice. Build a $2\times 2\times2$ cube, so that the sum of the points on each side is the same.
-$\hskip2.7in$
-Here is one of 20 736 solutions with the sum 14. You can find more in the German magazine "Bild der Wissenschaft 3-1980".
-
-Now my question:
-Is $14$ the only possible face sum? At least, in the example given, it seems to be related to the fact that on every face two dice-pairs show up, having $n$ and $7-n$ pips. Is this necessary? Sufficient it is...
-
-REPLY [3 votes]: No, 14 is not the only possibility.
-For example:
-Arrange the dice so that you only see 1, 2 and 3 pips and all the 2's are on the upper and lower face of the big cube. This gives you face sum 8.
-Please ask your other questions as separate questions if you are still interested.<|endoftext|>
-TITLE: A function and its Fourier transform cannot both be compactly supported
-QUESTION [12 upvotes]: I am stuck on the following problem from Stein and Shakarchi's third book.
-I can't figure out how to use the hint productively. Once I know $f$ is a trigonometric polynomial, I see how to finish the problem, but I don't know how to conclude that $f$ is a trigonometric polynomial.
-I tried substituting in the Fourier transform in the formula for Fourier coefficients and switching the order of integration, but I couldn't get that to work. I can't think of any more ideas.
-Problem: Suppose $f$ is continuous on $\mathbb{R}$. Prove $f$ and $\hat f$ cannot both be compactly supported unless $f=0$.
-Hint: Suppose $f$ is supported in $[0, 1/2]$ and expand it in a Fourier series in $[0,1]$. Show $f$ must be a trigonometric polynomial.
-This question was asked before, but with different hypotheses and in the context of complex analysis. Please do not close as a duplicate.
-
-REPLY [13 votes]: Further hint: let $c_n$ be the $n$-th Fourier coefficient of $f$. We can write $\widehat f(n)=c_n$, and since $\widehat f$ is compactly supported, $c_n$ vanishes for $n$ large enough. It implies that $f$ is a trigonometric polynomial.
-The hypothesis of the hint is not restrictive: using a substitution in the integral defining the Fourier transform, we can assume that the support of $f$ is contained in $[0,a]$ for some $a>0$, then define $g(x):=f\left(\frac x{2a}\right)$.<|endoftext|>
-TITLE: Is this limit evaluation correct?
-QUESTION [7 votes]: In trying to give the OP an elementary answer to this question, I made some rather stupid mistakes. I feel terrible about giving a wrong answer (in lieu of a complicated but correct one).
-I devised a new proof, and wanted to check it before editing my answer. Does everyone like the following (well enough)?
-
-Assertion: $$\lim_{x \to 0^+} \frac{x^{x^x}}{x} = 1$$
-Proof: We pass to the log of the limit.
-$$\log\left(\lim_{x \to 0^+} \frac{x^{x^x}}{x}\right) = \lim_{x \to 0^+} \log\left(\frac{x^{x^x}}{x}\right) = \lim_{x \to 0^+} \frac{\log(x)}{\frac{1}{x^x - 1}}$$
-We use L'Hospital's rule, and rearrange:
-$$\lim_{x \to 0^+} \frac{\log(x)}{\frac{1}{x^x - 1}} = \lim_{x \to 0^+} \frac{\frac{1}{x}}{\frac{-x^x(\log(x) + 1)}{(x^x - 1)^2}} = \lim_{x \to 0^+} \frac{- (x^x - 1)^2}{x^{x}(x\log(x) + x)} = \left( \lim_{x \to 0^+} \frac{-(x^x - 1)}{x^x} \right) \left( \lim_{x \to 0^+} \frac{(x^x - 1)}{x\log(x) + x} \right) $$
-provided that both of these last limits exist; but (again using L'Hospital in the 2nd limit) we see that
-$$\lim_{x \to 0^+} \frac{-(x^x - 1)}{x^x} = \frac{0}{1} = 0,$$
-$$\lim_{x \to 0^+} \frac{(x^x - 1)}{x\log(x) + x} = \lim_{x \to 0^+} \frac{x^x(\log(x)+1)}{(1+ \log(x)) + (1)} = \left( \lim_{x \to 0^+} \frac{x^x(\log(x)+2)}{\log(x) + 2} - \lim_{x \to 0^+} \frac{x^x}{\log(x) + 2} \right) = 1,$$
-and therefore $\displaystyle\log\left(\lim_{x \to 0^+} \frac{x^{x^x}}{x}\right) = 0$.
-Exponentiating both sides therefore shows that $\displaystyle\lim_{x \to 0^+} \frac{x^{x^x}}{x} = 1$.
-
-REPLY [4 votes]: The proof is correct, but you worked too hard. Shorter proof:
-$$\log \frac{x^{x^x}}{x}=\log x^{x^x-1}=(x^x-1)\log x=(e^{x\log x}-1)\log x\tag{1}$$
-Recall that $\lim_{t\to 0}(e^t-1)/t =1$.
-Since $x\log x\to 0$ as $x\to 0$, it follows that
-$$\lim_{x\to 0^+} \frac{e^{x\log x}-1}{x\log x}=1$$
-Rewriting (1) as
-$$
-\left( \frac{e^{x\log x}-1}{x\log x}\right) \cdot (x\log^2x)
-$$
-we see that the first factor tends to $1$ while the second tends to $0$. Thus, the limit of (1) is $0$, and the limit of $\frac{x^{x^x}}{x}$ is $1$.
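-A quick numerical sanity check (my own Python sketch, not part of the original answer; the sample points are arbitrary):
-
-    # x^(x^x)/x should tend to 1 as x -> 0+; by the argument above the
-    # log of the ratio is about x*log(x)^2, hence the slow convergence
-    for x in [1e-2, 1e-4, 1e-6, 1e-8]:
-        print(x, x ** (x ** x) / x)
-    # approx: 1.236..., 1.0085..., 1.00019..., 1.0000034...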
<|endoftext|>
-TITLE: Help with Serre spectral sequences
-QUESTION [10 votes]: I'm working through Hatcher's unfinished book Spectral Sequences in Algebraic Topology and have found myself stuck on Exercise 2 on page 23:
-
-Compute the Serre spectral sequence for homology with $\mathbb{Z}$ coefficients for the fibration $K(\mathbb{Z}_2,1) \rightarrow K(\mathbb{Z}_8,1) \rightarrow K(\mathbb{Z}_4,1).$
-
-The question asks to compare with Example 1.6 in the text (a similar computation), but when I try to write out the first few pages, I can't see anything close to a nice pattern. Is there something I'm missing?
-
-REPLY [5 votes]: It would be nice to see what you've got for the $E_2$-page. Firstly, note that $K(\mathbb{Z}/m,1)$ is just an infinite dimensional lens space. We know (pp. 146 of Hatcher - Algebraic Topology) that these have homology
-$$H_i(K(\mathbb{Z}/m,1)) = \begin{cases}
-\mathbb{Z} &\text{ for } i=0 \\
-\mathbb{Z}/m &\text{ for } i \text{ odd} \\
-0 &\text{ for } i \text{ even, } i\neq 0. \\
-\end{cases}
-$$
-Thus we can calculate $E_2^{p,q} = H_p(K(\mathbb{Z}/4,1),H_q(K(\mathbb{Z}/2,1)))$. Note that because everything disappears in even dimensions, our $E_2$-term looks similar to the spectral sequence given in Example 1-6, except now we have $\mathbb{Z}/4$'s on the row $q=0$ (and $n \ne 0$).
-Once we have calculated the $E_2$-term we now need to work out the differentials. Again a similar methodology to the example in Hatcher now works. For example there is a $\mathbb{Z}/2$ in the $n=2$ diagonal, which must be killed. There is only one possible way to kill this, and that is for there to be a map $d_2:E_2^{3,0} = \mathbb{Z}/4 \to E_2^{1,1} = \mathbb{Z}/2$. This leaves a $\mathbb{Z}/2$ in the $(3,0)$ position - but this is nice, because then this leaves us with three $\mathbb{Z}/2$'s in the $n=3$ diagonal, which is where we get the $\mathbb{Z}/8$ we need in the homology.
-Proceed from here!<|endoftext|>
-TITLE: Prove that $\sin(2A)+\sin(2B)+\sin(2C)=4\sin(A)\sin(B)\sin(C)$ when $A,B,C$ are angles of a triangle
-QUESTION [21 upvotes]: Prove that $\sin(2A)+\sin(2B)+\sin(2C)=4\sin(A)\sin(B)\sin(C)$ when $A,B,C$ are angles of a triangle
-
-This question came up in a miscellaneous problem set I have been working on to refresh my memory on several topics I did earlier this year. I have tried changing $4\sin(A)\sin(B)\sin(C)$ to $$4\sin(B+C)\sin(A+C)\sin(A+B)$$ by making substitutions by reorganizing $A+B+C=\pi$. I then did the same thing to the other side to get $$-2(\sin(B+C)\cos(B+C)+\sin(A+C)\cos(A+C)+\sin(A+B)\cos(A+B))$$ and then tried using the compound angle formula to see if I got an equality. However the whole thing became one huge mess and I didn't seem to get any closer to the solution. I am pretty sure there is some simpler way of proving the equality, but I can't seem to figure it out.
-Maybe there is a geometric interpretation or maybe it can be done using just algebra and trig. Any hints would be appreciated (I would prefer an algebraic approach, but it would be nice to see some geometric proofs as well)
-
-REPLY [8 votes]: The other two answers are great, but if you're ever not feeling clever enough to come up with them you can always try the sledgehammer method (using Euler's formula and complex exponentials):
-$$4\sin(A)\sin(B)\sin(C)=4\left(\frac{e^{iA}-e^{-iA}}{2i}\right)\left(\frac{e^{iB}-e^{-iB}}{2i}\right)\left(\frac{e^{iC}-e^{-iC}}{2i}\right) \, $$
-Now, multiply out this product and you'll get eight terms. The two where all the signs are the same will be $e^{i\pi}/(-2i)$ and $-e^{-i\pi}/(-2i)$ since $A+B+C=\pi$, so they cancel. The other six will collect into three terms that look like
-$$\frac{-e^{i(A+B-C)}+e^{i(-A-B+C)}}{-2i}=\sin(A+B-C)$$
-(possibly with the variables permuted). But $A+B-C=\pi-2C$, which means $$\sin(A+B-C)=\sin(\pi-2C)=\sin(2C) \, ,$$
-and similarly with the other two terms. So their sum is the left-hand side of your identity.
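-And if all else fails, a numerical spot check builds confidence before hunting for a proof (my own Python sketch, not part of the original answer):
-
-    import math, random
-
-    for _ in range(10 ** 4):
-        # random angles of a triangle: A, B > 0 with C = pi - A - B > 0
-        A = random.uniform(0, math.pi)
-        B = random.uniform(0, math.pi - A)
-        C = math.pi - A - B
-        lhs = math.sin(2 * A) + math.sin(2 * B) + math.sin(2 * C)
-        rhs = 4 * math.sin(A) * math.sin(B) * math.sin(C)
-        assert abs(lhs - rhs) < 1e-9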
<|endoftext|>
-TITLE: The spectra of weighted shifts
-QUESTION [6 votes]: Since weighted shifts are like the model operators in operator theory and people have been studying them for so long, I think there should be quite a large literature on the spectra of such operators. However, after some searching I have hardly found anything that gives a complete picture of what the spectrum of a weighted shift is.
-Most of the papers I found deal with some specific properties of their spectra, but the question I have in mind is what the spectra are, exactly.
-I wonder whether there is some good reference on this.
-Thanks!
-
-REPLY [3 votes]: For a complete description of the spectra of weighted shifts, see either
-1: A.L. Shields, Weighted Shift Operators and Analytic Function Theory, in Topics in Operator Theory, Mathematical Surveys, No. 13 (ed. C. Pearcy), pp. 49-128. American Mathematical Society, Providence, Rhode Island 1974
-Or
-2: http://surface.syr.edu/cgi/viewcontent.cgi?article=1138&context=mat
-Or
-3: https://www.impan.pl/shop/publication/transaction/download/product/90721<|endoftext|>
-TITLE: Simultaneously Diagonalizing Bilinear Forms
-QUESTION [7 votes]: Let $\theta$ and $\psi$ be symmetric bilinear forms on a finite-dimensional real vector space $V$, and assume $\theta$ is positive definite. Show that there exists a basis $\{v_1,\ldots,v_n\}$ for $V$ and $\lambda_1,\ldots,\lambda_n\in\mathbb{R}$ such that
-$$\theta(v_i,v_j)=\delta_{i,j}\quad\text{and}\quad\psi(v_i,v_j)=\delta_{ij}\lambda_i$$
-where $\delta_{ij}$ is the Kronecker delta function.
-
-I think it's enough to choose a basis $\{w_1,\ldots,w_n\}$ for which the matrix representations of $\theta$ and $\psi$ are both diagonal. Then $\left\{\frac{w_1}{\sqrt{\theta(w_1,w_1)}},\ldots,\frac{w_n}{\sqrt{\theta(w_n,w_n)}}\right\}$ is the required basis.
-
-REPLY [5 votes]: Your question essentially reduces to the spectral theorem for symmetric bilinear forms. Use $\theta$, the positive definite form, as an inner product. This makes $(V,\theta)$ a (real) inner product space, and hence the spectral theorem applied to $\psi$ will give you an answer.
-
-For a sketch of the proof of the spectral theorem, what we can do is to look at the set of all vectors $S := \{ v\in V| \theta(v,v) = 1\}$. Note that by positive definiteness every vector $w\in V$ can be written as a multiple of some $s\in S$. In fact, $S$ is a topological sphere and is compact.
-So we can let $e_1$ be a vector in $S$ such that $\psi(e_1,e_1) = \inf_S \psi(s,s)$. Let $S_1 = S \cap \{e_1\}^\perp$ where $\perp$ is defined relative to $\theta$. We can define $e_2$ as a vector in $S_1$ such that $\psi(e_2,e_2) = \inf_{S_1} \psi(s,s)$ and so on. By induction we will have arrived at a collection of vectors which are orthonormal with respect to $\theta$. That they are also $\psi$-orthogonal follows by minimization: if there exists $s\in S_1$ such that $\psi(e_1,s) \neq 0$, we have that for $a^2 + b^2 = 1$
-$$ \psi(a e_1 + b s, a e_1 + b s) = a^2 \psi(e_1,e_1) + b^2 \psi(s,s) + 2ab \psi(e_1,s) = \psi(e_1,e_1) + b^2 \left(\psi(s,s) - \psi(e_1,e_1)\right) + 2ab\, \psi(e_1,s)$$
-By choosing $|b| < 1/2$ sufficiently small such that
-$$ \left|\frac{1}{b}\right| > \left|\frac{\psi(s,s) - \psi(e_1,e_1)}{\psi(e_1,s)}\right| $$
-and with the appropriate sign, we see we can make
-$$ \psi(a e_1 + bs, a e_1 + bs) < \psi(e_1,e_1) $$
-contradicting the minimisation assumption. By induction the same can be said of all $e_i$, and hence they are mutually orthogonal relative to $\psi$.
-
-It is important to note that the assumption that $\theta$ is positive definite is essential. In the proof above we used the fact that for a positive definite form, its corresponding "unit sphere" is a topological sphere and is a compact set in $V$. For an indefinite form or a degenerate form, the corresponding "sphere" would be non-compact (imagine some sort of hyperboloid or cylinder), and hence it can happen that the infimum of a continuous function on the surface is not achieved, breaking the argument.
-In fact, given two symmetric bilinear forms without the assumption that at least one of them is positive definite, it is possible that they cannot be simultaneously diagonalised. An example: let
-$$ \theta = \begin{pmatrix} 1 & 0 \\ 0 & -1\end{pmatrix} \qquad \psi = \begin{pmatrix} 1 & -1 \\ -1 & -1\end{pmatrix}$$
-Suppose you want $(x,y)$ and $(z,w)$ to simultaneously diagonalise the matrices. This requires in particular
-$$ xz = wy \qquad xz - wy - xw - zy = 0 $$
-for the cross terms to vanish. Hence we have
-$$ xw + zy = 0 $$
-Assuming $x \neq 0$ (at least one of $x,y$ is non-zero), we solve by substitution $z = wy / x$, which implies $w(x^2 + y^2) = 0$. Since $x^2 + y^2 \neq 0$ if $(x,y)$ is not the zero vector, this means $w = 0$. But then the equation $xz = wy = 0$ implies that $xz = 0$. By assumption this implies $z = 0$, and hence $(z,w)$ is the zero vector, which contradicts our assumption.
-A similar proof can be used to show that
-$$ \theta = \begin{pmatrix} 0 & 1 \\ 1 & 0\end{pmatrix} \qquad \psi = \begin{pmatrix} 1 & 2 \\ 2 & 0 \end{pmatrix} $$
-also cannot be simultaneously diagonalised.
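-In the positive definite case, this simultaneous diagonalisation is exactly what a generalized symmetric eigensolver computes; here is a small illustration (my own Python/scipy sketch with randomly generated forms, not part of the original answer):
-
-    import numpy as np
-    from scipy.linalg import eigh
-
-    rng = np.random.default_rng(0)
-    X = rng.standard_normal((3, 3))
-    theta = X @ X.T + 3 * np.eye(3)     # positive definite
-    psi = rng.standard_normal((3, 3))
-    psi = psi + psi.T                   # symmetric
-
-    lam, V = eigh(psi, theta)           # solves psi v = lam * theta v
-    print(np.allclose(V.T @ theta @ V, np.eye(3)))    # True
-    print(np.allclose(V.T @ psi @ V, np.diag(lam)))   # True
-
-The columns of V are the basis $\{v_1,\ldots,v_n\}$ of the problem statement: $\theta$-orthonormal and $\psi$-diagonalising.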
<|endoftext|>
-TITLE: A (non-artificial) example of a ring without maximal ideals
-QUESTION [41 upvotes]: As a brief overview of the below, I am asking for:
-
-An example of a ring with no maximal ideals that is not a zero ring.
-A proof (or counterexample) that $R:=C_0(\mathbb{R})/C_c(\mathbb{R})$ is a ring with no maximal ideals.
-
-A homework question in my algebra class earlier this year asked to exhibit a ring (necessarily without identity) without any maximal (proper) ideals. For solutions to this, it suffices to exhibit an abelian group $(G,+)$ without maximal (proper) subgroups. For, given such a group, define multiplication to be constantly zero. In this case, $G$ becomes a zero ring without maximal ideals (because ideals correspond to subgroups).
-It is not particularly difficult to construct examples of the above.
-For example, consider $(\mathbb{Q},+)$. Another interesting example is $P=\{z\in\mathbb{C}\mid \exists n\in\mathbb{N}, z^{p^n}=1\}$ with standard complex number multiplication as "addition" (that is, as the abelian group operation).
-However, any example constructed in this manner is a zero ring, and as such seems "artificial," which I admit, is not a rigorous term. I would like to find a somewhat less artificial example of a ring without maximal ideals. For a definition of "less artificial," let us start with "not a zero ring."
-I have a candidate in mind, but I am having trouble explicitly proving that it has no maximal ideals. Let $C_0(\mathbb{R})$ denote the ring of continuous real-valued functions on $\mathbb{R}$ vanishing at infinity. Let $C_c(\mathbb{R})$ denote the (two-sided) ideal of compactly supported functions. I believe that the ring $R:=C_0(\mathbb{R})/C_c(\mathbb{R})$ contains no maximal ideals, but I am having trouble showing it.
-My intuition for this problem is as follows. Given a function $f\in C_0(\mathbb{R})$, $f(x)$ approaches zero at some "rate" as $x\to\pm\infty$ (possibly different based on $\pm$). Furthermore, for any given rate, we can find a function with a larger rate, in the sense that we can find a $g\in C_0(\mathbb{R})$ such that $f(x)=o(g(x))$ ($f$ is little-$o$ of $g$). Now, even if $f$ is non-vanishing, there is no $h\in C_0(\mathbb{R})$ such that $fh=g$, for any such $h$ could not vanish at infinity. Thus the principal ideal generated by $f$ does not contain $g$. By iterating this process we could construct a strictly ascending chain of principal ideals.
-Now, the idea is that the ring $R$ consists of these "rates" as described above. I know that this is not precise, or necessarily even correct. But it's my intuition. The previous paragraph shows that we can find an ascending chain of "rates," but a lot of work still needs to be done. If anyone can clean this up, it would be much appreciated.
-
-REPLY [16 votes]: After some research I came across this paper, which yields an affirmative answer to my second question. I quote Proposition 2.4 below.
-
-Proposition 2.4 If $X$ is a completely regular Hausdorff space, then every maximal ideal of $C_0(X)$ is fixed. In fact every maximal ideal of $C_0(X)$ is of the form $M_x\cap C_0(X)$, where $M_x$ is a fixed maximal ideal in $C(X)$ and the point $x$ has a compact neighborhood.
-
-The ideals $M_x$ are of the form $M_x=\{f\in C(X)\mid f(x)=0\}$. Now, clearly $\mathbb{R}$ satisfies the conditions of the theorem, so the maximal ideals of $C_0(\mathbb{R})$ are of the form $M_x\cap C_0(\mathbb{R})$ for any $x\in\mathbb{R}$. Furthermore, $C_c(\mathbb{R})$ is not contained in any of these maximal ideals, and so by the correspondence theorem for rings, $R:=C_0(\mathbb{R})/C_c(\mathbb{R})$ has no maximal ideals.<|endoftext|>
-TITLE: Dedekind Cut Proof
-QUESTION [11 upvotes]: I am greatly confused with Dedekind cuts... I am trying to prove that this is a Dedekind cut:
-
-If $D$ and $E$ are Dedekind cuts in $\mathbb{Q}$, then prove that
-$$D*E=(-\infty, 0] \cup \{r_1r_2\mid 0 < r_1 \in D, 0 < r_2 \in E\}$$
-is a Dedekind cut as well.
-
-My three propositions of a Dedekind cut are:
-1.) If $r\in D$ and $s < r$, then $s \in D$.
-2.) There is a number $x \in \mathbb{Q}$ so that $r\leq x$ for all $r \in D$.
-3.) If $r \in D$, then there is a number $s \in D$ so that $r < s$.
-After looking at many sources, my concept of a Dedekind cut is falling short... and so is this proof.
-I would be greatly appreciative of a simple definition and example of a Dedekind cut, and/or help on this proof. Thx!
-
-REPLY [21 votes]: Let's take a closer look at the conditions defining a Dedekind cut. A subset $D$ of $\Bbb Q$ is a Dedekind cut if:
-whenever $r\in D$ and $s<r$, then $s\in D$ (it is closed downwards); there is an $x\in\Bbb Q$ with $r\le x$ for all $r\in D$ (it is bounded above); and whenever $r\in D$, there is an $s\in D$ with $r<s$ (it has no largest element). These are exactly your three propositions, and each of them has to be verified for $D*E$.<|endoftext|>
-TITLE: Question on some arithmetic calculations
-QUESTION [6 votes]: When $6272$ is multiplied by $0.94$ the answer is $5895.68$. When it is divided by $1.06$ the answer is $\approx 5916.9811$. Why is it so?
-Just as a little background, I am using the default Microsoft calculator for this calculation. I haven't pulled out Mathematica yet. Could this be because of some weird variable conversion (int to double)? Any help would be much appreciated.
-
-REPLY [2 votes]: If you take $6272$ and multiply it by $1.06 \times 0.94$ you are, in fact, multiplying by a product of the form:
-$$(1-x)(1+x)=1-x^2$$
-In this case we have $x=0.06$. This is a useful thing to know if you are working with figures like this all the time (e.g. in finance, if prices go up 6% and then go down 6%, they end up a little lower).
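-The point is quickest to see by just multiplying the factors out (a trivial Python check I am adding here; nothing is from the original answer):
-
-    x = 6272
-    print(x * 0.94)          # 5895.68
-    print(x / 1.06)          # 5916.9811..., a different number
-    print(0.94 * 1.06)       # 0.9964: not 1, which is why the two differ
-    print(x * 0.94 * 1.06)   # 6249.42..., not back to 6272
-
-So the calculator is fine; multiplying by $0.94$ is simply not the inverse of multiplying by $1.06$.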
<|endoftext|>
-TITLE: If a line bundle and its dual both have a section (on a projective variety) does this imply that the bundle is trivial?
-QUESTION [19 upvotes]: Is there any reason that, on a projective variety X, if a line bundle L has a (non-zero) section and also its dual has a section then this implies that L is the trivial line bundle?
-
-REPLY [20 votes]: Yes, there is a reason for $L$ to be trivial and here it is:
-Let $0\neq s\in \Gamma(X,L)$ and $0\neq \sigma\in \Gamma(X,L^*)$ be two non-zero sections.
-Then $s\otimes \sigma\in \Gamma(X,L\otimes L^*)=\Gamma(X,\mathcal O)$ is a constant since $X$ is complete: $s\otimes \sigma =c\in k$ (the base field).
-Now, since $s$ and $\sigma$ are non-zero there is a non-empty open subset $U\subset X$ on which both do not vanish and on which $s\otimes \sigma=c$ does not vanish either: in other words $c\neq0\in k$.
-Since $s\otimes \sigma =c\neq 0$, a non-zero constant, vanishes nowhere, we conclude that a fortiori $s$ vanishes nowhere, so that $L$ is trivial, as announced, since $s\in \Gamma(X,L)$.<|endoftext|>
-TITLE: Exhibiting a special subgroup whose involutions are all conjugate to a given involution?
-QUESTION [5 upvotes]: I'm trying to work through a sketch proof attributed to Walter Feit on characterizing $S_5$.
-
-Suppose $G$ is a finite group with exactly two conjugacy classes of involutions, with $u_1$ and $u_2$ being representatives. Suppose $C_1=C(u_1)\simeq \langle u_1\rangle\times S_3$ and $C_2=C(u_2)$ is a dihedral group of order $8$. The eventual result is that $G\simeq S_5$. Also, $C(u)$ denotes the centralizer of $u$ in $G$.
-
-I'm trying to deduce that $|S_2|=0$ or $4$, where $S_i$ is the set of pairs $(x,y)$ with $x$ conjugate to $u_1$, $y$ conjugate to $u_2$, and $(xy)^n=u_i$ for some $n$, and that $C_2$ then has a noncyclic subgroup $V$ such that all involutions in $V$ are conjugate to $u_2$ in $G$. This is part of exercise 10 of page 83, Jacobson's Basic Algebra I. Thanks for any help.
-
-My sparse thoughts: I know a few facts I think are useful (proofs are linked in the numbers to the left hand side):
-1. If $c_i=|C(u_i)|$ and $s_i=|S_i|$, then $|G|=c_1s_2+c_2s_1$.
-2. $C_2$ is a Sylow $2$-subgroup, and I observe from this that one can assume $u_1\in C_2$ by taking a conjugate.
-3. There are $3$ classes of involutions in $C_2$, and if $x$ is an involution distinct from $u_2$, then $x$ is conjugate to $xu_2$ in $C_2$.
-I'm lost on how to use this info on the involutions in $C_2$ to count the size of $S_2$. Identifying $C_2$ with $D_8=\langle r,s\mid r^4=s^2=1,\; sr=r^3s\rangle$ and $r^2$ with $u_2$, I know that the two noncyclic subgroups of order $4$ are $\{1,s,r^2,r^2s\}$ and $\{1,rs,r^2,r^3s\}$. I consider the intersection $C:=C_1\cap C_2$. This is a subgroup with order dividing $8$ and $12$, but since I already know $1,u_1,u_2\in C$, then $|C|=4$. I think this might be the desired subgroup $V$. I'm not sure what action of $C_1$ on $u_2$ to consider, since I don't know if left multiplication or conjugation will send $u_2$ back into $C_2$.
-
-REPLY [3 votes]: As noted in the OP, we may assume $u_1\in C(u_2)$, i.e. $u_1$ and $u_2$ commute.
-The key property here is what is called "Exercise 8" in Jacobson's book: in a dihedral group of even rank, the largest element is central.
-This immediately implies that $x$ and $y$ are both in $C(u_i)=C_i$ whenever $(x,y) \in S_i$. Denote by $I_2$ the set of involutions in $C_2$ other than $u_2$, by $X_2$ the set of involutions in $I_2$ that are conjugate (in $G$) to $u_1$, and by $Y_2$ the set of involutions in $I_2$ that are conjugate (in $G$) to $u_2$. Then we must count the elements in
-$$
-S_2= X_2\times Y_2
-$$
-In $I_2$ there are two $C_2$-conjugacy classes of cardinality $2$. If those two classes merge in $G$, then $X_2=I_2$ and $Y_2=\emptyset$, so $s_2=0$. If those two classes do not merge, then $|X_2|=|Y_2|=2$, so $s_2=2 \times 2=4$.
-We do something similar for $S_1$.
-There is a subgroup $A$, isomorphic to ${\mathfrak S}_3$ (let us denote its elements by $a_{\sigma}, \sigma \in {\mathfrak S}_3$), such that $C_1$ is the disjoint union of $A$ and $u_1A$. Note that we may replace $A$ with $A'=\lbrace u_1^{{\sf signature}(\sigma)}a_{\sigma}| \sigma \in {\mathfrak S}_3 \rbrace$, which is a subgroup sharing exactly the same properties as $A$.
-If $u_2\not\in u_1A$ then $u_2\in u_1A'$, so we may assume without loss of generality that $u_2\in u_1A$: $u_2=u_1a_{\tau}$ for some $\tau$ which must be of order $2$. We can assume $\tau=(1,2)$.
-Denote by $I_1$ the set of involutions in $C_1$. Also, put
-$$
-\Gamma=\lbrace (x,y) \in I_1^2| \exists n, (xy)^n=u_1 \rbrace
-$$
-There are three conjugacy classes in $I_1$:
-$$
-K_1=\lbrace u_1 \rbrace, \ K_2=\lbrace u_1a_{(i,j)} | 1 \leq i \lt j \leq 3 \rbrace, \ K_3=\lbrace a_{(i,j)} | 1 \leq i \lt j \leq 3 \rbrace
-$$
-It is easy to check that
-$$
-\Gamma = (K_2 \times K_3) \cup (K_3 \times K_2)
-$$
-Now $u_1$ and $u_1u_2$ are conjugate in the dihedral group $C_2$. We deduce that $K_3$ merges with $K_1$ in $G$, so $S_1=K_3 \times K_2$ and $s_1=9$.<|endoftext|>
-TITLE: When is the pushout of a monic also monic?
-QUESTION [8 votes]: Let $$\matrix{
-A& \mathop{\longrightarrow}\limits^f &B\\
-\Big\downarrow & & \Big\downarrow\\
-C&\mathop{\longrightarrow}\limits_g &D
-}$$
-be a pushout diagram in a category $\mathcal C$. If $f$ is monic, is $g$ also monic?
-I know that this holds in an abelian category. Is it true for a general category? If so, how to prove it?
-If it fails, could anyone give me a counterexample? And what conditions should we impose on the category $\mathcal C$ to ensure that this is true?
-Thanks!
-
-REPLY [9 votes]: In the category of commutative rings, this pushout diagram means that $D = C \otimes_A B$, and you ask: If $A \to B$ is monic (which is the case iff the underlying map is injective), is the same true for the cobase change $C \to C \otimes_A B$ mapping $c \mapsto c \otimes 1$?
Well this is true when $A \to C$ is flat, but in general it's terribly false: If $C=A/I$ for some ideal $I \subseteq A$, then this is the case iff $IB \cap A = I$. And this is rather rare. Take for instance $A=\mathbb{Z}$, $B=\mathbb{Q}$, and $I$ any non-trivial ideal.
-I don't know if there are any reasonable and non-trivial assumptions on a category which make the statement true. In the category of sets the statement is true (and probably also in every topos). Hence, it is also true in many other concrete categories whose forgetful functor preserves pushouts and monics, for instance the category of topological spaces. In fact, this property appears in the realm of Waldhausen categories. There you require that the class of cofibrations is stable under cobase change. Often one imagines these cofibrations as "nice" monomorphisms.
-The statement is also true in every dual of an algebraic category with the property that epis coincide with surjective homomorphisms, since surjective homomorphisms are stable under base change: If $B \to A$ is surjective, then for every map $C \to A$ it is clear that also $C \times_A B \to C$ is surjective. Is it true in the dual category of the category of commutative rings (epimorphisms of rings are rather complicated, see here), i.e. the category of affine schemes?<|endoftext|>
-TITLE: Ring of analytic functions on the circle
-QUESTION [5 votes]: Let $A = C^\omega(S^1)$ (resp. $C^\omega_{\mathbb C}(S^1)$) be the ring of real-analytic real-valued (resp. complex-valued) functions on the circle.
-These rings have maximal ideals $\mathfrak m_p = \left \{ f \in A \, | \, f(p) = 0\right \}$ (for $p \in S^1$) and ideals $\mathfrak m_{p_1}^{e_1} \mathfrak m_{p_2}^{e_2} \cdots \mathfrak m_{p_n}^{e_n}$ (the ideals of functions having prescribed zeroes).
-What I would like to prove is that there are no other ideals.
-That would give a nice example of Dedekind rings: $C^\omega_{\mathbb C}(S^1)$ would be a PID (because it is not hard to give functions generating the aforementioned ideals) but $C^\omega(S^1)$ would be an example of a Dedekind ring $A$ with $\mathrm{Cl}(A) = \mathbb Z/2$ (essentially because of the intermediate value theorem: only ideals $\mathfrak m_{p_1}^{e_1} \mathfrak m_{p_2}^{e_2} \cdots \mathfrak m_{p_n}^{e_n}$ with $e_1 + \cdots + e_n$ even are principal).
-I feel like such a result, if true, must be classical, but I was unable to find references on those rings (unlike their algebraic counterparts: the trigonometric polynomial rings $\mathbb R[S^1] = \mathbb R[X,Y]/(X^2+Y^2-1) \simeq \mathbb R[\cos \vartheta, \sin \vartheta]$ and $\mathbb C[S^1] = \mathbb C[X,Y]/(X^2+Y^2-1) \simeq \mathbb C[e^{\pm i \theta}]$).
-
-REPLY [5 votes]: Consider an ideal $\mathfrak I$ which is not contained in any $\mathfrak{m}_p$. Thus, for every $p\in S^1$, there is some function $f\in\mathfrak I$ with $f(p)\ne0$. Such an $f$ is nonzero in a neighbourhood of $p$. Since $S^1$ is compact, you can find $f_1$, …, $f_n\in\mathfrak I$ so that the $n$ sets $\{p\in S^1\colon f_k(p)\ne0\}$ cover $S^1$. Thus $f_1^2+\cdots+f_n^2\in\mathfrak I$ has no zeros, so it is invertible in $A$, and $\mathfrak I$ is all of $A$. (In the complex case, use $\bar f_1f_1+\cdots+\bar f_nf_n$ instead.)
-Edited to add: Now consider any proper, nonzero ideal $\mathfrak I$. Let $\{p_1,\ldots,p_n\}$ be the common zeros of $\mathfrak I$ (it is finite because analytic, nonzero functions only have isolated zeros). Let $e_k$ be the minimal order of any zero at $p_k$. Clearly $\mathfrak I\subseteq \mathfrak m_1^{e_1}\cdots\mathfrak m_n^{e_n}$.
-Edit the second: To prove the opposite inclusion in the even case (i.e., $e_1+\cdots+e_n$ even), note that any function with a zero of order $e_k$ at $p_k$ for each $k$, and no other zeros, is a generator of $\mathfrak m_1^{e_1}\cdots\mathfrak m_n^{e_n}$. I only need to find a member of $\mathfrak I$ with this property. First, find some member $f\in\mathfrak I$ with a zero of exactly order $e_k$ at $p_k$ for each $k$, but possibly other zeros as well. (Taking linear combinations of functions $f_k\in\mathfrak I$ with a zero of order $e_k$ at $p_k$ will do for this.) Assume $p_1$, …, $p_n$ are listed in clockwise order around $S^1$, and add indices modulo $n$ where necessary. Use obvious interval notation like $[p_k,p_{k+1}]$ for arcs of $S^1$. $f$ must suffer an even number of sign changes around the circle. Since we consider the even case, that means the arcs $[p_k,p_{k+1}]$ in which $f$ has an odd number of sign changes must themselves be even in number. Multiply $f$ by some function which has a single zero in the interior of each such arc, and no other zeros. For economy of notation, call the result $f$ once more. So now $f$ has an even number of sign changes in each arc $[p_k,p_{k+1}]$. Moreover, the new $f$ has a sign change at an even number of the $p_k$. Pick a function $h\in C^\omega(S^1)$ with a single zero at each of those $p_k$ and no other zeros; negate it if necessary so that $f$ and $h$ have the same sign in a neighbourhood of each $p_k$.
-Next, pick a function $g\in\mathfrak I$ which is positive in every open arc $(p_k,p_{k+1})$. (To do this, start by squaring any member of $\mathfrak I$. It has only a finite number of zeros outside the $p_k$, and we can eliminate those by adding squares of more members of $\mathfrak I$ which don't vanish at those points.) If $M$ is large enough, then $f+Mgh\in\mathfrak I$ will have no zero other than the $p_k$. (Sign considerations guarantee this in a neighbourhood of each $p_k$, and $gh$ is bounded away from zero elsewhere.) This is the function sought, and the proof of the even case is complete (modulo details that I might explain in the comments if asked).
-For the odd case, pick any $f\in \mathfrak m_1^{e_1}\cdots\mathfrak m_n^{e_n}$. Since it must have an even number of zeros (counting multiplicity), it must have a zero $q\notin\{p_1,\ldots,p_n\}$, or it has a zero of order $>e_k$ at some $p_k$ (in which case we put $q=p_k$). Let $\mathfrak I'=\{f\in\mathfrak I\colon f(q)=0\}$ (with the obvious modification if $q=p_k$, that $f$ have a zero of order $>e_k$ at $p_k$).<|endoftext|>
-TITLE: Stone duality for ideals and filters (exercise)
-QUESTION [7 votes]: In A Course in Universal Algebra (Burris, Sankappanavar), exercise 4.4.7-8, p. 158, says:
-
-Let $A$ be a Boolean algebra. Denote $A^\ast:=\{\text{ultrafilters of }A\}$, and give $A^\ast$ the topology, defined by the basis of open sets $\{N_a; a\!\in\!A\}$, where $N_a\!:=\!\{U\!\in\!A^\ast; a\!\in\!U\}$.
-(a) The map $(\{\text{ideals of }A\},\subseteq)\!\rightarrow\!(\{\text{open subsets of }A^\ast\},\subseteq),\, I\!\mapsto\!I^\ast\!:= \bigcup_{a\in I}\!N_a$ is a lattice isomorphism, with $a\!\in\!I \Leftrightarrow N_a\!\subseteq\!I^\ast$.
-(b) The map $(\{\text{filters of }A\},\subseteq)\!\rightarrow\!(\{\text{closed subsets of }A^\ast\},\subseteq),\, F\!\mapsto\!F^\ast\!:= \bigcap_{a\in F}\!N_a$ is a lattice isomorphism, with $a\!\in\!F \Leftrightarrow N_a\!\supseteq\!F^\ast$.
-
-For any $S\!\subseteq\!A$, let $\mathfrak{I}(S)$ denote the ideal generated by $S$, and $\mathfrak{F}(S)$ the filter generated by $S$.
Then $$\bigcup_{a\in S}\!N_a=\!\bigcup_{a\in \mathfrak{I}(S)}\!N_a~~~\text{ and }~~~\bigcap_{a\in S}\!N_a=\!\bigcap_{a\in \mathfrak{F}(S)}\!N_a.$$
-In $(\{\text{ideals of }A\},\subseteq)$, the supremum is described as $I\!\vee\!I'\!=\!\{x\!\in\!A; \exists a\!\in\!I\,\exists a'\!\in\!I'\!: x\!\leq\!a\!\vee\!a'\}$, and in $(\{\text{filters of }A\},\subseteq)$, the supremum is described as $F\!\vee\!F'\!=\!\{x\!\in\!A; \exists a\!\in\!F\,\exists a'\!\in\!F'\!: x\!\geq\!a\!\wedge\!a'\}$. Moreover, $N_a\!\cup\!N_b\!=\!N_{a\vee b}$; $N_{a}\!\cap\!N_{b}\!=\!N_{a\wedge b}$; $(N_a)^c=\!N_{a^c}$. In Boolean algebras, an ideal $I$ of $A$ is maximal (i.e. maximal w.r.t. $\subseteq$ among all ideals $I'$ with $1\!\notin\!I'$) iff it is prime (i.e. $1\!\notin\!I$ and $\forall x,y\!\in\!A\!: x\!\wedge\!y\!\in\!I \Leftrightarrow (x\text{ or }y\!\in\!I)$). In Boolean algebras, a filter $F$ of $A$ is maximal (or an ultrafilter, i.e. maximal w.r.t. $\subseteq$ among all filters $F'$ with $0\!\notin\!F'$) iff it is prime (i.e. $0\!\notin\!F$ and $\forall x,y\!\in\!A\!: x\!\vee\!y\!\in\!F \Leftrightarrow (x\text{ or }y\!\in\!F)$). (Stone) If $I$ is an ideal of $A$ and $a\!\in\!A\!\setminus\!I$, then there is a maximal ideal $M$ with $I\!\subseteq\!M\!\subseteq\!A\!\setminus\!\{a\}$. (Stone) If $F$ is a filter of $A$ and $a\!\in\!A\!\setminus\!F$, then there is an ultrafilter $U$ with $F\!\subseteq\!U \!\subseteq\! A\!\setminus\!\{a\}$.
-Questions: Here are the things that I didn't yet manage to prove and am having problems with.
-(1) We have $F^\ast\cap F'^\ast=(F\!\cap\!F')^\ast$ iff $(\bigcap_{a\in F}\!N_a)\cap(\bigcap_{a'\in F'}\!N_{a'}) = \bigcap_{x\in F\cup F'}\!N_x = \bigcap_{y\in F\cap F'}\!N_y$ iff for each ultrafilter $U$, we have $F\!\cap\!F'\!\subseteq U \Rightarrow F\!\cup\!F'\!\subseteq U$, but I don't see why this would be true.
-(2) Proving $F^\ast\!\cup F'^\ast=(F \vee\!F')^\ast$ boils down to showing that the following inclusion holds: $\{U\!\in\!A^\ast; \forall a\!\in\!F\,\forall a'\!\in\!F'\!: a\!\vee\!a'\!\in\!U\}\subseteq\{U\!\in\!A^\ast; \forall b\!\in\!F\,\forall b'\!\in\!F'\, \forall x\!\geq\!b\!\wedge\!b'\!: x\!\in\!U\}$. Now for $b,b'$, we have $b\!\vee\!b'\!\in\!U$, and from primality of $U$, we have w.l.o.g. $b\!\in\!U$. But how do we show $b\!\wedge\!b'\!\in\!U$?
-(3) Injectivity: We have $I^\ast\!=\!I'^\ast$ iff $\forall U\!\in\!A^\ast\!: (\exists a\!\in\!I\!: a\!\in\!U)\Leftrightarrow(\exists a'\!\in\!I'\!: a'\!\in\!U)$ iff $\forall U\!\in\!A^\ast\!: U\!\cap\!I\!=\!\emptyset \Leftrightarrow U\!\cap\!I'\!=\!\emptyset$. I've proved the injectivity of $F^\ast\!=\!F'^\ast$ by using Stone's theorem above, but for $I^\ast\!=\!I'^\ast$, I must produce an ultrafilter by using ideals, so I'm not sure what to do.
-(4) We have $a\!\in\!I\Leftarrow N_a\!\subseteq\!I^\ast$ iff $\{U\!\in\!A^\ast;a\!\in\!U\}\!\subseteq\!\{U\!\in\!A^\ast\!; I\!\cap\!U\!\neq\!\emptyset\}\Rightarrow a\!\in\!I$. I don't know where to go from here. I've proved $a\!\in\!F\Leftarrow N_a\!\supseteq\!F^\ast$, by using Stone's theorem above, but here, we must find an ultrafilter by using ideals, so I'm out of good ideas.
-
-REPLY [2 votes]: You have $F^*=\bigcap\{N_a:a\in F\}=\{U\in A^*:F\subseteq U\}$. This clearly means that if $F_0\subseteq F_1$, then $F_0^*\supseteq F_1^*$: the bigger the filter $F$, the more sets $N_a$ you're intersecting to form $F^*$, so the smaller $F^*$ must be. In fact, if $F$ is an ultrafilter, then $F^*=\{U\in A^*:F\subseteq U\}=\{F\}$: the only ultrafilter that contains $F$ is $F$ itself.
Thus, if $\mathscr{F}$ is the set of filters on $A$, and $\mathscr{C}$ is the family of closed subsets of $A^*$, the map $\mathscr{F}\to\mathscr{C}:F\mapsto F^*$ is order-reversing. In particular, you can't hope to prove that $\langle\mathscr{F},\subseteq\rangle$ and $\langle\mathscr{C},\subseteq\rangle$ are isomorphic lattices: if the map is a lattice isomorphism, it must be an isomorphism between $\langle\mathscr{F},\subseteq\rangle$ and $\langle\mathscr{C},\supseteq\rangle$.
-In particular, this means that you should be trying to prove that if $F_0,F_1\in\mathscr{F}$, then $$(F_0\cap F_1)^*={F_0}^*\cup{F_1}^*$$ and $$(F_0\lor F_1)^*={F_0}^*\cap{F_1}^*\;.$$ This should dispose of most of your difficulties with (1) and (2).
-For (3) and (4), note that (maximal) ideals and (ultra)filters are complementary to each other: a set $S\subseteq A$ is a (maximal) ideal iff $\{\lnot a:a\in S\}$ is an (ultra)filter. Thus, by taking complements you can work with filters or with ideals, as you choose.
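-For what it's worth, the basic identities on the sets $N_a$ quoted in the question can be confirmed mechanically in a finite toy case (my own Python sketch, not part of the original answer; for a finite Boolean algebra the ultrafilters are principal, one per atom, so $A^*$ may be identified with the set of atoms and $N_a$ with $a$ itself):
-
-    from itertools import combinations
-
-    X = (0, 1, 2)
-    A = [frozenset(c) for r in range(4) for c in combinations(X, r)]
-    # identify the ultrafilter U_x with the atom x; then N[a] = {x : a in U_x}
-    N = {a: frozenset(x for x in X if x in a) for a in A}
-    for a in A:
-        for b in A:
-            assert N[a | b] == N[a] | N[b]                     # N_{a v b}
-            assert N[a & b] == N[a] & N[b]                     # N_{a ^ b}
-            assert N[frozenset(X) - a] == frozenset(X) - N[a]  # (N_a)^c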
-I hope that this is useful for you.<|endoftext|>
-TITLE: How to find the closed form formula for this recurrence relation
-QUESTION [8 upvotes]: $ x_{0} = 5 $
-$ x_{n} = 2x_{n-1} + 9(5^{n-1})$
-I have computed: $x_{0} = 5, x_{1} = 19, x_{2} = 83, x_{3} = 391, x_{4} = 1907$, but cannot see any pattern for the general $n^{th}$ term.

-REPLY [4 votes]: Note that Ayman’s technique of ‘unwinding’ the recurrence works even without the preliminary division by $2^n$:
-$$\begin{align*}
-x_n&=2x_{n-1}+9\cdot5^{n-1}\\
-&=2\left(2x_{n-2}+9\cdot5^{n-2}\right)+9\cdot5^{n-1}\\
-&=2^2x_{n-2}+2\cdot9\cdot5^{n-2}+9\cdot5^{n-1}\\
-&=2^2\left(2x_{n-3}+9\cdot5^{n-3}\right)+2\cdot9\cdot5^{n-2}+9\cdot5^{n-1}\\
-&=2^3x_{n-3}+2^2\cdot9\cdot5^{n-3}+2\cdot9\cdot5^{n-2}+9\cdot5^{n-1}\\
-&\qquad\qquad\qquad\vdots\\
-&=2^kx_{n-k}+2^{k-1}\cdot9\cdot5^{n-k}+2^{k-2}\cdot9\cdot5^{n-k+1}+\ldots+9\cdot5^{n-1}\\
-&=2^kx_{n-k}+9\sum_{i=0}^{k-1}2^i5^{n-1-i}\\
-&\qquad\qquad\qquad\vdots\\
-&=2^nx_0+9\sum_{i=0}^{n-1}2^i5^{n-1-i}\\
-&=5\cdot2^n+9\sum_{i=0}^{n-1}\left(\frac25\right)^i5^{n-1}\\
-&=5\cdot2^n+9\cdot5^{n-1}\left(\frac{1-(2/5)^n}{1-2/5}\right)\\
-&=5\cdot2^n+3\cdot5^n\left(1-\frac{2^n}{5^n}\right)\\
-&=5\cdot2^n+3\cdot5^n-3\cdot2^n\\
-&=2^{n+1}+3\cdot5^n\;.
-\end{align*}$$<|endoftext|>
-TITLE: Riemann's theorem on removable singularities
-QUESTION [8 upvotes]: Theorem
-Let $\Omega\subseteq \mathbb{C}$ open, $ a\in\Omega,\ f\in H(\Omega\backslash \{a\})$ and there is $r>0$ with
-$f$ is bounded on $C(a,r)\backslash \{a\}$ ($C(a,r)$ is the disc with centre $a$ and radius $r$),
-then $a$ is a removable singularity.
-Proof
-Let $h:\Omega\rightarrow \mathbb{C}$ be defined as:
-$$
-h(z)=\left\{\begin{array}{l}
-0,\ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ z=a \\
-(z-a)^{2}f(z), \ z\in\Omega\backslash \{a\}
-\end{array}\right.
-$$
-Then we have:
-$$
-\lim_{z\rightarrow a}\frac{h(z)-h(a)}{z-a}=\lim_{z\rightarrow a}(z-a)f(z)=0\Rightarrow h'(a)=0.
-$$
-$\color{red}{\text{ Why is} \lim\limits_{z\rightarrow a}(z-a)f(z)=0?}$
-So we have $h\in H(\Omega)$ and therefore $(h(a)=h'(a)=0)$.
-$$
-h(z)=\sum_{n=2}^{\infty}c_{n}(z-a)^{n}\ (z\in K(a,\ r)).
-$$
-Letting $f(a):=c_{2}$, it follows:
-$\color{red}{\text{ Why do we have} f(a):=c_{2}?}$
-$$
-f(z)=\sum_{n=0}^{\infty}c_{n+2}(z-a)^{n}\ (z\in K(a,\ r)),
-$$
-so $f\in H(\Omega).\ \square $

-REPLY [5 votes]: The first red line follows because $|(z-a)f(z)|\le M|(z-a)|$ near $a$, where $M$ is the bound for $f(z)$. This clearly goes to zero as $z$ goes to $a$.
-The coefficient $c_2$ is $h''(z)/2!$ evaluated at $a$. We see $h'(z)=(z-a)^2f'(z)+2(z-a)f(z)$ and $h''(z)=(z-a)^2f''(z)+2(z-a)f'(z)+2(z-a)f'(z)+2f(z)$. Plugging in $a$ and dividing by 2 gives the value for $c_2$ above.

-REPLY [3 votes]: Since $f(z)$ is bounded in $C(a,r)$ and $\lim_{z\to a}(z-a)=0$ you have $\lim_{z\to a}(z-a)f(z)=0$.
-You want the function to be continuous at $z=a$, so you need $\lim_{z\to a}f(z)$ to exist. Observe that $h'(z)=2(z-a)f(z)+(z-a)^2f'(z)$ for all $z\neq a$. Hence:
-$$h''(z)=2f(z)+2(z-a)f'(z)+(z-a)^2f''(z)$$
-Letting $z\to a$, you have $\lim_{z\to a}f(z)=\frac{h''(a)}{2}=c_2$<|endoftext|>
-TITLE: Operators from $\ell^\infty$ into $c_0$
-QUESTION [7 upvotes]: I have the following question related to $\ell^\infty(\mathbb{N}).$ How can I construct a bounded, linear operator from $\ell^\infty(\mathbb{N})$ into $c_0(\mathbb{N})$ which is non-compact?
-It is clear that $\ell^\infty$ is a Grothendieck space with Dunford-Pettis property, hence any operator from $\ell^\infty$ into a separable Banach space must be strictly singular. But I do not know of any such example which is non-compact.

-REPLY [6 votes]: A bounded operator $T:\ell_\infty\rightarrow c_0$ has the form $Tx=(x_n^*(x))$ for some weak$^*$ null sequence $(x_n^*)$ in $\ell_\infty^*$. A set $K\subset c_0$ is relatively compact if and only if there is an $x\in c_0$ such that $|k_n|\le |x_n|$ for all $k\in K$ and all $n\ge1$. From these two facts, it follows that $T(B({\ell_\infty}))$ is relatively compact if and only if the representing sequence $(x_n^*)$ is norm-null.
-So, you need only find a sequence in $\ell_\infty^*$ that is weak$^*$ null, but not norm null.
-Such a sequence exists in $\ell_\infty^*$ since: 1) weak$^*$ convergent sequences in $\ell_\infty^*$ are weakly convergent ($\ell_\infty$ is a Grothendieck space), and 2) $\ell_\infty^*$ does not have the Schur property (weakly convergent sequences are norm convergent).
-(There may be a less roundabout way of showing the result of the preceding paragraph.)<|endoftext|>
-TITLE: If $f(x)=g(x)$ for all $x:A$, why is it not true that $\lambda x{.}f(x)=\lambda x{.}g(x)$?
-QUESTION [13 upvotes]: There's something about lambda calculus that keeps me puzzled. Suppose we have $x:A\vdash f(x):P(x)$ and $x:A\vdash g(x):P(x)$ for some dependent type $P$ over a type $A$. Then it is not necessarily true that $\lambda x{.}f(x)=\lambda x{.}g(x)$.
-I've tried to understand this phenomenon and then it goes something like this: actually, the argument you pass to $\lambda$ is not just $x{.}f(x)$, but it is the entailment $x:A\vdash f(x):P(x)$. Thus different reasons why $f(x):P(x)$ for each $x:A$ give different functions. But I've never seen things like this written down somewhere, so I feel quite uncertain about it.
-I'd like to know why $\lambda x{.}f(x)$ is not $\lambda x{.}g(x)$ even if $f(x)=g(x)$. From a type theoretical point of view, but maybe also from a models point of view. If there are models that can easily explain what kind of behavior we can expect from $\lambda$, that would be very helpful.

-Edit. I should add that the bit of lambda calculus I have learned is from doing dependent type theory (and more specifically, Martin-Lof's intensional type theory) in Coq. There, the dependent product $$\prod(x:A),\ P(x)$$ is defined for a dependent type $P:A\to\mathsf{Type}$ over a type $A$ by introducing elements $\lambda x{.}f(x)$ for each $x:A\vdash f(x):P(x)$, with for each $f:\prod(x:A),\ P(x)$ and each $a:A$, a term $\mathsf{apply}(f,a):P(a)$. These are then required to satisfy the $\beta$- and $\eta$-conversion rules:
-$$
-\begin{split}
-\beta.\qquad\qquad & \mathsf{apply}(\lambda x{.}f(x), a)=f(a)\\
-\eta.\qquad\qquad & \lambda x{.}\mathsf{apply}(f,x)=f,
-\end{split}
-$$
-but not rule $\xi$, which is the rule saying that $\lambda x{.}f(x)=\lambda x.g(x)$ whenever $f(x)=g(x)$ for all $x:A$. But I don't know about anything that would break rule $\xi$, hence my question.

-Here's a concrete example where I face this problem.
-An example of such $f$ and $g$ where this problem bugged me the most comes from proving that the axiom of choice is an equivalence. Suppose that $A$ is a type, that $P$ is a dependent type over $A$ and that $R$ is a dependent type over $P$, i.e. $R:\prod(x:A),\ (P(x)\to\mathsf{Type})$.
Then the type theoretical axiom of choice is the function
-$$
-\mathsf{ac}:\prod(x:A)\sum (u:P(x)),\ R(x,u)\to\sum\Big(s:\prod(x:A),\ P(x)\Big)\prod(x:A),\ R(x,s(x))
-$$
-given by
-$$
-\lambda f.\langle \lambda x.\mathsf{proj_1}f(x),\lambda x.\mathsf{proj_2}f(x)\rangle
-$$
-A candidate inverse would be given by
-$$
-\mathsf{ac{-}inv}:=\lambda w.\lambda x.\langle\mathsf{proj_1}w(x),\mathsf{proj_2}w(x)\rangle.
-$$
-To prove that this is a right inverse you need the $\eta$-conversion at some point, but to prove that it is a left inverse, the identity
-$$
-\lambda x.\langle\mathsf{proj_1}f(x),\mathsf{proj_2}f(x)\rangle=\lambda x.f(x)
-$$
-is needed at some point. While it is fairly obvious that $\langle\mathsf{proj_1}f(x),\mathsf{proj_2}f(x)\rangle=f(x)$ for each $x:A$, this is not a step you can make in Coq, because there's no rule $\xi$ there (as far as I know). Even $\eta$ you have to introduce manually and that is only possible using the identity types of Martin-Lof. Mike Shulman has constructed a way around this issue: there is a proof that the map $\mathsf{ac}$ is indeed an equivalence. I understand this proof, but not the behavior of $\lambda$. What is it about the behavior of $\lambda$ without rule $\xi$ that prohibits me from making this step?

-REPLY [4 votes]: I’m pretty sure that the $\xi$ rule is implemented in Coq. The doc says somewhere

-Let us write E[Γ] ⊢ t ▷ u for the contextual closure of the relation t reduces to u in the environment E and context Γ with one of the previous reduction β, ι, δ or ζ.

-The $\xi$ rule is usually not even mentioned because this is a consequence of the contextual closure of definitional equality which is usually assumed.
-In your concrete example, the problem does not come from $\xi$ but from the "fairly obvious" fact that $\langle\mathsf{proj_1}f(x),\mathsf{proj_2}f(x)\rangle=f(x)$. This is called $\eta$-equality for dependent sums, but Coq does not satisfy this definitionally (not even Coq 8.4 which has only $\eta$-equality for functions).
-So yes, in Coq you have to prove $\eta$-equality for dependent sums using identity types, but this has nothing to do with $\xi$.
-In Agda there is $\eta$-equality for dependent sums (and more generally for records), so these two functions are probably definitionally inverse to each other in Agda.<|endoftext|>
-TITLE: If $M$ and $N$ are graded modules, what is the graded structure on $\operatorname{Hom}(M,N)$?
-QUESTION [6 upvotes]: Let $A$ be a graded ring. Note that the grading of $A$ may not be $\mathbb{N}$, for example, the grading of $A$ could be $\mathbb{Z}^n$. Actually, my question comes from Tamafumi's paper On Equivariant Vector Bundles On An Almost Homogeneous Variety, Proposition 3.4, and I translate this proposition into modern language:

-Let $A = \mathbb{C}[\sigma^{\vee} \cap \mathbb{Z}^n]$. If $M$ is a finitely generated $\mathbb{Z}^n$-graded $A$-projective module of rank $r$, then there exist $u_1,u_2,\dots,u_r$ in $\mathbb{Z}^n$ such that
-\begin{eqnarray*}
-M \simeq A(-u_1) \oplus A(-u_2)\oplus \dots \oplus A(-u_r)
-\end{eqnarray*}
- as $\mathbb{Z}^n$-graded $A$-module. In particular, $M$ is an $A$-free module.

-(page 71) He says since $\operatorname{Hom}(\widetilde{M},\widetilde{F})$ is a $T$-linearized vector bundle, $\operatorname{Hom}(M,F)$ is a $\mathbb{Z}^n$-graded A-module.

-My Questions: I don't know what this statement means. I know why we need to show that $\operatorname{Hom}(M,F)$ is a $\mathbb{Z}^n$-graded $A$-module, but I don't understand his reason.
What is the "grading" of "$\operatorname{Hom}(M,F)$"? What does "$\operatorname{Hom}(M,F)$" mean? "$\operatorname{Hom}_A(M,F)$", "$\operatorname{Hom}_{\mathbb{Z}}(M,F)$" or what?
- I think it is just a purely algebraic question, so why do we need to use a "vector bundle"? I feel uncomfortable about this question.

-REPLY [8 votes]: If $A$ is a (commutative) $\newcommand\ZZ{\mathbb Z}\ZZ^n$-graded ring and $M$ and $N$ are $\mathbb Z^n$-graded $A$-modules, we can consider the $A$-module $\hom_A(M,N)$ of all $A$-linear maps. For each $g\in\ZZ^n$ we can look at the subset $\hom_A(M,N)_g$ of all $A$-linear maps $f:M\to N$ such that $f(M_h)\subseteq N_{h+g}$ for all $h\in\ZZ^n$: we call the elements of $\hom_A(M,N)_g$ the homogeneous $A$-linear maps of degree $g$.
-It is easy to see that the sum $\hom_A(M,N)_{\text{homog}}=\bigoplus_{g\in\ZZ^n}\hom_A(M,N)_g$ is a direct sum, giving an $A$-submodule of $\hom_A(M,N)$.
-If $M$ is a finitely generated $A$-module, then we have $\hom_A(M,N)_{\text{homog}}=\hom_A(M,N)$. In general, though, $\hom_A(M,N)_{\text{homog}}$ is a proper submodule of $\hom_A(M,N)$.
-In what you are reading, it is most likely that $\hom(M,N)$ denotes $\hom_A(M,N)$.<|endoftext|>
-TITLE: In $\mathbb{R}^n$, locally Lipschitz on compact set implies Lipschitz
-QUESTION [23 upvotes]: I need to prove:

-Let $A$ be open in $\mathbb{R}^m$, $g:A \longrightarrow \mathbb{R}^n$ a locally Lipschitz function and $C$ a compact subset of $A$.
-Show that $g$ is Lipschitz on $C$.

-Can anyone help me?

-REPLY [12 votes]: Maximize the continuous function $f(x,y)=\frac{|g(x)-g(y)|}{|x-y|}$ over the compact set $C\times C\cap\{|x-y| \geq \varepsilon\}$ with a sufficiently small $\varepsilon>0$. The locally Lipschitz condition is used to show that $f$ is bounded in $C\times C\cap\{|x-y|<\varepsilon\}$.<|endoftext|>
-TITLE: How to prove that $\mathrm{Fibonacci}(n) \leq n!$, for $n\geq 0$
-QUESTION [8 upvotes]: I am trying to prove it by induction, but I'm stuck
-$$\mathrm{fib}(0) = 0 < 0! = 1;$$
-$$\mathrm{fib}(1) = 1 = 1! = 1;$$
-Base case n = 2,
-$$\mathrm{fib}(2) = 1 < 2! = 2;$$
-Inductive case: assume that it is true up to $k$.
-Try to prove that $\mathrm{fib}(k+1) \leq(k+1)!$
-$$\mathrm{fib}(k+1) = \mathrm{fib}(k) + \mathrm{fib}(k-1) \qquad(LHS)$$
-$$(k+1)! = (k+1) \times k \times (k-1) \times \cdots \times 1 = (k+1) \times k! \qquad(RHS)$$
-......
-How to prove it?

-REPLY [43 votes]: $$
-F_{k+1} = F_k + F_{k-1} \le k! + (k - 1)! \le k! + k! \le 2 k! \le (k + 1) k!
-$$<|endoftext|>
-TITLE: Combinatorics: Selecting objects arranged in a circle
-QUESTION [15 upvotes]: If $n$ distinct objects are arranged in a circle, I need to prove that the number of ways of selecting three of these $n$ things so that no two of them are next to each other is $\frac{1}{6}n(n-4)(n-5)$.
-Initially I can select $1$ object in $n$ ways. Then its neighbours cannot be selected. So I will have $n-3$ objects to choose $2$ objects from. Again, I can select the second object in $n-3$ ways. The neighbours of this object cannot be selected. However from here onwards, I am unable to extend this argument as the selection of the third object is dependent on the position of the first and the second object. Is there any simpler method to prove the result?

-REPLY [10 votes]: References: The number of ways to choose $d$ non-consecutive
positions on a circle of size $n$ is
-$${n-d+1 \choose d} - {n-d-1 \choose d-2}.$$
-Aryabhata explains the derivation in the answer here:
-Consecutive birthdays probability.
-The first term is the number of ways to choose $d$ non-consecutive
-positions on a line of size $n$, and the second term subtracts off
- those arrangements where positions $1$ and $n$ were both chosen.
-This expression can also be rewritten as
-$${n\over n-d}{n-d\choose d}.$$ This form, called $d_k$ in the following link,
- is used in the solution of
- the menage problem (with $2n$ instead of $n$, and $k$ instead of $d$).<|endoftext|>
-TITLE: $K/E$, $E/F$ are separable field extensions $\implies$ $K/F $ is separable
-QUESTION [12 upvotes]: I wish to prove that if $K/E$, $E/F$ are algebraic separable field extensions then $K/F$ is separable. I tried taking $a\in K$ and said that if $a\in E$ it is clear, otherwise I looked at the minimal polynomials over $E,F$ and called them $f(x),g(x)$ respectively, then I said that $f(x)$ is separable and since there exists $h(x)$ s.t. $f(x)h(x)=g(x)$ it remains to see that $h(x)$ is separable — here I am stuck.
-Can someone please help prove this claim or say how to continue?

-REPLY [7 votes]: This is only problematic in the context of positive characteristic, so assume that all fields are of characteristic $p$.
-Suppose that $K/E$ is separable, and $E/F$ is separable. Let $S$ be the set of all elements of $K$ that are separable over $F$. Then $E\subseteq S$, since $E/F$ is separable.
-Note that $S$ is a subfield of $K$: indeed, if $u,v\in S$ and $v\neq 0$, then $F(u,v)$ is separable over $F$ because it is generated by separable elements, so $u+v$, $u-v$, $uv$, and $u/v$ are all separable over $F$. So $S$ is a field.
-I claim that $K$ is purely inseparable over $S$. Indeed, if $u\in K$, then there exists $n\geq 0$ such that $u^{p^n}$ is separable over $F$, hence there exists $n\geq 0$ such that $u^{p^n}\in S$. Therefore, the minimal polynomial of $u$ over $S$ is a divisor of $x^{p^n} -u^{p^n} = (x-u)^{p^n}$, so $K$ is purely inseparable over $S$.
-But since $E\subseteq S\subseteq K$, and $K$ is separable over $E$, then it is separable over $S$. So $K$ is both purely inseparable and separable over $S$. This can only occur if $S=K$, hence every element of $K$ is separable over $F$. This proves that $K/F$ is separable.
-Added. Implicit above is that an extension is separable if and only if it is generated by separable elements. That is:
-Lemma. Let $K$ be an extension of $F$, and $X$ a subset of $K$ such that $K=F(X)$. If every element of $X$ is separable over $F$, then $K$ is a separable extension of $F$.
-Proof. Let $v\in K$. Then there exist $u_1,\ldots,u_n\in X$ such that $v\in F(u_1,\ldots,u_n)$. Let $f_i(x)\in F[x]$ be the irreducible polynomial of $u_i$ over $F$; by assumption, $f_i(x)$ is separable. Let $E$ be a splitting field over $F(u_1,\ldots,u_n)$ of $f_1(x),\ldots,f_n(x)$. Then $E$ is also a splitting field of $f_1,\ldots,f_n$ over $F$, and since the $f_i$ are separable, $E$ is separable over $F$. Therefore, since $v\in F(u_1,\ldots,u_n)\subseteq E$, it follows that $v$ is separable over $F$. $\Box$

-Added. Despite the OP's acceptance of this answer, it is clear from the comments that he does not actually understand the answer, which is rather frustrating. Equally frustrating is to be told, in drips, what it is the OP does and does not know about separability, in the form of "Just explain why this is true", only to find out the facts that underlie that assertion are also unknown.
-The following is taken from Hungerford's treatment of separability.
-Definition. Let $F$ be a field and $f(x)\in F[x]$ a polynomial.
The polynomial is said to be separable if and only if for every irreducible factor $g(x)$ of $f(x)$, there is a splitting field $K$ of $g(x)$ over $F$ where every root of $g(x)$ is simple. -Definition. Let $K$ be an extension of $F$, and let $u\in K$ be algebraic over $F$. Then $u$ is said to be separable over $F$ if the minimal polynomial of $u$ over $F$ is separable. The extension is said to be separable if every element of $K$ is separable over $F$. -Theorem. Let $K$ be an extension of $F$. The following are equivalent: - -$K$ is algebraic and Galois over $F$. -$K$ is separable over $F$ and $K$ is a splitting field over $F$ of a set $S$ of polynomials in $F[x]$. -$K$ is the splitting field over $F$ of a set $T$ of separable polynomials in $F[x]$. - -Proof. (1)$\implies$(2),(3) Let $u\in K$ and let $f(x)\in F[x]$ be the monic irreducible polynomial of $u$. Let $u=u_1,\ldots,u_r$ be the distinct roots of $f$ in $K$; then $r\leq n=\deg(f)$. If $\tau\in\mathrm{Aut}_F(K)$, then $\tau$ permutes the $u_i$. So the coefficients of the polynomial $g(x) = (x-u_1)(x-u_2)\cdots(x-u_r)$ are fixed by all $\tau\in\mathrm{Aut}_F(K)$, and therefore $g(x)\in F[x]$ (since the extension is Galois, so the fixed field of $\mathrm{Aut}_F(K)$ is $F$). Since $u$ is a root of $g$, then $f(x)|g(x)$. Therefore, $n=\deg(f)\leq \deg(g) = r \leq n$, so $\deg(g)=n$. Thus, $f$ has $n$ distinct roots in $K$, so $u$ is separable over $F$. Now let $\{u_i\}_{i\in I}$ be a basis for $K$ over $F$; for each $i\in I$ let $f_i\in F[x]$ be the monic irreducible of $u_i$. Then $K$ is the splitting field over $F$ of $S=\{f_i\}_{i\in I}$, and each $f_i$ is separable. This establishes (2) and (3). -(2)$\implies$(3) Let $f\in S$, and let $g$ be an irreducible factor of $f$. Since $f$ splits in $K$, then $g$ is the irreducible polynomial of some $u\in K$, where it splits. Since $K$ is separable over $F$, then $u$ is separable, so $g$ is separable. Thus, the elements of $S$ are separable. So $K$ is the splitting field over $F$ of a set of separable polynomials. -(3)$\implies$(1) -Since $K$ is a splitting field over $F$, it is algebraic. If $u\in K-F$, then there exist $v_1,\ldots,v_m\in K$ such that $u\in F(v_1,\ldots,v_m)$, and each $v_i$ is a root of some $f_i\in S$, since $K$ is generated by the roots of elements of $S$. Adding all the other roots of the $f_i$, $u\in F(u_1,\ldots,u_n)$, where $u_1,\ldots,u_n$ are all the roots of $f_1,\ldots,f_m$; that is, $F(u_1,\ldots,u_n)$ is a splitting field over $F$ of the polynomial $f_1\cdots f_m$. -If the implication holds for all finite dimensional extensions, then we would have that $F(u_1,\ldots,u_n)$ is a Galois extension of $F$, and therefore there exist $\tau\in \mathrm{Aut}_F(F(u_1,\ldots,u_n))$ such that $\tau(u)\neq u$. Since $K$ is a splitting field over $F$, it is also a splitting field over $F(u_1,\ldots,u_n)$, and therefore $\tau$ extends to an automorphism of $K$. Thus, there exists $\tau\in\mathrm{Aut}_F(K)$ such that $\tau(u)\neq u$. This would prove that the fixed field of $\mathrm{Aut}_F(K)$ is $F$, so the extension is Galois. Thus, we are reduced to proving the implication when $[K:F]$ is finite. When $[K:F]$ is finite, there is a finite subset of $T$ that will suffice to generate $K$. Moreover, $\mathrm{Aut}_F(K)$ is finite. If $E$ is the fixed field of $\mathrm{Aut}_F(K)$, then by Artin's Theorem $K$ is Galois over $E$ and $\mathrm{Gal}(K/E) = \mathrm{Aut}_F(K)$. Hence, $[K:E]=|\mathrm{Aut}_F(K)|$. 
-Thus, it suffices to show that when $K$ is a finite extension of $F$ and is a splitting field of a finite set of separable polynomials $g_1,\ldots,g_m\in F[x]$, then $[K:F]=|\mathrm{Aut}_F(K)|$. Replacing the set with the set of all irreducible factors of the $g_i$, we may assume that all $g_i$ are irreducible.
-We do induction on $[K:F]=n$. If $n=1$, then the equality is immediate. If $n\gt 1$, then some $g_i$, say $g_1$, has degree greater than $1$; let $u\in K$ be a root of $g_1$. Then $[F(u):F]=\deg(g_1)$, and the number of distinct roots of $g_1$ in $K$ is $\deg(g_1)$, since $g_1$ is separable. Let $H=\mathrm{Aut}_{F(u)}(K)$. Define a map from the set of left cosets of $H$ in $\mathrm{Aut}_F(K)$ to the set of distinct roots of $g_1$ in $K$ by mapping $\sigma H$ to $\sigma(u)$. This is one-to-one, since $\sigma(u)=\rho(u)\implies \sigma^{-1}\rho\in H\implies \sigma H=\rho H$. Therefore, $[\mathrm{Aut}_F(K):H]\leq \deg(g_1)$. If $v\in K$ is any other root of $g_1$, then there is an isomorphism $\tau\colon F(u)\to F(v)$ that fixes $F$ and maps $u$ to $v$, and since $K$ is a splitting field, $\tau$ extends to an automorphism of $K$ over $F$. Therefore, the map from cosets of $H$ to roots of $g_1$ is onto, so $[\mathrm{Aut}_F(K):H]=\deg(g_1)$.
-We now apply induction: $K$ is the splitting field over $F(u)$ of a set of separable polynomials (same one as we started with), and $[K:F(u)] = [K:F]/\deg(g_1)\lt [K:F]$. Therefore, $[K:F(u)]=|\mathrm{Aut}_{F(u)}(K)|=|H|$.
-Hence $|\mathrm{Aut}_F(K)| = [\mathrm{Aut}_{F}(K):H]|H| = \deg(g_1)[K:F(u)]=[F(u):F][K:F(u)] = [K:F]$, and we are done. $\Box$
-Corollary. Let $F$ be a field, and let $f_1,\ldots,f_n\in F[x]$ be nonconstant separable polynomials. Then any splitting field of $f_1,\ldots,f_n$ over $F$ is separable over $F$.<|endoftext|>
-TITLE: Integrable derivative implies absolutely continuous
-QUESTION [10 upvotes]: Consider Lebesgue integrals over the real line. I have the following problem:
-Problem: Suppose $F(x)$ is a continuous function in $[a,b]$, and $F'(x)$ exists everywhere in $(a,b)$ and is integrable. Show $F(x)$ is absolutely continuous.
-The hint in the book suggested showing that $F'(x)\ge 0$ a.e. implies that $F(x)$ is increasing. I did that, but I do not see how it helps with this problem. How can this hint be used to solve the problem?
-(I am aware other proofs exist.)

-REPLY [9 votes]: A proof using Vitali-Caratheodory's theorem can be found in Papa Rudin, Chapter 7, Theorem 7.21.
-But it's unrelated to the hint given by Stein. If you want one of this kind, well, there's one (I spent so much time looking for this!):
-Natanson, Theory of Functions of a Real Variable, volume 1, Chapter IX, section 7, theorem 1. The hint given by Stein is just the two lemmas in that proof!
-The sketch of the proof:
-Let $\phi_n=\min(n,F')$, and $R_n(x)=F(x)-\int_a^x\phi_n(t)dt$, we have $R_n'=F'-\phi_n\ge0$ a.e. and $D^+R_n\ge F'-n>-\infty$, thus (by hint) $R_n(b)\ge R_n(a)$ and $F(b)-F(a)\ge\int_a^b\phi_n$, hence by the dominated convergence theorem, $F(b)-F(a)\ge\int_a^b F'$. Replacing $F$ with $-F$, we'll obtain the result.<|endoftext|>
-TITLE: Dirichlet Series and Average Values of Certain Arithmetic Functions
-QUESTION [5 upvotes]: If an arithmetic function $f(n)$ has Dirichlet series $\zeta(s) \prod_{i,j = 1} \frac{\zeta(a_i s)}{\zeta(b_j s)}$, for which values of $a_{i}$ and $b_{j}$ is the following true?
That
-\begin{align}
-\lim_{x \to \infty} \tfrac{1}{x} \sum_{n \leq x} f(n) = \prod_{i,j = 1} \frac{\zeta(a_i )}{\zeta(b_j )}
-\end{align}
-or, more generally, there is a $\kappa > 0$ such that
-\begin{align}
- \sum_{n \leq x} f(n) = x \prod_{i,j = 1} \frac{\zeta(a_i )}{\zeta(b_j )} + O(x^{1-\kappa}) \sim x \prod_{i,j = 1} \frac{\zeta(a_i )}{\zeta(b_j )}.
-\end{align}
-Does the Wiener-Ikehara Theorem apply here?

-REPLY [4 votes]: The following theorem appears in an introductory chapter in a soon-to-be-published book by Andrew Granville and Kannan Soundararajan. An early version of the book which contains this material can be found on Granville's website. (Note: The statement of the result may appear different, but the proof is in the book which is currently on his website.)

-Theorem: Let $f(n)=1*g(n)$ be a multiplicative function, and suppose that for $0\leq \sigma \leq 1$ the sum $$\sum_{d=1}^\infty \frac{|g(d)|}{d^{\sigma}}=\tilde{G}(\sigma)$$ converges. Then, if we write $\mathcal{P}(f)= \sum_{n=1}^\infty \frac{g(n)}{n},$ we have that $$\left|\sum_{n\leq x} f(n)-x\mathcal{P}(f)\right|\leq x^{\sigma}\tilde{G}(\sigma).$$

-With your definition of $f(n)$, we see that $\mathcal{P}(f)=\prod_{i,j} \frac{\zeta(a_i)}{\zeta(b_j)}$, and letting $$\delta=\min\{a_i-1,\ b_j-1,\ 1\},$$ we see that by setting $\sigma=1-\delta+\frac{1}{\log x}$, we get $$\sum_{n\leq x}f(n)=x\prod_{i,j}\frac{\zeta(a_i)}{\zeta(b_j)}+O\left(x^{1-\delta}\log x\right).$$
-Remark: As an almost immediate corollary, since $\frac{\phi(n)}{n}$ has Dirichlet series $\frac{\zeta(s)}{\zeta(s+1)}$, we get that $$\sum_{n\leq x}\frac{\phi(n)}{n}=\frac{x}{\zeta(2)}+O(\log x).$$
-See also this answer on Math.Stackexchange.<|endoftext|>
-TITLE: Nonexistence of a strongly multiplicative increasing function with $f(2)=3$
-QUESTION [6 upvotes]: Show that there does not exist a strictly increasing function
- $f:\mathbb{N}\rightarrow\mathbb{N}$ satisfying
-$$f(2)=3$$ $$f(mn)=f(m)f(n)\forall m,n\in\mathbb{N}$$

-Progress:
-Assume the function exists. Let $f(3)=k$.
-Since $2^3 < 3^2$,
-$$3^3=f(2)^3=f(2^3)<f(3^2)=f(3)^2=k^2,$$ so $k>5$, and since $3^3 < 2^5$, then
-$$k^3=f(3)^3=f(3^3)<f(2^5)=f(2)^5=3^5.$$<|endoftext|>
-TITLE: Definition of Conditional Probability by Measure Theory
-QUESTION [14 upvotes]: I was reading a book on information theory and entropy by Robert Gray, when I saw the following definition of conditional probability:

-Given a probability space $(\Omega,\mathcal{B}, P)$ and a
- sub-$\sigma$-field $\mathcal{G}$, for any event $H\in\mathcal{B}$ the
- conditional probability $m(H\text{ }|\text{ }\mathcal{G})$ is defined
- as any function, say $g$, which satisfies the two properties:
-(1) $g$ is measurable with respect to $\mathcal{G}$
-(2) $\displaystyle\int_{G}ghdP=m(G\bigcap{}H)$; all $G\in\mathcal{G}$

-I am quite confused with this definition since it is very different from the definition through joint probability of events.
-I understand what measurable function, sub-$\sigma$-field and probability space are, and I'm guessing that the author is trying to define the measure $m$ through the measurable function $g$, but I don't quite understand what the second requirement is saying. Especially, what does that h in $\displaystyle\int_{G}ghdP$ refer to? It just jumped out of nowhere in the book, so I'm suspecting that it may have some conventional meaning?
-I'd appreciate it a lot if someone can help.
-Thank you!!

-REPLY [21 votes]: The starting point for abstract measure theoretic conditional probability is conditional expectation.
Essentially, one uses the identity $P(A)=\mathbb{E}(1_A)$.
-Now let $(\Omega,\mathcal{B},P)$ be a probability space, $f$ a random variable and $\mathcal{G}$ a sub-$\sigma$-algebra of $\mathcal{B}$. The conditional expectation of $f$ with respect to $\mathcal{G}$ is a $\mathcal{G}$-measurable function $\mathbb{E}_\mathcal{G}$ such that for all $G\in\mathcal{G}$ $$\int_G \mathbb{E}_\mathcal{G}~dP=\int_G f~dP.$$ The notion is not very intuitive, but the idea is the following: Since $\mathbb{E}_\mathcal{G}$ is $\mathcal{G}$-measurable, it uses only the information in $\mathcal{G}$. The integral condition says that $\mathbb{E}_\mathcal{G}$ "averages $f$ out" over sets in $\mathcal{G}$.
-Now if we want to calculate the conditional probability of the event $H\in\mathcal{B}$ with respect to the sub-$\sigma$-algebra $\mathcal{G}$, we simply take the conditional expectation of the indicator function $1_H$. Then, a conditional probability of $H$ with respect to $\mathcal{G}$ is a $\mathcal{G}$-measurable function $\mathbb{P}^H_\mathcal{G}$ such that for all $G\in\mathcal{G}$ $$\int_G \mathbb{P}^H_\mathcal{G}~dP=\int_G 1_H~dP.$$ Since $\int_G 1_H~dP=P(H\cap G)$, this can be rewritten as $$\int_G \mathbb{P}^H_\mathcal{G}~dP=P(H\cap G).$$
-This is fairly standard material, so I assume the author simply made some typos. The $h$ is superfluous and the $m$ should be $P$.<|endoftext|>
-TITLE: $\mid\theta-\frac{a}{b}\mid< \frac{1}{b^{1.0000001}}$, question related to the Dirichlet theorem
-QUESTION [6 upvotes]: The question is:
-A certain real number $\theta$ has the following property: There exist infinitely many rational numbers $\frac{a}{b}$ (in reduced form) such that:
-$$\mid\theta-\frac{a}{b}\mid< \frac{1}{b^{1.0000001}}$$
-Prove that $\theta$ is irrational.
-I just don't know how I could somehow relate $b^{1.0000001}$ to $b^2$ or $2b^2$ so that the Dirichlet theorem can be applied. Or are there other ways to approach the problem?
-Thank you in advance for your help!

-REPLY [5 votes]: Hint: Let $\theta=\frac{p}{q}$, where $p$ and $q$ are relatively prime. Look at
-$$\left|\frac{p}{q}-\frac{a}{b}\right|.\tag{$1$}$$
-Bring to the common denominator $bq$. Then if the top is non-zero, it is $\ge 1$, and therefore Expression $(1)$ is $\ge \frac{1}{bq}$.
-But if $b$ is large enough, then $bq < b^{1.0000001}$, so Expression $(1)$ is $> \frac{1}{b^{1.0000001}}$ whenever $\frac{a}{b}\neq\frac{p}{q}$; thus only finitely many fractions $\frac{a}{b}$ could satisfy the required inequality, a contradiction.<|endoftext|>
-TITLE: Is the functional $F(u) = \int_{\Omega} \langle A(x) \nabla u, \nabla u \rangle$ convex?
-QUESTION [5 upvotes]: The functional
-\begin{equation}
-F(u) = \int_{\Omega} \langle A(x) \nabla u, \nabla u \rangle
-\end{equation}
-where $A$ is a symmetric matrix. You can assume $\Omega$ is convenient enough that the expression above makes sense. For example, $C^{1}(\Omega, H^{1}_{0}(\Omega))$ or another space such that the functional above is convex. I don't know if the hypothesis that $A$ is symmetric is necessary. Thank you.

-REPLY [3 votes]: Assuming that $A(x) \geq 0$ (i.e., positive semi-definite) for all $x$, then $F$ is convex.
-The mapping $h \mapsto \langle A(x_0) h, h \rangle$ on $\mathbb{R}^n$ is convex for each $x_0$. The mapping $u \mapsto \nabla u(x_0)$ is linear. The composition of a convex function with a linear function is convex, hence the functional $\phi_{x_0}(u) = \langle A(x_0) \nabla u (x_0), \nabla u (x_0) \rangle$ is convex.
-Convex means that $\phi_{x_0} (\lambda u + (1-\lambda) v) \leq \lambda \phi_{x_0} ( u) + (1-\lambda) \phi_{x_0} ( v)$ for all $\lambda \in [0,1]$.
Integrating both sides over $x_0 \in \Omega$ gives
-$$\int_\Omega \phi_{x_0} (\lambda u + (1-\lambda) v) d x_0 \leq \lambda \int_\Omega \phi_{x_0} ( u) d x_0 + (1-\lambda) \int_\Omega \phi_{x_0} ( v) d x_0,$$
-or equivalently
-$$F(\lambda u + (1-\lambda) v) \leq \lambda F(u) + (1-\lambda) F(v).$$
-Hence $F$ is convex.<|endoftext|>
-TITLE: Is $\mathbb{R}$ a vector space over $\mathbb{C}$?
-QUESTION [13 upvotes]: Here is a problem so beautiful that I had to share it. I found it in Paul Halmos's autobiography. Everyone knows that $\mathbb{C}$ is a vector space over $\mathbb{R}$, but what about the other way around?
-Problem: Prove or disprove: $\mathbb{R}$ can be written as a vector space over $\mathbb{C}$
-Of course, we would like for $\mathbb{R}$ to retain its structure as an additive group.

-REPLY [18 votes]: If you want the additive vector space structure to be that of $\mathbb{R}$, and you want the scalar multiplication, when restricted to $\mathbb{R}$, to agree with multiplication of real numbers, then you cannot.
-That is, suppose you take $\mathbb{R}$ as an abelian group, and you want to specify a "scalar multiplication" on $\mathbb{C}\times\mathbb{R}\to\mathbb{R}$ that makes it into a vector space, and in such a way that if $\alpha\in\mathbb{R}$ is viewed as an element of $\mathbb{C}$, then $\alpha\cdot v = \alpha v$, where the left hand side is the scalar product we are defining, and the right hand side is the usual multiplication of real numbers.
-If such a thing existed, then the vector space structure would be completely determined by the value of $i\cdot 1$: because for every nonzero real number $\alpha$ and every complex number $a+bi$, we would have
-$$(a+bi)\cdot\alpha = a\cdot \alpha +b\cdot(i\cdot \alpha) = a\alpha + b(i\cdot(\alpha\cdot 1)) = a\alpha + b\alpha(i\cdot 1).$$
-But say $i\cdot 1 = r$. Then $(r-i)\cdot 1 = 0$, which contradicts the properties of a vector space, since $r-i\neq 0$ and $1\neq \mathbf{0}$. So there is no such vector space structure.
-But if you are willing to make the scalar multiplication when restricted to $\mathbb{R}\times\mathbb{R}$ to have nothing to do with the usual multiplication of real numbers, then you can indeed do it by transport of structure, as indicated by Chris Eagle.<|endoftext|>
-TITLE: Nicer expression for the following differential operator
-QUESTION [11 upvotes]: I have the following sequence of differential operators:
-$$D_n = \underbrace{t \partial_t t \partial_t \dots t \partial_t}_{\text{$n$ times}}.$$
-Is there any expression involving a sum of "normal" differential operators? That is, a sum of different powers (up to $n$)? I have tried setting up a recurrence relation, but I really have no clue how we would solve that for (unbounded) operators.
-The recursion is not that hard, $D_n = t \partial_t D_{n - 1}$, but if that is of any help...
-In particular I would like to apply this to the function $e^{2 x t - t^2}$.

-REPLY [7 votes]: The Weyl "algebra" $\mathbb{Z}[t, \partial_t]$ acts faithfully on the polynomial ring $\mathbb{Z}[t]$ in the obvious way. The latter is graded by degree, $t$ raises degree by $1$, $\partial_t$ lowers degree by $1$, and $t \partial_t$ preserves degree: in fact
-$$(t \partial_t) t^m = m t^m.$$
-Thus
-$$(t \partial_t)^n t^m = m^n t^m.$$
-A basis for the space of degree-preserving elements of the Weyl algebra is given by $t^k \partial_t^k, k \in \mathbb{Z}_{\ge 0}$, and
-$$(t^k \partial_t^k) t^m = m(m-1)...(m-k+1) t^m = (m)_k t^m$$
-where $(m)_k$ denotes the falling factorial.
Thus in order to find coefficients $a_{n,k}$ such that
-$$(t \partial_t)^n = \sum_k a_{n,k} t^k \partial_t^k$$
-it is necessary and sufficient to find coefficients $a_{n,k}$ such that
-$$m^n = \sum_k a_{n,k} (m)_k$$
-for all $m$. Since the polynomials $(m)_k$ form a basis of the space of polynomials, the coefficients $a_{n,k}$ exist uniquely, and in fact this is one way to define the Stirling numbers of the second kind.
-The combinatorial interpretation is as follows: $m^n$ counts the number of functions $[n] \to [m]$, where $[n] = \{ 1, 2, ... n \}$. The above identity groups these functions together according to the size of their range: there are $\binom{m}{k}$ possible ranges of size $k$, and $k!\,a_{n,k}$ functions $[n] \to [m]$ having range a fixed subset of $[m]$ of size $k$, for a total of $(m)_k\,a_{n,k}$ functions whose range has size $k$. (A surjection onto a fixed $k$-element set determines an equivalence relation on $[n]$ with $k$ equivalence classes by taking preimages, and $a_{n,k}$ counts these equivalence relations.)<|endoftext|>
-TITLE: 8 periodicity: Clifford clock- Bott periodicity - KO-dimension in noncommutative geometries
-QUESTION [8 upvotes]: Periodicity modulo 8 appears in the classification of real Clifford algebras $C\ell_{p,q}(\mathbb{R})$ (usually referred to as the "Clifford Clock"), in real Bott periodicity and in the definition of a real structure of KO-dimension on a spectral triple. The latter concept can be found in Connes-Marcolli book http://alainconnes.org/docs/bookwebfinal.pdf, for instance.
-Spectral triples are a generalization of spin$^c$ manifolds and real spectral triples of spin manifolds. In fact, every (real) spectral triple over a commutative $*$-algebra is a spin manifold, by certain reconstruction theorems proven by Connes and, independently and under other conditions, by A. Rennie and J. Várilly. The KO-dimension $N\in\mathbb{Z}_8$ of a real spectral triple is entirely determined by knowing whether certain operators on a Hilbert space $H$ commute or anticommute. $H$ generalizes the Hilbert space of square-integrable spinors.
-Being alien to K-theory, I suspect that the definition of KO-dim is motivated (as many concepts in noncommutative geometry are) by what happens in the "commutative case" (spin geometry). I want to know where such commutation and anticommutation relations appear in KO-theory. Otherwise put, what is the motivation for the definition of KO-dim, from the point of view of K-theory? Can this periodicity be related to real Bott periodicity or the periodicity of the Clifford clock?

-REPLY [7 votes]: I'm afraid I'm rather late to the party, but let me throw out a few thoughts, in the hope that something will be of use to someone. You probably know everything under 1. and 2., so if you want the punchline, do forgive the tl;dr and just skip ahead to 3.

-To be absolutely clear about the state of the art, Connes's theorem actually tells you the following:

-A unital Frechet pre-$C^\ast$-algebra $A$ is isomorphic to $C^\infty(X)$ for $X$ a compact orientable $p$-manifold if and only if there exists a $\ast$-representation of $A$ on a Hilbert space $H$ and a self-adjoint unbounded operator $D$ on $H$ such that $(A,H,D)$ is a commutative spectral triple of metric dimension $p$.
-In particular, $A$ is isomorphic to $C^\infty(X)$ for $X$ a compact spin$^{\mathbb{C}}$ $p$-manifold if and only if there exist $H$ and $D$ such that $(A,H,D)$ is a commutative spectral triple of metric dimension $p$ and $A^{\prime\prime}$ acts on $H$ with multiplicity $2^{\lfloor p/2\rfloor}$.
- -Once you know that $A \cong C^\infty(X)$, you can then apply the much earlier "baby reconstruction theorem" (for lack of a better phrase) announced by Connes and proved in detail by Gracia-Bondia--Varilly--Figueroa to conclude that: - -In the general case, $(A,H,D) \cong (C^\infty(X),L^2(X,E),D)$ where $E \to X$ is a Hermitian vector bundle and $D$ can be interpreted as an essentially self-adjoint elliptic first-order differential operator on $E$. -In the case where $A^{\prime\prime}$ acts with multiplicity $2^{\lfloor p/2 \rfloor}$, $E \to X$ is in fact a spinor bundle (i.e., irreducible Clifford module bundle) and $D$ is Dirac-type (viz, a perturbation of a spin$^{\mathbb{C}}$ Dirac operator by a symmetric bundle endomorphism of $E$). - -So, whilst you can refine the reconstruction theorem to a characterisation of compact spin$^{\mathbb{C}}$ manifolds with spinor bundle and essentially self-adjoint Dirac-type operator, the general result is really just a statement about compact orientable manifolds. Indeed, one can even refine the reconstruction theorem to a characterisation of compact oriented Riemannian manifolds with self-adjoint Clifford module and essentially self-adjoint Dirac-type operator. -After that detour, let's get down to brass tacks---everthing here is basically taken from Varilly's excellent lecture notes. It is well known in NCG-land that a compact oriented manifold $X$ is spin$^{\mathbb{C}}$ if and only if it admits an irreducible Clifford module (i.e., spinor bundle) $S \to X$, in which case the Picard group of line bundles (up to isomorphism) acts freely and transitively on the spinor bundles by $([L],[S]) \mapsto [L \otimes S]$. -Now, with a little bit of care, if $S \to X$ is a spinor bundle, then you can make the dual bundle $S^\ast \to X$ into a spinor bundle as well, so that $S^\ast \cong L \otimes S$ for some line bundle $S$. It is then a famous (in NCG-land) theorem of Plymen's that $X$ is actually spin if and only if there exists a spinor bundle $S$ with $S^\ast \cong S$ as Clifford modules, in which case $S$ is the spinor bundle for the underlying spin structure. By the Riesz representation theorem (for Hermitian vector bundles) together with a little bit of care, the existence of this isomorphism of Clifford modules is equivalent to the existence of the famed charge conjugation operator $J$, whose commutation or anticommutation with the Dirac operator and chirality element is, ultimately, forced by the algebraic structure of $\mathrm{Cl}(\mathbb{R}^{\dim X})$---see Landsman's excellent but seemingly little-known lecture notes for details. Hence, by Bott periodicity for real Clifford algebras, these relations only depend on $\dim X \bmod 8$, yielding Connes's famous table---for subtleties, including why Connes's table doesn't (explicitly) include all $8$ possibilities for the three signs, see Landsman's notes. -So, what about $KO$-theory? Here's what I can piece together as a relative layperson from the only source that goes into any detail, Gracia-Bondia--Varilly--Figueroa. So, by Section 9.5 of GBVF, there's a nice, concrete (indeed, basically algebraic) one-to-one correspondence between real spectral triples of $KO$-dimension $j \bmod 8$ $(A,H,D,J)$, aka reduced $KR^j$-cycles $(A \otimes A^o,H,D,J)$, and so-called unreduced $KR^j$-cycles $(A \otimes A^o,H,D,J,\rho)$, which are (roughly speaking) real spectral triples of $KO$-dimension $j \bmod 8$ endowed with a compatible action of $\mathrm{Cl}(\mathbb{R}^j)$. 
Under this correspondence (roughly speaking!), the Dirac operator of a compact spin manifold $X$, which can be viewed as living in the $K$-homology $K_0(X)$ of $X$, should correspond to a certain $\mathrm{Cl}(\mathbb{R}^j)$-linear (twisted) Dirac operator on $X$ (see Lawson--Michelson, S II.7), which can be viewed as living in the $KO$-homology $KO_j(X)$ of $X$, where Real Bott periodicity tells you that there are only the eight distinct $KO$-homology groups. So, to cut a long story short, a real spectral triple of $KO$-dimension $j \bmod 8$ is, well, said to have $KO$-dimension $j \bmod 8$ because it lives (morally) in the relevant $j$-th $KO$-homology group. This is probably what you actually wanted, so I hope it makes some sense!<|endoftext|> -TITLE: Is this the free abelian group functor? -QUESTION [5 upvotes]: Let $\mathbb{Z}(.) : \mathbf{Set} \to \mathbf{Ab}$ be the functor that assigns to any set $S$ the set of maps -$\mathbb{Z}(S) := \{ z: S \to \mathbb{Z} \; | \; z(s)=0 \mbox{ for almost all } s \in S \}$ -and to any set map $f: S \to T$ the morphism $\mathbb{Z} f :\mathbb{Z}(T) \to \mathbb{Z}(S)$ of -abelian groups, defined by $\mathbb{Z} f(z):= z \circ f$ for all $z \in \mathbb{Z}(T)$. -This defines a contravariant functor. Right? -Is this what is called the free abelian group functor? (I wonder because of its -contravariance) - -REPLY [8 votes]: What you've defined is not a functor, covariant or contravariant. Let $S$ be an infinite set and $f : S \to 1$ the unique map. Then $\mathbb{Z} f$ does not exist. (You need to require that $f$ is proper, that is, that the preimage of a finite set is finite.) -The free abelian group functor is covariant. You've assigned the right things to objects (more or less) but not to morphisms. The free abelian group functor assigns to a set $S$ the abelian group $\mathbb{Z}[S]$ of formal linear combinations -$$\sum_{s \in S} c_s s, c_s \in \mathbb{Z}$$ -and assigns to a function $f : S \to T$ the homomorphism $\mathbb{Z}[f]$ sending $\sum c_s s$ to $\sum c_s f(s)$. In this setup, the homomorphism you wanted to assign to a function $g : T \to S$ sends $\sum c_s s$ to -$$\sum c_s \sum_{g(t) = s} t$$ -and of course this is not well-defined if $\{ t : g(t) = s \}$ is infinite. This desire to "integrate over" inverse images appears in other contexts (e.g. pullbacks in homology, of which this may be regarded as a toy example (the $H_0$ of discrete spaces)) but I am not well-qualified to discuss them. -This issue is precisely the reason why I get annoyed when people call $\mathbb{Z}^S$ the free abelian group on $S$ when $S$ is finite: the assignment $S \to \mathbb{Z}^S$ ought to be contravariant, not covariant. -Edit: The comparison to homology might be valuable as a way of contextualizing this discussion. If $S$ is a set regarded as a discrete space, $\mathbb{Z}[S]$ is the zeroth homology $H_0(S)$ while $\mathbb{Z}^S$ is the zeroth cohomology $H^0(S)$; in particular, the former is covariant while the latter is contravariant. The fact that for $S$ finite we can identify the two can then be thought of as a very special case of Poincaré duality, which hammers home the point that the finiteness of $S$ is essential here.<|endoftext|> -TITLE: Is there really no way to integrate $e^{-x^2}$? -QUESTION [61 upvotes]: Today in my calculus class, we encountered the function $e^{-x^2}$, and I was told that it was not integrable. -I was very surprised. Is there really no way to find the integral of $e^{-x^2}$? Graphing $e^{-x^2}$, it appears as though it should be. 
-A Wikipedia page on Gaussian Functions states that
-$$\int_{-\infty}^{\infty} e^{-x^2} dx = \sqrt{\pi}$$
-This is from $-\infty$ to $\infty$. If the function can be integrated within these bounds, I'm unsure why it can't be integrated with respect to $(a, b)$.
-Is there really no way to find the integral of $e^{-x^2}$, or are the methods to finding it found in branches higher than second semester calculus?

-REPLY [59 votes]: To build on kee wen's answer and provide more readability, here is an analytic method of obtaining a definite integral for the Gaussian function over the entire real line:
-Let $I=\int_{-\infty}^\infty e^{-x^2} dx$.
-Then,
-$$\begin{align}
-I^2 &= \left(\int_{-\infty}^\infty e^{-x^2} dx\right) \times \left(\int_{-\infty}^{\infty} e^{-y^2}dy\right) \\
-&=\int_{-\infty}^\infty\left(\int_{-\infty}^\infty e^{-(x^2+y^2)} dx\right)dy \\
-\end{align}$$
-Next we change to polar form: $x^2+y^2=r^2$, $dx\,dy=dA=r\,d\theta\,dr$. Therefore
-$$\begin{align}
-I^2 &= \iint e^{-(r^2)}r\,d\theta\,dr \\
-&=\int_0^{2\pi}\left(\int_0^\infty re^{-r^2}dr\right)d\theta \\
-&=2\pi\int_0^\infty re^{-r^2}dr
-\end{align}$$
-Next, let's change variables so that $u=r^2$, $du=2r\,dr$. Therefore,
-$$\begin{align}
-2I^2 &=2\pi\int_{r=0}^\infty 2re^{-r^2}dr \\
-&= 2\pi \int_{u=0}^\infty e^{-u} du \\
-&= 2\pi \left(-e^{-\infty}+e^0\right) \\
-&= 2\pi \left(-0+1\right) \\
-&= 2\pi
-\end{align}$$
-Therefore, $I=\sqrt{\pi}$.
-Just bear in mind that this is simpler than obtaining a definite integral of the Gaussian over some interval (a,b), and we still cannot obtain an antiderivative for the Gaussian expressible in terms of elementary functions.<|endoftext|>
-TITLE: What is the difference between the terms "classical solutions" and "smooth solutions" in the PDE theory?
-QUESTION [6 upvotes]: What is the difference between the terms "classical solutions" and "smooth solutions" in the PDE theory? Especially, the difference for the evolution equations? If a solution is in $C^k(0,T;H^m(\Omega))$, can I call it a smooth solution?

-REPLY [13 votes]: A smooth solution is infinitely differentiable. A classical solution is a solution which is differentiable as many times as needed if you want to plug the function into the PDE (for example, if the PDE contains the term $u_{xxxx}$, then the fourth derivative $u_{xxxx}$ must exist in order for $u$ to be a classical solution).
-In particular, every smooth solution is a solution in the classical sense. But for the unidirectional wave equation $u_x + u_t = 0$, any function of the form $u(x,t)=f(x-t)$ where $f$ is only (say) twice differentiable, is a classical solution which is not smooth.

-REPLY [4 votes]: A classical solution is a function that solves the PDE in the usual sense, i.e. $x'=x, x(0)=1 \implies x(t)=e^t$. You can also have weak solutions, which solve a variant of the equation with integrals; this variant is equivalent to the original equation if the solution you are looking at is a classical solution. You can also have solutions as distributions. Look up weak and distribution solutions of Laplace's equation as an example. A smooth solution is one with infinitely many derivatives.
-A smooth solution is classical, but a classical solution may not be smooth.<|endoftext|>
-TITLE: Do $\omega^\omega=2^{\aleph_0}=\aleph_1$?
-QUESTION [6 upvotes]: As we know, $2^{\aleph_0}$ is a cardinal number, so it is a limit ordinal number.
However, it must not be $2^\omega$, since $2^\omega=\sup\{2^\alpha|\alpha<\omega\}=\omega=\aleph_0<2^{\aleph_0}$, and even not be $\sum_{i = n<\omega}^{0}\omega^i\cdot a_i$ where $\forall i \le n[a_i \in \omega]$. Since $\|\sum_{i = n<\omega}^{0}\omega^i\cdot a_i\| \le \aleph_0$ for all of them.
-Besides, $\sup\{\sum_{i = n<\omega}^{0}\omega^i\cdot a_i|\forall i \le n(a_i \in \omega)\}=\omega^\omega$, and $\|\omega^\omega\|=2^{\aleph_0}$ since every element in there can be written as $\sum_{i = n<\omega}^{0}\omega^i\cdot a_i$ where $\forall i \le n[a_i \in \omega]$ and actually $\aleph_{0}^{\aleph_0}=2^{\aleph_0}$ many.
-Therefore $\omega^\omega$ is the least ordinal number that has cardinality $2^{\aleph_0}$, and all ordinal numbers below it have at most cardinality $\aleph_0$. Hence $\omega^\omega=2^{\aleph_0}=\aleph_1$?

-REPLY [15 votes]: Your notation confuses cardinal and ordinal exponentiation, which are two very different things. If you’re doing cardinal exponentiation, $2^\omega$ is exactly the same thing as $2^{\aleph_0}$, just expressed in a different notation, because $\omega=\aleph_0$. If you’re doing ordinal exponentiation, then as you say, $2^\omega=\omega$.
-But if you’re doing ordinal exponentiation, then $$\omega^\omega=\sup_{n\in\omega}\omega^n=\bigcup_{n\in\omega}\omega^n\;,$$ which is a countable union of countable sets and is therefore still countable; it doesn’t begin to reach $\omega_1$. Similarly, still with ordinal exponentiation, $\omega^{\omega^\omega}$ is countable, $\omega^{\omega^{\omega^\omega}}$ is countable, and so on. The limit of these ordinals, known as $\epsilon_0$, is again countable, being the limit of a countable sequence of countable ordinals, and so is smaller than $\omega_1$. (It’s the smallest ordinal $\epsilon$ such that $\omega^\epsilon=\epsilon$.)
-Now back to cardinal exponentiation: for that operation you have $2^\omega\le\omega^\omega\le(2^\omega)^\omega=2^{\omega\cdot\omega}=2^\omega$, where $\omega\cdot\omega$ in the exponent is cardinal multiplication, and therefore $2^\omega=\omega^\omega$ by the Cantor-Schröder-Bernstein theorem. The statement that this cardinal is equal to $\omega_1$ is known as the continuum hypothesis; it is both consistent with and independent of the other axioms of set theory.<|endoftext|>
-TITLE: Find the remainder when $ 12!^{14!} +1 $ is divided by $13$
-QUESTION [6 upvotes]: Find the remainder when $ 12!^{14!} +1 $ is divided by $13$
-I faced this problem in one of my recent exams. It is reminiscent of Wilson's theorem. So, I was convinced that $12! \equiv -1 \pmod {13} $; after this I did some tests on the exponent and it seems like $12!^{n!} +1\equiv 2\pmod {13}$ for all $n \ge 2$.
-After I came back home I ran some more tests and I noticed that if $p$ is prime then $(p-1)!^{n!} +1\equiv 2\pmod {p}$ for all $n \ge 2$.
-I was wondering if this result is true, and if yes, how to prove it? If not, what is the formal way of solving the original problem?

-REPLY [2 votes]: Hint $\ $ Unifying little Fermat and Wilson: $\rm C\:\!!^{\:\!C}\!\equiv 1\ mod\:\ C\!+\!1\:$ prime. Take your pick for a proof.<|endoftext|>
-TITLE: Condition on function $f:\mathbb{R}\rightarrow \mathbb{R}$ so that $(a,b)\mapsto | f(a) - f(b)|$ generates a metric on $\mathbb{R}$
-QUESTION [13 upvotes]: Can we impose a condition on a function $f:\mathbb{R}\rightarrow \mathbb{R}$ so that
-$(a,b)\mapsto | f(a) - f(b)|$ generates a metric on $\mathbb{R}$?
This question came into my mind when I was working on the problem that $(a,b)\mapsto | e^{a} - e^{b}|$ is a metric on $\mathbb{R}$. I guess this can be done by taking an injective function $f$. But I am not sure whether this will work or not. Certainly, this will help everyone in dealing with such kinds of problems. I need help with this.
-Thank you very much.

-REPLY [13 votes]: Let $f:\Bbb R\to\Bbb R$, and for $x,y\in\Bbb R$ define $d(x,y)=|f(x)-f(y)|$.
-First note that for any function $f:\Bbb R\to\Bbb R$ and $x,y,z\in\Bbb R$ we have $$\begin{align*}
-|f(x)-f(y)|&=\left|\big(f(x)-f(z)\big)+\big(f(z)-f(y)\big)\right|\\
-&\le|f(x)-f(z)|+|f(z)-f(y)|\;,
-\end{align*}$$
-so $d$ always satisfies the triangle inequality. It’s also clear that $d(x,x)=0$ for all $x\in\Bbb R$ and that $d$ is symmetric no matter what $f$ we use. Thus, $d$ is always a pseudometric on $\Bbb R$. Finally, in order for $d$ to separate points, it is necessary and sufficient that $f$ be injective: that ensures that if $x\ne y$, then $f(x)\ne f(y)$ and hence $d(x,y)\ne 0$. The function $f$ need not be nice in any other way.
-For example, you could use the following function:
-$$f(x)=\begin{cases}
-\tan^{-1}x,&\text{if }x\in\Bbb Q\\
-\tan^{-1}(x+1),&\text{if }x\in\Bbb R\setminus\Bbb Q\;.
-\end{cases}$$
-It’s discontinuous at every point, and it’s not surjective, but it is injective, and that’s all that matters.

-REPLY [6 votes]: It is necessary and sufficient that $f$ be injective. If $f$ is injective, then $|f(a)-f(b)|=0$ iff $a=b$, and otherwise we have some $a\neq b$ such that $|f(a)-f(b)|=0$. Clearly $|f(a)-f(b)|=|f(b)-f(a)|$, so it remains to check the triangle inequality. But this follows from just applying the triangle inequality for $|\cdot |$, so $f$ gives you a metric.<|endoftext|>
-TITLE: Eigenvalues of $A+B$
-QUESTION [7 upvotes]: $A,B$ are symmetric matrices, $A$ has eigenvalues in $[a,b]$ and $B$ has eigenvalues in $[c,d]$; then we need to show that the eigenvalues of $A+B$ lie in $[a+c,b+d]$. I am really not getting where to start. What I know: $A,B$ have real eigenvalues, and they are also diagonalizable.

-REPLY [6 votes]: We can use Rayleigh quotients: for a symmetric matrix $M$ and $x\neq 0$, it is defined as $R_M(x):=\frac{\langle Mx,x\rangle}{\lVert x\rVert^2}$. If $\lambda_1\leq\ldots\leq \lambda_n$ are the eigenvalues of $M$, then $$\lambda_1=\min_{x\neq 0}R_M(x)\quad\mbox{and}\quad\lambda_n=\max_{x\neq 0}R_M(x).$$
-To see this, use the fact that $M$ is diagonalizable in an orthonormal basis of eigenvectors (in order to reduce to the case of $M$ diagonal).
-Once you have this result, the minimal eigenvalue of $A+B$ is $\geq\min_{x\neq 0}\left(\frac{\langle Ax,x\rangle}{\lVert x\rVert^2}+\frac{\langle Bx,x\rangle}{\lVert x\rVert^2}\right)\geq a+c$. Use a similar argument for the $\max$.<|endoftext|>
-TITLE: Where does complex exponential come from?
-QUESTION [5 upvotes]: The complex exponential function is defined as: $$e^{ix} = \cos x + i\sin x$$ It shares most of its properties with the real exponential and it allows a lot of trigonometric calculations such as de Moivre's formula: $$(\cos x+i\sin x)^n = \cos{nx}+i\sin{nx}$$
-But where does this definition come from and why does it work?

-REPLY [5 votes]: People have given you some good algebraic answers that show that the equations work out; here's a hand-wavy justification that may make you more comfortable with the idea in general.
-The derivative of $f(x)$ is how $f(x)$ changes as you change $x$ a little.
-What's the derivative of $f(x) = e^{ix}$?
It's $ie^{ix}$, which is $i$ times $f(x)$ itself. So the way in which $f$ changes as you wiggle $x$ by $dx$ is $i \cdot f \cdot dx$.
-So what does it mean to multiply something by $i$? It means rotating it 90 degrees counterclockwise in the complex plane. (Try this yourself with some simple complex numbers if you didn't notice this already.)
-So when $x=0$, and $f(x)=1$, $f'(0) = i \cdot f(0) = i \cdot 1 = i$; $f(0)$ is changing to the north when its value is to the east. The same argument works for other values of $x$; $f(x)$ will be changing at a 90 degree counterclockwise angle from the current value of $f(x)$.
-The equation of motion that satisfies this rule (velocity is always perpendicular to the direction from the origin to the current point) is a circle. $f(x) = e^{ix}$ moves in a counterclockwise circle around the complex plane, and that's exactly what $\cos x + i \sin x$ does too.<|endoftext|>
-TITLE: Local base of a topological vector space
-QUESTION [8 upvotes]: I would like to prove that if $B$ is a local base for a topological vector space $X$, then every member of $B$ contains the closure of some member of $B$.
-I would appreciate it if somebody could guide me through this problem.
-I am still facing problems in understanding various primary concepts.

-REPLY [5 votes]: Since you read Rudin's functional analysis I will refer to its theorems.
-Let $U\in B$ be some neighborhood of zero and an element of the base $B$. Consider the compact set $K=\{0\}$ and the closed set $C=X\setminus U$. By a corollary of Theorem 1.10 there exists a neighborhood of zero $V$ such that $\overline{(K+V)}\cap (C+V)=\varnothing$.
-Assume that $\overline{(K+V)}\cap C\neq\varnothing$; then there exists $c\in C$ such that $c\in \overline{(K+V)}$. Since $C\subset C+V$, then we have $c\in C+V$ such that $c\in \overline{(K+V)}$. This means that $\overline{(K+V)}\cap(C+V)\neq\varnothing$. Contradiction; hence $\overline{(K+V)}\cap C=\varnothing$.
-Since $\overline{(K+V)}\cap C=\varnothing$, then $\overline{(K+V)}\subset X\setminus C=U$. Since $K=\{0\}$, then $\overline{(K+V)}=\overline{V}$, so we get $\overline{V}\subset U$. Since $B$ is a local base at zero, there exists $W\in B$ such that $W\subset V$. As a consequence, $\overline{W}\subset\overline{V}\subset U$. Thus for each $U\in B$ we found $W\in B$ such that $\overline{W}\subset U$.<|endoftext|>
-TITLE: How to prove that the sum and product of two algebraic numbers is algebraic?
-QUESTION [51 upvotes]: Suppose $E/F$ is a field extension and $\alpha, \beta \in E$ are algebraic over $F$. Then it is not too hard to see that when $\alpha$ is nonzero, $1/\alpha$ is also algebraic. If $a_0 + a_1\alpha + \cdots + a_n \alpha^n = 0$, then dividing by $\alpha^{n}$ gives $$a_0\frac{1}{\alpha^n} + a_1\frac{1}{\alpha^{n-1}} + \cdots + a_n = 0.$$

-Is there a similar elementary way to show that $\alpha + \beta$ and $\alpha \beta$ are also algebraic (i.e. finding an explicit formula for a polynomial that has $\alpha + \beta$ or $\alpha\beta$ as its root)?

-The only proof I know for this fact is the one where you show that $F(\alpha, \beta) / F$ is a finite field extension and thus an algebraic extension.

-REPLY [2 votes]: Consider fields $ E \supseteq F $, and elements $ \alpha, \beta \in E $ algebraic over $ F $. We want to show $ \alpha + \beta $, $ \alpha \beta $ are algebraic over $ F $ too. If either one of $ \alpha, \beta $ is $ 0 $, the result is trivial, so let's take both $ \alpha, \beta $ to be non-zero.
-We have $ \alpha ^m + a_{m-1} \alpha ^{m-1} + \ldots + a_0 = 0 $ (each $ a_i \in F $), and $ \beta ^n + b_{n-1} \beta ^{n-1} + \ldots + b_0 = 0 $ (each $ b_j \in F $).
-(The first equation lets us express all powers of $ \alpha $ as $F$-combinations of $ 1, \alpha, \ldots, \alpha ^{m-1} $. Similarly for $ \beta $.)
-Let $$ Z := \, [ \, \alpha ^0 \beta ^0, \alpha ^0 \beta ^1, \ldots, \alpha ^0 \beta ^{n-1} ; \alpha ^1 \beta ^0, \ldots, \alpha ^1 \beta ^{n-1} ; \ldots ; \alpha ^{m-1} \beta ^{0}, \ldots, \alpha ^{m-1} \beta ^{n-1} \, ]^{T} \in E^{mn} $$
-Now notice we can express $ (\alpha + \beta)Z $ as $ M_1 Z $ with $ M_1 \in F^{mn \times mn} $. So $ ( ( \alpha + \beta ) I - M_1 ) Z = 0 $, and as $ Z \neq 0 $ we have $ \det( (\alpha + \beta)I - M_1 ) = 0 $. Hence $ \alpha + \beta $ is a root of the polynomial $ P(t) := \det( tI - M_1 ) \in F[t] $, and is therefore algebraic over $ F $. Similarly we can show $ \alpha \beta $ is algebraic over $ F $ (write $ \alpha \beta Z = M_2 Z $ and proceed as above).
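-To make the construction concrete, here is a small numerical sketch (in Python, assuming numpy is available); for $ \alpha = \sqrt2 $, $ \beta = \sqrt3 $, multiplication by $ \alpha + \beta $ on the basis $ \{\alpha^i \beta^j\} $ is the Kronecker sum of the companion matrices of $ t^2-2 $ and $ t^2-3 $:
-    import numpy as np
-    # M1 = kron(A, I) + kron(I, B) represents multiplication by alpha + beta
-    # on the basis {alpha^i beta^j}.
-    A = np.array([[0, 2], [1, 0]])   # companion matrix of t^2 - 2
-    B = np.array([[0, 3], [1, 0]])   # companion matrix of t^2 - 3
-    M1 = np.kron(A, np.eye(2)) + np.kron(np.eye(2), B)
-    print(np.round(np.poly(M1)))     # ~[1, 0, -10, 0, 1], i.e. t^4 - 10 t^2 + 1
-    print(np.linalg.eigvals(M1))     # ~ +-sqrt(2) +- sqrt(3), as expected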
<|endoftext|>
-TITLE: continuity and $C^2$ solution of a series
-QUESTION [7 upvotes]: For $\alpha$ with $|\alpha|=2$ let $P$ be a homogeneous harmonic polynomial of degree $2$ with $D^\alpha P\ne0$ (e.g. take $P=2x_1x_2$). Choose $\eta\in C^\infty_0(\{x:|x|<2\})$ with $\eta=1$ when $|x|<1$ and $\eta=0$ when $|x|\geq2$, set $t_k=2^k$ and $c_k=\frac1k$ with $\sum c_k$ divergent. Define $f(x)=\sum\limits_0^\infty c_k\Delta(\eta P)(t_kx)$.
-How do you prove that $f$ is continuous but that $\Delta u=f$ does not have a $C^2$ solution in any neighborhood of the origin?
-
-REPLY [8 votes]: Since $P$ is a harmonic polynomial, you have that $\triangle P = 0$. Therefore
-$$ \triangle(\eta P) = \triangle \eta \cdot P + 2\nabla\eta \nabla P $$
-Observe that in particular
-$$ \operatorname{supp}(\triangle(\eta P)) \subset \{ 1\leq |x| \leq 2\}~. $$
-Therefore for every $x \neq 0$, there exists a small neighborhood $V_x$ of $x$ (ball of radius $\frac12 |x|$, say), such that $f(x)$ can be expressed as a finite sum of continuous functions on $V_x$ (that is, all but finitely many of the $\triangle(\eta P)(t_k x)$ are zero on $V_x$). So $f(x)|_{V_x}$ is a continuous function away from 0. For $x = 0$, observe that the definition requires $f(0) = 0$. It is easy to check, since $c_k \searrow 0$ and since $f(x) = c_k \triangle(\eta P)(t_k x)$ for $2^{-k} \leq |x| < 2^{-k+1}$, that $f(x) = O\left( \frac{1}{\lvert\log_2 |x|\rvert}\right)$ and hence is continuous at 0.
-Now suppose $u$ solves $\triangle u = f$. Away from the origin, define the function $v$ to be $\sum \frac{c_k}{t_k^2} (\eta P)(t_k x)$. Since $|x| > 0$, only finitely many terms are non-zero in the sum (up to $k \approx - \log_2 |x|$). An argument similar to the one above shows that for any $x \neq 0$ there is a neighborhood of $x$ on which $v$ must be twice continuously differentiable, being the sum of a finite number of $C^2$ functions there.
-On the other hand, since $\frac{c_k}{t_k^2}$ decreases geometrically, we have that the infinite sum expression defining $v$ is absolutely convergent. And thus $v$ is continuous.
-Furthermore, as away from the origin $v$ is given by a finite sum, we can differentiate term by term in the sum. This shows that away from the origin $\triangle v = f$. Hence any continuous solution of $\triangle u = f$ must be $v$ plus some harmonic (and hence $C^\infty$) function.
-It suffices to show that the Hessian of $v$ is not continuous. It is here we use the condition that $D^2 P$ does not vanish identically. Let us do it for the case of $P = x_1 x_2$. A direct computation of $\partial^2_{x_1x_2} v$ shows that away from the origin, it is given by
-$$ \sum c_k \eta(t_k x) + c_k \partial_1\eta(t_k x) t_kx_1 + c_k \partial_2 \eta(t_k x) t_k x_2 + c_k \partial^2_{12}\eta(t_k x) t_k^2 x_1x_2 $$
-The same argument as before shows that the last three terms are only non-zero when $t_k |x| \approx 1$. This means that their contribution to the sum is only $O(c_k) = O(\frac{1}{- \log_2|x|})$, which decays as $x\to 0$. The main first term, however, is bounded below by
-$$ \sum_{k < - \log_2 |x|} c_k > \frac12 \ln\left( -\log_2 |x|\right)$$
-which diverges as $x\to 0$. In particular, this means that $\partial^2_{1,2}v$ is unbounded as $x\to 0$, showing that the Hessian of $v$ cannot be continuous at the origin.<|endoftext|>
-TITLE: Is it generally accepted that if you throw a dart at a number line you will NEVER hit a rational number?
-QUESTION [70 upvotes]: In the book "Zero: The Biography of a Dangerous Idea", author Charles Seife claims that a dart thrown at the real number line would never hit a rational number. He doesn't say that it's only "unlikely" or that the probability approaches zero or anything like that. He says that it will never happen because the irrationals take up all the space on the number line and the rationals take up no space. This idea almost makes sense to me, but I can't wrap my head around why it should be impossible to get really lucky and hit, say, 0, dead on. Presumably we're talking about a magic super sharp dart that makes contact with the number line in exactly one point. Why couldn't that point be a rational? A point takes up no space, but it almost sounds like he's saying the points don't even exist somehow. Does anybody else buy this? I found one academic paper online which ridiculed the comment, but offered no explanation. Here's the original quote:
-
-"How big are the rational numbers? They take up no space at all. It's a tough concept to swallow, but it's true. Even though there are rational numbers everywhere on the number line, they take up no space at all. If we were to throw a dart at the number line, it would never hit a rational number. Never. And though the rationals are tiny, the irrationals aren't, since we can't make a seating chart and cover them one by one; there will always be uncovered irrationals left over. Kronecker hated the irrationals, but they take up all the space in the number line. The infinity of the rationals is nothing more than a zero."
-
-REPLY [4 votes]: One very useful way to think about probability is in terms of betting. Suppose someone offers you a payoff of 1 dollar if event X happens, and 0 dollars if event X does not happen. What's the largest amount of money that you're willing to pay to play this game? That amount is the probability of X happening. (Probably I need to be a bit more careful, but this is roughly the idea.)
-So what does it mean to say that an event has probability zero? It doesn't mean that it can't happen, it just means that you wouldn't be willing to play that game for 1 cent, or a tenth of a cent, or any actual non-zero amount of money.
-If you want to read more about this way of thinking about probability, you can search for "Dutch book."<|endoftext|>
-TITLE: Finite Sum of Power?
-QUESTION [16 upvotes]: Can someone tell me how to get a closed form for
-$$\sum_{k=1}^n k^p$$
-For $p = 1$, it's just the classic $\frac{n(n+1)}2$.
-What is it for $p > 1$?
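-Edit: for checking candidate formulas at specific $p$, a computer algebra system can generate the closed forms directly. A quick sketch (assuming Python with sympy; the commented lines show the output I would expect for small $p$):
-    from sympy import symbols, summation, factor
-    # Closed forms of sum_{k=1}^n k^p for a few fixed values of p.
-    n, k = symbols('n k', integer=True, positive=True)
-    for p in range(1, 5):
-        print(p, factor(summation(k**p, (k, 1, n))))
-    # 1  n*(n + 1)/2
-    # 2  n*(n + 1)*(2*n + 1)/6
-    # 3  n**2*(n + 1)**2/4
-    # 4  n*(n + 1)*(2*n + 1)*(3*n**2 + 3*n - 1)/30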
-
-REPLY [2 votes]: By the binomial series,
-$(n+1)^x=1 + {x \choose 1}\sum_{k=1}^n{k^{x-1}} + {x \choose 2}\sum_{k=1}^n{k^{x-2}}+{x \choose 3}\sum_{k=1}^n{{k^{x-3}}}
-...+{x \choose x-1}\sum_{k=1}^n{{k^{x-x+1}}}+{x \choose x}\sum_{k=1}^n{{k^{x-x}}}$
-which becomes
-$(n+1)^x = 1 + {x \choose 1}\sum_{k=1}^n{k^{x-1}} + {x \choose 2}\sum_{k=1}^n{k^{x-2}}+{x \choose 3}\sum_{k=1}^n{{k^{x-3}}}
-....+{x \choose x-1}\sum_{k=1}^n{{k}}+{x \choose x}\sum_{k=1}^n{{1}}$
-For example, consider $x=3$:
-$(n+1)^3 = 1 + {3 \choose 1}\sum_{k=1}^n{k^{2}} + {3 \choose 2}\sum_{k=1}^n{k^{1}}+{3 \choose 3}\sum_{k=1}^n{{1}}$
-$(n+1)^3 = 1 + 3\sum_{k=1}^n{k^{2}} + 3\cdot\frac{n(n+1)}{2}+n$
-which gives
-$\sum_{k=1}^n{k^2} =\frac{1}{6}n(n+1)(2n+1)$<|endoftext|>
-TITLE: How to induce a connection on a submanifold?
-QUESTION [8 upvotes]: Suppose an affine connection is given on a smooth manifold $M$ and let $N\subset M$ be an embedded submanifold. Is there a canonical way of defining an induced connection on $N$?
-In classical differential geometry of smooth surfaces in Euclidean 3-space, the corresponding construction is that of the covariant derivative (cf. Do Carmo, Differential geometry of curves and surfaces §4-4). The covariant derivative of a vector field along a curve on the surface is defined as the orthogonal projection of the ordinary Euclidean derivative onto the plane tangent to the surface.
-I wonder how (and if) this can be ported to the language of connections. Wikipedia's entry does something like that by means of the Riemannian structure: I wonder if this extra structure is really necessary.
-
-REPLY [9 votes]: With just an affine structure you will not be able to get an induced connection. (Part of the story is told in Fox's AMS Notices article from March 2012 titled "What is an affine sphere?".)
-Instead, you can consider the following for codimension 1 submanifolds: given an embedding $\tau:N\to M$ and a vector field $v$ on $M$ along $N$ that is transverse to $N$, the pair $(\nabla,v)$ induces a connection on $N$. For vector fields $(X,Y)$ on $N$, we can define
-$$ D^{(v)}_X Y = [\nabla_{\tau_*X}\tau_*Y] $$
-where $[W]$ for $W\in T_pM$, $p\in \tau(N)$ is defined by $\tau_*[W] - W = \lambda v$ for some $\lambda\in\mathbb{R}$. For the higher codimension case you need more (linearly independent) vector fields. In the Riemannian case, $v$ is canonically chosen to be the unit normal vector to $N$ (or in higher codimension, a family that spans the normal bundle).<|endoftext|>
-TITLE: Existence of global solution of Riccati equation
-QUESTION [5 upvotes]: Consider a Riccati differential equation
-$$
- \dot P + A(t)^{T}P + PA(t) -PB(t)R(t)B(t)^{T}P + Q(t) = 0,\;\;\; P(t_0) = P_0 = P_0^{T} \geqslant 0
-$$
-where $Q(t) = Q(t)^{T} \geqslant 0$, $R(t) = R(t)^{T} > 0$, all matrices are real and continuous. How to show that for any $t_0$ there exists a unique solution defined on $(-\infty,t_0]$?
-
-REPLY [11 votes]: The uniqueness of the solution, when $P$ remains continuous, as well as the existence of a local solution, follows from Picard-Lindelöf. Hence it remains to establish that $P$ doesn't blow up in finite time. It is clear from the form of the evolution equation that $P$ will remain symmetric. Hence it suffices to show that $P$ has eigenvalues bounded above and below. A sketch of the idea of the proof:
-
-$P \geq 0$ in $(-\infty,t_0]$. This follows from the sign on $Q$. Let $\tau\in (-\infty,t_0]$ be the smallest time such that $P$ is positive semidefinite on $[\tau,t_0]$.
Necessarily, there exists some vector $v\in \ker P(\tau)$ and some $\epsilon > 0$ such that $v^TP(t)v < 0$ for all $t\in (\tau-\epsilon,\tau)$. However, for $v\in \ker P$, we have that
-$$ v^T\dot{P}v + (Av)^TPv + v^TPAv - v^TPBRB^TPv + v^TQv = v^T\dot{P}v + v^TQv = 0 $$
-Using that $Q$ is positive semidefinite, we see that it is impossible for $v^TP(t)v$ to increase to 0 from somewhere negative.
-$P$ cannot blow up in finite backwards time. Since $PBRB^TP$ is by definition a positive semi-definite form, we have that
-$$ -\dot{P} \leq A^TP + PA + Q $$
-which implies that in backwards time, the growth of $P$ cannot exceed that of the linear equation $-\dot{P} = A^TP + PA + Q$. Since the latter cannot blow up in finite time, the same is true for the original equation.
-
-
-Let me work out the scalar case in more detail (mainly for convenience of notation, since integrating factors are a bit easier to write when the products commute). From this you can see how to write it up for the matrix case.
-The analogous scalar equation is
-$$ \dot{p} + 2ap - rp^2 + q = 0 $$
-where $q,r$ are non-negative functions, and $p$ is given initial data $p(t_0) \geq 0$. Let $\alpha(t) = \int_{t_0}^t 2a(s)\mathrm{d}s$. And write $\tilde{p} = e^{\alpha}p$. We have that
-$$ \dot{\tilde{p}} = e^\alpha\left(\dot{p} + 2a p\right) = e^\alpha (rp^2 - q) $$
-So that
-$$ \dot{\tilde{p}} = e^{-\alpha}r \tilde{p}^2 - e^\alpha q $$
-Now since $e^\alpha$ is positive, we have that $\tilde{r} = e^{-\alpha}r$ is still non-negative, as is $\tilde{q} = e^\alpha q$.
-For the equation
-$$ \dot{\tilde{p}} = \tilde{r} \tilde{p}^2 - \tilde{q} $$
-upper boundedness in backwards time is immediate, since we have that
-$$ \dot{\tilde{p}} \geq -\tilde{q} \implies \tilde{p}(t) - \tilde{p}(t_0) \leq \int_{t}^{t_0} \tilde{q}(s) \mathrm{d}s$$
-For lower boundedness we see that on an interval $[t_1,t_2]$ on which $p$ is non-vanishing, we must have that
-$$ \dot{\tilde{p}} \leq \tilde{r} \tilde{p}^2 \implies \frac{1}{\tilde{p}(t_1)} - \frac{1}{\tilde{p}(t_2)} \leq \int_{t_1}^{t_2} \tilde{r}(s)\mathrm{d}s $$
-hence if $\tilde{p}(t_1) < 0$ at some point, $\tilde{p}$ cannot approach 0 in finite time to the future. The contrapositive implies that since $\tilde{p}(t_0) \geq 0$, for every $\tau < t_0$ we cannot have $\tilde{p}(\tau) < 0$, establishing the lower-bound. This last argument, however, is not so easy to translate to the matrix case. So instead, let me give another one which is easier to use in the matrix case.
-As before, suppose $\tilde{p} = 0$ at some point $\tau < t_0$ and $\tilde{p} < 0$ in $[\tau - \epsilon,\tau)$ (the existence of the solution, and its continuity on this interval, we assume by appealing to the local existence part of Picard-Lindelöf). We can further assume that $0\leq \int_{\tau-\epsilon}^\tau \tilde{r}(\sigma) |\tilde{p}(\sigma)|\mathrm{d}\sigma < \frac12$, by choosing $\epsilon$ sufficiently small. We have the expression
-$$ \dot{\tilde{p}} \leq \tilde{r} \tilde{p}^2 $$
-Now let $\tau' \in [\tau-\epsilon,\tau)$ be a point where $\tilde{p}$ achieves its minimum (in that interval). Integrating from $\tau'$ to $\tau$ we get
-$$ - \tilde{p}(\tau') = |\tilde{p}(\tau')| \leq \int_{\tau'}^\tau \tilde{r}|\tilde{p}|^2 \mathrm{d}\sigma $$
-Using the integral bound we assumed we have
-$$ |\tilde{p}(\tau')| \leq \frac12 \sup_{\sigma\in[\tau',\tau]} |\tilde{p}(\sigma)| = \frac12 |\tilde{p}(\tau')|$$
-which can only occur if $\tilde{p}(\tau') = 0$. This shows that it is impossible to have $\tilde{p}(\tau') < 0$ and $\tilde{p}(\tau) = 0$ with $\tau' < \tau$.
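-(As a quick sanity check of the scalar argument: take $a = 0$, $r = q = 1$ and $p(t_0) = 0$ with $t_0 = 0$; then $\dot{p} = p^2 - 1$ has the exact solution $p(t) = -\tanh t$, which indeed stays in $[0,1)$ for all $t \le 0$. A numerical sketch in Python, with an arbitrarily chosen step size:)
-    import math
-    # Integrate p' = p^2 - 1 backward from p(0) = 0 and compare with -tanh(t).
-    p, t, dt = 0.0, 0.0, 1e-4
-    while t > -10.0:
-        p -= (p * p - 1.0) * dt   # explicit Euler, stepping backward in time
-        t -= dt
-    print(p, -math.tanh(t))       # both ~0.99999..., no blow-up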
<|endoftext|>
-TITLE: Calculate $\ln(2)$ using Riemann sum.
-QUESTION [8 upvotes]: Possible Duplicate:
-Is $\lim\limits_{k\to\infty}\sum\limits_{n=k+1}^{2k}{\frac{1}{n}} = 0$?
-
-
-Show that
-$$\ln(2) = \lim_{n\rightarrow\infty}\left( \frac{1}{n + 1} + \frac{1}{n + 2} + ... +
- \frac{1}{2n}\right)$$
-by considering the lower Riemann sum of $f$ where $f(x) = \frac{1}{x}$ over $[1, 2]$
-
-I was confused looking at the equality to begin with, since taking $n \rightarrow \infty$ for all of those terms would become $0$ right?
-Anyway, I attempted it regardless.
-$$\sum_{k=1}^n \frac{1}{n}(f(1 + \frac{k}{n}))$$
-$$= \sum_{k=1}^n \frac{1}{n}(\frac{1}{1+ \frac{k}{n}})$$
-$$=\sum_{k=1}^n \frac{1}{n + k} = $$ the sum from the question?
-I wasn't sure what to do from here. I tried something else though:
-$$=\frac{1}{n + \frac{n(n+1)}{2}}$$
-$$=\frac{2}{n^2 + 3n}$$ which seemed equally useless if I'm taking $n \rightarrow \infty$ as it all becomes $0$.
-
-REPLY [9 votes]: METHOD I
-We may recall the celebrated limit that yields the Euler-Mascheroni constant, namely:
-$$\lim_{n\to\infty} 1+\frac1{2}+\cdots+\frac{1}{n}-\ln{n}={\gamma}$$ $\tag{$\gamma$ is the Euler-Mascheroni constant}$
-Then everything boils down to:
-$$\lim_{n\to\infty}\frac{1}{n+1}+\frac{1}{n+2}+\cdots+\frac{1}{2n} = \lim_{n\to\infty}{\gamma}+\ln{2n}-{\gamma}-\ln{n}= \ln{2}.$$
-METHOD II
-Use a consequence of Lagrange's mean value theorem applied to the function $\ln(x)$, namely:
-$$\frac{1}{k+1} < \ln(k+1)-\ln(k)<\frac{1}{k} \space , \space k\in\mathbb{N} ,\space k>0$$
-Taking $k=n,n+1,...,2n-1$ in the inequality and then summing the resulting relations, we get all we need in order to apply the squeeze theorem.
-METHOD III
-We may use the Botez-Catalan identity and immediately get that:
-$$\lim_{n\to\infty}\frac{1}{n+1}+\frac{1}{n+2}+\cdots+\frac{1}{2n} = \lim_{n\to\infty} 1 - \frac{1}{2} + \frac{1}{3} - \frac{1}{4} + \cdots + (-1)^{2n+1}\frac{1}{2n}= $$
-$$\lim_{n\to\infty} 1 - \frac{1}{2} + \frac{1}{3} - \frac{1}{4} + \cdots + (-1)^{n+1}\frac{1}{n}=\ln{2}.$$
-The last series' limit is obtained by using the Taylor expansion of $\ln(x+1)$ and taking $x=1$.
-The proofs are complete.<|endoftext|>
-TITLE: combinatorics: The pigeonhole principle
-QUESTION [5 upvotes]: Assume that in every group of 9 people, there are 3 of the same height.
-Prove that in a group of 25 people there are 7 of the same height.
-I started by defining:
-pigeonholes: heights;
-pigeons: people.
-I do not know how to use the assumption.
-Thanks.
-
-REPLY [2 votes]: Just for fun an alternative to Jyrki's nice solution:
-If there are at most four different heights, then $25 = 6\cdot 4 + 1$ shows that seven people have the same height.
-So let's assume that there are five people all of different heights.
-If among the other 20 people four have different heights, the group of these four plus the five gives a group of nine, contradicting the hypothesis, as at most two in this group of nine have the same height.
-So the remaining 20 people have at most three different heights. Now $20 = 6\cdot 3 + 2$ shows that seven people of the remaining 20 have the same height.<|endoftext|>
-TITLE: Primes of the form $p=a^2-2b^2$.
-QUESTION [10 upvotes]: I've stumbled upon this and I was wondering if anyone here could come up with a simple proof:
-Let $p$ be a prime such that $p\equiv 1 \bmod 8$, and let $a,b\geq 1$ such that
-$$p=a^2-2b^2.$$
-Question: Is $b$ necessarily a square modulo $p$?
-I have plenty of numerical data to support an affirmative answer, but the proof eludes me so far.
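-Such data can be generated with a short brute-force search along the following lines (a sketch in Python; the search bounds are arbitrary, and Euler's criterion $b^{(p-1)/2} \equiv 1 \pmod p$ is used to test for squares):
-    import math
-    # For primes p = 1 (mod 8) and representations p = a^2 - 2 b^2,
-    # test whether b is a quadratic residue mod p.
-    def is_prime(m):
-        return m > 1 and all(m % d for d in range(2, math.isqrt(m) + 1))
-    for p in range(17, 500):
-        if is_prime(p) and p % 8 == 1:
-            for b in range(1, 200):
-                a = math.isqrt(p + 2 * b * b)
-                if a * a == p + 2 * b * b:
-                    assert pow(b, (p - 1) // 2, p) == 1, (p, a, b)
-    print("no counterexamples found")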
-For instance:
-\begin{align*}
-17 & = 5^2 - 2\cdot 2^2\\
-&= 7^2 - 2\cdot 4^2\\
-& = 23^2 - 2\cdot 16^2\\
-& = 37^2 - 2\cdot 26^2\\
-& = 133^2 - 2\cdot 94^2\\
-\end{align*}
-and $2\equiv 36$, $4$, $16$, $26\equiv 9$, $94\equiv 9 \bmod 17$ are squares.
-Thanks!
-
-REPLY [12 votes]: Since $p\equiv 1\pmod 8$, $2$ is a square modulo $p$. It will therefore be enough to show that any odd prime divisor of $b$ is also a square modulo $p$. Then any prime divisor of $b$ will be a square modulo $p$, therefore $b$ itself will be.
-Let $q$ be an odd prime divisor of $b$, and consider your equation modulo $q$. You find that $p\equiv a^2 \pmod{q}$, so that $p$ is a square modulo $q$. By quadratic reciprocity (using that $p\equiv 1\pmod 4$), $q$ is a square modulo $p$.<|endoftext|>
-TITLE: Which multivariable calculus books are heavily oriented at physics?
-QUESTION [6 upvotes]: Please recommend multivariable calculus books that are really physics oriented.
-My wife is looking to brush up on multivariable calculus, and at the same time she needs to brush up on the related physics.
-
-REPLY [2 votes]: Bressoud's Second Year Calculus is probably what you want. The motivation for much of this book is the historical problems that come from physics. This is probably one of the best books I know that integrates physics with mathematics as smoothly as this (pun not intended), and it's a great read for both physics and mathematics students. Not to mention, it covers and uses differential forms, a topic used in both mathematics and physics for differential geometry.<|endoftext|>
-TITLE: How to solve $y=\frac{(x-\sin x)}{ (1-\cos x)}$
-QUESTION [5 upvotes]: I only see numerical approaches to solve this equation. Is there an analytical way to express $x$ as a function of $y$ for the range $(0,2 \pi)$? If there is no solution, is it possible to prove it?
-
-REPLY [2 votes]: For a full and complete answer you might want to look into inversion of the power series, although I have not checked the inverse function theorem conditions for this one thoroughly.
-Nevertheless here's my $O(y^5)$-worth take on it:
-$$y=\frac{\frac{x^{3}}{6}+O\left(x^{5}\right)}{\frac{x^{2}}{2}+O\left(x^{4}\right)}=\frac{2}{x^{2}}\frac{\frac{x^{3}}{6}+O\left(x^{5}\right)}{1+O\left(x^{2}\right)}=\frac{2}{x^{2}}\left(\frac{x^{3}}{6}+O\left(x^{5}\right)\right)\left(1+O\left(x^{2}\right)\right)=\frac{2}{x^{2}}\left(\frac{x^{3}}{6}+O\left(x^{5}\right)\right)=\frac{x}{3}+O\left(x^{3}\right)$$
-$$x=3y+O\left(y^{3}\right)$$
-Carrying the expansion one order further, $x-\sin x=\frac{x^{3}}{6}\left(1-\frac{x^{2}}{20}+O\left(x^{4}\right)\right)$ and $1-\cos x=\frac{x^{2}}{2}\left(1-\frac{x^{2}}{12}+O\left(x^{4}\right)\right)$, so
-$$y=\frac{x}{3}\left(1+\frac{x^{2}}{30}+O\left(x^{4}\right)\right)=\frac{x}{3}+\frac{x^{3}}{90}+O\left(x^{5}\right),$$
-i.e. $x=3y-\frac{x^{3}}{30}+O\left(x^{5}\right)$, and substituting $x=3y+O\left(y^{3}\right)$ into the cubic term gives
-$$x=3y-\frac{9}{10}y^{3}+O\left(y^{5}\right)$$
-I am not entirely (neither meromorphically) confident beyond this order, however Grapher tells me I am not too far from truth
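-(A quick numerical check of this expansion, a sketch in Python:)
-    import math
-    # With y = (x - sin x)/(1 - cos x), the inverse series
-    # x = 3y - (9/10) y^3 + O(y^5) should reproduce x for small x.
-    for x in (0.5, 0.3, 0.1):
-        y = (x - math.sin(x)) / (1.0 - math.cos(x))
-        print(x, 3 * y - 0.9 * y ** 3)   # agrees with x to O(y^5)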
<|endoftext|>
-TITLE: Zeros in the complex plane and convergence
-QUESTION [7 upvotes]: I'm doing some number theory which requires some work in $\mathbb{C}$, but unfortunately my complex analysis is a little rusty.
-A text I am reading states the following:
-
-...and given that $\sum_{\rho}|\rho|^{-2}$ converges, it follows that the product $\prod \limits_\rho (1-\frac{z}{\rho})e^{z/\rho}$ converges to an entire function with zeros at $z=\rho$ and nowhere else.
-
-Could anyone explain, preferably without going into too much detail (just refreshing my memory!) why this is? Here, $\rho$ ranges over the set of zeros of an integral function $f: \mathbb{C} \to \mathbb{C}$ of order 1, which is not identically zero. It's obvious that the product is zero at each $\rho$; what do we need in order to know that it converges to an entire function with zeros nowhere else?
-For convergence, do we simply need to bound the modulus above by some finite value at each $z$? And since it's an infinite product, how do we confirm that it's zero nowhere else; a lower bound of the same kind? Finally, what tells us it is entire? Again, given that it's an infinite product I don't know how trivial that claim is; not very hard I'm sure but perhaps requiring some small justification.
-As I said, I have probably learned all this at some point in my life so no need for extraordinary detail, just a brief explanation would be very helpful. It all seems vaguely reminiscent of a course I took on Riemann surfaces a few years back. Thanks!
-
-REPLY [2 votes]: Just for ease of notation, enumerate the zeros as $(\rho_n)$. Then for any $R>0$ there exists $n_R$ such that $|\rho_n| > 2R$ for $n\ge n_R$. For any $N > n_R$, the function
-$$g_N(z) = \prod_{n=n_R}^N \left(1-\frac{z}{\rho_n}\right) e^{z/\rho_n}$$
-has no roots in $|z| < 2R$.<|endoftext|>
-TITLE: Poisson summation formula and Schwartz functions
-QUESTION [6 upvotes]: I am reading a proof of the Poisson summation formula which states that (with my version of the Fourier transform - I think they sometimes vary by a constant factor) for $f$ a Schwartz function on $\mathbb{R}$ (that is, a smooth function which, together with all its derivatives, decays faster than any polynomial as $|x| \to \infty$), the following relationship holds:
-$$\sum \limits_{n \in \mathbb{Z}}f(n) = \sum \limits_{n \in \mathbb{Z}}\hat{f}(2\pi n)$$
-where $\hat{f}$ denotes the Fourier transform. The proof goes as follows: define two functions on $\mathbb{T} = \{z: |z| = 1\}$ by $F, G: \mathbb{T} \to \mathbb{C}$ by $F(\theta) = \sum \limits_{n \in \mathbb{Z}}\hat{f}(2\pi n) e^{2 \pi i n \theta}$, and $G(\theta) = \sum \limits_{k \in \mathbb{Z}}f(\theta + k)$. Then one computes their Fourier coefficients and shows they are equal. After showing also that $F$ and $G$ are both Schwartz functions on $\mathbb{T}$, we apply uniqueness of Fourier series to show that $F=G$. Now uniqueness here relies very much on the fact that both $F$ and $G$ are Schwartz functions on $\mathbb{T}$: but a Schwartz function on $\mathbb{T}$ is simply an element of $C^\infty (\mathbb{T})$, i.e. a smooth function on $\mathbb{T}$.
-So, my question is this: how do we know (or show) that $F$ and $G$ are smooth? It is clear both are periodic, so it will suffice to show smoothness within their respective periods I suppose. $f$ is Schwartz on $\mathbb{R}$ which means it is smooth, but we are taking an infinite sum of translations of $f$, so it is not manifestly clear that either $F$ or $G$ will remain smooth. How do we show this? I guess we need to make use of the fact that at large values $f$ is very fast-decaying to show that most terms are "insignificant" in the sums, but whenever I tried to prove the smoothness formally it became messy. Is there a nice trick to showing $F$ and $G$ are smooth on $\mathbb{T}$? I would be very grateful for a proof of the fact. Many thanks in advance.
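-Edit: in case the normalization is in doubt, the identity itself checks out numerically. A sketch in Python, using the Gaussian $f(x) = e^{-x^2}$, whose transform under the convention $\hat{f}(\xi) = \int f(x) e^{-i \xi x}\, dx$ is $\hat{f}(\xi) = \sqrt{\pi}\, e^{-\xi^2/4}$:
-    import math
-    # Compare sum f(n) with sum fhat(2 pi n) for f(x) = exp(-x^2),
-    # where fhat(2 pi n) = sqrt(pi) * exp(-(pi n)^2).
-    N = 50
-    lhs = sum(math.exp(-n * n) for n in range(-N, N + 1))
-    rhs = sum(math.sqrt(math.pi) * math.exp(-(math.pi * n) ** 2)
-              for n in range(-N, N + 1))
-    print(lhs, rhs)   # both ~1.7726372...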
-
-REPLY [5 votes]: The following result is relevant here: If $f_n : \mathbb{R} \to \mathbb{R} $ is a sequence of continuously differentiable functions converging uniformly to $f$, and the sequence $f'_n$ converges uniformly to $g$, then $f$ is differentiable and $f'=g.$
-So we can check that $F$ and $G$ are smooth by checking that we can keep on differentiating them, which we do by applying the above theorem, the Weierstrass M-test and the fact that Schwartz functions decay very quickly.<|endoftext|>
-TITLE: What are the $n$-th degree minimal polynomials for $L^p([-1,1])$?
-QUESTION [6 upvotes]: It is known (even by me) that the Chebyshev polynomial of degree $n$ (of the first kind) is the minimal polynomial in the space $L^{\infty}([-1,1])$ for a fixed $n$ and leading coefficient $2^{n-1}$.
-However, what are the minimal polynomials for the $p$-norm in general for a fixed $n$? Does there exist a general answer?
-
-This is my first question here and I apologize if it is not up to par. Feel free to edit, migrate or close it if necessary.
-
-REPLY [4 votes]: It seems that your question is an open problem, but there are partial answers.
-Let's fix a natural number $n$, and $1<p<\infty$.<|endoftext|>
-TITLE: Need help in determining where this Pascal's triangle-like sequence comes from.
-QUESTION [9 upvotes]: I have a very interesting problem in that a program that I am running has generated a sequence of numbers that act like Pascal's triangle but have somehow built more structure into it. I have been puzzling over this for a while now and can't figure out how the sequence works. I have put half the triangle up to n = 14 so the whole thing should look like:
-1
-1 2 1
-1 3 3 1
-... etc... so notice that the numbers contained within the brackets add up to the right number for the triangle, for n = 4, (2 4) add to 6 and so on. I'm hoping that someone might know where this sequence comes from or how to generalize this solution so that I can calculate the numbers in the triangle for every n.
-Thanks!
-(1)
-(2) (1)
-(3) (1)
-(2 4) (4) (1)
-(5 5) (5) (1)
-(2 6 12) (6 9) (6) (1)
-(7 7 21) (7 14) (7) (1)
-(2 8 24 36) (8 16 32) (8 20) (8) (1)
-(9 9 54 54) (9 30 45) (9 27) (9) (1)
-(2 10 40 80 120) (10 25 75 100) (10 50 60) (10 35) (10) (1)
-(11 11 110 110 220) (11 55 99 165) (11 77 77) (11 44) (11) (1)
-(2 12 60 150 300 400) (12 36 144 240 360) (12 105 126 252) (12 96 112) (12 54) (12) (1)
-(13 13 195 195 650 650) (13 91 182 455 546) (13 156 364 182) (13 117 156) (13 65) (13) (1)
-(2 14 84 252 630 1050 1400) (14 49 245 490 980 1225) (14 196 224 784 784) (14 189 294 504) (14 140 210) (14 77) (14) (1)
-
-As requested, the code, written in Clojure, is:
-(defn binary? [number]
-  (if (or (= number 0)
-          (= number 1))
-    true false))
-
-(defn all-binary? [array]
-  (cond (empty? array) true
-        (binary? (first array)) (all-binary? (rest array))
-        :else false))
-
-(defn select-bit [value position]
-  (bit-and 1 (bit-shift-right value position)))
-
-(defn make-bits [length value]
-  (let [positions (reverse (range length))]
-    (map select-bit (repeat value) positions)))
-
-(defn make-random-bits [length]
-  (cond (< length 32)
-        (let [random-value (rand-int (bit-shift-left 1 length))]
-          (make-bits length random-value))
-        :else
-        (->> #(rand-int 2) repeatedly (take length))))
-
-(defn x! [n j] (make-bits n j))
-
-(defn x->j [x]
-  (let [to-dec (fn [position value]
-                 (* value (bit-shift-left 1 position)))
-        positions (reverse (range (count x)))]
-    (reduce + (map to-dec positions x))))
-
-(defn f!
[n m] - (let [table (-> (bit-shift-left 1 n) (make-bits m) vec)] - (fn [x] (nth table (x->j x))))) - -((f! 2 4) (x! 2 1)) - -(defn T! [n m] - (let [cyc (fn [X position] - (nth X (mod position (count X)))) - idx (fn [position n] - (range (- (inc position) n) - (inc position))) - sub (fn [X position n] - (map #(cyc X %) (idx position n))) - f (f! n m)] - (fn [X] - (let [sub-arrs (map #(sub X % n) - (range (count X)))] - (vec (map f sub-arrs)))))) - - ((T! 2 10) [0 1 0 1 0 1 1 1]) ;;Works - -(def q-keys #{:0->0 :0->1 :1->0 :1->1}) - -(defn- qode-bit [in out] - (let [pair [in out]] - (cond (= pair [0 0]) :0->0 - (= pair [0 1]) :0->1 - (= pair [1 0]) :1->0 - (= pair [1 1]) :1->1))) - -(defn- patch-q [q] - (let [missing? #(not (contains? q %)) - all-missing (filter missing? q-keys) - filler (zipmap all-missing (repeat 0))] - (merge q filler))) - -(defn q! - ([T X-in] - (let [X-out (T X-in) - q-arr (map qode-bit X-in X-out)] - (-> q-arr frequencies patch-q))) - ([T N j] (q! T (x! N j)))) - -(q! (T! 2 9) [0 1 1 0 0 1 0]) - -(defn Q! - [T N & {track? :track?}] - (let [q-fn #(q! T N %)] - (loop [key-set #{} - count-map {} - inputs-map {} - js (range (bit-shift-left 1 N))] - (cond - (nil? (seq js)) - (zipmap key-set (map #(hash-map :counts (count-map %) - :inputs (if track? (inputs-map %) [])) key-set)) - :else - (let [j (first js) - q (q-fn j) - entry? (not (nil? (key-set q)))] - (cond entry? - (recur key-set - (assoc count-map q (inc (count-map q))) - (if track? (assoc inputs-map q (conj (inputs-map q) j))) - (rest js)) - :else - (recur (conj key-set q) - (assoc count-map q 1) - (assoc inputs-map q (if track? [j] [ ])) - (rest js)))))))) - -So: -The system is a variant of the cellular automaton... just to explain the code little better: -(x! n v) => creates a binary sequence of length N, having a decimal value of v -(x! 9 7) => (0 0 0 0 0 0 1 1 1) - -(f! n m) => constructs a function that takes a sequence - => of length n and looks up the bit value of m - => which specifies the type of values that are - => given to the output. (more clearly shown in - => the example below) -(map (f! 2 0) [[0 0] [0 1] [1 0] [1 1]]) => (0 0 0 0) -(map (f! 2 1) [[0 0] [0 1] [1 0] [1 1]]) => (0 0 0 1) -(map (f! 2 15)[[0 0] [0 1] [1 0] [1 1]]) => (1 1 1 1) - -(T! n m) => generates a cellular automata type transform - => that takes an input sequence X and maps (f! n m) - => across every element of X -((T! 2 7) [1 0 1 0 1 0 1]) => [1 1 1 1 1 1 1] -((T! 2 10) [1 0 1 0 1 0 1]) => [0 1 0 1 0 1 0] - - -(q! T X-in) => Takes a transform and a sequence and looks at - => changes between the input and the output bits -(q! (T! 2 7) [0 1 1 0 0 1 0]) => {:1->0 0, :0->0 2, :1->1 3, :0->1 2} -(q! (T! 2 10) [1 0 1 0 1 0 1]) => {:0->0 0, :1->1 0, :0->1 4, :1->0 3} - -Now: -(Q! T N) => does a summary of mapping T to possible values of an N length - => sequence (from 0 to 2^N-1) - -(Q! (T! 2 3) 5) => - - - {{:0->0 0, :0->1 1, :1->0 1, :1->1 3} {:counts 5, :inputs [15 23 27 29 30]}, - {:0->1 1, :0->0 1, :1->0 1, :1->1 2} {:counts 5, :inputs [7 14 19 25 28]}, - {:0->1 1, :0->0 2, :1->0 1, :1->1 1} {:counts 5, :inputs [3 6 12 17 24]}, - {:1->1 0, :0->1 1, :0->0 3, :1->0 1} {:counts 5, :inputs [1 2 4 8 16]}, - {:0->0 0, :1->0 0, :0->1 0, :1->1 5} {:counts 1, :inputs [31]}, - {:0->0 0, :0->1 2, :1->0 2, :1->1 1} {:counts 5, :inputs [11 13 21 22 26]}, - {:1->1 0, :0->1 2, :0->0 1, :1->0 2} {:counts 5, :inputs [5 9 10 18 20]}, - {:1->0 0, :1->1 0, :0->1 0, :0->0 5} {:counts 1, :inputs [0]}} - -(Q! (T! 
2 3) 6) => - - - {{:0->0 0, :0->1 1, :1->0 1, :1->1 4} {:counts 6, :inputs [31 47 55 59 61 62]}, - {:0->0 0, :1->1 0, :0->1 3, :1->0 3} {:counts 2, :inputs [21 42]}, - {:1->1 0, :0->1 1, :0->0 4, :1->0 1} {:counts 6, :inputs [1 2 4 8 16 32]}, - {:0->0 0, :1->0 0, :0->1 0, :1->1 6} {:counts 1, :inputs [63]}, - {:0->0 0, :0->1 2, :1->0 2, :1->1 2} {:counts 9, :inputs [23 27 29 43 45 46 53 54 58]}, - {:0->1 2, :0->0 1, :1->0 2, :1->1 1} {:counts 12, :inputs [11 13 19 22 25 26 37 38 41 44 50 52]}, - {:1->1 0, :0->1 2, :0->0 2, :1->0 2} {:counts 9, :inputs [5 9 10 17 18 20 34 36 40]}, - {:1->0 0, :1->1 0, :0->1 0, :0->0 6} {:counts 1, :inputs [0]}, - {:0->1 1, :0->0 3, :1->0 1, :1->1 1} {:counts 6, :inputs [3 6 12 24 33 48]}, - {:0->1 1, :0->0 2, :1->0 1, :1->1 2} {:counts 6, :inputs [7 14 28 35 49 56]}, - {:0->1 1, :0->0 1, :1->0 1, :1->1 3} {:counts 6, :inputs [15 30 39 51 57 60]}} - -And so on. -I have grouped the inputs together into different counts using a pattern I discovered about the inputs and came up with a triangle. I'm hoping someone can cast a bit of light on what is happening! - -When I turn the inputs into a binary string, I get these groupings that correspond to elements in the triangle. It seems that these groups grow a certain way, but I'm not a mathematician and so are not good and generalising patterns. - -For N=5 - id count indices - 0 1 [ 00000 ] - 1 5 [ 00001 00010 00100 01000 10000 ] - 2 5 [ 00011 00110 01100 10001 11000 ] - 3 5 [ 00101 01001 01010 10010 10100 ] - 4 5 [ 00111 01110 10011 11001 11100 ] - 5 5 [ 01011 01101 10101 10110 11010 ] - 6 5 [ 01111 10111 11011 11101 11110 ] - 7 1 [ 11111 ] - -For N=6 - id count input - 0 1 [ 000000 ] - 1 6 [ 000001 000010 000100 001000 010000 100000 ] - 2 6 [ 000011 000110 001100 011000 100001 110000 ] - 3 9 [ 000101 001001 001010 010001 010010 010100 100010 100100 101000 ] - 4 6 [ 000111 001110 011100 100011 110001 111000 ] - 5 12 [ 001011 001101 010011 010110 011001 011010 100101 100110 101001 101100 110010 110100 ] - 6 6 [ 001111 011110 100111 110011 111001 111100 ] - 7 2 [ 010101 101010 ] - 8 9 [ 010111 011011 011101 101011 101101 101110 110101 110110 111010 ] - 9 6 [ 011111 101111 110111 111011 111101 111110 ] - 10 1 [ 111111 ] - -For N=7 - id count input - 0 1 [ 0000000 ] - 1 7 [ 0000001 0000010 0000100 0001000 0010000 0100000 1000000 ] - 2 7 [ 0000011 0000110 0001100 0011000 0110000 1000001 1100000 ] - 3 14 [ 0000101 0001001 0001010 0010001 0010010 0010100 0100001 0100010 0100100 0101000 1000010 1000100 1001000 1010000 ] - 4 7 [ 0000111 0001110 0011100 0111000 1000011 1100001 1110000 ] - 5 21 [ 0001011 0001101 0010011 0010110 0011001 0011010 0100011 0100110 0101100 0110001 0110010 0110100 1000101 1000110 1001001 1001100 1010001 1011000 1100010 1100100 1101000 ] - 6 7 [ 0001111 0011110 0111100 1000111 1100011 1110001 1111000 ] - 7 7 [ 0010101 0100101 0101001 0101010 1001010 1010010 1010100 ] - 8 21 [ 0010111 0011011 0011101 0100111 0101110 0110011 0110110 0111001 0111010 1001011 1001101 1001110 1010011 1011001 1011100 1100101 1100110 1101001 1101100 1110010 1110100 ] - 9 7 [ 0011111 0111110 1001111 1100111 1110011 1111001 1111100 ] - 10 7 [ 0101011 0101101 0110101 1010101 1010110 1011010 1101010 ] - 11 14 [ 0101111 0110111 0111011 0111101 1010111 1011011 1011101 1011110 1101011 1101101 1101110 1110101 1110110 1111010 ] - 12 7 [ 0111111 1011111 1101111 1110111 1111011 1111101 1111110 ] - 13 1 [ 1111111 ] - -REPLY [4 votes]: Here's a formula that is based on @QiaochuYuan's combinatorial description. 
-Let $f(n,k,c)$ be the number of binary strings with $n$ bits, $k$ ones and $c$ circular clusters. Let $f_{00}(n,k,c)$ be the number of such strings that start and end with 0, and define $f_{01}$, $f_{10}$, $f_{11}$ analogously, so $f=f_{00}+f_{01}+f_{10}+f_{11}$.
-By considering what happens when we append a 0 or 1 bit to the left end of each string (e.g. for strings counted by $f_{00}$, a 1 on the left end adds one 1 and one cluster), we have the following:
-$$
-\begin{align}
-f_{00}(n,k,c) & = f_{00}(n-1,k,c)+f_{10}(n-1,k,c) \\
-f_{01}(n,k,c) & = f_{01}(n-1,k,c)+f_{11}(n-1,k,c-1) \\
-f_{10}(n,k,c) & = f_{00}(n-1,k-1,c-1)+f_{10}(n-1,k-1,c) \\
-f_{11}(n,k,c) & = f_{01}(n-1,k-1,c)+f_{11}(n-1,k-1,c) \\
-f_{10}(n,k,c) & = f_{01}(n,k,c)
-\end{align}
-$$
-where the last identity comes from considering the reversal of each binary string.
-By inspection I propose
-$$
-\begin{align}
-f_{00}(n,k,c) & = \binom{n-k-1}{c}\binom{k-1}{c-1} \\
-f_{01}(n,k,c) & = \binom{n-k-1}{c-1}\binom{k-1}{c-1} \\
-f_{11}(n,k,c) & = \binom{n-k-1}{c-1}\binom{k-1}{c}
-\end{align}
-$$
-and we can prove them correct by induction using the identities above.
-Hence
-$$
-\begin{align}
-f(n,k,c) & = f_{00}(n,k,c)+2f_{01}(n,k,c)+f_{11}(n,k,c) \\
-& = \binom{n-k}{c}\binom{k-1}{c-1}\left[1+\frac{k}{n-k}\right]
-\end{align}
-$$
-Thus, for example, let $n=13,k=5$. For $c=1,2,3,4,5$ we get $f=13,182,546,455,91$, which is one of your sets (rearranged).<|endoftext|>
-TITLE: response of unit step input in harmonically oscillating system
-QUESTION [5 upvotes]: As far as I've understood (or misunderstood), in a constant coefficient second order differential equation
-$$\frac{d^2y}{dt^2} + b \frac{dy}{dt} +cy = ef(t)$$
-$b$, $c$ being constants,
-$f(t)$ the input to the system,
-$y$ being the response of the system.
-Let $$\frac{d^2y}{dt^2} +\omega_0^2y = u(t)k$$
-with $k$ nonzero be such a system, where $u(t)$ is a unit step function.
-How does one physically visualize this system (or the input to the system)? Isn't $u(t)$ like applying a constant force to the system? Will it not bring the system to a halt after a certain time?
-As if we keep pushing a mass-spring system with a constant force in only one direction?
-But the solution seems different. Please help me clear up this simple misconception.
-Thank you!
-
-REPLY [4 votes]: Yes, after time $t = 0$, the step function $ku(t)$ is like applying a constant force $k$ to the system, but no, it will not bring the system to a halt. In fact, it merely shifts the equilibrium position of the system, but the system will continue to oscillate about the new equilibrium position just as before. Since we only care about $t \ge 0$, let's just assume the force is a constant $k$. Observe that if
-$\newcommand{\d}{\mathrm d}$
-$$\frac{\d^2y}{\d t^2} + \omega^2y = k,$$
-this is equivalent to
-$$\frac{\d^2y}{\d t^2} + \omega^2\left(y - \frac k{\omega^2}\right) = 0,$$
-and if you let $\tilde y = y - k/\omega^2$, you get back the equation of the simple harmonic oscillator centered at $0$,
-$$\frac{\d^2\tilde y}{\d t^2} + \omega^2\tilde y = 0.$$
-So the system with a constant force behaves exactly like the unforced system, only shifted by $k/\omega^2$.
-Perhaps you're imagining the constant force to be like holding the oscillator and pushing it to one side. But when you do that in real life, you're also opposing the relative motion of the oscillator with respect to your hand, and that is what damps out the motion of the system. A constant force is not like that; it continues to push in one direction, no matter whether the oscillator is above or below its new equilibrium, no matter whether it is moving towards or away from it.
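-A quick numerical illustration of the shifted equilibrium (a sketch in Python; $\omega$, $k$, and the step size are arbitrary choices):
-    import math
-    # y'' + w^2 y = k, started at rest at y = 0, oscillates about k / w^2.
-    w, k = 2.0, 3.0
-    y, v, dt = 0.0, 0.0, 1e-4
-    ys = []
-    for _ in range(200000):       # integrate for 20 time units
-        a = k - w * w * y         # acceleration
-        v += a * dt               # semi-implicit (symplectic) Euler step
-        y += v * dt
-        ys.append(y)
-    print(min(ys), max(ys))       # ~0 and ~1.5, symmetric about 0.75
-    print(k / w ** 2)             # the new equilibrium, 0.75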
-Or just think of a spring held up at one end, with a weight at the other end being pulled down by gravity. It's not the gravitational force that makes it eventually come to a stop, it's the friction in the spring itself.<|endoftext|>
-TITLE: Distribution of the sample mean of an exponential
-QUESTION [7 upvotes]: Could someone please check whether my calculations are right?
-I have $X_1, ..., X_n$ from a $\mathcal{E}(\lambda): f(x, \lambda) = \lambda e^{-\lambda x}$.
-I have to find the $k$ such that $P(\bar{X} \le k) = \alpha$, where $\bar{X}$ is the sample mean; I did:
-$$Y=\sum_{i = 1}^{n} X_i$$
-$$Y \sim \Gamma (n, \lambda)$$
-$$\bar{X} = \frac{1}{n} Y \sim \Gamma(n, \frac{\lambda}{n})$$
-$$T = 2\frac{n}{\lambda} \bar{X} \sim \Gamma(\frac{2n}{2}, 2) \stackrel{d}{=}\chi^2 (2n) $$
-$$P(\bar{X} \le k) = P(T \le k' = 2\frac{n}{\lambda} k) = \alpha $$
-Then I can find the value of $k'$ from the table, and finally find $k$. Am I missing something? (I can't reach the result stated by the book).
-Thank you very much.
-
-REPLY [2 votes]: Correction.
-In discussing this question, I have discovered errors here.
-Specifically if $n$ observations are sampled at random from
-$\mathsf{Exp}(\text{rate} = \lambda),$ as shown in the Question above,
-then the sample total $T = \sum_{i=1}^n X_i \sim \mathsf{Gamma}(\text{shape}=n,\, \text{rate}=\lambda).$
-The proof is that the MGF of $X_i$ is $M_X(t) = \frac{\lambda}{\lambda-t},$
-so the MGF of $T$ is $M_T(t) = (\frac{\lambda}{\lambda-t})^n,$ which is
-the MGF of $\mathsf{Gamma}(\text{shape}=n,\, \text{rate}=\lambda).$
-Consequently, $\bar X \sim \mathsf{Gamma}(n, n\lambda).$
-(This relationship is illustrated in the link.)<|endoftext|>
-TITLE: A trigonometric series
-QUESTION [10 upvotes]: Let $\alpha$ be a real number. I'm asked to discuss the convergence of the series
-$$
-\sum_{k=1}^{\infty} \frac{\sin{(kx)}}{k^\alpha}
-$$
-where $x \in [0,2\pi]$.
-Well, I show you what I've done:
-
-if $\alpha \le 0$ the series cannot converge (its general term does not converge to $0$ when $k \to +\infty$) unless $x=k\pi$ for $k=0,1,2$. In other words, if $\alpha \le 0$ there is pointwise convergence only in $x=0,\pi,2\pi$.
-if $\alpha \gt 1$, I can use the Weierstrass M-test to conclude that the series is uniformly convergent hence pointwise convergent for every $x \in [0,2\pi]$. Moreover the sum is a continuous function in $[0,2\pi]$.
-
-Would you please help me in studying what happens for $\alpha \in (0,1]$? Are there any useful criteria that I can use?
-Does the series converge? And what kind of convergence is there? In case of non uniform but pointwise convergence, is the limit function continuous?
-Thanks.
-
-REPLY [2 votes]: There is an easier proof that $f_\alpha$ is discontinuous at 0.
-Let $x=\pi/n$ for some even $n$.
Then for $1\le i\le n$, group terms $a_{2nk+i}$ and $a_{2nk+i+n}$ together: -$$\frac{\sin (2nk+i)x}{(2nk+i)^\alpha}+\frac{\sin (2nk+i+n)x}{(2nk+i+n)^\alpha}\ge\frac{n\alpha}{(2nk+i)(2nk+i+n)^\alpha}\sin \frac{i\pi}{n} \ge 0$$ -We need only use $k=0$ and $i\le n/2$ for the lower bound: -$$\begin{align} -f_\alpha(x)\ge&\alpha \sum_{i=1}^{n/2} \frac{n}{i(i+n)^\alpha}\sin \frac{i\pi}{n}\\ -\ge&\alpha\sum_{i=1}^{n/2} \frac{2}{(i+n)^\alpha}\qquad{\text{($\sin t\ge 2/\pi\cdot t$)}}\\ -\ge&2\alpha\log\frac{3n/2+1}{n+1}\\ -\rightarrow&2\alpha\log 3/2 -\end{align}$$ -So $f_\alpha$ is bounded from below by a positive number as $x\rightarrow 0$, for all $\alpha\in(0,1]$. Because $f_\alpha(0)=0$, the convergence cannot be uniform around 0.<|endoftext|> -TITLE: Evaluating $ \lim_{n\rightarrow\infty} n \int_{0}^{1} \frac{{x}^{n-2}}{{x}^{2n}+x^n+1} \mbox {d}x$ -QUESTION [10 upvotes]: Evaluating -$$L = \lim_{n\rightarrow\infty} n \int_{0}^{1} \frac{{x}^{n-2}}{{x}^{2n}+x^n+1} \mbox {d}x$$ - -REPLY [5 votes]: Marvis showed that -$$ -I_n = \int_0^1 \dfrac{nx^{n-2}}{x^{2n} + x^n + 1} dx = \int_0^1 \dfrac{dt}{t^{1/n}(t^2 + t + 1)}. -$$ -At this point one can use monotone convergence as Davide commented, but since Chris is interested in a more direct approach, here is one. -We have -$$ -I_n \geq \int_0^1 \dfrac{dt}{t^2 + t + 1}=\frac{\pi}{3\sqrt3}, -$$ -and for $n>1$ and for any small $\delta>0$ -$$ -I_n\leq \int_0^\delta \dfrac{dt}{t^{1/n}} + \int_\delta^1 \dfrac{dt}{\delta^{1/n}(t^2 + t + 1)}=\frac{\delta^{1-1/n}}{1-1/n}+\frac1{\delta^{1/n}}\Big(\frac{2\pi}{3\sqrt3}-\frac{2\arctan(\frac{2\delta+1}{\sqrt3})}{\sqrt3}\Big). -$$ -An inspection of the limit as $n\to\infty$ of the right hand side shows that -$$ -I_n\leq \delta+\frac{2\pi}{3\sqrt3}-\frac{2\arctan(\frac{2\delta+1}{\sqrt3})}{\sqrt3}+E(\delta,n), -$$ -where $E(\delta,n)\to0$ as $n\to\infty$ for any fixed $\delta>0$. Since $\arctan(z)$ is continuous at $z=\frac1{\sqrt3}$ with $\arctan(\frac1{\sqrt3})=\frac\pi6$, for any given $\varepsilon>0$, we can choose $\delta>0$ so small that -$$ -\delta+\frac{2\pi}{3\sqrt3}-\frac{2\arctan(\frac{2\delta+1}{\sqrt3})}{\sqrt3}<\frac{\pi}{3\sqrt3}+\frac\varepsilon2. -$$ -For sufficiently large $n$, we can also ensure $E(\delta,n)<\frac\varepsilon2$, implying that there is a threshold value $N$ such that -$$ -\frac{\pi}{3\sqrt3}\leq I_n<\frac{\pi}{3\sqrt3}+\varepsilon, -$$ -for all $n>N$.<|endoftext|> -TITLE: Why does $\cos (\pi\cos (\pi \cos (\log (20+\pi)))) \approx -1$ -QUESTION [6 upvotes]: I read on Wikipedia that -$$\cos (\pi\cos (\pi \cos (\log (20+\pi)))) \approx -1$$ -to a high degree of accuracy. Why is this true? Is this pure coincidence or is there some mathematical background? - -REPLY [13 votes]: It is a well known coincidence that -$$e^{\pi}-\pi \approx 20$$ -Using this, we find -$$e^{\pi}-\pi \approx 20 \implies \pi\approx \log ( 20+\pi)$$ -then -$$-1 =\cos (\pi) \approx \cos(\log ( 20+\pi))$$ -$\cos (-\pi)=-1$, so a closer approximation of $-1$ can be found with -$$-1 =\cos(\pi\cos (\pi)) \approx \cos(\pi\cos(\log ( 20+\pi)))$$ -and again -$$-1 =\cos(\pi \cos(\pi\cos (\pi))) \approx \cos(\pi\cos(\pi\cos(\log ( 20+\pi))))$$ - -In fact, if $x_0 \approx -1$ and $x_n=\cos (\pi x_{n-1})$ then -$$\lim_{n \to \infty}x_n=-1$$<|endoftext|> -TITLE: Explicit expression for eigenpairs of Laplace-Beltrami operator -QUESTION [7 upvotes]: In $R^n$, the Laplace-Beltrami operator is just the Laplacian, and its eigenstructure is well known. 
There are also explicit expressions for the eigenvalues/eigenvectors of the Laplace-Beltrami operator on the sphere.
-
-Question: Are there any other nontrivial surfaces for which explicit expressions
- for the eigenvalues/eigenvectors of the Laplace-Beltrami operator have
- been worked out? I was unable to even find anything for an ellipsoid.
-
-I also wanted to emphasize that I'm looking for closed form expressions.
-
-REPLY [8 votes]: The answer is nice for flat tori in every dimension. Let's write such a torus as $V/\Gamma$ where $V$ is a finite-dimensional real vector space of dimension $n$ with inner product $\langle \cdot, \cdot \rangle$ and $\Gamma$ is a lattice (a discrete subgroup isomorphic to $\mathbb{Z}^n$ which spans $V$). Any twice-differentiable eigenfunction $f : V/\Gamma \to \mathbb{C}$ of the Laplacian is in particular a bounded eigenfunction of the Laplacian on $V$, so we can take it to have the form
-$$f_w(v) = e^{2 \pi i \langle w, v \rangle}$$
-for some $w \in V$ (for reasons to be described later). We also need to impose the constraint that $f_w$ is invariant under $\Gamma$, hence that
-$$e^{2\pi i \langle w, v \rangle} = e^{2 \pi i \langle w, v + g \rangle}$$
-for every $g \in \Gamma$. This condition is satisfied if and only if $w$ belongs to the dual lattice $\Gamma^{\vee}$, which consists of all vectors $w$ such that $\langle w, g \rangle \in \mathbb{Z}$ for all $g \in \Gamma$. Moreover,
-$$\Delta f_w = - 4 \pi^2 \| w \|^2 f_w$$
-so the eigenvalues of the Laplacian on $V/\Gamma$ are just $- 4\pi^2$ times the squares of the lengths of the vectors in $\Gamma^{\vee}$.<|endoftext|>
-TITLE: Theorem about two real numbers
-QUESTION [7 upvotes]: My question is:
-$a,b$ are two positive real numbers such that their product is constant, equal to $k$ say. Prove: the sum $a+b$ is minimum if and only if $a = b= \sqrt k$.
-Can this be solved using the $A.M.-\;G.M.$ inequality? If yes, then I would like to know it that way too.
-
-REPLY [4 votes]: Since you asked specifically for a proof using the A.M.-G.M. inequality, I'll provide one, even though it's not as elegant or direct as the proof in André Nicolas's answer (which does not use the A.M.-G.M. inequality).
-By the A.M.-G.M. inequality, $\sqrt{ab}\le\frac{a+b}2$,
-i.e. $2\sqrt{ab}\le a+b$, with equality iff $a=b$.
-Since the left-hand side of that inequality is fixed ($2\sqrt k$), we have that the right-hand side is equal to a fixed number it is not less than — i.e. is minimized — iff $a=b$, as sought.<|endoftext|>
-TITLE: Is $r=2\cos(\theta)$ a one-petal polar function?
-QUESTION [5 upvotes]: I'm currently learning about polar functions and their graphs in precalculus, and one of the questions on my homework is to identify the shape of the function $r=2\cos(\theta)$. We were taught that functions of the form $r=a\cos(n\theta)$ where $a>0$ are roses. So, with $a=2$ and $n=1$, wouldn't this function be a rose?
-I ask because the graph is a perfect circle centered at polar coordinates (1,0).
-
-REPLY [5 votes]: I'd say that it's convenient to include it in the set of rose curves—it can be useful to think about $r=a\sin n\theta$ for noninteger values of $n$ (even starting at $0$), rather than just integers greater than $1$.
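-In fact the circle you noticed can be verified directly: multiplying $r=2\cos\theta$ through by $r$ gives $r^2=2r\cos\theta$, i.e. $x^2+y^2=2x$, which is $(x-1)^2+y^2=1$: the circle of radius $1$ centered at the Cartesian point $(1,0)$. So it really is a perfect circle, and it is also natural to regard it as the one-petal member of the rose family.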
<|endoftext|>
-TITLE: Coin Toss Probability Question (Feller)
-QUESTION [5 upvotes]: I'm working out of Feller's "Introduction to Probability and its Application (Vol I.)" textbook and I'm stuck on a coin toss problem. I'll list the full problem and show where I'm having trouble.
-
-A coin is tossed until for the first time the same result appears twice in succession. To every possible outcome requiring n tosses attribute probability $1/2^{n-1}$. Describe the sample space. Find the probability of the following events: a.) the experiment ends before the sixth toss, b.) an even number of tosses is required.
-
-Alright so I'm not having any trouble describing the sample space and completing part a. This first part was solved by creating a possibility tree and adding up the probabilities (answer: 15/16). However, I'm stuck on part b and I don't understand how the $1/2^{n-1}$ given in the problem is to be interpreted, because if you toss the coin twice, it makes it seem like HH and TT each have a probability of $1/2$, which is not the case. The sample space of two tosses would be {HH, HT, TH, TT} and each would have a probability of $1/4$; following this logic I arrived at $15/16$, so I believe this is the correct thinking, which makes the problem even more confusing.
-The answer to part b is 2/3 so I'm not sure if that will help. Thanks for any help.
-
-REPLY [4 votes]: I think you have interpreted the question slightly incorrectly. The probability that the game ends after exactly $n$ tosses is $\dfrac1{2^{n-1}}$, and this is the probability that has been given in the problem.
-The sample space is $$\Omega = \{\underbrace{AA}_{1/2},\underbrace{A\bar{A}\bar{A}}_{1/4},\underbrace{A\bar{A}AA}_{1/8},\underbrace{A\bar{A}A\bar{A}\bar{A}}_{1/16},\underbrace{A\bar{A}A\bar{A}AA}_{1/32},\ldots\}$$ where $\bar{A}$ denotes the outcome which is not $A$.
-Hence, the probability that the experiment ends before the $6^{th}$ toss is $$\dfrac12 + \dfrac1{2^2} + \dfrac1{2^3} + \dfrac1{2^4} = \dfrac{15}{16}$$
-For the second part, the probability that the game ends after an even number of tosses is
-\begin{align}
-\sum_{n=2,4,6}^{\infty} \dfrac1{2^{n-1}} & = \dfrac1{2} + \dfrac1{2^3} + \dfrac1{2^5} + \dfrac1{2^7} + \cdots = \dfrac12 \left( 1 + \dfrac14 + \dfrac1{4^2} + \dfrac1{4^3} + \cdots\right) = \dfrac12 \dfrac1{\left(1- \dfrac14 \right)}\\
-& = \dfrac12 \times \dfrac1{3/4} = \dfrac4{2 \times 3} = \dfrac23.
-\end{align}
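-As a quick Monte Carlo sanity check of both parts (a sketch in Python; simulating a fair coin reproduces the stated weights, since exactly two length-$n$ outcomes end the game at toss $n$):
-    import random
-    # Toss a fair coin until the same result appears twice in succession.
-    trials, before6, even = 200000, 0, 0
-    for _ in range(trials):
-        n, prev = 1, random.randint(0, 1)
-        while True:
-            n += 1
-            cur = random.randint(0, 1)
-            if cur == prev:
-                break
-            prev = cur
-        before6 += n < 6
-        even += n % 2 == 0
-    print(before6 / trials)   # ~0.9375 = 15/16
-    print(even / trials)      # ~0.6667 = 2/3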
<|endoftext|>
-TITLE: Existence of valuation rings in an algebraic function field of one variable
-QUESTION [8 upvotes]: The following theorem is a slightly modified version of Theorem 1, p.6 of Chevalley's Introduction to the theory of algebraic functions of one variable.
-He proved it using Zorn's lemma.
-However, Weil wrote, in his review of the book, that this can be proved without it.
-I wonder how.
-Theorem
-Let $k$ be a field.
-Let $K$ be a finitely generated extension field of $k$ of transcendence degree one.
-Let $A$ be a subring of $K$ containing $k$.
-Let $P$ be a prime ideal of $A$.
-Then there exists a valuation ring $R$ of $K$ dominating $A_P$.
-EDIT
-Weil wrote:
-
-One might observe here that, in a function-field of dimension 1, every valuation-ring is finitely generated over the field of constants, and therefore, if a slightly different arrangement had been adopted, the use of Zorn's lemma (or of Zermelo's axiom) could have been avoided altogether; since Theorem 1 is formulated only for such fields, this treatment would have been more consistent, and the distinct features of dimension 1 would have appeared more clearly.
-
-EDIT
-We can assume that $A$ contains a transcendental element $x$ over $k$ (otherwise the theorem would be trivial).
-If $A$ is finitely generated over $k$, we can prove the theorem without using Zorn's lemma;
-It is well known that the integral closure $B$ of $A$ in $K$ is finitely generated as an $A$-module. Hence $B$ is Noetherian and integrally closed.
-We can assume $P ≠ 0$.
-Since $B$ is integral over $A$, there exists a prime ideal $M$ of $B$ which lies over $P$.
-Since $B$ is Noetherian, this can be proved without Zorn's lemma.
-Since dim $B$ = 1, $B_M$ is a discrete valuation ring dominating $A_P$.
-I wonder if Weil was talking about this case.
-EDIT[July 10, 2012]
-As the answers to this question show, the following line in the above is wrong:
"Since $B$ is Noetherian, this can be proved without Zorn's lemma."
-Surely this can be proved without Zorn's lemma, but it's not because $B$ is Noetherian, but because $B$ is finitely generated as an $A$-module.
-EDIT[July 13, 2012]
-I think I solved the problem.
-However, I don't think this is how Weil did it.
-I guess his proof was simpler.
-
-REPLY [4 votes]: I borrowed the idea from Bourbaki's proof of the Krull-Akizuki theorem.
-Definition
-Let $A$ be a not-necessarily commutative ring.
-Let $M$ be a left $A$-module.
-Suppose $M$ has a composition series; the lengths of any two such series are the same by the Jordan-Hölder theorem. We denote it by $leng_A M$.
-If $M$ does not have a composition series, we define $leng_A M = \infty$.
-Lemma 1
-Let $A = k[X]$ be a polynomial ring of one variable over a field $k$.
-Let $f$ be a non-zero element of $A$.
-Then $A/fA$ is a finite $k$-module.
-Proof:
-Clear.
-Lemma 2
-Let $A = k[X]$ be a polynomial ring of one variable over a field $k$.
-Let $M$ be a torsion $A$-module of finite type.
-Then $M$ is a finite $k$-module.
-Proof:
-Let $x_1, ..., x_n$ be generating elements of $M$.
-There exists a non-zero element $f$ of $A$ such that $fx_i = 0$, $i = 1, ..., n$.
-Let $\psi:A^n \rightarrow M$ be the morphism defined by $\psi(e_i) = x_i$, $i = 1, ..., n$,
-where $e_1, ..., e_n$ is the canonical basis of $A^n$.
-By Lemma 1, $A^n/fA^n$ is a finite $k$-module.
-Since $\psi$ induces a surjective morphism $A^n/fA^n \rightarrow M$, $M$ is a finite $k$-module.
-QED
-Lemma 3
-Let $A = k[X]$ be a polynomial ring of one variable over a field $k$.
-Let $M$ be an $A$-module.
-Then $leng_A M < \infty$ if and only if $M$ is a finite $k$-module.
-Proof:
-Suppose $leng_A M < \infty$.
-Let $M = M_0 \supset M_1 \supset ... \supset M_n = 0$ be a composition series.
-Each $M_i/M_{i+1}$ is isomorphic to $A/f_iA$, where $f_i$ is an irreducible polynomial in $A$.
-Since $dim_k A/f_iA$ is finite by Lemma 1, $dim_k M$ is finite.
-The converse is clear.
-QED
-Lemma 4
-Let $A$ be a not necessarily commutative ring.
-Let $M$ be a left $A$-module.
-Let $(M_i)_I$ be a family of $A$-submodules of $M$ indexed by a set $I$.
-Suppose $(M_i)_I$ satisfies the following condition.
-$M = \cup_i M_i$, and for any $i, j \in I$, there exists $k \in I$ such that $M_i \subset M_k$ and $M_j \subset M_k$.
-Then $leng_A M = sup_i leng_A M_i$.
-Proof:
-Suppose $sup_i leng_A M_i = \infty$.
-Since $sup_i leng_A M_i \leq leng_A M$, $leng_A M = \infty$.
-Hence we can assume that $sup_i leng_A M_i = n < \infty$.
-Let $n = leng_A M_{i_0}$.
-For each $i \in I$, there exists $k \in I$ such that $M_{i_0} \subset M_k$ and $M_i \subset M_k$.
-Since $leng_A M_k = n$, $M_{i_0} = M_k$, $M_i \subset M_{i_0}$.
-Since $M = \cup_i M_i$, $M = M_{i_0}$.
-Hence $leng_A M = n$.
-QED
-Lemma 5
-Let $A = k[X]$ be a polynomial ring of one variable over a field $k$.
-Let $K$ be the field of fractions of $A$.
-Let $M$ be a torsion-free $A$-module of finite type.
-Let $r = dim_K M \otimes_A K$.
-Let $f$ be a non-zero element of $A$.
-Then $leng_A M/fM \leq r(leng_A A/fA)$.
-Proof:
-There exists an $A$-submodule $L$ of $M$ such that $L$ is isomorphic to $A^r$ and $Q = M/L$ is a torsion module of finite type over $A$.
-Hence, by Lemma 2, $Q$ is a finite $k$-module.
-The kernel of $M/f^nM \rightarrow Q/f^nQ$ is $(L + f^nM)/f^nM$ which is isomorphic to $L/(f^nM \cap L)$.
-Since $f^nL \subset f^nM \cap L$,
-$leng_A M/f^nM \leq leng_A L/f^nL + leng_A Q/f^nQ \leq leng_A L/f^nL + leng_A Q$.
-Since $M$ is torsion-free, $f$ induces an isomorphism $M/fM \rightarrow fM/f^2M$.
-Hence $leng_A M/f^nM = n(leng_A M/fM)$.
-Similarly $leng_A L/f^nL = n(leng_A L/fL)$.
-Hence $leng_A M/fM \leq leng_A L/fL + (1/n) leng_A Q$.
-Since $L$ is isomorphic to $A^r$, $leng_A L/fL = r(leng_A A/fA)$.
-Letting $n$ tend to infinity, $leng_A M/fM \leq r(leng_A A/fA)$.
-QED
-Lemma 6
-Let $A = k[X]$ be the polynomial ring of one variable over a field $k$.
-Let $K$ be the field of fractions of $A$.
-Let $M$ be a torsion-free $A$-module.
-Suppose $r = dim_K M \otimes_A K$ is finite.
-Let $f$ be a non-zero element of $A$.
-Then $leng_A M/fM \leq r(leng_A A/fA)$.
-Proof:
-Let $(M_i)_I$ be the family of finitely generated $A$-submodules of $M$.
-$M/fM = \cup_i (M_i + fM)/fM =\cup_i M_i/(M_i \cap fM)$.
-Since $fM_i \subset M_i \cap fM$, $M_i/(M_i \cap fM)$ is isomorphic to a quotient of $M_i/fM_i$.
-Hence, by Lemma 5, $leng_A M_i/(M_i \cap fM) \leq r(leng_A A/fA)$.
-Hence, by Lemma 4, $leng_A M/fM \leq r(leng_A A/fA)$.
-QED
-Lemma 7
-Let $A = k[X]$ be a polynomial ring of one variable over a field $k$.
-Let $K$ be the field of fractions of $A$.
-Let $L$ be a finite extension field of $K$.
-Let $B$ be a subring of $L$ containing $A$.
-Then $B/fB$ is a finite $k$-module for every non-zero element $f \in B$.
-Proof:
-Since $L$ is a finite extension of $K$, $a_rf^r + ... + a_1f + a_0 = 0$, where $a_i \in A$ and $a_0 \neq 0$.
-Then $a_0 \in fB$.
-Since $B \otimes_A K \subset L$, $dim_K B \otimes_A K \leq [L : K]$.
-Hence, by Lemma 6, $leng_A B/a_0B$ is finite.
-Hence $leng_A B/fB$ is finite.
-Hence, by Lemma 3, the assertion follows.
-QED
-Lemma 8
-Let $A$ be an integrally closed domain containing a field $k$ as a subring.
-Suppose $A/fA$ is a finite $k$-module for every non-zero element $f \in A$.
-Let $S$ be a multiplicative subset of $A$.
-Let $A_S$ be the localization with respect to $S$.
-Then $A_S$ is an integrally closed domain containing the field $k$ as a subring and
-$A_S/fA_S$ is a finite $k$-module for every non-zero element $f \in A_S$.
-Proof:
-Let $K$ be the field of fractions of $A$.
-Suppose that $x \in K$ is integral over $A_S$.
-$x^n + a_{n-1}x^{n-1} + ... + a_1x + a_0 = 0$, where $a_i \in A_S$.
-Hence there exists $s \in S$ such that $sx$ is integral over $A$.
-Since $A$ is integrally closed, $sx \in A$.
-Hence $x \in A_S$.
-Hence $A_S$ is integrally closed.
-Let $f$ be a non-zero element of $A_S$.
-$f = a/s$, where $a \in A, s \in S$.
-Then $fA_S = aA_S$.
-By this, $aA$ is a product of prime ideals of $A$.
-Let $P$ be a non-zero prime ideal of $A$.
-Since $P$ is maximal, $A_S/P^nA_S$ is isomorphic to $A/P^n$ or $0$.
-Hence $A_S/aA_S$ is a finite $k$-module.
-QED
-Lemma 9
-Let $A$ be an integrally closed domain containing a field $k$ as a subring.
-Suppose $A/fA$ is a finite $k$-module for every non-zero element $f \in A$.
-Let $P$ be a non-zero prime ideal of $A$.
-Then $A_P$ is a discrete valuation ring.
-Proof:
-By Lemma 8 and this, every non-zero ideal of $A_P$ has a unique factorization as a product of prime ideals.
-Hence $PA_P \neq P^2A_P$.
-Let $x \in PA_P - P^2A_P$.
-Since $PA_P$ is the only non-zero prime ideal of $A_P$, $xA_P = PA_P$.
-Since every non-zero ideal of $A_P$ can be written as $P^nA_P$, $A_P$ is a principal ideal domain.
-Hence $A_P$ is a discrete valuation ring.
-QED
-Theorem
-Let $k$ be a field.
-Let $K$ be a finitely generated extension field of $k$ of transcendence degree one.
-Let $A$ be a subring of $K$ containing $k$.
-Let $P$ be a prime ideal of $A$.
-Then there exists a valuation ring $R$ of $K$ dominating $A_P$.
-Proof:
-We can assume that $A$ contains an element $x$ transcendental over $k$ (otherwise the theorem would be trivial).
-We can also assume that $P \neq 0$.
-Let $B$ be the integral closure of $A$ in $K$.
-By Lemma 7, $B/fB$ is a finite $k$-module for every non-zero element $f \in B$.
-Let $S = A - P$.
-Let $B_P$ and $A_P$ be the localizations of $B$ and $A$ with respect to $S$ respectively.
-Let $y \in P$ be a non-zero element.
-By Lemma 8, $B_P/yB_P$ is a finite $k$-module.
-Since $yB_P \subset PB_P$ and $PB_P \neq B_P$, $yB_P \neq B_P$.
-Hence there exists a maximal ideal $Q$ of $B_P$ containing $y$.
-Since $B_P$ is integral over $A_P$ and $PA_P$ is the unique maximal ideal of $A_P$, $PA_P = Q \cap A_P$.
-Let $Q' = Q \cap B$.
-Then $Q'$ is a prime ideal of $B$ lying over $P$.
-By Lemma 9, $B_{Q'}$ is a discrete valuation ring and it dominates $A_P$.
-QED<|endoftext|>
-TITLE: Relation between zeta value and genus of modular curve
-QUESTION [8 upvotes]: This question is sort of vague, so I don't mind a vague answer.
-We have the special value formula
-$\zeta(-1)=-B_2/2 = -1/12$,
-where $\zeta$ is the Riemann zeta function. Also, the "genus" of the level 1 modular curve $X(1)$ is $1/12$, where genus is meant in the sense of orbifolds. Is this just a numerical coincidence, or is there a deeper underlying phenomenon?
-
-REPLY [4 votes]: One way to interpret this result, I think, is as a Tamagawa number computation. More precisely, for the simply connected semisimple algebraic group $SL_2$, the Tamagawa number is famously equal to $1$. If you try to compute what this means in classical terms, you will find a relationship between the volume (and hence, by Gauss--Bonnet,
-the genus) of $X(1)$, and a $\zeta$-value, which will be the relationship you are asking about.<|endoftext|>
-TITLE: Increasing orthogonal functions
-QUESTION [14 upvotes]: What is the maximal $n$ such that there exist functions $f_1, \dots, f_n:[0,1] \to \mathbb{R}$ that are all bounded, non-decreasing, and mutually orthogonal in $L^2([0,1])$?
-
-REPLY [5 votes]: The answer is TWO. (The proof below is just an easy adaptation of somebody else’s
-answer to a related question, here on MO.)
- Lemma Let $f^{\star}$ and $g^{\star}$ be two nonzero, nondecreasing
-functions in $L^1([0,1])$, with $\int_{[0,1]}f^{\star}=\int_{[0,1]}g^{\star}=0$.
-Then $\int_{[0,1]}f^{\star}g^{\star} \gt 0$.
- Proof of lemma
-There is an $a_1\in [0,1]$ such that $f^{\star}(x)\leq 0$ when $x \lt a_1$ and
-$f^{\star}(x) \geq 0$ when $x\gt a_1$. Similarly, there is an $a_2\in [0,1]$ such that $g^{\star}(x)\leq 0$ when $x \lt a_2$ and $g^{\star}(x) \geq 0$ when $x\gt a_2$. By symmetry, we may assume
-$a_1 \leq a_2$.
-Then $0=\int_{0}^{a_2}f^{\star}(x)dx+\int_{a_2}^{1}f^{\star}(x)dx$, so $\int_{0}^{a_2}f^{\star}(x)dx \leq 0$.
We deduce
-$$
-\int_{0}^{a_1} |f^{\star}(x)|dx \geq \int_{a_1}^{a_2} |f^{\star}(x)|dx
-$$
-and hence
-$$
-\int_{0}^{a_1} f^{\star}(x)g^{\star}(x)dx=
-\int_{0}^{a_1} |f^{\star}(x)||g^{\star}(x)|dx
-\geq \int_{0}^{a_1} |f^{\star}(x)||g^{\star}(a_1)|dx
-\geq \int_{a_1}^{a_2} |f^{\star}(x)||g^{\star}(a_1)|dx
-\geq \int_{a_1}^{a_2} |f^{\star}(x)||g^{\star}(x)|dx=
--\int_{a_1}^{a_2} f^{\star}(x)g^{\star}(x) dx
-$$
-So the integral $\int_{0}^{a_2} f^{\star}(x)g^{\star}(x)dx$ is nonnegative; on the other hand, $f^{\star}g^{\star}$ is nonnegative on $[a_2,1]$. So $\int_{[0,1]}f^{\star}g^{\star} \geq 0$, and if this inequality is in fact an equality then $|f^{\star}g^{\star}|$ must be zero a.e. on $[0,1]$. In particular, there is a sequence $(x_n)$ tending to $1$ such that
-$f^{\star}(x_n)g^{\star}(x_n)=0$ for all $n$. By the pigeon-hole principle, there must be infinitely many $n$ such that $u(x_n)=0$, where $u$ is one of $f^{\star}$ or $g^{\star}$. Then $u \leq 0$ (as $u$ is nondecreasing and vanishes at points arbitrarily close to $1$), but this contradicts the fact that $u$ is nonzero with integral zero. The lemma is proved.
- Corollary: Let $f$ and $g$ be two orthogonal, nonzero, nondecreasing
-functions in $L^1([0,1])$. Then $0 \gt \int_{[0,1]}f \int_{[0,1]}g$: the integrals of
-$f$ and $g$ must have different signs.
- Proof of corollary: use the above lemma with
-$$
- f^{\star}=f-\int_{[0,1]}f, \ g^{\star}=g-\int_{[0,1]}g
- $$<|endoftext|>
-TITLE: Orientation and simplicial homology
-QUESTION [5 upvotes]: I'm reading Chapter 2 of Hatcher's Algebraic Topology, and I just can't figure out the computations of the boundary homomorphism for the examples provided. To provide some context, I've reproduced the figure for the torus from the book below:
-
-As I understand it, to compute $\partial U$ we follow the faces (which are edges) counter-clockwise, negating an edge if the oriented arrow is "facing us." But starting in the top right corner and working around U results in $\partial U = (-1)^0 (-b) + (-1)^1 (-a) + (-1)^2 c = a - b + c$, which contrasts with the book's result of $\partial U = a + b - c$. I seem to be making some critically flawed assumptions. What am I not understanding?
-
-REPLY [6 votes]: I don't understand your description of the boundary map, or your explicit calculation. Here is how the calculation goes:
-Imagine that you are standing in $U$, making a counterclockwise pivot, and looking at the boundary (in the naive sense) as you do so. Let's begin facing the top right corner. As we turn, our field of vision sweeps out $b$, but in the opposite direction to its arrow (we sweep out $b$ from right to left, while the arrow points from left to right), then $a$, again in the opposite direction to its arrow, and finally $c$, in the same direction as its arrow.
-So $\partial U = -b - a + c$. (If the text instead writes it as $a + b - c$, it must use the opposite orientation on $U$, i.e. a clockwise, rather than counterclockwise, orientation.)
-The same procedure applied in $L$, starting by facing the lower right corner, yields $\partial L = a - c + b$, which is $- \partial U$, as one would expect. (When you glue the $U$ and $L$ into a torus, the boundaries get glued together, and so "cancel" one another.)<|endoftext|>
-TITLE: Existence of the Pfaffian?
-QUESTION [16 upvotes]: Consider a square skew-symmetric $n\times n$ matrix $A$. We know that $\det(A)=\det(A^T)=(-1)^n\det(A)$, so if $n$ is odd, the determinant vanishes.
-If $n$ is even, my book claims that the determinant is the square of a polynomial function of the entries, and Wikipedia confirms this.
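-For instance, in the simplest even case $n=2$ one computes directly $$\det\begin{pmatrix}0&a_{12}\\-a_{12}&0\end{pmatrix}=a_{12}^2,$$ visibly the square of the polynomial $a_{12}$.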
The polynomial in question is called the Pfaffian.
-I was wondering if there was an easy (clean, conceptual) way to show that this is the case, without mucking around with the symmetric group.
-
-REPLY [4 votes]: Here is an approach using (possibly complex) Grassmann variables and Berezin integration$^1$ to prove the required relation $${\rm Det}(A)~=~{\rm Pf}(A)^2. \tag{1}$$ This approach isn't purely conceptual, but at least it is easy, we don't fudge the overall sign, we don't muck around much with the symmetric group, and Grassmann variables do implement exterior calculus.
-
-Define the Pfaffian of a (possibly complex) antisymmetric matrix $A^{jk}=-A^{kj}$ (in $n$ dimensions) as$^2$
-$$ \begin{align}
-{\rm Pf}(A)&~:=~\int \!d\theta_n \ldots d\theta_1~
-e^{\frac{1}{2}\theta_j A^{jk}\theta_k}
-\cr &~=~(-1)^{\lfloor\frac{n}{2}\rfloor} \int \!d\theta_1 \ldots d\theta_n~
-e^{\frac{1}{2}\theta_j A^{jk}\theta_k}
-\cr &~=~(-1)^{\frac{n}{2}} \int \!d\theta_1 \ldots d\theta_n~
-e^{\frac{1}{2}\theta_j A^{jk}\theta_k} \qquad (n \text{ even})
-\cr &~=~i^n \int \!d\theta_1 \ldots d\theta_n~
-e^{\frac{1}{2}\theta_j A^{jk}\theta_k}
-\cr &~=~ \int \!d\theta_1 \ldots d\theta_n~
-e^{-\frac{1}{2}\theta_j A^{jk}\theta_k}.\end{align}
- \tag{2}$$
-In the last equality of eq. (2), we rotated the Grassmann variables $\theta_k\to i\theta_k$ with the imaginary unit.
-Define the determinant as
-$$ {\rm Det}(A)~:=~\int \!d\theta_1 ~d\widetilde{\theta}_1 \ldots d\theta_n ~d\widetilde{\theta}_n~ e^{\widetilde{\theta}_j A^{jk}\theta_k}
-. \tag{3}$$
-It is not hard to prove via coordinate substitution that eq. (3) indeed reproduces the standard definition of the determinant.
-If we make a change of coordinates
-$$ \theta^{\pm}_k~=~ \frac{\theta_k\pm \widetilde{\theta}_k}{\sqrt{2}}, \qquad k~\in~\{1,\ldots,n\},\tag{4} $$
-in eq. (3), the super-Jacobian becomes $(-1)^n$.
-Therefore we calculate
-$$\begin{align} {\rm Det}(A)&\stackrel{(3)+(4)}{=}~(-1)^n\int \!d\theta^+_1 ~d\theta^-_1 \ldots d\theta^+_n ~d\theta^-_n~
-e^{\frac{1}{2}\theta^+_j A^{jk}\theta^+_k
--\frac{1}{2}\theta^-_j A^{jk}\theta^-_k}\cr
-&~~=~\int \!d\theta^-_1 \ldots d\theta^-_n~d\theta^+_n\ldots d\theta^+_1 ~~e^{\frac{1}{2}\theta^+_j A^{jk}\theta^+_k}
-e^{-\frac{1}{2}\theta^-_j A^{jk}\theta^-_k}\cr
-&~~\stackrel{(2)}{=}~{\rm Pf}(A)^2, \end{align}\tag{5}$$
-which proves eq. (1).$\Box$
-
---
-$^1$ We use the sign convention that Berezin integration $$\int d\theta_i~\equiv~\frac{\partial}{\partial \theta_i}\tag{6} $$ is the same as differentiation wrt. $\theta_i$ acting from the left. See e.g. this Phys.SE post and this Math.SE post.
-$^2$ The sign of the permutation $(1, \ldots, n)\mapsto(n, \ldots, 1)$ is given by $(-1)^{\frac{n(n-1)}{2}}=(-1)^{\lfloor\frac{n}{2}\rfloor}$, where $\lfloor\frac{n}{2}\rfloor$ denotes the integer part of $\frac{n}{2}$. One may show that the Pfaffian (2) vanishes in odd dimensions $n$.<|endoftext|>
-TITLE: Evaluating $ \int_1^{\infty} \frac{\{t\} (\{t\} - 1)}{t^2} dt$
-QUESTION [9 upvotes]: I am interested in a proof of the following.
-$$ \int_1^{\infty} \dfrac{\{t\} (\{t\} - 1)}{t^2} dt = \log \left(\dfrac{2 \pi}{e^2}\right)$$
-where $\{t\}$ is the fractional part of $t$.
-I obtained a circuitous proof for the above integral. I'm curious about other ways to prove the above identity. So I thought I would post it here and look at others' suggestions and answers.
-I am particularly interested in different ways to go about proving the above.
-I'll hold off from posting my proof for some time to see what all different proofs I get for this.
-REPLY [2 votes]: The integral on $[1,N+1]$ is (see @Rahul's first comment)
-$$
-I_N=\sum_{n=1}^N\big(2+2\log n+(2n-1)\log n-(2n+1)\log(n+1)\big),
-$$
-that is,
-$$
-I_N=2N+2\log(N!)-(2N+1)\log(N+1).
-$$
-Thanks to Stirling's approximation, $2\log(N!)=(2N+1)\log N-2N+\log(2\pi)+o(1)$. After some simplifications, this leads to
-$$
-I_N=\log(2\pi)-(2N+1)\log(1+1/N)+o(1)=\log(2\pi)-2+o(1).
-$$<|endoftext|>
-TITLE: Cute Determinant Question
-QUESTION [89 upvotes]: I stumbled across the following problem and found it cute.
-Problem: We are given that $19$ divides $23028$, $31882$, $86469$, $6327$, and $61902$. Show that $19$ divides the following determinant:
-$$\left|
-  \begin{matrix}
-   2 & 3&0&2&8 \\
-   3 & 1&8&8&2\\
-8&6&4&6&9\\
-0&6&3&2&7\\
-6&1&9&0&2
-  \end{matrix}\right|$$
-
-REPLY [39 votes]: Integer proof
-Perform the column operation $C_5\leftarrow 10^4C_1+10^3C_2+10^2C_3+10C_4+C_5$: the coefficient of $C_5$ is $1$ so this doesn't change the determinant.
-All elements of $C_5$ ($23028$, $31882$, $86469$, $6327$, and $61902$) are now divisible by $19$, so we can factor out $19$: hence the determinant is divisible by $19$.
-
-Modular proof
-In $\mathbb Z/19\mathbb Z$, the combination $10^4C_1+10^3C_2+10^2C_3+10C_4+C_5$ of the columns is the zero column: hence the matrix is not invertible and has determinant $0$. So in $\mathbb Z$, the determinant is a multiple of $19$.<|endoftext|>
-TITLE: Block Determinants
-QUESTION [6 upvotes]: This is a nice question I recently found in Golan's book.
-Problem: Let $A,B,C,D$ be $n\times n$ matrices over $\mathbb{R}$ with $n\ge 2$, and let $M$ be the $2n\times 2n$ matrix \begin{bmatrix}
-  A & B \\
-  C & D\\
- \end{bmatrix}
-If all of the "formal determinants" $AD-BC$, $AD-CB$, $DA-CB$, and $DA-BC$ are nonsingular, is $M$ necessarily nonsingular? If $M$ is nonsingular, must all of the formal determinants also be nonsingular?
-
-REPLY [4 votes]: $A=\left[\begin{array}{cc}
-1&3\\
-1&2\\
-\end{array}\right]\quad
-B=\left[\begin{array}{cc}
-2&4\\
-2&1\\
-\end{array}\right]\quad
-C=\left[\begin{array}{cc}
-1&0\\
-1&5\\
-\end{array}\right]\quad
-D=\left[\begin{array}{cc}
-2&0\\
-2&6\\
-\end{array}\right]\\
-|AD-BC|=20 \\
-|AD-CB|=102\\
-|DA-BC|=18\\
-|DA-CB|=8$
-but the combined matrix has one column equal to twice another (its third column is twice its first), so it has determinant $0$.<|endoftext|>
-TITLE: Every path has a simple "subpath"
-QUESTION [11 upvotes]: I've been thinking about this for a while, and can't seem to find any way to do it despite the statement itself seeming obvious. The problem is:
-
-Let $f:[0,1] \to \mathbb{R}^n$ be a continuous map, not necessarily injective, such that $f(0) \not = f(1)$. Let $Y$ denote the image of $f$ as a compact subset of $\mathbb{R}^n$. Then there exists an injective map $g:[0,1] \to Y$ such that $g(0) = f(0)$ and $g(1) = f(1)$.
-
-Obviously, the problem is trivial in many cases, but gets tough when you consider "wild" curves. Below I'll discuss how I've tried to solve it, but feel free to ignore it if you know the answer.
-My first idea was to start Zorn's lemma argument using a sequence of maps $f_0,f_1,f_2,f_3,\dots$ with $f_0 = f$ and $f_0([0,1]) \supseteq f_1([0,1]) \supseteq f_2([0,1]) \supseteq \cdots$ and trying to find a bound for the chain, but it seems this fails because given compact path-connected subsets $X_0,X_1,\dots$ where $X_0 \supseteq X_1 \supseteq \cdots$, the intersection $\bigcap_{k=1}^\infty X_k$ is not necessarily path connected.
For example, if $S \subset \mathbb{R}^2$ is a segment of the (closed) topologist's sine curve, then the family $S \cup \bar{B}_{1/n}(0)$ provides a counterexample.
-The other approach seemed to be trying to define a sequence of functions $f_0,f_1,\dots$ where each $f_n$ is "closer" to being injective than the one before it, and such that the sequence converges in some meaningful way to an injective function, or at least to a function that allows us to use a simpler method to finish the problem. However, I don't really know enough analysis to follow through with this approach, so I'm asking for help here. Any solution would be great, and solutions using a method similar to what I've mentioned above would be doubly appreciated.
-
-REPLY [4 votes]: It suffices to show that $Y$ is arcwise connected, since a homeomorphism $g:[0,1]\to Y$ such that $g(0)=f(0)$ and $g(1)=f(1)$ is certainly injective.
-The Hahn-Mazurkiewicz theorem says that a Hausdorff space is the continuous image of the closed unit interval iff it is a compact, connected, locally connected metric space; such spaces are sometimes called Peano spaces. Clearly $Y$ is a Peano space. The result is therefore immediate from the theorem that every Peano space is arcwise connected. This is Theorem 31.2 in Stephen Willard, General Topology; the proof is non-trivial but not hard to follow.
-You may be able to see most of it at Google Books, if you're allowed to read the page 220 result here. The missing lines from page 219 (including the end of the sentence from p. 220) are:
-
-Suppose $a$ and $b$ are points in a Peano space $X$. Using Theorem 26.15, there is a simple chain $U_{11},\dots,U_{1n}$ of open connected sets of diameter $<1$ from $a$ to $b$.<|endoftext|>
-TITLE: Tropical Machinery
-QUESTION [10 upvotes]: I recently heard of a new field in mathematics called tropical geometry. Having read the wiki page on it, it seems to be combinatorial algebraic geometry.
-My question is what are the benefits of applying tropical geometry to problems in algebraic geometry? Are there examples of theorems in algebraic geometry where a proof was made much simpler using tropical geometry? Or are there any conjectures in algebraic geometry that were proved using tropical geometry?
-Also, does anyone know the motivation behind why it was developed?
-
-REPLY [3 votes]: Here is a link to a Mathoverflow thread that gives several references for someone wanting to learn tropical algebraic geometry, including to the recent book of Maclagan and Sturmfels. Much of the motivation for tropical algebraic geometry can be found by perusing the papers mentioned there.<|endoftext|>
-TITLE: $\{1,1\}=\{1\}$, origin of this convention
-QUESTION [12 upvotes]: Is there any book that explicitly contains the convention that a representation of a set containing repeated elements is the same as the one without repeated elements?
-Like $\{1,1,2,3\} = \{1,2,3\}$.
-I have looked over a few books and none of them mentions such a thing. (Wikipedia has it, but it does not cite a source.)
-In my years learning mathematics in both the US and Hungary, this convention is known and applied. However recently I noticed some Chinese students claim they have never seen this before, and I don't remember seeing it in any book either.
-I never found a book that explicitly says what the rules are for how $\{a_1,a_2,a_3,\ldots,a_n\}$ specifies a set. Some people believe it can only specify a set if $a_i\neq a_j \Leftrightarrow i\neq j$.
-REPLY [3 votes]: You asked
-
-Is there any book that explicitly contains the convention that a representation of a set containing repeated elements is the same as the one without repeated elements?
-
-Set Theory and the Continuum Problem by Smullyan and Fitting contains, on page 19:
-
-For any sets $a$ and $b$ (whether the same or different) by $\{a,b\}$ we mean the class whose only elements are $a$ and $b$—or equivalently the class of all $x$ such that $x=a$ or $x=b$. … Note that if $a$ and $b$ happen to be the same, then $\{a,b\} = \{a\}$—stated otherwise, $\{a,a\}=\{a\}$.
-
-Here is one of many examples that I found by searching in Google Books for set theory ordered pair. It appears on page 23 of Naive Set Theory by Paul Halmos. This well-known book says:
-
-The ordered pair of a and b… is the set $(a, b)$ defined by:
- $$(a, b) = \{\{a\}, \{a,b\}\}.$$
- …
- We note first that if $a$ and $b$ happen to be equal, then the ordered pair
- $(a, b)$ is the same as the singleton $\{\{a\}\}$.<|endoftext|>
-TITLE: Proving an interesting feature of any $1000$ different numbers chosen from $\{1, 2, \dots,1997\}$
-QUESTION [5 upvotes]: Assume you choose $1000$ different numbers from the set $\{1, 2,
-\dots,1997\}$.
-Prove that within the $1000$ chosen numbers, there is a pair whose
- sum is $1998$.
-
-I defined:
-
-pigeonholes: possible sums.
-pigeons: the $1000$ different numbers.
-
-Is this definition good, or is there something better?
-
-REPLY [2 votes]: I would consider the sets $A_1 = \{1,1997\}, A_2 =\{2,1996\}, \ldots, A_{998} = \{998, 1000\}, A_{999}=\{999\}$. Note that they contain all the numbers $\{1,\ldots,1997\}$.
-Now, choose $1000$ numbers: since there are only $999$ sets, at least two of the chosen numbers lie in the same set; that set cannot be $A_{999}$ (it has a single element), so the two numbers sum to $1998$.<|endoftext|>
-TITLE: A center of a graph (for example a tree) lies on its longest path
-QUESTION [5 upvotes]: Prove a center of a tree (or if not much harder, of any graph) lies on the longest path.
-(I encountered this when I was reading an alternative proof for the property: "a tree has at most two centers")
-
-REPLY [2 votes]: You might find these items of interest: http://web.archive.org/web/20100619094433/http://orion.math.iastate.edu/axenovic/Papers/Path-Transversals.pdf and http://www.math.uiuc.edu/~west/openp/pathtran.html<|endoftext|>
-TITLE: If $\gcd(a,35)=1$ then show that $a^{12} \equiv 1 \pmod{35}$
-QUESTION [10 upvotes]: If $\gcd(a,35)=1$, then show that $a^{12}\equiv1\pmod{35}$
-
-I have tried this problem beginning with $a^6 \equiv 1 \pmod{7}$ and $a^4 \equiv 1 \pmod{5}$ (Fermat's Theorem) but couldn't get far enough. Please help.
-
-REPLY [16 votes]: Since $\gcd(a,7) =\gcd(a,5) = 1$, from Fermat's theorem,
-$$a^6\equiv 1\pmod7 \quad \text{ and } \quad a^4\equiv1\pmod5.
-$$
-Hence,
-$$ a^{12}\equiv 1\pmod7 \quad \text{ and } \quad a^{12}\equiv1\pmod5.
-$$
-This means that
-$$7\mid a^{12}-1 \quad\text{ and } \quad 5\mid a^{12}-1.
-$$
-Since $\gcd(7,5)=1$,
-$$35\mid a^{12}-1,
-$$
-that is,
-$$
-a^{12}\equiv1\pmod{35}.
-$$
-
-REPLY [4 votes]: One way to do this is to use Euler's totient theorem. You are familiar with Fermat's little theorem, and this is a generalized version. This approach is lengthier, but instructive, because it shows that the totient function doesn't always give you the minimal exponent. For more information, see here.
-From the theorem, we know
-$$a^{\phi(35)} \equiv_{35} 1$$
-A simple computation shows that $\phi(35)=24$ (this can be shortened if you know more about the $\phi$ function).
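-Concretely, since $35 = 5 \cdot 7$ is a product of two distinct primes, multiplicativity of $\phi$ gives
-$$\phi(35)=\phi(5)\phi(7)=(5-1)(7-1)=4\cdot 6=24.$$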
-So we have
-$$a^{24}\equiv 1. $$
-In general, if $k^2\equiv 1 \pmod{35}$, we see $(k-1)(k+1)\equiv 0$. Since $\mathbb{Z}/35\mathbb{Z}$ is not an integral domain, this does not force $k\equiv \pm 1$: by the Chinese remainder theorem the congruence $k^2 \equiv 1$ has exactly the four solutions $k\equiv \pm 1, \pm 6 \pmod{35}$ (note that $6^2=36\equiv 1$). Applying this to the above with $k=a^{12}$, we will be done once we rule out the three cases
-$$a^{12}\equiv -1, \qquad a^{12}\equiv 6, \qquad a^{12}\equiv -6 \pmod{35}.$$
-Since $6 \equiv -1 \pmod 7$, in the first two cases we would have $(a^6)^2 \equiv -1 \pmod 7$, so $-1$ would be a square mod $7$; and it is easy to see by checking all possibilities (the nonzero squares modulo $7$ are $1, 4, 2$) that $-1$ is not the square of any number when considered modulo $7$.
-In the remaining case we would have $a^{12}\equiv -6 \equiv -1 \pmod 5$, so $(a^3)^4 \equiv -1 \pmod 5$; but checking $1,2,3,4$ shows that every fourth power is $\equiv 1 \pmod 5$, so this case is impossible too.<|endoftext|>
-TITLE: Continuous root map of the coefficients of a polynomial
-QUESTION [7 upvotes]: I have a set of polynomials $P_t(z)= z^n+ a_{n-1}(t)z^{n-1}+\cdots+ a_0(t)$ which depends on a real parameter $t \in [a,b]$ and where $a_{n-1}(t),\ldots, a_0(t)$ are real continuous functions.
-May I say that there exists a continuous map $\theta(t)$ such that $\theta(t)$ is a root of $P_t$ (for all $t$)?
-I mean, I know that there exists a continuous dependence of the roots of a polynomial with respect to the coefficients and that the Viète map descends to a homeomorphism $w:\mathbb{C}^n/S_n\to \mathbb{C}^n$, but, can I 'choose' a root? Or do I need the axiom of choice to affirm that there exists a map $\mathbb{C}^n/S_n\to \mathbb{C}^n$? In that case, can I choose such a map to be continuous?
-Any bibliography reference for all this?
-
-REPLY [3 votes]: Great question.
-If the roots are always all real the answer is yes (this comes from the fact that in $\mathbb R$ you can order the roots from the lowest to the highest).
-If the roots or the coefficients may be complex, the answer is in general negative. Take for example the polynomial $t^2-z \in \mathbb{C}[t]$, with $z \in \mathbb C$.
-However there is a deep theorem (by Kato) that may help you: it states that if the roots of your polynomial depend only on a real parameter $t \in \mathbb {R}$ then you have $N$ continuous functions that describe the roots.
-Anyway, I suggest you have a look at Kato, Perturbation theory for linear operators, Springer (Theorem 5.2, page 109 in my edition).<|endoftext|>
-TITLE: Asymptotics for sum of binomial coefficients from Concrete Mathematics
-QUESTION [7 upvotes]: Concrete Mathematics EXERCISE 9.25:
-
-Supposing
- \[ S_n = \sum_{k=0}^n \binom{3n}k \]
- Prove that
- \[ S_n = \binom{3n}{n}\left(2-\frac4n+O\left(\frac1{n^2}\right)\right) \]
-
-This sequence also appears in OEIS A066380
-I have been trying to understand the answer to the problem, but failed:
-
-\[S_n\left/\binom{3n}n\right. = \sum_{k=0}^n \frac{n\cdots(n-k+1)}{(2n+1)\cdots(2n+k)}\tag1\]
- We may restrict the range of summation to $0 \le k \le (\log n)^2$, say. In this range $n\cdots(n-k+1) = n^k\left(1-\binom k2/n+O(k^4/n^2)\right)$ and $(2n+1)\cdots(2n+k) = (2n)^k\left(1+\binom{k+1}2/2n+O(k^4/n^2)\right)$, so the summand is
- \[ \frac1{2^k}\left(1-\frac{3k^2-k}{4n}+O\left(\frac{k^4}{n^2}\right)\right) \tag2 \]
- Hence the sum over $k$ is
- \[ 2-\frac4n+O\left(\frac1{n^2}\right) \tag3 \]
- Q.E.D.
-
-The formula (1) is acceptable, because
-\[ \left. \binom{3n}{n-k} \right/ \binom{3n}{n} = \frac{n\cdots(n-k+1)}{(2n+1)\cdots(2n+k)} \]
-Equation (2) may hold for $0 \le k \le (\log n)^2$, but formula (3) seems too strange (notice that $k$ is restricted, not ranging over all integers in $[0..n]$). How can we conclude that?
-I have tried to consider equation (2) as the partial sum of a power series (a power series in $n^{-1}$), but there seems to be no evidence that the corresponding power series of (2) or (3) converges.
-Now the OP has understood the answer. A trivial trick is necessary. The OP will look for someone clever to give a complete solution and will set his/her answer as the accepted answer.
-
-REPLY [2 votes]: The claim is that $$\sum_{k=0}^{m} \frac1{2^k}\left(1-\frac{3k^2-k}{4n}+O\left(\frac{k^4}{n^2}\right)\right) = 2-4/n+O(1/n^2) $$ where $m=\lfloor \log_2^2 n \rfloor.$
-Computing one term at a time: $ \displaystyle A(m)= \sum_{k=0}^m 1/2^k = 2 - 2^{-m}= 2- \frac{1}{n^{\log_2 n}}= 2 + \mathcal{O}(n^{-2}).$
-This far into the book you should know how to compute $\displaystyle \sum_{k=0}^m \frac{3k^2-k}{2^k} = \frac{ 2^{m+4} -3m^2-11m-16}{2^m}.$ (In case you forgot, try differentiating $\sum x^m/2^m$.) The only thing that survives the $\mathcal{O}(n^{-2})$ war is $2^4=16$ so the second term contributes $-4/n + \mathcal{O}(n^{-2}).$
-And finally, $\displaystyle \sum_{k=1}^{\infty} \frac{k^4}{2^k}$ is convergent so the last term's contribution is certainly $\mathcal{O}(n^{-2}).$ Hence the result.<|endoftext|>
-TITLE: $p$ an odd prime number of the form $p=2^m+1$. Prove: if $(\frac{a}{p})=-1$ then $a$ is a primitive root modulo $p$
-QUESTION [5 upvotes]: Let $p$ be an odd prime number of the form $p=2^m+1$.
-I'd like your help proving that if $a$ is an integer such that $(\frac{a}{p})=-1$, then $a$ is a primitive root modulo $p$.
-If $a$ is not a primitive root modulo $p$, then $Ord_{p}(a)=t$, where $t<p-1$.<|endoftext|>
-TITLE: Schur-Weyl duality for $sl_2$ and $S_n$
-QUESTION [9 upvotes]: $V$ is an $m$ dimensional vector space having a structure of $sl_2(\mathbb{C})$-module, where $sl_2(\mathbb{C})$ is the Lie algebra of the Lie group $SL_2(\mathbb{C})$. The symmetric group $S_n$ acts on the tensor product $V^{\otimes n}$.
-What does Schur-Weyl duality say in this case?
-What is the irreducible decomposition of $V^{\otimes n}$?
-If we have $S_n$ irreducible decomposition can we get $sl_2(\mathbb{C})$ decomposition and vice versa?
-I would be very grateful if someone could give a detailed answer.
-Thanking you in advance.
-
-REPLY [5 votes]: Here is an answer to your question about the decomposition of $V^{\otimes n}$ as an $\mathfrak{sl}_2(\mathbb{C})$-module (I assume $V$ is the standard $2$-dimensional irreducible module).
-The irreducible $\mathfrak{sl}_2(\mathbb{C})$-modules are indexed by non-negative integers and the one corresponding to the integer $m$ will be denoted $L(m)$ (it has dimension $m+1$ and we know exactly what it looks like, see for example Humphreys' Introduction to Lie Algebras and Representation Theory, chapter 7).
-So to see how to decompose $V^{\otimes n}$ we need to know what $V\otimes L(m)$ is for some integer $m$. This is easiest to see if we look at these as $\mathfrak{gl}_2(\mathbb{C})$-modules where the decomposition of tensor products is given by the Littlewood-Richardson rule. In this case, since $V = L(1)$, we simply get that $V\otimes L(m) = L(m+1)\oplus L(m-1)$ for $m \geq 1$ (and $V\otimes L(0) = L(1)$).
-Now we want to apply this to see how many times $L(m)$ appears as a summand in $V^{\otimes n}$. Let us denote this multiplicity by $a_{m,n}$.
-We see from the above that $a_{m,n} = a_{m-1,n-1} + a_{m+1,n-1}$.
-One can check that $$a_{m,n} = \binom{n}{\frac{m+n}{2}} - \binom{n}{\frac{n - m - 2}{2}}$$ satisfies the above recursive formula when $n$ and $m$ have the same parity (and when they don't, $a_{m,n} = 0$).
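-For instance, for $n=2$ the formula gives $a_{2,2}=\binom{2}{2}-\binom{2}{-1}=1$ and $a_{0,2}=\binom{2}{1}-\binom{2}{0}=1$ (with the convention $\binom{2}{-1}=0$), matching the decomposition $V\otimes V = L(2)\oplus L(0)$.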
Note that $a_{0,2k} = a_{1,2k-1}$ is the $k$'th Catalan number.<|endoftext|>
-TITLE: $m!n! < (m+n)!$ Proof?
-QUESTION [29 upvotes]: Prove that if $m$ and $n$ are positive integers then $m!n! < (m+n)!$
-Given hint:
-
-$m!= 1\times 2\times 3\times\cdots\times m$ and $1 1$ if $0 < m < n$.<|endoftext|>
-TITLE: There exists a unique isomorphism $M \otimes N \to N \otimes M$
-QUESTION [7 upvotes]: I want to show that there is a unique isomorphism $M \otimes N \to N \otimes M$ such that $x\otimes y\mapsto y\otimes x$. (Prop. 2.14, i), Atiyah-Macdonald)
-My proof idea is to take a bilinear $f: M \times N \to N \otimes M$ and then use the universal property of the tensor product to get a unique linear map $l : M \otimes N \to N \otimes M$. Then show that $l$ is bijective.
-Can you tell me if my proof is correct:
-Let $M,N$ be two $R$-modules. Let $(M \otimes N, b)$ be their tensor product.
-Then $$ \varphi: M \times N \to N \otimes M$$ defined as $$ (m,n) \mapsto n \otimes m$$
-and $$ (rm , n) \mapsto r(n \otimes m)$$
-$$ (m , rn) \mapsto r(n \otimes m)$$
-is bilinear.
-Hence by the universal property of the tensor product there exists a unique $R$-module homomorphism (i.e., linear map) $l: M \otimes N \to N \otimes M$ such that $l \circ b = \varphi$.
-$l$ is bijective:
-$l$ is surjective: Let $n \otimes m \in N \otimes M$. Then $l(m \otimes n) = l(b(m,n)) = \varphi (m,n) = n \otimes m$.
-$l$ is injective: Let $l(m\otimes n) = l(b(m,n)) = 0 = \varphi(m,n) = n \otimes m$. Then $n \otimes m = 0$ implies that either $n$ or $m$ are zero and hence $m \otimes n = 0$.
-
-REPLY [11 votes]: It is not true that $n\otimes m = 0$ implies either $n$ or $m =0$ (see example below). To prove injectivity you should define a map going the other way and show that these maps are mutually inverse.
-example:
-$\bar1\otimes \bar2 \in \mathbb{Z}/2\mathbb{Z}\otimes_\mathbb{Z}\mathbb{Z}/3\mathbb{Z}$ satisfies $\bar1\otimes \bar2=\bar1\otimes (2\cdot\bar1)=(\bar1\cdot 2)\otimes \bar1= \bar0\otimes \bar1=0$ but $\bar1\in\mathbb{Z}/2\mathbb{Z}$ and $\bar2\in\mathbb{Z}/3\mathbb{Z}$ are not zero.<|endoftext|>
-TITLE: Proving that the function $f(x,y)=\frac{x^2y}{x^2 + y^2}$ with $f(0,0)=0$ is continuous at $(0,0)$.
-QUESTION [7 upvotes]: How would you prove or disprove that the function given by
-$$f(x,y) = \begin{cases} \dfrac{x^2y}{x^2 + y^2} & (x,y) \neq (0,0) \\ 0 & (x,y) = (0,0) \end{cases}$$
-is continuous at $(0,0)$?
-
-REPLY [8 votes]: Observe that
-$$
-\left| \frac{x^2y}{x^2+y^2} \right| \leq \frac{x^2 |y|}{x^2} = |y|
-$$
-provided $x \neq 0$; for $x = 0$ the bound $|f(x,y)| \leq |y|$ holds trivially, so it is valid for all $(x,y) \neq (0,0)$. Then you conclude, since $|y| \to 0$ as $(x,y) \to (0,0)$.<|endoftext|>
-TITLE: Does there exist a 3-dimensional subspace of real functions consisting only of monotone functions?
-QUESTION [13 upvotes]: This is Exercise 1.O from the book
-Van Rooij, Schikhof: A Second Course on Real Functions.
-
-The set of the monotone functions on $[0,1]$ contains all polynomial
- functions of degree $\le 1$. These form a two-dimensional vector space. Does the set of
- all monotone functions contain a three-dimensional vector space?
-
-REPLY [7 votes]: If $V$ is a vector space of monotone functions and $f_1, f_2, f_3 \in V$ then $\left(f_i(0), f_i(1)\right) \in \mathbb{R}^2$ are three vectors in a two-dimensional space and therefore dependent. That means that there is a non-trivial linear combination of $f_1, f_2, f_3$ that vanishes at $0$ and $1$ and because it is also monotone, it is identically zero.
So any three elements of $V$ are linearly dependent.<|endoftext|>
-TITLE: Evaluating $\int_{0}^{1} \frac{x^{2} + 1}{x^{4} + 1 } \ dx$
-QUESTION [6 upvotes]: How do I evaluate $$\int_{0}^{1} \frac{x^{2} + 1}{x^{4} + 1 } \ dx$$
-I tried using substitution but I am getting stuck. If there were an $x^3$ term in the numerator, then this would have been easy, but there isn't one.
-
-REPLY [6 votes]: Another way, if you want to sweat harder instead of the elegant suggestion of Chandrasekhar:$$x^4+1=(x^2+\sqrt{2}\,x+1)(x^2-\sqrt{2}\,x+1)\Longrightarrow$$$$ \frac{x^2+1}{x^4+1}=\frac{1}{2}\cdot\frac{1}{x^2+\sqrt 2\,x+1}+\frac{1}{2}\cdot\frac{1}{x^2-\sqrt 2\,x+1}$$ (the two quadratic factors add up to $2(x^2+1)$), so for example$$\int\frac{dx}{x^2+\sqrt 2\,x+1}=\int\frac{2\,dx}{(\sqrt 2\,x+1)^2+1}=\sqrt 2\arctan(\sqrt 2\,x+1)+C$$ and etc.<|endoftext|>
-TITLE: Convergence or divergence of $\sum_{k=1}^{\infty} \left(1-\cos\frac{1}{k}\right)$
-QUESTION [6 upvotes]: Does $$\sum_{k=1}^{\infty} \left(1-\cos\frac{1}{k}\right)$$ converge or diverge?
-
-REPLY [4 votes]: We know that $\cos x = 1-\frac{x^2}{2} + o(x^3)$ (this is the Taylor expansion of $\cos x$ near zero), so:
-$$1-\cos\left(\frac{1}{k}\right) = 1-\left(1-\frac{\left(\frac{1}{k}\right)^2}{2} + o\left(\frac{1}{k^2}\right) \right)=\frac{\left(\frac{1}{k}\right)^2}{2} + o\left(\frac{1}{k^2}\right)$$
-Notice that:
-$$\lim_{k\to\infty}\frac{\frac{\left(\frac{1}{k}\right)^2}{2} + o\left(\frac{1}{k^2}\right)}{\frac{1}{k^2}}=\frac{1}{2}$$
-Since $\sum \frac{1}{k^2}$ converges, and since $\left(1-\cos\left(\frac{1}{k}\right)\right)>0$ for all natural $k$, we'll conclude from the limit comparison test that $\sum \left(1-\cos\frac{1}{k} \right)$ converges.<|endoftext|>
-TITLE: Determinant called Grammian
-QUESTION [5 upvotes]: Famously, functions $f_1,f_2,\dots,f_n$, each of which possesses a derivative of order $n-1$, are linearly independent on the interval $I$ if
-$$ \det\left( \begin{array}{ccccc} f_1 & f_2 & f_3 &\cdots &f_n \\ f'_1 & f'_2 & f'_3 &\cdots &f'_n \\ \vdots & \vdots & \vdots &\vdots &\vdots \\ f_1^{(n-1)} & f_2^{(n-1)} & f_3^{(n-1)} &\cdots &f_n^{(n-1)} \end{array} \right) $$
-called the Wronskian of $f_1,f_2,\dots,f_n$, is not zero for at least one point in the interval $I$. Equivalently, if functions $f_1,f_2,\dots,f_n$ possess at least $n-1$ derivatives and are linearly dependent on $I$ then $W(f_1,f_2,\dots,f_n)(x)=0$ for every $x\in I$. So this equivalent statement gives just a necessary condition for dependency of the above functions on the interval. Fortunately, there is a necessary and sufficient condition for dependency of a set of functions $f_1(x),f_2(x),\dots,f_n(x), x\in I$:
-
-A set of functions $f_1(x),f_2(x),\dots,f_n(x), x\in I$ is linearly dependent on $I$ iff the determinant below is zero:
- $$ \det\left( \begin{array}{cccc} \int_{a}^{b} f_1^2 dx& \int_{a}^{b} f_1f_2 dx&\cdots &\int_{a}^{b}f_1f_ndx \\ \int_{a}^{b}f_2f_1dx & \int_{a}^{b}f_2^2 dx &\cdots &\int_{a}^{b}f_2f_ndx \\ \vdots & \vdots & \vdots &\vdots \\ \int_{a}^{b}f_nf_1dx & \int_{a}^{b}f_nf_2dx&\cdots &\int_{a}^{b}f_n^2dx \end{array} \right) $$
-
-It seems to be a great practical theorem, but I couldn't find its proof. I really appreciate your help.
-
-REPLY [10 votes]: If we assume that $f_j$ are continuous, then we define an inner product by $\langle f,g\rangle=\int_a^bf(x)g(x)dx$. Consider $g_1,\ldots,g_n$ such that $\{g_1,\ldots,g_n\}$ is orthonormal and $\operatorname{span}\{g_1,\ldots,g_n\}\supseteq\operatorname{span}\{f_1,\ldots,f_n\}$ (extend an orthonormal basis of the latter span if necessary).
We can write $f_i=\sum_{j=1}^n\alpha_{ij}g_j$ and if $\alpha$ denotes the matrix whose entries are $\alpha_{i,j}$ we have that $G=\alpha \alpha^T$, where $G$ is the last matrix of the OP.
-The matrix $G$ is invertible if and only if $\alpha$ is, which gives the result.
-Indeed, if $\sum_{k=1}^n\beta_k\alpha_{k,j}=0$ for all $j$, for some $\beta_k$ not all $0$, then $\sum_k\beta_kf_k=0$.
-
-REPLY [8 votes]: This is really a fact about real finite dimensional vector spaces with an inner product. Suppose $V$ is of dimension $n,$ with a positive definite inner product $\langle , \rangle.$ Suppose we take an orthonormal basis $e_i.$ Finally, suppose we have a set of $n$ vectors $f_j.$ Define two determinants,
-$$ A = \det \left( \langle e_i, f_j \rangle \right), $$ and
-$$ B = \det \left( \langle f_i, f_j \rangle \right). $$
-Then $$ A^2 = B. $$
-The matrix defining $A$ is just the matrix of coefficients of the $f_j$ in terms of the $e_i,$ so the determinant is $0$ if and only if the $f_j$ are dependent. Same for $B.$<|endoftext|>
-TITLE: Prove $\binom{p-1}{k} \equiv (-1)^k\pmod p$
-QUESTION [9 upvotes]: Prove that if $p$ is an odd prime and $k$ is an integer satisfying $1\leq k \leq p-1$, then the binomial coefficient
- $$\binom{p-1}{k} \equiv (-1)^k\pmod p$$
-
-I have tried basic things like expanding the left hand side to $\frac{(p-1)(p-2)\cdots(p-k)}{k!}$ but couldn't get far enough.
-
-REPLY [4 votes]: Recall that we have $(x + 1)^p \equiv x^p + 1 \bmod p$. The ring $\mathbb{F}_p[x]$ is an integral domain, so we can divide and it follows that
-$$(x + 1)^{p-1} \equiv \frac{x^p + 1}{x + 1} \equiv x^{p-1} - x^{p-2} + \cdots - x + 1 \bmod p.$$<|endoftext|>
-TITLE: Multiple choice question about an entire function $f:\Bbb{C}\to\Bbb{C}$ and the function $g :\Bbb{C}\to\Bbb{C} $ defined by $ g(z)= f(z) - f(z+1)$
-QUESTION [7 upvotes]: Let $ f: \mathbb{C} \rightarrow \mathbb{C} $ be an entire function and let $g : \mathbb{C} \rightarrow \mathbb{C} $ be defined by
-$$g(z)= f(z) - f(z+1)$$ for all $ z\in \mathbb{C}$. Which of the options are correct:
-
-if $ f(\frac{1}{n}) = 0 $ for all positive integers n, then $f$ is a constant function.
-if $ f(n) = 0 $ for all positive integers n, then $f$ is a constant function.
-if $ f(\frac{1}{n}) = f(\frac{1}{n}+1)$ for all positive integers n, then $g$ is a constant function.
-if $ f(n) = f(n+1) $ for all positive integers $n$, then $g$ is a constant function
-
-Please suggest which of the options are correct.
-Using the Identity theorem, the options 1 and 3 seem to be correct, as in both cases the sequence of zeros for $\,f\,$ and $\,g\,$ is $\left(\frac{1}{n}\right)$, which converges to $0 \in \Bbb C$. Therefore, in both cases the function in question ($f$, respectively $g$) is identically equal to zero. But in (2) and (4) we arrive, for both $\,f\,$ and $\,g\,$, at the zero sequence $(n)$, which diverges to infinity and does not ensure the required conclusion.
-
-REPLY [7 votes]: This answer is a compilation of the answers given in the comments above.
-To answer these questions, you should be familiar with the identity theorem.
-1. True. The set of zeros of $f(z)$ has an accumulation point at $0$, so $f$ is identically zero.
-2. False. Consider $f(z)=\sin(\pi z)$.
-3. True. We have $f(1/n)=f(1/n+1)$ for all $n\in\mathbb{N}$, so $g(1/n)=f(1/n)-f(1+1/n)=0$ for all $n\in\mathbb{N}$, and the set of zeros has an accumulation point at 0, so $g$ is identically zero.
-4. False.
Again, consider $f(z)=\sin(\pi z)$.<|endoftext|>
-TITLE: How to prove $641|2^{32}+1$
-QUESTION [6 upvotes]: Possible Duplicate:
-To show that Fermat number $F_{5}$ is divisible by $641$.
-
-How to prove that $641$ divides $2^{32}+1$? What would the technical way be for this question? I want to teach it to my students. Any help. :-)
-
-REPLY [7 votes]: In light of Peter's comment:
-we have:
-$2^2=4$,
-$2^4=16, 2^8=256,$
-$2^{16}=256^2=65536=641k_1+154,$
-$2^{32}=(2^{16})^2=641k_2+154^2=641k_3+640$
-the rest is very easy: $2^{32}+1=641k_3+640+1=641(k_3+1)$.<|endoftext|>
-TITLE: Evaluating a sum to infinity
-QUESTION [7 upvotes]: I'm looking for a way that allows me to work out the following sum:
-$$\sum\limits_{k=1}^{\infty} \sin^2\left(\frac{1}{k}\right)$$
-Any hint/suggestion is welcome. Thanks.
-
-REPLY [9 votes]: It may be too much to ask for a closed form.
-We find an equivalent series that converges very fast.
-We have
-$$\begin{eqnarray*}
-\sum_{k=1}^\infty \sin^2\frac{1}{k}
-&=& \sum_{k=1}^\infty \frac{1}{2}\left(1-\cos \frac{2}{k}\right) \\
-&=& \frac{1}{2} \sum_{k=1}^\infty \sum_{j=1}^\infty \frac{(-1)^{j+1}}{(2j)!} \left(\frac{2}{k}\right)^{2j} \\
-&=& \frac{1}{2} \sum_{j=1}^\infty \frac{(-1)^{j+1} 2^{2j}}{(2j)!} \zeta(2j) \\
-&=& \frac{1}{4} \sum_{j=1}^\infty \frac{(4\pi)^{2j}}{[(2j)!]^2} B_{2j}
-\end{eqnarray*}$$
-where $\zeta(2j)$ is the zeta function and $B_{2j}$ are the Bernoulli numbers.
-Interchanging the sums is allowed by Fubini's theorem.
-The ratio of successive terms goes like $1/j^2$ for $j$ large.
-Below we give the partial sums to $25$ digits.
-$$\begin{array}{ll}
-N & \frac{1}{4} \sum_{j=1}^N \frac{(4\pi)^{2j}}{[(2j)!]^2} B_{2j}\\\hline
- 1 & 1.644934066848226436472415\cdots \\
- 2 & 1.284159655611180372633747\cdots \\
- 3 & 1.329374902810489223287726\cdots \\
- 4 & 1.326187355647956066654778\cdots \\
- 5 & 1.326328589450443236755002\cdots \\
- 6 & 1.326324312838454339066804\cdots \\
- 7 & 1.326324406812557661734373\cdots \\
- 8 & 1.326324405246394595313185\cdots \\
- 9 & 1.326324405266867080420232\cdots \\
- 10 & 1.326324405266651581194045\cdots \\
- 11 & 1.326324405266653446986876\cdots \\
- 12 & 1.326324405266653433466641\cdots \\
- 13 & 1.326324405266653433549842\cdots \\
- 14 & 1.326324405266653433549402\cdots \\
- 15 & 1.326324405266653433549404\cdots
-\end{array}$$<|endoftext|>
-TITLE: Evaluating $\int_{0}^{1} x^m \ln^\alpha(x) dx$
-QUESTION [11 upvotes]: Honestly, I am asked to think about $$\int_{0}^{1} x^m \ln^\alpha(x) dx$$ I have applied all the methods I know. I even doubt whether this integral makes sense. If it is a duplicate, please inform me so that I can remove the question soon. Thanks.
-
-REPLY [5 votes]: $$
-\begin{align}
-\int_0^1x^m\,\log^\alpha(x)\,\mathrm{d}x
-&=\int_{-\infty}^0u^\alpha\,e^{(m+1)u}\,\mathrm{d}u\\
-&=(-1)^\alpha(m+1)^{-\alpha-1}\int_0^{\infty}t^\alpha e^{-t}\mathrm{d}t\\
-&=(-1)^\alpha(m+1)^{-\alpha-1}\Gamma(\alpha+1)
-\end{align}
-$$<|endoftext|>
-TITLE: To define a measure, is it sufficient to define how to integrate continuous function?
-QUESTION [19 upvotes]: Let me make my question clear. I want to define a measure $\mu$ on a space $X$. But instead of telling you what value I assign to each subset of $X$ (the measurable sets that form a $\sigma$-algebra), I tell you, for each continuous $f$, what $\int_X f(x)d\mu (x)$ is.
-Then, is this measure uniquely determined? I know that if I tell you how to integrate all measurable functions, then the measure is of course uniquely determined, because integrating characteristic functions gives the measure of the respective sets.
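-Explicitly, $\mu(A)=\int_X \mathbf{1}_A(x)\,d\mu(x)$ for every measurable set $A$.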
But is it also true if I only define integration with continuous functions?
-
-REPLY [20 votes]: In general this is false. Here are some examples to think about:
-
-If the $\sigma$-algebra on $X$ is not the Borel $\sigma$-algebra, there is generally no hope. (What if $X$ has the trivial topology but the $\sigma$-algebra is not trivial?) Hence you should restrict your attention to Borel measures.
-Take $X = \{a,b\}$ with the topology $\tau = \{\emptyset, \{a\}, \{a,b\}\}$. The Borel $\sigma$-algebra is $2^X$ but the only continuous functions $f : X \to \mathbb{R}$ are constant, so $\mu_1 = \delta_a$ and $\mu_2 = \delta_b$ agree on all continuous functions. Thus you probably want a Hausdorff space.
-Take $X = \mathbb{R}$. Let $\mu$ be counting measure and $\nu = 2\mu$. So you probably want to look at $\sigma$-finite measures.
-As I mentioned in the above comment, on $X = \omega_1 + 1$ (which is compact Hausdorff), one can find two distinct finite measures which agree on all continuous functions.
-
-However, here is a positive result.
-
-Proposition. Let $\mu, \nu$ be finite Borel measures on a metric space $(X,d)$. If $\int f d\mu = \int f d\nu$ for all bounded continuous $f$, then $\mu = \nu$.
-
-Proof. Let $E$ be a closed set, and let $f_n(x) = \max\{1 - n d(x,E), 0\}$. You can check that $f_n$ is continuous and $f_n \downarrow 1_E$ as $n \to \infty$. So by dominated convergence, $\mu(E) = \nu(E)$, and $\mu, \nu$ agree on all closed sets.
-Now we apply Dynkin's $\pi$-$\lambda$ theorem. Let $\mathcal{P}$ be the collection of all closed sets; $\mathcal{P}$ is closed under finite intersections, and $\sigma(\mathcal{P})$ is the Borel $\sigma$-algebra $\mathcal{B}$. Let $\mathcal{L} = \{ A \in \mathcal{B} \colon \mu(A) = \nu(A)\}$. Using countable additivity, it is easy to check that $\mathcal{L}$ is a $\lambda$-system, and we just showed $\mathcal{P} \subset \mathcal{L}$. So by Dynkin's theorem, $\mathcal{B} = \sigma(\mathcal{P}) \subset \mathcal{L}$, which is to say that $\mu,\nu$ agree on all Borel sets, and hence are the same measure.<|endoftext|>
-TITLE: Guaranteed Checkmate with Rooks in High-Dimensional Chess
-QUESTION [34 upvotes]: Given an infinite (in all directions), $n$-dimensional chess board $\mathbb Z^n$, and a black king. What is the minimum number of white rooks necessary that can guarantee a checkmate in a finite number of moves?
-To avoid trivial exceptions, assume the king starts a very large distance away from the nearest rook.
-Rooks can change one coordinate to anything. King can change any set of coordinates by one.
-And same problem with i) Bishops and ii) Queens, in place of rooks.
-
-REPLY [29 votes]: In 3-dimensional chess, it is possible to force checkmate starting with a finite number of rooks. As this fact still appears to be open, I'll post a method of forcing checkmate with 96 rooks, even though it should be clear that this is not optimal. You can remove some of the rooks from the method I'll give below, but I am aiming for a simple explanation of the method rather than the fewest possible number of rooks.
-First, we move all of the rooks far away in the $z$ direction, so that they cannot be threatened by the king. We also move each of the rooks so that they all have distinct $z$ coordinates. That way, they are free to move any number of steps in the $x$ and $y$ directions without blocking each other. The king will be in check whenever it has the same $(x,y)$-coordinate as one of the rooks (a rook at $(a,b,c)$ attacks every square $(a,b,z)$).
We can project onto the $(x,y)$-plane to reduce it to a 2-dimensional board. Looked at this way, each rook can move any number of places in the $x$ or $y$ direction (and rooks can pass through each other, can pass through the king, and you can have multiple rooks in the same $(x,y)$-square). The king is in check if it is on the same square as a rook.
-First, I'll describe the following "blocking move" to stop the king passing a given horizontal (or vertical) line.
-
-
-In the position above, the right-most 3 rooks are stopping the king moving past the red line on the next move. Then, once the king moves, do the following. (i) If the king's $x$-coordinate does not change, do nothing. (ii) If the king's $x$-coordinate increases by one, move the left-most rook so that it is to the right of the other three. Then you are back in the same position, just moved along by one step. (iii) If the king's $x$-coordinate decreases by one step, do nothing. We are back in the same situation, except reflected (so, keep performing the same steps, but reflected in the $x$-direction on subsequent moves).
-This way, we chase the king along the red line, but he can never cross it. Furthermore, if the king changes from going right to going left, we have a free move to do something elsewhere on the board. Actually, for this to work, if the king is in column $i$, we just need three rooks at positions $i-1,i,i+1$ on the row above the red line, and one more at any other position on the row. Next, if we have 4 rooks stationed somewhere on the given horizontal row, how many moves does it take to move them into the blocking position? The answer is 6. You first move one rook to have the same $x$-coordinate as the king (say, $x=i$). After the king moves, by reflection we can assume he keeps the same $x$-coordinate or moves one to the right. Then, move the next rook to position $i+2$. Then, after the next move, move a rook to position $i-2$ or $i+4$ in such a way that we have three rooks on the row, with one space between each of them, and the king is in one of the 3 middle columns. Say, the rooks are at positions $j-2,j,j+2$ and the king is in column $j-1,j$ or $j+1$. If the king moves to column $j-1$ or $j+1$ we just move the 4th rook to this position and we have attained the blocking position. If the king moves to column $j$, we move the rook in position $j-2$ to position $j-1$ and, on the next move, we can move the 4th rook in to attain the blocking position. If the king moves to column $j+2$, we move the rook in column $j-2$ to $j+4$, then we are in the position above where there are rooks at positions $k-2,k,k+2$ and the king in position $k$, so it takes 2 more moves to attain the blocking position.
-So, we just need to keep 4 rooks stationed along the row which we wish to block the king from crossing. Whenever he moves within 6 steps from this row, start moving the rooks into the blocking position, and he can never step into the given row.
-Now, choose a large rectangle surrounding the king, and position 15 rooks in each corner as below.
-
-
-Also, position 4 rooks in arbitrary positions along each edge of the rectangle. So, that's $4\times15+4\times4=76$ rooks used so far. I purposefully left some of the board blank in the diagram above. The point is to not specify exactly how big the rectangle is. It doesn't matter, just so long as it is large enough to be able to move the 76 rooks into position before the black king can get within 6 steps of any of the edges of the rectangle.
-Now, once we are in this position, then whenever the black king moves within one of the red rectangles, use the 4 rooks positioned along the adjacent edge to perform the blocking move as described above to stop the king crossing that edge. We can keep doing this, and imprison the black king within the big rectangle. Furthermore, we keep getting free moves to do something else whenever the king moves out of the red rectangles, or whenever he changes direction within a red rectangle. Also, if the king is in one of the inside corners of a red rectangle, there is already a rook in the corresponding position at the adjacent edge of the big rectangle, giving us a free move.
-Now, suppose that we have an extra 20 rooks. During the free moves we get while chasing the king around the edge of the big rectangle, we can move these to any position we like. With 20 rooks, we can position 16 of them, one to the left of each of the 16 rooks near the right-hand corners of the big rectangle which have an empty square to their left. Also, position 4 rooks along the column one step to the left of the right-hand edge of the big rectangle. This way, we create a new rectangle one square smaller in the $x$-direction. If the king ever enters the right-hand red rectangle or one step to the left of this, we use the new 4 blocking rooks to stop him from reaching the right-hand edge of the new big rectangle. If he is already within the red rectangle, and stays there then, when we get a free move, we can move one of the new blocking rooks to the position one above or below the row in which the king is. Then we can bring the other 3 rooks in, blocking him out of this column. In this way, we create a new big rectangle one step smaller in the $x$-direction and with the king still trapped inside. Similarly, we can reduce the height of the big rectangle by 1. Repeat this, enclosing the king in ever smaller rectangles until, eventually, he gets trapped in the single square within a $3\times3$ rectangle surrounded by 8 rooks. Then bring one of the other rooks in to cover this square, which is checkmate.<|endoftext|>
-TITLE: Determining all Sylow $p$-subgroups of $S_n$ up to isomorphism?
-QUESTION [8 upvotes]: I'm trying to understand a classification of all Sylow $p$-subgroups of $S_n$.
-Let $Z_p$ be the subgroup of $S_p$ generated by $(12\cdots p)$. Then $Z_p\wr Z_p$ has order $p^p\cdot p=p^{p+1}$, and is isomorphic to a subgroup of $S_{p^2}$.
-Define inductively $Z_p^{\wr r}$ by $Z_p^{\wr 1}=Z_p$ and $Z_p^{\wr k+1}=Z_p^{\wr k}\wr Z_p$. It is easy to show by induction that $Z_p^{\wr r}$ has order $p^{(p^{r-1}+p^{r-2}+\cdots+1)}$, and, inductively assuming that $Z_p^{\wr r-1}$ is isomorphic to a subgroup of $S_{p^{r-1}}$ (and that $Z_p$ is isomorphic to a subgroup of $S_p$), it follows that $Z_p^{\wr r}$ is isomorphic to a subgroup of $S_{p^r}$.
-However, I can't make the jump that if $n=a_0+a_1p+\cdots+a_kp^k$ is the base $p$ expansion, then any Sylow $p$-subgroup is isomorphic to
-$$
-\underbrace{Z_p^{\wr 1}\times\cdots\times Z_p^{\wr 1}}_{a_1}\times
-\underbrace{Z_p^{\wr 2}\times\cdots\times Z_p^{\wr 2}}_{a_2}\times\cdots\times
-\underbrace{Z_p^{\wr k}\times\cdots\times Z_p^{\wr k}}_{a_k}.
-$$
-I know this group has order
-$$
-(p)^{a_1}(p^{p+1})^{a_2}\cdots(p^{(p^{k-1}+p^{k-2}+\cdots+1)})^{a_k}=p^{\sum_{i=1}^k a_i(1+\cdots+p^{i-1})}=p^{\nu_p(n!)}
-$$
-which is the order of any Sylow $p$-subgroup of $S_n$, based on the formula here. However, I couldn't find an epimorphism from any Sylow $p$-subgroup onto this product, or vice versa.
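-(As a sanity check of the order count: for $p=2$ and $n=6=1\cdot 2+1\cdot 2^2$ we get $a_1=a_2=1$, so the product is $Z_2\times(Z_2\wr Z_2)$, of order $2\cdot 2^3=2^4$, and indeed $\nu_2(6!)=\lfloor 6/2\rfloor+\lfloor 6/4\rfloor=4$.)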
Is it clear how this is isomorphic to a subgroup of $S_n$? Then I understand that the isomorphism would just follow from the Sylow theorems. Thanks.
-
-REPLY [3 votes]: Given two permutation groups H on n points and K on m points, there is a permutation group called H × K that acts on n + m points. The group is abstractly the direct product of the two groups, and the action is very simple: an element like $(h,k)$ acts on the first n points exactly like h did on its n points, and on the last m points exactly like k did on its m points.
-For instance, a Sylow 3-subgroup of Sym(6) is $\langle (1,2,3) \rangle \times \langle (4,5,6) \rangle$.<|endoftext|>
-TITLE: Reductions for regular languages?
-QUESTION [7 upvotes]: To reason about whether a language is R, RE, or co-RE, we can use many-one reductions to show how the difficulty (R, RE, or co-RE-ness) of one language influences the difficulty of another. To reason about whether a language is in P, NP, or co-NP, we can use polynomial-time many-one reductions to show how the difficulty (P, NP, or co-NP-ness) of one language influences the difficulty of another.
-Is there a similar type of reduction we can use for regular languages? For example, is there some type of reduction $\le_R$ such that if $L_1 \le_R L_2$ and $L_2$ is regular, then $L_1$ is regular? Clearly we could arbitrarily define a very specific class of reductions such that this property holds, but is there a known type of reduction with this property?
-Thanks!
-
-REPLY [6 votes]: There is a very natural model of finite-state reduction, namely the most general finite-state transducer -- one input tape, one output tape, non-deterministic, transitions can be labelled with arbitrary regular sets (with empty strings) on both the input and output side. This can be shown equivalent to Henning's single-symbol operations, but allows for much more intuitive reductions, still within the finite-state realm. The ambiguity Henning speaks of is just the non-determinism.
-You can even allow such a transducer to have secondary storage (like a Turing Machine, pushdown automaton, etc) as long as there is a uniform constant bound on the size of the secondary storage.
-Taking that a step further, you can use transformations that do arbitrary computations, but again show that the size of memory needed over all inputs is uniformly bounded, that is, there's a $k$ not depending on the input that limits the size of all memory used. Thus you can use pseudo-code, Java or whatever formalism you like, including forking, that is, non-determinism -- as long as you have:
-
-one input and one output tape/stream
-both streams processed in a single pass
-total memory is uniformly bounded across all forks/threads
-
-In other words, you don't have to model finite-state transformations with transitions on a finite graph, which is a very brittle and finicky programming model. You can use any convenient programming formalism or model with any structuring of memory you like, as long as it satisfies those criteria.
-In fact, I propose that as a sort of finite-state equivalent of the Turing-Church thesis. Not quite as crisp as the Turing-Church Thesis in the world of recursive functions, but very useful.<|endoftext|>
-TITLE: Prove that the intersection of all subfields of the reals is the rationals
-QUESTION [12 upvotes]: I'm reading through Abstract Algebra by Hungerford and he makes the remark that the intersection of all subfields of the real numbers is the rational numbers.
-Despite considerable deliberation, I'm unsure of the steps to take to show that this intersection is $\mathbb Q$.
-Any insight?
-
-REPLY [19 votes]: First note that $\mathbb Q$ is itself a subfield of $\mathbb R$, so the intersection of all subfields must be a subset of the rationals.
-Second note that $\mathbb Q$ is a prime field, that is, it has no proper subfields. This is true because if $F\subseteq\mathbb Q$ is a field then $1\in F$, deduce that $\mathbb N\subseteq F$, from this deduce that $\mathbb Z\subseteq F$ and then the conclusion.
-Third, conclude the equality.
-
-REPLY [6 votes]: Any subfield of the reals must contain 0 and 1. Since the subfield is closed under addition and subtraction, it must contain all the integers. Since it's also closed under division (except division by zero), it must contain the rationals.<|endoftext|>
-TITLE: Does $abab=baba$ imply commutativity in a Group of uneven order?
-QUESTION [15 upvotes]: Suppose $(G,\cdot)$ is a finite group of uneven order such that $abab=baba$ for any $a,b\in G$. Does this mean that $G$ is commutative?
-
-REPLY [35 votes]: Yes. Let $|G|=2k-1$ be the order of the group and $a,b\in G$. Then: $$ab=ab(ab)^{2k-1}=(ab)^{2k}=(abab)^k=(baba)^k=(ba)^{2k}=ba(ba)^{2k-1}=ba.$$
-(Added: I should probably mention that here we use the following fact twice: if $G$ is a finite group of order $n$ and $a\in G$, then $a^n=e$, where $e$ is the identity element.)<|endoftext|>
-TITLE: Choosing an isomorphism $\tau:\overline{\mathbf{Q}}_p\simeq\mathbf{C}$; how do things depend on choice of $\tau$?
-QUESTION [5 upvotes]: I sometimes see arguments that begin by choosing an isomorphism of fields $\tau:\overline{\mathbf{Q}}_p\simeq\mathbf{C}$, and then defining some property in terms of this isomorphism. I'm not so familiar with the technical properties; must there exist continuous isomorphisms or isometries? Where can I read up on the basic properties of such isomorphisms or how this technique is deployed?
-Sample question: suppose I say that $P\in\mathbf{Q}_p[X]$ is 'pure of weight $i\in\mathbf{Z}$' if every root $\lambda\in\overline{\mathbf{Q}}_p$ of $P$ satisfies $|\tau(\lambda)|_{\mathbf{C}}=i$. Does this notion depend on $\tau$?
-
-REPLY [3 votes]: Your notion of purity does not depend on $\tau$ for polynomials with $\mathbb{Q}$ coefficients but does depend on $\tau$ for arbitrary polynomials with $\mathbb{Q}_p$ coefficients. Any two isomorphisms from $\overline{\mathbb{Q}_p}$ to $\mathbb{C}$ will differ by an automorphism of $\mathbb{C}$. An automorphism of $\mathbb{C}$ must permute the roots of any polynomial with $\mathbb{Q}$ coefficients, so if $P$ is pure with respect to one choice of $\tau$ it is pure with respect to an arbitrary $\tau$.
-To see that it doesn't work in general, let's work over $\mathbb{Q}_7$. This field has two square roots of $2$ (Hensel's lemma); we call them $\pm \alpha$. We'll reserve the symbol $\sqrt{2}$ to mean the usual positive element of $\mathbb{R}$. There's a $\tau$ that takes $\alpha$ to $\sqrt{2}$ and there's a $\tau$ that takes $\alpha$ to $-\sqrt{2}$ (simply compose $\tau$ with any extension to $\mathbb{C}$ of the nontrivial automorphism of $\mathbb{Q}(\sqrt{2})$). Now consider the polynomial
-$$
-(x-1)^2 - \alpha.
-$$
-If $\tau(\alpha) = \sqrt{2}$ then the roots of this polynomial are $1 + 2^{1/4}$ and $1 - 2^{1/4}$, which have different complex absolute values.
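-(Indeed $|1+2^{1/4}| = 1+2^{1/4} \approx 2.19$ while $|1-2^{1/4}| = 2^{1/4}-1 \approx 0.19$.)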
If $\tau(\alpha) = -\sqrt{2}$ the roots are $1 + 2^{1/4}i$ and $1 - 2^{1/4}i$, which have the same complex absolute value.<|endoftext|>
-TITLE: Constructing arithmetic progressions
-QUESTION [14 upvotes]: It is known that in the sequence of primes there exist arithmetic progressions of primes of arbitrary length. This was proved by Ben Green and Terence Tao in 2006. However the proof given is a nonconstructive one.
-I know the following theorem from Burton gives some criteria on how large the common difference must be.

-Let $n > 2$. If all the terms of the arithmetic progression
 $$
-p, p+d, \ldots, p+(n-1)d
-$$
 are prime numbers then the common difference $d$ is divisible by every prime $q<n$.<|endoftext|>
-TITLE: On Cesàro convergence: If $ x_n \to x $ then $ z_n = \frac{x_1 + \dots +x_n}{n} \to x $
-QUESTION [38 upvotes]: I have this problem I'm working on.
-Hints are much appreciated (I don't want complete proof):
-In a normed vector space, if $ x_n \longrightarrow x $ then $ z_n = \frac{x_1 + \dots +x_n}{n} \longrightarrow x $
-I've been trying to add and subtract inside the norm... but I don't seem to get anywhere.
-Thanks!

-REPLY [2 votes]: WLOG, the $x_n$ converge to $0$ (otherwise consider the differences $x_n-x$), and stay confined in an $\epsilon$-neighborhood of $0$ after $N_\epsilon$ terms.
-Then the average of the first $m$ terms is bounded by
-$$\frac{N\overline{x_N}+(m-N)\epsilon}m,$$ which converges to $\epsilon$. So you can make the average as close to $0$ as you like.<|endoftext|>
-TITLE: Definition of a contractible simplicial complex without appealing to topological realization
-QUESTION [5 upvotes]: Let $\Delta$ be a simplicial complex. I believe the standard definition of $\Delta$ being contractible is that the topological realization $|\Delta|$ is contractible. However, I am seeking a definition of a contractible simplicial complex that is in terms of the simplicial complex qua simplicial complex, possibly having something to do with its simplicial homology. In other words, something in terms of the faces or its simplicial homology or something that can be checked while staying in the category of simplicial complexes. Call such a definition $S$-contractible. I would then hope that a simplicial complex is $S$-contractible if and only if $|\Delta|$ is contractible. Is anyone aware of such a definition and if so, has it been shown to agree with the standard definition of contractible? Many thanks in advance.

-REPLY [4 votes]: You want the simplicial approximation theorem. One part, or perhaps I should say version, says that any continuous map between the geometric realizations of two simplicial complexes is homotopic to a simplicial map (possibly after subdivision). Another part/version says that there is a simplicial definition of what it means for two simplicial maps to be homotopic and that it agrees with the usual definition (possibly after subdivision).
-Given that, you can write down a simplicial definition of what it means for two simplicial complexes to be homotopy equivalent, and you can write down a simplicial definition of what it means for there to exist a map from a point to your simplicial complex which is a homotopy equivalence. That's contractibility by simplicial approximation. But maybe you want something that doesn't require subdivision?<|endoftext|>
-TITLE: Math for computer science?
-QUESTION [13 upvotes]: Math for computer science?
-I'm a computer science major and just completed linear algebra. Many courses are available to take now.
Of particular interest: number theory and abstract algebra (Modern Algebra). Which one do you recommend? Is there any overlap between the two courses? Which one is more applicable to computer science?
-I would like to learn more,
-Thanks.

-REPLY [3 votes]: As usual the answer is, it depends. If you go into a field like algorithms and end up wishing to do algorithmic number theory then surely, as the name implies, you need to know number theory. If you go into cryptography, you'll also need number theory, as RSA and elliptic curve cryptography use a lot of it.
-However if you want to go into something like signal processing or coding theory and information theory, you'll be better off with abstract algebra. A lot of graph theory also has some algebraic components.
-On a final note, most number theory beyond the elementary level requires an understanding of groups, rings, and fields. Even one's understanding of elementary number theory can be enriched with a knowledge of group theory.
-For a similar discussion see this, this, and this.<|endoftext|>
-TITLE: Penguin Brainteaser : 321-avoiding permutations
-QUESTION [15 upvotes]: There are $k$ penguins, $k\ge 3$. They are all different heights. How many ways are there to order the penguins in a line, left to right, so that we cannot find any three that are arranged tallest to shortest (in left to right order)? The penguin triples do not have to be adjacent.
-This problem is courtesy of someone I know only as "DaMancha."

-REPLY [9 votes]: For my own benefit last night I worked through a complete argument demonstrating a bijection between the $321$-avoiding permutations of $[n]$ and the Dyck paths of length $2n$. I see that Théophile has posted a delightful sketch of another argument, but I'm going to post this anyway, if only to be able to have easy access to it later. I believe that this bijection is essentially due to Krattenthaler.
-Let $\pi=\pi_1\pi_2\dots\pi_n$ be a permutation of $[n]$. A number $\pi_k$ is a left-to-right maximum of $\pi$ at position $k$ if $\pi_k>\pi_i$ for $1\le i<k$.<|endoftext|>
-TITLE: $\bar{\partial}$-Poincaré lemma
-QUESTION [5 upvotes]: This is the $\bar{\partial}$-Poincaré lemma: Given a holomorphic function $f:U\subset \mathbb{C} \to \mathbb{C}$, locally on $U$ there is a holomorphic function $g$ such that: $$\frac{\partial g}{\partial \bar z}=f$$
-The author says that this is a local statement so we may assume $f$ with compact support and defined on the whole plane $\mathbb{C}$, my question is why she says that... thanks.
-*Added*
-$f,g$ are supposed to be $C^k$, not holomorphic; by definition $$\frac{\partial g}{\partial \bar z}=0$$ if $g$ were holomorphic...

-REPLY [2 votes]: I don't have the book, and thus I can't check the statement.
-However, I believe that the statement holds for smooth $f$.
-Basically we want to construct/find $g$ as the following integral:
-$$g(z) = \frac{1}{2 \pi i}\int_{w\in \mathbb{C}} \frac{f(w)}{z-w} d\overline{w}\wedge dw$$
-In order to do this, $f$ must be defined over the whole complex plane.<|endoftext|>
-TITLE: Special orthogonal matrix uniquely determined by $n-1 \times n-1$ entries?
-QUESTION [5 upvotes]: For example, consider the specific question: Given $a_{11},a_{12},a_{21},a_{22}$ does that uniquely determine
-$A=\begin{bmatrix} a_{11}&a_{12}&a_{13} \\ a_{21}&a_{22}&a_{23} \\ a_{31}&a_{32}&a_{33} \end{bmatrix}$
-where $A\in SO(3)$.
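For experimenting with this numerically, here is a rough sketch (Python with numpy; the helper `random_so3` is an ad hoc construction of mine, not from any particular library). It suggests the answer is no: negating the last row and last column of a special orthogonal $A$ yields a second matrix in $SO(3)$ with the same upper-left $2\times 2$ block.

```python
import numpy as np

def random_so3(rng):
    # Orthonormalize a random Gaussian matrix via QR, then fix the
    # determinant to +1 so the result lies in SO(3).
    q, r = np.linalg.qr(rng.standard_normal((3, 3)))
    q = q * np.sign(np.diag(r))   # remove the QR sign ambiguity (a.s. no zero diagonal)
    if np.linalg.det(q) < 0:
        q[:, 0] = -q[:, 0]        # flipping one column flips the sign of det
    return q

rng = np.random.default_rng(0)
A = random_so3(rng)

B = A.copy()
B[2, :] *= -1   # negate the last row ...
B[:, 2] *= -1   # ... and the last column; a_33 is negated twice, so it stays put

assert np.allclose(B @ B.T, np.eye(3))    # B is orthogonal
assert np.isclose(np.linalg.det(B), 1.0)  # det B = +1
assert np.allclose(A[:2, :2], B[:2, :2])  # same upper-left 2x2 block
assert not np.allclose(A, B)              # yet B differs from A (generically)
```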
-
-REPLY [2 votes]: Note that if $A \in SO_n(\mathbb R)$ with
-$$ A \; = \;
- \left( \begin{array}{rr}
- E & F \\
- G & H
-\end{array}
- \right) ,
-$$
-and submatrices $F,G$ rectangular and $E,H$ both perfectly square, then $$ \det E = \det H. $$
-I'm just saying. In your case, if you have the upper left square, it turns out that $$ a_{33} = \; a_{11} a_{22} \; - \; a_{12} a_{21}. $$ Put another way, if you are given the upper left block and complete it, another choice for $A \in SO_n(\mathbb R)$ is to negate column $n$ and row $n,$ which means that $a_{nn}$ is negated twice and actually stays put.
-Proof:
-$$ \left( \begin{array}{cc} E & F \\ 0 & I \end{array} \right)
-\left( \begin{array}{cc} E^t & G^t \\ F^t & H^t \end{array} \right) =
-\left( \begin{array}{cc} I & 0 \\ F^t & H^t \end{array} \right). $$
-Note that this is more general as both $E,H$ are allowed to have size larger than one, if $n > 3$ anyway.<|endoftext|>
-TITLE: Norm of integral operator in $L_2$
-QUESTION [11 upvotes]: What is the norm of the integral operator $A$ in $L_2(0,1)$?
-$Ax(t)=\int_0^tx(s)ds$

-REPLY [4 votes]: Here is a rather direct way to obtain the norm, without previous knowledge of what it should be. We take advantage of the C$^*$-identity $$\tag1\|V\|=\|V^*V\|^{1/2}.$$ Because $V^*V$ is compact and positive, its norm is equal to its greatest eigenvalue.
-We have
-$$\tag2
-V^*Vf(x)=\int_x^1\int_0^tf(s)\,ds\,dt.
-$$
-If we differentiate the equality $V^*Vf(x)=\lambda f(x)$ twice, we get the differential equation $-f(x)=\lambda f''(x)$. From $(2)$ we get the initial conditions $f(1)=0$, $f'(0)=0$. Since we know that $\lambda>0$ this gives, up to a multiple,
-$$
-f(x)=\cos\frac x{\sqrt\lambda},\qquad\text{subject to $f(1)=0$.}
-$$
-Thus
-$$
-\frac1{\sqrt\lambda}=\frac{(2k+1)\pi}2,\qquad k\in\mathbb N,
-$$
-so
-$$
-\sqrt{\lambda}=\frac2{(2k+1)\pi}.
-$$
-Using $k=0$ to get the largest $\lambda$, $$\|V\|=\frac2\pi.$$<|endoftext|>
-TITLE: Show that $a^{q-1} \equiv 1 \pmod{pq}$
-QUESTION [6 upvotes]: Assume that $p$ and $q$ are distinct odd primes such that $p-1\mid q-1$. If $\gcd(a,pq)=1$, show that: $$a^{q-1} \equiv 1 \pmod{pq}$$

-I have tried as follows:
-$$a^{q-1} \equiv 1 \pmod{q} \quad \text{and} \quad a^{p-1} \equiv 1 \pmod{p}$$
-$$\implies a^{(q-1)(p-1)} \equiv 1 \pmod{q} \quad \text{and} \quad a^{(q-1)(p-1)} \equiv 1 \pmod{p}$$
-$$\implies a^{(q-1)(p-1)} \equiv 1 \pmod{pq}$$
-But then I am stuck - please help.

-REPLY [2 votes]: You know that:
-$a^{p-1} \equiv 1$ mod $p$
-but $(p-1)|(q-1)$ so this means that
-$a^{q-1} \equiv 1$ mod $p$.
-Now you may use the Chinese remainder theorem with your first congruence to tell you that
-$a^{q-1} \equiv 1$ mod $pq$<|endoftext|>
-TITLE: What are the zero divisors of $C[0,1]$?
-QUESTION [26 upvotes]: Suppose you have a ring $(C[0,1],+,\cdot,0,1)$ of continuous real valued functions on $[0,1]$, with addition defined as $(f+g)(x)=f(x)+g(x)$ and multiplication defined as $(fg)(x)=f(x)g(x)$. I'm curious what the zero divisors are.
-My hunch is that the zero divisors are precisely the functions whose zero set contains an open interval. My thinking is that if $f$ is a function which is zero on an open interval $(a,b)$, then there exists some function which is nonzero on $(a,b)$, but zero everywhere on $[0,1]\setminus(a,b)$. Conversely, if $f$ is not zero on any open interval, then every zero is isolated in a sense.
But if $fg=0$ for some $g$, then $g$ is zero everywhere except possibly at these isolated points; continuity would then imply that it is also zero at the zeros of $f$, but then $g=0$, so $f$ is not a zero divisor.
-I have a hard time stating this formally though, since I'm only studying algebra, and not analysis. Is this intuition correct, and if so, how could it be rigorously expressed?

-REPLY [6 votes]: If $f$ and $g$ are not identically zero and $f \cdot g = 0$ then $g^{-1}(\mathbb{R} \setminus \{0\})$ is open and non-empty and $f$ vanishes on this open subset. You already did the implication in the other direction. So $f$ is a zero divisor if and only if it is not identically zero and vanishes on some non-empty open set.<|endoftext|>
-TITLE: Two $NP$-complete languages whose union is in $P$?
-QUESTION [9 upvotes]: I've been thinking about transformations on $NP$-complete problems that produce languages known to be in $P$. However, I can't seem to find an example of two $NP$-complete languages whose union is in $P$. I would imagine that such a pair exists (perhaps something like "every object has exactly one of two properties, but it's $NP$-complete to determine whether any given object has one of those properties"), though I don't know anything of this sort.
-Are such pairs of languages known to exist? Or is their existence an open problem?
-Thanks!

-REPLY [13 votes]: It's true that such pairs exist, but I'm afraid I can only think of trivial unsatisfying examples. Take any two NP-complete languages on disjoint domains and glue them together so that their union is essentially everything.
-For instance, take $L_1$ to be the set of all hamiltonian graphs, union the set of all boolean expressions. Then take $L_2$ to be the set of all graphs, together with satisfiable boolean expressions. Then both $L_1$ and $L_2$ are still NP-complete, but $L_1 \cup L_2$ consists of all graphs and all boolean expressions, which is a very boring language and clearly in P (I suppose one could even make it regular).

-REPLY [13 votes]: Take two $NP$-complete languages: $L$, whose alphabet is the lower-case letters $a-z$, and $U$, whose alphabet is the upper-case letters $A-Z$. Now add all upper-case strings to $L$, and all lower-case strings to $U$. The resulting languages are still $NP$-complete, but their union is the set of all strings with constant case, which is certainly in $P$.<|endoftext|>
-TITLE: Arcwise connected part of $\mathbb R^2$
-QUESTION [19 upvotes]: Here's a question that I share:
 Show that if $D$ is a countable subset of $\mathbb R^2$ (provided with its usual topology) then $X=\mathbb R^2 \backslash D $ is arcwise connected.

-REPLY [2 votes]: Ok I will take a stab at clarifying Brian M. Scott's answer instead of writing a fully different answer.
-Let $p,q\in \mathbb{R}^2-D$. Then define $A:=\{\text{ Lines Through }p\}$, $B:=\{\text{ Lines through }q\}$.
-One could write the definitions of these sets more formally if desired, but the meaning is clear. I claim the following:
-$$
-\exists l\in A, \text{ such that } l\cap D=\emptyset.
-$$
-Similarly
-$$
-\exists n\in B, \text{ such that } n\cap D=\emptyset.
-$$
-Let us understand why this should be clear. Now define a mapping $m:\mathbb{R} \to A$ by $m(x):=$ the line in $A$ with slope $x$. This mapping is clearly $1-1$ and onto (almost; see below*), as any line through a point $p$ is determined by its slope, and any slope uniquely determines a line through $p$.
You could formalize this more by writing any line through $p$ in point slope form, but I leave that to the reader.
-A similar argument follows for $B$, thus $||A||=||B||=||\mathbb{R}||$.
-Now let us show that there are lines in $A$ and $B$ which do not contain points in $D$. Define a map $\phi:D\to A\times B$, by $\phi(d):=\{(l,m)\in A\times B| d=l\cap m\}$. That is, each point $d\in D$ gets mapped to the pair of lines in $A$ and $B$ which meet at $d$. This mapping is $1-1$, as any two points determine a unique line.
-By assumption $D$ is countable, therefore $\phi$ cannot be onto; thus there exists a pair of lines, $(l^*,m^*)$, which do not contain any points in $D$. This follows as $A\times B- \phi(D)\neq \emptyset$.
-We have a piecewise linear path $p \to l^*\cap m^* \to q$. QED.
-The crux of the argument is where we assert that the mapping $\phi$ cannot be onto, given the assumption that $D$ is countable.
-*One small note: the map $m$ is not quite onto, as the vertical line through $p$ has undefined slope. However, this doesn't disturb the argument, as the main idea is that $A$ and $B$ are both uncountable. We might also be able to circumvent this by taking a mapping from the extended reals.<|endoftext|>
-TITLE: Constructing a holomorphic function with some specific points zero/nonzero
-QUESTION [5 upvotes]: Given $n \in \mathbb{Z}$, is it possible to construct a holomorphic function
-$f : \mathbb{C} \rightarrow \mathbb{C}$ such that $f(n) \neq 0$, but
-for any integer $m \neq n$ we have $f(m)=0$?
-This is actually a homework problem in algebra which I reduced to this statement (in case it is correct).

-REPLY [5 votes]: Here's a hint: $\dfrac{\sin \pi z}{\pi z}$.<|endoftext|>
-TITLE: If an element has a unique right inverse, is it invertible?
-QUESTION [15 upvotes]: Suppose $u$ is an element of a ring with a right inverse. I'm trying to understand why the following are equivalent.

-$u$ has at least two right inverses
-$u$ is a left zero divisor
-$u$ is not a unit

-If $v$ and $w$ are distinct right inverses of $u$, then $u(v-w)=0$, but $v-w\neq 0$, so $u$ is a left zero divisor. It's also clear that if $u$ is a left zero divisor, it cannot be a unit (else I could cancel $u$ from $ub=0$ to see $b=0$).
-I'm having a heck of a time seeing why $u$ is not a unit implies $u$ has at least two right inverses. I tried the contrapositive, but saw no good approach. What am I missing?

-REPLY [21 votes]: If $u$ has only one right inverse $v$, then $u(1-vu)=u-(uv)u=0$ hence $u(1-vu+v)=1$ and by uniqueness $1-vu+v=v$, so $1=vu$ and $v$ is a left inverse. Hence $u$ is a unit.<|endoftext|>
-TITLE: Understanding adjoint functors
-QUESTION [12 upvotes]: To understand adjoint functors I tried to look at an example. Can you tell me if the following is correct?
-Before I give the example I'd like to recap the definition: Given two categories $C,D$ and two functors $F: C \to D$ and $G: D \to C$ we say that $F$ and $G$ are adjoint if we can give a natural isomorphism $\eta$ such that for every pair of objects $A \in \text{Obj}(C)$, $B \in \text{Obj}(D)$ and morphisms $f: A \to A^\prime$ in $C$ and $g: B \to B^\prime$ in $D$ the following diagram commutes:
-$$
-\begin{matrix}
-\operatorname{Hom}(FA, B) & \xrightarrow{\eta_{AB}} & \operatorname{Hom}(A, GB) \\
-\left\downarrow{\scriptstyle{\operatorname{Hom}(F(f), g)}}\vphantom{\int}\right.
& & \left\downarrow{\scriptstyle{\operatorname{Hom}(f, G(g))}}\vphantom{\int}\right.\\ -\operatorname{Hom}(FA^\prime, B^\prime)& \xrightarrow{\eta_{A^\prime B^\prime}} & \operatorname{Hom}(A^\prime, GB^\prime) -\end{matrix} -$$ - -I'm not sure whether $F$ is left adjoint to $G$ or the other way around. Which one is the left adjoint here? -And: is there a better way to display this diagram? - - -Now the example: We claim that $F = - \otimes_R M$ is the (left?) adjoint of $G = \operatorname{Hom}_R(M, -)$ where $M$ is an $R$-module. To see this we give a natural isomorphism $\eta_{A,B}$ (where $A,B$ are $R$-modules and $C = D = R-\textbf{Mod}$) such that the following diagram commutes: -$$\begin{matrix}\textrm{Hom}(A \otimes M, B)&\xrightarrow{\eta_{AB}}&\operatorname{ Hom}(A, \operatorname{Hom}(M,B))\\ -\left\downarrow{\scriptstyle{\textrm{Hom}(f \otimes id_M, g)}}\vphantom{\int}\right.&&\left\downarrow{\scriptstyle{\textrm{Hom}(f, G(g))}}\vphantom{\int}\right.\\ -Hom(A' \otimes M, B')&\xrightarrow{\scriptstyle{\eta_{A'B'}}}&\textrm{ Hom}(A^\prime, \operatorname{Hom}(M,B'))\end{matrix}$$ -We define $\eta_{AB}$ to be the map $$\eta_{AB}: (f: a \otimes m \mapsto b) \mapsto (g: a \mapsto f(a \otimes -))$$ -Then the diagram above commutes. Is this correct? -And is the downarrow map really $\operatorname{Hom}(f \otimes id_M, g)$? I didn't know what else to put there. And did I get the left/right adjointness the correct way around? - -REPLY [7 votes]: I will tell you how I remember if something is a left or right adjoint. Hopefully it's useful for you. -Let $\mathcal{C},\mathcal{D}$ be categories, and let $F:\mathcal C \to \mathcal D$, $G:\mathcal D \to \mathcal{C}$ be functors. -By definition $F$ is left-adjoint to $G$ if there are natural isomorphisms $$\overline{(\ )}:\mathcal{D}(FA, -) \to \mathcal{C}(A,G-)$$ $$ \overline{(\ )}:\mathcal{C}^{\mathrm op}(GB,-) \to \mathcal{D}^{\mathrm op}(B,F-) $$ -for all objects $A \in \text{ob}\mathcal C$ and $B \in \text{ob}\mathcal D$, such that they are mutual inverses when you plug $B$ in the top one and $A$ in the bottom. -The way to remember that $F$ is a left adjoint is that in the first nice covariant natural transformation, $F$ is on the left. -So your diagram is simply the naturality square for the first transformation: hence $F$ is the left adjoint in that case. -EDIT OVER A YEAR LATER: An easier way to say the above is $F$ is left-adjoint to $G$ if there is a natural isomorphism -$$ -\mathcal D( F-_1, -_2) \cong \mathcal C(-_1, G-_2) -$$ -of functors $\mathcal C^{\text{op}} \times \mathcal D \longrightarrow \mathsf{Set}$.<|endoftext|> -TITLE: Galois group of $x^6 + 3$ isomorphic to a copy of $S_3$ inside $S_6$ -QUESTION [7 upvotes]: I have seen the the thread here related to the computation of the Galois group of the same polynomial. However, my question is not about the computation itself but about the group presentation of the Galois group. I will explain. -I have determined that the polynomial $x^6 +3 \in \Bbb{Q}[x]$ has Galois group of order 6. The splitting field is $\Bbb{Q}(a)$, where $a$ is a root of $x^6 + 3$. One can take $a = \sqrt[6]{3}\zeta$ where $\zeta = e^{2\pi i/6} = e^{\pi i/6}$. -Now I have determined the rest of the roots to be: -$$\begin{array}{ccccc} \alpha_1 &=& a &&& \alpha_4 = -\alpha_1 \\ \alpha_2 &=& \frac{a^4 + a}{2}= \zeta a &&& \alpha_5 = -\alpha_2 \\ \alpha_3 &=& \frac{a^4 - a}{2} = \zeta^2 a &&& \alpha_6 = - \alpha_3 \end{array}. 
$$
-I have also computed some automorphisms of the Galois group, for example the automorphism $\tau : \alpha_1 \mapsto \alpha_4$ that has order 2, $\sigma : \alpha_1 \mapsto \alpha_2$ that has order 2 and $\rho : \alpha_1 \mapsto \alpha_3$ that has order 3. The presence of two automorphisms of order 2 tells me that the Galois group is isomorphic to $S_3$.
-However the problem now is if I want to identify my $\tau,\sigma$ and $\rho$ as cycles in $S_6$, I get the cycles $(14)$, $(12)$ and $(132)$. I don't think these cycles lie in the copy of $S_3$ inside of $S_6$; what am I misunderstanding here?
-Thanks.

-Edit: I made a mistake in the calculations. We actually need $\alpha_1 = a = \sqrt[6]{3}\zeta$ where $\zeta = e^{\pi i/6}$. I did not take a primitive 6-th root of unity earlier. Now if I write $a = \sqrt[6]{3}e^{\pi i/6}$, then $a^3 = \sqrt{3}i$ and so $\frac{1 + a^3}{2} = \frac{1 + \sqrt{3}i}{2} = \zeta^2$. So indeed with the redefined $\zeta$ and $a$, the equations now are
-$$\begin{array}{ccccc} \alpha_1 &=& a &&& \alpha_4 = -\alpha_1 \\ \alpha_2 &=& \frac{a^4 + a}{2}= \zeta^2 a &&& \alpha_5 = -\alpha_2 \\ \alpha_3 &=& \frac{a^4 - a}{2} = \zeta^4 a &&& \alpha_6 = - \alpha_3. \end{array} $$
-After all this mess, I have got the automorphisms $\sigma = (12)(45)(36)$ and $\gamma = (135)(246)$. We check that $\sigma\gamma = \gamma^2\sigma$. $\sigma\gamma = (12)(36)(45)(135)(246) = (16)(25)(34)$.
-$\gamma^2\sigma = (153)(264)(12)(36)(45) = (16)(25)(34)$ so indeed $\sigma\gamma = \gamma^2\sigma$. Hence the Galois group has elements
-$$\{1,\gamma,\gamma^2,\sigma,\sigma\gamma, \sigma\gamma^2\} = \{1, (135)(246),(153)(264),(12)(45)(36),(14)(23)(56),(16)(34)(25)\}.$$

-REPLY [4 votes]: By Galois theory we know that there exists an automorphism $\sigma$ with the property
-$$\sigma:\alpha_1\mapsto\alpha_2=\frac{a^4+a}2.$$
-From this we can deduce that
-$$
-\sigma(a^3)=\left(\frac{a+a^4}2\right)^3=\frac{a^{12}+3a^9+3a^6+a^3}8.
-$$
-Here in the numerator $a^{12}=-3a^6$ and $a^9=-3a^3$, so this simplifies to $-a^3$ and hence
-$$
-\sigma(\zeta_6)=\sigma\left(\frac{1+a^3}2\right)=\frac{1-a^3}2=\zeta_6^{-1}.
-$$
-Therefore $\sigma(\alpha_2)=\alpha_1$, and by computing the images of the other roots we see that $\sigma$ corresponds to the permutation $(12)(36)(45)$.
-Complex conjugation $\rho$ will also be an automorphism of your field, and by plotting the roots on the complex plane we see that $\rho=(16)(25)(34)$.
-As products of these we get the other non-trivial automorphisms as permutations of roots:
-$$
-\rho\sigma=(153)(264),\quad \sigma\rho=(135)(246),
-\quad\sigma\rho\sigma=\rho\sigma\rho=(14)(23)(56).
-$$
-The generators $\sigma$ and $\rho$ satisfy the relations $\sigma^2=\rho^2=(\sigma\rho)^3=1$, so they generate a copy of $S_3$.<|endoftext|>
-TITLE: abel summable implies convergence
-QUESTION [8 upvotes]: Prove that:
-If $\sum c_n$ is Abel summable to $s$ and $c_n=O(\frac{1}{n})$, then $\sum c_n $ converges to $s$.
-"A series of complex numbers $\sum_{n=0}^{\infty} c_n $ is said to be Abel summable to $s$ if for every $0 \le r <1$, the series $A(r)=\sum_{k=0}^{\infty} c_kr^k$ converges, and $\lim_{r\rightarrow 1^-} A(r)=s$."

-REPLY [9 votes]: The following quote is from the book Jacob Korevaar: Tauberian Theory from Section 7.1: Hardy-Littlewood Tauberians for Abel Summability.
-Littlewood [1911] answered Hardy's question whether the condition '$na_n \to 0$' in
-Tauber's theorem could be relaxed to boundedness of the sequence $\{na_n\}$.

-Theorem 7.1.
(Littlewood) - $\sum a_n$ is Abel summable and $|na_n| < C$ $\Rightarrow$ $\sum a_n$ converges. - -This 'big O-theorem' for Abel summability is more difficult than the earlier -results. The theorems in this section have attracted much interest and invited many -alternative proofs, frequently with Theorem 7.3 as the first step towards Theorem -7.1. Littlewood's original proof was rather complicated; his key tool was repeated -differentiation; cf. Section 17. -For comments on Littlewood's fundamental article of -1911, see his Collected Papers [1982]; for the history of his discovery, see Littlewood -[1953], A first simple proof for the Theorem was found by Karamata [1930a]; see -Section 11 for his method. A related more direct proof by Wielandt [1952] will be -described in Section 12. - -Theorem 7.3. (Hardy and Littlewood) One has the following implication: - $\sum a_n$ is Abel summable and $s_n >-C$ $\Rightarrow$ $\sum a_n$ is Cesaro summable. - - -The following quote is from the book Boos: Classical and modern methods in summability -H. Tietz and K. Zeller drew my attention to a recent paper (cf. [240]) in -which they give a modification of Wielandt's well-known elegant proof of -the Hardy-Littlewood o-Tauberian theorems for the Abel method. This -is an elementary proof and I decided to use the material of this paper for -the important part of Section 4.4.... -The shortest proofs are due to Wielandt [246] and Karamata [124] and, by modifying Wielandt's proof, to Tietz and Zeller [240]. We follow the lines of Tietz and Zeller's -proof which avoids integrals and is based on the Weierstrass approximation -theorem. - -References mentioned above: - -J. Karamata. Uber die Hardy-Littlewoodsche Umkehrung des Abelschen Stetigkeitssatzes. Math. Z. 32, 319-320 (1930). DOI: 10.1007/BF01194636, GDZ -J. E. Littlewood. The converse of Abel's theorem on power series. Proc. London Math. Soc. (2) 9, 434-448 (1911). doi: 10.1112/plms/s2-9.1.434 -H. Tietz and K. Zeller. A unified approach to some Tauberian theorems of Hardy and Littlewood. Acta Comment. Uni. Tartu. Math. 2, 15-18 (1998). -H. Wielandt. Zur Umkehrung des Abelschen Stetigkeitssatzes. Math. Z. 56, 206-207 (1952). DOI: 10.1007/BF01175034, GDZ.<|endoftext|> -TITLE: Find a maximum of complex function -QUESTION [7 upvotes]: I am trying to find a simple method that does not use the tools of advanced differential calculus to find following maximum, whose existence is justified by the compactness of the close ball $\Delta$ of $\mathbb C$ and continuity of the function $f:z \mapsto |z^3 + 2iz|$ from $\mathbb C$ to $\mathbb C$ -$$ \large { \displaystyle \max_{z \in {\mathbb C},|z| \leq 1} |z^3 +2i z |} $$ -Since : $$(\forall z \in \Delta) \quad f(z) \leq 3 $$ -is obtained using triangular inequality, we can yet try to find some $z_0 \in {\mathbb C}$ such that $f(z_0)=3$ -Does anybody have an idea? -Thanks. - -REPLY [5 votes]: If you've begun a study of complex functions, you may have seen the Maximum Modulus Principle. Since $z^3 + 2iz$ is a polynomial and entire (analytic in the complex plane), the maximum of $|z^3 + 2iz|$ you seek must occur on the boundary of the unit disk. Gerry's Hint then quickly points you in the right direction! - -REPLY [3 votes]: Hint: $z^3 + 2iz$ is differentiable on $\mathbb{C}$ (i.e. 
holomorphic) so you can apply the maximum modulus principle, and deduce that the maximum of $f$ lies on the boundary of $\Delta$, which has a simple parametrisation, so you can use standard techniques from real one-variable calculus to find the maximum.
-(A slightly different approach to Gerry Myerson's, much more complicated in this case but also far more general.)<|endoftext|>
-TITLE: In mean value theorem, does the mean value vary continuously?
-QUESTION [7 upvotes]: Let $f\colon\mathbb R\to\mathbb R$ be continuously differentiable and let's say, for simplicity, that $f(0)=0$. Then by the mean value theorem
-$$f(x)=f'(\xi)\cdot x \,\text{ for some } \xi \in (0, x)$$
-What I wondered is: What can we tell about the $\xi$ as we change $x$? My intuition says we should at least be able to find some $\xi\equiv \xi(x)$ that varies continuously with respect to $x$.
-Or isn't this necessarily the case?
-Thanks for any ideas.

-REPLY [8 votes]: No. Here is a counterexample: Choose $f$ such that
-$$f'(x)=\begin{cases} 2x-2 & x \le 1 \\ 0 & 1 \le x \le 2 \\ 2x-4 & x\ge 2\end{cases}$$
-so
-$$f(x)=\begin{cases} x^2-2x & x\le 1 \\ -1 & 1\le x \le 2 \\ x^2-4x+3 & x\ge 2\end{cases}$$
-Then $f(3)=0$; for $0<x<3$ we have $f(x)<0$, so $\xi<1$ for these $x$, while for $x>3$ we have $f(x)>0$, so $\xi\ge 2$ for these $x$.<|endoftext|>
-TITLE: Every $k$ vertices in a $k$-connected graph are contained in a cycle.
-QUESTION [5 upvotes]: Let $G$ be a $k$-connected graph. Meaning, $G$ has no fewer than $k$ vertices, and for every set of $k-1$ or fewer vertices, if we remove them from $G$, the graph stays connected (Of course, $G$ itself is also connected).
-I want to prove that for any $k>1$, if $G$ is $k$-connected, then every set of $k$ vertices is contained in a cycle.
-I have tried some ways - mainly using induction by removing one of the vertices of the set from the graph, and/or using Menger's theorem to construct the cycle. But I always encounter problems with making sure that the cycle I'm building doesn't have repeating edges etc.
-Help would be greatly appreciated :)
-Thanks!

-REPLY [2 votes]: It should be an induction statement; the induction is on $k$.
-The base of the induction is $k=2$: then we know every $u,v \in V$ have at least two vertex-disjoint paths (this is by Menger's theorem) $P_1=(v,...,u)$ and $P_2=(v,...,u)$; we notice $P_1P_2$ is a cycle with both $u$ and $v$ on it, which means every group $|S|=k=2$ is contained in a cycle.
-The induction hypothesis is that the statement holds for $(k-1)$-connectedness, which means: take any group of vertices $|S|=k-1$ and you'll have them sitting on a cycle $C$.
-So now it remains to take an arbitrary $v \in V$ in a $k$-connected graph and remove it; this gives us a $(k-1)$-connected graph, so a group $|S|=k-1$ sits on a cycle, and it only remains to attach $v$, as suggested above, using Menger's theorem to create a cycle.<|endoftext|>
-TITLE: What's the name of this operator?
-QUESTION [6 upvotes]: Let $f,g$ be functions in $C^A$ and $C^B$ respectively.
-Let $\boxtimes:C^A \times C^B \to (C\times C)^{A \times B}$ s.t.
-$f\boxtimes g(a,b)=(f(a),g(b))$
-It seems to be neither the tensor product nor the Cartesian product. Then can we call it the direct product? But it seems the term 'direct product' is often used for operations between structures.

-REPLY [7 votes]: From the viewpoint of category theory, your map is just $f\times g$ -- it is the image of $f$ and $g$ under the product bifunctor $(-)\times(-)$.
-More verbosely, if you compose $f$ and $g$ with the projection maps from $A\times B$, then you get maps $f\circ \pi_1: A\times B \to C$ and $g\circ \pi_2: A\times B \to C$, which factor through $C\times C$ by the universal property of the latter. The mediating morphism is exactly $f\times g$.
-If you consider $\mathbf{Set}$ to be a monoidal category by declaring $\times$ to be $\otimes$, then $f\times g$ is indeed the tensor product $f\otimes g$.

-On the other hand, in ordinary set theory, we usually identify a function with its graph: $$f=\{\langle x,f(x)\rangle\mid f(x)\text{ is defined}\},$$ and in that sense your $f\boxtimes g$ is of course not the cartesian product of $f$ and $g$. It is closely related though:
-$$ f\boxtimes g = \{ \langle\langle a,b\rangle,\langle c,d\rangle\rangle \mid \langle\langle a,c\rangle,\langle b,d\rangle\rangle \in f\times g\}$$
-which could be seen as a more vivid justification for Hurkyl's "transpose" terminology.

-REPLY [2 votes]: Your map is some transpose of the Cartesian product.
-Given a function $f : A \to B$ and another function $g:C \to D$, their Cartesian product is
-$$ f \times g : A \times C \to B \times D : (a,c) \mapsto (f(a), g(c)) $$
-The notion of "transpose" enters the picture in the sense that you are switching from thinking about a function $A \to C$ to thinking of an element of $C^A$.<|endoftext|>
-TITLE: Is $C[0,1]$ normal with the topology of pointwise convergence?
-QUESTION [13 upvotes]: I was asked a question by someone the other day regarding the topology of pointwise convergence, and I can't seem to get anywhere with it. I was wondering if anyone could be of any assistance...
-The question was: is $C[0,1]$, the set of continuous real-valued functions from $[0,1]$ to $\mathbb{R}$, with the topology of pointwise convergence (defined by the sub-basis $A_{a,x,b} :=\{f\in C[0,1] : a< f(x) < b\}$ for $x \in [0,1]$ and $a<b$) a normal space?<|endoftext|>
-TITLE: a non separable metric space
-QUESTION [7 upvotes]: Let $X$ be a metric space with the discrete metric whose points are the positive integers. We have to show $C(X,\mathbb{R})$ is non separable. What I have to do is to show $C(X,\mathbb{R})$ has no countable dense subset. I have no idea how to show that it has no countable dense subset; so far I guess that to show a given countable set is not dense we need to find a function $f\in C(X,\mathbb{R})$ which has some constant distance to the elements of that set. Please, will anyone help me to solve the problem?

-REPLY [8 votes]: Hint:

-Prove the following lemma. If $\{x_i:i\in I\}$
-is an uncountable family in a metric space $(M,d)$ such that
-$$
-\exists \delta>0\quad\forall i\in I\quad\forall j\in I\quad (i\neq j\Longrightarrow d(x_i,x_j)>\delta)
-$$
-then $(M,d)$ is not separable.
-Take a look at binary sequences.

-REPLY [2 votes]: Hint: any function from a discrete space to any space is continuous, so sharpening Ragib's comment we can say that $\,\mathcal{C}(X,\mathbb{R}) \,$ contains all the functions $\,X\to\mathbb{R}\,$ , i.e. all the sequences of real numbers (indexed by the naturals, of course).
-Now, since $\,X\,$ is not compact I am not sure what topology you are taking for $\,\mathcal{C}(X,\mathbb{R})\,$... The supremum wrt the usual metric in the reals?

-REPLY [2 votes]: Assuming something like the sup-norm, we can prove the result with a diagonal argument.
-Suppose we have any countable collection of sequences in $\mathbb{R}.$ We can list this collection into a sequence, say $f_1, f_2, \cdots$ where each $f_n$ is a sequence of reals (denote the $i$-th term of $f_n$ by $f_n^{(i)}$.)
-Define a sequence of real numbers by $g_n = f_n^{(n)}+1.$ Then $g$ has distance at least $1$ from any $f_n$ so $\{ f_n \}$ is not dense in the sequences of real numbers.<|endoftext|>
-TITLE: Tangent sheaf of a (specific) nodal curve
-QUESTION [7 upvotes]: Given a nodal (= reduced, connected, projective, having only ordinary double points as singularities) curve $C$ consisting of 5 $\mathbb P^1$ labeled $C_0, D_1, D_2, D_3, D_4$ such that $D_i$ intersects only $C_0$, transversally in exactly one point $P_i$, and $P_i \neq P_j$ for $i \neq j$. The dual graph of $C$ is a cross with $C_0$ in the middle and $D_1, \ldots, D_4$ as leaves.

-I want to compute the dimension dim $H^0(C, T_C)$ of the global sections of the tangent sheaf $T_C$ of $C$.

-REPLY [4 votes]: I've found the solution in Hartshorne's book Deformation Theory on p. 183. I formulate the solution for an arbitrary nodal curve $C$ consisting of irreducible components $C_i \cong \mathbb P^1$.
-Let $S$ be the set of singular points in $C$. Locally around each node in $S$ the curve $C$ looks like $(xy=0) \subset \mathbb A^2$. Hence, locally, $T_C = xT_D \oplus yT_{D'}$, where $D, D'$ are the components through the chosen node. It follows that, globally, we have
-$$ T_C \cong \bigoplus_{C_i \subset C} (\mathcal I_{S\cap C_i} \otimes T_{C_i}), $$
-where $\mathcal I_{S\cap C_i}$ is the sheaf of ideals in $\mathcal O_{C_i}$ which defines the closed subscheme $S \cap C_i$ of $C_i$ consisting of the nodes on $C_i$. Now $h^0(C_i, \mathcal I_{S\cap C_i} \otimes T_{C_i}) = \max(3 - \#(S\cap C_i), 0)$, which follows from $C_i \cong \mathbb P^1$.
-So the answer in the case of the above curve is $h^0 = 8$.<|endoftext|>
-TITLE: Determinant of the character table of a finite group $G$
-QUESTION [7 upvotes]: This is an exercise from the book "Groups and Representations" by Alperin & Bell.
-This quantity is well defined up to a sign. By the column orthogonality relations, its squared norm is
-$\displaystyle\prod_{\substack{g}} |C_G(g)|$, where $g$ runs over the representatives of the conjugacy classes. If the group is cyclic, the determinant is just a Vandermonde determinant.
-I wonder if there is a nice explanation for an arbitrary group.

-REPLY [6 votes]: Let $A$ be the character table as a square matrix. In other words $A_{ij}=\chi_i(g_j)$, where the $\chi_i$ are the distinct irreducible characters, and the $g_j$ are representatives of conjugacy classes. Let $A^H$ be the conjugate transpose. Let us compute the matrix product
-$B:=A^HA$. At position $(i,j)$ we get
-$$
-B_{ij}=\sum_k A^H_{ik}A_{kj}=\sum_k \overline{A_{ki}}A_{kj}=\sum_k\overline{\chi_k(g_i)}\chi_k(g_j)=|C_G(g_j)|\delta_{ij}
-$$
-by the second orthogonality relation. Therefore $B$ is diagonal, and
-$$
-\det B=\prod_j|C_G(g_j)|.
-$$
-So by the multiplicativity of the determinant we also have
-$$
-\det(A^H)\det(A)=\det B=\prod_j|C_G(g_j)|.
-$$
-Let us study the relation between $A^H$ and the transpose $A^T$.
-Let us define a permutation $s$ of the conjugacy classes by the mapping $[g]\mapsto [g^{-1}]$.
-It is obviously a product of disjoint 2-cycles, and its fixed points are exactly the conjugacy classes stable under taking the inverse element. Let us denote by $\ell$
-the number of orbits of size two.
-If we denote by $\tilde{A}$ the matrix that we get from $A$ by permuting the columns according to $s$, then the general fact $\chi(g^{-1})=\overline{\chi(g)}$ allows us to identify $A^H$ as $\tilde{A}^T$. Clearly -$$ -\det\tilde{A}=(-1)^{\ell}\det A,$$ -so we get the equation -$$ -(-1)^{\ell}(\det A)^2=\det B=\prod_j|C_G(g_j)|. -$$ -The sign of $\det A$ will always remain ambiguous, because we have no natural ordering neither for the conjugacy classes nor for the characters, so all we can say is that -$$ -\det A=\pm i^{\ell}\sqrt{\prod_j|C_G(g_j)|}. -$$<|endoftext|> -TITLE: How do I show that this function is always $> 0$ -QUESTION [6 upvotes]: Show that $$f(x) = 1 + x + \frac{x^2}{2!} + \frac{x^3}{3!} + - \frac{x^4}{4!} > 0 ~~~ \forall_x \in \mathbb{R}$$ - -I can show that the first 3 terms are $> 0$ for all $x$: -$(x+1)^2 + 1 > 0$ -But, I'm having trouble with the last two terms. I tried to show that the following was true: -$\frac{x^3}{3!} \leq \frac{x^4}{4!}$ -$4x^3 \leq x^4$ -$4 \leq x$ -which is not true for all $x$. -I tried taking the derivative and all that I could ascertain was that the the function became more and more increasing as $x \rightarrow \infty$ and became more and more decreasing as $x \rightarrow -\infty$, but I couldn't seem to prove that there were no roots to go with this property. - -REPLY [12 votes]: Hint: -$$f(x) = \frac{1}{4} + \frac{(x + 3/2)^2}{3} +\frac{x^2(x+2)^2}{24}$$ - -REPLY [3 votes]: $f$ is a polynomial, and therefore, is differentiable at all points. Furthermore, as $x\to\infty$ or $x\to-\infty$, $f(x)\to+\infty$. Thus, if $f(x)\le0$ for some $x$, then $f(x)\le0$ for some relative minimum. -$$f'(x)=1+x+\frac{x^2}{2!}+\frac{x^3}{3!}$$ -$f'(x)=0$ for all relative minima. However, if $f'(x)=0$, then $$f(x)=f'(x)+\frac{x^4}{4!}$$ -Thus, $f(x)=\frac{x^4}{4!}>0$ for all relative minima $x\not=0$. $x=0$ is not a relative minimum, because $f'(0)\not=0$, so this equation holds for all relative minima of $f$. This contradicts our assumption, so $f(x)>0$ for all $x\in \mathbb R$.<|endoftext|> -TITLE: Show that $\frac{(3^{77}-1)}{2}$ is odd and composite -QUESTION [13 upvotes]: The question given to me is: - -Show that $\large\frac{(3^{77}-1)}{2}$ is odd and composite. - -We can show that $\forall n\in\mathbb{N}$: -$$3^{n}\equiv\left\{ - \begin{array}{l l} - 1 & \quad \text{if $n\equiv0\pmod{2}$ }\\ - 3 & \quad \text{if $n\equiv1\pmod{2}$}\\ - \end{array} \right\} \pmod{4}$$ -Therefore, we can show that $3^{77}\equiv3\pmod{4}$. Thus, we can determine that $(3^{77}-1)\equiv2\pmod{4}$. Thus, we can show that $\frac{(3^{77}-1)}{2}$ is odd as: -$$\frac{(3^{77}-1)}{2}\equiv\pm1\pmod{4}$$ -However, I am unsure how to show that this number is composite. The book I am reading simply states two of the factors, $\frac{(3^{11}-1)}{2}$ and $\frac{(3^{7}-1)}{2}$, but I do not know how the authors discovered these factors. -I'd appreciate any help pointing me in the right direction, thanks. - -REPLY [26 votes]: Another way to see this is by writing the number in base 3: -$$3^{77}=1\underbrace{00\dots00}_{77}\ _3$$ -Here the index $3$ denotes base 3, and $77$ is the number of digits. 
Subtracting one, we get:
-$$3^{77}-1=\underbrace{22\dots22}_{77}\ _3$$
-Therefore, dividing this by two,
-$$\frac{3^{77}-1}{2}=\underbrace{11\dots11}_{77}\ _3$$
-From this we can directly read that the number is odd, since it is the sum of 77 odd numbers, and composite, since $$\underbrace{11\dots11}_{77}\ _3=1111111_3\cdot\underbrace{10000001000000\dots100000010000001}_{71}\ _3$$
-(Although, this is basically the same as some of the other answers.)<|endoftext|>
-TITLE: Big Oh notation Question in calculus
-QUESTION [8 upvotes]: In my text book, they state the following:
-$$\begin{align*}f(x) &= (\frac{1}{x} + \frac{1}{2}) (x-\frac{1}{2}x^2+\frac{1}{3}x^3+O(x^4))-1& ,x \rightarrow 0\\&= 1-\frac{1}{2}x+\frac{1}{3}x^2+\frac{1}{2}x-\frac{1}{4}x^2+O(x^3)-1& ,x \rightarrow 0 \end{align*}$$
-However, when I calculate this, I get $1-\frac{1}{2}x+\frac{1}{3}x^2+\frac{1}{2}x-\frac{1}{4}x^2+O(x^3)+\frac{O(x^4)}{2}-1$.
-That $O(x^4)$ part disappears I guess, due to the big O notation. However, I cannot figure out why.
-Furthermore, a few pages later, they say that $\lim_{x\rightarrow 0} O(x) = 0$. Which I do not really understand, since $O(x)$ defines a set of functions, no?

-REPLY [6 votes]: For your first question, both $O(x^3)$ and $O(x^4)$ are error terms as $x$ approaches zero. Since $x^4$ goes to zero faster than $x^3$ as $x$ goes to zero, the larger error, $O(x^3)$, will subsume the smaller $O(x^4)$.
-For your second question, you're correct in interpreting $O(x)$ as a set of functions. In this context, $O(x)$ is the set of all functions $f(x)$ for which $\mid f(x)\mid \le c\mid x\mid$, eventually, for some $c>0$ (which will depend on $f$). The limit $\lim_{x\rightarrow 0} O(x)$ is then interpreted to mean the limit of all such functions $f(x)$ as $x\rightarrow 0$, if it exists. It does in this case, since every $f\in O(x)$ satisfies $\mid f(x)\mid \le c\mid x\mid$, and so has limiting value $0$ as $x\rightarrow 0$.<|endoftext|>
-TITLE: Computing: $\lim\limits_{n\to\infty}\left(\prod\limits_{k=1}^{n} \binom{n}{k}\right)^\frac{1}{n}$
-QUESTION [12 upvotes]: I try to compute the following limit:
-$$\lim_{n\to\infty}\left(\prod_{k=1}^{n} \binom{n}{k}\right)^\frac{1}{n}$$
-I'm interested in finding some reasonable ways of solving the limit. I don't find any easy approach. Any hint/suggestion is very welcome.

-REPLY [4 votes]: As noted by Phira and Byron Schmuland, the product diverges.
-I find an asymptotic expression for the product for large $n$, Eqn. (3) below.
-I have found some inspiration for this solution in @sos440's answer here.
-With a little work, one can show that
-$$\begin{equation*}
-\log \left(\prod_{k=1}^n{n\choose k}\right)^{1/n}
-= -\frac{n+1}{n}\log n!
-+ (n+1)\log n
-+ 2 \sum_{j=1}^n \frac{j}{n}\log\frac{j}{n}. \tag{1}
-\end{equation*}$$
-For a derivation of (1), see below.
-Using Stirling's approximation, and the fact that
-$\sum_{j=1}^n \frac{j}{n}\log\frac{j}{n} \approx n\int_0^1 x \log x = -n/4$
-(the error here is $O(\log(n)/n)$),
-we get
-$$\begin{equation*}
-\log \left(\prod_{k=1}^n{n\choose k}\right)^{1/n}
-\sim \frac{n}{2}+1 - \frac{1}{2} \log 2\pi n.\tag{2}
-\end{equation*}$$
-Therefore,
-$$\begin{equation*}
-\left(\prod_{k=1}^n{n\choose k}\right)^{1/n} \sim \frac{e^{n/2+1}}{\sqrt{2\pi n}}. \tag{3}
-\end{equation*}$$
-Clearly the product diverges.
-For $n=10$, $100$, and $1000$ the left and right side of (3) agree to $12\%$, $2.0\%$, and $0.28\%$, respectively.
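These agreement figures are easy to reproduce; here is a minimal sketch (Python, standard library only) comparing the exact left side of (3) with the asymptotic right side in log space, using `math.lgamma` so the huge product never overflows.

```python
import math

def log_lhs(n):
    # (1/n) * log( prod_{k=1}^n binom(n, k) ), computed via log-Gamma
    s = sum(math.lgamma(n + 1) - math.lgamma(k + 1) - math.lgamma(n - k + 1)
            for k in range(1, n + 1))
    return s / n

def log_rhs(n):
    # log of the asymptotic form e^{n/2 + 1} / sqrt(2*pi*n) in (3)
    return n / 2 + 1 - 0.5 * math.log(2 * math.pi * n)

for n in (10, 100, 1000):
    rel = math.exp(log_lhs(n) - log_rhs(n)) - 1
    print(n, f"{abs(rel):.2%}")   # relative deviation of (3): ~12%, 2.0%, 0.28%
```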
-From (3) we get the result
-$\lim_{n\to\infty} \left(\prod_{k=1}^n{n\choose k}\right)^{1/n^2} = \sqrt{e}$
-for free.
-(Use $\lim_{n\to\infty} x^{1/n} = 1$ for $0<x<\infty$.)<|endoftext|>
-TITLE: Finding the coordinates of points from distance matrix
-QUESTION [36 upvotes]: I have a set of points (with unknown coordinates) and the distance matrix. I need to find the coordinates of these points in order to plot them and show the solution of my algorithm.
-I can set one of these points at the coordinate (0,0) to simplify, and find the others. Can anyone tell me if it's possible to find the coordinates of the other points, and if yes, how?
-Thanks in advance!
-EDIT
-Forgot to say that I need the coordinates on x-y only

-REPLY [49 votes]: Doing this with angles, as Jyrki suggested, is cumbersome and difficult to generalize to different dimensions. Here is an answer that's essentially a generalization of WimC's, which also fixes an error in his answer. In the end, I show why this works, since the proof is simple and nice.
-The algorithm
-Given a distance matrix $D_{ij}$, define
-$$M_{ij} = \frac {D^2_{1j}+D^2_{i1}-D^2_{ij}} 2 \,.$$
-One thing that is good to know in case the dimensionality of the data that generated the distance matrix is not known is that the smallest (Euclidean) dimension in which the points can be embedded is given by the rank $k$ of the matrix $M$. No embedding is possible if $M$ is not positive semi-definite.
-The coordinates of the points can now be obtained by eigenvalue decomposition: if we write $M = USU^T$, then the matrix $X = U \sqrt S$ (you can take the square root element by element) gives the positions of the points (each row corresponding to one point). Note that, if the data points can be embedded in $k$-dimensional space, only $k$ columns of $X$ will be non-zero (corresponding to $k$ non-zero eigenvalues of $M$).
-Why does this work?
-If $D$ comes from distances between points, then there are $\mathbf x_i \in \mathbb R^m$ such that
-$$D_{ij}^2 = (\mathbf x_i - \mathbf x_j)^2 = \mathbf x_i^2 + \mathbf x_j^2 - 2\mathbf x_i \cdot \mathbf x_j \,.$$
-Then the matrix $M$ defined above takes on a particularly nice form:
-$$M_{ij} = (\mathbf x_i - \mathbf x_1) \cdot (\mathbf x_j - \mathbf x_1) \equiv \sum_{a=1}^m \tilde x_{ia} \tilde x_{ja}\,,$$
-where the elements $\tilde x_{ia} = x_{ia} - x_{1a}$ can be assembled into an $n \times m$ matrix $\tilde X$. In matrix form,
-$$M = \tilde X \tilde X^T \,.$$
-Such a matrix is called a Gram matrix. Since the original vectors were given in $m$ dimensions, the rank of $M$ is at most $m$ (assuming $m \le n$).
-The points we get by the eigenvalue decomposition described above need not exactly match the points that were put into the calculation of the distance matrix. However, they can be obtained from them by a rotation and a translation. This can be proved for example by doing a singular value decomposition of $\tilde X$, and showing that if $\tilde X \tilde X^T = X X^T$ (where $X$ can be obtained from the eigenvalue decomposition, as above, $X = U\sqrt S$), then $X$ must be the same as $\tilde X$ up to an orthogonal transformation.<|endoftext|>
-TITLE: Good books on "advanced" probabilities
-QUESTION [46 upvotes]: what are some good books on probabilities and measure theory?
-I already know basic probabilities, but I'm interested in sigma-algebras, filtrations, stopping times etc, with possibly examples of "real life" situations where they would be used
-thanks

-REPLY [5 votes]: I really like
-Probability with Martingales by D.
Williams and Probability: Theory and Examples by Durrett.<|endoftext|>
-TITLE: How to evaluate $\lim\limits_{t\to 0} \frac{e^{-1/t}}{t}$?
-QUESTION [6 upvotes]: How can I evaluate
-\[
-\lim_{t\to 0} \frac{e^{-1/t}}{t}\quad ?
-\]
-I tried to use L'Hôpital's rule but it didn't help me. Any hints are welcome. Thanks.

-REPLY [11 votes]: You are given
-$$\lim_{t \to 0} \frac{1}{t}\exp\left({-\frac 1 t}\right)$$
-Since $1/t$ behaves oppositely for $0^+$ or $0^-$, we consider both situations. Then we let $x =\dfrac 1 t $ and get
-$$\lim_{t \to 0^+} \frac{1}{t}\exp\left({-\frac 1 t}\right)=\lim_{x \to+\infty}xe^{-x}=\lim_{x \to+\infty}\frac x {e^{x}}$$
-$$\lim_{t \to 0^-} \frac{1}{t}\exp\left({-\frac 1 t}\right)=\lim_{x \to -\infty}xe^{-x}$$
-I guess the calculation is now straightforward.

-REPLY [7 votes]: Note that as $t \to 0$, $\exp(-1/t)$ tends 'faster' to $0$ than $1/t$ tends to $\infty$. To make this precise, let us proceed as follows.
-First note that $\exp(x) \geq 1 + x + \dfrac{x^2}2$ for $x \geq 0$ and $\exp(x) \leq 1 + x + \dfrac{x^2}2$ for $x \leq 0$. For $t \geq 0$,
-$$\dfrac{\exp(-1/t)}{t} = \dfrac1{t \exp(1/t)} \leq \dfrac1{t (1 + 1/t + 1/(2t^2))} = \dfrac1{t + 1 + 1/(2t)} = \dfrac{2t}{2t^2 + 2t + 1}$$
-Hence, if we let $t \rightarrow 0^+$, we get that
-$$0 \leq \lim_{t \rightarrow 0^+} \dfrac{\exp(-1/t)}{t} \leq \lim_{t \rightarrow 0^+} \dfrac{2t}{2t^2 + 2t + 1} = 0$$
-Argue similarly for $t \leq 0$.<|endoftext|>
-TITLE: Automorphism on integers
-QUESTION [10 upvotes]: Is multiplication by a fixed integer $m$ an automorphism of the group of all integers under addition?
-If so, why does the 2nd example in http://en.wikipedia.org/wiki/Automorphism say that the unique non-trivial automorphism is negation?

-REPLY [4 votes]: Let $f:\mathbb Z\to \mathbb Z$ be a homomorphism. By definition, for each $m\in \mathbb Z$, $$f(m)=f(1+\cdots+1)=f(1)+\cdots+f(1)=m f(1)=f(1)m$$ Hence any homomorphism $f:\mathbb Z\to \mathbb Z$ is just multiplication by $f(1)$, that is, the map $m\mapsto f(1) m$. Now it is easy to see that every such non-zero homomorphism is injective. Indeed the equation $f(1)m=f(1)n$ implies that $m=n$, since $f(1)\not =0$ by hypothesis and also because $\mathbb Z$ is an integral domain. Now the only cases where these homomorphisms are surjective are $f(1)=1$ and $f(1)=-1$; if we call the case $f(1)=1$ the trivial case, then the only remaining case is $f(1)=-1$, and the corresponding automorphism $m\mapsto -m$ is called the negation.<|endoftext|>
-TITLE: A particular case of Truesdell's unified theory of special functions
-QUESTION [12 upvotes]: I'm reading through Clifford Truesdell's "An essay toward a unified theory of special functions", Princeton Univ. Press, 1948.
-All his exposition is based on the functional equation
-$$\frac{\partial}{\partial z}\mathrm F(z,\alpha)=\mathrm F(z,\alpha+1)$$
-He starts with

-We are going to study functions $f (y, \alpha)$ satisfying a functional equation of the type
-$$\frac{\partial}{\partial y} f (y, \alpha) = \mathrm A(y, \alpha) f (y, \alpha) + \mathrm B(y, \alpha) f (y, \alpha+1 )$$

-Then, we define

-$$g\left( {y,\alpha } \right) = f\left( {y,\alpha } \right)\exp \left\{ { - \int\limits_{{y_0}}^y {\mathrm A\left( {v,\alpha } \right)dv} } \right\}$$

-We verify that $g$ satisfies

-$$\frac{\partial }{{\partial y}}g\left( {y,\alpha } \right) = g\left( {y,\alpha + 1} \right)B\left( {y,\alpha } \right)\exp \left\{ { - \int\limits_{{y_0}}^y {\left[ {A\left( {v,\alpha + 1} \right) - A\left( {v,\alpha } \right)} \right]dv} } \right\}$$

-Thus we reduce the equation to

-$$\frac{\partial }{{\partial y}}g\left( {y,\alpha } \right) = C\left( {y,\alpha } \right)g\left( {y,\alpha + 1} \right)\tag {1}$$

-Now he states
-In the case of nearly every special function that I know to satisfy an equation of type $(1)$, the coefficient $C(y, \alpha)$ is factorable, $C(y, \alpha)=Y(y)A( \alpha)$, so we assume

-$$\frac{\partial }{{\partial y}}g\left( {y,\alpha } \right) = Y(y)A( \alpha)g\left( {y,\alpha + 1} \right)$$

-Now he defines:
-$$z:= \int_{y_1}^y Y(v) dv$$
-and
-$$F(z,\alpha ): = g\left( {y,\alpha } \right)\exp \left\{ {\mathop {\mathrm S}\limits_{{\alpha _0}}^\alpha \log {\text{A}}\left( v \right)\Delta v} \right\}$$
-Now this is the operator that is troubling me
-$$\mathop {\mathrm S}\limits_{{\alpha _0}}^\alpha h\left( v \right)\Delta v = \mathop {\lim }\limits_{k \to {0^ + }} \left\{ {\int\limits_{{\alpha _0}}^\infty {h\left( v \right){e^{ - kc\left( v \right)}}dv} - \sum\limits_{m = 0}^\infty {h\left( {\alpha + m} \right){e^{ - kc\left( {\alpha + m} \right)}}} } \right\}$$
-I can't find any reference to what $c(v)$ is. Is this a known operator? What is $c$?
-Anyways, I have a simple case I need to transform:
-Let $$\mathrm F\left( {x,\alpha } \right) = \int\limits_0^x {{{\left( {\frac{t}{{t + 1}}} \right)}^\alpha }} \frac{{dt}}{t}$$
-Then we have the functional equation
-$$\frac{\alpha }{x} \mathrm F\left( {x,\alpha } \right) - \frac{\alpha }{x} \mathrm F\left( {x,\alpha + 1} \right) = \frac{\partial }{{\partial x}} \mathrm F\left( {x,\alpha } \right)$$
-Following Truesdell's method, I define
-$$\mathrm G\left( {x,\alpha } \right) = \frac{{\mathrm F\left( {x,\alpha } \right)}}{{{x^\alpha }}}$$
-Then I have the functional equation
-$$\frac{\partial }{{\partial x}} \mathrm G\left( {x,\alpha } \right) = - \alpha \mathrm G\left( {x,\alpha + 1} \right)$$
-How can I transform it to the $\mathrm F$ equation using Truesdell's method?
-The importance of the original $\mathrm F$ I define is that it can be used to show that
-$$\log (1+x)=\sum_{n=1}^\infty \frac{1}{n}\left(\frac x {x+1} \right)^n\text{ ; for } x > -\frac 1 2$$
-and maybe some other results can be derived. I still have a lot of exposition to read.

-REPLY [12 votes]: The expression with the puzzling $\rm\:c(v)\:$ is Norlund's principal solution of the difference equation $\rm \mathop\Delta\limits_{\alpha}\ \mathop {\mathrm S}\limits_{{\alpha _0}}^\alpha h(v)\,\Delta v = h(\alpha).\: $ As Truesdell mentions in Appendix II, one can find an exposition of this in Chapter 8 of the classic The Calculus of Finite Differences by Milne-Thomson.
-As I have mentioned previously here, Willard Miller showed that Truesdell's method is essentially Lie-theoretic.
See his freely available book Lie theory and Special Functions, 1968. There he also shows that, similarly, the Schroedinger-Infeld-Hull ladder / factorization method (a powerful tool widely exploited by physicists to compute eigenvalues, recurrence relations, etc. for solutions of second order ODEs) is essentially equivalent to the representation theory of four local Lie groups. Nowadays it is a special case of Lie-theoretic symmetry methods used for separation of variables in partial differential equations (a major theme in the group-theoretic approach towards a unified theory of special functions).<|endoftext|> -TITLE: Banach-Tarski theorem without axiom of choice -QUESTION [11 upvotes]: Is it possible to prove the infamous Banach-Tarski theorem without using the Axiom of Choice? -I have never seen a proof which refutes this claim. - -REPLY [20 votes]: The Banach-Tarski theorem heavily uses non-measurable sets. It is consistent that without the axiom of choice all sets are measurable and therefore the theorem fails in such universe. The paradox, therefore, relies on this axiom. -It is worth noting, though, that the Hahn-Banach theorem is enough to prove it, and there is no need for the full power of the axiom of choice. -More information can be found through here: - -Herrlich, H. Axiom of Choice. Lecture Notes in Mathematics, Springer, 2006. -Schechter, E. Handbook of Analysis and Its Foundations. Academic Press, 1997. - -REPLY [6 votes]: Directly from Wikipedia's page on the Paradox/Theorem - -Unlike most theorems in geometry, this result depends in a critical way on the axiom of choice in set theory. This axiom allows for the construction of nonmeasurable sets, collections of points that do not have a volume in the ordinary sense and for their construction would require performing an uncountably infinite number of choices. - -REPLY [5 votes]: While I'm aware you did not ask for this I cannot resist to suggest that you have a look at Stan Wagon's book 'The Banach-Tarski Paradox', Cambridge University Press 1985.<|endoftext|> -TITLE: Practical method of calculating primitive roots modulo a prime -QUESTION [9 upvotes]: How are generators of a (large prime) set calculated in popular programs such as pgp and libraries such as java's bouncycastle? i cannot imagine them just churning away at every value between 2 and p until something comes up, but there does not seem to be any description of some other method programmers use to find them. -even if they test every number between 2 and p, what is the test? is it checking if the set generated is {1,2,...p-1}? that seems like it would take too much memory. -can anyone give me some pseudocode on how to do it? im trying something thats probably incredibly naive and the program is using 1.5gb ram after a few seconds, with only a 32 bit value - -REPLY [10 votes]: As others have mentioned, we don't know efficient methods for finding generators for $(ℤ/pℤ)^∗$ without knowing the factorization of $p-1$. However, you can efficiently generate a random factored number $n$, then test if $n+1$ is prime, and then compute primitive roots modulo $n+1$. -See Victor Shoup -- A Computational Introduction to Number Theory and Algebra, chapter 11. (You actually need sections 11.1 about finding generators, 9.6 for generating random factored numbers and 9.5, for generating a random non-increasing sequence).<|endoftext|> -TITLE: Covering points on a sphere with a disk -QUESTION [11 upvotes]: Suppose $m$ points ("sites") are selected on the unit sphere $S^2$. 
For a given radius $r < \pi$, we can define a disk around any point on the sphere as the set of points at geodesic distance at most $r$ from it. Let $k$ be the maximum number of sites contained in any such disk. Is there a nice lower bound on $k$ in terms of $r$? In other words, what should the function $k(r)$ be so that, no matter where the $m$ sites are placed, we can always find a disk of radius $r$ containing $k(r)$ of them? -I would hope it is simply $m$ times the fraction of the surface area of the sphere covered by a disk. After all, if the average density of sites on the sphere is $m/A$, there must be some disk whose density is at least the average, right? This would be easily proved if you could tile the sphere with disks with no overlap, but you can't, so I'm not sure. If it turns out to depend on the packing density of disks on a sphere, I'll be happy with a reasonable lower bound. -If you replace the unit sphere and the disks with unit ball and smaller balls of radius $r < 1$ in $\mathbb R^3$, and the area ratio with the volume ratio, then the naive guess doesn't work. You can have $m/2$ sites clustered around one point near the surface of the unit ball and the other $m/2$ sites around the antipodal point, so that for $\sqrt[3]{\frac1{2}} < r < 1$, the volume of a ball of radius $r$ is more half that of the unit ball, but you can't get more than $m/2$ sites inside it. Perhaps it still works in some asymptotic sense; I'm curious about anything rigorous that can be said in this case too. -Finally, although I've stated the above question for the 2-sphere and ball in $\mathbb R^3$, I'm also interested in the generalization to higher dimensions. - -REPLY [2 votes]: It turns out that the answer to my first question is really very simple. -Suppose you pick the center of the disk randomly from a uniform distribution on the sphere. Appealing to symmetry, we may infer that the probability that a given site lies within the disk is precisely the fraction of the surface area of the sphere covered by the disk; if the areas of the disk and the sphere are $a$ and $A$ respectively, this probability is $a/A$. By linearity of expectation, the expected number of sites contained in a randomly chosen disk is $ma/A$. Therefore, there must exist some disk which contains at least this many points. -The second question remains open, namely the problem of covering as many as possible of $m$ sites in a unit ball with a smaller ball of radius $r < 1$. -Edit: I hate to edit merely to bump this to the front page, but I wanted to use this solution to answer another question on math.SE, and I'd rather not do that if it has a negative score. For all I know, it might have an error that I haven't noticed. The one person who downvoted did not leave a reason; can anyone else let me know if this solution is incorrect?<|endoftext|> -TITLE: Show that $\operatorname{int}(A \cap B)= \operatorname{int}(A) \cap \operatorname{int}(B)$ -QUESTION [7 upvotes]: It's kind of a simple proof (I think) but I´m stuck! -I have to show that $\operatorname{int} (A \cap B)=\operatorname{int} (A) \cap \operatorname{int}(B)$. -(The interior point of the intersection is the intersection of the interior point.) -I thought like this: -Intersection: there's a point that is both in $A$ and $B$, so there is a point $x$, so $\exists ε>0$ such $(x-ε,x+ε) \subset A \cap B$.I don´t know if this is right. 
-Now $\operatorname{int} (A) \cap \operatorname{int}(B)$, but again with the definition ,there is a point that is in both sets,there's an interior point that is in both sets,an $x$ such $(x-ε,x+ε)\subset A \cap B$. There we have the equality. -I think it may be wrong. Please, I'm confused! - -REPLY [3 votes]: Always remember the trivial inclusion using the property $A\subset B \implies \operatorname{int}A\subset \operatorname{int}B$. Then: -$$A\cap B\subset A,\ A\cap B\subset B \implies \operatorname{int}(A\cap B)\subset \operatorname{int}A,\ \operatorname{int}(A\cap B)\subset \operatorname{int}B$$ -therefore $\operatorname{int}(A\cap B)\subset \operatorname{int}A\cap\operatorname{int}B$. The other inclusion is in Arturo Magidin answer. -In same form we can prove the trivial inclusion $\operatorname{int}A\cup\operatorname{int}B\subset\operatorname{int}(A\cup B)$. Using only the fact that $A\subset A\cup B$ and $B\subset A\cup B$. -If you known what is the closure of a set you can prove that if $A\subset B$ then $\overline{A}\subset \overline{B}$. Then the following facts are inmediate: -$$\overline{A\cap B}\subset\overline{A}\cap\overline{B},$$ -$$\overline{A}\cup\overline{B}\subset \overline{A\cup B}.$$ -Please don't forget it. This observation is crucial and is always used. The other inclusion sometimes is false or sometimes is true, generally you must to use definition indeed this basic properties.<|endoftext|> -TITLE: Non-zero prime ideals in the ring of all algebraic integers -QUESTION [16 upvotes]: Let $\mathcal{O}$ be the ring of all algebraic integers: elements of $\mathbb{C}$ which occur as zeros of monic polynomials with coefficients in $\mathbb{Z}$. -It is known that $\mathcal{O}$ is a Bezout domain: any finitely generated ideal is a principal ideal. -In addition, $\mathcal{O}$ has no irreducible elements, since any $x \in \mathcal{O}$ which is not a unit can be written as $x = \sqrt{x}\cdot\sqrt{x}$, where $\sqrt{x}$ is also not a unit in $\mathcal{O}$. -My question is: - -Does $\mathcal{O}$ have any prime ideal other than $(0)$? - -REPLY [17 votes]: We don't need the axiom of choice. We can write down a maximal ideal. -First, we observe that $R$ is countable: For every monic $p \in \mathbb{Z}[x]$ let $V(p)$ be the set of its complex roots. It has a canonical enumeration, when we order the roots using the lexicographic order on $\mathbb{R} \times \mathbb{R}$. For every $n \geq 1$ the set of these $p$ of degree $n$ identifies with $\mathbb{Z}^n$ and is therefore countable, with an explicit enumeration. It follows that also $\mathcal{O}_n = \cup_{\mathrm{deg}(p)=n} V(p)$ has an explicit enumeration, and then also $\mathcal{O}= \cup_n \mathcal{O}_n$. This enumeration is complicated, but it is computable. -Now let $R \neq 0$ be any countable ring. Any enumeration $R=\{a_0,a_1,\dotsc\}$ produces a maximal ideal as follows: Define an increasing chain of proper ideals $I_k$ as follows: Let $I_0=0$. If $I_k + \langle a_k \rangle$ is proper, let $I_{k+1} = I_k + \langle a_k \rangle$. If not, call $a_k$ bad, and let $I_{k+1}=I_k$. Then $I:=\cup_k I_k$ is a maximal ideal: By construction $1 \notin I$. Now let $a_k \in R \setminus I$. Then $a_k$ has to be bad (otherwise $a_k \in I_{k+1} \subseteq I$), so that $I_k + \langle a_k \rangle = R$ and hence $I+\langle a_k \rangle = R$. $\square$ -More generally, let $R$ be a ring $\neq 0$ whose underlying set is well-orderable. 
Every enumeration $R=\{a_{\alpha} : \alpha < \kappa\}$ with some limit ordinal $\kappa$ produces a maximal ideal (without using the axiom of choice): Let $I_0=0$, and construct $I_{\alpha+1}$ from $I_{\alpha}$ as above. For limit ordinals $\lambda<\kappa$, let $I_{\lambda}=\cup_{\alpha<\lambda} I_{\alpha}$. Then $I=\cup_{\alpha<\kappa} I_{\alpha}$ is maximal: If $\alpha<\kappa$ and $a_{\alpha} \notin I$, then $a_{\alpha}$ is bad (otherwise $\alpha+1<\kappa$, $a_{\alpha} \in I_{\alpha+1} \subseteq I$), hence $I_{\alpha}+\langle a_{\alpha} \rangle = R$ and $I+\langle a_{\alpha} \rangle = R$. -Even more generally, the proof of "well ordering principle $\Rightarrow$ Zorn's Lemma" in ZF actually shows that every partial order, in which every chain has an upper bound, and whose underlying set is well-orderable, has a maximal element.<|endoftext|> -TITLE: Kaplansky's theorem of infinitely many right inverses in monoids? -QUESTION [28 upvotes]: There's a theorem of Kaplansky that states that if an element $u$ of a ring has more than one right inverse, then it in fact has infinitely many. I could prove this by assuming $v$ is a right inverse, and then showing that the elements $v+(1-vu)u^n$ are right inverses for all $n$ and distinct. -To see they're distinct, I suppose $v+(1-vu)u^n=v+(1-vu)u^m$ for distinct $n$ and $m$. I suppose $n>m$. Since $u$ is cancellable on the right, this implies $(1-vu)u^{n-m}=1-vu$. Then $(1-vu)u^{n-m-1}u+vu=((1-vu)u^{n-m-1}+v)u=1$, so $u$ has a left inverse, but then $u$ would be a unit, and hence have only one right inverse. -Does the same theorem hold in monoids, or is there some counterexample? - -REPLY [11 votes]: Let $S$ be the semigroup of functions from $\mathbb N=\{z\in \mathbb Z|z\geq 0\}$ to itself, with the composition written traditionally: $(f\circ g)(x)=f(g(x)).$ -Let $f\in S$, $f(0)=f(1)=0$ and for $n\geq 2,\,f(n)=n-1.$ Suppose $f\circ g=\operatorname{id}$. Then for $n\geq 1$, we must have $g(n)=n+1$. However, $g(0)$ can be chosen to be either $0$ or $1$ and the equality holds.<|endoftext|> -TITLE: Blow-up along an ideal sheaf -QUESTION [5 upvotes]: Let $k^2=\operatorname{Spec} \; k[x,y]$ where $k$ is an algebraically closed field. Let $\mathcal{I}$ be the ideal sheaf defined by $(x,y)$. Then -$$ -Bl_{\mathcal{I}}k^2 -$$ -is covered by two open charts $\operatorname{Spec} \; k[x, y/x] \cup \operatorname{Spec}\; k[y,x/y]$. -Q1: Why can each chart be described by -$$ -\operatorname{Spec} \; k[x,y][t]/(tx-y) \mbox{ and } \operatorname{Spec} \; k[x,y][t]/(ty-x)? -$$ -Q2: Isn't $Bl_{\mathcal{I}}k^2=\operatorname{Proj}(\oplus_{i\geq 0} (Rx\oplus Ry)^i t^i)$? -Q3.a: Now let $k^3 =\operatorname{Spec} \; k[x,y,z]$ with $\mathcal{I}$ being defined by $(x,y,z).$ Then -isn't -$$ -Bl_{\mathcal{I}}k^3 = \operatorname{Spec} \; k[x,y/x,z/x] \cup \operatorname{Spec}\; k[y,x/y,z/y] \cup \operatorname{Spec}\; k[z,x/z,y/z]? -$$ -Q3.b: How can one see that the charts -$$ - \operatorname{Spec}\; k[x,y,z][t_1, t_2]/(t_1 x - y, t_2 x-z) $$ -$$\operatorname{Spec}\; k[x,y,z][t_1, t_2]/(t_1 y - x, t_2 y-z) -$$ -$$\operatorname{Spec}\; k[x,y,z][t_1, t_2]/(t_1 z - x, t_2 z-y) -$$ -also cover $Bl_{\mathcal{I}}k^3$? -$$ -$$ - -REPLY [2 votes]: Let $A=k[x,y]$, then $$k[x,y/x]=k[x,y,y/x]=A[y/x]=A[t]/(y-tx).$$ For the last equality, see the comments below. Similar equalities apply to the other $k[y,x/y]$. -By definition the blowing-up of $k^2$ along the origin should be -$$\operatorname{Proj}(\oplus_{i\geq0}I^i)\longrightarrow k^2,$$ -where $I=(x,y)$. 
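-One way to see the last equality in the first display above (the step deferred to the comments): modulo $(y-tx)$ we may substitute $y=tx$, so $A[t]/(y-tx)\cong k[x,t]$, and the induced map $k[x,t]\to k[x,y/x]$, $t\mapsto y/x$, is surjective by construction and injective because $x$ and $y/x$ are algebraically independent over $k$ (indeed $k(x,y/x)=k(x,y)$ has transcendence degree $2$). The same argument applies to the chart $k[y,x/y]$.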
-The blowing-up of the affine space $k^n$ along the origin is (see Liu,8.1.13) -$$\operatorname{Proj}(k[t_1,...,t_n][T_1,...,T_n]/(t_iT_j-t_jT_i))\longrightarrow k^n.$$ -For your $n=3$ case, we have -$$D_+(T_1)=\operatorname{Spec}(k[t_1,t_2,t_3][\frac{T_2}{T_1},\frac{T_3}{T_1}]/(t_2-t_1\frac{T_2}{T_1},t_3-t_1\frac{T_3}{T_1})),$$ -and similar for $D_+(T_2),D_+(T_3)$. By re-writting the symbols carefully these are the three open charts you mentioned exactly.<|endoftext|> -TITLE: Prove that $\mathcal{W}(\mathbb{R})$ is not metrizable. -QUESTION [5 upvotes]: A colection $\mathcal{V}$ of open sets in a topological space $X$ is called a Fundamental System of Open Neighborhoods (FSON) of a point $x\in X$ when: - -$\forall\ V\in\mathcal{V}$ we have that $x\in V.$ -If $A\subset X$ is open set containing $x$ then $\exists\ V\in\mathcal{V}$ such that $V\subset A$. - -For example in any metric space the set $\{B(x,\frac{1}{n}); n\in \mathbb{N}\}$ is a FSON of $x$. -Let $\mathcal{W}(\mathbb{R})$ the set of continuous functions $f:\mathbb{R}\to\mathbb{R}$ with a topology defined by the following way: Let $f\in\mathcal{W}(\mathbb{R})$ and a continuous positive function $\varepsilon:\mathbb{R}\to\mathbb{R}^+$ and define the set $B(f,\varepsilon)=\{g\in\mathcal{W}(\mathbb{R}); |g(x)-f(x)|<\varepsilon(x)\ \forall x\in\mathbb{R}\}$ which is a basis for the topology. - -Prove that $\mathcal{W}(\mathbb{R})$ is not metrizable. - -Hint: Show that $f=0$ doesn't have a countable FSON. -Attempt: - -Suppose that we have a countable FSON called $\mathcal{R}_{0}$ of $f=0$ then $\mathcal{R}_{0}=\{A_i\}_{i=1}^{\infty}$. Then for each $A_i$ I can choose $\varepsilon_i$ such that $0\in B(0,\varepsilon_i)=\{g\in\mathcal{W}(\mathbb{R}); |g(x)|<\varepsilon_i(x)\ \forall x\in\mathbb{R}\}\subset A_i$. Then I need to find a positive function $\varphi:\mathbb{R}\to\mathbb{R}^+$ such that for all $i\in\mathbb{N}$ we have that $B(0,\varepsilon_i)\nsubseteq B(0,\varphi)$. If were $\varepsilon_i(x)=\frac{1}{n}\ \forall x \in\mathbb{R}$ I have that $\varphi(x)=\frac{1}{1+x^2}\ \forall x \in\mathbb{R}$ satisfies the condition. But I don't know how to find $\varphi$ in the general case. - -REPLY [2 votes]: For each $n\in\Bbb Z^+$ choose a positive number $r_n<\epsilon_n(n)$. Now construct a continuous $\varphi:\Bbb R\to\Bbb R^+$ such that $\varphi(n)=r_n$ for each $n\in\Bbb Z^+$. (A piecewise linear function is continuous and easy to construct.) Then $B(0,\varphi)$ does what you want.<|endoftext|> -TITLE: Building the integers from scratch (and multiplying negative numbers) -QUESTION [14 upvotes]: Now I understand that what I am about to ask may seem like an incredibly simple question, but I like to try and understand math (especially something as fundamental as this) at the deepest level possible. And for the life of me, I can't shake this feeling I have that something is not quite right. 
-Let me begin: -First of all, I was reading Terry Tao's discussion about the construction of the standard number system and was very pleased with the way in which $\mathbb{C}$ can be systematically constructed from the natural numbers through a system of homomorphisms (as a direct limit): $$\mathbb{N} \hookrightarrow \mathbb{Z} \hookrightarrow \mathbb{Q} \hookrightarrow \mathbb{R} \hookrightarrow \mathbb{C}$$ -In particular, he talks about how the integers can be constructed as the space of equivalence classes of formal differences between natural numbers, where $$[a-b]\sim[c-d ] \iff a+d=b+c$$ with $a,b,c,d\in\mathbb{N}.$ He then goes on to say "with the arithmetic operations extended in a manner consistent with the laws of algebra." - -Now, for $(\mathbb{N},+,\cdot)$ addition is straight forward, and $$n\cdot k:=\underbrace{k+\cdots+k}_{n\;\text{times}}$$ is well defined with $n\cdot k=k\cdot n$. Moreover, the distributive law, $n\cdot(m+k)=n\cdot m+n\cdot k$, also holds. -However, when I want to construct $(\mathbb{Z},+,\cdot)$ as above, I run into some trouble. Addition of two equivalence classes seems pretty straightforward, given by $$[a-b] + [c-d] := [ (a+c)-(b+d) ],$$ and is well defined. This makes sense as we simply "add up" the positive and negative amounts together. This definition also behaves nicely with the map $\varphi:\mathbb{N}\hookrightarrow\mathbb{Z}$, given by $n\mapsto[ n-0 ]$; with $\varphi(n+k)=\varphi(n)+\varphi(k)$, where the second addition is the addition of equivalence classes. Moreover, we have commutativity of addition; the existence of an additive identity, namely $[0-0]$; and the existence of additive inverses, $[a-b]+[b-a]\sim[0-0].$ -But now, when I try to define the multiplication of classes, I am unsure how to proceed: -If we consider our usual algebraic rules, we get that $$(a-b)\cdot(c-d)=a\cdot c-a\cdot d-b\cdot c+b\cdot d,$$ which might lead us to define the multiplication of two equivalence classes as $$[ a-b ]\cdot[ c-d ]:=[ (a\cdot c+b\cdot d)-(a\cdot c+b\cdot c) ].$$ Now it's easy to check that this is indeed well defined, and obeys the distributive law with the definition of addition we've given above. However I feel that we've simply gone in a circle (logically) as we've assumed a priori that $(-b)\cdot c=-(b\cdot c)$ and $(-b)\cdot(-d)=-(b\cdot d)$. -Now, I am very familiar with the fact that if your set is a ring (with unity) then these results (particularly that negative times negative is positive) come as simply a result of playing around with additive inverses and the distributive laws (see here). However my problem is that we've only gotten this ring structure on our set of equivalence classes by appealing to the ring structure of the integers, which is exactly the thing we are trying to construct from scratch! And I do not want to simply define multiplication in this way "because it works"; it has always been my feeling that results like $(-1)\cdot(-1)=1$ should come as consequences of the structure, and not as properties we impose. -So I'm curious if there is a way around this issue. Is this particular definition of multiplication the only one that: - -is well defined? -satisfies the distributive laws? -is associative? -has a multiplicative identity? $$[1-0]\cdot[a-b]=[a-b],\;\text{for all $a,b\in\mathbb{N}$}$$ -gives a homomorphism which splits over multiplication? 
$$\varphi:(\mathbb{N},+,\cdot)\hookrightarrow(\mathbb{Z},+,\cdot),\; n\mapsto[ n-0 ]$$ $$\varphi(n\cdot k)=\varphi(n)\cdot\varphi(k)=[ n-0 ]\cdot[ k-0 ]$$ - -If our deffiniton of of multiplication has all these properties, then we'll have a ring structure on our set of equivalence classes, and all the familiar properties of the integers will be established. However, it is not immediately clear to me that this is our only option. Can someone shed some light on this? - -REPLY [8 votes]: The easiest way to convince oneself that there's no circularity is to notice that the entire discussion leading up to the law -$$\tag{*} [a-b][c-d]=[(ac+bd)-(ad+bc)]$$ -is superfluous from a completely rigorous point of view. The discussion makes it look like $(*)$ is a deduced truth, but from a formal point of view it is just a definition. We could just have pulled it out of a hat, or found it in the lost notebooks of a crazy deceased genius. All that matters from a formal standpoint is that we can prove that this definition allows us to prove the laws we want to be true (such as the associative and distributive laws). -The "circular" discussion leading up to the definition is just there to satisfy the student's curiosity why anyone would get the idea of trying out such a complicated definition in the first place. Circularity there doesn't matter; one is allowed to use all sorts of stupid tricks, iffy intuitions, and clairvoyant cheating to decide what one wants to prove -- as long as the proof itself (which comes later) is solid.<|endoftext|> -TITLE: Derivative of an implicit function -QUESTION [6 upvotes]: I am asked to take the derivative of the following equation for $y$: -$$y = x + xe^y$$ -However, I get lost. I thought that it would be -$$\begin{align} -& y' = 1 + e^y + xy'e^y\\ -& y'(1 - xe^y) = 1 + e^y\\ -& y' = \frac{1+e^y}{1-xe^y} -\end{align}$$ -However, the text book gives me a different answer. -Can anyone help me with this? -Thank you and sorry if I got any terms wrong, my math studies were not done in English... :) - -REPLY [6 votes]: You can simplify things as follows: -$$y' = \frac{1+e^y}{1-xe^y} = \frac{x+xe^y}{x(1-xe^y)} = \frac{y}{x(1-y+x)}$$ -Here in the last step we used $y=x+xe^y$ and $xe^y=y-x$.<|endoftext|> -TITLE: Number of embeddings in algebraic closure -QUESTION [8 upvotes]: I'm having trouble following the details of the discussion on pages 9 and 10 of Neukirch's algebraic number theory book. -Suppose $L$ is a separable extension of $K$ with degree n. Consider the set of embeddings of $L$ into $\bar K$, the algebraic closure of $K$, that fix $K$ (K-embeddings). Why are there $n$ embeddings in this set? -EDIT: Also, consider some element $x\in L$. Let $d$ be the degree of $L$ over $K(x)$ and $m$ be the degree of $K(x)$ over $K$. Why are the $K$-embeddings of $L$ partitioned by the equivalence relation -$$ \sigma\sim\tau\ \Leftrightarrow\ \sigma x = \tau x $$ -into $m$ equivalence classes of $d$ elements each? - -REPLY [2 votes]: I'll provide a slightly different solution to your second question. Assuming that $\#\text{Hom}(L,\bar K)=n$, let $x=x_1,\dots,x_m\in \bar K$ be the conjugates of $x$ (i.e., the roots in $\bar K$ of the minimal polynomial $m_{x,K}(t)$ for $x$ over $K$). Recall that each of the $m$ isomorphisms $K(x)\rightarrow K(x_j)$ extend to $d$ isomorphisms $L=K(x,\theta)\rightarrow K(x_j,\theta_k)$ where $\theta\in \bar K$ is a primitive element for $L$ over $K(x)$ (i.e. $L=K(x,\theta)$) and $\theta=\theta_1,\dots,\theta_d$ are its conjugates. 
(This is Theorem 8 in section 13.1 of Dummit and Foote.) This accounts for all $n=md$ embeddings $L\rightarrow \bar K$ via $L=K(x,\theta)\rightarrow K(x_j,\theta_k)\rightarrow\bar K$, where the last map is inclusion. Thus, what we find is that for each $1\leq j \leq m$ there are $d$ embeddings $\sigma:L\rightarrow \bar K$ with $\sigma: x\mapsto x_j$. Hence there are $m$ equivalence classes, each with $d$ elements.<|endoftext|> -TITLE: If $z$ is the unique element such that $uzu=u$, why is $z=u^{-1}$? -QUESTION [7 upvotes]: I'm trying to figure out why an element $u$ in some ring is invertible with inverse $z$ if any only if - -$uzu=u$ and $zu^2z=1$ - -OR - -$uzu=u$ and $z$ is the unique element meeting this condition. - -Clearly, both conditions follow if $u$ is a unit with inverse $z$. However, I can't see why either condition implies that $z=u^{-1}$. -I haven't been able to make any decent progress on my own, so does anyone have hints or suggestions on where to go? Thanks. -Edit: From Qiaochu's hint, $zu$ and $uz$ are idempotent. So $(zu)^2=zu$. But $zu$ has right inverse $uz$, so $(zu)^2(uz)=(zu)(uz)\implies zu=1$. The analogous argument for $uz$ shows $uz=1$, so $z=u^{-1}$. -Does anyone have an idea for the second? - -REPLY [8 votes]: For the sake of having an answer: -The strategy is this: since we have $u=uzu$, we also have $0=u(zu-1)=(uz-1)u$. If it can be shown that $u$ is "regular" (in the sense that it is not a nonzero zero-divisor), then we have $zu-1=uz-1=0$, establishing the result. -As per Yuki's comment above, if $u\alpha=0$, then $u(z+\alpha)u=uzu=u$. By uniqueness of $z$, we have $z+\alpha=z$, and so $\alpha=0$. A symmetric argument establishes that if $\alpha u=0$, then $\alpha=0$. Thus, $u$ is regular.<|endoftext|> -TITLE: Proving that $S_n$ has order $n!$ -QUESTION [8 upvotes]: I have been working on this exercise for a while now. It's in B.L. van der Waerden's Algebra (Volume I), page $19$. The exercise is as follows: - -The order of the symmetric group $S_n$ is $n!=\prod_{1}^{n}\nu$. (Mathematical induction on $n$.) - -I don't comprehend how we can logically use induction here. It seems that the first step would be proving $S_1$ has $1!=1$ elements. This is simply justified: There is only one permutation of $1$, the permutation of $1$ to itself. -The next step would be assuming that $S_n$ has order $n!$. Now here is where I get stuck. How do I use this to show that $S_{n+1}$ has order $(n+1)!$? -Here is my attempt: I am thinking this is because all $n!$ permutations of $S_n$ now have a new element to permutate. For example, if we take one single permutation -$$ -p(1,\dots,n) -= -\begin{pmatrix} -1 & 2 & 3 & \dots & n\\ -1 & 2 & 3 & \dots & n -\end{pmatrix} -$$ -We now have $n$ modifications of this single permutation by adding the symbol $(n+1)$: -\begin{align} -p(1,2,\dots,n,(n+1))&= -\begin{pmatrix} -1 & 2 & \dots & n & (n+1)\\ -1 & 2 & \dots & n & (n+1) -\end{pmatrix}\\ -p(2,1,\dots,n,(n+1))&= -\begin{pmatrix} -1 & 2 & \dots & n & (n+1)\\ -2 & 1 & \dots & n & (n+1) -\end{pmatrix}\\ -\vdots\\ -p(n,2,\dots,1,(n+1))&= -\begin{pmatrix} -1 & 2 & \dots & n & (n+1)\\ -n & 2 & \dots & 1 & (n+1) -\end{pmatrix}\\ -p((n+1),2,\dots,n,1)&= -\begin{pmatrix} -1 & 2 & \dots & n & (n+1)\\ -(n+1) & 2 & \dots & n & 1 -\end{pmatrix} -\end{align} -There are actually $(n+1)$ permutations of that specific form, but we take $p(1,\dots,n)=p(1,\dots,n,(n+1))$ in order to illustrate and prove our original statement. 
We can make this general equality for all $n!$ permutations: $p(x_1,x_2,\dots,x_n)=p(x_1,x_2,\dots,x_n,x_{n+1})$ where $x_i$ is any symbol of our finite set of $n$ symbols and $x_{n+1}$ is strictly defined as the symbol $(n+1)$. -We can repeat this process for all $n!$ permutations in $S_n$. This gives us $n!n$ permutations. Then, adding in the original $n!$ permutations, we have $n!n+n!=(n+1)n!=(n+1)!$. Consequently, $S_n$ has order $n!$. -How is my reasoning here? Furthermore, is there a more elegant argument? I do not really see my argument here as incorrect, it just seems to lack elegance. My reasoning may well be very incorrect, however. If so, please point it out to me. - -REPLY [2 votes]: Here's my answer using induction (it's a similar proof, but seems more concise and understandable): -Our base is true. -Assume $S_n$ has $n!$ permutations. -Define all the original $n!$ permutations as permutations where $(n+1)$ is sent to itself. Thus, by definition, all other permutations ("the new permutations") are the original permutations, except $(n+1)$ is sent to a place other than itself. There are $n$ places to send $(n+1)$ if we exclude $(n+1) \to (n+1)$. Since there are $n!$ original permutations, there must be $n!n$ new permutations. The reason for this is because, for all $n!$ permutations, there is $n$ different modifications of the $n!$ permutations (e.g. $(n+1) \to 1$, $(n+1) \to 2$, etc.). Therefore, the total amount of permutations of $S_{n+1}$ is $n!+n!n=(n+1)!$. -P.S. I only used induction because I wanted to do precisely as the exercise states; I try not to deviate in order to avoid erroneous proofs (in this case, I would prefer to deviate).<|endoftext|> -TITLE: Groups where all elements are order 3 -QUESTION [16 upvotes]: I am a student trying to learn some abstract algebra this summer, and I recently proved (as an exercise) that if $G$ is a group where every element has order 2, then $G$ is abelian. I was wondering could we make a similar conclusion about groups where every element has order 3, namely I am asking if $G$ is a group where all elements have order 3, then $G$ is abelian. I think that this is not true, but I cannot think of a counterexample. -The only groups that I can think of which have all elements order 3 are the groups $(\mathbb{Z}/3\mathbb{Z})^n$, but these are abelian. Any help is appreciated. Thanks! - -REPLY [30 votes]: The standard example is the Heisenberg group. Consider the group of all matrices of the form -$$\left(\begin{array}{ccc} -1 & x & y\\ -0 & 1 & z\\ -0 & 0 & 1 -\end{array}\right),$$ -where $x,y,z\in\mathbb{Z}/3\mathbb{Z}$. It is not hard to verify that this is a group, that every one of its 27 elements is of exponent $3$, and that it is not abelian. Replacing $\mathbb{Z}/3\mathbb{Z}$ with $\mathbb{Z}/p\mathbb{Z}$ for odd prime $p$ shows that a similar result cannot hold for any prime other than $p=2$. -This is an example of smallest possible order: a finite group in which every element is of exponent $3$ must have order $3^n$ for some $n$ (a consequence of Cauchy's Theorem), and every group of order $3^2$ is abelian. -There is another nonabelian group of order $27$, but in that group there is an element of order $9$: -$$\langle a,b\mid a^9 = b^3 = 1, ba = a^4b\rangle.$$ - -REPLY [3 votes]: No, it isn't true, but if you're beginning with this stuff perhaps you won't fully understand the example: the semidirect product of a non-cyclic group of order $9$ by a group of order $3$ has all its non-unit elements of order 3... 
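-(For $p=3$ this is just another presentation of the Heisenberg group from the answer above: there, the matrices with $x=0$ form a normal subgroup isomorphic to $\mathbb{Z}/3\mathbb{Z}\times\mathbb{Z}/3\mathbb{Z}$, and the matrices with $y=z=0$ form a complement of order $3$.)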
-You can read here http://groupprops.subwiki.org/wiki/Prime-cube_order_group:U(3,3) an exposition about this one as a group of unitriangular matrices.<|endoftext|> -TITLE: Proof that every metric space is homeomorphic to a bounded metric space -QUESTION [14 upvotes]: I have tried to show that every metric space $(X,d)$ is homeomorphic to a bounded metric space. My book gives the hint to use a metric $d'(x,y)=\mbox{min}\{1,d(x,y)\}$. -If we can show that $d(x,y) \le c_1 \cdot d'(x,y)$ with $c_1$ some positive constant and $d'(x,y) \le c_2 \cdot d(x,y)$ for $c_2$ some positive constant, then the identity map $i:(X,d) \to (X,d')$ is continuous, and also obviously a bijection, thus showing that $(X,d)$ is homeomorphic to $(X,d')$, where $(X,d')$ is bounded, thus giving the desired result. -Suppose $d(x,y)<1$. Then $d'(x,y)=d(x,y)$. If $d(x,y)\ge 1$, then $d'(x,y) \le d(x,y)$. Thus we can set $c_2 = 1$ and $d'(x,y) \le d(x,y)$ for all $x,y \in X$. -But when $d(x,y)>1$ why won't it always be the case that $c_1$ will depend on what $d(x,y)$ is? - -REPLY [17 votes]: You’re working too hard: just show that $d$ and $d'$ generate the same open sets. Remember, a set $U$ is $d$-open if and only if for each $x\in U$ there is an $\epsilon_x>0$ such that $B_d(x,\epsilon_x)\subseteq U$. Once you have that $\epsilon_x$ that’s small enough, you can use any smaller positive $\epsilon$ just as well, so you might as well assume that $\epsilon_x<1$. Can you take it from there?<|endoftext|> -TITLE: Showing inequality for harmonic series. -QUESTION [13 upvotes]: I want to show that $$\log N<\sum_{n=1}^{N}\frac{1}{n}<1+\log N.$$ But I don't know how to show this. - -REPLY [2 votes]: A simpler way is to use one of the consequences of the Lagrange's theorem applied on $\ln(x)$ function, namely: -$$\frac{1}{k+1} < \ln(k+1)-\ln(k)<\frac{1}{k} \space , \space k\in\mathbb{N} ,\space k>0$$ -Then take $k=1,2,...,n$ values to the inequality and add them up. -The proof is complete.<|endoftext|> -TITLE: How to draw pictures of prime spectra -QUESTION [16 upvotes]: In Atiyah-MacDonald's Commutative Algebra, they give in Exercise 16 of Chapter 1 the instruction: - -Draw pictures of Spec($\mathbb{Z}$), Spec($\mathbb{R}$), - Spec($\mathbb{C}[x]$), Spec($\mathbb{R}[x]$), Spec($\mathbb{Z}[x]$). - -What exactly do they have in mind? I can enumerate the prime ideals in each of these rings, but I have little idea of what kinds of pictures one might draw and what advantage a good picture has over a mere enumeration. - -REPLY [5 votes]: Pictures are useful because they're suggestive, and are sometimes better at organizing information. -Sometimes the right picture is a little unclear. For example, when drawing $\mathop{\mathrm{Spec}} \mathbb{C}[x]$, sometimes you might want to draw a line, and imagine each complex number corresponds to some point on this line. Also, you want a splotch off to the side to represent the generic point of the line. This emphasizes the one-dimensionalness and the triviality of its structure as an algebraic curve, and also, it helps us to remember that its ordinary topology doesn't play a role algebraically. -But other times, you would want to draw $\mathop{\mathrm{Spec}} \mathbb{C}[x]$ as the traditional complex plane with its ordinary labeling (again with a splotch to represent the generic point). This picture is useful, for example, if we want to apply results and intuition from complex analysis. 
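-Similarly, for $\mathop{\mathrm{Spec}} \mathbb{R}[x]$ the data a picture has to organize is: the generic point $(0)$, one closed point $(x-a)$ with residue field $\mathbb{R}$ for each $a\in\mathbb{R}$, and one closed point $(x^2+bx+c)$ with residue field $\mathbb{C}$ for each conjugate pair of non-real roots. A reasonable picture is therefore the closed upper half-plane (the real axis together with one representative from each conjugate pair), with a splotch for the generic point.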
-The main thing about having a picture for $\mathop{\mathrm{Spec}} \mathbb{Z}$ is, IMO, really just so that you can then draw a picture of $\mathop{\mathrm{Spec}} \mathbb{Z}[x]$ or to draw a picture of the spectrum of the ring of integers of a number field as a curve with a projection down to $\mathop{\mathrm{Spec}} \mathbb{Z}$.<|endoftext|> -TITLE: $\mathfrak{h}_1,\mathfrak{h}_2$ Cartan subalgebras with $\mathfrak{h}_1\cap\mathfrak{h}_2=0$ -QUESTION [8 upvotes]: Let $\mathfrak{g}$ be a finite dimensional simple Lie Algebra over an algebraically closed field $K$. I'm having trouble to show that always exists Cartan subalgebras $\mathfrak{h}_1,\mathfrak{h}_2$ such that $\mathfrak{h}_1\cap\mathfrak{h}_2=0$. -In general, if all (or one) cartan subalgebras of a finite dimensional Lie Algebra $\mathfrak{g}$ are abelian, then $\bigcap_{\mathfrak{h}\text{ cartan}}\mathfrak{h}=\mathfrak{z(g)}$. For this, call $\mathfrak{h}'=\bigcap\mathfrak{h}$. Since $\mathfrak{z(g)}\subset\mathfrak{n(h)}=\mathfrak{h}$, for all $\mathfrak{h}$ cartan subalgebra, we have $\mathfrak{z(g)}\subset\mathfrak{h}'$. Now, let $X\in\mathfrak{h}'$. Since each $\mathfrak{h}$ is abelian, $X$ commutes with all $Y\in\mathfrak{\bar g}=\{\text{regular elements of }\mathfrak{g}\}$, so, $\mathfrak{z}(X)\supset\mathfrak{\bar g}$. Since $\mathfrak{z}(X)$ is an subalgebra (and so a vector subspace), we have $\mathfrak{z}(X)=\mathfrak{g}$, so $X\in\mathfrak{z(g)}$. -From this, since any cartan subalgebra of a simple algebra is abelian, we have that the intersection of all subalgebras is $0$, since in this case $\mathfrak{z(g)}=0$. But I have no idea how to show that only two is sufficient... -Any help will be appreciated! - -REPLY [3 votes]: I will propose another approach to solve the question. Since $\mathfrak g$ is semisimple then we can decompose $$\mathfrak g = \mathfrak n^- \oplus \mathfrak h_1 \oplus\mathfrak n^{+}$$ where -$$ \mathfrak n^{+} = \sum_{\alpha >0} \mathfrak g_\alpha$$ -$$ \mathfrak n^{-} = \sum_{\alpha <0} \mathfrak g_\alpha$$ -such that each $\alpha$ is a root. -Now, we can consider the isomorphism of Lie Algebras -\begin{align*}\varphi:& \mathfrak g \to \mathfrak g \\ -H &\mapsto e^{\text {ad} (X_{\alpha_1})} \cdot \ldots\cdot e^{\text {ad} (X_{\alpha_n})}(H)\end{align*} -where $\{X_{\alpha_1},...,X_{\alpha_n}\}$ is a basis of $\mathfrak n^+$, such that, $X_{\alpha_i} \in g_{\alpha_i}$. -Note that $e^{\text{ad}(X_{\alpha_i})}$ is well defined since $\text{ad}(X_{\alpha_i})$ is nilpotent. Moreover it is an isomorphism, because $\text{ad}(X_{\alpha_i})$ is a derivation and $e^{\text{ad}(X_{\alpha_i})}$ is an invertible linear transformation. -Consider $H \in \mathfrak h_1\setminus \{0\}$ since $H \neq 0$, there exists a root $\alpha$, such that $\alpha (H) \neq 0$, otherwise we would conclude that $H \in \mathfrak z (\mathfrak g)$, because $[H,H_1] =0,$ $\forall H_1 \in \mathfrak h_1$ and $[H,X_\alpha] = \alpha(H) X_\alpha =0$ $\forall$ root $\alpha$ which implies that $H \in \mathfrak z (\mathfrak g) \Rightarrow H = 0$ since $\mathfrak g$ is semisimple. -Since there exists a root $\alpha$ such that $\alpha(H) \neq 0$ we conclude that $\varphi (H) \not\in \mathfrak h_1$ for all $H \in \mathfrak h_1 \setminus \{0\}$. Then $\mathfrak h_2 = \varphi (\mathfrak h_1)$ is a Cartan subalgebra that satisfies the required properties. - -Edit: 14/02/2020 -In order to check that $\varphi (H) \not\in \mathfrak h_1$. Note that if $\alpha(H)\neq 0$, then $\text{ad}(X_\alpha)(H) = \alpha(H) X_\alpha \in \mathfrak g_\alpha $. 
Let $\pi_\alpha: \mathfrak g\to \mathfrak g_{\alpha}$ be the natural projection into the subspace $\mathfrak g_{\alpha}$. Since $[\mathfrak g_{\alpha},\mathfrak g_{\beta}] = \mathfrak g_{\alpha + \beta}$ (if $\alpha + \beta$ is not a root the $g_{\alpha + \beta}:= \{0\}$) one can conclude that -$$\pi_\alpha \left(e^{\mathrm{ad}(X_\alpha)}(H)\right) =\pi_\alpha \left(\sum_{n=0}^{m} \frac{1}{n!}\mathrm{ad}(X_\alpha)^{n}(H)\right) =\pi_\alpha \left(H + \alpha(H) X_\alpha + ...\right) = \alpha(H) X_\alpha. $$ -Since $\varphi$ is the product of exponencials of $\text{ad}(X_\beta)$ and we are considering only positive roots. Coupled with the fact that the identity matrix appears summed up in the operator $e^{\mathrm{ad}(X_\beta)}$, i.e. $$e^{\mathrm{ad}(X_\beta)} =\color{blue}{\text{Id}} + \text{ad}(X_\beta) + ... + \frac{1}{n!}\text{ad}(X_\beta)^n.$$ -This implies that the term $\alpha(H)X_\alpha$ cannot be vanished by the successive application of the operators $e^{\text{ad}(X_\beta)}$, $\beta>0$. Therefore -$$\pi_\alpha(\varphi(H)) =\pi_{\alpha}\left( e^{(\text {ad} (X_{\alpha_1})} \cdot \ldots\cdot e^{\text {ad} (X_{\alpha_n})}(H)\right)\neq 0,$$ -implying $\varphi(H)\not\in\mathfrak h_1$, because $g_\alpha$ is l.i. with the subspace $\mathfrak h_1$.<|endoftext|> -TITLE: Describe the structure of the Sylow $2$-subgroups of the symmetric group of degree $22$ -QUESTION [8 upvotes]: Describe the structure of the Sylow $2$-subgroups of the symmetric group of degree $22$. - -The only thing I've managed to deduce about the structure of $P\in \operatorname{Syl}_p(G)$ is that $|P| = 2^{12}$. -Help please :) -edit: I obviously can't count. Silly, I (for some reason) only counted $8$ as one $2$ and $16$ as one $2$. But as far as I can tell now, $2^{17}$ is the highest power of $2$ dividing $22$. - -REPLY [6 votes]: I very much endorse Ted's answer. Just in case you appreciate a concrete list of generators and/or want a very elementary approach, I will give you such a list while walking you through this exercise. The size of the Sylow 2-subgroup of $S_n$ (as a function of $n$) grows whenever $n$ is even. As you observed (by stuyding the highest power of two that divides $n!$) something special happens, whenever we reach a power of two. -Start out with the permutation $g_1=(12)$ that generates the Sylow 2-subgroup $P_1$ of $S_2$. That will also do for $S_3$, but let's spend some time with $S_4$, and see how we can build its Sylow 2-subgroup, call it $P_2$, out of $P_1$. Consider the permutation $g_2=(13)(24)$. Notice that it interchanges the "lower half" ($=\{1,2\}$) with the "upper half" ($=\{3,4\}$) of the set $\{1,2,3,4\}$ elementwise. We see that the permutation $g_1'=g_2g_1g_2^{-1}=(34)$ alone generates a copy of $P_1$, call it $P_1'$, that acts on the upper half, i.e. a Sylow 2-subgroup of $\mathrm{Sym}(\{3,4\})$. The two groups $P_1$ and $P_1'$ act on disjoint subsets (=the two halves), so they commute with each other, and we can form their direct product $P_1\times P_1'$ inside $S_4$. -As $g_2$ has order two, repeating the conjugation gives back $g_1=g_2g_1'g_2^{-1}$. This means that $g_2$ is in the normalizer of the group $P_1\times P_1'$. It follows that the subset -$$ -P_2=(P_1\times P_1')\cup g_2(P_1\times P_1') -$$ -is closed under multiplication and, hence a subgroup of $S_4$ of the desired size 8. From the above relations it follows that $P_2$ is generated by $g_1$ and $g_2$. We also recover the fact that $P_2$ is the dihedral group: for example $g_1g_2=(1324)$ is a 4-cycle. 
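-If you want a machine check of these hand computations, here is a minimal sketch in Python with SymPy. It is not part of the original argument: it assumes the relabeling of the points $1,\dots,22$ as $0,\dots,21$, and the generators $g_3$, $g_4$ and the three generators on $\{17,\dots,22\}$ are the ones constructed in the rest of this answer.
-
-    from sympy.combinatorics import Permutation, PermutationGroup
-
-    n = 22  # degree of the symmetric group, points relabeled 0..21
-    g1 = Permutation([[0, 1]], size=n)                      # (12)
-    g2 = Permutation([[0, 2], [1, 3]], size=n)              # (13)(24)
-    g3 = Permutation([[0, 4], [1, 5], [2, 6], [3, 7]], size=n)
-    g4 = Permutation([[i, i + 8] for i in range(8)], size=n)
-    h1 = Permutation([[16, 17]], size=n)                    # (17;18)
-    h2 = Permutation([[16, 18], [17, 19]], size=n)          # (17;19)(18;20)
-    h3 = Permutation([[20, 21]], size=n)                    # (21;22)
-
-    print(PermutationGroup([g1, g2]).order())               # 8, the group P_2 above
-    P = PermutationGroup([g1, g2, g3, g4, h1, h2, h3])
-    print(P.order() == 2**19)                               # True
-
-Both outputs agree with the orders $8$ and $2^{19}$ computed in this answer.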
Do observe that you need to number the vertices of a square in a not the most obvious way to realize $P_2$ as the dihedral group. In what follows we forget about symmetries of geometric objects. -Next up is $S_6$. This is easy, because in addition to $P_2$ we only need to add yet another copy of $P_1$, namely the one generated by $g_1''=(56)$, call this group $P_1''$. The groups $P_2$ and $P_1''$ move different elements of $\{1,2,3,4,5,6\}$, so they commute inside $S_6$, and their direct product $P_2\times P_1''$ is a group of order 16. Hence it must be a Sylow 2-subgroup of $S_6$. -The story really begins with $S_8$. Let us introduce a new permutation $g_3=(15)(26)(37)(48)$. As earlier, we see that this order two permutation intechanges the lower half ($=\{1,2,3,4\}$) and the upper half ($=\{5,6,7,8\}$) of the set $\{1,2,3,4,5,6,7,8\}$ elementwise. Therefore the permutations $g_3g_1g_3^{-1}=(56)$ and $g_3g_2g_3^{-1}=(57)(68)$ generate a copy of $P_2$ acting on the upper half, call it $P_2'$. As in the case of $S_4$ we see that $g_3$ normalizes the direct product $P_2\times P_2'$, and the -set -$$ -P_3=(P_2\times P_2')\cup g_3(P_2\times P_2') -$$ -is then a group of order $2\cdot8^2=2^7$. As $2^7$ is the highest power of two dividing $8!$, $P_3$ must be a Sylow 2-subgroup of $S_8$. Furthermore, the permutations $g_1,g_2,g_3$ already generate all of $P_3$. -Moving on we skip all the way to $S_{16}$, and introduce yet another permutation of order two -$$ -g_4=(19)(2A)(3B)(4C)(5D)(6E)(7F)(8G). -$$ -To avoid any confusion I used single character substitutes for the integers in the range $[10,16]$, so $A=10$, $B=11$, $\ldots, G=16$, and $g_4$ interchanges the lower half (1 to 8) with the upper half (9 to 16). It's the same story again. The group -$P_3'=g_4P_3g_4^{-1}$ is a copy of $P_3$ acting on the upper half, i.e. a Sylow 2-subgroup of $\mathrm{Sym}(\{9,10,\ldots,16\})$. Also the set -$$ -P_4=(P_3\times P_3')\cup g_4(P_3\times P_3') -$$ -is easily seen to be a subgroup of $S_{16}$. It is generated by $g_1,g_2,g_3,g_4$ and its order is $2\cdot(2^{7})^2=2^{15}$ is just right for it to be a Sylow 2-subgroup of $S_{16}$. -In the end we get a Sylow 2-subgroup of $S_{22}$ as a direct product of three subgroups: -a copy of $P_4$ acting on the subset $\{1,2,\ldots,16\}$, a copy of $P_2$ acting on the subset $\{17,18,19,20\}$ and finally a copy of $P_1$ acting on the subset $\{21,22\}$. This group is generated by $g_1,g_2,g_3,g_4$, $(17;18)$, $(17;19)(18;20)$ and $(21;22)$. -Its order is $2^{15}\cdot2^3\cdot2=2^{19}$, which is also the highest power of two dividing $22!$.<|endoftext|> -TITLE: Unique weak solution to the biharmonic equation -QUESTION [14 upvotes]: I am attempting to solve some problems from Evans, I need some help with the following question. - -Suppose $u\in H^2_0(\Omega)$, where $\Omega$ is open, bounded subset of $\mathbb{R}^n$. - -How can I solve the biharmonic equation - $$\begin{cases} -\Delta^2u=f \quad\text{in } \Omega, \\ -u =\frac {\partial u } {\partial n }=0\quad \text{on }\partial\Omega. -\end{cases} -$$ where $n$ is the normal vector such that $\int _\Omega \Delta u \Delta v \, \,dx =\int _\Omega fv $ for all $v\in H^2_0(\Omega)$. -Given $f \in L^2(\Omega)$ , and prove that the weak solution is unique. - - -Any kind of help would be great. - -REPLY [22 votes]: Suppose that $u \in C_0^\infty(\Omega)$. 
-Then $$\int_\Omega |D^2 u|^2 \, dx = \int_\Omega \sum_{j,k=1}^n (u_{x_jx_k})^2 \, dx = \sum_{j,k=1}^n \int_\Omega u_{x_jx_k} u_{x_j x_k} \, dx.$$ You can integrate by parts twice to get $$\int_\Omega u_{x_jx_k} u_{x_jx_k} \, dx = - \int_\Omega u_{x_jx_kx_j}u_{x_k} \, dx = \int_\Omega u_{x_j x_j}u_{x_kx_k}\, dx$$ taking into account that $u$ is smooth and vanishes near the boundary of $\Omega$. Thus -$$\int_\Omega |D^2 u|^2 \, dx = \sum_{j,k=1}^n \int_\Omega u_{x_j x_j}u_{x_kx_k}\, dx = \int_\Omega \left( \sum_{j=1}^n u_{x_jx_j} \right) \left( \sum_{k=1}^n u_{x_k x_k} \right) \, dx = \int_\Omega |\Delta u|^2 \, dx.$$ -Thus $\|D^2 u\|_2^2 = \|\Delta u\|_2^2$. You can use the Poincare inequality to find a constant $C = C(n,\Omega)$ with the property that $\|u\|_2^2 \le C \|Du\|_2^2.$ -On the other hand, for any $\epsilon > 0$ you have $$\|Du\|_2^2 = \int_\Omega |Du|^2 \, dx = \int_\Omega Du \cdot Du \, dx = - \int_\Omega u (\Delta u) \, dx \le \frac \epsilon 2 \|u\|_2^2 + \frac 1{2\epsilon} \|\Delta u\|_2^2$$ by Young's inequality. Thus -$$\|Du\|_2^2 \le \frac{\epsilon C}{2} \|D u\|_2^2 + \frac{1}{2\epsilon} \|\Delta u\|_2^2.$$ With e.g. $\epsilon = \dfrac 1 C$ it follows that $\|Du\|_2^2 \le \dfrac{C}{2} \|\Delta u\|_2^2$ and consequently $\|u\|_2^2 \le \dfrac{C^2}{2} \|\Delta u\|_2^2$. -Finally we obtain $$ \|u\|_2^2 + \|Du\|_2^2 + \|D^2u\|_2^2 \le \left(1 + \frac C2 + \frac{C^2}{2} \right) \|\Delta u\|_2^2.$$ This can be extended to $u \in H_0^2(\Omega)$ using the density of $C_0^\infty(\Omega)$ in that space.<|endoftext|> -TITLE: The distinction between infinitely differentiable function and real analytic function -QUESTION [16 upvotes]: I have known that all the real analytic functions are infinitely differentiable. -On the other hand, I know that there exists a function that is infinitely differentiable but not real analytic. For example, -$$f(x) = -\begin{cases} -\exp(-1/x), & \mbox{if }x>0 \\ -0, & \mbox{if }x\le0 -\end{cases}$$ -is such a function. -However, the function above is such a strange function. I cannot see the distinction between infinitely differentiable function and real analytic function clearly or intuitively. -Can anyone explain more clearly about the distinction between the two classes? - -REPLY [7 votes]: A real analytic function, by definition, admits a power series representation in the neighbourhood of each point in it's domain of definition, that is, it can be written as a power series. This is a very severe constraint, analytic functions, as an example, are constant everywhere (on the affected component of the domain) if they are constant on an open set. The latter is not true for functions which are 'merely' infinitely often differentiable (smooth), you can have smooth functions with compact support (which are very important tools in analysis) -- the example you wrote down is often used to construct such functions. -Figuring out whether a given smooth function is real analytic is usually much more difficult to detect than, e.g., in the case of complex analytic functions, which are just those functions which are complex differentiable. One of the few tools available to show a function is real analytic is the theorem which tells you when the Taylor series of $f$ actually converges to $f$. -(Clearly each complex analytic function gives rise to real analytic ones, so many examples are known. Similarly, harmonic function, i.e. 
solutions to $\Delta u = 0$ are real analytic, even in higher dimensions.)<|endoftext|>
-TITLE: Cauchy-Product of non-absolutely convergent series
-QUESTION [7 upvotes]: While grading some basic coursework on analysis, I read an argument that a Cauchy product of two series that converge but not absolutely can never converge, i.e. if $\sum a_n$, $\sum b_n$ converge but not absolutely, the series $\sum c_n$ with $$c_n= \sum_{k=0}^n a_{n-k}b_k$$ diverges.
-Although we didn't have any theorem in the course stating something like this, it made me wonder if it was true.

-REPLY [4 votes]: As Davide Giraudo has said in the comments, we can find a counterexample by using $a_k = b_k = (-1)^k/k$ for $k\geq 1$ and $a_0=b_0=0.$ In that case we compute $$c_n = \sum_{k=0}^n a_{n-k} b_k = \sum_{k=1}^{n-1} \frac{(-1)^{n-k}}{n-k} \frac{(-1)^k}{k} = (-1)^n \sum_{k=1}^{n-1} \frac{1}{k(n-k)}$$
-$$ = \frac{(-1)^n}{n} \sum_{k=1}^{n-1} \frac{n-k+k}{k(n-k)} = \frac{(-1)^n}{n} \sum_{k=1}^{n-1} \left( \frac{1}{k} + \frac{1}{n-k}\right) = 2\frac{(-1)^n}{n} \sum_{k=1}^{n-1} \frac{1}{k}.$$
-We show $\sum c_n$ converges by applying the Leibniz criterion. $c_n \to 0$ is clear, so we need only verify that $d_n = \frac{1}{n} \sum_{k=1}^{n-1}\frac{1}{k} $ is monotonically decreasing for sufficiently large $n.$
-We compute $$d_{n+1}- d_n = \frac{1}{n+1} \sum_{k=1}^n \frac{1}{k} - \frac{1}{n} \sum_{k=1}^{n-1} \frac{1}{k}= \frac{1}{n(n+1)} - \frac{1}{n(n+1)}\sum_{k=1}^{n-1} \frac{1}{k}.$$
-Since $\displaystyle \sum_{k=1}^{n-1} \frac{1}{k} \geq 1$ for all $n\geq 2$, this difference is $\le 0$, so $d_n$ is indeed monotone.<|endoftext|>
-TITLE: Quotient of two free abelian groups of the same rank is finite?
-QUESTION [10 upvotes]: Let $A,B$ be abelian groups such that $B\subseteq A$ and $A,B$ both are free of rank $n$. I want to show that $|A/B|$ is finite, or equivalently that $[A:B]$ (the index of $B$ in $A$) is finite.
-For example, if $A=\mathbb{Z}^n$ and $B=(2\mathbb{Z})^n$ we have that $[A:B]=2^n$ (correct me if I'm wrong).

-REPLY [21 votes]: Let $A\cong B\cong \mathbb{Z}^n$, with $B\subseteq A$. Then we have the short exact sequence
-$$ 0\rightarrow B\rightarrow A\rightarrow A/B\rightarrow 0.$$
-Tensoring with $\mathbb{Q}$ shows $A/B$ has rank $0$, so is torsion.<|endoftext|>
-TITLE: Continuous function a.e.
-QUESTION [6 upvotes]: I'm not sure whether these two statements are the same thing:
-"$f$ equals a continuous function a.e." and "$f$ is continuous a.e."
-The concept is quite abstract, so I want to find some counterexamples:

-a function $f$ and a continuous function $g$ s.t. $f=g$ a.e. and $f$ is not continuous a.e.
-a function $f$ continuous a.e. s.t. there exists no continuous function $g$ with $f=g$ a.e.

-In addition I want to find a Riemann integrable function which has an uncountable set of discontinuities.
-Is there a nice example that makes these concepts clear? If so, please point it out.

-REPLY [10 votes]: Function continuous a.e. but not equal a.e. to a continuous function: $f(x) = 0$ if $x < 0$ and $f(x) = 1$ if $x \geq 0$.
-Function equal a.e. to a continuous function but not continuous a.e.: $f$ = the characteristic function of the rationals.
-Riemann integrable function with discontinuities on an uncountable set: $f$ = the characteristic function of the Cantor set in $[0,1]$. Let $C$ be the Cantor set. Then $C$ is perfect so it is uncountable and closed. Hence $f$ is continuous on the complement of $C$ because these points are all interior points.
But since $C$ contains no open interval, every point of $C$ is a boundary point, so $f$ is discontinuous on $C$. $C$ has measure zero, so $f$ is Riemann integrable.<|endoftext|> -TITLE: If $ f \in C_0^\infty$, then is $f$ uniformly continuous? -QUESTION [5 upvotes]: If $ f \in C_0^\infty=\{ g: g\in C^\infty, \lim_{|x|\rightarrow \infty}g(x)=0\}$, then is $f$ uniformly continuous on $\mathbb R$? -($ f : \mathbb R \to \mathbb R $) - -REPLY [5 votes]: HINTs - -A continuous function on a compact interval is uniformly continuous. -$\lim_{|x| \to \infty} f(x) = 0$ means that $\forall \epsilon...$ -Split up the domain to use these two properties.<|endoftext|> -TITLE: Geodesics that self-intersect at finitely many points -QUESTION [7 upvotes]: Notations -$M$ will denote a smooth manifold and $\nabla$ an affine connection on it. A smooth curve $\gamma\colon I \to M$ will be called a geodesic if it is $\nabla$-parallel along itself, that is $\nabla_{\dot{\gamma}(t)}\dot{\gamma}=0$ for every $t \in I$. A geodesic will be said to be maximal if every proper extension of it is not a geodesic. - -It is easy to find examples of maximal geodesics which do not self-intersect, like -lines in Euclidean plane, or that intersect in infinitely many points, like great circles on the sphere. On the contrary I cannot find examples of geodesics which self-intersect at finitely many points, like the curve below: - - -Question Is it possible to determine $M$ and $\nabla$ in such a way that one of the resulting maximal geodesics intersects at finitely many points? - -Thank you. - -REPLY [7 votes]: I would like to give an answer with a certain mechanical flavor. -Let us consider the Lagrangian $L=K-V$ where $K=\dfrac{x'^2+y'^2}{2}$ and $V=\dfrac{x^2+ a^2y^2}{2},$ with $a\in\mathbb{R}_+,$ which describe a particle in the plane acted upon by a anisotropic spring. -The equations of motion are elementarly solved obtaining $$x=A\cos(t+\phi),\ y(t)=B\sin(at+\psi),$$ with $A,B,\phi,\psi$ arbitrary constants of integrations. -The trajectories of motion are the well-known Lissajous curves; if $a\in\mathbb{Q}$ then they are closed with multiple self-intersctions (see the figure below), otherwise they fill densely a domain of the plane. - -Why does this argument answer you question? -By Maupertius' principle, for any $e>0,$ in the invariant submanifold $\Omega:=\{(x,y):V(x,y) -TITLE: For which $n$, $G$ is abelian? -QUESTION [9 upvotes]: My question is: - -For Which natural numbers $n$, a finite group $G$ of order $n$ is an abelian group? - -Obviouslyو for $n≤4$ and when $n$ is a prime number, we have $G$ is abelian. Can we consider any other restrictions or conditions for $n$ to have the above statement or the group itself should have certain structure as well? Thanks. - -REPLY [10 votes]: Every group of order $n$ is abelian iff $n$ is a cubefree nilpotent number. -We say that $n$ is a nilpotent number if when we factor $n = p_1^{a_1} \cdots p_r^{a_r}$ we have -$p_i^k \not \equiv 1 \bmod{p_j}$ for all $1 \leq k \leq a_i$. -(adapted from an answer by Pete Clark.)<|endoftext|> -TITLE: Open set whose boundary is not a null set -QUESTION [14 upvotes]: I've just seen a theorm about a bounded set $A \subset \mathbb{R}^n$. -$$\chi_A \text{ is Riemann integrable} \Longleftrightarrow \partial A \text{ is a null set}$$ -Then, I wonder if there's any open set whose boundary is not a null set. Can you give me some example for that? - -REPLY [13 votes]: Consider $A$ a fat Cantor set, for example a set of measure $\frac13$. 
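-(One concrete construction, among several possible: starting from $[0,1]$, at stage $n\ge 1$ remove from each of the $2^{n-1}$ remaining closed intervals a concentric open interval of length $\frac{4}{3}\cdot 4^{-n}$; an easy induction shows the removed intervals always fit, and the total length removed is $\sum_{n\ge 1} 2^{n-1}\cdot\frac{4}{3}\cdot 4^{-n}=\frac{2}{3}$, leaving a closed, nowhere dense set of measure $\frac{1}{3}$.)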
$A$ is closed, so its complement is open, and also $A$ is nowhere dense therefore its complement is dense. That is to say that $U=[0,1]\setminus A$ is open, so for the Lebesgue measure $m$ we have: $$m(\partial U)=m([0,1]\setminus U)=m(A)=\frac13.$$<|endoftext|> -TITLE: Unramified p-adic extension implies Galois -QUESTION [6 upvotes]: I am looking for a short proof that if $L \supset K$ are finite extensions of the p-adic numbers $\mathbb{Q}_p$, then if $L/K$ is unramified, $L/K$ is Galois. -I think the proof is related to somehow injecting $Gal(L/K) \hookrightarrow Gal(k_L / k_K) $ where $k_L$ and $k_K$ are the respective residue fields (possibly using the Teichmuller map); then $f=[k_L:k_K] = [L:K]$ by the fact the extension is unramified, so we would get surjectivity by counting degrees. However, I can't quite put it all together. -I have seen a result somewhere about uniqueness of unramified extensions (adjoining a root of unity $\zeta_m$ or something along those lines), but I can't recall the result exactly. I would be very grateful for some help - thanks in advance. - -REPLY [2 votes]: Let $A$ and $B$ be the rings of integers of $K$ and $L$ respectively. -Let $\mathfrak{p}$ and $\mathfrak{P}$ the unique maximal ideals of $A$ and $B$ respectively. -Let $F = A/\mathfrak{p}$, $F' = B/\mathfrak{P}$. -For any $\alpha \in B$, we denote by $\bar \alpha$ the image of $\alpha$ by the canonical homomorphism $B \rightarrow F'$. -There exists $\theta \in B$ such that $L = K(\theta)$. -Let $f(X)$ be the minimal polynomial of $\theta$ over $K$. -Then $f(X) \in A[X]$. -Let $\bar f(X)$ be the reduction of $f(X)$ mod $\mathfrak{p}$. -Since $f(\theta) = 0$, $\bar f(\bar \theta) = 0$. -Let $n = [L : K]$. -Then the degree of $f(X)$ is $n$, hence the degree of $\bar f(X)$ is also $n$. -Since $L/K$ is unramified, $n = [F' : F]$. -Hence $\bar f(X)$ is the minimal polynomial of $\bar \theta$ over $F$. -Since $F'/F$ is Galois, $\bar f(X)$ splits in $F'$. -Hence $f(X)$ splits in $L$ by Hensel's lemma and we are done.<|endoftext|> -TITLE: Does curvature zero mean the bundle is trivial? -QUESTION [20 upvotes]: Let $P\to M$ be some Bundle over $M$. I know that, if $P$ is a trivial bundle it must have curvature zero. -Say I have the converse, my curvature is zero. Does this imply that the bundle ist trivial? -If not, what can actually be said about the bundle, chern classes or it's connection if the curvature is zero? (I'm specifically interested in non-abelian principal G-bundles) - -REPLY [11 votes]: Kobayashi-Nomizu discuss flat connections in Section II.9 of the first volume of their book and in particular derive the following as a corollary of earlier results. -Let $P \to M$ be a principal bundle equipped with a flat connection. If $M$ is simply connected, then $P$ is trivial.<|endoftext|> -TITLE: Product of all prime numbers upto some prime $p$ -QUESTION [5 upvotes]: Let $p$ be a prime number. Denote by $P$ the set of all primes which are not greater than $p$. -Is there a well known estimation of the product of all prime numbers in $P$ (i.e. $\prod_{q\in P}q$)? - -REPLY [4 votes]: What you are looking for is $\exp(\theta(x))$ where $\theta(x)$ is the first Chebyshev function. $$\theta(x) = \sum_{\underset{p \leq x}{p-\text{prime}}} \log(p)$$It is also related to the primorial, which is the product of the first $n$ primes. -The fact that $\theta(x) \sim x$ i.e. $\theta(x) = x + o(x)$ is equivalent to the prime number theorem. We can get a better quantification of $o(x)$ in-fact. 
This is obtained while proving the prime number theorem. We can get that $$\theta(x) = x + \mathcal{O}(x \exp(-c (\log x)^{\lambda}))$$ for some constant $c$ and $\lambda$. -Also, the fact that $$\theta(x) = x + \mathcal{O}(x^{1/2+\epsilon})$$ is equivalent to Riemann hypothesis, where the power $\epsilon$ takes into account some $\log$ factors inside it. -EDIT -I am adding this to answer Tom's comment above. The claim that $$p_n\# =\exp((1+\mathcal{o}(1) )n\log n)$$ is equivalent to the prime number theorem. You can google for proof of prime number theorem. There are two main proofs. The first main proof was by Jacques Hadamard and Charles Jean de la Vallée-Poussin in 1896 which uses complex analysis. The second main proof was by Atle Selberg and Paul Erdős in 1949 and is an "elementary" proof. ("elementary" here denotes that the proof doesn't use complex analysis. However, the elementary proof is supposedly much harder than any proof using complex analysis.)<|endoftext|> -TITLE: Can an ordered field be finite? -QUESTION [11 upvotes]: I came across this question in a calculus book. - -Is it possible to prove that an ordered field must be infinite? Also - does this mean that there is only one such field? - -Thanks - -REPLY [3 votes]: Hint $\ $ Linearly ordered groups are torsion-free: $\rm\: 0\ne n\in \mathbb N,$ $\rm\:g>0 \:\Rightarrow\: n\cdot g = g +\cdots + g > 0,\:$ since positives are closed under addition. Conversely, a torsion-free commutative group can be linearly ordered (Levi 1942).<|endoftext|> -TITLE: Tricky radius of convergence: $\sum\limits_{n=0}^\infty\cos\left(\alpha\sqrt{1+n^2}\right)z^n$ -QUESTION [7 upvotes]: I encountered the following power series, and while I know a couple of ways to determine radius of convergence, I wasn't able to figure out how to evaluate the appropriate limit to get said radius. Can anyone help? - -What is the radius of convergence of the power series $$\sum_{n=0}^\infty\cos\left(\alpha\sqrt{1+n^2}\right)z^n,$$ where $\alpha$ is any real number? What if $\alpha$ is a complex number? - -REPLY [4 votes]: Hint: $\sqrt{1+n^2} = n + 1/(2n) + O(1/n^3)$.<|endoftext|> -TITLE: Multiple choice question: Let $f$ be an entire function such that $\lim_{|z|\rightarrow\infty}|f(z)|$ = $\infty$. -QUESTION [9 upvotes]: Let $\displaystyle f$ be an entire function such that $$\lim_{|z|\rightarrow \infty} |f(z)| = \infty .$$ Then, - -$f(\frac {1}{z})$ has an essential singularity at 0. -$f$ cannot be a polynomial. -$f$ has finitely many zeros. -$f(\frac {1}{z})$ has a pole at 0. - -Please suggest which of the options seem correct. -I am thinking that $f$ can be a polynomial and so option (2) does not hold. -Further, if $f(z) = \sin z $ then it has infinitely many zeros... which rules out (3) while for $f(z) = z$ indicates that it has a simple pole at $0$ and option (4) seems correct. - -REPLY [5 votes]: You are correct about $2$, and given that, you should be able to determine whether $1$ is true or not--consider your example $f(z)=z$. -Your example $f(z)=\sin z$ does not meet the given criteria. Note that if there are infinitely many zeros, then the set of zeros is necessarily unbounded, for if not, it has a limit point, and so the function is identically zero, contradicting our assumption that $\lim_{|z|\to\infty}|f(z)|=\infty$. But then we have a sequence $\{z_n\}$ such that $|z_n|\to\infty$ but $f(z_n)=0$ for all $n$, so that once again contradicts our assumption. That takes care of $3$. 
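-(In fact $3$ and $4$ are linked: granting $4$ below, the pole of $f(1/z)$ at $0$ forces the power series of $f$ to terminate, so $f$ must be a polynomial of degree at least $1$, and then $3$ also follows from the fundamental theorem of algebra.)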
-For $4$, note that since $|1/z|\to\infty$ as $z\to 0$, then by assumption, $\lim_{|z|\to 0}|f(1/z)|=\infty$, which means that $f(1/z)$ has a pole at $z=0$. (H/T to J.J. for reminding me of that characteristic of poles.)<|endoftext|>
-TITLE: Evaluate $\lim_{x \to \infty} \frac{1}{x} \int_x^{4x} \cos\left(\frac{1}{t}\right) \mbox {d}t$
-QUESTION [5 upvotes]: Evaluate $$\lim_{x \to \infty} \frac{1}{x} \int_x^{4x} \cos\left(\frac{1}{t}\right) \mbox {d}t$$
-
-I was given the suggestion to define two functions as $g(x) = x$ and $f(x) = \int_x^{4x}\cos\left(\frac{1}{t}\right)dt$ so that if I could prove that both went to $\infty$ as $x$ went to $\infty$, then I could use L'Hôpital's rule on $\frac{f(x)}{g(x)}$; but I couldn't seem to do it for $f(x)$.
-I can see that the limit is 3 if I just go ahead and differentiate both functions and take the ratio of the limits, but of course this is useless without first establishing my original indeterminate form.
-How do I show that $\frac{f(x)}{g(x)}$ is an indeterminate form? Or how else might I evaluate the original limit?
-
-REPLY [2 votes]: For $x\ge\dfrac2\pi$, Dominated Convergence says
-$$
-\begin{align}
-\lim_{x\to\infty}\frac1x\int_x^{4x}\cos\left(\frac1t\right)\,\mathrm{d}t
-&=\lim_{x\to\infty}\int_1^4\cos\left(\frac1{xt}\right)\,\mathrm{d}t\\
-&=\int_1^41\,\mathrm{d}t\\[9pt]
-&=3
-\end{align}
-$$<|endoftext|>
-TITLE: Putting ${n \choose 0} + {n \choose 5} + {n \choose 10} + \cdots + {n \choose 5k} + \cdots$ in a closed form
-QUESTION [7 upvotes]: As the title says, I'm trying to transform $\displaystyle{n \choose 0} + {n \choose 5} + {n \choose 10} + \cdots + {n \choose 5k} + \cdots$ into a closed form. My work:
-$\displaystyle\left(1 + \exp\frac{2i\pi}{5} \right )^n = \displaystyle\sum_{p=0}^{n}\binom{n}{p}\exp\left(\frac{p\cdot2i\pi}{5} \right)$
-$\displaystyle=\binom{n}{0} + \binom{n}{1}\exp\left(\frac{1\cdot2i\pi}{5} \right) + \binom{n}{2}\exp\left(\frac{2\cdot2i\pi}{5} \right) + \binom{n}{3}\exp\left(\frac{3\cdot2i\pi}{5} \right) + \binom{n}{4}\exp\left(\frac{4\cdot2i\pi}{5} \right) + \binom{n}{5} + \cdots = \left[\binom{n}{0} + \binom{n}{5} + \binom{n}{10} + \cdots\right ] + \exp\left(\frac{2i\pi}{5} \right)\left[\binom{n}{1} + \binom{n}{6} + \binom{n}{11} + \cdots\right ] + \exp\left(\frac{4i\pi}{5} \right)\left[\binom{n}{2} + \binom{n}{7} + \binom{n}{12} + \cdots\right ] + \exp\left(\frac{6i\pi}{5} \right)\left[\binom{n}{3} + \binom{n}{8} + \binom{n}{13} + \cdots\right ] + \exp\left(\frac{8i\pi}{5} \right)\left[\binom{n}{4} + \binom{n}{9} + \binom{n}{14} + \cdots\right ]$
-I'll write $\left[\binom{n}{0} + \binom{n}{5} + \binom{n}{10} + \cdots\right ] = k$, $\left[\binom{n}{1} + \binom{n}{6} + \binom{n}{11} + \cdots\right ] = u$, $\left[\binom{n}{2} + \binom{n}{7} + \binom{n}{12} + \cdots\right ] = v$, $\left[\binom{n}{3} + \binom{n}{8} + \binom{n}{13} + \cdots\right ] = w$ and $\left[\binom{n}{4} + \binom{n}{9} + \binom{n}{14} + \cdots\right ] = z$.
Thus -$\displaystyle\left(1 + \exp\frac{2i\pi}{5} \right )^n = k + u\cdot\exp\frac{2i\pi}{5} + v\cdot\exp\frac{4i\pi}{5} + w\cdot \exp\frac{6i\pi}{5} + z\cdot\exp\frac{8i\pi}{5} = k + u\cdot\left (\cos\frac{2\pi}{5} + i\cdot\sin\frac{2\pi}{5} \right ) + v\cdot\left (\cos\frac{4\pi}{5} + i.\sin\frac{4\pi}{5} \right ) + w\cdot\left (\cos\frac{6\pi}{5} + i.\sin\frac{6\pi}{5} \right ) + z\cdot\left (\cos\frac{8\pi}{5} + i.\sin\frac{8\pi}{5} \right )$ -Noting that $\cos\frac{2\pi}{5} = \cos\frac{8\pi}{5}$, $\cos\frac{4\pi}{5} = \cos\frac{6\pi}{5}$, $\sin\frac{2\pi}{5} = -\sin\frac{8\pi}{5}$ and $\sin\frac{4\pi}{5} = -\sin\frac{6\pi}{5}$: -$\displaystyle\left(1 + \exp\frac{2i\pi}{5} \right )^n = k + \left(u + z\right)\cos\frac{2\pi}{5} + i\cdot\left(u - z \right)\sin\frac{2\pi}{5} + \left(v + w\right)\cos\frac{4\pi}{5} + i\cdot\left(v - w \right)\sin\frac{4\pi}{5} = \left(k + \left(u + z\right)\cos\frac{2\pi}{5} + \left(v + w\right)\cos\frac{4\pi}{5}\right) + i\cdot\left(\left(u - z \right)\sin\frac{2\pi}{5} + \cdot\left(v - w \right)\sin\frac{4\pi}{5} \right)$ -But $\displaystyle\left(1 + \exp\frac{2i\pi}{5} \right )^n = \left(2\cos\left(\frac{\pi}{5} \right)\cdot\exp\left(\frac{i\pi}{5}\right)\right)^n = \left(2^n\cos^n\left(\frac{\pi}{5} \right)\cdot\exp\left(\frac{ni\pi}{5}\right)\right) = \left(2^n\cos^n\frac{\pi}{5} \right)\left(\exp\left(\frac{ni\pi}{5}\right)\right) = \left(2^n\cos^n\frac{\pi}{5} \right)\left(\cos\frac{n\pi}{5} + i.\sin\frac{n\pi}{5} \right ) = \left(2^n\cos^n\frac{\pi}{5}\cos\frac{n\pi}{5} \right) + i\cdot \left(2^n\cos^n\frac{\pi}{5}\sin\frac{n\pi}{5} \right)$ -So, -$\displaystyle k + \left(u + z\right)\cos\frac{2\pi}{5} + \left(v + w\right)\cos\frac{4\pi}{5} = 2^n\cos^n\frac{\pi}{5}\cos\frac{n\pi}{5}$ -and -$\displaystyle\left(u - z \right)\sin\frac{2\pi}{5} + \left(v - w \right)\sin\frac{4\pi}{5} = 2^n\cos^n\frac{\pi}{5}\sin\frac{n\pi}{5}$ -and I'm stuck here. I noted that $k + u + v + w + z = 2^n$ but I couldn't isolate $k$. So, any help finishing this result will be fully appreciated. Thanks. - -REPLY [5 votes]: As you know, $\sum_{j=0}^n {n \choose j} = 2^n$, and if $\omega$ is a primitive $5$'th root of unity -$\sum_{j=0}^n {n \choose j} \omega^j = (1 + \omega)^n$. Now $\sum_{i=0}^4 \omega^{ij} = 5$ if $j$ is divisible by $5$ and $0$ otherwise. So -$$\sum_{k} {n \choose {5k}} = \frac{1}{5} \sum_{i=0}^4 \sum_{j=0}^n {n \choose j} \omega^{ij} = \frac{1}{5} \sum_{i=0}^4 (1+\omega^i)^n$$<|endoftext|> -TITLE: Consistency strength: If Con($T+A$) implies Con($T+B$), can we infer anything about $A$ and $B$? -QUESTION [6 upvotes]: To be more specific, let $T$ be a first order theory and let $A$ and $B$ be two different first-order sentences, both in the same language as $T$ but independent of $T$. Additionally, suppose we have (working in some meta-theory) that Con($T+A$) implies Con($T+B$) (but not vice-versa). -$\textbf{Question}$: What, if anything, can we say about the sentences $A$ and $B$? -That is, can we only speak of the difference between $A$ and $B$ in terms of $T$? Are we limited to saying that adding $A$ to $T$ results in a theory which is "more likely" to derive a contradiction than adding $B$ to $T$? Or can we infer anything about $A$ and $B$ on their own? (Does one prove or refute the other? Might it be that $A$ has more first-order consequences than $B$ does?) 
-$\textbf{Follow-up question}$: Could it ever be the case that replacing $T$ with a different theory $S$ (also in the same language and unable to decide $A$ or $B$) leads to Con($S+B$) being equivalent to, or stronger than, Con($S+A$)? -Here is an example to better illustrate my questions: -It is known that the consistency strength of ZF together with the Axiom of Choice is exactly that of ZF: -$$\text{Con(ZF+AC)} \iff \text{Con(ZF)}$$ -However, the consitency strength of ZF together with the Axiom of Determinacy is very much greater: -$$\text{Con(ZF+AD)} \iff \text{Con(ZF}+\ \psi )$$ -where $\psi$ is the statement "there are infinitely many Woodin cardinals". -Do these relations of consitency strength tell us about AC and AD as sentences set apart from ZF? Is AD in some regard more powerful than AC (even though it's refutable in ZFC)? Could, for some new theory of sets $T$ in the same language as ZF, it be possible that $T$+AC is equiconsistent with $T$+AD? -The answers to these questions may turn out to be trivial, but I'm unable to settle on a satisfactory conclusion myself. Any responses offering a greater intuition about consistency strength would be very much appreciated. - -REPLY [2 votes]: This is really more of a philosophical overview, rather than a mathematical answer. $\newcommand{ZF}{\mathrm{ZF}}\newcommand{ZFC}{\mathrm{ZFC}}\newcommand{AC}{\mathrm{AC}}\newcommand{AD}{\mathrm{AD}}$ -Let us agree to work in $\ZF$ as our theory and $\ZFC$ as the meta-theory. First we consider the seemingly strange equiconsistency of $\ZF$ and $\ZFC$ alongside with many weak choice principles in between. -What does that tell us? It means that if $\ZF$ is our meta-theory, then whenever we happen to run into something which is a model of $\ZFC$ then we are guaranteed to run into a model of $\ZF+\lnot\AC$ somewhere in the universe. Often proofs of equiconsistency also tell us how to find such model, for example by forcing arguments or inner model arguments, etc. -Philosophically this means that if $\ZFC$ is true in some structure then $\ZF+\lnot\AC$ is true in another. Since $\ZF$ is a reasonable theory to consider, it tells us that there is no "leap of faith" if we assume $\ZFC$ instead. In particular it means that we are free of the worry that $\AC$ introduced some inconsistency to the system, that of course if $\ZF$ did not have one already. -From this we can deduce that Lebesgue and the school of people leading a mathematical life that $\AC$ is false are actually rejecting $\ZF$ as a whole, since from a proof of contradiction in $\ZFC$ we know how to generate a contradiction in $\ZF$. Then again, these proofs were given quite later than the early days of $\AC$ where people raged over its unthinkable consequences. -On the other hand, we have $\ZF+\AD$ which is infinitely stronger than $\ZF$ or $\ZFC$ in the consistency strength. This tells us that in contrast to the situation above even if we believe that $\ZF$ itself is consistent we still have to believe that there are large cardinals, and that there are many of them. So many that anything less than $\omega$ Woodin cardinals is not even close to enough. -This is quite the leap of faith, specifically since we know that anything that proves the consistency of $\ZF$ is quite the strong theory, and coming from outside set theory it may seem suspicious and borderline contradictory. However there are still arguments in favour of large cardinals, and either way these are interesting. 
- -Coda: -It seems nowadays like a reasonable idea to believe that the axioms of $\ZFC$ are consistent. I personally believe that if no shocking contradictions are found, in several decades most people will assume $\ZFC+\varphi$ where $\varphi$ is some large cardinal axiom which is strong enough to support category theory related constructions. -Until that happens, people may be wary of large cardinals, or indifferent to them.<|endoftext|> -TITLE: History of elliptic curves -QUESTION [20 upvotes]: In one sense elliptic curves are a rather modern object as some of its properties have been studied only in the last century or so. But in another sense there are a very classical object for studying Diophantine equations. For instance it is possible to be "using" them implicitly and proving facts about them without actually knowing the formal concept like Ramanujan did with modular forms. -So my question is what is the history behind elliptic curves? When was the notion formalized and by whom? Any references to this? - -REPLY [15 votes]: Clebsch, in the 1860s, proved that curves of genus 0 are parametrized by rational functions, and that those of genus 1 are parametrized by elliptic functions. Juel gave a geometric -interpretation of the group law in the 1890s, Poincare asked in 1901 whether the rational points on a curve of genus 1 are finitely generated, and Mordell proved this in the 1920s. -As for examples, integral solutions of $y^2 = x^3 - 2$ etc. were determined (without proof) -by Fermat, and Euler later solved it using algebraic numbers. There's a whole industry of -mathematicians who tried so solve such equations at the end of the 19th century (Lucas, Sylvester, B. Levi, etc.). -The modern theory took off in the 1930s with Hasse's work on the number of points on elliptic curves over finite fields, which subsequently was generalized by Weil with his conjectures.<|endoftext|> -TITLE: Freeman Dyson's example of an unprovable truth -QUESTION [11 upvotes]: Freeman Dyson has claimed that -$$\nexists m,n \operatorname{Reversed}(2^n) = m ^ 5 $$ -(where $\operatorname{Reversed}(l)$ just is the reverse of the digits of $l$ in base 10), is probably an example of an unprovable truth (source), and that even if it's not, there are many similar statements, some of which will be unprovable. As a heuristic argument, he says that a proof would have to rely on some pattern in the digits of powers of two, but those seem to be random. -Is this heuristic argument reasonable? I was surprised because I had thought that a search for undecidable arithmetic statements in Peano Arithmetic (let alone ZFC or something stronger) was rather challenging. But maybe that's only for (a) provably unprovable statements or (b) interesting statements. I'd tend to assume Dyson knows what he's talking about here, but am still curious. - -REPLY [7 votes]: There are many quotes from Dyson's paper/book in this note by Calude: Dyson Statements that Are Likely to Be True but Unprovable -In the end (looking at the quotes in that note), Dyson's argument seems to be: because the identity in question seems to be very "likely" to be true, it must be true. Of course that is not a very strong argument, even for a heuristic one. There are many true statements in mathematics that nevertheless would be extremely "unlikely". In other words, looking at the negations of these "unlikely" but true statements, there are many false statements in mathematics that would nevertheless be extremely "likely" in a naive sense. 
(For the moment we will assume the claims about "likelihood" and probabilities are correct; that is a separate problem I will discuss below). -For example, the probability that $e^{\pi i} = -1$ is zero, because the set of all $x$ with $x = -1$ has measure zero. Nevertheless $e^{\pi i}$ does equal $-1$. -The main issue is that there is a difference between "we don't understand the pattern" and "the pattern is random". There are occasionally heuristic arguments that use probabilistic methods in a compelling way, but based on the quotes I have seen Dyson's argument is not particularly compelling. -There are two other issues: - -Dyson is using the word "random" in an unusual way. In the normal sense of probability, the probability that an $(n+1)$-digit natural number ends with $n$ zeros goes to zero very quickly as $n$ increases, but $10^n$ does always end with $n$ zeros. It may be that Dyson clarifies this in his actual book, I don't know. But the precise definition of "random" needs to be clarified to make the argument sensible. -Calude quotes Dyson as saying "Any proof of the statement would -have to be based on some non-random property of the digits." In fact, if we had a clear definition of the notion of "random" involved (see above), then we might be able to use the very fact that the sequence is random to prove things about it. In other words we might be able to use an assumption that the identity does not hold to show that the sequence would not possess the appropriate form of randomness. This is not entirely hypothetical: in the study of Martin-Lof randomness, it is common to use the assumption that a sequence is Martin-Lof random as a hypothesis to prove facts about the sequence. In general, hypothetical arguments that claim "any possible proof" would have to take a very specific form tend not to be compelling.<|endoftext|> -TITLE: Can the argument of an algebraic number be an irrational number times pi? -QUESTION [8 upvotes]: This is mainly out of curiosity. Let $\nu$ be an algebraic number. Can Arg($\nu$) be of the form $\pi \times \mu$ for an irrational number $\mu$? - -REPLY [2 votes]: To extend @ChrisEagle’s answer without using Niven, let me point out that any nonreal Gaussian integer off the lines $y=\pm x$ will give an example of $z$ with $\arg z=\lambda\pi$ and $\lambda$ irrational, because of unique factorization in the Gaussian integers. For, notice that the $\lambda$ above is rational if and only if $z^m$ is real for some positive integer $m$. And yet a Gaussian integer is real if and only if it is of the form -$$ -2^r\prod_jq_j\prod_k(\rho_k\bar\rho_k)\,, -$$ -where the $q_j$ are ordinary primes congruent to $3$ modulo $4$ and each $\rho_k\bar\rho_k$ is an ordinary prime congrent to $1$ mod $4$, just as $13=(3+2i)(3-2i)$. Note that $\rho_k$ and $\bar\rho_k$ are different Gaussian primes, i.e. not related by a unit factor. So take your $z$ satisfying the conditions I specified at the outset; then its factorization into primes must involve at least one $\rho$ that’s not matched by a $\bar\rho$, and every power of $z$ satisfies this condition as well, so is not real.<|endoftext|> -TITLE: In which cases is the inverse of a matrix equal to its transpose? -QUESTION [49 upvotes]: In which cases is the inverse of a matrix equal to its transpose, that is, when do we have $A^{-1} = A^{T}$? Is it when $A$ is orthogonal? - -REPLY [62 votes]: If $A^{-1}=A^T$, then $A^TA=I$. This means that each column has unit length and is perpendicular to every other column. 
That means it is an orthonormal matrix.<|endoftext|> -TITLE: The $p$-adic integers as a profinite group -QUESTION [6 upvotes]: How to prove that if $\mathbb{Z}_p$ is the set of $p$-adic integers then $\displaystyle{\mathbb{Z}_p=\varprojlim\mathbb{Z}/p^n\mathbb{Z}}$ where the limit denotes the inverse limit? -$\mathbb{Z}_p$ is the inverse limit of the inverse system $(\mathbb{Z}/p^n\mathbb{Z}, f_{mn})_{\mathbb{N}}$, but I don't know what the $f_{mn}$ are. -Can someone help me? - -REPLY [4 votes]: Let me guess the definition of $p$-adic integers you got in mind is some $p$-power series. Now, forget it and let us consider some topology: -We DEFINE the $p$-adic integers to be the completion of certain metric on $\mathbb{Z}$, then the $p$-power series is a way to make completion, and the inverse limit is another way. However, the completion of a metric space is unique up to unique isometry. Now you can check the isometry is a ring isomorphism by hands.<|endoftext|> -TITLE: Book about ergodic theory, group actions and number theory -QUESTION [5 upvotes]: Does anyone know about an introductory book showing the intersection between ergodic theory, group actions and number theory? I have been looking for but it has been impossible to me. - -REPLY [3 votes]: I like the survey article "Interactions Between Ergodic Theory, Lie Groups, and. Number Theory" by Marina Ratner. Look it up in Google.<|endoftext|> -TITLE: Set theory puzzles - chess players and mathematicians -QUESTION [11 upvotes]: I'm looking at "Basic Set Theory" by A. Shen. The very first 2 problems are: 1) can the oldest mathematician among chess players and the oldest chess player among mathematicians be 2 different people? and 2) can the best mathematician among chess players and the best chess player among mathematicians be 2 different people? I think the answers are no, and yes, because a person can only have one age, but they can have separate aptitudes for chess playing and for math. Is this correct? - -REPLY [11 votes]: Yes, it’s correct. If $M$ is the set of mathematicians, and $C$ is the set of chess players, you’re looking rankings of the members of $M\cap C$. If for $x\in M\cap C$ we let $m(x)$ be $x$’s ranking among mathematicians, $c(x)$ be $x$’s ranking among chess players, and $a(x)$ be $x$’s age, then there is a unique $x_a\in M\cap C$ such that $$a(x_a)=\max\{a(x):x\in M\cap C\}\;,$$ but there can certainly be distinct $x_m,x_c\in M\cap C$ such that $$m(x_m)=\max\{m(x):x\in M\cap C\}$$ and $$c(x_c)=\max\{c(x):x\in M\cap C\}\;.$$ -All of which just says what you said, but a bit more formally. - -REPLY [4 votes]: (1) Think of it in terms of sets. Let $M$ be the set of mathematicians, $C$ the set of chess players. Both are asking for the oldest person in $C\cap M$. -(2) Absolutely fantastic reasoning, though perhaps less simply set-theoretically described.<|endoftext|> -TITLE: Multilinear optimization -QUESTION [7 upvotes]: Are there any efficient algorithms to solve, multi-linear objective and multi-linear constraint optimization problems? The multilinear functions are sums of bilinear, trilinear (and so on) terms -\begin{align*} - \min\quad & f_1(x_1,x_2,x_3,...,x_n)\\ - \text{s.t.}\quad - &f_2(x_1,x_2,x_3,...,x_n)\leq b\\ - &f_3(x_1,x_2,x_3,...,x_n)\leq c\\ - &f_i(x_1,x_2,x_3,...,x_n)=a_{i1}x_1x_2...x_n+a_{i2}x_1x_2...x_{n-1}+...+a_{i(n-1)}x_1+a_{in} -\end{align*} - -REPLY [7 votes]: An optimization problem where the objective function and constraints are multilinear (i.e. 
sums of products, $f(x_1,\ldots,x_n)=\sum_{I\subseteq\{1,\ldots,n\}} a_I \prod_{i\in I} x_j, a_I \in \mathbb{R}$) with respect to the decision variables are difficult to solve, as these problems are non-convex in general and there are no known ways to convexify it. Thus the general problem cannot (in my knowledge) be transformed to a linear program, for instance. However, as demonstrated by the earlier post by user p.s., in some special cases this is possible. -There exists algorithms for solving the general problem. One particular is the Reformulatio-Linearization technique by Hanif Sherali and Cihan Tuncbilek (1992) , which solves the problem as a sequence of linear programs. This method is based on relaxing the product variables. For instance, if there is a product term $x_1 x_2$, then this is replaced by a variable $X_{1,2}$. This new variable is constrained using explicit lower and upper bounds on the variables. -If the original problem would have bounds $0 \leq x_1\leq 1$ and $1 \leq x_2\leq 2$, then one could generate a constraint on the product variable $X_{1,2}$ by noting that $(x_1-0)(x_2-1)\geq0$, which in expanded form yields $x_1 x_2 -x_1\geq0$, where by substituting $x_1 x_2\rightarrow X_{1,2}$ would yield $X_{1,2}-x_1\geq 0$. By using all the bounds of the variables, one can derive more constraints on the product variable, which gives a tighter relaxation of the original problem. -The reformulated problem is linear with respect to the original variables $x_i$ and the product variables $X_.$ and can be solved with a linear programming algorithm. If the solution gained from the relaxation does not yield a feasible solution to the original problem, then this solution can be used for dividing the search space into two by branching on a variable. So continuing the previous example, if at optimum of the relaxation $x_1=0.5$, then a branching could be done by considering two problems that are the same as the original but in the first $0\leq x_1\leq0.5$ and in the second $0.5\leq x_2\leq 1$. Clearly, the optimal solution to the original problem is in either of these branches. Now, because the bounds of the variable $x_1$ has changed, also the constraints on the product variables has changed also, and it can be shown that the product variables will converge towards the product terms when continuing iteration in this fashion. -Although conceptually simple, the RLT method is not that straight forward to implement. Thus, for small problem instances, it may be worthwhile to try some commercial numerical non-linear solver (e.g. BARON) or even the Maximize command in Mathematica that solves the problem symbolically. -References -Sherali, Hanif D., and Cihan H. Tuncbilek. "A global optimization algorithm for polynomial programming problems using a reformulation-linearization technique." Journal of Global Optimization 2.1 (1992): 101-112.<|endoftext|> -TITLE: Partition of the plane into union of open segments -QUESTION [7 upvotes]: A subset of ${\mathbb R}^2$ is open in the usual topology iff it is a (not necessarily disjoint) union of open disks. It is well-known that the plane is connected, so that there is no nontrivial partition of ${\mathbb R}^2$ into open sets. -Now say that a subset of ${\mathbb R}^2$ is weakly open iff it is a (not necessarily disjoint) union of "open disks in dimension 1", i.e. a union of open segments). 
With this definition, there is an obvious nontrivial partition of ${\mathbb R}^2$ into weakly open subsets : if $\cal D$ is the set of all lines parallel to some direction, any nontrivial partition of $\cal D$ will yield a partition of ${\mathbb R}^2$ into weakly open subsets. - Are there any other such partitions ? -UPDATE 06/11/2012 16:30 Since the original question was quickly answered, I ask now : what about partitions into closed, weakly open subsets ? -Sub-question : if a subset is both closed and weakly open, does it necessarily contain a straight line ? - -REPLY [3 votes]: Lemma: Let $A$ be a closed, weakly open set, $p$, $q$ and $r$ three distinct points, $p \notin A$, and suppose the open line segments $pq$ and $pr$ are disjoint from $A$. Then the open triangle $T$ with vertices $p$,$q$,$r$ is disjoint from $A$. Moreover, -the open segment $qr$ is either disjoint from $A$ or contained in $A$. -Proof: If $T$ intersects $A$, then there is a point $y$ of $T \cap A$ that is as far as possible from the line $L$ through $qr$. $y$ must be contained in a maximal open segment $S$ contained in $A$, and since no point of $S$ can be farther from $L$ than $y$, $S$ must be parallel to $L$. But since $S$ can't intersect the open segments $pq$ and $qr$, it must have an endpoint $y' \in T$. This endpoint is also in $A$, and also maximizes the distance from $L$, but any open segment containing $y'$ and contained in $T$ must not be parallel to $L$, contradiction. -If some point $z$ of the open segment $qr$ is in $A$, it is contained in an open segment contained in $A$, but such a segment can't intersect $T$ so it must be on the line $L$. -But if not all of $qr$ is in $A$, that segment has an endpoint $z' \in qr$, and then an open segment containing $z'$ and contained in $A$ must not be on $L$, contradiction. -That concludes the proof of the Lemma. -Theorem: Suppose $A$ and $B$ are disjoint nonempty, closed, weakly open sets. Then there is some line $L$ disjoint from $A \cup B$, such that both $A$ and $B$ contain translates of $L$. -Proof: Along a line segment from a point of $A$ to a point of $B$ there must be an open interval disjoint from $A \cup B$ that has one endpoint in $A$ and the other in $B$. Let $p$ be a member of such an interval. Let $C_A$ be the set of points $s$ of the unit circle $C$ such that for some $t \in (0,\infty)$, $p + t s \in A$ and for all $t' \in (0,t)$, $p + t' s \notin B$. Similarly define $C_B$, interchanging $A$ and $B$. It is not hard to show that $C_A$ and $C_B$ are open and nonempty. So there must be $s_1$ and $s_2$ in $C$ that are neither in $C_A$ nor $C_B$, dividing $C$ into arcs $C_1$ and $C_2$ where $C_A$ intersects $C_1$ and $C_B$ intersects $C_2$. But by Lemma 1, neither of those arcs can subtend an angle less than $\pi$. So both arcs must subtend $\pi$, i.e. $s_2 = -s_1$, and the line $L$ through $p$ in direction $s_1$ is disjoint from $A \cup B$. -Now let $q$ be a point of $A$ such that the open interval $pq$ is disjoint from $A$, and let $L'$ be the line through $q$ parallel to $L$. Let $U$ be the open strip between $L$ and $L'$. $U$ is the union of the open triangles with vertices $p$, $q$ and points $r$ of $L$. Since the open segments $pq$ and $pr$ are disjoint from $A$, the Lemma says $U$ is disjoint from $A$. Now there is an open segment containing $q$ and contained in $A$, and since this segment can't intersect $U$ it must be contained in $L'$. 
Applying the second part of the Lemma to a triangle with vertices $p$, $q$ and another point of $L'$, we see that $L' \subseteq A$.
-Corollary: In a partition of ${\mathbb R}^2$ into closed, weakly open sets, the boundary of each member of the partition consists of parallel straight lines.<|endoftext|>
-TITLE: Change of Basis vs. Linear Transformation
-QUESTION [8 upvotes]: If I understand it correctly, change of basis is just a specific case of a linear transformation. Specifically, given a vector space $V$ over a field $F$ such that $\dim V=n$, change of basis is just a transformation from $F^n$ to $F^n$. Does change of basis in and of itself have practical uses that are separate from linear transformations? What I mean is separate from linear transformations that do more than just change the basis of a vector in its own vector space.
-
-REPLY [3 votes]: The Fourier transform is an example of a practical change of basis. Some often-used operations in signal processing are easier in the Fourier-transformed basis. For example, a convolution in the time basis is simply a multiplication in the Fourier-transformed basis, i.e., the frequency basis.<|endoftext|>
-TITLE: Integrate and measure problem.
-QUESTION [6 upvotes]: If $f \in L^{p_0}(X,M,\mu)$ for some $0<p_0<\infty$, prove that $\lim_{p\to 0^+}\int_X|f|^p\,d\mu=\mu(\{x\in X : f(x)\neq 0\})$, and that if moreover $\mu(X)=1$, then $\lim_{p\to 0^+}\|f\|_p=\exp\left(\int_X\log|f|\,d\mu\right)$. On the part where $|f|>1$ I can use the LDCT to prove the first fact, but I can't conclude for the rest of the measure $\mu$, and how can I approach the second fact?
-
-REPLY [3 votes]: $1$. Define
-$$
-\begin{align}
-E^>&=\{x\in X:|f(x)|\ge1\}\\
-E^<&=\{x\in X:0<|f(x)|<1\}\\
-E^=&=\{x\in X:|f(x)|=0\}
-\end{align}\tag{1a}
-$$
-On $E^>$, $|f(x)|^p$ decreases to $1$ as $p$ decreases to $0$; on $E^<$, $|f(x)|^p$ increases to $1$ as $p$ decreases to $0$; and on $E^=$, $|f(x)|=0$ as $p$ decreases to $0$.
-Therefore, by monotone convergence on $E^<$ and dominated convergence on $E^>$,
-$$
-\begin{align}
-\lim_{p\to0^+}\int_{E^>}|f(x)|^p\,\mathrm{d}x&=\mu(E^>)\\
-\lim_{p\to0^+}\int_{E^<}|f(x)|^p\,\mathrm{d}x&=\mu(E^<)\\
-\lim_{p\to0^+}\int_{E^=}|f(x)|^p\,\mathrm{d}x&=0
-\end{align}\tag{1b}
-$$
-Summing these yields
-$$
-\lim_{p\to0^+}\int_X|f(x)|^p\,\mathrm{d}x=\mu(\{x\in X:|f(x)|\not=0\})\tag{1c}
-$$
-
-$2$. Preliminaries
-
-For $p\gt0$ and $t\ge0$, define
- $$
-g_p(t)=\frac{t^p-1}{p}\tag{2a}
-$$
- Claim: $g_p(t)$ is non-decreasing in both $p$ and $t$.
-$g_p(t)$ is non-decreasing in $t$: This follows from
- $$
-g_p^\prime(t)=t^{p-1}\ge0\tag{2b}
-$$
-$g_p(t)$ is non-decreasing in $p$: As Didier commented, this follows from
- $$
-g_p(t)=\int_1^tu^{p-1}\,\mathrm{d}u\tag{2c}
-$$
- and because $u^{p-1}$ is non-decreasing in $p$ when $u\ge1$ and non-increasing in $p$ when $0\le u\le1$.
-Furthermore, L'Hopital says
- $$
-\lim_{p\to0^+}g_p(t)=\log(t)\tag{2d}
-$$
-
-$\hspace{1pt}$
-
-Jensen's Inequality says that $h(p)=\|f\|_p$ is non-decreasing in $p$.
-
-$\hspace{1pt}$
-
-Consider an $\epsilon$ neighborhood of $-\infty$ to be $(-\infty,-\frac1\epsilon)$ and let $L=\lim\limits_{p\to0^+}\log(h(p))$.
-For any $\epsilon>0$, choose $q>0$ so that $\log(h(q))$ is within an $\frac{\epsilon}{2}$ neighborhood of $L$.
-Choose $r>0$ so that $g_r(h(q))$ is within an $\epsilon$ neighborhood of $L$.
-If $p<\min(q,r)$, then both $\log(h(p))$ and $g_p(h(p))$ will be within an $\epsilon$ neighborhood of $L$.
Therefore, - $$ -\lim_{p\to0^+}\log(h(p))=\lim_{p\to0^+}g_p(h(p))\tag{2e} -$$ - -Main Result -Define $E=\{x:|f(x)|>1\}$, then the results above yield -$$ -\begin{align} -\lim_{p\to0^+}\log\left(\|f\|_p\right) -&=\lim_{p\to0^+}\frac{\|f\|_p^p-1}{p}\\ -&=\lim_{p\to0^+}\int_X\frac{|f(x)|^p-1}{p}\,\mathrm{d}x\\ -&=\color{#C00000}{\lim_{p\to0^+}\int_{E}\frac{|f(x)|^p-1}{p}\,\mathrm{d}x} -+\color{#00A000}{\lim_{p\to0^+}\int_{X\setminus E}\frac{|f(x)|^p-1}{p}\,\mathrm{d}x}\\ -&=\color{#C00000}{\int_{E}\log|f(x)|\,\mathrm{d}x} -+\color{#00A000}{\int_{X\setminus E}\log|f(x)|\,\mathrm{d}x}\\ -&=\int_{X}\log|f(x)|\,\mathrm{d}x\tag{2f} -\end{align} -$$ -The left limit, in red, is by Dominated Convergence, while the right limit, in green, is by Monotone Convergence. Exponentiate to get -$$ -\lim_{p\to0^+}\|f\|_p=e^{\int_{X}\log|f(x)|\,\mathrm{d}x}\tag{2g} -$$<|endoftext|> -TITLE: Asymptotics for the expected length of the longest streak of heads. -QUESTION [9 upvotes]: As Introduction to Algorithms (CLRS) describes, the problem is - -Suppose you flip a fair coin $n$ times. What is the longest streak of consecutive heads that you expect to see? - -The book claims that the expects is $\Theta(\log{}n)$, and proves that it is both $O(\log{}n)$ and $\Omega(\log{}n)$. I want generize the problem, and look into the $\Theta(\log{}n)$. -For example, we're flipping a biased coin with the probability $p$ for head and $q$ for tail, where $p+q=1$. Supposing that the length of the longest streak is $X$, we want to rewrite $EX = A\log{}n + O(1)$, or more precisely, $EX = A\log{}n + B + o(1)$, or something else. How to determine the asymptotics for $EX$? -There's something trivial. Supposing that $P_{n,m}=\textrm{ probability that }X0$. -Any help? Thanks! - -REPLY [5 votes]: Suppose, $p \in (0; 1)$ is the probability of a single coin rolling heads. Let's call $L_n$ the length of the longest streak of heads in first $n$ rolls, $T_n$ - the number of rolls before the $n$-th tails and $X_i$ the length of the streak of heads just before the $i$-th tails. It is not hard to see, that $X_i$ are i.i.d. geometrically distributed with parameter $(1-p)$, that $T_n$ are almost surely monotonously increasing, and that $T_n = n + \sum_{i = 1}^n X_i \geq n$. We also know that $L_{T_n} = \frac{\ln(n)}{\ln(p^{-1})}+O(1)$. Thus we have: -$$ -\begin{align*} -\lim_{n \to \infty} \frac{E[L_n]}{\ln(n)}&=\lim_{n \to \infty} E\left[\frac{L_n}{\ln(n)}\right]\\ -&= \lim_{n \to \infty} E\left[\frac{L_{T_n}}{\ln(T_n)}\right]\\ -&= \lim_{n \to \infty} E\left[\frac{L_{T_n}}{\ln(n + \sum_{i = 1}^n X_i)}\right]\\ -&=\lim_{n \to \infty} E\left[\frac{L_{T_n}}{\ln(n) + \ln\bigl(1 + \frac{\sum_{i = 1}^n X_i}{n}\bigr)}\right]\\ -&= \lim_{n \to \infty} E\left[\frac{L_{T_n}}{\ln(n) + \ln(1 + E[X_1])}\right]\\ -&= \lim_{n \to \infty} \frac{E[L_{T_n}]}{\ln(n) + \ln(1 + p^{-1})}\\ -&= \lim_{n \to \infty} \frac{\frac{\ln(n)}{\ln(p^{-1})}+O(1)}{\ln(n) + \ln(1 + p^{-1})}\\ -&= \frac{1}{\ln(p^{-1})}. -\end{align*} -$$ -From that we can conclude, that $\forall p \in (0; 1)$, $E[L_n]$ is asymptotically equivalent to $\frac{\ln(n)}{\ln(p^{-1})}$, which seems to be the thing you wanted to prove.<|endoftext|> -TITLE: Proving that $A=\{(-2)^n : n \in \mathbb{N} \}$ is unbounded -QUESTION [9 upvotes]: I am trying to prove that $A=\{(-2)^n : n \in \mathbb{N} \}$ is unbounded. -What I did was first to show that for every $n \in \mathbb{N}$ if $n$ is even then $(-2)^n = 2^n$ and if $n$ is odd then $(-2)^n = -2^n$ (I did it by induction on $n$). 
-Then I show that for every $n \in \mathbb{N}$ there exists $k \in \mathbb{N}$ such that $(-2)^n \lt (-2)^k$ because if $n$ is even then
-$$(-2)^n=2^n\lt 2^k$$
-where the last inequality holds by the definition of natural powers. The case where $n$ is odd is obvious now.
-Am I right? Is there a more elegant way of proving so? Thanks!
-
-REPLY [8 votes]: What does it mean to be bounded? $A$ is bounded if there exists some positive number $M$ such that $|a|\leq M$ for all $a\in A.$ You want to show $A$ is unbounded, so that there is no such $M$ that satisfies that condition. One way to do this is to pick an arbitrary positive number, and find something in $A$ which has larger magnitude than what you picked. If you can do that for any $M>0$ then $A$ can't be bounded.
-So to start your proof: Let $M>0.$ We can find $a\in A$ such that $|a|> M$ by picking $n$ ...
-
-REPLY [6 votes]: I would simply prove by induction that $2^n>n$ for $n\in\Bbb N$ and then note that for any $m\in\Bbb N$, $(-2)^{2m}=2^{2m}>2m>m$, so $A$ has no upper bound. If you also want to show that $A$ has no lower bound, just observe that $(-2)^{2m+1}=-2^{2m+1}<-(2m+1)<-m$. (Here I'm using the fact about odd and even powers of $-2$ that you already proved by induction.)<|endoftext|>
-TITLE: Showing that the space $C[0,1]$ with the $L_1$ norm is incomplete
-QUESTION [17 upvotes]: Can anyone think of a relatively easy counter example to remember, which demonstrates that the space $C[0,1]$ with the $L_1$ norm is incomplete?
-Thanks!
-
-REPLY [2 votes]: Here is an easy example that shows that $C[0,1]$ with the $L^1$ norm is not complete. For $n \ge 2$ let $$f_n(x) = \begin{cases} 0,& 0 \leq x \leq \frac{1}{2} \\ n\left(x-\frac{1}{2}\right),& \frac{1}{2} \leq x \leq \frac{1}{2}+\frac{1}{n} \\ 1,& \frac{1}{2}+\frac{1}{n} \leq x \leq 1\end{cases}.$$
-It should not be hard to verify that $f_n$ is a Cauchy sequence with respect to the $L^1$ norm; indeed $\|f_n-f_m\|_1\leq \frac{1}{\min(m,n)}$. Now if there were a function $f \in C[0,1]$ such that $f_n \rightarrow f$ in the $L^1$ metric, then $f$ would have to equal $0$ almost everywhere on $[0,\frac{1}{2}]$ and $1$ almost everywhere on $(\frac{1}{2},1]$, so by continuity $f(\frac{1}{2})$ would have to be both $0$ and $1$. No continuous function can do this, so $C[0,1]$ with the $L^1$ norm is not complete. (The jump must sit in the interior of the interval: the superficially simpler ramp $f_n(x)=nx$ on $[0,\frac{1}{n}]$ and $1$ afterwards actually converges in $L^1$ to the constant function $1$, which is continuous.)<|endoftext|>
-TITLE: True if sigma-compact
-QUESTION [6 upvotes]: Hello to readers of this post. I would like to enquire about a lemma from Royden's Real Analysis book which I believe is wrong, but I am not certain. Perhaps someone can show me a counter-example?
-
-Let $X$ be a locally compact Hausdorff space and $E$ be a subset of $X$ such that $E \bigcap K$ is a Borel set for each compact set $K$. Then $E$ is a Borel set.
-
-I think this is only true if $X$ is sigma-compact. Please correct me if I am wrong.
-
-REPLY [2 votes]: As with Nate, I'm leaning towards the side that Royden's Lemma is false.
-Look at page 5 in the Notes on measure and integration in locally compact spaces written by William Arveson from Berkeley. The document can be found on his homepage by following the link "papers" and scrolling down to the section titled "Lecture notes and snippets". Here is the direct download link.
-It seems that they refer to Royden's Lemma and say that it is misleading, seemingly providing a counter-example for someone (with more knowledge than me) to verify if possible.
-
-However, a weaker result does hold, which I believe Royden might have assumed to imply the above result since the Borel $\sigma$-algebra is generated by closed sets.
-Namely, that if $X$ is a locally compact Hausdorff space and $F\subset X$ is a set for which $F\cap K$ is closed for all compact $K\subset X$, then $F$ is closed.
I believe that it can be shown with the following argument: -Assume the contrary that $F$ is not closed. Then we find $x\in\bar{F}\setminus F$ where $\bar{F}$ denotes the closure of $F$ in $X$. In particular, this means that $U\cap F\neq \emptyset$ for all open neighborhoods $U$ of $x$. Since $X$ is locally compact, we find an open neighborhood $U_{x}$ of $x$ so that $\bar{U_{x}}$ is compact. Thus $V:=\bar{U_{x}}\cap F\neq\emptyset$ and $V$ is closed by our assumption. Since $x\notin F\supset V$ then $x\notin V$, and since $V$ is closed its complement is open. Thus we find an open neighborhood $W$ of $x$ so that $x\in W\subset V^{c}$. We then take $G=U_{x}\cap W$ as an open neighborhood of $x$, whence -\begin{align*} -G=U_{x}\cap W\subset U_{x}\cap V^{c}=U_{x}\cap(\bar{U_{x}}\cap F)^{c}=U_{x}\cap(\bar{U_{x}}^{c}\cup F^{c})=U_{x}\cap F^{c}. -\end{align*} -But this implies that $G\cap F\subset U_{x}\cap F^{c}\cap F=\emptyset$, which is a contradiction since $x\in\bar{F}$ and $G$ is an open neighborhood of $x$. Thus $F$ must be closed.<|endoftext|> -TITLE: Why the terminology "monoid"? -QUESTION [24 upvotes]: As I am not a native English speaker, I sometimes am bothered a little with the word "monoid", which is by definition a semigroup with identity. But why this terminology? -I searched some dictionaries (Longman for English, Larousse for Francais, Langenscheidts for Dentsch) but didn't find any result, and it seems to me that it is just a pronounciable word with certain mathematical meaning. So, where does it come from? Is there any etymological explanation? Who was the first mathematician who used it? - -REPLY [11 votes]: If Chevalley was the first to popularize the term "monoid", then I can pretty confidently guess that it meant the structure of operators on a single type (i.e., a category with a single object). Note that Chevalley's second example (after the mandatory natural numbers) is the collection of mappings from a set to itself. His term for the monoid operation is "composition." -The term "groupoid" in the sense of a category with invertible arrows was already well-established. So, the use of "monoid" to mean a category of arrows on a single object seems quite natural.<|endoftext|> -TITLE: On the zeta sum $\sum_{n=1}^\infty[\zeta(5n)-1]$ and others -QUESTION [19 upvotes]: For p = 2, we have, -$\begin{align}&\sum_{n=1}^\infty[\zeta(pn)-1] = \frac{3}{4}\end{align}$ -It seems there is a general form for odd p. For example, for p = 5, define $z_5 = e^{\pi i/5}$. Then, -$\begin{align} &5 \sum_{n=1}^\infty[\zeta(5n)-1] = 6+\gamma+z_5^{-1}\psi(z_5^{-1})+z_5\psi(z_5)+z_5^{-3}\psi(z_5^{-3})+z_5^{3}\psi(z_5^{3}) = 0.18976\dots \end{align}$ -with the Euler-Mascheroni constant $\gamma$ and the digamma function $\psi(z)$. - -Anyone knows how to prove/disprove this? -Also, how do we split $\psi(e^{\pi i/p})$ into its real and imaginary parts so as to express the above purely in real terms? - -More details in my blog. 
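-For what it's worth, the claimed $p=5$ identity also survives a direct numerical test. Below is a short sketch in Python using mpmath (my own sanity check, not a derivation; it truncates the zeta sum at $60$ terms, which is ample since $\zeta(5n)-1=O(2^{-5n})$, and it relies on mpmath's $\psi$ accepting complex arguments, which it does):
-
-    from mpmath import mp, mpc, exp, pi, zeta, psi, euler
-
-    mp.dps = 30  # work with 30 significant digits
-
-    # warm-up for p = 2: sum of zeta(2n) - 1 should be 3/4
-    print(sum(zeta(2 * n) - 1 for n in range(1, 60)))   # 0.75
-
-    # left-hand side for p = 5
-    lhs = 5 * sum(zeta(5 * n) - 1 for n in range(1, 60))
-
-    # right-hand side: 6 + gamma + sum of z^j * psi(z^j) over j in {-1, 1, -3, 3}
-    z5 = exp(mpc(0, 1) * pi / 5)                        # z_5 = e^(i pi/5)
-    rhs = 6 + euler + sum(z5**j * psi(0, z5**j) for j in (-1, 1, -3, 3))
-
-    print(lhs)            # 0.18976..., the value quoted above
-    print(rhs.real)       # should agree with lhs to working precision
-    print(abs(rhs.imag))  # the imaginary parts cancel in conjugate pairs
-
-If the identity is right, the two printed values agree to the full working precision, and the same template (with the constant term and the set of exponents adjusted) can be used to test the analogous formulas for other odd $p$.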
- -REPLY [21 votes]: $$ -\begin{align} -\sum_{n=1}^\infty\left[\zeta(pn)-1\right] & = \sum_{n=1}^\infty \sum_{k=2}^\infty \frac{1}{k^{pn}} \\ -& = \sum_{k=2}^\infty \sum_{n=1}^\infty (k^{-p})^n \\ -& = \sum_{k=2}^\infty \frac{1}{k^p-1} -\end{align} -$$ -Let $\omega_p = e^{2\pi i/p} = z_p^2$, then we can decompose $1/(k^p-1)$ into partial fractions -$$ -\frac{1}{k^p-1} = \frac{1}{p}\sum_{j=0}^{p-1} \frac{\omega_p^j}{k-\omega_p^j} -= \frac{1}{p}\sum_{j=0}^{p-1} \omega_p^j \left[\frac{1}{k-\omega_p^j}-\frac{1}{k}\right] -$$ -where we are able to add the term in the last equality because $\sum_{j=0}^{p-1}\omega_p^j = 0$. So -$$ -p\sum_{n=1}^\infty\left[\zeta(pn)-1\right] = \sum_{j=0}^{p-1}\omega_p^j\sum_{k=2}^{\infty}\left[\frac{1}{k-\omega_p^j}-\frac{1}{k}\right] -$$ -Using the identities -$$ -\psi(1+z) = -\gamma-\sum_{k=1}^\infty\left[\frac{1}{k+z}-\frac{1}{k}\right] -= -\gamma+1-\frac{1}{1+z}-\sum_{k=2}^\infty\left[\frac{1}{k+z}-\frac{1}{k}\right]\\ -\psi(1+z) = \psi(z)+\frac{1}{z} -$$ -for $z$ not a negative integer, and -$$ -\sum_{k=2}^\infty\left[\frac{1}{k-1}-\frac{1}{k}\right]=1 -$$ -by telescoping, so finally -$$ -\begin{align} -p\sum_{n=1}^\infty\left[\zeta(pn)-1\right] -& = 1+\sum_{j=1}^{p-1}\omega_p^j\left[1-\gamma-\frac{1}{1-\omega_p^j}-\psi(1-\omega_p^j)\right] \\ -& = \gamma-\sum_{j=1}^{p-1}\omega_p^j\psi(2-\omega_p^j) -\end{align} -$$ -So far this applies for all $p>1$. Your identities will follow by considering that when $p$ is odd $\omega_p^j = -z_p^{2j+p}$, so -$$ -\begin{align} -p\sum_{n=1}^\infty\left[\zeta(pn)-1\right] -& = \gamma+\sum_{j=1}^{p-1}z_p^{2j+p}\psi(2+z_p^{2j+p})\\ -& = \gamma+\sum_{j=1}^{p-1}z_p^{2j+p}\left[\frac{1}{1+z_p^{2j+p}}+\frac{1}{z_p^{2j+p}}+\psi(z_p^{2j+p})\right] \\ -& = \gamma+p-1+S_p+\sum_{j=1}^{p-1}z_p^{2j+p}\psi(z_p^{2j+p}) -\end{align} -$$ -where -$$ -\begin{align} -S_p & = \sum_{j=1}^{p-1}\frac{z_p^{2j+p}}{1+z_p^{2j+p}} \\ -& = \sum_{j=1}^{(p-1)/2}\left(\frac{z_p^{2j-1}}{1+z_p^{2j-1}}+\frac{z_p^{1-2j}}{1+z_p^{1-2j}}\right) \\ -& = \sum_{j=1}^{(p-1)/2}\frac{2+z_p^{2j-1}+z_p^{1-2j}}{2+z_p^{2j-1}+z_p^{1-2j}} \\ -& = \frac{p-1}{2} -\end{align} -$$ -which establishes your general form. -I don't have an answer for your second question at this time.<|endoftext|> -TITLE: Why is this forcing notion closed? -QUESTION [5 upvotes]: I'm studying a forcing argument which produces a generic extension in which GCH holds, but I am, somewhat embarrassingly, stuck on a minor detail. I hope someone can point out the thing I'm missing. -Let $P_\alpha=\mathrm{Fn}(\beth_\alpha^+,\beth_{\alpha+1},\beth_\alpha^+)$ be the Lévy collapsing notions. Denote by $P$ their Easton product over all ordinals $\alpha$; each element of $P$ is a function $p$, defined on some subset of the ordinals, so that $p(\alpha)\in P_\alpha$ for all $\alpha$ and $$|\{\alpha<\gamma;p(\alpha)\neq\emptyset\}|<\gamma$$ -holds for all regular cardinals $\gamma$. Order $P$ coordinate-wise. Also, for any $\alpha$ define -$$P^{>\alpha}=\{p|_{\{\beta;\beta>\alpha\}};p\in P\}$$ -as the class of restrictions of elements of $P$ to ordinals, greater than $\alpha$ (don't be alarmed that these things are proper classes; it doesn't really matter at this point). -I am told that $P^{>\alpha}$ is $\beth_{\alpha+1}$-closed (as Jech would say, or $\leq\beth_{\alpha+1}$-closed, as Kunen would say). 
So let's take $\mu\leq\beth_{\alpha+1}$ and a descending $\mu$-sequence $(q_\xi)_{\xi<\mu}$ and define $q$ on $\bigcup_{\xi<\mu}\mathrm{dom}(q_\xi)$ by $q(\beta)=\inf_{\xi<\mu} q_\xi(\beta)$, where the existence of the infimum in $P_\beta$ is guaranteed by the fact that $P_\beta$ is $\beth_\beta$-closed. For $q$ to be a lower bound for our sequence, we have to check the support condition mentioned above. Taking a regular $\gamma$, we have -$$|\{\beta<\gamma;q(\beta)\neq\emptyset\}|= -\left|\bigcup_{\xi<\mu}\{\beta<\gamma;q_\xi(\beta)\neq\emptyset\}\right|$$ -This is where I get confused. Certainly, $\gamma$ can be taken greater than $\alpha$. Also, since $\gamma$ is regular, if we also have $\mu<\gamma$, the above union has cardinality strictly less than $\gamma$ and we get what we want. What worries me is what happens when $\gamma\leq\mu$. This doesn't seem at all right, so I suspect I messed up something in setting up the proof. - -REPLY [3 votes]: (Compiled from comment thread above) -If you look at Jech's hint, you'll see the Easton-like requirement on the product is only for inaccessible $\gamma$. Also, to verify $\beth_{\alpha+1}$-closure one need only consider $\mu = \beth_{\alpha+1}$. So now, if $\gamma < \beth_{\alpha+1}$, the increasing union $\bigcup_{\xi < \beth_{\alpha+1}}\{\beta < \gamma | q_\xi(\beta)\neq \emptyset\}$ must stabilize, thus it has size less than $\gamma$. $\gamma = \beth_{\alpha+1}$ never happens because $\gamma$ is inaccessible. So that'll do it. -The Easton requirement here is different from the usual one. The best way to see the analogy between this Easton requirement and the usual one is to re-index the forcing notions here: Define $P_{\beth_\alpha}$ to be the Levy collapse of $\beth_{\alpha+1}$ and $P_\beta$ to be trivial if $\beta$ is not a $\beth$-number. Then the appropriate Easton requirement here is the usual one, but the only $\gamma$ for which this requirement does anything is when $\gamma$ is inaccessible.<|endoftext|> -TITLE: A problem about Riemann-Lebesgue lemma -QUESTION [11 upvotes]: Let $f$ be an integrable function over $[a,b]$. Prove that: -$$\lim_{n \rightarrow \infty } \int _a ^b f(x)|\sin(nx)| dx= \frac {2}{\pi} \int _a^b f(x) dx.$$ - -REPLY [12 votes]: Based on the hints given by Alex Becker, we can obtain a more general result: -If $f$ is integrable on $[a,b]$,which is a bounded closed interval, and $g$ is integrable periodic function with period $\rm T$, -then we have the following: -$$\lim\limits_{n\rightarrow \infty} \int_a^b f(x)g(nx)dx={\rm T}^{-1}\int_0^{\rm T} g(t)dt \int_a^bf(x)dx$$ - -(1): I show that $$\lim\limits_{n\rightarrow \infty} \int_a^b {\rm g}(nx)dx={\rm T}^{-1}(b-a)\int_0^{\rm T}g(t)dt $$ - -PROOF: For each $n\in \mathbb{N}$, define $b_n=a+\frac{{\rm T}}{n} \left[\frac{n(b-a)}{{\rm T}} \right]$. -It is not difficult to see that $0\le b-b_n <\frac{{\rm T}}{n}$. 
-$$\int_a^{{b_n}} g (nx)dx = \sum\limits_{k = 1}^{\left[ {\frac{{n(b - a)}}{{\rm T}}} \right]} {\int_{a + (k - 1)\frac{{\rm T}}{n}}^{a + k\frac{{\rm T}}{n}} g } (nx)dx$$ -$$ = \frac{1}{n}\sum\limits_{k = 1}^{\left[ {\frac{{n(b - a)}}{{\rm T}}} \right]} {\int_{na + (k - 1){\rm T}}^{na + k{\rm T}} g } (y)dy = \frac{1}{n}\sum\limits_{k = 1}^{\left[ {\frac{{n(b - a)}}{{\rm T}}} \right]} {\int_0^{\rm T} g } (y)dy$$ -(since the integral of a periodic function over a length of its period is the same) -$$\int_a^{b_n} g(nx) dx=\frac{1}{n}\sum_{k=1}^{\left[\frac{n(b-a)}{{\rm T}}\right]}\int_{0}^{{\rm T}} g(y)dy -=\frac{1}{n}\left[\frac{n(b-a)}{{\rm T}}\right]\int_0^{\rm T}g(t)dt$$ -Since $$\left|\int_a^bg(nt)dt-\int_a^{b_n}g(nt)dt \right|\le (b-b_n)M \le \frac{{\rm T}}{n} M$$ where $$M=\sup|g(x)|$$ -so $$\lim_{x\rightarrow \infty} \int_a^bg(nx)dx=\lim_{x\rightarrow \infty} \int_a^{b_n}g(nx)dx={\rm T}^{-1}(b-a)\int_0^{\rm T} g(t)dt $$ - -(2): I show $$\lim_{n\rightarrow \infty} \int_a^b f(x)g(nx)dx={\rm T}^{-1}\int_0^{\rm T} g(t)dt \int_a^bf(x)dx$$ for any step function $f$. - -PROOF: This result follows from (1) easily. - -(3): I prove $$\lim_{n\rightarrow \infty} \int_a^b f(x)g(nx)dx={\rm T}^{-1}\int_0^{\rm T} g(t)dt \int_a^bf(x)dx$$ for any integrable function. - -PROOF: Given $\delta >0$, since $f$ is integrable, we can find a step function $h$ such that $\displaystyle \int_a^b|f-h|<\delta$. -$$\eqalign{ - & \left| {\int_a^b f (x)g(nx)dx - {T^{ - 1}}\int_0^T g (t)dt\int_a^b f (x)dx} \right| \leqslant \cr - & \left| {\int_a^b f (x)g(nx)dx - \int_a^b h (x)g(nx)dx} \right| + \left| {\int_a^b h (x)g(nx)dx - {T^{ - 1}}\int_0^T g (t)dt\int_a^b h (x)dx} \right| \cr - & + \left| {{T^{ - 1}}\int_0^T g (t)dt\int_a^b f (x)dx - {T^{ - 1}}\int_0^T g (t)dt\int_a^b h (x)dx} \right| < \left( {M + 1 + {T^{ - 1}}\int_0^T g (t)dt} \right)\delta \cr} $$ -for all $n\ge N$,(such $N$ exist by (2)),and since $\delta>0$ is arbitrary so - -$$\lim_{n\rightarrow \infty} \int_a^b f(x)g(nx)dx=T^{-1}\int_0^Tg(t)dt \int_a^bf(x)dx$$<|endoftext|> -TITLE: Question on "up to isotopy" when attaching two spaces -QUESTION [6 upvotes]: Let $M$, $N$, $A$, $B$ be topological spaces (or manifold) such that $A$ and $B$ are subspaces in $M$, $N$ respectively. -Let $f: A \to B$ and $g:A \to B$ be homeomorphism and assume that $f$ and $g$ are isotopic. We attach spaces $M$ and $N$ via $f$ and $g$ and obtain $M\cup_fN$ and $M\cup_gN$. -I want to prove (or disprove) that $M\cup_fN$ and $M\cup_gN$ are homeomorphic. -First of all, I am confused by the difinition of isotopy. -My understanding is that homeomorphisms $f: A \to B$ and $g:A \to B$ are isotopic if there is a map $H: A\times [0,1] \to B$ such that $H(x, t)$ is a homeomorphism for each $t\in [0,1]$ and $H(x, 0)=f(x)$ and $H(x,1)=g(x)$. -Is this definition correct in this context? -Any suggestions to the definition of isotpy and construction of a homeomorphism between $M\cup_fN$ and $M\cup_gN$ are appreciated. - -REPLY [4 votes]: In the differential context (ie, all spaces are smooth manifolds), if $f,g\colon A\rightarrow N$ are smooth maps, an ambient isotopy between $f$ and $g$ is a (smooth) isotopy $F\colon N\times R\rightarrow N$ such that $F(p,0) = p$, for $p\in N$ and $F(f,1)= g$. If $f$ and $g$ are ambient isotopic, then $M\cup_f N$ and $M\cup_g N$ are diffeomorphic. 
-It is a theorem of Thom, Cerf and Palais that if $A$ is compact and $N$ is closed, then two embeddings are isotopic if and only if they are ambient isotopic (see Theorem 5.2 in chapter II in Kosinski's Differential manifolds). -EDIT: Regarding the topological version of this theorem: Suppose $A = S^1$ and $M = S^3$, and $f,g$ are the trivial knot and the trefoil knot. Then $f$ and $g$ are topologically isotopic, but their complements are not homeomorphic, so this topological isotopy does not extend to an ambient isotopy (In https://www.encyclopediaofmath.org/index.php/Isotopy_(in_topology) there is an account of topological isotopy, with conditions for a topological isotopy extension theorem to be true and references)<|endoftext|> -TITLE: A new formula for Apery's constant and other zeta(s)? -QUESTION [31 upvotes]: I recently found these Plouffe-like formulas using Mathematica's LatticeReduce. Has anybody seen/can prove these are indeed true? -$$\begin{aligned}\frac{3}{2}\,\zeta(3) &= \frac{\pi^3}{24}\sqrt{2}-2\sum_{k=1}^\infty \frac{1}{k^3(e^{\pi k\sqrt{2}}-1)}-\sum_{k=1}^\infty\frac{1}{k^3(e^{2\pi k\sqrt{2}}-1)}\\ -\frac{3}{2}\,\zeta(5) &= \frac{\pi^5}{270}\sqrt{2}-4\sum_{k=1}^\infty \frac{1}{k^5(e^{\pi k\sqrt{2}}-1)}+\sum_{k=1}^\infty \frac{1}{k^5(e^{2\pi k\sqrt{2}}-1)}\\ -\frac{9}{2}\,\zeta(7) &= \frac{41\pi^7}{37800}\sqrt{2}-8\sum_{k=1}^\infty\frac{1}{k^7(e^{\pi k\sqrt{2}}-1)}-\sum_{k=1}^\infty\frac{1}{k^7(e^{2\pi k\sqrt{2}}-1)} \end{aligned}$$ -And so on for other $\zeta(2n+1)$. The background for these are in my blog. - -REPLY [17 votes]: It seems to have escaped attention that these sums may be evaluated -using harmonic summation techniques. -Introduce the sum -$$S(x; \alpha, p) = \sum_{n\ge 1} \frac{1}{n^p(e^{\alpha n x}-1)}$$ -with $p$ a positive odd integer and $\alpha>1$, so that we seek e.g. -$2 S(1; \pi\sqrt{2}, 3)+S(1; 2\pi\sqrt{2}, 3).$ -The sum term is harmonic and may be evaluated by inverting its Mellin -transform. -Recall the harmonic sum identity -$$\mathfrak{M}\left(\sum_{k\ge 1} \lambda_k g(\mu_k x);s\right) = -\left(\sum_{k\ge 1} \frac{\lambda_k}{\mu_k^s} \right) g^*(s)$$ -where $g^*(s)$ is the Mellin transform of $g(x).$ -In the present case we have -$$\lambda_k = \frac{1}{k^p}, \quad \mu_k = k -\quad \text{and} \quad -g(x) = \frac{1}{e^{\alpha x}-1}.$$ -We need the Mellin transform $g^*(s)$ of $g(x)$ which is -$$\int_0^\infty \frac{1}{e^{\alpha x}-1} x^{s-1} dx -= \int_0^\infty \frac{e^{-\alpha x}}{1-e^{-\alpha x}} x^{s-1} dx -\\ = \int_0^\infty \sum_{q\ge 1} e^{-\alpha q x} x^{s-1} dx -= \sum_{q\ge 1} \int_0^\infty e^{-\alpha q x} x^{s-1} dx -\\= \Gamma(s) \sum_{q\ge 1} \frac{1}{(\alpha q)^s} -= \frac{1}{\alpha^s} \Gamma(s) \zeta(s).$$ -It follows that the Mellin transform $Q(s)$ of the harmonic sum -$S(x;\alpha,p)$ is given by -$$Q(s) = \frac{1}{\alpha^s} \Gamma(s) \zeta(s) \zeta(s+p) -\quad\text{because}\quad -\sum_{k\ge 1} \frac{\lambda_k}{\mu_k^s} = -\sum_{k\ge 1} \frac{1}{k^p} \frac{1}{k^s} -= \zeta(s+p)$$ -for $\Re(s) > 1-p.$ -The Mellin inversion integral here is -$$\frac{1}{2\pi i} \int_{3/2-i\infty}^{3/2+i\infty} Q(s)/x^s ds$$ -which we evaluate by shifting it to the left for an expansion about -zero. -First formula. 
-We take -$$Q(s) = \frac{1}{\pi^s\sqrt{2}^s} -\left(2 + \frac{1}{2^s}\right) -\Gamma(s) \zeta(s) \zeta(s+3).$$ -We shift the Mellin inversion integral to the line $s=-1$, integrating -right through the pole at $s=-1$ picking up the following residues: -$$\mathrm{Res}(Q(s)/x^s; s=1) = -\frac{\pi^3\sqrt{2}}{72 x} -\quad\text{and}\quad -\mathrm{Res}(Q(s)/x^s; s=0) = --\frac{3}{2}\zeta(3)$$ -and -$$\frac{1}{2}\mathrm{Res}(Q(s)/x^s; s=-1) = -\frac{\pi^3\sqrt{2}x}{36}.$$ -This almost concludes the proof of the first formula if we can show -that the integral on the line $\Re(s) = -1$ vanishes when $x=1.$ To -accomplish this we must show that the integrand is odd on this line. -Put $s = -1 - it$ in the integrand to get -$$\pi^{1+it} \sqrt{2}^{1+it} -\left(2 + 2^{1+it}\right) -\Gamma(-1-it) \zeta(-1-it) \zeta(2-it).$$ -Now use the functional equation of the Riemann Zeta function in the -following form: -$$\zeta(1-s) = \frac{2}{2^s\pi^s} -\cos\left(\frac{\pi s}{2}\right) \Gamma(s) \zeta(s)$$ -to obtain (with $s=-1-it$) -$$\pi^{1+it} \sqrt{2}^{1+it} -\left(2 + 2^{1+it}\right) -\zeta(2+it) 2^{-1-it} \pi^{-1-it} -\frac{1}{2\cos\left(\frac{\pi (-1-it)}{2}\right)} -\zeta(2-it)$$ -which is -$$ \sqrt{2}^{1+it} -\left(2^{-it} + 1\right) -\zeta(2+it) -\frac{1}{2\cos\left(\frac{\pi (1+it)}{2}\right)} -\zeta(2-it)$$ -and finally yields -$$-\frac{1}{\sin(\pi i t/2)} -\left(\sqrt{2}^{-1-it}+\sqrt{2}^{-1+it}\right) -\zeta(2+it)\zeta(2-it).$$ -It is now possible to conclude by inspection: the zeta function terms -and the powers of the square root are even in $t$ and the sine term is -odd, so the whole term is odd and the integral vanishes. (We get exponential decay from the sine term.) -Second formula. -We take -$$Q(s) = \frac{1}{\pi^s\sqrt{2}^s} -\left(4 - \frac{1}{2^s}\right) -\Gamma(s) \zeta(s) \zeta(s+5).$$ -We shift the Mellin inversion integral to the line $s=-2$ (no pole on -the line this time) picking up the following residues: -$$\mathrm{Res}(Q(s)/x^s; s=1) = -\frac{\pi^5\sqrt{2}}{540 x} -\quad\text{and}\quad -\mathrm{Res}(Q(s)/x^s; s=0) = --\frac{3}{2}\zeta(5)$$ -and -$$\mathrm{Res}(Q(s)/x^s; s=-1) = -\frac{\pi^5\sqrt{2}x}{540}.$$ -It remains to verify that the integrand on the line $\Re(s)=-2$ is odd -when $x=1$. Put $s=-2-it$ in the integrand to get -$$\pi^{2+it} \sqrt{2}^{2+it} -\left(4 - 2^{2+it}\right) -\Gamma(-2-it) \zeta(-2-it) \zeta(3-it).$$ -Applying the functional equation once again with $s=-2-it$ we obtain -$$\pi^{2+it} \sqrt{2}^{2+it} -\left(4 - 2^{2+it}\right) -\zeta(3+it) 2^{-2-it} \pi^{-2-it} -\frac{1}{2\cos\left(\frac{\pi(-2-it)}{2}\right)} -\zeta(3-it)$$ -which is -$$\sqrt{2}^{2+it} -\left(2^{-it} - 1\right) -\zeta(3+it) -\frac{1}{2\cos\left(\frac{\pi(-2-it)}{2}\right)} -\zeta(3-it)$$ -which is in turn -$$\left(\sqrt{2}^{2-it} - \sqrt{2}^{2+it} \right) -\frac{1}{2\cos\left(\frac{\pi(2+it)}{2}\right)} -\zeta(3+it) -\zeta(3-it)$$ -which finally yields -$$-\left(\sqrt{2}^{2-it} - \sqrt{2}^{2+it} \right) -\frac{1}{2\cos(\pi i t /2)} -\zeta(3+it) -\zeta(3-it)$$ -The product of the zeta function terms is even, as is the cosine -term. The term in front is odd, so the integrand is odd as claimed. -(We get exponential decay from the cosine term.) -Third formula. 
-We take -$$Q(s) = \frac{1}{\pi^s\sqrt{2}^s} -\left(8 + \frac{1}{2^s}\right) -\Gamma(s) \zeta(s) \zeta(s+7).$$ -We shift the Mellin inversion integral to the line $s=-3$, integrating -right through the pole at $s=-3$ picking up the following residues: -$$\mathrm{Res}(Q(s)/x^s; s=1) = -\frac{17 \pi^7\sqrt{2}}{37800 x} -\quad\text{and}\quad -\mathrm{Res}(Q(s)/x^s; s=0) = --\frac{9}{2}\zeta(7)$$ -and -$$\mathrm{Res}(Q(s)/x^s; s=-1) = -\frac{\pi^7\sqrt{2}x}{1134} -\quad\text{and}\quad -\frac{1}{2} \mathrm{Res}(Q(s)/x^s; s=-3) = --\frac{\pi^7\sqrt{2}x^3}{4050}.$$ -This almost concludes the proof of this third formula if we can show -that the integral on the line $\Re(s) = -3$ vanishes when $x=1.$ To -accomplish this we must show once more that the integrand is odd on -this line. -Put $s = -3 - it$ in the integrand to get -$$\pi^{3+it} \sqrt{2}^{3+it} -\left(8 + 2^{3+it}\right) -\Gamma(-3-it) \zeta(-3-it) \zeta(4-it).$$ -By the functional equation we obtain with $s = -3-it$ -$$\pi^{3+it} \sqrt{2}^{3+it} -\left(8 + 2^{3+it}\right) -\zeta(4+it) -2^{-3-it} \pi^{-3-it} -\frac{1}{2\cos\left(\pi(-3-it)/2\right)} -\zeta(4-it)$$ -which is -$$\sqrt{2}^{3+it} -\left(2^{-it} + 1\right) -\zeta(4+it) -\frac{1}{2\cos\left(\pi(3+it)/2\right)} -\zeta(4-it)$$ -which finally yields -$$\frac{1}{2\sin(\pi it/2)} -\left(\sqrt{2}^{3-it} + \sqrt{2}^{3+it}\right) -\zeta(4+it) -\zeta(4-it).$$ -This concludes it since the two zeta function terms together are even -as is the square root term while the sine term is odd, so their -product is odd. -A similar yet not quite the same computation can be found at this MSE link. -Another computation in the same spirit is at this MSE link II.<|endoftext|> -TITLE: A list of basic integrals -QUESTION [5 upvotes]: I am in need of a list of basic integrals for my upcoming ODE test, -I have searched on Math.SE for a post that might help but I didn't find such a post. -When I write 'basic' I don't necessarily mean immediate integrals (such as $e^x,\sin x$) but also 'useful' ones such as $\ln x$. If there is also some common techniques (useful substitutions like the trigonometric substitutions, how to integrate a rational function, etc.) this would be helpful too (since it has been over a year since I tooked my exam on Calc II and I forgot some of this material, this might also save some time during the exam). -I would appreciate any reference for this matter. - -REPLY [5 votes]: I have found the following "cheat sheets" helpful when I was going through ODE. -Paul's Online Math Notes: Cheat Sheets -The integral ones come in a full sized and condensed form. -Integrals Cheat Sheet<|endoftext|> -TITLE: Are these proofs correct? (Number Theory) -QUESTION [8 upvotes]: I'm finishing Chapter 1 of Apostol's book Introduction to Analytic Number Theory. I have made almost half of the 30 problems posed. I have some doubts on the proofs I produce, since sometimes I seem to assume extra information, or seem assume things that are obvious, when that is precisely what is to be proven. I am unsure about this few, however $(3)$ and $(4)$ seem right. -$(1)$ -THEOREM If $(a,b)=1$ and $ab=c^n$, then $a=x^n$ and $b=y^n$ for some $x,y$. -PROOF If $(a,b)=1$ then we have -$$a = \prod p_i^{a_i}$$ -$$b = \prod p_j^{b_j}$$ -where $p_j \neq p_i$ for any $i,j$. Let $c=\prod p_m ^{c_m}$, and the $p_j$ and $p_i$ are uniquely determined. 
Then
-$$ab=\prod p_i^{a_i}p_j^{b_j}=\prod p_m ^{nc_m}$$
-But this means, since $p_i \neq p_j$, that
-$$a_i =nc_{m_i}$$
-$$b_j =nc_{m_j}$$
-Thus
-$$a = \prod p_i^{nc_{m_i}}=x^n$$
-$$b = \prod p_j^{nc_{m_j}}=y^n$$
-What I seem to be saying is "if $a$ and $b$ have no common prime factors and $ab=c^n$, then $a$ and $b$'s prime factors must be of multiplicity $n$. Else, $c$ wouldn't be a perfect $n$th power."
-Apostol suggests considering $d=(a,c)$.
-$(2)$
-THEOREM For every $n \geq 1$ there exist uniquely determined $a>0$, $b>0$ such that $n=a^2b$, where $b$ is squarefree.
-PROOF From the fundamental theorem of arithmetic, one has
-$$n=\prod p_i^{a_i}$$ where the $p_i$ are unique. Group the product into two factors, according to the parity of the $a_i$s. If $a_i=2m_i$, write
-$$n=\left(\prod p_i^{m_i} \right)^2 \prod p_l^{a_l}$$
-The remaining $a_l$ are all odd, viz. $a_l=2n_l+1$. Then write
-$$n=\left(\prod p_i^{m_i} \prod p_l^{n_l}\right)^2 \prod p_l$$
-$$n=a^2 b$$
-Since the $p_i$ were unique, so are $a^2$ and $b$, and $b$ is clearly squarefree.
-$(3)$
-THEOREM If $2^n-1=p$, where $p$ is prime, then $n$ is prime.
-PROOF Reductio ad absurdum.
-Suppose $2^n-1$ is prime, and write $n=qp$. Then
-$$2^n-1=2^{qp}-1=(2^q-1)(1+2^q+2^{2q}+\cdots+2^{q(p-1)})$$
-thus $2^{q}-1\mid 2^n-1$, $\Rightarrow \Leftarrow$
-$(4)$
-THEOREM If $2^n+1$ is prime, then $n$ is a power of two.
-PROOF Reductio ad absurdum.
-Suppose that $2^n+1=p$, $p$ a prime, and $n$ is composite,
-$$n=ed$$ where $e$ is odd. Then it is clear $n \neq 2^m$ and $$2^n+1=2^{ed}+1=(2^d+1)(1-2^d+2^{2d}-\cdots+2^{d(e-1)})$$
-Thus $2^d+1 \mid 2^n+1$. $\Rightarrow \Leftarrow$
-Then $n$ can't have any odd factors, that is $n=2^m$ for some $m$.
-NOTE: I mostly care about the proofs being correct or not. If they aren't, let me know what the flaw is, and please hint a correction. I'm not looking for alternative proofs unless the proof is absolutely hokum.
-
-REPLY [2 votes]: Your proof of 1 is correct; it is indeed saying that if a product of two relatively prime (positive) integers is a perfect $n$th power, then each is a perfect $n$th power. This is a generalization of a result of Euclid's which states that if $ab$ is a perfect square, and $a$ and $b$ are relatively prime, then each of $a$ and $b$ is a perfect square; this result is used to characterize Pythagorean triples in the Elements.
-I'll assume that $n\gt 1$. To follow Apostol's hint, let $d=\gcd(a,c)$; then $d^n|c^n = ab$, and since $\gcd(d,b)|\gcd(a,b) = 1$, then $d^n|a$. Since $\gcd(a/d, c/d) = 1$, if we write $a=d^ny$ and $c=dx$, then $\gcd(d^{n-1}y,x) = 1$. But since $y|x^n$, it follows that $y=1$, so $a=d^n$.
-Now use a symmetric argument to show $b$ is an $n$th power. Note that (I think) we did not use unique factorization into irreducibles, so this argument should work in any gcd-domain, not just in UFDs.
-Your proof for 2 is almost okay, but you should really specify that $n=pq$ with $1\lt p,q\lt n$ if you want to argue by contradiction. Then your contradiction arises from the fact that you cannot have $2^q-1 = 1$ nor $1+2^q + \cdots + 2^{q(p-1)}=1$ (you never took this into account!). Alternatively, you can do it directly, by noting that if $n=pq$, then the factorization you give forces either $2^{q}-1=1$, or $1+2^q + \cdots + 2^{q(p-1)}=1$, hence $p-1=0$.
-Your argument for $3$ is not quite right, because we should not assume that $n$ is composite. If you want to argue by contradiction, you should only assume that $n$ is divisible by an odd number greater than $1$.
And you also need to account for the possibility that the second factor in your factorization equals $1$; that is, that
-$$1-2^d + 2^{2d} -2^{3d}+ \cdots +2^{(2k)d} = 1.$$
-The argument is that since $2^{2rd} - 2^{(2r-1)d}$ is positive, this can only happen if $2k=0$, hence the odd factor $e = 2k+1$ must be equal to $1$.<|endoftext|>
-TITLE: Proving ${p-1 \choose k}\equiv (-1)^{k}\pmod{p}: p \in \mathbb{P}$
-QUESTION [5 upvotes]: Possible Duplicate:
-Prove $\binom{p-1}{k} \equiv (-1)^k\pmod p$
-
-The question is as follows:
-
-Let $p$ be prime. Show that ${p \choose k}\bmod{p}=0$, for $0 \lt k \lt p,\space k\in\mathbb{N}$. What does this imply about the binomial coefficients ${p-1 \choose k}$?
-
-By the definition of binomial coefficients:
-$${p \choose k}=\frac{p!}{k!(p-k)!}$$
-Now if $0 \lt k \lt p$, then $p$ divides the numerator $p!$ but not the denominator $k!(p-k)!$ (every factor of which is less than the prime $p$), so since $\binom{p}{k}$ is an integer we have $p\mid{p\choose k}$, therefore ${p \choose k}\equiv0\pmod{p}, \space 0 \lt k \lt p. \space \blacksquare$
-Note that we can write: ${p \choose k}={p-1 \choose k}+{p-1 \choose k-1}$, and therefore:
-$${p-1 \choose k}={p \choose k}-{p-1 \choose k-1}=\frac{p!}{k!(p-k)!}-\frac{(p-1)!}{(k-1)!(p-k)!}=\frac{(p-1)!}{(k-1)!(p-k)!}\left(\frac{p}{k}-1\right)$$
-However, I am unsure how to proceed with this question; the book I am working from states that:
-$${p-1 \choose k}\equiv(-1)^{k}\pmod{p}, \space 0 \le k \lt p$$
-But I am unsure how the authors have derived this congruence, so I'd appreciate any hints.
-Thanks in advance.
-
-REPLY [4 votes]: It turns out that we do not even need Wilson's Theorem. Note the identity
-$$\binom{p-1}{k+1}(k+1)=\binom{p-1}{k}(p-k-1).$$
-This is easily obtained from the fact that $\binom{n}{m}=\frac{n!}{m!(n-m)!}$.
-Now note that $p-k-1\equiv -(k+1)\pmod p$. Thus
-$$\binom{p-1}{k+1}(k+1)\equiv -\binom{p-1}{k}(k+1)\pmod p.$$
-If $0\le k \lt p-1$, then $k+1$ is not divisible by $p$, so we can cancel and obtain
-$$\binom{p-1}{k+1}\equiv -\binom{p-1}{k}\pmod p.\tag{$1$}$$
-Thus $\binom{p-1}{k}$ changes sign modulo $p$ every time that we increment $k$.
-But $\binom{p-1}{0}=1$, and the result follows.<|endoftext|>
-TITLE: Why is $\log_{-2}{4}$ complex?
-QUESTION [17 upvotes]: With the logarithm being the inverse of the exponential function, it follows that $ \log_{-2}{4}$ should equal $2$, since $(-2)^2=4$. The change of base law, however, implies that $\log_{-2}{4}=\frac{\log{4}}{\log{-2}}$, which is a complex number. Why does this occur when there is a real solution?
-
-REPLY [5 votes]: How do you define $\log_{-2}x$ in general? Under what circumstances is it appropriate to apply that "law"?
-When $a>0$, $a\neq 1$, and $x>0$, then $\log_a x$ is defined to be the real number $y$ such that $a^y=x$. The fact that there is a unique such $y$ needs to be shown to justify that this definition makes sense (and it does). Note that this also depends on defining what $a^y$ means when $a$ is a positive number and $y$ is a real number.
-With this definition of the logarithm, if you have another number $b>0$, $b\neq 1$, then you can show that $\log_b x=\dfrac{\log_a x}{\log_a b}$. This is an appropriate application of the change of base "law," and most likely how you have seen it introduced.
-
-Once you get into allowing negative bases and possibly complex exponents, things become murkier. For example, how could we claim that $0$ is the natural logarithm of $1$ when it is well known that $e^{2\pi i}=1$? Should we define $\log_a x$ to be some complex number $y$ such that $a^y=x$, and then stipulate that we'll take $y$ to be real when possible?
How do we even define $(-2)^y$ in general when $y$ is not an integer? We have to get into using branches of the complex logarithm, so the answers will not be unique, and they won't match up with the usual "laws" that work out in the positive base case. Closely related to this are the fallacies that arise from assuming that the "law" $(a^b)^c=a^{bc}$ is valid outside of the context where $a$ is positive, seen in another question.<|endoftext|>
-TITLE: Proposition 5.21 in Atiyah-MacDonald
-QUESTION [13 upvotes]: There's just one step in this proof I can't see for the life of me.
-Set up: We have a field $K$ and an algebraically closed field $\Omega$. $(B, g)$ is maximal in the set $\Sigma$ of ordered pairs $(A, f)$ where $A$ is a subring of $K$ and $f$ a homomorphism into $\Omega$, where $\Sigma$ has the partial order $(A, f) \leq (A', f')$ if $A$ is a subring of $A'$ and $f'|_{A} = f$. The overall claim is that $B$ is a valuation ring of $K$. We let $M$ be the unique maximal ideal of $B$ (which exists). We take $x \in K$ with $x \neq 0$ and may assume that $M[x]$ is not the unit ideal of $B' = B[x]$ (by a lemma) and so is contained in some maximal ideal $M'$. Let $k = B/M$ and $k' = B'/M'$.
-The claim I don't understand: Since $k' = k[\bar{x}]$ for $\bar{x}$ the image of $x$ in $k'$ (which I see), $\bar{x}$ is algebraic over $k$.
-
-REPLY [4 votes]: Suppose $\bar x$ is not algebraic over $k$.
-Then $k[\bar x]$ is isomorphic to the polynomial ring $k[X]$.
-However $k[X]$ is not a field (any polynomial of degree $\geq 1$ in $k[X]$ is not invertible in $k[X]$).
-Hence $k[\bar x]$ is not a field.
-This is a contradiction, since $k[\bar x] = k' = B'/M'$ is a field, $M'$ being maximal.
-Hence $\bar x$ is algebraic over $k$.<|endoftext|>
-TITLE: Prove that $R \otimes_R M \cong M$
-QUESTION [10 upvotes]: Let $R$ be a commutative unital ring and $M$ an $R$-module.
-I'm trying to prove $R \otimes_R M \cong M$ but I'm stuck. If $(R \otimes M, b)$ is the tensor product then I thought I could construct an isomorphism as follows:
-Let $\pi: R \times M \to M$ be the map $(r,m) \mapsto rm$. Then there exists a unique linear map $l: R \otimes M \to M$ such that $l \circ b (r,m)= l(r \otimes m) =r l(1 \otimes m) = \pi(r,m) = rm$.
-Now I need to show that $l$ is bijective. Surjectivity is clear. But I can't seem to show injectivity. In fact, by now I think it might not be injective. But I can't think of a different suitable map $\pi$.
-Then I thought perhaps I should show that $l$ has a two-sided inverse but for an $m$ in $M$ I can't write down its inverse. How do I finish the proof?
-
-REPLY [6 votes]: As Dylan suggested, we can use the universal property of the tensor product to show that $R \otimes_R M \cong M$. Now this means we would like to show that the $R$-module $M$ is equipped with a bilinear map $\pi : R \times M \rightarrow M$ such that for any bilinear map $B : R \times M \rightarrow P$, where $P$ is some other $R$-module, there exists a unique linear map $L : M \rightarrow P$ such that
-$$B = L \circ \pi.$$
-This shouldn't be too hard. Say we are given an $R$-bilinear map $B : R \times M \rightarrow P$. We can define the map $\pi : R \times M \rightarrow M$ by mapping the pair $(r,m)$ to $rm$. One easily checks that the map $\pi$ is well-defined and bilinear. Now define the map $L : M \rightarrow P$ on "elementary elements" by
-$$L(m) = B(1,m)$$
-and extend it additively. I said "elementary elements" because usually we define maps on elementary tensors - but there are none here so I just coined this term. Now your map $L$ is easily seen to be well-defined.
For linearity, additivity is built into how we defined $L$. Let's check that it is compatible with scalar multiplication: For any $r \in R$, we have that
-$$\begin{eqnarray*} L(rm) &=& B(1,rm) \\
-&=& rB(1,m) \hspace{5mm} \text{(By definition of bilinearity)} \\
-&=& r L(m) \end{eqnarray*} $$
-completing the claim that $L$ is $R$-linear. Now it remains to check that our map $B : R \times M \rightarrow P$ factors uniquely through the tensor product. The question of whether $B$ factors through $M$ is equivalent to asking whether sending $(r,m)$ to $B(r,m)$ in $P$ is the same as first sending $(r,m)$ to $rm$ in $M$, and then sending $rm$ from $M$ to $P$. But this is clear because
-$$\begin{eqnarray*} L(rm) &=& B(1,rm) \\
-&=& rB(1,m) \\
-&=& B(r,m). \end{eqnarray*}$$
-It is possible to do manipulations like that because $M$ is an $R$-module and the other guy in the direct product is $R$ itself.
-It now remains to see why, given our maps $B$ and $\pi$, there is a unique $R$-linear map $L : M \longrightarrow P$. If you have any linear map $L$ out of $M$, in order for the appropriate diagram to commute we must have that $L(m) = L(\pi(1,m)) = B(1,m)$. There really is no choice for what $L$ is because it is defined by $B$. Hence there is only one $L$ in question for a given bilinear map $B: R \times M \longrightarrow P$ and uniqueness is proven.
-We have shown that the $R$-module $M$ satisfies the universal property of the tensor product $R \otimes_R M$, and hence $R \otimes_R M$ must be isomorphic to $M$.
-$$\hspace{6in} \square$$<|endoftext|>
-TITLE: Arclength of the curve $y= \ln( \sec x)$ $ 0 \le x \le \pi/4$
-QUESTION [6 upvotes]: Arclength of the curve $y= \ln( \sec x)$ $ 0 \le x \le \pi/4$
-I know that I have to find its derivative, which is easy: it is $\tan x$.
-Then I put it into the arclength formula
-$$\int \sqrt {1 - \tan^2 x}$$
-From here I am not sure what to do, I put it in wolfram and it got something massive looking. I know I can't use u substitution and I am pretty certain I have to algebraically manipulate this before I can continue but I do not know how.
-
-REPLY [5 votes]: As others have noted, it should be
-$$\int_0^{\pi/4} \sqrt{1+\tan^2 x} \ \ dx$$
-Recall $1+\tan^2 x = \sec^2 x$:
-$$\int_0^{\pi/4} \sqrt{\sec^2 x} \ \ dx$$
-Since all trig functions are positive in the first quadrant, we can simply rewrite the integrand as
-$$\int_0^{\pi/4} {\sec x} \ \ dx$$
-This is a (relatively) well known integral that evaluates to
-$$\log (\tan(x) + \sec (x))$$
-Now simply evaluate at the endpoints - you should get around $.8814$ assuming I didn't make a button-punching error.<|endoftext|>
-TITLE: Which of the following are Dense in $\mathbb{R}^2$?
-QUESTION [7 upvotes]: Which of the following sets are dense in $\mathbb R^2$ with respect to the usual topology?
-
-$\{ (x, y)\in\mathbb R^2 : x\in\mathbb N\}$
-$\{ (x, y)\in\mathbb R^2 : x+y\in\mathbb Q\}$
-$\{ (x, y)\in\mathbb R^2 : x^2 + y^2 = 5\}$
-$\{ (x, y)\in\mathbb R^2 : xy\neq 0\}$.
-
-Any hint is welcome.
-
-REPLY [5 votes]: For a set to be dense in $\mathbb{R}^2$ (or in any other metric space, for that matter) it is necessary and sufficient to check that it intersects every open disc. So, to prove that a set isn't dense, it's enough to find one open disc that includes no points of the set. For example, in (1), take $D((\frac{1}{2},0),\frac{1}{4})$ (a disk with radius $\frac{1}{4}$ around $(\frac{1}{2},0)$). It contains no point of the set in (1). Hence the set is not dense.
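-As a concrete illustration of this disc criterion, here is a minimal Python sketch (an illustrative aside; the helper names and sample constants are freely chosen) checking the counterexample for (1) and exhibiting, for set (2), a point of the set inside an arbitrary disc via rational approximation:
-
-from fractions import Fraction
-import math
-
-def in_disc(p, center, r):
-    # strictly inside the open disc of radius r about center
-    return math.dist(p, center) < r
-
-# (1): no point with x a natural number lies in D((1/2, 0), 1/4),
-# since |x - 1/2| < 1/4 forces 1/4 < x < 3/4.
-assert not any(in_disc((n, 0.0), (0.5, 0.0), 0.25) for n in range(100))
-
-# (2): {(x, y) : x + y rational} meets every disc D((a, b), r):
-# pick a rational q with |q - (a + b)| < r and use the point (a, q - a).
-def point_with_rational_sum(a, b, r):
-    q = Fraction(a + b).limit_denominator(math.ceil(1 / r))
-    return (a, float(q) - a)
-
-a, b, r = math.pi, math.e, 1e-6
-assert in_disc(point_with_rational_sum(a, b, r), (a, b), r)
-
-Floating point makes the check for (2) only heuristic, of course - every float is itself rational - but it mirrors the pencil-and-paper argument: a rational approximation of $a+b$ within $r$ yields a point of the set inside the disc.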
-To prove (4), take any open disk $D((x,y),r)$. If $r>0$, this disk contains a point $(x',y')$ with $x'y'\neq 0$: the set $\{(x,y):xy=0\}$ is the union of the two coordinate axes, and no open disk can be contained in it. Hence the set in (4) is dense.<|endoftext|>
-TITLE: If $X$ is a connected subset of a connected space $M$ then the complement of a component of $M \setminus X$ is connected
-QUESTION [11 upvotes]: I have an exercise found on a list but I didn't know how to proceed. Please, any tips?
-Let $X$ be a connected subset of a connected metric space $M$. Show that for each connected component $C$ of $M\setminus X$, the set $M\setminus C$ is connected.
-
-REPLY [7 votes]: Here is the theorem found in Kuratowski's book. Thanks for the reference; it is an excellent book.
-[Scan of the theorem and its proof omitted.]
-The Theorem II.4 cited above:
-[Scan of Theorem II.4 omitted.]<|endoftext|>
-TITLE: Closure of the interior of another closure
-QUESTION [7 upvotes]: Let $X$ be a topological space and let $A \subset X$.
-Is it true that $\overline{\rm{Int}(\overline{A})}=\overline {A}$?
-This question arose while I was trying to show $\overline{X-\overline{\rm{Int}(\overline{A})}}=\overline{X-\overline{A}}$.
-
-REPLY [7 votes]: The statement is false in general. Take $A$ to be a closed, nowhere dense set; then $\operatorname{cl}A=A$, but $\operatorname{cl}\operatorname{int}\operatorname{cl}A=\operatorname{cl}\operatorname{int}A=\operatorname{cl}\varnothing=\varnothing.$ In the space $\Bbb R^n$, for instance, any nonempty closed, discrete set provides a counterexample, as does any Cantor set.
-Added: To show that
-$$\overline{X-\overline{\rm{Int}(\overline{A})}}=\overline{X-\overline{A}}\;,$$
-note first that you already know that
-$$\overline{X-\overline{\rm{Int}(\overline{A})}}\supseteq\overline{X-\overline{A}}\;.$$
-Suppose that $x\notin\operatorname{cl}\left(X\setminus\operatorname{cl}A\right)$, and let $V$ be an open nbhd of $x$ disjoint from $X\setminus\operatorname{cl}A$; $V\subseteq\operatorname{cl}A$, so $V\subseteq\operatorname{int}\operatorname{cl}A\subseteq\operatorname{cl}\operatorname{int}\operatorname{cl}A$, and therefore $V\cap (X\setminus\operatorname{cl}\operatorname{int}\operatorname{cl}A)=\varnothing$, i.e., $x\notin\operatorname{cl}(X\setminus\operatorname{cl}\operatorname{int}\operatorname{cl}A)$. It follows that $\operatorname{cl}(X\setminus\operatorname{cl}\operatorname{int}\operatorname{cl}A)\subseteq\operatorname{cl}\left(X\setminus\operatorname{cl}A\right)$, and we’re done.<|endoftext|>
-TITLE: If $p:E\to B$ is a covering space and $p^{-1}(x)$ is finite for all $x \in B$, show that $E$ is compact and Hausdorff iff $B$ is compact and Hausdorff
-QUESTION [11 upvotes]: I can show that if $E$ is compact and Hausdorff, then $B$ has the same properties; also I can show that if $B$ is compact and Hausdorff, then $E$ is Hausdorff, but I have trouble trying to prove that $E$ is also compact. Any suggestions would be appreciated.
-I would like to know if there is a short way, or at least a simple way, to show that if $E$ is Hausdorff so is $B$; I can prove it, but I have to make a lot of observations and I get a really, really long demonstration.
-This is an exercise in Hatcher (Algebraic Topology), Section 1.3, exercise 3.
-
-REPLY [12 votes]: I'll try to answer the question without saying too much so that you can still work on it. I can edit my answer to give a complete solution if need be.
-Let $\mathcal{U}$ be an open cover of $E$. Then for each $x\in B$ the fiber $p^{-1}(x)$ is finite. Thus we can choose $U^x_1,\ldots, U^x_{n_x}\in\mathcal{U}$ such that $p^{-1}(x)$ is in the union of these sets.
-Hints: Look at the image of $U^x_1,\ldots,U^x_{n_x}$ under $p$. Can you get an open set of $B$ from this containing $x$? How can you use this to get an open cover of $B$?
How do you extract an open cover of $E$ from this information?<|endoftext|>
-TITLE: Algebraic proof of a binomial sum identity.
-QUESTION [6 upvotes]: I came across this identity when working with energy partitions of Einstein solids. I have a combinatorial proof, but I'm wondering if there exists an algebraic proof.
-$$\sum_{q=0}^N\binom{m + q - 1}{q}\binom{n + N - q - 1}{N - q} = \binom{m + n + N - 1}{N}$$
-I've tried induction, but Pascal's Identity cannot simultaneously reduce the top and bottom argument for an inductive proof.
-For those interested, a combinatorial proof of the identity can be given as follows: Consider the ways of distributing $N$ quanta of energy to a system of $n + m$ oscillators (where each oscillator can have any number of quanta). This is equivalent to asking how many ways there are of putting $N$ objects into $n + m$ boxes. From the traditional stars and bars method, the total is given by
-$$\binom{m + n + N - 1}{N}$$
-which is the right-hand side. Alternatively, consider partitioning the units of energy between the first $m$ and the last $n$ oscillators. Give $q$ units of energy to the first $m$ oscillators. Then there remain $N - q$ units of energy for the latter $n$. Together, the number of states for this particular partition is
-$$\binom{m + q - 1}{q}\binom{n + N - q - 1}{N - q}$$
-Summing over all partitions gives the left-hand side.
-Thanks for your time.
-
-REPLY [2 votes]: A straightforward proof by induction is possible, provided that you induct on the right thing, essentially $n+N$.
-To simplify the notation, let $a=m-1$, $b=n+N-1$, and $c=n-1$; then the equation
-$$\sum_{q=0}^N\binom{m + q - 1}{m-1}\binom{n + N - q - 1}{n-1} = \binom{m + n + N - 1}{N}$$
-can (after applying symmetry to each binomial) be rewritten as
-$$\sum_{k=0}^b\binom{a+k}a\binom{b-k}c=\binom{a+b+1}{a+c+1}\;.\tag{1}$$
-I’ll show by induction on $b$ that for each $b\ge 0$, $(1)$ holds for all $a,c\ge 0$.
-When $b=0$, both sides of $(1)$ are $1$ if $c=0$ and $0$ otherwise. Now assume that $(1)$ holds for some $b\ge 0$. Then
-$$\begin{align*}
-\sum_{k=0}^{b+1}\binom{a+k}a\binom{b+1-k}c&=\sum_{k=0}^{b+1}\binom{a+k}a\left(\binom{b-k}c+\binom{b-k}{c-1}\right)\\
-&=\sum_{k=0}^b\binom{a+k}a\binom{b-k}c+\sum_{k=0}^b\binom{a+k}a\binom{b-k}{c-1}\\
-&=\binom{a+b+1}{a+c+1}+\binom{a+b+1}{a+c}\\
-&=\binom{a+b+2}{a+c+1}\;.
-\end{align*}$$<|endoftext|>
-TITLE: Extreme boundary of a compact, convex, metrizable set is $G_\delta$
-QUESTION [5 upvotes]: Let $X$ be a topological vector space (no assumptions about local convexity are made in the question, though I am worried they might be required). Suppose $K\subset X$ is a compact, convex, metrizable subset of $X$, and denote by $\partial_e K$ the extreme boundary of $K$. We wish to show that $\partial_e K$ is $G_\delta$ in $K$.
-It seems that this would boil down to writing a correct, alternate definition of being an extreme point as an intersection of a bunch of open or $G_\delta$ sets, and using separability (of $K$ or of $\mathbb{R}$) to reduce that intersection to a countable one. I had two possible approaches:
-(1): For each rational $\lambda$ with $0<\lambda<1$, let $$U_\lambda = \{x\in K : \forall y,z\in K\ (x=\lambda y + (1-\lambda) z \implies y=z)\}.$$ I believe $\partial_e K = \bigcap_\lambda U_\lambda$, and would like to claim that each $U_\lambda$ is open.
-(2): Let $D$ be a countable dense set in $K$ (which is separable).
For $y\neq z$ in $D$, let $$G_{y,z}=\{x\in K:x \text{ is not in the interior of the line segment between $y$ and $z$}\}.$$ It is easy to see that each $G_{y,z}$ is $G_\delta$, and $\partial_e K\subset \bigcap_{y\neq z \in D} G_{y,z}$. There is an obvious problem if $D$ is dense in the interior of $K$ while containing no boundary points of $K$, but we could throw in a countable dense subset of the boundary of $K$ as well. Under the assumption of local convexity, we need only show that every boundary point of $K$ which is in $\bigcap_{y\neq z \in D} G_{y,z}$ is in $\partial_e K$, which seems geometrically clear to me, but I do not see the details. -Any help? - -REPLY [6 votes]: Let $K$ be compact, convex and metrizable in the topological vector space $X$ (local convexity isn't required for the argument). Fix a metric $d$ on $K$ compatible with the topology. Let $$F_n = \left\{x \in K\,:\, \text{there are } y,z \in K\text{ such that }x = \frac{1}{2}(y+z)\text{ and }d(y,z) \geq \frac{1}{n}\right\}.$$ - Then $F_n$ is closed and a point is non-extremal if and only if it is in the $F_\sigma$-set $F = \bigcup_n F_n$. Thus the set of extremal points $\partial_e K = K \smallsetminus F$ is a $G_\delta$. - -Note: I posted the above as a comment in this related MO thread where it is also mentioned that this may fail if $K$ is compact convex but not metrizable (even if $X$ is assumed to be locally convex): the set $\partial_e K$ need not even be a Borel set in $K$ by an example described by Bishop–de Leeuw in section VII of The representations of linear functionals by measures on sets of extreme points. Annales de l'institut Fourier, 9 (1959), p. 305–331. - -Added: -Some more details: - -The set $C_n = \{(y,z) \in K \times K\,:\,d(y,z) \geq 1/n\}$ is closed since it is the preimage of $[1/n,\infty)$ under the continuous function $d\colon K \times K \to [0,\infty)$. Therefore $C_n$ is compact. The set $F_n$ is the image of the compact set $C_n$ under the continuous function $(y,z) \mapsto \frac{1}{2}(y+z)$, hence $F_n$ is compact and thus closed. -We have $\partial_e K = K \smallsetminus F$ where $F = \bigcup_n F_n$ as above. - -If $x$ is not an extremal point, then $x \in F_n$ for some $n$: -By definition $x = (1-\lambda) y + \lambda z$ for some $0 \lt \lambda \lt 1$ and $y \neq x \neq z$. If $\lambda = \frac{1}{2}$ then $x \in F_n$ where $n$ is so large that $\frac{1}{n} \leq d(y,z)$. If $\lambda \neq \frac{1}{2}$ we may (possibly after switching $y$ and $z$) assume that $0 \lt \lambda \lt \frac{1}{2}$ and write $x = \frac{1}{2}(y + z_\lambda)$ where $z_\lambda = (1-2\lambda)y + 2\lambda z \in K$. Since $y \neq z$ we have $y \neq z_\lambda$ and thus $d(y,z_\lambda) \geq \frac{1}{n}$ for large enough $n$, so $x \in F_n$. -Conversely, if $x \in F_n$ for some $n$ then $x$ is clearly not extremal. - - - -Later: -Let me add some remarks on your approaches: -The problem with idea (1) is that $U_{\lambda}$ is not open. In fact, I showed in point 2. above that $U_{1/2} = \partial_e K$ and a small modification of that argument gives that $U_{\lambda} = \partial_e K$ for all $\lambda \in (0,1)$, so you're right that $\partial_e K = \bigcap_{\lambda} U_{\lambda}$, but proving that $U_{\lambda}$ is a $G_\delta$ is the same as the original problem, so that idea won't lead anywhere. 
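-To see the $F_n$ sets of the argument above in a concrete case, here is a small Python sketch for the unit square $K=[0,1]^2$, whose extreme points are its four corners (the coarse direction sampling and the helper name are mine, purely for illustration):
-
-import numpy as np
-
-def in_square(p):
-    return 0.0 <= p[0] <= 1.0 and 0.0 <= p[1] <= 1.0
-
-def fn_witness(x, n):
-    # try to certify x in F_n: find y, z in K with x = (y + z)/2
-    # and d(y, z) = 1/n, scanning directions coarsely
-    for theta in np.linspace(0.0, np.pi, 361):
-        v = np.array([np.cos(theta), np.sin(theta)]) / (2.0 * n)
-        if in_square(x + v) and in_square(x - v):
-            return x + v, x - v
-    return None
-
-# an edge midpoint is the midpoint of a well-separated segment in K ...
-assert fn_witness(np.array([0.5, 0.0]), n=4) is not None
-# ... while a corner is not, whatever the direction
-assert fn_witness(np.array([0.0, 0.0]), n=4) is None
-
-Every non-extreme point of the square admits such a witness for suitable $n$, while the corners never do; this is exactly the dichotomy $\partial_e K = K \smallsetminus \bigcup_n F_n$ used above.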
-The second idea looks much better; however, I doubt that exploiting separability of $K$ only is enough (that is: I doubt that the set of extremal points of a compact convex separable but non-metrizable set is a $G_\delta$, but I haven't checked this thoroughly). I think the argument I gave is one way to get around the difficulties.
-Another point I'd like to add is that the distinction of boundary points and interior points you seem to be making does not work for infinite-dimensional compact convex sets. In fact, the Hilbert cube $C$ is homogeneous in the sense that its homeomorphism group acts transitively (see here for a good write-up of the non-trivial proof), so no point is distinguished by topological properties alone. This property is generic in the sense that every compact convex metrizable set in a locally convex vector space is either contained in a finite-dimensional subspace or homeomorphic to $C$ by a theorem of Klee. See my answer here for more on this.
-Finally, we can't hope to do much better than $G_\delta$. In three dimensions take the double cone $K$ obtained by taking the convex hull of a circle $C$ of radius $1$ around $(1,0,0)$ in the $(x,y)$-plane and the two points $p_\pm=(0,0,\pm1)$ on the $z$-axis. Then $\partial_e K = \{p_\pm\} \cup \left(C\smallsetminus \{(0,0,0)\}\right)$ is not closed. [In two dimensions the non-extremal points are open in the boundary, hence the set of extremal points of a compact convex set is closed; the one-dimensional case is trivial.]<|endoftext|>
-TITLE: Liouville's theorem for Banach spaces without the Hahn-Banach theorem?
-QUESTION [18 upvotes]: Let $B$ be a (complex) Banach space. A function $f : \mathbb{C} \to B$ is holomorphic if $\lim_{w \to z} \frac{f(w) - f(z)}{w - z}$ exists for all $z$, just as in the ordinary case where $B = \mathbb{C}$. Liouville's theorem for Banach spaces says that if $f$ is holomorphic and $|f|$ is bounded, then $f$ is constant.
-The only way I know how to prove this uses the Hahn-Banach theorem: once we know that continuous linear functionals on $B$ separate points, we can apply the usual Liouville's theorem to $\lambda(f)$ for every such functional $\lambda : B \to \mathbb{C}$.
-
-Can we avoid using Hahn-Banach? What if $B$ is in addition a Banach algebra?
-
-Motivation: Liouville's theorem is useful in the elementary theory of Banach algebras, where it seems to me that we usually don't need the big theorems of Banach space theory (e.g. the closed graph theorem), and I would like to be able to develop this theory within ZF if possible. It would be very interesting if this were actually independent of ZF.
-
-REPLY [7 votes]: The usual argument (from Ahlfors) is to use the estimate
-$|f'(a)| \leq M/r$, where $M$ is a bound for $|f|$ and $r$
-is the radius of a large circle about $0$ containing $a$.
-This follows from Cauchy's integral formula. I believe
-there is no difficulty proving Cauchy's theorem and
-integral formula for Banach space valued functions using
-classical methods, since these just estimate absolute
-values (replace by norms) and then use completeness.
-First you need some integration, but the integral that
-is the limit of integrals of step maps suffices.<|endoftext|>
-TITLE: Heat equation in bounded domain
-QUESTION [7 upvotes]: Let $\Omega \subseteq \mathbb{R}^n$, $n\geq 2$, be a bounded domain with boundary $\partial \Omega$ of class $C^2$, $v$ the outer unit normal vector on $\partial \Omega$, and $h \in L^2(\Omega)$.
-Let $u \in C\left([0,\infty);L^{2}(\Omega)\right)\cap C^1\left((0,\infty);H^{2}(\Omega)\right)$ be a solution to the Dirichlet problem
-$$\left\{\begin{array}{ll}
-u_{t}(t,x)=\Delta_{x}u(t,x), & t\in(0,\infty),\ x\in\Omega \\
-u(t,x)=0, & t\in(0,\infty),\ x\in\partial\Omega \\
-u(0,x)=h(x), & x\in\Omega
-\end{array}\right.$$
-Let $w \in C\left([0,\infty);L^{2}(\Omega)\right)\cap C^1\left((0,\infty);H^{2}(\Omega)\right)$ be a solution to the Neumann problem
-$$\left\{\begin{array}{ll}
-w_{t}(t,x)=\Delta_{x}w(t,x), & t\in(0,\infty),\ x\in\Omega \\
-\frac{\partial w}{\partial v}(t,x)=0, & t\in(0,\infty),\ x\in\partial\Omega \\
-w(0,x)=h(x), & x\in\Omega
-\end{array}\right.$$
-Show that there are bounded linear operators $E_{D}(t):L^{2}(\Omega)\to L^{2}(\Omega)$ and $E_{N}(t):L^{2}(\Omega)\to L^{2}(\Omega)$ such that $u(t,\cdot)=E_{D}(t)h$ and $w(t,\cdot)=E_{N}(t)h$, $t\in[0,\infty)$, and find their norms.
-Furthermore show that there are $0\ne h_{D}\in L^{2}(\Omega)$ and $0\ne h_{N}\in L^{2}(\Omega)$ such that $E_{D}(t)h_{D}=\left\| E_{D}(t) \right\|h_{D}$ and $E_{N}(t)h_{N}=\left\| E_{N}(t) \right\|h_{N}$ for all $t\in[0,\infty)$. Describe those elements $h_{D}$ and $h_{N}$.
-Attempt: Let $\{\lambda_{n}^{D}\}_{n\in\mathbb{N}}$ be the eigenvalues of the Dirichlet Laplacian, with respective eigenfunctions $\{\phi_{n}^{D}\}_{n\in\mathbb{N}}$ forming an orthonormal basis of $L^{2}(\Omega)$. Therefore the solution is $u(t,x)=\sum\limits_{n\in\mathbb{N}}\langle\phi_{n}^{D},h\rangle_{L^{2}(\Omega)}e^{-\lambda_{n}^{D}t}\phi_{n}^{D}(x)$, so that $E_{D}(t)(f)=\sum\limits_{n\in\mathbb{N}}\langle\phi_{n}^{D},f\rangle_{L^{2}(\Omega)}e^{-\lambda_{n}^{D}t}\phi_{n}^{D}(x)$, $f\in L^{2}(\Omega)$, but I don't know how to find the norm.
-
-REPLY [4 votes]: For the norm part, you have:
-$$ \langle E_D(t) f, E_D(t)f\rangle = \sum_{n\in \mathbb{N}} e^{-2\lambda_n^Dt}\langle \phi_n^D,f\rangle^2 $$
-by definition. So the norm, being
-$$ \sup_{\|f\| = 1} \|E_D(t) f\| $$
-is just given by
-$$ e^{-\lambda_0^Dt} $$
-where $\lambda_0^D$ is the smallest eigenvalue of the Dirichlet Laplacian. Furthermore, $h = \phi_0^D$, the corresponding eigenfunction, would be the norm-achieving example desired.
-
-For the Neumann boundary condition case, note that $u \equiv c$ is a solution. It is clear that such a solution maximizes the norm for $E_N(t)$; that is, given $h \equiv c$ you have $E_N(t)h = h = \|E_N(t)\|h$, so that $\|E_N(t)\| = 1$.<|endoftext|>
-TITLE: Spotting crankery
-QUESTION [39 upvotes]: Underwood Dudley published a book called Mathematical Cranks that talks about faux proofs throughout history.
While it seems to be more for entertainment than anything else, I feel it has become more relevant in modern mathematics. Especially with the advent of arXiv, you can obtain research papers before they are peer reviewed by a journal. So how does one distinguish between a crank proof and a genuine proof? This seems to be tough to discern in general.
-For instance, Perelman's proof was not submitted to any journal but published online. How did professional mathematicians discern that it was a genuine proof?
-So how does one spot a crank proof? It seems that John Baez once (humorously) proposed a "crackpot checklist". Would this seem like a fair criterion?
-
-REPLY [20 votes]: In addition to Willie's answer/comment:
-About a year ago, last June, a paper by Gerhard Opfer got a bit of attention for claiming to solve the Collatz Conjecture (it didn't). It was submitted to Mathematics of Computation, which may have given it the seeming credibility that propelled it into the spotlight (this is always a mystery - I don't know what made the recent kid-who-sort-of-solved-an-old-Newton-problem thing such a firestorm either). It even got to a question here.
-This prompted me to write a short blog post about the Collatz Conjecture, Opfer's paper, and as a soft-answer to this soft-question, a bit about cranks and crank papers. (Ironically, writing that blog post somehow threw the spotlight on me as a destination for crank papers, and I've received a great many since.)
-I think a large part of this aspect of the post can be summed up in two links: The Alternative-Science Respectability Checklist and Ten Signs a Claimed Mathematical Breakthrough is Wrong.
-But I also happened across some articles from the writer-physicist or physicist-writer Jeremy Bernstein (much of whose work is published in periodicals like the New Yorker). He wrote an article called How can we be sure that Einstein was not a crank? (this is a link to a book containing the article), and he discusses two criteria for determining whether a new physics paper is from a crank or not.
-The criteria don't quite port over to math so well, but there is an idea behind them that's true, just as the ideas behind the very humorous Ten Signs a Claimed Mathematical Breakthrough is Wrong are accurate in many ways. If I were to summarize some of his key points, I would say that Bernstein looks at 'correspondence' and 'predictiveness.' In the physics sense, 'correspondence' means that the result should explain why previous theories were wrong, and how the proposed idea agrees with experimental evidence. 'Predictiveness' is just what it says: a physics breakthrough should be able to predict some phenomena. If I were to cast these in a mathematical nature, I suppose 'correspondence' would say that the math shouldn't contradict things we already know (now we can solve all quintics with radicals, for example). But if the result is a big, old one, like Collatz or the Millennium problems, I should think that one needs to introduce something new so that there is some explanation of why it hadn't been done before. Predictiveness really doesn't port so well. I suppose that the strength of a mathematical result is sometimes measured in how much 'new math' it creates, and this is a sort of predictiveness... it's not a great match.
-But I'd like to end by noting that sometimes, especially in math, simple arguments for nonsimple results (whatever that really means) exist.
One of my favorite examples is the paper PRIMES is in P!, the paper detailing the AKS algorithm for quickly determining whether a given number is prime. The arguments are entirely elementary, despite how big the result is. And, funnily enough, there is capitalization and excitement, indicators on some of the crank-checklists. Yet the result is valid.<|endoftext|>
-TITLE: What would the double derivative of $f:\mathbb{R}^N \to \mathbb{R}^M$, i.e. $f''$, look like?
-QUESTION [6 upvotes]: The derivative of $f:\mathbb{R}^N \to \mathbb{R}^M$ is of the form $f':\mathbb{R}^N \to \mathbb{R}^{M \times N}$. I'd like to know what the double derivative looks like, i.e., what $f''$ would be. It maps from $\mathbb{R}^N$ to which space?
-PS: Please suggest some good references on this topic.
-
-REPLY [5 votes]: Since you tagged (differential-geometry), let me give a slightly more geometric perspective.
-The "derivative" operation in the category of smooth manifolds is one that associates to a smooth map $f:M\to N$ between smooth manifolds $M$ and $N$ the map $Tf$ (or sometimes written as $\mathrm{d}f$ or $\nabla f$, I will not use the notation $\mathrm{d}f$ so as not to cause confusion with exterior differentiation) between the smooth manifolds $TM$ and $TN$, where $TM$ denotes the tangent bundle of $M$. A specific property of the map $Tf$ is that restricted to the tangent space at a point $p\in M$, $Tf(p): T_pM \to T_{f(p)}N$ is a linear map between the tangent spaces of $M$ at $p$ and of $N$ at $f(p)$.
-This is already slightly different from the calculus definition. For linear spaces $X$ and $Y$, the calculus definition of a derivative gives that for $f:X\to Y$, $\nabla f: X\times X \to Y$ is linear in the second component. Formally we can think of the map $Tf$ as $(f,\nabla f)$ in this context, using that there is a canonical isomorphism between the tangent spaces $TX$ and $X\times X$ when $X$ is linear.
-Going back to the geometrical picture: naturally, the "second derivative" then is
-$$ T(Tf) : T(TM) \to T(TN) $$
-since the tangent bundles $TM$ and $TN$ are smooth manifolds in their own right, and $Tf$ is a smooth map in its own right. In the advanced calculus definition, given a linear space $X$ and a linear space $Y$, the second derivative is a map from $X\times X\times X \to Y$ that is bilinear in the second and third components. Analogous to the case before, we can try to formally identify
-$$ T(Tf) "=" ((f,\nabla f),\nabla(f,\nabla f)) \overset{?}{=} (f,\nabla f,\nabla f,\nabla^2 f) $$
-While the co-domain has the correct number of dimensions, the domain however has some trouble: the domain of $T(Tf)$ is $TTM$, a manifold of $4m$ dimensions (assuming $M$ has $m$ dimensions). $\nabla^2 f$ however takes as input something only $3m$-dimensional! Furthermore, our intuitive notion of the second derivative from advanced calculus consists of taking two derivatives in two possibly different directions, where directions are interpreted as vectors along $M$, that is to say, two objects in $TM$. But as seen above, the notion of $T(Tf)$ requires considering objects in $TTM$. This (at the level of the second derivative) is the point where geometry departs from merely "calculus on manifolds", and this is the point where the notion of linear connection comes in. More precisely, to reconcile the notion with our usual notion of second derivatives, what we require is a way, when given $p\in M$, $v,w\in T_pM$, to identify the triple $(p,v,w)$ with an element of $T_{(p,v)}(TM)$.
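-In the flat case of a map $\mathbb{R}^N\to\mathbb{R}^M$, where the identifications above are canonical, the advanced-calculus answer to the original question is concrete: $f''$ maps $\mathbb{R}^N$ into $\mathbb{R}^{M\times N\times N}$, the value at each point acting bilinearly on the two directions. Here is a minimal finite-difference sketch in Python (the test map, step size, and helper names are arbitrary illustrative choices):
-
-import numpy as np
-
-def f(x):
-    # a test map f : R^2 -> R^3
-    return np.array([x[0] * x[1], np.sin(x[0]), x[1] ** 2])
-
-def second_derivative(f, x, h=1e-3):
-    # second derivative at x as an array of shape (M, N, N):
-    # D2f(x)[a, i, j] ~ d^2 f_a / dx_i dx_j, bilinear in directions i, j
-    n = len(x)
-    m = len(f(x))
-    H = np.zeros((m, n, n))
-    for i in range(n):
-        for j in range(n):
-            ei = np.eye(n)[i] * h
-            ej = np.eye(n)[j] * h
-            H[:, i, j] = (f(x + ei + ej) - f(x + ei - ej)
-                          - f(x - ei + ej) + f(x - ei - ej)) / (4 * h * h)
-    return H
-
-H = second_derivative(f, np.array([0.3, 0.7]))
-assert H.shape == (3, 2, 2)
-# equality of mixed partials: each H[a] is a symmetric bilinear form
-assert np.allclose(H, H.transpose(0, 2, 1), atol=1e-4)
-
-The symmetry check reflects the equality of mixed partials for a $C^2$ map; geometrically, it is this symmetric bilinear object that a linear connection lets one recover on a manifold.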
-For more about these types of stuff, you may wish to consult:
-
-Kobayashi and Nomizu, Foundations of Differential Geometry
-Kobayashi, S. "Theory of connections" Ann. Mat. Pura Appl. (4), 1957, 43, 119-194
-Kolář, Michor, & Slovák, Natural operations in differential geometry<|endoftext|>
-TITLE: The boundary is a closed set
-QUESTION [16 upvotes]: A point $p$ in a metric space $X$ is a boundary point of the set $A$, if any neighbourhood of $p$ has points of both $A$ and $X-A$. Prove that the set of all boundary points of $A$ is closed.
-My attempt:
-By definition of an open set this means that for every $x$ in the boundary there is an open ball centred at $x$ contained in the boundary. An open ball is a neighbourhood of $x$, which implies it contains points of $A$ and $X - A$, which in turn implies there are points in both $A$ and $X - A$ that are in the boundary of $A$.
-If $A$ is open, then pick any such point in $A$ that is also in the boundary. This point cannot be in $X - A$ by definition of set subtraction. Further, because $A$ is open there exists an open ball centred around this point contained in $A$. Again, an open ball is a neighbourhood, which means a neighbourhood of this point does not contain points of $X - A$, implying it cannot be in the boundary, a contradiction. If $A$ is closed then $X - A$ is open and a symmetric argument holds. Hence the boundary is closed.
-Is my work correct?
-
-REPLY [6 votes]: Your very first statement simply isn’t true: there need not be any non-empty open set contained in the boundary of $A$. Suppose that $A=[0,1]$ in the space $\Bbb R$: the boundary of $A$ is the set $\{0,1\}$, which does not contain any non-empty open subset of $\Bbb R$.
-I suggest that you try to show that $X\setminus\operatorname{bdry}A$ is open, from which it will follow at once that $\operatorname{bdry}A$ is closed. To do this, pick a point $x\in X\setminus\operatorname{bdry}A$, and show that some open neighborhood of $x$ is disjoint from $\operatorname{bdry}A$. You’ll need to consider two cases: if $x\in X\setminus\operatorname{bdry}A$, either $x$ has an open neighborhood disjoint from $A$, or $x$ has an open neighborhood disjoint from $X\setminus A$.
-
-REPLY [4 votes]: It appears that your proof is not correct since you only consider the case when $A$ is either open or closed.
-A set is closed if and only if it contains all its limit points.
-Suppose $(x_n)$ is a sequence of boundary points of $A$ which converges to some point $x$. One seeks to show that $x$ is also a boundary point. Let $U$ be an open set containing $x$. By definition of being the limit of $(x_n)$, there exists an $N$ such that $x_N \in U$. Since $x_N$ is a boundary point and $U$ is a neighborhood of $x_N$, $U$ contains a point of $A$ and a point of $X - A$. Since $U$ is arbitrary, $x$ is a boundary point. The set of boundary points of $A$ is a closed set.<|endoftext|>
-TITLE: Paracompact and Compactly Generated spaces
-QUESTION [7 upvotes]: A couple of days ago, thanks to Strom's excellent book Modern Classical Homotopy Theory, I started reading up on compactly generated spaces, weak Hausdorff spaces and compactly generated weak Hausdorff spaces (the best decision in my life so far). The classical article by Steenrod A Convenient Category of Topological Spaces and Strickland's openly available exposition The Category of CGWH Spaces have been extremely valuable resources.
-Not only have these texts shown me there is a conceptually satisfying way to deal with the topology in algebraic topology without any funny business, but for the first time ever, I understand the importance of the categorical viewpoint (and Yoneda's lemma).
-I have several times tried to grasp algebraic topology in the past, and have gone some way each time; however, I was always held back by the topology, and have come to StackExchange on a couple of occasions to ask questions. I learnt about classifying spaces for vector bundles and principal bundles and thus know of the usefulness of paracompact spaces, so I want to know whether my new paradise (the category of CGWH spaces) contains them.
-
-1) Are paracompact (resp. regular, normal...) spaces compactly generated?
-1') Are paracompact (resp. regular, normal...) Hausdorff spaces compactly generated?
-2) If not, is the $k$-ification of a paracompact (resp. regular, normal...) space still paracompact (resp. regular, normal...)?
-3) Are there free online resources that give a thorough account of paracompactness and other such separation properties (such as regularity, normality, metrizability) and their interplay?
-
-The article on nLab linked me to a set of lecture notes I have thus far only saved on my pc (http://www.helsinki.fi/~hjkjunni/ the top.$1$ to top.$10$).
-
-REPLY [6 votes]: The answer to all versions of (1) and (1') is no.
-Let $D$ be an uncountable set and $p$ a point not in $D$. Let $X=\{p\}\cup D$, and topologize $X$ by making each point of $D$ isolated and making $V\subseteq X$ a nbhd of $p$ iff $p\in V$ and $X\setminus V$ is countable. In other words, if $\tau$ is the topology on $X$, $$\tau=\wp(D)\cup\{X\setminus C:C\subseteq D\text{ and }|C|\le\omega\}\;.$$
-($X$ has been called the one-point Lindelöfization of the uncountable discrete space $D$.)
-$X$ is easily seen to be paracompact, Hausdorff, and hereditarily normal, but $X$ is not a $k$-space: the only compact subsets of $X$ are the finite subsets, and they don’t generate the topology.
-The $k$-ification of a Hausdorff space is certainly Hausdorff, since the new topology is finer than the original one; I don’t know about the other properties off the top of my head.<|endoftext|>
-TITLE: Asymptotics for Bell number
-QUESTION [6 upvotes]: Concrete Mathematics EXERCISE 9.46
-
-Show that the Bell number $\varpi_n=e^{-1}\sum_{k\ge0}k^n/k!$ of exercise 7.15 is asymptotically equal to
- \[ m(n)^ne^{m(n)-n-1/2}/\sqrt{\ln n} \]
- where $m(n)\ln m(n) = n-\frac12$, and estimate the relative error in this approximation.
-
-Part of the answer is that (according to the errata, I've edited the answer)
-
-For convenience we write just $m$ instead of $m(n)$. By Stirling's approximation, the maximum value of $k^n/k!$ occurs when $k\approx m\approx n/\ln n$, so we replace $k$ by $m+k$ and find that
- \begin{align*}
-\ln\frac{(m+k)^n}{(m+k)!}=&n\ln m-m\ln m+m-\frac{\ln 2\pi m}2\\
-&-\frac{(m+n)k^2}{2m^2}+O(k^3m^{-2}\log n)+O(1/m)\tag1
-\end{align*}
- Actually we want to replace $k$ by $\lfloor m\rfloor+k$; this adds a further $O(km^{-1}\log n)$. The tail-exchange method with $|k|\le m^{1/2+\epsilon}$ now allows us to sum on $k$, ...
-
-How can we derive equation (1) (especially when $|k|\le m^{1/2+\epsilon}$)? I tried to expand $\ln(m+k)!$ using Stirling's approximation. This gives
-\[
-\ln(m+k)!=(m+k)\ln(m+k)-(m+k)+\frac12\ln(m+k)+\sigma+O(1/m)
-\]
-where $e^\sigma=\sqrt{2\pi}$.
However, the term $k\ln m$ in
-\[
-(m+k)\ln(m+k)=(m+k)(\ln m+\ln(1+k/m))=(m+k)\left(\ln m+k/m+O(k/m)^2\right)
-\]
-never vanishes, and it's $\Omega(1)=\omega\left(k^3m^{-2}\log n\right)$ when $k$ is small.
-Any help? Thanks!
-
-REPLY [4 votes]: You're doing the right things, as far as I can tell. Maybe you're not using $n = m \log m + 1/2$ or $n/\log n = m + o(m)$ in the right place? (The latter asymptotic comes from $\log n = \log m + \log(\log m + 1/(2m))$ and doing the division of $n$ by $\log n$. The dominant term is $m$.) At any rate, here's the derivation I get.
-
-We need Stirling's approximation
-$$\log (m+k)! = (m+k) \log(m+k) - (m+k) + \frac{1}{2}\log(m+k) + \frac{1}{2} \log (2\pi) + O\left(\frac{1}{m}\right)$$
-and $$\log(m+k) = \log m + \log\left(1 + \frac{k}{m}\right) = \log m + \frac{k}{m} - \frac{k^2}{2m^2} + O\left(\frac{k^3}{m^3}\right).$$
-We have
-$$\log \frac{(m+k)^n}{(m+k)!} = n \log (m+k) - \log(m+k)!.$$
-Let's take these two terms separately.
-\begin{align*}
-n \log (m+k) &= n \log m + \frac{nk}{m} - \frac{nk^2}{2m^2} + O\left(\frac{nk^3}{m^3}\right) \\
-&= n \log m + k \log m + \frac{k}{2m} - \frac{nk^2}{2m^2} + O\left(\frac{k^3 \log n}{m^2}\right). \tag{1}
-\end{align*}
-And
-\begin{align}
-- \log(m+k)! = &-(m+k)\left(\log m + \frac{k}{m} - \frac{k^2}{2m^2} + O\left(\frac{k^3}{m^3}\right)\right) \\
-& + (m+k) - \frac{1}{2}\left(\log m + \frac{k}{m} + O\left(\frac{k^2}{m^2}\right)\right) - \frac{1}{2} \log (2\pi) + O\left(\frac{1}{m}\right) \\
-=&- (m+k)\log m -k - \frac{k^2}{m} + \frac{k^2}{2m} + m+k - \frac{\log m}{2} - \frac{k}{2m} \\
-&- \frac{1}{2} \log (2\pi) + O\left(\frac{k^3}{m^2}\right) + O\left(\frac{1}{m}\right)\\
-=& - (m+k) \log m + m - \frac{\log(2 \pi m)}{2} - \frac{k^2}{2m} - \frac{k}{2m} + O\left(\frac{k^3}{m^2}\right) + O\left(\frac{1}{m}\right). \tag{2}
-\end{align}
-Adding Eqs. (1) and (2), we get
-\begin{align}
-&\log \frac{(m+k)^n}{(m+k)!} \\
-&= n \log m - m \log m + m - \frac{\log(2 \pi m)}{2} - \frac{(m+n)k^2}{2m^2} + O\left(\frac{k^3 \log n}{m^2}\right) + O\left(\frac{1}{m}\right),
-\end{align}
-which is the expression given by Concrete Mathematics.<|endoftext|>
-TITLE: Inner product space over $\mathbb{R}$
-QUESTION [9 upvotes]: Definition of the problem
-I have to prove the following statement:
-Let $\left(E,\left\langle \cdot,\cdot\right\rangle \right)$ be an
inner product space over $\mathbb{R}$. Prove that for all $x,y\in E$
we have
-$$
-\left(\left\Vert x\right\Vert +\left\Vert y\right\Vert \right)\left|\left\langle x,y\right\rangle \right|\leq\left\Vert x+y\right\Vert \cdot\left\Vert x\right\Vert \left\Vert y\right\Vert .
-$$
-My efforts
-I tried two different ways to prove this, both unsuccessful.
-First:
-First, by squaring the whole inequality:
-$$
-\left(\left\Vert x\right\Vert +\left\Vert y\right\Vert \right)^{2}\left|\left\langle x,y\right\rangle \right|^{2}\leq\left\Vert x+y\right\Vert ^{2}\cdot\left\Vert x\right\Vert ^{2}\left\Vert y\right\Vert ^{2}.
-$$
-We have from Cauchy-Schwarz that
-$$
-\left|\left\langle x,y\right\rangle \right|\leq\left\Vert x\right\Vert \cdot\left\Vert y\right\Vert
-$$
-So we obtain
-$$
-\left(\left\Vert x\right\Vert +\left\Vert y\right\Vert \right)^{2}\left|\left\langle x,y\right\rangle \right|^{2}\leq\left(\left\Vert x\right\Vert +\left\Vert y\right\Vert \right)^{2}\cdot\left\Vert x\right\Vert ^{2}\left\Vert y\right\Vert ^{2}=\left(\left\Vert x\right\Vert ^{2}+\left\Vert y\right\Vert ^{2}+2\left\Vert x\right\Vert \left\Vert y\right\Vert \right)\cdot\left\Vert x\right\Vert ^{2}\left\Vert y\right\Vert ^{2}.
-$$
-By the Pythagorean theorem, we obtain
-$$
-\left(\left\Vert x\right\Vert +\left\Vert y\right\Vert \right)^{2}\left|\left\langle x,y\right\rangle \right|^{2}\leq\left(\left\Vert x+y\right\Vert ^{2}+2\left\Vert x\right\Vert \left\Vert y\right\Vert \right)\cdot\left\Vert x\right\Vert ^{2}\left\Vert y\right\Vert ^{2}.
-$$
-We're almost there, except for a very annoying extra term:
-$$
-\left(\left\Vert x\right\Vert +\left\Vert y\right\Vert \right)^{2}\left|\left\langle x,y\right\rangle \right|^{2}\leq\left\Vert x+y\right\Vert ^{2}\cdot\left\Vert x\right\Vert ^{2}\left\Vert y\right\Vert ^{2}+2\left\Vert x\right\Vert ^{3}\left\Vert y\right\Vert ^{3}.
-$$
-Second
-I then tried to use only the Cauchy-Schwarz inequality, not squared:
-$$
-\left(\left\Vert x\right\Vert +\left\Vert y\right\Vert \right)\left|\left\langle x,y\right\rangle \right|\leq\left(\left\Vert x\right\Vert +\left\Vert y\right\Vert \right)\cdot\left\Vert x\right\Vert \left\Vert y\right\Vert .
-$$
-My question
-Could you give me a hint/idea on how to solve this problem? Which lemma/theorem should I use?
-Thank you
-Franck
-
-REPLY [5 votes]: Then, let me remove this question from the dead list of "unanswered questions" by answering it.
-The statement is false. A counter-example is as follows. Let $E$ be $\mathbb{R}$ itself, and the inner product be the ordinary multiplication of real numbers. Let $x = 1$ and $y = -1$. Then the left hand side is $(||x||+||y||) \cdot | \langle x, y \rangle| = (1 + 1) 1 = 2.$ The right hand side is $||x + y|| \cdot ||x|| \cdot ||y|| = ||0|| \cdot 1 \cdot 1 = 0$.<|endoftext|>
-TITLE: Sum(Partition(Binary String)) = $2^k$
-QUESTION [8 upvotes]: So given any binary string $B$:
-$$b_1 b_2 \dots b_n$$
-$$b_i \in \{0,1\}$$
-It would seem it is always possible to make a partitioning of $B$:
-$$ b_1 b_2 \dots b_{p_1}|b_{p_1 + 1}b_{p_1 + 2}\dots b_{p_2}|\dots|b_{p_m + 1}b_{p_m + 2}\dots b_n$$
-$$= P_0 | P_1 |\dots|P_m$$
-such that when $P_i$ is interpreted as the binary representation of an integer, then:
-$$\exists{k}:\sum_{i=0}^{m}P_i = 2^k$$
-For example, here are the first few:
-1 = 1
-10 = 2
-1|1 = 2
-100 = 4
-1|01 = 2
-11|1 = 4
-1000 = 8
-1|00|1 = 2
-10|10 = 4
-1|0|11 = 4
-...
-
-Additional examples are here: http://pastebin.com/3B2P4asC
-How can we prove this for all binary strings?
-
-REPLY [2 votes]: Look at two and three bit strings starting with 1, and the minimum and maximum ways they can contribute to the sum.
-$$
-\begin{array}{c|c|c}
-T & A(T) & B(T) \\
-\hline
-1 & 1 & 1 \\
-10 & 1 & 2 \\
-11 & 2 & 3 \\
-100 & 1 & 4 \\
-101 & 2 & 5 \\
-110 & 2 & 6 \\
-111 & 3 & 7
-\end{array}
-$$
-For example, we can count 110 as $1+1+0=A(110)$ or as $6=B(110)$ (the only other possibilities are $1+2$ and $3+0$). So for the two-bit strings $B(T)=A(T)+1$, and for the three-bit
-strings $B(T)-A(T)\in\{3,4\}$.
-Now consider a number $k$ and let $w(k)\ge 6$ be the number of 1 bits in
-its binary representation.
-Split $k$ into components like this:
-$$
-(k)_2 = T_1 0^* T_2 0^* T_3 \ldots T_{c+3} 0^* T_{end}
-$$
-where each $T$ starts with 1, each of $T_1,T_2,T_3$ has exactly two
-bits, and each of $T_4,\ldots,T_{c+3}$ has exactly three bits, and $T_{end}$ can be empty or can contain 1, 10 or 11 if $(k)_2$ ends that way without allowing another 3-bit term. In
-other words, gather terms in a greedy fashion starting with the next
-available 1-bit, taking two or three bits as required, skipping
-extra zeros, and assigning the left-over to $T_{end}$. For example
-$$
-(999999)_2 = 11~~11~~0~~10~~000~~100~~0~~111~~111
-$$
-so for $k=999999$ we would get
-$$T_1=T_2=11,T_3=10,T_4=100,T_5=111,T_6=111,c=3,T_{end}=\text{empty}$$
-For each type of term, $A(T)$ counts the 1 bits in $T$, so $\sum A(T_i) =
w(k)$ (including $T_{end}$ in the sum).
-Now we can use our terms as a counter as follows.
-
-Start counting each term with $A(T)$ to sum to $w(k)$.
-To increment, count $T_1$ as $B(T_1)$ to get $w(k)+1$.
-Count $T_2$ as $B(T_2)$ to get $w(k)+2$.
-Count $T_3$ as $B(T_3)$ to get $w(k)+3$.
-Count $T_4$ as $B(T_4)$ and reset to $A(T_i),i=1,2,3$ to get either
-$w(k)+3$ or $w(k)+4$.
-
-Repeat this, counting more 3-bit terms as $B(T)$ to add 3 or 4 at a time (and finally $B(T_{end})$ if applicable),
-counting $T_1,T_2,T_3$ as $A$ or $B$ to get the increments in between.
-By this counting method we find partitions that sum to each number between $w(k)=\sum A(T_i)$ and $\sum B(T_i)$.
-Let $z$ be the number of terms that are $T=111, A(T)=3, B(T)=7$.
-Then
-$$
-\begin{align}
-w(k) = \sum A(T_i) & \le 6+(2c+z)+2 \\
-w(k)-8 & \le 2c+z \le 3c \\
-c & \ge w(k)/3-8/3 \\
-\sum B(T_i) & \ge \sum A(T_i)+3+3c+z \\
-& \ge w(k)+3+w(k)/3-8/3+w(k)-8 \\
-& = 7w(k)/3-23/3
-\end{align}
-$$
-So for any $k$ we can find a partition that sums to each value in $[w(k),\max(w(k)+3,7w(k)/3-23/3)]$. For all $w(k)\ge 6, w(k)\ne 9, w(k)\ne 10$ this interval contains a power of 2.
-We handle the remaining possible values for $w(k)$ as special cases.
-For $w(k)=10$ we can just use a refinement of the bound above. In this case $c\ge 1$, so $\sum B(T_i) \ge \sum A(T_i)+6$, so 16 will be included.
-$w(k)=1,2,4$ are immediate, just add the bits. For $w(k)=3$ we can always partition the number into $10,1,1$ or $11,1$ and possibly some zeros. For $w(k)=5$ if all 1 bits are adjacent, we can write it as $1111,1$, otherwise we can write it as $101,1,1,1$ or $100,1,1,1,1$.
-For $w(k)=9$ if all bits are adjacent, then we can take $11111111,1$. Otherwise there is either a 100 or 101 which we take as $T_1$. From the remaining bits make a three-bit term $T_2$ and a two-bit term $T_3$. Then either $B(T_1)+B(T_2)$ or $B(T_1)+B(T_2)+B(T_3)$ plus the remaining bits makes 16.
-This answers the original question.
-
-As an addendum, we can extend this technique by taking longer terms, e.g. for 4-bit terms $A(T)+7 \le B(T) \le A(T)+11$, so we can take 11 two-bit terms and $c$ 4-bit terms and count from $w(k)$ to at least $w(k)+11+7c$.
-For 5-bit terms $A(T)+15 \le B(T) \le A(T)+26$ and with 26 two-bit terms we can count from $w(k)$ to $w(k)+26+15\lfloor (w-26)/5\rfloor$. For $w(k)$ large enough this upper bound is larger than $3w(k)$, so the interval will contain a power of 3.
-Carrying it further, taking $l$-bit terms for larger $l$ we can extend this interval to any multiple of $w(k)$.
Thus, for any choice of $q>1$ there is a bound $W(q)$ so that any number $k$ with large enough Hamming weight $w(k)>W(q)$ can be partitioned into $P_i$ with $\sum P_i = q^m$ for some $m$.<|endoftext|>
-TITLE: Complement of the diagonal in product of schemes
-QUESTION [6 upvotes]: Let $S$ be a noetherian scheme and $X \rightarrow S$ be an affine morphism of schemes.
-Consider the diagonal morphism $\Delta: X \rightarrow X \times_S X$. If $\Delta (X)$ is a closed subset of $X \times_S X$, then one can look at the open embedding
-$j: U \rightarrow X \times_S X$
-of the open complement of $\Delta(X)$.
-Does $j$ have a chance of being itself an affine morphism of schemes? Or what additional hypotheses would one need to get this property?
-
-REPLY [4 votes]: First notice that $U\to S$ is affine if and only if $j: U\to X\times_S X$ is affine (to check this, one can suppose $S$ is affine, then use the facts that $X\times_S X$ is affine and $U$ is open in $X\times_S X$).
-So we are reduced to seeing whether $U$ is an affine scheme when $S$, and hence $X$, are affine. It is well known that then the complement $\Delta(X)$ of $U$ in $X\times_S X$ is purely 1-codimensional. This is true essentially only when $X\to S$ has relative dimension $1$ (for reasonable schemes). In particular, if $X$ is any algebraic variety of dimension $d>1$, then $U$ can't be affine.
-
-Claim: Let $C$ be a smooth projective geometrically connected curve of genus $g$ over a field $k$, let $X\subset C$ be the complement of $r$ points with $r+1-g >0$. Then $U$ is affine.
-
-Proof. We can suppose $k$ is algebraically closed (the affineness can be checked over any field extension). Let $D$ be the divisor
-$D=C\setminus X$ and $$H=D\times C+C\times D + \Delta(C).$$
-Then $U=C\times C\setminus H$. It is enough to show that $H$ is an ample divisor on the smooth projective surface $C\times C$. To do this, we will use the Nakai-Moishezon criterion (see Hartshorne, Theorem V.1.10).
-It is easy to see that $(D\times C)^2=0$ (because $D \sim D'$ with the support of $D'$ disjoint from that of $D$), $(D\times C).(C\times D)=r^2$, -$(D\times C).\Delta(C)=r$, and
-$$\Delta(C)^2=\deg O_{C\times C}(\Delta(C))|_{\Delta(C)}=\deg \Omega_{C/k}^{-1}=2-2g.$$
-Thus $H^2=2(r^2+2r+1-g)>0$.
-Let $\Gamma$ be an irreducible curve on $C\times C$. If $\Gamma\ne \Delta(C)$, it is easy to see that $H.\Gamma>0$. On the other hand, $H.\Delta(C)=2r+2-2g=2(r+1-g)>0$ and we are done.
-I didn't check the details because it is time to sleep, but I believe the proof is essentially correct.
-EDIT
-(1). In the above claim, we can remove the condition $r+1-g>0$:
-
-Let $C$ be a smooth projective connected curve over a field $k$, let $X$ be an affine open subset of $C$. Then $U$ is affine.
-
-The proof is the same as above, but consider $H=n (D\times C+ C\times D)+\Delta(C)$ for $n > g-1$. The same proof shows that $H$ is ample.
-(2). Let $S$ be noetherian, and let $C\to S$ be a smooth projective morphism with one dimensional connected fibers, let $D$ be a closed subset of $C$ finite and surjective over $S$. Let $X=C\setminus D$. Then $U$ is affine.
-Proof. One can see that $D$ is a relative Cartier divisor on $C$ (because $D_s$ is a Cartier divisor for all $s\in S$). So the $H$ defined as above is a relative Cartier divisor on $C\times_S C$. We showed that $H_s$ is ample for all $s$. This implies that $H$ is relatively ample for $C\times_S C\to S$ (EGA III.4.7.1).
So $U$ is affine.<|endoftext|>
-TITLE: Are the eigenvectors of a real symmetric matrix always an orthonormal basis without change?
-QUESTION [6 upvotes]: I was reading the Wikipedia page for symmetric matrices, and I noticed this part:
-a real n×n matrix A is symmetric if and only if there is an orthonormal basis of $\mathbb{R}^n$ consisting of eigenvectors for A
-Does this mean the eigenvectors of a symmetric matrix with real values always form an orthonormal basis, meaning that without changing them at all, they're always orthogonal and always have a norm of 1?
-Or does it mean that based on the eigenvectors, we can manipulate them (e.g. divide them by their current norm) and turn them into vectors with a norm of 1?
-
-REPLY [12 votes]: There is no such thing as "the" eigenvectors of a matrix. That's why the statement in Wikipedia says "there is" an orthonormal basis...
-What is uniquely determined are the eigenspaces. But you can make different choices of eigenvectors from the eigenspaces and make them orthogonal or not (and of course you can go in and out of "orthonormal" by multiplying by scalars). In the special case where all the eigenvalues are different (i.e. all multiplicities are $1$) then any set of eigenvectors corresponding to different eigenvalues will be orthogonal.
-As a side note, there is a small language issue that appears often. This is that matrices have eigenvalues, but to talk about eigenvectors you are seeing your matrix as a linear operator on a vector space (and that's of course where the notion of eigenvalue comes from).
-To see a concrete example, consider the matrix
-$$
-\begin{bmatrix}1&0&0\\ 0&0&0\\ 0&0&0\end{bmatrix}
-$$
-The orthonormal basis the Wikipedia article is talking about is $\begin{bmatrix}1\\0\\0\end{bmatrix}$,
-$\begin{bmatrix}0\\1\\0\end{bmatrix}$,
-$\begin{bmatrix}0\\0\\1\end{bmatrix}$.
-But as the multiplicity of zero as eigenvalue is $2$, we can choose a different basis for its eigenspace, and then $\begin{bmatrix}1\\0\\0\end{bmatrix}$,
-$\begin{bmatrix}0\\1\\1\end{bmatrix}$,
-$\begin{bmatrix}0\\2\\1\end{bmatrix}$ is another (not orthogonal) basis of eigenvectors.
-Finally, if you don't want a basis, you can have an infinity of eigenvectors: for instance all vectors of the form $\begin{bmatrix}t\\0\\0\end{bmatrix}$, for any $t$, are eigenvectors. And all vectors $\begin{bmatrix}0\\t\\s\end{bmatrix}$, for any $t$ and $s$, are eigenvectors.
-
-REPLY [8 votes]: There are two different things that might go wrong:
-1) Eigenvectors can always be scaled. So if $v$ is an eigenvector then so is $av$ for $a\in k^\ast$. So the length-one part is not automatic but can be forced easily - as you said yourself, by dividing each eigenvector by its length.
-2) More importantly, linearly independent eigenvectors to the same eigenvalue do not need to be orthogonal. What is true however is that two eigenvectors to different eigenvalues of a symmetric matrix are orthogonal. So if each eigenvalue has multiplicity one, a basis of eigenvectors is automatically orthogonal (and can be made orthonormal as above).
-In general we need to find an orthogonal basis of each eigenspace first, e.g. by Gram-Schmidt.
-Edit: Part two is illustrated in @Martin's answer. The eigenvectors to the eigenvalue $1$ are always orthogonal to the eigenvectors to the eigenvalue $0$. However we can choose multifarious non-orthogonal bases of the eigenspace to $0$.
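-To see both points in code, here is a quick numerical illustration (an added sketch, assuming numpy; `eigh` is numpy's symmetric eigensolver, which returns one particular orthonormal choice of eigenvectors):
-
-    import numpy as np
-
-    A = np.diag([1.0, 0.0, 0.0])     # the example matrix from the answer above
-
-    vals, V = np.linalg.eigh(A)      # columns of V: one orthonormal eigenbasis
-    print(np.allclose(V.T @ V, np.eye(3)))                    # True
-
-    # columns: the non-orthogonal eigenbasis (1,0,0), (0,1,1), (0,2,1)
-    W = np.array([[1.0, 0.0, 0.0],
-                  [0.0, 1.0, 2.0],
-                  [0.0, 1.0, 1.0]])
-    print(np.allclose(A @ W, W * np.array([1.0, 0.0, 0.0])))  # True: eigenvectors
-    print(np.allclose(W.T @ W, np.eye(3)))                    # False: not orthonormal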
<|endoftext|>
-TITLE: Known proofs of Wirtinger's Inequality?
-QUESTION [9 upvotes]: I am looking for proofs of the (Poincaré-)Wirtinger inequality which states that if $f:[0,\pi]\to \mathbb{C}$ is $C^1$ and $f(0)=f(\pi)=0$ then
-\begin{equation}
-\int_0^\pi |f(t)|^2 dt \leq \int_0^\pi |f'(t)|^2 dt.
-\end{equation}
-See link.
-The proof that I know starts by proving that if
-$$ \int_0^{2\pi} F(t) dt =0 $$
-then
-$$
-\int_0^{2\pi} |F(t)|^2 dt \leq \int_0^{2\pi} |F'(t)|^2 dt.
-$$
-using Parseval's identity. From this, one proves the desired inequality for $f$ on $[0,\pi]$ by "extending" $f$ to an odd $C^1$ function on $[-\pi,\pi]$.
-Are there other proofs? (Straightforward or otherwise...)
-
-REPLY [6 votes]: If you are willing to get a non-sharp constant, here's another proof found in many differential geometry texts. Without loss of generality assume $f \geq 0$. (Replacing $f$ by $|f|$ doesn't change the integrals on either side, if $f$ is assumed to be $C^1$.)
-Let $2M = \sup f$, and let $t_0 \in (0,\pi)$ attain this maximum.
-Let $X(t) = f(t) - M$ and $Y(t) = \sqrt{M^2 - X(t)^2}$ if $t \leq t_0$ and $-\sqrt{M^2 - X(t)^2}$ if $t \geq t_0$.
-We have that $(X(t),Y(t))$ lies on the circle of radius $M$, and goes around the circle exactly once as $t$ goes from $0$ to $\pi$. We thus can use a well-known formula to conclude that
-$$
-\int_0^\pi Y(t) X'(t) \mathrm{d}t = \text{Area of disk} = \pi M^2 $$
-By the Schwarz inequality, however, we have
-$$ \int_0^\pi Y(t) X'(t) \mathrm{d}t \leq \sqrt{ \int_0^\pi Y^2\mathrm{d}t \int_0^\pi X'^2\mathrm{d}t} = \sqrt{ \left(\pi M^2 - \int_0^\pi X^2\mathrm{d}t \right) \int_0^\pi X'(t)^2\mathrm{d}t }$$
-Squaring we get
-$$ \pi^2 M^4 \leq \left(\pi M^2 - \int_0^\pi X^2 \mathrm{d}t\right) \int_0^\pi f'^2\mathrm{d}t $$
-Now, notice that
-$$ \int_0^\pi f^2 ~\mathrm{d}t = \int_0^\pi (X + M)^2 ~\mathrm{d}t = \pi M^2 + \int_0^\pi X^2 ~\mathrm{d}t + 2M \int_0^{\pi} X ~\mathrm{d}t \leq \pi M^2 (1+A)^2 $$
-(the last step uses Cauchy-Schwarz to bound $2M\int_0^\pi X\,\mathrm{d}t \leq 2\pi M^2 A$), where
-$$ A^2 := \left[ \frac{1}{\pi M^2} \int_0^\pi X^2 ~\mathrm{d}t \right] < 1. $$
-This implies
-$$ \int_0^\pi f^2 ~\mathrm{d}t \leq (1 + A)^2(1-A^2) \int_0^\pi |f'|^2 ~\mathrm{d}t$$
-The coefficient has a maximum at $A = 1/2$, giving
-$$ \int_0^\pi f^2 ~\mathrm{d}t \leq \frac{27}{16} \int_0^\pi |f'|^2~\mathrm{d}t $$
-
-If $\int_0^\pi X ~\mathrm{d}t = 0$, we can sharpen the coefficient to $(1 + A^2)(1-A^2) = 1 - A^4 \leq 1$. This can be achieved by extending $f$ to a function $g$ on $(-\pi,\pi)$ with an odd extension, exactly as you have described for the Fourier proof.<|endoftext|>
-TITLE: Show that $\sum\nolimits_{d|n} \frac{1}{d} = \frac{\sigma (n)}{n}$ for every positive integer $n$.
-QUESTION [6 upvotes]: Show that $\sum\nolimits_{d|n} \frac{1}{d} = \frac{\sigma (n)}{n}$ for every positive integer $n$.
-
-where $\sigma (n)$ is the sum of all the divisors of $n$
-and $\sum\nolimits_{d|n} f(d)$ is the summation of $f$ at each $d$ where $d$ is a divisor of $n$.
-I have written $n=p_1^{\alpha_1}p_2^{\alpha_2}p_3^{\alpha_3}\cdots p_k^{\alpha_k}$ and then:
-$$\begin {align*} \sum\nolimits_{d|n} \frac{1}{d}&=\frac{d_2d_3\cdots d_m+d_1d_3\cdots d_m+\cdots+d_1d_2d_3\cdots d_{m-1}}{d_1d_2d_3\cdots d_m} \\&=\frac{d_2d_3\cdots d_m+d_1d_3\cdots d_m+\cdots+d_1d_2d_3\cdots d_{m-1}}{p_1^{1+2+\cdots+\alpha_1}p_2^{1+2+\cdots+\alpha_2}p_3^{1+2+\cdots+\alpha_3}\cdots p_k^{1+2+\cdots+\alpha_k}} \\ \end{align*}$$
-where $d_i$ is some divisor among the $m$ divisors.
-Then I cannot comprehend the numerator so as to get the desired result.
-Also, please suggest some other approaches to this question.
-
-REPLY [4 votes]: \begin{equation*}
-\begin{split}
-\sigma (n) &=\sum_{d|n}d (\text{ i.e. sum of all divisors of $n$})\\
- &=\sum_{d|n}\left(\frac{n}{d}\right) (\text{ since $d\mapsto n/d$ permutes the divisors of $n$})\\
-\therefore \sigma(n)&= n\sum_{d|n}\frac{1}{d}\\
-\Rightarrow \sum_{d|n}\frac{1}{d}=\frac{\sigma(n)}{n}.
-\end{split}
-\end{equation*}
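-A quick numeric sanity check of this identity (an added sketch; the divisor helper is written inline, and exact rational arithmetic avoids rounding issues):
-
-    from fractions import Fraction
-
-    def divisors(n):
-        return [d for d in range(1, n + 1) if n % d == 0]
-
-    for n in (12, 28, 97, 360):
-        lhs = sum(Fraction(1, d) for d in divisors(n))
-        rhs = Fraction(sum(divisors(n)), n)     # sigma(n)/n
-        print(n, lhs == rhs)                    # True for every n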
<|endoftext|>
-TITLE: A "prime-mapping" polynomial
-QUESTION [7 upvotes]: Suppose that $f$ is a polynomial with integer coefficients with the property that for any prime $p$, $f(p)$ is a prime. Is there any such polynomial $f$ other than $f(x)=x$ of course?
-My approach was that if the leading coefficient $a_{0}$ of $f$ is $0$, then $f(p)=p$ for any prime $p$, so $f(x)-x$ has infinitely many roots $\implies f(x)=x$. If $\deg{f}=n$ and if $a_{0}$ has $\gt n$ prime factors, then also, the same argument works - but I couldn't complete my argument.
-Any help will be greatly appreciated!
-
-REPLY [13 votes]: The constant functions $f(x)=p$, where $p$ is prime, have the desired property, as does the function $f(x)=x$. To show there are no others, suppose that $f$ has positive degree. Suppose also that for some prime $p$, we have $f(p)=q$, where $q$ is a prime different from $p$. (If $f(p)=p$ for all primes $p$, then $f(x)=x$, since a non-zero polynomial of degree $m$ cannot have more than $m$ zeros.)
-Then $f(p+nq)\equiv 0 \pmod{q}$ for every integer $n$. But by Dirichlet's Theorem, there are infinitely many primes in the arithmetic progression $p, p+q, p+2q, \dots$. If $p^\ast$ is a large enough such prime, then $f(p^\ast)$ is larger than $q$, but divisible by $q$, and therefore not prime.<|endoftext|>
-TITLE: Bounded operator that does not attain its norm
-QUESTION [7 upvotes]: What is a bounded operator on a Hilbert space that does not attain its norm? An example in $L^2$ or $l^2$ would be preferred.
-All of the simple examples I have looked at (the identity operator, the shift operator) attain their respective norms.
-
-REPLY [3 votes]: This is similar to other answers but a bit more general. Consider a sequence $c=\{c_n\}\in\ell^\infty(\mathbb{Z})$ such that $|c_n|<\sup_k|c_k|$ for every $n$; e.g. $c_n$ may be positive and growing to $1$.
-Define $T:\ell^2(\mathbb{Z})\to \ell^2(\mathbb{Z})$ by
-$$T:x=\{x_n\}\mapsto Tx = cx =\{c_nx_n\}.$$
-Then, for any nonzero $x=\{x_n\}\in\ell^2$, we have
-$$\|Tx\|^2 = \sum_{n\in\mathbb{Z}} |c_nx_n|^2<\sum_{n\in\mathbb{Z}} \sup|c_n|^2|x_n|^2= \sup|c_n|^2\|x\|^2\tag{1}$$
-that is
-$$\|T\| \leq\sup|c_n|.$$
-On the other hand the choice $x=e_n=\{\delta_{kn}\}_{k\in\mathbb{Z}}$, where $\delta_{kn}$ is the Kronecker-$\delta$ (which is 0 whenever $k\ne n$ and 1 if $k=n$) shows that
-$$\sup_{\|x\|\leq1}\|Tx\|\geq\sup_{n}\|Te_n\|=\sup_n|c_n|$$
-and hence
-$$\|T\| =\sup|c_n|.\tag{2}$$
-Now, (1) and (2) show that the norm is never attained.<|endoftext|>
-TITLE: Measurable function on the interval $[0,1]$
-QUESTION [8 upvotes]: Assume that $f$ is a measurable function on the interval $[0,1]$ such that $0
-TITLE: Is $\mathbb{Q}[\sqrt2]$ = $\mathbb{Q}[\sqrt2+1]$?
-QUESTION [6 upvotes]: Is $\mathbb{Q}[\sqrt2]$ = $\mathbb{Q}[\sqrt2+1]$?
-I think so because
-$$\mathbb{Q}[\sqrt{2}+1] = \{\sum_{i=0}^{n}c_i(\sqrt{2}+1)^i\mid n\in\mathbb{N}, c_i\in\mathbb{Q}\}$$
-$$= \{\sum_{i=0}^{n}c_i(\sqrt{2})^i\mid n\in\mathbb{N}, c_i\in\mathbb{Q}\} = \mathbb{Q}[\sqrt{2}].$$
-This could be worked out with the Binomial theorem, right?
-
-REPLY [7 votes]: A basis for $\mathbb{Q}[\sqrt2]$ is $\mathcal{A}=\{1,\sqrt2\}$.
-A basis for $\mathbb{Q}[\sqrt2+1]$ is $\mathcal{B}=\{1,\sqrt2+1\}$.
-You can write
-$$
-\mathcal{B}= \begin{pmatrix} 1 & 0 \\ 1 & 1 \end{pmatrix} \mathcal{A}
-$$
-Since this matrix is invertible, we have $\langle \mathcal{A} \rangle = \langle \mathcal{B} \rangle$.
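-For a concrete cross-check (an added sketch using sympy's minimal_polynomial; note $\sqrt2 = (\sqrt2+1)-1$, so each generator is a polynomial in the other), both elements generate a degree-$2$ extension:
-
-    from sympy import minimal_polynomial, sqrt, Symbol
-
-    x = Symbol('x')
-    print(minimal_polynomial(sqrt(2), x))      # x**2 - 2
-    print(minimal_polynomial(sqrt(2) + 1, x))  # x**2 - 2*x - 1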
<|endoftext|>
-TITLE: Evaluate the sum: $\sum\limits_{n=0}^{\infty} \frac1{F_{(2^n)}}$
-QUESTION [10 upvotes]: Evaluate the sum:
-$$\sum_{n=0}^{\infty} \frac{1}{F_{(2^n)}}$$
-where $F_{m}$ is the $m$-th term of the Fibonacci sequence. I need some support here. Thanks.
-
-REPLY [6 votes]: As Wikipedia notes, the result follows from the identity
-$$
-\sum\limits_{n=0}^N\frac{1}{F_{2^n}}=3-\frac{F_{2^N-1}}{F_{2^N}}
-$$
-You can try to prove it by induction.<|endoftext|>
-TITLE: Multi variable integral : $\int_0^1 \int_\sqrt{y}^1 \sqrt{x^3+1} \, dx \, dy$
-QUESTION [8 upvotes]: $$\int_0^1 \int_\sqrt{y}^1 \sqrt{x^3+1} \, dx \, dy$$
-Here is my problem from my workbook. If I solve it directly - first finding the integral in $x$, then solving in $y$ - the inner integral $\int_\sqrt{y}^1 \sqrt{x^3+1} \, dx$ is very complicated. I have used Maple, but the result is still so long and complicated that I cannot use it to integrate in $y$.
-Thanks :)
-
-REPLY [12 votes]: The way you have set up the integration, you integrate along $x$ first for a fixed $y$. The figure below indicates how you would go about with the integration. You integrate over the horizontal red strip first and then move the horizontal strip from $y=0$ to $y=1$.
-
-Now for the ease of integration, change the order of integration and integrate along $y$ first for a fixed $x$. The figure indicates how you would go about with the integration. You integrate over the vertical red strip first and then move the vertical strip from $x=0$ to $x=1$.
-
-Hence, if you swap the integrals the limits become $y$ going from $0$ to $x^2$ and $x$ going from $0$ to $1$.
-$$I = \int_0^1 \int_\sqrt{y}^1 \sqrt{x^3+1} dx dy = \int_0^1 \int_0^{x^2} \sqrt{x^3+1} dy dx = \int_0^1 x^2 \sqrt{x^3+1} dx$$
-Now call $x^3+1 = t^2$. Then we have that $3x^2 dx = 2t dt \implies x^2 dx = \dfrac{2}{3}tdt$. As $x$ varies from $0$ to $1$, we have that $t$ varies from $1$ to $\sqrt{2}$.
-Hence, we get that
-$$I = \int_1^\sqrt{2}\dfrac23 t \cdot t dt = \dfrac23 \int_1^\sqrt{2}t^2 dt = \dfrac23 \times \left. \dfrac{t^3}3\right \vert_{t=1}^{t=\sqrt{2}} = \dfrac29 \left( (\sqrt{2})^3 - 1^3\right) = \dfrac29 (2 \sqrt{2} - 1).$$
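-A numeric cross-check of this computation (an added sketch; note that scipy's dblquad passes the inner variable as the first argument of the integrand):
-
-    import numpy as np
-    from scipy import integrate
-
-    # outer variable y in [0, 1]; inner variable x in [sqrt(y), 1]
-    val, err = integrate.dblquad(lambda x, y: np.sqrt(x**3 + 1),
-                                 0, 1,
-                                 lambda y: np.sqrt(y),
-                                 lambda y: 1.0)
-    print(val, (2/9) * (2*np.sqrt(2) - 1))   # both are approximately 0.40632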
<|endoftext|>
-TITLE: $\operatorname{arsinh}$ vs $\operatorname{arcsinh}$
-QUESTION [8 upvotes]: I note that some people like to write the inverse hyperbolic functions not with the prefix "arc" (like regular inverse trigonometric functions), but rather "ar". This is because the prefix "arc" (for arcus) is misleading: unlike regular trigonometric functions, they are not used to find lengths (inverse trigonometric functions can be used to find arc lengths of curves like $x^2+y^2=1$) but rather to find the area of a sector of the unit hyperbola.
-Which version should be preferred? $\operatorname{arsinh}$ initially confused me as I did not know why "ar" was used. However, some seem to prefer this notation as it is a more accurate description of the function.
-
-REPLY [3 votes]: 
-
-Which version should be preferred?
-
-This is easy: the version you prefer. One of the most important lessons in math is to learn to become confident in your own use of the language. Math requires coming up with new symbolism all the time. Functions and variables need names, properties need names, operations need symbols; every new object you work with, you need to be confident enough to own it as your own and look at it however you want to. Start with knowing that you can work with whatever form makes most sense to you.
-When you want to communicate a result, use some common term and don't purposely be obscure. But even then, it is extremely common to see that a paper summary uses a full, common term that internally becomes something specific to the author and their own preference. There is usually an internal logic to such choices (and if there is, so much the better). Which gives another reason why it's important to be flexible with one's choice of symbols: it will help you read other people's papers.<|endoftext|>
-TITLE: Cartan Theorem.
-QUESTION [9 upvotes]: Cartan Theorem: Let $M$ be a compact Riemannian manifold. Let $\pi_1(M)$ be the set of all free homotopy classes of loops in $M.$ Then each nontrivial class contains a closed geodesic (i.e., a closed curve which is geodesic at all of its points).
-My question: Why free classes? Why does the theorem not apply if we exchange free classes for classes with a fixed base point?
-
-REPLY [8 votes]: Consider the Klein bottle $K$ with a flat metric. I'm thinking of $K$ as a square where the left and right sides are identified in the "right" way (like on the torus), while the top and bottom are identified in the "wrong" way (like on $\mathbb{R}P^2$). In this picture, geodesics are straight lines that wrap around depending on the identifications on the edges.
-Take your basepoint $p$ to be the center of the square. Consider the geodesic $\gamma$ emanating from $p$ with slope 1. So, it starts in the middle of the square moving towards the top right corner. After it gets to the top right corner, due to the identifications we're making, it becomes a straight line emanating from the bottom right corner with slope -1 until it eventually hits $p$ again, i.e., it closes up. However, it is not a closed geodesic because it makes a corner at $p$.
-Further, I claim no other geodesic emanating from $p$ is in the same homotopy class as $\gamma$. To see this, work in the universal cover, $\mathbb{R}^2$ (thought of as being tiled by squares with identification arrows as appropriate, with corners on integer lattice points). Geodesics are still straight lines, but now there is a unique straight line from $(\frac{1}{2},\frac{1}{2})$ to $(\frac{3}{2},\frac{3}{2})$, given by lifting $\gamma$.
-Thus, we have an injection
-$$
-\Phi: \mathrm{Alt}^k(V) \longrightarrow \wedge^k(V)^*
-$$
-What I'm not sure about is how to argue surjectivity; what is the best way to approach this?
-
-REPLY [6 votes]: The canonical map $\psi\colon V^k \to \bigwedge^k V$ given by $\psi(v_1, \ldots, v_k) = v_1 \wedge \cdots \wedge v_k$ is alternating. If you have an element $g \in (\bigwedge^k V)^*$ then $g \circ \psi\colon V^k \to \mathbf R$ is also alternating, and I believe that this assignment $g \mapsto g \circ \psi$ gives you an inverse to $\Phi$.
-This is simpler than you might fear, and has little to do with the ground ring or the finiteness of $V$. And maybe we should expect that, since the universal property of $\bigwedge^k V$ more or less says that it represents the functor $W \mapsto L^k_a(V, W)$.
-Extra trouble and hypotheses seem to enter once you try to write things such as an isomorphism $(\bigwedge^k V)^* \approx \bigwedge^k (V^*)$, an embedding $\bigwedge^k V \hookrightarrow V^{\otimes k}$, or the wedge product induced on alternating forms. Some examples of this are discussed in the back of Fulton and Harris.<|endoftext|>
-TITLE: Vitali set of outer-measure exactly $1$.
-QUESTION [23 upvotes]: I know that for any $\varepsilon\in (0,1]$ we can find a non-measurable subset (w.r.t Lebesgue measure) of $[0,1]$ so that its outer-measure equals exactly $\varepsilon$. It is done basically with the traditional Vitali construction inside the interval $[0,\varepsilon]$ and noticing that such a set carries zero inner-mass, and thus its complement in $[0,\varepsilon]$ (being non-measurable as well) must carry the full outer-mass of $[0,\varepsilon]$.
-However, this resulting non-measurable set is a complement of the traditional Vitali constructed set. My question asks if the Vitali construction itself can yield a non-measurable set with outer-measure of exactly $1$ (or any beforehand decided number from $(0,1]$). Some modifications can be done inside the construction of course, but in particular I would like to stay away from taking complements. Maybe someone knows how this could be done?
-Any references and input are appreciated. Thanks in advance.
-
-REPLY [10 votes]: $\newcommand{\c}{\mathfrak{c}}$
-Let $\c$ denote the cardinal of the continuum and wellorder the Borel subsets of $[0,1]$ as $(B_\alpha)_{\alpha < \c}$. We build by transfinite recursion a sequence $(x_\alpha)_{\alpha < \c}$ of elements of $[0,1]$ such that:
-(a) $x_\alpha$ is Vitali inequivalent to $x_\beta$ for all $\beta < \alpha$, and
-(b) $x_\alpha \in [0,1] \setminus B_\alpha$ if $[0,1] \setminus B_\alpha$ is uncountable.
-Note that this process can't get stuck, since if the complement of $B_\alpha$ is uncountable then it has cardinality $\c$, and thus it meets an unused Vitali equivalence class (since at most $|\alpha| < \c$ classes have been used so far). Then by setting $X = \{x_\alpha : \alpha < \c\}$ we obtain a set such that whenever $B$ is a Borel set with $X \subseteq B$, then $B$ has countable complement (and in particular has measure $1$). So $X$ has outer measure $1$ as desired.<|endoftext|>
-TITLE: Hausdorff, intersection of all closed sets
-QUESTION [5 upvotes]: Can you please help me with this question?
-
-Let $X$ be a topological space.
- Show that the following two conditions are equivalent:
-
-$X$ is Hausdorff
-for all $x\in X$, the intersection of all closed sets containing a neighborhood of $x$ is $\{x\}$.
-
-Thanks a lot!
-
-REPLY [3 votes]: Recall the definition of Hausdorff:
-
-$X$ is a Hausdorff space if for every two distinct $x,y\in X$ there are disjoint open sets $U,V$ such that $x\in U$ and $y\in V$.
-
-Suppose that $X$ is Hausdorff and $x\in X$. Suppose $y\neq x$. We have $U,V$ as in the definition. So $x\in U$ and $y\in V$ and $U\cap V=\varnothing$. Suppose that $F$ is a closed set containing an open neighborhood of $x$; intersecting this open set with $U$ yields an open neighborhood of $x$ which is a subset of $F$, so without loss of generality $U\subseteq F$. Now $F'=F\cap(X\setminus V)$ is closed, contains the open neighborhood $U$ of $x$, and does not contain $y$. Therefore when intersecting all closed subsets which contain an open neighborhood of $x$ we remove every other $y$, so the result is $\{x\}$.
-
-On the other hand, suppose that for every $x\in X$ this intersection is $\{x\}$; by a similar process as above one deduces that $X$ is Hausdorff as in the definition above.<|endoftext|>
-TITLE: In probability, how can a sigma-algebra represent the total information?
-QUESTION [5 upvotes]: Why does a sigma-algebra represent the information available at a given time?
-I understand the idea of filtration and stopping time, given that each sigma-algebra represents the information we have at a specific time, but why is that?
-For instance in a game of dice rolls (or any game you want), what would be the total universe and the available information, in the form of a sigma-algebra, at the $n$-th turn?
-Thanks
-
-REPLY [2 votes]: The Doob-Dynkin lemma relates them in an intuitive way for most of the standard applications in probability theory.
-Suppose you have a probability space $(\Omega,\Sigma,\mu)$ and two random variables $f:\Omega\to\mathbb{R}$ and $g:\Omega\to\mathbb{R}$. That $g$ only depends on $f$ can be interpreted as saying that you know the value of $g$ whenever you know the value of $f$. This means that you can find a function $h:\mathbb{R}\to\mathbb{R}$ such that $g(\omega)=h(f(\omega))$. In other words, $g=h\circ f$. Now the Doob-Dynkin lemma says that the following are equivalent:
-
-There is a measurable function $h:\mathbb{R}\to\mathbb{R}$ such that $g=h\circ f$.
-The random variable $g$ is measurable with respect to the $\sigma$-algebra generated by $f$, that is the $\sigma$-algebra $\{f^{-1}(B):B\textrm{ is a Borel set}\}$.
-
-Most naturally occurring $\sigma$-algebras are of the form $\{f^{-1}(B):B\textrm{ is a Borel set}\}$ for some random variable $f$. This is equivalent to the $\sigma$-algebra being countably generated.<|endoftext|>
-TITLE: Subgroup of $\pi_1(\mathbb T^2 \sharp \mathbb T^2 )$ with index $2$
-QUESTION [5 upvotes]: We know $\pi_1(\mathbb {T^2} \sharp \mathbb T^2)=\langle\alpha_1,\beta_1,\alpha_2,\beta_2|\alpha_1 \beta_1 \alpha_1^{-1} \beta_1^{-1}\alpha_2\beta_2\alpha_2^{-1}\beta_2^{-1}=1\rangle$.
-My question is: how to find a subgroup of it with index $2$?
-I think we need to find a subgroup $H$ of $\pi_1(\mathbb {T^2} \sharp \mathbb T^2)$ which has two generators, and $aH \cup bH=\pi_1(\mathbb {T^2} \sharp \mathbb T^2)$ for $a,b\in \pi_1(\mathbb {T^2} \sharp \mathbb T^2)$.
-But how do we deal with the defining relation? Thank you!
-
-REPLY [2 votes]: Let $G=\pi_1(\mathbb{T}^2\sharp\mathbb{T}^2)$. Then $G^{\rm ab}\cong\mathbb{Z}\times\mathbb{Z}\times\mathbb{Z}\times\mathbb{Z}$, with generators being the images of $\alpha_1$, $\beta_1$, $\alpha_2$, and $\beta_2$.
Any subgroup of index two must contain the commutator subgroup of $G$, and so it just corresponds to a map from $G^{\rm ab}$ onto $C_2$. Maps from $G^{\rm ab}$ to $C_2$ are in bijection with elements of $C_2^4$, so there are exactly $15$ subgroups of index $2$, corresponding to how you map the generators.
-For example, the map given by mapping $\alpha_1$ to the generator of $C_2$ and all other generators to the identity gives the subgroup of index $2$ that is the normal closure of the subgroup generated by $\alpha_1^2$, $\alpha_2$, $\beta_1$, and $\beta_2$.<|endoftext|>
-TITLE: Why is it undecidable whether two finite-state transducers are equivalent?
-QUESTION [10 upvotes]: According to the Wikipedia page on finite-state transducers, it is undecidable whether two finite-state transducers are equivalent. I find this result striking, since it is decidable whether two finite-state automata are equivalent to one another.
-Unfortunately, Wikipedia doesn't provide any citations that provide a justification for this result. Does anyone know of a proof of this result? Alternatively, if the article is incorrect, does someone know an algorithm that could be used to show equivalence?
-Thanks!
-
-REPLY [6 votes]: To complement Brian's answer, you might want to look at The equivalence problem of multitape finite automata, where the authors give an algorithm for deciding whether two deterministic transducers are equivalent. The key is that it is decidable whether two automata are equivalent with regard to multiplicities (i.e. how many times a word is accepted).
-This is very closely related to probabilistic automata, for which an equivalence algorithm was given in A polynomial-time algorithm for the equivalence of probabilistic automata. In this setting the question is whether two automata accept each word with the same probability.
-You can generalize that to some weighted automata, without losing decidability (see here for nice slides with examples). Thus, you could transform deterministic (multitape) automata into weighted versions such that a word is accepted if and only if it has (in some semiring) weight 1 (one), and afterwards just use the previous algorithm. However, please note that this is a rather fragile result: even adding $\varepsilon$-edges would make this undecidable.
-Hope this helps ;-)<|endoftext|>
-TITLE: Tips and examples for a poster presentation in pure mathematics
-QUESTION [12 upvotes]: I will be presenting a poster in a few weeks but have no experience with them. I've seen and given plenty of talks, read and written papers, but I have never made or even seen a poster in pure mathematics. Googling I was able to find LaTeX templates but was unable to find any examples or tips on presenting pure mathematics in a poster format. So what experience and examples does the math.SE community have with posters in pure mathematics topics?
-This mathoverflow question https://mathoverflow.net/questions/21401/how-do-you-make-a-good-math-research-poster-for-a-non-mathematical-audience is related but here I'm asking about a poster aimed at research mathematicians, not a general audience.
-I hope this is in the scope of the website. I'm also not sure what the appropriate tags are.
-
-REPLY [2 votes]: Since most book covers in research mathematics are intentionally kept sober, to make a poster you may want to keep it simple yet not too boring. Also, it depends on the subject, audience, medium and budget.
-Some tips:
-
-Keep it simple or, to borrow the term, minimalistic.
-Use visual wordplay or a pun (as long as it is not overdone), or an allusion to artwork such as Magritte or Escher if you are dealing with a topic that is self-referential, but avoid being cliché. Since mathematics is a hotbed of symbols and logos, you can alter a font to mold it into an object: the zero-sharp, zero-dagger, club suit, diamond suit, etc. For example, a bird morphing from "ω" (the lowercase Greek letter omega).
-You can also use an equation in a rebus style, such as the much-abused meme of $i$, complex numbers and irrationality. Although memes can be rather cheap humor, you can browse them to keep ideas flowing. This cartoon was quite interesting. (It can also be a simple Euler's identity with the tagline: Thus God exists.) Also, try to make a campaign (to borrow an advertising term) so that you keep one constraint, e.g. "Thus God exists", and you show mathematical equations such as the one already mentioned, Kurt Goedel's proof, etcetera. Only caveat: keep it simple and connect in a non-sequitur manner. If you show the image of the two balls from the Banach-Tarski paradox, your tagline could be something like: Never a boring day in the classroom.
-Perhaps a historical image in a monotone shade: either a sketch of the mathematician, if he is obscure, or of the university or locale.
-Keep in mind that if it is posted on a bulletin board, passersby will have a very short span in which to notice it, so it cannot be too deep to "get it".
-Another idea could be "ambient" (a term in advertising), where you use the surroundings to make a point, such as a life-size ballerina image around a revolving door. Stickers on calendars, the numpad on a phone, mirrors, or trees can be ways to spread the message. Taglines like: The average person sees a tree, a number theorist sees the sequence 1 1 2 3 5..., or: The average person sees a coffee cup, but a topologist sees a donut. (The last one is cliché, but serves for illustration purposes.)
-
-Some examples:
-The famous book cover wittily shows and tells the theme.
-
-A Cantor one I found online that shows but does not tell the theme.<|endoftext|>
-TITLE: Is there a differential limit?
-QUESTION [9 upvotes]: I'm wondering if there's such a concept as a "differential limit". Let me give an example because my nomenclature is my own and unofficial, but hopefully indicative of the concept.
-For some function f(x), there may exist a derivative of that function f'(x) which we call a first-order derivative of f(x). Likewise from f'(x) we could also derive a second-order function f''(x). Is there a mathematical concept of the limit of a function as the order of differentiation goes to infinity?
-I can think of a function that I can take the derivative of an infinite number of times (albeit without getting a constant value): f(x) = sin(x)
-The function g(x) = x^2 might have a differential limit of 0. (g'(x) = 2x; g''(x) = 2; g'''(x) = 0 ...)
-Is there any usefulness to this concept? A topic of study or keyword that discusses this? Any more examples of infinitely differentiable functions?
-Most importantly, if this is interesting to anyone other than myself: what is a good resource for learning more?
-
-REPLY [6 votes]: It would seem that if the sequence $f$, $f'$, $f''$, ..., $f^{(n)}$, ... tends to a limit as a function, then that limit would be a solution of the differential equation $\frac{dy}{dx}=y$. So if the limit exists, it is either zero or some multiple of $e^x$. (I'm assuming that the notion of limit we're using is one that the differentiation operator is continuous with respect to, which doesn't seem like much to ask. Pointwise convergence is out, though.)
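-A small symbolic experiment along these lines (my own addition, using sympy): any polynomial part dies after finitely many derivatives, and only the multiple of $e^x$ survives.
-
-    import sympy as sp
-
-    x = sp.symbols('x')
-    f = sp.exp(x) + x**3 + 2*x
-
-    g = f
-    for k in range(1, 6):
-        g = sp.diff(g, x)
-        print(k, g)   # from k = 4 on, every derivative is exp(x)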
-Also, if we have some function whose iterated derivatives converge towards $ce^x$, we can subtract $ce^x$ from the initial function and get a new sequence whose iterated derivatives converge towards the zero function. So if we look for solutions to $\lim_{n\to\infty} f^{(n)}=0$ we essentially get all functions whose iterated derivatives have a limit.
-All polynomials qualify, of course, but there are also non-polynomial analytic solutions such as $\sum_{n=1}^\infty \frac{1}{n\cdot n!} x^n$ and in general $\sum_{n=0}^\infty \frac{a_n}{n!}x^n$ whenever $\lim_{n\to \infty} a_n = 0$. I wonder whether some of these have nice closed forms.<|endoftext|>
-TITLE: Generalization of Hölder's inequality
-QUESTION [8 upvotes]: Assume $1
-TITLE: Understanding the $L^\infty$ norm
-QUESTION [8 upvotes]: I'm very confused about $L^\infty$. So I'm trying to prove this:
-Is $\|f\|_{\infty}$ the smallest of all numbers of the form $\sup\{|g(x)| \,:\,x \in X\}$, where $f=g $ $\mu$-a.e.?
-
-REPLY [11 votes]: The answer to your question is yes. Recall that the definition of $\|f\|_\infty$ is
-$$\|f\|_\infty=\operatorname{ess sup}(f)=\inf\{a\in\mathbb{R}\mid \mu(\{x\in X\mid |f(x)|>a\})=0\}.$$
-I'll interpret the phrase "the smallest of all numbers of the form $\sup\{|g(x)|\,:\,x\in X\}$, where $f=g$ $\mu$-a.e." to mean $$M=\inf_{g=f\text{ a.e.}}\left\{\sup_{x\in X}|g(x)|\right\}.$$
-
-For any $\epsilon>0$, we can define $g:X\to\mathbb{R}$ by
-$$g(x)=\begin{cases}f(x) & \text{if }|f(x)|\leq \|f\|_\infty+\epsilon,\\0 & \text{otherwise}.\end{cases}$$
-Note that $$\mu(\{x\in X\mid f(x)\neq g(x)\})=\mu(\{x\in X\mid |f(x)|>\|f\|_\infty+\epsilon\})=0$$
-by the definition of $\|f\|_\infty$, so that $f=g$ almost everywhere, and that $\sup_{x\in X}|g(x)|\leq \|f\|_\infty+\epsilon$.
-Thus,
-$$M=\inf_{g=f\text{ a.e.}}\left\{\sup_{x\in X}|g(x)|\right\}\leq \|f\|_\infty+\epsilon$$
-for all $\epsilon>0$, and thus $M\leq \|f\|_\infty$.
-
-For the other direction, note that for any $\epsilon>0$, $$\mu(\{x\in X\mid |f(x)|>(\|f\|_\infty-\epsilon)\})>0$$ by the definition of $\|f\|_\infty$. If $g=f$ a.e., then certainly $|g|=|f|$ a.e., so we must have that $$\mu(\{x\in X\mid |g(x)|>(\|f\|_\infty-\epsilon)\})>0$$ too, so that $|g(x)|>\|f\|_\infty-\epsilon$ for some $x\in X$. Thus, for any $g$ such that $g=f$ a.e., we have that $$\sup_{x\in X}|g(x)|\geq \|f\|_\infty-\epsilon$$
-so that
-$$M=\inf_{g=f\text{ a.e.}}\left\{\sup_{x\in X}|g(x)|\right\}\geq \|f\|_\infty-\epsilon$$
-for all $\epsilon>0$, and thus $M\geq \|f\|_\infty$.
-This shows that $M=\|f\|_\infty$.<|endoftext|>
-TITLE: Characterization of ideals in rings of fractions
-QUESTION [7 upvotes]: Let $R$ be a commutative unital ring. Let $S$ be a multiplicative subset.
-
-Is there a characterisation of the ideals in the ring of fractions $S^{-1}R$ in terms of ideals $I$ of $R$?
-
-REPLY [5 votes]: Proposition: Let $R$ be a commutative ring with unity. Proper ideals of the ring of fractions $D^{-1}R$ are of the form $\displaystyle D^{-1}I = \bigg\{ \frac{i}{d} : i \in I,\ d \in D\bigg\}$ with $I$ an ideal of $R$ and $I \cap D = \emptyset$.
-Proof: Let $J$ be a proper ideal of $D^{-1}R$. Let $I = J \cap R$ and observe that $I$ is an ideal of $R$. Suppose to the contrary that $I \cap D \neq \emptyset$. Let $d \in I \cap D$; then $d \in I$.
Observe that $\displaystyle \frac{d}{1} \in J$. Moreover, since $J$ is an ideal, it must absorb any element from $D^{-1}R$. Observe that $\displaystyle \frac1d \in D^{-1}R$. Hence it must follow that $\displaystyle \frac1d \cdot \frac{d}{1} = 1 \in J$ and thus $J$ contains a unit, which implies $J = D^{-1}R$, a contradiction to $J$ being proper. Thus $I \cap D = \emptyset$.
-Let $j \in J$. Observe that $\displaystyle j = \frac{i}{d} = \frac{1}{d}\frac{i}{1}$ for some $i \in R$, $d \in D$. Since $J$ is an ideal of $D^{-1}R$ it must absorb $\displaystyle \frac{d}{1}$ and thus $\displaystyle \frac{d}{1}\bigg(\frac{1}{d}\frac{i}{1}\bigg) = \frac{i}{1} \in J$. Now since $I = J \cap R$ and $\frac{i}{1} = i \in J \cap R$, it follows that $i \in I$. Hence $J \subseteq D^{-1}I$.
-Let $x \in D^{-1}I$ where $\displaystyle x = \frac{i}{d}$ for some $i \in I$ and some $d \in D$. Since $i \in I = J \cap R$, then $i \in J$ and hence $x \in J$. So $D^{-1}I \subseteq J$.
-Thus we can conclude that $J = D^{-1}I$.
-
-As an example you might like to consider $R = \mathbb Z$ and $D = \{12^i : i = 0, 1, 2, \ldots\}$. You can see that $I = D^{-1}2\mathbb Z$ is not a proper ideal because you will get a unit with $r = 6$, $i = \frac{2}{12}$.
-
-Proposition: Let $R$ be a commutative ring with unity. Prime ideals in $D^{-1}R$ are of the form $D^{-1}P$ where $P$ is a prime ideal of $R$ and $P \cap D = \emptyset$.
-Proof: Suppose $Q$ is a prime ideal of $\displaystyle D^{-1}R = \bigg\{ \frac{r}{d} : r \in R,\ d \in D \bigg\}$. Set $P = Q \cap R$. Suppose to the contrary that $P \cap D \neq \emptyset$. Choose $d \in P \cap D$. Observe that $\frac{d}{1} \in Q$. Moreover $\frac1d \in D^{-1}R$. Since $Q$ is an ideal, it follows that $\frac1d \frac{d}{1} = 1 \in Q$, which implies that $Q = D^{-1}R$, a contradiction to $Q$ being a prime ideal, which by definition is proper. Hence $P \cap D = \emptyset$.
-Let $q \in Q$. Then $q = \frac{r_1}{d_1}$ for $r_1 \in R$ and $d_1 \in D$. Observe that $\frac{d_1}{1} \in D^{-1}R$. So, by the property of $Q$ being an ideal, $\frac{d_1}{1} \frac{r_1}{d_1} = \frac{r_1}{1} \in Q$ and since $P = Q \cap R$, then $r_1 \in P$. Hence we have $\frac{r_1}{1} \in D^{-1}P$. Thus $Q \subseteq D^{-1}P$.
-Let $x \in D^{-1}P$. Then $x = \frac{p_1}{d_1}$ where $p_1 \in P$ and $d_1 \in D$. Since $P = Q \cap R$ and $p_1 \in P$ it follows that $p_1 \in Q$ so $x \in Q$. Hence $D^{-1}P \subseteq Q$.
-Conclude that $Q = D^{-1}P$.<|endoftext|>
-TITLE: When $\mathbb{Z}/pq\mathbb{Z}$ is not semisimple?
-QUESTION [6 upvotes]: Prove that for any primes $p$, $q$, $p\neq q$, the ring $\mathbb{Z}_{pq}$ (the ring of integers modulo $pq$) is semisimple, and for $p=q$ the same ring is not semisimple.
-I was told that the easiest way is to observe that it has global dimension 1, so it's hereditary, not semisimple. But I don't know how to prove this.
-I'm sure it's not complicated, but it eludes my mind. Thanks in advance for any useful replies.
-
-REPLY [3 votes]: Here, I'll always be assuming $n>1$.
-The ring $\mathbb{Z}/n\mathbb{Z}$ is semisimple exactly when $n$ is squarefree. The easy way to see this is that the ideals generated by the primes dividing $n$ are maximal, pairwise comaximal, and have trivial intersection, and so the Chinese Remainder Theorem says the ring is a finite product of fields.
-If, on the other hand, $n$ is not squarefree (say $p^2$ divides it), then $\frac{n}{p}\mathbb{Z}/n\mathbb{Z}$ is a nonzero nilpotent ideal, and so the ring is not semisimple.
-The ring $\mathbb{Z}/n\mathbb{Z}$ is local iff $n$ is a power of a prime.
-The ring $\mathbb{Z}/n\mathbb{Z}$ is always quasi-Frobenius.
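-A brute-force illustration of the squarefree criterion (an added Python sketch; the nonzero nilpotents of $\mathbb{Z}/n\mathbb{Z}$ are exactly the nonzero multiples of the radical of $n$):
-
-    def nilpotents(n):
-        # a is nilpotent mod n iff a^k = 0 mod n for some k; k = n suffices
-        return [a for a in range(1, n) if pow(a, n, n) == 0]
-
-    print(nilpotents(12))   # [6]        12 = 2^2 * 3 is not squarefree
-    print(nilpotents(9))    # [3, 6]     the p = q case of the question
-    print(nilpotents(15))   # []         15 = 3 * 5 is squarefree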
<|endoftext|>
-TITLE: Energy method for elliptic PDE
-QUESTION [8 upvotes]: I have the energy functional of a PDE: $-\triangle u +u|u| =f$ in $\Omega$,
-$u=0$ on $\partial \Omega$, with the corresponding energy functional below.
-$$
-E(u)=\int_\Omega\Bigl(\frac12|\nabla u|^2+\frac1{3}|u|^3-f\,u\bigr)dx.
-$$
-Observe that $E(u)$ is well defined on the Banach space $X=H^1_0(\Omega)\cap L^3(\Omega)$.
-I have the following difficulties in understanding it:
-a) How do we correspondingly build the energy functional from the given PDE?
-b) I am not able to understand why the above energy functional is well defined on $X$.
-c) Suppose I define $u$ not to be in $H^1_0(\Omega)\cap L^3(\Omega)$, but rather in, say, $H^2_0(\Omega)\cap L^2(\Omega)$; how would the setting of the energy functional change?
-Please help me to understand. Thanks a lot for giving time.
-
-REPLY [16 votes]: First of all I would highly recommend consulting some text in the calculus of variations. Some recommendations (in no particular order):
-
-Weinstock, Calculus of Variations with Applications to Physics and Engineering is a classic, small book. Available now through Dover for cheap.
-Gelfand and Fomin, Calculus of Variations is also available now through Dover publications.
-Jost and Li-Jost, Calculus of Variations from Cambridge University Press
-
-Part (b) of your question was already resolved in the comments: the short of the matter is that by considering the functional on the choice of function space, we have restricted ourselves to a domain over which the functional is well-defined. (For example, over the space $H^1_0(\Omega)$ we have that by definition an element $u\in H^1_0(\Omega)$ satisfies $\int_\Omega |\nabla u(x)|^2\mathrm{d}x < \infty$.)
-For part (a), the answer is that it is a bit like taking anti-derivatives: you develop some rules based on the reverse operation of finding Euler-Lagrange equations. Let me elaborate a bit more.
-Given a functional $S: H \to \mathbb{R}$ on some Banach space $H$, we are often interested in critical points of $S$ (physically this corresponds to Fermat's principle of least time or the principle of least action; in mathematics we are also interested in area minimizing surfaces or length minimizing curves). In all of those cases, we do what we do in freshman calculus: the critical point is where the derivative of $S$ vanishes.
-Now, let $H$ be a space of functions defined on some domain $\Omega$ vanishing on the boundary $\partial\Omega$, and let the functional be
-$$ S[u] = \int_{\Omega} \mathcal{S}(x,u,\nabla u) \mathrm{d}x $$
-for some function $\mathcal{S}$.
By definition of the derivative, we want that for any function $v\in H$,
-$$ S'[u]\cdot v = \lim_{t\to 0} \frac{S[u+tv] - S[u]}{t} = 0 $$
-Doing a Taylor expansion of $\mathcal{S}$ we have that
-$$ \mathcal{S}(x,u+tv, \nabla u + t \nabla v) = \mathcal{S}(x,u,\nabla u) + \left(\frac{\partial}{\partial u}\mathcal{S}\right)(x,u,\nabla u) \cdot tv + \left(\frac{\partial}{\partial \nabla u}\mathcal{S}\right)(x,u,\nabla u)\cdot t\nabla v + O(t^2)$$
-which gives
-$$S'[u]\cdot v = \int_\Omega \frac{\partial\mathcal{S}}{\partial u} v + \frac{\partial\mathcal{S}}{\partial\nabla u}\cdot \nabla v \,\mathrm{d}x $$
-Integrating the last term by parts we get
-$$ S'[u] \cdot v = \int_\Omega \left( \frac{\partial\mathcal{S}}{\partial u} - \nabla\cdot \frac{\partial \mathcal{S}}{\partial \nabla u}\right) v \,\mathrm{d}x $$
-Since $S'[u]\cdot v = 0$ is supposed to hold for all $v\in H$ (when $u$ is a critical point), this requires that
-$$ \frac{\partial\mathcal{S}}{\partial u}(x,u,\nabla u) - \nabla\cdot \left(\frac{\partial \mathcal{S}}{\partial \nabla u}(x,u,\nabla u)\right) = 0$$
-The above expression is known as the Euler-Lagrange equation associated to the functional $S$.
-You see that in some sense, what you have done in computing the Euler-Lagrange equation is computed the "gradient" of the functional $S$. And so, what you asked in part (a), that of finding an energy functional given an equation, is the opposite of this process, and hence very similar to finding the "antiderivative" of the equation.
-The way to do this is by learning about rules which you can derive by inference after having done some of these computations. Many of these rules have direct analogues in calculus. Some examples:
-
-If $\mathcal{S}$ contains a term $|u|^p$, where $p \geq 2$, then the Euler-Lagrange equation contains a term $p |u|^{p-2} u$.
-If $\mathcal{S}$ contains a term $|\nabla u|^2$, then the Euler-Lagrange equation contains a term $-2 \triangle u$.
-If $\mathcal{S}$ contains a term $F(u)$, where $F$ is a real/complex valued function of one variable, then the Euler-Lagrange equation contains a term $F'(u)$.
-If $\mathcal{S}$ contains a term $V \cdot u$, then the Euler-Lagrange equation contains a term $V$.
-
-The four rules given here are the simplest and most common types of terms of the functional $S$. Here I've assumed that $\mathcal{S}$ does not depend on $x\in\Omega$. If it does, this introduces another layer of complication, as the second term of the Euler-Lagrange equation involves taking a physical-space divergence $\nabla\cdot$, and when derivatives hit the $x$ dependence of $\mathcal{S}$ we pick up additional lower-order terms.
-Just like finding anti-derivatives, finding the energy functional is mostly educated guesswork involving experience and some memorised forms (a small symbolic check follows below). Here I re-iterate what I said in the comments: just as in multivariable calculus not all vector fields can be written as the gradient of a function, in the functional setting not all equations can be written as the Euler-Lagrange equation of some functional. That is to say, not all PDEs are variational in nature.
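-As a sanity check of these rules, here is a small symbolic sketch (my own addition, using sympy's euler_equations; I write $u^3$ in place of $|u|^3$, which agrees with it where $u\ge 0$, since the absolute value is awkward to differentiate symbolically). In one space dimension the integrand from the question recovers the expected equation $-u''+u^2=f$:
-
-    import sympy as sp
-    from sympy.calculus.euler import euler_equations
-
-    x = sp.symbols('x')
-    u = sp.Function('u')(x)
-    f = sp.Function('f')(x)
-
-    # integrand of E(u): (1/2)|u'|^2 + (1/3)u^3 - f*u   (u^3 standing in for |u|^3)
-    L = sp.Rational(1, 2)*u.diff(x)**2 + sp.Rational(1, 3)*u**3 - f*u
-
-    print(euler_equations(L, u, x))
-    # prints something like [Eq(-f(x) + u(x)**2 - Derivative(u(x), (x, 2)), 0)]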
-Edit: As Theorem clarified in a comment:
-
-Basically my question in part (c) is, how would the energy functional change if I define $u$ to be in a different space?
-
-The answer is it won't. One should be careful with one's goals and not "put the cart before the horse".
-
-In the case you are considering a given variational problem (minimise a functional over a certain function space), the function space is given and so it makes no sense to change the function space.
-
-In the case you are trying to solve a given PDE, the energy method is just an intermediate tool. The tool demands a particular form of energy functional. So you must choose a function space on which this energy functional is well-defined. But that's not all! To use the machinery of the calculus of variations, the functional has to be "sufficiently coercive" relative to the norm of your function space. This is because the analysis of such PDEs usually proceeds by (i) choosing a minimising/extremising sequence in the function space and (ii) showing that it converges. For the problem you mentioned in the question, with standard functional analysis machinery one can show that there exists a bounded minimising sequence, which can be shown to weakly converge using standard abstract nonsense. To get strong convergence in this case we have to use the fact that the norm in which we measure the strong convergence can be controlled by the energy functional, and then we can exploit the convexity of the functional to show that the weak limit must in fact be a strong limit. (The above is very sketchy; I described the general method with a hair more detail in this answer.)
-In other words, what I am trying to say is a bit of philosophy: in solving PDEs, one does not choose a function space first. Instead, one chooses a method and finds a function space in which the method can be applied. In fact, a large and necessary part of modern analysis of partial differential equations consists of clever choices of function spaces in order to implement certain solution schemes.<|endoftext|>
-TITLE: Loopspace adjunction: when are unit or counit equivalences?
-QUESTION [8 upvotes]: For (nice?) pointed spaces, the reduced suspension $\Sigma$ is left adjoint to the loop space $\Omega$. This adjunction is given by the unit maps
-$\eta_X : X \to \Omega \Sigma X$, $x \mapsto (t \mapsto [x,t])$
-and the counit maps
-$\varepsilon_X : \Sigma \Omega X \to X , [\omega,t] \mapsto \omega(t).$
-Question. For which $X$ is $\eta_X$ a homotopy equivalence? For which $X$ is $\varepsilon_X$ a homotopy equivalence?
-If this is useful, let's assume that $X$ is sufficiently nice (for example a CW complex). You may also replace "homotopy equivalence" by "homology equivalence" etc., if this yields interesting statements. If there is no characterization: What are interesting classes of examples? And is there any source in the literature where this sort of question is studied?
-
-REPLY [2 votes]: You can build on Simon's answer though. $J(X)$ has a great definition, you should check it out. Then it may be easy to see when it could be true or why it must always fail. (Can an algebra ever be isomorphic as an algebra to the tensor algebra generated by it?)
-Another suggestion is that you might try to answer the homology-equivalence question by yourself using the Serre spectral sequence (using the path-loop space fibration). I bet this will pair nicely with the tensor algebra question.
-Good question, but I bet it will never be the case in either situation. Don't be sad though: how often is the unit or counit of the free-forgetful adjunction an isomorphism?
-Although there are other adjunctions that behave much better, such as Dold-Kan and the adjunction between $sSets$ and $Top$.<|endoftext|>
-TITLE: Bijective holomorphic map
-QUESTION [5 upvotes]: Will a bijective holomorphic map from the unit disk to itself be a rotation, that is, $f(z)=e^{i\alpha}z$? How do I approach this problem? In addition, I want to know how one can remember the conformal maps which send the unit disk to the upper half plane and conversely, and similar standard maps between other regions.
-
-REPLY [9 votes]: For $w\in\mathbb{D}$, consider the following function defined on $\mathbb{D}$:
-$$B_w(z) = \frac{z-w}{1-\overline{w}z}.$$
-This function is called a "Blaschke factor." It defines a bijective holomorphism from $\mathbb{D}$ to $\mathbb{D}$, but it is not a rotation if $w\neq 0$. This demonstrates there are many such mappings that are not rotations.
-To see this, first note that by an easy computation $B_{-w}$ is an inverse for $B_{w}$, so it is biholomorphic. To see it maps $\mathbb{D}\rightarrow \mathbb{D}$, just take the modulus and do the standard manipulations, noting that $|z|<1$ and $|w|<1$.
-However, if you make the assumption that $f(0)=0$, your guess is correct. All biholomorphic maps of the unit disk that fix the origin are rotations. We use the Schwarz lemma to show this. If you are not familiar with this, leave a comment below and I will edit this answer to include a proof and discussion.
-By the lemma, we have $|f'(0)|\le 1$. Because $f^{-1}$ is another origin-preserving biholomorphism, we also have $|(f^{-1})'(0)|\le 1$. Because we know by the differentiation rule for inverse functions that $f'(0)=\frac{1}{(f^{-1})'(0)}$, we must have $|f'(0)|=|(f^{-1})'(0)|=1$, and considering the equality case in the Schwarz lemma, we see $f$ must be a rotation.
-We know that $f^{-1}$ is differentiable. Then the chain rule gives
-$$f(f^{-1}(x))=x\implies f'(f^{-1}(x))=\frac{1}{(f^{-1})'(x)}$$
-and noting here that $f^{-1}(0)=0$ gives the result.
-To answer the second part of your question, I suggest you use Google and the literature available to you. The magic search terms are "automorphism group of the disk" and "automorphism group of the upper half plane." This will tell you about all possible biholomorphisms of each space. Now, because the half-plane is conformally equivalent to the unit disk, knowing the automorphism group of the unit disk is enough to know all the maps from the unit disk to the upper half-plane.<|endoftext|>
-TITLE: Collection of continuous functions determines the topology
-QUESTION [15 upvotes]: Let $(X,\tau)$ be a topological space. Does the set of all continuous functions $X\rightarrow X$ uniquely determine $\tau$?
-
-REPLY [4 votes]: No.
-Let $X$ be an arbitrary set with two or more elements, let $\tau_1$ be the discrete topology on $X$ (all sets are open), and let $\tau_2$ be the trivial topology on $X$ (only $X$ and $\emptyset$ are open).
-Note that all maps from $(X,\tau_1)$ and all maps into $(X,\tau_2)$ are continuous, regardless of where they go or where they come from, respectively.
-Thus, in both cases, the set of all continuous maps $X\rightarrow X $ is the set of all set-theoretic maps $X\rightarrow X $, but $\tau_1 \neq \tau_2$.<|endoftext|>
-TITLE: Question Regarding Cardano's Formula
-QUESTION [14 upvotes]: In Cardano's derivation of a root of the cubic polynomial $f(X)=X^3+bX+c$ he splits the variable $X$ into two variables $u$ and $v$ together with the relationship that $u+v=X$.
From this he finds that $x=u+v$, where
-$$u=\sqrt[3]{\frac{-c}{2}+\sqrt{\frac{c^2}{4}+\frac{b^3}{27}}}$$ and $$v=\sqrt[3]{\frac{-c}{2}-\sqrt{\frac{c^2}{4}+\frac{b^3}{27}}}$$
-is a root of $f(X)$.
-
-Is there an intuitive explanation for why Cardano splits the variable $X$ into two parts, $u$ and $v$?
-
-REPLY [6 votes]: Cardano knew that any quadratic equation of the form $$x^2+bx+c=0\tag{1}$$ can be written as $$x^2-(u+v)x+uv=0,\tag{2}$$ where $u$ and $v$ are the roots of the equation. Since by setting $t=u+v$ in the reduced cubic $$t^3+pt+q=0\tag{3}$$ we get $$(u^3+v^3+q)+(3uv+p)(u+v)=0,\tag{4}$$ then every root of the system $$u^3+v^3+q=0\tag{5a}$$ $$3uv+p=0\tag{5b}$$ is a root of $(4)$ as well, and based on the property of the quadratic equation indicated in $(2)$ it is now easy to find a formula for $t$ satisfying equation $(3)$.
-Added. We just need to find two numbers $u^3$ and $v^3$ such that their sum is $-q$ and their product is $-p^3/27$, which we know from $(1)$-$(2)$ are the roots of the quadratic equation $$Y^2+qY-\frac{p^3}{27}=0.\tag{6}$$
-Consequently,
-$t=u+v=\sqrt[3]{u^3}+\sqrt[3]{v^3}$.<|endoftext|>
-TITLE: How to check whether a polygon is completely inside another polygon?
-QUESTION [5 upvotes]: Let's say I have two polygons, and I know the coordinates of both. Now I need to check whether the first polygon is completely inside the second polygon. In this figure only one polygon is completely inside the red polygon.
-
-REPLY [3 votes]: In order for polygon $A$ to be inside polygon $B$, all of the vertices of $A$ must be inside $B$, and all of the vertices of $B$ must be outside $A$ (for non-convex polygons one should additionally check that no edge of $A$ crosses an edge of $B$). The second condition does not need to be checked if $B$ is known to be convex.<|endoftext|>
-TITLE: If $S$ consists of units then $S^{-1}R \cong R$
-QUESTION [5 upvotes]: I want to show that if $S$ consists of units then $S^{-1}R \cong R$.
-Can you tell me if my proof is correct?
-Since $S$ consists of units, $S$ is zero-divisor free and hence $f: R \to S^{-1}R$, $r \mapsto \frac{r}{1}$ is injective. So we have an isomorphism $h: R \to f(R)$.
-Now we construct an isomorphism $S^{-1}R \to f(R)$: For this we pick any $s_0$ in $S$ and denote by $u \in R$ its inverse, i.e. $s_0 u = us_0 = 1$. Then the map $g_u : S^{-1}R \to f(R)$, $\frac{r}{s} \mapsto \frac{ru}{1}$ is an isomorphism. It is injective since if $\frac{ru}{1} = \frac{r^\prime u}{1}$ then $ru = r^\prime u$ and since $u$ is a unit, $r = r^\prime$. And it is surjective since for $\frac{r}{1}$ in $f(R)$, $g_u(\frac{rs_0}{s_0}) = \frac{rs_0u}{1} = \frac{r}{1}$. Hence $R \cong f(R) \cong S^{-1}R$.
-
-REPLY [3 votes]: The natural injection is the isomorphism, but I think you don't have to work so hard.
-Just note that for any $(r,s)$ in $S^{-1}R$, we have $(r,s)=(rs^{-1},1)$.
-So, $S^{-1}R=\operatorname{Im}(f)\cong R$.<|endoftext|>
-TITLE: When weak convergence implies moment convergence?
-QUESTION [6 upvotes]: Given a sequence $(\mu_n)_n$ of probability measures on $\mathbb R$, which converges weakly to a probability measure $\mu$, when do we have
-$$
-\tag{1} \lim_{n}\int x^kd\mu_n(x)=\int x^k d\mu(x) \qquad \forall k\geq 0\;?
-$$
-Is "$\mu$ has compact support" a sufficient condition?
-Note that $\mu_n$ converges to $\mu$ weakly if
-$$ \int \varphi d\mu_n \to \int \varphi d\mu$$
-for all $\varphi$ which are continuous and have compact support. Note that the $x^k$ are continuous but not of compact support, so (1) is not immediately obvious.
-
-REPLY [5 votes]: This is almost 10 years late. Possibly by now you are a student no longer.
In any event, if $\mu_n\stackrel{n\to\infty}{\Longrightarrow}\mu$, a sufficient condition for $E_{\mu_n}[X]\rightarrow E_\mu[X]$ is that
-$$\lim_{a\rightarrow\infty}\sup_n\int_{\{|x|>a\}}|x|\,\mu_n(dx)=0$$
-This is equivalent to saying that a sequence of random variables $X_n\sim \mu_n$, defined on a common domain $(\Omega,\mathscr{F},\mathbb{P})$, is uniformly integrable. This can be seen for instance in Billingsley, P., Convergence of Probability Measures, John Wiley & Sons, 1968, p. 32.
-From this result, it is not difficult to derive sufficient conditions for $E_{\mu_n}[X^r]\xrightarrow{n\rightarrow\infty}E_{\mu}[X^r]$ (again, uniform integrability is the key).
-In the counterexample given by the responder (user940), $\mu_n=(1-\tfrac1n)\delta_0+\frac1n\delta_{x_n}$ with $x_n\xrightarrow{n\rightarrow\infty}\infty$.
-Notice that given $a>0$, for all $n$ large enough $x_n>a$ and so
-$$\int\mathbb{1}_{\{|x|>a\}}|x|\,\mu_n(dx)=\frac{1}{n}x_n$$
-If $x_n\geq n$ for all $n$, then
-$$\sup_n\int\mathbb{1}_{\{|x|>a\}}|x|\,\mu_n(dx)\geq1$$
-This means that in this case the measures do not satisfy the uniform integrability condition.<|endoftext|>
-TITLE: Primes of the form $\lfloor x^k\rfloor$
-QUESTION [5 upvotes]: I'm looking for a result (embarrassingly enough, a somewhat famous result) which shows the infinitude in some sense I don't recall of primes of the form
-$$
-\lfloor x^k\rfloor
-$$
-for $k$ fixed and irrational. There were sharp limits on the size of $k.$ I think the original result has been improved many times, mostly by widening the allowable range of $k.$
-
-REPLY [6 votes]: Rivat and Sargos, Nombres premiers de la forme $[n^c]$ [Primes of the form $[n^c]$], Canad. J. Math. 53 (2001), no. 2, 414-433, MR1820915 (2002a:11107), reviewed by G. Greaves.
-The authors establish an asymptotic formula for the number of primes not exceeding $x$ of the form $[n^c]$. Their result applies for each $c$ with $1\lt c\lt2817/2426$. The review compares this to previous work, and there are links to other papers and reviews that cite this paper.
-Apparently the first paper along these lines was by Piatetski-Shapiro in 1953, with $1\lt c\lt12/11$.<|endoftext|>
-TITLE: Computing: $L =\lim_{n\rightarrow\infty}\left(\frac{\frac{n}{1}+\frac{n-1}{2}+\cdots+\frac{1}{n}}{\ln(n!)} \right)^{{\frac{\ln(n!)}{n}}} $
-QUESTION [7 upvotes]: Compute the following limit:
-$$L =\lim_{n\rightarrow\infty}\left(\frac{\frac{n}{1}+\frac{n-1}{2}+\cdots+\frac{1}{n}}{\ln(n!)} \right)^{{\frac{\ln(n!)}{n}}} $$
-I'm looking for an easy, simple solution here, but not sure yet this is possible. Any hint or suggestion along this way is welcome. Thanks.
-
-REPLY [7 votes]: I'm not sure that I'm right.
-First we have -$\sum_{k=1}^n (n+1-k)/k = (n+1)H_n-n$, -so -$$L = \lim_{n\to\infty} \left(\frac{(n+1)H_n-n}{\ln n!}\right)^{\frac{\ln n!}n}$$ -Take logarithm, we have $\ln L = \lim_{n\to\infty} A(n)B(n)$, where -$$A(n) = \frac{\ln n!}n = \frac{n\ln n+O(n)}n = \ln n+O(1)$$ -and $B(n) = \ln C(n)$ where -\begin{align*} -C(n) -&= \frac{(n+1)H_n-n}{\ln n!} \\ -&= \frac{(n+1)(\ln n+\gamma+O(1/n))-n}{n\ln n-n+O(\log n)} \\ -&= \frac{n\ln n-(1-\gamma)n+O(\log n)}{n\ln n-n+O(\log n)} \\ -&= \frac{1-\dfrac{1-\gamma}{\ln n}+O(1/n)}{1-\dfrac1{\ln n}+O(1/n)} \\ -&= \left(1-\frac{1-\gamma}{\ln n}\right)\left(1-\frac1{\ln n}\right)^{-1}\left(1+O(1/n)\right)^2 \\ -&= \left(1-\frac{1-\gamma}{\ln n}\right)\left(1+\frac1{\ln n}+O(1/\log n)^2\right)\left(1+O(1/n)\right) \\ -&= 1+\frac\gamma{\ln n}+O(1/\log n)^2 -\end{align*} -So -$$B(n) = \ln C(n) = \ln\left(1+\frac\gamma{\ln n}\right)+O(1/\log n)^2 = \frac\gamma{\ln n}+O(1/\log n)^2$$ -and -$$A(n)B(n)=\gamma+O(1/\log n)$$ -Let $n\to\infty$, we have $\lim_{n\to\infty} A(n)B(n)=\gamma$, so $L = e^\gamma$. - -The following equations come from Concrete Mathematics, proved by Euler-Maclaurin formula - -$H_n = \sum_{k=1}^n 1/k = \ln n+\gamma+O(1/n)$, where $\gamma$ is Euler-Mascheroni constant. -$\ln n! = n\ln n-n+O(\log n)$. (It's really Stirling's approximation)<|endoftext|> -TITLE: Equivalent definition of exactness of functor? -QUESTION [6 upvotes]: I'll use the following definition: -(Def) A functor $F$ is exact if and only if it maps short exact sequences to short exact sequences. -Now I'd like to prove the following (not entirely sure it's true but someone mentioned something like this to me some time ago): -Claim: $F$ is exact if and only if it maps exact sequences $M \to N \to P$ to exact sequences $F(M) \to F(N) \to F(P)$ -Proof: -$\Longleftarrow$: Let $0 \to M \to N \to P \to 0$ be exact. Then $0 \to M \to N$, $M \to N \to P$ and $N \to P \to 0$ are exact and hence $0 \to F(M) \to F(N) $, $F(M) \to F(N) \to F(P)$ and $F(N) \to F(P) \to 0$ are exact. Hence $0 \to F(M) \to F(N) \to F(P) \to 0$ is exact. -$\implies$: This is direction I'm stuck with. I am trying to do something like this: Given $M \to N \to P$ exact, we have that $0 \to ker(f) \to M \to im(f) \to 0$ is exact. Hence $0 \to F(ker(f)) \to F(M) \to F(im(f)) \to 0$ is exact. Then I want to do this again for the other side of the sequence and stick it back together after applying $F$ to get the desired short exact sequence. -How does this work? Perhaps I need additional assumptions on $F$? Thanks for your help. - -REPLY [10 votes]: Any exact sequence can be broken down into short exact sequences (the $C_i$ are kernels/images): - -So, since your functor $F$ preserves short exact sequences, you can apply $F$ and the diagonal sequences will remain exact. It's now a general fact that in any such diagram, if the diagonals are exact, then the middle terms are exact as well (by diagram chasing). -EDIT: If $f_i\colon A_i\to A_{i+1}$, then $C_i=\ker(f_i)$ which by exactness is isomorphic to $\operatorname{im}(f_{i-1})$.<|endoftext|> -TITLE: Can we always find a primitive element that is a square? -QUESTION [13 upvotes]: Let $L/\mathbb Q$ be a finite field extension. The Primitive Element Theorem says that there is an element $\alpha \in L$ so that $L=\mathbb Q(\alpha)$. Can I always find an element $\beta \in L$ so that $L=\mathbb Q(\beta^2)$ ? - -REPLY [12 votes]: $\newcommand{\Q}{\mathbb Q}$ -I think this works. Let $K/\Q$ be a finite field extension. 
Note that $K$ has finitely many subfields, call them $F_1,\dots,F_t$ and let $W$ be their union. Then any element in $K \setminus W$ is necessarily a primitive element. So it suffices to show there exists some $\alpha \in K \setminus W$ that has a square root. Pick $\alpha \in K \setminus W$ and note that for $k \in \Q$ we still have $\alpha_k=\alpha + k \in K \setminus W$ (otherwise $\alpha \in W$). Consider $\alpha_k^2=(\alpha+k)^2=\alpha^2+2k\alpha+k^2$ and suppose that for every $k \in \Q \setminus \{0\}$ we have $\alpha_k^2 \in W$. Then by the pigeonhole principle we must have some $j$ and $k_1 \neq k_2$ such that $\alpha_{k_1}^2,\alpha_{k_2}^2 \in F_j$. So we have -$$\frac{\alpha_{k_1}^2-\alpha_{k_2}^2-(k_1^2-k_2^2)}{2(k_1-k_2)} \in F_j$$ -but -$$ \frac{\alpha_{k_1}^2-\alpha_{k_2}^2-(k_1^2-k_2^2)}{2(k_1-k_2)} =\frac{2\alpha(k_1-k_2)}{2(k_1-k_2)}=\alpha. $$ -This would imply that $\alpha \in F_j \subset W$, a contradiction; so we have some $k \in \Q\setminus \{0\}$ such that $(\alpha+k)^2 \in K \setminus W$ and thereby $\Q((\alpha+k)^2)=K$. -I think this could be turned into a constructive argument fairly easily. Take one of the primitive elements given to you by the primitive element theorem, then add $t+1$ non-zero rational numbers to it and the square of one of those will be the desired element. -Addendum: This could also extend to a proof that there always exists an element $\alpha \in K$ such that $\Q(\alpha^n)=K$. Essentially use the same argument, except find $n+1$ distinct $c_i$ so that the $\alpha_{c_i}^n$ lie in a common subfield $F_j$, then notice that the matrix given by $A_{ij}=\binom{n}{j}c_i^j$ has determinant a constant multiple of that of the Vandermonde matrix $A_{ij}=c_i^j$. Thereby the determinant is nonzero and so we can find a linear combination of the $\alpha_{c_i}^n$ that gives us $\alpha$.<|endoftext|> -TITLE: Is the set of all probability measures weak*-closed? -QUESTION [5 upvotes]: Let $(\Omega,\Sigma)$ be a measurable space. Denote by $ba(\Sigma)$ the set of all bounded and finitely additive measures on $(\Omega,\Sigma)$ (see http://en.wikipedia.org/wiki/Ba_space for a definition). Is the set of all probability measures $\mathcal{M}_1(\Sigma)\subseteq ba(\Sigma)$ weak*-closed? The weak*-topology on $ba(\Sigma)$ is the weakest topology such that the maps $l_Z:ba(\Sigma)\rightarrow \mathbb{R}$, mapping $\mu\mapsto \int_\Omega Z d\mu$, are continuous for all bounded and measurable maps $Z:\Omega\rightarrow \mathbb{R}$. - -REPLY [6 votes]: No. Take $\Omega=\mathbb N$ and $\Sigma$ the power set. As wikipedia says, then $ba(\Sigma)=ba=(\ell^\infty)^*$. However, the collection of probability measures is just the collection of $(x_n)\in\ell^1$ (as measures are countably additive) with $x_n\geq 0$ for all $n$, and $\sum_n x_n=1$. This is not weak$^*$-closed in $(\ell^\infty)^*$. For example, any limit point of the set $\{\delta_n:n\in\mathbb N\}$, where $\delta_n\in\ell^1$ is the point mass at $n$, is a member of $ba \setminus \ell^1$.<|endoftext|> -TITLE: Was there some prior idea that inspired both Fermat & Descartes to invent coordinates? -QUESTION [15 upvotes]: It seems incredible to me that both Descartes & Fermat could have simultaneously discovered such a novel & significant idea, without there being some single prior idea that they both could have taken inspiration from. Has there been any research done on this, or can someone explicate the historical record further? -Wikipedia does state that Nicole Oresme in the 14C made constructions similar to coordinates. This is well before either Descartes or Fermat.
It doesn't state whether they were influenced by him. -Alternatively, could they have been influenced by developments in Cartography or Map-making? - -REPLY [3 votes]: To complement Henning Makholm's fine answer, I would add that it is not entirely accurate to say that "The improved notation had been in development for at least a couple of centuries, with algebraists gradually freeing themselves from the classical/medieval tradition of describing everything in prose." Here the key name is Vieta. His work introduced the idea of symbolic mathematics in a systematic way, and constituted the transition to the modern period as far as symbolism is concerned. Both Fermat and Descartes relied on this.<|endoftext|> -TITLE: The Fibonacci sum $\sum_{n=0}^\infty \frac{1}{F_{2^n}}$ generalized -QUESTION [26 upvotes]: The evaluation, -$$\sum_{n=0}^\infty \frac{1}{F_{2^n}}=\frac{7-\sqrt{5}}{2}=\left(\frac{1-\sqrt{5}}{2}\right)^3+\left(\frac{1+\sqrt{5}}{2}\right)^2$$ -was recently asked in a post by Chris here. -I like generalizations, and it turns out this is not a unique feature of the Fibonacci numbers. If we use the Pell numbers $P_m = 1,2,5,12,29,70,\dots$ then the sum is also an algebraic number of deg 2. In general, it seems for any positive rational b, then, -$$\sum_{n=0}^\infty \frac{1}{\frac{1}{\sqrt{b^2+4}}\left( \left(\frac{b+\sqrt{b^2+4}}{2}\right)^{2^n}-\left(\frac{b-\sqrt{b^2+4}}{2}\right)^{2^n}\right)}=1+\frac{2}{b}+\frac{b-\sqrt{b^2+4}}{2}$$ -where Fibonacci numbers are just the case b = 1, the Pell numbers b = 2, and so on. (For negative rational b, then one just uses the positive case of $\pm\sqrt{b^2+4}$.) -Anyone knows how to prove/disprove the conjectured evaluation? - -REPLY [19 votes]: Your conjecture is indeed right. Before proving your conjecture, let us obtain an intermediate result first. Let us prove the following claim first. - -CLAIM: -If we have a sequence given by the recurrence, -$$a_{n+2} = ba_{n+1} + a_n,$$ with $a_0 =0 $ and $a_1 = 1$, we then have -$$\boxed{\color{blue}{\displaystyle \sum_{k=0}^{N} \dfrac1{a_{2^k}} = 1 + \dfrac2b - \dfrac{a_{2^N-1}}{a_{2^N}}}}$$ - -Proof: -Let us write out a few terms of this sequence, we get -$$a_0 = 0, a_1 = 1, a_2 = b, a_3 = b^2 + 1, a_4 = b^3 + 2b, \cdots$$ -The proof is by induction on $N$. For $N=1$, we have the left hand side to be $$\dfrac1{a_1} + \dfrac1{a_2} = 1 + \dfrac1b$$ while the right hand side is $$1 + \dfrac2b - \dfrac{a_1}{a_2} = 1 + \dfrac2b - \dfrac1{b} = 1 + \dfrac1b$$ -For $N=2$, we have the left hand side to be $$\dfrac1{a_1} + \dfrac1{a_2} + \dfrac1{a_4} = 1 + \dfrac1b + \dfrac1{b^3 + 2b}$$ while the right hand side is $$1 + \dfrac2b - \dfrac{a_3}{a_4} = 1 + \dfrac2b - \dfrac{b^2+1}{b^3+2b} = 1 + \dfrac1b + \dfrac1b - \dfrac{b^2+1}{b^3+2b} = 1 + \dfrac1b + \dfrac1{b^3+2b}$$ Hence, it holds for $N=1$ and $N=2$. Now lets go ahead with induction now. Assume the result is true for $N=m$ i.e. we have -$$\sum_{k=0}^{m} \dfrac1{a_{2^k}} = 1 + \dfrac2b - \dfrac{a_{2^m-1}}{a_{2^m}}$$ -Now $$\sum_{k=0}^{m+1} \dfrac1{a_{2^k}} = 1 + \dfrac2b - \dfrac{a_{2^m-1}}{a_{2^m}} + \dfrac1{a_{2^{m+1}}}$$ Hence, we want to show that -$$ - \dfrac{a_{2^m-1}}{a_{2^m}} + \dfrac1{a_{2^{m+1}}} = -\dfrac{a_{2^{m+1}-1}}{a_{2^{m+1}}}$$ -i.e. -$$\dfrac1{a_{2^{m+1}}} + \dfrac{a_{2^{m+1}-1}}{a_{2^{m+1}}} = \dfrac{a_{2^m-1}}{a_{2^m}}$$ -i.e. -$$a_{2^m}(1+a_{2^{m+1}-1}) = a_{2^m-1} a_{2^{m+1}} \,\,\,\, (\star)$$ -which can be verified using the recurrence. 
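-Before proving $(\star)$, one can sanity-check it numerically. The following Python sketch (an illustration only; the helper name is my own choice) generates the sequence from the recurrence and tests the identity for several values of $b$ and $m$: -def seq(b, length): -    # a_0 = 0, a_1 = 1, and a_{n+2} = b*a_{n+1} + a_n -    a = [0, 1] -    while len(a) < length: -        a.append(b * a[-1] + a[-2]) -    return a - -for b in range(1, 6):            # b = 1: Fibonacci, b = 2: Pell, ... -    a = seq(b, 2 ** 7 + 1) -    for m in range(1, 6): -        # (star): a_{2^m} (1 + a_{2^(m+1)-1}) = a_{2^m - 1} a_{2^(m+1)} -        assert a[2 ** m] * (1 + a[2 ** (m + 1) - 1]) == a[2 ** m - 1] * a[2 ** (m + 1)] -print("(star) holds for all tested b and m") -All the assertions pass.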
In fact $(\dagger)$, a slightly more general version of $(\star)$ that is easier to check, is true. -$$a_{2k}(1+a_{4k-1}) = a_{2k-1} a_{4k} \,\,\,\, (\dagger)$$ i.e. -$$a_{2k-1} a_{4k} - a_{2k} a_{4k-1} = a_{2k} \,\,\,\, (\dagger)$$ -Hence, we get that -$$\boxed{\color{red}{\displaystyle \sum_{k=0}^{N} \dfrac1{a_{2^k}} = 1 + \dfrac2b - \dfrac{a_{2^N-1}}{a_{2^N}}}}$$ - -Now letting $N \to \infty$, we see that your conjecture is indeed right. This is so since -from the recurrence we get that -$$\dfrac{a_{n+2}}{a_{n+1}} = b + \dfrac{a_n}{a_{n+1}}$$ -If we have $\displaystyle \lim_{n \to \infty} \dfrac{a_n}{a_{n+1}} = L$, then we get that -$$\dfrac1L = b + L$$ and since $L>0$, we have $L = \dfrac{\sqrt{b^2+4}-b}2$. Hence, -$$\boxed{\color{red}{\displaystyle \sum_{k=0}^{\infty} \dfrac1{a_{2^k}} = \lim_{N \to \infty} \displaystyle \sum_{k=0}^{N} \dfrac1{a_{2^k}} = 1 + \dfrac2b - \lim_{N \to \infty} \dfrac{a_{2^N-1}}{a_{2^N}} = 1 + \dfrac2b - L = 1 + \dfrac2b + \dfrac{b}2 -\dfrac{\sqrt{b^2+4}}2}}$$ - -EDIT -After some googling, I found out that a similar result is true for a more general class of recurrences of the form $$a_{n+1} = P a_n + Q a_{n-1}$$ See this article for more details. -Also, try googling Millin series for more details.<|endoftext|> -TITLE: Taking fractions $S^{-1}$ commutes with taking intersection -QUESTION [5 upvotes]: Let $N,P$ be submodules of an $R$-module $M$ and let $S$ be a multiplicative subset of $R$. I think I proved $S^{-1}(N \cap P) = S^{-1}N \cap S^{-1} P$ but since my proof is not the same as the one given in Atiyah-MacDonald on page 39 I suspect there is something wrong with it. Can you tell me please what's wrong here: -Claim: $S^{-1}(N \cap P) = S^{-1}N \cap S^{-1} P$ -Proof: -$\frac{m}{s} \in S^{-1}N \cap S^{-1} P \iff$ $\frac{m}{s} \in S^{-1}N$ and $\frac{m}{s} \in S^{-1}P \iff m \in N$ and $m \in P \iff m \in N \cap P \iff \frac{m}{s} \in S^{-1}(N \cap P)$. - -REPLY [4 votes]: In your notation, it doesn't have to be the case that $m \in N$. There just needs to be an $s' \in S$ such that $s'm \in N$. For example, take $R = M = \mathbf Z$, $N = 6\mathbf Z$, and $S = \{1, 2, 2^2, \ldots\}$. But having made this change, I think you can complete your proof.<|endoftext|> -TITLE: Multiplicative Selfinverse in Fields -QUESTION [5 upvotes]: I assume there are only two multiplicative self-inverses in each field with characteristic bigger than $2$ (the field is finite but I think it holds in general). In a field $F$ with $\operatorname{char}(F)>2$ a multiplicative self-inverse $a \in F$ is an element such that -$$ a \cdot a = 1.$$ -I think in each field it is $1$ and $-1$. Any ideas how to prove that? - -REPLY [2 votes]: Hint $\rm\ x^2\! =\! 1\!\iff\! (x\!-\!1)(x\!+\!1) = 0\! \iff\! x = \pm1,\:$ by $\rm\:ab=0\:\Rightarrow\: a=0\:\ or\:\ b=0\:$ in a field. -This may fail if the latter property fails, i.e. if nontrivial zero-divisors exist. Consider, for example, $\rm\ x^2 = 1\:$ has $4$ roots $\rm\:x = \pm1, \pm 3\:$ in $\rm\:\mathbb Z/8 = $ integers mod $8,\:$ i.e. $\rm\:odd^2 \equiv 1\pmod 8$. -Rings satisfying the latter property (no zero-divisors) are called (integral) domains. They are characterized by a generalization of the above, viz.
a ring $\rm\: D\:$ is a domain $\iff$ every nonzero polynomial $\rm\ f(x)\in D[x]\ $ has at most $\rm\ deg\ f\ $ roots in $\rm\:D.\:$ For the simple proof see my post here, where I illustrate it constructively in $\rm\: \mathbb Z/m\: $ by showing that, given any $\rm\:f(x)\:$ with more roots than its degree,$\:$ we can quickly compute a nontrivial factor of $\rm\:m\:$ via a $\rm\:gcd$. The quadratic case of this result is at the heart of many integer factorization algorithms, which try to factor $\rm\:m\:$ by searching for a nontrivial square root in $\rm\: \mathbb Z/m,\:$ e.g. a square root of $1$ that is not $\:\pm 1$.<|endoftext|> -TITLE: About the definition of Cech Cohomology -QUESTION [17 upvotes]: Let $X$ be a topological space with an open cover $\{U_i\}$ and let $\mathcal F$ be a sheaf of abelian groups on $X$. An $n$-cochain is a section $f_{i_0,\ldots,i_n}\in \mathcal F(U_{i_0,\ldots,i_n})$, where $U_{i_0,\ldots,i_n}:= U_{i_0}\cap\ldots\cap U_{i_n}$; we can construct the following abelian group (written in additive form): -$$ -\check C^n(\mathcal U,\mathcal F):=\!\!\prod_{(i_0,\ldots,i_n)}\!\!\mathcal F(U_{i_0,\ldots,i_n}) -$$ -Now my question is the following: -do we consider ordered sequences $(i_0,\ldots,i_n)$? Because in this case in the direct product we have each group repeated $(n+1)!$ times, that is the number of permutations of the set $\{i_0,\ldots,i_n\}$. - -REPLY [5 votes]: You can choose: both versions exist! -a) You can take the product $\check C^n(\mathcal U,\mathcal F)=\!\!\prod_{(i_0,\ldots,i_n)}\!\!\mathcal F(U_{i_0,\ldots,i_n})$ over all $(n+1)$-tuples, so that indeed there will be much redundancy in your groups i.e. they will be repeated. -b) Or you can put some total order on $I$ and consider the complex -$$ -\check C'^n(\mathcal U,\mathcal F):=\!\!\prod_{i_0\lt\ldots\lt i_n}\!\!\mathcal F(U_{i_0,\ldots,i_n}) -$$ -which is clearly more economical. -b') There is a variant where you use the subcomplex $\check C_{alt}^\bullet(\mathcal U,\mathcal F)\subset \check C^\bullet(\mathcal U,\mathcal F)$ of the complex in a) consisting of alternating families: for example if $n=1$ you require that $s_{ij}=-s_{ji}\in \mathcal F(U_i\cap U_j) \;\text {for } \;i\neq j$ and $s_{ii}=0.$ -These complexes give the same cohomology groups: it is pretty clear for b) and b') and it requires a calculation to show that the inclusion of complexes -$$\check C_{alt}^\bullet(\mathcal U,\mathcal F)\hookrightarrow \check C^\bullet(\mathcal U,\mathcal F)$$ yields isomorphisms at the level of cohomology groups $$ \check H_{alt}^n(\mathcal U,\mathcal F)\xrightarrow{\;\sim\;} \check H^n(\mathcal U,\mathcal F) $$ -In that generality I must admit that all this is a bit boring. -For familiarizing oneself with effective calculations in low degrees, I recommend §12 of Forster's Lectures on Riemann Surfaces in which he very explicitly computes, for example, the first Čech cohomology group $\check H^1$ of some sheaves on Riemann surfaces. -Edit -Let me say, as an answer to Galoisfan's question, that version b) is much more powerful. -Here is an example: -Let $X$ be a Riemann surface and $\mathcal F$ a coherent sheaf. -If $\mathcal U=\lbrace U_0,U_1\rbrace$ is a covering of $X$ consisting of two open sets, then obviously $\check C'^n(\mathcal U,\mathcal F)=0$ for $n\geq 2$ since you cannot extract a sequence of three strictly increasing numbers from $\lbrace 0,1\rbrace$ !
-But if $U_0,U_1\subsetneq X$ are strict open subsets, they are Stein, and Leray's theorem says that $ \check H^n(\mathcal U,\mathcal F) = \check H^n(X,\mathcal F) $, the genuine cohomology of $\mathcal F$ (i.e. the inductive limit over the coverings of $X$). -So version b) lets you prove that all genuine cohomology groups $\check H^n(X,\mathcal F) \; (n\geq 2)$ are zero: quite a remarkable theorem! -In the same vein, for every algebraic variety $X$ that can be covered by $n+1$ open affine subsets ($\mathbb P^n$ for example) and every coherent algebraic sheaf $\mathcal F$ on $X$, the totally ordered version of Čech cohomology shows that $\check H^k(X,\mathcal F) =0$ for $k\gt n$.<|endoftext|> -TITLE: Lower bound on Tail Probabilities -QUESTION [8 upvotes]: Inequalities such as Markov's and Chebyshev’s provide upper bounds on tail probabilities. Are there similar inequalities that give lower bounds in the form $P(X \geq \alpha)>\theta$? - -REPLY [6 votes]: Markov's inequality is also called the first moment method. What you want is the second moment method, which uses bounds on the first two moments to derive the desired inequality: -http://en.wikipedia.org/wiki/Second_moment_method<|endoftext|> -TITLE: Normal subgroups of $S_4$ -QUESTION [32 upvotes]: Can anyone tell me how to find all normal subgroups of the symmetric group $S_4$? -In particular are $H=\{e,(1 2)(3 4)\}$ and $K=\{e,(1 2)(3 4), (1 3)(2 4),(1 4)(2 3)\}$ normal subgroups? - -REPLY [3 votes]: As suggested by Babak Sorouh, the answer can be found easily using GAP with the SONATA library. Here's the code: -G:=SymmetricGroup(4); -S:=Filtered(Subgroups(G),H->IsNormal(G,H)); -for H in S do - Print(StructureDescription(H),"\n"); -od; - -So as to not spoil Arturo Magidin's answer, here's the output if I replace G:=SymmetricGroup(4); with G:=DihedralGroup(32); (the dihedral group of order $32$) -1 -C2 -C4 -C8 -D16 -D16 -C16 -D32<|endoftext|> -TITLE: How to determine the $\delta$ in the open mapping theorem? -QUESTION [5 upvotes]: Let $X$ and $Y$ be Banach spaces and $T\in\mathcal{L}(X,Y)$ be a bounded linear operator from $X$ to $Y$. -If $T$ is surjective, then the open mapping theorem says that there is a positive $\delta$ such that $TB_1\supset\delta B_2$, where $B_1$ and $B_2$ are open unit balls in $X$ and $Y$ respectively. -My question is: how is the $\delta$ related to the norm of $T$? This would give a (sharp) bound for the norm of the inverse of $T$ if $T$ is also injective. -Thanks! -And another related question: If $\mu$ is at a positive distance to $\sigma(T)$, the spectrum of $T$, how is $\|(\mu-T)^{-1}\|$ related to the distance from $\mu$ to $\sigma(T)$? Obviously we have a lower bound, but what I need is an upper bound. -Thanks! - -REPLY [6 votes]: Short answer: $0< \delta\le \|T\|$ is as much as we can say, if $\|T\|$ is all we know. -Slightly longer answer. To an operator $T$ we can associate a lot of numbers, such as: the norm $\|T\|=\sup_{\|x\|=1}\|Tx\|$, the lower bound $m_T=\inf_{\|x\|=1}\|Tx\|$, and the covering number $\delta_T=\sup\{r>0\colon TB_1\supset r B_2\}$. The first measures boundedness, the second injectivity, the third surjectivity. For the adjoint operator the norm stays the same: $\|T^*\|=\|T\|$ but the other two trade places: $m_{T^*}=\delta_T$ and $\delta_{T^*}=m_T$.
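-(A sketch of the easy half of the first identity, just to make the duality plausible: for every $y^*\in Y^*$ and every $r<\delta_T$ we have $rB_2\subset TB_1$, so $$\|T^*y^*\|=\sup_{x\in B_1}|\langle T^*y^*,x\rangle|=\sup_{x\in B_1}|\langle y^*,Tx\rangle|\geq\sup_{y\in rB_2}|\langle y^*,y\rangle|=r\|y^*\|,$$ hence $m_{T^*}\geq\delta_T$; the reverse inequality needs a Hahn-Banach separation argument.)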
So, $\delta$ is directly related to the lower bound for $T^*$.<|endoftext|> -TITLE: Evaluation of $\lim\limits_{x\rightarrow0} \frac{\tan(x)-x}{x^3}$ -QUESTION [8 upvotes]: One of the previous posts made me think of the following question: Is it possible to evaluate this limit without L'Hopital and Taylor? -$$\lim_{x\rightarrow0} \frac{\tan(x)-x}{x^3}$$ - -REPLY [4 votes]: Here is a different approach. Let $$L = \lim_{x \to 0} \dfrac{\tan(x) - x}{x^3}$$ -Replacing $x$ by $2y$, we get that -\begin{align} -L & = \lim_{y \to 0} \dfrac{\tan(2y) - 2y}{(2y)^3} = \lim_{y \to 0} \dfrac{\dfrac{2 \tan(y)}{1 - \tan^2(y)} - 2y}{(2y)^3}\\ -& = \lim_{y \to 0} \dfrac{\dfrac{2 \tan(y)}{1 - \tan^2(y)} - 2 \tan(y) + 2 \tan(y) - 2y}{(2y)^3}\\ -& = \lim_{y \to 0} \dfrac{\dfrac{2 \tan^3(y)}{1 - \tan^2(y)} + 2 \tan(y) - 2y}{(2y)^3}\\ -& = \lim_{y \to 0} \left(\dfrac{2 \tan^3(y)}{8y^3(1 - \tan^2(y))} + \dfrac{2 \tan(y) - 2y}{8y^3} \right)\\ -& = \lim_{y \to 0} \left(\dfrac{2 \tan^3(y)}{8y^3(1 - \tan^2(y))} \right) + \lim_{y \to 0} \left(\dfrac{2 \tan(y) - 2y}{8y^3} \right)\\ -& = \dfrac14 \lim_{y \to 0} \left(\dfrac{\tan^3(y)}{y^3} \dfrac1{1 - \tan^2(y)} \right) + \dfrac14 \lim_{y \to 0} \left(\dfrac{\tan(y) - y}{y^3} \right)\\ -& = \dfrac14 + \dfrac{L}4 -\end{align} -Hence, $$\dfrac{3L}{4} = \dfrac14 \implies L = \dfrac13$$ -EDIT -In Hans Lundmark answer, evaluating the desired limit boils down to evaluating $$S=\lim_{x \to 0} \dfrac{\sin(x)-x}{x^3}$$ The same idea as above can be used to evaluate $S$ as well. -Replacing $x$ by $2y$, we get that \begin{align} -S & = \lim_{y \to 0} \dfrac{\sin(2y) - 2y}{(2y)^3} = \lim_{y \to 0} \dfrac{2 \sin(y) \cos(y) - 2y}{8y^3}\\ -& = \lim_{y \to 0} \dfrac{2 \sin(y) \cos(y) - 2 \sin(y) + 2 \sin(y) - 2y}{8y^3}\\ -& = \lim_{y \to 0} \dfrac{2 \sin(y) - 2y}{8y^3} + \lim_{y \to 0} \dfrac{2 \sin(y) \cos(y)-2 \sin(y)}{8y^3}\\ -& = \dfrac14 \lim_{y \to 0} \dfrac{\sin(y) - y}{y^3} - \dfrac14 \lim_{y \to 0} \dfrac{\sin(y) (1 - \cos(y))}{y^3}\\ -& = \dfrac{S}4 - \dfrac14 \lim_{y \to 0} \dfrac{\sin(y) 2 \sin^2(y/2)}{y^3}\\ -& = \dfrac{S}4 - \dfrac18 \lim_{y \to 0} \dfrac{\sin(y)}{y} \dfrac{\sin^2(y/2)}{(y/2)^2}\\ -& = \dfrac{S}4 - \dfrac18 \lim_{y \to 0} \dfrac{\sin(y)}{y} \lim_{y \to 0} \dfrac{\sin^2(y/2)}{(y/2)^2}\\ -& = \dfrac{S}4 - \dfrac18\\ -\dfrac{3S}4 & = - \dfrac18\\ -S & = - \dfrac16 -\end{align}<|endoftext|> -TITLE: Normal, Non-Metrizable Spaces -QUESTION [11 upvotes]: We know that every metric space is normal. We know also that a normal, second countable space is metrizable. -What is an example of a normal space that is not metrizable? -Thanks for your help. - -REPLY [10 votes]: Rather than list specific spaces, I thought that I’d mention a few classes of spaces. -A compact metric space has cardinality at most $2^\omega$, so every compact Hausdorff space of cardinality greater than $2^\omega$ is normal and non-metrizable. In particular, this includes every product of compact Hausdorff spaces with at least $2^\omega$ non-trivial factors. -Let $I$ be an index set, for each $i\in I$ let $X_i$ be a space with at least two points, and for each $i\in I$ let $p_i\in X_i$. Let $$X=\left\{x\in\prod_{i\in I}X_i:|\{i\in I:x_i\ne p_i\}|\le\omega\right\}$$ as a subspace of the Tikhonov product of the $X_i$; such spaces are called $\Sigma$-products. If $I$ is countable, the $\Sigma$-product is just the ordinary Tikhonov product, but if $I$ is uncountable it’s something new. - -Proposition: If $I$ is uncountable and each $X_i$ is $T_1$, $X$ is not paracompact (and therefore not metrizable). 
-Proof: Let $I_0=\{i_\xi:\xi<\omega_1\}$ be a subset of $I$ of cardinality $\omega_1$, and for each $\xi<\omega_1$ fix $q_{i_\xi}\in X_{i_\xi}\setminus\{p_{i_\xi}\}$. For $\eta<\omega_1$ define $x^\eta\in X$ by $$x^\eta_i=\begin{cases}q_{i_\xi},&\text{if }i=i_\xi\text{ and }\xi<\eta\\p_i,&\text{otherwise}\;.\end{cases}$$ It’s not hard to check that $\{x^\eta:\eta<\omega_1\}$ is a closed subspace of $X$ homeomorphic to $\omega_1$ (with the order topology), which is not paracompact. $\dashv$ - -However, it’s a theorem of Mary Ellen Rudin and, independently, S.P. Gul’ko that $\Sigma$-products of metric spaces are always normal. (The proof is highly non-trivial and can be found in Teodor C. Przymusiński, Products of Normal Spaces, in the Handbook of Set-Theoretic Topology, K. Kunen & J.E. Vaughan, eds., where the result is Theorem 7.4.) Thus, every uncountable $\Sigma$-product of non-trivial metric spaces is an example of a normal, non-metrizable space. -It’s well known that every linearly ordered space [LOTS] is hereditarily normal. This means that every generalized ordered [GO] space is hereditarily normal, since the GO-spaces are precisely the subspaces of linearly ordered spaces. (The actual definition of a GO-space is that it’s a space $X$ equipped with a linear order $\le$ whose topology has a base consisting of $\le$-intervals, not necessarily open, but this is equivalent to being a subspace of a LOTS. An example is the Sorgenfrey line.) -Thus, any non-metrizable GO-space is an example. The most straightforward way for a GO-space to fail to be metrizable is to have a point with uncountable character, i.e., a point that has no countable local base; this automatically includes all ordinal spaces $\alpha$ for $\alpha>\omega_1$ and many of their subspaces. This is far from the only way, of course. The Sorgenfrey line, for example, is first countable but fails to be metrizable for a variety of reasons: it’s separable and Lindelöf but not second countable, and its square is neither normal nor Lindelöf. It’s rather easy to come up with all sorts of variations on this theme.<|endoftext|> -TITLE: The harmonic sum of coprime integers is not an integer. -QUESTION [8 upvotes]: As stated, I tried to prove the following: -The theorem seems to be very incompletely phrased, since one can obtain non-integer sums of the form -$$\frac{1}{4} + \frac{1}{4} + \frac{1}{4} + \frac{1}{7} + \frac{1}{9}$$ -or -$$\frac{1}{2} + \frac{1}{4} + \frac{1}{4} + \frac{1}{{18}} + \frac{1}{{20}}$$ -so further detail is needed. Maybe this should be closed as no longer relevant until I come up with a better phrasing and I consider more initial conditions. The big question would be - -Given the set $S$ of $n$ integers -$$S=\{x_1,x_2,\dots,x_n\}$$ what are sufficient conditions on $x_1,x_2,\dots,x_n$ so that $$\eta = \sum_{k \leq n }x_k^{-1}$$ is not an integer? - -Though I don't know if this is an important/relevant question to be asking. - -THEOREM If $x_1,\dots,x_n $ are pairwise coprime, $x_i\neq 1$, let -$$\mu =\sum_{k=1}^n \frac 1 x_k $$ -Then $\mu$ can't be an integer. - -PROOF By induction on $n$. Assume the theorem is true for $2, \dots, n-1$. I'll analyze the case $k=n$. -$(1)$ It is true for $n=2$, since $$(x_1,x_2)=1 \Rightarrow (x_1 x_2,x_1+x_2)=1$$ -The proof is simple. We have that $(x_1,x_2)=1$. Let $d \mid x_1+x_2 , d \mid x_1x_2$.
Then -$$d\mid x_1(x_1+x_2)-x_1x_2 \Rightarrow d\mid x_1^2$$ -$$d\mid x_2(x_1+x_2)-x_1x_2 \Rightarrow d\mid x_2^2$$ -So $$d \mid (x_1^2,x_2^2)=(x_1,x_2)=1 \Rightarrow d=1$$ -This means $$\frac{1}{x_1}+\frac{1}{x_2}=\frac{x_1+x_2}{x_1 x_2}=\phi$$ -is not an integer. -$(2)$ Let -$$\mu = \frac{1}{x_1}+ \frac{1}{x_2}+\cdots+ \frac{1}{x_{n-1}}+ \frac{1}{x_n}$$ -Then -$$x_n \mu-1 = x_n\left(\frac{1}{x_1}+ \frac{1}{x_2}+\cdots+ \frac{1}{x_{n-1}}\right) =x_n \omega$$ -By the induction hypothesis, $(x_1,\dots,x_{n-1})=1$ pairwise, so $\omega$ is not an integer. Thus, if $x_n \mu-1$ were an integer, it must be the case: -$$ x_n\left(\frac{1}{x_1}+ \frac{1}{x_2}+\cdots+ \frac{1}{x_{n-1}}\right) =k \text{ ; } k \text{ an integer }$$ -$$ x_n \frac{\tau}{x_1 x_2 \cdots x_{n-1}} =k \text{ ; } k \text{ an integer }$$ -$\tau$ is the numerator obtained upon taking a common denominator. -But since $\tau$ is coprime to $x_1 x_2 \cdots x_{n-1}$ (reducing mod each $x_i$ shows $x_i\nmid\tau$), it must be the case that -$$x_1 x_2 \cdots x_{n-1} \mid x_n$$ -which is impossible, as $x_n$ is coprime to each $x_i$ and $x_i\neq 1$. Then $x_n \mu -1$ is not an integer. But since $x_n$ and $1$ are, this means $\mu$ isn't an integer, that is, -$$\mu =\sum_{k=1}^n \frac 1 x_k $$ -is not an integer. $\blacktriangle$ -NOTE The hypothesis that $x_k \neq 1$ is necessary to avoid sums like -$$\frac{1}{1}+\overbrace{\frac{1}{n}+\cdots +\frac{1}{n}}^{n }=1+n\frac{1}{n}=2$$ -however, if $(x_1,\dots,x_n)=1$, the sum -$$\nu =\sum_{k=1}^n \frac 1 x_k +1 $$ -will clearly not be an integer. - -REPLY [2 votes]: $\rm b_i\:$ pairwise coprime, $\rm\displaystyle \: \sum_i \dfrac{a_i}{b_i} = n\in\mathbb Z\:\Rightarrow\: mod\ b_j\!:\ a_j = b_j\left[n- \sum_{i\,\ne\, j} \dfrac{a_i}{b_i}\right]\!\equiv 0\:\Rightarrow\,\dfrac{a_j}{b_j}\in \mathbb Z,\, \forall\, j$<|endoftext|> -TITLE: Is there a sequence that contains every rational number once, but with the "simplest" fractions first? -QUESTION [8 upvotes]: The Calkin-Wilf sequence contains every positive rational number exactly once: -1/1, 1/2, 2/1, 1/3, 3/2, 2/3, 3/1, 1/4, 4/3, 3/5, 5/2, 2/5, 5/3, 3/4, …. -I'd consider 5/1 to be a "simpler" ratio than 8/5, but it appears later in the series. - -Is there a mathematical term for the "simpleness" of a ratio? It might be something like the numerator times the denominator, or maybe there are other ways to measure. -Is there a sequence that contains all the positive rational numbers, but with the "simpleness" of the ratios monotonically increasing? - -(Small integer ratios are found in Just intonation, polyrhythm, orbital resonance, etc.) -If you use the Calkin-Wilf sequence with the num*den measure, for instance, it looks like this: - -REPLY [2 votes]: You could measure simplicity by the sum of the numerator and denominator (the "length", as it would be known in some parts of Number Theory), breaking ties by, say, size of numerator. -1/1, 1/2, 2/1, 1/3, 3/1, 1/4, 2/3, 3/2, 4/1, 1/5, 5/1, 1/6, 2/5, 3/4, 4/3, 5/2, 6/1,.... -This is essentially the ordering you get out of the usual proof that the (positive) rationals are countable, except that in that proof you include 2/4 and 2/6 and 3/6 and so on. The price of leaving out those duplications is that you can't expect a simple formula for the $n$th rational.<|endoftext|> -TITLE: Proving or disproving $f(n)-f(n-1)\le n, \forall n \gt 1$, for a recursive function with floors.
-QUESTION [7 upvotes]: The Olympiad-style question I was given was as follows: - -A function $f:\mathbb{N}\to\mathbb{N}$ is defined by $f(1)=1$ and for $n>1$, by: $$f(n)=f\left(\left\lfloor\frac{2n-1}{3}\right\rfloor\right)+f\left(\left\lfloor\frac{2n}{3}\right\rfloor\right)$$ Is it true that $f(n)-f(n-1)\le n, \forall n \gt 1$? - -Expanding $f(n)$ and $f(n-1)$, we get: -$$f(n)-f(n-1)=f\left(\left\lfloor\frac{2n-1}{3}\right\rfloor\right)+f\left(\left\lfloor\frac{2n}{3}\right\rfloor\right)-f\left(\left\lfloor\frac{2n-3}{3}\right\rfloor\right)-f\left(\left\lfloor\frac{2n-2}{3}\right\rfloor\right)$$ -Looking at the behaviour of the function given arguments modulo 3, we can see the following, $\forall n \gt 1$, where $D(n)=f(n)-f(n-1)$: -$$D(n)=\begin{cases} f\left(\left\lfloor\frac{2n-1}{3}\right\rfloor\right)+f\left(\left\lfloor\frac{2n}{3}\right\rfloor\right)-f\left(\frac{2n}{3}-1\right)-f\left(\frac{2n-2}{3}\right) & \text{if}\space n\equiv1\pmod{3} \\ f\left(\left\lfloor\frac{2n-1}{3}\right\rfloor\right)+f\left(\left\lfloor\frac{2n}{3}\right\rfloor\right)-f\left(\left\lfloor\frac{2n}{3}\right\rfloor-1\right)-f\left(\left\lfloor\frac{2n-2}{3}\right\rfloor\right) & \text{if}\space n\equiv2\pmod{3} \\ f\left(\frac{2n}{3}-1\right)+f\left(\frac{2n}{3}\right)-f\left(\left\lfloor\frac{2n}{3}\right\rfloor-1\right)-f\left(\left\lfloor\frac{2n-2}{3}\right\rfloor\right) & \text{if}\space n \equiv 0\pmod{3}\end{cases}$$ -But I'm not sure how to construct a suitable counter example or proof of the proposition. A quick construction of the first 10 test cases, shows that $D(n)=1: n\equiv1\pmod{3}$, and $D(n)=1:n\equiv0\pmod{3}$. -However $D(2)=1$, $D(5)=2$ and $D(8)=4$, suggesting a power of 2 escalation, which would mean for sufficiently high $n:n\equiv2\pmod{3}$, $D(n)\gt n$, but I'm unsure how to show this. -Thanks in advance! -EDIT: Continuing my construction of the sequence has shown there is no power of 2 escalation, for successive terms $n\equiv2\pmod{3}$, however, as pointed out in the comments, all of the differences are powers of 2. -EDIT 2: If it helps anyone, by plotting a graph on Mathematica, I was able to determine that the assertion is false, with a counter example at $n=242$, with $f(242)=3072$, $f(241)=2816$, and therefore $f(242)-f(241)=256$, and as $256 \gt 242$, the assertion is false. But how could I disprove the assertion mathematically, without a computer? - -REPLY [6 votes]: I'm going to expand on the idea given by Leslie Townes in the comment above. Build a tree rooted in 2 on the integers $\ge 2$ with two types of edges: -$$2k\longrightarrow 3k\\ -2k\longrightarrow 3k+1\\ -2k+1\implies 3k+2$$ -Then $\log_2 (f(n)-f(n-1))$ (which happens to be an integer) is the number of $\implies$ in the path from 2 to $n$. -We want a path with many $\implies$ and few $\longrightarrow$ (specifically, more than $\log_2 n$ $\implies$). Because exactly one of $3k$ and $3k+1$ is odd, there is a unique infinite path $P$ starting from 2 avoiding consecutive $\longrightarrow$. The density of $\implies$ in $P$ is at least $1/2$, and if it is higher than $\log_2 3/2$ we have succeeded. -I'm not sure if anything can be proven rigorously about the density of $P$ (the problem reminds me a bit of the Collatz conjecture so it could be difficult): if you do please comment. 
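-One can at least explore the path numerically. A minimal Python sketch (an illustration only; function and variable names are my own) that walks along $P$, always stepping to the odd child from an even node, and counts the fraction of $\implies$ steps: -def arrow_density(steps=5000): -    n, double_arrows = 2, 0 -    for _ in range(steps): -        if n % 2 == 0:                  # even node 2k: "-->" to whichever of 3k, 3k+1 is odd -            k = n // 2 -            n = 3 * k if 3 * k % 2 == 1 else 3 * k + 1 -        else:                           # odd node 2k+1: forced "==>" to 3k+2 -            double_arrows += 1 -            n = 3 * ((n - 1) // 2) + 2 -    return double_arrows / steps - -print(arrow_density())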
-But experimentally the density seems to be $2/3>\log_2 3/2$, so we are sure to find a counterexample that is a finite subpath of $P$ and indeed: -$$2\longrightarrow 3\implies 5\implies 8\longrightarrow 13\implies 20\longrightarrow 31\implies 47\implies 71\implies 107\implies 161\implies 242$$ -There are 8 $\implies$ before 242, and $242<2^8$.<|endoftext|> -TITLE: How do mathematicians think about high dimensional geometry? -QUESTION [8 upvotes]: Many ideas and algorithms come from imagining points on 2d and 3d spaces. Be it in function analysis, machine learning, pattern matching and many more. -How do mathematicians think about higher dimensions? Can intuitions about the meaning of dot-product, angles and lengths transfer from 2d geometry to a 100d? -If so, would it be enough to fully understand the higher dimesions, namely, could the same problem in 100d have properties\behaviours that are not seen in 2d\3d? - -REPLY [7 votes]: It vastly depends on the objects you define. Indeed, when talking about vector spaces, we algebraically think about $\mathbb{R}^n$, build up intuition, and then set $n=100$. -However, when you start adding exotic objects, like knots, it becomes less "easy". For example, some knots in $\mathbb{R}^3$ are trivial loops in $\mathbb{R}^4$ (ie. the trefoil knot falls apart in 4D). -Then again, functions and their orthogonality are computed in $\mathbb{R}^{89270}$ just as they are in $\mathbb{R}$ - nothing strange going on there. It's only when you consider infinite-dimensional spaces that this becomes slightly unintuitive again. -So, in short, it completely depends on the objects you talk about. Most finite-dimensional vector spaces over some field $K$ are equal in almost all aspects. Adding more structure can make it much more difficult, and oftentimes all mathematicians have is algebra.<|endoftext|> -TITLE: Localization arguments in Dedekind domains -QUESTION [5 upvotes]: I am reading Serre's Local Fields, and have questions about the text. Specifically, pages 11 and 12. -1) Consider a Dedekind domain. We want to show that all fractional ideals are invertible. Serre claims that because the image of a fractional ideal under every localization by prime ideals is invertible, that the fractional ideal itself must be invertible. I do not see why this is. -2) In the same way, he argues "by localization" that because the image of an ideal $\mathfrak{a}$ under the localization $A_\mathfrak{p}$ has the form $\mathfrak{p}^{{v_p(\mathfrak{a}})}$, where $v_{\mathfrak{p}}$ is the valuation, and because the exponents are zero except for a finite number of ideals, that $\mathfrak{a}$ factors into a finite number of prime ideals. Again, I do not see how to carry out the details of this argument. -Thanks for the help. - -REPLY [4 votes]: For (1), let $I$ be a non-zero ideal of a Dedekind domain $R$ with fraction field $K$. Let $I^{-1}=\{r\in K:rI\subseteq R\}$. Then $I^{-1}$ is a fractional ideal and $I$ is invertible if and only if $II^{-1}=R$ (in general one has from the definitions that $II^{-1}\subseteq R$). You can verify that localization at a prime $\mathfrak{p}\in\mathrm{mSpec}(R)$ commutes with products of fractional ideals and with formation of $I^{-1}$, that is, upon identifying $R_\mathfrak{p}$ with a subring of $K$, $(I^{-1})_\mathfrak{p}=(I_\mathfrak{p})^{-1}$. Now, local invertibility means $(II^{-1})_\mathfrak{p}=I_\mathfrak{p}(I_\mathfrak{p})^{-1}=R_\mathfrak{p}$ for all $\mathfrak{p}\in\mathrm{mSpec}(R)$. 
If $N$ and $M$ are $R$-modules with $N\subseteq M$ and $N_\mathfrak{p}=M_\mathfrak{p}$ for all $\mathfrak{p}$ maximal, then $N=M$. So, using this, you get $II^{-1}=R$. So $I$ is invertible. Since principal fractional ideals are obviously invertible and every fractional ideal differs (multiplicatively) from an integral ideal by a principal ideal, it follows that all fractional ideals are invertible. -For (2), take an ideal $I$. You have $I_\mathfrak{p}=(\mathfrak{p}R_\mathfrak{p})^{v_\mathfrak{p}(I)}$, where $v_\mathfrak{p}(I)$ is defined by this equation (using that $R_\mathfrak{p}$ is a discrete valuation ring for all maximal $\mathfrak{p}$). Put $J=\prod_\mathfrak{p}\mathfrak{p}^{v_\mathfrak{p}(I)}$, the product over all maximal ideals of $R$ (granting the fact that $v_\mathfrak{p}(I)$ is non-zero for only finitely many maximal ideals). Compare the localizations of these two ideals. You'll see that $I_\mathfrak{p}=J_\mathfrak{p}$ for all $\mathfrak{p}\in\mathrm{mSpec}(R)$. This implies $I=J$. -EDIT: In answer to the questions posed in the comments, let $r\in I$ and consider the ideal $I^\prime$ of $s\in R$ with $sr\in J$. Because $r/1\in I_\mathfrak{p}=J_\mathfrak{p}$, we have $r/1=t/s$ for $t\in J$ and $s\notin\mathfrak{p}$. So $sr=t\in J$, and thus $s\in I^\prime$. It follows that $I^\prime$ is not contained in any maximal ideal of $R$, so it must be all of $R$. Thus $1\in I^\prime$, so $r=1r\in J$, and thus $I\subseteq J$. By symmetry, $J\subseteq I$ as well. The same argument works for general $R$-modules. This is a standard fact about localizations. For the other question, in the argument I gave, I was assuming $I$ to be an integral ideal throughout, so I had to say something about general fractional ideals at the end. However, the argument applies without change for an arbitrary fractional ideal $I$.<|endoftext|> -TITLE: Hausdorff distance via support function -QUESTION [5 upvotes]: Let $A$ and $B$ be convex compact sets in $\mathbb{R}^n$. Define -$$ - h_{+}(A,B) = \inf \left\{ \varepsilon > 0 \mid A \subseteq B+\mathbb{B}_{\varepsilon} \right\} -$$ -where $\mathbb{B}_{\varepsilon}$ is an $\varepsilon$-ball centered at origin. Hausdorff distance between $A$ and $B$ is -$$ - h(A,B) = \max \left\{ h_{+}(A,B), h_{+}(B,A) \right\} -$$ -Support function of a compact convex set $K$ is defined as -$$ - c(y\mid K) = \max\limits_{x \in K} \langle y, x\rangle -$$ -How to show that -$$ - h(A,B) = \max\limits_{ | y | \leq 1} | c(y \mid A) - c (y \mid B) | -$$ -I tried to use the Legendre transform but without success. - -REPLY [4 votes]: To prove the above statement we need an additional statement. -Lemma. $h_{+}(A,B) = \sup\limits_{a \in A}\; \mathop{\mathrm{dist}}{(a,B)}$. -Proof. -$$ - h_{+}(A,B) \leq \varepsilon \Leftrightarrow A \subseteq B+\mathbb{B}(\varepsilon,0) \Leftrightarrow \forall a \in A \; (a \in B+\mathbb{B}(\varepsilon,0)) \\ - \Leftrightarrow \forall a \in A \; \exists b\in B \colon|b-a|\leq\varepsilon \Leftrightarrow \forall a \in A \; \mathop{\mathrm{dist}}{(a,B)} \leq \varepsilon \\ -\Leftrightarrow \sup\limits_{a \in A} \; \mathop{\mathrm{dist}}(a,B) \leq \varepsilon. -$$ -Since $h_{+}(A,B) \leq \varepsilon$ iff $\sup_{a \in A} \; \mathop{\mathrm{dist}}(a,B) \leq \varepsilon$ they are equal. $\blacksquare$ -Hence we have an equality -$$ - h(A,B) = \max \left\{ \sup\limits_{a \in A} \; \mathop{\mathrm{dist}}(a,B), \; \sup\limits_{b \in B} \; \mathop{\mathrm{dist}}(b,A) \right\}. 
-$$ -Recall that the convex conjugate of $x \mapsto \mathop{\mathrm{dist}}(x,B)$ is the support function of the compact convex set $B$ (restricted to the unit ball), i.e. -$$ - d(a,B) = \sup\limits_{\|l\| \leq 1} \left( \langle l, a \rangle - c(l \mid B) \right). \tag{1} -$$ -Now we are ready to prove our main formula. We have -$$ - \sup\limits_{a \in A} \;\mathop{\mathrm{dist}}(a,B) = \sup\limits_{a \in A} \sup\limits_{\|l\| \leq 1} ( \langle l, a \rangle - c(l \mid B) ) = \sup\limits_{\|l\| \leq 1} ( c(l \mid A) - c (l \mid B) ). -$$ -We have changed the order of suprema in the latter equality. Now since -$$ - \sup\limits_{\|l\| \leq 1} | c(l \mid A) -c (l \mid B) | = \max \{ \sup\limits_{\|l\| \leq 1} ( c(l \mid A) - c (l \mid B) ), \sup\limits_{\|l\| \leq 1} ( c(l \mid B) - c (l \mid A) ) \} -$$ -we obtain the needed formula: -$$ - h(A,B) = \sup\limits_{\|l\| \leq 1} | c(l \mid A) -c (l \mid B) |. -$$ -Added. As concerns the proof of $(1)$: put $f(x) = \mathop{\mathrm{dist}} (x,B)$. Then -$$ - f^*(l) = \sup_x \bigl( \langle l, x\rangle - f(x) \bigr) \\ - = \sup_{b \in B} \sup_x \bigl( \langle l, x\rangle - \|x-b\| \bigr) \\ - = \sup_{b \in B} \sup_{\alpha > 0} \sup_{\|x-b\|=\alpha} \bigl( \bigl( \langle l,x-b\rangle - \alpha \bigr) + \langle l, b \rangle \bigr) \\ - = \sup_{b \in B} \sup_{\alpha > 0} \bigl( \alpha(\|l\|-1) + \langle l, b\rangle \bigr) \\ - = \sup_{b \in B} \bigl( \langle l, b\rangle + \delta_1(\|l\|) \bigr) \\ - = c(l|B) + \delta_1(\|l\|), -$$ -where $\delta_1(t) = 0$ if $t\leq 1$ and $\delta_1(t) = +\infty$ otherwise. Hence, -$$ - \mathop{\mathrm{dist}}(x,B) = \sup_l \bigl( \langle l,x\rangle - c(l|B) - \delta_1(\|l\|) \bigr) \\ - = \sup_{\|l\|\leq 1} \bigl( \langle l, x \rangle - c(l|B) \bigr). -$$<|endoftext|> -TITLE: Finding an invariant under Group operations -QUESTION [5 upvotes]: Context: I am trying to answer this question about solving peg solitaire, and I already posted as an answer some code devised for treating the board - -as a graph. -The algorithm in Mathematica for solving the problem I implemented there (please don't bother reading the code) is a first-try brute-force approach which I want to refine. -One way for doing this is aborting the calculation of the branches already explored, and those symmetrically equivalent. -AFAIK, the symmetry of the problem is represented in the Dihedral D4 group. -So my problem: I have a vector with the occupancy state of the board -$S =\{o_1, ...,o_{33}\}$ $(o_i \in \{ True, False \})$ -and I want to find a function that, when applied to an occupancy state vector, returns the same real number for all eight symmetric states (and of course is injective otherwise, returning a different value for any other input). -Any suggestions? -Edit -For example, the following program in Mathematica calculates a bijective bilinear invariant under D4 for the easy board: - x1 - - x4 x2 - - x3 - -> -bl = Times @@@ Union[Sort /@ Tuples[{x1, x2, x3, x4}, 2]]; -coef = Array[a, Length@bl]; - -(* This is the first nuance: I have to write down one member - for each symmetry class *) -base = {{1, 0, 0, 0}, {1, 1, 0, 0}, {1, 0, 1, 0}, - {1, 1, 1, 0}, {1, 1, 1, 1}, {0, 0, 0, 0}}; - -f[{x1_, x2_, x3_, x4_}] := Evaluate[coef.bl]; - -(* This is the second problem: I calculate all members of each - class (in this case by rotations) *) -g[x_] := Table[RotateRight[x, i], {i, 4}]; - -fi = FindInstance[ - Unequal @@ (f /@ base) && - And @@ Equal @@@ (f /@ g /@ base) - , coef, Integers]; - --f[{x1, x2, x3, x4}] /.
fi[[1]] - -And the result is -$f(x_1,x_2,x_3,x_4) = x_1^2 + x_1 x_2 + x_2^2 + x_2 x_3 + x_3^2 + x_1 x_4 + x_3 x_4 + x_4^2$ - -(Figure: a table of $f$ values alongside the equivalent boards that share them.) - -I am sure there must be a better way ... - -REPLY [5 votes]: A symmetry group acts on a set of data structures (the occupancy vectors). The question asks for a procedure to create group-invariant hashes of those data structures. Here is a very general way, requiring only a presentation of the group and explicit definitions of the actions of its generators. -In mathematical notation, the data structures form a set $S$, a finite group $G$ acts on it (notation: any $g\in G$ sends $s\in S$ to $s^g$), and a hash function is an injection $h:S\to\mathbb{N}$. To construct a $G$-invariant function, one usually averages (or sums) over the group. In this application it is computationally a little simpler to minimize over the group: that is, define -$$h^G: S\to\mathbb{N},$$ -$$h^G(s) = \min\{h(s^g) | g\in G\}.$$ -This is obviously $G$-invariant. It works well for smallish groups because it requires computing $h$ for up to $|G|$ elements of $S$. -The rest is computational details, pseudocoded in Mathematica. - -Let's begin by creating a way to address cells on the board--a way on which the effect of the group action is easy to compute--and storing an array of addresses of all the board cells. Row and column coordinates are an obvious way: -n = 7; k = 2; (* n by n square without the four k by k corners *) -validAddress[address[i_, j_]] := - (k < i <= n - k && 1 <= j <= n) || (k < j <= n - k && 1 <= i <= n); -board = Select[Flatten[Outer[address, Range[n], Range[n]]], validAddress]; - -It is convenient to define the group action via generators. This is a Coxeter group, making it easy to find a pair of generators {alpha, beta} and relations: -apply[address[i_, j_], alpha] := address[n+1 - i, j];(* Horizontal reflection *) -apply[address[i_, j_], beta] := address[j, i]; (* Diagonal reflection *) -apply[a_address, g_List] := Fold[apply, a, g]; (* Group composition *) -group = FoldList[Append, {}, Riffle[ConstantArray[alpha, 4], beta]]; - -These addresses have to be associated with the indexes of cells within the board array. The group action will be pulled back to those indexes as permutations of $\{1,2,\ldots\}$, which I store in a variable action indexed by the group itself: -indexes = Table[board[[i]] -> i, {i, 1, Length[board]}]; -action[g_] := - action[g] = (apply[board[[#]], g] /. indexes) & /@ Range[Length[board]]; - -To hash an occupancy vector, it is natural to interpret it as a binary number. To form a hash invariant under the group action, we need a unique representative of the hashes of each orbit. A simple way to choose that representative is to use the one with the numerically smallest hash. -hash[occupancy_List] /; Length[occupancy] == Length[board] := - Min[FromDigits[occupancy[[action[#]]], 2] & /@ group]; - -For example: -hash[{0, 0, 1, 0, 0, 1, 1, 1, 0, 0, 1, 1, 1, 0, 0, 0, 0, 0, 1, 1, 0, - 1, 1, 0, 1, 1, 1, 1, 1, 0, 0, 1, 1}] - -produces $1315991027$, the smallest of $\{1315991027, 1795954175, 4084234697, 4953218462, 4972379519, 6969629924, 8547732521, 8578425260\}.$ (The fact that the eight elements of group produce eight distinct hashes proves they are distinct group elements, something that might not have been evident until now.)
-The efficiency is OK: -Timing[hash /@ RandomInteger[{0, 1}, {10^4, Length[board]}];] - -takes a half second (on one 3.2 GHz Xeon core).<|endoftext|> -TITLE: Easiest way to prove that $\int_{\pi/6}^{\pi/2} \sin(2x)^3\cos(3x)^2 \mathrm{d}x=\left(3/4\right)^4$ -QUESTION [5 upvotes]: I have been trying to evaluate this integral a few times. My best attempt has been to rewrite it as a linear combination of sine and cosine terms. Alas, this takes a couple of handwritten pages to accomplish. Is there any easier/faster/neater way to evaluate? -$$ \int_{\pi/6}^{\pi/2} \sin(2x)^3\cos(3x)^2\,\mathrm{d}x=\left(\frac{3}{4}\right)^4 $$ -Thanks in advance =) - -REPLY [3 votes]: $$\int_{\pi/6}^{\pi/2} (\sin 2x)^3 (\cos 3x)^2\,dx$$ -Since $(\sin 2x)^2=\frac{1-\cos 4x}{2}$ and $(\cos 3x)^2=\frac{1+\cos 6x}{2}$ -$$= \frac{1}{4}\int_{\pi/6}^{\pi/2}(\sin 2x)(1-\cos 4x)(1+\cos 6x)\,dx$$ -Substitute $u=2x$, -$$=\frac{1}{8}\int_{\pi/3}^\pi(\sin u)(1-\cos 2u)(1+\cos 3u)\,du$$ -Use the formula $\sin u=\frac{e^{iu}-e^{-iu}}{2i}$ and $\cos u=\frac{e^{iu}+e^{-iu}}{2}$, -$$= \frac{1}{8}\int_{\pi/3}^\pi \frac{e^{iu}-e^{-iu}}{2i}(1- \frac{e^{2iu}+e^{-2iu}}{2} -)(1+\frac{e^{3iu}+e^{-3iu}}{2})du$$ -Expand, -$$= \frac{1}{64i}\int_{\pi/3}^\pi( - (e^{6iu}-e^{-6iu})+3(e^{4iu}-e^{-4iu})-2(e^{3iu}-e^{-3iu})-3(e^{2iu}-e^{-2iu})+6(e^{iu}-e^{-iu}) )du$$ -Use the formula $\sin u=\frac{e^{iu}-e^{-iu}}{2i}$ to go back to $\sin$, -$$= \frac{1}{32}\int_{\pi/3}^\pi(-(\sin 6u)+3(\sin 4u)-2(\sin 3u)-3(\sin 2u)+6(\sin u))du$$ -Do the integral, in total five parts, -$$= \frac{1}{32}(\cos 6u/6-3\cos 4u/4+2\cos 3u/3+3\cos 2u/2-6\cos u)|_{\pi/3}^\pi$$ -Calculate the value for each of the five parts, -$$= \frac{1}{32}(0-9/8+0+9/4+9)=\frac{3^4}{4^4}$$<|endoftext|> -TITLE: $f\colon M\to N$ continuous iff $f(\overline{X})\subset\overline{f(X)}$ -QUESTION [8 upvotes]: Possible Duplicate: -Continuity and Closure - -$f\colon M\to N$ is continuous iff for all $X\subset M$ we have that $f\left(\overline{X}\right)\subset\overline{f(X)}$. -I only proved $\implies$. -If $f$ is continuous then for any $X\subset M$, -$$X\subset f^{-1}[f(X)]\subset f^{-1}\left[\overline{f(X)}\right]=\overline{f^{-1}\left[\overline{f(X)}\right]}$$ -therefore -$$\overline{X}\subset f^{-1}\left[\overline{f(X)}\right]\implies f\left(\overline{X}\right)\subset \overline{f(X)}.$$ -The other side must be the same idea but I don't know why I can't prove it. -Added: With exactly the same idea as when I proved $\implies$, I proved $\Longleftarrow$: -Let $F\subset N$ be any closed set; then: -$$f\left[f^{-1}(F)\right]\subset f\left[ \overline{f^{-1}(F)}\right]\subset \overline{f\left[ f^{-1}(F)\right]}\subset \overline{F}=F$$ -in particular -$$f\left[ \overline{f^{-1}(F)}\right]\subset F\implies f^{-1}(F)\supset\overline{f^{-1}(F)}$$ -then $f^{-1}(F)=\overline{f^{-1}(F)}$ and $f$ is continuous. - -REPLY [4 votes]: First, a general fact about maps of sets: -$$X \subseteq f^{-1} Y \iff f X \subseteq Y$$ -Now, suppose for all $X$ in $M$ we have $f \overline{X} \subseteq \overline{f X}$. Let $Y$ be a closed subset of $N$ and let $X = f^{-1} Y$. A map $f : M \to N$ is continuous if and only if the preimage of every closed set is closed, so we need to show $X$ is closed. Clearly, $f X \subseteq Y$ and $X = f^{-1} f X$. Consider $\overline{X}$: we have $X \subseteq \overline{X}$, so $f X \subseteq f \overline{X} \subseteq \overline{f X} \subseteq Y$, hence $\overline{X} \subseteq f^{-1} Y = X$, i.e. $\overline{X} = X$. - -REPLY [2 votes]: Let $V\subset N$ be an open set around $f(x)$.
Then its complement $V^c$ is closed. Let $U=cl({f^{-1}(V^c)})^c$. Then it is an open set. Because of the property of the function: -$$ -f(cl({f^{-1}(V^c)}))\subset cl(f({f^{-1}(V^c)})) \subset V^c -$$ -we see that $x\in U$, and that $f(U)\subset V$. Then $f$ is continuous.<|endoftext|> -TITLE: Dimensionality of null space when Trace is Zero -QUESTION [16 upvotes]: This is the fourth part of a four-part problem in Charles W. Curtis's book entitled Linear Algebra, An Introductory Approach (p. 216). I've succeeded in proving the first three parts, but the most interesting part of the problem eludes me. Part (a) requires the reader to prove that $\operatorname{Tr}{(AB)} = \operatorname{Tr}{(BA)}$, which I was able to show by writing out each side of the equation using sigma notation. Part (b) asks the reader to use part (a) to show that similar matrices have the same trace. If $A$ and $B$ are similar, then -$\operatorname{Tr}{(A)} = \operatorname{Tr}{(S^{-1}BS)}$ -$= \operatorname{Tr}(BSS^{-1})$ -$= \operatorname{Tr}(B)$, -which completes part (b). Part (c) asks the reader to show that the vector subspace of matrices with trace equal to zero has dimension $n^2 - 1$. Curtis provides the hint that the map from $M_n(F)$ to $F$ is a linear transformation. From this, I used the theorem that $\dim T(V) + \dim n(T) = \dim V$ to obtain the dimension of the null space. Part (d), however, I'm stuck on. It asks the reader to show that the subspace described in part (c) is generated by matrices of the form $AB - BA$, where $A$ and $B$ are arbitrary $n \times n$ matrices. I tried to form a basis for the subspace, but wasn't really sure what it would look like since an $n \times n$ matrix has $n^2$ entries in it, but the basis would need $n^2 - 1$ matrices. I also tried to think of a linear transformation whose image would have the form of $AB - BA$, but this also didn't help me. I'm kind of stuck... -Many thanks in advance! - -REPLY [11 votes]: One way of proving this: Note that for all $A,B$ matrices, $AB-BA$ has trace equal to zero. Denote by $E_{ij}$ the matrix with entry $1$ in row $i$, column $j$ and $H_{ij}:=E_{ii}-E_{jj}$. Then $\{E_{ij}:i\ne j\}\cup\{H_{i,i+1}:1\le i\le n-1\}$ form a basis for the space. Also, $H_{ij}=E_{ij}E_{ji}-E_{ji}E_{ij}$ and $2E_{ij}=H_{ij}E_{ij}-E_{ij}H_{ij}$. So, you have a basis formed by elements of the form $AB-BA$.<|endoftext|> -TITLE: Complex matrices with null trace -QUESTION [5 upvotes]: I'm trying to prove the following: - -Let $A\in \mathbb{C}^{n\times n}$ be a matrix with null trace; then $A$ is similar to a matrix $B$ such that $B_{jj}=0$ (i.e. it has zeroes on its diagonal). - -Any ideas? Induction on $n$ sounded feasible but I wasn't able to put together anything. - -REPLY [3 votes]: You can read the following short, nice paper -http://www.cs.berkeley.edu/~wkahan/MathH110/trace0.pdf -Please note the gist of the paper for you is Corollary 4: any square matrix over the complex numbers is similar to a matrix all of whose diagonal elements are the same element, and of course this is all you need.<|endoftext|> -TITLE: Recommendations for real analysis -QUESTION [6 upvotes]: I have completed two courses in real analysis that covered up to chapter 9 in Rudin's Principles of Mathematical Analysis (and one on complex analysis). So if I am interested in continuing on in analysis (real analysis and not complex analysis), what would be a good direction to go from here? What would be a good book to learn from?
As my interest is primarily number theory I was wondering if there is a direction in analysis that would be helpful in this aspect. - -REPLY [4 votes]: I suggest Folland's Real Analysis: Modern Techniques and Their Applications. It covers all you need to know about measure theory and Lebesgue integration, and has chapters on probability, distributions, Fourier analysis, and lots of information about functional analysis. -Other people will recommend "Big Rudin," Royden, and Stein and Shakarchi's series of books, but I find Folland the most clearly written and comprehensive. - -REPLY [2 votes]: If you are interested in number theory, you really should get further into complex analysis. Analytic number theory uses some ideas from complex analysis like the gamma and zeta functions. A good book for this subject, especially the number theory aspect of complex analysis, is Stein and Shakarchi's Complex Analysis. -If you want to go further into Real Analysis, you should continue with measure theory and functional analysis. (I can't remember if the first nine chapters of the first Rudin cover measure theory.) A good book for a first introduction to functional analysis and measure theory would be Rudin's Real and Complex Analysis. For a more advanced book on functional analysis, I would recommend Brezis' Functional Analysis, Sobolev Spaces, and Partial Differential Equations.<|endoftext|> -TITLE: Finding the topological complement of a finite dimensional subspace -QUESTION [5 upvotes]: I know that for any finite dimensional subspace $F$ of a Banach space $X$, there is always a closed subspace $W$ such that $X=W\oplus F$, that is, any finite dimensional subspace of a Banach space is topologically complemented. -However, I wonder whether we can put some condition on the complemented subspace. The problem I am working on is the following: -Let $X$ be an infinite dimensional Banach space. Suppose we have -\begin{equation*} -X=\overline{F_1\oplus F_2\oplus F_3\oplus\cdots} -\end{equation*} -where all $F_j$'s are finite dimensional subspaces of $X$ with dimensions larger than 1. -Can we find a closed subspace $W$ such that $X=F_1\oplus W$ and $W\supset \overline{F_2\oplus F_3\oplus\cdots}$? -Or equivalently, can we find a vector $x\in F_1$ such that it lies outside the closed linear span of $F_j$ $(j\neq 1)$? -Thanks! - -REPLY [4 votes]: Edit: I'm still not completely sure I understand your notation. For this answer, $F \oplus G$ means the algebraic sum of $F, G$, i.e. $F \oplus G = \{x + y : x \in F, y \in G\}$, with the requirement that $F \cap G = \{0\}$, and $F = F_1 \oplus F_2 \oplus \dots$ means -$$F = \bigcup_{n \ge 1} F_1 \oplus \dots \oplus F_n.$$ -The answer to your question is: not necessarily. For instance, it may be that $F_2 \oplus F_3 \oplus \dots$ is dense in $X$. -To be explicit, take $X = C([0,1])$. I don't quite understand the point of wanting the $F_n$ to have dimension larger than 1, and it will make this example a little messier, but for $n \ge 2$ let $F_n$ be the span of $x^{2n-4}$ and $x^{2n-3}$. Then $F_2 \oplus F_3 \oplus \dots$ is precisely the space $P$ of polynomials, which by the Weierstrass approximation theorem is dense in $X$. Let $g_1, g_2$ be bump functions supported in $[0,1/4], [3/4,1]$ respectively, and let $F_1$ be the span of $g_1$ and $g_2$. Every function in $F_1$ vanishes identically on $[1/4, 3/4]$ so if it is a polynomial it is 0.
Thus $F_1 \cap P = \{0\}$, so $F_1 \oplus F_2 \oplus \dots$ is still a direct sum, and it is still dense in $X$ (since it contains $P$), so your hypotheses are satisfied. But the only $W$ that contains $\overline{F_2 \oplus F_3 \oplus \dots} = X$ is $X$ itself, and so $W \cap F_1 = F_1 \neq \{0\}$, so no such $W$ exists.<|endoftext|> -TITLE: How to resolve Skolem's Paradox by realizing what can be said of a set is relative to what is in the domain of some model? -QUESTION [8 upvotes]: I apologize in advance if I'm hopelessly confused... -Skolem's Paradox, I suppose, can be put like this: -$M$ is a countable model of ZFC and $M$ implies the existence of uncountable sets. -I suppose that people find this initially paradoxical because they assume the statement is said from a single, absolute perspective. However, there are (necessarily) two perspectives involved: the inside perspective on $M$ and the outside perspective on $M$. Once these perspectives are separated, we realize that there is no paradox. Consider: -The former conjunct "$M$ is a countable model of ZFC" is necessarily said from an outside perspective on $M$ -- as discussed here. Actually, $M$ can't express its own cardinality at all. -(Is $M$'s inability to express its own cardinality related to $M$'s being a proper class -- namely, that there is no function in $M$ that takes one of $M$'s members onto the universe of $M$?) -Continuing on… Let $N$ be the outside perspective on $M$ such that there is a bijection $f\in N$ between the domain of $M$ and $\omega^N\in N$. -The latter conjunct "$M$ implies the existence of uncountable sets" is obviously said from $M$'s inside perspective -- after all, $M$ is a model of ZFC and thus must satisfy Cantor's Theorem. -So, we can separate the perspectives of the paradoxical statement above here: - -From $M$'s perspective, $M$ is a proper class and there is some $A\in M$ such that $A$ is uncountable in $M$. -From $N$'s perspective, $M$ is countable. - -These two statements are jointly consistent when we realize that what can be said of a set $B$ is relative to what some model has to say about $B$. And so, the paradoxical statement isn't so paradoxical. -Is there anything wrong with what I have written above? (It took me a long time to learn this stuff, esp. with zero background in set-theory, higher-math, higher-logics, model-theory, etc., so specifically telling me where I am going wrong, if I am, will be a great help and a great relief.) - -What I'm really interested in is what can we say about $A$ from $N$'s perspective? Are the following possible: - -$M$ sees $M$ as a proper class, $M$ sees $A$ as uncountable, $N$ sees $M$ as countable, and $N$ sees $A$ as finite. -$M$ sees $M$ as a proper class, $M$ sees $A$ as uncountable, $N$ sees $M$ as countable, and $N$ sees $A$ as countable. -$M$ sees $M$ as a proper class, $M$ sees $A$ as uncountable, $N$ sees $M$ as countable, and $N$ sees $A$ as uncountable. - -Under what conditions might (1) - (3) be individually possible (obviously they can't be jointly possible)? -I suspect this question might be rather simple. For example, (2) could be possible when $N$ recognizes a bijection both between $M$ and $\omega^N$ and between $A$ and $\omega^N$. (3) could be possible when $N$ recognizes a bijection between $M$ and $\omega^N$ but doesn't recognize a bijection between $A$ and $\omega^N$. -My goal here is to understand/stress the fact that truth in model theory is relative to what particular models have to say about their members.
So, I'm trying to see that while $M$ may take $A$ to be uncountable, $N$ can take $A$ to be of any cardinality even under the condition that $N$ sees $M$ as countable. - -REPLY [5 votes]: The second and third are easily describable: -Suppose that $N$ is a model of ZFC and $N$ thinks that $M$ is a countable transitive model of ZFC ($M$ may not be such a model, but internally to $N$ this assertion holds). -This means that $N$ thinks that $M$ is countable, and that every element of $M$ is countable. $M$, on the other hand, knows some sets which are uncountable to it. So we have that $\omega_1^M$ is a countable ordinal in $N$, so the second situation holds. -Suppose now that we have a nice model of ZFC, $N$, which is uncountable and knows about uncountable sets. If we take $\omega_1^N$ we can consider $M$ a countable elementary submodel of $N$ such that $\omega_1^N\in M$. By elementarity $M$ and $N$ agree on $\omega$ and both agree that there is no bijection between that set and $\omega_1^N$. So we have the third situation in which both models agree that some set is uncountable. -Lastly to address the first situation, what we want is to have an ill-founded model of ZFC which thinks of an uncountable set as its $\omega$, but that it will also know about some model which is nicer. I am not sure how to address this situation since for $N$ to think that $M$ is a model of ZFC, $N$ would have to assert that $M$ has certain properties in $N$. These properties may make it impossible to make the jump from infinite to finite in this manner. -There is a possibility that $N$ will not be aware that $M$ is a model of ZFC, but that defeats the purpose because then we talk from an external point of view about both these models.<|endoftext|> -TITLE: Continuous with respect to weak convergence implies affine -QUESTION [5 upvotes]: Let $\phi : \mathbb R \rightarrow \mathbb R$ be a continuous function such that whenever $f_n \rightarrow f$ weakly in $L^2[0,1]$, we have $\phi\circ f_n \rightarrow \phi\circ f$ weakly in $L^2[0,1]$. I am trying to prove that $\phi$ must be an affine map, $\phi(x) = ax+b$ for some $a,b \in \mathbb R$. So far I've tried proving the contrapositive, or trying to show that $\phi'$ exists and is constant, but have had no success. Any suggestions? - -REPLY [4 votes]: Suppose there exist $t,s\in\mathbb R$ such that $\phi((s+t)/2)\ne (\phi(s)+\phi(t))/2$. -Let $f_n$ take the values $s$ and $t$ in an alternating fashion: for example, $f_n=s$ on $[k/2^n,(k+1)/2^n)$ if $k$ is even, and $f_n=t$ if $k$ is odd. Observe that $f_n$ converge weakly to the constant function $(s+t)/2$, while $\phi\circ f_n$ converge weakly to the constant function $(\phi(s)+\phi(t))/2$. Since $\phi((s+t)/2)\ne(\phi(s)+\phi(t))/2$, this contradicts the hypothesis; so $\phi$ must be midpoint-affine, and a continuous midpoint-affine function is affine.
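-A quick numerical illustration of that weak convergence (a numpy sketch under parameters of my own choosing; not part of the original argument):

```python
import numpy as np

# The alternating step function f_n from the answer, sampled on a fine grid:
# its integral against a test function g approaches the integral of the
# constant (s + t)/2 against g.
s, t, n = 0.0, 1.0, 12
xs = np.linspace(0, 1, 2**16, endpoint=False)
fn = np.where(np.floor(xs * 2**n).astype(int) % 2 == 0, s, t)
g = np.cos(3 * xs)                 # an arbitrary test function in L^2[0,1]
print(np.mean(fn * g))             # ~ integral of f_n * g
print((s + t) / 2 * np.mean(g))    # ~ integral of (s + t)/2 * g; nearly equal
```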
<|endoftext|> -TITLE: Relation between defining polynomials and irreducible components of variety -QUESTION [6 upvotes]: I've been puzzled about some basic facts in (classical) algebraic geometry, but I cannot seem to find the answer immediately: - -Let $V=V(f_1,\ldots,f_n)$ be a variety over some field $k$, and let $n > 1$. Suppose that $V$ turned out to be reducible, i.e. it is the union of irreducible varieties $V_1,\ldots,V_m$ for some $m > 1$. Must it be the case that at least one of the $f_i$ is a reducible polynomial? -Take one of the irreducible components, $V_1$, say. Do the defining equations for $V_1$ have anything to do with the polynomials $f_1,\ldots,f_n$? -Suppose varieties $V = V(f_1,f_2)$ and $W = V(f_3)$ share a common component. Does that mean that $f_3$ shares a factor with either one of $f_1$ or $f_2$? If not, what's a counterexample? - -Thanks so much! - -REPLY [5 votes]: 1) No. The variety $V=V(y,y-x^2+1)\subset \mathbb A^2_k $ is reducible and consists of the two points $(\pm1,0)$ (if $\operatorname{char} k\neq 2$), but the polynomials $y,y-x^2+1$ are irreducible. -2) The question is not very precise. In a sense the answer is "yes", because you can obtain $V_1$ by adding polynomials $g_1,...,g_r$ to the $f_i$'s and write $V_1=V(f_1,...,f_n;g_1,...,g_r)$. -3) Yes if $k$ is algebraically closed: -Decompose $f_3$ into irreducibles: $f_3=g_1^{a_1}\ldots g_s^{a_s}$. The irreducible components of $V(f_3)$ are the $V(g_j)$'s. Say $V(g_1)$ is also an irreducible component of $V_1=V(f_1,f_2)$. Then, since $f_1$ vanishes on $V_1$, the polynomial $g_1$ divides $f_1$ (by the Nullstellensatz) and similarly $g_1$ divides $f_2$. So actually $f_3$ shares the same factor $g_1$ with both $f_1$ and $f_2$, which is more than you asked for. -Caveat If $k$ is not algebraically closed, the answer to 3) may be no: for example over $\mathbb R$ the varieties $V=V(x+y,x^4+y^4)$ and $W=V(x^2+y^2)$ are both equal to the irreducible subvariety of the plane consisting of just one point: $V=W=\lbrace (0,0)\rbrace \subset \mathbb A^2(\mathbb R)=\mathbb R^2$. -However $x^2+y^2$ is irreducible in $\mathbb R[x,y]$ and shares no factor with $x+y$ or $x^4+y^4$, since $x^2+y^2$ does not divide those polynomials.
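-A quick sympy check of the example in 1) above (a sketch of my own, encoding the variety in the obvious way):

```python
from sympy import symbols, solve, factor

x, y = symbols('x y')
# V(y, y - x**2 + 1) is reducible (two points), even though both
# defining polynomials are irreducible.
print(solve([y, y - x**2 + 1], [x, y]))   # [(-1, 0), (1, 0)]
print(factor(y), factor(y - x**2 + 1))    # neither polynomial factors
```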
- -REPLY [2 votes]: Here's a silly example to play around with. In $\mathbf A^3$ consider the algebraic set $V(x - yz, x)$. Here the irreducible components are the lines $V(x, y)$ and $V(x, z)$.<|endoftext|> -TITLE: Lifting isomorphisms between derived categories -QUESTION [9 upvotes]: Suppose $A$ and $B$ are commutative rings. Let $A\to B$ be a surjective ring homomorphism. I will denote by $D(A)$ and $D(B)$ the derived categories of unbounded complexes over $A$ and $B$. -Suppose $M,N \in D(B)$ are two complexes over $B$. Let $F:D(B)\to D(A)$ be the forgetful functor. -Suppose that we know that $F(M) \cong F(N)$. Does it follow that $M\cong N$ in $D(B)$? -If we had a quasi-isomorphism $F(M) \to F(N)$, then it will of course lift to $D(B)$, because since $A\to B$ is surjective, an $A$-linear map of complexes over $B$ will automatically be $B$-linear. -However, isomorphisms in the derived category might pass through a third object $K$, which might not be defined over $B$. Thus, I suspect the answer to my question is no, but I have no idea how to find a counterexample. -Thank you for any idea! -(remark: Since I did not get any answer, I posted this question to mathoverflow: https://mathoverflow.net/questions/99828/lifting-isomorphisms-between-derived-categories) - -REPLY [2 votes]: Answered on mathoverflow at the following link: -https://mathoverflow.net/questions/99828/lifting-isomorphisms-between-derived-categories<|endoftext|> -TITLE: Algebraic Curves reference -QUESTION [7 upvotes]: I am an undergraduate student. I finished a Galois theory course. I started to read Fulton's book Algebraic Curves. I am doing self-study. I found it difficult to understand from the 2nd chapter onwards. The notes I find are too concise. Are there some lecture videos, or expanded notes, so that I can understand it easily? - -REPLY [7 votes]: You are right, I think, that Chapter 2 of Fulton's can give you a hard time when you read it for the first time, because of the commutative algebra involved. I am also learning AG from Fulton's book at the moment, and as I have already finished Chapter 2 (in fact almost all of the book, though I still have to do all the exercises), I might give you a rough guideline of what Chapter 2 is about, and what in my opinion is important and what is very important. (WARNING: Purely subjective comments by a non-expert in Algebraic Geometry.) -Sections 2.1-2.4 are mandatory, in particular section 2.2 on polynomial maps. Do all the exercises of section 2.2, in particular the ones containing concrete examples of varieties (i.e. 2.8, 2.12 and 2.13). The result of exercise 2.7 is very important to keep in mind, since it gives a convenient criterion for an affine algebraic set to be a variety. -Section 2.4 is also very important, because it contains the definition of the local ring at a point of a variety. This concept is absolutely fundamental in Chapter 3, where intersections of affine plane curves are studied via local rings (roughly speaking, the way two plane curves intersect at a point is completely encoded in the local ring at that point, modulo the ideal generated by those two curves). -Section 2.3 is purely technical. It is of importance in Chapter 3 as well, because the results obtained in that section allow you to reduce most arguments involving an arbitrary point $P \in \mathbb{A}^{2}(k)$ and two distinct lines $L$, $L'$ passing through $P$ to the case where $P=(0,0)$ and where $L$, $L'$ are the two coordinate axes (see Exercise 2.15d). -The rest of the sections are pure algebra with algebraic geometry in hindsight, so I'll give you just a few comments on each section. -Section 2.5: The reason why DVR's (discrete valuation rings) are so important is the fact that the local ring at a simple point on a plane curve is a DVR (Theorem 1, section 3.2), and as the local study of plane curves involves a detailed study of the local rings of that curve, it is nice to know that most such local rings are rather simple (i.e. have only one maximal ideal, which is even principal). You won't need much of the stuff discussed in the exercises until Chapter 7, but for the exercises of Chapter 3 you definitely need Exercise 2.29. So do at least this one. -Section 2.6: This can be postponed until you have finished Chapter 3, since you will need it only from Chapter 4 onwards (but you really, really need it from that point on!). -Section 2.7: Short and painless. It's so short you should read it. -Section 2.8: Here the exercises are far more important than the text. Do Exercises 2.42-2.46; they are all very important in Chapter 3 and beyond. -Section 2.9: This basically contains only one important result, so keep the statement in mind, but I don't think you need to understand all of the proof in order to carry on, so you might wish to skip a detailed study of the proof for now and do that later. -Section 2.10: This is a very important section, because some important theorems (such as Bezout's Theorem) are proved by counting dimensions of certain vector spaces, and by comparing them via short exact sequences. So although the exercises may be tiresome and formal, doing all of them is very important (Fulton sometimes uses results from those exercises without explicitly referring to them all over the book). -Section 2.11: This is material used in Chapter 8, so you might skip it on your first read. -Lastly a word on the exercises scattered throughout the text: Fulton uses a whole lot of those exercises in the main text. Most of the time he is giving an explicit reference.
But the result of some exercises might prove extremely helpful in solving other exercises, and Fulton may or may not tell you which of those results is important for solving a given exercise. -So you should really at least try to do all of them. Otherwise you won't get the best of the book and you will have to settle for much less. -Hope I could help you. And any criticism (in particular from experts in algebraic geometry) is of course very much welcome.<|endoftext|> -TITLE: Kan fibrations and surjectivity -QUESTION [7 upvotes]: I have a basic question on the usual model structure on simplicial sets. - -What is the relation between being a Kan (maybe trivial?) fibration and surjectivity? - -Surjectivity here means either surjectivity on the components, or surjectivity at each level of the simplicial set, or other interesting notions. -In Simplicial Homotopy Theory of Goerss and Jardine, they remark at one point, "since trivial fibrations are surjective, the result follows" (Proposition 3.3 of Chapter II). Is this surjectivity on the components? -Also, if you have a reference to point me to that would be great too; I haven't found much either in Simplicial Homotopy Theory or in other similar books. - -REPLY [2 votes]: Let me just record here for clarity the actual conclusion from Aaron's answer. - -If $f: X \to Y$ is a Kan fibration and $\sigma \in Y_n$ is a simplex in $Y$, then $\sigma$ is in the image of $f$ if and only if the path component of $\sigma$ is in the image of $f$. - -Of course, there are plenty of maps satisfying the surjectivity condition which are not Kan fibrations.<|endoftext|> -TITLE: How to show that $f$ is an odd function? -QUESTION [11 upvotes]: An entire function $f$ takes real $z$ to real and purely imaginary to purely imaginary. We need to show that $f$ is an odd function. -Well, $f=\sum_{n=0}^{\infty}a_nz^n$; what I can say is $f(\mathbb{R})\subseteq\mathbb{R}$ and $f(i\mathbb{R})\subseteq i\mathbb{R}$. -How should I proceed? Please give me a hint. - -REPLY [7 votes]: I am writing, enlarging and enhancing (hopefully...) Mex's answer to his own question (kudos!) and I'll be happy to erase this answer of mine if he decides to write down his. -We have that $$f(z)=\sum_{n=0}^\infty a_nz^n$$ because $\,f\,$ is entire, and by the given conditions we have$$(1)\,\,f(r)=\sum_{n=0}^\infty a_nr^n\in\mathbb{R}\,,\,\,\,r\in\mathbb{R}$$$$(2)\,\,f(ir)=\sum_{n=0}^\infty a_n(ir)^n\in i\mathbb{R}\,\,,\,\,r\in\mathbb{R}$$but we have that $$\sum_{n=0}^\infty a_n(ir)^n=\sum_{n=0}^\infty i^n (a_nr^n)=\sum_{n=0}^\infty (-1)^na_{2n}r^{2n}+i\sum_{n=0}^\infty (-1)^na_{2n+1}r^{2n+1} $$Now, condition $(1)$ forces every coefficient $a_n$ to be real (indeed $a_n=f^{(n)}(0)/n!$, and the derivatives at $0$ of a function that is real on the real axis are real), so the first sum above is the real part of $f(ir)$; as $f(ir)$ is purely imaginary we get that $\,a_{2n}=0\,,\,\forall n\in\mathbb{N}\,$, so the power series of the function has zero coefficients for the even powers of $\,z\,$ and is thus a sum of odd powers and trivially then an odd function.<|endoftext|> -TITLE: Research in algebraic topology -QUESTION [42 upvotes]: I have started studying algebraic topology with the help of Armstrong (Basic), Massey, and Hatcher. If I plan to do research in algebraic topology in the future: - -What else should I study after completing homology (basic), cohomology (basic) and homotopy theory (basic)? - -After completing Hatcher, how far would I be (in terms of time and effort) from tackling a research problem? -I have an average background in Algebra and never studied Category theory in detail. However I feel comfortable working with algebra.
I would like to work in those areas which require more algebraic machinery than any other area and which are more geometric in flavour. - -Are there other areas I should switch over to, like geometric topology or algebraic geometry? - -REPLY [2 votes]: Despite your comments in the OP, I think you should consider learning basic Category Theory. Category theory is crucial to most of topology, and a lack of knowledge of category theory will seriously cripple your ability to do topological problems and communicate with other topologists. -Categories for the Working Mathematician by Mac Lane, Schapira's lecture notes, and Categories and Sheaves by Kashiwara and Schapira are good resources for this.<|endoftext|> -TITLE: Linear dependence of linear functionals -QUESTION [7 upvotes]: Problem: Let $V$ be a vector space over a field $F$ and let $\alpha$ and $\beta$ be linear functionals on $V$. If $\ker(\beta)\subset\ker(\alpha)$, show $\alpha = k\beta$, for some $k\in F$. -A proposed solution is in the answers below. - -REPLY [8 votes]: If $\alpha$ is the zero functional, we are done, because we take $k=0$. -Otherwise, consider a basis $\{v_\gamma\}$ of $V$. Let $\{v_p\}$ be the basis vectors that $\beta$ maps to nonzero scalars, and $\{v_r\}$ the basis vectors mapped to zero. Then $\alpha$ must also map every vector in $\{v_r\}$ to 0, by hypothesis. -If $\{v_p\}$ contains just one vector we are done, because we can just scale $\beta$ so that $\alpha$ and $\beta$ agree on this basis vector. Otherwise, choose two vectors $v_1$ and $v_2$ in this set. Let $\beta(v_1)=b_1$, $\beta(v_2)=b_2$, $\alpha(v_1)=a_1$, and $\alpha(v_2)=a_2$. We want to show that $a_1/b_1=a_2/b_2$. Assume not. Then consider the vector $b_2v_1-b_1v_2$. We see that $\beta$ maps this to $0$, but $\alpha$ maps it to $b_2a_1-b_1a_2\neq 0$, a contradiction.<|endoftext|> -TITLE: Show trace is zero -QUESTION [13 upvotes]: Problem: We are given $n\times n$ square matrices $A$ and $B$ with $AB+BA=0$ and $A^2+B^2=I$. Show $tr(A)=tr(B)=0$. -Thoughts: We have $tr(AB)=-tr(BA)=-tr(AB)$, so $tr(AB)=tr(BA)=0$. We also have the factorizations $(A+B)^2=I$ and $(A-B)^2=I$ by combining the two relations above. -Let $\alpha_i$ denote the eigenvalues of $A$, and $\beta_i$ the eigenvalues of $B$. We have, by basic properties of trace, -$\sum \alpha_i^2 +\sum \beta_i^2=n$ -from $A^2+B^2=I$. -I'm not sure where to go from here. -I would prefer a small hint to a complete answer. - -REPLY [6 votes]: It appears from the context in the book that the correct problem is -$$ A^2 + B^2 = A B + B A = 0. $$ - The middle step is that $(B-A)^2 = 0,$ so we name the nilpotent matrix $N=B-A.$ Wait, I think that is enough. Because it is also true that $(A+B)^2 = 0.$ So $A+B$ and $B-A$ both have trace $0.$ So $tr \; \; 2B = 0.$ That finishes characteristic other than 2. We don't need full Jordan form for nilpotent matrices, just a quick proof that $N^2 = 0$ implies that the trace of $N$ is zero. Hmmm. This certainly does follow from the fact that a nilpotent matrix over any field has a Jordan form, but I cannot say that I have seen a proof of that. -Alright, in characteristic 2 this does not work: in any dimension take -$$ A = B \; \; \; \mbox{then} \; \; A^2 + B^2 = 2 A^2 = 0, \; AB + BA = 2 A^2 = 0.
$$ -In comparison, the alternate problem -$$ A^2 + B^2 = A B + B A = I $$ has the same thing about nilpotence; however, in fields where $2 \neq 0$ and $2$ is a square we get a counterexample with -$$ A \; = \; - \left( \begin{array}{rr} - \frac{1}{\sqrt 2} & \frac{-1}{2} \\ - 0 & \frac{1}{\sqrt 2} -\end{array} - \right) - $$ -and -$$ B \; = \; - \left( \begin{array}{rr} - \frac{1}{\sqrt 2} & \frac{1}{2} \\ - 0 & \frac{1}{\sqrt 2} -\end{array} - \right) - $$
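-These matrices are easy to check numerically; here is a numpy sketch of my own:

```python
import numpy as np

s = 1 / np.sqrt(2)
A = np.array([[s, -0.5], [0.0, s]])
B = np.array([[s, 0.5], [0.0, s]])
assert np.allclose(A @ A + B @ B, np.eye(2))   # A^2 + B^2 = I
assert np.allclose(A @ B + B @ A, np.eye(2))   # AB + BA = I
print(np.trace(A), np.trace(B))                # sqrt(2) each: the traces are nonzero
```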
<|endoftext|> -TITLE: Cellular Homology via Stable Homotopy -QUESTION [5 upvotes]: One can define cellular homology by letting -$C_n(X)=\{\mathbb{S}^n, X^n/X^{n-1}\}$, where $X$ is a CW complex and the curly brackets mean stable homotopy classes of maps. Now the differential of the resulting complex is supposed to be given by a map $X^n/X^{n-1}\to\Sigma X^{n-1}/X^{n-2}$ and I struggle to understand this map. -The only candidate I can think of would be the suspension of an attaching map. More precisely, on an $n$-cell, we use the inverse of the characteristic map to end up in $\mathbb{S}^n$, identify this with $\Sigma\mathbb{S}^{n-1}$ via a (canonical) homeomorphism, then apply the attaching map to end up in $\Sigma X^{n-1}$. -If this is correct (is it?), I still do not see why this is indeed a differential, i.e. $d^2=0$. -Furthermore, I would like to do concrete calculations with this formulation, if necessary only in very low dimensions (say, 1 or 2). So in particular I would like to know how the ordinary cellular boundary operator can be recovered from the fancy one above. Does anyone know a detailed reference for this view on cellular homology or can provide any useful insights? -Thank you. - -REPLY [3 votes]: I am not completely clear on the motivation of your question, but I think I can answer the question as-asked. In my opinion, this is easiest to see with a complicated diagram. Start with the inclusions of subcomplexes: -$$\begin{array}{ccccccccccc}X^1 & \to & X^2 & \to & \cdots & \to & X^{n-1} & \to & X^n & \to & X^{n+1} & \to & \cdots\end{array},$$ -and then throw in each quotient map dangling off: -$$\begin{array}{ccccccccccc}X^1 & \to & X^2 & \to & \cdots & \to & X^{n-1} & \to & X^n & \to & X^{n+1} & \to & \cdots \\ \downarrow p_1 & & \downarrow & & & & \downarrow p_{n-1} & & \downarrow p_n & & \downarrow p_{n+1} & &\\ X^1 & & X^2 / X^1 & & & & X^{n-1} / X^{n-2} & & X^n / X^{n-1} & & X^{n+1} / X^n & & \cdots,\end{array}$$ -where I've elected to name the downward maps for later use. -Each of these is what's called a cofiber sequence, where a cofiber sequence is a pair of sequential maps $A \to B \to C$ with the property that a test map $T \to B$ can be lifted to a commuting triangle $T \to A \to B$ if and only if the composite $T \to B \to C$ is null-homotopic. The way you get cofiber sequences in homotopy theory is by attaching cones to subspaces ($A \to B \to B \cup_A Cone(A)$), which for nice enough subspaces agrees with the quotient sequence $A \to B \to B/A$. An important lemma is that the maps in cofiber sequences satisfy the differential property: - -Setting $T = A$, the map $A = T \to B$ lifts to a map $A = T \to A$ by taking the identity, and this forces (by the "only if") the long composite $A \to B \to C$ to be null. - -An exceedingly useful fact about cofiber sequences is that they can be extended: in the sequence of maps $$\begin{array}{ccc} A & \to & B \\ & \to & B \cup_A Cone(A) \\ & \to & (B \cup_A Cone(A)) \cup_B Cone(B) \simeq \Sigma A \\ & \to & \Sigma A \cup_{B \cup_A Cone(A)} Cone(B \cup_A Cone(A)) \simeq \Sigma B \\ & \to & \cdots,\end{array}$$ -any adjacent pair forms a cofiber sequence. Abbreviating all these messy cones, one says that $A \to B \to C$ extends to $$A \to B \to C \to \Sigma A \to \Sigma B \to \Sigma C \to \Sigma^2 A \to \cdots.$$ -Now what does this have to do with you? Well, we have a whole bunch of cofiber sequences in that diagram with all the quotients of subcomplexes, and if we extend them, we find long sequences of the form $$X^{n-1} \to X^n \xrightarrow{p_n} X^n / X^{n-1} \xrightarrow{\partial_n} \Sigma X^{n-1} \to \Sigma X^n \to \cdots,$$ where I've again decided to give a name for later use to the "new map" that comes from extension. You'll notice that the subcomplexes $X^n$ themselves sit in a remarkable position in the diagram: they have a map to the right to $X^{n+1}$ and a map down to $X^n / X^{n-1}$, both of which sit in different cofiber sequences. Now, here's an important definitional assertion: - -Your boundary map $X^{n+1} / X^n \to \Sigma(X^n / X^{n-1})$ is the same as the composite $(\Sigma p_n) \circ \partial_{n+1}$. That is: start at your favorite node on the bottom row, follow the map $\partial_{n+1}$ to move diagonally back up to (a suspension of) the top row, then follow $p_n$ down to move back to the bottom row. - -Now that we've said all these things about cofiber sequences, the assertion that this gives a differential follows quickly: performing this pair of operations twice yields a four-fold composite $((\Sigma^2 p_{n-1}) \circ (\Sigma \partial_n)) \circ ((\Sigma p_n) \circ \partial_{n+1})$. Since we can associate composition, you find $\Sigma \partial_n$ and $\Sigma p_n$ right next to each other in the middle --- and these are two maps which appear adjacent to each other in the cofiber sequence $$X^{n-1} \to X^n \xrightarrow{p_n} X^n / X^{n-1} \xrightarrow{\partial_n} \Sigma X^{n-1} \to \cdots.$$ Hence, by the boxed lemma, their composite is zero, and hence the four-fold composite must also be zero, and that is exactly the differential condition on your boundary operator. Since the map of spaces $X^{n+1} / X^n \to \Sigma^2 X^{n-1} / X^{n-2}$ is itself null, applying any decent functor (such as homotopy groups) will also yield zero. -You can easily dress this up to fit into, e.g., the equivariant setting --- the important thing was just that cofiber sequences were around to work with. Moreover, if you know anything about spectral sequences, this is identical to the argument that the $d^1$-differential of the associated filtration spectral sequence is in fact a differential. Let me know if I've missed your point, and I'll revise the answer accordingly.<|endoftext|> -TITLE: An inequality involving integrals -QUESTION [6 upvotes]: Let $f:[0,1] \longrightarrow \mathbb R$ be an integrable function such that: -$$\int_{0}^{1} f(x) \space dx = \int_{0}^{1} xf(x) \space dx=1$$ -I need to prove that: -$$\int_{0}^{1} f^2(x) \space dx\geq4$$ - -REPLY [6 votes]: A geometric reading of this question is to consider the space $E=\mathcal C([0,1],\mathbb R)$ with inner product $$\langle f,g \rangle = \int_0^1 f(t) g(t) dt$$ and say: show that for all $f \in E$ with $\langle f,\, x\mapsto 1 \rangle = \langle f,\, x\mapsto x \rangle = 1$, we have $\| f \| \geq 2 $, where $F= \text{Span} \{x \mapsto 1, x \mapsto x \} $.
-By the Gram-Schmidt orthogonalization procedure, we obtain $(x \mapsto 1, x \mapsto \sqrt 3(2x-1))$ as an orthonormal basis of $F$, which gives one expression of the projection $p_F(f)$ of $f$. By the Pythagorean theorem we have $\|p_F(f)\| \leq \|f\|$, and since the basis is orthonormal we have: $$\|p_F(f)\|^2=(\langle f,1 \rangle )^2 + (\langle f,\sqrt 3(2x-1) \rangle )^2 = 1^2 + (\sqrt 3)^2 = 4$$ -All this explains the provenance of the function $x \mapsto 6x-2 = 1 + 3(2x-1) $ used by Unoqualunque. -Edit: $x \mapsto 6x-2 = 1 + 3(2x-1) $ instead of $x \mapsto 6x-2 = 3(2x-1) $
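-As a sanity check (a short sympy sketch of my own), the extremal function $x \mapsto 6x-2$ satisfies both constraints and attains the bound $4$ exactly, so the constant cannot be improved:

```python
from sympy import symbols, integrate

x = symbols('x')
f = 6*x - 2                           # the extremal function discussed above
print(integrate(f, (x, 0, 1)))        # 1
print(integrate(x*f, (x, 0, 1)))      # 1
print(integrate(f**2, (x, 0, 1)))     # 4: the bound is attained
```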
<|endoftext|> -TITLE: Modules over a functor of points -QUESTION [8 upvotes]: I have a question on the ''functor of points''-approach to schemes and $\mathcal{O}_X$-modules. Please let me first write up a definition. - -Let $Psh$ denote the category of presheaves on the opposite category -of rings $Rng^{op}$. So $Psh$ is the category of functors from the -category of rings $Rng$ to the category $Set$ of sets. -Fix an $X\in Psh$. Demazure and Gabriel define in their book -''Introduction to Algebraic Geometry and Algebraic Groups'' (page 58, -I.2.4.1) an $X$-module $M$ to be an object $M\in Psh$ and a morphism $f:M\to - X$ in $Psh$ (a natural transformation $f:M\to X$, that's what they call -an $X$-functor) such that for every ring $R$ and every map $p:*\to - X(R)$ the set $M(R,p):=*\times_{X(R)}M(R)$ has an $R$-module -structure with the property that for any ring map $\phi:R\to S$ the -induced map $\psi:M(R,p)\to M(S,\phi(p))$ is additive and satisfies -\begin{equation} \psi(\lambda m)= \phi(\lambda)\psi(m) \end{equation} -for all $m\in M(R,p)$ and $\lambda\in R$. -They call $M$ quasicoherent if for any ring map $\phi:R\to S$, the -induced map \begin{equation} M(R,p)\otimes_R S\cong M(S,\phi(p)) - \end{equation} is an isomorphism. - -I want to understand an $X$-module $M$ as a morphism $f:M\to X$ in $Psh$ for which some conditions are required to hold ''locally'', like a bundle, but let me be more precise in what I mean: The map $p$ in the definition above corresponds by the Yoneda lemma to a map $p:R\to X$ (Here, I use the same notation for $R$ and its associated presheaf $\hom(R,-)$). Let the object $M_p'$ of $Psh$ be defined by the cartesian diagram -\begin{eqnarray} -M_p'&\to & M\\ -\downarrow && \downarrow f\\ -R&\xrightarrow{p} & X -\end{eqnarray} -in $Psh$. I want to formulate conditions on $M_p'$ (and not on $M(R,p)=*\times_{X(R)}M(R)$ as above) such that $M$ is an $X$-module. The set $M(R,p)$ is contained in the set $M_p'(R)$ but they are not equal, unfortunately. My question is thus: What are the conditions on the $M_p'$ such that $M$ (together with $f$) defines an $X$-module? How is quasicoherence defined in this situation? - -To be more precise, I would like the above definition of a quasicoherent $X$-module to be the same as something like this: An object $M\in Psh$ and a morphism $f:M\to - X$ in $Psh$ such that for every ring $R$ and every map $p:R\to - X$ the set $M_p'(R)=(R \times_X M)(R)$ has an $R$-module -structure with the property that for any ring map $\phi:R\to S$ the -induced map $\psi':M_p'(R)\to M_{\phi(p)}'(S)$ is additive and satisfies $\psi'(\lambda m)= \phi(\lambda)\psi'(m)$ for all $m\in M_p'(R)$ and $\lambda\in R$ and $M_p'(R)\otimes_R S\cong M_{\phi(p)}'(S)$ is an isomorphism. - -I hope that I was able to clarify my question. Thank you in advance for any hints. - -REPLY [10 votes]: The first step is to understand what presheaves are "really" about. -Theorem (Kan). Let $\mathbb{C}$ be a small category, and let $\hat{\mathbb{C}}$ be the category of presheaves on $\mathbb{C}$. Let $H_\bullet : \mathbb{C} \to \hat{\mathbb{C}}$ be the Yoneda embedding. If $\mathcal{E}$ is a locally small and cocomplete category and $F : \mathbb{C} \to \mathcal{E}$ is a functor, then there is a unique (up to isomorphism) cocontinuous functor $\tilde{F} : \hat{\mathbb{C}} \to \mathcal{E}$ such that $\tilde{F} H_\bullet = F$; in other words, $\hat{\mathbb{C}}$ is the free cocompletion of $\mathbb{C}$. -Thus, we may think of a presheaf on $\mathbb{C}$ as a formal specification for gluing together objects of $\mathbb{C}$. Making this idea completely precise is the essence of the proof of this theorem. Indeed, if $P$ is a presheaf on $\mathbb{C}$, then there is a category $\int^{\mathbb{C}} P$ whose objects are pairs $(c, x)$, where $c \in \operatorname{ob} \mathbb{C}$ and $x \in P(c)$, and arrows $f : (c, x) \to (c', x')$ are those arrows $f : c \to c'$ of $\mathbb{C}$ such that $P(f)(x') = x$; equivalently, $\int^{\mathbb{C}} P$ is the comma category $(H_\bullet \downarrow P)$ (by the Yoneda lemma). Let $X : \int^{\mathbb{C}} P \to \hat{\mathbb{C}}$ be the functor $c \mapsto H_c$. It is straightforward to show that $P \cong \varinjlim X$, and from here one deduces that $\tilde{F} : \hat{\mathbb{C}} \to \mathcal{E}$ must be defined by $P \mapsto \varinjlim A$, where $A : \int^{\mathbb{C}} P \to \mathcal{E}$ is the functor $c \mapsto F c$. (Verifying that this works is slightly tricky, but not very interesting.) - -Now, the setup of Demazure and Gabriel calls for a universe axiom, but this isn't really necessary if one is willing to restrict the category of schemes under investigation. Let $R$ be any ring, and let $\mathcal{R}$ be any essentially small full subcategory of the category of $R$-algebras which is closed under principal localisations (i.e. localisations of the form $A [1/f]$) and finite colimits (i.e. the initial algebra $R$, tensor products, and coequalisers). For example, we could take $\mathcal{R}$ to be the category of finitely-presented $R$-algebras. -Let $\mathcal{A} = \mathcal{R}^\textrm{op}$. We think of this as being a full subcategory of the category of schemes over $S = \operatorname{Spec} R$. Then, the category of presheaves on $\mathcal{A}$ includes as a full subcategory the category of all $S$-schemes locally modelled on $\mathcal{A}$, i.e. all those $S$-schemes that have an open cover by schemes isomorphic to objects in $\mathcal{A}$. -Given a presheaf $P$ that represents an $S$-scheme $X$, how do we know what $X$ "looks" like? Of course, we use the canonical diagram given in the proof of the above theorem, but the geometric setting means we have a very concrete interpretation of what is happening. Elements of $P$ should be thought of as $S$-scheme morphisms to $X$: to be precise, if $x \in P(A)$, then $x$ corresponds to a unique $S$-scheme morphism $\operatorname{Spec} A \to X$; after all, that is what it means to be represented by $X$: there is a presheaf isomorphism $P \cong \textbf{Sch}_S (\operatorname{Spec} (-), X)$. But we know every scheme admits a cover by open affine subschemes, so this is more than enough information to reconstruct $X$ – as claimed. -What about presheaves on $X$? Well, there's only one thing they could be in this language: a presheaf $E$ together with a presheaf morphism $p : E \to P$; in other words, it is an object of the slice category $(\hat{\mathcal{A}} \downarrow P)$.
(Actually, this isn't literally true: presheaves in this sense are strictly more general than presheaves on $X$; but the important thing is the idea.) As before, given an element $x$ of $P(A)$, we think of the set $E(A, x) = \{ e \in E(A) : p(e) = x \}$ as being the set of "sections" of $E$ over the "neighbourhood" $x$. Now suppose $E$ represents an $\mathscr{O}_X$-module. Then, $E(A, x)$ would have to be an $A$-module, in such a way that the various "restriction" maps become generalised module homomorphisms – exactly as in your definition. So, you see, it is already a local condition, just perhaps not in the way you expect! - -If, however, you are dead-set on seeing this in terms of Yoneda, then you can use the following fact: -Proposition. Let $\mathbb{C}$ be a small category, and let $\hat{\mathbb{C}}$ be the category of presheaves on $\mathbb{C}$. If $P$ is a presheaf on $\mathbb{C}$, then the slice category $(\hat{\mathbb{C}} \downarrow P)$ is isomorphic to the category of presheaves on $\int^{\mathbb{C}} P$. -Proof. Given a presheaf morphism $p : E \to P$, one constructs a presheaf on $\int^{\mathbb{C}} P$ in the obvious way: set $E(c, x) = \{ e \in E(c) : p(e) = x \}$. Conversely, given such a presheaf on $\int^{\mathbb{C}} P$, we may recover $E$ by setting $E(c) = \coprod_{x \in P(c)} E(c, x)$ and defining $p : E \to P$ to be the obvious projection. Checking that these constructions are functorial and mutually inverse is straightforward. $\qquad \blacksquare$ -Thus, one sees that $E(A)$ is in general not an $A$-module, but rather a disjoint union of $A$-modules. (And I really do mean disjoint union, not coproduct!) So we must work in the category of presheaves on $\int^{\mathbb{C}} P$ instead, but the only thing we gain from doing this is notational convenience.<|endoftext|> -TITLE: Numerical Analysis over Finite Fields -QUESTION [5 upvotes]: Notwithstanding that it isn't numerical analysis if it's over finite fields, but what topics that are traditionally considered part of numerical analysis still have some substance to them if the reals are replaced with finite fields or an algebraic closure thereof? Perhaps using Hamming distance as a metric for convergence purposes, with convergence of an iteration in a discrete setting just meaning that the Hamming distance between successive iterations becomes zero, i.e. the algorithm has a fixed point. -I ask about still having substance because I suspect that in the finite-field setting, numerical-analysis topics will mostly either not make sense, or be trivial. - -REPLY [3 votes]: The people who factor large numbers using sieve algorithms (the quadratic sieve, the special and general number field sieves) wind up with enormous (millions by millions) systems of linear equations over the field of two elements, and they need to put a lot of thought into the most efficient ways to solve these systems if they want their algorithms to terminate before the sun does.<|endoftext|> -TITLE: Pattern of orders of elements in a cyclic group -QUESTION [11 upvotes]: Doodling as I was contemplating another recent question, I picked out the orders of elements of the cyclic group $Z_{15}$, namely: -1 element of order 1 -2 elements of order 3 -4 elements of order 5 -8 elements of order 15 -In a group of order 3 you have 1 element of order 1 and two elements of order 3. -Are there other examples of this same pattern (powers of 2), and can anyone show and prove a general rule? - -REPLY [9 votes]: The number of elements of order $d$, where $d$ is a divisor of $n$, is $\varphi(d)$.
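-For instance, a quick brute-force check (a Python sketch of my own; the function names are mine) reproduces the $1,2,4,8$ pattern observed in the question for $n=15$:

```python
from math import gcd
from collections import Counter

def order_counts(n):
    # In the cyclic group Z_n, the element k has order n / gcd(n, k).
    return Counter(n // gcd(n, k) for k in range(n))

def phi(d):
    # Euler's totient, by brute force.
    return sum(1 for k in range(1, d + 1) if gcd(k, d) == 1)

n = 15
for d, count in sorted(order_counts(n).items()):
    assert count == phi(d)
    print(d, count)   # 1 1, 3 2, 5 4, 15 8: the pattern from the question
```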
-These values of $\varphi(d)$ are powers of $2$ for all divisors $d$ of $n$ precisely when $n$ is a power of $2$ times a product of distinct Fermat primes, that is, primes of the form $2^{2^k}+1$. The power of $2$ may be $2^0$, and the product may be the empty one. -The proof is immediate from the usual formula for $\varphi$ in terms of the prime power factorization. -So the answer is essentially the same as the classical one of which regular polygons are Euclidean-constructible. Sadly, there are not many Fermat primes known. -At this time, we only have $3$, $5$, $17$, $257$, and $65537$ to play with.<|endoftext|> -TITLE: Required reading on the Collatz Conjecture -QUESTION [10 upvotes]: I am currently writing a paper on 3x+1 and realized that despite having enough knowledge to work on a singular facet of the problem I lack a more broad understanding of the problem. I have seen the thorough annotated bibliographies by Jeffrey C. Lagarias but I do not have the time to read most of them and I imagine plenty of them would not teach me much about the problem itself, even if I did take the time to dissect them. So what are the papers people feel I should read with my limited time to gain the best possible understanding of the Collatz Conjecture? - -REPLY [7 votes]: It's challenging to distill a required reading on the Collatz conjecture, because it's unsolved. What may or may not be necessary? As you might have noticed from Lagarias's annotated bibliographies, there is a lot of literature on the subject, and it seems as though Lagarias has already sifted through a whole lot. What would be more convenient than his bibliographies, with short explanations of what each paper has done? If I were you, and were set on working on the problem, I would decide which of the papers were related to the methods I had in mind, or which ones at least sound interesting. -But to be sure, he has two bibliographies: pre-2000 here and 2000-2009 here. As mentioned in the comments, he has a book, and an intro. Lagarias is the expert, and there's no better list. -I would also mention that the problem is unsolved, and so I recommend patience. Developing the patience to read a beautifully written set of annotated bibliographies might be a good place to start.<|endoftext|> -TITLE: Simplex: outgoing variable cannot re-enter the basis next iteration -QUESTION [5 upvotes]: How can I prove that in the simplex method, a variable that has just left the basis cannot re-enter the basis on the very next iteration? The pivoting rule is Dantzig's. - -REPLY [5 votes]: Assume the problem to be a minimization problem. Then we choose an entering variable only if the coefficient in the objective row of that variable is negative, so as to reduce the objective value by increasing the value of the variable (from its current value). -When a variable leaves the basis, the coefficient in the objective row of that variable becomes positive: the pivot updates it from $0$ to $-c_e/a_{re}$, where $c_e<0$ is the reduced cost of the entering variable and $a_{re}>0$ is the pivot element. In the next iteration of simplex, only variables with a negative coefficient can enter the basis. Hence the variable that just left cannot re-enter the basis in the very next iteration.
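-To see the mechanism concretely, here is one pivot worked out in numpy (a sketch of my own; the small LP is made up purely for illustration):

```python
import numpy as np

# One Dantzig pivot on a small tableau for  min -3*x1 - 2*x2  subject to
# x1 + x2 + s1 = 4,  x1 + 3*x2 + s2 = 6,  all variables >= 0.
# Columns: x1, x2, s1, s2 | rhs; the last row holds the reduced costs.
T = np.array([[ 1.0,  1.0, 1.0, 0.0, 4.0],
              [ 1.0,  3.0, 0.0, 1.0, 6.0],
              [-3.0, -2.0, 0.0, 0.0, 0.0]])

enter = int(np.argmin(T[-1, :-1]))   # Dantzig rule: most negative cost -> x1
ratios = [T[i, -1] / T[i, enter] if T[i, enter] > 0 else np.inf for i in range(2)]
row = int(np.argmin(ratios))         # ratio test: s1 (basic in row 0) leaves
T[row] /= T[row, enter]
for i in range(3):
    if i != row:
        T[i] -= T[i, enter] * T[row]
print(T[-1, 2])   # reduced cost of the leaving variable s1 is now 3.0 > 0,
                  # so s1 cannot be chosen to enter on the next iteration
```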
<|endoftext|> -TITLE: In which case $M_1 \times N \cong M_2 \times N \Rightarrow M_1 \cong M_2$ is true? -QUESTION [5 upvotes]: In general, for modules $M_1,M_2,N$, the implication -$$M_1 \times N \cong M_2 \times N \Rightarrow M_1 \cong M_2$$ -fails. I'm just curious, but are there any cases or additional conditions under which it becomes true? -James B. - -REPLY [5 votes]: I recommend A Crash Course on stable range, cancellation, substitution and exchange by T.Y. Lam, 2004, which is a pretty good guide to the phenomenon. -Update: Another question surfaced which contains a sufficient condition for cancellation. The condition is: $\hom(M_1,N)=\hom(M_2,N)=\{0\}$.<|endoftext|> -TITLE: How do I solve this problem? -QUESTION [5 upvotes]: Exercise: If $a+2b=125$ and $b+c=348$, find $2a+7b+3c$. Here $a$, $b$, $c$ are natural numbers. - -The answer is: $2a+7b+3c = 1294$ -I tried but just can't figure out how to get to this answer. I have a lot of exercises similar to this one but don't know how to approach them. Can anyone write the steps in order to get to the answer above? Also it would be great if you can write in a general way so I can apply it to other exercises similar to this. - -REPLY [5 votes]: If you multiply the first equation of the following system by $2$ and the second equation by $3$ you get an equivalent system. -$$\left\{ -\begin{array}{c} -a+2b=125 \\ -b+c=348 -\end{array} -\right. \overset{\times 2}{\underset{\times 3}{\Leftrightarrow }}\left\{ -\begin{array}{c} -2a+4b=250 \\ -3b+3c=1044 -\end{array} -\right. $$ -Now you can add the two equations ...<|endoftext|> -TITLE: Comparison theorem for systems of ODE -QUESTION [5 upvotes]: Let a vector function $x(t)$ satisfy the differential equation -$$ - \dot x = f(x), -$$ -and a vector function $y(t)$ satisfy the differential inequality -$$ - \dot y \leq f(y) -$$ -with starting positions $y(0) < x(0)$. If the function $f$ satisfies the property: -$$ - f_{i}(x_1+\alpha_1,\ldots,x_{i-1}+\alpha_{i-1},x_i,x_{i+1}+\alpha_{i+1},\ldots,x_{n}+\alpha_{n}) \geq f_{i}(x_1,\ldots,x_n) -$$ -for any $\alpha_{1} \geq 0, \ldots, \alpha_{n} \geq 0$ (i.e. it is quasimonotone), then $y(t) \leq x(t)$ for any $t>0$. The function $f$ is smooth. -Is there a name for such a theorem? Please help me to prove it or give me a reference. - -REPLY [5 votes]: This theorem is known in Russia as the Chaplygin lemma. It can be proved as follows. Suppose that it isn't true. Then let -$$ - t^{*} = \inf \{ t \geqslant 0 \mid \exists i\colon y_{i}(t) > x_{i}(t) \} < \infty -$$ -By definition of $t^{*}$ we have that $y_{i}(t^{*}) = x_{i}(t^{*})$ and for any $j \neq i$ we have $y_{j}(t^{*}) \leqslant x_{j}(t^{*})$. Then by the quasimonotonicity property we have -$$ -f_{i}(y(t^{*})) \leqslant f_{i}(x(t^{*})) \tag{0} -$$ -On the other hand by the definition of $t^{*}$ there exists some small $\delta > 0$ such that -$$ -y_{i}(t^{*}+\Delta t) > x_{i}(t^{*} + \Delta t) \tag{1} -$$ - for any $0 < \Delta t < \delta$. Then -$$ -\dot y_{i}(t^{*}) \geqslant \dot x_{i}(t^{*}) = f_{i}(x(t^{*})) \tag{2} -$$ - because the opposite inequality implies a contradiction with $(1)$. Two different situations may occur. - -$\dot y_{i}(t^{*}) < f_{i}(y(t^{*}))$. From $(2)$ it follows that $f_{i}(y(t^{*})) > f_{i}(x(t^{*}))$. This is a contradiction with $(0)$. -$\dot y_{i}(t^{*}) = f_{i}(y(t^{*}))$. Then consider a solution $y_{\varepsilon}(t)$ of the differential inequality -$$ -\dot y_{\varepsilon} \leqslant f(y_{\varepsilon})-\varepsilon -$$ -From the first case we have that $y_{\varepsilon}(t) \leqslant x(t)$ for any $t$. Then let $\varepsilon \to 0^{+}$ and use that the solution depends continuously on the parameter $\varepsilon$. - -The theorem is proved.
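-For intuition, a crude numerical experiment (a pure-Python sketch with a made-up quasimonotone right-hand side; not part of the proof) shows the ordering being preserved along the flow:

```python
# Forward-Euler illustration of the comparison (Chaplygin) lemma.
# f is quasimonotone: each f_i is nondecreasing in the off-diagonal variables.
def f(u):
    x1, x2 = u
    return (x2 - x1, x1 - x2)

dt, steps = 0.001, 5000
x = [1.0, 2.0]                     # x' = f(x)
y = [0.5, 1.5]                     # y(0) < x(0)  and  y' = f(y) - 0.1 <= f(y)
for _ in range(steps):
    fx, fy = f(x), f(y)
    x = [x[i] + dt * fx[i] for i in range(2)]
    y = [y[i] + dt * (fy[i] - 0.1) for i in range(2)]
    assert all(y[i] <= x[i] for i in range(2))   # the ordering persists
print(x, y)
```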
<|endoftext|> -TITLE: How to find the solutions for the n-th root of unity in modular arithmetic? -QUESTION [16 upvotes]: $$\begin{align*} -x^n\equiv1&\pmod p\quad(1)\\ -x^n\equiv-1&\pmod p\quad(2)\end{align*}$$ -where $n\in\mathbb{N}$, $p$ is prime, and $x\in \{0,1,2,\dots,p-1\}$. -How can we find the solutions for $x$ without raising every number from $0$ to $p-1$ to the power $n$? -Or, in other words, how can we find the $n$-th roots of unity (and of $-1$) in modular arithmetic without computing the whole table? - -REPLY [12 votes]: Let $p$ be an odd prime. For the congruence $x^n \equiv 1 \pmod{p}$, let $g$ be a primitive root of $p$. There are very good probabilistic algorithms for finding a primitive root of $p$. -After that, everything is easy. -Let $d=\gcd(n,p-1)$. Then our congruence has $d$ incongruent solutions modulo $p$. One solution is $h$, where $h \equiv g^{(p-1)/d}\pmod{p}$. To get them all, we look at $h^t$, where $t$ ranges from $0$ to $d-1$. -For the congruence $x^n \equiv -1\pmod{p}$, we must first determine whether there is a solution. There is a solution if and only if $(p-1)/d$ is even. One solution is then $g^{(p-1)/2d}$. The other solutions can be obtained by multiplying by $n$-th roots of $1$.
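-Here is the recipe turned into a short Python sketch (the function names are mine; a brute-force primitive-root search stands in for the probabilistic algorithms mentioned above, which is fine for small $p$):

```python
from math import gcd

def prime_factors(m):
    out, q = set(), 2
    while q * q <= m:
        while m % q == 0:
            out.add(q)
            m //= q
        q += 1
    if m > 1:
        out.add(m)
    return out

def primitive_root(p):
    # g is a primitive root mod p iff g^((p-1)/q) != 1 for every prime q | p-1.
    for g in range(2, p):
        if all(pow(g, (p - 1) // q, p) != 1 for q in prime_factors(p - 1)):
            return g

def nth_roots_of_unity(n, p):
    g = primitive_root(p)
    d = gcd(n, p - 1)
    h = pow(g, (p - 1) // d, p)                       # one solution
    return sorted(pow(h, t, p) for t in range(d))     # all d solutions

print(nth_roots_of_unity(4, 13))   # [1, 5, 8, 12]
```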
<|endoftext|> -TITLE: How do modules, vector spaces, algebras, fields, rings, groups relate to one another? -QUESTION [6 upvotes]: Modules, vector spaces, algebras, fields, rings, groups... -How do these basic algebraic objects relate to each other via tensor products? -Is there a way to go from one object to its generalization via a tensor product construction? -I think this is an interesting question...but I can't find many resources on the subject. -For example, I've seen people use the phrase 'tensoring up'. -Your answer should describe how tensor products are used to relate algebraic objects. (I'm basically looking for tricks of the trade that every mathematician should know about this.) - -REPLY [7 votes]: To take a different tack from Qiaochu's great answer, let's take a look at these objects from the point of view of General Algebra. -All of these objects are "algebras" in the sense of General Algebra: they are a set $S$, together with a family of finitary operations on $S$ (a finitary operation on $S$ is a map $S^n\to S$, where $n$ is a nonnegative integer), together with a set of identities that are satisfied. -For example, a semigroup is an algebra of type $(2)$ (meaning it has a unique binary operation), $(S,\cdot)$, subject to the identity $(a\cdot b)\cdot c = a\cdot(b\cdot c)$ (using the infix notation). A monoid is an algebra of type $(2,0)$, $(M,\cdot,e)$, (a "nullary" operation on $M$ is a map $\{\varnothing\}\to M$, so it corresponds to a distinguished element of $M$), subject to the identities $(a\cdot b)\cdot c = a\cdot(b\cdot c)$, $a\cdot e = a$, and $e\cdot a = a$. A group is an algebra of type $(2,1,0)$ (the unary operation is the operation that maps every element to its inverse), subject to certain identities. Add the identity $a\cdot b = b\cdot a$ and you get abelian groups. A ring is an algebra of type $(2,2,1,0)$ (addition, multiplication, additive inverse, additive zero); a ring with unity would be $(2,2,1,0,0)$. -A vector space over $F$ is an algebra which, in addition to the binary, unary, and nullary operations that determine its additive structure as an abelian group, has a unary operation for every element of $F$, corresponding to scalar multiplication. The axioms of a vector space are translated into identities. $R$-modules are defined similarly, with a unary operation for each element of $R$. $K$-algebras are defined like vector spaces, but they have an extra binary operation (the product in the algebra). -Caveat. Not every standard structure can be described as a general algebra. For example, fields cannot be described as a general algebra in this way, because the multiplicative inverse is not a function on the entire field: it is undefined at $0$. There is a generalization called "Partial Algebras", which allow "partially-defined operations", but their theory is much more complicated. -Now, consider the relationship between semigroups and groups. If we look at a group, $(G,\cdot, ^{-1}, e)$, then by "forgetting" about the operations $^{-1}$ and $e$ and all the identities that are imposed on those operations, we get a structure which satisfies all the requirements for being a semigroup: we have a set, $G$, a binary operation $\cdot$, and the operation is associative. That is: every group can be considered to be a semigroup by "forgetting" part of its structure. Likewise, every abelian group can be considered to be a group by "forgetting" that we are requiring the identity $ab=ba$ to be satisfied. Every ring can be considered to be an abelian group by "forgetting" about multiplication. Every vector space over $F$ can be considered to be an abelian group by "forgetting" about scalar multiplication. Every $F$-algebra can be considered an $F$-vector space by forgetting about the multiplication of elements of the algebra. Etc. -All of this can be made precise with the notion of a forgetful functor. The notion is not a formal one, so I cannot define it directly, but to paraphrase Justice Potter Stewart, "you know it when you see it". The most common forgetful functor is the "underlying set functor", which maps an algebra to its underlying set. (The category of sets can be viewed as a category of algebras, in which the collection of operations is empty). -One nice thing about forgetful functors among categories of algebras is that they always have left adjoints. If $\mathbf{U}\colon \mathcal{A}\to\mathcal{B}$ is a forgetful functor between categories of algebras, then there exists a functor $\mathbf{F}\colon \mathcal{B}\to\mathcal{A}$ such that for all objects $A\in\mathcal{A}$ and $B\in\mathcal{B}$, -$$\mathrm{Hom}_{\mathcal{B}}(B,\mathbf{U}(A)) \cong \mathrm{Hom}_{\mathcal{A}}(\mathbf{F}(B),A).$$ -In this situation, $\mathbf{F}$ is said to be a left adjoint of $\mathbf{U}$ (note it occurs on the left entry of the $\mathrm{Hom}$ set). -For example, the forgetful functor from $\mathsf{Group}$ to $\mathsf{Set}$ has the "free group" functor from $\mathsf{Set}$ to $\mathsf{Group}$ as an adjoint: there is a natural one-to-one correspondence between set-theoretic maps between a set $X$ and the underlying set of a group $A$, and group homomorphisms between the free group on $X$ and the group $A$. Another example: the forgetful functor $\mathbf{U}\colon\mathsf{AbGroup}\to \mathsf{Group}$ has the adjoint $\mathbf{F}\colon \mathsf{Group}\to\mathsf{AbGroup}$ that sends every group $G$ to its abelianization. -Now, suppose that $F$ and $K$ are fields, $F\subseteq K$. If you consider the categories $F$-$\mathsf{VecSpace}$ and $K$-$\mathsf{VecSpace}$, there is a natural forgetful functor from the latter to the former: just "forget" how to multiply by scalars that are not in $F$. What is the adjoint of this forgetful functor? It's the tensor product!
Given an $F$-vector space $V$, and a $K$-vector space $W$, we have (using $\mathrm{Lin}_F$ to denote the collection of $F$-linear maps), -$$\mathrm{Lin}_F(V,U(W)) \cong \mathrm{Lin}_K(V\otimes_FK,W).$$ -So the adjoint of this forgetful functor is the tensor product with $K$. This is the same construction for modules, where a homomorphism $f\colon A\to B$ allows us to define a forgetful functor from $B$-$\mathsf{Mod}$ to $A$-$\mathsf{Mod}$ by giving a $B$-module $M$ the $A$-module structure $a\cdot m = f(a)m$. -So one can view tensor products as a particular instance of the phenomenon of the adjoint to a forgetful functor, and so generalize them in that direction. (While there are other roles the tensor product plays; for example, among commutative rings, viewed as $\mathbb{Z}$-algebras, the tensor product is the coproduct of two objects.) -Personally, I would say that this is the corresponding concept to "tensoring up" in the general setting. - -As ever: if you want a great introduction to General Algebra, particularly from the categorical point of view, I recommend George Bergman's An Invitation to General Algebra and Universal Constructions, available from his website as PDF files. There is a discussion of forgetful functors in Chapter 6, and of adjoint functor pairs in Chapter 7.<|endoftext|> -TITLE: Application of Harish-Chandra theorem -QUESTION [7 upvotes]: Let $\mathfrak{g}$ be a semisimple finite dimensional Lie algebra and $V_\lambda$, resp. $V_\mu$ -its finite dimensional highest weight modules with highest weights $\lambda$, resp. $\mu$. Let -$\chi_\lambda, \chi_\mu : C(\mathcal{U}(\mathfrak{g})) \rightarrow \mathbb{C}$ -be the corresponding central characters. The Harish-Chandra theorem asserts that $\chi_\lambda = \chi_\mu$ if and only if -$w(\lambda+\delta)-\delta = \mu$ for some $w$ in the Weyl group, where -$\delta$ is the Weyl vector, i.e. the sum of all fundamental weights. -Is it also true in this setting that $V_\lambda \simeq V_\mu$ if and only if $\chi_\lambda = \chi_\mu$? -How is this version of the theorem related to the fact that the Harish-Chandra homomorphism is an isomorphism? -Thank you very much for your answers. (I am studying a program in mathematical physics and trying to figure out -how general the procedure of labeling irreducible representations by values of Casimir operators is, as physicists do this so often.) - -REPLY [6 votes]: If I interpret your question as asking: are irred. finite-dimensional rep's determined by their infinitesimal character (i.e. by the eigenvalues of the -centre of the enveloping algebra on them), the answer is yes. As you essentially -observe, if one has such an irrep. $V$, and you want to write it as $V_{\lambda}$, -you can realize $\lambda$ as the unique dominant weight such that $\chi_V$ (the central character of $V$) is equal to $\chi_{\lambda+\delta}$. (I am not sure what normalization you are using, but if you are using the normalized HC isomorphism, -the one that identifies the centre of the enveloping algebra with $W$-invariants in the enveloping algebra of the Cartan, then $\chi_V$ will be the homomorphism corresponding to the character $\lambda+\delta$ of the Cartan.) -The proof of this statement is closely tied up with the proof of the HC isomorphism (at least, with the proofs that I know). I learned the HC isomorphism from Knapp's overview by examples book, and I think I learned -this fact, and its relationship to the HC isomorphism, from that book.
- -By the way, if you have a physics background, then you probably know one case of this: for the spherical harmonics, the Casimir (= spherical Laplacian) eigenvalue determines the irrep. of $SO(3)$ to which a given spherical harmonic belongs.<|endoftext|> -TITLE: Solving $\frac{dy}{dx} = xy^2$ -QUESTION [5 upvotes]: This problem appears to be pretty simple to me but my book gets a different answer. -$$\frac{dy}{dx} = xy^2$$ -For when $y$ is not $0$: -$$\frac{dy}{y^2} = x \, dx$$ -$$\int \frac{dy}{y^2} = \int x \, dx$$ -$$\frac{-1}{y^1} = \frac{x^2}{2}$$ -$$\frac{-2}{x^2} = y$$ -Is there anything wrong with this solution? It is not what my book gets but it is similar to how they do it in the example. - -REPLY [4 votes]: Your mistake is here. You should have -$$\int \frac{dy}{y^2} = \int y^{-2}dy = -y^{-1}$$ -and not $$\int \frac{dy}{y^2} = \frac{-1}{y^{-1}}$$. - -REPLY [2 votes]: When you apply indefinite integration, you need to add a "${}+C\,$" (or whatever letter you want) at the end; this is rather infamous for tripping up students. In your problem this gives -$$-\frac{1}{y}=\frac{x^2}{2}+C~~\implies~~ y=\frac{1}{-(x^2/2+C)}=\frac{2}{-x^2-2C}.$$ -If we write $K=-2C$ (or again, any old letter), we can write this simply as $\displaystyle\frac{2}{K-x^2}$. -The reason we have "plus a constant" at the end is to prevent erroneous derivations like this: -$$\frac{d}{dx}f(x)=\frac{d}{dx}\big(f(x)+1\big)\implies f(x)=f(x)+1\implies 0=1.$$ -In other words, antidifferentiation will only find the antiderivative you want up to addition by an unknown constant. (However with some initial conditions this constant, or constants as it may be in more complicated problems, may be computed exactly.) -Note that it is only necessary to write an add-a-constant to one side of an equation, because for example something like $f(x)+A=g(x)+B$ can be written instead as $f(x)=g(x)+C$ with our constant $C=B-A$.<|endoftext|> -TITLE: Is the ring of integers in a relative algebraic number field faithfully flat over a ground ring? -QUESTION [7 upvotes]: Let $L$ be a finite extension of an algebraic number field $K$. -Let $A$ and $B$ be the rings of integers in $K$ and $L$ respectively. -Is $B$ faithfully flat over $A$? -What if $L$ is an infinite algebraic extension of $K$? - -REPLY [5 votes]: A module over a Dedekind domain is flat if and only if it is torsion-free. So any extension of rings $A\rightarrow B$ with $A$ Dedekind and $B$ a domain is flat. In particular, if the extension is integral, it is faithfully flat (by lying-over, as you point out in your comment).<|endoftext|> -TITLE: Why is there no functor $\mathsf{Group}\to\mathsf{AbGroup}$ sending groups to their centers? -QUESTION [24 upvotes]: The category $\mathbf{Set}$ contains as its objects all small sets and arrows all functions between them. A set is "small" if it belongs to a larger set $U$, the universe. -Let $\mathbf{Grp}$ be the category of small groups and morphisms between them, and $\mathbf{Abs}$ be the category of small abelian groups and their morphisms. -I don't see what it means to say there is no functor $f: \mathbf{Grp} \to \mathbf{Abs}$ that sends each group to its center, when $U$ isn't even specified. Can anybody explain? - -REPLY [6 votes]: This is very similar to Arturo Magidin's answer, but offers another point of view. -Consider the dihedral group $D_n=\mathbb Z_n\rtimes \mathbb Z_2$ with $2\nmid n$ (so that $Z(D_n)=1$).
Since $D_n$ is a semidirect product, we get a split short exact sequence
-$$1\to\mathbb Z_n\rightarrow D_n\xrightarrow{\pi} \mathbb Z_2\to 1$$
-and an arrow $\iota\colon \mathbb Z_2\to D_n$ such that $\pi\circ \iota=1_{\mathbb Z_2}$.
-Hence the composite morphism
-$$\mathbb Z_2\xrightarrow{\iota} D_n\xrightarrow{\pi}\mathbb Z_2$$
-is an iso and would be mapped by the centre to an iso
-$$\mathbb Z_2\to 1\to \mathbb Z_2,$$
-which is impossible. (One can also recognize a split mono and a split epi above and analyze how they behave under an arbitrary functor.)
-Therefore the centre can't be functorial.<|endoftext|>
-TITLE: Finite groups of functions under function composition
-QUESTION [13 upvotes]: Over the years I have done many questions along the lines of the following:
-"Given functions $\phi, \theta$ (usually defined on $\mathbb{R}$ or $\mathbb{C}$, or a suitable subset of $\mathbb{R}$ or $\mathbb{C}$) prove that the collection of all functions obtained from $\theta$ and $\phi$ by function composition form a group G." (Frequently G is $\mathbb{Z}/4 \mathbb{Z}$ or $S_3$.)
-A typical example might be functions $\theta:x\mapsto 1-x$ and $\phi:x\mapsto \frac{1}{x}$, (defined on the set of non-zero reals) generating a group isomorphic to $S_3$.
-When I have been inventing questions for my students, rather than simply copying examples from previous exam papers, I have sometimes wondered if it is possible to create a similar example with a function $\psi$ which has order 5 in the group - just for a bit of variety! However a crucial restriction is that I need the functions to be simple algebraic functions (something like $x\mapsto \frac{ax+b}{cx+d}$ with $a, b, c, d\in \mathbb{Z}$), which rules out things like rotations of $\mathbb{C}$ through an angle of $2\pi/5$.
-My question is then: does anyone know of such a function, or has anyone come across a similar exam question which gives an element of order 5 (or 7 for that matter ...) arising from such a simple type of function?
-I have tried investigating the possibilities at various times, and have easily found functions which have order 2, 3, 4, but never one of order 5 or 7.
-PS There is no great urgency here, as I have now retired from teaching this sort of stuff.

-REPLY [2 votes]: You can look for a rotation of the hyperbolic plane $\mathbb{H}^2$. For example, suppose the rotation fixes $i$ (and hence also $-i$), and look for $w=w(z)$ with
-$$\frac{w+i}{w-i}=k\frac{z+i}{z-i}.$$
-Solving for $w$ gives $$w(z)=\frac{(k+1)z+(k-1)i}{-(k-1)iz+(k+1)}.$$
-You want $w^5(z)=z$. In matrix notation you need that
-$$\left( \begin{array}{cc}
- k+1 & (k-1)i\\
- -(k-1)i & k+1
-\end{array}\right)^5$$ be a scalar matrix. As
-$$\left( \begin{array}{cc}
- k+1 & (k-1)i\\
- -(k-1)i & k+1
-\end{array}\right)^5=\left( \begin{array}{cc}
- 16 k^5+16 & 16(k^5-1)i\\
- -16(k^5-1)i & 16 k^5+16
-\end{array}\right),$$
-this forces $k^5=1$, for example $k=e^{2\pi i/5}$. If you want $w^7(z)=z$, in the same way $k^7=1$.
-Admittedly, it is not a simple closed expression with integer coefficients.
-Note that this is the sample correlation coefficient:
-$$r_{xy} = \dfrac{\displaystyle \sum_{i=1}^n (x_i - \bar{x})(y_i - \bar{y})}{(n-1)s_xs_y} = \dfrac{\displaystyle \sum_{i=1}^n (x_i - \bar{x})(y_i - \bar{y})}{\sqrt{\displaystyle \sum_{i=1}^{n} (x_i - \bar{x})^2 \displaystyle \sum_{i=1}^{n} (y_i - \bar{y})^2}}$$

-REPLY [5 votes]: This follows from the Cauchy–Schwarz inequality, which states that for any two vectors $a$ and $b$ in an inner product space, we have
-$$\lvert \langle a, b \rangle \rvert^2 \leq \lvert \langle a, a \rangle \rvert \lvert \langle b, b \rangle \rvert$$
-In your case, the vector $a$ is taken as $a_i = (x_i-\bar{x})$ and the vector $b$ is taken as $b_i = (y_i-\bar{y})$ and the inner product of $a$ and $b$ is taken as $\displaystyle \langle a, b \rangle = \sum_{i=1}^n a_i b_i$. Hence, we get that
-$$\displaystyle \langle a, b \rangle = \sum_{i=1}^n a_i b_i = \sum_{i=1}^n (x_i - \bar{x})(y_i - \bar{y})$$
-$$\displaystyle \langle a, a \rangle = \sum_{i=1}^n a_i a_i = \sum_{i=1}^n (x_i - \bar{x})^2$$
-$$\displaystyle \langle b, b \rangle = \sum_{i=1}^n b_i b_i = \sum_{i=1}^n (y_i - \bar{y})^2$$
-Hence, by the Cauchy–Schwarz inequality, we get that
-$$\left(\sum_{i=1}^n (x_i - \bar{x})(y_i - \bar{y})\right)^2 \leq \left( \sum_{i=1}^n (x_i - \bar{x})^2 \right) \left( \sum_{i=1}^n (y_i - \bar{y})^2\right)$$
-Taking the square root, we get that
-$$\left|\sum_{i=1}^n (x_i - \bar{x})(y_i - \bar{y})\right| \leq \sqrt{\left( \sum_{i=1}^n (x_i - \bar{x})^2 \right) \left( \sum_{i=1}^n (y_i - \bar{y})^2\right)}$$
-Hence, we can conclude that
-$$|r_{xy}| = \dfrac{\left|\displaystyle \sum_{i=1}^n (x_i - \bar{x})(y_i - \bar{y})\right|}{\displaystyle \sqrt{\left( \sum_{i=1}^n (x_i - \bar{x})^2 \right) \left( \sum_{i=1}^n (y_i - \bar{y})^2\right)}} \leq 1$$
-EDIT
-Proof of the Cauchy–Schwarz inequality:
-First note that if the vector $b$ is zero, then the inequality is trivially satisfied since both sides are zero. Hence, we can assume that $b \neq 0$. Now look at the component of $a$ orthogonal to $b$, i.e. $$c = a - \dfrac{\langle a, b \rangle}{\langle b, b \rangle} b,$$
-i.e.
-$$a = c + \dfrac{\langle a, b \rangle}{\langle b, b \rangle} b.$$
-You can check that $c$ is orthogonal to $b$ by computing $$\langle c, b \rangle = \langle a,b \rangle - \dfrac{\langle a, b \rangle}{\langle b, b \rangle} \langle b, b \rangle = \langle a,b \rangle - \langle a,b \rangle = 0$$
-You can also check that $\langle c, \alpha b \rangle = 0 = \langle \beta c, b \rangle$.
-We now have that
-\begin{align}
-\langle a,a \rangle & = \left \langle c + \dfrac{\langle a, b \rangle}{\langle b, b \rangle} b, c + \dfrac{\langle a, b \rangle}{\langle b, b \rangle} b \right \rangle\\
-& = \langle c,c \rangle + \left \langle c,\dfrac{\langle a, b \rangle}{\langle b, b \rangle} b \right \rangle + \left \langle \dfrac{\langle a, b \rangle}{\langle b, b \rangle} b, c \right \rangle + \left \langle \dfrac{\langle a, b \rangle}{\langle b, b \rangle} b, \dfrac{\langle a, b \rangle}{\langle b, b \rangle} b \right \rangle\\
-& = \langle c,c \rangle + \left \lvert \dfrac{\langle a, b \rangle}{\langle b, b \rangle} \right \rvert^2 \langle b, b \rangle = \langle c,c \rangle + \dfrac{\left \lvert \langle a, b \rangle \right \rvert^2}{\langle b, b \rangle}
-\end{align}
-Now $\langle c,c \rangle \geq 0$.
This gives that -$$\langle a,a \rangle \geq \dfrac{\left \lvert \langle a, b \rangle \right \rvert^2}{\langle b, b \rangle}$$ -Rearranging, we get what we want, namely -$$\lvert \langle a, b \rangle \rvert^2 \leq \lvert \langle a, a \rangle \rvert \lvert \langle b, b \rangle \rvert$$<|endoftext|> -TITLE: The contraction of a maximal ideal of $A[[x]]$ is a maximal ideal of $A$? -QUESTION [5 upvotes]: I am working on the problems in Atiyah and MacDonald's famous Introduction to Commutative Algebra. On p. 11, problem 5 part iv reads: - -[Show that] the contraction of a maximal ideal $\mathfrak{m}$ of $A[[x]]$ is a maximal ideal of $A$, and $\mathfrak{m}$ is generated by $\mathfrak{m}^c$ and $x$. - -Here $A$ is an arbitrary commutative ring with unity, and $A[[x]]$ (as usual) the ring of formal power series over $A$. By the contraction $\mathfrak{m}^c$, Atiyah and MacDonald mean the pullback of an ideal of $A[[x]]$ under the inclusion $A\hookrightarrow A[[x]]$; thus, if I am understanding correctly, $\mathfrak{m}^c=\mathfrak{m}\cap A$. -What's bugging me is that I am having trouble believing the claimed result, due to the following example: -Let $A=k[t]$, the polynomial ring over a field; so then $A[[x]]$ is $k[t][[x]]$, the ring of formal power series in $x$ with coefficients that are polynomial in $t$. Consider the ideal $(tx-1)$. Then $A[[x]]/(tx-1)$ is the field of finite-tailed Laurent series in $x$ with coefficients in $k$, right? In which case, $(tx-1)$ is a maximal ideal of $A[[x]]$, right? But, $(tx-1)\cap A=(tx-1)\cap k[t]=(0)$, since no element of $k[t]$ is a multiple of $tx-1$. But $(0)$ is not a maximal ideal of $A=k[t]$, and $(tx-1)$ is certainly not generated by $x$ and $(0)$. So doesn't this example violate the conclusion? -In a less legendarily tight book, I would assume there was a typo or an omitted assumption somewhere, but this is Atiyah and MacDonald, and they are never wrong. Conclusion: I must be missing something. - -What am I missing? Is $A[[x]]/(tx-1)$ not actually a field? Is $(tx-1)\cap A$ not actually $(0)$? Did I misunderstand the definition of contraction? Or is it something else? - -REPLY [4 votes]: Maybe it's helpful to write your element as $-1 + tx$. Part (i) of the same exercise has you show that the units of $A[[x]]$ are the power series whose constant terms are units in $A$, so your ideal is the whole ring and the authors remain faultless. -The issue seems to be that $k[t][[x]]$ and $k[[x]][t]$ are not the same. My initial justification for this was that, as you noted, elements of the former ring can involve infinitely many powers of $t$. I'd be interested in seeing other reasons. -For the exercise proper, I think things clear up if you first prove that $x \in \mathfrak m$. If this is not the case, then there exists an $f \in A[[x]]$ such that $xf \equiv 1 \bmod\mathfrak m$, i.e., $1 - xf \in \mathfrak m$. Why is that impossible? -Added later: There's a nice discussion of $k[t][[x]]$ in Example 1.2 of Brian Conrad's handout. I'll try to incorporate some of the details here later on.<|endoftext|> -TITLE: Is the complement of a finite dimensional subspace always closed? -QUESTION [8 upvotes]: Let $F$ be a finite dimensional subspace of an infinite dimensional Banach space $X$, we know that $F$ is always topologically complemented in $X$, that is, there is always a closed subspace $W$ such that $X=F\oplus W$. -I am thinking about the converse. Suppose $W$ is a subspace of $X$ such that $X=F\oplus W$ for some finite dimensional subspace $F$. 
Is $W$ necessarily closed?
-I guess the answer should be negative but I cannot find such an example. Can somebody give a hint?
-Thanks!

-REPLY [6 votes]: If $f:X\to\mathbb C$ is a discontinuous linear functional, then $\ker f$ is not closed. If $v$ is in $X\setminus \ker f$, then $F=\mathbb C v$ and $W=\ker f$ gives a counterexample. ($X$ is the internal direct sum of $F$ and $W$ as vector spaces, but it is not a topological direct sum.)<|endoftext|>
-TITLE: Basic probability problem
-QUESTION [5 upvotes]: Problem states:

-Consider two events $A$ and $B$, with $P(A) = 0.4$ and $P(B) = 0.7$. Determine the maximum and the minimum possible values for $P(A \& B)$ and the conditions under which each of these values is attained.

-To solve, I considered the event with the lowest probability, $A$, to be a subset of the other, so the maximum value is attained under that circumstance, giving a probability of $0.4$.
-But the book states that the minimum is $0.1$, if $P(A \cup B) = 1$.
-I don't understand why! Because I thought that the minimum value is attained when the two events are disjoint... So the minimum value must be $0$...

-REPLY [3 votes]: $$
-\begin{align*}
-0\leq P(A \cup B) \leq 1 & \implies 0 \leq P(A)+P(B)-P(A \cap B) \leq 1\\
-& \implies 0\leq 0.4 + 0.7 - P(A\cap B) \leq 1 \\
-& \implies -1.1\leq -P(A\cap B)\leq -0.1 \\
-& \implies 0.1 \leq P(A\cap B) \leq 1.1.
-\end{align*}
-$$
-So, the minimum possible value of $P(A\cap B)$ is $0.1$, and this is attained if $P(A\cap B)=0.1$, $P(A^c \cap B) = 0.6$, $P(A \cap B^c)= 0.3$ and $P((A \cup B)^c) = 0.$

-For the maximum value, $P(A\cap B)\leq P(A)= 0.4$ and $P(A\cap B)\leq P(B)= 0.7$. Taking the most restrictive of these two values gives the maximum of $P(A\cap B)$ as $0.4$.
-This upper bound is attained if $A \subset B$.<|endoftext|>
-TITLE: Is there a value for $\pi$ that relates to triangles?
-QUESTION [9 upvotes]: So I heard that if one inscribes the largest circle that can fit into an equilateral triangle, then divides the perimeter of the triangle by the diameter of the inscribed circle, it gives a value which can be called "triangle $\pi$", and that value ($\sqrt{27}$) can be used in the place of regular $\pi$ to derive volumes of the other platonic solids. Is that true? Is there a different $\pi$ for triangles? What is that value? Is it close to $\sqrt{27}$? Can it be used to find volumes of platonic solids, especially the icosahedron and the one that looks like a pyramid flipped and stacked on its twin? 4 part question. Thanks; we have been arguing about it at work for weeks.

-REPLY [20 votes]: Yes, the ratio of the perimeter of an equilateral triangle to the diameter of its incircle is indeed $3\sqrt3$, or $\sqrt{27}$ as you wrote. I guess you can call it "triangle $\pi$" if you want, but I don't know of anyone else who does, and people might not take you very seriously if you did.
-Actually, I do find it rather interesting that the volume of a regular octahedron of inradius $r$ is $\frac43\overset{\scriptsize\triangle}\pi r^3$, just like the volume of a sphere except with $\overset{\scriptsize\triangle}\pi = 3\sqrt3$. But there don't seem to be any similar relationships with the other Platonic solids.

-To address just one of your edited questions: "Is there a different $\pi$ for triangles?" Not really.
$\pi$ is the name of a specific number whose value is about $3.14159\ldots$ You can always find a different number which is related to, say, equilateral triangles in a way analogous to how $\pi$ is related to circles, and give it a name of your choosing, such as "triangle $\pi$". But that's just an analogy. It doesn't make that number "a different $\pi$ for triangles", any more than Stephen Harper is "a different Barack Obama for Canada".

-On further reflection, whoever chose the diameter of the incircle to take the ratio against, as opposed to that of the circumcircle, the nine point circle, or any other circle associated with the triangle, definitely knew what they were doing. If the inradius is $r$ and you choose $\overset{\scriptsize\triangle}\pi$ so that the triangle's perimeter is $2\overset{\scriptsize\triangle}\pi r$, then it is also the case that the area of the triangle is $\overset{\scriptsize\triangle}\pi r^2$! This actually holds for arbitrary triangles, not just equilateral ones, and it doesn't work for any other value of $r$ different from the inradius.
-One can start to demystify this by observing that the incenter is the unique point whose perpendicular distance from all three edges of the triangle is the same, and equal to the inradius. Why this matters is easiest to see when generalized to $n$-sided polygons:

-If there exists a point whose perpendicular distance from all sides of a simple polygon is equal to the same value $r$, then the area of the polygon is $\frac12 r$ times the perimeter of the polygon.

-This now starts to look like an obvious fact: simply join all the vertices of the polygon to the inner point, and add up the areas of the triangles formed. And it explains the above property of the inradius, which always exists for any triangle (though typically such a point does not exist at all for an arbitrary $n$-sided polygon with $n>3$).<|endoftext|>
-TITLE: Necessary and sufficient condition that a localization of an integral domain is integrally closed
-QUESTION [9 upvotes]: Is the following proposition true?
-Proposition?
-Let $A$ be an integral domain, $K$ its field of fractions.
-Let $B$ be the integral closure of $A$ in $K$.
-Suppose $B$ is a finitely generated $A$-module.
-Let $I = \{a \in A; aB \subset A\}$.
-Let $P$ be a prime ideal of $A$.
-Then $A_P$ is not integrally closed if and only if $I \subset P$.
-EDIT
-I'm particularly interested in the case when $K$ is an algebraic number field and $A$ is an order of $K$ (a subring of $K$ which is a finitely generated $\mathbb{Z}$-module and contains a $\mathbb{Q}$-basis of $K$). In this case, the above condition gives a necessary and sufficient condition for $A_P$ to be a discrete valuation ring.

-REPLY [6 votes]: Yes, your proposition is true.
-The main point to know is that normalisation commutes with localization. That is to say, if $S$ is a multiplicative subset of $A$, then $(S^{-1} A)'$ equals $S^{-1} A'$, where the "prime" denotes the integral closure.
-In your case, set $S = A\setminus P$, so that $(A_P)' = S^{-1} A'$. And you're asking when $S^{-1} A' \subset S^{-1} A$, the reverse inclusion being obvious.
-The intuition is the following: $A'$ is generated over $A$ by some fractions, and if $S$ contains the denominators of these fractions, then $A'$ is a subset of $S^{-1} A$, and thus we have the equality.
-Let's go for a proof.
-Let $I$ be the conductor ideal $[A:A'] = \{ a \in A \mid aA' \subset A \}$. This is a kind of l.c.m. of the denominators.
-We shall prove that $[S^{-1} A : S^{-1} A'] = S^{-1} I$:

-Let $a/s$ be an element of $S^{-1}I$, with $a\in I$. Then $aS^{-1} A' = S^{-1}(aA') \subset S^{-1} A$.
-Let $a$ be an element of $[S^{-1} A: S^{-1} A']$. Since $A'$ is finite over $A$, we can write $A' = A[f_1,\dotsc,f_n]$, for some fractions $f_i$. By definition of the conductor ideal $a f_i \in S^{-1} A$; let us write it as $a_i/s_i$. Let $s$ be the product $\prod_i s_i$. It is clear that for all $i$ we have $saf_i\in A$, and thus $sa A'\subset A$, i.e. $sa\in I$, hence $a\in S^{-1} I$.

-Since $S^{-1} A' = S^{-1} A$ if and only if the corresponding conductor ideal contains $1$, the problem is almost solved. Indeed $S^{-1} I$ contains $1$ if and only if $S$ and $I$ have nonempty intersection.
-Hence, we have proved the following:

-A localisation $A_P$ at a prime ideal $P$ (assuming that $A'$ is finitely generated over $A$) is integrally closed if and only if $I$ is not included in $P$.<|endoftext|>
-TITLE: What is wrong with my `proof'? (solved)
-QUESTION [6 upvotes]: The question is:
-Let $k\in C^{0}(\mathbb{R}^{n}-\{0\})$ be a function such that $$k(tx)=t^{-n}k(x)$$ for $0\not=x\in\mathbb{R}^{n},t>0$. Show that the principal value $$\mathrm{p.v.}\int k(x)\phi(x)\,dx=\lim_{\epsilon\rightarrow 0}\int_{|x|>\epsilon}k(x)\phi(x)\,dx,\qquad\phi\in C^{\infty}_{c}(\mathbb{R}^{n}),$$ exists if and only if $$\int_{\mathbb{S}^{n-1}}k(\theta)\,d\omega(\theta)=0,$$ where $d\omega$ is the usual surface measure on the unit sphere $\mathbb{S}^{n-1}$.
-Show also that the first expression defines a distribution when the second equation holds.

-REPLY [3 votes]: We denote by $ds(\theta)$ the surface measure on $\mathbb{S}^{n-1}$ at angle $\theta$. We use $|x|=r,\theta=\frac{x}{|x|}\in \mathbb{S}^{n-1}$. Then we have $dx=r^{n-1}drds$. Thus we have:
-\begin{align*}
-\int_{|x|\ge \epsilon} k(x)\phi(x)dx
-&=\int_{|x|\ge \epsilon} k(\frac{x}{|x|})\cdot |x|^{-n}\phi(x)dx\\
-&=\int_{r\ge \epsilon} \int_{\theta\in \mathbb{S}^{n-1}}k(\theta)\cdot \frac{1}{r^{n}}\phi(r\theta)r^{n-1}drds(\theta)\\
-&=\int_{r\ge \epsilon} \int_{\theta\in \mathbb{S}^{n-1}}k(\theta)\cdot \frac{1}{r}\phi(r\theta)drds(\theta)
-\end{align*}
-To prove the `only if' part, consider a function $\phi$ that is $1$ on the ball $B^{A}_{0}$ of radius $A$ about the origin and has compact support in $\mathbb{R}^{n}$ with maximum value $1$. Then we have
-\begin{align*}
-\int_{r\ge \epsilon} \int_{\theta\in \mathbb{S}^{n-1}}k(\theta) \frac{1}{r}\phi(r\theta)drds(\theta)
-&=\int^{A}_{\epsilon}\int_{\theta\in \mathbb{S}^{n-1}} k(\theta) \frac{1}{r}drds+\int^{\infty}_{A}\int_{\theta\in \mathbb{S}^{n-1}} k(\theta) \phi(r\theta)\frac{1}{r}drds\\
-&=(\log(A)-\log(\epsilon))\int_{\theta\in \mathbb{S}^{n-1}} k(\theta)ds+\int^{\infty}_{A}\int_{\theta\in \mathbb{S}^{n-1}} k(\theta) \phi(r\theta)\frac{1}{r}drds
-\end{align*}
-Since $\phi$ has compact support, we may assume its support is in a ball of radius $B$, while $|k(\theta)|$ has maximum $K$ on the compact set $\mathbb{S}^{n-1}$. With $C=\int_{\mathbb{S}^{n-1}}ds$ we have
-\begin{align*}\left|\int^{\infty}_{A}\int_{\theta\in \mathbb{S}^{n-1}} k(\theta) \phi(r\theta)\frac{1}{r}drds\right|
-&\le \int^{\infty}_{A}\int_{\theta\in \mathbb{S}^{n-1}} |k(\theta)||\phi(r\theta)|\frac{1}{r}drds\\
-&\le\int^{B}_{A}\int_{\mathbb{S}^{n-1}}K\frac{1}{r}drds=K(\log(B)-\log(A))C
-\end{align*}
-Thus whether the original limit exists depends only on the term $(\log(A)-\log(\epsilon))\int_{\theta\in \mathbb{S}^{n-1}} k(\theta)ds$, and this diverges as $\epsilon\rightarrow 0$ unless $\int_{\theta\in \mathbb{S}^{n-1}} k(\theta)ds=0$.
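-Before turning to the converse, here is a quick numerical illustration of this dichotomy (a sketch of my own, assuming NumPy; it is not part of the argument). Take $n=2$, the radial bump $\phi(x)=e^{-r^2}$, and $k(x)=g(\theta)/r^2$; the truncated integral then factors into an angular part times a radial part, and the radial part grows like $\log(1/\epsilon)$:
-    import numpy as np
-
-    def trap(y, x):
-        # plain trapezoidal rule, kept explicit for clarity
-        return float(np.sum((y[1:] + y[:-1]) * np.diff(x)) / 2.0)
-
-    def truncated_integral(g, eps, R=10.0):
-        # integral of k(x)*phi(x) over eps < |x| < R in the plane, with
-        # k(x) = g(theta)/r^2 and phi(x) = exp(-r^2) radial; since
-        # dx = r dr dtheta, it factors as
-        #   (integral of g dtheta) * (integral of exp(-r^2)/r dr),
-        # and the radial factor is computed after substituting r = e^s.
-        s = np.linspace(np.log(eps), np.log(R), 20001)
-        radial = trap(np.exp(-np.exp(2.0 * s)), s)
-        th = np.linspace(0.0, 2.0 * np.pi, 4001)
-        return trap(g(th), th) * radial
-
-    for eps in [1e-1, 1e-2, 1e-3, 1e-4]:
-        mean_zero = truncated_integral(np.cos, eps)                    # integral of cos is 0
-        mean_pos = truncated_integral(lambda t: 1.0 + np.cos(t), eps)  # integral is 2*pi
-        print(eps, mean_zero, mean_pos)
-The first column stays essentially $0$ while the second grows without bound as $\epsilon\rightarrow 0$, which is exactly the behaviour of the term $(\log(A)-\log(\epsilon))\int_{\mathbb{S}^{n-1}}k(\theta)ds$ isolated above.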
-To prove the `if' part, we assume $$\int_{\theta\in \mathbb{S}^{n-1}} k(\theta)ds=0.$$ Then the above integral becomes
-\begin{align*}
-\int_{r\ge \epsilon} \int_{\theta\in \mathbb{S}^{n-1}}k(\theta)\frac{1}{r}\phi(r\theta)drds(\theta)
-\end{align*}
-Since $\phi(r\theta)\in C^{\infty}_{c}(\mathbb{R}^{n})$, $$\frac{\partial \phi}{\partial r}=\sum \frac{\partial \phi}{\partial x_{i}}\cdot\frac{\partial x_{i}}{\partial r}=\sum\frac{\partial \phi}{\partial x_{i}}\theta_{i},$$ so $\phi$ is infinitely differentiable in $r$ as well.
-We may expand $\phi(r\theta)=\phi(r,\theta)=\phi(0)+\psi(r,\theta)r$ with $$\psi(r,\theta)=\frac{\phi(r,\theta)-\phi(0)}{r}=\int^{1}_{0}\frac{\partial \phi}{\partial r}(tr,\theta)dt$$ continuous in $r$ and $\theta$. Then $\frac{\phi}{r}=\frac{\phi(0)}{r}+\psi(r,\theta)$; substitute this into the above formula and assume the support of $\phi$ is in a ball of radius $L$; then we have:
-\begin{align*}
-\int_{r\ge \epsilon} \int_{\theta\in \mathbb{S}^{n-1}}k(\theta) \frac{1}{r}\phi(r\theta)drds(\theta)
-&=\int^{L}_{\epsilon}\int_{\theta\in \mathbb{S}^{n-1}}k(\theta)\frac{\phi(0)}{r}drds(\theta)+\int^{L}_{\epsilon}\int_{\theta\in \mathbb{S}^{n-1}}k(\theta)\psi(r,\theta)drds(\theta)\\
-&=\phi(0)\int^{L}_{\epsilon}\frac{1}{r}dr\int_{\theta\in \mathbb{S}^{n-1}}k(\theta)ds(\theta)+\int^{L}_{\epsilon}\int_{\theta\in \mathbb{S}^{n-1}}k(\theta)\psi(r,\theta)drds(\theta)\\
-&=\int^{L}_{\epsilon}\int_{\theta\in \mathbb{S}^{n-1}}k(\theta)\psi(r,\theta)drds(\theta)
-\end{align*}
-We now prove that as $\epsilon\rightarrow 0$ the above integral converges to $$\int^{L}_{0}\int_{\theta\in \mathbb{S}^{n-1}}k(\theta)\psi(r,\theta)drds(\theta).$$ Since $\psi(r,\theta)$ is continuous in $r$ and $\theta$, we have $|\psi(r,\theta)|\le M$ for all $x\in B^{L}_{0}$. Thus the difference integral satisfies
-\begin{align*}
-\left|\int^{\epsilon}_{0}\int_{\theta\in \mathbb{S}^{n-1}}k(\theta)\psi(r,\theta)drds(\theta)\right|
-&\le \int^{\epsilon}_{0}\int_{\theta\in \mathbb{S}^{n-1}}|k(\theta)\psi(r,\theta)|drds(\theta)\\
-&\le KMC\int^{\epsilon}_{0}dr\\
-&\le KMC\epsilon\\
-&\rightarrow 0
-\end{align*}
-Now we show the above $k$ defines a distribution. It suffices to show that for any sequence $\{\phi_{n}\}\rightarrow 0$ in $C^{\infty}_{c}$ we have $\langle k,\phi_{n}\rangle \rightarrow 0$. We have proved that for any $\phi\in C^{\infty}_{c}$, $$\langle k,\phi\rangle=\int^{L}_{0}\int_{\theta\in \mathbb{S}^{n-1}}k(\theta)\psi(r,\theta)drds(\theta),$$ where $$\psi(r,\theta)=\frac{\phi(r,\theta)-\phi(0,\theta)}{r}=\frac{\int^{r}_{0}\phi'(t,\theta)dt}{r}.$$ Since $\phi_{n}\rightarrow 0$, the derivatives $\partial^{\alpha}\phi_{n}\rightarrow 0$ uniformly on the support of $\phi_{n}$ as well. Notice $$\frac{\partial \phi}{\partial r}=\sum \frac{\partial \phi}{\partial x_{i}}\cdot\frac{\partial x_{i}}{\partial r}=\sum\frac{\partial \phi}{\partial x_{i}}\theta_{i}.$$ Thus we have $$\frac{\partial \phi_{n}(r,\theta)}{\partial r}\rightarrow 0$$ uniformly on the compact region $B^{L}_{0}$.
-Thus we have
-\begin{align*}
-\left|\int^{L}_{0}\int_{\theta\in \mathbb{S}^{n-1}}k(\theta)\psi_{n}(r,\theta)drds(\theta)\right|
-&\le \int^{L}_{0}\int_{\theta\in \mathbb{S}^{n-1}}|k(\theta)||\psi_{n}(r,\theta)|drds(\theta)\\
-&=\int^{L}_{0}\int_{\theta\in \mathbb{S}^{n-1}}|k(\theta)|\left|\frac{\int^{r}_{0}\frac{\partial \phi_{n}}{\partial r}(t,\theta)dt}{r}\right|drds(\theta)\\
-&\le\int^{L}_{0}\int_{\theta\in \mathbb{S}^{n-1}}|k(\theta)|\frac{\max(|\partial\phi_{n}|)\cdot r}{r}drds(\theta)\\
-&=\max(|\partial\phi_{n}|)\int^{L}_{0}\int_{\theta\in \mathbb{S}^{n-1}}|k(\theta)|drds(\theta)\\
-&\le\max(|\partial\phi_{n}|)K\int^{L}_{0}\int_{\theta\in \mathbb{S}^{n-1}}drds(\theta)\\
-&=\max(|\partial\phi_{n}|)KLC\\
-&\rightarrow 0
-\end{align*}
-Therefore $\phi\mapsto\langle k,\phi \rangle$ is a distribution.<|endoftext|>
-TITLE: "Fully correlated" definition
-QUESTION [5 upvotes]: Really sorry to be a noob, but I'm a programmer, not a mathematician, and all of my knowledge about statistics comes from the book "Schaum's Outline of Theory and Problems of Probability, Random Variables, and Random Processes".
-I'm implementing a UKF for target tracking using C++. Everything went well until an error occurred about the state covariance matrix not being positive definite.
-After a little research, I found this link Under what circumstance will a covariance matrix be positive semi-definite rather than positive definite? which almost answers everything I need.
-Only one thing I don't understand: The answer says "This happens if and only if some linear combination of X is ‘fully correlated’". Can anyone explain to me what "fully correlated" means? An example would be great. I have searched Google for its definition but with no luck at all.

-REPLY [3 votes]: (I was the poster of the question to which the OP refers and I am heartily embarrassed by my distinct lack of clarity).
-The intention was that the relationship is exactly linear and the correlation is 1 or -1.
-That is:
-$\rho_{i,j} = \frac{\sigma_{i,j}}{\sigma_i \sigma_j} = \pm 1$
-where $\sigma_{i,j}$ is the covariance of elements $i$ and $j$, and ${\sigma_i}^2$ and ${\sigma_j}^2$ are the variances of $i$ and $j$ respectively. In all cases $i \ne j$, except for the scalar case which is covered in the original post.
-There is the assumption that the random variable is multivariate normal, which is an assumption of the (Unscented) Kalman-based estimators that are the subject of both the original question and this one.
-I hope this doesn't cause past answers to be wrong!<|endoftext|>
-TITLE: The function $f(x) = \int_0^\infty \frac{x^t}{\Gamma(t+1)} \, dt$
-QUESTION [18 upvotes]: Does anyone know if this function has a name? I came up with it by looking at the power series for $e^z$, changing the summation to an integral, and substituting the gamma function for the factorial function.

-REPLY [6 votes]: Laplace transform
-We consider the function for $x\ge 0$.
-As found by @Mathlover, it is not hard to get the Laplace transform of $f(x)$.
-Inverting, we find
-$$f(x) = \frac{1}{2\pi i} \int_\gamma ds\, e^{s x}\frac{1}{s\log s},$$
-where $\gamma$ is the Bromwich contour.
-There is a singularity at $s=1$ and a cut from $s=0\to -\infty$.
-We deform the contour in the usual way and simplify, with the result
-$$\begin{eqnarray*}
-f(x) &=& e^x - \int_0^\infty d\rho\,
-\frac{e^{-\rho x}}{\rho(\log^2\rho + \pi^2)} \\
-&=& e^x - \int_{-\infty}^\infty d\sigma\,
-\frac{e^{-x e^\sigma}}{\sigma^2 + \pi^2}.
-\end{eqnarray*}$$
-It is possible to go further with the last integral, but we'll stop here.
-Addendum: Series
-Let's examine the series for $f(x)$.
-Define
-$$\begin{eqnarray*}
-\Delta(x) &=& \int_0^\infty d\sigma\,
-\frac{e^{-x e^\sigma} + e^{-x e^{-\sigma}}}{\sigma^2 + \pi^2}
-\end{eqnarray*}$$
-Note that $f(x) = e^x - \Delta(x)$.
-We have $e^{-x e^\sigma} + e^{-x e^{-\sigma}} \le e^{-x}+1$,
-and $\int_0^\infty d\sigma\, \frac{1}{\sigma^2+\pi^2}$ converges,
-so $\Delta(x)$ converges uniformly.
-Clearly the derivative of $\Delta(x)$ doesn't exist at $x=0$
-(the integral
-$\int_0^\infty d\sigma\,\frac{\cosh \sigma}{\sigma^2+\pi^2}$ diverges)
-so there is no Maclaurin series for $f(x)$.
-We can do a Taylor series about any $x>0$.
-Of course, we must evaluate integrals of the form
-$$\left.\frac{d^k}{d x^k} \Delta(x)\right|_{x=x_0} = (-1)^k \int_0^\infty d\sigma\,
-\frac{e^{k\sigma -x_0 e^\sigma} + e^{-k\sigma -x_0 e^{-\sigma}}}{\sigma^2 + \pi^2}.$$
-These integrals are uniformly convergent for $x_0 >0$.
-In this regard it is useful to notice that
-$e^{-k\sigma -x_0 e^{-\sigma}} \le 1$ and
-$$e^{k\sigma -x_0 e^\sigma} \le
-\begin{cases}
-e^{-x_0}, & x_0 \ge k \\
-\left(\frac{k}{x_0 e}\right)^k, & \mathrm{else}.
-\end{cases}$$
-For large $x_0$ the dominant terms in the expansion come from the series for $e^x$, as expected.
-Trapezoidal rule
-Approximate the integral by a sum.
-Let the step size be $1/m$, where $m=1,2,\ldots$, and let
-$g_k = \frac{x^{k/m}}{\Gamma(k/m+1)}$.
-Then,
-$$\begin{eqnarray*}
-\int_0^\infty dt\, \frac{x^t}{\Gamma(t+1)} &\simeq&
-\lim_{n\to\infty}
-\left(
-\frac{1}{m}\sum_{k=0}^n g_k -\frac{1}{2m}(g_0+g_n) \right)\\
-&=& \frac{1}{m}E_{1/m}(x^{1/m}) -\frac{1}{2m} \\
-&=& \frac{1}{m} e^x\left(1+\sum_{k=1}^{m-1}\frac{\gamma(1-k/m,x)}{\Gamma(1-k/m)}\right) - \frac{1}{2m},
-\end{eqnarray*}$$
-where $E_\alpha(z)$ is the Mittag-Leffler function and $\gamma$ is the lower incomplete gamma function.
-(The relation for $E_{1/m}(x^{1/m})$ used above can be found in this paper.)
-In the limit $m\to\infty$ the sum is equal to the original integral, so
-$$\begin{eqnarray*}
-f(x) &=& e^x \lim_{m\to\infty} \frac{1}{m} \sum_{k=1}^{m-1}\frac{\gamma(1-k/m,x)}{\Gamma(1-k/m)} \\
-&=& e^x \int_0^1 ds\,\frac{\gamma(s,x)}{\Gamma(s)}
-\end{eqnarray*}$$
-This is the integral formula found by @sos440 using another method.
-Here are the sums for $m=1$ and $2$.
-$$\begin{array}{ll}
-m & \text{approximation to } f(x) \\ \hline
-1 & e^x - 1/2 \\
-2 & \frac{1}{2}e^x \left(\mathrm{erf}\left(\sqrt{x}\right) + 1\right) -\frac{1}{4} \end{array}$$
-Since $f(x) \simeq e^x-1/2$, $f^{-1}(x)\simeq \log(x+1/2)$.
-Below we plot $f(x)$ (solid), $e^x$ (dashed, black), and the approximations for $m=1$ (dashed, red), and $m=2$ (dashed, blue).<|endoftext|>
-TITLE: Is there a distributive law for ideals?
-QUESTION [13 upvotes]: I'm curious if there is some sort of distributive law for ideals.

-If $I,J,K$ are ideals in an arbitrary ring, does $I(J+K)=IJ+IK$?

-The containment "$\subset$" is pretty clear I think. But the opposite containment doesn't feel like it should work. I couldn't work out a counterexample with ideals in $\mathbb{Z}$ however. So does such an equality always hold or not?

-REPLY [21 votes]: Note that if $A$, $B$, and $C$ are ideals, and $B\subseteq C$, then $AB\subseteq AC$; and if $A$ and $B$ are both contained in $C$, then $A+B\subseteq C$.
-Since $J\subseteq J+K$, then $IJ\subseteq I(J+K)$. Since $K\subseteq J+K$, then $IK\subseteq I(J+K)$.
Therefore, $IJ$ and $IK$ are both contained in $I(J+K)$, so $IJ+IK\subseteq I(J+K)$.
-For the converse inclusion, a general element of $I(J+K)$ is of the form
-$$\sum a_i(j_i+k_i)$$
-with $a_i\in I$, $j_i\in J$, and $k_i\in K$. And we have
-$$\sum a_i(j_i+k_i) = \sum\Bigl( a_ij_i + a_ik_i\Bigr) = \left( \sum a_ij_i\right) + \left(\sum a_ik_i\right) \in IJ + IK.$$<|endoftext|>
-TITLE: Can such a function exist?
-QUESTION [5 upvotes]: Denote by $\Sigma$ the collection of all $(S, \succeq)$ where $S \subset \mathbb{R}$ is compact and $\succeq$ is an arbitrary total order on $S$.
-Does there exist a function $f: \mathbb{R} \to \mathbb{R}$ such that for all $(S, \succeq) \in \Sigma$ there exists a compact interval $I$ with the properties that

-$f(I) = S$
-$x \geq y$ implies $f(x) \succeq f(y)$ for all $x,y \in I$?

-If so, how regular can we take $f$ to be? The motivation is that, basically, I am trying to construct the analogue of a normal sequence but on $\mathbb{R}$ instead of $\mathbb{N}$.
-EDIT: As Brian M. Scott points out, this is not possible if the orderings have no greatest and least elements. However, since adding this assumption doesn't go against the intuition of generalizing normal sequences, I am still interested in the answer if we restrict the various total orders to have minimal and maximal elements.
-Thanks in advance.

-REPLY [3 votes]: If I understand correctly what you’re asking, the answer is no. Let $\preceq$ be a linear order on $[0,1]$ having no last element; no compact subset of $\Bbb R$ can be mapped onto $[0,1]$ in such a way that $f(x)\preceq f(y)$ whenever $x\le y$, because every compact subset of $\Bbb R$ has a last element with respect to the usual order. That is, if $I$ is a compact subset of $\Bbb R$, let $u=\max I$, and let $x\in[0,1]$ be such that $f(u)\prec x$; then $x\notin\operatorname{ran}f$, so $f[I]\ne[0,1]$.
-Added: Restricting the linear orders to those with first and last elements doesn’t help. Well-order $[0,1]=\{y_\xi:\xi<2^\omega+1\}$. Now let $I$ be any compact interval, say $[a,b]$, and let $f:I\to[0,1]$ be an order-preserving surjection. Clearly $f(b)=y_{2^\omega}$ is the last element in the well-ordering of $[0,1]$. Let $\langle x_n:n\in\omega\rangle$ be a strictly increasing sequence in $[a,b]$ converging to $b$. Then $\langle f(x_n):n\in\omega\rangle$ is an increasing sequence in the well-ordering of $[0,1]$, and it has a supremum, say $y_\alpha$. Then $\alpha<2^\omega$, since $\operatorname{cf}2^\omega>\omega$, and no $y_\beta$ such that $\alpha<\beta<2^\omega$ is in the range of $f$.
-Actually, this requires a bit of modification if $f$ is not required to be strictly order-preserving (i.e., a bijection). In that case let $B=f^{-1}[\{y_{2^\omega}\}]$; $B$ must be of the form $(c,b]$ or $[c,b]$ for some $c\in I$. If $B=[c,b]$, replace $b$ by $c$ in the previous paragraph. If $B=(c,b]$, note that nothing between $f(c)$ and $f(b)$ is in the range of $f$.<|endoftext|>
-TITLE: Do multiplicative maps of matrices factor through determinants?
-QUESTION [9 upvotes]: Given a map $f:M_n(k)\to k$ (with $k$ some field) such that $f(AB)=f(A)f(B)$ for all matrices $A$ and $B$, is it necessarily the case that $f$ factors through the determinant, i.e. does there exist a multiplicative map $g:k\to k$ such that $f=g\circ\det\,$? Are constraints on $k$ necessary?
-A simple corollary would be that nonzero multiplicative maps on subgroups of the general linear group $GL_n(k)$ factor through multiplicative maps on the units $k^\times\to k^\times$.
-
-Two definitions of $\det$ I'm aware of (written with our setting in mind):

-The unique alternating multilinear map (of column vectors in a matrix) sending $I$ to $1_k$.
-The trace of the map induced by $A$ on the $n$th exterior power $\mathrm{Alt}^nk^n$.

-By Gaussian elimination, any multiplicative map $f$ from matrices to the base field is determined by its values on the matrices representing elementary row operations and upper triangular matrices.
-One stumbling block is that it seems hard, in general, to fully characterize the multiplicative maps on the base field $k$. With a finite field it would just be integer powers and the zero map, but with the reals you get all (positive) real powers too, and some funky stuff may occur with other fields.

-REPLY [5 votes]: Your mapping $f$ sends all the commutators to $1_k$, so given that the determinant distinguishes the cosets in $GL_n(k)/SL_n(k)$, the answer to your question seems to be YES, whenever $GL_n(k)'=SL_n(k)$, and we restrict the domain of $f$ to invertible matrices.
-A lemma in N. Jacobson, Basic Algebra I, p. 377, tells us that $SL_n(k)$ is its own commutator subgroup (and obviously then also equal to $GL_n(k)'$) unless $n=2$ and $|k|=2$ or $3$. In the case $n=2$, $k=\mathbb{F}_2$ we have that $GL_2(k)=SL_2(k)\simeq S_3$, meaning that $GL_2(k)'$ is an index two subgroup of $SL_2(k)$. Using the ideas of the proof of this result it is easy to see that in the case $k=\mathbb{F}_3$ we do get that $GL_2(k)'=SL_2(k)$ (but $SL_2(k)$ is not its own commutator subgroup).
-So we know that the restriction of your mapping $f$ to $GL_n(k)$ must factor via the determinant except possibly in the case $n=2,k=\mathbb{F}_2$.<|endoftext|>
-TITLE: My first course in algebraic geometry: two simple questions
-QUESTION [12 upvotes]: I'm attending my first course in algebraic geometry, and my professor has chosen an approach which is a middle way between the basic algebraic geometry done in $\mathbb A^n_k$ and the approach with schemes, so substantially like in Milne's notes. I have for you some simple questions and I hope that the answers will not involve scheme theory:
-1) Let $(X,\mathcal O_X)$ be an affine variety, so it is isomorphic as a ringed space to $(V,\mathcal O_V)$ where $V$ is an affine algebraic set and $\mathcal O_V$ is the sheaf of regular functions. When one says "take the covering of $X$ with standard open sets", does it mean that we consider the covering of $X$ by those open sets of $X$ which correspond to the standard open sets $D(f)\subseteq V$?
-2) In class we have defined a quasi-coherent sheaf on $(V,\mathcal O_V)$ (we are in $\mathbb A^n_k$) as the sheaf $\widetilde M $ uniquely associated to the assignment $\widetilde M(D(f))=M_f$ where $M$ is a $\Gamma(V,\mathcal O_V)$-module. When we talk about a quasi-coherent sheaf defined on an abstract affine variety $(X,\mathcal O_X)$ isomorphic to $(V,\mathcal O_V)$, do we mean a sheaf $\mathcal F$ on $X$ with an isomorphism $\mathcal F\cong f_{\ast}\widetilde M$ (assuming that $f$ is the isomorphism of ringed spaces between $V$ and $X$)?

-REPLY [9 votes]: 1) Standard open sets are defined for every locally ringed space. If $f \in \Gamma(X,\mathcal{O}_X)$, then $X_f$ (sometimes also called $D(f)$) is by definition the set of all $x \in X$ such that $f_x \notin \mathfrak{m}_x$, where $\mathfrak{m}_x$ is the maximal ideal of the local ring $\mathcal{O}_{X,x}$. Equivalently, $f(x) \neq 0$ in the residue field $k(x) = \mathcal{O}_{X,x}/\mathfrak{m}_x$.
This is also the reason why often $X_f$ is called the "locus where $f$ doesn't vanish" or "where $f$ is invertible". It is an easy exercise to show that $X_f$ is in fact open, and that we have the standard identities $X_1 = X$, $X_f \cap X_g = X_{fg}$. When $X$ is an affine algebraic set, this coincides with the locus where $f$ doesn't vanish defined in the usual sense (and probably this is what you meant by $D(f) \subseteq V$).
-2) Again quasi-coherent sheaves make sense for arbitrary ringed spaces. And it is a very bad idea to give definitions only for algebraic sets $\subseteq \mathbb{A}^n$ and try to extend them via chosen isomorphisms! You should work with intrinsic geometric objects instead, and (locally) ringed spaces provide a nice framework for that. So let's use this language.
-A quasi-coherent module on a ringed space $X$ is just a module $M$ (i.e. what most people call a sheaf of modules) on $X$ such that locally on $X$ there is a presentation $\mathcal{O}^{\oplus I} \to \mathcal{O}^{\oplus J} \to M \to 0$. So to be more precise: There is an open covering $X = \cup_i X_i$ such that for each $i$ there is an exact sequence (which, of course, does not belong to the data) $\mathcal{O}|_{X_i}^{\oplus I} \to \mathcal{O}|_{X_i}^{\oplus J} \to M|_{X_i} \to 0$. Quasi-coherent modules constitute a (tensor) category $\mathrm{Qcoh}(X)$, which is by the way a very interesting and deep invariant of $X$, especially when $X$ is a variety.
-How to construct quasi-coherent modules on a ringed space $X$? Well, pick a $\Gamma(X,\mathcal{O}_X)$-module $M$. Then I claim that we can construct a quasi-coherent module $\tilde{M}$ on $X$ as follows: Choose a presentation $\Gamma(X,\mathcal{O}_X)^{\oplus I} \to \Gamma(X,\mathcal{O}_X)^{\oplus J} \to M \to 0.$ Represent the morphism on the left as a "relation matrix" consisting of elements of $\Gamma(X,\mathcal{O}_X)$. Now, every such global section corresponds to a homomorphism $\mathcal{O}_X \to \mathcal{O}_X$. Thus we can produce a matrix consisting of endomorphisms of $\mathcal{O}_X$, and thus a morphism $\mathcal{O}_X^{\oplus I} \to \mathcal{O}_X^{\oplus J}$. Define $\tilde{M}$ to be the cokernel. By definition, this is quasi-coherent! This already produces lots of examples; in fact all of them when $X$ is an affine variety, but only few when $X$ is projective.
-To give a more concise definition which does not depend on the presentation: Just define $\tilde{M}$ to be the sheaf associated to the presheaf $U \mapsto \Gamma(U,\mathcal{O}_X) \otimes_{\Gamma(X,\mathcal{O}_X)} M$. This definition easily implies a more conceptual characterization of the functor $M \to \tilde{M}$ from $\Gamma(X,\mathcal{O}_X)$-modules to quasi-coherent modules on $X$: It is left adjoint to the global section functor! In fact, everything you want to know about $\tilde{M}$ already follows from this adjunction. You may forget about the details of the construction, you just have to remember $\hom(\tilde{M},F) \cong \hom(M,\Gamma(X,F))$, which actually holds for every module $F$ on $X$.
-So what happens when $X$ is some affine variety? Then the sets $X_f$ constitute a basis for the topology of $X$, and we have $\Gamma(X_f,\mathcal{O}_X) = \Gamma(X,\mathcal{O}_X)_f$. Namely, this is well-known if $X \subseteq \mathbb{A}^n$ and then generalizes immediately to affine varieties, which are isomorphic as ringed spaces to such concrete varieties. Let $M$ be a $\Gamma(X,\mathcal{O}_X)$-module. Now it turns out that the presheaf defined above is actually a sheaf!
This comes down to the following: If $f_1,\dotsc,f_n \in \Gamma(X,\mathcal{O}_X)$ generate the unit ideal (i.e. the corresponding sets $X_{f_i}$ cover $X$), then the canonical sequence
-$$0 \to M \to \prod_{i} M_{f_i} \to \prod_{i,j} M_{f_i f_j}$$
-is exact. Everyone should have done this proof instead of looking it up in the standard sources, because I think it is quite enlightening and in fact purely geometric if you think of $f_1,\dotsc,f_n$ as a partition of unity.
-Anyway, so this tells us that we don't need associated sheaves in the definition of $\tilde{M}$. Thus, by definition, on the open subset $X_f$ it is given by
-$$\Gamma(X_f,\tilde{M}) = \Gamma(X_f,\mathcal{O}_X) \otimes_{\Gamma(X,\mathcal{O}_X)} M = \Gamma(X,\mathcal{O}_X)_f \otimes_{\Gamma(X,\mathcal{O}_X)} M = M_f.$$
-So this describes some quasi-coherent sheaves on affine varieties. In fact, one can show that every quasi-coherent sheaf on an affine variety $X$ has the form $\tilde{M}$. Namely, one shows that for every such sheaf $F$ the canonical counit morphism of the adjunction mentioned above $\tilde{\Gamma(X,F)} \to F$ is an isomorphism. Again, this is a very nice exercise. After some thought you will see that this is just another application of the exact sequence above. So this provides, for every affine variety, an equivalence of categories
-$$\mathrm{Qcoh}(X) \cong \mathrm{Mod}(\Gamma(X,\mathcal{O}_X)).$$
-By the way, if you define $\tilde{M}$ on an affine variety by $\Gamma(X_f,\tilde{M}) = M_f$ and extend it via projective limits to arbitrary open subsets, then you probably would like to know that this is a sheaf. And again this comes down to the exact sequence above. You cannot get around it. I don't like this approach because it is somewhat clumsy, you don't get the general picture, and it doesn't produce a formula for $\tilde{M}(U)$ for arbitrary $U$. Therefore I've chosen the rather abstract but hopefully concise approach above. Of course nothing is new, you can find all that in EGA I, the Stacks Project, etc.<|endoftext|>
-TITLE: Number of elements vs cardinality vs size
-QUESTION [5 upvotes]: I have been wondering about the definitions of cardinality and number of elements. One mathematician told me that one can't say that the cardinality or size of the set $\{1\}$ is one; it should be said that the number of elements in the set is one. I guess that his opinion is that there is no term "size" in mathematics. Is this true? On the other hand, if we have an infinite set like $\mathbb Z$, can we say that the number of elements of $\mathbb Z$ is infinite, or is the term "number of elements" used only for finite sets?

-REPLY [4 votes]: The question you need to ask yourself is "what is the meaning of the notion number?"
-Natural numbers can be used to measure size, length, etc., while rational numbers measure ratios between two integers, real numbers measure length... what do complex numbers measure?
-In mathematics we allow ourselves to define new ways of measuring things, and usually we refer to the values of these measures as numbers. Indeed there is no real difference between the length of a $100$-meter running track and a $100$-meter-long hot dog...
-When measuring the size of a set we came up with a clever way to discuss infinite sets. This way is "cardinality", and the cardinal of a set represents in a good sense the "number of elements in the set".
So it has a perfectly good meaning to say that a set has $\aleph_0$ many elements, or that a set is of size $\aleph_1$.<|endoftext|>
-TITLE: About the Wasserstein "metric"
-QUESTION [11 upvotes]: I've just encountered the Wasserstein metric, and it doesn't seem obvious to me why this is in fact a metric on the space of measures of a given metric space $X$. Except for non-negativity and symmetry (which are obvious), I don't know how to proceed.
-Do you guys have any advice or links to useful references?
-Thanks in advance!
-Cyril

-REPLY [17 votes]: So I assume that what puzzles you are the triangle inequality and $W_{p}(\mu,\mu)=0$, where $W_{p}$ denotes the $p$-Wasserstein metric.
-Here's some preliminary information. I will denote by $\Pi(\mu,\nu)$ the collection of all transference plans from $\mu$ to $\nu$, i.e. $\pi\in\Pi(\mu,\nu)$ iff $\mu$ is the first marginal of $\pi$ and $\nu$ is the second. This can also be expressed in the form $\mu=(\mathrm{pr}_{1})_{\#}\pi$ and $\nu=(\mathrm{pr}_{2})_{\#}\pi$, where $\#$ denotes the push-forward. If $(X,d)$ is Polish then for every pair of probability measures $\mu,\nu$ there exists an optimal transference plan $\pi\in\Pi(\mu,\nu)$ so that $W_{p}(\mu,\nu)=\left(\int_{X\times X}d(x,y)^{p}\,d\pi(x,y)\right)^{\frac{1}{p}}$. The proof of this can be found in the book 'Topics in Optimal Transportation', Cedric Villani, 2003, and the key point consists of noting that $\Pi(\mu,\nu)$ is compact with respect to weak convergence of measures (which is shown by using Prokhorov's theorem).
-Now to the metric itself.
-The triangle inequality uses a so-called "gluing lemma" (also found in Villani's book). It states that if $\mu_{1},\mu_{2},\mu_{3}$ are Borel probability measures on $X$, and $\pi_{1,2}\in\Pi(\mu_{1},\mu_{2})$ and $\pi_{2,3}\in\Pi(\mu_{2},\mu_{3})$ are optimal transference plans, then there exists a Borel probability measure $\mu$ on $X^{3}$ with marginal $\pi_{1,2}$ on the left copy of $X\times X$ and $\pi_{2,3}$ on the right copy of $X\times X$. This measure in a sense glues together $\pi_{1,2}$ and $\pi_{2,3}$. It follows by a simple argument using the marginal properties of each measure that the marginal of $\mu$ on $X\times X$ (the first and third factors), denoted by $\pi_{1,3}$, is a transference plan in $\Pi(\mu_{1},\mu_{3})$ (not necessarily optimal!) $(*)$. Using the Minkowski inequality in $L^{p}(X^{3},\mu)$ $(**)$, the marginal properties of the measures $(***)$, and the optimality of $\pi_{1,2}$ and $\pi_{2,3}$ $(****)$, we obtain
-\begin{align*}
-W_{p}(\mu_{1},\mu_{3}) &\overset{(*)}{\leq} \bigg(\int_{X\times X}d(x,z)^{p}\,d\pi_{1,3}(x,z)\bigg)^{\frac{1}{p}}\overset{(***)}{=}\bigg(\int_{X\times X\times X}d(x,z)^{p}\,d\mu(x,y,z)\bigg)^{\frac{1}{p}} \\
- &\leq \bigg(\int_{X\times X\times X}(d(x,y)+d(y,z))^{p}\,d\mu(x,y,z)\bigg)^{\frac{1}{p}} \\
- &\overset{(**)}{\leq}\bigg(\int_{X\times X\times X}d(x,y)^{p}\,d\mu(x,y,z)\bigg)^{\frac{1}{p}}+\bigg(\int_{X\times X\times X}d(y,z)^{p}\,d\mu(x,y,z)\bigg)^{\frac{1}{p}} \\
- &\overset{(***)}{=}\bigg(\int_{X\times X}d(x,y)^{p}\,d\pi_{1,2}(x,y)\bigg)^{\frac{1}{p}}+\bigg(\int_{X\times X}d(y,z)^{p}\,d\pi_{2,3}(y,z)\bigg)^{\frac{1}{p}} \\
- &\overset{(****)}{=}W_{p}(\mu_{1},\mu_{2})+W_{p}(\mu_{2},\mu_{3}).
-\end{align*}
-So we have the triangle inequality.
-About $W_{p}(\mu,\mu)=0$: take the homeomorphism $f:X\to\Delta$ given by $x\mapsto(x,x)$, where $\Delta$ is the "diagonal" of $X\times X$.
Then take $\nu:=f_{\#}\mu$ (which is a Borel probability measure on the diagonal $\Delta$) and furthermore define a Borel probability measure $\pi$ on the product space $X\times X$ by setting $\pi(A)=\nu(A\cap\Delta)$ for all Borel sets $A$. Now $\pi$ is a transference plan from $\mu$ to itself (not necessarily optimal!; the proof is straightforward), and it vanishes outside the diagonal (i.e. $\pi(\Delta^{c})=0$). Since the diagonal is the zero set of the metric $d$, we conclude that
-\begin{equation*}
-W_{p}(\mu,\mu)^{p}\leq \int_{X\times X}d(x,y)^{p}\,d\pi(x,y)=\int_{\Delta}d(x,y)^{p}\,d\pi(x,y)+\int_{\Delta^{c}}d(x,y)^{p}\,d\pi(x,y)=0+0=0,
-\end{equation*}
-whence $W_{p}(\mu,\mu)=0$.<|endoftext|>
-TITLE: Example of an application of a theorem about ideals in rings of fractions in Atiyah-MacDonald
-QUESTION [7 upvotes]: In Atiyah-MacDonald, we have the following theorem (p. 41):
-Proposition 3.11.
-i) Every ideal in $S^{-1}R$ is an extended ideal.
-ii) If $I$ is an ideal in $R$ then $I^{ec} = \bigcup_{s \in S} (I : \langle s \rangle )$. Hence $I^e = (1) = S^{-1}R$ if and only if $I$ meets $S$.
-iii) $I = I^{ec}$ if and only if no element of $S$ is a zero-divisor in $R/I$.
-iv) The prime ideals of $S^{-1}R$ are in one-to-one correspondence with the prime ideals of $R$ which don't meet $S$.
-(I'm omitting part v) of the theorem since my question is about ii), iii) and iv).)
-While I am able to prove this theorem, I'm wondering how I'll be able to remember it, in particular, statements ii)-iv). Can anyone give me an example of where I'll be using either one of these three statements or all of them?
-Thanks.

-REPLY [5 votes]: Of all of the statements above, number 4 is the one that I've used the most. Let me tell you about what number 4 can be used for:

-Proving the lying over theorem: Given a finite extension $A \subset B$, for every prime ideal $P \subset A$ there exists a prime ideal $Q$ of $B$ lying over $P$, i.e. $Q^{c} = P$. You can prove this by noticing that $S^{-1}B$ is a finitely generated $S^{-1}A$ - module (To prove this bit, I remember using some tensor products iirc) and then if you draw an appropriate diagram you can apply (iv). I believe you have Nakayama's Lemma available to you as $S^{-1}B$ is a finitely generated $S^{-1}A$ - module.
-Proving that a ring $A$ is absolutely flat iff $A_\mathfrak{m}$ is a field for each maximal ideal $\mathfrak{m}$.
-Proving that every prime ideal of $A$ is maximal iff $A/\mathfrak{R}$ is absolutely flat, where $\mathfrak{R}$ is the nilradical of $A$.
-Atiyah Macdonald problem 3.6 - For this problem one of the localisations iirc reduces to the case of a local ring, and that is very powerful!
-Proving that if a ring $A$ has no nilpotent elements and $\mathfrak{p}$ is a minimal prime ideal of $A$, then $A_\mathfrak{p}$ is a field. You can see a proof here.
-This isn't really about the ideal correspondence in (iv) but about the following result: If you have an ideal $I$ disjoint from $S$, then you can always find a prime ideal $P$ such that $P \supset I$ and is maximal with respect to the property that $P \cap S = \emptyset$. This is Krull's lemma, and you can use it to prove problems 1.14 and 3.7(i) of Atiyah-Macdonald.
-Proving that the nilradical of $S^{-1}A$ is $S^{-1}\mathfrak{R}$.
This is used in the first step in the "if" direction of #3 above.<|endoftext|>
-TITLE: Every absolutely continuous function with integrable derivative tends to zero at infinity
-QUESTION [5 upvotes]: I am given $f,f' \in L^1(\mathbb{R})$, and $f$ is absolutely continuous. I want to show that:
-$$\lim_{|x|\rightarrow \infty} f(x)= 0$$
-Not sure how to show this. I know that $f(x)=\int_0^x f'(t) \, dt+f(0)$, and I can assume without loss of generality that $f(0)=0$. Any help?
-Thanks in advance.

-REPLY [2 votes]: For every $x,s \in \mathbf{R}$ we have
-$$
-|f(x)| = \left|f(s)+\int_s^x f'(t)dt\right| \le |f(s)|+\left|\int_s^x |f'(t)|dt\right|.
-$$
-Integrating the above inequality over $x\le s \le x+1$ we get
-\begin{eqnarray}
-|f(x)|
-&\le& \int_{x}^{x+1}|f(s)|ds+\int_{x}^{x+1}(\int_x^s |f'(t)|dt)ds\cr
-&\le& \int_{x}^{x+1}|f(s)|ds+\int_{x}^{x+1}(\int_{x}^{x+1}|f'(t)|dt)ds\cr
-&=& \int_{x}^{x+1}|f(s)|ds+\int_{x}^{x+1}|f'(t)|dt.
-\end{eqnarray}
-Thus we have
-$$
-|f(x)| \le a(x):=\int_{x}^{\infty}|f|+\int_{x}^{\infty}|f'| \ \ \forall x \in \mathbf{R}.
-$$
-Since $f, f' \in L^1(\mathbf{R})$ it follows that $a(x) \to 0$ as $x \to \infty$.
-Similarly, one shows after integrating over $x-1 \le s \le x$ that
-$$
-|f(x)| \le b(x):=\int_{-\infty}^x|f|+\int_{-\infty}^x |f'| \ \ \forall x \in \mathbf{R},
-$$
-and for the same reason as before we have $b(x) \to 0$ as $x \to -\infty$.<|endoftext|>
-TITLE: Transcendental extension that is not simple
-QUESTION [6 upvotes]: Let $K$ be a field and $x, y$ be independent variables. How can I show that $K(x, y)/K$ is not a simple extension?

-REPLY [2 votes]: This is a simple and elementary proof, I think.
-Assume that $K(x,y)=K(t)$ for some $t\in K(x,y)$.
-Since $x\in K(t)$, there exist (coprime and nonzero) polynomials $u(z),v(z)\in K[z]$ such that $x=\frac{u(t)}{v(t)}$.
-Consider now the polynomial $f(z)=xv(z)-u(z)\in K(x)[z]$: clearly it is not $0$ and $f(t)=0$, which means that $t$ is algebraic over $K(x)$, that is, $K(x)[t]$ is an algebraic extension of $K(x)$. But then $K(x,y)=K(t)=K(x,t)=K(x)(t)=K(x)[t]$ is algebraic over $K(x)$, a contradiction.<|endoftext|>
-TITLE: Complete/incomplete theory
-QUESTION [5 upvotes]: I am thinking about completeness and incompleteness of theories, and to illustrate both properties I am thinking of how to build a complete system, and then turn it into an incomplete one.
-Example. Let the theory $\mathfrak{T}$ be the set of formulas that represent what a child (named Raul) can speak at some point in time. Let $D = \{ paul, raul \}$, $Functions = \{ fatherof \}$. Then the interpretation of the system affirms that Raul knows who his father is. At some point, Raul gets older and learns lots of new things. Then, if we update the interpretation of the system to reflect this change in the domain and don't update its axioms, we should have a simple example of completeness/incompleteness in theories, right?

-REPLY [15 votes]: A simpler example would be to consider, for example, the elementary theory of groups, whose axioms are simply the group axioms. A model of the theory is just any group. Because both abelian and non-abelian groups exist, the theory can neither prove nor disprove $\forall a.\forall b.a*b=b*a$, and so it is incomplete.
-On the other hand we can add some extra axioms to the theory that ensure that there is only one (isomorphism class of) model of the theory, for example:

-$\exists a.\exists b.~a\ne b\land a\ne 1 \land b\ne 1$.
-$\forall a.\forall b.~ a=1 \lor b=1 \lor a=b \lor a*b=1$.
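-As a tiny brute-force sanity check of these two sentences (a sketch of my own, not part of the answer), one can encode each small group by its multiplication table over $\{0,\dots,n-1\}$, with $0$ as the identity, and test the axioms directly:
-    from itertools import product
-
-    def cyclic(n):
-        # multiplication table of the cyclic group C_n (identity 0)
-        return [[(i + j) % n for j in range(n)] for i in range(n)]
-
-    # Klein four-group V4 (identity 0)
-    klein = [[0, 1, 2, 3], [1, 0, 3, 2], [2, 3, 0, 1], [3, 2, 1, 0]]
-
-    def satisfies_extra_axioms(table):
-        n = len(table)
-        # axiom 1: there exist two distinct non-identity elements
-        ax1 = any(a != b and a != 0 and b != 0
-                  for a, b in product(range(n), repeat=2))
-        # axiom 2: any two distinct non-identity elements multiply to 1
-        ax2 = all(a == 0 or b == 0 or a == b or table[a][b] == 0
-                  for a, b in product(range(n), repeat=2))
-        return ax1 and ax2
-
-    for name, G in [("C2", cyclic(2)), ("C3", cyclic(3)),
-                    ("C4", cyclic(4)), ("V4", klein)]:
-        print(name, satisfies_extra_axioms(G))
-    # prints True only for C3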
-
-The only model of the enlarged theory is the cyclic group of order 3. Every sentence $\phi$ in the language of group theory is either true or false in $C_3$, and so either $\phi$ or $\neg\phi$ must be provable in the extended theory -- the theory is now complete.
-There are also complete theories that have more than one isomorphism class of models, but they are generally more complex -- especially showing that they are complete, as for example for Presburger arithmetic. Or else they are trivially complete by construction, such as the theory of true arithmetic, whose axioms are all sentences in the language of basic arithmetic that are true in $\mathbb N$. (This theory is not recursively axiomatizable, however.)<|endoftext|>
-TITLE: Repeatedly rolling a die and the tails of the multinomial distribution.
-QUESTION [6 upvotes]: For $1\leq i\leq n$ let $X_i$ be independent random variables, and let each $X_i$ be uniformly distributed on the set $\{0,1,2,\dots,m\}$, so that $X_i$ is like an $(m+1)$-sided die. Let $$Y=\frac{1}{n}\sum_{i=1}^n \frac{1}{m} X_i,$$ so that $\mathbb{E}(Y)=\frac{1}{2}$. I am interested in the tails of this distribution, that is the size of $$\Pr\left( Y \geq k\right)$$ where $\frac{1}{2}< k\leq 1$ is a constant.
-In the case where $m=1$, we are looking at the binomial distribution, and $$\Pr\left( Y \geq k\right)= \frac{1}{2^n}\sum_{i=0}^{(1-k) n} \binom{n}{i}$$ and we can bound this above by $(1-k)n \binom{n}{(1-k)n}$ and below by $\binom{n}{(1-k)n}$, which yields $$\Pr\left( Y \geq k\right)\approx \frac{1}{2^n} e^{n H(k)}$$ where $H(x)=-\left(x\log x+(1-x)\log (1-x)\right)$ is the entropy function. (I use $\approx$ liberally.)
-What kind of similar bounds do we have on the tails of this distribution when $m\geq 2$? I am looking to use the explicit multinomial properties to get something stronger than what you would get using Chernoff or Hoeffding.

-REPLY [3 votes]: Consider some i.i.d. random variables $\xi$ and $(\xi_n)_{n\geqslant1}$ with mean $\mathrm E(\xi)=0$ and finite exponential moment $\mathrm E(\mathrm e^{t|\xi|})$ for every $t$. Then, for every $x\gt0$, there exists $I(x)\gt0$ such that, when $n\to\infty$,
-$$
-\mathrm P(\xi_1+\cdots+\xi_n\geqslant nx)=\mathrm e^{-nI(x)+o(n)}.
-$$
-Furthermore, optimizing Chernoff's exponential upper bound yields the exact value of the exponent $I(x)$. Namely, recall that, for every nonnegative $t$,
-$$
-\mathrm P(\xi_1+\cdots+\xi_n\geqslant nx)\leqslant\left(\mathrm e^{-tx}\mathrm E(\mathrm e^{t\xi})\right)^n=\mathrm e^{-nI(t,x)},
-$$
-where
-$$
-I(t,x)=tx-\log\mathrm E(\mathrm e^{t\xi}),
-$$
-and it happens that
-$$
-I(x)=\sup\limits_{t\geqslant0}I(t,x).
-$$
-Note that $\mathrm E(\xi)=0$ hence $\mathrm E(\mathrm e^{t\xi})=1+o(t)$ when $t\to0$ and $I(t,x)=tx+o(t)$. In particular, $I(t,x)\gt0$ for $t\gt0$ small enough, hence $I(x)\gt0$ and the upper bound above is not trivial.
-As stated above, this upper bound also provides the exact behaviour, in the exponential scale, of the probability of the large deviations event considered, that is,
-$$
-\lim\limits_{n\to\infty}\frac1n\log\mathrm P(\xi_1+\cdots+\xi_n\geqslant nx)=-I(x).
-$$
-In your setting, one considers $\xi=\frac1mX-\frac12$ and $x=k-\frac12$ hence $0\lt x\lt\frac12$. Note finally that, in general, $I(x)=I(t_x,x)$ where $t_x$ solves the equation $\partial_tI(t,x)=0$, that is,
-$$
-x\mathrm E(\mathrm e^{t\xi})=\mathrm E(\xi\mathrm e^{t\xi}).
-$$<|endoftext|>
-TITLE: Linear independence of $n$th roots over $\mathbb{Q}$
-QUESTION [10 upvotes]: I know that the set of square roots of distinct square-free integers is linearly independent over $\mathbb{Q}$. To generalize this fact, define
-$R_n = \{ \sqrt[n]{s} \mid s\text{ integer with prime factorization }s = p_1^{a_1} \ldots p_k^{a_k}, \text{ where } 0 \leq a_i < n \}$
-For example, $R_2$ is the set of square roots of square-free integers.
-Question: Is $R_n$ linearly independent over $\mathbb{Q}$ for all $n \geq 2$?
-Harder (?) question: Is $\cup_{n\geq2}R_n$ linearly independent over $\mathbb{Q}$?
-
-REPLY [3 votes]: We'll show that incommensurable real radicals are linearly independent.
-Let $\alpha_i$ be real numbers and $F$ a subfield of $\mathbb{R}$ such that for every $i$ we have $\alpha_i^{n_i}\in F$ for some $n_i>1$. Assume moreover that $\frac{\alpha_i}{\alpha_j}\not \in F$ for all $i\ne j$. Then the $\alpha_i$ are linearly independent over $F$.
-For the proof we use the Lemma: let $\beta$ be a real number such that $\beta^m\in F$ for some $m>1$, and $\beta\not \in F$. Then $\operatorname{Trace}_F \beta = 0$. A proof is given below.
-Now let $\sum_i a_i \alpha_i = 0$ be a linear relation with coefficients $a_i \in F$. Fix an index $i_0$. Dividing by $\alpha_{i_0}$ we get
-$$ a_{i_0} = -\sum_{i \ne i_0} a_i \frac{\alpha_i}{\alpha_{i_0}}.$$
-Taking $\operatorname{Trace}^K_F$ on both sides (where $K$ is an arbitrary finite extension of $F$, say of degree $d$, containing all the $\beta_i =\frac{\alpha_i}{\alpha_{i_0}}$), we get, using the lemma, $d\cdot a_{i_0} = 0$, and so $a_{i_0} = 0$.
-Proof of the lemma: We may assume $\beta > 0$. Let $m>1$ be minimal such that $\beta^m \in F$. The polynomial $X^m - \beta^m$ factors over $\mathbb{C}$ as $\prod_{j=0}^{m-1} ( X- \beta \omega^j)$, where $\omega$ is a primitive $m$-th root of unity. Assume that some proper factor $\prod_{j \in J} (X - \beta \omega^j)$ is in $F[X]$. Then the constant term $\prod_{ j \in J} ( - \beta \omega^j)$ is in $F$. Taking complex absolute values on both sides, we get $\beta^l \in F$ for some $1\le l < m$, contradiction. Now we know the basis $1$, $\beta$, $\ldots$, $\beta^{m-1}$ for $F(\beta)$ over $F$. It follows right away that the trace of $\beta$ is $0$.
-Note: the condition $\beta$ real is necessary, as we see for $\beta = 1+i$, $\beta^4 = -4$, and $\operatorname{trace}^{\mathbb{Q}(i)}_{\mathbb{Q}} \beta = 2$.<|endoftext|>
-TITLE: Can we decide a conjecture is decidable without knowing a conjecture is correct or false?
-QUESTION [6 upvotes]: Can we decide a conjecture is decidable without knowing whether the conjecture is correct or false?
-I asked this question because I assume that the Millennium Prize Problems are already known to be decidable; otherwise the mathematician would need to consider infinitely many cases in order to get the money, or the question's logic makes it a fake question.
-
-REPLY [2 votes]: Here's another perspective on a related question: a problem where we can prove that an 'answer' in some sense exists but we can also prove that we can't find the answer.
-A graph is a set of vertices and a set of edges between those vertices - for instance, the vertices might be 'Hollywood actors' with an edge between any two actors who have appeared in a movie together, or the vertices might be 'US cities with more than 100,000 people', with an edge between any two cities within 500 miles of each other.
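-To make the object concrete, here is a minimal Python sketch (illustrative only, with made-up city data) of how such a graph can be stored: a dictionary mapping each vertex to its set of neighbours.
-
-    # Minimal sketch of an undirected graph as a dict of neighbour sets.
-    graph = {
-        "Chicago": {"Detroit", "Indianapolis"},
-        "Detroit": {"Chicago"},
-        "Indianapolis": {"Chicago"},
-    }
-    # An edge is present exactly when each endpoint lists the other.
-    print("Detroit" in graph["Chicago"])  # True: these two are adjacent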
- 
-One graph is a minor of another if, essentially, you can tag some of the vertices in the larger graph matching the vertices of the smaller graph and 'draw edges' between them matching the edges of the smaller graph - in essence, it's like a subgraph (though there are obviously some minor technical details).
-And finally, we say a collection of graphs is minor-closed if every graph that's in the collection also brings along all of its minors. For instance, the set of planar graphs (the ones that can be drawn in the plane without any of their edges crossing) is minor-closed this way; any subgraph, or any contraction of a subgraph, is still planar.
-Now, the Robertson–Seymour theorem says that any minor-closed set of graphs can be defined by a small set of graphs that aren't in it, the so-called 'forbidden minors'; any graph that has one of these forbidden minors as a minor isn't in the collection, and all graphs that don't have any of the forbidden minors are in the collection. For instance, for the planar graphs there are two forbidden minors: $K_5$ (a graph with five vertices, and edges between every pair of vertices) and $K_{3,3}$ (a graph with three 'red' nodes and three 'blue' nodes, and every possible edge from one color to the other).
-So, we know that for any collection of graphs we give, there's some set of forbidden minors defining those graphs; in this sense we have a 'yes' answer to the problem. But that doesn't mean that we can actually find the set of forbidden minors; in fact, even though that set is known to exist, it's also known that there can't be any procedure for finding them; any algorithm for taking (a description of) a set of graphs and finding its forbidden minors could also solve the halting problem.<|endoftext|>
-TITLE: Things related to the Preissman Theorem
-QUESTION [5 upvotes]: I'm reading the proof of the Preissman Theorem, in do Carmo's book on Riemannian geometry. A crucial step in this demonstration is the following lemma,
-Lemma: Let $M$ be a compact Riemannian manifold, and $\alpha$ a nontrivial deck transformation of the universal covering $\widetilde{M}$, where we are considering $\widetilde{M}$ with the covering metric. The statement is that $\alpha$ leaves invariant a geodesic $\widetilde{\gamma}$ of $\widetilde{M}$, in the sense that
-$$\alpha(\widetilde{\gamma}(-\infty,\infty))=\widetilde{\gamma}(-\infty,\infty).$$
-Sketch of proof: Let $\pi:\widetilde{M}\to M$ be the covering map. Let $\widetilde{p}\in \widetilde{M}$ and $p=\pi(\widetilde{p}).$ Let $g\in \pi_1(M,p)$ be the element corresponding to $\alpha$ by the known isomorphism $\pi_1(M,p)\simeq Aut(\widetilde{M}).$ By the Cartan Theorem, there is a closed geodesic $\gamma$ in the free homotopy class of $M$ given by $g.$
-The main idea now is to show that $\alpha$ fixes the extension of a lifting of $\gamma.$ For this, we obtain a deck transformation that clearly fixes the lifting of $\gamma$ (just take a deck transformation $\beta$ associated to the homotopy class of $\gamma$ with a base point $q\in \gamma$). And then show that they coincide at one point and therefore must be the same, $\alpha=\beta$.
-My Question: Is there any reason to believe that the geodesic which will be fixed by $\alpha$ is precisely the lifting of a geodesic given by the Cartan Theorem?
-Or was that just an insight which the person who proved the theorem had?
-For those who do not remember, this is the statement of the Cartan Theorem:
-Cartan Theorem: Let $M$ be a compact Riemannian manifold.
Let $\pi_1(M)$ be the set of all free homotopy classes of $M.$ Then in each nontrivial class there is a closed geodesic (i.e. a closed curve which is a geodesic at all of its points).
-
-REPLY [2 votes]: I am ashamed, because the answer seems simple. Given a nontrivial element $g\in \pi_1$, it is clear that the deck transformation $\alpha$ associated to $g$ leaves invariant the extension of a lifting of any element of the class $g.$ Thus, the most natural curves associated with our purpose are the liftings of the elements of the class $g.$ However, there is not necessarily a geodesic in the class of $g$ (see this post: Cartan Theorem). So, by the Cartan Theorem, there exists a closed geodesic in the free class determined by $g$. And as $\alpha$ leaves invariant the liftings of the elements of the class $g$, it is natural to ask if $\alpha$ leaves invariant the lifting of this geodesic.<|endoftext|>
-TITLE: Bloch-Kato conjecture and Wiles' numerical criterion
-QUESTION [5 upvotes]: In the introduction (p. 14) of this paper on FLT the authors say that a numerical criterion found by Wiles as part of his proof of FLT "seems to be very close
-to a special case of the Bloch-Kato conjecture".
-Can someone explain how this numerical criterion is related to a (which?) special case of the Bloch-Kato conjecture (which is now a theorem)?
-The numerical criterion is Theorem 5.3 (p. 139) in the linked paper.
-
-REPLY [5 votes]: The particular conjecture in question is about the power of $p$ defining the
-Selmer group attached to the adjoint of the $p$-adic Tate module of an elliptic curve. (In general, the Bloch--Kato conjecture deals with the order and/or rank of Selmer groups.)
-This order is supposed to be equal to the algebraic part of a particular special value of the symmetric square $L$-function of the elliptic curve. (Note that the symmetric square and the adjoint agree up to a twist, and while it would be more logical to speak of the adjoint $L$-function at this point, the symmetric square $L$-function is more traditional; in any case, these $L$-functions would be the same up to the change of variables $s \mapsto s+1$.)
-Now results of Hida and Ribet show that up to small prime factors, this algebraic part of the special value in question coincides with the congruence modulus of the modular form attached to the elliptic curve. On the other hand,
-Wiles's numerical criterion (or rather, the fact that it holds for the Hecke algebra in question) shows that the order of the adjoint Selmer group also
-coincides with this congruence modulus. Putting these two statements together gives the case of Bloch--Kato under consideration.
-(Diamond--Flach--Guo have a paper carefully detailing all this.)<|endoftext|>
-TITLE: A question about Euclidean Domain
-QUESTION [5 upvotes]: This is a problem from Aluffi's book, chapter V 2.17.
-
-"Let $R$ be a Euclidean Domain that is not a field. Prove that there exists a nonzero, nonunit element $c$ in $R$ such that $\forall a \in R$, $\exists q$, $r \in R$ with $a = qc + r$, and either $r = 0$ or $r$ a unit."
-
-OK, I know that if $c\mid a$ then $r=0$, but if $c\nmid a$, I am not sure what to do. I took the classic Euclidean Domain $\mathbb{Z}$ as an example, and in $\mathbb{Z}$ I know that $c = 2$ (also $-2$) works. Then I tried to generalize this.
-I tried $c = unit + unit$, but this didn't help, and exercise 2.18 showed me that $c$ is not always $unit+unit$. I'm out of ideas, need some help.
-Thanks.
-
-REPLY [5 votes]: If $R$ is not a field, then it has nonzero nonunits.
Consider the set $S=\{\varphi(a)\mid a\text{ is not a unit, and }a\neq 0\}$, where $\varphi$ is the Euclidean function.
-It is a nonempty set of positive integers. By the Least Element Principle, it has a smallest element. Let $c\in R$ be a nonunit, nonzero, such that $\varphi(c)$ is the smallest element of $S$.
-Edited. I claim that $c$ satisfies the conditions of the problem. Let $a\in R$. Then we can write $a = qc + r$, with either $r=0$ or $\varphi(r)\lt \varphi(c)$. If $r=0$, we are done. If $r\neq 0$, then $\varphi(r)\lt \varphi(c)$, so $\varphi(r)\notin S$, hence $r$ does not satisfy the condition
-
-$r$ is not a unit, and $r\neq 0$.
-
-Since $r\neq 0$, it follows that $r$ must be a unit.
-Thus, for every $a\in R$, there exist $q,r\in R$ such that $a=qc+r$, and either $r=0$ or $r$ is a unit, as desired.
-
-REPLY [2 votes]: Try an element $c$ of smallest value among the nonzero nonunits. Then division by $c$ leaves a remainder which is zero or of smaller value, and such a remainder is therefore zero or a unit.<|endoftext|>
-TITLE: Difference between calculus and analysis
-QUESTION [23 upvotes]: It's something I always wanted to figure out: when did calculus start to be extended to analysis? (I reformulated the question; the previous one, "where one can draw a line to distinguish calculus and analysis, or there does not even exist such a line", was quite misleading.)
-As mentioned a lot in comments, analysis is a much broader field than calculus, but its roots can be traced back to the calculus of the 19th century.
-Besides, infinitesimal calculus was indeed made rigorous in non-standard analysis, but that was not invented until the 1960s, I think. And I don't know if it can replace all arguments in the theories developed after the $\varepsilon$-$\delta$ definition and before the invention of non-standard analysis.
-
-I will explain what I understand; please point out my mistakes.
-
-The early stage (Newton and Leibniz)
-They used infinitesimals, say $\mathrm{d}(\cdot)$, to describe change, such as $\mathrm{d}x$ and $\mathrm{d}y$.
-And they used
-$$\frac{\mathrm{d}y}{\mathrm{d}x}=\frac{y(x+\mathrm{d}x)-y(x)}{\mathrm{d}x}$$
-to compute derivatives.
-Letting $y'$ be a shorthand notation for $\frac{\mathrm{d}y}{\mathrm{d}x}$, they defined the integral as a sum over infinitesimals
-$$\int y' \mathrm{d}x. $$
-(I do not know how Newton and Leibniz defined the integral. Maybe as $\approx\sum y(x_i)\Delta x$?)
-19th century
-People started worrying about the precision of infinitesimals. And the ratio of infinitesimals was replaced by the limit (the '$\varepsilon$-$\delta$' definition).
-In shorthand notation
-$$\frac{\mathrm{d}y}{\mathrm{d}x}:=\lim_{t\rightarrow 0}\frac{y(x+t)-y(x)}{t}.$$
-While the notation was inherited, it no longer held the original meaning.
-And Riemann established his formalization of integration.
-Based on these, people started to work on functions defined on the real number system (real analysis). And in the meantime, the properties of the real numbers were intensively explored (set theory, the continuum, etc.).
-Later the concept of limit was further extended to more general spaces, such as metric spaces (generalized distance) and normed spaces (generalized length).
-So many branches of analysis, such as measure theory (it's a part of real analysis; I put it here simply because I feel it is so important), functional analysis, and differential equations, emerged.
-
-Hence, roughly speaking, changing from the infinitesimal approach to the limit approach can be considered as the line separating calculus and analysis.
Interestingly, modern calculus textbooks in fact loosely use the analysis approach while still calling themselves Calculus. Is this because they do not discuss the real number system, which is the very basis for the rest, and only loosely argue 'taking the limit as $\Delta x\rightarrow 0$'? I really get confused here.
-
-Updates:
-Can I state that calculus is a study of real-valued functions with an $\mathbb{R}^d$-valued argument?
-So one can loosely conclude that
-$$\text{infinitesimal and integral calculus} \subsetneq \text{real analysis}\subsetneq \text{analysis}.$$
-
-Update again:
-The question is much clearer now.
-If calculus is understood as the art of calculation, there is no more confusion.
-Thanks for all the dedication to this topic!
-Since most of the answers pointed out the linchpin of the question, I hope it won't cause any misunderstanding if I do not accept any of them.
-In the end, I hope this post will help others in the future.
-Cheers.
-
-REPLY [2 votes]: As (I believe) the words are commonly used, calculus is the art of calculating, and analysis is the art of analysis.
-The focus of a calculus course is on computation. Infinitesimal* calculus deals with methods for computing limits, derivatives and integrals symbolically, or by numerical approximations complete with error analysis. Even some/most/all of the proofs can be seen as exercises in manipulating approximations towards a goal.
-*: I include differential and integral calculus in this category. Although the methods don't include 'true' infinitesimals as in the hyperreal numbers, the focus of these subjects is still usually along the lines of manipulating things by breaking them down into infinitesimal parts, or recombining the infinitesimal parts to yield an object of interest.
-The focus of a real analysis course, on the other hand, is more about analysis -- breaking the subject matter apart into useful ideas. Methods of topology and measure theory are developed and applied, the theory and properties of derivatives and integrals are developed, and the ideas and structures involved are generalized beyond the 'simple' case of multivariate real functions.<|endoftext|>
-TITLE: Extension Theorem for the Sobolev Space $W^{1, \infty}(U)$
-QUESTION [9 upvotes]: I am trying to find a way of extending functions in the Sobolev space $W^{1, \infty}(U)$ to $W^{1, \infty}(\mathbb{R}^n)$, where $U\subset\mathbb{R}^{n}$ is open and such that $U\subset\subset V$ for $V\subset\mathbb{R}^{n}$ also open and bounded. Furthermore, assume $\partial U$ is $C^1$.
-
-REPLY [4 votes]: I am not familiar with the Whitney extension theorem, etc. but maybe the following proof will do?
-Fix $x^0\in\partial U$. Assume for now that for some ball $B=B(x^0, r)$ about $x^0$, the boundary is straightened, so that $B\cap U\subset \mathbb{R}_{+}^n$. Let $B^+=\{x\in B\ \mid \ x_n\geq 0\}$ and $B^-=\{x\in B\ \mid \ x_n\leq 0\}$. Define
-\begin{equation}
-\bar{u}(x)\equiv\left\{
-\begin{array}{l l}
-u(x', x_n)&\quad x\in B^+\\
-u(x', -x_n)&\quad x\in B^-.
-\end{array}\right.
-\end{equation}
-Let $u^+=\bar{u}|_{B^+}$ and $u^-=\bar{u}|_{B^-}$. On $\{x_n=0\}$ we observe that $u^+=u^-$. Since $u\in L_{\text{loc}}^1(U)$ we can deduce that $\bar{u}\in L_{\text{loc}}^1(B)$. We now claim that the weak derivative of $\bar{u}$ is
-\begin{equation}%\label{eq: dve}
-D\bar{u}(x)=\left\{
-\begin{array}{l l}
-Du(x', x_n)&\quad x\in B^+\\
-D_{x'}u(x', -x_n)&\quad x\in B^-\\
--D_{x_n}u(x', -x_n)&\quad x\in B^-.
-\end{array}\right.
-\end{equation}
-We prove it as follows.
Let $\varphi\in C_{c}^{\infty}(B)$ and assume for now that $u\in C^1(\overline{B^+\setminus\mathbb{R}^{n-1}})$ then -\begin{align*} -\int_{B}\bar{u}\varphi_{x_n}\mathrm{d}x &=\int_{B^{+}}\bar{u}\varphi_{x_n}\mathrm{d}x+\int_{B^-}\bar{u}\varphi_{x_n}\mathrm{d}x\\ -&=\int_{B^{+}}u\varphi_{x_n}\mathrm{d}x-\int_{B^+}u(y', y_n)\varphi_{y_n}(y', -y_n)\mathrm{d}y\\ -&=-\int_{\{x_n=0\}}u\varphi\mathrm{d}S_x-\int_{B^+}u_{x_n}(x', x_n)\varphi(x', x_n)\mathrm{d}x+\int_{\{y_n=0\}}u\varphi\mathrm{d}S_y-\int_{B^+}-u_{y_n}(y', y_n)\varphi(y', -y_n)\mathrm{d}y\\ -&=-\int_{B^+}u_{x_n}(x', x_n)\varphi(x', x_n)\mathrm{d}x-\int_{B^+}-u_{y_n}(y', y_n)\varphi(y', -y_n)\mathrm{d}y\\ -&=-\int_{B^+}u_{x_n}(x', x_n)\varphi(x', x_n)\mathrm{d}x-\int_{B^-}-u_{x_n}(x', -x_n)\varphi(x', x_n)\mathrm{d}x -\end{align*} -and using the same technique for $i\neq n$ we obtain for every multi index $\alpha$ such that $\vert\alpha\vert=1$ -\begin{equation*} -\int_{B}\bar{u}D^{\alpha}\varphi\mathrm{d}x=\left\{ -\begin{array}{l} --\int_{B^+}D_{x'}u(x', x_n)\varphi(x', x_n)\mathrm{d}x-\int_{B^-}D_{x'}u(x', -x_n)\varphi(x', x_n)\mathrm{d}x\\ --\int_{B^+}D_{x_n}u(x', x_n)\varphi(x', x_n)\mathrm{d}x-\int_{B^-}-D_{x_n}u(x', -x_n)\varphi(x', x_n)\mathrm{d}x. -\end{array}\right. -\end{equation*} -Let $C^+\equiv B^+\setminus\{x_n=0\}$. Since $u\in W^{1, \infty}(C^+)$ is locally integrable in $C^+$, we can define its $\varepsilon$ mollification in $C_{\varepsilon}^+$ and we also know that: -\begin{equation*} -D^{\alpha}u^{\varepsilon}(x)\equiv \eta_{\varepsilon}\ast D^{\alpha}u(x)\quad x\in C_{\varepsilon}^+, \ \vert\alpha\vert\leq1. -\end{equation*} -Here, we define the cutoff function $\eta:\mathbb{R}^n\rightarrow\mathbb{R}$ as -\begin{equation*} -\eta(x)\equiv\left\{ -\begin{array}{l l} -C\exp\left[\frac{-1}{1-\|x\|^2}\right] &\quad \|x\|<1\\ -0&\quad \|x\|\geq 1 -\end{array}\right. -\end{equation*} -where $C$ is chosen such that $\int_{\mathbb{R}^n}\eta\mathrm{d}x=1$ and then we define $\eta_{\varepsilon}(x)\equiv\frac{1}{\varepsilon^n}\eta\left(\frac{x}{\varepsilon}\right)$. -Define $v_{\varepsilon}: C^{+}\rightarrow\mathbb{R}$ as -\begin{equation*} -v_{\varepsilon}(x)\equiv\left\{ -\begin{array}{l l} -u^{\varepsilon}(x) &\quad x\in C_{\varepsilon}^{+}\\ -0&\quad x\in C^{+}\setminus C_{\varepsilon}^{+}. -\end{array}\right. -\end{equation*} -Now, because $u^{\varepsilon}\in C^{\infty}(D)$ for every open bounded set $D\subset C^+$, by repeated application of the mean value theorem, the derivatives of all orders are bounded on such $D$. Therefore $D^{\alpha}u$ is Lipschitz continuous on $D$ for all $\vert\alpha\vert\leq 1$. In particular, $D^{\alpha}u$ is uniformly continuous on open bounded subsets of $C^{+}$ for all $\vert\alpha\vert\leq 1$. So we can say that $v^{\varepsilon}\in C^1(\overline{C_{\varepsilon}^{+}})$. -Now for $\vert\alpha\vert=1$ we claim that the weak derivative of $v_{\varepsilon}$ exists and is given by: -\begin{equation} -D^{\alpha}v_{\varepsilon}(x)\equiv\left\{ -\begin{array}{l l} -D^{\alpha}u^{\varepsilon}(x) &\quad x\in C_{\varepsilon}^{+}\\ -0&\quad x\in C^{+}\setminus C_{\varepsilon}^{+}. -\end{array}\right. 
-\end{equation}
-We check it as follows: let $\vert\alpha\vert=1$ and let $\varphi\in C_{c}^{\infty}(C^+)$, then
-\begin{align*}
-\int_{C^+}v_{\varepsilon}\varphi_{x_i}\mathrm{d}x &=\int_{C_{\varepsilon}^{+}}v_{\varepsilon}\varphi_{x_i}\mathrm{d}x\\
-&=-\int_{C_{\varepsilon}^+}(v_{\varepsilon})_{x_i}\varphi\mathrm{d}x+0\\
-&=-\int_{C_{\varepsilon}^+}u^{\varepsilon}_{x_i}\varphi\mathrm{d}x
-\end{align*}
-for $i=1, \dots, n$, whence we conclude that the weak derivative exists. Since $\|D^{\alpha}v_{\varepsilon}\|_{L^{\infty}(C^+)}\leq\|D^{\alpha}u\|_{L^{\infty}(C^+)}<\infty$ for all $\vert\alpha\vert\leq 1$ we conclude that $v_{\varepsilon}\in W^{1, \infty}(C^+)$.
-Moreover, as $\varepsilon\rightarrow0$,
-\begin{equation*}
-D^{\alpha}v_{\varepsilon}\rightarrow D^{\alpha}u\quad\text{a.e. in } C^+, \ \vert\alpha\vert\leq 1.
-\end{equation*}
-We can write
-\begin{equation*}
-C^+\equiv\bigcup_{i=1}^{\infty}C_{1/i}^{+}
-\end{equation*}
-and define $D^{\alpha}v_{i}: C^{+}\rightarrow\mathbb{R}$ for each $\vert\alpha\vert\leq 1$ by
-\begin{equation*}
-D^{\alpha}v_{i}(x)\equiv\left\{
-\begin{array}{l l}
-D^{\alpha}v_{\frac{1}{i}}(x) &\quad x\in C_{1/i}^{+}\\
-0&\quad x\in C^+\setminus C_{1/i}^{+}.
-\end{array}\right.
-\end{equation*}
-Since $D^{\alpha}v_i$ is measurable and uniformly bounded almost everywhere in $C^+$ by $\|D^{\alpha}u\|_{\infty}$ for each $\vert\alpha\vert\leq 1$, we deduce by the dominated convergence theorem that:
-\begin{equation*}
-\lim_{i\rightarrow \infty}\int_{C^+}D^{\alpha}v_i\mathrm{d}x=\int_{C^+}D^{\alpha}u\mathrm{d}x\quad \vert\alpha\vert\leq 1.
-\end{equation*}
-Since $\mathcal{L}^{n}(\mathbb{R}^{n-1})=0$ we can define $D^{\alpha}v_i$ arbitrarily there and write
-\begin{equation*}
-\lim_{i\rightarrow \infty}\int_{B^+}D^{\alpha}v_i\mathrm{d}x=\int_{B^+}D^{\alpha}u\mathrm{d}x \quad\vert\alpha\vert\leq 1.
-\end{equation*}
-Now let $\bar{u}$ be as defined initially, $\alpha=(0, \dots, 1)$ and $\varphi\in C_{c}^{\infty}(B)$; then we have
-\begin{align*}
-\int_{B}\bar{u}D^{\alpha}\varphi\mathrm{d}x &=\int_{B^{+}}\bar{u}D^{\alpha}\varphi\mathrm{d}x+\int_{B^-}\bar{u}D^{\alpha}\varphi\mathrm{d}x\\
-&=\int_{B^{+}}uD^{\alpha}\varphi\mathrm{d}x-\int_{B^+}u(y', y_n)D^{\alpha}\varphi(y', -y_n)\mathrm{d}y\\
-&=\lim_{i\rightarrow \infty}\int_{B^{+}}v_iD^{\alpha}\varphi\mathrm{d}x-\lim_{i\rightarrow \infty}\int_{B^+}v_i(y', y_n)D^{\alpha}\varphi(y', -y_n)\mathrm{d}y\\
-&=-\lim_{i\rightarrow \infty}\int_{B^+}D^{\alpha}v_i(x', x_n)\varphi(x', x_n)\mathrm{d}x-\lim_{i\rightarrow \infty}\int_{B^+}-D^{\alpha}v_i(y', y_n)\varphi(y', -y_n)\mathrm{d}y\\
-&=-\int_{B^+}D^{\alpha}u(x', x_n)\varphi(x', x_n)\mathrm{d}x-\int_{B^+}-D^{\alpha}u(y', y_n)\varphi(y', -y_n)\mathrm{d}y\\
-&=-\int_{B^+}D^{\alpha}u(x', x_n)\varphi(x', x_n)\mathrm{d}x-\int_{B^-}-D^{\alpha}u(x', -x_n)\varphi(x', x_n)\mathrm{d}x.
-\end{align*}
-We obtain the corresponding result for any other multi-index $\alpha$ such that $\vert\alpha\vert=1$. So we conclude that the weak derivative of $\bar{u}$ is as claimed.
-Furthermore, we have that for $\vert \alpha\vert\leq 1$:
-\begin{equation*}
-\vert\vert D^{\alpha}\bar{u}\vert\vert_{L^{\infty}(B)}\leq\vert\vert D^{\alpha}\bar{u}\vert\vert_{L^{\infty}(B^+)}+\vert\vert D^{\alpha}\bar{u}\vert\vert_{L^{\infty}(B^-)}=2\vert\vert D^{\alpha}u\vert\vert_{L^{\infty}(B^+)}
-\end{equation*}
-and hence
-\begin{equation*}
-\vert\vert \bar{u}\vert\vert_{W^{1, \infty}(B)}\leq 2\vert\vert u\vert\vert_{W^{1, \infty}(B^+)}.
-\end{equation*} -If the boundary is not already straightened out we can do so under the map $\Phi: W\rightarrow B$ and unstraighten it under $\Psi: B\rightarrow W$ as defined in Evans, where $\Psi (B)=W$. Writing $\Phi(x)=y$ and $\Psi(y)=x$ we define $u'(y)\equiv u\circ\Psi (y)$ . Then we have: -\begin{equation*} -\vert\vert \bar{u'}\vert\vert_{W^{1, \infty}(B)}\leq 2\vert\vert u'\vert\vert_{W^{1, \infty}(B^+)} -\end{equation*} -and converting back to original coordinates we have -\begin{equation} -\vert\vert \bar{u}\vert\vert_{W^{1, \infty}(W)}\leq 2\vert\vert u\vert\vert_{W^{1, \infty}(U)} -\end{equation} -We know that $\overline{U}\subset\mathbb{R}^n$ is paracompact so it admits locally finite partitions of unity subordinate to any countable cover of open sets. Cover the boundary $\partial U$ with the cover $\{W_x\}_{x\in\partial U}$ such that each open set $W_x$ is a ball about $x$ and we obtain inequality as above for each $W_x$. The boundary is compact so we can pass to a finite subcover $\{W_i\}_{i=1}^{N}$ and have the corresponding extensions $\bar{u}_i$. Choose any $W_0\subset\subset U$ such that $U\subset\bigcup_{i=0}^{N} W_i$ and define $\bar{u}_0\equiv u$ in $W_0$. -Let $\{\zeta_i\}_{i=1}^{N}$ be a locally finite smooth partition of unity subordinate to the cover $\{W_i\}_{i=0}^{N}$. Define $\bar{u}\equiv\sum_{i=0}^{N}\zeta_i\bar{u}_i$. Then we obtain the bound: -\begin{align*} -\vert\vert \bar{u}\vert\vert_{W^{1, \infty}(\mathbb{R}^n)}&=\left\| \sum_{i=0}^{N}\zeta_i\bar{u}_i\right\|_{W^{1, \infty}(\mathbb{R}^n)}\\ -&\leq\sum_{i=0}^{N}\|\zeta_i\bar{u}_i\|_{W^{1, \infty}(W_i)}\\ -&\leq\sum_{i=0}^{N}\|\bar{u}_i\|_{W^{1, \infty}(W_i)}\\ -&\leq \sum_{i=0}^{N}2\|u\|_{W^{1, \infty}(U)}\\ -&\leq C\|u\|_{W^{1, \infty}(U)}. -\end{align*} -Furthermore we can arrange for the support of $\bar{u}$ to lie within some $V\supset\supset U$. Define $Eu\equiv\bar{u}$. Then we can see that the map $E: W^{1, \infty}(U)\rightarrow W^{1, \infty}(\mathbb{R}^n)$ is a linear operator such that - -$Eu=u$ almost everywhere in $U$. -$Eu$ has support within $V\supset\supset U$ -$\vert\vert Eu\vert\vert_{W^{1, \infty}(\mathbb{R}^n)}\leq C\|u\|_{W^{1, \infty}(U)}$ for $C=2N$.<|endoftext|> -TITLE: A question about intersection number -QUESTION [5 upvotes]: I'm trying to understand the geometric meaning of (symmetric) bilinear forms. -I'm reading parts of "Symmetric Bilinear Forms", in particular, the appendix mentions what I'm interested in: on page 100 they write -"Let $M = M^{2n}$ be a closed manifold of dimension $2n$, and let $F_2$ be the field with two elements. If $x,y$ are homology classes in $H_n(M, F_2)$, the intersection number $$ x \cdot y = y \cdot x \in F_2$$ -is defined. The Poincaré duality theorem, see e.g. [Spanier], implies that $H_n(M,F_2)$ is an inner product space over $F_2$ using the intersection number as inner product." - -What exactly is the intersection number? For example: I don't know how to think about $n>1$ but I think if $n=1$ we think of the elements in $H_1$ as equivalence classes of paths, namely, two paths are equivalent if they differ by a boundary which means that $p_1 - p_2 = \partial U$ where $U \subset M$ is a submanifold (is this the correct term?) of $M$. If $M$ is for example the torus, we see that any two cycles around it form the boundary of a cylinder, hence are equivalent. Similarly, any two cycles around the centre hole form the boundary of an annulus, hence are equivalent. 
Hence the first homology group is generated by two elements and hence we get $H_1(T) = \mathbb Z \oplus \mathbb Z$. Now assume we pick two arbitrary representatives $x,y$ of each equivalence class. Now I don't know the actual definition of intersection number, but assuming it means the number of points of intersection, is it correct that $x \cdot y = 1$ in the example of the torus? And what about a sphere? Then it should be $x \cdot y = 0$ because we don't have any holes. Can you please give me a rigorous definition of intersection number?
-Where does the Poincaré duality come in here? As far as I know it tells us that $H^k (M^n) = H_{n-k}(M^n)$. I don't see where we need this to compute intersection numbers.
-
-Thank you for your help.
-
-REPLY [9 votes]: The role of Poincare duality is that it provides one way to define the intersection pairing:
-As countinghaus notes, for an $n$-dimensional connected closed manifold, the intersection pairing (with mod $2$ coefficients) is a pairing
-$H_{n-i} \times H_{n-j} \to H_{n - i - j}.$ One way to define it is geometrically, as in countinghaus's answer, by choosing well-behaved representatives for cycles, moving them within their respective homology classes so as to be transverse, and then intersecting them to obtain a new cycle.
-(Because we are working mod $2$, orientations don't matter.)
-To prove that this process is well-defined, and to derive its basic properties (e.g. that it is bilinear) takes some effort, and so many authors prefer to put
-that effort elsewhere, e.g. into stating and proving Poincare duality.
-Once you have Poincare duality, you can do the following:
-Cup product gives a map $H^i \times H^j \to H^{i+j}$. Poincare duality (with mod $2$ coefficients) identifies $H^i$ with $H_{n-i}$. Thus we can rewrite this as $H_{n-i} \times H_{n-j} \to H_{n - i - j}.$ It turns out that this is the intersection pairing described geometrically above. (To see why, look at this answer --- it treats the case of $\mathbb Q$-coefficients rather than $\mathbb F_2$-coefficients, but the idea is the same.)<|endoftext|>
-TITLE: Are two groups isomorphic if they have the same character table and each $|\chi| \leq 1$?
-QUESTION [5 upvotes]: Suppose two groups have the same character table of complex representations. Also, all the entries in this character table have absolute value at most $1$. Does this imply that the two groups are isomorphic?
-
-REPLY [8 votes]: If you're asking about finite groups, then yes.
-All entries have absolute value at most 1, so in particular all irreducible representations are one-dimensional, so the groups in question are abelian. Thus we can apply the fundamental theorem of finitely generated abelian groups to them to decompose them into direct sums of cyclic groups.
-==edit==
-As per Jyrki Lahtonen's suggestion, there's a simpler way to finish the proof: a finite abelian group is isomorphic to its dual (as in the character group), which can be shown using the decomposition inductively: for cyclic groups it is trivial, and the dual of a direct sum is a direct sum of the duals (which is again not very hard to see). From the character table we can deduce the character group, so we're done.
-If I'm not mistaken, investigating the character group is also beneficial in the general (locally compact) case. The Pontryagin/van Kampen duality theorem states that for any locally compact abelian group, there is a natural (topological) isomorphism between it and its double dual, defined by the formula $\varphi(x)(\chi)=\chi(x)$.
On the other hand, we can read off the character group, and consequently the bidual, from the character table.
-The duality theorem is quite a strong tool, though. You can find the details in e.g. Hewitt, Ross: Abstract Harmonic Analysis, vol. 1.
-==/edit==<|endoftext|>
-TITLE: Does this equation have infinitely many solutions?
-QUESTION [5 upvotes]: I was considering some number theory problems which inspired me to write the following conjecture, which bears some resemblance to the Catalan problem, but is in fact different:
-Fix two distinct sequences of primes $p_{1}, ..., p_{n}$ and $q_{1}, ..., q_{m}$. Do there exist infinitely many sequences of naturals $a_{1}, ..., a_{n}$, $b_{1}, ..., b_{m}$ such that:
-$p_{1}^{a_{1}} ... p_{n}^{a_{n}} - q_{1}^{b_{1}} ... q_{m}^{b_{m}} = 1$?
-
-REPLY [2 votes]: Thue proved (http://en.wikipedia.org/wiki/Thue_equation) that the equation
-$$A \cdot X^k - B \cdot Y^k = 1$$
-(for fixed $A$, $B$, and $k$) has only finitely many integral solutions if $k \ge 3$.
-Fix two sets of primes $p_i$ and $q_j$. Your equations give integral solutions to a finite number of Thue equations, and thus there can be at most finitely
-many solutions.
-For example (to be very explicit about the construction), any solution to $2^a 3^b - 5^c 7^d = 1$ yields a solution
-to
-$$A x^3 - B y^3 = 1$$
- with $A \in \{1,2,4,3,6,12,9,18,36\}$ and
-$B \in \{1,5,25,7,35,175,49,245,1225\}$.
-More generally, this problem falls under the broader class of problems
-known as $S$-unit equations (http://en.wikipedia.org/wiki/S-unit), which are well studied.<|endoftext|>
-TITLE: Evaluating: $\lim_{n\to\infty} \int_{0}^{\pi} e^x\cos(nx)\space dx$
-QUESTION [8 upvotes]: Evaluate the limit:
-$$\lim_{n\to\infty} \int_{0}^{\pi} e^x\cos(nx)\space dx$$
-W|A says that the limit is $0$, but I'm not sure why that is, or whether it is the correct result.
-
-REPLY [9 votes]: Here's the standard non-integration-by-parts form of the integral, using Euler's identity:
-$$\begin{align}
-\int_0^\pi e^x \cos(nx)\ dx &= \mathfrak{Re}\left(\int_0^\pi e^x e^{inx}\ dx \right) \\
-&= \mathfrak{Re}\left(\int_0^\pi e^{(1+in)x}\ dx \right) \\
-&= \mathfrak{Re}\left( \left. \frac{1}{1+in}e^{(1+in)x} \right |_0^\pi\right) \\
-&= \mathfrak{Re}\left( \left. \frac{1-in}{1+n^2}e^{(1+in)x} \right |_0^\pi\right) \\
-\end{align}$$
-and it's relatively straightforward to find an explicit form for the latter term using Euler's identity the 'other way', but not even necessary; from here all the exponential terms in $n$ are clearly going to wind up as $\sin(nx)$ and $\cos(nx)$ terms before evaluating, and so they're drowned out by the $O(1/n)$ factor in front of the evaluation.<|endoftext|>
-TITLE: Find Zariski closure of a set
-QUESTION [7 upvotes]: Let $X=\{(x,\sin(x)): x \in \mathbb{A}^{1}\}$. I want to find the closure (with respect to the Zariski topology) of $X \subseteq \mathbb{A}^{2}$.
-OK, I've already shown that $X$ is not a closed set. Now consider $cl(X)$; this is a closed subset of $\mathbb{A}^{2}$, so its dimension is $0,1$ or $2$; it is not $0$ because it is not a point. So either $cl(X)$ has dimension $1$ or $2$. I suspect the answer is $2$. If the dimension is $1$ then $cl(X)=V(f)$ for some $f \in k[x,y]$ with $f$ irreducible. This implies then that $f(a,\sin(a))=0$ for every $a \in \mathbb{A}^{1}$.
-Question: does this imply that $f$ is the zero polynomial?
-From this it would follow that the dimension is $2$, so $cl(X)=\mathbb{A}^{2}$.
-
-REPLY [11 votes]: The Zariski closure is the whole set. Suppose $f(x,\sin(x))=0$ for all $x$.
Define $g_y(x)=f(x,y)$ for $y\in [-1,1]$ and note that $g_y$ has infinitely many zeroes, thus must be the zero polynomial. Thus $f$ vanishes on the strip $\mathbb R\times [-1,1]$, so must be the zero polynomial by any number of properties (for example, the fact that polynomials are analytic). -Edit: To clarify, since polynomials are analytic, it suffices to show that $f$ is $0$ on an open set (in the usual metric topology, NOT the Zariski topology) in order to conclude that $f$ is $0$ everywhere. Otherwise, we have some $n$ such that the first $n-1$ derivatives vanish on the set but the $n^{th}$ does not, so is nonzero and of constant sign on some open set, so integrating gives us $f$ is nonzero somewhere on this set. Since $\mathbb R\times [-1,1]$ contains the unit open ball around $(0,0)$, we are done.<|endoftext|> -TITLE: Expected number of steps/probability in a Markov Chain? -QUESTION [11 upvotes]: Can anyone give an example of a Markov Chain and how to calculate the expected number of steps to reach a particular state? Or the probability of reaching a particular state after T transitions? -I ask because they seem like powerful concepts to know but I am having a hard time finding good information online that is easy to understand. - -REPLY [14 votes]: The simplest examples come from stochastic matrices. Consider a finite set of possible states. Say that the probability of transitioning from state $i$ to state $j$ is $p_{ij}$. For fixed $i$, these probabilities need to add to $1$, so -$$\sum_j p_{ij} = 1$$ -for all $i$. So the matrix $P$ whose entries are $p_{ij}$ needs to be right stochastic, which means that $P$ has non-negative entries and $P 1 = 1$ where $1$ is the vector all of whose entries are $1$. -By considering all the possible ways to transition between two states, you can prove by induction that the probability of transitioning from state $i$ to state $j$ after $n$ transitions is given by $(P^n)_{ij}$. So the problem of computing these probabilities reduces to the problem of computing powers of a matrix. If $P$ is diagonalizable, then this problem in turn reduces to the problem of computing its eigenvalues and eigenvectors. -Computing the expected time to get from state $i$ to state $j$ is a little complicated to explain in general. It will be easier to explain in examples. -Example. Let $0 \le p \le 1$ and let $P$ be the matrix -$$\left[ \begin{array}{cc} 1-p & p \\\ p & 1-p \end{array} \right].$$ -Thus there are two states. The probability of changing states is $p$ and the probability of not changing states is $1-p$. $P$ has two eigenvectors: -$$P \left[ \begin{array}{c} 1 \\\ 1 \end{array} \right] = \left[ \begin{array}{c} 1 \\\ 1 \end{array} \right], P \left[ \begin{array}{c} 1 \\\ -1 \end{array} \right] = (1 - 2p) \left[ \begin{array}{c} 1 \\\ -1 \end{array} \right].$$ -It follows that -$$P^n \left[ \begin{array}{c} 1 \\\ 1 \end{array} \right] = \left[ \begin{array}{c} 1 \\\ 1 \end{array} \right], P^n \left[ \begin{array}{c} 1 \\\ -1 \end{array} \right] = (1 - 2p)^n \left[ \begin{array}{c} 1 \\\ -1 \end{array} \right]$$ -and transforming back to the original basis we find that -$$P^n = \left[ \begin{array}{cc} \frac{1 + (1 - 2p)^n}{2} & \frac{1 - (1 - 2p)^n}{2} \\\ \frac{1 - (1 - 2p)^n}{2} & \frac{1 + (1 - 2p)^n}{2} \end{array} \right].$$ -Thus the probability of changing states after $n$ transitions is $\frac{1 - (1 - 2p)^n}{2}$ and the probability of remaining in the same state after $n$ transitions is $\frac{1 + (1 - 2p)^n}{2}$. 
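-Before computing expectations, here is a minimal Python sketch (an illustrative check of my own, with an arbitrary choice of $p$ and $n$; it is not needed for the argument) verifying the closed form for $P^n$ against a direct matrix power:
-
-    import numpy as np
-
-    # Two-state chain: switch with probability p, stay with probability 1 - p.
-    p, n = 0.3, 7
-    P = np.array([[1 - p, p],
-                  [p, 1 - p]])
-
-    # Closed form derived above from the eigendecomposition.
-    stay = (1 + (1 - 2 * p) ** n) / 2
-    switch = (1 - (1 - 2 * p) ** n) / 2
-    closed_form = np.array([[stay, switch],
-                            [switch, stay]])
-
-    assert np.allclose(np.linalg.matrix_power(P, n), closed_form)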
The expected number of transitions needed to change states is given by
-$$\sum_{n \ge 1} n q_n$$
-where $q_n$ is the probability of changing states for the first time after $n$ transitions. This requires that we do not change states for $n-1$ transitions and then change states, so
-$$q_n = p (1 - p)^{n-1}.$$
-Thus we want to compute the sum
-$$\sum_{n \ge 1} np (1 - p)^{n-1}.$$
-Verify however you want the identity
-$$\frac{1}{(1 - z)^2} = 1 + 2z + 3z^2 + ... = \sum_{n \ge 1} nz^{n-1}.$$
-This shows that the expected value is
-$$\frac{p}{(1 - (1 - p))^2} = \frac{1}{p}.$$
-An alternative approach is to condition on the first transition. To compute the expected time $\mathbb{E}$ to changing states, we observe that with probability $p$ we change states (so we can stop) and with probability $1-p$ we don't (so we have to start all over and add an extra count to the number of transitions). This gives
-$$\mathbb{E} = p + (1 - p) (\mathbb{E} + 1).$$
-This gives $\mathbb{E} = \frac{1}{p}$ as above.<|endoftext|>
-TITLE: Ideal class group of a one-dimensional Noetherian domain
-QUESTION [13 upvotes]: Let $A$ be a one-dimensional Noetherian domain.
-Let $K$ be its field of fractions.
-Let $B$ be the integral closure of $A$ in $K$.
-Suppose $B$ is a finitely generated $A$-module.
-It is well-known that $B$ is a Dedekind domain.
-Let $\mathfrak{f} = \{a \in A; aB \subset A\}$.
-Let $I$ be an ideal of $A$.
-If $I + \mathfrak{f} = A$, we call $I$ regular.
-Are the following assertions true?
-If yes, how do you prove them?
-(1) Let $I$ be a regular ideal.
-Then $I = IB \cap A$.
-(2) Let $\mathfrak{I}$ be an ideal of $B$ such that $\mathfrak{I} + \mathfrak{f} = B$.
-Let $I = \mathfrak{I} \cap A$.
-Then $I$ is regular and $IB = \mathfrak{I}$.
-(3) A regular ideal is uniquely decomposed as a product of regular prime ideals.
-(4) A regular ideal is invertible.
-(5) Let $I(A)$ be the group of invertible fractional ideals of $A$.
-Let $P(A)$ be the group of principal ideals of $A$.
-Let $RI(A)$ be the group of regular fractional ideals of $A$.
-Let $RP(A)$ be the group of regular principal ideals of $A$.
-Then $RI(A)/RP(A)$ is isomorphic to $I(A)/P(A)$.
-EDIT[Jun 26, 2012]
-(6) There exists the following exact sequence of abelian groups.
-$0 \rightarrow B^*/A^* \rightarrow (B/\mathfrak{f})^*/(A/\mathfrak{f})^* \rightarrow I(A)/P(A) \rightarrow I(B)/P(B) \rightarrow 0$
-In particular,
-$[I(A) : P(A)] = [I(B) : P(B)][(B/\mathfrak{f})^* : (A/\mathfrak{f})^*]/[B^* : A^*]$.
-EDIT[Jun 27, 2012]
-The converse of (4) is false.
-Let $\alpha$ be a nonzero element of a non-regular maximal ideal $P$ of $A$.
-Then $\alpha A$ is invertible, but not regular.
-EDIT
-My motivation came from the theory of binary quadratic forms over the ring of rational integers. It has a close relationship with the ideal theory of orders of quadratic number fields. I got some ideas from the books of Hilbert and Neukirch on algebraic number theory.
-EDIT
-I think you need this.
-EDIT
-Let $K$ be an algebraic number field.
-Let $A$ be an order of $K$ (a subring of $K$ which is a finitely generated $\mathbb{Z}$-module and contains a $\mathbb{Q}$-basis of $K$).
-Let $B$ be the ring of algebraic integers in $K$.
-Usually $B$ is hard to determine, while $A$ is easily found.
-For example, let $\theta$ be an algebraic integer which generates $K$.
-Then $A = \mathbb{Z}[\theta]$ is an order of $K$.
-In this case, the prime decomposition of a regular ideal of $A$ can be calculated rather more easily than in $B$.
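-For a concrete feel, here is a minimal sympy sketch (my own illustration, not part of the question; it relies on the standard fact that, for a prime $p$ coprime to the conductor, the splitting of $p$ in $\mathbb{Z}[\theta]$ mirrors the factorization of the minimal polynomial of $\theta$ modulo $p$):
-
-    from sympy import Poly, symbols
-
-    # theta = i, with minimal polynomial x^2 + 1, so A = Z[i].
-    x = symbols('x')
-    for p in (5, 7, 13):
-        print(p, Poly(x**2 + 1, x, modulus=p).factor_list())
-    # x^2 + 1 splits mod 5 and mod 13 (p splits into two prime ideals),
-    # and stays irreducible mod 7 (p is inert).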
By the above results, we can get information about the prime decompositions of ideals of $B$ which are prime to $\mathfrak{f}$.
-EDIT[Jun 28, 2012]
-I think Hilbert (or someone else before him) proved (1), (2), (3), (4) and a modified version of (6) (using $RI(A)/RP(A)$ instead of $I(A)/P(A)$).
-However, I think (5) is non-trivial.
-To prove (5), I needed to prove (6) by a method whose basic idea I borrowed from Neukirch's book on algebraic number theory.
-EDIT[Nov 26, 2013]
-I asked for alternative proofs of (5) here on MathOverflow because I think my proof is roundabout.
-
-REPLY [4 votes]: Since Matt proved (1) and (2), I'll prove the rest.
-(3)
-Let $RI^+(A)$ be the set of regular ideals of $A$.
-Clearly $RI^+(A)$ is an ordered commutative monoid under multiplication of ideals.
-Let $RI^+(B)$ be the set of ideals of $B$ which are relatively prime to $\mathfrak{f}$.
-$RI^+(B)$ is also an ordered commutative monoid.
-By (1) and (2), $RI^+(A)$ is canonically isomorphic to $RI^+(B)$ as an ordered commutative monoid.
-Since $B$ is a Dedekind domain, (3) follows immediately.
-(4) follows immediately from (3) and the following lemma.
-Lemma 1
-Let $P$ be a maximal ideal of $A$.
-$P$ is invertible if and only if $P$ is regular.
-Proof:
-Suppose $P$ is regular.
-By this, $A_P$ is integrally closed.
-Since $A_P$ is integrally closed, Noetherian and of dimension 1, it is a discrete valuation ring.
-Hence $PA_P$ is principal.
-Let $Q$ be a maximal ideal such that $Q \neq P$.
-Since $P$ is not contained in $Q$, $PA_Q = A_Q$.
-Hence $PA_Q$ is also principal.
-Since $A$ is Noetherian, $P$ is finitely generated over $A$.
-Hence $P$ is invertible by this.
-Suppose conversely $P$ is invertible.
-By this, $PA_P$ is principal.
-Hence $A_P$ is a discrete valuation ring (e.g. Atiyah–MacDonald).
-Hence $A_P$ is integrally closed.
-By this, $P$ is regular.
-QED
-Lemma 2
-Let $A$ be a commutative Noetherian ring.
-Let $I$ be a proper ideal of $A$ such that $\dim A/I = 0$.
-Then $A/I$ is canonically isomorphic to $\prod_P A_P/IA_P$, where $P$ runs over all the maximal ideals of $A$ such that $I \subset P$.
-This is well known.
-Lemma 3
-Let $A$ be a Noetherian domain of dimension 1.
-Let $I$ be a non-zero proper ideal of $A$.
-Then $(A/I)^*$ is canonically isomorphic to $\bigoplus_{\mathfrak{p}} (A_\mathfrak{p}/IA_\mathfrak{p})^*$ as abelian groups, where $\mathfrak{p}$ runs over all the maximal ideals of $A$ such that $I \subset \mathfrak{p}$.
-This follows immediately from Lemma 2.
-Lemma 4
-Let $A, K, B, \mathfrak{f}$ be as in the title question.
-Then $(B/\mathfrak{f})^*$ is canonically isomorphic to $\bigoplus_{\mathfrak{p}} (B_\mathfrak{p}/\mathfrak{f}B_\mathfrak{p})^*$ as abelian groups, where $\mathfrak{p}$ runs over all the maximal ideals of $A$.
-Proof:
-By Lemma 3, $(B/\mathfrak{f})^*$ is canonically isomorphic to $\bigoplus_{\mathfrak{P}} (B_\mathfrak{P}/\mathfrak{f}B_\mathfrak{P})^*$ as an abelian group, where $\mathfrak{P}$ runs over all the maximal ideals of $B$ such that $\mathfrak{f} \subset \mathfrak{P}$.
-If $\mathfrak{p}$ is a regular prime ideal of $A$, then $A_\mathfrak{p}$ is integrally closed by this. Hence $B_\mathfrak{p} = A_\mathfrak{p}$.
-Since $\mathfrak{f}A_\mathfrak{p} = A_\mathfrak{p}$, $B_\mathfrak{p}/\mathfrak{f}B_\mathfrak{p} = A_\mathfrak{p}/\mathfrak{f}A_\mathfrak{p} = 0$.
-Hence we only need to consider $\mathfrak{p}$ such that $\mathfrak{f} \subset \mathfrak{p}$.
It's easy to see that $B_\mathfrak{p}/\mathfrak{f}B_\mathfrak{p}$ is canonically isomorphic to $\prod B_\mathfrak{P}/\mathfrak{f}B_\mathfrak{P}$, where $\mathfrak{P}$ runs over all the maximal ideals of $B$ lying over $\mathfrak{p}$.
-Hence $(B_\mathfrak{p}/\mathfrak{f}B_\mathfrak{p})^*$ is canonically isomorphic to $\bigoplus (B_\mathfrak{P}/\mathfrak{f}B_\mathfrak{P})^*$, where $\mathfrak{P}$ runs over all the maximal ideals of $B$ lying over $\mathfrak{p}$. QED
-Lemma 5
-Let $B$ be an integral domain.
-Let $A$ be a subring of $B$ such that $B$ is integral over $A$.
-Let $I$ be an ideal of $A$.
-Let $\mathfrak{p}$ be a prime ideal of $A$ such that $I \subset \mathfrak{p}$.
-Let $B_\mathfrak{p}$ be the localization of $B$ with respect to the multiplicative subset $A - \mathfrak{p}$. Let $f:B_\mathfrak{p} \rightarrow B_\mathfrak{p}/IB_\mathfrak{p}$ be the canonical homomorphism.
-$f$ induces a group homomorphism $g: (B_\mathfrak{p})^* \rightarrow (B_\mathfrak{p}/IB_\mathfrak{p})^*$.
-Then $g$ is surjective.
-Proof:
-Since $A_\mathfrak{p}$ is a local ring and $B_\mathfrak{p}$ is integral over $A_\mathfrak{p}$,
-every maximal ideal $\mathfrak{Q}$ of $B_\mathfrak{p}$ lies over $\mathfrak{p}A_\mathfrak{p}$.
-Hence $\mathfrak{Q} = \mathfrak{P}B_\mathfrak{p}$, where $\mathfrak{P}$ is a maximal ideal of $B$ lying over $\mathfrak{p}$.
-Since $I \subset \mathfrak{p}$, $I \subset \mathfrak{P}$.
-Hence $IB_\mathfrak{p} \subset \mathfrak{Q}$.
-Let $x \in B_\mathfrak{p}$.
-Suppose $f(x)$ is invertible.
-Then $f(x)$ is not contained in any maximal ideal of $B_\mathfrak{p}/IB_\mathfrak{p}$.
-Suppose $x$ is not invertible.
-Then $x$ is contained in some maximal ideal $\mathfrak{Q}$ of $B_\mathfrak{p}$, and since $IB_\mathfrak{p} \subset \mathfrak{Q}$, the image $f(x)$ is contained in the maximal ideal $\mathfrak{Q}/IB_\mathfrak{p}$ of $B_\mathfrak{p}/IB_\mathfrak{p}$.
-This is a contradiction. QED
-Lemma 6
-Let $A, K, B, \mathfrak{f}$ be as in the title question.
-Let $\mathfrak{p}$ be a prime ideal of $A$ such that $\mathfrak{f} \subset \mathfrak{p}$.
-Since $\mathfrak{f}$ is an ideal of both $A$ and $B$, $\mathfrak{f} = \mathfrak{f}A = \mathfrak{f}B$.
-Hence $\mathfrak{f}A_\mathfrak{p} = \mathfrak{f}B_\mathfrak{p}$.
-Hence $A_\mathfrak{p}/\mathfrak{f}A_\mathfrak{p} \subset B_\mathfrak{p}/\mathfrak{f}B_\mathfrak{p}$.
-Hence $(A_\mathfrak{p}/\mathfrak{f}A_\mathfrak{p})^* \subset (B_\mathfrak{p}/\mathfrak{f}B_\mathfrak{p})^*$.
-We claim $(B_\mathfrak{p})^*/(A_\mathfrak{p})^*$ is isomorphic to $(B_\mathfrak{p}/\mathfrak{f}B_\mathfrak{p})^*/(A_\mathfrak{p}/\mathfrak{f}A_\mathfrak{p})^*$.
-Proof:
-By Lemma 5, $g: (B_\mathfrak{p})^* \rightarrow (B_\mathfrak{p}/\mathfrak{f}B_\mathfrak{p})^*$
-is surjective.
-Let $\pi: (B_\mathfrak{p}/\mathfrak{f}B_\mathfrak{p})^* \rightarrow (B_\mathfrak{p}/\mathfrak{f}B_\mathfrak{p})^*/(A_\mathfrak{p}/\mathfrak{f}A_\mathfrak{p})^*$ be the canonical homomorphism.
-Let $h: (B_\mathfrak{p})^* \rightarrow (B_\mathfrak{p}/\mathfrak{f}B_\mathfrak{p})^*/(A_\mathfrak{p}/\mathfrak{f}A_\mathfrak{p})^*$ be $\pi g$.
-Let $x \in (B_\mathfrak{p})^*$.
-Suppose $h(x) = 0$.
-Then $g(x) \in (A_\mathfrak{p}/\mathfrak{f}A_\mathfrak{p})^*$.
-Hence there exists $y \in A_\mathfrak{p}$ such that $x \equiv y$ (mod $\mathfrak{f}B_\mathfrak{p}$).
-Since $\mathfrak{f}B_\mathfrak{p} = \mathfrak{f}A_\mathfrak{p}$, $x \in A_\mathfrak{p}$.
-Since $g(x) \in (A_\mathfrak{p}/\mathfrak{f}A_\mathfrak{p})^*$, $x \in (A_\mathfrak{p})^*$.
-Hence Ker$(h) = (A_\mathfrak{p})^*$.
-QED
-(6)
-Let $A, K, B, \mathfrak{f}$ be as in the title question.
-There exists the following exact sequence of abelian groups.
-$0 \rightarrow B^*/A^* \rightarrow (B/\mathfrak{f})^*/(A/\mathfrak{f})^* \rightarrow I(A)/P(A) \rightarrow I(B)/P(B) \rightarrow 0$
-Proof:
-By this, there exists the following exact sequence of abelian groups.
-$0 \rightarrow B^*/A^* \rightarrow \bigoplus_{\mathfrak{p}} (B_{\mathfrak{p}})^*/(A_{\mathfrak{p}})^* \rightarrow I(A)/P(A) \rightarrow I(B)/P(B) \rightarrow 0$
-Here, $\mathfrak{p}$ runs over all the maximal ideals of $A$.
-If $\mathfrak{p}$ is a regular prime ideal of $A$, $A_\mathfrak{p}$ is integrally closed.
-Hence $B_\mathfrak{p} = A_\mathfrak{p}$.
-Hence, in $\bigoplus_{\mathfrak{p}} (B_{\mathfrak{p}})^*/(A_{\mathfrak{p}})^*$,
-it suffices to consider only $\mathfrak{p}$ such that $\mathfrak{f} \subset \mathfrak{p}$.
-By Lemma 3,
-$(A/\mathfrak{f})^*$ is canonically isomorphic to $\bigoplus_\mathfrak{p} (A_\mathfrak{p}/\mathfrak{f}A_\mathfrak{p})^*$ as an abelian group, where $\mathfrak{p}$ runs over all the maximal ideals of $A$ such that $\mathfrak{f} \subset \mathfrak{p}$.
-By Lemma 4,
-$(B/\mathfrak{f})^*$ is canonically isomorphic to $\bigoplus_{\mathfrak{p}} (B_\mathfrak{p}/\mathfrak{f}B_\mathfrak{p})^*$ as an abelian group, where $\mathfrak{p}$ runs over all the maximal ideals of $A$ such that $\mathfrak{f} \subset \mathfrak{p}$.
-Now, by Lemma 6, we are done. QED
-Lemma 7
-Let $A, K, B, \mathfrak{f}$ be as in the title question.
-Let $\phi: I(A)/P(A) \rightarrow I(B)/P(B)$ be the canonical homomorphism.
-Let $C \in$ Ker($\phi$).
-Then $C$ contains an ideal of the form $A \cap \beta B$, where $\beta$ is an element of $B$ such that $\beta B + \mathfrak{f} = B$.
-This follows immediately from (6).
-(5)
-Let $I$ be an invertible ideal of $A$.
-Since $B$ is a Dedekind domain, by this, there exist an ideal $\mathfrak{J}$ of $B$ and $\gamma \in K$ such that $\mathfrak{J} + \mathfrak{f} = B$
-and $IB = \mathfrak{J}\gamma$.
-Let $J = A \cap \mathfrak{J}$.
-By (2), $J$ is regular and $JB = \mathfrak{J}$.
-By (4), $J$ is invertible.
-Since $IB = \mathfrak{J}\gamma = J\gamma B$, $IJ^{-1}B = \gamma B$.
-By Lemma 7, there exists $\beta \in B$ such that $\beta B + \mathfrak{f} = B$ and $IJ^{-1} \equiv A \cap \beta B$ mod($P(A)$).
-Hence $I \equiv J(A \cap \beta B)$ mod($P(A)$).
-Since $J$ and $A \cap \beta B$ are regular, we are done.<|endoftext|>
-TITLE: Proving that $a + b = b + a$ for all $a,b \in\mathbb{R}$
-QUESTION [7 upvotes]: Being interested in the very foundations of mathematics, I'm trying to build a rigorous proof on my own that $a + b = b + a$ for all $a, b\in\mathbb{R}$. Inspired by interesting properties of the complex plane and some research, I realized that defining multiplication as repeated addition will lead me nowhere (at least, I could not work with it). So, my ideas:
-
-Defining addition $a+b$ as a kind of "walking" to the right $\left(b>0\right)$ or to the left $\left(b<0\right)$ a distance $b$ from $a$. Adding a number $b$ to a number $a$ (denoted by $a+b$) involves doing the following operation:
-
-Consider the real line $\lambda$ and its origin at $0$. Mark a point $a$, draw another real line $\omega$ above $\lambda$ such that $\omega \parallel \lambda$, and mark a point $b$ on $\omega$. Now, draw a line $\sigma$ such that $\sigma \perp \omega$ and the only point in common between $\sigma$ and $\omega$ is $b$. Consider the point that $\lambda$ and $\sigma$ have in common; this point is nicely denoted as $a + b$.
-
-
-(Note that all my work is based here.
Any problems, and my proof goes to trash.)
-This definition can be used to see the properties of adding two numbers $a$ and $b$, for all $a, b \in\mathbb{R}$.
-Using geometric properties may lead us to a rigorous proof (if not, I would like to know the problems of using it).
-
-So, I started:
-
-$a, b \in\mathbb{N}$:
-
-$a+b = \overbrace{\left(1+1+1+\cdots+1\right)}^a + \overbrace{\left(1+1+1+\cdots+1\right)}^b = \overbrace{1+1+1+1+\cdots+1}^{a+b} = \overbrace{\left(1+1+1+\cdots+1\right)}^b + \overbrace{\left(1+1+1+\cdots+1\right)}^a = b + a$
-(Implicitly, I'm using the fact that $\left(1+1\right)+1 = 1+\left(1+1\right)$, which I do not know how to prove; I interpret it as cutting a segment $c$ in two parts -- $a$ and $b$. However, this result can be extended to $\mathbb{Z}$ in the sense that $-a$ $(a > 0)$ is a change of direction: from right to left).
-
-$a, b \in\mathbb{R}$:
-
-Here, we have basically two cases:
-
-$a$ and $b$ are either both positive or both negative;
-$a$ and $b$, where one of them is negative.
-
-Since in my definition $-b$, $b>0$, means drawing a point $b$ to the left on the real line, there's no big deal in interpreting it; subtraction can be interpreted now. So, it starts:
-$a + b = c$. However, $c$ can be cut in two parts: $b$ and $a$. Naturally, if $a>c$, then $b<0$ -- many cases can be listed. So, $c = b + a$. But $c = a + b$; it follows that $a + b = b + a$. My questions:
-Is there any problem in using my definition of adding two numbers $a$ and $b$, which uses many geometric properties? Is there any way to rescue it from informality? Is there anything right here?
-Thanks in advance.
-
-REPLY [11 votes]: First you need to define $\mathbb{R}$ in your construction!
-To define $\mathbb{R}$, one way is to go about defining $\mathbb{N}$, then defining $\mathbb{Z}$, then defining $\mathbb{Q}$ and then finally defining $\mathbb{R}$. Once you have these things set up, proving associativity and commutativity of addition over the reals essentially boils down to proving associativity and commutativity of addition over the natural numbers.
-As said earlier, one goes about first defining natural numbers. For instance, $2$ as a natural number is defined as $2_{\mathbb{N}} = \{\emptyset,\{\emptyset\} \}$. We will use the notation that $e$ is $1_{\mathbb{N}}$ and $S(a)$ is the successor function applied to $a \in \mathbb{N}$.
-Then we define addition on natural numbers using the successor function. Addition on natural numbers is defined inductively as
-$$a +_{\mathbb{N}} e = S(a)$$
-$$a +_{\mathbb{N}} S(k) = S(a+k)$$
-You can also define $\times_{\mathbb{N}},<_{\mathbb{N}}$ on natural numbers similarly.
-Then one defines integers as equivalence classes (using $+_{\mathbb{N}}$) of ordered pairs of naturals, i.e. for instance, $2_{\mathbb{Z}} = \{(n+_{\mathbb{N}}2_{\mathbb{N}},n):n \in \mathbb{N}\}$. You can similarly extend the notion of addition and multiplication to integers, i.e. you can define $a+_{\mathbb{Z}} b$, $a \times_{\mathbb{Z}}b$, $a <_{\mathbb{Z}} b$. Addition, multiplication and ordering of integers are defined as appropriate operations on these sets.
-Then one moves on to defining rationals as equivalence classes (using $\times_{\mathbb{Z}}$) of ordered pairs of integers. So $2$ as a rational number, $2_{\mathbb{Q}}$, is an equivalence class of ordered pairs $$2_{\mathbb{Q}} = \{(a \times_{\mathbb{Z}} 2_{\mathbb{Z}},a):a \in \mathbb{Z}\backslash\{0\}\}$$ Again define $+_{\mathbb{Q}}, \times_{\mathbb{Q}}$, $a <_{\mathbb{Q}} b$.
Addition, multiplication and ordering of rationals are defined as appropriate operations on these sets.
-Finally, a real number is defined as a left Dedekind cut of rationals, i.e. for instance, $2$ as a real number is defined as $$2_{\mathbb{R}} = \{q \in \mathbb{Q}: q <_{\mathbb{Q}} 2_{\mathbb{Q}}\}$$
-Addition, multiplication and ordering of reals are defined as appropriate operations on these sets.
-Once you have these things set up, proving associativity and commutativity of addition over the reals essentially boils down to proving associativity and commutativity of addition over the natural numbers.
-Here are proofs of associativity and commutativity in natural numbers using Peano's axiom.
-Associativity of addition: $(a+b) + c = a + (b + c )$
-Proof:
-Let $\mathbb{S}$ be the set of all numbers $c$, such that $ (a+b) + c = a + (b + c )$, $ \forall a,b \in \mathbb{N}$.
-We will prove that $ e$ is in the set and whenever $k \in \mathbb{S}$, we have $S(k) \in \mathbb{S}$. Then by invoking Peano’s axiom (viz. the principle of mathematical induction), we get that $\mathbb{S} = \mathbb{N}$ and hence $ (a+b) + c = a + (b + c )$, $ \forall a,b \in \mathbb{N}$.
-First Step:
-Clearly, $ e \in \mathbb{S}$. This is because of the definition of addition.
-$ (a+b)+e = S(a+b)$ and $ a + S(b) = S(a+b)$
-Hence $ (a+b)+e = a + S(b) = a+ (b+e)$
-Second Step:
-Assume that the statement is true for some $ k \in \mathbb{S}$.
-Therefore, we have $ (a+b)+k = a+(b+k)$.
-Now we need to prove, $ (a+b) + S(k) = a+(b+S(k))$.
-By definition of addition, we have $ (a+b)+S(k) = S((a+b) + k)$
-By induction hypothesis, we have $ (a+b)+k = a+ (b+k)$
-By definition of addition, we have $ b + S(k) = S(b+k)$
-By definition of addition, we have $ a+S(b+k) = S(a+(b+k))$
-Hence, we get,
-$ (a + b) + S(k) = S((a+b) + k) = S(a+ (b+k)) = a + S(b+k) = a+ (b + S(k))$
-Hence, we get,
-$ (a+b) + S(k) = a + (b+S(k))$
-Final Step:
-So, we have $ e \in \mathbb{S}$. And whenever $k \in \mathbb{S}$, we have $S(k) \in \mathbb{S}$.
-Hence, by the principle of mathematical induction, we have that $\mathbb{S} = \mathbb{N}$,
-i.e. the associativity of addition, viz,
-$$(a+b) + c = a + (b+c)$$
-Commutativity of addition: $ m + n = n + m$, $ \forall m,n \in \mathbb{N}$.
-Proof:
-Let $ \mathbb{S}$ be the set of all numbers $ n$, such that $ m + n = n + m$, $ \forall m \in \mathbb{N}$.
-We will prove that $ e$ is in the set $ \mathbb{S}$ and whenever $ k \in \mathbb{S}$, we have $ S(k) \in \mathbb{S}$. Then by invoking Peano's axiom (viz. the Principle of Mathematical Induction), we state that $ \mathbb{S}=\mathbb{N}$ and hence $ m + n = n + m$, $ \forall m,n \in \mathbb{N}$.
-First Step:
-We will prove that $ m + e = e + m$ and hence $ e \in \mathbb{S}$.
-The line of thought for the proof is as follows:
-Let $ \mathbb{S}_1$ be the set of all numbers $ m$, such that $ m + e = e + m$.
-We will prove that $ e$ is in the set $ \mathbb{S}_1$ and whenever $ k \in \mathbb{S}_1$, we have $ S(k) \in \mathbb{S}_1$. Then by invoking Peano's axiom (viz. the Principle of Mathematical Induction), we state that $ \mathbb{S}_1=\mathbb{N}$ and hence $ m + e = e + m$, $ \forall m \in \mathbb{N}$.
-To prove: $ e \in \mathbb{S}_1$
-Clearly, $ e + e = e + e$ (We are adding the same elements on both sides)
-Assume that $ k \in \mathbb{S}_1$. So we have $ k + e = e + k$.
-Now to prove $ S(k)+ e = e + S(k)$.
-By the definition of addition, we have $ e + S(k) = S(e + k)$
-By our induction step, we have $ e + k = k + e$.
-So we have $ S(e+k) = S(k+e)$ -Again by definition of addition, we have $ k + e = S(k)$. -Hence, we get $ e + S(k) = S(S(k))$. -Again by definition of addition, $ p + e = S(p)$, which gives us $ S(k) + e = S(S(k))$. -Hence, we get that $ S(k+e) = S(k) + e$. -So we get, -$ e + S(k) = S(e+k) = S(k+e) = S(S(k)) = S(k) + e$. -Hence, assuming that $ k \in \mathbb{S}_1$, we have $ S(k) \in \mathbb{S}_1$. -Hence, by Principle of Mathematical Induction, we have $ m + e = e + m$, $ \forall m \in \mathbb{N}$. -Second Step: -Assume that $ k \in \mathbb{S}$. We need to prove now that $ S(k) \in \mathbb{S}$. -Since $ k \in \mathbb{S}$, we have $ m + k = k + m$. -To prove: $ m + S(k) = S(k) + m$. -Proof: -By definition of addition, we have $ m + S(k) = S(m+k)$. -By induction hypothesis, we have $ m + k = k + m$. Hence, we get $ S(m+k) = S(k+m)$. -By definition of addition, we have $ k + S(m) = S(k+m)$. -Hence, we get $ m + S(k) = S(m+k) = S(k+m) = k + S(m)$. -We are not done yet, since we want to prove, $ m + S(k) = S(k) + m$. -So we are left to prove $ k + S(m) = S(k) + m$. -$S(k) +m = (k+e) + m = k + (e+m) = k + (m+e) = k + S(m)$. -Hence, we get $ m + S(k) = S(k) + m$. -Final Step: -So, we have $ e \in \mathbb{S}$. And whenever $ n \in \mathbb{S}$, we have $ S(n) \in \mathbb{S}$. -Hence, by Principle of Mathematical Induction, we have the commutativity of addition, viz, -$ m + n = n + m$, $ \forall m,n \in \mathbb{N}$. - -We might think that associativity is harder/lengthier to prove than commutativity, since associativity is on three elements while commutativity is on two elements. -On the contrary, if you look at the proof, proving associativity turns out to be easier than commutativity. -Note that the definition of addition, viz $m + S(n) = S(m+n)$, incorporates the associativity $m+(n+e) = (m+n)+e$. -For commutativity however, we are changing the roles of $m$ and $n$, (we are changing the "order") and no wonder it is "harder/lengthier" to prove it. - -REPLY [5 votes]: Unfortunately, your method is not rigorous. There are too many undefined notions running around. For example, you never define what $\mathbb{R}$ is in your construction, or what properties it has. To answer your question, in the usual construction of the real numbers, the fact that $a+b=b+a$ is taken as axiomatic (along with several other axioms, called field axioms). -You mention that you are a precalculus student. At this stage in your mathematical development, you should just take it on faith that you can add numbers. It's intuitively obvious, and there are much more interesting things you can be doing with your time than worrying about foundational issues. Learn some elementary number theory! Learn some combinatorics! Find a book of challenging elementary problems in mathematics and work through it! There is a time for rigor and proof, but it comes after developing fundamental intuitions about the subject. The whole point of the formalizations of the real numbers is to make our intuitions as precise and error-free as possible, so putting the formalization before the intuition is the wrong way to go about things here, in my opinion. -If you are still interested in foundational issues, there are many books available that cover the foundations of mathematics. You would probably want an introductory book on set theory. For an exposition of the construction of the reals and their properties, I recommend Foundations of Analysis by Edmund Landau. But without any experience with proofs, such books are quite difficult to understand. 
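-
-(As a concrete illustration of the successor-based definition of addition appearing in the first answer above, here is a minimal sketch in R; the built-in increment stands in for the successor function $S$, an illustrative shortcut rather than a formal construction, and the helper name peano_add is ad hoc:)
-
-S <- function(n) n + 1                 # stand-in for the successor function
-peano_add <- function(a, b) {
-  if (b == 1) return(S(a))             # base case: a + e = S(a)
-  S(peano_add(a, b - 1))               # inductive case: a + S(k) = S(a + k)
-}
-peano_add(3, 4)                        # 7; peano_add(4, 3) agrees, as commutativity predicts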
-
-REPLY [3 votes]: You should first think about how you define $\alpha$, a real number. One of the classical constructions is defining it as a set. This is a construction based on Dedekind cuts, defined as follows:
-
-DEFINITION (Spivak)
-A real number is a set of rational numbers $\mathrm {\mathbf A}$ such that
-$(1)$ If $x \in \mathrm {\mathbf A}$ and $y < x$ then $y \in \mathrm {\mathbf A}$.
-$(2)$ If $x \in \mathrm {\mathbf A}$, there exists another $y$ such that $y \in \mathrm {\mathbf A}$ and $y >x$
- viz, $\mathrm {\mathbf A}$ has no maximal element.
-$(3)$ $\mathrm {\mathbf A}$ is not empty - viz $\mathrm {\mathbf A} \neq \emptyset$
-$(4)$ $\mathrm {\mathbf A} \neq \mathbf Q$.
-The set of all real numbers is denoted $\mathbf R$.
-
-These sets are also called Dedekind cuts, honoring Dedekind, who first considered them. The classical example of a Dedekind cut is $\sqrt{ \mathbf {2}} = \{ x : x^2 < 2 \text{ or } x <0\}$. This set defines the real number $\sqrt 2 $.
-Then we define the sum $ \large {\bf +}$ of two real numbers (different from $+$, the sum of rationals) as follows:
-
-DEFINITION (Spivak)
-If $\mathrm {\mathbf A} $ and $\mathrm {\mathbf B} $ are real numbers, then
-$$\mathrm {\mathbf A} {\large {\bf +}}\mathrm {\mathbf B}= \{x:x=y+z \text{ ; for some } y \in \mathrm {\mathbf A} \text{ and some } z \in \mathrm {\mathbf B} \} $$
-
-Note that with this definition, the proofs that
-$${\bf{A}} {\large {\bf +}} {\bf{B}} = {\bf{B}} {\large {\bf +}} {\bf{A}}$$
-and
-$$({\bf{A}} {\large {\bf +}} {\bf{B}}){\large {\bf +}}{\bf{C}} = {\bf{B}} {\large {\bf +}} ({\bf{A}}{\large {\bf +}}{\bf{C}})$$
-directly follow from the fact that for any rational numbers $x,y,z$
-$$\eqalign{
-  & x + y = y + x  \cr
-  & x + \left( {y + z} \right) = \left( {x + y} \right) + z \cr} $$
-We can then define $\mathbf < $ for two real numbers, as:
-
-DEFINITION (Spivak)
-If ${\bf{A}}$ and ${\bf{B}}$ are real numbers, then ${\bf{A}}\mathbf{<} {\bf{B}}$ means that ${\bf{A}}$ is contained in ${\bf{B}}$, but ${\bf{A}} \neq {\bf{B}}$.
-
-Note that $\mathbf > $, $\mathbf \leq $ and $\mathbf \geq $ can all be defined analogously.
-Then one can prove the following:
-
-THEOREM (Spivak)
-If $A$ is a set of real numbers, $A \neq \emptyset$ and $A$ is bounded above, then $A$ has a least upper bound, or supremum.
-
-Note then that if we consider the set $\mathbf R$ along with $+$, $\leq$, $\cdot$ then
-$(\mathbf 1)$ $(\mathbf R,+,\cdot)$ is a field, meaning the usual properties of addition and multiplication hold (along with the relations between them), along with the existence of an identity for multiplication ($1$), an identity for the sum ($0$), and the existence of inverses for both operations (excluding $0$ in multiplication).
-$(\mathbf 2)$ The field is ordered, in the sense that the relation $\leq$ is a total order.
-$(\mathbf 3)$ It is complete, in the sense that every nonempty set of real numbers which has an upper bound has a least upper bound.
-
-REPLY [2 votes]: Please don't let the comments discourage you - it is good that you are trying to understand the fundamentals of mathematics, and your effort is to be applauded.
-In order to prove a theorem, one must have in place a set of axioms and definitions (as well as acceptable rules of logic) from which the theorem can be deduced. It is quite helpful to see some specific examples of how this is achieved before striking out on your own. In addition (pun intended), it is helpful to have a book (or better yet, a teacher) which can guide your proofs along.
-The answer to your question is that your proof is missing some key parts, most importantly the axiomatic framework. Your proof seems to boil down to a proof by intuition, where you define addition in a geometric way and rely on the geometric intuition that no matter which way you perform your construction, the total length would be the same. This is not enough, and to understand why, you need to see what is enough.
-I would recommend starting with a book on set theory, which is often the first place one encounters proofs of this nature. This free online book looks accessible to someone with your background:
-http://math.boisestate.edu/~holmes/holmes/head.pdf
-
-REPLY [2 votes]: Something of this kind was done by Hilbert a bit over a century ago, in his Foundations of Geometry. The axioms of the geometric substrate were laid out in great detail. Then an arithmetic was defined on the points of a particular line $\ell$, with addition defined in a way reminiscent of what you did, and then multiplication and division. For addition one picks an arbitrary point $O$ on $\ell$ to serve as $0$. For multiplication one needs another arbitrary point $P\ne O$, to serve as $1$.
-Hilbert then showed that the points on the line, under these geometric operations, form what we now call a complete ordered field. This is of some interest, in that it shows that classical plane geometry, done correctly (that is, with the axioms missed by Euclid added), yields a structure isomorphic to the familiar coordinate plane $\mathbb{R}^2$.
-However, in my opinion, this work of Hilbert had little long-term significance. Although it shows that the real numbers can be developed within a classical geometric framework, the standard approach continues to be the arithmetical one pioneered by Weierstrass, Dedekind, and Cantor. The natural numbers are either taken as fundamental or (later) defined set-theoretically. Then one uses set-theoretic tools to build in succession the integers, the rationals, and the reals.<|endoftext|>
-TITLE: Must a weakly or weak-* convergent net be eventually bounded?
-QUESTION [25 upvotes]: Let $\mathfrak{X}$ be a Banach space. As a standard corollary of the Principle of Uniform Boundedness, any weak-* convergent sequence in $\mathfrak{X}^*$ must be (norm) bounded. A weak-* convergent net need not be bounded in general, but must it be eventually bounded?
-It seems like the following should prove that the answer is yes: If $\{y_\nu\}$ is a net in $\mathfrak{X}^*$, suppose it's not eventually bounded. Then we can recursively construct an unbounded subsequence: since the net is not bounded, there exists some $\nu_1$ with
-$\|y_{\nu_1}\| > 1$. By hypothesis the tail subnet $\{y_\nu \mid \nu \geq \nu_1\}$ is not bounded, so there exists some $\nu_2 \geq \nu_1$ with $\|y_{\nu_2}\| > 2$, and so on. If the original net were weak-* convergent, then so would be this unbounded subsequence, contradicting PUB.
-It would then follow that weakly convergent nets in $\mathfrak{X}$ are bounded as well, because the image in $\mathfrak{X}^{**}$ would be weak-* convergent.
-Question: This is legit, right? I'm still not quite comfortable enough with nets or with the weak-* topology to entirely trust myself here, and I'd like to know the answer since I seem to be bumping into this question a lot recently.
-
-REPLY [13 votes]: Nate Eldredge has done the hard work by giving a counterexample to the conjecture; here's a brief explanation of what's wrong with the argument given in the question.
-A net $\psi:J\to X$ is a subnet of a net $\varphi:I\to X$ iff for each $i\in I$ there is a $j\in J$ such that $$\big\{\psi(j\,'):j\le j\,'\big\}\subseteq\big\{\varphi(i\,'):i\le i\,'\big\}\;.$$ Equivalently, if $\varphi$ is eventually in a set $A$, so is $\psi$.
-Taking $D$ as the directed set underlying your net, there's no reason to think that your sequence $\langle y_{\nu_k}:k\in\Bbb N\rangle$ is actually a subnet of $\langle y_\nu:\nu\in D\rangle$: there may well be a $\nu_0\in D$ such that $$\{y_{\nu_k}:k\in\Bbb N\}\setminus\{y_\nu:\nu_0\preceq\nu\}$$ is infinite. This is the case with Nate's net, for instance.<|endoftext|>
-TITLE: Always a value with uncountably many preimages? (for a continuous real map on the plane)
-QUESTION [5 upvotes]: Let $f$ be a continuous map ${\mathbb R}^2 \to {\mathbb R}$. For $y\in {\mathbb R}$, denote by $P_y$ the preimage set $\lbrace (x_1,x_2) \in {\mathbb R}^2 | f(x_1,x_2)=y \rbrace$.
-Is it true that
-(1) At least one $P_y$ is uncountable?
-(2) At least one $P_y$ has the same cardinality as $\mathbb R$?
-Some easy remarks:
-
-(2) is stronger than (1).
-(2) follows from (1) if we assume the GCH.
-If there is a point $(x_0,y_0)$ such that the partial derivatives $\frac{\partial f}{\partial x}(x_0,y_0)$ and $\frac{\partial f}{\partial y}(x_0,y_0)$ exist and one of them is nonzero, then
-(2) (and hence (1)) follows from the implicit function theorem.
-
-REPLY [8 votes]: Consider the restriction of $f$ to $\mathbb{R}_x=\mathbb{R} \times \{x\}$ for fixed $x$. If its image is a point, we're done. Otherwise, its image is an interval in $\mathbb{R}$, and that interval contains a subinterval with rational endpoints. Since there are only countably many such intervals, there must be one that's contained in the images of uncountably many $\mathbb{R}_x$; then any point in that interval has uncountable preimage.
-
-REPLY [7 votes]: It's well-known that every uncountable closed subset of $\Bbb R^n$ (indeed, every uncountable Borel set) has cardinality $2^\omega$, so (1) and (2) are equivalent, since every $P_y$ is closed. In fact a strong form of (2) is true.
-(2) is certainly true if $f$ is constant. If not, $\operatorname{ran}f$ contains an open interval $(a,b)$. For each $y\in(a,b)$, $\Bbb R^2\setminus P_y$ must be disconnected. But for any countable set $S\subseteq\Bbb R^2$, $\Bbb R^2\setminus S$ is arcwise connected and therefore connected, so $|P_y|=2^\omega$.
-To see that $\Bbb R^2\setminus S$ is arcwise connected, fix $p,q\in\Bbb R^2\setminus S$ with $p\ne q$. There are $2^\omega$ straight lines through $p$, so most of them miss $S$, and similarly for $q$. Thus, there are straight lines through $p$ and $q$ that miss $S$ and intersect.
-
-REPLY [4 votes]: (1) Deleted my answer for this part.
-(2) If you have that there exists a $P_y$ which is uncountable, then $P_y$ has the cardinality of $\mathbb{R}$ even without the continuum hypothesis. This is because, since $f$ is continuous, $P_y$ is a closed set, being the preimage of the closed set $\{y\}$. By the Cantor-Bendixson theorem, it can be written as a union of a countable set and a perfect set. Nonempty perfect sets have the cardinality of the continuum. (I guess they are called perfect because they are not counterexamples to the continuum hypothesis.)<|endoftext|>
-TITLE: The Ring Game on $K[x,y,z]$
-QUESTION [394 upvotes]: I recently read about the Ring Game on MathOverflow, and have been trying to determine winning strategies for each player on various rings. The game has two players and begins with a commutative Noetherian ring $R$.
Player 1 mods out a nonzero non-unit, and gives the resulting ring to player 2, who repeats the process. The last person to make a legal move (i.e. whoever produces a field) wins. For PIDs the game is trivial: player 1 wins by modding out a single prime element. However, the game becomes far more complicated even in the case of 2-dimensional UFDs. After several days I was unable to determine a winning strategy for either player for $\mathbb Z[x]$.
-I believe the most tractable class of rings are finitely generated commutative algebras over an algebraically closed field $K$, as for these we can take advantage of the Nullstellensatz. So far, I've been able to deal with the cases of $K[x]$ and $K[x,y]$. Player 1 has a trivial winning strategy for $K[x]$, as it is a PID. For $K[x,y]$, player 1 has a winning strategy as follows:
-$1.$ Player 1 plays $x(x+1)$.
-$2.$ Player 2 plays $f(x,y)$ which vanishes somewhere on $V(x(x+1))$ but not everywhere (this describes all legal moves). Note that $V(f(x,y),x(x+1))$ is a finite collection of points, possibly together with $V(x)$ or $V(x+1)$ but not both. Furthermore, $V(f(x,y),x(x+1))$ cannot be a single point, as the projection of $V(f(x,y))$ onto the $x$-axis yields an algebraic set, which must be either a finite collection of points or the entire line and so intersects $V(x(x+1))$ in at least two points; thus $R/(x(x+1),f(x,y))$ is not a field.
-$3.$ Player 1 plays $ax+by+c$ which vanishes at exactly one point in $V(f(x,y),x(x+1))$. This is always possible, since any finite collection of points can be avoided and the lines $V(x)$ or $V(x+1)$ will intersect $V(ax+by+c)$ at most once. The resulting ring is a field, as the ideal $I$ generated by the three plays contains either $(ax+by+c,x)$ or $(ax+by+c,x+1)$, which are maximal, hence is equal to one of these and $R/I$ is a field.
-However, I've no idea where to go for $K[x,y,z]$ and beyond. I suspect that $K[x,y,z]$ is a win for player 2, since anything player 1 plays makes the result look vaguely like $K[x,y]$, but on the other hand player 1 could make some pretty nasty plays which might trip up any strategy of player 2's.
-So my question is: which player has a winning strategy for $K[x,y,z]$, and what is one such strategy?
-Edit: As pointed out in the comments, this winning strategy is wrong.
-
-REPLY [4 votes]: I computed the nimbers of a few rings, for what it's worth. I don't see any sensible pattern so perhaps the general answer is hopelessly hard. This wouldn't be surprising, because even for very simple games like sprouts starting with $n$ dots no general pattern is known for the corresponding nimbers.
-OK so the way it works is that the nimber of a ring $A$ is the smallest ordinal which is not in the set of nimbers of $A/(x)$ for $x$ non-zero and not a unit. The nimber of a ring is zero iff the corresponding game is a second player win -- this is a standard and easy result in combinatorial game theory. If the nimber is non-zero then the position is a first player win and his winning move is to reduce the ring to a ring with nimber zero.
-Fields all have nimber zero, because zero is the smallest ordinal not in the empty set. An easy induction on $n$ shows that for $k$ a field and $n\geq1$, the nimber of $k[x]/(x^n)$ is $n-1$; the point is that the ideals of $k[x]/(x^n)$ are precisely the $(x^i)$. In general an Artin local ring of length $n$ will have nimber at most $n-1$ (again trivial induction), but strict inequality may hold.
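-(A hedged computational aside, not part of the original answer: the $k[x]/(x^n)$ computation just described is easy to reproduce mechanically. Since the proper nonzero quotients of $k[x]/(x^n)$ are exactly the rings $k[x]/(x^i)$ with $1\le i<n$, the rule "nimber = smallest ordinal not among the nimbers of the options" becomes a short mex calculation; a minimal sketch in R, with ad hoc names mex and g:)
-
-mex <- function(s) { v <- 0; while (v %in% s) v <- v + 1; v }  # minimal excludant
-g <- 0                               # g[1]: k[x]/(x) = k is a field, nimber 0
-for (n in 2:10) g[n] <- mex(g[1:(n - 1)])
-g                                    # 0 1 2 ... 9, i.e. k[x]/(x^n) has nimber n - 1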
For example if $V$ is a finite-dimensional vector space over $k$ and we construct a ring $k\oplus \epsilon V$ with $\epsilon^2=0$, this has nimber zero if $V$ is even-dimensional and one if $V$ is odd-dimensional; again the proof is a simple induction on the dimension of $V$, using the fact that a non-zero non-unit element of $k\oplus\epsilon V$ is just a non-zero element of $V$, and quotienting out by this brings the dimension down by 1. In particular the ring $k[x,y]/(x^2,xy,y^2)$ has nimber zero, which means that the moment you start dealing with 2-dimensional varieties things are going to get messy. But perhaps this is not surprising -- an Artin local ring is much more complicated than a game of sprouts and even sprouts is a mystery. -Rings like $k[[x]]$ and $k[x]$ have nimber $\omega$, the first infinite ordinal, as they have quotients of nimber $n$ for all finite $n$. As has been implicitly noted in the comments, the answer for a general smooth connected affine curve (over the complexes, say) is slightly delicate. If there is a principal prime divisor then the nimber is non-zero and probably $\omega$ again; it's non-zero because P1 can just reduce to a field. But if the genus is high then there may not be a principal prime divisor, by Riemann-Roch, and now the nimber will be zero because any move will reduce the situation to a direct sum of rings of the form $k[x]/(x^n)$ and such a direct sum has positive nimber as it can be reduced to zero in one move. So there's something for curves. For surfaces I'm scared though because the Artin local rings that will arise when the situation becomes 0-dimensional can be much more complicated. -I don't see any discernible pattern really, but then again the moment you leave really trivial games, nimbers often follow no discernible pattern, so it might be hard to say anything interesting about what's going on.<|endoftext|> -TITLE: Infinitely many primes are of the form $an+b$, but how about $a^n+b$? -QUESTION [8 upvotes]: A famous theorem of Dirichlet says that infinitely many primes are of the form:$\alpha n+\beta$, but are there infinitely many of the form: $\alpha ^n+\beta$, where $\beta$ is even and $\alpha$ is prime to $\beta$? or of the form $\alpha!+\gamma$, where $\gamma$ is odd? -Out of mere curiosity has this question come, thus any help is greatly appreciated. - -REPLY [2 votes]: Numbers $n$ such that $n! - 1$ is prime is http://oeis.org/A002982. The list begins, 3, 4, 6, 7, 12, 14, 30, 32, 33, 38, 94, 166, 324, 379, 469, 546, 974, 1963, 3507, 3610, 6917, 21480, 34790, 94550, 103040. Presumably the list is infinite, but it appears that no one has proved it. -Numbers $n$ such that $n! + 1$ is prime is http://oeis.org/A002981. The list begins, 0, 1, 2, 3, 11, 27, 37, 41, 73, 77, 116, 154, 320, 340, 399, 427, 872, 1477, 6380, 26951, 110059, 150209. As before, presumably the list is infinite, but it appears that no one has proved it. -Many references are given at those two webpages.<|endoftext|> -TITLE: Is there a nice way to classify the ideals of the ring of lower triangular matrices? -QUESTION [9 upvotes]: Suppose $T$ is the subset of $M_2(\mathbb{Z})$ of lower triangular matrices, those of form $\begin{pmatrix} a & 0 \\ b & c\end{pmatrix}$. So $T$ is a subring. Now I know that the ideals of $M_2(\mathbb{Z})$ are all of the form $M_2(I)$ for $I$ and ideal of $\mathbb{Z}$. -However, is there a nice way to describe all the ideals in $T$ specifically, or does it not behave quite as nicely? 
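-(Added for concreteness, a quick numerical sanity check of the subring claim: products of lower triangular integer matrices stay lower triangular. A minimal sketch in R, with arbitrarily chosen entries:)
-
-A <- matrix(c(2, 5, 0, 3), nrow = 2)   # column-major, so this is [[2,0],[5,3]]
-B <- matrix(c(1, 4, 0, 7), nrow = 2)
-A %*% B                                # the (1,2) entry is still 0, as expected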
- -REPLY [6 votes]: This is a special case of a "triangular ring" construction, and you can find a detailed answer here about its left/right/two-sided ideal lattices. -Adjustments will have to be made if you really want to use lower triangular matrices, but the answer will be similar. -Added: Let's try to interpret this through the help given in that post. Let $T= \begin{pmatrix} R &0\\ M & S \end{pmatrix}$ be your ring, with $R=M=S=\mathbb{Z}$. Under ordinary matrix multiplication, $M$ is an $(S,R)$ bimodule. We may think of this ring as $R\oplus M\oplus S$ with funny multiplication. - -The right ideals are all of the form $J_2\oplus J_1$, where $J_1$ is a right ideal of $S$ and $J_2$ is a right $R$ submodule of $R\oplus M$ which contains $J_1M$. - -To see the motivation for the somewhat cryptic conditions given in the other solution, just think: if I have a right ideal and I multiply on the right by $\begin{pmatrix}z&0\\0&0\end{pmatrix}$, what would be included in my ideal? Do the same with a few other sparse matrices and I think you'll see how the conditions work. -So, let us take $12\mathbb{Z}$ to be $J_1$, and pick a $J_2\supseteq 12\mathbb{Z}(\mathbb{Z})=12\mathbb{Z}$. You could pick, for example, $J_2=7\mathbb{Z}\oplus 6\mathbb{Z}\subseteq R\oplus M$. So our candidate ideal is $7\mathbb{Z}\oplus 6\mathbb{Z}\oplus 12\mathbb{Z}\subseteq R\oplus M\oplus S$. Written out properly with matrices it looks like: -$$ -I=\begin{pmatrix} 7\mathbb{Z} &0\\ 6\mathbb{Z} & 12\mathbb{Z} \end{pmatrix} -$$ -I have to warn you though, that $J_2$ need not be a direct sum of two submodules of $R$ and $M$ like that. You could have $J_2=(0,6\mathbb{Z})+\{(a,a)\mid a\in 7\mathbb{Z}\}=\{(a,a+b)\mid a\in 7\mathbb{Z}, b\in 6\mathbb{Z}\}\subseteq R\oplus M$. -But nevertheless, according to the rules, -$$ -I=\begin{pmatrix} m\mathbb{Z} &0\\ n\mathbb{Z} & t\mathbb{Z} \end{pmatrix} -$$ -will be a right ideal as long as $n$ divides $t$. -I'll encourage you to try working out the left ideals (but you can summon me again if you get stuck.)<|endoftext|> -TITLE: Implicit Function Theorem example in Baby Rudin -QUESTION [6 upvotes]: I am looking at example 2.29 of Baby Rudin (page 227) of my edition to illustrate the implicit function theorem. This is what the example is: - - -Take $n= 2$ and $m=3$ and consider $\mathbf{f} = (f_1,f_2)$ of $\Bbb{R}^5$ to $\Bbb{R}^2$ given by - $$\begin{eqnarray*} f_1(x_1,x_2,y_1,y_2,y_3) &=& 2e^{x_1} + x_2y_1 -4y_2 + 3 \\ -f_2(x_1,x_2,y_1,y_2,y_3) &=& x_2\cos x_1 - 6x_1 + 2y_1 - y_3 \end{eqnarray*}.$$ - If $\mathbf{a} = (0,1)$ and $\mathbf{b} = (3,2,7)$, then $\mathbf{f(a,b)} = 0$. With respect to the standard bases, the derivative of $f$ at the point $(0,1,3,2,7) $ is the matrix - $$[A] = \left[\begin{array}{ccccc} 2 & 3 & 1 & -4 & 0 \\ -6 & 1 & 2 & 0 & -1 \end{array}\right].$$ - Hence if we observe the $2 \times 2$ block - $$\left[\begin{array}{cc} 2 & 3 \\ -6 & 1 \end{array}\right]$$ - it is invertible, and so by the implicit function theorem there exists a $C^1$ mapping $\mathbf{g}$ defined on a neighbourhood of $(3,2,7)$ such that $\mathbf{g}(3,2,7 ) = (0,1)$ and $\mathbf{f}(\mathbf{g}(\mathbf{y}),\mathbf{y}) = 0$. - - -Now what I don't understand is from such a $\mathbf{g}$, how does this mean that I can solve the variables $x_1$ and $x_2$ for $y_1,y_2,y_3$ locally about $(3,2,7)$? -Also if I wanted to carry out this computation explicitly, how can I do it? We do not have a nice and shiny linear system to solve unlike problem 19 of the same chapter. -Thanks. 
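-
-(Added for concreteness, and not from Rudin: although $\mathbf{g}$ has no closed form, it can be approximated numerically. Below is a hedged sketch in R using Newton's method in the $x$-variables, with the $2\times 2$ Jacobian block read off from $f_1, f_2$; the function names f and g and the seed and tolerance are ad hoc choices:)
-
-f <- function(x, y) c(2*exp(x[1]) + x[2]*y[1] - 4*y[2] + 3,      # f1
-                      x[2]*cos(x[1]) - 6*x[1] + 2*y[1] - y[3])   # f2
-g <- function(y, x = c(0, 1)) {        # Newton iteration seeded at a = (0, 1)
-  for (i in 1:50) {
-    J <- rbind(c(2*exp(x[1]),         y[1]),                     # df1/dx1, df1/dx2
-               c(-x[2]*sin(x[1]) - 6, cos(x[1])))                # df2/dx1, df2/dx2
-    step <- solve(J, f(x, y))
-    x <- x - step
-    if (max(abs(step)) < 1e-12) break
-  }
-  x
-}
-g(c(3, 2, 7))    # returns (0, 1), i.e. g(b) = a with f(g(y), y) = 0
-g(c(3.1, 2, 7))  # g at a nearby point of the neighbourhood of b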
-
-REPLY [4 votes]: As suggested by several comments, the implicit function theorem as well as the inverse function theorem (which is equivalent to the implicit function theorem) are (powerful) existence theorems which in fact provide little help when it comes to explicitly computing an inverse or implicit function. This is, however, usually not necessary in applications in analysis.
-What is extremely important in the two theorems (apart from the existence claim) are the assertions of the uniqueness/invertibility of the solution in a whole neighbourhood (having topological consequences, e.g., the inverse function theorem shows that $C^1$ maps with invertible derivative at one point are locally open) and the regularity of the functions whose existence is guaranteed (i.e. $f(x,y)\in C^1 (C^k)$ and the assumptions of the theorem are fulfilled $ \Rightarrow g\in C^1 (C^k)$ if $f(x, g(x))=0$).
-As most people have difficulty grasping the implicit function theorem when they first see it, I think Rudin's intention was to illustrate the steps which are necessary when one wants to apply it.<|endoftext|>
-TITLE: Is it possible for a quadratic equation to have one rational root and one irrational root?
-QUESTION [34 upvotes]: Is it possible for a quadratic equation to have one rational root and one irrational root?
-
-Yes, a pretty straightforward question. Is it possible?
-
-REPLY [4 votes]: Consider an equation of the form $x(x-a)=0$.
-Here one solution is $0$ and the other is $a$. So if $a$ is irrational, one root is rational and the other is irrational.<|endoftext|>
-TITLE: Evaluating: $\lim_{n\to\infty} \frac{\sqrt n}{\sqrt {2}^{n}}\int_{0}^{\frac{\pi}{2}} (\sin x+\cos x)^n dx $
-QUESTION [8 upvotes]: I'm supposed to compute the following limit:
-$$\lim_{n\to\infty} \frac{\sqrt n}{\sqrt {2}^{n}}\int_{0}^{\frac{\pi}{2}} (\sin x+\cos x)^n dx $$
-I'm looking for a reasonable approach in this case, if possible. Thanks.
-
-REPLY [7 votes]: Laplace's method yields
-$$\frac{\sqrt n}{\sqrt {2}^{n}}\int_{0}^{\pi/2} (\sin x+\cos x)^n \mathrm dx=\int_{-\pi\sqrt{n}/4}^{\pi\sqrt{n}/4}\left(\cos(t/\sqrt{n})\right)^n\mathrm dt\to\int_{-\infty}^{+\infty}\mathrm e^{-t^2/2}\mathrm dt=\sqrt{2\pi}.
-$$<|endoftext|>
-TITLE: Matrix inverse property, show that $(I + uv^T)^{-1} = I - \frac{uv^T}{1+v^Tu}$
-QUESTION [6 upvotes]: Let $u, v \in \mathbb{R}^N, u^Tv \neq -1$. Thereby $I +uv^T \in \mathbb{R}^{N \times N}$ is invertible. Show that:
-$$(I + uv^T)^{-1} = I - \frac{uv^T}{1+v^Tu}$$
-
-I'm lost: why does the denominator contain $v^Tu$ rather than $uv^T$? Where did this $1$ come from? Any hints are very appreciated!
-
-REPLY [3 votes]: Some ideas: Let us put $$B:=uv^T\,\,,\,\,w:=v^Tu$$Now, let us do the matrix product$$(I+B)\left(I-\frac{1}{1+w}B\right)=\frac{1}{1+w}(I+B)(I-B+wI)=\frac{1}{1+w}(I-B^2+wI+wB)=$$$$=\frac{1}{1+w}\left[(1+w)I+B(wI-B)\right]$$Well, now just check the product $$B(wI-B)...:>)$$
-*Added*$$B^2=\left(uv^T\right)\left(uv^T\right)=u\left(v^Tu\right)v^T=uwv^T=wuv^T=wB$$ so we get $$B(wI-B)=wB-B^2=wB-wB=0$$<|endoftext|>
-TITLE: Numerator vs. denominator vs. nominator
-QUESTION [25 upvotes]: What is appropriate usage of "numerator", "denominator", and "nominator" to refer to parts of a fraction?
-
-I'm posting this question and answer here because I had little luck finding a clear answer through Google. I realize that it's not really mathematics, but I think it's worth mentioning as an educational issue.
-Ideally, the question at English.stackexchange would address this completely; however, they have closed it with the reason: "you can just look it up". I agree that this is possible, but I don't think that reveals the consensus. I wanted to allow for more opinions to be expressed. (Below, I argue that "nominator" should be discouraged.)
-
-REPLY [31 votes]: The numerator is the top part of a fraction, the denominator is the bottom part, and nominator is not an appropriate term for any part of a fraction.
-I have seen nominator used to mean both "numerator" and "denominator". According to a question on this at English.stackexchange, this use of "nominator" is exceedingly rare.
-Rather than people having been taught that "nominator" was appropriate, I think that it is far more often the case that the use of "nominator" is an eggcorn that has arisen due to its resemblance to the other two words.
-I think using "nominator" should be discouraged because it already has a wholly different meaning, and has no etymological connection to fractions to speak of. It also helps to confuse the meanings of the proper terms, if it is mixed with them.<|endoftext|>
-TITLE: A particular (functional) determinant calculation
-QUESTION [7 upvotes]: One wants to calculate the quantity $\det'(\frac{\partial}{\partial t} - i [\alpha, ])$, where the prime on the "det" means that one wants to take a product over only the non-zero eigenvalues of the operator $\frac{\partial}{\partial t} - i [\alpha, ]$. This operator is acting on the adjoint representation of a Lie algebra. ($\alpha$ itself is in the adjoint representation of the Lie algebra.)
-Now one claims that one can find an eigenbasis of matrix functions for $\alpha$ whose eigenvalues are $\{ \lambda _i \}_{i=1} ^ {i = n}$ and whose $t$ dependence is $\exp(\frac{i2\pi n t}{\beta})$.
-
-Can someone write down the exact equation into which the above translates?
-
-I would vaguely guess that if $X_i$ is such an eigenvector then, because of the adjoint nature of the representation, it means that $[\alpha, X_i] = \lambda _ i X_i$.
-But I don't seem to see exactly where to fix the exponential dependence.
-Now one claims that in this basis the determinant is equal to the following expression,
-$$\prod _{n \neq 0} \prod _ {i,j} [ \frac{i2\pi n}{\beta} - i (\lambda _ i - \lambda _j)]$$
-and the above can apparently be simplified to give,
-$$\det'(\frac{\partial}{\partial t} - i [\alpha, ]) = \left ( \prod _{m \neq 0} \frac{i2\pi m}{\beta} \right )\prod _ {i,j} \frac{2}{\beta (\lambda _i - \lambda _j)}\sin (\frac{\beta (\lambda _ i - \lambda _j)}{2})$$
-
-It would be great if someone could help explain the above simplification.
-
-REPLY [3 votes]: $\def\a{\alpha}
-\def\at{\widetilde\a}
-\def\b{\beta}
-\def\d{\delta}
-\def\j{\psi}
-\def\J{\Psi}
-\def\JT{\widetilde\J}
-\def\l{\lambda}
-\def\L{\Lambda}
-\def\p{\pi}
-\def\w{\omega}
-\def\det{\mathrm{det'}\,}
-\def\dt{\frac{\partial}{\partial t}}
-\def\diag{\mathrm{diag}}
-\def\su{\mathfrak{su}}$
-A simpler determinant
-Let's first have a look at a related problem, to determine
-$$\det\left(\dt - i \a\right)$$
-where $\a$ is some scalar.
-(Recall that the analog of commutation is multiplication.)
-Some of the difficulties of the full problem exist already in this simpler one.
-We are looking for the eigenvectors of the operator $\dt - i \a$, that is, for solutions to the equation
-$$\left(\dt - i \a\right)\j = \L \j.$$
-The eigenvectors are $\j(t) = e^{i\w t}$, with eigenvalues $\L = i(\w-\a)$.
-Imposing periodic boundary conditions, $\j(\b) = \j(0)$, we find $\w_m = 2m\p/\b$, where $m\in\mathbb{Z}$. -Thus, the eigenvalues are quantized, -$\L_m = i\left(\frac{2 m\p}{\b}-\a\right).$ -We assume that $\a\ne 2m\p/\b$ for any $m\in\mathbb{Z}$, so $\L_m \ne 0$. -In particular, $\L_0 = -i\a \ne 0$. -Then the product $\det$ ranges over all $m$, -$$\begin{eqnarray*} -\det\left(\dt - i \a\right) -&=& \prod_{m=-\infty}^\infty \L_m \\ -&=& \L_0 \prod_{m\ne 0} \L_m \\ -&=& \L_0 \prod_{m\ne 0} i\left(\frac{2 m\p}{\b}-\a\right) \\ -&=& \L_0 \prod_{m\ne 0} \frac{2m\p i}{\b} - \left(1-\frac{\a\b}{2\p m}\right) \\ -&=& \L_0 \left(\prod_{m=1}^\infty \left(\frac{2m\p}{\b}\right)^2 \right) - \prod_{m=1}^\infty \left(1-\left(\frac{\a\b}{2\p m}\right)^2\right) \\ -&=& \L_0 \frac{2}{\a\b}\sin\frac{\a\b}{2} - \prod_{m=1}^\infty \left(\frac{2m\p}{\b}\right)^2 \\ -&=& -\frac{2i}{\b}\sin\frac{\a\b}{2} - \prod_{m=1}^\infty \left(\frac{2m\p}{\b}\right)^2. -\end{eqnarray*}$$ -Above we have used the infinite product representation for the sine function, -$\sin x = x\prod_{m=1}^\infty \left(1-\frac{x^2}{m^2\pi^2}\right)$. -The determinant in general -We assume the algebra is a finite dimensional semisimple Lie algebra over $\mathbb{R}$ or $\mathbb{C}$. -Consider the equation -$$\begin{equation*} -\dt\J - i [\a,\J] = \L \J.\tag{1} -\end{equation*}$$ -The objects $\a$ and $\J$ are now $n\times n$ matrices in the adjoint representation of the Lie algebra where $n$ is the number of elements in the algebra. -We are free to apply a similarity transformation to (1), and instead solve -$$\begin{equation*} -\dt\JT - i [\at,\JT] = \L \JT,\tag{2} -\end{equation*}$$ -where $\widetilde A = U^{-1}AU$, and $U$ is independent of time. -In fact, we can diagonalize $\alpha$ with this transformation, -$\at = \diag(\l_1,\ldots,\l_n)$, so $\at$ is in the Cartan subalgebra. -(See the example for $\su(3)$ below.) -Let's choose as our basis for matrices $e_{ab}$, where $a,b=1,\ldots,n$, such that -the components are -$$(e_{ab})_{ij} = \d_{ai} \d_{bj},$$ -where $\d$ is the Kronecker delta. -Then -$$[\at,e_{ab}] = (\l_a - \l_b) e_{ab},$$ -so the $e_{ab}$s are eigenvectors of $\at$. -(This is straightforward to prove using the fact that $\at = \sum_a \l_a e_{aa}$.) -Thus, the eigenvectors of (2) are $\JT = e_{ab}e^{i\w t}$, since -$$\begin{eqnarray*} -\dt\JT - i [\at,\JT] -&=& \left(i\w - i(\l_a-\l_b)\right)\JT. -\end{eqnarray*}$$ -Imposing the boundary condition $\JT(\beta) = \JT(0)$ we find -$$\L_m^{ab} = i\left(\frac{2m\pi}{\beta} - (\l_a-\l_b)\right).$$ -For every $a,b$ there is an infinite tower of eigenvalues corresponding to the index $m$. -Notice that $\L\ne 0$ if and only if -$\l_a-\l_b \ne 2m\p/\b$. -For $m\ne0$ and a generic $\b$, $\l_a-\l_b \ne 2m\p/\b$. -For $m=0$, $\L \ne 0$ if and only if $\l_a - \l_b \ne 0$. 
-Therefore, -$$\begin{eqnarray*} -\det\left(\dt - i [\a,\,]\right) -&=& \left(\prod_{\l_a\ne\l_b} \L_0^{ab} \right) - \left(\prod_{m\ne0\atop a,b} \L_m^{ab}\right) \\ -&=& \left(\prod_{\l_a\ne\l_b} \L_0^{ab}\right) - \left(\prod_{m\ne0\atop a,b} - i\left(\frac{2m\pi}{\beta} - (\l_a-\l_b)\right) - \right) \\ -&=& \left(\prod_{\l_a\ne\l_b} \L_0^{ab}\right) - \left(\prod_{a,b} - \frac{2}{(\l_a-\l_b)\b}\sin\frac{(\l_a-\l_b)\b}{2} - \prod_{m=1}^\infty \left(\frac{2m\p}{\b}\right)^2\right) \\ -&=& \left(\prod_{\l_a\ne\l_b} \L_0^{ab}\right) - \left(\prod_{\l_a\ne\l_b} - \frac{2}{(\l_a-\l_b)\b}\sin\frac{(\l_a-\l_b)\b}{2}\right) - \left(\prod_{m=1\atop a,b}^\infty \left(\frac{2m\p}{\b}\right)^2\right) \\ -&=& \left(\prod_{\l_a\ne\l_b} - -\frac{2i}{\b}\sin\frac{(\l_a-\l_b)\b}{2} - \right) - \left(\prod_{m=1\atop a,b}^\infty \left(\frac{2m\p}{\b}\right)^2\right). -\end{eqnarray*}$$ -Convince yourself that the correct way to interpret -$\prod_{\l_a=\l_b} \frac{2}{(\l_a-\l_b)\b}\sin\frac{(\l_a-\l_b)\b}{2}$ -is $\prod_{\l_a=\l_b}1 = 1$. -Intuitively, $\lim_{x\to 0}\frac{\sin x}{x} = 1$. -Notice the similarity between this result and the previous one. -The formula for $\det\left(\dt-i[\a,\,]\right)$ given in the question statement is missing some factors. -For readers unfamiliar with such calculations, -which appear in physics quite regularly, and disturbed by the product -$\prod_{m=1\atop a,b}^\infty \left(\frac{2m\p}{\b}\right)^2$, -it may relieve you (a little) to know that this is just -$\det\left(\dt\right)$ and typically we are interested in objects of the form -$$\frac{\det\left(\dt-i[\a,\,]\right)}{\det\left(\dt\right)}.$$ -Example: Eigenvalues of $\at$ for $\su(3)$ -We use the structure constants typical to physics, $f_{abc}$, to construct the adjoint representation $(t_a)_{bc} = -i f_{abc}$. -The Cartan subalgebra is two dimensional. -Simultaneously diagonalize $t_3$ and $t_8$ so -$\at = x \widetilde t_3 + y \widetilde t_8$, -where $x$ and $y$ are some constants. -To be explicit, -$$t_3 = \textstyle\left( -\begin{array}{cccccccc} - 0 & -i & 0 & 0 & 0 & 0 & 0 & 0 \\ - i & 0 & 0 & 0 & 0 & 0 & 0 & 0 \\ - 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 \\ - 0 & 0 & 0 & 0 & -\frac{i}{2} & 0 & 0 & 0 \\ - 0 & 0 & 0 & \frac{i}{2} & 0 & 0 & 0 & 0 \\ - 0 & 0 & 0 & 0 & 0 & 0 & \frac{i}{2} & 0 \\ - 0 & 0 & 0 & 0 & 0 & -\frac{i}{2} & 0 & 0 \\ - 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 -\end{array} -\right)$$ -$$t_8 = \textstyle\left( -\begin{array}{cccccccc} - 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 \\ - 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 \\ - 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 \\ - 0 & 0 & 0 & 0 & -\frac{i \sqrt{3}}{2} & 0 & 0 & 0 \\ - 0 & 0 & 0 & \frac{i \sqrt{3}}{2} & 0 & 0 & 0 & 0 \\ - 0 & 0 & 0 & 0 & 0 & 0 & -\frac{i \sqrt{3}}{2} & 0 \\ - 0 & 0 & 0 & 0 & 0 & \frac{i \sqrt{3}}{2} & 0 & 0 \\ - 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 -\end{array} -\right)$$ -Up to permutation, -$$\at = \diag\left( -x, --x, -\frac{1}{2}(x+\sqrt{3} y), -\frac{1}{2}(x-\sqrt{3} y), --\frac{1}{2}(x+\sqrt{3} y), --\frac{1}{2}(x-\sqrt{3} y), -0,0\right).$$ -The diagonal elements above are the eigenvalues $\l_a$.<|endoftext|> -TITLE: Nilradical of a primary ideal is a minimal prime -QUESTION [5 upvotes]: I'd like to show the following claim: -The radical of a primary ideal $\mathfrak q$, $r(\mathfrak q)$, is the smallest prime ideal containing $\mathfrak q$. -Can you tell me if my proof is correct: -I'll use the following two facts: -(i) There is a bijection between ideals $J$ of $R$ containing $I$ and ideals $\overline{J}$ of $R/I$. 
-(ii) The radical of an ideal $r(I)$ equals the nilradical $n(R/I)$. -(To see this, let $x \in r(I)$. Then $x^n \in I$ and hence $\overline{x}^n = 0$ in $R/I$. On the other hand, let $\overline{x}^n = 0$ then $x^n \in I$ and hence $x \in r(I)$.) -Now for the claim: By (ii), we have $r(\mathfrak q) = n(R/\mathfrak q)$. We know that $n(R/\mathfrak q) = \bigcap_{I \in R/\mathfrak q; I \text{ prime}} I$. Going back to $R$ and using (i) we hence see that $r(\mathfrak q) = \bigcap_{I \subset R; \mathfrak q \subset I } I$. At first I thought that I could write "$\mathfrak q \subset I; I \text{ prime}$" here but I think that might be wrong. (Do you know an example where that's wrong?) -Now we have established that $r(\mathfrak q)$ is the smallest ideal containing $\mathfrak q$ so to finish the proof we want to show that $r(\mathfrak q)$ is prime: -Let $xy \in r(\mathfrak q)$. Then $x^n y^n \in \mathfrak q$ for some $n$. Since $\mathfrak q$ is primary we hence know that for some $m$ we have $x^m$ or $y^m$ in $ \mathfrak q$. Hence $x \in r(\mathfrak q)$ or $y \in r(\mathfrak q)$. - -REPLY [3 votes]: The last paragraph is great: you've shown that $r(\mathfrak q)$ is a prime ideal. The other part is actually easier, for if $\mathfrak p$ is a prime ideal then one shows by induction that if $a \in A$ and $n \geq 1$ are such that $a^n \in \mathfrak p$ then $a \in \mathfrak p$. Hence if $\mathfrak q \subset \mathfrak p$ then $r(\mathfrak q) \subset \mathfrak p$. -Regarding the earlier stuff: it might be safer to write something like $r(I)/I = n(R/I)$. The correspondence between ideals containing $I$ and ideals of $R/I$ preserves the property of being prime, and taking pre-images commutes with intersections, so you should indeed write $r(\mathfrak q) = \bigcap_{I \text{ prime }\supset\ \mathfrak q} I$; but I suspect you already knew this identity. Note that intersecting over all ideals containing $\mathfrak q$ would just give you $\mathfrak q$!<|endoftext|> -TITLE: Are there simple counterexamples to a strengthening of omitting types theorem -QUESTION [5 upvotes]: The famous Ehrenfeucht's omitting types theorem states that for any countable set of nonisolated types (without parameters), there is a (countable) model such that it does not realize any of them. -A similiar theorem, which so far as I know is due to Shelah states that for any complete theory $T$ with infinite models, you can omit any set of COMPLETE nonisolated types of cardinality less than $2^{\lvert T\rvert}$ -- he announces the result in the introduction to Classification Theory and the Number of Non-Isomorphic Models, but the book is quite hard to read and I did not even manage to find the actual proof, however, an accessible sketch for the countable case can be found in Wilfrid Hodges' Model Theory. -This, however, provokes the following conjecture, generalizing the above result for countable theories: - -For any complete, countable theory $T$ with infinite models, and any subset of $S_1(\emptyset)$ with empty interior, there exists a model of $T$ omitting all the types in the set. - -It would be a generalization, since the space of complete types in a countable theory is a Polish space (through its natural embedding into the Cantor set, for instance), so in particular any open subset contains an isolated point or its cardinality is $2^{\aleph_0}$. 
-On the other hand, I haven't heard about such a conjecture or theorem, even though it seems very natural, as it would give a complete characterization of the sets of types that can be omitted (it is easy to see that emptiness of interior is a necessary condition), so my guess is that there are known counterexamples, or it is independent of ZFC.
-Am I right? If yes, are there simple counterexamples? If I'm not, was there ever any promising research in that direction, or maybe it is even a theorem?
-
-REPLY [6 votes]: The proposed generalization is false. The hypothesis of empty interior is too weak, but, as indicated by Levon Haykazyan's comment from the book of Tent and Ziegler, the statement would become correct if one required that the closure of the set of types to be omitted has empty interior.
-Here's a counterexample to the generalization in the question. Let the language consist of countably many unary predicate symbols $U_n$ (one for each natural number $n$) and a unary function symbol $f$. Let the theory $T$ say that $f(x)\neq x$ and $f(f(x))=x$ for all $x$ (so $f$ serves to pair up the elements of the universe of any model of $T$), that $U_n(x)\iff U_n(f(x))$ for all $x$ for each $n\neq0$, but $U_0(x)\iff\neg U_0(f(x))$ (so paired elements satisfy the same $U_n$'s except that they disagree about $U_0$), and, for any distinct subscripts $n_1,\dots,n_k,m_1,\dots,m_l$, the formula saying that there is an element $x$ satisfying all of $U_{n_1},\dots,U_{n_k}$ and none of $U_{m_1},\dots,U_{m_l}$ (so the $U$ predicates are combinatorially independent). Unless I've made a stupid (and easily repaired) mistake, this $T$ is a complete theory.
-For each infinite sequence $a=(a_0,a_1,\dots)$ of zeros and ones, there is a type $t_a$ consisting of the formulas $U_n(x)$ for those $n$ with $a_n=0$ and $\neg U_n(x)$ for those $n$ with $a_n=1$ (and the $T$-consequences of these, if your definition of "type" requires closure under consequences). Note that the topology of the space $S_1$ of types (in one variable) for $T$ matches the usual product topology on the space of sequences $a$.
-Partition the set of sequences $(a_1,a_2,\dots)$ [intentionally starting with subscript 1, not 0] into two pieces $Y$ and $Z$ both with empty interior. For example, $Y$ could consist of those sequences that have infinitely many zeros and $Z$ those with only finitely many. Let $X$ consist of those sequences $a=(a_0,a_1,\dots)$ such that either $a_0=0$ and $(a_1,a_2,\dots)\in Y$ or $a_0=1$ and $(a_1,a_2,\dots)\in Z$. This $X$ has empty interior, but I claim no model can omit all the types $t_a$ for all $a\in X$.
-To verify the claim, suppose, toward a contradiction, that we had such a model, and consider some element $q$ of it. Suppose $U_0(q)$ holds (in this model). (The case where it doesn't hold is treated symmetrically.) Define $(a_1,a_2,\dots)$ by setting $a_n$ equal to 0 if $U_n(q)$ holds and equal to 1 if it doesn't hold. So $q$ realizes the type $t_a$ where $a=(0,a_1,a_2,\dots)$. If $(a_1,a_2,\dots)\in Y$ then $a\in X$ and we have the claimed contradiction. If, on the other hand, $(a_1,a_2,\dots)\in Z$ then $f(q)$ realizes $t_b$ where $b=(1,a_1,a_2,\dots)\in X$, so we again have a contradiction.<|endoftext|>
-TITLE: Question about conjugacy class of alternating group
-QUESTION [8 upvotes]: This is problem 26 from Grove's "Algebra."
-
-Suppose $K$ is a conjugacy class in $S_n$ of cycle type $(k_1,...,k_n)$, and that $K \subseteq A_n$. If $\sigma \in K$ write $L$ for the conjugacy class of $\sigma$ in $A_n$.
-If either $k_{2m} > 0$ or $k_{2m+1} > 1$ for some $m$, show that $L = K$.
-
-I can show $L \subseteq K$ but not $K \subseteq L$. I don't know how to use the "$k_{2m} > 0$ or $k_{2m+1} > 1$" hypotheses. If $k_{2m} > 0$ for some $m$ then $\sigma \in A_n$ must have an even number of even-length cycles. Can I get a hint?
-Thank you.
-Edit: $k_m$ is the number of cycles of length $m$.
-
-REPLY [5 votes]: Suppose you have $\sigma'\in K$; you want to show that $\sigma'\in L$.
-Because $\sigma'\in K$ you have $\tau\sigma\tau^{-1}=\sigma'$ for some $\tau\in S_n$. Now if $\tau\in A_n$ then $\sigma'\in L$ immediately. The problem is if $\tau$ is odd and so not in $A_n$.
-The idea is now that if we can find an odd $\rho$ such that $\rho\sigma\rho^{-1}=\sigma$, then we would have $\sigma'=\tau\rho\sigma\rho^{-1}\tau^{-1}$ with $\tau\rho\in A_n$, and we could conclude $\sigma'\in L$.
-How do we find $\rho$? This is where the additional assumption on the cycle structure comes in. We know that either $\sigma$ has a cycle of even length, or $\sigma$ has two cycles of the same odd length. If the first is true, then (blah blah); otherwise the second is true and (blah blah). Can you take it from here?<|endoftext|>
-TITLE: Show a sequence is decreasing
-QUESTION [8 upvotes]: I'm stuck trying to show that the following sequence is decreasing
-$$a_{n} = \left(\frac{n+x}{n+2x}\right)^{n}$$ where $x>0$. I've tried treating $n$ as a real number and taking derivatives, but it didn't lead to anything promising. Any hints would be appreciated.
-
-REPLY [4 votes]: $$\frac{a_{n}}{a_{n+1}}=\frac{\left(\frac{n+x}{n+2x}\right)^{n}}{ \left(\frac{n+1+x}{n+1+2x}\right)^{n+1}}= \frac{n+1+2x}{n+1+x}\left( \frac{n+1+2x}{n+1+x} \frac{n+x}{n+2x}\right)^n$$
-Let's observe that
-$$ \frac{n+1+2x}{n+1+x} \frac{n+x}{n+2x}=\frac{n^2+n+3nx+x+2x^2}{n^2+n+3nx+2x+2x^2}=1-\frac{x}{n^2+n+3nx+2x+2x^2}$$
-Then, by Bernoulli's inequality,
-$$\frac{a_{n}}{a_{n+1}} \geq (1+\frac{x}{n+1+x})(1-n\frac{x}{n^2+n+3nx+2x+2x^2}) $$
-An easy computation shows that
-$$(1+\frac{x}{n+1+x})(1-n\frac{x}{n^2+n+3nx+2x+2x^2}) \geq 1 \Leftrightarrow$$
-$$\frac{x}{n+1+x} \geq \frac{nx}{(n+1+x)(n+2x)} +\frac{x}{n+1+x}\frac{nx}{(n+1+x)(n+2x)} \Leftrightarrow $$
-$$x(n+1+x)(n+2x) \geq nx(n+1+x)+nx^2 \Leftrightarrow $$
-$$n^2x+nx+3nx^2+2x^2+2x^3 \geq n^2x+nx+nx^2+nx^2 \Leftrightarrow $$
-$$ nx^2+2x^2+2x^3 \geq 0 $$
-Thus
-$$\frac{a_{n}}{a_{n+1}} \geq 1$$<|endoftext|>
-TITLE: Defining a linear map via kernel and image.
-QUESTION [5 upvotes]: Are linear maps defined in a 1-1 manner by setting their kernel and image? In other words, if I have a vector space and I define a set to be the kernel of my would-be linear map, and another set to be its image, would I get a single well-defined linear map? (Given that my Ker and Im are okay, e.g. the dimensions are fine, etc.)
-I have the following task: Let $T$ be a linear map on $\mathbb{R}^4$. Given that
-$(\ker T)^\perp = \operatorname{span}\{ (1, 2, 0, 4) , (1, 0, 1, 0) \}$ and $T(1,0,1,1)=(1,2,1,1)$, give an example of such a $T$ ("you don't have to explicitly find $T(x_1,x_2,x_3,x_4)$").
-Any ideas on how to do that (how can I characterize a linear map $T$ without finding an explicit formula for it)? I thought about finding its Ker and Im, but that doesn't work. Any thoughts?
-
-REPLY [6 votes]: No. A linear map $f : V \to W$ is uniquely determined by the images of a basis of $V$. Consider given $\mathrm{Ker} \, f$ and $\mathrm{Im} \,f$. When you define $\mathrm{Ker} \, f$, you are specifying the images of some elements of a basis.
(Suppose $V$ is finite dimensional, $\mathrm{Ker} \, f = \langle v_1, \cdots, v_k \rangle$, with $\{v_1, \cdots, v_k\}$ linearly independent, and extend this set of vectors to a basis $\{v_1, \cdots, v_n\}$). If $\mathrm{Ker} \, f \neq V$, i.e., if $f \neq 0$, you still have some elements of the basis that you have to map somewhere to determine the linear transformation: $v_{k+1}, \dots, v_n$.
-Suppose you have one such linear map, with $f(v_{i}) = y_i$, for all $i \in \{k+1, \dots, n\}$, and $\mathrm{Im} \,f = \langle y_{k+1},\dots,y_n \rangle$. Then you have a couple of ways in which you could get a different linear map while keeping $\mathrm{Ker} \, f$ and $\mathrm{Im} \,f$: for any $i \in \{k+1, \dots, n\}$, you could redefine $f(v_{i}) = c\cdot y_i$, with $c$ any nonzero scalar, or $f(v_{i}) = y_j$, with $j \in \{k+1, \dots, n\}$ such that $y_j \neq y_i$, or even define $f(v_{i})$ to be some linear combination of the vectors in the image. That is, it's very easy to change a linear map and keep the image and the kernel.
-A very simple example: consider $f(x) = 2x$ and $g(x) = 3x$, two linear maps $\mathbb{R} \to \mathbb{R}$ with $\mathrm{Ker} \,f = \mathrm{Ker} \,g = \{0\}$ and $\mathrm{Im} \,f = \mathrm{Im} \,g = \mathbb{R}$.
-To answer your second question:
-You cannot uniquely determine such a map, but there are lots of them. Since you have $\mathrm{Ker} \,f$ (first you have to calculate the orthogonal complement of the given subspace), pick a basis of it, add the other element whose image you know ($(1,0,1,1)$) and add vectors in order to get a basis of $\mathbb{R}^4$. Then choose any images for the vectors on which you don't have any other conditions. The important thing is: a linear map is uniquely determined by the images of the elements of a basis of the domain, so you could give an answer in the following form, without computing an explicit equation for $T(x_1,x_2,x_3,x_4)$:
-$\begin{cases}
-T(v_1) = y_1\\
-T(v_2) = y_2\\
-T(v_3) = y_3\\
-T(v_4) = y_4
-\end{cases}$,
-where $\{v_1, v_2, v_3, v_4\}$ is a basis of $\mathbb{R}^4$.<|endoftext|>
-TITLE: A proof of the Morse Lemma
-QUESTION [5 upvotes]: On page 7 of Milnor's Morse Theory is part of a proof of the Morse Lemma:
-
-Suppose by induction that there exist coordinates $u_1, \ldots, u_n$ in a neighbourhood $U_1$ of $0$ so that
- $$f = \pm(u_1)^2 \pm \cdots \pm (u_{r-1})^2 + \sum_{i,j \geq r} u_i u_j H_{ij}(u_1,\ldots,u_n)$$
- throughout $U_1$; where the matrices $(H_{ij}(u_1,\ldots,u_n))$ are symmetric. After a linear change in the last $n - r + 1$ coordinates, we may assume that $H_{rr}(0) \neq 0$.
-
-(full text of proof: page 6, page 7, page 8)
-I do not understand how he can WLOG assume that $H_{rr}\neq 0$ by doing a "linear change of variables". I don't really even understand what he's saying. Could someone please spell this out for me? Note that page 6 handles an entirely different part of the lemma. If you need context, it should be enough to just assume that the equalities at the top of page 7 hold. You also must know that $f$ has a nondegenerate critical point at $0$, and $f(0)=0$.
-
-REPLY [9 votes]: If all entries of the sum over $i,j\ge r$ were equal to zero, we would have a degenerate quadratic form. Thus, there is a nonzero term $H_{ij}$. If it's diagonal ($i=j$), then just relabel the rows/columns to make it $H_{rr}$. If all diagonal terms are zero, pick a non-diagonal $H_{ij}$ and replace the corresponding coordinates $x_i,x_j$ by $y_i=x_i+x_j$ and $y_j=x_i-x_j$.
Since $y_i^2-y_j^2=4x_ix_j$, both $H_{ii}$ and $H_{jj}$ will be nonzero. -As Milnor says, this is how one diagonalizes quadratic forms so you may want to read a proof of that first. -[added] Here is a concrete example: $f=x_1^2+x_2x_3H_{23}(x_1,x_2,x_3)$. Performing the substitution above, we get $$f=x_1^2+\frac{1}{4}(y_2^2-y_3^2)H_{23}(x_1,(y_2+y_3)/2,(y_2-y_3)/2)$$ So, our new functions are $$\widetilde H_{22}(x_1,y_2,y_3)=\frac{1}{4}H_{23}(x_1,(y_2+y_3)/2,(y_2-y_3)/2)$$ and $$\widetilde H_{33}(x_1,y_2,y_3)=-\frac{1}{4}H_{23}(x_1,(y_2+y_3)/2,(y_2-y_3)/2)$$<|endoftext|> -TITLE: Does this explanation of derangements on Wikipedia make sense? -QUESTION [5 upvotes]: On the Wikipedia page on derangements, the following description is given about how to count derangements: - -Suppose that there are $n$ persons numbered $1,2,\ldots,n$. Let there be $n$ hats also numbered $1,2,\ldots,n$. We have to find the number of ways in which no one gets the hat having same number as his/her number. Let us assume that first person takes the hat $i$. There are $n-1$ ways for the first person to choose the number $i$. Now there are 2 options: -A. Person $1$ does not take the hat $i$. This case is equivalent to solving the problem with $n − 1$ persons $n − 1$ hats: each of the remaining $n − 1$ people has precisely 1 forbidden choice from among the remaining $n − 1$ hats ($i$'s forbidden choice is hat $1$). -B. Person $1$ takes the hat $i$. Now the problem reduces to $n − 2$ persons and $n − 2$ hats. - -Aren't the two bolded statements in contradiction? Isn't the "first person" and "person 1" the same person? Is this explanation misworded? - -REPLY [6 votes]: There was a mistake in the article; the people were reversed. It should have read, -A. Person i does not take hat 1. ... -B. Person i takes hat 1. ... -I've fixed it on Wikipedia.<|endoftext|> -TITLE: Traces of all positive powers of a matrix are zero implies it is nilpotent -QUESTION [38 upvotes]: Let $A$ be an $n\times n$ complex nilpotent matrix. Then we know that because all eigenvalues of $A$ must be $0$, it follows that $\text{tr}(A^n)=0$ for all positive integers $n$. -What I would like to show is the converse, that is, - -if $\text{tr}(A^n)=0$ for all positive integers $n$, then $A$ is nilpotent. - -I tried to show that $0$ must be an eigenvalue of $A$, then try to show that all other eigenvalues must be equal to 0. However, I am stuck at the point where I need to show that $\det(A)=0$. -May I know of the approach to show that $A$ is nilpotent? - -REPLY [12 votes]: Here is an argument that does not involve Newton's identities, although it is still closely related to symmetric functions. Write -$$f(z) = \sum_{k\ge 0} z^k \text{tr}(A^k) = \sum_{i=1}^n \frac{1}{1 - z \lambda_i}$$ -where $\lambda_i$ are the eigenvalues of $A$. As a meromorphic function, $f(z)$ has poles at the reciprocals of all of the nonzero eigenvalues of $A$. Hence if $f(z) = n$ identically, then there are no such nonzero eigenvalues. -The argument using Newton's identities, however, proves the stronger statement that we only need to require $\text{tr}(A^k) = 0$ for $1 \le k \le n$. Newton's identities are in fact equivalent to the identity -$$f(z) = n - \frac{z p'(z)}{p(z)}$$ -where $p(z) = \prod_{i=1}^n (1 - z \lambda_i)$. 
To prove this identity it suffices to observe that -$$\log p(z) = \sum_{i=1}^n \log (1 - z \lambda_i)$$ -and differentiating both sides gives -$$\frac{p'(z)}{p(z)} = \sum_{i=1}^n \frac{- \lambda_i}{1 - z \lambda_i}.$$ -(The argument using Newton's identities is also valid over any field of characteristic zero.)<|endoftext|> -TITLE: Estimate total song ('coupon') number by number of repeats -QUESTION [6 upvotes]: If shuffle-playing playlist ×100 resulted in [10 13 10 3 2 2] different songs being repeated [1 2 3 4 5 6] times, what is the estimate for the total number of songs? (assuming shuffle play was completely random) -Update: (R code) -k <- 50 # k number of songs on the disk indexed 1:k -n <- 100 # n number of random song selections -m <- 20 # m number of repeat experiments -colnum <- 10 -mat <- matrix(data=NA,nrow=m,ncol=colnum) -df <- as.data.frame(mat) -for(i in 1:m){ - played <- 1+floor(k*runif(n)) # actual song indices (1:k) selected - freq <- sapply(1:k,function(x){sum(played==x)}) - # = number of times song with index x is being played - histo <- sapply(1:colnum,function(x){sum(freq==x)}); - for(j in 1:colnum){ - df[i,j] <- histo[j] - } -} -df - -Resulting in: e.g. 20 distributions (V1=number of single plays, V2=number of double plays, etc): - V1 V2 V3 V4 V5 V6 V7 V8 V9 V10 -1 15 13 11 1 2 2 0 0 0 0 -2 15 12 10 1 4 0 1 0 0 0 -3 12 14 7 6 3 0 0 0 0 0 -4 17 16 6 4 2 0 1 0 0 0 -5 17 10 12 5 0 0 1 0 0 0 -6 13 15 11 6 0 0 0 0 0 0 -7 10 14 9 3 2 1 1 0 0 0 -8 12 17 5 6 3 0 0 0 0 0 -9 9 19 8 3 1 2 0 0 0 0 -10 13 9 11 6 1 0 1 0 0 0 -11 16 9 12 5 2 0 0 0 0 0 -12 15 9 11 6 2 0 0 0 0 0 -13 19 9 7 4 4 1 0 0 0 0 -14 17 11 4 7 3 1 0 0 0 0 -15 11 20 8 1 3 1 0 0 0 0 -16 14 12 10 5 0 2 0 0 0 0 -17 9 12 8 7 3 0 0 0 0 0 -18 10 15 9 4 2 0 1 0 0 0 -19 14 11 12 7 0 0 0 0 0 0 -20 16 14 11 3 1 1 0 0 0 0 - -Now I need to get from here to the Poisson modelling--my R is a bit rusty (?lmer)...--Any help would be appreciated... -Attempted Poisson modelling: disappointing fit?! -plot(1:colnum,df[1,1:colnum],ylim=c(0,30), - type="l",xlab="repeats",ylab="count") -for(i in 1:m){ - clr <- rainbow(m)[i] - lines(1:colnum,df[i,1:colnum],type="l",col=clr) - points(1:colnum,df[i,1:colnum],col=clr) -} - -df.lambda=data.frame(lambda=seq(1,5,0.1),ssq=c(NA));df.lambda -for(ii in 1:dim(df.lambda)[1]){ - l <- df.lambda$lambda[ii] - ssq <- 0 - for(i in 1:20){ - for(j in 1:10){ - ssq <- ssq + (df[i,j] - n*dpois(j,l))^2 - } - } - print(ssq) - df.lambda$lambda[ii] <- l - df.lambda$ssq[ii] <- ssq - } - df.lambda - lambda.est <- df.lambda$lambda[which.min(df.lambda$ssq)] -lambda.est # 2.4 -points(x <- 1:10, n*dpois(1:10,lambda.est),type="l",lwd=2) - -100*dpois(1:10,3) -n/lambda.est - -the estimated lambda stays around 2.3, with an n estimate of around 43; the fitted curve seems very discrepant, and seems to worsen with rising n !? -Doesn't this have to do with the fact, that our repeats are different from the 'classical' Poisson distributions: it's not just ONE event that repeats itself x number of times, but the sum of repeats of different items (songs)?! - -REPLY [2 votes]: Using the notation of leonbloy, also let $N_1=10$ be the number of songs that were played once. 
A simple Good-Turing estimator for $M$ is then given by $$ \hat M = {{P} \over {1-{N_1 \over N} }}= {{40} \over {1-{10 \over 100}}}=44.4,$$ which agrees nicely with the maximum likelihood approach.<|endoftext|>
-TITLE: Evaluation of a product of sines
-QUESTION [14 upvotes]: Possible Duplicate:
-Prove that $\prod_{k=1}^{n-1}\sin\frac{k \pi}{n} = \frac{n}{2^{n-1}}$
-
-I am looking for a closed form for this product of sines:
-\begin{equation}
-\sin \left(\frac{\pi}{n}\right)\,\sin \left(\frac{2\pi}{n}\right)\dots\sin \left(\frac{(n-1)\pi}{n}\right),
-\end{equation}
-where $n$ is a fixed integer. I would like to see here a strategy that hopefully can be generalized to similar cases, not just the result (which probably can be easily found).
-
-REPLY [38 votes]: Use the formula $\sin(x) = \frac{1}{2i}(e^{ix}-e^{-ix})$ to get
-\begin{align*}
-\prod_{k=1}^{n-1} \sin(k\pi/n) &= \left(\frac{1}{2i}\right)^{n-1}\prod_{k=1}^{n-1} \left(e^{k\pi i/n} - e^{-k\pi i/n}\right) \\
-&= \left(\frac{1}{2i}\right)^{n-1}\left(\prod_{k=1}^{n-1} e^{k\pi i/n} \right) \prod_{k=1}^{n-1} \left(1-e^{-2k\pi i/n} \right).
-\end{align*}
-The first product simplifies to
-$$e^{\sum_{k=1}^{n-1} k\pi i/n} = e^{(n-1)\pi i/2} = i^{n-1}$$
-which cancels out with the $i^{n-1}$ in the denominator. The second product can be recognized as the polynomial $f(X) = \prod_{k=1}^{n-1} (X-e^{-2k\pi i/n})$ evaluated at $X = 1$. The roots of this polynomial are the non-trivial $n$-th roots of unity, so $f(X) = \frac{X^n-1}{X-1} = 1+X+X^2+\ldots+X^{n-1}$. Plugging in $1$ for $X$ yields
-$$\prod_{k=1}^{n-1} \left(1-e^{-2k\pi i/n} \right) = f(1) = n.$$
-Altogether, we have
-$$\prod_{k=1}^{n-1} \sin(k\pi/n) = \frac{n}{2^{n-1}}.$$
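-For a quick numerical confirmation of the closed form (a Python sketch; the sample values of $n$ are arbitrary):
-import numpy as np
-for n in [2, 3, 5, 8, 13]:
-    p = np.prod(np.sin(np.arange(1, n) * np.pi / n))
-    print(n, p, n / 2**(n - 1))  # the two values agree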
<|endoftext|>
-TITLE: Inherited Morita similar rings
-QUESTION [5 upvotes]: Let $R$ and $S$ be Morita similar rings.
-If a ring $R$ has the following property: every right ideal is injective, how do I prove that the ring $S$ has this property?
-And if a ring $R$ has the following property: $R$ is finitely generated, how do I prove that the ring $S$ has this property?
-
-REPLY [5 votes]: Every right ideal of $R$ is injective iff $R$ is semisimple iff all right $R$-modules are semisimple.
-Since the Morita equivalence sends semisimple modules to semisimple modules, all of $S$'s right modules are semisimple as well, so $S$ is semisimple.
-Your second question is a little strange... for Morita theory we usually require $R$ to have unity and so $R$ will always be cyclic... and $S$ too.
-If you mean: "Finitely generated modules will correspond to each other through a Morita equivalence between $R$ and $S$." then let's try that.
-Prove that $M$ is f.g. iff for every collection of submodules $\{M_i\mid i\in I\}$ of $M$, $\sum M_i=M$ implies $M$ is a sum of finitely many $M_i$. This provides a module-theoretic description of finite generation that you can see is preserved by Morita equivalence.<|endoftext|>
-TITLE: Computing $ \int_0^{\infty} \frac{ \sqrt [3] {x+1}-\sqrt [3] {x}}{\sqrt{x}} \mathrm dx$
-QUESTION [6 upvotes]: I would like to show that
-$$ \int_0^{\infty} \frac{ \sqrt [3] {x+1}-\sqrt [3] {x}}{\sqrt{x}} \mathrm dx = \frac{2\sqrt{\pi} \Gamma(\frac{1}{6})}{5 \Gamma(\frac{2}{3})}$$
-thanks to the beta function which I am not used to handling...
-$$\frac{2\sqrt{\pi} \Gamma(\frac{1}{6})}{5 \Gamma(\frac{2}{3})}=\frac{2}{5}B(1/2,1/6)=\frac{2}{5} \int_0^{\infty} \frac{ \mathrm dt}{\sqrt{t}(1+t)^{2/3}}$$
-...?
-
-REPLY [6 votes]: We can calculate the first integral by a double integral.
-1. Denote your first integral by $I$. Substituting $x=t^2$ (so $\mathrm dx=2t\,\mathrm dt$) and renaming $t$ back to $x$, we get
-$\eqalign{
-& I=2\int_{0}^{\infty}\left[(x^2+1)^{1/3}-x^{2/3}\right]dx \cr
-& =\frac{2}{3}\int_{0}^{1}\int_{0}^{\infty}(x^2+y)^{-2/3}\,dx\,dy \cr
-& =\frac{2}{3}\int_{0}^{1}y^{-1/6}dy\int_{0}^{\infty}\frac{1}{(x^2+1)^{2/3}}dx \cr
-& =\frac{4}{5}\int_{0}^{\infty}\frac{1}{(x^2+1)^{2/3}}dx}$
-Here the second line uses $(x^2+1)^{1/3}-(x^2)^{1/3}=\frac{1}{3}\int_0^1(x^2+y)^{-2/3}\,dy$ together with Fubini, the third line the substitution $x=\sqrt{y}\,u$, and the last one $\int_0^1 y^{-1/6}dy=\frac{6}{5}$.
-2. Use the formula found by Peter Tamaroff. What would you see?
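-As a numerical cross-check of the closed form (a Python sketch using SciPy; the improper quadrature may emit an accuracy warning but agrees to several digits):
-import numpy as np
-from scipy.integrate import quad
-from scipy.special import gamma
-val, err = quad(lambda x: ((x + 1)**(1/3) - x**(1/3)) / np.sqrt(x), 0, np.inf)
-print(val, 2 * np.sqrt(np.pi) * gamma(1/6) / (5 * gamma(2/3)))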
<|endoftext|>
-TITLE: What is the correct way to solve $\sin(2x)=\sin(x)$
-QUESTION [10 upvotes]: I've found two different ways to solve this trigonometric equation
-
-$\begin{align*}
-\sin(2x)=\sin(x) \Leftrightarrow \\\\ 2\sin(x)\cos(x)=\sin(x)\Leftrightarrow \\\\ 2\sin(x)\cos(x)-\sin(x)=0 \Leftrightarrow\\\\ \sin(x) \left[2\cos(x)-1 \right]=0 \Leftrightarrow \\\\ \sin(x)=0 \vee \cos(x)=\frac{1}{2} \Leftrightarrow\\\\ x=k\pi \vee x=\frac{\pi}{3}+2k\pi \vee x=\frac{5\pi}{3}+2k\pi \space, \space k \in \mathbb{Z}
-\end{align*}$
-The second way was:
-$\begin{align*}
-\sin(2x)=\sin(x)\Leftrightarrow \\\\ 2x=x+2k\pi \vee 2x=\pi-x+2k\pi\Leftrightarrow \\\\ x=2k\pi \vee3x=\pi +2k\pi\Leftrightarrow \\\\x=2k\pi \vee x=\frac{\pi}{3}+\frac{2k\pi}{3} \space ,\space k\in \mathbb{Z}
-\end{align*}$
-What is the correct one?
-Thanks
-
-REPLY [2 votes]: For what it's worth, the second one is "better" because it generalizes more nicely. Imagine solving $\sin(3 x) = \sin(x)$ using the first method ($\sin(3 x) = 3\cos^2(x)\sin(x) - \sin^3(x)$). On the other hand, it's easy to see that $\sin(a x)= \sin(b x)$ will have an infinite number of solutions for any real $a$ and $b$ from the second method.<|endoftext|>
-TITLE: Does a surjective ring homomorphism have to be surjective on the unit groups?
-QUESTION [5 upvotes]: I know ring homomorphisms map units to units, which made me curious about the following. Suppose $f:R\to R'$ is a surjective ring homomorphism, mapping $1$ to $1'$. Is it necessarily surjective from $U(R)\to U(R')$?
-I know if $f(u)$ is a unit in $R'$ with $f(w)$ its inverse, then $f(uw)=f(wu)=1'$, but I see no reason to conclude $uw=wu=1$ in $R$ without assuming $f^{-1}(1')=\{1\}$. But I can't find a counterexample, so I'm not sure whether it's true or not.
-
-REPLY [10 votes]: Look at the quotient map $\mathbf Z \to \mathbf Z/5\mathbf Z$. The residue class of $2$ is a unit in the target, but the only units of $\mathbf Z$ are $\pm 1$.<|endoftext|>
-TITLE: Product rule for the derivative of a dot product.
-QUESTION [7 upvotes]: I can't find the reason for this simplification. I understand that the dot product of a vector with itself would give the magnitude of that squared, so that explains the v squared. What I don't understand is where the 2 under the "m" came from.
-(The bold v's are vectors.)
-$$m\int \frac{d\mathbf{v}}{dt} \cdot \mathbf{v} dt = \frac{m}{2}\int \frac{d}{dt}(\mathbf{v}^2)dt$$
-Thanks.
-Maybe the book's just wrong and that 2 shouldn't be there...
-
-REPLY [13 votes]: The derivative of the dot product is given by the rule
-$$\frac{d}{dt}\Bigl( \mathbf{r}(t)\cdot \mathbf{s}(t)\Bigr) = \mathbf{r}(t)\cdot \frac{d\mathbf{s}}{dt} + \frac{d\mathbf{r}}{dt}\cdot \mathbf{s}(t).$$
-Therefore,
-$$\begin{align*}
-\frac{d}{dt} \lVert \mathbf{r}(t)\rVert^2 &= \frac{d}{dt}\left( \mathbf{r}(t)\cdot \mathbf{r}(t)\right)\\
-&= 2\mathbf{r}(t)\cdot \frac{d\mathbf{r}}{dt}.
-\end{align*}$$
-Dividing through by $2$, we get
-$$\frac{d\mathbf{v}}{dt}\cdot \mathbf{v}(t) = \frac{1}{2}\frac{d}{dt}\lVert \mathbf{v}\rVert^2.$$
-
-REPLY [8 votes]: If two $n$-dimensional vectors $\mathbf u$ and $\mathbf v$ are functions of time, the derivative of their dot product is given by
-$$\frac{\mathrm d}{\mathrm dt}(\mathbf u\cdot\mathbf v) = \mathbf u\cdot\frac{\mathrm d\mathbf v}{\mathrm dt} + \mathbf v\cdot\frac{\mathrm d\mathbf u}{\mathrm dt}$$
-This is analogous to (and indeed, is easily derived from) the product rule for scalars, $\frac{\mathrm d}{\mathrm dt}(ab) = a\frac{\mathrm db}{\mathrm dt} + b\frac{\mathrm da}{\mathrm dt}$.
-Therefore,
-$$\frac{\mathrm d}{\mathrm dt} \lVert\mathbf v\rVert^2 = \frac{\mathrm d}{\mathrm dt}(\mathbf v\cdot\mathbf v) = \mathbf v\cdot\frac{\mathrm d\mathbf v}{\mathrm dt} + \mathbf v\cdot\frac{\mathrm d\mathbf v}{\mathrm dt} = 2\mathbf v\cdot\frac{\mathrm d\mathbf v}{\mathrm dt}$$
-just like $\frac{\mathrm d}{\mathrm dt} a^2 = 2a\frac{\mathrm da}{\mathrm dt}$.
-Halve that, and you have the result you need.
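-A one-line symbolic check of this identity (a sketch with SymPy; the test curve $\mathbf v(t)$ is an arbitrary choice):
-import sympy as sp
-t = sp.symbols('t')
-v = sp.Matrix([sp.sin(t), t**2, sp.exp(t)])  # an arbitrary smooth curve
-print(sp.simplify(sp.diff(v.dot(v), t) - 2 * v.dot(sp.diff(v, t))))  # 0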
<|endoftext|>
-TITLE: Recalling result of tensor product of polynomial rings
-QUESTION [8 upvotes]: Let $k$ be a field (alg closed if you want). Now let $I_{i}$ be an ideal of $k[x_{i}]$ for every $i \in \{1,2,\ldots,n\}$. Is it always true that:
-$$k[x_1,x_2,\ldots,x_n]/ \langle I_1,I_2,\ldots,I_n \rangle \cong k[x_1]/I_1 \otimes_k k[x_2]/I_2 \otimes_k \cdots \otimes_k k[x_n]/I_n$$
-
-REPLY [9 votes]: Seems like it?
-Compose the natural inclusion $k[x_i] \rightarrow k[x_1, ..., x_n]$ with the quotient map; the kernel is just $I_i$. So the universal property of tensor products induces a map from the RHS to the LHS.
-The map in the other direction takes a monomial to the corresponding tensor product, and extends linearly. It is simple to check that these are inverses.<|endoftext|>
-TITLE: Dimension in mathematics and physics
-QUESTION [6 upvotes]: I have studied linear algebra and commutative algebra; there are two kinds of dimension there: the vector space dimension and the Krull dimension.
-Also, in physics, dimension is a very intuitive concept.
-My question is: what is the nearest mathematical definition of dimension to the physical one?
-In the wikipedia page, they also list some mathematical types of dimension: Dimension
-
-REPLY [5 votes]: The answers and comments so far indicate that we are talking about two completely different kinds of "dimension" here:
-
-There is the notion of dimension of a real vector space $V$ or manifold $M$. This is an integer $d\geq0$ and has the same meaning in physics as in mathematics. The intuitive physical interpretation of $d$ is the "number of degrees of freedom" in the physical system under study. – In a space of dimension $d$ (infinitesimal) volumes scale like $\lambda^d$ under a linear scaling by a factor $\lambda>0$. This property can be used to envisage sets $S\subset{\mathbb R}^d$ whose "volume" scales like $\lambda^\alpha$ with a noninteger $\alpha\leq d$. This value $\alpha$ is called the Hausdorff dimension of $S$; but this is a dimension in a measure theoretical, not in a topological sense.
-Physical quantities have a "dimension" of length, time, degree Kelvin, etc. This dimension is not a number, but a quality. It's up to a physics member of the community to give an exact definition. Tentatively I would say that (at least in the realm of mechanics) the set of physical dimensions is the multiplicative abelian group generated by the three elements $L$ (for length), $T$ (for time) and $M$ (for mass). To any physical quantity an element of this group is associated. Two physical quantities can be meaningfully compared or added only if the associated dimensions are the same. Furthermore, the dimension of a quantity determines how the numerical value of this quantity changes when the base units for $L$, $T$ and $M$ are changed.<|endoftext|>
-TITLE: Harmonic Function which cannot be described as real part of a holomorphic function
-QUESTION [6 upvotes]: We define $f:\mathbb{C}\rightarrow\mathbb{C},\ f(z)=\log|z|$. $f$ is harmonic. Why can't we describe $f$ as a real part of a holomorphic (analytic) function?
-Thank you very much for your time,
-Chris
-
-REPLY [10 votes]: Any harmonic function on a simply connected region in $\mathbb{R}^2$ (seen as $\mathbb{C}$) is indeed the real part of a holomorphic function on the same region. The function $f(z) = \log |z|$ is not defined at zero, and so the domain you're referring to is actually $\mathbb{C}\backslash\{0\}$. This is the reason why it doesn't globally represent the real part of a holomorphic function.
-The complex logarithm is the typical example of many related notions in complex analysis, notably "multi-valued" functions, functions defined on Riemann surfaces (multi-sheeted surfaces), and functions that fail to be analytically continued properly. The harmonic function $\log |z|$ is indeed the real part of the complex logarithm, but its imaginary part is not well defined because of its reliance on the complex argument (which is the same mod $2\pi$). Any introductory text on complex analysis will discuss all of this in depth.<|endoftext|>
-TITLE: Motivation for Koszul complex
-QUESTION [14 upvotes]: The Koszul complex is important for the homological theory of commutative rings.
-However, it's hard to guess where it came from.
-What was the motivation for the Koszul complex?
-
-REPLY [5 votes]: In this answer I would rather focus on why the Koszul complex is so widely used. In abstract terms, the Koszul complex arises as the easiest way to combine an algebra with a coalgebra in presence of quadratic data. You can find the modern generalization of the Koszul duality described in Aaron's comment by reading Loday and Vallette, Algebraic Operads (mostly chapters 2-3).
-To my knowledge the Koszul complex is extremely useful because you can use it even with certain $A_\infty$-structures arising from deformation quantization of Poisson structures, and you can relate it to the other "most used resolution in homological algebra", i.e. the bar resolution.
-For a quick review of this fact, please check my answer in Homotopy equivalent chain complexes
-As you can see it is a flexible object which has the property of being extremely "explicit". This has helped its diffusion in the mathematical literature a great deal.<|endoftext|>
-TITLE: Offset Alternating Series
-QUESTION [7 upvotes]: I have the following alternating series, for which I would like to determine whether it is absolutely convergent, conditionally convergent, or divergent:
-$$ \sum\limits^{\infty}_{n=1} \frac{1+2(-1)^n}{n} $$
-I have applied some tests and I find it reasonable to conclude that it is divergent.
-As a sum of two series:
-$$ \sum\limits^{\infty}_{n=1} \frac{1}{n} + \sum\limits^{\infty}_{n=1} \frac{2(-1)^n}{n} $$
-I believe that a convergent series added to a divergent series results in a divergent series. If this isn't a fact then I would still be left to say that it is inconclusive.
-Using the Alternating Series Test, with:
-$$ a_n = \frac{1+2(-1)^n}{n} $$
-although this isn't of 'proper form' $ \sum\limits^{\infty}_{n=1} (-1)^n a_n $ the limit of $a_n $ does approach zero as $ n \rightarrow \infty $. As for monotonically decreasing, the limit of the ratio of absolute terms is divergent for $ n $ even and inconclusive for $ n $ odd, which has me concluding divergent by the Ratio Test as well as not monotonically decreasing, where:
-$$ \lim\limits_{n \rightarrow \infty} \left\lvert \frac{2(-1)^n + 1}{2(-1)^n - 1} \frac{n}{n+1} \right\rvert $$
-Am I on the right track here? Am I making any really improper assumptions? Was there a better way to go about the proof?
-Thanks!
-
-REPLY [2 votes]: If we pair the terms with 2n and 2n+1, we get, using Arturo Magidin's notation,
-$$b_{2n}-b_{2n+1} = \frac{3}{2n} - \frac{1}{2n+1} =\frac{3(2n+1)-2n}{2n(2n+1)} = \frac{4n+3}{2n(2n+1)} > \frac{1}{n}, $$
-so the sum of the first 2n+1 terms is greater than $\sum_{k=1}^n \frac{1}{k} $ which diverges.
-In general, when working with an alternating series, I find it useful to pair each even term with the following odd term.
-This can be readily generalized to the case where the signs of the terms of a series to be summed follow a repeating pattern.<|endoftext|>
-TITLE: Commutative Algebra without the axiom of choice
-QUESTION [15 upvotes]: It is well known that in a commutative ring with unit, every proper ideal is contained in a maximal ideal. The proof uses the axiom of choice. This fact, and others that are proved using essentially the same argument, anchor a large part of commutative algebra.
-Suppose now that we disallow the use of the axiom of choice. My feeling is that this fact should still hold except for very pathological rings. I would find it odd if commutative algebra was entirely dependent on the axiom of choice. I also recall hearing that there was a workaround argument that did not use the axiom of choice for sufficiently nice rings, so this question is not entirely speculative.
-So, assuming we work without the axiom of choice, for which rings can we prove that every proper ideal is contained in a maximal ideal? How is this done? And what characterizes the rings where we can't?
-
-REPLY [2 votes]: [So, assuming we work without the axiom of choice, for which rings can we prove that every proper ideal is contained in a maximal ideal? How is this done?]
-Most rings you encounter in algebraic geometry.
-For example, rings which are finitely generated over a field and their localizations.
-For the proof, see this thread.<|endoftext|>
-TITLE: Understanding the Analytic Continuation of the Gamma Function
-QUESTION [15 upvotes]: So my book proves the convergence of $\Gamma(z) = \int_0^{\infty}t^{z-1}e^{-t}dt$ in the right half plane $Re(z) > 0$, and then goes on to prove the initial recurrence relation $\Gamma(z+1)=z\Gamma(z)$ by applying integration by parts to $\Gamma(z+1)$:
-$$\int_0^{\infty}t^{z}e^{-t}dt = -t^ze^{-t}|_0^{\infty} + z\int_0^{\infty}t^{z-1}e^{-t}dt$$
-The book explicitly states this equality to be true only in the right half plane, since otherwise $-t^ze^{-t}|_0^{\infty} = \infty$, instead of equaling zero.
With this initial recurrence relation we are 'supposedly' able to analytically continue the Gamma function to $Re(z) > -1$ (not including the origin) by writing the relation in the form:
-$$\Gamma(z) = \frac{\Gamma(z+1)}{z}$$
-What I don't understand is that this relation is still only true in the right half plane, since otherwise $-t^ze^{-t}|_0^{\infty}\neq 0$. I don't see what reason we have to believe that, for instance, $\Gamma(-\frac{1}{2}) = \frac{\Gamma(\frac{1}{2})}{-\frac{1}{2}}$.
-Furthermore $\int_0^{\infty}t^{z-1}e^{-t}dt$ is clearly not convergent in the left half plane, so I can't even imagine why it would be plausible to think that a recurrence relation directly based on it could possibly lead to a genuine analytic continuation of its domain.
-
-REPLY [4 votes]: An excellent introduction to this topic can be found in the book
-The Gamma Function by James Bonnar. An entire chapter is devoted to analytic continuation of the factorials, as well as why the Gamma function is defined as it is -- Hölder's theorem and the Bohr–Mollerup theorem are discussed.<|endoftext|>
-TITLE: $0 \leq a^2 + b^2 - abc \leq c \implies a^2 + b^2 - abc$ is a perfect square
-QUESTION [6 upvotes]: The following problem looks very interesting to me and I cannot even guess a solution to it. It states that:
-
-Suppose that $a,b,c$ are three natural numbers satisfying the inequality:
- $0\leq a^2 + b^2 - abc\leq c$. Show that $a^2 + b^2 - abc$ is a perfect square.
-
-Cases like $a=b$ or $a=1$ can be handled very easily, but is there any general solution? Any help shall be greatly appreciated.
-
-REPLY [4 votes]: A solution appears in the first link in this question:
-Seemingly invalid step in the proof of $\frac{a^2+b^2}{ab+1}$ is a perfect square?
-This is a variant of the $ab+1$ problem, maybe devised by working backward from the solution.<|endoftext|>
-TITLE: Why is it hard to prove whether $\pi+e$ is an irrational number?
-QUESTION [111 upvotes]: From this list I came to know that it is hard to conclude whether $\pi+e$ is irrational. Can somebody discuss, with references, why this is hard?
-Is it still an open problem? If so, it would be helpful to know what kinds of ideas have already been tried and ultimately failed to settle it.
-
-REPLY [17 votes]: I was actually going to ask the same question... and in particular if the result would follow as the consequence of any hard, still open conjecture. From the MO thread mentioned by lhf (not the same as the one mentioned by mixedmath) I found out that Schanuel's conjecture would imply it.
-On the Mathworld page for $e$ there's a bit of info on numerical attempts to (how should I say?) verify that you cannot easily disprove the irrationality:
-
-It is known that $\pi+e$ and $\pi/e$ do not satisfy any polynomial equation of degree $\leq 8$ with integer coefficients of average size $10^9$.
-
-Obtaining this result in 1988 required the use of a Cray-2 supercomputer (at NASA Ames Research Center). I guess one could add that the Ferguson–Forcade algorithm, which was used in this computation, gets a bit of flak on Wikipedia. In fact, the author of this paper, D.H. Bailey, later co-developed the superior PSLQ algorithm.
So it is interesting that the problem has advanced computational science too, in a way.<|endoftext|>
-TITLE: Sequence in $C[0,1]$ with no convergent subsequence
-QUESTION [19 upvotes]: I am trying to show that $C[0,1]$, the space of all real-valued continuous functions with the sup metric, is not sequentially compact, by showing that the sequence $f_n = x^n$ has no convergent subsequence. The sup metric $\|\cdot\|$ is defined as
-$$\|f - g \| = \sup_{x \in [0,1]} |f(x) - g(x)|$$
-where $|\cdot|$ is the ordinary Euclidean metric. Now I know that $f_n \rightarrow f$ pointwise, where
-$$f = \begin{cases} 0, & 0 \leq x < 1 \\ 1, & x = 1.\end{cases}$$
-However $f \notin C[0,1]$, so this means by theorem 7.12 of Baby Rudin that $f_n$ cannot converge to $f$ uniformly. However, how does this tell me that no subsequence of $f_n$ can converge to something in $C[0,1]$?
-Thanks.
-
-REPLY [7 votes]: This is (more-or-less) a modification of the hint from t.b's comment.
-It suffices to show that any subsequence of $(f_n)$ is not Cauchy. (Since every convergent sequence is a Cauchy sequence.)
-To do this, notice that
-$$\lVert f_{2n}-f_n \rVert = \sup_{x\in[0,1]} (x^n - x^{2n}) = \sup_{x\in[0,1]} (x^n - (x^{n})^2) = \sup_{t\in[0,1]} (t-t^2)=\frac14.$$
-(It is easy to find the maximum of the quadratic function $f(t)=t-t^2=t(1-t)$.)
-In addition, if we use the monotonicity of the sequence $(f_n)$, we see that
-$$k\ge 2n \qquad \Rightarrow \qquad \lVert f_{k}-f_n \rVert \ge \frac14.$$
-Now, if we have any subsequence of $(f_n)$ then the above estimate shows that this subsequence is not Cauchy. For any given $k_0$ we can find $k'>k_0$ such that $n_{k'}>2n_{k_0}$ and $\lVert f_{n_{k'}}-f_{n_{k_0}} \rVert \ge \frac 14$.<|endoftext|>
-TITLE: Evaluate $\int_0^\pi xf(\sin x)dx$
-QUESTION [16 upvotes]: Let $f(\sin x)$ be a given function of $\sin x$.
-How would I show that $\int_0^\pi xf(\sin x)dx=\frac{1}{2}\pi\int_0^\pi f(\sin x)dx$?
-
-REPLY [8 votes]: $\int_0^\pi xf(\sin x)\,dx$
-$=\int_0^\pi (\pi-x)f(\sin (\pi - x))\,dx$
-$= \int_0^\pi (\pi-x)f(\sin x)\,dx$
-$= \pi\int_0^\pi f(\sin x)\,dx - \int_0^\pi xf(\sin x)\,dx$
-Hence $2\int_0^\pi xf(\sin x)\,dx = \pi\int_0^\pi f(\sin x)\,dx$, i.e.
-$\int_0^\pi xf(\sin x)\,dx = \frac{\pi}{2}\int_0^\pi f(\sin x)\,dx$.
-I also used: $\int_a^b f(x)\,dx = \int_a^b f(a+b-x)\,dx$<|endoftext|>
-TITLE: Can $G≅H$ and $G≇H$ in two different views?
-QUESTION [32 upvotes]: Can $G≅H$ and $G≇H$ in two different views?
-We have two isomorphic groups $G$ and $H$, so $G≅H$ as groups, and suppose that they act on the same finite set, say $\Omega$. Can we see $G≇H$ as permutation groups? Honestly, I am interested in this point in the following link. It is started by:
-
-Notice that different permutation groups may well be isomorphic as ....
-
-See here
-
-REPLY [13 votes]: Let $G$ and $H$ act faithfully on a finite set $\Omega$. This is equivalent to saying that $G$ and $H$ are subgroups of the total permutation group $\mathfrak S_\Omega$.
-Here are two notions of isomorphism which are stronger than a simple group isomorphism:
-
-The actions of $G$ and $H$ are isomorphic if there exists a group morphism $\phi : G \to H$ and a bijection $\sigma : \Omega \to \Omega$ such that for all $g\in G$ and $x\in \Omega$
-$$ g\cdot \sigma x = \sigma(\phi g \cdot x). $$
-In particular, the morphism $\phi$ is an isomorphism, because if $\phi g = 1$ then for all $x\in\Omega$ $g\cdot \sigma x = \sigma x$, and since the action of $G$ is faithful and $\sigma$ surjective, this implies that $g = 1$.
-$G$ and $H$ are said to be conjugate in $\mathfrak S_\Omega$ if there exists a permutation $\sigma$ of $\Omega$ such that $G = \sigma H \sigma^{-1}$. In particular the map $g\in G \mapsto \sigma^{-1} g \sigma\in H$ is an isomorphism.
-
-In fact, both notions are the same (easy exercise for you!), and it's what Wikipedia calls permutation group isomorphism.
-This notion is strictly stronger than the notion of group isomorphism. For example, take $\Omega = \{1,2,3,4\}$, $G$ the group of order $2$ generated by the transposition $(1 2)$ and $H$ the group of order $2$ generated by the double transposition $(1 2)(3 4)$. As groups, $G$ and $H$ are isomorphic, because they are both isomorphic to $\Bbb Z/2\Bbb Z$. However, they are not isomorphic as permutation groups. Indeed, the conjugate of a transposition is always a transposition; it cannot be a double transposition. More precisely, the conjugate $\sigma (1 2) \sigma^{-1}$ is the transposition $(\sigma 1, \sigma 2)$.
-When classifying the subgroups of a given group, it is often important to classify them not only up to isomorphism but also up to conjugation, because an isomorphism class can split into several conjugation classes.<|endoftext|>
-TITLE: The series $ \sum\limits_{k=1}^{\infty} \frac1{\sqrt{{k}{(k^2+1)}}}$
-QUESTION [12 upvotes]: Given the series
-$$\sum_{k=1}^{\infty} \frac1{\sqrt{{k}{(k^2+1)}}}$$
-How can I calculate its exact limit (if that is possible)?
-
-REPLY [3 votes]: I might as well... as mentioned by oen in his answer, the series
-$$\mathscr{S}=\sum_{j=0}^\infty \left(-\frac14\right)^j\binom{2j}{j}\zeta\left(2j+\frac32\right)$$
-is alternating, but rather slowly convergent. oen used the Euler transformation for accelerating the convergence of this series in his answer; for this answer, I will be using the Levin $t$-transform:
-$$\mathcal{L}_n=\frac{\sum\limits_{j=0}^n (-1)^j\binom{n}{j}(j+1)^{n-1}\frac{S_j}{a_j}}{\sum\limits_{j=0}^n (-1)^j\binom{n}{j}(j+1)^{n-1}\frac1{a_j}}$$
-where $a_j$ is the $j$-th term of the series, and $S_j$ is the $j$-th partial sum.
-To demonstrate the superior convergence properties of the $t$-transform, consider the following evaluations:
-\begin{array}{c|cc}k&\mathcal{L}_k&|\mathscr{S}-\mathcal{L}_k|\\\hline 2&2.2593704308006952815&4.692\times10^{-3}\\5&2.2640560757360687322&6.323\times10^{-6}\\10&2.2640623990222550236&1.189\times10^{-10}\\15&2.2640623991414400190&2.190\times10^{-13}\\20&2.2640623991412210238&4.828\times10^{-18}\\25&2.2640623991412210286&5.938\times10^{-21}\end{array}
-Thirty terms of the series along with the $t$-transform give a result good to twenty-five digits, compared to the $128$ terms required by the Euler transform.
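-The transform is easy to implement; here is a sketch in Python with mpmath (the working precision and the sampled values of $n$ are arbitrary choices) that should reproduce the table above up to rounding:
-import mpmath as mp
-mp.mp.dps = 30
-def a(j):
-    return (mp.mpf(-1) / 4)**j * mp.binomial(2*j, j) * mp.zeta(2*j + mp.mpf(3)/2)
-def levin_t(n):
-    terms = [a(j) for j in range(n + 1)]
-    S = [sum(terms[:j + 1]) for j in range(n + 1)]
-    w = [(-1)**j * mp.binomial(n, j) * mp.mpf(j + 1)**(n - 1) for j in range(n + 1)]
-    num = sum(wj * Sj / tj for wj, Sj, tj in zip(w, S, terms))
-    den = sum(wj / tj for wj, tj in zip(w, terms))
-    return num / den
-for n in [2, 5, 10, 15, 20, 25]:
-    print(n, levin_t(n))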
<|endoftext|>
-TITLE: Identifying a map by looking at the pair of topologies that makes it continuous.
-QUESTION [8 upvotes]: Let $\omega_X$ be the set of all topologies on $X$. Given $f:X\rightarrow X$, define $R_f \subset \omega_X \times \omega_X $ as those pairs of topologies on $X$ which make $f$ continuous. For example $\left(\text{Discrete Topology},-\right)$ or $\left(-,\text{Indiscrete Topology}\right)$ are always in $R_f$. But when can $f$ be uniquely determined by its $R_f$? Here is one such case: $$ \forall x \in X: f(x)=x \iff R_f= \left\{ \left(T_\alpha,T_\beta\right)\subset \omega_X \times \omega_X | T_\beta \subset T_\alpha \right\}$$
-Can someone give me more elaborate examples of this, please?
-
-REPLY [7 votes]: $R_f$ uniquely determines $f$ for any non-constant function $f:X \to Y$ (if, and only if, $f$ is constant, $R_f$ is $\omega_X\times\omega_Y$).
-Proof: Fix $y\in Y$. Define the topology $T_y=\{\emptyset,Y,\{y\}\}$.
-Then $(T,T_y)\in R_f$ iff:
-
-$f^{-1}(\emptyset)=\emptyset\in T$ (always true)
-$f^{-1}(Y)=X\in T$ (always true)
-$f^{-1}(\{y\})\in T$
-
-Take the intersection of all such $T$: it is the set $T_0=\{\emptyset,X,f^{-1}(\{y\})\}$. Because $f$ is not constant, $f^{-1}(\{y\})\ne X$ and so $f^{-1}(\{y\})$ is the largest element of $T_0\setminus\{X\}$.
-Since $f$ is uniquely determined when $f^{-1}(\{y\})$ is given for all $y$, this proves the result.
-(Note: we only needed to use topologies with a finite number of open sets.)<|endoftext|>
-TITLE: Estimating maximum value of random variable
-QUESTION [8 upvotes]: Suppose I have some random variable $X$ which only takes on values over some finite region of the real line, and I want to estimate the maximum value of this random variable. Obviously one crude method is to take many measurements, let's say $X_1$, $X_2$, $\ldots, X_n$ (which we'll say are all iid) and to use
-$$X_{max} = \text{max}(X_1, \ldots X_n)$$
-as my guess, and as long as $n$ is large enough this should be good enough. However, $X_{max}$ is always less than the actual maximum, and I'm wondering if there's any way to modify $X_{max}$ so it gives a guess (still with some uncertainty) which is centred around the actual maximum value, rather than always a little less than it.
-Thanks
-
-REPLY [4 votes]: When the random variables are uniform on $(0,M)$ for an unknown $M\gt0$, one can check that the maximum $X_n^*$ of an i.i.d. sample of size $n$ is such that $\mathrm P(X_n^*\leqslant x)=(x/M)^n$ for every $x$ in $(0,M)$ hence $\mathrm E(X_n^*)=\frac{n}{n+1}M$. This means that an unbiased estimate of $M$ is $\widehat M_n=\frac{n+1}nX_n^*$.
-For other distributions, the result is different. For example, starting from the distribution with density $ax^{a-1}/M^a$ on $(0,M)$ for some known $a\gt0$, one gets $\mathrm P(X_n^*\leqslant x)=(x/M)^{an}$ for every $x$ in $(0,M)$ hence $\mathrm E(X_n^*)=\frac{an}{an+1}M$ and an unbiased estimate of $M$ is $\widehat M_n=\frac{an+1}{an}X_n^*$.
-Likewise, if the density is $a(M-x)^{a-1}/M^a$ on $(0,M)$ for some known $a\gt0$, one gets $\mathrm E(X_n^*)=M(1-c_n)$ with $c_n\sim\frac{\Gamma(1+1/a)}{n^{1/a}}$ and an unbiased estimate of $M$ is $\widehat M_n=\frac1{1-c_n}X_n^*$.
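-A quick simulation of the uniform case (a Python sketch; the values of $M$, $n$, and the number of repetitions are arbitrary) shows the bias of $X_n^*$ and how the $\frac{n+1}{n}$ correction removes it:
-import numpy as np
-rng = np.random.default_rng(1)
-M, n, reps = 7.5, 20, 100_000
-Xmax = rng.uniform(0, M, size=(reps, n)).max(axis=1)
-print(Xmax.mean())                  # close to n/(n+1)*M, i.e. biased low
-print(((n + 1) / n * Xmax).mean())  # close to M = 7.5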
<|endoftext|>
-TITLE: How to show that $\mathrm{SL}(2,\mathbb Z) = \langle A, B\rangle$?
-QUESTION [8 upvotes]: Show that if $\mathbf{A}= \left( \begin{array}{cc} 1&1\\ 0&1 \end{array} \right)$, $\mathbf{B}= \left( \begin{array}{cc} 0&1\\ -1&0 \end{array} \right)$ and $\mathrm{SL}(2, \mathbb{Z}) := \{ \mathbf{C}\in\mathrm{M}(2\times 2;\mathbb{Z})\, |\, \det(\mathbf{C}) = 1\}$ then $\langle\mathbf{A}, \mathbf{B}\rangle = \mathrm{SL}(2, \mathbb{Z})$.
-I found this exercise in a textbook for linear algebra in a chapter about the determinant, so it should be solved rather elementarily and without any deeper understanding of group theory ($\mathrm{SL}(2, \mathbb{Z})$ is defined only for the exercise). Showing $\langle\mathbf{A}, \mathbf{B}\rangle \subseteq \mathrm{SL}(2, \mathbb{Z})$ was easy but I got stuck with the opposite direction. Any help would be appreciated.
-
-REPLY [12 votes]: The intuitive content is that the claimed statement roughly paraphrases into "every invertible integer matrix is a product of integer elementary row operations", with appropriate tweaks to the fact we want to stay within determinant 1 rather than $\pm 1$.
-The proof of the claimed statement follows the same idea as the proof that every invertible (real) matrix is a product of elementary matrices: show that elementary operations can row reduce any invertible matrix to the identity.
-The powers of $\mathbf{A}$ are precisely the elementary matrices that describe adding multiples of the second row to the top row.
-The powers of $\mathbf{BAB}^{3}$ are the elementary matrices that describe adding multiples of the top row to the second row.
-Using these elementary row operations, any 2x2 integer matrix can be row reduced (think "Euclidean algorithm") to one of the following forms:
-$$ \left( \begin{matrix} x & y \\ 0 & z \end{matrix} \right)
-\qquad \left( \begin{matrix} x & y \\ 0 & 0 \end{matrix} \right)
-\qquad \left( \begin{matrix} 0 & x \\ 0 & 0 \end{matrix} \right)
-\qquad \left( \begin{matrix} 0 & 0 \\ 0 & 0 \end{matrix} \right)
-$$
-with $0 \leq y < |z|$ in the first case, and $|x| > 0$ in all. If the original matrix was in $SL(2,\mathbf{Z})$, then only the first form is possible, and we must have either $x=z=1$ or $x=z=-1$; in both cases $y=0$, and since $\mathbf{B}^2=-I$, both reduced forms lie in $\langle\mathbf{A},\mathbf{B}\rangle$.
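-To see the two families of row operations concretely (a Python sketch with NumPy; the exponents are arbitrary):
-import numpy as np
-A = np.array([[1, 1], [0, 1]])
-B = np.array([[0, 1], [-1, 0]])
-mpow = np.linalg.matrix_power
-print(mpow(A, 3))          # [[1, 3], [0, 1]]: adds 3 x (row 2) to row 1
-print(B @ A @ mpow(B, 3))  # [[1, 0], [-1, 1]]: adds -(row 1) to row 2
-print(mpow(B, 2))          # -I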
<|endoftext|>
-TITLE: Permutations: Given $P^4$, how many $P^1$s are possible?
-QUESTION [9 upvotes]: Let $P^0$ be the identity tuple $(1,2,...,N)$.
-Let $P^{i+1}$ be the tuple after a permutation $P$ is applied to $P^i$.
-For example, if $P$ is $(2,1,3,6,4,5)$ then:
-$$\begin{align}
-P^0 &= (1,2,3,4,5,6) \\
-P^1 &= (2,1,3,6,4,5) \\
-P^2 &= (1,2,3,5,6,4) \\
-P^3 &= (2,1,3,4,5,6) \\
-\dots
-\end{align}$$
-Given the value $P^4$, how many possible values of $P$ are there?
-
-REPLY [9 votes]: We can restate the question as: how many "fourth roots" does a given permutation have? That is, given a permutation $\sigma$, how many permutations $\tau$ exist such that $\tau^4 = \sigma$? The answer — let's call it $N(\sigma)$ — depends on the structure of $\sigma$.
-(The basic idea of what to look at comes from section 4.8 of generatingfunctionology (e.g. here).)
-Consider a permutation $\tau$, decomposed into cycles. For a particular cycle in $\tau$ of length $n$, let's look at what happens in $\tau^4$. If $(a_0, a_1, \dots)$ is a cycle in $\tau$ (that is, $\tau$ takes $a_0$ to $a_1$, takes $a_1$ to $a_2$, etc.), then $\tau^4$ takes $a_0$ to $a_4$, takes $a_1$ to $a_5$, etc. The resulting (sub)permutation may be a single cycle or multiple cycles, depending upon $n$:
-
-If $n$ is odd (relatively prime to $4$), then we have a single cycle of length $n$, which looks like $(a_0, a_4, a_8, \dots)$ with the indices looping around modulo $n$.
-If $n$ is even but not a multiple of $4$, then we get two cycles of length $n/2$, one looking like $(a_0, a_4, a_8, \dots)$ and containing all the even indices, and the other looking like $(a_1, a_5, a_9 \dots)$ and containing all the odd ones.
-If $n$ is a multiple of $4$, then we get four cycles of length $n/4$ each, namely $(a_0, a_4, \dots)$, $(a_1, a_5, \dots)$, $(a_2, a_6, \dots)$, and $(a_3, a_7, \dots)$.
-
-Turning this around, we can look at a cycle of length $m$ in $\tau^4$ and say the following:
-
-If $m$ is even, then it must have come, along with three others, as the result of a single cycle of length $4m$ splitting into four cycles of length $m$ each.
-If $m$ is odd, then either:
-
-One cycle of length $m$ gave rise to one cycle of length $m$,
-One cycle of length $2m$ gave rise to two cycles of length $m$ each, or
-One cycle of length $4m$ gave rise to four cycles of length $m$ each.
-
-
-(This immediately gives a condition for $N(\sigma)$ to be positive, i.e. for the permutation $\sigma$ to have at least one fourth root: for every even number $m$, the number of cycles of length $m$ must be a multiple of $4$.)
-We can now calculate $N(\sigma)$. For each number $m$, let the number of cycles of length $m$ in $\sigma$ be $c_m$. Then:
-
-If $m$ is even, then ($c_m$ must be a multiple of $4$ and) we can reconstruct a list of $c_m/4$ original cycles of length $4m$ each, by picking $c_m/4$ ordered groups of four, with ordering among them and cyclic order within each group of four being irrelevant, the number of ways of doing which is $$\frac{c_m!}{(c_m/4)!4^{c_m/4}}$$ After picking a particular group of four cycles, we can make a big one out of them (find a fourth root of the product of these four cycles) by choosing where in the cyclic ordering we start in each of the four cycles. WLOG we can fix a point in the first cycle where we start, so the choice is only among the other three, so we must multiply the above number by $m^3$.
-If $m$ is odd, then for each integer partition of $c_m$ of the form $$c_m = x + 2y + 4z$$ with $x$, $y$ and $z$ being nonnegative integers, we can pick $x$ cycles of length $m$ by themselves, form $y$ pairs of cycles, and $z$ groups of four. The number of ways of doing this is $$\frac{c_m!}{x! (y!2^y) (z!4^z)}.$$ Again in each of the $y$ pairs we have $m$ ways of putting them together, and in each of the $z$ quadruplets we have $m^3$ ways of putting them together.
-
-So using all that, the final expression for $N(\sigma)$ is:
-$$N(\sigma) = \left(\prod_{2|m}\frac{c_m!m^{3c_m/4}}{(c_m/4)!4^{c_m/4}}\right) \left(\prod_{2\not|m}\sum_{x,y,z:c_m=x+2y+4z}\frac{c_m! m^y m^{3z}}{x! (y!2^y) (z!4^z)}\right)$$
-I've checked this with a computer program for all permutations of size up to 12, so I'm finally convinced the expression is correct.
-
-Examples: Consider your original permutation $\tau = \begin{pmatrix}
-1 & 2 & 3 & 4 & 5 & 6\\
-2 & 1 & 3 & 6 & 4 & 5\end{pmatrix}$. We can write this in cycle notation as $\tau = (1 2) (3) (4 6 5)$. For this permutation, $\tau^4 = (1)(2)(3)(4 6 5)$. Notice how the cycle of length $2$ has split into two cycles of length $1$ each. If we want to start with this permutation $\sigma = (1)(2)(3)(4 6 5)$ and find fourth roots, we must look at the number of cycles $c_m$ for each $m$:
-
-$m = 1$: $c_m = 3$. We're looking at just the $(1)(2)(3)$ part.
-
-For the partition $3 = x + 2y + 4z = 3 + 2(0) + 4(0)$, we have only one ($\frac{3!}{3!} = 1$) way of picking three individual $1$-cycles: the identity permutation $(1)(2)(3)$ itself.
-For the partition $3 = x + 2y + 4z = 1 + 2(1) + 4(0)$, we have three ways ($\frac{3!}{1!\,1!\,2^1} = 3$) of picking a single $1$-cycle and a pair, corresponding to the fourth roots $(1)(2 3)$, $(2)(1 3)$ and $(3)(1 2)$ respectively.
-
-$m = 3$: $c_m = 1$. We're looking at the $(4 6 5)$ part.
-
-There's only one partition $1 = 1 + 2(0) + 4(0)$ and only one way of picking a single cycle out of a single one, and here the fourth root of $(4 6 5)$ happens to be $(4 6 5)$ itself.
-
-
-So the four fourth roots of $\sigma = (1)(2)(3)(4 6 5)$ are
-
-$(1)(2)(3)(4 6 5) = \begin{pmatrix}
-1 & 2 & 3 & 4 & 5 & 6\\
-1 & 2 & 3 & 6 & 4 & 5 \end{pmatrix}$,
-$(1)(2 3)(4 6 5) = \begin{pmatrix}
-1 & 2 & 3 & 4 & 5 & 6\\
-1 & 3 & 2 & 6 & 4 & 5 \end{pmatrix}$,
-$(2)(1 3)(4 6 5) = \begin{pmatrix}
-1 & 2 & 3 & 4 & 5 & 6\\
-3 & 2 & 1 & 6 & 4 & 5 \end{pmatrix}$,
-$(3)(1 2)(4 6 5) = \begin{pmatrix}
-1 & 2 & 3 & 4 & 5 & 6\\
-2 & 1 & 3 & 6 & 4 & 5 \end{pmatrix}$.
-
-
-Let $\sigma = (1 2 3)(4 5 6) = \begin{pmatrix}
-1 & 2 & 3 & 4 & 5 & 6\\
-2 & 3 & 1 & 5 & 6 & 4\end{pmatrix}$. We can either choose to keep the two $3$-cycles separate ($x = 2$, $y = 0$), giving us the root $(1 2 3)(4 5 6)$ itself, or we can choose to put the pair together ($x = 0, y = 1$), where the choice of the pair is unique but we can interleave the two in three ways — basically the form $(1 ? 3 ? 2 ?)$ with the '?'s filled by $(4 6 5)$ — giving us the roots $(1 4 3 6 2 5)$, $(1 5 3 4 2 6)$ and $(1 6 3 5 2 4)$. So we have four fourth roots.
-
-Let $\sigma = (1 2)(3 4)(5 6)(7 8 9)$. In this case for $m = 2$, $c_m = 3$ is not a multiple of $4$, so $N(\sigma) = 0$.
-
-Let $\sigma = \begin{pmatrix}1 & 7\end{pmatrix} \begin{pmatrix}2\end{pmatrix} \begin{pmatrix}3 & 16 & 9\end{pmatrix} \begin{pmatrix}4 & 10\end{pmatrix} \begin{pmatrix}5\end{pmatrix} \begin{pmatrix}6 & 18\end{pmatrix} \begin{pmatrix}8\end{pmatrix} \begin{pmatrix}11 & 15 & 13\end{pmatrix} \begin{pmatrix}12 & 17\end{pmatrix} \begin{pmatrix}14\end{pmatrix}$. Counting the fourth roots of this permutation by brute-force enumeration is impractical, but we can easily do it by collecting cycles of the same length:
-
-For $m = 1$: the roots of $\begin{pmatrix}2\end{pmatrix} \begin{pmatrix}5\end{pmatrix} \begin{pmatrix}8\end{pmatrix} \begin{pmatrix}14\end{pmatrix}$ are
-
-(all separate) $\begin{pmatrix}2\end{pmatrix} \begin{pmatrix}5\end{pmatrix} \begin{pmatrix}8\end{pmatrix} \begin{pmatrix}14\end{pmatrix}$,
-(two pairs) $\begin{pmatrix}2 & 5\end{pmatrix} \begin{pmatrix}8 & 14\end{pmatrix}$, $\begin{pmatrix}2 & 8\end{pmatrix} \begin{pmatrix}5 & 14\end{pmatrix}$, $\begin{pmatrix}2 & 14\end{pmatrix} \begin{pmatrix}5 & 8\end{pmatrix}$,
-(all together) $\begin{pmatrix}2 & 5 & 8 & 14\end{pmatrix}$, $\begin{pmatrix}2 & 5 & 14 & 8\end{pmatrix}$, $\begin{pmatrix}2 & 8 & 5 & 14\end{pmatrix}$, $\begin{pmatrix}2 & 8 & 14 & 5\end{pmatrix}$, $\begin{pmatrix}2 & 14 & 5 & 8\end{pmatrix}$, $\begin{pmatrix}2 & 14 & 8 & 5\end{pmatrix}$.
-So there are $1 + 3 + 6 = 10$ roots of $\begin{pmatrix}2\end{pmatrix} \begin{pmatrix}5\end{pmatrix} \begin{pmatrix}8\end{pmatrix} \begin{pmatrix}14\end{pmatrix}$.
-
-For $m = 2$, the roots of $\begin{pmatrix}1 & 7\end{pmatrix} \begin{pmatrix}4 & 10\end{pmatrix} \begin{pmatrix}6 & 18\end{pmatrix} \begin{pmatrix}12 & 17\end{pmatrix}$ are $6 \times 8 = 48$ in number (pick one of the $3!$ orderings of the last three cycles, then one of the $2$ starting points in each of those three, $2^3$ in all).
-For $m = 3$, the roots of $\begin{pmatrix}3 & 16 & 9\end{pmatrix} \begin{pmatrix}11 & 15 & 13\end{pmatrix}$ are
-
-(both separate) $\begin{pmatrix}3 & 16 & 9\end{pmatrix} \begin{pmatrix}11 & 15 & 13\end{pmatrix}$
-(both together) $\begin{pmatrix}3 & 11 & 9 & 13 & 16 & 15\end{pmatrix}$, $\begin{pmatrix}3 & 15 & 9 & 11 & 16 & 13\end{pmatrix}$, $\begin{pmatrix}3 & 13 & 9 & 15 & 16 & 11\end{pmatrix}$.
-So there are $1 + 3 = 4$ roots of $\begin{pmatrix}3 & 16 & 9\end{pmatrix} \begin{pmatrix}11 & 15 & 13\end{pmatrix}$.
-
-
-So in total $N(\sigma) = 10 \times 48 \times 4 = 1920$.
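-Since the answer mentions checking the formula by computer, here is a compact version of such a check (a Python sketch; brute force over $S_6$ only, for speed):
-from itertools import permutations
-from collections import Counter
-from math import factorial
-
-def cycle_counts(p):
-    # p is a tuple with p[i] = image of i; returns {cycle length: count}
-    seen, c = [False] * len(p), Counter()
-    for i in range(len(p)):
-        if not seen[i]:
-            l, j = 0, i
-            while not seen[j]:
-                seen[j], j, l = True, p[j], l + 1
-            c[l] += 1
-    return c
-
-def N_formula(p):
-    # The closed form for the number of fourth roots derived above.
-    total = 1
-    for m, cm in cycle_counts(p).items():
-        if m % 2 == 0:
-            if cm % 4:
-                return 0
-            total *= factorial(cm) * m**(3 * cm // 4) // (factorial(cm // 4) * 4**(cm // 4))
-        else:
-            s = 0
-            for z in range(cm // 4 + 1):
-                for y in range((cm - 4 * z) // 2 + 1):
-                    x = cm - 2 * y - 4 * z
-                    s += (factorial(cm) * m**y * m**(3 * z)
-                          // (factorial(x) * factorial(y) * 2**y * factorial(z) * 4**z))
-            total *= s
-    return total
-
-n = 6
-perms = list(permutations(range(n)))
-counts = Counter(tuple(p[p[p[p[i]]]] for i in range(n)) for p in perms)  # tau^4 for each tau
-assert all(counts[s] == N_formula(s) for s in perms)
-print("formula agrees with brute force on S_6")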
<|endoftext|>
-TITLE: Prove the convergence/divergence of $\sum \limits_{k=1}^{\infty} \frac{\tan(k)}{k}$
-QUESTION [44 upvotes]: Can it be easily proved that the following series converges/diverges?
-$$\sum_{k=1}^{\infty} \frac{\tan(k)}{k}$$
-I'd really appreciate your support on this problem. I'm looking for some easy proof here. Thanks.
-
-REPLY [27 votes]: A proof that the sequence $\frac{\tan(n)}{n}$ does not have a limit for $n\to \infty$ is given in this article (Sequential tangents, Sam Coskey). This, of course, implies that the series does not converge.
-The proof, based on this paper by Rosenholtz (*), uses the continued fraction of $\pi/2$, and, essentially, it shows that it's possible to find a subsequence such that $\tan(n_k)$ is "big enough", by taking numerators of the truncated continued fraction ("convergents").
-(*) "Tangent Sequences, World Records, π, and the Meaning of Life: Some Applications of Number Theory to Calculus", Ira Rosenholtz - Mathematics Magazine Vol. 72, No. 5 (Dec., 1999), pp. 367-376<|endoftext|>
-TITLE: show if function is even or odd
-QUESTION [6 upvotes]: Suppose that we have the function
-$$f(x)=\frac{2^x+1}{2^x-1}$$
-The question is whether this function is even or odd. I know the definitions of even and odd functions, namely
-even if $f(-x)=f(x)$ and odd if $f(-x)=-f(x)$, and when I put a $-$ sign into the function, I found that it is neither even nor odd, because $2^{-x}\ne-1 \times 2^x$; but my book says that it is even, so am I wrong? Please help me clarify whether the book is correct or I am. Thanks
-
-REPLY [12 votes]: Let's see what $f(-x)$ looks like:
-$$ f(-x) = \frac{2^{-x} + 1}{2^{-x} - 1} $$
-Since $f(x)$ contains $2^x$ and not $2^{-x}$, let's multiply the numerator and denominator by $2^x$:
-$$ f(-x) = \frac{2^x(2^{-x} + 1)}{2^x(2^{-x} - 1)} = \frac{1 + 2^x}{1-2^x} = - \frac{2^{x} + 1}{2^{x} - 1} = -f(x) $$
-This shows that the function is odd.
-
-REPLY [7 votes]: The function is odd. You probably missed something in your calculation.
-$ \displaystyle{ f(-x) = \frac{2^{-x}+1}{2^{-x}-1}= \frac{ \frac{1}{2^x} + 1 }{ \frac{1}{2^x} -1} = \frac{ \frac{1+2^x}{2^x}}{ \frac{1-2^x}{2^x}} =-f(x)}$
-
-REPLY [6 votes]: $$\frac{2^{-x}+1}{2^{-x}-1} = \frac{\frac 1 {2^x}+1}{\frac 1 {2^x}-1}$$
-Now clear fractions by multiplying numerator and denominator by $2^x$ and see what you have.
-
-REPLY [4 votes]: We get that
-$$ f(-x) = \frac{2^{-x}+1}{2^{-x}-1} = \frac{1+2^x}{1-2^x}= - \frac{2^x + 1}{2^x-1} = -f(x), $$
-so the function is odd.<|endoftext|>
-TITLE: Why isn't there a continuously differentiable injection into a lower dimensional space?
-QUESTION [25 upvotes]: How to show that a continuously differentiable function $f:\mathbb{R}^{n}\to \mathbb{R}^m$ can't be 1-1 when $n>m$? This is an exercise in Spivak's "Calculus on manifolds".
-I can solve the problem in the case $m=1$. To see this, note that the result is obvious if the first partial derivative $D_1 f(x)=0$ for all $x$, as then $f$ will be independent of the first variable. Otherwise there exists $a\in \mathbb{R}^n$ s.t. $D_1 f(a)\not=0$. Put $g:A\to \mathbb{R}^n, g(x)=(f(x),x_2, \ldots, x_n)$ (with $x=(x_1,\ldots, x_n)$). Now the Jacobian
-$$
-g'(x) = \left[
- \begin{array}{cc}
- D_1 f(x) & 0 \\
- 0 & I_{n-1}
- \end{array}
- \right]
-$$
-so that $\text{det}\, g'(a)=D_1 f(a)\not=0$. By the Inverse Function Theorem we have an open set $B\subseteq A$ s.t. $g:B\to g(B)$ is bijective with a differentiable inverse. In particular, $g(B)$ is open.
-Pick any $g(b)=(f(b),b_2,\ldots, b_n)\in g(B)$. Since $g(B)$ is open there exists $\varepsilon > 0$ such that $(f(b), b_2, \ldots, b_n+\varepsilon) \in g(B)$. Thus we can find $b'\in B$ s.t. $g(b')=(f(b), b_2, \ldots, b_n+\varepsilon)$. By the definition of $g$, $f(b')=f(b)$ with $b'$ and $b$ differing in the last coordinate. Thus $f$ isn't injective.
-However, I cannot generalize this argument to higher dimensions.
-
-REPLY [2 votes]: I don't have a full solution for the general case, myself, but I can boil it down to a simpler problem which I feel may have a more elementary solution.
-Firstly, I believe you miscalculated the top row of $g'$.
-In the $m=1$ case, it comes out to:
-$$
- g'(x)
- =
- \left[
- \begin{array}{c|ccc}
- D_1 f(x) & D_2 f(x) & \dots & D_n f(x) \\ \hline
- 0 & & I_{n-1} & \\
- \end{array}
- \right]
-$$
-The determinant still comes out to be $\det(g'(x)) = D_1 f(x)$.
-In the general case, $g'$ becomes
-$$
- g'(x)
- =
- \left[
- \begin{array}{ccc|ccc}
- D_1 f^1(x) & \dots & D_m f^1(x) & D_{m+1} f^1(x) & \dots & D_n f^1(x) \\
- \vdots & \ddots & \vdots & \vdots & \ddots & \vdots \\
- D_1 f^m(x) & \dots & D_m f^m(x) & D_{m+1} f^m(x) & \dots & D_n f^m(x) \\ \hline
- & 0 & & & I_{n-m} & \\
- \end{array}
- \right]
-$$
-The determinant then becomes
-$$
- \det(g'(x))
- =
- \left|
- \begin{array}{ccc}
- D_1 f^1(x) & \dots & D_m f^1(x) \\
- \vdots & \ddots & \vdots \\
- D_1 f^m(x) & \dots & D_m f^m(x) \\
- \end{array}
- \right|
-$$
-As before, this determinant cannot be nonzero anywhere, or else you can apply the inverse function theorem again, to show that $f$ is not 1-1. Therefore it remains to be shown that if this determinant is zero everywhere, $f$ cannot be 1-1.
-It seems intuitively clear to me that $(f^1, \dots, f^m)$ should fail to be 1-1, and therefore so will $f$, but I'm having difficulties showing it using the earlier results from Spivak.
-To be clear, I claim that the general case boils down to: if $f: \mathbb{R}^n \rightarrow \mathbb{R}^n$ is continuously differentiable and satisfies $\det(f'(x)) = 0$ for all $x$, then $f$ is not 1-1.<|endoftext|>
-TITLE: If $\sum\limits_{k=1}^{\infty}a_k=S $, then $ a_4+a_3+a_2+a_1+a_8+a_7+a_6+a_5+\dots=?$
-QUESTION [7 upvotes]: If we know that $\sum\limits_{k=1}^{\infty}a_k=S$, what can we say about the convergence of $$a_4+a_3+a_2+a_1+a_8+a_7+a_6+a_5+a_{12}+a_{11}+a_{10}+a_{9}+\dots$$ ?
-If it does converge, what is the sum (in terms of $S$)?
-As per the first question - it clearly converges since the number of terms in each group is bounded (by 4) and $(a_n)_{n=1}^\infty$ tends to zero as $n\to\infty$.
-The second question is where I'm struggling. We don't know that $\sum\limits_{k=1}^{\infty}a_k$ absolutely converges, so I don't know what we can say about its sum.
-Thanks for your help.
-
-REPLY [7 votes]: I think Cameron is right. In particular, the difference $|S_n-T_n|$ is bounded by $|a_{n-1}+a_{n-2}+a_{n-3}|$ and therefore has to approach zero.
-The problem with series that are not absolutely convergent is that you can't make arbitrary rearrangements of the terms. However, in general, the rearrangements that cause the sum of a convergent series to change cannot be "bounded", meaning that there's no uniform upper bound on the number of shifts applied to each term of the original series (i.e. you can't say "every term is moved at most by $M$ terms").
-Indeed, if every term of the sequence is shifted by at most $M$ terms, you can prove that the difference between the partial sums of the new and original series is bounded by
-$$|T_n-S_n|\leq\sum_{k=n-M}^{n-1}|a_k|$$
-which clearly converges to zero if the original series is convergent.<|endoftext|>
-TITLE: Proving the Möbius formula for cyclotomic polynomials
-QUESTION [8 upvotes]: We want to prove that
-$$ \Phi_n(x) = \prod_{d|n} \left( x^{\frac{n}{d}} - 1 \right)^{\mu(d)} $$
-where $\Phi_n(x)$ is the $n$-th cyclotomic polynomial and $\mu(d)$ is the Möbius function defined on the natural numbers.
-We were instructed to do it by the following stages:
-Using induction we assume that the formula is true for $n$ and we want to prove it for $m = n p^k$ where $p$ is a prime number such that $p\not{|}n$.
-a) Prove that $$\prod_{\xi \in C_{p^k}}\xi = (-1)^{\phi(p^k)} $$ where $C_{p^k}$ is the set of all primitive $p^k$-th roots of unity, and $\phi$ is the Euler function. I proved that.
-b) Using the induction hypothesis show that
-$$ \Phi_m(x) = (-1)^{\phi(p^k)} \prod_{d|n} \left[ \prod_{\xi \in C_{p^k}} \left( (\xi^{-1}x)^{\frac{n}{d}} - 1 \right) \right]^{\mu(d)} $$
-c) Show that
-$$ \prod_{\xi \in C_{p^k}} \left( (\xi^{-1}x)^{\frac{n}{d}} - 1 \right) = (-1)^{\phi(p^k)} \frac{x^{\frac{m}{d}}-1}{x^{\frac{m}{pd}} - 1} $$
-d) Use these results to prove the formula by substituting c) into b).
-I am stuck on b) and c).
-In b) I tried to use the recursion formula $$ x^m - 1 = \prod_{d|m}\Phi_d(x) $$ and
-$$ \Phi_m(x) = \frac{x^m-1}{ \prod_{\stackrel{d|m}{d<m}}\Phi_d(x)} $$
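-As a quick computational check of the product formula itself (a Python sketch using SymPy; the sampled values of $n$ are arbitrary, and the import paths are an assumption about recent SymPy versions):
-from sympy import symbols, divisors, cyclotomic_poly, prod, simplify
-from sympy.ntheory import mobius
-x = symbols('x')
-for n in [1, 2, 6, 12, 30]:
-    lhs = prod((x**(n // d) - 1)**mobius(d) for d in divisors(n))
-    assert simplify(lhs - cyclotomic_poly(n, x)) == 0
-print("Mobius product matches cyclotomic_poly for these n")<|endoftext|>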
Before I understood this I was constantly getting confused about the difference between transforming a vector and transforming its components. -For an infinite-dimensional example, consider the vector space $k[x]$ of polynomials in one variable over a field. It has a distinguished set of dual vectors given by the functions $[x^n]$ which return the coefficient of $x^n$ in a polynomial. To be suggestive you can write these functions as $\frac{1}{n!} \frac{d^n}{dx^n}_{x = 0}$. It turns out that the dual space $k[x]^{\ast}$ is precisely the product of the spaces containing each of these dual vectors; for example, the dual space contains vectors that ought to be called -$$(e^{t \frac{d}{dx} })_{x=0} = \sum_{n \ge 0} \frac{t^n}{n!} \frac{d^n}{dx^n}_{x=0}$$ -that given a polynomial $f(x)$ return the numerical value of $f(t)$. -Thinking of $\frac{d^0}{dx^0}_{x=0}$ as a toy model for the Dirac delta function, you can think of this construction as a toy model for (Schwartz) distributions. - -In differential geometry, the dual of a tangent space $T_p(M)$ at a point $p$ on a manifold $M$ is the cotangent space $T_p^{\ast}(M)$ at $p$. Just as the tangent space captures the infinitesimal behavior of smooth functions $\mathbb{R} \to M$ near $p$ (curves), the cotangent space captures the infinitesimal behavior of smooth functions $M \to \mathbb{R}$ near $p$ (coordinates). Just as a nice family of tangent vectors gives a vector field, a nice family of cotangent vectors gives a 1-form. In classical mechanics, the cotangent bundle is the phase space of a classical particle traveling on $M$; cotangent vectors give momenta. - -For me duality really shines when you combine it with tensor products and start using the language of tensors. Then you can describe any kind of linear-ish thing using a combination of tensor products and duals, at least for finite-dimensional vector spaces: - -What's a linear function $V \to W$? It's an element of $V^{\ast} \otimes W$. -What's a bilinear form $V \times V \to k$? It's an element of $V^{\ast} \otimes V^{\ast}$. -What's a multiplication $V \times V \to V$? It's an element of $V^{\ast} \otimes V^{\ast} \otimes V$. - -When you have a bunch of linear-ish things around, writing them as all tensors helps you keep track of exactly how you can combine them (using tensor contraction). For example, an endomorphism $V \to V$ is an element of $V^{\ast} \otimes V$, but I have a distinguished dual pairing -$$V^{\ast} \otimes V \to k.$$ -What does this do to endomorphisms? It's just the trace!<|endoftext|> -TITLE: How to calculate all the four solutions to $(p+5)(p-1) \equiv 0 \pmod {16}$? -QUESTION [6 upvotes]: This is a kind of a plain question, but I just can't get something. -For the congruence and a prime number $p$: $(p+5)(p-1) \equiv 0\pmod {16}$. -How come that the in addition to the solutions -$$\begin{align*} -p &\equiv 11\pmod{16}\\ -p &\equiv 1\pmod {16} -\end{align*}$$ -we also have -$$\begin{align*} -p &\equiv 9\pmod {16}\\ - p &\equiv 3\pmod {16}\ ? -\end{align*}$$ -Where do the last two come from? It is always 4 solutions? I can see that they are satisfy the equation, but how can I calculate them? 
-Thanks
-
-REPLY [2 votes]: In the Theorem below put $\rm\:p\!=\!2,\ c=-5,\ d = 1\:$ to deduce that $\rm\:mod\ 16,\ (x\!+\!5)(x\!-\!1)\:$ has roots $\rm\:x \equiv -5,1\pmod 8,\:$ which are $\rm\:x,x\!+\!8 \equiv -5,1,3,9\pmod{16}.$ It has $\rm\,4\ (\!vs.\,2)$ roots by $\rm\:x\!+\!5 \equiv x\!-\!1\pmod{2},\:$ so both are divisible by $2$, so the other need be divisible only by $\rm8\ (\!vs. 16).$ Choosing larger primes $\rm\:p\:$ yields a quadratic with as many roots as you desire. These matters are much clearer $\rm\:p$-adically, e.g. google Hensel's Lemma.
-Theorem $\ $ If prime $\rm\:p\:|\:c\!-\!d\:$ but $\rm\:p^2\nmid c\!-\!d\:$ then $\rm\:(x\!-\!c)(x\!-\!d)\:$ has $\rm\:2\!\;p\:$ roots mod $\rm\:p^4,\:$ namely $\rm\:x \equiv c+j\,p^3\:$ and $\rm\: x\equiv d+j\,p^3,\:$ for $\rm\:0\le j \le p\!-\!1.$
-Proof $\ $ Note $\rm\: a = x\!-\!d,\ b = x\!-\!c\:$ satisfy the hypotheses of the Lemma below, thus we deduce $\rm\:p^4\:|\:(x\!-\!c)(x\!-\!d)\iff p^3\:|\:x\!-\!c\:$ or $\rm\:p^3\:|\:x\!-\!d,\:$ i.e. $\rm\:x\equiv c,d\pmod{p^3}.\:$ This yields the claimed roots $\rm\:mod\,\ p^4,\:$ which are all distinct since $\rm\:c+jp^3\equiv d+kp^3\:$ $\Rightarrow$ $\rm\:p^4\:|\:c\!-\!d+(j\!-\!k)p^3\:$ $\Rightarrow$ $\rm\: p^3\:|\:c\!-\!d,\:$ contra hypothesis. $\quad$ QED
-Lemma $\ $ If prime $\rm\:p\:|\:a\!-\!b\:$ but $\rm\:p^2\nmid a\!-\!b\:$ then $\rm\:p^4\:|\:ab\iff p^3\:|\:a\ $ or $\rm\:p^3\:|\:b.$
-Proof $\rm\,\ (\Rightarrow) \ \ p\:|\:ab\:\Rightarrow\:p\:|\:a\:$ or $\rm\:p\:|\:b,\:$ so $\rm\:p\:|\:a,b\:$ by $\rm\:a\equiv b\pmod p.$ But not $\rm\:p^2\:|\:a,b\:$ else $\rm\:p^2\:|\:a\!-\!b.\:$ So one of $\rm\:a,b\:$ is not divisible by $\rm\:p^2,\:$ hence the other is divisible by $\rm\:p^3.$
-$(\Leftarrow)\ \ $ As above, $\rm\:p\:|\:a,b.\:$ Since one is divisible by $\rm\:p^3,\:$ then $\rm\:p^4\:$ divides their product. $\ \ $ QED<|endoftext|>
-TITLE: Solving SAT by converting to disjunctive normal form
-QUESTION [19 upvotes]: The first well-known $NPC$ problem is the Boolean Satisfiability Problem, which has a proof of being $NPC$ done by Cook (Cook-Levin Theorem).
-The problem can easily be described the following way:
-
-In complexity theory, the satisfiability problem (SAT) is a decision problem, whose instance is a Boolean expression written using only AND, OR, NOT, variables, and parentheses. So, we should give an answer 'yes' if there is a set of boolean variables which yields 'TRUE' for the corresponding expression.
-
-However, I have a question. The wikipedia article states the following:
-
-SAT is easier if the formulas are restricted to those in disjunctive normal form, that is, they are disjunction (OR) of terms, where each term is a conjunction (AND) of literals (possibly negated variables). Such a formula is indeed satisfiable if and only if at least one of its terms is satisfiable, and a term is satisfiable if and only if it does not contain both x and NOT x for some variable x. This can be checked in polynomial time.
-
-So, basically, if the expression is written in the DNF, this problem is not $NPC$, but simply $P$.
-However, as far as I know, there is a $O(p(n))$ algorithm to transform any given boolean expression into DNF and DNF exists for any boolean expression.
-What am I missing or misinterpreting? There is obviously an error in my logic, because it resulted in making the SAT problem $P$, but not $NP$.
-Thank you!
-
-REPLY [26 votes]: Your claim that you can convert an arbitrary formula to DNF in polynomial time is mistaken.
-You can convert a boolean formula into DNF, but the resulting formula might be very much larger than the original formula—in fact, exponentially so. Since the conversion algorithm must at the very least write out the resulting DNF formula, its running time must be at least the length of the output, and therefore its worst-case running time must be exponential.
-Even if you somehow finesse the issue of writing out the DNF version of the input formula, the algorithm you propose takes a formula of size $x$, converts it into a DNF formula of worst-case size $2^x$, and then calculates satisfiability of the DNF formula in time $P(2^x)$ for some polynomial $P$. This is no better in general than just trying every possible satisfying assignment in the original formula.
-The Wikipedia article on "Disjunctive Normal Form" has an example of a formula that explodes when converted to DNF. But it's an easy example. Consider:
-$$(A_1\lor B_1)\land(A_2\lor B_2)\land\cdots\land(A_n\lor B_n)$$
-This has length proportional to $n$. But in DNF it turns into:
-$$(A_1\land A_2\land\cdots\land A_n)\lor\\
-(B_1\land A_2\land\cdots\land A_n) \lor \\
-(A_1\land B_2\land\cdots\land A_n) \lor \\
-(B_1\land B_2\land\cdots\land A_n) \lor \\
-\vdots\\
-(B_1\land B_2\land\cdots\land B_n)\hphantom{\lor}
-$$
-with $2^n$ clauses.
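-One can watch the blow-up directly (a Python sketch with SymPy's logic module; the sampled values of $n$ are arbitrary, and to_dnf is used without simplification):
-from sympy import symbols
-from sympy.logic.boolalg import And, Or, to_dnf
-for n in [2, 4, 8]:
-    A = symbols(f'a0:{n}')
-    B = symbols(f'b0:{n}')
-    f = And(*[Or(a, b) for a, b in zip(A, B)])
-    print(n, len(to_dnf(f).args))  # prints 4, 16, 256: the 2**n disjuncts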
-You can convert a boolean formula into DNF, but the resulting formula might be very much larger than the original formula—in fact, exponentially so. Since the conversion algorithm must at the very least write out the resulting DNF formula, its running time must be at least the length of the output, and therefore its worst-case running time must be exponential.
-Even if you somehow finesse the issue of writing out the DNF version of the input formula, the algorithm you propose takes a formula of size $x$, converts it into a DNF formula of worst-case size $2^x$, and then calculates satisfiability of the DNF formula in time $P(2^x)$ for some polynomial $P$. This is no better in general than just trying every possible satisfying assignment in the original formula.
-The Wikipedia article on "Disjunctive Normal Form" has an example of a formula that explodes when converted to DNF. But it's an easy example. Consider:
-$$(A_1\lor B_1)\land(A_2\lor B_2)\land\cdots\land(A_n\lor B_n)$$
-This has length proportional to $n$. But in DNF it turns into:
-$$(A_1\land A_2\land\cdots\land A_n)\lor\\
-(B_1\land A_2\land\cdots\land A_n) \lor \\
-(A_1\land B_2\land\cdots\land A_n) \lor \\
-(B_1\land B_2\land\cdots\land A_n) \lor \\
-\vdots\\
-(B_1\land B_2\land\cdots\land B_n)\hphantom{\lor}
-$$
-with $2^n$ clauses.<|endoftext|>
-TITLE: constructive proof of the infinitude of primes
-QUESTION [5 upvotes]: There are infinitely many prime numbers. Euclid gave a constructive proof as follows.
-For any set of prime numbers $\{p_1,\ldots,p_n\}$, the prime factors of $p_1\cdot \ldots \cdot p_n +1$ do not belong to the set $\{p_1,\ldots,p_n\}$.
-I'm wondering if the following can be made into a constructive proof too.
-Let $p_1 = 2$. Then, for $n\geq 2$, define $p_n$ to be a prime number in the interval $(p_{n-1},p_{n-1} + \delta_n]$, where $\delta_n$ is a real number depending only on $n$. Is such a $\delta_n$ known?
-Note that this would be a constructive proof once we find a $\delta_n$, because finding a prime number in $(p_{n-1},p_{n-1}+\delta_n]$ can be done in finite time algorithmically.
-For some reason I believe such a $\delta_n$ is not known.
-In this spirit, is it known that we can't take $\delta_n = 10n$, for example?
-
-REPLY [5 votes]: As noted in the comments, we can take $\delta_n=p_{n-1}$. In fact, there are improvements on that in the literature. But if you want something really easy to prove, you can take $\delta_n$ to be the factorial of $p_{n-1}$, since that gives you an interval which includes Euclid's $p_1\times p_2\times\cdots\times p_{n-1}+1$ and therefore includes a new prime.<|endoftext|>
-TITLE: Which sets are removable for holomorphic functions?
-QUESTION [36 upvotes]: Let $\Omega$ be a domain in $\mathbb C$, and let $\mathscr X$ be some class of functions from $\Omega$ to $\mathbb C$. A set $E\subset \Omega$ is called removable for holomorphic functions of class $\mathscr X$ if the following holds: every function $f\in\mathscr X$ that is holomorphic on $\Omega\setminus E$ is actually holomorphic on $\Omega$, possibly, after being redefined on $E$.
-(An example of the above: $E$ is a line interval, $\mathscr X$ consists of continuous functions. In this case $E$ is removable, which is shown in the answer.)
-It is clear that the larger $\mathscr X$ is, the smaller is the class of removable sets. In the extreme case, if $\mathscr X$ contains all functions $\Omega\to\mathbb C$, there are no nonempty removable sets.
Indeed, if $a\in E$, then $f(z)=\frac{1}{z-a}$ (arbitrarily defined at $z=a$) is holomorphic on $\Omega\setminus E$ but has no holomorphic extension to $\Omega$.
-The problem of describing removable sets is nontrivial in many classes $\mathscr X$ such as
-
-$L^{\infty}(\Omega)$, bounded functions
-$C(\Omega)$, continuous functions
-$C^{\alpha}(\Omega)$, Hölder continuous functions
-$\mathrm{Lip}(\Omega)$, Lipschitz functions
-
-
-Which sets are removable for holomorphic functions in these classes?
-
-REPLY [30 votes]: Line segment is removable for continuous functions
-Let's begin with a concrete result: If $L$ is a line, then $L\cap \Omega$ is removable for $\mathscr X=C(\Omega)$. Let $f\in C(\Omega)$ be holomorphic on $\Omega\setminus L$. By Morera's theorem it suffices to prove that the integral of $f$ along the boundary of every triangle $T\subset \Omega$ is zero. Consider three cases:
-
-$T$ does not meet $L$. This case is clear.
-$T$ has a side that lies on $L$. Let $T_n=T+\delta_n$ where $\delta_n$ are complex numbers such that $\delta_n\to 0$ and $T_n\cap L=\varnothing$. In other words, we translate $T$ by a small amount to move it off the line $L$. Using the uniform continuity of $f$ on compact subsets of $\Omega$, we find that $$\int_{\partial T}f(z)\,dz=\lim_{n\to\infty}\int_{\partial T_n}f(z)\,dz=0$$
-$T$ has points on both sides of $L$. Then $L$ divides $T$ into a triangle and another polygon (triangle or quadrilateral). The quadrilateral can be further divided into two triangles. The integral of $f$ over the boundary of each resulting triangle is $0$ by the above. Add these integrals together to obtain $\int_{\partial T}f(z)\,dz=0$.
-
-Line segment is not removable for bounded functions
-To complement the above result: a line segment is not removable for $\mathscr X=L^{\infty}(\mathbb C)$. Indeed, the function $f=z+z^{-1}$ maps the unit disk $\mathbb D$ bijectively onto $\mathbb C\setminus [-2,2]$. The inverse $g=f^{-1}$ is holomorphic in $\mathbb C\setminus [-2,2]$ and is bounded by $1$, but has no holomorphic extension to $\mathbb C$. (For one thing, $g(z)$ approaches both $i$ and $-i$ as $z\to 0$. For another, a holomorphic extension would have to be constant by Liouville's theorem.)
-General sets
-The case $\mathscr X=C^{\alpha}(\Omega)$, $0\le \alpha<1$, was settled by E. P. Dolzhenko in 1963: a compact set $E$ is removable if and only if its $(1+\alpha)$-dimensional Hausdorff measure is zero.
-As often happens, the integer Hölder exponent turned out to be harder than the fractional ones. It waited until 1979, when Nguyen Xuan Uy proved that a compact set $E$ is removable for holomorphic functions of class $\mathrm{Lip}(\Omega)$ if and only if $E$ has 2-dimensional measure zero.
-The two remaining cases, $L^\infty$ and $C$, are much more involved. Two classical results are:
-
-A set of $1$-dimensional measure $0$ is removable for $L^\infty$ (hence also for $C$).
-A set of Hausdorff dimension greater than $1$ is not removable for $C$ (hence also for $L^\infty$).
-
-But the removable sets for these classes admit no complete characterization in terms of Hausdorff measures. Instead, they are studied via the concepts of analytic capacity $\gamma$ and continuous analytic capacity $\alpha$ (the latter is defined similarly to $\gamma$ but using $C$ instead of $L^{\infty}$.) Removability is easily shown to be equivalent to the vanishing of the appropriate capacity.
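-(For reference: the analytic capacity of a compact set $E$ is $\gamma(E)=\sup|f'(\infty)|$, the supremum being taken over all holomorphic $f:\mathbb C\setminus E\to\mathbb C$ with $\|f\|_\infty\le 1$ and $f(\infty)=0$, where $f'(\infty)=\lim_{z\to\infty}z\,f(z)$.)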
One then proceeds to investigate necessary and sufficient conditions for the vanishing of analytic capacity, its invariance under classes of maps, continuity under nested unions/intersections, etc... The webpage of Xavier Tolsa has a wealth of information on this topic.<|endoftext|>
-TITLE: Complex analysis exercises
-QUESTION [12 upvotes]: These two questions are driving me mad, as I need to help my daughter but I can't remember all this stuff.
-$\,(1)\,\,$ Let $\,p(z)\,,\,q(z)\,$ be two non-constant complex polynomials of the same degree s.t. $$\text{whenever } |z|=1\,\,,\,|p(z)|=|q(z)|$$
-If all the zeros of both $\,p(z)\,,\,q(z)\,$ are within the open unit disk $\,|z|<1\,$ , prove that $$\forall z\in\mathbb{C}\,\,,\,q(z)=\lambda\, p(z)\,,\,\lambda\in\mathbb{C}\,\,\text{a constant}$$
-What've I thought: since the polynomials are of the same degree, I know that $$\lim_{|z|\to\infty}\frac{q(z)}{p(z)}$$ exists finitely, so we can bound $\,\displaystyle{\frac{q(z)}{p(z)}}\,$ say in $\,|z|>1\,$ . Unfortunately, I can't use Liouville's Theorem to get an overall bound as the rational function is not entire within the unit disk...
-$\,(2)\,\,$ Let $\,f(z)\,$ be analytic in the punctured disk $\,\{z\in\mathbb{C}\;|\;0<|z-a|<r\}\,$ ... $>0\,$ , so by the maximum principle both functions $\left(|g(z)|, \left|\frac{1}{g(z)}\right|\right)$ are bounded by $1$ for $ |z| \leq 1$; since the condition $|p(z)|=|q(z)|$ for $|z|=1$ means $|g(z)|=1$ for $|z| \leq 1$, you are done.<|endoftext|>
-TITLE: How to solve this recurrence $T(n) = 2T(n/2) + n\log n$
-QUESTION [19 upvotes]: How can I solve the recurrence relation $T(n) = 2T(n/2) + n\log n$? It almost matches the Master Theorem except for the $n\log n$ part.
-
-REPLY [2 votes]: The given recurrence is best handled by the extended ("case 2") form of the master theorem: here $f(n)=n\log n=\Theta(n^{\log_2 2}\log n)$, and the extended second case gives $T(n)=\Theta(n\log^2 n)$.<|endoftext|>
-TITLE: "Convergent" Integral in Davenport's Multiplicative Number Theory
-QUESTION [5 upvotes]: I am currently learning analytic number theory using Davenport's Multiplicative Number Theory book, and at some point I believe something silly is happening. I have great faith that I am wrong AND that I am right. Here's the thing.
-At some point during Davenport's presentation of Dirichlet's proof of the theorem about primes in arithmetic progressions, using Poisson's summation formula, one ends up with the integral
-$$
-\int_{-\infty}^{\infty} e^{2\pi i N y^2} \, dy.
-$$
-Here $N$ is an integer. This improper integral should, in practice, evaluate to $\frac{1+i}{2\sqrt N}$. Actually, the reason why we need to compute this is because we know that something we need is equal to
-$$
-\lim_{Y,Z \to \infty} \int_{-Y}^{Z+1} e^{2\pi i N t^2} \, dt,
-$$
-where $Y$ and $Z$ take integer values, and because of some Fourier series argument (which is not the purpose of my question), I know that this limit exists. The way Davenport computes this limit is by evaluating the above integral (not the limit one, the one above it). The way it is computed is by first proving that it converges, and then using some identity which I don't have problems with. The argument Davenport uses is that for $Y' > Y > 0$, we have
-$$
-\int_Y^{Y'} e^{2\pi i N y^2} \, dy = \frac 12 \int_{Y^2}^{Y'^2} \frac{e^{2\pi i N z}}{\sqrt z} \, dz
-$$
-after the change of variables $z = y^2$, and this is where I'm stuck, magic happens: supposedly, "after using the second mean value theorem, or by integration by parts, this should have absolute value $O(\frac 1Y)$ as $Y \to \infty$". How is that?
I know both integration by parts and the second mean value theorem, but I have no idea how to get there; naive applications of those two give me no big-oh at all; for instance,
-$$
-\frac 12 \int_{Y^2}^{{Y'}^2} \frac{e^{2 \pi i N z}}{\sqrt z} \, dz
-= \frac 12 \left( \left. \frac{e^{2 \pi i Nz}}{2 \pi i N \sqrt z} \right|_{Y^2}^{{Y'}^2} + \frac 1{4\pi i N} \int_{Y^2}^{{Y'}^2} \frac{e^{2\pi i N z}}{(\sqrt z)^3} \, dz \right)
-$$
-The first term is $O(1/Y)$, but for the second term, if I use the mean value theorem there's a $Y'$ in the numerator, which can get arbitrarily large; what I want is for the integral from $Y$ to $\infty$ to be bounded for $Y$ large enough, so this is really annoying. I could integrate by parts again, but I would still get a $Y'$ in the numerator.
-Another thing that annoys me is that if I write
-$$
-\int_{-\infty}^{\infty} e^{2\pi i N y^2} \, dy = \int_{-\infty}^{\infty} \cos(2\pi N y^2) + i \sin(2 \pi N y^2) \, dy,
-$$
-the real and imaginary parts both don't seem to converge, since the tail doesn't go to zero, but oscillates very fast. I know that functions can oscillate and still be integrable in the improper Riemann sense, but still, this looks suspicious.
-Any explanations? Anything is welcome... thanks in advance!
-
-REPLY [2 votes]: To answer the last part of your question, yes this is one of the unintuitive differences between infinite series and improper integrals. An improper integral can converge because the integrand oscillates quickly, without going to $0$.
-Intuitively, the idea is that there isn't much difference between $\int_0^B f(t)\,dt$ and $\int_0^{B+x} f(t)\,dt$ when $B$ is large enough, regardless of $x$: if $x$ is small then it's obvious that $[B,B+x]$ doesn't have much mass, and if $x$ is large then the oscillation creates a lot of cancellation on $[B,B+x]$. Once we can uniformly bound $\left| \int_B^{B+x} f(t)\,dt \right|$, it's not such a leap to make $\lim\limits_{B\to\infty} \int_0^B f(t)\,dt$ converge.
-More concretely, if you integrate by parts you get
-$$\int_0^B\cos(y^2)\,dy = \int_0^B 2y \cos(y^2) \cdot \frac1{2y}\, dy = \sin(y^2)/2y\, \Big|_0^B+ \int_0^B\frac{\sin(y^2)}{2y^2}\, dy,$$
-and it is very easy to accept that this converges (of course we also get this from the change of variables in the question).<|endoftext|>
-TITLE: A complex map with "bounded" derivative is injective
-QUESTION [6 upvotes]: The exercise I am trying to solve states: "Let $\,f\,$ be analytic in $\,D:=\{z\in\mathbb{C}\;|\;|z|<1\}\,$ , and such that $$|f'(z)-1|<\frac{1}{2}\,\,\,\forall\,z\in D$$
-Prove that $\,f\,$ is $\,1-1\,$ in $\,D\,$."
-My thoughts: The condition $$|f'(z)-1|<\frac{1}{2}\,\,\,\forall\,z\in D$$
-means the range of the analytic function $\,f'\,$ misses lots of points on the complex plane, so applying Picard's Theorem (or some extension of Liouville's) we get that $\,f'(z)=w=\,$ constant, from which it follows that $\,f\,$ is linear on $\,D\,$ and thus $\,1-1\,$ there.
-Doubts: $\,\,(i)\,\,$ This exercise is meant to be from an introductory first course in complex functions, so Picard's theorem seems overkill here...yet I can't see how to avoid it.
-$\,\,(ii)\,\,$ Even assuming we must use Picard's Theorem, the versions of it I know always talk of "entire functions", yet our function $\,f\,$ above is analytic only in the open unit disk. Is this a problem? Perhaps it is and thus something else must be used...?
-Any help will be much appreciated.
-
-REPLY [10 votes]: There is a very similar answer here (feel free to ignore the question if it looks intimidating). The basic idea is to use the triangle inequality applied to $f(z)-z$ to show that $|f(x)-f(y)| > 0$ for any distinct $x,y$. More precisely your assumption allows us to show $|f(x)-f(y)| \ge \frac12 |x-y|$.<|endoftext|>
-TITLE: Proof about $z\cot z=1-2\sum_{k\ge1}z^2/(k^2\pi^2-z^2)$
-QUESTION [5 upvotes]: In Concrete Mathematics, it is said that
-$$z\cot z=1-2\sum_{k\ge1}\frac{z^2}{k^2\pi^2-z^2}\tag1$$
-and proved in EXERCISE 6.73
-$$z\cot z=\frac z{2^n}\cot\frac z{2^n}-\frac z{2^n}\tan\frac z{2^n}+\sum_{k=1}^{2^{n-1}-1}\frac z{2^n}\left(\cot\frac{z+k\pi}{2^n}+\cot\frac{z-k\pi}{2^n}\right)$$
-The trigonometric identity is not hard, but I cannot understand the rest:
-
-It can be shown that term-by-term passage to the limit is justified, hence equation (1) is valid.
-
-How can we conclude that? Thanks for the help!
-
-REPLY [4 votes]: This identity is also proven in this answer, but the limit of the trigonometric identity is a cute trick, too.
-Concrete Mathematics claim:
-For the limit claimed in Concrete Mathematics, we need a few things.
-First, by inspecting the graph of $\frac{1-x\cot(x)}{x^2}$ for $-\frac{3\pi}{4}\le x\le\frac{3\pi}{4}$, we have
-$$
-\left|\frac1x-\cot(x)\right|\le|x|\tag{1}
-$$
-Next, the Mean Value Theorem says
-$$
-\begin{align}
-|\cot(\delta+x)+\cot(\delta-x)|
-&=|\cot(x+\delta)-\cot(x-\delta)|\\
-&\le2\delta\sup_{[x-\delta,x+\delta]}\csc^2(\xi)\\
-&\le\color{#C00000}{8\delta\,\csc^2(x)}\\
-&\le\color{#C00000}{2\pi^2\delta/x^2}\tag{2}
-\end{align}
-$$
-if $\color{#C00000}{2\delta\le|x|\le\frac{\pi}{2}}$.
-Finally, note that since $0\le k< 2^{n-1}$, $0\le\frac{k\pi}{2^n}<\frac{\pi}{2}$.
-Using $(1)$, we get
-$$
-\begin{align}
-&\left|\frac{z}{2^n}\left(\cot\left(\frac{z+k\pi}{2^n}\right)+\cot\left(\frac{z-k\pi}{2^n}\right)\right)-\left(\frac{z}{z+k\pi}+\frac{z}{z-k\pi}\right)\right|\\
-&\le2\left|\frac{z}{2^n}\right|\frac{|z|+k\pi}{2^n}\tag{3}
-\end{align}
-$$
-Using $(2)$, we get, for $2z\le k\pi$,
-$$
-\begin{align}
-\left|\frac{z}{2^n}\left(\cot\left(\frac{z+k\pi}{2^n}\right)+\cot\left(\frac{z-k\pi}{2^n}\right)\right)\right|
-&\le2\pi^2\left|\frac{z^2}{2^{2n}}\right|\left(\frac{2^n}{k\pi}\right)^2\\
-&\le2\pi^2\left(\frac{z}{k\pi}\right)^2\tag{4}
-\end{align}
-$$
-Estimate $(3)$ is used to control the difference between the series for small $k$, and $(4)$ to control the remainder in the sum of the cotangents for large $k$.
-Pick an $\epsilon>0$, and find $m$ large enough so that $2z\le m\pi$ and
-$$
-\sum_{k=m}^\infty\frac{1}{k^2}\le\epsilon\tag{5}
-$$
-Then we have the following estimate for the tail of the sum
-$$
-\sum_{k=m}^\infty\frac{z^2}{k^2\pi^2-z^2}\le\frac43z^2\epsilon\tag{6}
-$$
-Combining $(4)$ and $(5)$ yields
-$$
-\sum_{k=m}^{2^{n-1}-1}\left|\frac{z}{2^n}\left(\cot\left(\frac{z+k\pi}{2^n}\right)+\cot\left(\frac{z-k\pi}{2^n}\right)\right)\right|\le2z^2\epsilon\tag{7}
-$$
-Summing $(3)$ gives
-$$
-\begin{align}
-&\sum_{k=1}^{m-1}\left|\frac{z}{2^n}\left(\cot\left(\frac{z+k\pi}{2^n}\right)+\cot\left(\frac{z-k\pi}{2^n}\right)\right)-\left(\frac{z}{z+k\pi}+\frac{z}{z-k\pi}\right)\right|\\
-&\le2\left|\frac{z}{2^n}\right|\frac{m|z|+m^2\pi/2}{2^n}\tag{8}
-\end{align}
-$$
-Just choose $n$ big enough so that $(8)$ and $\displaystyle\left|\frac z{2^n}\cot\frac z{2^n}-\frac z{2^n}\tan\frac z{2^n}-1\right|$ are each less than $\epsilon$ and we get that the term-by-term absolute difference is less than
-$$
-\left(\frac{10}{3}z^2+2\right)\epsilon\tag{9}
-$$<|endoftext|>
-TITLE: An example of a norm which can't be generated by an inner product
-QUESTION [28 upvotes]: I realize that every inner product defined on a vector space can give rise to a norm on that space. The converse, apparently, is not true. I'd like an example of a norm which no inner product can generate.
-
-REPLY [2 votes]: The norm on $C[a,b]$ defined by $\|f\|=\sup\{|f(t)| : t \in [a,b]\}$ does not satisfy the parallelogram law. Take $f(t)=1$ and $g(t)=\frac{t-a}{b-a}$, or $f(t)=\max\{\sin t,\, 0\}$ and $g(t)=\max\{-\sin t,\, 0\}$ on $[0, 2\pi]$.<|endoftext|>
-TITLE: How to calculate a linear transformation given its effect on some vectors
-QUESTION [7 upvotes]: I'm not sure if my question is worded very well, but I'm having trouble understanding how to tackle this problem.
-
-Let $T\colon\mathbb{R}^3\to\mathbb{R}^2$ be the linear transformation such that $T(1,-1,2)=(-3,1)$ and $T(3,-1,1) = (-1,2)$. Find $T(9,-1,10)$.
-
-Thanks
-
-REPLY [11 votes]: Recall that a linear transformation is, well, linear. If you know the value at vectors $\mathbf{v}_1$ and $\mathbf{v}_2$, then you can compute the value at any linear combination of those two vectors, by using linearity:
-$$T(\alpha\mathbf{v}_1+\beta\mathbf{v}_2) = \alpha T(\mathbf{v}_1) + \beta T(\mathbf{v}_2).$$
-Here, for example, you know the value of $T$ at $(1,-1,2)$ and at $(3,-1,1)$. So, for instance, you can easily calculate the value of $T$ at $(5,-3,5) = 2(1,-1,2)+(3,-1,1)$:
-$$\begin{align*}
-T(5,-3,5) &= T\Bigl( 2(1,-1,2) + (3,-1,1)\Bigr)\\
-&= 2T(1,-1,2) + T(3,-1,1)\\
-&= 2(-3,1) + (-1,2)\\
-&= (-7,4).\end{align*}$$
-So... if you can find a way of writing $(9,-1,10)$ as a linear combination of $(1,-1,2)$ and $(3,-1,1)$, then you'll be set.
-Added. Unfortunately, this cannot be done here. Namely, there are no scalars $a$ and $b$ such that $(9,-1,10) = a(1,-1,2) + b(3,-1,1)$. Note that you would need $a+3b=9$ and $-a-b=-1$. Adding these two you get $2b = 8$, so $b=4$; then $a=-3$, but then the last coordinate does not work out:
-$$-3(1,-1,2) + 4(3,-1,1) = (-3+12, 3-4, -6+4) = (9,-1,-2).$$
-So the information you have does not determine the value of $T$ at $(9,-1,10)$. Given any vector $(r,s)$ in $\mathbb{R}^2$, you can find a linear transformation $U$ that agrees with $T$ on $(1,-1,2)$, on $(3,-1,1)$, and that sends $(9,-1,10)$ to $(r,s)$.
-That suggests to me that you have either miscopied the problem, or else that whoever assigned the problem made a mistake.
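-As a quick numerical restatement of the computation above (a small Python/numpy sketch; it contains nothing beyond the arithmetic already carried out):
-
-    import numpy as np
-
-    # solve a*(1,-1,2) + b*(3,-1,1) = (9,-1,10) using the first two coordinates
-    A2 = np.array([[1.0, 3.0], [-1.0, -1.0]])
-    a, b = np.linalg.solve(A2, np.array([9.0, -1.0]))
-    print(a, b)           # -3.0 4.0
-    print(2 * a + 1 * b)  # third coordinate: -2.0, but we would need 10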
One possibility is that the first vector should have been $(1,-1,-2)$, as then you do get $(9,-1,10)$.<|endoftext|>
-TITLE: Topology needed for differential geometry
-QUESTION [16 upvotes]: I am a physics undergrad, and need to study differential geometry ASAP to supplement my studies on solitons and instantons. How much topology do I need to know? I know some basic concepts, from reading on the internet, about topological spaces, connectedness, compactness, metrics, and quotient Hausdorff spaces. Do I need to go deeper? Also, could you suggest some chapters from topology textbooks to brush up on this knowledge? Could you please also suggest a good differential geometry book that covers the topics in differential geometry that are needed in physics in sufficient detail (without too much emphasis on mathematical rigour)? I have heard of the following textbook authors: Nakahara, Fecko, Spivak. Would you recommend these?
-
-REPLY [16 votes]: You shouldn't need much. Almost all you need to know about topology (especially of the point-set variety) should have been covered in a course in advanced calculus. That is to say, you really need to know about "stuff" in $\mathbb{R}^n$. (The one main exception is when you study instantons and some existence results are topological in nature; for that you will need to know a little bit about fundamental groups and homotopy.) The reason is that differential topology and differential geometry study objects which locally look like Euclidean spaces. This dramatically rules out lots of the more esoteric examples that point-set topologists and functional analysts like to consider. So most introductory books in differential geometry will quickly sketch some of the basic topological facts you will need to get going.
-In terms of topology needed for differential geometry, one of the texts I highly recommend would be
-
-J.M. Lee's Introduction to Smooth Manifolds
-
-It is quite mathematical and quite advanced, and covers large chunks of what you will call differential geometry also. One can complement that with his Riemannian Manifolds to get some Riemannian geometry also.
-But since you are asking from the point of view of a Physics Undergrad, perhaps better for you would be to start with either (or both of)
-
-Nakahara's Geometry, Topology, and Physics
-Choquet-Bruhat's Analysis, Manifolds, and Physics: Vol. 1 and Vol. 2
-
-and follow-up with
-
-Greg Naber's two book series: Topology, Geometry, and Gauge Fields: Foundations and Topology, Geometry, and Gauge Fields: Interactions<|endoftext|>
-TITLE: Characterizing all ring homomorphisms $C[0,1]\to\mathbb{R}$.
-QUESTION [6 upvotes]: This is something I've been trying to work out this evening.
-
-Let $R$ be the ring of continuous real-valued functions on $[0,1]$ with pointwise addition and multiplication. For $t\in [0,1]$, the map $\phi_t\colon f\to f(t)$ is a ring homomorphism of $R$ to $\mathbb{R}$. I'm trying to show that every ring homomorphism of $R\to\mathbb{R}$ has this form.
-
-Suppose otherwise, that there is some $\phi$ with $\phi\neq\phi_t$ for every $t$; thus for each $t$ there is some $f_t\in R$ such that $\phi(f_t)\neq \phi_t(f_t)=f_t(t)$. Define $g_t=f_t-\phi(f_t)1\in R$. Here $\phi(f_t)1$ is the constant function sending $[0,1]$ to $\phi(f_t)$. Then $g_t(t)\neq 0$. My first small question is: why does $\phi(g_t)=0$? It seems only that $\phi(g_t)=\phi(f_t)-\phi(\phi(f_t)1)$.
-I would like to conclude that there are only finitely many $t_i$ such that $g(x)=\sum g_{t_i}^2(x)\neq 0$ for all $x$.
Then $g^{-1}=1/g(x)\in R$, but $\phi(g)=0$, contradicting the fact that homomorphisms map units to units. How can we be sure there are only finitely many $g_{t_i}$ such that the sum of their squares is never $0$? Thanks.
-
-REPLY [6 votes]: Use compactness. For each $t$ the set $\{x : g_t(x) \ne 0\}$ is open and contains $t$, so the union of all these sets is $[0,1]$, meaning they form an open cover.<|endoftext|>
-TITLE: Numbers between real numbers
-QUESTION [5 upvotes]: I wonder if there can be numbers (in some extended theory) such that every real number is either smaller or larger than this number, but no real number is equal to it.
-Is there some extension of the numbers which allows that? Under what conditions (axioms etc.) is there no such number?
-
-REPLY [3 votes]: Consider the field $\mathbb{R}(x)$ of all (formal) rational functions in one variable with real coefficients. While this is not an ordered field, it is an orderable field -- it is possible to define an ordering $<$ on rational functions that is consistent with the usual laws of arithmetic.
-For any ordering $<$ of $\mathbb{R}(x)$, we can define sets $L = \{ a \in \mathbb{R} \mid a < x\}$ and $R = \{ a \in \mathbb{R} \mid a > x\}$, and we have $\mathbb{R} = L \cup R$ -- under this ordering, every real number is either less than or greater than the polynomial $x$.
-It turns out the ordering $<$ is completely determined by $L$ and $R$, and conversely each admissible way to choose $L$ and $R$ corresponds to an ordering of $\mathbb{R}(x)$.
-The complete list of orderings is:
-
-The ordering "$+\infty$" - $x$ is larger than every real number
-The ordering "$-\infty$" - $x$ is smaller than every real number
-The ordering "$a^+$" - $x$ is infinitesimally larger than $a$
-The ordering "$a^-$" - $x$ is infinitesimally smaller than $a$
-
-The labels I've chosen for the orderings refer to "where" $x$ is placed in relation to the real line.
-Some good buzzwords that relate to this sort of topic are:
-
-Real closed field
-Formally real field
-Real algebraic geometry
-Semi-algebraic geometry
-
-There is an easy way to write down a first-order theory whose models are examples of the sort of number system you ask for. For example,
-
-Start with the language of ordered fields
-Add a new constant symbol $\varepsilon$
-Add in all of the ordered field axioms
-Add in one axiom $0 < \varepsilon$
-For every positive integer $n$, add in one axiom $\varepsilon < 1/n$
-
-Every model of this theory will have a number $\varepsilon$ with the property that it is larger than every non-positive real number, and smaller than every positive real number.<|endoftext|>
-TITLE: When are two norms equivalent on a Banach space?
-QUESTION [6 upvotes]: I'm working on an exercise from functional analysis.
-Let $E$ be a vector space and $\|\cdot\|_1$ and $\|\cdot\|_2$ be two complete norms on $E$. Now suppose that $E$ satisfies the following property:
-$\bullet$ if $(x_n)$ is a sequence in $E$ and $x,y\in E$ such that $\|x_n-x\|_1\to 0$ and $\|x_n-y\|_2\to 0$, then $x=y$.
-Now we want to show that the norms $\|\cdot\|_1$ and $\|\cdot\|_2$ are equivalent.
-My idea is as follows:
-Suppose that for every $n>0$ there is an element $x_n\in E$ such that $\|x_n\|_1>n\|x_n\|_2$, and consider $(\frac{x_n}{\|x_n\|_1})_{n\geq 1}$. Clearly, $(\frac{x_n}{\|x_n\|_1})_{n\geq 1}$ converges to $0$ in $\|\cdot\|_2$. However, I can't get a contradiction from this. Maybe my idea is wrong.
-In fact, I don't even know how to show that a Cauchy sequence in norm $\|\cdot\|_1$ is also a Cauchy sequence in norm $\|\cdot\|_2$.
-Can anyone give me some hints or a counterexample? Thank you very much.
-
-REPLY [6 votes]: Hint: Define $\Vert \cdot \Vert_3 := \Vert \cdot \Vert_1 + \Vert \cdot \Vert_2$. Show that $\Vert \cdot \Vert_3$ is a complete norm on $E$. Now use the fact that the map $f: (E, \Vert \cdot \Vert_3) \to (E,\Vert \cdot \Vert_1)$ with $f(x) = x$ for all $x\in E$ is a continuous bijection between Banach spaces.<|endoftext|>
-TITLE: Question about a proof in Evans
-QUESTION [8 upvotes]: On page 57 of Partial Differential Equations by Lawrence C. Evans, he proves the maximum principle for the Cauchy problem for the heat equation, i.e. (I quote):
-Suppose $u\in C^2_1(\mathbb{R}^n\times (0,T])\cap C(\mathbb{R}^n\times [0,T])$ solves $u_t-\Delta u= 0$ in $\mathbb{R}^n\times (0,T)$ and $u=g$ on $\mathbb{R}^n\times \{t=0\}$. Moreover, $u$ satisfies the growth estimate
-$$u(x,t)\le Ae^{a|x|^2}$$
-for $x\in\mathbb{R}^n$, $0\le t\le T$, for constants $A,a>0$. Then
-$$\sup_{\mathbb{R}^n\times [0,T]}u = \sup_{\mathbb{R}^n}g$$
-In the proof they define $v(x,t):=u(x,t)-\frac{\mu}{(T+\epsilon -t)^\frac{n}{2}}\exp{\frac{|x-y|^2}{4(T+\epsilon -t)}}$
-The proof consists of several steps. First they show for $4aT<1$ that
-
-$\max_{\overline{U_T}} v= \max_{\Gamma_T}v$, where $U_T:=B^0(y,r)\times (0,T]$ for fixed $r>0$.
-If $x\in \mathbb{R}^n$ then $v(x,0)\le g(x)$
-
-Now in equation $(29)$ they say: for $r$ selected sufficiently large, we have $v(x,t)\le A\exp\left(a(|y|+r)^2\right)-\mu (4(a+\gamma))^{\frac{n}{2}}\exp\left((a+\gamma)r^2\right)\le \sup_{\mathbb{R}^n}g$. Why is this all less than or equal to the supremum of $g$ for large $r$?
-And why can we conclude, with all these facts, that $v(y,t)\le \sup_{\mathbb{R}^n} g$ for all $y\in \mathbb{R}^n$ and $0\le t\le T$?
-
-REPLY [7 votes]: I guess I was able to solve it by myself. All you have to do is to write the expression in a nice way:
-$$v(x,t)\le A\exp\left(a(|y|+r)^2\right)-\mu (4(a+\gamma))^{\frac{n}{2}}\exp\left((a+\gamma)r^2\right)=\exp\left((a+\gamma)r^2\right)\left[-\mu (4(a+\gamma))^{\frac{n}{2}}+A\exp\left(-\gamma r^2+2ar|y| + a|y|^2\right)\right]$$
-This converges to $-\infty$ as $r\to \infty$ and the conclusion follows.
-We already know that the max is attained at the "boundary" $\Gamma_T$, and we have found a bound of $v$ by (the supremum of) $g$ on $\Gamma_T$.<|endoftext|>
-TITLE: Chromatic Number Identity Involving Edges
-QUESTION [7 upvotes]: I'm trying to prove the following:
-
-Let $G$ be a simple graph with $m$ edges. Show that $\chi(G)\leq \frac{1}{2}+\sqrt{2m+\frac{1}{4}}.$
-
-A very minute bit of algebraic manipulation shows that this is equivalent to proving $$\chi(G)(\chi(G)-1)\leq 2m.$$ From here I am a bit stuck. Could someone suggest a direction to head in?
-Please no full solutions, just hints.
-
-REPLY [7 votes]: HINT: A little more manipulation turns it into $$m\ge\binom{\chi(G)}2\;,$$ which can be understood as saying that there must be at least as many edges as there are pairs of colors in a minimal coloring.
-
-REPLY [4 votes]: The fact that $G$ has chromatic number $\chi(G)$ means that you can partition the vertices of the graph into $\chi(G)$ disjoint classes such that vertices in the same class are not adjacent.
-Can you count the number of edges with respect to such a partition?<|endoftext|>
-TITLE: $\int_{0}^{\infty} \frac{e^{-x} \sin(x)}{x} dx$ Evaluate Integral
-QUESTION [28 upvotes]: Compute the following integral:
-$$\int_{0}^{\infty} \frac{e^{-x} \sin(x)}{x} dx$$
-Any hint or suggestion is welcome.
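-For what it's worth, a quick numerical check (a hypothetical scipy sketch; quad handles the infinite endpoint internally) points at a familiar constant:
-
-    import numpy as np
-    from scipy.integrate import quad
-
-    # numerical sanity check only; not a derivation
-    val, err = quad(lambda x: np.exp(-x) * np.sin(x) / x, 0, np.inf)
-    print(val, np.pi / 4)  # both about 0.785398...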
-
-REPLY [3 votes]: $$\int_0^{\infty} \frac{e^{-x}\sin x}{x}\,dx=\int_0^{\infty}\int_0^{\infty} e^{-x}\sin x \,e^{-xy}\,dy\,dx=\int_0^{\infty} \int_0^{\infty} e^{-x(1+y)}\sin x\,dx\,dy$$
-From integration by parts or otherwise, one can show that:
-$$\int_0^{\infty}e^{-x(1+y)}\sin x\,dx=\frac{1}{1+(1+y)^2}$$
-Hence
-$$\int_0^{\infty} \int_0^{\infty} e^{-x(1+y)}\sin x\,dx\,dy=\int_0^{\infty} \frac{dy}{1+(1+y)^2}=\Big[\arctan(1+y)\Big]_0^{\infty}=\boxed{\dfrac{\pi}{4}}$$<|endoftext|>
-TITLE: Automorphisms of the field of complex numbers
-QUESTION [5 upvotes]: Using AC one may prove that there are $2^{\mathfrak{c}}$ field automorphisms of the field $\mathbb{C}$. Certainly, only the identity map is $\mathbb{C}$-linear ($\mathbb{C}$-homogeneous) among them, but are all these automorphisms $\mathbb{R}$-linear?
-
-REPLY [4 votes]: An automorphism of $\mathbb C$ must take $i$ into $i$ or $-i$. Thus an automorphism that is $\mathbb R$-linear must be the identity or conjugation.<|endoftext|>
-TITLE: What is the length of a maximal deranged sequence of permutations
-QUESTION [6 upvotes]: We were playing a home-made scribblish and were trying to figure out how to exchange papers. During each round, you'll trade $k$ times, and each time you need to give your current paper to someone who has never had it, and you need to receive a paper that you've never had. There are $n$ papers. For example, if everyone passes their paper to the left, then you can trade $n-1$ times, and on the $n$th trade everyone gets their papers back. Clearly $k < n$ no matter how you trade.
-It is suboptimal for one player to always trade with the same player, so we want to use different permutations each time. When writing a webpage to choose permutations randomly, I ran into a theoretical problem: if we don't know $k$, can we still generate a good sequence of shuffles? As long as any good sequence of shuffles can be extended to a maximal sequence, we are ok. For $n \leq 6$ this is true. Is it true in general?
-
-For $n$ a positive integer, call a sequence of permutations $g_i \in S_n$ deranged if $\prod_{i=a}^b g_i$ has no fixed points on $\{1,\dots,n\}$ for any $1 \leq a\leq b \leq k$, where $k$ is the length of the sequence. Must every maximal deranged sequence have $k=n-1$?
-
-A deranged sequence of length 1 is just called a derangement.
-We partially order deranged sequences by $a \leq b$ if $a$ is an initial segment of $b$, so that $(1,2,3,4,5) \leq (1,2,3,4,5), (1,2,3,4,5) \leq (1,2,3,4,5), (1,2,3,4,5), (1,3,5,2,4)$. Hence a deranged sequence $g_1, \dots, g_k\in S_n$ is maximal iff for every $g_{k+1} \in S_n$, the sequence $g_1, \dots, g_k, g_{k+1}$ is not deranged.
-Examples:
-In order to make the problem more symmetrical, it can be helpful to append $g_{k+1} = (g_1 \cdots g_k)^{-1}$ to the sequence. This corresponds to a final step of "handing everyone back their original paper." Then every consecutive $k-1$ subsequence of every cyclic permutation has the property that it is deranged. Thus these deranged "cycles" of length $k+1$ are acted on both by $S_n$ (relabeling the people) and by $C_{k+1}$ (cyclic permutations). This can help reduce the number of truly distinct examples.
-For two players, obviously you just pass it to each other and it is over.
-
-(1,2) [ add (1,2) to complete the cycle ]
-
-For three players, you can either pass clockwise twice or counterclockwise twice.
-
-(1,2,3), (1,2,3) [ add (1,2,3) to complete the cycle ]
-(1,3,2), (1,3,2) [ this is the previous one with players 2 and 3 swapped ]
-
-For four players, there are 24 deranged sequences, but after completing them to deranged cycles, there are only 3 distinct orbits under $S_n$ and $C_{k+1}$. Notice that $k+1=n$ in each case:
-
-(1,2)(3,4), (1,3)(2,4), (1,2)(3,4), (1,3)(2,4) -- trade within, across, within, across
-(1,2)(3,4), (1,3,2,4), (1,2)(3,4), (1,4,2,3) -- across, left, across, right
-(1,2,3,4), (1,2,3,4), (1,2,3,4), (1,2,3,4) -- four lefts
-
-For five players there are 1344 deranged sequences, and once completed they fall into 4 orbits. In each case $n=k+1$.
-
-(1,2)(3,4,5), (1,3)(2,4,5), (1,2,3,4,5), (1,2,4,5,3), (1,4,5,2,3)
-(1,2)(3,4,5), (1,3)(2,4,5), (1,4,3,5,2), (1,3,5,4,2), (1,3,4,2,5)
-(1,2,3,4,5), (1,2,3,4,5), (1,2,3,4,5), (1,2,3,4,5), (1,2,3,4,5) -- five lefts
-(1,2,3,4,5), (1,2,3,4,5), (1,3,5,2,4), (1,5,4,3,2), (1,3,5,2,4) -- left, left, double-left, right, double-left
-
-For six players, the number of possibilities seems to explode (1128960 deranged sequences, 362 orbits of deranged cycles), but in each case $n=k+1$.
-The sequence OEIS:A000479 may be relevant.
-
-REPLY [6 votes]: Form a $k\times n$ matrix
-$$A=\left[\matrix{a_{11}&a_{12}&\dots&a_{1n}\\
-a_{21}&a_{22}&\dots&a_{2n}\\
-\vdots&\vdots&\ddots&\vdots\\
-a_{k1}&a_{k2}&\dots&a_{kn}}\right]$$
-as follows: $a_{ij}$ is the number of the person holding paper $j$ after $i-1$ rounds. For example, your first deranged sequence for four players produces the matrix
-$$\left[\matrix{1&2&3&4\\
-2&1&4&3\\
-4&3&2&1\\
-3&4&1&2}\right]\;.$$
-Each row of $A$ must be a permutation of $1,\dots,n$, and no column of $A$ may contain any number more than once, so $A$ is a $k\times n$ Latin rectangle, which can always be extended to a Latin square. Thus, every maximal deranged sequence has length $n-1$.
-Added: Since OEIS A000479 counts the number of such Latin squares with first row $[\matrix{1&\dots&n}]$, it is indeed relevant: it counts the number of unreduced maximal deranged sequences. Dividing by $(n-1)!$ gives the number of reduced Latin squares of order $n$. According to Wikipedia, this number is known only for $n\le 11$, so you're unlikely to get any nice expression for it.<|endoftext|>
-TITLE: Prove that $fg\in L^r(\Omega)$ if $f\in L^p(\Omega),g\in L^q(\Omega)$, and $\frac1 p+\frac1 q=\frac1 r$
-QUESTION [16 upvotes]: Can anyone give me a hint for proving the following:
-Let $\Omega$ be a measure space. Assume $f \in L^p(\Omega)$ and $g \in L^q(\Omega)$ with $1 \leq p, q \leq \infty$ and $\frac1p + \frac1q \leq 1$. Prove that $fg \in L^r(\Omega)$ with $\frac1r = \frac1p + \frac1q$.
-Note: One should be able to use (the standard) Hölder inequality. Notice that if you have $\frac1p + \frac1q = 1$ you recover the former result.
-
-REPLY [21 votes]: Using the standard Hölder inequality:
-$$
-\|fg\|_r^r=\|f^rg^r\|_1\le\|f^r\|_{p/r}\|g^r\|_{q/r}=\|f\|_p^r\|g\|_q^r
-$$
-since $\frac{r}{p}+\frac{r}{q}=1$.<|endoftext|>
-TITLE: Direct proof that, in a commutative ring with only one prime ideal $P$, every element of $P$ is nilpotent
-QUESTION [6 upvotes]: Let $R$ be a commutative ring with identity such that $R$ has exactly one prime ideal $P$.
-Prove: all elements in $P$ are nilpotent.
-
-While doing this problem, I used the fact that "the nilradical of $R$ is equal to the intersection of all prime ideals of $R$" (in this case, the intersection of all prime ideals $=P$), and with it I can solve this problem.
-However, it seems to me that this fact is overkill for this problem, because we have the strong condition that there is only one prime ideal.
-I am here to ask if there is a simpler and more direct approach (which I have overlooked) to solving this problem.
-
-Let me put it in another way: is there any proof without using Zorn's lemma?
-
-REPLY [5 votes]: When reading this post I began to wonder if $Nil(R)=\bigcap\{ P\mid P\text{ prime}\}$ was equivalent to choice, but that led me to this interesting post.
-If I read it correctly, the nilradical equation is not equivalent to AC!
-Hope this helps!<|endoftext|>
-TITLE: comparing distribution of two data sets
-QUESTION [11 upvotes]: I need to compare the (unknown) distribution of one data set to the (unknown) distribution of another. In particular, I want to check for equality of the two distributions.
-What are some statistical tests for this?
-
-REPLY [12 votes]: A huge subject! The standard all-purpose test is Kolmogorov-Smirnov.<|endoftext|>
-TITLE: Why do mathematicians care so much about zeta functions?
-QUESTION [15 upvotes]: Why is it that so many people care so much about zeta functions? Why do people write books and books specifically about the theory of Riemann Zeta functions?
-What is its purpose? Is it just to develop small areas of pure mathematics?
-
-REPLY [5 votes]: People write books about the theory of Riemann Zeta functions because there is sufficiently developed theory and enough applications to warrant a dedicated book, much the same way that people write books specifically about elliptic curves or Schrodinger's equation.
-As for the research interest in the Riemann Hypothesis, this MO thread gathers some of its consequences and gives an idea of the parts of mathematics to which it can apply.
-And as for more "popular" interest, here's a quote from one of the aforementioned books, Edwards' Riemann's Zeta Function:
-
-The experience of Riemann's successors with the Riemann hypothesis has been the same as Riemann's -- they also consider its truth "very likely" and they also have been unable to prove it. ... the attempt to solve this problem has occupied the best efforts of many of the best mathematicians of the twentieth century. It is now unquestionably the most celebrated problem in mathematics and it continues to attract the attention of the best mathematicians, not only because it has gone unsolved for so long but also because it appears tantalizingly vulnerable and because its solution would probably bring light to new techniques of far-reaching importance.
-
-That's from 1974, and is probably even more applicable today.<|endoftext|>
-TITLE: Main differences between analytic number theory and algebraic number theory
-QUESTION [22 upvotes]: What are some of the big differences between analytic number theory and algebraic number theory?
-Well, maybe it is just that I see too many similarities between the two subjects, while I don't see that much analysis in analytic number theory.
-
-REPLY [3 votes]: The main difference is that in algebraic number theory one typically considers questions with answers that are given by exact formulas, whereas in analytic number theory one looks for good approximations of quantities we don't expect to find a formula for.<|endoftext|>
-TITLE: Words that agree on the count of all subwords of length $\leq k$
-QUESTION [12 upvotes]: I'm working with a two-letter alphabet $\{0,1\}$, and I'm talking about generalized subwords, i.e.
letters don't need to be adjacent; e.g. $|01010|_{00} = 3$.
-For example, the two words $u=1001$ and $v=0110$ agree on all subwords of length $\leq 2$ ($1$, $0$, $10$, $01$, $11$ and $00$), and are both of length $4$, which happens to be the shortest length where this can happen with $k=2$. I've found the minimal lengths $\{2,4,7,12,16,22\}$ for $k=\{1,2,3,4,5,6\}$ respectively, through brute force.
-What I'm basically looking for is, for two words $|u|=|v|=n$, what bound on $k$ is needed so that if $u$ and $v$ agree on all subwords of length $\leq k$, then $u=v$.
-I'm aiming for a $\mathcal{O}(\log n)$ bound, so I've tried looking for an inductive proof on $k$ where the length of the word becomes a multiple after each iteration.
-Also, as a side question: so far, experimentally, I've found that if two words agree on all subwords of length $k$, they also agree on all of length $\leq k$, but I can't quite find a proof or counter-proof of this.
-
-REPLY [5 votes]: It is a natural problem of combinatorics on words which has already been studied.
-We don't actually know a good asymptotic equivalent for $k(n)=1,3,6,11,\dots$. We have the upper bound $k(n)=O(\sqrt n)$, or more precisely:
-$$k(n)\le 5+\left\lfloor\tfrac{16}{7}\sqrt{n}\right\rfloor\qquad\text{(Krasikov1997)}$$
-$$k(n)\le 3+2\left\lfloor\sqrt{n\log 2}\right\rfloor\qquad\text{(Krasikov2000)}$$
-(Krasikov2000) does not appear to be online (it is cited in (Ligeti2007)), and for some reason this better upper bound is not cited in (Dudik2002), so caveat emptor.
-There is an easy $\Omega(\log n)$ lower bound from considering e.g. Thue-Morse words and their complement, but it is not sharp. In fact, $k(n)$ grows faster than any polynomial in $\log n$, so the bound you were hoping for is not satisfied:
-$$\log k(n)=\Omega(\sqrt{\log n})$$
-See (Dudik2002).
-In a way these two bounds can be reconciled to give the admittedly vague equivalent
-$$\log \log k(n)=\Theta(\log \log n)$$
-but this is still precise enough to reject logarithmic growth, which would be $\Theta(\log \log \log n)$.
-
-References:
-
-(Krasikov1997) Krasikov, Roditty, On a Reconstruction Problem for Sequences. J. Combin. Theory Ser. A, 77 (2) (1997), pp. 344-348.
-(Krasikov2000) Foster, Krasikov, An improvement of a Borwein-Erdélyi-Kós result. Methods Appl. Anal., 7 (4) (2000), pp. 605-614.
-(Dudik2002) Dudík, Schulman, Reconstruction from subsequences. J. Combin. Theory Ser. A, 103 (2) (2003), pp. 337-348.
-(Ligeti2007) Ligeti, Combinatorics on words and its applications. Doctoral thesis.<|endoftext|>
-TITLE: Proving that $A_n$ is the only proper nontrivial normal subgroup of $S_n$, $n\geq 5$
-QUESTION [14 upvotes]: There is a famous theorem stating that:
-
-For $n≥5$, $A_n$ is the only proper nontrivial normal subgroup of $S_n$.
-
-For the proof, we start by assuming a subgroup $N$ of $S_n$ with $1≠N⊲S_n$. We proceed until, at the last part of the proof, we assume $N∩A_n=\{1\}$. This assumption should lead to a contradiction with the normality of $N$ in $S_n$. There, we get $N=\{1,\pi\}$, in which $\pi$ is an odd permutation of order $2$. Now, to reach the desired contradiction, I have two approaches:
-
-(a) Since every normal subgroup having two elements lies in the center of the group, our $N⊆ Z(S_n)=\{1\}$ for $n≥5$, and then $N=\{1\}$.
-(b) Clearly, $1≠N$ acting on the set $\Omega=\{1,2,...,n\}$ is intransitive, wherein $|\Omega|≥5$, and according to the following Proposition $S_n$ would be imprimitive.
-Proposition 7.1: If the transitive group $G$ contains an intransitive normal subgroup different from $1$, then $G$ is imprimitive
-(Finite Permutation Groups by H. Wielandt).
-
-May I ask if the second approach is valid? I would also be glad to learn of other approaches, if any exist. Thanks.
-
-REPLY [12 votes]: You are almost there. Try to prove that $Z(S_n)= 1$ for all $n \geq 3$. Then if $N$ is non-trivial and normal, you assume $N \cap A_n = 1$. This implies $N \subseteq Z(S_n)$. Why? Because in general, if $N \unlhd G$ and $N \cap [G,G] = 1$ then $N \subseteq Z(G)$.
-We conclude that the normal subgroup $N \cap A_n \neq 1$. At this point I assume that you know that $A_n$ is a simple group for $n \geq 5$. Hence $N \cap A_n = A_n$, so $A_n \subseteq N \subseteq S_n$. Since $[S_n:A_n] = 2$, it follows that $N=A_n$ or $N=S_n$.<|endoftext|>
-TITLE: High school mathematical research
-QUESTION [14 upvotes]: I am a grade 12 student. I am interested in number theory and I am looking for topics to research on.
-Can you suggest some topics in number theory and in general that would make for a good research project?
-I have self-studied certain topics in Abstract Algebra and Number Theory. I'm fascinated by primes (like most people are).
-Preferably, suggest some unexplored problems so that new results can be obtained.
-Thanks.
-
-REPLY [7 votes]: NOTE The OP didn't state "Preferably, suggest some unexplored problems so that new results can be obtained." when this was answered.
-
-I can provide you with Burton's Elementary Number Theory. It has a series of historical introductions and great examples you'll probably find worthy of a research project. He has information and obviously theory about results from Fermat, Euler, Diophantus, Wilson, Möbius, and others. I can also provide you with the three volumes of the History of Number Theory, which might be a great source.
-A few examples are:
-Fermat's Little Theorem If $p\nmid a$ then $$a^{p-1} \equiv 1 \pmod p$$
-Wilson's Theorem If $p$ is a prime then
-$$(p-1)! \equiv -1 \pmod p$$
-Möbius Inversion Formula If we have two arithmetical functions $f$ and $g$ such that
-$$f(n) = \sum_{d \mid n} g(d)$$
-then
-$$g(n) = \sum_{d \mid n} f(d)\mu\left(\frac{n}{d}\right)$$
-where $\mu$ is the Möbius function.
-Maybe not as interesting as the previous,
-The $\tau$ and $\sigma$ functions
-Let $\tau(n)$ be the number of divisors of $n$ and $\sigma(n)$ their sum. Then if $$n=p_1^{l_1}\cdots p_k^{l_k}$$
-$$\tau(n)=\prod_{m=1}^k(1+l_m)$$
-$$\sigma(n)=\prod_{m=1}^k \frac{p_m^{l_m+1}-1}{p_m-1}$$
-Legendre's Identity
-The multiplicity (i.e. number of times) with which $p$ divides $n!$ is
-$$\nu(n!)=\sum_{m=1}^\infty \left[\frac{n}{p^m} \right]$$
-However odd that might look, the argument is fairly simple. The number of integers among $1,\dots,n$ that are multiples of $p$ is $\left[\dfrac{n}{p} \right]$, the number of multiples of $p^2$ is $\left[\dfrac{n}{p^2} \right]$, and so forth. To get the multiplicity in $n!$ we sum all these values, since each of $1,\dots,n$ is counted $l$ times as a multiple of $p^m$ for $m=1,2,\dots,l$, if $p$ divides it exactly $l$ times. Note the sum will terminate because $\left[\frac{n}{p^m}\right]$ is zero when $p^m>n$.
-Perfect numbers
-A number is called a perfect number if the sum of its proper divisors equals the number; counting all divisors, this means
-$$\sigma(n) =2n$$
-Euclid showed that if $p=2^n-1$ is a prime, then $$\frac{p(p+1)}{2}$$ is always a perfect number.
-Euler showed that every even perfect number is of Euclid's kind.
-$n$-agonal or figurate numbers
-The Greeks were very interested in numbers that could be decomposed into geometrical figures. The square numbers are well known to us, namely $m=n^2$. But what about triangular, or pentagonal numbers?
-Explicit formulas have been found, namely
-$$t_n=\frac{n(n+1)}{2}$$
-$$p_n=\frac{n(3n-1)}{2}$$
-You can try, as a good olympiad-style exercise, to prove the following:
-$${t_1} + {t_2} + {t_3} + \cdots + {t_n} = \frac{{n\left( {n + 1} \right)\left( {n + 2} \right)}}{6}$$
-We can arrange the numbers in a pentagon as a triangle and a square:
-$${p_n} = {t_{n - 1}} + {n^2}$$<|endoftext|>
-TITLE: Approximation of $\log(x)$ as a linear combination of $\log(2)$ and $\log(3)$
-QUESTION [17 upvotes]: I wonder if it's possible to approximate $\log(n)$, $n$ an integer, by using a linear combination of $\log(2)$ and $\log(3)$.
-More formally, given an integer $n$ and a real $\epsilon>0$, is it always possible to find integers $x,a,b$ where:
-$$\left|n^x-2^a 3^b\right|<\epsilon$$
-For example, I can approximate $11$ by $$2^{-33} 3^{23}=10.959708460955880582332611083984375 \approx 10.96.$$
-
-REPLY [18 votes]: Yes.
-Let $a=\frac{\log(2)}{\log(3)}$. Then $a$ is irrational, thus by Dirichlet's theorem the set $\{ ma+k \mid m,k \in \mathbb Z \}$ is dense. Thus, there exist some $m,k \in \mathbb Z$ so that
-$$\left| \frac{\log(n)}{\log(3)} - ma -k \right| < \frac{\epsilon}{\log(3)}$$
-Multiply by $\log(3)$ and you are done.
-P.S. It is irrelevant that $n$ is an integer. Also, the proof works if you replace $2$ and $3$ by any numbers $x,y$ so that $\log_x(y)$ is irrational.
-P.P.S. I think that for $n$ a positive integer, it is enough to use one $\log(2)$. Indeed, if $n$ is a power of $2$, you are done; otherwise, $\frac{\log(n)}{\log(2)}$ is irrational, and then the set $\{m\frac{\log(n)}{\log(2)} - k\}$ is dense. Thus, you can find some integers so that
-$$\left|m\frac{\log(n)}{\log(2)} - k \right| < \frac{\epsilon}{\log(2)}$$
-Of course, you get rational coefficients in this case.<|endoftext|>
-TITLE: Matrix Representation of the Tensor Product of Linear Maps
-QUESTION [12 upvotes]: I'm trying to work out some examples of applying the tensor product in some concrete cases to get
-a better understanding of it. Within this context, let $f:\mathbb{R}^2 \rightarrow \mathbb{R}^2$ be a linear map with matrix $A$ and let $g:\mathbb{R}^2 \rightarrow \mathbb{R}^2$ be a linear map with matrix $B$.
-It follows almost immediately from the universal property of the tensor product that there exists a unique linear map
-$$
-f \otimes g: \mathbb{R}^2 \otimes \mathbb{R}^2 \rightarrow \mathbb{R}^2 \otimes \mathbb{R}^2
-$$
-such that
-$$
-(f \otimes g)(u \otimes v) = f(u) \otimes g(v).
-$$
-Suppose we take a basis for $\mathbb{R}^2$, say the canonical one, $(e_1, e_2)$, and let $u = a^ie_i$ and $v = b^je_j$. Then, using the linearity of $f$ and $g$ and the multilinearity of $\otimes$ we arrive at the general expression
-$$
-(f \otimes g)(u \otimes v) = a^ib^j f(e_i) \otimes g(e_j)
-$$
-where the summation convention is in force. Since $f \otimes g$ is a linear map we ought to be able to represent it by a matrix $A \otimes B$ whose columns represent the action of $f \otimes g$ on a basis of $\mathbb{R}^2 \otimes \mathbb{R}^2$.
-Now, since $\{e_i \otimes e_j \mid i,j = 1, 2 \}$ is a basis for $\mathbb{R}^2 \otimes \mathbb{R}^2$, the resulting matrix with respect to this basis should have columns that are given by $(f \otimes g)(e_i \otimes e_j)$ where $i,j=1, 2$.
-The first column of this matrix, for example, is the vector obtained by
-$$
-(f \otimes g)(e_1 \otimes e_1) = f(e_1) \otimes g(e_1)
-$$
-which is the tensor product of the first column of $A$ and the first column of $B$, since $A$ and $B$ respectively represent $f$ and $g$.
-This is where I am stuck. I understand $f(e_1) \otimes g(e_1)$ is essentially an equivalence class that is obtained in the existence proof of the tensor product, but I'm not so sure how one works this out concretely based on the definition of the tensor product and its universal construction.
-So my question is, how does one concretely compute $f(e_1) \otimes g(e_1)$?
-
-REPLY [13 votes]: $f$ has matrix $A$ with respect to $\{e_1, e_2\}$. Hence $f(e_1) = a^i_1e_i$, and analogously $g(e_1) = b^j_1e_j$. By bilinearity of $\otimes$, therefore
-\[ f(e_1) \otimes g(e_1) = a^i_1b^j_1 (e_i \otimes e_j) \]
-So if, say, you decide that $(e_1 \otimes e_1, e_1 \otimes e_2, e_2 \otimes e_1, e_2 \otimes e_2)$ is your ordered basis of $\mathbb R^2 \otimes \mathbb R^2$, then the first column of your matrix is $(a_1^1b_1^1, a_1^1b_1^2, a_1^2b_1^1, a_1^2b_1^2)^t$.
-
-REPLY [6 votes]: Check out http://en.wikipedia.org/wiki/Kronecker_product<|endoftext|>
-TITLE: Criterion for a limit of invertible operators on a Banach space to be invertible
-QUESTION [5 upvotes]: Let $A_n$ be invertible linear operators on a Banach space $B$, and suppose $\|A_n-A\| \to 0$ for some operator $A$.
-I need to prove that $A$ has an inverse operator iff the sequence $\{\|A_n^{-1}\|\}$ is bounded.
-I am almost sure it should be solved with the uniform boundedness principle, but I can't figure out either direction.
-
-REPLY [3 votes]: Suppose that $(A_n^{-1})$ is bounded. Using the identity $a^{-1}-b^{-1}=a^{-1}(b-a)b^{-1}$ and the fact that $(A_n)$ is a Cauchy sequence, it follows that $(A_n^{-1})$ is a Cauchy sequence. Since $L(B)$ is complete, there exists an operator $T$ such that $A_n^{-1}\to T$. Taking the limit of $A_nA_n^{-1}=A_n^{-1}A_n = I$ shows that $T=A^{-1}$.
-Rearranging the same identity, $(I+a^{-1}(b-a))b^{-1}=a^{-1}$. If $A$ is invertible, then
-$(I+A^{-1}(A_n-A))A_n^{-1}=A^{-1}$. Since $T_n:=A^{-1}(A_n-A)\to 0$, $I+T_n$ is eventually invertible, with $(I+T_n)^{-1}=\sum\limits_{k=0}^{\infty}(-T_n)^k$, and $\|(I+T_n)^{-1}\|\leq \dfrac{1}{1-\|T_n\|}\to 1$. Thus, for $n$ sufficiently large, $A_n^{-1}=(I+T_n)^{-1}A^{-1}$, and this implies that $(A_n^{-1})$ is bounded.<|endoftext|>
-TITLE: Retract of projective object is projective
-QUESTION [6 upvotes]: An object $P$ in a category $\mathcal{C}$ is called projective if the functor $\mathcal{C}(P,-): \mathcal{C} \rightarrow Set$ preserves epimorphisms.
-Now I have to prove the following:
-Every retract of a projective object is again projective.
-So let $r: P \rightarrow A$ be a retraction in $\mathcal{C}$ and $f: X \rightarrow Y$ an epimorphism in $\mathcal{C}$.
-Let $s: A \rightarrow P$ be the morphism in $\mathcal{C}$ with $r \circ s = 1_{A}$.
-Now I have to prove that $\mathcal{C}(A,f): \mathcal{C}(A,X) \rightarrow \mathcal{C}(A,Y)$ is an epimorphism.
-I've tried to prove it using the definition of an epimorphism:
-$$\phi \circ C(A,f) = \psi \circ C(A,f) \Rightarrow \phi = \psi$$
-but I didn't get anything nice, just a bunch of equalities from which I couldn't conclude '$\phi = \psi$'.
-Can someone give me a hint on how I can relate this to the morphism $\mathcal{C}(P,f)$ (which is an epimorphism) or the retraction $r$?
-Or is there an easier way to handle this problem?
For example, by using the fact that in Set an epimorphism is a surjective function?
-As always, any help would be appreciated!
-
-REPLY [7 votes]: See also Proposition 4.6.4 in Borceux F. Handbook of categorical algebra. Vol. 1. Basic category theory (CUP, 1994). Although the definition of projective object is slightly different (the definition in that book uses strong epimorphism instead of epimorphism), the proof is almost identical.
-
-$\newcommand{\Zobr}[3]{{#1}\colon{#2}\to{#3}}\newcommand{\ol}[1]{\overline{#1}}$The fact that $P$ is projective
-means the following:
-Whenever we have an epimorphism $\Zobr fXY$ and a morphism $\Zobr pPY$, then there exists a morphism $\Zobr{\ol p}PX$ such that $f\circ\ol p=p$.
-
-Now we have the following situation: we have a retraction $\Zobr rPA$, a morphism $\Zobr pAY$ and an epimorphism $\Zobr fXY$.
-
-The rest of the solution is: draw the obvious arrows to complete the diagram.
-This would be a logical place to stop, if you only want a hint and want to do the rest by yourself.
-Since it is already some time since you posted your question, I guess posting the full solution will not do much harm.
-(And of course you can simply ignore the rest of this post.)
-We have the morphism $q=p\circ r$. Since $P$ is projective, there is a morphism $\ol q$ such that $f\circ \ol q=q=p\circ r$. Thus we get
-$$f\circ\ol q \circ s=p\circ r\circ s=p\circ 1_A=p,$$
-i.e. we have shown that there is a morphism $\ol p = \ol q\circ s$ fulfilling $f\circ\ol p=p$.
-
-I hope that the uploaded diagram will last some time, but I also uploaded the LaTeX source to pastebin.<|endoftext|>
-TITLE: The collection of all compact perfect subsets is $G_\delta$ in the hyperspace of all compact subsets
-QUESTION [7 upvotes]: Let $X$ be metrizable (not necessarily Polish), and consider the hyperspace of all compact subsets of $X$, $K(X)$, endowed with the Vietoris topology (subbasic opens: $\{K\in K(X):K\subset U\}$ and $\{K\in K(X):K\cap U\neq\emptyset\}$ for $U\subset X$ open), or equivalently, the Hausdorff metric. We want to show that $K_p(X)=\{K\in K(X):K \text{ is perfect}\}$ is $G_\delta$ in $K(X)$. (This is another question from Kechris, Classical Descriptive Set Theory, Exercise 4.31.)
-A possible approach: $K_p(X) = \bigcap_{n=1}^\infty \{K\in K(X): \forall x\in K, (B(x,1/n)\setminus\{x\})\cap K\neq\emptyset\}$. What can we say about the complexity of $\{K\in K(X): \forall x\in K, (B(x,1/n)\setminus\{x\})\cap K\neq\emptyset\}$? Note that for fixed $x$, the set $\{K\in K(X): (B(x,1/n)\setminus\{x\})\cap K\neq\emptyset\}$ is open in $K(X)$. Also, the set $\{(x,K)\in X\times K(X):x\in K\}$ is closed in $X\times K(X)$, but I don't think this helps since the projection of a $G_\delta$ set need not be $G_\delta$.
-Any ideas?
-
-REPLY [3 votes]: For any $n\in\Bbb Z^+$ and open $U_1,\dots,U_n$ in $X$ define
-$$B(U_1,\dots,U_n)=\left\{K\in\mathscr{K}(X):K\subseteq\bigcup_{k=1}^nU_k\text{ and }K\cap U_k\ne\varnothing\text{ for }k=1,\dots n\right\}\;;$$
-the collection $\mathscr{B}$ of these sets is a base for the topology of $\mathscr{K}(X)$.
-For $n\in\omega$ let $\mathfrak{U}_n$ be the collection of all finite families of open sets of diameter less than $2^{-n}$.
For each $n\in\omega$, each $\mathscr{U}\in\mathfrak{U}_n$, and each pair of maps $p,q:\mathscr{U}\to\bigcup\mathscr{U}$ such that for each $U\in\mathscr{U}$, $p(U)$ and $q(U)$ are distinct points of $U$, fix disjoint open sets $V_{\mathscr{U},p,q}(U)$ and $W_{\mathscr{U},p,q}(U)$ for $U\in\mathscr{U}$ such that $p(U)\in V_{\mathscr{U},p,q}(U)\subseteq U$ and $q(U)\in W_{\mathscr{U},p,q}(U)\subseteq U$. Then let
$$G(\mathscr{U},p,q)=B(\mathscr{U})\cap\bigcap_{U\in\mathscr{U}}B\big(V_{\mathscr{U},p,q}(U),W_{\mathscr{U},p,q}(U),X\big)\;,$$
let $\mathscr{G}_n$ be the set of all such $G(\mathscr{U},p,q)$ for $\mathscr{U}\in\mathfrak{U}_n$, and let $G_n=\bigcup\mathscr{G}_n$; clearly each $G_n$ is open in $\mathscr{K}(X)$.
-Let $K\subseteq X$ be a non-empty compact set without isolated points. Fix $n\in\omega$. Let $\mathscr{U}$ be a finite open cover of $K$ by sets of diameter less than $2^{-n}$. Pick distinct points $p(U),q(U)\in K\cap U$ for each $U\in\mathscr{U}$. Then $G(\mathscr{U},p,q)\in\mathscr{G}_n$ is an open nbhd of $K$ in $\mathscr{K}(X)$, so $K\in G_n$.
-Now suppose that $K\subseteq X$ is compact but has an isolated point $x$. Fix $m\in\omega$ such that $$B(x,2^{-m})\cap K=\{x\}\;,$$ where $B(x,\epsilon)$ is the open ball of radius $\epsilon$ centred at $x$.
-Suppose that $n\ge m$ and $K\in G(\mathscr{U},p,q)\in\mathscr{G}_n$. Some $U\in\mathscr{U}$ contains $x$, and $$K\in B\big(V_{\mathscr{U},p,q}(U),W_{\mathscr{U},p,q}(U),X\big)\;,$$ so there are distinct points $y\in K\cap V_{\mathscr{U},p,q}(U)$ and $z\in K\cap W_{\mathscr{U},p,q}(U)$. But $$y,z\in U\subseteq B(x,2^{-n})\subseteq B(x,2^{-m})\;,$$ so $y,z\in B(x,2^{-m})\cap K=\{x\}$, which is impossible. Thus, $K\notin G_n$ for $n\ge m$.
-Finally, let $G=\bigcap_{n\in\omega}G_n$. Clearly $G$ is a $G_\delta$-set in $\mathscr{K}(X)$, and we’ve just shown that $G=\{K\in\mathscr{K}(X):K\text{ is perfect}\}$.<|endoftext|>
-TITLE: Complex Analysis Book
-QUESTION [44 upvotes]: I want a really good book on Complex Analysis, for a good understanding of theory. There are many complex variable books that are only a list of identities and integrals, and I hate that. For example, I found Munkres to be a very good book for learning topology, and "Curso de Análise vol I" by Elon Lages Lima is the best Real Analysis book (and the best math book) that I have read, with many examples, good theory and challenging exercises.
-An intuitive and introductory approach is not very important if the book has good explanations and correct proofs.
-Added: If it is possible, tell me your experience with your recommended books and if you got a really good understanding of complex analysis with a deep reading.

REPLY [3 votes]: An Introduction to Complex Analysis
by Ravi P. Agarwal, Kanishka Perera, and Sandra Pinelas
is a fantastic book!<|endoftext|>
-TITLE: Computing rank using $3$-Descent
-QUESTION [6 upvotes]: For an elliptic curve $E$ over $\Bbb{Q}$, we know from the proof of the Mordell-Weil theorem that the weak Mordell-Weil group of $E$ is $E(\Bbb{Q})/2E(\Bbb{Q})$. It is well known that
-$$
-0 \rightarrow E(\Bbb{Q})/2E(\Bbb{Q}) \rightarrow S^{(2)}(E/\Bbb{Q}) \rightarrow Ш(E/\Bbb{Q})[2] \rightarrow 0
-$$
-is an exact sequence which gives us a procedure to compute the generators for $E(\Bbb{Q})/2E(\Bbb{Q})$.
-(Relatively) recently I found out that there is another way to compute the rank of $E$ using $3$-descent. I was wondering, since the natural structure of the weak Mordell-Weil group is $E(\Bbb{Q})/2E(\Bbb{Q})$, what is the motivation behind using $3$-descent?
Also does $3$-descent similarly produce the generators of $E(\Bbb{Q})/2E(\Bbb{Q})$ or does it simply tell us the structure of $E(\Bbb{Q})$ via the Mordell-Weil theorem by giving us only the rank of $E$? Finally does it help us get around the issue of $Ш(E/\Bbb{Q})$ containing an element that is infinitely $2$-divisible? - -REPLY [5 votes]: An $n$-descent will compute the $n$-Selmer group, which sits in a s.e.s. -$$0 \to E(\mathbb Q)/n E(\mathbb Q) \to S^{(n)}(E/\mathbb Q) \to Ш(E/\mathbb Q)[n] \to 0.$$ -If you do a $2$-descent, it will give an upper bound on the size of $E(\mathbb Q)/2E(\mathbb Q).$ If you do a $3$-descent, it will give you an upper bound on the size of $E(\mathbb Q)/3 E(\mathbb Q).$ -The advantage of one over the other will depend on the structure as a Galois module of $E(\mathbb Q)[n]$, and the structure of the $n$-torsion in Sha. -If $E$ contains a rational $2$-torsion point, then the $2$-Selmer group may be easier to compute than the $n$-Selmer groups for other $n$. -As an example of a descent at a different choice of $n$, note that -the elliptic curve $X_0(11)$ contains a rational $5$-torsion point, and Mazur does a $5$-descent to prove that it has no other rational points besides the five points generated by this $5$-torsion point. (See this answer for more details on this.) -The elliptic curve $X_0(17)$ has a rational $3$-torsion point, and for this Mazur does a $3$-descent.<|endoftext|> -TITLE: Geometry IMO 1988 -QUESTION [8 upvotes]: (IMO 1988/1) Consider two circles of radii $R$ and $r$ $(R > r)$ with the same center. Let $P$ be a fixed point on the smaller circle and $B$ a variable point on the larger circle. The line $BP$ meets the larger circle again at $C$. The perpendicular $l$ to $BP$ at $P$ meets the smaller circle again at $A$. (As per our convention, if $l$ is tangent to the circle at $P,$ then we take $A = P$.) -(i) Find the set of values of $BC^2$ + $CA^2$ + $AB^2$ -(ii) Find the locus of the midpoint of $AB$. - -REPLY [4 votes]: Adding a few more perpendiculars, we augment $PA$ to a rectangle $\square PAQA^\prime$ with diagonal $PQ$ a diameter of the smaller circle. (We've exploited the fact that any angle inscribed in a semicircle is a right angle.) This makes $\square AQBC$ an isosceles trapezoid, whose height we'll denote $q$, smaller base $p$, and larger base $p+2s$. - -Note that Pythagoras gives us -$$|PQ|^2 = p^2 + q^2 = 4 r^2 \qquad |CA|^2 = q^2 + s^2 \qquad |AB|^2 = q^2 + (p+s)^2$$ -Moreover, the (unsigned) Power of Point $P$ relative to the big circle is -$$R^2 - r^2 = |PC| |PB| = s \left( p + s \right)$$ -Therefore, -$$\begin{align} -|AB|^2 + |BC|^2 + |CA|^2 &= \left( q^2 + \left( p + s \right)^2 \right) + \left( p + 2 s \right)^2 + \left( q^2 + s^2 \right) \\ -&= 2 \left( p^2 + q^2 \right) + 6 s \left( p + s \right) \\ -&= 8 r^2 + 6 \left( R^2 - r^2 \right) \\ -&= 2 \left( 3 R^2 + r^2 \right) -\end{align}$$ -(Similar algebra ---and an identical result--- arises from applying Ptolemy's theorem to the (necessarily-cyclic) isosceles trapezoid: -$$|CA| |BQ| + |AQ| |BC| = |AB||QC|$$ -where $|BQ| = |CA|$ and $|QC| = |AB|$.) -Consequently, the sum-of-squares is independent of the location of point $B$, so that the answer to (i) is the singleton set containing $2\left( 3R^2 + r^2\right)$. -For (ii), extend the trapezoid's shorter base to match the longer, obtaining rectangle $\square BCC^\prime B^\prime$. - -The midpoint, $M$, of $AB$ is always the midpoint of $PB^\prime$, whereas $B^\prime$ is always a point on the larger circle. 
Thus, $M$ is a dilation of $B^\prime$ in $P$ with scale factor $1/2$, and the locus of $M$ is the corresponding dilation of the locus of $B^\prime$ (aka the big circle).
-Note: The locus of $N$, the midpoint of $AC$ (and of $PC^\prime$), is the same circle. Likewise with the midpoints of $PC$ and $PB$.<|endoftext|>
-TITLE: Would nonmath students be able to understand this?
-QUESTION [5 upvotes]: For a course, I am required to do a presentation. The topic could either be something mundane, like a career strategy report, or something more interesting, such as a controversial topic, or an exposition on something you find interesting. What I would like to do is to present math in a way that probably no one in the class, other than myself, has seen before. That is to say, math as a deeply conceptual subject that does not necessarily involve computation with literal numbers.
-In order to illustrate what I mean by the above, I would present the following theorem: There are at least two kinds of infinite sets: Countable ones, and uncountable ones (of course I would define bijection and countable). I would present the diagonal argument, since it is elegant, ingenious, noncomputational, and short.
-My question is whether or not the general public (nonmathematicians) would be able to understand the argument. Note, I would not be explicit about the axiom of choice, etc.

REPLY [8 votes]: My experience with explaining non-countable sets to non-mathematicians is rather weird. Unfortunately this is what my data suggest (I don't know if it is true, it is just what happened every time I tried): the idea of a set you cannot enumerate is too hard for some people; there is a certain threshold (as a function of ability in abstract thinking, maybe?) below which the concept just slips away from their grasp. Usually you can tell very fast whether convincing them will be fruitful (maybe long and tedious, but doable), or whether their mind rejects the thought as ridiculous and often unimportant (this frequently happened with practical individuals, deeply rooted on Earth, as in "Dreams? Fantasies? What would I need that for?"). On the other hand, I haven't tried this on children, so hopefully they might behave differently.
-I second Limitless' idea about showing how the rational numbers are countable: you could do it first and decide on proceeding while in class and seeing their reaction.
-Also, there are other theorems that might rock the audience; just stating them
-might be enough (it does depend on the audience, but I think it is worth trying). To give you some examples:

-[meta] post on math.SE,
-if you stir coffee, then there is a point in it which will return to its original position,
-hairy ball theorem,
-inscribed square problem,
-voting paradox,
-Goodstein's sequence and theorem (hard conceptually),
-ham sandwich theorem,
-fold and cut theorem.

Good luck!<|endoftext|>
-TITLE: Evaluate the series: $ \sum_{k=1}^{\infty}\frac{1}{k(k+1)^2k!}$
-QUESTION [7 upvotes]: Evaluate the series:
-$$ \sum_{k=1}^{\infty}\frac{1}{k(k+1)^2k!}$$

-REPLY [5 votes]: Try it with a factor of $x^k$.
-$$\begin{align}
-f(x) &=\sum_{k=1}^{\infty}\frac{x^k}{k(k+1)^2k!}
-\\
-f'(x) &=\sum_{k=1}^{\infty}\frac{x^{k-1}}{(k+1)^2k!}
-\\
-x^2f'(x) &=\sum_{k=1}^{\infty}\frac{x^{k+1}}{(k+1)^2k!}
-\\
-\big(x^2f'(x)\big)' &=\sum_{k=1}^{\infty}\frac{x^{k}}{(k+1)k!}
-\\
-x\big(x^2f'(x)\big)' &=\sum_{k=1}^{\infty}\frac{x^{k+1}}{(k+1)k!}
-\\
-\Big(x\big(x^2f'(x)\big)'\Big)' &=\sum_{k=1}^{\infty}\frac{x^{k}}{k!} = e^x-1 .
-\end{align}$$
-Now solve a differential equation. Then plug in $x=1$.
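For what it's worth, a two-line numerical check (snippet mine) is consistent with the closed form $3-e$ that this method leads to, assuming I have solved the resulting differential equation correctly:

from math import e, factorial
print(sum(1/(k*(k+1)**2*factorial(k)) for k in range(1, 20)), 3 - e)
# both print 0.28171817154095...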
<|endoftext|>
-TITLE: Reference request in number theory for an analyst.
-QUESTION [18 upvotes]: I am a confirmed mathochist. My background is in analysis, and fairly traditional analysis at that; mainly harmonic functions, subharmonic functions and boundary behaviour of functions, but I have for many years had an interest in number theory (who hasn't?) without ever having the time to indulge this interest very much.
-Having recently retired from teaching, I now do have the time, and would like to look more deeply into a branch of number theory in which my previous experience might still be useful, and in particular, I would be interested to find out more about the interplay between elliptic curves, complex multiplication, modular groups etc.
-I am pretty confident in my background with respect to complex analysis, and I have a working knowledge of the basics of p-adic numbers, but my algebra background is much, much weaker: just what I can remember from courses many years ago in groups, rings, fields and Galois Theory, and absolutely no knowledge of the machinery of homology/cohomology, and very little of algebraic geometry (I once read the first 2-3 chapters of Fulton before getting bored and going back to analysis!)
-Alas, I now no longer have easy access to a good academic library, so I would need to purchase any text(s) needed, unless any good ones happen to be available online.
-My request would then be this:
-What text(s) would you recommend for someone who wants to find out more about elliptic curves, complex multiplication and modular groups, bearing in mind that I am very unlikely to want to do any original research, and it is all "just for fun"?
-Many thanks for your time!

REPLY [9 votes]: Have a look at Koblitz: Introduction to Elliptic Curves and Modular Forms, and also at Knapp: Elliptic Curves. I prefer the latter, since it is more in depth, but it is also more algebro-geometric. The former has a much stronger complex analysis slant, especially at the beginning, which might make the entry easier. So you could try reading Koblitz first, and then Knapp (there will be a lot of overlap, of course).
-The most thorough text on elliptic curves is Silverman: Arithmetic of Elliptic Curves. But it is considerably more algebro-geometric than the above two, and it has very little material on modular forms. So I am not sure it is the right entry text for you.
-None of these cover complex multiplication. For that, you could have a look at Silverman: Advanced Topics in the Arithmetic of Elliptic Curves. My feeling is that to appreciate the theory of complex multiplication, it would help to have seen class field theory beforehand. But Silverman does review the main results of class field theory, so perhaps you can dive straight in, after having worked through one of the basic texts above.
-"Just for fun" is a great premise to start learning elliptic curves, since it really is great fun!

REPLY [7 votes]: A really good but underrated book that is freely available online is Milne's Elliptic Curves. Its treatment of elliptic curves is certainly less detailed than Silverman's, but it is fairly thorough and yet accessible. He also has notes on complex multiplication. Also, Silverman and Tate's book could give you a cursory exposure to elliptic curves and assumes nothing but a knowledge of groups.
-Now with your experience in complex analysis, maybe the study of modular forms and analytic number theory would be of interest to you? Apostol has a series of two books (Analytic Number Theory and Modular Functions and Dirichlet Series) that develop these in some detail and are very classic books. Ram Murty's book has a ton of exercises, which is always good when learning.
-A knowledge of analytic number theory and modular forms will certainly not go to waste if you are interested in elliptic curves, and they are very interesting subjects in themselves as well.
-EDIT: Since Milne clicks well with you, maybe you should take a look at his entire collection of notes and books? He has stuff on algebra, so you can use those to brush up on it.<|endoftext|>
-TITLE: Evaluating $\lim\limits_{n\to\infty} e^{-n} \sum\limits_{k=0}^{n} \frac{n^k}{k!}$
-QUESTION [251 upvotes]: I'm supposed to calculate:
-$$\lim_{n\to\infty} e^{-n} \sum_{k=0}^{n} \frac{n^k}{k!}$$
-By using WolframAlpha, I might guess that the limit is $\frac{1}{2}$, which is a pretty interesting and nice result. I wonder in which ways we may approach it.

REPLY [7 votes]: I thought that it might be instructive to post a solution to a generalization of the OP's question. Namely, evaluate the limit
$$\lim_{n\to\infty}e^{-n}\sum_{k=0}^{N(n)}\frac{n^k}{k!}$$
where $N(n)=\lfloor Cn\rfloor$, where $C>0$ is an arbitrary constant. To that end we now proceed.

-Let $N(n)=\lfloor Cn\rfloor$, where $C>0$ is an arbitrary constant. We denote by $S(n)$ the sum of interest
-$$S(n)=e^{-n}\sum_{k=0}^{N}\frac{n^k}{k!}$$
-Applying the analogous methodology presented by @SangchulLee, it is straightforward to show that
-$$S(n)=1-\frac{(N/e)^{N}\sqrt{N}}{N!}\int_{(N-n)/\sqrt{N}}^{\sqrt{N}}e^{\sqrt{N}x}\left(1-\frac{x}{\sqrt N}\right)^N\,dx\tag7$$
-We note that the integrand is positive and bounded above by $e^{-x^2/2}$. Therefore, we can apply the Dominated Convergence Theorem along with Stirling's Formula to evaluate the limit as $n\to\infty$.
-There are three cases to examine.
-Case $1$: $C>1$
-If $C>1$, then both the lower and upper limits of integration on the integral in $(7)$ approach $\infty$ as $n\to \infty$. Therefore, we find
-$$\lim_{n\to \infty}e^{-n}\sum_{k=0}^{\lfloor Cn\rfloor}\frac{n^k}{k!}=1$$
-Case $2$: $C=1$
-If $C=1$, then the lower limit is $0$ while the upper limit approaches $\infty$ and we find
-$$\lim_{n\to \infty}e^{-n}\sum_{k=0}^{n}\frac{n^k}{k!}=\frac12$$
-Case $3$: $C<1$
-If $C<1$, then the lower limit approaches $-\infty$ while the upper limit approaches $\infty$ and we find
-$$\lim_{n\to \infty}e^{-n}\sum_{k=0}^{\lfloor Cn\rfloor}\frac{n^k}{k!}=0$$

-To summarize, we have found that
-$$\bbox[5px,border:2px solid #C0A000]{\lim_{n\to \infty}e^{-n}\sum_{k=0}^{\lfloor Cn\rfloor}\frac{n^k}{k!}=\begin{cases}1&,C>1\\\\\frac12&, C=1\\\\0&, C<1\end{cases}}$$
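A quick numerical sanity check of the three cases (the script and its names are mine):

from math import exp, floor, lgamma, log

def S(n, C):        # e^{-n} * sum_{k=0}^{floor(C n)} n^k / k!, computed in log space
    return sum(exp(k * log(n) - lgamma(k + 1) - n) for k in range(floor(C * n) + 1))

for C in (0.5, 1.0, 1.5):
    print(C, [round(S(n, C), 4) for n in (10, 100, 1000)])
# the rows tend to 0 for C < 1, to 0.5 for C = 1, and to 1 for C > 1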
<|endoftext|>
-TITLE: Complete ordered field
-QUESTION [11 upvotes]: I'm trying to prove that:
-If every Cauchy sequence is convergent in an ordered field $F$, then every nonempty subset of $F$ that has an upper bound has a sup in $F$.
-Let $A$ be a nonempty subset of $F$ that is not a singleton and has an upper bound in $F$.
-Let $a_0 \notin v(A)$ and $b_0 \in v(A)$.
-It's written in my book that for every $e \in P_F$, there exists $N \in \omega$ such that $N\geq(b_0 - a_0)/e$.
-I think this is not accurate, since it has not been shown that such an $F$ is Archimedean. Is such an $F$ Archimedean? Or does such an $N$ exist under these hypotheses?
--Definition of a Cauchy sequence:
-For every $e\in P_F$, there exists $N\in \omega$ such that if $i,j\geq N$, then $|x(i) - x(j)| < e$.
-($x:\omega \to F$ is a sequence)
-Least Upper Bound Property $\implies$ Complete

REPLY [9 votes]: You’re right: there is a problem with the argument, because there is a Cauchy-complete non-Archimedean ordered field, and a non-Archimedean ordered field is not complete in the sense of having least upper bounds.
-The standard example starts with the field $F$ of rational functions over $\Bbb R$, with positive cone consisting of those functions $f/g$ such that the leading coefficients of $f$ and $g$ have the same algebraic sign. Then form the Cauchy completion by extending this to equivalence classes of Cauchy sequences in $F$. This is Example 7 on page 17 of Gelbaum & Olmsted, Counterexamples in Analysis; you may be able to see it here.<|endoftext|>
-TITLE: Is the volume of a tetrahedron determined by the surface areas of the faces?
-QUESTION [11 upvotes]: I am looking for a formula: $V=f(S_1,S_2,S_3,S_4)$, where $S_1$, $S_2$, $S_3$, and $S_4$ are the areas of the four faces.
-We know $V=\dfrac{S_1 h_1}{3}=\dfrac{S_2 h_2}{3}=\dfrac{S_3 h_3}{3}=\dfrac{S_4 h_4}{3}$, where $h_1$, $h_2$, $h_3$, and $h_4$ are the corresponding altitudes.
-So we need to find
-$h_1=g(S_2,S_3,S_4)$
-$h_2=g(S_1,S_3,S_4)$
-$h_3=g(S_1,S_2,S_4)$
-$h_4=g(S_1,S_2,S_3)$
-Also, if all points of the tetrahedron are on a plane, the volume should be zero. Thus:

-Firstly, if the projection of the apex falls outside the face with area $S_1$ and $S_1+S_2=S_3+S_4$, then $V=0$.
-If the projection of the apex falls inside the face with area $S_1$ and $S_1=S_2+S_3+S_4$, then $V=0$.

-So can we determine from the areas whether the volume is zero or not?
-Is it possible to find $V=f(S_1,S_2,S_3,S_4)$? Is the surface information enough to create a unique closed volume?
-Thanks a lot for advice and answers.

REPLY [22 votes]: The volume of a tetrahedron cannot be determined from the surface areas of the faces. I shall provide a family of counterexamples.
-A tetrahedron is called equifacial if its four faces are all congruent triangles. Equivalently, a tetrahedron is equifacial if its opposite edges have the same length.
-Starting with any triangle in the plane, you can try to "glue" together four copies of the triangle in an obvious way to get a tetrahedron. This process has the following properties:

-If you start with an equilateral triangle, the result is a regular tetrahedron.
-If you start with an acute triangle, the result is an equifacial tetrahedron.
-If you start with a right triangle, the result is a "degenerate" tetrahedron with zero volume.
-If you start with an obtuse triangle, you can't make a tetrahedron.

-Now, it is possible to continuously deform an equilateral triangle into a right triangle without changing the area. Therefore, it is possible to continuously deform a regular tetrahedron into a degenerate tetrahedron without changing the areas of the faces!
-The following animation shows this process:

-All of the tetrahedra shown in this animation have faces with area 1, but the volume decreases continuously from about $0.41$ to $0$.
-Of course, equifacial tetrahedra aren't the only possible counterexample.
Indeed, for any allowed quadruple $(S_1,S_2,S_3,S_4)$ of areas, there ought to be a whole interval of possible values for the volume.<|endoftext|>
-TITLE: Evaluating Integral with Residue Theorem
-QUESTION [5 upvotes]: The integral in question is
-$$\int_{C} \frac{z}{z^2+1}\,dz,$$ where $C$ is the path $|z-1| = 3.$
-The two poles of $f(z)=\frac{z}{z^2+1}$ are $-j$ and $j$.
-$${\rm Res}_{z=z_0}f(z)=\lim_{z\rightarrow z_0}(z-z_0)f(z)$$
-For the first pole:
-$${\rm Res}_{z=j}f(z)= \lim_{z\rightarrow j}(z-j)\frac{z}{z^2+1} \\ = \lim_{z\rightarrow j}\frac{(z-j)z}{(z+j)(z-j)}\\
-=\lim_{z\rightarrow j}\frac{z}{(z+j)} =\frac{j}{(j+j)}$$
-${\rm Res}_{z=j}f(z)= \frac{1}{2}$.
-For the second pole:
-$${\rm Res}_{z=-j}f(z)= \lim_{z\rightarrow -j}(z+j)\frac{z}{z^2+1} \\ = \lim_{z\rightarrow -j}\frac{(z+j)z}{(z+j)(z-j)}\\ = \lim_{z\rightarrow -j}\frac{z}{(z-j)}\\ = \frac{j}{(-j-j)}$$
-${\rm Res}_{z=-j}f(z)= \frac{-1}{2}$.
-Sum:
-$${\rm Res}_{z=j}f(z)+ {\rm Res}_{z=-j}f(z)= \frac{1}{2}-\frac{1}{2} = 0$$
-Now I have always been under the impression that when integrating inside a path, the only time the result is $0$ is when there are no poles in or on the path.
-Am I mistaken? Or have I made an error in the calculation? Or should I not be trying to use the Residue Theorem altogether?
-Any help would be much appreciated.

-REPLY [5 votes]: For the second pole:
$${\rm Res}_{z = -i} f(z) = \lim_{z \to -i} \frac{(z+i) z}{(z+i)(z-i)} = \lim_{z \to -i} \frac{z}{z-i} = \frac{-i}{-2i} = \frac{1}{2}$$
so in fact the two poles contribute the same to the final result, which is $2\pi i \cdot \left( \frac{1}{2} + \frac{1}{2} \right) = 2\pi i$.<|endoftext|>
-TITLE: How many bits needed to store a number
-QUESTION [13 upvotes]: How many bits are needed to store the number $55^{2002}$?
-My answer is $2002\;\log_2(55)$; is it correct?

-REPLY [22 votes]: The number of bits required to represent an integer $n$ is $\lfloor\log_2 n\rfloor+1$, so $55^{2002}$ will require $\lfloor 2002\; \log_2 55\rfloor+1$ bits, which is $11,575$ bits.
-Added: For example, the $4$-bit integers are $8$ through $15$, whose logs base $2$ are all in the interval $[3,4)$. We have $\lfloor\log_2 n\rfloor=k$ if and only if $k\le\log_2 n<k+1$.
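A one-line check in Python (snippet mine; exact, since Python integers are arbitrary precision):

print((55 ** 2002).bit_length())   # 11575, i.e. floor(2002*log2(55)) + 1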
<|endoftext|>
-TITLE: manifold structure on a finite dimensional real vector space
-QUESTION [16 upvotes]: I am reading Warner's Differentiable Manifolds, and I do not get one example, which is:

-Let $V$ be a finite dimensional real vector space. Then $V$ has a natural manifold structure. If $\{e_i\}$ is a basis then the elements of the dual basis $\{r_i\}$ are the coordinate functions of a global coordinate system on $V$.

-I don't understand how "the elements of the dual basis $\{r_i\}$ are the coordinate functions of a global coordinate system on $V$." Could anyone explain that to me? Then how does such a global coordinate system uniquely determine a differentiable structure on $V$? And why is this structure independent of the choice of basis?
-First of all, for a manifold structure I need each point to have an open neighborhood $U$ homeomorphic to some open subset of $\mathbb{R}^n$. How do I get such a neighborhood here?

REPLY [11 votes]: The space $\mathbb{R}^n$ has coordinate functions $x_j:\mathbb{R}^n\to\mathbb{R}$, projection onto the $j^{th}$ axis. If $(\phi,U)$ is a coordinate system on a manifold $M$, then we get coordinate functions on $U$ by composing $\phi$ with the $x_j$.
-Warner is just saying that by choosing a basis on a real vector space $V$, you induce a bijective linear map (hence homeomorphism) $A$ between $V$ and $\mathbb{R}^n$, and that homeomorphism is a global coordinate system with coordinate functions $x_j\circ A = r_j$. The open neighborhood about each point is the entire space $V$.
-To see that the structure is independent of choice of basis (up to diffeomorphism), try the construction with a different basis; can you see a diffeomorphism between the two structures?<|endoftext|>
-TITLE: What is the usefulness of matrices?
-QUESTION [53 upvotes]: I have matrices for my syllabus but I don't know where they find their use. I even asked my teacher but she also has no answer. Can anyone please tell me where they are used?
-And please also give me an example of how they are used?

REPLY [5 votes]: I never fully got matrices until I left university. I am glad I understand now.
An example:
A good quality camera will save the captured image uncorrected, along with a 3x3 colour correction matrix. Your computer will multiply this with the colour correction matrix of your display, and then multiply every pixel in the image by the result before putting it on your display. The computer will use a different display matrix for the printer (as it is a different display).
Look at several real world examples. Experiment with colour or 2D/3D transformations; they are fun and visual (if you are a visual person). 2D is easiest and most visual.<|endoftext|>
-TITLE: Expression of the Hyperbolic Distance in the Upper Half Plane
-QUESTION [13 upvotes]: While looking for an expression of the hyperbolic distance in the Upper Half Plane $\mathbb{H}=\{z=x +iy \in \mathbb{C}\mid y>0\},$ I came across two different expressions. Both of them in Wikipedia.
-In the page Poincaré Half Plane Model it is explicitly stated that the distance of $z,w \in \mathbb{H}$ is:
-$$d_{hyp}(z,w)= \operatorname{arccosh}\left(1+ \frac{|z-w|^2}{2 \operatorname{Im}(z) \operatorname{Im}(w)}\right).$$
-While in the page Poincaré Metric it is stated that the metric on the Upper Half Plane is:
-$$\rho(z,w)=2 \operatorname{arctanh}\left(\frac{|z-w|}{|z-\bar w|}\right).$$
-At the beginning I thought it would have been an easy exercise to prove the equivalence of the two expressions. But first I failed in doing that, and then I found, using Mathematica, a counterexample (i.e. $z=2i$ and $w=i$).
-Question: If they are not equal, which of the two expressions is the right one? Then, how is the metric related to the induced distance?
-The question is probably silly, but I'm often confused about the relationships between "metric" objects.
-Thank you very much for your time!

REPLY [7 votes]: To verify the consistency of the formulas, recall these identities:
$$1 - \tanh^2 d = \mathrm{sech}^2 d \qquad \qquad \cosh^2\frac{d}{2}=\frac{1+\cosh d}{2}$$
-The "Poincare Metric" formula is equivalent to this form:
$$\tanh\frac{d}{2} = \frac{p}{q}$$
so ...
-$$\begin{align}
-\mathrm{sech}^2\frac{d}{2} &= \frac{q^2-p^2}{q^2} \\
-\implies \qquad \frac{1 + \cosh d}{2} &= \frac{q^2}{q^2-p^2} \\
-\implies \qquad \cosh d &= \frac{q^2+p^2}{q^2-p^2}
-\end{align}$$
-Now, with $w := u + i v$ and $z := x + i y$, and $p := |z-w|$ and $q := |z-\overline{w}|$, we have
-$$\begin{align}
-q^2+p^2 = |z-\overline{w}|^2+|z-w|^2
-&=\left( (x-u)^2+(y+v)^2 \right)+|z-w|^2 \\
-&=\left( (x-u)^2+(y-v)^2+4yv \right)+|z-w|^2 \\
-&=4yv+2|z-w|^2 \\
-q^2-p^2 = |z-\overline{w}|^2-|z-w|^2
-&=(x-u)^2+(y+v)^2-(x-u)^2-(y-v)^2 \\
-&=4yv
-\end{align}$$
-Thus,
-$$\cosh d = 1 + \frac{\;|z-w|^2}{2yv}$$<|endoftext|>
-TITLE: Can different uniformizations of Riemann surfaces be related somehow
-QUESTION [5 upvotes]: Let $X$ be a hyperbolic compact connected Riemann surface. Let $U\subset X$ be an open subset. Assume that $U\neq X$.
-We can uniformize $X$ by $\mathbf{H}$ directly to obtain it as a quotient of $\mathbf{H}$ by some cofinite Fuchsian group $\Gamma$ without cusps or elliptic elements.
-But we can also uniformize $U$ in the same way and then obtain $X$ by adding the set $X-U$ of cusps.
-Can these uniformizations be related in some sense? Even abstractly speaking? Have such "different" uniformizations been studied in some sense?
-It's a bit of a vague question, I admit. I'm just wondering what exactly can be done in this context.

-REPLY [3 votes]: The Schwarz lemma implies that lengths of curves in the uniformization of $U$ are strictly larger than the length of the curve in the uniformization of $X$.<|endoftext|>
-TITLE: What is the expected area of a polygon whose vertices lie on a circle?
-QUESTION [13 upvotes]: I came across a nice problem that I would like to share.
-Problem: What is the expected value of the area of an $n$-gon whose vertices lie on a circle of radius $r$? The vertices are uniformly distributed.

-REPLY [3 votes]: A simple asymptotic argument, for large $N$ (take $r=1$): the central angles $x_i$ (which sum up to $2\pi$, and hence are not independent) can be approximated as iid exponentials $y_i$ with mean $\alpha=2 \pi /N$ (a procedure similar to "Poissonization" for discrete variables, and analogous to a change of ensemble in statistical physics).
-Then, because the area of each triangle is $ \sin(x_i)/2$, we have that the expected area is
-$$E(A) = \frac{1}{2}\sum_{i=1}^N E[\sin(x_i)] \approx \frac{N}{2} E[\sin(y_i)] = \frac{N}{2} \int_0^{\infty} \sin(y) \frac{1}{\alpha} \exp{\left(-\frac{y}{\alpha}\right)} \,dy = \frac{N}{2} \frac{\alpha}{1+\alpha^2}$$
-So
-$$E(A)\approx \frac{\pi}{1+(2 \pi/N)^2}$$
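A quick Monte Carlo spot-check of this approximation (the script and its names are mine; unit radius, and the agreement should improve as $n$ grows):

import numpy as np

rng = np.random.default_rng(0)
def mc_mean_area(n, trials=20000):       # n vertices uniform on the unit circle
    th = np.sort(rng.uniform(0.0, 2 * np.pi, size=(trials, n)), axis=1)
    gaps = np.diff(th, axis=1, append=th[:, :1] + 2 * np.pi)   # central angles
    return np.sin(gaps).sum(axis=1).mean() / 2                 # area = (1/2) sum sin

for n in (10, 30, 100):
    print(n, round(mc_mean_area(n), 3), round(np.pi / (1 + (2 * np.pi / n) ** 2), 3))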
<|endoftext|>
-TITLE: infinite series involving harmonic numbers and zeta
-QUESTION [12 upvotes]: I ran across a fun looking series and am wondering how to tackle it.
-$$\sum_{n=1}^{\infty}\frac{H_{n}}{n^{3}}=\frac{{\pi}^{4}}{72}.$$
-One idea I had was to use the digamma and the fact that
-$$\sum_{k=1}^{n}\frac{1}{k}=\int_{0}^{1}\frac{1-t^{n}}{1-t}dt=\psi(n+1)+\gamma.$$
-Along with the identity $\psi(n+1)=\psi(n)+\frac{1}{n}$, I managed to get it into the form
-$$\sum_{n=1}^{\infty}\frac{H_{n}}{n^{3}}=\gamma\zeta(3)+\zeta(4)+\sum_{n=1}^{\infty}\frac{\psi(n)}{n^{3}}.$$
-This would mean that $$\sum_{n=1}^{\infty}\frac{\psi(n)}{n^{3}}=\frac{{\pi}^{4}}{360}-\gamma\zeta(3),$$ which, according to Maple, it does. But how to show it, if possible?
-I also started with $\frac{-\ln(1-x)}{x(1-x)}=\sum_{n=1}^{\infty}H_{n}x^{n-1}$,
-then divided by $x$ and differentiated several times.
-This led to some interesting, albeit tough, integrals involving the dilog:
-$$-\int\frac{\ln(1-x)}{x(1-x)}dx=Li_{2}(x)+\frac{\ln^{2}(1-x)}{2}=\sum_{n=1}^{\infty}\frac{H_{n}x^{n}}{n}.$$
-Doing this again and again led to some integrals that appeared to be going in the right direction.
-$$\int_{0}^{1}\frac{Li_{3}(x)}{x}dx=\frac{{\pi}^{4}}{90}$$
-$$-\int_{0}^{1}\frac{\ln^{2}(1-x)\ln(x)}{2x}dx=\frac{{\pi}^{4}}{360}$$
-$$-\int_{0}^{1}\frac{\ln(1-x)Li_{2}(1-x)}{x}dx=\frac{{\pi}^{4}}{72}$$
-But what would be a good approach for this one? I would like to find out how to evaluate
-$$\sum_{n=1}^{\infty}\frac{\psi(n)}{n^{3}}=\frac{{\pi}^{4}}{360}-\gamma\zeta(3)$$
-if possible, but any methods would be appreciated.
-Thanks a bunch.

REPLY [2 votes]: I appreciate all of the input.
-I thought I would come back and post something I managed to come up with.
-This is kind of based on the methods in my first post using the dilog.
-I started by using the identity $-n\int_{0}^{1}(1-x)^{n-1}\ln(x)dx=-\sum_{k=1}^{n}\binom{n}{k}\frac{(-1)^{k}}{k}=H_{n}$.
-Then, $\sum_{n=1}^{\infty}\frac{H_{n}}{n^{3}}=-\sum_{n=1}^{\infty}\frac{1}{n^{2}}\int_{0}^{1}(1-x)^{n-1}\ln(x)dx$
-$=-\int_{0}^{1}\sum_{n=1}^{\infty}\frac{(1-x)^{n-1}\ln(x)}{n^{2}}dx$
-Using the definition of the dilog, $Li_{2}(1-x)=\sum_{n=1}^{\infty}\frac{(1-x)^{n}}{n^{2}}$, I got:
-$\sum_{n=1}^{\infty}\frac{H_{n}}{n^{3}}=-\int_{0}^{1}\frac{Li_{2}(1-x)\ln(x)}{1-x}dx$
-$=\left[\frac{1}{2}(Li_{2}(1-x))^{2}\right]_{x=1}^{x=0}$
-$=\frac{1}{2}(Li_{2}(1))^{2}=\frac{1}{2}\left(\frac{{\pi}^{2}}{6}\right)^{2}$
-$=\frac{{\pi}^{4}}{72}$.<|endoftext|>
-TITLE: Cutting cake into 5 equal pieces
-QUESTION [7 upvotes]: If a cake is cut into $5$ equal pieces, each piece would be $80$ grams
-heavier than when the cake is cut into $7$ equal pieces. How heavy is
-the cake?
-How would I solve this problem? Do I have to try to find an algebraic expression for this? $5x = 7y + 400$?

REPLY [3 votes]: $$\frac{w}{5}=\frac{w}{7}+80$$
-Multiply both sides by $35$:
-$$7w=5w+80\cdot 35$$
-Subtract $5w$ from both sides:
-$$2w=80\cdot 35$$
-Divide both sides by $2$:
-$$w=40\cdot 35$$
-So the weight of the cake is $1400$ grams.
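A quick check with exact rational arithmetic (snippet mine):

from fractions import Fraction
print(Fraction(80) / (Fraction(1, 5) - Fraction(1, 7)))   # 1400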
<|endoftext|>
-TITLE: short exact sequences and direct product
-QUESTION [9 upvotes]: Let
-$$0\longrightarrow L^{(i)}\longrightarrow M^{(i)}\longrightarrow N^{(i)}\longrightarrow 0$$
-be a short exact sequence of abelian groups for every index $i$. Clearly if I take finite direct products, then
-$$0\longrightarrow \prod_iL^{(i)}\longrightarrow\prod_i M^{(i)}\longrightarrow \prod_iN^{(i)}\longrightarrow 0$$
-is a short exact sequence. But what about infinite direct products? Is the exactness preserved?

-REPLY [14 votes]: In a general (abelian) category, the product of epimorphisms (if it exists!) may not be an epimorphism – that is why we sometimes assume (AB4*) as an extra axiom. As it turns out, the category of $R$-modules (for any ring $R$) always satisfies the (AB4*) axiom.
-Proposition. If the (AB4*) axiom is satisfied in an abelian category, then products of short exact sequences are short exact sequences.
-Proof.
Consider short exact sequences
$$0 \longrightarrow L^{(i)} \longrightarrow M^{(i)} \longrightarrow N^{(i)} \longrightarrow 0$$
By general abstract nonsense, one can show that the kernel of a product is the product of the kernels, so we have a left exact sequence
$$0 \longrightarrow \prod_i L^{(i)} \longrightarrow \prod_i M^{(i)} \longrightarrow \prod_i N^{(i)}$$
but by the (AB4*) assumption, the last homomorphism is an epimorphism, so we in fact have a short exact sequence.<|endoftext|>
-TITLE: Basic understanding of Spec$(\mathbb Z)$
-QUESTION [15 upvotes]: So, I'm looking into schemes, and found that I have no intuition in the field, so I decided to look into some simple (as in affine and well-known) examples. As I like to dwell on the basics for a while, and texts on graduate level tend to move too quickly away from the basics for me, there isn't much material. These are some of the conclusions I've come to so far:
-First of all, the Zariski topology on Spec$(\mathbb Z)$ has as closed sets any finite set not containing $(0)$, as well as the whole set.
-Second, let the open set $U$ be the complement of the union of the prime ideals generated by the primes $p_1, \ldots, p_n$, in other words, $U$ consists of all the prime ideals not containing the product $p_1p_2\cdots p_n$. Then the sheaf over Spec$(\mathbb Z)$ takes $U$ to the localization $\mathbb Z_{p_1p_2\cdots p_n}$, i.e. the subring of $\mathbb Q$ consisting of rationals which can be written as a fraction with the denominator a power of $p_1p_2\cdots p_n$.
-Third, the stalk around a prime ideal $(p)$ is $\mathbb Z_{(p)}$, that is, the subring of $\mathbb Q$ consisting of rationals that can be written as a fraction without any factor $p$ in the denominator. As a special case, the stalk around $(0)$ is $\mathbb Q$.
-Am I wrong about any of this?

REPLY [3 votes]: Turns out it was all good, according to the two comments. I even got a few pointers to material, which was a nice bonus. Thanks a lot, Zhen and Dylan.<|endoftext|>
-TITLE: Sum of $n \sigma(n)$
-QUESTION [6 upvotes]: What is known about the asymptotic behavior of
-$$
--\frac{\pi^2}{18}x^3+\sum_{n\le x}n\sigma(n) ?
-$$
-It seems to be $O(x^{2+\varepsilon})$ but I cannot prove this.

REPLY [3 votes]: See my blog post regarding the average of $\sigma(n)$. This post is a two part series: Part I looks at the upper bound, and Part II proves Pétermann's lower bound, which is significantly more difficult. All results regarding $n\sigma(n)$ follow right away from partial summation.
-In Part I, the hyperbola method is used to show that $$\sum_{n\leq x}\sigma(n)=\frac{\pi^{2}}{12}x^2+O(x\log x),$$ which should be exactly what you are looking for. From here, partial summation yields $$\sum_{n\leq x} n\sigma(n) =\frac{\pi^{2}}{18}x^3+O(x^2\log x).$$
-I will post Part II soon, which proves that the error term is not $o(x^2\log \log x)$ and oscillates from negative to positive with a magnitude of $x^2\log \log x$ infinitely often.<|endoftext|>
-TITLE: What's the latest on Laver tables?
-QUESTION [19 upvotes]: A couple of years ago, I was astonished and delighted to learn about Laver tables, a sequence (indexed on $n$) of Cayley-like tables for a binary operation $\star$ on numbers $i,j\leq 2^n$ that satisfies $p\star 1\stackrel{\text{def}}{=}p+1\bmod 2^n$ and $p\star (q\star r)\stackrel{\text{def}}{=}(p\star q)\star(p\star r)$.
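For concreteness, here is a small Python sketch (code and names my own, not from any of the sources mentioned here) that fills in such a table by memoized recursion directly from these two defining equations, with entries represented as $1,\dots,2^n$, and reads off the period of the first row:

from functools import lru_cache

def first_row_period(n):
    N = 2 ** n
    @lru_cache(maxsize=None)
    def star(p, q):
        if q == 1:
            return p % N + 1   # p * 1 = p + 1 (mod 2^n), with 2^n * 1 = 1
        # p * q = p * ((q-1) * 1) = (p * (q-1)) * (p * 1), by self-distributivity
        return star(star(p, q - 1), p % N + 1)
    row = [star(1, q) for q in range(1, N + 1)]
    return next(d for d in (2 ** j for j in range(n + 1)) if row[:d] * (N // d) == row)

print([first_row_period(n) for n in range(1, 7)])   # [1, 2, 4, 4, 8, 8]

(For larger $n$ one would want to raise Python's recursion limit and use a table-filling loop instead.)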
As the Wikipedia page notes, these have connections with elementary embeddings of cardinals (and apparently some connections with representations of braid groups as well, though I know less about that).
-In particular, it's known that the top 'row' of the table - the list of entries $1\star q$ - is periodic for each $n$, with period $2^k$ for some $k\lt n$. It's relatively straightforward to show that this period sequence is nondecreasing (larger tables project onto smaller ones). All the tables that have been calculated have period 16 or less, and it's known that the smallest $n$ (if any) with a period larger than 16 is titanic. On the other hand, the Wikipedia page notes that the sequence of periods is 'known' to be unbounded - but only under the assumption of one of the strongest large-cardinal hypotheses known!
-It's this last result that I'm hoping for an update on; is anything 'new' known about the unboundedness of the period sequence? Has it been shown to hold unconditionally? If not, is there any revised upper or lower bound on the hypothesis needed for unboundedness? I've seen Dehornoy's result that the unboundedness can't be proven in PRA, but has it been proven independent of PA itself (or even ZFC, e.g. needing some large-cardinal hypothesis) yet?

REPLY [3 votes]: The problem about the unboundedness is still completely open. Furthermore, no lower bound for the rate of growth of the function $n\mapsto o_{n}(1)$ has been improved since the 1990s, when Dougherty showed that this function grows only slightly faster than the Ackermann function. Dougherty stated that “pushing the lower bound on the growth rate of the number $F(n)$ of critical points below $\kappa_{n}$, to a function beyond $F_{\omega+1}$, will probably require a new idea” in [1], where he proved that $F(n)$ grows faster than the Ackermann function. This seems to be a very difficult problem regardless of whether it is independent of PA or not.
-There does not seem to be any original research publication focusing on Laver tables published from 2000 to 2013 (but that is about to change).
-[1] Dougherty, Critical points in an algebra of elementary embeddings.<|endoftext|>
-TITLE: Difference between supremum and maximum
-QUESTION [60 upvotes]: Referring to this lecture, I want to know what the difference is between supremum and maximum. They look the same as far as the lecture is concerned, when it explains pointwise supremum and pointwise maximum.

REPLY [78 votes]: A maximum of a set must be an element of the set. A supremum need not be.
-Explicitly, if $X$ is a (partially) ordered set, and $S$ is a subset, then an element $s_0$ is the supremum of $S$ if and only if:

-$s\leq s_0$ for all $s\in S$; and
-If $t\in X$ is such that $s\leq t$ for all $s\in S$, then $s_0\leq t$.

-By contrast, an element $m$ is the maximum of $S$ if and only if:

-$s\leq m$ for all $s\in S$; and
-$m\in S$.

-Note that if $S$ has a maximum, then the maximum must be the supremum: indeed, if $t\in X$ is such that $s\leq t$ for all $s\in S$, then, since $m\in S$ in particular, $m\leq t$, proving that $m$ satisfies the conditions to be the supremum.
-But it is possible for a set to have a supremum but not a maximum. For instance, in the real numbers, the set of all negative numbers does not have a maximum: there is no negative number $m$ with the property that $n\leq m$ for all negative numbers $n$. However, the set of all negative numbers does have a supremum: $0$ is the supremum of the set of negative numbers.
Indeed, $a\leq 0$ for all negative numbers $a$; and if $a\leq b$ for all negative numbers $a$, then $0\leq b$.
-The full relationship between supremum and maximum is:

-If $S$ has a maximum $m$, then $S$ also has a supremum and in fact $m$ is also a supremum of $S$.
-Conversely, if $S$ has a supremum $s$, then $S$ has a maximum if and only if $s\in S$, in which case the maximum is also $s$.

-In particular, if a set has both a supremum and a maximum, then they are the same element. The set may also have neither a supremum nor a maximum (e.g., the rationals as a subset of the reals). But if it has only one of them, then it has a supremum which is not a maximum and is not in the set.

REPLY [34 votes]: In terms of sets, the maximum is the largest member of the set, while the supremum is the smallest upper bound of the set.
-So, consider $A=\{1,2,3,4\}$. Assuming we're operating with the normal reals, the maximum is 4, as that is the largest element. The supremum is also 4, as four is the smallest upper bound.
-However, consider the set $B=\{x \mid x < 2\}$. Then, the maximum of $B$ is not 2, as 2 is not a member of the set; in fact, the maximum is not well defined. The supremum, though, is well defined: 2 is clearly the smallest upper bound for the set.
-You will find many points in real analysis (har har har) where it is easier and more profitable to consider the supremum of a given set (or construct the supremum) than it would be to consider or construct the maximum.

REPLY [20 votes]: The short answer is that if the maximum exists, there is no difference. But it is possible for a supremum to exist where the maximum does not.

REPLY [5 votes]: Taken directly from the Wikipedia page:

-In mathematics, given a subset S of a totally or partially ordered set T, the supremum (sup) of S, if it exists, is the least element of T that is greater than or equal to every element of S. Consequently, the supremum is also referred to as the least upper bound (lub or LUB). If the supremum exists, it is unique. If S contains a greatest element, then that element is the supremum; otherwise, the supremum does not belong to S (or does not exist). For instance, the negative real numbers do not have a greatest element, and their supremum is 0 (which is not a negative real number).<|endoftext|>
-TITLE: Conformal automorphism of $H^n$
-QUESTION [5 upvotes]: I was looking for the characterization (or a complete list) of the conformal automorphisms of the upper half space $H^n$ in $R^n$. I know that when $n=2$, it is $PSL(2,R)$, and when $n=3$, it is $PSL(2,C)$. Is there a general characterization of the conformal automorphisms of $H^n$? (Note that I need the characterization for the upper half space model, not the Poincare ball model.) The forms/expressions for these conformal automorphisms are very important to me.
-Feel free to state a reference etc. Thank you!

REPLY [6 votes]: Liouville's Theorem for Conformal Maps states that every conformal map defined on a domain in $\mathbb{R}^n$ ($n\geq 3$) is the composition of translations, dilations, rotations, and inversions through spheres. Using this theorem, it is possible to prove that every conformal automorphism of $H^n$ is a hyperbolic isometry.
-It follows from this that $\mathrm{Aut}(H^n)$ is isomorphic to the group $SO^+(1,n)$ (an indefinite special orthogonal group). Thus, one way of keeping track of such automorphisms is to use $(n+1)\times(n+1)$ matrices of a certain type.
In particular, let $\mathrm{Hyp}$ denote the hyperboloid $-x_0^2 + x_1^2 + \cdots + x_n^2 = -1$ in $\mathbb{R}^{n+1}$, and let $\mathrm{Hyp}^+$ be the portion of the hyperboloid for which $x_0>0$. Then $SO^+(1,n)$ acts on $\mathrm{Hyp}^+$ via linear transformations. If we conjugate by the map $f\colon \mathrm{Hyp}^+\to H^n$ defined by
$$
f(x_0,x_1,\ldots,x_n) \;=\; \left(\frac{x_2}{x_0+x_1},\ldots,\frac{x_n}{x_0+x_1},\frac{1}{x_0+x_1}\right)
$$
then we get an action of $SO^+(1,n)$ on $H^n$.
-Unfortunately, this description doesn't give much geometric insight, and isn't closely related to the upper half-space $H^n$. What follows is a more geometric description of these automorphisms. I'm not sure whether or not it is helpful, but it's much more closely related to the upper half-space model. At the very least, it gives you a good way of constructing examples of conformal automorphisms of $H^n$.
-Consider a hyperbolic isometry $\alpha\colon H^n\to H^n$. As in dimensions two and three, any such isometry extends to the hyperbolic boundary, which is $\mathbb{R}^{n-1}\cup\{\infty\}$. There are three cases:
-Case 1: The isometry $\alpha$ fixes $\infty$. In this case, $\alpha$ must simply be a Euclidean similarity $H^n\to H^n$. In particular, $\alpha$ has the form
$$
\alpha(\textbf{x},z) = (kA\textbf{x}+\textbf{b},kz)
$$
for all $(\textbf{x},z) \in\mathbb{R}^{n-1}\times(0,\infty)$, where $k$ is a positive real number, $\textbf{b}\in\mathbb{R}^{n-1}$, and $A\in SO(n-1)$. (Note that any map of this form is indeed a conformal automorphism of $H^n$.)
-Case 2: The isometry $\alpha$ maps the origin to $\infty$. In this case, let $\rho\colon H^n\to H^n$ be the map
$$
\rho(\textbf{x},z)=\frac{1}{\|\textbf{x}\|^2 + z^2}(B\textbf{x},z)
$$
where $B$ is some orientation-reversing element of $O(n-1)$, e.g. negation of the first coordinate. (The map $\rho$ is essentially the composition of inversion through the unit sphere with the reflection $B$.) Then $\rho$ is an automorphism of $H^n$ that maps the origin to $\infty$, and $\alpha$ can be expressed as $\beta\circ \rho$, where $\beta$ is some automorphism of $H^n$ that fixes $\infty$ (see case 1).
-Case 3: The isometry $\alpha$ maps some other boundary point $(\textbf{p},0)$ to $\infty$. In this case, let $\tau$ be the translation $\tau(\textbf{x},z) = (\textbf{x}-\textbf{p},z)$. Then $\alpha$ can be written as $\beta\circ\tau$, where $\beta$ is an automorphism of $H^n$ that maps the origin to $\infty$.
-Conclusion: Every conformal automorphism of $H^n$ is either a Euclidean similarity transformation, or can be expressed uniquely as the composition of a translation $\tau$, the map $\rho$, and a Euclidean similarity transformation.
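As a numerical spot-check that the map $\rho$ of Case 2 really is a hyperbolic isometry, one can test (script and helper names mine) that it preserves the standard upper-half-space distance formula $\cosh d = 1 + |p-q|^2/(2 z_p z_q)$ at random points of $H^3$:

import numpy as np

def rho(x, z):                      # inversion composed with B = reflect first coordinate
    s = x @ x + z * z
    y = x.copy(); y[0] = -y[0]
    return y / s, z / s

def cosh_dist(p, q):                # cosh of hyperbolic distance in upper half-space
    (x1, z1), (x2, z2) = p, q
    return 1 + ((x1 - x2) @ (x1 - x2) + (z1 - z2) ** 2) / (2 * z1 * z2)

rng = np.random.default_rng(1)
p = (rng.normal(size=2), 1.3)       # points of H^3: (x, z) with x in R^2, z > 0
q = (rng.normal(size=2), 0.7)
print(cosh_dist(p, q), cosh_dist(rho(*p), rho(*q)))   # the two values agree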
<|endoftext|>
-TITLE: If $a_0\in R$ is a unit, then $\sum_{k=0}^{\infty}a_k x^k$ is a unit in $R[[x]]$
-QUESTION [6 upvotes]: Let $R$ be a ring, and let $$\displaystyle R[[x]]=\left\{\sum_{k=0}^{\infty}a_k x^k\;\middle\vert\; a_k\in R\right\}$$ with addition and multiplication as defined for polynomials. We have that $R[[x]]$ is a ring containing $R[x]$ as a subring.
-How does one prove that if $a_0\in R$ is a unit, then $\displaystyle\sum_{k=0}^{\infty}a_k x^k$ is a unit in $R[[x]]$?

-REPLY [9 votes]: Let $a=\sum_{k=0}^\infty a_kx^k\in R[[x]]$, where $a_0$ is a unit. We want to construct some $b=\sum_{k=0}^\infty b_kx^k\in R[[x]]$ such that $ab=1$, or after expanding,
$$ab=a_0b_0+(a_1b_0+a_0b_1)x+\cdots=1+0x+0x^2+\cdots$$
-We therefore need $b_0=a_0^{-1}$ (recall that $a_0$ is a unit).
We want to have $a_1b_0+a_0b_1=0$, so our only choice for $b_1$ is $$b_1=\frac{-a_1b_0}{a_0}=-a_1a_0^{-2}.$$ We want $a_2b_0+a_1b_1+a_0b_2=0$, so we must have $$b_2=\frac{-a_2b_0-a_1b_1}{a_0}=-a_2a_0^{-2}+a_1^2a_0^{-3}.$$
-Prove that, by continuing this process, you get a $b$ such that $ab=1$.
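If it helps to see the recurrence in action, here is a tiny sketch over $\mathbb{Q}$ (code mine): $b_0=a_0^{-1}$ and $b_k=-a_0^{-1}\sum_{j\geq 1} a_j b_{k-j}$, and the product is $1$ up to the working order.

from fractions import Fraction

def series_inverse(a, order):            # coefficients of 1/a modulo x^order
    b = [1 / Fraction(a[0])]
    for k in range(1, order):
        s = sum(Fraction(a[j]) * b[k - j] for j in range(1, min(k, len(a) - 1) + 1))
        b.append(-s / Fraction(a[0]))
    return b

a = [1, 2, 3]                            # a(x) = 1 + 2x + 3x^2, with a_0 a unit in Q
b = series_inverse(a, 6)
prod = [sum(Fraction(a[j]) * b[k - j] for j in range(0, min(k, len(a) - 1) + 1))
        for k in range(6)]
print(prod)   # [Fraction(1, 1), Fraction(0, 1), ...], i.e. ab = 1 + O(x^6)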
REPLY [2 votes]: One trick is to get the geometric series involved. We can write
$$\left(\sum_{k=0}^\infty a_k x^k\right)^{-1}=\frac{1}{a_0}\frac{1}{1+\underbrace{\left(\sum\limits_{k=1}^\infty a_0^{-1}a_kx^k\right)}_{u(x)}}. \tag{$\circ$}$$
-Now $x\mid u(x)$, so in the $(x)$-adic topology the sequence $u(x),u(x)^2,u(x)^3,\cdots$ converges to $0$; equivalently, the number of terms contributing to the coefficient of $x^n$ from these powers is guaranteed to be finite for all $n$. Thus if you expand $(\circ)$ in a geometric series, the element of $R[[x]]$ it converges to will be a legitimate multiplicative inverse (by the usual algebra establishing the geometric sum formula).

REPLY [2 votes]: Hint $\rm\quad 1\: =\: (a-xf)(b-xg)\ \Rightarrow\ \color{#c00}{ab=1}$
$$\Rightarrow\ \ \displaystyle\rm\frac{1}{a-xf}\ = \dfrac{b}{\color{#c00}b(\color{#c00}a-xf)}\ =\ \frac{b}{1-bxf}\ =\ b\:(1+bxf+(bxf)^2+(bxf)^3+\:\cdots\:)$$
-By the way, such rings are called rings of (formal) power series.
-Corollary $ $ When $R$ is a field, the fraction field of $R[[x]]$ has Laurent form, i.e. every fraction can be written with denominator $\rm\,x^n,\,$ by $\rm\,g/h = g/(x^n(a\!-\!xf)) = gh'/x^n\,$ for $\rm\, h'\! = (a\!-\!xf)^{-1}\!\in R[[x]]$<|endoftext|>
-TITLE: What is the difference between plus-minus and minus-plus?
-QUESTION [15 upvotes]: Possible Duplicate:
-What is the purpose of the $\mp$ symbol in mathematical usage?

-Just as the title explains. I've seen my professor actually differentiating between those two. Do they not mean the same?

REPLY [25 votes]: If you write
$$\cos(a \pm b) = \cos a \cos b \mp \sin a \sin b,$$
then + on the left side corresponds to minus on the right side, and - on the left side corresponds to + on the right side.
-Standing alone, they mean the same.

REPLY [8 votes]: If it stands alone, say $a \pm b$, then it means the same as $a \mp b$. However, if they both occur in the same statement, such as $a\pm b \mp c$, then you may pick the "top" row or the "bottom" row of operators. In this case $a+b-c$ and $a-b+c$ would be what is intended. But $a+b+c$ and $a-b-c$ would not be allowed.

REPLY [7 votes]: In general, we use $\pm$, but when we want to correlate a change of sign we also use $\mp$. For example: $2 (x \pm y) = 2x \pm 2y$, meaning that $2(x+y) = 2x + 2y$, and that $2(x-y) = 2x-2y$. Now, if we want the second sign to be the opposite of the first, we use $\mp$. For example: $-2(x \pm y) = -2x \mp 2y$ would mean that $-2(x+y) = -2x - 2y$ and $-2(x-y) = -2x + 2y$.
-That is, whenever we have an expression involving $\pm$ or $\mp$, it's actually an abbreviation for two expressions: one in which we read all the top symbols ($+$ in $\pm$ and $-$ in $\mp$), and another one in which we read all the bottom symbols.
-Common examples:
-$\sin (x\pm y) = \sin x \cos y \pm \cos x \sin y$ means $\begin{cases} \sin (x+ y) = \sin x \cos y +\cos x \sin y \\
-\sin (x- y) = \sin x \cos y - \cos x \sin y
-\end{cases}$
-$\cos (x \pm y) = \cos x \cos y \mp \sin x \sin y$ means $\begin{cases} \cos (x + y) = \cos x \cos y - \sin x \sin y\\
-\cos (x - y) = \cos x \cos y + \sin x \sin y\end{cases}$
-Now, when we don't have any changes of sign, like in $\sin (x\pm y) = \sin x \cos y \pm \cos x \sin y$ (all the top symbols are $+$), we could also write $\sin (x\mp y) = \sin x \cos y \mp \cos x \sin y$, and it would be the same, but this isn't common usage. The symbol $\mp$ only appears when there's already a $\pm$, but we want to establish a correspondence between opposite signs in an equation.
-Note that it's only a matter of style; we could dispose completely of $\mp$ and use $\pm -$ instead, e.g., $\cos (x \pm y) = \cos x \cos y \pm (- \sin x \sin y)$.<|endoftext|>
-TITLE: Restriction maps for structure sheaf of Spec A
-QUESTION [5 upvotes]: For the space $X = \operatorname{Spec} A$, we define the structure sheaf $\mathcal{O}_X$ as follows. For an open subset $U \subseteq X$, we let $\mathcal{O}_X(U)$ be the projective limit of the family $\{ A_f : f \in A, D(f) \subseteq U \}$ indexed with the partial order $f \le g \iff D(f) \subseteq D(g)$. (Here $A_f$ denotes the localization of $A$ at $f$.) I am having trouble understanding how to define the restriction maps $\rho^U_V : \mathcal{O}_X(U) \to \mathcal{O}_X(V)$, for $V \subseteq U$. I understand it should be induced from $\rho^{D(g)}_{D(f)} : A_g \to A_f$ somehow ($D(f) \subseteq D(g)$), but I can't quite figure out what it should be.
-($D(f)$ are the principal open sets.)

-REPLY [6 votes]: If $V \subset U$ inside of $\operatorname{Spec} A$ then the principal open subsets contained in $V$ form a subfamily of those contained in $U$. By the universal property of the inverse limit, one way to give a homomorphism $\mathscr O(U) \to \mathscr O(V)$ is to give a homomorphism $\mathscr O(U) \to A_f$ for each $f \in A$ such that $D(f) \subset V$, and in a compatible way. Since $D(f) \subset U$, the projection corresponding to $f$ that comes with the inverse limit defining $\mathscr O(U)$ will do this job nicely!<|endoftext|>
-TITLE: Consecutive non square free numbers
-QUESTION [8 upvotes]: I was thinking of solving this with computer programs, but I would prefer a solution.
-How does one obtain a list of 3 consecutive non-square-free positive integers? In general, how does one obtain the same kind of list with $k$ elements? Thanks.

-REPLY [2 votes]: See http://oeis.org/A045882 and references given there.
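A short brute-force sketch (code mine) reproduces entries of that OEIS sequence:

def first_run(k, limit=100000):          # first k consecutive non-squarefree numbers
    squarefree = [True] * limit
    for d in range(2, int(limit ** 0.5) + 1):
        for m in range(d * d, limit, d * d):
            squarefree[m] = False
    run = 0
    for m in range(2, limit):
        run = 0 if squarefree[m] else run + 1
        if run == k:
            return m - k + 1

print([first_run(k) for k in (2, 3, 4, 5)])   # [8, 48, 242, 844]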
<|endoftext|>
-TITLE: Sanity check, is $\{(-9,-3),(2,-1),(7,7),(-1,-1)\}$ a function?
-QUESTION [6 upvotes]: EDIT#2: Yes, I'm crazy! This IS a function. Thanks for beating the correct logic into me everyone!
-I'm using a website provided by my algebra textbook that has questions and answers. It has the following question:
-Determine whether the following relation represents a function:
-$$\{(-9,-3),(2,-1),(7,7),(-1,-1)\}$$
-I answered NO, it is not a function, but the website says it is. Am I wrong? If so, what am I missing?
-EDIT: I was given the following definition in class:

-Function: A function is a rule which assigns to each X, called the domain, a unique y, called the range.

-My instructor also said that if you plot the points you can tell if it is not a function if it fails the vertical line test.
Here is the graph of the above points, and for example it would fail the vertical line test if I drew one on $x = 1$, right?

-Thanks!
-Jason

REPLY [2 votes]: One way to precisely define a function is as follows: A function is a collection of ordered pairs, no two of which have the same first term. From this definition, it is immediate that your collection of ordered pairs is a function.<|endoftext|>
-TITLE: Understanding the definition of a compact set
-QUESTION [19 upvotes]: I just need a bit of help clarifying the definition of a compact set.
-Let's start with the textbook definition:

-A set $S$ is called compact if, whenever it is covered by a collection of open sets $\{G\}$, $S$ is also covered by a finite sub-collection $\{H\}$ of $\{G\}$.

-Question: Does $\{H\}$ need to be a proper subset of $\{G\}$? If, for instance, $\{G\}$ is already a finite collection, does that mean $S$ is automatically covered by a finite sub-collection of $\{G\}$? Also, is there any need for the open sets in $\{H\}$ to be bounded sets?

REPLY [33 votes]: As with many statements involving nested quantifiers, it may help to think of this in terms of a game. Suppose you are trying to prove that a certain space $G$ is compact. $G$ is compact if, for every open covering $C$ of $G$, there is a finite subcovering. So the game goes like this:

-You say “$G$ is compact.”
-Your adversary says “It is not. Here is an open covering $C$.” (The adversary gives you a family of open sets whose union contains $G$.)
-You reply “Here is a finite subcovering of $C$.” (You reply with a finite subset of $C$ whose union still contains $G$.)

-If you succeed in step 3, you win. If you fail, you lose. (If you're trying to prove that $G$ is not compact, you and the adversary exchange roles.)
-If the adversary presents a finite open covering $C$ in step 2, you have an easy countermove in step 3: just hand back $C$ itself, and you win!
-But to prove that $G$ is compact you also have to be able to find a countermove for any infinite covering $C$ that the adversary gives you.
-Must your finite subcovering be a proper subset of $C$? No. If this were required, the adversary would always be able to win in step 2 by handing you a covering $C$ with only a single element, $C=\{ G \}$. Then the only proper subset you could hand back would be $\lbrace\mathstrut\rbrace$, which is not a covering of $G$, and therefore there would be no nonempty compact sets. That would be silly, so you have to be allowed to hand back $C$ unchanged in step 3.<|endoftext|>
-TITLE: Does this notion of morphism of noncommutative rings appear in the ring theory literature?
-QUESTION [11 upvotes]: Definition: Let $R, S$ be two rings. A classical morphism $\phi : R \to S$ is a function from elements of $R$ to elements of $S$ which restricts to a homomorphism (of rings, in the usual sense) on commutative subrings of $R$.

-This definition is motivated by quantum mechanics; roughly speaking $\phi$ preserves what a classical observer can observe about the noncommutative spaces $\text{Spec } R$ and $\text{Spec } S$. See the discussion at the nLab page on the Bohr topos. Actually it should be more like this:

-Definition: Let $R, S$ be two $^\ast$-rings. A classical morphism $\phi : R \to S$ is a function from normal elements of $R$ (elements such that $r^{\ast} r = r r^{\ast}$) to normal elements of $S$ which restricts to a $^{\ast}$-homomorphism on commutative $^{\ast}$-subrings of $R$.
-
-This definition allows, among other things, an elegant statement of the Kochen-Specker theorem, which can be restated as the claim that if $H$ is a Hilbert space of dimension at least $3$, then the algebra $B(H)$ of bounded linear operators $H \to H$ does not admit a classical morphism to $\mathbb{C}$.
-Has this definition been studied from a purely ring theory or noncommutative geometry point of view? Have basic properties of the corresponding category been worked out somewhere?
-
-REPLY [6 votes]: In a recent paper, I adapted some of the terminology of Kochen and Specker's original paper to a more ring-theoretic context. I would refer to your first type of morphism as a morphism of partial $\mathbb{Z}$-algebras from $R$ to $S$, or even better as a morphism of partial rings. The point is that every ring has the underlying structure of a partial ring (i.e., there are categories of rings and of partial rings, and a forgetful functor from rings to partial rings). The functions you are thinking about are the morphisms of the second category.
-Regarding your second definition, it may be helpful for you to look at this paper of Benno van den Berg and Chris Heunen. They define the category of partial C*-algebras. Given a C*-algebra $A$, one may only consider $A$ itself as a partial C*-algebra if $A$ is commutative. But the subset $N(A)$ of normal elements of $A$ is always a partial C*-algebra, even if $A$ is not commutative. In fact, "taking the normal part" forms a functor from the category of C*-algebras to the category of partial C*-algebras.
-So a clean way to fit all of these ideas together would be to define the category of "partial *-rings" in the obvious way, so that the set $N(R)$ of normal elements of any *-ring $R$ is a partial *-ring and so that the functions you describe above are exactly the morphisms of partial *-rings from $N(R)$ to $N(S)$.
-Heunen and van den Berg have worked out some interesting properties of the category of partial C*-algebras. If you need particular facts of theirs for partial *-rings, I would imagine that many of their results should translate easily.<|endoftext|>
-TITLE: What is the mutual information $I(X;X)$?
-QUESTION [6 upvotes]: $X$ is a random variable with normal distribution; assume $Y=X$. What is the mutual information $I(X;Y)$?
-I guess that $h(Y|X)=0$ since when $X$ is known, $Y$ is completely known, so $$I(X;Y)=h(Y)-h(Y|X)=h(Y)=\frac{1}{2}\log 2\pi e\sigma^2$$ nat.
-But I was told I was wrong! And a numerical computation also shows that the value of $$I(X;Y) \neq \frac{1}{2}\log 2\pi e\sigma^2$$ Where is my mistake? Please help me out of this problem, thanks a lot! (Please note that $X$ and $Y$ are both continuous).
-
-REPLY [7 votes]: For any r.v. $X$
-$$I(X,X)=H(X).$$
-To see this, put $Y=X$ in $I(X,Y)=H(X)-H(X|Y)$ and use
-$$H(X|X)=0.~~ (*)$$
-In summary, the mutual information of $X$ with itself is just its self-information $H(X)$, since $H(X|X)$, the residual average information carried by $X$ conditional on $X$, is zero, i.e. $(*)$ holds. (Note that these identities are for discrete entropy; for a continuous $X$, as in the question, $h(X|X)=-\infty$ rather than $0$, and $I(X;X)$ is infinite — which is exactly where the differential-entropy computation above goes wrong.)<|endoftext|>
-TITLE: Set defined by $xy-zw=1$
-QUESTION [5 upvotes]: This should be an easy question, but I found it ungooglable and not obvious to visualize...
-What geometric object is defined by the equation $xy-zw=1$ in $\mathbb R^4$? And what is the homotopy type of the complement?
-
-REPLY [5 votes]: It's $\text{SL}_2(\mathbb{R})$, of course!
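-(Indeed, $xy-zw$ is the determinant of $\begin{pmatrix} x & z \\ w & y \end{pmatrix}$, so the equation cuts out the determinant-one matrices. A quick change-of-variables check — my own sympy sketch, with throwaway variable names — confirms the signature-$(2,2)$ realization used next:)
-
-    import sympy as sp
-
-    a, b, c, d = sp.symbols('a b c d')
-    # Substitute x = a+b, y = a-b, z = c-d, w = c+d into xy - zw:
-    print(sp.expand((a + b)*(a - b) - (c - d)*(c + d)))
-    # a**2 - b**2 - c**2 + d**2, so xy - zw = 1 is the quadric
-    # a**2 + d**2 - b**2 - c**2 = 1, of signature (2, 2).
-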
As a "geometric object" it may equivalently be realized as the unit sphere $x^2 + y^2 - z^2 - w^2 = 1$ in $\mathbb{R}^{2,2}$, which exhibits it (or maybe one of its connected components?) as a homogeneous space for the orthogonal group $\text{O}(2, 2)$. In particular it can be given the structure of a pseudo-Riemannian manifold.
-The complement of the unit sphere in $\mathbb{R}^{2, 2}$ has two connected components
-$$X = \{ (x, y, z, w) : x^2 + y^2 - z^2 - w^2 > 1 \}$$
-$$Y = \{ (x, y, z, w) : x^2 + y^2 - z^2 - w^2 < 1 \}.$$
-$X$ deformation retracts via the straight-line homotopy $(x, y, (1-t)z, (1-t)w)$ to $\{ (x, y) : x^2 + y^2 > 1 \}$, which is homotopy equivalent to $S^1$.
-$Y$ deformation retracts via the straight-line homotopy $((1-t)x, (1-t)y, z, w)$ to $\{ (z, w) : z^2 + w^2 > -1 \}$, which is contractible.
-
-REPLY [3 votes]: It's hard to visualize a 3-dimensional hypersurface in 4 dimensions. But perhaps this animation may help, showing cross-sections at different values of $w$. Note that these are hyperbolic paraboloids except at $w=0$ where you have a hyperbolic cylinder.
-http://www.math.ubc.ca/~israel/problems/surf2.gif<|endoftext|>
-TITLE: About the uniqueness of rank-1 decomposition of a positive-definite Hermitian matrix
-QUESTION [5 upvotes]: Suppose $T$ is a positive-definite Hermitian matrix and I know that it can be expressed by eigen-decomposition as the following sum of rank-1 matrices: $\textbf{T}= \sum \lambda _{k} \textbf{u}_{k} \textbf{u}_{k}^{H}$, where the $\textbf{u}_{k}$ are orthogonal to each other.
-But my question is: is this rank-1 decomposition unique? For example, can $T$ also be written in other forms, say $\textbf{T}= \sum \gamma _{k} \textbf{v}_{k} \textbf{v}_{k}^{H}$, where in this case the $\textbf{v}_{k}$ do not necessarily need to be orthogonal vectors? If so, is there any relationship between $\textbf{u}_{k}$ and $\textbf{v}_{k}$?
-Thanks.
-
-REPLY [4 votes]: Nope. For example (exercise!), the identity matrix is equal to $\sum \textbf{u}_k \textbf{u}_k^H$ for every orthonormal basis $\textbf{u}_k$.<|endoftext|>
-TITLE: Every finite subgroup of $\mathbb{Q}/\mathbb{Z}$ is cyclic
-QUESTION [14 upvotes]: Show that every finite subgroup of the quotient group $\mathbb{Q}/\mathbb{Z}$ (under addition) is cyclic.
-Note: there is a related problem which I just proved: "Let $G$ be a finite abelian group; then $G$ is non-cyclic iff $G$ has a subgroup isomorphic to $C_p \times C_p$ for some prime $p$."
-Since $\mathbb{Q}/\mathbb{Z}$ is abelian, based on the related problem it suffices to show it has no subgroup isomorphic to $C_p \times C_p$ for any prime $p$. I tried to start a proof by contradiction: Let $\mathbb{Z}
-TITLE: Power series and singularity
-QUESTION [8 upvotes]: Consider the power series $\sum a_n z^n$. Given that $a_n$ converges to $0$, prove that
 $f(z)$ cannot have a pole on the unit circle, where $f(z)$ is the function represented by the power series in the question.
-EDIT
-I have thought of an answer for it. Since $a_n$ converges to $0$, we can write $\lvert a_n \rvert <1$ for all $n >N_0$. From here, we can say the radius of convergence of the power series is bigger than or equal to $1$. If the radius of convergence is bigger than $1$, the series converges on the unit circle. If it is equal to $1$, then points on the unit circle cannot be isolated singularities. But I am not sure of my answer.
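-(To convince myself numerically — a quick sketch of my own, not part of the argument above — I tried the concrete coefficients $a_n=1/n$, which do tend to $0$: as $r\to1^-$ the sums grow only like $\log\frac{1}{1-r}$, far slower than the $(1-r)^{-1}$ blow-up a simple pole at $z=1$ would force.)
-
-    import math
-
-    # Partial sums of sum_{n>=1} r**n / n as r -> 1-.  A simple pole at z = 1
-    # would force growth ~ C/(1-r); the observed growth is only logarithmic.
-    for r in [0.9, 0.99, 0.999, 0.9999]:
-        s = sum(r**n / n for n in range(1, 200000))
-        print(r, s, math.log(1/(1 - r)))
-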
-
-REPLY [4 votes]: A power series whose coefficients tend to $0$, and whose radius of convergence is therefore at least $1$, can define a function that has isolated singularities on the unit circle; only those isolated singularities cannot be poles. So your reasoning is not correct, and this claim needs to be changed.
-Take for instance the power series with $a_0=0$ and $a_n=\frac1n$ for $n>0$. This defines the function $f: z\mapsto\ln(\frac1{1-z})$ which has an isolated singularity at $1$.
-So instead of singularities, you should be thinking about what having a pole means.<|endoftext|>
-TITLE: Calculate: $\sum_{k=0}^{n-2} 2^{k} \tan \left(\frac{\pi}{2^{n-k}}\right)$
-QUESTION [11 upvotes]: Calculate the following sum for integers $n\ge2$:
-$$\sum_{k=0}^{n-2} 2^{k} \tan \left(\frac{\pi}{2^{n-k}}\right)$$
-I'm trying to obtain a closed form if that is possible.
-
-REPLY [10 votes]: Consider
-$$\prod_{k = 0}^{n - 2}\cos(2^k \theta)$$
-Multiplying numerator and denominator by $2\sin(\theta)$ we get
-$$\frac{2\sin(\theta)\cos(\theta)}{2\sin(\theta)}\prod_{k = 1}^{n - 2} \cos(2^k\theta) = \frac{\sin(2\theta)}{2\sin(\theta)}\prod_{k = 1}^{n - 2} \cos(2^k\theta)$$
-Now, repeatedly multiplying and dividing by $2$, we can reduce the above to
-$$\prod_{k = 0}^{n - 2}\cos(2^k \theta) = \frac{\sin(2^{n - 1} \theta)}{2^{n - 1} \sin(\theta)}$$
-Take logs on both sides,
-$$\sum_{k = 0}^{n - 2}\ln(\cos(2^k \theta)) = \ln(\sin(2^{n - 1} \theta)) - \ln(2^{n - 1}) - \ln(\sin(\theta))$$
-Differentiating both sides w.r.t. $\theta$ we get
-$$-\sum_{k = 0}^{n - 2}2^k\tan(2^k \theta) = 2^{n - 1}\cot(2^{n - 1} \theta) - \cot(\theta)$$
-Substitute $\theta = \frac{\pi}{2^n}$ above to get
-$$\sum_{k = 0}^{n - 2}2^k\tan\left(\frac{\pi}{2^{n - k}}\right) = \cot\left(\frac{\pi}{2^n}\right)$$<|endoftext|>
-TITLE: cohomology vs homology
-QUESTION [6 upvotes]: I have learned the basic things about cohomology and homology. It seems that homology and cohomology both deal with the same objects, the complexes, but with a different choice of the indexes (for homology the indexes are decreasing and in cohomology they are increasing).
-Why, then, do some geometric applications make a radical choice here, and why is it important to distinguish homology from cohomology?
-For example, if one constructs a Čech homology instead of a Čech cohomology, what are the differences?
-
-REPLY [13 votes]: One short answer is that cohomology (of spaces, not of general chain complexes) carries a ring structure, the cup product. There are spaces with the same homology and cohomology as groups, but where the ring structure on the cohomologies is different; in this case one can use the cup product to distinguish them.
-There is probably a lot one can say by way of a longer answer. For example, for nice spaces cohomology is representable by Eilenberg-MacLane spaces, so one can say a lot about cohomology by studying these spaces (e.g. classifying cohomology operations using the Yoneda lemma), and analogous things don't seem to be true of homology.<|endoftext|>
-TITLE: Radius either integer or $\sqrt{2}\cdot$integer
-QUESTION [8 upvotes]: Given a circle about the origin with exactly $100$ integral points (points with both coordinates integers), prove that its radius is either an integer or $\sqrt{2}$ times an integer.
-Here is my solution:
-Since the circle is about the origin, the integral points are symmetric about the $x$-axis and the $y$-axis, as well as the lines $x=y$ and $x+y=0$, i.e.
if $(x,y)$ is an integral point, so are $(x,-y),(-x,-y),(-x,y),(y,x),(y,-x),(-y,x)$ and $(-y,-x)$. Therefore, we need to consider only a single octant.
-Since there are a total of $100$ integral points, two cases are possible:
-1) the radius of the circle is an integer;
-2) the radius of the circle is not an integer.
-Case 1:
-If the radius is an integer, then the $4$ points of the circle on the $x$-axis and $y$-axis are integral points, and hence each octant must have $12$ points (as $100-4=96$ is a multiple of $8$).
-Therefore, this case is consistent.
-Case 2:
-If the radius is not an integer, then the points of the circle on the $x$-axis and $y$-axis are not integral points, and the $100$ integral points can't by themselves be divided evenly into the $8$ octants; therefore the points on the lines $x=y$ and $x+y=0$ must be integral points, so as to divide the remaining $100-4=96$ points into $8$ parts.
-But since the point on the line $x=y$ and the circle is of the form $(r\cdot\cos(45^\circ),r\cdot\sin(45^\circ))$, it follows that $r/\sqrt{2}$ is an integer and hence $r=\sqrt{2}\cdot$integer.
-The other points of the circle on these lines are consistent with this.
-So, I proved that either the radius is an integer, and if not then it has to be $\sqrt{2}\cdot$integer.
-Is there any flaw in my arguments?
-I couldn't find the proof to check whether mine is correct.
-Thanks in advance!!
-
-REPLY [3 votes]: All the ingredients are here, but the flow of the argument is not optimal. A smooth proof of the claim would begin with "Assume the set $S:=\gamma\cap{\mathbb Z}^2$ contains $100$ elements. Then $\ldots$", or it should begin with "Assume the radius of $\gamma$ is neither an integer nor $\sqrt{2}$ times an integer. Then $\ldots$".
-The essential point (which does not come out clearly in your argument) is the following: The group of symmetries of $S$ is the dihedral group $D_4$, which is of order $8$. Since $100$ is not divisible by $8$ this action has nontrivial fixed points, i.e. points on the lines $x=0$, $y=0$, $y=\pm x$.<|endoftext|>
-TITLE: Expectation value of a product of an Ito integral and a function of a Brownian motion
-QUESTION [6 upvotes]: This problem has come up in my research and is confusing me immensely; any light you can shed would be deeply appreciated.
-Let $B(t)$ denote a standard Brownian motion (Wiener process), such that the difference $B(t)-B(s)$ has a normal distribution with zero mean and variance $t-s$.
-I am seeking an expression for
-$$E\left[ \cos(B(t))\int\limits_0^t \sin(B(s))\,\textrm{d}B(s) \right],$$
-where the integral is a stochastic Itô integral. My first thought was that the expectation of the integral alone is zero, and that the two terms are statistically independent, hence the whole thing gives zero. However, I can't prove this.
-To give you a little background: this expression arises as one of several terms in a calculation of the second moment of the integral
-$$\int\limits_{0}^{t}\cos(B(s))\,\textrm{d}s,$$
-after applying Itô's lemma and squaring. I can simulate this numerically, so I should know when I get the right final expression!
-Thanks.
-
-REPLY [8 votes]: This addresses the question cited as a motivation.
-For every $t\geqslant0$, introduce $X_t=\int\limits_{0}^{t}\cos(B_s)\,\textrm{d}s$ and $m(t)=\mathrm E(\cos(B_t))=\mathrm E(\cos(\sqrt{t}Z))$, where $Z$ is standard normal.
-Then $\mathrm E(X_t)=\int\limits_{0}^{t}m(s)\,\textrm{d}s$ and $\mathrm E(X_t^2)=\int\limits_{0}^{t}\int\limits_{u}^{t}2\mathrm E(\cos(B_s)\cos(B_u))\,\textrm{d}s\textrm{d}u$.
-For every $s\geqslant u\geqslant0$, one has $2\cos(B_s)\cos(B_u)=\cos(B_s+B_u)+\cos(B_s-B_u)$. Furthermore, $B_s+B_u=2B_u+(B_s-B_u)$ is normal with variance $4u+(s-u)=s+3u$ and $B_s-B_u$ is normal with variance $s-u$. Hence, $2\mathrm E(\cos(B_s)\cos(B_u))=m(s+3u)+m(s-u)$, which implies
-$$
-\mathrm E(X_t^2)=\int\limits_{0}^{t}\int\limits_{u}^{t}(m(s+3u)+m(s-u))\,\textrm{d}s\textrm{d}u.
-$$
-Since $m(t)=\mathrm e^{-t/2}$, this yields after some standard computations
-$\mathrm E(X_t)=2(1-\mathrm e^{-t/2})$ and
-$$
-\mathrm E(X_t^2)=2t-\frac13(1-\mathrm e^{-2t})-\frac83(1-\mathrm e^{-t/2}).
-$$
-Sanity check: When $t\to0^+$, $\mathrm E(X_t^2)=t^2+o(t^2)$.
-
-To compute the integral $J_t=\mathrm E\left[ \cos(B_t)\int\limits_{0}^{t} \sin(B_s)\,\textrm{d}B_s \right]$, one can start with Itô's formula
-$$
-\cos(B_t)=1-\int\limits_{0}^{t} \sin(B_s)\,\textrm{d}B_s-\frac12\int\limits_{0}^{t} \cos(B_s)\,\textrm{d}s,
-$$
-hence
-$$
-J_t=\mathrm E(\cos(B_t))-\mathrm E(\cos^2(B_t))-\frac12\int\limits_{0}^{t} \mathrm E(\cos(B_t)\cos(B_s))\,\textrm{d}s,
-$$
-and it seems each term can be computed easily.<|endoftext|>
-TITLE: Compute: $\sum_{k=1}^{\infty}\sum_{n=1}^{\infty} \frac{1}{k^2n+2nk+n^2k}$
-QUESTION [8 upvotes]: I am trying to solve the following sum:
-$$\sum_{k=1}^{\infty}\sum_{n=1}^{\infty} \frac{1}{k^2n+2nk+n^2k}$$
-I'm very curious about the possible approaches that lead to a solution. I'm not experienced with these sums, and any hint or suggestion is very welcome. Thanks.
-
-REPLY [6 votes]: Here's another approach.
-It depends primarily on the properties of telescoping series, partial fraction expansion, and the following identity for the $m$th harmonic number
-$$\begin{eqnarray*}
-\sum_{k=1}^\infty \frac{1}{k(k+m)}
-&=& \frac{1}{m}\sum_{k=1}^\infty \left(\frac{1}{k} - \frac{1}{k+m}\right) \\
-&=& \frac{1}{m}\sum_{k=1}^m \frac{1}{k} \\
-&=& \frac{H_m}{m},
-\end{eqnarray*}$$
-where $m=1,2,\ldots$.
-Then,
-$$\begin{eqnarray*}
-\sum_{k=1}^{\infty}\sum_{n=1}^{\infty} \frac{1}{k^2n+2nk+n^2k}
-&=& \sum_{k=1}^{\infty} \frac{1}{k} \sum_{n=1}^{\infty} \frac{1}{n(n+k+2)} \\
-&=& \sum_{k=1}^{\infty} \frac{1}{k} \frac{H_{k+2}}{k+2} \\
-&=& \frac{1}{2} \sum_{k=1}^{\infty}
- \left( \frac{H_{k+2}}{k} - \frac{H_{k+2}}{k+2} \right) \\
-&=& \frac{1}{2} \sum_{k=1}^{\infty}
- \left( \frac{H_k +\frac{1}{k+1}+\frac{1}{k+2}}{k} - \frac{H_{k+2}}{k+2} \right) \\
-&=& \frac{1}{2} \sum_{k=1}^{\infty}
- \left( \frac{H_k}{k} - \frac{H_{k+2}}{k+2} \right)
- + \frac{1}{2} \sum_{k=1}^{\infty}
- \left(\frac{1}{k(k+1)} + \frac{1}{k(k+2)}\right) \\
-&=& \frac{1}{2}\left(H_1 + \frac{H_2}{2}\right)
- + \frac{1}{2}\left(H_1 + \frac{H_2}{2}\right) \\
-&=& \frac{7}{4}.
-\end{eqnarray*}$$<|endoftext|>
-TITLE: Constructing $\exp$ on $\mathbb R$
-QUESTION [5 upvotes]: I am trying to construct the exponential function on $\mathbb R$ by first finding all functions $f$ such that $f = f'$ (which should be all the constant multiples of $\exp$), then characterizing $\exp$ by the initial condition $f(0) = 1$.
-I intend to use only the definitions and properties of derivatives and integrals, and the fundamental theorem(s) of calculus; in particular, I am trying to avoid Taylor series and/or the notion of uniform convergence. Of course, these restrictions are by no means precise, and I really don't know if it's possible to come up with a reasonably "simple" construction using only this set of tools, but I figured I'd give it a try.
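-(One thing I did try — a numerical sketch of the classical Picard iteration $f_{n+1}(x)=1+\int_0^x f_n(t)\,dt$, which admittedly comes close to the Taylor-series machinery I said I wanted to avoid — at least suggests that a solution of $f'=f$, $f(0)=1$ does exist:)
-
-    import sympy as sp
-
-    x = sp.symbols('x')
-    f = sp.Integer(1)                  # starting guess f_0 = 1
-    for _ in range(8):                 # Picard iteration for f' = f, f(0) = 1
-        f = 1 + sp.integrate(f, (x, 0, x))
-    print(f)                           # 1 + x + x**2/2 + ... + x**8/40320
-    print(float(f.subs(x, 1)))         # ~2.71828, recognizably e
-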
-So far I've only managed to prove uniqueness in the sense that if $f = f'$ and $c \in \mathbb R$, then $f$ is uniquely determined by $f(c)$. With this comes the corollary that if $f(c) = 0$ for any $c \in \mathbb R$, then $f = 0$. Existence seems pretty difficult to get to without any of the heavy machinery.
-Here is the proof of uniqueness as described above:
-Lemma. If $f = f'$, $g = g'$, and $c \in \mathbb R$, then $f + c \cdot g = f' + c \cdot g' = f' + (c \cdot g)' = (f + c \cdot g)'$.
-Lemma. If $f = f'$ and $f(c) = 0$ for some $c \in \mathbb R$, then $f(x) = 0$ for all $x \leq c$. (I had trouble proving it for $x > c$.)
-Proof. By the FTC, we have $\int_a^b f = f(b) - f(a)$. If $f(x) > 0$ for $c - \varepsilon \leq x \leq c$, then $\int_{c - \varepsilon}^c f > 0$, but $f(c) - f(c - \varepsilon) < 0$, a contradiction. Similar contradiction for $f(x) < 0$.
-Theorem. If $f = f'$, $g = g'$, and $f(c) = g(c)$ for some $c \in \mathbb R$, then $f = g$.
-Proof. Suppose $f(d) \neq g(d)$ for some $d \in \mathbb R$. WLOG, assume $g(d) \neq 0$. If $d < c$, then $(f - g)(d) \neq 0 = (f - g)(c)$; if $c < d$, then $$\left( f - \frac{f(d)}{g(d)} \cdot g \right)(c) \neq 0 = \left( f - \frac{f(d)}{g(d)} \cdot g \right)(d).$$ In any case, we contradict the second lemma.
-Corollary. If $f = f'$ and $f(c) = 0$ for some $c \in \mathbb R$, then $f = 0$.
-Proof. $0 = 0'$.
-Now I want to prove the existence of $f$ for any initial condition $f(x) = y$, but I don't know how.
-
-REPLY [2 votes]: Suppose $u$ and $v$ are functions so that $u'=u$, $v' = v$ and so $u(0)=v(0) = 1$.
-Put $\Phi(t) = u(t)v(-t)$ for $t\in\mathbb{R}$.
-$$\Phi'(t) = u'(t)v(-t) - u(t)v'(-t) = u(t)v(-t) - u(t)v(-t) = 0.$$
-The function $\Phi$ is constant. Since $\Phi(0) = 1$, $u(t)v(-t) = 1$ for all
-$t\in{\mathbb{R}}$.
-We can apply this result to $v$ and $v$ to get
-$$v(-t) = {1\over v(t)},\qquad t\in\mathbb{R}.$$
-We conclude that $u(t)/v(t) = 1$ for $t\in\mathbb{R}$, so $u = v$.
-Here is one way to get existence.
-Define $l(x) = \int_1^x{dt\over t}$, for $x > 0$. This function is continuous and differentiable on $(0,\infty)$.
-Observe that
-$$l(xy) = \int_1^{xy} {dt\over t} = \int_1^x {dt\over t} + \int_x^{xy}{dt\over t}
-= l(x) + \int_x^{xy} {dt\over t}.$$
-Using a change of variable, we get
-$$\int_x^{xy} {dt\over t} = \int_1^y {x\,dt\over xt} = l(y).$$
-We have $l(xy) = l(x)+l(y)$ for $x, y > 0$. We clearly have $l(1) = 0$.
-And it's not hard to show that $l(x^\alpha) = \alpha l(x)$ for $\alpha\in\mathbb{R}$ and $x > 0$.
-By the fundamental theorem of calculus we know that
-$$l'(x) = 1/x$$
-for $x > 0$. This function is strictly increasing and therefore 1-1 on $(0,\infty)$.
-The inverse function to $l$ will satisfy the initial value problem
-$u'(t) = u(t)$ and $u(0) = 1$. This gives existence.
-You can use the properties of $l$ to see this is an exponential function. The base for this exponential is $l^{-1}(1)$.<|endoftext|>
-TITLE: Galois Group over Finite Field
-QUESTION [10 upvotes]: I am having a bit of difficulty trying to answer the following question:
-
-What is the Galois group of $X^8-1$ over $\mathbb{F}_{11}$?
-
-So far I have factored $X^8-1$ as
-$$X^8-1=(X+10)(X+1)(X^2+1)(X^4+1).$$
-I know $X^2+1$ is irreducible over $\mathbb{F}_{11}$ since $10$ is not a square modulo $11$. Also, $X^4+1$ is irreducible over $\mathbb{F}_{11}$. The roots of $X^2+1$ and $X^4+1$ over $\mathbb{Q}$ are $\pm i$ and $\pm \frac{\sqrt{2}}{2} \pm \frac{\sqrt{2}}{2} i$, respectively.
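-(Before going further — a quick machine check, my own sympy sketch, already makes me doubt the irreducibility claim I just made for $X^4+1$:)
-
-    import sympy as sp
-
-    x = sp.symbols('x')
-    print(sp.factor(x**8 - 1, modulus=11))
-    # Up to ordering: (x + 1)*(x + 10)*(x**2 + 1)*(x**2 + 3*x + 10)*(x**2 + 8*x + 10),
-    # so X**4 + 1 actually splits into two quadratics over F_11.
-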
We also see that $\sqrt{2} \not \in \mathbb{F}_{11}$ since no element squared is equal to $2$. I would then think that $\mathbb{F}_{11}(i, \sqrt{2})$ is a splitting field for $x^8-1$ over $\mathbb{F}_{11}$, which is clearly Galois. If all this were true, I would then venture that the Galois group is $V_4$. I have the feeling, however, that I have made many mistakes in my reasoning. How should one approach a problem like this?
-
-REPLY [3 votes]: Let $E$ be a splitting field of $x^8 - 1$ over $F_{11}$. Then $E = F_{11}(\alpha)$, where $\alpha$ is a primitive $8$th root of unity.
-It follows that $G = \operatorname{Gal}(E/F)$ is isomorphic to a subgroup of $(\mathbb{Z}/8\mathbb{Z})^*$, which is isomorphic to the Klein four-group. Now $G$ cannot have order $1$ because $x^8 - 1$ does not split over $F_{11}$, and it cannot have order $4$ because then it wouldn't be cyclic (see this question). Therefore $G$ must be cyclic of order $2$.
-In general, suppose $F$ is a field and $E = F(\alpha)$, where $\alpha$ is a primitive $n$th root of unity. Then you can show that $\operatorname{Gal}(E/F)$ is isomorphic to a subgroup of $(\mathbb{Z}/n\mathbb{Z})^*$.<|endoftext|>
-TITLE: Do filters on a Boolean algebra also make a Boolean algebra?
-QUESTION [5 upvotes]: Let $\mathfrak{B}=(B,\bot,\top,\lnot,\wedge,\vee)$ be a Boolean algebra, and let $B_F$ be the set of all filters on $\mathfrak B$. For all filters $F$, $G$, let $F \wedge_{B_F} G \colon= \mathbf C(F \cup G)$, in which $\mathbf C$ denotes the filter closure operator; $F \vee_{B_F} G \colon= F \cap G$; $0 \colon= B$; $1 \colon= \{\top\}$. Then $(B_F,0,1,\wedge_{B_F},\vee_{B_F})$ forms a complete lattice.
-My question is: can we add a negation in order to make it a Boolean algebra?
-
-REPLY [6 votes]: It is relatively easy to see that it does not matter whether we work with filters or with ideals.
-The following is taken verbatim from
-Steven R. Givant, Paul Richard Halmos: Introduction to Boolean Algebras
-p.167:
-
-The ideals of a Boolean algebra form a complete, distributive lattice, but
  they do not, in general, form a Boolean algebra. To give an example, it is
  helpful to introduce some terminology. An ideal is maximal if it is a proper
  ideal that is not properly included in any other proper ideal. We shall see in
  the next chapter that an infinite Boolean algebra $B$ always has at least one
  maximal ideal that is not principal. Assume this result for the moment. A
  "complement" of such an ideal $M$ in the lattice of ideals of $B$ would be an
  ideal $N$ with the property that
  $$M\wedge N=\{0\} \qquad\text{and}\qquad M\vee N=B.$$
  Suppose the first equality holds. If $q$ is any element in $N$, then $p \wedge q = 0$,
  and therefore $p \le q'$, for every element $p$ in $M$, by Lemma 1. In other words,
  the ideal $M$ is included in the principal ideal $(q')$. The two ideals must
  be distinct, since $M$ is not principal. This forces $(q')$ to equal $B$, by the
  maximality of $M$. In other words, $q' = 1$, and therefore $q = 0$. What has
  been shown is that the meet $M\wedge N$ can be the trivial ideal only if $N$ itself is
  trivial. In this case, of course, $M \vee N$ is $M$, not $B$. Conclusion: a maximal,
  non-principal ideal does not have a complement in the lattice of ideals.
-
-The existence of maximal ideals, which was used in the above excerpt, is guaranteed by the Boolean prime ideal theorem.<|endoftext|>
-TITLE: What is the importance of determinants in linear algebra?
-QUESTION [7 upvotes]: In some literature on linear algebra determinants play a critical role and are emphasized in the earlier chapters (see books by Anton & Rorres, and Lay). However, in other literature the topic is totally ignored until the later chapters (see Gilbert Strang).
-How much importance should we give the topic of determinants? I tend to use determinants to test linear independence of vectors, and might extend this to finding the inverse, but I think Gauss-Jordan and LU might be easier for inverses. Do they have any other uses in linear algebra?
-Are there areas where determinants are used and have a real impact? Are there any real-life applications of determinants?
-Is there a really good motivating example or explanation which will hook students into this topic?
-In linear algebra, where should determinants be placed? Like I said in my comment - in some literature it is at the beginning whilst in others it is bolted on at the end. I like the idea of checking if vectors are independent by using determinants, so I think they should be placed before independence of vectors. What do you think? If you teach a linear algebra course, where do you place this topic?
-
-REPLY [5 votes]: This is quite an informal answer.
-Determinants basically help to describe the nature of solutions of linear equations. The determinant of a real matrix is just some real number, telling you about the invertibility of the matrix and hence telling you things about linear equations wrapped up in the matrix.
-The determinant being non-zero is equivalent to the matrix being invertible, which is equivalent to the corresponding sets of linear equations having EXACTLY one solution.
-The determinant being zero means the matrix is NOT invertible. In this case the corresponding sets of linear equations can either have infinitely many solutions or none at all (depending on the numbers on the RHS).
-So really the determinant is useful anywhere that linear equations crop up. For example, when checking linear independence, this is the same as demanding the existence of a UNIQUE solution to a set of linear equations (i.e. the zero vector solution). This is the same as the matrix determinant being non-zero as discussed above. Linear dependence must therefore be the same as the determinant being zero (so that there may be non-zero solutions to the equations, i.e. so that some of the vectors really can be made to add up non-trivially to give one of the others).<|endoftext|>
-TITLE: In axiomatization of propositional logic, why can uniform substitution be applied only to axioms?
-QUESTION [14 upvotes]: I'm reading an introductory book about mathematical logic for Computation (just for reference, the book is "Lógica para Computação", by Corrêa da Silva, Finger & Melo), and would like to ask a question.
-I'm currently reading the chapter that talks about deductive systems, and it begins with Axiomatization. More specifically, it talks about axiomatization as a form of logical inference, that is, an axiomatization of classical logic.
-The book is in Portuguese, so all the quotes from it are translated to English.
-Since I'm new to the subject, I'm not sure what I need to specify before asking my particular question, so I will put here some definitions and explanations that are given in the book (the context).
-Context
-Right before it presents an axiomatization of classical logic, it defines the concept of substitution:
-
-The substitution of a formula B for an atom p, inside a formula A, is represented by $A[p:=B]$.
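-[An aside from me, not from the book: mechanically, substitution is nothing more than a recursive replacement over the formula tree. A minimal Python sketch, with my own ad hoc encoding of formulas as nested tuples:]
-
-    # Formulas: ('atom', name) or (connective, subformula, ...).
-    def subst(A, p, B):
-        if A[0] == 'atom':
-            return B if A[1] == p else A
-        return (A[0],) + tuple(subst(sub, p, B) for sub in A[1:])
-
-    A = ('->', ('atom', 'p'), ('and', ('atom', 'p'), ('atom', 'q')))
-    B = ('and', ('atom', 'r'), ('atom', 's'))
-    print(subst(A, 'p', B))  # (r and s) -> ((r and s) and q)
-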
Intuitively, if we have a formula $A=p\to(p\wedge q)$, and we want to substitute $(r\wedge s)$ for $p$, the result of the substitution is: $A[p:=(r\wedge s)]=(r\wedge s)\to((r\wedge s)\wedge q)$.
-
-Then it defines an instance:
-
-When a formula B results from the substitution of one or more atoms of a formula A, we say that B is an instance of the formula A.
-
-Then, it presents an axiomatization for classical propositional logic, which contains several axioms, including, for example, $p\to(q\to p)$ and $(p\to (q\to r))\to((p\to q)\to(p\to r))$. I'm not sure whether I should transcribe the whole set of axioms here, but I think it is not necessary (but I can detail it here if it is necessary).
-The axiomatization also includes the rule of inference modus ponens: From $A\to B$ and $A$, one infers $B$.
-Next, it states that axioms can be instantiated:
-
-Axioms can be instantiated, that is, atoms can be uniformly substituted by any other formula of the logic. In this case, we say that the resulting formula is an instance of the axiom. With the notion of axiomatization, we can define the notion of deduction.
-Definition 2.2.2 A deduction is a sequence of formulae $A_1,...,A_n$ such that every formula in the sequence is either an instance of an axiom, or is obtained from previous formulae by means of inference rules, that is, by modus ponens.
-A theorem $A$ is a formula such that there exists a deduction $A_1,...,A_n = A$. We represent a theorem as $\vdash_{\text{Ax}} A$ or simply $\vdash A$.
-
-Then, it says theorems can also be instantiated to produce new theorems:
-
-The axiomatization presented here possesses the property of uniform substitution, that is, if A is a theorem and B is an instance of A, then B is also a theorem. The reason for that is very simple: if we can apply a substitution to obtain B from A, we can apply the same substitution to the formulas that occur in the deduction of A and, since any instance of an axiom is a deducible formula, we have transformed the deduction of A into a deduction of B.
-
-Doubt
-Now I will present the part of the text that generated my doubt:
-
-Now we will define when a formula $A$ is deducible from a set of formulae $\Gamma$, also called a theory or a set of hypotheses, which is represented by $\Gamma \vdash_{\text{Ax}} A$. In this case, this concerns adapting the notion of deduction to include the elements of $\Gamma$.
-Definition 2.2.3 We say that a formula $A$ is deducible from the set of formulae $\Gamma$ if there is a deduction, that is, a sequence of formulae $A_1,...,A_n = A$ such that every formula $A_i$ in the sequence is:
-1) either a formula $A_i \in \Gamma$
-2) or an instance of an axiom
-3) or is obtained from previous formulae by means of modus ponens.
-[...] Note also that we cannot apply uniform substitution in the elements of $\Gamma$; uniform substitution can only be applied to the axioms of the logic.
-
-This is the statement that I didn't understand: "Note also that we cannot apply uniform substitution in the elements of $\Gamma$; uniform substitution can only be applied to the axioms of the logic". Why exactly is this true? Since theorems can be instantiated, it seems that it should be possible to instantiate elements of $\Gamma$ too. Am I missing something?
-Thank you in advance.
-
-REPLY [2 votes]: If $A \to B$ is a hypothesis in $\Gamma$ (with $A$, $B$, $C$, $D$ distinct atoms, say), it should not be possible to derive $C \to D$; there are valuations in which the former is true and the latter is false. But the latter is a substitution instance of the former.
Thus it is not sound to apply substitution to arbitrary hypotheses.<|endoftext|>
-TITLE: Module isomorphic to second dual
-QUESTION [11 upvotes]: Is there a simple condition on a module $M$ over a ring $R$ which will ensure that $M$ is isomorphic to its double dual, $M^{**} = \operatorname{Hom} (\operatorname{Hom}(M,R),R)$? What about a condition on $R$ which guarantees this will hold for all $R$-modules $M$? Can we find conditions which are necessary, as well as sufficient?
-
-REPLY [14 votes]: A module which is isomorphic to its double dual under the natural map is called reflexive. These are objects that have been rather carefully studied in commutative algebra.
-As noted in the other answers/comments, a finitely generated projective module over a ring $R$ is necessarily reflexive. Conversely if $R$ is a regular local commutative ring of dimension $\leq 2$, then any f.g. reflexive module is projective (equivalently, free, since we are over a local ring). This is not true if the dimension is $> 2$; in that case there are f.g. reflexive modules that are not free.
-This MO answer provides more information, including a characterization of f.g. reflexive modules over an integrally closed domain in terms of other standard module theoretic properties.<|endoftext|>
-TITLE: An "independence" condition on two algebraic elements over $K$.
-QUESTION [6 upvotes]: Let $K$ be a field and let $a,b\in \overline K$ be algebraic elements.
-I've stumbled upon a certain condition on $a,b$, which I feel could be considered an "independence" condition. I would like to know more about it.
-Let's say that $a,b$ are weakly independent when $$\deg_K(a)=\deg_{K(b)}(a)$$
-and $$\deg_K(b)=\deg_{K(a)}(b).$$
-A condition weaker than algebraic independence implies this condition:
-Fact. Suppose that for $g\in K[x,y],$ we have that $g(a,b)=0$ implies $g\in K[x]$ or $g\in K[y].$ Then $a,b$ are weakly independent.
-Proof. Suppose $$\deg_K(a)>\deg_{K(b)}(a)$$ ($\geq$ always holds). Let $f(x)=x^n+a_{n-1}x^{n-1}+\ldots+a_0$ be the minimal (monic) polynomial of $a$ over $K(b).$ We have $$f\in K(b)[x]\setminus K[x]$$ because monic minimal polynomials are unique. For each $i=0,1,\ldots,n-1,$ there exists $g_i\in K[x]$ such that $g_i(b)=a_i,$ and at least one $g_i$ isn't constant because otherwise $f\in K[x].$ Thus $$f=x^n+g_{n-1}(b)x^{n-1}+\ldots+g_0(b).$$
-Let $g\in K[x,y]$ be defined by $$g(x,y)=x^n+g_{n-1}(y)x^{n-1}+\ldots+g_0(y).$$ Clearly $g(a,b)=0.$ But also, since some $g_j(y)=b_my^m+\ldots+b_0$ is non-constant, there is $1\leq k\leq m$ such that $b_k\neq 0.$ Therefore, the $x^jy^k$-coefficient of $g$ is non-zero, and so $g\not\in K[x]$. It is clear that $g\not\in K[y].$ The symmetric case is proved symmetrically. $\square$
-The converse doesn't hold. For example, $\sqrt 2,\sqrt 3$ are weakly independent over $\mathbb Q$ but $g(x,y)=x^2y-2y$ annihilates $(\sqrt 2,\sqrt 3)$. Something weaker does hold, but I won't post it here because I don't understand it very well and I don't want to make this question too long.
-I would like to know if this "weak independence" has any real name and if it's equivalent to anything interesting. I've been having different ideas as to what it could be equivalent to but nothing seemed to work. Most of my ideas have been somewhere around the fact above.
-
-REPLY [4 votes]: As Jyrki notes, your condition is a special case of linear disjointness. E.g., this is taken from Lang's Algebra, Section VIII.3:
-Definition.
Let $K$ and $L$ be extensions of $k$, contained in some common algebraically closed field $\Omega$ that contains $k$. We say that $K$ is linearly disjoint from $L$ over $k$ if every finite set of elements of $K$ that is linearly independent over $k$ is also linearly independent over $L$.
-Although the definition is asymmetric, the condition is in fact symmetric:
-Proposition. $K$ is linearly disjoint from $L$ over $k$ if and only if $L$ is linearly disjoint from $K$ over $k$.
-Proof. Let $y_1,\ldots,y_n\in L$ be elements that are linearly independent over $k$, and let
-$$\alpha_1y_1+\cdots +\alpha_ny_n = 0\tag{1}$$
-be a $K$-linear combination equal to $0$. Reordering if necessary, assume that $\alpha_1,\ldots,\alpha_r$ are linearly independent over $k$, and $\alpha_{r+1},\ldots,\alpha_n$ are $k$-linear combinations of $\alpha_1,\ldots,\alpha_r$; that is,
-$$\alpha_i = \sum_{j=1}^r \beta_{ij}\alpha_j,\qquad i=r+1,\ldots,n.$$
-We can rewrite $(1)$ to get
-$$\begin{align*}
-\sum_{j=1}^r \alpha_j y_j + \sum_{i=r+1}^{n}\left(\sum_{j=1}^r \beta_{ij}\alpha_j\right)y_i &=0\\
-\sum_{j=1}^r \left(y_j + \sum_{i=r+1}^n\beta_{ij}y_i\right)\alpha_j&=0
-\end{align*}$$
-Since $K$ is linearly disjoint from $L$ over $k$ and $\alpha_1,\ldots,\alpha_r$ are $k$-linearly independent, it follows that for each $j=1,\ldots,r$ we have
-$$y_j + \sum_{i=r+1}^n\beta_{ij}y_i = 0.$$
-But since the $y_1,\ldots,y_n$ are $k$-linearly independent, this cannot occur; thus, $r=0$, so that $\alpha_1=\cdots=\alpha_n=0$, as desired. $\Box$
-
-Added to address a question raised in comments.
-A question was raised in the comments about whether, like linear disjointness, the condition of being weakly independent can be made one-sided. That is, suppose that $a$ and $b$ are such that $[K(a,b):K(b)] = [K(a):K]$. Does it follow that $[K(a,b):K(a)] = [K(b):K]$?
-Theorem. Let $K$ be a field, and let $a$ and $b$ be elements of some overfield that contains $K$. If $[K(a,b):K(b)] = [K(a):K]$, then $[K(a,b):K(a)] = [K(b):K]$.
-Proof. It suffices to show that if $1,b,b^2,\ldots,b^{m-1}$ are linearly independent over $K$, then they are linearly independent over $K(a)$. Suppose we have
-$$p_0(a) + p_1(a)b + \cdots +p_{m-1}(a)b^{m-1}=0,$$
-where each $p_i(x)$ is a rational function on $a$; clearing denominators, we may assume that they are in fact polynomials, and that they are of degree less than $[K(a):K]$ (arbitrary degree if $a$ is transcendental). Let $n$ be the highest power of $a$ that occurs. Then we can rewrite this expression in the form
-$$q_0(b) + q_1(b)a + \cdots + q_{n}(b)a^{n}=0$$
-where the $q_i$ are polynomials with coefficients in $K$ (namely, $q_0(x)$ has the constant coefficient of $p_i$ as its degree-$i$ coefficient; $q_1(x)$ has the degree-one coefficient of $p_i$ as its degree-$i$ coefficient, etc.). Since $1,a,a^2,\ldots,a^n$ are linearly independent over $K$, they are linearly independent over $K(b)$, so we conclude that $q_i(x)=0$ for all $i$; this yields that the $p_i$ are $0$ for all $i$ as well, which establishes the claim. $\Box$
-
-Theorem. Let $a$ and $b$ be algebraic over $K$. Then $a$ and $b$ are weakly independent over $K$ if and only if $K(a)$ is linearly disjoint from $K(b)$ over $K$.
-Proof. Let $n=[K(a):K]$ and $m=[K(b):K]$. Assume first that $K(a)$ is linearly disjoint from $K(b)$ over $K$. Since $1,a,a^2,\ldots,a^{n-1}$ are $K$-linearly independent in $K(a)$, it follows that they are $K(b)$-linearly independent, and therefore the minimal polynomial of $a$ over $K(b)$ has degree at least $n$.
Since the minimal polynomial of $a$ over $K$ has degree exactly $n$, it follows that $[K(a,b):K(b)]=n$. A symmetric argument shows that $[K(a,b):K(a)]=m$, proving that $a$ and $b$ are weakly independent over $K$.
-Conversely, assume that $a$ and $b$ are weakly independent over $K$. Let $y_1,\ldots,y_r$ be elements of $K(a)$ that are linearly independent over $K$. We can write them in terms of $1,a,\ldots,a^{n-1}$, and we get:
-$$\begin{align*}
-y_1 &= \alpha_{01} + \alpha_{11}a +\alpha_{21}a^2+\cdots +\alpha_{n-1,1}a^{n-1}\\
-y_2 &= \alpha_{02} + \alpha_{12}a + \alpha_{22}a^2+\cdots + \alpha_{n-1,2}a^{n-1}\\
-&\vdots\\
-y_r &= \alpha_{0r} + \alpha_{1r}a + \alpha_{2r}a^2 + \cdots + \alpha_{n-1,r}a^{n-1}.
-\end{align*}$$
-Because $y_1,\ldots,y_r$ are linearly independent over $K$, some $r\times r$ subdeterminant of the $\alpha_{ij}$ is nonzero. That is, the matrix of the $\alpha_{ij}$ has rank $r$.
-Suppose that $\beta_1,\ldots,\beta_r\in K(b)$ are such that $\beta_1y_1+\cdots + \beta_ry_r = 0$. Plugging in and reordering, we get:
-$$0 = \sum_{i=1}^r \beta_i\alpha_{0i} + \left(\sum_{i=1}^r\beta_i\alpha_{1i}\right)a + \cdots + \left(\sum_{i=1}^r\beta_i\alpha_{n-1,i}\right)a^{n-1}.$$
-Since $1,a,a^2,\ldots,a^{n-1}$ are linearly independent over $K(b)$ (because $[K(b)(a):K(b)]=n$), it follows that
-$$\sum_{i=1}^r \beta_i\alpha_{ji} = 0$$
-for $j=0,\ldots,n-1$.
-Viewing this as a system of $n$ linear equations in the $\beta_i$, we have $n$ equations in $r$ unknowns; the coefficient matrix has full rank (rank $r$), and therefore the system has only the trivial solution, namely $\beta_1=\cdots=\beta_r=0$. Thus, $y_1,\ldots,y_r$ are linearly independent over $K(b)$, showing that $K(a)$ is linearly disjoint from $K(b)$, as claimed. $\Box$
-
-Added. The nonalgebraic cases.
-Proposition: If $a$ is algebraic over $K$ and $b$ is transcendental over $K$, then both of the following conditions hold:
-
-$a$ and $b$ are weakly independent over $K$; and
-$K(a)$ and $K(b)$ are linearly disjoint over $K$.
-
-Proof. If $a$ is algebraic and $b$ is transcendental over $K$, then $b$ is transcendental over $K(a)$; hence $[K(a,b):K(a)] = [K(b):K]$; by the previously established result, it follows that $[K(a,b):K(b)] = [K(a):K]$. Alternatively, the fact that $[K(a,b):K(b)]=[K(a):K]$ when $a$ is algebraic and $b$ is transcendental is well known. Thus, $a$ and $b$ are weakly independent.
-For the second part, we can proceed as above: let $y_1,\ldots,y_m$ be linearly independent elements of $K(a)$; to show that they are linearly independent over $K(b)$, write out $r_1(b)y_1 + \cdots + r_m(b)y_m = 0$, where the $r_i$ are rational functions on $b$. Clearing denominators, we may assume that they are in fact polynomials; then, since $b$ is transcendental over $K(a)$, looking at the terms of each degree in $b$ we see that all the coefficients must vanish (by the linear independence of $y_1,\ldots,y_m$ over $K$), so $r_1(b)=\cdots=r_m(b)=0$. Thus, $y_1,\ldots,y_m$ remain independent over $K(b)$, so $K(a)$ is linearly disjoint from $K(b)$, as claimed. $\Box$
-Proposition. Let $K$ be a field, and let $a$ and $b$ be transcendental over $K$. Then the following are equivalent:
-
-$a$ and $b$ are weakly independent over $K$;
-$K(a)$ and $K(b)$ are linearly disjoint.
-
-Proof. (2)$\implies$(1): since $1,a,a^2,\ldots,a^n$ are linearly independent over $K$ for every $n$, they are linearly independent over $K(b)$. Thus, $[K(a,b):K(b)]\gt n$ for all $n$, so $[K(a,b):K(b)]=\infty = [K(a):K]$. Thus, $a$ and $b$ are weakly independent over $K$.
-
-(1)$\implies$(2): Since $a$ and $b$ are weakly independent and transcendental, it follows that $a$ is transcendental over $K(b)$, and $b$ is transcendental over $K(a)$. If $r_1(a),\ldots,r_n(a)$ are $K$-linearly independent rational functions on $a$, and $s_1(b),\ldots,s_n(b)$ are rational functions on $b$ such that
-$$s_1(b)r_1(a)+\cdots+s_n(b)r_n(a)=0,$$
-then clearing denominators we obtain a polynomial expression in $a$ and $b$ equal to $0$. This implies the expression is trivial (all coefficients are $0$), which in turn yields that $s_1(b)=\cdots=s_n(b)=0$. Thus, $K(a)$ and $K(b)$ are linearly disjoint. $\Box$
-
-The fact that linear disjointness is weaker than algebraic independence is also well known. For example, Proposition VIII.3.3 in Lang reads:
-
-Proposition. Let $L$ be an extension of $k$, and let $\{u_1,\ldots,u_r\}$ be a set of quantities algebraically independent over $L$. Then $k(u)$ is linearly disjoint from $L$ over $k$.
-
-On the other hand, we have the following:
-Definition. We say that $K$ is free from $L$ over $k$ if every finite set of elements of $K$ algebraically independent over $k$ remains algebraically independent over $L$. If $(x)$ and $(y)$ are two sets of elements in $\Omega$, we say that they are free over $k$ (or independent over $k$) if $k(x)$ and $k(y)$ are free over $k$.
-Proposition. If $K$ and $L$ are linearly disjoint over $k$, then they are free over $k$.
-Proof. Let $x_1,\ldots,x_n$ be elements of $K$ algebraically independent over $k$. Suppose we have a relation
-$$\sum y_iM_i(x_1,\ldots,x_n)=0$$
-where $M_i(x_1,\ldots,x_n)$ is a monomial in $x_1,\ldots,x_n$ and $y_i\in L$. This gives a linear relation over $L$ among the $M_i(x_1,\ldots,x_n)$, but because $x_1,\ldots,x_n$ are algebraically independent over $k$, we know that their monomials are linearly independent over $k$; and so by linear disjointness, their monomials are linearly independent over $L$. Therefore, $y_i=0$ for all $i$, so $x_1,\ldots,x_n$ are algebraically independent over $L$. $\Box$<|endoftext|>
-TITLE: Condition for commuting matrices
-QUESTION [5 upvotes]: Let $A,B$ be $n \times n$ matrices over the complex numbers. If $B=p(A)$ where $p(x) \in \mathbb{C}[x]$ then certainly $A,B$ commute. Under which conditions is the converse true?
-Thanks :-)
-
-REPLY [4 votes]: THEOREM: The following are equivalent conditions on a matrix $A$ with entries in $\mathbb C$:
-(I) $A$ commutes only with matrices $B = p(A)$ for some $p(x) \in \mathbb C[x]$
-(II) The minimal polynomial and characteristic polynomial of $A$ coincide
-(III) $A$ is similar to a companion matrix.
-(IV) Each characteristic value of $A$ occurs in only one Jordan block. This includes the possibility that all eigenvalues are distinct, but allows a repeated eigenvalue as long as all of its occurrences lie in a single Jordan block.<|endoftext|>
-TITLE: Has anyone ever tried to develop a theory based on a negation of a commonly believed conjecture?
-QUESTION [5 upvotes]: I know that plenty of theorems have been published assuming the Riemann hypothesis to be true. I understand that the main goal of such research is to have a theory ready when someone finally proves the Riemann hypothesis. A secondary goal seems to be to have a chance of spotting a contradiction, thus proving the conjecture false. This must be a secondary goal, since most mathematicians believe the conjecture to be true.
-I wonder if it would be a good idea to do the opposite. It is widely believed that $e+\pi\not\in\mathbb Q,$ however no one seems to have any idea how to prove it.
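-(For what it's worth, a quick numerical look — my own mpmath sketch — shows no visible pattern in the continued fraction of $e+\pi$, exactly as one expects of an irrational number; of course this proves nothing.)
-
-    from mpmath import mp, e, pi
-
-    mp.dps = 60                # 60 digits of working precision
-    y = e + pi
-    print(y)                   # 5.8598744820488384738...
-    terms = []
-    for _ in range(15):        # first partial quotients of the continued fraction
-        a = int(y)
-        terms.append(a)
-        y = 1/(y - a)
-    print(terms)               # starts [5, 1, 6, 7, 3, ...] -- no apparent pattern
-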
I wonder if it would be a good idea to try to build a theory on the statement that $e+\pi\in\mathbb Q$. I don't mean just trying to prove the conjecture by contradiction. I mean really assuming we live in a universe in which $e+\pi\in\mathbb Q$ and trying to do maths in this universe. I'm not sure if the distinction is clear, but I hope it is. The ultimate goals mentioned above would switch places in this approach. Now the primary goal would be to spot a contradiction, since the theorems proved would all be very likely to be vacuous. The conjecture is considered very difficult to prove, so perhaps it wouldn't be bad to admit our blindness and just move a bit at random and hope we're moving ahead. The theorems proved would of course be likely to be vacuous, so it could seem too focused an approach: it may only serve to prove a single statement, that $e+\pi\not\in\mathbb Q.$ However, I think the techniques developed could still turn out useful in proving other things.
-The question whether it's a good idea is probably not a good question for this site, so it is not my main question. What I would like to know is examples of such an approach being employed. I assume it hasn't been employed often, because I have never heard of it except in one case, so I would also like to know why it hasn't. (The one case is the work on the parallel postulate, which was thought by many to follow from the other axioms of Euclidean geometry, and which was later shown not to when consideration of the alternatives produced consistent geometries violating only this axiom.)
-
-REPLY [3 votes]: I'm not sure most mathematicians believe the Riemann Hypothesis is true. Anyway, there are lots of theorems published assuming RH to be false. One very famous example concerns the class number of imaginary quadratic fields. The first proof that the class number goes to infinity was obtained by showing that it followed both from RH and from the negation of RH.<|endoftext|>
-TITLE: Example of profinite groups
-QUESTION [5 upvotes]: Could someone help me with a simple example of a profinite group that is not the $p$-adic integers or a finite group? This is my first course on groups, and the examples of profinite groups that I've found are very complex; understanding them requires advanced theory of groups, rings, fields and Galois theory. Does anyone know a simple example?
-Lastly, how does one prove that $\mathbb{Z}$ is not a profinite group?
-
-REPLY [3 votes]: One way to get a profinite group is to start with any torsion abelian group $A$, and take $\hom(A, \mathbb{Q}/\mathbb{Z})$. This acquires a topology as the inverse limit topology: it is the inverse limit of $\hom(A_0, \mathbb{Q}/\mathbb{Z})$ for $A_0 \subset A$ a finitely generated (and hence finite) subgroup.
-In fact, this gives an anti-equivalence between profinite abelian groups and torsion abelian groups, which is sometimes useful (it means, for instance, that a filtered inverse limit of profinite abelian groups is always exact, since the corresponding fact for filtered direct limits is true in abelian groups).<|endoftext|>
-TITLE: Lower bound for $\|A-B\|$ when $\operatorname{rank}(A)\neq \operatorname{rank}(B)$, both $A$ and $B$ are idempotent
-QUESTION [5 upvotes]: Let's first focus on $k$-by-$k$ matrices. We know that rank is a continuous function on idempotent matrices, so when we have, say, $\operatorname{rank}(A)>\operatorname{rank}(B)+1$, the two matrices cannot be close in norm topology.
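-(A crude numerical experiment of my own — random orthogonal projections of ranks $2$ and $3$ in $M_5(\mathbb{R})$, for simplicity — never produces a pair at operator-norm distance below $1$:)
-
-    import numpy as np
-
-    rng = np.random.default_rng(0)
-    best = np.inf
-    for _ in range(2000):
-        Q1, _ = np.linalg.qr(rng.standard_normal((5, 5)))
-        Q2, _ = np.linalg.qr(rng.standard_normal((5, 5)))
-        P = Q1[:, :2] @ Q1[:, :2].T    # rank-2 orthogonal projection
-        R = Q2[:, :3] @ Q2[:, :3].T    # rank-3 orthogonal projection
-        best = min(best, np.linalg.norm(P - R, 2))  # spectral norm
-    print(best)                        # never drops below 1 (up to rounding)
-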
-
-But I wonder whether there is an explicit lower bound on the distance between two idempotent matrices in terms of the difference of their ranks.
-Thanks!
-
-REPLY [3 votes]: Here is a generalization of a special case, namely the case of self-adjoint idempotents.
-Suppose that $p$ and $q$ are self-adjoint idempotents (projections) in a C*-algebra $A$. If $\|p-q\|<1$, then there is a continuous path of projections in $A$ from $p$ to $q$. If $A$ is unital, this implies that $p$ and $q$ are unitarily equivalent. In general, it implies that there is a partial isometry $v\in A$ such that $v^*v=p$ and $vv^*=q$. This is shown in Chapter 2 of Rørdam et al.'s An introduction to K-theory for C*-algebras.
-In the case where $A=M_n(\mathbb C)$, this tells us that $\|p-q\|<1$ implies that $\mathrm{rank}(p)=\mathrm{Tr}(v^*v)=\mathrm{Tr}(vv^*)=\mathrm{rank}(q)$, a special case of Robert Israel's answer.
-On a related note, it isn't too hard to show that if $p$ and $q$ are arbitrary projections in a C*-algebra, then $\|p-q\|\leq 1$. More generally, if $a\geq 0$ and $b\geq 0$ in a C*-algebra, then $\|a-b\|\leq \max\{\|a\|,\|b\|\}$. In particular, if $p$ and $q$ are self-adjoint idempotent matrices of different ranks, then $\|p-q\|=1$.<|endoftext|>
-TITLE: Polynomials irreducible over $\mathbb{Q}$ but reducible over $\mathbb{F}_p$ for every prime $p$
-QUESTION [76 upvotes]: Let $f(x) \in \mathbb{Z}[x]$. If we reduce the coefficients of $f(x)$ modulo $p$, where $p$ is prime, we get a polynomial $f^*(x) \in \mathbb{F}_p[x]$. Then if $f^*(x)$ is irreducible and has the same degree as $f(x)$, the polynomial $f(x)$ is irreducible.
-This is one way to show that a polynomial in $\mathbb{Z}[x]$ is irreducible, but it does not always work. There are polynomials which are irreducible in $\mathbb{Z}[x]$ yet factor in $\mathbb{F}_p[x]$ for every prime $p$. The only examples I know are $x^4 + 1$ and $x^4 - 10x^2 + 1$.
-I'd like to see more examples; in particular an infinite family of polynomials like this would be interesting. How does one go about finding them? Has anyone ever attempted classifying all polynomials in $\mathbb{Z}[x]$ with this property?
-
-REPLY [16 votes]: For any distinct primes $p_1,p_2$ the polynomial
-$$x^4-2(p_1+p_2)x^2+(p_1-p_2)^2\tag{1}$$
-is irreducible over $\Bbb Q$, but this polynomial is reducible modulo $p$ for any prime $p$. Let us see why:
-It is a well-known fact that $[\Bbb Q(\sqrt{p_1},\sqrt{p_2}):\Bbb Q]=4$ and $\Bbb Q(\sqrt{p_1}+\sqrt{p_2})=\Bbb Q(\sqrt{p_1},\sqrt{p_2})$; see this. Thus, as $\sqrt{p_1}+\sqrt{p_2}$ is a root of $(1)$, this polynomial is irreducible over $\Bbb Q$.
-As the Legendre symbol is multiplicative, we get that $p_1$ or $p_2$ has a square root modulo $p$, or $p_1p_2$ does; notice this happens trivially if $p=p_1$ or $p=p_2$. Hence, as $\Bbb F_p(\sqrt{p_1},\sqrt{p_2})=\Bbb F_p(\sqrt{p_1},\sqrt{p_1p_2})=\Bbb F_p(\sqrt{p_2},\sqrt{p_1p_2})$, we obtain
-$$[\Bbb F_p(\sqrt{p_1},\sqrt{p_2}):\Bbb F_p]\leq 2,$$
-and this implies $(1)$ is not irreducible, as $\sqrt{p_1}+\sqrt{p_2}$ is a root.
-The polynomial $x^4-10x^2+1$ is the particular case with $p_1=2$ and $p_2=3$.<|endoftext|>
-TITLE: Connectedness of the spectrum of a tensor product.
-QUESTION [28 upvotes]: Let $A$, $B$ be finite free $\mathbb{Z}$-algebras such that $\operatorname{Spec}(A)$ and $\operatorname{Spec}(B)$ are both connected. Is $\operatorname{Spec}(A\otimes_{\mathbb{Z}} B)$ connected?
-
-REPLY [16 votes]: Late edit. The answer is: $\mathrm{Spec}(A\otimes_{\mathbb Z}B)$ is always connected.
-One reduces to the case $A=B=\mathcal O_F$ for some finite Galois extension $F/\mathbb Q$ as below. Let $G=\mathrm{Gal}(F/\mathbb Q)$. Let $X=\mathrm{Spec}(\mathcal O_F)$ and $S=\mathrm{Spec}(\mathbb Z)$. For any $g\in G$, consider the surjective map
-$$ \mathcal O_F\otimes_{\mathbb Z}\mathcal O_F\to \mathcal O_F, \quad b\otimes c\mapsto bg(c).$$
-It induces a closed immersion $i_g: X\to X\times_S X$. It is not hard to see that the $i_g(X)$, $g\in G$, are the irreducible components of $X\times_S X$, and that $i_g(X)\cap i_h(X)\ne\emptyset$ if (and in fact only if) $gh^{-1}$ belongs to the inertia subgroup $I_x$ of $G$ at some $x\in X$ (if $g=\theta h$ with $\theta\in I_x$, then $i_g(x)=i_h(x)\in i_g(X)\cap i_h(X)$). Using the fact that the $I_x$'s, when $x$ varies, generate $G$ ($\mathbb Q$ has no nontrivial unramified extension), we easily get the connectedness.
-
-I don't have a solution, but just some remarks.
-
-1. It is enough to deal with the case when $A, B$ are rings of integers of number fields $K, L$.
-Proof: Denote by $X=\mathrm{Spec}(A), Y=\mathrm{Spec}(B)$ and $S=\mathrm{Spec}(\mathbb Z)$. As $X, Y$ are finite over $S$ and are connected, each of their irreducible components $X_1,\dots, X_n, Y_1, \dots, Y_m$ is finite and surjective over $S$. If we can prove that $X_i\times_S Y_j$ is connected for all $i, j$, then simple topological arguments show that $X\times_S Y_j$ is connected. Similarly, $X\times_S Y$ is connected. So we are reduced to the case $X, Y$ irreducible. As the connectedness property is purely topological, we can replace $X, Y$ by their maximal reduced subschemes and suppose that they are reduced (hence integral). Let $X', Y'$ be their normalizations. Then $X'\times_S Y'\to X\times_S Y$ is surjective. If $X'\times_S Y'$ is connected, then so is $X\times_S Y$. Therefore it is enough to treat the case when $X, Y$ are integral and normal, so they are defined by rings of integers of number fields.
-2. It is enough to deal with the case $A=B=O_F$ for some finite Galois extension $F/\mathbb Q$. Proof: let $K, L$ be as above, and let $F$ be a finite Galois extension containing both $K$ and $L$. Let $Z=\mathrm{Spec}(O_F)$. Then $Z\times_S Z\to X\times_S Y$ is surjective and we are done as in (1).
-3. Let $X=\mathrm{Spec}(O_F)$ with $G=\mathrm{Gal}(F/\mathbb Q)$. Then $X\times_S X$ splits into a union of copies of $X$ parametrized by $G$. This union can be written explicitly. In the very special case where $O_F=\mathbb Z[t]$, we have $X\times_S X=\cup_{\sigma\in G} \mathrm{Spec}(O_F[s]/(s-\sigma(t)))$.
-
-EDIT Two irreducible components $\mathrm{Spec}(O_F[s]/(s-\sigma(t)))$ and $\mathrm{Spec}(O_F[s]/(s-\tau(t)))$ meet at $\mathrm{Spec}(O_F/(\sigma(t)-\tau(t)))$. This intersection is nonempty iff $\sigma(t)-\tau(t)$ is not a unit in $O_F$.
-Suppose $\mathbb {Z}\to O_F$ is totally ramified at some $p$ (e.g. if $[F:\mathbb Q]$ is prime); then $(X\times_S X)_p$ consists of one point $q$, hence all its irreducible components meet each other at $q$, and $X\times_S X$ is connected.
-I tried some explicit examples; all of them are connected. I think it is not too hard to show the connectedness if the Galois group $G$ is $2$-transitive on itself. In general, I don't know whether connectedness holds!<|endoftext|>
-TITLE: Can anyone give any insight on this group given these generators and relations?
-QUESTION [7 upvotes]: $G = \langle x,y | x^3 = 1, y^3 = 1, (xy)^3 = 1, (xy^2)^n = 1 \rangle$
-I am studying this group and I can't seem to get anywhere with it.
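-(About the only concrete thing I have is an experiment — a coset-enumeration sketch using sympy's FpGroup, assuming I've entered the relators correctly — which for $n=2,3,4,5$ reports orders $12, 27, 48, 75$, i.e. seemingly $3n^2$, though I can't prove anything.)
-
-    from sympy.combinatorics.free_groups import free_group
-    from sympy.combinatorics.fp_groups import FpGroup
-
-    F, x, y = free_group("x, y")
-    for n in range(2, 6):
-        # The presentation from the question, with (x*y**2)**n as last relator.
-        G = FpGroup(F, [x**3, y**3, (x*y)**3, (x*y**2)**n])
-        print(n, G.order())    # 12, 27, 48, 75, ... = 3*n**2
-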
I've tried making a Cayley table but it's getting pretty big, which makes me think I'm doing something wrong — though that doesn't have to be the case; maybe the group is just larger than I expected.
-I am assuming it's nonabelian. My specific questions are:
-Is this a relatively common group? Does it have a name?
-What is the order of $G$?
-(Note: an earlier version of this question accidentally left out the $n$ in $(xy^2)^n$.)
-
-REPLY [5 votes]: Set $a=xy^2$ and $b=a^x = y^2x$, so that $ab= xyx$ and $ba =y^2 x^2 y^2$. But $xyxyxy = 1$ so $xyx = y^{-1} x^{-1} y^{-1} = y^2 x^2 y^2$, so $a$ and $b$ commute, and so form a normal abelian subgroup $A$ that is a quotient of $\mathbb{Z}/n\mathbb{Z} \times \mathbb{Z}/n\mathbb{Z}$, and which together with either $x$ or $y$ generates $G$.
-Check that the semidirect product of $\mathbb{Z}/n\mathbb{Z} \times \mathbb{Z}/n\mathbb{Z}$ with $x(a) = b$, $x(b)= (ab)^{-1}$ satisfies the relations, so that $G$ is a generalization of the alternating group of order 12, having in general order $3n^2$ instead of $3\cdot 2^2$.
-If you omit the last relation (set $n=0$), then you get the following faithful integral matrix representation of the group. To include the last relation, just interpret the matrices mod $n$ to get a faithful matrix rep over $\mathbb{Z}/n\mathbb{Z}$.
-$$
-x = \left[\begin{smallmatrix} 0 & 1 & 0 \\ -1 & -1 & 0 \\ 0 & 0 & 1 \end{smallmatrix}\right]
-\quad
-y = \left[\begin{smallmatrix} 0 & 1 & -1 \\-1 & -1 & 0 \\ 0 & 0 & 1 \end{smallmatrix}\right]
-\quad
-a = \left[\begin{smallmatrix} 1 & 0 & 1 \\ 0 & 1 & 0 \\ 0 & 0 & 1 \end{smallmatrix}\right]
-\quad
-b = \left[\begin{smallmatrix} 1 & 0 & -1 \\ 0 & 1 & 1 \\ 0 & 0 & 1 \end{smallmatrix}\right]
-$$<|endoftext|>
-TITLE: Group presentation for semidirect products
-QUESTION [27 upvotes]: If $G$ and $H$ are groups with presentations $G=\langle X|R \rangle$ and $H=\langle Y| S \rangle$, then of course $G \times H$ has presentation $\langle X,Y | xy=yx \ \forall x \in X \ \text{and} \ y \in Y, R,S \rangle$. Given two group presentations $G=\langle X|R \rangle$ and $H=\langle Y| S \rangle$ and a homomorphism $\phi: H \rightarrow \operatorname{Aut}(G)$, what is a presentation for $G \rtimes H$? Is there a nice presentation, as in the direct product case? Thanks!
-
-REPLY [42 votes]: Let $G = \langle X \mid R\rangle$ and $H = \langle Y \mid S\rangle$, and let $\phi\colon H\to\mathrm{Aut}(G)$. Then the semidirect product $G\rtimes_\phi H$ has the following presentation:
-$$
-G\rtimes_\phi H \;=\; \langle X, Y \mid R,\,S,\,yxy^{-1}=\phi(y)(x)\text{ for all }x\in X\text{ and }y\in Y\rangle
-$$
-Note that this specializes to the presentation of the direct product in the case where $\phi$ is trivial.
-  
-For example, let $G = \langle x \mid x^n = 1\rangle$ be a cyclic group of order $n$, let $H = \langle y \mid y^2=1\rangle$ be a cyclic group of order two, and let $\phi\colon H \to \mathrm{Aut}(G)$ be the homomorphism defined by $\phi(y)(x) = x^{-1}$. Then the semidirect product $G\rtimes_\phi H$ is the dihedral group of order $2n$, with presentation
-$$
-G\rtimes_\phi H \;=\; \langle x,y\mid x^n=1,y^2=1,yxy^{-1}=x^{-1}\rangle.
-$$<|endoftext|>
-TITLE: Countably Compact vs Compact vs Finite Intersection Property
-QUESTION [6 upvotes]: There is this exercise: Show that countable compactness is equivalent to the following condition. If $\{C_n\}$ is a countable collection of closed sets in $S$ satisfying the finite intersection hypothesis, then $\bigcap_{i=1}^\infty C_i$ is nonempty.
-Definitions:
-
-A space $S$ is countably compact if every infinite subset of $S$ has a limit point in $S$.
-A space $S$ has the finite intersection property provided that if $\{C_\alpha\}$ is any collection of closed sets such that any finite number of them has a nonempty intersection, then the total intersection $\bigcap_\alpha C_\alpha$ is non-empty.
-A family of closed sets, in any space, such that any finite number of them has a nonempty intersection, will be said to satisfy the finite intersection hypothesis.
-
-Now there is also a related theorem in the book: Compactness is equivalent to the finite intersection property.
-It sounds to me like countable compactness and compactness are pretty much the same.
-I am not asking for a solution to the exercise. My question is this:
-What is the difference between compactness and countable compactness in terms of collections of closed sets? Both things sound to me like this: Given a collection of closed sets, when a finite number of them has a nonempty intersection, all of them have a nonempty intersection.
-BTW: the definitions, the theorems and the exercise are from Topology by Hocking/Young.
-
-REPLY [5 votes]: The difference is that if $X$ is compact, every collection of closed sets with the finite intersection property has a non-empty intersection; if $X$ is only countably compact, this is guaranteed only for countable collections of closed sets with the finite intersection property. In a countably compact space that is not compact, there will be some uncountable collection of closed sets that has the finite intersection property but also has empty intersection.
-An example is the space $\omega_1$ of countable ordinals with the order topology. For each $\xi<\omega_1$ let $F_\xi=\{\alpha<\omega_1:\xi\le\alpha\}=[\xi,\omega_1)$, and let $\mathscr{F}=\{F_\xi:\xi<\omega_1\}$. $\mathscr{F}$ is a nested family: if $\xi<\eta<\omega_1$, then $F_\xi\supsetneqq F_\eta$. Thus, it certainly has the finite intersection property: if $\{F_{\xi_0},F_{\xi_1},\dots,F_{\xi_n}\}$ is any finite subcollection of $\mathscr{F}$, and $\xi_0<\xi_1<\ldots<\xi_n$, then $F_{\xi_0}\cap F_{\xi_1}\cap\ldots\cap F_{\xi_n}=F_{\xi_n}\ne\varnothing$. But $\bigcap\mathscr{F}=\varnothing$, because for each $\xi<\omega_1$ we have $\xi\notin F_{\xi+1}$. This space is a standard example of a countably compact space that is not compact.
-Added: Note that neither of them says:
-
-Given a collection of closed sets, when a finite number of them has a nonempty intersection, all of them have a nonempty intersection.
-
-The finite intersection property is not that some finite number of the sets has non-empty intersection: it says that every finite subfamily has non-empty intersection. Consider, for instance, the sets $\{0,1\},\{1,2\}$, and $\{0,2\}$: every two of them have non-empty intersection, but the intersection of all three is empty. This little collection of sets does not have the finite intersection property.
-Here is perhaps a better way to think of these results. In a compact space, if you have a collection $\mathscr{C}$ of closed sets whose intersection $\bigcap\mathscr{C}$ is empty, then some finite subcollection of $\mathscr{C}$ already has empty intersection: there is some positive integer $n$, and there are some $C_1,\dots,C_n\in\mathscr{C}$ such that $C_1\cap\ldots\cap C_n=\varnothing$.
In a countably compact space something similar but weaker is true: if you have a countable collection $\mathscr{C}$ of closed sets whose intersection $\bigcap\mathscr{C}$ is empty, then some finite subcollection of $\mathscr{C}$ already has empty intersection. In a countably compact space you can’t in general say anything about uncountable collections of closed sets with empty intersection.<|endoftext|>
-TITLE: Ideal of the twisted cubic
-QUESTION [11 upvotes]: The twisted cubic is the image of the morphism $\phi : \mathbb{P}^1 \to \mathbb{P}^3 , (x:y) \mapsto (x^3:x^2 y:x y^2:y^3)$; it is given by $X = V(ad-bc,b^2-ac,c^2-bd)$. Now I would like to compute $I(X)$, which by the Nullstellensatz equals the radical of the ideal $I := (ad-bc,b^2-ac,c^2-bd) \subseteq k[a,b,c,d]$. I think that $I$ is already a radical ideal, even a prime ideal. Namely, I suspect that
-$$\phi^* : Q:=k[a,b,c,d]/I \to k[s,t] , a \mapsto s^3, b \mapsto s^2 t , c \mapsto s t^2 , d \mapsto t^3$$
-is an injection. If this is true, how can we prove it? I've already tried to find a $k$-basis of the quotient, but this turned out to be a big mess. Even the representation of the quotient as a monoid algebra doesn't seem to help. Another idea is the following: A formal manipulation of generators and relations implies $Q_a \cong k[a,b]_a$. Thus it suffices to prove that $Q \to Q_a$ is injective, i.e. that $a$ is not a zero divisor.
-
-REPLY [12 votes]: Here is a purely algebraic proof that $I(X)=I$.
-It is of course sufficient to prove that $I(X) \subset I$, and for that it suffices to prove that every homogeneous polynomial $P(a,b,c,d)$
-which vanishes on $X$ is in $I$.
-Lemma
-Any homogeneous polynomial $P(a,b,c,d)\in k[a,b,c,d]$ can be written $$P(a,b,c,d)=R(a,d) +S(a,d)b+T(a,d)c+i(a,b,c,d) $$ for some polynomials $R,S,T\in k[a,d]$
- and a polynomial $i\in I$.
-The easy proof is by induction on the degree of $P$, and I'll leave it to you.
-Now back to our problem. If our homogeneous $P$ is in $I(X)$, we write it as in the lemma and, using that $P$ vanishes on $X$, we get for all $(x:y)\in \mathbb P^1_k$ $$0=P(x^3,x^2y,xy^2,y^3)=R(x^3,y^3) +S(x^3,y^3)x^2y+T(x^3,y^3)xy^2+0 $$
-By considering exponents modulo $3$ for $x$ and $y$, we see that no cancellation occurs, hence that $R=S=T=0$ and thus $P=i\in I$ as required.<|endoftext|>
-TITLE: What should a PDE/analysis enthusiast know?
-QUESTION [11 upvotes]: What are the cool things someone who likes PDE and functional analysis should know and learn about? What do you think are the fundamentals and the next steps? I was thinking it would be good to know how to show existence, or even to know where to start to show existence, for any non-linear PDE I come across.
-For example, I only recently found out about how people can use the inverse function theorem to prove existence for a non-linear PDE. This involved Frechet derivatives, which I have never seen before. And I don't fully appreciate the link between the ordinary derivative, the Gateaux derivative and the Frechet derivative. So I thought about how many other things I have no idea about in PDEs.
-And PDEs on surfaces are interesting (but I'm just learning differential geometry, so it will be a long wait till I look at that in detail), but that area seems studied to death.
-So anyway, what do you think is interesting in this field? I am less interested in constructing solutions to PDEs and more into existence. PS: you can assume the basic knowledge (Lax-Milgram, linear elliptic and parabolic existence and uniqueness, etc.)
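-(A minimal numerical sketch of the distinction the asker raises, assuming NumPy; the operator $F(u)=u^3-u$ and all names here are illustrative, not taken from any reply below. The Gateaux derivative of $F$ at $u$ in direction $v$ is the ordinary $t$-derivative of $F(u+tv)$ at $t=0$, here $3u^2v-v$; Fréchet differentiability additionally asks that the linearization error be $o(\|v\|)$ uniformly over directions.)
-
-    import numpy as np
-
-    def F(u):                        # a pointwise nonlinear operator
-        return u**3 - u
-
-    def gateaux(F, u, v, t=1e-6):    # finite-difference directional derivative
-        return (F(u + t * v) - F(u)) / t
-
-    x = np.linspace(0.0, 1.0, 101)
-    u, v = np.sin(np.pi * x), np.cos(np.pi * x)
-    analytic = (3 * u**2 - 1) * v    # dF(u)[v] = 3u^2 v - v
-    err = np.max(np.abs(gateaux(F, u, v) - analytic))
-    print(f"max deviation from the linearization: {err:.2e}")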
-
-REPLY [4 votes]: One nice idea along these lines is the Leray-Schauder fixed point approach to non-linear elliptic existence problems.
-Roughly speaking, for example, if I would like to solve the quasilinear problem
-$a_{ij}(x,u,Du)D_{ij}u + b(x,u,Du) = 0$
-$u = \varphi$ on $\partial\Omega$,
-say, I can set up a map which takes a function $v$ and sends it to the unique solution $u$ of the linear problem
-$a_{ij}(x,v,Dv)D_{ij}u + b(x,v,Dv) = 0$
-$u = \varphi$ on $\partial\Omega$.
-Call this map $T$. Obviously I need to know existence for the linear problem. Then, by considering this as a map (not a linear map) between appropriate Banach spaces (for which I need some regularity for solutions of the linear problem), I can see that a solution to the quasilinear problem is precisely a fixed point of $T$.
-Googling "Leray-Schauder fixed point theorem" seems to turn up a nice set of notes by Leon Simon on the subject. Also Chapter 11 of Gilbarg and Trudinger explains the method well.
-The method is so named because of the Leray-Schauder fixed point theorem - an abstract fixed point theorem (that is to say, it is just about maps between Banach spaces and not specifically about PDE) which, under certain conditions, gives a fixed point of $T$. The task of verifying the conditions under which the theorem is applicable is one of gaining a priori estimates for solutions of the quasilinear problem, so this approach is a good way to see the need for the a priori philosophy in (elliptic, at least) existence problems.<|endoftext|>
-TITLE: Thompson's Conjecture
-QUESTION [5 upvotes]: I have heard that the following is a conjecture due to Thompson:
-The number of maximal subgroups of a (finite) group $G$ does not exceed the order $|G|$ of the group.
-My question is: did Thompson really conjecture this? If so, is there any literature on the subject?
-
-REPLY [8 votes]: I've seen this conjecture attributed to Wall (1961), for example: On a conjecture of G.E. Wall. This is a recent article (journal version appeared in 2007), and it gives a bunch of references. The conjecture remains open. Here is a very recent article which does not attack the conjecture itself, but uses it as an inspiration for a different conjecture.<|endoftext|>
-TITLE: Does localization preserve dimension?
-QUESTION [7 upvotes]: Does localization preserve dimension?
-Here's the problem:
-Let $C=V(y-x^3)$ and $D=V(y)$ be curves in $\mathbb{A}^{2}$. I want to compute the intersection multiplicity of $C$ and $D$ at the origin.
-Let $R=k[x,y]/(y-x^3,y)$. The intersection multiplicity is by definition the dimension, as a $k$-vector space, of $R$ localized at the ideal $(x,y)$.
-Note now that:
-$R \cong k[y]/(y) \otimes _{k} k[x]/(x^3)$
-Clearly $k[y]/(y) \cong k$, so the above tensor product is isomorphic to $k[x]/(x^3)$.
-Therefore $R$ has dimension $3$.
-However, I want to compute the dimension of $R$ localized at the ideal $(x,y)$. Does localization preserve dimension, or how should we proceed? I would really appreciate a full explanation of this example.
-
-REPLY [6 votes]: Since $R$ is already a local ring with maximal ideal $\mathfrak m=(\bar x,\bar y)$, localizing changes nothing!
-In other words $R\cong R_{\mathfrak m}$, and thus $\dim_k R_{\mathfrak m}=\dim_k R=3$ (since $R=k\oplus k\bar x\oplus k\bar x^2$).<|endoftext|>
-TITLE: What is the thing inside a sum called?
-QUESTION [6 upvotes]: You know how the "thing" inside an integral is called an integrand. Does anyone know what the $a_n$ in a typical $\sum a_n$ is called?
Or do we only have names if it is an infinite series? I could've sworn there is a name, or there really should be one.
-
-REPLY [13 votes]: Any element of a sum is called a summand. For example: $2$, $4$ and $6$ are summands in $2+4+6 = 12$, and in $\sum a_i$ each of the $a_i$ are summands, that is, $a_1$, $a_2$, $a_3$, etc. are summands.
-However, in $\sum a_i$, we can also call $a_i$ the general term of the sequence $(a_n)$, but that's specific to sequences and series, because it makes a reference to the role of $a_i$ in the sequence, unlike the name summand, which only refers to the fact that a quantity is being summed (actually, that's the etymology of the word: sum, with a gerund ending), or is part of a sum.
-The name term applies to both concepts, and probably to more general settings too.
-
-REPLY [4 votes]: Analogously to the term "integrand," the thing inside the sum is called the "summand."<|endoftext|>
-TITLE: The smallest possible value of $x^2 + 4xy + 5y^2 - 4x - 6y + 7$
-QUESTION [6 upvotes]: I have been trying to find the smallest possible value of
-$x^2 + 4xy + 5y^2 - 4x - 6y + 7$, but I do not seem to have been heading in any direction which is going to give me an answer I feel certain is correct. Any hints on how to approach finding this value algebraically would be appreciated. I prefer not to be told what the value is.
-
-REPLY [10 votes]: $$x^2 + 4xy + 5y^2 -4x -6y + 7 = (x+2y-2)^2 + (y+1)^2 + 2$$
-
-REPLY [5 votes]: Note that $x^2 + 4xy + 5y^2 - 4x - 6y + 7=(x+2y)^2+y^2-4x-6y+7$.
-Let $u=x+2y$. Write our expression in terms of $u$ and $y$, and complete the squares.
-Remark: The approach may have looked a little ad hoc, but the same basic idea will work for any quadratic polynomial $Q(x,y)$ that has a minimum or maximum. For quadratic polynomials $Q(x_1,x_2,\dots, x_n)$, one can do something similar, but it becomes useful to have more theory. You may want to look into the general diagonalization procedure.<|endoftext|>
-TITLE: When does an "infinite polynomial" make sense?
-QUESTION [9 upvotes]: Suppose I pick a collection $A \subset \mathbb{C}$ of points in the complex plane and attempt to construct a "polynomial" with those roots via
-$$f(z):=\prod_{\alpha \in A} (z-\alpha).$$
-If $A$ is finite, we get a polynomial.
-If $A=\{n\pi:n \in \mathbb{Z}\}$, according to Euler we get $f(x)=\sin(x)$. Edit: this example is not right, as Qiaochu has pointed out; see his answer for more details.
-What about other subsets of the complex plane? Other countable subsets without accumulation points? Countable subsets with accumulation points, like $A=\{1/n:n \in \mathbb{Z}^+\}$? Uncountable subsets? When does the product converge, and if it does, how does the spatial distribution of $A$ affect the properties of $f$?
-This question was motivated by the question here: Determining the density of roots to an infinite polynomial
-
-REPLY [9 votes]: That is not what Euler's product expansion of the sine looks like. It is very much supposed to be in the form
-$$\frac{\sin z}{z} = \prod_{n \ge 1} \left( 1 - \frac{z^2}{\pi^2 n^2} \right).$$
-The product you've written down does not converge for $A = \{ n \pi \}$ unless $z \in A$. Indeed, its factors don't go to $1$, which is a necessary condition exactly analogous to the condition for infinite series that the terms need to go to $0$. In fact one can switch between infinite sums and infinite products using the logarithm, which can be used to prove the following.
-
-(First I need to mention that the theorem below requires the convention wherein a product which tends to $0$ is said to diverge. This is because the logarithm of such a product diverges to $-\infty$.)
-Theorem: Let $a_n \in \mathbb{C}$ be a sequence such that $\sum |a_n|^2$ converges. Then $\prod (1 + a_n)$ converges if and only if $\sum a_n$ converges.
-Sketch. Use the fact that $\log (1 + a_n) = a_n + O(|a_n|^2)$.
-So we can make sense of the "infinite polynomial"
-$$\prod_{\alpha \in A} \left( 1 - \frac{z}{\alpha} \right)$$
-for countable $A$ such that $\sum_{\alpha \in A} \frac{1}{|\alpha|^2}$ and $\sum_{\alpha \in A} \frac{1}{\alpha}$ both converge. See also the Weierstrass factorization theorem.
-Note that by the identity theorem, the zeroes of a holomorphic function are isolated, so if you want your product to be holomorphic with $A$ as its zero set, $A$ needs to be discrete.
-Infinite sums and products do not behave well for uncountably many terms, the basic reason being the following.
-Theorem: Let $S$ be an uncountable set of positive real numbers. Then for any positive real $r$, there is a finite subset of $S$ whose sum is greater than $r$.
-(In other words, no sum with uncountably many terms can converge absolutely.)
-Proof. The sets $S_{\epsilon} = \{ s : s \in S, s > \epsilon \}$ for $\epsilon$ a positive rational are a countable collection of sets whose union is $S$. Since a countable union of countable sets is countable, it follows that there exists $\epsilon$ such that $S_{\epsilon}$ is uncountable. Then the result is clear.<|endoftext|>
-TITLE: Uniqueness of classifying space
-QUESTION [7 upvotes]: Classifying spaces are obviously unique up to homotopy type. I am wondering whether, under stronger conditions, one can also say that they are unique up to homeomorphism. In particular, suppose $\Gamma$ is a group and there exists a model $X$ for $B\Gamma$ which is closed (compact without boundary). Suppose $Y$ is also a model for $B\Gamma$ and $Y$ is also closed. In my baby examples it seems reasonable that $X\cong Y$. Is this always true?
-Furthermore, if $X$ and $Y$ are models for $B\Gamma$, $X$ is a closed $n$-dimensional manifold, and $Y$ is also an $n$-dimensional manifold, is it true that $Y$ is closed as well?
-
-REPLY [6 votes]: There is something called an aspherical manifold. This is a closed manifold whose universal cover is contractible. In particular any aspherical manifold $M$ is a model for $B\pi$ where $\pi=\pi_1(M)$. There are many examples of aspherical manifolds; for example, every closed manifold of negative sectional curvature (e.g. a hyperbolic manifold) is aspherical by the theorem of Cartan-Hadamard, since the exponential map is then a covering map.
-Now there is a beautiful conjecture due to Borel (the Borel Conjecture) which states that any two aspherical manifolds $M$ and $N$ with isomorphic fundamental group $\pi$ are homeomorphic. Even more, the conjecture predicts that any homotopy equivalence $f: M \to N$ is homotopic to a homeomorphism.
-Recently there has been a lot of work concerning this conjecture via a stronger conjecture, the Farrell-Jones Conjecture. This is a conjecture about the algebraic $K$- and $L$-theory of group rings.
-The Farrell-Jones Conjectures for $K$- and $L$-theory together imply the Borel Conjecture.
-Moreover the Farrell-Jones Conjecture has been proven for a quite large class of groups, including hyperbolic groups, $CAT(0)$-groups and many more.
-
-A lot of work on the Farrell-Jones Conjecture is due to Wolfgang Lück (professor at Bonn University), and you might want to look at some of his survey articles concerning these kinds of questions. You can find them via his homepage: http://www.math.uni-bonn.de/ag/topo/members.
-Moreover I should mention that there is a theorem called "Mostow rigidity" which proves the Borel conjecture for hyperbolic manifolds, and this is much older than the work on the Farrell-Jones Conjecture.<|endoftext|>
-TITLE: Do there exist infinitely many primes $p$ such that $a^{p-1}\equiv 1 \pmod{p^2}$ for fixed $a$?
-QUESTION [9 upvotes]: I noticed that Hardy and Wright in their "An Introduction to the Theory of Numbers" (sixth edition) have asked the following:
-
-Is it ever true that $$2^{p-1}\equiv 1 \bmod p^2 \tag{*}\;\;\;?$$
-
-They have pointed out that for $p=1093$ there is a solution to $(*)$. But they have stated that such $p$ are sparse.
-Question: Do there exist infinitely many primes $p$ such that $a^{p-1}\equiv 1 \pmod{p^2}$ for some fixed $a\in \mathbb{Z}^{+}$ with $a>2$? Sorry if my question is absolutely trivial.
-
-REPLY [8 votes]: Your question is related to Wieferich primes, which represent the case $a=2$:
-
-Despite a number of extensive searches, the only known Wieferich primes to date are 1093 and 3511. (sequence A001220 in OEIS).
-
-You asked for the more general case $a>2$. See here for generalized Wieferich primes. Here's the table of known examples:
-\begin{eqnarray}
-a& p& \text{OEIS sequence}\\
-2 & 1093, 3511 & A001220\\
-3& 11, 1006003& A014127\\
-5& 2, 20771, 40487, 53471161, &&\\
-&1645333507, 6692367337, 188748146801 & A123692\\
-7& 5, 491531 & A123693\\
-11& 71 & \\
-13& 2, 863, 1747591 & A128667\\
-17& 2, 3, 46021, 48947, 478225523351 & A128668\\
-19& 3, 7, 13, 43, 137, 63061489 &A090968\\
-23& 13, 2481757, 13703077, 15546404183, 2549536629329 & A128669\\
-\end{eqnarray}
-Gottfried Helms had a paper on that:
-Fermat-/Euler-quotients $(a^{p-1}-1)/p^k$ with arbitrary $k$.<|endoftext|>
-TITLE: Limit of sum with binomial coefficient
-QUESTION [16 upvotes]: Prove that:
-$$\lim_{n\to +\infty}\sum_{k=0}^{n}(-1)^k\sqrt{{n\choose k}}=0$$
-I don't know at all how to approach this. Is it very difficult?
-
-REPLY [2 votes]: First note that $f(z)=\frac{\sin \pi z}{\pi z(1-z)(1-\frac{z}{2})\cdots(1-\frac{z}{n})}$ satisfies $f(k)=\binom{n}{k}$ for any integer $k$.
-Because $f(z)$ has no zeros in $-1
-TITLE: Literature on the hyperelliptic involution
-QUESTION [6 upvotes]: I'm currently trying to familiarize myself with reducible Jacobians of hyperelliptic curves. A construction which I recently saw in a paper was the quotient $C/\tau$ of a curve $C$ of genus $2$ by a non-hyperelliptic involution $\tau$.
-I can get the general idea, but the details of the construction and the following proofs escape me. Almost all of the literature seems to assume a familiarity with properties of such involutions and quotients, but they don't show up at all in the books I read on elliptic curves, abelian varieties and function fields (there is, as far as I know, very little explicit introductory literature on hyperelliptic curves).
-Can someone recommend a good review or some book source which devotes more than two lines to involutions of curves? The genus does not have to be $2$; in fact a more general exposition would be preferable.
-Thank you in advance, I would appreciate any guidance and hints.
-
-REPLY [4 votes]: This book, this book, and these notes contain some exposition on hyperelliptic involutions.<|endoftext|>
-TITLE: Height one prime ideal of arithmetical rank greater than 1
-QUESTION [5 upvotes]: Let $R$ be a Noetherian local domain which is not a UFD and let $P$ be a height one prime ideal of $R.$ Can we find an element $x\in P$ such that $P$ is the only minimal prime ideal containing $x$?
-
-REPLY [4 votes]: I'll prove @messi's assertion that $R_\mathfrak{m}$, where $R=k[X,Y,U,V]/(XV-YU)$ and $\mathfrak{m}=(X,Y,U,V)/(XV-YU)$, is a counterexample to the question, that is, there exists a height one prime ideal $P$ of $R_\mathfrak{m}$ which is not generated up to radical by a single element. In other words, the arithmetical rank of $P$ is greater than $1$.
-Let's write $R=k[x,y,u,v]$ with $xv=yu$. Then $R$ is a Noetherian graded $k$-algebra with $\dim R=3$. Moreover, Proposition 14.5 from R. Fossum, The Divisor Class Group of a Krull Domain, tells us that $R$ is a Krull domain and its divisor class group $\operatorname{Cl}(R)$ is isomorphic to $\mathbb{Z}.$ Take $\mathfrak{p}=(x,y)$. It's easy to check that $\mathfrak{p}$ is a height one prime ideal of $R$, and from Fossum we also know that $[\mathfrak{p}]$, the class of $\mathfrak{p}$, generates $\operatorname{Cl}(R)$. Suppose that $\mathfrak{p}=\sqrt{aR}$. Since $aR$ is a divisorial ideal, it follows that $aR=\mathfrak{p}^{(t)}$, $t\ge 1$. This implies that $[\mathfrak{p}]$ is a torsion element of $\operatorname{Cl}(R)$, a contradiction. Now we can pass from $R$ to $R_\mathfrak{m}$ by using Corollary 10.3 from the same book.
-Remark. In view of curious's answer from this related topic, another way to solve this problem is to show that the local cohomology module $H_\mathfrak{p}^2(R)\neq 0$.<|endoftext|>
-TITLE: Simplicial Cup Product and Orientability.
-QUESTION [10 upvotes]: One way to define the cup product on a finite simplicial complex $K$ is as follows.
-i) Choose a partial ordering on the vertex set of $K$ which induces a total ordering on the vertex set of any simplex. The $p$-th chain group $C_p(K)$ is then the free abelian group on the $p$-simplices of $K$ with vertices listed in increasing order and has the usual differential.
-ii) The simplicial cup product $C^p(K) \times C^q(K) \xrightarrow{\cup} C^{p+q}(K)$ is given by the formula
-$$(\phi \cup \psi)[v_{i_0},...,v_{i_{p+q}}] = \phi(v_{i_0},...,v_{i_p}) \cdot \psi(v_{i_p},...,v_{i_{p+q}})$$
-where $[v_{i_0},...,v_{i_{p+q}}]$ is a basis element of $C_{p+q}(K)$ and again the indices $i_j$ satisfy $i_0 < i_1 < ... < i_{p+q}$. We then extend by bilinearity.
-Suppose instead that I wish to work with a more flexible setup with regard to vertex orderings. A standard way to do this is the notion of an orientation: two orderings of the vertices of a simplex are equivalent if they differ by an even permutation, and an orientation of a simplex is a choice of one of the two equivalence classes of vertex orderings. If the vertex set is finite and we label it as $\{0,1,\ldots,n\}$, then we could agree that the standard orientation of each simplex is the one where the vertices are listed in increasing order.
-The above formula for the cup product would no longer be well-defined on the cochain level, since it would not necessarily remain invariant under even permutations of the vertices $v_{i_0}, v_{i_1}, ... ,v_{i_{p+q}}$. Are there any books which deal with this issue?
Most of the books I have consulted have worked only with fixed vertex orderings and not the more general orientations approach.
-
-REPLY [6 votes]: I think the usual fix is to define
-$$
-(\phi\cup\psi)[v_0,\ldots,v_{p+q}] \;=\; \frac{1}{(p+q+1)!}\sum_{\sigma\in S_{p+q+1}} (-1)^\sigma \phi[v_{\sigma(0)},\ldots,v_{\sigma(p)}]\cdot\psi[v_{\sigma(p)},\ldots,v_{\sigma(p+q)}]
-$$
-on the level of cochains, where $(-1)^\sigma$ denotes the sign of the permutation $\sigma$.<|endoftext|>
-TITLE: $\int f(x) dx $ is appearing as $\int dx f(x)$. Why?
-QUESTION [6 upvotes]: A few of us over on MITx have noticed that $\int f(x) dx $ is appearing as $\int dx f(x)$.
-It's not the maths of it that worries me. It's just that I recently read a justification (analytical?) of the second form somewhere but can't recall it or where I saw it.
-Can anyone give me a reference?
-
-REPLY [14 votes]: The second form sometimes makes it easier for the reader to match variables of integration with their limits. Compare
-$$
-\int_0^1\int_{-\infty}^{\infty}\int_{-\eta}^{\eta}\int_{0}^{|t|} \Big\{\text{some long and complicated formula here}\Big\}\,ds\,dt\,d\zeta\,d\eta
-$$
-and
-$$
-\int_0^1 d\eta\int_{-\infty}^{\infty}d\zeta\int_{-\eta}^{\eta}dt\int_{0}^{|t|} ds\,\Big\{\text{some long and complicated formula here}\Big\}
-$$<|endoftext|>
-TITLE: A good commutative algebra book
-QUESTION [9 upvotes]: Possible Duplicate:
-Reference request: introduction to commutative algebra
-
-I'm looking for a good book on commutative algebra covering most of (but not limited to):
-
-Basic Galois theory and module algebra
-Primary decomposition of ideals
-Zariski topology
-Nullstellensatz, Hauptidealsatz
-Noether's normalization
-Ring extensions
-"Going up" and "Going down"
-
-The emphasis is on the approach, as I would like a book giving a good geometric intuition of ring theory that I could use as a solid basis to start learning algebraic geometry.
-All in all, do you remember a book that gave you a deeper geometric insight into commutative algebra?
-
-REPLY [9 votes]: My top 3:
-
-Commutative Algebra: with a View Toward Algebraic Geometry, by D. Eisenbud, definitely. As Dylan said in the comments, “some will call it overly chatty but the geometry discussed there is worth everything”. To learn, nothing is too chatty, but to serve as a handbook, yes, this book might be a bit too chatty.
-Commutative Algebra, by Bourbaki; exhaustive, a reference for once you are comfortable, not to learn from.
-Commutative Algebra I & II, by Zariski and Samuel; slightly old-fashioned, but very pedagogical, featuring very interesting points of view aimed at geometry.<|endoftext|>
-TITLE: "Ballot numbers" sum up to Catalan numbers
-QUESTION [10 upvotes]: Summing certain numbers and comparing the results with OEIS, I found that
-$
-\sum_{k=1}^n \frac{k^2}{n} \binom{2n-k-1}{n-1} = C_{n+1} - C_{n},
-$
-where $C_n$ denotes the $n^{\textrm{th}}$ Catalan number. How can I prove this equation? And is there any combinatorial interpretation?
-Some background information: The number $\frac{k}{n} \binom{2n-k-1}{n-1}$ denotes the number of unranked trees of size $n$ with root degree $k$ (these numbers are known as ballot numbers; see e.g. the book Analytic Combinatorics by Flajolet and Sedgewick, page 68). So one must have $\sum_{k=1}^n \frac{k}{n} \binom{2n-k-1}{n-1} = C_{n}$, since there are $C_n$ many trees of size $n$.
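-(The identity is easy to test numerically before proving it; a minimal Python sketch, multiplying through by $n$ so that everything stays in exact integer arithmetic. The helper names are illustrative.)
-
-    from math import comb
-
-    def catalan(n):
-        return comb(2 * n, n) // (n + 1)
-
-    def n_times_lhs(n):
-        # n * sum_{k=1}^{n} (k^2/n) C(2n-k-1, n-1) = sum_k k^2 C(2n-k-1, n-1)
-        return sum(k * k * comb(2 * n - k - 1, n - 1) for k in range(1, n + 1))
-
-    for n in range(1, 12):
-        assert n_times_lhs(n) == n * (catalan(n + 1) - catalan(n))
-    print("identity checked for n = 1..11")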
-
-REPLY [4 votes]: Consider the space of lattice points $(p,q)$ where $0 \leq p \leq q$ and $q = 0, 1, \dots$;
-the number of (shortest) paths in this space from $(p,q)$ to $(0,0)$ is the ballot number
-$$N(p,q) = \frac{q-p+1}{q+1} \cdot \binom{p+q}{p}; \quad N(0,0) := 1.$$
-Using this notation we have $ N(n-k,n-1) = \frac{k}{n} \binom{2n-k-1}{n-1} $.
-The problem asks us to show
-$$\sum_{k=1}^n k N(n-k, n-1) = C_{n+1} - C_{n}.$$
-Note that $ C_{n} $ is the $n$th Catalan number and that $ C_{n} = N(n,n) $, so
-$ C_{n+1} - C_{n} $ is the number of paths from point $(n-1, n+1)$ to $(0,0)$ (all paths from
-$(n+1, n+1)$ to $(0,0)$ which don't use point $(n,n)$ must use point $(n-1, n+1)$).
-Any path from point $(n-1, n+1)$ to point $(0,0)$ must use one and only one of the arcs $a(k)$
-which start from point $(k, n)$ and stop at point $(k, n-1)$, where $0 \leq k \leq n-1$.
-But the number of such paths is $(n-k)N(k,n-1)$ because the number of paths from $(n-1, n+1)$
-to point $(k, n)$ is $(n-k)$, and the number of paths from point $(k,n)$ which use arc $a(k)$ is
-$N(k,n-1)$, so the total number of paths from point $(n-1, n+1)$ to $(0,0)$ is
-$\sum_{k=0}^{n-1} (n-k) N(k,n-1)$, which is the required sum after the substitution $k \mapsto n-k$.
-
-Voineasa.<|endoftext|>
-TITLE: Curl of a vector in spherical coordinates
-QUESTION [15 upvotes]: The curl of a vector function in a curvilinear coordinate system is given by
-$$ \nabla \times A =
-\frac 1 {h_1 h_2 h_3}
-\begin{vmatrix}
-h_1 \hat e_1 & h_2 \hat e_2 & h_3 \hat e_3\\
-\partial \over \partial x_1 & \partial \over \partial x_2 & \partial \over \partial x_3\\
-h_1 A_1 & h_2 A_2 & h_3 A_3
-\end{vmatrix} \hspace{20 mm} \mathbf{(1)}$$
-where $h_1, h_2, h_3$ are scale factors. For spherical coordinates
-$$h_1 = 1, h_2 = r, h_3 = r\sin\theta$$
-However I don't understand (1), which is also not explained in my book. How is it derived?
-Can anyone explain it to me? Even links would be helpful. Thank you!
-
-REPLY [16 votes]: Before doing the derivation, I'd like to explain the origin of the scale factors $h_i$. We will assume throughout that our curvilinear coordinates $x_1$, $x_2$, and $x_3$ are orthogonal, i.e. that the gradients $\nabla x_1$, $\nabla x_2$, $\nabla x_3$ are orthogonal vectors. We will also assume that they are right-handed, in the sense that $\widehat{e}_1\times\widehat{e}_2=\widehat{e}_3$.
-  
-The Origin of the Scale Factors
-One important difference between curvilinear coordinates $x_1,x_2,x_3$ and standard $x,y,z$ coordinates is that curvilinear coordinates do not change at unit speed. That is, if we start at a point and move in the direction of $\widehat{e}_i$, we should not expect $x_i$ to increase at unit rate.
-One consequence of this is that the gradients $\nabla x_i$ of the curvilinear coordinates are not unit vectors. For $x,y,z$ coordinates, we know that
-$$
-\nabla x \;=\; \widehat{\imath},\qquad \nabla y \;=\; \widehat{\jmath},\qquad\text{and}\qquad \nabla z\;=\; \widehat{k}.
-$$
-However, for curvilinear coordinates, we get something like
-$$
-\nabla x_1 \;=\; \frac{1}{h_1}\widehat{e}_1,\qquad \nabla x_2 \;=\; \frac{1}{h_2}\widehat{e}_2,\qquad\text{and}\qquad \nabla x_3 \;=\; \frac{1}{h_3}\widehat{e}_3,
-\tag*{(1)}$$
-where $h_1$, $h_2$, and $h_3$ are scalars.
-The reciprocal $1/h_i$ of each scale factor represents the rate at which $x_i$ will change if we move in the direction of $\widehat{e}_i$ at unit speed. Equivalently, you can think of $h_i$ as the speed that you have to move if you want to increase $x_i$ at unit rate.
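-(To make this concrete, here is a minimal symbolic sketch, assuming SymPy is available; the variable names are illustrative. Each $h_i$ is recovered as the length of the partial derivative of the position vector with respect to the corresponding coordinate, i.e. the speed of the coordinate curve.)
-
-    import sympy as sp
-
-    r, th, ph = sp.symbols('r theta phi', positive=True)
-    # position vector of the point with spherical coordinates (r, theta, phi)
-    pos = sp.Matrix([r * sp.sin(th) * sp.cos(ph),
-                     r * sp.sin(th) * sp.sin(ph),
-                     r * sp.cos(th)])
-    # h_i = |d(pos)/dx_i|: how fast the point moves per unit change in x_i
-    for x in (r, th, ph):
-        print(x, sp.simplify(pos.diff(x).norm()))
-    # prints 1, r, and r*|sin(theta)|, i.e. r*sin(theta) for 0 < theta < pi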
For spherical coordinates, it should be geometrically obvious that $h_1 = 1$, $h_2 = r$, and $h_3 = r\sin\theta$. -  -Formula for the Gradient -We can use the scale factors to give a formula for the gradient in curvilinear coordinates. If $u$ is a scalar, we know from the chain rule that -$$ -\nabla u \;=\; \frac{\partial u}{\partial x_1}\nabla x_1 \,+\, \frac{\partial u}{\partial x_2}\nabla x_2 \,+\, \frac{\partial u}{\partial x_3}\nabla x_3 -$$ -Substituting in the formulas from (1) gives us -$$ -\nabla u \;=\; \frac{1}{h_1}\frac{\partial u}{\partial x_1}\widehat{e}_1 \,+\, \frac{1}{h_2}\frac{\partial u}{\partial x_2}\widehat{e}_2 \,+\, \frac{1}{h_3}\frac{\partial u}{\partial x_3}\widehat{e}_3\tag*{(2)} -$$ -This is the formula for the gradient in curvilinear coordinates. -  -Formula for the Curl -First, observe that the determinant formula you have given for the curl is equivalent to the following three formulas: -$$ -\begin{gather*} -(\nabla\times A)\cdot\widehat{e}_1 \;=\; \frac{1}{h_2h_3}\left|\begin{matrix}\frac{\partial}{\partial x_2} & \frac{\partial}{\partial x_3} \\[8pt] h_2A_2 & h_3A_3\end{matrix}\right| \\[12pt] -(\nabla\times A)\cdot\widehat{e}_2 \;=\; \frac{1}{h_3h_1}\left|\begin{matrix}\frac{\partial}{\partial x_3} & \frac{\partial}{\partial x_1} \\[8pt] h_3A_3 & h_1A_1\end{matrix}\right| \\[12pt] -(\nabla\times A)\cdot\widehat{e}_3 \;=\; \frac{1}{h_1h_2}\left|\begin{matrix}\frac{\partial}{\partial x_1} & \frac{\partial}{\partial x_2} \\[8pt] h_1A_1 & h_2A_2\end{matrix}\right| -\end{gather*} -$$ -We will prove the first of these formulas. Given any vector field $A$, we can write -$$ -\begin{align*} -A \;&=\; A_1 \widehat{e}_1 \,+\, A_2 \widehat{e}_2 \,+\, A_3 \widehat{e}_3 \\[6pt] -&=\; h_1A_1\,\nabla x_1 \,+\, h_2A_2\,\nabla x_2 \,+\, h_3A_3\,\nabla x_3 -\end{align*} -$$ -Taking the curl gives -$$ -\nabla \times A \;=\; \nabla(h_1A_1)\times (\nabla x_1) \,+\, \nabla(h_2A_2)\times(\nabla x_2) \,+\, \nabla(h_3A_3)\times(\nabla x_3) -$$ -Here we have used the identity $\nabla\times(uF) = (\nabla u)\times F + u(\nabla\times F)$, as well as the fact that the curl of a gradient is zero. Applying formula (1), we get -$$ -\nabla \times A \;=\; \frac{1}{h_1}\nabla(h_1A_1)\times \widehat{e}_1 \,+\, \frac{1}{h_2}\nabla(h_2A_2)\times\widehat{e}_2 \,+\, \frac{1}{h_3}\nabla(h_3A_3)\times\widehat{e}_3 -$$ -When we take the cross products, the $\widehat{e}_1$ component will be -$$ -(\nabla \times A)\cdot\widehat{e}_1 \;=\; \frac{1}{h_3}\nabla(h_3A_3)\cdot\widehat{e}_2 \,-\, \frac{1}{h_2}\nabla(h_2A_2)\cdot\widehat{e}_3. -$$ -But, by formula (2) for the gradient, -$$ -\nabla(h_3A_3)\cdot\widehat{e}_2 \;=\; \frac{1}{h_2}\frac{\partial}{\partial x_2}(h_3 A_3)\qquad\text{and}\qquad\nabla(h_2A_2)\cdot\widehat{e}_3 \;=\; \frac{1}{h_3}\frac{\partial}{\partial x_3}(h_2 A_2) -$$ -Therefore, -$$ -\begin{align*} -(\nabla \times A)\cdot\widehat{e}_1 \;&=\; \frac{1}{h_2h_3}\frac{\partial}{\partial x_2}(h_3A_3) \,-\, \frac{1}{h_2h_3}\frac{\partial}{\partial x_3}(h_2A_2) \\[12pt] -&=\; \frac{1}{h_2h_3}\left|\begin{matrix}\frac{\partial}{\partial x_2} & \frac{\partial}{\partial x_3} \\[8pt] h_2A_2 & h_3A_3\end{matrix}\right| -\end{align*} -$$ -as desired.<|endoftext|> -TITLE: What are some (or even one) interesting examples of (non-group) semigroups? -QUESTION [6 upvotes]: I'm going to give a lecture on Alon and Schieber's Tech Report on computing semigroup products (Optimal Preprocessing for Answering On-Line Product Queries). 
Basically, given a list of elements $a_1,\ldots,a_n$ and a bound $k$, they show how to preprocess the elements so that later it is possible to efficiently compute the product of any arbitrary sublist $a_i\cdot\ldots\cdot a_j$ by performing only $k$ products.
-To motivate this algorithm I am looking for examples of semigroups where: 1. It is plausible that one would have a list of elements and want to compute products of sublists. 2. The semigroup is not a group (since the problem is trivial for groups). 3. The space complexity of the product does not overwhelm the time complexity of computing it directly (e.g., in string concatenation, just copying the result to the output takes the same time as computing it even with no preprocessing).
-So far I only have $\min$ and $\max$ and matrix multiplication as examples. I'm looking for something more "sexy", preferably related to computer science (perhaps something to do with graphs or trees). Also, for $\min$ and $\max$ there is a better algorithm, so I can't really use them to motivate this algorithm.
-
-REPLY [2 votes]: The composition of functions can be generalized to the composition of binary relations. The composition of binary relations is associative, and they form a semigroup if restricted to relations from $X$ to $X$.
-If $X$ is finite, they can be represented by a digraph with loops.<|endoftext|>
-TITLE: Tables and histories of methods of finding $\int\sec x\,dx$?
-QUESTION [15 upvotes]: The famously most difficult among completely elementary antiderivatives is that of the secant function.
-Has someone tabulated all the ways it can be done, or written a somewhat comprehensive history of them, or an account of logical connections among them?
-Does the feeling that this particular antiderivative can be found only by methods that are unexpected except by hindsight correspond in some way to some precisely stateable mathematical fact?
-Parenthesis: Look at the tangent half-angle formula in the form
-$$
-f(x)= \tan\left(\frac x 2 + \frac\pi4\right) = \tan x + \sec x.
-$$
-Differentiating both sides yields
-$$
-f'(x) = \frac{1}{2}\sec^2\left(\frac x 2 + \frac\pi4\right) = \sec^2 x + \sec x\tan x = (\sec x)(\sec x + \tan x) = (\sec x)f(x).
-$$
-So $f$ satisfies a differential equation
-$$
-f'(x) = (\sec x)f(x).
-$$
-$$
-\frac{df}{f} = \sec x\,dx.
-$$
-Antidifferentiating both sides gives $\log|f(x)|= \text{the thing sought}$. Is this one "out there" somewhere (in a published source or on the web)?
-Later edit: Perhaps some ways of finding this antiderivative are neither "unexpected" (in the sense of being things that can be seen to work only by hindsight) nor applicable only to this one integral. But a fact persists: Lots of ways of doing this that are out there in the literature do match that description. Probably far more so than with all other elementary antiderivatives. So there's a question of whether there's some mathematical fact that explains why that should be true only of this one integral.
-
-REPLY [3 votes]: Since you ask for references, the original way this integral was found, via cartography, should be kept in mind. See V. F. Rickey and P. M. Tuchinsky, "An Application of Geography to Mathematics: History of the Integral of the Secant", Mathematics Magazine 53 (1980), 162-166.
It is available for free on JSTOR.<|endoftext|>
-TITLE: Product of spheres embeds in Euclidean space of 1 dimension higher
-QUESTION [33 upvotes]: This problem was given to me by a friend:
-
-Prove that $\prod_{i=1}^m \mathbb{S}^{n_i}$ can be smoothly embedded in a Euclidean space of dimension $1+\sum_{i=1}^m n_i$.
-
-The solution is apparently fairly simple, but I am having trouble getting a start on this problem. Any help?
-
-REPLY [52 votes]: Note first that $\mathbb{R}\times\mathbb{S}^n$ smoothly embeds in $\mathbb{R}^{n+1}$ for each $n$, via $(t,\textbf{p})\mapsto e^t\textbf{p}$.
-Taking the Cartesian product with $\mathbb{R}^{m-1}$, we find that $\mathbb{R}^m\times\mathbb{S}^n$ smoothly embeds in $\mathbb{R}^{m}\times\mathbb{R}^n$ for each $m$ and $n$.
-By induction, it follows that $\mathbb{R}\times\prod_{i=1}^m \mathbb{S}^{n_i}$ smoothly embeds in a Euclidean space of dimension $1+\sum_{i=1}^m n_i$.
-
-The desired statement follows.<|endoftext|>
-TITLE: What is the sum of the following permutation series $nP0 + nP1 + nP2 +\cdots+ nPn$?
-QUESTION [19 upvotes]: What is the sum of the following permutation series $nP0 + nP1 + nP2+\cdots+ nPn$?
-I know that
-$nC0 + nC1 +\cdots + nCn = 2^n$, but not the analogue for permutations. Is there some standard result for this?
-
-REPLY [17 votes]: You can write this as
-$$ S(n) = n! \left( {1 \over 0!} + {1 \over 1!} + \cdots + {1 \over n!} \right) $$
-and now recall that $e = 1/0! + 1/1! + 1/2! + \cdots $. So in fact
-$$ S(n) = n! \left( e - \left( {1 \over (n+1)!} + {1 \over (n+2)!} + \cdots \right) \right) $$
-and we can rearrange this to give
-$$ S(n) = n! \times e - \left( {1 \over (n+1)} + {1 \over (n+1)(n+2)} + \cdots \right). $$
-Call the expression in parentheses $g(n)$ -- that is,
-$$ g(n) = {1 \over (n+1)} + {1 \over (n+1)(n+2)} + {1 \over (n+1)(n+2)(n+3)} + \cdots $$.
-Then clearly
-$$ g(n) < {1 \over n} + {1 \over n^2} + {1 \over n^3} + \cdots $$
-and by the usual formula for the sum of a geometric series,
-$$ g(n) < {1/n \over 1-(1/n)} = {1 \over n-1}. $$
-In particular if $n > 2$ we have $g(n) < 1$ and therefore $(n! \times e) - 1 < S(n) < n! \times e$. But $S(n)$ is a sum of integers and is therefore an integer. So $S(n) = \lfloor n! \times e \rfloor$ -- that is, $S(n)$ is the greatest integer less than $n! \times e$. For example $4! \times e \approx 65.23$ and $S(4) = 65$.
-This sequence is A000522 in the OEIS and the formula I gave here is given there without proof.
-Also, the number of derangements of $n$ elements is given by $n! (1/0! - 1/1! + 1/2! - \cdots \pm 1/n!)$ and can similarly be proven to be the integer closest to $n!/e$.
-
-REPLY [5 votes]: Let us call the sum $S(n)$. We have $S(1) = 1P0 + 1P1 = 1+1 = 2$.
-For $n\gt 1$, we can factor out $n$ to get
-$$\begin{align*}
-S(n) &= nP0 + nP1 + nP2 + \cdots + nPn\\
-&= 1+ n + n(n-1) + \cdots + n!\\
-&= 1+ n\Bigl( 1+(n-1) + (n-1)(n-2) + \cdots + (n-1)!\Bigr)\\
-&= 1+nS(n-1).
-\end{align*}$$
-Thus, the sequence begins:
-$$\begin{align*}
-S(1) &= 2\\
-S(2) &= 1 + 2S(1) = 5\\
-S(3) &= 1+ 3S(2) = 16\\
-S(4) &= 1+4S(3) = 65\\
-S(5) &= 1+5S(4) = 326\\
-&\vdots
-\end{align*}$$
-This is sequence A000522 on the On-Line Encyclopedia of Integer Sequences.
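-(Both the recursion and the floor formula above are quick to verify; a minimal Python sketch, using exact rational arithmetic for the factorial-sum identity and a safe range for the floating-point floor check.)
-
-    from fractions import Fraction
-    from math import e, factorial, floor
-
-    S = 2                                    # S(1) = 1P0 + 1P1
-    partial = Fraction(2)                    # 1/0! + 1/1!
-    for n in range(2, 13):
-        S = 1 + n * S                        # recursion S(n) = 1 + n*S(n-1)
-        partial += Fraction(1, factorial(n))
-        assert S == factorial(n) * partial   # S(n) = n!(1/0! + ... + 1/n!)
-        assert S == floor(factorial(n) * e)  # S(n) = floor(n! * e)
-    print("checks pass for n = 2..12")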
The entry contains numerous connections; e.g., $S(n)$ is the permanent of the $n\times n$ matrix with $2$s on the diagonal and $1$s elsewhere; it also gives the formula from Marvis's post:
-$$S(n) = \exp(1)\Gamma(n+1,1)\text{ where }\Gamma(z,t)=\int_{x\geq t} \exp(-x)x^{z-1}\, dx$$
-
-REPLY [4 votes]: There is no "nice" closed form as such, though it is related to the incomplete $\Gamma$ function.
-$P(n,k) = \dfrac{n!}{(n-k)!}$. Hence, $$\sum_{k=0}^{n} P(n,k) = n! \left( \sum_{k=0}^{n} \dfrac1{(n-k)!} \right) = n! \left( \sum_{k=0}^{n} \dfrac1{k!} \right)$$
-The above sum is related to the incomplete $\Gamma$ function, which is defined as $$\Gamma(n+1,x) = \int_x^{\infty} t^{n} e^{-t} dt = n! e^{-x} \sum_{k=0}^n \left(\dfrac{x^k}{k!} \right)$$ taking $n$ to be a positive integer. Setting $x=1$, we get that
-$$\Gamma(n+1,1) = \dfrac{n!}e \left(\sum_{k=0}^n \dfrac1{k!} \right)$$
-Hence, $$\sum_{k=0}^{n} P(n,k) = e \times \Gamma(n+1,1)$$<|endoftext|>
-TITLE: gauss-manin connection for curves
-QUESTION [6 upvotes]: Let $\pi: X \to Y$ be a finite morphism between smooth projective curves over the complex numbers. I would like to know:
-(1) what the Gauss-Manin connection with respect to $\pi$ (that is, the connection corresponding to the local system $\pi_\ast\mathbb{C}$ on $Y$ minus the ramification points) looks like
-(2) what kind of information the Grothendieck-Riemann-Roch theorem provides when applied to $\pi$
-Thanks!
-
-REPLY [4 votes]: I'll take a stab at these:
-(1) Well, the Gauss-Manin connection is flat, and all flat connections look alike in local holomorphic trivializations. A better question is what the parallel sections of the local system look like. Here a simple example might be useful.
-Let $\pi : \mathbb C \to \mathbb C$ be $z \mapsto z^2$. This is a finite morphism, of the admittedly non-projective affine line to itself, but it can be extended to a finite morphism on the projective line that is ramified at $0$ and $\infty$ only. The only nontrivial local system associated to this morphism, outside of the ramification points, consists of two disjoint copies of $\mathbb C$, one for each point in the preimage of a given point. A parallel section of the associated bundle over $U \subset \mathbb C$ then corresponds to the choice of a square root of $z$ over $U$, and if $U$ is connected this choice of square root does not "jump" between branches, which would correspond to jumping from one point in a preimage of $\pi$ to another.
-The case of a general finite morphism should maybe be thought of as similar to this one; parallel sections of the vector bundle associated to the local system correspond to picking a branch of local solutions $x$ of $\pi(x) = y$ as $y$ varies on $Y$.
-(2) I haven't worked out the details, but I'm willing to bet good money that we get an extreme overkill proof of the Riemann-Hurwitz formula by applying Grothendieck-Riemann-Roch to the finite morphism $\pi : X \to Y$.<|endoftext|>
-TITLE: Axiomatization of $\mathbb{Z}$ via well-ordering of positives.
-QUESTION [5 upvotes]: Though I've seen several cool axiomatizations of $\mathbb{R}$, I've never seen any at all for $\mathbb{Z}$.
-My initial guess was that $\mathbb{Z}$ would be an ordered ring which is "weakly" well-ordered in the sense that any nonempty subset with a lower bound has a minimal element.
-However, after seeing this definition of a discrete ordered ring, I'm less sure.
I made that guess under the impression that the fundamental characteristic of $\mathbb{Z}$ is that every nonzero element has exactly one representation of the form $\pm (1+1+\dots+1)$, but that seems to be shared by every DOR.
-Presumably, this definition wouldn't exist if every instance of it were isomorphic to $\mathbb{Z}$, so can someone give me an example of another discrete ordered ring? More to the point, what is a sufficient characterization of $\mathbb{Z}$? (And a proof sketch of uniqueness would be nice.)
-I'm aware that $\mathbb{Z}$ is pretty easily constructible from $\mathbb{N}$, but I want to use this for a seminar and, given the audience I am expecting, I would rather not deal with Peano. (And I guess it feels like cheating to say "$\mathbb{N}$ is a well-ordered rig".)
-
-REPLY [3 votes]: Any ordered ring R whose positives P are well-ordered in R is isomorphic to $\mathbb Z$ as an ordered ring. The proof is easy. Hint: $ $ the natural image of $\,\mathbb Z\,$ in R is an order monomorphism, so it remains to show it is onto. If not, R has a positive element $\rm\:w\not\in \mathbb Z.\:$ $\rm w$ is not infinite $\rm (w\! >\! n,\, \forall\, n\in\mathbb N)\,$ else $\rm\,w > w\!-\!1 > w\!-\!2,\ldots\,$ is an infinite descending chain in P, contra P well-ordered. Therefore $\rm\:w\:$ must lie between two naturals $\rm\:n < w < n\!+\!1,\:$ hence $\rm\ 0 < \epsilon < 1\:$ for $\rm\:\epsilon = w\!-\!n,\:$ therefore $\rm\: \epsilon > \epsilon^2 > \epsilon^3 > \ldots\,$ is an infinite descending chain in P, $ $ contra P is well-ordered. $ $ QED
-You ask for another example of a discrete ordered ring. As here, order the ring $\rm\,\mathbb Z[x]\,$ of integer-coefficient polynomials by: $\rm\:f > 0\:$ iff it has leading coefficient $> 0,\,$ i.e. iff $\rm\:f\:$ is positive at $+\infty,$ and $\rm\:f > g\:$ iff $\rm\,f\!-\!g > 0.\:$ Here, as above, $\rm\:x > x\!-\!1 > x\!-\!2 > \ldots\, $ so its positives are not well-ordered.
-
-REPLY [2 votes]: Second-order quantification allows us to talk about properties of subsets of the ring, much like the completeness axiom of the real numbers (which is itself a second-order statement).
-We can adjoin to the usual theory of ordered rings the following axiom (every nonempty subset which is bounded from below has a least element):
-$$\forall A\,\bigl(A\neq\varnothing\land\exists x\,\forall a\,(a\in A\rightarrow x\leq a)\rightarrow\exists m\,(m\in A\land\forall a\,(a\in A\rightarrow m\leq a))\bigr)$$<|endoftext|>
-TITLE: Is a probability measure on a compact space regular?
-QUESTION [6 upvotes]: Let $X$ be a compact Hausdorff space. Let $\mu$ be a Borel probability measure on $X$. Does it automatically follow that $\mu$ is regular? That is, for all Borel $E \subset X$, must we have $$\mu(E) = \inf \mu(U) = \sup \mu(K)$$ where the infimum is taken over open $U \supset E$ and the supremum is taken over compact $K \subset E$? I know this is true when $X$ is a metric space or, more generally, when every open subset of $X$ is $\sigma$-compact. I imagine this fails in general though.
-
-I have a proposed counterexample which is a long way from being complete, but I'll put it here anyway. Let $I$ be an uncountable index set. Denote the product of $I$ many copies of the discrete space $\{0,1\}$ by $2^I$ and give it the (compact) product topology. I would like to give $2^I$ its "product measure" as well, but I don't understand products of infinite families of measure spaces. Luckily, $2^I$ is, in a natural way, a compact group, so we can use the Haar measure, which does what I wanted the product measure to do. Let $E$ be the set of elements of $2^I$ with countable support.
-Issue 1: Is $E$ Borel? I don't see why it should be...
-Assuming $E$ is Borel, it must have measure zero. Consider the translates $$xE = \{ y \in 2^I : y_i = x_i \text{ for all but countably many } i \in I\}$$
-of $E$. By choosing a sequence of points $x_1,x_2,x_3,\ldots \in 2^I$ such that any two differ in uncountably many coordinates (this is possible), we see that the translates $x_1E,x_2E,x_3E,\ldots$ are disjoint. So, if $E$ had positive measure, we would get that $\mu(x_1E \cup x_2E \cup \ldots) = \mu(E) + \mu(E) + \ldots = \infty$ by translation invariance of the measure. This contradicts $\mu(2^I) =1$.
-Now, I would like to use $E$ to contradict outer regularity. So how is it that one can find open $U \supset E$? For $F \subset I$ finite, let $U_F = \{x \in 2^I : x_i = 0 \text{ for all } i \in F\}$ (a basic open set). I think essentially the only way to cover $E$ by an open set is to choose an uncountable family $F_j$ of disjoint finite subsets of $I$ and to consider $U = \bigcup_j U_{F_j}$. This covers $E$ since, if $x \in E$, then the support of $x$ is countable, so $x$ is nonzero on at most countably many of the $F_j$, so $x$ is zero on some particular $F_j$ and $x \in U$.
-Issue 2: Is it true that $U$ should have $\mu(U)=1$? For some reason I feel maybe it should.
-If both of the above issues are resolved, then $2^I$ is not regular since $\mu(E) = 0$ and $\inf \mu(U) = 1$.
-
-REPLY [7 votes]: Here's a famous example of the counter-intuitive behaviour that you want.
-The Dieudonné measure $\mu$ is a Borel probability measure on the compact space $X=[0,\omega_1]$,
-where $\omega_1$ is the first uncountable ordinal.
-It has the property that $\mu(K)=0$ for every compact subset of $X\setminus\{\omega_1\}$, yet
-$\mu(X\setminus\{\omega_1\})=1$.
-You can find more details in Volume 2 of Bogachev's Measure Theory, Example 7.1.3 (page 68).<|endoftext|>
-TITLE: Group structure on $\mathbb R P^n$
-QUESTION [9 upvotes]: For which positive integers $n$ can $\mathbb R P^n$ be given the structure of a topological group?
-I believe that $\mathbb R P^n$ cannot be given a Lie group structure for even $n$, since then it is not orientable. But this doesn't necessarily imply it doesn't have a topological group structure (which is not smooth); moreover, it tells us nothing about odd $n$. Any ideas?
-
-REPLY [6 votes]: As Olivier has shown, this can be reduced to the question for spheres. One way to prove this is to note that for a topological group $G$ we have that $G$ is homotopy equivalent to $\Omega BG$, the loop space of the classifying space. For example $\Omega BS^1 = \Omega \mathbb{C}P^\infty = \Omega K(\mathbb{Z},2) = K(\mathbb{Z},1) = S^1$. Thus the question is which spheres are loop spaces of classifying spaces. Adams' work showed this is true only for $n=0,1,3$.<|endoftext|>
-TITLE: Chernoff inequalities
-QUESTION [8 upvotes]: Chernoff inequalities are inequalities that express concentration around the expectation of a random variable $X=\sum_iX_i$, where the $X_i$ are i.i.d. random variables.
-I have been encountering these inequalities in different contexts and for several distributions of the $X_i$'s. Nevertheless, whenever I try to understand the proofs, it seems to me that each proof is just a sequence of tricks and I fail to get any insight.
-
-Do you know of any insightful way, perhaps using "higher math", to view these inequalities?
-For which distributions of the $X_i$'s can one expect to get a Chernoff bound for $\sum_iX_i$?
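-(A tiny numerical illustration, a sketch assuming NumPy, with fair Bernoulli variables as an illustrative choice: the optimized exponential bound $\mathbb P(S_n\ge an)\le e^{-nI(a)}$, with $I(a)=a\log(2a)+(1-a)\log(2(1-a))$ for coin flips, can be compared with the empirical tail.)
-
-    import numpy as np
-
-    rng = np.random.default_rng(0)
-    n, a, trials = 100, 0.6, 200_000
-
-    # rate function for Bernoulli(1/2): I(a) = a*log(2a) + (1-a)*log(2(1-a))
-    I = a * np.log(2 * a) + (1 - a) * np.log(2 * (1 - a))
-    bound = np.exp(-n * I)
-
-    S = rng.integers(0, 2, size=(trials, n)).sum(axis=1)
-    empirical = np.mean(S >= a * n)
-    # the empirical tail should sit below the Chernoff bound
-    print(f"Chernoff bound: {bound:.3e}, empirical: {empirical:.3e}")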
-
-REPLY [2 votes]: 1) I wouldn't consider this as a quantitative version of the central limit theorem, but rather as a quantitative version of large deviation theorems (the two are related, of course). Let us focus on the result, and not on the methods used to get it. Let $(X_i)$ be a sequence of i.i.d., $\mathbb{R}$-valued, centered, bounded random variables. I'll denote by $(S_n)$ the sequence of its partial sums. A large deviation principle tells you that there exists a rate function $I: \mathbb{R} \to \mathbb{R}_+$ such that, for any open set $O$:
-$$- \inf_O I \leq \liminf_{n \to + \infty} \frac{\ln \mathbb{P} (S_n/n \in O)}{n},$$
-and for any closed set $F$:
-$$\limsup_{n \to + \infty} \frac{\ln \mathbb{P} (S_n/n \in F)}{n} \leq - \inf_F I.$$
-In other words, the probability that the sum $S_n$ is large (say, $S_n \geq \varepsilon n$ for a fixed $\varepsilon$) decreases exponentially in $n$, roughly at speed $e^{- I (\varepsilon)n}$.
-A notable feature of these large deviation principles for i.i.d. random variables is that the function $I$, which governs the speed of the decay, is the Laplace-Legendre transform of the characteristic function of $X$. In other words, exactly what you get with the Chernoff bounds! So the Chernoff bounds give you a quantitative upper bound for the large deviation principles:
-$$\mathbb{P} (S_n/n \geq \varepsilon) \leq e^{- I(\varepsilon) n},$$
-or equivalently,
-$$\frac{\ln \mathbb{P} (S_n/n \geq \varepsilon)}{n} \leq - I(\varepsilon).$$
-In a more general setting, the rate function $I$ is related to the entropy of some system (you get a large entropy [that is, small for a physicist - there is often a sign change] when the sum $S_n$ is far from its typical state).
-==========
-There is a point which is worthy of note, but has not been raised yet. You can show that moment bounds are stronger than exponential bounds. You know that, for any $p \geq 0$ and any $t > 0$:
-$$\mathbb{P} (|X| \geq t) \leq \frac{\mathbb{E} (|X|^p)}{t^p}.$$
-These bounds are stronger than the Chernoff bounds: if you know each of the moments of $X$, then the moment bounds allow you to get better bounds on $\mathbb{P} (|X| \geq t)$ than Chernoff bounds. However, they behave very badly when you look at sums of i.i.d. random variables (because the moments change in a non-trivial way), while the exponential bounds are very easy to manage:
-$$\mathbb{E} (e^{\lambda S_n}) = \mathbb{E} (e^{\lambda X})^n.$$
-==========
-2) Obviously, Chernoff bounds exist as soon as the characteristic function $\mathbb{E} (e^{\lambda X})$ is defined on a neighborhood of $0$, so you only need exponential tails for $X$ (and not boundedness). Moreover, if you want to get a bound in one direction (i.e. on $\mathbb{P} (S_n/n \geq \varepsilon)$ or $\mathbb{P} (S_n/n \leq -\varepsilon)$, not on $\mathbb{P} (|S_n/n| \geq \varepsilon)$), you only need exponential tails in the corresponding direction.
-If you assume stronger hypotheses on the tails of $X$, you can get stronger Chernoff bounds. Boundedness or sub-gaussianity of $X$ are typical assumptions.
-You can get similar bounds (concentration inequalities) not only for the partial sums of i.i.d. random variables, but also for some martingales (see Collin McQuillan's answer), and for much, much larger classes of processes.
This Wikipedia page will give you a taste of it, as well as some key-words, if you are interested.<|endoftext|>
-TITLE: Entropy of a Linear Toral Automorphism
-QUESTION [6 upvotes]: I'm trying to calculate the entropy of the linear toral automorphism
-induced by
-$$f(x,y,z)=(x,y+x,y+z)$$
-This is an exercise in the Katok book.
-This map has all eigenvalues equal to 1, but I do not want to use that $h_{\mathrm{top}}(f)= \log (\max_i|\lambda_i|)$. I would like to use Katok's suggestion, which says that the cardinality of separated sets grows quadratically with $n$, where $n$ is the length of the orbit. But I cannot see it clearly.
-
-REPLY [2 votes]: We have
-$$f^n(x,y,z)=(x,\,y+nx,\,z+ny+\tbinom n 2 x)$$
-Taking $\|\cdot\|_\infty$ as a metric on $(\mathbb R/\mathbb Z)^3$, this implies
-$$d_n(a,b) \le (1+n+n(n-1)/2) \|a-b\|_\infty$$
-where $d_n$ is the maximum distance between the two orbits $(a,f(a),\dots,f^n(a))$.
-It follows that an $(n,\varepsilon)$-separated set must be $(0,\Omega(\varepsilon/n^2))$-separated (in other words, the metric $d_n$ grows at most quadratically), and therefore, since we are in dimension $d=3$, for fixed $\varepsilon$ its cardinality grows as $O(n^{2d})$, which suffices to conclude that $h_{\mathrm{top}}(f)=0$.
-
-Note that the cardinality itself does grow faster than quadratically, as can be seen with the following $n^2(n-1)/2$ points $M_{uv}$:
-$$\left\{\begin{aligned}
-x=&u/\tbinom n 2\\
-y=&v/n\\
-z=&0
-\end{aligned}\right.$$
-If $d_n(M_{uv},M_{u'v'})<\varepsilon<1/4$ for some odd $n$, then we have
-$$|n(y'-y)+\tbinom n 2 (x'-x)|<\varepsilon\\
-|v'-v+u'-u|<1\\
-v'-v = -(u'-u)$$
-$$|(n+1)/2\cdot(y'-y)+\tbinom{(n+1)/2}{2}(x'-x)|<\varepsilon\\
-|u'-u|<\frac{n}{(n+1)/4}\varepsilon<1\\
-u=u'\\
-v=v'$$
-So we have a set of $\Omega(n^3)$ points that is $(n,\varepsilon)$-separated.<|endoftext|>
-TITLE: Show that $a^{\phi(b)}+b^{\phi(a)} \equiv 1 \pmod{ab}$, if $a$ and $b$ are relatively prime positive integers.
-QUESTION [13 upvotes]: Show that
-$a^{\phi(b)}+b^{\phi(a)} \equiv 1 \pmod{ab}$,
-if $a$ and $b$ are relatively prime positive integers.
-Note that $\phi(n)$ counts the number of positive integers not exceeding $n$ which are relatively prime to $n$.
-
-REPLY [2 votes]: Use Euler's Theorem:
-$$a^{\phi (b)}+b^{\phi (a)} \equiv a^{\phi (b)}\equiv 1 \pmod{b} $$
-$$a^{\phi (b)}+b^{\phi (a)} \equiv b^{\phi (a)}\equiv 1 \pmod{a} $$
-So, $a^{\phi (b)}+b^{\phi (a)} \equiv 1 \pmod{ab}$<|endoftext|>
-TITLE: Isomorphism between quotient modules
-QUESTION [10 upvotes]: Is it true for a commutative ring $R$ and its ideals $I$ and $J$ that if the quotient $R$-modules $R/I$ and $R/J$ are isomorphic then $I=J$?
-
-REPLY [16 votes]: Problem 22 of Chapter 4 from Steven Roman's Advanced Linear Algebra asks one to prove this in the affirmative when $R/I\simeq R/J$ as $R$-modules, and then asks about the case when $R/I\simeq R/J$ as rings.
So $I=J$.<|endoftext|>
-TITLE: Are sets given in parametric form always algebraic?
-QUESTION [12 votes]: If a set is given in parametric form by polynomials, is this set always closed (Zariski topology), i.e. algebraic?
-For example, take $X=\{(t,t^{2},t^{3}): t \in \mathbb{A}^{1}\}$ and $W=\{(t^{3},t^{4},t^{5}):t \in \mathbb{A}^{1}\}$; one can check these sets are closed, for example $X=V(y-x^{2},z-x^{3})$.
-Question: suppose $X \subseteq \mathbb{A}^{n}$ is given by:
-$X=\{(f_{1}(t),f_{2}(t),\ldots,f_{n}(t)): t \in \mathbb{A}^{1}\}$ where $f_{i} \in k[t]$ for each $i \in \{1,2,\ldots,n\}$. Is it always true that $X$ is closed? Why? (By the way, the underlying field $k$ is algebraically closed and $\operatorname{char}(k)$ is zero.)
-
-REPLY [5 votes]: Here is a second, more algebraic proof.
-Consider the morphism $f:\mathbb A^1_k \to \mathbb A^n_k :t\mapsto (f_1(t),\ldots,f_n(t))$.
-It corresponds to the morphism of $k$-algebras $\phi: k[T_1,...,T_n] \to k[T]:T_i\mapsto f_i(T) $.
-There are now two cases:
-a) If all the polynomials $f_i(T)$ are constant, the image of $f$ is a point in $\mathbb A^n_k$, and so obviously a closed set.
-b) If some $f_i$ is not constant, then $T$ is integral over $k[f_i(T)]$ and a fortiori over $k[f_1(T),...,f_n(T)]$. In other words the morphism $\phi: k[T_1,...,T_n] \to k[T] $ is integral (and even finite).
-So the morphism $f:\mathbb A^1_k \to \mathbb A^n_k $ is integral and thus closed. In particular $f(\mathbb A^1_k)\subset \mathbb A^n_k$ is closed.
-Remark
-Despite appearances this proof is more elementary than the preceding one. It only uses that integral morphisms are closed, which follows from lying-over.
-The simplicity of the first proof is a bit deceptive: it uses as a black box that projective space is complete.
-This is often thought trivial because it corresponds to compactness over $\mathbb C $, but the algebraic proof of completeness is not trivial, nor, come to think of it, is the assertion that completeness is equivalent to compactness in the classical case (a GAGA-type assertion).<|endoftext|>
-TITLE: Eckmann-Hilton and higher homotopy groups
-QUESTION [17 votes]: How does the Eckmann-Hilton argument show that higher homotopy groups are commutative?
-
-I can easily follow the proof on Wikipedia, but I have no good mental picture of the higher homotopy groups, and I can't see how to apply it. Wikipedia mentions this application in one sentence, with no explanation.
-(Motivation: I'm thinking of giving a talk on algebraic topology for a general audience of math majors. The purpose would be to try to explain how purely "algebraic" methods can be used to gain serious insight into purely "geometric" problems. This would be a great example, if I could do the proof!)
-Thanks in advance!
-
-REPLY [14 votes]: The following is relevant to the point made by Qiaochu Yuan.
-What is usually called the Eckmann-Hilton argument is actually a special application of the interchange law, whose quite general setting is for double categories, i.e. sets with two distinct category structures such that, and this is the interchange law, each is a morphism for the other. However, even when each of the structures is a groupoid, i.e. all arrows are isomorphisms, the interchange law does not lead to a commutative structure; it only implies that the structure contains a family of abelian groups. Also the interchange law implies that double groupoids contain the structures known as crossed modules, which occur in homotopy theory and the cohomology of groups.
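-(For reference, here is a sketch of the one-line computation that the interchange law makes possible — the standard Eckmann-Hilton argument, stated here under the assumption that the two operations $\cdot$ and $\circ$ share a unit $1$ and satisfy $(a\cdot b)\circ(c\cdot d)=(a\circ c)\cdot(b\circ d)$:
-$$a\cdot b=(a\circ 1)\cdot(1\circ b)=(a\cdot 1)\circ(1\cdot b)=a\circ b=(1\cdot a)\circ(b\cdot 1)=(1\circ b)\cdot(a\circ 1)=b\cdot a,$$
-so the two operations coincide and are commutative. Applied to the two independent ways of concatenating based maps $S^2\to X$, this is exactly how commutativity of $\pi_2$ — and of the higher homotopy groups — falls out.)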
In fact $n$-fold groupoids become more complicated with increasing $n$.
-The existence of such double structures, first formulated by C. Ehresmann in the 1960s, raised the question of their potential use in homotopy theory, a question relevant to the history of homotopy groups, and to their abelian nature.
-Topologists of the early 20th century were aware that the noncommutative nature of the fundamental group was useful in geometry and analysis;
-that the first homology group was, for a connected space, the fundamental group made abelian; and that the homology groups were defined in all dimensions. Consequently,
-there was a desire to find higher dimensional versions of the fundamental group, keeping its nonabelian nature.
-In 1932, E. Cech submitted to the ICM at Zurich a paper on higher homotopy groups; however, Alexandroff and Hopf, the kings of topology at the time, objected to the fact that they were abelian for $n \geq 2$, and on these grounds persuaded Cech to withdraw his paper, so that only a small paragraph appeared in the ICM Proceedings.
-It seems that Hurewicz attended this ICM. In 1935, the first of his notes on homotopy groups was published, and from then on, concerns about the abelian nature of higher homotopy groups were regarded as a failure to accept a basic fact of life.
-In a publication of 1978 R. Brown and P.J. Higgins defined the homotopy double groupoid $\rho(X,A,x)$ of a pointed pair of spaces, which consisted of homotopy classes of maps of a square $I^2$ into $X$ which mapped the edges to $A$ and the vertices to the base point $x$. This enabled the proof of a 2-dimensional van Kampen theorem, which had as a special case a result of Whitehead on free crossed modules.
-This work was in fact inspired by the work of J.H.C. Whitehead, who in the 1940s developed a deeper understanding of the nonabelian second relative homotopy group of a pair $(X,A)$, with an operation of $\pi_1(A,x)$, and introduced the notion of crossed module for the structure of the boundary map $\delta: \pi_2(X,A,x) \to \pi_1(A,x)$. He also gave a deep determination of $\pi_2(A \cup \{e^2_\lambda\}_{\lambda \in \Lambda}, A,x)$ as a free crossed $\pi_1(A,x)$-module on the characteristic maps of the $2$-cells.
-One advantage of this result is that it allows for an expression that we very much want, namely that in the standard representation of a Klein bottle as a square $\sigma$, we would like to write the boundary as $$\delta \sigma= a+b-a+b,$$
-which is a noncommutative formula, and this is exactly possible in the context of crossed modules, with $\sigma$ as a free generator. In the usual chain complex system we can of course write only
-$$\partial \sigma= 2b,$$
-thus giving a loss of information.
-The interchange law is thus central to the use of higher groupoids in homotopy theory and other areas. There are lots of fun things, with an extra structure of connections to enable such things as the notions of commutative cubes and of rotations.
-For more details on maths and history, see the pdf of this book, and also this paper, available there in a special issue of Indagationes Math. in honor of L.E.J. Brouwer.<|endoftext|>
-TITLE: Does an injective endomorphism of a finitely-generated free R-module have nonzero determinant?
-QUESTION [20 votes]: Alternatively, let $M$ be an $n \times n$ matrix with entries in a commutative ring $R$. If $M$ has trivial kernel, is it true that $\det(M) \neq 0$?
-This math.SE question deals with the case that $R$ is a polynomial ring over a field.
There it was observed that there is a straightforward proof when $R$ is an integral domain by passing to the fraction field.
-In the general case I have neither a proof nor a counterexample. Here are three general observations about properties that a counterexample $M$ (trivial kernel but zero determinant) must satisfy. First, recall that the adjugate $\text{adj}(M)$ of a matrix $M$ is a matrix whose entries are integer polynomials in those of $M$ and which satisfies
-$$M \,\text{adj}(M) = \det(M)\, I.$$
-If $\det(M) = 0$ and $\text{adj}(M) \neq 0$, then some column of $\text{adj}(M)$ lies in the kernel of $M$. Thus:
-
-If $M$ is a counterexample, then $\text{adj}(M) = 0$.
-
-When $n = 2$, we have $\text{adj}(M) = 0 \Rightarrow M = 0$, so this settles the $2 \times 2$ case.
-Second observation: recall that by Cayley-Hamilton $p(M) = 0$ where $p$ is the characteristic polynomial of $M$. Write this as
-$$M^k q(M) = 0$$
-where $q$ has nonzero constant term. If $q(M) \neq 0$, then there exists some $v \in R^n$ such that $w = q(M) v \neq 0$, hence $M^k w = 0$ and one of the vectors $w, Mw, M^2 w,\dots, M^{k-1} w$ necessarily lies in the kernel of $M$. Thus if $M$ is a counterexample we must have $q(M) = 0$ where $q$ has nonzero constant term.
-Now for every prime ideal $P$ of $R$, consider the induced action of $M$ on $F^n$, where $F = \overline{ \text{Frac}(R/P) }$. Then $q(\lambda) = 0$ for every eigenvalue $\lambda$ of $M$. Since $\det(M) = 0$, one of these eigenvalues over $F$ is $0$, hence it follows that $q(0) \in P$. Since this is true for all prime ideals, $q(0)$ lies in the intersection of all the prime ideals of $R$, hence
-
-If $M$ is a counterexample and $q$ is defined as above, then $q(0)$ is nilpotent.
-
-This settles the question for reduced rings. Now, $\text{det}(M) = 0$ implies that the constant term of $p$ is equal to zero, and $\text{adj}(M) = 0$ implies that the linear term of $p$ is equal to zero. It follows that if $M$ is a counterexample, then $x^2 \mid p(x)$. When $n = 3$, this implies that
-$$q(x) = x - \lambda$$
-where $\lambda$ is nilpotent, so $q(M)=0$ forces $M = \lambda I$ to be nilpotent and thus to have nontrivial kernel. So this settles the $3 \times 3$ case.
-Third observation: if $M$ is a counterexample, then it is a counterexample over the subring of $R$ generated by the entries of $M$, so
-
-We may assume WLOG that $R$ is finitely-generated over $\mathbb{Z}$.
-
-REPLY [3 votes]: Here is an elementary proof of the fact that if the determinant $D$ of an $n \times n$ matrix $M$ is a zero-divisor, then there is a nonzero vector $X$ such that $MX = 0$.
-Let $a$ be a nonzero scalar such that $aD = 0$. Let $M'$ be a square submatrix of $M$ of maximum size $r$ such that $a \det M' \ne 0$. (If there is no such submatrix, let $r = 0$.) We have $r < n$ by the definition of $a$. After permuting the rows and columns of $M$ if necessary, we may assume that $M'$ is located in the top left corner of $M$.
-Let $M''$ be the $r \times (r + 1)$ matrix in the top left corner of $M$, and let $d_j$, for $j = 1, \dots, r+1$, be the minor of $M''$ obtained by deleting its $j$th column.
-Now define $X = aX_0$, where
-$$X_0 = (d_1, -d_2, \dots, (-1)^r d_{r+1}, 0, \dots, 0).$$
-We have $ad_{r+1} = a\det M' \ne 0$, so $X \ne 0$.
-I claim that $MX = 0$. If $i > r$, then the $i$th coordinate of $MX_0$ is, up to sign, the minor of $M$ obtained from its first $r + 1$ columns and rows $1, 2, \dots, r, i$. By the definition of $r$, this minor becomes zero upon multiplication by $a$.
When $i \leq r$, the $i$th coordinate of $MX_0$ is zero because it is, up to sign, the determinant obtained by extending $M''$ with its own $i$th row. This proves the claim.<|endoftext|>
-TITLE: What is the difference between $L^2$ norm and $\ell^2$ norm?
-QUESTION [5 votes]: I can find no precise definitions on the internet for the $L^2$ and $\ell^2$ norms. Certain websites keep switching between the two. Can someone please help me?
-
-REPLY [4 votes]: Regarding the switching I would like to add that $L^2([0,1])$ and $\ell^2$ are isomorphic as Hilbert spaces (as they are both separable and infinite-dimensional). That means: If $(f_n)_{n\in\mathbb N}$ is an orthonormal basis of $L^2([0,1])$, for example the basis $\{\exp(2\pi i n \,\cdot\,) : n \in \mathbb Z\}$, then
-\begin{align*}
- L^2([0,1]) &\to \ell^2\\
- f &\mapsto (\langle f, f_n\rangle)_n
-\end{align*}
-is an isometric isomorphism of Hilbert spaces, so in particular $\|f\|_{L^2} = \|(\langle f, f_n\rangle)_n\|_{\ell^2}$.<|endoftext|>
-TITLE: Discriminant for $x^n+bx+c$
-QUESTION [6 votes]: The ratios of the unsigned coefficients for the discriminants of $x^n+bx+c$ for $n=2$ to $5$ follow a simple pattern:
-$$\left (\frac{2^2}{1^1},\frac{3^3}{2^2},\frac{4^4}{3^3},\frac{5^5}{4^4} \right )=\left ( \frac{4}{1},\frac{27}{4},\frac{256}{27},\frac{3125}{256} \right )$$
-corresponding to the discriminants
-$$(b^2-4c, -4b^3-27c^2,-27b^4+256c^3,256b^5+3125c^4).$$
-Does the pattern for the ratios extend to higher orders? (An online reference would be appreciated.)
-
-REPLY [3 votes]: Use the relation between the discriminant of $f$ and the resultant of $f$ and $f'$. The resultant is easy to calculate since $f'$ is so simple.<|endoftext|>
-TITLE: What does a "heat ball" look like?
-QUESTION [6 votes]: I am learning the mean value property (MVP) of the heat equation. The MVP of the Laplace equation was relatively easy to understand; I think that is because of the spherical symmetry. But I am not able to appreciate the MVP of the heat equation. It's not very easy to imagine the "heat ball" in the following theorem from a note:
-[The theorem statement was given as an image in the original post.]
-Here are the questions:
-
-How do I define a heat ball?
-What does it actually look like?
-
-REPLY [7 votes]: The "heat ball" is defined as it is in the note you cited, which is based on Evans's Partial Differential Equations, Chapter 2.3.
-For fixed $x\in{\bf R}^n$, $t\in{\bf R}$ and $r>0$, we define
-$$
-E(x,t;r)=\left\{(y,s)\in {\bf R}^{n+1}\bigg|\; s\leqslant t,\ \dfrac{1}{(4\pi(t-s))^{n/2}}\exp\left({-\dfrac{|x-y|^2}{4(t-s)}}\right)\geqslant\frac{1}{r^n}\right\}.
-$$
-The Wikipedia article Mean-value property for the heat equation also gives a similar definition.
-
-Note that in the definition, one should replace $s\leqslant t$ with $s<t$, since the kernel is not defined at $s=t$.<|endoftext|>
-TITLE: Is the following integral positive?
-QUESTION [7 votes]: For each positive integer $n$, consider the following integral:
-$$\int_0^\infty\int_0^\infty\frac{(x^4-y^2)x^{2n+1}}{(x^4+y^2)[(1+x^2)^2+y^2]^{n+1}}dxdy.$$
-I want to know if there is any easy way to see that it's positive. If there is no easy proof to show that it's positive, I am satisfied if I know that it is positive. I don't need to know the exact value of the integral. I tried to write the integral as
-$$\int_0^\infty\int_0^\infty\frac{x^{2n+1}}{[(1+x^2)^2+y^2]^{n+1}}dxdy-
-\int_0^\infty\int_0^\infty\frac{2y^2x^{2n+1}}{(x^4+y^2)[(1+x^2)^2+y^2]^{n+1}}dxdy,$$
-but it seems to me that it doesn't help much.
-
-REPLY [3 votes]: Start with the substitution suggested by @alex.jordan: $u=x^2,\ v=y$.
Then think of $(u,v)$ as Cartesian coordinates with the integral over the first quadrant. Change to polar coordinates $u=r\cos\theta, v=r\sin\theta$ to obtain
-$$ \hbox{your integral } = I_n = \frac12\int_0^\infty r^{n+1} dr \int_0^{\pi/2} \frac{\cos^n\theta(\cos^2\theta-\sin^2\theta)}{(r^2+2r\cos\theta+1)^{n+1}}d\theta.$$
-Split
-$$J_n=\int_0^{\pi/2} \frac{\cos^n\theta(\cos^2\theta-\sin^2\theta)}{(r^2+2r\cos\theta+1)^{n+1}}d\theta$$
-at $\theta=\pi/4$ to obtain integrands that are obviously positive and negative. In the subinterval $\theta\in[\pi/4,\pi/2]$ change the variable to $\theta^\prime=\pi/2-\theta$ - the domain for the second integral is now $\theta^\prime\in[0,\pi/4]$ - and then change the name of $\theta^\prime$ to $\theta$. Recombine the two integrals to obtain
-$$ J_n=\int_0^{\pi/4} (h_n(r,\cos\theta)-h_n(r,\sin\theta))(\cos^2\theta-\sin^2\theta)d\theta$$
-where
-$$h_n(r,z)=\frac{z^n}{(r^2+2rz+1)^{n+1}}.$$
-For $n\geq 2$, this is an increasing function of $z$ on $[0,1]$, which is sufficient to show that the integrand in the second representation of $J_n$ is positive. I think this deals with everything but the case $n=1$.<|endoftext|>
-TITLE: $H$ is a maximal normal subgroup of $G$ if and only if $G/H$ is simple.
-QUESTION [10 votes]: I need to prove that $H$ is a maximal normal subgroup of $G$ if and only if $G/H$ is simple.
-My proof of the $(\Rightarrow)$ direction seems too trivial:
-Let us assume there exists $A$ so that $A/H\lhd G/H$. Then by definition, $H$ must be normal in $A$. Because $H$ is maximal, we get $H=A$ and therefore $A/H=\{1\}$.
-Is it correct?
-Update:
-Now I see that I need to prove not only $H\lhd A$ but also $A\lhd G$. Assuming I have proven that, is the proof correct?
-
-REPLY [3 votes]: I flesh out the exquisite answer of B. S.
-$\color{darkred}{ \text{ (1.) If I'm not confused, I think $H ⊴ G$ means $H ⊲ G$ or $H = G$. Is this right? } }$
-Forward step: $H \text{ maximal } ⊲ G \implies G/H$ simple.
-Let $\frac{A}{H} ⊲\frac{G}{H}$ wherein $H ⊴A⊴G$. Since $H$ is a maximal normal subgroup, $\begin{cases} H = A \implies \frac{A}{H}=1 \\ \text{ or } A=G \implies \frac{A}{H}=\frac{G}{H} \end{cases}$.
-This means that $\frac{G}{H} $ is a simple group. ♥
-Backward step: Now suppose that $H ⊲G$ and $\frac{G}{H} $ is simple.
-If we have $H ⊴A⊴G$ then obviously $\frac{A}{H} ⊲\frac{G}{H}$.
-By the assumption for this backward step, $\frac{G}{H}$ is simple.
-Hence $\frac{A}{H}=\frac{G}{H}$ or $\frac{A}{H} =\{H\}$. So, $A=G$ or $H=A$. ♥<|endoftext|>
-TITLE: surface integral of vector along the curved surface of cylinder
-QUESTION [5 votes]: Evaluate
-$$ \iint_s (4x \hat i - 2y^2 \hat j + z^2 \hat k)\cdot \hat n ds $$
-over the curved surface of $x^2 + y^2 = 4$ and $z = 0 \text{ to }z = 3$.
-Using the method
-$$ \iint_s f(x,y(x,z),z)\cdot \frac{\nabla u(x, y)}{|\nabla u(x, y)|} \sqrt{ 1 + \left ( \partial y \over \partial x\right )^2 + \left ( \partial y \over \partial z\right )^2} dxdz$$I got $48 \pi - 128.$
-EDIT::
-added method for above
-$$ \int_0^3 \int_{-2}^2 (4x \hat i - 2y^2 \hat j + z^2 \hat k)\cdot(\hat i x + \hat j y) \left ( \sqrt{ 1 + \frac{x^2}{4 - x^2 } }\right ) dx dz$$
-$$ \implies 12 \int_{-2}^{2} \frac{2x^2}{\sqrt{4 - x^2}} - (4 - x^2)dx = 48 \pi - 128 $$
-EDIT::
-I got the wrong answer below because of the formula; I got the right answer from this!
The correct formula should have been
-$$ \iint_s F(\theta, z) \cdot (u_\theta \times u_z) d\theta dz$$
-Parametrizing $x = 2 \cos \theta, y = 2\sin\theta , z=z $,
-$$ \int_0^3 \int_0^{2\pi} F(\theta, z)\cdot \frac{u_{\theta}\times u_z}{|u_{\theta}\times u_z|}d\theta dz$$
-I got $24 \pi$. The answer sheet says $48\pi$. Please help! Thank you!
-
-REPLY [4 votes]: The quantity $\Phi$ (for flux) you want to compute is given by the formula
-$$\Phi=\int_R {\bf F}\bigl({\bf u}(\theta, z)\bigr) \cdot ({\bf u}_\theta \times {\bf u}_z)\ {\rm d}(\theta ,z)\ ,\qquad R:=[0,2\pi]\times[0,3]\ ,$$
-which you quote correctly. But in the next line, for no reason, you divide by $ \bigl|{\bf u}_\theta \times {\bf u}_z\bigr|=2$; therefore your end result is off by a factor of $2$.
-As an aside: The orientation of ${\bf n}$ was not defined; so maybe the intended value is $-48\pi$.<|endoftext|>
-TITLE: An interesting topological space with $4$ elements
-QUESTION [24 votes]: There is an interesting topological space $X$ with just four elements $\eta,\eta',x,x'$ whose nontrivial open subsets are $\{\eta\},\{\eta'\},\{\eta,\eta'\}, \{\eta,x,\eta'\}, \{\eta,x',\eta'\}$. This seems to be some "discrete model" for the $1$-sphere $S^1 \subseteq \mathbb{C}$: The open sets $\{\eta,x,\eta'\}$ and $\{\eta,x',\eta'\}$ may be imagined as arcs joining $\eta$ and $\eta'$ via $x$ or $x'$, respectively. They are contractible, and their intersection is the discrete space $\{\eta\} \sqcup \{\eta'\}$. It also follows that $\pi_1(X) \cong \mathbb{Z}$. Without knowing any algebraic topology, one can explicitly classify all coverings of $X$; namely, this category is equivalent to $\mathbb{Z}\mathsf{-Sets}$.
-In contrast to $S^1$, actually $X$ is homeomorphic to the spectrum of a ring: Let $R$ be the localization of $\mathbb{Z}$ obtained by inverting all primes $p \neq 2,3$. There is a canonical surjective homomorphism $R \to \mathbb{Z}/6$. Let $A$ be the fiber product $R \times_{\mathbb{Z}/6} R$. Then $X \cong |\mathrm{Spec}(A)|$. We glue $\mathrm{Spec}(R) = \{\eta,x,x'\}$ with itself along its closed subscheme $\{x,x'\}$. We end up with two generic points $\eta,\eta'$.
-Question. Does this space $X$ have a name? What is the precise relationship to $S^1$? Where do the observations above appear in the literature?
-According to Miha's comment below (which is an answer), $X$ is called the pseudocircle, and the definitive source for the general phenomenon is:
-Singular homology groups and homotopy groups of finite topological spaces, by Michael C. McCord, Duke Math. J., 33(1966), 465-474, doi:10.1215/S0012-7094-66-03352-7.
-The pseudocircle already appeared a couple of times on math.SE, for example in question/56500.
-
-REPLY [3 votes]: The natural relationship between $S^1$ and $X$ arises as follows. Consider $X$ as a preordered set in its specialization order: explicitly, the order is $\eta,\eta'\leq x,x'$ (or the reverse of that depending on your conventions). We can then take the nerve $N(X)$: explicitly, $N(X)$ is a simplicial set in which an $n$-simplex is an order-preserving map $[n]\to X$. The nondegenerate simplices are the injective order-preserving maps, i.e. the chains in the poset $X$. There are four nondegenerate $0$-simplices (the $4$ points of $X$) and $4$ nondegenerate $1$-simplices (the $4$ chains in $X$ of size $2$), which connect the points of $X$ cyclically: $\eta\leq x$, then $x\geq \eta'$, then $\eta'\leq x'$, then $x'\geq \eta$, closing the loop.
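-(A quick sketch of this bookkeeping in Python — the encoding of the order and all names here are my own illustrative choices, not the answer's:
-
-    from itertools import combinations
-
-    # Specialization order of the pseudocircle: eta, eta' lie below x, x'.
-    points = ["eta", "eta'", "x", "x'"]
-    below = {("eta", "x"), ("eta", "x'"), ("eta'", "x"), ("eta'", "x'")}
-    comparable = lambda a, b: (a, b) in below or (b, a) in below
-
-    # Nondegenerate simplices of the nerve = chains in the poset.
-    edges = [p for p in combinations(points, 2) if comparable(*p)]
-    print(len(points), len(edges))  # prints "4 4": a cyclic graph on 4 vertices
-
-There are no chains of size $3$, since $\eta,\eta'$ are incomparable and so are $x,x'$, so the nerve is exactly this $4$-cycle.)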
-
-This means that the geometric realization $|N(X)|$ is homeomorphic to a circle (it's just a cyclic graph with 4 vertices). Moreover, there is a canonical continuous map $p:|N(X)|\to X$, which sends a point that is in the interior of a nondegenerate simplex $\sigma$ (i.e., a chain in $X$) to its least element in $X$. Explicitly, this maps the four vertices of $|N(X)|$ to the corresponding points of $X$, and the interiors of the edges to either $\eta$ or $\eta'$ depending on which of the two they have as a vertex.
-Now for the magical fact: this canonical map $p:|N(X)|\to X$ is a weak homotopy equivalence, so $X$ is weak homotopy equivalent to $S^1$. And even more magically, this generalizes. In particular, McCord proved the following results in his paper Singular homology groups and homotopy groups of finite topological spaces:
-
-Theorem: Let $P$ be any preordered set, considered as a topological space by letting the open sets be the downward closed sets (i.e., the finest topology with $P$ as its specialization order). Then the natural map $|N(P)|\to P$ constructed as above is a weak homotopy equivalence.
-Corollary: Let $X$ be any finite topological space and equip it with its specialization order. Then the natural map $|N(X)|\to X$ is a weak homotopy equivalence.
-Proof: A topology on a finite set is determined by its specialization order, so the topology on $X$ is the same as the topology induced by the specialization order as in the Theorem.
-Corollary: Every finite CW-complex is weak homotopy equivalent to a finite topological space.
-Proof: Let $X$ be a finite CW-complex. Then $X$ is homotopy equivalent to the geometric realization of some finite simplicial complex $Y$. Let $P$ be the poset of faces of $Y$, ordered by inclusion. Then $N(P)$ is just the barycentric subdivision of $Y$, and in particular there is a natural homeomorphism $|N(P)|\to |Y|$. Thus by the Theorem, $X$ is weak homotopy equivalent to $P$.
-
-Note in particular that these results show that the relationship between your space $X$ and $S^1$ is "one-sided": $S^1$ is naturally associated to $X$, but $X$ is not naturally associated to $S^1$. Indeed, there are many different "finite models" of $S^1$; for instance, the poset of faces in any triangulation of $S^1$ is a "finite model" of $S^1$ as in the proof of the second Corollary.<|endoftext|>
-TITLE: What reference contains the proof of the classification of the wallpaper groups?
-QUESTION [5 votes]: Background: I am doing a course on Groups and Geometry (Open University M336). One of the topics is the classification of the plane symmetries, a.k.a. the wallpaper groups.
-Question: What reference contains the original proof that there exist only 17 wallpaper groups?
-
-REPLY [2 votes]: Try "The 17 Plane Symmetry Groups" by R.L.E. Schwarzenberger, The Mathematical Gazette Vol 58 No 404.<|endoftext|>
-TITLE: Find all primes $p$ such that $(2^{p-1}-1)/p$ is a perfect square
-QUESTION [24 votes]: Find all primes $p$ such that $(2^{p-1}-1)/p$ is a perfect square. I tried a brute-force method and tried to find some pattern. I got $p=3,7$ as solutions. Apart from these I have tried many other primes but couldn't find any other such prime.
-Are these the only primes that satisfy the condition?
-If yes, how to prove it theoretically, and if not, how to find others?
-Thanks in advance!
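-(As an illustration of such a brute-force search, here is a minimal Python/sympy sketch — my own, not the poster's; the search bound is an arbitrary choice:
-
-    from sympy import primerange
-    from math import isqrt
-
-    for p in primerange(3, 2000):
-        q, r = divmod(2**(p - 1) - 1, p)
-        assert r == 0  # guaranteed by Fermat's little theorem
-        if isqrt(q) ** 2 == q:
-            print(p)   # prints only 3 and 7
-
-No further examples turn up in this range.)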
-
-REPLY [25 votes]: Hint: Let $p=2k+1$ where $k \in \mathbb{N},$ then $2^{2k}-1=(2^k-1)(2^k+1)=pm^2.$ We know that $\gcd(2^k-1,2^k+1)=1$ since they are consecutive odd integers, so the equation breaks into $2^k-1=px^2, 2^k+1=y^2$ or $2^k-1=x^2, 2^k+1=py^2.$
-Easy investigation shows that the only solutions are $p=3,7.$ I will leave it to you to fill the gaps. You may also be interested in this.<|endoftext|>
-TITLE: Compute: $\lim_{n\to\infty} \{ (\sqrt2+1)^{2n} \}$
-QUESTION [7 votes]: Compute the following limit:
-$$\lim_{n\to\infty} \{ (\sqrt2+1)^{2n} \}$$
-where $\{x\}$ is the fractional part of $x$. I need some hints here. Thanks.
-
-REPLY [18 votes]: Consider $$ (\sqrt2+1)^{2n} + (\sqrt2-1)^{2n} $$
-Try to show that it is an integer; hence the fractional part you are looking for is $1 - (\sqrt2-1)^{2n}$. Since $0<\sqrt2-1<1$, the limit now becomes easy.<|endoftext|>
-TITLE: Evaluating $\frac{1}{\sqrt{4}+\sqrt{6}+\sqrt{9}}+\frac{1}{\sqrt{9}+\sqrt{12}+\sqrt{16}}+\frac{1}{\sqrt{16}+\sqrt{20}+\sqrt{25}}=?$
-QUESTION [6 votes]: It's a question (not hw) I bumped into a few years back. I couldn't make any real progress with it. Maybe you can help?
-$$\frac{1}{\sqrt{4}+\sqrt{6}+\sqrt{9}}+\frac{1}{\sqrt{9}+\sqrt{12}+\sqrt{16}}+\frac{1}{\sqrt{16}+\sqrt{20}+\sqrt{25}}=?$$
-Thanks.
-
-REPLY [2 votes]: For your original question, it might be worth noting that
-$$\frac{1}{\sqrt{k^2}+\sqrt{k(k+1)}+\sqrt{(k+1)^2}}=\frac{1}{2k+1+\sqrt{k(k+1)}}=\frac{2k+1-\sqrt{k(k+1)}}{4k^2+4k+1-k^2-k}=\frac{2k+1-\sqrt{k(k+1)}}{3k^2+3k+1}$$
-Thus
-$$\frac{1}{\sqrt{4}+\sqrt{6}+\sqrt{9}}+\frac{1}{\sqrt{9}+\sqrt{12}+\sqrt{16}}+\frac{1}{\sqrt{16}+\sqrt{20}+\sqrt{25}}=\frac{5-\sqrt{6}}{19}+\frac{7-2\sqrt{3}}{37}+\frac{9-2\sqrt{5}}{61} $$
-As a side note, a much nicer expression is:
-$$\frac{1}{\sqrt{4}+2\sqrt{6}+\sqrt{9}}+\frac{1}{\sqrt{9}+2\sqrt{12}+\sqrt{16}}+\frac{1}{\sqrt{16}+2\sqrt{20}+\sqrt{25}}$$<|endoftext|>
-TITLE: Calculate: $\lim_{n\to\infty} \int_{0}^{\pi/2}\frac{1}{1+x\tan^{n} x }dx$
-QUESTION [7 votes]: I'm supposed to work out the following limit:
-$$\lim_{n\to\infty} \int_{0}^{\pi/2}\frac{1}{1+x \left( \tan x \right)^{n} }dx$$
-I'm searching for some reasonable solutions. Any hint or suggestion is very welcome. Thanks.
-
-REPLY [8 votes]: Note that the integrand is bounded in $[0,\pi/2]$, so if $$\lim_{n\to \infty} \frac{1}{1+x\tan^nx}$$ exists a.e. then we may apply the Dominated Convergence Theorem to show $$\lim_{n\to \infty} \int_0^{\pi \over 2}\frac{1}{1+x\tan^nx}dx = \int_0^{\pi \over 2}\lim_{n\to \infty} \frac{1}{1+x\tan^nx}dx.$$
-If $x<\pi/4$ then the integrand converges to 1, and if $x>\pi/4$ then it converges to 0. Thus the integral equals
-$$
-\int_0^{\pi \over 4} 1\,dx + \int_{\pi \over 4}^{\pi \over 2} 0\,dx = \frac{\pi}{4}.
-$$<|endoftext|>
-TITLE: Find the limit of: $\lim_{n\to\infty} \frac{1}{\sqrt[n+1]{(n+1)!} - \sqrt[n]{(n)!}}$
-QUESTION [5 votes]: Can the following limit be computed without using Stirling's approximation formula?
-$$\lim_{n\to\infty} \frac{1}{\sqrt[n+1]{(n+1)!} - \sqrt[n]{(n)!}}$$
-I know that the limit is $e$, but I'm looking for some alternative ways that don't require resorting to Stirling's approximation. I really appreciate any help with this limit. Thanks.
-
-REPLY [2 votes]: This completes Jonas's answer; here is an idea. This is too long for a comment.
-
To prove that the limit exists, we can prove that $a_n$ is decreasing and positive:
-$$\frac{1}{\sqrt[n+1]{(n+1)!} - \sqrt[n]{(n)!}} \geq 0 \Leftrightarrow $$
-$$\sqrt[n+1]{(n+1)!} \geq \sqrt[n]{(n)!} \Leftrightarrow $$
-$$(n+1)!^n\geq (n)!^{n+1} \Leftrightarrow $$
-$$(n+1)^n\geq (n)! \Leftrightarrow $$
-$$(n+1)\cdot(n+1)\cdots(n+1)\geq 1\cdot 2\cdots n \checkmark $$
-Now for the decreasing part:
-$$\frac{1}{\sqrt[n+1]{(n+1)!} - \sqrt[n]{(n)!}} \geq \frac{1}{\sqrt[n+2]{(n+2)!} - \sqrt[n+1]{(n+1)!}} \Leftrightarrow $$
-$$\sqrt[n+1]{(n+1)!} - \sqrt[n]{(n)!} \leq \sqrt[n+2]{(n+2)!} - \sqrt[n+1]{(n+1)!} \Leftrightarrow $$
-$$2\sqrt[n+1]{(n+1)!} \leq \sqrt[n]{(n)!}+ \sqrt[n+2]{(n+2)!}$$
-Now, by the AM-GM inequality
-$$\frac{\sqrt[n]{(n)!}+ \sqrt[n+2]{(n+2)!}}{2} \geq (n!)^\frac{1}{2n}[(n+2)!]^\frac{1}{2n+4}$$
-So if we can prove that
-$$(n!)^\frac{1}{2n}[(n+2)!]^\frac{1}{2n+4} \geq \sqrt[n+1]{(n+1)!}$$
-we are done.
-Now
-$$(n!)^\frac{1}{2n}[(n+2)!]^\frac{1}{2n+4} \geq \sqrt[n+1]{(n+1)!} \Leftrightarrow $$
-$$(n!)^\frac{1}{2n}[(n+2)]^\frac{1}{2n+4} \geq (n+1)!^{\frac{1}{n+1}-\frac{1}{2n+4}} \Leftrightarrow $$
-$$(n!)^{\frac{1}{2n}+\frac{1}{2n+4}-\frac{1}{n+1}}[(n+2)]^\frac{1}{2n+4} \geq (n+1)^{\frac{1}{n+1}-\frac{1}{2n+4}}\,. $$
-To keep it simple:
-The power of $n!$ is
-$$\frac{(2n^2+6n+4)+(2n^2+2n)-(4n^2+8n)}{(n+1)2n(2n+4)}=\frac{2}{n(n+1)(2n+4)}$$
-The power of $n+1$ is
-$$\frac{1}{n+1}-\frac{1}{2n+4}=\frac{n+3}{(n+1)(2n+4)}$$
-Thus, after raising the inequality to the power $n(n+1)(2n+4)$, it becomes:
-$$(n!)^\frac{1}{2n}[(n+2)!]^\frac{1}{2n+4} \geq \sqrt[n+1]{(n+1)!} \Leftrightarrow $$
-$$(n!)^2(n+2)^{n(n+1)} \geq (n+1)^{n(n+3)} $$
-Now, I am not sure that this is true, but it might work....<|endoftext|>
-TITLE: Proofs of Sylow theorems
-QUESTION [15 votes]: It seems that there are many ways to prove the Sylow theorems. I'd like to see a collection of them. Please write down or share links to any you know.
-
-REPLY [24 votes]: I love Wielandt's proof for the existence of Sylow subgroups (Sylow I). Isaacs uses this proof in his books Finite Group Theory and Algebra: A Graduate Course. As Isaacs mentions, the idea of the proof is not very natural and does not generalize well to other situations, but it is simply beautiful. First a lemma:
-Lemma: Let $p,a,b$ be natural numbers where $p$ is prime and $a \geq b$. Then $$\binom{pa}{pb} \equiv \binom{a}{b} \pmod{p}$$
-Proof. Consider the polynomial $(x + 1)^{pa} = (x^p+1)^a \in \mathbb{F}_p[x]$. Computing the coefficient of the $x^{pb}$ term in two different ways yields the result.
-Proof of Sylow I: Let $|G| = p^nm$ such that $p \nmid m$. Let $$\Omega = \{ X \subseteq G: |X| = p^n\} $$
-(Note that we are taking every subset of $G$ with $p^n$ elements). $G$ acts on $\Omega$ by left multiplication. Observe that
-$$|\Omega| = \binom{p^nm}{p^n} \equiv m \pmod{p}$$
-by repeated usage of the lemma. Hence $p \nmid |\Omega|$, therefore $\Omega$ has an orbit $\mathcal{O}$ such that $p \nmid |\mathcal{O}|$. Now let $X \in \mathcal{O}$ and let $H$ be the stabilizer subgroup of $X$. Since $|G:H| = |\mathcal{O}|$ (orbit-stabilizer theorem), we deduce that $p^n$ divides $|H|$; in particular $p^n \leq |H|$. On the other hand, for $x \in X$, by definition of the stabilizer $Hx \subseteq X$, and hence $$|H| = |Hx| \leq |X| = p^n$$
-Thus $H$ is a Sylow $p$-subgroup.<|endoftext|>
-TITLE: Is there an efficient algorithm to compute a minimal polynomial for the root of a polynomial with algebraic coefficients?
-QUESTION [8 upvotes]: An algebraic number is defined as a root of a polynomial with rational coefficients. -It is known that every algebraic number $\alpha$ has a unique minimal polynomial, the monic polynomial with rational coefficients of smallest degree of which $\alpha$ is a root. -It is also known that the algebraic numbers are algebraically closed, meaning that any polynomial with algebraic numbers as coefficients has algebraic roots. -My question is: -Given the root of a degree $d$ uni-variate polynomial with $n$ terms with algebraic coefficients, is there an efficient (i.e. polynomial time in $d$ and $n$) algorithm to generate its minimal polynomial? -I have searched the literature, but found very little work on algebraic number theory concerned with computational complexity. The closest result I found is: -Theorem 1.4.7 in Lovasz's book: -http://books.google.co.uk/books?id=NWa5agInx0gC&lpg=PA39&ots=ufoR8afRSQ&dq=finding%20the%20minimal%20polynomial%20for%20a%20root%20of%20a%20polynomial%20%20lovasz&pg=PA38#v=onepage&q&f=false -but I don't think this (quite) answers my question. - -REPLY [3 votes]: Suppose you have a finite field extension $\mathbb Q \subset K$ and an irreducible polynomial $P \in K[X]$. -Let $L$ be the Galois closure of $K$, $G$ be the Galois group of $\mathbb Q \subset L$, and $H$ be the subgroup of $G$ fixing $K$. Then $G$ acts on $L[X]$ by $\sigma(\sum a_i X^i) = \sum \sigma(a_i)X^i$, and $H$ is still the subgroup of $G$ fixing $K[X]$. -Then define $\tilde{P} = \prod_{\sigma \in G/H} \sigma(P)$. For any $\sigma$ in $G$, $\sigma(\tilde{P}) = \tilde P$, thus $\tilde P \in \mathbb Q[X]$, and I think it should be irreducible over $\mathbb Q$ if $K$ is the extension generated by the coefficients of $P$. -And so far, the computation is polynomial in the degree of the extension $\mathbb Q \subset L$ (which may be exponential in the degree of $K$) and in the degree of $P$. -If you don't know exactly what $L$ and $G$ are but you know the set of conjugates $C_i$ of each coefficient $a_i$ of $P$, define -$$\tilde P = \prod_{(b_0, \ldots, b_n) \in C_0 \times \ldots \times C_n} (b_0 + b_1 X + \ldots b_n X^n)$$ -This will produce a polynomial in $\mathbb Q[X]$, but can be extremely larger than the minimal polynomial. -Here you pick one conjugate for each coefficient, and you do the product for all possible simultaneous choices of those conjugates. If you have one coefficient with 2 conjugates and another one with 3, and all the other coefficients are rationals, then you have 6 polynomials to multiply. -Finally, if you don't know the conjugates of the coefficients but you know some annihilating polynomial for each coefficient, let $C_i$ be a set of formal roots of those polynomials, and formally expand that product. You will get an expression involving the formal roots symmetrically so you can write it using the elementary symmetric polynomials of those roots, and then use the Viete relations and replace those with the corresponding coefficients of your annihilating polynomials. -However, these two methods can be exponential in the degree of $P$, so you should avoid them if possible. - -In the worst case scenario, the Galois group of $\mathbb Q \subset K$ will be $S_n$, meaning that we have to do calculations in a very big field extension. -suppose $K$ is of degree $n$ and $P$ is of degree $d$. We can estimate of the generic formula you need to use to transform $P$ into a polynomial with rational coefficients. 
-
Pick $L = \mathbb Q(Y_1, \ldots, Y_n)$, let $Z_1, \ldots, Z_n$ be the elementary symmetric polynomials in $Y_i$, pick $K_0 = L^{S_n} = \mathbb Q(Z_1, \ldots, Z_n)$, and $K = K_0(Y_1)$. So the extension $K_0 \subset K$ is the "generic" extension of degree $n$ over $\mathbb Q$.
-Then say $P = \sum a_i X^i$, where $a_i \in K = K_0[Y_1]$ and are of degree $ < n$.
-So you can write $P = \sum b_{i,j} X^i Y_1^j$ where $b_{i,j} \in K_0$, and $(i,j) \in \{0, \ldots, d \} \times \{0, \ldots, n-1 \}$. Add indeterminates $B_{i,j}$, so that now $P$ is an element of $K[B_{i,j},X]$. We can compute the product $\prod_{k=1}^n P(Y_k,B_{i,j},X)$ in $L[B_{i,j},X]$, then, since it is symmetric, write it as an element of $K_0[B_{i,j},X]$.
-In fact since the coefficients of $P$ are polynomials in $Y_1$ with integer coefficients, this will be a polynomial in $\mathbb Z[Z_i, B_{i,j},X]$ of degree $dn$ in $X$, homogeneous of degree $n$ in the $B_{i,j}$.
-For any choice of $n$ indeterminates among the $B_{i,j}$, $B_{i(1),j(1)} \ldots B_{i(n),j(n)}$ will appear accompanied with $X^{i(1)+\ldots i(n)}$ and a polynomial in $Z_k$ of degree $j(1)+\ldots j(n) \le n^2$ in $Y_k$. So we obtain less than about $(n+d)!/n!d! * n^{2n}/n!$ terms in the final polynomial. Though there may be a better bound than $n^{2n}/n!$ on the complexity of polynomials in $Z_k$ that are of a given degree less than $n^2$ (in particular, not all of them should be able to appear).
-The good thing is that this is polynomial in $d$ when $n$ is fixed, and is probably exponential in $n$ when $d$ is fixed. And when both of them vary, it's exponential in $d$ and $n$.<|endoftext|>
-TITLE: On Grunwald-Wang theorem
-QUESTION [10 votes]: Consider (roughly speaking) the following statements (the Grunwald-Wang theorem)
-Theorem 1 (see Wiki for details): Let $K$ be a number field and $x \in K$. Then under some conditions: $x$ is an $n$-th power iff $x$ is an $n$-th power almost locally everywhere.
-Theorem 2 (see theorem (9.2.8) on p. 541): Let $K$ be a number field and $(L_p/K_p)_p$ a family of local abelian extensions. Then under some conditions: there exists an extension $M/K$ such that $M_p \simeq L_p$.
-Question 1: Why are these two theorems equivalent?
-According to Wiki, the fact that $16$ is an $8$th power almost locally everywhere but not in $\mathbb{Q}$ implies that there is no cyclic extension of degree $8$ where $2$ is inert.
-Question 2: How to prove this implication?
-
-REPLY [3 votes]: From a modern perspective, Theorem 1 is easier, as it follows immediately from the vanishing of $Ш^1(K,\mu)$ and Kummer theory.
-Theorem 2, on the other hand, requires the vanishing of $\mathrm{coker}^1(K,A)$, the cokernel of the localization map, where $A$ is the potential global Galois group
-(in both cases assuming that we are not in a special case).
-Theorem 2 is closer to the one originally proven by Grunwald (and later corrected by Wang).
-Q2: The implication requires class field theory to prove. The point is that the norm residue symbol of $16$ would vanish everywhere except at $2$, contradicting the product formula/global reciprocity law.
-
Q1: Since Theorem 2 lets us prescribe the ramification at a finite number of places, we can use the same method to conclude Theorem 1 from Theorem 2.<|endoftext|>
-TITLE: Let $f:\mathbb{R}\longrightarrow \mathbb{R}$ a differentiable function such that $f'(x)=0$ for all $x\in\mathbb{Q}$
-QUESTION [50 votes]: Let $f:\mathbb{R}\longrightarrow \mathbb{R}$ be a differentiable function such that $f'(x)=0$ for all $x\in\mathbb{Q}$. Is $f$ a constant function?
-
-REPLY [29 votes]: No, such a function is not necessarily constant.
-At the bottom of page 351 of Everywhere Differentiable, Nowhere Monotone Functions, Katznelson and Stromberg give the following theorem:
-
-Let $A$ and $B$ be disjoint countable subsets of $\mathbb{R}$. Then there exists an everywhere differentiable function $F: \mathbb{R} \to \mathbb{R}$ satisfying
-
-$F'(a) = 1$ for all $a \in A$,
-$F'(b) < 1$ for all $b \in B$,
-$0 < F'(x) \leq 1$ for all $x \in \mathbb{R}$.
-
-
-Choosing $A = \mathbb{Q}$ and an arbitrary (nonempty*) countable set $B \subset \mathbb{R} \setminus \mathbb{Q}$, we get an everywhere differentiable function $F$ with $F'(q) = 1$ when $q \in \mathbb{Q}$. So if we define $f: \mathbb{R} \to \mathbb{R}$ by $f(x) = F(x) - x$, this then is the desired function satisfying $f'(q) = F'(q) - 1 = 1 - 1 = 0$ for $q \in \mathbb{Q}$, and $f$ is not constant (or else its derivative would be zero everywhere by the mean value theorem).
-*(in case you take "countable" to mean either "countably infinite" or "finite")<|endoftext|>
-TITLE: Gaussian curvature in $S^3$
-QUESTION [7 votes]: I'm trying to read a survey paper on the Willmore conjecture and I'm missing a lot of basic knowledge. In particular, let $u: \mathcal{M} \rightarrow S^3 \rightarrow \mathbb{R}^4$ be a smooth immersion of a compact orientable two dimensional surface into the standard 3-sphere, and let $\mathcal{M}$ take the metric induced by the ambient space. Writing the principal curvatures as $k_1$ and $k_2$, we have the Gaussian curvature given by $1+k_1k_2$.
-I don't understand where the 1 in $K = 1+k_1k_2$ is coming from. The metric on $\mathbb{R}^4$ is the standard $g_{ij} = \delta_{ij}$, and restricting it to $S^3$ yields the round metric. $S^3$ is our ambient space, and restricting to $u$ gives us our induced metric.
-I can think of two ways of showing this. The first would simply be to do everything out in local coordinates: for any $p \in \mathcal{M}$, we have a chart $\varphi: U \rightarrow V \subset \mathbb{R}^2$. Writing $u(p) = u(\varphi^{-1}(x^1,x^2)) = (y^1,y^2,y^3,y^4)$ and then projecting stereographically $\sigma: S^3 \rightarrow \mathbb{R}^3$, $\sigma(\vec{y}) = (\frac{y^1}{1-y^4},\dots,\frac{y^3}{1-y^4})$, we could recover $K$ via typical computations.
-Alternatively, since $2K = \mathcal{R}$, the Ricci scalar, we could compute the curvature tensor and find it that way (unless there is some shortcut? I don't have much experience with these diffgeo objects).
-I'm wondering if either of these approaches would get me what I want, or if there is a naive reason for where the $1$ is coming from. Any help is appreciated. The relevant stuff is on p. 365 of the document, or 5th page from the start, section 2: the S^3 framework.
-
REPLY [5 votes]: For $3$-manifolds of constant curvature $K_0$ (your $S^3$, for which $K_0$ equals $1$) the Gauss curvature of a hypersurface is $ K=k_1k_2+K_0 $, with $k_i$ being the principal curvatures - see, e.g., Volume 4 (chapter 7) of Spivak's 'Comprehensive Introduction to Differential Geometry', which contains a comprehensive (! - nomen est omen) discussion of the relevant equations, also including a discussion about higher dimensional manifolds. See in particular the proof of proposition 24. Alternatively Theorem 5.5 in Gallot, Hulin, Lafontaine, 'Riemannian Geometry'. I'd expect to find similar results in many books of differential geometry. Search for Gauss formulas and Gauss curvature.
-(And it's 'principal curvature', not 'principle curvature').<|endoftext|>
-TITLE: Prove that: $\int_{0}^{1} \frac{x^{4}\log x}{x^2-1}\le \frac{1}{8}$
-QUESTION [10 votes]: Here is another interesting integral inequality:
-$$\int_{0}^{1} \frac{x^{4}\log x}{x^2-1}\,dx\le \frac{1}{8}$$
-According to W|A the difference between the RHS and the LHS is extremely small, namely 0.00241056. I don't know what would work here since the difference is so small.
-
-REPLY [11 votes]: You can actually just evaluate the integral explicitly. You can divide $x^2 -1$ into $x^4$ and get
-$$\frac{x^4}{x^2 - 1} = x^2 + 1 + \frac {1}{x^2 - 1}$$
-So the integral is the same as
-$$\int_0^1 (x^2 + 1)\log(x)\,dx + \int_0^1 \frac{\log(x)}{x^2 - 1}\,dx $$
-The second integral is related to the famous dilogarithm integral, and as explained in Peter Tamaroff's answer can be evaluated to $\frac{\pi^2}{8}$. For the first term, just integrate by parts; you get
-$$({x^3 \over 3} + x)\log(x)\big|_{x = 0}^{x =1} - \int_0^1 ({x^2 \over 3} + 1)\,dx$$
-The first term vanishes, while the second term is $-{10 \over 9}$. So the answer is just ${\pi^2 \over 8} - {10 \over 9}$, which is less than ${1 \over 8}$.
-
-A way of doing the whole integral in one fell swoop occurs to me. Note that ${\displaystyle {1 \over 1 - x^2} = \sum_{n=0}^{\infty} x^{2n}}$. So the integral is
-$$-\sum_{n = 0}^{\infty} \int_0^1 x^{2n + 4}\log(x)\,dx$$
-$$= -\sum_{m = 2}^{\infty} \int_0^1 x^{2m}\log(x)\,dx$$
-Integrating by parts, this becomes
-$$\sum_{m = 2}^{\infty} \int_0^1 {x^{2m} \over 2m + 1}\,dx$$
-$$= \sum_{m = 2}^{\infty} {1 \over (2m + 1)^2}$$
-This is the sum of the reciprocals of the odd squares starting with $5^2$. The sum of the reciprocals of all odd squares is ${\pi^2 \over 8}$, so one subtracts off $1 + {1 \over 9} = {10 \over 9}$. Hence the result is $ {\pi^2 \over 8} - {10 \over 9} $.<|endoftext|>
-TITLE: What is the distribution of a random variable that is the product of the two normal random variables ?
-QUESTION [28 votes]: What is the distribution of a random variable that is the product of two normal random variables?
-Let $X\sim N(\mu_1,\sigma_1), Y\sim N(\mu_2,\sigma_2)$
-and $Z=XY$.
-That is, what is its probability density function, its expected value, and its variance?
-I'm kind of stuck and I can't find a satisfying answer on the web.
-If anybody knows the answer, or a reference or link, I would be really thankful...
-
-REPLY [9 votes]: For the special case that both Gaussian random variables $X$ and $Y$ have zero mean and unit variance, and are independent, the answer is that $Z=XY$ has the probability density $p_Z(z)={\rm K}_0(|z|)/\pi$.
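-(A quick simulation-based sanity check of this formula — a minimal sketch of my own; the choice of numpy/scipy and all parameters here are illustrative assumptions, not the answerer's:
-
-    import numpy as np
-    from scipy.special import k0  # modified Bessel function K_0
-
-    rng = np.random.default_rng(0)
-    z = rng.standard_normal(10**6) * rng.standard_normal(10**6)
-    hist, edges = np.histogram(z, bins=200, range=(-4, 4), density=True)
-    mid = 0.5 * (edges[1:] + edges[:-1])
-    # Agreement holds up to sampling noise; K_0 has a logarithmic
-    # singularity at z = 0, so the bins nearest 0 deviate the most.
-    print(np.max(np.abs(hist - k0(np.abs(mid)) / np.pi)))
-
-The histogram matches $K_0(|z|)/\pi$ away from the origin.)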
The brute force way to do this is via the transformation theorem:
-\begin{align}
-p_Z(z)&=\frac{1}{2\pi}\int_{-\infty}^\infty{\rm d}x\int_{-\infty}^\infty{\rm d}y\;{\rm e}^{-(x^2+y^2)/2}\delta(z-xy) \\
-&= \frac{1}{\pi}\int_0^\infty\frac{{\rm d}x}{x}{\rm e}^{-(x^2+z^2/x^2)/2}\\
-&= \frac{1}{\pi}{\rm K}_0(|z|) \ .
-\end{align}<|endoftext|>
-TITLE: Teaching abstract maths concepts to young children.
-QUESTION [16 votes]: I am interested in opinions and, if possible, references for published research, about the pros and cons of teaching abstract maths concepts to young children. My younger brother (five years old) understands negative numbers and square roots, so I was thinking of trying to teach him about complex numbers and maybe some other concepts, but my elder brother (who is doing a maths/stats degree) said it was a crazy idea (without elaborating, but that's what he's like).
-Update: I quizzed my brother on why he thinks it is crazy and his response was "Don't you think there is a reason why 99% of maths teachers have a degree in maths?" I'm at high school by the way.
-
-REPLY [2 votes]: It really depends on what kind of a person your younger brother is. If he has the mind of a mathematician (which sounds likely, given his advanced knowledge and your family history), then go for it. It will intrigue him, and teach him to look at things in mathematically different ways. On the other hand, if he is like 99% of the population, it would probably only serve to confuse and/or bore him.
-So I'd say go for it, and if he doesn't bite, then wait for him to get older. It's not like you're making stuff up (there really are such things as imaginary numbers, and he probably will end up learning about them eventually).
-And remember that he is five, so unless he's a Gauss, he will need things explained slowly and in the most basic of terms. He won't have taken mathematical concepts for granted yet that you probably have (for example, I remember when I was five or so, I did not yet have it ingrained in my mind that addition and multiplication must be commutative).
-Finally, if he doesn't seem to be able to understand it, there are other "advanced" mathematical concepts that are more accessible to younger children because they are less abstract. Prime numbers, for example.<|endoftext|>
-TITLE: Are Tonelli's and Fubini's theorem equivalent?
-QUESTION [6 votes]: I can derive Fubini's theorem for iterated integrals of complex functions from Tonelli's theorem for iterated integrals for unsigned functions. I was wondering whether there is a way to go backwards. I do not think so, because Fubini's theorem assumes the integrals are finite, whereas Tonelli's theorem allows the value of the integral to be $+\infty$. But maybe we can use a limiting argument? This is where I am not clear.
-So: is it possible to derive Tonelli's theorem from Fubini's theorem? If so, I would appreciate a proof (or an outline of a proof).
-
-REPLY [6 votes]: If we have Tonelli's theorem, then by considering positive parts and negative parts separately we immediately obtain Fubini's theorem.
-Conversely, assuming Fubini's theorem, Tonelli's theorem follows by a monotone convergence argument applied to the cut-off functions $f_k(x) = \min \{k, f(x)\} \chi_{B_k}(x)$. You can also find the details in Chapter 6.2 of the celebrated textbook Measure and Integral by Wheeden and Zygmund.<|endoftext|>
-TITLE: What's an intuitive explanation of the max-flow min-cut theorem?
-QUESTION [6 upvotes]: I'm about to read the proof of the max-flow min-cut theorem that helps solve the maximum network flow problem. Could someone please suggest an intuitive way to understand the theorem? - -REPLY [10 votes]: Imagine a complex pipeline with a common source and common sink. You start to pump the water up, but you can't exceed some maximum flow. Why is that? Because there is some kind of bottleneck, i.e. a subset of pipes that transfer the fluid at their maximum capacity--you can't push more through. This bottleneck will be precisely the minimum cut, i.e. the set of edges that block the flow. Please note, that there may be more that one minimum cut. If you find one, the you know the maximum flow; knowing the maximum flow you know the capacity of the cut. -Hope that explains something ;-)<|endoftext|> -TITLE: Why Zariski topology? -QUESTION [75 upvotes]: Why in algebraic geometry we usually consider the Zariski topology on $\mathbb A^n_k$? Ultimately it seems a not very interesting topology, infact the open sets are very large and it doesn't satisfy the Hausdorff separation axiom. Ok the basis is very simple, but what are the advantages? - -REPLY [6 votes]: The following answer has a similar spirit to Zhen Lin's. Like his answer, it has a strong logical flavor; unlike his, no toposes appear (though they are lurking in the background). -That said, allow me a slight switch in terminology: Let's define $\operatorname{Spec} A$ not as the set of prime ideals of $A$, but as the set of filters in $A$. The axioms for a filter are precisely dual to the axioms of a prime ideal, so that a subset of $A$ is a prime ideal if and only if its complement is a filter. For instance, while prime ideals have the axiom "$x \in \mathfrak{p} \wedge y \in \mathfrak{p} \Rightarrow x+y \in \mathfrak{p}$", filters have the axiom "$x + y \in F \Rightarrow x \in F \vee y \in F$". -It happens that the axioms of a filter have a certain logical form; they form a so-called "geometric theory". For any geometric theory, there is an associated space of its models, which will be automatically endowed with a suitable topology. In the case of the geometric theory of filters, a model is precisely a filter, so the space of models coincides with the spectrum; and the automatically given topology is precisely the Zariski topology. -A very readable introduction to this point of view are notes by Steve Vickers ("Continuity and Geometric Logic").<|endoftext|> -TITLE: Triples of positive real numbers $(a,b,c)$ such that $\lfloor a\rfloor bc=3,\; a\lfloor b\rfloor c=4,\;ab\lfloor c\rfloor=5$ -QUESTION [7 upvotes]: Find the all ordered triplets of positive real numbers $(a,b,c)$ such that: $$\lfloor a\rfloor bc=3,\quad a\lfloor b\rfloor c=4,\quad ab\lfloor c\rfloor=5,$$ -where $\lfloor x\rfloor$ is the greatest integer less than or equal to $x$. - -REPLY [3 votes]: Now we have by dividing equations and putting together the ratios we get the ratio $$\frac a{\lfloor a\rfloor}:\frac b{\lfloor b\rfloor}:\frac c{\lfloor c\rfloor}=20:15:12$$ -This shows that $\frac a{\lfloor a\rfloor}\ge\frac53$ since $\frac c{\lfloor c\rfloor}\ge1$. It is checkable that $\frac a{\lfloor a\rfloor}\ge\frac53$ forces $\lfloor a\rfloor=1$, since $a,b,c\ge1$ -Similarly, we get $\frac b{\lfloor b\rfloor}\ge\frac54$ This forces $\lfloor b\rfloor\le3$ -Now,$$a=\frac a{\lfloor a\rfloor}\ge\frac53$$ -we already have $ab\ge\frac{25}{16}$. 
But $ab\lfloor c\rfloor=5$, so that $\lfloor c\rfloor\le\frac{80}{25}$, whence $\lfloor c\rfloor\le 3$. But if one of $b,c$ is $3$ or more, then it forces the other two to be $1$ or less, which contradicts either the ratios above or the fact that all of $a,b,c$ are at least $1$.
-So $\lfloor b\rfloor,\lfloor c\rfloor\le2$ and $\lfloor a\rfloor=1$.
-The ratios become
-$$a:\frac b{\lfloor b\rfloor}:\frac c{\lfloor c\rfloor}=20:15:12$$
-Case 1: $\lfloor b\rfloor=1$.
-Then $a:b=4:3$, so $b=\frac{3a}4$. The third equation in the data then gives$$\frac{3a^2}4\lfloor c\rfloor=5$$ so that $3a^2\lfloor c\rfloor=20$. But $a<2$ implies $\lfloor c\rfloor=2$. Now we get $a=\sqrt\frac{10}3$, $b=\frac{\sqrt{30}}4$. Since $\lfloor a\rfloor=1$, we have by the first equation in the data that $\frac{\sqrt{30}}4c=3$, that is, $c=\frac25\sqrt{30}$, which indeed has $\lfloor c\rfloor=2$.
-Case 2: $\lfloor b\rfloor=2$.
-Then $a:b=2:3$, so $b=\frac{3a}2$. The third equation in the data then gives$$\frac{3a^2}2\lfloor c\rfloor=5$$ so that $3a^2\lfloor c\rfloor=10$.
-Suppose $\lfloor c\rfloor=2$: then $6a^2=10$ and $a=\sqrt\frac53<\frac53$, but we saw earlier that $a\ge\frac53$, a contradiction. So $\lfloor c\rfloor=1$, and $3a^2=10$ gives $a=\sqrt\frac{10}3$ again. Now $b=\frac{3a}2=\frac{\sqrt{30}}2$, which indeed has $\lfloor b\rfloor=2$, and the first equation gives $bc=3$, that is, $c=\frac3b=\sqrt\frac65$, which indeed has $\lfloor c\rfloor=1$. One checks that all three equations are satisfied, so Case 2 yields a second solution.
-Hence there are exactly two solutions:
-$$(a,b,c)=\left(\sqrt\frac{10}3,\frac{\sqrt{30}}4,\frac25\sqrt{30}\right)\quad\text{and}\quad(a,b,c)=\left(\sqrt\frac{10}3,\frac{\sqrt{30}}2,\sqrt\frac65\right)$$<|endoftext|>
-TITLE: Important papers in arithmetic geometry and number theory
-QUESTION [18 votes]: Having been inspired by this question I was wondering, what are some important papers in arithmetic geometry and number theory that should be read by students interested in these fields?
-There is a related Wikipedia article along these lines, but it doesn't mention some important papers such as Mazur's "Modular Curves and the Eisenstein ideal" paper or Ribet's Inventiones 100 paper.
-
-REPLY [10 votes]: Serre's Duke 54 paper on his famous conjecture about modularity of mod $p$ Galois representations (now a theorem of Khare--Wintenberger and Kisin, and one of the highlights of 21st century number theory so far).
-Khare has written some expository papers on his argument with Wintenberger, and so you might like to read these in conjunction with Serre's original paper.
-Also, there is an old paper of Tate where he proves the level $1$, $p = 2$ case of Serre's conjecture by algebraic number theory methods. It serves as the base-case of a very sophisticated induction (on both the level and the prime $p$!) in the Khare--Wintenberger argument, and is a good read.
-There is also Serre's much older (early 70s) paper on weight one modular forms and the associated Galois representations. (It's from a Durham proceedings.) It is a kind of expository companion to his paper with Deligne where they construct the Galois representations for weight one forms. The Deligne--Serre paper itself is also fantastic (but a more technical read than Serre's Durham paper).
-
-To add to the Galois representation-theoretic suggestions:
-There is Shimura's article A non-solvable reciprocity law, in which he describes the elliptic curve $X_0(11)$ and the relationship to its points mod $p$ and the Hecke eigenvalues of the (unique normalized) cuspform of level $11$.
-As a supplement and motivational guide to the paper, you could look at the paper What is a reciprocity law by Wyman.
(I found it very helpful as an overview of what reciprocity laws were about when I first began trying to learn number theory.)<|endoftext|>
-TITLE: Learning Model Theory
-QUESTION [27 votes]: What books/notes should one read to learn model theory? As I do not have much background in logic, it would be ideal if such a reference does not assume much background in logic. Also, as I am interested in arithmetic geometry, is there a reference with a view towards such a topic?
-
-REPLY [2 votes]: Model Theory by Marker is quite good and has a modern exposition. Sometimes he skips very basic stuff, which might leave some question marks for a careful reader without background. He also does not cover ultraproducts at all, so if you are interested in those, something else might be better suited. It took me longer to grasp proofs there than, e.g., in Chang and Keisler.
-Chang and Keisler also covers ultraproducts and covers a lot of material. I don't like the way it is sectioned, though; there Marker is better in my opinion. I like the way it is written.
-Model Theory by Hodges is my favorite. I like his style of writing; it is maybe also the most complete of the three. This is also a downside, as it might be too much at first, and it might be better to get an overview first. I have to admit that I mainly use it for reference.
-When I study I usually read Marker first and then use the other two if I have problems with his presentation or need more details.<|endoftext|>
-TITLE: What conditions guarantee that all maximal ideals have the same height?
-QUESTION [16 votes]: It fails in general that all maximal ideals in a commutative ring with unity have the same height. It's easy to construct a counter-example when the ring is NOT an integral domain (consider the coordinate ring of a line union a surface). The intuition is that the dimensions at different points are different.
-It is indeed true that all maximal ideals have the same height when the ring $A$ is a finitely generated algebra over some field $k$ and does not have any nonzero zero-divisors. This height equals the transcendence degree of $A$ over $k$.
-However, in general, even when the ring is a Noetherian integral domain, the statement may be false. A counter-example can be found in Atiyah and Macdonald's Introduction to Commutative Algebra, Exercise 4, Chapter 11 on Pg 126. This ring is "large" in some sense.
-My question is: is there some more suitable condition that guarantees that all maximal ideals in an integral domain have the same height?
-Thanks!
-
-REPLY [9 votes]: Not an answer, but other examples.
-
-If $R$ is a Dedekind domain with infinitely many maximal ideals (so $R$ is a Jacobson ring), then for any finitely generated domain $A$ over $R$, all maximal ideals have the same height.
-
-Proof. One can suppose $R\to A$ is injective (otherwise $A$ is finitely generated over a field). Then $A$ is flat over $R$ because it is torsion-free. So the non-empty fibers of $X:=\mathrm{Spec}(A)\to S:=\mathrm{Spec}(R)$ are pure of the same dimension $d$ (EGA IV.13.2.10 or Algebraic Geometry and Arithmetic Curves, 8.2.8) and $\dim X= d+\dim S=d+1$. Let $x$ be a closed point of $X$ (maximal ideal of $A$). As $A$ is Jacobson, the image $s$ of $x$ in $S$ is closed. As all irreducible components of $X_s$ have dimension $d$ and $S$ has dimension $1$, it is easy to see that $\mathrm{codim}(x,X)\ge d+1$ (the codimension is the height of the corresponding maximal ideal). As this codimension is bounded by $\dim X=d+1$, we have equality.
-Edit.
A domain such that all maximal ideals have the same height is called equicodimensional (see EGA IV.5.1.1.6). In EGA IV.10.6.1, it is proved that if (I simplify some assumptions) $R$ is an equicodimensional Jacobson domain, quotient of a regular domain, then any domain finitely generated over $R$ is equicodimensional. My above example (with $R$ Dedekind) is a special case.<|endoftext|>
-TITLE: Proving a commutative ring can be embedded in any quotient ring.
-QUESTION [6 upvotes]: Here's the exercise, as quoted from B.L. van der Waerden's Algebra,
-
-Show that any commutative ring $\mathfrak{R}$ (with or without a zero divisor) can be embedded in a "quotient ring" consisting of all quotients $a/b$, with $b$ not a divisor of zero. More generally, $b$ may range over any set $\mathfrak{M}$ of non-divisors of zero which is closed under multiplication (that is, $b_1 b_2$ is in $\mathfrak{M}$ when $b_1$ and $b_2$ are). The result is a quotient ring $\mathfrak{R}_{\mathfrak{M}}$.
-
-I'm not sure if I am stuck or if I am overthinking. My answer goes like this:
-Commutative rings without zero divisors are integral domains as defined in Algebra and, by removing all $a/b$ which are not in the commutative ring $R$ from the field $R \hookrightarrow Q$, where $Q$ is the field of all quotients $a/b$, one shows that any commutative ring without zero divisors can be embedded in a quotient ring. (This is more rigorously outlined in Algebra itself.)
-Now what has me confused is how this case differs from a case with zero divisors. Doesn't the exact same logic hold for a commutative ring with zero divisors? The only thing that I think zero divisors would interfere with is solving equations of the form $ax=b$ and $ya=b$, since there aren't inverses of zero divisors. However, we do not need to show that they can be embedded in a quotient field. We're only showing they can be embedded in a quotient ring. So, there's really no issue here and we apply the same approach as above.
-Now, as for the last part, I think that all that needs to be said is that if the set $\mathfrak{M}$ weren't closed under multiplication, neither would the "ring" be, and hence it would not be a ring by definition. Therefore, $\mathfrak{M}$ has to be closed, and any commutative ring where $b$ ranges over any set of non-divisors of zero can be embedded in a quotient ring.
-I guess my real issue here is that I can't tell if I'm overthinking or underthinking. Does anyone care to elucidate this for me?
-
-REPLY [10 votes]: The fraction ring (localization) $S^{-1} R$ is, conceptually, the universal way of adjoining inverses of $S$ to $R$. The simplest way to construct it is $S^{-1} R = R[x_i]/(s_i x_i - 1)$. This allows one to exploit the universal properties of quotient rings and polynomial rings to quickly construct and derive the basic properties of localizations (avoiding the many tedious verifications always "left for the reader" in the more commonly presented pair approach). For details of this folklore see e.g. the exposition in section 11.1 of Rotman's Advanced Modern Algebra, or Voloch, Rings of fractions the hard way.
-Likely Voloch's title is a joke, since said presentation-based method is by far the easiest approach. In fact both Rotman's and Voloch's expositions can be simplified.
Namely, the only nonobvious step in this approach is computing the kernel of $R\to S^{-1} R$, for which there is a nice trick: let $n = \deg f$; then
-$$\begin{align}
-r = (1-sx)\,f(x)\ &\Rightarrow\ f(0) = r &&\text{(compare coefficients of $x^0$)}\\
-(1+sx+\cdots+(sx)^n)\,r = (1-(sx)^{n+1})\,f(x)\ &\Rightarrow\ f(0)\,s^{n+1} = 0 &&\text{(compare coefficients of $x^{n+1}$)}\\
-&\Rightarrow\ r\,s^{n+1} = 0
-\end{align}$$
-Therefore, if $s$ is not a zero-divisor, then $r = 0$, so $R\to S^{-1} R$ is an injection.
-For cultural background, see George Bergman's An Invitation to General Algebra and Universal Constructions for an outstanding introduction to universal ideas.
-You might also find illuminating Paul Cohn's historical article Localization in general rings, a historical survey, as well as other papers in that volume: Ranicki, A. (ed.), Noncommutative localization in algebra and topology, ICMS 2002.<|endoftext|>
-TITLE: Sum inequality: $\sum_{k=1}^n \frac{\sin k}{k} \le \pi-1$
-QUESTION [15 upvotes]: I'm interested in finding an elementary proof for the following sum inequality:
-$$\sum_{k=1}^n \frac{\sin k}{k} \le \pi-1$$
-If this inequality is easy to prove, then one may easily prove that the sum is bounded.
-
-REPLY [5 votes]: Let's first observe that $\sum_{k=1}^\infty u^k/k=-\ln(1-u)$.
-If we're concerned about the convergence radius, we can always replace $u$ with $ue^{-\epsilon}$ and let $\epsilon\rightarrow0$. The branch of $\ln$ we're using is the one defined on $\mathbb{C}\setminus(-\infty,0]$: i.e. $\ln(re^{i\theta})=\ln r+i\theta$ where $r>0$ and $\theta\in(-\pi,\pi)$.
-Inserting $\sin x=(e^{ix}-e^{-ix})/2i$, we get
-$$\sum_{k=1}^\infty \frac{\sin kx}{k}
-=\sum_{k=1}^\infty \frac{e^{ikx}-e^{-ikx}}{2ki}
-=\frac{\ln(1-e^{-ix})-\ln(1-e^{ix})}{2i}
-$$
-At this point, I have two alternative solutions. In either case, I assume $x\in[0,\pi)$ to help stay within the selected branch of the logarithm.
-You can look at the triangle with corners $O=0$, $I=1$ and $A=1-e^{-ix}$: this has $IO=IA$ and $\angle OIA=x$, so $\angle AOI=\frac{\pi-x}{2}$. This makes the imaginary part of $\ln(1-e^{-ix})$ equal to $\angle AOI=\frac{\pi-x}{2}$; for $\ln(1-e^{ix})$ it is $-\frac{\pi-x}{2}$. The real parts of the logarithms cancel out, and what remains is $\frac{\pi-x}{2}$.
-Alternatively, while ensuring we stay within the branch of the logarithm, we get
-$$\sum_{k=1}^\infty \frac{\sin kx}{k}
-=\frac{1}{2i}\ln\frac{1-e^{-ix}}{1-e^{ix}}
-=\frac{\ln(-e^{-ix})}{2i}
-=\frac{\ln(e^{i(\pi-x)})}{2i}
-=\frac{\pi-x}{2}.
-$$
-Thus, not only is the sum less than $\pi-1$: it is exactly $\frac{\pi-1}{2}$. And the more general sum
-$$\sum_{k=1}^\infty \frac{\sin kx}{k}
-=\frac{\pi-x}{2}
-$$
-for $x\in[0,\pi]$: if $x=\pi$, the sum becomes zero (either by limit or because all the terms are zero).<|endoftext|>
-TITLE: Slope of curve in $\mathbb{R}^3$
-QUESTION [5 upvotes]: While doing revision, I came across this problem:
-The surface given by $z=x^2-y^2$ is cut by the plane given by $y=3x$, producing a curve in the plane. Find the slope of this curve at the point $(1,3,-8)$.
-I tried substituting $y=3x$ into $z=x^2-y^2$, yielding $z=-8x^2$. Then, $\frac{dz}{dx}=-16x=-16$.
-However the answer is $-8\sqrt{\frac{2}{5}}$.
-Thank you very much for any help.
-
-REPLY [5 votes]: This is a very badly posed question, and does not have an answer. (Read the comments.)
-The following is a solution to a rephrased question which can be answered, however.
-Question: -The surface given by $z=x^2−y^2$ is cut by the plane given by $y=3x$, producing a curve in the plane. -Treating the intersection as a curve in the said plane with vertical axis along $(0,0,1)$ and horizontal axis along $(1,3,0)/\sqrt{10}$, find the slope of this curve at the point (1,3,−8). -Solution: -Any point on the plane has Cartesian coordinates in the form $$\frac{a}{\sqrt{10}}\begin{pmatrix} 1\\3 \\0 \end{pmatrix} + b\begin{pmatrix} 0\\0\\1 \end{pmatrix}.$$ Substituting this into $z=x^2−y^2$, we get $b = -4a^2/5$. -So the "slope" at a point on this intersection, with $a$ and $b$ given, is $$\frac{db}{da} = -\frac{8a}{5}.$$ -Setting $$\begin{pmatrix}1 \\3\\-8\end{pmatrix} = \frac{a}{\sqrt{10}}\begin{pmatrix} 1\\3 \\0 \end{pmatrix} + b\begin{pmatrix} 0\\0\\1 \end{pmatrix},$$ we get $a = \sqrt{10}$ and so the "slope" at this point is $-8\sqrt{\frac{2}{5}}$. -Solution using grad: -Let $f:=x^2-y^2-z$ and $g:=y-3x$. At the point $(1,3,-8)$, $\nabla f=(2,-6,-1)$ and $\nabla g=(-3,1,0)$. Their cross product, $(1,3,-16)$, is along the tangent direction of the intersecting curve produced by the surface and the plane, at the point $(1,3,-8)$. -Denote the angle between $(1,3,-16)$ and $(1,3,0)$ (i.e. the "horizontal") by $\theta$. Then, using the dot product, $\cos\theta = \sqrt{\frac{5}{133}}$. The "slope" is $$\tan \theta = - \sqrt{\frac{1}{\cos^2 \theta}-1} = -8\sqrt{\frac{2}{5}},$$ where the negative square root is taken because the "vertical" is along $(0,0,1)$.<|endoftext|> -TITLE: Does this sequence converge to $\pi$? -QUESTION [5 upvotes]: I have a problem with the following sequence -$$ \lim_{n \to \infty} g_n \stackrel{?}{=} \pi $$ -where -$$g_n = \sum_{k=1}^{n-1} \frac{\sqrt{\frac{2n}{k}-1}}{n-k} - + \sum_{k=n+1}^{2n-1}\frac{\sqrt{\frac{2n}{k}-1}}{n-k}.$$ -Does it converge to $\pi$? I tested experimentally that it -does, but I was unable to prove it by hand. Could anybody help, -or offer some methods of approach? - -REPLY [8 votes]: Setting $i=n-k$ and $j=k-n$ we have -\begin{eqnarray} -g_n&=& -\sum_{i=1}^{n-1}\frac{1}{i}\sqrt{\frac{2n-n+i}{n-i}}-\sum_{j=1}^{n-1}\frac{1}{j}\sqrt{\frac{2n-n-j}{n+j}}\cr -&=&\sum_{i=1}^{n-1}\frac{1}{i}\sqrt{\frac{n+i}{n-i}}-\sum_{j=1}^{n-1}\frac{1}{j}\sqrt{\frac{n-j}{n+j}}\cr -&=&\sum_{k=1}^{n-1}\frac{1}{k}\left(\sqrt{\frac{n+k}{n-k}}-\sqrt{\frac{n-k}{n+k}}\right)\cr -&=&\sum_{k=1}^{n-1}\frac{1}{k}\frac{(n+k)-(n-k)}{\sqrt{n^2-k^2}}\cr -&=&2\sum_{k=1}^{n-1}\frac{1}{\sqrt{n^2-k^2}}=\frac{2}{n}\sum_{k=1}^{n-1}\frac{1}{\sqrt{1-(k/n)^2}}=-\frac{2}{n}+\frac{2}{n}\sum_{k=0}^{n-1}f(k/n), -\end{eqnarray} -with $f(x)=1/\sqrt{1-x^2}$. -Therefore -$$ -\lim_{n\to \infty}g_n=2\int_0^1f(x)dx=2\int_0^{\pi/2}\frac{\cos t}{\sqrt{1-\sin^2t}}dt=2\int_0^{\pi/2}dt=\pi. -$$ - -REPLY [4 votes]: Let $l = 2n - k$. Then by simple algebraic manipulation, we have -$$ \sum_{k=n+1}^{2n} \frac{\sqrt{\frac{2n}{k} - 1}}{n-k} = - \sum_{l=1}^{n-1} \frac{1}{(n-l)\sqrt{\frac{2n}{l} - 1}}. 
$$
-Thus replacing the summation index by $k$ and letting $x_k = x_{k}^{(n)} = \frac{k}{n}$,
-$$ g_n = \frac{1}{n} \sum_{k=1}^{n-1} \frac{1}{1 - x_k} \left( \sqrt{\frac{2}{x_k} - 1} - \frac{1}{\sqrt{\frac{2}{x_k} - 1}} \right) = \frac{1}{n} \sum_{k=1}^{n-1} \frac{2}{\sqrt{2x_k - x_k^2}}, $$
-which converges to
-$$ 2 \int_{0}^{1} \frac{dx}{\sqrt{2x - x^2}} = \pi.$$<|endoftext|>
-TITLE: Accumulation points / Cluster points / Closed sets
-QUESTION [7 upvotes]: In a topological space $X$, given a set $A\subseteq X$, call $x\in X$ an accumulation point of $A$ if $\forall$ open set $U\ni x$, $U \cap A \neq \emptyset$, and $y\in X$ a cluster point of $A$ if $\forall$ open set $U\ni y$, $U\cap A\setminus \{y\} \neq \emptyset$. (These are the terminologies used by my lecturer. I'm aware that different ones exist.)
-Call a set $A\subseteq X$ closed if its complement is open.
-My lecturer gave us a proof that $A$ is closed iff $A$ contains all of its accumulation points (see below). However, I managed to modify it to show that $A$ is closed iff $A$ contains all of its cluster points (see below, marked with []). What went wrong here? If the latter is false in general, in what special cases is it true (I heard it's true in metric spaces)?
-The proof:
-($\Rightarrow$): Suppose $A$ is closed and $x_0 \in X \setminus A$. Take $U:= X\setminus A$, an open set containing $x_0$. Now $U\cap A =\emptyset$, so $x_0$ is not an accumulation point.
-[$x_0$ is not an accumulation point and so it is not a cluster point either.]
-($\Leftarrow$): Suppose $A$ is not closed; then $X\setminus A$ is not open. $\exists x_0 \in X\setminus A$ such that no open set $U\ni x_0$ is contained in $X\setminus A$, i.e. any open set $U\ni x_0$ satisfies $U\cap A \neq \emptyset$. So $x_0$ is an accumulation point of $A$ but not in $A$.
-[For this $x_0$, note that $x_0 \notin U\cap A$ because $x_0 \notin A$. So any open set $U\ni x_0$ satisfies $U\cap A \setminus \{x_0\} \neq \emptyset$, i.e. $x_0$ is a cluster point of $A$ but not in $A$.]
-
-REPLY [9 votes]: Your result is correct, as is your argument. You can even prove directly that if $A$ contains all of its cluster points, then it contains all of its accumulation points. Suppose that a set $A$ contains all of its cluster points but fails to contain its accumulation point $x$. Then $x$ is not a cluster point, so $x$ has an open nbhd $U$ such that $U\cap A\subseteq\{x\}$. But $x\notin A$, so $U\cap A=\varnothing$, contradicting the assumption that $x$ was an accumulation point of $A$.
-Added: Your lecturer could have proved a stronger result. Let $\operatorname{cl}A$ be the set of accumulation points of $A$; then $A$ is closed iff $A=\operatorname{cl}A$. Suppose first that $A$ is closed. You've already proved that $A\supseteq\operatorname{cl}A$, and it's clear that every point of $A$ is an accumulation point of $A$, so $A=\operatorname{cl}A$. Conversely, if $A$ is not closed, you already know that it fails to contain some accumulation point, so $A\ne\operatorname{cl}A$.
-This stronger result fails for cluster points. Let $X$ be any $T_1$-space with at least two points, and let $x\in X$. Then $\{x\}$ is closed, but it has no cluster points, so it can't be equal to the set of its cluster points.<|endoftext|>
-TITLE: Decomposing a Polish space into closed sets
-QUESTION [6 upvotes]: Is it possible to decompose $\mathbf R$, or in general, any uncountable Polish space, into $\mathfrak c$-many disjoint closed subsets such that the union of any infinite subfamily is dense?
-If it is, what additional constraints (if any) have to be put on the space to allow this (I suspect connectedness/zero-dimensionality might play an important role there, if it is possible at all)?
-I tried to construct such a decomposition using the Bernstein set (or a family of those), but without much success.
-This question is motivated by another question (which, as far as I can see, is equivalent to my question in the case of $\mathbf R^2$).
-edit:
-As for the last remark, it seems that I was under the false impression that for quotient maps, the closure of the preimage is the preimage of the closure, which is not true in general.
-
-REPLY [4 votes]: Such a partition does not exist for any first countable space, so certainly not for Polish spaces.
-Let $P$ be a collection of closed sets such that the union
-of every infinite subset is everywhere dense. If $x$ is a point with a countable
-neighbourhood base, then $x \in F$ for all but countably many $F \in P$.
-Proof:
-Let $\{B_n\}_{n \in \mathbb{N}}$ be a decreasing neighbourhood base of $x$.
-Define $d: P \to [-\infty, +\infty]$ by $d(F) = \sup \{ n \mid F \cap B_n \neq \emptyset \}$. Because the union of every infinite subset of $P$ must intersect
-every $B_n$, we have that $\{ F \in P \mid d(F) \le n \}$ is finite for every $n \lt \infty$. It follows that $d(F) = \infty$ for all but countably many $F\in P$, and for all these $F$, since $F$ is closed, $x \in F$.<|endoftext|>
-TITLE: Is there any direct application of Gödel's Theorems outside of logic?
-QUESTION [5 upvotes]: Gödel's incompleteness theorems were a major achievement with ramifications outside the field of mathematics itself. Are there any direct applications of the theorem(s), or any of the methods pioneered in the proof(s), outside the field of logic but within mathematics itself? For example, say, in category theory.
-
-REPLY [10 votes]: To apply Gödel's incompleteness theorems one needs to be working with formal theories. Whether or not there are mathematical applications outside logic depends on how one draws the boundary of logic, as a subfield of mathematics.
-For example, consider the Paris–Harrington theorem, which shows that a certain statement of Ramsey theory, formulable in the language of arithmetic, implies the consistency of Peano arithmetic (PA). The Paris–Harrington sentence is a natural combinatorial statement which, by Gödel's second incompleteness theorem, is not provable in our usual system of first-order arithmetic.
-There are a number of morals which one could draw from this theorem, but the one I want to call attention to here is this: there are number-theoretic propositions which go beyond our usual axioms for arithmetic, and which we must account for. If we hold the Paris–Harrington statement to be true then we're committed to stronger mathematical axioms than PA; for instance, we might want to assert that we can perform induction up to $\varepsilon_0$ and not just $\omega$ (since this is the usual way the consistency of PA is proved).
-In fact a common use of the second incompleteness theorem is to draw boundaries to provability within certain formal systems. For example, we know that we can't prove a version of the Montague–Levy reflection theorem for infinite, rather than finite, sets of sentences, because if we could then one of the infinite sets of sentences we could reflect would be the axioms of ZFC itself. Thus we'd have shown in ZFC that ZFC has a model, proving Con(ZFC) and contradicting the second incompleteness theorem.
This also gives us the following corollary: ZFC is not finitely axiomatisable.
-Assume for a contradiction that there is some finite set of sentences $\Phi$ such that for every formula $\varphi$ in $\mathcal{L}_\in$, $\Phi \vdash \varphi \Leftrightarrow \mathrm{ZFC} \vdash \varphi$. Of course this means that every axiom $\psi \in \Phi$ is a theorem of ZFC. So by the reflection theorem, there is some $V_\alpha$ such that $V_\alpha \models \Phi$. But by our assumption such a model will also be a model of ZFC, so Con(ZFC), contradicting the second incompleteness theorem.
-Again, is this an application within logic? If set theory is part of logic, yes. But direct applications of Gödel's results will only ever show up when we deal with formal systems, so if we say that anytime we do that we're working within logic, then by definition they won't be applicable outside of it. Nonetheless incompleteness is, I would argue, a deep phenomenon; the only reason it doesn't appear more often, or more obviously, in mathematics is just that many results are proved in areas which employ in the background systems like ZFC which are far stronger than they need, and that mathematicians are often not careful about stipulating precisely what resources (that is to say, axioms) they do assume.<|endoftext|>
-TITLE: infinitely many prime numbers with prescribed digits
-QUESTION [5 upvotes]: My main question is the generalization, though one can answer the first one and it will get accepted.
-
-Are there infinitely many primes involving $3,7$ only?
-
-Generalization: For what sets of $k$ given distinct digits (not all even) from $\{0,1,...,9\}$, where $1\leq k \leq 9$, are there infinitely many prime numbers involving only these $k$ digits?
-
-REPLY [2 votes]: I note that Primes that contain digits 3 and 7 only are tabulated at the Online Encyclopedia of Integer Sequences. My understanding is that there is no set $D$ of fewer than 10 digits for which it has been proved that there are infinitely many primes which use only the digits in $D$.<|endoftext|>
-TITLE: how to rotate a Gaussian?
-QUESTION [5 upvotes]: Let's suppose that we have a 2D Gaussian with zero mean and identity covariance, whose equation looks as follows:
-$$f(x,y) = e^{-(x^2+y^2)}$$
-If we want to rotate it by an angle $\theta$, does it mean that we rotate the values $x$ and $y$ and then see how the Gaussian is rotated, or do we actually rotate the graph of the function?
-How can this rotation actually be computed analytically, and what would the graph look like? Is there any intuitive way of understanding it?
-How do we explain rotating a general function analytically and geometrically?
-Thanks a lot
-
-REPLY [7 votes]: If the covariance is the $2\times2$ identity matrix, then the density is
-$$e^{-(x^2+y^2)/2}$$
-multiplied by a suitable normalizing constant. If $\begin{bmatrix} X \\ Y \end{bmatrix}$ is a random vector with this distribution, then you rotate that random vector by multiplying on the left by a typical $2\times 2$ orthogonal matrix:
-$$
-G \begin{bmatrix} X \\ Y \end{bmatrix} = \begin{bmatrix} \cos\theta & -\sin\theta \\ \sin\theta & \cos\theta \end{bmatrix}\begin{bmatrix} X \\ Y \end{bmatrix}.
-$$
-If the question is how to "rotate" the probability distribution, then the answer is that it's invariant under rotations about the origin, since it depends on $x$ and $y$ only through the distance $\sqrt{x^2+y^2}$ from the origin to $(x,y)$.
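-As a quick numerical sanity check of this rotation invariance (my own sketch, not part of the original answer; it assumes Python with numpy is available):
-
-import numpy as np
-rng = np.random.default_rng(0)
-# Sample a standard bivariate normal: zero mean, identity covariance.
-samples = rng.standard_normal((2, 100_000))
-# The 2x2 rotation matrix G from the answer above.
-theta = 0.7
-G = np.array([[np.cos(theta), -np.sin(theta)],
-              [np.sin(theta),  np.cos(theta)]])
-rotated = G @ samples
-# Since var(G [X Y]^T) = G I G^T = G G^T = I, both empirical covariance
-# matrices come out close to the 2x2 identity matrix:
-print(np.cov(samples).round(2))
-print(np.cov(rotated).round(2))
-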
-If you multiply on the left by a $k\times2$ matrix $G$, you have
-$$
-\mathbb{E}\left(G\begin{bmatrix} X \\ Y \end{bmatrix}\right) = G\mathbb{E}\begin{bmatrix} X \\ Y \end{bmatrix}
-$$
-and
-$$
-\operatorname{var}\left( G \begin{bmatrix} X \\ Y \end{bmatrix} \right) = G\left(\operatorname{var}\begin{bmatrix} X \\ Y \end{bmatrix}\right)G^T,
-$$
-a $k\times k$ matrix. If the variance in the middle is the $2\times2$ identity matrix and $G$ is the $2\times 2$ orthogonal matrix given above, then it's easy to see that the variance is
-$$
-GG^T
-$$
-and that is just the $2\times 2$ identity matrix. The only fact you need after that is that if you multiply a multivariate normal random vector by a matrix, what you get is still multivariate normal. I'll leave the proof of that as an exercise.<|endoftext|>
-TITLE: Computing roots of high degree polynomial numerically.
-QUESTION [5 upvotes]: Here is my problem: for my research, I believe that the complex numbers I am looking at are precisely the (very large) set of roots of some high degree polynomial, of degree $\sim 2^n$ where $1 \le n \le 10 \sim 15$. Mathematica has been running for the whole day on my computer just for $2^{10}$ even though $2^9$ took half an hour, and I wondered if any other program out there would be faster than Mathematica so that I could compute more of those roots. If I had more examples to compute it would REALLY help. The thing I need the program to be able to do is simple: I give you a polynomial of very high degree, and I want to numerically plot its complex roots. I don't care about multiplicity.
-Thanks in advance,
-EDIT: Marvis, here is my code for computing the nested polynomial $p^m(x) - x$, where $p^2(x) = p(p(x))$.
-
-function r = polycomp(p,q)
-% Compose two polynomials given as MATLAB coefficient vectors:
-% returns the coefficients of p(q(x)), built up by Horner's rule.
-r = p(1);
-for k = 2:length(p)
-    r = conv(r,q);           % multiply the running result by q(x)
-    r(end) = r(end) + p(k);  % ...then add the next coefficient of p
-end
-
-All I do afterwards is a loop with
-
-r = [1 0];
-for i = 1:n
-    r = polycomp(r,p);
-end
-
-where $n$ is my loop length and $p$ is my polynomial.
-
-REPLY [2 votes]: I'm coming to the conversation late, but I note that there is no discussion of the accuracy of the determined roots. Roots are highly sensitive to perturbations in the polynomial coefficients, i.e. numerical precision is a key determinant of root accuracy, particularly for huge polynomials.
-My question: MATLAB may be faster, but is it more accurate than Mathematica? Did you check to see whether your $2^n$ roots aren't mostly wildly inaccurate?<|endoftext|>
-TITLE: If a function has a finite limit at infinity, does that imply its derivative goes to zero?
-QUESTION [40 upvotes]: I've been thinking about this problem: Let $f: (a, +\infty) \to \mathbb{R}$ be a differentiable function such that $\lim\limits_{x \to +\infty} f(x) = L < \infty$. Then must it be the case that $\lim\limits_{x\to +\infty}f'(x) = 0$?
-It looks like it's true, but I haven't managed to work out a proof. I came up with this, but it's pretty sketchy:
-$$
-\begin{align}
-\lim_{x \to +\infty} f'(x) &= \lim_{x \to +\infty} \lim_{h \to 0} \frac{f(x+h)-f(x)}{h} \\
-&= \lim_{h \to 0} \lim_{x \to +\infty} \frac{f(x+h)-f(x)}{h} \\
-&= \lim_{h \to 0} \frac1{h} \lim_{x \to +\infty}[f(x+h)-f(x)] \\
-&= \lim_{h \to 0} \frac1{h}(L-L) \\
-&= \lim_{h \to 0} \frac{0}{h} \\
-&= 0
-\end{align}
-$$
-In particular, I don't think I can swap the order of the limits just like that. Is this correct, and if it isn't, how can we prove the statement? I know there is a similar question already, but I think this is different in two aspects.
First, that question assumes that $\lim\limits_{x \to +\infty}f'(x)$ exists, which I don't. Second, I also wanted to know if interchanging limits is a valid operation in this case.
-REPLY [12 votes]: Because all counterexamples given here are oscillating, perhaps someone might wonder whether the proposition is true if we require the function to be monotonic. The answer is still no. Simply consider the inverse of the function, monotonic on $(0,\infty)$, given here: $f(x)=\frac{1}{x}+\sin\left(\frac{1}{x}\right)$.
-Also: consider the cumulative distribution function corresponding to
-a probability density function that has no limit as $x\to +\infty$. For example, the integral of the density constructed here. Or, using the same idea with a Cauchy distribution, we get another example:
-$$F(x)= \frac{2}{\pi}\sum_{k=1}^\infty \frac{\tan^{-1}\left((x-k) \pi 2^k\right)}{2^k} $$
-
-This tends to $1$, but its derivative $f(x)=F'(x)= 2 \sum_{k=1}^\infty (\pi^2 2^{2k} (x-k)^2 +1)^{-1} $ has no limit, because $f(n)>2$ $\forall n \in \mathbb{N}$.<|endoftext|>
-TITLE: Compute the limit of $\frac1{\sqrt{n}}\left(1^1 \cdot 2^2 \cdot3^3\cdots n^n\right)^{1/n^2}$
-QUESTION [18 upvotes]: Compute the following limit:
-$$\lim_{n\to\infty}\frac{{\left(1^1 \cdot 2^2 \cdot3^3\cdots n^n\right)}^\frac{1}{n^2}}{\sqrt{n}} $$
-I'm interested in almost any approach to this limit. Thanks.
-REPLY [27 votes]: Let's begin:
-$$
-\lim\limits_{n\to\infty}\frac{\left(\prod\limits_{k=1}^n k^k\right)^{\frac{1}{n^2}}}{\sqrt{n}}=
-\lim\limits_{n\to\infty}\exp\left(\frac{1}{n^2}\sum\limits_{k=1}^n k\log k - \frac{1}{2}\log n\right)=
-$$
-$$
-\lim\limits_{n\to\infty}\exp\left(\frac{1}{n^2}\sum\limits_{k=1}^n k\log\left(\frac{k}{n}\right)+\frac{1}{n^2}\sum\limits_{k=1}^n k\log n - \frac{1}{2}\log n\right)=
-$$
-$$
-\lim\limits_{n\to\infty}\exp\left(\sum\limits_{k=1}^n \frac{k}{n}\log\left(\frac{k}{n}\right)\frac{1}{n}+\frac{1}{2}\log n\left(\frac{n^2+n}{n^2}-1\right)\right)=
-$$
-$$
-\exp\left(\lim\limits_{n\to\infty}\sum\limits_{k=1}^n \frac{k}{n}\log\left(\frac{k}{n}\right)\frac{1}{n}+\frac{1}{2}\lim\limits_{n\to\infty}\frac{\log n}{n}\right)=
-$$
-$$
-\exp\left(\int\limits_{0}^1 x\log x\, dx\right)=\exp\left(-1/4\right)
-$$
-And now we are done!
-
-REPLY [11 votes]: $$
-\frac1{n^2}\sum_{k=1}^nk\log(k)-\frac12\log(n)=\frac1{n}\sum_{k=1}^n\frac{k}n\log\left(\frac{k}n\right)+\frac12\frac{\log(n)}n=\int_0^1x\log(x)\mathrm dx+o(1)
-$$
-
-Edit: Per request, a solution without integrals, using only the elementary version of Stirling's approximation. Let $s_n=\displaystyle\sum_{k=1}^nk\log(k)$. Then,
-$$
-\sum_{k=1}^n\log(k!)=\sum_{k=1}^n\sum_{i=1}^k\log(i)=\sum_{i=1}^n(n-i+1)\log(i)=(n+1)\log(n!)-s_n.
-$$
-On the other hand, Stirling's approximation in its simplest form is $\log(k!)=k\log(k)-k+r_k$ with $r_k=O(\log k)$, which yields
-$$
-\sum_{k=1}^n\log(k!)=\sum_{k=1}^nk\log(k)-\sum_{k=1}^nk+\sum_{k=1}^nr_k=s_n-\frac12n(n+1)+t_n,\quad t_n=\sum_{k=1}^nr_k.
-$$
-Comparing these two expressions and using once more $\log(n!)=n\log(n)-n+r_n$, one gets
-$$
-2s_n=n(n+1)\log n-n(n+1)+(n+1)r_n+\frac12n(n+1)-t_n,
-$$
-hence $2s_n-n^2\log n=-\frac12n^2+u_n$ with
-$$
-u_n=n\log(n)-\frac12n+(n+1)r_n-t_n.
-$$
-Since $r_n=O(\log n)$ and $t_n=O(n\log n)$, each term in $u_n$ is $O(n\log n)$, hence
-$$
-\frac{s_n}{n^2}-\frac{\log n}2=-\frac14+\frac{u_n}{2n^2}=-\frac14+O\left(\frac{\log n}{n}\right)\to-\frac14.
-$$<|endoftext|>
-TITLE: What are your favorite integration tricks?
-QUESTION [5 upvotes]: I'm learning to integrate and I'd like to hear: what are your favorite integration tricks?
-I can't contribute much to this thread, but I like the fact that:
-$$\int_{-a}^{a}{f(x)}dx=0 \space\text{if}\space f(x) \space\text{is odd}$$
-
-REPLY [2 votes]: One I really like is this one:
-If $f$ is a continuous function for which $f(a+b-t)=f(t)$, then $$\int_a^b t\cdot f(t) \mathrm{d}t=\frac{a+b}{2}\int_a^bf(t) \mathrm{d}t$$
-Example:
-$$\begin{align} \int_0^{\pi} \frac{x\sin(x)}{1+\cos^2 (x)}\mathrm{d}x &=\frac{\pi}{2}\int_0^{\pi} \frac{\sin(x)}{1+\cos^2 (x)}\mathrm{d}x\\ &=\frac{\pi}{2} \left[-\arctan(\cos(x))\right]_0^{\pi} \\ &=\frac{\pi^2}{4}\end{align}$$<|endoftext|>
-TITLE: Does the curvature determine the metric?
-QUESTION [11 upvotes]: Here I asked the question whether the curvature determined the metric. Since I am unfortunately completely new to Riemannian geometry, I wanted to ask if somebody could give and explain a concrete example to me, as far as the following is concerned:
-At the MO page (as cited above) I got the following answer to the question
-Given a compact Riemannian manifold M, are there two metrics g1 and g2, which are not everywhere flat, such that they are not isometric to one another, but that there is a diffeomorphism which preserves the curvature?
-If the answer is yes:
-Can we choose M to be a compact 2-manifold?
-
-On the positive side, if $M$ is compact of dimension $\ge 3$ and has nowhere constant sectional curvature, then a combination of results of Kulkarni and Yau shows that a diffeomorphism preserving sectional curvature is necessarily an isometry.
-Concerning 2-dimensional counter-examples: First of all, every surface which admits an open subset where curvature is (nonzero) constant would obviously yield a counter-example. Thus, I will assume now that curvature is nowhere constant. Kulkarni refers to Kreyszig's "Introduction to Differential Geometry and Riemannian Geometry", p. 164, for a counter-example attributed to Stackel and Wangerin. You probably can get the book through interlibrary loan if you are in the US.
-
-I looked up the example in Kreyszig's "Introduction to Differential Geometry and Riemannian Geometry", p. 164:
-If we rotate the curve $x_3=\log x_1$ about the $x_3$-axis in space,
-we obtain the surface of revolution $X(u_1,u_2)=(u_2\cos(u_1), u_2\sin(u_1),\log(u_2))$, $u_2>0$.
-This is diffeomorphic to the helicoid $X(u_1,u_2) =(u_2\cos(u_1),u_2\sin(u_1),u_1)$.
-I think these manifolds are not compact (but I assumed compactness of the manifold in my question on MO).
-I don't understand how to manipulate this example in order to get a compact manifold.
-Thank you for your help.
-
-REPLY [10 votes]: Here's another example.
-First, imagine a short cylinder $S^1\times [0,1]$ and a long cylinder $S^1\times [0,10^{10}]$, but with the same radii. Smoothly cap off both ends of the cylinders in the same way using spaces homeomorphic to discs.
-The resulting manifolds are both homeomorphic to $S^2$, are not isometric (since one has a much larger diameter than the other), but there is a curvature preserving diffeomorphism between them.
-To see this, just convince yourself there is a diffeomorphism $f:S^1\times[0,1]\rightarrow S^1\times [0,10^{10}]$ with the property that $f$ is an isometry when restricted to $S^1\times[0,\frac{1}{4}]$ and $S^1\times[\frac{3}{4},1]$.
This condition allows you to extend $f$ to a curvature preserving diffeo of both compact manifolds.<|endoftext|>
-TITLE: If $g(a,\cdot):= E[f(a,X)|\mathcal{G}]$ and $C= E[f(A,X)|\mathcal{G}]$ then $C(\omega) = g(A(\omega),\omega)$
-QUESTION [5 upvotes]: Consider a probability space $(\Omega,\mathcal{F},P)$, a random variable $X$ on that space, a $\sigma$-algebra $\mathcal{G}$ and a $\mathcal{G}$-measurable random variable $A$. For some function $f: \mathbb{R}^2 \to \mathbb{R}$, consider the conditional expectation:
-$C(\omega):= E[f(A,X)|\mathcal{G}](\omega)$
-I am interested in the question whether $C$ can be expressed using the function
-$g(a,\omega):= E[f(a,X)|\mathcal{G}](\omega)$
-such that $C(\omega) = g(A(\omega),\omega)$?
-Can you give me some hints on where to look and what to read, in order to find out if it's possible, and even derive some properties of $g$?
-Thanks
-
-REPLY [6 votes]: A useful tool in matters of conditional expectations is sometimes called the collectivist approach and may be summarized as follows:
-
-To show that some specific object $c_0$ has a given property, study the collection $\mathcal C$ of objects $c$ with said property. Then the fact that $c_0$ is in $\mathcal C$ often becomes obvious, for example because $\mathcal C$ contains all the objects $c$ sharing some feature of $c_0$.
-
-Here, one is given a random variable $X:\Omega\to\mathbb X$, a sub-sigma-algebra $\mathcal G$ on $\Omega$, a random variable $Y:\Omega\to\mathbb Y$, measurable with respect to $\mathcal G$, and a bounded measurable function $f:\mathbb Y\times\mathbb X\to\mathbb R$. One defines a function $G_f:\mathbb Y\times\Omega\to\mathbb R$ by $G_f(y,\omega)=E(f(y,X)\mid\mathcal G)(\omega)$ and a random variable $Z_f:\Omega\to\mathbb R$ by $Z_f(\omega)=G_f(Y(\omega),\omega)$.
-One wants to show that $E(f(Y,X)\mid\mathcal G)=Z_f$.
-Consider the collection $\mathcal C$ of bounded measurable functions $u:\mathbb Y\times\mathbb X\to\mathbb R$ such that $E(u(Y,X)\mid\mathcal G)=Z_u$. The goal is to show that $f$ is in $\mathcal C$.
-Assume first that $u=\mathbf 1_{F\times E}$ for some measurable subsets $F$ and $E$ of $\mathbb Y$ and $\mathbb X$ respectively.
-
-If $y\in F$, $u(y,\cdot)=\mathbf 1_E$ hence $G_u(y,\omega)=P(X\in E\mid\mathcal G)(\omega)$ for every $\omega$.
-If $y\notin F$, $u(y,\cdot)=0$ hence $G_u(y,\omega)=0$ for every $\omega$.
-
-Thus, $G_u(y,\omega)=P(X\in E\mid\mathcal G)(\omega)\cdot\mathbf 1_F(y)$ for every $\omega$, that is, $Z_u=P(X\in E\mid\mathcal G)\cdot\mathbf 1_F(Y)$.
-On the other hand, $u(Y,X)=\mathbf 1_F(Y)\cdot\mathbf 1_E(X)$ and $\mathbf 1_F(Y)$ is $\mathcal G$-measurable hence
-$$
-E(u(Y,X)\mid\mathcal G)=\mathbf 1_F(Y)\cdot E(\mathbf 1_E(X)\mid\mathcal G)=\mathbf 1_F(Y)\cdot P(X\in E\mid\mathcal G).
-$$
-One sees that $Z_u=E(u(Y,X)\mid\mathcal G)$. Thus, every $u=\mathbf 1_{F\times E}$ is in $\mathcal C$.
-The next step is to consider step functions $u=\sum\limits_{n=1}^Na_n\mathbf 1_{F_n\times E_n}$ for some $N\geqslant1$, measurable subsets $F_n$ and $E_n$ of $\mathbb Y$ and $\mathbb X$ respectively, and numbers $a_n$. A simple argument shows that every such function $u$ is in $\mathcal C$ (linearity?).
-The last step is to note that any bounded measurable function $u:\mathbb Y\times\mathbb X\to\mathbb R$ is a limit of step functions as above, and that another standard argument shows that every such function $u$ is in $\mathcal C$ (dominated convergence?).
-This finishes the proof that $f$ is in $\mathcal C$.
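-(As a concrete sanity check, not part of the original answer: for bounded $X$ and $Y$, taking $f(y,x)=yx$ gives $G_f(y,\omega)=y\,E(X\mid\mathcal G)(\omega)$, hence $Z_f=Y\cdot E(X\mid\mathcal G)$, and the resulting identity $E(YX\mid\mathcal G)=Y\,E(X\mid\mathcal G)$ is the familiar rule that a $\mathcal G$-measurable factor can be pulled out of a conditional expectation.)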
-Finally, note that it is often the case, as here, that the first step (the functions $u=\mathbf 1_{F\times E}$) requires a certain amount of care, but that the subsequent extensions are routine.
-Edit: All this is quite classical. A congenial reference is the so-called little blue book Probability with martingales by David Williams.<|endoftext|>
-TITLE: Bounds for $\zeta$ function on the $1$-line
-QUESTION [9 upvotes]: I was going over my notes from a class on analytic number theory, and we use a bound for the $\zeta$ function on the $1$-line: $\vert \zeta(1+it) \vert \leq \log(\vert t \vert) + \mathcal{O}(1)$ for $t$ bounded away from $0$, say $\vert t \vert \geq 1$. I don't seem to have a proof for this in my notes.
-How does one prove the following bound for $\zeta(s)$ on the $1$-line?
-
-$$\vert\zeta(1+it)\vert \leq \log(|t|) + \mathcal{O}(1) \, \, \forall\, |t| \geq 1$$
-
-REPLY [10 votes]: Analytic Continuation of $\boldsymbol{\zeta(z)}$ with Euler-Maclaurin
-The Euler-Maclaurin Sum formula says that, as a function of $n$,
-$$
-\sum_{k=1}^n\frac{1}{k^z}=\zeta^\ast(z)+\frac{1}{1-z}n^{1-z}+\frac12n^{-z}+O\left(zn^{-1-z}\right)\tag{1}
-$$
-for some $\zeta^\ast(z)$. Note that for $\mathrm{Re}(z)>1$, $\zeta^\ast(z)=\zeta(z)$.
-For all $z\in\mathbb{C}\setminus\{1\}$, define
-$$
-\zeta_n(z)=\sum_{k=1}^n\frac{1}{k^z}-\frac{1}{1-z}n^{1-z}\tag{2}
-$$
-Note that each $\zeta_n$ is analytic and equation $(1)$ says that
-$$
-\zeta_n(z)=\zeta^\ast(z)+\frac12n^{-z}+O\left(zn^{-1-z}\right)\tag{3}
-$$
-which says that for $\mathrm{Re}(z)>0$,
-$$
-\lim_{n\to\infty}\zeta_n(z)=\zeta^\ast(z)\tag{4}
-$$
-and the convergence is uniform on compact subsets of $\mathbb{C}\setminus\{1\}$. Thus, the $\zeta^\ast(z)$ defined in $(4)$ is analytic and agrees with $\zeta(z)$ for $\mathrm{Re}(z)>1$. Thus, $\zeta^\ast(z)$ is the analytic continuation of $\zeta(z)$ for $\mathrm{Re}(z)>0$.
-
-Application to $\boldsymbol{|\zeta(1+it)|}$
-Therefore, using $(1)$, we get that
-$$
-\zeta(1+it)=\sum_{k=1}^n\frac{1}{k^{1+it}}+\frac{1}{it}n^{-it}-\frac{1}{2n}n^{-it}+O\left(\frac{t}{n^2}\right)\tag{5}
-$$
-So as long as $n\ge|t|$, we get that
-$$
-\zeta(1+it)=\sum_{k=1}^n\frac{1}{k^{1+it}}+O\left(\frac1t\right)\tag{6}
-$$
-Using the Euler-Maclaurin Sum formula again, we get that
-$$
-\sum_{k=1}^n\frac1k=\log(n)+\gamma+O\left(\frac1n\right)\tag{7}
-$$
-Letting $n=\lceil|t|\rceil$, and using $(6)$ and $(7)$, we get for large $|t|$,
-$$
-|\zeta(1+it)|\le\log(|t|)+C\tag{8}
-$$<|endoftext|>
-TITLE: What's the difference between "balance laws" and "conservation laws"?
-QUESTION [7 upvotes]: What's the difference between "balance laws" and "conservation laws"?
-Can someone give me some examples?
-
-REPLY [5 votes]: In the partial differential equations literature, the terms conservation law and balance law refer to particular types of first-order PDEs in space and time. Such equations are very common in physics since they express the conservation of a quantity over time (cf. the divergence theorem and the continuity equation). The use presented hereinafter is quite common.
-Systems of conservation laws are equations of the form
-$$
-\partial_t\boldsymbol{u} + \sum_{k=1}^d \partial_{x_k} \boldsymbol{f}_k(\boldsymbol{u}) = \boldsymbol{0}\, ,
-$$
-where $\boldsymbol{u} \in \Bbb R^p$ is the vector of conserved variables, $\boldsymbol{f}_k \in \Bbb R^p$ is the flux along the $k$th spatial direction, and $d$ denotes the spatial dimension. A single conservation law corresponds to the case $p=1$.
Typical examples are the linear advection equation $u_t + u_x = 0$ and the inviscid Burgers equation $u_t + uu_x = 0$.
-Systems of balance laws are systems of conservation laws with some additional terms, such as the relaxation term $\boldsymbol{r}(\boldsymbol{u})$ in
-$$
-\partial_t\boldsymbol{u} + \sum_{k=1}^d \partial_{x_k} \boldsymbol{f}_k(\boldsymbol{u}) = \boldsymbol{r}(\boldsymbol{u})\, .
-$$
-A typical example of a single such balance law is the Burgers equation with relaxation $u_t + uu_x = -u$.
-Remark. Many authors sometimes use the term balance law for the particular case of conservation laws. In practice, the vocabulary may vary from one author to another, so it is hard to make a clear semantic distinction between conservation laws and balance laws.<|endoftext|>
-TITLE: Every Hilbert space operator is a combination of projections
-QUESTION [15 upvotes]: I am reading a paper on Hilbert space operators, in which the authors used a surprising result:
-
-Every $X\in\mathcal{B}(\mathcal{H})$ is a finite linear combination of orthogonal projections.
-
-The author referred to a 1967 paper by Fillmore, Sums of operators of square zero. However, this paper is not online.
-I wonder whether someone has a hint on how this could be true, since there are all kinds of operators while projections have such a regular and restricted form.
-Thanks!
-
-REPLY [4 votes]: I believe the answers you're looking for can be found in a paper by Pearcy and Topping, Sums of small numbers of idempotents, which is openly accessible.<|endoftext|>
-TITLE: How many nodes are there in a 5-regular planar graph with diameter 2?
-QUESTION [5 upvotes]: Undergrad here; I honestly have no idea what to do. I can't even imagine what a 5-regular graph with diameter 2 would look like, let alone a planar one.
-
-REPLY [9 votes]: No such simple graph can exist: the smallest 5-regular planar graph is the icosahedron, and it has diameter 3 (the distance between the green and yellow vertices is 3). I proved that there are actually only two simple planar 5-regular graphs with diameter less than 4 in my paper in Ars Combinatoria (Volume CVI, July, 2012). See my webpage about extremal regular planar graphs for other examples of different degrees and diameters.
-
-We have $5n=2m$ since it is 5-regular, and every face must be of size 3 or more if there are no multiple edges, so $2m\geq 3f$. Putting these together with Euler's formula we get
-$$
-\frac{2m}{3} \geq f = 2 + m - n = 2 + m - \frac{2m}{5}
-$$
-Simplifying, we see that $m \geq 30$ and so $n\geq 12$.
-If you are allowing loops (not just multiple edges) then the best possible is
-10 vertices, achieved by adding loops to the vertices of valency 3 in this graph:
-
-Any graph with more vertices would lead (upon removal of multiple edges or loops) to a simple planar graph of maximum degree at most 5 with diameter 2, but the largest such graph is proven by Yang, Yuansheng, Lin, Jianhua, Dai, Yongjie (J. Comput. Appl. Math. 144 (2002) 349-358) to have 10 vertices, such as the example above.<|endoftext|>
-TITLE: How to compute the volume of intersection between two hyperspheres
-QUESTION [12 upvotes]: Let's say I have two n-spheres and I've no prior knowledge about the spheres (for example, one of the spheres might be inside the other), and I need to compute the volume of the intersection of the two hyper-spheres. Is there an efficient and generic way to compute the hyper-spherical cap of the intersection of these n-spheres?
Note that the hyperspheres are expected to be very high-dimensional, such as 4096.
-
-REPLY [2 votes]: Following the idea of using caps, the Wikipedia article mentions a nice asymptotic result: If $h$ is the height of the cap and $r$ is the radius of the hypersphere, then $V^{cap}_n\rightarrow V_n(1-F((1-h/r)\sqrt{n}))$ for large $n$, where $F$ is the integral of the standard Gaussian.
-In our case $h=\frac{2r-d}{2}$ (for two spheres of equal radius $r$ whose centers are a distance $d$ apart), and we are interested in the intersection, which is $2V_n^{cap}$, so we get $2V^{cap}_n\approx V_n\,2(1-F(\frac{d}{2r}\sqrt{n}))$.
-This shows that the intersection, even normalized by $V_n$, goes to $0$ as the dimension gets large. Pretty cool.<|endoftext|>
-TITLE: Is $z=x^2+y^2$ a bijection?
-QUESTION [6 upvotes]: I am learning basic set theory and was doing some exercises where we are to determine whether a given relation is a function, an injection, a surjection, and whether it is a bijection.
-The question in this case was: Determine whether the relation from $\mathbb{R}^2$ to $\mathbb{R}$ defined by $(x,y)Rz$ if and only if $z=x^2+y^2$ is $(1)$ a function, $(2)$ an injection, $(3)$ a surjection and $(4)$ a bijection.
-$(1)$ is clearly true. $(2)$ It is not an injection, as $(x,y)$ and $(y,x)$ both map to the same point in $\mathbb{R}$. $(3)$ Neither is it a surjection, as $x^2+y^2\ge0, \forall x,y\in\mathbb{R}$. $(4)$ This leads me to believe that it is therefore not a bijection. However, the answers say it is. This is strange, as the book defines a bijection to be a function which is both an injection and a surjection. Clearly the given relation does not satisfy either condition, so it cannot be a bijection.
-I think the answers might have some typos, though, since the answer to the next exercise is missing (a relation from $\mathbb{Z}\times\mathbb{Z}$ to $\mathbb{Z}\times\mathbb{Z}$ where $(a,b)R(x,y)$ if and only if $y=a$ and $x=b$, which I found to be a bijection), and so the answer to this exercise might have been mixed up with the previous one. However, I am new to set theory and so was wondering whether I just made a silly mistake.
-
-REPLY [2 votes]: Recall the definitions:
-$F\colon A\to B$ is a bijection if two things are true:
-
-For every $x,y\in A$ if $x\neq y$ then $F(x)\neq F(y)$ (injectivity); and
-For every $b\in B$ there is some $a\in A$ such that $F(a)=b$ (surjectivity).
-
-Now we identify this in the problem: $A=\mathbb R^2$ and $B=\mathbb R$, and $F(\langle x,y\rangle)=x^2+y^2$.
-
-Is this $F$ injective (do the ordered pairs $\langle a,b\rangle$ and $\langle b,a\rangle$ have different images)?
-Is this $F$ surjective (can the sum of two non-negative numbers be $-1$)?
-
-If even one of these properties fails, the definition of bijection no longer holds; in our case, both fail.<|endoftext|>
-TITLE: Normal subgroups of p-groups
-QUESTION [5 upvotes]: Let $G$ be a group of order $p^\alpha$, where $p$ is prime. If $H\lhd G$, then can we find a normal subgroup of $G/H$ that has order $p$?
-
-REPLY [7 votes]: Theorem: a group $G$ of order $p^n$, with $p$ a prime and $n\in\mathbb N$, always has a normal subgroup of order $p^m$ for every $m\leq n$, $m\in \mathbb N$.
-Proof: Exercise, using that $|Z(G)|>1$ always holds, and induction on $n$.
-So the comments by Gerry and Geoff close the matter.<|endoftext|>
-TITLE: Question about Milnor's talk at the Abel Prize
-QUESTION [26 upvotes]: I don't quite follow the rough outline Milnor gives of the fact that the 7-sphere has different differentiable structures.
The video is available here, and the slides he used can be found here.
-Here's what I got from the talk. Unless otherwise stated, all manifolds are compact, orientable, and smooth (most of the time I state these hypotheses explicitly anyway), and $M$ is an ($n$-dimensional) manifold. Homology and cohomology are taken with coefficients in $\mathbb{Z}$.
-
-By a theorem of Whitney, $M$ embeds in $\mathbb{R}^{n+k}$, from which we get a Gauss map $g:M\rightarrow G_n(\mathbb{R}^{n+k})\subset G_n$, where $G_n(\mathbb{R}^{n+k})$ is the Grassmannian manifold of $n$-planes in $\mathbb{R}^{n+k}$, and $G_n$ is the colimit of those Grassmannians, i.e. the Grassmannian of $n$-planes in $\mathbb{R}^{\infty}$. All Gauss maps $g$ (viewed as maps $M\rightarrow G_n$) are homotopic, and we obtain a well-defined homology class $\langle M\rangle=g_*\mu\in H_n(G_n)$. The letter $\mu$ stands for the fundamental class of $M$.
-If $M$ is a compact orientable topological manifold of dimension divisible by $4$, say $\dim~M=4k$, it comes equipped with a symmetric bilinear form given by the cup product in dimension $2k$:
-$$H^{2k}(M)\times H^{2k}(M)\rightarrow H^{4k}(M)\simeq \mathbb{Z},~(x,y)\mapsto x\cup y$$
-This form kills torsion, so we get a quadratic form on the finitely generated free abelian group $H^{2k}(M)/\mathrm{Torsion}\simeq \mathbb{Z}\oplus\cdots\oplus\mathbb{Z}$, and we can define its signature (as a real quadratic form). The signature of the manifold $M$ is then defined as $\sigma (M)=p-q$ where $p=\#$ of positive eigenvalues and $q=\#$ of negative eigenvalues.
-Going back to the differentiable case, we define the Pontrjagin numbers of $M$ by looking at the cohomology ring of $G_n$. This ring is concentrated in dimensions that are multiples of $4$ and has one generator $p_i\in H^{4i}(G_n)$ for each $i\geq 1$, so that all $1,p_1, p_1^2,p_2,p_1^3,p_1\cup p_2,p_3,\dots$ generate the cohomology (plus some torsion elements). We then define the Pontrjagin numbers of $M$ by evaluating these cohomology classes on the homology class $\langle M\rangle$. This gives potentially nonzero numbers provided $M$ has dimension $4k$. In particular, if $M$ has dimension $8$ and is smooth, compact, orientable, there are two Pontrjagin numbers: $p_1^2(M)$ and $p_2(M)$.
-By a theorem of Hirzebruch, the signature of $M$ (with $\dim~M=4k$) ought to be a polynomial with rational coefficients in the Pontrjagin numbers of $M$, and Milnor tells us that in case $\dim~M=8$,
-$$45\sigma(M)=7p_2(M)-p_1^2(M).$$
-This is somehow related to oriented cobordism. Apparently, two smooth oriented compact manifolds $M$ and $N$ of the same dimension are cobordant iff $\langle M\rangle=\langle N\rangle$. I understand that the signature is a cobordism invariant, so that it extends to a homomorphism $\sigma:\Omega(n)\rightarrow\mathbb{Z}$ where $\Omega(n)$ is the group of oriented cobordisms, and that since cobordism classes are determined by $\langle M\rangle\in H_n(G_n)$, there ought to be a linear relation between the non-torsion bits of this homology class (which can be read off the Pontrjagin numbers) and the signature. The main idea of the proof will be to calculate $$\frac{1}{7}(45\sigma(M)+p_1^2(M))$$ for some eight-dimensional manifold, to observe that it is not an integer, thus showing that its boundary, while homeomorphic to the $7$-sphere, cannot be diffeomorphic to it.
-
-Part of my confusion stems from here.
Milnor considered $7$-dimensional smooth compact orientable manifolds $M$ that are the total space of a locally trivial fibre bundle with fibre $\mathbb{S}^3$ over $\mathbb{S}^4$. He was able to show explicitly that some of these $M$ were homeomorphic to the $7$-sphere by constructing explicit Morse functions with exactly two critical points. However, he found that some of these smooth manifolds that were topological $7$-spheres could not be diffeomorphic to the $7$-sphere. He then (as I understand it) calculated $\frac{1}{7}(45\sigma(E)+p_1^2(E))$ for some $8$-dimensional manifold (which one?) and found that the result was not an integer.
-Could you help me understand how he got to exotic $7$-spheres? What manifold $E$ would one consider? Why is the result a non-integer? And how could it be a non-integer, since in order to make sense of $p_1^2(E)$ in the first place we need it to be smooth, and then $\frac{1}{7}(45\sigma(M)+p_1^2(M))$ must be equal to $p_2(M)$, which is an integer?
-
-REPLY [38 votes]: The method in his slides differs from his original paper. First I will explain Milnor's original construction of exotic $7$-spheres, from his article
-
-On manifolds homeomorphic to the $7$-sphere, Annals of Mathematics, Vol. 64, No. 2, September 1956.
-
-First, he defines a smooth invariant $\lambda(M^7)$ for closed, oriented $7$-manifolds $M^7$. To do this, we first note that every $7$-manifold bounds an $8$-manifold, so pick an $8$-manifold $B^8$ bounded by $M^7$. Let $\mu \in H_7(M^7)$ be the orientation class for $M^7$ and pick an "orientation" $\nu \in H_8(B^8,M^7)$, i.e. a class satisfying
-$$\partial \nu = \mu.$$
-Define a quadratic form
-$$H^4(B^8,M^7)/\mathrm{Tors} \longrightarrow \mathbb{Z},$$
-$$\alpha \mapsto \langle \alpha \smile \alpha, \nu \rangle,$$
-and let $\sigma(B^8)$ be the signature of this form. Milnor assumes that $M^7$ has
-$$H^3(M^7) \cong H^4(M^7) \cong 0,$$
-so that
-$$i: H^4(B^8,M^7) \longrightarrow H^4(B^8)$$
-is an isomorphism. Hence the number
-$$q(B^8) = \langle i^{-1} p_1(B^8) \cup i^{-1} p_1(B^8), \nu \rangle$$
-is well-defined. Then Milnor's $7$-manifold invariant is
-$$\lambda(M^7) \equiv 2q(B^8) - \sigma(B^8) \pmod 7.$$
-Milnor shows that $\lambda(M^7)$ does not depend on the choice of $8$-manifold $B^8$ bounded by $M^7$.
-Now let us turn to the specific $7$- and $8$-manifolds that Milnor considers. As you noted, he looks at the total spaces of $S^3$-bundles over $S^4$. The total space of such a bundle is a $7$-manifold bounding the total space of the associated disk bundle. $S^3$-bundles over $S^4$ (with structure group $\mathrm{SO}(4)$) are classified by elements of
-$$\pi_3(\mathrm{SO}(4)) \cong \mathbb{Z} \oplus \mathbb{Z}.$$
-An explicit isomorphism identifies the pair $(h,j) \in \mathbb{Z} \oplus \mathbb{Z}$ with the $S^3$-bundle over $S^4$ with transition function
-$$f_{hj}: S^3 \longrightarrow \mathrm{SO}(4),$$
-$$f_{hj}(u) \cdot v = u^h v u^j$$
-on the equatorial $S^3$, where here we consider $u \in S^3$ and $v \in \mathbb{R}^4$ as quaternions, i.e. the expression $u^h v u^j$ is understood as quaternion multiplication.
-Let $\xi_{hj}$ be the $S^3$ bundle on $S^4$ corresponding to $(h,j) \in \mathbb{Z} \oplus \mathbb{Z}$. For each odd integer $k$, let $M^7_k$ be the total space of the bundle $\xi_{hj}$, where
-\begin{align*}
-h + j & = 1, \\
-h - j & = k.
-\end{align*}
-Milnor shows that
-$$\lambda(M^7_k) \equiv k^2 - 1 \pmod 7.$$
-Furthermore, he shows that $M^7_k$ admits a Morse function with exactly $2$ critical points, and hence is homeomorphic to $S^7$. Clearly we have
-$$\lambda(S^7) \equiv 0,$$
-so if
-$$k \not\equiv \pm 1 \pmod 7,$$
-then $M^7_k$ is homeomorphic but not diffeomorphic to $S^7$, and hence is an exotic sphere. In particular, $S^7$, $M^7_3$, $M^7_5$, and $M^7_7$ are all homeomorphic to one another but all pairwise non-diffeomorphic.
-
-Now, in the slides, the space $E$ should be the total space of the disk bundle associated to an $S^3$ bundle $\xi_{hj}$ over $S^4$ with
-\begin{align*}
-h + j & = 1, \\
-h - j & = k
-\end{align*}
-for some odd integer $k$, as described above. Then $E$ is an $8$-manifold with boundary $\partial E$ homeomorphic to $S^7$. Now, if $\partial E$ is diffeomorphic to $S^7$, then we can glue $D^8$ to $E$ along their common boundary via a diffeomorphism
-$$f: \partial E \longrightarrow S^7$$
-in order to get a smooth manifold
-$$E' = E \cup_f D^8.$$
-If $f$ is not a diffeomorphism, then $E'$ is not necessarily smooth. So in showing that
-$$p_2(E') \notin \mathbb{Z},$$
-Milnor proves by contradiction that no such diffeomorphism $f$ can exist, since Pontrjagin numbers of a manifold are integers. So in that case $\partial E$ would be homeomorphic to $S^7$ but not diffeomorphic, and hence an exotic $7$-sphere.<|endoftext|>
-TITLE: Can a countable set contain uncountably many infinite subsets such that the intersection of any two such distinct subsets is finite?
-QUESTION [19 upvotes]: Can a countable set contain uncountably many infinite subsets such that the intersection of any two such distinct subsets is finite?
-
-REPLY [4 votes]: For each $t\in(\frac1{10},1),$ take its decimal expansion (or one of its decimal expansions in case of non-uniqueness), and let $A_t$ be the set of all finite truncations of that decimal expansion, considered as positive integers. For example,
-$$A_{\frac13}=\{3,\ 33,\ 333,\ 3333,\ 33333,\dots\},$$
-$$A_{\pi-3}=\{1,\ 14,\ 141,\ 1415,\ 14159,\dots\}.$$<|endoftext|>
-TITLE: Show $|a|+|b|+|c|+|a+b+c| \geq |a+b|+|b+c|+|c+a|$ for complex $a$, $b$, $c$
-QUESTION [7 upvotes]: How does one prove that for any complex numbers $a$, $b$, $c$ the inequality $$|a|+|b|+|c|+|a+b+c| \geq |a+b|+|b+c|+|c+a|$$ holds?
-
-REPLY [4 votes]: Both sides are non-negative, so it suffices to show that the square of the left-hand side is at least the square of the right-hand side. That is, we wish to show:
-$$
-|a|^2+|b|^2+|c|^2+|a+b+c|^2+2|ab|+2|bc|+2|ac|+2(|a|+|b|+|c|)|a+b+c|
-\geq\\
-|a+b|^2+|b+c|^2+|a+c|^2+2(|a(a+b+c)+bc|+|b(a+b+c)+ac|+|c(a+b+c)+ab|)
-$$
-The square terms cancel:
-$$
-|a|^2+|b|^2+|c|^2+|a+b+c|^2 = 2|a|^2+2|b|^2+2|c|^2+2\operatorname{Re}(a\bar b+b\bar c+a\bar c)=|a+b|^2+|b+c|^2+|a+c|^2
-$$
-and by the triangle inequality we have $|a(a+b+c)|+|bc|\geq |a(a+b+c)+bc|$ and cyclic permutations.<|endoftext|>
-TITLE: Definition and meaning of "Proof Schema", "Class Sign"
-QUESTION [5 upvotes]: I'm a newbie in advanced mathematics, and I'm trying to understand Gödel's theorem. I came across these two terms, which I couldn't understand clearly:
-"Proof Schema" and "Class-Sign"
-Can anybody provide definitions of these, and describe what these terms mean in simple words?
-It's from Kurt Gödel's book (translated into English) "On Formally Undecidable Propositions of Principia Mathematica and Related Systems", published by Dover Publications.
I'll just quote the lines where they are introduced (page 39):
-"It can be shown that "formula", "proof-schema", and "provable formula" are definable in the system of PM."
-Class-Sign:
-"A formula of PM with just one free variable, and that of the type of natural numbers (class of classes), we shall designate a 'class-sign'."
-Thank you
-
-REPLY [5 votes]: By "proof schema" Gödel just means a formal proof. You can see this quite clearly from the context: a formula is a finite sequence of natural numbers, each of which codes one of the symbols of the formula, while a proof schema is a finite sequence of finite sequences of natural numbers, in other words a finite sequence of codes of formulae.
-The basic idea of Gödel coding is very simple: take the symbols of a formal language (the alphabet) and represent each symbol by a natural number in such a way that given any string of such symbols we can produce a finite sequence of natural numbers which represents it (encoding); and given any natural number there is an effective procedure by which we can calculate which symbol it represents (decoding).
-A "class sign" is what we would normally in English call a predicate: a formula with one free variable $\varphi(x)$ whose extension is the class of objects $\left\{ x : \varphi(x) \right\}$. Since we're only considering natural numbers here the extension in question will of course be a set, that is, the set of natural numbers $n$ such that $\varphi(n)$. (Being fussy, we might say that a class sign is a unary predicate since it has only one free variable; one can have predicates of any finite arity.)
-Defining these things in a precise manner is key to the method of arithmetisation, whereby logical notions are shown to be expressible in the language of arithmetic, and thus sentences in that language can be constructed which refer to themselves. This is called the method of diagonalisation.<|endoftext|>
-TITLE: Three sequences and a limit (own)
-QUESTION [16 upvotes]: Let us consider three sequences $(a_n)_{n\ge1}$, $(b_n)_{n\ge1}$ and $(c_n)_{n\ge1}$ having the properties:
-
-$a_{n},\ b_{n},\ c_{n}\in\left(0,\ \infty\right)$
-$a_{n}+b_{n}+c_{n}\ge\frac{a_{n}}{b_{n}}+\frac{b_{n}}{c_{n}}+\frac{c_{n}}{a_{n}}\ \forall n\ge1$
-$\lim\limits_{n\to\infty}a_{n}b_{n}c_{n}=1$
-
-Prove that
-$$\lim_{n\to\infty}\frac{a_{n}+b_{n}+c_{n}}{a_{n}b_{n}+b_{n}c_{n}+c_{n}a_{n}}=1$$
-
-REPLY [5 votes]: A proof that isn't very clever but does the job: let
-$$f(a,b,c)=ab\left(a+b+c-\tfrac ab-\tfrac bc-\tfrac ca\right)$$
-which is chosen so that $f(a_n,b_n,c_n)\ge 0$.
-The idea is:
-
-Reduce to the $abc=1$ case by defining $g(a,b)=f(a,b,\frac{1}{ab})$ and proving that $f(a_n,b_n,c_n)-g(a_n,b_n)\to 0$.
-Prove that $g$ is nonpositive and that $g(a_n,b_n)\to 0$ implies $(a_n,b_n)\to (1,1)$.
-Conclude that $(a_n,b_n,c_n)\to (1,1,1)$.
-
-Proof:
-
-Since the problem is invariant under cyclic permutation of $a_n,b_n,c_n$ for any single $n$, we can assume that $c_n$ is the maximum.
We get
-$$c_n/a_n\le a_n+b_n+c_n\le 3c_n,\qquad\text{hence}\qquad 1/a_n\le 3.$$
-This implies $\limsup b_n c_n\le 3$, so that
-$$\limsup a_n^2 b_n^3\le \limsup c_n^2 b_n^{5/2} c_n^{1/2}=\limsup (b_nc_n)^{5/2}\le 3^{5/2}.$$
-So:
-$$a_n^2 b_n^3\left(\tfrac{1}{a_nb_nc_n}-1\right)+\tfrac{1}{a_n}(a_nb_nc_n-1)\to 0$$
-and therefore
-$$f(a_n,b_n,c_n)-g(a_n,b_n)\to 0.$$
-A straightforward calculation shows that $a\,g(a,b)$ is the cubic polynomial $$-(ab)^3+(ab)^2+a^3b-a^3+a-1$$ (so, since $a>0$ here, $g$ and this polynomial always have the same sign), with discriminant in $a$
-$$-(b-1)^2(b+1)(23b^3+5b^2-27b+23),$$
-so that, on $D=[0,+\infty)^2$, the polynomial is always non-zero when $b\ne 1$, and when $b=1$ it is non-zero only when $a=1$. Therefore, because it is negative at $(0,0)$, it is everywhere negative except at $(a,b)=(1,1)$; the same is then true of $g$.
-Furthermore the $\sup$ of $g$ over $D$ minus any neighborhood of $(1,1)$ is negative because $D$ is closed and $g\le -1+\varepsilon$ in a neighborhood of $\infty$. As a consequence, if $g(a_n,b_n)\to 0$ then $(a_n,b_n)\to (1,1)$.
-Because $f(a_n,b_n,c_n)\ge 0$ and $g$ is nonpositive, $g(a_n,b_n)\to 0$, so that $(a_n,b_n)\to (1,1)$ and therefore:
-$$\lim_{n\to\infty} a_n=\lim_{n\to\infty} b_n=\lim_{n\to\infty} c_n=1,$$
-which is actually a stronger form of the theorem.<|endoftext|>
-TITLE: Restriction of flat morphism
-QUESTION [6 upvotes]: Suppose that $f\colon X\to Y$ is a flat morphism of varieties over an algebraically closed field $k$. Let $E\subseteq X$ and $F\subseteq Y$ be closed subvarieties such that $f(E) = F$. Is it true that the restricted morphism $f|_E\colon E\to F$ is also flat? If not, are there some additional conditions on $f$ which would make this true?
-
-REPLY [7 votes]: No, the restriction $f:E\to F$ needn't be flat.
-Take $Y=\mathbb A^2, X=\mathbb A^2 \times \mathbb P^1$ and for $f:X\to Y$ take the first projection, which is flat.
-Now inside $X$ lies the blow-up $B\subset X$ of $Y$ at the origin $(0,0)\in \mathbb A^2=Y$.
-The restricted map $f|_B: B\to \mathbb A^2$ is well known not to be flat, because all its fibers outside the origin are single points, whereas the fiber at the origin is the one-dimensional projective space $\mathbb P^1$: flat maps do not tolerate such dimension jumps.
-As to your second question, I am pessimistic about a general criterion ensuring that the restriction of a flat map will remain flat.
-Flatness is a subtle relation between the fibers of a morphism, and I have the feeling that restricting a flat morphism to a subvariety of its domain will usually destroy this relation.<|endoftext|>
-TITLE: On the Dirichlet beta function sum $\sum_{k=2}^\infty\Big[1-\beta(k) \Big]$
-QUESTION [7 upvotes]: Given the Dirichlet beta function,
-$$\beta(k) = \sum_{n=0}^\infty\frac{(-1)^n}{(2n+1)^k}$$
-(The case $k = 2$ is Catalan's constant.) It seems,
-$$\sum_{k=2}^\infty\Big[1-\beta(k) \Big] = \frac{1}{4}\big(\pi+\log(4)-4\big)=0.131971\dots$$
-or, in general, for some constant $p > 0$,
-$$\sum_{k=2}^\infty\left[1-\sum_{n=0}^\infty\frac{(-1)^n}{(pn+1)^k} \right] = \sum_{m=1}^\infty\frac{1}{2p^2m^2+3pm+1}$$
-Does anyone know how to prove the general proposed equality? (This is similar to the question on the zeta sum here.)
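-(A quick numerical check of the proposed identity, added for illustration. The Python sketch below is mine, not part of the original question; the 30-digit precision and the cutoff of 60 terms are arbitrary choices, and mpmath's nsum is used only to sum the alternating series for $\beta(k)$.)
-
-    from mpmath import mp, nsum, inf, pi, log
-
-    mp.dps = 30  # working precision (digits)
-
-    def beta(k):
-        # Dirichlet beta via its alternating series; nsum accelerates convergence.
-        return nsum(lambda n: (-1)**int(n) / (2*n + 1)**k, [0, inf])
-
-    # 1 - beta(k) decays roughly like 3^(-k), so 60 terms suffice at this precision.
-    lhs = sum(1 - beta(k) for k in range(2, 60))
-    rhs = (pi + log(4) - 4) / 4
-    print(lhs, rhs)  # the two printed values agree to many digits
-
-The same check can be run against the right-hand sum for other values of $p$ by replacing $2n+1$ with $pn+1$.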
-
-REPLY [6 votes]: Here is a way to derive a slightly different looking result:
-Notice that $$\sum_{k=2}^{\infty}\left[1-\sum_{n=0}^{\infty}\frac{(-1)^{n}}{(pn+1)^{k}}\right]=\sum_{k=2}^{\infty}\sum_{n=1}^{\infty}\frac{(-1)^{n-1}}{(pn+1)^{k}}$$
-$$=\sum_{n=1}^{\infty}(-1)^{n-1}\sum_{k=2}^{\infty}\frac{1}{(pn+1)^{k}}=\sum_{n=1}^{\infty}\frac{(-1)^{n-1}}{(pn+1)^{2}}\sum_{k=0}^{\infty}\frac{1}{(pn+1)^{k}}.$$ Now, since $$\sum_{k=0}^{\infty}\frac{1}{(pn+1)^{k}}=\frac{1}{1-\frac{1}{pn+1}}=\frac{pn+1}{pn},$$ our series is
-$$\sum_{n=1}^{\infty}\frac{(-1)^{n-1}}{pn(pn+1)}.$$
-Plugging in the case $p=2$ seems to agree with your first identity.
-Remark: Using partial fractions, we can go a bit further. Notice that $$\sum_{n=1}^{\infty}\frac{(-1)^{n-1}}{pn(pn+1)}=\sum_{n=1}^{\infty}(-1)^{n-1}\left(\frac{1}{pn}-\frac{1}{pn+1}\right)=\frac{\log 2}{p}-\sum_{n=1}^{\infty}\frac{(-1)^{n-1}}{pn+1} $$
-Suppose $p$ is an odd integer, and let $\zeta_{p}$ be a primitive $p^{th}$ root of unity. Then consider $$\frac{\log\left(1+z\right)}{z}+\frac{\log\left(1+\zeta_{p}z\right)}{\zeta_{p}z}+\cdots+\frac{\log\left(1+\zeta_{p}^{p-1}z\right)}{\zeta_{p}^{p-1}z}=\sum_{n=1}^{\infty}\frac{\left(-1\right)^{n-1}}{n}z^{n-1}\sum_{k=0}^{p-1}\zeta_{p}^{k(n-1)} $$
-$$=p\sum_{m=0}^{\infty}\frac{\left(-1\right)^{pm}}{pm+1}z^{pm},$$ since the inner sum vanishes unless $p\mid n-1$. Because $p$ is odd we have $(-1)^{pm}=(-1)^{m}$, and moreover $\zeta_p^k\neq -1$ for every $k$, so we may let $z=1$ to obtain the identity $$\sum_{m=0}^{\infty}\frac{\left(-1\right)^{m}}{pm+1}=\frac{1}{p}\sum_{k=0}^{p-1}\frac{\log\left(1+\zeta_{p}^{k}\right)}{\zeta_{p}^{k}},\qquad\text{i.e.}\qquad \sum_{n=1}^{\infty}\frac{\left(-1\right)^{n-1}}{pn+1}=1-\frac{1}{p}\sum_{k=0}^{p-1}\frac{\log\left(1+\zeta_{p}^{k}\right)}{\zeta_{p}^{k}},$$ so our original series is $$\frac{\log 2}{p}-1+\frac{1}{p}\sum_{k=0}^{p-1}\frac{\log\left(1+\zeta_{p}^{k}\right)}{\zeta_{p}^{k}}.$$ (For even $p$ the same root-of-unity filter works with $p$-th roots of $-1$ in place of the $\zeta_p^k$.)<|endoftext|>
-TITLE: A limit involving polynomials
-QUESTION [9 upvotes]: Consider the polynomial:
-$$P_n (x)=x^{n+1} - (x^{n-1}+x^{n-2}+\cdots+x+1)$$
-I want to prove that it has a single positive real root, which we'll denote by $x_n$, and then to compute:
-$$\lim_{n\to\infty} x_{n}$$
-
-REPLY [3 votes]: Since it's not much more work, let's study the roots in $\mathbb{C}$.
-Note that $x=1$ is not a solution unless $n=1$, since $P_n(1) = 1-n$.
-Since we are interested in the limit $n\to\infty$, we can assume $x\ne 1$.
-Sum the geometric series,
-$$\begin{eqnarray*}
-P_n (x) &=& x^{n+1} - (x^{n-1}+x^{n-2}+\cdots+x+1) \\
-&=& x^{n+1} - \frac{x^n-1}{x-1}.
-\end{eqnarray*}$$
-The roots will satisfy
-$$x_n^{n}(x_n^2-x_n-1) = -1.$$
-(Addendum: If there are concerns about convergence of the sum, think of summing the series as a shorthand that reminds us that $(x-1)P_n(x) = x^{n}(x^2-x-1) + 1$ for all $x$.)
-If $0\le |x_n|<1$, $\lim_{n\to\infty} x_n^n = 0$, thus, in the limit, there are no complex roots in the interior of the unit circle.
-If $|x_n|>1$, $\lim_{n\to\infty} 1/x_n^n = 0$, thus, in the limit, the roots must satisfy
-$$x_n^2 - x_n - 1 = 0.$$
-There is one solution to this quadratic equation with $|x_n|>1$; it is real and positive,
-$$x_n = \frac{1}{2}(1+\sqrt{5}).$$
-This is the golden ratio.
-It is the only root exterior to the unit circle.
-The rest of the roots must lie on the boundary of the unit circle.
-
-Figure 1. Contour plot of $|P_{15}(x+i y)|$.<|endoftext|>
-TITLE: Curvature of geodesic circles on surface with constant curvature
-QUESTION [11 upvotes]: I am trying to solve the following exercise:
-
-Prove that on a surface of constant curvature the geodesic circles have constant curvature.
-
-In the case of the surface, I take "constant curvature" to refer to the Gaussian curvature.
Now, the geodesic curvature of a curve parameterized by arc length in orthogonal coordinates is given by
-$$k_g(s) = \frac{1}{2 \sqrt{EG}} \left(G_u v'- E_v u' \right)+ \phi',$$
-where $\cdot'$ denotes the derivative with respect to $s$, and $\phi$ is the angle the tangent of the curve makes with $x_u$.
-Using geodesic polar coordinates (setting $u = \rho$ and $v = \theta$), a surface with constant Gaussian curvature $K$ satisfies
-$$(\sqrt{G})_{\rho\rho} + K \sqrt{G} = 0.$$
-Also, we get $E=1$, $F=0$, and a geodesic circle has the equation $\rho = \mathrm{const.}$ Therefore, the first equation above yields
-$$k_g(s) = \frac{G_\rho \theta'}{2\sqrt{G}}.$$
-It seems that to prove $k_g$ is constant, one would have to show that its derivative is $0$. I tried that, but the derivative gets rather ugly and I don't see how to proceed.
-
-REPLY [4 votes]: If you have the textbook Differential Geometry of Curves and Surfaces by do Carmo, see p. 289, the part on Minding's theorem. You will get $E=1$, $F=0$, $G=$ const. and $G_\rho=$ const. along a geodesic circle (i.e. $\rho=$ const.).
-And see p. 254, the part on Liouville's formula: the geodesic curvature of the curve $\rho=$ const. is
-$k_{g}=\large\frac {G_\rho}{2G\sqrt{E}}$. So $k_g$ is constant.<|endoftext|>
-TITLE: Is $i\notin \mathbb{Q}(\zeta_p)$ for all odd primes $p$?
-QUESTION [8 upvotes]: My main question is the title: for an odd prime $p$, denote a primitive $p^{\text{th}}$ root of unity by $\zeta_p$. Is it true that $i$ is not contained in the cyclotomic extension $\mathbb{Q}(\zeta_p)$? If this is true, is the following proof correct, and if it is not true, where does the following proof break down:
-Recall that the unique quadratic subfield of $\mathbb{Q}(\zeta_p)$ is $\mathbb{Q}(\sqrt{\pm p})$, where there is a $"+"$ if $p\equiv 1 \mod 4$ and a $"-"$ if $p\equiv 3 \mod 4$ (Source: exercise 11 of section 14.7 of Dummit and Foote). Assume to the contrary that $i \in \mathbb{Q}(\zeta_p)$. Then $\mathbb{Q}(i)$ is a quadratic extension of $\mathbb{Q}$ contained in $\mathbb{Q}(\zeta_p)$. But this yields an immediate contradiction, since of course $\mathbb{Q}(\sqrt{\pm p}) \neq \mathbb{Q}(i)$. So $i\notin \mathbb{Q}(\zeta_p)$.
-
-REPLY [11 votes]: Another proof: if $i\in{\mathbb Q}(\zeta_p)$ then ${\mathbb Q}(i\zeta_p) \subseteq {\mathbb Q}(\zeta_p)$. But $i\zeta_p$ is a primitive $(4p)$th root of unity, so we have a field of degree $\phi(4p)=2(p-1)$ over $\mathbb Q$ contained in a field of degree $\phi(p)=p-1$ over $\mathbb Q$, a contradiction.<|endoftext|>
-TITLE: Formula for the sequence repeating twice each power of $2$
-QUESTION [7 upvotes]: I am working on a project that needs to calculate what the $a_n$ element of the sequence of numbers $$1, 1, 2, 2, 4, 4, 8, 8, 16, 16, \ldots$$ will be.
-$n$ can be quite a big number, so for performance reasons I have to find a formula for this sequence (Note: the first element may be different). What I've done so far is that I managed to find out how to calculate the next element using the element before. My formula for this is:
-$$a_n = a_{n-1}\cdot 2^{\frac{1 + (-1)^{n-1}}2}$$
-Now from this I want to calculate the $a_n$ element using the $a_1$ element. With this I am stuck.
-
-REPLY [2 votes]: In addition to the other more correct answers, it is interesting to note that this sequence can be generated with only two bit shifts:
-$${\tt{}a_n=1 << (n >> 1)}$$
-Note that ${\tt{}n}\in[0,\infty)$, not ${\tt{}n}\in[1,\infty)$.
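-As a quick illustration of the formula just given (a one-line Python sketch, nothing beyond the two shifts):
-
-    # a_n = 1 << (n >> 1): halve the index, then use it as a power of two.
-    print([1 << (n >> 1) for n in range(10)])  # [1, 1, 2, 2, 4, 4, 8, 8, 16, 16]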
Also, if you allow $\tt{}1$ to vary (for example, choose $\tt{}a_{n,\alpha}=\alpha << (n >> \alpha)$), you get sequences like this:
-$\alpha=1: \quad 1, 1, 2, 2, 4, 4, 8, 8, 16, 16, \cdots$
-$\alpha=2: \quad 2, 2, 2, 2, 4, 4, 4, 4, 8, 8, \cdots$
-$\alpha=3: \quad 3, 3, 3, 3, 3, 3, 3, 3, 6, 6, \cdots$<|endoftext|>
-TITLE: Dixmier-Douady class: computations
-QUESTION [6 upvotes]: As far as I know, Dixmier-Douady classes represent obstructions to spin$^c$ structures. Questions:
-
-Could somebody prove or give a reference: manifolds of dimension lower than $5$ always have a vanishing Dixmier-Douady class.
-I want to compute Dixmier-Douady classes for $n$-manifolds, $n\geq 5$. Rather, I want to be sure that I understand, at a computational level, what DD classes are. Could somebody provide an operational definition of Dixmier-Douady classes that would work for this task? I have no idea how to begin. E.g. I only know the example in T. Friedrich's book. He considers the homogeneous space $X^5=SU(3)/SO(3)$, shows via homotopy theory that the frame bundle $Q\to X^5$ has vanishing fundamental group and therefore admits no spin$^c$ structure. Indirectly, he is showing that the Dixmier-Douady class of $X^5$ is not trivial. But can one do the same for arbitrary manifolds? Could somebody give a simple example of how to proceed?
-
-REPLY [3 votes]: An orientable smooth manifold $X$ admits a $\text{Spin}^c$ structure iff its second Stiefel-Whitney class $w_2 \in H^2(X, \mathbb{F}_2)$ is the reduction of a class $c_1 \in H^2(X, \mathbb{Z})$. This condition is equivalent to the condition that the third integral Stiefel-Whitney class $W_3 = \beta w_2 \in H^3(X, \mathbb{Z})$ vanishes, and I guess this is what you're calling the Dixmier-Douady class of $X$. Here $\beta$ is a Bockstein homomorphism. Note that $W_3$ is always $2$-torsion, and since $H^3(X, \mathbb{Z})$ has no torsion if $\dim X \le 3$, the Dixmier-Douady class is always trivial in these low-dimensional cases.
-That leaves the case $\dim X = 4$. Here I don't have a proof. It's clear if $H^3(X, \mathbb{Z})$ has no torsion, which in the compact case by Poincare duality is equivalent to $H_1(X, \mathbb{Z})$ having no torsion. It is always possible to pass to a finite cover with this property, and hence I can at least say that a compact orientable smooth $4$-manifold has a finite cover with a $\text{Spin}^c$ structure. But I don't yet see how to prove the full result.
-Edit: A proof for the case $\dim X = 4$ is given in this note by Teichner and Vogt. In the compact case they mention that it is a classical result due to Hirzebruch and Hopf, proven using some Wu class computations.<|endoftext|>
-TITLE: Showing that the product and metric topology on $\mathbb{R}^n$ are equivalent
-QUESTION [8 upvotes]: I'm new to topology, and can't figure out why the metric and product topologies over $\mathbb{R}^n$ are equivalent. Could someone please show me how to prove this?
-
-REPLY [7 votes]: The product topology is induced by this norm:
-$$\|x\|_{\rm prod} = \max\{|x_k|, 1\le k \le n\}$$
-Let us use $\|\cdot\|$ for the Euclidean norm. Then
-$$\|x\| = \left(\sum_{k=1}^n x_k^2\right)^{1/2}\le \left(\sum_{k=1}^n \|x\|_{\rm prod}^2\right)^{1/2} = \|x\|_{\rm prod}\sqrt{n}.$$
-Now for a reverse inequality.
-We have
-$$|x_j|\le \left(\sum_{k=1}^n x_k^2\right)^{1/2}, \qquad 1\le j \le n,$$
-so $$\|x\|_{\rm prod} \le \|x\|.$$
-The norms are equivalent.<|endoftext|>
-TITLE: computing ${{27^{27}}^{27}}^{27}\pmod {10}$
-QUESTION [8 upvotes]: I'm trying to compute the rightmost digit of ${{27^{27}}^{27}}^{27}$.
-I need to compute ${{27^{27}}^{27}}^{27}\pmod {10}$.
-I know that ${{(27)^{27}}^{27}}^{27}\equiv{{(7)^{27}}^{27}}^{27} \pmod {10}$, so now I need to compute ${(7^{27})^{27}}^{27} \pmod {10}$. Since $\gcd(7,10)=1$ and $\phi(10)=4$, we get $7^{27}=7^{24}\cdot 7^3\equiv 1 \cdot 7^3 \equiv 3 \pmod {10}$ (by the Euler-Fermat theorem), so I am left with computing $(3^{27})^{27} \pmod {10}$; again $\gcd(3,10)=1$, so $3^{27}= 3^{24}\cdot3^3 \equiv 7\pmod {10}$, so as I see it the final step should be again $7^{27}$, which as computed above is $\equiv 3 \pmod{10}$.
-Is this correct? What is the correct way to do this?
-Thanks
-
-REPLY [4 votes]: $\varphi(10)=4$ and $27$ is of the form $4n+3$.
-Now ${(4n+3)^{(4n+3)}} \equiv 3\pmod 4$, i.e. it is again of the form $4c+3$.
-Clearly, then, ${(4n+3)}^{(4n+3)^{(4n+3)}}=(4n+3)^{(4c+3)}\equiv 3\pmod 4$.
-This holds for a tower of such powers of the form $4n+3$ of any height.
-So ${27}^{27^{27^{27}}}\equiv 7^3\equiv 3\pmod{10}$, as $7^4\equiv 1\pmod{10}$.<|endoftext|>
-TITLE: Embedding torsion units of an order into torsion units of the reduced order.
-QUESTION [5 upvotes]: Let $A$ be an order, i.e. a commutative ring of which the additive group is isomorphic to $\mathbb{Z}^n$ for a certain non-negative integer $n$. Show that there exists an embedding
-$$A^{\times}_{\text{tor}}\ \hookrightarrow\ (A_{\text{red}})^{\times}_{\text{tor}},$$
-where $A^{\times}_{\text{tor}}$ is the group of torsion units of $A$, and $A_{\text{red}}=A/\sqrt{0_A}$ is the reduced ring of $A$.
-Edit: In response to a reply which seems to have been removed: I understand that the quotient map $A\rightarrow A_{\text{red}}$ restricts to a group homomorphism $A^{\times}_{\text{tor}}\rightarrow(A_{\text{red}})^{\times}_{\text{tor}}$, but I am unable to show that this map is injective.
-
-REPLY [5 votes]: Since this is homework, let me give a hint:
-If $a$ lies in the kernel of your map, show that you may write $a = 1 + x$
-for some nilpotent $x$. Now write $a^n = 1$ for some $n$, deduce a corresponding
-equation involving $x$, and see where it leads.
-Added: Since the OP has now solved the question, let me sketch the answer, based
-on the discussion in the comments:
-$(1+x)^n = 1$ implies that $nx + {}$ higher order terms in $x = 0$. From this it is easy to deduce that $x = 0,$ given that $x$ is nilpotent. (The OP gives the following very succinct approach: we may factor out $x$ in the above equation to get $x (n + $ terms involving positive powers of $x) = 0$, and the parenthetical factor is a non-zero divisor in $A$ (since $n$ is not a zero-divisor, and $x$ is nilpotent).)
-As an aside, note that
-the assumption that $A$ is torsion-free as an abelian group (this is what is really used; of course it follows directly from the assumption that $A$ is an order) is crucial. There are char. $p$ examples where the given map is not injective.
One of the simplest is obtained by taking $A = \mathbb F_p[x]/(x^p).$<|endoftext|>
-TITLE: Splitting of quaternion algebras
-QUESTION [8 upvotes]: A rational (definite) quaternion algebra is an algebra of the form
-$$ \mathcal{K} = \mathbb{Q} + \mathbb{Q}\alpha + \mathbb{Q}\beta + \mathbb{Q}\alpha \beta $$
-with $\alpha^2,\beta^2 \in \mathbb{Q}$, $\alpha^2 < 0$, $\beta^2 < 0$, and $\beta \alpha = - \alpha \beta$.
-For a place $v$ of $\mathbb{Q}$, we say that $\mathcal{K}$ splits at $v$ if $\mathcal{K} \otimes \mathbb{Q}_v \simeq M_2(\mathbb{Q}_v)$; otherwise, we say that it ramifies.
-This comes up because the endomorphism ring of an elliptic curve over a finite field may be a quaternion algebra.
-I have pretty much no intuition for these things. For instance, I'm just thinking about the rational quaternions, and I don't see how tensoring up with $\mathbb{Q}_v$ could introduce zero-divisors. Doesn't the existence of a multiplicative, positive-definite norm preclude this?
-I would very much appreciate some examples of quaternion algebras (preferably, examples that arise as endomorphism rings of elliptic curves, and an explanation as to why) which split/ramify at some places. I want to get a feel for what these things "look like."
-Thanks!
-
-REPLY [3 votes]: Let us look at the Hamiltonian quaternions $\mathcal{K}$ with $\alpha^2=\beta^2=-1$. We all know, by Sir William's reasoning, that this algebra ramifies at the infinite place $\mathbb{Q}_v=\mathbb{R}$. Let $p$ be an odd prime. The claim is that there exists a negative integer $-m$ that has a square root in $\mathcal{K}$ as well as in $\mathbb{Q}_p$.
-It is easy to find such integers, because several negative integers have square roots in $\mathcal{K}$. This is because
-$$(ai+bj+ck)^2=-a^2-b^2-c^2$$
-for any triple of rational integers $a,b,c$. It is known, for example, that all odd integers that are not congruent to $7$ modulo $8$ can be written as a sum of three squares. OTOH, any integer that is congruent to a quadratic residue modulo $p$ has a square root in $\mathbb{Q}_p$ by a Hensel lift of the modular square root. Because $p$ is odd, such integers cannot all lie in the residue class $7+8\mathbb{Z}$. The claim follows.
-This implies that $\mathbb{Q}_p$ contains a maximal subfield of the quaternion algebra, and that in turn implies that this place splits. Another way of seeing this is that when $z\in\mathbb{Q}_p$ satisfies $z^2=-m$, and simultaneously we have $m=a^2+b^2+c^2$ for some integers $a,b,c$, then the element
-$$
-z\cdot1+a\cdot i+b\cdot j+ c\cdot k\in \mathbb{Q}_p\otimes\mathcal{K}
-$$
-has zero norm, and hence cannot be invertible.
-Edit: See Keith Conrad's comment below for a simpler way of showing that Hamiltonian
-quaternions split at all odd primes $p$.
-The prime $p=2$ OTOH ramifies (class field theory also tells us that any division algebra must ramify at at least two places). We already suspect as much from the above calculation, because an odd integer $m$ has a square root in $\mathbb{Q}_2$ iff $m\equiv 1\pmod 8$ (so $\sqrt{-m}\in\mathbb{Q}_2$ only if $m\equiv 7\pmod 8$). But this time we should study the norm of an element
-$$
-q=a_0\cdot1+a_1\cdot i+a_2\cdot j+a_3\cdot k\in\mathbb{Q}_2\otimes\mathcal{K}.
-$$
-The norm is, of course,
-$$
-N(q)=a_0^2+a_1^2+a_2^2+a_3^2.
-$$
-I want to prove that this never vanishes. This will prove that $\mathcal{K}$ ramifies at $p=2$. Without loss of generality (scaling) we can assume that all the coefficients are $2$-adic integers, and that at least one of them is a $2$-adic unit.
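-(The case analysis promised next is small enough to confirm by brute force; here is an illustrative Python sketch of mine. Squares of $2$-adic integers modulo $8$ are determined by the residue class of the base, so running over the residues $0,\dots,7$ covers all cases.)
-
-    # If at least one a_i is a 2-adic unit (i.e. odd), then
-    # a0^2 + a1^2 + a2^2 + a3^2 is never divisible by 8.
-    from itertools import product
-
-    assert all(
-        (a0*a0 + a1*a1 + a2*a2 + a3*a3) % 8 != 0
-        for a0, a1, a2, a3 in product(range(8), repeat=4)
-        if any(a % 2 == 1 for a in (a0, a1, a2, a3))
-    )
-    print("no counterexample: such sums of four squares are never 0 mod 8")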
An easy case-by-case analysis then shows that $N(q)$ is not divisible by $8$. Basically this follows from the fact that the squares of all the odd integers are congruent to $1\pmod 8$.<|endoftext|> -TITLE: Riemann zeta sums and harmonic numbers -QUESTION [5 upvotes]: Given the nth harmonic number of order s, -$$H_n(s) =\sum_{m=1}^n \frac{1}{m^s}$$ -It can be empirically observed that, for $s > 2$, then, -$$\sum_{n=1}^\infty\Big[\zeta(s)-H_n(s)\Big] = \zeta(s-1)-\zeta(s)$$ -Can anyone prove this is true? - -REPLY [4 votes]: $$\zeta(s) - H_n(s) = \sum_{m=n+1}^{\infty} \dfrac1{m^s}$$ -Hence, $$\begin{align} -\sum_{n=1}^{\infty} (\zeta(s) - H_n(s)) & = \sum_{n=1}^{\infty} \sum_{m=n+1}^{\infty} \dfrac1{m^s}\\& = \sum_{m=2}^{\infty} \sum_{n=1}^{m-1} \dfrac1{m^s} \text{ (Changing the order of summation)}\\& = \sum_{m=2}^{\infty} \left( \dfrac1{m^{s-1}} - \dfrac1{m^s}\right)\\ -& = \sum_{m=1}^{\infty} \left( \dfrac1{m^{s-1}} - \dfrac1{m^s}\right)\\ -& = \zeta(s-1) - \zeta(s) -\end{align} -$$<|endoftext|> -TITLE: What Topology does a Straight Line in the Plane Inherit as a Subspace of $\mathbb{R_l} \times \mathbb{R}$ and of $\mathbb{R_l} \times \mathbb{R_l}$ -QUESTION [19 upvotes]: Given a straight line in the plane, what topology does this straight line inherit as a subspace of $\mathbb{R_l} \times \mathbb{R}$ and as a subspace of $\mathbb{R_l} \times \mathbb{R_l}$, where $\mathbb{R_l}$ is the lower limit topology? -So trying to figure this out definitely made my brain hurt. I believe that as a subspace of $\mathbb{R_l} \times \mathbb{R}$, all non-vertical straight lines just inherit the lower limit topology $\mathbb{R_l}$, while vertical lines inherit the standard topology on $\mathbb{R}$. As for $\mathbb{R_l} \times \mathbb{R_l}$, as far as I can tell, the only difference is that now all straight lines, including vertical ones, inherit the lower limit topology. -My reasoning was basically that for $\mathbb{R_l} \times \mathbb{R}$, open sets on the line all have an initial left end point, since the x-coordinate of this left end point is always captured by the open sets [a,b) in $\mathbb{R_l}$, and this in turn drags the y-coordinate of the initial left end point along for the ride (so to speak). This is true in all but the vertical line case where there is no shift in the horizontal direction and thus the topology is inherited strictly from $\mathbb{R}$. -As for $\mathbb{R_l} \times \mathbb{R_l}$ it's basically the same argument except now the vertical lines inherit from $\mathbb{R_l}$ as well. -Can someone let me know whether I've reasoned correctly? Thanks. - -REPLY [4 votes]: I wish to respond to @lomber's question to Brian M. Scott's answer, however I don't have enough Math Exchange points to do so directly. With that, I answer it here: -Notice that, if $\langle x,y\rangle\in L $, then -$$\{\langle x,y\rangle\}\stackrel{(1)}{=}L\cap([x,x+1)\times[y,y+1))$$ is an open set in the subspace topology on $ L $, inherited by $ \mathbb{R}_\ell^2 $. This tells us that for every element $ \langle x,y\rangle\in L $, the set $ \{\langle x,y\rangle\} $ is open. Thus, if $ U $ is any subset of $ L $, then $$U=\bigcup_{\langle x,y\rangle\in U}\{\langle x,y\rangle\}$$ is an open set, being a union of open sets. This informs us every subset of $ L $ is open. This is the definition of the discrete topology. -To justify equation (1) (if that was the difficult part), notice if $ L $ has a negative slope and $ \langle x,y\rangle\in L $ (i.e. 
$y=mx+b$ with $m<0$), then $v>x$ implies $$mv+b<mx+b=y,$$ so a point of $L$ whose first coordinate lies in $(x,x+1)$ has second coordinate below $y$, hence outside $[y,y+1)$. Thus $\langle x,y\rangle$ is the only point of $L$ in $[x,x+1)\times[y,y+1)$, which justifies equation $(1)$.<|endoftext|>
-TITLE: Model of spread of a rumor
-QUESTION [5 upvotes]: From Stewart 7e pg 614 # 9
-"One model for the spread of a rumor is that the rate of spread is proportional to the product of the fraction y of the population who have heard the rumor and the fraction who have not heard the rumor.
-a) Write a differential equation that is satisfied by y.
-b) Solve the differential equation.
-c) A small town has 1000 inhabitants. At 8 am 80 people have heard a rumor. By noon half the town has heard it. At what time will 90 percent of the population have heard the rumor?
-"
-The wording of this is very ambiguous to me and I can't really make sense of it.
-They mention a product, so I know that something is being multiplied and that y is a fraction which belongs to the population who have heard it, so I think that "have not heard" is a constant, and that y is a fraction that represents who have. I tried to set this up and it is the wrong answer. I am not sure what they want from that; the English usage is too ambiguous to make sense of it. The complete lack of punctuation is what really does it.
-
-REPLY [5 votes]: A start: Let $y=y(t)$ be the fraction who have heard by time $t$. Then the fraction who have not is $1-y$. The rate of change of $y$, we are told, is proportional to the product $y(1-y)$. Our differential equation is therefore
-$$\frac{dy}{dt}=ky(1-y).$$
-This is a special case of the logistic equation, which you know how to solve.
-It is convenient to let $t=0$ at $8\colon00$. So $y(0)=\frac{80}{1000}$. We are told that $y(4)=\frac{1}{2}$. These two items are enough to tell us everything about the equation, including the constant $k$. Some algebraic manipulation will be needed.
-Now that you have the equation for $y(t)$ in terms of $t$, you can find the $t$ such that $y(t)=0.9$. Note that this $t$ is the time elapsed since $8\colon00$ AM. You will need to give the answer in clock terms.<|endoftext|>
-TITLE: Continuously differentiable with constraint on gradient implies the function is convex
-QUESTION [5 upvotes]: I am not sure what the relevant theorems for this problem are. I have been searching through Rudin for some hints, but I have come up short. This is an example question for an exam, so not homework.
-Can anyone point me in the right direction? Thanks.
-A function $f: \mathbb{R}^n \to \mathbb{R}$ is called convex if $f$ satisfies
-$$ f(\alpha x + (1 - \alpha)y) \le \alpha f(x) + (1 - \alpha)f(y) \quad \forall x, y \in \mathbb{R}^n,\ 0 \le \alpha \le 1.$$
-Assume that $f$ is continuously differentiable and that for some constant $c > 0$, the gradient satisfies
-$$(\nabla f(x) - \nabla f(y)) \cdot (x - y) \ge c(x - y) \cdot (x - y), \quad \forall x, y \in \mathbb{R}^n,$$
-where $\cdot$ denotes the dot product. Show that $f$ is convex.
-
-REPLY [2 votes]: Let $x,y\in\mathbb R^n$ and let $\varphi:[0,1]\rightarrow\mathbb R$ be defined by $\delta\mapsto f(x+\delta(y-x))$.
-Check that $\varphi'(\delta)=(\nabla f(x+\delta(y-x)))\cdot(y-x)$.
-So for all $\delta\in]0,1]$, $\varphi'(\delta)-\varphi'(0)=\frac{1}{\delta}(\nabla f(x+\delta(y-x))-\nabla f(x))\cdot(\delta(y-x))\ge\delta c \|y-x\|^2$ by your hypothesis.
-Notice that the inequality is also true for $\delta=0$.
-By integration:
-$$\varphi(1)-\varphi(0)\ge\int_0^1\left(\varphi'(0)+\delta c \|y-x\|^2\right)d\delta=\varphi'(0)+\frac{c}{2}\|y-x\|^2\ge\varphi'(0)$$
-And so $$f(y)\ge f(x)+\nabla f(x)\cdot(y-x)$$
-So for all $x,y\in\mathbb R^n$, $f(y)\ge f(x)+\nabla f(x)\cdot(y-x)$.
-Write this last inequality for $(x+\delta(y-x),y)$ and for $(x+\delta(y-x),x)$ with $\delta\in[0,1]$ and $x,y\in\mathbb R^n$, so
-$$f(y)\ge f(x+\delta(y-x))+(1-\delta)\nabla f(x+\delta(y-x))\cdot(y-x)$$
-$$f(x)\ge f(x+\delta(y-x))-\delta\nabla f(x+\delta(y-x))\cdot(y-x)$$
-Multiply the first line by $\delta$, the second line by $1-\delta$, then sum, and you'll get:
-$$\delta f(y)+(1-\delta)f(x)\ge f(x+\delta(y-x))=f((1-\delta)x+\delta y)$$
-So $f$ is convex.<|endoftext|>
-TITLE: Algebraic Topology Challenge: Homology of an Infinite Wedge of Spheres
-QUESTION [65 upvotes]: So the following comes to me from an old algebraic topology final that got the best of me. I wasn't able to prove it due to a lack of technical confidence, and my topology has only deteriorated since then. But, I'm hoping maybe someone can figure out the proof as I've always been interested in seeing it all at once!
-
-Let $E_\infty$ denote the 2-D analogue of the Hawaiian Earring, i.e.
-$$E_\infty = \bigcup_{n=1}^\infty \{(x,y,z) \in \mathbb{R}^3 | \hspace{2mm} (x-1/n)^2 + y^2 + z^2 = 1/n^2\}.$$
-The object of the exercise is to show that even though $E_\infty$ is 2-dimensional, $H_3(E_\infty) \neq 0$, which I find interesting even though I know it's not a CW-complex. Parts of the proof were helped along by my professor to serve as a road map for the solution, but sadly I'm still lost in the driveway filling in some of the remaining bits:
-Let $h: S^3 \longrightarrow S^2$ denote the Hopf map, i.e. the attaching map for the 4-cell in $\mathbb{C}P^2$. One can readily observe that there is a continuous map $\tilde{h}: S^3 \longrightarrow E_\infty$ so that the projection of $\tilde{h}$ to any $S^2$ in $E_\infty$ is homotopic to the Hopf map. Let $C_\tilde{h}$ denote the mapping cone of $\tilde{h}$. Then we have a mapping cone that looks something like a LOT of $\mathbb{C}P^2$'s.
-(1) First we must prove that $H^2(C_\tilde{h})$ contains a subgroup $\mathbb{Z}<\zeta_1, \zeta_2, ...>$ and $H^4(C_\tilde{h}) \cong \mathbb{Z}<\eta>$ where
-$$\zeta_i \cup \zeta_i = \eta \text{ and } \zeta_i \cup \zeta_j = 0 \text{ for } i \neq j.$$
-(I suspect that this comes in part from the cup product structure on $\mathbb{C}P^2$).
-Now let $[S^3]$ denote the fundamental class of $S^3$. Assumption: Suppose that $\tilde{h}_* = 0 \in H_3(E_\infty)$. Under this assumption, there is a finite simplicial complex $X$ with boundary $S^3$ so that $\tilde{h}$ extends to a map $k: X \longrightarrow E_\infty$. Let $Y = X \cup \mathbb{D}^4$ where $\mathbb{D}^4$ is glued to $S^3$ in the obvious way. Then extending $k$, we have another map $l: Y \longrightarrow C_\tilde{h}$ sending $\mathbb{D}^4$ to the cone $S^3 \times [0,1]/ S^3 \times 1$.
-(2) Here it must be proven that $l^*(\eta)$ is a nontrivial element of $H^4(Y)$.
-(I truly do not see how to do this, but it seems like it would be true [here "seems" doesn't really mean anything]).
-(3) And now it needs to be shown that the infinitely many $l^*(\zeta_i)$ are all linearly independent.
-(This would seem to follow from naturality of the cup product in some way).
-If I could prove these things, it would only remain to observe that $Y$ came from gluing $\mathbb{D}^4$ to a finite simplicial complex, and therefore it is itself a finite simplicial complex. Hence $H^2(Y)$ is finitely generated, so we arrive at the desired contradiction, which is awesome since we have found a nontrivial element in the third homology group of a 2-D object! Awesome!
-
-Anyways, if anyone could complete this proof in its entirety, I would be supremely grateful. I assure you that I will upvote it a hundred times over! Even though the last 99 won't really do much. Also sorry that there are sort of several questions embedded in one. I thought it would be justified as they lie in the same vein of the single proof.
-
-REPLY [3 votes]: $\newcommand{\Ch}{ C_{\tilde h}}
-\newcommand{\CP}{\mathbb{CP}^2}$
-We will make use of natural inclusions $\alpha_i\colon S^2\hookrightarrow C_{\tilde h}$ and natural projections $\pi_j\colon \Ch\to \CP$. The composition $\pi_j\circ \alpha_j$ is the standard inclusion of $S^2$ in $\mathbb{CP}^2$ and the compositions $\pi_j\circ\alpha_i$ are the constant map to the base point when $i\neq j$. The composition on the level of $H^2$ is the identity, giving that $\pi_i^*(\zeta)=:\zeta_i$ are all nonzero. (We let $\zeta$ be a generator of $H^2(\CP)$.) In fact, we want to show they are linearly independent. This follows from the fact that the homology-cohomology pairing satisfies $\langle \pi_i^*(\zeta),(\alpha_j)_*([S^2])\rangle =\delta_{ij}$.
-To continue, I'm going to cheat slightly and assume that $H^4(E_\infty)=H^5(E_\infty)=0$, since otherwise we would have a higher dimensional class, which would be equally surprising as a $3$ dimensional one.
-In that case, by the long exact sequence of the pair $$\mathbb Z\cong H^4(\Ch,E_\infty)\cong H^4(\Ch).$$ Note that the generator of $H^4(\Ch,E_\infty)$ is dual to the $4$-cell, as is the image of $\zeta^2\in H^4(\CP)$. Thus if we let $\eta$ denote the generator of $H^4(\Ch)$ we have $\zeta_i^2=\eta$ for all $i$. Furthermore $\zeta_i\zeta_j=0$ for $i\neq j$ because they are disjointly supported.
-The image $\ell^*(\eta)$ is nonzero, since $Y$ by construction carries a $4$-cycle on which $\ell^*(\eta)$ evaluates nontrivially.
-To see that the $\ell^*(\zeta_i)$ are linearly independent, suppose that $\sum_{i\in I} n_i\ell^*(\zeta_i)=0$. Now take the cup product of this element with itself! We have $$\sum_{i\in I}\sum_{j\in I} n_in_j\ell^*(\zeta_i\zeta_j)=\sum_{i\in I} n_i^2\ell^*(\eta)=0,$$ implying $\sum n_i^2=0$ and so $n_i=0$ for all $i$. Thus the $\ell^*(\zeta_i)$ are infinitely many linearly independent classes, which contradicts the finite generation of $H^2(Y)$, and we are done!<|endoftext|>
-TITLE: Topology - The arbitrary union axiom
-QUESTION [5 upvotes]: So, the common answer to why we need the concept of topology is that we need it to talk about things like limits of infinite sequences and continuity. But, when we define the axioms of topology, we have an axiom which says that an arbitrary (countable or uncountable) union of open sets exists (or is this not true?) and is open. But doesn't the notion of arbitrary union in itself hide an infinite sequence, namely the sequence of partial unions of the open sets? Isn't the union of infinitely many open sets equal to the "limit" of the partial unions? Is there an alternate interpretation of arbitrary unions which doesn't require us to define a topology on the power set of the set of interest first?
-
-REPLY [6 votes]: No, no sequence or limit process is involved. This is a purely set-theoretic concept. If you have a collection of sets $\mathcal{F}$ in a universe of discourse $\Omega$, you define
-$$\bigcup \mathcal{F} = \{x\in\Omega| \exists F\in \mathcal{F}\; {\rm with}\; x\in F\}.$$
-It's just an existential quantifier. No order or structure is involved.<|endoftext|>
-TITLE: Game theory problem: Poker with bluffing
-QUESTION [10 upvotes]: Hope someone can help me with this one.
The problem I am talking about can be found in the book titled "Game Theory Evolving: A Problem-Centered Introduction to Modeling Strategic Interaction" by Herbert Gintis. The problem is as follows:
-Ollie and Stan decide to play the following game of poker. Each has a deck consisting of three cards, labeled H (high), M (medium), and L (low). Each puts 1 dollar in the pot, chooses a card randomly from his deck, and does not show the card to his friend. Ollie (player 1) either stays, leaving the pot unchanged, or raises, adding 1 dollar to the pot. Stan simultaneously makes the same decision. If both raise or both stay, the player with the higher card wins the pot (which contains 2 dollars if they stayed and 4 dollars if they raised), and if they tie, they just take their money back. If Ollie raises and Stan stays, then Ollie gets the 3 dollar pot. However, if Stan raises and Ollie stays, Ollie gets another chance. He can either drop, in which case Stan wins the 3 dollar pot (only 1 dollar of which is Ollie's), or he can call, adding 1 dollar to the pot. Then, as before, the player with the higher card wins the pot, and with the same card, they take their money back. A game tree for poker with bluffing is depicted here:
-
-(the "?" in the figure means that the payoff depends on who has the higher card).
-The questions are:
-a. Show that Ollie has 64 pure strategies and Stan has 8 pure strategies.
-b. Find the normal form game. Note that although poker with bluffing is a lot simpler than real poker, the normal form is nevertheless a 64 × 8 matrix! If you know computer programming, solving this is not a hard task, however.
-c. Show that when you eliminate dominated strategies there are only nine pure strategies for Ollie, and seven for Stan.
-d. Show that Ollie always raises with H or M on the second round, and drops with L.
-e. Suppose Stan sees Ollie's first move before deciding to stay or raise (i.e., the two nodes where Stan moves are separate information sets). Now find the normal form game. Notice that this matrix is 64 x 64, so calculating this by hand is quite prohibitive.
-f. Eliminate dominated strategies, and show that Ollie has only ten strategies left, while Stan has twelve.
-g. Show that Ollie always calls with a high card on the second round and Stan always raises with a high card.
-
-The first is quite obvious, but the problem is the second question. So I am supposed to write a 64 x 8 matrix, which I would prefer to do programmatically (on the computer - this is not a problem). The problem is I do not know how to compute this matrix. What do I put in every matrix cell? How do I programmatically calculate these values, so that I can then, in the third question, eliminate dominated strategies?
-
-EDITED 28.6.: Thanks everybody for your answers, but I am still having problems. This is the way I wanted to solve this problem, but something isn't adding up. I wrote a JavaScript program that should solve this problem, but I don't know what I am missing. The program can be found here: http://rok.pnv.si/poker/en.html and it runs in your browser, so you don't have to install it or anything. The only thing is that every time you load the page, your computer calculates the steps in solving the game, so loading of the page can take some time, depending on your computer. The game can be found here, with all comments and the source code: http://rok.pnv.si/poker/en.html. Below I provide a description of what I am doing. The same description can be found at the above link.
-The way I start is I write down all the possible pure strategies if I don't account for nature's choice (which cards will be dealt) (this is so that it's more obvious). For Ollie these are (RC), (RD), (SC), (SD); for Stan these are (R) or (S). If I account for nature, then I get 64 pure strategies for Ollie and 8 for Stan. For Ollie they go something like this: (RC, RC, RC), (RC, RC, RD), (RC, RC, SC), (RC, RC, SD),... and for Stan something like this: (R, R, R), (R, R, S), (R, S, R),... For Ollie the first strategy (RC, RC, RC) would mean: if he gets a low card, raise on the first round and call on the second, and the same for medium and high cards. For Stan it is the same (the only difference is that Stan does not have a second move).
-So with these pure strategies I know the play for every cell in the normal form game matrix, so I can calculate the payoff values corresponding to these cells: for every cell in the normal form game matrix we calculate the payoff for Ollie and Stan. The algorithm for calculating the (i,j)-th element of the final normal form game matrix is as follows:
-First we construct two loops with which we loop through all the rows and all the columns, so that we get to the (i,j)-th element of the final matrix. At this point Ollie is using strategy i and Stan strategy j in the game matrix. Then we look at the payoff for each player for the (i,j)-th strategies relative to the cards they were dealt (we go through every combination (9) of cards nature could have dealt). We multiply each payoff by the probability that nature picked those particular cards (which is always 1/9) and sum together all of the payoffs that arise for different card combinations. This is the payoff that goes in each cell. We do this for Ollie and for Stan. The example can be seen on the above link.
-Now I have the normal form game, so I continue with eliminating dominated strategies. First I try this by searching for saddle points with the minimax method (I can do this because this is a zero-sum game) but I get strange saddle points (they are all in one column and there are too few to get 9 pure strategies for Ollie and 7 for Stan if I eliminate those strategies that don't have a saddle point). The example can be seen on the above link.
-The second way I try to eliminate dominated strategies is by iterated elimination of strictly dominated strategies (if I include weakly dominated strategies I get a single cell for the end result - which is one of the saddle points found with the minimax method). The way I do this is: we go through every row and compare it with every other row. We compare the first values in each cell of the row with the first value from the corresponding cell of the second row. If we find that every value in the first row is bigger than the value in the second row, we declare the strategy in the second row strictly dominated and delete that row. After deletion we move on to the columns (we forget about comparing rows further in this round). We look at columns and repeat the previous comparison, this time with columns and second values in each cell. This constitutes one round. If there were any strategies that were eliminated, we repeat the whole process in round 2 and so on until the final round, when we can't eliminate any more strategies. That is then the matrix with strictly dominated strategies removed. The problem here is that there remain 42 pure strategies for Ollie and 7 for Stan. That is way too much for Ollie, as it should be only 9.
-What am I doing wrong? Calculating the payoffs wrong?
Eliminating the strategies wrong (does the order of elimination matter? - as far as I know it does not in zero-sum games)? I've been at this for quite some time now and I can't seem to figure it out. Any help would be appreciated. - -REPLY [2 votes]: Stan's 8 pure strategies consist of an action (stay or raise) for each of the 3 cards. One of them is raise on H, stay on M or L. One of Ollie's is raise on all cards. In the cell of the matrix at the intersection of these goes the average payoff if both players follow this strategy. So you go over the 9 possible distributions of cards, calculate the payoff to Stan, average them, and put it in the matrix. -For c, you look for a pair of Stan's strategies where Stan comes out better regardless of what Ollie does. You are told that there will be exactly one pair. You can then eliminate the poorer strategy from the matrix. You are also told that you will be able to eliminate 55 of Ollie's strategies. For d, you note that the remaining strategies for Ollie all satisfy this.<|endoftext|> -TITLE: A characterization of invertible fractional ideals of an integral domain -QUESTION [9 upvotes]: Let $A$ be an integral domain, $K$ its field of fractions. -Let $M$ be a fractional ideal of $A$. -I'd like to prove that $M$ is invertible if and only if $MA_P$ is a principal fractional ideal of $A_P$ for every maximal ideal $P$ of $A$. -EDIT -As Georges Elencwajg pointed out, it seems that we need to assume $M$ is finitely generated to prove if part. - -REPLY [9 votes]: I'll prove the title assertions (Proposition 1 and Proposition 2) using the following lemmas. -Lemma 1 -Let $A$ be an integral domain. -Let $M$ be an invertible fractional ideal of A. -Then $M$ is finitely generated as an $A$-module. -Proof: -There exists a fractional ideal $N$ of $A$ such that $MN = A$. -Hence there exist $x_i \in M, y_i \in N, i = 1, ..., n$ such that $1 = \sum x_iy_i$. -Hence, for every $x \in M$, $x = \sum x_i(xy_i)$. -Since $xy_i \in A$, $M$ is generated by $x_1, ..., x_n$ over $A$. -QED -Lemma 2 -Let $A$ be an integral domain. -Let $M$ be an invertible fractional ideal of $A$. -Then $M$ is projective as an $A$-module. -Proof: -There exists a fractional ideal $N$ of $A$ such that $MN = A$. -Hence there exist $x_i \in M, y_i \in N, i = 1, ..., n$ such that $1 = \sum x_iy_i$. -For each i, define A-homomorphism $f_i: M \rightarrow A$ by $f_i(x) = y_ix$. -Since $x = \sum x_i(y_ix)$ for every $x \in M$, $x = \sum f_i(x)x_i$. -As shown in the proof of Lemma 1, $M$ is generated by $x_1, ..., x_n$ over $A$. -Let $L$ be a free $A$-module with basis $e_1, ..., e_n$. -Define $A$-homomorphism $p: L \rightarrow M$ by $p(e_i) = x_i$ for each $i$. -Define $A$-homomorphism $s: M \rightarrow L$ by $s(x) = \sum f_i(x)e_i$. -Let $K = Ker(p)$. -We get an exact sequence: -$0 \rightarrow K \rightarrow L \rightarrow M \rightarrow 0$. -Since $ps = 1_M$, this sequence splits. -Hence $M$ is projective. -QED -Lemma 3 -Let $A$ be a local ring. -Let $M$ be a finitely generated projective $A$-module. -Then $M$ is a free $A$-module of finite rank. -Proof: -Let $\mathfrak{m}$ be the maximal ideal of $A$. -Let $k = A/\mathfrak{m}$. -Since $M \otimes k$ is a free k-module of finite rank, there exists a free $A$-module $L$ of finite rank and -a surjective homomorphism $f: L \rightarrow M$ such that $f \otimes 1_k: L \otimes k \rightarrow M \otimes k$ is an isomorphism. -Let $K = Ker(f)$. 
-We get an exact sequence:
-$0 \rightarrow K \rightarrow L \rightarrow M \rightarrow 0$
-Since $M$ is projective, this sequence splits.
-Hence the following sequence is exact.
-$0 \rightarrow K \otimes k \rightarrow L \otimes k \rightarrow M \otimes k \rightarrow 0$
-Since $f \otimes 1_k: L \otimes k \rightarrow M \otimes k$ is an isomorphism, $K \otimes k = 0$.
-Since $K$ is a direct summand of $L$, $K$ is a finitely generated $A$-module.
-Hence $K = 0$ by Nakayama's lemma.
-QED
-Lemma 4
-Let $A$ and $B$ be commutative rings.
-Let $f: A \rightarrow B$ be a homomorphism.
-Let $M$ be a projective $A$-module.
-Then $M \otimes_A B$ is projective as a $B$-module.
-Proof:
-Let $N$ be a $B$-module.
-$N$ can be regarded as an $A$-module via $f$.
-$Hom_B(M \otimes_A B, N)$ is canonically isomorphic to $Hom_A(M, N)$.
-This isomorphism is functorial in $N$.
-Since $Hom_A(M, -)$ is an exact functor, $Hom_B(M \otimes_A B, -)$ is exact.
-Hence $M \otimes_A B$ is projective as a $B$-module.
-QED
-Lemma 5
-Let $A$ be an integral domain.
-Let $K$ be the field of fractions of $A$.
-Let $M$ and $N$ be $A$-submodules of $K$.
-Let $MN$ be the $A$-submodule of $K$ generated by the set {$xy; x \in M, y \in N$}.
-Let $M^{-1} = \{x \in K; xM \subset A\}$.
-Suppose $MN = A$.
-Then $N = M^{-1}$.
-Proof:
-Since $N \subset M^{-1}$, $MN \subset MM^{-1} \subset A$.
-Since $MN = A$, $MM^{-1} = A$.
-Multiplying both sides of $MN = A$ by $M^{-1}$, we get $M^{-1}MN = M^{-1}$.
-Hence $N = M^{-1}$.
-QED
-Lemma 6
-Let $A$ be an integral domain.
-Let $K$ be the field of fractions of $A$.
-Let $M$ be a finitely generated $A$-submodule of $K$.
-Let $M^{-1} = \{x \in K; xM \subset A\}$.
-Let $P$ be a prime ideal of $A$.
-Let $(M_P)^{-1} = \{x \in K; xM_P \subset A_P\}$.
-Then $(M^{-1})_P = (M_P)^{-1}$.
-Proof:
-Let $x \in M^{-1}$. Since $xM \subset A$, $xM_P \subset A_P$.
-Hence $x \in (M_P)^{-1}$.
-Hence $M^{-1} \subset (M_P)^{-1}$.
-Hence $(M^{-1})_P \subset (M_P)^{-1}$.
-Let $x_1, ..., x_n$ be generators of $M$ as an $A$-module.
-Let $y \in (M_P)^{-1}$.
-Then $yx_i \in A_P$ for $i = 1, ..., n$.
-Hence there exists $s \in A - P$ such that $syx_i \in A$ for $i = 1, ..., n$.
-Since $sy \in M^{-1}$, $y \in (M^{-1})_P$.
-Hence $(M_P)^{-1} \subset (M^{-1})_P$.
-QED
-Proposition 1
-Let $A$ be an integral domain, $K$ its field of fractions.
-Let $M$ be an invertible fractional ideal of $A$.
-Let $P$ be a prime ideal of $A$.
-Then $M_P$ is a principal fractional ideal.
-Proof:
-By Lemma 1, $M$ is finitely generated as an $A$-module.
-Hence $M_P$ is finitely generated as an $A_P$-module.
-By Lemma 2, $M$ is projective as an $A$-module.
-Hence by Lemma 4, $M_P$ is projective as an $A_P$-module.
-Therefore, by Lemma 3, $M_P$ is a free $A_P$-module of finite rank.
-Since $M_P \neq 0$ and an $A_P$-basis of $M_P$ is linearly independent over $K$, $M_P$ is a free $A_P$-module of rank 1.
-Hence $M_P$ is a principal fractional ideal.
-QED
-Proposition 2
-Let $A$ be an integral domain, $K$ its field of fractions.
-Let $M$ be a finitely generated fractional ideal of $A$.
-Suppose $M_P$ is a principal fractional ideal of $A_P$ for every maximal ideal $P$ of $A$.
-Then $M$ is invertible.
-Proof:
-Let $M^{-1} = \{x \in K; xM \subset A\}$.
-Let $P$ be a maximal ideal of $A$.
-Then by Lemma 6, $(M^{-1})_P = (M_P)^{-1}$.
-Hence $(MM^{-1})_P = (M_P)(M^{-1})_P = (M_P)(M_P)^{-1}$.
-Since $M_P$ is principal, $M_P$ is invertible.
-Hence by Lemma 5, $(M_P)(M_P)^{-1} = A_P$.
-Hence $(MM^{-1})_P = A_P$.
-Hence by Lemma 4 of this question, $MM^{-1} = A$.
-QED<|endoftext|> -TITLE: Where do people learn about things like caustics, evolutes, inverse curves, etc.? -QUESTION [10 upvotes]: When I look up a curve on Wikipedia, I'll often see a lot of properties along the lines of "you can generate curve X by rolling a circle along curve Y and tracing the trajectory of a single point," or other things to similar effect. Of course the individual calculations are elementary, but are these just scattered facts or do these kinds of results belong to a useful general theory? So far as I know, these classical geometric concepts aren't taught in schools anymore. -Is there a good way to learn about these properties systematically? Or, conversely, a good reason why they are only of historical interest? - -REPLY [3 votes]: Apart from the beautiful references that Joseph has already given, I want to point out the beautiful book The Advanced Geometry of Plane Curves and their Applications by Cornelis Zwikker. In this book, the machinery of complex numbers is used to easily deduce the properties of curves, as well as derived curves like the evolute and the pedal. -Joseph has already mentioned Lockwood's fabulous book, but there is one thing I wish to emphasize about it: it is meant to be an interactive book, where you should have a number of drawing implements (or a computer if you're more inclined to draw with that tool instead of pencil and paper) on hand to fully appreciate the book. -Two more books that I have found to be nice are C.G. Gibson's Elementary Geometry of Algebraic Curves and Elementary Geometry of Differentiable Curves; the second book, for instance, has a neat discussion of the connections between envelopes, orthotomics and caustics. -Finally, if you are Francophone, you might wish to look at the wonderful Mathcurve site by Alain Esculier and others; the animated GIFs demonstrating properties of curves in that site are a sight to behold.<|endoftext|> -TITLE: Convergence of sets is same as pointwise convergence of their indicator functions -QUESTION [5 upvotes]: Please help me prove this: - -Let $A_1,A_2,\ldots$ be subsets of $\Omega$. Prove that $A_n\to A$ if and only if $I_{A_n}(\omega)\to I_A(\omega)$ for every $\omega\in\Omega$ (so that convergence of sets is the same as pointwise convergence of their indicator functions). -Note: $I_A(\omega)=1$ if $\omega\in A$, and $0$ if $\omega\notin A$. Use in the proof that - $$\operatorname{lim\;inf}\limits_n\; x_n=\bigvee_{k=1}^\infty\bigwedge_{n=k}^\infty x_n\quad\text{ -and }\quad\operatorname{lim\;sup}\limits_n\; x_n=\bigwedge_{k=1}^\infty\bigvee_{n=k}^\infty x_n.$$ - -Thank you very much! - -REPLY [8 votes]: Let $\sigma=\langle A_n:n\in\Bbb N\rangle$ be a sequence of subsets of some set $\Omega$. A point $\omega\in\Omega$ is eventually in $\sigma$ if there is an $n_0\in\Bbb N$ such that $\omega\in A_n$ for all $n\ge n_0$, i.e., if $\omega$ is in each member of a ‘tail’ of the sequence. The point $\omega$ is frequently in $\sigma$ if for each $m\in\Bbb N$ there is an $n\ge m$ such that $\omega\in A_n$, i.e., if $\omega$ is in infinitely many members of the sequence. These terms provide an easy way to think and talk about the liminf and limsup of a sequence of sets: $\liminf_nA_n$ is the set of points of $\Omega$ that are eventually in $\sigma$, and $\limsup_nA_n$ is the set of points of $\Omega$ that are frequently in $\sigma$. This is quite easy to verify from the definitions. 
For example, $$\liminf_{n\in\Bbb N}A_n=\bigcup_{n\in\Bbb N}\bigcap_{k\ge n}A_k\;,\tag{1}$$ so $\omega\in\liminf_nA_n$ iff there is an $n\in\Bbb N$ such that $\omega\in\bigcap_{k\ge n}A_k$, which is the case iff $\omega\in A_k$ for each $k\ge n$: in short, $\omega\in\liminf_nA_n$ iff $\omega$ is eventually in $\sigma$. Similarly, $$\limsup_{n\in\Bbb N}A_n=\bigcap_{n\in\Bbb N}\bigcup_{k\ge n}A_k\;,\tag{2}$$ so $\omega\in\limsup_n A_n$ iff for each $n\in\Bbb N$, $\omega\in\bigcup_{k\ge n}A_k$, which is the case iff $\omega\in A_k$ for some $k\ge n$: $\omega\in\limsup_nA_n$ iff $\omega$ is frequently in $\sigma$.
-It's easy to check that $$\liminf_{n\in\Bbb N}I_{A_n}(\omega)=\bigvee_{n\in\Bbb N}\bigwedge_{k\ge n}I_{A_k}(\omega)\text{ for all }\omega\in\Omega$$ and $$\limsup_{n\in\Bbb N}I_{A_n}(\omega)=\bigwedge_{n\in\Bbb N}\bigvee_{k\ge n}I_{A_k}(\omega)\text{ for all }\omega\in\Omega$$ are simply restatements of $(1)$ and $(2)$ in terms of indicator functions. (E.g., $\omega$ is eventually in $\sigma$ iff $I_{A_n}(\omega)$ is eventually $1$.) Thus, the following statements are equivalent:
-
-$$\begin{align*}&\lim_{n\in\Bbb N}A_n\text{ exists}\tag{3}\\&\liminf_{n\in\Bbb N}A_n=\limsup_{n\in\Bbb N}A_n\tag{4}\\&\liminf_{n\in\Bbb N}I_{A_n}(\omega)=\limsup_{n\in\Bbb N}I_{A_n}(\omega)\text{ for all }\omega\in\Omega\tag{5}\end{align*}$$
-
-To finish the proof, you need only show that $(5)$ is equivalent to
-
-$$\lim_{n\in\Bbb N}I_{A_n}(\omega)\text{ exists for each }\omega\in\Omega\tag{6}$$
-
-and then show that the limit in $(6)$ is the indicator function of the limit in $(3)$.
-It's all just a matter of translating between two ways of saying the same thing: $\omega\in A$ iff $I_A(\omega)=1$, and $\omega\notin A$ iff $I_A(\omega)=0$.<|endoftext|>
-TITLE: How to determine if 2 points are on opposite sides of a line
-QUESTION [6 upvotes]: How can I determine whether the 2 points $(a_x, a_y)$ and $(b_x, b_y)$ are on opposite sides of the line $(x_1,y_1)\to(x_2,y_2)$?
-
-REPLY [9 votes]: Explicitly, they are on opposite sides iff
-$$((y_1-y_2)(a_x-x_1)+(x_2-x_1)(a_y-y_1))((y_1-y_2)(b_x-x_1)+(x_2-x_1)(b_y-y_1)) < 0.$$<|endoftext|>
-TITLE: If $f \circ g$ is invertible, is $(f \circ g)^{-1} = g^{-1} \circ f^{-1}$?
-QUESTION [5 upvotes]: If $f \circ g$ is invertible, is $(f \circ g)^{-1} = g^{-1} \circ f^{-1}$?
-If not can someone give me a counterexample?
-
-REPLY [5 votes]: Let $ f:\{0,1\} \rightarrow \{0\} $ be defined by
-$ f(0) = 0 = f(1) $
-and define $ g:\{0\} \rightarrow \{0\} $ with the obvious mapping $g(0) = 0$.
-Then $ f \circ g $ is defined from $ \{0\} $ to $ \{0\} $ with $ (f \circ g) (0) = 0 $ and $ (f \circ g)^{-1} (0) = 0 $,
-but $f$ is not invertible.<|endoftext|>
-TITLE: How many words can be formed from the letters of the word 'DAUGHTER' so that the vowels never come together?
-QUESTION [5 upvotes]: How many words can be formed from the letters of the word 'DAUGHTER' so that the vowels never come together?
-The answer is obviously $8!-6!\cdot3!$.
-My question is that if we ponder from a different perspective, that is taking $5$ consonants first and arranging them ($5!$ ways of doing that) and then placing the $3$ vowels in the $6$ places created due to the arrangement of consonants ($\frac{6!}{3!}$ ways to do that), the answer should be $5!\frac{6!}{3!}$.
-What is wrong with this?
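-(The discrepancy between the two counts can be made concrete by brute force. The Python sketch below is an editorial illustration, not part of the original question; $8!=40320$ permutations, so exhaustive enumeration is instant.)
-
-    from itertools import permutations
-
-    vowels = set("AUE")
-    perms = list(permutations("DAUGHTER"))
-
-    def all_together(p):
-        # the three vowels occupy three consecutive positions
-        pos = sorted(i for i, c in enumerate(p) if c in vowels)
-        return pos[2] - pos[0] == 2
-
-    def some_adjacent(p):
-        # at least two vowels are next to each other
-        return any(p[i] in vowels and p[i + 1] in vowels for i in range(7))
-
-    print(sum(not all_together(p) for p in perms))   # 36000 = 8! - 6!*3!
-    print(sum(not some_adjacent(p) for p in perms))  # 14400 = 5! * (6*5*4)
-
-The two conditions counted here are different, which is exactly the point of the answer that follows.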
-
-REPLY [9 votes]: Your first solution ($36000$) counts words in which A, E, U don't come all three together; your second solution ($14400$) counts those in which they are all three separate, which is a much stricter condition. This shows that it helps to pose your question more carefully: what exactly do you mean by "the vowels come together"?<|endoftext|>
-TITLE: Must a function that 'preserves r.e.-ness' be computable itself?
-QUESTION [11 upvotes]: Does there exist a non-recursive function (say, from naturals to naturals) such that the inverse of every r.e. set is r.e.?
-If yes, how to construct one?
-If no, how to prove that?
-Any References?
-
-REPLY [2 votes]: Just to clarify: I assumed the question was asking about bijective functions (since they are more interesting), so I showed that the answer is yes in that case as well.
-If you just want any old function you can simply use Galvin-Prikry style forcing. Define a finite initial segment of the function and keep an infinite set $S$ of possible future values for elements in the range. At stage $i$ you subtract $W_i$ from $S$ unless that would leave it finite, in which case you shrink $S$ to be a subset of $W_i$. Really it's the same technique, but I like it better.
-
-As for the bijective case the answer is still yes.
-Let $\mathscr{E}$ denote the structure with domain consisting of c.e. (aka r.e.) sets under the relation $\subseteq$. An automorphism of $\mathscr{E}$ is just a bijective function on c.e. sets respecting $\subseteq$.
-It is a result of (?) (perhaps Soare, as it appears in his textbook) that any automorphism of $\mathscr{E}$ is induced by a permutation of $\omega$ (i.e. a bijection of $\omega$ with itself).
-There are several different ways to see that not all automorphisms of $\mathscr{E}$ are induced by a computable permutation. Lachlan's argument is my favorite because it works even on $\mathscr{E}$ modulo finite differences.
-Given that all computable permutations give rise to automorphisms, one may fix a co-c.e. cohesive set $C$, e.g., the complement of a maximal set, and note that any permutation $p_C$ of $C$, extended by the identity on the complement of $C$, induces an injective map $p(W)= (W \cap \bar{C}) \cup p_C(W \cap C)$. Since $C$ is cohesive, either $W \cap C$ is finite, so $p(W)$ is a c.e. set union a finite set and hence c.e., or $W$ almost covers $C$, so $C - W$ is finite. Hence $p_C(W \cap C)$ only differs finitely from $C$ and thus only differs finitely from $W \cap C$. Therefore $p(W)$ differs only finitely from $W$, so $p(W)$ is c.e. Hence $p$ is an automorphism of the c.e. sets (since it is clearly a surjective function on all sets).
-
-Lachlan's proof relies on defining computable permutations on larger and larger computable subsets of $\omega$. We start with $R_0=\emptyset$ and build a sequence $R_0 \subseteq R_1 \subseteq \ldots$ of computable sets such that $\omega - R_i$ is never finite but $\bigcup R_i = \omega$. On each $R_{i+1} - R_i$ we define a permutation and let the final permutation be the union of these partial permutations. The key idea is similar to what we used above: if $W \cap C$ is finite or $C - W$ is finite, then however we define our permutation on $C$ it can't interfere with mapping $W$ to another c.e. set.
-At stage $i+1$ choose some computable $R_{i+1}$, not almost equal to $\omega$, infinitely extending $R_{i}$ and containing $W_i$ or $\bar{W_i}$. Then select the first c.e. set $W$ not yet dealt with such that $W \cap (R_{i+1}-R_i)$ is infinite and $(R_{i+1}-R_i) - W$ is infinite.
Let $C$ be an infinite computable subset of $W \cap (R_{i+1}-R_i)$. Let $p_0$ be the identity on $(R_{i+1}-R_i)$. Let $p_1$ be the permutation on $(R_{i+1}-R_i)$ exchanging $C$ and $\bar{C} \cap (R_{i+1}-R_i)$. -Clearly both $p_0$ and $p_1$ are computable, and as $\bar{C} \cap (R_{i+1}-R_i)$ contains infinitely many elements outside of $W$, it follows that $p_1(W)$ is not almost equal to $W$. Since $R_{i+1}$ contains either $W_i$ or $\bar{W_i}$, every c.e. set has its image determined by a computable permutation giving rise to an automorphism. However, any infinite string of $0$'s and $1$'s gives rise to a unique automorphism. -Importantly, since $p_0$ and $p_1$ always map c.e. sets to sets that differ by infinitely much, this shows the stronger result that there are $2^\omega$ many automorphisms differing non-trivially.<|endoftext|> -TITLE: $\frac{(a^2+b^2)}{(1+ab)}$ must be a perfect square if it is an integer -QUESTION [10 upvotes]: Possible Duplicate: -Alternative proof that $(a^2+b^2)/(ab+1)$ is a square when it's an integer - -I came across this problem, but couldn't solve it. - -Let $a,b>0$ be two integers such that $(1+ab)\mid (a^2+b^2)$. Show that the integer $\frac{(a^2+b^2)}{(1+ab)}$ must be a perfect square. - -It's a double star problem in Number theory (by Niven). Thanks in advance. - -REPLY [11 votes]: It was an IMO (International Mathematical Olympiad) problem; Terence Tao, among a few others, solved it. There is a technique (Vieta jumping) that solves similar problems; here is a link: http://www.georgmohr.dk/tr/tr09taltvieta.pdf<|endoftext|> -TITLE: Maximum Likelihood Estimation of an Ornstein-Uhlenbeck process -QUESTION [9 upvotes]: I am wondering whether an analytical expression for the maximum likelihood estimates of an Ornstein-Uhlenbeck process is available. The setup is the following: Consider a one-dimensional Ornstein-Uhlenbeck process $(X_t)_{t\geq 0}$ with $X_0=x$ for some $x\in\mathbb{R}$, i.e. $(X_t)_{t\geq 0}$ solves the SDE -$$ -\mathrm{d} X_t=\theta(\mu-X_t)\,\mathrm{d} t + \eta\,\mathrm{d} W_t,\quad X_0=x -$$ -where $(W_t)_{t\geq 0}$ is a standard Wiener process and $\eta,\theta>0$, $\mu\in\mathbb{R}$. If $\lambda=(\eta,\theta,\mu)$ is the vector of parameters, then the transition densities are known and if $p_{\lambda}(t,x,\cdot)$ denotes the density of $X_t$ (remember $X_0=x$) with respect to the Lebesgue-measure, then -$$ -p_{\lambda}(t,x,y)=(2\pi\beta)^{-1/2}\exp\left(-\frac{(y-\alpha)^2}{2\beta}\right),\quad y\in\mathbb{R}, -$$ -where $\alpha=\mu+(x-\mu)e^{-\theta t}$ and $\beta=\frac{\eta^2}{2\theta}(1-e^{-2\theta t})$. -Suppose we have observed an Ornstein-Uhlenbeck process in equidistant time-instances (where the parameter $\lambda$ is unknown), i.e. the vector of observations is given by -$$ -\mathbf{x}=\{x_0,x_{\Delta},\ldots,x_{N\Delta}\}, -$$ -where $x_0=x$ and $\Delta>0$ and $N+1$ is the number of observations. Then by the Markov property of $(X_t)_{t\geq 0}$ we have that the log-likelihood function is given by -$$ -l(\lambda)=l(\theta,\eta,\mu;\mathbf{x})=\sum_{i=1}^N \log\left(p_{\lambda} (\Delta,x_{(i-1)\Delta},x_{i\Delta})\right). -$$ -Now I am asking if it is possible to maximize this expression with respect to $\lambda=(\eta,\theta,\mu)$ simultaneously and if so, how would one go about doing this. If anyone can point me in the direction of a paper/book where this is shown, it would be much appreciated. Thanks in advance!
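[Computational note, not part of the original thread: since the transition density above is Gaussian, $l(\lambda)$ can at least be maximized numerically even if no closed form were known. A minimal illustrative Python sketch (function names and the synthetic parameter values are my own choices, not taken from any reference):

    import numpy as np
    from scipy.optimize import minimize

    def neg_loglik(params, x, dt):
        """Exact negative log-likelihood of equally spaced OU observations."""
        theta, mu, eta = params
        if theta <= 0 or eta <= 0:          # crude way to keep the optimizer in range
            return np.inf
        alpha = mu + (x[:-1] - mu) * np.exp(-theta * dt)              # conditional mean
        beta = eta**2 / (2 * theta) * (1 - np.exp(-2 * theta * dt))   # conditional variance
        r = x[1:] - alpha
        return 0.5 * np.sum(np.log(2 * np.pi * beta) + r**2 / beta)

    # simulate a path exactly from the transition density: theta=1.5, mu=0.3, eta=0.5
    rng = np.random.default_rng(0)
    dt, n, theta, mu, eta = 0.1, 5000, 1.5, 0.3, 0.5
    x = np.empty(n + 1); x[0] = 0.0
    sd = np.sqrt(eta**2 / (2 * theta) * (1 - np.exp(-2 * theta * dt)))
    for i in range(n):
        x[i + 1] = mu + (x[i] - mu) * np.exp(-theta * dt) + sd * rng.standard_normal()

    fit = minimize(neg_loglik, x0=[1.0, 0.0, 1.0], args=(x, dt), method="Nelder-Mead")
    print(fit.x)   # should land near (1.5, 0.3, 0.5)

Whether the maximizer also has an analytical expression is the subject of the answer below.]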
- -REPLY [5 votes]: In the paper -"Parameter estimation and bias correction for diffusion processes" by Tang and Chen explicit formulas for the MLE are given. Their formulas ignore $X_0$, but this makes little difference if the number of observations is reasonably large. I am puzzled how they managed to come up with these formulas, though. Solving $l'(\lambda)=0$ seems to be a difficult task.<|endoftext|> -TITLE: A number has 101 composite factors. -QUESTION [9 upvotes]: A number has 101 composite factors. At most how many prime factors can such a number have? - -REPLY [4 votes]: Suppose $m = p_1^{a_1} ... p_n^{a_n}$ has exactly $101$ composite factors. -Then $101 + (1+n) = (a_1 + 1)(a_2+1) ... (a_n+1)$ (the total number of divisors is the $101$ composite divisors, plus the $n$ prime divisors, plus the divisor $1$). -But the RHS is at least $2^n$ and it is easily checked that the inequality: -$102 + n \geq 2^n$ -fails for $n \geq 7$. So there can be at most $6$ primes in the factorisation of $m$. -We now try to decompose the numbers $101 + (1+n)$ into a product of exactly $n$ integers, each at least $2$, for $n=1,2,3,4,5,6$, in order to see whether the $a_i$ can actually exist in each case. -We see that: -$108 = 2^2 \times 3^3$ -$107$ is prime -$106 = 2\times 53$ -meaning that the cases $n=6,5,4$ (which require $108$, $107$ and $106$ respectively) cannot work. -However the number $105 = 3\times 5\times 7$ does have such a representation as a product of three numbers. Hence the biggest number of primes you may have in $m$ is $3$ in order to have exactly 101 composite factors. -Such a number is given by $m = p_1^2 p_2^4 p_3^6$ for any three different primes you wish. -As an aside, all such numbers $m$ must be of one of the following forms: -$p_1^2 p_2^4 p_3^6$ -$p_1^7 p_2^{12}$ -$p_1^3 p_2^{25}$ -$p_1 p_2^{51}$ -$p_1^{102}$ -Where $p_1,p_2,p_3$ are distinct primes.<|endoftext|> -TITLE: Why define measures on $\sigma$-rings? -QUESTION [8 upvotes]: I have the impression that modern texts deal almost exclusively with measures on $\sigma$-algebras, while older texts, such as the one of Halmos, deal mainly with measures defined on $\sigma$-rings. I'm curious what motivated this change and in what context are $\sigma$-rings more natural domains for measures? - -REPLY [3 votes]: Being unfamiliar with the older text, I can only speculate. One explanation is that one prefers to work with sets of $\sigma$-finite measure: those that can be written as a countable union of sets of finite measure. For example, sets of $\sigma$-finite length ($1$-dimensional Hausdorff measure) in the plane form a $\sigma$-ring, not a $\sigma$-algebra. It is rather fruitless to think about the 1-dimensional measure of the complement of a line, so removing such sets from consideration seems reasonable.<|endoftext|> -TITLE: Probability density function of the integral of a continuous stochastic process -QUESTION [16 upvotes]: I am interested in whether there is a general method to calculate the pdf of the integral of a stochastic process that is continuous in time. -My specific example: I am studying a stochastic process given by -$X(t)=\int\limits_{0}^{t}\cos(B(s))\,\text{d}s$, -where $B(t)$ is the Wiener process, which is normally distributed over an interval of length $\tau$ with zero mean and variance $\tau$: -$B(t+\tau)-B(t)\sim\mathcal{N}(0, \tau)$.
- -I am able to calculate the first and second moments of $X(t)$, see: Expectation value of a product of an Ito integral and a function of a Brownian motion -A couple of thoughts on the matter: -1) Integrals of Gaussian continuous stochastic processes, such as the Wiener process, can be considered as the limit of a sum of Gaussians and are hence themselves Gaussian. Since $\cos(B(s))$ is not Gaussian, this doesn't seem to help here. -2) If we can derive an expression for the characteristic function of the process $X(t)$, then we can theoretically invert this to obtain the pdf. The Feynman-Kac formula enables us to describe the characteristic function in terms of a PDE. If this PDE has a unique analytic solution then we can make use of this. In my specific example, this is not the case - the PDE obtained has no analytic solution. I can provide more detail on this point if required. -Many thanks for your thoughts. - -REPLY [3 votes]: You may use a computational method to approximate as accurately as wanted the probability density function of $I(t)=\int_0^t \cos(B(s))\,\mathrm{d}s$. I will do so for $0\leq t\leq 1$. -Consider the Karhunen-Loève expansion of $B(t)$ on $[0,1]$: -$$ B(t)=\sum_{j=1}^\infty \frac{\sqrt{2}}{\left(j-\frac12\right)\pi}\sin\left(\left(j-\frac12\right)\pi t\right)\xi_j, $$ -where $\xi_1,\xi_2,\ldots$ are independent and $\text{Normal}(0,1)$ distributed random variables. The convergence of the series holds in $L^2([0,1]\times\Omega)$. Truncate -$$ B_J(t)=\sum_{j=1}^J \frac{\sqrt{2}}{\left(j-\frac12\right)\pi}\sin\left(\left(j-\frac12\right)\pi t\right)\xi_j, $$ -and define $I_J(t)=\int_0^t \cos(B_J(s))\,\mathrm{d}s$. -It is easy to see that: - -$I_J(t)\rightarrow I(t)$ in $L^2(\Omega)$, for each $0\leq t\leq 1$. Indeed, by the Cauchy-Schwarz inequality, $\mathbb{E}[(I(t)-I_J(t))^2]\leq t\|B(t)-B_J(t)\|_{L^2([0,t]\times\Omega)}^2\rightarrow 0$, as $J\rightarrow\infty$. -$I_J(t)\rightarrow I(t)$ almost surely, for each $0\leq t\leq 1$. Indeed, for each fixed $\omega\in\Omega$, we know that $B_J(t)(\omega)\rightarrow B(t)(\omega)$ in $L^2([0,1])$, because deterministic Fourier series converge in $L^2$. Since $\cos$ is Lipschitz, $\cos(B_J(t)(\omega))\rightarrow \cos(B(t)(\omega))$ in $L^2([0,1])$. Then $I_J(t)(\omega)\rightarrow I(t)(\omega)$ for each $t\in[0,1]$ follows. - -Although these two facts are not enough to guarantee that the density functions of $\{I_J(t):J\geq1\}$ tend to the density function of $I(t)$, the density function of $I_J(t)$ is a very good candidate (recall that this is a computational method, not a proof). The good thing about $I_J(t)$ is that it consists of a finite number of $\xi_j$, so it is possible to obtain exact realizations of $I_J(t)$. And if we generate a sufficiently large number $M$ of realizations of $I_J(t)$, then a kernel density estimation allows obtaining an approximate density function for $I_J(t)$.
-I have written a function in Mathematica to approximate the distribution of $I(T)$, for $0\leq T\leq 1$, using a truncation order $J$ and a number of simulations $M$: -distributionIT[T_, J_, simulations_] := - Module[{realizationsIT, simulation, xi, BJ, distribution}, - realizationsIT = ConstantArray[0, simulations]; - For[simulation = 1, simulation <= simulations, simulation++, - xi = RandomVariate[NormalDistribution[0, 1], J]; - BJ[t_] := - Sqrt[2]*Sum[ - Sin[(j - 0.5)*Pi*t]/((j - 0.5)*Pi)*xi[[j]], {j, 1, J}]; - realizationsIT[[simulation]] = NIntegrate[Cos[BJ[t]], {t, 0, T}]; - ]; - distribution = SmoothKernelDistribution[realizationsIT]; - Return[distribution]; - ]; - -Let us do a numerical example. Choose $T=1$. Write -distribution1 = distributionIT[1, 40, 50000]; -plot1 = Plot[PDF[distribution1, x], {x, -2, 2}, Frame -> True, PlotRange -> All]; -distribution2 = distributionIT[1, 50, 100000]; -plot2 = Plot[PDF[distribution2, x], {x, -2, 2}, Frame -> True, PlotRange -> All]; -Legended[Show[plot1, plot2], LineLegend[{Green, Blue}, {"J=40, M=50000", "J=50, M=100000"}]] - - -We plot the estimated density function for $J=40$, $M=50000$ and $J=50$, $M=100000$. We observe no differences, so our method provides a good approximation of the probability density function of $I(1)$. -Similar computations for $T=0.34$ give the following plot: - -If you plot the approximate density function for smaller $T$, you will see that in the limit one gets a Dirac delta at $0$, which agrees with the fact that $I(0)=0$ almost surely. -Remark: Computational methods are of constant use in research to approximate probabilistic features of response processes to random differential equations. See for example [M. A. El-Tawil, The approximate solutions of some stochastic differential equations using transformations, Applied -Mathematics and Computation 164 (1) 167–178, 2005], [D. Xiu, Numerical Methods for Stochastic Computations. A Spectral Method Approach, Princeton University Press, 2010], [L. Villafuerte, B. M. Chen-Charpentier, A random differential transform method: Theory and applications, Applied Mathematics Letters, 25 (10) 1490-1494, 2012].<|endoftext|> -TITLE: 'Plus' Operator analog of the factorial function? -QUESTION [6 upvotes]: Possible Duplicate: -What is the term for a factorial type operation, but with summation instead of products? - -Is there a similar function for the addition operator as there is the factorial function for the multiplication operator? -For factorials it is 5! = 5*4*3*2*1, is there a function that would do 5+4+3+2+1? -Thanks, - -REPLY [9 votes]: As far as I know we haven't given that expression a name (its values are the triangular numbers), since we may write it explicitly as -$$ -\sum_{i=1}^ni = {n+1 \choose 2} = \frac{n(n+1)}{2}. -$$<|endoftext|> -TITLE: Eigenvalues of certain block Hermitian matrix -QUESTION [8 upvotes]: Suppose I have a special block Hermitian matrix -$$H = \begin{bmatrix} A & B \\ B^* & A^* \end{bmatrix}$$ -where $*$ denotes conjugate transpose. The blocks $A$ and $B$ are themselves Hermitian in this case. Are there any theorems concerning the eigenvalues and eigenvectors for this special matrix? - -REPLY [4 votes]: Since in the comment you assumed $A$ and $B$ Hermitian, we can compute the characteristic polynomial $\det(H-XI_{2n})$.
Add to the column $k$ the column $n+k$ for $1\leq k\leq n$ to see that -$$\det(H-XI_{2n})=\det(A+B-XI_n)\det\pmatrix{I_n&B\\ I_n&A-XI_n}.$$ -Then do $R_{n+k}\leftarrow R_{n+k}-R_k$, $1\leq k\leq n$, which gives -$$\det(H-XI_{2n})=\det(A+B-XI_n)\det(A-B-XI_n).$$ -So the spectrum of $H$ is the union of the spectra of $A+B$ and $A-B$.<|endoftext|> -TITLE: A system of equations with 5 variables: $a+b+c+d+e=0$, $a^3+b^3+c^3+d^3+e^3=0$, $a^5+b^5+c^5+d^5+e^5=10$ -QUESTION [12 upvotes]: Find the real numbers $a, b, c, d, e$ in $[-2, 2]$ that simultaneously satisfy the following relations: -$$a+b+c+d+e=0$$ $$a^3+b^3+c^3+d^3+e^3=0$$ $$a^5+b^5+c^5+d^5+e^5=10$$ -I suppose that the key is related to a trigonometric substitution, but not sure what kind of substitution, or it's about a different thing. - -REPLY [10 votes]: The unknowns $a,b,c,d,e$ are to be real and in the interval $[-2,2]$. This screams for the substitution $a=2\cos\phi_1$, $b=2\cos\phi_2$, $\ldots, e=2\cos\phi_5$ with some unknown angles $\phi_j,j=1,2,3,4,5$ to be made. Let's use the equations $2\cos\phi_j=e^{i\phi_j}+e^{-i\phi_j}$, $j=1,2,3,4,5$. Now -$$ -0=a+b+c+d+e=\sum_{j=1}^5(e^{i\phi_j}+e^{-i\phi_j}), -$$ -Using this in the second equation gives -$$ -0=a^3+b^3+c^3+d^3+e^3=\sum_{j=1}^5(e^{3i\phi_j}+3e^{i\phi_j}+3e^{-i\phi_j}+e^{-3i\phi_j}) -=\sum_{j=1}^5(e^{3i\phi_j}+e^{-3i\phi_j}). -$$ -Using both of these in the last equation gives -$$ -\begin{align} -10=a^5+b^5+c^5+d^5+e^5&=\sum_{j=1}^5(e^{5i\phi_j}+5e^{3i\phi_j}+10e^{i\phi_j}+10e^{-i\phi_j}+5e^{-3i\phi_j}+e^{-5i\phi_j})\\ -&=\sum_{j=1}^5(e^{5i\phi_j}+e^{-5i\phi_j})=\sum_{j=1}^5(2\cos5\phi_j). -\end{align} -$$ -This is equivalent to -$$ -\sum_{j=1}^5\cos5\phi_j=5. -$$ -When we know that the sum of five cosines is equal to five, certain deductions can be made :-) -This shows that there are 5 possible values for all the five unknowns, namely $2\cos(2k\pi/5)$ with $k=0,1,2,3,4$ (well, cosine is an even function, so there are only three!). We get a solution by using each value of $k$ exactly once, because then the first two equations are satisfied (use familiar identities involving roots of unity). There may be others, but having reduced the problem to a finite search, I will exit back left.<|endoftext|> -TITLE: Proof that $\mathbb N $ is finite -QUESTION [15 upvotes]: Obviously this is a false proof. It relies on Berry's paradox. -Assume that $\mathbb{N}$ is infinite. Since there are only finitely many words in the English language, there are only finitely many numbers which can be described unambiguously in less than 15 words. Let $n$ be the smallest number which can't. -Then $n$ can be described as "the smallest number which can be described unambiguously in less than 15 words". Contradiction. -I know nothing of mathematical logic, but looking in a few books has told me that the problem here lies in the definition $n$ := "smallest number which can't be described unambiguously in less than 15 words". If this isn't a valid definition, then what exactly is a valid definition? - -REPLY [13 votes]: Essentially what your argument shows here is that "described unambiguously in less than $N$ words" is not itself an unambiguous description -- we can plainly see ambiguity arise in the form of the paradox. This defuses the paradox, because now description itself is not among those we quantify over. -It is of course easy to accuse natural-language phrases of being ambiguous, but the same problem carries over if we attempt to formalize the argument. 
For example, we could ask for the least natural number $n$ such that for every first-order formula $\phi(x)$ containing less than $20,000$ symbols in the language of basic arithmetic, $\forall x.(\phi(x)\leftrightarrow x=\bar n)$ is false in $\mathbb N$. -The wall we then hit is this: Even though we can represent the formulas $\phi(x)$ themselves inside arithmetic using Gödel numbers, there is no arithmetic formula that expresses the property of being the Gödel number of a true formula. So the number asked for in the previous paragraph is indeed not described by any arithmetic formula. -We can define arithmetic truth if we allow formulas in a stronger language, such as set theory. But that still doesn't produce a paradox, because the language of set theory cannot express set-theoretic truth.<|endoftext|> -TITLE: Direct limit of localizations of a ring at elements not in a prime ideal -QUESTION [7 upvotes]: For a prime ideal $P$ of a commutative ring $A$, consider the direct limit of the family of localizations $A_f$ indexed by the set $A \setminus P$ with partial order $\le$ such that $f \le g$ iff $V(f) \subseteq V(g)$. (We have for such $f \le g$ a natural homomorphism $A_f \to A_g$.) I want to show that this direct limit, $\varinjlim_{f \not\in P} A_f$, is isomorphic to the localization $A_P$ of $A$ at $P$. For this I consider the homomorphism $\phi$ that maps an equivalence class $[(f, a/f^n)] \mapsto a/f^n$. (I denote elements of the disjoint union $\sqcup_{f \not\in P} A_f$ by tuples $(f, a/f^n)$.) Surjectivity is clear, because for any $a/s \in A_P$ with $s \not\in P$, we have the class $[(s, a/s)] \in \varinjlim_{f \not\in P} A_f$ whose image is $a/s$. For injectivity, suppose we have a class $[(f, a/f^n)]$ whose image $a/f^n = 0/1 \in A_P$. Then there exists $t \notin P$ such that $ta = 0$. We want to show that $[(f, a/f^n)]] = [(f, 0/1)]$, which I believe is equivalent to finding a $g \notin P$ such that $V(f) \subseteq V(g)$ and $g^ka = 0$ for some $k \in \mathbb{N}$. Well, $t$ seems to almost work, but I couldn’t prove that $V(f) \subseteq V(t)$, so maybe we need a different $g$? Or am I using the wrong map entirely? - -REPLY [4 votes]: If $a/f^n \in A_f$ is mapped to $0$ in $A_p,$ then there is a $g \not \in p,$ s.t. $ga=0,$ therefore, $a/f^n=0 \in A_{gf}.$ Hence the injectivity.<|endoftext|> -TITLE: Finding solutions to $(4x^2+1)(4y^2+1) = (4z^2+1)$ -QUESTION [6 upvotes]: Consider the following equation with integral, nonzero $x,y,z$ -$$(4x^2+1)(4y^2+1) = (4z^2+1)$$ -What are some general strategies to find solutions to this Diophantine? -If it helps, this can also be rewritten as $z^2 = x^2(4y^2+1) + y^2$ -I've already looked at On the equation $(a^2+1)(b^2+1)=c^2+1$ - -REPLY [5 votes]: Here is one general approach. Since the product of the sum of two squares is itself the sum of two squares, then, -$$\tag{1}(4x^2+1)(4y^2+1) = 4z^2+1$$ -is equivalent to, -$$\tag{2}(2x+2y)^2+(4xy-1)^2 = 4z^2+1$$ -The complete solution to the form, -$$\tag{3}x_1^2+x_2^2 = y_1^2+y_2^2$$ -is given by the identity, -$$\tag{4}(ac+bd)^2 + (bc-ad)^2 = (ac-bd)^2+(bc+ad)^2$$ -One can then equate the terms of (2) and (4), solve for {x, y, z}, with {a, b, c, d} chosen such that one term on the RHS is equal to unity. 
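[Computational note, not part of the original answer: a tiny exhaustive search makes the existence and sparsity of solutions concrete. An illustrative Python sketch (the search bound $200$ is an arbitrary choice of mine):

    from math import isqrt

    # search (4x^2+1)(4y^2+1) = 4z^2+1 over 1 <= x <= y <= 200
    for x in range(1, 201):
        for y in range(x, 201):
            n = (4 * x * x + 1) * (4 * y * y + 1)
            z2 = (n - 1) // 4      # expanding the product shows n - 1 is divisible by 4
            z = isqrt(z2)
            if z * z == z2:
                print(x, y, z)     # e.g. (1, 4, 9): 5 * 65 = 325 = 4*81 + 1

The hit $(x,y,z)=(1,4,9)$ agrees, up to the sign of $x$, with the parametric family derived in the edit that follows: take $u=2$, $v=1$ in $u^2-3v^2=1$.]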
-EDITED MUCH LATER: -In response to your questions, let's have a simpler solution to (3) as, -$$\tag{5}(6n+2)^2+(6n^2+4n-1)^2=(6n^2+4n+2)^2+1$$ -Equate the terms of (2) and (5) and we find that, -$$x = \frac{1}{2}\big(1+3n-\sqrt{3n^2+2n+1}\big)$$ -$$y = \frac{1}{2}\big(1+3n+\sqrt{3n^2+2n+1}\big)$$ -$$z = (6n^2+4n+2)/2$$ -To get rid of the $\sqrt{N}$ and solve the form, -$$an^2+bn+c^2 = \square$$ -one simply chooses, -$$n = \frac{-2cuv+bv^2}{u^2-av^2}$$ -for arbitrary {u, v}. Of course, since you want integer n, you have to solve the denominator as the Pell equation $u^2-av^2 = \pm 1$. -In summary, and after simplification, an infinite number of integer solutions to, -$$(4x^2+1)(4y^2+1) = 4z^2+1$$ -is given by the rather simple, -$$x = (u-3v)(u-v)$$ -$$y = 2uv$$ -$$z = (u^2-2uv+3v^2)^2$$ -where, -$$u^2-3v^2=1$$ -P.S. It is quite easy to find other solutions similar to (5), and appropriate ones would need other Pell equations.<|endoftext|> -TITLE: How to get a part of a quaternion? e.g. get half of the rotation of a quaternion? -QUESTION [7 upvotes]: if I have a quaternion which describes an arbitrary rotation, how can I get for example only the half rotation or something like 30% of this rotation? -Thanks in advance! - -REPLY [3 votes]: I believe what you're looking for are exponent and logarithm formulas for quaternions, which can be found on the Wikipedia page on quaternions. The Wikipedia page even gives a formula for raising a quaternion to an arbitrary power, which is exactly what you want. If your original rotation is given by $q$, and you want to take 30% of this rotation, you simply take $q^{0.3}$.<|endoftext|> -TITLE: When is the closure of a path connected set also path connected? -QUESTION [9 upvotes]: What are the most general criteria we can impose on a locally path connected Hausdorff space $X$ and a path connected subset $A$ such that $\overline{A}$ is path connected? Do more restrictions need to be imposed on $X$ or $A$? -For instance, I know that if $\overline{A}$ is locally path connected then $\overline{A}$ is path connected; for all $x \in \overline{A}$ and some neighborhood $U$ of $x$ that is open in $\overline{A}$, there must be some path connected neighborhood $U' \subseteq U$ of $x$ that is open in $\overline{A}$. That is, there is some open subset $V'$ of $X$ such that $U' = V' \cap \overline{A}$. Since $x$ is a point of closure of $A$, $V'$ must contain some point $x' \in A \subseteq \overline{A}$, i.e. $x' \in U'$, so $x$ is path connected to $x'$ and hence also to $A$. This holds for all $x \in \overline{A}$ so $\overline{A}$ is path connected. -However, the tough part is proving that $\overline{A}$ is locally path connected, because $\overline{A}$ is probably (?) not open in $X$. I'm a complete novice so the only useful thing I know from browsing definitions is that all open subsets of a locally path connected space inherit the local path connectivity. Are there more ways to prove that a subspace inherits local path connectedness? -This is more specific, but would it help if I knew that $A$ was the set difference of two closed sets (i.e. the intersection of a closed set and an open set)? -I've been looking at stronger restrictions such as $X$ being locally simply connected, but the online documentation is scarce. Would local simple connectivity be "inherited more easily" by subspaces? - -REPLY [6 votes]: Here's something that I came up with. 
The proposition below is what you're asking for but I've also encapsulated the main idea behind these results in the following lemma in case that's more helpful. -Lemma: Let $S \subseteq X$ be path-connected and $x^1 \in \overline{S}$. Suppose there exists a countable decreasing (i.e. $U_{i+1} \subseteq U_i$) neighborhood basis $\left( U_i \right)_{i=1}^{\infty}$ in $X$ at $x^1$ such that for each $i$, whenever $s^i \in S \cap U_i$ then there exists a path in $S \cap U_i$ from $s^i$ to some element of $S \cap U_{i+1}$. Then $S \cup \left\lbrace x^1 \right\rbrace$ is path-connected. -Remark: Note that we are not assuming that for all $i$, there exists a path between any two point of $S \cap U_i$. The sets $S \cap U_i$ need not even be connected so this is weaker than requiring local connectivity of $\overline{S}$ at $x^1$. -Corollary: Let $S \subseteq X$ be path-connected. If the condition of the above lemma is satisfied at each $x^1 \in \overline{S}$ (or slightly more generally, if each path-component of the boundary of $S$ contains some point satisfying this condition) then $\overline{S}$ is path-connected. -Prop: Let $S \subseteq X$ be path-connected. Suppose that each path component of $\overline{S} \setminus S$ contains some $x^1$ for which there exists a countable decreasing neighborhood basis $\left( U_i \right)_{i=1}^{\infty}$ in $X$ at $x^1$ s.t. for each $i$ and each path-component $P_i$ of $S \cap U_i$, there exists a path in $\overline{S} \cap U_i$ whose image intersects both $P_i$ and $S \cap U_{i+1}$. Then $\overline{S}$ is path-connected. -Remark: In this proposition, you can replace "of $\overline{S} \setminus S$" with "of the boundary of $S$ in $\overline{S}$". Also, to prove that $\overline{S}$ is path-connected, it may be easier to find some other path-connected $R \subseteq X$ such that $\overline{R} = \overline{S}$ and then apply these results to $R$ in place of $S$. -Proof of lemma: Pick any $s^1 \in S \cap U_1$ and any $0 = t_0 < t_1 < \cdots < 1$ s.t. $t_i \to 1$ and let $\gamma_0 : [t_0, t_1] \to S$ be the constant path at $s^0 := s^1$. Suppose for all $0 \leq l \leq i + 1$ we've picked $s^l \in S \cap U_l$ and for every $0 \leq l \leq i$ we have a path $\gamma_l : [t_l, t_{l+1}] \to S \cap U_l$ from $s^l$ to $s^{l+1}$ (where observe that this holds for $i = 0$). By assumption, we can pick $s^{i+2} \in S \cap U_{i+2}$ and a path $\gamma_{i+1} : [t_{i+1}, t_{i+2}] \to S \cap U_{i+1}$ from $s^{i+1}$ to $s^{i+2}$. -After starting this inductive construction at $i = 0$ we can use $\gamma_0, \gamma_1, \ldots$ to define $\gamma : [0, 1] \to S \cup \left\lbrace x^1 \right\rbrace$ on $[0, 1)$ in the obvious way and then declare that $\gamma(1) := x^1$. For any integer $N$, $l \geq N$ implies $\operatorname{Im} \gamma_l \subseteq U_l \subseteq U_N$ so that $\gamma([t_N, 1]) \subseteq U_N$. Thus $\gamma$ is continuous at $1$ so that $S \cup \left\lbrace x^1 \right\rbrace$ is path-connected. Q.E.D. -It should now be clear how the idea behind this lemma's proof led to the above proposition's statement. -Proof of prop: Let $x^1$ and $\left( U_i \right)_{i=1}^{\infty}$ have the properties described in the proposition's statement, let $0 = t_0 < t_1 < \cdots < 1$ be s.t. $t_i \to 1$, and let $\gamma_0 : [t_0, t_1] \to S \cap U_1$ be any constant path. 
Suppose $i \geq 0$ is such that for all $1 \leq l \leq i$, we have constructed a path $\gamma_l : \left[ t_l, t_{l+1} \right] \to \overline{S} \cap U_l$ such that $\gamma_l(t_l) = \gamma_{l-1}\left( t_{l} \right)$ and $\gamma_l\left( t_{l+1} \right) \in S \cap U_{l+1}$ (note that this is true for $i = 0$). Our assumption on $\left( U_i \right)_{i=1}^{\infty}$ allows us to construct a path $\gamma_{i+1} : \left[ t_{i+1}, t_{i+2} \right] \to \overline{S} \cap U_{i+1}$ starting at $\gamma_i\left( t_{i+1} \right)$ and ending at some point of $S \cap U_{i+2}$. Exactly as was done in the proof of the above lemma, we may now define a continuous map $\gamma : [0, 1] \to \overline{S}$ such that $\gamma(1) = x^1$. Q.E.D.<|endoftext|> -TITLE: Is there a difference between allowing only countable unions/intersections, and allowing arbitrary (possibly uncountable) unions/intersections? -QUESTION [11 upvotes]: As in the title, I am asking if there is a difference between allowing set-theoretic operations over arbitrarily many sets, and restricting to only countably many sets. -For example, the standard definition of a topology on a set $X$ requires that arbitrary unions of open sets are open. Do I lose anything significant if I restrict this to just unions of countably many (open) sets? -I cannot come up with an example where it makes a difference. - -REPLY [4 votes]: *Edit:* I originally let $X$ be just a Hausdorff space, but in that case the two aren't guaranteed to be different. -Here's an example to see where it makes a difference. Let $X$ be an uncountable Polish space. If you look at the smallest collection of subsets of $X$ that a) contains all the open sets of $X$ and b) is closed under complements and countable unions (hence also countable intersections), you get the Borel $\sigma$-algebra of $X$. -But if you look at the smallest collection of subsets of $X$ that a) contains all the open sets and b) is closed under complements and arbitrary unions (hence also arbitrary intersections), this is $\mathcal{P}(X)$. This is because it includes all closed sets, hence all singletons, and then we can take an arbitrary union to get any subset of $X$.<|endoftext|> -TITLE: Differentiating Under the Integral Proof -QUESTION [14 upvotes]: There are many variations of the "differentiating under the integral sign" theorem; here is one: -If $U$ is an open subset of $\mathbb{R}^n$ and $f:U \times [a,b] \rightarrow \mathbb{R}$ is continuous with continuous partial derivatives $\partial_1 f, \dots \partial_n f$ then the function -$$ -\phi(x) = \int^b_a f(x,t)dt -$$ -is continuously differentiable and -$$ -\partial_i \phi (x) = \int^b_a \partial_i f(x,t)dt -$$ -Can anyone suggest a textbook that provides a proof of this version of the theorem? - -REPLY [7 votes]: Isn't the proof sort of "follow your nose"? Let $\Delta x$ be nonzero, consider -$$ \phi(x+\Delta x)-\phi(x) = \int^{b}_{a}f(x+\Delta x,t)-f(x,t)\,\mathrm{d}t$$ -Then construct the quotient -$$ \frac{\phi(x+\Delta x)-\phi(x)}{\Delta x} = \frac{\int^{b}_{a}f(x+\Delta x,t)-f(x,t)\,\mathrm{d}t}{\Delta x} $$ -But because we do not integrate over $x$, we treat $x$ like a constant. So we can rewrite the integral as -$$ \frac{\phi(x+\Delta x)-\phi(x)}{\Delta x} = \int^{b}_{a}\frac{f(x+\Delta x,t)-f(x,t)}{\Delta x}\,\mathrm{d}t $$ -Taking the limit as $\Delta x\to0$ gives us -$$ \frac{\mathrm{d}\phi(x)}{\mathrm{d} x} = \int^{b}_{a}\frac{\partial f(x,t)}{\partial x}\,\mathrm{d}t $$ -precisely as desired?
[Edit: We can take the limit under the integral sign, as Giuseppe Negro points out, if the function $f(x,t)$ is continuously differentiable in $x$.] -Addendum: Why, oh why, do we need $f(x,t)$ to be continuously differentiable in $x$? -Why can we take this limit? Well, there's a number of different arguments. -One is the Dominated convergence theorem, which states if we have a sequence of functions $f_{n}(t)\to F(t)$ which is "dominated" by some integrable function $g(t)$, meaning -$$ |f_{n}(t)|\leq g(t)\quad\mbox{for any }t $$ -then we have -$$ \lim_{n\to\infty}\int|f_{n}(t)-F(t)|\,\mathrm{d}t=0 $$ -which implies -$$ \lim_{n\to\infty}\int f_{n}(t)\,\mathrm{d}t=\int F(t)\,\mathrm{d}t. $$ -Take $F(t)=\partial f(x,t)/\partial x$ and $f_{n}(t)$ to be -$$ f_{n}(t) = \frac{f(x + \varepsilon_{n},t)-f(x,t)}{\varepsilon_{n}} $$ -using any sequence $\varepsilon_{n}\to 0$. (Continuity of $\partial f/\partial x$, via the mean value theorem, supplies such a dominating constant function on compact sets.) -Addendum 2: A second different way begins with the observation -$$ \int^{b}_{a}\int^{x}_{0}\frac{\partial f(y,t)}{\partial y}\,\mathrm{d}y\,\mathrm{d}t = \phi(x)-\phi(0)$$ -by the fundamental theorem of calculus. Fubini's theorem lets us switch the order of integration -$$ \int^{x}_{0}\int^{b}_{a}\frac{\partial f(y,t)}{\partial y}\,\mathrm{d}t\,\mathrm{d}y = \phi(x)-\phi(0)$$ -Then we can use Leibniz's rule differentiating both sides with respect to $x$. This gives us the desired result -$$ \int^{b}_{a}\frac{\partial f(x,t)}{\partial x}\,\mathrm{d}t = \phi'(x).$$ -Recall Leibniz's rule states if $G(x) = \int^{x}_{0}g(y)\,\mathrm{d}y$ then -$$ G'(x) = g(x). $$ -We can prove this quickly by -$$ \frac{G(x+\Delta x)-G(x)}{\Delta x} = \frac{1}{\Delta x}\int^{x+\Delta x}_{x} g(y)\,\mathrm{d}y$$ -and taking $\Delta x$ to be "sufficiently small", we can approximate the Riemann sum as -$$ \int^{x+\Delta x}_{x} g(y)\,\mathrm{d}y\approx g(c)\Delta x$$ -where $x\leq c\leq x+\Delta x$. Plugging this back in gives us -$$ \frac{G(x+\Delta x)-G(x)}{\Delta x} = \frac{1}{\Delta x}\left(g(c)\Delta x\right) = g(c)$$ -Taking $\Delta x\to 0$ gives us $c\to x$, and -$$ G'(x) = g(x)$$ -as desired.<|endoftext|> -TITLE: Evaluating a series with the Möbius function and greatest common divisor. -QUESTION [7 upvotes]: Problem: Let $\gcd(a,b,c,d)$ refer to the largest integer $r$ such that $r$ divides each of $a,b,c,d$. Evaluate the series $$\sum_{a=1}^{\infty}\sum_{b=1}^{\infty}\sum_{c=1}^{\infty}\sum_{d=1}^{\infty}\frac{\mu(a)\mu(b)\mu(c)\mu(d)}{a^{2}b^{2}c^{2}d^{2}}\gcd(a,b,c,d)^{4},$$ where $\mu(n)$ is the Möbius function. - -I tried several tricks, but I eventually got stuck. I think it should be possible to rewrite the entire thing as an Euler Product. It looks very similar to -the double series $$\sum_{a=1}^{\infty}\sum_{b=1}^{\infty}\frac{\mu(a)\mu(b)}{a^{2}b^{2}}\gcd(a,b)^{2}=\frac{6}{\pi^2}.$$ -Any help is appreciated.
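[Computational note, not part of the original thread: the quoted two-variable identity is easy to test numerically. An illustrative Python sketch with a simple Möbius sieve (the truncation point $N$ is an arbitrary choice of mine):

    from math import gcd, pi

    N = 300
    mu = [0] * (N + 1); mu[1] = 1
    for i in range(1, N + 1):          # mu(n) = -sum of mu(d) over proper divisors d of n
        for j in range(2 * i, N + 1, i):
            mu[j] -= mu[i]

    s = sum(mu[a] * mu[b] * gcd(a, b) ** 2 / (a * a * b * b)
            for a in range(1, N + 1) for b in range(1, N + 1))
    print(s, 6 / pi ** 2)              # both are near 0.6079

The same truncation idea applied to the four-fold sum approaches the closed form computed in the answer below.]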
- -REPLY [6 votes]: Your sum can be re-written in terms of an Euler product: -$$\sum_{a=1}^\infty\sum_{b=1}^\infty\sum_{c=1}^\infty\sum_{d=1}^\infty\frac{\mu(a)\mu(b)\mu(c)\mu(d)}{a^2b^2c^2d^2}\gcd(a,b,c,d)^4=\frac{1296}{\pi^8}\prod_{p}(1+\frac{p^4-1}{(p^2-1)^4})\approx .16544\cdots$$ - -A proof can be given as follows: -First note that, -$$d\mid x_1\wedge d\mid x_2\wedge d\mid x_3\wedge d\mid x_4\iff d\mid \gcd(x_1,x_2,x_3,x_4)$$ -So we get: $$\sum_{d=1}^\infty f(d)1_{d\mid x_1}1_{d\mid x_2}1_{d\mid x_3}1_{d\mid x_4}=\sum_{d=1}^\infty f(d)1_{d\mid x_1\wedge d\mid x_2\wedge d\mid x_3\wedge d\mid x_4}=\sum_{d=1}^\infty f(d)1_{d\mid \gcd(x_1,x_2,x_3,x_4)}$$ -$$=\sum_{d\mid \gcd(x_1,x_2,x_3,x_4)}f(d)$$ -where $1_{A}=[A]$ in Iverson bracket notation. -Then define: -$$\phi_s(n)=n^s\prod_{p\mid n}(1-\frac{1}{p^s})$$ -so that we have $(\phi_s*1)(n)=n^s$. -Now set $f=\phi_4$ in the aforementioned equality and we get that: -$$\sum_{d=1}^\infty \phi_4(d)1_{d\mid x_1}1_{d\mid x_2}1_{d\mid x_3}1_{d\mid x_4}=\gcd(x_1,x_2,x_3,x_4)^4$$ -Now we note that: -$$\sum_{x_i=1}^\infty\frac{\mu(x_i)}{x_i^2}1_{d\mid x_i}=\sum_{n=1}^\infty\frac{\mu(dn)}{(dn)^2}=\frac{6}{\pi^2}\frac{\mu(d)}{\phi_2(d)}$$ -Then multiplying both sides of the previous series by $\frac{\mu(x_1)}{x_1^2}\frac{\mu(x_2)}{x_2^2}\frac{\mu(x_3)}{x_3^2}\frac{\mu(x_4)}{x_4^2}$ and rearranging gives: -$$\sum_{d=1}^\infty \phi_4(d)(\frac{\mu(x_1)}{x_1^2}1_{d\mid x_1})(\frac{\mu(x_2)}{x_2^2}1_{d\mid x_2})(\frac{\mu(x_3)}{x_3^2}1_{d\mid x_3})(\frac{\mu(x_4)}{x_4^2}1_{d\mid x_4})$$ -$$=\frac{\mu(x_1)\mu(x_2)\mu(x_3)\mu(x_4)}{x_1^2x_2^2x_3^2x_4^2}\gcd(x_1,x_2,x_3,x_4)^4$$ -Thus: -$$\sum_{x_1=1}^\infty\sum_{x_2=1}^\infty\sum_{x_3=1}^\infty\sum_{x_4=1}^\infty\frac{\mu(x_1)\mu(x_2)\mu(x_3)\mu(x_4)}{x_1^2x_2^2x_3^2x_4^2}\gcd(x_1,x_2,x_3,x_4)^4$$ -$$=(\frac{6}{\pi^2})^4\sum_{d=1}^\infty \phi_4(d)\frac{\mu(d)^4}{\phi_2(d)^4}=\frac{1296}{\pi^8}\sum_{n=1}^\infty\frac{\phi_4(n)}{\phi_2(n)^4}|\mu(n)|=\frac{1296}{\pi^8}\prod_{p}(1+\frac{\phi_4(p)}{\phi_2(p)^4})$$ -So we have: -$$\sum_{a=1}^\infty\sum_{b=1}^\infty\sum_{c=1}^\infty\sum_{d=1}^\infty\frac{\mu(a)\mu(b)\mu(c)\mu(d)}{a^2b^2c^2d^2}\gcd(a,b,c,d)^4=\frac{1296}{\pi^8}\prod_{p}(1+\frac{p^4-1}{(p^2-1)^4})$$ -A similar argument gives your second sum, $\sum_{a=1}^\infty\sum_{b=1}^\infty\frac{\mu(a)\mu(b)}{a^2b^2}\gcd(a,b)^2=\frac{6}{\pi^2}$, along with formulas for other similar generalizations.<|endoftext|> -TITLE: principal value as distribution, written as integral over singularity -QUESTION [7 upvotes]: Let $C_0^\infty(\mathbb{R})$ be the set of smooth functions with compact support on the real line $\mathbb{R}.$ Consider the map -$$\operatorname{p.\!v.}\left(\frac{1}{x}\right)\,: C_0^\infty(\mathbb{R}) \to \mathbb{C}$$ -defined via the Cauchy principal value as -$$ \operatorname{p.\!v.}\left(\frac{1}{x}\right)(u)=\lim_{\varepsilon\to 0+} \int_{\mathbb{R}\setminus [-\varepsilon;\varepsilon]} \frac{u(x)}{x} \, \mathrm{d}x \quad\text{ for }u\in C_0^\infty(\mathbb{R})$$ -Now why is -$$ \lim_{\varepsilon\to 0+} \int_{\mathbb{R}\setminus [-\varepsilon;\varepsilon]} \frac{u(x)}{x} \, \mathrm{d}x = \int_0^{+\infty} \frac{u(x)-u(-x)}{x}\, \mathrm{d}x $$ -and why is the integral on the right-hand side defined? - -REPLY [8 votes]: Because $1/x$ is an odd function.
So, decomposing $u(x)$ into its even and odd parts, that is -$$u(x)=\frac{u(x)+u(-x)}{2}+\frac{u(x)-u(-x)}{2}$$ -we have -$$\lim_{\varepsilon \to 0} \int_{\lvert x \rvert > \varepsilon} \frac{u(x)}{x}\, dx= \lim_{\varepsilon \to 0} \left(\int_{\lvert x \rvert > \varepsilon} \frac{u(x)+u(-x)}{2x}\, dx + \int_{\lvert x \rvert > \varepsilon} \frac{u(x)-u(-x)}{2x}\, dx\right)$$ -and the first integral on the right hand side vanishes because its argument is odd. By contrast, the second integral has an even argument, so we can rewrite it as follows: -$$\lim_{\varepsilon \to 0}\int_{\lvert x \rvert > \varepsilon} \frac{u(x)-u(-x)}{2x}\, dx = \lim_{\varepsilon \to 0} \int_\varepsilon^\infty \frac{u(x)-u(-x)}{x}\, dx.$$ - -REPLY [8 votes]: We can write -$$I(\varepsilon):=\int_{\Bbb R\setminus [-\varepsilon,\varepsilon]}\frac{u(x)}xdx=\int_{-\infty}^{-\varepsilon}\frac{u(x)}xdx+\int_{\varepsilon}^{\infty}\frac{u(x)}xdx.$$ -In the first integral of the RHS, we do the substitution $t=-x$, then -$$I(\varepsilon)=-\int_{\varepsilon}^{+\infty}\frac{u(t)}tdt+\int_{\varepsilon}^{\infty}\frac{u(x)}xdx=\int_{\varepsilon}^{+\infty}\frac{u(t)-u(-t)}tdt.$$ -Now we can conclude, since, by the fundamental theorem of calculus, the integral $\int_0^{+\infty}\frac{u(t)-u(-t)}tdt$ is convergent. Indeed, -$$u(t)-u(-t)=\int_{-t}^tu'(s)ds=\left[su'(s)\right]_{-t}^t-\int_{-t}^tsu''(s)ds\\= -t(u'(t)+u'(-t))-\int_{-t}^tsu''(s)ds$$ -hence, for $0<t\leq 1$, $\left|\frac{u(t)-u(-t)}{t}\right|\leq |u'(t)+u'(-t)|+t\max|u''|$ stays bounded, while for large $t$ the integrand vanishes because $u$ has compact support; so the integral indeed converges.<|endoftext|> -TITLE: Submodule of free module over a p.i.d. is free even when the module is not finitely generated? -QUESTION [43 upvotes]: I have heard that any submodule of a free module over a p.i.d. is free. -I can prove this for finitely generated modules over a p.i.d. But the proof involves induction on the number of generators, so it does not apply to modules that are not finitely generated. -Does the result still hold? What's the argument? - -REPLY [50 votes]: Let $F$ be a free $R$-module, where $R$ is a PID, and $U$ be a submodule. Then $U$ is also free (and the rank is at most the rank of $F$). Here is a hint for the proof. -Let $\{e_i\}_{i \in I}$ be a basis of $F$. Choose a well-ordering $\leq$ on $I$ (this requires the Axiom of Choice). Let $p_i : F \to R$ be the projection on the $i$th coordinate. Let $F_i$ be the submodule of $F$ generated by the $e_j$ with $j \leq i$. Let $U_i = U \cap F_i$. Then $p_i(U_i)$ is a submodule of $R$, i.e. has the form $R a_i$. Choose some $u_i \in U_i$ with $p_i(u_i)=a_i$. If $a_i=0$, we may also choose $u_i=0$. -Now show that the $u_i \neq 0$ constitute a basis of $U$. Hint: Transfinite induction. -The same proof shows the more general result: If $R$ is a hereditary ring (every ideal of $R$ is projective over $R$), then any submodule of a free $R$-module is a direct sum of ideals of $R$.<|endoftext|> -TITLE: Is Fuzzy Logic Needed? -QUESTION [5 upvotes]: I have a big doubt about fuzziness. If statistics answers all the questions that we generally see in fuzzy theory, then why should one learn fuzzy theory? Or is there any gap in statistics? I mean: - -Are there any problems which can be solved by fuzzy theory and not by statistics? - -Please clarify with examples. -Thanks in advance. - -REPLY [2 votes]: There are plenty of domains in mathematics that bring no general interesting new theorem but that are interesting in themselves, in that they bring new tools to prove things.
One of the greatest contributions of number theory, for me, is that it gave birth to modern algebra by studying the groups $\mathbb Z / n\mathbb Z$ and $(\mathbb Z / n\mathbb Z)^{\times}$. Even though these are the simplest groups of all time (they are cyclic or direct products of cyclic groups), their study as number-theoretic objects also gave rise to the theory of characters. If you have read a little of Dirichlet's proof of the infinitude of primes in congruence classes, you see that it is also a wonderful use of complex analysis, one that nobody expected to bear on questions about prime numbers at the time the proof was first written. -And if my memory serves me well, I think fuzzy logic is used in developing artificial intelligence algorithms. Here's a Wikipedia link that confirms my suspicions: http://en.wikipedia.org/wiki/Fuzzy_logic -And maybe you don't see any interest as a statistician, but that only means it has no interest to you, and that is okay, but still, some people might need it. I don't need much statistics in what I do, still I think that statistics are very useful to other people. -Hope that helps,<|endoftext|> -TITLE: If $a^m=b^m$ and $a^n=b^n$ for $(m,n)=1$, does $a=b$? -QUESTION [6 upvotes]: Possible Duplicate: -Prove that $a=b$, where $a$ and $b$ are elements of the integral domain $D$ - -Something I'm curious about, suppose $a,b$ are elements of an integral domain, such that $a^m=b^m$ and $a^n=b^n$ for $m$ and $n$ coprime positive integers. Does this imply $a=b$? -Since $m,n$ are coprime, I know there exist integers $r$ and $s$ such that $rm+sn=1$. Then -$$ -a=a^{rm+sn}=a^{rm}a^{sn}=b^{rm}b^{sn}=b^{rm+sn}=b. -$$ -However, I'm worried that if $r$ or $s$ happen to be negative then $a^{rm}, a^{sn}$, etc may not make sense, and moreover, I don't see where the fact that I'm working in a domain comes into play. How can this be remedied? - -REPLY [4 votes]: If $a=0$ or $b=0$, the conclusion follows, so we may assume $a\neq 0$ and $b\neq 0$. -Suppose that $s\lt 0$ (in which case $r\gt 0$). Write $s=-t$ with $t\gt 0$. Then $rm = 1+tn$. So we have -$$aa^{tn} = a^{1+tn} = a^{rm} = (a^m)^r = (b^m)^r = b^{rm} = b^{1+tn} = bb^{tn}.$$ -Since $a^{tn} = (a^n)^t = (b^n)^t = b^{tn}$, we conclude from $aa^{tn}=bb^{tn}$ that $a=b$. -A symmetric argument holds if $r\lt 0$. -(Basically, we are going to the field of fractions and then clearing denominators "behind the scenes"). -Alternatively, say $m = qn+r$, $0\leq r\lt n$. Then $a^ra^{qn} = b^rb^{qn}=b^ra^{qn}$, which yields $a^r=b^r$; so you can replace $m$ with its remainder modulo $n$. Repeating as in the Euclidean Algorithm, we get that if $a^n=b^n$ and $a^m=b^m$, then $a^{\gcd(n,m)} = b^{\gcd(n,m)}$. - -REPLY [4 votes]: That works as long as you pass to the fraction field. But using fractions, the proof is much simpler: excluding the trivial case $\rm\,b=0,\,$ we have $\rm\:(a/b)^m = 1 = (a/b)^n\:$ hence the order of $\rm\,a/b\,$ divides the coprime integers $\rm\,m,n,\,$ thus the order must be $1.\,$ Therefore $\rm\,a/b = 1,\,$ so $\rm\,a = b.\,$ -For a proof avoiding fraction fields see this proof that I taught to a student. Conceptually, both proofs exploit the innate structure of an order ideal. Often hidden in many proofs in elementary number theory are various ideal structures, e.g. denominator/conductor ideals in irrationality proofs. Pedagogically, it is essential to bring this structure to the fore. - -REPLY [2 votes]: Hint: Let $d$ be the least positive integer such that $a^d=b^d$. Show that $d|n$ and $d|m$.
-This approach will not require $R$ commutative, or even that $R$ have a multiplicative identity, only that it not have zero divisors. -Specifically, use the division algorithm to show that if $n=dq+r$ with $0\leq r<d$ and $r>0$, then $a^r = b^r$, contradicting that $d$ was the least example.<|endoftext|> -TITLE: How much Set Theory before Topology? -QUESTION [7 upvotes]: I was reading Baby Rudin for Real Analysis and wanted to explore Topology a little deeper. I bought George Simmons' Introduction to Topology and Modern Analysis and found myself liking it. I am having some problems every once in a while with prerequisites. -How much Set theory do I need to learn before diving into the aforementioned book? -Also, the book is divided into 3 parts: Topology, Operators and Algebras of Operators. -How far can I get with just a good understanding of single-variable (SV) calculus? - -REPLY [3 votes]: I assume that for the book you are asking about, the preliminaries included by the author should be sufficient. -One thing which I consider useful is Zorn's lemma, which is in the first chapter of Simmons' book. But even if you choose a different text, you will bump into using Zorn's lemma a few times. -Other than that, the only things that I can think of are ordinals and transfinite induction. But they are perhaps less important. Many of the books that are intended as a first course in general topology don't include them. (Willard, for instance, does include these two topics.) -However, I don't think you should worry too much about preliminaries. You can always go back to a topic if you find, at some point in the book, that you need to know something from set theory (or any other area).<|endoftext|> -TITLE: A non-arithmetical set? -QUESTION [5 upvotes]: A set is called arithmetical if it can be defined by a first-order formula in Peano arithmetic. I first encountered these sets when exploring the arithmetical hierarchy in the context of computability theory. However, I have not encountered any examples of sets that are not arithmetical. -Is there a canonical example of a non-arithmetical set? -Thanks! - -REPLY [3 votes]: The usual examples are things like: -$0^{(\omega)}$ (the $\omega$-th Turing jump) or anything bigger. -Any arithmetically generic set. -The set of ordinal notations, or equivalently the indices for computable well-orderings, or even the indices of well-founded computable trees. -I figured I'd add these because these are natural examples, not merely a diagonalization. -Though the set of Gödel numbers of true statements of arithmetic is quite natural too.<|endoftext|> -TITLE: You must be joking ... math and fun -QUESTION [6 upvotes]: Long ago I took an oral exam in Algebra and my professor asked me the following: "Let $G$ be an abelian group of order 17020. What is its commutator subgroup $G’$?" At first I focused on factoring the number, but in a few seconds I realized with a smile that he said abelian and of course gave the right answer. Afterwards, I found the question very funny. -On another occasion I sat down with another professor over lunch and we discussed group theory and suddenly he quick-wittedly remarked: can you classify all groups $G$ with only a single non-normal subgroup $H$? Of course, such an $H$ must coincide with all of its conjugates and hence be normal, and without saying anything we could not resist roaring with laughter … -Have you also come across some of these sorts of math jokes?
- -REPLY [19 votes]: I’m fond of the first exercise in the second edition of Edward Scheinerman’s Mathematics: A Discrete Introduction: - -Simplify the following algebraic expression: $$(x-a)(x-b)(x-c)\dots(x-z)$$<|endoftext|> -TITLE: The most efficient way to tile a rectangle with squares? -QUESTION [15 upvotes]: I was writing up a description of the Euclidean algorithm for a set of course notes and was playing around with one geometric intuition for the algorithm that involves tiling an $m \times n$ rectangle with progressively smaller squares, as seen in this animation linked from the Wikipedia article: - -I was looking over this animation and was curious whether or not the squares that you would place down in the course of this algorithm necessarily gave the minimum number of squares necessary to cover the entire rectangle. -More formally: suppose that you are given an $m \times n$ rectangle, where $m, n \in \mathbb{N}$ and $m, n > 0$. Your goal is to tile this rectangle with a set of squares such that no two squares overlap. Given $m$ and $n$, what is the most efficient way to place these tiles? -Is the tiling suggested by the Euclidean algorithm (that is, in an $m \times n$ rectangle, with $m \ge n$, always place an $n \times n$ square, then recurse on the remaining rectangle) always optimal? If not, is there a more efficient algorithm for this problem? -I am reasonably sure that the Euclidean approach is correct, but I was having a lot of trouble formalizing the intuition with a proof due to the fact that there are a lot of different crazy ways you can try to place the squares. For example, I'm not sure how to have the proof handle the possibility that squares could be at angles, or could have side lengths in $\mathbb{R} - \mathbb{Q}$. -Thanks! - -REPLY [7 votes]: Euclid doesn't always minimize the number of squares. E.g., with an $8\times9$ rectangle, Euclid says use an 8-square and 8 1-squares, 9 squares in all. But you can do it with a 5, two 4s, a 3, a 2, and two 1s, making 7 squares total. You put the 5 in a corner, then put the 4s in the corners that have room for them; that leaves a $3\times5$ rectangle to fill, which you can do by Euclid.<|endoftext|> -TITLE: Find the modular inverse of $19\pmod{141}$ -QUESTION [7 upvotes]: I'm doing a question that states to find the inverse of $19 \pmod {141}$. -So far this is what I have: -Since $\gcd(19,141) = 1$, an inverse exists, so we can use the Euclidean algorithm to solve for it. -$$ -141 = 19\cdot 7 + 8 -$$ -$$ -19 = 8\cdot 2 + 3 -$$ -$$ -8 = 3\cdot 2 + 2 -$$ -$$ -3 = 2\cdot 1 + 1 -$$ -$$ -2 = 2\cdot 1 -$$ -The textbook says that the answer is 52 but I have no idea how they got the answer and am not sure if I'm on the right track. An explanation would be appreciated! Thanks! - -REPLY [2 votes]: Using the method of Gauss, working modulo 141: 1/19 = 8/152 = 8/11 = 104/143 = 104/2 = 52 -As asked by Nate Eldredge, I am explaining it in detail: -We have to find 1/19 (mod 141). So we take the fraction 1/19 and multiply 19 by a number that brings it nearest to (just past) 141. If we multiply 19 by 8, we get 1/19 = 8/152. On taking modulo 141, 8/152 becomes 8/11. Now multiply 11 by a number that takes it just past 141. This number is 13. So, when 8/11 is multiplied by 13, we get 104/143. Taking modulo 141, we get 104/2 = 52. -This can also be solved in the following manner: -Let x = 1/19 (mod 141). So, 19x = 141y + 1. This can be written as 19x - 1 = 141y. Now, we can compute y = -1/141 (mod 19) = -1/8 (since 141 ≡ 8 (mod 19)) = -2/16 = -2/-3 = 2/3 = 12/18 = -12 = +7.
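[Computational note, not part of the original answer: both hand computations can be cross-checked with the extended Euclidean algorithm. An illustrative Python sketch (the helper name is my own):

    def ext_gcd(a, b):
        """Return (g, s, t) with g = gcd(a, b) = s*a + t*b."""
        if b == 0:
            return a, 1, 0
        g, s, t = ext_gcd(b, a % b)
        return g, t, s - (a // b) * t

    g, s, t = ext_gcd(19, 141)
    print(g, s % 141)    # prints: 1 52, and indeed 19*52 = 988 = 7*141 + 1
]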
-As y = 7, x = (141*7+1)/19 = 52. In this way, we obtained the same answer but using a much simpler computation.<|endoftext|> -TITLE: How to prove a property regarding periodicities of points in the Mandelbrot set? -QUESTION [11 upvotes]: While studying a visual representation of the Mandelbrot set, I have come across a very interesting property: -For any point inside the same primary bulb (a circular-like 'decoration' attached to the main body of the set), the periodicity of that point (i.e. the pattern of values that emerges when $f(z) = z^2 + c$ is iterated with the '$c$' value that represents that point) is constant. -Does anyone know how to prove this property in a mathematical way? Is there more than one way in which this could be shown? - -REPLY [3 votes]: For the first question: An application of the $\lambda$-lemma. -Theorem: -Let $c_0$ and $c_1$ be in the same component $U$ of $\mathbb C \backslash \partial M$ ($M$ is the Mandelbrot set); then $J_{c_0}$ and $J_{c_1}$ (the Julia sets) are homeomorphic (and even quasiconformally homeomorphic) and the dynamics of the two polynomials $z^2+c_0$ and $z^2+c_1$ are conjugate on the Julia sets. -Proof: First notice that $M$ and $\mathbb C \backslash M$ are connected (Douady-Hubbard theorem). So $U$ is conformally equivalent to $\mathbb D$ (because simply connected). -Let $Q_k \subset U \times \mathbb C$ be the set defined by the equation $P_c^k(z)=z$ where $P_c(z) = z^2+c$ and denote by $p_k : Q_k \longrightarrow U$ the projection onto the first factor. $Q_k$ is closed and $p_k$ is the restriction to $Q_k$ of the projection onto the first factor, so $p_k$ is a proper map. Moreover the two functions $(c,z)\mapsto P_c^k(z)-z$ and $(c,z)\mapsto (P_c^k(z))'-1$ vanish simultaneously at a discrete set $Z \subset U \times \mathbb C$, and the map -$p_k : Q_k \backslash p_k^{-1}(p_k(Z)) \longrightarrow U \backslash p_k(Z)$ -is a finite sheeted covering map: it's proper and a local homeomorphism. -Let $c^\star$ be a point of $p_k(Z)$ and $U' \subset U$ a simply connected neighborhood of $c^\star$ containing no other point of $p_k(Z)$. Denote by: $Q'_k = p_k^{-1}(U')$ and $Q_k^\star = Q'_k \backslash p_k^{-1}(c^\star)$. Denote by $Y_i$ the connected components of $Q^\star_k$; each of these is a finite cover of $U^\star = U' \backslash \{c^\star\}$. The closure of each $Y_i$ in $Q'_k$ is $Y_i \cup \{y_i\}$ for some $y_i \in \mathbb C$. If $(P^k_{c^\star})'(y_i)\neq 1$ then by the implicit function theorem $Q'_k$ is near $y_i$ the graph of an analytic function $\phi_i : U' \longrightarrow \mathbb C$. -Now let $Y_i$ be a component such that $(P^k_{c^\star})'(y_i)=1$. If $(c,z)\mapsto (P_c^k)'(z)$ is not constant on $Y_i$, then its image contains a neighborhood of $1$, in particular points of the unit circle, and the corresponding points of $Y_i$ are indifferent cycles that are not persistent. This cannot happen, and $(c,z)\mapsto (P_c^k)'(z)$ is constant on every such component $Y_i$. -From the above it follows that if $R_k \subset Q_k$ is the subset of repelling cycles, then the projection $p_k : R_k \longrightarrow U$ is a covering map. Indeed, it is a local homeomorphism by the implicit function theorem, and proper since a sequence $(c_n,z_n)$ in $R_k$ converging in $Q_k$ cannot converge to a point $(c^\star,z^\star)$ where $(P^k_{c^\star})'(z^\star) = 1$. Hence the set of all repelling cycles of $P_c$ is a holomorphic motion. By the $\lambda$-lemma, this map extends to the closure of the set of repelling points, i.e.
to the Julia set $J_c$, which also forms a holomorphic motion. $\square$ -See also: Mañé-Sad-Sullivan theorem. -I don't really understand your second question.<|endoftext|> -TITLE: How to find the sum of this infinite series. -QUESTION [5 upvotes]: How to find the sum of the following series? -Kindly guide me about the general term; then I can take a shot at summing it up. -$$1 - \frac{1}{4} + \frac{1}{6} -\frac{1}{9} +\frac{1}{11} - \frac{1}{14} + \cdots$$ - -REPLY [2 votes]: Using the principal value for the doubly infinite harmonic series yields -$$ -\begin{align} -\sum_{k=0}^{\infty} \left(\dfrac1{5k+1} - \dfrac1{5k+4}\right) -&=\frac15\sum_{k=-\infty}^\infty\frac{1}{k+1/5}\\ -&=\frac{\pi}{5}\cot\left(\frac{\pi}{5}\right)\\ -&=\frac{\pi}{5}\sqrt{\frac{5+2\sqrt{5}}{5}} -\end{align} -$$<|endoftext|> -TITLE: weak convergence in $L^p$ plus convergence of norm implies strong convergence -QUESTION [25 upvotes]: Having trouble with this problem. Any ideas? -Let $\Omega$ be a measure space. Let $f_n$ be a sequence in $L^p(\Omega)$ with $1<p<\infty$, and suppose that $f_n \rightharpoonup f$ weakly in $L^p(\Omega)$ and that $\|f_n\|_{L^p} \to \|f\|_{L^p}$. Show that $f_n \to f$ strongly in $L^p(\Omega)$.<|endoftext|> -TITLE: Supplement to Herstein's Topics in Algebra -QUESTION [7 upvotes]: I am currently studying Group Theory from I.N. Herstein's Topics in Algebra. However, after studying about 50 pages of it I felt it lacks a bit of geometrical flavour (one of my friends described via email some time back how dihedral groups were treated in his course). He also said that Herstein's treatment is not exactly very modern though it supposedly has some good exercises. -Question: Do I need a supplement to Topics in Algebra that treats the subject in a "modern" way or do I need to change the book (I am skeptical about the second alternative I described)? I would be obliged if someone told me what to do. It seems that I am in a fix. - -REPLY [11 votes]: I first learned algebra from Herstein in Honors Algebra as an undergraduate, so the book will always have a special place in my heart. Is it old fashioned? I dunno if at this level it really makes that much of a difference. No, it doesn't have any category theory or homological algebra and no, it doesn't have a geometric flair. What it does have is the sophisticated viewpoint and enormous clarity of one of the last century's true masters of the subject and perhaps the single best set of exercises ever assembled for an undergraduate mathematics textbook. When a graduate student is scared to take his qualifier in algebra, I give him some very simple advice: I tell him to get a copy of the second edition of Herstein and to try to do all the exercises. If he can do 95 percent of them, he's ready for the exam. Period. -So to be honest, I don't think it's really necessary when you're first learning algebra to get a more "modern" take on it. It's more important to develop a strong foundation in the basics and Herstein will certainly do that. If you want to look at a book that focuses on the more geometrical aspects as a complement to Herstein at about the same level, the classic by Michael Artin is a very good choice. (Make sure you get the second edition, the first is very poorly organized in comparison!) Another terrific book you can look at is E. B. Vinberg's A Course In Algebra, which is my single favorite reference on basic algebra. You'll find it as concrete as Artin with literally thousands of examples, and it's beautifully written and much gentler than either of the others while still building to a very high level.
I think it may be just what you're looking for as a complement to Herstein.<|endoftext|> -TITLE: Find the possible value from the following. -QUESTION [6 upvotes]: Find the possible values from the following. -I'm not able to reach a concrete conclusion, as I'm unable to get the essence of the question; it is still not clear to me. - -$x$, $y$, $z$ are distinct reals such that $y=x(4-x)$, $z=y(4-y)$, $x=z(4-z)$. The possible values of $x+y+z$ are: -$$\begin{array}{l} A.\ 4 && C.\ 7 \\ B.\ 6 && D.\ 9 \end{array}$$ - -REPLY [2 votes]: Composing the functions, we get -$$ -\begin{align} -0 -&=x^8-16x^7+104x^6-352x^5+660x^4-672x^3+336x^2-63x\\ -&=x(x-3)(x^3-7x^2+14x-7)(x^3-6x^2+9x-3)\tag{1} -\end{align} -$$ -The roots $x=0$ and $x=3$ lead to indistinct $x$, $y$, and $z$. -$x^3-7x^2+14x-7$ has 3 real roots in $[0,4]$ whose sum is $7$ (the negative of the coefficient of $x^2$). -$x^3-6x^2+9x-3$ has 3 real roots in $[0,4]$ whose sum is $6$ (the negative of the coefficient of $x^2$). -$x$, $y$, and $z$ all satisfy $(1)$. -$t(4-t)$ rotates the roots of the cubics. -Thus, the possible values of $x+y+z$ are $6$ and $7$.<|endoftext|> -TITLE: Visualizing Commutator of Two Vector Fields -QUESTION [15 upvotes]: I'm reading a book on calculus, the part about vector fields on -manifolds. It's a nice book, but with a severe drawback --- it has no pictures. -I like how vectors are treated algebraically, as derivatives over a local ring (ring of germs). But I still want to use a "geometrical" view on vector fields. -The problem is I can't imagine vector field "multiplication" as a composition of derivatives. And thus I can't picture the commutator of two -vector fields. -Has anybody here got pictures to help me? - -REPLY [15 votes]: In Gauge fields, knots and gravity by J. Baez and J. P. Muniain the authors give the following two pictures to visualize the commutator: - - -I hope it helps you.<|endoftext|> -TITLE: The volume and surface area of pipe? -QUESTION [5 upvotes]: A line segment travels along a curve from point A to point B, staying at a right angle to it. -I would like to find the volume and surface area of the closed region shown in the picture. - -Could you please give me a hint on how to define the volume and surface area with integrals? -Is it correct that the volume formula is as shown below? -$$V=\pi r^2\int _{x_1}^{x_2} dS=\pi r^2\int _{x_1}^{x_2} \sqrt{1+(f'(x))^2}dx$$ -I do not know how to define the surface area of the shape. -UPDATE: -Important note: the offset curves that are the parallels of a function may not be functions. My related questions about parallel functions: -Parallel functions. -What is the limit distance to the base function if the offset curve is a function too? -Thanks a lot for answers and advice. - -REPLY [3 votes]: Assume for the moment that your plane curve $\gamma$ is parametrized by arc length: -$$\gamma:\quad s\mapsto\bigl(u(s),v(s),0\bigr)\qquad(0\leq s\leq L)\ .$$ -Then the body of your pipe has the following parametrization: -$${\bf f}:\quad (s,t,\phi)\mapsto\left\{\eqalign{x&=u-t\dot v\cos\phi\cr y&=v+t\dot u\cos\phi\cr z&=t\sin\phi\cr}\right.\quad ,$$ -and putting $t:=r$ you get a parametrization of the surface of the pipe.
-Using the Frenet formulas $\ddot u=-\kappa\dot v$, $\ \ddot v=\kappa \dot u$, where $\kappa=\kappa(s)$ denotes the curvature of $\gamma$, we obtain
-$$\eqalign{{\bf f}_s&=\bigl(\dot u(1-t\kappa\cos\phi),\dot v(1-t\kappa\cos\phi),0\bigr)\ ,\cr
-{\bf f}_t&=(-\dot v\cos\phi,\dot u\cos\phi,\sin\phi)\ ,\cr
-{\bf f}_\phi&=(\dot v t\sin\phi, -\dot u t\sin\phi, t\cos\phi)\ .\cr}$$
-From these equations one computes
-$${\bf f}_\phi\times{\bf f}_s=(1-t\kappa\cos\phi)\bigl(-\dot v t\cos\phi,\dot u t\cos\phi, t\sin\phi\bigr)\ ,\qquad |{\bf f}_\phi\times{\bf f}_s|=t(1-t\kappa\cos\phi)\ ,$$
-and
-$$J_{\bf f}={\bf f}_t\cdot({\bf f}_\phi\times{\bf f}_s)=t(1-t\kappa\cos\phi)\ .$$
-The surface of the pipe now computes to
-$$\omega=\int_0^L\int_0^{2\pi}|{\bf f}_\phi\times{\bf f}_s|_{t:=r}\ {\rm d}(s,\phi)=2\pi r L\qquad\Bigl(=2\pi r\int_a^b\sqrt{1+f'(x)^2}\ dx\Bigr)\ ,$$
-and its volume to
-$$V=\int_0^L\int_0^r\int_0^{2\pi}J_{\bf f}(s,t,\phi)\ {\rm d}(s,t,\phi)=2\pi{r^2\over 2}L\qquad\Bigl(=\pi r^2\int_a^b\sqrt{1+f'(x)^2}\ dx\Bigr)\ .$$
-These computations show that your conjectured formulas are indeed true: The gain in volume and surface on the outside of a bend of the pipe is exactly outweighed by the loss on the inside.
-In all of this we have tacitly assumed that ${\bf f}$ is injective in the considered domain. This is guaranteed as long as $\ r \kappa(s)<1$ $\ (0\leq s\leq L)$. If this condition is not fulfilled we have "overlap", i.e., the map ${\bf f}$ producing the body of the pipe is no longer injective. In this case the integral $I:=\int_0^L\int_0^r\int_0^{2\pi}J_{\bf f}(s,t,\phi)\ {\rm d}(s,t,\phi)$ is no longer equal to the actual volume of the pipe but it is equal to a "weighted" volume where each volume element ${\rm d}(x,y,z)$ is counted as many times as it is covered by the representation. Computing the actual volume will be difficult in such a case, insofar as one might have to deal with pieces of envelope surfaces turning up in the process.<|endoftext|>
-TITLE: Definition of embedded and immersed curve
-QUESTION [7 upvotes]: What does it mean to say that a curve in $\mathbb{R}^2$ is embedded? I think a curve in $\mathbb{R}^3$ is embedded if it lies on a plane, but what does it mean in 2d? I searched everywhere but I can't find an answer.
-Also, is there a simple way of seeing an immersed curve in $\mathbb{R}^2$?
-Thanks
-
-REPLY [13 votes]: In the smooth context, an embedding is a diffeomorphism onto its image. A curve in $\mathbb R^2$ is really a smooth map $\gamma:\mathbb R\to \mathbb R^2$. This map must have a smooth inverse $\gamma^{-1}: \gamma(\mathbb R)\to \mathbb R$ in order for the curve to be embedded. In particular, this requires $\gamma'$ to be nonzero (otherwise the inverse can't be smooth).
-An embedded curve can look like this:
-
-Having an immersed curve asks only for nonzero derivative. Being a diffeomorphism is not required. An immersed curve can look like this:
-
-To make the distinction trickier, an injective immersion can fail to be an embedding. (As Zhen Lin said.) The figure below shows an immersed line: the immersion is such that the limits $\lim_{t\to \pm\infty}\gamma(t)$ are the "intersection" point. There is no actual intersection: the curve passes through the center of the figure only once. This is an injective immersion. Not an embedding, because the inverse map $\gamma^{-1}$ is not even continuous.
-
-
-I think a curve in $\mathbb R^3$ is embedded if it lies on a plane
-
-This is totally wrong (as Neal pointed out).
Being embedded into any space means the same thing: diffeomorphism onto the image. Or homeomorphism, when we are in the topological setting.<|endoftext|>
-TITLE: Expressing $2002^{2002}$ as a sum of cubes
-QUESTION [24 upvotes]: This is the problem: Determine the smallest positive integer $k$
-such that there exist integers $x_1, x_2 , \ldots , x_k$ with
-${x_1}^3+{x_2}^3+{x_3}^3+\cdots+{x_k}^3=2002^{2002} $. How to approach these kinds of problems?
-Thanks in advance!
-
-REPLY [65 votes]: $k=4$ is the smallest:
-Certainly, it can be done using 4 cubes, by noticing that $2002 = 10^3 + 10^3 + 1^3 +1^3$, and then using $2002^{2002} = 2002 \times 2002^{2001} = (10^3 + 10^3 + 1^3 +1^3)\times (2002^{667})^3$, and multiplying out the brackets.
-Since the number can be represented by 4 cubes, it suffices to show that it cannot be done with less than 4.
-Since $2002 \equiv 4 \pmod 9$ we have $2002^3 \equiv 64 \equiv 1 \pmod 9$ so that $2002^{3n} \equiv 1 \pmod 9$, and writing $2002^{2002}=2002^{3\cdot 667}\cdot 2002$ shows the original number is equivalent to 4 (mod 9).
-Looking at cubes mod 9, they are equivalent to 0, 1 or -1, so at least 4 are required for any number equivalent to $4 \pmod 9$.<|endoftext|>
-TITLE: Why does the Gauss-Bonnet theorem apply only to even number of dimensions?
-QUESTION [10 upvotes]: One can use the Gauss-Bonnet theorem in 2D or 4D to deduce topological characteristics of a manifold by doing integrals over the curvature at each point.
-First, why isn't there an equivalent theorem in 3D? Why can the theorem not be proved for an odd number of dimensions (i.e. what part of the proof prohibits such generalization)?
-Second and related, if there were such a theorem, what interesting and difficult problem would become easy/inconsistent? (the second question is intentionally vague, no need to answer if it is not clear)
-
-REPLY [3 votes]: First, for a discussion involving Chern's original proof, check here, page 18.
-I think the reason is that the original Chern-Gauss-Bonnet theorem can be treated topologically as
-$$
-\int_{M} e(TM)=\chi_{M}
-$$
-and for odd dimensional manifolds, the Euler class is "almost zero" as $e(TM)+e(TM)=0$. So it vanishes in de Rham cohomology. On the other hand, $\chi_{M}=0$ if $\dim(M)$ is odd. So the theorem now trivially holds for odd dimension cases.
-Another perspective is through the Atiyah-Singer index theorem. The Gauss-Bonnet theorem can be viewed as a special case involving the index of the de Rham Dirac operator:
-$$
-D=d+d^{*}
-$$
-But on odd-dimensional manifolds, the index of $D$ is zero. Therefore both sides of Gauss-Bonnet are zero.
-I heard via street rumor that there is some hope to "twist" the Dirac operator in K-theory, so that the index theorem gives non-trivial results for odd dimensions. But this can be rather involved, and is not my field of expertise. One expert on this is Daniel Freed, whom you may contact about this.<|endoftext|>
-TITLE: An "AGM-GAM" inequality
-QUESTION [22 upvotes]: For positive real numbers $x_1,x_2,\ldots,x_n$ and any $1\leq r\leq n$ let $A_r$ and $G_r$ be, respectively, the arithmetic mean and geometric mean of $x_1,x_2,\ldots,x_r$.
-Is it true that the arithmetic mean of $G_1,G_2,\ldots,G_n$ is never greater than the geometric mean of $A_1,A_2,\ldots,A_n$?
-It is obvious for $n=2$, and I have a (rather cumbersome) proof for $n=3$.
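-(A quick random search, a minimal numpy sketch of mine rather than a proof, finds no counterexample for larger $n$:)
-    import numpy as np
-    rng = np.random.default_rng(0)
-    n = 8
-    for _ in range(10_000):
-        x = rng.uniform(0.01, 10.0, size=n)
-        A = np.cumsum(x) / np.arange(1, n + 1)                  # A_1, ..., A_n
-        G = np.exp(np.cumsum(np.log(x)) / np.arange(1, n + 1))  # G_1, ..., G_n
-        assert G.mean() <= np.prod(A) ** (1.0 / n) + 1e-9       # AM of G's <= GM of A's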
-
-REPLY [8 votes]: It's a special case ($r=0$, $s=1$) of the mixed means inequality
-$$
-M_n^s[M^r[\bar a]]\le M_n^r[M^s[\bar a]], \quad r,s\in \mathbb R,\ r<s.
-$$<|endoftext|>
-TITLE: What is the integral of function $f(x) = (\sin x)/x$
-QUESTION [7 upvotes]: I want to know if the function $\sin x/x$ is integrable and if it is, then what's its integral?
-My high school book says it's a non-integrable function, while WolframAlpha says its integral is $\operatorname{Si}(x)$ + constant.
-Please shed some light on this topic and explain from a basic level: what is $\operatorname{Si}(x)$, who used it for the first time, etc.
-
-REPLY [4 votes]: In general, a function $f:\mathbb{R} \longrightarrow \mathbb{R}$ is integrable if it is bounded and the set of discontinuities (i.e. $x=0$ in this case) has measure zero. Intuitively, this more or less amounts to the function being defined except at reasonably few exceptional points (i.e. a finite number of points as in this case is fine), so the function is integrable since it is clearly also bounded. As the others have mentioned and linked to, the integral is nontrivial and has a special function defined to be its antiderivative called $Si(x)$.<|endoftext|>
-TITLE: Representation of smooth function
-QUESTION [7 upvotes]: Is it true that any smooth function $f\colon \mathbb{R}^n \to \mathbb{R}^n$ can be represented as
-$$
- f(x) = \nabla U(x) + g(x)
-$$
-where $U(x)$ is a scalar function and $\langle g(x), f(x) \rangle \equiv 0$? Is this representation unique?
-
-REPLY [2 votes]: Let me summarize my comments as an answer to your original question. Given a smooth map $f$, consider the problem of existence of a pair $(U,g)$ such that $U$ is a smooth scalar function and $g$ is a smooth map and such that
-$$
- f = \nabla U+g, \tag{1}
-$$
-$$
-f\cdot g = 0. \tag{2}
-$$
-Multiplying both sides in $(1)$ by $f$, we obtain
-$$
- \|f\|^2 - f\cdot \nabla U = f\cdot g = 0.
-$$
-Hence, problem $(1)+(2)$ can be reduced to the problem of existence of $U$ such that
-$$
-f\cdot \nabla U = \|f\|^2 \tag{3}
-$$
-and if the solution of the latter 1st order linear PDE exists, then $g = f - \nabla U$ is the function needed in $(1),(2)$. Unfortunately, I cannot say anything about the existence of a solution of this PDE in the general case (i.e. when $f$ is smooth).
-Uniqueness does not hold: take $f = 0$, then we are looking for $(U,g)$ such that
-$$
- g = - \nabla U.
-$$
-Clearly, any smooth $U$ solves the problem.<|endoftext|>
-TITLE: Math department space design ideas
-QUESTION [17 upvotes]: I'm looking for resources on designing space for academic departments, particularly mathematics departments. We've been asked to provide ideas on designing a "dream math department". I'd like to gather some ideas for doing this. What ideal characteristics should the space/architecture/furniture in a good math department have?
-I'd like innovative ideas for:
-Classrooms, offices, student lounges, lighting, furniture, bookshelves, artwork. Any suggestions should be mindful of improving creative output and innovative instruction and ideally be accompanied by rationale.
-I'm sure (although having resources allocated for this is rare) I'm not the only faculty member faced with this. Any links to, or thoughts about, math department design ideas would be helpful.
-As strange as this question is, it may indirectly help departments flourish and may indirectly benefit mathematics in the long run.
-(As I'm writing this question all I can think of is a window I saw in a classroom in the math department at MIT once that was stuffed with newspaper...but I nevertheless ask.)
-EDIT: In light of Mariano's very accurate assessment of this question as off-topic, I'd like to apologize to the moderators and the forum for the question. Also, though, I'd like to clarify my rationale for asking: this site is more or less for anything about mathematics (even...gulp...for homework problems), so I thought the question may not be too much of a stretch because good answers to it may affect the allocation of resources to mathematics departments. We aren't individually inclined to think about questions like these, although not doing so may lead to our being put in substandard space because administrators and/or other departments with more time to put value on these things actually put in their two cents. A site like this can be used to "crowdsource" such a problem without too much loss to any individual and yet with great gain to the group. Certainly this question is not mathematics, but is directly related to the health of mathematics in mathematics departments...and hence I asked the question.
-
-REPLY [2 votes]: There must be a low faculty-to-coffee-machine ratio.<|endoftext|>
-TITLE: Why does the equation $x^2-82y^2=\pm2$ have solutions in every $\mathbb{Z}_p$ but not in $\mathbb{Z}$?
-QUESTION [10 upvotes]: I have been working on an exercise in H. P. F. Swinnerton-Dyer's book, A Brief Guide to Algebraic Number Theory. The question is like this:
-
-Show that $x^2-82y^2=\pm2$ has solutions in every $\mathbb{Z}_p$ but not in $\mathbb{Z}$. What conclusion can you draw about $\mathbb{Q}(\sqrt{82})$?
-
-I thought it might be solved by using Hensel's lemma. But I can't give an answer.
-Thanks in advance!
-
-REPLY [2 votes]: Regarding the last part of the question, this tells you that the principal genus of discriminant $328$ contains a non-trivial class, and hence that the class number of $\mathbb Q(\sqrt{82})$ is divisible by $2$.<|endoftext|>
-TITLE: Polynomials identity, factoring $x^{2^n}-1$
-QUESTION [7 upvotes]: There is an identity that I can't prove.
-Show that for any integer $k$, the following identity holds:
-$$(1+x)(1+x^2)(1+x^4)\cdots(1+x^{2^{k-1}})=1+x+x^2+x^3+\cdots+x^{2^k-1}$$
-Thanks for your help.
-
-REPLY [11 votes]: This equation is a fancy way of stating existence and uniqueness of binary representation.<|endoftext|>
-TITLE: What is an intuitive meaning of genus?
-QUESTION [26 upvotes]: I read from the Finnish version of the book "Fermat's last theorem, Unlocking the Secret of an Ancient Mathematical Problem", written by Amir D. Aczel, that genus describes how many handles there are on a given surface. But now I read Proposition 4.1 in Chapter 7.4.1 of Qing Liu's book "Algebraic Geometry and Arithmetic Curves". It assumes a geometrically integral projective curve $X$ over a field such that the arithmetic genus of $X$ is $p_a\leq 0$. So is my intuition that "genus is the number of handles" somehow wrong, as $p_a$ can be negative?
-
-REPLY [35 votes]: A compact Riemann surface $X$ is in particular a compact real orientable surface. These surfaces are classified by their genus.
-That genus is indeed the number of handles cited in popular literature; more technically it is
-$$g(X)=\frac {1}{2}\operatorname {rank} H_1(X,\mathbb Z) = \frac {1}{2}\operatorname {dim} _\mathbb C H^1_{DR}(X,\mathbb C) $$ in terms of singular homology or De Rham cohomology.
-Under the pressure of arithmetic, geometers have been spurred to consider the analogue of compact Riemann surfaces over fields $k$ different from $\mathbb C$: complete smooth algebraic curves.
-These have a genus that must be calculated without topology.
-The modern definition is (for algebraically closed fields) $$ g(X)=\operatorname {dim} _k H^1(X, \mathcal O_X)= \operatorname {dim} _kH^0(X, \Omega _X)$$
-in terms of the sheaf cohomology of the structural sheaf or of the sheaf of differential forms of the curve $X$.
-Of course this geometric genus is always $\geq 0$.
-There is a more general notion of genus applicable to higher dimensional and/or non-irreducible varieties over non algebraically closed fields: the arithmetic genus defined by $$p_a(X)=(-1)^{\dim X}(\chi(X,\mathcal O_X)-1)\quad {(ARITH)}$$ (where $\chi(X,\mathcal O_X)$ is the Euler-Poincaré characteristic of the structure sheaf).
-[Hirzebruch and Serre have, for very good reasons, advocated the modified definition $p'_a(X)=(-1)^{\dim X}\chi(X,\mathcal O_X)$, which Hirzebruch used in his ground-breaking book and Serre in his foundational FAC]
-For smooth projective curves over an algebraically closed field $g(X)=p_a(X)\geq 0$ : no problem.
-It is only in more general situations that the arithmetic genus $p_a(X)$ may indeed be $\lt 0$.
-Edit
-The simplest example of a reducible variety with negative arithmetic genus is the disjoint union $X=X_1\bigsqcup X_2$ of two copies $X_i$ of $\mathbb P^1$.
-The formula $(ARITH)$ displayed above yields: $p_a(X)=1-\chi(X,\mathcal O_X)=1-(\dim_\mathbb C H^0(X,\mathcal O_X)-\dim_\mathbb C H^1(X,\mathcal O_X))=1-(2-0)$
- so that $$p_a(X)=p_a(\mathbb P^1\bigsqcup \mathbb P^1)=-1\lt0$$<|endoftext|>
-TITLE: Approximate $\int_a^b \frac{1}{\sqrt{2 \pi \sigma^2}}e^{-(x-\mu)^2/2 \sigma^2}\log(1+e^{-x}) \ \ dx $
-QUESTION [7 upvotes]: I am trying to find an approximation to
-$$
-I = \int_a^b \frac{1}{\sqrt{2 \pi \sigma^2}}e^{-(x-\mu)^2/2 \sigma^2}\log(1+e^{-x}) \ \ dx.
-$$
-My attempt is as follows:
-$$
-\begin{align}
-I &= \int_a^b \frac{1}{\sqrt{2 \pi \sigma^2}}e^{-(x-\mu)^2/2 \sigma^2} \left( \sum_{i=1}^\infty \frac{e^{-ix}}{i} (-1)^{(i+1)} \right)\ dx\\
-&= \sum_{i=1}^\infty \frac{(-1)^{(i+1)}}{i}\int_a^b \frac{1}{\sqrt{2 \pi \sigma^2}}e^{-(x-\mu)^2/2 \sigma^2} e^{-ix} \ \ dx\\
-&= \sum_{i=1}^\infty \frac{(-1)^{(i+1)}}{i} k_i \int_a^b \frac{1}{\sqrt{2 \pi \sigma^2}}e^{-(x-(\mu-i \sigma^2))^2/2 \sigma^2} \ \ dx,\\
-\end{align}
-$$
-where
-$$
-k_i = e^{((\mu -i \sigma^2)^2-\mu^2)/2\sigma^2}.
-$$
-The $k_i$ increase rapidly with increasing $i$ and thus make the sum divergent. I don't understand why this is happening, although this sum should be finite (because I don't see any problem with the original integral).
-P.S. I used the Maclaurin series to expand the logarithm.
-
-REPLY [6 votes]: For the expansion to make sense we need $e^{-x}<1$, so let's assume $0<a$. For $a>0$ the sum $\sum_n (-1)^n e^{-na}/{n^2}$ is absolutely convergent.
-This sum is related to the dilogarithm.
-For $(a-\mu+\sigma^2)/\sigma \gg 1$, the integral is well approximated by
-$$\begin{equation*}
-I\approx \frac{1}{\sqrt{2\pi}\sigma}
-\left[
-\exp\left({-\frac{(b-\mu)^2}{2\sigma^2}}\right) \mathrm{Li}_2(-e^{-b})
-- \exp\left({-\frac{(a-\mu)^2}{2\sigma^2}}\right) \mathrm{Li}_2(-e^{-a})
-\right].\tag{3}
-\end{equation*}$$
-In general you can cut the sum in (1) off at some appropriate $n$ dependent on your choice of the various parameters.
-The higher order terms are exponentially suppressed, so this should work quite well for a good choice of $n$.
-
-Figure 1. Plot of $I(a)$ (solid) and the fit using $(3)$ (dashed) for $\mu=2$, $\sigma=4$, and $b=4$.<|endoftext|>
-TITLE: Graham's Number : Why so big?
-QUESTION [21 upvotes]: Can someone give me an idea of how R. Graham reached Graham's Number as an upper bound on the solution of the related problem? Thanks!
-
-REPLY [45 votes]: This post appears long and frightening, but I hope you will not be put off, because the topic is not actually hard to understand. It is long because I explained a lot of things from first principles. I did this because I thought the answer would be of interest to a general audience and because the branch of mathematics is not that well-known. So there is a lot of explanation, but not much is difficult.
-
-First, a disclaimer. Graham's Number is usually cited as the largest number ever to appear in a mathematical proof. There is no evidence for this, and in fact the claim is false on its face, because Graham's Number does not actually appear in the proof that it is claimed to appear in. (It can't be the largest number ever to appear in a mathematical proof if it doesn't actually appear in a mathematical proof.) According to these posts by John Baez [1] [2]:
-
-I asked Graham. And the answer was interesting. He said he made up Graham's number when talking to Martin Gardner! Why? Because it was simpler to explain than his actual upper bound - and bigger, so it's still an upper bound!
-
-Martin Gardner then wrote about the number that Graham described, which is not the number from the proof, and the rest is history.
-Now what is the number from the proof? Here there is some interesting mathematics.
-
-The question addressed by Graham's Number belongs to the branch of mathematics known as Ramsey theory, which is not at all hard to understand. It can roughly be described as the study of whether a sufficiently large structure, chopped into pieces, must still contain smaller structures. This is a rather vague explanation, so I will give two of the canonical examples.
-
-Ramsey's theorem. Let $n$ and $k$ be given. Then there exists a number $R(n;k)$ such that, if you take a complete graph of at least $R(n;k)$ vertices, and color its edges in $k$ different colors, then there must be a complete subgraph of $n$ vertices whose edges are all the same color.
-A frequently-cited special case of this theorem says that $R(3;2) = 6$: if you have a party with at least 6 guests, then there must be 3 guests who have all met one another before, or 3 people who have never met; it is impossible that every subset of three guests has both a pair of people who have met and a pair of people who have not. (With only five guests, this is possible.) Here the two “colors” are “have met before” and “have not met before”.
-
-Van der Waerden's theorem. Let $n$ and $k$ be given. Then there exists a number $W(n;k)$ such that, if you take an arithmetic progression of length $W(n;k)$, and color its elements in $k$ different colors, it must contain an arithmetic progression of $n$ elements that are all the same color.
-
-
-In both these examples you can see the general pattern: we have some large structure (a graph of $R(n;k)$ vertices in one case, an arithmetic progression of $W(n;k)$ elements in the other) and we divide the structure into $k$ parts and ask if one of the parts still contains a sub-structure of size $n$.
-The proofs of these theorems are constructive.
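-(The smallest of the numbers discussed next are within reach of brute force; here is a minimal Python sketch of mine verifying the value $W(3;2)=9$ that comes up just below.)
-    from itertools import product
-    def has_mono_3ap(coloring):
-        # coloring of {1,...,n}, given as a tuple of 0s and 1s
-        n = len(coloring)
-        return any(coloring[a] == coloring[a + d] == coloring[a + 2 * d]
-                   for d in range(1, n) for a in range(n - 2 * d))
-    # every 2-coloring of {1,...,9} contains a monochromatic 3-term AP,
-    assert all(has_mono_3ap(c) for c in product((0, 1), repeat=9))
-    # while some 2-coloring of {1,...,8} avoids one, so W(3;2) = 9
-    assert not all(has_mono_3ap(c) for c in product((0, 1), repeat=8))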
For example, the proof of van der Waerden's theorem allows one to calculate that $W(3;2) \le 325$: if you color the integers $\{1, \ldots, 325\}$ with $k=2$ colors, then there must be an $n=3$-term arithmetic progression whose elements are all one color, and the proof shows you how to take an arbitrary coloring of $\{1, \ldots, 325\}$ and explicitly find the $3$-term subprogression of all one color.
-But the $\{1, \ldots, 325\}$ is rather silly, because in fact the same is true of $\{1, \ldots, 9\}$, as is easily shown. So the proof gives an upper bound of $325$ when the correct answer is $9$. This is typical of theorems in Ramsey theory: the proof tells you that the number exists and is at most some large number, but then closer investigation reveals that it is really some considerably smaller number. The corresponding overestimate for $W(3;3)$ is that the proof tells you that $$W(3;3) \le 7\cdot\left(2\cdot3^7+1\right)\left(2\cdot3^{\left(2\cdot3^7+1\right)}+1\right),$$
-a number with $2095$ digits, but exhaustive computer search quickly reveals that actually $W(3;3)=27$.
-The reason for these rapidly-growing bounds is that typically the proofs proceed by induction, and one shows that if there are sufficiently many size-$n-1$ structures, then two of them must have their subcomponents colored exactly the same, and this allows one to find sub-parts of those size-$n-1$ structures that work together to form a size-$n$ structure of all one color. But a size-$n-1$ structure with $S(n-1)$ subcomponents will have something like $k^{S(n-1)}$ ways its components can be colored, so “sufficiently many” means something like $k^{S(n-1)}$, and the number required looks something like an exponential tower of $k$'s of height $n$, that is something like $\left.k^{k^{⋰^k}}\right\} \text{height $n$}$; you can see this happening in the $W(3;3)$ example above, where the third factor is an embellished version of $3^{3^3}$. When the structures one is forming are more complicated, then instead of needing only two size-$n-1$ structures colored the same, one might need an increasingly large number of such structures, and so perhaps you can imagine how the number required increases even faster than an exponential tower.
-(That was the crucial paragraph that really answers your question, so I apologize for being so vague; please let me know if you want me to elaborate or provide a specific example.)
-Enormous numbers are quite commonplace in Ramsey theory, and so Graham's Number might have some competition even in its own field.
-
-The particular problem discussed in
-the Graham's Number paper, “Ramsey's theorem for $n$-parameter sets” is rather general, but the enormous number (not the one described by Gardner) is an upper bound for a problem very similar to the ones I described above:
-
-We recall that by definition $N(1, 2, 2)$ is an integer such that
-if $n\ge N(1, 2, 2)$ and the $\binom{2^n}{2}$ straight line segments joining all possible pairs of vertices of a unit $n$-cube are arbitrarily 2-colored, then there always exists a set of four coplanar vertices which determines six line segments of the same color.
-
-That is, Graham and Rothschild are investigating a problem that, in this special case, involves taking a certain $n$-dimensional object, coloring its 1-dimensional subobjects with 2 colors, and looking for a single-colored 2-dimensional subobject; $N(1,2,2)$ is the smallest number of dimensions that such an object must have in order to guarantee a single-colored 2-dimensional subobject.
-
-Let $N^*$ denote the least possible value $N(1,2,2)$ can assume.
-We introduce a calibration function $F(m,n)$ with which we may compare our estimate of $N^*$. This is defined recursively as follows:
-$$\begin{align}
-F(1,n)=2^n \qquad F(m,2)=4 &\qquad m\ge 1, n\ge 2, \\
-F(m,n) = F(m-1, F(m,n-1)) & \qquad m\ge2, n\ge 3.
-\end{align} $$
-It is recommended that the reader calculate a few small values of $F$ to get a feeling for its rate of growth, e.g. $F(5,5)$ or $F(10,3)$.
-
-“A few small values” here is a joke. $F(3,3)$ is already $2^{16}=65536$. $F(3,4)$ is a tower $2^{2^{⋰^2}}$ of height $65536$. $F(3,5)$ is a similar tower of height $F(3,4)$.
-Finally, the bound:
-
-The best estimate we obtain this way is roughly
-$$N^* \le F(F(F(F(F(F(F(12,3),3),3),3),3),3),3).$$
-
-The authors continue with an understatement that I imagine made them chuckle:
-
-On the other hand, it is known only that $N^*\ge 6$. Clearly, there is some room for improvement here.
-
-A remark a little later says
-
-in fact, the exact bound is probably $<10$.
-
-but Sbiis Saibian's excellent discussion of this claims, unfortunately without citation:
-
-It was recently proved that the solution could not be smaller than $11$.
-
-I will be glad to elaborate on any part of this that is not clear.<|endoftext|>
-TITLE: Multiplicative property of the GCD
-QUESTION [7 upvotes]: I need to prove that
-$$(ah,bk)=(a,b)(h,k)\left( \frac{a}{(a,b)},\frac{k}{(h,k)}\right)\left( \frac{b}{(a,b)},\frac{h}{(h,k)}\right)$$
-I'm almost certain I need to use $$\left(\frac{a}{(a,b)},\frac{b}{(a,b)}\right)=1$$ and the fact that if $(a,b)=1$ and $a \mid bc$ then $a \mid c$, but I'm not sure how to build up the proof.
-
-REPLY [7 votes]: Cancel $\rm\:(a,b)(h,k),\:$ write $\rm\: A = \dfrac{a}{(a,b)},\ B = \dfrac{b}{(a,b)},\ H = \dfrac{h}{(h,k)},\ K = \dfrac{k}{(h,k)}\:$ it becomes
-$$\rm\ (AH,BK) = (A,K)(B,H)$$
-$$\rm\iff \left(\frac{A}{(A,K)}\frac{H}{(B,H)},\, \frac{B}{(B,H)}\frac{K}{(A,K)}\right) = 1\phantom{\iff}$$
-True since each term on the left is coprime to each term on the right, e.g. the first terms are coprime by $\rm\:(A,B) = (a/(a,b),\,b/(a,b)) = (a,b)/(a,b) = 1\:$ and the first and last terms are coprime by a similar argument (replace $\rm\:a,b\:$ by $\rm\:A,K).$<|endoftext|>
-TITLE: If the series $\sum_0^\infty a_n$ converges, then so does $\sum_1^\infty \frac{\sqrt{a_n}}{n} $
-QUESTION [16 upvotes]: Problem:
-
-Suppose that for every $n\in\mathbb{N}$, $a_n\in\mathbb{R}$ and $a_n\ge 0$. Given that
- $$\sum_0^\infty a_n$$
- converges, show that
- $$\sum_1^\infty \frac{\sqrt{a_n}}{n} $$
- converges.
-
-Source: Rudin, Principles of Mathematical Analysis, Chapter 3, Exercise 7.
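-(Not a proof, of course, but a quick numerical illustration, my own sketch, of the comparison bound used in the reply that follows, tried on one convergent series:)
-    import numpy as np
-    n = np.arange(1, 10**6)
-    a = 1.0 / n**1.1                            # a convergent positive series
-    lhs = np.sqrt(a) / n
-    print(lhs.sum())                            # the partial sums stabilize
-    print(bool(np.all(lhs <= (a + 1.0 / n**2) / 2)))  # the 2ab <= a^2+b^2 bound: True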
-
-REPLY [22 votes]: We have $2ab\leq a^2+b^2$ for all real numbers $a,b$; taking $a=\sqrt{|a_n|}$ and $b=1/n$ gives
-$$0\leq \frac{\sqrt{|a_n|}}n\leq \frac{|a_n|+\frac 1{n^2}}2.$$
-Since the series $\sum_n|a_n|$ and $\sum_n\frac 1{n^2}$ are convergent, we get the convergence of $\sum_n\frac{\sqrt{|a_n|}}n$.<|endoftext|>
-TITLE: Visualizing a Deformation Retraction
-QUESTION [5 upvotes]: It's known that there exists a deformation retraction from the space $\mathbb{R}^3\setminus S^1$ to $S^2 \wedge S^1$, and I thought I had a visualization for it, but now it seems discontinuous. Can anyone help out with describing (or even better constructing explicitly) this map?
-Thanks!
-
-REPLY [11 votes]: The space $\mathbb R^3\setminus S^1$ is obtained by rotating a closed half-plane with a puncture in it. The latter space can be retracted onto a semi-circle plus a diameter. The rotation of the semi-circle plus a diameter creates $S^2$ plus a diameter. Now it is time to lose the rotational symmetry: we contract the Eastern hemisphere into a point (sorry, guys). The result is $S^2\wedge S^1$.<|endoftext|>
-TITLE: How to show $\mathcal{L}(\mathbb{R}) \otimes \mathcal{L}(\mathbb{R}) \subset \mathcal{L}(\mathbb{R^2})$?
-QUESTION [6 upvotes]: Let $\mathcal{L}(\mathbb{R})$ be the collection of Lebesgue-measurable subsets of $\mathbb{R}$, and $\mathcal{L}(\mathbb{R^2})$ the collection of Lebesgue-measurable subsets of $\mathbb{R^2}$.
-First I shall show that $\mathcal{L}(\mathbb{R}) \otimes \mathcal{L}(\mathbb{R})$ is a subset of $\mathcal{L}(\mathbb{R^2})$.
-In a second part I shall show that in fact they are not equal.
-There is a hint that the Lebesgue-measurable sets are the completion of the Borel sets.
-So $\mathcal{L}(\mathbb{R}) \otimes \mathcal{L}(\mathbb{R})$ is the smallest $\sigma$-algebra containing the set $\left\{ A \times B: A \in \mathcal{L}(\mathbb{R}), B \in \mathcal{L}(\mathbb{R}) \right\}$.
-But how to show all sets in this $\sigma$-algebra are in fact Lebesgue measurable in $\mathbb{R}^2$? I have no idea where to start. Any hint would be welcome.
-
-REPLY [6 votes]: One characterization of $\mathcal{L}(\mathbb{R}^k)$ is the following:
-$$
-\mathcal{L}(\mathbb{R}^k)=\{C\cup N\mid C \in \mathcal{B}(\mathbb{R}^k),\; N\in\mathcal{N}^k\},
-$$
-where $\mathcal{B}(\mathbb{R}^k)$ is the Borel-sets of $\mathbb{R}^k$ and $\mathcal{N}^k$ are the Lebesgue-nullsets in $\mathbb{R}^k$. As you suggest, it is enough to show that $A\times B\in \mathcal{L}(\mathbb{R}^2)$ for all $A,B\in \mathcal{L}(\mathbb{R})$. So let such $A$ and $B$ be given. Then they are of the form
-$$
-A=C_1\cup N_1,\quad B=C_2\cup N_2,
-$$
-where $C_1,C_2\in\mathcal{B}(\mathbb{R})$ and $N_1,N_2\in\mathcal{N}$. Now
-$$
-A\times B=(C_1\cup N_1)\times (C_2\cup N_2)=(C_1\times C_2)\cup (C_1\times N_2) \cup (N_1\times C_2)\cup(N_1\times N_2),
-$$
-where $C_1\times C_2$ is a Borel set in $\mathbb{R}^2$ and the last three sets are Lebesgue-nullsets in $\mathbb{R}^2$. Use the characterization above to conclude that $A\times B\in\mathcal{L}(\mathbb{R}^2)$.<|endoftext|>
-TITLE: Find the number of rational roots of $f(x)$
-QUESTION [5 upvotes]: There's a polynomial $f(x)$ of degree 3.
-All the coefficients of $f(x)$ are rational. If the graph of $f(x)$ is tangent to the $x$-axis, what can the possible number of rational roots of $f(x) = 0$ be?
-The options are: 0, 1, 2, 3, none.
-My approach:
-Since $y = 0$ is a tangent to $f(x)$, two roots coincide and hence are real, and since imaginary roots occur in pairs, all 3 must be rational.
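-(A small sympy experiment of mine, illustrating on a sample tangent cubic the division argument given in the reply that follows:)
-    from sympy import symbols, div, expand, solve, Rational
-    x = symbols('x')
-    P = expand((x - Rational(3, 2))**2 * (x + 5))  # tangent to the x-axis at 3/2
-    Q, R = div(P, P.diff(x), x)     # divide the cubic by its derivative
-    print(R)                        # remainder of degree <= 1, rational coefficients
-    print(solve(R, x))              # [3/2]: the point of tangency is a rational root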
What I would do is to note that if our polynomial $P(x)$ is tangent to the $x$-axis at $x=a$, then $P(a)=P'(a)=0$. -Now divide the cubic $P(x)$ by $P'(x)$. The remainder is a polynomial of degree at most $1$, with rational coefficients. And $a$ is a root of this remainder. If the remainder is non-zero, that shows $a$ is rational, and therefore so is the remaining root. -If the remainder is $0$, then the cubic has shape $(x-a)^3$, and therefore $a$ is rational. -Remark: For any polynomial $P(x)$, the number $a$ is a multiple root of $P(x)$ if and only if $a$ is a common root of $P(x)$ and $P'(x)$. This is simple to prove, but widely useful. - -REPLY [2 votes]: Sturm's Theorem can determine how many real roots a polynomial has in any interval (or on the real line). For rational roots, you can try the finite number of possibilities given by Rational Root Theorem. -If you are looking for distinct roots, divide the polynomial by the GCD of the polynomial and its derivative (gotten using the Euclidean Algorithm). That will remove any repeated roots. This is done before applying Sturm's Theorem, anyway.<|endoftext|> -TITLE: Breaking symmetries -QUESTION [9 upvotes]: Back when I was studying electromagnetism and Maxwell's equations, our teacher told us a quote. I can't recall it exactly, but the meaning was roughly the following: - -Symmetry in a problem is useless if you don't have the means to exploit it. - -(by the way, I would be delighted if a nice soul could provide the source for it.) -It makes sense in the context of electromagnetism: the effect of symmetries in the initial condition is not as simple as one might naively think. For instance, a planar symmetry for the charges yields a planar symmetry for the electric field, while a planar symmetry for the current yields an antisymmetry for the magnetic field. Hence, the effect of a given symmetry in the initial conditions depends on the properties of the equations. -I was later quite surprised when I learned of some much more striking examples. The first one which comes to mind is the following: - -What is the shortest graph which connects the vertices of a square? - -The first reflex of most people would be to look at graphs which have the same symmetry as the square ($D_4$). That's an error. The solutions exhibit some degree of symmetry ($D_2$), but less than the square! -However, I don't know any other nice examples for which the solution is less symmetric than the problem (except perhaps sphere packings, but that's less surprising). I think it would be nice to have a list as diverse as possible, both to hone my intuition and to provide counter-examples to my students. And, frankly, because this kind of phenomenon is quite fun. - -REPLY [3 votes]: I think there are two very distinct effects at play here: whether the rules of the problem are invariant under symmetry, and whether the solutions of the problem are invariant under symmetry. -The first case is embodied by the electromagnetism example. When this happens you really can't say anything, and the apparent symmetry really isn't one. Nothing to see here. -The second case is the "shortest road" problem. In the same spirit but simpler, you can look at -$$P(x)=x^2+1$$ -which is a real polynomial, in other words symmetric with respect to the $x$ axis symmetry (conjugation): $P(\bar x)=0$ iff $P(x)=0$. -Yet no solution is symmetric with respect to conjugation: the roots are $i$ and $-i$, which are both away from the $x$ axis. The symmetry was broken. 
-However, taken as a set, the set of roots is $\{i,\,-i\}$, which is globally invariant under conjugation. This is a general feature of symmetric problems. In fact the set of symmetries of a problem can be defined as the set of $\sigma$ from a given family that leave the set of solutions invariant. (Of course, you may object that under this definition a problem with no solutions is maximally symmetric, even though these hidden symmetries do not appear in the structure of the problem and are of no use to actually solving the problem.)<|endoftext|> -TITLE: $M/\Gamma$ is orientable iff the elements of $\Gamma$ are orientation-preserving -QUESTION [9 upvotes]: The probem is: - -Suppose $M$ is connected, orientable, smooth manifold and $\Gamma$ is - a discrete group acting smoothly, freely, and properly on $M$. We say - that the action is orientable-preserving if for each $\gamma \in \Gamma$, the diffeomorphism $x \rightarrow \gamma \cdot x$ is - orientation-preserving. Show that $M/ \Gamma $ is orientable if only - if the action of $\Gamma$ is orientable-preserving. - -Notes: I tried using that $\pi\colon M\to M/\Gamma$ is a covering map. I also tried to use that for any $\gamma$ we have two disjoint open sets $U,V\subset M$ such that $\pi|_{U}$ and $\pi|_{V}$ are diffeomorphisms and $(\pi|^{-1}_{U} \circ \pi|_V)(x)=\gamma.x$ but I didn't prove this result too. - -REPLY [9 votes]: If the action is orientation-preserving, construct explicitly an orientation on the quotient. Fix an orientation on $M$. If $\bar x\in M/\Gamma$ is a point and $x\in M$ is such that $\pi(x)=\bar x$, the differential $d\pi_x:T_xM\to T_{\bar x}(M/\Gamma)$ is an isomorphism, so you can push the orientation of $T_xM$ given by the orientation of $M$ to one on $T_{\bar x}(M/\Gamma)$. Check that this depends only on $\bar x$ and not on the preimage $x$ chosen: this is where the hypothesis comes in. Finally, check that this way of orienting the tangent spaces to $M/\Gamma$ is in fact an orientation of $M/\Gamma$. -Can you do the converse?<|endoftext|> -TITLE: $p=4n+3$ never has a Decomposition into $2$ Squares, right? -QUESTION [5 upvotes]: Primes of the form $p=4k+1\;$ have a unique decomposition as sum of squares $p=a^2+b^2$ with $0 -TITLE: Prove that Pascals triangle contains only natural numbers, using induction. -QUESTION [5 upvotes]: I'm currently working my way through Spivak, and I'm stuck on the following. -Prove that Pascals triangle only contains natural numbers using induction and the following relation: $\left( {\begin{array}{*{20}c} n+1 \\ k \\ \end{array}} \right)=\left( {\begin{array}{*{20}c} n \\ k-1 \\ \end{array}} \right)+\left( {\begin{array}{*{20}c} n \\ k \\ \end{array}} \right)$ -So far, the basic thrust of my proof is that if each term on the right is natural, then the term on the left must be natural, which should conclude the proof. After showing that $\left( {\begin{array}{*{20}c} 1 \\ 1 \\ \end{array}} \right)$ is in fact natural, can I just assume that the other term on the right is natural since it's clearly 1? I'm getting confused since I've only done basic induction proofs, and this has more than one term. When I look at a picture of Pascal's triangle, this approach seems to make sense, but I feel a little lost. Can someone set me straight? - -REPLY [5 votes]: Show that if every term in the $n$th row is a natural number then so is every term in the $(n+1)$th row. -That every term in the $n$th row is a natural number is a stronger statement than that $\dbinom n k$ is a natural number. 
That would appear if you wanted to work term-by-term instead of row-by-row. It often happens with mathematical induction that it's easier to prove a stronger statement than a weaker one, because one has a stronger induction hypothesis to use.<|endoftext|>
-TITLE: Recursively enumerable languages are closed under the min(L) operation?
-QUESTION [5 upvotes]: Define $\min(L)$, an operation over a language, as follows:
-$$ \min(L) = \{ w \in L \mid \nexists x \in L, y \in \Sigma^+ ,\ w=xy \} $$
-In words: all strings in language L that don't have a proper prefix in $L$.
-Question: Are recursively enumerable languages (RE) closed under $\min$?
-That is, if $L$ is RE, is $\min(L)$ also RE?
-I think the answer is NO, because in order to accept a string $w$, the Turing machine for $\min(L)$ would have to test all prefixes of $w$, ensuring that none belongs to $L$. However, since $L$ is RE, its Turing machine is not guaranteed to halt on all inputs.
-Even if my explanation makes sense (does it?), it will not be accepted as a proof in my final exam. I need to show a reduction from a known non-RE language to $\min(L)$. But I don't know how :(
-Can anyone help?
-Thanks!
-
-REPLY [5 votes]: Let $K$ be any non-recursive r.e. set, and define
-$$ L = \{1^x0 : x \in K\} \cup \{1^x00 : x \in \mathbb{N}\}. $$
-Clearly $L$ is r.e., and the language $\min(L)$ is
-$$ \min(L) = \{1^x0 : x \in K\} \cup \{1^x00 : x \notin K\}. $$
-If $\min(L)$ were r.e., then $K$ would be recursive, since membership of $x$ in $K$ and in its complement could then both be semi-decided by enumerating $\min(L)$.<|endoftext|>
-TITLE: Integral of determinant
-QUESTION [15 upvotes]: Good evening. I need help with this task
-$$
-\int\limits_{-\pi}^\pi\int\limits_{-\pi}^\pi\int\limits_{-\pi}^\pi{\det}^2\begin{Vmatrix}\sin \alpha x&\sin \alpha y&\sin \alpha z\\\sin \beta x&\sin \beta y&\sin \beta z\\\sin \gamma x&\sin \gamma y&\sin \gamma z\end{Vmatrix} \text{d}x\,\text{d}y\,\text{d}z
-$$
-where $\alpha,\beta,\gamma$ are integers.
-Computations are horrible, I gave up.
-
-REPLY [3 votes]: Consider the vector $x = [x_1,...,x_N].$
-Let $A(x)$ and $B(x)$ be two $N\times N$ matrices such that $(A)_{ij} = f_i(x_j)$
-and $(B)_{ij} = g_i(x_j)$, where $f_i(z)$ and $g_i(z)$ are any functions, $i=1,\cdots,N$.
-Then
-$\int_{[a,b]^N} \det(A)\det(B)\, dx = N!\,\det(C)$
-where $(C)_{ij} = \int_{[a,b]} f_i(z)g_j(z)\, dz$.
-In your case $A=B$, $a=-\pi$, $b=\pi$, $f_i(z) = g_i(z) = \sin(\alpha_i z)$.
-Since the $\alpha_i$ are integers, the matrix $C$ is diagonal with diagonal elements
-equal to
-$\int_{-\pi}^{\pi} \sin^2(\alpha_i z)\, dz = \pi$.
-Therefore the integral you are looking for equals $N!\,\pi^N$.
-For $N=3$ you have $6\pi^3$.
-If for some $i$ and $j$, $|\alpha_i|=|\alpha_j|$, the matrices $A$ and $B$ are rank deficient and therefore their determinant is $0$. It follows that the integral is $0$.
-This can be generalized for any $N$ and for real coefficients $\alpha_i$.<|endoftext|>
-TITLE: If a principal divisor is defined over K, then is the function?
-QUESTION [5 upvotes]: Let $X$ be an algebraic variety, $D$ a principal divisor of $X$ defined over $K$, i.e. the points of $D$ are in $X(K)$ and there is a function in $\overline{K}(X)$ whose divisor is $D$. Is $D$ necessarily the divisor of a function on $X$ defined over $K$, i.e. an element of $K(X)$?
-I believe that I can answer this affirmatively using Galois cohomology: any two functions with the same divisor differ by an element of $\overline{K}^*$, so we can define a cocycle of $H^1(G_K, \overline{K}^*)$ by sending an element of the Galois group to the corresponding multiplicative factor on $f$.
By Hilbert's Theorem 90, this is of the form $\sigma \mapsto \frac{\sigma(\lambda)}{\lambda}$ for some $\lambda$, so then $\lambda^{-1} f$ fits the bill.
-Is this right / is there a more elementary way of seeing this?
-
-REPLY [3 votes]: This argument is correct (with $K^{sep}$ in place of $\overline{K}$ if $K$ is not perfect), and the argument with Hilbert's Thm. 90 is standard. If there's a more elementary argument (not that HT90 is all that sophisticated) I don't know it. (There are lots of similar contexts involving "descent of the ground field" where HT90 is used in the same way; once you've seen it used a few times like this, it starts to seem less out of place.)<|endoftext|>
-TITLE: Number of specific partitions of a given set
-QUESTION [5 upvotes]: Let $U$ be the set $U=\{1,2,3,\ldots,2^m\}$. Let $A$ and $B$ form a partition of $U$, so that $A \cup B$ is the set $U$ and their intersection is empty, and such that the sum of the elements of $A$ equals the sum of the elements of $B$.
-How many such pairs $A$ and $B$ are there?
-I mean, if $U=\{1,2,3,4\}$, there exists just a single pair of sets $A$ and $B$:
-$A=\{3,2\}$ and $B=\{4,1\}$, because $3+2=4+1$. I want to know how many such pairs exist for bigger sets $U$.
-
-REPLY [3 votes]: If instead we take $U=\{1,2,3,\dots,m\}$, then the answer is easily seen to be nonzero only if $m$, reduced modulo 4, is 0 or 3. The sequence of values is tabulated at the Online Encyclopedia of Integer Sequences; it starts $$\eqalign{&0, 0, 1, 1, 0, 0, 4, 7, 0, 0, 35, 62, 0, 0, 361, 657, 0, 0, 4110, 7636, 0, 0, 49910, 93846, 0, 0,\cr &632602, 1199892, 0, 0, 8273610, 15796439, 0, 0, 110826888, 212681976, 0, 0,\cr &1512776590, 2915017360, 0, 0, 20965992017, 40536016030, 0, 0, 294245741167,\cr &570497115729\cr}$$ No formula is given.
-If we restrict to powers of two, the sequence starts $$0,0,1,7,657=9\times73,15796439=41\times385279,24435006625667338=(2)(727)(182764397)(91951)$$ This sequence is not tabulated. I am not optimistic about finding any useful formula for these numbers.<|endoftext|>
-TITLE: Where is the most appropriate place to ask about the contribution of published papers?
-QUESTION [5 upvotes]: I am trying to come to terms with a variety of new fields as I start doing mathematics research. Often I come across a paper and am lost as to what its contribution (or perhaps significance) is. I do not mean that I think it is unimportant, but rather I don't understand the 'lay of the land' in that field.
-What I would like to know is where would be the most appropriate place to get some feedback on particular papers? The department I am in is small and I don't have a lot of direct contact with other Mathematicians, so online somewhere seems like the best bet.
-
-REPLY [5 votes]: Ideally, the reviews in Zentralblatt and MathReviews should provide precisely this information. In practice, many of the reviews are not exactly illuminating. The most important papers, however, often end up getting a "Featured Review" on MathSciNet, and those are usually very clear and well written.
-Now, one other way of learning about the significance of papers (if either the paper itself does not provide sufficient introduction so Gerry's advice is hard to follow, or if for some reason you prefer to seek out third-party evaluation of the paper) is to follow the citation trail. Go to the MathSciNet entry for the corresponding paper.
On the top right there is a box listing citations to that paper (these are reasonably complete for papers published within the last 40 years). If there are Review articles citing that paper, go and read them. You will likely be enlightened, since the articles will probably also give a good overview of the "lay of the land" so to speak. Citations from references are a bit more hit and miss: sometimes the paper is only cited for a minor technical fact, sometimes the paper is only cited to be polite, but sometimes you will find another paper in the field with a good expository account of why precisely the methods and results of the paper you are interested in are useful.<|endoftext|>
-TITLE: A proof that Skorohod metric is a metric
-QUESTION [5 upvotes]: I'm reading Billingsley's "Convergence of probability measures" (1968), p. 111. The definitions are: $D$ - the space of $\textit{cadlag}$ functions on [0,1], $\Lambda$ - the class of strictly increasing, continuous mappings of [0,1] onto itself. For $x,y\in D$ define
-$$d(x,y):=\inf\{\varepsilon>0:\ \exists\lambda\in\Lambda:\ \sup_t|\lambda(t)-t|<\varepsilon \text{ and } \\ \sup_t|x(t)-y(\lambda(t))|<\varepsilon\}.\tag{1}$$
-I'm stuck with the proof that $d(x,y)=0$ implies $x=y$. The author claims: "$d(x,y)=0$ implies that for each $t$ either $x(t)=y(t)$ or $x(t)=y(t-)$, which in turn implies $x=y$." -This is what I tried: let $x,y\in D$, $d(x,y)=0$ and $t\in[0,1]$. Now from (1) it follows that there exists a sequence $(z_n)\subset[0,1]$ such that $z_n\to t$ and $y(z_n)\to x(t)$ as $n\to\infty$. If $t$ is a continuity point of $y$, then $y(z_n)\to y(t)$, thus $x(t)=y(t)$. If $y$ has a jump at $t$ and $(z_n)$ has a subsequence $(z_{n_k})$ such that $z_{n_k}\geq t, \forall k$, then $y(z_{n_k})\to x(t)$ and $y(z_{n_k})\to y(t)$, as $k\to\infty$, thus $y(t)=x(t)$. Otherwise, $z_n<t$ for all but finitely many $n$, so $y(z_n)\to y(t-)$ and thus $x(t)=y(t-)$.<|endoftext|>
-TITLE: Regular local ring and a prime ideal generated by a regular sequence up to radical
-QUESTION [14 upvotes]: Let $R$ be a regular local ring of dimension $n$ and let $P$ be a height $i$ prime ideal of $R$, where $1< i\leq n-1$. Can we find elements $x_1,\dots,x_i$ such that $P$ is the only minimal prime containing $x_1,\dots,x_i$?
-
-REPLY [9 votes]: Since $P$ has height $i$, the elements $x_1,...,x_i$ must be a regular sequence. Thus what you are asking is whether $V(P)$ is a set-theoretic complete intersection.
-This is a notoriously difficult question in general. For example, it is not known for curves in $\mathbb{A}_\mathbb{C}^3$.
-In general, the answer is NO. The simplest example is perhaps $R=\mathbb C[x_{ij}]_{{1\leq i\leq 3},{1\leq j\leq 2}}$ and $P$ is generated by the $2$ by $2$ minors. To prove that $P$ is not generated up to radical by $2$ elements one has to show that the local cohomology module $H_P^3(R)$ is nonzero (basic properties of local cohomology dictate that $H_I^n(R)=0$ if $n$ is bigger than the number of elements that generate $I$ up to radical. That is because local cohomology can be computed with the Čech complex on these generators).
-Even then, the cleanest way to show $H_P^3(R)\neq 0$ involves a topological argument (the non-vanishing is not true in characteristic $p>0$ by the way).
-If you want to know more, the key words are: set-theoretic complete intersection, analytic spread, local cohomology.<|endoftext|>
-TITLE: Solve for $x$: $2^x = x^3$
-QUESTION [9 upvotes]: What category of equation is this?
-What methods are available to solve it?
-$2^x -x^3 = 0$ where $x\in\Bbb R$
-
-REPLY [12 votes]: You can find a solution in terms of the Lambert W Function. Rewrite as:
-$$
-1 = \frac{x^3}{2^x} = x^3 \exp(-x\log 2)
-$$
-and take the real cube root:
-$$
-1 = x \exp \left(-\frac{x\log 2}{3}\right)
-$$
-Now multiply by $-\log 2/3$:
-$$
--\frac{\log 2}{3} = -\frac{x\log 2}{3} \exp\left(-\frac{x\log 2}{3}\right)
-$$
-Hence:
-$$
-x = -\frac{3W_0\left( -\frac{\log 2}{3}\right)}{\log 2}
-$$
-where $W_0$ is the principal branch of Lambert's W. The value is about 1.37.
-There is another real root between 9 and 10, which is found on the second branch of the Lambert W function:
-$$x = -\frac{3W_{-1}\left(-\frac{\log 2}{3}\right)}{\log 2}$$
-and whose value is about 9.94.
-
-REPLY [10 votes]: Consider $f(n)=2^n-n^3$. $f(9)=-217$ and $f(10)=24$, therefore there exists a root between $9$ and $10.$ You can solve $f(n)=0$ numerically using the Newton-Raphson method, taking $x_0=9$.
-Also $f(1)=1$ and $f(2)=-6$, therefore there will be a root between 1 and 2.
-For $n<1$, $f(n)$ is always positive, and for $n>10$, $f(n)$ is always positive, so there are no more roots other than those between $9$ and $10$, and $1$ and $2.$<|endoftext|>
-TITLE: Any manifold admits a morse function with one minimum and one maximum
-QUESTION [11 upvotes]: I have heard the claim: "Any closed manifold admits a Morse function which has one local minimum and one local maximum" often used in talks without a reference.
-This does not seem to be very easy to prove "hands on". Trying to perturb the minima/maxima away locally will create new local minima/maxima, so I don't believe this will work.
-One idea I had is to embed the manifold in $\mathbb{R}^n$, using Whitney, taking the height function, and turning on some flow in one fixed direction (I see this as stretching a balloon). I could not get the details right at all though...
-Thus my question is: Is there an elementary proof of this fact? What are the standard references?
-
-REPLY [11 votes]: Here's a direct proof. Let $M$ be a smooth $n$-manifold and $f$ a Morse function on $M$. Let $p_i$ be the local minima, $q_j$ be the index-$1$ critical points, with $f(p_i)<f(q_j)$ for all $i,j$.<|endoftext|>
-TITLE: References on similarity orbits of operators
-QUESTION [6 upvotes]: Given an operator $T\in\mathcal{L}(\mathcal{H})$, where $\mathcal{H}$ is a separable Hilbert space, the similarity orbit of $T$ is defined by \begin{equation}
-SO(T)=\{STS^{-1}:S\in\mathcal{L}(\mathcal{H})\ \text{invertible}\}.
-\end{equation}
-I read about this theory in some papers but I wonder whether there are some good books discussing this issue systematically. I am particularly interested in properties like what is the infimum of norm of operators in $SO(T)$ and how far is the orbit from diagonal operators? compact operators? finite rank operators?
-Thanks!
-
-REPLY [3 votes]: I really want to close this problem. As mentioned by user6299 in his/her comment, Herrero has done a lot of work on problems related to orbits of operators in Hilbert spaces. Also a good reference (but quite hard) is his book Approximation of Hilbert Space Operators.<|endoftext|>
-TITLE: Prove that every number ending in a $3$ has a multiple which consists only of ones.
-QUESTION [19 upvotes]: Prove that every number ending in a $3$ has a multiple which consists only of ones.
-E.g. $3$ has $111$, $13$ has $111111$.
-
-Also, is there any direct way (without repetitive multiplication and checking) of obtaining such a multiple of any given number (ending with $3$)?
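-(A direct search is easy to script; here is a minimal Python sketch of mine that returns how many ones are needed. It works for any $n$ coprime to $10$, which covers every number ending in $3$:)
-    def repunit_length(n):
-        # smallest k such that the repunit 11...1 (k ones) is divisible by n
-        r, k = 1 % n, 1
-        while r != 0:
-            r, k = (10 * r + 1) % n, k + 1
-        return k
-    print(repunit_length(3), repunit_length(13))  # 3 6
-    print(repunit_length(23))                     # 22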
-
-REPLY [2 votes]: Find a more general solution, for any natural number in any base, here:
-All odd primes except $5$ divide a number made up of all $1$s<|endoftext|>
-TITLE: Summation Identity for Stirling Numbers of the First Kind
-QUESTION [6 upvotes]: For the Stirling numbers of the second kind, the following identity is well-known:
-\begin{align}
-S(n,l) = \sum_{k_1 + \cdots + k_l = n-l} 1^{k_1} 2^{k_2} \cdots l^{k_{l}},
-\end{align}
-where the sum is taken over all non-negative integers $k_1,\dots,k_l$ with $k_1+\cdots+k_l=n-l$. Is an analogous identity known for $s(n,l)$, the Stirling numbers of the first kind?
-Edit: Thanks to Raymond, the correct formula is
-\begin{align}
-s(n,l) = (-1)^{n-l} \mathop{\sum_{k_1 + \cdots +k_{n-1} = n-l}}_{0 \leqslant k_i \leqslant 1} 1^{k_1} 2^{k_2} \cdots (n-1)^{k_{n-1}}.
-\end{align}
-
-REPLY [5 votes]: Yes (26.8.3 of DLMF):
-$$(-1)^{n-k}s(n,k)=\sum_{1\leq b_1<\cdots<b_{n-k}\leq n-1} b_1 b_2 \cdots b_{n-k}.$$<|endoftext|>
-TITLE: A problem in additive number theory.
-QUESTION [21 upvotes]: Original Problem: Counterexample given below by user francis-jamet.
-Let $A\subset \mathbb Z_n$ for some $n\in \mathbb{N}$.
-If $A-A=\mathbb Z_n$, then $0\in A+A+A$.
-
-New Problem:
-Is the following statement true? If not, please give a counterexample.
-If $A-A=\mathbb Z_n$ and $0\not\in A+A$, then $0\in A+A+A$.
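-(The counterexamples given in the reply below are easy to verify mechanically; a minimal Python sketch of mine for the first one:)
-    n, A = 24, {3, 9, 11, 15, 20, 21, 23}
-    diffs = {(a - b) % n for a in A for b in A}
-    sums3 = {(a + b + c) % n for a in A for b in A for c in A}
-    print(diffs == set(range(n)))  # expect True: A - A is all of Z_24
-    print(0 in sums3)              # expect False: 0 is not in A + A + A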
- -REPLY [5 votes]: Observe that every outerplanar graph can be made into a maximal outerplanar graph of the same order. The regions in the interior of a maximal outer planar graph form a tree since if there was a cycle, that would surround a vertex, contradicting outerplanarity. Trees have at least two leaves. Any region corresponding to a leaf will have a vertex of degree 2. This vertex must have had degree less than or equal to 2 in the original graph.<|endoftext|> -TITLE: Is it true that the Laplace-Beltrami operator on the sphere has compact resolvents? -QUESTION [9 upvotes]: We consider the Riemannian structure on the sphere $\mathbb{S}^n$ seen as a submanifold of $\mathbb{R}^{n+1}$ and the Laplace-Beltrami operator defined on $C^\infty(\mathbb{S}^n)$ by the equation -$$\Delta f= -\operatorname{div}\operatorname{grad} f = -\frac{1}{\sqrt{g}}\frac{\partial}{\partial u^i}\left(\sqrt{g}g^{ij}\frac{\partial f}{\partial u^j}\right).$$ -We regard $C^{\infty}(\mathbb{S}^n)$ as a dense subspace of the Hilbert space $L^2(\mathbb{S}^n)$. - -Question Is it true that $\Delta$ has compact resolvents, meaning that there exists $\lambda \in \mathbb{R}$ such that the closure of $\Delta-\lambda$ is invertible and its inverse operator is compact? - -I think that we can easily work out the special case $n=1$: in this case the equation $\Delta u-\lambda u = v$ reduces to the standard Sturm-Liouville problem -$$\begin{cases} -\frac{d^2}{dt^2}u-\lambda u = v & t\in (-\pi, \pi) \\ {}\\ u(-\pi)=u(\pi) \\ u'(-\pi)=u'(\pi)\end{cases}$$ -which admits Green's function for, say, $\lambda=-1$ (actually any $\lambda \notin \{0, 1, 4, 9 \ldots\}$ will do). -So the inverse of $-d^2/dt^2+1$ is an integral operator and in particular it is compact. I suspect that, similarly, the operator $\Delta_{\mathbb{S}^n}+1$ admits Green's function in any dimension $n$, but I am unable to prove (or disprove) this. -Thank you for reading. - -REPLY [7 votes]: This might be a stupid way to argue, but anyway. - -The solution of $-\mathrm{div}\ \mathrm{grad}\ f+f=v$ is the unique minimizer of the energy $E_v(f)=\int_{S^n}|\mathrm{grad}\ f|^2+|f-v|^2$. -Since $0$ is admissible in minimization, the minimizer satisfies $E_v(f)\le E_v(0)=\int_{S^n}|v|^2$. -Hence, every function $f\in L^2$ such that $-\mathrm{div}\ \mathrm{grad}\ f+f=v$ with $\|v\|_{L^2}\le 1$ satisfies $\int_{S^n}|\mathrm{grad}\ f|^2\le 1$. -By the Rellich-Kondrachov theorem, the set of such $f$ is precompact in $L^2$.<|endoftext|> -TITLE: Explanation of Proof of Zorn's lemma in Halmos's Book -QUESTION [8 upvotes]: I have been reading Halmos's book on naive set theory on my own and have got stuck in the Chapter on Zorn's lemma. The 2nd and 3rd paragraphs are not very clear to me. Here is the text: - -Zorn's lemma. If $X$ is a partially ordered set such that every chain in $X$ has an upper bound, then $X$ contains a maximal element. -Proof. The first step is to replace the abstract partial ordering in $X$ by the - inclusion order in a suitable collection of sets. More precisely, we consider, - for each element $x \in X$, the weak initial segment $\bar{s}(x)$ consisting of $x$ and - all its predecessors. The range $\mathscr{S}$ of the function $\bar{s}$ (from $X$ to$\wp(X)$) is a certain collection of subsets of $X$, which we may, of course, regard as (partially) ordered by inclusion. The function $\bar{s}$ is one-to-one, and a necessary - and sufficient condition that $\bar{s}(x)\subseteq \bar{s}(y)$ is that $x\leq y$. 
In view of this, the task of finding a maximal element in $X$ is the same as the task of finding a maximal set in $\mathscr{S}$. The hypothesis about chains in $X$ implies (and is, - in fact, equivalent to) the corresponding statement about chains in $\mathscr{S}$. -Let $\mathscr{X}$ be the set of all chains in $X$; every member of $\mathscr{X}$ is included in $\bar{s}(x)$ for some $x \in X$. The collection $\mathscr{X}$ is a non-empty collection of sets, - partially ordered by inclusion, and such that if $\mathscr{C}$ is a chain in $\mathscr{X}$, then the - union of the sets in $\mathscr{C}$ (i.e., $\bigcup_{A \in \mathscr{C}}A$) belongs to $\mathscr{X}$. Since each set in $\mathscr{X}$ is - dominated by some set in $\mathscr{S}$, the passage from $\mathscr{S}$ to $\mathscr{X}$ cannot introduce any - new maximal elements. One advantage of the collection $\mathscr{X}$ is the slightly - more specific form that the chain hypothesis assumes; instead of saying - that each chain $\mathscr{C}$ has some upper bound in $\mathscr{S}$, we can say explicitly that the union of the sets of $\mathscr{C}$, which is clearly an upper bound of $\mathscr{C}$, is an element of the collection $\mathscr{X}$. Another technical advantage of $\mathscr{X}$ is that it contains all the subsets of each of its sets; this makes it possible to enlarge - non-maximal sets in $\mathscr{X}$ slowly, one element at a time. -Now we can forget about the given partial order in $X$. In what follows - we consider a non-empty collection $\mathscr{X}$ of subsets of a non-empty set $X$, subject to two conditions: every subset of each set in $\mathscr{X}$ is in $\mathscr{X}$, and the - union of each chain of sets in $\mathscr{X}$ is in $\mathscr{X}$. Note that the first condition implies that $\varnothing\in\mathscr{X}$. Our task is to prove that there exists in $\mathscr{X}$ a maximal set. - -and the proof continues... -In the 2nd paragraph: - -'Since each set in $\mathscr{X}$ is dominated by some set in $\mathscr{S}$, the passage from $\mathscr{S}$ to $\mathscr{X}$ cannot introduce any new maximal elements.' - -Here I am able to prove that every maximal element of $\mathscr{S}$ has to be a maximal element of $\mathscr{X}$, but surely there can be maximal elements of $\mathscr{X}$ which are not maximal elements of $\mathscr{S}$ and hence 'extra'- please explain.. -In the 3rd para- The author considers a set $\mathscr{X}$ with the given properties, and states that the problem of finding a maximal element in $X$ is equivalent to finding a maximal set in $\mathscr{X}$. How come? -Detailed but simple answers would be much appreciated. - -REPLY [11 votes]: (This is a fairly major re-write of the original answer.) -For your first question, the proper answer is that the statement doesn't matter too much. But to go through it, we begin with the following two facts: -Fact 1: If $\bar{s} (x)$ is a maximal set in $\mathscr{S}$, then $x$ is a maximal element of $X$. -Proof. If $z \geq x$ holds for some $z \in X$, then $\bar{s}(x) \subseteq \bar{s}(z)$. By maximality of $\bar{s}(x)$ it follows that $\bar{s}(z) = \bar{s}(x)$, and therefore $z \in \bar{s}(x)$ meaning that $z \leq x$. Thus $z = x$. $\dashv$ -Fact 2: If $Y$ is a maximal set in $\mathscr{X}$, then $Y$ contains a maximum element $x$ (i.e. $y \leq x$ holds for all $y \in Y$), and $x$ is a maximal element of $X$. -Proof. As $Y$ is a chain in $X$, it has an upper bound $x$. 
We show that $x \in Y$ (and as it is an upper bound of $Y$ it follows that $x$ is the maximum element of $Y$) and that $x$ is a maximal element of $X$. -Suppose that $z \geq x$ holds for some $z \in X$. Note that $Y \cup \{ z \}$ is also a chain in $X$, and so by maximality of $Y$ it must be that $Y \cup \{ z \} = Y$, and therefore $z \in Y$. (In particular, as $x \geq x$ we have $x \in Y$.) As $x$ is an upper bound of $Y$ we have $z \leq x$, and so $z = x$. Thus $x$ is a maximal element of $X$. $\dashv$ -Thus starting with maximal sets in either $\mathscr{S}$ or $\mathscr{X}$ will lead you to maximal elements of the partially ordered set $X$. It might appear, at first glance, that as chains are "more general" than weak initial segments, you get more maximal elements of $X$ by starting with maximal sets in $\mathscr{X}$ than you do by starting with maximal sets in $\mathscr{S}$. -Halmos's statement is that this is not the case. More precisely, one can show that if $Y$ is a maximal set in $\mathscr{X}$ with maximum element $x$, then $\bar{s} (x)$ is a maximal set in $\mathscr{S}$. Therefore, the maximal element $x$ of $X$ obtained by beginning with a maximal set in $\mathscr{X}$ could also have been obtained by beginning with a maximal set in $\mathscr{S}$. -I honestly wouldn't worry about this too much. The real crux of the proof lies in Fact 2. -Your second question should (now) find its answer in the statement and proof of Fact 2, above. -Additional notes: -Note that in the 2nd paragraph Halmos mentions that the family $\mathscr{X}$ of all chains in the partially ordered set $X$ satisfies the following conditions: - -$\mathscr{X}$ is non-empty; -every subset of an element of $\mathscr{X}$ is also an element of $\mathscr{X}$; and -the union of every chain of elements of $\mathscr{X}$ is an element of $\mathscr{X}$. - -He then makes the following claims: -Claim 1: If there is a maximal set in $\mathscr{X}$, then there is a maximal element of $X$. -(This was established in Fact 2, above.) -Claim 2: Suppose $\mathscr{Z}$ is any family of subsets of some non-empty set $Z$ satisfying the following conditions: - -$\mathscr{Z}$ is non-empty; -every subset of an element of $\mathscr{Z}$ is also an element of $\mathscr{Z}$; and -the union of every chain of elements of $\mathscr{Z}$ is an element of $\mathscr{Z}$. - -Then there is a maximal set in $\mathscr{Z}$. -(This is what the remainder of Halmos's proof establishes, except that -- in possibly bad form -- he uses $X$ and $\mathscr{X}$ (which symbols are already used in the proof) in place of my $Z$ and $\mathscr{Z}$, respectively.) -Once these two claims are established, the proof of Zorn's Lemma is complete: Since the family $\mathscr{X}$ of all chains in $X$ has the desired properties mentioned in the premise of Claim 2, by this it follows that $\mathscr{X}$ contains a maximal set. By Claim 1 it then follows that the partially ordered set $X$ contains a maximal element. - -REPLY [4 votes]: We move from the ordered set $X$ to the ordered set of chains $\mathscr X$. The assumption on $X$ was that every chain has an upper bound, therefore if $Y$ was a maximal element in $\mathscr X$ it is a chain in $X$ and thus has an upper bound. -From this we have that an upper bound of a maximal chain is a maximal element, do you see why? (Hover to see the answer) - - Otherwise we could have added it to the chain and it would contradict its maximality.
On the other hand, if there was someone bigger than it in the chain, it would not be an upper bound!<|endoftext|> -TITLE: How to find an ellipse, given five points? -QUESTION [12 upvotes]: Is there a way to find the parameters -$$A, B, \alpha, x_0, y_0$$ -for the ellipse formula -$$\frac{(x \cos\alpha+y\sin\alpha-x_0\cos\alpha-y_0\sin\alpha)^2}{A^2}+\frac{(-x \sin\alpha+y\cos\alpha+x_0\sin\alpha-y_0\cos\alpha)^2}{B^2}=1$$ -given five points of the ellipse? - -REPLY [10 votes]: What about using the DLT : http://en.wikipedia.org/wiki/Direct_linear_transformation ? -The solution is the null-space of the matrix -$$\left(\begin{array}{cccccc} -p_1^2 & p_1q_1 & q_1^2 & p_1 & q_1 & 1 \\ -p_2^2 & p_2q_2 & q_2^2 & p_2 & q_2 & 1 \\ -p_3^2 & p_3q_3 & q_3^2 & p_3 & q_3 & 1 \\ -p_4^2 & p_4q_4 & q_4^2 & p_4 & q_4 & 1 \\ -p_5^2 & p_5q_5 & q_5^2 & p_5 & q_5 & 1 -\end{array}\right)$$ -which can be found using SVD decomposition.
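A small numerical sketch of this approach (function and variable names are mine): stack the five rows, take the right-singular vector belonging to the smallest singular value, and you have the coefficients of the conic $Ax^2+Bxy+Cy^2+Dx+Ey+F=0$ through the points.

    import numpy as np

    def conic_through(points):
        # rows [p^2, pq, q^2, p, q, 1]; the conic coefficients span the null space
        M = np.array([[p*p, p*q, q*q, p, q, 1.0] for p, q in points])
        return np.linalg.svd(M)[2][-1]

    # five points on the ellipse x^2/4 + y^2 = 1
    pts = [(2, 0), (-2, 0), (0, 1), (np.sqrt(2), np.sqrt(0.5)), (1, np.sqrt(0.75))]
    coeffs = conic_through(pts)
    for p, q in pts:
        row = np.array([p*p, p*q, q*q, p, q, 1.0])
        assert abs(row @ coeffs) < 1e-9

Recovering $A$, $B$, $\alpha$, $x_0$, $y_0$ from the six conic coefficients is then a standard, if tedious, exercise.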
<|endoftext|> -TITLE: Are Complex Substitutions Legal in Integration? -QUESTION [7 upvotes]: This question has been irritating me for awhile so I thought I'd ask here. -Are complex substitutions in integration okay? Can the following substitution be used to evaluate the Fresnel integrals: -$$\int_{0}^{\infty} \sin x^2\, dx=\operatorname {Im}\left( \int_0^\infty\cos x^2\, dx + i\int_0^\infty\sin x^2\, dx\right)=\operatorname {Im}\left(\int_0^\infty \exp(ix^2)\, dx\right)$$ -Letting $ix^2=-z^2 \implies x=\pm\sqrt{iz^2}=\pm \sqrt{i}z \implies z=\pm \sqrt{-i} x \implies dx = \pm\sqrt{i}\, dz$ -Thus the integral becomes -$$\operatorname {Im}\left(\pm \sqrt{i}\int_0^{\pm\sqrt{-i}\infty} \exp(-z^2)\, dz\right)$$ -This step requires some justification, and I am hoping someone can help me justify this step as well: -$$\pm \sqrt{i}\int_0^{\pm\sqrt{-i}\infty} \exp(-z^2)\, dz=\pm\sqrt{i}\int^\infty_0\exp(-z^2)\, dz=\pm\sqrt{i}\left(\frac{\sqrt{\pi}}{2}\right)$$ -Thus -$$\operatorname {Im}\left(\int_0^\infty \exp(ix^2)\, dx\right)=\operatorname {Im}\left(\pm\frac{\sqrt{i\pi}}{2}\right)=\operatorname {Im}\left(\pm\frac{(1+i)\sqrt{\pi}}{2\sqrt{2}}\right)=\pm\frac{1}{2}\sqrt{\frac{\pi}{2}}$$ -We find that the correct answer is the positive part (simply prove the integral is positive, perhaps by showing the integral can be written as an alternating sum of integrals). -Can someone help justify this substitution? Is this legal? - -REPLY [11 votes]: A correct way to do this might go as follows. Consider the contour integral -$$ \oint_\Gamma e^{iz^2}\ dz $$ -where $\Gamma$ is the positively oriented triangle with vertices $0, R, R+Ri$ for large $R$. -Since the integrand is analytic, Cauchy's Theorem says the result is $0$. This can be written as $J_1 + J_2 + J_3=0$, where $J_1, J_2, J_3$ are the integrals over the segments -$[0, R]$, $[R, R+Ri]$, and $[R+Ri,0]$ respectively. -Show that as $R \to +\infty$, $J_2 \to 0$ and $J_3 \to -(1+i) \dfrac{\sqrt{2 \pi}}{4}$. -Thus $J_1 \to (1+i) \dfrac{\sqrt{2 \pi}}{4}$, which says that -$$ \int_0^\infty \cos(t^2)\ dt = \int_0^\infty \sin(t^2)\ dt = \dfrac{\sqrt{2 \pi}}{4}$$<|endoftext|> -TITLE: Self-Linking Number on 3-Manifolds -QUESTION [5 upvotes]: We can assign a framing to a knot $K$ (in some nice enough space $M$) in order to calculate the self-linking number $lk(K,K)$. But of course it is not necessarily canonical, as added twists in your vector field can remove/add crossings. -Two things are stated in Witten's QFT paper on the Jones polynomial, which I do not quite see: -1) On $S^3$ we do have a canonical framing of knots, by requesting that $lk(K,K)=0$. -Why? I must be understanding this incorrectly, because if we decide the framing by requiring $lk(K,K)=0$, so that the framing has whatever twists it needs to accomplish this, then aren't we making a choice? We could have simply required $lk(K,K)=n$ for any integer $n$. If $n> 0$ does there then exist multiple possible framings? -2) For general 3-manifolds, we can have $lk(K,K)$ ill-defined or it can be a fixed fraction (modulo $\mathbb{Z}$) so that any choice of framing won't make it $0$. -What are some examples? When is it possible to set a fixed fraction? Is there a relation between the 3-manifold $M$ and the fractional value you can assign to $lk(K,K)$? - -REPLY [6 votes]: I see my answer wasn't so clear. Let me try again. -Your first comment says that given a knot $K:S^1\subset M$ in a 3-manifold, with a choice of framing, i.e. normal vector field to $K$, you can calculate the self-linking number. -My first remark is that this is only possible if the knot is nullhomologous, i.e. represents $0$ in $H_1(M)$. For example, there is no self-linking number of the core $0\times S^1\subset D^2\times S^1$ of a solid torus, no matter how you frame it. -If K is nullhomologous, then depending on how you think of homology you see that there is a 2-chain with boundary the knot $K$. It is true, but a bit more work, to see that in fact there exists an oriented embedded surface $C\subset M$ with boundary $K$. (So you can take the 2-chain to be the sum of the triangles in a triangulation of $C$.) Then given any other knot $L$ disjoint from $K$ (for example a push off of $K$ with respect to some framing) then the intersection of $C$ with $L$ is by definition $lk(K,L)$ and is an integer. You may worry about whether it is independent of the choice of $C$, and the answer is yes if $L$ is also nullhomologous, or more generally torsion (i.e. finite order) in $H_1(M)$, and moreover in this case it is also symmetric, $lk(K,L)=lk(L,K)$. Notice that no framing of $K$ or $L$ was used to define $lk(K,L)$. -Now to answer your questions. Since $H_1(S^3)=0$, every knot in $S^3$ is nullhomologous. Thus any two component link in $S^3$ has a well defined integer linking number. You are considering the 2 component link determined by $K$ and a normal framing: the normal framing is used to push off $K$ to get $L=f(K)$. As you note, changing the framing changes the linking number, and in fact by twisting once over a small arc in $K$ you can change it by $\pm1$. Thus there is some framing $f$ so that $lk(K, f(K))=0$, this is the canonical framing (typically called the "0-framing"). It makes sense in $S^3$, or any 3-manifold with $H_1(M)=0$. -For your second question, you are referring to a slightly different concept, which is the linking pairing $lk:Torsion(H_1(M))\times Torsion(H_1(M))\to Q/Z$. -It is defined as follows: given $a,b$ torsion classes, then some integer $n$ (for example the order of $torsion(H_1(M))$) has the property that $na$ is nullhomologous. Thus $na$ is represented by a nullhomologous knot, call it $K$. $b$ is also represented by a knot say $L$, which can be perturbed to be disjoint from $K$. Then define $lk(a,b)$ to be $(1/n) lk(K,L)$ mod $Z$, with $lk(K,L)$ as above. -For example, if $P$ is a knot in a lens space $M=L(p,q)$ with $H_1(M)=Z/p$, you could take $a=[P]$ and $b=[P]$ in $H_1(M)$, and then $lk(a,b)=q n^2/p$ for some integer $n$ that depends on $a$. Note that the answer (mod Z!)
is independent of how you push $P$ off itself, in particular, the framing of $P$ is irrelevant, and you'll never get $0$ unless $a=0$ (i.e. $P$ is nullhomologous). Note also that if you don't mod out by the integers then the linking pairing is not well defined.<|endoftext|> -TITLE: Please recommend books on calculus, linear algebra, statistics for someone trying to learn Probability Theory and Machine Learning? -QUESTION [6 upvotes]: I am tackling some topics in Probability Theory and Machine Learning and while I have plenty of resources dedicated to those disciplines I am lacking in a good basic math foundation. -Does anyone know any good, concise math books that can help introduce the foundations (calculus, linear algebra, statistics) of these disciplines to someone whose exposure to math is very limited? -Of particular interest would be a book that could relate these concepts to someone familiar with programming to leverage that mode of thinking to relate the essential ideas. - -REPLY [2 votes]: For Calculus: I would use the standard textbook for U.S. colleges: - Calculus: Early Transcendentals -For Linear Algebra: I highly encourage you to check out Professor Gilbert Strang's Introduction to Linear Algebra, -supplemented with MIT OpenCourseWare's free Linear Algebra course. -There is even a series of video lectures of Professor Gilbert Strang's course on YouTube. -This is how I taught myself Linear Algebra. -For Probability and Statistics: I used Probability & Statistics for Engineers & Scientists for my introduction class during sophomore year. If you REALLY want to understand probability theory, I cannot recommend Probability Theory: The Logic of Science enough. It is hands down one of the best books I've ever read. You will be a master at Probability Theory after reading it. -By the way all these textbooks are available online as pdf versions at a website called Library Genesis<|endoftext|> -TITLE: how to find maximal linearly independent subsets -QUESTION [10 upvotes]: Given a set of vectors, we can compute the number of independent vectors by calculating the rank of the set, but my question is how to find a maximal linearly independent subset. Thanks! - -REPLY [16 votes]: One method is: - -Place the vectors as columns of a matrix. Call this matrix $A$. -Use Gaussian elimination to reduce the matrix to row-echelon form, $B$. -Identify the columns of $B$ that contain the leading $1$s (the pivots). -The columns of $A$ that correspond to the columns identified in step 3 form a maximal linearly independent set of our original set of vectors. - -Another method is: - -Let $A$ be your multiset of vectors, and let $B=\varnothing$, the empty set. -Remove from $A$ any repetitions and all zero vectors. -If $A$ is empty, stop. This set is a maximal linearly independent subset of $A$. Otherwise, go to step 4. -Pick a vector $\mathbf{v}$ from $A$ and test to see if it lies in the span of $B$. -If $\mathbf{v}$ is in the span of $B$, replace $A$ with $A-\{\mathbf{v}\}$, and do not modify $B$; then go back to step 3. -If $\mathbf{v}$ is not in the span of $B$, replace $A$ with $A-\{\mathbf{v}\}$ and replace $B$ with $B\cup\{\mathbf{v}\}$. Then go back to step 3. - -When step 3 instructs you to stop, $B$ contains a maximal linearly independent subset of $A$. - -REPLY [3 votes]: Form a matrix whose columns are the given vectors. Do row reduction to bring it to reduced form. In each non-zero row of the reduced form, circle the leftmost non-zero entry.
The columns in the original matrix that correspond to the circled columns in the reduced matrix form a maximal linearly independent set.
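The first method is easy to automate; a short sketch with sympy (the example vectors are mine): rref reports the pivot columns, which index a maximal linearly independent subset.

    import sympy as sp

    def max_independent_subset(vectors):
        # put the vectors in as columns; rref returns the pivot column indices
        A = sp.Matrix.hstack(*[sp.Matrix(v) for v in vectors])
        _, pivots = A.rref()
        return [vectors[i] for i in pivots]

    vecs = [[0, 0, 2, 2], [2, 0, 2, 0], [2, 0, 4, 2], [3, 2, -5, -6]]
    print(max_independent_subset(vecs))  # third vector = sum of first two, so it is skipped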
<|endoftext|> -TITLE: Infinite extension in Galois theory -QUESTION [6 upvotes]: I got this question while I am studying Galois correspondence. -Let $K/F$ be an infinite extension and $G = \mathrm{Aut}(K/F)$. -Let $H$ be a subgroup of $G$ with finite index and $K^H$ -be the fixed field of $H$. Is it true that $[K^H:F]= (G:H)$? -For finite extension, I verified this -is true. Is it true in infinite case also? -Thanks - -REPLY [6 votes]: KCd's comment answers the question, but let's be a little more explicit. Let $F = \mathbb{Q}(x_1, x_2, ...)$ and let $K = \mathbb{Q}(\sqrt{x_1}, \sqrt{x_2}, ...)$. The Galois group $G = \text{Aut}(K/F)$ is $\prod_{n=1}^{\infty} \mathbb{Z}/2\mathbb{Z}$ with the product topology. The subgroup $\bigoplus_{n=1}^{\infty} \mathbb{Z}/2\mathbb{Z}$ is countable, so is a proper subgroup of $G$. Conditional on the axiom of choice (but actually we only need the ultrafilter lemma here) this subgroup is contained in a maximal subgroup $H$. The quotient $G/H$ is a simple group in which every element has order $2$, so must be $\mathbb{Z}/2\mathbb{Z}$. -So $H$ has index $2$. But since it contains the topological generators of $G$ we conclude that $K^H = F$.<|endoftext|> -TITLE: Flirtatious Primes -QUESTION [8 upvotes]: Here's a possibly interesting prime puzzle. Call a prime $p$ flirtatious if the sum of its digits is also prime. Are there finitely many flirtatious primes, or infinitely many? - -REPLY [9 votes]: These are tabulated at the Online Encyclopedia of Integer Sequences. It appears to be known that there are infinitely many, and a link is given to a recent paper of Harman. Some high-powered math is involved.<|endoftext|> -TITLE: $f(x)=x^n+5x^{n-1}+3$ can't be expressed as the product of two polynomials -QUESTION [8 upvotes]: Let $f(x)=x^n+5x^{n-1}+3$ where $n\geq1$ is an integer. Prove that $f(x)$ can't be expressed as the product of two polynomials each of which has all its coefficients integers and degree $\geq1$. - -If the condition that each polynomial must have all its coefficients integers was not there, then I needed to show only that $f(x)$ is irreducible over real numbers. But since this condition is given, therefore, we can't exclude the case when $f(x)$ is reducible over real numbers but not with polynomials of integer coefficients. Anyone with an idea how to proceed? Thanks in advance!! - -REPLY [6 votes]: Hint $\ $ This is a minor variant of Eisenstein's criterion. Mod $3$ it factors as $\rm\:x^{n-1}(x+5)\:$ so by uniqueness of factorization if $\rm\:f = gh\:$ then, mod $3,\,$ $\rm\:g = x^j,\, $ $\rm\,h = x^k(x+5),\,$ $\rm\:j\!+\!k = n\!-\!1.\,$ But not $\rm\,j,k > 0\,$ else $\rm\:3\:|\:g(0),h(0)\:$ $\Rightarrow$ $\rm\:9\:|\:g(0)h(0)=f(0).\:$ Hence either $\rm\:j=0,\:$ so $\rm\:f\:$ is irreducible, or $\rm\:k=0,\:$ and $\rm\:f\:$ has a linear factor $\rm\:g,\:$ which is easily ruled out. $\ \ $ QED<|endoftext|> -TITLE: For field extensions $F\subsetneq K \subset F(x)$, $x$ is algebraic over $K$ -QUESTION [9 upvotes]: Let $x$ be an element not algebraic over $F$, and $K \subset F(x)$ a subfield that strictly contains $F$. Why is $x$ algebraic over $K$? -Thanks a lot! - -REPLY [12 votes]: The thing is you need to understand what are the subfields in between $F$ and $F(x)$. If you have a subfield $K$ of $F(x)$ that contains $F$, then to allow $K$ to contain "more" elements than $F$ (i.e. to be distinct from $F$, but lie inside $F(x)$), $K$ must contain some element of $F(x)$ not lying in $F$, say $\frac{p(x)}{q(x)} \in K \setminus F$. But then this means $x$ is a root of $p(t) - q(t)\frac{p(x)}{q(x)} \in K[t]$ (a nonzero polynomial, because $\frac{p(x)}{q(x)}\notin F$ forces the rational function $p/q$ to be nonconstant), hence $x$ is algebraic over $K$. -EDIT : @Aspirin : I assumed in my question that $F \subset K \subset F(x)$ because I thought it was clear from the context, but perhaps it is not. I think that in this case, if you assume that $K$ is not a subfield of $F$, you can replace $F$ by $F' = K \cap F$ so that $F' \subsetneq K \subsetneq F'(x)$. You clearly have $F' \subsetneq K$ by assumption on $K$ in this case (i.e. that $K$ not contained in $F$) and you get $K \subsetneq (K \cap F)(x) = K(x) \cap F(x)$ since $K \subseteq K(x)$ and $K \subsetneq F(x)$, hence is in the intersection, but if you had equality, it would mean $K = K(x) \cap F(x)$, and since $x \in K(x) \cap F(x)$, that would mean $x \in K$. In that case OP's question is trivial, so we can suppose we are not in that case. Also note that since $x$ is not algebraic over $F$ it is certainly not over $F'$. Therefore $F' \subsetneq K \subsetneq F'(x)$ and you can use the arguments above to show that $x$ is algebraic over $K$. -EDIT2 : I am leaving the preceding edit there because I still don't understand why it breaks down in the case $K \subseteq F$, which shouldn't work if we are still sane around here. I am more convinced that you need to assume $F \subsetneq K$ than the fact that my edit gives something good. -EDIT3 : After all those editings and reading the comments, I modified EDIT so that the case where $K \subseteq F$ is clearly not possible. The most general case where it would work is when $K \subseteq F(x)$ but $K$ is not a subfield of $F$. A slightly small detail dodged my eye and Dylan Moreland caught it, thanks to him. Note that this also proves that $x$ would be algebraic over $K$ if and only if $K$ is not a subfield of $F$ (assuming $K$ is described as in OP's question). -Hope that helps,<|endoftext|> -TITLE: How to evaluate $ \lim \limits_{n\to \infty} \sum \limits_ {k=1}^n \frac{k^n}{n^n}$? -QUESTION [26 upvotes]: I can show that the following limit exists but -I am having difficulties to find it. It is -$$\lim_{n\to \infty} \sum_{k=1}^n \frac{k^n}{n^n}$$ -Can someone please help me? - -REPLY [19 votes]: Just for reference: With the aid of some fancy theorem, you can skip most of the hard analysis. As in other answers, we begin by writing -$$ \sum_{k=1}^{n} \left( \frac{k}{n}\right)^n -\ \overset{k \to n-k}{=} \ \sum_{k=0}^{n-1} \left( 1 - \frac{k}{n}\right)^n -\ = \ \sum_{k=0}^{\infty} \left( 1 - \frac{k}{n}\right)^n \mathbf{1}_{\{k < n\}}, $$ -where $\mathbf{1}_{\{k < n\}}$ is the indicator function which takes value $1$ if $k < n$ and $0$ otherwise. Now for each $0 \leq k < n$, utilizing the inequality $\log(1-x) \leq -x$ which holds for all $x \in [0,1)$ shows that -$$ \left( 1 - \frac{k}{n}\right)^n -= e^{n \log(1 - \frac{k}{n})} -\leq e^{-k}. $$ -Since $\sum_{k=0}^{\infty} e^{-k} < \infty$, by the dominated convergence theorem we can interchange the infinite sum and the limit: -$$ \lim_{n\to\infty} \sum_{k=1}^{n} \left( \frac{k}{n}\right)^n -= \sum_{k=0}^{\infty} \lim_{n\to\infty} \left( 1 - \frac{k}{n}\right)^n \mathbf{1}_{\{k < n\}} -= \sum_{k=0}^{\infty} e^{-k} -= \frac{1}{1 - e^{-1}}. $$
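The value is also easy to corroborate numerically before proving anything (a quick check):

    import math

    for n in (10, 100, 1000, 10000):
        print(n, sum((k / n) ** n for k in range(1, n + 1)))
    print("1/(1 - 1/e) =", 1 / (1 - math.exp(-1)))  # 1.5819767...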
<|endoftext|> -TITLE: Why is the recognition principle important? -QUESTION [13 upvotes]: The recognition principle basically states that (under some conditions) a topological space $X$ has the weak homotopy type of some $\Omega^k Y$ iff it is an $E_k$-algebra (i.e. an algebra over the operad of the little $k$-cubes). This principle is often quoted as one important application of operad theory (see this MO post for example). -My question, then, is: why is the recognition principle important? What are some typical applications, consequences? I know it has some links to commutativity of loops and things like that, but I'm not quite sure what that entails. - -REPLY [7 votes]: Calculating homotopy groups in general is pretty hard, at least compared to computing homology or cohomology. In between these two extremes, you can try to compute the stable homotopy of a space $X$. By definition, the $i^{th}$ stable homotopy group of $X$ is equal to $\pi_{i+k} \Sigma^{k} X$ for sufficiently large $k$. This group is denoted $\pi^s_i X$. In fact, the sequence of functors -$\pi^s_i(-) \colon \mathrm{Spaces} \to \mathrm{Abelian\ Groups}$ -forms a generalized homology theory. So these stable homotopy groups can be accessed with many of the same tools and methods as used on ordinary homology. -Now, observe that $\pi_{i+k} \Sigma^k X = \pi_i \Omega^k \Sigma^k X$, so instead of studying $\pi_i^s$, you could study spaces of the form $\Omega^k \Sigma^k X$ for large $k$. As mentioned in the comments, if you take the colimit of these spaces as $k\to\infty$, you get a spectrum, which is the same thing as a generalized homology or cohomology theory. -All of the above is just to say that showing certain spaces are spectra is an example of an important application. Below, I argue that the recognition principle helps you construct such examples. -One simple conclusion from May's delooping theorem is that a topological space with an abelian group structure gives a cohomology theory. In addition, it is often easier to show you have an $E_n$ structure on a space $Y$ for each $n$ than to find a space $X$ such that $\Omega^\infty\Sigma^\infty X = Y$. Usually $Y$ is the classifying space of some symmetric monoidal category. There are other examples, although none come to mind at the moment, of categories with extra structure whose classifying spaces give rise to infinite loop spaces.<|endoftext|> -TITLE: How to evaluate this exponential integral -QUESTION [9 upvotes]: Is there an easy way to compute $$\int_{-\infty}^\infty\exp(-x^2+2x)\mathrm{d}x$$ -without using a computer package? - -REPLY [6 votes]: In general -$$ -\begin{align} -\int_{-\infty}^\infty e^{-(ax^2+bx)}\,dx&=\int_{-\infty}^\infty \exp\left(-a\left(\left(x+\frac{b}{2a}\right)^2-\frac{b^2}{4a^2}\right)\right)\,dx\\ -&=\exp\left(\frac{b^2}{4a}\right)\int_{-\infty}^\infty \exp\left(-a\left(x+\frac{b}{2a}\right)^2\right)\,dx\\ -\end{align} -$$ -Let $u=x+\frac{b}{2a}\;\rightarrow\;du=dx$ (the limits are unchanged, since we integrate over the whole real line), then -$$ -\int_{-\infty}^\infty e^{-(ax^2+bx)}\,dx=\exp\left(\frac{b^2}{4a}\right)\int_{-\infty}^\infty e^{-au^2}\,du. -$$ -The last integral is the Gaussian integral, which equals $\sqrt{\frac{\pi}{a}}$ for $a>0$. Hence -$$ -\int_{-\infty}^\infty e^{-(ax^2+bx)}\,dx=\sqrt{\frac{\pi}{a}}\exp\left(\frac{b^2}{4a}\right). -$$
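A numerical sanity check of this general formula (a sketch; the test values of $a$ and $b$ are arbitrary):

    import numpy as np
    from scipy.integrate import quad

    for a, b in [(1.0, -2.0), (2.0, 3.0), (0.5, 1.0)]:
        val, _ = quad(lambda x: np.exp(-(a * x * x + b * x)), -np.inf, np.inf)
        rhs = np.sqrt(np.pi / a) * np.exp(b * b / (4 * a))
        assert abs(val - rhs) < 1e-7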
In your case, $a=1$ and $b=-2$, so -$$ -\int_{-\infty}^\infty e^{-x^2+2x}\,dx=\sqrt{\pi}\,\exp\left(\frac{(-2)^2}{4}\right)=e\sqrt{\pi}. -$$ - -$$\text{# }\mathbb{Q.E.D.}\text{ #}$$<|endoftext|> -TITLE: An introductory textbook on functional analysis and operator theory -QUESTION [7 upvotes]: I would like to ask for some recommendation of introductory texts on functional analysis. I am not a professional mathematician and I am totally new to the subject. However, I found out that some knowledge of functional analysis and operator theory would be quite helpful to my work... -What I am searching for is some accessible and instructive text not necessarily covering the subject in great depth, but explaining the main ideas. I am not searching for a text for engineers, some amount of mathematical rigor would be fine. But I found myself unable to read some standard textbooks covering in great depth a large amount of issues in theory of Banach spaces, etc. I am looking for something that proceeds to the most important topics (e.g., spectral theory) faster than most textbooks, but not at the expense of rigor. I.e., something that covers rigorously the main topics, but concentrates only on the main ideas. -Simply an accessible introductory text for a fast orientation in the subject. -Moreover, I would prefer a text that does not require any background in measure theory and similar disciplines. -And another question: is there any functional analysis book that deals primarily with sequence spaces? It need not fulfill the description above. -Thank you for your recommendations! - -REPLY [4 votes]: The best introductory text I know on the subject is available from Dover books very cheap; it's George Bachman and Lawrence Narici's Functional Analysis. It develops virtually the entire subject of functional analysis from scratch with many examples and good exercises - and it requires only advanced calculus (elementary real analysis) and a good working knowledge of linear algebra to work through. -Also, if you just "need to know some functional analysis" for work, you might want to check out some applied textbooks on the subject, of which there are many.<|endoftext|> -TITLE: Finding an orthogonal basis from a column space -QUESTION [8 upvotes]: I'm having issues with understanding one of the exercises I'm working on. -I have to find an orthogonal basis for the column space of $A$, where: -$$A = \begin{bmatrix} -0 & 2 & 3 & -4 & 1\\ -0 & 0 & 2 & 3 & 4 \\ -2 & 2 & -5 & 2 & 4\\ -2 & 0 & -6 & 9 & 7 -\end{bmatrix}.$$ -The first question was to find a basis of the column space of $A$, clearly this is simply the first $3$ column vectors (by reducing it to row echelon form, and finding the leading $1$'s). -However, then I had to find an orthogonal basis out of the column space of $A$, and here is where I get lost. I started off with finding the first vector: -$$u_1 = \begin{bmatrix}0\\0\\2\\2\\\end{bmatrix}.$$ -Then I thought I would find the second vector like this: -$$u_2 = \begin{bmatrix}2\\0\\2\\0\\\end{bmatrix}-\left(\begin{bmatrix}2\\0\\2\\0\\\end{bmatrix}\cdot\begin{bmatrix}0\\0\\2\\2\\\end{bmatrix}\right)*\begin{bmatrix}0\\0\\2\\2\\\end{bmatrix} = \begin{bmatrix}2\\0\\2\\0\\\end{bmatrix}-4*\begin{bmatrix}0\\0\\2\\2\\\end{bmatrix} = \begin{bmatrix}2\\0\\-6\\-8\\\end{bmatrix}.$$ -However, according to the result sheet we were given, instead of having a $4$, I should have $\frac{4}{8}$.
I somehow can not figure out what I am missing, since the dot product of the two vectors clearly is $4$. -Also, as a second question: if I had to find an orthonormal basis I would only have to take the orthogonal vectors found here, and multiply them by their $1$/length, correct? - -REPLY [10 votes]: Your basic idea is right. However, you can easily verify that the vectors $u_1$ and $u_2$ you found are not orthogonal by calculating -$$\langle u_1, u_2\rangle = (0,0,2,2)\cdot \left( \begin{matrix} 2 \\ 0 \\ -6 \\ -8 \end{matrix} \right) = -12-16 = -28 \neq 0$$ -So something is going wrong in your process. -I suppose you want to use the Gram-Schmidt Algorithm to find the orthogonal basis. I think you skipped the normalization part of the algorithm because you only want an orthogonal basis, and not an orthonormal basis. However even if you don't want to have an orthonormal basis you have to take care about the normalization of your projections. If you only do $\langle v, u_i\rangle\, u_i$ it will go wrong. Instead you need to normalize and take $u_i\frac{\langle v, u_i\rangle}{\langle u_i, u_i\rangle}$. If you do the normalization step of the Gram-Schmidt Algorithm, of course $\langle u_i, u_i\rangle=1$ so it's usually left out. The Wikipedia article should clear it up quite well. -Update -Ok, you say that $v_1 = \left( \begin{matrix} 0 \\ 0 \\ 2 \\ 2 \end{matrix} \right), v_2 = \left( \begin{matrix} 2 \\ 0 \\ 2 \\ 0 \end{matrix} \right), v_3 = \left( \begin{matrix} 3 \\ 2 \\ -5 \\ -6 \end{matrix} \right)$ is the basis you start from. -As you did you can take the first vector $v_1$ as it is. So your first basis vector is $u_1 = v_1$. Now you want to calculate a vector $u_2$ that is orthogonal to this $u_1$. Gram Schmidt tells you that you receive such a vector by -$$u_2 = v_2 - \text{proj}_{u_1}(v_2)$$ -And then a third vector $u_3$ orthogonal to both of them by -$$u_3 = v_3 - \text{proj}_{u_1}(v_3) - \text{proj}_{u_2}(v_3)$$ -You did take this approach. What went wrong is your projection. You calculated it as -$$ \text{proj}_{u_1}(v_2) = \langle v_2, u_1\rangle\, u_1$$ -but this is incorrect. The true projection is -$$ \text{proj}_{u_1}(v_2) = \frac{\langle v_2, u_1\rangle}{\langle u_1, u_1\rangle}\, u_1$$ -As I tried to point out, some textbooks will skip the division by $\langle u_i, u_i\rangle$ in the explanation of Gram-Schmidt, but this is because in most cases you want to construct an orthonormal basis. In that case you normalize every $u_i$ before proceeding to the next step. Therefore $\langle u_i, u_i\rangle = 1$ can be skipped. -So what you need to change is to divide by $\langle u_1, u_1\rangle = 8$ in your projection.
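For completeness, the corrected computation as a short sketch (the function name is mine):

    import numpy as np

    def gram_schmidt(vectors):
        # orthogonal (not normalized) basis; note the division by <u, u>
        basis = []
        for v in vectors:
            w = np.asarray(v, dtype=float)
            for u in basis:
                w = w - (w @ u) / (u @ u) * u
            basis.append(w)
        return basis

    u1, u2, u3 = gram_schmidt([[0, 0, 2, 2], [2, 0, 2, 0], [3, 2, -5, -6]])
    print(u2)                          # [ 2.  0.  1. -1.]
    print(u1 @ u2, u1 @ u3, u2 @ u3)   # all (numerically) zero

With the division by $\langle u_1,u_1\rangle = 8$ in place, the second vector comes out as $(2,0,1,-1)$ rather than $(2,0,-6,-8)$.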
<|endoftext|> -TITLE: What is the meaning of $\bigvee$ (bigvee) operator -QUESTION [10 upvotes]: What does the $\bigvee$ operator mean on page 6 of this document? -It is a Variable-sized Math Operator. -What about $\bigwedge$? - -REPLY [4 votes]: "Wedges and vees" ($\wedge,\vee$) are usually used to denote "meets and joins" (respectively) in lattice theory. Roughly speaking "meet" means "greatest lower bound" and "join" means "least upper bound". -This will come up in logic too because logical conditionals have interpretations as "meets and joins". -As you can see in the document page 6, it looks like they are translating 1.2/1.3/1.4 to 1.5/1.6/1.7. I am very suspicious that there is a typo, because in one spot they have replaced $\cup$ with $\vee$ (and that makes sense, since $\cup$ is a join operator for the lattice of subsets of a set), but they also did the same for $\cap$. -I think possibly they should have replaced $\cap$ with $\wedge$. ($\cap$ is of course, the meet operator in the lattice of subsets of a set.)<|endoftext|> -TITLE: What are the subsemigroups of $(\mathbb N,+)?$ -QUESTION [9 upvotes]: While trying to solve a somewhat bigger problem, I realized that I don't know what the subsemigroups of one of the most important semigroups, $(\mathbb N,+)$, are. (I assume $0\not\in\mathbb N$.) I've tried to characterize them but I haven't managed to do it fully. What follows are the facts I have proven so far, without most of the proofs but with lemmas, which hopefully show how the proofs go. I'm not posting all the proofs not because I'm sure my proofs are correct, but because I don't want to make this post too long. I will add the proof of anything below on request. -Notation. $\langle a_1,\ldots,a_n\rangle$ will denote the subsemigroup generated by $a_1,\ldots,a_n\in\mathbb N$. If $X\subseteq \mathbb N$, then $\langle X\rangle$ will denote the subsemigroup generated by $X$. -Lemma 1. $\langle a_1,\ldots,a_n\rangle = \{k_1a_1+\ldots+k_na_n\,|\,k_i\geq 0,\,\sum k_i>0\}.$ -Lemma 2. If $a_1,\ldots,a_n\in\mathbb N$ and $\gcd(a_1,\ldots,a_n)=1,$ then there exists $x\in \langle a_1,\ldots,a_n\rangle$ such that for every $m\geq x,\,m\in\mathbb N,$ we have that $m\in \langle a_1,\ldots,a_n\rangle.$ -Notation. For $n\in\mathbb N$ and $X\subseteq \mathbb N$, $nX$ will denote $\{nx\,|\,x\in X\}.$ -Lemma 3. For every finitely generated subsemigroup $S=\langle a_1,\ldots,a_n\rangle$ of $\mathbb N$, there exists a finitely generated subsemigroup $T$ of $\mathbb N$ whose generators are coprime and such that $S=\gcd(a_1,\ldots,a_n)\,T$. -Proposition 4. Every finitely generated subsemigroup $S=\langle a_1,\ldots,a_n\rangle$ of $\mathbb N$ eventually becomes an infinite arithmetic progression with difference $d=\gcd(a_1,\ldots,a_n)$. That is, there exists $x\in S$ such that $S\cap\{n\in\mathbb N\,|\,n\geq x\}=\{x+kd\,|\,k\geq 0\}.$ (It has to be noted that $d|x.$) -Lemma 5. If $X\subseteq \mathbb N,$ then there exists a unique $\gcd(X),$ that is a number $d\in\mathbb N$ such that for all $x\in X$ we have $d|x,$ and if for all $x\in X$ we have $c|x$, then $c|d.$ There also exists a finite subset $Y\subseteq X$ such that $\gcd (Y)=\gcd(X).$ -Proposition 6. Every subsemigroup of $\mathbb N$ is finitely generated. -Proof. Let $S$ be a subsemigroup of $\mathbb N$. Let $d=\gcd (S).$ Then there exists $Y\subseteq S$ such that $\gcd(Y)=d.$ Surely $\langle Y\rangle\subseteq S.$ There exists $x\in\langle Y\rangle$ such that $$\langle Y\rangle\cap\{n\in\mathbb N\,|\,n\geq x\}=\{x+kd\,|\,k\geq 0\}.$$ -Thus, beginning from $x$, all numbers divisible by $d$ are in $\langle Y\rangle.$ Therefore, in particular, all elements of $S$ greater than or equal to $x$ are in $\langle Y\rangle.$ It follows that $S=\langle Y\cup (S\cap \{n\in\mathbb N\,|\,n<x\})\rangle,$ so $S$ is finitely generated. $\square$
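Lemma 2 and Proposition 4 are easy to experiment with numerically; a small sketch (helper names are mine):

    from math import gcd
    from functools import reduce

    def subsemigroup(gens, bound):
        # the elements of <gens> up to bound, built by closing under addition
        S, frontier = set(), set(gens)
        while frontier:
            x = frontier.pop()
            if x <= bound and x not in S:
                S.add(x)
                frontier.update(x + g for g in gens)
        return S

    gens = (3, 5)  # gcd = 1
    S = subsemigroup(gens, 50)
    print(reduce(gcd, gens), sorted(set(range(1, 51)) - S))
    # prints 1 and the gaps [1, 2, 4, 7]: every n >= 8 lies in <3, 5>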
<|endoftext|> -TITLE: Is $\mathbb{R}^\omega$ a completely normal space, in the box topology? -QUESTION [6 upvotes]: Basically, what the title says. Is $\mathbb{R}^\omega$ a completely normal space in the box topology ? -($\mathbb{R}^\omega$ is the space of sequences to $\mathbb{R}$) -Thanks ! - -REPLY [3 votes]: Quoting Munkres from his book Topology (this remark is made in the 5th exercise of section 32): - -It is not known whether $\mathbb{R}^{\omega}$ is normal in the box topology. Mary-Ellen Rudin has shown that the answer is affirmative if one assumes the continuum hypothesis [RM]. In fact, she shows it satisfies a stronger condition called paracompactness. - [RM] M. E. Rudin. The box product of countably many compact metric spaces. General Topology and Its Applications, 2:293-298, 1972. - -Of course this doesn't rule out the possibility that it is not completely normal.<|endoftext|> -TITLE: Density of finite rank operators on Hilbert space -QUESTION [5 upvotes]: Are finite rank operators on Hilbert space $H$ dense in $B(H)$ in the weak operator topology? - -REPLY [5 votes]: Let $\mathcal{P}(H)$ be the directed set of all finite rank orthogonal projections in $H$. We take $p\leq q$ in $\mathcal{P}(H)$ if $\mathrm{Im}(p)\subseteq\mathrm{Im}(q)$. One may show that for all $a\in\mathcal{B}(H)$ we have -$$ -a=\lim\limits_{p\in\mathcal{P}(H)}pap\tag{1} -$$ -in the weak operator topology. Indeed, take $x,y\in H$. Then for all orthogonal projections $p$ such that $\mathrm{Im}(p)\supseteq\mathrm{span}\{x,y,a(x)\}$ we will have $\langle (pap)(x),y\rangle=\langle a(x),y\rangle$. So we proved equality $(1)$. -Since each $pap$ with $p\in \mathcal{P}(H)$ has finite rank, we conclude that the finite rank operators are dense in $\mathcal{B}(H)$ in the weak operator topology.<|endoftext|> -TITLE: Why is minimizing the nuclear norm of a matrix a good surrogate for minimizing the rank? -QUESTION [34 upvotes]: A method called "Robust PCA" solves the matrix decomposition problem -$$L^*, S^* = \arg \min_{L, S} \|L\|_* + \|S\|_1 \quad \text{s.t. } L + S = X$$ -as a surrogate for the actual problem -$$L^*, S^* = \arg \min_{L, S} rank(L) + \|S\|_0 \quad \text{s.t. } L + S = X,$$ i.e. the actual goal is to decompose the data matrix $X$ into a low-rank signal matrix $L$ and a sparse noise matrix $S$. In this context: why is the nuclear norm a good approximation for the rank of a matrix? I can think of matrices with low nuclear norm but high rank and vice-versa. Is there any intuition one can appeal to? - -REPLY [10 votes]: To be accurate, it has been shown that the $\ell_1$ norm is the convex envelope of the $\| \cdot \|_0$ pseudo-norm while the nuclear norm is the convex envelope of the rank. -As a reminder, the convex envelope is the tightest convex surrogate of a function. An important property is that a function and its convex envelope have the same global minimizer.<|endoftext|> -TITLE: How to solve $x'(t)=\frac{x+t}{2t-x}$? -QUESTION [5 upvotes]: I wish to solve $x'(t)=\frac{x+t}{2t-x}$ with the initial condition -$x(1)=0$. I noted that $x'(t)=f(\frac{x}{t})$ where $f(y)=\frac{y+1}{2-y}$ -so I denoted $y(t)=\frac{x(t)}{t}$ and got that $y'(t)=\frac{f(y)-y}{t}$ -so I can write something like $\frac{dy}{dt}=\frac{f(y)-y}{t}$ so -$\frac{dy}{f(y)-y}=\frac{dt}{t}$ . -Here I am a bit stuck, I know I should do something like take integrals -on both sides, but I am having trouble with the initial condition. -In this exercise I can leave integrals in the answer so I would like -to know how to get the solution with the integral, I think this requires -to calculate the boundaries of some integral (maybe of $\frac{f(y)-y}{t}$ -?) -How can I continue to get the solution with integral that satisfies -the initial condition ? -Help is appreciated! - -REPLY [3 votes]: Your differential equation is homogeneous of order one. You did it right in finding $\frac{dy}{dt}=\frac{f(y)-y}{t}$. So you can integrate both sides of the last equation to obtain a solution relating $y(t)$ and $t$. Then you can substitute $x(t)=y(t)t$ in your solution and then apply the initial condition. I hope it helps. :)
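Carrying the integration out for this particular $f$ (with $f(y)-y=\frac{y^2-y+1}{2-y}$) gives $\int\frac{(2-y)\,dy}{y^2-y+1}=\ln t+C$, so $\Phi(t,x)=\sqrt{3}\arctan\frac{2y-1}{\sqrt{3}}-\frac{1}{2}\ln(y^2-y+1)-\ln t$ (with $y=x/t$) is constant along solutions. A numerical cross-check of this claim (a sketch; scipy assumed):

    import numpy as np
    from scipy.integrate import solve_ivp

    def phi(t, x):
        # first integral obtained from the separated equation
        y = x / t
        return (np.sqrt(3) * np.arctan((2 * y - 1) / np.sqrt(3))
                - 0.5 * np.log(y * y - y + 1) - np.log(t))

    sol = solve_ivp(lambda t, x: (x + t) / (2 * t - x), (1, 3), [0.0],
                    dense_output=True, rtol=1e-10, atol=1e-12)
    print([round(phi(t, sol.sol(t)[0]), 8) for t in np.linspace(1, 3, 5)])
    # all entries agree with phi(1, 0)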
<|endoftext|> -TITLE: Period of the sum/product of two functions -QUESTION [60 upvotes]: Suppose that the period of $f(x)$ is $T$ and the period of $g(x)$ is $S$. I am interested in the following: what is the period of $f(x) g(x)$? What is the period of $f(x)+g(x)$? What I have tried is to search the internet, and I found the following link for this. -Also I know that the period of $\sin(x)$ is $2\pi$, but what about $\sin^2(x)$? Does it again have period $\pi n$, or? An example is the following function: -$y=\frac{\sin^2(x)}{\cos(x)}$ -I can do the following: we know that $\sin(x)/\cos(x)=\tan(x)$ and the period of the tangent function is $\pi$, so I can represent -$y=\sin^2(x)/\cos(x)$ as $y=\tan(x)\times\sin(x)$, but how can I calculate the period of this? -Please help me. - -REPLY [3 votes]: If you are supposed to find the period of a sum of two functions, $f(x)+g(x)$, given that the period of $f$ is $a$ and the period of $g$ is $b$, then the period of $f(x)+g(x)$ will be $\operatorname{LCM} (a,b)$. -But this technique has some constraints, as it will not give correct answers in some cases. One of those cases is: if you take $f(x)=|\sin x|$ and $g(x)=|\cos x|$, then the period of $f(x)+g(x)$ should be $\pi$ as per the above rule, but the period of $f(x)+g(x)$ is not $\pi$ but $\pi/2$. -So in general it is very difficult to identify correct answers for questions regarding periods. -In most cases the graph will help.
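Indeed, the $|\sin x|+|\cos x|$ counterexample can be confirmed with a few lines of code (a quick numerical sketch):

    import numpy as np

    x = np.linspace(0, 10, 10001)
    h = np.abs(np.sin(x)) + np.abs(np.cos(x))
    for p in (np.pi / 2, np.pi):
        shifted = np.abs(np.sin(x + p)) + np.abs(np.cos(x + p))
        print(p, np.max(np.abs(shifted - h)))  # ~0 for both shifts

Both shifts leave the function unchanged, so $\pi/2$, not $\operatorname{LCM}(\pi,\pi)=\pi$, is the fundamental period.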
<|endoftext|> -TITLE: Fireworks under inverse-cube gravity -QUESTION [8 upvotes]: What is the path of a projectile under an inverse-cube gravity law? - -Imagine that the law of gravity was changed overnight from -$F(r) = G m_1 m_2 / r^2$ -to -$F(r) = G' m_1 m_2 / r^3$. -To be specific, suppose -$G' = G r_E$ where $r_E$ is the radius of the Earth, -so that the force at the Earth's surface is unchanged. -I am wondering how the arc of a fireworks rocket would -compare to the parabolic path it would follow under the inverse-square law. -(In the U.S. at this time of year, the evening sky is full of fireworks -as we approach the 4th of July.) -Presumably, the same rocket would travel a bit higher and cover -a bit more distance horizontally, but what is the precise path it would follow? -It is known that the solutions to a -central force -that is inverse-cubic are -Cotes' spirals, -which come in three varieties. -But I am uncertain which of the three would apply here, and how to compute the relevant constants. -Perhaps a piece of an -epispiral. -It would be instructive to see the inverse-square parabola and the inverse-cubic Cotes's spiral, -for the same projectile initial conditions, plotted together... -Addendum. After retrieving Arnol'd's book as per Mark Dominus's recommendation, -I wanted to share one interesting fact (p.37): The only central-force laws -in which all the bounded orbits are closed are the inverse-square and inverse-cubic laws! - -REPLY [2 votes]: Inverse square and linear laws $(\Large\frac {a}{r^2}$ and $a*r)$ are the only ones to have stable closed orbits, -proved by Bertrand in 1873: -Joseph Bertrand, Théorème relatif au mouvement d'un point attiré vers un centre fixe, C. R. Acad. Sci. 77 (1873), 849-853., -see also F.C. Santos, V. Soares, A.C. Tort, An English Translation of Bertrand's Theorem, Lat. Am. J. Phys. Educ. 5 2011, 694-695. --ilan<|endoftext|> -TITLE: Normal subgroup of prime index -QUESTION [102 upvotes]: Generalizing the case $p=2$ we would like to know if the statement below is true. -Let $p$ be the smallest prime dividing the order of $G$. -If $H$ is a subgroup of $G$ with index $p$ then $H$ is normal. - -REPLY [6 votes]: Another answer if you know character theory over the complex numbers. Let $H$ be a subgroup of $G$ of index $p$, the smallest prime dividing $|G|$. Look at the trivial character of $H$ and induce it to $G$. Then $1_H^G=\sum_{\chi \in{\rm Irr}(G)} a_{\chi}\chi$, with $a_{\chi} \in \mathbb{Z}_{\geq 0}$. Since $[1_H^G, 1_G]=[1_H,1_H]=1=a_{1_G}$, it follows that the irreducible constituents of $1_H^G$ have degree $\chi(1) \leq p-1 \lt p$. Since these degrees divide $|G|$ it follows that all the irreducible constituents $\chi$ must be linear, that is $\chi(1)=1$. Hence $H \supset \ker(1_H^G)=\bigcap\{\ker(\chi)\,:\,\chi \in{\rm Irr}(G),\ a_{\chi} \gt 0\} \supseteq G'$, whence $H \lhd G$.<|endoftext|> -TITLE: How do I know which method of revolution to use for finding volume in Calculus? -QUESTION [12 upvotes]: Is there any easy way to decide which method I should use to find the volume of a revolution in Calculus? I'm currently in the middle of my second attempt at Calculus II, and I am getting tripped up once again by this concept, which seems like it should be rather straight forward, but I can't figure it out. If I have the option of the disk/washer method and the shell method, how do I know which one I should use? - -REPLY [15 votes]: The first thing to understand is that you don’t directly choose the method of integration: you determine what kind of integration will be easier, based on the shape of the region in question, and that determines which method you’ll use. -Draw the region that’s being revolved. Then ask yourself: does it slice up nicely into vertical strips, or do horizontal strips work better? - -If the region has boundaries of the form $y=f(x)$, $y=g(x)$, $x=a$, and $x=b$, the answer is almost always that vertical strips are simpler: for each $x$ from $a$ to $b$ you have a strip of length $f(x)-g(x)$ or $g(x)-f(x)$, depending on which of $f(x)$ and $g(x)$ is larger. -Similarly, if the region has boundaries of the form $x=f(y)$, $x=g(y)$, $y=a$, and $y=b$, the answer is almost always that horizontal strips are simpler: for each $y$ from $a$ to $b$ you have a strip of length $f(y)-g(y)$ or $g(y)-f(y)$, depending on which of $f(y)$ and $g(y)$ is larger. -The case of a region bounded by $y=f(x)$ and $y=g(x)$, where you have to solve for the points of intersection of the two curves in order to find the horizontal extent of the region, is really just special case of (1). Similarly, if the region is bounded by $x=f(y)$ and $x=g(y)$, and you have to solve for the vertical extent of the region, you’re looking at a special case of (2). - -What you want is a way of slicing the region into vertical or horizontal strips whose endpoints are defined in the same way. Take, for instance, the region bounded by $y=x$ and $y=x(x-1)$. If you slice it into vertical strips, each strip runs from the parabola at the bottom to the straight line at the top, so the strip at each $x$ between $0$ and $2$ has its bottom end at $x(x-1)$ and its top end at $x$. If you were to slice it into horizontal strips, the ones between $y=0$ and $y=2$ would have their left ends on the straight line and their right ends on the parabola, but the ones between $y=-1/4$ and $y=0$ would have both ends on the parabola. Thus, you’d need a different calculation to handle the part of the region below the $x$-axis from the one that you’d need for the part above the $x$-axis.
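To make the comparison concrete, here is a small sympy check (the example region is mine): the region between $y=x^2$ and $y=x$, revolved about the $y$-axis, computed once with shells (vertical strips) and once with washers (horizontal strips):

    import sympy as sp

    x, y = sp.symbols('x y', nonnegative=True)

    # shells: the vertical strip at x has height x - x**2 and radius x
    shells = sp.integrate(2 * sp.pi * x * (x - x**2), (x, 0, 1))

    # washers: the horizontal strip at y has outer radius sqrt(y), inner radius y
    washers = sp.integrate(sp.pi * (sp.sqrt(y)**2 - y**2), (y, 0, 1))

    print(shells, washers)  # both give pi/6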
-Whether the boundaries are given in the form $y=f(x)$ or the form $x=f(y)$ is often a good indication: the former tends to go with vertical slices and the latter with horizontal slices. It’s far from infallible, however, and in some problems some boundaries are given in one form and some in the other. You should always look at a picture of the region. You want slices whose endpoints are defined consistently, and you want slices that don’t have any gaps in them. -Once you’ve decided which way to slice up the region, sketch in the axis of revolution. If it’s parallel to your slices, each slice will trace out a cylindrical shell as it revolves about the axis. If, on the other hand, it’s perpendicular to your slices, each slice will trace out a washer or disk as it revolves about the axis. In either case the proper method of integration has automatically been determined for you.<|endoftext|> -TITLE: Do elliptic operators on Riemannian manifolds have a regularizing effect? -QUESTION [7 upvotes]: I'm working on my master thesis and need to handle some spectral theory of the Laplace operator on compact Riemannian manifolds and especially on the sphere. While investigating essential self-adjointness I stumbled on the following problem. - -*Problem*$\quad$ In a compact Riemannian manifold $M$ let $$\Delta=\operatorname{div}\operatorname{grad}$$ and let $f\in L^2(M)$ be such that $(f, u-\Delta u)=0$ for every $u \in C^{\infty}(M)$. Prove that $f=0$. - -I believe that the claim is true, because the condition $(f, u-\Delta u)=0$ means exactly that $f$ is a distributional solution of the elliptic equation $-\Delta f + f=0$, and so I expect it to be a $H^2_{\text{loc}}$ function (see Theorem 2.1 of Berezin - Shubin's book). Since $M$ is compact this must imply that $f\in H^1(M)$ so that integrating by parts we get $\lVert f \rVert_{H^1}^2=(f, f)+(\operatorname{grad}f, \operatorname{grad}f)=0$. -Unfortunately Theorem 2.1 above is set in an open subset of the Euclidean space and I don't know if it is applicable verbatim in a Riemannian manifold. Can you point me to some reference on this? -Thank you. - -REPLY [5 votes]: Yes. Write your equation in local coordinates and then use the usual elliptic theory there. -Here is an example. The equation $$\Delta_g u = h$$ in local coordinates is -$$g^{ij}\frac{\partial^2u}{\partial x^i\partial x^j} + \frac{1}{\sqrt g} -\frac{\partial}{\partial x^i}(\sqrt g g^{ij})\frac{\partial u}{\partial x^j} = h$$ -Notice that the operator on the LHS is still an elliptic operator on $\mathbb R^n$ in the given local coordinates, due to the fact that the Riemannian metric is positive definite. Therefore all of the standard elliptic regularity theorems you know for operators on $\mathbb R^n$ still apply. -There isn't really a standard reference for this, although it probably appears as a remark buried in most PDE books.<|endoftext|> -TITLE: Line Bundle on subvarieties -QUESTION [10 upvotes]: I've been having problems actually restricting a line bundle $L$ defined on some projective space $\mathbb C \mathbb P^{N-1}$ to a subvariety $X$. -I know how to do this on an abstract level, but actually computing what's going on seems quite mysterious. -From Fulton's "Intersection Theory" I have -$c_1(L) \cap [X] = [C]$ where $C$ is the divisor corresponding to $\mathcal O_X(C) \simeq L\vert_X$ -Now, I have $c_1(L)$ given by $-N[H]$ where $[H]$ denotes the hyperplane class in $\mathbb C \mathbb P^{N-1}$. I also have some polynomial $P$ whose zero locus defines $X$.
I even know $c_1(TX) = 0$ and have computed that if $X$ is taken to be a divisor in $\mathbb C \mathbb P^{N-1}$, the corresponding line bundle would satisfy $c_1(\mathcal{O}(X)) = N[H]$ (Not quite sure yet if this helps). -But, what I really would like to know is $c_1(L\vert_X)$? -My attempts so far have been to find an actual section $s$ of $L$, use the equation for the zero locus and actually intersect that with $X$. However, finding such a section has proven difficult. -How would one normally go about this? -Am I on the right track? Any help is highly appreciated! - -REPLY [5 votes]: In general (i.e. for restricting line bundles from a variety to a subvariety) one has $c_1(L_{\vert X}) = c_1(L) \cdot X$ (the intersection, thought of as a divisor on $X$). -In your case one can be more explicit, since $L = \mathcal O(d) = \mathcal O(1)^{\otimes d}$ for some $d$ (all line bundles on projective space are of this form). Thus, if we let $H_X$ denote a generic hyperplane section of $X$, then -$c_1(L_{\vert X}) = d H_X$.<|endoftext|> -TITLE: subgroup of connected locally compact group -QUESTION [12 upvotes]: I need a reference or a short proof for the following property: -A nontrivial connected locally compact group $G$ contains an infinite abelian subgroup. - -REPLY [5 votes]: I will assume, in addition, that $G$ is Hausdorff (otherwise the statement is clearly false). Then the statement is a direct corollary of the following -Theorem (Yamabe and Gleason). Let $G$ be a connected locally compact Hausdorff group. Then for every neighborhood $U$ of $1\in G$ there exists a compact normal subgroup $K\subseteq U$ such that $G/K$ is a Lie group. -Taking $U$ small enough that $G\not\subseteq U$ (possible since $G$ is nontrivial and Hausdorff), the quotient $G/K$ is a nontrivial connected Lie group, so it contains a nontrivial one-parameter subgroup and hence an element of infinite order. Picking $g\in G$ that maps to such an element, $g$ itself has infinite order, so the subgroup $\langle g\rangle$ generated by $g$ is an infinite cyclic subgroup of $G$.<|endoftext|> -TITLE: Exact sequence of sheaves in Beauville's "Complex Algebraic Surfaces" -QUESTION [9 upvotes]: On the first pages of Beauville's "Complex Algebraic Surfaces", he has a surface $S$ (smooth, projective) and two curves $C$ and $C'$ in $S$. He defines $\mathcal{O}_S(C)$ as the invertible sheaf associated to $C$. I'm assuming that if $C$ is given as a Cartier divisor by $(U_\alpha,f_\alpha)$, then $\mathcal{O}_S(C)(U_\alpha)$ is generated by $1/f_\alpha$ (following Hartshorne's notation); this assumption is justified as Beauville says that $\mathcal{O}_S(-C)$ is simply the ideal sheaf that defines $C$. -The part I don't understand is that he then takes a non-zero section $s\in H^0(\mathcal{O}_S(C))$ (and the same for $s'$) and says that it vanishes on $C$. Isn't this the definition of a global section of $\mathcal{O}_S(-C)$ though (according to the previous notation)? -He then writes the exact sequence (which I don't really understand) -$$0\to\mathcal{O}_S(-C-C')\stackrel{(s',-s)}{\to}\mathcal{O}_S(-C)\oplus\mathcal{O}_S(-C')\stackrel{(s,s')}{\to}\mathcal{O}_S\to\mathcal{O}_{C\cap C'}\to 0.$$ -I need to have the definitions clear in order to be able to understand the exact sequence. Can anybody help me out? - -REPLY [5 votes]: Beauville is right. -a) The only global section of $\mathcal{O}_S(-C)$ is zero: indeed $\mathcal{O}_S(-C)=\mathcal I_C\subset \mathcal{O}_S$ is the sheaf of holomorphic functions on $S$ vanishing on $C$. - In particular since global holomorphic functions on the projective surface $S$ are constant, the only such function vanishing on $C$ is zero: $ H^0(S,\mathcal O_S(-C))=0\subset H^0(S,\mathcal{O}_S)=\mathbb C$ -b) The part about a section of $\mathcal O_S(C)$ vanishing on $C$ is devilishly subtle and, alas, not sufficiently explained in books.
-Pragmatically, the point is that sections $s_\alpha \in H^0(U_\alpha,\mathcal O_S(C))$ are meromorphic functions on $U_\alpha$ of the form $\frac{h_\alpha }{f_\alpha}$ with $h_\alpha\in \mathcal O_S(U_\alpha )$. -But the vanishing set of $s_\alpha$ is that of $h_\alpha $! -Also, it is not true that every section $s \in H^0(S,\mathcal O_S(C))$ vanishes on $C$. -What is true is that there is a canonical section $s_0 \in H^0(S,\mathcal O_S(C))$ vanishing exactly on $C$. -And that section is... -the constant function $1$, seen as a section $1=s_0\in H^0(S,\mathcal O_S(C))$ ! -Indeed on $U_\alpha $ write $1=\frac{f_\alpha }{f_\alpha }$ and according to the pragmatic recipe above, the zero locus of that section is that of $f_\alpha$, namely $C \cap U_\alpha$.<|endoftext|> -TITLE: Is there a name for a semigroup whose idempotents form a subsemigroup? -QUESTION [9 upvotes]: For a semigroup $S,$ I will denote by $E(S)$ the set of all idempotents of $S$. For $X\subseteq S,$ let $X^2$ mean $\{xy\,|\,x,y\in X\}.$ -Is there a name for the class of semigroups $S$ such that $$\left(E(S)\right)^2\subseteq E(S)?$$ -To have an example, in every inverse semigroup, the idempotents form a subsemigroup. More generally, as rschwieb points out in a comment, any semigroup such that the idempotents commute with each other satisfies this condition. -I need a name to be able to search for information about such semigroups. So any contribution besides the name will be welcomed. - -REPLY [2 votes]: A semigroup whose idempotents form a subsemigroup is called an E-semigroup in a bunch of papers, e.g. the paper of J. Almeida, J.-E. Pin and P. Weil originally published in 1992, Gomes and Howie (1998), Weipoltshammer (2002), Gigoń (2012), or Fountain and Hayes (2014). Note that all of these were published after Birget, Margolis, and Rhodes' paper (1990), mentioned by Margolis himself above, which did not use this E-semigroup terminology. If the E-semigroup is also regular, then it is called an orthodox semigroup. (This latter terminology is much older, dating to 1969.) I have updated the Wikipedia page on orthodox semigroup to mention E-semigroup, but E-semigroup is still a "red link" over there. I don't think the E-semigroup terminology has made it into any semigroup textbooks though and it seems to actually clash with the same name chosen for something else in Arveson's textbook. (Do note that Gomes & Howie's paper on the topic came out 3 years after Howie's textbook was published though.)<|endoftext|> -TITLE: Directed colimit in a concrete category -QUESTION [7 upvotes]: I recently found myself at a spot that I never believed I'd reach (or at least not that soon in my career). I ran into a problem which seems to be best answered via categories. - -The situation is this: I have a directed system of structures and the maps are all inclusion maps, that is $X_i$ for $i\in I$ where $(I,\leq)$ is a directed set; and if $i\leq j$ then $X_i$ is a substructure of $X_j$. -Suppose that the direct limit of the system exists. Can I be certain that this direct limit is actually the union? Namely, what sort of categories would ensure this, and what possible counterexamples are there? - -I asked several folks around the department today, some assured me that this is the case for concrete categories, while others assured me that a counterexample can be found (although it won't be organic, and would probably be manufactured for this case). -The situation is such that the direct system is originating from forcing, so it's quite...
wide and probably immune to some of the "thinkable" counterexamples (by arguments of [set theoretical] genericity from one angle or another), and so any counterexample which is essentially a linearly ordered system is not going to be useful as a counterexample. -Another typical counterexample which is irrelevant here involves finitely generated things: for example we can take a direct system of f.g. vector spaces whose limit is not f.g., but this aspect is also irrelevant to me; although I am not sure how to accurately describe this sort of irrelevance. -Last point (which came up with every person I discussed this question with today), if we consider: $$\mathbb R\hookrightarrow\mathbb R^2\hookrightarrow\ldots$$ -Then we consider those to be actually increasing sets under inclusion and not "natural identifications" as commonly done in categories. So the limit of the above would actually be $\mathbb R^{<\omega}$ (all finite sequences from $\mathbb R$). - -REPLY [5 votes]: Your question essentially amounts to asking, "when does the forgetful functor $U : \mathcal{C} \to \textbf{Set}$ create directed colimits?" More generally, one could replace "directed colimit" by "filtered colimit". There is, as far as I know, no general answer. -Here is one reasonably general class of categories $\mathcal{C}$ for which there is such a forgetful functor. Let us consider a finitary algebraic theory $\mathfrak{T}$, i.e. a one-sorted first-order theory with a set of operations of finite arity and whose axioms form a set of universally-quantified equations. For example, $\mathfrak{T}$ could be the theory of groups, or the theory of $R$-modules for any fixed ring $R$. Then, the category $\mathcal{C}$ of models of $\mathfrak{T}$ will be a category in which filtered colimits are, roughly speaking, obtained by taking the union of the underlying sets. This can be proven "by hand", by showing that the obvious construction has the required universal property: the key lemma is that filtered colimits commute with finite limits in $\textbf{Set}$ – so, for example, $\varinjlim_{\mathcal{J}} X \times \varinjlim_{\mathcal{J}} Y \cong \varinjlim_{\mathcal{J}} X \times Y$ if $\mathcal{J}$ is a filtered category. Mac Lane spells out the details in [CWM, Ch. IX, §3, Thm 1]. - -Addenda. Fix a one-sorted first-order signature $\Sigma$. Consider the directed colimit of the underlying sets of some $\Sigma$-structures: notice that the colimit inherits a $\Sigma$-structure if and only if the operations and predicates of $\Sigma$ are all finitary. Qiaochu's counterexample with $\{ 0 \} \subset \{ 0, 1 \} \subset \{ 0, 1, 2 \} \subset \cdots$ can be pressed into service here as well. -So let us assume $\Sigma$ only has finitary operations and predicates. The problem is now to establish an analogue of Łoś's theorem for directed colimits. Let $\mathcal{J}$ be a directed set and let $X : \mathcal{J} \to \Sigma \text{-Str}$ be a directed system of $\Sigma$-structures. Let us say that a logical formula $\phi$ is "good" just if -$X_j \vDash \phi$ for all $X_j$ implies $\varinjlim X \vDash \phi$ (where $\varinjlim X$ is computed in $\textbf{Set}$ and given the induced $\Sigma$-structure). - -It is not hard to check that universally quantified equations and atomic predicates are good formulae. -The set of good formulae is closed under conjunction and disjunction. -The set of good formulae is closed under universal quantification. -The set of good formulae is not closed under existential quantification: the formula $\forall x . 
\, x \le m$ (with free variable $m$) is a good formula in the signature of posets, but $\exists m . \, \forall x . \, x \le m$ is clearly not preserved by direct limits. -However, a quantifier-free good formula is still a good formula when prefixed with any number of existential quantifiers. -In particular, the set of good formulae is not closed under negation: the property of being unbounded above can be expressed as a good formula in the signature of posets with inequality, but its negation is the property of being bounded above. - -Section 6.5 of [Hodges, Model theory] seems to have some relevant material, but I haven't read it yet. The point, I suppose, is that there are some fairly strong conditions that the theory in question must satisfy before the directed colimit in $\textbf{Set}$ is even a model of the theory, let alone be a directed colimit in the category of models of the theory.<|endoftext|> -TITLE: Differentiable structure on the real line -QUESTION [6 upvotes]: The usual differentiable structure on the real line is obtained by taking ${F}$ to be the maximal collection containing the identity map. Let ${F_1}$ be the maximal collection containing $t\mapsto t^3$. I need to show that $F_1\neq F$, but that $(\mathbb{R},F)$ and $(\mathbb{R},F_1)$ are diffeomorphic. -Well, first of all, what is meant by a maximal collection? That is not clear to me, and how to define a diffeomorphism between these ordered pairs is also not clear to me. Please help. - -REPLY [3 votes]: The maximality of $F$ means that if $(U, \varphi)$ [here $U$ is an open subset of our manifold; in this case, of $\mathbb R$] is another chart which is compatible with each $(V, \psi) \in F$ — in the sense that each transition map
\[
\varphi \circ \psi^{-1}\colon \psi(U \cap V) \to \varphi(U \cap V)
\]
-is a diffeomorphism in the sense of calculus on $\mathbb R$ — then $\varphi$ is actually in $F$. -So take $U = \mathbb R$ and $\varphi(t) = t^3$. Is this compatible with $F$? For the second part, note that it follows from all of these definitions that if a function $f\colon (\mathbb R, F) \to (\mathbb R, F_1)$ is a diffeomorphism with respect to a choice of global coordinates on both sides, then it is a diffeomorphism of manifolds.<|endoftext|> -TITLE: Does $\left(n^2 \sin n\right)$ have a convergent subsequence? -QUESTION [29 upvotes]: I'm wrestling with the following: - -Question: For what values of $\alpha > 0$ does the sequence $\left(n^\alpha \sin n\right)$ have a convergent subsequence? - -(The special case $\alpha = 2$ in the title happened to arise in my work.) In a continuous setting this would be a very simple question since $x^\alpha \sin x$ achieves every value infinitely often for (positive) $x \in \mathbb{R}$, but I feel ill-equipped for this situation -- I have an eye on the Bolzano-Weierstrass theorem and not much else. -I have shown the answer is affirmative for $\alpha \leq 1$. Here is my idea. - -Proof when $0 < \alpha \leq 1$: Define $x_n = n^\alpha \sin n$ for all $n \in \mathbb{N}$. We will find a bounded subsequence $(y_n)$ of $(x_n)$ so the Bolzano-Weierstrass theorem applies. Let $n \geq 1$ be arbitrary; then by Dirichlet's approximation theorem there are $p_n, q_n \in \mathbb{N}$ satisfying - $$q_n \leq n \,\,\,\,\,\,\,\,\,\, \text{and} \,\,\,\,\,\,\,\,\,\, |q_n \pi - p_n| < \frac{1}{n}.$$ - Take $y_n = x_{p_n} = (p_n)^\alpha \sin p_n$ for this index $n$. 
Then - $$\begin{eqnarray}|y_n| &=& (p_n)^\alpha \left|\sin(q_n \pi - p_n)\right|\\ -&<& (p_n)^\alpha \left(\frac{1}{n}\right)\\ -&<& \left(q_n \pi + \frac{1}{n}\right)^\alpha \left(\frac{1}{n}\right)\\ -&\leq& \left(n \pi + \frac{1}{n}\right)^\alpha \left(\frac{1}{n}\right)\\ -&\leq& \left(n \pi + \frac{1}{n}\right) \left(\frac{1}{n}\right)\\ -&\leq& \pi + 1\end{eqnarray}$$ - so the sequence $(y_n)$ has a convergent subsequence $(z_n)$ by the Bolzano-Weierstrass theorem. But $(z_n)$ is in turn a subsequence of $(x_n)$, which proves the claim. - -Unfortunately, my strategy breaks down when $\alpha > 1$ (where I replace $\alpha$ by $1$ in the chain of inequalities). So in this case, can a convergent subsequence be found some other way or not? (I have a suspicion as to the answer, but I don't want to bias people based on heuristics.) - -REPLY [16 votes]: This seems very closely related to the irrationality measure of $\pi$. Fix some exponent $\mu$, and suppose that there are infinitely many $p_n, q_n$ with -$$ -\left| \pi - \frac{p_n}{q_n} \right| < \frac{1}{(q_n)^\mu} \, .\,\,\, (*) -$$ -Then essentially the same argument you gave shows that the subsequence $\left\{x_{p_n}\right\}$ is bounded for any $\alpha \leq \mu-1$. -Conversely, suppose we have a bounded subsequence $\left\{x_{p_n}\right\}$ of $n^\alpha \sin n$, so $(p_n)^\alpha |\sin p_n| < K$ for some fixed $K$ and all $n$. Choose $q_n$ so that $|p_n - q_n \pi| < \frac{\pi}{2}$. Then -$$|p_n - q_n \pi| < \frac{\pi}{2} |\sin (p_n-q_n \pi)| < \frac{K \pi}{2(p_n)^{\alpha}} \, ,$$ -so $(*)$ holds infinitely often for any $\mu < \alpha + 1$. -According to the Mathworld page I linked to above, all we know about the irrationality measure of $\pi$ is that it's between $2$ and $7.6063$, so your specific problem (which requires that we compare it to $3$) is unsolved. -EDIT: I can't find an online version of the 2008 paper by Salikhov that proves the 7.6063 bound, but here's a pdf of Hata's earlier paper that shows $\mu(\pi) < 8.0161$, and here's a related MathOverflow question (which also has a link to Hata's paper).<|endoftext|> -TITLE: Show $\int_0^\infty \frac{e^{-x}-e^{-xt}}{x}dx = \ln(t),$ for $t \gt 0$ -QUESTION [14 upvotes]: The problem is to show -$$\int_0^\infty \frac{e^{-x}-e^{-xt}}{x}dx = \ln(t),$$ -for $t \gt 0$. -I'm pretty stuck. I thought about integration by parts and couldn't get anywhere with the integrand in its current form. I tried a substitution $u=e^{-x}$ and came to a new integral (hopefully after no mistakes) -$$ \int_0^1 \frac{u^{t-1}-1}{\log(u)}du, $$ -but this doesn't seem to help either. I hope I could get a hint in the right direction... I really want to solve most of it by myself. -Thanks a lot! - -REPLY [2 votes]: Notice the Laplace transform -$$L[1]=\int_0^\infty e^{-st}\,dt=\frac{1}{s}$$ -Now, we have -$$\int_{0}^{\infty}\frac{e^{-x}-e^{-xt}}{x}dx$$ -$$=\int_{0}^{\infty}\frac{e^{-x}}{x}dx-\int_{0}^{\infty}\frac{e^{-xt}}{x}dx$$ -$$=\int_{1}^{\infty}L[1]dx-\int_{t}^{\infty}L[1]dx$$ -(here we write $\frac{e^{-x}}{x}=\int_{1}^{\infty}e^{-xs}\,ds$ and $\frac{e^{-xt}}{x}=\int_{t}^{\infty}e^{-xs}\,ds$, and swap the order of integration) -$$=\int_{1}^{\infty}\frac{1}{x}dx-\int_{t}^{\infty}\frac{1}{x}dx$$ -$$=[\ln |x|]_{1}^{\infty}-[\ln |x|]_{t}^{\infty}$$ -$$=\lim_{x\to \infty}\ln|x|-\ln 1-(\lim_{x\to \infty}\ln|x|-\ln |t|)$$ -$$=\lim_{x\to \infty}\ln|x|-0-\lim_{x\to \infty}\ln|x|+\ln |t|$$ -$$=\ln |t|$$$$=\ln(t) \ \ \ \ \forall\ \ \ t>0$$<|endoftext|> -TITLE: A spectral sequence for Tor -QUESTION [12 upvotes]: Suppose $R \to T$ is a ring map such that $T$ is flat as an $R$-module. 
Then for $A$ an $R$-module, $C$ a $T$-module there is an isomorphism -$$\text{Tor}^R_n(A,C) \simeq \text{Tor}^T_n(A \otimes_R T,C)$$ -This can be proved directly by choosing an $R$-projective resolution $P^\bullet \to A$ and following through with the homological algebra. A more 'sledgehammer' approach is to use the Grothendieck spectral sequence $$E_2^{s,t} = \text{Tor}^{T}_s(\text{Tor}_t^R(A,T),C) \Rightarrow \text{Tor}^R_{s+t}(A,C),$$ -which collapses under the assumptions above to give the required isomorphism. -In the case that $T$ is not flat as an $R$-module, is it possible to build a spectral sequence that abuts to $\text{Tor}^T_n(A \otimes_R T,C)$? - -REPLY [5 votes]: $\mathcal{A}, \mathcal{B}, \mathcal{C}$ are abelian categories, $F:\mathcal{A} \rightarrow \mathcal{B}$ and $G: \mathcal{B} \rightarrow \mathcal{C}$ are right exact functors. You want to compute left derived functors of $G\circ F$. -To apply Grothendieck's spectral sequence you have to ensure that $F$ maps acyclic complexes in $\mathcal{A}$ to acyclic complexes in $\mathcal{B}$. -In your example $F(A) = A \otimes T$ and $G(B)= B \otimes C$; if $T$ is flat then $F$ takes acyclic complexes to acyclic ones, but if $T$ is not flat then I do not know of a general characterization in that case. -A way around would be to extend your underlying category to complexes of modules. Then you can replace $T$ by a flat/free resolution. In that case Grothendieck's machinery will work, but it will give hyper-Tor instead of Tor.<|endoftext|> -TITLE: Quick sort algorithm average case complexity analysis -QUESTION [5 upvotes]: This is for self-study. This question is from Kenneth Rosen's "Discrete Mathematics and Its Applications". - -The quick sort is an efficient algorithm. To sort $a_1,a_2,\ldots,a_n$, this algorithm begins by taking the first element $a_1$ and forming two sublists, the first containing those elements that are less than $a_1$, in the order they arise, and the second containing those elements greater than $a_1$, in the order they arise. Then $a_1$ is put at the end of the first sublist. This procedure is repeated recursively for each sublist, until all sublists contain one item. The ordered list of $n$ items is obtained by combining the sublists of one item in the order they occur. -In this exercise we find the average-case complexity of the quick sort algorithm, assuming a uniform distribution on the set of permutations. -a) Let X be the number of comparisons used by the quick sort algorithm to sort a list of n distinct integers. Show that the average number of comparisons used by the quick sort algorithm is $E(X)$ (where the sample space is the set of all $n!$ permutations of $n$ integers). -b) Let $I(j,k)$ denote the random variable that equals 1 if the $j^{th}$ smallest element and the $k^{th}$ smallest element of the initial list are ever compared as the quick sort algorithm sorts the list and equals 0 otherwise. Show that $X = \sum_{k=2}^{n}\sum_{j=1}^{k-1}I_{j,k}$. -c) Show that $E(X) = \sum_{k=2}^{n} \sum_{j=1}^{k-1} p$, where $p$ is the probability that the $j^{th}$ smallest element and the $k^{th}$ smallest element are compared. -d) Show that $p$ (the $j^{th}$ smallest element and the $k^{th}$ smallest element are compared), where $k > j$, equals $2/(k − j + 1)$. - -I think that I managed to understand parts a, b and c. -For part a, this seems obvious from the definition of expected value. 
$E(X)$ is the average value of the number of comparisons, weighted by the probability that the permutation has a particular order (which is $1/n!$). -For part b, I verified that the sum $\sum_{k=2}^{n}\sum_{j=1}^{k-1}I_{j,k}$ gives $I_{1,2} + I_{1,3} + I_{2,3} + I_{1,4} + I_{2,4} + I_{3,4}+\cdots+I_{n-1,n}$, that is, it gives the value of $I_{j,k}$ for every combination of $j$ and $k$. So, it adds 1 for every pair of integers that will be compared. Since the only situation in which two integers get compared is when one of them is the first element of the list (also called the "pivot"), this means that these two integers will then go to separate sublists, so that they will not be compared anymore (in other words, every pair of integers is compared at most once). Therefore, it makes sense to say that the mentioned sum will give $X$, the total number of comparisons made by quick sort. -Part c also seems straightforward. The result follows from the linearity of the expected value (the expected value of a sum is the sum of the expected values): $E(X) = E\left(\sum_{k=2}^{n}\sum_{j=1}^{k-1}I_{j,k}\right) = \sum_{k=2}^{n}\sum_{j=1}^{k-1}E(I_{j,k})$. The value of $E(I_{j,k})$ (the expected value of $I_{j,k}$) is 1 times the probability that $I_{j,k}$ gets the value 1; the probability that $I_{j,k}$ gets the value 1 is the probability that the $j^{th}$ smallest element and the $k^{th}$ smallest element are compared, so $E(I_{j,k}) = p$. So, $E(X) = \sum_{k=2}^{n}\sum_{j=1}^{k-1}p$. -Part d is where I got stuck. -I tried to reason the following way: given a list that will be ordered by quick sort, two integers $a_j$ and $a_k$ will get compared only if one of them is the pivot. Also, if a number that is smaller than $a_k$ and greater than $a_j$ is the pivot, $a_k$ and $a_j$ will go to separate sublists, so that they will never get compared. Otherwise, if the pivot is either greater than both $a_k$ and $a_j$, or smaller than them, $a_j$ and $a_k$ will both go to the same sublist, so that, in another recursive call of the algorithm, they may still get compared. -But I'm not sure how to show that the probability that the $j^{th}$ smallest element and the $k^{th}$ smallest element are ever compared is $2/(k − j + 1)$. Could anyone give a hint? I would prefer a hint over a complete solution, so that I can discuss it further in comments to fill in the holes. - -REPLY [2 votes]: Consider the set $z_{i,j} = \{z_i,z_{i+1},\ldots,z_{j-1},z_j\}$. This set is ordered by value (not by the order in the array), i.e. $z_i < z_{i+1} < \cdots < z_j$. As long as none of these is chosen as pivot, all are passed to the same recursive call, i.e. pivot $\leftarrow x$ with $x > z_j$ or $x < z_i$.<|endoftext|> -TITLE: Unital nonabelian Banach algebra where the only closed ideals are $\{0\}$ and $A$ -QUESTION [7 upvotes]: This is a problem in exercise one of Murphy's book - -Find an example of a nonabelian unital Banach algebra $A$, where the only closed ideals are $\{0\}$ and $A$. - -But does such an algebra exist at all? -My argument is the following: -Let $a$ be an arbitrary nonzero element in $A$; if $a$ is not invertible, then $a$ is contained in a maximal ideal, which is closed. However, there is no such ideal, thus every nonzero element must be invertible. Then Gelfand-Mazur says $A$ is the complex numbers and thus must be abelian. -What is the problem with my argument? -Thanks! - -REPLY [4 votes]: The problem is that not being invertible doesn't imply being contained in a maximal two-sided ideal if $A$ is noncommutative. 
For example, there are plenty of elements of $\mathcal{M}_n(\mathbb{C})$ that are not invertible, but since this algebra is simple (which incidentally implies that it's already an answer to your question when $n \ge 2$) all nonzero elements generate the unit two-sided ideal. -What is true is that not being left-invertible is equivalent to being contained in a maximal left ideal and not being right-invertible is equivalent to being contained in a maximal right ideal.<|endoftext|> -TITLE: If $f_n\colon [0, 1] \to [0, 1]$ are nondecreasing and $\{f_n\}$ converges pointwise to a continuous $f$, then the convergence is uniform -QUESTION [5 upvotes]: Suppose that $\{f_n\}$ is a sequence of nondecreasing functions which map the unit interval into itself. Suppose that $$\lim_{n\rightarrow \infty} f_n(x)=f(x)$$ pointwise and that $f$ is a continuous function. Prove that $f_n(x) \rightarrow f(x)$ uniformly as $n \rightarrow \infty$, $0\leq x\leq1$. Note that the functions $f_n$ are not necessarily continuous. -This is one of the preliminary exams from UC Berkeley, the solution goes like this: -Because $f$ is continuous on $[0,1]$, which is compact, it is then uniformly continuous. Hence there exists $\delta >0$ such that if $|x-y|<\delta$ then $|f(x)-f(y)|<\epsilon$. -We then partition the interval with $x_0=0, \cdots ,x_m=1$ such that the distance $x_{i}-x_{i-1}$ is less than $\delta$. -Note that since there are only a finite number of $x_m$, there is $N\in \mathbb{N}$ such that if $n\geq N$ then $|f_n(x_i)-f(x_i)|<\epsilon$ where $i=0,\cdots, m$ -Now if $x\in[0,1]$, then $x\in[x_{i-1},x_i]$ for some $i\in\{1, \cdots m\}$. -My question is how to use the nondecreasingness to arrive at this inequality, for $n\geq N$: -$f(x_{i-1})-\epsilon \leq f_n(x) \leq f(x_i)+\epsilon$<|endoftext|> -TITLE: Matrix raised to a matrix: $M^N$, is this possible? with $M,N\in M_n(\Bbb K).$ -QUESTION [21 upvotes]: I was wondering if there is such a valid operation as raising a matrix to the power of a matrix, e.g. vaguely, if $M$ is a matrix, is -$$ -M^N -$$ -valid, or is there at least something similar? Would it be the components of the matrix raised to each component of the matrix it's raised to, resulting in, again, another matrix? -Thanks, - -REPLY [8 votes]: Please see first some quick explicit computations below: - -I think the situation for $2\times 2$ matrices is more comprehensible, in order to have a glimpse of the general case as mentioned by @Qiaochu Yuan. -We know that the complex plane $\Bbb C$, -$$\Bbb C= \{a+ib: a,b\in\Bbb R\} $$ is isomorphic (as a field) to the field -$$G_2(\Bbb R) =\left\{\left(\begin{smallmatrix} a &b\\ -b&a\end{smallmatrix}\right): a,b\in\Bbb R\right\}$$ -where one merely identifies -$$ 1 \equiv \left(\begin{matrix} 1& 0\\0&1 \end{matrix}\right)~~~\text{and}~~~~ i \equiv \left(\begin{matrix} 0& 1\\-1&0 \end{matrix}\right)$$ - -Now the notion of exponential makes sense for complex numbers up to a branch cut of the logarithm. Namely, it is not surprising to write $$ z^w -~~\text{for}~~~z = a+ib, w =x+iy\in\Bbb C\setminus\{0\},$$ -which from the identification above readily represents the expression -$$ \color{blue}{z^w\equiv \left(\begin{smallmatrix} a &b\\ -b&a\end{smallmatrix}\right)^\left(\begin{smallmatrix} x &y\\ -y&x\end{smallmatrix}\right)~\text{for}~~~a,b, x, y\in\Bbb R\setminus\{0\}}$$This expression is less abstract (but more particular and restricted) than using the Taylor expansion for exponential and logarithm. 
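-As a quick numerical sanity check of this identification (a sketch, assuming Python with numpy and scipy available; the helper name to_matrix is mine, not part of the answer):
-
-    import numpy as np
-    from scipy.linalg import expm, logm
-
-    def to_matrix(z):
-        # identify a + ib with [[a, b], [-b, a]]
-        return np.array([[z.real, z.imag], [-z.imag, z.real]])
-
-    z, w = 1 + 1j, 2 + 1j
-    M, N = to_matrix(z), to_matrix(w)
-    # matrices in G_2(R) commute, so M^N may be taken to mean exp(N log M),
-    # with the principal matrix logarithm
-    print(expm(N @ logm(M)))
-    print(to_matrix(z ** w))  # agrees up to rounding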
- -Another advantage of this expression is that, as opposed to the general case, where the expansion of $\log(I+M)$ -$$\log(I + M) = \sum_{n \ge 1} \frac{(-1)^{n-1} M^n}{n}$$ -converges only when $\|M\|< 1$, for matrices taken in the field $G_2(\Bbb R)$ such a restriction is not required. - -The above identification can also be useful to quickly compute the $n^{th}$ power of matrices (see here). -Application 1: Using the above identification we have, -$$ \displaystyle \left(\begin{smallmatrix} 0 &1\\ -1&0\end{smallmatrix}\right)^\left(\begin{smallmatrix} 0 &1\\ -1&0\end{smallmatrix}\right)\equiv i^i = e^{-\frac{\pi}{2}}\equiv \left(\begin{smallmatrix} e^{-\frac{\pi}{2}} &0\\ 0&e^{-\frac{\pi}{2}}\end{smallmatrix}\right) $$ -Application 2: See here: For each $a \in \mathbb{R}$ evaluate $ \lim\limits_{n \to \infty}\left(\begin{smallmatrix}1&\frac{a}{n}\\\frac{-a}{n}&1\end{smallmatrix}\right)^n$ -$$\begin{align}\displaystyle \lim_{n \to \infty}\left(\begin{matrix} 1&\dfrac{a}{n}\\\dfrac{-a}{n}&1\end{matrix}\right)^{\left(\begin{smallmatrix} n &0\\ 0&n \end{smallmatrix}\right)} &\equiv \displaystyle \lim_{n \to \infty}\color{red}{\left(1+\dfrac{ai}{n}\right)^n} \\&= \left(\begin{matrix} \cos a&\sin a\\-\sin a&\cos a\end{matrix}\right).\end{align}$$ -Application 3: -$$ \displaystyle\left(\begin{smallmatrix} 1 &1\\ -1&1\end{smallmatrix}\right)^\left(\begin{smallmatrix} 0 &1\\ -1&0 \end{smallmatrix}\right)\equiv (1+i)^i = \left(\sqrt{2}e^{i\frac{\pi}{4}}\right)^i = (\sqrt{2})^ie^{-\frac{\pi}{4}}\\\equiv \displaystyle\left(\begin{smallmatrix} e^{-\frac{\pi}{4}} \cos\log \sqrt{2} &e^{-\frac{\pi}{4}}\sin\log \sqrt{2}\\ -e^{-\frac{\pi}{4}}\sin\log \sqrt{2}&e^{-\frac{\pi}{4}}\cos\log \sqrt{2}\end{smallmatrix}\right) $$ -Application 4: with the help of Application 3, compute -$$ \displaystyle\left(\begin{smallmatrix} 1 &1\\ -1&1\end{smallmatrix}\right)^\left(\begin{smallmatrix} 2 &1\\ -1&2 \end{smallmatrix}\right) $$ -This last one is left as an exercise to make sure the OP and future readers understood the rule of computation going on.<|endoftext|> -TITLE: Summing over a cyclic subgroup of a multiplicative group mod n -QUESTION [8 upvotes]: Let $x$ be a unit in $\mathbb Z/ n \mathbb Z$ of multiplicative order $m$. -I am trying to determine when it is that -$$ -\sum_{i=0}^{m-1} x^i \equiv 0 \mod n . -$$ -Is this kind of situation something that has been studied? If so, I would be very interested in any suggestions of books that discuss this kind of problem. - -REPLY [3 votes]: (Below I treat $x$ as an integer whose reduction $\bmod n$ is a unit of order $m$. This allows me to treat $\frac{x^m - 1}{x - 1}$ as an integer and consider its reduction $\bmod n$ without dividing by a zero divisor.) -As usual, the Chinese Remainder Theorem is your friend. We can reduce to the case that $n$ is a prime power $p^k$. If $p \nmid x-1$, then the geometric series argument works and the sum is just $0 \bmod p^k$. -If $p | x-1$, the situation gets a little more complicated. We will need to introduce the following fundamental notion: for an integer $n$, the $p$-adic valuation $\nu_p(n)$ is the greatest $k$ such that $p^k | n$. -Theorem (lifting the exponent): Let $p$ be an odd prime and let $x, y$ be integers relatively prime to $p$ such that $p | x-y$. Then -$$\nu_p(x^n - y^n) = \nu_p(x - y) + \nu_p(n).$$ -Proof. Induction. See these notes. 
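-A quick numerical spot check of the theorem (a sketch in Python; the particular values are my own arbitrary choices):
-
-    def nu(p, n):
-        # p-adic valuation of n
-        k = 0
-        while n % p == 0:
-            n //= p
-            k += 1
-        return k
-
-    p, x, y, n = 3, 4, 1, 6  # p odd, p | x - y, p divides neither x nor y
-    assert nu(p, x**n - y**n) == nu(p, x - y) + nu(p, n)  # 2 == 1 + 1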
-Applying the above it follows that for odd $p$, if $p | x-1$ then -$$\nu_p \left( \frac{x^m - 1}{x - 1} \right) = \nu_p(m).$$ -Thus $\frac{x^m - 1}{x - 1} \equiv 0 \bmod p^k$ if and only if $p^k | m$. Hence: - -If $n$ is odd then $\frac{x^m - 1}{x - 1} \equiv 0 \bmod n$ if and only if for every prime factor $p$ of $n$, either $p \nmid x-1$ or $p^k | m$. - -Okay, so what if $p = 2$? In the above notes you will also find the following result. -Theorem (lifting the exponent at $2$): Let $x, y$ be odd integers such that $4 | x-y$. Then -$$\nu_2(x^n - y^n) = \nu_2(x - y) + \nu_2(n).$$ -So if $4 | x-1$ then the conclusion is the same as above. Otherwise, write -$$x = 1 + 2y$$ -where $y$ is odd. If $m$ is odd, then -$$x^m = 1 + 2my + ...$$ -where the remaining terms are divisible by $4$, and we conclude that $\nu_2(x^m - 1) = 1$, hence $\nu_2 \left( \frac{x^m - 1}{x - 1} \right) = 0$, so the sum is not divisible by $2^k$. If $m = 2 \ell$ is even, then $4 | x^2 - 1$, hence -$$\nu_2(x^{2\ell} - 1) = \nu_2(x^2 - 1) + \nu_2(\ell) = \nu_2(x + 1) + \nu_2(m).$$ -This gives -$$\nu_2 \left( \frac{x^m - 1}{x - 1} \right) = \nu_2(y + 1) + \nu_2(m).$$ -Thus $\frac{x^m - 1}{x - 1} \equiv 0 \bmod 2^k$ if and only if $2^k | m(y+1)$, and this can occur. So if $n$ is not odd then the above argument still applies to all the odd prime factors of $n$ and we conclude that - -if $n = 2^k o, k \ge 1$ where $o$ is odd, then $\frac{x^m - 1}{x - 1} \equiv 0 \bmod n$ if and only if $\frac{x^m - 1}{x - 1} \equiv 0 \bmod o$ and either $x \equiv 1 \bmod 4$ and $2^k | m$ or $x \equiv 3 \bmod 4$ and $2^k | m \left( \frac{x+1}{2} \right)$.<|endoftext|> -TITLE: Difference between power law distribution and exponential decay -QUESTION [36 upvotes]: This is probably a silly one, I've read in Wikipedia about power law and exponential decay. I really don't see any difference between them. For example, if I have a histogram or a plot that looks like the one in the Power law article, which is the same as the one for $e^{-x}$, how should I refer to it? - -REPLY [7 votes]: If there is anybody landing on this from Nassim Nicholas Taleb's The Black Swan, the issue at stake is how doubling a random variable affects the probability in power law distributions as opposed to a normal or Gaussian distribution. -In the case of continuous random variables, the pdf illustrates the difference: -A power law distribution has a pdf of the form -$$f_X(x) =C x^{-\alpha},$$ -where $C$ is a constant, and $\alpha$ is the law's exponent. -The effect of doubling $x$ will remain constant across the domain: For example, the ratio in the pdf of people who make $\$50,000$ per year to those that make $\$25,000$ will be the same as the ratio of people that make $\$10,000,000$ to those that make $\$5,000,000:$ -$$\frac{(2x)^{-\alpha}}{x^{-\alpha}}=2^{-\alpha}$$ -an attribute called scale invariance. 
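-The point is easy to see numerically (a sketch in Python; the exponent and the incomes are arbitrary choices of mine):
-
-    alpha = 2.5
-    f = lambda x: x ** -alpha  # unnormalized power-law pdf
-    print(f(50000) / f(25000))       # 2**-2.5 = 0.1767...
-    print(f(10000000) / f(5000000))  # the same ratio, at any scale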
-This is in contrast to the rapid decay in the normal distribution, which in the standardized form has the following pdf: -$$f_X(x)=\frac{1}{\sqrt{2\pi}}\exp\left( -\frac{x^2}{2}\right)$$ -Doubling the value of $x$ amounts to raising the exponential (un-normalized) part of the pdf to the third power: -$$\begin{align} -\frac{f_X(2x)}{f_X(x)}&=\exp\left(-\frac{(2x)^2}{2} +\frac{x^2}{2}\right)\\[2ex] -&=\exp\left( -\frac{3x^2}{2}\right)\\[2ex] -&=\left(\exp\left(-\frac{x^2}{2} \right)\right)^{3}\\[2ex] -&=\frac{1}{\left(\mathrm e^{x^2/2}\right)^3} -\end{align}$$ -which can be visually plotted as: - -Therefore, at asymptotic values the relative probability between an extreme event and that of its value doubled will rapidly tend to zero, whereas in a power law it will remain stubbornly constant.<|endoftext|> -TITLE: Moment map of the action of $\operatorname{SO}(3)$ on the sphere -QUESTION [9 upvotes]: The moment map of the action of $\operatorname{SO}(3)$ on the sphere can be thought of as inclusion from $S^2$ into $\mathbb R^3$ by identifying $\mathfrak{so}(3)$ (the Lie algebra of $\operatorname{SO}(3)$) with $\mathbb R^3$. -I am just learning symplectic geometry and this fact came up without explanation in a paper that I'm reading. Can someone explain this, preferably in an intuitive way? -Thanks! - -REPLY [15 votes]: If you've studied the moment map for coadjoint actions, the answer is easy to see but takes a few words to describe. If you are not familiar with this, see the end of this answer for some references. -The Lie algebra -$$\mathfrak{so}(3) = \{A \in M_{3 \times 3}(\mathbb{R}) : A = -A^T\}$$ -can be identified with $\mathbb{R}^3$ via the identification -$$\mathbb{R}^3 \longrightarrow \mathfrak{so}(3),$$ -$$\xi = (\xi_1, \xi_2, \xi_3) \mapsto \begin{pmatrix} 0 & -\xi_3 & \xi_2 \\ \xi_3 & 0 & -\xi_1 \\ -\xi_2 & \xi_1 & 0 \end{pmatrix} = A_\xi.$$ -Under this identification, we have that -$$[A_\xi, A_\eta] = A_{\xi \times \eta},$$ -where $\xi \times \eta$ is the cross product of vectors in $\mathbb{R}^3$. Furthermore, -$$\mathrm{Tr}(A_\xi^T A_\eta) = 2 \langle \xi, \eta \rangle,$$ -so the standard inner product on $\mathbb{R}^3$ induces an invariant inner product on $\mathfrak{so}(3)$ that we can use to identify $\mathfrak{so}(3)$ and its dual $\mathfrak{so}(3)^\ast$. Under these identifications, we get that the adjoint action of $\mathrm{SO}(3)$ on $\mathfrak{so}(3)$, -$$X \cdot A = XAX^{-1},$$ -corresponds to the usual left action of $\mathrm{SO}(3)$ on $\mathbb{R}^3$, -$$A \cdot \xi = A\xi.$$ -Furthermore, the adjoint action is isomorphic to the coadjoint action, so the coadjoint action corresponds to the usual left action of $\mathrm{SO}(3)$ on $\mathbb{R}^3$ as well. Therefore the coadjoint orbits are just $2$-spheres, and the Kirillov-Kostant-Souriau symplectic form -$$\omega_\Omega(\mathrm{ad}_A^\ast \Omega, \mathrm{ad}_{A'}^\ast \Omega) = \langle \Omega, [A,A'] \rangle$$ -on the coadjoint orbits corresponds to the usual symplectic form -$$\omega_x(u,v) = \langle x, u \times v\rangle$$ -on $S^2$. 
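-(Before drawing the conclusion, here is a quick numerical check of the two algebraic identities above; a sketch in Python with numpy, on random vectors:)
-
-    import numpy as np
-
-    def A(v):
-        # the identification of R^3 with so(3) used above
-        return np.array([[0, -v[2], v[1]],
-                         [v[2], 0, -v[0]],
-                         [-v[1], v[0], 0]])
-
-    xi, eta = np.random.rand(3), np.random.rand(3)
-    # [A_xi, A_eta] = A_{xi x eta}
-    assert np.allclose(A(xi) @ A(eta) - A(eta) @ A(xi), A(np.cross(xi, eta)))
-    # Tr(A_xi^T A_eta) = 2 <xi, eta>
-    assert np.isclose(np.trace(A(xi).T @ A(eta)), 2 * np.dot(xi, eta))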
Now the moment map for the obvious action of $\mathrm{SO}(3)$ on one of its coadjoint orbits is just the inclusion map of the orbit into $\mathfrak{so}(3)$ (this is true for the action of any compact, connected Lie group on one of its coadjoint orbits, see the references below), which in our case corresponds to the inclusion -$$S^2 \hookrightarrow \mathbb{R}^3.$$ - -Here are some references on moment maps for coadjoint orbits: - -Introduction to Symplectic Topology, Dusa McDuff and Dietmar Salamon, Oxford Science Publications 1998, 2nd Edition. Example 5.24 on page 168 discusses this topic. -The Topology of Torus Actions on Symplectic Manifolds, Michèle Audin, Progress in Mathematics no. 93, Birkhäuser 1991. See Section 3.3, page 49. -Lectures on Symplectic Geometry, Ana Cannas da Silva, Lecture Notes in Mathematics no. 1764, Springer 2001. Homework 17 on page 139 guides you through the basic theory.<|endoftext|> -TITLE: How to disprove this fallacy that derivatives of $x^2$ and $x+x+x+\dots\quad(x\text{ times})$ are not the same. -QUESTION [5 upvotes]: Possible Duplicate: -Where is the flaw in this argument of a proof that 1=2? (Derivative of repeated addition) - -\begin{align*} -x^2 &= \underbrace{x + x + x + \dots + x}_{x \text{ times}}, \\ -\therefore \frac{\mathrm{d}}{\mathrm{d}x} (x^2) -&= \frac{\mathrm{d}}{\mathrm{d}x} (\underbrace{x + x + x + \dots + x}_{x \text{ times}}) \\ -&= \underbrace{1 + 1 + 1 + \dots + 1}_{x \text{ times}} \\ -&= x. -\end{align*} -But we know that -$$ \frac{\mathrm{d}}{\mathrm{d}x} (x^2) = 2x. $$ -So what is the problem? -My take is that -we cannot differentiate both sides because $\underbrace{{x+x+x+\cdots+x}}_{x \text{ times}}$ is not fixed and thus $1$ is not equal to $2$. - -REPLY [17 votes]: Simply because "$x \text{ times}$" is also a "function" of $x$. One mistake is not considering that in the derivation. - -REPLY [8 votes]: You say "$x\text{ times}$". The number of "times" you add it up---the number of terms in the sum---keeps changing as $x$ changes. And what if $x=1.6701$? How do you add up $x$, $1.6701$ times?<|endoftext|> -TITLE: Proving that sequentially compact spaces are compact. -QUESTION [8 upvotes]: I remember seeing this proof somewhere (perhaps here, but I don't remember where) that goes something like this. -Suppose $X$ is sequentially compact, and by contradiction suppose $\{U_n\}$ is a countable open cover with no finite subcover. Then for any positive integer $n$, the set $\{U_i : i \le n\}$ is not an open cover, so there exists $x_n \notin \bigcup_{i \le n} U_i$. Hence, we obtain a sequence, and by sequential compactness, there exists a subsequence $x_{n_j}$ that converges to $a \in X$. However, $ a \in U_k$ for some positive integer $k$ and by construction, $x_{n_j} \notin U_k$ if $n_j \ge k$. This is a contradiction. -Doesn't this only prove every countable open cover must have a finite subcover? - -REPLY [3 votes]: For metric spaces, here is a proof of the equivalence. -Let $ (X,d) $ be a metric space, and $ A \subseteq X $. -Thm: $ A $ is sequentially compact if and only if every open cover of $ A $ has a finite subcover. -Pf: $ \underline{\boldsymbol{\implies}} $ Say $ A $ is sequentially compact, and $ \{ U_i \}_{i\in I} $ is an open cover of $ A $. -Firstly there is a $ \delta > 0 $ such that any ball $ B(x, \delta) $ with $ x \in A $ is contained in some $ U_i $. - -Suppose not. Then for every integer $ n > 0 $ there is an $ x_n \in A $ such that $ B(x_n , \frac{1}{n} ) $ isn't fully contained in any $ U_i $. 
Pass to a subsequence $ (x_j)_{j \in J} $, $ J \subseteq \mathbb{Z}_{>0} $, which converges to a point $ p \in A $ as $ j \in J, j \to \infty$. -Now $ p $ is contained in some $ U_k $, and there is an $ \epsilon > 0 $ such that $ B(p, \epsilon) \subseteq U_k $. Picking a $ j \in J $ such that $ d(x_j, p) < \frac{\epsilon}{2} $ and $ \frac{1}{j} < \frac{\epsilon}{2} $, we see $ B(x_j, \frac{1}{j}) \subseteq B(p, \epsilon) \subseteq U_k $, a contradiction. - -There are finitely many open balls, with radius $ \delta $ and centres in $ A $, whose union contains $ A $. - -[Corrected] Suppose not. Pick $ x_1 \in A $. As $ B(x_1, \delta) $ doesn't cover $ A $, pick $ x_2 \in A \setminus B(x_1, \delta) $. As $ B(x_1, \delta) \cup B(x_2, \delta) $ doesn't cover $ A $, let $ x_3 \in A \setminus (B(x_1, \delta) \cup B(x_2, \delta)) $, and so on. Pass to a subsequence $ (x_j)_{j \in J} $, $ J \subseteq \mathbb{Z}_{>0} $, convergent to a point $ p \in A $ as $ j \in J, j \to \infty $. -But $ d(x_j, x_{j'}) \geq \delta $ for all distinct $ j, j' $ in $ J $. Especially, $ (x_j)_{j \in J} $ isn't Cauchy, a contradiction. - -So there are finitely many balls $ B(x_1, \delta), \ldots, B(x_n, \delta) $ with centres in $ A $, whose union contains $ A $. Also each of these balls is contained in some $ U_i $, giving a finite subcover. -$ \underline{\boldsymbol{\impliedby}}$ Say every open cover of $ A $ has a finite subcover, and let $ (x_n) $ be a sequence in $ A $. -The sequence $ (x_n) $ has an accumulation point in $ A $. - -Suppose not. So for every $ y \in A $ there is a $ \delta_y > 0 $ such that $ x_n \in B(y, \delta_y) $ for only finitely many $ n $. Now the open cover $ \{ B(y, \delta_y) : y \in A \} $ of $ A $ has a finite subcover $ \{ B(y_1, \delta_{y_1}), \ldots, B(y_k, \delta_{y_k}) \} $. But there are only finitely many indices $ n $ such that $ x_n \in B(y_1, \delta_{y_1}) \cup \ldots \cup B(y_k, \delta_{y_k}) $, a contradiction.<|endoftext|> -TITLE: Asymptotics for sums of the form $\sum \limits_{\substack{1\leq k\leq n \\ (n,k)=1}}f(k)$ -QUESTION [6 upvotes]: How can we find an asymptotic formula for -$$\sum_{\substack{1\leq k\leq n \\ (n,k)=1}}f(k)?$$ -Here $f$ is some function and $(n,k)$ is the gcd of $k$ and $n$. I am particularly interested in the case -$$\sum_{\substack{1\leq k\leq n \\ (n,k)=1}}\frac{1}{k}.$$ -I know about the result -$$\sum_{\substack{1\le k\le n\\(n,k)=1}}k=\frac{n\varphi(n)}{2}$$ -which was discussed here, but I don't know if I can use it in the case of $f(k)=1/k$. - -REPLY [3 votes]: Hint: Try using the fact that $\sum_{d|n} \mu(d)$ is an indicator function for when $n=1$. This allows us to do the following for any function $f$: -$$\sum_{n\leq x}\sum_{k\leq n,\ \gcd (k,n)=1} f(k,n)=\sum_{n\leq x}\sum_{k\leq n} f(k,n) \sum_{d|k, \ d|n} \mu (d) =\sum_{d\leq x} \mu(d) \sum_{n\leq \frac{x}{d}}\sum_{k\leq n} f(dk,dn).$$ -This method is very general, and works in a surprisingly large number of situations. I encourage you to try it. 
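-As a sanity check on the displayed identity (a sketch in Python with a hand-rolled Möbius function; the cutoff $x$ and the choice $f(k,n)=1/k$ are mine):
-
-    from math import gcd
-
-    def mobius(n):
-        # Moebius function by trial factorization
-        k, p, cnt = n, 2, 0
-        while p * p <= k:
-            if k % p == 0:
-                k //= p
-                if k % p == 0:
-                    return 0
-                cnt += 1
-            else:
-                p += 1
-        if k > 1:
-            cnt += 1
-        return -1 if cnt % 2 else 1
-
-    x = 50
-    lhs = sum(1 / k for n in range(1, x + 1)
-              for k in range(1, n + 1) if gcd(k, n) == 1)
-    rhs = sum(mobius(d) * sum(1 / (d * k) for n in range(1, x // d + 1)
-              for k in range(1, n + 1)) for d in range(1, x + 1))
-    assert abs(lhs - rhs) < 1e-9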
-Remark: Using this approach I get $$\sum_{n\leq x}\sum_{k\leq n,\ \gcd(k,n)=1} \frac{1}{k}=\frac{6x}{\pi^{2}}\log x+\left(-\frac{\zeta^{'}(2)}{\zeta(2)^2}+\frac{6\left(\gamma-1\right)}{\pi^{2}}\right)x+O\left(\log^{2}x\right).$$ -Edit: I made a slight miscalculation in my remark, missing the factor of $\zeta(2)^2$ in the $\zeta^{'}(2)$ term, and have updated the asymptotic.<|endoftext|> -TITLE: Finding a point having the radius, chord length and another point -QUESTION [5 upvotes]: A friend and I have been trying to find a way to get the position of a second point (B on the picture) having the first point (A), the length of the chord (d) and the radius (r). -It must be possible, right? We know the solution will be two possible points, but since it's a semicircle we also know the x coordinate of B will have to be lower than A's and the y coordinate must always be greater than 0. -Think you can help? -Here's a picture to illustrate the example: - -Thanks in advance! - -REPLY [3 votes]: I prefer a trigonometric approach. - -Points $A$ and $B$ on an origin-centered circle of radius $r$ can be represented as -$$A := (r \cos\alpha, r \sin\alpha) \qquad B := (r\cos\beta, r\sin\beta)$$ -for angles $\alpha$ and $\beta$ as shown in the image. -Now, simply note that $\beta = \alpha - \theta$, where (as shown below) $\theta = 2\;\mathrm{asin}\frac{d}{2r}$, to get the location of $B$. - -The formula for $\theta$ follows from the Law of Cosines: -$$d^2 = r^2 + r^2 - 2 r^2 \cos\theta = 2 r^2 ( 1 - \cos\theta) = 4 r^2 \sin^2\frac{\theta}{2} = \left( 2 r \sin\frac{\theta}{2} \right)^2$$ -Therefore, $d = \pm 2 r \sin\frac{\theta}{2}$, although we can take the "$\pm$" to be "$+$". This gives $\theta = 2 \;\mathrm{asin}\frac{d}{2r}$, as claimed.<|endoftext|> -TITLE: Evaluate $\int_1^\infty \cosh^{-1}(x) \ln(x^2-1) \exp \left(- \frac{x}{T} \right) dx $ -QUESTION [5 upvotes]: I would be interested in any clue on how to evaluate the following integral -$$\int_1^\infty \cosh^{-1}(x) \ln(x^2-1) \exp \left(- \frac{x}{T} \right) dx $$ -I have tried integration by parts but it seems to lead only to other integrals of the same form, with additional powers of $x$ in the integrand. - -REPLY [2 votes]: Hint: -$\int_1^\infty\cosh^{-1}x\ln(x^2-1)e^{-\frac{x}{T}}~dx$ -$=\int_0^\infty\cosh^{-1}\cosh x\ln((\cosh x)^2-1)e^{-\frac{\cosh x}{T}}~d(\cosh x)$ -$=\int_0^\infty xe^{-\frac{\cosh x}{T}}\sinh x\ln\sinh^2x~dx$ -$=2\int_0^\infty xe^{-\frac{\cosh x}{T}}\sinh x\ln\sinh x~dx$ -According to http://people.math.sfu.ca/~cbm/aands/page_376.htm, -Consider $\int_0^\infty e^{-z\cosh x}\cosh(vx)~dx=K_v(z)$ , -$\int_0^\infty xe^{-z\cosh x}\sinh(vx)~dx=\dfrac{\partial K_v(z)}{\partial v}$ -$\int_0^\infty xe^{-z\cosh x}\sinh x~dx=\dfrac{\partial K_v(z)}{\partial v}(v=1)$ -Consider $\int_0^\infty e^{-z\cosh x}\sinh^vx~dx=\dfrac{2^\frac{v}{2}~\Gamma\left(\dfrac{v+1}{2}\right)K_\frac{v}{2}(z)}{\sqrt\pi z^\frac{v}{2}}$ , -$\int_0^\infty e^{-z\cosh x}\sinh^vx\ln\sinh x~dx=\dfrac{\partial}{\partial v}\left(\dfrac{2^\frac{v}{2}~\Gamma\left(\dfrac{v+1}{2}\right)K_\frac{v}{2}(z)}{\sqrt\pi z^\frac{v}{2}}\right)$ -$\int_0^\infty e^{-z\cosh x}\sinh x\ln\sinh x~dx=\dfrac{\partial}{\partial v}\left(\dfrac{2^\frac{v}{2}~\Gamma\left(\dfrac{v+1}{2}\right)K_\frac{v}{2}(z)}{\sqrt\pi z^\frac{v}{2}}\right)_{v=1}$<|endoftext|> -TITLE: the sum $\sum \limits_{n>1} f(n)/n$ over primes -QUESTION [7 upvotes]: Let -$$ -f(n)=\begin{cases}-1&\text{if $n$ is a prime integer},\\ -1&\text{otherwise}. 
-\end{cases} -$$ -Then, does the series -$$ -\sum_{n>1} f(n)/n -$$ -converge or diverge? - -REPLY [6 votes]: André Nicolas has already shown that the series diverges. The divergence can be quantified with relative ease as well. -$$ S(n) = \sum_{k=1}^{n} \dfrac{1 - 2 \times 1_{\text{$k$ is a prime}}}k = \sum_{k=1}^{n} \dfrac1k - 2 \sum_{k \text{ prime},\ k \leq n} \dfrac1k \\ = \left(H_n - \log n \right) - 2 \left(\sum_{k \text{ prime},\ k \leq n} \dfrac1k - \log (\log n) \right) + \log (n) - 2 \log \log (n)$$ -From the asymptotics of $H_n$ and $\sum_{k \text{ prime},\ k \leq n} \dfrac1k$, we have that $$S(n) = \gamma - 2B + \log \left( \dfrac{n}{\log^2 (n)} \right) + \mathcal{O} \left( \dfrac1{\log n}\right)$$<|endoftext|> -TITLE: Left Haar Measure on the Borel subgroup of the general linear group -QUESTION [7 upvotes]: If we consider the group of upper triangular matrices $B=\bigl(\begin{smallmatrix} -a&b\\ 0&a^{-1} -\end{smallmatrix} \bigr)$ where $a$ and $b$ are either real or complex and $a\neq0$, -then the left Haar measure is given by $a^{-2}\,da \,db$. -While I understand that this measure is invariant with respect to left translation, I am a little confused as to why the factor of $a^{-2}$ is necessary. -Any clarifications would be appreciated, -Thank you! - -REPLY [6 votes]: Let's look at left translation: $\bigl(\begin{smallmatrix} -A&B\\ 0&A^{-1} -\end{smallmatrix} \bigr)\bigl(\begin{smallmatrix} -a&b\\ 0&a^{-1} -\end{smallmatrix} \bigr)=\bigl(\begin{smallmatrix} -Aa&Ab+Ba^{-1}\\ 0&(Aa)^{-1} -\end{smallmatrix} \bigr)$, i.e. $a_{new}=Aa$, $b_{new}=Ab+Ba^{-1}$, hence the Jacobian of $(a,b)\mapsto(a_{new},b_{new})$ is $A^2$. The measure $da\,db$ is thus not invariant (Jacobian is not identically $1$); the factor $a^{-2}$ compensates for this.<|endoftext|> -TITLE: Are there simple algebraic operations for continued fractions? -QUESTION [16 upvotes]: I thought about continued fractions as a cool way to represent numbers, but also about the fact that they are difficult to treat because standard algebraic operations like addition and multiplication don't work on them in a simple way. -My question is: do there exist some simple and interesting operations that have a regular behavior with respect to the normal form of a continued fraction? For example, does there exist a simple $\circ$, $?$ or $*$ that satisfies -$$ a = [a_0; a_1, a_2 \, ... \, a_n] \\ b = [b_0; b_1, b_2 \, \ldots \, b_n] \\ - a \circ b = [a_0 \circ b_0; a_1 \circ b_1, a_2 \circ b_2 \, \ldots \, a_n \circ b_n] $$ -or -$$ a \,?\, b = [a_0 + b_0; a_1 + b_1, a_2 + b_2 \, \ldots \, a_n + b_n] $$ -or -$$ *a = [2 a_0; 2 a_1, 2 a_2 \, \ldots \, 2 a_n] $$ -Note that $ [a_0; a_1, a_2 \, \ldots \, a_n] $ can represent: -$$ a_0 + \cfrac{1}{a_1 + \cfrac{1}{a_2 + \cfrac{1}{\ddots a_{n-1} + \cfrac{1}{a_n}}}} $$ -or -$$ a_0 + \cfrac{m_1}{a_1 + \cfrac{m_2}{a_2 + \cfrac{m_3}{\ddots a_{n-1} + \cfrac{m_{n}}{a_n}}}} $$ - -REPLY [26 votes]: This is the reverse of your question; you asked what happens to the value of a continued fraction whose terms are modified in a simple way, but you also implied that you were interested in what to do to the terms to effect simple operations on the value of the fraction. -There is an algorithm which, given a single continued fraction $x$, will give you back the continued fraction $ax+b\over cx+d$ for any fixed integers $a,b,c,d$. -There is an analogous algorithm which takes any two continued fractions $x$ and $y$ and yields the continued fraction for $axy+bx+cy+d\over exy+fx+gy+h$ for any fixed integers $a,\ldots, h$. 
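-(For concreteness, the following is a minimal, unoptimized Python sketch of the first of these two algorithms, the one-fraction case, for a finite simple continued fraction; the function name, the emission rule and the termination handling are my own rendering, and degenerate cases such as a vanishing denominator are glossed over. The two-fraction algorithm, specialized next, is analogous.)
-
-    def homographic(cf, a, b, c, d):
-        # emit the simple continued fraction of (a*x + b) / (c*x + d),
-        # where x = [cf[0]; cf[1], cf[2], ...] is finite
-        out, i = [], 0
-        while True:
-            if c != 0 and d != 0 and a // c == b // d:
-                q = a // c  # integer parts agree: emit a term
-                out.append(q)
-                a, b, c, d = c, d, a - q * c, b - q * d
-            elif i < len(cf):
-                t = cf[i]  # ingest a term: x = t + 1/x'
-                i += 1
-                a, b, c, d = a * t + b, a, c * t + d, c
-            else:
-                while c != 0:  # input exhausted: remaining value is a/c
-                    q = a // c
-                    out.append(q)
-                    a, c = c, a - q * c
-                return out
-
-    print(homographic([3, 7], 2, 0, 0, 1))  # 2 * 22/7 = 44/7 = [6; 3, 2]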
By putting $\langle a,\ldots,h\rangle = \langle 1,0,0,0,0,0,0,1\rangle$, the algorithm calculates the continued fraction for $xy$; by putting $\langle a,\ldots,h\rangle = \langle 0,1,1,0,0,0,0,1\rangle$, the algorithm calculates the continued fraction for $x+y$. -These algorithms were first discovered in the 1970s by Bill Gosper. They are not too complicated, but they are not trivial either. To explain them in this forum would probably not be very productive. -Gosper's monograph is available here. -I have some slides for a talk on the subject that I gave at Haverford College a few years ago, and an implementation in C. I would not normally have chosen C for this work, but I wanted to make the point that the algorithm does not require the fancy features of language $X$ for any $X$. -I also recently corresponded with Art DuPre of Rutgers University, who, with Dave Reimer of the College of New Jersey, recently discovered algorithms which take a continued fraction $x$ and calculate the continued fraction for $\frac12x$. I don't know that these are published yet, and I haven't looked at them closely. -Addendum: I took another look at the Dupre-Reimer algorithm. It is a mess. It is also unaccompanied by a proof, and I imagine that the easiest way to prove it would be to show that it synthesizes the same results as the Gosper algorithm, which seems simpler.<|endoftext|> -TITLE: What things can be defined in terms of universal properties? -QUESTION [6 upvotes]: We can define some mathematical objects using universal properties, for example the tensor product, the free group over a set or the Stone–Čech compactification. -I'm wondering about how to develop my intuition so that I can spot a thing I can define using a universal property when I see it. -It seems clear that a necessary condition on the object is that it is unique. For example for two topological spaces and a function $f: X \to Y$ we cannot define continuity of $f$ in terms of universal properties since there are many functions $X \to Y$ that are continuous. But is it sufficient for an object to be unique (up to unique isomorphism) in order for it to be definable using universal properties? -To summarise into one question: what characterises objects that can be defined using universal properties? -Thank you. - -REPLY [5 votes]: A universal property of an object $X$ in a category is just an isomorphism between $\hom(X,-)$ (or $\hom(-,X)$) and another interesting and more concrete functor. Since "more concrete" is not really precise, this essentially says that every object $X$ has a universal property, but a very boring one. -So the question is really: When do interesting universal properties occur? Well it is the other way round: Given a functor $F$, which often encodes some classification or comparison data, one may ask if it is representable. The Yoneda Lemma tells us that every representation of a functor is unique up to isomorphism. But the functor $F$ is more fundamental than the representing object itself (after all, it doesn't have to exist!). -For example, when you do linear algebra, you naturally stumble upon bilinear maps $M \times N \to T$. You would like to classify them via homomorphisms. That is, you ask if the functor of bilinear maps $M \times N \to (-)$ is representable. And indeed, this is true, and the representing object is the tensor product $M \otimes N$. 
But many properties of the tensor product are just consequences of trivial properties of the functor of bilinear maps (for example $M \otimes N \cong N \otimes M$ just says that bilinear maps on $M \times N$ correspond to bilinear maps on $N \times M$ via a switch). -Of course, universal properties are abundant in pure mathematics. It would be nonsense to try to summarize them here or to attempt any rule for how they appear. You should learn the examples first.<|endoftext|> -TITLE: Proof that $\frac{(2n)!}{2^n}$ is an integer -QUESTION [12 upvotes]: I am trying to prove that $\dfrac{(2n)!}{2^n}$ is an integer. I have tried it by induction: I took $n=1$, for which we have $2/2=1$, an integer. Suppose it is true for $n$; now comes the time to prove it for $n+1$: $(2(n+1))!=(2n+2)!$, which is equal to $$1 \times 2 \times 3 \times \cdots \times (2n) \times (2n+1) \times (2n+2),$$ and the second term would be $$2^{n+1}=2 \times 2^n$$ -Finally if we divide $1 \times 2 \times 3 \times \cdots \times (2n) \times (2n+1) \times (2n+2)$ by $2^{n+1}=2 \times 2^n$, and consider that $(2n)!/2^n$ is an integer, we get $(2n+1) \times (2n+2)/2=(2n+1) \times 2 \times (n+1)/2$; we can cancel out $2$, and we get $(2n+1)(n+1)$, which is definitely an integer. -I am curious: is it this simple? Does it mean that I have proved it correctly? - -REPLY [20 votes]: From 1 to $2n$ there are exactly $n$ even numbers. Hence the product $1\cdots 2n=(2n)!$ is divisible by $2\cdot2\cdots 2 \hbox{ ($n$ times)}= 2^n$.<|endoftext|> -TITLE: codimension of "jumping" of the dimension of fibers -QUESTION [5 upvotes]: Let $f:X\rightarrow Y$ be a dominant morphism of projective (and smooth if you like) varieties over an algebraically closed field $k$ such that $n=\dim(X)=\dim(Y)$. Then $f$ is proper, so by Chevalley's upper semi-continuity theorem, $\dim(X_y)$ is upper semi-continuous on $Y$. Since $f$ is dominant, on an open subset $U\subseteq Y$, $X_y$ is 0 dimensional for all $y\in U$. My question is, can we say $Y\setminus U$ must have codimension at least 2? -We can construct an example where the codimension is exactly two by taking the blowup $B$ of $\mathbb{P}^2$ at a point and looking at the natural map $B\rightarrow\mathbb{P}^2$. If we did not impose that $X$ is irreducible this need not be true, for example $(xy)\subseteq \mathbb{P}^2$ mapping to $\mathbb{P}^1$ via projection has a one dimensional fiber above the origin. I cannot seem to find an irreducible example and am hoping the answer to the above is positive. -Thanks - -REPLY [5 votes]: Suppose there is a codimension one locus $Z$ in $Y$ such that each fibre over a point $z \in Z$ is of positive dimension. Then $f^{-1}(Z)$ has dimension $\geq 1 + \dim Z = \dim Y = \dim X$, and thus $f^{-1}(Z)$ is of the same dimension as $X$, and so must be a component of $X$. Hence, if $X$ is irreducible, no such $Z$ exists (for then $f^{-1}(Z)$ would be all of $X$, contradicting that $f$ is dominant). In conclusion, the answer is positive.<|endoftext|> -TITLE: How to transform a set of 3D vectors into a 2D plane, from a view point of another 3D vector? -QUESTION [19 upvotes]: I googled around a bit, but usually I found overly-technical explanations, or other, more specific Stackoverflow questions on how 3D computer graphics work. I'm sure I can find enough resources for this eventually, but I figured that it's good material for this site... -Let's say that I have a 3D space, with x, y and z coordinates. Then, I have a set of vectors (vertices in computer graphics, I suppose) in that space (they can be forming a cube, for example). 
-How do I go about transforming them for rendering on a 2D plane (screen)? I need to get x and y coordinates of 2D vectors, but they need to depend on a specific point in space - the camera. When I move the camera, the x and y values should change. -I guess the process will go something like this: - -Translate the 3D vectors according to the camera's x, y and z. -Rotate the 3D vectors according to the camera's theta and phi (I will need a lot of conversions to and from the polar coordinate system for this, but sin and cos aren't expensive, right?) -x = x/z, y = y/z, for transforming into 2D, I think, not sure about this part at all, I think I saw it somewhere. -Scale all vectors according to the camera's distance from the scene (or something else?) -Render. - -I brainstormed these on the fly, there are probably a tonne of better solutions. Also, please try to keep the math simple, as I only know basic trig and calc up to the chain rule; I'm not sure what people are actually using for this. I heard something about "rotation matrices", what are they, exactly? (Well, I'm about to Google that now, but it can't hurt to get an answer here as well.) Also, what are the standard directions for xyz space? (Is z "up"?) - -REPLY [3 votes]: I attended Professor Neil Dodgson's undergraduate lecture series at Cambridge University where he outlined a series of matrix manipulations that, in combination, give the result you are asking for. From his 1998 lecture notes:<|endoftext|> -TITLE: Finding the Euler-Lagrange operator $\mathcal M$ of a functional $\mathcal F$ -QUESTION [5 upvotes]: I'd appreciate some help with the following problem: - -Let $F = F(x, \{p_\alpha\}_{|\alpha|\le m})$ be a smooth function of the variables $x\in \overline \Omega$, and $p_\alpha \in \mathbb R, |\alpha|\le m$. Find the Euler-Lagrange operator $\mathcal M$ of the functional $$\mathcal F(u) = \int_\Omega F(x, \{D^\alpha u\}) \, dx$$ and show that its linearization at a smooth function $u_0$ is the operator $D^\beta(a_{\alpha \beta} D^\alpha u)$, where $a_{\alpha \beta}(x) = F_{p_\alpha p_\beta}(x,\{D^\delta u_0(x)\}_{|\delta |\le m})$. (The subscripts $p_\alpha, p_\beta$ here denote partial derivatives; also, recall that the linearization of $\mathcal M(u)$ at $u_0$ is given by $\mathcal L(v) = \frac{d}{ds}\Big|_{s=0} \mathcal M(u_0+sv)$.) - -$\Omega$ presumably is some bounded domain in $\mathbb R^n$ (but you may assume any niceness conditions necessary). -I must admit, I'm not quite sure what is meant by the "Euler-Lagrange operator" of a functional $\mathcal F$. Google couldn't explain it to me, either. So I'm asking here. - -I know that one normally looks for a stationary point of the functional in hope of finding a minimizer/maximizer. So I simply tried differentiating under the integral sign to see what comes out: For an extremal point $u$ and any function $\varphi$ we have -\begin{align} -0 &= \frac{d}{ds}\Big|_{s=0} \mathcal F(u+s\varphi) \\ -&= \int_{\Omega} \sum_{\alpha} F_{p_\alpha}(x,\{D^\delta u\}) D^\alpha \varphi \, dx \\ -&= \int_{\Omega}\left[ \sum_{\alpha} (-1)^{|\alpha|} D^\alpha \left(F_{p_\alpha}(x,\{D^\delta u\})\right) \right] \varphi \, dx -\end{align} -which implies $\sum_{\alpha} (-1)^{|\alpha|} D^\alpha \left(F_{p_\alpha}(x,\{D^\delta u\})\right) = 0$. Should I try to go on from here? Is this what I'm supposed to do at all? If yes, is there a way to avoid a messy calculation in trying to evaluate $D^\alpha \left(F_{p_\alpha}(x,\{D^\delta u\})\right)$? 
- And what is this $\mathcal M$ in the exercise statement? -I think the exercise is largely to motivate why we might be interested in the properties of equations of the form -$$\sum_{|\alpha|, |\beta| \le m} D^\beta (a_{\alpha \beta} D^\alpha u) = \sum_{|\beta|\le m} D^\beta f_\beta$$ -which were the content of the last chapter. So I would like to understand this in some detail. Also, I could not find a definition of the Euler-Lagrange operator $\mathcal M$ in any earlier notes, either. -Thanks for your help! - -REPLY [2 votes]: We have $\mathcal M(u) = \sum_\beta (-1)^{|\beta|}D^\beta\left(F_{p_\beta}(x,\{D^\sigma u\})\right)$ as was pointed out above. From the definition of $\mathcal L$, we derive -\begin{align} -\mathcal L(v) -&= \frac{d}{dt}\Big|_{t=0} \mathcal M(u+tv) \\ -&= \sum_{\alpha,\beta} (-1)^{|\beta|}D^\beta(F_{p_\alpha p_\beta}(x,\{D^\sigma u\})D^\alpha v) \\ -&= \sum_{\alpha,\beta} (-1)^{|\beta|}D^\beta(a_{\alpha \beta}D^\alpha v).
\end{align} \ No newline at end of file