diff --git "a/stack-exchange/math_stack_exchange/shard_100.txt" "b/stack-exchange/math_stack_exchange/shard_100.txt" deleted file mode 100644--- "a/stack-exchange/math_stack_exchange/shard_100.txt" +++ /dev/null @@ -1,4613 +0,0 @@ -TITLE: What is the number of Sylow p subgroups in $S_p$? -QUESTION [15 upvotes]: I am reading the Wikipedia article entitiled Sylow theorems. This short segment of the article reads: -Part of Wilson's theorem states that -$(p-1)!$ is congruent to $-1$ (mod $p$) for every prime $p$. One may easily prove this theorem by Sylow's third theorem. Indeed, observe that the number $n_p$ of Sylow's $p$-subgroups in the symmetric group $S_p$ is $(p-2)!$. On the other hand, $n_p ≡ 1$ mod p. Hence, $(p-2)! ≡ 1$ mod $p$. So, $(p-1)! ≡ -1$ mod $p$. -I do not understand why the number of Sylow p-subgroups of $S_p$ is $(p-2)! $ -I am taking a course in Group Theory and I have studied everything in Dummit and Foote up to Sylow's theorem. I have also had an introductory course in number theory and am familiar with basic combinatorics. - -REPLY [17 votes]: The elements of order $p$ consist of a $p$-cycle of the form $(1,a_2,a_3,\ldots,a_p)$, where $a_2,a_3,\ldots,a_p$ is a permutation of $2,3,\ldots,p$. So there are exactly $(p-1)!$ elements of order $p$. -Now each subgroup of order $p$ contains $p-1$ elements of order $p$ (i.e. its non-identity elements), and the intersection of any two such subgroups is trivial, so the total number of subgroups of order $p$ is $(p-1)!/(p-1) = (p-2)!$.<|endoftext|> -TITLE: Uniform Convergence Implies $L^2$ Convergence and $L^2$ Convergence Implies $L^1$ Convergence -QUESTION [25 upvotes]: Some of the books that discuss convergence say that uniform convergence implies $L^2$ convergence and $L^2$ convergence implies $L^1$ convergence, both while taken over a bounded interval I. While I understand how that could be true intuitively, I'm struggling to see the proof of that. Any ideas? -EDIT: For Uniform Convergence implies $L^2$ convergence, uniform convergence and interchange of limits means $\lim\limits_{n\rightarrow\infty} \int_{a}^{b}(f(x)-f_n(x)) = 0$ and from there, I'm not sure what to do? -For $L^2$ convergence, I don't actually know where to get started either. - -REPLY [27 votes]: If $f_n$ converges to $f$ uniformly (i.e. $\sup_{x \in I}|f_n(x)-f(x)| \to 0$), then -$$\|f_n-f\|^2_2=\int_I |f_n(x)-f(x)|^2\,dx \leq m(I) \left(\sup_{x \in I}|f_n(x)-f(x)|\right)^2 \to 0,$$ -so $f_n$ converges to $f$ in $L^2$. -If $f_n$ converges to $f$ in $L^2(I)$, then by Hölder inequality we have -$$ -\|f_n-f\|_1=\int_I |f_n(x)-f(x)|\,dx \leq (m(I))^{\frac12} \|f_n-f\|_2 \to 0, -$$ -so that $f_n$ goes to $f$ in $L^1$. - -Note that these results (with the same proof) hold in a much more general context, as pointed out in the other answer, I tried to give the most "hands on" proof possible.<|endoftext|> -TITLE: The behavior of the graph of $f(x) = \sin (\pi/x)$ as x approaches 0, Why? -QUESTION [5 upvotes]: This is my first post. I hope it is relevant. -for the $\lim_{x\to0}\sin(\pi/x)$ The limit does not exist. -I am curious if my logic is appropriate or if there is another way to understand this. -So what I believe is the following: -As $x\to 0$ we have that $ \pi/x\to\infty$ -Therefore, $\sin(\infty)$, which makes sense by the fact that as x approaches 0, the input of sine will increase to infinity or some large number. As a result, sine will repeat its periods indefinitely. It will oscillate between 1 and -1 indefinitely as x approaches 0. 
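-To test this intuition numerically before worrying about the formal argument, here is a quick check (a minimal Python sketch of my own; the two sample sequences are hand-picked, not from any standard reference):
-import math
-# Along x = 2/(4m+1) we get pi/x = 2*pi*m + pi/2, so sin(pi/x) = 1;
-# along x = 2/(4m+3) we get pi/x = 2*pi*m + 3*pi/2, so sin(pi/x) = -1.
-for m in range(1, 6):
-    x1 = 2 / (4 * m + 1)
-    x2 = 2 / (4 * m + 3)
-    print(f"x={x1:.5f}: sin(pi/x)={math.sin(math.pi / x1):+.2f}    "
-          f"x={x2:.5f}: sin(pi/x)={math.sin(math.pi / x2):+.2f}")
-Both sequences tend to $0$, yet the sampled values stay at $+1$ and $-1$ respectively, so no single limiting value can exist.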
-I missed one thing: this is a two-sided limit, so sine's input approaches both positive and negative infinity.
-
-REPLY [2 votes]: [Graph of $\sin(\pi/x)$, with the $x$-axis range shrinking from $\pm 1$ to $\pm 0.01$.] This shows graphically the intuition the OP had that the frequency of the sine wave approaches infinity.<|endoftext|>
-TITLE: Induced bilinear form on exterior powers - Towards a global Hodge Star Operator
-QUESTION [7 upvotes]: In all constructions of the Hodge star operator I've seen so far there was a part where an inner product on the exterior power of the tangent space was defined by the ungodly local formula:
-$$\langle v_1 \wedge \cdots \wedge v_k , w_1 \wedge \cdots \wedge w_k \rangle = \det(\langle v_i,w_j \rangle)$$
-Although I've been able to verify that it is in fact an inner product, I have a terrible aversion to such ad hoc constructions. Therefore I was led to this:
-Suppose we are given a projective (hence locally free) $R$-module $M$ and a nondegenerate bilinear form $g: M \times M \to R$, i.e. one that gives an isomorphism:
-$$\varphi: M \to M^* , \quad \varphi: m \mapsto g(-,m)$$
-By the non-degeneracy, $g$ extends naturally to $M^*$ by: $g(g(-,m_1),g(-,m_2))=g(m_1,m_2)$.
-What's the canonical way to extend $g$ to the exterior powers $\bigwedge^k M^*$?
-
-REPLY [3 votes]: You need that $M$ is finitely presented in addition to projective in order for duality to work nicely. The problem reduces to describing a natural nondegenerate pairing
-$$\wedge^k M \times \wedge^k M^{\ast} \to 1$$
-where $1$ denotes the unit module $R$. Equipped with such a pairing, any isomorphism $M \cong M^{\ast}$ gives an isomorphism $\wedge^k M \cong \wedge^k M^{\ast}$, and we can apply the natural isomorphism $\wedge^k M^{\ast} \cong (\wedge^k M)^{\ast}$ coming from the above nondegenerate pairing.
-In fact more generally there is a natural pairing
-$$\wedge^n M \times \wedge^k M^{\ast} \to \wedge^{n-k} M$$
-as follows: by the universal property of the exterior algebra, derivations $\wedge^{\bullet} M \to \wedge^{\bullet} M$ are freely determined by what they do to the generators $M$. In particular, elements of $M^{\ast}$ are naturally in bijection with derivations of degree $-1$, and again by the universal property of the exterior algebra, this induces an action of $\wedge^{\bullet} M^{\ast}$ on $\wedge^{\bullet} M$ where $\wedge^k M^{\ast}$ acts by "differential operators" of degree $-k$.
-This description has the disadvantage that it doesn't treat $M$ and $M^{\ast}$ symmetrically. There are other things you can say, but to my mind they're all a bit unsatisfying for other reasons. One attempt to write down this map ends up writing down $k!$ times this map for reasons I don't really understand. And in positive characteristic there isn't an analogous natural isomorphism $S^k M^{\ast} \cong (S^k M)^{\ast}$ for symmetric powers. There's something funny going on involving invariants vs. coinvariants for the action of $S_n$ on $M^{\otimes n}$ and I haven't sorted it out well in my head.<|endoftext|>
-TITLE: Example of a topological space which is not first-countable
-QUESTION [12 upvotes]: According to Munkres' Topology:
-
-Definition. A space $X$ is said to have a countable basis at $x$ if there is a countable collection $\mathscr B$ of neighborhoods of $x$ such that each neighborhood of $x$ contains at least one of the elements of $\mathscr B$. A space that has a countable basis at each of its points is said to be first-countable.
-
-Considering this, I guess that any space $X$ is first-countable.
If there is any space $X$ that is not first-countable, please mention it with a simple explanation so that I can understand this concept better.
-Thanks a lot.
-
-REPLY [24 votes]: Let $\tau=\{\varnothing\}\cup\{\Bbb R\setminus F:F\text{ is finite}\}$; this is a topology on $\Bbb R$, called the cofinite topology. (A cofinite topology can be defined on any set.) Because $\Bbb R$ is uncountable, $\tau$ is not first countable.
-To see this, let $x\in\Bbb R$, and suppose that $\mathscr{B}$ is a countable family of open nbhds of $x$, say $\mathscr{B}=\{B_n:n\in\Bbb N\}$. For each $n\in\Bbb N$ there is by definition a finite $F_n\subseteq\Bbb R$ such that $B_n=\Bbb R\setminus F_n$. Let $C=\{x\}\cup\bigcup_{n\in\Bbb N}F_n$; $C$ is the union of countably many finite sets, so $C$ is countable. $\Bbb R$ is uncountable, so we can choose a point $y\in\Bbb R\setminus C$. Let $U=\Bbb R\setminus\{y\}$. By definition $U$ is open, and clearly $x\in U$. However, for any $n\in\Bbb N$ we have $y\in B_n\setminus U$, so $B_n\nsubseteq U$, and therefore $\mathscr{B}$ is not a local base at $x$. Since $\mathscr{B}$ was any countable family of open nbhds of $x$, this shows that $\langle\Bbb R,\tau\rangle$ is not first countable at $x$. (And since $x$ was an arbitrary point of $\Bbb R$, we've actually shown that $\langle\Bbb R,\tau\rangle$ is nowhere first countable: no point has a countable local base.)<|endoftext|>
-TITLE: Applications of the completeness of $L^1$
-QUESTION [5 upvotes]: I'm teaching a measure theory class. I think one of the main motivations for the development of the Lebesgue integral is that the space $L^1(\mathbb{R})$ of integrable functions on $\mathbb{R}$ is complete under the $L^1$ norm. You can't get this if you stick to the Riemann integral.
-For further motivation, I would like to find some interesting applications of this fact; preferably elementary.
-For instance, is there some interesting function $f$ that one can construct by writing down a sequence of approximates $f_n$, showing that the sequence is $L^1$-Cauchy, and letting $f$ be its limit? Ideally, it would not be obvious how to construct $f$ otherwise.
-I can think of lots of interesting applications of the completeness of $L^2$ (e.g. Fourier series) but not so much $L^1$.
-
-REPLY [2 votes]: [I have edited my answer. I don't think this is the best possible response but it is somewhat reasonable.]
-Many motivate the Lebesgue integral over the Riemann integral by stating (loosely) that the former has "better limit theorems." That is a bit misguided, I think. They have exactly the same limit theorems (from one point of view) except that the Lebesgue integral integrates more functions. Better yet, any limit theorem for the Riemann integral usually has the extra assumption that the limit function is integrable, while the Lebesgue limit theorem has the conclusion that the limit function is integrable.
-The question, however, is to sell the Lebesgue integral on the basis that $L_1(\mathbb{R})$ is complete while $R_1(\mathbb{R})$ is not. This is similar to the way we sell the transition from $\mathbb{Q}$ to $\mathbb{R}$. So I have two pitches on this that might work for some students.
-A. In a search for motivation it might be better to ask the master.
-Lebesgue very specifically said that his motivation for defining and studying a generalization of the Riemann integral was an example, published by Volterra in 1881, of a bounded derivative that is not Riemann integrable.
-If there is a differentiable Lipschitz function $F:[0,1]\to\mathbb{R}$, and yet it is totally illegal to write $$\int_0^1 F'(t)\,dt = F(1)-F(0),$$
-then something is very seriously wrong with your integration theory. No eighteenth century mathematician would have hesitated to write this identity, and the Riemann integral forbids it!
-To see what is happening with Volterra's example, consider the sequence of functions
-$$f_n(x)=\frac{F(x+1/n)-F(x)}{1/n},$$
-which is uniformly bounded and converges pointwise to $F'$.
-Now according to the standard theory of the Lebesgue integral, any uniformly bounded sequence of functions in $L_1([0,1])$ that converges pointwise is Cauchy. So the sequence $\{f_n\}$ converges to a function $f$ in that space. Since it is also pointwise convergent to $F'$, we would know that $f=F'$ almost everywhere. A direct computation then shows that
-$$F(1)-F(0)= \lim_{n\to\infty}\int_0^1 f_n(t)\,dt = \int_0^1 F'(t)\,dt.$$
-[Skeptical student says: It's a cheat. We didn't use the Cauchy sequence to find the function $f$; we were already given a construction of $F$ and $F'$ anyway. That's like giving a Cauchy sequence $\{q_n\}$ from $\mathbb{Q}$ that converges to $\sqrt2$. I already know $\sqrt2$ and I don't need a Cauchy sequence to tell me what it is.]
-
-B. Try again. Let's give the following problem to some mathematicians from the 18th, 19th, and 20th centuries.
-
-Suppose that $f_k:[0,1]\to \mathbb{R}$ is a sequence of continuously differentiable functions, with $f_k(0)=0$ and such that each $f_k$ is nondecreasing, and suppose that $\sum_{k=1}^\infty f_k(x)$ converges uniformly on $[0,1]$ to a function $F$. What is $F'$? Note that $$G_n(x)= \sum_{k=1}^n f'_k(x) \ \ (n=1,2,3, \dots)$$ is a Cauchy sequence of continuous functions in the $L_1$ norm, since $$ \|G_{n+p}-G_n\|_1 = \int_0^1 \left[f'_{n+1}(t) + \dots + f'_{n+p}(t)\right]\,dt = etc. $$
-
-18th century mathematician: I don't know what "uniformly" means, and who is "Cauchy"? But the answer is clear using term-by-term differentiation, which has never failed me.
-$$F'(x) = \sum_{k=1}^\infty f'_k(x).$$
-19th century mathematician: Nonsense. Every student knows that you simply must have uniform convergence of the differentiated series. Uniform convergence of the series $ \sum_{k=1}^\infty f_k(x)$ says nothing about either the pointwise convergence or uniform convergence of $ \sum_{k=1}^\infty f'_k(x).$
-20th century mathematician: Not at all. We have a Cauchy sequence of continuous functions in $L_1([0,1])$. That provides exactly the function
-$$ \sum_{k=1}^\infty f'_k $$
-defined as a function in $L_1([0,1])$, and it will prove to be my derivative $F'$. With a little more work (standard in our century) I can also prove that, at almost every point $x$,
-$$F'(x) = \sum_{k=1}^\infty f'_k(x). $$
-So we have a nice differentiation formula in the space
-$$F' = \sum_{k=1}^\infty f'_k $$
-as well as a pointwise almost everywhere formula
-$$F'(x) = \sum_{k=1}^\infty f'_k(x) .$$
-[Skeptical student says: Not impressed! I like the 18th century guy.]<|endoftext|>
-TITLE: Solutions of $x^p+y^q=y^r+z^p=z^q+x^r$
-QUESTION [7 upvotes]: I'm struggling with the following problem from Terence Tao's "Solving Mathematical Problems":
-
-Find all positive reals $x,y,z$ and all positive integers $p,q,r$ such that
- $$x^p+y^q=y^r+z^p=z^q+x^r.$$
-
-Obviously, taking $x=y=z=1$ we can have $p,q,r$ arbitrary.
Also, I've found the symmetries of the problem
-$$x,y,z,p,q,r \mapsto x,y,z,p,q,r \\
-x,y,z,p,q,r \mapsto x,z,y,r,q,p \\
-x,y,z,p,q,r \mapsto y,x,z,q,p,r \\
-x,y,z,p,q,r \mapsto y,z,x,r,p,q \\
-x,y,z,p,q,r \mapsto z,x,y,q,r,p \\
-x,y,z,p,q,r \mapsto z,y,x,p,r,q$$
-I was hoping that all solutions follow a simple rule, like $x=y=z$, but unfortunately the solution
-$$10,10,\sqrt{190},2,2,1 $$
-shows this is not the case. Can I get any help on this please? Thanks!
-EDIT:
-I've noticed that equality of two exponents (i.e. $p=q$) implies equality of two bases (i.e. $x=y$), and then the solution is
-$$z=(2x^p-x^r)^{1/p} $$
-where $2x^p-x^r>0$. (Of course, different exponents being equal gives rise to different forms of the solution.)
-I've also noticed that you could have solutions with $x,y,z,p,q,r$ all distinct:
-$$x=\frac{1}{2} \left(\frac{\sqrt{69}}{4}-\frac{3}{4}\right),y=\frac{1}{2},z=\frac{1}{2} \left(\frac{\sqrt{69}}{4}-\frac{1}{2}\right),p=1,q=2,r=3.$$
-This makes me believe that the problem is very general and hence difficult.
-
-REPLY [4 votes]: Apparently, the problem was indeed too complicated, as stated in the errata here:
-
-In page 66, problem 44, there is a $=2$ missing at the end of the string of equations, thus $x^p+y^q=y^r+z^p=z^q+x^r=2$.
-
-We still have the solution $x=y=z=1$ for arbitrary $p,q,r$. Say we have a solution $x',y',z'$ where WLOG $x'>1$. That would force $y'<1$ from the first expression, which in turn forces $z'>1$ from the second expression, which forces $x'<1$ from the third, which is a contradiction.
-Thus the only solution is $x=y=z=1$ and $p,q,r \in \mathbb{N}$.
-[P.S.
-I don't think there is a closed form for the solution without the $=2$ at the end, but this is based solely on numerical evidence.]<|endoftext|>
-TITLE: Choosing a menu for a dinner party
-QUESTION [5 upvotes]: I heard of this puzzle and I would like to get some hints or a full solution.
-There are $1024$ dishes to be chosen from for a party. There are $6875$ participants in total. The objective is to find $10$ dishes for the menu such that for any dish $d$ that is not on the list, more than half of the people would prefer one of the dishes on the menu over $d$.
-Is it possible to generate such a menu?
-
-REPLY [2 votes]: Have a tournament where each pair of dinners "fight," and the winner is the dinner which the majority of contestants prefer. There are $1024\cdot 1023/2$ fights, so there are that many wins. Since there are $1024$ dinners, each dinner wins an average of $1023/2=511.5$ fights. Some dinner must win at least that average number of fights, so some dinner $d$ has won at least $512$ fights. Include that dinner in the menu; the $512$ dinners that $d$ beat will be off the menu, which is OK since each of them is preferred less than $d$.
-We are now left with a smaller power of two. Repeat this process with these $512$ dinners, then $256$, then $128$, etc., selecting one dinner at each level.<|endoftext|>
-TITLE: To confirm the Novikov's condition
-QUESTION [6 upvotes]: I have a question about Novikov's condition.
-Let $L$ be a local martingale such that either $\exp \left(\frac{1}{2}L \right)$ is a submartingale or $E[\exp\left(\frac{1}{2} \langle L,L \rangle_{t} \right)]<\infty$ for every $t>0$.
-Then $\mathcal{E}(L)_{t}:=\exp \left\{L_{t}-\frac{1}{2} \langle L,L\rangle_{t} \right\}$ is a martingale.
-Question
-Let $X$ be a Brownian motion.
We can define $L_{t}:=\int_{0}^{t}b(X_{s})\,dX_{s}$ for a bounded function $b$, and it is easy to check
-\begin{align*}
-E \left[\exp\left(\frac{1}{2} \langle L,L \rangle_{t} \right) \right]=E \left[\exp\left(\frac{1}{2}\int_{0}^{t}|b(X_{s})|^{2}\,ds\right) \right]<\infty
-\end{align*}
-for every $t>0$ since $b$ is bounded.
-I want to know how to show
-\begin{align*}
-E \left[\exp\left(\frac{1}{2}\int_{0}^{t}|b(X_{s})|^{2}\,ds\right) \right]<\infty\quad \mbox{or} \quad E \left[\exp\left(\frac{1}{2}\int_{0}^{t}|b(X_{s})|^{2}\,ds\right) \right]=\infty
-\end{align*}
-when $b$ is unbounded and integrable.
-If you know of related research, please let me know.
-Thank you for your consideration.
-
-REPLY [9 votes]: This is a partial answer, just too long for a comment.
-There is a nice lemma from Dellacherie and Meyer:
-
-Lemma If $\{A_s,s\in [0,t]\}$ is a continuous non-decreasing adapted process such that $E[A_t - A_s|\mathcal F_s]\le K$ for all $s\in[0,t]$, then for any $\lambda < 1/K$,
-$$
-E[e^{\lambda A_t}]< (1-\lambda K)^{-1}.
-$$
-
-Now take $A_s = \int_0^s b(X_u)^2\, du$. Then
-$$
-E[A_t - A_s|\mathcal F_s] = E\left[\int_s^t b(X_u)^2 du\,\Big|\,\mathcal F_s\right] = \int_s^t E\left[ b(X_u)^2 \,|\,\mathcal F_s\right] du \\
-= \int_s^t \int_{\mathbb{R}} p(u-s,y-X_s) b(y)^2 dy\, du\le \int_{s}^t \frac{1}{\sqrt{2\pi(u-s)}}\int_{\mathbb{R}}b(y)^2 dy\,du \le \sqrt{t-s}\int_{\mathbb{R}}b(y)^2 dy,
-$$
-where $p(t,x) = e^{-x^2/2t}/\sqrt{2\pi t}$ is the transition density of $X$.
-Therefore, using the Lemma,
-$$
-E \left[\exp\left(\frac{1}{2}\int_{0}^{t}|b(X_{s})|^{2}\,ds\right) \right]<\infty
-$$
-for $t$ small enough, provided that $b\in L^2(\mathbb{R})$.
-Some remarks:
-
-I think that this is true for all $t>0$.
-Having this for $t$ small enough should be sufficient to get the martingale property.
-I think that moreover this is true for $b$ such that $\sup_{x\in\mathbb{R}}\int_{x-1}^{x+1} b(y)^2 dy<\infty$.
-There is also a useful Beneš sufficient condition: if $|b(x)|\le C(1+|x|)$, then $\mathcal{E}(L)_t$ is a martingale. (This is also true when $b$ depends on the whole path in a sublinear way.)
-3 and 4 suggest that the martingale property should be true if $\int_{x-1}^{x+1} b(y)^2 dy\le C(1+|x|^2)$. I have neither proofs nor references at the moment.<|endoftext|>
-TITLE: Is the minimum of two metrics again a metric?
-QUESTION [8 upvotes]: Let $d_1$ and $d_2$ be two metrics on a non-empty set $X$.
-Is $d = \min\{d_1, d_2\}$ again a metric on $X$?
-I'm looking for a counterexample where the minimum of two metrics is not a metric.
-
-REPLY [7 votes]: Consider two metrics on the set $\{a,b,c\}$ that have the same distance $d_i(a,c)$ and satisfy $d_i(a,b)+d_i(b,c)=d_i(a,c)$. If these metrics are not identical, their minimum will fail the triangle inequality, with $d(a,b)+d(b,c)< d(a,c)$.<|endoftext|>
-TITLE: An inequality using convex functions: If $\frac{a^2}{1+a^2}+\frac{b^2}{1+b^2}+\frac{c^2}{1+c^2} = 1$ then $abc \le \frac1{2\sqrt 2}$
-QUESTION [8 upvotes]: I saw this in a Chinese convex analysis book.
-
-Suppose that $a$,$b$,$c$ are positive real numbers satisfying
-\begin{equation}
-\frac{a^2}{1+a^2}+\frac{b^2}{1+b^2}+\frac{c^2}{1+c^2} = 1.
-\end{equation}
-Show that $abc \le \dfrac1{2\sqrt 2}$.
-
-Since it's from a convex analysis book, I tried proving this using Jensen's inequality. However, I can't think of a suitable convex function. Therefore, I tried AM–GM, but I can't get a product $abc$.
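-(As a sanity check before the algebra below, here is a throwaway Python sketch of my own, not from the book; sampling the constraint surface never produces a product above $\frac1{2\sqrt2}\approx 0.35355$:)
-import math, random
-best = 0.0
-for _ in range(100000):
-    a = random.uniform(0.01, 10)
-    b = random.uniform(0.01, 10)
-    # The constraint is equivalent to 1/(1+a^2) + 1/(1+b^2) + 1/(1+c^2) = 2,
-    # so given a and b we can solve for c whenever a positive real root exists.
-    t = 2 - 1 / (1 + a * a) - 1 / (1 + b * b)
-    if 0 < t < 1:
-        c = math.sqrt(1 / t - 1)
-        best = max(best, a * b * c)
-print(best, "<=", 1 / (2 * math.sqrt(2)))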
-The claim itself can be rewritten, using the equivalent form of the constraint $\frac{1}{1+a^2}+\frac{1}{1+b^2}+\frac{1}{1+c^2}=2$ (so that $2^3=8$):
-$$(abc)^2\le\frac18\iff8(abc)^2\le1$$
-$$\iff\left(\frac{1}{1+a^2}+\frac{1}{1+b^2}+\frac{1}{1+c^2}\right)^3 (abc)^2 \le 1$$
-Finally, I used Lagrange multipliers to solve the problem, but I think there is some more elementary solution.
-$$f(a,b,c)=abc$$
-$$g(a,b,c)=\frac{1}{1+a^2}+\frac{1}{1+b^2}+\frac{1}{1+c^2}-2=0$$
-$$\nabla f(a,b,c)=(bc,ca,ab)$$
-$$\nabla g(a,b,c)=\left(-\frac{2a}{(1+a^2)^2},-\frac{2b}{(1+b^2)^2},-\frac{2c}{(1+c^2)^2}\right)$$
-$$\because \nabla f = \lambda \nabla g$$
-$$\therefore bc = -\frac{2a\lambda}{(1+a^2)^2} \iff abc = -\frac{2a^2\lambda}{(1+a^2)^2}$$
-$$abc = -\frac{2a^2\lambda}{(1+a^2)^2} = -\frac{2b^2\lambda}{(1+b^2)^2} = -\frac{2c^2\lambda}{(1+c^2)^2}$$
-$$\frac{a}{1+a^2}=\frac{b}{1+b^2}=\frac{c}{1+c^2}=\mu$$
-$$a+\frac1a=b+\frac1b=c+\frac1c$$
-$$\because \frac{1}{1+a^2}+\frac{1}{1+b^2}+\frac{1}{1+c^2}=2$$
-$$\therefore \frac{\mu}{a}+\frac{\mu}{b}+\frac{\mu}{c}=2$$
-$$\because \frac{a^2}{1+a^2}+\frac{b^2}{1+b^2}+\frac{c^2}{1+c^2} = 1$$
-$$\therefore a\mu + b\mu + c\mu = 1$$
-$$\frac1a+\frac1b+\frac1c=2(a+b+c)$$
-$$3(a+b+c)=a+\frac1a+b+\frac1b+c+\frac1c=3\left(a+\frac1a\right)$$
-$$b+c=\frac1a$$
-Similarly, $c+a=\dfrac1b$ and $a+b=\dfrac1c$. Substitute $a=\dfrac1{b+c}$ into the other two equations.
-$$c+\frac1{b+c}=\frac1b$$
-$$\frac1{b+c}+b=\frac1c$$
-$$b(b+c)c+b=b+c$$
-$$c+b(b+c)c=b+c$$
-Subtracting one equation from the other, we get $b=c$. Similarly, we have $a=b=c$. It remains to substitute this back into the original constraint and calculate the product.
-Any alternative solution is appreciated.
-
-REPLY [2 votes]: By C-S
-$$1=\sum_{cyc}\frac{a^2}{1+a^2}\geq\frac{(a+b+c)^2}{3+a^2+b^2+c^2},$$
-which by AM-GM gives
-$$3\geq2(ab+ac+bc)\geq6\sqrt[3]{a^2b^2c^2}$$
-and we are done!<|endoftext|>
-TITLE: Equivalence of the theories $\operatorname{Th}(\Bbb{R}, 0,1,+, \le)$ and $\operatorname{Th}(\Bbb{Q}, 0,1,+, \le) $
-QUESTION [5 upvotes]: So I was working on showing that
-$$\operatorname{Th}(\Bbb{R}, 0,1,+, \le) = \operatorname{Th}(\Bbb{Q}, 0,1,+, \le) $$
-My initial idea for working on this problem was to systematically start by showing
-$$\operatorname{Th}(\Bbb{R}, 0,1,+) = \operatorname{Th}(\Bbb{Q}, 0,1,+) $$
-and
-$$\operatorname{Th}(\Bbb{R}, 0,1,\le) = \operatorname{Th}(\Bbb{Q}, 0,1,\le).$$
-Hopefully by then I would have enough intuition about the problem to tackle the larger case. But it occurred to me that I don't understand enough from these cases. Starting with the first one: I noted that since there are no actual relations here, just operators, there is no way to set up an equation that has a solution in one structure but not the other. Or more generally, it's impossible to make statements that have any truth value, other than statements involving existence. (If we use foralls, given the non-existence of other formulas, they don't create any constraint.)
-For the next one, it's clear that these are equivalent: since no operators can be used, only expressions comparing the $1$ and $0$ (present in both) can be manufactured, and since we cannot access the underlying structure in either collection of expressions, there's no way to differentiate our structures here.
-So given that both cases are "obvious", we mix it up and now consider the full
-$$\operatorname{Th}(\Bbb{R}, 0,1,+, \le) = \operatorname{Th}(\Bbb{Q}, 0,1,+, \le) $$
-One can observe that $+$ is closed in $\Bbb{Q}$ and that the rationals are dense over the reals.
But that doesn't say much, since $\times$ is also closed in the rationals yet it's trivial to show:
-$$\operatorname{Th}(\Bbb{R}, 0,1,+, \times, \le) \ne \operatorname{Th}(\Bbb{Q}, 0,1,+,\times, \le) $$
-Namely, consider
-$$ \exists x \,(1 + 1 \le x\cdot x \,\wedge\, x\cdot x \le 1 + 1) $$
-This sentence is only true for $\Bbb{R}$, as the $x$ in question that makes it true does not exist in $\Bbb{Q}$.
-So my earlier intuition about closure and density has no substantial weight to it.
-But then how do I proceed further?
-
-REPLY [3 votes]: I recommend using Ehrenfeucht-Fraisse games (this is possibly because I just really like them :P).
-Given a pair of relational (see below) structures $\mathcal{A}$ and $\mathcal{B}$ and a natural number $n$, consider the following game $EF_n(\mathcal{A},\mathcal{B})$ between two players, Mix and Match:
-
-On turn $k$, first Mix plays an element of one structure or the other: either $a_k\in\mathcal{A}$ or $b_k\in\mathcal{B}$. Then Match plays an element of the opposing structure: either $b_k\in\mathcal{B}$ or $a_k\in\mathcal{A}$.
-Play continues for $n$ turns - at the end of which, Mix and Match have built a sequence $a_1, a_2, . . . , a_n$ of elements of $\mathcal{A}$ and a sequence $b_1, b_2, . . . , b_n$ of elements of $\mathcal{B}$.
-Now, we look at the atomic diagrams of the tuples $(a_i)$ and $(b_i)$: if there is a quantifier-free formula $\varphi(x_1, . . . , x_n)$ such that $$\mathcal{A}\models\varphi(a_1, . . . , a_n)\quad\mbox{ but }\quad\mathcal{B}\not\models\varphi(b_1, . . . , b_n),$$ then Mix wins; otherwise (that is, if the tuples "look the same") Match wins.
-
-Here's the remarkable fact:
-
-$\mathcal{A}\equiv\mathcal{B}$ iff player Match has a winning strategy in each game $EF_n(\mathcal{A}, \mathcal{B})$.
-
-So you can prove the elementary equivalence of two structures by describing a winning strategy for Match (and, of course, showing that it actually is a winning strategy).
-EF-games are definitely a sometimes technique: other methods are often much faster. However, personally I find EF-games the most intuitively satisfying approach, and I don't really understand why two structures are elementarily equivalent until I know how to win the EF-game between them.
-
-OK, what about that "relational" business? Well, a relational structure is just a structure which only has constants and relations - no functions. Bad news: you've got functions! Good news: you can get rid of them! We can "relationalize" a structure $\mathcal{A}$ by replacing each $n$-ary function symbol $f$ with an $(n+1)$-ary relation symbol $G_f$, the graph of $f$: $$G_f^\mathcal{A}=\{(a_1, . . . , a_n, a_{n+1}): f^{\mathcal{A}}(a_1, . . . , a_n)=a_{n+1}\}.$$
-To see why relationalization is necessary, imagine we played the length-1 EF-game with the functional structures $\mathcal{A}=(\mathbb{R}, +, 1)$ and $\mathcal{B}=(\mathbb{Q}, +, 1)$. As Mix, I would play $a_1=\pi\in\mathcal{A}$; Match would have to play some rational $b_1\in \mathcal{B}$. Suppose $b_1={p\over q}$; then $b_1$ would satisfy the atomic formula $b_1+\cdots+b_1=1+\cdots+1$ ($q$-many summands on the left, $p$-many on the right) in $\mathcal{B}$. But of course $\pi$ doesn't satisfy the corresponding formula in $\mathcal{A}$. So I, Mix, would win!
-On the relationalization, by contrast, I wouldn't be able to pull that trick - it would take me $\max\{q, p\}$-many moves to be able to "show" that $b_1$ and $\pi$ were different.
So to match $\pi$ in the game of length $n$, I would just need to pick a $b_1$ which is "close to $\pi$ and not too rational" - for instance, the nearest rational with denominator $n+1$. If you try playing, say, $EF_2$ on the two structures, you'll start to see what a winning strategy for Match looks like . . .
-A similar idea - that we can find "definable" elements which "look like" undefinable elements for long enough to win specific EF-games - works for showing that the linear orders $\mathbb{Z}$ and $\mathbb{Z}\oplus\mathbb{Z}$ are elementarily equivalent; see chapter 6 of Rosenstein's wonderful book "Linear Orderings" (https://books.google.com/books?id=y3YpdW-sbFsC&pg=PA93&lpg=PA93&dq=ehrenfeucht-fraisse+game+Z+linear+order&source=bl&ots=eREBRBKTDL&sig=9aBq9q7YLHzB6ujtpeW92IPoFK4&hl=en&sa=X&ved=0CCAQ6AEwATgKahUKEwj6zq3ewpTJAhUD02MKHRTwAnQ#v=onepage&q=ehrenfeucht-fraisse%20game%20Z%20linear%20order&f=false).<|endoftext|>
-TITLE: Manifolds with geodesics which minimize length globally
-QUESTION [7 upvotes]: I am interested in complete Riemannian manifolds whose geodesics minimize length globally. Such manifolds must be non-compact (otherwise there is always a self-intersecting geodesic).
-However, I suspect this property is much more restrictive.
-Question: Assume $(M,g)$ is complete and has this property.
-Must $M$ be simply connected? Does the exponential map have to be a diffeomorphism on all of $T_pM$?
-Does the sectional curvature have to be non-positive? Are there unique geodesics between any two points?
-Update: From John Ma's answer, it turns out that $\exp_p$ is a diffeomorphism, and in particular there are unique geodesics between any two points.
-I would still like to know if anything intelligent can be said about the curvature, though.
-My guess is that it does not have to be non-positive everywhere, but only 'mostly everywhere' in some sense. (I.e. I can imagine a surface with small regions of positive curvature which doesn't violate our condition.)
-
-I am looking in general for necessary and sufficient conditions (topological/curvature constraints) for this property to hold.
-One sufficient condition is provided by Hadamard's Theorem.
-
-REPLY [6 votes]: I am just elaborating on some points related to John's answer:
-$(1)\Rightarrow (2)$: By completeness, every two points are joined by a geodesic. Assume there exist $p,q \in M$ and two geodesics $\gamma_1, \gamma_2 : [0,L] \to M$ (parametrized by arclength) joining $p,q$. Then $\gamma_1,\gamma_2$ cannot both be length minimizing when $t>L$:
-Assume $\gamma_1$ is minimizing between $p=\gamma_1(0)$ and $\gamma_1(L+\epsilon)$.
-Define a path $\beta=\gamma_1|_{[L,L+\epsilon]} *\gamma_2|_{[0,L]}$.
-$L(\beta)=L+\epsilon=L(\gamma_1|_{[0,L+\epsilon]})=d(p,\gamma_1(L+\epsilon))$
-So $\beta$ is length minimizing between $p$ and $\gamma_1(L+\epsilon)$, so it's a geodesic*, and in particular smooth. But this implies $\dot \gamma_1(L)=\dot \gamma_2(L)$, so by uniqueness $\gamma_1=\gamma_2$.
-$(2) \Rightarrow (1)$: Assume there is a geodesic between $p,q$ which is not minimizing. The Hopf-Rinow theorem implies there is also a minimizing geodesic between $p,q$, so there is more than one, which is a contradiction.
-Note: $(3) \Rightarrow (1)$ is also immediate, since we know geodesics minimize distance from a given point $p$ as long as they stay in a normal ball around $p$. (See Do Carmo's Riemannian geometry, proposition 3.6.) If $\exp$ is a diffeomorphism, then all of $M$ can be considered as a normal ball.
-
-*See Do Carmo's Riemannian geometry, proposition 3.9<|endoftext|>
-TITLE: Spectrum of infinite product of rings
-QUESTION [8 upvotes]: $\def\Z{{\mathbb{Z}}\,}
-\def\Spec{{\rm Spec}\,}$
-Suppose $R$ is a ring and consider $\Spec(\prod_{i \in \mathbb{Z}} R)$. Now for the finite case, I know that $\Spec(R \times R) = \Spec(R) \coprod \Spec(R)$ holds.
-My intuition says that this does not extend to the infinite case. Maybe $\Spec(\oplus_{i \in \Z} R) = \coprod_{i \in \Z}\Spec(R)$ holds, but I am not sure. Can anybody give a proof or counterexample for both the infinite direct sum and the direct product?
-
-REPLY [5 votes]: The spectrum of an infinite direct product is complicated. For example, the spectrum of an infinite direct product $\prod_{i \in I} F_i$ of fields can canonically be identified with the space $\beta I$ of ultrafilters on $I$, also known as the Stone-Čech compactification of $I$. For more on this in the special case that each $F_i$ is $\mathbb{F}_2$ see this blog post.
-In general the spectrum of an arbitrary infinite direct product $\prod_{i \in I} R_i$ fibers over $\beta I$, where the fiber over an ultrafilter $U \in \beta I$ is the spectrum of the ultraproduct $\prod_{i \in I} R_i / U$. For more on this see Eric Wofsey's excellent answer here and this blog post.<|endoftext|>
-TITLE: Is every manifold a metric space?
-QUESTION [24 upvotes]: I'm trying to learn some topology as a hobby, and my understanding is that all manifolds are examples of topological spaces. Similarly, all metric spaces are also examples of topological spaces. I want to explore the relationship between metric spaces and manifolds: could it be that all manifolds are examples of metric spaces?
-
-REPLY [2 votes]: This is too long for a comment. You will probably think it is pedantic, but I promise it isn't!
-In mathematics, we make a distinction between structures and properties.
-A metric space is a structure, and "being a manifold" is a property that a topological space can have. Therefore, the question doesn't make sense. (In practice, some working mathematicians are less careful with this distinction than others!)
-More detail:
-A topological space is a structure. If someone comes up to you and says
-
-Look at this set. Is it a topological space?
-
-you should reject this question as meaningless. They haven't told you what subsets are supposed to be open.
-Similarly,
-
-Is this topological space a metric space?
-
-is a meaningless question. However, the following
-
-(*) Is there a metric on the underlying set of this topological space whose associated topological space is this one?
-
-is a perfectly reasonable question.
-A less wordy thing to say is
-
-Is this topological space metrizable?
-
-when we mean (*). Here, metrizable is a property.
-A manifold is a certain type of topological space, which is to say, it is a topological space with the property that each point has an open neighborhood homeomorphic to an open subset of $\mathbb{R}^n$. That is, if someone comes up to you and says
-
-Look at this topological space. Is it a manifold?
-
-then in fact they have asked you a reasonable question. The answer is yes if for all points of that space, there exists an open neighborhood homeomorphic to an open subset of $\mathbb{R}^n$, and no otherwise.
-So the right question is
-
-Is every manifold metrizable?<|endoftext|>
-TITLE: Convergence of series $\sum\limits_{k=1}^\infty\frac{1}{X_1+\dots+X_k}$ with $(X_k)$ i.i.d. non integrable
-QUESTION [23 upvotes]: Pick a sequence $X_1$, $X_2$, $\dots$, of i.i.d.
random variables taking values in positive integers with $\mathbb{P}(X_i=n)=\frac{1}{n}-\frac{1}{n+1}=\frac{1}{n(n+1)}$ for every positive integer $n$.
-Q: Does the sum $\frac{1}{X_1}+\frac{1}{X_1+X_2}+\frac{1}{X_1+X_2+X_3}+\dots=\sum\limits_{k=1}^\infty\frac{1}{X_1+\dots+X_k}$ converge a.s.? In other words, is it finite a.s.?
-
-Some remarks:
-1. Some slick computations using the generating function of $X_i$ show that the expected value of the sum is $+\infty$, so this gives us no information.
-2. For any fixed $i$, eventually every denominator $X_1+\dots+X_k$ becomes much larger than $X_i$. So convergence of the sum is independent of $X_i$ (more precisely of $(X_1,\dots,X_i)$ jointly)
-and by Kolmogorov's 0-1 law either the sum converges a.s. or it diverges a.s.
-3. Notice that $\mathbb{E}[X_i]=+\infty$, so by the strong law of large numbers (applied to suitable truncations of the $X_i$'s) we have $\frac{X_1+\dots+X_k}{k}\to +\infty$ a.s. So, if we rewrite our series as $\sum_{k=1}^\infty\frac{1}{k}\left(\frac{X_1+\dots+X_k}{k}\right)^{-1}$, we can conclude that it grows more slowly than the harmonic series.
-4. Easy estimates show that the problem is equivalent to the convergence of $\sum\limits_{k=1}^\infty \frac{1}{t_1^{-1}+\dots+t_k^{-1}}$, where $(t_i)_{i\ge 1}$ is a sequence of independent uniform random variables on $(0,1)$.
-5. If we use the $t_i$'s, we can rewrite the sum as $\sum\limits_{k=1}^\infty\frac{1}{k} HM(t_1,\dots,t_k)$, where $HM$ denotes the harmonic mean.
-The inequality $HM\le GM$ ($GM$ is the geometric mean) does not help, since by the strong law of large numbers we get $\sum\limits_{k=1}^\infty\frac{1}{k} GM(t_1,\dots,t_k)\sim\sum\limits_{k=1}^\infty\frac{\alpha}{k}=+\infty$ where $\alpha:=\exp\left(\int_0^1 \log t\,dt\right)=\frac{1}{e}$.
-I do not know the origin of the problem, but it is interesting as it seems to require sharp estimates to solve it. Any idea is appreciated.
-
-REPLY [7 votes]: The series $\sum\limits_{n=1}^\infty\frac1{S_n}$ diverges almost surely, where $S_n=X_1+\cdots+X_n$ for some i.i.d. random variables $X$ and $(X_n)$ such that $P(X\geqslant k)=\frac1k$ for every positive integer $k$.
-The same result applies to every positive i.i.d. sequence $X$ and $(X_n)$ such that $x\cdot P(X\geqslant x)\leqslant c$ for some finite $c$, for every $x$ large enough.
-
-A proof of this uses the degenerate convergence criterion (part 1.), which provides the convergence in probability of a scaled version of the increments (part 2.), and a lemma which allows us to deduce the almost sure divergence of the series from this convergence in probability (part 3.). All this suggests a natural conjecture (part 4.). But it happens that parts 1. and 2. can be replaced by a self-contained argument, explained in addendum A.
-Finally, in addendum B. we prove the much simpler fact that the expected value of the sum diverges.
-1. A version of the degenerate convergence criterion (see the book by Michel Loève, Probability theory, section 23.5, page 329 (1963), or the quote of Loève's book by Allan Gut in his article An extension of Feller's weak law of large numbers, dated 2003, as Theorem 1.2) is as follows.
-
-Degenerate convergence criterion: Let $S_n=X_1+\cdots+X_n$, where $X$ and $(X_n)$ are i.i.d., and $(b_n)$ some positive sequence increasing to infinity.
Introduce $$X^{(n)}=|X|\cdot\mathbf 1_{|X|\leqslant b_n},\qquad\mu_n=E(X^{(n)}),$$ and the conditions:
-
-(i) For every positive $\varepsilon$, $n\cdot P(|X|\geqslant\varepsilon b_n)\to0$
-(ii) $n\cdot\mathrm{var}(X^{(n)})\ll b_n^2$
-(iii) $n\cdot \mu_n\ll b_n$
-
-Then, conditions (i) and (ii) imply that $b_n^{-1}(S_n-n\cdot\mu_n)\to0$ in probability, hence conditions (i), (ii) and (iii) imply that $b_n^{-1}S_n\to0$ in probability.
-
-2. In the present case, conditions (i) and (ii) both mean that $b_n\gg n$, and $\mu_n\sim\log b_n$, hence the choice $b_n=n\log n$ shows that $S_n/(n\log n)\to1$ in probability. Note that every choice $b_n\gg n\log n$ would show that $S_n/b_n\to0$ in probability, for example $S_n/(n\log n\log\log n)\to0$ in probability.
-3. Our second tool is a general lemma which we state independently.
-
-Lemma: Consider any sequence $(Y_n)$ of nonnegative random variables and $(c_n)$ some positive sequence such that $P(Y_n\leqslant c_n)\to0$ and $\sum\limits_nc_n$ diverges; then $\sum\limits_nY_n$ diverges almost surely.
-
-Proof: For every event $A$ such that $P(A)\ne0$ there exists some finite $n_A$ such that, for every $n\geqslant n_A$, $P(Y_n\leqslant c_n)\leqslant\frac12P(A)$; in particular, $E(Y_n\mathbf 1_A)\geqslant c_nP(Y_n\geqslant c_n,A)\geqslant\frac12c_nP(A)$ for every $n\geqslant n_A$, hence $E(Y\mathbf 1_A)=+\infty$, where $Y=\sum\limits_nY_n$. This holds for every $A$ such that $P(A)\ne0$, hence $Y=+\infty$ almost surely, QED.
-The lemma above with $Y_n=1/S_n$ and $c_n=1/(2n\log n)$ shows that $\sum\limits_{n=1}^\infty\frac1{S_n}$ diverges almost surely.
-4. Roughly speaking, $S_n$ is of order $n\log n$. A natural conjecture is that $$\frac1{\log\log n}\sum_{k=1}^n\frac1{S_k}$$ converges in distribution.
-A. To make this answer fully self-contained, here is a simple proof that $\frac{S_n}{n\log n}\to 1$ in probability.
-First of all, note that $P(X_1>\alpha)<\frac{1}{\alpha}$ for any real $\alpha>0$. Now fix some positive integer $n$ and define the truncations
-$X'_k:=X_k \mathbf{1}_{\{X_k\leqslant n\log n\}}$. Let also $S'_n:=X'_1+\dots+X'_n$ and $\mu_n:=E[X'_1]$.
-Since $n\mu_n\sim n\log n$, it suffices to show that $\frac{S_n}{n\mu_n}\to 1$ in probability.
-Notice that $$P\left(\left|\frac{S_n}{n\mu_n}-1\right|>\epsilon\right)\leqslant P(S_n\neq S'_n)+P\left(\left|\frac{S'_n}{n\mu_n}-1\right|>\epsilon\right)
-\leqslant nP(X_1>n\log n)+P\left(\left|\frac{S'_n}{n\mu_n}-1\right|>\epsilon\right).$$
-The first term is easy to bound: $nP(X_1>n\log n)<\frac{n}{n\log n}=\frac{1}{\log n}\to 0$. The second term is handled by Chebyshev's inequality:
-$$P\left(\left|\frac{S'_n}{n\mu_n}-1\right|>\epsilon\right)\leqslant \frac{1}{\epsilon^2}\text{Var}\left(\frac{S'_n}{n\mu_n}\right)=\frac{\text{Var}(X'_1)}{\epsilon^2 n\mu_n^2}
-\leqslant\frac{n\log n}{\epsilon^2 n\mu_n^2}\sim\frac{n\log n}{\epsilon^2 n\log^2 n}\to 0,$$
-where we used the obvious inequality $\text{Var}(X'_1)\leqslant E[X_1'^2]=\sum\limits_{1\leqslant k\leqslant n\log n}\frac{k^2}{k(k+1)}\leqslant n\log n$.
-B. The divergence of the expectation is much easier to prove. For simplicity, assume that $X = 1/U$, where $U$ is uniformly distributed on $[0,1]$. Then
-$$
-E\left[\frac{1}{X_1+\dots+X_n}\right] = E\left[\int_0^\infty e^{-t(X_1+\dots+X_n)}dt \right] = \int_0^\infty \left(E[e^{-tX}]\right)^n dt.
-$$
-Now
-$$
-1-E[e^{-tX}] = \int_0^1 (1-e^{-t/u}) du = t\int_t^\infty (1-e^{-z}) z^{-2} dz \sim -t\log t,\qquad t\to 0+.
-$$
-This already implies that the expectation is infinite, since
-$$
-E\left[\sum_{n=1}^\infty\frac{1}{X_1+\dots+X_n}\right] = \int_0^\infty \frac{E[e^{-tX}]\,dt}{1-E[e^{-tX}]}
-$$
-and the integrand behaves like $\frac{-1}{t\log t}$ as $t\to 0+$.
-Also this allows us to write asymptotics for one summand: as $n\to\infty$,
-$$
-E\left[\frac{1}{X_1+\dots+X_n}\right] = \int_0^\infty \left(E[e^{-tX}]\right)^n dt \sim \int_0^\epsilon \left(E[e^{-tX}]\right)^n dt\\
-\sim \int_0^\epsilon e^{n t\log t} dt \sim \int_0^{\delta} e^{-nu}\frac{du}{- \log u}\sim \frac{1}{n\log n},
-$$
-which agrees with the convergence in probability $S_n/(n\log n)\to 1$.<|endoftext|>
-TITLE: how to simplify $ 2 \sqrt{2}\left(\sqrt{9-\sqrt{77}} \right) $
-QUESTION [6 upvotes]: How to simplify
-$$
-2 \sqrt{2}\left(\sqrt{9-\sqrt{77}} \right)
-$$
-so that it has no nested radicals?
-This question is the same as one already posted, but with a different point of view.
-
-REPLY [4 votes]: $$2\sqrt{2}\sqrt{9-\sqrt{77}} = 2\sqrt{18-2\sqrt{7\cdot 11}} = 2\sqrt{(\sqrt{11}-\sqrt{7})^2} = 2\sqrt{11}-2\sqrt{7}.$$<|endoftext|>
-TITLE: Variant of linear regression using perpendicular distance instead of vertical
-QUESTION [5 upvotes]: Normally, linear regression asks for a pair of parameters m,b such that for a set of given points $\{x_i,y_i\}$ the variance of $y-m\cdot x-b$ is minimized (this minimizes the distance in the y-direction only).
-Instead, I would like to find a line $y'=m\cdot x'+b$ such that the pairs $(x_i,y_i)$ have minimal perpendicular distance from that line.
-I strongly suspect that the parameters (m,b) turn out to be the same in either case, but I can't find anything on the web, and when I tried the calculation by myself I found myself totally out of practice, with one error for every 3 lines of my calculations.
-
-REPLY [8 votes]: With appropriate method names, you will be able to find a lot of references: you seem to be looking for Deming regression, orthogonal regression, orthogonal distance regression (ODR), errors-in-variables (EIV) modeling, or more often total least squares (TLS). It is illustrated below in the bivariate case:
-[figure omitted: bivariate illustration of the orthogonal (perpendicular) residuals from the data points to the fitted line]
-The model was originally introduced by R. J. Adcock, 1878, A problem in least squares, refined by C. H. Kummell, 1879, Reduction of observation equations which contain more than one observed quantity, and revived by W. E. Deming, 1943, Statistical adjustment of data.<|endoftext|>
-TITLE: About the position of "for all" quantifier
-QUESTION [19 upvotes]: I'm no expert in logic, but as far as I know, quantifiers come before the predicates they refer to. Still, when written in English, there are statements which sound better when you don't put all the quantifiers first. For example, the definition of a sequence of functions $f_n:\mathbb{R}\to\mathbb{R}$ converging uniformly to some function $f:\mathbb{R}\to\mathbb{R}$:
-
-For all $\varepsilon > 0$, there exists $n_0\in\mathbb{N}\ $ such that $\ n>n_0 \implies |f_n(x) - f(x)| < \varepsilon$ for all $x\in\mathbb{R}$
-
-I have seen teachers writing this as
-
-$$\forall\varepsilon > 0,\ \ \exists n_0\in\mathbb{N}\ \ \text{ such that } \ \ n>n_0 \implies |f_n(x) - f(x)| < \varepsilon,\ \ \forall x\in\mathbb{R}$$
-
-Maybe an even simpler example, from probability: it's not hard to find something like
-
-$$P[X_n=Y_n, \forall n\in\mathbb{N}]$$
-
-where $(X_n)_{n\in\mathbb{N}}$ and $(Y_n)_{n\in\mathbb{N}}$ are collections of random variables.
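-(For comparison, a fully formal prenex rendering of the same event would put the quantifier in front,
-$$P\left[\left\{\omega : \forall n\in\mathbb{N},\ X_n(\omega)=Y_n(\omega)\right\}\right],$$
-which says exactly the same thing but reads less naturally when spoken aloud.)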
-I understand the motivation: if you talk about this probability, you will probably say "the probability of having $X_n=Y_n$ for all $n$", and not "the probability of, for all $n$, having $X_n=Y_n$". The same reasoning applies to the statement about uniform convergence. That last "for all" quantifier fits better at the end of the sentence if you have to say it in words.
-So my question is: are these "informal" formulas just wrong (from the formal logic point of view)? Or can the formal language of logic handle this kind of writing?
-Thanks.
-
-REPLY [5 votes]: The proper usage of a formal notation or of a more informal one depends particularly on the context of presentation. What is essential is to whom we communicate an idea, and this should guide us to use a suitable level of formal notation.
-Here is an excerpt from P. R. Halmos' instructive paper How to write Mathematics regarding this aspect:
-What about the usage of logical symbols?
-
-P. R. Halmos: Here is a sample:
-"Prove that any complex number is the product of a non-negative number and a number of modulus $1$."
-...
-One way to recast the sample sentence of the preceding paragraph is to establish the convention that all "individual variables" range over the set of complex numbers and then write something like
 $$\forall z\exists p\exists u [(p=|p|) \wedge (|u|=1) \wedge (z=pu)]. $$
-I recommend against it. The symbolism of formal logic is indispensable in the discussion of the logic of mathematics, but used as a means of transmitting ideas from one mortal to another it becomes a cumbersome code. The author had to code his thoughts in it (I deny that anybody thinks in terms of $\exists, \forall, \wedge$, and the like), and the reader has to decode what the author wrote; both steps are a waste of time and an obstruction to understanding. Symbolic presentation, in the sense of either the modern logician or the classical epsilontist, is something that machines can write and few but machines can read.
-
-Hint: You might also have a look at the related question Why there is no sign of logic symbols in mathematical texts?<|endoftext|>
-TITLE: Tangent space of open set
-QUESTION [5 upvotes]: If $U$ is an open subset of $\mathbb{R}^n$ and $p$ is a point in $U$, then the tangent space of $U$ at the point $p$ is the whole of $\mathbb{R}^n$.
-I am having difficulty understanding why this is true. Why is the same not true for closed sets or any set which is not open?
-
-REPLY [2 votes]: The dimension of a tangent space is the dimension of the underlying manifold, and the $n$-dimensional submanifolds of $\mathbb{R}^n$ are precisely the open subsets.
-If the set is not open, then at some boundary point of the set the tangent space is either not well-defined (because the set is not a manifold around that point), or, if it is, the set is a manifold of dimension less than $n$ around that point.
-All claims are immediate from the usual definitions of manifold etc.<|endoftext|>
-TITLE: What is a split $\mathbb{K}$-algebra?
-QUESTION [6 upvotes]: After some considerations the article I'm reading concludes: "...hence H is a simple split $\mathbb{K}$-algebra".
-I can't find this definition anywhere: what does "split" mean?
-
-REPLY [4 votes]: According to this note, a split $k$-algebra $A$ is a $k$-algebra all of whose simple modules $M$ are absolutely simple, in the sense that after extension of scalars $M \mapsto M \otimes_k L$ (where $k \to L$ is a field extension) they remain simple.
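-A standard non-example may help make this concrete (this is my own illustration, not taken from the note): the $\mathbb{R}$-algebra $\mathbb{C}$ is simple but not split, since after extending scalars along $\mathbb{R} \to \mathbb{C}$ the simple module $\mathbb{C}$ becomes $\mathbb{C} \otimes_{\mathbb{R}} \mathbb{C} \cong \mathbb{C} \times \mathbb{C}$, which is no longer simple.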
-If $A$ is semisimple, I believe this condition is equivalent to the condition that $A$ is a finite product of matrix algebras $M_n(k)$. And if $A$ is (finite-dimensional and) simple, I believe this condition is equivalent to the condition that $A$ is a matrix algebra $M_n(k)$.
-The terminology is, as far as I know, motivated by a generalization of the notion of the splitting field of a polynomial. A splitting field for a $k$-algebra $A$ is a field extension $k \to L$ such that $A \otimes_k L$ is split, and in the special case $A = k[x]/f(x)$ this is the same thing as a splitting field of $f(x)$.<|endoftext|>
-TITLE: Is there an elementary way to see that there is only one complex manifold structure on $R^2$?
-QUESTION [9 upvotes]: Is there an elementary way to see that there is only one complex manifold structure on $\mathbb{R}^2$? (Up to biholomorphism, naturally.)
-Elementary in the sense of not appealing to the uniformization theorem.
-
-REPLY [15 votes]: There are two, not one. The open unit disc is homeomorphic to $\Bbb R^2$, and it's not biholomorphic to $\Bbb C$, so it's a distinct complex structure.
-You will have trouble proving this in an easy way. Let me restate the uniformization theorem:
-Every simply connected complex surface is biholomorphic to the plane, sphere, or open unit disc.
-But I can tell you all of the simply connected surfaces without boundary: there's $S^2$ and $\Bbb R^2$ and that's it! So we can rephrase the uniformization theorem as follows:
-Every two complex structures on $S^2$ are biholomorphic. Every complex structure on $\Bbb R^2$ is biholomorphic to either the complex plane or the open unit disc.
-So the uniformization theorem does not say much more than what you're asking for; if someone asked me for a proof I would tell them to read the uniformization theorem. (You can't just use the Riemann mapping theorem - it's not obvious that a complex manifold homeomorphic to $\Bbb R^2$ has a holomorphic embedding into $\Bbb C$.)<|endoftext|>
-TITLE: Expectation Functional in Lebesgue and Riemann Terms – Looking for a clarification
-QUESTION [5 upvotes]: Here is a really central problem I am having while self-studying probability theory; it concerns the relation between the definition of expectation in Lebesgue terms and in Riemann terms. I will truly appreciate any feedback, because I feel this is really at the very core of the whole theory.
-Summing up, I have basically two problems (that are closely linked):
-
-I really don't see why the following two expressions
-$$ \mathbb{E}(X) :=\int_\Omega X d P \hspace{2cm}(L)$$
-and
-$$ \mathbb{E}(X) := \int_{-\infty}^{\infty} t f(t) dt \hspace{2cm}(R)$$
-are equivalent.
-[Of course, here I am referring to the case of $X$ a continuous r.v.]
-
-I am not able to reconcile the ideas that the Lebesgue integral gives us
-a. a number that is the area under a certain curve, and
-b. basically the same number representing the average value that is taken by this curve.
-
-
-Is there somebody who can enlighten me?
-I would say that the second question is really the first, but disguised.
-As always, any feedback is most welcome.
-Thank you for your time.
-
-REPLY [2 votes]: (L) is the definition. (R) is not always true.
-
-Let $X$ be a $(\mathbb R, \mathscr B (\mathbb R))$-valued random variable on a probability space $(\Omega, \mathscr F, \mathbb P)$.
Its law is given by $\mathcal L_X(B) = P(X \in B)$, where $B \in \mathscr B (\mathbb R)$.
-By the change of variables theorem we have the Lebesgue integral:
-$$E[X] = \int_{\mathbb R} t \,d\mathcal L_X(t)$$
-In terms of the distribution function $F_X$, we have the Stieltjes integral:
-$$E[X] = \int_{\mathbb R} t \,dF_X(t)$$
-If $X$ is absolutely continuous, it has a pdf and then we have the Riemann integral:
-$$E[X] = \int_{\mathbb R} t f_X(t) dt$$
-
-The way the Riemann integral is defined is through areas of rectangles that approximate an area under a curve.
-
-The way the Lebesgue integral is defined is through the standard machine: indicator functions, simple functions, nonnegative functions and then integrable functions.
-It's not always supposed to give an area under a curve, because the integrands of Lebesgue integrals are not necessarily continuous. I wouldn't say the Lebesgue integral is supposed to assign an area under a curve. It seems more like a way to assign a general 'measure' rather than merely an 'area'.
-Say for example we have $$E[X] = \int_{\Omega} X dP$$
-where $X$ is a discrete random variable. It wouldn't make sense to compute the 'area' under $X$, because it would just be zero, right? However, it could make sense to compute the 'measure' under $X$. Try to look back at your notes or look online as to why we have Lebesgue integration in the first place.<|endoftext|>
-TITLE: Are there infinitely many natural numbers $n$ such that $\mu(n)=\mu(n+1)=\pm 1$?
-QUESTION [10 upvotes]: A while ago, I answered this question here on StackExchange, which asks whether, for any given integer $k$, there exist infinitely many natural numbers $n$ such that
-$$ \mu(n+1)=\mu(n+2)=\mu(n+3)=\cdots=\mu(n+k) $$
-This is indeed the case, and can be shown using the Chinese Remainder Theorem, as in my answer to the linked question. We find infinitely many $n$ such that the common value of $\mu$ evaluated at these points is $0$.
-If $k \geq 4$, then one of the values $n+1, n+2, n+3, n+4$ is a multiple of $4$, and hence the common value must, in fact, actually be $0$. For $k<4$, this argument doesn't work, and we can't rule out the possibility that
-$$ \mu(n+1)=\cdots=\mu(n+k)=\pm 1 $$
-My question concerns the case of $k=2$. Are there infinitely many natural numbers $n$ such that either
-$$\mu(n) = \mu(n+1) = 1$$
-or
-$$\mu(n) = \mu(n+1) = -1?$$
-
-REPLY [5 votes]: It is a standard conjecture in Number Theory that there exist infinitely many $s$ such that $p=10s+1$, $q=15s+2$, and $r=6s+1$ are all prime. Then $3p=30s+3$, $2q=30s+4$, and $5r=30s+5$ are consecutive integers, each a product of two primes, so $\mu(3p)=\mu(2q)=\mu(5r)=1$.<|endoftext|>
-TITLE: How to prove $\sum_{k=1}^n \cos(\frac{2 \pi k}{n}) = 0$ for any n>1?
-QUESTION [6 upvotes]: I can show for any given value of n that the equation
-$$\sum_{k=1}^n \cos(\frac{2 \pi k}{n}) = 0$$
-is true, and I can see that geometrically it is true. However, I cannot seem to prove it analytically.
-I have spent most of my time trying induction and converting the cosine to a sum of complex exponential functions
-$$\frac{1}{2}\sum_{k=1}^n [\exp(\frac{i 2 \pi k}{n})+\exp(\frac{-i 2 \pi k}{n})] = 0$$
-and using the formula for finite geometric sums
-$$S_n = \sum_{k=1}^n r^k = \frac{r(1-r^n)}{(1-r)}$$
-I have even tried a suggestion I have seen on the net, pulling out a factor of $\exp(i \pi k)$, but I have still not gotten zero.
-Please assist.
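-In case it helps, here is the numerical check I used (a small Python sketch of my own, nothing authoritative):
-import cmath, math
-for n in range(2, 8):
-    real_sum = sum(math.cos(2 * math.pi * k / n) for k in range(1, n + 1))
-    r = cmath.exp(2j * math.pi / n)  # an n-th root of unity, so r**n == 1
-    geom_sum = sum(r ** k for k in range(1, n + 1))  # geometric sum of all n-th roots
-    print(n, f"{real_sum:+.1e}", f"{abs(geom_sum):.1e}")
-Both sums come out as $0$ up to rounding error for every $n$ (the cosine sum is the real part of the geometric one), consistent with the identity, but of course this is not a proof.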
-REPLY [5 votes]: One approach is to write
-$$\cos\left(\frac{2\pi k}{n}\right)=\frac{1}{2\sin\left(\frac{2\pi}{n}\right)}\left(\sin\left(\frac{2\pi(k+1)}{n}\right)-\sin\left(\frac{2\pi(k-1)}{n}\right)\right)$$
-Now, we have converted the sum into a telescoping sum, which we can evaluate directly as
-$$\begin{align}
-\sum_{k=1}^n\cos\left(\frac{2\pi k}{n}\right)&=\frac{1}{2\sin\left(\frac{2\pi}{n}\right)}\left(\sin\left(\frac{2\pi(n+1)}{n}\right)+\sin\left(\frac{2\pi(n)}{n}\right)\right)\\\\
-&-\frac{1}{2\sin\left(\frac{2\pi}{n}\right)}\left(\sin\left(\frac{2\pi(2-1)}{n}\right)+\sin\left(\frac{2\pi(1-1)}{n}\right)\right)\\\\
-&=\frac{1}{2\sin\left(\frac{2\pi}{n}\right)}\left(\sin\left(\frac{2\pi(n+1)}{n}\right)-\sin\left(\frac{2\pi(2-1)}{n}\right)\right)\\\\
-&=\frac{1}{2\sin\left(\frac{2\pi}{n}\right)}\left(\sin\left(\frac{2\pi}{n}\right)-\sin\left(\frac{2\pi}{n}\right)\right)\\\\
-&=0
-\end{align}$$
-(This uses $\sin(2\pi)=\sin(0)=0$ in the second step, and it requires $\sin\left(\frac{2\pi}{n}\right)\neq 0$, i.e. $n>2$; for $n=2$ the identity is immediate, since $\cos\pi+\cos 2\pi=-1+1=0$.)<|endoftext|>
-TITLE: $M \times N$ orientable iff both $M$ and $N$ are orientable proof in terms of volume forms
-QUESTION [6 upvotes]: I'm studying differential forms, and in my homework I'm asked to show that the product of two manifolds $M \times N$ is orientable if and only if both $M$ and $N$ are orientable.
-I want to show this using volume forms. For the backwards implication ($\Leftarrow$):
-Suppose $M$, an $m$-manifold, and $N$, an $n$-manifold, are orientable. Then there exist nowhere vanishing top forms $\omega_1 \in \Omega^m(M)$ and $\omega_2 \in \Omega^n(N)$. Define $\omega_1 \times \omega_2$ on $M \times N$ by $\omega_1 \times \omega_2(X_1,...,X_m,Y_1,...,Y_n)=\omega_1(X_1,...,X_m)\omega_2(Y_1,...,Y_n)$, and one can easily see that this is a nowhere vanishing top $(m+n)$-form on $M\times N$. It follows that $M\times N$ is orientable.
-My problem is with the forward implication. How do I show that if $M\times N$ is orientable, then so are $M$ and $N$, or, contrapositively, that if $M$ or $N$ is not orientable, then $M\times N$ can't be orientable? How can I construct volume forms on $M$ and $N$ from a given volume form on $M\times N$?
-
-REPLY [5 votes]: Assume $M \times N$ is orientable. Fix a point $p \in N$ and a basis $\{v_1, \ldots, v_n\}$ of $T_p N$. Consider an orientation form $\omega$ of $M \times N$ and identify $M$ with $M \times \{p\} \subset M \times N$. Now define $\eta$ on $M$ by $\eta(e_1, \ldots, e_m) = \omega(e_1, \ldots, e_m, v_1, \ldots, v_n)$.
-(edited to make the argument much simpler)<|endoftext|>
-TITLE: Product of numbers $\pm \sqrt{1} \pm \sqrt{2} \pm \dots \pm \sqrt{100}$ is a perfect square
-QUESTION [5 upvotes]: Let $A$ be the product of the $2^{100}$ numbers of the form
-$$\pm \sqrt{1} \pm \sqrt{2} \pm \dots \pm \sqrt{100}$$
-Show that $A$ is an integer, and moreover, a perfect square.
-I found a similar problem here, but the induction doesn't seem to show that $A$ is a perfect square. And I think we can generalize the problem as follows: if $n$ is a perfect square, then the corresponding product over $\pm \sqrt{1} \pm \dots \pm \sqrt{n}$ is a perfect square, too.
-
-REPLY [6 votes]: The same argument can be used to show that the product of the $2^{n-1}$ numbers $\sqrt{1} \pm \sqrt{2} \pm \cdots \pm \sqrt{n}$ is an integer.
-Then your number is exactly the square of this product.<|endoftext|>
-TITLE: Does $H^i(X,F)\cong H^i(Y,f_*F)$ hold for $X\to Y$ finite but $F$ not necessarily quasi-coherent?
-QUESTION [5 upvotes]: Let $X\to Y$ be a finite morphism between schemes, and let $F$ be a sheaf of abelian groups, not necessarily quasi-coherent. Does $H^i(X,F)\cong H^i(Y,f_*F)$ still hold for sheaf cohomology with the Zariski topology?
(It holds for étale cohomology) - -REPLY [4 votes]: Great question! Here is a counterexample: -Let $(A,\mathfrak m)$ be a DVR, and let $A \subseteq B$ be a finite extension of domains such that $B$ has exactly two primes above $\mathfrak m$. For example, $A = \mathbb Z_{(5)}$, and $B = \mathbb Z_{(5)}[i]$, with the primes $(1+2i), (1-2i)$ lying above $(5)$. -Then $Y = \operatorname{Spec} A$ is the space $\{x,\eta\}$ in which $x$ is closed and $\eta$ is open, and $X = \operatorname{Spec} B$ is the space $\{x, y,\eta\}$ in which $x$ and $y$ are closed, but $\eta$ is not. The map sends both $x$ and $y$ to $x$, and $\eta$ to $\eta$. -Lemma 1. Let $\mathcal F$ be a sheaf on $Y$. Then $H^i(Y, \mathcal F) = 0$ for all $i > 0$. -Proof. We will show that taking global sections is exact. Given a surjection $\mathcal F \to \mathcal G$ on $Y$, and a global section $s \in \mathcal G(Y)$, there exists a covering on which $s$ comes from $\mathcal F$. But every covering of $Y$ has to include a copy of $Y$ itself, so $s$ comes from $\mathcal F(Y)$. $\square$ -Remark. The lemma actually generalises (with the exact same proof!) to arbitrary local rings. Indeed, the only open that contains the closed point of $Y$ is $Y$ itself. -It now suffices to construct a sheaf $\mathcal F$ on $X$ whose first cohomology is nontrivial. To do this, recall from Hartshorne Exercise II.1.19 the exact sequence -$$0 \to j_! (\mathcal F|_U) \to \mathcal F \to i_* (\mathcal F|_Z) \to 0,$$ -where $j \colon U \to X$ and $i \colon Z \to X$ are complementary open and closed subsets of a topological space $X$. -Lemma 2. Let $U = \{\eta\}$, $Z = \{x,y\}$, and $\mathcal F = \mathbb Z$ (the constant sheaf). Then $\mathcal F(X) \to i_*(\mathcal F|_Z)(X)$ is not surjective. Consequently (since the constant sheaf $\mathbb Z$ on the irreducible space $X$ is flasque), -$$H^1(X, j_!(\mathcal F|_U)) \neq 0.$$ -Proof. Since $X$ is irreducible, it is connected, so $\mathcal F(X) = \mathbb Z$. On the other hand, $Z$ is a discrete space of two points, so $\underline{\operatorname{Sh}}(Z) \cong \underline{\operatorname{Ab}} \times \underline{\operatorname{Ab}}$, given by -$$\mathcal G \mapsto (\mathcal G_x, \mathcal G_y).$$ -We use that $\mathcal F|_Z = i^{-1} \mathcal F$ is the sheafification of the presheaf -$$V \mapsto \operatorname*{colim}_{W \supseteq i(V)} \mathcal F(W).$$ -Since sheafification doesn't alter stalks, and $\{x,\eta\}$ is the unique minimal open set containing $x$, we see that -$$(\mathcal F|_Z)_x = \mathcal F|_Z(\{x\}) = \mathcal F(\{x,\eta\}) = \mathbb Z,$$ -and similarly $(\mathcal F|_Z)_y = \mathbb Z$. Thus, we see that -$$i_*(\mathcal F|_Z)(X) = \mathcal F|_Z(Z) = \mathbb Z \oplus \mathbb Z.$$ -Thus, the map $\mathcal F(X) \to (i_*\mathcal F|_Z)(X)$ can never be surjective, as $\mathbb Z$ does not surject onto $\mathbb Z \oplus \mathbb Z$. $\square$ -Remark. The proof of the vanishing of higher direct images for étale cohomology for a finite morphism relies on the fact that strictly Henselian local rings (the local rings for the étale topology) have no higher cohomology, and that a finite covering of a strictly Henselian local ring is again a finite product of strictly Henselian local rings. However, as we saw above, finite coverings of local rings for the Zariski topology do have higher cohomology.
Maybe we should see this as a hint that the Zariski topology does not have a good local theory in the same way that the étale topology does.<|endoftext|> -TITLE: Category theory without sets -QUESTION [17 upvotes]: I have been reading Mac Lane's Categories for the Working Mathematician, and the prospect of developing category theory without any use of set theory is mentioned more than once in the book, but never actually realised. I was wondering whether there are any good references (books or online notes) that give an account of such a theory of categories. Looking at this question, it seems that topos theory has been one of the successful ways in which category theory can be defined without sets. -So my question is: What are some good references for how to develop category theory without set theory (using toposes or otherwise)? - -REPLY [12 votes]: Membership relation free set theory -This is what Lawvere did after his PhD. At first, when he discussed this idea with Eilenberg and Mac Lane the latter two did not believe that this was possible and tried to convince him to drop this idea: - -“One day, Sammy told me he had a young student who claimed that he could do set theory without elements. It was hard to understand the idea, and he wondered if I could talk with the student. (...) I listened hard, for over an hour. At the end, I said sadly, ‘Bill, this just won’t work. You can’t do sets without elements, sorry,’ and reported this result to Eilenberg. Lawvere’s graduate fellowship at Columbia was not renewed, and he and his wife left for California.” - An Interview with F. William Lawvere, p. 40 - -However, Lawvere came up with the membership relation free axioms (ETCS): -Lawvere, An elementary theory of the category of sets -Another reference for his membership relation free axiomatization of sets is his book: -Lawvere, Sets for mathematics (available via google as pdf) -I think this answers the original question of set-free category theory, since the category of sets Set is thereby internalized into category theory, i.e. there is no need for the set-theoretical membership relation when using notions of sets in category theory. -It is however not the point to avoid the membership relation, but to take function as a more natural basic notion instead and derive membership and other relevant concepts from there. - -Comparison of Lawvere's axioms with ZFC -Lawvere's axioms characterize a two-valued topos with an infinite object and the axiom of choice: - -In summary then, we can say precisely what we mean by a category of abstract - sets and arbitrary mappings. It is a topos that is two-valued with an infinite - object and the axiom of choice (and hence is also Boolean). - -see Lawvere, Sets for mathematics, p. 113. -For a comparison of Lawvere's axioms with ZFC e.g. see Barry W. Cunningham, Boolean topoi and models of ZFC: - -Mitchell in [19] showed that categories satisfying Lawvere's axioms were models for finitely axiomatizable set theory $Z_1$ which is strictly weaker than $ZFC$ (Zermelo-Fraenkel set theory with the Axiom of Choice) in that the full axiom scheme of Replacement does not hold. - -Compare: -W. Mitchell, Boolean topoi and the theory of sets, J. Pure Appl. Algebra 2 (1972), 261-274. - -The ten axioms [of ETCS] are weaker than ZFC; but when the eleventh [replacement] is added, the two theories have equal strength and are 'bi-interpretable' (the same theorems hold).
Moreover it is known to which fragment of ZFC the ten axioms correspond: 'Zermelo with bounded comprehension and choice'. The details of this relationship were mostly worked out in the 70s ... - -Compare: -T. Leinster, Rethinking set theory (see p. 7 par. 4 and the references there) - -Most mathematicians will never use more properties of sets than those guaranteed by the ten axioms [ETCS]. For example McLarty [C. McLarty. A finite order arithmetic foundation for cohomology. arXiv:1102.1773, 2011] argues that no more is needed anywhere in canons of Grothendieck school of algebraic geometry, the multi-volume works Éléments de Géometrie Algébrique (EGA) and Séminaire de Géometrie Algébrique (SGA). - -Compare: -T. Leinster, Rethinking set theory (see p. 6 'How strong are the axioms?')<|endoftext|> -TITLE: The analogy between two Rudin-Keisler orders -QUESTION [5 upvotes]: Given a set $X$, an ultrafilter $U$ on $X$, and a function $f\colon X\to Y$, we can push forward $U$ along $f$ to obtain an ultrafilter $f_*U$ on $Y$, defined by $C\in f_*U$ if and only if $f^{-1}[C]\in U$. -Consider the class $\mathbb{U}$ of all pairs $(X,U)$, where $X$ is a set and $U$ is an ultrafilter on $X$. The Rudin-Keisler order is a preorder on $\mathbb{U}$: - -$(X,U) \geq_\text{RK}(Y,V)$ if and only if there is a function $f\colon X\to Y$ such that $f_*U = V$. - -Now in model theory, we have another Rudin-Keisler order, also called the realization order. This order was introduced by Lascar and described in Poizat's book A Course in Model Theory, where it is the subject of Section 20.1. Let $T$ be a complete theory, $M\models T$, and let $S_n(M)$ be the usual Stone space of types in $n$ variables with parameters from $M$. Then we have a preorder on $\bigcup_{n\in \omega} S_n(M)$: - -$p(\overline{x}) \geq_{\text{R}} q(\overline{y})$ if and only if every elementary extension of $M$ which realizes $p$ also realizes $q$. - -Poizat writes "The Rudin-Keisler ordering was christened thus by [Las75], because of an analogy to an ordering that Rudin and Keisler defined on ultrafilters; as this is rather far from our subject, it is a little abusive thus to call the ordering $R$." The reference is to Lascar's paper Définissabilité dans les théories stables. Unfortunately, I don't have a copy. -Of course, types are ultrafilters on the Boolean algebra of definable subsets of $M$ (formulas modulo $T$-equivalence). Also, both orders have a unique minimal class: the principal ultrafilters for the first order, and the realized types for the second. The analogy between principal ultrafilters and realized types is clear. -Can anyone spell out how to continue the analogy (if it does continue)? Or make any reasonable guess as to how it goes? - -REPLY [4 votes]: One aspect of the analogy involves what are sometimes called full structures. For any set $X$, the full structure on $X$ has universe $X$, and has all (finitary) relations and functions as part of the structure. (So it's a structure for a language with $2^{|X|}$ relation and function symbols if $X$ is infinite.) A 1-type over this structure amounts to an ultrafilter on $X$. (In detail: Writing $\hat R$ for the symbol that denotes the relation $R$ in the full structure, we can turn any 1-type $p(x)$ into the ultrafilter consisting of those subsets $R$ of $X$ (i.e., unary relations) such that $\hat R(x)\in p(x)$.) 
For each ultrafilter $U$ on $X$, there is a canonical smallest elementary extension of (the full structure on) $X$ that realizes the type associated to $U$, namely the ultrapower of $X$ with respect to $U$. (The type associated to $U$ is realized by the equivalence class, in the ultrapower, of the identity function of $X$.) The types that are RK-below $U$ in Lascar's sense are therefore those that are realized in this ultrapower. But it's easy to check that any element $[f]$ of this ultrapower realizes the type that corresponds to the ultrafilter $f_*(U)$. So the types RK-below $U$ in Lascar's sense correspond to the ultrafilters on $X$ that are RK-below $U$. -This discussion easily generalizes to a correspondence between $n$-types over the full structure on $X$ and ultrafilters on $X^n$. Again, the two RK-orderings match up via this correspondence. -For the more general notion of RK-ordering between ultrafilters on different sets $X$ and $Y$, one can proceed similarly, using the full structure on the union $X\cup Y$. (I might prefer to use the disjoint union, to avoid headaches about any overlap between $X$ and $Y$, but I don't think it's necessary to disjointify; the theory seems to survive despite headaches.)<|endoftext|> -TITLE: If $|G| = 120$ then $G$ has a subgroup of index $3$ or $5$ (or both) -QUESTION [5 upvotes]: This is exercise 1.C.4 in Isaacs, Finite Group Theory. I think I have a proof, but would like to verify the proof and also inquire whether it can be shortened or improved significantly. - -Let $|G| = 120 = 2^3 \cdot 3 \cdot 5$. Show that $G$ has a subgroup of index $3$ or a subgroup of index $5$ (or both). Hint: Analyze separately the possibilities for $n_2(G)$. - -My proof: By the Sylow theorems, $n_2(G)$ must be an odd integer which divides $15$, so $n_2(G) = 1$, $3$, $5$, or $15$. -Case 1: $n_2(G) = 1$. In this case, $G$ has a unique (hence normal) Sylow $2$-subgroup $S$ of order $8$ (index $15$). Then $G/S$ is a group of order $15$, so it has subgroups of order $3$ and $5$ (index $5$ and $3$, respectively). By the correspondence theorem, these subgroups of $G/S$ correspond to subgroups of $G$ which contain $S$ and which have the same indices $5$ and $3$. -Case 2: $n_2(G) = 3$. Since $n_2(G) = |G:N_G(S)|$ where $S \in Syl_2(G)$, the subgroup $N_G(S)$ has index $3$. -Case 3: $n_2(G) = 5$. Then as in case 2, the normalizer of any Sylow $2$-subgroup has index $5$. -Case 4: $n_2(G) = 15$. Here, I use the following theorem, proved earlier in the chapter. - -Suppose that $G$ is a finite group such that $n_p(G) > 1$, and choose distinct Sylow $p$-subgroups $S$ and $T$ of $G$ such that $|S \cap T|$ is as large as possible. Then $n_p(G) \equiv 1$ mod $|S:S \cap T|$. - -Since $n_2(G) = 15$ is not congruent to $1$ mod $8$ or mod $4$, this theorem implies that there are $S,T \in Syl_2(G)$ such that $|S:S \cap T| = 2$, i.e. $|S \cap T| = 4$. Therefore $S\cap T \lhd S$ and $S \cap T \lhd T$, which means that $S \leq N_G(S\cap T)$ and $T \leq N_G(S\cap T)$. We write $H = N_G(S \cap T)$ for brevity. Note that $n_2(H) \equiv 1$ mod $2$, and we have $S,T \in Syl_2(H)$, so in fact $n_2(H) \geq 3$. This means that $H$ contains at least $16$ elements, so $|G:H| \leq 7$. Also, $8$ divides $|H|$, so $|G:H|$ is either $1$, $3$, or $5$. If $|G:H|$ is $3$ or $5$, then we're done: $H$ is the desired subgroup. -If $|G:H| = 1$, then $S \cap T \lhd G$. Therefore, $K = G / (S \cap T)$ is a group of order $30$. Consequently, $K$ contains subgroups of order $3$ and $5$.
By the Sylow theorems, $n_3(K)$ is either $1$ or $10$, and $n_5(K)$ is either $1$ or $6$. If $n_3(K) = 10$ and $n_5(K) = 6$, then $K$ contains $20$ elements of order $3$ and $24$ elements of order $5$, but this is impossible in a group of order $30$. Therefore, either $n_3(K) = 1$ or $n_5(K) = 1$. -If $n_3(K) = 1$, then $K$ contains a normal subgroup $N$ of order $3$. Then $K/N$ is a group of order $10$, so it contains a subgroup of order $2$ (index $5$). By correspondence, $K$ also contains a subgroup of index $5$, and since $K = G / (S \cap T)$, this means that $G$ also contains a subgroup of index $5$. -Similarly, if $n_5(K) = 1$, then $K$ contains a normal subgroup $N$ of order $5$. Then $K/N$ is a group of order $6$, so it contains a subgroup of order $2$ (index $3$). By correspondence, $K$ and $G$ also contain subgroups of index $3$. - -REPLY [2 votes]: Case 4: You showed there are $S,T\in Syl_2(G)$ such that $|S\cap T|=4$ (your argument using the theorem quoted after Case 4 is interesting; I had not seen it earlier, and it is a nice argument). From this, it is easy to come to your conclusion. -You have shown $S\cap T$ is normal in both $S$ and $T$, hence in $\langle S,T\rangle$. -What is the order of $\langle S,T\rangle$? It contains $ST$, and -$$|ST|=\frac{|S|\cdot |T|}{|S\cap T|}=16.$$ -Thus, $\langle S,T\rangle$ is a subgroup of $G$ of order at least $16$; its order divides $|G|$ and is divisible by $|S|=8$. So, -$$|\langle S,T\rangle|=2^3\cdot 3,\,\,\, 2^3\cdot 5,\,\,\,\text{ or }\,\,\, 2^3\cdot 3\cdot 5.$$ -The first two cases give a subgroup of index $5$ or $3$ respectively. -In the third case we get $S\cap T$ normal in $\langle S,T\rangle=G$. Consider the quotient $G/(S\cap T)$; it is a group of order $2\cdot 3\cdot 5$. -Your computations above assure me that you can easily prove that $G/(S\cap T)$ contains subgroups of index $3$ and $5$, so the same is true for $G$ also.<|endoftext|> -TITLE: Find the derived set of $\{\frac{1}{n} + \frac{1}{m}: m,n \in \mathbb{N}\}$ and prove it is such. -QUESTION [6 upvotes]: It is easy to see the derived set is $A' = \{\frac{1}{n}: n \in \mathbb{N}\}\bigcup\{0\}$. To prove these are the only elements of the derived set, we need to show that any limit point must be of the form $\frac{1}{n}$ or $0$. We can see the derived set is bounded above by $1$ and below by $0$. So we look for possible limit points between $0$ and $\frac{1}{N}$ for large $N$, and between $\frac{1}{n+1}$ and $\frac{1}{n}$. I know I need to show that only a finite number of points of the set lie near such values, but I am having trouble doing so. - -REPLY [7 votes]: It is, I think, clear that there are no negative limit points. Let $b$ be a positive real number other than a fraction $\frac{1}{n}$. We will show that $b$ cannot be a limit point of $A$. Note that either (i) $b\gt 1$, or (ii) there exists a positive integer $q$ such that $\frac{1}{q+1}\lt b\lt \frac{1}{q}$. -We deal with Case (ii) because it feels marginally harder. Let $\epsilon$ be the smaller of $\frac{1}{q}-b$ and $b-\frac{1}{q+1}$. We claim there are only finitely many numbers of the form $\frac{1}{m}+\frac{1}{n}$ at distance less than $\epsilon/2$ from $b$. -To get within $\epsilon/2$ of $b$, we must use two integers $m$ and $n$ each $\ge q+1$. Let $m$ be the smaller of the two integers. Then $m$ must be $\lt 2(q+1)$. So there are only finitely many possibilities for $m$. -For any such $m$, there are only finitely many $n$ such that $\frac{1}{m}+\frac{1}{n}\gt \frac{1}{q+1}+\epsilon/2$, so only finitely many within $\epsilon/2$ of $b$. -This completes the argument for Case (ii).
For Case (i) we do the same thing, except instead of $q+1$ we use $1$.<|endoftext|> -TITLE: Euler Transform elementary Proof -QUESTION [10 upvotes]: In this webpage Computing the Digits in π there is a proof of the Euler Transform (page 22). -The proof there relies on measure theory and Lebesgue integration, which I haven't studied yet. -On page 22 there is the following statement: - -Euler didn’t actually prove any general theorems about this transformation. He did use it in several specific cases, where he could show that it really did converge to the original sum, and converged much more quickly. - -I was wondering if anyone knows an elementary proof of this transformation, or a proof for a particular series? -I cannot find much information about this transformation online; a resource recommendation is welcome. - -REPLY [3 votes]: This is not an answer (I don't know the formal proof) but a comment, because the power of Euler-summation for series like this is quite impressive but often not really known. -Here is a table of the progression to the final value without and with Euler-summation. Euler-summation can have "orders", which intuitively means iterates (but it can be interpolated to fractional orders). Here is the table using Euler-summation "to order (0.5)":

 term of the series | partial sum | distance to pi/4 | partial sum by Euler-summ. | distance to pi/4
 -----------------------------------------------------------------------------------------------
  1.00000000000      1.00000000000    0.214601836603      1.00000000000     0.214601836603
 -0.333333333333     0.666666666667  -0.118731496731      0.777777777778   -0.00762038561967
  0.200000000000     0.866666666667   0.0812685032692     0.792592592593    0.00719442919514
 -0.142857142857     0.723809523810  -0.0615886395879     0.784832451499   -0.000565711898330
  0.111111111111     0.834920634921   0.0495224715232     0.785851459926    0.000453296528086
 -0.0909090909091    0.744011544012  -0.0413866193859     0.785350269301   -0.0000478940965617
  0.0769230769231    0.820934620935   0.0355364575372     0.785432796132    0.0000346327349363
 -0.0666666666667    0.754267954268  -0.0311302091295     0.785393836971   -0.00000432642610791
  0.0588235294118    0.813091483680   0.0276933202823     0.785401073569    0.00000291017167712
 -0.0526315789474    0.760459904732  -0.0249382586651     0.785397756972   -0.000000406425094661
  0.0476190476190    0.808078952351   0.0226807889539     0.785398422239    0.000000258841333253
 -0.0434782608696    0.764600691482  -0.0207974719156     0.785398124198   -0.0000000391999284656
  0.0400000000000    0.804600691482   0.0192025280844     0.785398187306    0.0000000239087486163
 -0.0370370370370    0.767563654445  -0.0178345089527     0.785398159544   -0.00000000385353035100
  0.0344827586207    0.802046413065   0.0166482496680     0.785398165666    0.00000000226867304697
 -0.0322580645161    0.769788348549  -0.0156098148481     0.785398163013   -0.000000000384322878997
  0.0303030303030    0.800091378852   0.0146932154549     0.785398163617    2.19652484372E-10
 -0.0285714285714    0.771519950281  -0.0138782131165     0.785398163359   -3.87659406085E-11
  0.0270270270270    0.798546977308   0.0131488139105     0.785398163419    2.16018278683E-11
 -0.0256410256410    0.772905951667  -0.0124922117305     0.785398163394   -3.94612015099E-12
  0.0243902439024    0.797296195569   0.0118980321720     0.785398163400    2.15112513145E-12
 -0.0232558139535    0.774040381616  -0.0113577817815     0.785398163397   -4.04724379093E-13
  0.0222222222222    0.796262603838   0.0108644404407     0.785398163398    2.16404904436E-13
 -0.0212765957447    0.774986008093  -0.0104121553040     0.785398163397   -4.17725596145E-14
  0.0204081632653    0.795394171359   0.00999600796131    0.785398163397    2.19558323752E-14
 -0.0196078431373    0.775786328222  -0.00961183517595    0.785398163397   -4.33468728127E-15
  0.0188679245283    0.794654252750   0.00925608935236    0.785398163397    2.24358184761E-15
 -0.0181818181818    0.776472434568  -0.00892572882946    0.785398163397   -4.51894496246E-16
  0.0175438596491    0.794016294217   0.00861813081966    0.785398163397    2.30671631889E-16
 -0.0169491525424    0.777067141675  -0.00833102172271    0.785398163397   -4.73009122777E-17
  0.0163934426230    0.793460584298   0.00806242090024    0.785398163397    2.38422950230E-17
 -0.0158730158730    0.777587568425  -0.00781059497278    0.785398163397   -4.96870530081E-18

Note that this "order of 0.5" seems to be optimal; the simple Euler-summation (which would be "of order 1") does not accelerate nearly as spectacularly.<|endoftext|> -TITLE: Construct a non-abelian group of order 75 -QUESTION [14 upvotes]: I am trying to use a semi-direct product to construct a non-abelian group of order 75 (Ex 5.5.8 in Dummit & Foote). Using the third Sylow theorem, we get $n_5=1$ so the subgroup of order $25$ is normal. Hence it makes sense to use the following semi-direct product: -$$ -(\text{Sylow }5\text{-subgroup})\rtimes(\text{Sylow }3\text{-subgroup}) -$$ -There are two groups of order $25$, either $\mathbb{Z}/5\mathbb{Z}\times \mathbb{Z}/5\mathbb{Z}$ or $\mathbb{Z}/25\mathbb{Z}$. In order to use the semi-direct product, I need a homomorphism from $\mathbb{Z}/3\mathbb{Z}$ to the automorphism group of one of these. The automorphism group of $\mathbb{Z}/25\mathbb{Z}$ is $C_{20}$ so we can't use this group (since $3\nmid 20$). My questions are: -$$ -\begin{split}&1&) \text{ What is } \mathrm{Aut}\left(\mathbb{Z}/5\mathbb{Z}\times \mathbb{Z}/5\mathbb{Z}\right)? \\ &2&) \text{ How can we construct a non-trivial homomorphism } \phi:\mathbb{Z}/3\mathbb{Z}\to \mathrm{Aut}(\mathbb{Z}/5\mathbb{Z}\times \mathbb{Z}/5\mathbb{Z})? \end{split} -$$ - -REPLY [27 votes]: $\mathbb{Z}_5\times \mathbb{Z}_5$ can be considered as a vector space over the field $\mathbb{Z}_5$ (the addition is the same as the addition in the abelian group modulo $5$, and scalar multiplication actually comes from addition: $v+v=2\cdot v$, $v+v+v=3\cdot v$, $\cdots$) -Thus any automorphism of the group $\mathbb{Z}_5\times \mathbb{Z}_5$ is also an automorphism of the vector space $\mathbb{Z}_5\times \mathbb{Z}_5$ and conversely (because of the remark on scalar multiplication made earlier). -It is well known what the automorphism group of $\mathbb{Z}_5\times \mathbb{Z}_5$ is: it is -$$ -GL(2,5)= -\begin{Bmatrix} -\begin{bmatrix} -a & b\\ c & d -\end{bmatrix}\colon a,b,c,d\in\mathbb{Z}_5, ad-bc\neq 0 -\end{Bmatrix}. -$$ -To get an element of order $3$ in $Aut(\mathbb{Z}_5\times \mathbb{Z}_5)$, we have to find a non-identity matrix $A$ such that $A^3=I$. Thus, $A$ satisfies the polynomial $x^3-1=(x-1)(x^2+x+1)$. The quadratic factor has no root in $\mathbb{Z}_5$ (check), so it is irreducible. Further, $A\neq I$, hence the minimal polynomial of $A$ must divide the quadratic factor $x^2+x+1$. But the minimal polynomial of $A$ has degree $\leq 2$ (the size of $A$), so $x^2+x+1$ is the minimal (and hence characteristic) polynomial of $A$. Can we find a matrix $A$ explicitly with this characteristic polynomial? Yes: the companion matrix -$$ -A= -\begin{bmatrix} -0 & -1\\ 1 & -1 -\end{bmatrix}. -$$ -Now we come to the construction of the group. Let $\mathbb{Z}_5\times \mathbb{Z}_5=\langle x,y\rangle$ with $x=\begin{bmatrix} 1 \\ 0 \end{bmatrix}$ and $y=\begin{bmatrix} 0 \\ 1 \end{bmatrix}$. Then $Ax=y$ and $Ay=-x-y$ (check).
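(A quick machine verification of these claims; a minimal sketch in Python/NumPy. It checks that $A^3=I$ over $\mathbb{Z}_5$ and that $A$ sends $x\mapsto y$ and $y\mapsto -x-y$.)

    import numpy as np

    # Companion matrix of x^2 + x + 1 over Z_5, writing -1 as 4 (mod 5).
    A = np.array([[0, 4],
                  [1, 4]])

    x = np.array([1, 0])  # generator x
    y = np.array([0, 1])  # generator y

    print(np.linalg.matrix_power(A, 3) % 5)  # identity matrix: A has order 3
    print((A @ x) % 5)                       # [0 1]  -> y
    print((A @ y) % 5)                       # [4 4]  -> -x - y (mod 5)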
-Consider $\mathbb{Z}_5\times \mathbb{Z}_5=\langle x,y\rangle$ written multiplicatively, let $t$ be an element of order $3$ in its automorphism group, and let the action of $t$ on this group be the action of $A$ above, rewritten multiplicatively: -$$txt^{-1}=y, \,\,\,\, tyt^{-1}=x^{-1}y^{-1}.$$ -Thus -$$G=\langle x,y,t\colon x^5=y^5=1, xy=yx, t^3=1, txt^{-1}=y, tyt^{-1}=x^{-1}y^{-1}\rangle$$ -is a group of order $75$, and it is non-abelian.<|endoftext|> -TITLE: Expected Value Diverges? -QUESTION [5 upvotes]: Given the probability density function $f(x) = \frac{1}{x^2}$ for $x > 1$, the expected value is: $$E[X] = \int_1^\infty \frac{1}{x} dx = \infty$$ -Can someone please explain this? Is $E[X]$ only finite under certain conditions? Is $\infty$ really the expected value for this probability density function? - -REPLY [2 votes]: This is an example of a Pareto distribution which typically has a density function of the form $$f(x) = \dfrac{\alpha x_{m}^{\alpha}}{x^{\alpha+1}} \text{ for }x \gt x_m $$ and so a cumulative distribution function of -$$F(x) = 1-\left(\dfrac{x_{m}}{x}\right)^{\alpha} \text{ for }x \gt x_m $$ where $x_m \gt 0$ is the lowest value of the support (a location parameter, $x_m=1$ in your question) and $\alpha \gt 0$ is a shape parameter ($\alpha =1$ in your question). -Trying to derive the mean would give $\dfrac{\alpha}{\alpha-1} x_m$, but this is only meaningful when $\alpha \gt 1$. For $0 \lt \alpha \le 1$ the mean is infinite, as you found with your example. -Something similar happens with the variance: you can derive $\dfrac{\alpha^2}{(\alpha-1)^2(\alpha-2)}x_m^2$ but this is only valid for $\alpha \gt 2$. For $1 \lt \alpha \le 2$ the variance is infinite, while for $0 \lt \alpha \le 1$ the variance is undefined (since there is no finite mean) and the second moment is infinite. -The Pareto distribution is a fat-tailed distribution. The smaller $\alpha$ is, the fatter the tail becomes. With $\alpha = 1$ the tail is sufficiently fat to prevent the distribution having a finite mean. -It is still possible to describe a centre of the distribution: for example, in general the median of a Pareto distribution is $2^{1/\alpha} x_m$, and in your particular example the median is $2$.<|endoftext|> -TITLE: Why is tree not uniquely possible with given preorder and postorder traversal? -QUESTION [5 upvotes]: Consider the label sequences obtained by the following pairs of traversals on a labeled binary tree. Which of these pairs identify a tree uniquely? - -preorder and postorder -inorder and postorder -preorder and inorder -level order and postorder - - -I've read that an inorder traversal is necessary to reconstruct a unique tree; I drew different trees for the given options and concluded that options $(2)$ and $(3)$ are correct. - -Can you explain formally why a tree is not uniquely determined by its preorder and postorder traversals? - -REPLY [10 votes]: Consider the trees shown below:

        1        1
       /        /
      2        2
     /          \
    3            3

Both have preorder $123$, and both have postorder $321$, but they’re not the same binary tree. More generally, if a node has only one child, preorder and postorder do not contain enough information to determine whether that child is a left child or a right child.<|endoftext|> -TITLE: How to construct isogenies between elliptic curves over finite fields for some simple cases? -QUESTION [9 upvotes]: From the theorem of Tate, it is known that two elliptic curves over the same field $\mathbb{F}_p$ are isogenous iff they have the same number of points.
For $p\equiv 3\mod 4$, the curve $E_1(\mathbb{F}_p):y^2=x^3+x$ is supersingular, and it has $p+1$ points. The same holds for $p\equiv2\mod 3$ and the curve $E_2(\mathbb{F}_p): y^2=x^3+B$, where $B\ne0$ in $\mathbb{F}_p$. And hence, when $p\equiv 11 \mod 12$, $E_1(\mathbb{F}_p)$ and $E_2(\mathbb{F}_p)$ are isogenous. But I wonder how we can construct an isogeny between them, even in the simplest case, when $p=11$? - -REPLY [3 votes]: Here is a very computational way of creating isogenies. This comes from someone who has digested approximately the first half of Silverman's AEC but not much more, so I am unable to use any of the high-powered tools. - -We can create an isogeny from a given elliptic curve $E_1$ to some other curve by specifying a finite subgroup $A$ of the additive group of $E_1$. At the level of groups the isogeny looks like the natural projection -$$ -E_1\to E_1/A. -$$ -The general theory of isogenies (properties of projective varieties and the Hurwitz genus formula at least come into play here) tells us that $E_1/A$ has the structure of a projective curve of genus one and thus is an elliptic curve. The fun starts when we try to identify which elliptic curve it is! -All I can do is to test what happens with a given $A$. I do not know how to select the right $A$ so that $E_1/A$ would be isomorphic to another known elliptic curve $E_2$. -I use the language of function fields. Assume that $E_1$ is defined over a field $k$. Then the function field $k(E_1)$ of $E_1$ is obtained by adjoining $y$ to the rational function field $k(x)$, where $y$ satisfies the Weierstrass equation of $E_1$. Consequently $[k(E_1):k(x)]=2$. If $P\in A$ is a point of the finite subgroup $A$, then consider the mapping -$$ -\phi_P:E_1\to E_1, (x,y)\mapsto (x,y)+P=(\phi_P(x),\phi_P(y)). -$$ -This is clearly a bijection from $E_1(\overline{k})$ to itself. If we make the assumption that the coordinates of $P$ belong to the field of definition $k$, then the rational functions $\phi_P(x),\phi_P(y)\in k(E_1)$. Therefore the mapping $x\mapsto \phi_P(x), y\mapsto \phi_P(y)$ extends uniquely to an automorphism of the function field $k(E_1)$. -Furthermore, we trivially have the identity -$$ -\phi_{P+Q}=\phi_P\circ\phi_Q -$$ -for all $P,Q\in A$. In other words, the mapping $P\mapsto \phi_P$ gives us a homomorphism of groups $A\to Aut(k(E_1))$. - -The next step is to identify the fixed field $k(E_1)^A$ of the image $\phi(A)\le Aut(k(E_1))$. The reason for my desire to do this is that the fixed field is in a natural way the function field of $k(E_1/A)$, and I want to identify $E_1/A$ by identifying its function field. -We can immediately spot a few elements of the fixed field. Namely the sums -$$ -U=\sum_{P\in A}\phi_P(x) -$$ -and -$$ -V=\sum_{P\in A}\phi_P(y). -$$ -It seems to me that often actually $k(E_1)^A=k(U,V)$. I have verified this in the cases I have done by hand, but I don't have a general proof. Anyway, $U$ has a pole of order two at all the points of $A$, and no other poles. Similarly $V$ has a pole of order three at all the points of $A$ and no other poles. Given that in $E_1/A$ we identify all the points of $A$ with the point at infinity of $E_2$, this means that $U$ and $V$ are prime candidates for the $x$ and $y$ coordinates in the Weierstrass equation of $E_2$. I cannot prove that this always happens, but it seems to work in all the cases I checked :-) - -Let's consider the specific choice of $E_1$ of this question. So $k=\Bbb{F}_{11}$ and $E_1$ is given by -$$y^2=x^3+x.
-$$ -It is easy to verify that this curve has a triple tangent at the $k$-rational points -$P_1=(5,3)$ and $P_2=(5,-3)=[2]P_1$. So they are of order three, and together with the point at infinity they form a cyclic subgroup of order three -$$ -A=\{P_\infty,P_1,P_2\}. -$$ -The group law implies that if $(x,y)\in E_1(\overline{k})$ then -$$ -(x,y)+P_1=(\frac{5 x^2+4 x+5 y+5}{x^2+x+3},\frac{3 x^3+x^2+x y+9 x+3 y+4}{x^3+7 x^2+9 x+7}) -$$ -and -$$ -(x,y)+P_2=(\frac{5 x^2+4 x+6 y+5}{x^2+x+3},\frac{8 x^3+10 x^2+x y+2 x+3 y+7}{x^3+7 x^2+9 x+7}). -$$ -Therefore we get -$$ -\begin{aligned} -U&=x+\frac{5 x^2+4 x+5 y+5}{x^2+x+3}+\frac{5 x^2+4 x+6 y+5}{x^2+x+3}\\ -&=\frac{x^3+10}{x^2+x+3} -\end{aligned} -$$ -and -$$ -\begin{aligned} -V&=y+\frac{3 x^3+x^2+x y+9 x+3 y+4}{x^3+7 x^2+9 x+7}+\frac{8 x^3+10 x^2+x y+2 x+3 y+7}{x^3+7 x^2+9 x+7}\\ -&=\frac{x^3 y+7 x^2 y+2 y}{x^3+7 x^2+9 x+7} -\end{aligned} -$$ -We see that $[k(x):k(U)]=3$, and that $[k(V,U):k(U)]=2$, so $[k(E_1):k(U,V)]=3$. Therefore we can conclude that $k(U,V)$ is, indeed, -the fixed field $k(E_1)^A$. Furthermore, aided by Mathematica I found (by cancelling poles of $V^2$ with powers of $U$) that -$$ -V^2=(U+1)^3+5. -$$ -So in this case we discovered that the curve $E_1/A$ is isomorphic to the elliptic curve -$$ -E_2: y^2=x^3+5 -$$ -also defined over $\Bbb{F}_{11}$. The isogeny $\Phi:E_1\to E_2$ maps a point $(x,y)$ to the point $(U(x,y)+1,V(x,y))$. - -If $u$ is a non-zero element of $\Bbb{F}_{11}$, the substitution $y\mapsto u^3y$, $x\mapsto u^2x$ allows us to replace $5$ in $E_2$ by $5u^6$. The parameter $u^6$ ranges over all the quadratic residues modulo $11$, so we get one half of the possible values of your parameter $B$ from this construction. I don't know how to get the other half - presumably it is not too difficult. - -A similar but somewhat easier calculation shows that $E_1/\langle(0,0)\rangle$, i.e. modding out by a cyclic group of order two, yields the curve -$$y^2=x^3-4x.$$ -As $-4$ is a quadratic non-residue modulo $11$, this isogenous curve and $E_1$ cover all the curves $y^2=x^3+Ax, A\in\Bbb{F}_{11}^*$. - -I really wish an expert would show up and clarify some of this computational mess. From elsewhere on our site I learned that Vélu's formulas give a general algorithm for producing the quotient $E/\Phi$ and the corresponding isogeny $E\to E/\Phi$ for an elliptic curve $E$ and a finite subgroup $\Phi$.<|endoftext|> -TITLE: Is there a plane algebraic curve with just 3-fold rotational symmetry, but without reflection symmetry? -QUESTION [6 upvotes]: I am new to the subject of invariant theory, but the Reynolds operator popped up so I tried to calculated some examples for myself. I computed the invariant polynomials under the cylic group $C_3$ of order $3$, given by $120°$ rotations in the plane. -These polynomials, like -$$\frac 1 4 (x^3-3xy^2), \frac 1 4 (3x^2y-y^3), x^2+y^2, ...$$ -and their combinations give rise to nice 120°-symmetric plane curves. However, from what I can judge by looking at the plotted images in Wolframalpha, there always seems to be a reflection symmetry along some axis. - -Can we get rid of that? Is there an algebraic curve with just 120°-rotational symmetry but without additional reflection axes? -Of course we can take a union of e.g. appropriately arranged circles expressed in a single equation, so we should just talk about irreducible curves. Is my observation even correct? Maybe there is some obvious counterexample. Thank you - -REPLY [5 votes]: Posting this just to get things started.
I haven't checked that this is absolutely irreducible, but it looks promising: -$$x^3 - 3 x y^2 = C + (x^3 - 3 x y^2)^2 + y^3 - 3 y x^2. -$$ -Built from the invariants that you listed. If we have $C=0$ then there will be a singularity at the origin, and Mathematica has problems drawing it smoothly. Below there is a picture of the real points (I got the impression that you were only interested in real plane curves) with $C=1/24$. - -Adding a positive definite term of a high enough degree should give us a compact variant: -$$ -x^3 - 3 x y^2 + (x^2 + y^2)^4 = - \frac1{24} + (x^3 - 3 x y^2)^2 + y^3 - 3 y x^2 -$$<|endoftext|> -TITLE: Matrix notation in tensor transformations -QUESTION [7 upvotes]: In special relativity one looks at coordinate transformations that consist of combinations of Lorentz boosts, rotations and reflections - members of the Lorentz group. Under an arbitrary transformation like that, a 4-vector $\vec{x}$ transforms as: -$$x'^\alpha = \Lambda^\alpha{}_\beta x^\beta$$ -where $\Lambda^\alpha{}_\beta$ represents this transformation (is this a $(1,1)$ tensor itself?). This can be written in matrix form -$$x' = \Lambda x$$ -Now, the basis vectors transform in another way: -$$\vec{e'_\alpha} = (\Lambda^{-1})^\beta{}_\alpha\vec{e_\beta}$$ -A 1-form $\tilde{p}$ transforms like this too: -$$p'_\alpha = (\Lambda^{-1})^\beta{}_\alpha p_\beta$$ -while the basis 1-forms obey -$$\omega'^\alpha = \Lambda^\alpha{}_\beta \omega^\beta$$ -Anyway, for a tensor representable by a matrix, that is a $(0,2)$, $(2,0)$ or $(1,1)$ tensor, there are different transformation properties: -$$T'^{\alpha\beta} = \Lambda^\alpha{}_\gamma \Lambda^\beta{}_\delta T^{\gamma \delta}$$ -$$T'_{\alpha\beta} = (\Lambda^{-1})^\gamma{}_\alpha (\Lambda^{-1})^\delta{}_\beta T_{\gamma \delta}$$ -$$T'^{\alpha}{}_\beta = \Lambda^\alpha{}_\gamma (\Lambda^{-1})^\delta{}_\beta T^\gamma{}_\delta$$ -My question is: how to translate these rules into matrix equations? I've seen formulas involving the transpose of $\Lambda$ (no idea how that would even come about), but never with an explanation of where it comes from. So how do I come up with the right order and transposes/inverses on the matrices? - -REPLY [3 votes]: Let us rearrange your -$$T'^{\alpha\beta} = \Lambda^\alpha{}_\gamma \Lambda^\beta{}_\delta T^{\gamma \delta},$$ -into -$$T'^{\alpha\beta} = \Lambda^\alpha{}_\gamma T^{\gamma \delta}\Lambda^\beta{}_\delta,$$ -and -one step more -$$T'^{\alpha\beta} = \Lambda^\alpha{}_\gamma T^{\gamma \delta}(\Lambda^{\top})_\delta{}^\beta{}.$$ -From this last equation one can clearly see that the components of $T'$ are obtained by the matrix product -$$T'=\Lambda T\Lambda^{\top}.$$<|endoftext|> -TITLE: Given an entire function which is real on the real axis and imaginary on the imaginary axis, prove that it is an odd function. -QUESTION [8 upvotes]: Given an entire function which is real on the real axis and imaginary on the imaginary axis, prove that it is an odd function. -By a Corollary: If $f$ is analytic in a region symmetric with respect to the real axis and if $f$ is real for real $z$, then $f(z) = \overline{f(\bar z)} $. -So that, $f(z) = u(x+iy) + iv(x+iy) = u(x-iy) - iv(x-iy)$ -$f(-z) = u(-x-iy) + iv(-x-iy) = u(-x+iy) - iv(-x+iy)$ -$-f(-z) = -u(-x+iy) + iv(-x+iy)$ -It looks close to the answer, but what else can I do using the Schwarz reflection principle? - -REPLY [6 votes]: Consider the Taylor series of $f(z)$ -$$ -f(z)=\sum_{n=0}^{\infty}a_nz^n -$$ -Since $f(z)= \overline{f(\bar z)}$, all $a_n$ are real.
Since $f$ takes imaginary values on the imaginary axis, -\begin{align} -f(iy)=\sum_{n=0}^{\infty}a_ni^ny^n&=\sum_{n=0}^{\infty}a_{2n}i^{2n}y^{2n}+\sum_{n=0}^{\infty}a_{2n+1}i^{2n+1}y^{2n+1} -\\ -&=\sum_{n=0}^{\infty}(-1)^{n}a_{2n}\:y^{2n}+i\sum_{n=0}^{\infty}(-1)^{n}a_{2n+1}\:y^{2n+1} -\end{align} -is imaginary. So -$$ -\sum_{n=0}^{\infty}(-1)^{n}a_{2n}\:y^{2n}=0 -$$ -which means $a_{2n}=0$ for all $n$. So $f(z)$ is odd.<|endoftext|> -TITLE: Continuous one-to-one mapping from a subset $K \subset \mathbb{R}^n$ of positive measure to $\mathbb{R}^{n-1}$ -QUESTION [8 upvotes]: Let $f\colon \mathbb{R}^n \to \mathbb{R}^{n-1}$ be a continuous function and $K\subset \mathbb{R}^n$ a subset of positive Lebesgue measure. Is it possible that $f$ is one-to-one on $K$? -If $K$ contains a (nonempty) open set this is impossible because of the invariance of domain theorem. But can we say anything for arbitrary (measurable) sets? - -REPLY [7 votes]: Let $n = 2$, $C \subseteq \mathbf R$ a fat Cantor set, $f \colon C \times C \to C$ a homeomorphism. As $C \times C \subseteq \mathbf R^2$ is closed, there is a continuous extension $F \colon \mathbf R^2 \to [0,1]$ by the Tietze extension theorem. Now $\lambda(C \times C) > 0$ and $F|_{C \times C} = f$ is one-to-one.<|endoftext|> -TITLE: Lipschitz continuous one-to-one mapping from subset $K\subset\mathbb{R}^n$ of positive measure to $\mathbb{R}^{n-1}$ -QUESTION [5 upvotes]: Let $f:\mathbb{R}^n\to \mathbb{R}^{n-1}$ and $K\subseteq \mathbb{R}^n$ be a set of positive Lebesgue measure. What kind of regularity do we have to impose on $f$ (e.g., $C^1$, Lipschitz) to conclude that $f$ cannot be one-to-one on $K$? -Continuity is (in general) not enough, as demonstrated here. -On the other hand, a nonvanishing Jacobian on a subset of $K$ of positive measure allows us to construct a contradiction by the coarea formula. -But what if we cannot assume anything about the Jacobian? Is, e.g., Lipschitz continuity sufficient to construct a contradiction? Or do there exist Lipschitz continuous examples of one-to-one mappings? -Edit: This seems to have a connection to singularity theory. Unfortunately, things like Sard's Theorem also don't help, as they only tell me something about singular values, but I would need some information about the possible size of the set of singular points of a one-to-one mapping. - -REPLY [2 votes]: Lipschitz is not sufficient (so, since Lipschitz maps are $C^1$ on the complement of a small set, $C^1$ is not sufficient either). -I will build an example of an injective map for $n=2$, which clearly gives an example for any $n\ge 2$. -Let $C$ be the perfect set obtained in this way: start from $C_0:=[0,1]$, then remove a centered open interval with length $\frac{1}{2}$, obtaining -$C_1:=\left[0,\frac{1}{4}\right]\cup\left[\frac{3}{4},1\right]$, -then, from the remaining two intervals, remove two centered open intervals whose length is $\frac{1}{8}$, etc. It is analogous to the usual construction of the Cantor set, -but the proportions are $\left(\frac{1}{4},\frac{1}{2},\frac{1}{4}\right)$. So for example at the second step we have a union of $4$ intervals: -$$ C_2:=\left[0,\frac{1}{16}\right]\cup\left[\frac{3}{16},\frac{1}{4}\right]\cup\left[\frac{3}{4},\frac{13}{16}\right]\cup\left[\frac{15}{16},1\right]. $$ -$C$ is defined as the intersection of the compact sets obtained at each step, i.e. $C:=\cap_i C_i$.
-It is easy to see that the orthogonal projection $\pi:(C\times C)\setminus (V\times V)\to L$ is injective, where $L$ is the line $\{(x,y):y=2x\}$ and $V$ is the countable set of the endpoints of the several intervals appearing in the construction of $C$ (i.e. $V:=\{0,1,\frac{1}{4},\frac{3}{4},\dots\}$). -But $|C\times C|=0$, so we need to modify this construction a little. It suffices to prove the following lemma: then we take $f:=\pi\circ(g\times g):A\times A\to L$ and we identify $L$ with $\mathbb{R}$. -Lemma. There exists an injective Lipschitz map $g:A\to C$, where $A\subset\mathbb{R}$ is a compact set having positive Lebesgue measure. -Proof. Start from $[0,2]$ and repeat the same construction as above, by creating holes having the same sizes as in $C$ (but not the same proportions!). -So at the first step $A_0:=[0,2]$ becomes $A_1:=\left[0,\frac{3}{4}\right]\cup\left[\frac{5}{4},2\right]$, then at the second step -$$ A_2:=\left[0,\frac{5}{16}\right]\cup\left[\frac{7}{16},\frac{3}{4}\right]\cup\left[\frac{5}{4},\frac{25}{16}\right]\cup\left[\frac{27}{16},2\right], $$ -and so on. Define $A:=\cap_i A_i$. Clearly $|A|=1$. We now build the required map $g$. -Take -$$h:=1_{\mathbb{R}\setminus A},\quad g(x):=\int_0^x h(t)\,dt.$$ -$g:\mathbb{R}\to\mathbb{R}$ is $1$-Lipschitz and strictly increasing (since $A$ has empty interior). -Moreover we claim that $g(A)=C$. Since $g$ is bijective, it suffices to prove that $g(A_i)=C_i$. -Let us check it for example when $i=0,1$ (the general case is analogous). -In fact $g(0)=0$ and $g(2)=1$, so $g(A_0)=C_0$. -Moreover $g\left(\frac{5}{4}\right)-g\left(\frac{3}{4}\right)=\frac{1}{2}$ -and $g\left(\frac{3}{4}\right)-g(0)=g(2)-g\left(\frac{5}{4}\right)$, so $g\left(\frac{3}{4}\right)=\frac{1}{4}$ and $g\left(\frac{5}{4}\right)=\frac{3}{4}$, -thus $g(A_1)=C_1$. The general case really boils down simply to the fact that $A$ and $C$ have holes with the same sizes by construction. $\blacksquare$<|endoftext|> -TITLE: Is a linear factor more likely than a quadratic factor? -QUESTION [8 upvotes]: I choose a reducible (over $\mathbb Z$) monic polynomial of degree four with integer coefficients, -at random. Is it more likely to have a linear factor or a quadratic factor? -Formal version of the question: let $N>0$, and -$$B_N=\lbrace (a,b,c,d) \in [|-N,N|]^4 | X^4+aX^3+bX^2+cX+d \text{ is reducible}\rbrace$$ -(so that $B_N$ has at most $(2N+1)^4$ elements; in fact, it is known to have -$o(N^4)$ elements, and even $O(N^3)$ if I'm not mistaken). Let -$$ -\begin{array}{lcl} -B_{N,1} &=&\lbrace (a,b,c,d) \in B_N | X^4+aX^3+bX^2+cX+d \text{ has a linear factor}\rbrace, \\ -B_{N,2}&=&\lbrace (a,b,c,d) \in B_N | X^4+aX^3+bX^2+cX+d \text{ has a quadratic factor}\rbrace, -\end{array}$$ (note that in $B_{N,2}$ the quadratic factor may itself be reducible; also the intersection $B_{N,1}\cap B_{N,2}$ is nonempty) and -$$ -p_N=\frac{|B_{N,1}|}{|B_N|}, \qquad q_N=\frac{|B_{N,2}|}{|B_N|} -$$ -Is anything known about the asymptotic behaviour of $p_N$, $q_N$ and $\frac{p_N}{q_N}$? - -REPLY [2 votes]: Here is a heuristic answer to the corresponding question over a finite field $\mathbb{F}_q$. As described in this post, it turns out that for fixed $n$, as $q \to \infty$, the distribution of irreducible factors of degree $k$ in a random monic polynomial of degree $n$ asymptotically approaches the distribution of cycles of length $k$ in a random permutation in $S_n$.
-Hence the probability that a random monic polynomial over $\mathbb{F}_q$ of degree $4$ has a linear factor is asymptotically the probability that a random permutation in $S_4$ has a fixed point, and similarly the probability of an irreducible quadratic factor is asymptotically the probability that a random permutation in $S_4$ has a 2-cycle. These probabilities are, if I haven't miscomputed, -$$\frac{15}{24} = 0.625, \frac{9}{24} = 0.375$$ -respectively. So linear factors are more likely. -As described in this post, as $n \to \infty$ the number of cycles of length $k$ in a random permutation of $S_n$ is asymptotically Poisson with parameter $\frac{1}{k}$. In particular, the probability that there is at least one such cycle is asymptotically $1 - e^{-\frac{1}{k}} \sim \frac{1}{k}$. For $k = 1, 2$ this gives -$$1 - e^{-1} \sim 0.632, 1 - e^{-\frac{1}{2}} \sim 0.393.$$ -But it's unclear what relevance this has to the original question.<|endoftext|> -TITLE: Is rank of vector bundle encoded in its Hilbert polynomial -QUESTION [6 upvotes]: Let $\mathcal F$ be a vector bundle over a projective variety $(X, \mathcal O_X(1))$, and let $P_\mathcal F(m)=\chi(X, \mathcal F(m))$ be its Hilbert polynomial. Can I then determine from $P_\mathcal F$ the value of the rank $rk(\mathcal F)$? - -REPLY [4 votes]: Let $(X,\mathcal O_X(1))$ be projective of dimension $n$. Let $\eta \in X$ be the generic point. For a sheaf $\mathcal F$ write -$$P_{\mathcal F}(m) = \sum_{i=0}^n a_i(\mathcal F) \frac{m^i}{i!}.$$ -Let $d = \deg X = a_n(\mathcal O_X)$ be the degree of $(X, \mathcal O_X(1))$. - -Lemma. Let $\mathcal E$ be a vector bundle of rank $r$ over $X$. Then $a_n(\mathcal E) = rd$. - -Proof. For $\mathcal E = \mathcal O_X$, this is the definition of $d$. By additivity of the Hilbert polynomial, this proves the result when $\mathcal E$ is the trivial bundle of rank $r$. -Now let $\mathcal E$ be any vector bundle. Let $s_1, \ldots , s_r \in \mathcal E_\eta$ be a basis for the generic fibre of $\mathcal E$. Then $s_i$ is a rational section defined away from some divisor $D_i$. Setting $D = \sum D_i$, we see that the map -\begin{align*} -\mathcal O_X^r &\to \mathcal E(D)\tag{1}\label{1}\\ -e_i &\mapsto s_i -\end{align*} -is defined. This gives a short exact sequence -$$0 \to \mathcal O_X^r \to \mathcal E(D) \to \mathcal F \to 0,$$ -where $\mathcal F$ is supported on some divisor $D'$ (since (\ref{1}) is generically an isomorphism). By additivity of the Hilbert polynomial, we get -$$P_{\mathcal E(D)} = P_{\mathcal O_X^r} + P_\mathcal F.$$ -But $\mathcal F$ is supported in lower dimension, so it does not contribute to the leading coefficient. A similar argument allows us to replace $\mathcal E(D)$ by $\mathcal E$. $\square$ -Remark. If you don't like rational sections, you can also use Serre's theorem to conclude that $\mathcal E(m)$ is globally generated for $m \gg 0$, and then choose $s_i$ that generate $\mathcal E(m)_\eta$. -Remark. It seems that we haven't really used that $\mathcal E$ is a vector bundle. The same argument should work for any coherent sheaf (where the rank is, by definition, the dimension of the generic fibre).<|endoftext|> -TITLE: Let $f:[0,\infty)\to \Bbb R$ be defined by $f(x)=\int_{0}^x \sin^2(t^2)\mathrm dt$.
Show that $f$ is uniformly continuous -QUESTION [5 upvotes]: Let $f:[0,\infty)\to \Bbb R$ be defined by -$$f(x)=\int_{0}^x \sin^2(t^2)\mathrm dt$$ -Show that the function is uniformly continuous on $[0,1)$ and $(0,\infty)$. -Attempt: Differentiating with respect to $x$ we get -$$f'(x)=\sin^2(x^2)$$ then $$|f'(x)|=|\sin^2(x^2)|\le1,\forall x\in(0,\infty)$$ -This means that the derivative of $f$ is bounded, hence $f$ is uniformly continuous on the given intervals. -Am I right? Also, please provide other methods to solve this. Thank you. - -REPLY [2 votes]: Your attempt is correct. In general, if the derivative is a priori bounded, $\|f'\|_\infty\leq M$, then the function is uniformly continuous by the mean value theorem: -$$f(x)-f(y)=f'(\xi)(x-y)\Longrightarrow|f(x)-f(y)|=|f'(\xi)||x-y|\leq M|x-y|.$$ - -REPLY [2 votes]: Let $\epsilon >0$. -By the MVT we have $\vert f(x)-f(y)\vert =\vert f'(c)(x-y)\vert \leq \vert x-y\vert$ for $y < x$; hence choosing $\delta=\epsilon$ works.<|endoftext|> -TITLE: Integrals of orthogonal functions -QUESTION [5 upvotes]: Suppose we have an orthogonal basis $\{e_k\}_{k \in \mathbb{N}}$ for $L^2[0,1]$. Do the functions -\begin{align*} -u_k(t) := \int_t^1e_k(s)\text{d} s,\quad k \in \mathbb{N}, -\end{align*} -also satisfy -\begin{align*} -\int_0^1u_i(s)u_j(s)\text{d} s = 0, \quad i \neq j? -\end{align*} -This is true for the functions $\{\sqrt{2}\sin{(k - 1/2)\pi t}\}_{k \in \mathbb{N}}$ for example, and also seems to be true for the shifted Legendre polynomials. If it's not generally true, I'm curious under what conditions it might be true. - -REPLY [3 votes]: Let $e_k(s) = e^{2 \pi i k s}$. Then the formula above gives -$u_0(t)=1-t$, $u_1(t) = {1 \over 2 \pi i} (1-e^{2 \pi i t})$, -and $\int_0^1 u_0(s) u_1(s) ds = -{1+i\pi \over 4 \pi^2}$.<|endoftext|> -TITLE: How to estimate $ \left(1 + \sqrt{2} + \sqrt{3} + \dots + \sqrt{n} \right) - \frac{2}{3} n \sqrt{n}$? -QUESTION [10 upvotes]: How do I estimate the error in approximating the sum of square roots by an integral? From calculus we know that: -$$ \int_0^n \sqrt{x} \, dx = \frac{2}{3} x\sqrt{x} \;\;\Bigg|_{x=0}^{x=n} = \frac{2}{3}n\sqrt{n}$$ -I forget the name (midpoint rule? trapezoid rule?); basically we want to approximate the integral as a Riemann sum. How do we estimate the error? -$$ \left(1 + \sqrt{2} + \sqrt{3} + \dots + \sqrt{n} \right) - \frac{2}{3} n \sqrt{n}$$ -To give you a sense of how much we are losing on this approximation, let's draw two pictures. -[two figures omitted: the rectangles of the Riemann sum against the curve $\sqrt{x}$, with the gray error regions and a yellow triangle near $x=13$ highlighted] -We are losing all the gray stuff in our approximation, which is quite a lot! I don't really care about the integral; what's important is the difference between the sum and the integral of the square root. -The yellow triangle has base $1$ and height $\sqrt{14} - \sqrt{13}$ so the area is: -$$ A = \frac{1}{2} \times b \times h = \frac{1}{2} \times 1 \times (\sqrt{\color{#E0E070}{14}} - \sqrt{\color{#E0E070}{13}}) = \frac{1}{2}(\sqrt{n+1} - \sqrt{n}) \approx \frac{1}{4 \sqrt{n}}$$ -This suggests the total of all errors is about $\propto \sqrt{\color{#D22}{n}}$ which is not a small amount. Can anyone get the constant of proportionality? -The Euler-Maclaurin machine does crank out such error estimates, but the square root function is not so strange. Can we derive such an estimate in this specific case using basic standard inequalities?
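Before the detailed answer below, a small numerical experiment (a minimal sketch in Python; the chosen values of $n$ are arbitrary) suggests that the error behaves like $\frac{1}{2}\sqrt{n}$ plus a constant near $-0.2079$:

    import math

    # Gap between the sum of square roots and the integral (2/3) n^(3/2).
    for n in [10**2, 10**4, 10**6]:
        s = sum(math.sqrt(k) for k in range(1, n + 1))
        gap = s - (2.0 / 3.0) * n**1.5
        # Subtracting sqrt(n)/2 leaves a nearly constant remainder ~ -0.2079,
        # so gap ~ sqrt(n)/2 + const.
        print(n, gap, gap - 0.5 * math.sqrt(n))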
- -REPLY [3 votes]: Using the Binomial Theorem, we get -$$ -\begin{align} -k^{3/2}-(k-1)^{3/2} -&=k^{3/2}\left[1-\left(1-\frac1k\right)^{3/2}\right]\\ -&=k^{3/2}\left[\frac3{2k}-\frac3{8k^2}-\frac1{16k^3}+O\left(\frac1{k^4}\right)\right]\\ -&=\frac32k^{1/2}-\frac38k^{-1/2}-\frac1{16}k^{-3/2}+O\left(k^{-5/2}\right)\tag{1} -\end{align} -$$ -and -$$ -\begin{align} -k^{1/2}-(k-1)^{1/2} -&=k^{1/2}\left[1-\left(1-\frac1k\right)^{1/2}\right]\\ -&=k^{1/2}\left[\frac1{2k}+\frac1{8k^2}+O\left(\frac1{k^3}\right)\right]\\ -&=\frac12k^{-1/2}+\frac18k^{-3/2}+O\left(k^{-5/2}\right)\tag{2} -\end{align} -$$ -and -$$ -\begin{align} -k^{-1/2}-(k-1)^{-1/2} -&=k^{-1/2}\left[1-\left(1-\frac1k\right)^{-1/2}\right]\\ -&=k^{-1/2}\left[-\frac1{2k}+O\left(\frac1{k^2}\right)\right]\\ -&=-\frac12k^{-3/2}+O\left(k^{-5/2}\right)\tag{3} -\end{align} -$$ -Combining $(1)$, $(2)$, and $(3)$ gives -$$ -\begin{align} -&\small\left(\frac23k^{3/2}+\frac12k^{1/2}+\frac1{24}k^{-1/2}\right)-\left(\frac23(k-1)^{3/2}+\frac12(k-1)^{1/2}+\frac1{24}(k-1)^{-1/2}\right)\\[6pt] -&\small=k^{1/2}+O\left(k^{-5/2}\right)\tag{4} -\end{align} -$$ -Summing, we get that -$$ -\sum_{k=1}^nk^{1/2}=\frac23n^{3/2}+\frac12n^{1/2}+C+\frac1{24}n^{-1/2}+O\left(n^{-3/2}\right)\tag{5} -$$ -where $C=\zeta\left(-\frac12\right)=-0.207886224977354566\dots$ - -Why $\boldsymbol{\zeta\left(-\frac12\right)}$? -We can rewrite the reasoning given in this answer, which uses the Euler-Maclaurin Sum Formula, but use the Binomial Theorem, as above. However, the argument is exactly the same. That is, -$$ -\lim_{n\to\infty}\left[\sum_{k=1}^nk^{-z}-\frac1{1-z}n^{1-z}-\frac12n^{-z}\right] -$$ -converges uniformly to an analytic function for $\mathrm{Re}(z)\gt-1$. For $\mathrm{Re}(z)\gt1$, this function is easily seen to be $\zeta(z)$. -By analytic continuation, we get that this function is $\zeta(z)$ for $\mathrm{Re}(z)\gt-1$. For this question, plug in $z=-\frac12$.<|endoftext|> -TITLE: Is there a group with countably many subgroups, but is not countable in ZF? -QUESTION [17 upvotes]: Inspired by this question, although I don't think it was the OP's intention, hence this separate question: Is there a group $G$ with countably many subgroups, but which is not a countable group itself, in $\mathrm{ZF}$? -In $\mathrm{ZFC}$ we can look at the cyclic subgroups of $G$ and "estimate" the number of elements in the group, to conclude that $G$ is countable. But this ends up not going through in $\mathrm{ZF}$ since a countable union of finite sets does not have to be countable; in particular it is known that a countable union of two-element sets does not have to be countable. -So a possible way to construct such an uncountable group (although I am not saying this is a good way to go, I have no idea) is to start with a collection $\{ A_i \mid i \in \mathbb{N} \}$ where the $A_i$ are pairs whose union is not a countable set, and note that every torsion-free cyclic group has two natural generators, so conceivably there could be a torsion-free group in which each $A_i$ is the natural generating pair of a cyclic subgroup (its "$1,-1$", though we could not actually define such a choice on all the $A_i$ without the axiom of choice). Then the construction would have to make sure there are only countably many subgroups (this seems difficult and would take a lot of care). -An interesting paper "On the number of Russell's socks or $2+2+2+\cdots = ?$" -by Horst Herrlich, Eleftherios Tachtsis discusses some of the ideas around countable unions of pairs.
- -REPLY [6 votes]: I believe the answer is yes, as follows: -Start with a model of ZF+atoms, $M$, with a set of atoms $A$ which forms a group isomorphic to $\mathbb{D}/\mathbb{Z}$, where $\mathbb{D}$ is the set of dyadic fractions: $\mathbb{D}=\{{p\over 2^k}: p, k\in\mathbb{Z}\}$. Let $G$ be the group of automorphisms of $A$, and consider the symmetric submodel $N$ of $M$ corresponding to the filter of finite supports on $G$. Then in $N$, $A$ is no longer countable, since there are nontrivial automorphisms of $\mathbb{D}/\mathbb{Z}$ fixing arbitrary finite sets; but the only subgroups of $\mathbb{D}/\mathbb{Z}$, other than the whole thing, are those of the form $$\{x: 2^kx=0\}$$ for some fixed $k\in\mathbb{N}$. This provides an explicit bijection - in the original universe, $M$ - between the subgroups of $A$ and $\omega$. Now, passing to $N$, we get no additional subgroups of $A$, and the map described above is symmetric; so in $N$, $A$ has only countably many subgroups. -Meanwhile, the statements "$A$ is uncountable" and "$A$ has countably many finitely generated subgroups" are each "bounded," so we may apply the Jech-Sochor theorem to push this construction into the ZF-setting.<|endoftext|> -TITLE: Mysterious Inverse Mellin transform using residue theorem -QUESTION [6 upvotes]: The origin of this problem lies in the explanation of the evaluation of the series $\sum_{n\geq1}\frac{\cos(nx)}{n^2}=\frac{x^2}{4}-\frac{\pi x}{2}+\frac{\pi^2}{6}$ -see this link ( -Series $\sum_{n=1}^{\infty}\frac{\cos(nx)}{n^2}$ ) -In the proposed solution a complex integral needs to be evaluated, which is an inverse Mellin transform. This is done using the residue theorem. -Let $Q(s)=-\Gamma(s-2)\zeta(s)\cos(\frac{\pi s}{2})$. -The question is how to evaluate $\int_{\frac{5}{2}-i\infty}^{\frac{5}{2}+i\infty} Q(s)/x^s \, ds$ -The author states that he integrates over the left half-plane; I suppose he uses a semicircle as a contour, which includes the 3 poles, and as $R\rightarrow +\infty$ the integral over the arc vanishes and the part where $\operatorname{Re}(s)<\frac{5}{2}$ is covered. But how can I prove this? I tried to apply Jensen's lemma, which didn't work. What am I missing? - -REPLY [3 votes]: Note that for any $\sigma$, the absolute convergence of the integral -$$\int_{\sigma-i\infty}^{\sigma+i\infty} Q(s)/x^s ds$$ -is guaranteed by Stirling's formula and the bound for the Riemann zeta function in vertical strips, provided that we stay away from the poles of $Q(s)$. This is because we know the growth properties of $\Gamma(s-2)$, $\cos(\pi s/2)$ and $\zeta(s)$ as $\text{Im}(s)\rightarrow \infty$. -In detail, let $s=\sigma+it$, then for $\sigma<0$, as $t\rightarrow \infty$, -$$\zeta(s)\ll (2\pi)^{-\sigma} |t|^{1/2-\sigma}$$ -$$\Gamma(s-2)\sim \exp (-\frac{\pi}{2}|t|)|t|^{\sigma-2-1/2}$$ -$$\cos(\pi s/2)\sim \exp (\frac{\pi}{2}|t|)$$ -So their product is far less than -$$(2\pi)^{-\sigma} |t|^{-2}$$ -After this is justified, we can just construct a box to be our contour, with vertices $5/2-iT$, $5/2+iT$, $-N-iT$, $-N+iT$ for large $T$ and $N$. As long as $|x/2\pi|<1$, the integral goes to $0$ as $\sigma=-N\rightarrow -\infty$.<|endoftext|> -TITLE: Is the function determinant $A \rightarrow \det(A)$ a non-convex function? -QUESTION [5 upvotes]: Is the function $$ \det: A\in \mathbb{M}^{n \times n}(\mathbb{R}) \rightarrow \det (A)$$ a convex function? -I think the answer is no, but I cannot prove it directly using the definition of a convex function. How can I do it? -Thanks for the help!
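(A quick numerical probe of the question; a minimal sketch in Python/NumPy with an arbitrarily chosen pair of diagonal matrices, anticipating the counterexample in the answer below.)

    import numpy as np

    # Test midpoint convexity of det at two singular diagonal matrices.
    A = np.diag([1.0, 0.0])
    B = np.diag([0.0, 1.0])
    M = 0.5 * (A + B)

    lhs = np.linalg.det(M)                                 # 0.25
    rhs = 0.5 * np.linalg.det(A) + 0.5 * np.linalg.det(B)  # 0.0
    print(lhs, rhs, lhs <= rhs)  # convexity needs True; prints False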
- -REPLY [7 votes]: If $n=1$, $\det A=a_{1,1}$, which is trivially a convex function of $A$. -If $n>1$, take two diagonal matrices, with $a_{i,i}=1$ if $i\leq n/2$ and $0$ otherwise, and $b_{i,i}=0$ if $i\leq n/2$ and $1$ otherwise. -Then $\det A=\det B=0$. -However, for $t\in]0,1[$, $\det [tA+(1-t)B] > 0$, hence the function is not convex.<|endoftext|> -TITLE: Showing that $\frac{z^n}{1+z^{2n}}$ is real for $|z|=1$. -QUESTION [5 upvotes]: Can you help me to prove that $$\frac{z^n}{1+z^{2n}}$$ -is a real number, given that $z$ is a complex number with modulus $1$ and $n$ a positive integer, such that $z^{2n}$ is not equal to $-1$. - -REPLY [2 votes]: $$z=a+bi\\\frac{z^n}{1+z^{2n}}=\frac{z^n\cdot \overline{z}^n}{\overline{z}^n+z^{2n}\overline{z}^n}=\frac{1}{z^n+\overline{z}^n}=\frac{1}{(a+bi)^n+(a-bi)^n}$$ -By the binomial theorem all the terms with $i$'s cancel out<|endoftext|> -TITLE: The second integral of the Killing form -QUESTION [5 upvotes]: Let $G$ be a lie group. Assume that $B$ is the Killing form of its Lie algebra $T_{e}G$. So $B$ is counted as a symmetric $2$-form on $G$ by translation. - -Is there a smooth function $f$ on $G$ which Hessian is equal to $B$? - -The Hessian is considered with respect to the $LC$ connection associated to an invariant metric. - -REPLY [2 votes]: This is impossible at least if $G$ is compact and non-abelian (or a product of such group with something else). -On the other hand, if $G=\mathbb R^n$ (or $\mathbb R^n/\mathbb Z^n$), then the Killing form vanishes identically and any constant or linear function satisfies the desired property. -I don't have a complete answer, but let me explain why it fails for these compact groups. -A compact Lie group is a product of a torus with a compact group with negative definite Killing form; this follows from the classification of compact Lie groups. -That is, $G=\mathbb T^n\times K$ for some compact group $K$. -We can restrict our function $f$ to the submanifold $K$ and the key property remains: the Hessian equals the Killing form. -Now we are on a compact group $K$ where the Killing form is negative definite. -There is a non-trivial Lie homomorphism $\phi:S^1\to K$ from the circle group because one can always embed a torus of some dimension in a compact group. -Let $f\in C^2(K)$ be such a function that the Hessian $H(f)$ is negative definite. -(The same argument goes through with positive definiteness as well.) -Therefore -$$ -(f\circ\phi)''(t) -= -H(f)(\phi'(t),\phi'(t)) -< -0 -$$ -for all $t\in S^1$. -But $(f\circ\phi)'$ is a $C^1$ function $S^1\to\mathbb R$, so (after applying the fundamental theorem of calculus around the loop), we have -$$ -\int_{S^1}(f\circ\phi)''(t)dt=0. -$$ -This is in contradiction with $(f\circ\phi)''$ being always negative.<|endoftext|> -TITLE: Characterize kernels of monoid homomorphisms -QUESTION [9 upvotes]: The kernel of a monoid homomorphism $f : M \to M'$ is the submonoid $\{m \in M : f(m)=1\}$. (This should not be confused with the kernel pair, which is often also named the kernel.) -Question. Which submonoids $N$ of a given monoid $M$ arise as the kernel of a monoid homomorphism? (If necessary, let us assume that $M$ is commutative.) -Here is a necessary condition: If $xy \in N$, then $x \in N \Leftrightarrow y \in N$. - -REPLY [5 votes]: A late answer. -Nex's answer is correct, but can be simplified as follows. -Let $N$ be a submonoid of $M$. 
Then there exists a congruence $\sim$ on $M$ such that $N = [1]_\sim$ if and only if, -$$ - \text{(C) for all $x,y \in M$ and for all $u \in N$, $xuy \in N \iff xy\in N$.} -$$ -Condition (C) is necessary since $u \sim 1$ implies $xuy \sim xy$. To prove it is sufficient, consider the syntactic congruence of $N$, which is the congruence $\sim_N$ on $M$ defined by $u \sim_N v$ if and only if, for all $x, y \in M$, -$$ - xuy \in N \iff xvy \in N -$$ -Condition (C) just says that $[1]_{\sim_N} = N$, which concludes the proof. -A side remark. The term kernel should probably be reserved to the kernel category of a monoid homomorphism. See the papers [1, 2, 3] for references and these slides for a quick overview. -[1] S. Margolis and J.-É. Pin, Inverse semigroups and extensions of groups by semilattices, J. of Algebra 110 (1987), 277-297. -[2] B. Tilson, Categories as algebra: an essential ingredient in the theory of monoids, J. Pure Appl. Algebra 48 (1987), no. 1-2, 83--198. -[3] B. Steinberg and B. Tilson, Categories as algebra II, Internat. J. Algebra Comput. 13 (2003), no. 6, 627--703.<|endoftext|> -TITLE: If the derivative of $f$ is never zero, then $f$ is one-to-one -QUESTION [17 upvotes]: This is an exercise from Abbott's second edition of Understanding Analysis. - -Let $f$ be differentiable on an interval $A$. Show that if $f'(x) \neq 0$ on $A$, show that $f$ is one-to-one on $A$. Provide an example to show that the converse statement need not be true. - -Is my solution (below) correct? -Let $x_{1},x_{2} \in A $ such that $x_1 \neq x_2$. Since the function satisfies all the conditions of the mean value theorem, there exists $c \in (x_1,x_2)$ [without loss of generality we consider $x_1 < x_2$] such that $f(x_2) - f(x_1) = (x_2 -x_1) f'(c) \neq 0$ as it is given that $f'(x)$ is nonzero on $A$ and $x_1 \neq x_2$ is our assumption. Therefore $x_1 \neq x_2$ implies $f(x_1) \neq f(x_2)$ for every $x_1,x_2 \in A$. Thus $f$ is one-to one on $A$. -The converse may not be true. Consider $A=[-1,1)$ and $f(x)= x^3$. Therefore $f$ is injective on $A$. Again $f'(x)= 2x^2$. Therefore $f'(0)=0, 0 \in A$. So the converse need not be true. - -REPLY [6 votes]: If the function is differentiable and $f'(x)\neq0$, by the Darboux Theorem it has the intermediate value property. -Suppose by contradiction that there are two real numbers $x_1,x_2 \in A$ such that $f'(x_1)<0$ and $f'(x_2)>0$. This implies the existence of $x_3\in A$ s.t. $f'(x_3)=0$, contradicting the hypothesis $f'(x)\neq 0$. -Because the derivative $f'$ is either strictly positive or strictly negative on $A$, $f$ is monotonic and hence injective.<|endoftext|> -TITLE: Algebraic topology, proving two spaces aren't homeomorphic -QUESTION [5 upvotes]: I need help on how to approach a problem of this kind, I'm given two topological spaces: -$$X=\mathbb{R}^2-\{(n,0)|n\in\mathbb{N}\}\text{ and }Y=\mathbb{R}^2-\{(\frac{1}{n},0)|n\in\mathbb{N}\}$$ -I want to show that they're not the same but I have no clue where to start, one way could be showing that $\pi_1(X)\neq\pi_1(Y)$, but I don't know how to compute those groups. - -REPLY [4 votes]: In $Y$, there exists a point such that no neighborhood of the point is simply connected. 
Since $X$ does not have this property, they cannot be homeomorphic.<|endoftext|> -TITLE: Every sixth polynomial shares a factor of $(a^2-6)$ -QUESTION [12 upvotes]: I currently looking at the polynomials you get from the series expansion of -$$ -\frac{1-x^2}{1-ax+2x^2}=1+a x - +(a^2-3) x^2 - +(a^3-5a) x^3 - +\underbrace{(a^4-7a^2+6)}_{(a-1)(a+1)(a^2-6)} x^4+\dots -$$ -W|A helped here... -What I found is that from $x^{4}$ onwards, every sixth polynomial shares a factor of $(a^2-6)$. How to prove that? - -REPLY [9 votes]: I will prove what you want kind of indirectly using Chebyshev polynomials of second kind. Actually once you formulate the question in terms of these polynomials this mysterious $a^2-6$ becomes quite meaningful. Let me start by The generating function of Chebyshev polynomial of second kind, $U_n$ (which you can take as the definition of these polynomials): -$$ -\frac{1}{1-2ax+x^2}=\sum_{n=0}^\infty U_n(a)x^n -$$ -So your function which I will call $I(x,a)$ is nothing but -$$ -I(x,a):=\frac{1-x^2}{1-ax +2x^2}\xrightarrow{t=\sqrt{2}x}\left[1-\frac{t^2}{2}\right]\sum_{n=0}^\infty U_n\left(\frac{a}{\sqrt{8}}\right)t^n$$ -We are only interested in the value of this function when $a=\pm \sqrt{6}$. But then $\frac{a}{2\sqrt{2}}=\cos \pi/6$ or $=\cos(\pi-\pi/6)$ (You can already see where every sixth term pattern is coming from). Then your function becomes (I defined $X:=a/\sqrt{8}$) -$$ -I(X,t):=U_0(X) + tU_1(X)+ -t^2\sum_{n=0}^\infty \left[U_{n+2}(X)-\frac{1}{2}U_n(X)\right]t^n -$$ -We are only interested in $n=6k+2$ and want to prove that -$$ -2U_{4+6k}\left(\cos \frac{\pi}{6}\right)= -U_{2+6k}\left(\cos \frac{\pi}{6}\right) -$$ -Now using yet another property of Chebyshev polynomial of second kind, in general if $z=\cos \psi$, then -$$ -U_m(z) = \frac{\sin((m+1)\psi)}{\sin \psi} -$$ -In our case then one can easily show with identity above that $ -U_{4+6k}\left(\cos \frac{\pi}{6}\right)= -(-1)^k -$ and $ -U_{2+6k}\left(\cos \frac{\pi}{6}\right)=2(-1)^k -$. This show that $a=+\sqrt{6}$ is a root for every sixth polynomial after $x^4$ in the expansion of $I(x,a)$. Similarly you can check the $a=-\sqrt{6}$ is also a root. -These identities about Chebyshev polynomials are surprisingly easy to prove actually. For a fast reference about Chebyshev polynomials, look here, or wikipedia.<|endoftext|> -TITLE: Show $R$ is right-Artinian but not left-Artinian -QUESTION [7 upvotes]: Let $R$ be the subring of $M_2(\mathbb{R})$ defined by -$$R=\left\{\begin{pmatrix}a&b\\0&d\end{pmatrix}\mid a\in\mathbb{Q},b,d\in\mathbb{R}\right\}$$ -Show that $R$ is right-Artinian but not left-Artinian. -The "obvious" approach to show it is not left-Artinian is to find an $A\in R$ such that -$$RA\supset R(A^2)\supset R(A^3)\supset\dots$$ -is an infinite decreasing chain of proper left submodules of $R$ (as a module over itself). However, after spending a lot of time in this endeavor I've come to the conclusion that no such element $A$ exists (I can post a proof if anybody is interested but it'd be long and not worth the time for me otherwise). Does anybody have an idea for another approach I can take to this? I have no idea what else submodules of $R$ would look like. - -REPLY [7 votes]: Bad news -It is impossible to prove it with the strategy you chose: in fact $R$ is a perfect ring, and it is known that perfect rings satisfy the DCC on principal ideals. Your descending chain of ideals will therefore have to be more sophisticated. 
-Triangular rings -The ideal structure of this type of construction is explained in both of the following posts, but the second one is a slightly better fit: -Is there a nice way to classify the ideals of the ring of lower triangular matrices? -Left and right ideals of $R=\left\{\bigl(\begin{smallmatrix}a&b\\0&c \end{smallmatrix}\bigr) : a\in\mathbb Z, \ b,c\in\mathbb Q\right\}$ -Following the description in the latter post, we think of the ring as $\Bbb Q\times \Bbb R\times \Bbb R$ with funny multiplication. -$J(R)$ is clearly $\begin{bmatrix}0&\Bbb R\\0&0\end{bmatrix}$ since this set is a nilpotent ideal and the quotient by this set is clearly ring isomorphic to $\Bbb Q\times \Bbb R$. -Homing in on where the chain is -Of course, basic module theory says that $R$ is left Artinian iff $R/J(R)$ and $J(R)$ are left Artinian $R$ modules. But $R/J(R)$ is just a direct sum of two simple left $R$ modules, so it is clearly Artinian. Therefore it's clear we just need to prove $J(R)$ is not a left Artinian $R$ module. -Following this line of thought, we get the idea to look for an infinite descending chain in $\{0\}\times \Bbb R\times \{0\}$. -Apply the result on ideal structure -$\Bbb R$ is an infinite dimensional $\Bbb Q$ module, so we can find an infinite descending chain of $\Bbb Q$ submodules: index them $N_1\supset N_2\supset\ldots$. Reading from the second link, we need to make sure $MJ_2$ is a subset of our $J_1$, so to make things easy we'll just let $J_2=\{0\}$ so that this is satisfied. (The only other choice is $J_2=\Bbb R$, and that won't do.) -Now if we set $N'_i=\{0\}\times N_i\times\{0\}$, we clearly have an infinite descending sequence of $\Bbb Q$ submodules of $\Bbb Q\times \Bbb R$ containing $MJ_2=\{0\}\times\{0\}\times\{0\}$. Thus $N'_i$ furnishes an infinite descending chain of left ideals of $R$. -You could also make an infinite ascending chain of left ideals in the same way, showing it's not left Noetherian.<|endoftext|> -TITLE: Is every Banach space densely embedded in a Hilbert space? -QUESTION [23 upvotes]: Can every Banach space be densely embedded in a Hilbert space? This is clear if the Banach space is actually a Hilbert space, but much can you relax this? -If the embedding exists, is the target Hilbert space unique? - -REPLY [2 votes]: Suppose that there exists an injective, bounded linear operator $T\colon X\to H$, where $H$ is a Hilbert space. Then $\|x\|_{\rm new}=\|x\|+\|Tx\|$ ($x\in X$) defines an equivalent, strictly convex norm on $X$. -Not every Banach spaces admits a strictly convex renorming -- just to mention $\ell_\infty(\omega_1)$ (Day) or $\ell_\infty / c_0$ (Bourgain).<|endoftext|> -TITLE: I found only one critical point using Lagrange multipliers. Must it be a minimizer? -QUESTION [9 upvotes]: I am trying to minimize -$$V(x,y,z) = \frac {a^2b^2c^2}{6xyz}$$ -subject to -$$\frac {x^2}{a^2} + \frac{y^2}{b^2} + \frac {z^2}{c^2} = 1$$ -and for $x,y,z>0$. -I found one critical point; evaluating it gives a function value of -$$V= \frac {\sqrt{3}}{2}|abc|$$ -This agrees with the solution that I am looking at, but why does it have to be a minimum? -Thanks, - -REPLY [4 votes]: When you approach the boundary (that is, when $x \searrow 0$, $y \searrow 0$ or $z \searrow 0$), the function $V$ goes to infinity. Thus you can assume that - -$V$ admits no maximum on your domain considered -If $V$ admits a minimum, it won't be within a $\varepsilon$-range of the boundary. 
- -So you can add the conditions $x \ge \varepsilon, y \ge \varepsilon$ and $z \ge \varepsilon$ for $\varepsilon > 0$ small enough. This makes your new domain of definition compact, thus the function attains a minimum and a maximum on this new domain. However, since our function goes to infinity as $x \searrow 0$ (resp. $y$ and $z$), we can assume that the maximum is attained if and only if either $x,y$ or $z = \varepsilon$ ; in other words, that the values on the boundary are large, hence never minimal. So your minimum cannot be in the boundary, thus must be in the interior. If you found only one critical point in the interior, it has to be a minimum since the minimum exists and is a critical point. -Hope that helps,<|endoftext|> -TITLE: Prove $E((X+Y)^p)\leq 2^p (E(X^p)+E(Y^p))$ for nonnegative random variables $X,Y$ and $p\ge0$ -QUESTION [5 upvotes]: Suppose $X \geq 0$ and $Y \geq 0$ are random variables and that $p\geq 0$ - -Prove -$$E((X+Y)^p)\leq 2^p (E(X^p)+E(Y^p))$$ - -Proof - -Since $(X+Y)^p \leq (2 \> \max\{X,Y\})^p=2^p \> \max \{X^p,Y^p\}\leq 2^p(X^p+Y^p)$ $ \implies E((X+Y)^p)\leq 2^p (E(X^p)+E(Y^p))$ - -If $p>1$ the factor $2^p$ may be replaced by $2^{p-1}$ -If $0 \leq p \leq 1$ the factor $2^p$ can be replaced by $1$ - -Need help with part 2 and 3 any suggestions - -REPLY [4 votes]: Let $f(x)=x^p$ -By convexity $$f(\frac{X+Y}{2})\leq \frac{1}{2} f(X)+\frac{1}{2} f(Y) \implies \frac{(X+Y)^p}{2^p}\leq\frac{1}{2} X^p+\frac{1}{2}Y^p$$ -The result follows.<|endoftext|> -TITLE: 101102103104105106..............149150? What is the remainder when divided by 9? -QUESTION [13 upvotes]: 101102103104105106..............149150. What is the remainder when divided by 9? - -My Approach -I think the remainder will be $0$. -Because I think all numbers are in multiplication and if I divide $108/9=0$ remainder, and thus multiplication of all these numbers will be $0$. -Can anyone guide me? Is my approach correct? - -REPLY [5 votes]: Yet another possible shortcut is to notice that the sum of any 9 consecutive numbers is divisible by 9. Intuitively, one of the 9 numbers must be a multiple of 9, and the rest form symmetrical pairs around it whose mod 9 remainders "cancel out". Or, see Prove the sum of any $n$ consecutive numbers is divisible by $n$ (when $n$ is odd). for a more formal proof. -Since the mod 9 remainder of the given number is the same as that of the sum of the 50 consecutive numbers 101 + 102 + ... +150, 5 groups of 9 consecutive numbers can be dropped from that sum as having mod 9 remainders of 0. Choosing to drop the last 45 numbers leaves 101 + 102 + 103 + 104 + 105 which has a mod 9 remainder of 2. -[ EDIT ] To elaborate the "mod 9 remainder of the given number is the same as that of the sum of the 50 consecutive numbers 101 + 102 + ... +150". -$10 ≡ 1\;(mod\;9)$ thus $10^2 = 10 * 10 ≡ 1 * 1 = 1\;(mod\;9)$ and by induction $10^n ≡ 1\;(mod\;9)$ for all $n >= 0$. The given number is $101 * 10^{148} + 102 * 10^{145} + ... + 149 * 10^3 + 150$, and since all factors $10^k ≡ 1\;(mod\;9)$, the sum simplifies $(mod\;9)$ to $101 + 102 + ... + 150$.<|endoftext|> -TITLE: Spectrum of $\mathbb{Z}^\mathbb{N}$ -QUESTION [16 upvotes]: Is anything known about the spectrum of $\mathbb{Z}^{\mathbb{N}}$? Notice that the fiber of $\mathrm{Spec}(\mathbb{Z}^{\mathbb{N}}) \to \mathrm{Spec}(\mathbb{Z})$ at a non-zero prime ideal $(p)$ is the spectrum of $\mathbb{Z}^{\mathbb{N}}/(p) \cong \mathbb{F}_p^{\mathbb{N}}$, which corresponds$^1$ to the set of ultrafilters on $\mathbb{N}$. 
Thus we only have to look at the generic fiber, which is the spectrum of the $\mathbb{Q}$-algebra $\mathbb{Z}^{\mathbb{N}} \otimes_{\mathbb{Z}} \mathbb{Q}$. This is isomorphic to a subalgebra of $\mathbb{Q}^{\mathbb{N}}$ consisting of those sequences of rational numbers whose denominators are bounded (with respect to suitable (not any) representations as fractions). In particular, every ultrafilter on $\mathbb{N}$ induces a prime ideal of $\mathbb{Z}^{\mathbb{N}} \otimes_{\mathbb{Z}} \mathbb{Q}$, but not everyone arises like this. -$^1$If $(F_i)_{i \in I}$ is family of fields, then there is a bijection between the (ultra)filters on $I$ and the (prime) ideals of $\prod_{i \in I} F_i$. It maps a filter $\mathcal{U}$ to the ideal $\{x \in \prod_{i \in I} F_i : \{i \in I : x_i = 0\} \in \mathcal{U}\}$. - -REPLY [2 votes]: Here is a comment about the first bit of Eric Wofsey's answer, namely the natural map $X \to \beta \mathbb{N}$. $\beta \mathbb{N}$ is, in a precise sense, the "space of connected components" of $X$, as follows. There is a functor $\text{CRing} \to \text{Set}$ sending a commutative ring $R$ to the set of idempotents in $R$. This functor naturally lifts to Boolean rings: the set of idempotents acquires a Boolean ring structure via the usual multiplication with modified addition -$$a +' b = a + b - 2ab.$$ -(This reduces to the usual addition if $R$ has characteristic $2$ but not in general.) I will call this ring $B(R)$. The spectrum of $B(R)$ in the sense of Stone duality is a Stone space called the Pierce spectrum of $R$, and so taking Pierce spectra gives a functor from affine schemes to Stone spaces. -Given any prime ideal $P$ of $R$, the homomorphism $R \to R/P$ induces a homomorphism $B(R) \to B(R/P)$, but since $R/P$ is an integral domain, $B(R/P) \cong \mathbb{F}_2$, and hence we get a homomorphism $B(R) \to \mathbb{F}_2$, or equivalently a point in the Pierce spectrum. This gives a natural continuous map -$$\text{Spec } R \to \text{Spec } B(R).$$ -Now, why is it natural to think of $\text{Spec } B(R)$ as the space of connected components? The functor $B(R)$ is representable by the free commutative ring on an idempotent, namely -$$\mathbb{Z}[x]/(x^2 - x) \cong \mathbb{Z} \times \mathbb{Z}.$$ -The corresponding affine scheme is "two points" $\text{Spec } \mathbb{Z} \coprod \text{Spec } \mathbb{Z}$. This is naturally a Boolean ring object in the category of affine schemes (true more generally for the coproduct $1 \coprod 1$ of two copies of the terminal object in any distributive category), and so maps into it naturally form a Boolean ring, which is just $B(R)$. The analogous construction for topological spaces produces for each space $X$ a natural Stone space $S(X)$ and a natural continuous map $X \to S(X)$ whose fibers (I think) are connected. Maybe it is the left adjoint to the inclusion of Stone spaces into spaces? -From here it's not hard to see that the Pierce spectrum of any infinite product $\prod_{i \in \mathbb{N}} D_i$ of connected rings is $\beta \mathbb{N}$.<|endoftext|> -TITLE: Calculating flux through a moving surface in a vector field that evolves with time -QUESTION [6 upvotes]: Suppose we are given a vector field $\mathbf{F} \colon \mathbb{R}^4 \to \mathbb{R}^3$ that evolves with time and describes the way, say, liquid particles move in a tank. 
Also, we are given a parametric surface $\mathscr{S}$ that is parameterized by -$$\mathbf{r}(u, v, t) = x(u, v, t)\mathbf{i} + y(u, v, t)\mathbf{j} + z(u, v, t)\mathbf{k}.$$ -(Note that the surfaces evolves with time too; it "moves".) Moreover, let the (spatial) domain of $\mathbf{r}$ be $A$. -Now the question I want to be able to answer is: how to compute flux through an area element taking motion of both the surface and the vector field into account? -See what I tried: -(1) First I need to compute the unit normal to the surface at a particular instant: -$$\mathbf{\hat{N}}(u, v, t) = \frac{\frac{\partial}{\partial u}\mathbf{r}(u, v, t) \times \frac{\partial}{\partial v}\mathbf{r}(u, v, t)}{\Big| \frac{\partial}{\partial u}\mathbf{r}(u, v, t) \times \frac{\partial}{\partial v}\mathbf{r}(u, v, t)\Big|}.$$ -(2) Taking motion into account, we have that the surface element at time $t$ passes the space at rate -$$\Big\langle \mathbf{\hat{N}}(u, v, t), \frac{\partial}{\partial t}\mathbf{r}(u, v, t) \Big\rangle.$$ -(3) Now all we do at each area element is to take the difference of the two vectors: one vector from the vector field and the other one is from the motion of the surface. For example, if at some instant an area element moves with the same speed and direction as the vector implied by the vector field, the flux should be zero. Putting all together, I have the following: -$$ -\begin{align} -\Phi(t) &= \iint_{\mathscr{S}} \Bigg\langle \mathbf{F}, \mathbf{\hat{N}}\Bigg\rangle - \Bigg\langle \frac{\partial}{\partial t}\mathbf{r}, \mathbf{\hat{N}}\Bigg\rangle\, \mathrm{d}S \\ - &= \iint_{\mathscr{S}} \Bigg\langle \mathbf{F} - \frac{\partial}{\partial t}\mathbf{r}, \mathbf{\hat{N}} \Bigg\rangle\, \mathrm{d}S \\ - &= \iint_{A} \Bigg\langle \mathbf{F} - \frac{\partial}{\partial t}\mathbf{r}, \mathbf{\hat{N}} \Bigg\rangle \Bigg| \frac{\partial}{\partial u}\mathbf{r} \times \frac{\partial}{\partial v}\mathbf{r} \Bigg| \, \mathrm{d}u \, \mathrm{d}v \\ - &= \iint_{A} \Bigg\langle \mathbf{F}(\mathbf{r}(u, v, t), t) - \frac{\partial}{\partial t} \mathbf{r}(u, v, t), \frac{\partial}{\partial u} \mathbf{r}(u, v, t) \times \frac{\partial}{\partial v}\mathbf{r}(u, v, t) \Bigg\rangle \, \mathrm{d}u \, \mathrm{d}v. -\end{align} -$$ -Is the above calculation correct? If yes, than the volume through the surface $\mathscr{S}$ within time range $[a, b]$ is simply $\int_a^b \Phi(t) \, \mathrm{d}t$? - -REPLY [3 votes]: Yes, this the calculation is correct. -In a constant density setting (such as a liquid), the flux you calculate is the (signed) amount of stuff that goes through the surface in an infinitesimal time interval. -Physical intuition dictates that these things must happen: - -If the surface does not move, this is just the usual flux. -If the surface moves in a direction tangential to itself (like a sphere rotating but not moving), this should cause no flux. -If the surface moves with the fluid flow ($\vec F=\partial_t\vec r$), then the flux should be zero. (Think of an impenetrable plastic bag moving in water. There is no flux through it.) -If the surface is a disc of area $A$ that moves without any deformations at a constant speed, it wipes and area $A|\hat N\cdot\vec v|T$ in time $T$, as you can easily calculate from elementary geometry. -If there is no flow ($\vec F\equiv0$), this should be the time integral of the flux, up to sign. -Suppose the surface is $\mathscr S_t=\partial B(0,t)$ at any time $t>0$. -This corresponds to $\partial_t\vec r=\hat N$. -(See a remark below about spheres.) 
-In the absence of any flow, the integral of the flux from $t_0$ to $t_1$ should be the difference of the volumes of the balls $B(0,t_1)$ and $B(0,t_0)$. -(The sign depends on which way you orient your surface.) -The integral can be evaluated explicitly in this case. - -And indeed, these hold true. -When doing such "physical calculus problems", I strongly suggest making sure that the solution exhibits physically correct behavior. -I did not only check these criteria. -I also went through your reasoning, and you have explained your solution well. -Additional remarks: - -Sometimes you cannot parametrize a surface in the way you propose. -For example, you cannot parametrize a sphere by a planar set. -The parametrization can be done up to a small set (which doesn't matter for the integrals) or you can remove double counting if you parametrize some part many times, so you can still do as you did. -You just need to be a bit more careful. -If density is not constant, you need to include it in your integral. -This might be relevant for a gas.<|endoftext|> -TITLE: Can't EF game theory be applied to finite languages WITH function symbols? -QUESTION [6 upvotes]: Let $\mathcal{M}$ and $\mathcal{N}$ be two structures in a language $\mathcal{L}$. We define the finite determined game $G_n(\mathcal{M},\mathcal{N})$ as a game with $n$ rounds where in each round player I selects an element from $M \cup N$ (lets say $a\in M$) and player II responds with another element in the other structure (lets say $b\in N$). Player II wins a game if the partial map that takes elements in $M$ selected by some player to the element in $N$ either responded by player II (if it was player I who chose the element in $M$) or selected by player I (if it was player two who responded with said element in $M$) is a partial embedding. This is called an Ehrenfeucht-Fraïssé (EF) game. - -A remarkable theorem states that if $\mathcal{L}$ is finite and has no constant symbols then $\mathcal{M}\equiv\mathcal{N}$ iff player II has a winning strategy in $G_n(\mathcal{M},\mathcal{N})$ for all $n$. - -Now the way I know of proving this is by showing that player II has a winning strategy for $G_n(\mathcal{M},\mathcal{N})$ is equivalent to $\mathcal{M}$ and $\mathcal{N}$ being elementarily equivalent up to quantifier rank $n$. In such proof the neccesity of having a finite language without function symbols is stated. -Still, any finite language $\mathcal{L}$ can be changed to another finite language $\mathcal{L}^*$ where every function symbols or arity $n$ is changed for a relation symbol of arity $n+1$. For every strcutre $\mathcal{M}$ in $\mathcal{L}$ there is an analogous one $\mathcal{M}^*$ in $\mathcal{L}^*$ where the relations substitute the functions. They are analogous in the sense that for every sentence $\varphi$ in $\mathcal{L}$ there is another sentence $\psi$ in $\mathcal{L}^*$ such that $\varphi$ holds in $\mathcal{M}$ iff $\psi$ holds in $\mathcal{M}^*$ (and vice versa). Now under this alteration a partial partial map between structures in $\mathcal{L}$ is a $\mathcal{L}$-partial embedding iff it is a $\mathcal{L}^*$-partial embedding between the analogue structures in $\mathcal{L}^*$. In $\mathcal{L}^*$ the above theorem can be applied, which leads to my question: - -Does not the above theorem still apply to the case of a finite language with function symbols? - -EDIT: To make things clear, I define partial embedding considering a function simply as another relation. 
Namely, given $\mathcal{M}$ and $\mathcal{N}$ two $\mathcal{L}$-structures, $j:\text{dom}(j)\rightarrow N$, where $\text{dom}(j)\subset M$ is a partial embedding from $\mathcal{M}$ to $\mathcal{N}$ if the graph of $j$ union $\{(c^\mathcal{M}, c^\mathcal{M})\}$ such that $c$ is a constant symbol in $\mathcal{L}$ gives the graph of an injective function (which we also call $j$) that satisfies -$(1)$ For any $n$-ary relation symbol $R$ in $\mathcal{L}$ and $n$-ary vector $\bar{x}$ in $\text{dom}(j)$, $\bar{x}\in R^\mathcal{M}$ if and only if $j(\bar{x})\in R^\mathcal{N}$. -$(2.)$ For any $n$-ary function symbol $f$ in $\mathcal{L}$ and $n$-ary vector $\bar{x}$ in $\text{dom}(j)$ and any $y\in\text{dom}(j)$, $y= f^\mathcal{M}(\bar{x})$ if and only if $j(y)= f^\mathcal{N}(j(\bar{x}))$. - -REPLY [3 votes]: Let's start with a few definitions. -Definition 1: Let $\mathcal{A}$ be a structure with domain $A$ and $B \subseteq A$. We define $\langle B \rangle$, the substructure of $\mathcal{A}$ generated by $B$, to be the smallest substructure of $\mathcal{A}$ containing $B$. -Definition 2: Let $G_n(\mathcal{A}, \mathcal{B})$ be defined as in the question. A play is a pair $\langle \bar{a}, \bar{b} \rangle$, such that $\bar{a} \in A$ and $\bar{b} \in B$, such that each element from $\bar{a}$ and $\bar{b}$ has been chosen by a player. -Definition 3: A play counts as a win for Player II if there's an embedding $f: \langle \bar{a} \to \bar{b}\rangle$ such that $f(\bar{a}) = \bar{b}$. -These are standard definitions generalizing the EF game to structures with functional symbols; cf., e.g., Hodges's Model Theory, p. 95, or Poizat's A Course in Model Theory, p. 35; the latter presents his definition in terms of local isomorphisms, i.e. partial embeddings, but the translation to the game-theoretic context is immediate. When dealing with function symbols, it's necessary to take the substructures generated by $\bar{a}$ and $\bar{b}$, otherwise the notion of an embedding is not well defined. For instance, consider two structures, $\mathcal{N} = (\omega, S)$ and $\mathcal{M} = (M, S')$, where $\mathcal{N}$ is the standard natural numbers with the successor function and $\mathcal{M}$ is a non-standard elementary end-extension of $\mathcal{N}$. Consider the play of $G_2(\mathcal{N}, \mathcal{M}$ $\langle \langle 2, 4\rangle, \langle 3, 5 \rangle$. In order to determine if this play is a partial embedding $h$, we should check if, e.g., $h(S(2)) = S'(h(2))$. But neither $S(2)$ nor $S'(h(2)) = S'(3)$ belong to the domain or image of $h$, so how can we evaluate this? -Notice that, in the case in which a language contains only relation symbols, the substructure generated by, e.g., $\bar{a}$, will just be the set of all elements from the tuple, so that this is indeed a generalization (it covers the relational case as a particular case). -Given this, it's not difficult to show that, if Player II has a winning strategy for $G_n(\mathcal{A}, \mathcal{B})$ for all $n$, then $\mathcal{A} \equiv \mathcal{B}$. On the other hand, the converse is true only for finite languages containing only relational symbols. Observe that both conditions are essential. I'll give two counter-examples, one showing that finiteness is essential, the other showing that the restriction to relational languages is also essential. Considering this pircture, a natural question to ask is: are there ways of recovering the theorem for finite functional languages? The answer is yes. 
In this answer, I'll therefore proceed in the following way: first, I'll present the counterexamples to the theorem in the case of either functional languages or else infinite relational languages. Afterwards, I'll show that a natural way of recovering the theorem (the one sketched in your question) works given a modified way of ranking the formulas of the language (this was mentioned by Andreas Blass in his comment). - -Example 1: The theorem fails for finite languages with function symbols -Let $\mathcal{N} = (\omega, 0, S)$ and $\mathcal{M} = (M, 0, S)$, where $\mathcal{M}$ is an elementary end-extension of $\mathcal{N}$, i.e. $\omega \subseteq M$ and $\mathcal{N} \preceq \mathcal{M}$; $S$ is the usual successor function. Since $\mathcal{M}$ is an end-extension, it will contain a non-standard element, $c$. - -Claim: The only substructure of $\mathcal{N}$ is $\mathcal{N}$ itself. - -This is obvious from the fact that $\omega$ is the least set containing $0$ and closed under the successor function. - -Claim: Any substructure of $\mathcal{M}$ will contain $\omega$ and, if it contains $c$, also any upper section of the "cut" determined by $c$. - -This is also obvious from the fact that any substructure of $\mathcal{M}$ will need to be closed under the successor function. - -Claim: If $h$ is an embedding between $\mathcal{N}$ and $\mathcal{M}$, and $c$ is a non-standard element, then $c$ won't be in the in the range of $h$. - -This is clear from the fact that, for any $n$, there's a term denoting $n$, namely $S(S(S(\dots 0) \dots ))$ (the $S$ applied $n$ times to $0$). Since any embedding will take $0$ to $0$, it will take $n$ to $n$, and, thus, it won't take $n$ to $c$ (otherwise $c$ would be reached from $0$ by a finite application of the successor function, which can't happen). -Therefore: - -Claim: Player I can win $G_1(\mathcal{N}, \mathcal{M})$ by choosing a non-standard number $c$ on the first round. - -If Player I chooses $c$, then Player II must choose $n \in \omega$. But then, consider the substructures $\langle n \rangle = \mathcal{N}$ and $\langle c \rangle = \mathcal{M}'$. As we have just seen, there any embedding $h: \mathcal{N} \to \mathcal{M}'$ would have to send $n$ to $n$. Thus, there isn't any embedding $h$ such that $h(n) = c$, whence Player II loses the game. -A couple of observations: notice that the fact that $S$ is a function plays a crucial role in the proof. First, in determining the substructures generated by the elements chosen in the play. Second, in excluding the possibility that $h(n) = c$. If we replace the successor function by its graph, then (a) the substructures generated by $n$ and $c$ would just be their singletons, so Player II would win this particular play; and (b) Player II will be able to mimic Player I's moves by always choosing pairs which are or aren't in the successor relation, a much less restrictive condition than the one above. - -Example 2: The theorem fails for infinite relational languages -I took this one from Monk's Mathematical Logic, exercise 26.47. -Let $\mathcal{A} = (\omega, R_i)_{i \in \omega}$ and $\mathcal{B} = (\omega, S_i)_{i \in \omega}$, where each $R_i$ and $S_i$ is an unary relation symbol. Define $R_0 = S_0 = \omega$ and $R_{i+1} = R_i \setminus \{i+1\}$ and $S_{i+1} = S_i \setminus \{i\}$. Thus, $R_1 = \omega \setminus \{1\}$, $S_1 = \omega \setminus \{0\}$, etc. 
I used different symbols to distinguish one structure from another, but we may assume they share a common language (say $\mathcal{L} = \{P_i\}_{i \in \omega}$, such that $P_i$ is interpreted as $R_i$ in $\mathcal{A}$ and $S_i$ in $\mathcal{B}$; I'll continue to use $R_i$ and $S_i$ throughout, though). - -Claim: $\mathcal{A} \equiv \mathcal{B}$. - -This should be clear enough: as there aren't any constant symbols, the only sentences will be quantifications over the $R_i$s and $S_i$'s and boolean combinations thereof. As each $R_i$ will be non-empty iff the corresponding $S_i$ is non-empty, it follows that the structures are elementary equivalent. - -Claim: Player I has a winning strategy for $G_1(\mathcal{A}, \mathcal{B})$. - -Player I must choose $0 \in A$ in the first round. Then $\mathcal{A} \models R_i x [0]$ for any $i \in \omega$, but there isn't any element from $B$ with the same property (say Player II chooses $n$; then $\mathcal{B} \not \models S_{n+1} x [n]$). As any partial embedding must preserve atomic formulas, Player II loses. -It's clear that the above example wouldn't work for structures with only finitely many relations, as Player II would always be able to choose a suitably high $n$. - -Given the above, one natural way of trying to recovering the theorem is by modifying the definition of partial embedding in order to treat functions as relations, as you put it. This gives us condition (2) of your question. Using this, is it possible to prove the theorem for finite languages with function symbols? The answer is yes, as I'll show below, once you adopt a modified rank for formulas and terms (I took this idea from Ebbinghaus, Flum & Thomas, Mathematical Logic, chapter 12, exercise 3.15; the whole chapter is a nice exposition of Fraïssé's theorem, by the way). To see that this modification is necessary, consider the following example: -Let $\mathcal{A} = (\{a, b\}, f)$ and $\mathcal{B} = (\{c, d, e\}, f')$ with: -\begin{align*} -f: a &\mapsto b\\ -b &\mapsto b -\end{align*} -and -\begin{align*} -f': c &\mapsto d\\ -d &\mapsto d\\ -e &\mapsto c -\end{align*} -Then Player II can always win the one round game. If (i) Player I plays $a$, Player II plays $c$, if (ii) Player I plays $b$, Player II plays $d$, and similarly with the reverse positions. Finally, if (iii )Player I plays $e$, Player II plays $a$. It seems to me that the embedding $j$ thus generated will always satisfy your condition (2): in case of (i), $a$ is not the image of anyone under $f$, and neither is $c$; (ii), the only element in $\mathrm{dom}(j)$ is $b$ itself, and $b = f(b)$ iff $d = h(b) = f(h(b)) = f(d)$. Case (iii) is analogous to (i). -However, $\mathcal{B} \models \exists x (f(f(x)) \not = f(x))$ (consider $f'(f'(e)) = d$, but $f'(e) = c$), but $\mathcal{A} \not \models \exists x (f(f(x)) \not = f(x))$ ($f(f(a)) = b = f(a)$ and $f(f(b)) = b = f(b)$). Therefore, Player II has a winning strategy for $G_1(\mathcal{A}, \mathcal{B})$, but $\mathcal{A} \not \equiv_1 \mathcal{B}$, as claimed. - -Recovering the theorem -Let $\mathcal{L}$ be finite language, possibly with function symbols, and let $\mathcal{A}, \mathcal{B}$ be two $\mathcal{L}$-structures. Then: - -Theorem: $\mathcal{A} \equiv \mathcal{B}$ iff, for every $n$, Player II has a winning strategy for the game $G_n(\mathcal{A}, \mathcal{B})$. - -I won't prove the theorem in full, but merely note the modifications necessary in order to prove it (the rest is routine). 
As I said above, the key idea is to employ a modified criterion for ranking formulas and terms, $\mathrm{mrk}$. This rank is defined recursively as follows: -\begin{align*} -\mathrm{mrk}(x) &= 0,\\ -\mathrm{mrk}(c) &=1,\\ -\mathrm{mrk}(f(t_1, \dots, t_n)) &= 1+ \mathrm{mrk}(t_1) + \dots + \mathrm{mrk}(t_n),\\ -\mathrm{mrk}(t_1 = t_2) &= \mathrm{max}(0, \mathrm{mrk}(t_1) + \mathrm{mrk}(t_2) - 1), \\ -\mathrm{mrk}(\neg \phi) &= \mathrm{mrk}(\phi),\\ -\mathrm{mrk}(\phi \wedge \psi) &= \mathrm{max}(\mathrm{mrk}(\phi), \mathrm{mrk}(\psi)),\\ -\mathrm{mrk}(\exists x \phi) &= 1 + \mathrm{mrk}(\phi). -\end{align*} -Notice that the above is a generalization of quantifier rank for the relational case; that is, if $\mathcal{L}$ is relational, then $\mathrm{mrk}(\phi) = \mathrm{qr}(\phi)$. -The idea is now to adapt the proof of the finite relational case to the finite general case by employing this modified rank definition. This is mostly bookkeeping in order to show that the lemmas still go through once we make the necessary adaptations. I'll show how to do this for the crucial lemma 2.4.8 in Marker (p. 54): - -Lemma: For each $n$ and $l$, there is a finite list of formulas $\phi_1, \dots, \phi_k$ of $\mathrm{mrk}$ at most $n$ in free variables $x_1, \dots, x_l$ such that every formula of depth at most $n$ in free variables $x_1, \dots, x_l$ is equivalent to some $\phi_i$. - -The proof is exactly the same as in Marker, that is, an induction on $n$ which uses the disjunctive normal form theorem. The only thing we need to check is the base case, i.e. $n=0$. Given the original definition of $\mathrm{qr}$, this was problematic because just one function symbol generates infinitely many, non-equivalent formulas of $\mathrm{qr} = 0$ (consider, e.g., equalities involving iterations of the successor function). However, given the new definition, this is no longer a problem: for each constant symbol $c$, function symbol $f$ and relation symbol $R$, the formulas of $\mathrm{mrk}=0$ will be of the form $c=x$, $f(x_1, \dots, x_m)=x_{m+1}$, and $R(x_1, \dots, x_m)$. It's obvious that, up to changes in the variables, there will be only finitely many of those. The induction step then follows exactly as in Marker's proof. - -Comments -So, what's happening here? Why the modified rank works? The idea is that, in order to translate formulas with function symbols into a relational setting, you must first "unnest" the formulas. An unnested formula is one of the form: -\begin{align*} -x &= y\\ -c&=y\\ -f(x_1, \dots, x_n) &= y\\ -R(x_1, \dots, x_n)& -\end{align*} -In fact, there is the following theorem, which I'll state without proof: - -Theorem: Every formula $\phi$ is equivalent to a formula $\phi^*$ with only unnested subformulas. - -The rough idea of the proof is the following: suppose you have a formula of the type $f(t_1, \dots, t_n) = t_{n+1}$. Then this formula is equivalent to: -\begin{align*}\exists x_1, \dots,x_{n+1} (t_1 = x_1 \wedge \dots \wedge x_{n+1} = t_{n+1} \wedge f(x_1, \dots, x_n) = x_{n+1}).\end{align*} -Thus, it is possible to translate a language with function symbols into one with relational symbols: introduce relation symbols for the graph of the functions, unnest the formulas in which function symbols appear, and then substitute the function symbols in the unnested formula by the corresponding relation symbol (this theorem is a bit tedious to prove, so excuse me for not positing the proof). 
However, as you can see, this translation considerably raises the quantifier rank of a formula, which explains why the induction on quantifier rank is not enough to take care of the functional case. -Indeed, it's possible to use this idea to introduce directly a "patch" to the EF games that embody the same insight (see Hodges's Model Theory, section 3.3). Define $G_n[\mathcal{A}, \mathcal{B}]$, the unnested EF game, to be the same game as $G_n(\mathcal{A}, \mathcal{B})$, except that Player II wins if, given a play $(\bar{a}, \bar{b})$, for every unnested atomic formula $\phi$, $\mathcal{A} \models \phi(\bar{a}) \iff \mathcal{B} \models \phi(\bar{b})$. Using this, one is able to prove a similar version to the theorem above, except that, instead of employing formulas of quantifier rank $n$, one employs unnested formulas of quantifier rank $n$. The proof is essentially the same as in the relational case.<|endoftext|> -TITLE: Minimizing fuel usage for small boat between given points -QUESTION [9 upvotes]: I'm 12 years old and my I was getting bored this afternoon so my dad gave me this math problem (he said it was supposed to be hard, and that I should do some research to learn how to solve it). -"A small boat moving at $V$ km/h uses fuel at a rate given by the function $$q = 8 + \frac {V^2}{50}$$ where q is measured in litres/hour. Determine the speed of the boat at which the amount of fuel used for any given journey is the least." -I had no idea how to do it until i found some stuff on the internet about "calculus". I figured out that i might have to work out a formula for the total fuel consumed (fuel rate multiplied by time). But when i tried to do this, I found that I have created another variable (distance) when i was trying to write V in terms of d/t. -I am really stuck, i feel like i have worked out how to do these types of problems, but this particular one I cannot solve. -I dunno, would you guys maybe be able to solve it, or is it too high-level, maybe i should take it to my math teacher? - -REPLY [2 votes]: For a 12 year old, a bit of experimentation may be easier. -Try looking at the extremes first. Choose easy numbers, such as V=1 and V=100. -For V=1, q=8+$\frac{1}{50}$. In one hour, you'll burn 8.02 liters of fuel to travel 1 kilometer. The biggest part is the constant "8" -For V=100, q=8+$\frac{10000}{50}$ In one hour, you'll now burn 208 liters of fuel, but you traveled 100 kilometer. That's better, only 2.08 liter per kilometer. The biggest part is now $\frac{V^2}{50}$ -Let's experiment further. What if both parts would be equal? What if $8 = \frac{V^2}{50}$ ? That means $V^2=400$ or V=20. In that case, q=16. In one hour, you travel 20 kilometer, for only 16 liters. That's only 0.80 liter per kilometer! -Ok, so we see that if we go slowly, the 8 part is important, and if we go fast, the $V^2$ part is important, and in the middle you have a better solution. But what is best? -The trick we use in math is to find the best speed V so that going faster (V+dV) uses more fuel, and going slower (V-dV) also uses more fuel. dV is a very small number. In fact, we'll make it so small that $dV^2$ is small enough to ignore. Thus, $(V+dV)^2 = V^2 + 2*V*dV$. -So that means we'd have a $q+dq = 8 + \frac {V^2 + 2*V*dV}{50}$. That's a bit unruly. Let's simplify it by looking around V=20. $16+dq = 8 + \frac {400 + 40*dV}{50}$ or $dq = \frac {4}{5}*dV$. Now that's nicer. If I go slightly faster than 20 km/h, my fuel consumption per hour goes up just as fast. 
And if I go slightly slower, my fuel consumption goes down equally fast. -But that means the fuel consumption per kilometer doesn't change around V=20! Now 20 is a somewhat lucky choice of V. I could do the same for V=10, and then I would see that $q = 10, dq = \frac {2}{5} dV$. My fuel burn rate goes up slower than my speed, at only 40%. So it's better to go faster than 10 kilometer per hour. -So, if I hadn't guessed V correctly from the trick $8=\frac{v^2}{50}$, I would have had to solve dq/dV = q/V. That's a bit of writing but you'll get V=20 all the same.<|endoftext|> -TITLE: Find number of roots of the equation $e^x(x^4 + 4x^3 + 12x^2 + 24x + 24) + 1 = 0$ -QUESTION [5 upvotes]: Find number of roots of the equation $e^x(x^4 + 4x^3 + 12x^2 + 24x + 24) + 1 = 0$ -Using Descartes rule, number of positive roots is zero and there can be a maximum of 4 negative roots. -Also, for the function $P(x)=x^4 + 4x^3 + 12x^2 + 24x + 24$, the double derivative $P''(x)>0$. So the function can have either 2 negative roots or no root at all. -But I still lack information required to prove that the function has no real roots. - -REPLY [9 votes]: Just observe that -\begin{align} -P(x) &=(x^4+4x^3+4x^2)+(8x^2+24x+18)+6\\ -&=(x^2+2x)^2+2(2x+3)^2+6>0 -\end{align} -Therefore -$$e^xP(x)+1>0$$ for all $x \in \mathbb R$.<|endoftext|> -TITLE: Solve the diophanic equation $y^2 = x^4 -4x^3 + 6x^2 -2x +5$, where $x,y \in \mathbb{Z}$ -QUESTION [5 upvotes]: Solve the diophanic equation $y^2 = x^4 -4x^3 + 6x^2 -2x +5$. -Methods I know: -1) look modulo p for some prime p, when using this method I almost always conclude there are no solutions, so i don't think it is handy to use this in this particular case. -2) Combinations of factorization and estimation: factorize your specific function and Search for an upper bound. -I used 2), and i get in a little bit of trouble. -I tried factorize the right hand side, but did not come any further then: -$y^2 - 10 = (x-5)(x^3 + x^2 - x - 3)$. -I don't seem to get an upper bound here..... :(. -Any hints on going to the right direction or on using a new method trying to solve this probelem? -Kees - -REPLY [6 votes]: We may suppose that $y\geq 0$. Now -$$x^4-4x^3+6x^2-2x+5=(x-1)^4+2x+4$$ -Hence we have -$$(y-(x-1)^2)(y+(x-1)^2)=2(x+2)$$ -This imply that $(x-1)^2\leq (x-1)^2+y\leq 2|x|+4$ -and it is easy to finish.<|endoftext|> -TITLE: How to show that $a_{1}=a_{3}=a_{5}=\cdots=a_{199}=0$ -QUESTION [7 upvotes]: Let $$(1-x+x^2-x^3+\cdots-x^{99}+x^{100})(1+x+x^2+\cdots+x^{100})=a_{0}+a_{1}x+\cdots+a_{200}x^{200}$$ -show that -$$a_{1}=a_{3}=a_{5}=\cdots=a_{199}=0$$ -I have one methods to solve this problem: -Let$$g(x)=(1-x+x^2-x^3+\cdots-x^{99}+x^{100})(1+x+x^2+\cdots+x^{100})$$ -Note $$g(x)=g(-x)$$ -so $$a_{1}=a_{3}=\cdots=a_{199}=0$$ -there exist other methods? - -REPLY [3 votes]: Let $A (x) = (1-x+x^2-x^3+\cdots-x^{99}+x^{100})$ and $B (x) = 1+x+x^2+\cdots+x^{100}$. Then we have $g(x) = A (x) B (x)$. We have $(x + 1) A (x) = x^{101} + 1$ and $(x - 1) B (x) = x ^ {101} - 1$. Then $$g (x) = \frac {x^ {202} - 1} {x^2 - 1} = x^{200} + x^{198} + \cdots + 1,$$ as desired.<|endoftext|> -TITLE: $U\subset [0,\infty)$ is open and unbounded $\Rightarrow \exists x$ such that $U\cap \{nx;n\in \mathbb N\}$ is infinite. -QUESTION [11 upvotes]: I want to show that: - -Let $U\subset [0,\infty)$ be open and unbounded. Show that there is a number $x\in (0,\infty)$ such that $U\cap \{nx;n\in \mathbb N\}$ is infinite. - -Because of $U$ is open, $U$ is a countable union of open intervals. 
if $U$ contain an interval $(a,\infty)$, we are done. But if all intervals which contain in $U$ was bounded, what we can do? can somebody give me a hint? - -REPLY [7 votes]: For $x>0$ let $N_x=\{nx:n\in\Bbb N\}$, and suppose that $U\cap N_x$ is finite for each $x>0$. Then for each $x\ge 0$, there is a $q_x\in\Bbb Q$ such that $U\cap N_x\subseteq[0,q_x)$. For each $q\in\Bbb Q$ let $A_q=\{x\ge 0:q_x=q\}$; evidently $[0,\to)=\bigcup_{q\in\Bbb Q}A_q$. By the Baire category theorem there are a $q\in\Bbb Q$ and $a,b>0$ such that $a0\;,$$ -so $(n+1)a -TITLE: Infinite-dimensional Unitary representions that are not completely reducible -QUESTION [6 upvotes]: The Peter-Weyl theorem asserts that for a compact Lie group $G$ every unitary irreducible representation is necessarily finite-dimensional and any unitary representation is a direct sum of irreducibles. -1) Is not any representation of a compact group (finite or infinite-dimensional) representation of a compact group unitarizable and thus completely reducible? Or does this only hold for finite-dimensional? -2) Now if $G$ is not compact, how exactly does the situation change? -If I have an infinite-dimensional unitary representation of a non-compact group, is it also always completely reducible? -Also, which noncompact group possess (nontrivial) finite-dimensional unitary representations? - -REPLY [4 votes]: In the finite-dimensional case this is clear by averaging. In the infinite-dimensional case, it depends on what you mean by a representation. A very general thing you could ask for is a representation on some topological vector space, but most topological vector spaces aren't Hilbert spaces so it's unclear what it would even mean for such a representation to be unitarizable. If you mean a continuous representation on a Hilbert space, then the same averaging argument works. The content of the Peter-Weyl theorem is not that unitary representations are completely reducible: that's a general fact. It asserts a lot of more interesting facts than this, such as that matrix coefficients of unitary irreducibles separate points. -Noncompact semisimple Lie groups in general have many interesting irreducible infinite-dimensional representations, some but not all of which are unitarizable. It's still true that unitary representations are completely reducible (and the proof is the same), but often there are no nontrivial finite-dimensional ones: for example, if $G$ is a noncompact simple Lie group such as $PSL_2(\mathbb{R})$, it can't embed into any unitary group $U(n)$ (all of which are compact).<|endoftext|> -TITLE: Basis elements of the uniform topology on $\mathbb R^{\omega}$ -QUESTION [5 upvotes]: I have been trying to understand the basis elements of the uniform topology on $\mathbb{R}^{\omega}$. For some time, I thought they would be: -$B_\bar{p} (x,\epsilon) = \prod (x_i - \epsilon, x_i + \epsilon)$ if $\epsilon < 1$ -However, after reading a bit online, I learned that this set is not even open in the uniform topology. The actual basis elements are: -$B_\bar{p} (x,\epsilon) = \bigcup_{\delta < \epsilon} \prod (x_i - \delta, x_i + \delta)$ if $\epsilon < 1$ -Why is this the case? - -REPLY [6 votes]: Note that under the uniform topology the corresponding metric is $d(x,y) = \min(1, \sup_i (|x_i-y_i|))$. 
In a finite product of intervals $(x_i-\epsilon, x_i + \epsilon)$ any particular element will thus be strictly less than $\epsilon$ distance away from $x$, but in an infinite product you have elements like $y = (y_i) = (x_i + (1-1/n)\epsilon)$, where the supremum is equal to $\epsilon$. Note that any open neighborhood of this particular point $y$ will contain elements not in the first given product, demonstrating that the product is not open. -Such elements must be excluded from the basis elements, which should instead be the set of elements of distance strictly less than $\epsilon$ from $x$. This set can be constructed as the given union over $\delta <\epsilon$.<|endoftext|> -TITLE: Complicated sum with binomial coefficients -QUESTION [11 upvotes]: I know how to prove, that $\frac{1}{2^{n}}\cdot\sum\limits_{k=0}^nC_n^k \cdot -\sqrt{1+2^{2n}v^{2k}(1-v)^{2(n-k)}}$ tends to 2 if n tends to infinity for $v\in (0,\, 1),\ v\neq 1/2$. This can be proved with the use of Dynamical systems reasonings, which are non-trivial and complicated. Is there quite good combinatiorial proof of this fact? How would you solve this problem as it is? Here $C_n^k$ denotes $\frac{n!}{k!(n-k)!}$. - -REPLY [8 votes]: Let $$x=\sqrt[4]{4^nv^{2k}(1-v)^{2n-2k}}=(\sqrt{2})^n (\sqrt{v})^{k}(\sqrt{1-v})^{n-k},x>0$$ -First, we have following inequality -$$x^2-x+1<\sqrt{x^4+1}0\tag{1}$$ -because -$$x^4+1>(x^2-x+1)^2\Longleftrightarrow x(2x^2-3x+2)>0,x>0$$ -$$(x^4+1)<(x^2+1)^2\Longleftrightarrow x^2>0$$ -use $(1)$, we have -$$1+2^nv^k(1-v)^{n-k}-(\sqrt{2})^n(\sqrt{v})^k(\sqrt{1-v})^{n-k} -<\sqrt{1+4^nv^{2k}(1-v)^{2n-2k}}<1+2^nv^k(1-v)^{n-k}\tag{2}$$ -use $(2)$,For one thing,we have -\begin{align*}\lim_{n\to\infty}\dfrac{\displaystyle\sum_{k=0}^{n}\binom{n}{k}\sqrt{1+4^nv^{2k}(1-v)^{2n-2k}}}{2^n} -&<\lim_{n\to+\infty}\dfrac{\displaystyle\sum_{k=0}^{n}\binom{n}{k}(1+2^nv^{k}(1-v)^{n-k})}{2^n}\\ -&=\lim_{n\to+\infty}\dfrac{\displaystyle\sum_{k=0}^{n}\binom{n}{k}+\sum_{k=0}^{n}\binom{n}{k}2^nv^k(1-v)^{n-k}}{2^n}\\ -&=\lim_{n\to+\infty}\dfrac{(1+1)^n+2^n(v+1-v)^n}{2^n}\\ -&=2 -\end{align*} -use $(2)$,on the other hand -$$\lim_{n\to\infty}\dfrac{\displaystyle\sum_{k=0}^{n}\binom{n}{k}\sqrt{1+4^nv^{2k}(1-v)^{2n-2k}}}{2^n} ->2-\lim_{n\to+\infty}\dfrac{(\sqrt{2v}+\sqrt{2(1-v)})^n}{2^n}=2$$ -because Use Cauchy-Schwarz inequality we have -$$\sqrt{v}+\sqrt{1-v}<\sqrt{2(v+1-v)}=\sqrt{2}$$<|endoftext|> -TITLE: Is it necessary that if a limit exists at a point it should be also defined at that point? -QUESTION [9 upvotes]: Say there exists a limit $\lim_{x \to x_0}f(x) = L$. Is it necessary that $f$ be defined at the point $x_0$ itself? -Well, what I think of it is that it's OK to be undefined at that point because I guess that won't cease the limit of that function to exist there would it? - -REPLY [28 votes]: "No", in a particularly strong sense: The very reason limits came into mathematics was to extract numerical values from difference quotients -$$ -\frac{f(x) - f(x_{0})}{x - x_{0}} -$$ -(which are, of course, algebraically indeterminate at $x_{0}$), in the limit as $x \to x_{0}$. -The formal definition of $L = \lim(f, x_{0})$ ("For every $\varepsilon > 0$, there exists a $\delta > 0$ such that if $0 < |x - x_{0}| < \delta$, then $|f(x) - L| < \varepsilon$") explicitly avoids evaluation of $f$ at $x_{0}$, precisely because $f$ might be undefined at $x_{0}$.<|endoftext|> -TITLE: What are the difference among entire, smooth, analytic and holomorphic in complex function? 
-QUESTION [5 upvotes]: In complex analysis, I know that they have similar meaning. -However what are the difference among them? Especially for entire function, does entire function mean analytic function? - -REPLY [9 votes]: "$f$ is holomorphic on $U$" means "the complex derivative of $f$ exists and is continuous on $U$". -"$f$ is entire" means means "$f$ is holomorphic on $\mathbb{C}$". -"$f$ is smooth on $U$" means a lot of different things in different contexts. For complex analysis, let us introduce the map $p : \mathbb{C} \to \mathbb{R}^2$ given by $p(x+iy)=(x,y)$. Then "$f$ is smooth on $U$" probably means that $p \circ f \circ p^{-1} : \mathbb{R}^2 \to \mathbb{R}^2$ is infinitely differentiable on $p(U)$. (In other words, the underlying function on the plane, where we forget about complex numbers, is smooth.) This concept doesn't come up very often in complex analysis because you really need to work with holomorphic functions to use complex-analytic methods. -"$f$ is analytic on $U$" a priori means "a power series expansion of $f$ exists on $U$". It is a remarkable fact in complex analysis that this is equivalent to $f$ being holomorphic on $U$. As a result of this the two terms are sometimes used interchangeably. Another result of this is that holomorphic functions are smooth (in the sense of the previous paragraph).<|endoftext|> -TITLE: Is there a connection between duality in linear programming and duality in functional analysis? -QUESTION [12 upvotes]: In linear programming we optimize a linear function which is constrained by linear inequalities or linear equalities. Under some conditions you can rewrite the problem to the dual problem, so that you can solve another linear programming problem to get your result. -In functional analysis the dual of a vector space is the collection of linear functionals from the vector space. -Are there any connections between this? Can we write the problems in some way so that they say the same thing? The only connection I have seen is that in functional analysis we have the hahn-banach theorem, which is a theorem about extensions of linear functionals, this is connected to the hahn-banach separation theorem, the separating hyperplane theorem, and this again is connected to farkas lemma, which is used in linear programming. Is there any more connections, or is it just coincidental that both use the word dual? Before I started reading functional analysis I thought the connection would be stronger, since we can prove the separating hyperplane theorem using functional analysis, and this theorem and farkas lemma were a big part of linear programming. - -REPLY [6 votes]: As you already mentioned, dual spaces appear in separation theorems naturally. -There is also another relation: -Given a convex optimization problem, one can define a dual problem. If the original optimization problem is posed in a Banach space $V$, then the dual problem lives in the dual space $V^*$. -Linear programming problems are special instances of convex problems in the Banach space $\mathbb R^n$. The dual programming problem can be derived by means of the general method for convex problems.<|endoftext|> -TITLE: Prove there is a Bernstein set $B$ such that $B+B$ is also Bernstein -QUESTION [5 upvotes]: Show that there exists a Bernstein set $B$ such that $B+B$ is also Bernstein. - -I have tried to use the definition that neither $B$ nor its complement contain a perfect set. 
-
-REPLY [9 votes]: Well, to start with, remember that a Bernstein set is built in stages: we list all the perfect sets as $\{P_\eta: \eta<\mathfrak{c}\}$ (where $\mathfrak{c}$ is the cardinality of the continuum), and we define a pair of sequences of sets of reals - $In_\eta$ and $Out_\eta$ - as follows:

-$In_0=Out_0=\emptyset$.
-At stage $\eta$, we pick distinct reals $r, s\in P_\eta\setminus (In_\eta\cup Out_\eta)$ and let $In_{\eta+1}=In_\eta\cup\{r\}$ and $Out_{\eta+1}=Out_\eta\cup\{s\}$. (By induction, both $In_\eta$ and $Out_\eta$ only have $\vert\eta\vert$-many elements, so such $r$ and $s$ exist.)
-For $\lambda$ a limit, we let $In_\lambda=\bigcup_{\eta<\lambda}In_\eta$ and $Out_\lambda=\bigcup_{\eta<\lambda}Out_\eta$.
-Finally, we take $B=In_\mathfrak{c}$.

-By construction, we have:

-$B\cap P_\eta=In_\mathfrak{c}\cap P_\eta\supseteq In_{\eta+1}\cap P_\eta\not=\emptyset$, and
-$B^c\cap P_\eta\supseteq Out_{\eta+1}\cap P_\eta\not=\emptyset$,

-so $B$ is indeed Bernstein.
-So what?
-Well, my point is that to build a Bernstein set with additional properties, you'll want to repeat this construction but with more bells and whistles. Think of it like this - in the construction of a "vanilla" Bernstein set, you have continuum-many requirements to meet: "positive" requirements "the $\eta$th perfect set intersects the set I'm building," and "negative" requirements "the $\eta$th perfect set intersects the complement of the set I'm building." Each of these requirements is atomic: you can satisfy any one requirement by putting some element into the set you're building, or promising to keep some element out of the set you're building.
-Now, you want to build a set $B$ such that

-$(i)\quad$ $B$ is Bernstein, and
-$(ii)\quad$ $B+B$ is Bernstein.

-We already know how to break $(i)$ down into a bunch of "atomic" requirements, so let's look at $(ii)$. $(ii)$ breaks down into continuum-many requirements: the "positive" requirements "$P_\eta$ intersects $B+B$," and the "negative" requirements "$P_\eta$ intersects the complement of $B+B$." The positive requirements are no harder to satisfy than the positive requirements in the vanilla case.
-The negative requirements, on the other hand, are harder: if I want to decree "$x$ is an element of $P_\eta$ which is not in $B+B$," it's not enough to throw a single real into my $Out$-set - I have to promise that, whenever a real $a$ enters my $In$-set, the real $x-a$ enters my $Out$-set. So this is not an atomic requirement, and to accommodate this I'm going to need to make the construction slightly more complicated: at each stage, I'll have an $In$-set of size $<\mathfrak{c}$, an $Out$-set of size $<\mathfrak{c}$, and additionally a set of $<\mathfrak{c}$-many "rules" of the form "no two elements summing to $x$ wind up in my $In$-set."
-Fortunately, this doesn't complicate things too much. Can you see how to proceed?<|endoftext|>
-TITLE: Solve $x-\lfloor x\rfloor= \frac{2}{\frac{1}{x} + \frac{1}{\lfloor x\rfloor}}$
-QUESTION [6 upvotes]: Could anyone advise me how to solve the following problem:
-Find all $x \in \mathbb{R}$ such that $x-\lfloor x\rfloor= \dfrac{2}{\dfrac{1}{x} + \dfrac{1}{\lfloor x\rfloor}},$ where $\lfloor *\rfloor$ denotes the greatest integer function.
-Here is my attempt:
-Clearly, $x \not \in \mathbb{Z}.$
-$\dfrac{x}{\lfloor x \rfloor} - \dfrac{\lfloor x \rfloor}{x}= 2$
-$ \implies x^2 -2x\lfloor x \rfloor - \lfloor x \rfloor ^2 = 0$
-$\implies x = (1 \pm \sqrt2 )\lfloor x \rfloor $
-$\implies \{x\} = \pm\sqrt2 \lfloor x \rfloor$
-Thank you.

-REPLY [5 votes]: Completing the square gives
-$$
-x^2-2x\lfloor x\rfloor+\lfloor x\rfloor^2=2\lfloor x\rfloor^2
-$$
-so
-$$
-(x-\lfloor x\rfloor)^2=2\lfloor x\rfloor^2
-$$
-Since $0\le x-\lfloor x\rfloor<1$, this forces $\lfloor x\rfloor=0$, which is disallowed by the starting equation (it would make $\frac{1}{\lfloor x\rfloor}$ undefined), so there are no solutions.<|endoftext|>
-TITLE: Prove existence
-QUESTION [7 upvotes]: Let $f$ be a continuous bounded function on the interval $(a, +\infty)$ such that $\lim_{x \to+\infty}f(x)$ does not exist. Prove that for any $t \in\mathbb R$ there is a sequence $x_{n} \to +\infty$ such that $f(x_{n} + t) = f(x_{n})$ for all $n \in\mathbb N$.

-REPLY [10 votes]: Counterexample. Let $a=1,\ f(x)=\sin x+\frac1x$ for $x\in(1,+\infty)$, and $t=2\pi.$<|endoftext|>
-TITLE: $n\cdot \phi(n)=m\cdot \phi(m)$
-QUESTION [11 upvotes]: If I am not mistaken, here in OEIS it says that
-$n\cdot \phi(n)=m\cdot \phi(m)$ is possible only if $n=m$.
-$\phi(n)$ denotes Euler's totient function.
-Is there a proof of this fact?

-REPLY [9 votes]: Suppose $n$ is the smallest number such that a number $m\ne n$ exists with
-$$n\ \phi(n)=m\ \phi(m)$$
-Let $p$ be the largest prime factor of $n$ and $q$ be the largest prime factor
-of $m$. Then $p>q$ is impossible because $m\ \phi(m)$ would not be divisible by $p$, and $q>p$ is impossible by symmetry.
-So, we have $p=q$.
-The valuation of $p$ in $n\ \phi(n)$ is uniquely determined
-by the valuation of $p$ in $n$, so the valuations of $p$ in $n\ \phi(n)$ and
-$m\ \phi(m)$ must coincide.
-Therefore we have $\frac{n}{p}\phi(\frac{n}{p})=\frac{m}{p}\phi(\frac{m}{p})$
-contradicting the assumption that $n$ is the smallest number, such that
-an $m$ exists with $n\ \phi(n)=m\ \phi(m)$.
-Hence $n=1$, but $1=m\ \phi(m)$ has the only solution $m=1$.
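-As a quick empirical check of the claim, here is a brute-force Python sketch (it uses sympy's totient; the bound of $10^4$ is an arbitrary choice):

```python
from sympy import totient

seen = {}
for n in range(1, 10001):
    v = n * totient(n)
    assert v not in seen, f"collision: {n} and {seen[v]}"
    seen[v] = n
print("n*phi(n) takes distinct values for all n up to 10000")
```

<|endoftext|>
-TITLE: Largest rectangle not touching any rock in a square field
-QUESTION [18 upvotes]: You want to build a rectangular house with a maximal area. You are offered a square field of area 1, on which you plan to build the house. The problem is, there are $n$ rocks scattered in unknown locations throughout the field. The rocks are unmovable, and you cannot build on rocks. What is the largest area of a rectangle that you can build, in the worst case?
-Formally: let $S_n$ be a set of $n$ points in the unit square. Define $\textrm{MaxArea}(S_n)$ as the maximum area of an axis-parallel rectangle in the unit square that does not contain, in its interior, any point in $S$. Define:
-$$\textrm{MinMaxArea}(n) = \inf_{S_n} (\textrm{MaxArea}(S_n))$$
-where the infimum is on all possible sets $S_n$ of $n$ points. What are good bounds on $\textrm{MinMaxArea}(n)$?
-EXAMPLE: In the picture below, the unit square is scaled to a 100-by-100 square. There are $n=100$ rocks. Apparently, the largest possible rectangle that does not contain any rocks in its interior is a rectangle such as ABCD, whose area is $.06\times .58$, which is approximately $\frac{1}{4\sqrt{n}}$, so:
-$$\textrm{MinMaxArea}(n) \leq \frac{1}{4\sqrt{n}}$$
-Is there another arrangement of rocks in which the largest rectangle is smaller?

-REPLY [3 votes]: Let me shorten MaxArea($S_n$) to $M(S_n)$ for convenience, and let $M(n) = \inf_{S_n} M(S_n)$ be MinMaxArea (this is overloading the notation, but I hope it won't be confusing).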
-Then, $M(S_n) \le D(S_n)$, where $D(S_n)$ is the classical discrepancy function:
-$$
-D(S_n) = \sup_R \left|\frac{|S_n \cap R|}{n} - \mathrm{area}(R) \right|,
-$$
-where the supremum is over axis-parallel rectangles in $[0,1]^2$. There are quite a few constructions of $n$-point sets $S_n$ for which $D(S_n) = O(\log(n)/n)$. One example is
-$$
-S_n = \left\{ \left(\frac{i}{n}, i\sqrt{2} \bmod 1\right) \right\}_{i = 0}^{n-1}.
-$$
-Another is the van der Corput set. This shows that $M(n) = O(\log(n)/n)$.
-As far as lower bounds go, it is known that the above bound on $D(n)$ is tight, i.e. $D(n) = \Omega(\log(n)/n)$.
-However, even better bounds are possible if we work directly with $M(n)$. Let $n(\epsilon)$ be the size of the smallest point set $P$ such that $M(P) \le \epsilon$ (this is called an $\epsilon$-net). Then it is known that $n(\epsilon) = O(\frac{1}{\epsilon}\log \log \frac{1}{\epsilon})$, which implies that $M(n) = O(\log \log(n)/n)$.
-(Note that the $\epsilon$-nets literature usually works with discrete range spaces, but because the bounds on $n(\epsilon)$ are a function of $\epsilon$ only, we can just take an arbitrarily fine discrete approximation of $[0,1]^2$.)<|endoftext|>
-TITLE: Exercises with solutions on Elementary Measure Theory
-QUESTION [8 upvotes]: Where can I find a nice collection of solved exercises on Elementary Measure Theory? (Rings, algebras, $\sigma$-algebras, Borel sets, measures, outer measures, Lebesgue measure, measurable functions, Borel functions, etc.)

-REPLY [13 votes]: René Schilling: Measures, Integrals and Martingales. There is a solution manual available on the web with full solutions to all exercises. The book does not only cover elementary measure theory, but further topics in measure/probability theory.
-Claude George: Exercises in Integration. This is a problem book on measure theory; solutions to the exercises are included in the book (table of contents).<|endoftext|>
-TITLE: Is the zero matrix in reduced row echelon form?
-QUESTION [7 upvotes]: Is this matrix in reduced row echelon form?
-The $3\times3$ matrix is:
-0 0 0
-0 0 0
-0 0 0

-I can tell for other matrices, but this one without $1$s confuses me. Are $1$s optional in reduced row echelon form? I think they aren't. What do you think?

-REPLY [10 votes]: In a logical sense, yes. The zero matrix is vacuously in RREF as it satisfies:

-All zero rows are at the bottom of the matrix
-The leading entry of each nonzero row after the first lies to the right of the leading entry of the preceding row.
-The leading entry in any nonzero row is a 1.
-All entries in the column above and below a leading 1 are zero.<|endoftext|>
-TITLE: How can you use Green's relations to learn about a monoid?
-QUESTION [6 upvotes]: Without having ever formally learned any real monoid theory, I was recently pointed to Green's relations by a friend. It sounds like they're quite useful, and I get the impression they might do what character tables do for representation theory.
-However, after looking over the article and playing around with a few interesting monoids, I'm still not totally sure what I can learn about the monoid with Green's relations.
-Let's take this monoid, which I find interesting, as the focus of this post:
-$$M=\langle c,k \mid c^2=1, k^2=k,kck = kckckck \rangle$$
-Here's the egg-box diagram I came up with:
-$$\begin{matrix}\{1,c\}\\
-&\{(ck)^2,(ck)^3\}&\{(ck)^2c,(ck)^3c\}\\
-&\{(kc)^2k,kck\}&\{(kc)^2,(kc)^3\}\\
-&&&\{ck\}&\{ckc\}\\
-&&&\{k\}&\{kc\}\end{matrix}$$
-I'll try to break down the other relations later, but I hope this is enough to begin with for now. How can I learn about this monoid using Green's relations?

-REPLY [7 votes]: Short answer: Green's relations are mostly useful to know about the local structure of a semigroup (see the definition below) although the $\leqslant_\mathcal{J}$ preorder also gives information on the global structure.
-Overview.
-One can probably say that the usefulness of Green's relations in semigroup theory compares with that of character tables in group theory, but they are two very different mathematical objects.
-Green's relations are better interpreted geometrically on Cayley graphs. Take for instance the monoid $M$ of your example and its two generators, $c$ and $k$. The right Cayley graph of $M$ has $M$ as set of vertices and edges of the form $(m, a, ma)$ where $m \in M$ and $a$ is one of the generators. The left Cayley graph is defined by acting on the left: edges are of the form $(m, a, am)$. Now the $\mathcal{R}$-classes ($\mathcal{L}$-classes) are the strongly connected components of the right (left) Cayley graph. The $\mathcal{J}$-classes (or $\mathcal{D}$-classes since your monoid is finite) are the strongly connected components of the union of these two graphs.
-Back to the example.
-I have slightly modified the egg-box picture of your monoid. First, I added stars to locate the idempotents, useful information for a monoid. Secondly, the $\mathcal{J}$-classes are now presented according to the $\leqslant_\mathcal{J}$ order: $D_1 = \{1,c\}$ is the top $\mathcal{J}$-class, $D_2$ is the $\mathcal{J}$-class of $k$ and the remaining $\mathcal{J}$-class $D_3$ is the minimum${}^*$ ideal of $M$
-${}^* \scriptsize\text{For some reason, the term *minimal ideal* is used in the literature, although this minimal ideal is unique when it exists.}$
-$$
-\begin{matrix}
-\{{}^*1,c\}\\
-\\
-\begin{matrix}
-\{ck\}&\{{}^*ckc\}\\
-\{{}^*k\}&\{kc\}
-\end{matrix}\\
-{}\\
-\begin{matrix}
-&\{{}^*(ck)^2,(ck)^3\}&\{(ck)^2c,{}^*(ck)^3c\}\\
-&\{{}^*(kc)^2k,kck\}&\{{}^*(kc)^2,(kc)^3\}
-\end{matrix}\\
-\end{matrix}
-$$
-Now, each $\mathcal{J}$-class $D$ gives rise to a semigroup $(D^0, *)$ defined as follows: $D^0 = D$ if $D$ is a subsemigroup of $M$ and $D^0 = D \cup \{0\}$ otherwise. The product $*$ is defined, for $s, t \in D$ by
-$$
-s*t = \begin{cases}
- st &\text{if $st \in D$} \\
- 0 &\text{otherwise}
-\end{cases}
-$$
-Local structure of a semigroup.
-The local structure of a semigroup is precisely the data of these semigroups $D^0$. They are ($0$)-simple semigroups and as such they are isomorphic to a Rees matrix semigroup. I'll let you study this notion in any book of semigroup theory, but let me describe it on your example.
-Clearly $D_1^0 = D_1$ is a cyclic group of order $2$.
The semigroup $D_2^0$ is isomorphic to the semigroup $B_2 = \{1,2\} \times \{1,2\} \cup \{0\}$ with multiplication defined by
-$$
-(i,j)(i',j') = \begin{cases}
- (i,j') &\text{if $j = i'$} \\
- 0 &\text{otherwise}
-\end{cases}
-$$
-Finally, the minimum ideal $D_3$ is a semigroup, isomorphic to the semigroup $\{(i,g,j) \mid i \in \{1,2\}, j \in \{1,2\}, g \in \mathbb{Z}/2\mathbb{Z}\}$ with multiplication given by $(i,g,j)(i',g',j') = (i, gg', j')$. In your example, this structure is particularly simple because the product of two idempotents of $D_3$ is an idempotent.
-In the general case, the Rees matrix semigroup associated to a regular $\mathcal{D}$-class is given by the set $I$ of $\mathcal{R}$-classes, the set $J$ of $\mathcal{L}$-classes, the structure group $G$ of the $\mathcal{D}$-class and a $J \times I$ matrix $(p_{j,i})$ with entries in $G \cup \{0\}$. The semigroup is defined on $(I \times G \times J) \cup \{0\}$ by the product
-$$
-(i,g,j)(i',g',j') = \begin{cases}
- (i, gp_{j,i'}g', j') &\text{if $p_{j,i'} \not= 0$} \\
- 0 &\text{otherwise}
-\end{cases}
-$$
-If you compute the torsion matrices $(p_{j,i})$ of each regular $D$-class, you will completely know the structure of each regular $D$-class, the local structure. For instance, you would know that the product $[(kc)^2k][(ck)^2c]$ is equal to $(kc)^3$ just by looking at the egg-box picture. Green's preorders also give you some partial information on the product of elements of two different $\mathcal{D}$-classes: for instance, if $e$ is idempotent, then $s \leqslant_\mathcal{R} e$ if and only if $es = s$.<|endoftext|>
-TITLE: Are there any limit questions which are easier to solve using methods other than l'Hopital's Rule?
-QUESTION [19 upvotes]: Are there any limit questions which are easier to solve using methods other than l'Hopital's Rule? It seems like for every limit that results in an indeterminate form, you may as well use l'Hopital's Rule rather than any of the methods that are taught prior to the rule, such as factoring, rationalizing, trig limits, etc.
-EDIT: These are great answers, thanks!

-REPLY [3 votes]: If you want to evaluate
-$$\lim_{x\to\infty}\frac{x^{1000000}+x^{999999}+\cdots+x+1}{x^{1000000}}$$
-purely by use of L'Hopital's Rule, you will have to apply the rule a million times!<|endoftext|>
-TITLE: Distribution of quotient of random variables
-QUESTION [7 upvotes]: Let $X$ and $Y$ be independent random variables such that $X\sim \Gamma(a,b)$ and $Y\sim\Gamma(a,c)$. What is the distribution of the random variable $\frac{Y}{X+Y}$? Any help with this?

-REPLY [7 votes]: To find the p.d.f. of the ratio $\frac{Y}{X+Y}$, let us first write its c.d.f.
-Since $X$ and $Y$ are always positive, their ratio is also positive and, therefore, for $0\leq t\lt1$ we can write:
-$
-P\left(\frac{Y}{X+Y}\leq t\right)=P\left(Y\leq \frac{t}{1-t}X\right)=\int_{0}^{\infty }\left(\int_{0}^{\frac{t}{1-t}x}f_{X}(x)f_{Y}(y)dy\right)dx
-$
-as $f_{X}(x)f_{Y}(y)$ is the joint p.d.f. of $X$ and $Y$ (the variables are independent) and $y$ goes from $0$ to $\frac{t}{1-t}x$ when $x$ goes from $0$ to $\infty$.
-The p.d.f. is the derivative of the c.d.f.
so we can write:
-$
-\frac{d}{dt}P\left(\frac{Y}{X+Y}\leq t\right)=\frac{d}{dt}\int_{0}^{\infty }\left(\int_{0}^{\frac{t}{1-t}x}f_{X}(x)f_{Y}(y)dy\right)dx=\int_{0}^{\infty }\frac{d}{dt}\left(\int_{0}^{\frac{t}{1-t}x}f_{Y}(y)dy\right)f_{X}(x)dx
-$
-let $F_{Y}(y)$ be a primitive for $f_{Y}(y)$
-$
-\frac{d}{dt}\left(\int_{0}^{\frac{t}{1-t}x}f_{Y}(y)dy\right)=\frac{d}{dt}F_{Y}(\frac{t}{1-t}x)=\frac{1}{(1-t)^{2}}xf_{Y}(\frac{t}{1-t}x)
-$
-so
-$
-\frac{d}{dt}P\left(\frac{Y}{X+Y}\leq t\right)=\frac{1}{(1-t)^{2}}\int_{0}^{\infty }xf_{Y}(\frac{t}{1-t}x)f_{X}(x)dx
-$
-The last integral is easy to compute but quite long; you just need to substitute the equations for the p.d.f.s, group the exponentials, and make a variable change (I have $z=\left(b+\frac{ct}{1-t}\right)x$), and you'll have your p.d.f.
-$
-\frac{\Gamma(2a)}{\Gamma^{2}(a)}c^{a}b^{a}\frac{t^{a-1}(1-t)^{a-1}}{(b+(c-b)t)^{2a}}
-$
-If you want I can help with all the steps needed, but right now I have little time to check whether the results are correct.

-EDIT: I changed the final result (I was wrong while grouping a factor). Now the function correctly becomes a $Beta(a,a)$ when $c=b$.
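-As a sanity check on this density, here is a quick Monte Carlo sketch in Python (the parameter values and sample size are arbitrary choices, and $b,c$ are taken as rate parameters, so numpy's scale is $1/\text{rate}$):

```python
import numpy as np
from math import gamma

a, b, c = 2.0, 1.5, 3.0
rng = np.random.default_rng(0)
X = rng.gamma(a, 1.0 / b, 500_000)   # numpy's gamma takes (shape, scale)
Y = rng.gamma(a, 1.0 / c, 500_000)
T = Y / (X + Y)

hist, edges = np.histogram(T, bins=50, range=(0.0, 1.0), density=True)
t = (edges[:-1] + edges[1:]) / 2
pdf = (gamma(2*a) / gamma(a)**2) * (c*b)**a * t**(a-1) * (1-t)**(a-1) / (b + (c-b)*t)**(2*a)
print(np.abs(hist - pdf).max())      # small if the derived formula is right
```

<|endoftext|>
-TITLE: Possible error in Hartshorne exercise III.2.7
-QUESTION [6 upvotes]: Exercise III.2.7 in Hartshorne's Algebraic Geometry is the following.

-Let $S^1$ be the circle (with its usual topology) and let $\mathbb Z$ be the constant sheaf $\mathbb Z$.
-(a) Show that $H^1(S^1, \mathbb Z)\cong \mathbb Z$, using our definition of cohomology [derived functor cohomology]. (b) Now let $\mathcal R$ be the sheaf of germs of continuous real-valued functions on $S^1$. Show that $H^1(S^1, \mathcal R)=0$.

-I have two questions. The first is just me missing something obvious, but the second seems more serious.
-For part (a), it seems like the most straightforward thing to do is to build an injective resolution and take cohomology. In proposition III.2.2, Hartshorne gives us a recipe for constructing injectives: stick together a bunch of skyscraper sheaves. Let $i_p(A)$ denote the skyscraper sheaf at a point $p$ with group $A$. Then I get the resolution
-$$\mathbb Z \rightarrow \prod_{p\in S^1} i_p(\mathbb Q) \rightarrow \prod_{p\in S^1} i_p(\mathbb Q/\mathbb Z) \rightarrow 0.$$
-But taking global sections and then cohomology gives an answer that is clearly wrong. What am I misunderstanding here?
-For (b), it seems the claim is false. For example, if we compute Cech cohomology using the standard two-piece cover of the circle, mimicking example 4.0.4 in the text, we get that $H^1$ is nonzero. But this seems odd, since Cech cohomology should agree with derived functor cohomology for a good cover. What's going on here?

-REPLY [5 votes]: For (a), your resolution is not right, because $(\prod_{p\in S^1}i_p(\mathbb{Q}))/\mathbb{Z}$ is not $\prod_{p\in S^1}i_p(\mathbb{Q}/\mathbb{Z})$. Or in other words, the kernel of $\prod_{p\in S^1}i_p(\mathbb{Q})\rightarrow \prod_{p\in S^1}i_p(\mathbb{Q}/\mathbb{Z})$ is not $\mathbb{Z}$ (this is $\prod_{p\in S^1}i_p(\mathbb{Z})$).
-For (b), you cannot mimic example 4.0.4 to compute the Cech cohomology. In fact, the higher Cech cohomology of this sheaf vanishes. Let $U,V$ be your cover (which is a good cover), and consider the Cech complex
-$$ R(U)\times R(V)\rightarrow R(U\cap V)$$
-given by $(f,g)\mapsto f_{|U\cap V}-g_{|U\cap V}$.
-The kernel of this map consists of the couples of functions $(f,g)$ that agree on the intersection, and hence patch together to form a unique function on $S^1$. So $\overset{\vee}{H^0}(\{U,V\},R)=R(S^1)$ as expected.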
-But I claim that the map is onto, so that $\overset{\vee}{H^1}(\{U,V\},R)=0$. Indeed, let $f\in R(U\cap V)$, also let $u,v$ be two functions on $S^1$ such that $u$ has (compact) support in $U$, $v$ has (compact) support in $V$ and $u+v=1$. In particular the function $u$ is zero in a neighborhood of $S^1\setminus V$ and $v$ is zero in a neighborhood of $S^1\setminus U$. The function $uf$ can then be extended to $U$ because it is zero in a neighborhood of $U\setminus U\cap V$. Similarly, the function $vf$ can be extended to $V$. And $(uf,-vf)\mapsto uf+vf=f$ so that the map is onto.
-Here $u,v$ are called a partition of unity. A sheaf like $R$ with partitions of unity is called a fine sheaf. This argument (or a similar one with more than two open sets in the cover) shows that the higher Cech cohomology groups of a fine sheaf over a paracompact space vanish. On a paracompact space, Cech and derived functor cohomology agree so the cohomology of a fine sheaf is trivial.
-The difference with the example 4.0.4 is that there is no partition of unity in a constant sheaf.<|endoftext|>
-TITLE: Group of Order 105 Having Normal Sylow 5/7 Subgroups
-QUESTION [7 upvotes]: I am trying to prove that, given $|G|=105$, $G$ has a normal Sylow 5 subgroup and a normal Sylow 7 subgroup.
-I think the thing that is confusing me is the word "and". It would seem that there can't be both a normal subgroup of order 5 and 7, I think. If there were, since their intersection is just $e_G$, then this ties up the remaining 94 elements in a bunch of Sylow 3 subgroups. Again, because they each contain $e_G$, there needs to be 47 Sylow 3 subgroups, which is impossible by the conditions on the Sylow theorems; so where is my thinking off? Does it really mean "or"?
-There has to be at least one normal 5 or 7, since if neither were normal, because $n_5=21$ and $n_7=15$, and since $4(21)+6(15)>105$, we have a contradiction with the number of elements. So where is my logic flawed?

-REPLY [15 votes]: $|G|=3\cdot5\cdot7$
-$$ n_5 \in\{1, 21\} \; \text{&} \; n_7 \in \{1,15\}$$
-$$ \text{If} \; n_5>1 \; \text{and} \; n_7>1 $$
-$$\Rightarrow n_5=21 \; \text{&} \; n_7=15 $$
-$$\Rightarrow \text{There are } \; 21(5-1) + 15(7-1) >105\; \text{elements of order 5 and 7 together, which is absurd.} $$
-$$ \text{Hence either } \; n_5=1 \; \text{or} \; n_7=1$$
-$$\text{If P and Q are Sylow 5 and 7 subgroups respectively, then one of the two has to be normal in G}$$
-$$\text{Without loss of generality, assume P is normal in G }$$
-$$P\lhd G \;\text{and} \;Q \leq G \Rightarrow PQ \leq G $$
-$$\;\text{Furthermore}, P\cap Q=\{e\} \Rightarrow |PQ|=\frac{|P||Q|}{|P\cap Q|}=35\; $$
-$$[G:PQ]=3, \;\text{which also happens to be the least prime dividing order of G, hence}\; PQ\lhd G$$
-$$|PQ|=5\cdot 7 \; \text{ and 5 doesn't divide (7-1), so } \; PQ\; \text{is cyclic} $$
-$$\Rightarrow \;\text{P and Q are characteristic in PQ. Since PQ is normal in G, we have both P and Q normal in G}$$<|endoftext|>
-TITLE: Monotone increasing function can be expressed as sum of absolutely continuous function and singular function
-QUESTION [6 upvotes]: I'm working on a problem from Royden's Real Analysis:

-Show that if a function $f$ is monotone increasing on $[a,b]$, then $f$ can be represented as the sum of an absolutely continuous function and a singular function.

-I understand the general idea of the proof (I think), but there are a few details I'm unclear on. Here's my proof so far:

-Let $f$ be monotone increasing on $[a,b]$.
Let $g(x) = \int_a^x f'(t) dt + g(a)$. Since $g$ is an indefinite integral, $g$ is absolutely continuous.
-Let $h(x) = f(x) - g(x)$. Then $h'(x) = f'(x) - g'(x)$.
-Since $f$ is monotone increasing, by Theorem 5.3 $\ f'$ is measurable. Then, by Theorem 5.9 $g'(x) = f'(x)$ almost everywhere, so $h'(x) = 0$ almost everywhere, and $h$ is thus singular.
-$\textbf{Theorem 5.3:}$ If $f$ is monotone increasing on $[a,b]$,
-then $f$ is differentiable a.e. and $f'$ is measurable.
-$\textbf{Theorem 5.9:}$ If $f$ is bounded and measurable on $[a,b]$,
-and $F(x) = \int_a^x f(t) dt + F(a),$ then $F'(x) = f(x)$ a.e.
-Here are my questions:
-What allows me to say $h= f - g \implies h' = f' - g'$?
-I would think it's just the differentiability of $f, g,$ and $h$. I know $f$ is differentiable because it's monotone increasing, and $g$ is differentiable because it's absolutely continuous and thus of bounded variation. Is this correct?
-Is $f'$ bounded?
-In order to apply Theorem 5.9, $f'$ must be both bounded and measurable. I could also use Thm 5.10, which requires $f'$ to be integrable, but I'm not sure if I have integrability, either.
-$\textbf{Theorem 5.10:}$ If $f$ is integrable on $[a,b]$,
-and $F(x) = \int_a^x f(t) dt + F(a),$ then $F'(x) = f(x)$ a.e.
-Is it necessary to let $g(x) = \int_a^x f'(t) dt + g(a)$, or can I let $g(x) = \int_a^x f'(t) dt$?
-I would think I need the former in order to use 5.9 or 5.10, but I saw a proof that used the latter. Is there any difference in the two approaches?

-REPLY [4 votes]: I am 4 years late but I figure I will answer this for anyone else who searches up this question.
-1) Yes, just from differentiability, but only on the intersection of the set on which $f'$ is defined and the set on which $g'$ is defined, but since each of them is the complement of a set of zero measure, $h'$ is also defined a.e.
-2) $f'$ is integrable on $[a,b]$ by Theorem 3 in the same chapter, which says that if $f$ is an increasing real-valued function on the interval $[a,b]$, then $f'$ is defined a.e. and measurable, and we have that:
-$\int_a^b f' \leq f(b)-f(a)$.
-This allows us to use Theorem 10 which says that if $g(x) := \int_a^x f' + f(a)$, $g' = f'$ a.e.
-3) Once you have $g' = f'$, note that the two candidates differ only by the constant $g(a)$, and constants do not affect derivatives: $\left(\int_a^x f'\right)' = (g - g(a))' = g' - 0 = g'$ a.e., so there is no real difference between the two approaches.<|endoftext|>
-TITLE: The differential equation $y' = \frac{\ln(x^2+y^2)}{x^2 + y^2}$
-QUESTION [21 upvotes]: In my university, the integral calculus teacher gave me this differential equation to solve.
-$$ y' = \frac{\ln(x^2+y^2)}{x^2 + y^2} $$
-I don't have any clue what form the solution of this differential equation has.

-REPLY [4 votes]: I would say to simply draw the vector field. Inside the unit circle, $y'$ is negative, so the flow vectors are down and to the right. On the unit circle, $y'=0$, so the flow vectors are horizontal there. Outside the unit circle, the flow vectors are up and to the right (though they become close to horizontal far from the unit circle). So the solutions are essentially constant functions (increasing very slowly) for very positive and very negative initial values of $y$. For intermediate initial values of $y,$ we will see an upward slope approaching the unit circle from left to right, a downward dive passing through the unit circle, and then an upward slope again after leaving the unit circle. There is one solution, symmetric under a $180^\circ$ rotation, where $y\rightarrow 0$ as $x\rightarrow\pm\infty$.
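-To see this behavior concretely, here is a small Python sketch (an illustration only; the integration range and initial values are arbitrary choices, picked to keep the trajectories away from the singularity at the origin):

```python
import numpy as np
import matplotlib.pyplot as plt
from scipy.integrate import solve_ivp

def rhs(x, y):
    # y' = ln(x^2 + y^2) / (x^2 + y^2)
    r2 = x**2 + y[0]**2
    return [np.log(r2) / r2]

xs = np.linspace(-4.0, 4.0, 801)
for y0 in (-2.0, -1.0, 1.0, 2.0):
    sol = solve_ivp(rhs, (-4.0, 4.0), [y0], t_eval=xs, rtol=1e-8)
    plt.plot(sol.t, sol.y[0])
plt.gca().add_patch(plt.Circle((0, 0), 1.0, fill=False, linestyle="--"))  # unit circle, where y' = 0
plt.show()
```

-Update. The attached figure shows approximately what the solutions look like.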
I misstated the behavior of the symmetric solution; it does not necessarily approach $y=0$ as $x\rightarrow\pm\infty$, but rather has two asymptotes, $y=\pm y_\infty$.<|endoftext|>
-TITLE: Show $x^n+ax+b=0$ has at most two solutions
-QUESTION [5 upvotes]: For any real numbers $a$ and $b$ and even natural number $n$, show $x^n+ax+b=0$ has at most two solutions for all $x$ in $\mathbb{R}$.

-Let $f(x)=x^n+ax+b$. Set $f(x)$ and $a$ to be $0$, then we have $x^n+b=0$. So $x=\pm b^{1/n}$. $f$ has two solutions. To yield a contradiction, now assume that $f$ has more than two solutions. As $f$ is a polynomial on $\mathbb{R}$, it is differentiable on $\mathbb{R}$; thus $f'(x)=nx^{n-1}+a$. Set $f'(x)=0$, then $0=nx^{n-1}+a\Rightarrow x=\left(\frac{-a}{n}\right)^{1/(n-1)}$. At $x=\left(\frac{-a}{n}\right)^{1/(n-1)}$, $f(\left(\frac{-a}{n}\right)^{1/(n-1)})$ is the only extreme value of $f$, which contradicts $f$ having more than two solutions.

-Can someone check this solution? I am not sure whether it is right or not. If not right, please give me a hint or suggestion. Thanks

-REPLY [2 votes]: Observe that $p$ is concave up everywhere since $p''(x)=n(n-1)x^{n-2}\ $and since $n$ is even, so is $n-2$. The result follows.<|endoftext|>
-TITLE: Is the union of an increasing sequence of topological copies of $\mathbb{R}^n$ homeomorphic to $\mathbb{R}^n$?
-QUESTION [9 upvotes]: Let $M$ be an $n$-dimensional topological manifold, and let $(U_k)_{k \in \mathbb{N}}$ be an increasing sequence of open sets $U_k \subset M$ such that for each $k \in \mathbb{N}$, $U_k$ is homeomorphic to $\mathbb{R}^n$. Is it necessarily the case that $\,U\!:=\bigcup_{k=1}^\infty U_k\,$ is homeomorphic to $\mathbb{R}^n$?
-If so, does anyone know any nice reference for this fact (either as a theorem/lemma/etc. or as an exercise)?

-REPLY [4 votes]: Yes, this is correct. You could prove it as a corollary of the annulus theorem (sketch of a sketch: at each stage the closure $\overline U_n$ is an open $n$-ball, and you're attaching an annulus, and the limit of this procedure is $\Bbb R^n$), but this wasn't known in all dimensions until the 80s. An early proof of this theorem was given in 1961 by Morton Brown here; it appears to use a modification of the above idea and fact, though I have not at all read it in detail. It appears that this had been known slightly earlier as a corollary of Mazur's proof of the topological Schoenflies theorem.<|endoftext|>
-TITLE: Is it possible for two closed sets to not have a minimum distance?
-QUESTION [7 upvotes]: Is it possible for two closed sets to not have a minimum distance? I am trying to think of an example in which two closed sets in $\mathbb{R}$ do not have a minimum distance.
-I am thinking of the real line and intervals, but I believe it's impossible for two closed intervals to not have a minimum distance. So are there other subsets of $\mathbb{R}$ that could have this property?

-REPLY [3 votes]: Let $(X,d)$ be a metric space, and let $A$ and $B$ be any (non-empty) subsets of $X$. Then we define the distance between $A$ and $B$ as follows:
-$$D(A,B) \colon= \inf \left\{ \ d(a,b) \ \colon \ a \in A, \ b \in B \ \right\}.$$
-Now since the set $$\left\{ \ d(a,b) \ \colon \ a \in A, \ b \in B \ \right\}$$ is a subset of $\mathbb{R}$ that is bounded below by $0$, it always has an infimum.
-Now the question is whether this infimum is actually attained, that is, if there exist points $a_0 \in A$, $b_0 \in B$ such that -$$D(A,B) = d(a_0, b_0).$$ -Let $X \colon= \mathbb{R}^2$ with the Euclidean metric, and let -$$A \colon= \left\{ \ (x,y) \in\mathbb{R}^2 \ \colon \ xy=1 \ \right\},$$ -and -$$B \colon= \left\{ \ (x,y) \in \mathbb{R}^2 \ \colon \ xy=-1 \ \right\}.$$ -Then the sets $A$ and $B$ are closed sets (because their complements are open), but there is no minimum distance between them. Look at the graphs of the functions $f, g \colon \mathbb{R} \setminus \{0\} \to \mathbb{R}$ defined by -$$f(x) \colon= \frac{1}{x} \ \mbox{ and } \ g(x) \colon= -\frac{1}{x} \ \mbox{ for all } \ x \in \mathbb{R}\setminus\{0\}.$$ -These graphs are exactly the sets $A$ and $B$. -Let $(a_1, a_2) \in A$, $(b_1, b_2) \in B$. Then -$$a_1 a_2 = 1 \ \mbox{ and } \ b_1 b_2 = -1.$$ -So, $a_1, a_2, b_1, b_2$ are all non-zero and -$$a_2 = {1 \over a_1} \ \mbox{ and } \ b_2 = -{1 \over b_1}.$$ -Therefore, -$$ -\begin{align} -d\left( (a_1, a_2), (b_1, b_2) \right) &= \sqrt{ (a_1-b_1)^2 + (a_2-b_2)^2} \\ &= \sqrt{ (a_1-b_1)^2 + \left( {1 \over a_1} + {1 \over b_1 } \right)^2}. -\end{align} -$$ -So let's consider the function $F \colon \mathbb{R}^2 \setminus \{(0,0)\} \to \mathbb{R}$ defined by -$$F(x,y) \colon= (x-y)^2 + \left({1 \over x } + {1 \over y} \right)^2 \ \mbox{ for all } \ (x,y) \in \mathbb{R}^2 \setminus \{(0,0)\}. $$ -Let's try to minimise this function. The minimum value, if any, is attained at the points $(x,y) \in \mathbb{R}^2 \setminus \{(0,0)\}$ where -$${\partial F \over \partial x } = 0 = {\partial F \over \partial y }.$$ -Now I hope you can continue from here.<|endoftext|> -TITLE: Proof that pointwise equicontinuity on a compact subset of $\mathbb{R}$ implies uniform equicontinuity. -QUESTION [5 upvotes]: I have an idea of the proof of the above statement, but I'm not entirely sure if it's right. Any comments would be appreciated. This question was supposedly answered here but the answer doesn't address the question at all. -Work so far: -Suppose not, i.e. that there exists some $\epsilon > 0$ such that for all $\delta > 0$ and $x,t \in [a,b]$, $|x-t| < \delta$ but $|f_k(x) - f_k(t)| \geq \epsilon$ for some $k \in \mathbb{N}$. Fix this $\epsilon$. -We are given a family of functions $f_n:\mathbb{R} \to \mathbb{R}$ defined on a compact interval $[a,b]$, and that this family is pointwise equicontinuous. Pointwise equicontinuity of {$f_n$} implies that each $f_n$ is continuous, and since the domain is compact, we have that each $f_n$ is uniformly continuous. In particular $f_k$ is uniformly continuous, but this contradicts the hypothesis above. - -REPLY [7 votes]: Let $K \subset \Bbb{R}$ be compact; let $A \neq \varnothing$; let $f_{\alpha}: K \to \Bbb{R}$ for all $\alpha \in A$; let $\{ f_{\alpha} \}_{\alpha \in A}$ be pointwise equicontinuous. For every $c \in K$, if $\varepsilon > 0$ then there is some $\delta_{c} > 0$ such that $x \in B_{K}^{c}(\delta_{c})$ implies $|f_{\alpha}(x) - f_{\alpha}(c)| < \varepsilon/2$ for all $\alpha \in A$. (Here $B_{K}^{c}(\delta_{c})$ denotes the open ball formed over $K$ of center $c$ and radius $\delta_{c}$.) We have $\bigcup_{c \in K}B_{K}^{c}(\delta_{c}/2) = K$, implying by compactness of $K$ that there are some $c_{1},\dots, c_{n} \in K$ such that -$$ -\bigcup_{j=1}^{n}B_{K}^{c_{j}}(\delta_{c_{j}}/2) = K. 
-$$ -If $x \in K$, then there is some $1 \leq j \leq n$ such that $x \in B_{K}^{c_{j}}(\delta_{c_{j}}/2)$ and hence $\in B_{K}^{c_{j}}(\delta_{c_{j}})$; if $|x-y| < \min_{1 \leq i \leq n}\delta_{c_{i}}/2$, then $|y - c_{j}| \leq |y-x| + |x-c_{j}| < \delta_{c_{j}}/2 + \delta_{c_{j}}/2 = \delta_{c_{j}}$, implying that $y \in B_{K}^{c_{j}}(\delta_{c_{j}})$, implying that -$$ -|f_{\alpha}(x) - f_{\alpha}(y)| \leq |f_{\alpha}(x) - f_{\alpha}(c_{j})| + |f_{\alpha}(c_{j}) - f_{\alpha}(y)| < \frac{\varepsilon}{2} + \frac{\varepsilon}{2} = \varepsilon -$$ -for all $\alpha \in A$; -this shows the desired uniform equicontinuity.<|endoftext|> -TITLE: Show that $x^3 \equiv 3 \pmod{p}$ is solvable if $p$ is of the form $6m+5$. -QUESTION [9 upvotes]: The question is: - -Show that $x^3 \equiv 3 \pmod{p}$ is solvable if $p$ is of the form $6m+5$. How many solutions are there? - -Any help/hints would be appreciated! - -REPLY [2 votes]: It as simple as that: -The map $(\mathbb Z/p\mathbb Z)^* \to (\mathbb Z/p\mathbb Z)^*, x \mapsto x^3$ is injective, since there is no element of order $3$ in a group of size $6m+4$. -An injective self-mapping on a finite set is bijective. This also answers the second question. -Of course we can easily broaden the result without any more work: Whenver $m$ and $p-1$ are co-prime, we obtain a unique solution for the equation $x^m=a \pmod p$ for any $a$.<|endoftext|> -TITLE: $\sum^{\infty}_{n=1}x_n$ converges $\Rightarrow$ $\sum^{\infty}_{n=1}x_n^3$ converges also? -QUESTION [7 upvotes]: Let the series $\sum^{\infty}_{n=1}x_n$ converge. $\{x_n\}_{n=1}^{\infty} \in \mathbb R$ -Does $\sum^{\infty}_{n=1}x_n^3$ converge too? - -I tried to find some counter-examples but found none. I tried to prove this also, but I can't... I don't know even if it's true or not. - -REPLY [4 votes]: No, if the signs of the $x_n$ are not the same; here's a fairly typical counterexample. Define the series $\sum_nx_n$ as follows. Its positive terms are $1/n^{1/3}$ for all natural numbers $n$, each of these terms occurring exactly once. Its negative terms are $-1/n^{4/3}$, each occurring exactly $n$ times, immediately after the corresponding positive term $1/n^{1/3}$. In other words, the series consists of blocks of consecutive terms, where the $n$th black consists of $1/n^{1/3}$ followed by $n$ consecutive occurrences of $-1/n^{4/3}$. Notice that each block has sum $0$, from which it easily follows that the whole series converges to $0$. Now consider what happens when you cube each term. The $n$th block of the new series consists of a positive term $1/n$ followed by $n$ consecutive negative terms $-1/n^4$. The $n$ negative terms in the $n$th block add up to $-1/n^3$ which is far from cancelling the positive term $1/n$. Indeed, except for the first block or two, the sum of the $n$th block will be positive and larger than $1/(2n)$. Since the harmonic series diverges, this means that the series of cubes diverges to $+\infty$.<|endoftext|> -TITLE: evaluating using fundamental theorem of calculus -QUESTION [7 upvotes]: Let $$F(x)=\int_0^{x}\ tf (x^2-t^2)\,dt$$ -Find $F'(x)$. -I know that I need to apply the fundamental theorem of calculus. As for the next part, I tried to substitute in $u=x^2-t^2$ but I don't know how to proceed. - -REPLY [8 votes]: If you substitute $u = x^2 - t^2$ then $du = -2t \, dt$. If $t = 0$ then $u = x^2$. If $t = x$ then $u = 0$. Thus, we have -$$ F(x) = -\frac{1}{2} \int_{0}^x f(x^2 - t^2) (-2t) \, dt = -\frac{1}{2} \int_{x^2}^{0} f(u) \, du = \frac{1}{2} \int_0^{x^2} f(u) \, du. 
$$
-Using the fundamental theorem of calculus, you have
-$$ F'(x) = \frac{1}{2} f(x^2) (x^2)' = x f(x^2). $$

-REPLY [2 votes]: Hint:
-We have
-$$
-F(x) = \int_{0}^{x}tf(x^{2}-t^{2})dt = \frac{1}{2}\int_{0}^{x^{2}}f(u)du
-$$
-for all suitable $x$; here we used the substitution $u := x^{2}-t^{2}$. Now $F$ is a composite map; apply the chain rule along with the fundamental theorem to conclude.<|endoftext|>
-TITLE: Measure of the irrational numbers?
-QUESTION [16 upvotes]: I have read that the measure of the irrational numbers on an interval $[a,b] = b-a$. This both makes sense and doesn't make sense to me. If you consider that the union of the irrationals with the rationals are the reals, then if the rationals have measure 0, then the irrationals must have the same measure as the reals. (right?)
-But if you consider that the irrationals are totally disconnected, then it is a collection of disconnected points. If it's a collection of disconnected points, like the rationals, how can it have a non-zero measure?
-Another fact I'm not sold on, which may be the source of the problem, is that there isn't a bijection between the rationals and the irrationals. If both of these sets are totally disconnected by each other and both dense, that is, "For all $a,b \in \mathbb{R}$ such that $a < b$, there exists an irrational $i$ and a rational $q$ satisfying $a < i < b$, and $a < q < b$."
-I know of the diagonalization argument but I've just never been completely sold on this fact. For the irrationals to be uncountable and the rationals to be countable, in my head it would make more sense if there exists an $\epsilon > 0$ such that around any irrational number there exists only other irrational numbers. I know that if it exists you could just sum enough of those epsilons together to get the length of it greater than some $\frac{p}{q}$ thus finding a contradiction... but I'm still not sold on it.
-Can someone help me finally understand these concepts?

-REPLY [8 votes]: The argument in your first paragraph is correct.
-For the second paragraph, why do you think that being disconnected is of any relevance? The rational numbers are of zero measure because there are only countably many of them. The set of irrationals is not countable, therefore it can (and indeed does) have a non-zero measure.
-On your third paragraph: It is true that between any two rationals there's an irrational, and between any two irrationals there's a rational. However, there is not only one irrational between two rationals, and not only one rational between two irrationals. Rather there are countably many rationals between two irrationals, but uncountably many irrationals between two rationals.
-In other words, it's not that you have an alternating sequence of rational and irrational numbers (which is what you seem to have in mind). In particular, you cannot define “the next larger rational” nor “the next larger irrational.”
-Although the rationals are already dense, the irrationals are in a sense even more dense.
-Maybe it helps if instead of the rationals, you consider the numbers with finite decimal expansion, which also is a countable dense subset. Here, it might be more intuitive that you have many more possibilities for numbers if you can arbitrarily continue your number string infinitely than if you are forced to stop after finitely many digits.
Yet again, between any two finite-expansion numbers, you always can find an infinite-expansion number (just add infinitely many non-zero digits to the smaller one, after first padding it to the length of the larger with zeroes if necessary), and between any two infinite-expansion numbers you can find a finite-expansion number (just cut the larger one at an appropriate point).<|endoftext|> -TITLE: (Anti-) Holomorphic significance? -QUESTION [9 upvotes]: What are holomorphic and anti-holomorphic components? Why don't we call them complex components and their conjugates? What is holomorphic coordinate transformation? - -REPLY [8 votes]: $\newcommand{\dd}{\partial}\newcommand{\Reals}{\mathbf{R}}\newcommand{\Cpx}{\mathbf{C}}$The $z^{\beta}$ are emphatically not holomorphic coordinates in general. "Holomorphic" refers to overlap maps being holomorphic, i.e., satisfying the Cauchy-Riemann equations, not merely to the coordinates being complex-valued. -In more detail, fix an identification of $\Cpx^{n}$ and $\Reals^{2n}$, say $z^{\beta} = x^{\beta} + iy^{\beta}$, and identify -$$ -(z^{1}, \dots, z^{n}) \leftrightarrow (x^{1}, \dots, x^{n}, y^{1}, \dots, y^{n}). -$$ -Define the complex-valued differential operators -$$ -\frac{\dd}{\dd z^{\beta}} - = \frac{1}{2}\left(\frac{\dd}{\dd x^{\beta}} - i\frac{\dd}{\dd y^{\beta}}\right),\qquad -\frac{\dd}{\dd \bar{z}^{\beta}} - = \frac{1}{2}\left(\frac{\dd}{\dd x^{\beta}} + i\frac{\dd}{\dd y^{\beta}}\right), -$$ -and their dual one-forms $dz^{\beta} = dx^{\beta} + i\, dy^{\beta}$, $d\bar{z}^{\beta} = dx^{\beta} - i\, dy^{\beta}$. -A $C^{1}$ function $f:\Cpx^{n} \to \Cpx$ is holomorphic if $\dfrac{\dd f}{\dd \bar{z}^{\beta}} = 0$ for all $\beta$, and is anti-holomorphic if $\dfrac{\dd f}{\dd z^{\beta}} = 0$ for all $\beta$, i.e., if $\bar{f}$ is holomorphic. A smooth mapping $f:\Cpx^{n} \to \Cpx^{n}$ is holomorphic if each component function is holomorphic. -Holomorphic maps constitute a (rich but) thin subset of all smooth mappings. Among their many nice properties, a holomorphic mapping $F$ is complex analytic (representable by a convergent power series in a neighborhood of each point), and is locally invertible near a point $p$ (i.e., is a local biholomorphism at $p$) if and only if its derivative $dF(p)$ (which is a complex-linear mapping) is invertible (non-singular). -The chain rule guarantees a composition of holomorphic maps is holomorphic, so the set of local biholomorphisms forms a pseudo-group. (Loosely, forms a group, modulo fine print about composability.) It therefore makes sense to speak of a "holomorphic structure" or a "holomorphic coordinate system" by declaring two $\Cpx^{n}$-valued coordinate systems to be compatible if the change of coordinates (a.k.a. the overlap map) from one to the other is holomorphic. This notion of compatibility is an equivalence relation; particularly, compatibility is transitive. -Anti-holomorphic maps are conjugate-analytic in an obvious sense, and are locally invertible at $p$ if and only if their Jacobian is non-singular. The derivative of an anti-holomorphic map is not complex-linear, however, and a composition of anti-holomorphic maps is not anti-holomorphic. Consequently, there is no notion of an "anti-holomorphic structure" or "anti-holomorphic coordinate system": The corresponding notion of "compatibility" is not transitive. -Loosely, "holomorphic" and "anti-holomorphic" are not "dual" or "perfectly symmetric" concepts; the distinction between them is not arbitrary. 
That said, the definition ultimately rests on a choice of complex square root $i$ of $-1$, and arguably such a choice is arbitrary. But this choice is fixed by the time one even asks about complex differentiability.<|endoftext|> -TITLE: polynomial with all roots on the unit circle -QUESTION [9 upvotes]: I'm wondering if the following statement is true: -if all roots of a polynomial with integer coefficients are on the unit circle, then these roots are in fact roots of unity and the polynomial is a product of cyclotomic polynomial. - -REPLY [13 votes]: This isn't true. Take for example polynomial $5x^2-6x+5$. It's easy to check it has roots $\frac{3}{5}\pm\frac{4}{5}i$, which are both on the unit circle, but neither is a root of unity. -However, if you restrict your attention to monic integer polynomials, then this is indeed correct: it's a result due to Kronecker, and you can see a few proofs of this here.<|endoftext|> -TITLE: The set of traces of orthogonal matrices is compact -QUESTION [8 upvotes]: Is the following set compact: $$M = \{ \operatorname{Tr}(A) : A \in M(n,\mathbb R) \text{ is orthogonal}\}$$ where $\operatorname{Tr}(A) $ denotes the trace of $A$? - -In order to be compact $M$ has to be closed and bounded. -$\|A\|=\sqrt {\sum_{i,j} {a_{ij}}^2}=\sqrt n $ and hence bounded. So $\operatorname{Tr}(A)<\sqrt n $. Hence $M$ is bounded. -Now we have to prove that $M$ is closed. Let $\operatorname{Tr}(A_n)$ be a sequence of matrices converging to $\operatorname{Tr}(A)$ where $A_n$ is a sequence of orthogonal matrices. The only thing remaining to show is that $A$ is orthogonal. - -REPLY [7 votes]: Besides using compactness of the set of orthogonal matrices, you can show directly that the set of possible traces is $[-n,n]$. Note that $I$ and $-I$ are orthogonal with $\text{Tr}(I) = n$ and $\text{Tr}(-I) = -n$. -On the other hand, each matrix element of an orthogonal matrix has absolute value at most $1$, so $\text{Tr}(T) = \sum_{i=1}^n T_{ii}$ has absolute value at most $n$. -To get orthogonal matrices with every trace value from $-n$ to $n$, consider those made from diagonal blocks of the form -$$\pmatrix{\cos \theta & \sin \theta\cr -\sin \theta & \cos\theta}$$ -with an additional diagonal entry of $+1$ or $-1$ in case $n$ is odd.<|endoftext|> -TITLE: Sheldon Ross vs My TA, what answer is wrong? -QUESTION [7 upvotes]: I have the solution of this problem, -1) The game of Clue involves 6 suspects, 6 weapons, and 9 rooms. One of each is chosen randomly and the object of the game is to guess the chosen three. -In one version of the game, the selection is made and then each of the players is -randomly given three of the remaining cards. Let S, W, and R be, respectively, the -number of suspects, weapons, and rooms in the set of three cards given to a specified player. Also, let X denote the number of solutions that are possible after that player observes his or her three cards. -I need to find: (c) Find E[X]. -Solution given by my TA: - -Solution given by Sheldon Ross - -A first Course in Probability (Seven Edition). - -My question, is of what answer is wrong and what answer is correct, and why? - -REPLY [4 votes]: The TA is right, and Ross (or more likely whoever made the solution key for him) is wrong. -The deck has $21$ cards (6 suspects, 6 weapons, 9 rooms) from which the player is given three. However, we know that one suspect, one weapon and one room have been removed from the deck. Thus e.g. 
the probability that the player was given three room cards is not ${9 \choose 3}/{21 \choose 3}$,
-as Ross's solution implies, but ${8 \choose 3}/{18 \choose 3}$.<|endoftext|>
-TITLE: Find $n$ and $k$ if $\binom{n}{k-1}=2002$ and $\binom{n}{k}=3003$
-QUESTION [9 upvotes]: $$\binom{n}{k-1}=2002,\qquad \binom{n}{k}=3003$$
-What are the values for $n$ and $k$?
-My initial idea was to divide those two:
-$$\frac{\binom{n}{k-1}}{\binom{n}{k}}=\frac{2002}{3003}$$
-$$\frac{\frac{n!}{\left(k-1\right)!\left(n+1-k\right)!}}{\frac{n!}{k!\left(n-k\right)!}}=\frac{2}{3}$$
-$$ \frac{k}{n+1-k}=\frac{2}{3} $$
-After that, sum them up:
-$$\binom{n}{k-1}+\binom{n}{k}=\binom{n+1}{k}=5005$$
-And now divide the third by the first term and solve everything.
-However I would get both times $5k = 2n+2$ and that's the problem.

-REPLY [4 votes]: I originally drew a large version of Pascal's Triangle after telling my father that the total number of gifts in The Twelve Days of Christmas song was $$\binom{14}{3} = 364$$ and he complained, "What's wrong with $12?$" I tried to explain how the cumulative sums along the diagonals introduce a bit of an offset: the number of new gifts on day $n$ is $\binom{n}{1},$ the total gifts on day $n$ is $\binom{n+1}{2},$ the total gifts on all days up to and including day $n$ is $\binom{n+2}{3}.$ He didn't like it. Later, I found a piece of paper where he had carefully written out the sums in $\Sigma$ notation, and got it right, of course.
-Back to this problem, I have always liked that consecutive entries in row $14$ give $1001,2002,3003.$ I wonder whether that ever happens again, $A,2A,3A?$ Maybe not: this seems to give a complete answer: Find three consecutive entries of a row of Pascal triangle that are in the ratio of 1 : 2 : 3<|endoftext|>
-TITLE: reference request for a book on high dimensional probability and data analysis written for mathematicians
-QUESTION [11 upvotes]: I hope someone can help with this. I am a statistician looking for a good book on high dimensional probability and data analysis. Basically I am looking for the equivalent of Terry Tao's 2 volume set on Analysis, but for high dimensional probability. Let me qualify what I am looking for.
-Now there are a bunch of books out there with these very words in the title. I will list some below. But most of these are geared towards just pure machine learning folks or computer science. So often books on high dimensional data focus on techniques like Principal Components Analysis or Lasso, etc., to analyze high dimensional data. In developing these models, the authors start off with strong parametric assumptions about exponential family distributions or independence, etc. These books lack any sort of organic development of a theory behind adding dimensions to a data set or changes in the patterns of symmetry as a data set grows larger--both in dimensions and in the number of observations.
-A basic probability text book will begin with a definition of random variables and work its way towards the Central Limit Theorem. While that is good for an intro stats course, there are a lot of problems with assuming normality even in high dimensional situations.
-Examples of such books are:
-Geometric Structure of High-Dimensional Data and Dimensionality Reduction
-Statistics for High-Dimensional Data: Methods, Theory and Applications
-(Please note that I am not criticizing any of the books mentioned. I am just saying that these books don't fit my particular need.)
-So what I am looking for is a more analytic look at how probability varies as dimensions get rather high. I use Terry Tao's book as an example of a wonderful development of analysis from basic foundations. I am looking for the same treatment for high dimensional data. I am not sure if I should be looking at a book on measure theory, or calculus on manifolds, or where? -Any suggestions would really be appreciated. - -REPLY [2 votes]: A recently published book by Wainwright, on high dimensional data, which seems useful -High-dimensional statistics : a non-asymptotic viewpoint<|endoftext|> -TITLE: Finding minimum polynomial of $e^{i\pi/6 }$ -QUESTION [7 upvotes]: Finding minimum polynomial of $e^{i\pi/6 }$: -I know it satisfies $t^6 +1 = 0$. I factorized $t^6+1 = (t^2+1)(t^4-t^2+1)$ -obviously it does not satisfy $t^2 +1$ so it must satisfy $t^4-t^2 +1$. How do I show that this is indeed the minimum? -Is it enough to say: in modulo $2$ we have that $t^4 - t^2 + 1 \equiv t^4 +t^2 + 1$. In mod $2$ we only have 2 elements $0,1$. Obviously none of these satisfy $t^4+t^2+1 = 0$ hence it is irreducible in mod $2$ therefore in $\Bbb Q$? - -REPLY [2 votes]: Here's a slick way of showing irreducibility of $x^4- x^2+1$, using uniqueness of factorization in both $\Bbb Q[x]$ and $K[x]$, where $K$ is the field of Gaussian numbers, $\Bbb Q(i)$. -Note that $x^4-x^2+1=(x^2+ix-1)(x^2-ix-1)$, and that this is a factorization into irreducibles over $K$. If the original quartic had any factorization over $\Bbb Q$, it would also be a factorization over $K$, but this product of quadratics is the only $K$-factorization that there is. So the original quartic is irreducible over the rationals.<|endoftext|> -TITLE: Best books for relearning precalculus maths the right way -QUESTION [10 upvotes]: I am looking for advice on the best texts to remind myself how to do math. I am 35 and an attorney (patent) but was once very good at math (best in my high school, 800 SAT) but then stopped completely after taking Calculus in college as that was all I needed for the MCAT. How I ended up an attorney is a long story... -Anyway, as I get older, I am drawn more and more to understanding simply for understanding's sake. I always enjoyed math because it required a lot of mental effort that led to a real, correct answer. I've gone back to math now and then for fun, took a class at a local college years ago but now I've decided I want to go back again and pick up where I left off but focus more on a rigorous understanding and not just the computational Calculus I took in high school and again in college. I searched online, including reading lots of threads on this site, and ultimately went with Apostol's Calculus Vol 1 and planned to look at the notes in MITs 18.014 class (Calculus w/ Theory). -Long story long, I received the book, got to the first proof in the introduction and realized I have forgotten just about everything. My arithmetic skills are still excellent as even as an attorney those get used day to day, but he might as well have been speaking Greek when he proved the area under the curve/method of exhaustion. -So, hence why I am here. :) I want to find the best book/books to re-ground myself in the mathematics needed to make a go at a rigorous study of calculus. There is no career goal in mind here, I just want to do it for the sake of it. 
-I came up with several ideas from reading threads here or reading Amazon reviews, including:
-Gelfand (Algebra and Trigonometry)
-Lang (Basic Mathematics)
-Simmons (Precalculus in a Nutshell)
-Velleman (How to Prove It)
-and I have Euclid already on the shelf, hah.
-I am a perfectionist, fortunately or unfortunately, so if I am going to go through the effort, I want to do it well and with full understanding, not just blow through some Khan Academy videos and on to Calculus. But, I also need to be efficient.
-Does the list above make sense? Are they redundant or is anything missing? Any other recommendations?
-Thanks very much.

-REPLY [3 votes]: If you want a slightly rigorous pre-calculus re-education, you could try Hall and Knight's Higher Algebra. Alternatively, you could take a look at the pre-calculus section in Thomas' Calculus or Stewart's Calculus. Both are intro to high-level calculus books and the first chapter is about the real-number system, etc. And I'd recommend using Thomas' Calc. instead of Apostol's (although Apostol's is also fantastic) because Thomas' has exhaustive explanations for each chapter. And if you really want to start right at the basics for pre-calculus, you could always check out Khan Academy's videos; you need not use it as the only source, as you say you don't want to blow through them, but they do provide an intuitive understanding.
-Note: Higher Algebra has more than a basic precalculus education, especially the later chapters!<|endoftext|>
-TITLE: A series related to Catalan numbers
-QUESTION [8 upvotes]: Recall the definition of Catalan numbers:
-$$C_n=\frac1{n+1}\binom{2n}n=\frac{2^n(2n-1)!!}{(n+1)!}.\tag1$$
-Now consider the following series with a parameter $n\in\mathbb N^+$:
-$$S_n=\frac{2\cdot18^n}{3\sqrt5}\cdot\sum_{k=1}^\infty\frac{C_{2k}\cdot k^n}{5^{2k}}.\tag2$$
-It appears that $S_n$ is always an integer (and odd), with the following values for $n=1,2,3,...:$
-$$1, 43, 3009, 318507, 46065921, 8482079403, 1899432317889, 501282878789547,...$$
-(click here to see more terms)

-Can we prove that $S_n$ is always an integer (odd)? Can we find an explicit formula for $S_n$ not involving an infinite summation?

-REPLY [5 votes]: Let
-$$f(x) = \sum_{k=1}^{\infty} \frac1{2 k+1} \binom{4 k}{2 k} x^k $$
-Then differentiating, multiplying, integrating, etc., we find that
-$$f(x) = \frac1{4 \sqrt{x}} \left [(1+4 \sqrt{x})^{1/2} - (1-4 \sqrt{x})^{1/2} \right ] - 1$$
-It should be plain that, if we define the operator
-$$\mathcal{D} = x \frac{d}{dx} $$
-then
-$$S_n = \frac{2 \cdot 18^n}{3 \sqrt{5}} \mathcal{D}^n f\left (\frac1{25} \right ) $$
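-As a quick check of this closed form against the listed values, here is a sympy sketch (my own verification, not part of the derivation; it just applies $\mathcal{D}$ repeatedly and evaluates numerically):

```python
import sympy as sp

x = sp.symbols('x', positive=True)
# closed form for f(x) = sum_{k>=1} binom(4k,2k)/(2k+1) x^k, as derived above
f = (sp.sqrt(1 + 4*sp.sqrt(x)) - sp.sqrt(1 - 4*sp.sqrt(x))) / (4*sp.sqrt(x)) - 1

g = f
for n in range(1, 4):
    g = x * sp.diff(g, x)                      # apply the operator D = x d/dx
    Sn = 2 * 18**n / (3 * sp.sqrt(5)) * g.subs(x, sp.Rational(1, 25))
    print(n, sp.N(Sn, 20))                     # expect 1, 43, 3009
```

<|endoftext|>
-TITLE: Prove $ \sum_{k=0}^n k4^k = \frac 49((3n-1)4^n + 1) $ by induction
-QUESTION [5 upvotes]: Prove that for every positive integer $n$ that
-$$ \sum_{k=0}^n k4^k = \frac 49((3n-1)4^n + 1) $$
-Proof: Let $P(n)$ denote the above statement.
-Base case: $n=1$ : Note that $$ \sum_{k=1}^1 k4^k = \frac 49((3(1)-1)4^{(1)} + 1) $$
-$\frac 49((3(1)-1)4^{(1)} + 1) = \frac49((2)4+1) = \frac49(8+1) = \frac 49(9) = 4$
-$k4^k = (1)4^{(1)} = 4$
-So P(1) holds.
-Inductive Step: Let $s\ge1$. Assume P(s), so
-$$ \sum_{k=1}^s k4^k = \frac 49((3s-1)4^s + 1) $$
-Note
-$$ \sum_{k=1}^{s+1} k4^k = \sum_{k=1}^{s} k4^k + (s+1)4^{s+1} $$
-and by inductive hypothesis:
-**
-$$ \frac 49((3s-1)4^s + 1) + (s+1)4^{s+1} $$
-**
-I'm afraid I'm stuck after this point. I know my endpoint needs to be:
-$$ \sum_{k=1}^{s+1} k4^k = \frac 49((3(s+1)-1)4^{s+1} + 1) $$
-but I don't know how to get from the asterisks to the above.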
-
-REPLY [2 votes]: All you need to do is expand the expression you got after using the inductive hypothesis (in general, if you don't know where to go, at least do something you know you can do!). You get:
-$$
-\begin{split} \frac{4}{9}((3s-1)4^s+1)+(s+1)4^{s+1}&= \frac{1}{9}(3s-1)4^{s+1}+\frac{4}{9}+s\cdot 4^{s+1}+4^{s+1} \\ &=\frac{1}{3}s\cdot 4^{s+1}-\frac{1}{9}4^{s+1}+\frac{4}{9}+s4^{s+1}+4^{s+1} \\ &=4^{s+1}\left[\frac{s}{3}-\frac{1}{9}+s+1 \right]+\frac{4}{9} \\ &= 4^{s+1} \left[ \frac{3s-1+9s+9}{9}\right]+\frac{4}{9} \\ &= 4^{s+1} \left[\frac{12s+8}{9} \right]+\frac{4}{9} \\ &= \frac{4}{9}[(3s+2)4^{s+1}+1] \\ &=\frac{4}{9}[(3(s+1)-1)4^{s+1}+1] \end{split}
-$$<|endoftext|>
-TITLE: Polynomials closed under composition
-QUESTION [7 upvotes]: I'm planning on being a TA for a computer science class and I'm reviewing a few things that have slipped my memory. Currently I'm working on this:
-
-Show that the polynomials are closed under composition such that for all polynomials $x,y : \mathbb{R} \rightarrow \mathbb{R}$, the function $z: \mathbb{R} \rightarrow \mathbb{R}$ defined by $z(n) = x(y(n))$ is also a polynomial.
-
-I've tried several approaches on paper, but I can't come up with a cohesive answer.
-
-REPLY [4 votes]: A good computer-sciencey approach would be to prove generally, by structural induction:
-
-Lemma. If $E$ is any expression built from the operators $+$ and $\times$, real constants, and a single variable $\mathtt X$, then the function that results from evaluating $E$ with an input value bound to $\mathtt X$ is a polynomial.
-
-Next, if $p$ and $q$ are polynomials, then simply unfolding their definitions in the expression $p(q(\mathtt X))$ gives you something built out of $+$, $\times$, real constants, and the variable symbol, to which you can apply the lemma.
-
-REPLY [2 votes]: Every polynomial is
-$$
-p(x) = \cdots\cdots + \square x^k + \cdots\cdots
-$$
-(where each $\square$ is a coefficient). So
-\begin{align}
-p(q(x)) & = \cdots\cdots+\square(q(x))^k+\cdots\cdots \\
-& = \cdots\cdots+ \square\Big( \square x^m + \square x^{m-1} + \square x^{m-2} + \cdots + \square \Big)^k +\cdots\cdots.
-\end{align}
-How do you expand $(a+b+c+\cdots+z)^k$? When you expand it, each term is a constant times a power of $x$. So you end up with a sum of finitely many of those.<|endoftext|>
-TITLE: Plotting $\left(1+\frac{1}{x^n}\right)^{x^n}$.
-QUESTION [15 upvotes]: When I plot the following function, the graph behaves strangely:
-$$f(x) = \left(1+\frac{1}{x^{16}}\right)^{x^{16}}$$
-While $\lim_{x\to +\infty} f(x) = e$, the graph starts to fade at $x \approx 6$. What's going on here? (Plotted on my trusty old 32-bit PC.)
-
-I guess it's because of computer approximation and loss of significant digits. So I started calculating the binary representation to see if this is the case. However, in my calculations values of $x=8$ should still behave nicely.
-If computer approximation were the problem, then plotting this function on a 64-bit PC should evade the problem (a bit). I tried the Wolfram Alpha servers:
-
-The problem remains for the same values of $x$.
-Questions
-
-Could someone pinpoint the problem? What about the 32- vs 64-bit plot?
-Is there a way to predict at which $x$-value the graph of the function below would start to fail?
-$$f_n(x) = \left(1+\frac{1}{x^n}\right)^{x^n}$$
-
-REPLY [3 votes]: Here is a plot using the (arbitrary precision) calculator Pari/GP.
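-(The same high-precision experiment is easy to reproduce outside Pari/GP; a rough Python/mpmath equivalent of the evaluation, with the actual Pari/GP session not shown here, would be:)
-
-    from mpmath import mp, mpf
-
-    mp.dps = 200                       # 200 significant decimal digits
-    def f(x):
-        x = mpf(x)
-        return (1 + 1 / x**16) ** (x**16)
-    for x in (6, 8, 20):
-        print(x, f(x))                 # smoothly approaches e, no oscillation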
I use 200 decimal digits of precision as default in my computations and got this plot without oscillation up to $x=20$:
-
-I tried it so far up to $x=256$; no oscillation. See here the image up to $x=128$ (just to have the left increase visible)<|endoftext|>
-TITLE: On the many patterns of Euler totient function manipulations
-QUESTION [5 upvotes]: The full title for this question should be On the many patterns of Euler totient function manipulations and their tendency towards symmetry at primorials (but I didn't want to take up too much room). For the rest of the question, $\#_n$ will represent the $n$th primorial.
-Question / observations
-A plot of $n/\phi(n)$ for $1\leq n\leq \#_4$ groups each $n$ that has an identical $\phi(n)$ value:
-
-It can be seen, by drawing lines reflected in $\#_4/2$, that there is a tendency towards symmetry. The cases where $|n/\phi(n)-(\#_4-n)/\phi(\#_4-n)|=0$ with $n$ prime are clearly the additive Goldbach prime partitions of $\#_4.$
-
-The tendency for all $|n/\phi(n)-(\#_k-n)/\phi(\#_k-n)|\rightarrow 0$ for some $k$ shows a tendency towards symmetry:
-
-At the primorials, the plot of $|n/\phi(n)-(\#_k-n)/\phi(\#_k-n)|$ for prime values of $n$ only looks like this for $k=5,$ surprisingly regular in comparison to non-primorial multiples (right):
-
-
-Similarly, limiting $n$ to values of equal $\phi(N)$ where $N$ is a primorial results in clear patterns (left), whereas for non-primorial multiples, a completely different behaviour is exhibited (right):
-
-
-Presumably, many of these patterns have simple explanations, but is something deeper going on here? I can't explain many of the patterns occurring here, in particular the striking differences between primorials and non-primorials. Can someone help to explain what is going on in some of these images?
-MMA code for above plots
-
-REPLY [5 votes]: If $n = \prod_k p_k^{e_k}$ is the prime factorization of $n$, then $\varphi(n) = \prod_k (p_k - 1) p_k^{e_k - 1}$, and hence
-$$\frac{\varphi(n)}{n} = \prod_k \frac{p_k - 1}{p_k} = \prod_{p \mid n} \left( 1 - \frac{1}{p} \right).$$
-In particular, this quotient only depends on which primes appear in the prime factorization of $n$ (and not on their multiplicities). Primorials keep getting divisible by more and more primes, so this function behaves quite nicely for them, but regular ol' numbers alternate between being divisible and not being divisible by various primes, so there's more variation there.
-This identity has a nice probabilistic interpretation: $\frac{\varphi(n)}{n}$ is the probability that a random number between $0$ and $n-1$ is relatively prime to $n$, and the RHS says that a number is relatively prime to $n$ iff it shares no prime factors with $n$, and furthermore that each of these events is independent.<|endoftext|>
-TITLE: Proving unit of quartic number field is fundamental
-QUESTION [6 upvotes]: Let $K = \mathbb Q(\alpha)$, for $\alpha$ a root of $\alpha^4 + 4 \alpha^2 + 2 = 0$. I want to prove the group of units $\mathcal O_K^*$ equals $\langle -1, \alpha^2 + 1\rangle$.
-I've found the ring of integers $\mathcal O_K$ is $\mathbb Z[\alpha]$, and that the number field has a trivial class group. Dirichlet's Unit Theorem tells me that since the number of pairs of complex embeddings is $2$, the group of units is of rank $1$, i.e. $\mathcal O_K^* = \langle -1, \varepsilon \rangle$.
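-(That $\alpha^2+1$ is at least a unit can be spot-checked by computing its field norm as a resultant; a small sympy sketch, with the tooling being my own assumption:)
-
-    from sympy import symbols, resultant
-
-    x = symbols('x')
-    f = x**4 + 4*x**2 + 2              # minimal polynomial of alpha
-    # Since f is monic, resultant(f, g, x) equals the product of g over
-    # the roots of f, i.e. the field norm of g(alpha).
-    print(resultant(f, x**2 + 1, x))   # 1, so alpha^2 + 1 is a unit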
-I suspect that the torsion group $\mu(K)$ equals $\langle -1 \rangle$ because we can embed $\alpha \mapsto \sqrt{-2 + \sqrt{2}} \in \mathbb C$, which has absolute value greater than $1$, but I'm not sure whether that reasoning is correct.
-One result that might be helpful is that I can prove that for any unit $\nu$, $\nu^2$ is of the form $a + b \alpha^2$ for integers $a, b \in \mathbb Z$, but I tried exploring that and it didn't really lead anywhere.
-
-REPLY [2 votes]: First of all: your reasoning about the torsion is not correct: I can embed $\mathbb{Q}(\sqrt{-3})$ into $\mathbb{C}$ in the obvious way, but it has torsion subgroup $\mathbb{Z}/6\mathbb{Z}$ although $\sqrt{-3}\in\mathbb{C}$ has absolute value greater than $1$.
-It is still true, however, that $\mu(K)=\langle -1\rangle$. Note that if $\zeta_n\in K$ for some integer $n>1$ then $\mathbb{Q}(\zeta_n)\subset K$, so by determining the subfields of $K$ you can determine the torsion. You can check that $K/\mathbb{Q}$ is Galois with cyclic Galois group, so there is a unique quadratic subfield, which is $\mathbb{Q}(\sqrt{2})$. This is not cyclotomic, and $K$ itself also isn't ($K$ is not $\mathbb{Q}(\zeta_5)$ as $5$ is unramified in $K$, and not $\mathbb{Q}(\zeta_8)$ as that field is not cyclic). Thus the only cyclotomic number field inside $K$ is $\mathbb{Q}=\mathbb{Q}(\zeta_2)$, hence $\mu(K)=\langle -1\rangle$.
-Now some notation: let $E=\mathbb{Q}(\sqrt{2})$ be the quadratic subfield. You want to prove that $\mathcal{O}_K^{*}=\mathcal{O}_E^{*}$. You claim that $\mathcal{O}_K^{*2}\subset\mathcal{O}_E^{*}$, and I'd like to know how you proved that, because with that I can give a proof:
-We have $\mathcal{O}_K^{*2}\subset\mathcal{O}_E^{*}\subset\mathcal{O}_K^{*}$ where $[\mathcal{O}_K^{*}:\mathcal{O}_K^{*2}]=4$. Thus $\mathcal{O}_K^{*}=\mathcal{O}_E^{*}$ follows by proving that $\mathcal{O}_E^{*}/\mathcal{O}_K^{*2}$ is a group of order $4$. Now $-1\in\mathcal{O}_E^{*}$ is not a square in $\mathcal{O}_K^{*}$ as we have seen that $\mathbb{Q}(\zeta_4)\not\subset K$, so $-1$ is a non-trivial element of $\mathcal{O}_E^{*}/\mathcal{O}_K^{*2}$. To see that $\varepsilon=1+\alpha^2$ is another non-trivial element, we need to show that $\varepsilon$ and $-\varepsilon$ are not squares in $\mathcal{O}_K^{*}$.
-This is most easily done by finding suitable primes $\mathfrak{p}$ in $\mathcal{O}_K$ (one for $\varepsilon$ and one for $-\varepsilon$) such that $\pm\varepsilon$ is not a square in $\mathcal{O}_K/\mathfrak{p}$. Since $\mathcal{O}_K=\mathbb{Z}[\alpha]$ we use the Kummer-Dedekind factorisation theorem. After trying some primes we see that $\mathfrak{p}=(17,\alpha-2)$ is a prime of norm $17$ which settles both $\varepsilon$ and $-\varepsilon$: under the quotient map $\mathcal{O}_K\to\mathcal{O}_K/\mathfrak{p}=\mathbb{F}_{17}$ we see that $\varepsilon=1+\alpha^2$ maps to $5\notin\mathbb{F}_{17}^{*2}$. Note that as $17\equiv 1\bmod 4$ we have $-1\in\mathbb{F}_{17}^{*2}$, so $-\varepsilon$ also gets mapped to a non-square, which completes the argument.<|endoftext|>
-TITLE: Counting cycle structures in $S_n$
-QUESTION [10 upvotes]: Is there a fast way to count all the different types of cycle structures in a symmetric group?
-More specifically: "How many elements are there in $S_8$ of cycle structure $4^2$?"
-Here, $4^2$ means a permutation in $S_8$ that is the product of two 4-cycles, i.e.:
-(1 2 3 4)(5 6 7 8) would be an element. (2 4 6 8)(1 3 5 7) would also be an element.
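-For a case this small, the count can simply be brute-forced; a throwaway Python sketch (standard library only, my addition):
-
-    from itertools import permutations
-
-    def cycle_type(p):                     # p maps i -> p[i]
-        seen, lengths = set(), []
-        for i in range(len(p)):
-            if i not in seen:
-                j, length = i, 0
-                while j not in seen:
-                    seen.add(j)
-                    j = p[j]
-                    length += 1
-                lengths.append(length)
-        return sorted(lengths)
-
-    print(sum(1 for p in permutations(range(8)) if cycle_type(p) == [4, 4]))
-    # prints 1260, which equals 8!/(4^2 * 2!)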
Is there a general formula that would tell me the number of elements in this set? Or do I just have to calculate each one? Thanks!
-
-REPLY [6 votes]: You can count the cycle structures in $S_n$ using combinatorial techniques. For your example, in $S_8$ you want a product of two 4-cycles: $(a_1 \, a_2 \, a_3 \, a_4)(a_5 \, a_6 \, a_7 \, a_8)$.
-Start by counting the number of ways to set up the leftmost cycle. You have 8 choices for $a_1$, 7 choices for $a_2$, 6 choices for $a_3$ and 5 choices for $a_4$. But we have overcounted. Notice that for any 4-cycle in $S_8$, $(a_1 \, a_2 \, a_3 \, a_4)=(a_2 \, a_3 \, a_4 \, a_1)=(a_3 \, a_4 \, a_1 \, a_2)=(a_4 \, a_1 \, a_2 \, a_3)$. This tells us that we have overcounted by a factor of 4 for the first 4-cycle, so we must divide by 4. This yields $\frac{8\cdot7\cdot6\cdot5}{4}$ possible 4-cycles for the leftmost cycle.
-Next, count the number of ways to set up the rightmost cycle. You have already used up 4 out of the 8 numbers available, so you only have 4 choices for $a_5$, 3 choices for $a_6$, 2 choices for $a_7$ and 1 choice for $a_8$. Again, we have overcounted by a factor of four, so we have $\frac{4\cdot 3\cdot 2 \cdot 1}{4}$ possible 4-cycles for the rightmost cycle.
-Finally, since $(a_1 \, a_2 \, a_3 \, a_4)(a_5 \, a_6 \, a_7 \, a_8)=(a_5 \, a_6 \, a_7 \, a_8)(a_1 \, a_2 \, a_3 \, a_4)$, we need to divide by 2 (since right now we are counting, say, (1 2 3 4)(5 6 7 8) and (5 6 7 8)(1 2 3 4) separately).
-So in $S_8$ we have $\frac{8 \cdot 7\cdot 6 \cdot 5}{4} \cdot \frac{4 \cdot 3 \cdot 2 \cdot 1}{4} \cdot \frac{1}{2}=\frac{8!}{4^2 2!}$.
-(This coincides with the formula you have already been given.)<|endoftext|>
-TITLE: Misinterpretations of Hilbert's Theorem?
-QUESTION [8 upvotes]: I've seen a few posts here that make certain claims that are related to Hilbert's theorem. For instance:
-"I know that there is no complete surface embedded in $\Bbb R^3$ of constant curvature $-k$ for any $k$."
-"However, as Hilbert showed us, the reverse is not true; we cannot embed the hyperbolic plane into Euclidean 3-space."
-"$H^2$ does not isometrically embed in $R^3$ (Hilbert's theorem)."
-However, these should only apply to smooth ($C^\infty$) embeddings. Nash-Kuiper, particularly, guarantees the existence of an isometric $C^1$ embedding, since the Klein disk model is a short map from $H^2 \to \Bbb R^3$.
-This was noted in the answer to this MO question.
-There's also this answer to the same question, which says there's no $C^2$ embedding.
-But then, on the other hand, this page on hyperbolic crochet at Cornell claims that the crochet models are not $C^1$ and can be extended indefinitely, which would appear to violate Hilbert's theorem.
-At this point, I'm very confused about what is and isn't allowed regarding isometric embeddings of $H^2$ into $\Bbb R^3$. Obviously no infinitely differentiable isometric embedding exists, but there are obviously $C^1$ embeddings of the whole space. Furthermore, there are also isometric embeddings of compact subsets of $H^2$ into $R^3$, such as the crochet models, and it's not clear what differentiability restrictions there are for that.
-Is there some way to better understand the slew of somewhat-contradictory statements made above? What, exactly, is and isn't allowed regarding isometric embeddings of $H^2$ into $R^3$?
-
-REPLY [3 votes]: A close look at the hyperbolic crochet article that you linked reveals very little mathematical content to it.
The link does not even propose a Lipschitz isometric embedding of the hyperbolic plane into $E^3$. This might be a good mathematical PR article, but not more than that. No wonder one gets confused trying to read it. Can this paper be made precise and yield an actual mathematical theorem? Who knows (the paper is by now 16 years old); until then, I suggest it should not be used as a reference.
-The first proof of nonexistence of a $C^2$-smooth isometric immersion of the hyperbolic plane into $E^3$ is due to Efimov:
-N. V. Efimov, Impossibility of a complete regular surface in Euclidean 3-space whose Gaussian curvature has a negative upper bound, Doklady Akad. Nauk. SSSR 150 (1963), 1206–1209 (Russian); Engl. transl. in Sov. Math. (Doklady) 4 (1963), 843–846.
-N. V. Efimov, Differential criteria for homeomorphism of certain mappings with application to the theory of surfaces, Mat. Sb., Nov. Ser. 76 (1968) (Russian); Engl. transl. in Math. USSR (Sbornik) 5 (1968), 475–488.
-
-In fact, he proves an even stronger result:
-Theorem. Suppose that $M$ is a simply connected complete Riemannian surface of sectional curvature $\le k<0$. Then there is no $C^2$-smooth isometric immersion of $M$ into $E^3$.
-This is a vast generalization of Hilbert's theorem on nonexistence of $C^\infty$-smooth isometric immersions of the hyperbolic plane into $E^3$.
-
-The pseudosphere provides a $C^\infty$-smooth isometric immersion of a horodisk in the hyperbolic plane into $E^3$. In particular, hyperbolic disks of arbitrarily large radius can be isometrically immersed into $E^3$.
-It appears to be an open problem: what is the largest radius of a hyperbolic disk that can be $C^2$-smoothly isometrically embedded into $E^3$?
-As a special case of the Nash-Kuiper isometric embedding theorem, there exists a $C^1$-smooth isometric embedding of the hyperbolic plane into $E^3$.
-For further reading:
-
-Qing Han and Jia-Xing Hong, Isometric Embedding of Riemannian Manifolds in Euclidean Spaces, AMS Mathematical Surveys and Monographs, vol. 130, 2006.<|endoftext|>
-TITLE: About Wald's equation, why can't I simply use total expectation to prove?
-QUESTION [5 upvotes]: Wald's equation: for i.i.d. random variables $X_i$, if $N$ is a stopping time, then:
-$$E \sum_{i=1}^N X_i = E[N] E[X_1]$$
-I just read this equation, but it seems to me I can do:
-$$E \sum_{i=1}^N X_i =E_N [\sum_{i=1}^N X_i|N]=E_N[NEX_i]=E[N]E[X_1]$$
-I don't know what's wrong with this proof. But this proof does not use the fact that $N$ is a stopping time.
-
-REPLY [4 votes]: Wald's identity fails if $N$ and $X_i$ are too strongly coupled to one another. Wikipedia gives a nice example to illustrate this point: if you have a sequence of iid Bernoulli(1/2) variables $X_n$, and you set $N=1-X_1$, then $E \left [ \sum_{i=1}^N X_i \right ] = 0$ (because the sum is either empty or a sum of a single zero) but $E[N]E[X_i]=1/4$.
-That said, with no additional assumptions, you can write this:
-$$E \left [ \sum_{i=1}^N X_i \right ] = \sum_{n=1}^\infty E \left [ \left. \sum_{i=1}^n X_i \right | N=n \right ] P(N=n).$$
-You can also use linearity of conditional expectation to get
-$$E \left [ \sum_{i=1}^N X_i \right ] = \sum_{n=1}^\infty \sum_{i=1}^n E[X_i \mid N=n] P(N=n).$$
-But you cannot replace $E[X_i \mid N=n]$ by $E[X_i]$ unless you have some independence-type hypothesis.
-(For instance, in Wikipedia's example, $E[X_i \mid N=0]=1$ and $E[X_i \mid N=1]=0$.)
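-A short simulation of that counterexample makes the failure tangible; a minimal Python sketch (my addition):
-
-    import random
-
-    trials, total = 10**6, 0
-    for _ in range(trials):
-        x1 = random.randint(0, 1)      # X_1 ~ Bernoulli(1/2)
-        n = 1 - x1                     # stopping rule coupled to X_1
-        total += x1 if n == 1 else 0   # sum_{i=1}^N X_i
-    print(total / trials)              # always 0, while E[N]*E[X_1] = 1/4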
<|endoftext|>
-TITLE: Correlation Coefficient as Cosine
-QUESTION [9 upvotes]: I've read that the correlation coefficient between two random variables may be viewed as the cosine of the angle between them, but I can't find any solid explanation.
-To be concrete, let $X$ and $Y$ be random variables on $(\Omega, \mathcal{F}, P)$ with correlation coefficient $\rho$. Assume $X,Y \in L^2(\Omega,\mathcal{F},P)$. The quantity $\rho$ is defined as
-$$
-\rho := \frac{Cov(X,Y)}{\sqrt{Var(X)Var(Y)}}.
-$$
-Letting $\mu_X := E(X)$ and $\mu_Y := E(Y)$, note
-$$
-Cov(X,Y) = E((X - \mu_X)(Y - \mu_Y)) = \left< X - \mu_X, Y - \mu_Y\right>_{L^2}
-$$
-and
-$$
-Var(X) = E((X - \mu_X)^2) = ||X - \mu_X||_{L^2}^2,
-$$
-where $L^2 := L^2(\Omega, \mathcal{F}, P)$. Thus
-$$
-\rho = \frac{\left< X - \mu_X, Y - \mu_Y\right>_{L^2}}{||X - \mu_X||_{L^2} ||Y - \mu_Y||_{L^2}}. \qquad (1)
-$$
-Compare this with the Euclidean space inner product result that for two vectors $x,y \in \mathbb{R}^n$,
-$$
-\cos(\theta) = \frac{x^Ty}{||x||\, ||y||},
-$$
-where $\theta$ is the angle between $x$ and $y$.
-The recurring claim I read is that I can think of $\rho$ as the cosine of the "angle" between the two random variables, but it seems it only makes sense in terms of $L^2$ inner products. But in that case, the only notion of "angle" between two elements $X,Y \in L^2$ I can think of is
-$$
-\cos(\theta) = \frac{\left< X,Y \right>_{L^2}}{||X||_{L^2} ||Y||_{L^2}}. \qquad (2)
-$$
-So, two questions:
-
-Is Eqn. (2) a valid notion of angle in $L^2$?
-Are Eqns. (1) and (2) somehow equivalent?
-
-If both of these are true, I can justify viewing $\rho = \cos(\theta)$.
-
-REPLY [2 votes]: $X-\mu_X$ is the projection of $X$ onto the plane $E=0$ in $L^2$. So $\rho$ measures the angle between the two projected vectors $X-\mu_X$, $Y-\mu_Y$. This has the effect that replacing $X$ by an "affine replica" $\alpha X+\beta$ resulting from a change of scales (think of $X$ being a temperature) does not change $\rho$.<|endoftext|>
-TITLE: what are the practical uses of "game of life" or "langton's Ant"
-QUESTION [6 upvotes]: A few questions:
-
-Besides looking really cool, what are the practical uses of "game of life" or "langton's Ant"? I understand how agent-based modeling itself is a potentially useful methodology, but not how the 2D game-of-life representation is useful in itself.
-When trying to develop interesting complicated designs in the game of life, (e.g. https://www.youtube.com/watch?v=C2vgICfQawE ) is it safe to assume that there are certain "building blocks" that can be positioned to produce a predictable pattern?
-Does research in this field generally follow the pattern: assign some rules, simulate some data, see how the data matches up with real-world stuff, change the rules and repeat?
-
-REPLY [2 votes]: Langton's ant has a few practical uses for me.
-1. It models the Lorentz force. The ant moving = a point charge displaced due to an electric field. The ant rotating = the same point charge under a magnetic field. A tile colour change models a photon event.
-2. I have an encryption engine based on the thermodynamic behaviour of the ant.
-3. It is possible to program the ant by setting up the environment with predetermined states and observing the emergent behaviour, which is akin to how a state machine (a CPU) behaves when given assembly-language instructions.
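-(The ant itself takes only a few lines to simulate, which is what makes such experiments cheap; a minimal sketch, with the step count and orientation conventions being arbitrary choices of mine:)
-
-    from collections import defaultdict
-
-    grid = defaultdict(int)               # 0 = white, 1 = black
-    x = y = 0
-    dx, dy = 1, 0
-    for _ in range(11000):                # enough steps for the "highway" to emerge
-        if grid[(x, y)] == 0:
-            dx, dy = dy, -dx              # turn one way on white...
-            grid[(x, y)] = 1
-        else:
-            dx, dy = -dy, dx              # ...the other way on black
-            grid[(x, y)] = 0
-        x, y = x + dx, y + dy
-    print(len(grid), "cells touched")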
<|endoftext|>
-TITLE: Evaluate the improper integral $ \int_0^1 \frac{\ln(1+x)}{x}\,dx $
-QUESTION [5 upvotes]: I am trying to evaluate
-$$
-\int_0^1 \frac{\ln(1+x)}{x}\,dx
-$$
-I started by using the Taylor series for $\ln (1+x)$:
-$$\begin{align*}
-\int_0^1 \frac{\ln(1+x)}{x}\,dx &= \int_0^1\frac{1}{x}\sum_{n=1}^{\infty}\frac{(-1)^{n+1}x^n}{n}dx \\&=\int_0^1 \frac{1}{x}\left(x-\frac{x^2}{2}+\frac{x^3}{3}-\frac{x^4}{4}+\frac{x^5}{5}-\frac{x^6}{6}+...\right)dx \\&=\int_0^1 \left(1-\frac{x}{2}+\frac{x^2}{3}-\frac{x^3}{4}+\frac{x^4}{5}-\frac{x^5}{6}+...\right)dx\\&=1-\frac{1}{4}+\frac{1}{9} -\frac{1}{16}+\frac{1}{25}-\frac{1}{36}+... \\ &=\sum_{n=1}^{\infty}\frac{(-1)^{n+1}}{n^2}\end{align*}
-$$
-I'm aware of the fact that $$\sum_{n=1}^{\infty}\frac{1}{n^2} = \frac{\pi^2}{6}$$ however the $(-1)^{n+1}$ is giving me trouble.
-
-REPLY [8 votes]: Hint: If you take $\sum_{n=1}^\infty \frac{1}{n^2} - \sum_{n=1}^\infty \frac{(-1)^{n+1}}{n^2}$, what do you get?<|endoftext|>
-TITLE: Expected Value of Square Root of Poisson Random Variable
-QUESTION [12 upvotes]: Find the expected value of $\sqrt{K}$ where $K$ is a random variable according to a Poisson distribution with parameter $\lambda$.
-
-I don't know how to calculate the following sum:
-$E[\sqrt{K}]= e^{-\lambda} \sum_{k=0}^{\infty} \sqrt{k} \frac{\lambda^k}{k!} $
-Based on Wiki (https://en.wikipedia.org/wiki/Poisson_distribution) I know that it should be approximately $\sqrt{\lambda}$.
-
-REPLY [10 votes]: In general, for smooth $g(X)$ you can do a Taylor expansion around the mean $\mu=E(X)$:
-$$g(X)=g(\mu) + g'(\mu)(X-\mu)+ \frac{g''(\mu)}{2!}(X-\mu)^2+ \frac{g'''(\mu)}{3!}(X-\mu)^3+\cdots$$
-So
-$$E[g(X)]=g(\mu) + \frac{g''(\mu)}{2!}m_2+ \frac{g'''(\mu)}{3!} m_3+\cdots $$
-where $m_i$ is the $i$-th centered moment. In our case $m_2=m_3 =\lambda$, so:
-$$E[g(X)]=\sqrt{\lambda} - \frac{\lambda^{-1/2}}{8} + \frac{ \lambda^{-3/2}}{16} +\cdots $$
-This approximation is useful only if $\lambda \gg 1$.<|endoftext|>
-TITLE: When Superposition of Two Renewal Processes is another Renewal Process?
-QUESTION [6 upvotes]: When is the superposition of two renewal processes another renewal process?
-
-If you merge (superpose) two Poisson processes with parameters $\lambda_1$ and $\lambda_2$, the outcome is another Poisson process with parameter $\lambda_1+\lambda_2$. But how is that for two general renewal processes? Is there a class of renewal processes such that merging two of its members makes another renewal process? More generally, under what conditions can we make sure the merged process is a renewal process?
-
-REPLY [4 votes]: Let $\{S_n\}$ and $\{T_n\}$ be independent renewal processes with interrenewal distributions $F$ and $G$. Define
-$$N(t):=N_S(t) + N_T(t)=\sum_{n=1}^\infty \left[\mathsf 1_{(0,t]}(S_n) +\mathsf 1_{(0,t]}(T_n)\right]. $$
-Then the sequence of jump times of $N(t)$, $$U_n = \inf\{t: N(t)=n\} $$
-is not in general a renewal sequence, because the inter-jump times need not be i.i.d. For a counterexample, consider when $F$ and $G$ are constant distributions, e.g. $S_n=\{i, 2i, 3i, \ldots\}$ and $T_n=\{j, 2j, 3j, \ldots\}$, with $i\ne j$. Take $i=2$ and $j=3$; then
-$$
-U_n-U_{n-1} = \begin{cases}
-1,& n\equiv 1,2,3,5\pmod 6\\
-2,& n\equiv 0,4\pmod 6.
-\end{cases}
-$$
-Further, we have that
-$$R(t) = \mathbb E[N(t)] \stackrel{t\to\infty}\longrightarrow \frac43, $$
-but as $R(n)-R(n-1)=U_n-U_{n-1}$, it is clear that $\lim_{t\to\infty}R(t+1)-R(t)$ does not exist, and thus Blackwell's renewal theorem does not hold.
-There are two remaining questions to consider: is there more we can say about $N(t)$ than that it is a counting process, and whether being stable under superposition is equivalent to having independent and stationary increments (i.e. being a Poisson process)? As for the first, we can describe the jump times $\{U_n\}$ by the transitions in a Markov renewal process on $$E=\{(X_n,t) : X_n\in \{S,T\}, t>0\}. $$ As for the second, the superposition of $\{S_n\}$ and $\{T_n\}$ is a renewal process iff one of the following holds:
-(i) One of the processes, WLOG $\{S_n\}$, has multiple renewals and $\{T_n\}$ does not (i.e. $F(0)>0$ and $G(0)=0$), $F$ and $G$ are concentrated on a semi-lattice $\{0,\delta,2\delta,\ldots\}$ and either
-\begin{align}
-F(x) &= \left(1 - p^{\left\lfloor\frac x\delta\right\rfloor+1}\right)\mathsf 1_{[0,\infty)}(x), \quad 0<p<1
-\end{align}<|endoftext|>
-TITLE: What is the difference between a Poisson and an Exponential distribution?
-QUESTION [11 upvotes]: For a Poisson distribution:
-$$\mathsf{P}(X=x)=\frac{e^{-\mu}\times \mu^x}{x!}$$
-where $\mu$ is the mean number of occurrences.
-For an Exponential distribution:
-$$f(x;\lambda) =
-\begin{cases}
-\lambda e^{-\lambda x} & x \ge 0 \\
-0 & x < 0
-\end{cases}$$
-where $\lambda$ is the rate parameter.
-Apart from the fact that the formulas are obviously different, in layman's terms what is the difference between an Exponential and a Poisson distribution? Or, put another way, why do we need them both? What does one of them do that the other doesn't? What is the difference between an Exponential distribution and an Exponential density function?
-And yes, this is all very new to me as I was never taught distribution theory.
-Thanks.
-
-REPLY [16 votes]: As A.S.'s comment indicates, both distributions relate to the same kind of process (a Poisson process), but they govern different aspects: the Poisson distribution governs how many events happen in a given period of time, and the exponential distribution governs how much time elapses between consecutive events.
-By way of analogy, suppose that we have a different process, in which events occur exactly every $10$ seconds. Then the number of events that happen in a minute (i.e., $60$ seconds) is deterministically $6$, and the amount of time that elapses between consecutive events is, of course, deterministically $10$ seconds.
-In contrast, in a Poisson process with a mean rate of one event every $10$ seconds (i.e., $\lambda = 1/10$), the number of events that happen in a minute is not deterministically $6$, but it has a mean of $6$. The exact distribution is given by the Poisson distribution:
-$$
-P_k(t) = \frac{(\lambda t)^k}{k!} e^{-\lambda t}
-$$
-where $t = 60$ seconds is the time window. Thus, the probability that no events occur in a minute is given by
-$$
-P_0(t) = \frac{(6)^0}{0!} e^{-6} = e^{-6} \doteq 0.0024788
-$$
-whereas the probability that $6$ events occur in a minute is given by
-$$
-P_6(t) = \frac{(6)^6}{6!} e^{-6} = \frac{46656}{720} e^{-6} \doteq 0.16062
-$$
-That is obviously much more likely, as you would expect.
-Similarly, the time between events is also not deterministically $10$ seconds, but it has a mean of $10$ seconds.
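-Both of those numbers are easy to corroborate by simulation; a minimal Python sketch (my addition):
-
-    import math, random
-
-    lam, window, trials = 0.1, 60.0, 200_000   # rate: one event per 10 s on average
-    counts = []
-    for _ in range(trials):
-        t, k = random.expovariate(lam), 0
-        while t < window:                      # accumulate exponential gaps
-            k += 1
-            t += random.expovariate(lam)
-        counts.append(k)
-    print(sum(counts) / trials)                        # ~6 events per minute
-    print(counts.count(0) / trials, math.exp(-6))      # both ~0.0025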
The actual time distribution is the exponential distribution, which can be specified using its CDF (cumulative distribution function)
-$$
-F_T(t) \equiv P(T < t) = 1-e^{-\lambda t}
-$$
-The CDF provides essentially the same information as the PDF (probability density function), whose formulation you gave in your question; in fact, the derivative of the CDF is the PDF. However, the CDF is sometimes easier to understand intuitively, so I'll explain using the CDF here.
-In this case, the probability that the time between events is less than $10$ seconds is $F_T(10) = 1-e^{-1} \doteq 0.63212$, whereas the probability that the time between events is less than $60$ seconds is $F_T(60) = 1-e^{-6} \doteq 0.99752$. The probability that it is greater than $60$ seconds is $1-F_T(60) = e^{-6} \doteq 0.0024788$, and you'll notice this is equal to the probability that no events occur in a given minute. This is no coincidence; in a Poisson process, which is memoryless, the probability that the time between events is greater than a minute is naturally equal to the probability that no events occur in that minute!<|endoftext|>
-TITLE: Real vs Complex Interpolation
-QUESTION [5 upvotes]: The two major classical interpolation theorems in analysis are the Riesz-Thorin Theorem (complex method) and the Marcinkiewicz Theorem (real method).
-One can see the statements of the theorems and realize the differences between them (sublinearity, weak end-point estimates, and so on). I also know that these theorems can be extended to whole theories.
-As far as I know, the complex method is called that because in the Riesz-Thorin Theorem it is necessary that the scalar field is $\mathbb{C}$ (to use complex analysis tools). My questions are:
-
-Is there any real difference if the scalar field is $\mathbb{R}$ or $\mathbb{C}$? Isn't the scalar field of all $L^p$ spaces $\mathbb{C}$? (I can also define them over $\mathbb{R}$, but I cannot see the advantage.) Do I need my functions to be real-valued to use Marcinkiewicz? Can I apply Riesz-Thorin to real-valued functions? What is the difference between both interpolation methods in this aspect?
-
-Thank you.
-
-REPLY [4 votes]: Is there any real difference if the scalar field is $\mathbb{R}$ or $\mathbb{C}$?
-
-No.
-
-Isn't the scalar field of all $L^p$ spaces $\mathbb{C}$?
-
-Depends on context, not necessarily.
-
-Do I need my functions to be real-valued to use Marcinkiewicz?
-
-No.
-
-Can I apply Riesz-Thorin to real-valued functions?
-
-Yes.
-
-What is the difference between both interpolation methods in this aspect?
-
-The terms complex and real interpolation don't refer to the scalar field.
-They refer to the nature of the proof.
-Marcinkiewicz's theorem is proven using real-variable methods, decomposing functions cleverly according to size, using sublevel sets, etc.
-Riesz-Thorin is proven by a magic application of complex analysis; typically the proof uses Hadamard's three-lines lemma (which is a consequence of the maximum modulus principle).<|endoftext|>
-TITLE: Priority of vector operators
-QUESTION [5 upvotes]: What is the precedence order for the cross product, dot product and scalar multiplication?
-$$x \overrightarrow {A}\cdot \overrightarrow {B} \times \overrightarrow {C} =\quad?$$
-
-REPLY [7 votes]: First note that the cross product is an operation between two vectors that gives a vector as a result, and the dot product is an operation between two vectors that gives a scalar as a result.
-So the mixed product
-$$
-A\cdot B \times C
-$$
-makes sense only with the order $A\cdot(B\times C)$, and it has a scalar as a result.
-For multiplication by a real number $x$, we know that both the dot and cross products are compatible with scalar multiplication; this means that
-$$
-x(A\cdot B)= (xA)\cdot B = A \cdot (xB)
-$$
-and
-$$
-x(A\times B)= (xA)\times B = A \times (xB)
-$$
-so in the expression $ xA\cdot B\times C$ we can perform the scalar multiplication whenever we want (but only once, on one vector), but we have to calculate the cross product before the dot product.<|endoftext|>
-TITLE: Are there two consecutive gaps of size $4$ between prime numbers?
-QUESTION [10 upvotes]: Are there consecutive gaps (differences) of $4$ between prime numbers?
-Looking at the first few gaps,
-$1, 2, 2, 4, 2, 4, 2, 4, 6, 2, 6, 4, 2, 4, 6, 6, 2, 6, 4, 2, 6,.....$
-(for example, $6$ is repeated consecutively at some places).
-Are there consecutive gaps of $4$ between prime numbers?
-If there is no twin gap of $4$, how does one prove it; what is the proof of that?
-
-REPLY [2 votes]: Just to elaborate on @Henning Makholm's answer:
-
-$p\equiv0\pmod3 \implies p=3 \implies p+4\neq5\text{, which is the next prime}$
-$p\equiv1\pmod3 \implies p+8\equiv9\equiv0\pmod3 \implies p+8\text{ is not prime}$
-$p\equiv2\pmod3 \implies p+4\equiv6\equiv0\pmod3 \implies p+4\text{ is not prime}$<|endoftext|>
-TITLE: GAP Editor with Syntax Highlighting for Windows
-QUESTION [6 upvotes]: Can anybody recommend a code editor for Windows which has GAP syntax highlighting?
-Thank you
-
-REPLY [4 votes]: Thanks for all the answers and comments already there. I have updated the question Where is the GAP file editor? How do I save GAP programs? from the GAP F.A.Q. to list all available modes in one place. Besides Notepad++ and PSPad for Windows, it lists some other editors as well. In the future, please check the GAP F.A.Q. for any updates.
-Posting this as an answer, to save readers' time from walking through all comments and links.<|endoftext|>
-TITLE: Why is the rank of a locally free sheaf same everywhere if $X$ is connected?
-QUESTION [6 upvotes]: Let $(X,\mathcal{O}_X)$ be a connected scheme. Let $\mathcal F$ be a locally free sheaf on $X$. This means that $X$ can be covered by open sets $U$ for which $\mathcal F|_U$ is a free $\mathcal O_X|_U$-module. The rank of $\mathcal F$ on such a $U$ is the number of copies of $\mathcal O_X|_U$ required.
-
-I want to show that when $X$ is connected this rank is the same everywhere (that is, the same for all open sets in the cover).
-
-Define a map $f:X\rightarrow \mathbb N$ as follows.
-
-For any $x\in X$ there is a $U$ containing $x$ for which $\mathcal F|_U$ is a free $\mathcal O_X|_U$-module of rank $n$. Let $f(x)=n$. If $f$ is continuous and well-defined then I am done, because the only connected subsets of $\mathbb N$ with the discrete topology are singletons. So my questions are:
-
-Why is $f$ continuous?
-If $V$ is another open set containing $x$, then why should the rank of $\mathcal F|_V$ equal $n$?
-Would the same proof work if the rank is infinite?
-
-Thank you.
-
-REPLY [7 votes]: A map to a discrete space is continuous iff all fibres are open. And the fact that the sets $M_n := \{x \in X | \operatorname{rank} \mathcal F_x = n \}$ are open is built into the definition: If you have $x \in M_n$, you find some open set $U \ni x$, such that $\mathcal F_{|U} \cong \mathcal O_{X|U}^n$, which shows $U \subset M_n$. Hence $M_n$ is open.
-Let $U,V \ni x$ with $\mathcal F_{|U} \cong \mathcal O_{X|U}^n$ and $\mathcal F_{|V} \cong \mathcal O_{X|V}^m$.
-From the first iso, we deduce $\operatorname{rank} \mathcal F_x=n$. From the second iso, we deduce $\operatorname{rank} \mathcal F_x=m$, hence $n=m$.
-Yes. Instead of $\mathbb N$, you let $f$ map into the set of all cardinals, endowed with the discrete topology.<|endoftext|>
-TITLE: A Markov process which is not strong Markov process (follow up 2)
-QUESTION [5 upvotes]: In https://mathoverflow.net/questions/43833/a-markov-process-which-is-not-a-strong-markov-process
-George Lowther's example:
-"Consider the following continuous Markov process $X$, starting from position $x$:
-If $x = 0$ then $X_t = 0$ for all times.
-If $x \neq 0$ then $X_t$ is a standard Brownian motion starting from $x$.
-This is not strong Markov (look at times at which it hits zero)."
-I don't get it: why is this not a strong Markov process? It seems to me that $X_{T+t}$ has the simple $\delta$-transition, just like $X_t$ (if it starts from $0$).
-Since several experts are involved in that post, and they seem to agree this is a valid example, I must have missed something. Can someone please help? Thanks!
-
-REPLY [4 votes]: Your question is quite interesting, and deserves more attention than it has been getting. Whether or not a (time homogeneous) Markov process $(X_t)_{t\geq 0}$ with state space $E$ is strong Markov depends on a precise definition.
-I hope that my explanation below subtracts from, rather than adds to, your confusion.
-The standard definition of the strong Markov property says that for all times $t\geq 0$, all stopping times $T$, and any bounded, measurable function $f:E\to\mathbb{R}$ we have
-$$\mathbb{E}(f(X_{t+T}){\bf 1}_{(T<\infty)}\mid{\cal F}_T)=(p_tf)(X_T){\bf 1}_{(T<\infty)},\tag1$$
-almost surely, where $p_t$ is the transition function of the process.
-In George Lowther's example the transition function is
-$$p_t(x,\cdot)=N(x,t) \mbox{ for }x\neq0,\quad p_t(0,\cdot)=\delta_0,$$
-where $N(x,0)=\delta_x$. The transition function $(p_t)$ satisfies
-$$p_tf(x)=\cases{f(0)& $x=0$\\[5pt] b_tf(x)& $x\neq 0$},$$
-where $b_t$ is the usual Brownian transition function.
-This $(p_t)_{t\geq 0}$ is well-defined and time homogeneous.
-To be concrete, suppose that $X_0=1$, so that $X_t=W_t+1$ where $W_t$ is a standard Brownian motion.
-Taking the stopping time $T(\omega)=\inf(t\geq 0: X_t(\omega)=0)$ we have $\mathbb{P}(T<\infty)=1$ and $p_tf(X_T)=p_tf(0)=f(0)$ on the right hand side of (1).
-To be dramatic, take $f(x)={\bf 1}_{\{0\}}(x)$ so that the right hand side of (1) equals 1.
-However, for $t>0$, the left hand side is ${1\over \sqrt{2\pi t}}\int_{-\infty}^\infty f(y+1) \exp(-y^2/2t)\,dy=0$, so the strong Markov property fails. I believe this is the argument that George Lowther had in mind.
-
-What is going on, and why does the strong Markov property fail?
-By changing the transition function at a single point, we have created a disconnect between the process $(X_t)_{t\geq 0}$ and the transition function $(p_t)_{t\geq 0}$.
-The process is just a shifted Brownian motion that rolls over the point $\{0\}$ and continues to move like a Brownian motion. In contrast, the transition function suggests that the process should get stuck as soon as it hits the point $\{0\}.$
-Note that the usual Brownian transition function $(b_t)_{t\geq 0}$ is also a transition function for the process $(X_t)_{t\geq 0}$; transition functions are not unique.
With respect to $(b_t)_{t\geq 0}$ our process $(X_t)_{t\geq0}$ does satisfy (1) and therefore is a strong Markov process.
-Moral: Whether or not a process is strong Markov, as in (1), depends on which kernel we use. To me, this argues against using the transition function in the definition of strong Markov, and that we ought to use the following transition-function-free version instead:
-$$\mathbb{E}(f(X_{t+T}){\bf 1}_{(T<\infty)}\mid{\cal F}_T)=\mathbb{E}(f(X_{t+T}){\bf 1}_{(T<\infty)}\mid X_T).\tag2$$<|endoftext|>
-TITLE: How to use Pascal's triangle for binomial expansion
-QUESTION [7 upvotes]: The question is asking for me to expand ${(p+r)}^4$. I know that I have to use Pascal's triangle, the fourth row down, which is $1,4,6,4,1$. My thinking is that I have to use these numbers to solve this through synthetic division, but I don't really know what to do from there. Please let me know if I'm completely wrong.
-
-REPLY [15 votes]: It's simpler than that. The $1,4,6,4,1$ tell you the coefficients of the $p^4$, $p^3r$, $p^2r^2$, $pr^3$ and $r^4$ terms respectively, so the expansion is just
-$$ 1p^4 + 4p^3r + 6p^2r^2 + 4pr^3 + 1r^4 $$
-so
-$$ p^4 + 4p^3r + 6p^2r^2 + 4pr^3 + r^4 $$<|endoftext|>
-TITLE: why the equation $f^{(n)}(x)=0$ has at least $n-1$ distinct roots in $(-1,1)$
-QUESTION [7 upvotes]: Let $f \in C^{(n)} ( (-1,1) )$ and $\sup_{-12^n\cdot K$. From this start, we can prove by induction that for every $k=1,\ldots,n$ there are some numbers $x_{k,1},\ldots,x_{k,k}$ such that
-$\qquad$ $v_k < x_{k,1} < \cdots < x_{k,k}$;
-$\qquad$ $f^{(k)}(x_{k,i}) > 2^{n+1-k} K$ if $i$ is odd;
-$\qquad$ $f^{(k)}(x_{k,i}) < -2^{n+1-k} K$ if $i$ is even.
-For $k=1$, $x_{1,1}=0$ is an appropriate choice.
-Suppose that $x_{k,1}<\ldots<x_{k,k}$ have been chosen. If $i$ is odd then
-$f^{(k+1)}(x_{k+1,i}) > \frac{2^{n+1-k}K - K}{1} > 2^{n-k}K$.
-If $i$ is even then
-$f^{(k+1)}(x_{k+1,i}) < \frac{-2^{n+1-k}K + K}{1} < -2^{n-k}K$.
-We also have $v_{k+1}=x_{k,0}$.<|endoftext|>
-TITLE: The trigonometric solution to the solvable DeMoivre quintic?
-QUESTION [6 upvotes]: Using the relations for the Rogers-Ramanujan cfrac described in this post,
-$$\frac{1}{r}-r = x$$
-$$\frac{1}{r^5}-r^5 = y$$
-and eliminating $r$ yields,
-$$x^5+5x^3+5x = y$$
-This is the case $a=1$ of the solvable DeMoivre quintic,
-$$x^5+5ax^3+5a^2x+b = 0\tag1$$
-In general, it has the solution,
-$$x = \left(\tfrac{-b+\sqrt{b^2+4a^5}}{2}\right)^{1/5}-a\left(\tfrac{-b+\sqrt{b^2+4a^5}}{2}\right)^{-1/5}\tag2$$
-When $a=-1$, it can also be solved as,
-$$x = 2\cos\Bigg(\tfrac{\arccos\Big(\tfrac{-b}{2}\Big)}{5}\Bigg) = 2\cos\Bigg(\tfrac{2\arctan\Big(\tfrac{\sqrt{2+b}}{\sqrt{2-b}}\Big)}{5}\Bigg)\tag3$$
-
-Q: When $\color{red}{a=1}$, is there a neat trigonometric solution similar to $(3)$?
-
-P.S. A Google search revealed that it may indeed be possible.
-
-REPLY [5 votes]: (This is an addendum to hbp's answer.)
-I'm glad I raised a bounty for this question because, thanks to hbp, we can show that the general quintic can be solved in terms of trigonometric or hyperbolic functions, plus some special functions.
-The general quintic can be reduced to the one-parameter Brioschi form, -$$w^5-10cw^3+45c^2w-c^2=0\tag1$$ -with the five solutions (see also this post), -$$w_n=\pm\sqrt{\frac{-c\,(x^2+4)(x^2-2x-4)^2}{b-11}}\tag2$$ -and for $n=0,1,2,3,4$, -$$x_n=-2\,i\sin\Bigg(\tfrac{i\log\Big(\tfrac{b+\sqrt{b^2+4}}{2}\Big)\,-\,2\pi\, n}{5} \Bigg)=2\sinh\Bigg(\tfrac{\sinh^{-1}\Big(\tfrac{b}{2}\Big)\,+\,2\pi\,i\, n}{5}\Bigg) \tag3$$ -$$b=\frac{v(v-5)^2}{(v-1)^2}+11$$ -$$v=\left(\frac{\vartheta_2(0,p)}{\vartheta_2(0,p^5)}\right)^2$$ -$$p=e^{\pi i \tau}=\exp(\pi i \tau)$$ -$$\tau = i\frac{K(k')}{K(k)}= i\,\frac{\text{EllipticK[1-m]}}{\text{EllipticK[m]}}\tag4$$ -$$m =\tfrac{1}{2}\left(1\pm\sqrt{1-4u}\right)\tag5$$ -and $u$ is a root of the cubic, -$$\frac{256(1-u)^3}{u^2}=\frac{1728c-1}{c}$$ -The solution also uses the Jacobi theta function $\vartheta_j(0,p)$ (where $j=3$ or $4$ will work as well), the complete elliptic integral of the first kind $K(k)$ and elliptic parameter $m=k^2$ (with $\tau$ also given in Mathematica syntax above). -Note 1: Since $(2)$ and $(5)$ uses square roots, then the proper sign has to be chosen. -Note 2: At first, I was missing a neat form for $(3)$ so, after hbp's answer, I'm glad I posted this question.<|endoftext|> -TITLE: embeddings of projective spaces into Euclidean spaces -QUESTION [5 upvotes]: Let $\mathbb{R}P^n$, $\mathbb{C}P^n$, $\mathbb{H}P^n$ be the real, complex, quaternionic projective spaces resp. -I want to find all $n$ such that $\mathbb{R}P^n$ can be embedded into $\mathbb{R}^{n+1}$. -And find all $n$ such that $\mathbb{C}P^n$ can be embedded into $\mathbb{R}^{2n+1}$. -And find all $n$ such that $\mathbb{H}P^n$ can be embedded into $\mathbb{R}^{4n+1}$. -What is the answer? - -REPLY [7 votes]: A necessary condition for a closed $n$-manifold $M$ to (smoothly) embed or immerse into $\mathbb{R}^{n+1}$ is that its tangent bundle $T$ becomes trivial after adding a single line bundle $L$ (namely the normal bundle of the embedding). This condition is sufficient for an immersion by Hirsch-Smale theory, but the question of embedding is more delicate. -The condition that $T \oplus L$ is trivial implies that $w(T) w(L) = 1$. When $M = \mathbb{RP}^n$, recall that the total Stiefel-Whitney class of $\mathbb{RP}^n$ is $(1 + \alpha)^{n+1}$ where $\alpha \in H^1(\mathbb{RP}^n, \mathbb{F}_2)$ is a generator. If $w_1(L) = 0$ then we need $(1 + \alpha)^{n+1} = 1$, or equivalently ${n+1 \choose k}$ even for $1 \le k \le n$. By Kummer's theorem this happens iff $n+1$ is a power of $2$, so the only candidates are $\mathbb{RP}^{2^k - 1}$. Similarly, if $w_1(L) = \alpha$ then we need $(1 + \alpha)^{n+2} = 1$, and this happens iff $n+2$ is a power of $2$, so the only candidates are $\mathbb{RP}^{2^k - 2}$. -This table of embeddings and immersions of real projective spaces shows that that $\mathbb{RP}^3$ and $\mathbb{RP}^7$ don't embed into $\mathbb{R}^4$ and $\mathbb{R}^8$ respectively, and a 1963 theorem of James rules out $\mathbb{RP}^{2^k - 1}$ for sufficiently large $k$ (I think $k \ge 4$ but I haven't checked). Also, $\mathbb{RP}^6$ doesn't embed into $\mathbb{R}^8$ and $\mathbb{RP}^{14}$ doesn't immerse into $\mathbb{R}^{21}$, so prospects seem poor for this case as well. -For $\mathbb{CP}^n$ we can instead compute the total Pontryagin class, which is $(1 + \alpha^2)^{n+1}$, where $\alpha \in H^2(\mathbb{CP}^n, \mathbb{Z})$ is a generator. 
The condition that $T \oplus L$ is trivial implies that this vanishes, which is only possible for $n = 1$: for $n \ge 2$ the first Pontryagin class is $(n+1) \alpha^2 \in H^4(\mathbb{CP}^n, \mathbb{Z})$, which doesn't vanish. Hence for $n \ge 2$ we find that $\mathbb{CP}^n$ cannot immerse into $\mathbb{R}^{2n+1}$; this argument in fact shows that the smallest possible codimension of an immersion is $4$ (because this is the smallest codimension for which the normal bundle can have a nontrivial Pontryagin class). Presumably a similar computation works for the total Pontryagin class of $\mathbb{HP}^n$, but I'm less familiar with this case.<|endoftext|>
-TITLE: Heat equation - regularity of solutions
-QUESTION [5 upvotes]: Consider the heat equation on $\mathbb{R}$
-$$
-u_t=u_{xx}
-$$
-with initial condition $u(0,x)=g(x)$.
-It is well-known that even if the function $g$ is "very bad" (say, only bounded but not continuous), the function $u(t,\cdot)$ would be of class $C^{\infty}$ even for very small $t$.
-My question is whether it is possible to quantify this estimate. Namely, assuming that $g\in C^\gamma$ for $\gamma\in(0,1)$, I wonder whether one can prove something like this:
-$$
-\|u(t,\cdot)\|_{C^1}\le t^{-\lambda}\|g\|_{C^\gamma}
-$$
-for some $\lambda>0$?
-Thanks!
-
-REPLY [2 votes]: Yes, you can. But things are not completely elementary. You need the following ingredients:
-1) The second derivative generates a (not strongly continuous!) semigroup on $L^\infty(\mathbb R)$.
-2) This semigroup is analytic and hence maps the ambient space into the domain of any power of its generator.
-3) Such domains are known to be embeddable in spaces of Hölder functions, and hence the semigroup induces "restricted" semigroups on $C^\alpha(\mathbb R)$, for which your estimates can then be proved to hold.
-You can find all this (and much more) in Chapter 3 of Lunardi's nice monograph Analytic Semigroups and Optimal Regularity in Parabolic Problems (Birkhäuser 1995).
-
-REPLY [2 votes]: If you write the heat equation $u_t=\nu\ u_{xx}$ with $\nu$ a physical constant of dimension (length)$^2$ (time)$^{-1}$ -- considering $t$ as time and $x$ as length --, you can see that, for dimensional reasons, only $\lambda=\frac{\alpha-\gamma}2$ is possible for all $t$, and the inequality is $$\sup\frac{|u(t,x)-u(t,y)|}{|x-y|^\alpha}\le C(\gamma,\alpha)\ (\nu t)^{-(\gamma-\alpha)/2}\ \sup \frac{|g(x)-g(y)|}{|x-y|^\gamma}$$ for $\alpha\le 1$, and mutatis mutandis for all $\alpha$.<|endoftext|>
-TITLE: There is no simple group of order 56
-QUESTION [7 upvotes]: I'm working on question 16 of http://www.austinmohr.com/Work_files/hw4.pdf
-I'm confused by why there are 48 elements of order 7. I understand that $n_7 = 8$, but no further.
-
-REPLY [2 votes]: We have that $56=2^3\cdot 7$. The number of Sylow $7$-subgroups is $1+7k$ for some $k \geq 0$, and it divides $8$. So the number of Sylow $7$-subgroups is either $1$ or $8$. If it is $1$, we are done. So let us assume that it is $8$. Each Sylow $7$-subgroup has order $7$. In a group of prime order, every nonidentity element generates the group, so here every nonidentity element has order $7$. Thus each Sylow $7$-subgroup has $6$ elements of order $7$. Also, any two of them intersect trivially. It follows that there are $8\cdot 6 =48$ elements of order $7$. This leaves only $56-48=8$ elements, which is the size of one Sylow $2$-subgroup. Thus, there is only room for one such subgroup. That is, either the Sylow $7$-subgroup or the Sylow $2$-subgroup must be normal.
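-The divisor and congruence constraints, and the element count, can be double-checked mechanically; a minimal Python sketch (my addition):
-
-    # Sylow constraints for |G| = 56 = 2**3 * 7.
-    n7 = [d for d in range(1, 57) if 8 % d == 0 and d % 7 == 1]
-    n2 = [d for d in range(1, 57) if 7 % d == 0 and d % 2 == 1]
-    print(n7, n2)          # [1, 8] and [1, 7]
-    # With n7 = 8: eight subgroups times 6 nonidentity elements each
-    # gives 48 elements of order 7, leaving exactly one Sylow 2-subgroup's worth.
-    print(56 - 8 * 6)      # 8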
<|endoftext|>
-TITLE: Any Set of Ordinals is Well-Ordered
-QUESTION [5 upvotes]: Is this a theorem of ZF or a theorem of ZFC?
-My suspicion is that it is a theorem of ZFC (the proof I've seen in P. L. Clark's online notes on countable ordinals requires selecting an element from each set of a countable collection of sets).
-Does the situation change if the ordinals in the collection are themselves countable?
-
-REPLY [5 votes]: It is not hard to show directly from the definitions:
-
-If $(A,<)$ is a well-ordered set, then for every $B\subseteq A$, $(B,<\restriction_B)$ is also well-ordered.
-Every set of ordinals is a subset of an ordinal.
-
-So without using the axiom of choice we have that every set of ordinals is well-ordered.<|endoftext|>
-TITLE: Ricci identity in Lectures on Ricci Flow
-QUESTION [5 upvotes]: In the book Lectures on Ricci Flow, the identity is given as
-$$-\nabla^2_{X,Y}A(W,Z,\ldots)+\nabla_{Y,X}^2A(W,Z,\ldots)=-A(R(X,Y)W,Z,\ldots)-A(W,R(X,Y)Z,\ldots)$$
-where
-$$R(X,Y)=\nabla^2_{Y,X}-\nabla_{X,Y}^2$$
-But from the way covariant derivatives distribute over a tensor field, I think it should be
-$$-\nabla^2_{X,Y}A(W,Z,\ldots)+\nabla_{Y,X}^2A(W,Z,\ldots)=R(X,Y)(A(W,Z,\ldots))-A(R(X,Y)W,Z,\ldots)-A(W,R(X,Y)Z,\ldots)-\cdots$$
-Is there something I am misunderstanding here?
-
-REPLY [6 votes]: You're correct, but so is Topping: the curvature operator on functions is just zero! Remember that $$-R(X,Y)A = \nabla_X (\nabla_Y A) - \nabla_Y (\nabla_X A) - \nabla_{[X,Y]}A,$$ so when $A = f$ is a function you get $XYf - YXf - [X,Y]f = 0$. Since $A(W,Z,\ldots)$ has all its slots filled, it is a scalar function, and thus this curvature term vanishes.<|endoftext|>
-TITLE: What is the difference between root mean square, and standard deviation?
-QUESTION [8 upvotes]: I am currently working through the Feynman Lectures, chapter 6: Probability.
-I have reached his problem of the "random walk".
-After deriving this and getting some root mean square, wouldn't this just be the same as finding the standard deviation? The standard deviation is the root of the mean of the squared data. Isn't that also just the root mean square?
-Also, what exactly are the implications of the root mean square; what does it even mean in regards to our problem?
-http://www.feynmanlectures.caltech.edu/I_06.html
-
-REPLY [14 votes]: In the case of the standard deviation, the mean is removed from the observations, but in the root mean square the mean is not removed. However, in the case of noise where the mean is zero, the two concepts are the same. I hope that this is the difference. See: http://www.madsci.org/posts/archives/2004-11/1100200293.Ph.r.html<|endoftext|>
-TITLE: Primitive root pairs
-QUESTION [5 upvotes]: Looking at the Wikipedia entry for primitive roots modulo $n$, I noticed that for some $n$, they come in pairs:
-When $n=p^k$ or $2p^k$ and $p\equiv1 \mod{4}$, then $r$ is a primitive root iff $n-r$ is.
-I found a proof for this when $n=p$, but not in general. (Exercise online)
-Also, when $n=3^k$ or $2(3^k)$ and $k>1$, then $r$ is a primitive root iff $n-r-2$ is.
-Finally, when $n=7^k$, then $r$ is a primitive root iff $n-r+1$ is.
-Of course, there may be more patterns like these, but these are the ones I've noticed, and I'm sure they've been noticed before, so I'm wondering if anyone can point me in the direction of a general proof here.
-
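-(Before the argument below, the first pattern is easy to confirm numerically; a small self-contained Python sketch checking $n=p^k$ with $p\equiv 1 \bmod 4$, my addition:)
-
-    from math import gcd
-
-    def prime_factors(m):
-        fs, d = set(), 2
-        while d * d <= m:
-            while m % d == 0:
-                fs.add(d)
-                m //= d
-            d += 1
-        if m > 1:
-            fs.add(m)
-        return fs
-
-    def is_primitive_root(r, n, phi):
-        # r is a primitive root mod n iff gcd(r, n) = 1 and
-        # r^(phi/q) != 1 (mod n) for every prime q dividing phi.
-        return gcd(r, n) == 1 and all(pow(r, phi // q, n) != 1 for q in prime_factors(phi))
-
-    p, k = 13, 2                       # p = 13 satisfies p = 1 mod 4
-    n, phi = p**k, p**(k - 1) * (p - 1)
-    assert all(is_primitive_root(r, n, phi) == is_primitive_root(n - r, n, phi)
-               for r in range(1, n))
-    print("pairing r <-> n - r verified for n =", n)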
- -REPLY [3 votes]: Answer for the case $n=p^k$ and $p\equiv 1\ (\ mod\ 4\ )$ : -We have $$(p^k-r)^m\equiv r^m\ (\ mod\ p^k\ )$$ for every even number $m$ -and -$$(p^k-r)^m\equiv -r^m\ (\ mod\ p^k\ )$$ -for every odd $m$. If $r$ has order $\phi(p^k)$ and $n-r$ not (or vice versa), then the order of $r$ or the order of $n-r$ must be odd. Denote this order with $o$. Then, we have $r^o\equiv\ -1\ (\ mod\ p^k)$ or $(n-r)^o\equiv -1\ (\ mod\ p^k)$. So, $r$ or $n-r$ has order $2o$. Because of $4|p^{k-1}(p-1)$ , we can conclude $2o\ne \phi(n)$, so neither $r$ nor $n-r$ is a primitve root.<|endoftext|> -TITLE: Showing $(1-\frac{1}{n}) - (1-\frac{1}{n})^n$ is an increasing sequence? -QUESTION [7 upvotes]: I've come across a familiar math expression in my research, and I want to formally prove the following. -$s_{n+1} - s_{n} > 0$ holds for integer $n \geq 2$, where $s_n := (1-\frac{1}{n}) - (1-\frac{1}{n})^n$. -I've plotted this using software, and it seems it's true. But I've been struggling to prove this. Does anyone have clues? -Thanks. - -Edited: I'm adding information as to roughly map out how I've tried. Please see below. -First, based on basic calculus. -I tried to think about functions rather than sequences, and show that both functions increase, but the former at a higher rate. To elaborate more, -$f(x) := 1 - \frac{1}{x}$ and $g(x) := (1 - \frac{1}{x})^x$ -Showing $f(x)$ is increasing is easy. Took the derivative of it, and it is always positive for all $x$. The value of $f(x)$ is $\frac{1}{2}$ and its slope $f'(x)$ is $\frac{1}{4}$ at $x=2$. -What remains is to show (1) the value of $g(x)$ is less than or equal to $f(0.5) = 0.5$; (2) the slope of $g(x)$ is less than or equal to $f'(0.5) = 0.25$, and (3) the rate at which the slope increases is less than $f''(x)$. $g(0.5) = 0.25 < f(0.5) = 0.5$ (1) is done. Now (2) and (3) to go. -Showing $g(x)$ is increasing is hard. Took the logarithm of it, and tried to take the derivative of it, but to no avail. -This is pretty much it so far. -Second, based on induction. -Now, I'm trying to prove the claim, using the standard approach, but have no real progress yet. - -REPLY [3 votes]: I thought it might be instructive to prove this inequality using only Bernoulli's Inequality and some straightforward arithmetic. To that end, we proceed. - -Let $s_n=\left(1-\frac1n\right)-\left(1-\frac1n\right)^n$. Then, the forward first difference of $s_n$ is given by -$$s_{n+1}-s_n=\frac{1}{n(n+1)}+\left(\left(1-\frac1n\right)^n-\left(1-\frac1{n+1}\right)^{n+1}\right) \tag 1$$ -Now, we can write the second term on the right-hand side of $(1)$ as -$$\begin{align} -\left(1-\frac1n\right)^n-\left(1-\frac1{n+1}\right)^{n+1}&=\left(1-\frac1{n+1}\right)^{n+1}\left(\frac{\left(1-\frac1n\right)^n}{\left(1-\frac1{n+1}\right)^{n+1}}-1\right) \tag 2\\\\ -&=\left(1-\frac1{n+1}\right)^{n+1}\left(\frac{1}{\left(1-\frac1{n+1}\right)}\left(1-\frac{1}{n^2}\right)^n-1\right) \tag 3\\\\ -&\ge \left(1-\frac1{n+1}\right)^{n+1}\left(\frac{1}{\left(1-\frac1{n+1}\right)}\left(1-\frac{1}{n}\right)-1\right) \tag 4\\\\ -&=-\frac1{n^2}\left(1-\frac1{n+1}\right)^{n+1}\tag 5\\\\ -&=-\frac{1}{n(n+1)}\left(1-\frac1{n+1}\right)^{n}\tag 6\\\\ -&\ge -\frac{1}{n(n+1)}\tag 7 -\end{align}$$ -Therefore, we have the expected inequality -$$s_{n+1}-s_n\ge 0$$ - -NOTES: -In arriving at $(2)$, we factor out the term $\left(1-\frac{1}{n+1}\right)^{n+1}$. 
-In going from $(2)$ to $(3)$, we noted that $\frac{\left(1-\frac1n\right)^n}{\left(1-\frac1{n+1}\right)^{n+1}}=\frac{1}{\left(1-\frac1{n+1}\right)}\left(1-\frac1{n^2}\right)^n$ -In arriving at $(4)$ we used Bernoulli's Inequality. -In going from $(4)$ to $(5)$ we simplified the expression in large parentheses. -In going from $(5)$ to $(6)$ we used the equality $1-\frac{1}{n+1}=\frac{n}{n+1}$ -In arriving at $(7)$, we noted that $\left(1-\frac{1}{n+1}\right)^{n}\le 1$<|endoftext|> -TITLE: What forms does the Moore-Penrose inverse take under systems with full rank, full column rank, and full row rank? -QUESTION [7 upvotes]: The normal form $ (A'A)x = A'b$ gives a solution to the least square problem. When $A$ has full rank $x = (A'A)^{-1}A'b$ is the least square solution. -How can we show that the moore-penrose solves the least square problem and hence is equal to $(A'A)^{-1}A'$. -Also what happens in a rank deficient matrix ? $(A'A)^{-1}$ would not exist so is the moore-penrose inverse still equal to $(A'A)^{-1}A'$ ? -Thanks - -REPLY [18 votes]: The generalized Moore-Penrose pseudoinverse can be classified by looking at the shape of the target matrix, or by the existence of the null spaces. The two perspectives are merged below and connected to left- and right- inverses as well as the classic inverse. -Singular value decomposition -Start with the matrix $\mathbf{A} \in \mathbb{C}^{m\times n}_{\rho}$ and its singular value decomposition: -$$ -\begin{align} - \mathbf{A} &= - \mathbf{U} \, \Sigma \, \mathbf{V}^{*} \\ -% -&= -% U - \left[ \begin{array}{cc} - \color{blue}{\mathbf{U}_{\mathcal{R}}} & -\color{red}{\mathbf{U}_{\mathcal{N}}} - \end{array} \right] -% Sigma - \left[ \begin{array}{cccc|cc} - \sigma_{1} & 0 & \dots & & & \dots & 0 \\ - 0 & \sigma_{2} \\ - \vdots && \ddots \\ - & & & \sigma_{\rho} \\ \hline - & & & & 0 & \\ - \vdots &&&&&\ddots \\ - 0 & & & & & & 0 \\ - \end{array} \right] -% V - \left[ \begin{array}{c} - \color{blue}{\mathbf{V}_{\mathcal{R}}}^{*} \\ - \color{red}{\mathbf{V}_{\mathcal{N}}}^{*} - \end{array} \right] \\ -% - & = -% U - \left[ \begin{array}{cccccccc} - \color{blue}{u_{1}} & \dots & \color{blue}{u_{\rho}} & -\color{red}{u_{\rho+1}} & \dots & \color{red}{u_{n}} - \end{array} \right] -% Sigma - \left[ \begin{array}{cc} -\mathbf{S}_{\rho\times \rho} & \mathbf{0} \\ - \mathbf{0} & \mathbf{0} - \end{array} \right] -% V - \left[ \begin{array}{c} - \color{blue}{v_{1}^{*}} \\ - \vdots \\ - \color{blue}{v_{\rho}^{*}} \\ - \color{red}{v_{\rho+1}^{*}} \\ - \vdots \\ - \color{red}{v_{n}^{*}} - \end{array} \right] -% -\end{align} -$$ -Coloring distinguishes $\color{blue}{range}$ spaces from $\color{red}{null}$ spaces. 
-The beauty of the SVD is that it provides an orthonormal resolution for the four fundamental subspaces of the domain $\mathbb{C}^{n}$ and codomain $\mathbb{C}^{m}$: -$$ -\begin{align} -% domain - \mathbb{C}^{n} &= -\color{blue}{\mathcal{R}(\mathbf{A}^{*})} -\oplus -\color{red}{\mathcal{N}(\mathbf{A})} \\ -% -% codomain - \mathbb{C}^{m} &= -\color{blue}{\mathcal{R}(\mathbf{A})} -\oplus -\color{red}{\mathcal{N}(\mathbf{A}^{*})} -\end{align} -$$ -Moore-Penrose pseudoinverse -In block form, the target matrix and the Moore-Penrose pseudoinverse are -$$ -\begin{align} - \mathbf{A} &= \mathbf{U} \, \Sigma \, \mathbf{V}^{*} -= -% U - \left[ \begin{array}{cc} - \color{blue}{\mathbf{U}_{\mathcal{R}(\mathbf{A})}} & \color{red}{\mathbf{U}_{\mathcal{N}(\mathbf{A}^{*})}} - \end{array} \right] -% Sigma - \left[ \begin{array}{cc} -\mathbf{S} & \mathbf{0} \\ - \mathbf{0} & \mathbf{0} - \end{array} \right] -% V -\left[ \begin{array}{l} - \color{blue}{\mathbf{V}_{\mathcal{R}(\mathbf{A}^{*})}^{*}} \\ - \color{red}{\mathbf{V}_{\mathcal{N}(\mathbf{A})}^{*}} -\end{array} \right] -\\ -%% - \mathbf{A}^{\dagger} &= \mathbf{V} \, \Sigma^{\dagger} \, \mathbf{U}^{*} -= -% U - \left[ \begin{array}{cc} - \color{blue}{\mathbf{V}_{\mathcal{R}(\mathbf{A}^{*})}} & - \color{red}{\mathbf{V}_{\mathcal{N}(\mathbf{A})}} - \end{array} \right] -% Sigma - \left[ \begin{array}{cc} -\mathbf{S}^{-1} & \mathbf{0} \\ - \mathbf{0} & \mathbf{0} - \end{array} \right] -% V -\left[ \begin{array}{l} - \color{blue}{\mathbf{U}_{\mathcal{R}(\mathbf{A})}^{*}} \\ - \color{red}{\mathbf{U}_{\mathcal{N}(\mathbf{A}^{*})}^{*}} -\end{array} \right] -\end{align} -$$ -We can sort the least squares solutions into special cases according to the null space structures. -Both nullspaces are trivial: full row rank, full column rank -$$ -\begin{align} -\color{red}{\mathcal{N}(\mathbf{A})} &= \mathbf{0}, \\ -\color{red}{\mathcal{N}\left( \mathbf{A}^{*} \right)} &= \mathbf{0}. -\end{align} -$$ -The $\Sigma$ matrix is nonsingular: -$$ - \Sigma = \mathbf{S} -$$ -The classic inverse exists and is the same as the pseudoinverse: -$$ - \mathbf{A}^{-1} = \mathbf{A}^{\dagger} = - \color{blue}{\mathbf{V_{\mathcal{R}}}} \, \mathbf{S}^{-1} \, - \color{blue}{\mathbf{U_{\mathcal{R}}^{*}}} -$$ -Given the linear system $\mathbf{A}x = b$ with $b\notin\color{red}{\mathcal{N}(\mathbf{A})}$, the least squares solution is the point -$$ - x_{LS} = \color{blue}{\mathbf{A}^{-1}b}. -$$ -Only $\color{red}{\mathcal{N}_{\mathbf{A}^{*}}}$ is trivial: -full column rank, row rank deficit -This is the overdetermined case, also known as the full column rank case: $m>n$, $\rho=n$. 
-$$ - \Sigma = -\left[ \begin{array}{c} - \mathbf{S} \\ - \mathbf{0} -\end{array} \right] -$$ -The pseudoinverse provides the same solution as the normal equations: -$$ -\begin{align} -% - \mathbf{A} & = -% - \left[ \begin{array}{cc} - \color{blue}{\mathbf{U_{\mathcal{R}}}} & - \color{red}{\mathbf{U_{\mathcal{N}}}} -\end{array} \right] -% - \left[ \begin{array}{c} - \mathbf{S} \\ - \mathbf{0} - \end{array} \right] -% - \color{blue}{\mathbf{V_{\mathcal{R}}^{*}}} -\\ -% Apinv - \mathbf{A}^{\dagger} & = -% - \color{blue}{\mathbf{V_{\mathcal{R}}}} \, - \left[ \begin{array}{cc} - \mathbf{S}^{-1} & - \mathbf{0} - \end{array} \right] -% - \left[ \begin{array}{c} - \color{blue}{\mathbf{U_{\mathcal{R}}^{*}}} \\ - \color{red}{\mathbf{U_{\mathcal{N}}^{*}}} -\end{array} \right] -\end{align} -$$ -The inverse from the normal equations is -$$ -\begin{align} - \left( \mathbf{A}^{*}\mathbf{A} \right)^{-1} \mathbf{A}^{*} &= -% -\left( -% - \color{blue}{\mathbf{V_{\mathcal{R}}}} \, - \left[ \begin{array}{cc} - \mathbf{S} & - \mathbf{0} - \end{array} \right] -% - \left[ \begin{array}{c} - \color{blue}{\mathbf{U_{\mathcal{R}}^{*}}} \\ - \color{red}{\mathbf{U_{\mathcal{N}}^{*}}} -\end{array} \right] -% A - \left[ \begin{array}{cc} - \color{blue}{\mathbf{U_{\mathcal{R}}}} & - \color{red}{\mathbf{U_{\mathcal{N}}}} -\end{array} \right] - \left[ \begin{array}{c} - \mathbf{S} \\ - \mathbf{0} - \end{array} \right] - \color{blue}{\mathbf{V_{\mathcal{R}}^{*}}} \, -% -\right)^{-1} -% A* -% -\left( - \color{blue}{\mathbf{V_{\mathcal{R}}}} \, - \left[ \begin{array}{cc} - \mathbf{S} & - \mathbf{0} - \end{array} \right] -% - \left[ \begin{array}{c} - \color{blue}{\mathbf{U_{\mathcal{R}}^{*}}} \\ - \color{red}{\mathbf{U_{\mathcal{N}}^{*}}} -\end{array} \right] -\right) \\ -\\ -% - &= -% - \color{blue}{\mathbf{V_{\mathcal{R}}}} \, - \left[ \begin{array}{cc} - \mathbf{S}^{-1} & - \mathbf{0} - \end{array} \right] -% - \left[ \begin{array}{c} - \color{blue}{\mathbf{U_{\mathcal{R}}^{*}}} \\ - \color{red}{\mathbf{U_{\mathcal{N}}^{*}}} -\end{array} \right] \\ -% -&= \mathbf{A}^{\dagger} -% -\end{align} -$$ -The figure below shows the solution as the projection of the data vector onto the range space $\color{blue}{\mathcal{R}(\mathbf{A})}$. - -Only $\color{red}{\mathcal{N}_{\mathbf{A}}}$ is trivial: -full row rank, column rank deficit -This is an underdetermined case, also known as the full row rank case: $m<n$, $\rho=m$.<|endoftext|> -TITLE: Image of dual map is annihilator of kernel -QUESTION [6 upvotes]: Suppose $T:V\to W$ and that $V$ is finite-dimensional. -I want to prove that $$\text{Im }T'=(\ker T)^0$$ where $T'$ is the dual/transpose map and $(\ker T)^0$ is the annihilator of the kernel. -I know that $\phi \in V'$ lies in the annihilator of $\ker T$ if and only if $$\phi(v)=0 \space \forall v\in \ker T$$ if and only if $$\phi(v)=0 \space \forall v \in V \text{ such that } T(v)=0$$ -Now I also know that $\phi \in \text{Im }T'\subset V'$ if and only if $$\phi = f \circ T \text{ for some } f \in W'$$ -I can see that if $v\in \ker T$, then this implies $\phi(v) = f(T(v)) = 0$, so $\phi \in (\ker T)^0.$ -However, I'm having trouble showing the other way, that if $\phi \in (\ker T)^0$, then $\phi \in \text{Im }T'.$ -Could anybody help? - -REPLY [2 votes]: Because you already have that $\text{im } T^*\subseteq(\ker T)^\circ$ you can simply show that their dimensions must be equal and thus we know that they must be equal. 
We know that $$\dim V^*=\dim V=\dim\; \ker T+\dim\; \text{im }T$$ and that $$\dim V^*=\dim\; \text{im } T^*+\dim\; (\text{im } T^*)^\circ.$$ -It is easy to show that $\text{rank } T^*=\text{rank }T.$ Thus we can re-write the equality and it follows immediately that $$\dim \ker T=\dim \; (\text{im } T^*)^\circ.$$ Therefore, the two must be equal.<|endoftext|> -TITLE: Has any one ever tried *this* to define a new version of metric? (division instead of subtraction) -QUESTION [8 upvotes]: Let $x \in \Bbb{R}^+$ be a positive real number. Define $|x|_{\bullet} = \max(x,\dfrac{1}{x})$. -Then define for $a, b \in \Bbb{R}^+, \ \ d(a,b) := |\dfrac{a}{b}|_{\bullet}$ -$d$ satisfies: -$$ -(1) \ d(a,b) \geq 1 \\ -(2) \ d(a,b) = d(b,a) \\ -(3) \ d(a,b) = 1 \iff a = b \\ -(4) \ d(a,b) \leq d(a,c) \cdot d(b,c) -$$ -Let's take a look at $(4)$. - -Suppose that $a \leq b \leq c$: $d(a,b) = \dfrac{b}{a} \leq \dfrac{c}{b}\cdot \dfrac{c}{a}$ since we're working with positive numbers here and can cancel $a$, and bring $b$ to the upper LHS to give equivalently $b^2 \leq c^2$. But the RHS equals $d(b,c)\cdot d(a,c)$. -The condition $a\leq b$ is WLOG for our purposes which isn't necc. easy to see, so give me just this one to make the proof simpler. Now all we need to check are the cases $a \leq c \leq b$ and $c \leq a \leq b$. -Suppose that $a \leq c \leq b$: $d(a,b) = \dfrac{b}{a} \leq \dfrac{b}{c}\cdot\dfrac{c}{a} = \dfrac{b}{a}$ the RHS being $d(b,c)\cdot d(a,c)$. -Suppose that $c \leq a \leq b$: $d(a,b) = \dfrac{b}{a}\leq \dfrac{a}{c}\cdot \dfrac{b}{c} \implies \dots$ (similar to bullet one). - -Thus $d$ satisfies $(1)$ - $(4)$ and is "metric-like" in these properties. So what now can we do with $d$. What about the topology generated by the balls $B_{\epsilon}(x) = \{ y \in \Bbb{R}^+ : d(x,y) \lt \epsilon \}$ where $\epsilon \gt 1$? Specifically I'm interested in $\Bbb{Z}^+$ so define a metric by restricting to this domain, and so on... -Any ideas what could be done with this "metric", or have you seen it before? - -REPLY [7 votes]: Eric Wofsey's comments are very much to the point: your notion of "metric" is just the thing you get when you apply the $\exp$ function to a standard metric. However, there are some things to be said about the specific example you have given. -The metric you have defined (or, more precisely the metric $d(a,b)=\lvert \log a- \log b \rvert$) is an invariant metric on the multiplicative group of reals. This is easy to check: if you multiply $a$ and $b$ by the same number, the distance remains unchanged (because numerator and denominator in your quotient change in the same way). -The general fact there is that whenever you have a topological group whose topology is metrisable, then you can find a left-invariant metric which induces the topology. Those metrics are the ``nice'' ones in such a group, and it is the case here: when you think about positive reals as a group with multiplication, the standard metric inherited from reals (which is invariant under addition) is far less natural than the one you have described. -Moreover, we can also consider the positive reals as a Lie group. In a Lie group, as soon as we fix an inner product in the tangent space at the identity, we can use translation to obtain a so-called metric tensor on the group, which allows us to measure the lengths of (piecewise smooth) curves. If the group is connected, this also gives rise to a metric (where the distance is simply the infimum of lengths of curves connecting the two points). 
The metric that this process gives in case of positive reals (with the standard inner product in the tangent space at $1$) is just $d(a,b)=\lvert \log a-\log b\rvert$. - -Edit: -I feel silly for not thinking about it sooner, but there is also a notion of a valuation. Given a ring $R$ and an ordered abelian group $\Gamma$, a valuation is a function from $R$ to $\Gamma\cup \{\infty\}$ which satisfies certain axioms, and this gives a notion of a distance in the ring -- the oddity here is that an element is close to $0$ if its valuation is very large: zero has valuation $\infty$. A valuation ring is a ring in which all elements have nonnegative valuation. -But there's a twist: sometimes, valuations are written multiplicatively. In this case, we think about $\Gamma$ as a multiplicative group, and instead look at homomorphisms $R\to \Gamma\cup \{0\}$. Under this convention, a valuation ring is a ring in which all elements have valuation $\leq 1$ (notice the inequality), and the "distance" between two elements is given by $v(a-b)=v(a)/v(b)$. The nice thing here is that under this convention, elements are close when the valuation is small. -If $\Gamma$ is actually a subgroup of (positive) reals, we can get from "multiplicative" world to "additive" world by taking each $\gamma$ to $-\log(\gamma)$, and in the other direction by taking $\gamma$ to $\exp(-\gamma)$. Even if $\Gamma$ is not a subgroup of reals, we can still think about the abstract exponential/logarithm maps. -A very specific example of this is the $p$-adic valuation, which (in additive form) takes an integer to $n$, where $n$ is the largest natural number such that $p^n$ divides the integer. Thus the elements close to $0$ are those which are divisible by large powers of $p$. It extends to the rational numbers in the obvious way, and a completion with respect to the notion of distance that arises gives us the field of $p$-adic numbers. A multiplicative $p$-adic valuation is just $2^{-n}$ (or $\exp(-n)$, or $p^{-n}$, it doesn't really matter what base for the exponent you choose, as long as it's greater than $1$).<|endoftext|> -TITLE: Show $p$ prime s.t. $p \not\equiv 1 \mod 3$ is not represented by the binary quadratic form. -QUESTION [6 upvotes]: I am working on the following question: -Let $p>3$ be a prime such that $p \not\equiv 1 \mod 3$. Show that $p$ is not represented by the binary quadratic form $f(x, y) = x^2 + xy + y^2$. -I would appreciate any help. - -REPLY [5 votes]: Remember that you can just add up mods. -If $x \equiv 1 \bmod 3$ and $y \equiv 1 \bmod 3$, then $x^2 + xy + y^2 \equiv 0 \bmod 3$. -If $x \equiv 1 \bmod 3$ and $y \equiv 2 \bmod 3$, then $x^2 + xy + y^2 \equiv 1 \bmod 3$. -If $x \equiv 1 \bmod 3$ and $y \equiv 0 \bmod 3$, then $x^2 + xy + y^2 \equiv 1 \bmod 3$. -If $x \equiv 2 \bmod 3$ and $y \equiv 2 \bmod 3$, then $x^2 + xy + y^2 \equiv 0 \bmod 3$. -Obviously we needn't worry about both $x$ and $y$ satisfying $0 \mod 3$ (then $3 \mid f(x,y)$, which is impossible for a prime $p>3$), and as for the other possibilities with $x$ and $y$ switched, we can ignore them "without any loss of generality." -Since $x^2 + xy + y^2 \equiv 2 \bmod 3$ is impossible, no $p \equiv 2 \bmod 3$ can be represented by it.<|endoftext|> -TITLE: Prove by induction $ \sin x + \sin 2x + ... = \frac {\sin (\frac {n + 1} {2} x)} {\sin \frac{x}{2}} \sin \frac{nx}{2} $ -QUESTION [8 upvotes]: Prove by induction - $$ \sin x + \sin 2x + ... 
+ \sin nx = \frac {\sin (\frac {n + 1} {2} x)} {\sin \frac{x}{2}} \sin \frac{nx}{2} $$ - -What I have for now: -$$ \frac {\sin (\frac {n + 1} {2} x)} {\sin \frac{x}{2}} \sin \frac{nx}{2} + \sin(n + 1)x = \frac {\sin (\frac {n + 2} {2} x)} {\sin \frac{x}{2}} \sin \frac{(n + 1)x}{2}$$ -Letting $y = \frac {(n + 1)x} {2} $ -$$\frac {\sin y} {\sin \frac{x}{2}} \sin (y - \frac{x}{2}) + \sin2y = -\frac {\sin (y + \frac {x} {2} )} {\sin \frac{x}{2}} \sin y $$ -Then I used $\sin (\alpha + \beta)$ and $\sin (\alpha - \beta)$ formulas, bit they didn't help. - -REPLY [5 votes]: You can also see that -\begin{align} -\sum_{k=0}^{n}\sin kx & =\Im \sum_{k=0}^{n}\mathrm{e}^{\mathrm{i}kx}\\ -& = \Im \sum_{k=0}^{n}\left(\mathrm{e}^{\mathrm{i}x}\right)^k\\ -& = \Im\frac{\left(\mathrm{e}^{\mathrm{i}x}\right)^{n+1}-1}{\mathrm{e}^{\mathrm{i}x}-1}\\ -& = \Im\frac{\mathrm{e}^{\mathrm{i}(n+1)x}-1}{\mathrm{e}^{\mathrm{i}x}-1}\\ -& = \Im\frac{\mathrm{e}^{\mathrm{i}\frac{n+1}{2}x}}{\mathrm{e}^{\mathrm{i}\frac{x}{2}}}\frac{\mathrm{e}^{\mathrm{i}\frac{n+1}{2}x}-\mathrm{e}^{-\mathrm{i}\frac{n+1}{2}x}}{\mathrm{e}^{\mathrm{i}\frac{x}{2}}-\mathrm{e}^{-\mathrm{i}\frac{x}{2}}}\\ -& = \Im\mathrm{e}^{\mathrm{i}\frac{nx}{2}}\frac{\sin\left(\frac{n+1}{2}x\right)}{\sin\left(\frac{x}{2}\right)}\\ -& = \sin\left(\frac{nx}{2}\right)\frac{\sin\left(\frac{n+1}{2}x\right)}{\sin\left(\frac{x}{2}\right)}. -\end{align}<|endoftext|> -TITLE: Cardinality of the Euclidean topology and the axiom of choice -QUESTION [11 upvotes]: It is relatively straightforward to prove that the Euclidean topology has the same cardinality as the space itself. -I have sketched a proof below. -The proof seems to rely quite heavily on the axiom of choice. -I'm perfectly fine with (countable) dependent choice — I want to be able to define sequences recursively, for example — but one doesn't usually need more choice for topology in Euclidean spaces. -Is dependent choice enough to show that the topology of $\mathbb R^n$, $n\geq1$, has the same cardinality as $\mathbb R^n$? -If yes, how to prove it? -If not, how much do we know about the cardinality of the topology? -It seems that the cardinality of the topology lies between $\mathfrak{c}$ and $2^{\mathfrak{c}}$, but are there tighter bounds? -Proof of equal cardinalities with choice: -Let $T$ be the topology of $\mathbb R^n$ and $B$ a countable basis for it (rational balls, for example). -Since $|2^B|=|\mathbb R^n|$, it suffices to find an injection $f:\mathbb R^n\to T$ and a surjection $g:2^B\to T$. -These together will establish $|\mathbb R^n|\leq|T|\leq|2^B|=|\mathbb R^n|$ and the conclusion follows. -(Alternatively, the surjection $g$ has an injective right inverse, and we thus have injections both ways between $\mathbb R^n$ and $T$. The existence of a bijection follows from the Schröder–Bernstein theorem.) -We can simply take $f(x)=B(x,1)$ and $g(A)=\bigcup_{U\in A}U$. -Injecting $\mathbb R^n$ to $T$ requires no choice and always $T\subset 2^{\mathbb R^n}$, so at least $\mathfrak{c}\leq|T|\leq2^{\mathfrak{c}}$ without AC. -Without choice, I can't compare cardinals so easily or produce injective right inverses for surjections, so my proof falls apart. - -REPLY [12 votes]: You don't need choice here at all - every open set $U$ has a canonical description in terms of basic open sets, $code(U)=\{B_\eta: B_\eta\subseteq U\}$ (where $\{B_\eta: \eta<\omega\}$ is the set of all balls with rational radius and center). 
We clearly have $code(U)=code(V)\iff U=V$, and the set of codes has size continuum.<|endoftext|> -TITLE: Countable elementary submodels -QUESTION [6 upvotes]: I'm having some trouble understanding elementary submodels. -Let $H_\chi$ be the set of all sets which are hereditarily of cardinality $<\chi$. Let $\textbf{N}=(N,\in)$ be a countable elementary submodel of $(H_{\chi},\in)$. Moreover, let $p\in N$ and $q\in H_{\chi}$ such that $\lvert p\triangle q\rvert$ is finite. Under what sort of circumstances is $q\in N$? - -REPLY [3 votes]: The following are equivalent: - -$(i)$ $q\in N$, -$(ii)$ $p\Delta q\subseteq N$, -$(iii)$ $p\Delta q\in N$. - -$(ii)\rightarrow(iii)$ is easy, since by assumption $p\Delta q$ is finite - it's easy to show that the union of finitely many elements of $N$ is in $N$. -For $(iii)\rightarrow(i)$, note that $N$ contains an element $r$ which $N$ thinks satisfies $r=p\Delta(p\Delta q)$ - by elementarity, we in fact have $r=p\Delta(p\Delta q)$, but this is just $q$. -For $(i)\rightarrow (iii)$, if $q\in N$ then $N$ contains some element $s$ which $N$ thinks is $p\Delta q$ - so by elementarity, $s=p\Delta q$. -Finally, for $(iii)\rightarrow (ii)$, this follows from the following more general fact: - -Suppose $N$ is a countable elementary submodel of $H_\chi$ and $X\in N$ is countable. Then $X\subseteq N$. - -Pf: By elementarity, $N$ contains some $f$ which $N$ thinks is a surjection from $\omega^N$ to $X$. But $\omega^N=\omega$, so by elementarity $f$ is in fact a surjection from $\omega$ to $X$. But then each $x\in X$ is of the form $f(n)$ for some $n\in\omega$, and $\omega\subseteq N$ - so for each $n\in\omega$, $f(n)\in N$. - -Note that these conditions aren't vacuous. For example, fix $p\in N$ and $x\in H_\chi\setminus N$. Then $q:=p\cup\{x\}\not\in N$.<|endoftext|> -TITLE: Clarification on Borel-Cantelli lemma -QUESTION [5 upvotes]: Consider a sequence of decreasing events $A_n\downarrow A$ such that $\sum_{n=1}^\infty P(A_n)<\infty$. Then by the Borel-Cantelli lemma -$$P(\limsup_nA_n) = 0$$ or -$$P(\lim_n A_n^c) = 1$$ -I need to understand why this implies that there exists $n(\omega)<\infty$ a.s. such that $$P(A_n^c)=1,\quad \forall n > n(\omega).$$ -This should be obvious, but I fail to see it. - -REPLY [4 votes]: Note that the reason why we have $$ P(\lim A_n^c) = 1$$ -is because we have $$P(\liminf A_n^c) = 1$$ - -BCL1 gives us: -$$P(\limsup A_n) = 0$$ -$$\to P(\liminf A_n^c) = 1$$ -Let $$\omega \in \liminf A_n^c$$ -Then -$$\exists m \ge 1 \ s.t. \ \forall n \ge m,$$ -$$\omega \in A_n^c$$ -I guess $m$ is random, i.e. $m = n(\omega)$<|endoftext|> -TITLE: What is the limit of $\lim_{n\to \infty} \frac{1}{n^{k+1}}\left(k!+\frac{(k+1)!}{1!}+\frac{(k+2)!}{2!}+\cdots+\frac{(k+n)!}{n!}\right)=?$ -QUESTION [6 upvotes]: For $k\in\mathbb{N},$ -$$\lim_{n\to \infty} \frac{1}{n^{k+1}}\left(k!+\frac{(k+1)!}{1!}+\frac{(k+2)!}{2!}+\cdots+\frac{(k+n)!}{n!}\right)=?$$ -So I am trying to find this limit. I tried to apply limit comparison, but I don't know where to start. I observe that $(k+n)!\geq n!$; is this fact going to help me somehow? Help me out. Thanks. 
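A quick numerical check (a minimal Python sketch, added here for intuition; it is not part of the proof) is consistent with the value $1/(k+1)$ that the answer below derives:

    def s(k, n):
        # (1/n^(k+1)) * sum_{j=0}^{n} (k+j)!/j!, using (k+j)!/j! = (j+1)(j+2)...(j+k)
        total = 0
        for j in range(n + 1):
            term = 1
            for i in range(1, k + 1):
                term *= j + i
            total += term
        return total / n ** (k + 1)

    for k in (1, 2, 3):
        print(k, s(k, 10000), 1 / (k + 1))   # e.g. k=1 gives ~0.50015 vs 0.5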
- -REPLY [3 votes]: Move the $1/n^{k+1}$ inside the sum to see the expression equals -$$\tag1 S_n = \sum_{j=0}^{n} \frac{(j+k)!}{j!n^k}\frac{1}{n} = \sum_{j=0}^{n} \left(\frac{j}{n} + \frac{k}{n}\right)\left(\frac{j}{n} + \frac{k-1}{n}\right)\cdots \left(\frac{j}{n} + \frac{1}{n}\right)\frac{1}{n}.$$ -Let $\epsilon>0.$ Then $k/n < \epsilon$ for large $n.$ For such $n,$ the right side of $(1)$ shows -$$\tag 2 \sum_{j=0}^{n} \left(\frac{j}{n}\right)^k\frac{1}{n} \le S_n \le \sum_{j=0}^{n} \left(\frac{j}{n}+\epsilon\right)^k\frac{1}{n}.$$ -The left and right sums in $(2)$ are Riemann sums, and we see -$$\int_0^1 x^k\,dx \le \liminf S_n \le \limsup S_n \le \int_0^1 (x+\epsilon)^k\,dx.$$ -As $\epsilon \to 0,$ the integral on the right converges to the integral on the left, which shows $\lim S_n = 1/(k+1).$<|endoftext|> -TITLE: log of summation expression -QUESTION [11 upvotes]: I am curious about simplifying the following expression: -$$\log \left(\sum_\limits{i=0}^{n}x_i \right)$$ -Is there any rule to simplify a summation inside the log? - -REPLY [3 votes]: There's this: -$$\log\left(\sum_{i=0}^n x_i\right) = \log(x_0) + \log\left(1+\sum_{i=1}^n\left(\frac{x_i}{x_0}\right)\right)$$ -$x_0$ must be the biggest in the series. It didn't help me with what I'm trying to solve, but I think it answers the question.<|endoftext|> -TITLE: Basic question: Global sections of structure sheaf is a field -QUESTION [8 upvotes]: $X$ is projective and reduced over a field $k$ (not necessarily algebraically closed). Why is $H^0(X,\mathcal{O}_X)$ a field? - -Are there any good lecture notes on this (valuative criteria, properness, projectiveness, completeness)? I really don't have time to go through EGA/SGA. - -REPLY [14 votes]: This is not true. The scheme $X=\mathrm{Spec}(k\times k)$ is projective and reduced over $k$ but the global sections of the structure sheaf do not form a field. You need to assume $X$ is connected too. In any case, by properness, $H^0(X,\mathscr{O}_X)$ is a finite-dimensional $k$-algebra (this is the hard part), hence a finite product of Artin local rings. Since $X$ is connected, $H^0(X,\mathscr{O}_X)$ is a connected ring, meaning that it must be a single Artin local ring. The unique maximal ideal of such a ring coincides with its nilradical, which is zero because $X$ is reduced, so $H^0(X,\mathscr{O}_X)$ must be a field.<|endoftext|> -TITLE: Proving two matrices are equal -QUESTION [5 upvotes]: A friend and I are having some trouble with a linear algebra problem: -Let $A$ and $B$ be square matrices with dimensions $n\times n$ -Prove or disprove: -If $A^2=B^2$ then $A=B$ or $A=-B$ -It seems to be true but the rest of my class insists it's false - I can't find an example where this isn't the case - can someone shed some light on this? -Thanks! - -REPLY [4 votes]: All matrices $E_{ij}$ of the standard basis of $M_n(\mathbf R)$, defined with -$$ a_{kl}=\begin{cases} -1&\text{if }(k,l)=(i,j)\\ -0&\text{if }(k,l)\neq(i,j) -\end{cases}$$ -satisfy the equation $\;E_{ij}^2=0$ if $i\neq j$.<|endoftext|> -TITLE: How do I evaluate the following definite integral: $\int^1_0\frac{x^2\ln x}{\sqrt{1-x^2}}dx$? -QUESTION [5 upvotes]: $$\int^1_0\frac{x^2\ln x}{\sqrt{1-x^2}}dx$$ -I tried using by parts but it didn't help. Also none of the properties of definite integral helps. - -REPLY [3 votes]: Rewrite the integral as$$I(a)=\lim\limits_{a\to0}\frac {\partial}{\partial a}\int\limits_0^1dx\,\frac {x^{a+2}}{\sqrt{1-x^2}}$$And make the substitution $u=x^2$. 
You should get the resulting integral in terms of the beta function.<|endoftext|> -TITLE: How discontinuous does a $[a,b] \to (c,d)$ bijection have to be? -QUESTION [6 upvotes]: I know that bijection from $[a,b]$ to $(c,d)$ can't be continuous, but I'm wondering if such a function could exist if it was discontinuous at just countable points, and if this isn't possible, why? - -REPLY [8 votes]: The function $f:[0,1]\to [0,1)$ given by -$$ -f(x) = \cases{\frac x2 & if $x = \frac{1}{2^n}$ for some $n\in \Bbb N$\\ x & otherwise} -$$ -is the standard example of a bijection $[0,1]\to [0,1)$ and it is discontinuous only at a countable number of points. It can also without too much work be altered to fit $[a, b]\to(c, d)$ without losing that property. - -REPLY [3 votes]: Such functions exist. For simplicity, I'll map the interval $[-1,1]$ to $(-1,1)$, but the technique works in the general case. -For $n \geq 1$, let $a_n = \frac{1}{n}$ and $b_n = -\frac{1}{n}$. Define $f\colon\thinspace [-1,1]\rightarrow (-1,1)$ as follows: --For values in the sequence $a_n$, $f(a_n) = a_{n+1}$. --For values in the sequence $b_n$, $f(b_n) = b_{n+1}$. --For all other $x$, $f(x) = x$. -One can easily verify that $f$ is a bijection from $[-1,1]$ to $(-1, 1)$. And it is continuous everywhere except on the sequences $\{a_n\}$ and $\{b_n\}$, which are countably many points.<|endoftext|> -TITLE: Is there any group with self-normalizing Sylow $p$-subgroup? -QUESTION [6 upvotes]: Let $G$ be a finite group and assume that $G$ is not a $p$-group. I am looking for an example that for every Sylow subgroup $P$ of $G$, -$$N_G(P)=P$$. -I have doubt whether such groups exist. - -REPLY [3 votes]: There are no such groups. Let us a call a group strange if all of its Sylow subgroups are self-normalizing. -We first show that if $G$ is strange and $N \unlhd G$ then $G/N$ is strange. If not, then there exists $P/N \in {\rm Syl}_p(G/N)$ with $P/N \lhd K/N$ and $P \ne K$. Let $Q \in {\rm Syl}_p(G)$ with $QN=P$. Then, by the Frattini argument, $K = N_K(Q)N$, so $Q \ne N_K(Q)$, contradicting $G$ strange. -Now let $G$ be a group of minimal order that is strange and not a $p$-group, and let $N$ be a minimal normal subgroup of $G$. Then, since $G/N$ is strange, it must be a $p$-group for some prime $p$. Since $G$ is not a $p$-group, $N$ is not a $p$-group, and if $Q \in {\rm Syl}_q(N)$ for some $q \ne p$, then $Q \in {\rm Syl}_q(G)$ and, by the Frattini Argument, $G=N_G(Q)N$. Hence, since $G$ is strange, we must have $G=N$ and $G$ is simple. -In fact $G$ is nonabelian simple and it is known from the classification that $G$ has a cyclic Sylow $p$-subgroup $P$ for some prime $p$. (See, for example, the discussion here.) But now, by Burnside's Transfer Theorem, $P$ cannot be self-normalizing in $G$, contradiction.<|endoftext|> -TITLE: Every separable Banach space is isomorphic to $\ell_1/A$ for some closed $A\subset \ell_1$ -QUESTION [7 upvotes]: How to prove the following mind-blowing fact? - -Let $X$ be a separable Banach space and let $\ell_1$ be the space of all absolutely summable scalar sequences. Then there exists such closed subspace $A\subset \ell_1$ that factor space $\ell_1/A$ and $X$ are isomorphic as normed spaces. - -Edit: -So what, this is like a classification up to isomorphism of all separable Banach spaces? Each separable Banach space corresponds to some closed subspace of $\ell_1$? - -REPLY [9 votes]: Let $X$ be a Banach space and let $\{x_d\colon d\in D\}$ be a dense subset of the unit ball of $X$. 
Consider the space $\ell_1(D)$ of all absolutely summable sequences on $D$. We define a linear map $T\colon \ell_1(D) \to X$ by -$$T\Big((\lambda_d)_{d\in D}\Big) = \sum_{d\in D}\lambda_d x_d\qquad ((\lambda_d)_{d\in D} \in \ell_1(D)).$$ -This is a well-defined linear map as the right-hand side converges absolutely for every $(\lambda_d)_{d\in D}\in \ell_1(D)$, hence it defines an element of $X$. For each $(\lambda_d)_{d\in D} \in \ell_1(D)$ we have $$\|T\big((\lambda_d)_{d\in D}\big)\|\leqslant \sum_{d\in D}\|\lambda_d x_d\|\leqslant \sum_{d\in D}|\lambda_d|=\|(\lambda_d)_{d\in D} \|.$$ -Consequently, $T$ is a bounded (actually norm-one) linear operator. Since $\{x_d\colon d\in D\}$ is dense in the unit ball of $X$, $T$ is surjective (given $x$ with $\|x\|<1$, density lets one recursively pick indices $d_k$ so that $x=\sum_{k\geqslant 1}2^{1-k}x_{d_k}$, an absolutely summable combination). By the first isomorphism theorem, -$$X\cong \ell_1(D) / \ker T.$$ -Note that separability of $X$ means that we may take $D=\mathbb{N}$. -Here's a reference to the literature.<|endoftext|> -TITLE: Why do complex eigenvalues/eigenvectors cause rotation? -QUESTION [5 upvotes]: I am trying to understand the intuition behind eigenvalues/eigenvectors through the lens of repeated matrix multiplication: -Given a $2\times2$ matrix $M$ and $2D$ vector $v$, multiplying $v$ repeatedly -with $M$ causes the result ($M^n v$) to gravitate towards one of the eigenspaces of $M$ because: -$$M^n v = M^n(\alpha x_1 + \beta x_2) = (\alpha \lambda_1^n x_1 + \beta \lambda_2^n x_2)$$ -where $x_1$ and $x_2$ are eigenvectors of $M$ and $\lambda_1$ and $\lambda_2$ the corresponding eigenvalues. As $n$ gets larger $M^n v$ will gravitate towards either $\alpha \lambda_1^n x_1$ or $\beta \lambda_2^n x_2$, whichever has the dominant eigenvalue. -assuming: $v = \alpha x_1 + \beta x_2$ -So the above is a way to connect the abstract concept of eigenvalue/eigenvector to something concrete: what happens when you apply a matrix over and over to a vector. -However, the intuition breaks down for me with complex eigenvectors. I know repeated multiplication by a matrix with complex eigenvectors causes the result to either spiral outwards or inwards. -Is there simple math such as above to see why? -Edit: I know similar questions have been asked before, but I ask in the context of repeated matrix multiplication - -REPLY [3 votes]: Every square matrix is similar to a matrix in what's called Jordan Canonical Form. This has various properties, but most important here is that it is upper triangular, and the eigenvalues (of both the new and original matrix) are on the diagonal of the resulting matrix. -The way to think about this process is that we change bases, and in that new basis the matrix becomes upper triangular. This will help the existing intuition given in the OP about iterating the matrix, because iterating a triangular matrix will simply exponentiate the diagonal entries (and do predictable but somewhat messy stuff to the part above the diagonal). -Now, if an eigenvalue is complex, all the above still holds. However if the original matrix had real entries, then that eigenvalue must be paired with its complex conjugate -- both $a+bi$ and $a-bi$ must be eigenvalues. Then, we can rearrange the standard JCF matrix, as described above, into what is called Real Jordan Canonical Form. Now, instead of the matrix being entirely upper triangular, there will be some $2\times 2$ blocks along the diagonal, a rearrangement of what was a complex eigenvalue-plus-its-conjugate pair in the original JCF. The entries of these $2\times 2$ blocks are exactly the real and imaginary parts of these two complex eigenvalues. 
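(A concrete numpy sketch of my own - the numbers $a,b$ are an arbitrary example - showing one such block acting by scaling and rotation: the norm is multiplied by $|\lambda|$ and the angle advances by $\arg\lambda$ at every step, which is exactly the inward or outward spiral from the question.)

    import numpy as np

    a, b = 0.9, 0.4                      # eigenvalues are a +/- bi (arbitrary example)
    M = np.array([[a, -b],
                  [b,  a]])              # real Jordan block for the conjugate pair
    lam = complex(a, b)

    v = np.array([1.0, 0.0])
    for n in range(1, 6):
        v = M @ v
        # norm behaves like |lambda|^n; angle advances by arg(lambda) each step
        print(n, np.linalg.norm(v), abs(lam) ** n,
              np.angle(v[0] + 1j * v[1]), n * np.angle(lam))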
-Each of these $2\times 2$ blocks performs a rotation in the two-dimensional space spanned by those two basis elements. Hence, a $6\times 6$ matrix with six complex eigenvalues might be doing three different rotations in three different two-dimensional directions at once. -In the original $2\times 2$ case, the reason that a complex eigenvalue leads to a rotation is that it must appear with its complex conjugate, assuming the original matrix has real entries. As a pair they give a rotation. If the original matrix did not have real entries, then the matrix need not represent a pure rotation, because the two eigenvalues need not be related.<|endoftext|> -TITLE: How to prove this infinite product identity? -QUESTION [7 upvotes]: How can I prove the following identity? -$$\large\prod_{k=1}^\infty\frac1{1-2^{1-2k}}=\sum_{m=0}^\infty\left(2^{-\frac{m^2+m}{2}}\prod_{n=1}^\infty\frac{1-2^{-m-n}}{1-2^{-n}}\right)$$ -Numerically both sides evaluate to -$$2.38423102903137172414989928867839723877...$$ - -REPLY [4 votes]: The identity is in fact true for any $0 < x < 1$: $$\prod_{k=1}^{\infty}\frac{1}{1-x^{2k-1}}=\sum_{m=0}^{\infty}\left(x^{\frac{m^2+m}{2}}\prod_{n=1}^{\infty}\frac{1-x^{m+n}}{1-x^{n}}\right) = \sum_{m=0}^\infty\left(x^{\frac{m^2+m}{2}}\prod_{n=1}^{m}\frac{1}{1-x^{n}}\right)$$ -We note the telescopic product: $$\displaystyle \prod_{k=1}^{\infty}\frac{1}{1-x^{2k-1}} = \prod_{k=1}^{\infty} \frac{1-x^{2k}}{1-x^k} = \prod_{k=1}^{\infty}(1+x^k)$$ -On the other hand the expression: -\begin{align}\prod_{k=1}^{\infty}(1+x^k) &= \sum_{n=0}^{\infty} \left(\sum\limits_{1\le j_1 < j_2 < \cdots < j_n} x^{j_1+\cdots +j_n}\right)\tag{1}\\&= \sum_{n=0}^{\infty} \left(\sum\limits_{1\le k_1,k_2, \cdots ,k_n} x^{nk_1+(n-1)k_2\cdots +k_n}\right)\tag{2}\\&= \sum_{n=0}^{\infty} \left(\sum\limits_{k_1 \ge 1} x^{nk_1}\sum\limits_{k_2 \ge 1}x^{(n-1)k_2} \cdots \sum\limits_{k_n \ge 1}x^{k_n}\right)\tag{3}\\&= \sum\limits_{n=0}^{\infty} \left(\frac{x^n}{1-x^n}\cdot \frac{x^{n-1}}{1-x^{n-1}}\cdots\frac{x}{1-x}\right)\\&= \sum\limits_{n=0}^{\infty} x^{\frac{n^2+n}{2}}\prod\limits_{m=1}^{n}\frac{1}{1-x^m}\end{align} -Justifications: -$(1)$ Coefficient of $z^n:$ in the infinite product $\displaystyle \prod\limits_{k=1}^{\infty}(1+x^kz)$ being $\displaystyle \left(\sum\limits_{1\le j_1 < j_2 < \cdots < j_n} x^{j_1+\cdots +j_n}\right)$. -$(2)$ Made the change of variable $k_m = j_m - j_{m-1}$ for $m \ge 1$ where, $j_0 = 0$. -Then note that $j_1+\cdots +j_n = nk_1+(n-1)k_{2}+\cdots + k_n$ -$(3)$ Used the formula for infinite geometric progression: $\displaystyle \sum\limits_{k\ge 1} x^{mk} = \frac{1}{1-x^m}$<|endoftext|> -TITLE: Show that all the cards contain the same number. -QUESTION [5 upvotes]: Natural numbers from $1$ to $99$ (not necessarily distinct) are written on $99$ cards. It is given that the sum of the numbers on any subset of cards (including the set of all cards) is not divisible by $100$. Show that all the cards contain the same number. -I was suggested by a friend to approach this problem by contradiction. -Assume to the contrary that there are two cards which are distinct. -However, I am not really seeing how that would help. -Does anyone see how this proof would be approached? Any starting point would help me a lot. - -REPLY [4 votes]: Let the number on the cards be $\{a_1,a_2, \ldots , a_{99}\}$. Suppose $a_1 \neq a_2$. 
Then consider the following $99$ subsets -$$\{a_1\}, \{a_1,a_2\}, \{a_1,a_2,a_3\}, \ldots \{a_1,a_2, \ldots a_{99}\}.$$ -Then the corresponding sums will be - $$s_1=a_1,\, s_2=a_1+a_2, \, s_3=a_1+a_2+a_3, \ldots , \, s_{99}=a_1+a_2+ \dotsb + a_{99}.$$ -None of them is divisible by $100$; moreover, they must give pairwise distinct remainders modulo $100$ (and hence cover all $99$ possible non-zero remainders modulo $100$). Indeed, if not, then there exist $i,j$ with $i < j$ such that -$s_i \equiv s_j \pmod{100}.$ But then the subset $\{a_{i+1}, a_{i+2}, \ldots ,a_{j}\}$ will have sum $s_j-s_i$, which is divisible by $100$, contradicting the given condition. -Now consider $a_2$. Since $a_1,a_2 \in \{1,2,\ldots ,99\}$ are distinct, $a_2 \not\equiv a_1 = s_1 \pmod{100}$. Also $a_2 \not\equiv s_2 \pmod{100}$ either, otherwise $a_1 \equiv 0 \pmod{100}$. -Therefore $a_2 \equiv s_k \pmod{100}$ for some $k \geq 3$. -This is a contradiction because then the subset $\{a_1,a_3, \ldots ,a_{k}\}$ will have sum $s_k - a_2$, which is divisible by $100$. -So we can only have $a_1=a_2$.<|endoftext|> -TITLE: Prove that a polytope is closed -QUESTION [6 upvotes]: Let the polytope $S$ be defined by $$S:=co \left\{ x_1,x_2,...,x_k \right\}$$ where $x_1,x_2,...,x_k \in \mathbb{R^n}$ and $co \left \{... \right \}$ is the convex hull. Prove that $S$ is closed. -I tried the following. I want to show that $S=cl(S)$. -I've proved that $S \subseteq cl(S)$ holds for every set. Now I want to prove that $cl(S)\subseteq S$. -I know that $cl(S) = int(S) \cup bd(S)$. So, take $x \in cl(S)$. -If $x \in int(S)$ then it's clear that $x \in S$. -The case $x \in bd(S)$ is not clear, but intuitively the border consists of the convex combinations between $x_i$ and $x_j$ (in pairs), $i,j=1,2,...,k$. I don't know how to write it formally, nor how to prove that my intuition about the border is actually $bd(S)$ - -REPLY [3 votes]: In fact, the convex hull $\mathrm{conv}\{x_1,\ldots,x_k\} := \{\sum_{1 \le j \le k}t_jx_j | t \in \mathbb{R}^k, t \ge 0, \sum_{1 \le j \le k}t_j = 1\}$ is compact (in the usual euclidean topology)! -Step 1: The simplex $\Delta_k := \{t \in \mathbb{R}^k | t \ge 0, \sum_{1 \le j \le k}t_j = 1\}$ is compact. Indeed it is closed, being the intersection of closed sets, namely the orthant $\mathbb{R}^k_+$ and the hyperplane $H:= \{t \in \mathbb{R}^k | \sum_{1 \le j \le k}t_j = 1\}$. Next, $\Delta_k$ is a subset of the hypercube $[0, 1]^k$, and is therefore bounded. -Step 2: The map $g: \Delta_k \rightarrow \mathbb{R}^n$, $t \mapsto \sum_{1 \le j \le k}t_jx_j$ is continuous. Thus $\mathrm{conv}\{x_1,\ldots,x_k\} = g(\Delta_k)$ is the continuous image of a compact set and is therefore compact.<|endoftext|> -TITLE: Polya's urn (martingale) -QUESTION [8 upvotes]: Suppose you have an urn containing one red ball and one green ball. You draw one at random; if the ball is red, put it back in the urn with an additional red ball, otherwise put it back and add a green ball. Repeat this procedure and let the random variable $X_n$ be the number of red balls in the urn after $n$ draws. -Let $Y_n=\frac{1}{n+2}X_n$. Find $\mathbb{E}\left(Y_{n}\right)$ and prove that $Y_n$ is a martingale with respect to $X_n$. 
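A quick Monte Carlo sanity check (a small Python sketch added for intuition, not a proof) agrees with the value $\mathbb{E}(Y_n)=1/2$ derived in the answer below:

    import random

    def average_Y(n, trials=20000):
        total = 0.0
        for _ in range(trials):
            red, green = 1, 1
            for _ in range(n):
                if random.random() < red / (red + green):
                    red += 1           # drew red: return it plus an extra red
                else:
                    green += 1         # drew green: return it plus an extra green
            total += red / (n + 2)     # Y_n = X_n / (n + 2)
        return total / trials

    random.seed(0)
    print(average_Y(10), average_Y(50))   # both hover around 0.5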
- -MY ATTEMPT: -We have $\mathbb{\mathbb{\textrm{E}}}\left(\left.X_{n+1}\right|X_{n}\right)=X_{n}+\dfrac{X_{n}}{n+2}=\dfrac{n+3}{n+2}X_{n}$, so -$\mathbb{\mathbb{\textrm{E}}}\left(\left.Y_{n+1}\right|X_{n}\right)=\dfrac{1}{n+3}\mathbb{\mathbb{\textrm{E}}}\left(\left.X_{n+1}\right|X_{n}\right)=\dfrac{1}{n+3}\cdot\dfrac{n+3}{n+2}X_{n}=\dfrac{1}{n+2}\cdot X_{n}=Y_{n}$ -It's ok? -And, can you help me to find $\mathbb{\mathbb{\textrm{E}}}\left(Y_{n}\right)$? - -REPLY [4 votes]: Looks good, as indeed -$$ X_{n+1} = X_n + R_{n+1},$$ -where $R_{i}$ denotes the indicator variable that takes value $1$ if color of the $i$-th ball extracted is red, and $0$ if green. By definition we have that the urn contains $X_n$ red and $n+2-X_n$ green balls after $n$ extractions. Then the conditional probability given $X_n$ of a red ball on the $n+1$-th extraction (equal to the conditional expectation given $X_n$ of $R_{n+1}$ that we need) is $$\frac{X_n}{n+2}=Y_n.$$ -We also observe that -$$ X_n = 1+\sum_{i=1}^n R_i. $$ -Taking expectation we get: -$$ \mathbf{E}\left[ X_n\right] = 1+\sum_{i=1}^n \mathbf{E}\left[ R_i\right].$$ -As all $R_i$ have the same distribution as $R_1$, we get: -$$ \mathbf{E}\left[ R_i\right] = \mathbf{E}\left[ R_1\right] =\frac{1}{2},$$ -for all $i\in \{1,\ldots , n\}$. -Our indicator variables have the same distribution due to the fact that the sequence of variables $R_1,\ldots, R_n$ is exchangeable, as its joint distribution -$$\mathbf{P}\left(R_1=c_1,\ldots, R_n=c_n\right) $$ $$= \mathbf{P}\left(R_1=c_1\right)\mathbf{P}\left(R_2=c_2 | R_1=c_1\right) \ldots\mathbf{P}\left(R_n=c_n | R_1=c_1,\ldots, R_{n-1}=c_{n-1}\right) $$ -$$ = \frac{c!(n-c)!}{(n+1)!}$$ depends on $c_1,\ldots,c_n$ only through the number of red balls $c$, $c=c_1+\ldots + c_n$. -To conclude, we have $\mathbf{E}[X_n]=(n+2)/2$, so -$$\mathbf{E}[Y_n]=1/2.$$ -This can also be seen directly as we already know that $Y_n$ is a martingale, so (proof here) -$$\mathbf{E}[Y_n]=\mathbf{E}[Y_{n-1}]=\ldots = \mathbf{E}[Y_1]=1/2.$$<|endoftext|> -TITLE: Hartshorne problem III.5.2(a) -QUESTION [8 upvotes]: Consider problem III.5.2(a) in Hartshorne's Algebraic Geometry: - -Let $X$ be a projective scheme over a field $k$, let $\mathcal O_X(1)$ - be a very ample invertible sheaf on $X$ over $k$, and let $\mathcal F$ - be a coherent sheaf on $X$. Show that there is a polynomial $P(z)\in - \mathbb Q[z]$, such that $\chi(\mathcal F(n))=P(n)$ for all - $n\in\mathbb Z$. We call $P$ the Hilbert polynomial of $\mathcal F$ - with respect to the sheaf $\mathcal O_X(1)$. [Hints: Use induction on - dim Supp $\mathcal F$, general properties of numerical polynomials (I, - 7.3), and suitable exact sequences -$$0\rightarrow \mathcal R \rightarrow \mathcal F(-1)\rightarrow - \mathcal F \rightarrow \mathcal L \rightarrow 0.]$$ - -By III.2.10, we may suppose $X=\mathbb P^n_k$ for some $n$. -I think the key difficulty here is describing an injective map $\alpha:\mathcal F(-1)\rightarrow \mathcal F$. For any map between these two sheaves, we get an exact sequence of the kind in the hint by taking the relevant quotient and kernel sheaves. If this map is injective, we get a short exact sequence, and we can use the additivity of the Euler characteristic on short exact sequences to say -$$\chi(\mathcal F) - \chi(\mathcal F(-1))= \chi (\mathcal L).$$ -Tensoring with $\mathcal O(n)$ gives -$$\chi(\mathcal F(n)) - \chi(\mathcal F(n-1))= \chi (\mathcal L(n)).$$ -So if $\chi(L(n))$ is eventually a polynomial, then $\mathcal F(n)$ is too, by I.7.3(b). 
If we construct $\alpha$ in such a way that $\mathcal L$ has support at least one dimension less than the support of $\mathcal F$, we're done by induction on the dimension of the support. (We also need to handle dimension $0$ separately, and I omit this discussion because it's not too hard.) -So, let me say what I think $\alpha$ should be. -Consider a global section $s$ of $O(1)$. Tensoring with $s$ gives a map $\mathcal F(-1)\rightarrow \mathcal F$. In some affine chart, this looks like multiplication by $s$, I think. Now we examine the stalks of the quotient sheaf. The first map, on the level of stalks, is an isomorphism when $s\mathcal F_p=\mathcal F_p$, which happens in particular when $s$ is a unit in the local ring $\mathcal O_{X,p}$, and this happens when $s_p \notin \mathfrak m_p$. Then the support of $\mathcal L$ is contained in the intersection of the support of $\mathcal F$ and $V(s)$, where we consider $s$ as a linear polynomial over $k$. So clearly the dimension of the support goes down by at least one, since hyperplanes have codimension $1$. -It seems tricker to choose $s$ so that $\alpha$ is injective. By II.5.15, we know that every (quasi)coherent sheaf on projective space over a field is given as $\tilde M$ for some module $M$ over $k[x_i]$. We need $s$ to not be a zero-divisor. Recall that the set of zero-divisors is exactly the union of the associated primes, and that there are finitely many associated primes. But primes represent subvarieties, and these subvarieties might be impossible to avoid. For example, if a few hyperplanes are among the associate primes, then there’s no way we can choose $\alpha$ to be injective. Is this analysis correct? -If so, it seems we need to analyze $\mathcal R$ more carefully and not just suppose it disappears. Could we say something like, its support lies on the union of the associated primes, which are all subvarieties of codimension at least 1, so we are finished by an argument similar to the one indicated above? Explicitly, we would have -$$\chi(\mathcal F(n)) - \chi(\mathcal F(n-1))= \chi (\mathcal L(n)) - \chi (\mathcal R(n)),$$ -and then we can induct on dimension and apply I.7.3(b). - -REPLY [2 votes]: I don't think you need an injective map $\mathcal{F}(-1)\to \mathcal{F}$. The point is that since $\chi$ is additive, then for any exact sequence -$$0\to \mathcal{F}_1\to\mathcal{F}_2\to...\to \mathcal{F}_n\to 0$$ -we have $\sum_{i=1}^n (-1)^i\chi(\mathcal{F}_i)=0$. Just split the long exact sequence into many short exact sequences. With this in mind here is my attempt at a solution: -We know $Z:=\operatorname{Supp}(\mathcal{F})\subset \mathbb{P}^n$ is a closed subset. Let $L\subset \mathbb{P}^n$ be a hyperplane which does not contain any irreducible component of $\mathcal{F}$. Then $L$ corresponds to some section $s\in \mathcal{O}(1)$ and it gives rise to a map $\mathcal{F}(-1)\to \mathcal{F}$. Now consider an exact sequence -$$0\to \mathcal{R}\to \mathcal{F}(-1)\to \mathcal{F}\to \mathcal{L}\to 0.$$ -We have $\operatorname{Supp}(\mathcal{L}),\operatorname{Supp}(\mathcal{R})\subset \operatorname{Supp}(\mathcal{F})\cap V(s)$ because on $D_+(s)$ the map $\cdot s$ is an isomorphism. Therefore both have dimension strictly less than $\dim\operatorname{Supp}(\mathcal{F})$. By induction we can find polynomials $P_\mathcal{R}(z), P_\mathcal{L}(z)$ such that -$$P_\mathcal{R}(n)=\chi(\mathcal{R}(n)), \ P_\mathcal{L}(n)=\chi(\mathcal{L}(n))$$ -for all $n\in \mathbb{Z}$. 
We then get, for any $n\in \mathbb{Z}$, -$$\chi(\mathcal{F}(n))-\chi(\mathcal{F}(n-1))=P_\mathcal{L}(n)-P_\mathcal{R}(n).$$ -It then follows from the proof of proposition I.7.3 in Hartshorne that there is a numerical polynomial $P_\mathcal{F}(z)$ such that $P_\mathcal{F}(n)=\chi(\mathcal{F}(n))$ for all $n$.<|endoftext|> -TITLE: Infinite closed subset of $S^1$ such that the squaring map is a bijection? -QUESTION [5 upvotes]: Is there an infinite closed subset $X$ of the unit circle in $\mathbb C$ such that the squaring map induces a bijection from $X$ to itself? - -REPLY [2 votes]: Let us think of $S^1$ as $\mathbb{R}/\mathbb{Z}$, so we want an infinite closed subset $X$ on which multiplication by $2$ is a bijection. Suppose you have such an $X$; write $T:X\to X$ for the multiplication by $2$ map. Each element $x\in X$ determines a biinfinite binary expansion $f_x:\mathbb{Z}\to\{0,1\}$, such that $T^n(x)=\sum_{k=1}^\infty f_x(k+n)2^{-k}$ for each $n\in \mathbb{Z}$. Say that a finite string of $0$s and $1$s is admissible if it appears as a sequence of consecutive values of some $f_x$ (i.e., if it appears as a sequence of consecutive digits in the binary expansion of some element of $X$). For each $n$, let $A_n\subseteq\{0,1\}^n$ consist of those sequences $s$ such that both $0^\frown s$ and $1^\frown s$ are admissible (where $^\frown$ is string concatenation). If $A_n$ is empty for some $n$, that means that given a sequence of $n$ consecutive digits in any $f_x$, all of the preceding digits are uniquely determined. By pigeonhole, for each $x$, some sequence of $n$ digits must appear infinitely often in the restriction of $f_x$ to $\mathbb{N}$, and it follows that every $f_x$ is periodic. Furthermore, there is a uniform bound on the periods of all the $f_x$ (because if some particular $s\in\{0,1\}^n$ appears infinitely often in $f_x|_\mathbb{N}$, that determines the period of $f_x$, and there are only finitely many different such $s$). So there are only finitely many different $f_x$, so $X$ is finite. This is a contradiction. -Thus each $A_n$ is nonempty. By König's lemma, it follows that there exists an infinite string $s:\mathbb{N}\to\{0,1\}$ such that every initial segment of $s$ is in the appropriate $A_n$. But then since $X$ is closed, the numbers $y_0$ and $y_1$ whose binary expansions are $0^\frown s$ and $1^\frown s$, respectively, are in $X$. Since $T(y_0)=T(y_1)$, this is a contradiction. -Thus no such $X$ exists.<|endoftext|> -TITLE: Power series method for differential equation $x^2y''+y=0$ -QUESTION [5 upvotes]: I tried to solve $(x^2)y''+y=0$ using power series, but I cannot get the general solution or the relation at least -$$(x^2)y''+y=0 $$ -$$ \sum_{n=2}^\infty c_n n(n-1) x^n + \sum_{n=0}^\infty c_n x^n$$ -$$ \sum_{k=2}^\infty c_k k(k-1) x^k + \sum_{k=0}^\infty c_k x^k$$ -$$ c_o+c_1+\sum_{k=2}^\infty c_k (k^2-k+1) x^k$$ -From here $C_0=0$, $C_1=0$, $ C_k(k^2 -k+1)=0$. -Is this right? How can I get a recurrence relation for coefficients or the general solution of the series? - -REPLY [4 votes]: First rearrange $x^2y''+y=0$ to -$$y''+\frac{1}{x^2}y=0$$ -Since the equation has a regular singular point a likely plan of attack assumes $y = \sum\limits_{n= 0}^\infty c_n x^{n+p}$ where $p$ is yet to be determined. 
Taking derivatives and substituting above gives -$$\sum\limits_{n= 0}^\infty (n+p)(n+p-1)c_n x^{n+p-2}+\frac{1}{x^2}\sum\limits_{n= 0}^\infty c_n x^{n+p}=0$$ -$$\sum\limits_{n= 0}^\infty \bigg\{(n+p)(n+p-1) +1\bigg\} c_n x^{n+p-2}=0$$ -For $n=0$, we have the lowest power of $x$. It's coefficient is $\big\{p(p-1)+1 \big\}c_0$. This must equal zero if the equation is to equal zero for any $x$. For a nontrivial solution, $c_0 \ne 0$. Then -$$\big\{p(p-1)+1 \big\}c_0=0$$ -$$p^2-p+1=0$$ -This has complex roots $p_{1,2} = \frac{1\pm i\sqrt{3}}{2}$, which implies that $y$ is a complex series where $c_n \in \mathbb{C}$. Substituting our roots back into the assumed form for $y$, this implies that the solutions have form -$$y = x^{p1}\sum\limits_{n= 0}^\infty c_n x^{n} \quad \mathtt{and} \quad y = x^{p2}\sum\limits_{n= 0}^\infty c_n x^{n}$$ -There is a good deal of analysis that actually winds up with a generalized form for such a case! Solutions to the Frobenius equation with complex indicial roots are given by a general solution where the roots are $p_{1,2} = a +ib$: -$$y_1 = (x-x_0)^{a}\cos\left(b \ln|x-x_0| \right)$$ -$$y_2 = (x-x_0)^{a}\sin\left(b \ln|x-x_0| \right)$$ -where $x_0$ is the singular point. For us $x_0 = 0$, so for our roots we have -$$y_1 = A x^{1/2}\cos\left(\frac{\sqrt{3}}{2} \ln|x| \right)$$ -$$y_2 = B x^{1/2}\sin\left(\frac{\sqrt{3}}{2} \ln|x| \right)$$ -This matches WolframAlpha. I recommend researching the Frobenius Method where the indicial roots are complex for further information.<|endoftext|> -TITLE: Composition of lower semicontinuous function with continuous function is lower semicontinuous -QUESTION [5 upvotes]: Assume that $f\colon \mathbb{R}^n \to\mathbb{R}$ is lower semicontinuous at $g(a)$ and $g\colon \mathbb{R}^m \to\mathbb{R}^n$ is continuous at $a \in\mathbb{R}$. Define $h = f \circ g \colon \mathbb{R}^m \to \mathbb{R}$ by $h(x) = f(g(x))$ for every $x \in \mathbb{R}^n$. Then prove that $h$ is lower semicontinuous at $a$. - -I am unsure how to start this proof, any suggestions$?$ - -REPLY [3 votes]: According to the definition you provide in the comments, to prove that $h$ is lower semicontinuous at $a$ we need to prove that, for any $\varepsilon >0$ there exists $\delta >0$ such that $h(x)>h(a)-\varepsilon$, when $\|x-a\|<\delta$. -We know, since $g$ is continuous at $a$, that for any given $\varepsilon>0$ there exists $\delta_1>0$ such that $\|g(x)-g(a)\|<\varepsilon$, if $\|x-a\|<\delta_1$. -Because $f$ is lower semicontinuous at $g(a)$, we know that, for any $\varepsilon >0$, there exists $\delta_2>0$ such that $f(y)>f(g(a))-\varepsilon$, when $\|y-g(a)\|<\delta_2$. - -Let $\varepsilon>0$. By the fact that $f $ is lower semicontinuous at $g(a)$, by only considering $y=g(x)$ (that is, taking $y$ in the range of $g$) we know that there exists a $\delta_2>0$ such that $\|g(x)-g(a)\|<\delta_2$ implies $f(g(x))>f(g(a))-\varepsilon$. Now, by choosing $\varepsilon=\delta_2$ in the definition of continuity for $g$, we get that there exists $\delta_1>0$ such that if $\|x-a\|<\delta_1$ then $\|g(x)-g(a)\|<\delta_2$. -By putting the two together and taking $\delta=\delta_1$, we get that for any $\varepsilon>0$ there exists $\delta>0$ such that $f(g(x))>f(g(a))-\varepsilon$ when $\|x-a\|<\delta$, which is what we wanted to prove.<|endoftext|> -TITLE: Prove that the sum $a_1+a_2+....+a_n+b_1+b_2+...+b_n$ cannot equal to $0$ -QUESTION [7 upvotes]: We are given an $n \times n$ board, where $n$ is an odd number. In each cell - of the board either $+1$ or $-1$ is written. 
Let $a_k$ and $b_k$ - denote the products of the numbers in the $k$-th row and the $k$-th - column, respectively. Prove that the sum - $a_1+a_2+\cdots+a_n+b_1+b_2+\cdots+b_n$ cannot equal to $0$ - -I started off with some examples: - -If its a $3 \times 3$ board, then: -you can either have: - -five $-1$'s and four $1$'s -five $1$'s and four $-1$'s -three $1$'s and six $-1$'s -three $-1$'s and six $1$'s -two $-1$'s and seven $1$'s -two $1$'s and seven $-1$'s -one $1$'s and eight $-1$'s -one $-1$'s and eight $1$'s - -regardless, the sum will never be $0$. -but how would i prove that? - -REPLY [3 votes]: The product of the $a_i$ is equal to the product of the $b_i$. This is because each of these products is the product of all the elements in the $n\times n$ array. -So the numbers of $-1$'s among the $a_i$, and among the $b_i$, have the same parity (both numbers are even or both are odd). -It follows that the combined number of $-1$'s among the $a_i$ and $b_i$ is even. -This ensures that the sum of all the $a_i$ and all the $b_i$ cannot be $0$. For if the sum is $0$, the total number of $-1$'s among the $a_i$ and $b_i$ must be $n$. And $n$ is odd.<|endoftext|> -TITLE: Question based on chords of a circle -QUESTION [6 upvotes]: Question: -Given a circle and two points $P$ and $Q$ not neccessarily on that circle. Perpendiculars are drawn from points $P$ and $Q$ to the polar lines of the points $Q$ and $P$ respectively. Prove that the ratio of of lengths of those perpendiculars are equal to the ratio of distances of point $P$ and $Q$ from the centre of circle. - -Attempt: -I solved this question by assuming the circle equation as $x^2+y^2=1$ -Let $P$ be $(x_1,y_1)$ and $Q$ be $(x_2,y_2)$. -Polar of $P$ is $$xx_1+yy_1-1=0$$ -Polar of $Q$ is $$xx_2+yy_2-1=0$$ -The perpendicular distance from $P$ to polar of $Q$ is $$\frac{x_1x_2+y_1y_2-1}{\sqrt{{x_2}^2+{y_2}^2}}$$ -Similarly, perpendicular distance from $Q$ is $$\frac{x_1x_2+y_1y_2-1}{\sqrt{{x_1}^2+{y_1}^2}}$$ -Ratio is $$\frac{\sqrt{{x_1}^2+{y_1}^2}}{\sqrt{{x_2}^2+{y_2}^2}}$$which is the ratio of distance from centre of circle $(0,0)$ to the points $P$ and $Q$. - -Is there any method to solve this using geometry? It looks like a problem involving two similar triangles. - -REPLY [4 votes]: I was staring at this$ \space\downarrow$ - -for nearly a week, and couldn't come up with anything 'elegant' because I was searching for similar triangles. Then I drank a few cups of coffee and remembered something: - - $\square AGIO\sim \square BHJO$ . Quadrilaterals can be similar too.<|endoftext|> -TITLE: Prove $f(x)=0$ when $f(2x^2-1)= 2xf(x)$ -QUESTION [7 upvotes]: Let $f : \left[-1,1\right] \to \mathbb R$ be a continuous function. Assume that $$f(2x^2-1)= 2xf(x)$$ -for all $x \in \left[-1,1\right]$. Prove that $f(x)=0$ for all $x\in[-1, 1]$. - -It is simple for integer numbers. Another fact that I've noticed that -$$f(2(-x)^2-1)= (-2x)f(-x)= 2xf(x)$$ -$$ -x\bigg(f(x)+f(-x)\bigg) =0 -$$ -Hence, $f(x)$ is odd function for all $x \ne 0$. -Help me with the next step, please. - -REPLY [3 votes]: Let $g : \Bbb{R} \to \Bbb{R}$ be defined by $g(\theta) = f(\cos\theta)$. Then it suffices to prove that $g \equiv 0$ on $[0, \pi]$. -From the functional equation for $f$, we find that -$$g(2\theta) = 2\cos(\theta)g(\theta) \tag{1}.$$ -Now we consider the set $\mathcal{D}$ defined by -$$ \mathcal{D} = \bigg\{ \frac{2\pi k}{2^n + 1} : \text{$n \geq 1$ and $k = 1, \cdots, 2^{n-1}$ are integers} \bigg\}. $$ -Notice that $\mathcal{D}$ is dense in $[0, \pi]$. 
Now for each $\theta = 2\pi k/(2^n + 1) \in \mathcal{D}$, we have $\sin \theta \neq 0$ and -$$ g(2^n\theta) = 2^n \cos(2^{n-1}\theta) \cdots \cos(\theta) g(\theta) = \frac{\sin(2^n \theta)}{\sin(\theta)}g(\theta). \tag{2} $$ -Since $2^n \theta = 2\pi k - \theta$, we have $\cos(2^n\theta) = \cos(\theta)$ and $\sin(2^n\theta) = -\sin(\theta)$. Plugging the second identity into $\text{(2)}$ gives $g(2^n\theta) = -g(\theta)$, while the first gives $g(2^n\theta) = f(\cos 2^n\theta) = f(\cos\theta) = g(\theta)$; hence $g(\theta) = 0$ for $\theta \in \mathcal{D}$. Then by the density of $\mathcal{D}$ and the continuity of $g$, we have $g\equiv 0$ as desired.<|endoftext|> -TITLE: Is a Riemannian manifold with isometric coordinate charts flat? -QUESTION [5 upvotes]: Suppose $M$ is a Riemannian manifold such that each point has a coordinate chart into $\mathbb{R}^n$ that is an isometry, in the sense that the inner products are preserved. Does this imply that $M$ is locally flat in the sense of vanishing Gaussian curvature? -I really don't know much about differential geometry, but my impression is that this is Gauss' Theorema Egregium (or a corollary thereof), which is sometimes stated as 'Gaussian curvature is invariant under isometry'. Does this mean isometry in the sense that lengths of paths are preserved (i.e. as metric spaces)? Or in the sense above that inner products are preserved? Or even more? Perhaps the coordinate chart needs to be smoother than just smooth enough to formulate that it is an isometry? For example, the Wikipedia entry on Gaussian curvature gives a formula that contains second derivatives, so does the version of the Theorema that I cited above only hold for $C^2$ isometries? - -REPLY [6 votes]: In general, the notion of isometries for Riemannian manifolds is an infinitesimal one. A smooth map $\varphi \colon (M,g) \rightarrow (N,h)$ between smooth Riemannian manifolds is called a local isometry if for each $p \in M$ the differential $d\varphi_p \colon (T_pM, g_p) \rightarrow (T_{\varphi(p)}N, h_{\varphi(p)})$ is an isometry of inner-product spaces. A local isometry is a local diffeomorphism - if it is also a global diffeomorphism then it is called an isometry. -Given this definition, the answer to your question is yes - local isometries preserve the Riemann curvature tensor and since the curvature tensor for $\mathbb{R}^n$ is identically zero the curvature tensor of $M$ vanishes identically. -Note that the relevant term is "the Riemann curvature tensor" and not "Gaussian curvature" as the Gaussian curvature is usually a scalar that is associated to a surface embedded in $\mathbb{R}^m$ and you are asking about a general $n$-dimensional Riemannian manifold. -Regarding your other questions - local isometries of Riemannian manifolds preserve the length of paths and preserve the distance induced by the Riemannian metrics (the infimum of the length of all paths connecting two points). The definition of a local isometry presented above requires the map $\varphi$ to be at least differentiable. You can also define an isometry as a map that preserves the distances in the sense of metric spaces. Such a map will be continuous but a priori doesn't have to be differentiable. It turns out that a map that preserves the distances in the sense of metric spaces will automatically be an isometry in the sense defined above and in particular, smooth. This result is called the Myers-Steenrod theorem.<|endoftext|> -TITLE: Automata | Prove that if $L$ is regular then $half(L)$ is regular too -QUESTION [10 upvotes]: I've seen a couple of approaches to this kind of question, yet I have no clue how to approach this one. 
-Let $L$ be a regular language, and let $half(L)$ be:
-$half(L) = \{u \mid \exists v:\ uv \in L \text{ and } |u|=|v|\}$.
-Prove that if $L$ is regular then $half(L)$ is regular too.
-I tried to make a new regular language, $Even(L)$, that recognizes all the even-length words in $L$ (due to the fact that if we want to slice a word in half, its total length must be even) and from there to "capture" the half words.
-But I still think it isn't quite correct.
-Would like to understand the logic behind a proper solution rather than just the solution :)
-Thanks!
-
-REPLY [12 votes]: Let $A = (\delta_A, Q, q_0, F)$ be a DFA for $L$ (in some alphabet $\Sigma$).
-Then define $B$ as follows:
-
-The states $Q_B$ of $B$ are of the form $[q,S]$ where $q \in Q$ and $S \subseteq Q$.
-The initial state of $B$ is $[q_0, F]$.
-$\delta_B([q,S],a) = [\delta_A(q,a), T]$ where $T = \{p \in Q: \exists b \in \Sigma: \exists p' \in S: \delta_A(p,b) = p' \}$
-The accepting states of $B$ are $F_B = \{[q,S] : q \in S \}$.
-
-Then we have the following invariant by construction: all reachable states $[q,S]$ after some input are such that $q$ is the state that $A$ would be in after reading that input, and the states in $S$ are all those states such that there is a path from that state to an accepting state (in $A$) that has the same length as the input that was read.
-This holds for the initial state (no input, so the only state $A$ could be in is $q_0$, and the only accepting states on no input are those in $F$).
-The way the transition rule is defined also upholds the relation: the first part just does what $A$ would have done on that extra letter, and we update the states so that there is a path to an accepting state that is 1 step longer. We could formally prove it by induction on the length of the input.
-And when we are in a state $[q,S]$ with $q \in S$ after reading some input $w_1$, we know that $q$ is the state of $A$ after reading $w_1$, and as $q \in S$ there is some input $w_2$ with $|w_1| = |w_2|$ that induces a path from $q$ to an accepting state, which means that $w_1w_2 \in L$, and so $w_1 \in \operatorname{half}(L)$.
-(source: this solution, a bit reformulated)<|endoftext|>
-TITLE: Graph Isomorphism for non-mathematician
-QUESTION [16 upvotes]: Question: What is a good explanation of the Graph Isomorphism problem, which would be understandable - and hopefully exciting - to a person who has minimal exposure to mathematics? Note that I do not need this application to be realistic - highly idealised fantasy stories are absolutely OK, as long as they make sense as stories. I'm specifically not interested in the actual applications of Graph Isomorphism, unless they are appealing to non-mathematicians.
-Example: The Ramsey numbers problem has a very nice "real life" example involving a party, where some people know one another, and others don't. Then one can show, assuming there are at least 6 people, that either three of them are all friends, or some other three of them are all strangers. This is precisely the statement that $R(3,3) \leq 6$, and arguably laymen prefer thinking in terms of groups of friends at parties rather than in terms of monochromatic cliques in graphs.
-The interpretation of Hall's theorem as a statement about marriages is another example of the type I'm looking for. Note that these two examples are somehow canonical - almost always, when these problems are first introduced, one of these interpretations is used (or a slight variation thereof).
-Note also that Smullyan's books give rather more involved examples explaining some fundamental ideas in logic.
-Work so far: The idea of representing a graph as a party is promising. One can represent vertices as people and edges as acquaintances. To introduce the isomorphism, one could have two independent ways of referring to people - for lack of a better idea, make it a masked ball, and then refer to people either by names or costumes, but that's perhaps too convoluted. Now, given two descriptions of the two different kinds, the question is: Could these descriptions be the same party? But I think there must be a better way...
-Motivation: Given the recent breakthrough of Babai, it would be great to be able to communicate what happened to non-mathematicians, in as engaging a way as possible.
-Apology: I'm not sure if this question fits the scope of MSE and if possibly it's too open-ended. Feel free to vote to close if necessary. Because some other problems have an essentially unique "real life" model, I'm hoping that an answer to this question might in principle exist, and not be terribly opinion-based.
-P.S. Is there a way to make a question CW here? I can't seem to find it, and I would use it if possible.
-
-REPLY [4 votes]: I don't know how well my answer fits what you needed, but for a non-mathematician the following example may pull some attention.
-
-A group of four persons (Angelina, Jessica, Bob and Jonty) working in a company have the friendship graph depicted in the first graph, where each of them has exactly two friends of the opposite gender. If the company decides to send the male employees to its rural branch, then each one of these four may suffer the isolation syndrome in their working hours, as depicted in the second graph. The working efficiency of each of them will certainly be affected, but not their friendships. The second graph, obtained by twisting and turning the first graph while keeping the edges intact, is isomorphic to the first one.<|endoftext|>
-TITLE: Leibniz test $\sum\limits_{n=1}^\infty \sin\left(\pi \sqrt{n^2+\alpha}\right)$
-QUESTION [6 upvotes]: I am given a series:
-$$\sum\limits_{n=1}^\infty \sin\left(\pi \sqrt{n^2+\alpha}\right)$$
-And in the description of the problem it is said that I must show by the Leibniz test that it is convergent. How can I even get the $(-1)^n$ symbol out of the starting series?
-
-REPLY [6 votes]: Write $$\begin{align}
-\sqrt{n^2+\alpha}&=n + \left(\sqrt{n^2+\alpha}-n\right)\\
-&=n+\frac{\alpha}{\sqrt{n^2+\alpha}+n}
-\end{align}
-$$
-Now $\sin (n\pi + x)=(-1)^n\sin(x)$. Letting $$x_n=\frac{\alpha}{\sqrt{n^2+\alpha}+n}$$ we see that it is positive and decreases to zero, and thus that $\sin(x_n\pi)$ is positive and decreases to zero. Hence $\sin\left(\pi\sqrt{n^2+\alpha}\right)=\sin(n\pi+x_n\pi)=(-1)^n\sin(x_n\pi)$, and the series converges by the Leibniz test.
-You might have to slightly alter the argument if $\alpha$ can be negative, but deal with that as a special case.<|endoftext|>
-TITLE: Is there a general formula for $\sin( {p \over q} \pi)$?
-QUESTION [14 upvotes]: Virtually everyone knows the basic values of the unit circle, $\sin(\pi) = 0; \ \ \sin({\pi \over 2}) = 1; \ \ \sin({\pi \over 3}) = {\sqrt{3} \over 2} \\$
-And other values can be calculated through various identities, like $\sin({\pi \over 8}) =\frac{1}{2} \sqrt{2 - \sqrt{2}}\\$
-Does there exist a general formula for $\sin({p\over q} \pi)$ for rational ${p \over q}$ as an algebraic number?
-
-REPLY [3 votes]: There is not so much a formula as there is an algorithm, due to Gauss.
In his Disquisitiones Arithmeticae (sorry, I couldn't find a free English translation), he gives a number-theoretic view of the roots of unity (and thus of the trigonometric functions). In particular, he outlines in one section a recursive method for determining explicit radical expressions of roots of unity.
-Modern methods that improve on the asymptotic complexity of Gauss's algorithm exist, however. In this paper, A. Weber presents an improvement of Gauss's original algorithm that hinges on an efficient way to evaluate the "recursive step" in the original method. A Maple implementation of this improved method is presented in the paper.<|endoftext|>
-TITLE: Is there a covering map with uncountably many slices?
-QUESTION [8 upvotes]: The following is from Topology by Munkres:
-
-Let $E$ and $B$ be two topological spaces and $p:E\to B$ a continuous surjective map. The open set $U\subset B$ is said to be evenly covered by $p$ if the inverse image $p^{-1}(U)$ can be written as the union of disjoint open sets $V_\alpha$ in $E$ such that for each $\alpha$, the restriction of $p$ to $V_\alpha$ is a homeomorphism of $V_\alpha$ onto $U$. The collection $\{V_\alpha\}$ is called a partition of $p^{-1}(U)$ into slices. If every $b\in B$ has a neighborhood $U$ that is evenly covered by $p$, then $p$ is called a covering map.
-
-Here is my question: Is there an example where $p^{-1}(U)$ contains uncountably many slices?
-
-REPLY [7 votes]: Sure, why not? If $X$ is an uncountable discrete set and $Y$ is any space, the projection $X \times Y \to Y$ is a covering map. If you don't like this, then you could instead construct a CW complex $X$ (CW complexes always have universal covers) with $\pi_1(X)$ uncountable. Then the universal covering map $\tilde X \to X$ has uncountably many slices.
-It just so happens that when doing topology, the spaces and covering maps we prefer to think about almost always have countable fibers.
-
-REPLY [3 votes]: You can take the trivial covering space $E = B \times F$, where $F$ is an uncountable set with the discrete topology and $\pi \colon B \times F \rightarrow B$ is the projection. Then $B$ is evenly covered by the disjoint union $\bigcup_{f \in F} B \times \{ f \}$ of open subsets of $E$ homeomorphic to $B$.<|endoftext|>
-TITLE: Intuition for the valuative criterion for properness of morphisms?
-QUESTION [9 upvotes]: I've always been told that the intuition for the valuative criterion for properness is something like this: a morphism $X\rightarrow Y$ is proper if, given a map of a small disk $D$ into $Y$ and a lifting $\alpha$ of $D\setminus\{p\}$ to $X$ for some point $p\in D$, there exists a unique lifting of $D$ to $X$ extending $\alpha$.
-In the valuative criterion, we can think of the discrete valuation ring $R$ as representing a germ of the curve around a point, or $D$ in the above picture. Its field of fractions $K$ is the localization at its generic point.
-Questions:
-
-How can I think of $K$ geometrically as a "punctured germ'' or "punctured disk"? It seems that $K$ is actually a "point'' of $D$, so that the criterion is saying something like, given a lift of the generic point of a small piece of a curve, we can lift to the entirety of the small piece. This makes less geometric sense to me, though. Perhaps one could view the generic point as a non-closed "fuzzy" thing whose closure includes the single closed point $\mathfrak m$ in the DVR and preserve the analogy that way.
-It seems reasonable to expect the valuative criterion to be able to prove the following: if $X$ and $Y$ are projective (and hence proper) curves, and $p\in X$ is a regular point, then any morphism $X\setminus \{p\}\rightarrow Y$ extends uniquely to a morphism $X\rightarrow Y$. We can interpret this in terms of extending a lift of a small punctured disk (the image of some small punctured disk around $p$ mapped to $Y$). Can the valuative criterion be used to prove this, and if so, how?
-
-I think something like the following works. We have morphisms
-$$\operatorname{Spec} FF(\mathcal O_{X,p})\rightarrow Y \rightarrow \operatorname{Spec} \mathbb C$$
-$$\operatorname{Spec} FF(\mathcal O_{X,p}) \rightarrow \operatorname{Spec} \mathcal O_{X,p}\rightarrow \operatorname{Spec} \mathbb C$$
-that commute. The map $$\operatorname{Spec} FF(\mathcal O_{X,p})\rightarrow Y$$ is given by the restriction of the given map $X\rightarrow Y$ to the generic point, since $FF(\mathcal O_{X,p})$ is the function field of $X$ (right?). Then the valuative criterion produces a map $f:\operatorname{Spec} \mathcal O_{X,p}\rightarrow Y$.
-It seems like this map should be the desired extension, but I'm not sure how to say that precisely.
-
-REPLY [7 votes]: Lemma. Let $X$ be a variety over $k$. Then $X$ is proper if and only if for every smooth proper curve $C$ and every $U \subseteq C$ open, any morphism $f \colon U \to X$ can be extended to a map $C \to X$.
-Proof. We extend one point at a time. Suppose $P \in C\setminus U$, and let $V = \operatorname{Spec} B$ be an affine open neighbourhood of $P$. Let $\eta \in C$ be the generic point; note that it is contained in any open. Restricting $f$ to the generic point gives a map
-$$\operatorname{Spec} \kappa(\eta) \to U \to X.$$
-By the valuative criterion, we can extend uniquely to a map $g \colon \operatorname{Spec} \mathcal O_{C,P} \to X$. Let $W = \operatorname{Spec} A$ be an affine open neighbourhood of $g(P)$ in $X$. Then $g(\eta) \in W$ as well, since open sets are stable under generisation. Thus, $g$ is given by a ring homomorphism
-$$A \to \mathcal O_{C,P} = B_{\mathfrak m_P}.$$
-Since $A$ is of finite type, we can collect denominators to get a map $A \to B_f$ for some $f \in B$; that is, we get a morphism
-$$h \colon D(f) \to X.$$
-For every $Q \in U \cap V$, the valuative criterion of separatedness shows that the maps
-$$h, f \colon \operatorname{Spec} \mathcal O_{C,Q} \to X$$
-agree (since their restrictions to $\operatorname{Spec} \kappa(\eta)$ agree by definition of $g$). Then a simple algebra argument shows that $f = h$ on $U \cap V$. Thus, they glue to a well-defined map on $U \cup \{P\}$.
-Conversely, if $X$ is proper and $f \colon U \to X$ is a map, let $C'$ be the scheme-theoretic image. If $C'$ is a point, then $f$ is constant, so we can trivially extend. Otherwise, $C'$ is a proper curve (being closed inside a proper variety). Then the normalisation of $C'$ is isomorphic to $C$, since there is only one smooth proper curve with a given function field. Thus, the normalisation map $C \to C'$ extends $f$. $\square$
-Remark. For a slightly more high-brow version of the same proof, see this blog post.
-Remark. To see the analogy with the punctured disk, note that if we identify all nonzero points of the complex unit disk $\Delta$, we get a topological space with two points: one closed point (the image of the origin), and one non-closed point (the image of the punctured disk $\Delta^*$). Thus, removing the origin from this quotient space also gives a one-point space.
-Of course, topological spaces behave very differently from ringed spaces, so to some extent the analogy ends there.
-In another direction, if we equip $\Delta$ with the sheaf of rings of holomorphic functions, then the local ring $\mathcal O_{\Delta, 0}$ is the ring of convergent power series $\mathbb C\{x\}$. This looks a lot like the localisation $\mathbb C[x]_{(x)}$; for example, the completion of both rings is the ring of power series $\mathbb C[[x]]$. This is the beginning of Serre's GAGA principle.
-The analogy between the punctured disk and the function field of a curve does not end there. For example, the fundamental group of $\Delta^*$ is $\mathbb Z$, with covers given by $\Delta^* \to \Delta^*$, $x \mapsto x^n$. The absolute Galois group of $\mathbb C((x))$ is the profinite completion $\hat{\mathbb Z}$ of $\mathbb Z$, with extensions given by $\mathbb C ((x^{\frac{1}{n}}))$. This is no coincidence (the key word here is étale fundamental group).<|endoftext|>
-TITLE: Entire functions with finite $L^1$ norm must be identically $0$
-QUESTION [5 upvotes]: Another question from Complex Variables: An Introduction by Berenstein and Gay.
-
-Show that an entire function $f$ has finite $L^1$ norm on $\mathbb{C}$ iff $f\equiv0$. Does this also hold true for $L^2, L^{\infty}$?
-
-So, the $L^{\infty}$ case I think just follows from Liouville's Theorem, but the others I'm not sure about. How should I approach these? My intuition would be to use a power series expansion at $0$ for $f$, but the work I've done on paper with this hasn't really gone anywhere fruitful.
-Edit: As discussed in the comments, there are two cases to consider wrt the behavior at $\infty$: if $f$ has a pole there, then $f$ must be a polynomial, so it must have infinite norm. So we are reduced to the case where $f$ has an essential singularity at $\infty$.
-
-REPLY [7 votes]: If I remember my complex analysis, the real and imaginary parts of $f(z)$, denoted $u(x,y)$ and $v(x,y)$, are harmonic on $\mathbb{R}^{2}$. If $\|f\|_{L^{1}(\mathbb{R}^{2})}$ is finite, then $\max\{\|u\|_{L^{1}},\|v\|_{L^{1}}\}<\infty$. By the mean value property, we have for any fixed $(x,y)\in\mathbb{R}^{2}$,
-$$|u(x,y)|\lesssim\dfrac{1}{r^{2}}\int_{B_{r}(x,y)}|u(s,t)|dsdt\leq\dfrac{\|u\|_{L^{1}}}{r^{2}},\quad\forall r>0$$
-where the implied constant is independent of $r>0$ and $(x,y)$. Letting $r\rightarrow \infty$, we obtain that $|u(x,y)|=0$. By the same argument, one obtains $|v(x,y)|=0$.
-If $\|f\|_{L^{2}}<\infty$, then by convexity, $\max\{\|u\|_{L^{2}},\|v\|_{L^{2}}\}<\infty$. By the above argument and Hölder's inequality,
-$$|u(x,y)|\lesssim\dfrac{1}{r^{2}}\int_{B_{r}(x,y)}|u(s,t)|dsdt\lesssim\left(\dfrac{1}{r^{2}}\int_{B_{r}(x,y)}|u(s,t)|^{2}dsdt\right)^{1/2}\leq\dfrac{\|u\|_{L^{2}}}{r},\qquad\forall r>0$$
-where the implied constants are independent of $r>0$ and $(x,y)$. Letting $r\rightarrow\infty$, we obtain the desired conclusion.<|endoftext|>
-TITLE: Upper bound of $a_{n+1}=a_n + \frac{1}{a_n}$
-QUESTION [5 upvotes]: How can I prove:
-If $$a_0=\alpha>0\quad \text{and}\quad a_{n+1}=a_n + \frac{1}{a_n},$$ then $$a_n^2<\alpha^2+2n+\frac{1}{\alpha^2}+\frac{1}{2}\ln\left ( \frac{2n}{\alpha^2}+1 \right )?$$
-I'll really appreciate your help. Thanks.
-
-REPLY [6 votes]: We have $\displaystyle a_{n+1}^2=a_n^2+\frac{1}{a_n^2}+2$. Hence for $n\geq 1$
-$$a_n^2=a_0^2+2n+\sum_{k=0}^{n-1}\frac{1}{a_k^2}$$
-This implies that $\displaystyle a_k^2\geq a_0^2+2k$ for $k\geq 1$.
Hence
-$$a_n^2\leq a_0^2+2n+\frac{1}{a_0^2}+\sum_{k=1}^{n-1}\frac{1}{a_0^2+2k}$$
-Now $$\sum_{k=1}^{n-1}\frac{1}{a_0^2+2k}\leq \int_0^{n}\frac{dt}{a_0^2+2t}=\frac{1}{2}\log \left(\frac{2}{a_0^2}n+1\right)$$
- and we are done.<|endoftext|>
-TITLE: Chirality in Lie groups and Lie algebras
-QUESTION [9 upvotes]: Is there an example of a chiral Lie group? In particular, is it true to say that the map $g\mapsto g^{-1}$ is orientation reversing for odd dimensional Lie groups?
-Moreover, is there a concept of a chiral Lie algebra: a finite dimensional Lie algebra $L$ such that every automorphism of $L$ necessarily preserves the orientation?
-
-REPLY [6 votes]: The rational cohomology $H^{\bullet}(G, \mathbb{Q})$ of a compact connected Lie group is the exterior algebra on some odd generators, the product of which lives in top cohomology. The number of generators $r$ is the rank. The map $g \mapsto g^{-1}$ acts by $-1$ on each generator, and so it acts on top cohomology by $(-1)^r$. Hence $g \mapsto g^{-1}$ reverses orientation iff $r$ is odd iff $\dim G$ is odd.<|endoftext|>
-TITLE: Do complex roots always come in pairs?
-QUESTION [8 upvotes]: The Complex Conjugate Theorem states that, given a polynomial $p(x)$ with real coefficients, $p \in \mathbb{R}[x]$, and one of its roots $z = a+bi \in \mathbb{C}$, its complex conjugate $\overline {z}$ must be a root as well. We could also intuitively conclude that only by multiplying $(a+bi)$ with $(a-bi)$ will the imaginary parts be destroyed, allowing $p$ to be a real polynomial.
-Following this line of reasoning, it shouldn't be necessary for complex roots of a complex polynomial to come in pairs. However, this premise was used in solving one of my school assignments.
-Do complex roots always have to come in pairs, regardless of the field in which the polynomial was defined?
-
-REPLY [23 votes]: Consider the polynomial $z-i$ . . .
-
-It's good to remember the reason the complex conjugate theorem is true in the first place: the map $z\mapsto \overline{z}$ is an automorphism of the field $\mathbb{C}$, and fixes the subfield $\mathbb{R}$ (this is a fancy way of saying $\overline{r}=r$ if $r$ is real). Thus - defining the "conjugate" $\overline{p}$ of a polynomial (with coefficients in $\mathbb{C}$) to be the polynomial whose coefficients are the conjugates of the corresponding coefficients of $p$ - we have the following:
-
-If $p$ is any polynomial and $z$ is a root of $p$, then $\overline{z}$ is a root of $\overline{p}$.
-If the coefficients of $p$ are from $\mathbb{R}$, then $\overline{p}=p$.
-
-This second fact is no longer true for polynomials with coefficients from outside $\mathbb{R}$!
-Note that this generalizes in a natural way to:
-
-Suppose $F$ is any subfield of $\mathbb{C}$, $\varphi$ is a field automorphism of $\mathbb{C}$ which fixes $F$, and $p$ is a polynomial with coefficients from $F$. Then if $z$ is a root of $p$, so is $\varphi(z)$.
-
-This is the (fine, "a") first step towards Galois theory . . .<|endoftext|>
-TITLE: Laplace transform of normal distribution function?
-QUESTION [8 upvotes]: In my notes this was left as an exercise and I am a bit rusty with my calculus.
-Starting with the definitions:
-$$\mathcal{L}_X(t) = \mathbb{E}[e^{-tX}] = \int_0^\infty e^{-tx}f(x)\,dx \;\;\text{ and }\;\;X\sim\mathcal{N}(\mu,\sigma)\;\;\text{ i.e. 
}\;\;f(x) = \frac{1}{\sqrt{2\pi}\sigma}e^{-\frac{1}{2}\frac{(x-\mu)^2}{\sigma^2}}$$ so
-$$\mathcal{L}_X(t) = \int_0^\infty \frac{1}{\sqrt{2\pi}\sigma} e^{-\frac{1}{2}\frac{(x-\mu)^2}{\sigma^2}} e^{-xt}dx$$
-$$= \frac{1}{\sqrt{2\pi}\sigma} \int_0^\infty e^{-\frac{1}{2}\big(\frac{(x-\mu)^2}{\sigma^2}\big)-tx }dx$$
-suppose I now say $u = \frac{x-\mu}{\sigma}$ so that $x = u\sigma+\mu$; I get the following but have run out of ideas for how to continue:
-$$ = \frac{1}{\sqrt{2\pi}\sigma} \int_{-\frac{\mu}{\sigma}}^\infty e^{-\frac{1}{2}u^2-t(u\sigma+\mu) }du$$
-I considered trying to complete the square but don't think it helped: the exponent becomes $-\frac{1}{2}(u^2-2tu\sigma-2t\mu)$ giving $-\frac{1}{2}((u-t\sigma)^2 -t^2\sigma^2+2t\mu)$, which doesn't seem to be any simpler to integrate.
-EDIT
-After thinking about the comments I think this should be a double sided integral, as the expectation would be an integral over the probability distribution's domain (?)
-So now I get
-$$ \mathcal{L}_X(t) = \frac{1}{\sqrt{2\pi}\sigma} \int_{-\infty}^\infty e^{-\frac{1}{2}u^2-t(u\sigma+\mu) }du$$
-Now I try completing the square as above and get
-$$ \frac{1}{\sqrt{2\pi}\sigma} \int_{-\infty}^\infty e^{-\frac{1}{2}((u-t\sigma)^2 -t^2\sigma^2+2t\mu) }du = \frac{1}{\sqrt{2\pi}\sigma} e^{-t^2\sigma^2-2t\mu} \int_{-\infty}^\infty
-e^{-\frac{1}{2}(u-t\sigma)^2}du$$
-Now I make a substitution to say $z = \frac{1}{\sqrt{2}}(u-t\sigma)$ and then we get $u = \sqrt{2}z+t\sigma$, and $du = \sqrt{2}dz$ so finally:
-$$ \frac{1}{\sqrt{2\pi}\sigma}e^{-t^2\sigma^2-2t\mu} \int_{-\infty}^\infty e^{-z^2} \sqrt{2}dz =
-\frac{1}{\sigma}e^{-t^2\sigma^2-2t\mu}$$
-OK, so that is my attempt. I am not very confident about it, so any help/corrections are very welcome.
-
-REPLY [9 votes]: Instead of doing the $u$-substitution:
-$$
-\begin{align}
-\mathcal{L}_X (t) & = \int_{-\infty}^\infty e^{-tx} \frac{1}{\sqrt{2\pi}\sigma} e^{-\frac{1}{2}\left(\frac{x-\mu}{\sigma}\right)^2} \mathrm{d}x \\
-& = \int_{-\infty}^\infty \frac{1}{\sqrt{2\pi}\sigma} e^{-\frac{1}{2}\left(\frac{x-\mu}{\sigma}\right)^2-tx} \mathrm{d}x \\
-& = \int_{-\infty}^\infty \frac{1}{\sqrt{2\pi}\sigma} e^{-\frac{1}{2}\left(\left(\frac{x-\mu}{\sigma}\right)^2+2tx\right)} \mathrm{d}x \\
-& = \int_{-\infty}^\infty \frac{1}{\sqrt{2\pi}\sigma} e^{-\frac{1}{2}\left(\left(\frac{x-\mu}{\sigma}\right)^2+2t(x-\mu)+2t\mu\right)} \mathrm{d}x \\
-& = \int_{-\infty}^\infty \frac{1}{\sqrt{2\pi}\sigma} e^{-\frac{1}{2}\left(\left(\frac{x-\mu}{\sigma}\right)^2+2(t\sigma)\left(\frac{x-\mu}{\sigma}\right)+2t\mu\right)} \mathrm{d}x \\
-& = \int_{-\infty}^\infty \frac{1}{\sqrt{2\pi}\sigma} e^{-\frac{1}{2}\left(\left(\frac{x-\mu}{\sigma}\right)^2+2(t\sigma)\left(\frac{x-\mu}{\sigma}\right)+t^2\sigma^2-t^2\sigma^2+2t\mu\right)} \mathrm{d}x \\
-& = \int_{-\infty}^\infty \frac{1}{\sqrt{2\pi}\sigma} e^{-\frac{1}{2}\left(\left(\frac{x-\mu}{\sigma}\right)^2+2(t\sigma)\left(\frac{x-\mu}{\sigma}\right)+t^2\sigma^2\right)+\frac{1}{2}t^2\sigma^2-t\mu} \mathrm{d}x \\
-& = \int_{-\infty}^\infty \frac{1}{\sqrt{2\pi}\sigma} e^{-\frac{1}{2}\left(\frac{x-\mu}{\sigma}+t\sigma\right)^2+\frac{1}{2}t^2\sigma^2-t\mu} \mathrm{d}x \\
-& = \int_{-\infty}^\infty \frac{1}{\sqrt{2\pi}\sigma} e^{-\frac{1}{2}\left(\frac{x-\mu}{\sigma}+t\sigma\right)^2}e^{\frac{1}{2}t^2\sigma^2-t\mu} \mathrm{d}x \\
-& = e^{\frac{1}{2}t^2\sigma^2-t\mu} \int_{-\infty}^\infty \frac{1}{\sqrt{2\pi}\sigma} e^{-\frac{1}{2}\left(\frac{x-\mu}{\sigma}+t\sigma\right)^2} \mathrm{d}x \\
-& = e^{\frac{1}{2}t^2\sigma^2-t\mu} \int_{-\infty}^\infty
\frac{1}{\sqrt{2\pi}\sigma} e^{-\frac{1}{2}\left(\frac{(x+t\sigma^2)-\mu}{\sigma}\right)^2} \mathrm{d}x
-\end{align}
-$$
-Let $y=x+t\sigma^2$. $\mathrm{d}y=\mathrm{d}x$, $\lim_{x\to\infty}y\to\infty$, $\lim_{x\to-\infty}y\to-\infty$.
-$$
-\begin{align}
-\mathcal{L}_X (t) & = e^{\frac{1}{2}t^2\sigma^2-t\mu} \int_{-\infty}^\infty \frac{1}{\sqrt{2\pi}\sigma} e^{-\frac{1}{2}\left(\frac{y-\mu}{\sigma}\right)^2} \mathrm{d}y \\
-& = e^{\frac{1}{2}t^2\sigma^2-t\mu}
-\end{align}
-$$<|endoftext|>
-TITLE: How do you prove that $\arcsin( \frac{1}{2} \sqrt{2-\sqrt {2-2x}})=\frac{\pi}{8} + \frac{1}{4} \arcsin x$?
-QUESTION [5 upvotes]: I have the task to prove that
-$$
 \arcsin\left( \frac{1}{2} \sqrt{2-\sqrt {2-2x}}\right)=\frac{\pi}{8} + \frac{1}{4} \arcsin x ,\quad\left|x\right|\le 1
-$$
-
-I do not have any idea where I should start.
-Can anyone help me solve it?
-
-REPLY [2 votes]: You may just observe that, for $x \in [0,1)$, we have
-$$
-\left(\arcsin\left( \frac{1}{2} \sqrt{2-\sqrt {2-2x}}\right)\right)'=\frac14\frac1{\sqrt{1-x^2}}
-$$ and we have
-$$
-\left(\frac{\pi}{8} + \frac{1}{4} \arcsin x\right)'=\frac14\frac1{\sqrt{1-x^2}}
-$$ giving
-
-$$
-\arcsin( \frac{1}{2} \sqrt{2-\sqrt {2-2x}})=\frac{\pi}{8} + \frac{1}{4} \arcsin x, \quad x \in [0,1]
-$$
-
-since both functions take the same value at $x=0$ and at $x=1$.<|endoftext|>
-TITLE: Natural equivalence between singular and simplicial homology
-QUESTION [10 upvotes]: I know that, for every $\Delta$-complex $X$, there is a canonical isomorphism $\phi_n : H_n ^\Delta (X) \to H_n (X) $, where $H^\Delta _n (X)$ is the $n$-th simplicial homology group, and $H_n (X)$ is the $n$-th singular homology group.
-For simplicial $\Delta$-complexes, we also have the notion of order-preserving simplicial maps, and every simplicial map $f:X\to Y$ induces a morphism $f^\Delta _{\bullet_*}$ in simplicial homology $H^\Delta _\bullet $.
-My question is: are the morphisms $f^\Delta _{\bullet_*}$ and $f _{\bullet_*}$ (the induced morphism in singular homology) "the same" via the canonical isomorphisms? I believe that this is true, but I'm having trouble proving it, because if $\sigma$ is an $n$-simplex of $X$ and $f$ sends $\sigma$ to a simplex of $Y$ such that $\dim f(\sigma) < \dim \sigma$, then $ f_n (\sigma)=0 $, but if we denote by $c_\sigma : \Delta ^n \to X $ the characteristic map of $\sigma$ in the $\Delta$-complex structure of $X$, then $ f\circ c_\sigma :\Delta^n \to Y$ is a well defined singular $n$-simplex of $Y$.
-Thank you.
-
-REPLY [9 votes]: Yes, this is true. Here's one way to prove it. For $0\leq j$<|endoftext|>
-TITLE: Uniform convergence from pointwise convergence for uniform Lipschitz functions
-QUESTION [5 upvotes]: I would like to prove or give a counterexample for the following statement:
-Let $(S,d)$ be a complete and separable space. We define:
-$$
-\mathcal{P}^1(S) := \{P: \mathcal{B}_S \rightarrow [0,1] \mid P \mbox{ probability measure, }\exists a \in S: \int d(x,a) P(dx) < \infty\}
-$$
-Let $(P_n)_n, P$ all be in $\mathcal{P^1}(S)$, and suppose that for every $f:S \rightarrow \mathbb{R}$ with $\forall x,y \in S: |f(x) - f(y)| \leq d(x,y)$ we have $\int f dP_n \rightarrow \int f dP$; then we also have $\sup_{f \in L^1(S,\mathbb{R})} \left| \int f dP_n - \int f dP \right| \rightarrow 0$,
-where $L^1(S,\mathbb{R})$ is the collection of Lipschitz functions with Lipschitz constant 1.
-I have both tried proving and disproving this statement. As a counterexample I tried to take $S := \mathbb{R}$ and $P(\{n\}) := \frac{1}{2^n}, n \in \mathbb{N}$, and taking $P_n(\{n\}) := 1$ and stuff like that, but it didn't work out (I also tried to fiddle with uniform distributions, but this didn't give me a counterexample either).
-Then I tried to prove it: by regularity we can find a compact set $K$ for which $P(S\setminus K)$ is ''small enough''; then we can get the uniform convergence on $K$, but the problem is that I can't bound $\sup_n P_n(S\setminus K)$. For this it would seem that I need that $(P_n)_n$ is tight, but I don't think I have this..
-
-REPLY [2 votes]: I think we want to assume maybe that $S$ is locally compact and the measures are regular; with the problem as stated I don't see why for example there exists a compact $K$ with $P(S\setminus K)$ small.
-The argument might use a little cleaning up, but I think it's right:
-We need the existence of a "Lebesgue number" for an open cover of a compact subset of a metric space. There's been some confusion about that, since the lemma is sometimes stated instead for an open cover of a compact metric space. So:
-Lemma: Suppose $K$ is a compact subset of a metric space and $O$ is an open cover of $K$. There exists $r>0$ so that for every $x\in K$ there exists $V\in O$ with $B(x,r)\subset V$.
-Proof: Define $\phi:K\to(0,1]$ by saying $\phi(x)$ is the supremum of the $r\in(0,1]$ such that there exists $V\in O$ with $B(x,r)\subset V$. It's clear that $|\phi(x)-\phi(y)|\le d(x,y)$, so $\phi$ is continuous. So there exists $\delta>0$ with $\phi(x)\ge\delta$ for all $x\in K$. Let $r=\delta/2$. QED.
-Corollary: If $K$ is a compact subset of a locally compact metric space then there exists $r>0$ such that $K_r=\bigcup_{x\in K}\overline{B(x,r)}$ is compact.
-Proof: Cover $K$ by finitely many balls with compact closure. Let $r$ be a Lebesgue number for this cover. QED.
-Fix $a\in S$. Let $F$ be the space of all $f$ with $|f(x)-f(y)|\le d(x,y)$ and let $F_0$ be the space of all $f\in F$ with $f(a)=0$.
-Suppose the conclusion fails. Replacing $(P_n)$ by a subsequence, there exists $\epsilon>0$ and $f_n\in F_0$ such that $$|\int f_ndP_n-\int f_ndP|\ge100\epsilon$$for all $n$. Again taking a subsequence, we can assume that $f_n(x)$ converges as $n\to\infty$ for all $x$ in our countable dense subset of $S$. The equicontinuity shows that $f_n(x)\to f(x)$ for all $x\in S$, and in fact $f_n\to f$ uniformly on compact sets.
-Let $C=\int d(x,a)dP$. Choose $K$ compact so $$\int_{S\setminus K}d(x,a)dP<\epsilon.$$
-Restricting to $n$ large enough, we can assume that $$\int d(x,a)dP_n<C+\epsilon$$ (by the hypothesis applied to $f(x)=d(x,a)$, which is $1$-Lipschitz). Choose $r>0$ so that $K_r$ is compact, as in the corollary.
-Now, there exists a Lipschitz function $\phi$ which equals $1$ on $K$, vanishes off $K_r$, and satisfies $0\le \phi\le 1$ everywhere. Considering the product $\phi(x)d(x,a)$ shows that $$\liminf \int_{K_r}d(x,a)dP_n\ge \int\phi(x)d(x,a)dP\ge\int_K d(x,a)dP.$$ Hence if $n$ is large enough we have $$\int_{S\setminus K_r}d(x,a)dP_n<2\epsilon.$$
-Now since $f_n\to f$ uniformly on $K_r$ we have $$\int_{K_r}|f_n-f|dQ<\epsilon$$if $n$ is large enough, for $Q=P$ and also for $Q=P_n$. If $g\in F_0$ then $|g(x)|\le d(x,a)$ and hence $$\int_{S\setminus K_r}|g|dQ<2\epsilon$$again for $Q=P$ or $Q=P_n$, by the inequalities above. Hence $$\int_{S\setminus K_r}|f-f_n|dQ<8\epsilon.$$So $$\int_S|f-f_n|dQ<10\epsilon$$for large $n$.
-Finally, if $n$ is large enough we have $$|\int fdP_n-\int fdP|<\epsilon.$$And if I didn't leave anything out, the triangle inequality plus the above now contradicts $|\int f_ndP_n-\int f_ndP|\ge100\epsilon$.<|endoftext|>
-TITLE: What are some books that are in the spirit of David A. Cox's "Primes of the Form $x^2+ny^2$"
-QUESTION [6 upvotes]: David A. Cox's "Primes of the Form $x^2+ny^2$: Fermat, Class Field Theory, and Complex Multiplication" has a very good (at least to me, and many others) methodology. He starts from page 1 asking a simple question, and then he builds a whole machinery to answer it. And to be precise, here's what he says on page 1:
-
-This leads to the basic question of the whole book, which we formulate as follows:
-
-Basic Question 0.1. Given a positive integer $n$, which primes $p$ can be expressed in the form $$p=x^2+ny^2$$ where $x$ and $y$ are integers?
-
-We will answer this question completely, and along the way we will encounter some remarkably rich areas of number theory.
-I think such an approach is extremely effective: it gives the reader the motivation to continue through the whole book just by the existence of that basic question. So,
-are there any books in the spirit of David A. Cox's "Primes of the Form $x^2+ny^2$: Fermat, Class Field Theory, and Complex Multiplication"?
-
-REPLY [2 votes]: Romik's The Surprising Mathematics of Longest Increasing Subsequences. A book about the problem of finding the distribution of the longest increasing subsequence of a random permutation. It is research-level, however.
-A review can be found at MAA Reviews.<|endoftext|>
-TITLE: Find sum of infinite anharmonic(?) series
-QUESTION [11 upvotes]: I need help with this:
-$$
-\frac{1}{1\cdot2\cdot3\cdot4}+\frac{1}{2\cdot3\cdot4\cdot5}+\frac{1}{3\cdot4\cdot5\cdot6}+\dots
-$$
-I don't know how to compute the sum of this series. It is similar to the standard anharmonic series, so it should have a similar solution. However, I can't work it out.
-
-REPLY [6 votes]: Using Euler's Beta function:
-$$\begin{eqnarray*}\sum_{k=1}^{+\infty}\frac{1}{k(k+1)(k+2)(k+3)}&=&\sum_{k\geq 1}\frac{\Gamma(k)}{\Gamma(k+4)}=\frac{1}{\Gamma(4)}\sum_{k\geq 1}B(k,4)\\&=&\frac{1}{6}\sum_{k\geq 1}\int_{0}^{1}(1-x)^3 x^{k-1}\,dx\\&=&\frac{1}{6}\int_{0}^{1}(1-x)^2\,dx\\&=&\frac{1}{6}\int_{0}^{1}x^2\,dx = \color{red}{\frac{1}{18}}.\end{eqnarray*}$$
-Using creative (not so much) telescoping, for $a_k=\frac{1}{k(k+1)(k+2)}$ we have $a_{k}-a_{k+1}=\frac{3}{k(k+1)(k+2)(k+3)}$, so $$\sum_{k\geq 1}\frac{1}{k(k+1)(k+2)(k+3)}=\frac{a_1}{3}=\frac{1}{3\cdot 3!}=\frac{1}{18}.$$<|endoftext|>
-TITLE: Find $f$ if $ f(x)+f\left(\frac{1}{1-x}\right)=x $
-QUESTION [7 upvotes]: Find $f$ if
-$$
-f(x)+f\left(\frac{1}{1-x}\right)=x
-$$
-I think that I have to find $x$ such that $f(x) = f\left(\frac{1}{1-x}\right)$.
-I've tried to find $x$ which makes $x = \frac{1}{1 - x}$, but this equation has no real roots.
-
-REPLY [3 votes]: In case you're a bit miffed by the "trickiness" of the solution, here's how I came upon the answer by playing around with the definition of $f$. The key, as in both answers above, is to realize there's some cyclicality induced by $\frac{1}{1-x}$.
-My first instinct was to play around with $x=0$, from which we get $f(0)+f(1)=0$. In order to get something out of this, though, we look for another way to express $f(0)$ or $f(1)$.
However, to get the left-hand term to be $f(1)$ requires plugging an undefined point ($x=1$) into the right-hand term, and getting $f(0)$ out of the right-hand term requires taking an infinite limit of the left-hand term. So that seems like a dead end.
-I moved on to try $x=-1$, since this is also simple to play with. We get
-$$f(-1)+f(\frac{1}{2})=-1$$
-Just for kicks, I tried plugging in $x=\frac{1}{2}$, since the numbers in the first equation were pretty palatable:
-$$f(\frac{1}{2}) + f(2)=\frac{1}{2}$$
-Again, not bad -- nice, pretty numbers. So I tried once more:
-$$f(2)+f(-1) = 2$$
-And now we're on to something. We note that we got $f(-1)$ to show up again, and that in the three equations we've written, there are only three unknowns -- namely, $f(-1), f(\frac{1}{2})$, and $f(2)$. This means we can solve for these three values (and luckily it's a simple linear form).
-Now we begin to wonder whether we happened upon this by accident or whether we've found a more general phenomenon -- it appears that churning $-1$ through $\frac{1}{1-x}$ recursively brought us back to $-1$; is this true for other starting values?
-Indeed it is -- $\frac{1}{1-\frac{1}{1-x}}=-\frac{1-x}{x}$ and $\frac{1}{1-(-\frac{1-x}{x})}=x$, so we'll get the system of three unknowns we saw in our concrete example for any (valid) starting value of $x$.
-The rest is details, which are sufficiently covered in the other answers.<|endoftext|>
-TITLE: Understanding Cantor's diagonal argument
-QUESTION [10 upvotes]: I'm trying to grasp Cantor's diagonal argument to understand the proof that the power set of the natural numbers is uncountable. On Wikipedia, there is the following illustration:
-
-The explanation of the proof says the following:
-
-By construction, s differs from each sn, since their nth digits differ (highlighted in the example). Hence, s cannot occur in the enumeration.
-
-I don't understand why the sequence s at the bottom cannot occur anywhere in the enumeration of sequences above. I have read the proof about five times, but I'm still not getting it. I think I'm having an error in reasoning. Could someone please explain to me why s cannot be in the enumeration, with an example?
-
-REPLY [4 votes]: Basically, the diagonalizing method is to take every possible number one at a time and say "I can make a number that is different than this and different than any number so far". IF the resulting number did exist in the list, then you would have come across it and you would have said "I can make a number different than this", and you would have, so the number can't be in the list.
-That number can't be the first number because it has a different first term. It can't be the second number because it's got a different second term. It can't be the nth number because it's got a different nth term.
-So let's suppose the number is on the list. Well, what number listing does it have? Let's say it's the coopledinkyfudgeth number on the list. Well, when we do our diagonalization and we come to the coopledinkyfudgeth number, we change the coopledinkyfudgeth term and get a different number. So it ISN'T the coopledinkyfudgeth number after all!
-It's not on the list! The list is impossible and there is no list.<|endoftext|>
-TITLE: Why do we care about differential forms? (Confused over construction)
-QUESTION [28 upvotes]: So it's said that differential forms provide a coordinate-free approach to multivariable calculus. Well, in short I just don't get this, despite reading from many sources. I shall explain how it all looks to me.
-Let's just stick to $\mathbb{R}^2$ for the sake of simplicity (maybe this is the downfall...). Picking some point $P=(x,y)\in\mathbb{R}^2$, we could ask about the directional derivative of some function $f:\mathbb{R}^2\rightarrow \mathbb{R}$, in direction $v=a\vec{i}+b\vec{j}$. This will be $$(v\cdot\nabla)|_P(f)=a\dfrac{\partial f}{\partial x}\bigg|_P+b\dfrac{\partial f}{\partial y}\bigg|_P =\underbrace{ \left(a\dfrac{\partial }{\partial x}\bigg|_P+b\dfrac{\partial }{\partial y}\bigg|_P\right)}_\text{$w_P$ }(f)$$ where we can think of $w_P$ as an element of the tangent space at $P$. Now this in itself is a little weird; why have differential operators as a basis for something geometrical like a tangent space to a manifold? In any case, we apply these vectors to a function defined on our manifold, and we get the value we wanted out.
-So who cares about differential forms? We just did all this without them. We could've done this by calculating $\mathrm{d}f$, in some basis $\mathrm{d}x, \mathrm{d}y$ (which is quite confusing), and then calculating $\mathrm{d}f(w_P)$, but what do we gain in doing it this way?
-I mentioned I think the $\mathrm{d}x$'s are confusing. Well, $\mathrm{d}x$ is just the function with $\mathrm{d}x(\frac{\partial}{\partial x})=1$ and $0$ for any other differential operator - why write this as $\mathrm{d}x$, which had always previously been used to mean an infinitesimal increase in $x$?
-Now I can understand caring about the dual of the tangent space. We are combining a vector in the tangent space with something and we're getting a scalar out of it - this something should then probably belong to the dual space. But if we're thinking of just the vector, then the function $f$ on the manifold needs to be encoded by the 1-form, right? Well, we can have 1-forms which aren't derivatives of any function on the manifold - what should it mean to combine such forms with tangent vectors?
-And lastly, if we're writing all our forms in terms of $\mathrm{d}x$'s etc., where the $x$'s are exactly the coordinates of the manifold, then how exactly have we escaped dependence on coordinates? We're still essentially calculating with respect to a given coordinate system, as in usual multivariable calculus!
-
-REPLY [3 votes]: Differential forms are an appropriate generalisation of derivation and integration (i.e. of calculus) for arbitrary (within the scope of integration) manifolds and spaces.
-In order to arrive at such a generalisation (e.g. like É. Cartan did), one starts with the basic definitions and operations of derivation and integration, how they affect the space they are part of, and how they affect each other. Let's say a "methodological shift" from a coordinate-formula-based definition, which is restricted in its choice of representation, to an operation-based definition. (Note that in such a scheme the fundamental theorem of calculus is both easily proved and generalised, e.g. the generalised Stokes theorem.)
-
-An English translation of E. Cartan, "ON CERTAIN DIFFERENTIAL EXPRESSIONS AND THE PFAFF PROBLEM" is available here (and in the Wikipedia article).
-In order to generalise a particular (coordinate-dependent) representation of these operations, they are related to geometric entities (as spaces) of the space itself (for example the tangent and cotangent spaces) and not to specific representations of these entities. Effectively it is a recursive construction of forms: a $k$-form is constructed by an operation $d$ on a $(k-1)$-form, and so on. Nowhere is a specific representation taking place.
So one can indeed use these operations and do calculus as needed on any manifold. Moreover, like any good recursive scheme should, $0$-forms can be identified with special entities on the space, and the whole differential-forms scheme is well defined.<|endoftext|>
-TITLE: Closed form of the integral ${\large\int}_0^\infty e^{-x}\prod_{n=1}^\infty\left(1-e^{-24\!\;n\!\;x}\right)dx$
-QUESTION [6 upvotes]: While doing some numerical experiments, I discovered a curious integral that appears to have a simple closed form:
-$${\large\int}_0^\infty e^{-x}\prod_{n=1}^\infty\left(1-e^{-24\!\;n\!\;x}\right)dx\stackrel{\color{gray}?}=\frac{\pi^2}{6\sqrt3}\tag1$$
-Could you suggest any ideas how to prove it?
-The infinite product in the integrand can be written using the q-Pochhammer symbol:
-$$\prod_{n=1}^\infty\left(1-e^{-24\!\;n\!\;x}\right)=\left(e^{-24\!\;x};\,e^{-24\!\;x}\right)_\infty\tag2$$
-
-REPLY [18 votes]: To be somewhat explicit. One may perform the change of variable $q=e^{-x}$, $dq=-e^{-x}dx$, giving
-$$
-{\large\int}_0^\infty e^{-x}\prod_{n=1}^\infty\left(1-e^{-24\!\;n\!\;x}\right)dx={\large\int}_0^1 \prod_{n=1}^\infty\left(1-q^{24n}\right)dq\tag1
-$$ then use the identity (the Euler pentagonal number theorem)
-$$
-\prod_{n=1}^\infty\left(1-q^{n}\right)=\sum_{-\infty}^{\infty}(-1)^nq^{\large \frac{3n^2-n}2}
-$$ to get
-
-$$
-{\large\int}_0^\infty e^{-x}\prod_{n=1}^\infty\left(1-e^{-24\!\;n\!\;x}\right)dx=\sum_{-\infty}^{\infty}{\large\int}_0^1(-1)^n q^{12 (3n^2-n)}dq=\sum_{-\infty}^{\infty}\frac{(-1)^n}{(6n-1)^2}=\frac{\pi ^2}{6 \sqrt{3}}
-$$
-
-The last equality is obtained by converting the series in terms of the Hurwitz zeta function and by using the multiplication theorem.<|endoftext|>
-TITLE: Convert Riemann sum to a definite integral: $\lim_{n\to \infty} \sum_{k=1}^n\sqrt{{1\over n^2}\left(1+{2k\over n}\right)} $
-QUESTION [5 upvotes]: I'm between 2 answers for this question, but I am not sure if either of them is right.
-$$\lim_{n\to \infty} \sum_{k=1}^n\sqrt{{1\over n^2}\left(1+{2k\over n}\right)} $$
-It has to be rewritten as a definite integral. I am between $$\int_0 ^1\sqrt{1+2x}dx$$ and $$\int_1^2\sqrt{x}dx$$ Not sure if either of them is right though; help please.
-
-REPLY [2 votes]: One may recall that for any Riemann integrable function over $[a,b]$, as $n \to \infty$, one has
-
-$$
-\sum_{k=0}^n\frac{(b-a)}nf\left(a+\frac{k(b-a)}n \right) \to \int_a^bf(x)dx
-$$
-
-Then you may apply it to $\displaystyle f(x)=\sqrt{1+2x}$, $a=0$, $b=1$, giving the limit
-$$
-\int_0 ^1\sqrt{1+2x}\:dx.
-$$ By the change of variable $1+2x \to u$ you also get
-$$
-\frac12\int_1^3\sqrt{u}\:du
-$$<|endoftext|>
-TITLE: What is the purpose of K-Theory?
-QUESTION [17 upvotes]: I have recognized that there is a theory in mathematics called K-theory, which is also used for applications in mathematical physics. There are both algebraic K-theory and topological K-theory. Are these theories very similar?
-For the algebraic K-theory of Milnor, I have seen that the K-groups are given by
-$K_n = T^n / \langle a \otimes (1-a)\rangle$ (Wikipedia).
-Here, $T^n$ is the $n$-fold tensor product. For $n=2$ one obtains abelian matrices. I don't understand this theory in depth. What is the reason that K-theory was introduced? (Is the theoretical physics application topological or algebraic?) And is there material (a lecture video or a good pdf script) where algebraic K-theory is explained?
-I would greatly appreciate an answer.
-
-REPLY [9 votes]: In short, algebraic $K$-theory starts with the observation that the dimension of vector spaces over a field is a very useful thing! The start is the study of the $K_0$ group of a ring, which is «the best thing for $A$-modules that feels like the dimension of vector spaces».
-The next player in $K$-theory is the $K_1$ of a ring $A$, which again measures how far we are from the nice situation of linear algebra: there one can put matrices into very simple forms by applying row and column operations. This cannot be done for general rings, and $K_1$ tells you how badly it fails.
-Higher $K$-theory is considerably more difficult to motivate and to explain, but can be felt in the same way.
-You should browse Jonathan Rosenberg's beautiful book on $K$-theory.<|endoftext|>
-TITLE: Finding the limit of a fraction: $\lim_{x \to 3} \frac{x^3-27}{x^2-9}$
-QUESTION [5 upvotes]: Find $$\lim_{x \to 3} \frac{x^3-27}{x^2-9}$$
-
-What I did is:
-\begin{align}
-\lim_{x \to 3} \frac{x^3-27}{x^2-9} &= \lim_{x \to 3} \frac{(x-3)^3+9x^2-27x}{(x-3)(x+3)} = \lim_{x \to 3} \frac{(x-3)^3+9x(x-3)}{(x-3)(x+3)} \\
-&= \lim_{x \to 3} \frac{(x-3)^3}{(x-3)(x+3)} + \lim_{x \to 3} \frac{9x(x-3)}{(x-3)(x+3)} \\
-&= \lim_{x \to 3} \frac{(x-3)^2}{(x+3)} + \lim_{x \to 3} \frac{9x}{(x+3)} =0 + \frac{27}{6} = \frac{9}{2}
-\end{align}
-Wolfram factors the numerator as $(x-3)(x^2+3x+9)$; is there a quick way to find this?
-
-REPLY [4 votes]: 
-
-Wolfram factors the numerator as $(x-3)(x^2+3x+9)$; is there a quick way to find this?
-
-Generally, you have
-$$
-a^{n+1}-b^{n+1}=(a-b)(a^n+a^{n-1}b+a^{n-2}b^2+\cdots+a^2b^{n-2}+ab^{n-1}+b^n)
-$$ (to see this, expand the right hand side; the terms telescope).
-Then apply it to $a=x$, $b=3$, $n=2$, giving
-$$
-x^3-27=(x-3)(x^2+3x+9).
-$$
-
-REPLY [2 votes]: Since the limit has the indeterminate form $\frac{0}{0}$, L'Hôpital's rule gives
-$$\lim_{x \to 3}\frac{x^3-27}{x^2-9}=\lim_{x \to 3}\frac{(x^3-27)'}{(x^2-9)'}=\lim_{x \to 3}\frac{3x^2}{2x}=\lim_{x \to 3}\frac{3x}{2}=\frac{9}{2}$$<|endoftext|>
-TITLE: Computing $\sum{\frac{1}{m^2+n^2}}$
-QUESTION [7 upvotes]: I want to prove that $\sum_{1\leq m^2+n^2\leq R^2}{\frac{1}{m^2+n^2}}=2\pi\log R+O(1)$ as $R\rightarrow\infty$.
-For this, I'm trying to approximate the sum by using the integral $\int_{1\leq r\leq R}{\frac{1}{x^2+y^2}}\,dxdy=\int_{0}^{2\pi}\int_{1}^{R}{\frac{1}{r}}\,drd\theta=2\pi\log R$
-How can I show rigorously that $\sum_{1\leq m^2+n^2\leq R^2}{\frac{1}{m^2+n^2}}=\int_{1\leq r\leq R}{\frac{1}{x^2+y^2}}\,dxdy+O(1)$ as $R\rightarrow\infty$?
-Any help will be appreciated.
- -REPLY [3 votes]: $$ -\begin{align} -\sum_{m,n=1}^R\frac1{m^2+n^2} -&=2\sum_{m=1}^R\sum_{n=1}^m\frac1{m^2+n^2}-\sum_{n=1}^R\frac1{2n^2}\\ -&=2\sum_{m=1}^R\frac1m\sum_{n=1}^m\frac{\frac1m}{1+\left(\frac nm\right)^2}-\sum_{n=1}^R\frac1{2n^2}\\ -\end{align} -$$ -Comparing the Riemann sum $\sum\limits_{n=1}^m\frac{\frac1m}{1+\left(\frac nm\right)^2}$ and the integral $\int_0^1\frac{\mathrm{d}x}{1+x^2}$, we see that -$$ -\sum_{n=1}^m\frac{\frac1m}{1+\left(\frac nm\right)^2}=\frac\pi4+O\left(\frac1m\right) -$$ -Since $\sum\limits_{m=1}^n\frac1m=\log(n)+O(1)$, and $\sum\limits_{n=1}^\infty\frac1{2n^2}=\frac{\pi^2}{12}$, we get that -$$ -\sum_{m,n=1}^R\frac1{m^2+n^2}=\frac\pi2\log(R)+O(1)\tag{1} -$$ -Furthermore, since $\log(R/\sqrt2)=\log(R)+O(1)$, -$$ -\sum_{m,n=1}^{R/\sqrt2}\frac1{m^2+n^2}=\frac\pi2\log(R)+O(1)\tag{2} -$$ -The sum over the terms where $m=0$ or $n=0$ is handled by -$$ -\sum_{m=1}^\infty\frac1{m^2}=\frac{\pi^2}6=O(1)\tag{3} -$$ -Since -$$ -\overbrace{\left\{1\le m,n\le R/\sqrt2\right\}}^{(2)}\subset\left\{1\le m^2+n^2\le R^2\text{ and }m,n\ge1\right\}\subset\overbrace{\left\{1\le m,n\le R\right\}}^{(1)} -$$ -and accounting for the four quadrants and the four borders where $m=0$ or $n=0$, we get -$$ -\sum_{1\le m^2+n^2\le R^2}\frac1{m^2+n^2}=2\pi\log(R)+O(1) -$$<|endoftext|> -TITLE: Why does this matrix give the derivative of a function? -QUESTION [216 upvotes]: I happened to stumble upon the following matrix: -$$ A = \begin{bmatrix} - a & 1 \\ - 0 & a -\end{bmatrix} -$$ -And after trying a bunch of different examples, I noticed the following remarkable pattern. If $P$ is a polynomial, then: -$$ P(A)=\begin{bmatrix} - P(a) & P'(a) \\ - 0 & P(a) -\end{bmatrix}$$ -Where $P'(a)$ is the derivative evaluated at $a$. -Futhermore, I tried extending this to other matrix functions, for example the matrix exponential, and wolfram alpha tells me: -$$ \exp(A)=\begin{bmatrix} - e^a & e^a \\ - 0 & e^a -\end{bmatrix}$$ -and this does in fact follow the pattern since the derivative of $e^x$ is itself! -Furthermore, I decided to look at the function $P(x)=\frac{1}{x}$. If we interpret the reciprocal of a matrix to be its inverse, then we get: -$$ P(A)=\begin{bmatrix} - \frac{1}{a} & -\frac{1}{a^2} \\ - 0 & \frac{1}{a} -\end{bmatrix}$$ -And since $f'(a)=-\frac{1}{a^2}$, the pattern still holds! -After trying a couple more examples, it seems that this pattern holds whenever $P$ is any rational function. -I have two questions: - -Why is this happening? -Are there any other known matrix functions (which can also be applied to real numbers) for which this property holds? - -REPLY [22 votes]: Just to expand on the comment of @Myself above... -Adjoining algebraics as matrix maths -Sometimes in mathematics or computation you can get away with adjoining an algebraic number $\alpha$ to some simpler ring $R$ of numbers like the integers or rationals $\mathbb Q$, and these characterize all of your solutions. 
If this number obeys the algebraic equation $$\alpha^n = \sum_{k=0}^{n-1} q_k \alpha^k$$ for some $q_k \in R,$ we call the above polynomial equation $Q(\alpha) = 0$, and then we can adjoin this number by using polynomials of degree $n - 1$ with coefficients from $R$ and evaluated at $\alpha$: the ring is formally denoted $R[\alpha]/Q(\alpha),$ "the quotient ring of the polynomials with coefficients in $R$ of some parameter $\alpha$ given their equivalence modulo polynomial division by $Q(\alpha).$"
-If you write down the action of the multiplication $(\alpha\cdot)$ on the vector in $R^n$ corresponding to such a polynomial in this ring, it will look like the matrix
-$$\alpha \leftrightarrow A = \begin{bmatrix}0 & 0 & 0 & \dots & 0 & q_0\\
1 & 0 & 0 & \dots & 0 & q_1 \\
0 & 1 & 0 & \dots & 0 & q_2 \\
\dots & \dots & \dots & \dots & \dots & \dots \\
0 & 0 & 0 & \dots & 0 & q_{n-2} \\
0 & 0 & 0 & \dots & 1 & q_{n-1} \\
\end{bmatrix},$$
-and putting such a matrix in row-reduced echelon form is actually so simple that we can immediately strengthen our claim to say that the above ring $R[\alpha]/Q(\alpha)$ is a field when $R$ is a field and $q_0 \ne 0.$ The matrix $\sum_k p_k A^k$ is then a matrix representation of the polynomial $\sum_k p_k \alpha^k$ which implements all of the required operations as matrix operations.
-A simple example before \$#!& gets real.
-For example, there is a famous "O(1)" solution to generating the $k^\text{th}$ Fibonacci number, which comes from observing that the recurrence $F_k = F_{k-1} + F_{k-2}$ can be solved by other functions for other boundary conditions than $F_{0,1} = 0,1$, and one very special set of solutions looks like $F_k = \phi^k$ for some $\phi$. Plugging and chugging, we get the algebraic equation $\phi^2 = \phi + 1,$ which we can solve for the golden ratio $\varphi = (1 + \sqrt{5})/2$ and its negative reciprocal $\bar\varphi = -\varphi^{-1}= (1-\sqrt{5})/2.$ However, since the Fibonacci recurrence relation is linear, this means that any linear combination $$F_n = A \varphi^n + B \bar\varphi^n$$obeys the Fibonacci recurrence, and we can actually just choose $A = \sqrt{1/5},\; B = -\sqrt{1/5}$ to get the standard $F_{0,1} = 0,1$ starting points: this is the Fibonacci sequence defined purely in terms of exponentiation.
-But there is a problem with using this on a computer: the Double type that a computer has access to has only finite precision, and the above expressions will round off wildly. What we really want is to use our arbitrary-precision Integer type to calculate this. We can do this with matrix exponentiation in a couple of different ways. The first would be to adjoin the number $\sqrt{5}$ to the integers, solving $\alpha^2 = 5.$ Then our ring consists of the numbers $a + b \sqrt{5}$, which are the matrices: $$\begin{bmatrix}a & 5b\\b & a\end{bmatrix}.$$ And that's easy-peasy to program. Your "unit vectors" can similarly be chosen as $1$ and $\varphi$, however, and that leads to the "no-nonsense" matrix $$a + b \varphi = \begin{bmatrix}a & b\\ b & a + b\end{bmatrix}$$ which I'm calling "no-nonsense" because for $a = 0, b = 1$ this is actually the Fibonacci recurrence relation on vectors $[F_{n-1}, F_{n}],$ which is a way to get to this result without going through the above hoops.
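-As a quick sanity check that this "no-nonsense" scheme really works, here is a minimal Python sketch (the function names are mine, purely for illustration): it does exact integer arithmetic in $\mathbb Z[\varphi]$ by tracking the pair $(a,b)$ for $a+b\varphi$, exactly as the matrix $\begin{bmatrix}a & b\\ b & a+b\end{bmatrix}$ would, and reads $F_n$ off as the $\varphi$-coefficient of $\varphi^n$:
-
-    def mul(x, y):
-        """Multiply a + b*phi by c + d*phi, using phi^2 = phi + 1."""
-        a, b = x
-        c, d = y
-        # (a + b*phi)(c + d*phi) = ac + (ad + bc)*phi + bd*phi^2
-        #                        = (ac + bd) + (ad + bc + bd)*phi
-        return (a * c + b * d, a * d + b * c + b * d)
-
-    def power(x, n):
-        """Exponentiation by squaring in Z[phi]."""
-        result = (1, 0)  # the multiplicative identity 1 + 0*phi
-        while n > 0:
-            if n & 1:
-                result = mul(result, x)
-            x = mul(x, x)
-            n >>= 1
-        return result
-
-    def fib(n):
-        """phi^n = F(n-1) + F(n)*phi, so F(n) is the phi-coefficient."""
-        return power((0, 1), n)[1]
-
-    assert [fib(n) for n in range(10)] == [0, 1, 1, 2, 3, 5, 8, 13, 21, 34]
-    print(fib(100))  # 354224848179261915075
-
-(Exponentiation by squaring uses $O(\log n)$ ring multiplications here, though, as noted just below, the bignum multiplications themselves grow, so this is not truly "O(1)".)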
There is also an interesting "symmetric" version where $\varphi$ and $\bar\varphi$ are our "unit vectors" and the matrix is (I think) $a \varphi + b \bar\varphi \leftrightarrow [2a-b,\; -a+b;\;a - b,\; -a + 2b].$
-(In any case, it turns out that the supposedly "O(1)" algorithm is not: even when we exponentiate-by-squaring, we have to perform $\log_2(k)$ multiplications of numbers $m_i = F_{2^i}$ which are growing asymptotically like $F_k \approx \varphi^k/\sqrt{5},$ taking some $O(k^2)$ time, just like adding up the numbers directly. The big speed gain is that the adding-bignums code will "naturally" allocate new memory for each bignum and will therefore write something like $O(k^2)$ bits in memory if you don't specially make it intelligent; the exponentiation improves this to $O(k~\log(k))$ and possibly even to $O(k)$, since the worst of these happen only at the very end.)
-Going off of the algebraics into complex numbers
-Interestingly, we don't need to restrict ourselves to real numbers when we do the above. We know that in $\mathbb R$ there is no $x$ satisfying $x^2 = -1,$ so the above prescribes that we extend our field to the field $$
a + b \sqrt{-1} \leftrightarrow \begin{bmatrix}a & -b\\b & a\end{bmatrix}.$$When we replace $a$ with $r\cos\theta$ and $b$ with $r\sin\theta$ we find out that in fact these "complex numbers" are all just scaled rotation matrices:$$ r (\cos\theta + i~\sin\theta) = r \begin{bmatrix}\cos\theta & -\sin\theta\\\sin\theta & \cos\theta\end{bmatrix} = r~R_\theta,$$giving us an immediate geometric understanding of a complex number as a scaled rotation (and then analytic functions are just the ones which locally look like a scaled rotation.)
-Going way off the algebraics into infinitesimals.
-Another interesting way to go with this is to consider adjoining a term $\epsilon$ which is not zero, but which squares to zero. This idea formalizes the idea of an "infinitesimal" with no real effort, although as mentioned before, the resulting algebra is doomed to be a ring. (We could adjoin an inverse $\infty = \epsilon^{-1}$ too, but presumably we'd then have $\infty^2 = \infty$ which breaks associativity, $(\epsilon\cdot\infty)\cdot\infty \ne \epsilon\cdot(\infty\cdot\infty),$ unless we insert more infinities to push the problem out to infinity.)
-Anyway, we then have the matrix: $$a + b \epsilon \leftrightarrow a I + b E = \begin{bmatrix}a & 0\\ b & a\end{bmatrix}.$$ It's precisely the transpose of what you were looking at. Following the rules, $(a + b \epsilon)^n = a^n + n~a^{n-1}~b~\epsilon,$ with all of the other terms disappearing. Expanding it out, we find by Taylor expansion that $f(x + \epsilon) = \sum_n \frac{f^{(n)}(x)}{n!}\, \epsilon^n = f(x) + f'(x)\, \epsilon,$ and this is the property that you have seen in your own examination.
-We can similarly keep infinitesimals out to second order with a $3\times 3$ matrix $$a + b \epsilon + c \epsilon^2 \leftrightarrow \begin{bmatrix}a & 0 & 0\\
b & a & 0\\
c & b & a\end{bmatrix}$$Then $f(x I + E) = f(x) I + f'(x) E + f''(x) E^2 / 2$ straightforwardly.<|endoftext|>
-TITLE: What are your favorite relations between e and pi?
-QUESTION [18 upvotes]: This question is for a very cool friend of mine. See, he really is interested in how seemingly separate concepts can be connected in such nice ways. He told me that he was losing his love for mathematics due to some personal stuff in his life. (I will NOT discuss his personal life, but it has nothing to do with me).
-To make him feel better about math (or love of math, for that matter), I was planning on giving him a sheet of paper with quite a few relations between $e$ and $\pi$.
-The ones I was going to give him were:
-$$e^{i\pi}=-1$$
-and how $e$ and $\pi$ both occur in the distribution of primes (bounds on primes have to do with $\ln$, and the regularized product of the primes is $4\pi^2$).
-Can I have a few more examples of any relations please? I feel it could mean a lot to him. I'm sorry if this is too soft for this site, but I didn't quite know where to ask it.
-
-REPLY [16 votes]: Here's a beautiful relation between $\pi$ and $e$,
-$$\sqrt{\frac{\pi\,e}{2}}=1+\frac{1}{1\cdot3}+\frac{1}{1\cdot3\cdot5}+\frac{1}{1\cdot3\cdot5\cdot7}+\dots+\cfrac1{1+\cfrac{1}{1+\cfrac{2}{1+\cfrac{3}{1+\ddots}}}}$$
-It shouldn't be hard to guess who found this. As Kevin Brown of Mathpages remarked, "Is there any other mathematician whose work is instantly recognizable?"<|endoftext|>
-TITLE: Geodesic axis and displacement function
-QUESTION [5 upvotes]: So let us assume our manifold is complete, simply connected, and has nonpositive sectional curvature. If we assume that the displacement function $f(x)=d(x,\phi(x))$, for an isometry $\phi:M\rightarrow M$, is bounded from below by a positive value, show that there exists a geodesic axis. That is, there exists a geodesic $\gamma$ such that $\phi\circ \gamma(t)=\gamma(t+t_0)$, assuming that $f(x)=\inf(f)$ for some $x\in M$.
-So here is what I have so far. It seems that I should select the geodesic that goes from $x$ to $\phi(x)$ where $f(x)=\inf(f)$. I had two ideas for showing that this is an axis. Namely, we could show that $\phi\circ\gamma(-t_0)=\gamma(0)$, where $\gamma(0)=x$ and $\gamma(t_0)=\phi(x)$, and then use that geodesics are unique in this type of manifold; or we could show that $f(\gamma)$ is constant, which could also establish our result. But I'm not sure how to go about either. Any suggestions? Thanks.
-
-REPLY [3 votes]: Your first idea (using the uniqueness of geodesics) is good. The key formula is the derivative of the distance function: we are given a minimum $x_0$ of $f$, so we'd like to extract some information from the critical point condition $Df_{x_0} = 0$. The hypotheses on the manifold guarantee the distance function will be smooth away from the diagonal, so this is definitely true.
-If we let $\gamma$ be an arclength-parametrized minimizing geodesic joining $x$ to $y$ and $L$ be its length, then you should be able to derive from the first variation formula that
-$$ (Dd_{(x,y)})(u,v) =\langle\dot \gamma(L),v\rangle - \langle \dot \gamma(0), u\rangle.$$
-Applying this with $x = x_0, y = \phi(x_0), v = D\phi(u)$ we get
-$$ Df_{x_0}(u) = (Dd_{(x_0,\phi(x_0))})(u,D\phi(u))=\langle\dot \gamma(L), D\phi(u)\rangle - \langle \dot \gamma(0), u\rangle=0$$
-for all vectors $u \in T_{x_0} M$. Since $\phi$ is an isometry, we can rewrite this as $$\langle\dot \gamma(L) - D\phi(\dot \gamma(0)), D\phi(u)\rangle=0$$
-for all $u$, and thus the fact that $D\phi_{x_0}$ is bijective tells us that $\dot \gamma(L) = D\phi(\dot \gamma(0))$. Since $\phi$ is an isometry we know $\phi\circ \gamma$ is geodesic, so the uniqueness of geodesics extends this equality of velocities at a point to an equality of the entire geodesics; i.e.
$\phi(\gamma(t)) = \gamma(t + L)$.<|endoftext|>
-TITLE: Rotations of 4-Cubes
-QUESTION [5 upvotes]: I have recently learned the orbit-stabilizer theorem, and have encountered unexpected results pertaining to the rotations of a tesseract; I am curious if there is any intuition for this.
-A $4$-cube has $(16*4)/2 = 32$ edges using some simple counting, and has $24*8 = 192$ rotational symmetries, which I obtained by considering that each of the $8$ cubical 'faces' can map to any other in $24$ ways (each way is a rotational symmetry of a 3-cube). Each edge has all $32$ edges in its orbit, and so by orbit-stabilizer we conclude each edge has $192/32 = 6$ stabilizing rotations.
-This seems very bizarre to me, as it suggests that you can perform $3$ rotations of a $4$-cube that keep any edge in the same position, facing the same direction, so essentially nothing has changed. Does this mean it is possible to rotate a $4$-cube non-trivially in $2$ ways while keeping one edge completely fixed? Or does the edge have to move, only to end up exactly as it was when it began?
-Is my reasoning correct, and is there any intuitive way to understand these results? In general, are rotations in higher dimensions too bizarre to have an intuitive grasp of, and are there some interesting things I should know about these rotations? Thanks!
-
-REPLY [4 votes]: If you examine a tesseract, you should be able to find three cubical
-hyperfaces that meet at a single edge.
-In fact, every edge has three hyperfaces arranged symmetrically
-around it in this fashion.
-While keeping this edge fixed, then, you can rotate one of the three
-adjoining cubical hyperfaces onto one of the others. Repeat this rotation
-to find the second non-trivial rotation that leaves this edge fixed.
-The third repetition takes the hyperface back to its original position.<|endoftext|>
-TITLE: Continuous and increasing on $[0,1]$ and absolutely continuous on $[\varepsilon,1] \forall \varepsilon >0 =>$ absolutely continuous on $[0,1]$?
-QUESTION [5 upvotes]: Does continuous and increasing on $[0,1]$ and absolutely continuous on $[\varepsilon,1]$ imply absolutely continuous on $[0,1]$?
-EDIT: I am trying to do this via the definition, but I am doing something wrong:
-$\forall \epsilon >0, \exists \delta$ such that for all finite sequences of intervals $(a_k,b_k)$, if $\sum_{k=1}^{n}|b_k-a_k|<\delta$ then $\sum_{k=1}^{n}|f(b_k)-f(a_k)|<\epsilon$.
-Now, from continuity I have $|b_k-a_k|<\delta \implies |f(b_k)-f(a_k)|<\epsilon_1$ for some $\epsilon_1>0$,
-and I can write the sum $\sum_{k=1}^{n}|f(b_k)-f(a_k)|<\epsilon_1 n$.
-But this has to be wrong since continuity does not imply absolute continuity; I should be using monotonicity at some point in here. But I can't see where.
-
-REPLY [3 votes]: @Craig I'll answer here as the comments section is quite cramped.
-"Do they make the function monotone just so it doesn't oscillate wildly?" Exactly. Notice that if you are adding up $\sum_{i=1}^m |f(d_i)-f(c_i)|$ for
-nonoverlapping intervals in $[0,W]$ then
-$$\sum_{i=1}^m |f(d_i)-f(c_i)| =\sum_{i=1}^m [ f(d_i)-f(c_i)] \leq f(W)-f(0).$$
-So all you need in order to control these sums for a monotone increasing continuous function is to make sure that $f(W)-f(0)$ is small.
-"Uniform continuity?" Well, if $f$ is continuous on $[0,1]$ it is uniformly continuous there, if you feel you need it.
-
-So now the idea of the proof. Start with (as always) "Let $\epsilon>0$."
-First we set up the wall $W$. Choose $W>0$ so that $f(W)-f(0)< [small_1]$.
-Now $f$ is absolutely continuous on $[W,1]$ (the right side of the wall) so we can select $\delta_r>0$ so that
-$$\sum_{i=1}^m |f(d_i)-f(c_i)| \leq [small_2] $$
-whenever these intervals are nonoverlapping subintervals of $[W,1]$ with total length less than $\delta_r$.
-Now $f$ is continuous at $W$ (the wall) so there is a $\delta_W$ with
-$$f(W+\delta_W) - f(W-\delta_W) < [small_3] .$$
-Now consider any finite sequence of intervals $\{[c_i,d_i]\}$ of $[0,1]$ with total length less than $\delta_?$ [still to be determined, as this isn't the actual proof, just our heuristics to find a proof].
-That sum has three parts: (i) the pieces in $[0,W]$ on the left side of the wall,
-(ii) the pieces in $[W,1]$ on the right side of the wall, and (iii) maybe one solitary piece straddling the wall. Give an $\epsilon/3$ to each of these possibilities and we are done and can write up a tidy $\epsilon$, $\delta$ proof.
-P.S. Don't leave the problem without a counterexample showing that the hypotheses that you used (and that were stated in the problem) cannot be dropped. This is proper mathematical etiquette: think of it as saying "please" and "thank you" in social situations. Add a "counterexample" to your
-solutions whether the problem asks for it or not. Never be impolite and just answer the problem, rushing away for more important things. Obviously continuity cannot be dropped. Find an example with the oscillatory behaviour that you mentioned above happening at the left endpoint. The example will have to have unbounded variation on $[0,1]$ even though it is absolutely continuous (and hence BV) on $[W,1]$ for all $W>0$.<|endoftext|>
-TITLE: Is the set of regular points in a scheme open in general?
-QUESTION [5 upvotes]: In the situation when smooth/$k$ coincides with regularity (for a finite-type $k$-scheme; edit: $k$ also perfect, thanks Remy), I think this should be true (?). But I am not sure about the situation for a general scheme.
-Maybe there is a scheme consisting only of singular (non-regular) points? Oh, there is, I think: $k[x]/x^2$. But this doesn't answer my question about openness.
-By regularity I mean that the local ring is a regular local ring, i.e. the dimension of the Zariski tangent space is the same as the dimension of the local ring.
-
-REPLY [3 votes]: No. According to this mathoverflow post, there is an example of an affine Noetherian integral scheme of dimension 1 whose regular locus is not open; see Exposé XIX of the volume "Travaux de Gabber" in Astérisque 363-364.
-See comments by Remy and Rieux for nice sufficient conditions for the regular locus to be open.<|endoftext|>
-TITLE: Can any NP-Complete problem be reduced to any other NP-Complete problem in polynomial time?
-QUESTION [8 upvotes]: Is it true to say that any NP-Complete problem can be reduced to any other NP-Complete problem in polynomial time?
-
-REPLY [8 votes]: Yes. By definition any NP problem can be reduced to an NP-complete problem in polynomial time. Since NP-complete problems are themselves NP problems, all NP-complete problems can be reduced to each other in polynomial time.
-
-REPLY [3 votes]: Yes. By definition, a problem $A$ is NP-hard if any problem $B$ in NP has a polynomial-time reduction to $A$. Thus if $A$ and $B$ are NP-complete, $B$ has a polynomial-time reduction to $A$ since $A$ is NP-hard and $B$ is in NP. (Obviously $A$ has a polynomial-time reduction to $B$ as well, since $A$ is in NP and $B$ is NP-hard.)<|endoftext|>
-TITLE: What are some of the best books in mathematics for the general reader?
-QUESTION [10 upvotes]: I am preparing a list for my department library, consisting of books on mathematics for general readers. I've included The Men of Mathematics by Bell, Fermat's Last Theorem by S. Singh, The Man Who Knew Infinity and The Equation That Couldn't Be Solved. But I need more books to add to my list. Can anyone suggest a few more, where the mathematical development or evolution of certain concepts/problems is described in a lucid manner, or which contain mathematics that everyone can understand? Many thanks!
-
-REPLY [4 votes]: Here are some examples:
-
-Ian Roulstone, John Norbury: Invisible in the Storm: The Role of Mathematics in Understanding Weather
-Vladimir Arnold: Catastrophe Theory
-Julian Havil: GAMMA
-David Harel: Computers Ltd
-George Szpiro: Kepler's Conjecture
-Malba Tahan: The Man Who Counted: A Collection of Mathematical Adventures.<|endoftext|>
-TITLE: Inverse of circulant matrices
-QUESTION [5 upvotes]: The following is an $n \times n$ circulant matrix, where $h$ is real but not equal to $1$:
-$$ A=
- \begin{bmatrix}
- 2&-h&0&0&...&...&0&-h\\
- -h&2&-h&0&...&...&0&0\\
- 0&-h&2&-h&...&...&...&...\\
- 0&0&-h&2&...&...&...&...\\
- ...&...&...&...&...&...&0&0\\
- ...&...&...&...&...&...&-h&0\\
- 0&0&...&...&0&-h&2&-h\\
- -h&0&...&...&0&0&-h&2
- \end{bmatrix}
-$$
-So my question is how can I find an explicit formula for the inverse of this matrix?
-So for $A = QDQ^T$, I found that $D = \mathrm{diag}[2(1-h\cos(2\pi l/N))]$ for $l$ from $0$ to $N-1$, somehow, by reading some papers on unitary Vandermonde matrices and shift matrices. However, I don't fully understand how this works.
-Can someone please show me how to derive an explicit formula for the inverse of this matrix and explain why we are able to do so?
-Appreciate any help in advance.
-
-REPLY [8 votes]: As you already know (see also Wiki:Circulant matrices), you can diagonalize your matrix $A=QDQ^T$, with $Q$ being the discrete Fourier transform matrix, a special kind of Vandermonde matrix.
-To get the inverse of $A$, all you have to do is invert the diagonal entries of $D$, i.e. $d_{ii} \mapsto \frac1{d_{ii}}$.<|endoftext|>
-TITLE: Does homotopy depend on function or the image?
-QUESTION [7 upvotes]: I am just starting to read about algebraic topology, and I wonder whether homotopy depends on the function or the image. According to Munkres' definition, two continuous functions $f,g:[0,1]\to Y$ are said to be homotopic (as paths) if there exists a continuous map $F:[0,1]\times[0,1]\to Y$ such that
-$$F(0,t)=x_0$$
-$$F(1,t)=x_1$$
-$$F(s,0)=f(s)$$
-$$F(s,1)=g(s)$$
-Now if $f:[0,1]\to Y$ is a given continuous function, and $g:[0,1]\to f([0,1])$ is a surjective continuous function with the same endpoints as $f$, is it necessary that $f$ and $g$ be homotopic?
-
-REPLY [8 votes]: No. Consider the case of the unit circle $S^1=\mathbb R/\mathbb Z$. The map $f(x)=x$ winds around the circle once, whereas $g(x)=2x$ winds around the circle twice. They have the same image and the same endpoints, but are not homotopic.
-A somewhat more general example is to take any curve not homotopic to the trivial curve. Then, construct a new curve which traverses that curve once, and then traverses it in reverse. The former isn't homotopic to the trivial curve, but the latter is.
-
-REPLY [3 votes]: No. Consider the paths
-$$
-f: [0, 1] \to S^1 : t \mapsto (\cos 2 \pi t, \sin 2 \pi t)\\
-g: [0, 1] \to S^1 : t \mapsto (\cos 4 \pi t, \sin 4 \pi t).
-$$
-These are both surjective onto the unit circle $S^1$, and have the same starting and ending points, but are not homotopic as paths in $S^1$. (They are homotopic as maps into $\mathbb R^2$, though!)<|endoftext|>
-TITLE: How to prove the matrix fractional function is convex by definition
-QUESTION [8 upvotes]: It is well known that the matrix fractional function $f(\mathbf{w},\boldsymbol{\Omega})=\mathbf{w}^T\boldsymbol{\Omega}^{-1}\mathbf{w}$ is jointly convex with respect to $\mathbf{w}$ and $\boldsymbol{\Omega}$ for $\mathbf{w}\in\mathbb{R}^n$ and $\boldsymbol{\Omega}\in\mathbb{S}_+^{n\times n}$, where $\mathbb{S}_+^{n\times n}$ denotes the set of all $n\times n$ positive definite matrices.
-Now I want to prove it based on the definition of convex functions. That is, I need to prove that the following inequality holds for any $\mathbf{w}_1,\mathbf{w}_2\in\mathbb{R}^n$, $\boldsymbol{\Omega}_1,\boldsymbol{\Omega}_2\in\mathbb{S}^{n\times n}_+$, and $\alpha\in[0,1]$:
-$$\alpha\mathbf{w}_1^T\boldsymbol{\Omega}_1^{-1}\mathbf{w}_1+\beta\mathbf{w}_2^T\boldsymbol{\Omega}_2^{-1}\mathbf{w}_2-(\alpha\mathbf{w}_1+\beta\mathbf{w}_2)^T(\alpha\boldsymbol{\Omega}_1+\beta\boldsymbol{\Omega}_2)^{-1}(\alpha\mathbf{w}_1+\beta\mathbf{w}_2)\ge 0,$$
-where $\beta=1-\alpha$. In order to prove this inequality, I first simplify the left-hand side as
-\begin{align*}
-&\alpha\mathbf{w}_1^T\boldsymbol{\Omega}_1^{-1}\mathbf{w}_1+\beta\mathbf{w}_2^T\boldsymbol{\Omega}_2^{-1}\mathbf{w}_2-(\alpha\mathbf{w}_1+\beta\mathbf{w}_2)^T(\alpha\boldsymbol{\Omega}_1+\beta\boldsymbol{\Omega}_2)^{-1}(\alpha\mathbf{w}_1+\beta\mathbf{w}_2)\\
-=&(\alpha\mathbf{w}_1^T\boldsymbol{\Omega}_1^{-1}\mathbf{w}_1-\alpha^2\mathbf{w}_1^T(\alpha\boldsymbol{\Omega}_1+\beta\boldsymbol{\Omega}_2)^{-1}\mathbf{w}_1)+(\beta\mathbf{w}_2^T\boldsymbol{\Omega}_2^{-1}\mathbf{w}_2-\beta^2\mathbf{w}_2^T(\alpha\boldsymbol{\Omega}_1+\beta\boldsymbol{\Omega}_2)^{-1}\mathbf{w}_2)\\
-&-2\alpha\beta\mathbf{w}_1^T(\alpha\boldsymbol{\Omega}_1+\beta\boldsymbol{\Omega}_2)^{-1}\mathbf{w}_2\\
-=&\alpha\beta\mathbf{w}_1^T\boldsymbol{\Omega}_1^{-1}\boldsymbol{\Omega}_2(\alpha\boldsymbol{\Omega}_1+\beta\boldsymbol{\Omega}_2)^{-1}\mathbf{w}_1+\alpha\beta\mathbf{w}_2^T\boldsymbol{\Omega}_2^{-1}\boldsymbol{\Omega}_1(\alpha\boldsymbol{\Omega}_1+\beta\boldsymbol{\Omega}_2)^{-1}\mathbf{w}_2\\
-&-2\alpha\beta\mathbf{w}_1^T(\alpha\boldsymbol{\Omega}_1+\beta\boldsymbol{\Omega}_2)^{-1}\mathbf{w}_2\\
-=&\alpha\beta\mathbf{w}_1^T(\alpha\boldsymbol{\Omega}_1\boldsymbol{\Omega}_2^{-1}\boldsymbol{\Omega}_1+\beta\boldsymbol{\Omega}_1)^{-1}\mathbf{w}_1+\alpha\beta\mathbf{w}_2^T(\alpha\boldsymbol{\Omega}_2+\beta\boldsymbol{\Omega}_2\boldsymbol{\Omega}_1^{-1}\boldsymbol{\Omega}_2)^{-1}\mathbf{w}_2\\
-&-2\alpha\beta\mathbf{w}_1^T(\alpha\boldsymbol{\Omega}_1+\beta\boldsymbol{\Omega}_2)^{-1}\mathbf{w}_2
-\end{align*}
-I cannot prove the inequality based on this simplification. Is there any way to prove the inequality? Thanks.
-
-REPLY [3 votes]: It suffices to establish midpoint convexity (since $f$ is continuous on its domain, midpoint convexity implies convexity). So we show that
-\begin{equation*}
-f(\tfrac{w+v}{2},\tfrac{A+B}{2}) \le \tfrac12 f(w, A) + \tfrac12 f(v,B).
-\end{equation*}
-In other words, we show that
-\begin{equation*}
-\left\langle\tfrac{w+v}{2},\left(\tfrac{A+B}{2}\right)^{-1}\tfrac{w+v}{2}\right\rangle \le \tfrac12 f(w, A) + \tfrac12 f(v,B),
-\end{equation*}
-which simplifies to showing that
-\begin{equation*}
-\tag{*}
-w^TA^{-1}w+v^TB^{-1}v \ge (w+v)^T(A+B)^{-1}(w+v).
-\end{equation*}
-This follows easily from recalling that a symmetric block matrix whose lower-right block is positive definite is positive semidefinite if and only if the Schur complement of that block is psd. Thus, it follows that since
-\begin{equation*}
-\begin{bmatrix}
-w^TA^{-1}w & w^T\\
-w & A
-\end{bmatrix} \succeq 0,\ \text{and}\ 
-\begin{bmatrix}
-v^TB^{-1}v & v^T\\
-v & B
-\end{bmatrix} \succeq 0,\ \implies
-\begin{bmatrix}
-w^TA^{-1}w +v^TB^{-1}v & w^T+v^T\\
-w+v & A+B
-\end{bmatrix} \succeq 0.
-\end{equation*}
-Taking Schur complements of the final matrix above we obtain (*) as desired.<|endoftext|>
-TITLE: What is $\frac{x^{10} + x^8 + x^2 + 1}{x^{10} + x^6 + x^4 + 1}$ given $x^2 + x - 1 = 0$?
-QUESTION [6 upvotes]: Given that $x^2 + x - 1 = 0$, what is $$V \equiv \frac{x^{10} + x^8 + x^2 + 1}{x^{10} + x^6 + x^4 + 1} = \; ?$$
-
-I have reduced $V$ to $\dfrac{x^8 + 1}{(x^4 + 1) (x^4 - x^2 + 1)}$, if you would like to know.
-
-REPLY [3 votes]: $$\frac{x^{10} + x^8 + x^2 + 1}{x^{10} + x^6 + x^4 + 1} =\dfrac{(x^2+1)(x^8+1)}{(x^4+1)(x^6+1)}$$
-$$=\dfrac{\left(x+\dfrac1x\right)\left(x^4+\dfrac1{x^4}\right)}{\left(x^2+\dfrac1{x^2}\right)\left(x^3+\dfrac1{x^3}\right)}$$
-$$=\dfrac{\left(x^2+\dfrac1{x^2}\right)^2-2}{\left(x^2+\dfrac1{x^2}\right)\left(x^2+\dfrac1{x^2}-1\right)}$$
-(using $x^3+\dfrac1{x^3}=\left(x+\dfrac1x\right)\left(x^2-1+\dfrac1{x^2}\right)$ to cancel the common factor $x+\dfrac1x$).
-Now $x^2+x-1=0\implies x-\dfrac1x=-1$ as $x\ne0$.
-Now $x^2+\dfrac1{x^2}=\left(x-\dfrac1x\right)^2+2=1+2=3$.
-Can you take it from here?<|endoftext|>
-TITLE: A short exact sequence of abelian groups induces a long exact sequence in (co)homology with coefficients
-QUESTION [8 upvotes]: Let
-$$0\to V'\to V\to V''\to 0$$
-be a short exact sequence of abelian groups.
-Let $X$ be a topological space. How can one construct long exact sequences in singular homology and cohomology
-$$\cdots \to H_i(X;V')\to H_i(X;V)\to H_i(X;V'')\to H_{i-1}(X;V')\to\cdots,$$
-$$\cdots \to H^i(X;V')\to H^i(X;V)\to H^i(X;V'')\to H^{i+1}(X;V')\to\cdots?$$
-My first idea was to prove that we have short exact sequences of singular chain (co-)complexes
-$$0\to C_i(X;V')\to C_i(X;V)\to C_i(X;V'')\to 0,$$
-$$0\to C^i(X;V')\to C^i(X;V)\to C^i(X;V'')\to 0$$
-and then use the snake lemma as usual. But here we are in a different situation, because we have the one topological space and different coefficients. Therefore I think my idea above isn't useful, or am I wrong? Could you give me a hint?
-
-REPLY [5 votes]: Tensor product is right exact: if $0 \to V' \to V \to V'' \to 0$ is an exact sequence of abelian groups, then $M \otimes V' \to M \otimes V \to M \otimes V'' \to 0$ is an exact sequence too (I'm looking at the tensor product over $\mathbb{Z}$; if $f : X \to Y$ is a morphism, then the induced morphism $M \otimes X \to M \otimes Y$ is given on generators by $m \otimes x \mapsto m \otimes f(x)$).
-But by definition $C_i(X)$ is a free abelian group, and a free abelian group is flat, meaning that $M \otimes -$ is actually exact (and not just right exact). So you do get an exact sequence
-$$0 \to C_i(X) \otimes V' \to C_i(X) \otimes V \to C_i(X) \otimes V'' \to 0,$$
-and by definition $C_i(X;A) := C_i(X) \otimes A$. One can also check directly that the exact sequence above commutes with differentials, so you get a short exact sequence of chain complexes and can apply the snake lemma as usual.
-For cohomology the idea is the same: a free abelian group is projective (one has a chain of implications "free $\implies$ projective $\implies$ flat"), and by definition this means that $\hom(P,-)$ is an exact functor (in general it's only left exact), and you get a short exact sequence:
-$$0 \to \hom(C_i(X), V') \to \hom(C_i(X),V) \to \hom(C_i(X),V'') \to 0$$
-where by definition $C^i(X;A) := \hom(C_i(X), A)$, and this short exact sequence commutes with differentials.
-PS: This long exact sequence is how you construct the Bockstein homomorphism, for example. It's the connecting homomorphism $H_i(X;V'') \to H_{i-1}(X;V')$.<|endoftext|>
-TITLE: Proof of Gauss-Markov theorem
-QUESTION [7 upvotes]: Theorem: Let $Y=X\beta+\varepsilon$ where $$Y\in\mathcal M_{n\times 1}(\mathbb R),$$ $$X\in \mathcal M_{n\times p}(\mathbb R),$$ $$\beta\in\mathcal M_{p\times 1}(\mathbb R ),$$ and $$\varepsilon\in\mathcal M_{n\times 1}(\mathbb R ).$$
-We suppose that $X$ has full rank $p$ and that $$\mathbb E[\varepsilon]=0\quad\text{and}\quad \text{Var}(\varepsilon)=\sigma ^2I.$$
-Then, the least squares estimator (i.e. $\hat\beta=(X^TX)^{-1}X^TY$) is the best linear unbiased estimator of $\beta$; that is, for any linear unbiased estimator $\tilde\beta$ of $\beta$, it holds that $$\text{Var}(\tilde\beta)-\text{Var}(\hat\beta)\geq 0.$$
-
-Proof
-Let $\tilde\beta$ be a linear unbiased estimator, i.e.
-$$\tilde\beta=AY\ \ \text{for some }A_{p\times n}\quad\text{and}\quad\mathbb E[\tilde\beta]=\beta\text{ for all }\beta\in\mathbb R ^p.$$
-Questions:
-1) Why does $\mathbb E[\tilde\beta]=\beta$ hold for all $\beta$? I don't really understand this point. To me $\beta$ is fixed, so "$\mathbb E[\tilde\beta]=\beta$ for all $\beta$" doesn't really make sense.
-2) Actually, what is the difference between the least squares estimator and the maximum likelihood estimator? They are both $\hat\beta=(X^TX)^{-1}X^TY$, so if they are the same, I don't really see why we give them two different names.
-
-REPLY [7 votes]: The Gauss-Markov Theorem is actually telling us that in a regression model where the expected value of our error terms is zero ($E(\epsilon_{i}) = 0$), the variance of the error terms is constant and finite ($\sigma^{2}(\epsilon_{i}) = \sigma^{2} < \infty$), and $\epsilon_{i}$ and $\epsilon_{j}$ are uncorrelated for all $i \neq j$, the least squares estimators $b_{0}$ and $b_{1}$ are unbiased and have minimum variance among all unbiased linear estimators. Note that there might be biased estimators which have an even lower variance.
-Extensive information about the Gauss-Markov Theorem, such as the mathematical proof of the Gauss-Markov Theorem, can be found here http://economictheoryblog.com/2015/02/26/markov_theorem/
-However, if you want to know which assumption is necessary for $b_1$ to be an unbiased estimator for $\beta_1$, I guess that assumptions 1 to 4 of the following post (http://economictheoryblog.com/2015/04/01/ols_assumptions/) must be fulfilled to have an unbiased estimator.
-Furthermore, it is true that the maximum likelihood estimator and the least squares estimator are equivalent under certain conditions, i.e. if the noise $\epsilon$ is Gaussian distributed.
-Hope this helps.<|endoftext|>
-TITLE: Proving Slutsky's theorem
-QUESTION [11 upvotes]: How do we go about proving the following part of Slutsky's theorem?
-If $X_n \xrightarrow{d} X,\quad Y_n \xrightarrow{P} c$, then $X_nY_n \xrightarrow{d} Xc$, where $c$ is a constant (viewed as a degenerate random variable).
-I tried using the following fact:
-If $|X_n-Y_n| \xrightarrow{P} 0, \quad Y_n \xrightarrow{d} Y$, then $X_n \xrightarrow{d} Y$.
-However, I could not arrive at a continuous transformation to use this fact.
-I tried a lim sup and lim inf approach, directly from the definition of convergence in distribution:
-$X_n \xrightarrow{d} X$ if $F_n(x) \rightarrow F(x)$ at all points of continuity of $F$, where $F_n$ and $F$ are the distribution functions of $X_n$ and $X$ respectively.
-Is there any other equivalent characterisation that can help me prove Slutsky's theorem?
-
-REPLY [14 votes]: The fact you mention reads as follows: if $Z_n\to Z$ in distribution and $Z'_n\to 0$ in probability, then $Z_n+Z'_n\to Z$ in distribution.
-We have
-$$X_nY_n=X_n(Y_n-c)+cX_n;$$
-defining $Z_n:=cX_n$ and $Z'_n:=X_n(Y_n-c)$, we reach the wanted conclusion provided that we manage to show that $X_n(Y_n-c)\to 0$ in probability. But for a fixed $\varepsilon$, and each $R$,
-$$\mathbb P\{| X_n(Y_n-c)|\gt \varepsilon\}\leqslant\mathbb P\{|X_n|\gt R\}+\mathbb P\{|Y_n-c|\gt \varepsilon/R \}.$$
-Choosing $R$ to be a continuity point of the distribution function of $|X|$, we obtain from the convergence of $Y_n$ to $c$ in probability that
-$$\limsup_{n\to +\infty} \mathbb P\left\{| X_n(Y_n-c)|\gt \varepsilon\right\}\leqslant \mathbb P\{|X|\gt R\}.$$
-Since $R$ can be chosen arbitrarily large, we are done.<|endoftext|>
-TITLE: Product of two hypergeometric functions
-QUESTION [7 upvotes]: For $\Re a, \Re b, \Re c, \Re a', \Re b', \Re c'>0$, I would like to calculate the following product
-$$ {}_2 F_1(a, b; c; x^{-1}) \times \, {}_2 F_1(a', b'; c'; 1-\frac{x}{y}) $$
-for all $y>x>1$.
-Is there a formula that allows me to calculate the above product?
-Thanks in advance.
-
-REPLY [4 votes]: This product of two Gaussian hypergeometric functions can be expressed by a sum over generalized hypergeometric functions ${}_{4}F_{3}$ according to formula 4.3.(14) on page 187 in "Higher Transcendental Functions, Vol. 1" by A. Erdelyi (Ed.). I reproduce it here for your convenience. (I've slightly changed the notation of the original.)
-$$
-{}_{2}F_{1}(a,b;c;p z) \ {}_{2}F_{1}(a',b';c';q z)= \\ \sum_{n=0}^{\infty}\ \frac{(a)_{n} (b)_{n} (pz)^{n}}{n!\ (c)_{n}}\ {}_4F_{3}(a',b',1-c-n,-n;c',1-a-n,1-b-n;\frac{q}{p}),
-$$ where $(a)_{n}$ etc. denote the Pochhammer symbol. Note that for a negative integer $a$ (and equivalently $b$, and by symmetry also for a negative integer $a'$ and/or $b'$) the sum terminates. By setting $z=1$ and substituting $p$ and $q$ you get the solution that you are looking for.<|endoftext|>
-TITLE: Isomorphism between $Hom(E,F)$ and $E^*\otimes F$, E and F vector bundles.
-QUESTION [7 upvotes]: I need some aid in finding the solution to the following problem:
-Let $E$, $F$ be vector bundles (of finite rank, if needed) over a manifold $M$. Consider the set $Hom(E,F)$ of all bundle morphisms from $E$ to $F$. Then, $Hom(E,F)$ is isomorphic to $E^*\otimes F$.
-I believe that the first step I took in my attempt to solve this is correct: fixing $x\in M$, I have vector spaces $E_x$, $F_x$. Then, I can consider the map
-$$
-E^*_x\otimes F_x \rightarrow Hom(E_x,F_x)
-$$
-given by
-$$
-f\otimes v \mapsto [w\mapsto f(w)\,v].
-$$
-This shows the result in the fibers. I do not know how I can extend its validity to the entire bundle. Can someone give me any suggestion?
-
-REPLY [8 votes]: Define a map $\Phi:E^*\otimes F\to Hom(E,F)$ by your map on each fiber.
You have shown this is a linear isomorphism on each fiber, so it suffices to show it is a continuous map on the total spaces. This statement is local on $M$, so we may assume $E\cong M\times \mathbb{R}^m$ and $F\cong M\times\mathbb{R}^n$ are trivial. But then $\Phi:M\times((\mathbb{R}^m)^*\otimes\mathbb{R}^n)\to M\times (Hom(\mathbb{R}^m,\mathbb{R}^n))$ is just given by $\Phi(m,x)=(m,\varphi(x))$ for some particular linear isomorphism $\varphi:(\mathbb{R}^m)^*\otimes\mathbb{R}^n\to Hom(\mathbb{R}^m,\mathbb{R}^n)$ (namely, your map $E_x^*\otimes F_x\to Hom(E_x,F_x)$ in the particular case that $E_x=\mathbb{R}^m$ and $F_x=\mathbb{R}^n$). This $\varphi$ is a continuous map, and so it follows that $\Phi$ is continuous.
-(The key point here is that your description of the isomorphism $E_x^*\otimes F_x\to Hom(E_x,F_x)$ depended only on the vector space structures of $E_x$ and $F_x$, so the same description still holds when we replace $E_x$ and $F_x$ with $\mathbb{R}^m$ and $\mathbb{R}^n$ via any chosen isomorphisms $\mathbb{R}^m\to E_x$ and $\mathbb{R}^n\to F_x$. It follows that in a local trivialization, $\Phi$ is just the same linear isomorphism on every fiber.)<|endoftext|>
-TITLE: A homological perspective on the Hodge theorem
-QUESTION [5 upvotes]: Let $M$ be a smooth oriented manifold of dimension $n$. Let $(\mathcal{A} ^*(M),d)$ be the chain complex of differential forms on $M$. Endowing $M$ with an inner product gives us a Hodge star operator, which we can use to define the codifferential:
-$$\delta : \mathcal{A} ^{k}(M) \to \mathcal{A} ^{k-1}(M), \quad \delta: \omega \mapsto (-1)^k \star^{-1}d\star\omega $$
-This gives a chain complex $(\mathcal{A} ^*(M),\delta)$.
-We can now define the Laplacian as $\triangle = d \delta + \delta d$. As a chain map, $\triangle : \mathcal{A}^*(M) \to \mathcal{A}^*(M)$ is null homotopic by construction. By the exterior algebra functor I mean the functor that takes vector bundles over a manifold to their corresponding exterior algebra bundles.
-
-1. Does there exist a map $\varphi: \Gamma(T^*M) \to \Gamma(T^*M)$ s.t. under the exterior algebra functor the induced map between graded algebras is $\triangle$?
-
-To prove the Hodge theorem all we need to do is show that $i: Ker(\triangle) \to \mathcal{A}^*(M)$ is a quasi-isomorphism. From homological algebra we know that the chain map $i$ is a quasi-isomorphism iff its cone $Cone(i)$ is acyclic.
-
-2. What properties should $\varphi: \Gamma (T^*M) \to \Gamma(T^*M)$ possess to induce a map whose kernel has an acyclic cone?
-
-Could the Hodge theorem be proved in this way? i.e. by exhibiting some function $\varphi$ satisfying (1) and proving that it possesses (2)? (The latter part will no doubt involve some functional analysis.)
-
-REPLY [3 votes]: If I understand your first question correctly, you ask whether you can find a $\varphi \colon \Gamma(T^{*}M) \rightarrow \Gamma(T^{*}M)$ such that $\bigwedge^{k}(\varphi) \colon \bigwedge^k\Gamma(T^{*}M) \rightarrow \bigwedge^k\Gamma(T^{*}M)$ coincides with the Laplacian acting on differential $k$-forms under the identification of $\bigwedge^k\Gamma(T^{*}M) \cong \mathcal{A}^{k}(M)$ as $C^{\infty}(M)$-modules. The map $\varphi$ would have to be the Laplacian acting on one-forms, but the Laplacian is a second-order differential operator and not a morphism of $C^{\infty}(M)$-modules (it is not $C^{\infty}(M)$-linear), so it doesn't make sense to apply $\bigwedge$ to it as a functor on $C^{\infty}(M)$-modules.
-
-On the most basic level, the Hodge decomposition is something that you can "expect" just from linear algebra. Consider the following observations:
-
-Let $V$ be a finite dimensional vector space and $W \subseteq V$ a subspace. By choosing any subspace $W'$ that complements $W$ (that is, $W \oplus W' = V$) we obtain an isomorphism $$W' \hookrightarrow V \xrightarrow{\pi} (V /W).$$ More geometrically, we want to choose for each affine subspace $v + W$ a unique representative and so we choose some subspace $W'$ that is transversal to $W$ and send $v + W$ to the unique intersection point $(v + W) \cap W'$. To get a more canonical choice for $W'$, we can endow $V$ with an inner product and take $W' = W^{\perp}$. Geometrically, this amounts to sending each affine subspace $v + W$ to the unique point in $v + W$ that is the closest to the origin (with respect to the distance coming from the inner product).
-Let $V$ be a finite dimensional vector space and $d \colon V \rightarrow V$ be an operator such that $d^2 = 0$. We can form the cohomology $\ker(d) / \mathrm{im}(d)$ and want to construct an operator $S \colon V \rightarrow V$ such that $$ \ker(S) \hookrightarrow \ker(d) \xrightarrow{\pi} \ker(d) / \mathrm{im}(d)$$ is an isomorphism. Endowing $V$ with an inner product, we can take the complement of $\mathrm{im}(d)$ inside $\ker d$ which is $\mathrm{im}(d)^{\perp} \cap \ker d = \mathrm{ker} (d^{*} ) \cap (\ker d)$ and so any operator $S$ with $\ker(S) = \ker(d) \cap \ker(d^{*})$ will do the trick.
-
-
-One choice is to take $S = d + d^{*}$. This is a self-adjoint operator and clearly $\ker(d) \cap \ker(d^{*}) \subseteq \ker (S)$. But if $Sv = 0$ then
-$$ 0 = \left< Sv, Sv \right> = \left< dv, dv \right> + \left< dv, d^{*} v \right> + \left< d^{*} v, dv \right> + \left< d^{*}v, d^{*}v \right> = \left< dv, dv \right> + \left< v, (d^{*})^2v \right> + \left< v, d^2v \right> + \left< d^{*} v, d^{*} v \right> = \left< dv, dv \right> + \left< d^{*} v, d^{*} v \right>$$
-so we see that $dv = 0$ and $d^{*}v = 0$.
-Another choice is to take $L = S^2 = (d + d^{*})^2 = dd^{*} + d^{*}d$. The operator $L$ is also self-adjoint and since $S$ is self-adjoint we have $$\ker(L) = \ker(S^2) = \ker(S) = \ker(d) \cap \ker(d^{*}).$$
-
-If $$0 \rightarrow V^{0} \xrightarrow{d} V^{1} \xrightarrow{d} \ldots \xrightarrow{d} V^{n} \rightarrow 0$$
-is a cochain complex of finite dimensional vector spaces, endow each $V^{i}$ with an inner product and consider $V = \oplus V^i$ together with the direct sum inner product. The discussion above proves that the inclusion $\ker(L) \hookrightarrow V$ is a quasi-isomorphism. Note that while $S$ is not a chain map (as $S(V^{i}) \subseteq V^{i-1} \oplus V^{i+1}$), the operator $L$ is a chain map of complexes.
-
-Now, all of the discussion above can be repeated for the chain complex $\mathcal{A}^{*}(M)$. Unfortunately, $\mathcal{A}^{*}(M)$ is not finite dimensional. If $\mathcal{A}^{*}(M)$ together with the relevant inner product would form a Hilbert space and $d, d^{*}$ were bounded operators, then the arguments above would still work. Unfortunately, $\mathcal{A}^{*}(M)$ is not complete and the operators are not bounded. The role of the heavy analysis in the proof is to show that the arguments above essentially work even for the more problematic case.
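-Here is a minimal numerical sketch of the finite-dimensional discussion above (the example complex and all names are my own choices; with the standard inner product, $d^{*}$ is just the transpose):
-    import numpy as np
-
-    # Cochain complex of the boundary of a triangle (a circle):
-    # three vertices, three edges; d sends a function on vertices
-    # to its differences along edges.
-    d = np.array([[-1.,  1.,  0.],
-                  [ 0., -1.,  1.],
-                  [ 1.,  0., -1.]])
-
-    # Assemble d as a single operator D on V = V^0 (+) V^1 with D^2 = 0.
-    Z = np.zeros((3, 3))
-    D = np.block([[Z, Z],
-                  [d, Z]])
-    assert np.allclose(D @ D, 0)
-
-    # S = D + D* and the Laplacian L = D D* + D* D; note S^2 = L.
-    S = D + D.T
-    L = D @ D.T + D.T @ D
-    assert np.allclose(S @ S, L)
-
-    # dim ker(L) should equal the total cohomology b_0 + b_1 = 1 + 1 = 2.
-    eigenvalues = np.linalg.eigvalsh(L)
-    print(np.sum(np.isclose(eigenvalues, 0)))  # prints 2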
BTW, in this context, the first order operator $S$ (which serves as a square root of the Laplacian) is known as a Dirac operator.<|endoftext|>
-TITLE: another product of log integral
-QUESTION [10 upvotes]: Assuming one exists, and I think it does, find a closed form for:
-$$\displaystyle \int_{0}^{1}\log(1+x)\log(1-x^{3})dx$$
-From it, I did manage to derive:
-$$\displaystyle -\gamma+2\gamma\log(2)-\sum_{n=1}^{\infty}\frac{(-1)^{n}\psi\left(\frac{n+4}{3}\right)}{n(n+1)}$$
-But, now I am stuck on the sum.
-By merely switching the signs, an integral I found was:
-$$\displaystyle\int_{0}^{1}\log(1-x)\log(1+x^{3})dx=-1/2\psi_{1}(1/3)+\log^{2}(2)-2\log(2)+\frac{5\pi^{2}}{36}-\frac{\pi}{\sqrt{3}}+6$$
-But, the other way around appears to be more difficult for some reason.
-This is probably not new to some, but while playing around with this I did manage to find various fun identities, such as:
-$$\int_{0}^{1}x^{6n-3}\log(1+x)dx=\frac{1}{2(3n-1)}\left(H_{6n-2}-H_{3n-1}\right)$$ and
-$$\int_{0}^{1}x^{6n}\log(1+x)dx=\frac{2\log(2)}{6n+1}-\frac{1}{6n+1}\left(H_{6n+1}-H_{3n}\right);$$ together these cover $\int_{0}^{1}x^{3n}\log(1+x)dx$ for $n$ odd and $n$ even respectively.
-
-REPLY [5 votes]: Making the substitution $x \mapsto \frac{1-x}{1+x}$ gives
-$$I=\int_0^1 \ln(1+x)\ln(1-x^3)dx=2\int_0^1 \ln\left(\frac{2 x (3+x^2)}{(1+x)^3}\right)\ln\left(\frac{2}{1+x}\right)\frac{dx}{(1+x)^2}.$$
-Now, we separate the integral into 7 pieces, of which only one is not elementary (containing dilogarithms): $$ \ln\left(\frac{2 x (3+x^2)}{(1+x)^3}\right)\ln\left(\frac{2}{1+x}\right)
-\\\small =\ln^22+\ln2\ln x+\ln2\ln(3+x^2)-4\ln2\ln(1+x)-\ln x\ln(1+x)+3\ln^2(1+x)-\ln(1+x)\ln(3+x^2).$$
-$1$st integral
-$$\int_0^1 \frac{dx}{(1+x)^2}=-\frac1{1+x}\Bigg{|}_0^1=\frac12.$$
-$2$nd integral
-$$\int_0^1 \frac{\ln x}{(1+x)^2}dx=\sum_{n\geq1} \frac{(-1)^n}{n}=-\ln2.$$
-$3$rd integral
-$$\int_0^1 \frac{\ln(1+x)}{(1+x)^2}dx=-\frac{\ln(1+x)}{1+x}\Bigg{|}_0^1+\int_0^1\frac{dx}{(1+x)^2}=\frac12-\frac{\ln2}{2}.$$
-$4$th integral
-$$\int_0^1 \frac{\ln(3+x^2)}{(1+x)^2}dx=-\frac{\ln(3+x^2)}{1+x}\Bigg{|}_0^1+\int_0^1 \frac{2x}{(1+x)(3+x^2)}dx
-\\=-\ln2+\ln3+2\operatorname{Re} \int_0^1 \frac{dx}{(1+x)(x+i\sqrt{3})}=-\ln2+\frac34\ln3+\frac{\pi}{4\sqrt{3}}.$$
-$5$th integral
-$$\int_0^1 \frac{\ln x \ln(1+x)}{(1+x)^2}dx=-\frac{\ln x\ln(1+x)}{1+x}\Bigg{|}_0^1+\int_0^1\frac1{1+x}\left(\frac{\ln(1+x)}{x}+\frac{\ln x}{1+x}\right)dx
-\\=-\ln2+\int_0^1 \frac{\ln(1+x)}{x}dx-\int_0^1\frac{\ln(1+x)}{1+x}dx=\frac{\pi^2}{12}-\frac12\ln^22-\ln2.$$
-$6$th integral
-$$\int_0^1 \frac{\ln^2(1+x)}{(1+x)^2}dx=-\frac{\ln^2(1+x)}{1+x}\Bigg{|}_0^1+\int_0^1 \frac{2\ln(1+x)}{(1+x)^2}dx
-\\=1-\ln2-\frac12\ln^22.$$
-$7$th integral
-$$\int_0^1 \frac{\ln(1+x)\ln(3+x^2)}{(1+x)^2}dx=-\frac{\ln(1+x)\ln(3+x^2)}{1+x}\Bigg{|}_0^1+\int_0^1 \frac1{1+x}\left(\frac{2x\ln(1+x)}{3+x^2}+\frac{\ln(3+x^2)}{1+x}\right)dx
-\\=-\ln^22-\ln2+\frac34\ln3+\frac{\pi}{4\sqrt{3}}+2J$$
-where $$ J=\int_0^1\frac{x\ln(1+x)}{(1+x)(3+x^2)}dx=\operatorname{Re} \int_0^1 \frac{\ln(1+x)}{(1+x)(x+i\sqrt{3})}dx
-\\=-\frac18\ln^22-\operatorname{Re} \,\frac1{i\sqrt{3}-1}\int_0^1 \frac{\ln(1+x)}{x+i\sqrt{3}}dx$$
-Now, it is not hard to prove that
-$$\int_0^1 \frac{\ln(1+x)}{x+a}dx=\ln2 \ln\frac{a+1}{a-1}+\operatorname{Li}_2\left(\frac2{1-a}\right)-\operatorname{Li}_2\left(\frac1{1-a}\right).$$
-Putting $a=i\sqrt{3}$ then gives, after taking the real part, and assuming the principal value of the logarithm,
-$$\small
J=-\frac18\ln^22+\frac{\pi\sqrt{3}}{12}\ln2+\frac14\operatorname{Re}\text{Li}_2(e^{i\pi/3})-\frac{\sqrt{3}}{4}\operatorname{Im}\text{Li}_2(e^{i\pi/3})-\frac14\operatorname{Re}\text{Li}_2(\frac12 e^{i\pi/3})+\frac{\sqrt{3}}{4}\operatorname{Im}\text{Li}_2(\frac12 e^{i\pi/3})$$
-Now we can "simplify" a bit: we have $\displaystyle \,\, \operatorname{Re}\text{Li}_2(e^{i\pi/3})=\frac{\pi^2}{36}$
-and $\displaystyle \,\, \operatorname{Im}\text{Li}_2(e^{i\pi/3})=\frac1{2\sqrt{3}}\psi_1\left(\frac13\right)-\frac{\pi^2}{3\sqrt{3}}.$
-Furthermore, using known dilogarithm identities we have $\displaystyle \,\,\text{Li}_2(\frac12 e^{i\pi/3})=-\text{Li}_2\left(-\frac{i}{\sqrt{3}}\right)-\frac12\ln^2\left(\frac34-i\frac{\sqrt{3}}{4}\right),$
-and since $\displaystyle \,\, \operatorname{Re}\text{Li}_2\left(-\frac{i}{\sqrt{3}}\right)=\sum_{n=1}^{\infty} \frac{(-1)^n}{(2n)^2\, 3^n}=\frac14\text{Li}_2\left(-\frac13\right),$
-we have $$\operatorname{Re}\text{Li}_2(\frac12 e^{i\pi/3})=\frac{\pi^2}{72}-\frac18\ln^2\left(\frac34\right)-\frac14\text{Li}_2\left(-\frac13\right)$$
-and $$\operatorname{Im}\text{Li}_2(\frac12 e^{i\pi/3})=\frac{\pi}{12}\ln\left(\frac34\right)-\operatorname{Im}\text{Li}_2\left(-\frac{i}{\sqrt{3}}\right).$$
-Finally, putting everything together gives the expression I posted as a comment:
-$$I=\ln^22 - \frac18\ln^23 + 2\ln2\ln3 - \frac32\ln3 - 6\ln2 + 6-\frac{\pi}{4\sqrt{3}}(2 + \ln3) - \frac{37\pi^2}{72} + \frac12\psi_1\left(\frac13 \right ) - \frac14\text{Li}_2\left( -\frac13 \right ) + \sqrt{3}\Im\text{Li}_2\left( -\frac{i}{\sqrt{3}} \right )$$
-I really hope that the dilogarithms can be simplified some more, but I can't see how for now.<|endoftext|>
-TITLE: Morphism induced by a cellular map between CW-complexes
-QUESTION [11 upvotes]: I'm trying to understand cellular homology as a functor from the category (CW-complexes, cellular maps) to the category of sequences of abelian groups.
-Let $X,Y$ be fixed CW-complexes. My lecturer defined the $n$-th cellular chain group $C_n^{\mathrm{cell}}(X)$ as the free abelian group with generators the $n$-cells of $X$, and the $n$-th boundary map $\partial_n^{\mathrm{cell}}$ as the morphism s.t., if $A$ is an $n$-cell of $X$ and $\Phi^{(n)}_A :D^n \to X^n $ is its characteristic map in $X$, we have $ \partial_n^{\mathrm{cell}} A = \sum_C \epsilon (A,C) C$, where $C$ ranges over the $(n-1)$-cells of $X$ and $\epsilon (A,C)$ is the degree of a map $S^{n-1} \to S^{n-1} $ induced by the characteristic maps of $A$ and $C$ (i.e. he uses the cellular boundary formula as a definition).
-I know that there is a natural isomorphism between $C_n^{\mathrm{cell}}(X) $ and $H_n (X^n ,X^{n-1} )$ (the latter being a singular homology group).
-If $f:X \to Y$ is a cellular map, it is clear what the induced map $f_n :H_n (X^n ,X^{n-1}) \to H_n (Y^n ,Y^{n-1})$ is, but I'm having trouble understanding what $f_n :C_n^{\mathrm{cell}}(X) \to C_n^{\mathrm{cell}}(Y)$ is. Given an $n$-cell $A$ of $X$, I would express $f_n (A) $ as a linear combination of the $n$-cells of $Y$ that intersect $f(A)$, but I can't find how to define the coefficients.
-Thank you
-
-REPLY [7 votes]: Let $\alpha$ be an $n$-cell of $X$ and $\beta$ be an $n$-cell of $Y$. Then $f(\alpha)=\sum_{\beta\in J_n'} y_{\alpha \beta} \beta$. Here $J_n'$ denotes the set of $n$-cells of $Y$. We wish to determine the values of $y_{\alpha \beta}\in \mathbb Z$.
-Let $\varphi_\alpha:S^{n-1}\to X^{n-1}$ be the attaching map of $\alpha$. Let $\overline{f}:X^n/X^{n-1}\to Y^n/Y^{n-1}$ be the map induced by $f$ on the quotients.
-For every wedge sum $\bigvee_i A_i$ of pointed topological spaces there are canonical retractions $r_j: \bigvee_i A_i \to A_j$. Note that for any CW complex $Z$, $Z^n/Z^{n-1}$ is a wedge of $n$-spheres. Under this identification, we obtain retractions $r_\gamma:Z^n/Z^{n-1}\to S^n$, one for every $n$-cell $\gamma$ of $Z$.
-With this notation set up, consider the following commutative diagram:
-
-Proposition: $y_{\alpha \beta}=deg(f_{\alpha \beta})$.
-This is Theorem 10.13 of Switzer's book "Algebraic Topology: Homology and Homotopy", or Proposition 3.8 of Lundell and Weingram, "The Topology of CW Complexes". They call $y_{\alpha \beta}$ the "degree with which $\alpha$ is mapped into $\beta$ by $f$", which is a nice name.<|endoftext|>
-TITLE: Square of a second derivative is the fourth derivative
-QUESTION [16 upvotes]: I have a simple question for you guys. If I have this:
-$$\left(\frac{d^2}{{dx}^2}\right)^2$$
-Is it equal to this:
-$$\frac{d^4}{{dx}^4}$$
-Such that if I have an arbitrary function $f(x)$ I can get:
-$$\left(\frac{d^2 f(x)}{{dx}^2}\right)^2 = \frac{d^4 f(x)}{{dx}^4}$$
-Sorry if it's a pretty simple question, but I was trying to simplify something like this:
-$$\left(a \cdot \frac{d^2}{{dx}^2} - f(x)\right)^2 g(x),$$ which is where it came up.
-
-REPLY [2 votes]: It turns out that the first fact you cited actually does apply.
-The difficulty appears to be confusion between the expressions
-$\left(a \cdot \frac{d^2}{{dx}^2} - f(x)\right)^2$
-and $\left(a \cdot \frac{d^2}{{dx}^2} f(x)\right)^2$
-The expression in parentheses has two terms, and expands like this:
-\begin{align}
-\left(a \frac{d^2}{{dx}^2} - f(x)\right)^2 g(x)
-&= \left(a \frac{d^2}{{dx}^2} - f(x)\right)
- \left(a \frac{d^2}{{dx}^2} - f(x)\right) g(x) \\
-&= \left(a \frac{d^2}{{dx}^2} - f(x)\right)
- \left(a \frac{d^2}{{dx}^2} g(x)- f(x)g(x)\right) \\
-&= a \frac{d^2}{{dx}^2} \left(a \frac{d^2}{{dx}^2} g(x)- f(x)g(x)\right)
- - f(x)\left(a \frac{d^2}{{dx}^2} g(x)- f(x)g(x)\right) \\
-&= a^2 \frac{d^4}{{dx}^4} g(x) - a \frac{d^2}{{dx}^2} \left(f(x)g(x)\right)
- - a f(x) \frac{d^2}{{dx}^2} g(x) + (f(x))^2 g(x)
-\end{align}<|endoftext|>
-TITLE: What is the fallacy in this proof?
-QUESTION [14 upvotes]: I came across this funny proof:
-
-$$4$$
-$$=4+\frac 92-\frac 92$$
-$$=\sqrt{(4-\frac 92)^2}+\frac 92$$
-$$=\sqrt{16+\frac{81}{4}-36}+\frac 92$$
-$$=\sqrt{25+\frac {81}{4}-45}+\frac 92$$
-$$=\sqrt{(5-\frac 92)^2}+\frac 92$$
-$$=5-\frac 92+\frac 92$$
-$$=5$$
-
-I suspect that the error is in the second line, where the operation on the negative is done before the positive, violating the BODMAS rule. But $\sqrt{(4-\frac 92)^2}=4-\frac 92$, and the similar solving continues. So, where is the real error?
-Thanks for any help!!
-
-REPLY [2 votes]: The equality
-$$\sqrt{\left(4-\frac92\right)^2} = \sqrt{\left(5-\frac92\right)^2}$$
-is equivalent to
-$$\sqrt{\left(-\frac12\right)^2} = \sqrt{\left(\frac12\right)^2}$$
-which of course is true, but that does not imply
-$$-\frac12 = \frac12$$
-Adding the first square and square root is invalid; this equality:
-
-$$4+\frac 92-\frac 92$$
- $$=\sqrt{\left(4-\frac 92\right)^2}+\frac 92$$
-
-does not hold.<|endoftext|>
-TITLE: Given a matrix $A$ such that $A^{\ell}$ is a constant matrix, must $A$ be a constant matrix?
-QUESTION [9 upvotes]: This problem originates from an exercise in Richard Stanley's Algebraic Combinatorics. The exercise in the text (Chapter 3, Exercise 2(a)) asks
-
-Let $G$ be a finite graph (allowing loops and multiple edges).
Suppose that there is some $\ell> 0$ such that the number of walks of length $\ell$ from any fixed vertex $u$ to any fixed vertex $v$ is independent of $u$ and $v$. Show that $G$ has the same number $k$ of edges between any two vertices (including $k$ loops at each vertex).
-
-The hypothesis of the problem (that the number of walks of length $\ell$ between any two vertices is the same) tells us that the adjacency matrix $A(G)$ of $G$ raised to the $\ell$-th power is a constant matrix
-$$ (A(G))^{\ell} = \begin{pmatrix} c & c & \cdots & c \\ c & c & \cdots & c \\ \vdots & \vdots & \ddots & \vdots \\ c & c & \cdots & c \end{pmatrix} $$
-for some constant $c$. We would like to conclude that this means the adjacency matrix itself is a constant matrix (hence, the number of walks of length 1 between any two vertices is the same, i.e., the number of edges between any two vertices is the same). Update in response to comments below: In this case we also have that $A(G)$ is a symmetric matrix, which would eliminate some trivial counterexamples.
-Does this result follow from something in linear algebra? What is the proof? If not, is there some other approach that might be more fruitful?
-
-REPLY [6 votes]: Since $A(G)$ is symmetric, it is diagonalizable. In particular, this implies that $\ker(A(G)^\ell)=\ker(A(G))$ for all $\ell>0$. If $A(G)^\ell$ is a constant matrix, then for any $i$ and $j$, $e_i-e_j$ is in its kernel (where $\{e_i\}$ is the standard basis). Thus $e_i-e_j\in \ker(A(G))$ for every $i$ and $j$, which says that the columns of $A(G)$ are all the same. Since $A(G)$ is symmetric, this actually implies all the entries of $A(G)$ are the same.<|endoftext|>
-TITLE: How do I develop numerical routines for the evaluation of my own special functions?
-QUESTION [13 upvotes]: This question has been cross-posted to ComputationalScience.SE here.
-
-When performing computational work, I often come across a univariate function, defined in terms of an integral or differential equation, which I would like to rapidly evaluate (say, millions of times per second) over a specified interval to a given precision (say, one part in $10^{10}$). For example, the function
-$$ f(\alpha) = \int_{k=0}^\infty \frac{e^{-\alpha^2 k^2}}{k+1}\ \mathrm{d}k $$
-over the interval $\alpha \in (0,10)$ came up in a recent project. Now it happens that this integral can be evaluated in terms of standard special functions (in particular, $\operatorname{Ei}(z)$ and $\operatorname{erfi}(z)$), but suppose we had a much more complicated function for which no such evaluation was known. Is there a systematic technique I can apply to develop my own numerical routines for the evaluation of such functions?
-I am sure plenty of techniques must be out there, as fast algorithms seem to exist for basically all of the common special functions. However, I emphasize that the sort of technique I am looking for should not rely on the function having some particular structure (e.g. recurrence relations like $\Gamma(n+1) = n\Gamma(n)$ or reflection formulas like $\Gamma(z) \Gamma(1-z) = \pi \csc(\pi z)$). Ideally, such a technique would work for just about any (sufficiently well-behaved) function I come across.
-You can take for granted that I do have some slow method of evaluating the desired function (e.g. direct numerical integration) to any precision, and that I'm willing to do a lot of pre-processing work with the slow method in order to develop a fast method.
-
-REPLY [19 votes]: Unfortunately, there is no single approach that will lead to robust, accurate, and high-performance implementations across the large universe of special functions. Often, two or more methods must be used for different parts of the input domain, and the necessary research and implementation work may take weeks for elementary functions and months for higher transcendental functions.
-Since considerable mathematical as well as programming skill is required to produce high-quality implementations, my first recommendation would be to utilize and leverage existing mathematical libraries as much as feasible. These could be commercial libraries, such as the NAG numerical libraries or the IMSL numerical libraries by Rogue Wave, or open source libraries such as the GNU Scientific Library (GSL) or the math and numerics portions of the Boost library. You may also find relevant source code in online repositories such as Netlib's collected algorithms from ACM TOMS.
-In practical terms, on modern SIMD-enhanced processors, the extensive use of tables is no longer advisable and (piecewise) polynomial approximations are usually the most attractive. The reason table-based methods have fallen into disfavor on high-performance processor architectures is that over the past decade the performance of functional units (measured in FLOPS) has grown much faster than the performance of memory sub-systems (measured in GB/sec). The reasoning in the following paper matches my own professional experience:
-Marat Dukhan and Richard Vuduc, "Methods for high-throughput computation of elementary functions". In Parallel Processing and Applied Mathematics, pp. 86-95. Springer, 2014. (slides)
-In terms of performance, polynomial approximations benefit from the fused multiply-add operation (FMA) present in modern processor hardware (both CPUs and GPUs). This operation also helps reduce round-off errors while offering some protection against subtractive cancellation. For smallest error and best efficiency, one would want to use minimax approximation.
-Commonly used tools such as Maple and Mathematica have built-in facilities to generate these. While they generate approximations that are (very close to) optimal in the mathematical sense, they do not typically account for the error incurred by the representation of coefficients and the evaluation of operations in limited floating-point precision. The Sollya tool offers this functionality through its fpminimax command. Lastly, you can write your own approximation code, which would likely be based on the Remez algorithm.
-For some functions, polynomial approximations are not really practical, as too many terms would be required to reach IEEE-754 double precision. In those cases, one can choose one of two strategies.
-The first strategy is to cleverly transform the input arguments, using basic arithmetic and simple elementary functions, such that the resulting function is "well-behaved" with respect to polynomial approximation. Typically such a transformation tends to "linearize" the function to be approximated. A good instructional example of this approach is the computation of erfc in the following paper:
-M. M. Shepherd and J. G. Laframboise, "Chebyshev Approximation of $(1 + 2x)\exp(x^2)\operatorname{erfc} x$ in $0 \leqslant x < \infty$". Mathematics of Computation, Vol. 36, No. 153 (Jan., 1981), pp. 249-253 (online)
-The second approach is to use the ratio of two polynomials, that is, a rational approximation, for example in the form of a Padé approximant.
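-As a small illustration of why a rational approximation can pay off near the edge of a power series' range of usefulness, here is a sketch using SciPy's pade helper, which builds the approximant from Taylor coefficients (the choice of $\ln(1+x)$, the degrees, and the test point are mine):
-    import numpy as np
-    from scipy.interpolate import pade
-
-    # Taylor coefficients of ln(1+x) about 0, ascending, through x^4:
-    # ln(1+x) = x - x^2/2 + x^3/3 - x^4/4 + ...
-    an = [0.0, 1.0, -1.0/2, 1.0/3, -1.0/4]
-
-    # [2/2] Pade approximant built from the same five coefficients;
-    # p and q come back as numpy poly1d objects.
-    p, q = pade(an, 2)
-
-    x = 0.9  # near the edge of the series' radius of convergence
-    exact = np.log(1.0 + x)
-    print(abs(exact - np.polyval(an[::-1], x)))  # ~7e-2, degree-4 Taylor
-    print(abs(exact - p(x) / q(x)))              # ~6e-4, [2/2] Pade
-Both evaluations spend the same five pieces of local information about the function; the rational form simply uses them better away from the expansion point.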
The tools already mentioned can help with that; there is also a copious amount of literature published on rational approximation, which in general is a harder problem than polynomial approximation.
-For special functions (as opposed to elementary functions), straightforward polynomial and rational approximations are often inaccurate and/or inefficient. They require the application of more advanced mathematical concepts such as asymptotic expansions, recurrence relations, and continued fraction expansions. Even if the use of these solves the problem mathematically, there may still be numerical issues, for example when evaluating continued fractions in the forward direction. Not surprisingly, entire books have been written on the computer evaluation of certain functions, such as Bessel functions and Mathieu functions.
-In the following, I am giving a quick overview of useful literature, starting with coverage of the mathematical foundations, moving to methods suitable for elementary functions and simple special functions such as erfc and tgamma, and finally advanced methods for special functions that are harder to compute both in terms of performance and accuracy. Obviously, this can only scratch the surface; much relevant material on particular functions can be found in individual papers, for example in journals and proceedings from AMS, SIAM, ACM, and the IEEE.
-Much of the literature has not yet caught up to modern hardware and software environments, in particular the presence of the FMA operation and SIMD architectures. In terms of robust computer codes for the evaluation of mathematical functions, one could wish for closer co-operation between mathematics and science on one hand, and computer science and computer engineering on the other hand. Among the works below, those by Markstein and Muller are the most advanced in this regard.
-Milton Abramowitz and Irene A. Stegun (eds.), "Handbook of Mathematical Functions. With Formulas, Graphs, and Mathematical Tables". New York, NY: Dover 1972 (online version)
-Frank Olver, et al. (eds.), "NIST Handbook of Mathematical Functions". New York, NY: Cambridge University Press 2010 (online version)
-A. Erdelyi, et al., "Higher Transcendental Functions." Vol. 1-3. New York, NY: McGraw-Hill 1955
-Oskar Perron, "Die Lehre von den Kettenbrüchen, 3rd ed." Vol. 1+2. Stuttgart (Germany): Teubner 1954, 1957
-
-John F. Hart, "Computer Approximations". Malabar, FL: Krieger Publishing 1978
-William J. Cody and William Waite, "Software Manual for the Elementary Functions". Englewood Cliffs, NJ: Prentice-Hall 1980
-Peter Markstein, "IA-64 and Elementary Functions". Upper Saddle River, NJ: Prentice-Hall 2000
-Jean-Michel Muller, "Elementary Functions. Algorithms and Implementation 3rd. ed.". Birkhäuser 2016
-Nelson H. F. Beebe, "The Mathematical-Function Computation Handbook". Springer 2017
-Jean-Michel Muller, et al., "Handbook of Floating-Point Arithmetic 2nd ed.". Birkhäuser 2018
-
-Nico M. Temme, "Special Functions. An Introduction to the Classical Functions of Mathematical Physics". New York, NY: Wiley 1996
-Amparo Gil, Javier Segura, and Nico M. Temme, "Numerical Methods for Special Functions". SIAM 2007
-
-Frank W. J. Olver, "Asymptotics and Special Functions". Natick, MA: A K Peters 1997
-Jet Wimp, "Computation with Recurrence Relations". Boston, MA: Pitman 1984
-
-A. N. Khovanskii, "The application of continued fractions and their generalizations to problems in approximation theory". Groningen (Netherlands): Noordhoff 1963
-A. Cuyt, et al., "Handbook of Continued Fractions for Special Functions". Springer 2008<|endoftext|>
al., "Handbook of Continued Fractions for Special Functions". Springer 2008<|endoftext|> -TITLE: Does $A \lor \neg A$ assert decidability in intuitionistic logic? -QUESTION [5 upvotes]: I'm new to intuitionistic logic, so forgive my silly question. -In intuitionistic logic, does $A \lor \neg A$ assert the decidability of $A$? For example, let's say I don't personally have a proof of $A$, but I know someone who has either a proof or a refutation of $A$, but I don't know which one. Can I say in that case that $A \lor \neg A$ is true? - -REPLY [2 votes]: In model theoretic view we say we know $A\vee B$ for some sentence $A \vee B$ if we know $A$ or we know $B$, therefore if you want to prove $A \vee \neg A$ for some sentence $A$ then you should prove $A$ or prove $\neg A$ and this leads to decidability of $A$.<|endoftext|> -TITLE: Function in $L^p$ but not in $L^{\infty}$ -QUESTION [7 upvotes]: Show that if -$$f(x) = \ln\left({1\over x}\right),\quad 00$, s.t. $|f(x)|>M$ but I forget how to compute the integral and I am not sure how to find $M$. -Any help, Thanks. - -REPLY [4 votes]: You can do this change of variables : u = $\log{1/x} \implies x = e^{-u}$ -So, to calculate the p-norm, consider the following integral -$\int_0^{1}\log(\frac{1}{x})^pdx = \int_{0}^{\infty} u^p e^{-u} du$ -The integral in the r.h.s is the well-known gamma function $\Gamma(p+1)$, which converges for the values of p that we are interested. -For the $L^{\infty}$ part, consider $M \in [0,\infty)$ -So, we have that $\log(\frac{1}{x})> M \iff x < e^{-M} $ -Now, consider the interval $A = (0, e^{-M}) $. We have that, for all $x \in A, f(x) > M$ because the function is strictly decreasing. Also, f is positive in $(0,1)$, so we can throw away the modulus. -So we found a measurable set where the function is greater than M, for all M, and have positive measure.This implies that $ f \notin L^{\infty} $ and we are done.<|endoftext|> -TITLE: Problem with units in number field -QUESTION [6 upvotes]: Edit:There were several major mistakes by my side this post, most of which have been accounted for.Now, after editing these out, the post seems to have no purpose at all.Nevertheless, it feels wrong to delete it, so I am going to leave it as it is. -Consider the splitting field of $$x^3-2$$ which is $$K=Q(\alpha,\omega),$$ where alpha is the real cube root of 2 and omega is a primitive third root of unity.One can check that in fact $$K=Q(\alpha+\omega).$$ The galois group of K over Q is isomorphic to $$Z_2\times Z_3$$ and is abelian, so , by the Kronecker-Weber theorem, lies in a cyclotomic field, the smallest of which is dependent on the conductor of K.EDITThis in fact is wrong, the galois group of this extension is nonabelian, so K doesn't lie in a cyclotomic field. -I was trying to solve $$a^3+2b^3+4c^3=1$$ in integers, which is the norm of an element of the form $$a+b\alpha+c\alpha^2.$$EDIT The norm is actually $$a^3+2b^3+4c^3-6abc$$Let $$O_K$$ be the ring of integers, so $$Z(\alpha,\omega)$$ is contained in it.By Dirichlet's unit theorem, the ring of integers is generated by 2 elements.EDIT the fundamental units can be found here:http://www.math.uconn.edu/~kconrad/blurbs/gradnumthy/unittheorem.pdf -The diophantine equation seems to have a lot of solutions:(1,0,0),(5,-4,1),(-1,1,0) etc.So to solve this, we have to see when an element of the previous form is a product of powers of the two units.But the fundamental units look terrifying, so maybe this won't be a very fruitful process. 
-TITLE: Problem with units in number field
-QUESTION [6 upvotes]: Edit: There were several major mistakes on my side in this post, most of which have been accounted for. Now, after editing these out, the post seems to have no purpose at all. Nevertheless, it feels wrong to delete it, so I am going to leave it as it is.
-Consider the splitting field of $$x^3-2,$$ which is $$K=Q(\alpha,\omega),$$ where $\alpha$ is the real cube root of $2$ and $\omega$ is a primitive third root of unity. One can check that in fact $$K=Q(\alpha+\omega).$$ The Galois group of $K$ over $Q$ is isomorphic to $$Z_2\times Z_3$$ and is abelian, so, by the Kronecker-Weber theorem, $K$ lies in a cyclotomic field, the smallest of which depends on the conductor of $K$. EDIT: This is in fact wrong; the Galois group of this extension is nonabelian, so $K$ doesn't lie in a cyclotomic field.
-I was trying to solve $$a^3+2b^3+4c^3=1$$ in integers, which is the norm of an element of the form $$a+b\alpha+c\alpha^2.$$ EDIT: The norm is actually $$a^3+2b^3+4c^3-6abc.$$ Let $O_K$ be the ring of integers, so $Z[\alpha,\omega]$ is contained in it. By Dirichlet's unit theorem, the group of units is generated by $2$ fundamental units (together with the roots of unity). EDIT: The fundamental units can be found here: http://www.math.uconn.edu/~kconrad/blurbs/gradnumthy/unittheorem.pdf
-The diophantine equation seems to have a lot of solutions: $(1,0,0)$, $(5,-4,1)$, $(-1,1,0)$, etc. So to solve this, we have to see when an element of the previous form is a product of powers of the two units. But the fundamental units look terrifying, so maybe this won't be a very fruitful process.
-Also, if someone can tell me what the actual ring of integers is, that would be really helpful.
-
-REPLY [3 votes]: Hardly an answer, but an easily-seen unit in $\Bbb Q(\sqrt[3]2\,)$ is $\alpha-1$ in your notation, and it's hard to imagine another unit closer to the identity (in the unique real embedding of that field), though I'm not skilled enough to say that it's a fundamental unit.
-For the full field $K$, which has three inequivalent complex embeddings, two obviously independent units are $\alpha-1$ and $\omega\alpha-1$. Again, I'm not going to be so rash as to say that I know that they form a basis. The truth in these matters can be looked up easily, though.<|endoftext|>
-TITLE: Is every closure of a metric space a completion?
-QUESTION [7 upvotes]: I know that every completion is a closure of a metric space, since every convergent sequence is Cauchy and the limit of that sequence will exist within the completion.
-At the same time, from my understanding, every Cauchy sequence will bunch closer together and get arbitrarily close to something; it is just a question as to whether the element it gets close to actually exists in the space.
-This leads me to the question of whether every closure of a metric space is a completion, because we would just be adding the limits of sequences, limits which exist outside of the original space, including the limits of nonconvergent Cauchy sequences.
-So is there an example of a closure which is not a completion? Or are these notions equivalent?
-
-REPLY [8 votes]: Closures and completions are very different beasts. The process of completion takes a metric space and spits out a new one, with some new limits added. So the completion of $\mathbb Q$ under the usual metric is $\mathbb R$, no matter what.
-Closure, on the other hand, takes a metric space and a subset thereof, and puts in any points from that metric space that are limits of the subset. It doesn't, however, put in any points which were "missing" from the space. So, if we're working in $\mathbb Q$, then the closure of the subset $(0,1)\cap \mathbb Q$ is $[0,1]\cap\mathbb Q$, which is still missing the irrationals, since the metric space we're working in doesn't have them. More dramatically, if $(M,d)$ is any metric space, then $M$ is closed in itself, but might not be complete.
-It is notable that, in complete metric spaces, the closure of a subset $S$ is essentially (i.e. isometric to) the completion of the sub-metric space represented by $S$.
-
-REPLY [3 votes]: Your question doesn't quite make sense: every metric space is closed as a subset of itself. For example, if you consider $(0,1)$ with the usual metric, the closure of $(0,1)$ in $(0,1)$ is just $(0,1)$.<|endoftext|>
-TITLE: Why is abelianness such a precious property?
-QUESTION [8 upvotes]: My abstract algebra teacher said the other day that constructions like ideals and cosets and normal subgroups are "trying to capture a little bit of abelianness." He has used phrases like "magic happens" when speaking of this property and of qualities that mimic commutativity in some way. So why is it such a game-changing quality?
-Thanks!
-
-REPLY [6 votes]: For one, in studying an abelian group you can utilize a lot of your intuition from doing addition.
-But more to the point about normal subgroups: one should note that in an abelian group $G$, every subgroup $H$ is normal, so you can always take the quotient $G/H$. In a non-abelian group, you need special conditions on $H$. The definition of a normal subgroup is a subgroup $H$ such that $ghg^{-1}\in H$ whenever $h\in H$, for every $g\in G$. You can see how this condition is trivial when $G$ is abelian, but in the non-abelian case it is just what is needed for the quotient $G/H$ to be a group. I think this might be what your teacher meant when he said normal subgroups are "trying to capture a little bit of abelianness": they do something that every subgroup can do in an abelian group.
-Of course, this really doesn't even start the whole picture. I can't talk about every example, but here, in a nutshell, is one I've been trying to understand: let $K$ be a number field. It turns out that the field $K$ itself (in particular, its group of fractional ideals) has enough information to describe all of its abelian extensions, that is, the extensions whose Galois group is abelian. What about the non-abelian extensions? Oh, we don't even know. That's an open research area.
-Many times, groups can be attached to another, more complicated object (like a number field, in the above example) to tell us something about that object. When these groups are abelian, it often means these objects are particularly nice and well-behaved.
-But we also just know a lot more about abelian groups themselves. For instance, every finite abelian group is isomorphic to a direct product of cyclic groups of prime power order. What can we say about every finite non-abelian group? We don't know them all, but we know they can get really complicated.
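-For a concrete instance of that classification (a worked example for illustration): a group of order $12 = 2^2\cdot 3$ has, up to isomorphism, exactly two abelian forms, $$\Bbb Z_4\times\Bbb Z_3 \quad\text{and}\quad \Bbb Z_2\times\Bbb Z_2\times\Bbb Z_3,$$ one for each partition of the exponent $2$; by contrast there are five groups of order $12$ in all, and the three non-abelian ones ($A_4$, the dihedral group of order $12$, and the dicyclic group) admit no such uniform description.<|endoftext|>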
-TITLE: Riemann Zeta Function integral
-QUESTION [5 upvotes]: I was reading about the Riemann Zeta Function when they mentioned the contour integral $$\int_{+\infty}^{+\infty}\frac{(-x)^{s-1}}{e^x - 1}\,dx$$ where the path of integration "begins at $+\infty$, moves left down the positive real axis, circles the origin once in the positive (counterclockwise) direction, and returns up the positive real axis to $+\infty$." They specified that $(-x)^{s-1} = e^{(s-1)\log(-x)}$, so that $\log(-x)$ is real when $x$ is a negative real number. Furthermore, $\log(-x)$ is undefined when $x$ is a positive real number, as a result of this choice of $\log$.
-They proceeded to evaluate the integral by splitting it into $$\int_{+\infty}^{\delta}\frac{(-x)^{s-1}}{e^x - 1}\,dx + \int_{|x|=\delta}\frac{(-x)^{s-1}}{e^x - 1}\,dx + \int_{\delta}^{+\infty}\frac{(-x)^{s-1}}{e^x - 1}\,dx.$$
-I do not understand the justification for the path of the second integral; if $(-x)^{s-1}$ is undefined on the non-negative real axis, how can the path of integration cross the real axis at $x=\delta$, when $\delta$ is implicitly non-negative?
-EDIT: I am aware that $\log(-x)$ is defined on the entire plane except for some ray from the origin. What I am specifically confused about is how the contour is allowed to cross the ray on which $\log(-x)$ is undefined.
-EDIT: They describe the first and third integrals of the sum as being taken "slightly above" and "slightly below" the positive real axis, as the function is undefined on the positive real axis.
-
-REPLY [5 votes]: Here is a picture that matches the description of the path of integration:
-[figure: two long horizontal segments running just above and just below the positive real axis, joined by a small counterclockwise circle of radius $\delta$ around the origin]
-Note that we are taking the positive real axis as the branch cut for $\log(-x)$, and we have taken the branch of $\log(-x)$ whose argument goes from $-\pi$ to $\pi$.
-The red and blue lines should be infinitesimally above and below the positive real axis.
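-To spell out why the crossing at $x=\delta$ is harmless (a sketch of the standard computation, for $\operatorname{Re} s>1$): along the contour one follows $\arg(-x)$ continuously. It equals $-\pi$ on the incoming segment just above the axis, increases through $0$ at $x=-\delta$, and reaches $+\pi$ as the circle returns to $x=\delta$, where the outgoing segment just below the axis begins. So the integrand is continuous as a function of the path parameter, even though it is not single-valued on the plane. Quantitatively, on the upper edge $\log(-x)=\log t-i\pi$ and on the lower edge $\log(-x)=\log t+i\pi$, so the two straight pieces combine to
-$$\left(e^{i\pi(s-1)}-e^{-i\pi(s-1)}\right)\int_\delta^{\infty}\frac{t^{s-1}}{e^t-1}\,dt,$$
-while the integrand on the small circle is $O(\delta^{\operatorname{Re} s-2})$ over a path of length $2\pi\delta$, so that piece vanishes as $\delta\to 0$. Using $\int_0^{\infty}\frac{t^{s-1}}{e^t-1}\,dt=\Gamma(s)\zeta(s)$ and $\sin(\pi(s-1))=-\sin(\pi s)$, the limit is
-$$\int_{+\infty}^{+\infty}\frac{(-x)^{s-1}}{e^x-1}\,dx=-2i\sin(\pi s)\,\Gamma(s)\,\zeta(s),$$
-which, by the reflection formula $\Gamma(s)\sin(\pi s)=\pi/\Gamma(1-s)$, is exactly the identity $\zeta(s)=-\frac{\Gamma(1-s)}{2\pi i}\int\frac{(-x)^{s-1}}{e^x-1}\,dx$ that this contour integral is usually introduced to prove.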