diff --git "a/stack-exchange/math_stack_exchange/shard_104.txt" "b/stack-exchange/math_stack_exchange/shard_104.txt" deleted file mode 100644--- "a/stack-exchange/math_stack_exchange/shard_104.txt" +++ /dev/null @@ -1,9315 +0,0 @@ -TITLE: Solve the equation: $\cos^2(x)+\cos^2(2x)+\cos^2(3x)=1$ -QUESTION [5 upvotes]: How to solve the equation - -$$\cos^2(x)+\cos^2(2x)+\cos^2(3x)=1$$ - -Can anyone give me some hints? - -REPLY [2 votes]: Using Prove that $\cos (A + B)\cos (A - B) = {\cos ^2}A - {\sin ^2}B$, -$$\cos^2(x)+\cos^2(2x)+\cos^2(3x)-1$$ -$$=\cos^2(x)-\sin^23x+\cos^2(2x)$$ -$$=\cos(3x-x)\cos(3x+x)+\cos^2(2x)$$ -$$=\cos2x(\cos4x+\cos2x)$$ -Now use Prosthaphaeresis Formulas on $$\cos4x+\cos2x$$ -Should I use a single word more? - -Alternatively using $\cos2A=2\cos^2A-1,$ -$$\cos^2(x)+\cos^2(2x)+\cos^2(3x)=1$$ -$$\iff\cos2x+\cos4x+\cos6x+1=0$$ -$$\cos6x=2\cos^23x-1$$ and use Prosthaphaeresis Formula on $$\cos2x+\cos4x$$<|endoftext|> -TITLE: Proving that:$\int_X f_n g \, d\mu \to \int_X fg \, d\mu$ for all $g$ in $\mathscr{L}^q (X)$ -QUESTION [8 upvotes]: I found the following exercise and I'd like to know if my answer is correct. - -Let $(X, \mathscr A, \mu)$ be a finite measure space. Let $\{f_n\}$ be a sequence of measurable functions such that $\|f_n\|_p\le M$ for a real constant $M$ $(10$ there is a $\delta>0$ such that if $\mu(A)<\delta$ then $\nu (A) <\epsilon^q$ for any $A$ in $\mathscr A$. -Now by Egoroff's theorem there exists a $B$ in $\mathscr A$ such that $(f_ng) (x)\xrightarrow{\text{uniformly}} (fg) (x)$ for $x$ in $B$ and $\mu(X\setminus B) <\delta$. Let $N$ be such that for all $n\ge N$, $|(f_ng)(x)-(fg)(x)|<\epsilon$ for all $x\in B$. 
Thus -\begin{align*}\left |\int_X (f_n-f) g \, d\mu \right|&\le \int_B |f_ng-fg| \, d\mu +\int_{X-B}|f_n g|\, d\mu +\int_{X-B}|fg|\,d\mu \\[6pt] -&\le \int_B |f_ng-fg| d\mu + 2M \left( \int_{X-B}|g|^qd\mu \right)^{1/q}\\[6pt] -&\le\epsilon \mu(X)+2M (\nu(X\setminus B))^{1/q}\\[6pt] -&\le \epsilon (\mu(X)+2M)\end{align*} -Since $\mu(x)<\infty$ and $M<\infty$ the result follows. - -REPLY [4 votes]: Your answer is correct but since $\mu(X)<\infty$ But that is a typical consequence of Vitali's convergence Theorem. Here is the generalization. - -Theorem -Assume $\mu(X)<\infty$ and $1 -TITLE: Dilations of Integrable Function Converge to Zero Almost Everywhere -QUESTION [5 upvotes]: Suppose $f:[0,\infty)\rightarrow [0,\infty)$ is integrable. Set $f_{n}(x):=f(nx)$. I want to show that $f_{n}(x)\rightarrow 0$ almost everywhere or equivalently, the set -$$\{x : \limsup_{n}f_{n}(x)\geq\delta\}$$ -has measure zero, for any $\delta>0$. -By dilation invariance, it's clear that $f_{n}\rightarrow 0$ in $L^{1}$ and therefore also in measure. Furthermore, we can pass to a subsequence to obtain a.e. convergence. If $f$ has compact support, then it's obvious that $f_{n}\rightarrow 0$ almost everywhere. -My thought was to try approximating $f$ in $L^{1}$ by $g\in C_{c}(\mathbb{R})$ and use something like -$$|\{\limsup f_{n}\geq\delta\}|\leq|\{\limsup|f_{n}-g_{n}|\geq\delta/2\}|+|\{\limsup|g_{n}|\geq\delta/2\}|$$ -and go from there. But I'm not sure how to control the first term on the RHS. Any suggestions? - -REPLY [2 votes]: Fix $a,b$ with $0 -TITLE: Least positive integer $n$ such that the digit string of $2^n$ ends on the digit string of $n$ -QUESTION [9 upvotes]: What is the least positive integer $n$ such that the digit string of $2^n$ ends on the digit string of $n$: - $$ (2^n)_{10} = d_m \, d_{m-1} \cdots - d_{q+1} \, (n)_{10} \\ (n)_{10} = d'_{q} \cdots d_1' \\ d_i, d'_j \in - \{0, \ldots, 9 \} $$ - -As in $2^3$ would somehow end in 3, or $2^5$ would end in 5. 
-Frankly I don't even know where to start. Thanks in advance. - -REPLY [2 votes]: This OEIS sequence is just what you are looking for: https://oeis.org/A064541. Least number is 36.<|endoftext|> -TITLE: Is there a forcing extension $M[G]$ of $M$ that adds a new $\omega$-sized subset to $\omega_2$ without adding any new subsets of $\omega$? -QUESTION [10 upvotes]: I should add that the forcing extension must preserve the cardinals $\aleph_1$ and $\aleph_2$. -Note that such a forcing extension cannot add any new $\omega$-sized subsets to $\omega_1$, and also cannot be $\omega$-distributive. -If necessary, you may assume $M \models GCH$. -I suppose that the question arises from my wondering about if a forcing extension not add any new subsets of $\omega$, must it also not add any new $\omega$-sized subsets to anything else? -Thanks. - -REPLY [8 votes]: If you insist that both $\omega_1$ and $\omega_2$ are preserved then this is impossible. The reason is that, since $\omega_2$ is preserved and, in particular, remains regular in $M[G]$, any countable subset $X$ of it in the extension must be bounded. But given a bound $\alpha<\omega_2$, we can fix in $M$ a bijection $f\colon \omega_1\to\alpha$. Then $X\subseteq\omega_2$ is new iff $f^{-1}[X]\subseteq\omega_1$ is new, but as you state (or following a version of Noah's argument), there can be no new subsets of $\omega_1$. -If you allow $\omega_2$ to be collapsed then Asaf's Namba forcing suggestion works (at least under CH). Allowing $\omega_1$ to be collapsed doesn't really make sense, since collapsing it will just add a real. -Interesting things start to happen when you try to replace $\omega_2$ with larger cardinals $\kappa$. If you want cardinals to be preserved, the argument above tells you that $\kappa$ cannot remain regular (or even of uncountable cofinality) in the extension (in particular, this cannot work for any successor $\kappa$). 
Jensen covering voodoo then says that you should really have something like a measurable running around. But if we have a measurable $\kappa$ then Prikry forcing at $\kappa$ is exactly the type of thing you are looking for: it preserves cardinals, does not add bounded subsets of $\kappa$, but does add an (unbounded) countable subset to $\kappa$.<|endoftext|> -TITLE: What does it mean for the tangent to a circle from an interior point to be "imaginary"? -QUESTION [8 upvotes]: My geometry text seems to say that the tangent to a circle from an interior point in the real plane is "imaginary". -Further... It seems that when a double cone is intersected by a plane with an angle greater than the angle of the generating line, it is said that "imaginary" lines pass through the vertex. -Basically, I just can't understand what is meant by "imaginary" tangents and lines. Pls shed light on the matter. :) - -REPLY [4 votes]: When you try to find the intersection of two disjoint circles (or of a circle with a line that it doesn't intersect), when you solve the equations you will get a pair of conjugate complex solutions. -For example the intersection of the cirvle $x^2+y^2=1$ with the line $x=2$ gives you the complex points $(x=2,y=\pm i\sqrt 3)$. Of course you can't really plot them on your $\Bbb R^2$ sheet of paper. -Giving a geometric interpretation to those complex points (points of the complexified real plane) is a bit challenging. -For every real object (points, lines, and circles mainly) you can generalize them to complex objects by allowing complex coefficients. -Then between 2 points there always is a line, 2 lines are either parallel or secant, there always is a circle going through three points, two circles almost always intersect in two points, etc. -In your case, the middle between the two tangency points is always a real point, and is also known as the image of the original point by the inversion around the circle. 
The inversion is involutive, so it gives an easy way to build it even if you start from inside the circle. - -You can identify the plane $\Bbb R^2$ with the complex plane $\Bbb C$ with the map $(x,y) \mapsto (x+iy)$. -Now if you complexify that, you can extend the scalars by adding $j$ such that $j^2=-1$, and then you can identify $\Bbb C^2$ with $\Bbb C[j]/(j^2+1)$ with $(x=x_1+jx_2, y=y_1+jy_2) \mapsto (x+iy = x_1+jx_2+iy_1+ijy_2)$. -Now $\Bbb C[j]/(j^2+1)$ has a "natural" map into $\Bbb C^2$ defined by replacing $j$ with $i$ and $-i$ respectively. You end up with two elements of $\Bbb C$, so an ordered pair of points. -Concretely, from your point with complex coordinates $(x=x_1+jx_2, y=y_1+jy_2)$ you end up with the ordered pair of points $((x_1-y_2, y_1+x_2),(x_1+y_2, y_1-x_2))$ and this will have various properties. -A point is real when $x_2=y_2=0$, which is when the ordered pair is of the form $(P,P)$, so you can see a very natural embedding of the real plane into our complexified plane (the set of ordered pairs of points). -The complex conjugate of a point is obtained by changing the signs of $x_2$ and $y_2$, which transforms a pair $(P,Q)$ into the pair $(Q,P)$, so conjugation is easy to visualize, and a pair of conjugate complex points can be thought of as an unordered pair of points. -And you can see that so far, the complex structure of our complexified plane can be done in a "coordinate-free" way. -From two points you can subtract one from the other to get a vector. This is just componentwise subtraction: a complexified vector is a pair of vectors. -You can also rescale a vector. Scaling by a real number is the usual componentwise scaling (because after all, the important map is $\Bbb R$-linear). -The important thing is what happens when you multiply a vector by the complex unit $j$: the first component of the vector gets rotated counterclockwise by $\pi/2$ and the second component is rotated just as much in the other direction. 
Finally, you can get the "real part" of a complexified point by computing $(P+\bar P)/2$, and this corresponds to the real point which is the midpoint of the elements in the pair. More generally, you can do weighted means as long as the sum of the coefficients is $1$, and this will have a geometrical meaning, in that it will give you a complexified point independently of what coordinates you choose. In contrast, the imaginary part of a complexified point (this would be $(P - \bar P)/2i$) doesn't have a geometrical meaning because the coefficients sum to $0$. Instead you should interpret it as the complexified vector between a complexified point and its real part. - -Now, the usual real lines and real circles not only have real points on them, but they also have pairs of conjugate complexified points (that are not real). For real lines, they are the pairs that are symmetric around the real locus of the line. For real circles, they are the pairs where one point is the image of the other by the inversion around the circle. -(more generally, for almost all complex lines and complex circles, you go from the first component to the second by applying an antiholomorphic bijective rational map, but I won't describe what the nonreal lines and nonreal circles look like) -Usually you can construct the tangency points $Q,Q'$ from a point $P$ to a circle $C$ by drawing the circle $C'$ with diameter $[OP]$ and intersecting it with $C$. -When $P$ is inside the circle $C$, this circle $C'$ doesn't intersect $C$ at real points but at a pair of conjugate complexified points $(Q,Q')$ and $(Q',Q)$. For symmetry reasons, they are "located" somewhere on the line $(OP)$ and they are placed such that $Q'$ is the image of $Q$ by the inversions through both circles. -Constructing them would need too much explanation; however, we can determine their real part $R$ (which is the midpoint of $[QQ']$): computation shows that $R$ is the image of $P$ by the inversion through $C$. 
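A quick coordinate sanity check of this last claim (my own sketch, not part of the original answer; the unit circle and the interior point $P=(1/2,0)$ are assumed). Subtracting the equations of $C$ and $C'$ gives the linear relation $x=1/p$ for $P=(p,0)$, and then $y^2=1-x^2<0$, so $Q,Q'$ are conjugate complex points:

```python
import cmath

# Unit circle C : x^2 + y^2 = 1, interior point P = (p, 0) with p = 1/2.
# C' is the circle with diameter [OP]: (x - p/2)^2 + y^2 = (p/2)^2.
# Subtracting the two circle equations gives p*x = 1, i.e. x = 1/p.
p = 0.5
x = 1.0 / p                      # common x-coordinate of Q and Q'
y = cmath.sqrt(1 - x * x)        # purely imaginary square root
Q, Qconj = (x, y), (x, -y)

# The midpoint R of [Q, Q'] should be the inversion of P through C: P/|P|^2.
R = ((Q[0] + Qconj[0]) / 2, (Q[1] + Qconj[1]) / 2)
inv_P = (p / p**2, 0.0)
print(Q, Qconj, R, inv_P)
```

With these numbers the intersection points come out as $(2,\pm i\sqrt3)$, matching the worked example at the start of the answer, and their midpoint $(2,0)$ is indeed the inversion of $(1/2,0)$ through the unit circle.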
To construct $R$, draw $(OP)$, then draw the line perpendicular to $(OP)$ that goes through $P$, find its intersection points with $C$, and draw the two tangents there; they will intersect at $R$. -Since $Q$ and $Q'$ are "on $(OP)$" and symmetric with regard to $R$, they are complex points of the line normal to $(OP)$ going through $R$, so this also reduces the problem to finding the complex intersections of a line with a circle.<|endoftext|> -TITLE: How do I evaluate this sum: $\sum _{m=1}^{\infty } \sum _{k=1}^{\infty } \frac{m(-1)^m(-1)^k\log(m+k)}{(m+k)^3}$? -QUESTION [5 upvotes]: How do I evaluate the following sum: -$$\sum _{m=1}^{\infty } \sum _{k=1}^{\infty } \frac{m(-1)^m(-1)^k\log(m+k)}{(m+k)^3}$$ -Note: I tried many ideas, such as Hochino's idea and the Taylor expansion of -$\log(1+x)$ at $x=1$ where $x=\frac{k}{m}$, but those methods did not work. -I also tried to write $\log(m+k)$ as a power series, but it became a -triple series, which is very complicated to evaluate! -Thank you for any help. - -REPLY [3 votes]: With all of the great comments, one of which suggested that one of us post an answer, I've decided to proceed. So, here we go ... -Let $S$ be the series of interest given by -$$S=\sum_{m=1}^{\infty}\sum_{k=1}^{\infty}\frac{m(-1)^{m+k}\log(m+k)}{(m+k)^3}$$ -Now exploiting symmetry, we can write $S$ as -$$S=\frac12 \sum_{m=1}^{\infty}\sum_{k=1}^{\infty}\frac{(-1)^{m+k}\log(m+k)}{(m+k)^2}$$ -Next, we make the substitution $k=p-m$ and change the order of summation to reveal -$$\begin{align} -S&=\frac12\sum_{p=2}^{\infty}\sum_{m=1}^{p-1}\frac{(-1)^p \log p}{p^2}\\\\ -&=\frac12 \sum_{p=1}^{\infty}\frac{(-1)^p\log p}{p}-\frac12 \sum_{p=1}^{\infty}\frac{(-1)^p\log p}{p^2}\\\\ -&=\frac12 \eta'(1)-\frac12 \eta'(2) -\end{align}$$ -where $\eta'(z)$ is the derivative of the Dirichlet eta function. 
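As a numerical sanity check (my addition, not part of the original answer): the reduction above gives $S=\frac12\sum_{p\ge2}(-1)^p(p-1)\log p/p^2$, which can be compared against the closed form $\frac12\log 2\left(\gamma-\frac12\log 2-\frac{\pi^2}{12}\right)-\frac14\zeta'(2)$ obtained below. The value of $\gamma$ is hardcoded, and $\zeta'(2)=-\sum\log n/n^2$ is estimated with an integral tail correction:

```python
import math

gamma = 0.5772156649015329            # Euler-Mascheroni constant

# zeta'(2) = -sum_{n>=2} log(n)/n^2, tail approximated by
# int_M^inf log(t)/t^2 dt = (log M + 1)/M.
M = 10**5
s = sum(math.log(n) / n**2 for n in range(2, M + 1))
zeta_p2 = -(s + (math.log(M) + 1.0) / M)

# Reduced single sum S = (1/2) sum_{p>=2} (-1)^p (p-1) log(p)/p^2,
# accelerated by averaging two consecutive partial sums (alternating series).
N = 10**5
partial, prev = 0.0, 0.0
for q in range(2, N + 1):
    prev = partial
    partial += 0.5 * (-1) ** q * (q - 1) * math.log(q) / q**2
S_num = 0.5 * (partial + prev)

S_closed = 0.5 * math.log(2) * (gamma - 0.5 * math.log(2) - math.pi**2 / 12) \
           - 0.25 * zeta_p2
print(S_num, S_closed)                # the two values agree to well under 1e-6
```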
The Dirichlet eta function is related to the Riemann zeta function by the expression -$$\eta(z)=(1-2^{1-z})\zeta(z)$$ -The derivative $\eta'(z)$ can be written -$$\eta'(z)= -\begin{cases} -(1-2^{1-z})\zeta'(z)+2^{1-z}\log(2)\zeta(z)&, z>1\\\\ -\left(\gamma-\frac12 \log(2)\right)\log(2)&,z=1 -\end{cases}$$ -Putting it all together gives -$$S=\frac12 \log(2)\left(\gamma-\frac12 \log(2)-\frac{\pi^2}{12}\right)-\frac14 \zeta'(2)$$<|endoftext|> -TITLE: Does this tricky series converge? -QUESTION [13 upvotes]: $$\sum_2 \frac{\cos(\log{n})}{n\log{n}}$$ -The naive attempt is to use Dirichlet's test to falsely claim that $\cos(\log{n})$ has bounded partial sums, but I don't think it works. -I am also trying a difference of sum and integral type of strategy but am not sure where to go from it. -Finally, as A.S pointed out below in the comments, the integral test does not apply, since $\cos(\log n)$ does not behave monotonically... -Any ideas are welcome. -Thanks, - -REPLY [9 votes]: First examine the corresponding integral: -$$\int_2^{\infty} \frac{\cos(\log(x))}{x \log(x)}dx = \int_{\log(2)}^{\infty} \frac{\cos(u)}{u} du = \left[ \frac{\sin(u)}{u} \right]_{\log(2)}^{\infty} + \int_{\log(2)}^{\infty} \frac{\sin(u)}{u^2}du $$ this is convergent. -Next we look at the derivative of the function -$$\begin{align}f(x) &= \frac{\cos(\log(x))}{x \log(x)} \\ -f'(x) &= \frac{-1/x \cdot\sin(\log(x))\cdot x \log(x) - \cos(\log(x))(\log(x)+1)}{(x \log(x))^2}\\ - |f'(x)| &< \frac{2}{x^2}\end{align}$$ so we can compare the terms of the sum and the integral over intervals of length 1: -$$\begin{align} \left| \frac{\cos(\log(k))}{k \log(k)} - \int_k^{k+1} \frac{\cos(\log(x))}{x \log(x)} dx \right| &\leq \int_k^{k+1} | f(k) - f(x) | dx \\ &\leq \max_{k -TITLE: A closed form of $\int_0^1\log (- \log x)\log \left(\frac{1+x}{1-x}\right)\,dx$ -QUESTION [9 upvotes]: Is it possible to obtain a closed form of the following integral? 
-$$\int_0^1\log (- \log x)\log \left(\frac{1+x}{1-x}\right)\,dx$$ I've made the change of variable $$t=\frac{1+x}{1-x} $$ but I feel like I'm turning in circles... - -REPLY [6 votes]: We have the following closed form. - -Proposition. $$ -\int_0^1\log (- \log x)\log \left(\frac{1+x}{1-x}\right)\,dx=\gamma_1-2\ln^2 2-2\gamma \ln 2 -\gamma_1\Big({1,\small\frac12}\Big)\tag{$\star$} -$$ - -where $\gamma_1$ is the Stieltjes constant, -$$\gamma_1 = \lim_{N\to+\infty}\left(\sum_{n=1}^N \frac{\log n}n-\int_1^N\frac{\log t}t\:dt\right)$$ - and where $\gamma_1(a,b)$ is the poly-Stieltjes constant, -$$\gamma_1(a,b) = \lim_{N\to+\infty}\left(\sum_{n=1}^N \frac{\log (n+a)}{n+b}-\int_1^N\frac{\log t}t\:dt\right)\!.$$ -Proof. -One may recall the classic integral representation of the Euler gamma function -$$ -\frac{\Gamma(s)}{(a+1)^s}=\int_0^\infty t^{s-1} e^{-(a+1)t}\:dt, \qquad s>0,\, a>-1. \tag1 -$$ By differentiating $(1)$ with respect to $s$, putting $s=1$ and making the change of variable $x=e^{-t}$, we get -$$ -\int_0^1x^a\log\left(-\log x\right)\:dx=-\frac{\gamma+\log(a+1)}{a+1},\qquad a>-1, \tag2 -$$ -where $\displaystyle \gamma=\lim_{N\to+\infty}\left(\sum_{n=1}^N \frac1n-\int_1^N\frac{dt}t\right)$ is the Euler-Mascheroni constant. 
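Formula $(2)$ can be sanity-checked numerically (my own sketch, not part of the proof). With $t=-\log x$, the left side becomes $\int_0^\infty e^{-(a+1)t}\log t\,dt$; substituting $t=e^u$ turns this into $\int_{-\infty}^{\infty} u\,e^{\,u-(a+1)e^u}\,du$, a smooth rapidly decaying integrand that the trapezoidal rule handles extremely well. The value of $\gamma$ is hardcoded:

```python
import math

GAMMA = 0.5772156649015329   # Euler-Mascheroni constant

def lhs(a, lo=-50.0, hi=6.0, n=20000):
    """Trapezoidal estimate of int_0^1 x^a log(-log x) dx via t = e^u."""
    c = a + 1.0
    h = (hi - lo) / n
    total = 0.0
    for i in range(n + 1):
        u = lo + i * h
        w = 0.5 if i in (0, n) else 1.0
        total += w * u * math.exp(u - c * math.exp(u))
    return total * h

for a in (0, 1, 2.5):
    rhs = -(GAMMA + math.log(a + 1)) / (a + 1)
    print(a, lhs(a), rhs)    # each pair agrees to high precision
```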
-From the standard Taylor series expansion, -$$ --\log (1-x)= \sum_{n=1}^{\infty} \frac{x^n}n, \qquad |x|<1,\tag3 -$$ one gets -$$ -\log (1+x)-\log (1-x)=2 \sum_{n=0}^{\infty} \frac{x^{2n+1}}{2n+1}, \qquad |x|<1.\tag4 -$$ One may write the given integral as -$$ -\int_0^1\log (- \log x)\log \left(\frac{1+x}{1-x}\right)\,dx -=\int_0^1\log (- \log x) \left(\log (1+x)-\log (1-x)\right)dx -$$ -then, inserting $(4)$ into the latter integrand and using $(2)$, we obtain -$$ -\begin{align} -\int_0^1\log (- \log x)\log \left(\frac{1+x}{1-x}\right)\,dx&=2\int_0^1\log (- \log x) \sum_{n=0}^{\infty} \frac{x^{2n+1}}{2n+1}\:dx\\ -&=2\sum_{n=0}^{\infty} \frac1{2n+1}\int_0^1 x^{2n+1}\log (- \log x)\:dx\\ -&=-2\sum_{n=0}^{\infty} \frac{\gamma+\log(2n+2)}{(2n+1)(2n+2)}\\ -&=-\sum_{n=0}^{\infty} \frac{2\left(\gamma+\ln 2 \right)}{(2n+1)(2n+2)}-\sum_{n=1}^{\infty} \frac{2\log (n+1)}{(2n+1)(2n+2)}.\tag5 -\end{align} -$$ -On the one hand, using Abel's theorem and using $(3)$, one has -$$ -\begin{align} -\sum_{n=0}^{\infty} \frac2{(2n+1)(2n+2)}&=\lim_{x \to 1^-}\sum_{n=0}^{\infty} \frac{2x^{2n+2}}{(2n+1)(2n+2)}\\ -&=\lim_{x \to 1^-}\left( (1+x)\log(1+x)+(1-x)\log(1-x)\right)\\ -&=2\ln2.\tag6 -\end{align} -$$ -On the other hand, using Theorem 2 here one has -$$ -\begin{align} -\sum_{n=1}^{\infty} \frac{2\log (n+1)}{(2n+1)(2n+2)}=-\gamma_1+\gamma_1\Big({1,\small\frac12}\Big),\tag7 -\end{align} -$$ since $\gamma_1(1,1)=\gamma_1$. -Finally, bringing all the steps together gives $(\star)$.<|endoftext|> -TITLE: In how many ways can $7$ people be chosen out of $12$ people so that $2$ given people can never be selected together? -QUESTION [6 upvotes]: Is it right to take the combination of $7$ out of $12$ and subtract the combination of $5$ out of $10$ so i take out the ways that both of them are chosen? -So it will be $792-252=540$ -I just find the number way too small. - -REPLY [2 votes]: You are not quite right, but you can use your idea. 
-You have $\binom{12}{7}$, which are all combinations of $7$ out of $12$ people, disregarding that person $A$ and $B$ must not be together. -You want to leave out the choices, where $A$ and $B$ are together, so you assume you choose both $A$ and $B$, leaving $5$ choices among $10$, which is why you subtract $\binom{10}{5}$. -But you forgot that you could also not have chosen both $A$ and $B$, so that both of these people end up in the group that wasn't chosen. But if you fix $A$ and $B$ in the group that was not chosen, then you are left with choosing $7$ people out of $10$, i.e. $\binom{10}{7}$. -In total, you get $\binom{12}{7}-\binom{10}{5}-\binom{10}{7} = 420.$ -Edit: As per the comments, the answer to the intended question is $\binom{12}{7}-\binom{10}{5}$, as in the original post.<|endoftext|> -TITLE: "Localizing" commutative pointed monoids -QUESTION [5 upvotes]: A pointed monoid is a commutative monoid $A$ with a distinguished element $0\in A$ such that $0\cdot A=0$. Morphisms should preserve $0$. -If $A$ is a commutative ring or pointed monoid, and $f\in A$, there is a localization $A\to A_f$ that is initial with respect to the following property: every map $\phi:A_f\to B$ into a nontrivial ring/pointed monoid $B$ has $\phi(f)\neq 0$. -For both rings and monoids, we get this localization by formally adjoining $f^{-1}$. -What if we have two elements, $f,g\in A$, and we want to study maps $\varphi: A\to B$ into a nontrivial objects $B$ such that $\varphi(f)\neq\varphi(g)$? Is there a localization $A_{(f,g)}$ with a similar universal property? -For rings, the answer is clear: we can define $A_{(f,g)} = A_{f-g}$. But is there a more complicated construction that works for monoids? -For example, if $fh=gh$ in $A$, then $h$ should most likely be nilpotent in $A_{f,g}$. - -REPLY [5 votes]: Let me first explicitly state the universal property we're looking for. Let us say a pointed commutative monoid is a field if $1\neq 0$ and every nonzero element is a unit. 
Let $\mathcal{C}$ denote the category of pointed commutative monoids $A$ equipped with two chosen elements $f$ and $g$, and let $\mathcal{D}\subset\mathcal{C}$ be the full subcategory of such objects such that every homomorphism from $A$ to a field separates $f$ and $g$ (call such an object of $\mathcal{C}$ separative). The question is then whether $\mathcal{D}$ is a reflective subcategory of $\mathcal{C}$. That is, given an object $A$ of $\mathcal{C}$, is there a map $A\to B$ from $A$ to a separative object which is initial among all maps from $A$ to a separative object? -Before answering the question, let's discuss some general facts about pointed commutative monoids. Given a pointed commutative monoid $A$ and an element $f\in A$, the localization $A_f$ can be constructed explicitly as the set of fractions $a/f^n$ modulo the equivalence relation $a/f^n=b/f^m$ if $f^{N+m}a=f^{N+n}b$ for some $N$. In particular, $a\in A$ maps to $0$ in $A_f$ iff it is annihilated by some power of $f$. In addition, note that there is an obvious notion of "maximal ideal" in a pointed commutative monoid, and modding out a maximal ideal gives a field. In fact, in contrast with the case of commutative rings, every pointed commutative monoid in which $1\neq 0$ has a unique maximal ideal, consisting of all the non-units. In particular, any pointed commutative monoid with $1\neq 0$ has a map to a field. Furthermore, there is a terminal field $T=\{0,1\}$: every field has a unique homomorphism to $T$ sending every nonzero element to $1$. -Now suppose we have an object $A$ of $\mathcal{C}$ such that $fg$ is not nilpotent. We can then form the localization $A_{fg}$, and $1\neq 0$ in this localization. Modding out the maximal ideal and taking the unique map to $T$, we get a map $A\to T$ which sends both $f$ and $g$ to $1$. Thus if $A$ is separative, $fg$ must be nilpotent. 
-On the other hand, if the ideal generated by $f$ and $g$ is not all of $A$, we can mod out the maximal ideal and get a map from $A$ to a field sending both $f$ and $g$ to $0$. Thus if $A$ is separative, the ideal $(f,g)$ must be all of $A$. But that ideal is just the set of all elements of $A$ that are multiples of either $f$ or $g$, so this means $f$ or $g$ must be a unit. The condition that $fg$ is nilpotent now means that the other of $f$ and $g$ must be nilpotent. -Thus we have shown that if $A$ is separative, one of $f$ and $g$ must be a unit and the other must be nilpotent. Conversely, if one of $f$ and $g$ is a unit and the other is nilpotent, any map from $A$ to a field sends one of them to $0$ and the other to a unit, so $A$ is separative. So $\mathcal{D}$ consists exactly of those objects of $\mathcal{C}$ such that one of $f$ and $g$ is a unit and the other is nilpotent. Say that an object of $\mathcal{D}$ is type I if $f$ is a unit and type II if $g$ is a unit. Note that the only object that is both types at once is the zero monoid $A=\{0\}$, and that there can exist no maps in $\mathcal{D}$ between nonzero objects of different type. -We are now ready to answer the question. Taking $A$ to be the initial object of $\mathcal{C}$ (concretely, $A=\mathbb{N}^2\cup\{0\}$, with $f=(1,0)$ and $g=(0,1)$), if such a reflector existed, it would have to send $A$ to the initial object of $\mathcal{D}$. But an initial object of $\mathcal{D}$ would have to be able to map to nonzero objects of both types, and no such object exists. -More generally, if $A$ is any object of $\mathcal{C}$ such that $g$ does not divide any power of $f$ and $f$ does not divide any power of $g$, then $g$ is not a unit in $A_f$ and $f$ is not a unit in $A_g$, so $A$ can map to nonzero separative objects of both types (namely, $A_f/(g)$ is type I and $A_g/(f)$ is type II). So no such object of $\mathcal{C}$ can have a separative reflection. 
On the other hand, if $f$ divides a power of $g$, then $A$ can only map to type I separative objects, so $A$ has a separative reflection iff there is a minimal quotient of $A_f$ in which $g$ becomes nilpotent. Such a minimal quotient exists iff there is an $n\in\mathbb{N}$ such that $g^{n+1}$ divides $g^n$ in $A_f$, in which case the minimal quotient is $A_f/(g^n)$ for any such $n$. -To sum up, we can say that an object $A$ of $\mathcal{C}$ has a reflection in $\mathcal{D}$ iff either of the following two conditions holds: - -There exists $n\in\mathbb{N}$ such that $f$ divides $g^n$ and $(fg)^n g$ divides $(fg)^n$. -There exists $n\in\mathbb{N}$ such that $g$ divides $f^n$ and $(fg)^n f$ divides $(fg)^n$. - -In the first case, the reflection is $A_f/(g^n)$, and in the second case, the reflection is $A_g/(f^n)$.<|endoftext|> -TITLE: On Galois groups and $\int_{-\infty}^{\infty} \frac{x^2}{x^{10} - x^9 + 5x^8 - 2x^7 + 16x^6 - 7x^5 + 20x^4 + x^3 + 12x^2 - 3x + 1}\,dx$ -QUESTION [18 upvotes]: Given the solvable decic (among many in this database), -$$P(x) := x^{10} - x^9 + 5x^8 - 2x^7 + 16x^6 - 7x^5 + 20x^4 + x^3 + 12x^2 - 3x + 1$$ -we have, -$$\int_{-\infty}^{\infty} \frac{x^2}{P(x)}\,dx = 2\pi\sqrt{\frac{y}{33}}$$ -where $y\approx 0.005498$ is a root of the solvable $5$-real root quintic, -$$P(y):=410651^2 - 297369569963257 y + 64437688060325415 y^2 - 3213663132678906688 y^3 + 59485209442439490149 y^4 - (11^3\cdot67^2\cdot199^6) y^5 = 0$$ -(Added later): A relation between the roots $x,y$ such that $P(x)=P(y)=0$ is, -$$12675353 + 84680609 x^3 + 55168143 x^6 - 6841070 x^9 - 1801451 x^{12} = (11\cdot67^2\cdot199^2) y$$ - -Q: In general, if, - $$\int_{-\infty}^{\infty} \frac{x^2}{ P(x)}\,dx= 2\pi \sqrt{y}$$ - and $P(x)$ has a solvable Galois group, is it true that $P(y)$ also has a solvable Galois group? - -REPLY [5 votes]: Not an answer - just listing some obvious facts that seem to be relevant to get the ball rolling. 
The polynomials from that database surely only have simple roots, because they were selected with a suitable splitting field in mind. -For the integral $\int_{-\infty}^\infty\dfrac{x^2}{P(x)}\,dx$ to converge it is necessary that $P(x)$ has no real roots. -With $P(x)$ of high enough degree the usual business with sophomore level complex path integrals gives $$I=\int_{-\infty}^\infty\dfrac{x^2}{P(x)}\,dx=2\pi i\sum_{P(z)=0,z\in H}\operatorname{Res}(\frac{z^2}{P(z)},z),$$ -where the summation ranges over the zeros of $P(x)$ in the upper half plane $H$. -Those zeros are simple, so a single application of l'Hospital shows that at a zero $z_i\in H$ the residue is -$$ -\operatorname{Res}(\frac{z^2}{P(z)},z_i)=\frac{z_i^2}{P'(z_i)}. -$$ -So if we write $I=2\pi U$, then -$$U^2=-\left(\sum_i \frac{z_i^2}{P'(z_i)}\right)^2.$$ -If $K$ is the splitting field of $P(x)$ (inside $\Bbb{C}$), then $U^2\in K$. Furthermore, because $U^2$ is real, it belongs to the real subfield $L=K\cap \Bbb{R}$. So the minimal polynomial of $U^2$ is solvable iff $Gal(K/\Bbb{Q})$ is. The number $U\in K(i)$, so it, too, has a solvable minimal polynomial.<|endoftext|> -TITLE: Prove that $R\cong \mathbb{C}^n$ -QUESTION [5 upvotes]: Let $R=\mathbb{C}[x]/(f(x))$ where $f(x)$ is a polynomial of degree $n>0$ which has $n$ distinct complex roots. Prove that $R\cong \mathbb{C}^n$. - -I tried to define a homomorphism $\phi$ from $\mathbb{C}[x]$ to $\mathbb{C}^n$ such that $\phi$ is onto and $\ker\phi=(f(x))$. Then by the first isomorphism theorem I am done. But the hard part is defining such a homomorphism explicitly. First I tried to divide a given polynomial $p(x)$ by $f(x)$ and take the remainder polynomial. The remainder is a polynomial of degree at most $n-1$, so it has $n$ coefficients. My attempt was then to map $p(x)$ to the $n$-tuple of those coefficients. But then I failed to prove that $\phi$ is a homomorphism. So how do I determine a homomorphism explicitly? Can somebody please help me? 
- -REPLY [9 votes]: If the roots are $z_1, \ldots , z_n$, then one simple approach is sending $f\in \mathbb{C}[X]$ to the tuple $(f(z_1), \ldots , f(z_n))$. -It is a homomorphism on each component, therefore a homomorphism. Using distinctness of the roots, we can calculate the kernel without too much fuss. -To show that it is surjective, count dimensions. Alternatively, we can use Lagrange interpolation (using again the distinctness of the roots).<|endoftext|> -TITLE: Open set containing rationals but complement non-denumerable -QUESTION [10 upvotes]: I am taking Real Analysis classes and I got a homework that asks me: -Give an example of an open set $\mathcal{A}$ such that $\mathcal{A}\supset\mathbb{Q}$ but $\mathbb{R}-\mathcal{A}$ is non-denumerable. -My attempt: First let $\mathcal{A} = \bigcup(r_n-1/2^n,r_n+1/2^n)$ where $r_n$ is the $n$-th rational, this is a union of open sets so $\mathbb{R}-\mathcal{A}$ is closed. I have reasons to believe that such set is also non-denumerable (as seen here: Uncountable closed set of irrational numbers but I have no experience in measure theory, is there other way to prove it's non-denumerability? Is that an answer at all? -Please excuse my bad english, thank you. - -REPLY [5 votes]: For a non measure-theoretic proof you can construct a Cantor set containing no rationals and then take its complement. Here's a sketch of how to make it: -Start with an interval $[a,b]$ with $a,b$ irrational. Let $\{q_n\}$ be an enumeration of the rationals in this interval. Now remove from $[a,b]$ an interval $(c,d)$ with irrational endpoints containing $q_1$. Now remove open intervals with irrational endpoints from the two remaining closed intervals so that $q_2$ is no longer in the set, and so on. -The result is a closed set avoiding every rational number which has cardinality equal to that of the continuum (equivalently, to the set of all infinite binary sequences). 
The proof that Cantor sets have cardinality continuum is very similar to the proof that the standard Cantor ternary set consists of all numbers with only 0 and 2 in their ternary expansion (or, since it is a perfect set, the Baire category theorem shows that it is uncountable).<|endoftext|> -TITLE: Find an example such that $\frac{(x+y)^{x+y}(y+z)^{y+z}(x+z)^{x+z}}{x^{2x}y^{2y}z^{2z}}=2016$ -QUESTION [9 upvotes]: Let $x,y,z$ be positive integers, and find one example $(x,y,z)$ such that -$$\dfrac{(x+y)^{x+y}(y+z)^{y+z}(x+z)^{x+z}}{x^{2x}y^{2y}z^{2z}}=2016$$ - -REPLY [4 votes]: Such a thing will never happen. Why? Because 2016 is divisible by 7. Let's see what can be said about the power of 7 in the prime decomposition of this expression. Those of $x,\;y,\;z,\;x+y,\;y+z,\;z+x$ which are not divisible by 7 themselves contribute nothing. Those which are, contribute (add or detract) a multiple of themselves, and hence a multiple of 7. But $2016=2^5\cdot3^2\cdot7^1$, and 1 is not a multiple of 7. -(The same reasoning could be applied to 2 or 3, of course.)<|endoftext|> -TITLE: shortest distance from point to hyperplane lagrange method -QUESTION [5 upvotes]: I need to find the shortest distance, in D-dimensional Euclidean space ($\mathbb{R}^D$), from a point $\textbf{x}_0$ to a hyperplane $H: \textbf{w}^T \textbf{x} + b = 0$, using the method of Lagrange multipliers. The answer should be an expression in terms of $\textbf{w}, b$ and $\textbf{x}_0$. -Note: I am aware that a few similar questions exist, such as this one. I am creating a new question because I need to know how the derivation steps work in order to get a solution in a specific form. I know how to solve this problem in three dimensions, but not with linear algebra. Any help would be appreciated. - -REPLY [9 votes]: Consider the Lagrange function $$L(\mathbf x,\lambda)=\|\mathbf x-\mathbf x_0\|^2+2\lambda(\mathbf w^T\mathbf x+b)$$ The Lagrange multiplier is multiplied by $\,2\,$ to simplify the computations (this is legal). 
-Since $$L(\mathbf x,\lambda)=\|\mathbf x\|^2-2(\mathbf x_0-\lambda \mathbf w)^T\mathbf x+2\lambda b+\|\mathbf x_0\|^2$$ one has $$\frac {\partial L}{\partial \mathbf x}=2\mathbf x-2(\mathbf x_0-\lambda \mathbf w)$$ using formulas (69) and (131) in the pdf quoted by @user25004. -Solving $\,\dfrac {\partial L}{\partial \mathbf x}=0\,$, one obtains $$\mathbf x=\mathbf x_0-\lambda \mathbf w$$ which, substituted in the equation of the hyperplane, gives $$\lambda=\frac {\mathbf w^T\mathbf x_0+b}{\|\mathbf w\|^2}$$ so the shortest distance is $$\|\mathbf x_0-\lambda \mathbf w-\mathbf x_0\|=|\lambda|\|\mathbf w\|=\frac {|\mathbf w^T\mathbf x_0+b|}{\|\mathbf w\|}$$<|endoftext|> -TITLE: Measure theory for self study. -QUESTION [8 upvotes]: I have good knowledge of Elementary Real analysis. Now I'd like to study measure theory by myself (self-study). So please give me direction for where to start? Which book is good for starting? I have Principles of Mathematical Analysis by W. Rudin and Measure Theory and Integration by G. de Barra. Which book is rich in examples and exercises? Please suggest to me. Thanks in advance. - -REPLY [3 votes]: I have discovered Yeh's Real Analysis: Theory Of Measure And Integration recently, and I recommend it wholeheartedly! Easy to read I think, especially for self study. It has a problem & proof supplement by the way. -Rudin (not PMA but RCA) is very good in the long term, but hard for a first encounter with measure theory. PMA is not mainly about measure theory IIRC.<|endoftext|> -TITLE: Motivation for the nLab's definition of cohomology? -QUESTION [9 upvotes]: I am trying to penetrate the nLab article on cohomology. I don't know anything about higher category theory, but it seems like the real content here is topological. My question has two parts. -First, the nLab gives the following general definition of cohomology, with motivation here. For ordinary cohomology, I understand. 
We put in the Eilenberg-MacLane spaces $K(G,n)$ for $A$, and we recover cohomology using the standard fact that cohomology is a representable functor represented by the E-ML spaces. Further, we know that delooping one of these increases $n$ by $1$. Let's stick with this example for a moment. -Philosophically, why is the definition they give the right one? To me, cohomology means a functor from some category to (most commonly) the category of abelian groups, with the additional property of turning short exact sequences into long exact sequences in cohomology. In the topological category, we usually motivate this by talking about "measuring holes," and more generally we speak of measuring the failure of something to be exact or trivial. To me, the fact that cohomology is representable seems entirely coincidental (homology isn't, for example) and more like the end of the story than the beginning. Why should we take the representability and delooping concepts as our definition? Why are they more basic and central? -Second, I'm curious about how other cohomology theories fit into this. Let's take the examples of sheaf cohomology and group cohomology. I am wondering why it's conceptually important to know these fit into nLab's general schema. -For sheaf cohomology, the nLab links to a paper of Kenneth Brown. I have no idea what's going on in this paper, but it seems theorem 2 on page 247 is the result the nLab is interested in. Somehow, once we view this through the lens of higher category theory, sheaf cohomology is representable in the way ordinary cohomology is. However, I can't make heads or tails of the details. How exactly does Verdier's hypercovering result make sheaf cohomology satisfy nLab's definition? -For group cohomology, the nLab says: - -For instance group cohomology is nothing but the cohomology in $H=\infty \operatorname{Grpd}$ on objects $X=BG$ that are deloopings of groups. 
- -Broadly speaking, why is this just the usual definition of group cohomology? Does one actually need infinity-categories to understand this correspondence? If so, where could I find a good exposition of this topic? -I should probably say what my background is, to avoid answers pitched at too high a level. I know what is taught in good introductory graduate courses on algebraic topology and homological algebra and so on, but not much more. In particular, I am completely ignorant of higher categories. -Edit: In light of the answer I got below, perhaps the best thing to ask is, what are the best references to learn this stuff from? - -REPLY [3 votes]: You can see a different approach to the border between homology and homotopy in our book Nonabelian Algebraic Topology: filtered spaces, crossed complexes, cubical homotopy groupoids (pdf available there) (EMS Tract vol 15, 2011). This starts with history and intuition. One main intuition was to use cubes to describe higher order compositions, leading to "algebraic inverses to subdivision" in a way and with uses difficult to obtain by simplicial methods. -The main work is in establishing the algebraic material to obtain higher order Seifert-van Kampen Theorems for some functors which give colimit theorems for certain higher homotopy invariants, and which lead to quite concrete and precise calculations. It can also be described as trying to make higher homotopy theory look more like that of the fundamental group, and so nonabelian (in this book nonabelian up to dimension 2). -You can also download some presentations from my preprint page. -I worked a bit in the mid 1960s on nonabelian cohomology in dimension 1 but was turned off it when I found that working with groupoids gave me stronger results; these led us eventually to strict cubical higher groupoids, defined on certain spaces with structure. 
The latter idea hardly occurs in the theory and applications of weak $\infty$-groupoids, but in the form of filtered spaces and of $n$-ads is part of traditional homotopy theory. -See this recent stackexchange answer for some concrete applications in group theory, only indicated in the above book.<|endoftext|> -TITLE: A categorical perspective on the equivalence of sheaf cohomology and Cech cohomology? -QUESTION [7 upvotes]: In the nLab article on cohomology, I found the following passage. - -One can then understand various "cohomology theories" as nothing but - tools for computing $\pi_0 \mathbf{H}(X,A)$ using the known - presentations of (∞,1)-categorical hom-spaces: for instance Čech - cohomology computes these spaces by finding cofibrant models for the - domain $X$, called Čech nerves. Dual to that, most texts on - abelian sheaf cohomology find fibrant models for the codomain $A$: - called injective resolutions. Both algorithms in the end compute the - same intrinsically defined $(\infty,1)$-categorical hom-space. - -I find this paragraph incredibly interesting, since it offers a conceptual explanation for why Čech cohomology should agree with sheaf cohomology in certain cases. (I believe the usual proof for schemes uses a spectral sequence argument, which seems opaque to me.) Unfortunately, I do not know any higher category theory. In broad strokes--at a level accessible to someone with just a first course in algebraic topology and homological algebra--what is going on here? Also, what's a good reference that explains the details? - -REPLY [4 votes]: We can be a lot more naive if all we want to do is sheaf cohomology. Take your favorite ringed space that has a nice derived category $D(X)$. Then sheaf cohomology is the cohomology groups of any complex representing $\mathbf{R}Hom(\mathcal{O}_X , F)$. Now, the point is that you can compute this object by resolving the target with an injective resolution, or the source with a Cech resolution. 
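Concretely, both recipes compute $H^n\,\mathbf{R}\mathrm{Hom}(\mathcal{O}_X,F)$; here is a schematic summary (my own sketch, not part of the original answer; $j_!$ denotes extension by zero and $U_{ij}=U_i\cap U_j$):

```latex
% Resolving the target F: choose an injective resolution
%   0 \to F \to I^0 \to I^1 \to \cdots,
% and take  H^n(X,F) = H^n\bigl(\Gamma(X, I^\bullet)\bigr).
%
% Resolving the source O_X: for an open cover \{U_i\} of X the complex
%   \cdots \to \bigoplus_{i<j} j_!\,\mathcal{O}_{U_{ij}}
%          \to \bigoplus_{i} j_!\,\mathcal{O}_{U_i}
%          \to \mathcal{O}_X \to 0
% is exact, and since  \mathrm{Hom}_{\mathcal{O}_X}(j_!\,\mathcal{O}_U, F) = F(U),
% applying Hom(-, F) termwise yields the Cech complex
%   \prod_i F(U_i) \to \prod_{i<j} F(U_{ij}) \to \cdots
```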
-The reason that a Cech resolution works, intuitively, is that we have descent. More generally, given an fpqc morphism $f:E\rightarrow X$ of schemes, we can form a "resolution" of $\mathcal{O}_X$ as follows: form a cosimplicial object in the derived category with terms $\mathbf{R}f_*( (\mathcal{O}_E)^{\otimes n})$. A version of cohomological descent says that the homotopy limit of this object in the derived category is the structure sheaf back again. So you'd like to compute this homotopy limit. -The trouble is that $D(X)$ isn't your friend for computing homotopy limits, especially when you just use the bare triangulated category structure. In SGA 4 I think they get around this by using a nice model for the derived category of simplicial sheaves (which is not the derived category of the abelian category of simplicial sheaves.) -So you can either work in some world that lets you compute homotopy limits, like model categories or $\infty$-categories, or you can hope that, in your example, there's a nicer model for the homotopy limit. And when E is a disjoint union of quasi-compact open affines in a quasi-separated scheme, then an object that represents this homotopy limit is just the Cech complex! (This uses the cohomological triviality of affine schemes).<|endoftext|> -TITLE: Model of homotopy type theory in ZFC -QUESTION [8 upvotes]: There is a model of ZFC in homotopy type theory. -Does there exist a model of homotopy type theory in ZFC? -Is there a proof of "equal logical expressivity" of these theories? -p.s. I use the word "model" in the common sense, because I don't know model theory - -REPLY [8 votes]: Yes, as long as by "ZFC" you include its extension with some inaccessible cardinals, or (perhaps) are willing to play games with natural models. 
In other words, the only difference in consistency strength is that "ZFC" by default doesn't include any universes, while "HoTT" by default includes countably many.<|endoftext|> -TITLE: When can stalks be glued to recover a sheaf? -QUESTION [17 upvotes]: Let $\mathcal{F}$ be a sheaf over some topological space. The stalks are $\mathcal{F}_x= \underset{{x\in U}}{ \underrightarrow{\lim}} \mathcal{F}(U)$. Is there a special name for a sheaf that satisfies $\mathcal{F}(U) = \underset{{x\in U}}{ \underleftarrow{\lim}} \mathcal{F}_x$? -Obviously this is a very restrictive property but here's a possible example: -Let $X=Spec A$ be an affine integral scheme with structure sheaf $\mathcal{O}_X$. We have: -$$\mathcal{O}_{X,x}= \underset{{x\in X_f}}{ \underrightarrow{\lim}} \mathcal{O}_X(X_f)=\underset{{f \notin \mathfrak{p}_x}}{\underrightarrow{\lim}} A_f = \bigcup_{f \notin \mathfrak{p}_x} A_f$$ -But we also have (I hope): -$$\mathcal{O}_X(X_f)=A_f= \bigcap_{f \notin \mathfrak{p}_x \subset A} A_{\mathfrak{p}_x}=\underset{{f \notin \mathfrak{p}_x}}{\underleftarrow{\lim}} A_{\mathfrak{p}_x}=\underset{{x \in X_f}}{\underleftarrow{\lim}} \mathcal{O}_{X,x}$$ -So we can recover the structure sheaf as a limit of the stalks. Does it still hold for non-affine schemes? More generally: - -When is a sheaf the inverse limit of its stalks? - -Can I turn this into a technique for constructing sheaves? -Let $F: |X| \to Ab$ be a functor from the category of points of $X$ to Abelian groups. Now define: -$$\mathcal{F}(U) = \underset{{x\in U}}{\underleftarrow{\lim}} F(x)$$ - -If I take stalks and then do the above will I get back to the same - sheaf? (Possibly after sheafification). - -EDIT: Some details are missing. Whenever I'm taking limit of stalks, the category I'm taking the limit over is the poset of the points of the space. Where we have $x_0 \to x$ iff $x$ is a generization of $x_0$ (i.e. if $x_0 \in \overline{\{x\}}$). 
- -REPLY [8 votes]: First of all: - -Let $X=Spec A$ be an affine integral scheme with structure sheaf $\mathcal{O}_X$. -We have: - $$ -\mathcal{O}_{X,x}= \underset{{x\in X_f}}{ \underrightarrow{\lim}} \mathcal{O}_X(X_f)=\underset{{f \notin \mathfrak{p}_x}}{\underrightarrow{\lim}} A_f = \bigcup_{f \notin \mathfrak{p}_x} A_f -$$ - But we also have (I hope): - $$ -\mathcal{O}_X(X_f)=A_f=\bigcap_{f\notin\mathfrak{p}_x\subset A} A_{\mathfrak{p}_x}=\underset{{f \notin \mathfrak{p}_x}}{\underleftarrow{\lim}} A_{\mathfrak{p}_x}=\underset{{x \in X_f}}{\underleftarrow{\lim}} \mathcal{O}_{X,x} -$$ - -The first equality is false, because for any commutative ring $A$ with unit: -\begin{equation} -\forall\mathfrak{p}\in\operatorname{Spec}A,\,\lim_{\overrightarrow{f\notin\mathfrak{p}}}A_f=A_{\mathfrak{p}} -\end{equation} -and the second equality is partially true, that is: -\begin{equation} -\mathcal{O}_X(D(f))=A_f=\bigcap_{\stackrel{\mathfrak{p}\in\operatorname{Spec}A}{f\notin\mathfrak{p}}}A_{\mathfrak{p}} -\end{equation} -where I prefer the notation $D(f)$ for the open set -\begin{equation} -\{x\in\operatorname{Spec}A\mid f(x)\neq0\}. -\end{equation} -In full generality, the following lemma holds. - -Lemma. Let $(X,\mathcal{T})$ be a topological space with topology $\mathcal{T}$ and $\mathfrak{B}$ a basis for $\mathcal{T}$; that is, a system of open subsets of $X$ such that: - -$U,V\in\mathfrak{B}\Rightarrow U\cap V\in\mathfrak{B}$, -every open subset of $X$ is a union of sets from $\mathfrak{B}$. - -We can view $\mathfrak{B}$ as a category with its elements as objects and the inclusions between the sets as morphisms. -Let $\mathcal{O}:\mathfrak{B}\to\mathbf{C}$ be a contravariant functor (or $\mathfrak{B}$-presheaf), where $\mathbf{C}$ is a category closed with respect to projective limits, such that $\mathcal{O}$ satisfies the sheaf conditions for coverings of type $\displaystyle U=\bigcup_{i\in I}U_i$, where $\forall i\in I,\,U,U_i\in\mathfrak{B}$. 
-Then $\mathcal{O}$ can be extended to a sheaf $\overline{\mathcal{O}}$ on $X$, where: - \begin{equation} -\forall U\in\mathcal{T},\,\overline{\mathcal{O}}(U)=\lim_{\overleftarrow{V\in\mathfrak{B}\,\text{with}\,V\subseteq U}}\mathcal{O}(V) -\end{equation} - and this extension is unique up to canonical isomorphism. - -For a proof one can consult Bosch S. - Algebraic Geometry and Commutative Algebra, chapter 6, section 6, lemma 4. -Let $(X,\mathcal{T})$ be a topological space and let $\mathcal{F}$ be a sheaf on $X$ with values in a category $\mathbf{C}$ closed with respect to inductive and projective limits. -Let $x,y\in X$ such that $x\in\overline{\{y\}}$, or in other words: -\begin{equation} -\forall U\in\mathcal{T},\,x\in U\Rightarrow y\in U; -\end{equation} -then the following diagrams commute: -\begin{equation} -\require{AMScd} -\forall V\subseteq U\in\mathcal{T},x,y\in V,x\in\overline{\{y\}},\, -\begin{CD} -\mathcal{F}(U) @>r_{U,x}>> \mathcal{F}_x\\ -@V{r^U_V}VV @VV{=}V\\ -\mathcal{F}(V) @>>\dot\exists r_{V,x}> \mathcal{F}_x -\end{CD}, -\begin{CD} -\mathcal{F}(U) @>r_{U,y}>> \mathcal{F}_y\\ -@V{r^U_V}VV @VV{=}V\\ -\mathcal{F}(V) @>>\dot\exists r_{V,y}> \mathcal{F}_y -\end{CD},\dot\exists r_{y,x}:\mathcal{F}_x\to\mathcal{F}_y -\end{equation} -and therefore $(\mathcal{F}_x,r_{y,x})_{x,y\in X}$ is a projective system in $\mathbf{C}$; where I get: -\begin{gather} -x\succcurlyeq y\iff x\in\overline{\{y\}}\,\text{or}\,x=y;\\ -\forall U\in\mathcal{T},\,\mathcal{G}(U)=\lim_{\overleftarrow{x\in U}}\mathcal{F}_x. 
-\end{gather} -Let $\mathfrak{B}$ be a basis of $(X,\mathcal{T})$, let $V\subseteq U\in\mathfrak{B}$, let $x,y\in V$ such that $y\in\overline{\{x\}}\iff x\prec y$; then one can consider the diagram -\begin{equation} -\mathcal{G}(U)\stackrel{\displaystyle\left(r_y^U\right)^{\prime}}{\longrightarrow}\mathcal{F}_y\stackrel{\displaystyle r_{x,y}}{\longrightarrow}\mathcal{F}_x\stackrel{\displaystyle\left(r_x^V\right)^{\prime}}{\longleftarrow}\mathcal{G}(V) -\end{equation} -by the universal property of $\mathcal{G}(V)$: -\begin{equation} -\dot\exists r^U_V:\mathcal{G}(U)\to\mathcal{G}(V)\mid\left(r_x^V\right)^{\prime}\circ r^U_V=r_{x,y}\circ\left(r_y^U\right)^{\prime}, -\end{equation} -by definition $\mathcal{G}$ is a $\mathfrak{B}$-presheaf. -The $\mathfrak{B}$-sheaf axioms for $\mathcal{G}$ are equivalent to affirming that for any $U\in\mathfrak{B}$ and for any (open) covering $\{U_i\in\mathfrak{B}\}_{i\in I}$, $\mathcal{G}(U)$ is the equalizer of the diagram -\begin{equation} -\prod_{i\in I}\mathcal{G}(U_i)\rightrightarrows\prod_{i,j\in I}\mathcal{G}(U_{ij}) -\end{equation} -where I get $U_{ij}=U_i\cap U_j$ and the two arrows are the categorical products (in $\mathbf{C}$) of the morphisms $r^{U_i}_{U_{ij}}$ and $r^{U_j}_{U_{ij}}$. -By definition of $\mathcal{G}$: $\mathcal{G}(U)$ equalizes the previous diagram; let $E$ be the equalizer of the previous diagram, then: -\begin{equation} -\forall i,j\in I,x_i\in U_i,y_{ij}\in U_{ij},\dot\exists\varphi_i:E\to\mathcal{F}_{x_i},\varphi_{ij}:E\to\mathcal{F}_{y_{ij}} -\end{equation} -such that the $\varphi_i$'s and $\varphi_{ij}$'s commute opportunely with the $r_{x_{ij},x_i}$'s; by definition $(E,\varphi_{ij})_{i,j\in I}$ is a cone (in $\mathbf{C}$) on the projective system $(\mathcal{F}_x,r_{y,x})_{x,y\in U}$, by the universal properties of $\mathcal{G}(U)$ and $E$: they are canonically isomorphic, that is $\mathcal{G}$ is a $\mathfrak{B}$-sheaf; by the previous lemma $\mathcal{G}$ is extendible to a sheaf on $X$. 
-Without confusion, I can continue to write $\mathcal{G}$ for both sheaves! -For any $U\in\mathcal{T}$, by previous reasoning, one can consider $\mathcal{F}(U)$ as a cone over the projective system $(\mathcal{F}_x,r_{y,x})_{x,y\in U}$; then by universal property of $\mathcal{G}(U)$, there exists a unique morphism $\varphi_U:\mathcal{F}(U)\to\mathcal{G}(U)$ such that it (opportunely) commutes with the $r_{y,x}$'s; in particular, the data of $\varphi_U$'s defines a morphism $\varphi:\mathcal{F}\to\mathcal{G}$ of sheaves. -In this way, one can define the canonical morphism $\varphi_x:\mathcal{F}_x\to\mathcal{G}_x$, for any $x\in X$. -For any $x\in X$, $\left(\mathcal{F}_x,\left(r^U_x\right)^{\prime}\right)_{x\in U}$ and $\left(\mathcal{G}_x,r^U_x\right)_{x\in U}$ are cocones for the inductive system $\left(\mathcal{G}(U),r^U_V\right)_{x\in U,V}$; by the couniversal property of $\mathcal{G}_x$, there exists a unique morphism $\psi_x:\mathcal{G}_x\to\mathcal{F}_x$; in particular, the data of $\psi_x$'s defines a morphism $\psi:\mathcal{G}\to\mathcal{F}$ of sheaves. -Using the couniversal property of the stalks of a sheaf, one can prove that: -\begin{equation} -\forall x\in X,\,\psi_x\circ\varphi_x=Id_{\mathcal{F}_x},\varphi_x\circ\psi_x=Id_{\mathcal{G}_x}; -\end{equation} -in other words, $\mathcal{F}$ and $\mathcal{G}$ are canonically isomorphic sheaves on $X$ with values in $\mathbf{C}$. $\Box\,(Q.E.D.)$<|endoftext|> -TITLE: Why do we do mathematical induction only for positive whole numbers? -QUESTION [21 upvotes]: After reading a question made here, I wanted to ask "Why do we do mathematical induction only for positive whole numbers?" -I know we usually start our mathematical induction by proving it works for $0,1$ because it is usually easiest, but why do we only prove it works for $k+1$? -Why not prove it works for $k+\frac12$, assuming it works for $k=0$. -Applying some limits into this, why don't we prove that it works for $\lim_{n\to0}k+n$? 
-I want to do this because I realized that mathematical induction will only prove it works for $x=0,1,2,3,\dots$, assuming we start at $x=0$, meaning it is unknown if it will work for $0<x<1$.<|endoftext|> -TITLE: Why can one discriminate between the trivial $S^1$ line bundle and the Möbius strip by knowing the fibre transformation group? -QUESTION [5 upvotes]: Both $S^1\times\mathbb R$ and the Möbius strip can be regarded as line bundles over $S^1$. -I have read that one can reconstruct a fibre bundle by knowing its base space, its fibre and the bundle group. In the standard definition of both $S^1$ line bundles (with two charts on $S^1$), both have the same base space $S^1$, the same fibre $\mathbb R$. Their difference is in the bundle group, which is trivial in the case of the trivial bundle, and $\mathbb Z_2$ in the case of the Möbius strip, because at one "coordinate patch", we have to switch the 1-dimensional coordinate $x^1\mapsto-x^1$. -What if we do this on both coordinate patches? If the $\mathbb Z_2$ transformation $x^1\mapsto-x^1$ is done at both patches, then we have a line bundle over $S^1$ that looks like this: - -I hope the image is clear. It should be a "double Möbius strip", meaning instead of just one twist in the strip there are two twists. The second twist reverses the action of the first one, so the resulting line bundle is the trivial bundle $S^1\times\mathbb R$. -This line bundle has the same parameters as the Möbius strip (same base manifold, same fibre, same group), but it's the trivial one. -What did I understand wrong? What's the precise mechanism of "re-constructing" a bundle? 
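One bookkeeping check (my own sketch, using the standard fact that real line bundles on $S^1$ built from two patches are classified by the product of the $\pm1$ transition signs over the two overlap components, i.e. by Čech $H^1(S^1;\mathbb Z_2)$):

```python
# Sanity check of the sign bookkeeping (hypothetical helper, not from the question):
# a real line bundle on S^1 glued over two patches is determined up to isomorphism
# by the product of the +/-1 transition signs over the two overlap components.
from math import prod

def bundle_class(signs):
    # +1 -> trivial bundle, -1 -> Moebius strip
    return prod(signs)

assert bundle_class([+1, +1]) == +1   # trivial bundle
assert bundle_class([+1, -1]) == -1   # Moebius strip (one twist)
assert bundle_class([-1, -1]) == +1   # "double twist": trivial again
```

This already suggests that flipping the sign on both overlaps cancels out, which is exactly what the picture of the doubly twisted strip shows.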
- -REPLY [4 votes]: To construct a vector bundle $p:E \to M$ with fibre $V$, it's not enough to specify an abstract "bundle group" $G$; instead you need something like a locally finite open covering $\{U_{\alpha}\}$ of the base space, a subgroup $G$ of the general linear group $GL(V)$, and a collection of $G$-valued clutching functions $g_{\alpha\beta}:U_{\alpha} \cap U_{\beta} \to G$ satisfying the cocycle condition -\begin{align*} -g_{\alpha\alpha} &= \text{Id} && \text{in $U_{\alpha}$,} \\ -g_{\beta\alpha} &= g_{\alpha\beta}^{-1} && \text{in $U_{\alpha} \cap U_{\beta}$,} \\ -g_{\beta\gamma}\, g_{\alpha\beta} &= g_{\alpha\gamma} && \text{in $U_{\alpha} \cap U_{\beta} \cap U_{\gamma}$.} \\ -\end{align*} -The total space of $E$ is the quotient of the disjoint union $\coprod_{\alpha} U_{\alpha} \times V$ by the equivalence relation -$$ -(x, v) \in U_{\alpha} \times V \sim \bigl(x, g_{\alpha\beta}(x)v\bigr) \in U_{\beta} \times V. -$$ -Let $U_{1}$ and $U_{2}$ denote the sets in your covering of the circle, and let $U^{-}$ and $U^{+}$ denote the connected components of $U_{1} \cap U_{2}$. -For the "untwisted" bundle, the clutching function is $g_{12} = 1$. -For the Möbius strip, $g_{12}(x) = \pm 1$ if $x \in U^{\pm}$. -For the twice-twisted bundle, $g_{12}(x) = -1$. Here, re-trivializing over $U_{2}$ (say) "converts" the clutching function to $g_{12} = 1$. In other words, the twice-twisted bundle is trivial because its clutching function has continuous, non-vanishing extensions to the trivializing neighborhoods $U_{1}$ and $U_{2}$. That the clutching function "doesn't take values in the trivial group" is, in this sense, beside the point.<|endoftext|> -TITLE: How to solve $\sqrt {1+\sqrt {4+\sqrt {16+\sqrt {64+\sqrt {256\ldots }}}}}$ -QUESTION [25 upvotes]: How to solve this equation? 
-$$x=\sqrt {1+\sqrt {4+\sqrt {16+\sqrt {64+\sqrt {256\ldots }}}}}.$$ -Answer: $x=2$ - -REPLY [17 votes]: I found this answer. Is that wrong?<|endoftext|> -TITLE: Question on circle and equilateral triangles -QUESTION [5 upvotes]: Let $ABC$ be a triangle. Let $T$ be its circumcircle and let $I$ be its incenter. Let the internal bisectors of $A,B,C$ meet $T$ at $A',B',C'$ respectively. Let $B'C'$ intersect $AA'$ at $P$ and $AC$ at $Q$. Let $BB'$ intersect $AC$ at $R$. Suppose that the quadrilateral $PIRQ$ is a kite, i.e. $PI=IR$ and $QP=QR$: how to prove that triangle $ABC$ is an equilateral triangle? - -REPLY [3 votes]: Call $\angle CAA'=a,\angle BCC'=b,\angle CBB'=c$. We'll prove that $a=b=c$. -First notice that -$$\angle PAR= \angle PB'R.$$ -This follows directly from the fact that $IPQR$ is a kite. (For example notice that $PAQ$ and $QRB'$ are congruent, since $PQ=QR$, $\angle QRB'=\angle QPA$ and $\angle PQA=\angle RQB'$.) This implies that $ABC$ is isosceles, since $a=\angle PAR=\angle PB'R=\angle BCC'=b$. To conclude we only have to prove that $c=\angle CBB'=\angle ACC'=b$. Now notice that -$$\angle CAB'= \angle CBB'$$ -because they are subtended by the same chord $B'C$. -The last step is to notice that $PQA$ and $QRB'$ are congruent triangles. This implies that $AQ=QB'$ and that $AQB'$ is isosceles. From this we deduce that $\angle AB'C'=\angle CAB'$, but it's easy to see that $ \angle AB'C'=\angle ACC'$, and so -$$b=\angle ACC'=\angle AB'C'=\angle CAB'=\angle CBB'=c $$ -which was the desired equality. -I know that the notation is not easy to read, I'll try to improve it.<|endoftext|> -TITLE: Calculate $\lim_{x \to 1^{-}} \frac{\arccos{x}}{\sqrt{1-x}}$ without using L'Hôpital's rule. -QUESTION [5 upvotes]: Question: -Calculate -$$\lim_{x \to 1^{-}} \frac{\arccos{x}}{\sqrt{1-x}}$$ -without using L'Hôpital's rule. 
-Attempted solution: -A spontaneous substitution of t = $\arccos{x}$ gives: -$$\lim_{x \to 1^{-}} \frac{\arccos{x}}{\sqrt{1-x}} = \lim_{t \to 0^{+}} \frac{t}{\sqrt{1-\cos{t}}}$$ -Using the half-angle formula for $\sin \frac{t}{2}$: -$$\lim_{t \to 0^{+}} \frac{t}{\sqrt{1-\cos{t}}} = \lim_{t \to 0^{+}} \frac{t}{\sqrt{2 \sin^{2}{(\frac{t}{2})}}} = \lim_{t \to 0^{+}} \frac{t}{\sqrt{2}\sin{(\frac{t}{2})}}$$ -Forcing a standard limit: -$$\lim_{t \to 0^{+}} \frac{t}{\sqrt{2}\sin{(\frac{t}{2})}} = \lim_{t \to 0^{+}} \frac{\frac{t}{2}}{\frac{\sqrt{2}}{2}\sin{(\frac{t}{2})}} = \frac{2}{\sqrt{2}}$$ -However, this is not correct as the limit is $\sqrt{2}$. Where have I gone wrong? - -REPLY [2 votes]: Here is another approach that relies only on the Squeeze Theorem and the elementary inequalities from geometry -$$x\cos x\le \sin x\le x \tag 1$$ -for $0\le x\le \pi/2$. -Letting $x=\arccos(y)$ in $(1)$ reveals -$$y\arccos(y)\le \sqrt{1-y^2}\le \arccos(y) \tag 2$$ -for $0\le y\le 1$. -After rearranging $(2)$, we obtain for $0<y\le 1$ -$$\sqrt{1-y^2}\le \arccos(y)\le \frac{\sqrt{1-y^2}}{y} \tag 3$$ -Now, dividing $(3)$ by $\sqrt{1-y}$, we have for $0<y<1$ -$$\sqrt{1+y}\le \frac{\arccos(y)}{\sqrt{1-y}}\le \frac{\sqrt{1+y}}{y} \tag 4$$ -Finally, applying the squeeze theorem to $(4)$ gives the expected result -$$\lim_{y\to 1^{-}}\frac{\arccos(y)}{\sqrt{1-y}}=\sqrt 2$$ -And we are done!<|endoftext|> -TITLE: Use of partial derivatives as basis vector -QUESTION [22 upvotes]: I am trying to understand use of partial derivatives as basis functions from differential geometry - -In tangent space $\mathbb{R^n}$ at point $p$, the basis vectors $e_1, e_2,...,e_n$ can be written as $\frac {\partial}{\partial x^1} \bigg|_p,\frac {\partial}{\partial x^2} \bigg|_p,...,\frac {\partial}{\partial x^n} \bigg|_p$ - -Let's say in 2 dimensional Euclidean space, a function $f : \mathbb {R^2}\rightarrow \mathbb {R^2}$ is -$x^2 + y^2=4$ , a circle with radius 2. -Tangent at point $p$ (2,0) will be $0e_1 + e_2$. 
-If I say $f =x^2 + y^2-4 =0$, -$\frac {\partial f}{\partial x} \bigg|_{p=(2,0)} = 4 \quad$ and $\quad \frac {\partial f}{\partial y} \bigg|_{p=(2,0)} = 4$ -This does not make sense of the partial derivatives as basis vectors. Any comments? - -REPLY [4 votes]: The space of the OP's circle is not a tangent space, that's why the simple example does not make any sense. One must stay inside the circle, so to speak. So let's start over again and define that circle with radius $2$ as: -$$ -\begin{cases} x = 2\cos(\phi) \\ y = 2\sin(\phi) \end{cases} \quad \Longrightarrow \quad x^2+y^2=4 -$$ -There are no partial derivatives because the manifold is one-dimensional. -And there is only one basis (tangent) vector: -$$ -\begin{bmatrix} dx/d\phi \\ dy/d\phi \end{bmatrix} = \begin{bmatrix} -2\sin(\phi) \\ 2\cos(\phi) \end{bmatrix} -$$ -Normed to a unit vector $\,\vec{t}\,$ if we divide by $2$ : -$$ -\vec{t} = \begin{bmatrix} dx/d\phi \\ dy/d\phi \end{bmatrix} / 2 = -\begin{bmatrix} -\sin(\phi) \\ \cos(\phi) \end{bmatrix} = -\sin(\phi)\,\vec{e_1} + \cos(\phi)\,\vec{e_2} -$$ -Now specify for $\,\phi=0\,$ and we're done, for this example at least.<|endoftext|> -TITLE: Proving for $n \ge 25$, $p_n > 3.75n$ where $p_n$ is the $n$th prime. -QUESTION [5 upvotes]: The elements of the reduced residue system modulo $30$ are $\{1, 7, 11, 13, 17, 19, 23, 29\}$ -If we order them as $e_1, e_2, e_3, \dots$ so that $e_1 = 1, e_2 = 7, \dots$, it follows that $3.75(i-1) < e_i < 3.75i$. -We can generalize this. -If $\gcd(x,30)=1,$ then $x = 30a + b$ where $b \in \{1, 7, 11, 13, 17, 19, 23, 29\}$. -If we order $\{1,7, 11, \dots, 29, 31, \dots, 59, \dots, 30a-1, \dots, x, \dots, 30a+31, \dots, 30a+59 \}$ as $e_1, e_2, e_3, \dots$ there exists $j$ with $e_j = x$ and $3.75(j-1) < x < 3.75j$. (This is true for $x < 30$. Assume it is true for $x < 30c$ where $c \ge 1$. It is clearly also true for each $e_j=x$ where $x < 30(c+1)$). -For $4 \le i \le 15$, $p_i = e_{i-2} > (i-3)*3.75$. 
-For $16 \le i \le 21$, $p_i = e_{i-1} > (i-2)*3.75$ -For $22 \le i \le 24$, $p_i = e_i > (i-1)*3.75$ -For $i \ge 25$, $p_i \ge e_{i+1} > 3.75i$ -Is this reasoning valid? If so, what would be a more concise way of making the same argument? - -REPLY [2 votes]: Essentially, $p_n > n\log{n}$ (Rosser's Theorem) for sufficiently large $n$. And since $\log{n}$ is unbounded (in particular, is not bounded by 3.75), such a result is to be expected. -To limit the number of particular cases that must be checked manually, we can invoke a refinement of Rosser's Theorem according to which -$$p_n > n(\log{n} + \log{\log{n}} - 1),\; \forall n \ge 6.$$ -It turns out that the function $f: x \mapsto \log{x} + \log{\log{x}} - 1$ is increasing for $x > 1$, and that $f(34) > 3.7866268 > 3.75$. Thus we obtain that -$$ p_n > 3.75n,\; \forall n \ge 34.$$ -Now check the remaining cases $p_{25}, p_{26},\ldots, p_{33}$ with a small script :)<|endoftext|> -TITLE: The Schwartz Class is dense in $L^p$ -QUESTION [8 upvotes]: Is there any hint to prove that for every -$1 \le p < \infty $ the Schwartz Class is dense in $L^p$? -Thanks so much. - -REPLY [2 votes]: Let $f$ be a continuous function with compact support. Because such functions are dense in $L^p, 1\le p <\infty,$ it's enough to show $f$ can be approximated by Schwartz functions in all of these $L^p$ spaces. -Suppose $f$ is supported in $[-a,a].$ Let $\epsilon>0.$ By Weierstrass, there is a polynomial $q$ such that $|q-f|<\epsilon$ in $[-a,a].$ -Define the functions -$$\begin {cases}\varphi_n(t) = \exp (1/[n(t^2-a^2)]),& t\in (-a,a)\\ 0, & |t| \ge a\\ \end {cases}$$ -Each $\varphi_n$ is positive on $(-a,a)$ with support $[-a,a].$ We also have $\varphi_n \in C^\infty(\mathbb R),$ bounded by $1$ everywhere, and $\varphi_n \to\chi_{(-a,a)}$ pointwise everywhere. Of course each $\varphi_n$ is in the Schwartz space. -Now $q\varphi_n$ extends to be in $C_c^\infty(\mathbb R)$ in the obvious way. 
Thus -$$\int_{\mathbb R}|q\varphi_n - f|^p = \int_{-a}^a |q\varphi_n - f|^p \le 2^{p-1}\left (\int_{-a}^a |q\varphi_n - q|^p + \int_{-a}^a |q-f|^p\right ).$$ -The second integral is less than $2a\epsilon^p,$ and if $n$ is large enough, the first integral is less than $2a\epsilon^p$ as well (by dominated convergence), and this is enough for what we want.<|endoftext|> -TITLE: Transformed probability distribution function (non-continuous transformation) -QUESTION [5 upvotes]: Let -$$ -F_X(x) = \left\{ -\begin{array}{ll} -\frac{1}{3}e^x & x < 0\\ -1 - \frac{1}{2}e^{-x} & x \geq 0 -\end{array} -\right . -$$ -What is the distribution of $Y = F(X)$? -I have a hard time using commonly known results due to the discontinuity and lack of inverse of $F$. I'm not looking for an answer but rather a general method to solve such problems. Thanks. - -REPLY [2 votes]: Let's explore the CDF method. The first step is to find the CDF of $Y$ in terms of the CDF of $X$. Let's start for $x < 0$: -\begin{align} -F_Y(y) &= P(Y \leq y)\\ -&= P\left(\frac{1}{3}e^{X}\leq y\right)\qquad X < 0\\ -&= P(X\leq \text{log}(3y))\\ -&= F_X(\text{log}(3y))\\ -&= \frac{1}{3}e^{\text{log}(3y)}\\ -&= y \qquad0\leq y < \frac{1}{3} -\end{align} -For $x\geq 0$: -\begin{align} -F_Y(y) &= P(Y \leq y)\\ -&= P\left(1-\frac{1}{2}e^{-X}\leq y\right)\qquad X \geq 0\\ -&= P\left(X\leq \text{log}\left(\frac{1}{2(1-y)}\right)\right)\\ -&= F_X\left(\text{log}\left(\frac{1}{2(1-y)}\right)\right)\\ -&= 1-\frac{1}{2}e^{-\left(\text{log}\left(\frac{1}{2(1-y)}\right)\right)}\\ -&= y \qquad \frac{1}{2}\leq y\leq 1 -\end{align} -Now, if you want to get the PDF of $Y$, you differentiate its CDF to get -$$f_Y(y) = 1 \qquad \forall y \in S=\left\{\left[0,\frac{1}{3}\right)\cup \left(\frac{1}{2},1\right]\right\}$$ -Since the area under $f_Y(y)$ is not 1, we are evidently missing something. In order to understand what's going on, let's analyze $F_X(x)$. We note that there is a discontinuity in $x=0$ because $F_X(0^-) = \frac{1}{3}\neq \frac{1}{2} = F_X(0^+)$. 
This happens because $X$ is a mixed random variable that is discrete at $X=0$ and continuous everywhere else. Moreover, $P(X=0) = F_X(0^+) - F_X(0^-) = \frac{1}{6}$. This mixed nature is naturally inherited by $Y$, being discrete at $y=\frac{1}{2}$ when $x=0$ and continuous in $S$. Therefore, the partial probability density function of $Y$ for the continuous part is $f_Y(y)$, and the partial probability density function for the discrete part is $p_Y(y) = \frac{1}{6}$ for $y=\frac{1}{2}$. -Notation note: $\displaystyle F_X(0^+) = \lim_{x\to0^+} F_X(x)$ and $\displaystyle F_X(0^-) = \lim_{x\to0^-} F_X(x)$.<|endoftext|> -TITLE: Knight returning to corner on chessboard -- average number of steps -QUESTION [27 upvotes]: Context: My friend gave me a problem at breakfast some time ago. It is supposed to have an easy, trick-involving solution. I can't figure it out. -Problem: Let there be a knight (horse) at a particular corner (0,0) on a 8x8 chessboard. The knight moves according to the usual rules (2 in one direction, 1 in the orthogonal one) and only legal moves are allowed (no wall tunnelling etc). The knight moves randomly (i.e. at a particular position, it generates a set of all possible and legal new positions, and picks one at random). What is the average number of steps after which the knight returns to its starting corner? -To sum up: A knight starts at (0,0). How many steps on average does it take to return back to (0,0) via a random (but only legal knight moves) walk. -My attempt: (disclaimer: I don't know much about Markov chains.) -The problem is a Markov chain. There are $8\times8 = 64$ possible states. There exist transition probabilities between the states that are easy to generate. I generated a $64 \times 64$ transition matrix $M_{ij}$ using a simple piece of code, as it seemed too big to do by hand. -The starting position is $v_i = (1,0,0,...) = \delta_{0i}$. 
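Generating the transition matrix mentioned above can be sketched in a few lines (this is my own illustration, not the asker's actual script; state $(r,c)$ is flattened to index $8r+c$):

```python
# Build the 64x64 transition matrix for a random knight walk on an 8x8 board.
MOVES = [(1, 2), (2, 1), (-1, 2), (-2, 1), (1, -2), (2, -1), (-1, -2), (-2, -1)]

def neighbors(r, c):
    """Squares reachable from (r, c) by one legal knight move."""
    return [(r + dr, c + dc) for dr, dc in MOVES
            if 0 <= r + dr < 8 and 0 <= c + dc < 8]

# M[i][j] = probability of moving from state i to state j in one step
M = [[0.0] * 64 for _ in range(64)]
for r in range(8):
    for c in range(8):
        nbrs = neighbors(r, c)
        for rr, cc in nbrs:
            M[8 * r + c][8 * rr + cc] = 1.0 / len(nbrs)
```

Every row then sums to 1, and the corner row (state 0) has exactly two nonzero entries, matching the two legal moves from a corner.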
-The probability that the knight is in the corner (state 0) after $n$ steps is
-$$
-P_{there}(n) = (M^n)_{0j} v_j \, .
-$$
-I also need to find the probability that the knight did not reach the state 0 in any of the previous $n-1$ steps. The probability that the knight is not in the corner after $m$ steps is $1-P_{there}(m)$.
-Therefore the total probability that the knight is in the corner for the first time (disregarding the start) after $n$ steps is
-$$
-P(n) = \left ( \prod_{m=1}^{n-1} \left [ 1 - \sum_{j = 0}^{63} (M^m)_{0j} v_j \right ] \right ) \left ( \sum_{j = 0}^{63} (M^n)_{0j} v_j \right )
-$$
-To calculate the average number of steps to return, I evaluate
-$$
-\left < n \right >= \sum_{n = 1}^{\infty} n P(n) \, .
-$$
-My issue:
-The approach I described should work. However, I had to use a computer due to the size of the matrices. Also, the $\left < n \right >$ seems to converge quite slowly. I got $\left < n \right > \approx 130.3$ numerically and my friend claims it's wrong. Furthermore, my solution is far from simple. Would you please have a look at it?
-Thanks a lot!
--SSF
-
-REPLY [27 votes]: Details of the method mentioned in @Batman's comment:
-We can view each square on the chessboard as a vertex of a graph consisting of $64$ vertices, where two vertices are connected by an edge if and only if a knight can move from one square to the other by a single legal move.
-Since a knight can reach any other square starting from any square, the graph is connected (i.e. every pair of vertices is connected by a path).
-Now given a vertex $i$ of the graph, let $d_i$ denote the degree of the vertex, which is the number of edges connected to the vertex. This is equivalent to the number of possible moves that a knight can make at that vertex (square on the chessboard). Since the knight moves randomly, the transition probability from $i$ to each of its neighbors is $1/d_i$.
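The degrees $d_i$ are easy to tabulate directly; a quick sketch (my own illustration) that also anticipates the degree sum and mean return time used later in this answer:

```python
# Tabulate the knight-move degree of every square on the 8x8 board.
MOVES = [(1, 2), (2, 1), (-1, 2), (-2, 1), (1, -2), (2, -1), (-1, -2), (-2, -1)]

def degree(r, c):
    """Number of legal knight moves from square (r, c)."""
    return sum(1 for dr, dc in MOVES if 0 <= r + dr < 8 and 0 <= c + dc < 8)

degrees = [degree(r, c) for r in range(8) for c in range(8)]
total = sum(degrees)          # sum of all vertex degrees
corner = degrees[0]           # degree of the corner square (0, 0)
mean_return = total / corner  # mean return time to the corner, 1/pi_corner
print(total, corner, mean_return)  # 336 2 168.0
```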
-Now since the chain is irreducible (since the graph is connected) the stationary distribution of the chain is unique. Let's call this distribution $\pi$. Now we claim the following:
-
-Claim Let $\pi_j$ denote the $j^\text{th}$ component of $\pi$. Then $\pi_j$ is proportional to $d_j$.
-Proof Let $I$ be the function on the vertices of the graph such that $I(i)=1$ if $i$ is a neighbor of $j$, and $I(i)=0$ otherwise. Then
-$$
-d_j=\sum_i I(i)=\sum_i d_i \cdot \frac{I(i)}{d_i} = \sum_i d_i p_{ij}
-$$
- where $p_{ij}$ is the transition probability from $i$ to $j$. Hence we have $dP=d$ where $P$ is the transition matrix of the chain, and $d=(d_1,\cdots,d_j,\cdots,d_{64})$. Thus the normalisation of $d$ satisfies $\pi P=\pi$, and by uniqueness of the stationary distribution this proves the Claim.
-
-Therefore, it follows that after normalising we have
-$$
-\pi_j=d_j/\sum_i d_i
-$$
-Finally we recall the following theorem
-
-Theorem If the chain is irreducible and positive recurrent, then
-$$
-m_i=1/\pi_i
-$$
- where $m_i$ is the mean return time of state $i$, and $\pi$ is the unique stationary distribution.
-
-Thus if we call the corner vertex $1$, we have
-$$
-m_1=1/\pi_1
-$$
-You can check that $\sum_i d_i = 336$, and we have $d_1=2$ (at the corner the knight can make only $2$ legal moves). Therefore $\pi_1=1/168$ and
-$$
-m_1=168
-$$<|endoftext|>
-TITLE: Does the following equation have only 1 solution of $n=2$?
-QUESTION [7 upvotes]: Conjecture:
-If $$I=\frac{1}{1+p_{n+1}}+\sum_{k=1}^{n}\frac{1}{p_k}$$
-(where $p_n$ denotes the $n$'th prime)
-then $n=2$ is the only natural number $n$ that makes $I$ an integer.
-All I really understand to do is to input numbers in for $n$. Beyond that, however, I am at a loss.
-I attempted using the knowledge that
-$$\sum_{k=1}^n \frac{1}{p_k}$$
-is never an integer (for $n>1$), but that really led nowhere.
-I am looking for a proof of my conjecture.
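Before a proof, the conjecture can be probed with exact rational arithmetic (a sketch of such a numerical check, my own addition, not part of the original question):

```python
from fractions import Fraction

def primes(count):
    """First `count` primes by trial division."""
    ps = []
    n = 2
    while len(ps) < count:
        if all(n % p for p in ps):
            ps.append(n)
        n += 1
    return ps

ps = primes(30)  # ps[0] = p_1 = 2

def I(n):
    """I(n) = 1/(1 + p_{n+1}) + sum_{k=1}^{n} 1/p_k, computed exactly."""
    return Fraction(1, 1 + ps[n]) + sum(Fraction(1, ps[k]) for k in range(n))

integral_n = [n for n in range(1, 25) if I(n).denominator == 1]
print(integral_n)  # [2]
```

For instance $I(2) = \tfrac16 + \tfrac12 + \tfrac13 = 1$, and no other $n$ up to 24 gives an integer.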
-REPLY [2 votes]: Bring all the fractions to the same denominator and add them to get:
-$$\frac{A}{p_1p_2\ldots p_n(p_{n+1}+1)}$$
-Note that in the numerator $A$ the only term that isn't divisible by $p_n$ is $$(p_{n+1}+1)p_1p_2\ldots p_{n-1}$$
-But for the expression to be an integer we must have $p_n \mid A$, so: $$p_n \mid p_{n+1}+1$$ but because $p_{n+1}>p_n$ it follows that: $$p_{n+1} \geq 2p_n-1$$
-Now use Bertrand's postulate:
-There's a prime between $a$ and $2a-2$ for every $a > 3$, to see that $p_n=2,3$ are the only possibilities. This leads to $n=1$ or $n=2$, and just $n=2$ leads to an integer.<|endoftext|>
-TITLE: Convolution with a polynomial is a polynomial. Why?
-QUESTION [5 upvotes]: Let $P:\mathbb{R}\to\mathbb{R}$ such that $\deg P=N$. Let $f$ be an integrable $2\pi$-periodic function. Show that $f\star P$ is also a polynomial.
-
-So we can prove it for an arbitrary $x^n$ (since a linear combination of polynomials is obviously a polynomial).
-$$f\star P = \frac{1}{2\pi}\int_0^{2\pi} f(x-t)t^n \ dt$$
-It looks like I don't have any other information in order to proceed.
-What am I missing?
-
-REPLY [4 votes]: First notice that all $C^{\infty}$ solutions of the differential equation $ y^{(N+1)}=0$ are polynomials.
-$\\ $
-We know that the convolution with a $C^{\infty}$ function is again $C^{\infty}$.
-From commutativity and dominated convergence we see that $$(f\ast P)^{(N+1)} =(P\ast f )^{(N+1)}=\frac{1}{2\pi} \int _{0}^{2\pi} (P(x-t))^{(N+1)}f(t)\rm{d}t=0,$$
-hence we conclude the result.<|endoftext|>
-TITLE: Yet another log-sin integral $\int\limits_0^{\pi/3}\log(1+\sin x)\log(1-\sin x)\,dx$
-QUESTION [29 upvotes]: There has been much interest in various log-trig integrals on this site (e.g. see [1][2][3][4][5][6][7][8][9]).
-Here is another one I'm trying to solve: -$$\int\limits_0^{\pi/3}\log(1+\sin x)\log(1-\sin x)\,dx\approx-0.41142425522824105371...$$ -I tried to feed it to Maple and Mathematica, but they are unable to evaluate in this form. After changing the variable $x=2\arctan z,$ and factoring rational functions under logarithms, the integrand takes the form -$$\frac{2 \log ^2\left(z^2+1\right)}{z^2+1}-\frac{4 \log (1-z) \log \left(z^2+1\right)}{z^2+1}\\-\frac{4 \log (z+1) \log - \left(z^2+1\right)}{z^2+1}+\frac{8 \log (1-z) \log (z+1)}{z^2+1}$$ -in which it can be evaluated by Mathematica. It spits out a huge ugly expression with complex numbers, polylogarithms, polygammas and generalized hypergeometric functions (that indeed matches numerical estimates of the integral). It takes a long time to simplify and with only little improvement (see here if you are curious). -I'm looking for a better approach to this integral that can produce the answer in a simpler form. - -REPLY [27 votes]: Integral expressed in terms of $F_\pm(x,n)$ -For $2x\in(-\pi,\pi)$, one may write the integrand as -\begin{align} -\prod_\pm\ln(1\pm\sin x) -&=2f_-(2\bar x,2)-2f_-(\bar x,2)-2f_+(\bar x,2)-2\ln 2f_-(2\bar x,1)+2\bar x(2\bar x-\pi)+\ln^2 2 -\end{align} -where $4\bar x=\pi-2x$ and $f_\pm(x,n)=\mathrm{Re}\ln^n(1\pm e^{2ix})$. Now note that for $n=1,2$, $f_\pm(x,n)$ has antiderivatives $F_\pm(x,n)$ which can be obtained through integration by parts. To be specific, -\begin{align} -F_-(x,1)&=\mathrm{Re}\frac i2\mathrm{Li}_2(e^{2ix})\\ -F_-(x,2)&=\mathrm{Re}\frac i2\left(2\mathrm{Li}_3(1-e^{2ix})-2\mathrm{Li}_2(1-e^{2ix})\ln(1-e^{2ix})-\ln(e^{2ix})\ln^2(1-e^{2ix})\right)\\ -F_+(x,2)&=\mathrm{Re}\frac i2\left(2\mathrm{Li}_3(z)-2\mathrm{Li}_2(z)\ln(z)-\ln^2 z\ln(1-z)+\frac{\ln^3 z} 3\right) -\end{align} -where $z=(1+e^{2ix})^{-1}$. - -As the integrand has no poles in the first quadrant, we are allowed to simply plug in the limits into these antiderivatives. 
This gives
-\begin{align}
-\int^\frac{\pi}{3}_0\prod_\pm\ln(1\pm\sin x)\ dx=&\ 2F_-\left(\tfrac\pi 2,2\right)-2F_-\left(\tfrac\pi 6,2\right)-4F_-\left(\tfrac\pi 4,2\right)+4F_-\left(\tfrac\pi{12},2\right)-4F_+\left(\tfrac\pi 4,2\right)\\&+4F_+\left(\tfrac\pi{12},2\right)-2\ln 2 F_-\left(\tfrac\pi{2},1\right)+2\ln 2 F_-\left(\tfrac\pi{6},1\right)+\frac\pi 3\ln^2 2-\frac{23\pi^3}{324}
-\end{align}
-It remains to simplify these polylogarithmic expressions.
-
-Simplification of $F_-\left(\tfrac\pi{2},1\right)$ and $F_-\left(\tfrac\pi{6},1\right)$
-Evidently, $F_-\left(\tfrac\pi{2},1\right)=\mathrm{Re}\left(\frac i2\mathrm{Li}_2(-1)\right)=0$, while the value of
-$$F_-\left(\tfrac\pi{6},1\right)=\mathrm{Re}\left(\tfrac i2\mathrm{Li}_2(e^{\pi i/3})\right)=\frac{-\psi_1\left(\frac 16\right)-\psi_1\left(\frac 13\right)+\psi_1\left(\frac 23\right)+\psi_1\left(\frac 56\right)}{48\sqrt 3}=\frac{\pi^2}{6\sqrt 3}-\frac{\psi_1\left(\frac 13\right)}{4\sqrt 3}$$
-can be deduced by writing it as a sum and applying the duplication formula followed by the reflection formula twice.
-
-Simplification of $F_-\left(\tfrac\pi{2},2\right)$ and $F_-\left(\tfrac\pi{6},2\right)$
-Use the polylogarithm inversion formulae to deduce that $F_-\left(\tfrac\pi{2},2\right)=0$. Since $1-e^{\pi i/3}=e^{-\pi i/3}$ lies on the unit circle it is easy to verify that
-$$F_-\left(\tfrac\pi{6},2\right)=\frac{\pi^3}{324}$$
-using the known Fourier series identities for $\sum\cos(n\theta)n^{-2}$ and $\sum\sin(n\theta)n^{-3}$.
- -Simplification of $F_-\left(\tfrac\pi{4},2\right)$ and $F_+\left(\tfrac\pi{4},2\right)$ -The 3 facts -\begin{align} -\mathrm{Li}_2(1-i)&=\frac{\pi^2}{16}-i\left(\frac{\pi}{4}\ln 2+G\right)\\ \mathrm{Li}_2\left(\frac{1-i}2\right)&=\frac{5\pi^2}{96}-\frac{\ln^2 2}{8}+i\left(\frac{\pi}{8}\ln 2-G\right)\\ --\mathrm{Im}\ \mathrm{Li}_3\left(\frac{1-i}2\right)&=\mathrm{Im}\ -\mathrm{Li}_3(1-i)+\frac{7\pi^3}{128}+\frac{3\pi}{32}\ln^2 2 -\end{align} -(which respectively follow from the dilogarithm reflection formula and Landen's di/trilogarithm identities) allow us to conclude, after some algebra, -$$F_-\left(\tfrac\pi{4},2\right)=-F_+\left(\tfrac\pi{4},2\right)=-\mathrm{Im}\ -\mathrm{Li}_3(1-i)-\frac{G}{2}\ln 2-\frac{\pi^3}{32}-\frac{\pi}{16}\ln^2 2$$ -So $F_-\left(\tfrac\pi{4},2\right)+F_+\left(\tfrac\pi{4},2\right)=0$ - a surprisingly convenient equality indeed. - -Simplification of $F_-\left(\tfrac\pi{12},2\right)+F_+\left(\tfrac\pi{12},2\right)$ -This is the most tedious part of the evaluation. We have the identity -\begin{align} -\mathrm{Li}_3\left(\frac{1-z}{1+z}\right)-\mathrm{Li}_3\left(-\frac{1-z}{1+z}\right)= -&\ 2\mathrm{Li}_3\left(1-z\right)+2\mathrm{Li}_3\left(\frac{1}{1+z}\right)-\frac12\mathrm{Li}_3\left(1-z^2\right)\\ -&\ -\frac{\ln^3(1+z)}{3}+\frac{\pi^2}6\ln(1+z)-\frac{7\zeta(3)}4 -\end{align} -and it so happens that when $z=e^{\pi i/6}$, $(1-z)(1+z)^{-1}=-(2-\sqrt 3)i$ is purely imaginary and $1-z^2$ lies on the unit circle. Therefore -\begin{align} -4\mathrm{Im}\left(\mathrm{Li}_3\left(1-e^{\pi i/6}\right)+\mathrm{Li}_3\left(\frac{1}{1+e^{\pi i/6}}\right)\right)&=-4\mathrm{Ti}_3\left(2-\sqrt 3\right)-\frac{17\pi^3}{288}+\frac{\pi}{24}\ln^2(2-\sqrt 3)\\ -&=4\mathrm{Ti}_3\left(2+\sqrt 3\right)-\frac{89\pi^3}{288}-\frac{23\pi}{24}\ln^2(2+\sqrt 3) -\end{align} -since $16\mathrm{Ti}_3(z)+16\mathrm{Ti}_3(z^{-1})=\pi^3+4\pi\ln^2 z$ . 
Furthermore, it is not hard to get -$$\mathrm{Li}_2\left(e^{\pi i/6}\right)=\frac{13\pi^2}{144}+i\left(\frac{\psi_1\left(\frac 13\right)}{8\sqrt 3}+\frac{2G}3-\frac{\pi^2}{12\sqrt 3}\right)$$ -by applying its definition, so by the dilogarithm reflection formula, -$$\mathrm{Li}_2\left(1-e^{\pi i/6}\right)=\frac{\pi^2}{144}+i\left(\frac{\pi^2}{12\sqrt 3}-\frac{\psi_1\left(\frac 13\right)}{8\sqrt 3}-\frac{2G}3+\frac{\pi}{12}\ln(2+\sqrt 3)\right)$$ -By a similar process we obtain -$$\mathrm{Li}_2\left(\frac{1}{1+e^{\pi i/6}}\right)=\frac{23\pi^2}{288}-\frac{\ln^2(2+\sqrt 3)}{8}+i\left(\frac{\psi_1\left(\frac 13\right)}{8\sqrt 3}-\frac{\pi^2}{12\sqrt 3}-\frac{2G}3+\frac{\pi}{24}\ln(2+\sqrt 3)\right)$$ -after an application of the inversion formula to $z=1+e^{\pi i/6}$. After some further manipulations using these values, we eventually arrive at -$$4F_-\left(\tfrac\pi{12},2\right)+4F_+\left(\tfrac\pi{12},2\right)=-4\mathrm{Ti}_3\left(2+\sqrt 3\right)+\frac{8G}{3}\ln(2+\sqrt 3)+\frac{5\pi}{6}\ln^2(2+\sqrt 3)+\frac{137\pi^3}{648}$$ - -The Closed Form -Assimilating all our results, we indeed get -\begin{align}\int^\frac{\pi}{3}_0\prod_\pm\ln(1\pm\sin x)\ dx=&-4 \mathrm{Ti}_3\left(2+\sqrt3\right)-\frac{\psi_1\left(\frac13\right)}{2 \sqrt{3}}\ln 2+\frac{8G}3\ln\left(2+\sqrt3\right)+\frac{29\pi^3}{216}\\ -&\ \ +\frac{5\pi}6\ln^2\left(2+\sqrt3\right)+\frac\pi3\ln^22+\frac{\pi^2}{3\sqrt3}\ln2\\ -\end{align} -as Cleo announced.<|endoftext|> -TITLE: Explanation of a step in a proof of Steinhaus' Theorem -QUESTION [7 upvotes]: Disclaimer: I understand this theorem has been discussed before. I am not looking for a proof however, just a clarification of a mysterious step. -I was reading a proof of the following version of Steinhaus' Theorem, and got stuck on one step: - -Theorem (Steinhaus): Let $E \subset \mathbb{R}$ and $m(E)>0$, where $m$ denotes Lebesgue measure on $\mathbb{R}$. Then the set $E-E$ - contains an interval around zero. 
-Proof: By the Lebesgue differentiation theorem, or, equivalently, by the definition of a Lebesgue point, there exists an interval $I$
-such that
-$$m(I\cap E) \geq (1-\epsilon)\,m(I).$$
-If the result were not true, there would exist a sequence $x_n \rightarrow 0$ with $x_n \not \in E-E$. Take $n$ large enough that $|x_n|<\epsilon\, m(I)$. Then $x_n +E \subset \mathbb {R} \setminus E$, but
-$$\mathbf{m(I\cap (x_n+E))\geq (1-3\epsilon)\,m(I)}$$
-(since Lebesgue measure is invariant under translations). Since $E$ and $\mathbb R \setminus E$ are disjoint, this is a contradiction.
-
-The part I am having difficulty with is in bold characters. I believe it follows from the following:
-\begin{align*}
-m(I\cap (x_n+E)) & = m(x_n+(I-x_n)\cap E) \\
-& = m((I-x_n)\cap E) \\
-& \geq m(I) - 2|x_n| - m(I \setminus E) \\
-& \geq (1-3\epsilon)\,m(I)
-\end{align*}
-Is this correct? If not, could you point out the correct steps? Thanks!
-Note: For the sake of completeness,
-$$E-E:=\{x-y:\ x,y\in E \}.$$
-
-REPLY [4 votes]: HINT:
-$$\mu (I \cap (x + E) ) = \mu ( (I- x) \cap E) \ge \mu ( I \cap E) - \mu ( (I \backslash (I - x) ) \cap E)\ge \mu ( I \cap E) - \mu(I \backslash (I - x) ) $$
-and $\mu(I \backslash (I - x) )= |x|$ if $I$ is fixed and $x$ is small enough.
-${\bf Added:}$ In fact, this seems to be true: if $\mu(I \cap E) \ge (\frac{1}{2} + \delta) |I| $ then $E - E$ will contain the interval $(-2\delta|I|, 2 \delta|I|)$.<|endoftext|>
-TITLE: Special Vertex Partitioning
-QUESTION [7 upvotes]: When can we partition the vertices of a graph $G$ into $n$ subsets such that every vertex is adjacent to a vertex from every subset? For example, in the following graph, we have partitioned the vertices into $2$ subsets.
-
-Is anything known about this type of partitioning? A motivation for studying these partitionings is that we can represent vertices as possible states and adjacent vertices as states reachable through one move.
Then this partitioning allows us to convey certain messages no matter what the state.
-
-REPLY [2 votes]: I realized that in my old answer I misunderstood the question.
-Let $N’(G)$ be the maximum $n$ for which such a partitioning exists. Then it is clear that such a partitioning exists also for all $n\le N’(G)$. Now I can only say that $N’(G)\le\delta(G)$, where $\delta(G)$ is the minimum vertex degree of the graph $G$.
-
-An old version.
-Let $N(G)$ be the maximum $n$ for which there exists a partitioning of the set of vertices of the graph $G$ such that each two distinct parts contain adjacent vertices. Then it is clear that such a partitioning exists also for all $n\le N(G)$. We have the following simple bounds for $N(G)$.
-If $|V(G)|\ge 1$ then $\chi(G)\le N(G)\le V(G)$, where $V(G)$ is the number of vertices of the graph $G$ and $\chi(G)$ is the chromatic number of the graph $G$, that is, the minimal number of colors needed to color all vertices of the graph $G$ such that there are no adjacent monochromatic vertices.
-$m(G)\le N(G)(N(G)-1)/2\le E(G)$, where $E(G)$ is the number of edges of the graph $G$ and $m(G)$ is the number of edges of a maximum matching of the graph $G$, that is, the maximal number of mutually non-adjacent edges of the graph $G$.
-The upper bound $N(G)\le V(G)$ seems to be a corollary of the bound $N(G)(N(G)-1)/2\le E(G)$, because $E(G)\le V(G)(V(G)-1)/2$ (I assumed that the graph $G$ has no double edges).
-The upper bounds are far from optimal, because a graph $G$ with both $V(G)$ and $E(G)$ big can have small $N(G)$. For instance, if $S_n$ is the star with $n$ vertices (the tree with one root and $n-1$ leaves) then $V(S_n)=n$, $E(S_n)=n-1$, whereas $N(S_n)=2$.<|endoftext|>
-TITLE: Proof of how multiplicity in a polynomial works.
-QUESTION [8 upvotes]: In Algebra (2) I was told that if a polynomial had an even multiplicity for some $x=a$, then the graph touches $y=0$ at $x=a$ but doesn't cross $y=0$. Odd multiplicities go through the $x$-intercept.
For example: $$y=x^2\to y=(x-0)(x-0)\to x=0,0$$ And you can clearly see the graph "touches without intersecting" at $x=0$.
-However, I am confused on how this is proven.
-
-REPLY [5 votes]: Let $P(x) = Q(x)(x-a)^{2n}$ such that $(x-a)\nmid Q(x)$. In a small enough neighbourhood of $a$ (for instance, one that contains no roots of $Q(x)$), $Q(x)$ preserves sign. And $(x-a)^{2n}\geq 0$, therefore, in said neighbourhood, $P(x)$ preserves sign, i.e., the function does not cross the line $y=0$.
-You can find a similar argument for why it happens the other way around for $P(x) = Q(x)(x-a)^{2n+1}$.<|endoftext|>
-TITLE: Standard Deviation vs Population Standard Deviation
-QUESTION [5 upvotes]: In the GRE study guide, it gives the difference between the standard deviation and the sample or population standard deviation.
-I understand the mechanics of this, i.e. in the standard deviation you divide the differences from the mean by $n$, but in the population standard deviation you divide by $n - 1$.
-Honestly, why not just divide by $n$? I am really not understanding the logic of this.
-When I google for an explanation, I find the same mechanical difference.
-Kindly explain.
-
-REPLY [4 votes]: You have this backwards: It is in the sample variance that you divide by $n-1$. If $n$ is the size of the whole population then the population variance is the average of the squares of the deviations, and that is the sum of those squares divided by $n$.
-Division by $n-1$, if done at all, should be done ONLY when using the sample to ESTIMATE the variance of the whole population.
-This section of a Wikipedia article explains why it is done.
-Whether it ought to be done, i.e. whether estimates ought to be unbiased, is debatable.
-You may have read that if $X_1,\ldots,X_n$ are independent random variables then $$\operatorname{var}(X_1+\cdots+X_n) = \operatorname{var}(X_1) + \cdots + \operatorname{var}(X_n). \tag 1$$ That works with the one where you divide by $n$, but not with the one where you divide by $n-1$.
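The unbiasedness point can be seen exactly on a toy example: take every size-2 sample (with replacement) from the population {1, 2, 3} and average the two candidate variance estimates. (A sketch with exact rational arithmetic; the setup is my own, not from the question.)

```python
from fractions import Fraction
from itertools import product

pop = [1, 2, 3]
mu = Fraction(sum(pop), len(pop))
pop_var = sum((x - mu) ** 2 for x in pop) / len(pop)  # population variance, divisor N: 2/3

def sample_var(xs, ddof):
    """Sample variance with divisor len(xs) - ddof."""
    m = Fraction(sum(xs), len(xs))
    return sum((x - m) ** 2 for x in xs) / (len(xs) - ddof)

samples = list(product(pop, repeat=2))  # all 9 equally likely size-2 samples
avg_n   = sum(sample_var(s, 0) for s in samples) / len(samples)
avg_nm1 = sum(sample_var(s, 1) for s in samples) / len(samples)
print(pop_var, avg_n, avg_nm1)  # 2/3 1/3 2/3 -- only the n-1 divisor is unbiased
```

The divisor-$n$ estimator averages to $1/3$, systematically below the true variance $2/3$, while the divisor-$(n-1)$ estimator averages to exactly $2/3$.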
The identity $(1)$ is the reason why standard deviation rather than mean absolute deviation or some other measure of dispersion is used.<|endoftext|> -TITLE: Tightness, relative compactness and convergence of stochastic processes -QUESTION [6 upvotes]: For proving the convergence of a certain sequence of stochastic processes (which take values on a compact set), I am taking the following approach (as taken in previous papers I am looking at): -1) First proving the sequence of stochastic processes is tight. -2) Having proven tightness, I try to identify a possible limit of the sequence of stochastic processes (e.g. using tools like Donsker's invariance principle). -I want to understand why, with the relevant additional conditions, these two steps are sufficient for proving that the sequence of stochastic processes does indeed converge to the possible limit (which is also a stochastic process) I find in step 2 above. My understanding so far: -By Prohorov's theorem on a family $\mathcal{M}$ of probability measures on a complete, separable metric space (S,d), tightness is equivalent to relative compactness. Here relative compactness is equivalent to saying every sequence in $\mathcal{M}$ has a subsequence which converges in $\mathcal{M}_1(S)$ (the complete space of probability measures in $(S,\mathcal{B}(S)))$. So taking the family of probability measures to be the laws of the stochastic processes in the sequence gives: if the sequence of stochastic processes is tight, then there is a subsequence which has a weak convergence limit. -However, at this point I am lost- how can I now show that the entire sequence (rather than one of its subsequences) has a weak convergence limit? Do we need some form of Cauchy criterion now to be satisfied by the sequence, so that the subsequence limit turns out to indeed be the sequence limit under completeness? -Thanks. 
-REPLY [8 votes]: There is the following general statement
-
-Let $(X,d)$ be a metric space and $(x_n)_{n \in \mathbb{N}} \subseteq X$. Then $x_n$ converges (in $X$) if, and only if, for any subsequence of $(x_n)_{n \in \mathbb{N}}$ there exists a (further) subsequence which converges to a limit $x \in X$ and this limit does not depend on the chosen subsequence.
-
-The proof is not difficult; the implication "$\Rightarrow$" is obvious (if the sequence converges, then any subsequence converges and the limit does not depend on the chosen subsequence) and "$\Leftarrow$" can be proved by contradiction.
-Applying this statement in your framework, we can proceed as follows to prove the weak convergence of a sequence of probability measures, say $(\mu_n)_{n \in \mathbb{N}}$:
-
-Fix an arbitrary subsequence $(\mu_{n_k})_{k \in \mathbb{N}}$.
-Using compactness, show that this subsequence admits a convergent subsequence $(\mu_{n_{k_{\ell}}})_{\ell \in \mathbb{N}}$.
-Identify the possible limit of the sequence to conclude that the limit of the (convergent) subsequence does not depend on the chosen subsequence.<|endoftext|>
-TITLE: If $A^2=A$ then prove that $\textrm{tr}(A)=\textrm{rank}(A)$.
-QUESTION [8 upvotes]: Let $A\not=I_n$ be an $n\times n$ matrix such that $A^2=A$, where $I_n$ is the identity matrix of order $n$. Then prove that:
-(A) $\textrm{tr}(A)=\textrm{rank}(A)$.
-(B) $\textrm{rank}(A)+\textrm{rank}(I_n-A)=n$
-I found by example that these hold, but I am unable to prove them.
-
-REPLY [4 votes]: Every vector $x$ can be written as
-$$
- x = (I-A)x + Ax
-$$
-The vector $x_0 = (I-A)x$ satisfies $Ax_0=0$, while $x_1 = Ax$ satisfies $Ax_1=x_1$. So you can choose a basis of elements
-$$
- \{ x_{0,1},x_{0,2},\cdots,x_{0,k}\}\cup\{ x_{1,1},x_{1,2},\cdots,x_{1,n-k} \}
-$$
-where $A=0$ on the subspace spanned by $\{ x_{0,1},x_{0,2},\cdots,x_{0,k} \}$ and where $A=I$ on the subspace spanned by $\{ x_{1,1},x_{1,2},\cdots,x_{1,n-k}\}$.
In this basis, the matrix representation of $A$ has $0$'s in the first $k$ diagonal entries and $1$'s in the next $n-k$ diagonal positions; all other matrix entries are $0$'s. Once you understand this representation, $(A)$ and $(B)$ become more-or-less obvious.<|endoftext|>
-TITLE: Prove that $\frac{ab}{a^5+b^5+ab}+\frac{bc}{b^5+c^5+bc}+\frac{ca}{c^5+a^5+ca} \leq 1.$
-QUESTION [6 upvotes]: The following problem was on the IMO 1996 shortlist:
-
-Let $a,b,c$ be positive real numbers such that $abc = 1$. Prove that
-$$\dfrac{ab}{a^5+b^5+ab}+\dfrac{bc}{b^5+c^5+bc}+\dfrac{ca}{c^5+a^5+ca} \leq 1.$$
-
-I tried factoring out things but that didn't seem to work. I don't see how to factor the denominator so I get stuck.
-
-REPLY [6 votes]: Since
-$$a^5+b^5\ge a^2b^3+a^3b^2$$
-we get
-$$\sum\dfrac{ab}{a^5+b^5+ab}\le\sum\dfrac{1}{ab^2+a^2b+1}=\sum\dfrac{abc}{ab^2+a^2b+abc}=\sum\dfrac{c}{a+b+c}=1$$<|endoftext|>
-TITLE: Can you give me an example of topological group which is not a Lie group.
-QUESTION [17 upvotes]: I know the definitions of Lie group and topological group are different. Can you give me an example of a topological group which is not a Lie group?
-
-REPLY [10 votes]: The $p$-adic numbers are a topological group, but not a Lie group.
-There are many profinite groups which are not Lie groups, for example the profinite group completion of a knot group.<|endoftext|>
-TITLE: Prove this inequality $\sum \cos{A}\ge\frac{1}{4}(3+\sum\cos{(A-B)})$
-QUESTION [5 upvotes]: Prove that in any triangle $ABC$ the following inequality holds
-$$\cos{A}+\cos{B}+\cos{C}\ge\dfrac{1}{4}(3+\cos{(A-B)}+\cos{(B-C)}+\cos{(C-A)})$$
-And I have gotten
-$$8(\cos{A}+\cos{B}+\cos{C})\ge 6+2(\cos{(A-B)}+\cos{(B-C)}+\cos{(C-A)})$$
-$$2(\cos{(A-B)}+\cos{(B-C)}+\cos{(C-A)})+3=(\sum_{cyc}\cos{A})^2+(\sum_{cyc}\sin{A})^2$$
-$$\Longleftrightarrow 8\sum_{cyc}\cos{A}\ge 3+(\sum_{cyc}\cos{A})^2+(\sum_{cyc}\sin{A})^2$$
-Any hints, ideas? Thanks in advance.
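Before hunting for a proof, the inequality can be sanity-checked numerically on a few triangles (my own quick sketch, not part of the original question):

```python
from math import cos, pi

def holds(A, B, C, tol=1e-12):
    """cos A + cos B + cos C >= (3 + cos(A-B) + cos(B-C) + cos(C-A)) / 4 ?"""
    lhs = cos(A) + cos(B) + cos(C)
    rhs = (3 + cos(A - B) + cos(B - C) + cos(C - A)) / 4
    return lhs >= rhs - tol

triangles = [(pi / 3, pi / 3, pi / 3),   # equilateral: equality, both sides 3/2
             (pi / 2, pi / 4, pi / 4),   # right isosceles
             (0.1, 0.2, pi - 0.3)]       # very flat triangle
print(all(holds(A, B, C) for A, B, C in triangles))  # True
```

The equilateral case gives equality ($3/2$ on both sides), which is consistent with it being the extremal configuration.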
-REPLY [3 votes]: Use
-$$\sum\cos{A}=\dfrac{R+r}{R},\sum\cos{A}\cos{B}=\dfrac{s^2+r^2-4R^2}{4R^2},\sum\sin{A}\sin{B}=\dfrac{s^2+4Rr+r^2}{4R^2}$$
-$$\Longleftrightarrow \dfrac{4R+4r}{R}\ge 3+\dfrac{s^2+r^2-4R^2}{4R^2}+\dfrac{s^2+4Rr+r^2}{4R^2}$$
-$$\Longleftrightarrow 4R^2+6Rr\ge s^2+r^2$$
-Now use Gerretsen's inequality
-$$s^2\le 4R^2+4Rr+3r^2$$
-so it suffices to prove the following:
-$$4R^2+6Rr\ge 4R^2+4Rr+4r^2$$
-which is equivalent to Euler's inequality
-$$R\ge 2r$$<|endoftext|>
-TITLE: Very strong inequality
-QUESTION [10 upvotes]: Let $a$, $b$ and $c$ be non-negative numbers. Prove that:
-$$a^3+b^3+c^3+3abc\geq(a+b-c)\sqrt{ab(a+c)(b+c)}+(a+c-b)\sqrt{ac(a+b)(b+c)}+(b+c-a)\sqrt{bc(a+b)(a+c)}$$
-I have a proof, but my proof is long. Maybe there is something simple?
-Thanks!
-
-REPLY [4 votes]: Here is a brute-force method, and I am not sure it is shorter than the OP's:
-It is easy to verify that if $abc=0$, the inequality holds.
-Next, if $a,b,c$ are the sides of a triangle, we can take advantage of Thunderstruck's proof.
-If $a,b,c$ are not the sides of a triangle, WLOG $a \ge b+c$, and we now prove:
-LHS $>(a+b-c)\sqrt{ab(a+c)(b+c)}+(a+c-b)\sqrt{ac(a+b)(b+c)}$
-We have:
-$((a+b-c)^2+(a+c-b)^2)(ab(a+c)(b+c)+ac(a+b)(b+c))=2a(a^2+(b-c)^2)(b+c)(ab+ac+2bc) \ge ((a+b-c)\sqrt{ab(a+c)(b+c)}+(a+c-b)\sqrt{ac(a+b)(b+c)})^2$
-$a=b+c+x \implies x\ge0 $
-$(a^3+b^3+c^3+3abc)^2-2a(a^2+(b-c)^2)(b+c)(ab+ac+2bc)=x^6+6cx^5+6bx^5+13c^2x^4+32bcx^4+13b^2x^4+14c^3x^3+56bc^2x^3+56b^2cx^3+14b^3x^3+7c^4x^2+42bc^3x^2+79b^2c^2x^2+42b^3cx^2+7b^4x^2+12bc^4x+52b^2c^3x+52b^3c^2x+12b^4cx+16b^2c^4+32b^3c^3+16b^4c^2>0$<|endoftext|>
-TITLE: Number of elements of order $2$ in Abelian groups of order $2^{n}$
-QUESTION [5 upvotes]: I'm self-studying some group theory and this is one of the exercises I came across:
-Question: Prove that an Abelian group of order $2^{n}, n \in \mathbb{N}$ must have an odd number of elements of order 2.
-I'm not sure how to approach this problem. Any hints would be appreciated.
Prefer no complete answers, but could use one for self-checking. Thank you.
-
-REPLY [2 votes]: Hint: define a relation on $G$ by
-$$
-a\sim b\quad\text{if and only if}\quad b=a\text{ or }b=a^{-1}
-$$
-Prove this is an equivalence relation and that the equivalence classes have one or two elements. Which ones have just one element?<|endoftext|>
-TITLE: Applications of the law of the iterated logarithm
-QUESTION [7 upvotes]: The law of the iterated logarithm says that if $X_n$ is a sequence of iid
-random variables with zero expectation and unit variance, then the partial
-sums sequence $S_n = \sum_{i = 1}^n X_i$ satisfies almost surely that
-$\limsup_{n \rightarrow \infty} \frac{S_n}{\sqrt{2 n \log\log n}} = 1$.
-What are the applications of this result? Why is it considered important or
-even useful?
-I looked at the wikipedia article. It doesn't explain to me in detail where this result is applied, what major results are built from it, or what the major areas of application are.
-What I am looking for is something like a list of major applications of that theorem. Like how it is used to prove other stuff.
-
-REPLY [6 votes]: Since I am a statistics guy, here are some facts relating to the application of the law of the iterated logarithm from my experience. Generally speaking, people use it to establish probability bounds.
If you know a bit about the concepts of "confidence interval" and "power of a statistical test", you can obtain very amazing results by applying the law of the iterated logarithm, such as confidence interval sequences with coverage probability 1 for all sample sizes, and the famous power-one tests (which means that the type II error is zero) based on stopping rules. This is the perfect statistical test people want, however it is often difficult to obtain in the traditional approach of fixing sample sizes and so on. For the above two points, please refer to Sect. 3 and Sect. 4 of the paper by Herbert Robbins (https://projecteuclid.org/euclid.aoms/1177696786), which is a classic paper. Sect. 1 and Sect. 2 just show how to use the theorem to deduce the favorable results.
-Another famous application of the theorem is related to the Bahadur representation of quantiles of a distribution; see the paper (http://arxiv.org/pdf/math/0508313.pdf). Do not be worried about the details; the key is to observe the remainder term of the asymptotic expansion to see how it relates to the law of the iterated logarithm.
-I would say it is rather theoretical; I hope you enjoy it and that it helps you a bit. :)<|endoftext|>
-TITLE: Lie algebra of the automorphism group of a Lie group?
-QUESTION [7 upvotes]: Let $G$ be a Lie group and $\text{Aut}(G)$ the group of all Lie group automorphisms of $G$. If $\text{Aut}(G)$ can be interpreted to be a Lie group (for example, in the context of synthetic differential geometry), is there a nice characterization of its Lie algebra?
-The Lie algebra of $\text{Diff}(G)$ is $\mathcal{X}(G)$, the space of all vector fields on $G$, so the Lie algebra of $\text{Aut}(G)$ should be a Lie subalgebra of $\mathcal{X}(G)$.
-
-REPLY [6 votes]: Edit, 12/26/15: Following YCor's comments below, let's restrict to the case that $G$ is connected for simplicity.
Here $\text{Aut}(G)$ is always a Lie group (in just plain old differential geometry), because it's always a closed subgroup of $\text{Aut}(\mathfrak{g})$, which is in turn a closed subgroup of $GL(\mathfrak{g})$.
-The Lie algebra of $\text{Aut}(\mathfrak{g})$ is quite nice: it's the Lie algebra $\text{Der}(\mathfrak{g})$ of derivations on $\mathfrak{g}$. In general (for example, when $G = S^1$), $\text{Aut}(G)$ will be a proper subgroup of $\text{Aut}(\mathfrak{g})$, but the two agree if $G$ is simply connected. $\text{Aut}(G)$ always has a subgroup $\text{Inn}(G)$ of inner automorphisms, and correspondingly its Lie algebra always contains the subalgebra $\text{Inn}(\mathfrak{g})$ of inner derivations of $\mathfrak{g}$.<|endoftext|>
-TITLE: Show that $\int_0^1 (\ln x)^n dx =(-1)^n n!$
-QUESTION [5 upvotes]: How can I prove the following:
-$$\int_0^1 (\ln x)^n dx =(-1)^n n!$$
-where $n$ is an integer and $n>0$?
-By using partial integration, I started by finding a reduction formula
-$$
-\begin{align*}
-I_n &= \int (\ln x)^n dx \\ &= x(\ln x)^n - nI_{n-1}
-\end{align*}
-$$
-however the bounds 0 and 1 complicate things seeing as $\ln x \to -\infty \quad \mathrm{as} \quad x \to 0^+$
-
-REPLY [4 votes]: You can use the $\; \Gamma\;$function to evaluate the integral.
-Substituting $x=e^t \;$:
-$$\ \int_0^1 {(\ln x)^n dx}=\int_{- \infty}^0 {t^n}e^tdt$$
-Substituting $\, p=-t \,$: $$ \int_{- \infty}^0 {t^n}e^tdt=(-1)^n\int_{0}^{\infty}{p^ne^{-p}dp}=(-1)^n n!$$<|endoftext|>
-TITLE: What is the degree of the differential equation $\left|\frac{dy}{dx}\right| + \left|y\right| = 0$?
-QUESTION [5 upvotes]: Consider the differential equation $$\left|\frac{dy}{dx}\right| + \left|y\right| = 0$$
-where $\left|\cdot\right|$ means the absolute value function. I have to find the degree of the above differential equation. Can I say the degree of this differential equation is not defined as it is not a polynomial in $y'$?
-If we further solve it, we get $$\left|\frac{dy}{dx}\right| = -\left|y\right|$$ Then, squaring, we get $\left(\frac{dy}{dx}\right)^{2}- y^{2} =0$, which has degree $2$. -Now can I say that the two differential equations are not the same? So the degree of the first one is not defined. Am I right? - -REPLY [3 votes]: Okay. Let's do it nice and easy: -$$ -\left|\frac{dy}{dx}\right| + \left|y\right| = 0 -$$ -Consider instead the simplified equation: -$$ -\left|A\right| + \left|B\right| = 0 -$$ -Do you agree that this is exactly the same as: -$$ -\begin{cases} A = 0 \\ B = 0 \end{cases} -$$ -In your case, substitute: -$$ -A = \frac{dy}{dx} \quad \Longrightarrow \quad \frac{dy}{dx} = 0 \quad \Longrightarrow \quad y = \mbox{constant} \\ -B = y \quad \Longrightarrow \quad y = 0 -$$ -Therefore $y(x)=0$ is the only solution, as has been argued already in both the other answers and some of the comments as well. -But yes, the expression contains a derivative. So, strictly speaking, it's a differential equation. -And since the whole is equivalent to the equation $y=0$, I'd say the degree of that ODE is zero.<|endoftext|> -TITLE: Find $\lim_{n\to \infty} \int_n^{n+1} {\sin x \over x} dx$ -QUESTION [6 upvotes]: Find $\lim_{n\to \infty} \int_n^{n+1} {\sin x \over x} dx$ - -I thought about defining $F(x) = \int_0^x {\sin t \over t} dt$ and then the limit is $\lim_{n\to \infty} (F(n+1) - F(n))$, so the answer is 0?
It doesn't seem right, though. - -REPLY [3 votes]: EDIT: Similar to Leg's answer -It is known: $ -1 \le \sin(x) \le 1 \ \forall \ x \in \mathbb{R} $ -$ \Rightarrow \int_n^{n+1} {\sin x \over x} dx \le \ \int_n^{n+1} {1 \over x} dx = \ln(n+1) - \ln(n) = \ln(1+\frac{1}{n}) \xrightarrow{n \to \infty} 0 $ -$ \Rightarrow \int_n^{n+1} {\sin x \over x} dx \ge \ -\int_n^{n+1} {1 \over x} dx = \ln(n) - \ln(n+1) = - \ln(1+\frac{1}{n}) \xrightarrow{n \to \infty} 0 $ -Hence: $\lim\limits_{n\to \infty} \int_n^{n+1} {\sin x \over x} dx = 0 $<|endoftext|> -TITLE: Finding the minimum value of $\sqrt { \frac { a }{ b+c } } +\sqrt [ 3 ]{ \frac { b }{ c+a } } +\sqrt [ 4 ]{ \frac { c }{ a+b } }$ -QUESTION [7 upvotes]: If $a, b, c\ge 0$ with $(a+b)(b+c)(c+a) > 0$, find the minimum of $\sqrt { \frac { a }{ b+c } } +\sqrt [ 3 ]{ \frac { b }{ c+a } } +\sqrt [ 4 ]{ \frac { c }{ a+b } }$. -The minimum is $\frac{3}{\sqrt[3]{4}}$, achieved at $b = 0, \frac{a}{c} = 2^{-4/3}$. -I am not able to make progress on this problem. I tried applying AM-GM, Cauchy, weighted AM-GM, etc., but none seem to provide fruitful results. Please help. -Source: A collection of problems which couldn't be solved by any teacher of my school. -Thanks. - -REPLY [2 votes]: Remark: Here is an ugly solution. Hope to see nicer solutions. -Problem: Let $a, b, c \ge 0$ with $(a+b)(b+c)(c+a)\ne 0$. Find the minimum of -$f(a,b,c) = \sqrt{\frac{a}{b+c}} + \sqrt[3]{\frac{b}{c+a}} + \sqrt[4]{\frac{c}{a+b}}$. -Solution: If $b=0$, by the AM-GM inequality, we have $f = \sqrt{\frac{a}{c}} + \sqrt[4]{\frac{c}{a}} \ge \frac{3}{\sqrt[3]{4}}$, with equality if $\frac{a}{c} = 2^{-4/3}$. -Let us prove that the minimum of $f$ is $\frac{3}{\sqrt[3]{4}}$.
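Before the algebra, the claimed minimum can be probed numerically. The following is a rough sketch of my own (not part of the solution); the function `f` and the constant below simply restate the problem, and the grid/sampling ranges are arbitrary choices:

```python
import random

CLAIMED_MIN = 3 / 4 ** (1 / 3)  # 3/cbrt(4) ≈ 1.88988

def f(a, b, c):
    # The expression from the problem statement.
    return ((a / (b + c)) ** 0.5
            + (b / (c + a)) ** (1 / 3)
            + (c / (a + b)) ** 0.25)

# On the boundary b = 0 the value depends only on t = a/c:
# g(t) = sqrt(t) + t**(-1/4), claimed to be minimized at t = 2**(-4/3).
t0 = 2 ** (-4 / 3)
boundary_min = min(t ** 0.5 + t ** -0.25
                   for t in (t0 * (1 + k / 1000) for k in range(-200, 201)))
assert abs(boundary_min - CLAIMED_MIN) < 1e-3

# Random points should never beat the claimed minimum.
random.seed(1)
interior_min = min(f(random.uniform(0.01, 5), random.uniform(0.0, 5),
                     random.uniform(0.01, 5)) for _ in range(20000))
assert interior_min > CLAIMED_MIN - 1e-9
```

Both checks pass, which is consistent with the minimum being attained on the boundary $b = 0$.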
It suffices to prove that, for $a, c \ge 0$ and $a+c > 0$, -$$\sqrt{\frac{a}{1+c}} + \sqrt[3]{\frac{1}{c+a}} + \sqrt[4]{\frac{c}{a+1}} \ge \frac{3}{\sqrt[3]{4}}$$ -or -$$\sqrt{\frac{2a\sqrt[3]{2}}{1+c}} + \sqrt[3]{\frac{4}{c+a}} + \sqrt[4]{\frac{8c}{(a+1)\sqrt[3]{2}}} \ge 3.$$ -Let -$$\frac{2a\sqrt[3]{2}}{1+c} = y^2, \quad \frac{8c}{(a+1)\sqrt[3]{2}} = x^4.$$ -Correspondingly, we have -$$a = \frac{y^2(x^4\sqrt[3]{2} + 8)}{(16-x^4y^2)\sqrt[3]{2}}, \quad -c = \frac{x^4(y^2 + 2\sqrt[3]{2})}{16-x^4y^2}.$$ -The constraint is $x, y \ge 0; \ x^4y^2 < 16$. It suffices to prove that -$$y + \sqrt[3]{\frac{2\sqrt[3]{2}(16 - x^4y^2)}{x^4y^2\sqrt[3]{2} + x^4\sqrt[3]{4} + 4y^2}} + x \ge 3.$$ -It suffices to prove that, for $x, y \ge 0; \ x^4y^2 < 16; \ x + y < 3$, -$$\frac{2\sqrt[3]{2}(16 - x^4y^2)}{x^4y^2\sqrt[3]{2} + x^4\sqrt[3]{4} + 4y^2} \ge (3-x-y)^3$$ -which is written as (after clearing the denominators) -$$-x^4(3 - x - y)^3 \sqrt[3]{4} + g(x, y)\sqrt[3]{2} - 4y^2(3 - x - y)^3 \ge 0$$ -where $g(x, y) = 32 - 2x^4y^2 - (3-x-y)^3x^4y^2$. -Consider -$$F(q) = -x^4(3 - x - y)^3 q^2 + g(x, y)q - 4y^2(3 - x - y)^3.$$ -Since $F(q)$ is concave and $\frac{5}{4} < \sqrt[3]{2} < \frac{19}{15}$, to prove $F(\sqrt[3]{2}) \ge 0$, it suffices to prove that -$$F(\tfrac{5}{4}) \ge 0, \quad F(\tfrac{19}{15}) \ge 0.$$ -It suffices to prove that, for $x, y\ge 0$ and $x+y \le 3$, -$$F(\tfrac{5}{4}) \ge 0, \quad F(\tfrac{19}{15}) \ge 0.$$ -They are verified by Mathematica. There are ugly proofs. Omitted.<|endoftext|> -TITLE: If $x-y = 5y^2 - 4x^2$, prove that $x-y$ is perfect square -QUESTION [7 upvotes]: Firstly, merry christmas! -I've got stuck at a problem. - -If x, y are nonzero natural - numbers with $x>y$ such that - $$x-y = 5y^2 - 4x^2,$$ - prove that $x - y$ is perfect square. - -What I've thought so far: -$$x - y = 4y^2 - 4x^2 + y^2$$ -$$x - y = 4(y-x)(y+x) + y^2$$ -$$x - y + 4(x-y)(x+y) = y^2$$ -$$(x-y)(4x+4y+1) = y^2$$ -So $4x+4y+1$ is a divisor of $y^2$. 
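A quick brute-force search over small $x, y$ (a throwaway script of my own, just to sanity-check the statement; the search bound 500 is arbitrary) turns up the solution $(x, y) = (38, 34)$, for which $x - y = 4 = 2^2$:

```python
import math

# Brute-force search for x > y >= 1 with x - y == 5*y**2 - 4*x**2.
# The right-hand side is positive only when x < y*sqrt(5)/2, so the
# inner range can be kept short.
solutions = [(x, y)
             for y in range(1, 500)
             for x in range(y + 1, int(y * 5 ** 0.5 / 2) + 2)
             if x - y == 5 * y * y - 4 * x * x]

assert (38, 34) in solutions  # 5*34**2 - 4*38**2 = 5780 - 5776 = 4 = 38 - 34
assert all(math.isqrt(x - y) ** 2 == x - y for x, y in solutions)
```

Every solution found has $x - y$ a perfect square, as the problem claims.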
-I also take into consideration that $y^2$ modulo $4$ is $0$ or $1$ (I don't know if this can help.) -So how do I prove that $4x+4y+1$ is a perfect square (this would imply that $x-y$ is a perfect square)? While trying examples, I couldn't find any perfect square with a divisor that is of the form $M_4 + 1$ and is not itself a perfect square. -If there are any mistakes or another way, please tell me. -Some help would be appreciated. Thanks! - -REPLY [3 votes]: Generalization. Let $a,b$ be integers. If there exist consecutive integers $c,d$ such that $a-b=a^2c-b^2d$, then $|a-b|$ is a perfect square. -Proof. If $c=d+1$, we have $a-b=a^2(d+1)-b^2d=(a-b)(a+b)d+a^2$, so $$a^2=(a-b)(1-d(a+b))$$ Now let $g=\gcd(a-b,\,1-d(a+b))$. We have $g^2\mid a^2$, so $g\mid a$; since $g\mid a-b$, also $g\mid b$; and then $g\mid d(a+b)$ together with $g\mid 1-d(a+b)$ gives $g\mid 1$, so $g=1$. -Since $a-b$ and $1-d(a+b)$ are coprime and their product is a perfect square, we are done. -The case $c+1=d$ is handled similarly. $\blacksquare$<|endoftext|> -TITLE: Reference Request for Fibre Bundle Theory from the Smooth Manifold Point of View -QUESTION [6 upvotes]: I am looking for a book, or a set of notes, which discusses some basic theory of fibre bundles. I am interested more in the geometric aspect (smooth manifolds) rather than the topological aspect. -I found Steenrod's The Topology of Fibre Bundles, which covers the topology part in quite some depth but nothing which discusses the smooth aspect. -Can anybody point me to a nice source? - -REPLY [4 votes]: You can consult: - -Husemöller - Fibre Bundles; -Koszul - Lectures On Fibre Bundles and Differential Geometry, available at http://www.math.tifr.res.in/~publ/ln/tifr20.pdf; -Spivak - A Comprehensive Introduction to Differential Geometry, only volumes 1 and 2.
- And if you are sufficiently experienced, you can consult: - -Kobayashi, Nomizu - Foundations of Differential Geometry, both volumes; -Kolář, Michor, Slovák - Natural Operations in Differential Geometry, available at http://www.emis.de/monographs/KSM/kmsbookh.pdf; -Milnor, Stasheff - Characteristic Classes, a very easy text but very deep!<|endoftext|> -TITLE: Question on a constructive proof of irrationality of $\sqrt 2$ -QUESTION [5 upvotes]: Here is the constructive proof of $\sqrt 2 \not \in \mathbb Q$ found on this page: - -Given positive integers $a$ and $b$, because the valuation (i.e., highest power of 2 dividing a number) of $2b^2$ is odd, while the valuation of $a^2$ is even, they must be distinct integers; thus $|2 b^2 - a^2| \geq 1$. Then -$$\left|\sqrt2 - \frac{a}{b}\right| = \frac{|2b^2-a^2|}{b^2(\sqrt{2}+a/b)} \ge \frac{1}{b^2(\sqrt2 + a / b)} \ge \frac{1}{3b^2},$$ -the latter inequality being true because we assume $\frac{a}{b} \leq 3- \sqrt{2}$ (otherwise the quantitative apartness can be trivially established). - -I don't understand why the first equality holds: why is it possible to divide by $\sqrt 2 + a/b$, since it is not yet known whether this number is zero... ? -Thank you in advance for your comments! - -REPLY [5 votes]: $\sqrt{2}$ and $a/b$ are both (strictly) positive, so $\sqrt{2} + a/b$ cannot be zero. - -REPLY [2 votes]: This quantity cannot be zero because it is given that $a$ and $b$ are positive integers; thus $a/b$ cannot be zero, and so $\sqrt 2 + a/b > 0$.<|endoftext|> -TITLE: Proving that the Lebesgue integral over a measurable function $f$ is equal to the area/volume below the graph of $f$ -QUESTION [6 upvotes]: Given a Borel set $A \subseteq \mathbb{R}^d, d ≥ 1$ and a measurable function $f: A \to [0, \infty)$, I want to consider the set: -$$E = \{(x, y) \in \mathbb{R}^{d+1}: x \in A, 0 ≤ y ≤ f(x)\} \subseteq \mathbb{R}^{d+1}$$ -I first want to show that $E$ is a Borel set.
Then, I want to prove that -$$\lambda_{d+1}(E) = \int_A f(x)\, d \lambda_d(x)$$ -where $\lambda_d$ is the $d$-dimensional Lebesgue measure. -I unfortunately wasn't even successful in showing that $E$ is a Borel set so far. I first thought that one could write $E$ as the product of two Borel sets ($E = A \times \text{another Borel set}$), but I then realized that it isn't that simple, seeing as the $y$ in a vector $(x, y) \in E$ depends on $x$. Maybe one could construct a clever measurable function that sends $E$ onto a measurable set in $\mathbb{R}$, or something like that? I'm not really all that sure, though. -Once it is established that $E$ is measurable, wouldn't the second part follow more or less directly from Fubini's theorem? -Also, I think the intuition behind this exercise is to acknowledge that, in the case $d = 1$, the Lebesgue integral of $f$ over $A$ is nothing but the area in between the graph of $f$ and the $x$-axis; for $d = 2$, it's the volume, and so on. I'm not really sure how that helps me (formally) show it. - -REPLY [2 votes]: (I will assume that $f$ is Borel measurable, and extend $f$ to all of $\Bbb R^d$ by setting $f(x)=-1$ for $x\in A^c$.) Think of $E$ as the inverse image $g^{-1}([0,\infty))$, where $g:\Bbb R^d\times[0,\infty)\to\Bbb R$ is defined by $g(x,y)=f(x)-y$. Then $g$ is the composition of the $\mathcal B^d\otimes\mathcal B_+/\mathcal B^2$ map $(x,y)\to (f(x),y)$ with the continuous map $\Bbb R^2\ni(u,v)\to u-v\in\Bbb R$. It follows that $g$ is $\mathcal B^d\otimes\mathcal B_+/\mathcal B$-measurable, hence $E\in \mathcal B^d\otimes\mathcal B_+$. -Similar considerations apply if $f$ is Lebesgue measurable.<|endoftext|> -TITLE: Finding all possible integer solutions to $x_1+x_2+ \ldots$ where the variables have coefficients -QUESTION [5 upvotes]: I know how to find integer solutions to equations of the form $x_1+x_2+x_3=n$. You would use stars and bars and do ${n+2}\choose{2}$. -But what if the equation is of the form $x_1+3x_2+4x_3=n$.
This is for the problem where you want to distribute $n$ candies among 3 different-sized boxes. One size holds one candy, another holds $3$ candies, and the other holds $4$ candies. And each box must be completely filled. -Is it even possible to use the equation approach for this problem? And if so, how? - -REPLY [2 votes]: In this case, your problem is analogous to: - -Finding the coefficient of $t^n$ in the expansion of - $$(1+t+t^2+t^3+\dots)(1+t^3+t^6+t^9+\dots)(1+t^4+t^8+t^{12}+\dots)$$ - -You can relate your question to this in the following way: -The general term of the above expansion is $t^{x_1}\cdot t^{3x_2}\cdot t^{4x_3}=t^{x_1+3x_2+4x_3}$, -and we are requiring the coefficient of $t^n$. -So the coefficient will be equal to the number of solutions of $x_1+3x_2+4x_3=n$.<|endoftext|> -TITLE: Replacing in equation introduces more solutions -QUESTION [6 upvotes]: Let's say I have an equation $y=2-x^2-y^2$. -Now, since I know that $y$ is exactly the same as $2-x^2-y^2$, I can create the following equation by replacing $y$ with $2-x^2-y^2$: -$y=2-x^2-(2-x^2-y^2)^2$ -Doing this replacement introduces new solutions such as $(-1, 0)$. Replacements in various other equations have similar results, although some do not change the equation at all! -What mechanism introduces these new solutions, and what are they? -Edit: one such example of an equation where no solutions are introduced via replacement is $y=x^2+y^2$. That will give $y=x^2+(x^2+y^2)^2$, which upon graphing is the same graph as the original, $y=x^2+y^2$. -Here is an image of the iteration of this replacement on the same function, just for fun. - -REPLY [5 votes]: Suppose first that $f$ is a real-valued function of one variable. The equation -$$ -x = f(x) -\tag{1} -$$ -acts as a condition, selecting values of $x$ for which (1) is true. Substituting (1) into itself gives a new condition, -$$ -x = f\bigl(f(x)\bigr) = f^{[2]}(x), -\tag{2} -$$ -and so forth. -Certainly every solution of (1) is a solution of (2).
If the function $f$ is not injective (one-to-one), however, (2) can have solutions that are not solutions of (1). -For example, if $f(x) = 4x(1 - x)$, then $f$ maps $[0, 1]$ onto $[0, 1]$. Every point of $[0, 1)$ has two preimages (while $1$ has only the preimage $x = \frac{1}{2}$), and $f$ maps each interval $[0, \frac{1}{2}]$ and $[\frac{1}{2}, 1]$ bijectively to $[0, 1]$. It follows that $f \circ f$ maps each half-interval onto $[0, 1]$, and each point of $[0, 1]$ has two preimages in each half-interval, so $f^{[2]} = f \circ f$ has four fixed points, etc. (diagram omitted). In this example, the $n$-fold composition of $f$ with itself, $f^{[n]}$, has $2^{n}$ fixed points, i.e., the equation -$$ -x = f^{[n]}(x) = (\underbrace{f \circ \dots \circ f}_{n\ \text{times}})(x) -\tag{n} -$$ -has $2^{n}$ solutions, even though (n) is obtained from (1) by successively substituting (1) into itself. - -Your situation is analogous: Starting from -$$ -y = 2 - x^{2} - y^{2} = f(x, y), -\tag{1a} -$$ -you substitute (1a) into itself, obtaining -$$ -y = 2 - x^{2} - f(x, y)^{2} = f\bigl(x, f(x, y)\bigr), -\tag{2a} -$$ -and so forth. -In your example, $f$ has qualitatively similar behavior along the $y$-axis to the "logistic map" $f(x) = 4x(1 - x)$, and the solution sets -$$ -y = f\Bigl(\dots f\bigl(x, f(x, y)\bigr)\dots\Bigr) -$$ -become increasingly complicated with successive iteration. -This type of phenomenon (e.g., the precise location/shape of the solutions) is generally complicated (chaotic in the technical sense). Wikipedia pages of possible interest include: - -The logistic map -The tent map -The horseshoe map -Dynamical systems<|endoftext|> -TITLE: Balanced cutting of a convex polygon -QUESTION [7 upvotes]: Given a convex polygon $C$ and a number $R\geq 1$, say that a point $x$ is an $R$-balance-point of $C$ if every line through $x$ divides $C$ into two parts $C_1,C_2$ such that: -$$1/R \leq Area(C_1)/Area(C_2)\leq R$$ -Some polygons have a 1-balance-point, e.g.
the centroid of a rectangle or an ellipse, since every line through it cuts $C$ into two parts of equal area. -Initially I thought that every convex polygon has a 1-balance-point, but then I found a counter-example. Consider the unit right-angled isosceles triangle, whose total area is 0.5: - -Suppose by contradiction that it has a 1-balance-point, H. Then the vertical line through H must cut off a triangle of area 0.25, so it must have $x = 1-\sqrt{0.5} \approx 0.29$. Similarly, the horizontal line through H must have $y = 1-\sqrt{0.5} \approx 0.29$. This means that H must be the point (0.29,0.29). But the line through H at angle $135^\circ$ from the x axis cuts off a triangle of area $\approx 0.17$. -So, my question is: what is the smallest $R$ such that every convex polygon has an $R$-balance-point? -(and what is the standard name of this point?) - -REPLY [2 votes]: I believe the answer given by @orangeskid is correct. To see that the balance point is the center of mass for triangles, note that the balance point is preserved under rotations and also under scaling of the axes (because these carry lines to lines, preserve incidence of points and lines, and scale the area of any region by a common factor). You can transform any triangle to an equilateral triangle with these operations. For an equilateral triangle, the balance point is the center of mass. One way to see that the center of mass is the answer is by noting that the balance point must be the same when the triangle is rotated by 120 or 240 degrees around the center of the triangle.
And finally, the center of mass is preserved under rotations and scaling of the axes, so the center of mass is the balance point for any triangle.<|endoftext|> -TITLE: Homeomorphism between $\mathcal{C}(X,\Omega Y)$ and $\mathcal{C}(\Sigma X, Y)$ -QUESTION [5 upvotes]: It is easy to see that there is a natural bijection between $\mathcal{C}(X,\Omega Y)$ and $\mathcal{C}(\Sigma X, Y)$, where $\Omega Y$ is the based loop space, $\Sigma X$ is the reduced suspension, and $X$ and $Y$ are based spaces. -Now $\mathcal{C}(*,*)$ and also $\Omega Y$ can be given the compact-open topology. Is the natural bijection mentioned above actually a homeomorphism with this topology? -I can prove this is true if I assume $X$ is a compact space. -I have shown that there is an obvious map $\Phi : \mathcal{C}(X,\Omega Y)\to \mathcal{C}(X\times S^1, Y)$ and that $\Phi$ is a homeomorphism onto its image. From $\text{Im}\,{\Phi}$ I can define a map into $\mathcal{C}(\Sigma X,Y)$, but to show that this map is continuous I need $X$ to be compact. The composition is then the natural bijection. -Is compactness of $X$ necessary to prove the statement? Can it be generalized to, say, locally compact spaces or compactly generated spaces? -Any help regarding this is appreciated. - -REPLY [6 votes]: As Rolf Sievers already indicated in the comments, this is true if you work in the category of compactly generated spaces. But be aware that the function spaces do not carry the compact-open topology in this situation, but the compactly generated refinement of it instead. -If you don't want to modify the topology on the function spaces, you have to put stronger conditions on one of the spaces: if $X$ is locally compact, $\Phi$ is a homeomorphism using the regular compact-open topology and any space $Y$.
A proof of this appears in Tammo tom Dieck's book on algebraic topology (see Theorem 2.4.11, using $\Sigma X\cong S^1\wedge X$).<|endoftext|> -TITLE: Maximum number of edges in a planar graph without $3$- or $4$-cycles -QUESTION [5 upvotes]: What is the largest possible number of edges in a planar graph without $3$- or $4$-cycles? - -I've been unsuccessfully trying to solve this problem from my book. I know that every planar graph without $3$-cycles has at most $2n - 4$ edges, though I'm not sure about graphs without $4$-cycles. - -REPLY [3 votes]: If a planar graph $G$ contains no $3$- or $4$-cycles, then given a planar embedding of $G$, every face is bounded by at least $5$ edges. Then, since every edge is incident to exactly two faces, we have $5f \leq 2e$. Combining this with Euler's formula, -$$\begin{align} - f-e+n = 2 &\implies 5f-5e+5n = 10 \\ - &\implies 2e-5e+5n \geq 10 \\ - &\implies 5n-10 \geq 3e \\ - &\implies e \leq \left\lfloor\frac{5n-10}{3}\right\rfloor -\end{align}$$<|endoftext|> -TITLE: A formula to convert a counter-clockwise angle to clockwise angle with an offset -QUESTION [9 upvotes]: I have an angle in the coordinate system where $0^\circ$ is East, $90^\circ$ is North, $180^\circ$ is West, and $270^\circ$ is South. -I need to convert it to the one where $0^\circ$ is North, $90^\circ$ is East, $180^\circ$ is South, and $270^\circ$ is West. Is there a standard algorithm for this sort of transformation? - -REPLY [2 votes]: Here is a formula of the kind you are asking for. Let $E=E_2-E_1$ and $N=N_2-N_1$; it works for any values of $E$ and $N$.
-$$f(E,N)=\pi-\frac{\pi}{2} \left(1+\text{sign }N\right) (1-\text{sign }E^2)-\frac{\pi}{4} \left(2+\text{sign }N\right) \text{sign }E -\text{sign } (N E) \ \text{atan }\frac{\lvert N\rvert-\lvert E\rvert}{\lvert N\rvert+\lvert E\rvert}$$ -The formula gives a clockwise angle from $0$ (north) up to $2\pi$.<|endoftext|> -TITLE: Compact Hausdorff Spaces with pre-caliber $\aleph_1$ have caliber $\aleph_1$ -QUESTION [7 upvotes]: Let us recall that a topological space has pre-caliber $\aleph_1$ (resp. caliber $\aleph_1$) if given any family of open sets $\{U_\alpha\}_{\alpha<\omega_1}$ there exists an uncountable set $B\subset\omega_1$ such that the subfamily $\{U_\alpha\}_{\alpha\in B}$ has the finite intersection property (resp. $\bigcap_{\alpha\in B} U_\alpha\neq\emptyset$). -I was trying to prove that a compact Hausdorff space has pre-caliber $\aleph_1$ if and only if it has caliber $\aleph_1$. In this regard, I would construct a subfamily of closed sets with the FIP inside the family $\{U_\alpha\}_{\alpha\in B}$ and then, using compactness, ensure that $\bigcap_{\alpha\in B} U_\alpha\neq\emptyset$. Probably using the normality of the space $X$ we can construct that subfamily, but I'm not sure. -Can anybody help me? Thanks in advance. - -REPLY [2 votes]: Let $F=\{U_\alpha :\alpha\in \omega_1\}$ be an open family with $\varnothing\not \in F$. For $\alpha\in \omega_1,$ let $V_\alpha$ be open with $\varnothing\ne V_\alpha\subset \overline {V_\alpha}\subset U_\alpha$. Let $B$ be an uncountable subset of $\omega_1$ such that $\{V_\alpha :\alpha\in B\}$ has the F.I.P. Then $\{\overline V_\alpha :\alpha\in B\}$ has the F.I.P., so by compactness we have $\varnothing\ne \bigcap_{\alpha\in B}\overline {V_\alpha}\subset \bigcap_{\alpha\in B}U_\alpha$.<|endoftext|> -TITLE: Cardinality as "size of a set" -QUESTION [10 upvotes]: Given the discussions on Refuting the Anti-Cantor Cranks, in which I left a comment that I suspected would not get any attention, I decided to start a new thread instead.
-Now, I can't read the minds of "anti-Cantorians", so I don't really know what their objections are. But I get the impression that most reasonable objections aren't based upon the validity of the proofs that $\mathbb{R}$ has higher cardinality than $\mathbb{N}$. That seems pretty indisputable. -But a viewpoint I personally haven't been able to shake is the notion of cardinality as capable of describing "sizes" of sets. Now, different people may have different views on which criteria this notion of size absolutely needs to adhere to and which we can discard, but I think it's certainly conceivable to require that any reasonable notion of size needs to satisfy $B \subsetneq A \Rightarrow \text{size}(B) < \text{size}(A).$ -Starting from the assumption that $B \subsetneq A \Rightarrow \text{size}(B) < \text{size}(A)$, the statement "if two sets can be put into one-to-one correspondence with each other, then they are of the same size" leads to a contradiction and can therefore not be true, leading us to a chicken-and-egg axiom argument with no right answer. -Which makes me wonder. Could the interpretation of cardinality as the "size of a set" simply be seen as a mnemonic? Am I wrong in assuming all mathematical results relating to cardinalities of sets will be as relevant without thinking of it as a "size"? -What does the interpretation really buy us, and what would we lose by leaving the notion of the size of an infinite set undefined? -EDIT: The very nice discussion in Relative sizes of sets of integers and rationals revisited - how do I make sense of this? is probably sufficient for this question, but I'll reiterate my position since it apparently wasn't as clearly formulated as I thought, given the down-votes. -We want a nice, intuitive notion of the "size" (or better yet, "numerosity") of a set. So we make a list of properties that we know hold for finite sets and that ideally we would want to hold for infinite sets.
Two of these: -1) If you can take all the elements of set $A$ and pair each element with a unique member of set $B$, leaving no member of $B$ unpaired, then $A$ and $B$ are "of the same size". -2) If you take a set $A$ and proceed by removing some elements from it, then you will have a set smaller in size than you started out with. -Taking the first one as an axiom leads to the concept of "cardinality", causing property 2 to lead to a contradiction, while taking the second one as an axiom leads to the first one causing a contradiction. Clearly we can't have both of them. -From some answers and comments, you get the impression that expecting the second property to hold is a "fallacy", sprung from some naive expectation that properties of finite sets should automatically carry over to infinite sets. But it seems to me that the same could be said for the first property. The only reason to prefer using #1 as a definition seems to be "more interesting mathematics springs from it", which is clearly an excellent reason - but it doesn't make it a more natural candidate for capturing the notion of size than #2. -EDIT #2: There was an excellent point brought up in the question I linked above, which is the formulation of various closure operations on sets - for example, the convex hull of a set $A$ is the smallest convex set containing $A$. So letting $A = \{0,1\}$, what's its convex hull? Well, $[0,1]$ certainly can't be the unique correct answer, since $\mathbb{R}$ is of the same size! Not that this invalidates the cardinality concept, but it shows that maybe we should be careful with equating cardinality with size. - -REPLY [2 votes]: Why should any reasonable notion of size satisfy $A \subsetneq B \implies \text{size}(A) < \text{size}(B)$? -Mass is a good notion of "size", but we can have a bar of chocolate surrounded by vacuum and it does not respect your property (ignoring any physics "mumbo-jumbo").
Mathematically, measure (in the context of measure theory) is also a good notion of "size", and it does not respect your property either. -Not only that, but cardinality is not a measurement of "size". It measures "correspondencicity". This is true even for finite sets. When we count them (for example, with fingers), we are corresponding each finger with an element of the finite set. For instance, if you take $10$ balls of steel and put them scattered in a closed room, the "size" that they are enclosing is very big. If you cluster them on the table in that room, not so much. However, I can correspond to each element of the previous arrangement an element of the new arrangement in a bijective manner. This is the intuition from finite sets. - -As a sidenote, based on the comments, I feel this personal input may be useful: In my understanding, intuition is when you work with, see, or deal with something on a regular basis and have developed an acquaintance with the subject sufficient for you to be able to infer something without a clear logical concatenation. This seems to agree with the entry on Wiktionary for intuition: - -Noun, intuition (plural intuitions) - -Immediate cognition without the use of conscious rational processes. -A perceptive insight gained by the use of this faculty. - -and also with the "colloquial usage" section: "Intuition, as a gut feeling based on experience, (...)" -Therefore, a person who has dealt with finite sets has no intuition about infinite sets. Period. What he has is naivety: - -Naivety (...) is the state of being naïve, that is to say, having or showing a lack of experience, understanding or sophistication, often in a context where one neglects pragmatism in favor of moral idealism. - -or - -Adjective, naive -Lacking worldly experience, wisdom, or judgement; unsophisticated. -(of art) Produced in a simple, childlike style, deliberately rejecting sophisticated techniques.
- -And one of the utmost goals of Mathematics is to get rid of naivety.<|endoftext|> -TITLE: Product of two series to get a series decomposition of zeta in the critical strip -QUESTION [6 upvotes]: $\def\sfrac#1#2{% - \small#1% - \kern-.05em\lower0.1ex/\kern-.025em% - \lower0.4ex\small#2}$I've been working on gaining an intuitive understanding of the analytic continuation of the zeta function, but I've gotten stuck at this part where I have to multiply two very strange series together. -The approach is quite simple: first start with the Dirichlet eta function, defined by $η(s) = \frac{1}{1^s} + \frac{-1}{2^s} + \frac{1}{3^s} + \frac{-1}{4^s} + ... $. Then the following relationship holds: -$ζ(s) = η(s) · \frac{1}{1-\frac{2}{2^s}}$ -If $\Re\{s\} > 1$, then the right factor can be expanded into a Taylor series, yielding -$ζ(s) = \left(\frac{1}{1^s} + \frac{-1}{2^s} + \frac{1}{3^s} + \frac{-1}{4^s} + ... \right) \cdot \left(\frac{1}{1^s} + \frac{2}{2^s} + \frac{4}{4^s} + \frac{8}{8^s} + ... \right) \\ -= \frac{1}{1^s} + \frac{1}{2^s} + \frac{1}{3^s} + \frac{1}{4^s} + ...$ -where the product can be evaluated as a Dirichlet convolution. -This is because the image of $\frac{2}{2^s}$ lies within the radius of convergence of the $1 + x + x^2 + x^3 + ...$ series expansion of $\frac{1}{1-x}$, so the Taylor expansion is valid. -However, if $\Re\{s\} < 1$, then the image of $\frac{2}{2^s}$ lies outside the ROC, so that expansion doesn't work. Instead, we can take advantage of the functional equation -$f(x) + f(\frac{1}{x}) = 1$ -where $f(x) = \frac{1}{1-x}$. -Substituting in $x=\frac{2}{2^s}$, and allowing that $\frac{1}{\left(\frac{2}{2^s}\right)} = \frac{\sfrac{1}{2}}{{\sfrac{1}{2}}^s}$, you get -$\frac{1}{1-\frac{2}{2^s}} = 1 - \frac{1}{1-\frac{\sfrac{1}{2}}{{\sfrac{1}{2}}^s}}$ -where $\frac{\sfrac{1}{2}}{{\sfrac{1}{2}}^s}$ now has an image lying within the ROC of the aforementioned series expansion for $\Re\{s\} < 1$. 
So we can expand the whole right side as -$\frac{1}{1-\frac{2}{2^s}} = \frac{-\sfrac{1}{2}}{{\sfrac{1}{2}}^s} + \frac{-\sfrac{1}{4}}{{\sfrac{1}{4}}^s} + \frac{-\sfrac{1}{8}}{{\sfrac{1}{8}}^s} + \frac{-\sfrac{1}{16}}{{\sfrac{1}{16}}^s} + ...$ -Finally, putting the whole thing together, we get -$ζ(s) = \left(\frac{1}{1^s} + \frac{-1}{2^s} + \frac{1}{3^s} + \frac{-1}{4^s} + ... \right) \cdot \left(\frac{-\sfrac{1}{2}}{{\sfrac{1}{2}}^s} + \frac{-\sfrac{1}{4}}{{\sfrac{1}{4}}^s} + \frac{-\sfrac{1}{8}}{{\sfrac{1}{8}}^s} + \frac{-\sfrac{1}{16}}{{\sfrac{1}{16}}^s} + ...\right)$ -Which, for $0 < \Re\{s\} < 1$, is the product of a conditionally convergent Dirichlet series, and an absolutely convergent "fractional Dirichlet series." -And here I'm stumped. How can I expand this product? I understand the result should be some kind of "fractional Dirichlet series" where the denominators are dyadic rationals raised to the power of s, and I understand I basically want to perform some kind of Dirichlet convolution type thing here. -But how do I actually do it? What does the resulting expression look like? - -REPLY [5 votes]: the series for $\eta(s)$ is absolutely convergent for $\Re(s) > 0$ if you group the terms by two : -$\eta(s) = \displaystyle\sum_{n=1}^\infty (2n-1)^{-s} - (2n)^{-s} = \sum_{n=1}^\infty \mathcal{O}(s (2n)^{-s-1})$ ( from the Taylor expansion of order 1 of $(1-x)^{-s}$ when $x \to 0$) -$\eta(s) = (1-2^{1-s}) \ \zeta(s)$ and for $\displaystyle \Re(s) < 1 : \ \ \frac{1}{1-2^{1-s}} = -\frac{2^{s-1}}{1-2^{s-1}} = - \sum_{k=1}^\infty 2^{k(s-1)}$. -you get : -$$\zeta(s) = - \sum_{k=1}^\infty 2^{k(s-1)} \eta(s) = - \sum_{n,k} \left( (2n-1)^{-s} - (2n)^{-s} \right) 2^{k(s-1)}$$ -which is an absolutely convergent double sum for $\Re(s) \in ]0;1[$.<|endoftext|> -TITLE: Are the natural numbers implicit in the construction of first-order logic? If so, why is this acceptable? -QUESTION [18 upvotes]: I have recently been reading about first-order logic and set theory. 
I have seen standard set theory axioms (say ZFC) formally constructed in first-order logic, where first-order logic is used as an object language and English is used as a metalanguage. -I'd like to construct first-order logic and then construct an axiomatic set theory in that language. In constructing first-order logic using English, one usually includes a countably infinite number of variables. However, it seems to me that one needs a definition of "countably infinite" in order to define these variables. A countably infinite collection (we don't want to call it a set yet) is a collection which can be put into 1-1 correspondence with the collection of natural numbers. It seems problematic to me that one appears to be implicitly using a notion of natural numbers to define the thing which then defines natural numbers (e.g. via von Neumann's construction). Is this a legitimate concern, or is there an alternate definition of "countably infinite collection" I should use? If not, could someone explain to me why not? -I think that one possible solution is to simply assume whatever set-theoretic axioms I wish using clear and precise English sentences, define the natural numbers from there, and then define first-order logic as a convenient shorthand for the clear and precise English I am using. It seems to me that first-order logic is nothing but shorthand for clear and precise English sentences anyway. What the exact ontological status of English is, and whether or not we are justified in using it as above, are unresolvable philosophical questions, which I am willing to acknowledge and then ignore because they aren't really in the realm of math. -Does this seem like a viable solution, and is my perception of first-order logic as shorthand for clear and precise English (or another natural language) correct? -Thanks so much in advance for any help and insight!
- -REPLY [10 votes]: There are three inter-related concepts: - -The natural numbers -Finite strings of symbols -Formulas - particular strings of symbols used in formal logic. - -If we understand any one of these three, we can use that to understand all three. -For example, if we know what strings of symbols are, we can model natural numbers using unary notation and any particular symbol serving as a counter. -Similarly, the method of proof by induction (used to prove universal statements about natural numbers) has a mirror image in the principle of structural induction (used to prove universal statements about finite strings of symbols, or about formulas). -Conversely, if we know what natural numbers are, we can model strings of symbols by coding them as natural numbers (e.g. with prime power coding). -This gives a very specific argument for the way in which formal logic presupposes a concept of natural numbers. Even if we try to treat formal logic entirely syntactically, as soon as we know how to handle finite strings of symbols, we will be able to reconstruct the natural numbers from them. -The underlying idea behind all three of the related concepts is a notion that, for lack of a better word, could be described as "discrete finiteness". It is not based on the notion of "set", though - if we understand what individual natural numbers are, this allows us to understand individual strings, and vice versa, even with no reference to set theory. But, if we do understand sets of natural numbers, then we also understand sets of strings, and vice versa. -If you read more classical treatments of formal logic, you will see that they did consider the issue of whether something like set theory would be needed to develop formal logic - it is not. These texts often proceed in a more "finitistic" way, using an inductive definition of formulas and a principle of structural induction that allows us to prove theorems about formulas without any reference to set theory. 
This method is still well known to contemporary mathematical logicians, but contemporary texts are often written in a way that obscures it. The set-theoretic approach is not the only approach, however.<|endoftext|>
-TITLE: Express "if true, then 1 else 0" in a formula suitable for Desmos calculator
-QUESTION [10 upvotes]: In programming, often the value of True is also 1, and False is 0.
-This means that:
-(x>5)*4
-
-will return 4 if x is greater than 5 (because (x>5)==1), else 0.
-I need to accomplish a similar thing using mathematical operators (no piecewise functions, this has to be typed into Desmos calculator.)
-Specifically, I need
-$$
-f(x)=\begin{cases} 1&&\text{if}~ x\leq n \\ 0&&\text{otherwise} \end{cases}
-$$
-without having to use piecewise notation. Here $n$ is a positive integer, as is $x$.
-
-REPLY [7 votes]: To simplify Hardy's answer, a function I commonly use is
-$$\frac{1}{2} + \frac{n-x}{2|n-x|}$$
-If you want to make calculations less taxing by working only with integers during calculations, this is obviously the same as
-$$\frac{1+\frac{n-x}{|n-x|}}{2}$$
-This solution immediately arises from the fact that $\frac{n-x}{|n-x|}$ is either $1$ when $x < n$ or $-1$ when $x > n$.
-This is obviously undefined for equality of the two variables, although you can add a factor to the top and bottom to change the undefined point, such as
-$$\frac{1}{2} + \frac{n-x+1}{2|n-x+1|}$$
-which is undefined not at equality, but when $x=n+1$
-This is not needed however, as Desmos has native support for piecewise functions... see this link for an explanation by the Desmos devs<|endoftext|>
-TITLE: Is $\frac{1}{2}\|x\|^2$ the only function that is equal to its convex conjugate?
-QUESTION [12 upvotes]: Is $\frac{1}{2}\|x\|^2$ the only function that is equal to its convex conjugate?
-The convex conjugate is defined as
-$$
-f^{*}(x) = \sup_y\{\langle x, y\rangle - f(y)\}.
-$$
-
-REPLY [14 votes]: This is an elementary result by J.-J.
Moreau himself (see Proposition 9(a) of his famous "Proximité et dualité dans un espace hilbertien").
-Let's prove it. So, let $X$ be a Hilbert space with inner product $(x,y) \mapsto \langle x, y\rangle$ and norm $x \mapsto \|x\| := \langle x, x\rangle^{1/2}$. Let $f:X \rightarrow (-\infty,+\infty]$ be an extended real-valued function. Define the convex conjugate of $f$, denoted $f^*$, by
-$$f^*(y) := \sup_{x \in X}\langle x, y\rangle - f(x), \; \forall y \in X.$$
-Note that $f^*$ is always convex l.s.c. (being the supremum of affine functions) without any assumptions whatsoever on $f$. By the above definition of $f^*(y)$, it's clear that for any pair $(x, y) \in X^2$, we have $f^*(y) \ge \langle x, y \rangle - f(x)$, and so
-
-$$f(x) + f^*(y) \ge \langle x, y \rangle.
-$$
-
-This is just the Fenchel-Young inequality, which holds absolutely, without any assumption like convexity of $f$, etc. We now show that $f^* = f$ iff $f = \frac{1}{2}\|.\|^2$.
-($\impliedby$) For any $y\in X$, we have $$(\frac{1}{2}\|.\|^2)^*(y) := \sup_{x \in X}\langle x, y\rangle - \frac{1}{2}\|x\|^2 = \sup_{x \in X}\frac{1}{2}\|y\|^2 - \frac{1}{2}\|x - y\|^2 = \frac{1}{2}\|y\|^2.$$
-($\implies$) Suppose $f^* = f$. Then evaluating the Fenchel-Young inequality with $y = x$, we get $2f(x) \ge \langle x, x\rangle$, i.e.
-$f^*(x) = f(x) \ge \frac{1}{2}\|x\|^2$ for all $x \in X$. Also, because $f$ must be convex l.s.c., it equals its convex biconjugate $f^{**}$, and so we have
-$$
-f(x) = f^{**}(x) := \sup_{y \in X}\langle x, y\rangle - f^*(y) \le \sup_{y \in X}\langle x, y\rangle - \frac{1}{2}\|y\|^2 =: (\frac{1}{2}\|.\|^2)^*(x) =
-\frac{1}{2}\|x\|^2.
-$$ -Thus, $f = \frac{1}{2}\|.\|^2.$<|endoftext|> -TITLE: Better proof for $\frac{1+\cos x + \sin x}{1 - \cos x + \sin x} \equiv \frac{1+\cos x}{\sin x}$ -QUESTION [7 upvotes]: It's required to prove that -$$\frac{1+\cos x + \sin x}{1 - \cos x + \sin x} \equiv \frac{1+\cos x}{\sin x}$$ -I managed to go about out it two ways: - -Show it is equivalent to $\mathsf{true}$: -$$\frac{1+\cos x + \sin x}{1 - \cos x + \sin x} \equiv \frac{1+\cos x}{\sin x}$$ -$$\Longleftrightarrow\sin x(1+\cos x+\sin x)\equiv(1+\cos x)(1-\cos x+\sin x)$$ -$$\Longleftrightarrow\sin x+\cos x\sin x+\sin^2 x\equiv1-\cos x+\sin x+\cos x-\cos^2 x+\sin x \cos x$$ -$$\Longleftrightarrow\sin^2 x\equiv1-\cos^2 x$$ -$$\Longleftrightarrow\cos^2 x +\sin^2 x\equiv1$$ -$$\Longleftrightarrow \mathsf{true}$$ -Multiplying through by the "conjugate" of the denominator: -$${\rm\small LHS}\equiv\frac{1+\cos x + \sin x}{1 - \cos x + \sin x} $$ -$$\equiv\frac{1+\cos x + \sin x}{1 - (\cos x - \sin x)} ~~\cdot ~~\frac{1+(\cos x - \sin x)}{1 +(\cos x - \sin x)}$$ -$$\equiv\frac{(1+\cos x + \sin x)(1+\cos x - \sin x)}{1 - (\cos x - \sin x)^2}$$ -$$\equiv\frac{1+\cos x - \sin x+\cos x + \cos^2 x - \sin x \cos x+\sin x + \sin x \cos x - \sin^2 x}{1 - \cos^2 x - \sin^2 x + 2\sin x \cos x}$$ -$$\equiv\frac{1+ 2\cos x + \cos^2 x- \sin^2 x}{2\sin x \cos x}$$ -$$\equiv\frac{1+ 2\cos x + \cos^2 x- 1 + \cos^2 x}{2\sin x \cos x}$$ -$$\equiv\frac{2\cos x (1+\cos x)}{2\cos x(\sin x)}$$ -$$\equiv\frac{1+\cos x}{\sin x}$$ -$$~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~\equiv {\rm\small RHS}~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~\square$$ -Both methods of proof feel either inelegant or unnecessarily complicated. Is there a simpler more intuitive way to go about this? Thanks. 
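A quick numerical spot-check of the claimed identity (a minimal Python sketch; the sample points are arbitrary, chosen only so that neither denominator vanishes):

```python
import math

def lhs(x):
    return (1 + math.cos(x) + math.sin(x)) / (1 - math.cos(x) + math.sin(x))

def rhs(x):
    return (1 + math.cos(x)) / math.sin(x)

# Check agreement at a few points in (0, pi), away from the poles.
for x in (0.3, 1.0, 2.0, 2.9):
    assert math.isclose(lhs(x), rhs(x), rel_tol=1e-12)
```

Of course this is evidence rather than a proof; it only guards against a typo in the statement.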
- -REPLY [6 votes]: For fun, I created a trigonograph: - -$$\frac{1 + \cos\theta + \sin\theta}{1 + \sin\theta - \cos\theta} = \frac{1 + \cos\theta}{\sin\theta}$$<|endoftext|> -TITLE: Inequality olympiad -QUESTION [5 upvotes]: For all positive numbers $a,b,c$, prove that -$$\frac{a^3}{b^2-bc+c^2}+\frac{b^3}{a^2-ac+c^2}+\frac{c^3}{a^2-ab+b^2}\geq 3 \frac{(ab+bc+ac)}{a+b+c}$$ -Note that both side are homogeneous of degree 1, so I think it is safe to assume $a+b+c=1$ but this does not go very far. -Any ideas/hint? -Thanks - -REPLY [6 votes]: By Cauchy-Schwarz inequality: -$$\sum_{\text{cyc}}\dfrac{a^3}{b^2-bc+c^2}\left(\sum_{\text{cyc}}a\left(b^2-bc+c^2\right)\right)\ge \left(a^2+b^2+c^2\right)^2$$ -In fact, you can prove the following stronger inequality: -$$\frac{\left(a^2+b^2+c^2\right)^2}{\sum_{\text{cyc}}a\left(b^2-bc+c^2\right)}\ge a+b+c\ge3\dfrac{ab+bc+ac}{a+b+c}$$ -This holds: -$$\iff \left(a^2+b^2+c^2\right)^2\ge (a+b+c)\sum_{\text{cyc}}a\left(b^2-bc+c^2\right)$$ -$$\iff a^4+b^4+c^4+abc(a+b+c)\ge ab\left(a^2+b^2\right)+bc\left(b^2+c^2\right)+ac\left(a^2+c^2\right)$$ -The last step is true by Schur's inequality, where $t=2$.<|endoftext|> -TITLE: Does a smooth "transition function" with bounded derivatives exist? -QUESTION [20 upvotes]: Does there exist a function $f: \mathbb{R} \to \mathbb{R}$ having the following properties? - -$f(x) = 0$ for all $x \le 0$. -$f(x) = 1$ for all $x \ge 1$. -For $0 < x < 1$, $f$ is strictly increasing. -$f$ is everywhere $C^\infty$. -The sequence of $L^\infty$ norms $\langle \left\lVert f \right\rVert_\infty, \left\lVert f' \right\rVert_\infty, \left\lVert f'' \right\rVert_\infty, \dots \rangle$ is bounded. - -If we impose only the first four conditions, there is a well-known answer: for $0 < x < 1$, define $f(x)$ by -$$ f(x) = \frac{e^{-1/x}}{e^{-1/x} + e^{-1/(1-x)}} = \frac{1}{1 + e^{1/x - 1/(1-x)}} $$ -However, the derivatives of this function appear to grow quite rapidly. 
(I'm not sure how to verify this, but it seems at least exponential to me.)
-If such a function does not exist, what is the smallest order of asymptotic growth that the sequence $\langle \left\lVert f \right\rVert_\infty, \left\lVert f' \right\rVert_\infty, \left\lVert f'' \right\rVert_\infty, \dots \rangle$ can have?
-
-REPLY [12 votes]: If the sequence of infinity norms $\{\|f^{(n)}\|_{\infty}\}_{n = 0}^{\infty}$ is bounded, i.e. there exists a constant $M$ such that for all $n \in \mathbb{N}_0$, $|f^{(n)}(x)| \le M$ for all $x \in \mathbb{R}$, then by this theorem $f$ is analytic.
-However, if an analytic function is $0$ on any interval (such as $(-\infty,0)$), then its Taylor series about any point in that interval is identically $0$. Then, since $f$ is analytic, the Taylor series converges to $f$ everywhere, i.e. $f \equiv 0$.
-Therefore, no such function exists.<|endoftext|>
-TITLE: Why does probability need the Axiom of pairwise disjoint events?
-QUESTION [9 upvotes]: I'm a beginning student of Probability and Statistics and I've been reading the book Elementary Probability for Applications by Rick Durrett.
-In this book, he outlines the 4 Axioms of Probability.
-
-For any event $A$, $0 \leq P (A) \leq 1$.
-If $\Omega $ is the sample space then $P (\Omega) =1$.
-If $A$ and $B$ are disjoint, that is, the intersection $A \cap B = \emptyset$, then
-$$P(A\cup B) = P(A) + P(B)$$
-If $A_1, A_2,\ldots$, is an infinite sequence of pairwise disjoint events (that is, $A_i\cap A_j = \emptyset$ when $i \neq j $) then
- $$P\left(\bigcup_{i=1}^\infty A_i\right)=\sum_{i=1}^\infty P(A_i).$$
-
-The book fails to explain why we need Axiom 4. I have tried searching on Wikipedia but I haven't had any luck. I don't understand how we can have a probability of infinitely many disjoint events. The book states that when you have infinitely many events, the last argument breaks down and this is now a new assumption.
But then the book states that we need this or else the theory of Probability becomes useless.
-I was wondering if there were any intuitive examples of situations where this fourth axiom applies.
-Why is it so important for probability theory? And why does the author state that not everyone believes we should use this axiom?
-
-REPLY [2 votes]: Another thing to point out is that the fourth axiom gives us a way to deal with a countably infinite sequence of events. In other words, it allows us to take limits!
-$$P\left(\bigcup_{i=1}^\infty A_i\right)=\sum_{i=1}^\infty P(A_i)=\lim_{n\to \infty}\sum_{i=1}^n P(A_i).$$
-While the first three axioms can only deal with finitely many events.<|endoftext|>
-TITLE: Convex Set with Empty Interior Lies in an Affine Set
-QUESTION [5 upvotes]: In Section 2.5.2 of the book Convex Optimization by Boyd and Vandenberghe, the authors mentioned without proof that "a convex set in $\mathbb{R}^n$ with empty interior must lie in an affine set of dimension less than $n$." While I can intuitively understand this result, I was wondering how it can be proved formally?
-
-REPLY [4 votes]: Let $d+1$ be the largest number of affinely independent points from $C$. Let $x_0$, $\ldots$, $x_d$ be one such affinely independent subset of largest size. Note that every other point is an affine combination of the points $x_k$, so lies in the affine subspace generated by them, which is of dimension $d$.
-If $d < n$ then this subspace is contained in an affine hyperplane.
-If $d=n$, then $C$ contains $d+1$ affinely independent points. Since $C$ is convex, it will also contain the convex hull of those $n+1$ points. Now, in an $n$-dimensional space the convex hull of $n+1$ affinely independent points has non-empty interior. So the interior of $C$ is non-empty.<|endoftext|>
-TITLE: Convexity implies absolute continuity?
-QUESTION [5 upvotes]: The following is taken from an exam:
-
-$f:[a,b]\rightarrow\mathbb{R}$ is convex implies $f$ is absolutely continuous (recall $f'$ exists a.e.)
-
-One has local Lipschitz-ness by convexity, but how to show absolute continuity without global Lipschitz-ness?
-
-REPLY [3 votes]: Convex functions on the real line are expressible as integrals of one-sided derivatives.
-The ratio $k(x,y)=\frac{f(y)-f(x)}{y-x}$ is increasing in $y$ on $[x,b]$.
-Hence, the right-hand derivative $D_+f(x)$ exists for all $x\in [a,b]$.
-In a similar way we conclude that the left-hand derivative $D_-f(x)$ exists for all $x\in[a,b]$, and that $D_-f(x)\leq D_+f(x)$.
-Hence the set of points for which $D_-f(x)< D_+f(x)$ is countable.
-If $f$ is convex on $[a,b]$, then both
-$D_+f(x)$ and $D_-f(x)$
-are integrable with respect to Lebesgue measure on
-$[a,b]$, and $f(x)=f(a)+ \int_a^x D_+f(t) dt=f(a)+ \int_a^x D_-f(t) dt$.
-More generally, suppose $D$ is an increasing, real-valued function defined (at least) on $[a,b)$. Define $g(x) := \int^x_a D(t)dt$, for $a \leq x \leq b$. (Possibly $g(b) =\infty$.) Then $g$ is convex.
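The "more generally" statement lends itself to a quick numerical spot-check (a minimal sketch; the choice $D = \lfloor\,\cdot\,\rfloor$ and the midpoint test are invented here for illustration):

```python
import math

def D(t):
    """An increasing (step) function on [0, 4)."""
    return math.floor(t)

def g(x):
    """g(x) = integral_0^x D(t) dt, in closed form for D = floor."""
    n = math.floor(x)
    return n * (n - 1) / 2 + n * (x - n)

# Midpoint convexity on a grid: g((a+b)/2) <= (g(a) + g(b)) / 2.
pts = [0.1 * i for i in range(40)]  # grid inside [0, 4)
for a in pts:
    for b in pts:
        assert g((a + b) / 2) <= (g(a) + g(b)) / 2 + 1e-12
```

Here $g$ is piecewise linear with increasing slopes $0,1,2,3$, which is exactly why the convexity test passes.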
-In the book Roberts, Varberg, Convex Functions, on pp. 9-10 it is proved that
-if $f:(a,b)\rightarrow \mathbb{R}$
-is continuous and convex then $f$
-is absolutely continuous on each $[c,d]\subset (a,b)$.<|endoftext|>
-TITLE: Range of function $ f(x) = x\sqrt{x}+\frac{1}{x\sqrt{x}}-4\left(x+\frac{1}{x}\right),$ Where $x>0$
-QUESTION [11 upvotes]: Find the range of the function $\displaystyle f(x) = x\sqrt{x}+\frac{1}{x\sqrt{x}}-4\left(x+\frac{1}{x}\right),$ where $x>0$
-
-$\bf{My\; Try:}$ Let $\sqrt{x}=t\;,$ Then $\displaystyle f(t) = t^3+\frac{1}{t^3}-4\left(t^2+\frac{1}{t^2}\right)\;,$
-Now after simplification, we get $\displaystyle f(t) = \left(t+\frac{1}{t}\right)^3-3\left(t+\frac{1}{t}\right)-4\left[\left(t+\frac{1}{t}\right)^2-2\right]$
-Now put $\displaystyle t+\frac{1}{t} = u\;,$ Then $\displaystyle \sqrt{x}+\frac{1}{\sqrt{x}} = u\;,$ So we get $u\geq 2$ (using $\bf{A.M\geq G.M}$)
-And our function converts into $\displaystyle f(u) = u^3-4u^2-3u+8\;,$ where $u\geq 2$
-Now using the second derivative test, $f'(u) = 3u^2-8u-3$ and $f''(u) = 6u-8$
-So for max. and min., we put $\displaystyle f'(u)=0\Rightarrow u=3$ and $f''(3)=10>0$
-So $u=3$ is a point of minimum.
-So $f(2)=8-4(4)-3(2)+8 = -6$ and $f(3) = -10$
-So the range is $$\displaystyle \left[-10,\infty \right)$$
-My question is: can we solve it any other way, like using an inequality?
-If yes, then please explain here.
-Thanks
-
-REPLY [3 votes]: We have
-$$\color{blue}{x\sqrt{x} + \dfrac1{x\sqrt{x}}-4\left(x+\dfrac1x\right) = \underbrace{\dfrac{\left(1+\sqrt{x}\right)^2\left(x-3\sqrt{x}+1\right)^2}{x^{3/2}}}_{\text{Is non-negative}}-10}$$
-Hence, the minimum is $-10$<|endoftext|>
-TITLE: find the maximum possible area of $\triangle{ABC}$
-QUESTION [10 upvotes]: Let $ABC$ be a triangle with $\angle BAC = 60^\circ$. Let $P$ be a point in its interior so that $PA=1, PB=2$ and
-$PC=3$. Find the maximum area of triangle $ABC$.
-I reflected the point $P$ about the three sides of the triangle and joined the reflections to the vertices of the triangle. Thus I got a hexagon with double the area of the triangle, having one angle of $120^\circ$ and sides $1,1,2,2,3,3$. We have to maximize the area of this hexagon. For that, I used some trigonometry, but it got very complicated and I couldn't get the solution.
-
-REPLY [3 votes]: Let $\mathcal{A}$ be the area of $\triangle ABC$.
-Let $\theta$ and $\phi$ be the angles $\angle PAC$ and $\angle BAP$ respectively.
-We have $\theta + \phi = \angle BAC = \frac{\pi}{3}$.
-As functions of $\theta$ and $\phi$, the side lengths $b$, $c$ and area $\mathcal{A}$ are:
-$$
-\begin{cases}
-c(\theta) &= \cos\theta + \sqrt{2^2-\sin^2\theta}\\
-b(\phi) &= \cos\phi + \sqrt{3^2-\sin^2\phi}
-\end{cases}
-\quad\text{ and }\quad
-\mathcal{A}(\theta) = \frac{\sqrt{3}}{4} c(\theta)b\left(\frac{\pi}{3}-\theta\right)
-$$
-In order for $\mathcal{A}(\theta)$ to achieve a maximum at a particular $\theta$,
-we need
-$$\frac{d\mathcal{A}}{d\theta} = 0
-\iff \frac{1}{\mathcal{A}}\frac{d\mathcal{A}}{d\theta} = 0
-\iff \frac{1}{c}\frac{dc}{d\theta} - \frac{1}{b}\frac{db}{d\phi} = 0
-\iff \frac{\sin\theta}{\sqrt{2^2-\sin^2\theta}} - \frac{\sin\phi}{\sqrt{3^2-\sin^2\phi}} = 0$$
-This implies
-$$\frac{\sin\theta}{2} = \frac{\sin\phi}{3}
- = \frac13 \sin\left(\frac{\pi}{3} - \theta\right)
- = \frac13 \left(\frac{\sqrt{3}}{2}\cos\theta - \frac12\sin\theta\right)
-\iff 4\sin\theta = \sqrt{3}\cos\theta$$
-and hence
-$$\theta = \tan^{-1}\left(\frac{\sqrt{3}}{4}\right) \approx 0.4086378550975924
-\;\;( \approx 23.41322444637054^\circ )$$
-Furthermore, we have
-$\displaystyle\;\frac{\sin\theta}{2} = \frac{\sin\phi}{3} = \frac{\sqrt{3}}{2\sqrt{19}}\;$.
-Substitute this into the expressions for the side lengths and area; we get
-$$
-\begin{cases}
-c &= \frac{4+\sqrt{73}}{\sqrt{19}}\\
-b &= \frac{7+3\sqrt{73}}{2\sqrt{19}}
-\end{cases}
-\quad\implies\quad
-\mathcal{A} = \frac{\sqrt{3}}{8}(13+\sqrt{73}) \approx 4.664413635668018
-$$
-Please note that the condition $\displaystyle\;\frac{\sin\theta}{2} = \frac{\sin\phi}{3}$ is equivalent to $\angle ABP = \angle ACP$. If one can figure out why these two angles are equal when $\mathcal{A}$ is maximized, one should be able to derive all the results here w/o using any calculus.<|endoftext|>
-TITLE: Prove that if $P(P(x)) = Q(Q(x))$, then the polynomials $P$ and $Q$ are equal.
-QUESTION [7 upvotes]: Let $P$ and $Q$ be polynomials with complex coefficients such that $P(P(x)) = Q(Q(x))$. Prove that $P = Q$.
-It is obvious that the degrees of both will be equal. But I don't have any idea how to solve this question.
-
-REPLY [8 votes]: This is clearly false. For instance, $P(x) = -x$ and $Q(x) = x$. We then have
-$$P(P(x)) = x = Q(Q(x))$$
-
-There are many other counterexamples as well. Even in the case of linear functions, there are infinitely many counterexamples. For instance, if $P(x) = ax+b$ and $Q(x) = cx+d$, we then have
-$$P(P(x)) = P(ax+b) = a(ax+b)+b = Q(cx+d) = c(cx+d) + d$$
-This means we need
-$$a^2x + b(a+1) = c^2x+d(c+1)$$
-This means $a^2=c^2$ and $b(a+1) = d(c+1)$. If $a=c$ and $a,c\ne -1$, then $b=d$. If $a=-c$, then $d=b \cdot \dfrac{1+a}{1-a}$.
-Hence, for instance, $c=-2$, $a=2$, $d=-3b$. Hence, this gives $P(x) = 2x+b$ and $Q(x) = -2x-3b$.
-This gives us $$P(P(x)) = P(2x+b) = 2(2x+b) + b = 4x+3b$$ and $$Q(Q(x)) = Q(-2x-3b) = -2(-2x-3b)-3b = 4x+3b$$<|endoftext|>
-TITLE: Example of an endomorphism on an abelian group that is not left multiplication
-QUESTION [6 upvotes]: It is well-known that all endomorphisms on the abelian group ($\Bbb{Z}$,+) can be seen as a left multiplication by some element in some ring structure on ($\Bbb{Z}$,+); namely left multiplication by any integer in the standard $(\Bbb{Z},+,\times)$ ring.
-So far every endomorphism on abelian groups that I have examined has turned out to have this same interesting property, but I'm not very knowledgeable in advanced maths.
-Can someone provide an example of an abelian group $G$ with an endomorphism that cannot be seen as a left multiplication by some element in some ring structure on $G$?
-
-REPLY [4 votes]: There are some abelian groups that admit no (possibly nonunital) ring structure with a left unit, and for such groups, the identity endomorphism cannot be multiplication by any element in any ring structure. The standard example of such a group is $G=\mathbb{Q}/\mathbb{Z}$. If you had a ring structure on $G$ in which (the equivalence class of) $u=a/b$ was a left unit for some $a,b\in\mathbb{Z}$, then $bu=0$, so $bx=b(ux)=(bu)x=0$ for all $x\in G$. But this is clearly false, because (for instance) $b\cdot \frac{1}{2b}=\frac{1}{2}\neq 0$.<|endoftext|>
-TITLE: Show that a homogeneous function satisfies the PDE $x \frac{\partial f}{\partial x} + y\frac{\partial f}{\partial y} = n f(x,y)$
-QUESTION [6 upvotes]: I'm using the following definition of a homogeneous function.
-
-A function $f(x,y)$ is homogeneous of degree n if it satisfies the following equation $$f(tx, ty) = t^n f(x,y) \quad (1)
-$$ for all $t$ where $n>0$
-
-Problem
-Show that if $f$ is homogeneous of degree n, then
-$$x \frac{\partial f}{\partial x} + y\frac{\partial f}{\partial y} = n f(x,y) $$
-Attempted Solution
-I differentiated $(1)$ w.r.t. $t$ giving me
-$$
-\begin{align*}\frac{\partial f}{\partial x} \frac{\partial x}{\partial t} + \frac{\partial f}{\partial y}\frac{\partial y}{\partial t} &= nt^{n-1} f(x,y) \\ x\frac{\partial f}{\partial x} + y\frac{\partial f}{\partial y}&=nt^{n-1} f(x,y) \end{align*}
-$$
-The LHS looks okay, but I'm not sure how to handle the $t^{n-1}$ term on the RHS.
-
-REPLY [2 votes]: This is also known as Euler’s theorem.
-
-Euler’s theorem Let $f : \mathbb{R}^n_+ \to \mathbb{R}$ be continuous, and also differentiable on $\mathbb{R}^n_{++}$. Then $f$ is homogeneous of degree $k$ if and only if for all $x \in \mathbb{R}^n_{++}$, $$kf(x) = \sum^n_{i=1} D_if(x)x_i \, \, \ldots \, \,\,(∗)$$
-
-Proof: ($\Rightarrow$) Suppose $f$ is homogeneous of degree $k$. Fix $x \in \mathbb{R}^n_{++}$, and define the function $g : [0, \infty) \to \mathbb{R} $ (depending on $x$) by $$g(\lambda) = f(\lambda x) − \lambda ^kf(x)$$
-and note that for all $\lambda ⩾ 0$,
-$$g(\lambda ) = 0$$
-Therefore, $$g′(\lambda ) = 0$$ for all $\lambda > 0$. But by the chain rule,
-$$g′(\lambda ) = \sum^n_{i=1} D_if(\lambda x)x_i − k\lambda ^{k−1}f(x)$$
-Evaluate this at $\lambda = 1$ to obtain $(∗)$.
-($\Leftarrow$) Suppose $$kf(x) = \sum^n_{i=1} D_if(x)x_i$$ for all $x \in \mathbb{R}^n_{++}$. Fix any $x ≫ 0$ and again define $g : [0, \infty) \to \mathbb{R} $ (depending on $x$) by $$g(\lambda ) = f(\lambda x) − \lambda ^kf(x)$$ and note that $g(1) = 0$.
Then for $\lambda > 0$,
-$$g′(\lambda ) = \sum^n_{i=1} D_if(\lambda x)x_i − k\lambda ^{k−1}f(x)$$ $$= \lambda^{-1}\sum^n_{i=1} D_if(\lambda x)\lambda x_i − k\lambda ^{k−1}f(x)$$
-$$= \lambda^{-1} kf(\lambda x) − k\lambda ^{k−1}f(x)$$
-So
-$$\lambda g′(\lambda ) = kf(\lambda x) − k\lambda ^kf(x) = kg(\lambda )$$
-Since $\lambda $ is arbitrary, $g$ satisfies the following differential equation:
-$$g′(\lambda ) −\frac{k}{\lambda }g(\lambda ) = 0$$
-and the initial condition $g(1) = 0$. By the standard solution formula for first-order linear ODEs,
-$$g(λ) = 0 \cdot e^{A(\lambda )} + e^{−A(\lambda )}\int_1^{\lambda} 0 \cdot e^{A(t)}dt = 0$$
-where, irrelevantly, $$A(\lambda ) = −\int_1^{\lambda} \frac{k}{t} dt = −k \ln \lambda $$
-This implies $g$ is identically zero, so $f$ is homogeneous on $\mathbb{R}^n_{++}$. Continuity guarantees that $f$ is homogeneous on $\mathbb{R}^n_{+}$.<|endoftext|>
-TITLE: Multivariate Gaussian Definition when Covariance matrix is singular, What's wrong?
-QUESTION [8 upvotes]: Given
-$$\mathbf{\Sigma} \in \mathbb R^{k \times k}$$
-$$\mathbf{u} \in \mathbb R^k$$
-The multivariate Gaussian pdf is, by definition,
-$$f(\mathbf{x})=\frac{1}{(2\pi)^{k/2}|\mathbf{\Sigma}|^{1/2}}e^{-\frac{1}{2}(\mathbf{x-u})^T\mathbf{\Sigma}^{-1}(\mathbf{x-u})}$$
-The covariance matrix is only required to be positive semidefinite.
-So it could be singular (non-invertible).
-This leads to a zero in the denominator, and $\Sigma^{-1}$ doesn't exist.
-What do we do in that case to write the joint pdf?
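As a concrete numerical illustration of the degenerate case (a minimal NumPy sketch; the rank-$1$ $\Sigma$ and all names here are invented for the example): even when $\Sigma$ is singular, the distribution can still be realized as an affine image $X = \mathbf{u} + AZ$ of independent standard normals, where $A$ is any factor with $\Sigma = AA^T$, so samples exist even though no density on $\mathbb R^k$ does:

```python
import numpy as np

rng = np.random.default_rng(0)

# A rank-1 (singular) covariance: the two coordinates are perfectly correlated.
A = np.array([[1.0],
              [1.0]])              # k x r factor with r = rank(Sigma) = 1
mu = np.array([2.0, -1.0])
Sigma = A @ A.T                    # [[1, 1], [1, 1]], det(Sigma) == 0

# Sample X = mu + A Z with Z ~ N(0, I_r); no inverse of Sigma is needed.
Z = rng.standard_normal((1, 100_000))
X = mu[:, None] + A @ Z

emp_cov = np.cov(X)                # rows of X are the k variables
```

All of the probability mass lies on the line through $\mathbf{u}$ in the direction of the column of $A$, which is exactly why no $k$-dimensional density can exist.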
- -REPLY [2 votes]: The multivariate normal distribution with a singular $k\times k$ variance $\Sigma$ does not have a density function with respect to the usual $k$-dimensional measure $dx_1\cdots dx_k.$ -But it is still characterized by the expected value $\mu\in\mathbb R^{k\times1}$ and variance $\Sigma\in\mathbb R^{k\times k}$ in the sense that no other Gaussian distribution has those same values of $\mu$ and $\Sigma.$ -The "definition" you mention is not actually the definition, but rather a consequence of the definition. Either of the following characterizations can be taken to be the definition: - -$X$ has a $k$-dimensional normal distribution if for every constant (i.e. non-random) vector $a\in\mathbb R^{k\times1},$ the random variable $a^TX$ has a $1$-dimensional normal distribution. - -For some matrix $A\in\mathbb R^{k\times r},$ $X=AZ,$ where $Z= (Z_1,\ldots,Z_r)^T$ and $Z_1,\ldots,Z_r\sim\operatorname N_1(0,1).$ - - -In the latter case you have $\Sigma=AA^T.$<|endoftext|> -TITLE: Solving a trig. equation -QUESTION [8 upvotes]: Solve for x $$\tan 2x = 3 \tan x$$ -Well, I'm stuck with this one. This is how far I got. -$$\tan 2x = 3 \tan x$$ -$$\frac{2 \tan x}{1 - \tan^2 x} = 3 \tan x$$ -$$2 \tan x = 3 \tan x - 3 \tan^3 x$$ -$$3 \tan^3 x = \tan x$$ -$$3 \tan^3 x - \tan x = 0$$ -$$\tan x (3 \tan^2 x - 1) = 0$$ -So, then a part of the solution is $\tan x = 0$, so $x = k\pi$ -This leaves $\tan^2x = 1$. I thought to take the root of both sides, leaving me with $\tan x = 1$. So I figured $x = \frac{1}{4}\pi + k\pi$ so we the first and third quadrants, where the tangent is positive. -But, to my deep deep sadness, this last part turned out to be wrong. -What am I missing here? - -REPLY [11 votes]: The equation $3\tan^3 x - \tan x = 0$ factors as $\tan x(\color{red}{3}\tan^2 x - 1) = 0$ (you forgot the $3$). -Hence, $\tan x = 0$ or $\tan x = \pm \dfrac{1}{\sqrt{3}}$. 
Both of these should be easy to solve.<|endoftext|> -TITLE: Find the closed form of $\cos \frac{2 \pi}{13} + \cos \frac{6 \pi}{13} + \cos \frac{8 \pi}{13}$ -QUESTION [9 upvotes]: From the problem 1 from this link. -I was trying to prove that -$$\cos \dfrac{2 \pi}{13} + \cos \dfrac{6 \pi}{13} + \cos \dfrac{8 \pi}{13} = \dfrac{\sqrt{13} - 1}{4} $$ -Please provide me some hint. - -REPLY [16 votes]: Let $C_{k} = \cos \left(\dfrac{k \pi}{13} \right)$ -Lemma: -$$\color{blue}{C_{2} + C_{4} + C_{6} + C_{8} + C_{10} + C_{12} = -\dfrac{1}{2} }$$ -Proof : -Observe that $C_{2k} = \cos \left(\dfrac{2k \pi}{13} \right) = \Re (e^{(2k i\pi)/13})$ -So, -$$\color{red}{C_{2} + C_{4} + C_{6} + C_{8} + C_{10} + C_{12} = \Re \left(\sum_{k=1}^{6} e^{(2k i\pi)/13} \right)}$$ -It is a geometric series. So, evaluating it and finding the real part proves our lemma. - -Properties of $C_{k}$ : - -$C_{2k} = 2 C_{k}^2 - 1$. -$2C_{p} C_{q} = C_{p+q} + C_{p-q}$. -$C_{13+k} = C_{13-k}$. - -The above properties can be proved easily. - -Now we are required to find the value of $\cos \dfrac{2 \pi}{13} + \cos \dfrac{6 \pi}{13} + \cos \dfrac{8 \pi}{13}$, which is equal to $C_{2} + C_{6} + C_{8}$. -Let $x = C_{2} + C_{6} + C_{8}$. Squaring on both sides yields -$$x^2 = C_{2}^2 + C_{6}^2 + C_{8}^2 + 2C_{2}C_{6} + 2 C_{2}C_{8} + 2 C_{6}C_{8} \\ x^2 = \dfrac{C_{4} + 1}{2} + \dfrac{C_{12} + 1}{2} + \dfrac{C_{16} + 1}{2} + C_{8} + C_{4} + C_{10} + C_{6} + C_{14} + C_{2} \\ 2x^2 = C_{4} + C_{12} + C_{16} + 2C_{8} + 2C_{4} + 2C_{10} + 2C_{6} + 2C_{14} + 2C_{2} + 3 \\ 2x^2 = 3C_{4} + (C_{12} + 2C_{14}) + (C_{16} + 2 C_{10}) + 2(C_2 + C_6 + C_8) +3$$ -Now, using the third property that $C_{13+k} = C_{13-k}$, we get $C_{12} = C_{14}$ and $C_{16} = C_{10}$. 
Then, -$$2x^2 = 3C_{4} + 3C_{12} + 3C_{10} + 2(C_2 + C_6 + C_8) +3 \\ 2x^2 = 3(C_{4} + C_{10} + C_{12}) + 2x +3$$ -From our lemma, -$$C_{2} + C_{4} + C_{6} + C_{8} + C_{10} + C_{12}= - \dfrac{1}{2} \\ C_{4} + C_{10} + C_{12} = - \dfrac{1}{2} - (C_2 + C_6 + C_8) = - \dfrac{1}{2} - x$$ -So, -$$2x^2 = 3(C_{4} + C_{10} + C_{12}) + 2x +3 = 3(- \dfrac{1}{2} - x) + 2x +3 \\ 4x^2 = -2x+3 \\ 4x^2 + 2x -3 =0 \implies x = \dfrac{-1 \pm \sqrt{13}}{4}$$ -We are getting two values of $x$. Now, observe that $\cos \dfrac{6 \pi}{13} > 0$. And also $\cos \dfrac{2 \pi}{13} + \cos \dfrac{8 \pi}{13} = \cos \dfrac{2 \pi}{13} - \cos \dfrac{5 \pi}{13}$. But $\cos$ function is strictly decreasing i.e. -$$\dfrac{2 \pi}{13} < \dfrac{5 \pi}{13} \implies \cos \dfrac{2 \pi}{13} > \cos \dfrac{5 \pi}{13} \\ \implies \cos \dfrac{2 \pi}{13} - \cos \dfrac{5 \pi}{13} > 0 \\ \cos \dfrac{2 \pi}{13} + \cos \dfrac{6 \pi}{13} - \cos \dfrac{5 \pi}{13} > 0$$ -So, $\cos \dfrac{2 \pi}{13} + \cos \dfrac{6 \pi}{13} + \cos \dfrac{8 \pi}{13} > 0$. This implies that the required value is $\dfrac{\sqrt{13} - 1}{4}$. -Therefore, -$$\color{green}{\cos \dfrac{2 \pi}{13} + \cos \dfrac{6 \pi}{13} + \cos \dfrac{8 \pi}{13} = \dfrac{\sqrt{13} - 1}{4}}$$<|endoftext|> -TITLE: Does a power-complete finite pasture exist? -QUESTION [5 upvotes]: Suppose we define a pasture to be an algebraic structure $\langle M, 0, +, \times, \wedge \rangle$ where - -$\langle M, 0, +, \times \rangle$ is a ring (not necessarily commutative or unital) -$\wedge$ distributes over $\times$ on the left: $(a \times b) \wedge c = (a \wedge c) \times (b \wedge c)$ -$\wedge$ distributes $+$ into $\times$ on the right: $a \wedge (b + c) = (a \wedge b) \times (a \wedge c)$ - -The idea is that a pasture is a bit like a field (in that it consists of a ring with additional structure), but goes off in a slightly different direction (by adding exponentiation instead of division). 
-Now let's call $x \in M$ a perfect power if $x = y \wedge z$ for some $y, z \in M$. Moreover, let's say that $M$ is power-complete if all of its elements are perfect powers. For example, the trivial pasture $\{0\}$ is clearly power-complete. - -Question: Does a nontrivial power-complete finite pasture exist? - -I was inspired to ask this question after running a computer search for finite pastures and noticing that they tend to have few perfect powers. In fact, most of the pastures I found had a single perfect power, often (but not always) $0$. If my code is correct, then I have confirmed that no pasture of order $\le 8$ is power-complete, and moreover that no commutative unital pasture of order $\le 10$ is power-complete. -Side note: for $2 \le n \le 10$, the number of non-isomorphic commutative unital pastures of order $n$ is given by $(2, 2, 10, 2, 4, 2, 36, 10, 4)$. This is not a sequence recognized by the OEIS. - -Edit: Thanks to a comment by @user60589, I have discovered a bug in my code which invalidates the above results. In fact, there are plenty of examples of power-complete pastures of order $\le 10$. - -REPLY [2 votes]: On any ring $R$ in which all elements are idempotent there is a trivial pasture structure on $R$ defined via -$$ x^y =x $$ -for all $x,y \in R$. The left distributivity with multiplication is trivial and the right distributivity with addition is equivalent to the fact that all elements are idempotent. -This pasture structure on $R$ is obviously power-complete. -So for instance for all natural numbers $n$ -$$ (\mathbb{Z}/ 2\mathbb{Z} )^n $$ -is a power-complete pasture.<|endoftext|> -TITLE: Algebraic fixed point theorem -QUESTION [15 upvotes]: I was wondering if there are some "algebraic" fixed point theorems, in group theory. -More precisely, given a group $G$ and a group morphism $f : G \to G$, what conditions on $G$ and $f$ should we demand, so that $f$ has a non-trivial fixed point (i.e. $\exists x \neq 1_G, f(x)=x$) ? 
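To make the question concrete: on $\mathbb Z/n\mathbb Z$ every endomorphism is $x \mapsto ax$ for some $a$, and its fixed points can be enumerated by brute force (a minimal sketch; the helper name and sample values are invented for illustration, and the group is written additively, so the trivial fixed point is $0$):

```python
def fixed_points(n, a):
    """Fixed points of the endomorphism x -> a*x on Z/nZ,
    i.e. solutions of (a - 1) * x == 0 (mod n)."""
    return [x for x in range(n) if (a * x) % n == x]

# x -> 3x on Z/6Z: non-trivial fixed point 3 (here n = 6 = 2*3 and a + 1 = 3)
print(fixed_points(6, 3))   # [0, 3]
# x -> 2x on Z/5Z: only the trivial fixed point
print(fixed_points(5, 2))   # [0]
```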
-Here are my thoughts :
-
-This « non-trivial fixed point condition » is sometimes a strong condition. For instance, if $G = \mathbb Z$, then the only $f \in \text{Hom}(G,G)$ to have a non-trivial fixed point is the identity.
-The set of fixed points $\{y \in G \mid f(y)=y\}$ is a subgroup of $G$.
-Let $G = \mathbb Z / n\mathbb Z$. Assume that $n=ab$ with $a,b>1$. If $f([1]_n) = [a+1]_n$, then $f$ has a non-trivial fixed point, namely $x=[b]_n$.
-This question may be «artificial» ; I don't know if a morphism with a non-trivial fixed point can be useful in other contexts...
-I don't see a natural way to turn this problem into a « group action » problem (to get some results about fixed points). I tried $G \curvearrowright \text{Im}(f)$ by defining $g \bullet f(x) := f(g)f(x) = f(gx)$, but this doesn't seem to help...
-
-Thank you in advance !
-
-REPLY [5 votes]: The only fixed point theorem involving finite groups I know is the following:
-
-$p$-group fixed point theorem: Let $G$ be a finite $p$-group acting on a finite set $X$. Then $|X^G| \equiv |X| \bmod p$. In particular, if $|X| \not \equiv 0 \bmod p$, then $G$ has a fixed point.
-
-For example, applied to the conjugacy action of a finite $p$-group on itself, we conclude that such a group has nontrivial center. We can get a statement of your form by asking that $G$ is a $p$-group and $f$ has order a power of $p$.
-Other applications are given here.<|endoftext|>
-TITLE: Is $\frac{1}{2^{2^{0}}}+\frac{1}{2^{2^{1}}}+\frac{1}{2^{2^{2}}}+\frac{1}{2^{2^{3}}}+....$ algebraic or transcendental?
-QUESTION [7 upvotes]: Inspired by this question, the series $\dfrac{1}{2^{2^{0}}}+\dfrac{1}{2^{2^{1}}}+\dfrac{1}{2^{2^{2}}}+\dfrac{1}{2^{2^{3}}}+\dots$ is clearly irrational.
-But is it algebraic or transcendental?
- -I was thinking of answering this question by checking whether or not it can be represented as a periodic continued fraction: - -If no, then (as far as I know) it is transcendental -If yes, then (as far as I know) we cannot infer the answer - -But how do I determine whether or not it can be represented as a periodic continued fraction? -Is there a better way for tackling this question, or is the answer already known by any chance? - -UPDATE: -Based on @Wojowu's comment: - -If it can be represented as a periodic continued fraction, then it is algebraic -If it cannot be represented as a periodic continued fraction, then we cannot infer the answer - -REPLY [8 votes]: It is a general theorem proven by Mahler that given an integer $d>1$ and a nonzero algebraic number $z\in(-1,1)$ then the sum of the series $\sum_{n=0}^\infty z^{d^n}$ is a transcendental number. I can't find a direct reference, but you can find this theorem in this MO answer. -Transcendence of the number in your question follows by taking $d=2,z=\frac{1}{2}$.<|endoftext|> -TITLE: Characterization of the $m$-torsion points of an elliptic curve. -QUESTION [8 upvotes]: Let $(E,\mathcal{O})$ be the elliptic curve of equation -$$ -f=Y^{2}+a_{1}XY+a_{3}Y-X^{3}-a_{2}X^{2}-a_{4}X-a_{6}, -$$ -$\alpha:K(E)\rightarrow K(E)$ the derivation such that -$$ -\alpha(X)=\frac{\partial f}{\partial Y},\quad \alpha(Y)=-\frac{\partial f}{\partial X} -$$ -and $m$ a positive integer. I have to prove that, if we are working in a field $K$ of characteristic $\geq m+1$, and $\mathcal{L}(m\mathcal{O})=\langle 1,f_{1},\ldots,f_{m-1}\rangle$, then the $m$-torsion points of $E$ different from $\mathcal{O}$ coincide with -the zeros of -$$ -\det (\alpha^{i}(f_{j}))_{1\leq i,j\leq m-1}. -$$ -By the way, what can be said about the number of zeros of that determinant? 
-$\textbf{Edit:}$ In fact, that determinant vanishes at $p\in E-\{\mathcal{O}\}$ if and only if (see the comments that follow mercio's answer) there exists $g\in\mathcal{L}(m\mathcal{O})$ such that $\alpha(g)|_{p}=\cdots=\alpha^{m-1}(g)|_{p}=0$. This reduces the problem (because of mercio's answer again) to show that this is equivalent to $g$ having a zero of order $m$ at $p$. Is this true? -Any hint would be appreciated. - -REPLY [5 votes]: Let $P$ be a point of $E$ different from $\mathcal O$. -the determinant vanishes at $P$ if and only if there is a linear combination of the $f_j$ (which means a function with poles of order at most $m$ at $\mathcal O$), such that its first $m-1$ derivatives at $P$ all vanish. -By adding a suitable constant to that function if necessary, this means you have a function with a pole of order $m$ at $\mathcal O$ and a zero of order $m$ at $P$ (and nothing else because you can't have any additional pole) -But this means exactly that $m(\mathcal O - P)$ is a principal divisor, so that once you fix $\mathcal O$ as the origin of the group law, $P$ is a point of order $m$.<|endoftext|> -TITLE: Prove that the sum $1^k+2^k+\cdots+n^k$ where $n$ is an arbitrary integer and $k$ is odd, is divisible by $1+2+\cdots+n$. -QUESTION [7 upvotes]: Prove that the sum $$1^k+2^k+\cdots+n^k$$ - where $n$ is an arbitrary integer and $k$ is odd, is divisible by $1+2+\cdots+n$. - -Question -In the solution to this problem it splits it up into two cases: ($1$) $n$ is an even integer ($2$) and $n$ is an odd integer. In the case where $n$ is an odd integer it says the following: $$1^k+n^k,2^k+(n-1)^k,3^k+(n-2)^k,\ldots, \left (\dfrac{n-1}{2} \right )^k + \left(\dfrac{n+3}{2} \right )^k \left (\dfrac{n+1}{2} \right )^k$$ are all divisible by $\dfrac{n+1}{2}$. -I get how the beginning terms are all divisible by $\dfrac{n+1}{2}$, but did they make a typo when they said $\left(\dfrac{n+3}{2} \right )^k \left (\dfrac{n+1}{2} \right )^k$? 
If not, then how is $\left (\dfrac{n-1}{2} \right )^k + \left(\dfrac{n+3}{2} \right )^k \left (\dfrac{n+1}{2} \right )^k$ divisible by $\dfrac{n+1}{2}$? - -REPLY [3 votes]: Using Proof of $a^n+b^n$ divisible by a+b when n is odd, -$$r^k+(n-r)^k$$ is divisible by $r+n-r=n$ as $k$ is odd -$$\implies\sum_{r=1}^n(r^k+(n-r)^k)$$ will be divisible by $n$ -Similarly, -$$\sum_{r=1}^n(r^k+(n+1-r)^k)$$ will be divisible by $r+n+1-r=n+1$ -$$\implies\sum_{r=1}^n(r^k+(n+1-r)^k)=2\sum_{r=1}^n r^k$$ will be divisible by lcm$(n+1,n)$<|endoftext|> -TITLE: Geodesic curvature of sphere parallels -QUESTION [10 upvotes]: I want to compute the geodesic curvature of any circle on a sphere (not necessarily a great circle). -$$$$ -The geodesic curvature is given by the formula $$\kappa_g=\gamma'' \cdot (\textbf{N}\times \gamma ')$$ or $$\kappa_g=\pm \kappa \sin \psi$$ -where $\gamma$ is a unit-speed curve of the surface, $\textbf{N}$ is the normal unit of the surface, $\kappa$ is the curvature of $\gamma$ and $\psi$ is the angle between $\textbf{N}$ and the principal normal $n$ of $\gamma$. -$$$$ -We consider a circle of radius $r$. -Could you give me some hints how we could calculate the geodesic curvature? -$$$$ -EDIT: - -REPLY [4 votes]: For a more intrinsic perspective, parametrize the sphere as $$\varphi(u,v)=(\sin(u)\cos(v),\sin(u)\sin(v),\cos(u)),$$ -so that the coefficients of the first fundamental for are $E=1$, $F=0$ and $G=\sin^2(u)$. Then a lattitude circle on the sphere is a $v$-curve associated with this parametrization, and thus may be parametrized as $$\alpha_v(t)=\varphi(u_0,t)=(\sin(u_0)\cos(t),\sin(u_0)\sin(t),\cos(u_0)),$$ -with $0 -TITLE: If $\frac{x^2+ax+3}{x^2+x+a}$ takes all real values, prove $4a^3+39<0$ -QUESTION [6 upvotes]: If $\frac{x^2+ax+3}{x^2+x+a}$ takes all real values for possible real values of $x$, then prove that $4a^3+39<0$. Here is how I approached it. 
-Let $$\frac{x^2+ax+3}{x^2+x+a}=y$$ -Then, $$(y-1)x^2+(y-a)x+(ay-3)=0$$ -We want all those $y$, for which there is a real $x$, that is, we want $y$ such that this quadratic has real roots. So, the discriminant $\Delta \geq 0$. -$$(y-a)^2-4(y-1)(ay-3) \geq 0$$ -On simplifying, we obtain $$(1-4a)y^2+(2a+12)y+(a^2-12) \geq 0$$ -We want to find those $a$ for which this is true for all $y$. So, the discriminant $\Delta \leq 0$ (so that the parabola never crosses the $x$ axis.) and $(1-4a)>0$ (so that it faces upwards and is always above the $x$ axis.) -This gives $$(2a+12)^2-4(1-4a)(a^2-12) \leq 0$$ -$$(a+6)^2-(1-4a)(a^2-12) \leq 0$$ -which simplifies to $$a^3-9a+12 \leq 0$$ which is not what I set out to achieve. Where did I go wrong? -And is there any other method to do this? (Perhaps Calculus based?) - -REPLY [2 votes]: I would argue that $4a^3+39<0$ is wrong, for example $a=-3$ satisfies it, however the function, that follows from that value for $a$, does not span all real numbers. -First note that both the numerator and the denominator of the given function are polynomials of order two, with the same coefficients, which means that both limits of $x$ to plus or minus infinity will go to 1. So in order for the function to go to $\pm\infty$ the denominator has to become zero, so the poles of the function have to have real solutions -$$ -x=\frac{-1\pm\sqrt{1-4a}}{2}\to a\leq\frac{1}{4}. -$$ -In order for the function to be equal to zero the numerator has to become zero, so the zeros of the function have to have real solutions -$$ -x=\frac{-a\pm\sqrt{a^2-12}}{2}\to a^2\geq 12. -$$ -Combining these two constraints for $a$ yields -$$ -a\leq-2\sqrt{3}. -$$ -This does not yet ensure that the function covers all real values, namely one of the zeros has to lie between the two poles, such that the function will span all real numbers between the two poles (it will either go from $-\infty$ to $\infty$ or the other way around). Solving this does indeed yield $a^3-9a+12\leq 0$. 
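As a quick numerical cross-check of these two regimes (a Python sketch of my own, not part of the original argument), one can test directly which values $y$ are attained, using the same discriminant criterion as in the question:

```python
def attains(a, y):
    """Does f(x) = (x^2 + a*x + 3)/(x^2 + x + a) = y have a real solution x?"""
    # f(x) = y  <=>  (y-1)x^2 + (y-a)x + (ay-3) = 0
    if abs(y - 1) < 1e-12:                      # degenerate linear case
        return abs(1 - a) > 1e-12
    return (y - a)**2 - 4*(y - 1)*(a*y - 3) >= 0

ys = [k / 100 for k in range(-500, 501)]
print(all(attains(-4, y) for y in ys))   # True:  a^3 - 9a + 12 = -16 <= 0
print(all(attains(-3, y) for y in ys))   # False: a^3 - 9a + 12 =  12 >  0
```

For $a=-4$ (which satisfies $a^3-9a+12\le 0$) every sampled $y$ is attained, while for $a=-3$ (which satisfies $4a^3+39<0$ but not $a^3-9a+12\le 0$) the values near $0$ are missed.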
-So your solution is correct and the given answer (partially) incorrect: whenever your condition $a^3-9a+12 \leq 0$ holds, the stated relation $4a^3+39<0$ holds as well, but not conversely. Numerically, your condition amounts to $a\leq-3.5223$, while the stated relation only requires $a<-2.1363$.<|endoftext|> -TITLE: How can I prove $x_{n+1} = e^{-x_n}$ is convergent? -QUESTION [7 upvotes]: I'm doing a practice problem which asks to prove that the sequence defined by $x_{n+1} = e^{-x_n}$ is convergent (or rather "study the convergence of $(x_n)$"). So I'd like to try and find some sufficient condition on $x_0$ for the sequence to converge. -I can see that $e^{-x}$ is $k$-lipschitzian with $k<1$ on $[a, \infty[$ for $a>0$. But the problem is that $e^{-x}$ does not map $[a, \infty[$ into itself. I started trying to find values of $a, b$ such that $[e^{-b}, e^{-a}]\subseteq[a, b]$, but then I wondered if maybe there was some simpler approach that I was missing. - -REPLY [2 votes]: I like Milen's answer and upvoted it. Here's an approach that uses the standard contraction mapping theorem. -Let $f(x)=\exp(-x)$ and let $F(x)=f(f(x))$. We compute -\begin{align} - F'(x) &= e^{-x-e^{-x}} \\ - F''(x) &= e^{-x-e^{-x}} \left(e^{-x}-1\right). -\end{align} -Note that $F''(x)<0$ for $x>0$. Thus $F'$ is decreasing on $[0,\infty)$ and -$$F'(x)\le F'(0)=e^{-1}<1 \quad\text{for } x\ge 0,$$ -so $F$ is a contraction on $[0,\infty)$. Since $x_n>0$ for $n\ge 1$, the even- and odd-indexed tails of the sequence are orbits of $F$ in $[0,\infty)$, so both converge to the unique fixed point of $F$, which is also the fixed point of $f$; hence $(x_n)$ converges.<|endoftext|> -TITLE: Infinite summation of reciprocal of Binomial coefficients -QUESTION [6 upvotes]: When I was playing with binomial coefficients, I got an interesting problem with a very nice formula. The problem is -$$\large \sum_{n=0}^{\infty} \dfrac{1}{\binom{2n}{n}}$$ -where $\binom{2n}{n}$ represents the number of ways of selecting $n$ different things from $2n$ different things. I will provide the solution below. - -REPLY [4 votes]: Before we start the solution we shall learn about two functions.
-1) Gamma Function: It is represented by $\Gamma(x)$ and it is given by the expression -$$\Gamma(x) = \int_{0}^{\infty} e^{-t} t^{x-1} dt$$ -It has some special properties. I shall mention those properties but not their proofs. - -$\Gamma(x+1) = x \Gamma(x)$. This can be proved by integration by parts. -$\Gamma(1) = 1$. -Gamma function is undefined for non-positive integers. -For every non-negative integer $n$, $\Gamma(n+1) = n!$ - -2) Beta Function: It is represented by $B(x,y)$ and it is given by the expression -$$B(x,y) = \int_{0}^{1} t^{x-1}(1-t)^{y-1} dt$$ -Relation between Gamma and Beta function: -$$B(x,y) = \dfrac{\Gamma(x) \Gamma(y)}{\Gamma(x+y)}$$ -Now let us move on to evaluating our summation. You should need to know another thing i.e. the value of $\binom{2n}{n} = \dfrac{(2n)!}{(n)! (n)!}$ -First we shall convert the factorial into Beta function. -$$\begin{align*} \binom{2n}{n} &= \dfrac{(2n)!}{(n)! (n)!} \\ &= \dfrac{\Gamma(2n+1)}{\Gamma(n+1) \Gamma(n+1)} \\ &= \dfrac{1}{2n+1} \dfrac{(2n+1)\Gamma(2n+1)}{\Gamma(n+1) \Gamma(n+1)} \\ &= \dfrac{1}{2n+1} \dfrac{\Gamma(2n+2)}{\Gamma(n+1) \Gamma(n+1)} \\ \dfrac{1}{\binom{2n}{n}} &= (2n+1) \dfrac{\Gamma(n+1) \Gamma(n+1)}{\Gamma(2n+2)}\\ &= (2n+1) B(n+1,n+1) \end{align*}$$ -$$\begin{align*} \sum_{n=0}^{\infty} \dfrac{1}{\binom{2n}{n}} &= \sum_{n=0}^{\infty} (2n+1) \int_{0}^{1} t^n(1-t)^n dt \\ &= \int_{0}^{1} \sum_{n=0}^{\infty} (2n+1)(t(1-t))^n dt \end{align*}$$ -Here, we have to evaluate an infinite summation i.e. $1+3x+5x^2 + 7x^3 + \ldots$. 
-$$\begin{align*} S &=1+3x+5x^2 + 7x^3 + \ldots \\ &= (1+ 2x+2x^2 + 2x^3 + \ldots ) + (x + 3x^2 + 5x^3 + \ldots ) \\ &= 2(1+x+x^2 +x^3 +\ldots) +xS -1 \\ S(1-x) &= \dfrac{2}{1-x} - 1 \\ S &= \dfrac{2}{(1-x)^2} - \dfrac{1}{1-x} \end{align*}$$ -So, -$$\sum_{n=0}^{\infty} (2n+1)x^n = \dfrac{2}{(1-x)^2} - \dfrac{1}{1-x} \\ \implies \sum_{n=0}^{\infty} (2n+1)(t(1-t))^n = \dfrac{2}{(1-t+t^2)^2} - \dfrac{1}{1-t+t^2} $$ -$$\int_{0}^{1} \sum_{n=0}^{\infty} (2n+1)(t(1-t))^n dt = \int_{0}^{1} \dfrac{2}{(1-t+t^2)^2} dt - \int_{0}^{1} \dfrac{1}{1-t+t^2} dt $$ -Now we have to evaluate the above integrals. -Let $ I=\int_{0}^{1} \dfrac{1}{1-t+t^2} dt$ and $J = \int_{0}^{1} \dfrac{1}{(1-t+t^2)^2} dt$ -First Integral (I): -$$\begin{align*} I &= \int_{0}^{1} \dfrac{1}{1-t+t^2} dt \\ &= \int_{0}^{1} \dfrac{1}{(t-1/2)^2 + (\sqrt{3}/2)^2} dt \quad \text{Take the substitution } y= t- 1/2 \\ &= \int_{-1/2}^{1/2} \dfrac{1}{y^2 + (\sqrt{3}/2)^2} dy \\ &= \dfrac{2}{\sqrt{3}} \arctan \left(2y/\sqrt{3} \right) \Bigr|_{-1/2}^{1/2} \\ &= \dfrac{2}{\sqrt{3}} \dfrac{\pi}{3} \\ &=\dfrac{2 \pi}{3 \sqrt{3}} \end{align*}$$ -Second Integral (J): -$$\begin{align*} J &= \int_{0}^{1} \dfrac{1}{(1-t+t^2)^2} dt \\ &= \int_{0}^{1} \dfrac{1}{((t-1/2)^2 + (\sqrt{3}/2)^2)^2} dt \quad \text{Take the substitution } y= t-1/2 \\ &= \int_{-1/2}^{1/2} \dfrac{1}{(y^2 + (\sqrt{3}/2)^2)^2} dy \quad \text{Take the substitution } y= \dfrac{\sqrt{3}}{2} \tan \theta \\ &= \dfrac{8}{3 \sqrt{3}} \int_{-\pi /6}^{\pi /6} \cos^2 \theta d \theta \\ &= \dfrac{4}{3 \sqrt{3}} \int_{-\pi /6}^{\pi /6} (1+ \cos (2 \theta)) d \theta \\ &= \dfrac{4}{3 \sqrt{3}} \left( \dfrac{\pi}{3} + \dfrac{\sqrt{3}}{2} \right) \end{align*}$$ -Now, -$$\begin{align*} \sum_{n=0}^{\infty} \dfrac{1}{\binom{2n}{n}} &= 2J - I \\ &= \boxed{\large{\dfrac{2 \pi}{9 \sqrt{3}}} + \dfrac{4}{3}} \end{align*}$$<|endoftext|> -TITLE: Do all figures have a "centre", equidistant from all vertices?
-QUESTION [6 upvotes]: Can we prove or disprove that : -There exists for any given closed figure, a point which is equidistant from all of its vertices? -Any closed figure means literally any closed figure? -I am gonna instinctively say no, but How!? - -REPLY [16 votes]: This is clearly false. First choose three random non-collinear points. Then there is a unique circle that goes through these three points, and hence there is a unique point (the circle's center) that is equidistant from the three points. Now add any other point which does not lie on the perimeter of the circle.<|endoftext|> -TITLE: Slick proof of cross product identities -QUESTION [8 upvotes]: The cross product between vectors in $\mathbb{R}^3$ obeys two pleasant identities (sometimes named after Lagrange), namely - -$a\times(b\times c)=b(a\cdot c)-c(a\cdot b)$ -$(a\times b)\cdot(c\times d)=(a\cdot c)(b\cdot d)-(a\cdot d)(b\cdot c)$. - -The first one can be proved by invoking multilinearity and thus reducing oneself to a tedious check when $\{a,b,c\}\subseteq\{e_1,e_2,e_3\}$. The second one can be deduced from the first using properties of the triple product. -I am looking for some slick/conceptual proof of the two identities (or one of them); maybe exterior algebra has something to tell us? - -REPLY [4 votes]: You can definitely approach this from the perspective of exterior algebra. Indeed, the part of your question relating to your first identity has been asked before, so for the coordinate-free, exterior algebra proof of that, please take a look at my answer to the earlier question. So, let me now turn to your second identity. Before continuing, let me recycle some background and notation from my old answer.
- -Recall that if $V$ is an $n$-dimensional inner product space, then the - Hodge star is a linear isomorphism $\ast : \bigwedge^k V \to \bigwedge^{n-k} V$ for each $0 \leq k \leq n$, satisfying the - following: - -for $v$, $w \in V$, $\langle v,w\rangle \omega = v \wedge \ast w$ for $\omega = \ast 1$ the generator of $\bigwedge^n V$ satisfying - $\omega = e_1 \wedge \cdots \wedge e_n$ for any orthonormal basis - $\{e_k\}$ of $V$ with the appropriate orientation (e.g., the volume - form in $\bigwedge^n (\mathbb{R}^n)^\ast$); -in particular, $\ast \ast = \operatorname{Id}$ when $n$ is odd. - -Also, recall that the inner product on $\bigwedge^k V$ is given by $$\langle v_1 \wedge \cdots \wedge v_k, w_1 \wedge \cdots \wedge w_k \rangle = \det(\langle v_i,w_j \rangle). $$ -So, suppose that $V$ is $3$-dimensional, in which case we can define - the cross product of $a$, $b \in V$ by $$a \times b := \ast (a\wedge b).$$ - -Now, let $a$, $b$, $c$, $d \in \mathbb{R}^3$. Then -$$ - \langle a \times b, c \times d \rangle \omega = \langle \ast (a \wedge b), \ast (c \wedge d) \rangle \omega \\ -= \ast (a \wedge b) \wedge \ast \ast (c \wedge d) \\ -= \ast (a \wedge b) \wedge (c \wedge d)\\ -= (-1)^{1 \cdot 2} (c \wedge d) \wedge \ast (a \wedge b)\\ -= \langle c \wedge d, a \wedge b \rangle\omega \\ -= \langle a \wedge b, c \wedge d \rangle\omega\\ -= \begin{vmatrix} \langle a,c \rangle & \langle a,d \rangle \\ \langle b,c \rangle & \langle b,d \rangle \end{vmatrix}\omega\\ -= (\langle a,c \rangle \langle b,d \rangle - \langle a,d \rangle \langle b,c \rangle)\omega, -$$ -as was required. 
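As a quick independent sanity check (my own addition, separate from the exterior-algebra derivation), both Lagrange identities can be verified exactly on random integer vectors:

```python
import random

def dot(u, v):
    return sum(a*b for a, b in zip(u, v))

def cross(u, v):
    return (u[1]*v[2] - u[2]*v[1],
            u[2]*v[0] - u[0]*v[2],
            u[0]*v[1] - u[1]*v[0])

def scale(t, u):
    return tuple(t*x for x in u)

def sub(u, v):
    return tuple(a - b for a, b in zip(u, v))

random.seed(0)
for _ in range(1000):
    a, b, c, d = [tuple(random.randint(-9, 9) for _ in range(3)) for _ in range(4)]
    # a x (b x c) = b(a.c) - c(a.b)
    assert cross(a, cross(b, c)) == sub(scale(dot(a, c), b), scale(dot(a, b), c))
    # (a x b).(c x d) = (a.c)(b.d) - (a.d)(b.c)
    assert dot(cross(a, b), cross(c, d)) == dot(a, c)*dot(b, d) - dot(a, d)*dot(b, c)
print("both identities hold on 1000 random integer quadruples")
```

Since all the arithmetic is over the integers, each assertion checks the identity exactly, not merely up to floating-point error.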
Observe, in particular, that the right-hand side of your second identity looks like a $2 \times 2$ determinant because once you've unpacked all relevant definitions, it really is a $2 \times 2$ determinant, namely the inner product -$$ -\langle a \wedge b, c \wedge d \rangle := \begin{vmatrix} \langle a,c \rangle & \langle a,d \rangle \\ \langle b,c \rangle & \langle b,d \rangle \end{vmatrix} -$$ -of the bivectors $a \wedge b$, $c \wedge d \in \bigwedge^2 \mathbb{R}^3$.<|endoftext|> -TITLE: Find all solutions to the functional equation $f(x+y)-f(y)=\frac{x}{y(x+y)}$ -QUESTION [5 upvotes]: Find all solutions to the functional equation - $f(x+y)-f(y)=\cfrac{x}{y(x+y)}$ - -I've tried the substitution technique but I didn't really get something useful. -For $y=1$ I have -$F(x+1)-F(1)=\cfrac{x}{x+1}$ -A pattern I've found in this example is that if I iterate $n$ times the function $g(x)=\cfrac{x}{x+1}$ I have that $g^n(x)=\cfrac{x}{nx +1}$ ,which may be a clue about the general behaviour of the function ( ?) . -I am really kinda of clueless ,it seems like the problem is calling some slick way of solving it. -Can you guys give me a hint ? - -REPLY [8 votes]: $f(x+y)-f(y) -=\cfrac{x}{y(x+y)} -=\frac1{y}-\frac1{x+y} -$ -so -$f(x+y)+\frac1{x+y} -=f(y)+\frac1{y} -$. -Therefore, -$f(x)+\frac1{x}$ -is constant, -so -$f(x) -=d-\frac1{x} -$ -for some $d$. -Substituting this, -the $d$s cancel out, -so any $d$ works, -and the solution is -$f(x) -=d-\frac1{x} -$ -for any $d$.<|endoftext|> -TITLE: Evaluating $\int_{0}^{\pi}\ln (1+\cos x)\, dx$ -QUESTION [14 upvotes]: The problem is -$$\int_{0}^{\pi}\ln (1+\cos x)\ dx$$ -What I tried was using standard limit formulas like changing $x$ to $\pi - x$ and I also tried integration by parts on it to no avail. Please help. Also this is my first question so please tell if I am wrong somewhere. 
- -REPLY [7 votes]: Another way to solve it is to use the Cauchy Integral Formula -$$ f(z)=\frac{1}{2\pi i}\int_{\partial D}\frac{f(\xi)}{\xi-z}d\xi$$ -where $f(z)$ is analytic in $D$ and continuous in $\bar{D}$. Note that -$$ |1+\cos t+i\sin t|^2=2(1+\cos t).$$ -So -\begin{eqnarray} -&&\int_{0}^\pi\log(1+\cos t)\; dt\\ -&=&\frac12\int_{0}^{2\pi}\log(1+\cos t)\; dt\\ -&=&\int_{0}^{2\pi}\log|1+\cos t+i\sin t|\; dt-\pi\log2\\ -&=&\int_{0}^{2\pi}\Re\left[\log(1+\cos t+i\sin t)\right]\; dt-\pi\log2\\ -&=&\Re\left[\int_{0}^{2\pi}\log(1+\cos t+i\sin t)\; dt\right]-\pi\log2\\ -&=&\Re\left[\int_{|z|=1}\log(1+z)\; \frac{dz}{iz}\right]-\pi\log2\\ -&=&2\pi \log 1-\pi\log2\\ -&=&-\pi\log2. -\end{eqnarray}<|endoftext|> -TITLE: How do I prove a sequence is Cauchy -QUESTION [5 upvotes]: I was hoping someone could explain to me how to prove a sequence is Cauchy. I've been given two definitions of a Cauchy sequence: -$\forall \epsilon > 0, \exists N \in \mathbb{N}$ such that $n,m> N$ $\Rightarrow |a_n - a_m| ≤ \epsilon$ -and equivalently $\forall \epsilon > 0, \exists N \in \mathbb{N}$ such that $n> N$ $\Rightarrow |a_{n+p} - a_n| ≤ \epsilon$, $\forall p \in \mathbb{N}$ -I understand that proving a sequence is Cauchy also proves it is convergent and the usefulness of this property, however, it was never explicitly explained how to prove a sequence is Cauchy using either of these two definitions. I'd appreciate it if someone could explain to me how to prove a sequence is Cauchy, perhaps with $a_n = \sqrt{n+1} - \sqrt{n}$, or another example just for me to grasp the concept.
- -REPLY [2 votes]: For the particular example you chose, it is very easy to show directly that it converges to zero, because -$$\sqrt{n+1} - \sqrt{n} = \frac{1}{\sqrt{n+1} + \sqrt{n}}.$$ -Nevertheless, this same identity allows you to show that it is Cauchy, since -$$\left|\frac{1}{\sqrt{m+1} + \sqrt{m}} - \frac{1}{\sqrt{n+1} + \sqrt{n}}\right| \leq \left| \frac{1}{2\sqrt{m}} - \frac{1}{2\sqrt{n}} \right| \leq \frac{1}{2\sqrt{\min(m,n)}}< \epsilon$$ -whenever $\min(m,n) \geq N > \frac{1}{4\epsilon^2}$.<|endoftext|> -TITLE: Example of a relation that is reflexive but not symmetric -QUESTION [8 upvotes]: By definition, $R$, a relation in a set X, is reflexive if and only if $\forall x\in X$, $x\,R\,x$, and $R$ is symmetric if and only if $x\,R\,y\implies y\,R\,x$. -I think $x\,R\,x$ can also be symmetric when I read the definition, but I also feel there's something wrong or missing in my understanding. -Can you give an example of a relation that is reflexive but not symmetric? - -REPLY [4 votes]: Somewhere, there's a list that shows relations can be any combination of reflexive, symmetric and transitive (despite the famous false proof that symmetric + transitive -> reflexive). – barrycarter 3 hours ago - -Well, I couldn't find one to link to in a few minutes, so let me provide one here. -On the three-element set $\{a, b, c\}$, the following relations are: - -Transitive, symmetric: $R_0 = \emptyset$ -Transitive, not symmetric: $R_1 = \{(a,b)\}$ -Not transitive, not symmetric: $R_2 = \{(a,b), (b,c)\}$ -Not transitive, symmetric: $R_3 = \{(a,b), (b,a), (b,c), (c,b)\}$ - -None of the relations above are reflexive, but they can all be turned into reflexive relations, without affecting their transitivity or symmetry, by adding $R^* = \{(a,a), (b,b), (c,c)\}$ to them. -(In particular, $R_1 \cup R^* = \{(a,a), (a,b), (b,b), (c,c)\}$ is a reflexive, non-symmetric relation on the set $\{a, b, c\}$. 
Of course, the restriction of this relation to the two-element subset $\{a, b\}$ yields an even simpler example.)<|endoftext|> -TITLE: Compute $d(x^{100},P_{\le 98})$ where $P_{\le 98}$ is the subspace of polynomials of degree $\le 98$ -QUESTION [6 upvotes]: Compute $d(x^{100},P_{\le 98})$ where $P_{\le 98}$ is the subspace of polynomials of degree $\le 98$, looking at $C_{(2)}[-1,1]$, with $L_2$ norm. -I tried to look at a general polynomial $\sum_{i=0}^{98} a_ix^i$ and use $\|f-g\|=\int_{-1}^1 f(x)\overline{g(x)} \, dx$ but this is too unwieldy and I can't see where it leads. I also tried using the zero element but would end up with a non-zero result which isn't helpful much (if the question had asked about $x^{99}$ instead, I would have known that $\|x^{99}\|=0$ and there is no non-negative value smaller than 0). But this is not the case, so what should I do? -Edit: how about $x\in P_{\le 98}$? $\displaystyle\int_{-1}^1 x^{100} \, dx = \left.\frac{x^{102}} {102}\right|_{-1}^1=0$. Is it correct?
-In your edit, you integrated wrong and forgot to square the integrand.<|endoftext|> -TITLE: Augmentation ideal and the abelianization of $G$ -QUESTION [7 upvotes]: On a qual problem recently, I came across the following fact: - -If $G$ is a finite group, and $\mathfrak{a}$ is the augmentation ideal - of the integral group ring $\mathbb{Z}G$, then - $$\mathfrak{a}/\mathfrak{a}^2 \cong G/G', \qquad \text{as abelian groups.}$$ - -I understood the proof as far as it went, but I'm looking to absorb this on a deeper level. What is this "really" saying? Where does this fact come up? In what canonical resource "should" I have read about it already? Is it a simple fact about group cohomology in disguise? Is there a relation with the notion of the tangent space as $(I/I^2)^*$ for the ideal of functions vanishing at a point? - -REPLY [8 votes]: Yes, this is a simple fact about group (co)homology in disguise. -Recall that the abelianization is $H_1(G, \mathbb{Z})$. This suggests that you should be trying to relate the augmentation ideal to this homology group via some long exact sequence. The augmentation ideal $I$, as a $G$-module, by definition fits into a short exact sequence -$$0 \to I \to \mathbb{Z}G \to \mathbb{Z} \to 0$$ -which induces a long exact sequence in group homology the end of which goes -$$\dots H_1(G, \mathbb{Z} G) \to H_1(G, \mathbb{Z}) \to H_0(G, I) \to H_0(G, \mathbb{Z}G) \to H_0(G, \mathbb{Z}) \to 0.$$ -By freeness $H_1(G, \mathbb{Z} G) = 0$. We also have $H_0(G, \mathbb{Z}G) \cong \mathbb{Z}$, and in fact the natural map to $H_0(G, \mathbb{Z})$ is an isomorphism. By exactness we get -$$H_1(G, \mathbb{Z}) \cong H_0(G, I)$$ -and now it remains to verify that $H_0(G, I) \cong I/I^2$. In fact it's generally true that $H_0(G, M) \cong M/IM$. Throughout this argument there is no need to assume that $G$ is finite. 
-It's unclear how to interpret this in terms of tangent spaces since $\mathbb{Z}[G]$ is usually noncommutative.<|endoftext|> -TITLE: Question about creating a volume form for $SL(2,\mathbb{R})$ -QUESTION [5 upvotes]: This problem comes out of R.W.R. Darling (Differential Forms and Connections) ch.8. In the chapter he shows that if $M$ is an $n$-dimensional differential manifold immersed in $\mathbb{R}^{n+k}$, and $\Psi$ is an immersion from $\mathbb{R}^n \rightarrow \mathbb{R}^{n+k}$ that parametrizes the manifold, and $f$ is a submersion from $\mathbb{R}^{n+k} \rightarrow \mathbb{R}^k$ such that $f^{-1}(0) = M$, then we can construct a volume form $(\star df)$ on $M$ using the Hodge star, and that $(\star df)\Lambda^n \Psi_{*}$, which parametrizes the volume form, is given for $k=1$, and $\Psi(0) = r$ by -$$ \begin{vmatrix} D_1 f(r) & D_1 \Psi_1(0) & \cdots & D_n \Psi_1(0) \\ - \vdots & \vdots & \ddots & \vdots \\ - D_n f(r) & D_1 \Psi_n(0) & \cdots & D_n \Psi_n(0) \\ -\end{vmatrix}. $$ -The exercise is to do this with the submanifold $SL(2,\mathbb{R}) \subset GL(2,\mathbb{R})$ regarded as equivalent to $\mathbb{R}^{nxn}$, with $\Psi$ parametrizing $\begin{pmatrix} x & y \\ z & w \\ \end{pmatrix}$ as the image of $(x,y,z)$ and $f(x,y,z,w) = xw - yz - 1.$ I calculated this, and got: -$$ \begin{vmatrix} w & 1 & 0 & 0 \\ - -z & 0 & 1 & 0 \\ - -y & 0 & 0 & 1 \\ - x & \frac{-w}{x} & \frac{z}{x} & \frac{y}{x} \\ -\end{vmatrix}, $$ -where $w = \frac{1+yz}{x}$, which correctly evaluates at $I$ to give $-2dx\land dy\land dz$ as the parametrized volume operator. 
-The second part is where I have a problem, it says to extend this volume form in a left-invariant manner to $SL(2,\mathbb{R})$ by calculating $(L_A^{*}(\star df))(A^{-1})$, where $L_A$ is the left shift operator on $GL(2,\mathbb{R})$, with $L_A \begin{pmatrix} s & t \\ u & v \\ \end{pmatrix} = \begin{pmatrix} x & y\\ z & w\\ \end{pmatrix} \begin{pmatrix} s & t \\ u & v \\ \end{pmatrix}$ when $A = \begin{pmatrix} x & y\\ z & w\\ \end{pmatrix}.$ -I sense that I should take $(L_A^{*}(\star df)) = ((\star df)L_{A{*}})$ to start the process, but am confused as to how the push-forward fits into the calculation with respect to the parametrization $\Psi$. Could someone help me with how that works? And do I calculate $L_{A{*}}$ as an element of $\mathbb{R}^{nxn}$ which gives a differential as a 4x4 matrix? And if so, does it pre-multiply or post multiply which of the various pieces of the matrix determinant needed to form the volume form? (The problem is 8.4.5 in Darling, p.173, and this is a self-study question.) - -REPLY [3 votes]: First, I would suggest that you practice computing some pullbacks in a more basic setting. For example, if $f(u,v)=(u+v^2,uv,u^3+v^3)=(x,y,z)$ what is $f^*(x\,dy\wedge dz + z\,dx\wedge dy)$? You should learn how to do this without ever writing down a push-forward. -Second, you want to compute $L_{A^{-1}}^*(dx\wedge dy\wedge dz)(I)$. 
Since $A^{-1}=\begin{bmatrix}w&-y\\-z&x\end{bmatrix}$, we have -$$\begin{bmatrix} w&-y\\-z&x\end{bmatrix}\begin{bmatrix} dx &dy\\ dz &dw \end{bmatrix}=\begin{bmatrix} wdx-ydz&wdy-ydw\\-zdx+xdz&\dots\end{bmatrix},$$ -so $dx\wedge dy\wedge dz$ pulls back to -\begin{align*} -(wdx-ydz)\wedge (wdy-ydw)\wedge (-zdx+xdz)&=-dx\wedge dz\wedge (w dy-ydw)\\ &=-dx\wedge dz\wedge (w-yz/x)dy \\&=\frac 1x dx\wedge dy\wedge dz.\end{align*}<|endoftext|> -TITLE: Solving $x^{x^{x^{x^...}}}=a$ -QUESTION [5 upvotes]: Solving $$x^{x^{x^{x^...}}}=a$$ -My attempt is -$$x^{x^{x^{x^...}}}\log(x)=\log (a)$$ -$$a\log(x)=\log(a)$$ -$$x=a^{1/a}$$ -that means I can select any value of $a$ to get the root,but when I selected some values, I found them not satisfy the original equation, for example -$a=3$,$a=5$, and so on, -Is there a mistake in my procedures, -thanks for any help - -REPLY [6 votes]: Your answer is correct, at least for $x$ from $e^{-e}$ to $e^{\frac{1}{e}}$, by a proof by Euler. See this page for a much more detailed explanation, and check out the Tetration Forum online for a ton of information (much will likely be above the average mathematician (myself included) as the field is somewhat specialized, but I've always enjoyed browsing the site!)<|endoftext|> -TITLE: prove/disprove that if $f\circ g$ injective and g is surjective, then f is injective -QUESTION [6 upvotes]: Question would be: prove/disprove that if $f\circ g$ injective and g is surjective, then f is injective. -after thinking, I came to the conclusion that it's a proof. tried to prove it but it looks not that valid. Would appreciate your feedback and corrections. -Proof: - -because $f\circ g$ is injective, then g is injective as well. -because it's given that g is surjective, and we came to conclusion it's also injective -> it's reversible by $g^{-1}$ -if $f\circ g$ is injective and $g^{-1}$ is injective, then $f\circ g\circ g^{-1}$ injective as well. - -Let there be $a_1,a_2$. 
$a_1=a_2 \iff f\circ g\circ g^{-1}(a_1)=f\circ g\circ g^{-1}(a_2) \iff f\circ i(a_1) = f\circ i(a_2) \iff f(a_1)=f(a_2)$ -What do you think?? - -REPLY [7 votes]: I do not think your proof is wrong per se, but I would go about things a little more directly. -Suppose $f(x_1)=f(x_2)$. There exist $y_1,y_2$ such that $x_1=g(y_1)$ and $x_2=g(y_2)$. We have $f\circ g (y_1)=f\circ g (y_2)$. So $y_1=y_2$. So $x_1=x_2$.<|endoftext|> -TITLE: Matrix sequence convergence vs. matrix power series convergence: -QUESTION [5 upvotes]: Is my thinking correct? -The sequence $A^n$ converges if each entry converges to a finite number. -But for a matrix power series, $ I + A + \cdots + A^n + \cdots $ can never converge if it has, for example a "1" in the upper left corner, in entry $a_{11}$. Take, for simplicty, $A$ to be diagonal, so that the other diagonal entry is $.5$. This entry will eventually go to zero, but the "$1$" entry will accumulate to infinity. -So we really need, just as in the nth-term test for series convergence of real / complex numbers, for the matrices to tend to the zero-matrix, which I am guessing is a necessary but not sufficient condition for convergence. -What do you think? -Thanks, - -REPLY [7 votes]: I guess we're talking exclusively about matrix power series here. -It is indeed a necessary (but not sufficient) condition that, in order for $\sum A^n$ to converge, the $i,j$ entry of the matrix power $A^n$ must converge to zero as $n \to \infty$. If -$$ -A = \pmatrix{1\\&0.5\\ &&0.5} -$$ -then we indeed find that $\sum A^n$ diverges, since the $1,1$ entry is $1$ for every $n$. -So, if this is what you've been saying, then you're right so far. - -However, in practice, it is unnecessarily difficult to compute each $i,j$ entry of a matrix. In the matter of converging matrices, it is significantly easier to consider a submultiplicative matrix norm. A particularly nice norm of this type is the Frobenius norm. 
In particular, we define -$$ -\|A\| = \sqrt{\sum_{i=1}^n\sum_{j=1}^n |a_{ij}|^2} -$$ -What we can say then is that a necessary condition for the convergence of $\sum A^n$ is that $\|A^n\| \to 0$ as $n \to \infty$. -A more impressive result is that a sufficient condition for the convergence of $\sum A^n$ is that $\|A\| < 1$.<|endoftext|> -TITLE: Property of 111,111 -QUESTION [43 upvotes]: Whilst playing on my calculator, I noticed the following pattern. -$1^2-0^2=1$ -$6^2-5^2=11$ -${20}^2-{17}^2=111$ -${56}^2-{45}^2=1{,}111$ -${156}^2-{115}^2=11{,}111$ -To me, this is where it gets interesting: -$344^2-85^2=556^2-445^2=356^2-125^2=111{,}111.$ -My question: Is $111{,}111$ the first number with only $1$s as digits that can be represented as a difference of $2$ squares in $3$ different ways? Or, can $1,11,111,1111\,\mathrm{or}\,11111$ be written as $u^2-v^2=w^2-x^2=y^2-z^2$, where $u,v,w,x,y,z$ are all unique? -I lack the knowledge to write a computer program that would check possible solutions for me. Can anyone either prove that the previous numbers can't be written as I've stated or find a counterexample? 
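The computer check the question asks about takes only a few lines. A sketch (the helper name is mine): every representation $N=a^2-b^2$ with $a>b\ge 0$ corresponds to a factorization $N=(a-b)(a+b)$ into two factors of the same parity, so it suffices to enumerate divisors:

```python
def diff_square_reps(n):
    """All pairs (a, b) with a^2 - b^2 = n and a > b >= 0."""
    reps = []
    d = 1
    while d * d <= n:
        if n % d == 0:
            e = n // d                    # n = d * e with d <= e
            if (d + e) % 2 == 0:          # a, b are integers iff d, e share parity
                reps.append(((d + e) // 2, (e - d) // 2))
        d += 1
    return reps

for k in range(1, 7):
    repunit = int("1" * k)
    print(repunit, diff_square_reps(repunit))
```

Running this gives $1,1,2,2,2,16$ representations for $1,11,\dots,111111$ respectively, so $111{,}111$ is indeed the first repunit expressible as a difference of two squares in three (in fact sixteen) different ways.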
-
-REPLY [3 votes]: Note that, if we take $(s_n)=(1, 11, 111, 1111, \cdots),$ then we can easily see that $$s_n=\dfrac{10^n-1}{9}$$ and this leads us to finding the number of solutions of the Diophantine equation $\dfrac{10^n-1}{9}=x^2-y^2.$ Since $9=3^2$ this reduces to $$10^n=u^2-v^2+1$$ where both $u$ and $v$ are multiples of $\color{Green}{3}.$
-In general, for any $n\in\Bbb{N},$ $10^n-1$ has two factors $p, q$ such that $p\gt q\ge 1$
-satisfying $$10^n-1=u^2-v^2=(u-v)(u+v)=pq.$$ Therefore we can take $$u=\dfrac{p+q}{2} ,\,\,\,\text{and}\,\,\,\,\ v=\dfrac{p-q}{2}.$$
-Now,
-
-For $n=1$: $$9=3^2 \,\,\,\,\,\text{and}\,\,\,\,\, (p, q)\in\{(9,1),(3,3)\}.$$ For $n=2$: $$99=3^2\times 11 \,\,\,\,\,\text{and}\,\,\,\,\, (p, q)\in\{(99,1),(33,3),(11,9)\}.$$
- For $n=3$: $$999=3^3\times 37 \,\,\,\,\,\text{and}\,\,\,\,\, (p, q)\in\{(999,1),(333,3),(111,9),(37,27)\}.$$
- For $n=4$: $$9999=3^2\times 11\times 101 \,\,\,\,\,\text{and}\,\,\,\,\, (p, q)\in\{(9999,1),(3333,3),(1111,9),(909,11),(303,33),(101,99)\}.$$
- And so on. Finally, choose the pairs $p, q$ in which both factors are multiples of $3.$ The number of such pairs solves your problem.<|endoftext|>
-TITLE: Prove that $ \sum\limits_{n=-\infty}^\infty\frac{\cos\pi\sqrt{n^2+1}}{3+4n^2}=\int\limits_{-\infty}^\infty\frac{\cos\pi\sqrt{x^2+1}}{3+4x^2}dx $?
-QUESTION [29 upvotes]: How can one prove that the infinite sum of this function equals its integral,
-$$
-\sum_{n=-\infty}^\infty\frac{\cos\pi\sqrt{n^2+1}}{3+4n^2}=\int_{-\infty}^\infty\frac{\cos\pi\sqrt{x^2+1}}{3+4x^2}dx\ ? \tag{1}
-$$
-My analysis: Mathematica wasn't able to return any closed form for the integral or the sum. Then I checked this relation by numerical computations and it agreed to about 20 decimal places. 
-
-I know from this question Sum equals integral that the function $\text{sinc}\ x=\frac{\sin x}{x}$ has the same property
-$$
-\int_{-\infty}^{+\infty} {\rm sinc}\, x \, dx = \sum_{n = -\infty}^{+\infty} {\rm sinc}\, n = \pi
-$$
-I tried to find a closed form for the integral $(1)$ but couldn't.
-Motivation: I was challenged by a friend to prove this relation. I'm curious how one can prove it.
-Note: There has been a suggestion to straightforwardly apply the Euler-Maclaurin summation formula to prove this statement. Though I don't know why it cannot be applied in this case, I checked numerically whether the sum equals the integral for the similar-looking functions $f_1(x)=\frac{\cos\pi\sqrt{x^2+1}}{1+x^2}$ and $f_2(x)=\frac{\cos\pi\sqrt{x^2+1}}{2+x^2}$, but in both cases there was a difference of about 1% between the sum and the integral. In stark contrast to this, using the same algorithm for $\frac{\cos\pi\sqrt{x^2+1}}{3+4x^2}$ there wasn't any difference between the sum and the integral to at least 20 decimal places. So I think it is very unlikely that the 1% error can be attributed to computational error. 
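The numerical comparison described above is easy to reproduce in plain Python; the truncation length and Simpson grid below are my own rough choices, so only the first several digits are meaningful (not twenty):

```python
import math

def f(x, a0, a2):
    # f(x) = cos(pi*sqrt(x^2+1)) / (a0 + a2*x^2)
    return math.cos(math.pi * math.sqrt(x * x + 1.0)) / (a0 + a2 * x * x)

def lattice_sum(a0, a2, N=400):
    # sum over n = -N..N, exploiting evenness in n
    return f(0.0, a0, a2) + 2.0 * sum(f(float(n), a0, a2) for n in range(1, N + 1))

def integral(a0, a2, L=400.0, m=40000):
    # composite Simpson rule on [0, L], doubled by symmetry
    h = L / m
    s = f(0.0, a0, a2) + f(L, a0, a2)
    for i in range(1, m):
        s += (4.0 if i % 2 else 2.0) * f(i * h, a0, a2)
    return 2.0 * s * h / 3.0

print(lattice_sum(3.0, 4.0) - integral(3.0, 4.0))  # ~ 0 up to truncation error
print(lattice_sum(1.0, 1.0) - integral(1.0, 1.0))  # magnitude ~ 0.012, the "1%" gap
```

With these settings the $3+4x^2$ case agrees to roughly five decimal places, while the $1+x^2$ case shows a stable discrepancy of magnitude about $0.0118$.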
-
-REPLY [18 votes]: For any positive $a$, define
-$$f_a(x) =\frac{\cos\pi\sqrt{x^2+1}}{x^2+a^2}$$
-What you have observed is caused by the equality
-$$\sum_{n=-\infty}^\infty f_a(n) - \int_{-\infty}^\infty f_a(x) dx = \frac{2\pi}{a(e^{2\pi a} - 1)}\times
-\begin{cases}
-\cosh\pi\sqrt{a^2-1}, & a > 1\\
-\\
-\cos\pi\sqrt{1-a^2}, & a < 1
-\end{cases}
-\tag{*1}
-$$
-and the fact $$\cos\pi\sqrt{1-a^2} = \cos\frac{\pi}{2} = 0\quad\text{ when } a^2 = \frac34$$
-To see why $(*1)$ is true, we use the fact that $f_a(n)$ is an even function of $n$ to rewrite the LHS of $(*1)$ as
-$$2 \left[\sum_{n=0}^\infty f_a(n) - \left(\int_0^\infty f_a(x) dx + \frac12 f_a(0)\right)\right]$$
-This is similar to what you will find in
-the Abel-Plana formula${}^{\color{blue}{[1]}}$,
-
-For any function $f(z)$ which is
-
-continuous on $\Re z \ge 0$ and analytic on $\Re z > 0$
-$f(z) \sim o(e^{2\pi|\Im z|} )$ as $\Im z \to \pm \infty$, uniformly with respect to $\Re z$.
-$f(z) \sim O(e^{2\pi|\Im z|}/|z|^{1+\epsilon})$ as $\Re z \to +\infty$ ${}^{\color{blue}{[2]}}$.
-
-we have
-$$\sum_{n=0}^\infty f(n) = \int_0^\infty f(x) dx + \frac12 f(0) + i \int_0^\infty \frac{f(it) - f(-it)}{e^{2\pi t}-1} dt\tag{*2}$$
-
-However, $f_a(x)$ doesn't exactly satisfy the conditions above. It has two
-poles at $\pm a i$. After a little bit of tweaking of the contour used in the proof of the Abel-Plana formula, one finds:
-$$\text{LHS}(*1) = 2i \lim_{\epsilon\to 0^{+}} \int_0^\infty \frac{f_a(it+\epsilon) - f_a(-it+\epsilon)}{e^{2\pi t} - 1} dt$$
-For $t \ne a$, since $f_a(z)$ is even, the two pieces in $f_a(it+\epsilon) - f_a(-it+\epsilon)$ cancel out as $\epsilon \to 0^{+}$.
-For $t \approx a$, the two pieces can be combined into an integral of $\frac{f(it)}{e^{2\pi t}-1}$ over a circle centered at $a$. 
-As a result, the RHS reduces to
-$$(2i)(2\pi i)\text{Res}_{t = a}\left[\frac{\cos\pi\sqrt{1-t^2}}{(a^2 - t^2)(e^{2\pi t} - 1)}\right]
-= \frac{2\pi}{a(e^{2\pi a}-1)}\times\begin{cases}
-\cosh\pi\sqrt{a^2-1}, & a > 1\\
-\\
-\cos\pi\sqrt{1-a^2}, & a < 1
-\end{cases}
-$$
-Back to the special case $a^2 = \frac{3}{4}$, which corresponds to the equality in question.
-When $a^2 = \frac{3}{4}$, the "poles" of $f_a(z)$ at $z = \pm a i$ become removable singularities. The original version of the Abel-Plana formula in $(*2)$ applies. Since $f_a(x)$ is even, the last integral in $(*2)$ vanishes and the equality follows. This explains why the sum equals the integral for $\frac{\cos\pi\sqrt{x^2+1}}{3+4x^2}$ but not for other similar-looking integrands like $\frac{\cos\pi\sqrt{x^2+1}}{1+x^2}$ or $\frac{\cos\pi\sqrt{x^2+1}}{2+x^2}$.
-Notes
-
-$\color{blue}{[1]}$ For more details on the Abel-Plana formula and its derivation, please refer to $\S 8.3$ of Frank W. J. Olver's book: Asymptotics and Special Functions.
-$\color{blue}{[2]}$ In order to convert the AP formula for finite sums in Olver's book to the infinite sum here, I have added condition $(3)$ for this particular problem. The whole purpose of that is to force the following limits to zero.
-$$\lim\limits_{b\to\infty} f(b) = 0\quad\text{ and }\quad\lim\limits_{b\to\infty}\int_0^\infty \frac{f(b+it)-f(b-it)}{e^{2\pi t} - 1}dt = 0$$
-For other $f(z)$, if one can justify these limits, we can forget condition $(3)$ and the AP formula remains valid.<|endoftext|>
-TITLE: A Scalar times the Zero Vector
-QUESTION [5 upvotes]: I'm reading Linear Algebra Done Right by Sheldon Axler and the proof given in the book is the same as the one in the answer provided for this question.
-I tried to solve this before looking at the solution, and the way I did it was:
-
-Theorem: $a \cdot \vec0 = \vec 0 $ for every $a \in \mathbb F$
-
-Proof $\ $Let $a \in \mathbb F$, then
-\begin{align}a \cdot \vec0
-&= a \cdot \langle 0_1,0_2, \ldots ,0_n\rangle \tag{Def. 
of a vector}\\
-&= \langle a \cdot0_1,a \cdot0_2, \ldots ,a \cdot0_n \rangle \tag{Def. of Scalar Multiplication} \\
-&= \langle 0_1,0_2,...,0_n \rangle \\
-&= \vec 0
-\end{align}
-Hence, $a \cdot \vec0 = \vec 0 $, the desired result.
-
-Is there anything wrong with this proof? For example, I didn't explain why $a \cdot 0_j = 0$; do I have to do so?
-Also, doesn't this proof provide more insight, in terms of using basic definitions rather than just vector algebra?
-Is there a way to prove this result besides this and the one given in the link?
-
-REPLY [4 votes]: Use the fact that the zero vector is the neutral element for vector addition, so it is equal to itself minus itself. Then use the distributive law for scalar multiplication.<|endoftext|>
-TITLE: Topological space in which the principal filters are the only filters that converge
-QUESTION [5 upvotes]: Let $(X, \mathcal{T})$ be a topological space in which only the principal filters converge. Show that $\mathcal{T}$ is the discrete topology.
-It is similar to one of my previous questions (link: Topological space in which every filter converges to every point), but it needs a different approach that I can't (yet) come up with.
-Definitions used (for the sake of consistency):
-
-A filter is a nonempty collection $\mathcal{F}$ of subsets of $X$ for which the following properties hold: $\mathcal{F}$ does not contain the empty set; for every $F \in \mathcal{F}$ and every $G$ such that $F \subset G$, $G \in \mathcal{F}$ holds; for every $F \in \mathcal{F}$ and $G \in \mathcal{F} $ also $F \cap G \in \mathcal{F}$.
-A principal filter generated by a set $A \subset X$ is the set $\{F \subset X \vert A \subset F\}$.
-A filter $\mathcal{F}$ converges to $x \in X$ iff the neighbourhood filter of $x$ is contained in $\mathcal{F}$ or, equivalently, for every $V$ in the neighbourhood filter of $x$, there exists an element $F \in \mathcal{F}$ such that $F \subset V$. 
-A subset $V \subset X$ is called a neighbourhood of $x$ if there exists an open set $T \in \mathcal{T}$ such that $x \in T \subset V$. -The neighbourhood filter of $x$ is the set of all neighbourhoods of $x$. - -REPLY [6 votes]: This isn't true; for instance, if $X$ has only finitely many points, then every filter on $X$ is principal, so the condition holds rather trivially for any topology on $X$. -It is true if you assume $X$ is $T_1$. To show this, note that for any $x\in X$, the neighborhood filter of $x$ converges to $x$ and thus must be principal, generated by some set $A$. If $A$ contains any point other than $x$, you can now use the $T_1$ hypothesis to get a contradiction. - -REPLY [4 votes]: [This is the third version of this answer. Thanks to Eric Wofsey for corrections on prior versions.] -Consider the following properties on a topological space $X$: -(i) Every convergent filter on $X$ is principal. -(ii) Every point of $X$ has a finite neighborhood. -(iii) Every point of $X$ has a minimal neighborhood. -(iv) Every neighborhood filter on $X$ is principal. -(v) Arbitrary intersections of open subsets are open. -I claim (i) $\iff$ (ii) $\implies$ (iii) $\iff$ (iv) $\iff$ (v). Spaces satisfying the last three properties are called Alexandroff spaces. I have (evidently!) never met conditions (i) $\iff$ (ii) before. -Proofs: (iii) $\iff$ (iv) is immediate, since a filter is principal iff it has a minimal element. (iii) $\iff$ (v) is straightforward (and standard). (ii) $\implies$ (iii): if $U$ is a finite neighborhood of a point $x$ which is not minimal, then there is a neighborhood $V$ of $x$ not containing $U$ and thus $U \cap V$ is a strictly smaller finite neighborhood. Repeating this process we get to a minimal neighborhood. -(i) $\implies$ (ii): If (i) holds, then certainly (iv) holds, hence also (iii) holds. 
If for some point $x$ the minimal neighborhood $U_x$ (i.e., the intersection of all neighborhoods of $x$) is infinite, then the collection of subsets of $X$ which contain all but finitely many elements of $U_x$ is a nonprincipal filter converging to $x$. -(ii) $\implies$ (i): If $\mathcal{F} \rightarrow x$ then $\mathcal{F}$ contains the neighborhood filter at $x$, which is the principal filter associated to a finite set $U_x$. The filters containing the principal filter associated to a finite set $U_x$ are the principal filters associated to the finite nonempty subsets of $U_x$. -N.B.: (iii) does not imply (ii): endow any infinite set $X$ with the indiscrete topology. Then $X$ itself is the minimal neighborhood of all of its points. Moreover every filter on $X$ converges to every point on $X$, so the Frechet filter is convergent and nonprincipal. -In particular, as Eric points out, every finite space has property (i), and every separated ($T1$) space satisfying property (i) is discrete.<|endoftext|> -TITLE: Homotopy cardinality of the category of categories -QUESTION [6 upvotes]: The category of finite sets has homotopy cardinality $e$, because -$$ -|{\bf FinSet}|=\sum_{n=0}^{\infty}\frac{1}{\left|\operatorname{Aut}\ [n]\right|}=\sum_{n=0}^{\infty}\frac{1}{n!}. -$$ -What is the homotopy cardinality of ${\bf FinCat}$, the category of finite categories? - -Sets are $0$-categories. According to the periodic table of categories, we have -$$ - |{\bf FinCat_{-2}}|=1,\qquad |{\bf FinCat_{-1}}|=2,\qquad |{\bf FinCat_{0}}|=e. -$$ -Is there a reason to expect this to be an increasing sequence? - -REPLY [9 votes]: There are infinitely many inequivalent finite categories with no automorphisms, so the homotopy cardinality is infinite. For instance, if $G=(V,E)$ is any finite graph, we can consider the set $V\sqcup E$ to be partially ordered by saying an edge is greater than both of its vertices. 
Clearly the graph can be recovered from this poset, so every automorphism of the poset gives an automorphism of the graph. But there are infinitely many finite graphs with no nontrivial automorphisms (in fact, the probability that a random finite graph has no nontrivial automorphisms goes to $1$ as the number of vertices goes to $\infty$). So we get infinitely many finite posets with no nontrivial automorphisms.<|endoftext|> -TITLE: Examples where $H\ne \mathrm{Aut}(E/E^H)$ -QUESTION [7 upvotes]: If $E/F$ is a field extension, and $H$ is a subgroup of $\mathrm{Aut}(E/F)$, it is quite trivial to see that $H\subset \mathrm{Aut}(E/E^H)$. -Since the theorem only shows the inclusion relationship, I think there must be a lot of examples where $\subset$ is actually $\subsetneq$. But due to lack of knowledge I can't come up with one. -Could you help me with this? Best regards. - -REPLY [3 votes]: If $H$ is finite, we always have equality. See these notes by Noam Elkies. -There are counterexamples, when $H$ is infinite. -If we allow a transcendental extension, then the following example is easy to grasp. Let $E=\Bbb{Q}(x)$ be the field of fractions of the polynomial ring $\Bbb{Q}[x]$. Let $\sigma$ be the automorphism gotten by extending $x\mapsto x+1$ in the obvious way, and let $H=\langle\sigma\rangle$ be the infinite cyclic group. Then it is not difficult to show (for an argument see an earlier answer of mine) that $E^H=\Bbb{Q}$. But $E$ has many other $\Bbb{Q}$-automorphisms. They are all of the form -$$ -x\mapsto \frac{ax+b}{cx+d} -$$ -with $ad-bc\neq0$ (= a quotient group of invertible 2x2 matrices by scalar matrices). -If you want an example of an algebraic extension, then it is a bit more complicated. Let $E$ be the algebraic closure of $\Bbb{F}_p$, and let $H$ be the group of automorphisms generated by the Frobenius mapping $F:z\mapsto z^p$. Because a degree $p$ polynomial can have at most $p$ fixed points we see that $E^H=\Bbb{F}_p$. 
But $H$ is not all of $Aut(E/\Bbb{F}_p)$. The recipe for all $\Bbb{F}_p$-automorphisms of $E$ is described in this answer by Ted. Here $H$ is only a dense subgroup of $Aut(E/\Bbb{F}_p)$ (w.r.t. the Krull topology). You need to read a bit about Galois theory of infinite algebraic extensions for the use of topological concepts to make sense here.<|endoftext|>
-TITLE: Prove that ${x^7-1 \over x-1}=y^5-1$ has no integer solutions
-QUESTION [14 upvotes]: I want to show that $${x^7-1 \over x-1}=y^5-1$$
-cannot have any integer solutions. The only observation I have made so far is that the left hand side is the $7$th cyclotomic polynomial
-$$\Phi_7(x)= {x^7-1 \over x-1}=x^6+x^5+x^4+x^3+x^2+x+1$$
-If I remember correctly, cyclotomic polynomials are irreducible. Now can I use this property to arrive at the conclusion, or should I try to approach by contradiction and assume $\Phi_7(a)=b^5-1$ for some integers $a$ and $b$? The only problem is that I don't see where I would look for an easy contradiction. Any hints?
-Edit
-I also see that the right hand side can be factored as
-$$(y-1)(y^4+y^3+y^2+y+1)=(y-1)\Phi_5(y)\implies \frac{\Phi_7(x)}{\Phi_5(y)}=y-1$$
-which seems like it could give the result if we prove that the two cyclotomic polynomials have no common factors. How could this be done?
-Edit: This is IMO2006 Shortlisted Problem N5 (RUS).
-
-REPLY [2 votes]: Since this is an IMO2006 shortlisted problem, here is a solution from The IMO Compendium: A Collection of Problems Suggested for The International Mathematical Olympiads: 1959-2009:
-
-Every prime divisor $p$ of $\frac{x^7-1}{x-1}=x^6+\dots+x+1$ is congruent to $0$ or $1$ modulo $7$. Indeed, if $p \mid x-1$, then $\frac{x^7-1}{x-1} \equiv 1+\dots+1\equiv 7 \pmod{p}$, so $p=7$; otherwise the order of $x$ modulo $p$ is $7$ and hence $p\equiv 1 \pmod{7}$. Therefore every positive divisor $d$ of $\frac{x^7-1}{x-1}$ satisfies $d \equiv 0$ or $1\pmod{7}$.
-Now suppose $(x,y)$ is a solution of the given equation. 
Since $y-1$ and $y^4+y^3+y^2+y+1$ divide $\frac{x^7-1}{x-1}=y^5-1$, we have $y\equiv 1$ or $2$ and $y^4+y^3+y^2+y+1\equiv 0$ or $1\pmod{7}$. However, $y\equiv 1$ or $2$ implies that $y^4+y^3+y^2+y+1\equiv 5$ or $3 \pmod{7}$, which is impossible.<|endoftext|> -TITLE: Contest math problem: $\sum_{n=1}^\infty \frac{\{H_n\}}{n^2}$ -QUESTION [10 upvotes]: $$\sum_{n=1}^\infty \frac{\{H_n\}}{n^2}$$ -I have managed to prove that it converges, but am having trouble with a closed form. This came from a school contest from last year, but can't really figure it out. -I came up with a numeric solution, but am having lots of trouble with the closed form. Thanks in advance. - -REPLY [3 votes]: Not an answer to the actual question, but some thoughts, some useful references and a derivation of a good approximation to the sum. - -We can rewrite the sum as (see e.g. this answer or the answer above for the $2\zeta(3)$ evaluation) -$$\sum_{n=1}^\infty \frac{\{ H_n \}}{n^2}=\sum_{n=1}^\infty \frac{H_n}{n^2}-\sum_{n=1}^\infty \frac{\lfloor H_n \rfloor}{n^2} = 2\zeta(3) - \sum_{n=1}^\infty \frac{\lfloor H_n \rfloor}{n^2}$$ -and we can further rewrite -$$ -\begin{array}{cll}\tag{1} -\sum_{n=1}^\infty \frac{\lfloor H_n \rfloor}{n^2} &=& 1\left(\frac{1}{1^2} + \ldots + \frac{1}{3^2}\right) \\&+& 2\left(\frac{1}{4^2} + \ldots + \frac{1}{10^2}\right) \\&+& 3\left(\frac{1}{11^2} + \ldots + \frac{1}{30^2}\right) \\&+& 4\left(\frac{1}{31^2} + \ldots + \frac{1}{82^2}\right) \\&+& \ldots\\ -&=& \sum_{k=1}^\infty k\sum_{j=n_k}^{n_{k+1}-1}\frac{1}{j^2} -\end{array} -$$ -where $n_k$ are defined as the smallest integer where $\lfloor H_n\rfloor = k$. Since $H_n = \log(n) + \gamma + \frac{1}{2n} + \mathcal{O}(n^{-2})$ we expect $n_k = \lfloor e^{k-\gamma} + \frac{1}{2}\rfloor$. This formula holds for $k\leq 100$ and heuristics suggest that it could hold for all $k$, see comments in OEIS A002387 and R. P. Boas, Jr. and J. W. 
Wrench, Jr., Partial Sums of the Harmonic Series, The American Mathematical Monthly.
-Evaluating the sum requires knowledge of $n_k$, and since this is not known in the literature (as far as I have checked) it seems unlikely that the problem has a (known) closed form solution. Even if the simple formula for $n_k$ holds it still seems pretty hopeless to evaluate the sums; however, it gives us a good starting point for an approximation. We can approximate
-$$\sum_{j=n_k}^{n_{k+1}-1}\frac{1}{j^2} \approx \int_{n_k}^{n_{k+1}}\frac{{\rm d}x}{x^2} = \frac{(n_{k+1}-n_k)}{n_kn_{k+1}}$$
-Now if we take $n_k = e^{k-\gamma}$ in the expression above then the integral is $(e-1)e^{\gamma-1-k}$ and the resulting sum can be evaluated analytically. A good approximation can be found by explicitly summing the first $K-1$ brackets in $(1)$ and using the approximation above to estimate the rest of the terms. This leads to
-$$\sum_{n=1}^\infty \frac{\{H_n\}}{n^2} \approx 2\zeta(3)-\sum_{k=1}^{K-1} k\sum_{j=n_k}^{n_{k+1}-1}\frac{1}{j^2} -\frac{K(e-1)+1}{(e-1)e^{K-\gamma}}$$
-Taking $K=5$ gives us
-$$2\zeta(3)-2.00822-\frac{5 (e-1) + 1}{(e-1)e^{5-\gamma}} \approx 0.32890$$
-which is within $0.2\%$ of the exact result (found by summing numerically up to $n=10^7$).<|endoftext|>
-TITLE: Solve the equation $27 \sin(x) \cdot \cos^2(x) \cdot \tan^3(x) \cdot \cot^4(x) \cdot \sec^5(x) \cdot \csc^6(x) = 256$.
-QUESTION [8 upvotes]: Solve the equation $27 \sin(x) \cdot \cos^2(x) \cdot \tan^3(x) \cdot \cot^4(x) \cdot \sec^5(x) \cdot \csc^6(x) = 256$.
-
-I was hoping some things would cancel out when I expanded this, but nothing did. I think using inequalities will help.
-
-REPLY [3 votes]: There is actually quite an ingenious solution to this question. We have $\sin^6(x)\cos^2(x) = \dfrac{27}{256}$. Now we can write this as $3^3\dfrac{\sin^2(x)}{3}\dfrac{\sin^2(x)}{3}\dfrac{\sin^2(x)}{3}\cos^2(x) = \dfrac{27}{256}$. 
Then by AM-GM $\sqrt[4]{\dfrac{\sin^2(x)}{3}\dfrac{\sin^2(x)}{3}\dfrac{\sin^2(x)}{3}\cos^2(x)} \leq \dfrac{1}{4} $. Thus, $3^3\dfrac{\sin^2(x)}{3}\dfrac{\sin^2(x)}{3}\dfrac{\sin^2(x)}{3}\cos^2(x) \leq \dfrac{27}{256}$. Therefore equality holds iff $\sin^2(x) = 3\cos^2(x) \implies 3-4\sin^2(x) = 0 \implies \sin^2(x) = \dfrac{3}{4}$ and the solution proceeds.<|endoftext|> -TITLE: Closed form for an integral involving the incomplete Gamma function? -QUESTION [13 upvotes]: Let $\alpha>1$, $K>1$ and $n \in \mathbb{N}^+$. What is a closed form solution to a tough integral? $$I(\alpha,K,n)=-\int_{-\infty }^{\infty } \frac{2 i e^{-i K u} (K u-i) \Bigl[\alpha \, (-i u)^{\alpha } \,\Gamma (-\alpha ,-i u)\Bigr]^n}{u^2} \, du,$$ -where $\Gamma(.,.)$ is the incomplete Gamma function: -$\Gamma (a,z)=\int _z^{\infty }d t\, t^{a-1} e^{-t}$. -I tried all manner of substitutions and various combinations of integration by parts. - -REPLY [2 votes]: This is a partial answer, linked with the calculations via multiple convolution. 
-$\color{brown}{\textbf{Representation of the integral.}}$ -Is known that - -$$s^p\Gamma(-p,s) = E_{p+1}(s),$$ -$$\dfrac1{2\pi i} \int\limits_{-i\infty}^{+i\infty}F(s)\,e^{ts}\,\text ds -= \mathcal L^{-1}_{s\mapsto t}(F(s)),$$ -$$\mathcal L^{-1}_{s\mapsto t}(e^{s} E_{p+1}(s)) = \dfrac1{(t+1)^{p+1}},$$ - -where $\;E_{p+1}(s)\;$ is the exponential integral $\;E\;$ and $\;\mathcal L^{-1}_{s\mapsto t}\;$ is the inverse Laplace transform from $\;s\;$ to $\;t.$ -Let $\;-iu =s,\quad K=n+t,\;$ then -$$I(\alpha,n+t,n) = \dfrac{4\pi}{2\pi i}\int\limits_{-i\infty}^{+i\infty}\dfrac{(n+t)s-1}{s^2}\big(\alpha s^\alpha\Gamma(-\alpha, s)\big)^ne^{(n+t)s}\text ds$$ -$$ = \dfrac{4\pi\alpha^n}{2\pi i}\int\limits_{-i\infty}^{+i\infty}\dfrac{(n+t)s-1}{s^2}\big(e^s\,E_{\alpha+1}(s)\big)^n\,e^{ts}\,\text ds -=4\pi\alpha^n\mathcal L^{-1}_{s\,\mapsto t}\left(\dfrac{(n+t)s-1}{s^2}\big(e^s\,E_{\alpha+1}(s)\big)^n\right),$$ -$$=4\pi\alpha^n\left((n+t)\int\limits_0^t G_n(\alpha,\lambda)\,\text d\lambda --\int\limits_0^t\int\limits_0^\lambda G_n(\alpha,\mu)\,\text d\mu\,\text d\lambda \right)$$ -$$=4\pi\alpha^n\left((n+t)\int\limits_0^t G_n(\alpha,\lambda)\,\text d\lambda --\int\limits_0^t\int\limits_0^\lambda G_n(\alpha,\mu)\,\text d\mu\,\text d(n+\lambda) \right)$$ -$$\;\overset{\text{IBP}}{=\!=}\; 4\pi\alpha^n \int\limits_0^t(n+\lambda)G_n(\alpha,\lambda)\,\text d\lambda,$$ -$$I(\alpha,n+t,n)=4\pi\alpha^n\int\limits_0^t(n+\lambda) -G_n(\alpha,\lambda)\,\text d\lambda,\tag1$$ -where -$$G_n(\alpha,t) = \underbrace{\dfrac1{(t+1)^{\alpha+1}}*\dfrac1{(t+1)^{\alpha+1}}*\dots*\dfrac1{(t+1)^{\alpha+1}}}_n\tag2$$ -is the multiple convolution. 
-$\color{brown}{\mathbf{Case\; n = 1.}}$ -\begin{cases} -G_1(\alpha,t)=\dfrac1{(t+1)^{\alpha+1}}\\ -\big(G_1(\alpha,t)\big)'_t = -\dfrac1{(\alpha+1)(t+1)^{\alpha+2}}\\ -\int\limits_0^t G_1(\alpha,t)\,\text dt -=\dfrac1\alpha \left(1 - \dfrac1{(t+1)^{\alpha}}\right).\tag3 -\end{cases} -$$I(\alpha,t+1,1) = 4\pi\alpha\int\limits_0^t \dfrac{\text d\lambda} {(\lambda+1)^{\alpha}} -= \dfrac{4\pi\alpha}{\alpha-1}\left(1-\dfrac1{(t+1)^{\alpha-1}}\right),$$ -$$I(\alpha,K,1) = \dfrac{4\pi\alpha}{\alpha-1}\left(1-\dfrac1{K^{\alpha-1}}\right).\tag4$$ -$\color{brown}{\mathbf{Case\; n = 2.}}$ -$$G_2(\alpha,t) = G_1(\alpha,t) * \dfrac1{(t+1)^{\alpha+1}} -=\int\limits_0^t G_1(\alpha,u)\,\dfrac{\text du}{(t-u+1)^{\alpha+1}},\tag5$$ -$$I(\alpha,t+2,2) = 4\pi\alpha^2\int\limits_0^t \int\limits_0^\lambda \dfrac{(\lambda+2) G_1(\alpha,\mu)}{(\lambda-\mu+1)^{\alpha+1}}\,\text d\mu\,\text d\lambda$$ -$$= 4\pi\alpha^2\int\limits_0^t \int\limits_0^\mu \dfrac{\big((\lambda-\mu+1) + (\mu+1)\big) G_1(\alpha,\mu)}{(\lambda-\mu+1)^{\alpha+1}}\,\text d\lambda\,\text d\mu$$ -$$= 4\pi\alpha^2\int\limits_0^t \int\limits_0^\mu \left( -\dfrac{G_1(\alpha,\mu)}{(\lambda-\mu+1)^{\alpha}} -+\dfrac{G_1(\alpha-1,\mu)}{(\lambda-\mu+1)^{\alpha+1}} -\right)\,\text d\lambda\,\text d\mu$$ -$$= -4\pi\alpha^2\int\limits_0^t \left( -\dfrac{G_1(\alpha,\mu)}{(\alpha-1)(\lambda-\mu+1)^{\alpha-1}} -+\dfrac{G_1(\alpha-1,\mu)}{\alpha(\lambda-\mu+1)^{\alpha}} -\right)\bigg|_0^\mu\,\text d\mu$$ -$$= \dfrac{4\pi\alpha}{\alpha-1}\int\limits_0^t -\big(\alpha G_1(\alpha,\mu)(1-G_1(\alpha-2,-\mu)) -+(\alpha-1)G_1(\alpha-1,\mu)(1-G_1(\alpha-1,-\mu))\big) -\,\text d\mu$$ -$$= \dfrac{4\pi\alpha}{\alpha-1}\int\limits_0^t -(1-G_1(\alpha-2,-\mu))\,\text d G_1(\alpha-1,\mu)$$ -$$- 4\pi\alpha \int\limits_0^t G_1(\alpha-1,\mu)(1-G_1(\alpha-1,-\mu))) -\,\text d\mu$$ -$$= \dfrac{4\pi\alpha}{\alpha-1} -(1-G_1(\alpha-2,-\mu))\,G_1(\alpha-1,\mu)\bigg|_0^t -- 4\pi\alpha\int\limits_0^t G_1(\alpha-1,\mu) G_1(\alpha-1,-\mu) \,\text d\mu$$ -$$- 
4\pi\alpha (1-G_1(\alpha-2,-t))) -+ 4\pi\alpha \int\limits_0^t G_1(\alpha-1,\mu)G_1(\alpha-1,-\mu)\,\text d\mu$$ -$$= \dfrac{4\pi}{\alpha-1}(1-G_1(\alpha-2,-t))\big(\alpha G_1(\alpha-1,t) -- \alpha+1)\big),$$ -$$I(\alpha,K,2) = \dfrac{4\pi}{\alpha-1}(1-G_1(\alpha-2,K-2))\big(\alpha G_1(\alpha-1,K-2) -- \alpha+1)\big),\tag6$$ -$$G_2(\alpha,t)=\int\limits_0^t\dfrac{\text du}{(u+1)^{\alpha+1}(t-u+1)^{\alpha+1}} -=\int\limits_1^{t+1}\dfrac{\text du}{u^{\alpha+1}(t+2-u)^{\alpha+1}}\\[4pt] -=\dfrac1{(t+2)^{2\alpha+1}}\int\limits_1^{t+1}\dfrac{\text d\left(\frac u{t+2}\right)}{\left(\frac u{t+2}\right)^{\alpha+1}\left(1-\frac u{t+2}\right)^{\alpha+1}} -=\dfrac1{(t+2)^{2\alpha+1}}\int\limits_{\large\frac1{t+2}}^{\large\frac{t+1}{t+2}} s^{-\alpha-1}(1-s)^{-\alpha-1}\text ds,$$ -$$G_2(\alpha,t) = \dfrac{\text B_{\large\frac{t+1}{t+2}}(-\alpha,-\alpha) --\text B_{\large\frac1{t+2}}(-\alpha,-\alpha)}{(t+2)^{2\alpha+1}}.\tag7$$ -Obtained results show that algebraic closed form of the given integral can exist even if the convolution has not such form. -However, I have not any closed forms for $n\ge3.$<|endoftext|> -TITLE: If $f,g$ integrable then $f(x-y)g(y)$ integrable for almost every $x$ -QUESTION [6 upvotes]: I am trying to prove that for two integrable functions $f,g: \mathbb{R}^n \rightarrow \mathbb{R}$ the function $y \mapsto f(x-y)g(y)$ is integrable for almost every $x$. By using the holder inequality I reduced this to showing that if a function is integrable then also its square is integrable but after browsing a bit I found this so I guess this leads nowhere. Any hints are welcomed. - -REPLY [8 votes]: Hint: consider $\int\int |f(x-y)g(y)| \, dy \, dx$, and use Tonelli's theorem to reverse the order of integration. If you can show this integral is finite, then $\int |f(x-y)g(y)| \, dy$ is finite for almost every $x$.<|endoftext|> -TITLE: Integral $\int_0^{1/2}\arcsin x\cdot\ln^3x\,dx$ -QUESTION [8 upvotes]: It's a follow-up to my previous question. 
-
-Can we find an anti-derivative
-$$\int\arcsin x\cdot\ln^3x\,dx$$
-or, at least, evaluate the definite integral
-$$\int_0^{1/2}\arcsin x\cdot\ln^3x\,dx$$
-in a closed form (ideally, as a combination of elementary functions and polylogarithms)?
-
-REPLY [2 votes]: We will outline a way forward, leaving some of the work to the reader.
-Denote the integral of interest by $I$ where
-$$I=\int \arcsin(x) \log^3(x)\,dx \tag 1$$
-Integrating $(1)$ by parts by letting $u=\arcsin(x)$ and $v=x\left(\log^3(x)-3\log^2(x)+6\log(x)-6\right)$, we find that
-$$\begin{align}
-I&=x\arcsin(x)\left(\log^3(x)-3\log^2(x)+6\log(x)-6\right)\\\\&-\int \left(\frac{\log^3(x)-3\log^2(x)+6\log(x)-6}{\sqrt{1-x^2}}\right)\,x\,dx \tag 2
-\end{align}$$
-
-Next, denote the integral on the right-hand side of $(2)$ by $J$. Enforcing the substitution $x=\sqrt{1-y^2}$ yields
-$$\begin{align}
-J&=-\int \left(\frac{\log^3(x)-3\log^2(x)+6\log(x)-6}{\sqrt{1-x^2}}\right)\,x\,dx\\\\
-&=J_3+J_2+J_1+J_0
-\end{align}$$
-where
-$$\begin{align}
-J_3&=\int \log^3(\sqrt{1-y^2})\,dy \tag 3\\\\
-J_2&=-3\int \log^2(\sqrt{1-y^2})\,dy \tag 4\\\\
-J_1&=6\int \log(\sqrt{1-y^2})\,dy \tag 5\\\\
-J_0&=-6\int 1\,dy \tag 6
-\end{align}$$
-
-The integrals in $(5)$ and $(6)$ can be evaluated in terms of elementary functions with
-$$J_0=-6y$$
-and
-$$J_1=3y\log(1-y^2)-6y-3\log(1-y)+3\log(1+y)$$
-
-The integrals in $(3)$ and $(4)$ can be expressed in terms of polylogarithm functions. 
For $J_2$ we can write -$$\begin{align} -J_2&=-3\int \log^2(\sqrt{1-y^2})\,dy\\\\ -&=-\frac34 \left(K_1+K_2+K_3\right) -\end{align}$$ -where -$$\begin{align} -K_1&=\int \log^2(1-y)\,dy \tag 7\\\\ -K_2&=\int \log^2(1+y)\,dy \tag 8\\\\ -K_3&=2\int \log(1-y)\log(1+y)\,dy \tag 9 -\end{align}$$ -The integrals $K_1$ and $K_2$ can be written in closed form with -$$\begin{align} -K_1&=(y-1)\left(\log^2(1-y)-2\log(1-y)+2\right)\\\\ -\end{align}$$ -and -$$\begin{align} -K_2&=(y+1)\left(\log^2(1+y)-2\log(1+y)+2\right)\\\\ -\end{align}$$ - -For $K_3$ we integrate by parts with $u=\log(1-y)$ and $v=(y+1)\log(y+1)-y$ and obtain -$$\begin{align} -K_3&=2(y+1)\log(1-y^2)-2y\log(1-y)+2\int \frac{(y+1)\log(y+1)-y}{1-y}\,dy\\\\ -&=2(y+1)\log(1-y^2)-2y\log(1-y)+2y+2\log(1-y)+2\int \frac{(y+1)\log(y+1)}{1-y}\,dy\\\\ -&=2(y+1)\log(1-y)+2y\left(1-\log(1-y)\right)+4\int \frac{\log(1+y)}{1-y}\,dy \tag{10} -\end{align}$$ -To evaluate the integral in $(10)$, we make the substitution $y=1-2z$. Then, -$$\begin{align} -\int \frac{\log(1+y)}{1-y}\,dy&=-\log(2)\log(w)-\int \frac{\log(1-w)}{w}\,dw\\\\ -&=-\log(2)\log\left(\frac{1-y}{2}\right)+\text{Li}_2\left(\frac{1-y}{2}\right) -\end{align}$$ - -The integral $J_3$ can be evaluated in terms of the dilogarithm function $\text{Li}_2$ and trilogarithm function $\text{Li}_3$ using a similar approach to the one used herein to evaluate $K_2$. We will leave that very tedious analysis to the reader.<|endoftext|> -TITLE: Is the kth central moment less than the kth raw moment for even k? -QUESTION [6 upvotes]: If $X$ is a real-valued random variable, then the $k$th raw moment is $\mathbb{E}[X^k]$, while the $k$th central moment is $\mathbb{E}[(X-\mathbb{E}[X])^k]$. If $k$ is even, is the $k$th central moment always bounded above the $k$th raw moment? -When $k = 2$, then $\mathbb{E}[(X-\mathbb{E}[X])^2] = \mathbb{E}[X^2]-\mathbb{E}[X]^2$, and because $\mathbb{E}[X]^2$ is always positive, it follows that this is less than or equal to $\mathbb{E}[X^2]$. 
But I'm having trouble extending this to larger moments.
-
-REPLY [2 votes]: Roberto Rastapopoulos' proof could be simplified by first proving that the inequality
-$$a^m-(a-1)^m\ge a-1$$
-holds when $a\ge 0$ (for even $m$). Then similarly for $X\ge0$ with $E(X)=1$,
-$$E(X^r)-E[(X-1)^r] = \int_{X\ge 0}[x^r-(x-1)^r]dP(\omega)\ge \int_{X\ge 0}(x-1)dP(\omega)=0.$$<|endoftext|>
-TITLE: $3^{n+1}$ divides $2^{3^n}+1$
-QUESTION [11 upvotes]: Describe all positive integers $n$ such that $3^{n+1}$ divides $2^{3^n}+1$.
-I am a little confused about what the question asks: whether it asks me to find all such positive integers, or whether it asks me to prove that for every positive integer $n$, $3^{n+1}$ divides $2^{3^n}+1$. Kindly clarify this doubt and, if it's the former, please verify my solution $n=1$.
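Whichever way the question is read, the pattern is easy to probe empirically before proving anything; a quick sketch in exact integer arithmetic (a check for small $n$ only, not a proof):

```python
def v3(m):
    # 3-adic valuation: the largest k with 3^k dividing m
    k = 0
    while m % 3 == 0:
        m //= 3
        k += 1
    return k

for n in range(8):
    # 3^(n+1) divides 2^(3^n)+1, and 3^(n+2) does not
    assert v3(2 ** (3 ** n) + 1) == n + 1
print("verified for n = 0..7")
```

This supports the second reading: the divisibility appears to hold for every $n$, and the case $n=1$ checks out since $3^2=9$ divides $2^3+1=9$.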

-
-REPLY [4 votes]: The rule that $\int_a^b f(x) dx + \int_b^c f(x) dx = \int_a^c f(x) dx $ should hold, and its generalization to all partitions of an interval into a finite number of closed sub-intervals. This means $f$ should have an integral on all closed subintervals, if it is to be integrable.
-For the Riemann integral and its generalizations (Lebesgue, Stieltjes etc) the partition property is a consequence of $\int_a^b f $ existing. Thus the word "should" can be replaced by "must" in the first paragraph.
-Cancellation of singularities happens in some regularization schemes for divergent or improper integrals, which is a different matter that does not affect integrability.<|endoftext|>
-TITLE: $\frac{1}{\sin 8^\circ}+\frac{1}{\sin 16^\circ}+....+\frac{1}{\sin 4096^\circ}+\frac{1}{\sin 8192^\circ}=\frac{1}{\sin \alpha}$, find $\alpha$
-QUESTION [9 upvotes]: Let $\frac{1}{\sin 8^\circ}+\frac{1}{\sin 16^\circ}+\frac{1}{\sin 32^\circ}+....+\frac{1}{\sin 4096^\circ}+\frac{1}{\sin 8192^\circ}=\frac{1}{\sin \alpha}$ where $\alpha\in(0,90^\circ)$; then find $\alpha$ (in degrees).
-
-$\frac{1}{\sin 8^\circ}+\frac{1}{\sin 16^\circ}+\frac{1}{\sin 32^\circ}+....+\frac{1}{\sin 4096^\circ}+\frac{1}{\sin 8192^\circ}=\frac{1}{\sin \alpha}$
-$\frac{2\cos8^\circ}{\sin 16^\circ}+\frac{2\cos16^\circ}{\sin 32^\circ}+\frac{2\cos32^\circ}{\sin 64^\circ}+....+\frac{2\cos4096^\circ}{\sin 8192^\circ}+\frac{1}{\sin 8192^\circ}=\frac{1}{\sin \alpha}$
-$\frac{2^2\cos8^\circ\cos16^\circ}{\sin 32^\circ}+\frac{2^2\cos16^\circ\cos32^\circ}{\sin 64^\circ}+\frac{2^2\cos32^\circ\cos64^\circ}{\sin 128^\circ}+....+\frac{2\cos4096^\circ}{\sin 8192^\circ}+\frac{1}{\sin 8192^\circ}=\frac{1}{\sin \alpha}$
-In this way the series is getting more complicated at each stage; is there any way to simplify it? Please help me. Thanks.
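A quick numerical check (no derivation, just a sketch) already suggests what $\alpha$ must be:

```python
import math

# sum of 1/sin(8°) + 1/sin(16°) + ... + 1/sin(8192°), i.e. angles 8°·2^k for k = 0..10
s = sum(1.0 / math.sin(math.radians(8 * 2**k)) for k in range(11))

# solve sin(alpha) = 1/s for alpha in (0°, 90°)
alpha = math.degrees(math.asin(1.0 / s))
print(alpha)  # ≈ 4.0, suggesting alpha = 4°
```

Note that some terms (e.g. $1/\sin 8192^\circ$) are negative, yet the total still collapses to $1/\sin 4^\circ$.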

-
-REPLY [5 votes]: This is just the same idea as in lab bhattacharjee's answer, but using the identity from the Weierstrass Substitution
-$$
-\tan(x/2)=\frac{\sin(x)}{1+\cos(x)}
-$$
-we get
-$$
-\begin{align}
-\frac1{\tan(x/2)}-\frac1{\tan(x)}
-&=\frac{1+\cos(x)}{\sin(x)}-\frac{\cos(x)}{\sin(x)}\\
-&=\frac1{\sin(x)}
-\end{align}
-$$
-The rest is the same telescoping series
-$$
-\begin{align}
-\sum_{k=0}^n\frac1{\sin\left(2^kx\right)}
-&=\sum_{k=0}^n\left[\frac1{\tan\left(2^{k-1}x\right)}-\frac1{\tan\left(2^kx\right)}\right]\\
-&=\frac1{\tan(x/2)}-\frac1{\tan\left(2^nx\right)}
-\end{align}
-$$
-The question has $x=8^\circ$ and $n=10$, so we get
-$$
-\begin{align}
-\sum_{k=0}^{10}\frac1{\sin\left(2^k8^\circ\right)}
-&=\frac1{\tan(4^\circ)}-\frac1{\tan(8192^\circ)}\\
-&=\frac1{\tan(4^\circ)}+\frac1{\tan(88^\circ)}\\
-&=\frac1{\tan(4^\circ)}+\tan(2^\circ)\\
-&=\frac{\cos(4^\circ)}{\sin(4^\circ)}+\frac{\sin(4^\circ)}{1+\cos(4^\circ)}\\
-&=\frac{\cos(4^\circ)}{\sin(4^\circ)}+\frac{1-\cos(4^\circ)}{\sin(4^\circ)}\\
-&=\frac1{\sin(4^\circ)}
-\end{align}
-$$<|endoftext|>
-TITLE: Group Theory : What is $Ha \ne Hb$?
-QUESTION [6 upvotes]: As a beginner in Group Theory, I got stuck on the following question:
-Suppose that $H$ is a subgroup of $G$ such that whenever $Ha \ne Hb \space ,$ then $aH \ne bH$. $(a,b \in G)$ Prove that $gHg^{-1} \subset H \space\space\forall g\in G$.
-My first doubt is: what exactly is implied by $Ha \ne Hb$ in the above question? Does it mean
-$$ whenever \space ha\ne hb , ah\ne bh \space \forall\space h\in H $$ $$OR$$ $$whenever \space h_1a\ne h_2b , ah_1\ne bh_2 \space \forall \space h_1,h_2 \in H \space ?$$
-I don't know whether this is a very silly doubt or not; please help me clarify.
-Secondly, it would be very helpful if you could give me a hint on how to proceed with this problem. Thank you in advance.
-If this question is a repetition, please give a link, but do not down-vote. I am very low in reputation.
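For intuition, one can brute-force the hypothesis on the smallest interesting example: the non-normal subgroup $H=\{e,(0\;1)\}$ of $S_3$. The sketch below (permutations written as tuples; composition conventions are my own choice) checks that for this $H$ the hypothesis "$Ha \ne Hb \Rightarrow aH \ne bH$" fails exactly alongside normality:

```python
from itertools import permutations

def compose(p, q):
    """(p ∘ q)(i) = p[q[i]] — apply q first, then p."""
    return tuple(p[i] for i in q)

def inverse(p):
    inv = [0] * len(p)
    for i, v in enumerate(p):
        inv[v] = i
    return tuple(inv)

G = list(permutations(range(3)))   # the symmetric group S_3
H = [(0, 1, 2), (1, 0, 2)]         # {e, (0 1)} — a non-normal subgroup

def right_coset(a): return frozenset(compose(h, a) for h in H)
def left_coset(a):  return frozenset(compose(a, h) for h in H)

# hypothesis: Ha != Hb implies aH != bH  (equivalently: aH = bH implies Ha = Hb)
hypothesis = all(right_coset(a) == right_coset(b) or left_coset(a) != left_coset(b)
                 for a in G for b in G)
normal = all(compose(compose(g, h), inverse(g)) in H for g in G for h in H)
print(hypothesis, normal)  # False False — both fail together for this H
```

Replacing $H$ by the normal subgroup $A_3=\{e,(0\,1\,2),(0\,2\,1)\}$ makes both checks return True, which matches the statement to be proved.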
- -REPLY [6 votes]: I think your first question has been answered, -Now Hints to the second one: - -$Ha\neq Hb\implies aH\neq bH$ means $aH=bH\implies Ha=Hb$ means $b^{-1}a\in H\implies ab^{-1}\in H$. - -Now let $g\in G,h\in H;h(g^{-1}g)=he\in H\implies (hg^{-1})g\in H\implies ghg^{-1}\in H$ -which holds $\forall h,g$. -Thus $gHg^{-1}\subseteq H$<|endoftext|> -TITLE: Curious combinatorial summation -QUESTION [10 upvotes]: Let $\gamma$ denote a grid walk from the upper left corner $(1,k)$ to the lower right corner $(\ell,1)$ of the $k\times\ell$ rectangle $\{1,..,k\}\times\{1,..,\ell\}$. There are $\binom{k+\ell-2}{k-1}$ such paths. Denote -$$ -X_\gamma = \prod_{(i,j)\in\gamma} \frac{1}{i+j-1}\,. -$$ -Claim: -$$\sum_\gamma X_\gamma = \frac{1}{(k+\ell-1)(k-1)!(\ell-1)!}\,. -$$ -Equivalently, and more elegantly, for a random path $\gamma$, we have: $\ \Bbb E[X_\gamma] = 1/(k+\ell-1)!$ -Example: $k=2$, $\ell=3$. There are $3=\binom{3}{1}$ paths $\,\gamma_1: (1,2) \to (1,1) \to (2,1) \to (3,1)$, $\,\gamma_2: (1,2) \to (2,2) \to (2,1) \to (3,1)$, $\,\gamma_3: (1,2) \to (2,2) \to (3,2) \to (3,1)$. Then: -$$ -X_1 = \frac{1}{2\cdot 1\cdot 2\cdot 3} \ , \ X_2 = \frac{1}{2\cdot 3\cdot 2\cdot 3} \ , \ X_3 = \frac{1}{2\cdot 3\cdot 4\cdot 3} \ , -$$ -$$X_1+X_2+X_3 = \frac{1}{12}+\frac{1}{36}+\frac{1}{72} = \frac{1}{8} = \frac{1}{4\cdot 1!\cdot 2!}\,. -$$ -Question: Is there a simple proof of this combinatorial summation? If it's known, does anyone have a reference? -P.S. I can in fact prove the claim but the proof is incredibly involved for such a simple looking result. - -REPLY [3 votes]: Suppose the start point is $(a,b)$ and the end point is $(c,d)$, where $a\geq c$ and $d\geq b$. -From the general form of $F((k,1),(1,l))$, and the fact that paths from $(k,1)$ go through $(k-1,1)$ or $(k,2)$, you can deduce the general form of $F((k,2),(1,l))$. -Then $F((k,3),(1,l))$ and so on. 

-I got this formula:
-$$F((a,b),(c,d))=\frac{(a+d-b-c)!(b+c-2)!}{(a+d-1)!(a-c)!(d-b)!}$$
-The base case of the induction proof is $F((a,b),(a,b))=1/(a+b-1)$ because there is one path of a single vertex.
-The recursive equation is
-$$F((a,b),(c,d))=\frac1{a+b-1}\left[F((a-1,b),(c,d))+F((a,b+1),(c,d))\right]$$
-$$=\frac1{a+b-1}\left[\frac{(a+d-b-c-1)!(b+c-2)!}{(a+d-2)!(a-c-1)!(d-b)!}+\frac{(a+d-b-c-1)!(b+c-1)!}{(a+d-1)!(a-c)!(d-b-1)!}\right]\\
-=\frac1{a+b-1}\frac{(a+d-b-c-1)!(b+c-2)!}{(a+d-1)!(a-c)!(d-b)!}\cdot\\
-\left[(a+d-1)(a-c)+(b+c-1)(d-b)\right]$$
-The final factor equals $(a+b-1)(a-b-c+d)$, so the final answer is $F((a,b),(c,d))$ given above, and we only assumed $F((a,b),(c,d))$ was correct for values with a lower value of $a-b$.<|endoftext|>
-TITLE: Closed form solutions to Abel equation
-QUESTION [6 upvotes]: Consider a (somewhat simplified) Abel equation of the first kind for $\alpha$:
-$\left[\alpha(x)\right]^2 \left[1-f(x)\alpha(x)\right] + \alpha'(x) = 0$,
-for some smooth function $f$.
-Is it known what conditions on $f$ are necessary (and sufficient) to ensure a closed form solution? One particular case I am interested in is $f(x) = \lambda x^3$ for some $\lambda \in \mathbb{R}$; is there any hope of getting a closed form in this case? What other cases have been studied?

-REPLY [2 votes]: The question << Is it known what conditions on $f$ are necessary (and sufficient) to ensure a closed form solution? >> is not quite pertinent, because the answer depends on the background of special functions allowed.
-A closed form is made of a combination of a finite number of elementary and/or special functions, i.e. functions defined and referenced as "standard". So, if a new special function appears in the specialised literature, the solutions of an ODE which were previously impossible to write in closed form possibly become expressible in closed form, thanks to the new special function.

-One can imagine a new set of special functions, especially defined and standardized, devoted to solving Abel's ODE. This is not the case today.
-In the case of $f(x)=\lambda x^3$ it seems that the ODE isn't of a solvable kind in the sense of "solvable" considered, for example, in this paper: http://arxiv.org/ftp/arxiv/papers/1503/1503.05929.pdf<|endoftext|>
-TITLE: How are the elements of a dihedral group usually defined?
-QUESTION [5 upvotes]: While searching online, I've come across two ways to define the elements of the dihedral group. Both ways are internally consistent and are fine as far as I can tell, but they are mutually exclusive, so I was wondering which of the two ways is more standard or commonly used.
-The two ways are as follows:
-Way 1. Elements are defined as transformations against a fixed set of axes.
-In this way of defining the elements, each element of $D_n$ is either a reflectional symmetry or a rotational symmetry of the polygon being considered. In an $n$-gon, there are $n$ reflectional symmetries and $n$ rotational symmetries.
-The reflectional symmetries are described as follows: An $n$-gon has $n$ axes of symmetry. For each axis of symmetry in the $n$-gon there is an element in the dihedral group that reflects the $n$-gon along that axis. It is important to note that these axes stay fixed, even after the $n$-gon itself undergoes a rotational symmetry.
-Similarly, there are $n$ rotational symmetries in the $n$-gon being discussed, and each of these rotations is a member of $D_n$. It is important to note that here, the rotations are always in the same direction, even if the shape undergoes reflectional symmetry. So, for example, an element $r$ that rotates a square $90^{\circ}$ clockwise will always rotate the square $90^{\circ}$ clockwise, even after the square is reflected.
-
-Way 2. Elements are defined as permutations of vertices.

-The second way of defining the elements of $D_n$ is that each element is defined as a permutation of the vertices of the $n$-gon. (It should be noted that there are in total $n!$ permutations of vertices, yet only $2n$ elements in $D_n$; this discrepancy is explained by the fact that a permutation must also preserve the structure of the $n$-gon in order to be included in $D_n$.) Like all permutations on a finite set, these permutations can be written out as cycles such as $(1234)$, where $1$, $2$, $3$, and $4$ are the names of vertices of a square, for example.
-
-Way 2 is actually distinct from Way 1, both geometrically and algebraically (as far as I can tell).
-There are still $n$ rotations and $n$ reflections, but unlike in Way 1, where the transformations are defined in terms of a fixed "background" which doesn't move as the $n$-gon undergoes rotation or reflection, the transformations in Way 2 are now defined in terms of the vertices of the $n$-gon, which DO move as the $n$-gon undergoes rotation or reflection. What this means is that in Way 2, the transformations change depending on the current orientation of the $n$-gon. For example, consider the rotation of a square. In Way 1, we can let $r$ be the element that rotates the square $90^{\circ}$ clockwise. The direction of rotation ($90^{\circ}$ clockwise or, equivalently, $270^{\circ}$ counterclockwise) never changes, even after the square is reflected across one of its axes of symmetry. On the other hand, the roughly corresponding rotation in Way 2 would be something like the cycle $(1234)$. Unlike $r$, which always rotates the square $90^{\circ}$ clockwise, $(1234)$ may rotate the square $90^{\circ}$ either clockwise or counterclockwise, depending on whether the square has been reflected or not. Similarly, the axes of reflection in Way 2 move along with the square, while those of Way 1 remain fixed.
-The fact that the two ways are non-equivalent can also be verified algebraically (I think...).
No isomorphism exists between the two Ways (at least as far as I can tell; I tried constructing an isomorphism by making a bijection between the elements as defined in Way 1 and the elements as defined in Way 2, matching the rotations and reflections in Way 1 with the corresponding ones in Way 2, but the bijection did not satisfy the criterion that $f(a)f(b) = f(ab)$ for all $a$, $b$ in Way 1. The problem arose when $a$ was a rotation and $b$ was a reflection. However, if there actually is an isomorphism that I overlooked, please correct me.)
-My main question is: Which of these two ways is standard, or used more often by working mathematicians? Are both acceptable?
-It seems to me that Way 2 is just nicer all-around, mainly because all of the elements are permutations and can thus be written as cycles, which are easy to work with algebraically (by Cayley's Theorem, Way 1 is isomorphic to some group of permutations anyway, but then it seems like kind of a hassle finding a way to write it as cycles and whatnot.)
-If there are some benefits to Way 1, then I would like to learn about those too. Thanks in advance to those who bothered to read through all this!
-
-REPLY [2 votes]: Way 2, by permutations, is good only for small orders (say up to order $8$ or $10$). I have not seen it used for any specific purpose.
-Way 1 is geometric, and it is used, and very useful, in many places. For example, in a regular $6$-gon (hexagon), we can (indirectly) see two regular $3$-gons (equilateral triangles). This implies that there are two copies of the dihedral group of order $6$ in the dihedral group of order $12$.

-(The third way, given in another answer, is better for some computational purposes.)<|endoftext|>
-TITLE: prove this inequality with $a+b+c=1$
-QUESTION [5 upvotes]: Let $a,b,c>0$, $a+b+c=1$; show that
-$$\left(\sqrt{\dfrac{a+b}{c}}+\sqrt{\dfrac{b+c}{a}}+\sqrt{\dfrac{c+a}{b}}\right)^2\ge \dfrac{16}{3(a+b)(b+c)(c+a)}$$
-
-REPLY [5 votes]: I thought about this inequality for some time; now I have solved it. The following is my solution.
-Using the Hölder inequality, we have
-$$\left(\sqrt{\dfrac{a+b}{c}}+\sqrt{\dfrac{b+c}{a}}+\sqrt{\dfrac{c+a}{b}}\right)^2\cdot\sum_{cyc}c(a+b)^2\ge (a+b+b+c+c+a)^3$$
-so
-$$\left(\sqrt{\dfrac{a+b}{c}}+\sqrt{\dfrac{b+c}{a}}+\sqrt{\dfrac{c+a}{b}}\right)^2\ge\dfrac{8(a+b+c)^3}{\displaystyle\sum_{cyc}c(a+b)^2}=\dfrac{8}{6abc+\displaystyle\sum_{cyc}c(a^2+b^2)}$$
-so it suffices to show that
-$$\dfrac{8}{6abc+\displaystyle\sum_{cyc}c(a^2+b^2)}\ge\dfrac{16}{3(a+b)(b+c)(c+a)}$$
-Since
-$$(a+b)(b+c)(c+a)=2abc+\sum_{cyc}c(a^2+b^2)$$
-it suffices to show
-$$\sum_{cyc}c(a^2+b^2)\ge 6abc$$
-which is clear by the AM-GM inequality.
-Done<|endoftext|>
-TITLE: Find equation of plane given plane point
-QUESTION [5 upvotes]: So I am given one point on the plane, $M(5,2,0)$, and two points that are not on the plane: $P(6,1,-1)$ at distance $1$ from the plane, and $Q(0,5,4)$ at distance $3$ from the plane. How do I find the equation of the plane with the information given?
-
-REPLY [2 votes]: Hint.
-Suppose that an equation of the plane $\mathscr P$ to be found is $\mathscr P \equiv ax+by+cz+d=0$ where $a^2+b^2+c^2=1$. By hypothesis, $M$ belongs to the plane. Hence $5a+2b+d=0$.
-Also, the distance of a point with coordinates $(x,y,z)$ to $\mathscr P$ is $\vert ax+by+cz+d \vert$.
Therefore we also have $$\begin{cases}
-\vert 6a+b-c+d \vert = 1\\
-\vert 5b +4c+d \vert =3
-\end{cases}$$ Replacing with $d=-5a-2b$ we get
-$$\begin{cases}
-\vert a-b-c\vert = 1\\
-\vert 5a-3b -4c\vert =3
-\end{cases}$$ So you have to solve the following system of equations:
-$$\begin{cases}
-a^2+b^2+c^2=1\\
-\vert a-b-c\vert = 1\\
-\vert 5a-3b -4c\vert =3
-\end{cases}$$<|endoftext|>
-TITLE: For numerical integration, is it true that higher degree of precision gives better accuracy always?
-QUESTION [6 upvotes]: In the case of numerical integration, is it true that higher degree of precision always gives better accuracy? Justify your answer.
-
-I know the definition of degree of precision. For the Trapezoidal and Simpson's 1/3 rules they are 1 and 3 respectively. Simpson's 1/3 gives better accuracy than the Trapezoidal rule. Is the above statement always true? If not, why not? If yes, then why do we learn the Trapezoidal/Simpson rules? Why should we not go for higher and higher degrees of precision from the generalized Newton–Cotes rules or other general quadrature formulas?
-
-REPLY [8 votes]: Increasing the precision, both in terms of the order of the method and in the number of gridpoints used, usually (most of the time) leads to a more accurate estimate for the integral we are trying to compute. However this is not always the case, as the following (artificial) example shows.
-
-$$\bf \text{Example where higher order does not imply better accuracy}$$
-Let
-$$f(x) = \left\{\matrix{1 & x < \frac{1}{2}\\0 & x \geq \frac{1}{2}}\right.$$
-and consider the integral $I=\int_0^1f(x){\rm d}x = \frac{1}{2}$. If we use the trapezoidal rule with $n$ gridpoints then
-$$I_{n} = \frac{1}{n}\sum_{i=1}^{n}\frac{f(\frac{i-1}{n})+f(\frac{i}{n})}{2} \implies I_n = \left\{\matrix{\frac{1}{2} & n~~\text{odd}\\\frac{1}{2} - \frac{1}{2n} & n~~\text{even}}\right.$$
-so for $n=3$ we have the exact answer, which is better than any even $n$ no matter how large it is.
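One can confirm the trapezoidal behaviour for this step function numerically; a short Python sketch:

```python
def f(x):
    return 1.0 if x < 0.5 else 0.0

def trapezoid(n):
    # I_n = (1/n) * sum_{i=1}^{n} (f((i-1)/n) + f(i/n)) / 2
    return sum((f((i - 1) / n) + f(i / n)) / 2.0 for i in range(1, n + 1)) / n

for n in (2, 3, 4, 5, 101):
    print(n, trapezoid(n))
# odd n give exactly 1/2; even n give 1/2 - 1/(2n)
```

Every odd $n$ already gives the exact answer, while the even-$n$ values only approach $\tfrac12$ from below.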
This shows that increasing the number of gridpoints does not always improve the accuracy. With Simpson's rule we find
-$$I_n = \frac{1}{3n}\sum_{i=1}^{n/2}f\left(\frac{2i-2}{n}\right)+4f\left(\frac{2i-1}{n}\right)+f\left(\frac{2i}{n}\right) \implies I_n = \left\{\matrix{\frac{1}{2} - \frac{1}{3n}&n\equiv 0\mod 4\\\frac{1}{2}&n\equiv 1\mod 4\\\frac{1}{2} - \frac{2}{3n} & n\equiv 2\mod 4\\\frac{1}{2} - \frac{5}{6n} & n\equiv 3\mod 4}\right.$$
-so even if Simpson's rule has higher order we see that it does not always do better than the trapezoidal rule.
-
-$$\bf \text{What does higher degree of precision really mean?}$$
-If we have a smooth function then a standard Taylor series error analysis gives that the error in estimating the integral $\int_a^bf(x){\rm d}x$ using $n$ equally spaced points is bounded by (here for Simpson's and the trapezoidal rule)
-$$\epsilon_{\rm Simpsons} = \frac{(b-a)^5}{2880n^4}\max_{\zeta\in[a,b]}|f^{(4)}(\zeta)|$$
-$$\epsilon_{\rm Trapezoidal} = \frac{(b-a)^3}{12n^2}\max_{\zeta\in[a,b]}|f^{(2)}(\zeta)|$$
-Note that the result we get from such an error analysis is always an upper bound (or in some cases an order of magnitude) for the error, as opposed to the exact value of the error. What this error analysis tells us is that if $f$ is smooth on $[a,b]$, so that the derivatives are bounded, then the error with a higher order method will tend to decrease faster as we increase the number of gridpoints, and consequently we typically need fewer gridpoints to get the same accuracy with a higher order method.
-The order of the method only tells us about the $\frac{1}{n^k}$ fall-off of the error and says nothing about the prefactor in front, so a method that has an error of $\frac{100}{n^2}$ will tend to be worse than a method that has an error $\frac{1}{n}$ as long as $n\leq 100$.
-
-$$\bf \text{Why do we need all these methods?}$$
-In principle we don't need any methods other than the simplest one.
If we can compute to arbitrary precision and have enough computational power, then we can evaluate any integral with the trapezoidal rule. However, in practice there are always limitations that in some cases force us to choose a different method.
-Using a low-order method requires many gridpoints to ensure good enough accuracy, which can make the computation take too long, especially when the integrand is expensive to compute. Another problem that can happen even if we can afford to use as many gridpoints as we want is that truncation error (errors due to computers using a finite number of digits) can come into play, so even if we use enough points the result might not be accurate.
-Other methods can alleviate these potential problems. Personally, whenever I need to integrate something and have to implement the method myself, I always start with a low-precision method like the trapezoidal rule. This is very easy to implement, it's hard to make errors when coding it up, and it's usually good enough for most purposes. If this is not fast enough, or if the integrand has properties (e.g. rapid oscillations) that make it behave badly, I try a different method. For example, I have had to compute (multidimensional) integrals where a trapezoidal rule would need more than a year to reach good enough accuracy, but with Monte-Carlo integration the time needed was less than a minute! It's therefore good to know different numerical integration methods in case you encounter a problem where the simplest method fails.<|endoftext|>
-TITLE: Nonconstant polynomials do not generate maximal ideals in $\mathbb Z[x]$
-QUESTION [10 upvotes]: Let $f$ be a nonconstant element of the ring $\mathbb Z[x]$. Prove that $\langle f \rangle$ is not maximal in $\mathbb Z[x]$.
-
-Let us assume $\langle f \rangle$ is maximal. Then $\mathbb Z[x] / \langle f \rangle$ would be a field. Let $a \in \mathbb{Z}$ be nonzero. Then $a + \langle f \rangle$ is a nonzero element of this field (a nonconstant polynomial divides no nonzero constant), hence a unit.
Let $g + \langle f \rangle$ be its inverse. Then $a g - 1 \in \langle f \rangle$, hence $ag(x)-1 = f(x)h(x)$ for some $h \in \Bbb Z[x]$, hence $ag(0) - f(0)h(0) = 1$, thus $(a,f(0))=1$ for all nonzero $a \in \Bbb Z$, a contradiction, hence the proof.
-Is my argument correct? Is there any other method?
-
-REPLY [7 votes]: Main result:
-
-If $R$ is an integral domain with infinitely many elements and only finitely many units, then no maximal ideal of $R[x]$ is principal.
-
-A pedestrian proof:
-
-Assume $R$ is an integral domain with infinitely many elements and only finitely many units.
-
-First, a few basic facts . . .
-
-Since $R$ is an integral domain,
-
-If $g,h \in R[x]$ and $g,h \ne 0$, then $\text{deg}(gh) = \text{deg}(g) + \text{deg}(h)$.
-If $r \in R$, then $r$ is a unit in $R[x]$ if and only if $r$ is a unit in $R$.
-
-Also, since $R$ is an integral domain, it follows that
-
-for any $r \in R$, and any $f \in R[x]$ with $\text{deg}(f) \ge 1$, the equation $f(x) = r$ has only finitely many roots in $R$.
-
-Next, some lemmas . . .
-
-Lemma $\mathbf{1}$:
-
-If $a,b \in R$ and $a$ is not a unit in $R$, then $(a,x-b)$ is a proper ideal of $R[x]$.
-
-proof:
-
-Suppose instead that $(a,x-b) = (1)$.
-\begin{align*}
-\text{Then}\;\,&(a,x-b) = (1)\\[4pt]
-\implies\; &ag(x) + (x-b)h(x) = 1,\;\text{for some}\;g,h \in R[x]\\[4pt]
-\implies\; &ag(b) + (b-b)h(b) = 1,\;\text{for some}\;g,h \in R[x]\\[4pt]
-\implies\; &ag(b) = 1,\;\text{for some}\;g \in R[x]\\[4pt]
-\implies\; &a\;\text{is a unit in $R$}\\[4pt]
-\end{align*}
-contradiction.
-
-This completes the proof of lemma $1$.
-
-Lemma $\mathbf{2}$:
-
-If $a \in R$, the ideal $(a)$ of $R[x]$ is not a maximal ideal.
-
-proof:
-
-Suppose instead that for some $a \in R$, the ideal $(a)$ of $R[x]$ is a maximal ideal of $R[x]$.
-
-Since $(a)$ is maximal in $R[x]$, $(a) \ne (1)$, hence $a$ is not a unit of $R$.
-
-Since $a$ is not a unit of $R$, it follows that $x \notin (a)$.

-
-Since $(a)$ is maximal, and $x \notin (a)$, it follows that $(a,x) = (1)$, which contradicts lemma $1$, since $a$ is not a unit of $R$.
-
-This completes the proof of lemma $2$.
-
-proof of the main result:
-
-Suppose the principal ideal $(f)$ of $R[x]$ is maximal, for some $f \in R[x]$.
-
-Our goal is to derive a contradiction.
-
-By lemma $2$, $f$ has degree at least $1$, hence $(f)$ has no nonzero constant elements.
-
-Since $R$ has infinitely many elements but only finitely many units, there exists an element $b \in R$, such that $f(b)$ is a nonzero nonunit. Actually, there are infinitely many such elements $b$, but we only need one.
-
-Thus, suppose $b \in R$ is such that $f(b) = a$, where $a \in R$ is a nonzero nonunit.
-\begin{align*}
-\text{Then}\;\, &\text{deg}(f) \ge 1\\[4pt]
-\implies\; &f(x) = f(b) + (x-b)g(x),\;\text{for some nonzero }g \in R[x]\\[4pt]
-\implies\; &(f,a) \subseteq (a,x-b)\\[4pt]
-\implies\; &(f,a) \ne (1)\qquad\text{[since by lemma $1$, $(a,x-b) \ne (1)$]}\\[4pt]
-\implies\; &(f,a) = (f)\qquad\text{[since $(f)$ is maximal]}\\[4pt]
-\implies\; &a \in (f)\\[4pt]
-\end{align*}
-contradiction, since $(f)$ has no nonzero constant elements.
-
-This completes the proof of the main result.
-
-Corollary:
-
-No maximal ideal of $\mathbb{Z}[x]$ is principal.
-
-proof:
-
-This follows from the main result since $\mathbb{Z}$ is an infinite integral domain with only two units, namely $\pm 1$.

-
-REPLY [3 votes]: If $L$ fails to be injective, then this will not hold.
-For example, take $X \subset \Bbb R^2$ to be $\{(x,y):|x|+|y| \leq 1\}$, and take $L:\mathbb{R}^2 \to \mathbb{R}$ to be given by $L(x,y) = x$. Note that $(0,1)$ is an extreme point of $X$, but $L(0,1) = 0$ is not an extreme point of $Y$.
-On the other hand, if $L$ is injective in addition to being surjective, then $L$ is an invertible linear transformation and so the statement holds.
-
-REPLY [2 votes]: The images by $L$ of the extreme points are certainly not all extreme points of the image of the compact convex set by $L$.
-Take the example of a square and, for $L$, the projection onto one of its diagonals. The two extreme points on the diagonal are also extreme points of the image of the square by $L$, but that is not the case for the other two vertices of the square.
-However, the extreme points of the image of the compact convex set form a subset of the images of the extreme points of the initial compact convex set.<|endoftext|>
-TITLE: What's the smallest number that we can multiply with a given one to get the result only zeros and ones?
-QUESTION [17 upvotes]: I have the following set of numbers,
-$$4, 198, 4356, 10296, 14454, 25542, 31779, 51252, 53946, 99999$$
-Let's take $3,4$ as examples:
-The smallest number to multiply with $4$ to get a result with only $1$s and $0$s is
-$$100 = 4 * 25$$
-so the result is $25$ for input $4$.
-Also
-$$111 = 3 * 37$$
-so the result is $37$ for input $3$.
-Hint:
-As you may notice, the rest of the numbers can be divided by $9$ or $99$ or $999$, and the smallest number to multiply with $9$ is $12345679$, so this might be the trick!
-$$111111111 = 9 * 12345679$$
-And one last thing: This is not a binary conversion, as most people think by looking at the $4$ example.

-You look for the minimal positive natural number $k$ such that $k \times n \in S$.
-I will look further into $S$.
-Let $s \in S$.
-There is a finite subset $J \subset \mathbb{N}$ such that:
-$s = \sum_{i \in J} 10^i$
-We get:
-$k \times n = \sum_{i \in J} 10^i$
-Using mod $n$:
-$\sum_{i \in J} 10^i \equiv 0 \pmod n$
-Your problem is now to find $J$.
-Let $T = \left\{10^i \bmod n : i \in \mathbb{N}\right\}$
-You now look for a subset of $T$ where the sum of the elements is a multiple of $n$.
-A greedy algorithm looks promising here.<|endoftext|>
-TITLE: Why zero is a multiple of every integer, but not a divisor of zero?
-QUESTION [6 upvotes]: All positive and negative numbers including zero are called integers. So in the form $a=bq$, since $0 = 0ㆍq$ is true for any integer $q$, $0$ can have $0$ as a divisor of itself as well as a multiple of itself by the definition expressed by $a=bq$.
-But why is it said "We cannot divide by $0$"? To me, it reads as "$0$ cannot be a divisor".
-"Definition: An integer a is called a multiple of an integer $b$ if $a=bq$ for some integer $q$. In this case we also say that b is a divisor of $a$, and we use the notation $b | a$ . . . On the other hand, for any integer $a$, we have $0 = aㆍ0$ and thus $0$ is a multiple of any integer."
-Source: Abstract Algebra: Third Edition, John A. Beachy, William D. Blair, p.4.
-"Rule Division by $0$ is undefined. Any expression with a divisor of $0$ is undefined. We cannot divide by $0$"
-Source: Prealgebra: A Text/Workbook, Charles McKeague, p.61.
-"Observe that division by the integer $0$ is not defined, since for $n≠0$ there is no integer $x$ such that $0ㆍx=n$ and since for $n =0$ every integer $x$ satisfies $0ㆍx=0$"
-Source: Introduction to Mathematical Proofs, Second Edition, Charles Roberts, p.99.
-[Now I understand my question better after reading the number theory chapter of a book]
-$0=d\cdot 0 $
-Thus, 0 is a multiple of every integer except 0.
- -REPLY [7 votes]: Every integer divides zero, including zero itself; however, the only integer that zero divides is itself. That is, $b \mid 0$ for all integers $b$; but if $a$ is an integer and $0 \mid a$, then $a = 0$. -When it is said that "you can't divide by zero", what is meant is that, given an integer $a$, there is not a unique quotient upon division by zero. -Specifically, given integers $a$ and $b$, with $b \ne 0$, if $b \mid a$ then there is a unique integer $q$ such that $a = qb$, namely $q=\frac{a}{b}$. However, if we allow the case when $b=0$, then we lose the uniqueness. Indeed, as already mentioned, $0 = q \cdot 0$ for all integers $q$, so it makes no sense to assign a value to the expression $\frac{0}{0}$. And if $a \ne 0$ then there is no integer $q$ such that $a = q \cdot 0$, so it also doesn't make any sense to assign a value to the expression $\frac{a}{0}$.<|endoftext|> -TITLE: Why the octahedral axiom? -QUESTION [11 upvotes]: My question is about the octahedral axiom (OA) in the definition of a triangulated category. For what I can understand so far (cf. Huybrechts, Fourier-Mukai in algebraic gometry, Definition 1.32), this axiom wants to roughly generalise the "double quotient" situation in the category of abelian groups, i.e. if $A\subset B\subset C$ are abelian groups then $C/B\cong(C/A)/(B/A)$. -I would like to know why people think that this axiom is superfluous. -I reported the "double quotient" situation because one may say that if it wants to generalise a situation which is natural in the non-generalised case, then one would expect this situation to be natural too. But it seems to me that this argument is too weak and probably there are better arguments... -Moreover, is it true that everyone is convinced about that? -As a motivation to this question I would like to say that: $1)$ last summer a paper by Maciocia appeared in which he proved the OA is a consequence of the previous ones. 
But unfortunately there was an error which is still not fixed. $2)$ In the Huybrechts book I have cited before, he doesn't state the OA because he "will never use it explicitely and only once implicitely". $3)$ I am very curious about that.
-Thank you all!
-P.S.: I have tried to find something in the literature or on StackExchange as well, but I was apparently unable to. Sorry if it is a duplicate.
-
-REPLY [7 votes]: The octahedral axiom is probably not superfluous. That is, Maciocia's error will probably never be repaired. People were very surprised by his claim and unsurprised by the discovery of the flaw.<|endoftext|>
-TITLE: Dual space of exterior power and exterior power of dual space
-QUESTION [6 upvotes]: Let $V$ be a finite-dimensional vector space.
-Is there an isomorphism between $\Lambda^k(V^\ast)$ and $\left(\Lambda^k(V)\right)^\ast$?
-I was able to prove this with the additional requirement of an inner product on $V$ (and thus subsequently on $\Lambda^k(V)$) via
-$$
-\require{AMScd}
-\begin{CD}
-\left(\Lambda^k(V)\right)^\ast @>\mathcal{J}^{-1}>> \Lambda^k(V) @>\Lambda^kJ>> \Lambda^k(V^\ast)
-\end{CD}
-$$
-where $J: V \to V^\ast$ and $\mathcal{J}: \Lambda^k(V) \to \left(\Lambda^k(V)\right)^\ast$ are the isomorphisms given by the Riesz representation theorem and $\Lambda^kJ$ is the map given by $v_1\wedge \cdots \wedge v_k \mapsto J(v_1) \wedge \cdots \wedge J(v_k)$.
-Is there another way to identify these two spaces without the requirement of an inner product on $V$? I read Qiaochu Yuan's comment to his answer on a similar question but did not really understand it, I fear.
-Thank you very much.
-
-REPLY [13 votes]: An isomorphism is given by the non-degenerate pairing $\Phi \colon \Lambda^k(V^{*}) \times \Lambda^k(V) \rightarrow \mathbb{F}$ defined by
-$$ \Phi(\varphi^1 \wedge \ldots \wedge \varphi^k, v_1 \wedge \ldots \wedge v_k) = \det (\varphi^i(v_j))_{i,j=1}^k $$
-and extended bilinearly.
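As a quick sanity check, for $k=2$ and $\dim V = 3$ one can verify numerically that this determinant pairing is antisymmetric in each slot, as it must be to be well defined on the exterior powers (a sketch over $\mathbb{R}$ with randomly chosen covectors and vectors):

```python
import random

random.seed(0)
phi = [[random.gauss(0, 1) for _ in range(3)] for _ in range(2)]  # two covectors on R^3
v = [[random.gauss(0, 1) for _ in range(2)] for _ in range(3)]    # two vectors, as columns

def pair(phi, v):
    # Phi(phi^1 ∧ phi^2, v_1 ∧ v_2) = det(phi^i(v_j)) for k = 2
    m = [[sum(phi[i][t] * v[t][j] for t in range(3)) for j in range(2)]
         for i in range(2)]
    return m[0][0] * m[1][1] - m[0][1] * m[1][0]

v_swapped = [[row[1], row[0]] for row in v]  # swap v_1 and v_2
phi_swapped = [phi[1], phi[0]]               # swap phi^1 and phi^2

assert abs(pair(phi, v) + pair(phi, v_swapped)) < 1e-12   # antisymmetric in the v_j
assert abs(pair(phi, v) + pair(phi_swapped, v)) < 1e-12   # antisymmetric in the phi^i
```

The antisymmetry in each slot is exactly what lets the formula on decomposable elements descend to $\Lambda^k(V^\ast)\times\Lambda^k(V)$.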
Sometimes, when working over $\mathbb{R}$ or $\mathbb{C}$, people use a slightly different pairing $\Phi' = \frac{1}{k!} \Phi$ which differs from $\Phi$ by a constant factor. - -REPLY [5 votes]: The natural isomorphism $I$ between these spaces is defined by -$$ -(I(\alpha_1\wedge\ldots\wedge\alpha_k))(v_1\wedge\ldots\wedge v_k):=\sum_\pi \mathrm{sgn}(\pi)\alpha_1(v_{\pi(1)})\ldots \alpha_k(v_{\pi(k)}). -$$<|endoftext|> -TITLE: Weak Limit of Measures Mutually Singular wrt Lebesgue Measure -QUESTION [6 upvotes]: I'm stuck on the following qual problem: -Let $\{h_{n}\}$ be a sequence of positive continuous functions on the unit cube $Q$ in $\mathbb{R}^{d}$ satisfying the following conditions: - -$\lim_{n\rightarrow\infty}h_{n}(x)=0$ $m$-a.e. ($m$ denotes the Lebesgue measure on $Q$) -$\int_{Q}h_{n}dx=1$ $\forall n$ -$\lim_{n\rightarrow\infty}\int_{Q}fh_{n}dx=\int_{Q}fd\mu$ for every continuous function $f$ on $Q$. - -Prove that $\mu\perp m$ or give a counterexample. -My intuition suggests to me that $\mu\perp m$ since $h_{n}\rightarrow 0$ a.e. and therefore must become very large on sets of small Lebesgue measure; however, I'm struggling to prove my guess. My thought was to write the $\int_{Q}fd\mu=\int_{Q}fh dx+\int_{Q}fd\nu$, where $hdx+\nu=\mu$ is the Lebesgue decomposition of $\mu$ and then show that $h=0$ $m$-a.e, or equivalently $\int_{Q}fhdx=0$ for every continuous $f$. But I have been unable to do this. Any suggestions? - -REPLY [4 votes]: The flaw in reasoning that $\mu$ must be singular because $h_n$ is very small except on a very small set is that very small set can be somewhat uniformly distributed (at the right resolution). 
-Counterexample in one dimension: -Define $$I_{n,j}=[j/n,(j+2^{-n})/n].$$ Let $\phi_{n,j}$ be a continuous function supported on $I_{n,j}$ with $\phi_{n,j}\ge0$ and $$\int\phi_{n,j}=1/n.$$ Set $$E_n=\bigcup_{j=0}^{n-1}I_{n,j}.$$Let $$h_n=\sum_{j=0}^{n-1}\phi_{n,j}.$$Then $h_n\ge0$, $\int h_n=1$, and $$\int fh_n\to\int_0^1f(x)\,dx$$for every $f\in C([0,1])$. (Hint: $f$ is uniformly continuous. The idea behind the example is that $\int fh_n$ is morally equivalent to a Riemann sum for $\int f$.) -Since $h_n=0$ on $[0,1]\setminus E_n$ and $\sum m(E_n)<\infty$ it follows that $h_n\to0$ almost everywhere. -If $h_n\ge0$ is not positive enough, let $g_n=(1-1/n)h_n+1/n$. -Exercise: Suppose $\mu$ is a Borel probability measure on $[0,1]$. Show that there exist $h_n\ge0$ such that $h_n\to0$ almost everywhere but $\int fh_n\to\int f\,d\mu$ for every $f\in C([0,1])$.<|endoftext|> -TITLE: Which options are true for $f(z)=\frac{1}{z}$ -QUESTION [5 upvotes]: Consider $f(z)=\frac{1}{z}$ on the annulus $A= \{z\in \Bbb{C} | \frac{1}{2}<|z|<2\}$. Which of the following are true? - -There is a sequence $\{p_n(z)\}$ of polynomials that approximate $f(z)$ uniformly on compact subsets of $A$. -There is a sequence $\{r_n(z)\}$ of rational functions, whose poles are contained in $\Bbb{C}\setminus A$ and which approximates $f(z)$ uniformly on compact subsets of $A$. -No sequence $\{p_n(z)\}$ of polynomials approximate $f(z)$ uniformly on compact subsets of $A$. -No sequence $\{r_n(z)\}$ of rational functions whose poles are contained in $\Bbb{C}\setminus A$ approximate $f(z)$ uniformly on compact subsets of $A$. - -I think options 2 and 3 are correct, 2 follows from Runge's theorem, but I don't know how to prove 3. - -REPLY [2 votes]: Let $K = \{ z \in \mathbb{C} \, | \, |z| = 1 \}$ and assume that there exists a sequence $\{p_n(z)\}_{n=1}^{\infty}$ of polynomials such that -$$ \max_{z \in K} |p_n(z) - f(z)| \xrightarrow[n \to \infty]{} 0. 
$$
-Choose $n \in \mathbb{N}$ such that $\max_{|z| = 1} |p_n(z) - f(z)| < \frac{1}{2}$. Hence,
-$$ |p_n(z) - f(z)| = \left| \frac{z \cdot p_n(z) - 1}{z} \right| = \left|z \cdot p_n(z) - 1 \right| < \frac{1}{2} $$
-for all $|z| = 1$. By the maximum modulus principle, we have
-$$ 1 = |0 \cdot p_n(0) - 1| \leq \max_{|z| \leq 1} |z \cdot p_n(z) - 1| = \max_{|z| = 1} |z \cdot p_n(z) - 1| < \frac{1}{2} $$
-and we have obtained a contradiction.<|endoftext|>
-TITLE: How can I count solutions to $x_1 + \ldots + x_n = N$?
-QUESTION [8 upvotes]: I am interested in how many non-negative integer solutions there are to:
-$$x_1 + \ldots + x_N = B$$
-where at least $K$ of the variables $x_1, \ldots , x_N$ are $\geq C$.
-For example when:
-$B = 5, N = 3, K = 2, C = 2$
-I want to count the solutions to:
-$$x_1 + x_2 + x_3 = 5$$
-where at least $2$ of the variables are $\geq 2$.
-I found the total number of candidate solutions using $\binom{B+N-1}{B} = 21$.
-However, only $9$ of them have two variables $\geq 2$.
-\begin{align*}
- 2+0+3& =5\\
- 2+1+2& =5\\
- 3+0+2& =5\\
- 1+2+2& =5\\
- 3+2+0& =5\\
- 0+2+3& =5\\
- 0+3+2& =5\\
- 2+3+0& =5\\
- 2+2+1& =5
-\end{align*}
-I feel there is a connection to the Associated Stirling numbers of the second kind. But I can't place it :(
-EDIT:
-Here is my code for enumerating them all to count the number of ways of selecting B elements from a set of N (uniformly with replacement), such that you have at least C copies of K elements - also shows the output for this question I'm asking here as it's the core piece. 
Obviously it can't be run for very large values of the parameters - that's why I'm here :) Code is here
-Here is another example for B = 6, N = 3, C = 2 and K = 2; there are 16 solutions:
-\begin{align*}
-0+2+4& = 6\\
-0+3+3& = 6\\
-0+4+2& = 6\\
-1+2+3& = 6\\
-1+3+2& = 6\\
-2+0+4& = 6\\
-2+1+3& = 6\\
-2+2+2& = 6\\
-2+3+1& = 6\\
-2+4+0& = 6\\
-3+0+3& = 6\\
-3+1+2& = 6\\
-3+2+1& = 6\\
-3+3+0& = 6\\
-4+0+2& = 6\\
-4+2+0& = 6\\
-\end{align*}
-There are a number of different and correct solutions below. I don't know which to accept.
-
-REPLY [4 votes]: A tailor-made approach by analytic combinatorics. The coefficient of $x^B$ in
-$$ \frac{1}{(1-x)^N} = \left(1+x+x^2+x^3+\ldots\right)^N $$
-obviously counts the number of ways of representing $B$ as a sum of $N$ non-negative integers. By stars and bars, or by the (negative) binomial theorem, that number is $\binom{N+B-1}{N-1}$. We may use an extra variable to mark the terms with exponent $\geq C$, and consider:
-$$ g(z,x) = \left(1+x+x^2+\ldots+x^{C-1}+ z x^{C}+z x^{C+1}+ z x^{C+2}+\ldots\right)^N $$
-that is:
-$$ g(z,x) = \left(\frac{1-x^C}{1-x}+z\cdot\frac{x^C}{1-x}\right)^N = \frac{(1+(z-1) x^C)^N}{(1-x)^N}.$$
-Now the coefficient of $x^B$ in $g(z,x)$ is a polynomial in the $z$ variable, $h_B(z)$, and we are interested in summing the monomials of $h_B(z)$ whose degree is $\geq K$. That sum evaluated at $z=1$ gives the answer to our problem. However, I suspect there is no nice closed formula for summarizing the process, since even the computation of $h_B(z)$ involves a convolution. Are you fine with an integral representation of the answer? That is not hard to achieve through Cauchy's integral formula.
-If $B=5,N=3, C=2$ and $K=2$, we have $h_B(z)=12z+\color{red}{9}z^2$, hence the answer is $\color{red}{9}$ as you checked. 
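This computation is easy to check mechanically. The following Python sketch (not part of the original answer; the function names are ours) recovers the coefficients of $h_B(z)$ by binomial expansion and compares the tail sum with a brute-force count:

```python
from itertools import product
from math import comb

def brute_force(B, N, C, K):
    # direct count of N-tuples of non-negative integers summing to B
    # with at least K entries >= C
    return sum(
        1
        for xs in product(range(B + 1), repeat=N)
        if sum(xs) == B and sum(x >= C for x in xs) >= K
    )

def h_coefficients(B, N, C):
    # coefficients (in z) of h_B(z) = [x^B] (1 + (z-1) x^C)^N / (1-x)^N;
    # the z^D term comes from marking exactly D of the N factors
    coeffs = []
    for D in range(N + 1):
        # [x^{B-CD}] (1 - x^C)^{N-D} / (1-x)^N via the binomial theorem
        c = sum(
            comb(N - D, h) * (-1) ** h * comb(N + B - C * D - C * h - 1, N - 1)
            for h in range(N - D + 1)
            if B - C * D - C * h >= 0
        )
        coeffs.append(comb(N, D) * c)
    return coeffs

print(h_coefficients(5, 3, 2))  # [0, 12, 9, 0], i.e. h_B(z) = 12z + 9z^2
print(sum(h_coefficients(5, 3, 2)[2:]), brute_force(5, 3, 2, 2))  # 9 9
print(sum(h_coefficients(6, 3, 2)[2:]), brute_force(6, 3, 2, 2))  # 16 16
```

Summing all coefficients recovers the total $\binom{N+B-1}{N-1}$ of unrestricted solutions, which is a further consistency check.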
The general form of the answer is given by:
-
-$$\begin{eqnarray*} [x^B]\sum_{D\geq K}[z^D]\frac{\left((1-x^C)+z x^C\right)^N}{(1-x)^N}&=&[x^B]\frac{1}{(1-x)^N}\sum_{D\geq K}\binom{N}{D}x^{CD}(1-x^C)^{N-D}\\&=&\sum_{D\geq K}\binom{N}{D}[x^{B-CD}]\frac{(1-x^C)^{N-D}}{(1-x)^N}\\&=&\color{red}{\sum_{D\geq K}\binom{N}{D}\sum_{h=0}^{N-D}\binom{N-D}{h}(-1)^h\binom{N+B-CD-Ch-1}{N-1}}, \end{eqnarray*}$$
-
-rather ugly.<|endoftext|>
-TITLE: Solving $\lim \limits _{x \to \infty} (\sqrt[n]{(x+a_1) (x+a_2) \dots (x+a_n)}-x)$
-QUESTION [5 upvotes]: $$\lim \limits _{x \to \infty}\bigg(\sqrt[n]{(x+a_1) (x+a_2) \dots (x+a_n)}-x\bigg)$$
-We can see the limit is of type $\infty-\infty$. I don't see anything I could do here. I can only see the geometric mean which is the $n$-th root term. Can I do anything with it? Any tips on solving this?
-
-REPLY [3 votes]: It's more complicated to write in LaTeX than to solve. Remember that
-$$A-B = \frac {A^n - B^n} {A^{n-1} + A^{n-2} B + \dots + A B^{n-2} + B^{n-1}} .$$
-Choosing $A = \sqrt[n]{(x+a_1) (x+a_2) \dots (x+a_n)}$ and $B = \sqrt[n] {x^n}$, note that the largest power of $x$ in $A^n - B^n$ is $x^{n-1}$, and it is multiplied by the coefficient $a_1 + \dots + a_n$.
-Similarly, looking for the dominant part of $x$ in the denominator, write
-$$A^{n-k} \cdot B^{k-1} = [ (x+a_1) (x+a_2) \dots (x+a_n) ] ^{\frac {n-k} n} \cdot (x^n) ^{\frac {k-1} n} = \\ (x^n) ^{\frac {n-k} n} \left[ \left( 1 + \frac {a_1} x \right) \dots \left( 1 + \frac {a_n} x \right)\right] ^{\frac {n-k} n} \cdot (x^n) ^{\frac {k-1} n} = x^{n-1} \left[ \left( 1 + \frac {a_1} x \right) \dots \left( 1 + \frac {a_n} x \right)\right] ^{\frac {n-k} n} .$$
-Note that the part between square brackets tends to $1$ when $x \to \infty$ (because $n$, the number of factors, stays fixed and each factor tends to $1$), so $\dfrac {A^{n-k} \cdot B^{k-1}} {x^{n-1}} \to 1$ (i.e. $A^{n-k} \cdot B^{k-1}$ behaves asymptotically like $x^{n-1}$ when $x \to \infty$). 
There are $n$ such terms in the denominator, so $\dfrac {n x^{n-1}} {\text{denominator}} \to 1$.
-Putting everything together you get that
-$$\sqrt[n]{(x+a_1) (x+a_2) \dots (x+a_n)} - \sqrt[n] {x^n} = A-B = \\ \frac {(a_1 + \dots + a_n) x^{n-1} + \text{smaller powers of} \ x} {n x^{n-1}} \frac {n x^{n-1}} {\text{denominator}} \to \frac {a_1 + \dots + a_n} n .$$<|endoftext|>
-TITLE: What are the units of Z[x]?
-QUESTION [6 upvotes]: Where $\mathbb{Z}[x]$ is the ring of polynomials in $x$ with integer coefficients. The book I am studying says the unity of this ring is $f(x) = 1$ so then if some $p \in \mathbb{Z}[x]$ is a unit, this then must mean that $p^{-1} \in \mathbb{Z}[x]$, where $p \cdot p^{-1} = 1$, correct?
-Obviously then $f(x) = 1$ and $f(x) = -1$ are units, and I have seen elsewhere on the internet that people claim these are the only units of $\mathbb{Z}[x]$, but aren't simple one-term polynomials also units? That is, $\forall k \in \mathbb{Z}, f(x) = x^k$ is a unit because $(f(x) = x^k)^{-1} \Longleftrightarrow f(x) = x^{-k}$ and $x^k \cdot x^{-k} = 1$, no?
-
-REPLY [2 votes]: More generally, the units of $D[x]$ are exactly the units of $D$, when $D$ is a domain.<|endoftext|>
-TITLE: Is a function necessarily measurable, given that all of its level sets are measurable?
-QUESTION [9 upvotes]: Let $f: \mathbb{R}^n \rightarrow \mathbb{R}$ be a function such that the set $$T_{\alpha} \equiv \{ x \in \mathbb{R}^n : f(x) = \alpha\}$$ is measurable $\forall \alpha \in \mathbb{R}$. Is $f$ measurable?
-Here's the proof I've sketched, but I'd like to know whether I'm on the right way or not.
-Since $T_\alpha$ is measurable $\forall \alpha \in \mathbb{R}$, the set $$T^{+}_{\beta} \equiv \mathbb{R}^n\setminus\bigcup_{\alpha = \beta}^{+\infty}T_{\alpha} = \{x \in \mathbb{R}^n : f(x) < \beta\}$$ is also measurable $\forall \beta \in \mathbb{R}$, therefore $f$ is measurable.
-
-REPLY [11 votes]: I'd like to know whether I'm on the right way or not.
Others have already provided counterexamples, so you now know that your conjecture is false, and that your proof must therefore contain a wrong step. In particular, your proof states that -$$\bigcup_{\alpha = \beta}^{+\infty}T_{\alpha}$$ -is measurable since each $T_\alpha$ is such, but this reasoning step is unsound. -Indeed, the above argument only holds for countable unions, and the union above is not countable: it involves a set for every real $\geq \beta$. -If any union (with no size bounds) of measurable sets were measurable, then all sets would be measurable, provided singletons are. This follows by the trivial union -$$A = \bigcup_{a\in A} \{a\}$$<|endoftext|> -TITLE: Why are the quaternions not an algebra over the complex numbers? -QUESTION [9 upvotes]: I just began to study about algebras over rings and quickly came across the fact that the quaternions are not an algebra over the complex numbers. I would prefer an answer as elementary as possible. - -REPLY [4 votes]: Because the center of $\mathbb H$ is $\mathbb R$. - -Why? -Suppose $a+bi+cj+dk\in Z(\mathbb H)$, the center of $\mathbb H$. Then $i(a+bi+cj+dk)=-b+ai-dj+ck$ and $(a+bi+cj+dk)i=-b+ai+dj-ck$ have to be equal, so $c=d=0$. Now, $(a+bi)j=aj+bk$ and $j(a+bi)=aj-bk$ so $b=0$. Thus, $Z(\mathbb H)\subseteq\mathbb R$, and the opposite inclusion is obvious. - -Why does this suffice? -If $\mathbb H$ were a $\mathbb C$-algebra, then since $\mathbb C$ is commutative, we need $\mathbb C\subseteq Z(\mathbb H)$. -In general, if $A$ is a commutative ring, for any $A$-algebra $B$ we need $A\subseteq Z(B)$.<|endoftext|> -TITLE: How to determine if I'm talented enough to study math? -QUESTION [10 upvotes]: After getting Bachelor's degree from Computer Science I changed my field to Applied Mathematics. The previous degree was mostly programming-oriented, so I know quite a lot about software engineering, databases, Linux etc., but not so much about maths. 
-To be honest, we had just basic single-variable calculus, basics of linear algebra and a little graph theory.
-After finishing my B.S. I got some awards for my bachelor thesis and because of that I was offered a job in our research centre. I changed my field to mathematics to be more useful there (most of my colleagues are concentrating on computational mathematics).
-The problem is that while studying for this term's final exams, I have recognized that I don't understand things thoroughly, and some of them not at all. The biggest problem for me is Functional Analysis, where I'm simply stuck on very basic concepts and I have to look up something for almost every lemma or proof I want to understand.
-It takes a tremendous amount of time and even then I feel like I know nothing about it, because in every subject there are exercises I don't understand and proofs I'm not able to invent on my own.
-The truth is, I was able to pass somehow all the tests so far and to finish all the projects with full score.
-But still, I feel I'm not very confident in this and so I thought about the possibility that I'm simply not talented enough.
-So, is there any way to find out if my problem is caused by a lack of talent or just by gaps in my knowledge I'll be able to fill one day?
-I mean something like what amount of knowledge one should be able to grasp in one half of a year, one year etc.
-
-I've read several questions about studying maths here on SE. For example:
-Grasping mathematics
-How to effectively study math?
-Steps to Re-Learn Mathematic the right way
-
-REPLY [3 votes]: Ok so
-1) Don't let anybody else (exams, professors, university committees) decide whether you are talented enough.
-2) Forget the question of whether you are talented, and ask yourself if it interests you. 
-3) And most importantly: are you willing to put in the time and effort to overcome the obstacles?<|endoftext|>
-TITLE: Birational Equivalence of Diophantine Equations and Elliptic Curves
-QUESTION [10 upvotes]: A while ago I saw this question Quartic diophantine equation: $16r^4+112r^3+200r^2-112r+16=s^2$ which was very relevant to an undergraduate research paper I am currently working on. The answer given for this problem describes a method for determining solutions to diophantine equations by finding a "birational equivalence" to the solution set of an elliptic curve.
-The result I am seeking depends on the solution set to a quartic diophantine equation of two variables and I believe that there is only one solution and no more (which was indicated by a program which checked possible values up to a high number).
-So my question is, is there a general method for determining such a birational equivalence between solution sets of diophantine equations and elliptic curves? And where is a good source for an undergraduate to learn to use these tools?
-
-REPLY [5 votes]: (This is a long comment re MacLeod's answer.)
-We can also combine the two cases together. Assume a quartic polynomial to be made a square,
-$$pu^4+qu^3+ru^2+su+t=z_1^2\tag1$$
-has a known rational point, call it $w$. We substitute $u=v+w$ and collect the new variable $v$,
-$$c_4v^4+c_3v^3+c_2v^2+c_1v+\color{blue}{c_0^2}=z_2^2\tag2$$
-where the $c_i$ are polynomials in $w$ and the coefficients of $(1)$. The constant term of $(2)$ turns out to be a square, specifically, $c_0^2:=pw^4+qw^3+rw^2+sw+t$.
-Let $v=1/x,\,$ $z_2=c_0\, z_3/x^2$ and $(2)$ becomes,
-$$x^4+d_3x^3+d_2x^2+d_1x+d_0=z_3^2\tag3$$
-which is Case 1 and can then be solved as explained by MacLeod.<|endoftext|>
-TITLE: Inequivalent Hilbert norms on given vector space
-QUESTION [6 upvotes]: Suppose we have a vector space $X$. Let $\|\cdot\|_1$ and $\|\cdot\|_2$ be two different complete norms on $X$ s.th. 
$X$ equipped with $\|\cdot\|_j, \ j\in\{1,2\}$ is a Hilbert space.
-Are there simple examples of such norms which are inequivalent?
-This question arises from this discussion.
-
-REPLY [5 votes]: I doubt that there are any simple examples. Note that if the two norms are comparable they must be equivalent, by, say, the open mapping theorem. (If a bounded linear map from one Banach space to another is invertible qua linear map then the inverse is also bounded.) This makes it hard to give a simple explicit example; when you start with a Hilbert space and write down another norm it tends to be dominated by the first norm, and is hence equivalent to the first norm. I don't know any set theory, but I suspect the existence of two inequivalent complete norms requires the axiom of choice. (Edit: Nate Eldredge says yes it does require AC; see his comment below for details.)
-(Regarding my claim that if you write down an explicit new complete norm on a Banach space it tends to be dominated by the original norm, there's the closed graph theorem, which does not actually say this: "Suppose $X$ and $Y$ are Banach spaces and $T:X\to Y$ is linear. If $T$ was given by an explicit formula then $T$ is bounded." The CGT doesn't actually say that, but that's what it comes down to in practice, in my experience. There's a reason that those unbounded operators people study are only defined on a dense subspace...)
-There do exist non-simple examples. Let $X$ and $Y$ be any two infinite-dimensional separable Hilbert spaces. Let $A$ be a Hamel basis for $X$ and let $B$ be a Hamel basis for $Y$. Multiply the basis elements by constants so every element of $A$ has norm $1$ but the norms of the elements of $B$ are unbounded.
-Any bijection from $A$ onto $B$ extends to a linear isomorphism of $X$ and $Y$, hence induces a new norm on $X$ with respect to which $X$ is a Hilbert space. 
The two norms on $X$ are not equivalent, and hence they are incomparable.<|endoftext|>
-TITLE: Proving the limit
-QUESTION [12 upvotes]: I have to prove that:$$\lim\limits_{x\to\infty}\bigg(\frac{n}{\frac{1}{x+a_1}+\frac{1}{x+a_2}+\cdots+\frac{1}{x+a_n}}-x\bigg)=\frac{a_1+a_2+\cdots+a_n}{n}$$
-The way I started doing this is:
-$$=\lim\limits_{x\to\infty}\left(\frac{n}{\frac{(x+a_1)(x+a_2)\cdots(x+a_n)\sum_{i=1}^{n}\big(\frac{1}{x+a_i}\big)}{(x+a_1)(x+a_2)\cdots(x+a_n)}}-x\right)$$
-Then I combine $x$ with the rest, but that leads me nowhere. Any tips on how to do this? Taylor expansion cannot be used.
-
-REPLY [12 votes]: $$\dfrac{n}{\sum_i \dfrac{1}{x+a_i}} - x = \dfrac{n-\sum_j\dfrac{x}{x+a_j}}{\sum_i \dfrac{1}{x+a_i}} = \dfrac{n-\sum_j\dfrac{x+a_j}{x+a_j}+\sum_j\dfrac{a_j}{x+a_j}}{\sum_i \dfrac{1}{x+a_i}} = \dfrac{\sum_j\dfrac{a_j}{x+a_j}}{\sum_i \dfrac{1}{x+a_i}} =$$ $$= \sum_j \dfrac{a_j}{\sum_i\dfrac{x+a_j}{x+a_i}}$$
-Now
-$$\displaystyle\lim_{x\to\infty} \sum_j \dfrac{a_j}{\sum_i\dfrac{x+a_j}{x+a_i}} = \sum_j \dfrac{a_j}{\sum_i\displaystyle\lim_{x\to\infty}\dfrac{x+a_j}{x+a_i}} = \sum_j \dfrac{a_j}{n} = \dfrac{a_1 + \ldots + a_n}{n}$$
-Theorem: Given the function $f(x) = x^{2}$, the line which passes through points A and B which lie on the graph of $f$ has the $y$-intercept $-(A_{x}B_{x})$ if $A_{x} < B_{x}$ -first it is clear that: -$$f(A_{x}) = A_{y} = A_x^{2} \quad \& \quad f(B_{x}) = B_{y} = B_x^{2}$$ -The slope of the line joining A & B will be: -$$m = \frac{B_{x}^{2}-A_{x}^{2}}{B_{x}-A_{x}}$$ -The numerator is a difference of squares; thus the whole expression can be simplified to: -$$m = B_{x}+A_{x}$$ -Then choose one point (A or B) and sub this value into the equation of a line and solve for b: -$$A_{x}^{2} = (B_{x}+A_{x})A_{x}+b$$ -$$-(A_{x}B_{x}) = b$$ -and since $A_{x}$ will always be negative if non-zero and $B_{x}$ will always be positive if non-zero ; this is equivalent to saying $-A_{x}B_{x}$ QED. - -REPLY [2 votes]: This proof mostly looks good except for a few points. First, when you say $A$ is a point on $f(x)$, this doesn't mean much in standard mathematical vernacular. Usually, one would say, "$A$ is a point on the graph of $f$," or to be even more precise, $A\in\{(x,f(x)):x\in\mathbb R\}$. Next, you should always explain your notation. It didn't take me too long to determine that $A_x$ is the first coordinate of $A$, but it would be best for you, as the one introducing the notation, to explain this. When you say $f(A_x)=A_y=A^2$, this doesn't make sense because we cannot square the point $A$. What you mean is $f(A_x)=A_y=A_x^2$. Same for $B$. Finally, it is incorrect to state that the numerator is a perfect square. It most likely is not. What you mean is that the numerator is the difference of perfect squares. -One more thing: the hypothesis that $A_x\leq 0\leq B_x$ should be stated before the conclusion of the theorem. This will make your conclusion stand out better and make the statement of the theorem more clear. 
-You also need to deal with the degenerate case $A_x=B_x=0$ separately because $m$ is not well defined in this case.<|endoftext|> -TITLE: When the product of dice rolls yields a square -QUESTION [28 upvotes]: Succinct Question: -Suppose you roll a fair six-sided die $n$ times. -What is the probability that the product of the rolls is a square? -Context: -I used this as one question in a course for elementary school teachers when $n=2$, and thought the generalization might be a good follow-up question for secondary school math teachers. But I encountered quite a bit of difficulty in tackling it, and I am wondering if there is a neater solution than what I have already seen, and to what deeper phemonena it connects. -Known: -Since the six sides of a die are $1, 2, 3, 2^2, 5,$ and $2\cdot3$, the product of the rolls is always of the form $2^{A}3^{B}5^{C}$, and the question is now transformed into the probability that $A, B, C$ are all even. The actual "probability" component is mostly for ease of phrasing; its only contribution is a $6^n$ in the denominator, and my true question is of a more combinatorial nature: namely, - -In how many ways can the product of $n$ rolls yield a square? - -One approach that I have seen involves first creating an $8 \times 8$ matrix corresponding to the eight cases around the parity of $A, B, C$; one can then take the dot product of each roll with this matrix, and hope to spot a pattern. In this way, one may discover the formula: -$$\frac{6^n + 4^n + 3\cdot2^n}{8}$$ -and the "probability" version is simply this formula with another $6^n$ multiplied in the denominator. -As for proving this: Some guesswork around linear combinations of the numerator yields a formula for each of the eight cases concerning $A,B,C$ parity, and one can then prove all eight of them by induction. 
And so I "know" the answer in the sense that I have all eight of the formulae (and the particular one listed above is correct) but they were not found in a particularly organized fashion.
-
-My Actual Question:
-What is a systematic way to deduce the formula, given above, for the number of ways the product of $n$ rolls yields a square, and to what deeper phenomena does this connect?
-
-REPLY [5 votes]: There's a slick approach to this based on bijections, though it loses a lot of the generality of the generating function methods.
-Let $S=\{1,2,3,6\}$ and $T=\{4,5\}$. We will divide roll sequences into classes based on whether the sequence contains elements from $S, T,$ or both.
-Class 1: Sequences consisting only of rolls in $T$. Swapping the first die roll between $4$ and $5$ gives a bijection between squares and non-squares, so exactly half the sequences in this class give a square.
-Class 2: Sequences consisting only of rolls in $S$. Now we can divide the sequences into groups of $4$ that share the same last $n-1$ rolls. Each group has one square product, so exactly $1/4$ of the sequences in this class give a square.
-Class 3: Sequences containing both a roll in $S$ and a roll in $T$. To each sequence we assign a "type", consisting of (1): The location of the first roll in $S$, (2): the location of the first roll in $T$, and (3): The remaining $n-2$ rolls. Once the type is fixed, there are $8$ choices for the two designated rolls ($4$ for the first roll in $S$ times $2$ for the first roll in $T$), and exactly one of them gives a square.
-So the number of square sequences is
-$$\frac{1}{2} |\textrm{Class } 1| + \frac{1}{4} |\textrm{Class } 2| + \frac{1}{8} |\textrm{Class } 3| = \frac{1}{2} 2^n + \frac{1}{4} 4^n + \frac{1}{8} (6^n-2^n-4^n)$$<|endoftext|>
-TITLE: The real numbers are a field extension of the rationals?
-QUESTION [13 upvotes]: In preparing for an upcoming course in field theory I am reading a Wikipedia article on field extensions. It states that the complex numbers are a field extension of the reals. 
I understand this since $\mathbb R(i) = \{ a + bi : a,b \in \mathbb R\}$.
-Then the article states that the reals are a field extension of the rationals. I do not understand how this could be. What would you adjoin to $\mathbb Q$ to get all the reals? The article doesn't seem to say anything more about this. Is there a way to explain this to someone who has yet to take a course in field theory?
-
-REPLY [2 votes]: Several people here have already noted that, while $\mathbb{Q}$ is a subfield of $\mathbb{R}$, this fact isn't "caused" by the same explanation as $\mathbb{R}$ being a subfield of $\mathbb{C}$. Pithily, $\mathbb{C}$ is an algebraically complete algebraic extension of $\mathbb{R}$ while $\mathbb{R}$ is a metric-complete metric extension of $\mathbb{Q}$.
-If $N$ is a norm on a ring $R$, we can define Cauchy sequences in $R$ with respect to $N$ as those $(x_n)$ with $\forall \delta \in \mathbb{R}^+ \exists M\in\mathbb{N} \forall m,\,n \in \mathbb{N} (m,\,n > M \to N(x_m-x_n)<\delta)$. We can also define null sequences in $R$ with respect to $N$, viz. $\forall \delta \in \mathbb{R}^+ \exists M\in\mathbb{N} \forall n \in \mathbb{N} (n > M \to N(x_n)<\delta)$. We call Cauchy sequences $(x_n),\,(y_n)$ equivalent if $(x_n-y_n)$ is a null sequence. We can think of equivalent Cauchy sequences as "having the same limit", even if that limit does not exist in $R$. We say $R$ is metric-complete if it contains its Cauchy sequences' limits; for the choice $N(x)=|x|$, $\mathbb{R}$ is metric-complete but $\mathbb{Q}$ is not. Indeed, just as we may identify $\mathbb{C}=\mathbb{R}[i]$ with $i^2=-1$, we can identify $\mathbb{R}$ with the set of equivalence classes of Cauchy sequences in $\mathbb{Q}$ with $N(x)=|x|$. Each real number is one such equivalence class. For example, $\sqrt{2}$ is the set of Cauchy sequences $(x_n)$ in $\mathbb{Q}$ for which $x_n^2 \to 2$.
-If you want something to which to compare the $\mathbb{Q}$-to-$\mathbb{R}$ extension, you can consider the $p$-adic numbers. 
These are obtained the same way, but with a different choice of $N$. The trivial norm $$N\left(x\right)=\left\{ \begin{array}{cc}
-0 & x=0\\
-1 & x\neq0
-\end{array}\right.$$ obtains only eventually constant Cauchy sequences, which are equivalent iff their eventually constant values match, so $\mathbb{Q}$ is metric-complete with respect to this choice. It can be shown the only other norms on $\mathbb{Q}$ are the $p$-adic norms; for fixed $p\in\mathbb{P}$ define $$\text{ord}_p x=-\inf\,\{k\in\mathbb{Z}\mid p^k x\in\mathbb{Z}\},\,N(x)=p^{-\text{ord}_p x}.$$Now we get different Cauchy sequences and different equivalence classes of them (as we have different null sequences), and the $p$-adic numbers $\mathbb{Q}_p$ differ from the reals (as well as the $q$-adic numbers for $q\in\mathbb{P}$ with $p\neq q$).
-All metric completions of $\mathbb{Q}$ are also field extensions. They can all be algebraically extended by introducing an imaginary unit; there are also the complex $p$-adic numbers $\mathbb{C}_p$.<|endoftext|>
-TITLE: What is the intersection of these two cylinders?
-QUESTION [5 upvotes]: $$0\le x^2 + z^2 \le 1$$
-$$0 \le y^2 + z^2 \le 1$$
-I want to compute the volume of the intersection.
-Sketching it out on paper is sort of nice: I see cross-sections that are disks; for the first cylinder, the y-coordinate is free to vary, and for the second cylinder, the x-coordinate is free to vary.
-The intersection, I would guess, seems to be something spherical.
-So how can I pin down the actual set of points?
-Well, one thing I thought of was to try to manipulate both inequalities to make use of the equation of a sphere, so I try looking at these inequalities instead:
-$$y^2\le x^2 + y^2 + z^2 \le 1 +y^2$$
-$$x^2 \le x^2 +y^2 + z^2 \le 1+x^2$$
-Am I heading in the right direction? Where can I go from here?
-Thanks,
-
-REPLY [9 votes]: I could not resist modeling this in GeoGebra.
-
-Zenith is $(0, 0, 1)$ and nadir $(0, 0, -1)$. 
-A slice at height $z$ is a square with side length
-$$
-a = 2 \sqrt{1-z^2}
-$$
-so
-$$
-dV = (2 \sqrt{1-z^2})^2 dz = 4(1-z^2) dz
-$$
-and the volume is
-$$
-V = \int_{-1}^{1} 4(1-z^2) \, dz = \frac{16}{3}.
-$$<|endoftext|>
-TITLE: Explicit countable elementary extension of $\mathbb{N}$
-QUESTION [6 upvotes]: I would like to see an explicit example of a non-trivial elementary extension of the structure $(\mathbb{N}, +, \cdot, 0, 1)$ where $\mathbb{N}$ includes zero. Most of all I am interested in countable ones.
-
-REPLY [2 votes]: It is worth pointing out that nonstandard models of arithmetic are constructible in the logical theory WKL$_0$. Weak König's lemma is sometimes considered to be a form of the axiom of choice; however, it can be proven in classical Zermelo–Fraenkel set theory without the axiom of choice. In the sense of being constructible in ZF, such models can be called explicit. Such an explicit model was constructed by Skolem as early as 1933 using sequences of ordinary integers (i.e., sequences drawn from the so-called intended model).
-A closely related discussion is taking place here.<|endoftext|>
-TITLE: What is $\mathbb R^{\mathbb R}$ as a vector space?
-QUESTION [9 upvotes]: In Sheldon Axler's Linear Algebra Done Right third edition the following is given as an example of a subspace:
-The set of differentiable real-valued functions on $\mathbb R$ is a subspace of $\mathbb R^{\mathbb R}$
-I'm looking for an intuitive explanation of the statement. Letting $S$ be the set of all differentiable real-valued functions, in order for the statement to be true, $S$ must be a subset of $\mathbb R^{\mathbb R}$ (a subspace needs to be a subset).
--How can $S \subset \mathbb R^{\mathbb R}$ when $S$ is a set containing functions and $\mathbb R^{\mathbb R}$ is a set containing real numbers?
--What are the elements in $\mathbb R^{\mathbb R}$? How can we think of $ \mathbb R^{\mathbb R}$ as a tuple? 
-
-REPLY [6 votes]: If you go back to page 14 in chapter 1 of the text, at the bottom left corner of the page there is a footnote which mentions how $\mathbb{F}^{\infty}$ and $\mathbb{F}^n$ are special cases of $\mathbb{F}^S$, which Axler defines as the set of functions $g:S\to \mathbb{F}$.
-Why is this so?
-We can think of the $n$-tuples in $\mathbb{F}^n$ as being the assignment of elements of the set $\{1,2,3,....,n\}$ to elements of $\mathbb{F}$ by $g \in \mathbb{F}^{\{1, 2, 3,...,n\}}$ as $g(1)=x_1$, $g(2)=x_2$,...., $g(n)=x_n$ for the $n$-tuple $(x_1, x_2,..., x_n)$. We can also think of this as indexing a subset of $\mathbb{F}$ with $\{1,2,3,...,n\}$.
-So similarly for $\mathbb{F}^{\infty}$, we have that the natural numbers are indexing our set (as Axler defined it), so $\mathbb{F}^{\infty}$ is really just $\mathbb{F}^{\mathbb{N}}$, i.e. functions $g: \mathbb{N} \to \mathbb{F}$ defined as $g(1)=x_1$, $g(2)=x_2$, ..., which assign all of the natural numbers to elements of our field $\mathbb{F}$ in the form of countably infinite tuples $(x_1, x_2, \ldots)$.
-Now to your question. From the perspective I just explained, we can see that the elements of $\mathbb{R}^{\mathbb{R}}$ are just functions from $\mathbb{R}$ to $\mathbb{R}$, and that we are indexing elements of $\mathbb{R}$ with elements of $\mathbb{R}$. Hence each real-valued function is just a single ordering of a subset of $\mathbb{R}$ into an uncountably long tuple! (since $\mathbb{R}$ is uncountable) And so these real-valued functions correspond to single points in $\mathbb{R}^{\mathbb{R}}$. Amazing, isn't it?
-From this perspective we can see how the set of differentiable real-valued functions is a subset of $\mathbb{R}^{\mathbb{R}}$, since these functions are just tuples which order the elements of $\mathbb{R}$ in a way such that the graph of the function $f(i)=x_i$ for $i,x\in \mathbb{R}$ is differentiable. 
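The tuple-as-function identification can be made concrete with a small Python sketch (illustrative only; the helper names are not from Axler's book):

```python
def tuple_as_function(xs):
    # the n-tuple (x_1, ..., x_n) as the function i -> x_i on {1, ..., n}
    return lambda i: xs[i - 1]

g = tuple_as_function((7, 4, 9))
assert (g(1), g(2), g(3)) == (7, 4, 9)

# the vector-space operations on F^S are defined pointwise
def add(f1, f2):
    return lambda s: f1(s) + f2(s)

def scale(c, f):
    return lambda s: c * f(s)

h = add(tuple_as_function((1, 2, 3)), scale(2, tuple_as_function((10, 20, 30))))
assert (h(1), h(2), h(3)) == (21, 42, 63)
```

The same pointwise `add` and `scale` make sense for any index set $S$, including $S=\mathbb{R}$, which is exactly why $\mathbb{R}^{\mathbb{R}}$ is a vector space.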
-To start you off, note that our additive identity is just $g:\mathbb{R} \to \{0\}$ (since constant functions are differentiable). This is just an uncountably long tuple consisting only of zeroes, and this tuple is the origin of our $\mathbb{R}$-dimensional space! Since $0$ is the additive identity for $\mathbb{R}$, we can see how each component of any tuple we add with our uncountably long tuple of $0$'s will remain unchanged under addition of a $0$, hence the zero function is our additive identity. -Hope that helps :)<|endoftext|> -TITLE: Rather weird integral -QUESTION [13 upvotes]: I've reached a pretty weird integral -$$\int_0^{5} \frac{\ln(y)}{(y+3)\sqrt{y}} dy,$$ -And I'm having some difficulties starting from the -$u$-substitution method. I had the intuition that -I may take $\sqrt{y} = u$ and thus $\frac{1}{2\sqrt{y}}dy = du.$ -However, this method seems to get tangled with the issues related -to the natural log in the numerator. I felt that I could start on -integration by parts, but then I thought that there may be a cleaner -method with partial fractions. Could someone give me some suggestions -on either method in this problem? 
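-As a sanity check before attempting anything symbolic, I also estimated the integral numerically (a rough midpoint-rule sketch of my own; it doubles as a check that the substitution $t=\sqrt{y}$ tames the singularity at $0$):

```python
import math

# Midpoint-rule estimate of I = \int_0^5 ln(y) / ((y+3) sqrt(y)) dy,
# computed two ways: directly (the singularity at y = 0 is integrable)
# and after the substitution t = sqrt(y), which gives
# I = 4 * \int_0^{sqrt(5)} ln(t) / (t^2 + 3) dt.

def midpoint(f, a, b, n):
    h = (b - a) / n
    return h * sum(f(a + (k + 0.5) * h) for k in range(n))

direct = midpoint(lambda y: math.log(y) / ((y + 3) * math.sqrt(y)),
                  0.0, 5.0, 400000)
substituted = 4 * midpoint(lambda t: math.log(t) / (t * t + 3),
                           0.0, math.sqrt(5.0), 400000)

print(direct, substituted)  # both come out near -0.9, so the two forms agree
```

The two estimates agree, so whatever closed form comes out should evaluate to roughly $-0.9$ on $[0,5]$.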
-
-REPLY [13 votes]: $$\int \frac{\log(y)}{(y+3)\sqrt{y}} dy$$
-$t=\sqrt{y},\;\; y=t^2,\;\;dy=2t\,dt$
-$$=4\int \frac{\log(t)}{t^2+3} dt$$
-$u=\log(t),\;\;du=\frac{dt}{t},\;\; v=\frac{1}{\sqrt{3}}\arctan\left(\frac{t}{\sqrt{3}}\right),\;\; dv = \frac{dt}{t^2+3}$
-$$=4\left(\frac{\log(t)}{\sqrt{3}}\arctan\left(\frac{t}{\sqrt{3}}\right)-\frac{1}{\sqrt{3}}\int\frac{\arctan\left(\frac{t}{\sqrt{3}}\right)}{t}dt\right)$$
-Looking at the final integral, we use $\arctan(z)=\frac{i}{2}\left(\log(1-iz)-\log(1+iz)\right)$ to get
-$$\int\frac{\arctan\left(\frac{t}{\sqrt{3}}\right)}{t}dt$$
-$$=\frac{i}{2}\left(\int\frac{\log\left(1-\frac{it}{\sqrt{3}}\right)}{t}dt-\int\frac{\log\left(1+\frac{it}{\sqrt{3}}\right)}{t}dt\right)$$
-$$=\frac{i}{2}\left(I_1 - I_2\right)$$
-For $I_1$, the substitution $u=\frac{it}{\sqrt{3}}$ gives $\frac{dt}{t}=\frac{du}{u}$, so
-$$I_1 = \int\frac{\log(1-u)}{u}du = -\operatorname{Li_2}(u) = -\operatorname{Li_2}\left(\frac{it}{\sqrt{3}}\right)$$
-For $I_2$, the substitution $u=\frac{-it}{\sqrt{3}}$ likewise gives
-$$I_2 = \int\frac{\log(1-u)}{u}du = -\operatorname{Li_2}(u) = -\operatorname{Li_2}\left(\frac{-it}{\sqrt{3}}\right)$$
-Putting this all together and using the inverse tangent integral $\operatorname{Ti_2}(x)=\frac{1}{2i}\left(\operatorname{Li_2}(ix)-\operatorname{Li_2}(-ix)\right)$, we get
-$$\frac{i}{2}\left(I_1 - I_2\right)=\frac{i}{2}\left[\operatorname{Li_2}\left(\frac{-it}{\sqrt{3}}\right)-\operatorname{Li_2}\left(\frac{it}{\sqrt{3}}\right)\right]=\operatorname{Ti_2}\left(\frac{t}{\sqrt{3}}\right)$$
-so the antiderivative is
-$$\color{red}{\frac{4\log(t)}{\sqrt{3}}\arctan\left(\frac{t}{\sqrt{3}}\right)-\frac{4}{\sqrt{3}}\operatorname{Ti_2}\left(\frac{t}{\sqrt{3}}\right)}$$
-with $t=\sqrt{y}$; note that this is real for real $t$, as it must be.
-This uses the Polylogarithm. Here is a link to the Wikipedia page on the topic.<|endoftext|>
-TITLE: Expected number of steps to finish all the cookies
-QUESTION [13 upvotes]: Please help on this question:
-
-Steve has 256 cookies. Each cookie has a label that is a distinct subset of $\{1,2,3,4,5,6,7,8\}$. At each step, Steve chooses a cookie randomly and eats it as well as all other cookies whose label is a subset of the chosen one. What is the expected number of steps Steve takes before finishing all the cookies?
-
-All I could find was that he must in any case eat the root cookie labeled $\{1,2,3,4,5,6,7,8\}$ to finish the game. But the probability of choosing it depends on which cookies he has eaten before he grabs the final one, and I have no idea how to tackle that. Thanks.
-
-I tried to see what happens in a much simplified case. If Steve has 4 cookies whose labels are the subsets of $\{1,2\}$, the possible cases are:
-\begin{align}
-&\emptyset\rightarrow\{1\}\rightarrow\{2\}\rightarrow\{1,2\}&=4\times\frac14\times\frac13\times\frac12\\
-&\emptyset\rightarrow\{2\}\rightarrow\{1\}\rightarrow\{1,2\}&=4\times\frac14\times\frac13\times\frac12\\
-&\emptyset\rightarrow\{1\}\rightarrow\{1,2\}&=3\times\frac14\times\frac13\times\frac12\\
-&\emptyset\rightarrow\{2\}\rightarrow\{1,2\}&=3\times\frac14\times\frac13\times\frac12\\
-&\emptyset\rightarrow\{1,2\}&=2\times\frac14\times\frac13\\
-&\{1\}\rightarrow\{2\}\rightarrow\{1,2\}&=3\times\frac14\times\frac12\\
-&\{1\}\rightarrow\{1,2\}&=2\times\frac14\times\frac12\\
-&\{2\}\rightarrow\{1\}\rightarrow\{1,2\}&=3\times\frac14\times\frac12\\
-&\{2\}\rightarrow\{1,2\}&=2\times\frac14\times\frac12\\
-&\{1,2\}&=1\times\frac14\\
-\end{align}
-which sums to $\cfrac94$.
-
-REPLY [5 votes]: Let's consider the general case with $2^s$ cookies, labelled with subsets of $\{1,2,\ldots,s\}$. Steve begins by choosing a random ordering of all the cookies. At each step, Steve eats the next available cookie, and instead of eating the remaining cookies whose label is a subset, let's say he simply discards them. The number of steps is equal to the total number of cookies eaten.
-A particular cookie will eventually be eaten if and only if it comes first in the ordering among those cookies whose labels are supersets of its label. If the label on a cookie has size $k$, there are $2^{s-k}$ possible supersets, so the probability that cookie gets eaten is $2^{k-s}$.
There are ${s\choose k}$ cookies whose label has size $k$, so by linearity of expectation, the expected number of steps (= cookies eaten) is
-$$
-\sum_{k=0}^s {s\choose k}2^{k-s}=\left(\frac{3}{2}\right)^s.
-$$<|endoftext|>
-TITLE: Hom / tensor adjunction for $O_X$ modules?
-QUESTION [8 upvotes]: Does the hom-tensor adjunction hold for $O_X$ modules also? With sheaf hom and sheaf tensor product, the statement would consist of a natural transformation $Hom_O (M \otimes_O N, K)\cong_{nat} Hom_O(M, Hom_O(N, K))$, where $O = O_X$ is the structure sheaf and $M,N,K$ are sheaves of $O_X$ modules.
-If true I think I can check this by hand, by describing sheaves in terms of compatible germs, defining this adjunction on the germs and checking that they glue together. It is easier in the case of quasi-coherent sheaves on a scheme because then one can just work in an affine cover and the distinguished base.
-I am a little concerned that one might need a finite presentation hypothesis somewhere, in order for $Hom$ to localize well. ($Hom_{R[S^{-1}]}(M[S^{-1}], N[S^{-1}]) \cong Hom_R(M,N)[S^{-1}]$ needs $M$ finitely presented.)
-
-REPLY [15 votes]: Everything actually works for arbitrary ringed spaces and arbitrary sheaves of $\mathcal O_X$-modules, cf. e.g. Tag 01CM of the Stacks Project (where the proof of this lemma is omitted).
-Firstly, you should be aware of three subtleties:
-
-$M \otimes_{\mathcal O_X} N$ is the sheafification of the presheaf
-$$U \mapsto M(U) \otimes_{\mathcal O(U)} N(U).$$
-To avoid confusion, I will write $M \odot_{\mathcal O_X} N$ for this presheaf tensor product.
-The sheaf $\mathscr Hom_{\mathcal O_X}(M,N)$ is given on $U$ by $\operatorname{Hom}_{\mathcal O_U}(M|_U, N|_U)$, not $\operatorname{Hom}_{\mathcal O(U)}(M(U),N(U))$. This makes it slightly harder to define the 'obvious map', as we will see below.
-For $M, N$ both quasicoherent, it is not in general true that $\mathscr Hom_{\mathcal O_X}(M, N)$ is quasicoherent (this is the remark you make at the bottom; it is enlightening to try to think of a counter-example). However, it does work when $M$ is coherent (and $X$ is Noetherian, or use the correct definition of coherent sheaves on an arbitrary scheme or even ringed space).
-
-I will define an obvious (but not so obvious) isomorphism
-$$f \colon \mathscr Hom_{\mathcal O_X}(M, \mathscr Hom_{\mathcal O_X} (N, K)) \to \mathscr Hom_{\mathcal O_X}(M \otimes_{\mathcal O_X} N, K).$$
-We have to construct compatible isomorphisms
-$$f_U \colon \operatorname{Hom}_{\mathcal O_U} (M|_U, \mathscr Hom_{\mathcal O_U} (N|_U, K|_U)) \to \operatorname{Hom}_{\mathcal O_U}((M \otimes_{\mathcal O_X} N)|_U, K|_U)$$
-for all $U \subseteq X$. Let $U$ be fixed from now on. I will break down what both sides are.
-Right hand side. Since sheafification commutes with restriction to opens, the right hand side is
-$$\operatorname{Hom}_{\mathcal O_U}(M|_U \otimes_{\mathcal O_U} N|_U, K|_U) = \operatorname{Hom}_{\mathcal O_U}^{\operatorname{pre}} (M|_U \odot_{\mathcal O_U} N|_U, K|_U).$$
-An element of this is a compatible family of maps (for all $V \subseteq U$)
-$$\psi_V \colon M(V) \otimes_{\mathcal O(V)} N(V) \to K(V).$$
-However, as pointed out by the OP in the comments, you cannot now use the classical tensor-hom adjunction to say that this is the same as giving compatible maps
-$$M(V) \to \operatorname{Hom}_{\mathcal O(V)}(N(V), K(V)).$$
-See also the remark below.
-Left hand side. On the other hand, the left hand side consists of giving a compatible system of maps
-$$\phi_V \colon M(V) \to \operatorname{Hom}_{\mathcal O_V}(N|_V, K|_V).$$
-The map $f$. Thus, there is an obvious map from left to right given by taking global sections: given $\phi_V$ and $m \in M(V)$, the map $\phi_V(m)\colon N|_V \to K|_V$ induces a map $\chi_V(m)\colon N(V) \to K(V)$ by taking global sections.
This gives $\chi_V \colon M(V) \to \operatorname{Hom}_{\mathcal O(V)}(N(V), K(V))$. Using the classical tensor-hom adjunction, this in turn gives a map
-$$\psi_V \colon M(V) \otimes_{\mathcal O(V)} N(V) \to K(V).$$
-However, having a map $\chi_V(m) \colon N(V) \to K(V)$ does not give you a map $\phi_V(m) \colon N|_V \to K|_V$ in general. It works when $V$ is an affine scheme and $N$ is quasicoherent, but that is not satisfactory. Thus, defining an inverse is not quite so easy.
-The inverse of $f$. This is where the compatibility of the various $\psi_V$ comes in. That is, given a system $\psi_V$ from the right hand side, for any $m \in M(V)$, we have to construct a morphism of sheaves
-$$\phi_V(m) \colon N|_V \to K|_V.$$
-For any $W \subseteq V$, we define
-\begin{align*}
-\phi_V(m) \colon N(W) &\to K(W)\\
-n &\mapsto \psi_W(m|_W \otimes n).
-\end{align*}
-One checks that these are compatible for varying $W$, so we get $\phi_V(m)$ as claimed. Then check that the $\phi_V$ are compatible for varying $V$, etc. $\square$
-I also omitted the verifications that everything is $\mathcal O$-linear, and that the maps defined above are actually inverses. All of this is relatively straightforward.
-Remark. Given a commutative diagram
-$$\begin{array}{ccc} M(V) \otimes_{\mathcal O(V)} N(V) & \to & K(V) \\ \downarrow & & \downarrow \\ M(W) \otimes_{\mathcal O(W)} N(W) & \to &\ K(W), \end{array}$$
-we do not get a commutative diagram
-$$\begin{array}{ccc} M(V) & \to & \operatorname{Hom}_{\mathcal O(V)}(N(V), K(V)) \\ \downarrow & & \downarrow \\ M(W) & \to &\ \operatorname{Hom}_{\mathcal O(W)}(N(W), K(W)).\end{array}$$
-In fact, we don't even get the right vertical map, because of the contravariance of $\operatorname{Hom}$ in the first variable. Thus, we cannot simplify the description of the right hand side.<|endoftext|>
-TITLE: What does it mean when people say that groups are a study of symmetry?
-QUESTION [11 upvotes]: I see many people make remarks to the effect that groups have basic symmetry properties. I am familiar with Cayley's Theorem and the symmetric groups $S_N$. However, when I think of the symmetric group, I think of a group of permutations, not anything to do with symmetries. Additionally, I have been told that semi-groups lack the basic symmetry properties that groups have. (Note: I only know the basic definition of a semi-group.)
-What does symmetry mean in this context?
-
-REPLY [4 votes]: It's good that you keep symmetry groups and symmetric groups well separate; they each use symmetry in a slightly different sort of way!
-
-Classically, things like parabolas were considered symmetric: They have an axis of symmetry and reflection across that axis leaves the "footprint" of the parabola unchanged (even though individual points may get shuffled around). Regular triangles, squares (regular polygons in general), circles, and tilings (like a floor tile, or brick patterns) are all two-dimensional objects that have a lot (OK, usually at least a "nontrivial amount") of symmetry: they "look the same" from several different viewpoints.
-Eventually, mathematicians formalized a symmetry to mean an isometry (function that preserves the distance between any two points) of Euclidean space that again leaves the "footprint" of (set of all points comprising) the object unchanged, while potentially shuffling the individual pieces. This is what Bye_world is referencing in the comments, and is really a pretty radical way to think about things!
-The full set of symmetries of an object does form a group: The composition of symmetries (which, remember, are really just special functions from $\Bbb R^n$ [or related spaces] to itself) is yet another symmetry. Function composition is associative. The identity map is the identity symmetry, and it doesn't do anything.
Finally we can "undo" any symmetry; for example performing a counterclockwise rotation of $28^\circ$ to undo a clockwise rotation of $28^\circ$, or translating in the opposite direction to undo a translation, etc.
-This is why semigroups aren't really a very good language to talk about symmetry.
-
-We don't have an identity. We can't be guaranteed that we can just "do nothing" to the object in consideration. Not only is this not-so-good (TM) because doing nothing is nice, but also because
-We aren't guaranteed inverses -- let alone the ability to define them, if we can't even be sure we have an identity element! This really doesn't get along well with our picture of symmetries of an object: Two viewpoints of an object aren't really "the same" if we can't switch freely between them, but only from one to the other, and not back: We need inverses, and we need an identity to talk about inverses.
-
-Sidenote: More generally, we have automorphisms of mathematical objects. These are maps from an object to itself that preserve some essential structure; a more abstract version of symmetry. You've probably encountered group automorphisms, or graph automorphisms. I personally tend to think in the language of "symmetries of a graph" rather than "automorphisms of a graph" unless I need to be fancy for some reason. This is distinct from a symmetry in the "isometry" sense, although the line gets a little blurry, namely because we like to draw ( = "realize") graphs in $\Bbb R^2$, and then the notions sometimes coincide!
-Again, automorphisms come together to naturally form a group, and "automorphism" is a term that can be applied to a surprisingly large number of kinds of structures. This is the sense in which groups are the language of symmetry: If you have things that "act like" (or are) symmetries, then they form a group!
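-To see the axioms in action, here is a small brute-force sketch (my own illustration, not part of the answer above): take the eight isometries that fix a square's footprint, record each one by where it sends the corners, and check closure and inverses mechanically.

```python
from itertools import product

# The 8 symmetries of the square with corners (+-1, +-1): 4 rotations and
# 4 reflections. Each symmetry is determined by where it sends the corners.

def rot(k):            # rotation by k * 90 degrees about the origin
    def f(p):
        x, y = p
        for _ in range(k % 4):
            x, y = -y, x
        return (x, y)
    return f

def refl_x(p):         # reflection across the x-axis
    x, y = p
    return (x, -y)

corners = [(1, 1), (-1, 1), (-1, -1), (1, -1)]

def as_perm(f):        # record a symmetry as the tuple of corner images
    return tuple(f(c) for c in corners)

syms = {as_perm(rot(k)) for k in range(4)}
syms |= {as_perm(lambda p, k=k: rot(k)(refl_x(p))) for k in range(4)}
assert len(syms) == 8  # the dihedral group of the square has order 8

def compose(a, b):     # (a o b) as corner permutations
    idx = {c: i for i, c in enumerate(corners)}
    return tuple(a[idx[c]] for c in b)

# closure: composing any two symmetries gives another of the 8
assert all(compose(a, b) in syms for a, b in product(syms, syms))
# inverses: every symmetry has a partner composing to the identity
ident = as_perm(rot(0))
assert all(any(compose(a, b) == ident for b in syms) for a in syms)
print("dihedral group of order", len(syms))
```

Nothing here is special to the square; the same brute-force check works for any finite collection of maps that "acts like" symmetries.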
-
-Polynomials are another place where "symmetric" is somewhat commonly used: There are things called symmetric polynomials (or more generally, symmetric functions), like $f(x, y) = x^2y + xy^2$, whose values don't change when we permute the variables; so $f(x,y) = f(y, x)$ above. I can't recall offhand where else these show up, but I know that Newton did some work with them, predating Galois and the formal definition of a group.
-At any rate, I had suspected this is where the term "Symmetric Group" got its name (since the permutations of the variables do indeed form symmetric groups), and this answer confirms it, by quoting a MathOverflow post quoting the pioneer Burnside!<|endoftext|>
-TITLE: Let $P(x)=(x-1)(x-2)(x-3)$. For how many polynomials $Q(x)$ does there exist a polynomial $R(x)$ of degree 3 such that $P(Q(x))=P(x).R(x)?$
-QUESTION [5 upvotes]: Let $P(x)=(x-1)(x-2)(x-3)$. For how many polynomials $Q(x)$ does there exist a polynomial $R(x)$ of degree 3 such that $P(Q(x))=P(x).R(x)?$
-
-Let $R(x)$ be a third degree polynomial $ax^3+bx^2+cx+d$.
-In $P(Q(x))=P(x).R(x)$, the RHS is a sixth degree polynomial, so the LHS must also be a sixth degree polynomial.
-So $Q(x)$ must be a quadratic polynomial (say $ax^2+bx+c$).
-But I don't know how to argue further. Please help me. Thanks.
-
-REPLY [8 votes]: Hint: If $P(Q(x)) = P(x) \cdot R(x)$ for all $x$, then we must have:
-$$P(Q(1)) = P(1) \cdot R(1) = 0$$ $$P(Q(2)) = P(2) \cdot R(2) = 0$$ $$P(Q(3)) = P(3) \cdot R(3) = 0$$
-Since the zeros of $P$ are at $1,2,3$, we must have $Q(1),Q(2),Q(3) \in \{1,2,3\}$.
-For each of the $3^3 = 27$ ways you can assign values to $Q(1)$, $Q(2)$, and $Q(3)$, there is exactly one possible polynomial $Q(x)$ with degree $\le 2$. How many of these result in $Q(x)$ being a quadratic polynomial (i.e. not linear or constant)?<|endoftext|>
-TITLE: Groups of order $64$ with abelian group of automorphisms
-QUESTION [7 upvotes]: G. A.
Miller in 1913 constructed the first example of a non-abelian group of order $64$ with an abelian group of automorphisms. It is the group
-$$G=(C_8\rtimes C_4)\rtimes C_2=\langle x,y,z\colon x^8, y^4, z^2, yxy^{-1}=x^5, zxz^{-1}=x,zyz^{-1}=y^{-1}\rangle.$$
-After a few years, the following observations were made:
-
-There is no non-abelian group of order $<64$ with an abelian group of automorphisms.
-There are (exactly) two more non-abelian groups of order $64$ with abelian automorphism group.
-
-Question: What are the other groups of order $64$ with abelian automorphism group? In their presentations, where do they differ from $G$? (I mean, they may be slight modifications of $G$ above; if so, what are those modifications?)
-
-Edit: James pointed out an error; there are two more non-abelian groups of order $64$, not one, with abelian automorphism group.
-
-REPLY [4 votes]: This doesn't seem to be quite correct. There appear to be three non-abelian groups of order $64$ with abelian automorphism group. They are:
-SmallGroup( 64, 68 )
-SmallGroup( 64, 69 )
-SmallGroup( 64, 116 )
-
-Miller's group is the last of these three.
-EDIT: Assuming I haven't made any transcription errors here, a presentation for SmallGroup( 64, 68 ) is
-$$\langle x,y,z \mid x^4, z^4, x^2 = y^2,
-z^2 = [y,x],
-[y,z],
-[x, z^2],
-[y, x^2],
-[z, x^2],
-[z,x] = [x^{-1},z]
-\rangle.$$
-A presentation for SmallGroup( 64, 69 ) is:
-$$\langle x,y,z \mid x^4, z^4, y^2 = z^2 x^2,
-z^2 = [y,x],
-[z,y], [z,x],
-[y,[z,x]],
-[x^2, z], [x^2, y], [z^2, x],
-[z,x] = [x, z^{-1}]\rangle.$$<|endoftext|>
-TITLE: Find the maximum modulus of $e^{z^2}$?
-QUESTION [6 upvotes]: The maximum modulus of $e^{z^2}$ on the set $S=\{z\in \mathbb{C}: 0\leq Re(z)\leq1, 0\leq Im(z)\leq1\}$ is
-
-$e/2$
-$e$
-$e+1$
-$e^2$
-
-My attempt: We know $|e^{z^2}|\leq e^{|z|^2}$, and the maximum of $|z|$ is $\sqrt{2}$, attained at $z=1+i$, so $|e^{z^2}|\leq e^{|z|^2}=e^2$. Is option $4$ then correct? Is my solution correct? If it's not, then how to solve this? Thanks.
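-Edit: a quick numeric experiment of my own (just a grid search, not a proof) suggests the true maximum is smaller than my bound $e^2$:

```python
import cmath
import math

# Grid search for the maximum of |exp(z^2)| over the square
# S = { z : 0 <= Re(z) <= 1, 0 <= Im(z) <= 1 }.
N = 200
best = max(
    abs(cmath.exp(complex(i / N, j / N) ** 2))
    for i in range(N + 1)
    for j in range(N + 1)
)
print(best)  # about 2.71828..., i.e. e, attained at z = 1
```

So the grid maximum looks like $e$, not $e^2$, which makes me doubt my upper-bound argument is tight.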
- -REPLY [9 votes]: You've shown that $e^2$ is an upper bound for $|e^{z^2}|$, but you haven't shown that $|e^{z^2}| = e^2$ for some $z \in S$. If you can find such a $z$, then $e^2$ would be the maximum of $|e^{z^2}|$ on $S$. However, if no such $z$ exists, then the maximum is smaller. -Hint: Let $z = x+iy$ where $x,y$ are real. Then, you have: -$e^{z^2} = e^{(x+iy)^2} = e^{x^2+2ixy+i^2y^2} = e^{x^2-y^2+i \cdot 2xy} = e^{x^2-y^2}e^{i \cdot 2xy}$. -Hence, $|e^{z^2}| = |e^{x^2-y^2}| \cdot |e^{i \cdot 2xy}| = e^{x^2-y^2} \cdot 1 = e^{x^2-y^2}$. -Now, what is the maximum of $e^{x^2-y^2}$ over the region $0 \le x \le 1$ and $0 \le y \le 1$?<|endoftext|> -TITLE: Number of ways to color n objects with 4 colors if all colors must be used at least once -QUESTION [6 upvotes]: I have seen, and solved the following problem: -How many ways to color n objects with 3 colors $\{A, B, C\}$, if all colors must be used at least once. -$\require{enclose}$ -The answer is as follows: -$$3^n-{{3}\choose{2}}\cdot2^n + {{3}\choose{1}}\cdot1^n$$ -Because the number of forbidden colorings is: -$${{3}\choose{2}}\cdot2^n - {{3}\choose{1}}\cdot1^n$$ -The overall answer then reduces to: -$$3^n - 3\cdot2^n + 3$$ -The solution comes to me as follows. There are $3\cdot2^n$ configurations that are illegal because they only use $2$ colors. Of these, some are counted more than once so let's take a closer look: -$$2^n\,\{B, C\}\qquad\qquad2^n\,\{A, C\}\qquad\qquad2^n\,\{A, B\}\qquad\qquad$$ -All the formations you can make with EXACTLY two colors are unique and counted only once. How about all the formations you can make with EXACTLY one color? Well that answer is obviously ${{3}\choose{1}}$ but let's see how many times we have overcounted by subtracting $3\cdot2^n$. Just like we broke up $3^n$ into our $3\cdot2^n$ formations with two colors, let's break up each $2^n$ into $2$ single color $1^n$ formations and see if we have any repeats! 
-$$2^n\,\{B, C\}\qquad\qquad2^n\,\{A, C\}\qquad\qquad2^n\,\{A, B\}\qquad\qquad$$
-\begin{array}{cc}
-\text{Each of the $2^n$ can make 2 single color sets so we will have repeats:}\\
-\hline
-\end{array}
-$$1^n\,\{B\}\qquad\qquad\qquad1^n\,\{A\}\qquad\qquad\qquad\enclose{updiagonalstrike}{1^n\,\{A\}}$$
-$$1^n\,\{C\}\qquad\qquad\qquad\enclose{updiagonalstrike}{1^n\,\{C\}}\qquad\qquad\qquad\enclose{updiagonalstrike}{1^n\,\{B\}}$$
-You can see that there are obviously only ${{3}\choose{1}}$ unique illegal single color formations; however, we've accounted for each one twice by subtracting $3\cdot2^n$ from $3^n$, so we must add back the ones we overcounted. This is why we add back a ${{3}\choose{1}}$. If we had overcounted each unique single color formation by 100, I believe we would add back 100 to the final answer, so that we would be left with only the unique single color formations that are illegal.
-I was working on the following problem:
-How many ways to color n objects with 4 colors $\{A, B, C, D\}$, if all colors must be used at least once.
-Following the same logic in this post's answer, the following process makes sense to me:
-The total number of ways to color $n$ objects with any colors would be $4^n$. Of the $4^n$, there are:
-$$3^n\,\text{that use only}\,\{B, C, D\},\;3^n\,\text{that use only}\,\{A, C, D\},\;3^n\,\text{that use only}\,\{A, B, D\},\;3^n\,\text{that use only}\,\{A, B, C\}$$
-This gives us ${{4}\choose{3}}\cdot3^n$ invalid options. However this over-counts several invalid options several times. Let's take a closer look.
-$$3^n\, \{B, C, D\}\qquad3^n\, \{A, C, D\}\qquad3^n\, \{A, B, D\}\qquad3^n\, \{A, B, C\}$$
-\begin{array}{cc}
-\text{Which breaks down to the following (${{4}\choose{2}}$ duplicates crossed out):}\\
-\hline
-\end{array}
-$$2^n\,\{B, C\}\qquad\qquad2^n\,\{A, C\}\qquad\qquad2^n\,\{A, B\}\qquad\qquad\enclose{updiagonalstrike}{2^n\,\{A, B\}}$$
-$$2^n\,\{B, D\}\qquad\qquad2^n\,\{A, D\}\qquad\qquad\enclose{updiagonalstrike}{2^n\,\{A, D\}}\qquad\qquad\enclose{updiagonalstrike}{2^n\,\{A, C\}}$$
-$$2^n\,\{C, D\}\qquad\qquad\enclose{updiagonalstrike}{2^n\,\{C, D\}}\qquad\qquad\enclose{updiagonalstrike}{2^n\,\{B, D\}}\qquad\qquad\enclose{updiagonalstrike}{2^n\,\{B, C\}}$$
-\begin{array}{cc}
-\text{Which breaks down to the following:}\\
-\hline
-\end{array}
-$$2^n\,\{B, C\}\quad2^n\,\{B, D\}\quad2^n\,\{C, D\}\quad2^n\,\{A, C\}\quad2^n\,\{A, D\}\quad2^n\,\{A, B\}$$
-\begin{array}{cc}
-\text{Since all $2^n$ are unique, the only sets that we count many times here are the ones with one letter}\\
-\hline
-\end{array}
-$$1^n\,\{B\}\qquad\enclose{updiagonalstrike}{1^n\,\{B\}}\qquad\enclose{updiagonalstrike}{1^n\,\{C\}}\qquad1^n\,\{A\}\qquad\enclose{updiagonalstrike}{1^n\,\{A\}}\qquad\enclose{updiagonalstrike}{1^n\,\{A\}}$$
-$$1^n\,\{C\}\qquad1^n\,\{D\}\qquad\enclose{updiagonalstrike}{1^n\,\{D\}}\qquad\enclose{updiagonalstrike}{1^n\,\{C\}}\qquad\enclose{updiagonalstrike}{1^n\,\{D\}}\qquad\enclose{updiagonalstrike}{1^n\,\{B\}}$$
-We obviously know that there are going to be only 4 unique countings of 1 letter sets since there are only 4 colors, so the other 8 were counted many times, just like the other 6 sets of $2^n$.
-To me, this makes the answer:
-$$4^n - {{4}\choose{3}}\cdot3^n + {{4}\choose{2}}\cdot2^n + 8\cdot1^n$$
-I am slightly suspicious as it doesn't follow the pattern that would make ${{4}\choose{1}}\cdot1^n$ be the last term, however walking through the logic it is clear to me that just as we counted 6 of the $2^n$ sets twice, we are counting the monochromatic sets 8 extra
times than necessary, so we need to give them back. So I believe my first solution is correct; could someone verify my logic here?
-
-REPLY [4 votes]: Consider $$4^n-\binom{4}{3}\cdot3^n+\binom{4}{2}2^n-\binom{4}{1}1^n$$
-Any coloring that uses exactly 3 colors is subtracted from the $4^n$ total colorings once in the $-\binom{4}{3}\cdot3^n$ term. That coloring only appears in one of the $\binom{4}{3}$ collections of colorings, and it is only one of that collection's $3^n$ colorings.
-Any coloring that uses exactly 2 colors (for example, an $\{A,B\}$-coloring) is subtracted from the $4^n$ total colorings twice in the $-\binom{4}{3}\cdot3^n$ term (as one of the $\{A,B,C\}$-colorings and as one of the $\{A,B,D\}$-colorings), then added back once in the $+\binom{4}{2}2^n$ term (as one of the $\{A,B\}$-colorings). The net effect is to subtract it once from the $4^n$ total.
-Any coloring that uses exactly 1 color (for example, the all-$A$ coloring) is subtracted from the $4^n$ total colorings three times in the $-\binom{4}{3}\cdot3^n$ term (as one of the $\{A,B,C\}$-colorings, as one of the $\{A,B,D\}$-colorings, and as one of the $\{A,C,D\}$-colorings), then added back three times in the $+\binom{4}{2}2^n$ term (as one of the $\{A,B\}$-colorings, as one of the $\{A,C\}$-colorings, and as one of the $\{A,D\}$-colorings), and then subtracted again once in the $-\binom{4}{1}1^n$ term (as the $\{A\}$-coloring). The net effect is to subtract it once from the $4^n$ total.
-This is all an example of the inclusion-exclusion principle.
-
-Written as $$\binom{4}{4}4^n-\binom{4}{3}\cdot3^n+\binom{4}{2}2^n-\binom{4}{1}1^n+\binom{4}{0}\cdot0^n$$ you can directly verify that this gives correct results for $n=0,1,2,3,4$.
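-That direct verification is easy to automate; here is a brute-force sketch (my own illustration, with made-up helper names) comparing the alternating sum against an exhaustive count of colorings that use every color:

```python
from itertools import product
from math import comb

def surjective_count(r, n):
    """Brute force: colorings of n objects with r colors using every color."""
    return sum(1 for c in product(range(r), repeat=n) if len(set(c)) == r)

def inclusion_exclusion(r, n):
    """The alternating-sum formula, sum_k (-1)^(r-k) C(r,k) k^n."""
    return sum((-1) ** (r - k) * comb(r, k) * k ** n for k in range(r + 1))

for n in range(8):
    assert surjective_count(4, n) == inclusion_exclusion(4, n)
print([inclusion_exclusion(4, n) for n in range(5)])  # [0, 0, 0, 0, 24]
```

The first nonzero value is $4! = 24$ at $n = 4$, exactly as expected: with four objects and four mandatory colors, the valid colorings are the permutations.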
-
-For $r$ colors, the same logic gives the formula $$\sum_{k=0}^r\binom{r}{r-k}(-1)^k(r-k)^n$$ which is equivalent (by reindexing $k\mapsto r-k$) to $$\sum_{k=0}^r\binom{r}{k}(-1)^{r-k}k^n$$<|endoftext|>
-TITLE: Nullstellensatz for non-algebraically closed fields
-QUESTION [7 upvotes]: I'm trying to prove that the Nullstellensatz holds for non-algebraically closed fields, when the variety is taken over the algebraic closure. Let $R=K[x_1,...,x_n]$ and $\overline{K}$ the algebraic closure of $K$. I was able to prove that $\sqrt{I}\subseteq \mathcal{I}_R(\mathcal{V}_{\overline{K}^n}(I))$ for any ideal $I$. I'm struggling a bit with the other direction. My attempt goes as follows:
-It is clear that given any ideal $J$, $V_{K^n}(J)\subseteq \mathcal{V}_{\overline{K}^n}(J)$. Applying $\mathcal{I}_{\overline{K}^n}$ reverses the order, so we have:
-$$\mathcal{I}_{\overline{K}^n}(V_{K^n}(J))\supseteq \mathcal{I}_{\overline{K}^n}(\mathcal{V}_{\overline{K}^n}(J))=\sqrt{J}$$
-where the equality is just the normal Nullstellensatz. I don't know if this idea seems fruitful since I haven't been able to get the reverse inclusion. Any ideas on how to show this direction would be highly appreciated.
-
-REPLY [5 votes]: This result appears in Section 11.2.1 of my commutative algebra notes under the name Semirational Nullstellensatz (which is not standard, but seems broadly reasonable). The proof is indeed nontrivial. (It uses something that I call Lang's Lemma.) Perhaps there is a more straightforward approach -- indeed, the exercise at the end of the section seems to ask about this -- and if so I would be interested to know.<|endoftext|>
-TITLE: Why does topology rarely come up outside of topology?
-QUESTION [42 upvotes]: I am currently taking topology and it seems like a completely different branch of math from anything else I have encountered previously.
-I find it a little strange that things are not defined more concretely.
For example, a topological space is defined as a set $X$ with a collection of open sets $\tau$ satisfying some properties, such as: the empty set and $X$ are in $\tau$, the intersection of two open sets is in $\tau$, and unions of open sets are in $\tau$.
-So, it seems that a lot of things are topological spaces, such as the real line equipped with a collection of open sets. But I have not seen anyone bringing this up in other areas of mathematics such as linear algebra, calculus, differential equations or analysis or complex analysis. Sure, open sets and closed sets are brought up, but the concepts of "topology", "base", etc. are missing entirely.
-As you scratch the surface a little more you encounter things such as the subspace topology, product topology and order topology, and open sets are defined differently with respect to each of them. But nonetheless, outside of a course in topology, you never encounter these concepts.
-Is there a reason why topology is not essential for other courses that I have mentioned? Is there a good reference that meshes serious topology (as in Munkres) with more applied areas of mathematics?
-
-REPLY [15 votes]: A major topic of classical analysis is Fourier series and Fourier integrals. These ideas generalize to analysis on (locally compact) topological groups. A topological group is a group $G$ in which group multiplication $G \times G \rightarrow G$ where $(g,h) \mapsto gh$ and group inversion $G \rightarrow G$ where $g \mapsto g^{-1}$ are both continuous (using the product topology on $G \times G$ in order to speak about a function on it being continuous). Analysis on topological groups is a major theme within representation theory. You might have heard about Fourier series in an undergraduate analysis class, but such a course would not have discussed the Fourier transform on topological groups because the audience wouldn't have the experience to appreciate such a generalization yet. It would look "too abstract."
-In addition to topological groups there are topological vector spaces: vector spaces $V$ (over the real numbers, say) in which vector addition $V \times V \rightarrow V$ where $(v,w) \mapsto v+w$ and scalar multiplication $\mathbf R \times V \rightarrow V$ where $(c,v) \mapsto cv$ are continuous, using the product topology on both $V \times V$ and $\mathbf R \times V$ in order to speak about functions on them being continuous. A special feature of finite-dimensional real vector spaces like $V = \mathbf R^n$ is that the usual topology on them is the only topology they have that makes them Hausdorff topological vector spaces. (What about the discrete topology on $\mathbf R^n$? Vector addition on $\mathbf R^n$ is continuous when $\mathbf R^n$ has the discrete topology, but scalar multiplication $\mathbf R \times \mathbf R^n \rightarrow \mathbf R^n$ when $\mathbf R^n$ has the discrete topology and the scalars $\mathbf R$ have their usual topology is not continuous.) That $\mathbf R^n$ has only one Hausdorff topological vector space structure is in some sense why we can talk about concepts like continuity in multivariable calculus without having to get into a treatment of topology first: the usual way we think about continuity of functions on $\mathbf R^n$ is the only reasonable way to do so. However, once you pass to infinite-dimensional spaces the situation changes: these spaces can be made into topological vector spaces in more than one interesting way, and this quickly leads into the area of functional analysis, which is not something you would have seen yet just because you can't learn everything in your first two years of college. Functional analysis is not just abstraction for the sake of pure math: it's the mathematical foundation of quantum physics. -The language of topology is relevant to areas of math that at first glance seem to be unrelated to issues of continuity, such as number theory. 
The study of solutions to congruences mod $m$ can be reduced to the case that $m = p^k$ is a power of a prime number $p$, and the best way to think systematically about congruences modulo prime powers uses the $p$-adic integers $\mathbf Z_p$ (a compact ring) and the $p$-adic numbers $\mathbf Q_p$ (a locally compact field). A buzzword to look up in this context is "Hensel's lemma," which is the $p$-adic analogue of Newton's method from classical analysis. The $p$-adic numbers were created by Hensel at the end of the 19th century, but the original description of his ideas was muddled and awkward because he lacked the topological language that greatly simplifies what is going on (once you are used to the language so that you can recognize certain topological features of the situation). If you've never heard of $p$-adic numbers, to convey their importance I'll just point out that the solution to Fermat's Last Theorem depends on them: the work by Wiles is concerned with representations of Galois groups into $p$-adic matrix groups.
-The construction of $p$-adic numbers can be extended to the notion of an inverse limit, leading to constructions such as profinite groups (an inverse limit of finite groups). To work with inverse limits you need the language of topology that you think is not used elsewhere but really is: product topology, subspace topology, and base.<|endoftext|>
-TITLE: Prove multi-dimensional Mean Value Theorem
-QUESTION [17 upvotes]: I've been asked to prove the multi-dimensional Mean Value Theorem. I'd be
-grateful if someone could give me feedback on whether it is okay.
-
-Proof of Mean Value Theorem:
-Let $f: [a,b]\rightarrow \mathbb{R}$ be continuous on $[a,b]$ and differentiable on $(a,b)$. Consider the function:
-$$g(x)=f(x)-f(a)-\frac{f(b)-f(a)}{b-a}(x-a) \mbox{.}$$
-This function is continuous on $[a,b]$, differentiable on $(a,b)$ and $g(a)=g(b)$. Thus there is $c\in (a,b)$ such that $g'(c)=0$.
But this means that there is $c\in (a,b)$ such that -$$f'(c)=\frac{f(b)-f(a)}{b-a}\mbox{.}$$ -Proof of multi-dimensional Mean Value Theorem: -Let $f:U\rightarrow\mathbb{R}$ be a differentiable function ($U$ is an open subset of $\mathbb{R}^n)$. Let $\mathbf{a}$ and $\mathbf{b}$ be points in $U$ such that the entire line segment between them is contained in $U$. Define $h:[0,1]\rightarrow U$ in the following way: -$$h(t)=(a_1+(b_1-a_1)t,\ldots,a_n+(b_n-a_n)t) \mbox{.}$$ -This function is differentiable on $(0,1)$ and continuous on $[0,1]$, and so is $f \circ h$. If we apply the Mean Value Theorem to $f\circ h$ we get -$$(f \circ h )'(c)=(f \circ h )(1)-(f \circ h )(0)$$ -where $c\in (0,1)$ and -$$f '(h(c))(\mathbf{b}-\mathbf{a})=f(\mathbf{b})-f(\mathbf{a}) \mbox{.}$$ -If we set $\zeta=h(c)$ we get -$$f '(\zeta)=\frac{f(\mathbf{b})-f(\mathbf{a})}{\mathbf{b}-\mathbf{a}} \mbox{.}$$ -(Obviously $f '(\zeta)$ is a gradient vector.) - -REPLY [20 votes]: Everything is fine, except the last formula: You cannot divide by a vector. Leave it at -$$f({\bf b})-f({\bf a})=\nabla f({\bf z})\cdot({\bf b}-{\bf a})$$ -for some ${\bf z}\in[{\bf a},{\bf b}]$.<|endoftext|> -TITLE: Alternative way to show that a simple group of order $60$ can not have a cyclic subgroup of order $6$ -QUESTION [9 upvotes]: Suppose $G$ is a simple group of order $60$; show that $G$ cannot have a subgroup isomorphic to $ \frac {\bf Z}{6 \bf Z}$. - -Of course, one way to do this is to note that the only simple group of order $60$ is $A_5$. So if $G$ has a cyclic subgroup of order $6$ then it must have an element $\sigma$ of order $6$, i.e. in the (disjoint) cycle decomposition of $\sigma$ there must be a $3$-cycle and a transposition, so $\sigma$ is an odd permutation, which is impossible in $A_5$. Hence, we are done. -I'm interested in solving this question without using the fact that $ G \cong A_5$.
Here is what I tried: -Suppose $G$ has a subgroup say $H$ isomorphic to $ \frac {\bf Z}{6 \bf Z}$, then consider the natural transitive action $G \times \frac {G}{H} \to \frac {G}{H}$, which gives a homomorphism $\phi \colon G \to S_{10}$. Can someone help me prove that $\ker \phi$ is nontrivial? -Is there any other way to solve this question? Any hints/ideas? - -REPLY [2 votes]: Here is another answer based on a simpler observation than my last, inspired by considering a comment by @pGroups on an omission. It is in the same spirit, but is distinct. -Note that a simple group of order $60$ must have six subgroups of order $5$ permuted transitively by conjugation (Sylow). There can't be just one subgroup of order $5$ because it would be normal. This action on the subgroups of order $5$ gives an injective homomorphism into $S_6$. -Now note that the elements of order $6$ in $S_6$ are all odd permutations, so if $G$ has an element of order $6$ its image contains odd permutations. The even permutations in the image form a normal subgroup of index $2$, so $G$ cannot be simple.<|endoftext|> -TITLE: If $f(2x)=2f(x)$ and $f'(0)=0$ then $f(x)=0$ -QUESTION [8 upvotes]: Recently, when I was working on a functional equation, I encountered something like an ordinary differential equation with boundary conditions! - -Theorem. If the following holds for all $x \in \mathbb R$ - $$\begin{align} -f(2x) &=2 f(x) \\ -f'(0) &=0 -\end{align}$$ - then $f(x)=0$ on $\mathbb R$. - -Intuitively, it is evident to me that $f(x)=0$ but I cannot show this by a formal argument. In fact, I don't have any idea how to work on it! :) -I will be thankful if you provide a hint or help to show this with a nice formal proof. - -REPLY [22 votes]: Setting $x=0$ in $f(2x)=2f(x)$, $f(0)=0$. Now fix $x\neq 0$ and consider the values $f(x/2^n)$. By induction, $f(x/2^n)=f(x)/2^n$ for all $n\in\mathbb{N}$.
But by the definition of the derivative, $$\frac{f(x/2^n)-f(0)}{x/2^n-0}=\frac{f(x)/2^n}{x/2^n}=\frac{f(x)}{x}$$ must converge to $f'(0)=0$ as $n\to\infty$. It follows that $f(x)=0$.<|endoftext|> -TITLE: Characterization for the convergence of a series -QUESTION [9 upvotes]: Problem. Let $X$ be a topological space which is compact and Hausdorff, $\mathbb{K}\in\{\mathbb{R},\mathbb{C}\}$, and suppose there exists a sequence $\{x_n\}_{n\in\mathbb N}\subset X$, such that $x_n\neq x_m$ for $n\neq m$. Moreover, assume that we have a sequence $\{\lambda_n\}_{n\in\mathbb N}\subset \mathbb{K}$ with the property that for every $f\in C(X,\mathbb{K})$ the series $\sum_{n\in\mathbb N}\lambda_n\, f(x_n)$ is convergent. -Show that $\sum_{n\in\mathbb N} \lvert\lambda_n\rvert<\infty$. -The opposite implication also holds. It follows from the Weierstrass theorem. - -REPLY [5 votes]: We assume that for every $\,f\in C(X,\mathbb K)$, the series $\sum_{n\in\mathbb N}\lambda_n\,f(x_n)$ converges. -Define -$$ -T_n: C(X,\mathbb K)\to\mathbb K,\quad\text{as}\quad T_n(\,f)=\sum_{k=1}^n\lambda_k\,f(x_k). -$$ -Clearly, the $T_n$'s are linear functionals. Also, for every $\,f\in C(X,\mathbb K)$, the sequence $\,T_n(\,f)$ converges in $\mathbb K$, and hence -$$ -\sup_n \lvert T_n(\,f)\rvert<\infty. -$$ -Therefore, by virtue of the Uniform Boundedness Principle, -$$ -\sup_{n}\|T_n\|<\infty. -$$ -Now, it remains to show that -$$ -\|T_n\|=\sum_{k=1}^n\lvert\lambda_k\rvert. -$$ -The "$\le$" part of the above is straightforward. Say now that $\lambda_n=w_n\lvert\lambda_n\rvert$, with $\lvert w_n\rvert=1$. We shall show that there exists an $f_n\in C(X,\mathbb K)$, with $f_n(x_i)=\overline{w_i}$, for $i=1,\ldots,n$, so that $\lambda_i f_n(x_i)=\lvert\lambda_i\rvert$. As $X$ is $T_2$, there exist $U_1,\ldots,U_n$, open and disjoint neighbourhoods of $x_1,\ldots,x_n$, respectively.
As $X$ is compact and Hausdorff, it is also completely regular, and thus, it is possible, for $i=1,\ldots,n$, to define -$$ -g_i :X\to\mathbb R, -$$ -such that $g_i(x_i)=1$, $0\le g_i(x)\le 1$, for all $x\in X$ and $g_i(x)=0$, for $x\in X\setminus U_i$. -Then the sought-for $f_n$ can be defined as $f_n=\sum_{i=1}^n \overline{w_i}g_i$. Clearly, $\|f_n\|=1$, and $T_n(f_n)=\sum_{i=1}^n\lambda_i\overline{w_i}=\sum_{i=1}^n\lvert \lambda_i\rvert$. Thus we finished with the "$\ge$" part as well.<|endoftext|> -TITLE: Applying the Yoneda-Lemma to prove the existence of Tensor-products -QUESTION [8 upvotes]: In class the professor said when he came to prove the existence of the tensor-product for $A$-modules ($A$ any ring) that the existence and properties of the tensor-product would be one-liners having proved the Yoneda-Lemma (stated below). He then proceeded to other stuff. -I wanted to fill in the details of this remark, but I really can't see where to go. Could anyone help me on this? -Thanks a lot! -Lemma (Yoneda): Let $D$ be a category, $r$ an object in $D$, and $F : D \rightarrow \mathbf{Set}$ a functor. Then there is a bijection: $$\mathrm{Nat}(\mathrm{Hom}(r, -), F) \simeq F(r)$$ -given by $(\alpha: \mathrm{Hom}(r, -) \rightarrow F) \mapsto \alpha_r (1_r)$. - -REPLY [7 votes]: As Eric Wofsey says, the Yoneda lemma does not prove that things exist; it proves that things are unique. -Here is how you can go about proving that things exist using category theory. Let me work with abelian groups for simplicity, although the discussion doesn't really change in general. If $a$ and $b$ are abelian groups, their tensor product $a \otimes b$ is, if it exists, the object representing the functor $\text{Bil}(a \times b, -)$ sending an abelian group $c$ to the set of bilinear maps $a \times b \to c$.
In general, if $C$ is a cocomplete category, then a functor $F : C \to \text{Set}$ is representable iff it has a left adjoint; if $c$ is the representing object, then the left adjoint takes a set $X$ to the coproduct $\sum_X c$. -Hence what you want is an adjoint functor theorem. A necessary condition for $\text{Bil}(a \times b, -)$ to have a left adjoint is that it preserves limits. This is not hard to check. Next you can apply the presentable adjoint functor theorem: a functor between presentable categories has a left adjoint iff it preserves limits and is accessible; this is a very mild smallness condition and isn't hard to check here either. -This is quite a bit more annoying than just explicitly constructing the tensor product, but it has the virtue of applying in great generality.<|endoftext|> -TITLE: Examples of torsion-free abelian groups with finite automorphism group -QUESTION [5 upvotes]: $\mathbb{Z}$ is a torsion-free abelian group with finite automorphism group. Are there other examples of such groups? -Jumping from $\mathbb{Z}$ to $\mathbb{Q}$ is not good; since $\mathbb{Q}$ has infinite automorphism group. Is there any group in-between $\mathbb{Z}$ and $\mathbb{Q}$ with required property? - -REPLY [3 votes]: Exercise: If $f$ is any map from the set of primes to the set of non-negative integers, then (1) the subgroup $B_f$ of $\mathbf{Q}$ generated by $1/p^{f(p)}$ when $p$ ranges over primes has automorphism group reduced to $\{\pm 1\}$. -And (2) $B_f$ and $B_g$ are isomorphic if and only if $f,g$ coincide outside a finite subset. (In particular these are uncountably many non-isomorphic groups.)<|endoftext|> -TITLE: Germs and local ring. -QUESTION [6 upvotes]: I'm having trouble understanding the following argument (which I believe to be somewhat incomplete or flawed). Let $A=C(X)$ be the set of continuous functions from the topological space $X$ to the complex plane $\mathbb{C}$. 
We define $m_{x} = \{f \in C(X): f(x) = 0 \}$ and $A_x$ the ring of germs at point $x$. The statement is the following: $A_x \simeq A_{m_x}$. -(1) I don't see how one defines $A_{m_x}$ since the set contains global functions that might not be well-defined as we quotient by functions $f$ such that $f(x) \neq 0$, but it doesn't necessarily mean that $f \neq 0$. Though it's indeed well defined in a neighborhood of $x$. -(2) Now using the universal property of localization, we naturally want to define $\phi : A_{m_x} \rightarrow A_x$ s.t. we have $\phi(a/s) = \iota(a)\iota(s)^{-1}$ where $\iota$ is the inclusion map $\iota: A \rightarrow A_x$. We want $\phi$ to be an isomorphism. It is surjective; now we want it to be one-to-one. Now I don't see how this is possible as $\phi(a/s) = 0$ iff $a = 0$ in a neighborhood of $x$, which doesn't imply that $a=0$ globally. -I guess there's something I don't really fathom, or my textbook might just be flawed. Anyway, thanks for your help. - -REPLY [4 votes]: You need to assume something about the space $X$ for the claimed statement to be true. A natural assumption to make is that $X$ is completely regular; I will use this assumption in addressing (2) below. -(1) Well, $A_{m_x}$ is not (a priori) a set of functions, it's just a ring of formal "fractions" (which may or may not make sense when evaluated as pointwise fractions of functions). You can still think of it perfectly well as a ring without having to think of its elements as functions on $X$ (which, as you observe, you can't exactly do). -(2) If $a\in A$ is such that $a=0$ in a neighborhood $U$ of $x$, then in fact the image of $a$ in the localization $A_{m_x}$ vanishes: the canonical "inclusion" $A\to A_{m_x}$ is not injective! To prove this, note that by complete regularity, there is a continuous function $f:X\to[0,1]$ such that $f(x)=1$ and $f(y)=0$ for all $y\not\in U$. We then have $f\not\in m_x$ and $fa=0$, so it follows that $a$ maps to $0$ in the localization $A_{m_x}$.
-Note that you also need to use complete regularity to show that your map $\phi:A_{m_x}\to A_x$ is surjective: given a germ of a continuous function at $x$, it is not at all obvious a priori that you can write it as a quotient of two continuous functions that are defined on all of $X$. In detail, if you have a function $f:U\to\mathbb{C}$ where $U$ is an open neighborhood of $x$, let $V$ be an open neighborhood of $x$ whose closure is contained in $U$ (by regularity) and let $g:X\to[0,1]$ be a continuous function such that $g(x)=1$ and $g(y)=0$ for all $y\not\in V$ (by complete regularity). Define $h(y)=\min(1,2g(y))$. Then $h=1$ on a neighborhood of $x$ (namely, the set where $g>1/2$), so $hf$ (which is defined on $U$) has the same germ at $x$ as $f$. But $hf$ vanishes on $U\setminus\overline{V}$, so we can continuously extend it to all of $X$ by setting it equal to $0$ outside of $U$. This continuous extension is then an element of $A$ whose germ at $x$ coincides with the germ of $f$. Thus the map $A\to A_x$ is surjective, and hence so is the map $A_{m_x}\to A_x$.<|endoftext|> -TITLE: Formalizing the meta-language of First order Logic and studying it as a formal system -QUESTION [11 upvotes]: We have a formal system, say first-order logic, and we reason about it in our meta-language using our meta-logic. We study its properties as a mathematical object. We prove theorems about it, just as we do in group theory. This enables us to know the limits and the strength of the system (like completeness), or to study arithmetic in first-order logic. For example, Godel's first incompleteness theorem is a theorem in the meta-language. -Now I wonder: why not formalize the meta-language itself, which we use to argue about FOL? Why? Two reasons: the first is that proving things about this system means proving things about what we can know about first-order logic. Some sentences of FOL could be unprovable in this formalized system, and hence we cannot prove some things about FOL.
Let's call this new system the "formalized meta-logic". -Second, we might by some trick be able to reflect the reasoning in the formalized meta-language inside FOL, and hence it could give us something useful and unexpected. -For a preliminary sketch of what this "formalized meta-logic" might look like, I think we would have similar logical connectives (denoted by other symbols to avoid confusion) which range over collections of formulas (or over formulas, according to what turns out to be more appropriate), meta-predicates, which represent "properties of formulas", and constants that represent fixed formulas. So, for example, Godel's incompleteness theorem (in the meta-language of FOL) could be represented as a formal formula in this "formalized meta-logic". -So, my questions are: -$1$- Are there any attempts in that direction? If the answer is yes, could you provide me with some resources (texts, articles, etc.) so that I can read about it? -$2$- If no such formalization exists, could it be done? If yes, do you think it would be of any use other than mere curiosity? Why or why not? -$3$- Will such a system be related to higher-order logics? If yes, how? - I'm completely unfamiliar with those logics, but I've been told that they quantify over collections of subsets of elements rather than over elements themselves. - -REPLY [7 votes]: It is completely standard to study formalized metatheories. Historically, this is particularly of interest from the point of view of mathematical finitism - most proof systems for first-order logic are completely finitistic, based only on symbol manipulation, and so their formalization is somewhat straightforward. -Formalization of the metatheory is also of interest to help us see the mathematical techniques required to prove theorems of logic such as the completeness theorem.
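The finitistic character of such proof systems is what makes formalization possible: formulas, being strings over a finite alphabet, can be coded injectively by natural numbers, so statements about formulas become statements about numbers. A toy sketch of such a coding (the alphabet here is made up; real Gödel numberings are chosen so that syntactic operations on codes are primitive recursive):

```python
# Toy Gödel-style numbering: encode a formula (a string over a finite,
# hypothetical alphabet) as a single natural number, injectively, so that
# facts about formulas become facts about numbers.
ALPHABET = "()∀∃∧∨¬→=xSv0+*"   # made-up symbol set for illustration only

def encode(formula):
    # Read the string as a number in base len(ALPHABET)+1 with digits 1..len;
    # avoiding digit 0 keeps the coding invertible.
    base = len(ALPHABET) + 1
    n = 0
    for ch in formula:
        n = n * base + (ALPHABET.index(ch) + 1)
    return n

def decode(n):
    base = len(ALPHABET) + 1
    chars = []
    while n:
        n, r = divmod(n, base)
        chars.append(ALPHABET[r - 1])
    return "".join(reversed(chars))

phi = "∀x(x=x)"
assert decode(encode(phi)) == phi   # the coding is injective and decodable
```

The point is only injectivity and decodability; any such coding lets a theory whose objects are natural numbers quantify over (codes of) formulas.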
-Rather than making a formal metatheory that quantifies directly over strings, as suggested in the question, it has been more common to work with Primitive Recursive Arithmetic, whose basic objects are natural numbers. There is a tight link between formulas and natural numbers, as I explained in this answer. PRA is often studied as a quantifier-free theory, with a large collection of function-building symbols; it is typically considered to be "the" formal standard for a finitistic theory. -We know, for example, that a formalized version of the incompleteness theorem is provable in PRA. -PRA cannot talk about infinite sets, and so cannot talk about infinite models. To look at logical theorems such as the completeness theorem, we need to move to slightly stronger theories. We could use ZFC set theory, but that is far more than we need to study the countable logical theories that arise in practice. It has been common to use theories of second-order arithmetic, which can talk about both natural numbers and sets of natural numbers, to study the completeness theorem. We know that, in this context, the completeness theorem for countable theories is equivalent to a theory of second-order arithmetic known as $\mathsf{WKL}_0$, relative to a weaker base theory. -The way to get into this subject is to first learn quite a bit of proof theory, after you have a strong background in basic mathematical logic. There are not any resources I am aware of that are readable without quite a bit of background. Even introductory proof theory texts often assume quite a bit of mathematical maturity and exposure to mathematical logic, and are written at a graduate level. So there is no royal road to the area, unfortunately, although much beautiful work has been done. -To get a small sense of what exists, you can read about the formalization of the incompleteness theorem in Smorynski's article in the Handbook of Mathematical Logic.
That article only describes one part of a much larger body of work, however. The overall body of work on formalized metatheories is spread out in many places, and described from many points of view. You can find some of these in the Handbook of Proof Theory, and in the following texts: - -Petr Hájek and Pavel Pudlák, Metamathematics of First-Order Arithmetic, Springer, 1998 -Stephen G. Simpson, Subsystems of Second-Order Arithmetic, Springer, 1999 -Ulrich Kohlenbach, Applied Proof Theory: Proof Interpretations and their Use in Mathematics, Springer, 2008 - -All of these are intended for graduate or postgraduate mathematicians.<|endoftext|> -TITLE: A "flowchart" for handling Diophantine equations -QUESTION [7 upvotes]: There's no algorithm that correctly decides if a Diophantine equation does or doesn't have a solution. Still, many equations can be successfully analyzed, and I'm wondering if anyone wrote down a "cookbook" for dealing with Diophantine equations of various shapes and forms, including the higher-degree, higher-dimensionality ones. -Given a system of polynomial equations with integer coefficients, we may wish to determine if there are any solutions in integers, and if so, whether there are finitely or infinitely many, and whether they can be explicitly described; we may also wish to determine if there are any solutions in rational numbers, and if so, whether there are finitely many, etc. - -If the system is linear, do this (easy). -If there is just one variable, do that (easy). -If there's one quadratic equation in two variables (or a homogeneous one in three variables), there's again an explicit procedure: check if there's a singularity, determine if there are integer solutions at all (Hasse-Minkowski), parametrize the curve, etc. I think all questions can be effectively answered in the case of genus 0 curves. -If it's an elliptic curve, follow these steps... (I don't think all questions can be algorithmically answered, at present). -Higher genus curve?
What do you do? Find the Jacobian? What else? -Higher dimensional surfaces and varieties? What do you do? Which heuristics do you try, what are some useful families of equations that can be attacked? - -All of those pieces are well covered in the literature - I'm just wondering if there's a good resource that succinctly describes the various alternatives that we may be able to handle. -Note: this older question has similar goals, but it stops short of giving details on how to handle genus 0 and genus 1 and says nothing much about higher genera and higher dimensional varieties. - -REPLY [8 votes]: There are a lot of references which describe "the various possibilities", and it is difficult to decide where to start. One of the first references one finds is the article Open Diophantine Problems by Michel Waldschmidt. It gives a survey about problems, methods, heuristics, etc. The following "flowchart" is taken from the literature (it is by no means complete): -1.) Definition of a "Diophantine Equation" by an algebraic equation of the form $F(x_1,\ldots ,x_n)=0$, where $F$ is a given polynomial in the ring -$\mathbb{Z}[x_1,\ldots ,x_n]$, and the equation is to be solved either in integers or rational numbers (which of the two is more interesting depends on the particular problem). -2.) Questions about solvability: -a.) Is there any solution (integral or rational) at all? -b.) Are there finitely many or infinitely many solutions? -c.) What structure does the set of all solutions have? -d.) Is there an algorithm giving in principle a complete list of all solutions? -3.) Methods: There are almost as many methods as Diophantine equations; -algebraic, analytic, geometric, arithmetic methods etc. -a.) The modular method: Applications of ideas of Shimura, Frey, Ribet, Wiles, etc.
leading to the proof of FLT, i.e., modular forms, elliptic curves, Galois representations of the absolute Galois group -$$ -\rho\colon Gal(\overline{\mathbb{Q}}/\mathbb{Q})\rightarrow GL_d(\mathbb{F}) -$$ and so on. -b.) Cohomological obstructions: An old (and simple) method to show that a particular Diophantine equation has no solution is to show that there exists a local obstruction, i.e., a Brauer-Manin obstruction; the study of Brauer groups of certain varieties over global fields. -c.) Mordell-Weil sieve methods: In particular for curves $C:f(x,y)=0$ one can use the knowledge of the Mordell-Weil group of the Jacobian variety of a curve together with local information, e.g., by reduction modulo a prime $p$, for many primes $p$. This often gives strong results concerning rational points on the curve, which can be used algorithmically. -4.) An example: Perhaps it is nice to see an "easy" example. Let -$$ -f(x,y)=y^2-x^3-7823 -$$ -Then the Diophantine equation $f(x,y)=0$ has no integer solutions; all rational solutions are generated by a single, fundamental solution, namely by $(x,y)$ with -$$ -x=\frac{2263582143321421502100209233517777}{11981673410095561^2},\; -$$ -$$ -y=\frac{186398152584623305624837551485596770028144776655756}{11981673410095561^3}. -$$ -The Mordell-Weil group over $\mathbb{Q}$ is cyclic of rank $1$.<|endoftext|> -TITLE: Chord of a parabola $y^{2}= 4ax$ -QUESTION [5 upvotes]: Prove that on the axis of any parabola -$y^2=4ax$ -there is a certain point $K$ which has the property that, if a chord $PQ$ of the parabola is drawn through it, then -$$\frac{1}{PK^2}+\frac{1}{QK^2}$$ -is the same for all positions of the chord. Find also the coordinates of the point $K$.
-We can apply the parametric equations of a parabola -Let the points $P$ and $Q$ be -$(at_1^{2},2at_1)$ and $(at_2^{2}, 2at_2)$ -So the equation of the chord would be -$$y(t_1+t_2)=2x+2at_1t_2$$ -Hence from there we have that the coordinates of $K$ are -$(-at_1t_2,0)$ -Now our aim is to show that -$\frac{1}{PK^2}+\frac{1}{QK^2}$ -is independent of -$t_1$ and $t_2$. I tried applying the distance formula but to no avail. - -REPLY [2 votes]: Hint: This is beyond easy! :-) The sum in question stays constant, regardless of the position of P, correct? So just let $K=(b,0),$ and then take two positions for P: when P is right above (or right below) K, and when $P\to\infty$. Where is Q in both these cases? Can you deduce the value of b from equating the two sums? I just did, and used GeoGebra to verify the result.<|endoftext|> -TITLE: Topology: reference for "Great Wheel of Compactness" -QUESTION [12 upvotes]: This seems to be a very informative diagram showing the relationship between four forms of compactness in a general topological space. Prior to finding this I was trying to make sense of a seemingly countless (now seen to be countable = 12) collection of theorems relating one to another. The 12 relations are seen to simplify to 6 proofs (A - F) and 6 corollaries by transitivity. -I found a version of this here https://pantherfile.uwm.edu/ancel/www/OLD%20COURSES/MATH%20752%20SPRING%202011/CHAPTER%20III/751.F10.IIIB-C.pdf -I haven't seen it anywhere else and would be interested if anyone has information about it. - -REPLY [2 votes]: From: Fredric D Ancel - -Sent: 02 January 2016 22:08 -To: Tom Collinge -Subject: Re: Your (?) notes -Tom, -The diagram is a souped-up modification of one that I created sometime in the 1980's for a Moore-method introductory graduate level topology course I was teaching at the University of Wisconsin-Milwaukee (or possibly at the University of Oklahoma). It was in my notes for the class.
The goal of the diagram was to help students keep track of the relations between various forms of compactness. Some students memorized it and would quote it back to me when I couldn't remember these relations myself. The "great wheel" terminology is a humorous and probably politically incorrect reference to a Buddhist construct I read about in some Kipling novel. -The original diagram was created in a primitive version of Word (or possibly MacPaint) - no curved lines and very basic typography. Someone else has improved its aesthetics in the interim. I have attached an early version of the diagram below. - -Ric Ancel - -https://pantherfile.uwm.edu/ancel/www/OLD%20COURSES/MATH%20752%20SPRING%202011/CHAPTER%20III/751.F10.IIIB-C.pdf -(Professor Fredric D Ancel at the University of Wisconsin Milwaukee).<|endoftext|> -TITLE: convergence of $\sum_{n=1}^\infty \sqrt{n}a_n$ implies the convergence of $\sum_{n=1}^\infty a_n$ -QUESTION [5 upvotes]: How does the convergence of $\sum_{n=1}^\infty \sqrt{n}a_n$ imply the convergence of $\sum_{n=1}^\infty a_n$? -I know this if $a_n>0$. But for arbitrary $a_n$, I have no idea... - -REPLY [3 votes]: This seems like a situation for Abel's test, which is quite useful in this non-absolutely-convergent case: - -The series $c_n = \sqrt{n}a_n $ yields a convergent sum, -$b_n = \frac{1}{\sqrt{n}}$ is monotone and bounded. - -Then $\sum\limits_{n} b_n c_n = \sum\limits_{n} a_n $ is convergent too.<|endoftext|> -TITLE: What is the difference between an impulse response and a transfer function? -QUESTION [5 upvotes]: An impulse response is the output you get when you apply an impulse, like a Dirac delta function, to your system (only for LTI?). -By knowing the impulse response you know the system. -The transfer function relates the input to the output. I.e. this is a representation of the system. -So aren't both the same? Or did I misunderstand something?
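For concreteness, here is a toy discrete-time LTI system (the recursion coefficient 0.5 is made up) in which both descriptions can be computed and checked against each other numerically; this is a sketch, not a claim about any particular text's conventions:

```python
import cmath

# Toy causal LTI system (hypothetical coefficients): y[n] = 0.5*y[n-1] + x[n].
# Its impulse response is h[n] = 0.5**n, and its transfer function
# (the z-transform of h) is H(z) = 1 / (1 - 0.5 * z**-1).

def impulse_response(n_samples):
    y_prev = 0.0
    h = []
    for n in range(n_samples):
        x = 1.0 if n == 0 else 0.0   # unit impulse input
        y = 0.5 * y_prev + x
        h.append(y)
        y_prev = y
    return h

h = impulse_response(200)
assert all(abs(h[n] - 0.5**n) < 1e-12 for n in range(200))

# Evaluating the transfer function on the unit circle (z = e^{iw}) must agree
# with the DTFT of the impulse response: same system, two descriptions.
w = 1.3
dtft_h = sum(hn * cmath.exp(-1j * w * n) for n, hn in enumerate(h))
H = 1 / (1 - 0.5 * cmath.exp(-1j * w))
assert abs(dtft_h - H) < 1e-10
```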
- -REPLY [4 votes]: The impulse response represents the system in the time domain, and the transfer function represents the system in the frequency domain. Essentially both are the same.<|endoftext|> -TITLE: Example of a relation that is symmetric and transitive, but not reflexive -QUESTION [16 upvotes]: Can you give an example of a relation that is symmetric and transitive, but not reflexive? -By definition, - -$R$, a relation in a set $X$, is reflexive if and only if $\forall x\in X$, $x\,R\,x$. -$R$ is symmetric if and only if $\forall x, y\in X$, $x\,R\,y\implies y\,R\,x$. -$R$ is transitive if and only if $\forall x, y, z\in X$, $x\,R\,y\land y\,R\,z\implies x\,R\,z$. - -I can give the relation $\leqslant$ on the set of real numbers as an example of a relation that is reflexive and transitive, but not symmetric. But I can't think of a relation that is symmetric and transitive, but not reflexive.
This is true even if equality of functions in the underlying set theory is more intensional – hence the use of this technique in type theory.<|endoftext|> -TITLE: Car parking related probability -QUESTION [6 upvotes]: A driver parks a car in a row of $25$ cars randomly at any place but not at the ends. After coming back he finds $10$ cars are gone, so what is the probability that both the neighbouring cars have gone? - -What I did $$\dfrac{{24\choose 8}}{{23\choose 1}{24\choose 10}}$$ ${24\choose 8}$ as we want two cars to go so we want to select only $8$ cars. And the driver can park in $23$ ways and cars can go in ${24\choose 10}$ ways. -But that doesn't yield the answer; what am I missing? Please give any hints using basic probability equations. - -REPLY [4 votes]: Let $L$ denote the event that his left neighbor has left and $R$ -the event that his right neighbor has left. Then: -$$P\left(L\cap R\right)=P\left(L\mid R\right)P\left(R\right)=\frac{9}{23}\frac{10}{24}$$ -The second factor because of the $24$ cars $10$ leave and all cars have equal chances to be one of them. The first factor likewise, but now of $23$ cars $9$ are selected to leave.<|endoftext|> -TITLE: Rotors/Quaternions: double reflection question -QUESTION [5 upvotes]: I am trying to learn/understand quaternions. I found this reference (among many others): -http://www.geometricalgebra.net/quaternions.html -It states (see attached screenshot of that page) that to rotate a vector around an axis by a given angle $\phi$ you can perform a double reflection. First project the vector $x$ onto the plane perpendicular to the rotation axis (you get $a$), and then rotate by angle $\phi/2$: you get vector $b$, and then perform another reflection to get the final rotated vector. -My question: "the result is that $x$ is rotated over an angle $\phi$ which is twice the angle between $a$ and $b$". This is the statement I don't understand! Looking at figure 7.2, the angle between $a$ and $b$ and the angle between the vector $x$ and its rotated version seem to be the same?
- -EDIT/SUGGESTION OF ANSWER -Rather than adding this as my own answer, I prefer to make an edit to my own question and suggest this as a potential answer. I would appreciate it if someone could confirm my intuition on this. -It seems like figure 7.2 is actually slightly misleading (because it is not easy to read properly). After reading more on bi-vectors and what seems related to Clifford algebra, my understanding is that rotations are best expressed as a combination of 2 vectors defined in the rotation plane (that is, the plane perpendicular to the rotation axis), and the angle between these 2 vectors is half the angle of rotation we actually wish to apply (and this construct is what we call a rotor, is that correct?). In other words, we "encode" the amount of rotation we wish to apply within the angle subtended by the two vectors lying in a plane (the rotation plane) perpendicular to the rotation axis. -This is my understanding of this problem/question, but I would appreciate it if a mathematician could confirm this and eventually formalize it. - -REPLY [2 votes]: Your understanding is correct. Given two unit vectors $a$ and $b$, their geometric product $R = a b$ is a rotor. The rotor is: -$R = a b = a \cdot b + a \wedge b$ -$R = \cos \theta + I \sin \theta$ -where $I = \frac{a \wedge b}{\|a \wedge b\|}$. When $R$ is applied to a blade $X$ using the versor product $R X \tilde R$, the total angle of rotation is $2 \theta$. For convenience the rotor is defined with half the angle, $\theta/2$, so that when the versor product is applied the total rotation angle amounts to $\theta$.
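This half-angle bookkeeping can be sanity-checked numerically with ordinary quaternions (a self-contained sketch; the quaternion product is written out by hand, and the axis and angle are arbitrary choices):

```python
import math

# Quaternions as tuples (a, b, c, d) ~ a + b*i + c*j + d*k.
def qmul(p, q):
    # Hamilton product of two quaternions.
    a1, b1, c1, d1 = p
    a2, b2, c2, d2 = q
    return (a1*a2 - b1*b2 - c1*c2 - d1*d2,
            a1*b2 + b1*a2 + c1*d2 - d1*c2,
            a1*c2 - b1*d2 + c1*a2 + d1*b2,
            a1*d2 + b1*c2 - c1*b2 + d1*a2)

def rotate(v, q):
    # Sandwich (versor) product q v q* applied to the pure quaternion v.
    q_conj = (q[0], -q[1], -q[2], -q[3])
    x, y, z = v
    r = qmul(qmul(q, (0.0, x, y, z)), q_conj)
    return r[1], r[2], r[3]

theta = 1.1  # desired total rotation angle about the z-axis (arbitrary)
# The rotor/quaternion carries HALF the angle:
q = (math.cos(theta/2), 0.0, 0.0, math.sin(theta/2))
vx, vy, vz = rotate((1.0, 0.0, 0.0), q)
# The sandwich product rotates by the FULL angle theta:
assert abs(vx - math.cos(theta)) < 1e-12
assert abs(vy - math.sin(theta)) < 1e-12
assert abs(vz) < 1e-12
```

Carrying $\theta/2$ in the quaternion and applying the sandwich product lands $(1,0,0)$ on $(\cos\theta,\sin\theta,0)$, i.e. a rotation by the full angle $\theta$.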
-It can be seen algebraically by expanding the versor product: -$R X \tilde R = (\cos \theta + I \sin \theta) X (\cos \theta - I \sin \theta)$ -$R X \tilde R = \cos^2\theta \ X + (\cos \theta \sin \theta)( -X I + I X ) - \sin^2\theta \ I X I $ -Using the fact that $- X I + I X = -2 X \cdot I$ -$R X \tilde R = \cos^2\theta \ X - 2(\cos\theta \sin\theta) X \cdot I - \sin^2\theta \ I X I$ -It is convenient for our purpose to express the reflection $I X I$ of the blade $X$ with respect to the dual of the bivector $I^* = I e_{321}$ instead of $I$, so we get $I X I = (-1)^k I^* X I^* = -2 I^* (I^* \cdot X) + X$, where $k$ is the grade of the blade $X$. So we get: -$R X \tilde R = (\cos^2\theta - \sin^2\theta) \ X - 2(\cos\theta \sin\theta) \ X \cdot I + 2 \sin^2\theta \ I^* (I^* \cdot X)$ -Applying the following trigonometric identities: -$\cos 2\theta = \cos^2\theta - \sin^2\theta$ -$\sin 2\theta = 2 \cos\theta \sin\theta$ -$1-\cos 2\theta = 2 \sin^2\theta$ -we finally get: -$R X \tilde R = \cos 2\theta \ X - \sin 2\theta \ X \cdot I + (1 - \cos 2\theta) \ I^* (I^* \cdot X)$ -This is the Geometric Algebra version of Rodrigues' formula. As can be seen, the versor product produces a rotation with a total angle of rotation of $2 \theta$. That is why the convention is to take angle $\theta/2$.<|endoftext|> -TITLE: Does the Morse homology depend on the orientation? -QUESTION [5 upvotes]: Before asking my question I need to define some objects. I will follow the book "M. Audin, M. Damian - Morse theory and Floer homology", but the terminology is quite standard: -Let $M$ be a smooth compact manifold and consider a Morse-Smale pair $(X,f)$ on $M$ ($X$ is a gradient-like vector field and $f$ is an adapted Morse function). If $a,b$ are two critical points of $f$, we indicate with $\mathcal L(a,b)$ the manifold such that every point is a trajectory of $X$ ''starting'' from $a$ and ''ending'' in $b$.
One can show that if $\text{ind}(a)=\text{ind}(b)+1$ then $\mathcal{L}(a,b)$ is a finite set. Moreover, if we orient the stable manifold $W^s(a)$ (remember that it is a disk), we induce an orientation on $\mathcal L(a,b)$, namely we associate $\pm 1$ to each point when $\text{ind}(a)=\text{ind}(b)+1$. -At this point one can define the Morse-Smale complex: -$$C_k:=\sum_{a\in\text{Crit}_k(f)}\mathbb Za$$ -where clearly $\text{Crit}_k(f)$ is the set of critical points of index $k$. The map $d_k:C_k\longrightarrow C_{k-1}$ acts on the generators of $C_k$ in the following way: -$$d_k(a)=\sum_{b\in\text{Crit}_{k-1}(f)}N(a,b)b$$ -where $N(a,b)\in\mathbb Z$ is the sum of the $\pm 1$ (the orientations) attached to the points of $\mathcal L(a,b)$. - -Question: From the above construction it is evident that the Morse-Smale complex (in particular the number $N(a,b)$) depends on the - orientation that we fix on the stable manifolds $W^s(a)$. This sounds - very strange to me, indeed I'd expect a complex independent of the - orientation. Maybe by passing to the homology group one can recover - the independence but I can't see it. - -Many thanks. - -REPLY [2 votes]: If you change the orientation on $W^s(a)$ then an isomorphism from the old complex to the new is given by sending the generator $a$ of $C_k$ to $-a$ (and acting as the identity on all other critical points).<|endoftext|> -TITLE: Kähler metrics on the coadjoint orbits of a compact Lie group -QUESTION [10 upvotes]: Let $G$ be a compact Lie group with Lie algebra $\mathfrak{g}$. It is well-known that each orbit for the coadjoint representation of $G$ on $\mathfrak{g}^*$ carries a canonical symplectic structure, known as the Kirillov-Kostant-Souriau symplectic form. -Moreover, I've read at a few different places that the coadjoint orbits are also Kähler manifolds: - -Theorem. Let $G$ be a compact Lie group, $\mathcal{O}$ a coadjoint orbit and $\omega$ its Kirillov-Kostant-Souriau symplectic form. 
Then, there exists a unique $G$-invariant Kähler metric on $\mathcal{O}$ that is compatible with $\omega$. - -For example, this result is mentioned in Robert Bryant's lecture notes An Introduction to Lie Groups and Symplectic Geometry on page 150, and at the beginning of this paper by Kronheimer. -However, I didn't find any proof of that theorem. Does someone know how to prove it or can point to a good reference? -According to Bryant, it is "not hard" to prove it "using roots and weights". But I wasn't able to do so. - -REPLY [2 votes]: In fact, if $M$ is a compact Kähler manifold, then its symplectic quotient is also a Kähler manifold. So, because $T^*G$ is a Kähler manifold, the coadjoint orbit is also Kähler, for the following reason. -We have the useful decomposition $G^{\mathbb C}\cong G\times \mathfrak g^{*}\cong T^*G$. -If we take $\mu:T^*G\to \mathfrak g^*$ as the momentum map, then $\mu^{-1}(\lambda)=G$, so -$$\mu^{-1}(\lambda)/G_\lambda\cong G/G_\lambda\cong \mathcal O_\lambda$$ -so we need to take $M=T^*G$. -As far as I know this method is from Mostow<|endoftext|> -TITLE: Is the function $d(x,y) = \frac{\|x-y\|}{\|x\|\|y\|}$ a metric? -QUESTION [11 upvotes]: $d$ is defined for all $x,y \in \mathbb{R}^2 - \{0\}$. -It's clear that $d(x,y) = 0 \iff x=y$ and $d(x,y)=d(y,x)$. -I am having issues with the triangle inequality. I couldn't find a counterexample for which the triangle inequality doesn't hold. So I tried to prove it. -What I have so far is: -$$d(x,z) = \frac{\|x-z\|}{\|x\|\|z\|} \leq \frac{\|x-y\|}{\|x\|\|z\|} -+ \frac{\|y-z\|}{\|x\|\|z\|} $$ -I'm stuck here. -I appreciate it if you could give me some hints. -Thanks. - -REPLY [4 votes]: $\newcommand{\Reals}{\mathbf{R}}$Identify the set of non-zero vectors in $\Reals^{2}$ with the set of non-zero complex numbers. 
The Euclidean norm corresponds with the complex modulus, so if $x$ and $y$ are non-zero, then -$$ -\frac{\|x - y\|}{\|x\|\, \|y\|} - = \left\|\frac{x - y}{xy}\right\| - = \left\|\frac{1}{y} - \frac{1}{x}\right\|. -$$ -That is, $d$ corresponds to the ordinary Euclidean distance after a bijection (the complex reciprocal map), and therefore satisfies the triangle inequality.<|endoftext|> -TITLE: What do we know about $\sum_\limits{n=0}^{\infty} \frac{(-1)^n}{kn+1}$? -QUESTION [6 upvotes]: Let define, for $k \ge 1$ : $$ f(k) = \sum_\limits{n=0}^{\infty} \frac{(-1)^n}{kn+1}. $$ -It is well-known that $f(1) = \ln(2), f(2) = \pi/4$. Some computations on WolframAlpha led me to $f(3) =1/9 (\sqrt3 \pi+\ln(8))$, $f(4) = (\pi+2 \ln(1+\sqrt2))/(4 \sqrt2)$ and also (if I'm not mistaken) : -$$ f(5) = 1/b \cdot \Big(\frac{8\sqrt2}{\sqrt a} \;\pi \;-\; 6 (\sqrt5 - 1)\ln(2) \;+\; 2 (3-\sqrt5)\ln(\sqrt 5 + 1)\\ - 4 \ln(\sqrt5 - 1) -\;+\; (\sqrt5 - 5)\ln\Big( \frac{\bar a}{a} \Big) \Big)$$ -with $a = 5+\sqrt5, \bar a = 5-\sqrt5,b=20(\sqrt 5 - 1)$. -Then, I would like to ask the following : is it true that (for general $k \geq 1$) $f(k) \in \overline{ \mathbb Q} (A)$ where $A = \{ \pi \} \cup \{ \ln(x) \mid x \in \overline{ \mathbb Q} \cap \mathbb R \}$, as it seems to be the case for small values of $k$ ? -Are there some available results on these series? -I looked at some special functions : this result on the digamma function is related to my question. I don't know if it is possible to use this result in order to compute $f(k)$. -Any comment or answer would be appreciated! 
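For what it's worth, the closed forms above are easy to test numerically; here is a rough sketch (`f_num` is a name I made up, and averaging two consecutive partial sums is just a cheap accelerator for this slowly converging alternating series):

```python
import math

def f_num(k, N=100_000):
    # S_N = sum_{n=0}^{N-1} (-1)^n / (k n + 1)
    s = 0.0
    for n in range(N):
        s += (-1) ** n / (k * n + 1)
    s_next = s + (-1) ** N / (k * N + 1)
    # for an alternating series the true value lies between S_N and S_{N+1}
    return (s + s_next) / 2

# agrees with f(1) = ln 2, f(2) = pi/4, f(3) = (sqrt(3) pi + ln 8)/9
```

The averaged partial sums match the small closed forms above to well below $10^{-8}$.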
- -REPLY [4 votes]: We have: -$$ f(k) = \sum_{n\geq 0}(-1)^n \int_{0}^{1} x^{kn}\,dx = \int_{0}^{1}\frac{dx}{1+x^k} $$ -and the last integral can be easily computed through the residue theorem, since $\frac{1}{1+x^k}$ has simple poles at $\zeta_j = \exp\left(\frac{\pi i}{k}(2j-1)\right)$ for $j=1,2,\ldots,k$ with residues given by: -$$ \text{Res}\left(\frac{1}{1+x^k},x=\zeta_j\right) = \frac{\zeta_j}{k \zeta_j^{k}}=-\frac{\zeta_j}{k}.$$ -Since $\int_{0}^{1}\frac{dx}{x-\zeta_j} = \log\left(1-\frac{1}{\zeta_j}\right)$, we get: - -$$ f(k) = -\frac{1}{k}\sum_{j=1}^{k}\zeta_j \log\left(1-\frac{1}{\zeta_j}\right) $$ - -and that can be further simplified by pairing the terms coming from conjugate roots, leading to the $\log\cos$ contributions mentioned in the comments.<|endoftext|> -TITLE: Is there an irreducible projective hypersurface such that its complement has zero Euler characteristic? -QUESTION [9 upvotes]: We know that, if $f=X_0X_1...X_n \in \mathbb{C}[X_0,...,X_n]$ and $Z(f)\subset \mathbb{CP}^n$, then the Euler characteristic of its complement is zero, i.e. -$$ -\chi(\mathbb{CP}^n\setminus Z(f))=0. -$$ -But $f$ is not irreducible. -Let $Z\subset \mathbb{CP}^n$ be a smooth, irreducible hypersurface. Then, we know that -$$ -\chi(Z)=\frac{1}{d}((1-d)^{n+1}-1)+n+1, -$$ -where $d$ is the degree of $Z$. -In particular, if $g=X_0^2+...+X_3^2 \in \mathbb{C}[X_0,...,X_3]$, we have -$$ -\chi(Z(g))=\frac{1}{2}((1-2)^4-1)+4=4, -$$ -then $\chi(\mathbb{CP}^3\setminus Z(g))=0$, since $\chi(\mathbb{CP}^3)=4$. -So I ask: is there an irreducible homogeneous polynomial $h \in \mathbb{C}[X_0,...,X_n]$ such that $\deg h>2$ and $\chi(\mathbb{CP}^n\setminus Z(h))=0$? -Remark: this is not possible if $Z(h)$ is smooth (with $\deg h>2$). - -REPLY [3 votes]: Let $S = \{ x_1^j x_2^{d-j} = x_3^k x_4^{d-k} \}$ in $\mathbb{P}^3$, with $0 < j,k < d$ and $GCD(j,k,d)=1$. I claim that $S$ is irreducible and $\chi(S)=4$ and hence $\chi(\mathbb{P}^3 \setminus S)=0$. 
-Let $T =\{ x_1 x_2 x_3 x_4 \neq 0 \} \subset \mathbb{P}^3$, this is isomorphic to $(\mathbb{C}^{\ast})^3$. Since $GCD(j,k,d)=GCD(j,k,d-j,d-k)=1$, the locus in $T$ where $x_1^j x_2^{d-j} x_3^{-k} x_4^{-(d-k)}=1$ is isomorphic to $(\mathbb{C}^{\ast})^2$, with Euler characteristic $0$. So $\chi(S \cap T) =0$ and thus $\chi(S) = \chi(S \setminus T)$. -But (using the assumptions $0 < j,k < d$) the complement $S \setminus T$ is just four $\mathbb{P}^1$'s: $\{x_1=x_3=0\}$, $\{x_1=x_4=0\}$, $\{x_2=x_3=0\}$ and $\{x_2=x_4=0\}$, and this has Euler characteristic $4$. Also, $T \cap S$ is obviously irreducible and is dense in $S$, so $S$ is irreducible.<|endoftext|> -TITLE: How many acute triangles can be formed by 100 points in a plane? -QUESTION [10 upvotes]: Given 100 points in the plane, no three of which are on the same line, consider all triangles that have all their vertices chosen from the 100 given points. -Prove that at most 70% of those triangles are acute-angled. - -REPLY [8 votes]: This is from the 1970 International Mathematical Olympiad. You can find the questions here, and the (rather easy) solution to this question here. -It was one of the homework questions for aspirants to the British team for IMO 1978 in Bucharest. I managed to prove an explicit upper bound on the maximum possible proportion of triangles in $n$ points that tends to $2/3$ as $n$ tends to $\infty$. I can't remember what it was, but it is not too hard to work out, using the process described in the following paragraphs. -The proof of the simpler question goes like this: first prove geometrically that every $4$-set (i.e. set of $4$ points) must contain a non-acute triangle, so that at most $75$% of triangles in a $4$-set can be acute. Then argue combinatorially that if you have shown that at most a proportion $p$ of triangles in an $n$-set can be acute, it follows that at most a proportion $p$ of the triangles in an $(n+1)$-set can be acute. 
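The combinatorial step can be made concrete with a short sketch (the function name is mine): since each triangle of an $(i+1)$-point set lies in $i-2$ of its $i$-point subsets, an upper bound $s_i$ for $i$-sets propagates to $s_{i+1} = \lfloor (i+1) s_i / (i-2) \rfloor$, starting from $s_4 = 3$:

```python
from math import comb

def acute_bound(n):
    # upper bound on the number of acute triangles among n points (n >= 4):
    # s_4 = 3; each triangle of an (i+1)-set lies in i-2 of its i-subsets,
    # so s_{i+1} = floor((i+1) * s_i / (i-2))
    s = 3
    for i in range(4, n):
        s = (i + 1) * s // (i - 2)
    return s

bounds = {n: (acute_bound(n), comb(n, 3)) for n in (5, 6, 7)}
```

For $n=5$ this gives $7$ of the $\binom{5}{3}=10$ triangles, the $70\%$ of the problem, and the ratio $s_n/\binom{n}{3}$ never increases afterwards.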
-So we immediately get that at most $7\frac12$ of the $10$ triangles in a $5$-set can be acute; and because the number of acute triangles must be an integer, we can bring this down to $7$ out of $10$, i.e. $70$%. -But there is no reason to stop there. The same process gains nothing when passing to $6$-sets, because $70$% of $20$ is an integer; but passing to $7$-sets gives us $70$% of $35$, which is $24\frac12$, which we can bring down to $24$. -So we can define an integer sequence $(s_i)$ by $s_4 = 3$, and $s_{i+1} = \left\lfloor \dfrac{(i+1)s_i}{i-2}\right\rfloor$. This gives us an upper bound on the maximum possible number of acute triangles in an $n$-set. I seem to remember that it is a cubic in $n$, whose exact form depends on $n$ mod $3$. And the proportion $s_n/\binom{n}{3}$ tends to $2/3$. -Note that $s_n$ is not necessarily the maximum possible number of acute triangles in an $n$-set. It just gives us an upper bound. In fact I think that this upper bound can't be attained for large enough $n$ ($n > 6$ perhaps?). I played around with this a bit, but after all it was $37$ years ago, so the details are blurred.<|endoftext|> -TITLE: Divisors of degree $2g-2$ on a hyperelliptic curve of genus $g$ -QUESTION [8 upvotes]: Suppose I have a divisor $D$ of degree $2g-2$ on a hyperelliptic curve of genus $g$. Then I can prove that either -a) $K_C\otimes\mathcal{O}(-D)=\mathcal{O}_C$, that is $K_C\cong \mathcal{O}(D)$, or -b) $K_C\otimes\mathcal{O}(-D)\neq \mathcal{O}_C$, but is a non-trivial degree 0 line bundle. Hence $h^0(C, K_C\otimes\mathcal{O}(-D))=0$. Hence $h^0(\mathcal{O}(D))=2g-2+1-g=g-1$, by the Riemann-Roch formula. -I am looking for an example of the following type. Say genus $g=2$. Let $i$ be the hyperelliptic involution. Can we find a divisor $D$ of the form $D=P+i(P)$ which satisfies (b)? That is, I want $h^0(C, K_C\otimes\mathcal{O}(-D))=0$. Is this possible? 
Or more generally, if $C$ is a hyperelliptic curve of genus $g$, is it possible to find a divisor $D=\Sigma_{j=1}^{g-1} p_j+\Sigma_{j=1}^{g-1} i(p_j)$ of degree $2g-2$ which satisfies b), or are all such divisors linearly equivalent to $K_C$? -I will be thankful for an example of the above type. Thanks in advance! - -REPLY [6 votes]: As Mohan suggests, (b) cannot occur. Indeed, if $D$ is an effective divisor of degree $g-1$ on a hyperelliptic curve $C$, then $D + i(D)$ is a canonical divisor. Recall that the hyperelliptic class $H$ of $C$ is the divisor class of a fibre of the degree-$2$ map $C \to \mathbb{P}^1$, so it is represented by a point $P \in C$ plus its hyperelliptic involution $i(P)$. By Riemann-Roch, the canonical class $K_C$ is $(g-1)H$. Therefore, if $D = P_1 + \ldots + P_{g-1}$, then -$$ -D + i(D) = (P_1 + i(P_1)) + \ldots + (P_{g-1} + i(P_{g-1})) = (g-1)H = K_C. -$$ -$$ -\textrm{} -$$ -Edit: I've added an explanation of why the canonical class of a hyperelliptic curve is $g-1$ times the hyperelliptic class (the argument can also be found on this wiki). -Let $P \in C$ be a ramification point of the canonical map $C \to \mathbb{P}^{g-1}$, then the divisor class of $2P$ is the hyperelliptic class. -Take the short exact sequence for $2P$, twist by $\mathcal{O}_C(2kP)$ for any $k \geq 1$, and take global sections to get $h^0(C,2(k+1)P) \geq h^0(C,2kP) + 1$. In particular, $h^0(C,(2g-2)P) \geq g$. (We have not yet used anything about the point $P$.) -Now, Riemann-Roch says that -$$ -h^0(C, (2g-2)P) - h^0(C,K_C - (2g-2)P) = \deg((2g-2)P) -g + 1 = g-1, -$$ 
That is, $K_C$ is $g-1$ times the divisor class of $2P$.<|endoftext|> -TITLE: Prove that $\sqrt{x_1}+\sqrt{x_2}+\cdots+\sqrt{x_n} \geq (n-1) \left (\frac{1}{\sqrt{x_1}}+\frac{1}{\sqrt{x_2}}+\cdots+\frac{1}{\sqrt{x_n}} \right )$ -QUESTION [17 upvotes]: Let $x_1,x_2,\ldots,x_n > 0$ such that $\dfrac{1}{1+x_1}+\cdots+\dfrac{1}{1+x_n}=1$. Prove the following inequality. -$$\sqrt{x_1}+\sqrt{x_2}+\cdots+\sqrt{x_n} \geq (n-1) \left (\dfrac{1}{\sqrt{x_1}}+\dfrac{1}{\sqrt{x_2}}+\cdots+\dfrac{1}{\sqrt{x_n}} \right ).$$ - -This is Exercise 1.48 in the book "Inequalities - A Mathematical Olympiad Approach". -Attempt -I tried using HM-GM and I got $\left ( \dfrac{1}{x_1x_2\cdots x_n}\right)^{\frac{1}{2n}} \geq \dfrac{n}{\dfrac{1}{\sqrt{x_1}}+\dfrac{1}{\sqrt{x_2}}+\cdots+\dfrac{1}{\sqrt{x_n}}} \implies \dfrac{1}{\sqrt{x_1}}+\dfrac{1}{\sqrt{x_2}}+\cdots+\dfrac{1}{\sqrt{x_n}} \geq n(x_1x_2 \cdots x_n)^{\frac{1}{2n}}$. But I get stuck here and don't know if this even helps. - -REPLY [10 votes]: Rewrite our inequality in the following form: -$\sum\limits_{i=1}^n\frac{x_i+1}{\sqrt{x_i}}\sum\limits_{i=1}^n\frac{1}{x_i+1}\geq n\sum\limits_{i=1}^n\frac{1}{\sqrt{x_i}}$, -which is Chebyshov.<|endoftext|> -TITLE: How to explain "why study prime numbers" to 5th Graders? -QUESTION [9 upvotes]: I tend to teach 5th graders math every so often just so they can be "friendly" with math in a playful manner, instead of being afraid. -However, one question that I constantly struggle with is this: Why should we care if a number is prime or not? -Coming from a computing background I try my best to explain to them the use of prime numbers in cryptography and how primes are related to factorization (in as kid-friendly an explanation as possible). However, they "sigh" and move on with the belief that I'm telling the truth. But, they still don't seem to get excited about it. -Answers for questions like these: Real-world applications of prime numbers? don't seem to be well suited for 5th graders. 
-What are some interesting ways/examples that one can use to help 5th graders understand why the study/knowledge of primes is useful? Bonus if they can "see the use" sooner in their 10 year old lifespan instead of waiting till college. -I'm okay even conjuring "games" to help them learn/understand. For example, currently I'm trying to use something like Diffie Hellman Key Exchange to make a game for them to encode messages and see if "eavesdroppers" (i.e., other students) can guess the message. Something on the lines of Alice wants to send Bob a number that she's thinking about. Other students have to guess what that number is. The number can be 'encoded' (loosely speaking) as a manipulation of numbers similar to the key exchange protocol, but that 5th graders can play around with. Hopefully the 'decoding' process shows them why it's better to choose primes. However, this could be rather abstract. That's the best I've got for now. -Any other ideas? - -REPLY [8 votes]: I may be a bit jaded, but I don't think there are a whole lot of applications that will impress someone 10 years old. I tell college students (in a class for future educators, no less!) that their ability to shop safely online depends on prime numbers, and hardly get a reaction. -So, I'd take a different approach: Mystery and intrigue. -It's hard for anyone who hasn't studied math seriously to understand what mathematics is about, but most people believe we have it pretty well figured out (and if not, the Bigger and Better Computer of Tomorrow (TM) will surely have it all straightened out in a few years, right?). Which is why it should be surprising that we really don't understand prime numbers very well! -That's a bit of a stretch, of course. We know a lot about prime numbers, but the biggest (or at least most famous) open question in all of mathematics, the Riemann Hypothesis (I wouldn't even mention the name, let alone give any details!) is a belief about prime numbers. 
Another big-hitter (again, fame-wise; I can't fathom why it would be important to anyone) is the Goldbach Conjecture, another belief concerning prime numbers. This one could easily be stated to 5th graders, and they could verify that it's true for any numbers they pick out. -If the million dollar bounty for the Riemann Hypothesis is still in effect and you knew everything there was to know about prime numbers, you'd walk away with a cool million dollars! That's how much more we want to know about prime numbers, because we just don't know certain things! -The point is this. It's easy to define a prime number, and easy to work with prime numbers. But when we start asking certain questions, we just don't know. Nobody does. A handful of incredible mathematicians know a bit more than most, but even the most well-informed people on earth only know incrementally more than your students, when it comes to prime numbers. (Again: obviously a stretch. This really applies to isolated statements about prime numbers, but we're trying to sell here, not be pedantic). - -I'll also mention that when we talked about the Sieve of Eratosthenes for finding prime numbers (again in the class for future educators), I remarked that this method is over two-thousand years old (older than many popular Western religions). Fast forward to now, and our best methods for listing all prime numbers in a certain range are only incrementally better. Cooler still, these better methods all use this basic sieving technique at their core! So we're better at listing primes because we're better at sieving, but not that much better -- in two thousand years! -Your 5th graders could easily sieve, and use the primes they find to verify Goldbach's Conjecture for tons of numbers. They'd be playing the game of mathematics then, getting their hands dirty in a completely self-sufficient way. And it can be phrased as a challenge: "I bet you can't write 138 as a sum of two primes!" 
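If the class (or the teacher) wants to check such challenges by machine, the sieve and the Goldbach search fit in a few lines; here is a quick sketch, with function names of my own choosing:

```python
def primes_up_to(n):
    # the Sieve of Eratosthenes -- essentially the 2000-year-old method
    is_prime = [True] * (n + 1)
    is_prime[0] = is_prime[1] = False
    for p in range(2, int(n ** 0.5) + 1):
        if is_prime[p]:
            for m in range(p * p, n + 1, p):
                is_prime[m] = False
    return [p for p, flag in enumerate(is_prime) if flag]

def goldbach_pairs(n):
    # every way of writing an even n as a sum of two primes p + q with p <= q
    ps = set(primes_up_to(n))
    return [(p, n - p) for p in sorted(ps) if p <= n - p and n - p in ps]
```

In particular `goldbach_pairs(138)` is non-empty (it contains, e.g., $(7, 131)$), so the challenge is indeed winnable.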
-So the big moral of the story is that, to mathematicians, primes are mysterious, shiny objects. I wouldn't focus on their shininess, only the mystery. They have such mysterious facets that, in some ways, we're not much better at understanding them than we were thousands of years ago.<|endoftext|> -TITLE: Theta-space is a deformation retraction of the doubly-punctured plane, how to find equations. -QUESTION [8 upvotes]: That theta space is given by $S^1\cup(0\times[-1,1]) \subset\mathbb{R}^2$ it is said that this space is a deformation retract of the doubly punctured plane, here is the explanation I found: - -The first one I think is ok if you want to find the equations, if you assume that p and q are at the points 1 and -1. You can define the deformation retraction like this: -$(x_1,x_2)(1-t)+2(x_1,x_2)/\|x\|t, \|x\| \ge 2$ -$x, \|x\|\le2$. We can see that it is continuous by the pasting lemma, it is well defined, and we have a homotopy between the identity map and the disc of radius 2, so it is a deformation retraction. -But what about the last one? Is it difficult to find the equations? I have seen many cases where they just use intuition here. But there are many things that has to be shown, - -That the deformation leaves $S^1\cup(0\times[-1,1])$ fixed(this is clear from the picture) -That the deformation si continuous(not that clear) -That the deformation is homotpic with the identity map, and that it leaves the theta space fixed during this homomorphism. How is this just "seen" from the picture? - -Could these things be "seen" or "felt" intuitively? And do you know how to construct the homotpy explicitely in the last case? I am not sure how to find the equations here. Maybe they are too messy? - -REPLY [2 votes]: I will try to answer your questions one by one with the help of certain diagrams that I have made. Before I begin, I would like to tell you that I am not an expert on this subject. In fact, I have just started studying this. 
So, any suggestions or methods other than the one I am going to explain here are, indeed, welcome. -Since your question is mostly about the second deformation retraction, we will concentrate on that. First, we see what we want to do. We want to map every point of a disc with two holes to a point in $\mathbb{S}^1 \cup \left( \left\lbrace 0 \right\rbrace \times \left[ -1, 1 \right] \right)$. For the sake of simplicity, we will denote by $B_1 = \left\lbrace \textbf{x} \in \mathbb{R}^2 | \| \textbf{x} \| \leq 1 \right\rbrace \setminus \left\lbrace \left( \frac{1}{2}, 0 \right), \left( - \frac{1}{2}, 0 \right) \right\rbrace$. This is the unit disc with two points removed. Notice that you can do the same analysis that I will be doing by considering a disc of radius $2$ with two points removed (as you have mentioned in the beginning of your question). -As shown in the figure, we want to construct a function which brings the blue shaded region of $\mathbb{R}^2$ to the green shaded region. - -In the figure, the two red dots denote the removed points. In our case they are $\left( \frac{1}{2}, 0 \right)$ and $\left( -\frac{1}{2}, 0 \right)$. To understand how to construct a function as desired, let us first concentrate on the blue region alone. -From the figures we have been provided as intuitions, it is clear that we want to create a function using straight lines. That is, if there is a point $\left( x, y \right)$ in the right half-plane and inside $B_1$, then we want to look at the line joining $\left( x, y \right)$ and $\left( \frac{1}{2}, 0 \right)$. Then, we would want to map this point $\left( x, y \right)$ to the intersection of the line with the theta space. -This intersection can happen in two cases: one where the line intersects $\mathbb{S}^1$ and the other, where it intersects the line segment $\left\lbrace 0 \right\rbrace \times \left[ -1, 1 \right]$. We want the intersection that happens first. That is, we would like to study two classes of lines. 
I have denoted them in the figure by orange and purple colours. The orange lines are those which intersect the line segment $\left\lbrace 0 \right\rbrace \times \left[ -1, 1 \right]$ and the purple lines are those which intersect the circle $\mathbb{S}^1$. - -Let us look at the orange lines first. Of all the possible orange lines, two of them intersect the theta space at $\left( 0, 1 \right)$ and two intersect the theta space at $\left( 0, -1 \right)$. These four line segments make a quadrilateral. Anything inside this quadrilateral will have an intersection of its line with the line segment $\left\lbrace 0 \right\rbrace \times \left[ -1, 1 \right]$, and anything outside of this quadrilateral will have the intersection with $\mathbb{S}^1$. -Formally, these four lines are given by the equations: -$$y = 1 - 2x, y = 2x - 1, y = 2x + 1, y = -2x - 1.$$ -I have written them in the order: first quadrant, fourth quadrant, second quadrant, third quadrant. -So, it means that for a point $\left( x, y \right) \in B_1$ with $0 \leq y \leq 1 - 2x$ or $2x - 1 \leq y \leq 0$ or $0 \leq y \leq 2x + 1$ or $-2x - 1 \leq y \leq 0$, we would expect the line joining $\left( x, y \right)$ with $\left( \frac{1}{2}, 0 \right)$ or $\left( - \frac{1}{2}, 0 \right)$ (depending on the sign of $x$) to intersect the line segment $\left\lbrace 0 \right\rbrace \times \left[ -1, 1 \right]$. -First, we consider the case $x \geq 0$. The line joining $\left( x, y \right)$ with $\left( \frac{1}{2}, 0 \right)$ is given by the equation: -$$y' = \left( \dfrac{y}{x - \frac{1}{2}} \right) \left( x' - \dfrac{1}{2} \right).$$ -Here, we assume that $x \neq \frac{1}{2}$. We will consider this case later. Now, whenever $0 \leq y \leq 1 - 2x$ or $2x - 1 \leq y \leq 0$, this line intersects with $\left\lbrace 0 \right\rbrace \times \left[ -1, 1 \right]$. The point of intersection will be $\left( 0, \frac{-y}{2x - 1} \right)$. 
Thus, for the points $\left( x, y \right)$ in the right region of the orange quadrilateral, we get a point $\left( 0, \frac{-y}{2x - 1} \right)$ in the theta space. -A similar computation tells us that the points $\left( x, y \right)$ in the left half region of the orange quadrilateral should be mapped to $\left( 0, \frac{y}{2x + 1} \right)$. -Now, let us see what happens when $x = \frac{1}{2}$. Since $\left( \frac{1}{2}, 0 \right)$ is not included in $B_1$, any point of the form $\left( \frac{1}{2}, y \right)$ will lie outside the orange quadrilateral. Therefore, we now look at the intersection of the corresponding line with $\mathbb{S}^1$. On the line $x = \frac{1}{2}$, the intersection will be at $\left( \frac{1}{2}, \pm \frac{\sqrt{3}}{2} \right)$, depending on the sign of $y$. -Similar things can be said about $x = - \frac{1}{2}$. See the figure for a better understanding. Now, let us only look at the case when $x \neq \pm \frac{1}{2}$ and $\left( x, y \right)$ lies outside the orange region. Let us first consider the right half plane. Now, we have to look at the intersection of the line $y' = \left( \dfrac{y}{x - \frac{1}{2}} \right) \left( x' - \dfrac{1}{2} \right)$ with $\mathbb{S}^1$. Upon calculation, we get a quadratic equation in $\left( x' - \dfrac{1}{2} \right)$, whose solution(s) are given by -$$x' - \dfrac{1}{2} = \dfrac{-1 \pm \sqrt{1 + 3 \left( 1 + \left( \frac{y}{x - \frac{1}{2}} \right)^2 \right)}}{2 \left( 1 + \left( \frac{y}{x - \frac{1}{2}} \right)^2 \right)}$$ -Now, $x' \geq 0$ should hold (otherwise the line would first intersect the segment $\left\lbrace 0 \right\rbrace \times [-1, 1]$, which is not desired for now). 
Therefore, for the purple points, we get the point -$$\left( \dfrac{-1 \pm \sqrt{1 + 3 \left( 1 + \left( \frac{y}{x - \frac{1}{2}} \right)^2 \right)}}{2 \left( 1 + \left( \frac{y}{x - \frac{1}{2}} \right)^2 \right)} + \dfrac{1}{2}, \left( \dfrac{y}{x - \frac{1}{2}} \right) \left( \dfrac{-1 \pm \sqrt{1 + 3 \left( 1 + \left( \frac{y}{x - \frac{1}{2}} \right)^2 \right)}}{2 \left( 1 + \left( \frac{y}{x - \frac{1}{2}} \right)^2 \right)} \right) \right)$$ -Similar computations can be done for purple points in the left half part of $B_1$. So, the function $f: B_1 \rightarrow B_1$ which we wanted to construct is as follows: -$$f \left( x, y \right) = \begin{cases} -\left( 0, \frac{-y}{2x - 1} \right), & x \geq 0 \text{ and } \left( 0 \leq y \leq 1 - 2x \text{ or } 2x - 1 \leq y \leq 0 \right) \\ -\left( 0, \frac{y}{2x + 1} \right), & x \leq 0 \text{ and } \left( 0 \leq y \leq 2x + 1 \text{ or } -2x - 1 \leq y \leq 0 \right) \\ -\left( \frac{1}{2}, \frac{\sqrt{3}}{2} \right), & x = \frac{1}{2} \text{ and } y > 0 \\ -\left( \frac{1}{2}, - \frac{\sqrt{3}}{2} \right), & x = \frac{1}{2} \text{ and } y < 0 \\ -\left( - \frac{1}{2}, \frac{\sqrt{3}}{2} \right), & x = - \frac{1}{2} \text{ and } y > 0 \\ -\left( - \frac{1}{2}, - \frac{\sqrt{3}}{2} \right), & x = - \frac{1}{2} \text{ and } y < 0 \\ -\left( \dfrac{-1 \pm \sqrt{1 + 3 \left( 1 + \left( \frac{y}{x - \frac{1}{2}} \right)^2 \right)}}{2 \left( 1 + \left( \frac{y}{x - \frac{1}{2}} \right)^2 \right)} + \dfrac{1}{2}, \left( \dfrac{y}{x - \frac{1}{2}} \right) \left( \dfrac{-1 \pm \sqrt{1 + 3 \left( 1 + \left( \frac{y}{x - \frac{1}{2}} \right)^2 \right)}}{2 \left( 1 + \left( \frac{y}{x - \frac{1}{2}} \right)^2 \right)} \right) \right), & x \geq 0 \text{ and } x \neq \frac{1}{2} \text{ and } \left( y > 1 - 2x \text{ or } y < 2x - 1 \right) \\ -\left( \dfrac{-1 \pm \sqrt{1 + 3 \left( 1 + \left( \frac{y}{x + \frac{1}{2}} \right)^2 \right)}}{2 \left( 1 + \left( \frac{y}{x + \frac{1}{2}} \right)^2 \right)} - \dfrac{1}{2}, \left( \dfrac{y}{x + 
\frac{1}{2}} \right) \left( \dfrac{-1 \pm \sqrt{1 + 3 \left( 1 + \left( \frac{y}{x + \frac{1}{2}} \right)^2 \right)}}{2 \left( 1 + \left( \frac{y}{x + \frac{1}{2}} \right)^2 \right)} \right) \right), & \text{otherwise} -\end{cases}$$ -As you can see, writing the function explicitly is way too tedious and lengthy. However, with this analytical expression, one can satisfy themselves about the continuity of $f$. -As for your question about homotopy between the identity function and this particular function we have constructed, we look at the following. While constructing this function, we essentially saw lines and "moved" each point along a certain line until we reached the theta space. So, a natural way to deform the disc (with two points removed) into the theta space is that at the beginning, stay where you are and then start moving along the lines we have considered. We know that then each of these lines (segments) would then start at a point in $B_1$ and end at a point in $\mathbb{S}^1 \cup \left( \left\lbrace 0 \right\rbrace \times \left[ -1, 1 \right] \right)$. Use this intuition to define the homotopy as $H: B_1 \times \left[ 0, 1 \right] \rightarrow B_1$, -$$H \left( \textbf{x}, t \right) = \left( 1 - t \right) \textbf{x} + t f \left( \textbf{x} \right).$$ -Now, we can easily verify that this is indeed a homotopy between the identity map and the map $f$, we have constructed.<|endoftext|> -TITLE: Computing $\sum\limits_{n=1}^\infty\frac{\sin n}{n}$ with residues -QUESTION [8 upvotes]: I'm running into some error in computing the sum. Since $\dfrac{\sin n}{n}$ is even, I'm considering the function $f(z)=\dfrac{\pi\sin z\cot\pi z}{z}$ and the contour integral -$$\oint_\gamma \frac{\pi\sin z\cot\pi z}{z}\,\mathrm{d}z$$ -where $\gamma$ is a square centered at the origin surrounding the poles and extending off to $\infty$. 
-So I have the impression that the integral should be -$$0=\mathrm{Res}(f(z),0)+\sum_{k\in\mathbb{Z}\setminus\{0\}}\mathrm{Res}(f(z),k)$$ -where all the poles are simple. Since $\dfrac{\sin n}{n}$ is even, the second term is twice the sum over the positive integers. At $z=0$, the residue is $1$, so I'm left with -$$0=1+2\sum_{k\ge1}\mathrm{Res}(f(z),k)=1+2\sum_{k\ge1}\frac{\sin k}{k}$$ -but this would suggest the value of the sum is $-\dfrac{1}{2}$. I'm off by $\dfrac{\pi}{2}$, but I don't know where I went wrong. Am I wrong in assuming the integral disappears? -Apologies if this is a duplicate; all the questions I've run into involving this sum were just testing for convergence, not finding the exact value. - -REPLY [3 votes]: If you want the residue in some way, we can use the fact that the Mellin transform identity for harmonic sums with base function $g\left(x\right)$ is $$\mathfrak{M}\left(\sum_{n\geq1}\lambda_{n}g\left(\mu_{n}x\right),s\right)=\sum_{n\geq1}\frac{\lambda_{n}}{\mu_{n}^{s}}\mathfrak{M}\left(g\left(x\right),s\right)$$ so if we consider the sum $$\sum_{n\geq1}\frac{\sin\left(nx\right)}{nx} - $$ we have $$\lambda_{n}=1,\,\mu_{n}=n,\, g\left(x\right)=\frac{\sin\left(x\right)}{x} - $$ and so we observe that $$\mathfrak{M}\left(g\left(x\right),s\right)=\Gamma\left(s-1\right)\sin\left(\frac{1}{2}\pi\left(s-1\right)\right) - $$ and $$\sum_{n\geq1}\frac{\lambda_{n}}{\mu_{n}^{s}}=\zeta\left(s\right) - $$ so $$\sum_{n\geq1}\frac{\sin\left(nx\right)}{nx}=\frac{1}{2\pi i}\int_{\mathbb{C}}\Gamma\left(s-1\right)\sin\left(\frac{1}{2}\pi\left(s-1\right)\right)\zeta\left(s\right)x^{-s}ds=\frac{1}{2\pi i}\int_{\mathbb{C}}Q\left(s\right)x^{-s}ds - $$ now observe that $Q\left(s\right) - $ has poles only at $s=0,1 - $, due to the zeros of the sine and the zeta function, then by residue theorem 
$$\sum_{n\geq1}\frac{\sin\left(nx\right)}{nx}=\textrm{Res}_{s=1}\left(Q\left(s\right)x^{-s}\right)+\textrm{Res}_{s=0}\left(Q\left(s\right)x^{-s}\right)=\frac{\pi}{2x}-\frac{1}{2}$$ so finally take $x=1$.<|endoftext|> -TITLE: The set that only contains itself -QUESTION [12 upvotes]: Ignoring the axiom of regularity (and therefore the implication that "no set can contain itself"), would it be correct to state that the set that contains only itself is unique? -My argument is that if $x$ is such a set, then -$$ x = \{x\} = \{\{x\}\} = \{\{\{\cdots\}\}\} $$ -ad infinitum, which seems to be unique. - -REPLY [14 votes]: It need not be unique; in fact, you can weaken the axiom of foundation to allow either well-founded sets or sets of the form $x=\{x\}$. Sets of the latter form are called Quine atoms, and play the role of urelements. These are useful in set theory with atoms because they allow you to formulate a theory of sets-and-atoms without resorting to multi-sorted logic. -For more, see The Axiom of Choice by T. J. Jech.<|endoftext|> -TITLE: Problem with the ring $R=\begin{bmatrix}\Bbb Z & 0\\ \Bbb Q &\Bbb Q\end{bmatrix}$ and its ideal $D=\begin{bmatrix}0&0\\ \Bbb Q & \Bbb Q\end{bmatrix}$ -QUESTION [8 upvotes]: Let us consider the ring -$ -R:=\begin{bmatrix}\Bbb Z & 0\\ \Bbb Q & \Bbb Q\end{bmatrix} -$ -and its two-sided ideal -$ -D:=\begin{bmatrix}0 & 0\\ \Bbb Q & \Bbb Q\end{bmatrix} -$. -Let us then consider the free right $R$-module $F_R:=\bigoplus_{\lambda\in\Lambda}x_{\lambda}R$. -I must show that -$$ -\bigcap_{n\ge1}nF_R=\bigoplus_{\lambda\in\Lambda}x_{\lambda}D=F_RD\;\;. -$$ -I proved the first equality using the fact that $\bigcap_{n\ge1}nR=D$. -The second equality: observing that $D\unlhd R$ (i.e.
$D$ is a two-sided ideal of $R$) we have that $D=RD$, from which we would have -$$ -\bigoplus_{\lambda\in\Lambda}x_{\lambda}D -=\bigoplus_{\lambda\in\Lambda}x_{\lambda}RD -=\underbrace{\left(\bigoplus_{\lambda\in\Lambda}x_{\lambda}R\right)}_{=F_R}D -$$ -My problem is with the last equality in this last line: "$\supseteq$" is obvious. What I cannot prove is the other inclusion "$\subseteq$". -I tried writing the generic element of the LHS, say $\sum_{\lambda\in F}x_{\lambda}r_{\lambda}d_{\lambda}=x_1r_1d_1+\cdots+x_nr_nd_n$, for some finite $F\subseteq\Lambda,\;|F|=n$. Then I should find some $d\in D$ and $r_1',\dots,r_n'\in R$ such that $x_1r_1d_1+\cdots+x_nr_nd_n=(x_1r_1'+\cdots+x_nr_n')d$: in this way the last element would lie in -$ -{\left(\bigoplus_{\lambda\in\Lambda}x_{\lambda}R\right)}D -$ -which is our RHS, and I would be finished. -I tried to write out the matrices to find $d$ and the $r_j$'s, even doing some inelegant computations, but I couldn't find any way through! Can someone help me? Many thanks! -EDIT: see my answer below. - -REPLY [2 votes]: I don't think it is possible to prove that $\bigoplus x_{\lambda} D \subseteq F_R D$, and here's why: -given an element $x_1d_1 + \dotsc + x_n d_n \in \bigoplus x_{\lambda} D$ we need to find $r_i \in R$ and $d \in D$ such that -$$ -d_i = r_i d \quad \text{for } 1 \leq i \leq n. \tag{1} \label{eq:1} -$$ -Now let's write -$$ -d_i = -\begin{pmatrix} -0 & 0 \\ -p_i & q_i -\end{pmatrix} -\quad -d = -\begin{pmatrix} -0 & 0 \\ -p & q -\end{pmatrix} -\quad -r_i = -\begin{pmatrix} -z_i & 0 \\ -x_i & y_i -\end{pmatrix} -$$ -and observe that -$$ -r_i d = -\begin{pmatrix} -z_i & 0 \\ -x_i & y_i -\end{pmatrix} -\begin{pmatrix} -0 & 0 \\ -p & q -\end{pmatrix} = -\begin{pmatrix} -0 & 0 \\ -y_i p & y_i q -\end{pmatrix} -$$ -so $\eqref{eq:1}$ is equivalent to the system of $2n$ equations -$$ -\begin{cases} -p_i = y_i p\\ -q_i = y_i q -\end{cases} -\quad -\text{for } 1 \leq i \leq n.
\tag{2} \label{eq:2} -$$ -This implies $\frac{p}{q} = \frac{p_i}{q_i}$ for every $1 \leq i \leq n$, so $\eqref{eq:2}$ cannot have a solution in general, e.g. for -$$ -x_1 -\begin{pmatrix} -0 & 0 \\ -1 & 1 -\end{pmatrix} -+ x_2 -\begin{pmatrix} -0 & 0 \\ -1 & 2 -\end{pmatrix}. -$$<|endoftext|> -TITLE: Extremely simple combinatorics - divide into groups -QUESTION [5 upvotes]: We have a group of $10$ men and $4$ women, and we want to divide this group into two groups of $7$ such that each of those groups has at least $1$ woman. -What I did: -I actually solved this in two directions, both of them wrong; I'd like to know why. -First direction - First we choose a woman for group $A$; we have $4$ options for that. Then we choose a woman for group $B$; we have $3$ options for that. Now we have $12$ people that we need to divide into $2$ groups of $6$. We have $\begin{pmatrix} 12\\ 6\end{pmatrix}$ options for that. Overall that's $4 \times 3\times \begin{pmatrix} 12\\ 6\end{pmatrix} = 11088$. -Second direction - Let's count all the groupings into groups of $7$ and subtract the ones where there's a group with no women. -Number of overall groupings - $\begin{pmatrix}14 \\ 7 \end{pmatrix}$. We have $14$ people, and we need to divide them into $2$ groups of $7$. -Number of groupings where there's a group without a woman - $\begin{pmatrix}10 \\ 7\end{pmatrix}$. Choose how to group the $10$ men only. -$\begin{pmatrix}14 \\ 7 \end{pmatrix} - \begin{pmatrix}10 \\ 7\end{pmatrix}=3432-120 = 3312$ -Correct answer - $1596$. -What? How? Why? I'd like to know where my logic fails. - -REPLY [4 votes]: You're on the right track, but you're overcounting. In your first method, suppose that you choose woman $W_1$ to be in group $A$, and then later add women $W_2$ and $W_3$, say, to that group. The problem is that you're counting this as distinct from first choosing $W_2$ and then adding $W_1$ and $W_3$, or first $W_3$ and then $W_1$ and $W_2$.
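A brute-force enumeration (a short Python sketch; the labels $W_i$, $M_i$ are just for bookkeeping) confirms that the correct count is indeed $1596$:

```python
from itertools import combinations

# 4 women and 10 men, split into two unordered groups of 7,
# each group containing at least one woman
people = ['W1', 'W2', 'W3', 'W4'] + ['M%d' % i for i in range(1, 11)]

ordered = 0
for group_a in combinations(people, 7):
    women_in_a = sum(p[0] == 'W' for p in group_a)
    if 1 <= women_in_a <= 3:  # the complementary group gets the remaining women
        ordered += 1

# each unordered split was counted twice, once per choice of "group A"
print(ordered // 2)  # -> 1596
```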
-In your second method, note that $14 \choose 7$ chooses a group of $7$ from a total of $14$, but in doing so you have also determined the other group of $7$ (i.e., the ones who weren't chosen). Therefore you should divide by $2$. (It might be easier to look at a smaller case, e.g., splitting $4$ people into $2$ groups of $2$. There are $\frac12{4 \choose 2} = 3$ ways to do so.) -Continuing from the second method, put all $4$ women into one group and choose $3$ men to join them: there are $10 \choose 3$ ways to do so. The answer is therefore -$$\frac12{14 \choose 7} - {10 \choose 3}.$$ - -REPLY [3 votes]: Let us use a bit more notation. Let us refer to the men and women by number, $M_1,M_2,\dots,W_1,W_2,W_3,W_4$. -The reason your logic fails in the first attempt is that the sequence of steps (pick which woman goes to group $A$)(pick which woman goes to group $B$)(pick six more people to go to group $A$) winds up treating the woman picked in step one as special compared to the others. -The following two sequences of choices give the "same" result: - -$W_1, W_3, (W_2,M_1,M_2,M_3,M_4,M_5)$ -$W_2, W_3, (W_1,M_1,M_2,M_3,M_4,M_5)$ - -In both cases, our group $A$ looks like $(W_1,W_2,M_1,M_2,M_3,M_4,M_5)$. Remember that order within the group doesn't matter. -Your second interpretation is almost correct given that the two groups are distinguishable, that is, if there is a "Group $A$" and a "Group $B$." However, we aren't told that, so we can assume that the two groups are not labeled. -I did say almost correct. Your mistake was that you only removed the cases where group $A$ was full of guys.
You also need to remove the cases where group $B$ was full of guys, for a total of $\binom{14}{7}-2\cdot \binom{10}{7} = 3192$. -In order to account for the fact that the two groups are unlabeled and indistinguishable, we note that by counting group $A$ as different from group $B$, we double counted each scenario; dividing by two takes care of the double counting, for a final answer of $\frac{3192}{2}=1596$.<|endoftext|> -TITLE: Degree of the splitting field of $X^4-3X^2+5$ over $\mathbb{Q}$ -QUESTION [9 upvotes]: I would like to know how to solve part $ii)$ of the following problem: - -Let $K /\mathbb{Q}$ be a splitting field for $f(X) =X^4-3X^2+5$. -i) Prove that $f(X)$ is irreducible in $\mathbb{Q}[X]$ -ii) Prove that $K$ has degree $8$ over $\mathbb{Q}$. -iii) Determine the Galois group of the extension $K/\mathbb{Q}$ and show how it acts on the roots of $f$. - -I've done part i), and have found the roots of $f$ explicitly as: -$$\pm\bigg(\frac{3\pm\sqrt{-11}}{2}\bigg)^{1/2}$$ -but am not sure how to show that the extension has degree $8$. If $x_1$ is the root where both of the $\pm$ signs above are $+$ and $x_2$ is the root where only the outer sign is a $+$, then $K = \mathbb{Q}(x_1,x_2)$. By part $i)$, $x_1$ has degree $4$ over $\mathbb{Q}$ and then $x_2$ has degree $1$ or $2$ over $\mathbb{Q}(x_1)$, but I'm not sure how to show that this degree is $2$, or prove the result by other means. -Due to the ordering of the parts, I would expect there to be an answer for ii) that doesn't require computing the entire Galois group of the extension, so I would appreciate something along these lines.
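(A quick numerical check, using Python's cmath for the complex square roots, confirms that these expressions really are roots of $f$:)

```python
import cmath

# r = ((3 + sqrt(-11)) / 2)^(1/2) should satisfy x^4 - 3x^2 + 5 = 0
r = cmath.sqrt((3 + cmath.sqrt(-11)) / 2)
print(abs(r**4 - 3 * r**2 + 5) < 1e-12)  # -> True
```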
- -REPLY [4 votes]: After some thought, I've found a very short answer that uses a minimal amount of computation: -With notation as in the question, we have: -$$x_1x_2 = \sqrt{5}, x_1^2 = \frac{3+\sqrt{-11}}{2}.$$ Thus $K$ contains the subfield $F = \mathbb{Q}(\sqrt{5},\sqrt{-11})$ which is Galois and degree $4$ over $\mathbb{Q}$, with Galois group $G'\cong V_4$, generated by $\sigma$ and $\tau$, where $\sigma$ fixes $\sqrt{5}$ and permutes $\pm\sqrt{-11}$ and $\tau$ fixes $\sqrt{-11}$ and permutes $\pm\sqrt{5}$. -If $F=K$, then $x_i \in F$ and then the relations above immediately give that $\sigma\tau(x_1) = \pm x_2$ and $\sigma\tau(x_2)=\mp x_1$. But then $\sigma\tau \in G'$ has order $4$, a contradiction, so we must have that $K$ is strictly larger than $F$, so must be of degree $8$ over $\mathbb{Q}$ as desired.<|endoftext|> -TITLE: Need help with $\int_0^\pi\arctan^2\left(\frac{\sin x}{2+\cos x}\right)dx$ -QUESTION [28 upvotes]: Please help me to evaluate this integral: -$$\int_0^\pi\arctan^2\left(\frac{\sin x}{2+\cos x}\right)dx$$ -Using substitution $x=2\arctan t$ it can be transformed to: -$$\int_0^\infty\frac{2}{1+t^2}\arctan^2\left(\frac{2t}{3+t^2}\right)dt$$ -Then I tried integration by parts, but without any success... - -REPLY [2 votes]: Generalizing Jack's answer. 
Take $a\in\Bbb R$ with $|a|>1$, and write -$$J(a)=\int_0^\pi\arctan^2\left(\frac{\sin x}{a+\cos x}\right)dx.$$ -From symmetry, we can write -$$J(a)=\frac12\int_{-\pi}^\pi\arctan^2\left(\frac{\sin x}{a+\cos x}\right)dx.$$ -Then -$$\arctan\left(\frac{\sin x}{a+\cos x}\right)=\Im\log(a+e^{ix})=\sum_{n\ge1}\frac{(-1)^{n+1}}{na^n}\sin(nx).$$ -Then from Parseval's theorem, -$$\frac1\pi\int_{-\pi}^\pi\arctan^2\left(\frac{\sin x}{a+\cos x}\right)dx=\sum_{n\ge1}\left(\frac{(-1)^{n+1}}{na^n}\right)^2,$$ -so that -$$J(a)=\frac\pi2\mathrm{Li}_2\left(\tfrac1{a^2}\right).$$ -Unfortunately, your integral, given by $J(2)=\frac\pi2\mathrm{Li}_2(1/4)$, does not appear to have a closed form.<|endoftext|> -TITLE: Find number of integral solutions of $abcd=210$ -QUESTION [5 upvotes]: Find the number of integral solutions of $a\times b\times c\times d=210$. - -$$210=2\times 3\times 5\times 7$$ -I tried treating $2,3,5,7$ as numbered balls. The above problem is equivalent to placing $4$ balls in $4$ boxes where empty boxes are allowed, or placing $3$ partitions between $4$ balls. (An empty box signifies a factor of $1$.) -Treating the partitions as sticks, I have to find the number of ways of arranging $4$ different balls and $3$ sticks. (The numbered balls between the sticks are like numbered balls in a box. So if two sticks come together, it means you get an empty box.) -Number of ways = $7!$. But the answer given is $8\times 4^4$. -(I don't know if negative solutions are allowed. If that is the case, my method will not work. But if only positive integral solutions are allowed, is my method correct?) - -REPLY [7 votes]: Assuming only positive integral solutions, you’re assigning each of the $4$ primes to one of the $4$ ‘boxes’ $a,b,c$, and $d$. Both the primes and the ‘boxes’ are individually identifiable, so this can be done in $4^4$ ways. Thus, there are $4^4$ solutions in positive integers. However, the problem merely requires the four factors to be integers.
We can assign plus and minus signs arbitrarily to $a,b$, and $c$, but then there will be only one possible choice of sign for $d$ in order to make the product positive, so there are altogether $2^3=8$ ways to assign the signs. Alternatively, an even number of $a,b,c$, and $d$ must be negative, and this can happen in $8$ ways: all positive, all negative, or one of the $\binom42=6$ ways of picking two to be negative. -Note that the problem is not equivalent to the usual one of placing $3$ partitions in a line of $4$ balls, because these ‘balls’ are individually identifiable. You can line them up in the order $2,3,5,7$ and place your three partitions in $\binom73$ ways, but that will only give you the factorizations in which no prime appears to the right of any larger prime. (E.g., you can’t get $7\cdot10\cdot3\cdot1$ this way.) Unfortunately, if you multiply by $4!$ to allow for all possible orders of the primes, you overcount: $1\cdot1\cdot1\cdot210$, for instance, gets counted $4!$ times!<|endoftext|> -TITLE: On defining cross (vector) product. -QUESTION [13 upvotes]: This has been bugging me for years, so I finally decided to "derive" (for lack of a better term) the definition of the cross product in $\mathbb R^{3}$. Here was my method for finding a vector: -$\mathbf w = \mathbf u \times \mathbf v$ such that $\mathbf w \cdot \mathbf u = \mathbf w \cdot \mathbf v = 0$, where $\mathbf u = \begin{bmatrix} a & b & c \end{bmatrix}$ and $\mathbf v = \begin{bmatrix} d & e & f \end{bmatrix}$. This of course shows orthogonality between $\mathbf w$ and $\mathbf u$, as well as $\mathbf v$.
I set up the $2\times3$ system to solve for $\mathbf w = \begin{bmatrix} w_1 & w_2 & w_3 \end{bmatrix}$ as follows: -$$\begin{bmatrix} a & b & c \\ d & e & f \end{bmatrix} \begin{bmatrix} w_1 \\ w_2 \\ w_3 \end{bmatrix} = \begin{bmatrix} 0 \\ 0 \end{bmatrix}$$ -Of course this is $3$ unknowns and $2$ equations, so I knew there would have to be an arbitrary parameter. I was fine with this for the time being, and after some dirty work I ended up with the following: -$$\begin{bmatrix} w_1 \\ w_2 \\ w_3 \end{bmatrix} = t \begin{bmatrix} \frac{\begin{vmatrix}b & c \\ e & f\end{vmatrix}}{\begin{vmatrix}a & b \\ d & e\end{vmatrix}} \\ -\frac{\begin{vmatrix}a & c \\ d & f\end{vmatrix}}{\begin{vmatrix}a & b \\ d & e\end{vmatrix}} \\ 1 \end{bmatrix}$$ -This looked very much like the "traditional" definition of the cross product, so I chose $t = \begin{vmatrix}a & b \\ d & e\end{vmatrix}$ and I finally ended up with -$$\mathbf w = \begin{pmatrix} \begin{vmatrix}b & c \\ e & f\end{vmatrix} \\ -\begin{vmatrix}a & c \\ d & f\end{vmatrix} \\ \begin{vmatrix}a & b \\ d & e\end{vmatrix} \end{pmatrix}$$ which is the definition of the cross product that I've seen in pretty much all of my calculus and physics texts (also shown in determinant form with unit vectors). But where does that value for $t$ come from? Why does that particular value of $t$ work, besides my hunch to make it look like a definition that is universally accepted? Is the rationale behind $t$ being negative for $\mathbf w = \mathbf v \times \mathbf u$ just to satisfy the right-hand rule? -Sorry if anything is messed up; this is my first time using MathJax. -By the way, I've checked similar questions which ask for the rationale for the cross product existing, which I have learned from studying electromagnetics myself. But I wanted to see the rationale behind the length of the vector, hence my value for $t$. Thanks for any help you can offer!
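(As a sanity check of the derivation, here is a small numerical verification; the sample values for $a,\dots,f$ are arbitrary:)

```python
# w = (|b c; e f|, -|a c; d f|, |a b; d e|) should be orthogonal
# to both u = (a, b, c) and v = (d, e, f)
a, b, c = 2, -1, 3
d, e, f = 5, 4, -2

w = (b * f - c * e, -(a * f - c * d), a * e - b * d)

dot_u = a * w[0] + b * w[1] + c * w[2]
dot_v = d * w[0] + e * w[1] + f * w[2]
print(dot_u, dot_v)  # -> 0 0
```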
 - -REPLY [3 votes]: It depends on your definition of cross product. -The history of cross products has been discussed in depth in Michael Crowe (2002), A History of Vector Analysis (see also this earlier thread). Apparently, there are two paths of development, one led by Hamilton and the other by Grassmann and the French mathematician Adhémar Barré, Comte de Saint-Venant. -According to Crowe (2002), Hamilton noted in 1846 that --- in modern language --- given two purely imaginary quaternions $Q=x\mathbf i+y\mathbf j+z\mathbf k$ and $Q'=x'\mathbf i+y'\mathbf j+z'\mathbf k$, the real part (called "scalar part" by Hamilton) of $QQ'$ is equal to $-(xx'+yy'+zz')$ (which is the negative of the modern dot product) and the imaginary part (called "vector part") of $QQ'$ is equal to $\mathbf i(yz'-zy')+\mathbf j(zx'-xz')+\mathbf k(xy'-yx')$, which is the modern cross product. Crowe comments that "This will be very significant historically; in fact, it was precisely along this path that modern vector analysis originated." -On the other front, Grassmann had already devised in 1840 something that is numerically equivalent to the modern cross product, and Barré de Saint-Venant also "lays out a number of the fundamental ideas of vector analysis, including a version of the cross product." However, they both viewed the results of vector products only as directed areas, not vectors. This is understandable, because their studies of cross products were motivated by physical applications. Unfortunately, according to Crowe, - -Grassmann and Saint-Venant correspond for a time, but Saint-Venant's ideas do not seem to have attracted significant attention. They do show, however, that the search for a vectorial system was “in the air”. - -The earliest known explicit definitions of the modern cross product were given by Tait's An Elementary Treatise on Quaternions (1867) and Gibbs's Vector Analysis (1881). In Tait's Treatise, the cross product is motivated exactly in Hamilton's way (i.e.
by considering the imaginary part of the product of two purely imaginary quaternions), while in Gibbs's Vector Analysis, the cross product $C=A\times B$ is a vector whose length is the area of the parallelogram with edges $A$ and $B$ and whose direction is determined by the right-hand rule, so that a scalar triple product $A\cdot(B\times C)$ or $(A\times B)\cdot C$ gives the signed volume of the parallelepiped with concurrent edges $A,B,C$, and this volume is equal to the determinant of the matrix with columns $A,B,C$.<|endoftext|> -TITLE: A $T_0$ topological vector space is Hausdorff -QUESTION [7 upvotes]: I know a $T_0$ topological group is $T_1$, and if we have a topological vector space, there should be a way of using scalar multiplication to get disjoint neighborhoods. Right? - -REPLY [4 votes]: As suggested in the comments, a topological group which is $T_1$ is already Hausdorff. Here is a proof of this fact. -Remember that a space is Hausdorff if and only if the diagonal is closed in the product space. Let $X$ be our space, with operation $\cdot$. -Define $f \colon X \times X \to X$ by $f(x,y)=x\cdot y^{-1}$. Such a map is continuous, and since the space is $T_1$, $\{\operatorname{id}\}$ is a closed set. The preimage under $f$ of the set $\{\operatorname{id}\}$ is the diagonal, which is closed by continuity; hence $X$ is Hausdorff.<|endoftext|> -TITLE: Is a continuous function between two uniformly continuous functions uniformly continuous? -QUESTION [8 upvotes]: I'm sorry for the long question in the title. Given three functions $\underline{f}(x), f(x), \overline{f}(x)$ that satisfy the following - -$\underline{f}(x)\leq f(x)\leq \overline{f}(x)$ for all $x\in\mathbb{R}$, -$\underline{f}(x)$ and $\overline{f}(x)$ are both uniformly continuous and bounded in $\mathbb{R}$, and -$f(x)$ is continuous in $\mathbb{R}$. - -Is $f(x)$ uniformly continuous in $\mathbb{R}$? -The desired conclusion seems intuitive, but I get stuck when trying to prove it.
I have a hard time putting the conditions together. Any hint is highly appreciated! - -REPLY [27 votes]: The functions $\bar f(x) = 1$ and $\underline f(x) = -1$ are uniformly continuous. -The function $f(x) = \sin(x^2)$ is not uniformly continuous. (Take $\epsilon = \frac{1}{2}$, for instance.) - -REPLY [13 votes]: No, since a bounded function like $\sin(x^2)$ is not necessarily uniformly continuous. For instance, $\sin(x^{1/2})$ is not.<|endoftext|> -TITLE: Is it possible to prove the Fundamental Theorem of Algebra for all polynomials of degree $n \le 4$? -QUESTION [6 upvotes]: Recently I've been wondering whether it's possible to prove the FTA for all polynomials of degree $n \le 4$ without utilizing advanced maths but, at most, basic linear algebra (concepts such as eigenvectors, eigenvalues, determinants etc.). I've tried to give this some thought, but I've only been able to make very naïve and futile statements such as "all real numbers can be represented as complex numbers" and "some quadratics and quartics, such as $p(x)=x^2 + 1$ and $p(x)=x^4 -1$, have complex solutions". Now, I would be really grateful, if it is indeed possible to prove the FTA for all polynomials with degree $n \le 4$, if someone could provide a reference of some sort. - -REPLY [4 votes]: Let $p\in\mathbb{R}[X]\setminus\mathbb{R}$ be a polynomial of degree at most $4$. - -If $\deg(p)=1$, then you can easily write down the root of $p$ explicitly. -If $\deg(p)=2$, same remark. -If $\deg(p)=3$, you can also write down the roots of $p$ explicitly, using Cardano's method. -If $\deg(p)=4$, you can also write down the roots of $p$ explicitly, this time using Ferrari's method.
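To make the cubic case concrete, here is a minimal numerical sketch of Cardano's method for a depressed cubic $x^3+px+q=0$ (it produces just one of the three roots and ignores degenerate cases; the sample cubic is arbitrary):

```python
import cmath

def cardano_root(p, q):
    # one root of x^3 + p*x + q = 0 via Cardano's formula
    d = cmath.sqrt((q / 2) ** 2 + (p / 3) ** 3)
    u = (-q / 2 + d) ** (1 / 3)  # principal complex cube root
    v = -p / (3 * u)             # chosen so that u * v = -p/3
    return u + v

r = cardano_root(-7, 6)  # x^3 - 7x + 6 has roots 1, 2, -3
print(abs(r**3 - 7 * r + 6) < 1e-9)  # -> True
```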
 - -Note that if $\deg(p)\geqslant 5$, in the general case you won't be able to find such formulas for the roots of $p$; this is the Abel-Ruffini theorem.<|endoftext|> -TITLE: Help with a proof of a special case of Dirichlet's Theorem -QUESTION [5 upvotes]: So I am reading through a proof of a special case of Dirichlet's theorem on arithmetic progressions, specifically a proof that if $p$ is prime, then there are infinitely many primes congruent to $1$ mod $p$ (Link Here). -In it, they assume there are a finite number of such primes, enumerated as $p_1,p_2,\ldots,p_n$, and they define the number: -$$ a=p\prod_{i=1}^n p_i$$ -They prove that $p \mid \phi(a^p-1)$, and they state that (clearly) $a^p-1$ is not divisible by $p$. -Then they use those two facts to claim that at least one prime factor of $a^p-1$ must be congruent to $1$ mod $p$. This is the step that I am not understanding; how do those two facts imply that at least one of its prime factors must be congruent to $1$ mod $p$? - -REPLY [3 votes]: Suppose -$$ -a^p-1 = \prod_j q_j^{r_j+1} -$$ -with $r_j \ge 0$. Then -$$ -\phi(a^p-1)= \prod_jq_j^{r_j}\prod_j(q_j-1). -$$ -Since $p$ divides $\phi(a^p-1)$ but no $q_j$ can equal $p$ (as $p\nmid a^p-1$), for some $j$ we have $p\mid(q_j-1)$, which implies $q_j \equiv_p 1$.<|endoftext|> -TITLE: Applications of the Lawvere Fixed Point Theorem for Sets -QUESTION [5 upvotes]: I'm not familiar with the general theorem for closed, cartesian categories (as I'm not familiar with closed, cartesian categories), but I am aware of this version of the fixed point theorem for sets: - -Let $A, B$ be sets. If there exists a surjective function $f: A \longrightarrow \textbf{Set}(A,B)$, then every function $g:B \longrightarrow B$ has a fixed point. - -I'm aware of two applications of the theorem: proving Cantor's theorem (setting $B=\{0,1\}$) and proving that $[0,1]$ is uncountable (setting $A=\mathbb{N}$, $B= \{0,1\}$, looking at binary representations).
For instance, can we deduce the Tarski fixed point theorem for sets (every non-decreasing endofunction of a power set of a set has a fixed point) or the Cantor-Bernstein theorem using this theorem? -Just thought this result was cool, and wanted to see what other things you could do with it. Thanks in advance for any replies! - -REPLY [3 votes]: The Lawvere fixed point theorem has limited applications in $\text{Set}$ because the only set with the fixed point property is the one-element set $1$, so if $B$ is any other set you just conclude that there can't be a surjection $A \to [A, B]$, which for $B \neq 0, 1$ is basically Cantor's theorem. -It has more interesting applications in other cartesian closed categories precisely because sometimes these categories have more interesting objects with the fixed point property. For example, there are some cartesian closed categories consisting of a single object $X$; in particular, this means that $X \cong [X, X]$, and applying the Lawvere fixed point theorem to an isomorphism of this form allows you to conclude that $X$ has the fixed point property. CCCs like this model the untyped lambda calculus, and writing down the fixed points explicitly gives you the Y combinator.<|endoftext|> -TITLE: Is $\mathbb{R}$ a subset of $\mathbb{R} \times \mathbb{R}$? -QUESTION [5 upvotes]: So I'm curious as why $\mathbb{R} \subseteq \mathbb{R}^{2}$, since $\mathbb{R}^{2} = \mathbb{R} \times \mathbb{R} = \left \{ (a,b) \mid a\in \mathbb{R}, b\in \mathbb{R} \right \}$. Do we think of $\mathbb{R}$ as being $\mathbb{R} \times 0$ in this case? - -REPLY [8 votes]: Technically no, for the reason you stated. 
However, there IS a canonical bijection between $\Bbb{R}$ and $\Bbb{R}\times \{0\}$.<|endoftext|> -TITLE: Solving $a\sin \theta + b\cos \theta + c\sin 2\theta + d\cos 2\theta = k$ -QUESTION [7 upvotes]: I have to solve (for $\theta$) an equation of the form: -$$a\sin \theta + b\cos \theta + c\sin 2\theta + d\cos 2\theta = k$$ -I'm only interested in real-valued solutions where $0 \le \theta \le \frac\pi4$, if one exists, and also in knowing if none exist. -Also, $a$, $b$, $c$, $d$, and $k$ are rational numbers. -Is there an "easy" way to attack this problem? -The only strategy I could come up with was to express all sines and cosines -in terms of $\sin \theta$, then square the equation to get rid of square roots, then solve a quartic equation, and then check the legitimacy of the roots. -Is there an easier approach, perhaps one that is more customized to this problem? - -REPLY [4 votes]: Set $x=\sin\theta$, $y=\cos\theta$ and get the conic -$$ax+by+2cxy-dx^2+dy^2=k,$$ which has solutions to your equation where it intersects the unit circle. -Since $(2c)^2-4(-d)(d)=4(c^2+d^2)>0$, we're looking at a hyperbola. (In fact, since the coefficients of $x^2$ and $y^2$ are negatives of each other, we're looking at a hyperbola whose asymptotes intersect at right angles.) -Everything is still kind of a mess though. If we rotate by $\frac{\arctan(-d/c)}{2}$ we should get a hyperbola that has its asymptotes running parallel to the new $u$- and $v$-coordinate axes, with equation $2c'uv+b'u+a'v=k$, which, after some more poking, is -$$\left(u-\frac{-a'}{2c'}\right)\left(v-\frac{-b'}{2c'}\right)=\frac{2c'k+a'b'}{4c'^2}.
$$ -If the center of the hyperbola is very far away from the unit circle, you can probably check for intersection just by checking whether the asymptotes intersect the circle and then looking nearby.<|endoftext|> -TITLE: Find the value of $\sqrt{1+\frac{1}{1^2}+\frac{1}{2^2}}+\sqrt{1+\frac{1}{2^2}+\frac{1}{3^2}}+...+\sqrt{1+\frac{1}{1999^2}+\frac{1}{2000^2}}$ -QUESTION [5 upvotes]: Find the value of $\sqrt{1+\frac{1}{1^2}+\frac{1}{2^2}}+\sqrt{1+\frac{1}{2^2}+\frac{1}{3^2}}+...+\sqrt{1+\frac{1}{1999^2}+\frac{1}{2000^2}}$ -I found the general term of the sequence. -It is $\sqrt{1+\frac{1}{k^2}+\frac{1}{(k+1)^2}}$ -So the sum becomes $\sum_{k=1}^{1999}\sqrt{1+\frac{1}{k^2}+\frac{1}{(k+1)^2}}$ -I tried telescoping but I could not split it into two partial fractions. And the exponent of $\frac{1}{2}$ is also troubling me. What should I do to find the answer? - -REPLY [13 votes]: After taking the LCM, we get the general term of the series as: -$$\sqrt{\frac{(k^{2}+k+1)^{2}}{k^{2}(k+1)^{2}}}$$ -$$= \frac{k^{2}+k+1}{k^{2}+k}$$ -$$= 1 + \frac{1}{k^{2}+k}$$ -So we have -$$\sum_{k=1}^{1999} \left(1 + \frac{1}{k^{2}+k}\right)$$ -$$= 1999 + \sum_{k=1}^{1999}\frac{1}{k(k+1)}$$ -$$= 1999 + \sum_{k=1}^{1999}\left(\frac{1}{k} - \frac{1}{k+1}\right)$$ -$$= 1999 + 1 - \frac{1}{2000}$$ -$$= 2000 - \frac{1}{2000}$$ - -REPLY [3 votes]: Given $$1+\frac{1}{k^2}+\frac{1}{(k+1)^2} = 1+\frac{1}{k^2}+\frac{1}{(k+1)^2}-\frac{2}{k(k+1)}+\frac{2}{k(k+1)},$$ -so $$1^2+\left[\frac{1}{k}-\frac{1}{(k+1)}\right]^2+2\left[\frac{1}{k}-\frac{1}{(k+1)}\right]=\left[1+\frac{1}{k}-\frac{1}{k+1}\right]^2.$$<|endoftext|> -TITLE: Computing alternating sum using contour integration -QUESTION [7 upvotes]: By considering the integral of: -$$\left(\frac{\sin\alpha z}{\alpha z}\right)^2 \frac{\pi}{\sin \pi z},\quad \alpha<\frac{\pi}{2}$$ -around a circle of large radius, prove that: -$$\sum\limits_{m=1}^\infty (-1)^{m-1} \frac{\sin ^2 m\alpha}{(m \alpha)^2} = \frac{1}{2}$$ - -Attempt at answer: -I can see that I have poles at $z=n$, and a double pole
at $z=0$. -So in order to perform the contour integration, I first find the residues for $z=n$: -$$\frac{\pi}{\sin\pi z} = \frac{1}{z} \left( 1 + \frac{(\pi z)^2}{3!} + ...\right)$$ -the $1/z$ part is equal to $1$, so the residues are $$\sum\limits_{n=-N}^N \frac{\sin ^2 n\alpha}{(n \alpha)^2}$$ -Next, finding the residue at $z=0$: I found it to be $1$, by series expansion. -I know that if the integral tends to zero as the contour grows to enclose all the poles, I have that: -$$2\pi i \left(2 \sum\limits_{n=1}^\infty \frac{\sin ^2 n\alpha}{(n \alpha)^2} + 1\right) = 0$$ -So I'm very nearly there, but I have no idea where the $(-1)^{m-1}$ factor comes from, and also, if I rearrange the last equation, I get that the sum is $-\frac{1}{2}$ (i.e. not positive). -If anyone could help find where I'm going wrong, that would be great! - -REPLY [3 votes]: In general, if $k$ is a positive integer greater than or equal to $2$, $$\sum_{m=1}^{\infty} \frac{(-1)^{m-1} \sin^{k}(\alpha m)}{ m^{k}} = \frac{\alpha^{k}}{2} \, , \quad |\alpha| \le \frac{\pi}{k}.$$ -We first need to argue that $$\lim_{N \to \infty} \int_{|z|=N+\frac{1}{2}} \left(\frac{\sin\alpha z}{ z}\right)^k \frac{\pi}{\sin \pi z} \, dz = 0$$ if $|\alpha| \le \frac{\pi}{k}$. -But this follows from the fact that as $\text{Im}(z) \to \pm \infty$, $\left|\frac{\sin^{k}(\alpha z)}{\sin(\pi z)} \right|$ behaves like $\frac{1}{2^{k-1}} e^{\pm (k |\alpha|-\pi) \, \text{Im}(z)}$. -So we have -$$ \begin{align} 2\sum_{m=1}^{\infty} \frac{(-1)^{m-1} \sin^{k}(\alpha m)}{ m^{k}} &= \text{Res} \left[\left(\frac{\sin\alpha z}{ z}\right)^k \frac{\pi}{\sin \pi z} , \, 0 \right] \\ &= \lim_{z \to 0} \left(\frac{\sin\alpha z}{ z}\right)^k \frac{\pi z}{\sin \pi z} \\ &=\alpha^{k}(1) \\&= \alpha^{k}. \end{align}$$
Then the polynomial $x^3-\Delta$ has a root (and hence all roots since Galois) in $K(E[3])$; this can be shown laboriously through solving the 3-division polynomial (a quartic). -Is there a nicer/more intuitive way of seeing this and can you please provide a proper reference for either the above method or whatever you suggest? - -REPLY [8 votes]: One way to think about this is via the modular curves parametrizing -elliptic curves $E$ with either $E[3]$ or $\Delta^{1/3}$ rational. -Note that $\Delta^{1/3}$ is rational iff $j^{1/3}$ is rational, -because $j = E_4^3 / \Delta$. -Assume for simplicity that $K$ contains the cube roots of unity -(because $K(E[3])$ contains them in any case thanks to the Weil pairing). -The $E[3]$ curve is the modular curve $X(3)$, with a map $X(3) \to X(1)$ -that forgets the $3$-torsion structure, and is Galois with group -${\rm PSL}_2({\bf Z}/3{\bf Z}) \cong A_4$. -Now $A_4$ has a normal subgroup, the - "Klein $4$-group" -$V_4$ consisting of the identity and the three double transpositions. -(The $V$ stands for German "Vierergruppe".) -So $X(3) / V_4$ is a Galois cover, call it $X'(3)$, of $X(1)$ with group -$A_4 / V_4 \cong {\bf Z} / 3 {\bf Z}$, i.e. a 3:1 cyclic cover. -Once $K$ contains cube roots of unity, Kummer theory says that -any 3:1 cyclic cover is obtained by adjoining a cube root -to the function field; here this function field is $K(j)$, -so the function field $K(X'(3))$ is $K(j,F^{1/3})$ -where $F$ is some rational function of $j$ that's not already a cube. -The punchline is that we can take $F=j$. Since the function field of -$X'(3)$ is contained in the function field of $X(3)$, this explains -why $\Delta^{1/3}$ is a rational function of the coordinates of $E[3]$. -The fact that $K(X'(3)) = K(j^{1/3})$ can almost be recovered from -the ramification structure of the map $X(3) \to X(1)$. It is ramified -only above $j=\infty$, $j=0$, and $j=1728$, with cycle structures -$(3,1)$, $(3,1)$, $(2,2)$ respectively. 
Hence the cover $X'(3) \to X(1)$ -is ramified only above $j=0$ and $j=\infty$, so $K(X'(3))$ must be -$K((cj)^{1/3})$ for some constant $c \in K^*$. -The fact that we can use $c=1$ takes a bit more work, -but once we know that the cover has this form it's enough -to just try some convenient $E$ with $j(E)\neq 0$ to complete the proof.<|endoftext|> -TITLE: A formula which gives the maximum of a series of numbers -QUESTION [6 upvotes]: This formula gives the maximum of 3 numbers: -$$\frac{a}{2} + \frac{b}{4} + \frac{c}{4} + \frac{|b-c|}{4} + \frac{1}{2}\left|a -\frac{b}{2} - \frac{c}{2} - \frac{|b-c|}{2}\right| = \max(a,b,c)$$ -I've found this over the internet, I have no idea how can one develop such a formula, and I wonder how. -What would it look like for 4 numbers ? and 5 etc. -Is it possible to have formula which gives the maximum of a series of $n$ numbers? - -REPLY [3 votes]: Let's start with $2$ numbers. The best way to see this is to imagine the numbers on the number line. -We find the midpoint of the $2$ numbers: $\frac{a+b}{2}$. -Next, we can find half the distance of the $2$ numbers: $\frac{|a-b|}{2}$ -Adding them up, we have $\max(a, b)=\frac{a+b}{2}+\frac{|a-b|}{2}$. -The formula you have for $3$ numbers is found by expanding $\max(a, \max(b, c))$. 
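Both identities are easy to sanity-check numerically; here is a quick sketch (Python, helper names are mine):

```python
def max2(a, b):
    # max(a, b) = midpoint of a and b plus half the distance between them
    return (a + b) / 2 + abs(a - b) / 2

def max3(a, b, c):
    # the three-number formula from the question, i.e. max(a, max2(b, c)) expanded
    return (a / 2 + b / 4 + c / 4 + abs(b - c) / 4
            + abs(a - b / 2 - c / 2 - abs(b - c) / 2) / 2)

for a, b, c in [(1, 2, 3), (5, 1, -1), (-2.5, -2.5, 0), (7, 7, 7)]:
    assert max2(a, b) == max(a, b)
    assert max3(a, b, c) == max(a, b, c)
```

For these inputs every intermediate value is a dyadic rational, so the floating-point comparisons are exact.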
-You can continue with any number of variables: if we want to find the maximum of $a_1, a_2, a_3, \dots, a_n$, we can evaluate it slowly: -$$\max(a_1, \max(a_2, \max(a_3, \dots\max(a_{n-1}, a_n)\dots)))$$ -Because this post would take up too much space if I added too many expressions, I will only write out the expression for $n=5$: -$\frac{a_{1}}{2}+\frac{1}{2}\left(\frac{a_{2}}{2}+\frac{1}{2}\left(\frac{a_{3}}{2}+\frac{1}{2}\left(\frac{a_{4}}{2}+\frac{1}{2}\left(a_{5}\right)+\frac{1}{2}\left|a_{4}-a_{5}\right|\right)+\frac{1}{2}\left|a_{3}-\frac{a_{4}}{2}-\frac{1}{2}\left(a_{5}\right)-\frac{1}{2}\left|a_{4}-a_{5}\right|\right|\right)+\frac{1}{2}\left|a_{2}-\frac{a_{3}}{2}-\frac{1}{2}\left(\frac{a_{4}}{2}+\frac{1}{2}\left(a_{5}\right)+\frac{1}{2}\left|a_{4}-a_{5}\right|\right)-\frac{1}{2}\left|a_{3}-\frac{a_{4}}{2}-\frac{1}{2}\left(a_{5}\right)-\frac{1}{2}\left|a_{4}-a_{5}\right|\right|\right|\right)+\frac{1}{2}\left|a_{1}-\frac{a_{2}}{2}-\frac{1}{2}\left(\frac{a_{3}}{2}+\frac{1}{2}\left(\frac{a_{4}}{2}+\frac{1}{2}\left(a_{5}\right)+\frac{1}{2}\left|a_{4}-a_{5}\right|\right)+\frac{1}{2}\left|a_{3}-\frac{a_{4}}{2}-\frac{1}{2}\left(a_{5}\right)-\frac{1}{2}\left|a_{4}-a_{5}\right|\right|\right)-\frac{1}{2}\left|a_{2}-\frac{a_{3}}{2}-\frac{1}{2}\left(\frac{a_{4}}{2}+\frac{1}{2}\left(a_{5}\right)+\frac{1}{2}\left|a_{4}-a_{5}\right|\right)-\frac{1}{2}\left|a_{3}-\frac{a_{4}}{2}-\frac{1}{2}\left(a_{5}\right)-\frac{1}{2}\left|a_{4}-a_{5}\right|\right|\right|\right|$ -Here I include the C++ program used to generate the expression in $\LaTeX$. -#include <cstdio> -#include <iostream> -using namespace std; -/** -Prints out the maximum of the variables -a_{printed+1}...a_n -sign is used to switch the + and - signs when the -expression appears negated inside an absolute value.
-**/ -void printMax(int n, int printed, int sign) { - if (printed+1 == n) { - printf("a_{%d}", n); - return; - } - char p = (sign < 0) ? '-' : '+'; - // a/2, then half the max of the rest, then half the distance term - printf("\\frac{a_{%d}}{2}%c\\frac{1}{2}\\left(", printed+1, p); - printMax(n, printed+1, 1); - // inside the absolute value the joining signs flip, but nested sub-expressions keep their own signs - printf("\\right)%c\\frac{1}{2}\\left|a_{%d}-", p, printed+1); - printMax(n, printed+1, -1); - printf("\\right|"); -} -int main() { - printMax(5, 0, 1); - printf("\n"); - return 0; -}<|endoftext|> -TITLE: About phi function -QUESTION [6 upvotes]: Find all positive integers $n$ such that $\phi(n) \mid n$. -I find that $n=2^k$ and $n=2^k \times 3^j$ are answers, but I can't find any others. - -REPLY [4 votes]: Using the formula for $\phi (n)$ as pointed out by @ScrondingersCat, the condition $\phi (n)\mid n$ is equivalent to -$s=(p_1-1)(p_2-1)\cdots(p_k-1)\mid p_1p_2p_3\cdots p_k=t$ -Assume WLOG $p_1=2$ (else $t$ is odd while $s$ is even, which is impossible except when $n=1$). -Then we must have $v_2(s) \leq v_2(t)=1$. -So $v_2(s)=1$, which gives $k=2$, or $v_2(s)=0$, which gives $k=1$ and $n=2^m$. -Now, for $k=2$ we have $p_2-1\mid 2p_2$. So $p_2=3$. -So $n=2^m3^j$<|endoftext|> -TITLE: Proving the transpose / dual map is well defined. -QUESTION [5 upvotes]: The definition for a dual map is as follows: - -The dual map, or transpose of linear $f:V \rightarrow W $ is given by - $f^t(g)(v) = g(f(v)) $ for $\forall g \in W^* , v \in V $. - -In my lecture notes, I have the following proof to show that the definition is well defined: -$f^t (g)(a_1v_1 + a_2v_2) = g(f(a_1v_1 + a_2v_2)) $ -$ = a_1g(f(v_1)) + a_2 g(f(v_2))$ -$= a_1 f^t(g)(v_1) + a_2f^t(g)(v_2)$ -and so $f^t(g) \in V^* $. -How does this last step show $f^t(g) \in V^* $? How can I get my head around this proof? -My understanding is that $g(f(v))$ takes in an element of $V$, applies $f$ to obtain an element of $W$ and then the functional $g$ to produce an element of the field ($K$), ultimately going from $V \rightarrow K$. Thus the function itself is an element of $V^*$. How is this intuition shown in the proof above? - -REPLY [3 votes]: You want to show: - -$f^t$ is a map from $W^*$ to $V^*$ -$f^t$ is linear - -For $g\in W^*$, you want to define an element of $V^*$, that is, a linear map $V\to K$.
The definition is quite natural: -$$ -f^t(g)\colon v\mapsto g(f(v)) -$$ -Clearly, $f^t(g)$, as a map $V\to K$, is linear, because it's just $g\circ f$. -I see no reason for doing that complicated proof, which is just that the composition of linear maps is linear. -More interesting is showing that $f^t$ is linear. If $g_1,g_2\in W^*$ and $v\in V$, we have -$$ -f^t(a_1g_1+a_2g_2)\colon v\mapsto (a_1g_1+a_2g_2)(f(v)) -$$ -Now, -$$ -(a_1g_1+a_2g_2)(f(v))=a_1g_1(f(v))+a_2g_2(f(v)) -$$ -by definition. On the other hand -$$ -a_1f^t(g_1)+a_2f^t(g_2)\colon v\mapsto -a_1f^t(g_1)(v)+a_2f^t(g_2)(v)= -a_1g_1(f(v))+a_2g_2(f(v)) -$$ -which is what was desired.<|endoftext|> -TITLE: Vector bundle as locally free coherent sheaves -QUESTION [15 upvotes]: I am studying coherent sheaves and was looking for a geometric motivation. Hence, in Wikipedia and also here it is stated that they can be seen as a generalization of vector bundles, which is quite satisfactory, since this yields a better understanding of what the tangent bundle, cotangent bundle or differential forms in sheaf theory and algebraic geometry might be. So, I tried to see a vector bundle as a locally free coherent sheaf, but I got lost. So here are the two definitions and my first observations: - -A (real) vector bundle consists of: -(i) topological spaces $X$ (base space) and $E$ (total space) -(ii) a continuous surjection $\pi:E\to X$ (bundle projection) -(iii) for every $x$ in $X$, the structure of a finite-dimensional real vector space on the fiber $\pi^{-1}(\lbrace x\rbrace)$ -where the following compatibility condition is satisfied: for every point in $X$, there is an open neighborhood $U$, a natural number $k$, and a homeomorphism -\begin{align} -\varphi :U\times \mathbf {R} ^{k}\to \pi ^{-1}(U) -\end{align} -such that for all $x \in U$, -(a) $(\pi \circ \varphi )(x,v)=x$ for all vectors $v$ in $R^k$, and -(b) the map $v\mapsto \varphi (x,v)$ is a linear isomorphism between the vector spaces $R^k$ and $\pi^{-1}(x)$. 
- -and the definition of coherent sheaves is the following: - -A sheaf $\mathcal{F}$ of $\mathcal{O}_X$-Modules is coherent if : -1)$ \mathcal{F} $ is of finite type over $ \mathcal{O}_X $, i.e., for any point $ x\in X $ there is an open neighbourhood $ U\subset X $ such that the restriction $ \mathcal{F}|_U $ of $ \mathcal{F} $ to U is generated by a finite number of sections (in other words, there is a surjective morphism $ \mathcal{O}_X^n|_U \to \mathcal{F}|_U $ for some $ n\in\mathbb{N} $); -2) and for any open set $ U\subset X $, any $ n\in\mathbb{N} $ and any morphism $ \varphi\colon \mathcal{O}_X^n|_U \to \mathcal{F}|_U $ of $ \mathcal{O}_X $-modules, the kernel of $ \varphi $ is finitely generated. - -Thus, we can see that they both have a topological space $X$ and we can identify $E=\mathcal{O}_X$. Furthermore there is a surjection $ \mathcal{O}_X^n|_U \to \mathcal{F}|_U $.... - -REPLY [22 votes]: I think the comments may be (inadvertently) giving the impression that this is more complicated than it really is. So let's just say how everything fits together: -Given a vector bundle, its sheaf of sections is locally free. Conversely, if we have a locally free sheaf then it's the sheaf of sections of a vector bundle which we can build by taking sheaf Spec of the symmetric algebra of the locally free sheaf. -Why is this true? Well: - -A vector bundle is, by definition, something that's locally isomorphic to a trivial vector bundle. -A locally free sheaf (which we should really call a "locally free $\mathcal{O}_X$-module") is, by definition, something that's locally isomorphic to a free $\mathcal{O}_X$-module. -The sheaf of sections of a trivial vector bundle is a free $\mathcal{O}_X$-module. - -There is a bit to check here, but the picture itself is pretty clear.<|endoftext|> -TITLE: Necessary and Sufficient Conditions for a CDF -QUESTION [5 upvotes]: This is an attempt to prove Theorem 1.5.3. in Casella and Berger. 
Note that the only things that have been proven are really basic set-theory with $\mathbb{P}$ (a probability measure) theorems (e.g., addition rule). Recall for a random variable $X$, we define $$F_X(x) = \mathbb{P}(X \leq x)\text{.}$$ - -Theorem. $F$ is a CDF iff: - -$\lim\limits_{x \to -\infty}F(x) = 0$ -$\lim\limits_{x \to +\infty}F(x) = 1$ -$F$ is nondecreasing. -For all $x_0 \in \mathbb{R}$, $\lim\limits_{x \to x_0^{+}} F(x)= F(x_0)$ - - -$\Longrightarrow$ If $F$ is a CDF of $X$, by definition, -$$F_{X}(x) = \mathbb{P}(X \leq x) = \mathbb{P}\left(\{s_j \in S: X(s_j) \leq x\} \right) $$ -where $S$ denotes the overall sample space. -$(3)$ is easy to show. Suppose $x_1 \leq x_2$. Then notice -$$\{s_j \in S: X(s_j) \leq x_1\} \subset \{s_j \in S: X(s_j) \leq x_2\}$$ -and therefore by a Theorem, -$$\mathbb{P}\left(\{s_j \in S: X(s_j) \leq x_1\}\right) \leq \mathbb{P}\left(\{s_j \in S: X(s_j) \leq x_2\}\right)$$ -giving $F_{X}(x_1) \leq F_{X}(x_2)$, hence $F$ is nondecreasing. -I suppose $(1)$ and $(2)$ aren't consequences of anything more than saying that $\{s_j \in S: X(s_j) \leq -\infty\} = \varnothing$ and $\{s_j \in S: X(s_j) \leq +\infty\} = S$ (unless I'm completely wrong here). But this seems to suggest that $$\lim_{x \to -\infty}\mathbb{P}(\text{blah}(x)) = \mathbb{P}(\lim_{x \to -\infty}\text{blah}(x))$$ -where $\text{blah}(x)$ is a set dependent on $x$. At this point of the text, this hasn't been proven (if it's even true). -I'm not sure how to show $(4)$. -$\Longleftarrow$ I don't know how to prove sufficiency. Casella and Berger state that this is "much harder" than necessity, and we have to establish that there is a sample space, a probability measure, and a random variable defined on the sample space such that $F$ is the CDF of this random variable, but this isn't enough detail for me to go on. - -REPLY [3 votes]: This is from Probability: Theory and Examples by Rick Durrett, Theorem 1.2.1 and Theorem 1.2.2. -Theorem 1.2.1. 
Any distribution function $F$ has the properties that you listed. -Proof. - -(Non-decreasing) If $x\leq y$ then $\{X\leq x\}\subset\{X\leq y\}$, and the monotonicity of the probability measure gives $P(X\leq x)\leq P(X\leq y)$. -(Limits) It is because $\lim_{x\to \infty} \{X\leq x\}=\Omega$ and $\lim_{x\to -\infty} \{X\leq x\}=\emptyset$. -(Right continuous) It is because $\lim_{y\to x^+} \{X\leq y\}=\{X\leq x\}$. - -Theorem 1.2.2. If $F$ satisfies the conditions that you listed, it is the distribution function of some random variable. -Proof. Let $\Omega=(0,1),\mathcal{F}=$ the Borel sets and $P=$ Lebesgue measure. If $\omega\in(0,1)$, let $X(\omega)=\sup\{y:F(y)<\omega\}$. -If we can prove $\{\omega:X(\omega)\leq x\}=\{\omega:\omega\leq F(x)\}$, the desired result follows, since $P(\omega:\omega\leq F(x))=F(x)$. -To check $\{\omega:X(\omega)\leq x\}=\{\omega:\omega\leq F(x)\}$, we observe that if $\omega\leq F(x)$ then $X(\omega)\leq x$, since $x\notin \{y:F(y)<\omega\}$. On the other hand, if $\omega>F(x)$, then since $F$ is right continuous, there is an $\epsilon>0$ so that $F(x+\epsilon)<\omega$ and $X(\omega)\geq x+\epsilon>x$.<|endoftext|> -TITLE: There is a prime between $n$ and $n^2$, without Bertrand -QUESTION [18 upvotes]: Consider the following statement: - -For any integer $n>1$ there is a prime number strictly between $n$ and $n^2$. - -This problem was given as an (extra) qualification problem for certain workshops (which I unfortunately couldn't attend). There was a requirement to not use Bertrand's postulate (with which the problem is nearly trivial), and I was told that there does exist a moderately short proof of this statement not using Bertrand. This is my question: - -How can one prove the above statement without Bertrand's postulate or any strong theorems? - -Although I can only accept one answer, I would love to see any argument you can come up with. 
-I would also want to exclude arguments using a proof of Bertrand's postulate, unless it can be significantly simplified to prove weaker statement. -Thank you in advance. - -REPLY [10 votes]: I have stumbled upon this paper due to Erdős, which in the course of proving something far more general proves this result (see a remark at the end of this page). I am replicating that proof here, with minor modifications by myself. -Suppose $n>8$ and that there are no primes between $n,n^2$. Since clearly (obvious induction works) $\pi(n)\leq\frac{1}{2}n$, by assumption we have $\pi(n^2)=\pi(n)\leq\frac{1}{2}n$. Now consider number $\binom{n^2}{n}$. All of its prime factors are less than $n^2$, and so less than $n$. We have the following inequality: -$$\binom{n^2}{n}=\frac{n^2}{n}\frac{n^2-1}{n-1}\dots\frac{n^2-n+2}{2}\frac{n^2-n+1}{1}>\frac{n^2}{n}\frac{n^2}{n}\dots\frac{n^2}{n}\frac{n^2}{n}=\left(\frac{n^2}{n}\right)^n=n^n$$ -At the same time, consider $p$ prime and let $p^a$ be the greatest power of $p$ less than or equal to $n^2$. Since $\binom{n^2}{n}=\frac{(n^2)!}{(n^2-n)!n!}$, By Legendre's formula, exponent of the greatest power of $p$ dividing this binomial coefficient is equal to -$$\left(\lfloor\frac{n^2}{p}\rfloor-\lfloor\frac{n^2-n}{p}\rfloor-\lfloor\frac{n}{p}\rfloor\right)+\left(\lfloor\frac{n^2}{p^2}\rfloor-\lfloor\frac{n^2-n}{p^2}\rfloor-\lfloor\frac{n}{p^2}\rfloor\right)+\dots+\left(\lfloor\frac{n^2}{p^a}\rfloor-\lfloor\frac{n^2-n}{p^a}\rfloor-\lfloor\frac{n}{p^a}\rfloor\right)\leq 1+1+\dots+1=a$$ -(first equality is true, because all further terms in the sum are zero. 
First inequality is true because for any $a,b\in\Bbb R$ $\lfloor a+b\rfloor-\lfloor a\rfloor-\lfloor b\rfloor\in\{0,1\}$) -Since $\binom{n^2}{n}$ is a product of at most $\pi(n)$ prime powers, all at most $p^a\leq n^2$ (by above), we must have -$$\binom{n^2}{n}\leq (n^2)^{\pi(n)}\leq (n^2)^{\frac{1}{2}n}=n^n$$ -We have proved two contradictory inequalities, so this ends the proof by contradiction.<|endoftext|> -TITLE: Difference between completeness and compactness -QUESTION [8 upvotes]: According to Wikipedia: - -A metric space $M$ is said to be complete if every Cauchy sequence - converges in $M$ - -$ $ - -A metric space $M$ is compact if every sequence in $M$ has a - subsequence that converges to a point in $M$ - -I can't seem to find a situation where a complete metric space is not compact or vice versa. -First of all, why can't we say $M$ is complete if every sequence converges in $M$. Since if a sequence converges to a point outside $M$ it is clearly not complete? -And so if every sequence converges in $M$, then clearly every sequence has a subsequence which converges in $M$ and hence it is also compact (if it is complete). -And if every sequence has a subsequence which converges to $M$ then doesn't the sequence itself converge to $M$ in which case the compact space is also complete? -Apologies if I'm totally off track. -If someone could provide me with some examples which show the difference between the two I'd be very grateful. Preferably an example that gives me a much better intuitive understanding, because I think my main problem is intuition, I go a bit mad trying to understand rigorous definitions. - -REPLY [11 votes]: The real line $\mathbb{R}$ is complete but not compact. The key word in the definition of completeness is "Cauchy". Note that the definition of compactness does not speak of Cauchy sequences but rather of arbitrary sequences. -Here $\mathbb{R}$ is not compact because the sequence $u_n=n$ does not contain a convergent subsequence. 
- -REPLY [8 votes]: Everything is explained once I answer this question, from which all other questions follow: - -First of all, why can't we say $M$ is complete if every sequence converges in $M$. Since if a sequence converges to a point outside $M$ it is clearly not complete? - -By your proposed definition, $\mathbb{R}$ would not be complete, since it is easy to furnish divergent sequences ($(-1)^n$, for instance). What you want to say is "OK, but every sequence that should converge must con... oh wait, that is the definition of completeness". -For an example of a complete but not compact space, $\mathbb{R}$ suffices. -Compactness implies completeness. To see that is easy. Take a Cauchy sequence. Since we are on a compact set, it has a convergent subsequence. But a Cauchy sequence with a convergent subsequence must converge (this is a good exercise, if you don't know this fact).<|endoftext|> -TITLE: Integrate $\int\frac{1}{x}\sqrt{\frac{1-x^2}{1+x^2}}\,\mathrm{d}x$ -QUESTION [5 upvotes]: How do I go about integrating: -$$\int\frac{1}{x}\sqrt{\frac{1-x^2}{1+x^2}}\,\mathrm{d}x$$ -The common trigonometric substitutions don't seem to work here. -I think it requires to take some power of $x$ outside the square root but I am not able to solve further. - -REPLY [7 votes]: HINT....If you want a trig substitution that works, try $x^2=\cos 2\theta$<|endoftext|> -TITLE: What is a first countable, limit compact space that is not sequentially compact? -QUESTION [5 upvotes]: I just read a proof that adds the assumption of T1 to conclude that a first countable, limit compact and T1 space must be sequentially compact, but I didn't understand what happens if we drop the T1 assumption. -If $X$ is limit point compact and first countable, and $\{a_n\}$ is a sequence in $X$, then it has a limit point $x$. -By first countability, we can find a basis of neighborhoods of $x$ and then build a subsequence that converges to $x$. Isn't that enough? 
-Reference: The proof is from relationship among different kinds of compactness by rm50 (Theorem 2). As you can see, $q$ appears from nowhere. - -REPLY [7 votes]: Consider the right topology on $\mathbb{R}$: the topology generated by the sets of the form $$(a,\infty) = \{ x \in \mathbb{R} : x > a \}$$ for $a \in \mathbb{R}$. - -It is clearly first- (even second-) countable. (Consider only rational $a$ for the basic open sets.) -If $ A\subseteq \mathbb {R} $ is nonempty, then any $x \in \mathbb{R}$ which is (strictly) less than an element of $A$ is a limit point of $A$, hence it is limit point compact. -The sequence $\langle -n \rangle_{n \in \mathbb {N}}$ has no convergent subsequence, hence it is not sequentially compact. - - -Note that in T1-spaces, to be a limit point of a set every neighbourhood must contain infinitely many points of that set. This is what allows for the construction of the subsequence. - -If $\langle x_n \rangle_{n \in \mathbb {N}}$ is a sequence in $X$, then either it has a constant subsequence (which obviously converges), or the set $A= \{ x_n : n \in \mathbb {N} \}$ is infinite, and hence has a limit point, $ x $. -Fix a decreasing countable base $\{ V_k \}_{k \in \mathbb{N}}$ for $x$. -Recursively pick $n_k \in \mathbb{N}$ ($k \in \mathbb{N}$) so that $x_{n_k} \in V_k$ and $x_{n_k} > x_{n_{k-1}}$. This can be done because $V_k \cap A$ is infinite, and thus so is the set $\{ n \in \mathbb {N} : x_n \in V_k\}$. (Or, to go a bit more slowly, since $X$ is T1 finite sets are closed, and so $V_k \setminus \{ x_n : n \leq n_{k-1}, x_n \neq x \}$ is an open neighbourhood of $x$, and so contains an element $z$ of $A$ distinct from $x$. Since $z\in A$ there must be an $n$ such that $z=x_n$, and by choice of neighbourhood of $x$ it must be that $n > n_{k-1}$, so set $n_k = n$.) -The subsequence $\langle x_{n_k} \rangle_{k \in \mathbb {N}}$ then converges to $x$. 
- -However for non-T1-spaces a neighbourhood of a limit point may only contain a single point of the set, which makes the construction of a subsequence as in the third paragraph of the above demonstration problematic. -In the above space, the only open set which contains infinitely many points of the sequence $\langle -n \rangle_{n \in \mathbb {N}}$ is the entire space $\mathbb {R}$, although $\overline { \{ -n : n \in \mathbb {N}\}} = (-\infty,-1]$.<|endoftext|> -TITLE: Must a proper curve minus a point be affine? -QUESTION [10 upvotes]: Let $C$ be a proper smooth geometrically connected curve over a field $K$, and let $P\in C(K)$ be a point. Must $C - P$ be affine? -EDIT: By Riemann-Roch, you can definitely find functions $f_1,\ldots,f_r : C-P\longrightarrow\mathbb{A}^n_K$, but how do you guarantee that for some $n$, you can find enough such $f_i$'s such that this gives you an embedding? -EDIT: Is the same true with $C$ not smooth? - -REPLY [22 votes]: Theorem -If a nonzero finite number of points $p_1,\dots, p_r $ are deleted from $C$ the resulting curve will be affine. -Indeed consider the divisor $D=p_1+\dots+ p_r $ on $C$. -Since it has positive degree some positive multiple $nD$ of it will be very ample. -Thus we get an embedding of $j:C\to \mathbb P^N$ (for some huge $N$) and a hyperplane section divisor $\Delta =H\cap j(C)$ on $j(C)$ such that $j^*\Delta=nD$. -But then $C\setminus \{p_1,\dots, p_r\}$ is isomorphic to $j(C)\cap (\mathbb P^N\setminus H)\cong j(C)\cap \mathbb A^N$ (the complement of a hyperplane in projective space is affine space) and since this last variety $j(C)\cap \mathbb A^N$ is clearly affine, so is $C\setminus \{p_1,\dots, p_r\}$. -Edit -The theorem is valid even if $C$ is singular. 
-To see that, consider the finite normalization morphism $n:\tilde C\to C$ and delete the inverse image of $\{p_1,\dots, p_r\}$, obtaining the smooth curve $C'=\tilde C\setminus n^{-1}(\{p_1,\dots, p_r\})$ which is affine by the result already proved for smooth curves. -Now consider the restricted finite morphism $n':C'\to C\setminus \{p_1,\dots, p_r\}$. -Since $C'$ is affine and the finite morphism $n'$ is surjective the curve $C\setminus \{p_1,\dots, p_r\}$ will also be affine by Chevalley's Theorem (EGA $_{II}$, Théorème (6.7.1), page 136), and we are done.<|endoftext|> -TITLE: Topology textbook with a solution manual -QUESTION [8 upvotes]: Does anyone know of a good topology textbook that has a solutions manual for at least some of the problems? Older is fine; I just need to be able to check my own work. -I've researched best topology books/free topology books, but most do not have any solutions to problems provided. -Thanks for your help; hope I'm posting in the right forum. - -REPLY [7 votes]: You can read Munkres; it's a good book, and there are solutions online at http://dbfin.com/topology/munkres - -REPLY [5 votes]: Elementary Topology Problem Textbook -O. Ya. Viro, O. A. Ivanov, N. Yu. Netsvetaev, V. M. Kharlamov -Great book, with many easy problems, and with basically everything solved at the end of each chapter.<|endoftext|> -TITLE: Difference Between Tensoring and Wedging. -QUESTION [14 upvotes]: Let $V$ be a vector space and $\omega\in \otimes^k V$. There are $2$ ways (at least) of thinking about $\omega\otimes \omega$. -1) We may think of $\otimes^k V$ as a vector space $W$, and $\omega\otimes \omega$ as a member of $W\otimes W$. -2) We may think of $\omega\otimes \omega$ as a member of $\otimes^{2k}V$. -The two interpretations are "the same" because $W\otimes W$ is naturally isomorphic to $\otimes^{2k} V$. - -However, the situation is a bit different when talking about "wedging". - -Let $\eta\in \Lambda^k V$. We want to consider $\eta\wedge \eta$. 
-1) Let $W=\Lambda^k V$ and think of $\eta\wedge \eta$ as a member of $\Lambda^2 W$. Then $\eta\wedge \eta=0$ by super-commutativity of the wedge-product. -2) Think of $\eta\wedge\eta$ as a member of $\Lambda^{2k}V$. Then $\eta\wedge \eta$ may not be $0$. -Perhaps this confusion would not arise if we write $\wedge_V$ rather than $\wedge$, for when wedging we must remember the base space. Moreover, there is no such thing as taking the wedge product of two vector spaces, though we can talk about the tensor product of two vector spaces. -Admittedly, my mind is not completely clear here. Can somebody throw some more light on the different behaviours of tensoring and wedging. - -REPLY [7 votes]: This is not really a direct answer but is just too long for a comment. -One curious fact about the wedge construction is that $\bigwedge^n V$ can be (functorially) realized either as a subspace of $\bigotimes^n V$ or as a quotient. (These realizations are canonically isomorphic when the characteristic of the underlying field is $0$ or greater than $n$, but otherwise are non-canonically isomorphic.) -Although the quotient construction tends to be more natural, it's often useful to think about the subspace construction. The little wedge symbol means different things, depending on your construction. -In the quotient construction, $\bigwedge^n V$ is the quotient of $\bigotimes^n V$ by the subspace generated by symbols with repeated vectors, and the symbol $v_1 \wedge \ldots \wedge v_n$ means "the image of $v_1 \otimes \ldots \otimes v_n$ under the quotient map." -In the subspace construction, on the other hand, $\bigwedge^n V$ is the subspace of $\bigotimes^n V$ on which $S_n$ acts via the sign character, and the symbol $v_1 \wedge \ldots \wedge v_n$ means either (depending on your convention and on whether $n!
= 0$ in your field) -$$ -\sum_{\sigma \in S_n} (-1)^{\text{sign}(\sigma)} v_{\sigma(1)}\otimes \ldots \otimes v_{\sigma(n)}$$ - or -$$ \frac{1}{n!} \sum_{\sigma \in S_n} (-1)^{\text{sign}(\sigma)} v_{\sigma(1)}\otimes \ldots \otimes v_{\sigma(n)}. -$$ -(The second convention has the advantage that the natural map from the subspace interpretation to the quotient interpretation is compatible with the notation, and the disadvantage that it's only available in characteristic prime to $n!$.) -Now let's think about $\bigwedge^2 \bigwedge^n V$ versus $\bigwedge^{2n} V$ in terms of the subspace interpretation. The former is the subspace of $\bigotimes^2 \bigwedge^n V$ on which $S_2$ acts by the sign character, which is the subspace of $\bigotimes^2 \bigotimes^n V$ on which $S_2$ and $S_n$ act independently via the sign character. Under the natural "unravelling map" -$$ -\bigotimes^2(\bigotimes^n V) \to \bigotimes^{2n} V -$$ -we get the subspace of $\bigotimes^{2n} V$ on which $(S_n \times S_n) \rtimes S_2 < S_{2n}$ acts via the product of the sign characters. But this is a weaker, and different, demand than demanding that all of $S_{2n}$ act via the sign character, and so this subspace is bigger. -(EDIT: see comments for more details.) In terms of representation theory, your observation could be restated as follows: -Write $G = (S_n \times S_n) \rtimes S_2.$ Then there is a natural embedding $G \hookrightarrow S_{2n}$. Now for a field $k$ there is a unique character $G \to k^\times$ which restricts to the sign character on each $S_n$ and to the sign character on $S_2$. There is also a character $G \to k^\times$ using the embedding $G \hookrightarrow S_{2n}$ followed by the sign character of $S_{2n}$. These characters aren't the same.<|endoftext|> -TITLE: How do I find the roots of this polynomial of degree $4$? 
-QUESTION [6 upvotes]: I am studying for finals and in the review packet is shown this problem: -$$P(x)=2x^4 + 5x^3 + 5x^2 + 20x - 12$$ -I don't know what to do, I have already tried looking in the textbook and Khan academy. - -REPLY [4 votes]: Have you heard of the rational root test? -It implies that rational roots to this are of the form $p/q$ where $p$ divides $12$ and $q$ divides $2$. There are only a few numbers like this so you can check them. -Once you have a few roots, you can divide and get a smaller polynomial that is easier to manage.<|endoftext|> -TITLE: Properties of 3-vector dot product -QUESTION [5 upvotes]: I've been playing around with an extension of a dot product to three vectors, as set forth in this question. Basically, if you have three vectors A, B, and C, then you could compute the following -$$TD(A,B,C) = \sum_{i=1}^N a_i b_i c_i$$ -where TD means "triple dot." I realize this isn't an accepted notation but it is useful for this question. -I'm curious if anyone has shown that -$$TD(A,B,C)\leqslant \lVert A \rVert \cdot\lVert B \rVert\cdot \lVert C \rVert.$$ -I suppose it might involve an extension of the Rearrangement inequality to three vectors. - -REPLY [2 votes]: A short proof can be given using the Frobenius norm: As already pointed out in the question you linked to, we have -$$ - \mathrm{TD}(a,b,c) = a^T \cdot \mathrm{diag}(b_1, \dotsc, b_n) \cdot c -$$ -where $\mathrm{diag}(b_1, \dotsc, b_n)$ denotes the diagonal matrix with diagonal entries $b_1, \dotsc, b_n$. Using the submultiplicativity of the Frobenius norm we find that -\begin{align*} - \mathrm{TD}(a,b,c) - &\leq |\mathrm{TD}(a,b,c)| - = \|\mathrm{TD}(a,b,c)\|_F - = \|a^T \cdot \mathrm{diag}(b_1, \dotsc, b_n) \cdot c\|_F \\ - &\leq \|a^T\|_F \cdot \|\mathrm{diag}(b_1, \dotsc, b_n)\|_F \cdot \|c\|_F - = \|a\| \cdot \|b\| \cdot \|c\|, -\end{align*} -where $\|\cdot\|_F$ denotes the Frobenius norm. 
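The inequality just shown is also easy to sanity-check numerically before proving it; a quick sketch in Python (with TD written exactly as in the question):

```python
import math
import random

def td(a, b, c):
    # "triple dot": the sum of componentwise products
    return sum(x * y * z for x, y, z in zip(a, b, c))

def norm(v):
    # Euclidean norm
    return math.sqrt(sum(x * x for x in v))

random.seed(1)
for _ in range(10_000):
    a, b, c = ([random.uniform(-1.0, 1.0) for _ in range(4)] for _ in range(3))
    # the claimed bound, with a tiny slack for floating-point roundoff
    assert td(a, b, c) <= norm(a) * norm(b) * norm(c) + 1e-12
```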
-Another proof can be given by using the Cauchy-Schwarz inequality: We can assume w.l.o.g. that $a_i, b_i, c_i \geq 0$ for every $1 \leq i \leq n$. Because $b_i \geq 0$ for all $1 \leq i \leq n$ we find that the bilinear form $\langle \cdot, \cdot \rangle_b$ defined via -$$ - \langle x,y \rangle_b - = x^T \cdot \mathrm{diag}(b_1, \dotsc, b_n) \cdot y - = \mathrm{TD}(x,b,y) -$$ -is symmetric and positive semidefinite with -$$ - \|x\|_b - = \sqrt{\langle x,x \rangle_b} - = \sqrt{ \sum_{i=1}^n b_i x_i^2 }. -$$ -Notice that for all $1 \leq i,k \leq n$ we have $b_i b_k \leq \sum_{j=1}^n b_j^2$: If $i = k$ this is clear and if $i \neq k$ then -$$ - b_i b_k \leq 2 b_i b_k \leq b_i^2 + b_k^2 \leq \sum_{j=1}^n b_j^2. -$$ -So from the Cauchy-Schwarz inequality it follows that -\begin{align*} - \mathrm{TD}(a,b,c) - &= \langle a, c \rangle_b - \leq \|a\|_b \cdot \|c\|_b - = \sqrt{ \sum_{i=1}^n b_i a_i^2} \cdot \sqrt{\sum_{k=1}^n b_k c_k^2 } \\ - &= \sqrt{ \sum_{i,k=1}^n a_i^2 b_i b_k c_k^2} - \leq \sqrt{ \sum_{i,j,k=1}^n a_i^2 b_j^2 c_k^2 } \\ - &= \sqrt{\sum_{i=1}^n a_i^2} \cdot \sqrt{\sum_{j=1}^n b_j^2} - \cdot \sqrt{\sum_{k=1}^n c_k^2} - = \|a\| \cdot \|b\| \cdot \|c\|. -\end{align*} -It is worth noticing that using the approach via the Frobenius norm we can also directly generalize our results to arbitrary $x^1, \dotsc, x^m \in \mathbb{C}^n$, in the sense that -\begin{align*} - \left|\sum_{i=1}^n x^1_i \dotsm x^m_i\right| - &= \left\| - (x^1)^T - \cdot \mathrm{diag}(x^2_1, \dotsc, x^2_n) - \dotsm \mathrm{diag}(x^{m-1}_1, \dotsc, x^{m-1}_n) - \cdot x^m - \right\|_F \\ - &\leq - \|(x^1)^T\|_F - \cdot \|\mathrm{diag}(x^2_1, \dotsc, x^2_n)\|_F - \dotsm \|\mathrm{diag}(x^{m-1}_1, \dotsc, x^{m-1}_n)\|_F - \cdot \|x^m\|_F \\ - &= \|x^1\| \dotsm \|x^m\|. -\end{align*}<|endoftext|> -TITLE: Intuition behind Riesz-Markov Representation Theorem -QUESTION [9 upvotes]: I am currently reading Big Rudin and I have this theorem that I've been struggling with for some time now. 
-Usually, when reading, I either try to get a concrete intuition behind the idea, visualise the theorem and its proof in my head, or at least connect it to something I already know, as well as try to connect the proof to other proofs I know. -However, upon this theorem, I truly stumbled. As Rudin notes in the beginning of the chapter, one can see a little connection between the measure and linear functionals as $b-a$ can be approximated by $\Lambda f$ where $\Lambda f=\int_a^b fdx$ and $f$ is a continuous function on $[a;b]$ with range lying in $[0;1]$. However, this honestly felt like "cheating". -As to why this is cheating, it seems so to me for various reasons. First of all, he gave the most trivial of examples while one needs a general idea. However, the real reason is the fact that the linear function he uses is actually intimately related to measure by itself, without any need for a representation theorem. The ideas of measure and integrations are intertwined and thus stating that we can find a measure that represents an integral (especially since it's only on an interval) seems useless and uninformative. -Is there a way to intuitively understand why such a representation is possible or is this theorem too complex for any intuition? -Thanks in advance. - -REPLY [5 votes]: So the statement is that every continuous linear functional on $C[0,1]$ is given by integration against some Borel measure. First, the fact that Borel measures give continuous linear functionals is easy to prove, using pretty much whatever measure theoretic convergence theorem you want. That may seem obvious, but it is important for intuition. -To go the other way, recall that measuring a set is the same as integrating the indicator function of that set. So if $F$ is a continuous linear functional and $f_n$ converges to $1_A$ in some appropriate sense, then $\lim_{n \to \infty} F(f_n)$ is the natural candidate for $\mu(A)$. Thus you just need to construct such an approximating sequence. 
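That approximation can be made concrete with a crude discretization: integration against Lebesgue measure on $[0,1]$ behaves like a continuous linear functional, and feeding it continuous approximations of an indicator function recovers the measure of the set. A numerical sketch (the grid and all names here are mine, not Rudin's):

```python
N = 100_000
xs = [i / N for i in range(N + 1)]

def F(f):
    # the functional: integrate f against Lebesgue measure on [0,1] (grid average)
    return sum(f(x) for x in xs) / len(xs)

def ramp(n):
    # continuous piecewise-linear approximation of the indicator of (0.2, 0.5)
    return lambda x: max(0.0, min(1.0, n * min(x - 0.2, 0.5 - x)))

# F(ramp(n)) climbs toward the Lebesgue measure of (0.2, 0.5), which is 0.3
approx = [F(ramp(n)) for n in (10, 100, 1000)]
```

Each ramp integrates to $0.3 - 1/n$ exactly, so the values increase toward $0.3$ as the ramps squeeze down onto the indicator.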
Can you do that if $A$ is assumed open? How can you extend that case to the general one?<|endoftext|>
-TITLE: Is there a closed form expression for the "generalized" addition of the first $n$ numbers?
-QUESTION [12 upvotes]: Firstly, I will explain what I am trying to do intuitively. We take the sum of the first $n$ positive integers. Let's say this sum is equal to $q$. Then you add that sum to the sum of the first $q$ positive integers. Let's say this new sum is equal to $m$. Then you add that to the sum of the first $m$ positive integers. The number of times this process is iterated is specified by some $k$, which, along with $n$, is the independent variable for this function.
-Formally, suppose we define a function $\sigma: \mathbb{N}\times\mathbb{N} \rightarrow \mathbb{N}$ recursively as follows.
-$$\sigma(0,n) = n$$
-and if $k>0$, $$\sigma(k,n) = \sum_{j = 0}^{k-1}\sum_{i = 1}^{\sigma(j,n)} i$$
-For example, $$\sigma(1,n) = \sum_{j = 0}^{0}\sum_{i = 1}^{\sigma(j,n)} i = \sum_{i = 1}^{\sigma(0,n)} i = \frac{n(n+1)}{2}$$
-$$\sigma(2,n) = \sum_{j = 0}^{1}\sum_{i = 1}^{\sigma(j,n)} i = \sum_{i = 1}^{\sigma(0,n)} i + \sum_{i = 1}^{\sigma(1,n)} i = \sum_{i = 1}^{n} i + \sum_{i = 1}^{\frac{n(n+1)}{2}} i$$
-Is there a "closed form" expression, or simply any general formula, for $\sigma(k,n)$?
-
-REPLY [4 votes]: For $k \geq 1$ the recurrence can be written as:
-$$
-\begin{aligned}
-\sigma(k + 1, n) &= \sigma(k, n) + \frac{\sigma(k, n)^2 + \sigma(k, n)}{2}\\
-&= \frac{\sigma(k, n)^2 + 3\sigma(k, n)}{2}\\
-\end{aligned}
-$$
-(For $k=0$ the definition gives $\sigma(1,n)=\frac{n(n+1)}{2}$ directly, but that first step does not affect the analysis.) Since $n$ doesn't matter for this analysis, I'll rewrite with $\sigma(k, n) = \sigma_k$ for conciseness.
-$$
-\sigma_{k+1} = \frac{\sigma_k^2 + 3\sigma_k}{2}
-$$
-This is a quadratic recurrence relation. I'll put it in standard form $S_{k+1} = aS_k^2 + bS_k + c$.
-$$
-\sigma_{k+1} = \frac{1}{2}\sigma_k^2 + \frac{3}{2}\sigma_k
-$$
-Now we're just trying to find a closed form for a quadratic recurrence relation.
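This recurrence, and the substitution $T_k = \frac{1}{2}\sigma_k + \frac{3}{4}$ tried next, can be sanity-checked numerically against the original double-sum definition. A quick sketch (the function names are mine; note that the quadratic step matches the definition from $k \geq 1$ on):

```python
from fractions import Fraction

def tri(m):
    # m-th triangular number: 1 + 2 + ... + m
    return m * (m + 1) // 2

def sigma(k, n):
    # direct translation of the question's recursive definition
    if k == 0:
        return n
    return sum(tri(sigma(j, n)) for j in range(k))

for n in range(1, 6):
    for k in range(1, 5):
        s, s_next = sigma(k, n), sigma(k + 1, n)
        # quadratic recurrence from this answer (valid for k >= 1)
        assert s_next == (s * s + 3 * s) // 2
        # T_k = sigma_k/2 + 3/4 then satisfies T_{k+1} = T_k^2 + 3/16
        T = Fraction(s, 2) + Fraction(3, 4)
        T_next = Fraction(s_next, 2) + Fraction(3, 4)
        assert T_next == T * T + Fraction(3, 16)
print("recurrence and substitution verified")
```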
Following Will Jagy's answer here, we first try to create another sequence $T_k$ with a recurrence of the form $T_{k+1} = T_k^2 + c$. It turns out this sequence is just: -$$ -T_k = \frac{1}{2}\sigma_k + \frac{3}{4} -$$ -with the recurrence -$$ -T_{k+1} = T_k^2 + \frac{3}{16} -$$ -(Check me!). Will Jagy then says that there are only two cases with a closed form: when $c = 0$ and when $c = -2$. Neither case holds, so there isn't a closed form. -I would love to know more about this problem in generality. Why are those two cases the only ones that can be solved?<|endoftext|> -TITLE: Why can you mix Partial Derivatives with Ordinary Derivatives in the Chain Rule? -QUESTION [14 upvotes]: This question is a simplified version of this previous question asked by myself. -The following is a short extract from a book I am reading: - -If $u=(x^2+2y)^2 + 4$ and $p=x^2 + 2y$ $\space$ then $u=p^2 + 4=f(p)$ therefore $$\frac{\partial u}{\partial x}=\frac{\rm d f(p)}{\rm d p}\times \frac{\partial p}{\partial x}=2xf^{\prime}(p)\tag{1}$$ and $$\frac{\partial u}{\partial y}=\frac{\rm d f(p)}{\rm d p}\times \frac{\partial p}{\partial y}=2f^{\prime}(p)\tag{2}$$ - -I know that the chain rule for a function of one variable $y=f(x)$ is $$\begin{align}\color{red}{\fbox{$\frac{{\rm d}}{{\rm d}x}=\frac{{\rm d}}{{\rm d}y}\times \frac{{\rm d}y}{{\rm d}x}$}}\color{red}{\tag{A}}\end{align}$$ -I also know that if $u=f(x,y)$ then the differential is -$$\begin{align}\color{blue}{\fbox{${{\rm d}u=\frac{\partial u}{\partial x}\cdot{\rm d} x+\frac{\partial u}{\partial y}\cdot{\rm d}y}$}}\color{blue}{\tag{B}}\end{align}$$ -I'm aware that if $u=u(x,y)$ and $x=x(t)$ and $y=y(t)$ then the chain rule is $$\begin{align}\color{#180}{\fbox{$\frac{\rm d u}{\rm d t}=\frac{\partial u}{\partial x}\times \frac{\rm d x}{\rm d t}+\frac{\partial u}{\partial y}\times \frac{\rm d y}{\rm d t}$}}\color{#180}{\tag{C}}\end{align}$$ -Finally, I also know that if $u=u(x,y)$ and $x=x(s,t)$ and $y=y(s,t)$ then the chain rule is 
$$\begin{align}\color{#F80}{\fbox{$\frac{\partial u}{\partial t}=\frac{\partial u}{\partial x}\times \frac{\partial x}{\partial t}+\frac{\partial u}{\partial y}\times \frac{\partial y}{\partial t}$}}\color{#F80}{\tag{D}}\end{align}$$
-Could someone please explain the origin or meaning of equations $(1)$ and $(2)$?
-The reason I ask is because I am only familiar with equations $\color{red}{\rm (A)}$, $\color{blue}{\rm (B)}$, $\color{#180}{\rm (C)}$ and $\color{#F80}{\rm (D)}$ so I am not used to seeing partial derivatives mixed up with ordinary ones in the way they were in $(1)$ and $(2)$.
-Many thanks,
-BLAZE.
-
-REPLY [9 votes]: This is often confusing because there is a conflation of the symbol for a function argument as opposed to a function itself. For example, when you write $f(p) = p^2 + 4$, you are thinking of $f$ as a function and $p$ as the argument of that function, which could be any dummy variable. In fact, let us write $f(\xi) = \xi^2+4$, which is the same function with simply another symbol representing the rule that $f$ implies. At the same time, you are using the symbol $p$ as a function $p(x,y) = x^2 + 2y$. Now, with the functions $f(\xi)$ and $p(x,y)$ you have $u(x,y) = f \circ p\,(x,y)$; that is, $u$ is the composition of $f$ with $p$. Hence, using the chain rule and suppressing the variables $x$ and $y$, you have
-$$
-\frac{\partial u}{\partial x} = f'(p)\, \frac{\partial p}{\partial x} = \frac{d f}{d \xi} (p) \, \frac{\partial p}{\partial x}
-$$
-where $f' = \frac{d f}{d \xi}$ since we changed the argument symbol of $f$ to $\xi$ -- notice that $\frac{d f}{d\xi}$ is still evaluated at the function $p$ by the chain rule. If we wanted to explicitly show where the variables $x$ and $y$ would manifest, we would have
-$$
-\frac{\partial u}{\partial x}(x,y) = f'(p(x,y))\, \frac{\partial p}{\partial x}(x,y) = \frac{d f}{d \xi} (p(x,y)) \, \frac{\partial p}{\partial x}(x,y).
-$$
-Hopefully this helps.<|endoftext|>
-TITLE: Curvature of a product of Riemannian manifolds
-QUESTION [12 upvotes]: If $\mathcal{M}$ is a Riemannian manifold of constant curvature, is the manifold $\mathcal{M}^n$ with the product metric, of constant curvature? (and why?)
-Thank you
-
-REPLY [14 votes]: For your question, the answer is
-Proposition 1: Let M be a Riemannian manifold with constant curvature, then $M^n$ has constant curvature if and only if $M$ has constant curvature zero.
-For example, $S^2\times S^2$ is not a constant curvature space, as the 2-dimensional plane spanned by two vectors which come from the tangent spaces of the two $S^2$ factors has sectional curvature zero.
-Let's prove a more general result here.
-Proposition 2: Let $M=M_1\times M_2$ be the product of two Riemannian manifolds, let $R$ be its curvature tensor, and let $R_1, R_2$ be the curvature tensors of $M_1$ and $M_2$ respectively; then one can relate $R, R_1$ and $R_2$ by
-$$R(X_1+X_2,Y_1+Y_2,Z_1+Z_2,W_1+W_2)=R_1(X_1,Y_1,Z_1,W_1)+R_2(X_2,Y_2,Z_2,W_2)$$
-where $X_i, Y_i, Z_i, W_i\in TM_i$.
-To show this, you should use:
-(1) $\langle X_1+X_2,Y_1+Y_2 \rangle_{M}=\langle X_1,Y_1 \rangle_{M_1}+\langle X_2,Y_2 \rangle_{M_2}$;
-(2) $[X_1+X_2,Y_1+Y_2]_{M}=[X_1,Y_1]_{M_1}+[X_2,Y_2]_{M_2}$;
-(3) $\nabla_{X_1+X_2}^M(Y_1+Y_2)=\nabla_{X_1}^{M_1}(Y_1)+\nabla_{X_2}^{M_2}(Y_2)$.
-(1) is simply the definition of the product Riemannian metric, (2) can be shown in local coordinates, and (3) can be shown using (1) and (2) along with the Koszul formula. Also, you may find this post useful.<|endoftext|>
-TITLE: Solving functional equation $f(4x)-f(3x)=2x$
-QUESTION [11 upvotes]: Given that $f(4x)-f(3x)=2x$ and that $f:\mathbb{R}\rightarrow\mathbb{R}$ is an increasing function, find $f(x)$.
My thoughts so far: substituting $\frac{3}{4}x$, $\left(\frac{3}{4}\right)^2x$, $\left(\frac{3}{4}\right)^3x$, $\ldots$, we get that:
-$$f(4x)-f(3x)=2x$$
-$$ f\left(4\cdot\frac{3}{4}x\right)-f\left(3\cdot\frac{3}{4}x\right)=2\cdot\frac{3}{4}x $$
-$$ f\left(4\cdot\left(\frac{3}{4}\right)^2x\right)-f\left(3\cdot\left(\frac{3}{4}\right)^2x\right)=2\cdot\left(\frac{3}{4}\right)^2x $$
-$$ f\left(4\cdot\left(\frac{3}{4}\right)^3x\right)-f\left(3\cdot\left(\frac{3}{4}\right)^3x\right)=2\cdot\left(\frac{3}{4}\right)^3x $$
-$$\ldots$$
-Then note that after adding all these equations we get:
-$$ f(4x)=\sum_{k=0}^{\infty}\left(\frac{3}{4}\right)^k2x $$
-And this series obviously converges to $8x$. Substituting $\frac{1}{4}x$ we get:
-$$ f(x)=2x. $$
-Is this correct? ;)
-
-REPLY [8 votes]: It's almost correct. You are correct that we can deduce that
-$$f(4x)=f(3x)+2x$$
-and by repeatedly applying this we get
-$$f(4x)=f\left(4\left(\frac{3}4\right)^kx\right)+\sum_{n=0}^{k-1}2\left(\frac{3}4\right)^{n}x$$
-You make an error on the next step, however. You mean to take a limit as $k$ goes to infinity, but you do this incorrectly. In particular, the correct expression would be:
-$$f(4x)=\lim_{k\rightarrow\infty}f\left(4\left(\frac{3}4\right)^kx\right)+8x$$
-where we have a term of $\lim_{k\rightarrow\infty}f\left(4\left(\frac{3}4\right)^kx\right)$ that you omitted; in particular, this can be any constant, and the constant can be different for positive and negative numbers. It does, however, exist since $f$ is increasing. Thus, the solutions are of the form:
-$$f(x)=\begin{cases}2x+c_1&&\text{if }x>0\\c_2&&\text{if }x=0\\2x+c_3&&\text{if }x<0\end{cases}$$
-for some constants $c_1\geq c_2 \geq c_3$.<|endoftext|>
-TITLE: Existence of mathematical objects constructed using the axiom of choice
-QUESTION [9 upvotes]: Let us consider the Vitali set $V \subset \mathbb R$, which is constructed using the axiom of choice.
-(I could take any other mathematical "object" that can be constructed using the axiom of choice, but I chose the Vitali set just to make the purpose clearer).
-On the one hand, the Vitali set exists thanks to the axiom of choice. On the other hand, the power set $\mathcal P (\mathbb R)$ exists in ZF without assuming AC. In some sense, the existence of any $A \in \mathcal P (\mathbb R)$ is independent of (i.e. does not require) AC.
-But then, I had the following reflection: the existence of the Vitali set $V \in \mathcal P (\mathbb R)$, as an element of this power set, should not require the axiom of choice... There must be some fallacy here; that's why I would like some clarifications.
-To be concise: can we say that the axiom of choice "affects" the existence of elements in $\mathcal P (\mathbb R)$ (as it seems to be the case for the existence of the Vitali set), although the construction of $\mathcal P (\mathbb R)$ (in ZF) has nothing to do with AC?
-Thank you!
-
-REPLY [2 votes]: It is important to distinguish between actual objects (in our case 'sets') and definitions thereof. While we can prove in $\operatorname{ZF}$ (with or without choice) that every model of $\operatorname{ZF}$ has to contain an object that satisfies (inside this model) the definition of '$\mathcal P(\mathbb R)$', this doesn't tell us the whole story of what this particular object actually is.
-(In the following, let's say that we defined $\mathbb R$ to be the set of all functions $f \colon \omega \to \omega$. It doesn't really matter, but we have to be a bit careful as to how to define the reals as a set for some absoluteness reasons... Let's also take $\operatorname{ZFC}$ as our background theory.)
-For example: Let $(M; \in)$ be a countable transitive model of (a sufficiently large fragment of) $\operatorname{ZF}$. Then there will be some $x \in M$ such that
-$$
-(M; \in) \models 'x = \mathcal P (\mathbb R)'
-$$
-(by which I mean that $M$ thinks that $x$ is the powerset of the reals).
As $M$ is countable and transitive, we have that $x \subseteq M$ and thus $x$ has to be countable as well. But (this is the point where it matters how we defined the reals as a set) from the point of view of our background universe, $x$ really consists of subsets of reals, so $x$ is a subset of the 'true' powerset of the reals. It's just the case that it misses most subsets. In particular, $x$ may or may not contain some $y \in x$ such that
-$$
-(M; \in ) \models 'y \text{ is a Vitali set}'
-$$
-If $(M; \in)$ satisfies choice, then there will indeed be such a $y \in x$.<|endoftext|>
-TITLE: Using test functions to "test" whether functions vanish
-QUESTION [6 upvotes]: Let $U$ be an open subset of $\mathbb R^n$ and let $f \in L_{\text {loc}}^1(U)$ (i.e. $f$ is integrable on compact subsets of $U$). Suppose $\int_U f \phi = 0$ for all test functions $\phi \in C_c^\infty(U)$.
-Does this imply that $f = 0$ a.e.? If so, why?
-I ask this question because I'm learning about analysis of PDEs from Evans' textbook. This fact, or something similar to it, is used everywhere, but I can't think of a rigorous proof for it. One approach I tried is to approximate indicator functions on arbitrary measurable subsets of $U$ by their mollifications, but I haven't managed to get this to work. I wonder if there is a better method.
-
-REPLY [3 votes]: You can get $f=0$ almost everywhere. Take an $x\not= 0$, and let $B$ be a ball centered at $x$, but small enough to be away from the origin (contained in $U$). Then take $\phi_n \in C^{\infty}_c(B)$ to be approximations of $\chi_B$ from below (just so you can use dominated convergence). Then, for every $n\in \mathbb{N}$,
-$$
-\int_{U}f(x)\phi_n(x)dx = 0.
-$$
-Then, from the dominated convergence theorem you get that
-$$
-\int_Bf(x)dx = 0.
-$$
-Dividing by the volume of the ball, you get $\frac{1}{|B|}\int_Bf(x)dx = 0$. Sending the radius of the ball to zero and using Lebesgue's differentiation theorem, you get $f \equiv 0$ a.e. in $U$.
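For concreteness, here is one standard way (an illustrative choice of mine, not forced by the argument) to produce such a sequence: shrink the ball slightly and mollify. With $B_n := \{y \in B : \operatorname{dist}(y,\partial B) > 2/n\}$ and $\eta_{1/n}$ a standard mollifier supported in the ball of radius $1/n$, set
$$
\phi_n := \eta_{1/n} * \chi_{B_n} \in C_c^\infty(B), \qquad 0 \le \phi_n \le \chi_B.
$$
Each fixed $y \in B$ lies in $B_n$ with $\operatorname{dist}(y,\partial B_n) > 1/n$ once $n$ is large, so $\phi_n(y) = 1$ eventually; hence $\phi_n \to \chi_B$ pointwise on $B$, and the integrands are dominated by the integrable function $|f|\chi_B$, which is exactly what the dominated convergence step needs.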
Note that you can't guarantee $f=0$ pointwise (just take $f$ nonzero at a single point).<|endoftext|>
-TITLE: Proper ideal $I \implies \exists $ prime ideals $P_i$ such that $P_1 \cdots P_n \subset I$.
-QUESTION [5 upvotes]: Let the below ideals be in a commutative Noetherian ring $R$.
-Corollary 22. (3) There are prime ideals $P_1, \dots, P_n$ (not necc. distinct) $\supset I$ such that $P_1\cdots P_n \subset I$.
-(Out of D&F)
-
-Prove (3) of Corollary 22 directly by considering the coll. $\mathcal{S}$ of ideals that do not contain a finite product of prime ideals. [If $I$ is a maximal element in $\mathcal{S}$, show that since $I$ is not prime there are ideals $J, K$ properly containing $I$ (hence not in $\mathcal{S}$) with $JK \subset I$.]
-
-I know:
-
-$I$ is not prime $\implies \exists$ ideals $J,K$ such that $JK \subset I$ yet $J \not\subset I$ and $K \not\subset I$.
-$I$ is not prime $\implies$ in particular not maximal $\implies$ $I$ properly contained in some maximal ideal $J$.
-From examining the proof of Proposition 20, the proof of this would go something like: if $\mathcal{S}$ were not empty, then since $R$ is Noetherian, all chains in $\mathcal{S}$ are upper bounded and so $\mathcal{S}$ contains a maximal element $I$.
-
-I can't piece it together from these facts alone; what am I missing?
-
-REPLY [2 votes]: By maximality of $I$ in $S$, in the ring $R/I$ every ideal $K \subset R/I$ is contained in each of a finite set of prime ideals $K_i$ such that some product formed from them (possibly with repetitions) is contained in $K$. We may call the $K_i$ a primal set for $K$ and the product contained within $K$ a primal product for $K$.
-Since $I$ is not prime, the quotient $R/I$ has zero-divisors. Let us say $PQ=0$ for the corresponding principal ideals.
-Let $P_i,Q_i$ be primal sets for $P$ and $Q$ respectively. Since any $T'$ (the pre-image of a prime ideal $T \subset R/I$) is prime in $R$, we have that $P'_i,Q'_j$ are primal sets for $P',Q'$.
-Since $P'Q' \subset I$, $I \subset P'_i$ and $I \subset Q'_j$, the $P'_i$ and $Q'_j$ together form a primal set for $I$ (with the corresponding primal product being the product of the primal products for $P'$ and $Q'$ separately).
-This contradicts the assumption $I \in S$, showing that $S$ is empty.<|endoftext|>
-TITLE: The group defined by Gauss's definition of composition of forms
-QUESTION [5 upvotes]: In article 242 of Disquisitiones, Gauss investigates the properties of the direct composition of two forms of the same discriminant. In this case, he gives a "natural" choice for such a composition. Denoting this composition by $Ax^2 + Bxy + Cy^2$ (so skipping the extra "2" in Gauss's way of writing forms), Gauss notes that $A$ is determined by his definition, while $B$ is determined modulo $2A$. Once $A$ and $B$ are determined, $C$ is determined because the determinant is fixed.
-This can be rephrased by saying that Gauss composition is well defined on the equivalence classes of forms under the action of the subgroup of $\mathrm{SL}_2 (\mathbb{Z})$ consisting of matrices of the form $\begin{bmatrix} 1 & m\\0 & 1\end{bmatrix}$. These classes then form a countably generated infinite Abelian group.
-This group was studied for positive discriminant by Lenstra in his 1980 paper "On the calculation of regulators and class numbers of quadratic fields". Among other things, he embeds the group in a topological group which he notes is a subquotient of the idèle group of the corresponding quadratic field. Schoof later pointed out in "Computing Arakelov class groups" that Lenstra's topological group is essentially the Arakelov class group of the field.
-My question is, was this group studied in its own right after Gauss? Or did all number theorists of the 19th and early 20th centuries study only the class group, and later the group of fractional ideals?
- -REPLY [2 votes]: It is of course difficult to answer such questions unless the answer is positive, which I am afraid it isn't. Certainly the first two generations after Gauss who were familiar with his composition (Dirichlet, Jacobi, Eisenstein, Kummer, Pepin, Dedekind) did not study this group. After Hilbert's Zahlbericht composition was rarely used, and in fact most number theorists preferred Dedekind's language of modules. As far as I am aware, it was indeed Lenstra who pointed out the connection between Gauss's theory of composition and the idea of Shanks' infrastructure.<|endoftext|> -TITLE: Relationship between primes and practical numbers -QUESTION [13 upvotes]: This is my first post here. I am a musician, and not a mathematician, but I enjoy doing things to prime numbers and seeing what comes out. -I have defined a sequence which takes the following values for $n$: - --1 if $n$ is prime -1 if $n$ is a practical number -0 if $n$ is neither or both - -I have then taken a sequence of its partial sums. The first 50 terms are 1,1,0,1,0,1,0,1,1,1,0,1,0,0,0,1,0,1,0,1,1,1,0,1,1,1,1,2,1,2,1,2,2,2,2,3,2,2,2,3,2,3,2,2,2,2,1,2,2,2. -The plot for $n<100000$ looks quite linear: - -In order to see quite how linear it was, I then divided each term of the sequence by $n$ and got this plot: - -It seems to me like it wants to converge to some value. The arithmetic mean of the last 100 terms is 46.3225. -I vaguely understand that there are some analogies between practical numbers and primes. I am wondering how difficult it would be to establish if the above sequence does in fact converge, and if so, then to what value. I have tried it with other prime-like sequences, such as ludic numbers and lucky numbers, but the other ones didn't seem as neat... -Thanks! - -REPLY [11 votes]: The plot isn't actually linear, but it indeed looks like one. 
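(Aside: the question's first 50 partial sums are easy to reproduce; here is a short sketch of my own, where `is_practical` is a naive subset-sum check over divisors — fine for small $n$, too slow for the $n < 100000$ range of the plots.)

```python
def is_prime(n):
    return n > 1 and all(n % d for d in range(2, int(n**0.5) + 1))

def is_practical(n):
    # n is practical if every m in 1..n-1 is a sum of distinct divisors of n
    reachable = 1  # bitset: bit m set <=> m is a subset sum of divisors
    for d in range(1, n + 1):
        if n % d == 0:
            reachable |= reachable << d
    return all((reachable >> m) & 1 for m in range(1, n))

def term(n):
    # -1 if prime only, +1 if practical only, 0 if neither or both
    return int(is_practical(n)) - int(is_prime(n))

partial_sums, s = [], 0
for n in range(1, 51):
    s += term(n)
    partial_sums.append(s)
print(partial_sums)  # matches the 50 values listed in the question
```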
Here's why: -Let's introduce two functions, $\pi(x)$ and $p(x)$, that respectively count the number of prime and practical numbers less than $x$. -A famous result in number theory, the prime number theorem, tells us that -$$ -\pi(x) \sim \frac{x}{\log(x)} \quad \text{for } x \to +\infty -$$ -which essentially means that for $x$ large enough $\pi(x)$ behaves almost exactly like $x/\log(x)$.1 Remarkably, just in 2015 Weingartner showed that -$$ -p(x) \sim \frac{cx}{\log(x)} \quad \text{for } x \to +\infty -$$ -for some constant $c > 0$. -The sequence of partial sums you defined is then simply the sequence $\{s(n)\}_{n=1}^\infty$ where $s(x) := p(x) - \pi(x)$. It follows that -$$ -s(x) \sim (c-1) \frac{x}{\log(x)} \quad \text{for } x \to +\infty. -$$ -To conclude, here's what $\frac{x}{\log(x)}$ looks like on the interval $[0,10^4]$: - - 1. The almost is important: $\pi(x)$ may never be equal to $x/\log(x)$, but after a while the error will grow quite slower than $x/\log(x)$. - -Update: I should probably explicitly say something about your second graph, too, but first we need to give a precise meaning to "$\sim$". Given two functions $f,g \colon D \subseteq \Bbb{R} \to \Bbb{R}$ and $\rho \in \Bbb{R} \cup \{\pm\infty\}$, we say that $f \sim g$ for $x \to \rho$ if -$$ -\lim_{x \to \rho} \frac{f(x)}{g(x)} = 1. -$$ -Now, what can we say about the limit of the sequence depicted in your second graph? 
From the previous discussion we have
-$$
-\begin{align}
-\lim_{n \to \infty} \frac{s(n)}{n} &= \lim_{n \to \infty} \frac{p(n)-\pi(n)}{n} \\
-&= \lim_{n \to \infty} \frac{p(n)-\pi(n)}{n}\; \frac{\log(n)}{\log(n)} \\
-&= \lim_{n \to \infty} \frac{p(n)-\pi(n)}{n/\log(n)}\; \frac{1}{\log(n)} \\
-&= \left(\lim_{n \to \infty} \frac{p(n)-\pi(n)}{n/\log(n)}\right) \left(\lim_{n \to \infty}\frac{1}{\log(n)}\right) \\
-&= (c-1) \lim_{n \to \infty}\frac{1}{\log(n)} \\
-&= 0
-\end{align}
-$$
-So $s(n)/n$ does indeed converge to $0$, but I don't know how fast.<|endoftext|>
-TITLE: Interpretation for the curvature and monodromy of a connection - Reality check
-QUESTION [7 upvotes]: Let $P \to M$ be a principal $G$-bundle with connection form $\omega \in \Omega^1(P,\mathfrak{g})$. Here are the statements I'm basing my viewpoint on:
-
-
-A connection is flat (vanishing curvature) iff it is locally the pullback of the Maurer-Cartan form on $G$, i.e. for all $p \in P$ there's a neighborhood $p \in U$ and a map $f:U \to G$ satisfying $\omega|_U = f^*\omega_G$, where $\omega_G$ is the Maurer-Cartan form of $G$ (this can be proved via an integrable distribution argument).
-The monodromy of a flat connection is zero iff it is globally given by the pullback of the Maurer-Cartan form on $G$, i.e. iff there's a function $f:P \to G$ satisfying $\omega=f^*\omega_G$.
-
-
-Here's what I want to be able to say:
-
-A connection on $P$ is flat iff the bundle $TP \to P$ admits covariantly constant local sections everywhere. Meaning, for every point $p \in P$ there's a neighborhood $p \in U$ and a section $X: U \to TP$ satisfying $\omega(X)=0$.
-A flat connection on $P$ has zero monodromy iff $TP \to P$ admits a covariantly constant global section. Meaning there's a global section $\sigma : P \to TP$ satisfying $\omega(\sigma)=0$.
-
-I get a bit confused though whenever I try to formalize a proof of the above.
Sometimes I think the covariantly constant sections should be of the bundle $P \to M$ and that $TP \to P$ always has a covariantly constant section in the sense I defined; here I also get confused. My question has two parts:
-
-
-Is the above interpretation a valid one? If so how can I formalize this with minimal effort and confusion? (a hint might suffice). If not how could it be fixed?
-Does this picture still hold when moving to the category of associated bundles? In particular, do covariantly constant local (or global) vector fields all arise in this manner?
-
-REPLY [3 votes]: As far as I can see, the interpretation you give is not correct. To explain, I'll use the usual terminology of the horizontal distribution, i.e. for a point $p\in P$, the horizontal subspace $H_pP\subset T_pP$ is the kernel of $\omega(p)$. By definition of a connection, this subspace is complementary to the (canonical) vertical subspace $V_pP$, the kernel of the tangent map of the bundle projection $\pi$ at $p$. Correspondingly, one calls a vector field horizontal, if its values all lie in the horizontal distribution.
-So the condition you propose in 1. for flatness of the connection is that locally there are horizontal vector fields on $P$, whereas the condition in 2. which you intend to use for vanishing monodromy just is existence of a global horizontal vector field on $P$. But for any principal bundle endowed with a principal connection, there are many global horizontal vector fields, for example the horizontal lifts of vector fields on the base of the bundle.
-The standard interpretation of flatness of a connection is that the horizontal distribution $H$ is involutive. This is equivalent to the criterion on flatness that you use in point 1. of the first block: If $H$ is involutive, then there are local integral submanifolds for the distribution $H$. By definition, the bundle projection restricts to a local diffeomorphism on each such integral submanifold.
Local inverses to this are smooth sections $\sigma:V\to P$ for $V\subset M$ open such that $\sigma^*\omega=0$. Conversely, the image of such a section is an integral submanifold for $H$. Hence the existence of such sections is equivalent to flatness of the connection (and this should be the correct version of what you propose as 1.). A local section $\sigma$ defines a local trivialization $V\times G\to \pi^{-1}(V)$ via $(x,g)\mapsto\sigma(x)\cdot g$. Calling $f$ the second component of the inverse of this isomorphism, the pullback of the Maurer Cartan form on $G$ along $f$ has the same horizontal subspaces as $\omega$, which easily implies that the two connections coincide. Conversely, if you have a $f:U\to G$ such that which pulls back the Maurer Cartan connection to $\omega$, it is easy to see that $f$ is a submersion, so locally around each point $p\in U$, $f^{-1}(f(p))$ is a smooth submanifold of $P$, which has the same dimension as $M$, and it is easy to see that this is an integral submanifold for the horizontal distribution. -You can also bring the monodromy nicely into the picture of the horizontal distribution and conclude that vanishing monodromy is equivalent to a global section $\sigma:M\to P$ such that $\sigma^*\omega=0$ (which in particular implies that $P$ is a trivial principal bundle). -This carries over to associated vector bundles to a certain extent, in the form of local or global frames made up of parallel sections. For the existence of single local or global parallel sections, one does not need flatness of the connection. The right concept here is holonomy of a connection.<|endoftext|> -TITLE: Why are there $12$ automorphisms of $\Bbb Z\oplus \Bbb Z_{3}$? -QUESTION [10 upvotes]: Let $A:=\Bbb Z\oplus \Bbb Z_{3}$, then what is $|\text{Aut}(A)|$? My answer is $4$ but the correct answer (without explanation) turns out to be $12$! How come? 
-Well my understanding is, it just suffices to find out all the possibilities of $f(1,\bar 1)$ where $f$ is an arbitrary automorphism, since $(1,\bar 1)$ is the generator. So I think there are altogether $4$ possibilities: $(\pm 1, \pm \bar 1),(\pm 1, \mp \bar 1)$. How could there be any more?
-I'd be very grateful if anyone would solve this puzzle for me! Thanks in advance.
-
-REPLY [3 votes]: Consider a presentation of $G$: $G=\langle x,y\colon y^3=1, xy=yx\rangle$.
-Let $\sigma$ be any automorphism of $G$. Then $\sigma(y)$ could be $y$ or $y^2$ only (since $\langle y\rangle$ is the torsion subgroup of $G$, so it is invariant under all the automorphisms).
-What can be $\sigma(x)$? Of course, it could be $x,x^{-1}$. Anything more? Yes. $xy$, $x^{-1}y$, $xy^2$, $x^{-1}y^2$. That's all.
-Thus, $\sigma(y)$ has two choices and $\sigma(x)$ has $6$ choices; each choice of $\sigma(x),\sigma(y)$ gives a similar presentation of $G$, hence defines an automorphism. There are $12$.<|endoftext|>
-TITLE: Morse functions and connected sum
-QUESTION [5 upvotes]: My question is closely related to this post but it is slightly different.
-Let $M_1$ and $M_2$ be two smooth closed $n$-manifolds such that there is a Morse function $f_i:M_i\rightarrow \mathbb R$ for $i=1,2$. Moreover suppose that $\mu_k(f_i)$ is the number of critical points of index $k$ of $f_i$ for $k=0,\ldots,n$.
-If $X:=M_1\#M_2$ is the connected sum, I'd like to find a Morse function $F:X\rightarrow \mathbb R$ such that
-$$\mu_k(F)=\mu_k(f_1)+\mu_k(f_2)\quad\text{for}\; k=1,\ldots, n-1$$
-(note the range of the index $k$ from $1$ to $n-1$)
-Pay attention: I don't necessarily want the critical points of $F$ to be the union of the critical points of $f_1$ and $f_2$.
-How can I construct such $F$?
-
-Edit: I understand that I need to glue $M_1$ and $M_2$ near a maximum and a minimum, but then I don't know how to construct $F$.
-When $f_1$ and $f_2$ are heights in $\mathbb R^n$ the geometric picture is clear, but I don't know how to deal with the general case.
-
-REPLY [2 votes]: Let $D_i\subseteq M_i$ be small, open discs around the maximum / minimum respectively.
-Then we can write the connected sum as
-$M_1 \# M_2 = (M_1\setminus D_1) \sqcup_{\partial D_1=S^{n-1}\times\{1\}} (S^{n-1}\times [1,2]) \sqcup_{S^{n-1}\times\{2\} = \partial D_2} (M_2\setminus D_2)$
-Now this works for all small discs around those points. But we know that there are discs on which $f_i$ is just the function $x\mapsto f_i(0) +\|x\|^2$ (or $x\mapsto f_i(0)-\|x\|^2$ respectively). In particular: $f_i$ is constant on $\partial D_i$. This means that we can glue together $f_1$ and $f_2$ by defining a smooth function $S^{n-1}\times[1,2] \to \mathbb{R}$ that only depends on the second parameter, not on the $S^{n-1}$ parameter, and extends $f_1$ (defined on a neighbourhood of $S^{n-1}\times\{1\}$) and $f_2$ (defined on a neighbourhood of $S^{n-1}\times\{2\}$).
-This connecting function can be chosen to be monotonically increasing (w.r.t. the $[1,2]$-parameter) from the maximum of $f_1$ to the minimum of $f_2$ so that the composite $F$ does not have any critical points in the connecting cylinder.<|endoftext|>
-TITLE: What's the name of the surface and Is it a $C^2$ smooth surface?
-QUESTION [8 upvotes]: what's the name of the surface? Is it a $C^2$ smooth surface?
-Its implicit equation is:
-$(x−2)^2(x+2)^2+(y−2)^2(y+2)^2+(z−2)^2(z+2)^2+3(x^2y^2+x^2z^2+y^2z^2)+6xyz−10(x^2+y^2+z^2)+22=0$
-
-REPLY [2 votes]: It looks to be a Goursat surface; I have no software to be sure, but if you have one, try changing the parameters.
-But I'm not 100% sure.<|endoftext|>
-TITLE: Tangent vectors in $\mathbb{R}^n$
-QUESTION [5 upvotes]: I am confused with the idea of tangent vector or tangent space.
First of all, I learned that there is an isomorphism from $\mathbb{R}_a^n$ onto $T_a( \mathbb{R} ^n)$ from John M. Lee's book Introduction to Smooth Manifolds. Although we have the perspective of regarding tangent vectors as operators defined on $\mathbb{R} ^n$ or, more generally, on a manifold, I still have trouble with it.
-Again, in Lee's book,
-
-For example, any geometric tangent vector $v_a \in \mathbb{R} _a^n$ yields a map $D_{v,a}:C^\infty ( \mathbb{R} ^n)\to \mathbb{R} $, which takes the directional derivative in the direction $v$ at $a$:
 $$D_{v,a}f = D_v f(a) = \frac{d}{dt}\Big|_{t=0} f(a+tv)$$
-
-Here are my questions: Now consider a special manifold, a surface embedded in $\mathbb{R}^3$, and the tangent of the surface. I know we have to define a smooth function $f$ on the (special) manifold, and we must define a function $f$ with a three dimensional domain in order to take its directional derivative along a three dimensional vector $v_a$. However, our manifold is a two dimensional surface embedded in $\mathbb{R}^3$. Unfortunately, $2\neq 3$. So, what's the dimension of the domain of $f$ with respect to the special manifold?
-Edit: @Jack Lee: He points out that tangent vectors to the sphere are defined more abstractly as derivations. At first we have Euclidean space, $\mathbb{R}^3$, and the space $\mathbb{R}_a^n$, so we have its tangent space $T_p(\mathbb{R}^3)$, and we prove that these two vector spaces are isomorphic. We then define the general tangent space w.r.t. manifolds.
-
-The author then discusses submanifolds and the relation between the tangent space of a submanifold and that of its ambient space.
-
-Let $M$ be a smooth manifold with or without boundary, and let $S\subseteq M$ be an immersed or embedded submanifold. Since the inclusion map $\iota: S\hookrightarrow M$ is a smooth immersion, at each point $p\in S$ we have an injective linear map $d\iota_p:T_p S\to T_p M$. In terms of derivations, this injection works in the following way: for any vector $v\in T_p S$, the image vector $\tilde{v} = d\iota_p(v)\in T_p M$ acts on smooth functions on $M$ by $$\tilde{v}f = d\iota_p(v)f = v(f\circ\iota) = v(f\big|_S)$$
-
-Finally, there is also a picture in the book that helps a lot.
-
-REPLY [7 votes]: I think you're trying to read more into this definition than is there. The definition you quoted is only talking about tangent vectors to $\mathbb R^n$, not to submanifolds of $\mathbb R^n$ such as the sphere. Tangent vectors to the sphere are defined more abstractly as derivations (see p. 54). The relationship between tangent vectors to $\mathbb R^n$ and tangent vectors to a submanifold like the sphere isn't developed until Chapter 5.<|endoftext|>
-TITLE: Issues solving equations involving $x^{x^x...}$?
-QUESTION [5 upvotes]: I stumbled across this problem:
-$x^{x^{x^{...}}}=2$
-Obviously, I used the substitution trick and I got
-$x^2=2$
-and thus, $x=\pm\sqrt{2}$. I have tested that this works.
-
-However, I tried to solve
-$x^{x^{x^{...}}}=4$
-and it yields the same real answers ($x^4=4$). I have no idea why this is.
-
-REPLY [3 votes]: The infinite tetration $f(x)=x^{x^{x^\cdots}}$ only converges for $e^{-e} \leq x \leq e^{1/e}$ and assumes values in $[1/e,e]$. Thus the inverse function $f^{-1}(y)$ is only defined for $y\in [1/e,e]$. 
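As a quick numerical sanity check of these claims (a Python sketch; I take the tower to be the limit of the iteration $a_{k+1}=x^{a_k}$ with $a_0=x$):

```python
import math

def tower(x, iters=2000):
    # iterate a_{k+1} = x ** a_k, starting from a_0 = x
    a = x
    for _ in range(iters):
        a = x ** a
    return a

print(tower(math.sqrt(2)))      # converges to 2
print(math.e ** (1 / math.e))   # ≈ 1.4447, the right endpoint of the convergence range
```

No $x$ in the convergence range can give the value $4$, since the limit never exceeds $e \approx 2.718$.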
So the solution of the first example is $\sqrt{2}$ whereas the second example does not have a solution (since $4>e$).<|endoftext|> -TITLE: Use the identity $\cos 3\theta = 4 \cos^3\theta- 3 \cos \theta$ to solve the cubic equation $t^3 + pt + q = 0$ when $p, q \in \mathbb{R}$. -QUESTION [10 upvotes]: I'm self studying Ian Stewart's Galois Theory and this is Exercise 1.8 from his Third Edition: - -Use the identity $\cos 3\theta = 4 \cos^3\theta- 3 \cos \theta$ to - solve the cubic equation $t^3 + pt + q = 0$ when $p, q \in \mathbb{R}$ - such that $27 q^2 + 4p^3 < 0$. - -I read through many times his method of solving the cubic equation where he didn't use the identity above; yet I'm not sure where the identity can come into play. -His method is sketched below: -First, he substitutes $t = \sqrt[3]{u} + \sqrt[3]{v}$ and express $t^3$ in terms of $u$ and $v$ as well. Then plugging $t$ and $t^3$ in terms of $u$ and $v$ back to the original equation. Finally solving for $u$ and $v$ will immediately give the zeros. -Thanks very much for hints and help! - -REPLY [3 votes]: Note that, since $27q^2+4p^3<0$ and since $27q^2\geqslant0$, $p<0$. So, it makes sense to define $u=\sqrt{-\frac43p}$. Consider the substitution $t=u\cos\theta$. Then $t^3+pt+q$ becomes $u^3\cos^3\theta+pu\cos\theta+q$ and\begin{align}u^3\cos^3\theta+pu\cos\theta+q=0&\iff\frac{u^3\cos^3\theta+pu\cos\theta+q}{u^3/4}=0\\&\iff4\cos^3\theta+\frac{4p}{u^2}\cos\theta+\frac{4q}{u^3}=0\\&\iff4\cos^3\theta-3\cos\theta=-\frac{4q}{u^3}.\end{align}But$$\left(-\frac{4q}{u^3}\right)^2=\frac{16q^2}{(u^2)^3}=\frac{16q^2}{-\frac{64}{27}p^3}=\frac{27q^2}{-4p^3}$$and\begin{align}27q^2+4p^3<0&\iff27q^2<-4p^3\\&\iff\frac{27q^2}{-4p^3}<1\end{align}and therefore $-\frac{4q}{u^3}\in(-1,1)$. So, there is some $\theta\in\Bbb R$ such that $\cos(3\theta)=-\frac{4q}{u^3}$; just take $\theta=\frac13\arccos\left(-\frac{4q}{u^3}\right)$. 
Then, since $4\cos^3\theta-3\cos\theta=\cos(3\theta)$, $u\cos\theta$ is a root of the cubic $t^3+pt+q=0$. But you also have$$\cos\left(3\left(\theta+\frac{2\pi}3\right)\right)=-\frac{4q}{u^3}\quad\text{and}\quad\cos\left(3\left(\theta+\frac{4\pi}3\right)\right)=-\frac{4q}{u^3},$$and therefore $u\cos\left(\theta+\frac{2\pi}3\right)$ and $u\cos\left(\theta+\frac{4\pi}3\right)$ are also roots of that cubic.<|endoftext|>
-TITLE: Compute the Jacobson radical of the group ring $\mathbb{F}_2S_3$.
-QUESTION [8 upvotes]: Compute the Jacobson radical and the maximal semisimple quotient of
- the group ring $\mathbb{F}_2S_3$ of the symmetric group on three
- letters over the field with two elements, and compute the same for
- $\mathbb{F}_3S_3$.
-
-Since the ring is left Artinian (being finite), the Jacobson radical is a nilpotent two-sided ideal. So to find it, we just need to look for a maximal nilpotent ideal.
-I started by narrowing down the candidates: the Jacobson radical of $R = \mathbb{F}_2S_3$ is the intersection of the annihilators of the simple modules over $R$. The trivial representation is a simple $R$-module, and it has annihilator the augmentation ideal $\mathfrak{a}$. Therefore $J(R) \subset \mathfrak{a}$.
-The only nilpotent ideal I succeeded in finding in $\mathfrak{a}$ was the ideal $I$ generated by $s = \sum_{g \in S_3}g$. Note that $rs = 0$ if $r$ has an even number of terms and $rs = s$ if $r$ has an odd number of terms. Therefore $I$ has two elements.
-I haven't been able to prove that $I$ is a maximal nilpotent ideal, or that $R/I$ is semisimple.
-Beyond just this particular problem, how would you approach computing the Jacobson radical or nilradical of a ring (including a noncommutative ring) in general?
-
-REPLY [9 votes]: The Artin-Wedderburn theorem tells us that the maximal semisimple quotient is a product of matrix rings over finite division rings, one for each irreducible representation. 
Furthermore, every finite division ring is a field, and the unit group of any finite field is cyclic. The only nontrivial homomorphism from $S_3$ to a cyclic group is the sign homomorphism $S_3\to\mathbb{Z}/2$. It follows that any homomorphism from $\mathbb{Z}S_3$ to a finite field lands in the prime subfield (since elements of $S_3$ can only map to $\pm 1$). -So, writing $\mathbb{F}$ for either $\mathbb{F}_2$ or $\mathbb{F}_3$, the maximal semisimple quotient of $\mathbb{F}S_3$ is a product of matrix rings $M_n(K)$ for finite extensions $K$ of $\mathbb{F}$, one for each irreducible representation, and in all the cases where $n=1$ the $K$ is just $\mathbb{F}$. The only $1$-dimensional representations are the trivial representation and the sign representation, and the sign representation is the same as the trivial representation in the case $\mathbb{F}=\mathbb{F}_2$. -For $\mathbb{F}=\mathbb{F}_3$, dimension-counting now tells us there can be no more irreducible representations: the two $1$-dimensional representations take up $2$ dimensions of the semisimple quotient, and the Jacobson radical is nontrivial since it contains $\sum_{g\in S_3} g$, so there are at most $3$ dimensions left. Another irreducible representation would give a copy of $M_n(\mathbb{F}_{3^d})$ in the semisimple quotient for some $d$ and some $n>1$, which is impossible since there aren't enough dimensions left. We conclude that the two $1$-dimensional representations are the only irreducible representations for $\mathbb{F}=\mathbb{F}_3$, and so the maximal semisimple quotient is $\mathbb{F}_3\times\mathbb{F}_3$. The Jacobson radical is then the kernel of the map $\mathbb{F}_3S_3\to\mathbb{F}_3\times\mathbb{F}_3$; explicitly, it is the set of elements $\sum_{g\in S_3} a_g g$ such that $\sum a_g=0$ and $\sum a_g \sigma(g)=0$, where $\sigma(g)$ is the sign of $g$. 
-Over $\mathbb{F}_2$, on the other hand, there are up to $4$ dimensions left after accounting for the single $1$-dimensional representation and the fact that the Jacobson radical is nontrivial, so there might be a $2$-dimensional irreducible representation. To find one, note that there is a permutation representation of $S_3$ on $\mathbb{F}_2^3$, and this splits as a direct sum of a trivial subrepresentation (generated by $(1,1,1)$) and a $2$-dimensional subrepresentation (consisting of $(a,b,c)$ such that $a+b+c=0$). (Note that this splitting of the permutation representation doesn't happen over $\mathbb{F}_3$, since $(1,1,1)$ is contained in the latter $2$-dimensional subrepresentation.) This $2$-dimensional representation can easily be verified to be irreducible (for another way of seeing it, note that $\mathbb{F}_2^2\setminus\{0\}$ has three elements, and every permutation of them gives a linear map, so in fact $GL_2(\mathbb{F}_2)\cong S_3$). -So over $\mathbb{F}_2$, we conclude that there is the trivial representation and also this $2$-dimensional irreducible representation; counting dimensions, we now see that we have accounted for all $6$ dimensions of $\mathbb{F}_2S_3$. We conclude that the Jacobson radical is only $1$-dimensional (generated by $\sum_{g\in S_3} g$), and the quotient is $\mathbb{F}_2\times M_2(\mathbb{F}_2)$.<|endoftext|> -TITLE: V.I. Arnold says Russian students can't solve this problem, but American students can -- why? -QUESTION [294 upvotes]: In a book of word problems by V.I Arnold, the following appears: - - -The hypotenuse of a right-angled triangle (in a standard American examination) is 10 inches, the altitude dropped onto it is 6 inches. Find the area of the triangle. - -American school students had been coping successfully with this problem for over a decade. But then Russian school students arrived from Moscow, and none of them was able to solve it as had their American peers (giving 30 square inches as the answer). Why? 
-
-Here's the book. I assume the answer is some joke at the expense of the Americans, but I don't get it. Possibly a joke about inches? Anyone?
-
-REPLY [4 votes]: Let $\Delta ABC$ be our triangle, $\measuredangle ACB=90^{\circ}$ and $CD$ be an altitude of the triangle.
-Thus, by AM-GM $$6=CD=\sqrt{AD\cdot BD}\leq\frac{AD+BD}{2}=\frac{10}{2}=5,$$ which is a contradiction, so this triangle does not exist.<|endoftext|>
-TITLE: Properties of the per-element exponential (Hadamard exponential) for matrices
-QUESTION [7 upvotes]: I'm asking this question mostly out of curiosity, though I do also have a potential application.
-In linear algebra we usually define the matrix exponential as $e^A = I + A + \frac{1}{2}A^2 + \frac{1}{6}A^3 + \dots$, which has lots of nice properties. However, we could also define a different kind of "matrix exponentiation", which I'll write $e^{\circ A}$, where $(e^{\circ A})_{ij} = e^{A_{ij}}$, i.e. we just apply the exponential function to each element independently.
-After writing this question I guessed that the name of this operation would be "Hadamard exponential." An internet search revealed that it's mentioned by this name in a few textbooks and research papers, but in general I can find very little written about its properties from a linear algebra point of view. (I've edited this post to use what seems to be standard notation for the Hadamard exponential.)
-One obvious thing is that it inherits all the usual properties of exponentiation, as long as we use the Hadamard product $(\circ)$ (i.e. per-element multiplication) instead of the usual matrix product. Then we can immediately apply results like the Schur product theorem to conclude that if $e^{\circ A}$ and $e^{\circ B}$ are both positive definite then so is $e^{\circ (A+B)}$. Another obvious property is that for real matrices, the elements of $e^{\circ A}$ are all positive, and hence the Perron-Frobenius theorem applies. 
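(A small NumPy illustration of the entrywise operation — `np.exp` on an array computes exactly $e^{\circ A}$; the symmetric $2\times 2$ example matrix is my own choice:)

```python
import numpy as np

A = np.array([[0.0, 1.0],
              [1.0, 0.0]])

E = np.exp(A)          # Hadamard exponential: exp applied entry by entry
print(E)               # [[1, e], [e, 1]]

# contrast with the usual matrix exponential, summed from its power series
M = np.eye(2)
term = np.eye(2)
for k in range(1, 30):
    term = term @ A / k
    M = M + term
print(M)               # [[cosh 1, sinh 1], [sinh 1, cosh 1]] -- a different matrix
```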
-However, what I would particularly like to know is whether anything can be said about the eigenvalues and eigenvectors of $e^{\circ A}$ in terms of the eigendecomposition of $A$. I suspect that there is no straightforward relationship in general, but I would expect there to be inequality constraints.
-In short, my question is, has the operation $e^{\circ A}$ been studied in linear algebra, and what is known about its properties?
-
-REPLY [4 votes]: There are some wonderful theorems regarding entrywise functions of matrices, especially regarding positive definite matrices. In particular:
-
-If $A,B$ are positive semidefinite, so is $A \circ B$.
-If $A$ is positive semidefinite, then so is $e^{\circ A}$.
-Define $f[A]$ to be an entrywise function of (real) square matrices (of arbitrary size). Then $f$ takes positive definite matrices to positive definite matrices if and only if it is an analytic function whose power series has only non-negative coefficients.
-
-These results are apparently important in the context of numerical analysis, especially when it comes to thresholding (i.e. rounding values to zero while keeping the resulting error to within certain bounds).
-See also this question on MO.
-Another quick result is, if $\|\cdot\|$ denotes the Frobenius norm, then
-$$
-\|A\circ B\| \leq \|A\| \cdot \|B\|
-\\ \left\| e^{\circ A}\right\| \leq e^{\|A\|}
-$$
-One last quick and useful result: if $u,v$ are column vectors, then
-$$
-A \circ (uv^T) = \operatorname{diag}(u) A \operatorname{diag}(v)
-$$<|endoftext|>
-TITLE: How is the class equation of a group of given order determined?
-QUESTION [6 upvotes]: How is the class equation of a group of given order determined?
-Suppose we have a group of order $8$, say $D_8=\langle r,s \mid r^4=s^2=1,\ rs=sr^{-1}\rangle$.
-How can I find the class equation of this group? 
-Should I take each and every element and find the conjugacy class of that element? I know that the class equation of a group is given by
-$|G|=|Z(G)|+\sum_{i=1}^n |cl(a_i)| $ where the $a_i$'s are distinct class representatives.
-Is there any elegant approach available that would even work for higher-order groups such as $D_{10}, S_4$, etc.?
-
-REPLY [6 votes]: Yes; you have to find the conjugacy class of each element and sum their sizes.
-(You are explicitly taking a group of order $8$; there are other ways to determine the class equation for this group with some techniques, but we do not use these techniques, since the group is explicitly known here.)
-In the case of $D_{10}, S_4$, the elegant approach would be the following: in the family of dihedral groups, there is a pattern to the conjugacy classes, which enables us to write down the class equation. Similarly, in the family of symmetric groups, the conjugacy class sizes are well known by a combinatorial formula, and one can obtain the class equation for the family. In general, for a finite group, there is no elegant way to find the class equation that works for all groups (except determining the classes explicitly).
-For a specific group, such as a dihedral or quaternion group, other information about the group can help a lot in obtaining the class equation. For example, $D_8$ is a non-abelian group of prime-power order ($8$); its center must be non-trivial, hence $|Z(D_8)|\geq 2$. If $|Z(D_8)|=4$ or $8$ then $D_8/Z(D_8)$ would be cyclic, and $D_8$ would be abelian, a contradiction; so $|Z(D_8)|=2$.
-For $x\in D_8\setminus Z(D_8)$, the centralizer contains $x$ as well as $Z(D_8)$, hence its order is at least $4$. If it were $8$ then $x$ would be central, a contradiction. Hence the centralizer of $x$ has size $4$, i.e. its index in the group is $2$, i.e. the conjugacy class size is $2$, and this holds for any non-central $x$. So the class equation is $8=2+2+2+2$.
-But in this illustration, we are not using the presentation of $D_8$; we are simply using the fact that it is a non-abelian group of order $8$. 
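The brute-force computation suggested at the start ("find the conjugacy class of each element and sum their sizes") is easy to script; here is a Python sketch, with $D_8$ realized (my choice of model) as permutations of the square's vertices:

```python
# D8 acting on the vertices {0,1,2,3} of a square: r = rotation, s = reflection
r = (1, 2, 3, 0)
s = (0, 3, 2, 1)
e = (0, 1, 2, 3)

def compose(p, q):
    # (p o q)(i) = p[q[i]]
    return tuple(p[q[i]] for i in range(4))

def inverse(p):
    inv = [0] * 4
    for i, v in enumerate(p):
        inv[v] = i
    return tuple(inv)

def power(p, k):
    out = e
    for _ in range(k):
        out = compose(p, out)
    return out

group = {compose(power(r, i), power(s, j)) for i in range(4) for j in range(2)}

classes = {frozenset(compose(compose(g, x), inverse(g)) for g in group) for x in group}
print(sorted(len(c) for c in classes))   # [1, 1, 2, 2, 2]
```

The two singleton classes make up the center, so in the form $|G|=|Z(G)|+\sum_{i}|cl(a_i)|$ this is exactly $8 = 2 + 2 + 2 + 2$.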
So, such partial information about the group may help to obtain the class equation with some technique.<|endoftext|>
-TITLE: confusion about permutation
-QUESTION [11 upvotes]: $7$ identical white balls and $3$ identical black balls are randomly placed in a row. The probability that no two black balls are together is?
-I am getting $ \frac{1}{3}$ while the answer in my book is $\frac{7}{15}$. Total ways are $\frac{10!}{7!\,3!}=120$; now I considered three consecutive balls as one, so $\frac{(1+7)!}{7!}=8$, then two balls as consecutive, which is $\frac{(1+8)!}{7!}=72$, so the probability is $\frac{120-8-72}{120}=1/3$.
-What am I missing? Any help is appreciated.
-
-REPLY [7 votes]: Another way
-Consider a string of $7$ white balls. There are $8$ places between balls (including ends) where black balls may be inserted w/o being adjacent, against $\binom{10}3$ unrestricted arrangements
-$\uparrow\huge\circ$$\uparrow\huge\circ$$\uparrow\huge\circ$$\uparrow\huge\circ$$\uparrow\huge\circ$$\uparrow\huge\circ$$\uparrow\huge\circ$$\uparrow$
-thus $Pr = \dfrac{\binom83}{\binom{10}3} = \dfrac7{15}$<|endoftext|>
-TITLE: The equivalence of definitions of the Riemann integral
-QUESTION [9 upvotes]: First definition of Riemann integrable function. Let $f:[a,b] \to \mathbb{R}$ be a bounded function and $P=\{x_0,x_1,\dots, x_n\}$ a partition of $[a,b]$. Define $U(P,f):=\sum \limits_{i=1}^{n}M_i\Delta x_i$ and $L(P,f):=\sum \limits_{i=1}^{n}m_i\Delta x_i$ where $M_i=\sup\limits_{[x_{i-1},x_i]} f(x),\quad m_i=\inf\limits_{[x_{i-1},x_i]} f(x), \Delta x_i=x_i-x_{i-1}.$ Let $\inf \limits_{P}U(P,f)=I^*$ and $\sup \limits_{P}L(P,f)=I_*$. If $I^*=I_*=I$ then we call $f(x)$ Riemann integrable on $[a,b]$ with integral $I$.
-Second definition of Riemann integrable function. Let $P=\{x_0,x_1,\dots, x_n\}$ be a partition of $[a,b]$ with $\xi_i\in [x_{i-1},x_i]$, and define the Riemann sum $\sigma(P):=\sum \limits_{i=1}^{n}f(\xi_i)\Delta x_i$ and $\lVert P\rVert=\max\limits_{i}\Delta x_i$. 
If the following limit $\lim \limits_{\lVert P\rVert\to 0}\sigma(P)$ exists and has value $L$, we say that $f(x)$ is Riemann integrable on $[a,b]$ with integral $L$.
-The first definition is from Rudin's PMA book, but in other books I have encountered the second definition. I have been unable to prove the equivalence of these definitions for a couple of days. Can anyone show me a strict and rigorous proof? I would be very grateful for your help!
-P.S. Happy New Year! :)
-
-REPLY [5 votes]: This is explained very nicely in Apostol's Mathematical Analysis. In this book Apostol uses Riemann sums to define the definite integral, but he does not use the limit based on $||P|| \to 0$. Rather he uses the concept of a finer partition.
-Thus if $P, P'$ are partitions of $[a, b]$ then $P'$ is said to be finer than $P$ if $P \subseteq P'$. Adding more points to an existing partition makes it finer. The other concept is the norm of a partition, which is defined as the length of the largest sub-interval made by the partition. When you add points to a partition to make it finer, the norm can only decrease (or remain unchanged). Thus finer partitions correspond to partitions with smaller norms, but the converse does not hold.
-It appears from Apostol's presentation that dealing with a limit where the norm of the partition tends to $0$ is difficult compared to dealing with the limit as partitions become finer and finer. And he uses the following definition of the Riemann integral:
-Let $f$ be bounded on $[a, b]$ and let $P = \{x_{0}, x_{1}, x_{2}, \ldots, x_{n}\}$ be a partition of $[a, b]$. A sum of the form $$S(P, f) = \sum_{i = 1}^{n}f(\xi_{i})(x_{i} - x_{i - 1})$$ where $\xi_{i}$ is any point in the interval $[x_{i - 1}, x_{i}]$ is called a Riemann sum of $f$ over partition $P$. 
A number $I$ is said to be the Riemann integral of $f$ over $[a, b]$, and we write $$\int_{a}^{b}f(x)\,dx = I$$ if for every $\epsilon > 0$ there is a partition $P_{\epsilon}$ of $[a, b]$ such that $$|S(P, f) - I| < \epsilon$$ for all partitions $P$ of $[a, b]$ which are finer than $P_{\epsilon}$.
-It is now easy to show the equivalence of this definition of the Riemann integral with the definition based on Darboux sums (the first definition of your question). This is because the Riemann sums are always sandwiched between the upper and lower Darboux sums, and as partitions get finer and finer the lower sums increase and the upper sums decrease. If these Darboux sums have the same limit (i.e. the supremum of the lower sums equals the infimum of the upper sums) then obviously the Riemann sums also tend to the same limit as partitions become finer and finer. On the other hand, if $M_{i}, m_{i}$ are the supremum and infimum of $f$ on a sub-interval $[x_{i - 1}, x_{i}]$ generated by the partition, then it is easy to choose points $\xi_{i}, \xi'_{i}$ in this sub-interval such that $f(\xi_{i})$ is near $M_{i}$ and $f(\xi'_{i})$ is near $m_{i}$. Due to this we can find a Riemann sum close to $U(P, f)$ and another Riemann sum close to $L(P, f)$. So if the Riemann sums tend to a limit as partitions get finer and finer, then the upper and lower Darboux sums also tend to the same limit.
-Next Apostol shows the equivalence of his definition (mentioned above) with the definition based on the limit of type $||P|| \to 0$ in an exercise. This proof is also available as an answer here.
-Also note that in general (i.e. in the context of the Riemann-Stieltjes integral) these two definitions are not equivalent, and the one which uses the concept of finer partitions is more inclusive than the one based on the norm. 
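The sandwiching just described is easy to see numerically (a Python sketch; the integrand $f(x)=x^2$ on $[0,1]$, the uniform partitions, and the random tags are my choice of example):

```python
import random

def riemann_sum(f, partition):
    # S(P, f) with an arbitrary tag xi in each subinterval [x_{i-1}, x_i]
    total = 0.0
    for a, b in zip(partition, partition[1:]):
        xi = random.uniform(a, b)
        total += f(xi) * (b - a)
    return total

f = lambda x: x * x      # the integral over [0, 1] is 1/3
for n in (10, 100, 1000):
    P = [k / n for k in range(n + 1)]
    print(n, riemann_sum(f, P))
```

Whatever tags are drawn, each sum lies between $L(P,f)$ and $U(P,f)$, which for this $f$ and partition differ by exactly $1/n$, so refining squeezes every Riemann sum toward $1/3$.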
See this answer for more details.<|endoftext|>
-TITLE: Fourier series of $\frac{1}{5+4 \cos x}$ using contour integration
-QUESTION [5 upvotes]: The function
-$$f(x)=\frac{1}{5+4 \cos x}$$
-is periodic with the main period being $T=2\pi$. The graph is easily obtained.
-The function is even, so all the coefficients $b_{n}$ vanish: $$b_{n}=\frac{2}{T}\int_{-T/2}^{T/2} f(x)\sin\left ( \frac{2n\pi x}{T} \right )\,dx=0.$$
-Using the residue theorem I found that
-$$a_0 = \frac{2}{T} \int_{-T/2}^{T/2} f(x)\,dx = \frac{1}{\pi i}\oint \frac{dz}{2z^2+5z+2}=\frac{2}{3}$$
-and also
-$$a_{n}=\frac{1}{2\pi i }\oint \frac{z^{2n}+1}{z^{n}(5z+2+2z^2)}\, dz = \operatorname*{Res}\limits_{z=-1/2} f(z)+ \operatorname*{Res}\limits_{z=0} f(z).$$
-Furthermore
-$$\operatorname*{Res}\limits_{z=-1/2} f(z) = \frac{1+\left ( \frac{-1}{2} \right )^{2n}}{3\left ( \frac{-1}{2} \right )^{n}}.$$
-The residue at $z=0$ isn't so easily obtained, since there is a pole of order $n$; the Laurent expansion gets too messy, and it seems that the coefficient $A_{-1}$ may not be easily derived from there. How do I effectively find that residue?
-
-REPLY [3 votes]: I suggest that you write
-$$
-\frac{1}{2+5z+2z^2}=\frac{1}{(2+z)(1+2z)}=\frac{2}{3}\frac{1}{1+2z}-\frac{1}{6}\frac{1}{1+z/2}.
-$$
-Next, since $n$ is positive, the term $z^{2n}$ can be neglected (it will never contribute to the $z^{-1}$ coefficient in the Laurent expansion). Thus
-$$
-\begin{aligned}
-\text{Res}_{z=0}f(z)&=\text{Res}_{z=0}\frac{1}{z^n}\Bigl(\frac{2}{3}\frac{1}{1+2z}-\frac{1}{6}\frac{1}{1+z/2}\Bigr)
-\end{aligned}
-$$
-Now you only need to turn the geometric series around (using
-$$
-\frac{1}{1+a}=1-a+a^2-a^3+\cdots)
-$$
-and find the necessary coefficient in front of $z^{-1}$. I leave it for you to do the details. 
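Before doing the expansion, the target can also be cross-checked numerically (a Python sketch; the trapezoidal rule on a uniform periodic grid is extremely accurate here, and $\frac23\left(-\frac12\right)^n$ is the closed form the residues should reproduce):

```python
import numpy as np

def a(n, N=64):
    # a_n = (1/pi) * integral over [-pi, pi) of cos(n x) / (5 + 4 cos x) dx,
    # computed by the trapezoidal rule on a uniform periodic grid
    x = -np.pi + 2 * np.pi * np.arange(N) / N
    f = 1.0 / (5 + 4 * np.cos(x))
    return 2 * np.mean(f * np.cos(n * x))

for n in range(4):
    print(n, a(n), (2 / 3) * (-0.5) ** n)   # the two columns agree
```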
-Spoiler with result below:
-
- I get, as a result, $$\text{Res}_{z=0}f(z)=(-1)^n\frac{1-4^n}{3\cdot 2^n}.$$<|endoftext|>
-TITLE: Study materials to help understand the generalized Stokes' theorem both intuitively and rigorously?
-QUESTION [6 upvotes]: Dear MSE: My goal is to understand the generalized Stokes' theorem both intuitively and rigorously. Could someone give advice or recommend study materials to help understand the generalized Stokes' theorem both intuitively and rigorously?
-Baby Rudin seems to contain a terse treatment of the generalized Stokes' theorem in Chapter 10: Integration of Differential Forms. Are there study materials suitable to accompany Baby Rudin's terse treatment?
-
-REPLY [7 votes]: The treatment in Baby Rudin is awful. I don't think I've met anyone who wasn't thoroughly confused by it. You are not alone. (And I would caution against trying to read the later chapters, too. In particular, please don't try to learn Lebesgue integration from that book.)
-A proper treatment requires learning some basic differential geometry. My favorite introduction is Loring Tu's Introduction to Manifolds. It covers all the material necessary for Stokes's theorem, proves the theorem, and does even more.
-As far as intuition goes, you should think of Stokes's theorem as a generalized version of theorems like the fundamental theorem of calculus and Green's theorem. Stokes's theorem says
-$$\int_{\partial M} \omega = \int_M d\omega,$$
-where $M$ is some manifold and $\omega$ is some "differential form" on $M$, and $\partial M$ is the boundary of $M$.
-When $M$ is an interval $[a,b]$, we get
-$$\int_{\{a,b\}} f = \int_{[a,b]} f'(x)\, dx$$
-which is just the usual fundamental theorem of calculus. Here the boundary of $M=[a,b]$ is just the set of endpoints $\{a,b\}$, and $\omega=f$.
-We can also look at the case when $M$ is some two-dimensional domain in the plane. 
It turns out that in this case Stokes's theorem recovers Green's theorem.<|endoftext|> -TITLE: New Year Maths 2016: $\sum_{r=3}^{\; 3^2}r^3=2016$ -QUESTION [18 upvotes]: Decode the following summation to welcome the new year! -Find integer $n$ such that -$$\large\color{darkblue}{\sum_{\qquad \qquad r={\sum_{m=0}^\infty\left(\frac{n-1}n\right)^m }}^{\qquad \qquad \quad \sum_{m=0}^\infty\left(\frac{n^2-1}{n^2}\right)^m}}\color{purple}{r^n}=\color{red}{(n-1)^{n+2}}\color{orange}{n^{n-1}}\color{green}{(2n+1)}$$ - -Background -Every year around new year time, questions pop up on MSE on the numerical properties of the new year (such as this, this and this), These have yielded some interesting responses. I was particularly impressed with Jack d'Aurizio's response here and tried to come up with something similar myself. -After some experimenting, I found a neat summation result for 2016. I thought it would be more interesting if formulated as a problem instead, and "enhanced" using typographically interesting elements, e.g. a summation where the limits are themselves summations. Hence the formulation of problem posted. -I posted the answer myself as the intention is to share the result. It wasn't clear if an analytical approach would work (it would be nice if someone could show that this is possible). However, a numerical approach using the first few values of $n$ will soon lead to the solution. -The "recreational mathematics" tag indicates that this is something done for fun in the spirit of good cheer for the festive period. Thanks for reading (special thanks for those who voted to reopen the question) and Happy New Year! 
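The numerical approach mentioned above takes only a few lines (Python; the two geometric series in the limits sum to $n$ and $n^2$, so the outer sum runs over $r=n,\dots,n^2$):

```python
def lhs(n):
    # sum_{r=n}^{n^2} r^n, with the geometric-series limits already evaluated
    return sum(r ** n for r in range(n, n * n + 1))

def rhs(n):
    return (n - 1) ** (n + 2) * n ** (n - 1) * (2 * n + 1)

for n in range(2, 8):
    if lhs(n) == rhs(n):
        print(n, lhs(n))   # prints: 3 2016
```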
- -REPLY [14 votes]: $$\large\begin{align} -&\color{darkblue} -{\sum_{\qquad\qquad r={\sum_{m=0}^\infty\left(\frac{n-1}n\right)^m }}^{\quad \qquad\qquad \sum_{m=0}^\infty\left(\frac{n^2-1}{n^2}\right)^m}}\color{purple}{r^n}\\\\ -=&\color{darkblue}{\sum_{\qquad\qquad r={\sum_{m=0}^\infty\left(1-\frac 1n\right)^m }}^{\quad \qquad\qquad \sum_{m=0}^\infty\left(1-\frac 1{n^2}\right)^m}}\color{purple}{r^n}\\\\ -=&\color{darkblue}{\sum_{\qquad \qquad r=1\big/\left[1-\left(1-\frac 1n\right)\right]}^{\quad \qquad \qquad 1\big/\left[1-\left(1-\frac 1{n^2}\right)\right]}}\color{purple}{r^n}\\\\ -=&\color{darkblue}{\qquad\qquad \; \sum_{r=n}^{\; \;n^2} }\color{purple}{r^n}&&=\color{red}{(n-1)^{n+2}}\color{orange}{n^{n-1}}\color{green}{(2n+1)} -\end{align}$$ -By inspection, equality holds when $n=3$, giving the interesting summation result -$$\large\begin{align}\color{darkblue}{\sum_{r=3}^{\; 3^2}} -\color{purple}{r^3} -&=\color{red}{2^5\cdot}\color{orange}{3^2\cdot}\color{green}{7}\\ -\large\color{red}{\sum_{r=3}^{\; 3^2}}\color{red}{r^3} -&\color{red}{=2016} -\end{align}$$ -Happy New Year, everyone!!<|endoftext|> -TITLE: How "bounded" are $L^1$ functions? -QUESTION [5 upvotes]: I am well aware of the fact that $L^1-$functions are not necessarily essentially bounded. Take for instance the function $1/\sqrt{x}$ on $X=(0,1)$. -However, can we say that they are "almost" bounded in the sense that if we cut out the bad parts with an epsilon of room they are bounded a.e.? Formally: - -Suppose $(X,\mu)$ is a measure space and $f\in L^1(X)$. Then for every $\varepsilon>0,$ there exists a set $F\subset X$, such that $\mu(F)<\varepsilon$ and $f$ is essentially bounded on $X\setminus F$. - -Does this hold? I can't think of any counterexample. The usual ones to show that $L^1$ is not necessarily bounded clearly don't work (for instance above we can surely cut out any $\epsilon$ segment around $0$). 
-
-REPLY [3 votes]: Actually, this holds (when $\mu(X) < \infty$) for arbitrary (real or complex valued) measurable functions, not just for integrable ones.
-Simply note that
-$$
-X = \bigcup_n |f|^{-1}([0,n]),
-$$
-where the sets in the union increase with $n$.
-By continuity of the measure from below, and since $\mu(X) < \infty$, we see that there is some $n \in \Bbb{N}$ with $\mu(|f|^{-1}([0,n])) > \mu(X) - \varepsilon$, which shows that $\mu\left(|f|^{-1}((n,\infty))\right) < \varepsilon$.<|endoftext|>
-TITLE: Prove that in any GCD domain every irreducible element is prime
-QUESTION [10 upvotes]: The proof of the following proposition is not completely clear to me. I get everything up until the bold part, and I have a feeling some crucial steps are omitted; can anybody help clear this up?
-
-Let $R$ be an integral domain. If every two elements of $R$ have a greatest common divisor, then every irreducible element in $R$ is prime.
-
-Proof:
-Consider an irreducible element $p \in R$ and $x,y \in R$ such that $p\vert xy$. Suppose now that $py$ and $xy$ have a greatest common divisor $z$ in $R$. We want to conclude from this that $p \vert x$ or $p \vert y$. This is obvious if $xy = 0$, so we may assume that $xy \neq 0$. Then $z \neq 0$. As both $p$ and $y$ divide each of $py$ and $xy$, we have that $z = pu = yv$ for certain $u,v \in R$. Using the cancellation law with $\boldsymbol z \boldsymbol \neq \boldsymbol 0$, we obtain that $\boldsymbol v \boldsymbol \vert \boldsymbol p$. As $p$ is irreducible, either $v \in R^\times$ (i.e. the set of invertible elements of $R$) or $v \sim p$ (i.e. $Rv = Rp$). If $v \sim p$, then $p \vert x$. If $v \in R^\times$, then $p$ divides $v^{-1} pu = v^{-1} z=y$.
-
-REPLY [2 votes]: Since Kasper's question has already been clarified, I just provide another interpretation here, which might be more intuitive.
-Let $R$ be a GCD domain. 
Let $\sim$ denote the relation of "being associates"; then the quotient $R/\sim$ together with gcd (as meet, $\wedge$) and lcm (as join, $\vee$) forms a distributive lattice.
-For an irreducible $p$ which divides $ab$, $a\wedge p=\gcd(a,p)$ is a factor of $p$, thus a unit (i.e. an associate of $1$) or an associate of $p$. That is, the node labeled $a\wedge p$ must coincide with the one labeled either $1$ or $p$. The same argument applies to $b\wedge p$. But since $(R/\sim,\gcd,\text{lcm})$ is a distributive lattice, the diamond lattice $M_{3}$ is forbidden; that is, at least one of $a\wedge p$ and $b\wedge p$
-must be an associate of $p$. Therefore at least one of $p|a$ and $p|b$ holds, so by definition $p$ is prime.<|endoftext|>
-TITLE: Vector spaces: Is (the) scalar multiplication unique?
-QUESTION [6 upvotes]: Notation
-Consider an arbitrary vector space $(V, \oplus, \odot)$ over a field $F$ with
-
-vector addition $\oplus : V \times V \to V$ and
-scalar multiplication $\odot : F \times V \to V$,
-
-both satisfying all the axioms defining a vector space.
-Background
-Let us fix the field $F$ and the set $V$. It is obvious that the vector addition does not have to be unique. Any binary operation $\oplus$ that makes $(V, \oplus)$ an Abelian group would actually work. But I am not so sure if this is also true for (the) scalar multiplication.
-Question
-Let us fix the field $F$ and the Abelian group $(V, \oplus)$. Is the action of $F$ on $(V, \oplus)$, i.e., the scalar multiplication $\odot$ satisfying the axioms of a vector space, a unique operation?
-
-REPLY [9 votes]: No. On any complex vector space $(V,+,\cdot)$ you can introduce a new scalar multiplication $*$ given by $z * v = \overline{z} \cdot v$ for all $z \in \mathbb{C}$ and $v \in V$.
-More generally: If $(V,+,\cdot)$ is an $F$-vector space and $\phi \colon F \to F$ a field automorphism then $z * v = \phi(z) \cdot v$ defines a new scalar multiplication $*$. 
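A quick numerical spot check of the twisted multiplication on $V=\Bbb C^2$ (a Python sketch; random sampling, so an illustration rather than a proof):

```python
import random

def smul(z, v):
    # twisted scalar multiplication: z * v = conj(z) . v
    return tuple(z.conjugate() * x for x in v)

def rnd():
    return complex(random.uniform(-1, 1), random.uniform(-1, 1))

for _ in range(100):
    z, w, v = rnd(), rnd(), (rnd(), rnd())
    # compatibility (z w) * v = z * (w * v) holds because conjugation is multiplicative
    lhs, rhs = smul(z * w, v), smul(z, smul(w, v))
    assert all(abs(p - q) < 1e-12 for p, q in zip(lhs, rhs))
print("compatibility spot-checked on 100 random samples")
```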
-PS: The scalar multiplication is unique if $F$ is a prime field, i.e. if $F = \mathbb{Q}$ or $F = \mathbb{F}_p$ with $p > 0$ prime. This follows because the action of $1 \in F$ is uniquely determined by the axioms of the scalar multiplicaton and each element in these fields is a multiple of $1$ (if $F = \mathbb{F}_p$) or can be written as a quotient of multiples of $1$ (if $F = \mathbb{Q}$).<|endoftext|> -TITLE: Easy criteria to determine isomorphism of fields? -QUESTION [5 upvotes]: Let $K$ be a field and $f,g$ irreducible polynomials in $K[X]$, is there a nice iff condition for $K[X]/(f)\cong K[X]/(g)$? -($\cong$ denotes an isomorphism that is the identity on restriction to $K$). -Thoughts: It is sufficient that they are $K^\times$ multiples of each other. I'd hoped this was necessary but it isn't as $\mathbb{Q}[X]/(X^2-2)\cong\mathbb{Q}(\sqrt2)=\mathbb{Q}(\sqrt2+1)\cong\mathbb{Q}[X]/(X^2-2X-1)$ with the middle two fields viewed as subfields of $\mathbb{C}$. It is necessary that they have the same degree. Please let me know if there are any other simple necessary conditions. Thanks! - -REPLY [3 votes]: If $f,g$ are quadratic polynomials (and $K$ is any field of characteristic not 2), then by the quadratic formula the isomorphism classes are classified by the discriminant "$b^2 - 4ac$" (if $f = ax^2 + bx + c$), modulo squares. Ie, the isomorphism classes are in bijection with $K^\times/(K^\times)^2$. -In any other situation things become a lot more difficult. -If $K$ is $\mathbb{Q}$, then $\mathbb{Q}[X]/(f)\cong\mathbb{Q}[X]/(g)$ if and only if the action of $G_\mathbb{Q} := \text{Gal}(\overline{\mathbb{Q}}/\mathbb{Q})$ on the roots of $f$ is isomorphic to the action of $G_\mathbb{Q}$ on the roots of $g$. 
Ie, if $X_f,X_g$ denote the sets of roots of $f,g$, then $\mathbb{Q}[X]/(f)\cong\mathbb{Q}[X]/(g)$ if and only if there is a bijection $\phi : X_f\stackrel{\sim}{\rightarrow}X_g$ such that $\phi(\sigma x) = \sigma\phi(x)$ for all $x\in X_f$, $\sigma\in G_\mathbb{Q}$. -This is a consequence of the Galois correspondence, which says that the association sending any finite extension $L := \mathbb{Q}[X]/(h)$ of $\mathbb{Q}$ (with $h$ irreducible) to $X_h$ (as a set with $G_\mathbb{Q}$-action) gives an equivalence of categories between the category of finite field extensions of $\mathbb{Q}$ and the category of finite sets with a transitive $G_\mathbb{Q}$-action. -The result follows from the fact that two finite extensions of $\mathbb{Q}$ are isomorphic as fields if and only if they are isomorphic as extensions of $\mathbb{Q}$ (any abstract isomorphism between them must fix their prime subfields). -The same result will also be true if $K$ is replaced by any $\mathbb{F}_p$ (though this situation is trivial since finite extensions of $\mathbb{F}_p$ are uniquely determined by degree). With suitable modifications the result is also true when $K$ is any finite extension of $\mathbb{Q}$, and with more care, even true when $K$ is an algebraic extension of $\mathbb{Q}$. -If $K$ is not an algebraic extension over its prime subfield, then things can get weird, as we move into the world of arithmetic geometry. For example, for any field $k$, you can set $K := k(t)$, then $K = K[X]/(X) = k(t)$ is isomorphic to $K[X]/(X^2-t)\cong k(\sqrt{t})$. -If $K$ is finite over $\mathbb{Q}$, then in certain cases you may also be able to use class field theory. -Though, I should mention that in practice it's rare that you would care about isomorphisms between fields (as abstract fields). 
You generally will want to restrict yourself to "nice" extensions of a fixed base field, in which case a number of the conditions above can be relaxed.<|endoftext|>
-TITLE: How to prove that series $\sum (-1)^n\sin^4n /\sqrt{n}$ converges?
-QUESTION [5 upvotes]: I have a series:
-$$\sum_{n=1}^\infty(-1)^n\frac{\sin^4n}{\sqrt n}.$$
-How can we prove that it converges?

-Usually, with $\sin^4n$ we would use the Comparison Test, but it only applies when the terms are nonnegative.

-REPLY [3 votes]: Hint: Noting that
-$$ \sin^4n=\frac{1}{8}(3-4\cos(2n)+\cos(4n)) $$
-you have
-\begin{eqnarray}
-\sum_{n=1}^\infty(-1)^n\frac{\sin^4n}{\sqrt n}&=&\frac{3}{8}\sum_{n=1}^\infty(-1)^n\frac{1}{\sqrt n}-\frac{1}{2}\sum_{n=1}^\infty(-1)^n\frac{\cos(2n)}{\sqrt n}+\frac{1}{8}\sum_{n=1}^\infty(-1)^n\frac{\cos(4n)}{\sqrt n}.
-\end{eqnarray}
-Now you can do the rest to show that $\sum_{n=1}^\infty(-1)^n\frac{\cos(2n)}{\sqrt n}$ and $\sum_{n=1}^\infty(-1)^n\frac{\cos(4n)}{\sqrt n}$ are convergent.<|endoftext|>
-TITLE: Formulae of the Year 2016
-QUESTION [24 upvotes]: Decode the following limits to welcome the new year!
-These are limits I love (created by me). I hope you love them.
-Let $$A_{n}=\dfrac{n}{n^2+1}+\dfrac{n}{n^2+2^2}+\cdots+\dfrac{n}{n^2+n^2}$$
-show that
-$$\lim_{n\to\infty}\dfrac{1}{n^4\left\{\dfrac{1}{24}-n\left[n\left(\dfrac{\pi}{4}-A_{n}\right)-\dfrac{1}{4}\right]\right\}}=2016$$
-Can you create some other nice problem (with result 2016)? Happy New Year to everyone.
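A quick numerical sanity check of the claimed limit (my own sketch using mpmath, not part of the thread): the deviation from $2016$ appears to shrink rapidly with $n$, so a moderate $n$ already lands essentially on the target.

```python
# Evaluate n^4·{1/24 − n[n(π/4 − A_n) − 1/4]} and its reciprocal for a moderate n,
# to check the claimed limit of 2016.
from mpmath import mp, mpf, pi

mp.dps = 60                      # high precision: the nested bracket cancels ~n^6 digits
n = 100
A_n = sum(mpf(n) / (n**2 + k**2) for k in range(1, n + 1))
bracket = mpf(1) / 24 - n * (n * (pi / 4 - A_n) - mpf(1) / 4)
value = 1 / (n**4 * bracket)
print(value)                     # ≈ 2016, up to small corrections in 1/n
```

High working precision is essential here: $\pi/4-A_n$ is computed by catastrophic cancellation and then amplified by roughly $n^6$, so double precision would produce garbage.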
- -REPLY [6 votes]: $\newcommand{\angles}[1]{\left\langle\,{#1}\,\right\rangle} - \newcommand{\braces}[1]{\left\lbrace\,{#1}\,\right\rbrace} - \newcommand{\bracks}[1]{\left\lbrack\,{#1}\,\right\rbrack} - \newcommand{\dd}{\mathrm{d}} - \newcommand{\ds}[1]{\displaystyle{#1}} - \newcommand{\expo}[1]{\,\mathrm{e}^{#1}\,} - \newcommand{\half}{{1 \over 2}} - \newcommand{\ic}{\mathrm{i}} - \newcommand{\iff}{\Longleftrightarrow} - \newcommand{\imp}{\Longrightarrow} - \newcommand{\Li}[1]{\,\mathrm{Li}_{#1}} - \newcommand{\ol}[1]{\overline{#1}} - \newcommand{\pars}[1]{\left(\,{#1}\,\right)} - \newcommand{\partiald}[3][]{\frac{\partial^{#1} #2}{\partial #3^{#1}}} - \newcommand{\ul}[1]{\underline{#1}} - \newcommand{\root}[2][]{\,\sqrt[#1]{\,{#2}\,}\,} - \newcommand{\totald}[3][]{\frac{\mathrm{d}^{#1} #2}{\mathrm{d} #3^{#1}}} - \newcommand{\verts}[1]{\left\vert\,{#1}\,\right\vert}$ -\begin{align} -A_{n} & \equiv -\sum_{k = 1}^{n}{n \over n^{2} + k^{2}} = -\Im\sum_{k = 0}^{n - 1}{1 \over k + 1 - n\ic} = -\Im\sum_{k = 0}^{\infty}\pars{{1 \over k + 1 - n\ic} - -{1 \over k + n + 1 - n\ic}} -\\[5mm] & = -\Im\bracks{\Psi\pars{n + 1 - n\ic} - \Psi\pars{1 - n\ic}}\,,\qquad\qquad -\pars{~\Psi\ \mbox{is the}\ Digamma\ Function~}. 
-\end{align} - -\begin{align} -A_{n} & = -\Im\braces{\Psi\pars{\bracks{1 - \ic}n} + {1 \over \pars{1 - \ic}n} -- \Psi\pars{-n\ic} - {1 \over -n\ic}}\quad\pars{~Recursion~} -\\[5mm] & = --\,{1 \over 2n} + -\Im\braces{\Psi\pars{\bracks{1 - \ic}n} - \Psi\pars{-n\ic}} -\end{align} - -The Digamma Function Asymptotic Formula is given by -\begin{align} -\Psi\pars{z} & \sim -\ln\pars{z} - {1 \over 2z} - \sum_{n = 1}^{\infty}{B_{2n} \over 2n\,z^{2n}} = -\ln\pars{z} - {1 \over 2z} - {1 \over 12 z^{2}} + {1 \over 120z^{4}} - -{1 \over 252z^{6}} + \cdots -\\[5mm] & \pars{~z \to \infty\ \mbox{in}\ \verts{\,\mathrm{arg}\pars{z}} < \pi~} -\,,\qquad B_{k}\ \mbox{is a Bernoulli Number.} -\end{align} - -\begin{align} -\Im\Psi\pars{\bracks{1 - \ic}n} & \sim --\,{\pi \over 4} - {1 \over 4n} - {1 \over 24n^{2}} + -{1 \over \color{#f00}{2016}\,n^{6}} + \cdots -\\[5mm] -\Im\Psi\pars{-n\ic} & \sim --\,{\pi \over 2} - {1 \over 2n} + \cdots -\end{align} - -\begin{align} -A_{n} &\ \sim\ {\pi \over 4} - {1 \over 4n} - {1 \over 24n^{2}} + {1 \over \color{#f00}{2016}\,n^{6}} + \cdots -\\[5mm] -n\pars{{\pi \over 4} - A_{n}} &\ \sim\ -{1 \over 4} + {1 \over 24n} - {1 \over \color{#f00}{2016}\,n^{5}} + \cdots -\\[5mm] -n\pars{{\pi \over 4} - A_{n}} - {1 \over 4} &\ \sim\ -{1 \over 24n} - {1 \over \color{#f00}{2016}\,n^{5}} + \cdots -\\[5mm] -n\bracks{n\pars{{\pi \over 4} - A_{n}} - {1 \over 4}} &\ \sim\ -{1 \over 24} - {1 \over \color{#f00}{2016}\,n^{4}} + \cdots -\\[5mm] -{1 \over 24} - n\bracks{n\pars{{\pi \over 4} - A_{n}} - {1 \over 4}} &\ \sim\ -{1 \over \color{#f00}{2016}\,n^{4}} + \cdots -\\[5mm] -n^{4}\braces{% -{1 \over 24} - n\bracks{n\pars{{\pi \over 4} - A_{n}} - {1 \over 4}}} &\ \sim\ -{1 \over \color{#f00}{2016}} + -\pars{~\mbox{terms of order}\ {1 \over n^{2}}~} -\end{align} - -$$ -\begin{array}{|c|}\hline\mbox{}\\ -\ds{\quad% -\color{#f00}{\lim_{n \to \infty}{1 \over -n^{4}\braces{% -1/24 - n\bracks{n\pars{\pi/4 - A_{n}} - 1/4}}}} = \color{#f00}{2016} -\quad} -\\ \mbox{}\\ \hline 
-\end{array}
-$$<|endoftext|>
-TITLE: Is it possible to draw this picture without lifting the pen?
-QUESTION [37 upvotes]: Some days ago, our math teacher said that he would give a good grade to the first one who manages to draw this:
-[figure omitted]
-To draw this without lifting the pen and without tracing the same line more than once. It's a bit like the "nine dots" puzzle, but here I didn't find any working solution.
-So I have two questions:

-is it really impossible?
-how can it be proven that it is impossible (if it's impossible)?

-[EDIT]: After posting this question, and seeing how easy it was for people to solve it, I noticed that I had posted the wrong drawing. The actual one is exactly like that but with triangles on all sides, not only top and bottom. As it would make the current answers invalid, I didn't replace the picture.

-REPLY [3 votes]: I found it on my second try. It took less than 30 seconds to find the answer. Maybe I was lucky?<|endoftext|>
-TITLE: Suggestions for research in Group Theory
-QUESTION [5 upvotes]: (This is about help with not losing interest in Group Theory. Dear group theorist or algebraist, please help; if the question is not clear, give suggestions.)

-(1) A few days ago, I came across a review of a book on $p$-groups by an expert in $p$-groups (C. R. Leedham-Green), part of which is as below:

-....The authors suggest no fewer than 1400 research
- problem......Take at random Problem 1200: Study the p-groups whose cyclic
- subgroups are characteristic in their centralisers. There is no objection to asking
- a rather imprecise question (“Study. . . ”), except that it could rise to a number of papers, but there is an objection to studying some oddly defined class of groups without knowing why. ......

-Today, I was looking at many papers on the research topic
-$$\mbox{study of Frobenius groups $N\rtimes H$ acting on another group $G$ via automorphisms}.$$ Concerning the above review comment, the first question that came to mind was: why study such groups? I didn't find a good reason for their study in the papers. The introduction in many papers says (almost the same statement):

-many properties of $G$ are related with those of the fixed points of $H$ in $G$.

-I didn't find this reason interesting. Is there another motivation for the study of such Frobenius actions?

-(2)
-After mentally preparing to "see these papers, without a philosophical reason", I went to read the papers. But I faced a lot of problems with symbols. The papers did not say what the symbol $G^{\mathfrak{A}(p-1)}$ denotes. In an online search, I found two different meanings for it:

-abelian radical (Subgroup Lattices of Groups, Volume 14, by Roland Schmidt)
-abelian residual (Products of Finite Groups by Ballester-Bolinches, ...)

-And this pulled my mind away from the research topic!
-What is a reasonably good way to do research in Group Theory?

-REPLY [2 votes]: Let $G$ be a finite group; by Jordan–Hölder we know that
-$$1=G_0\leq G_1 \leq G_2\leq\cdots\leq G_n=G$$
-such that $G_i$ is maximal normal in $G_{i+1}$. That means that $G_{i+1}/G_i$ is a simple group.
-Thus, we have two main problems for understanding the group $G$:
-$1)$ What are all finite simple groups?
-$2)$ If we know $G/M$ and $M$, can we know $G$? (extension problems)
-(These can be seen as the main problems in finite group theory.)
-The first problem was finished in $2004$ (all the papers related to this problem run to about $10000$ pages).
-The second one is not finished yet (and seems far from being finished).
-The second problem is very difficult even when $G=M\rtimes H$. In that case, $H$ acts on $M$ by automorphisms.
-There are many known results when $(|M|,|H|)=1$, called coprime action; more specifically, Frobenius action.
(It is also a coprime action.)
-All such problems are part of the second question.
-Besides these, some people study very specific groups like extraspecial groups. At first these can seem very specific and useless, but when you notice that trying to solve many problems by induction forces many groups to reduce to some special cases, you see that they are important indeed. Among them: extraspecial groups, Frobenius groups, supersolvable groups ... Thus, these are not really "special cases"; they are the "general case".
-As an example, assume that you want to solve $$x^2-bx+c =0$$
-Some people say: "I solved this when $b=0$." At first this can seem like a very specific case, but
-$$(x-\dfrac{b}{2})^2+c-\dfrac{b^2}{4}=0$$ so set $t=x-\dfrac{b}{2}$ and $C=\dfrac{b^2}{4}-c$:
-$$t^2-C =0$$
-Actually, you have solved the problem!
-I hope what I mean is clear.<|endoftext|>
-TITLE: Prove that, $\sum_{i = 1}^n \frac{1}{a_ib_i} \sum_{i = 1}^n (a_i+b_i)^2 \geq 4n^2$
-QUESTION [5 upvotes]: Let $a_1,a_2,\ldots,a_n,b_1,b_2,\ldots,b_n$ be positive numbers. Prove that $$\displaystyle \sum_{j = 1}^n \dfrac{1}{a_jb_j} \sum_{i = 1}^n (a_i+b_i)^2 \geq 4n^2.$$

-I was thinking of using AM-GM. We have $a_ib_i \leq \dfrac{(a_i+b_i)^2}{4}$. So we can say $\displaystyle \sum_{i,j} \dfrac{(a_i+b_i)^2}{a_jb_j} \geq \sum_{i,j} \dfrac{4a_ib_i}{a_jb_j}$ but I don't know what to do next.

-REPLY [11 votes]: By the AM-GM inequality, $\sqrt{a_i/b_i}+\sqrt{b_i/a_i}\ge 2$. Combine this with Cauchy-Schwarz:
-$$
-\eqalign{
-2n&=\sum_{i=1}^n 2\le\sum_{i=1}^n\left(\sqrt{a_i\over b_i}+\sqrt{b_i\over a_i}\right)\cr
-&=\sum_{i=1}^n{a_i+b_i\over\sqrt{a_ib_i}}\cr
-&\le\sqrt{\sum_{i=1}^n(a_i+b_i)^2\sum_{j=1}^n{1\over a_jb_j}}.\cr
-}
-$$<|endoftext|>
-TITLE: How to check that a cubic polynomial is irreducible?
-QUESTION [7 upvotes]: I want to argue that $\pi(\alpha)=\alpha^3+3\alpha+3$ is an irreducible polynomial over the finite field with 5 elements $\mathbb{F}_5$.
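(A quick brute-force check, added here for illustration and not part of the original question: the cubic has no roots modulo $5$, which is exactly what the root-checking approach below verifies.)

```python
# Brute-force: does x^3 + 3x + 3 have a root in F_5 = {0, 1, 2, 3, 4}?
roots = [a for a in range(5) if (a**3 + 3*a + 3) % 5 == 0]
print(roots)   # → []  (no roots, so the cubic has no linear factor over F_5)
```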
My approach was just to check that $\pi$ has no roots in $\mathbb{F}_5$. Is this right?

-REPLY [2 votes]: One has the following:

-Proposition. Let $k$ be a field and $p\in k[X]$ be a polynomial of degree $2$ or $3$. $p$ is irreducible over $k$ if and only if $p$ has no roots in $k$.

-Proof. We distinguish the following two cases:

-If $\deg(p)=2$. Assume that $p$ has no roots in $k$ and assume there exists $(f,g)\in k[X]^2$ such that: $$p=fg.$$
-Taking the degree in this equality leads to: $$(\deg(f),\deg(g))\in\{(0,2),(1,1),(2,0)\}.$$
-Hence, since $p$ has no roots in $k$ (a degree-one factor would give a root), $(\deg(f),\deg(g))\in\{(0,2),(2,0)\}$. In other words, $f\in k[X]^\times$ or $g\in k[X]^{\times}$ and $p$ is irreducible over $k$.
-If $\deg(p)=3$. Assume that $p$ has no roots in $k$ and assume there exists $(f,g)\in k[X]^2$ such that: $$p=fg.$$
-Taking the degree in this equality leads to: $$(\deg(f),\deg(g))\in\{(0,3),(1,2),(2,1),(3,0)\}.$$
-Hence, since $p$ has no roots in $k$, $(\deg(f),\deg(g))\in\{(0,3),(3,0)\}$. In other words, $f\in k[X]^\times$ or $g\in k[X]^{\times}$ and $p$ is irreducible over $k$. $\Box$<|endoftext|>
-TITLE: Given $|f(x)|≤x^2$, is $f$ both continuous and differentiable at $x=0$?
-QUESTION [5 upvotes]: Let $f:\mathbb R \to \mathbb R$ be a function such that $|f(x)|\le x^2$, for all $x\in \mathbb R$. Then, at $x=0$, is $f$ both continuous and differentiable?
-No idea how to begin. Can someone help?

-REPLY [6 votes]: You need to use the squeeze theorem on the given condition $|f(x)|\le x^2$ in order to prove that $f$ is both continuous and differentiable at $x=0$:

-Continuity: You know that $|f(x)|\le x^2$. Substituting $x=0$ you find that $|f(0)|\le 0$, i.e., $f(0)=0$.
But this condition can also be written as $$|f(x)|\le x^2 \implies -x^2\le f(x)\le x^2$$ and so, taking limits as $x\to 0$, $$\lim_{x\to 0}-x^2\le \lim_{x\to 0}f(x)\le \lim_{x\to 0}x^2 $$ which gives you $$0\le \lim_{x\to 0}f(x)\le 0 \implies \lim_{x\to 0}f(x)=0$$
-So the limit of $f$ as $x$ goes to $0$ and the value of $f$ at $x=0$ coincide, which implies that $f$ is continuous at $x=0$.
-Differentiability: $$\lim_{h\to 0}\frac{f(h)-f(0)}{h}\overset{1.}=\lim_{h \to 0}\frac{f(h)}{h}$$ And now bound again $$\lim_{h \to 0}\frac{-h^2}{h}\le \lim_{h \to 0}\frac{f(h)}{h}\le \lim_{h \to 0}\frac{h^2}{h}$$ which implies that $$\lim_{h \to 0}\frac{f(h)}{h}=0$$ or equivalently that $f'(0)=0$.<|endoftext|>
-TITLE: $\sqrt[31]{12} +\sqrt[12]{31}$ is irrational
-QUESTION [9 upvotes]: Prove that $\sqrt[31]{12} +\sqrt[12]{31}$ is irrational.

-I would assume that $\sqrt[31]{12} +\sqrt[12]{31}$ is rational and try to find a contradiction.
-However, I don't know where to start. Can someone give me a tip on how to approach this problem?

-REPLY [7 votes]: It is known that the algebraic integers are closed under addition, subtraction, products and taking roots.
-Since $12$ and $31$ are algebraic integers, so are their roots $\sqrt[31]{12}$, $\sqrt[12]{31}$. Being the sum of two such roots, $\sqrt[31]{12} + \sqrt[12]{31}$ is an algebraic integer.
-It is also known that if an algebraic integer is a rational number, it is an ordinary integer. Notice
-$$2 < \sqrt[31]{12} + \sqrt[12]{31}
-< \sqrt[31]{2^4} + \sqrt[12]{2^5} = 2^{\frac{4}{31}} + 2^{\frac{5}{12}} < 2\sqrt{2} < 3$$
-$\sqrt[31]{12} + \sqrt[12]{31}$ isn't an integer and hence is an irrational number.<|endoftext|>
-TITLE: An integral for the New Year 2016
-QUESTION [26 upvotes]: I have built this integral for the purpose of presenting a question that I find interesting and, I hope, pleasant to MSE readers for the New Year 2016, expecting
-to see different methods of solution.
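(A quick numerical check with mpmath — my addition, not part of the original post — of the integral posed below.)

```python
# Numerical check of ∫_{2016}^{3·2016} (3·2016 − x)^{1/5} / ((3·2016 − x)^{1/5} + (x − 2016)^{1/5}) dx.
from mpmath import mp, quad, root

mp.dps = 30
a, b = 2016, 3 * 2016
f = lambda x: root(b - x, 5) / (root(b - x, 5) + root(x - a, 5))
I = quad(f, [a, b])   # tanh-sinh quadrature copes with the fifth-root endpoint behavior
print(I)              # ≈ 2016.0
```

The symmetry $f(x)+f(a+b-x)=1$ forces the exact value $(b-a)/2=2016$, which the quadrature confirms.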
-I can confirm that this has been the case, judging by the welcome it has been given and the two motivated answers it has had.
-Calculate:
-$$\large \int_{2016}^{3\cdot 2016}\frac{\sqrt[5]{3\cdot 2016-x} }{\sqrt[5]{3\cdot 2016-x}+\sqrt[5]{x-2016}}\mathrm dx$$

-REPLY [31 votes]: Let $$I=\int_{2016}^{3\cdot 2016}\frac{\sqrt[5]{3\cdot 2016-x}}{\sqrt[5]{3\cdot 2016-x}+\sqrt[5]{x- 2016}}\ dx\tag 1$$
-Now, using the property of definite integrals $\int_a^bf(x)\ dx=\int_{a}^bf(a+b-x)\ dx$, one should get
-\begin{align*}
-I&=\int_{2016}^{3\cdot 2016}\frac{\sqrt[5]{3\cdot 2016-(4\cdot2016 -x)}}{\sqrt[5]{3\cdot 2016-(4\cdot2016 -x)}+\sqrt[5]{(4\cdot2016 -x)- 2016}}\ dx\\[3ex]
-I&=\int_{2016}^{3\cdot 2016}\frac{\sqrt[5]{x-2016}}{\sqrt[5]{x-2016}+\sqrt[5]{3\cdot2016 -x}}\ dx\\[3ex]
-I&=\int_{2016}^{3\cdot 2016}\frac{\sqrt[5]{x-2016}}{\sqrt[5]{3\cdot2016 -x}+\sqrt[5]{x-2016}}\ dx\tag 2\\[6ex]
-\end{align*}
-Now, adding (1) & (2), one should get
-\begin{align*}
-I+I&=\int_{2016}^{3\cdot 2016}\left(\frac{\sqrt[5]{3\cdot 2016-x}}{\sqrt[5]{3\cdot 2016-x}+\sqrt[5]{x- 2016}}+\frac{\sqrt[5]{x-2016}}{\sqrt[5]{3\cdot2016 -x}+\sqrt[5]{x-2016}}\right)\ dx\\[3ex]
-2I&=\int_{2016}^{3\cdot 2016}\frac{\sqrt[5]{3\cdot2016 -x}+\sqrt[5]{x-2016}}{\sqrt[5]{3\cdot2016 -x}+\sqrt[5]{x-2016}}\ dx\\[3ex]
-I&=\frac12\int_{2016}^{3\cdot 2016}\ dx\\[3ex]
-&=\frac12(3\cdot 2016-2016)\\[3ex] &=\color{red}{2016}
-\end{align*}<|endoftext|>
-TITLE: dropping injectivity from multivariable change of variables
-QUESTION [33 upvotes]: The change of variables for multivariable integration in Euclidean space is almost always stated for a $C^1$ diffeomorphism $\phi$, giving the familiar equation (for continuous $f$, say)
-$$\boxed{\int_{\phi(U)}f=\int_U(f\circ\phi)\cdot|\det D\phi|}$$
-Of course, this result by itself is not very useful in practice because a diffeomorphism is usually hard to come by.
The better advanced calculus and multivariable analysis texts explain explicitly how the hypothesis that $\phi$ is injective with $\det D\phi\neq0$ can be relaxed to handle problems along sets of measure zero -- a result which is necessary for almost all practical applications of the theorem, starting with polar coordinates. -Despite offering this slight generalization, very few of the standard texts state that the situation can be improved further still: there is an analogous theorem for arbitrary $C^1$ mappings $\phi$, not just those that are injective everywhere except on a set of measure zero. We simply account for how many times a point in the image gets hit by $\phi$, giving -$$\boxed{\int_{\phi(U)}f\cdot\,\text{card}(\phi^{-1})=\int_U(f\circ\phi)\cdot|\det D\phi|}$$ -where $\text{card}(\phi^{-1})$ measures the cardinality of $\phi^{-1}(x)$. -I think this theorem is a lot more natural and satisfying than the first, for many reasons. For one thing, it removes a huge restriction, bringing the theorem closer to the standard one-variable change of variables for which injectivity is not required (though of course the one-variable theorem is really a theorem about differential forms). It emphasizes that a certain degree of regularity is what's important here, not injectivity. For another thing, it's not a big step from here to degree theory for smooth maps between closed manifolds or to the "area formula" in geometric measure theory. (Indeed, the factor $\text{card}(\phi^{-1})$ is a special case of what old references in geometric measure theory called the "multiplicity function" or the "Banach indicatrix.") It's also used in multivariate probability to write down densities of non-injective transformations of random variables. And last, it's in the spirit of modern approaches to at least gesture at the most general possible result. 
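As a concrete sanity check of the multiplicity version (my own toy example, not taken from any of the texts mentioned): for the non-injective map $\phi=\sin$ on $U=(0,2\pi)$ we have $\operatorname{card}(\phi^{-1}(y))=2$ for almost every $y\in(-1,1)$, and both sides of the formula can be compared numerically with $f(y)=y^2$:

```python
# LHS: ∫_{-1}^{1} f(y)·card(φ^{-1}(y)) dy with card ≡ 2 a.e.;
# RHS: ∫_0^{2π} f(sin x)·|cos x| dx.   Here f(y) = y².
from mpmath import mp, quad, sin, cos, pi

mp.dps = 25
f = lambda y: y**2
lhs = quad(lambda y: 2 * f(y), [-1, 1])
# split the interval at π/2 and 3π/2, where |cos x| has kinks
rhs = quad(lambda x: f(sin(x)) * abs(cos(x)), [0, pi / 2, 3 * pi / 2, 2 * pi])
print(lhs, rhs)   # both ≈ 1.3333…
```

Both quadratures agree with the exact common value $4/3$.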
The traditional statement is really just a special case; injectivity only becomes essential when we define the integral over a manifold (rather than a parametrized manifold), which we want to be independent of parametrization. I think teaching the more general result would greatly clarify these matters, which are a constant source of confusion to beginners. -Yet many otherwise excellent multivariable analysis texts (Spivak, Rudin PMA and RCA, Folland, Loomis/Sternberg, Munkres, Duistermaat/Kolk, Burkill) don't mention this result, even in passing, as far as I can tell. I've had to hunt for discussions of it, and I've found it here: - -Zorich, Mathematical Analysis II (page 150, exercise 9, for the Riemann integral) -Kuttler, Modern Analysis (page 258, for the Lebesgue integral) -Csikós, Differential Geometry (page 72, for the Lebesgue integral) -Ciarlet, Linear and Nonlinear Functional Analysis with Applications (page 34, for the Lebesgue integral) -Bogachev, Measure Theory I (page 381, for the Lebesgue integral) -the Planet Math page on multivariable change of variables (Theorem 2) - -I'm also confident I've seen it in some multivariable probability books, but I can't remember which. But none of these is a standard textbook, except perhaps for Zorich. -My question: are there standard references with nice discussions of this extension of the more familiar result? Probability references are fine, but I'm especially curious whether I've missed some definitive treatment in one of the classic analysis texts. -(Also feel free to speculate why so few texts mention it.) - -REPLY [7 votes]: The short answer is: When designing a course (and in the sequel, a textbook) you have to cover a lot of indispensable material, like "A continuous function on a compact set is uniformly continuous". 
But you also have to make hundreds of larger or smaller decisions about, e.g., the order of presentation, which "equally important" topics to include, which topics to sacrifice, or to "remove to the exercises", etc. -Concerning the change of variables formula: We absolutely need this formula for the computation of volumes, moments of inertia, heat content, etc., of "geometrically complicated", or else particularly symmetric bodies $B$. To this end an essentially injective parametrization of $B$ is completely sufficient. On the other hand the proof of this formula (even in the vanilla variant) is quite time consuming. Unfortunately its essential part, namely the geometric meaning of the determinant, tends to be obscured by the work necessary to effectively nullify measure zero effects. In one of the sources quoted above it is claimed that the general version of the formula (as well as its proof) includes a special case of Sard’s Theorem. The latter is definitely out of bounds for a first real analysis course. -It is forgiveable when we then leave it at that and just teach what the student will certainly need to handle standard arguments and situations in differential geometry, mathematical physics, and the like. In my own mathematical practice I have used the vanilla variant of the formula a thousand of times, but the more general formula involving the "covering number" maybe five times, e.g., in a course on integral geometry. Note that, if you have understood the vanilla variant, the general formula is intuitively obvious, so that you can work with it in probability theory or dynamical systems without much ado.<|endoftext|> -TITLE: Integral ${\large\int}_0^{\pi/2}\arctan^2\!\left(\frac{\sin x}{\sqrt3+\cos x}\right)dx$ -QUESTION [23 upvotes]: I need to evaluate this integral: -$$I=\int_0^{\pi/2}\arctan^2\!\left(\frac{\sin x}{\sqrt3+\cos x}\right)dx$$ -Maple and Mathematica cannot evaluate it in this form. 
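(Added note, not part of the original question: a direct quadrature — a sketch using mpmath — reproduces the numeric value quoted next.)

```python
# Numerically evaluate I = ∫_0^{π/2} arctan²( sin x / (√3 + cos x) ) dx.
from mpmath import mp, quad, atan, sin, cos, sqrt, pi

mp.dps = 40
I = quad(lambda x: atan(sin(x) / (sqrt(3) + cos(x)))**2, [0, pi / 2])
print(I)   # ≈ 0.15637139137571170123…
```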
-Its numeric value is
-$$I\approx0.156371391375711701230837603266631522020409597791339398428...$$
-which is not recognized by WolframAlpha and the Inverse Symbolic Calculator+.
-Is it possible to evaluate this integral in closed form?
-I found similar questions here, here and here, but the approaches shown in the answers do not seem to be directly applicable here.

-REPLY [8 votes]: I would like to actually take the time to solve a more general problem because a) we can, and b) the final result is much simpler than you might expect (or a CAS might lead you to believe by offering you an antiderivative with an ungodly number of terms).
-Define the function $\mathcal{I}:\left(0,1\right)\rightarrow\mathbb{R}$ via the definite integral
-$$\mathcal{I}{\left(a\right)}:=\int_{0}^{\frac{\pi}{2}}\mathrm{d}\varphi\,\arctan^{2}{\left(\frac{a\sin{\left(\varphi\right)}}{1+a\cos{\left(\varphi\right)}}\right)}.$$
-We will show below that $\mathcal{I}{\left(a\right)}$ has a closed-form expression in terms of polylogarithms for all $0<a<1$. Throughout, set $p:=\frac{1-a}{1+a}\in\left(0,1\right)$; the tangent half-angle substitution $x=\tan{\left(\varphi/2\right)}$, $\mathrm{d}\varphi=\frac{2\,\mathrm{d}x}{1+x^{2}}$, turns the argument of the arctangent into $\frac{a\sin{\left(\varphi\right)}}{1+a\cos{\left(\varphi\right)}}=\frac{\left(1-p\right)x}{1+px^{2}}$, whence $\arctan{\left(\frac{a\sin{\left(\varphi\right)}}{1+a\cos{\left(\varphi\right)}}\right)}=\arctan{\left(x\right)}-\arctan{\left(px\right)}$.
-Then,
-$$\begin{align}
-\mathcal{I}{\left(a\right)}
-&=\int_{0}^{1}\mathrm{d}x\,\frac{2}{1+x^{2}}\left[\arctan{\left(x\right)}-\arctan{\left(px\right)}\right]^{2}\\
-&=\int_{0}^{1}\mathrm{d}x\,\frac{2}{1+x^{2}}\left[\arctan^{2}{\left(x\right)}-2\arctan{\left(x\right)}\arctan{\left(px\right)}+\arctan^{2}{\left(px\right)}\right]\\
-&=\int_{0}^{1}\mathrm{d}x\,\frac{2\arctan^{2}{\left(x\right)}}{1+x^{2}}-\int_{0}^{1}\mathrm{d}x\,\frac{4\arctan{\left(x\right)}\arctan{\left(px\right)}}{1+x^{2}}+\int_{0}^{1}\mathrm{d}x\,\frac{2\arctan^{2}{\left(px\right)}}{1+x^{2}}\\
-&=\frac23\arctan^{3}{\left(1\right)}\\
-&~~~~~-2\arctan^{2}{\left(1\right)}\arctan{\left(p\right)}+\int_{0}^{1}\mathrm{d}x\,\frac{2p\arctan^{2}{\left(x\right)}}{1+p^{2}x^{2}};~~~\small{I.B.P.}\\
-&~~~~~+\int_{0}^{1}\mathrm{d}x\,\frac{2\arctan^{2}{\left(px\right)}}{1+x^{2}}\\
-&=\frac{\pi^{3}}{96}-\frac{\pi^{2}}{8}\arctan{\left(p\right)}+\int_{0}^{1}\mathrm{d}x\,\frac{2p\arctan^{2}{\left(x\right)}}{1+p^{2}x^{2}}+\int_{0}^{1}\mathrm{d}x\,\frac{2\arctan^{2}{\left(px\right)}}{1+x^{2}}.\\ -\end{align}$$ - -Lengthy aside on the evaluation of $\int_{0}^{z}\mathrm{d}x\,\frac{2b\arctan^{2}{\left(ax\right)}}{1+b^{2}x^{2}}$: -Define the function $\mathcal{J}:\mathbb{R}_{>0}^{3}\rightarrow\mathbb{R}$ -$$\mathcal{J}{\left(a,b,z\right)}:=\int_{0}^{z}\mathrm{d}x\,\frac{2b\arctan^{2}{\left(ax\right)}}{1+b^{2}x^{2}}.$$ -It's a simple matter to show by rescaling the integral that -$$\forall\left(a,b,z\right)\in\mathbb{R}_{>0}^{3}:\mathcal{J}{\left(a,b,z\right)}=\mathcal{J}{\left(az,bz,1\right)},$$ -so we may assume WLOG that $z=1$ in the general evaluation of $\mathcal{J}{(a,b,z)}$. -Suppose $\left(a,b\right)\in\mathbb{R}_{>0}^{2}$, and set $\frac{b}{a}=:c\in\mathbb{R}_{>0}\land\arctan{\left(a\right)}=:\alpha\in\left(0,\frac{\pi}{2}\right)$. Then, -$$\begin{align} -\mathcal{J}{\left(a,b,1\right)} -&=\int_{0}^{1}\mathrm{d}x\,\frac{2b\arctan^{2}{\left(ax\right)}}{1+b^{2}x^{2}}\\ -&=\int_{0}^{a}\mathrm{d}y\,\frac{2ab\arctan^{2}{\left(y\right)}}{a^{2}+b^{2}y^{2}};~~\small{\left[x=\frac{y}{a}\right]}\\ -&=\int_{0}^{a}\mathrm{d}y\,\frac{2c\arctan^{2}{\left(y\right)}}{1+c^{2}y^{2}}\\ -&=\int_{0}^{\arctan{\left(a\right)}}\mathrm{d}\varphi\,\frac{2c\varphi^{2}\sec^{2}{\left(\varphi\right)}}{1+c^{2}\tan^{2}{\left(\varphi\right)}};~~\small{\left[\arctan{\left(y\right)}=\varphi\right]}\\ -&=\int_{0}^{\alpha}\mathrm{d}\varphi\,\frac{2c\varphi^{2}\sec^{2}{\left(\varphi\right)}}{1+c^{2}\tan^{2}{\left(\varphi\right)}}\\ -&=\left[2\varphi^{2}\arctan{\left(c\tan{\left(\varphi\right)}\right)}\right]_{\varphi=0}^{\varphi=\alpha}-\int_{0}^{\alpha}\mathrm{d}\varphi\,4\varphi\arctan{\left(c\tan{\left(\varphi\right)}\right)};~~~\small{I.B.P.}\\ 
-&=2\alpha^{2}\arctan{\left(c\tan{\left(\alpha\right)}\right)}-4\int_{0}^{\alpha}\mathrm{d}\varphi\,\varphi\arctan{\left(c\tan{\left(\varphi\right)}\right)}.\\ -\end{align}$$ -Next, by rewriting the integral as a multiple integral and changing the order of integration in the appropriate way, we obtain the following: -$$\begin{align} -\mathcal{J}{\left(a,b,1\right)} -&=2\alpha^{2}\arctan{\left(c\tan{\left(\alpha\right)}\right)}-4\int_{0}^{\alpha}\mathrm{d}\varphi\,\varphi\arctan{\left(c\tan{\left(\varphi\right)}\right)}\\ -&=2\alpha^{2}\arctan{\left(c\tan{\left(\alpha\right)}\right)}-4\int_{0}^{\alpha}\mathrm{d}\varphi\int_{0}^{\varphi}\mathrm{d}\vartheta\,\arctan{\left(c\tan{\left(\varphi\right)}\right)}\\ -&=2\alpha^{2}\arctan{\left(c\tan{\left(\alpha\right)}\right)}-4\int_{0}^{\alpha}\mathrm{d}\vartheta\int_{\vartheta}^{\alpha}\mathrm{d}\varphi\,\arctan{\left(c\tan{\left(\varphi\right)}\right)}\\ -&=2\alpha^{2}\arctan{\left(c\tan{\left(\alpha\right)}\right)}-4\int_{0}^{\alpha}\mathrm{d}\vartheta\int_{\vartheta}^{\alpha}\mathrm{d}\varphi\int_{0}^{c}\mathrm{d}y\,\frac{d}{dy}\arctan{\left(y\tan{\left(\varphi\right)}\right)}\\ -&=2\alpha^{2}\arctan{\left(c\tan{\left(\alpha\right)}\right)}-4\int_{0}^{\alpha}\mathrm{d}\vartheta\int_{\vartheta}^{\alpha}\mathrm{d}\varphi\int_{0}^{c}\mathrm{d}y\,\frac{\tan{\left(\varphi\right)}}{1+y^{2}\tan^{2}{\left(\varphi\right)}}\\ -&=2\alpha^{2}\arctan{\left(c\tan{\left(\alpha\right)}\right)}-4\int_{0}^{\alpha}\mathrm{d}\vartheta\int_{0}^{c}\mathrm{d}y\int_{\vartheta}^{\alpha}\mathrm{d}\varphi\,\frac{\tan{\left(\varphi\right)}}{1+y^{2}\tan^{2}{\left(\varphi\right)}}\\ -&=2\alpha^{2}\arctan{\left(c\tan{\left(\alpha\right)}\right)}-4\int_{0}^{\alpha}\mathrm{d}\vartheta\int_{0}^{c}\mathrm{d}y\int_{\vartheta}^{\alpha}\mathrm{d}\varphi\,\frac{\sin{\left(\varphi\right)}\cos{\left(\varphi\right)}}{\cos^{2}{\left(\varphi\right)}+y^{2}\sin^{2}{\left(\varphi\right)}}\\ 
-&=2\alpha^{2}\arctan{\left(c\tan{\left(\alpha\right)}\right)}-4\int_{0}^{\alpha}\mathrm{d}\vartheta\int_{0}^{c}\mathrm{d}y\int_{\vartheta}^{\alpha}\mathrm{d}\varphi\,\frac{\sin{\left(\varphi\right)}\cos{\left(\varphi\right)}}{1+\left(y^{2}-1\right)\sin^{2}{\left(\varphi\right)}}\\ -&=2\alpha^{2}\arctan{\left(c\tan{\left(\alpha\right)}\right)}-4\int_{0}^{\alpha}\mathrm{d}\vartheta\int_{0}^{c}\mathrm{d}y\int_{\sin{\left(\vartheta\right)}}^{\sin{\left(\alpha\right)}}\mathrm{d}t\,\frac{t}{1+\left(y^{2}-1\right)t^{2}};~~~\small{\left[\sin{\left(\varphi\right)}=t\right]}\\ -&=2\alpha^{2}\arctan{\left(c\tan{\left(\alpha\right)}\right)}-2\int_{0}^{\alpha}\mathrm{d}\vartheta\int_{0}^{c}\mathrm{d}y\int_{\sin^{2}{\left(\vartheta\right)}}^{\sin^{2}{\left(\alpha\right)}}\mathrm{d}u\,\frac{1}{1+\left(y^{2}-1\right)u};~~~\small{\left[t^{2}=u\right]}\\ -&=2\alpha^{2}\arctan{\left(c\tan{\left(\alpha\right)}\right)}\\ -&~~~~~-2\int_{0}^{\alpha}\mathrm{d}\vartheta\int_{0}^{c}\mathrm{d}y\,\frac{\ln{\left(1+\left(y^{2}-1\right)\sin^{2}{\left(\alpha\right)}\right)}-\ln{\left(1+\left(y^{2}-1\right)\sin^{2}{\left(\vartheta\right)}\right)}}{\left(y^{2}-1\right)}\\ -&=2\alpha^{2}\arctan{\left(c\tan{\left(\alpha\right)}\right)}+\int_{0}^{\alpha}\mathrm{d}\vartheta\int_{0}^{c}\mathrm{d}y\,\frac{2\ln{\left(\frac{1-\left(1-y^{2}\right)\sin^{2}{\left(\alpha\right)}}{1-\left(1-y^{2}\right)\sin^{2}{\left(\vartheta\right)}}\right)}}{\left(1-y^{2}\right)}\\ -&=2\alpha^{2}\arctan{\left(c\tan{\left(\alpha\right)}\right)}\\ -&~~~~~+\int_{0}^{\alpha}\mathrm{d}\vartheta\int_{\frac{1-c}{1+c}}^{1}\mathrm{d}x\,\frac{1}{x}\ln{\left(\frac{x^{2}+2x+1-4x\sin^{2}{\left(\alpha\right)}}{x^{2}+2x+1-4x\sin^{2}{\left(\vartheta\right)}}\right)};~~~\small{\left[y=\frac{1-x}{1+x}\right]}\\ -&=2\alpha^{2}\arctan{\left(c\tan{\left(\alpha\right)}\right)}\\ 
-&~~~~~+\int_{0}^{\alpha}\mathrm{d}\vartheta\int_{r}^{1}\mathrm{d}x\,\frac{1}{x}\ln{\left(\frac{1+2x\cos{\left(2\alpha\right)}+x^{2}}{1+2x\cos{\left(2\vartheta\right)}+x^{2}}\right)};~~~\small{\left[r:=\frac{1-c}{1+c}\in\left(-1,1\right)\right]}.\\ -\end{align}$$ -At this point it will be helpful to introduce the following two-variable extension of the dilogarithm: -$$\operatorname{Li}_{2}{\left(r,\theta\right)}:=-\int_{0}^{r}\mathrm{d}x\,\frac{\ln{\left(1-2x\cos{\left(\theta\right)}+x^{2}\right)}}{2x};~~~\small{\left(r,\theta\right)\in\mathbb{R}^{2}}.$$ -This function gives the real part of the dilogarithm of complex argument inside the unit circle: -$$\Re{\left(\operatorname{Li}_{2}{\left(re^{i\theta}\right)}\right)}=\operatorname{Li}_{2}{\left(r,\theta\right)};~~~\small{\left(r,\theta\right)\in\mathbb{R}^{2}\land|r|<1}.$$ -The function $\operatorname{Li}_{2}{\left(r,\theta\right)}$ can be shown to have the following special cases: -$$\operatorname{Li}_{2}{\left(1,\theta\right)}=\frac14\left(\pi-\theta\right)^{2}-\frac{\pi^{2}}{12};~~~\small{0\le\theta\le2\pi},$$ -$$\operatorname{Li}_{2}{\left(r,\frac{\pi}{2}\right)}=\frac14\operatorname{Li}_{2}{\left(-r^{2}\right)};~~~\small{r\in\mathbb{R}}.$$ -Continuing with our evaluation of $\mathcal{J}$, we obtain -$$\begin{align} -\mathcal{J}{\left(a,b,1\right)} -&=2\alpha^{2}\arctan{\left(c\tan{\left(\alpha\right)}\right)}+\int_{0}^{\alpha}\mathrm{d}\vartheta\int_{r}^{1}\mathrm{d}x\,\frac{\ln{\left(\frac{1+2x\cos{\left(2\alpha\right)}+x^{2}}{1+2x\cos{\left(2\vartheta\right)}+x^{2}}\right)}}{x}\\ -&=2\alpha^{2}\arctan{\left(c\tan{\left(\alpha\right)}\right)}+\int_{0}^{\alpha}\mathrm{d}\vartheta\,\bigg{[}2\operatorname{Li}_{2}{\left(r,\pi-2\alpha\right)}-2\operatorname{Li}_{2}{\left(1,\pi-2\alpha\right)}\\ -&~~~~~-2\operatorname{Li}_{2}{\left(r,\pi-2\vartheta\right)}+2\operatorname{Li}_{2}{\left(1,\pi-2\vartheta\right)}\bigg{]}\\ 
-&=2\alpha^{2}\arctan{\left(c\tan{\left(\alpha\right)}\right)}+\int_{0}^{\alpha}\mathrm{d}\vartheta\,\bigg{[}2\vartheta^{2}-2\alpha^{2}+2\operatorname{Li}_{2}{\left(r,\pi-2\alpha\right)}\\ -&~~~~~-2\operatorname{Li}_{2}{\left(r,\pi-2\vartheta\right)}\bigg{]}\\ -&=2\alpha^{2}\arctan{\left(c\tan{\left(\alpha\right)}\right)}-\frac43\alpha^{3}+2\alpha\operatorname{Li}_{2}{\left(r,\pi-2\alpha\right)}\\ -&~~~~~-2\int_{0}^{\alpha}\mathrm{d}\vartheta\,\operatorname{Li}_{2}{\left(r,\pi-2\vartheta\right)}\\ -&=2\alpha^{2}\arctan{\left(c\tan{\left(\alpha\right)}\right)}-\frac43\alpha^{3}+2\alpha\operatorname{Li}_{2}{\left(r,\pi-2\alpha\right)}\\ -&~~~~~-2\int_{0}^{\alpha}\mathrm{d}\vartheta\,\Re{\left[\operatorname{Li}_{2}{\left(re^{i\left(\pi-2\vartheta\right)}\right)}\right]}\\ -&=2\alpha^{2}\arctan{\left(c\tan{\left(\alpha\right)}\right)}-\frac43\alpha^{3}+2\alpha\operatorname{Li}_{2}{\left(r,\pi-2\alpha\right)}\\ -&~~~~~-2\Re\int_{0}^{\alpha}\mathrm{d}\vartheta\,\operatorname{Li}_{2}{\left(re^{i\left(\pi-2\vartheta\right)}\right)}\\ -&=2\alpha^{2}\arctan{\left(c\tan{\left(\alpha\right)}\right)}-\frac43\alpha^{3}+2\alpha\operatorname{Li}_{2}{\left(r,\pi-2\alpha\right)}\\ -&~~~~~-\Re\int_{\pi-2\alpha}^{\pi}\mathrm{d}\vartheta\,\operatorname{Li}_{2}{\left(re^{i\vartheta}\right)}\\ -&=2\alpha^{2}\arctan{\left(c\tan{\left(\alpha\right)}\right)}-\frac43\alpha^{3}+2\alpha\operatorname{Li}_{2}{\left(r,\pi-2\alpha\right)}\\ -&~~~~~-\Re{\left[\frac{1}{i}\operatorname{Li}_{3}{\left(re^{i\pi}\right)}-\frac{1}{i}\operatorname{Li}_{3}{\left(re^{i\left(\pi-2\alpha\right)}\right)}\right]}\\ -&=2\alpha^{2}\arctan{\left(c\tan{\left(\alpha\right)}\right)}-\frac43\alpha^{3}+2\alpha\operatorname{Li}_{2}{\left(r,\pi-2\alpha\right)}\\ -&~~~~~-\Im{\left[\operatorname{Li}_{3}{\left(-r\right)}-\operatorname{Li}_{3}{\left(re^{i\left(\pi-2\alpha\right)}\right)}\right]}\\ 
-&=2\alpha^{2}\arctan{\left(c\tan{\left(\alpha\right)}\right)}-\frac43\alpha^{3}+2\alpha\operatorname{Li}_{2}{\left(r,\pi-2\alpha\right)}+\Im{\left[\operatorname{Li}_{3}{\left(-re^{-2i\alpha}\right)}\right]}.\\ -\end{align}$$ -The following pair of integration formulas then follow from special cases of the previous result: -$$\int_{0}^{1}\mathrm{d}x\,\frac{2p\arctan^{2}{\left(x\right)}}{1+p^{2}x^{2}}=\Im{\left[\operatorname{Li}_{3}{\left(i\frac{1-p}{1+p}\right)}\right]}+\frac{\pi}{8}\operatorname{Li}_{2}{\left(-\left(\frac{1-p}{1+p}\right)^{2}\right)}+\frac{\pi^{2}}{8}\arctan{\left(p\right)}-\frac{\pi^{3}}{48},$$ -$$\begin{align} -\int_{0}^{1}\mathrm{d}x\,\frac{2\arctan^{2}{\left(px\right)}}{1+x^{2}} -&=\Im{\left[\operatorname{Li}_{3}{\left(\frac{1-p}{1+p}e^{-2i\arctan{\left(p\right)}}\right)}\right]}+2\arctan{\left(p\right)}\operatorname{Li}_{2}{\left(\frac{1-p}{1+p},2\arctan{\left(p\right)}\right)}\\ -&~~~~~-\frac43\arctan^{3}{\left(p\right)}+\frac{\pi}{2}\arctan^{2}{\left(p\right)},\\ -\end{align}$$ -where $0 -TITLE: In general, when does it hold that $f(\sup(X)) = \sup f(X)$? -QUESTION [9 upvotes]: Let $f: [-\infty, \infty] \to [-\infty, \infty]$. -What conditions should we impose on $f$ so that the following statement becomes true? -$$\forall \ X \subset [-\infty, \infty], \sup f(X) = f(\sup X)$$ -If that doesn't make much sense, then for some function with certain conditions, what kind of sets $X$ satisfy $f(\sup X) = \sup f(X)$? - -Some background to the question: -While doing a certain proof, I was about to swap $\sqrt{\cdot}$ and $\sup$, but I soon realized that such a step probably needs some scrutiny. I still do not know whether such a step is valid, and I would like to know what sort of functions satisfy the requirement. I supposed that $f$ is an extended real valued function for the possibility of $\sup = \infty$. - -REPLY [8 votes]: I believe the sufficient and necessary condition for $f$ is that it is nondecreasing, left-continuous (i.e. 
for all $x_0$, $\lim\limits_{x\rightarrow x_0^-}f(x)=f(x_0)$) and $f(-\infty)=-\infty$. The last condition is necessary for consideration of $X=\varnothing$. The first condition is necessary since for $X=\{a,b\},a\leq b$ we need $\max\{f(a),f(b)\}=\sup f(X)=f(\sup X)=f(b)$, i.e. $f(a)\leq f(b)$. The second condition is necessary for when we take $X=(-\infty,x_0)$. Since we know $f$ is nondecreasing, $\sup f(X)=\lim\limits_{x\rightarrow x_0^-}f(x)$, and it must equal $f(\sup X)=f(x_0)$. -As for sufficiency, assume the above three conditions. For $X=\varnothing$ or $X=\{-\infty\}$ this is clear, so assume $\sup X:=x_0>-\infty$. It's easy to see that left continuity implies $\sup f((-\infty,x_0))=f(x_0)$ (thanks to monotonicity of the function), so we only need to prove $\sup f((-\infty,x_0))=\sup f(X)$. If $x_0\in X$, this is obvious from monotonicity. Clearly $\sup f((-\infty,x_0))\geq\sup f(X)$, since the former $\sup$ is taken on a (possibly) larger set. Now, take any $a\in (-\infty,x_0)$. Then there exists a $b\in X$ such that $a -TITLE: A degree $4$ polynomial whose Galois group is isomorphic to $S_4$. -QUESTION [8 upvotes]: I am reading an article about Galois groups. The article states that: - -It can also be shown that for each degree $d$ there exist polynomials whose Galois group is the fully symmetric group $S_d$. - -I think that the Galois group of a quadratic is isomorphic to $S_2$ if the roots are not rational. -I think that the Galois group of $x^3 - a$ is isomorphic to $S_3$ if $a$ is not a perfect cube. -Is it difficult to find an example of a degree $4$ polynomial whose Galois group is isomorphic to $S_4$? I am reading a textbook and so far most of the examples have been for Galois groups of "small" order. -I have not progressed to the point where I can appreciate a proof of the statement made in the article but I would like to see an example of such a polynomial of degree $4$ or hear from someone knowledgeable that this is a difficult and/or tedious task.
- -REPLY [5 votes]: This isn't too hard to do, but the method below requires a little knowledge of the cubic resolvent $h$ of a quartic polynomial $g$, namely that $\operatorname{Gal}(h) \leq \operatorname{Gal}(g)$ and that the discriminants of $g$ and $h$ (essentially) coincide. (I can expand on this if it would be helpful.) -If a polynomial $g$ is irreducible (which is a necessary condition for $\operatorname{Gal}(g) \cong S_{\deg g}$, and which we thus henceforth assume), then its Galois group acts transitively. The only transitive subgroups of $S_4$ are (up to conjugacy) $S_4, A_4, D_8, \Bbb Z_4, \Bbb Z_2 \times \Bbb Z_2$. We'll use the fact that the only groups among these whose order is divisible by $6$ are $S_4$ and $A_4$. -At least when the characteristic of the underlying field $\Bbb F$ is not $2$, we may for simplicity of the formulae below make a linear change of variables so that $g$ has zero coefficient in its $x^3$ term and write (after dividing by the leading coefficient) -$$g(x) = x^4 + p x^2 + q x + r,$$ -its resolvent cubic is -$$h(x) = x^3 - 2 p x^2 + (p^2 - 4 r) x + q^2,$$ -and the discriminants of $g$ and $h$ coincide (perhaps up to an overall nonzero multiplicative constant), and are -$$D = 16 p^4 r - 4 p^3 q^2 - 128 p^2 r^2 + 144 p q^2 r - 27 q^4 + 256 r^3.$$ -If $h$ is irreducible and its discriminant $D$ is not a square, then (1) $\operatorname{Gal}(h) \leq G := \operatorname{Gal}(g)$ is $S_3$, so $G$ has order divisible by $6$ and hence by the above is $S_4$ or $A_4$, and (2) since $D$ (the discriminant of $g$) is not a square, $G \not\leq A_4$ and hence $G \cong S_4$. For $g$ and $h$ to be irreducible, they must have nonzero constant terms and hence $q, r \neq 0$. For $p = 0$, the above formulae simplify to -$$h(x) = x^3 - 4 r x + q^2$$ and $$D = - 27 q^4 + 256 r^3,$$ so to find an example we can search for $q, r$ for which $h$ is irreducible and $D$ is nonsquare.
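Such a search is easy to mechanize. A short sympy sketch of the criterion just stated (the search bounds and the function name `certifies_s4` are my own, purely for illustration):

```python
# Search for quartics g = x^4 + q*x + r (the case p = 0) with Galois group S4 over Q:
# the resolvent cubic h = x^3 - 4*r*x + q^2 must be irreducible and the common
# discriminant D = -27*q^4 + 256*r^3 must not be a perfect square.
from math import isqrt
from sympy import Poly, symbols

x = symbols('x')

def certifies_s4(q, r):
    D = -27 * q**4 + 256 * r**3
    if D >= 0 and isqrt(D) ** 2 == D:
        return False  # a square discriminant would put the Galois group inside A4
    g = Poly(x**4 + q*x + r, x, domain='QQ')
    h = Poly(x**3 - 4*r*x + q**2, x, domain='QQ')
    return g.is_irreducible and h.is_irreducible

hits = [(q, r) for q in range(1, 4) for r in range(1, 4) if certifies_s4(q, r)]
print(hits)
```

In particular the pair $(q, r) = (1, 1)$ passes, which is the example used next.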
For $\Bbb F = \Bbb Q$ the simple choice $q = r = 1$ satisfies these criteria (the first by the Rational Root Test), so we have, for example, that $$\operatorname{Gal}(x^4 + x + 1) \cong S_4 .$$ -See these notes for more details (using the same notation). Also, note that these sorts of examples are generic in that, in a sense that can be made precise, most irreducible polynomials of degree $n$ in $\Bbb Q[x]$ have Galois group $S_n$.<|endoftext|> -TITLE: The fundamental group of $\mathbb{R}^3$ with its non-negative half-axes removed -QUESTION [10 upvotes]: Determine whether the fundamental group of $\mathbb{R}^3$ with its non-negative half-axes removed is trivial, infinite cyclic, or isomorphic to the figure eight space. - I found this answer: - - -Why do we have that $\alpha*\beta=\gamma$? I can't see how we have this homotopy or deformation. -PS: I think we are actually supposed to solve this by showing that we can find that the figure eight space is a deformation retract of this space, or homotopy equivalent. Do you see a way of doing this? I cannot really see how to define the deformations. - -REPLY [10 votes]: Here's one approach to the question in the postscript: -If we denote by $X$ the subspace of $\Bbb R^3 - \{ 0 \}$ whose fundamental group we are computing, one can show that the mapping $\Bbb R^3 - \{ 0 \} \to \Bbb S^2 \subset \Bbb R^3$ defined by $x \mapsto \frac{x}{||x||}$ restricts to a map from $X$ onto a sphere with three points deleted, namely $Y := \Bbb S^2 - \{(1, 0, 0), (0, 1, 0), (0, 0, 1)\}$, and this restriction and the inclusion $Y \hookrightarrow X$ together comprise a homotopy equivalence $X \simeq Y$.
The thrice-punctured sphere $Y$ is then homeomorphic (via, e.g., stereographic projection from one of the deleted points) to the plane with two points deleted, $\Bbb R^2 - \{p, q\}$, and this space is in turn homotopic to the figure eight space (see page 3 of these notes of Munkres for some diagrams that indicate how to write down explicitly this latter homotopy equivalence).<|endoftext|> -TITLE: Integral $\int_0^\frac{\pi}{2} \left(\operatorname{chi}(\cot^2x)+\text{shi}(\cot^2x)\right)\csc^2(x)e^{-\csc^2(x)}dx$ -QUESTION [7 upvotes]: The following problem was posted here a while ago by Cornel Ioan Valean. - -Evaluate: - $$\int_0^\frac{\pi}{2} [\text{chi}(\cot^2x)+\text{shi}(\cot^2 x)]\csc^2(x)e^{-\csc^2(x)}dx$$ - where $\operatorname{shi}(x)=\int_{0}^{x}\frac{\sinh t}{t}dt$ and $\operatorname{chi}(x)=\gamma +\log(x)+\int_{0}^{x}\frac{\cosh(t)-1}{t} dt.$ - -I have tried to use integral by parts but I didn't succeed as I crossed this: -$$\int_0^\frac{\pi}{2}\csc^2(x)e^{-\csc^2(x)}dx$$ -Which is: $\frac {\sqrt{\pi}}{2e}.$ -I don't know how I can complete integration by parts since it doesn't have a closed form. -Note: I guess this integral is $0$ (integration over closed path). - -REPLY [3 votes]: Two solutions can be found in this pdf. 
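Before any derivation, the claim is easy to test numerically: for $x>0$ one has $\operatorname{chi}(x)+\operatorname{shi}(x)=\operatorname{Ei}(x)$, and substituting $\cot x=y$ turns the integral into $\frac1e\int_0^\infty \operatorname{Ei}(y^2)e^{-y^2}\,dy$. A quick mpmath check (a sanity check only, not a proof):

```python
# Check chi(x) + shi(x) = Ei(x) at a sample point, then evaluate
# Integral_0^inf Ei(y^2) * exp(-y^2) dy, which should come out as 0.
from mpmath import mp, chi, shi, ei, exp, quad, inf, mpf

mp.dps = 30
assert abs(chi(mpf(2)) + shi(mpf(2)) - ei(mpf(2))) < mpf(10) ** -20

val = quad(lambda y: ei(y**2) * exp(-y**2), [0, 1, inf])
print(val)  # numerically ~ 0
```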
Here is another way: -$$\operatorname{chi}(x)+\operatorname{shi}(x)=\gamma+\ln x+\int_0^x \frac{\cosh t+\sinh t-1}{t}dt=\gamma +\ln x+\int_0^1 \frac{e^{tx}-1}{t}dt$$ -$$\int_0^\frac{\pi}{2}\left(\operatorname{chi}(\cot^2 x)+\operatorname{shi}(\cot^2 x)\right)\csc^2 x\,e^{-(1+\cot^2 x)}dx\overset{\cot x=y}=\frac{1}{e}\int_0^\infty\left(\operatorname{chi}(y^2)+\operatorname{shi}(y^2)\right)e^{-y^2}dy$$ -$$=\frac{1}{e}\int_0^\infty\left(\gamma +\ln y^2\right)e^{-y^2}dy+\frac1e\int_0^\infty e^{-y^2}\int_0^{1}\frac{e^{ty^2}-1}{t}dtdy=\frac1e\left(-\sqrt \pi \ln 2+\sqrt \pi \ln 2\right)=0$$ - -$$\int_0^\infty e^{-y^2}\ln y^2 dy = -\frac{\sqrt \pi}{2}\left(\gamma +2\ln 2 \right)\Rightarrow \int_0^\infty\left(\gamma +\ln y^2\right)e^{-y^2}dy=-\sqrt{\pi}\ln 2$$ -$$\int_0^\infty e^{-y^2}\int_0^1 \frac{e^{ty^2}-1}{t}dtdy=\int_0^1 \frac{1}{t}\int_0^\infty \left(e^{-y^2(1-t)}-e^{-y^2}\right)dydt$$ -$$=\frac{\sqrt\pi}{2}\int_0^1\frac{1}{t}\left(\frac{1}{\sqrt{1-t}}-1\right)dt\overset{1-t=x^2}=\sqrt \pi \int_0^1 \frac{1}{1+x}dx=\sqrt \pi \ln 2$$<|endoftext|> -TITLE: Evaluate $\int \frac{x^2}{x^2 -6x + 10}\,dx$ -QUESTION [5 upvotes]: Evaluate $$\int \frac{x^2}{x^2 -6x + 10} \, dx$$ -I'd love to get a hint on how to get rid of that numerator, or make it somehow simpler. -Before posting this, I've looked into: Solve integral $\int{\frac{x^2 + 4}{x^2 + 6x +10}dx}$ -And I've not understood how they simplified the numerator. I know that it has to match $2x-6$ somehow, but the way they put $(x-6)$ in, multiplied the integral, and suddenly have $x+1$ in the numerator does not make sense to me. - -REPLY [2 votes]: Hint: -$$\frac{x^2}{x^2-6x+10}=1+\frac{6x-10}{x^2-6x+10}$$ -Now, Partial Fractions.
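Carrying the hint through (splitting $6x-10=3(2x-6)+8$ against the derivative of the denominator), sympy confirms the decomposition and the resulting antiderivative; this is only a symbolic double check:

```python
# Verify x^2/(x^2-6x+10) = 1 + (6x-10)/(x^2-6x+10), and that
# F = x + 3*log(x^2-6x+10) + 8*atan(x-3) differentiates back to the integrand
# (note x^2 - 6x + 10 = (x-3)^2 + 1).
from sympy import symbols, simplify, diff, log, atan

x = symbols('x')
integrand = x**2 / (x**2 - 6*x + 10)
assert simplify(integrand - (1 + (6*x - 10)/(x**2 - 6*x + 10))) == 0

F = x + 3*log(x**2 - 6*x + 10) + 8*atan(x - 3)
assert simplify(diff(F, x) - integrand) == 0
print("ok")
```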
- -REPLY [2 votes]: Hint: $$\int\frac{x^2}{x^2-6x+10}dx=\int\left(1+\frac{6x-10}{x^2-6x+10}\right)dx$$ -$\dfrac{d}{dx} \ln{(x^2-6x+10)}=\frac{2x-6}{x^2-6x+10}$, therefore write -$$\frac{6x-10}{x^2-6x+10}=\frac{3(2x-6)}{x^2-6x+10}+\frac{8}{x^2-6x+10}=3\frac{2x-6}{x^2-6x+10}+8\frac{1}{(x-3)^2+1}$$ -$$I=x+3\ln(x^2-6x+10)+8\arctan{(x-3)}+C$$<|endoftext|> -TITLE: Binomial sum gives $4^n$ -QUESTION [9 upvotes]: I was looking at this question: Swapping the $i$th largest card between $2$ hands of cards -and WolframAlpha gave me this result. -Why is it so? $$\sum_{k=0}^n{2k\choose k}{2n-2k\choose n-k}=4^n?$$ - -REPLY [11 votes]: Convolution of the series -$$ -\frac1{\sqrt{1-4x}}=\sum_{n=0}^\infty\binom{2n}{n}x^n -$$ -gives -$$ -\begin{align} -\sum_{n=0}^\infty\sum_{k=0}^n\binom{2k}{k}\binom{2n-2k}{n-k}x^n -&=\frac1{\sqrt{1-4x}}\frac1{\sqrt{1-4x}}\\ -&=\frac1{1-4x}\\ -&=\sum_{n=0}^\infty4^nx^n -\end{align} -$$ -Equating coefficients of $x^n$ gives -$$ -\sum_{k=0}^n\binom{2k}{k}\binom{2n-2k}{n-k}=4^n -$$<|endoftext|> -TITLE: $ \lim \frac{a^x-b^x}{x}$ as $x \to 0$ where $a>b>0$ -QUESTION [6 upvotes]: Calculate $ \displaystyle \lim _{x \to 0} \frac{a^x-b^x}{x}$ - where $a>b>0$ - -My thoughts: I think L'Hopital's rule would apply, but differentiating gives me a way more complicated limit. I tried to see if it's the derivative of some function evaluated at a point, but I can't find such a function. - -REPLY [3 votes]: Application of L'Hospital's rule gives you: -$$ \lim _{x \to 0} \frac{a^x-b^x}{x} = \lim _{x \to 0} a^x \cdot \log a-b^x \cdot \log b = \log a - \log b = \log {a \over b}$$ - -REPLY [3 votes]: Your other suggestion also works. Note that our function is equal to -$$b^x \frac{(a/b)^x-1}{x}.$$ -One can recognize -$$\lim_{x\to 0}\frac{(a/b)^x-1}{x}$$ -as a derivative. -Even more simply, we recognize -$$\lim_{x\to 0} \frac{a^x-b^x-0}{x}$$ -as a derivative. - -REPLY [2 votes]: Hint. Let $\alpha$ be any real number.
One may recall that -$$ -\lim _{u \to 0} \frac{e^{\alpha u}-1}{u}=\alpha, -$$ then write -$$ - \frac{a^x-b^x}{x}= \frac{e^{x\ln a}-1}{x}- \frac{e^{x\ln b}-1}{x}. -$$<|endoftext|> -TITLE: The probability that in a game of bridge each of the four players is dealt one ace -QUESTION [7 upvotes]: The question is to show that the probability that each of the four players in a game of bridge receives one ace is $$ \frac{24 \cdot 48! \cdot13^4}{52!}$$ My explanation so far is that there are $4!$ ways to arrange the 4 aces, $48!$ ways to arrange the other cards, and since each arrangement is equally likely we divide by $52!$. I believe the $13^4$ represents the number of arrangements to distribute 4 aces among 13 cards, but I don't see why we must multiply by this value as well? - -REPLY [4 votes]: After the cards have been shuffled and cut, there are $\binom{52}4$ equally likely possibilities for the set of four positions in the deck occupied by the aces. Among those $\binom{52}4$ sets, there are $13^4$ which result in each player getting an ace; namely, make one of the $13$ cards to be dealt to South an ace, and the same for West, North, and East. So the probability is -$$\frac{13^4}{\binom{52}4}=\frac{4!\cdot48!\cdot13^4}{52!}=\frac{2197}{20825}\approx 0.1055$$<|endoftext|> -TITLE: Classification theorem for vector spaces -QUESTION [8 upvotes]: As I was going over the classification theorem for closed surfaces today, the text I'm reading gave another example of a classification theorem: finite dimensional vector spaces are classified by their dimension. As a point of fact, I think that this is slightly wrong -- if I understand what the author was trying to say, I think that finite dimensional vector spaces are classified by their dimension and their base field. Then he's just talking about that theorem that says that any $\Bbb F$-vector space with dimension $n$ is isomorphic to $\Bbb F^n$.
-His use of the word "finite" though has me wondering, does the same thing hold for infinite dimensional vector spaces? Can we "classify" infinite dimensional vector spaces by their dimensions (meaning $\aleph_0$, $\aleph_1$, etc) and base fields? Or is there something more complex that happens for infinite dimensional spaces? - -REPLY [3 votes]: Yes, vector spaces are always classified by their dimension, and the argument is as you would expect: if $\text{dim }V=\text{dim }W$, then we can choose a bijection between a basis of $V$ and a basis of $W$, and this bijection induces an isomorphism $V\cong W$. (Note, however, that the existence of a basis for every (infinite-dimensional) vector space requires the use of the Axiom of Choice/Zorn's Lemma.) -An interesting application of this fact is that the additive group of $\mathbb{C}$ is isomorphic to the additive group of $\mathbb{R}$ - indeed, both are $\mathfrak{c}$-dimensional $\mathbb{Q}$-vector spaces, where $\mathfrak{c}$ denotes the cardinality of the continuum. -As @Tryss points out in the comments, there are different notions of isomorphism for infinite-dimensional vector spaces decorated with additional mathematical structure, e.g., normed linear spaces (vector spaces equipped with a norm), Banach spaces (complete normed linear spaces), inner product spaces (vector spaces equipped with an inner product), Hilbert spaces (complete inner product spaces) etc. Hilbert spaces are completely classified up to isomorphism by their dimension (which is at most countable assuming separability) but Banach spaces are not. However, this is a separate mathematical theory, so I won't comment further unless you would like me to (and in the meantime, would direct you to the relevant Wikipedia articles). -Hope that helps!<|endoftext|> -TITLE: Small Representations of $2016$ -QUESTION [13 upvotes]: It's the new year at least in my timezone, and to welcome it in, I ask for small representations of the number $2016$. 
-Rules: Choose a single decimal digit ($1,2,\dots,9$), and use this chosen digit, combined with operations, to create an expression equal to $2016$. Each symbol counts as a point, and the goal is to minimize the total number of points. -Example (my best so far): -$$2016=\frac{(4+4)!}{4!-4}$$ -This expression scores 11 points. That is 2 points for the parentheses, 4 points for the $4$s, 1 point for $+$, 2 points for the $!$s, 1 point for the fraction, and 1 point for the $-$. -Allowable actions: basic arithmetic operations (addition, subtraction, multiplication, division), exponentiation, factorials, repeated digits (i.e. if you are working with the digit $7$, you can use $77$ for 2 points), and use of parentheses. - -What is the minimum number of points for an expression of the above form equaling 2016, and what are those minimum expressions? - -Note that by "Use a single decimal digit" I mean you may only use one of the digits $1$ through $9$, so for example, you can't save in the above expression just by using $8!$ instead of $(4+4)!$ because you would still have the $4!-4$ part. -This question is mostly for fun, but could have some relevance to students who participate in thematic math competitions this year. - -REPLY [3 votes]: Here's a devil of an answer: -$$2016=666+666+666+6+6+6$$<|endoftext|> -TITLE: Show that the square root of a non-negative operator is unique -QUESTION [7 upvotes]: Let $H$ be a Hilbert space, and $A\in B(H\to H)$ be a bounded non-negative operator (i.e. $\langle Ax,x\rangle \geq 0$ for all $x\in H$). The square root of $A$ is a bounded non-negative operator $B\geq 0$ such that $B^2=A$. -First, we can assume without loss of generality that $0\leq A\leq I$. Note that $B^2=A$ if and only if -$$I-B=\frac{1}{2}((I-A)+(I-B)^2).$$ -Hence, we define inductively a sequence $C_n$ of operators as follows: $C_0:=0$, and $C_{n+1}:=\frac{1}{2}((I-A)+C_n^2)$.
Then it is easy to see that $C_n$ converges to a bounded non-negative operator $B$ in the strong operator topology and we also have $B^2=A$; thus the square root exists, but I don't know how to show that it is unique. - -REPLY [6 votes]: Suppose $A$ is a bounded nonnegative operator on a Hilbert space. Let $(p_n)$ be a sequence of polynomials such that -$$ -p_n(x) \to \sqrt{x} -$$ -uniformly for $x$ in the interval $[0, \|A\|]$ (the Weierstrass Approximation Theorem implies the existence of such a sequence of polynomials). -Now suppose $B$ is a nonnegative square root of $A$. Let $\mathcal{B}$ denote the norm closed algebra generated by $B$. Then $\mathcal{B}$ is a commutative $C^*$-algebra that contains $B$ and $A$ (because $A = B^2$). Thus there is a compact Hausdorff space $K$ such that $\mathcal{B}$ is isomorphic as a $C^*$-algebra to $C(K)$. This isomorphism preserves all $C^*$ properties. Thus $A$ corresponds to some nonnegative function $f \in C(K)$ taking values in $[0, \|A\|]$ and $B$ must correspond to the function $\sqrt{f}$ (which is the only nonnegative square root of $f$ in $C(K)$). -Because $p_n \circ f$ converges uniformly to $\sqrt{f}$ on $K$, we conclude that $p_n(A)$ converges in operator norm to $B$. But the polynomials $(p_n)$ were chosen independently of $B$. Thus $B$ is uniquely determined as a nonnegative square root of $A$.<|endoftext|> -TITLE: Center of the Quaternions: Proof and Method -QUESTION [9 upvotes]: I have to calculate the center of the real quaternions, $\mathbb{H}$. -So, I assumed two real quaternions, $q_n=a_n+b_ni+c_nj+d_nk$ and computed their products. I assume, since we are dealing with rings, that the thing to check is whether they commute under multiplication. So I'm looking at $q_1q_2=q_2q_1$. When I do this, I find that clearly the constant terms are identical, so it is clear that the subset $\mathbb{R}$ is in the center. So, perhaps then, $\mathbb{C}\le\mathbb{H}$.
-However, I ended up, after direct calculation, with the following system: -$$c_1d_2=c_2d_1$$ -$$b_1d_2=b_2d_1$$ -$$b_1c_2=b_2c_1$$ -So the determination is then found by solving this system. Intuitively, I felt that this led to $0$'s everywhere and thus the center of $\mathbb{H}$, $Z(\mathbb{H})=\mathbb{R}$. I then checked online for some confirmation and indeed it seemed to validate my result. However, the proof method used is something I haven't seen. It was pretty straightforward and understandable, but again, I've never seen it. It goes like this: -Suppose $b_1,c_1,$ and $d_1$ are arbitrary real coefficients and $b_2, c_2,$ and $d_2$ are fixed. Considering the first equation, assume that $d_1=1$ (since it is arbitrary, its value can be any real...). This leads to -$$c_1=\frac{c_2}{d_2}$$ -and this is a contradiction, since $c_1$ is no longer arbitrary (it depends on $c_2$ and $d_2$). -I really like this proof method, although it is unfamiliar to me. I said earlier that for my own understanding, it seemed intuitively obvious, but that is obviously not a proof: -1) What are some other proof methods for solving this system other than the method of contradiction used above? I was struggling with this and I feel I shouldn't be. -2) What other proofs can be found in elementary undergraduate courses that use this method of "assume arbitrary stuff", and "fix some other stuff" and get a contradiction? I found this method very clean and fun, but have never seen it used (as far as I know) in any elementary undergraduate courses thus far... - -REPLY [14 votes]: I am not sure where the contradiction lies exactly in your proof by contradiction. But here is another method. -An element $x\in \mathbb H$ belongs to the center if and only if $[x,y]=0$ for all $y\in \mathbb H$, where $[x,y]=xy-yx$ denotes the commutator of two elements. -We see immediately that $[x,1]=0$, whereas if $x=a+bi+cj+dk$ we have -$$ -[x,i]=-2ck+2dj. -$$ -Thus $[x,i]=0$ if and only if $c=d=0$.
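This commutator is easy to double-check symbolically, for instance with sympy's `Quaternion` class (purely a sanity check on the computation above):

```python
# Verify [x, i] = -2c*k + 2d*j for x = a + b*i + c*j + d*k.
from sympy import symbols, simplify
from sympy.algebras.quaternion import Quaternion

a, b, c, d = symbols('a b c d', real=True)
x = Quaternion(a, b, c, d)   # a + b*i + c*j + d*k
qi = Quaternion(0, 1, 0, 0)  # the quaternion i

comm = x*qi - qi*x           # the commutator [x, i]
print(comm)                  # components should be (0, 0, 2*d, -2*c)
```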
-Similarly $[x,j]=0$ if and only if $b=d=0$. Thus the only elements $x$ which commute with both $i$ and $j$ are $x\in \mathbb R$; in particular, it follows that $Z(\mathbb H)\subset \mathbb R$. Since it is clear that $\mathbb R\subset Z(\mathbb H)$, the result follows. -Idea behind the proof: There are three special copies of the complex numbers sitting inside $\mathbb H$: the subspaces -$$ -\mathbb C_i=\mathbb R[i],\qquad \mathbb C_j=\mathbb R[j],\qquad \mathbb C_k=\mathbb R[k]. -$$ -Inside $\mathbb H$, each of these subspaces is its own centralizer: $Z_{\mathbb H}(\mathbb C_i)=\mathbb C_i$ and so forth. Since $$\mathbb H=\mathbb C_i+ \mathbb C_j+ \mathbb C_k,$$ -it follows that $Z(\mathbb H)=Z_{\mathbb H}(\mathbb C_i)\cap Z_{\mathbb H}(\mathbb C_j)\cap Z_{\mathbb H}(\mathbb C_k)=\mathbb R$.<|endoftext|> -TITLE: An Odd Mean Value Theorem Problem -QUESTION [11 upvotes]: If $f: [x_1,x_2] \to \mathbb{R}$ is differentiable, show for some $c \in (x_1,x_2)$ that -$$ -\frac{1}{x_1-x_2} \left| \begin{matrix} x_1 & x_2 \\ -f(x_1) & f(x_2) \end{matrix} \right|=f(c)-cf'(c) -$$ -My attempt: Actually taking the determinant, multiplying by a negative, and carrying across the denominator on the left gives -$$ -x_1f(x_2)-x_2f(x_1)=(-f(c)+cf'(c)) \cdot (x_2-x_1) -$$ -and this screams Mean Value Theorem. So I took the function $g(x)=(x_2+x_1-x)f(x)$ which is clearly differentiable on $[x_1,x_2]$, then by the Mean Value Theorem, we know there is a $c \in (x_1,x_2)$ such that -$$ -g(x_2)-g(x_1)=g'(c)(x_2-x_1) -$$ -But for our function $g(x)$, we know $g(x_2)=x_1f(x_2)$ and $g(x_1)=x_2f(x_1)$. Moreover, $g'(x)= - f(x) + (x_2+x_1-x)f'(x)$. Then this gives -$$ -x_1f(x_2)-x_2f(x_1)=(-f(c)+(x_2+x_1-c)f'(c))(x_2-x_1) -$$ -which is so close to what we wanted to show that I do not see how this could not be the correct approach. Have I missed something or is the result false?
- -REPLY [8 votes]: Using Cauchy's Mean Value Theorem with functions $\frac{f(x)}x$ and $\frac1x$, we get -$$ -\frac{x_1f(x_2)-x_2f(x_1)}{x_1-x_2}=\frac{\frac{f(x_2)}{x_2}-\frac{f(x_1)}{x_1}}{\frac1{x_2}-\frac1{x_1}} -=\frac{\frac{cf'(c)-f(c)}{c^2}}{-\frac1{c^2}}=f(c)-cf'(c) -$$ -for some $c\in(x_1,x_2)$. For this, I believe we may need that $0\not\in[x_1,x_2]$.<|endoftext|> -TITLE: What is the intuition for permuting $n$ objects where $p$ are alike -QUESTION [6 upvotes]: If we have $n$ objects in which $p$ objects are alike and the rest are all different, then the number of permutations is $\frac{n!}{p!}$. Is there some intuition for why this is correct? Why do we have to divide by $p!$? - -REPLY [4 votes]: Suppose you have $n$ objects where $p$ are alike. If you treat them like they're all different, then the number of permutations is $n!$. But since there are $p$ objects alike this won't do. This is because the $n!$ will count the same permutation multiple times, because it treats the $p$ alike objects like they're different. So to avoid double counting we divide by the number of ways to arrange $p$ objects, hence $\frac{n!}{p!}$.<|endoftext|> -TITLE: Prove that $a\sqrt{a^2+bc}+b\sqrt{b^2+ac}+c\sqrt{c^2+ab}\geq\sqrt{2(a^2+b^2+c^2)(ab+ac+bc)}$ -QUESTION [44 upvotes]: Let $a$, $b$ and $c$ be non-negative numbers. Prove that: - $$a\sqrt{a^2+bc}+b\sqrt{b^2+ac}+c\sqrt{c^2+ab}\geq\sqrt{2(a^2+b^2+c^2)(ab+ac+bc)}.$$ - -I have a proof, but my proof is very ugly: -it's enough to prove a polynomial inequality of degree $15$. -I am looking for an easy proof, or maybe a long but smooth proof.
- -REPLY [6 votes]: $\sum\limits_{cyc}a\sqrt{a^2+bc}\geq\sqrt{2(a^2+b^2+c^2)(ab+ac+bc)}\Leftrightarrow$ -$\Leftrightarrow\sum\limits_{cyc}\left(a^4+a^2bc+2ab\sqrt{(a^2+bc)(b^2+ac)}\right)\geq\sum\limits_{cyc}(2a^3b+2a^3c+2a^2bc)\Leftrightarrow$ -$\sum\limits_{cyc}(a^4-a^3b-a^3c+a^2bc)\geq\sum\limits_{cyc}\left(a^3b+a^3c+2a^2bc-2ab\sqrt{(a^2+bc)(b^2+ac)}\right)\Leftrightarrow$ -$\Leftrightarrow\frac{1}{2}\sum\limits_{cyc}(a-b)^2(a+b-c)^2\geq\sum\limits_{cyc}ab\left(a^2+bc+b^2+ac-2\sqrt{(a^2+bc)(b^2+ac)}\right)\Leftrightarrow$ -$\Leftrightarrow\sum\limits_{cyc}(a-b)^2(a+b-c)^2\geq2\sum\limits_{cyc}ab\left(\sqrt{a^2+bc}-\sqrt{b^2+ac}\right)^2\Leftrightarrow$ -$\Leftrightarrow\sum\limits_{cyc}(a-b)^2(a+b-c)^2\left(1-\frac{2ab}{\left(\sqrt{a^2+bc}+\sqrt{b^2+ac}\right)^2}\right)\geq0$, which is obvious.<|endoftext|> -TITLE: Difference between "real functions" and "real-valued functions" -QUESTION [8 upvotes]: According to my textbook: - -A function which has either $\mathbb R$ or one of its subsets as its range is called a real valued function. Further, if its domain is also either $\mathbb R$ or a subset of $\mathbb R$, it is called a real function. - -As there are 2 definitions here, is there a difference between "real functions" and "real-valued functions"? -MathWorld says that a real function is also called a real-valued function. - -REPLY [6 votes]: According to these definitions, any function $\mathbb C\to\mathbb R$ (for example, $z\mapsto |z|$) will be a real-valued function but not a real function. -As your research shows, this usage is not universal -- there can't be much disagreement about what a "real-valued function" is, but how the words "real function" are used can depend on the author and field.<|endoftext|> -TITLE: Mathematical meaning of "may not" -QUESTION [5 upvotes]: Does "may not" mean that never allowed or sometimes not allowed? For example: the sequence may not converge. 
Does this mean that the sequence never converges or that there is no guarantee that the sequence converges? - -REPLY [5 votes]: In everyday English, the construction "$X$ may not $Y$" can mean either that it is not allowed for $X$ to do $Y$, or that it is possible/conceivable that $X$ does not do $Y$. One needs to look to context and semantics to find out which of these is the case. -In mathematics it is unusual to speak about permission at all -- mathematical objects do whatever they do whether we want them to or not -- so generally the only meaning that makes sense in a mathematical context is that it is possible that $X$ does not do $Y$.<|endoftext|> -TITLE: Prove that $\sum_{n=1}^{\infty}\left(\frac{1}{(8n-7)^3}-\frac{1}{(8n-1)^3}\right)=\left(\frac{1}{64}+\frac{3}{128\sqrt{2}}\right)\pi^3$ -QUESTION [5 upvotes]: Prove that $$\sum_{n=1}^{\infty}\left(\frac{1}{(8n-7)^3}-\frac{1}{(8n-1)^3}\right)=\left(\frac{1}{64}+\frac{3}{128\sqrt{2}}\right)\pi^3$$ -I don't have an idea about how to start. - -REPLY [2 votes]: This answer uses the integral representation of the polygamma function -$$ -\psi^{(2)}(z) = -\int_0^1 \frac{t^{z-1}}{1-t}\ln^2t dt. -$$ -First, using the expansion of $(1-t)^{-1}$, the polygamma function $\psi^{(2)}(z)$ can be written as -\begin{align} -\psi^{(2)}(z) &= -\sum_{n=0}^\infty \int_0^1 t^{n+z-1}\ln^2 tdt \cr - &= -\sum_{n=0}^\infty \int_0^\infty s^2 e^{-(n+z)s}ds -\qquad \qquad \qquad (t=e^{-s}) \cr -&= -2\sum_{n=0}^\infty \frac{1}{(n+z)^3}. -\end{align} -Therefore, -$$ -\sum_{n=1}^{\infty}\left(\frac{1}{(8n-7)^3}-\frac{1}{(8n-1)^3}\right) = \frac{1}{1024}\left(\psi^{(2)}\left(\frac{7}{8}\right) --\psi^{(2)}\left(\frac{1}{8}\right)\right). -$$ -Using the reflection relation -$$ -\psi^{(2)}(1-z)-\psi^{(2)}(z) = \pi\frac{d^2}{dz^2}\cot \pi z -$$ -with $z=1/8$, the sum can be written as -$$ -\left.\frac{\pi}{1024}\frac{d^2}{dz^2}\cot \pi z\right|_{z=1/8} -=\frac{\pi^3}{512}\left( -\cot\frac{\pi}{8}+\cot^3\frac{\pi}{8}\right).
-$$ -Finally, using trigonometric identities (half angle), we get $\cot \pi/8 = 1+\sqrt{2}$, and -$$ -\sum_{n=1}^{\infty}\left(\frac{1}{(8n-7)^3}-\frac{1}{(8n-1)^3}\right) -=\frac{\pi^3}{512}\Big( -1+\sqrt{2}+(1+\sqrt{2})^3 -\Big)=\frac{\pi^3}{256}(4+3\sqrt{2}). -$$<|endoftext|> -TITLE: Derivative of intersection volume -QUESTION [6 upvotes]: Let $K$ be a convex body in $\mathbb{R}^n$ and set $f:\textrm{SL}(n)\rightarrow \mathbb{R}$ as $f(T)=\textrm{Vol}_n (TB\cap K)$ where $B$ is the Euclidean unit ball. How can we find extreme points of $f$? -What I'm looking for is some Taylor expansion of $f$, so I may write for matrices such as $Q=I_n + \epsilon F$ something along the lines of $$f(Q)=f(I_n)+\epsilon f'(Q)$$ -where $f'$ is some sort of directional derivative of $f$. I believe this should amount to something like $f'(T)=\textrm{Vol}_{n-1} (\partial TB\cap K)$, but this is pure intuition, I'm not sure how this can be proven. - -REPLY [3 votes]: Let us first formalise the idea of directional derivatives of matrices. -Derivatives of variable matrices are usually expressed as Lie derivatives. The basic object is a Lie group, i.e., a differentiable manifold that has a group structure such that the group operations are differentiable. In our case this is the $(n^2-1)$-dimensional manifold $SL(n)$ consisting of all real $n\times n$ matrices with determinant $1.$ They are also precisely the linear transformations of $\mathbb R^n$ that preserve volume and orientation. -Lie theory considers $1$-parameter subgroups: differentiable homomorphisms from the simplest possible Lie group $(\mathbb R,+)$ to the Lie group under study: -$$T:\mathbb R\to SL(n):t\mapsto T_t,\hskip1cm T_{s+t}=T_sT_t.$$ -The derivatives at $0$ of all such possible subgroups form the tangent space of the differentiable manifold at the unit element $T_0=I$, which in this context is called the Lie algebra.
Our Lie algebra is denoted ${\mathfrak{sl}(n)}$ and it consists of all $n\times n$ matrices with trace $0.$ -The one-parameter group generated by a matrix $A\in\mathfrak{sl}(n)$ is given by the exponential mapping -$$\exp:\mathfrak{sl}(n)\to SL(n):A\mapsto\exp(A)=\sum_{i=0}^\infty\frac{A^i}{i!}$$ -which answers your question about a power series expansion. -The derivative of $\textrm{Vol}_n (T_tB\cap K)$ is more easily evaluated if we replace the indicator functions of the compact sets $B$ and $K$ with differentiable functions $\phi$ and $\psi$ that approximate them. So we are looking at the quantity -$$V_t=\int_{\mathbb R^n}\phi(T_t^{-1}x)\psi(x)dx$$ -Let us evaluate the derivative of $V_t$ at $t=0.$ -$$\eqalign{ -\frac{dV_t}{dt}(t=0)&=\frac{d}{dt}\int_{\mathbb R^n}\phi(T_t^{-1}x)\psi(x)dx\\ -&=\int_{\mathbb R^n}\frac{d\phi(T_t^{-1}x)}{dt}\psi(x)dx\\ -&=\int_{\mathbb R^n}\nabla\phi\cdot (-A)x\psi(x)dx\\ -}$$ -As $\phi$ approaches the indicator of $B$ its gradient converges to a distribution that is concentrated on $\partial B$ and models the inward normal $(-n)$ of $B$ (for the ball this normal exists everywhere). Thus we have -$$\eqalign{ -\frac{dV_t}{dt}(t=0)&=\int_{\partial B\cap K}Ax\cdot n\ dS\\ -}$$ -Alternatively, notice that $Ax$ is a divergence-free vector field (because the trace of $A$ is $0$) so the integral is also equal to -$$\eqalign{ -\frac{dV_t}{dt}(t=0)&=\int_{\partial K\cap B}Ax\cdot n\ dS\\ -}$$ -(the reason why these two integrals do not have opposite signs, as one would expect from partial integration, is that the interpretation of $n\ dS$ as an outward normal vector is different according to whether the 'outward' means out of $K$ or out of $B$; for the second integral one uses that $K$, being convex, has a normal almost everywhere on its boundary) -The first integral is taken over the same set $\partial B\cap K$ as in your intuitive idea, but with integrand $Ax\cdot n$ instead of $1$, so there is a close resemblance although they are not equal.
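As a quick numerical sanity check of the exponential mapping described above (an added illustration, not part of the original answer): the exponential of a trace-zero matrix should have determinant $1$, i.e. land in $SL(n)$. The sketch below truncates the power series in plain Python and uses the $2\times 2$ rotation generator as the trace-zero matrix $A$.

```python
# Check that exp maps a trace-zero matrix into SL(2): det(exp(A)) should be 1.
# exp(A) is computed from the truncated power series sum of A^i / i!.

def mat_mul(X, Y):
    return [[sum(X[i][k] * Y[k][j] for k in range(2)) for j in range(2)] for i in range(2)]

def mat_exp(A, terms=30):
    result = [[1.0, 0.0], [0.0, 1.0]]   # identity = A^0 / 0!
    power = [[1.0, 0.0], [0.0, 1.0]]
    fact = 1.0
    for i in range(1, terms):
        power = mat_mul(power, A)
        fact *= i
        result = [[result[r][c] + power[r][c] / fact for c in range(2)] for r in range(2)]
    return result

A = [[0.0, 1.0], [-1.0, 0.0]]           # trace zero: the rotation generator
T = mat_exp(A)
det = T[0][0] * T[1][1] - T[0][1] * T[1][0]
print(det)   # ≈ 1.0
```

Here $\exp(A)$ is the rotation by one radian and the determinant comes out as $1$ up to floating-point error; the same holds for any trace-zero $A$.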
-Higher derivatives of $V_t$ are not guaranteed to exist without additional conditions on the shape of $K.$ This can be intuitively understood by noticing that the first derivative is an integral where not only the integrand, but also the area of integration depends on $t.$ In fact the first derivative need not be a continuous function as can be seen in $2$ dimensions by letting $B=B(c=(5;0),r=1),$ $K$ the upper half of $B$ and $A=\left(\begin{matrix}0&1\\-1&0\end{matrix}\right)$ (generator of the rotations around the origin).<|endoftext|> -TITLE: Determine the Size of a Test Bank -QUESTION [8 upvotes]: Suppose you have two people take an exam which is composed of 30 questions which are randomly chosen from a test bank of n questions. -Person A and Person B both take different randomly generated instances of the exam, and then compare the question sets they were given. Person B notices that 7 of their 30 questions were repeated from Person A's question set. -Is there any way to deduce the likely total number of questions in the test bank given you know 7/30 of them were repeated in a second instance of the exam? Obviously you would not get an exact value, but could you determine a range of probabilities for each different size of the test bank? How would you go about solving this? -Thank you! - -REPLY [4 votes]: The minimum number in the pool must be $53$. Suppose there are $n$ in total. -So it's like if you had an urn with $n$ balls, $30$ are white and $n-30$ are red. Then you pull $30$ balls at random. You want to know how many of the balls you pulled are white. Or more specifically you want to know the probability that $7$ of the $30$ you pull are white. -Let $A$ be the number of white balls. Then $P(A=k)$ is hypergeometric and equal to -$\frac{{{30}\choose{k}}{ {n-30}\choose{30-k}}}{{n}\choose{30}}$ -So in your case: -$\frac{{{30}\choose{7}}{{n-30}\choose{23}}}{{n}\choose{30}}$ -This is the probability of an overlap of exactly $7$.
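The formula above is easy to tabulate. Here is a short script (an added sketch, not from the original answer) that computes this probability for candidate bank sizes $n$; the upper search bound of $400$ is an arbitrary choice.

```python
# Probability that exactly 7 of the 30 drawn questions overlap, for bank size n.
from math import comb

def overlap_prob(n, draws=30, hits=7):
    if n < 2 * draws - hits:          # need at least 53 questions in the bank
        return 0.0
    return comb(draws, hits) * comb(n - draws, draws - hits) / comb(n, draws)

best_n = max(range(53, 400), key=overlap_prob)
print(best_n, overlap_prob(best_n))
```

The maximum lands at $n = 128$, which agrees with the classical capture-recapture estimate $\lfloor 30 \cdot 30 / 7 \rfloor$.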
-You now need to find the $n$ that maximizes that probability. -If you start plugging in numbers (using a calculator) starting at $n=53$ you'll probably see that it goes up and then soon starts to go back down. Choose the max before it starts going back down. Shouldn't be too much larger than 53. I'm guessing somewhere around 100.<|endoftext|> -TITLE: Equivalent definitions of meromorphic function -QUESTION [5 upvotes]: My complex analysis course gives the following definition of a meromorphic function: -"A function $f\colon A \rightarrow \mathbb{C}$ with $A\subset \mathbb{C}$ is meromorphic if it is holomorphic on $A$ except for isolated singularities, which should be poles. " -Searching through the web, I found that some authors or websites define a meromorphic function as a quotient of two holomorphic functions e.g. the wikipedia page mentions this: https://en.wikipedia.org/wiki/Meromorphic_function -Could anybody give me a brief outline of the proof why these two definitions are equivalent, or give me an (internet-accessible) reference? -Thanks in advance! - -REPLY [2 votes]: The equivalence of the two "definitions" is a consequence of the Weierstrass Factorisation Theorem, which has a relatively long proof.<|endoftext|> -TITLE: When is a group isomorphic to the product of normal subgroup and quotient group? -QUESTION [6 upvotes]: Let $H$ be a normal subgroup in $G$. When is $G$ isomorphic to $H\times (G/H)$? -I think it's always true in the abelian case. Are there other rules? - -REPLY [16 votes]: It's not true in the abelian case. The smallest counterexample is $G = \mathbb{Z}_4, H = \mathbb{Z}_2$; the groups $\mathbb{Z}_4$ and $\mathbb{Z}_2 \times \mathbb{Z}_2$ are not isomorphic. -In general you need $H$ to be normal for this question to make sense. Then $G$ is an extension of $G/H$ by $H$. The classification of these is difficult in general and usually there are interesting extensions other than the trivial extension $H \times G/H$.
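A tiny computational check of the counterexample above (an added illustration): the multiset of element orders is an isomorphism invariant, and it already separates $\mathbb{Z}_4$ from $\mathbb{Z}_2 \times \mathbb{Z}_2$.

```python
# Element orders in Z_4 versus Z_2 x Z_2 (additive cyclic groups).

def order_mod(a, n):
    k, x = 1, a % n
    while x != 0:
        x = (x + a) % n
        k += 1
    return k

orders_z4 = sorted(order_mod(a, 4) for a in range(4))
# in a direct product the order of (a, b) is the lcm of the coordinate orders,
# which for Z_2 x Z_2 is just the maximum
orders_z2xz2 = sorted(max(order_mod(a, 2), order_mod(b, 2))
                      for a in range(2) for b in range(2))
print(orders_z4)     # [1, 2, 4, 4]
print(orders_z2xz2)  # [1, 2, 2, 2]
```

$\mathbb{Z}_4$ has elements of order $4$ while $\mathbb{Z}_2 \times \mathbb{Z}_2$ has none, so the two groups cannot be isomorphic.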
When all three groups are abelian the classification is in terms of the Ext group $\text{Ext}^1(G/H, H)$.<|endoftext|> -TITLE: Absolute Continuity of the sum of two Cantor random variables -QUESTION [5 upvotes]: If we have two independent random variables each having a Cantor distribution is there an easy way to see that the distribution of their sum is not absolutely continuous? -I am pretty sure that if we let $S_n$ be the set of positive integers having an $n$ digit ternary expansion (leading zeros included) with $n/2$ or more 1's, and let -$$T_n = \left\{\frac{2s+1}{3^n}:s\in S_n\right\}$$ - Then our random variable has more than a 50-50 chance of being within $3^{-n-1}$ of a member of $T_n$. As the number of intervals grows as $2^n$, and their width shrinks as $3^{-n}$, the measure of the whole thing goes to 0 as $n$ goes to infinity. (It took some handwaving and arithmetic to get here, so don't trust me.) -In the best of all possible worlds, there would be an argument that works for the absolute continuity of the sum of three (or any number) of independent Cantor Random variables. - -REPLY [3 votes]: The characteristic function of the Cantor distribution solves the functional equation -$$ -\varphi_C(t) = \frac12(e^{i2t/3} + 1) \varphi_C(t/3)\tag{1} -$$ -(there is an explicit formula with cosines, but this is enough for us). -In particular, $\varphi_C(\pi) = \varphi_C(3\pi) = \varphi_C(3^2\pi) = \dots$ If this value were zero, then we would get from (1) that $\varphi_C(3^{-n}\pi) = 0$, $n\ge 1$, which contradicts the continuity of $\varphi_C$ (otherwise, you can use the formula with cosines to argue that it is not zero). -So for each $n\ge 1$ we have $$0\neq \varphi_C(\pi)^n = \varphi_C(3\pi)^n = \varphi_C(3^2\pi)^n = \dots\tag{2}$$ If the sum of $n$ independent Cantor variables had a density, we would have $\varphi_C(t)^n\to 0$, $t\to\infty$ by the Riemann-Lebesgue lemma, contradicting (2).
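A quick numerical check of the key non-vanishing claim (added; it uses the explicit cosine product alluded to in the answer, $\varphi_C(t) = e^{it/2}\prod_{k\ge 1}\cos(t/3^k)$ for the Cantor distribution on $[0,1]$):

```python
# Evaluate the Cantor characteristic function via its (truncated) cosine product
# and confirm phi_C(pi) = phi_C(3*pi) != 0.
import cmath, math

def phi_cantor(t, terms=60):
    p = 1.0
    for k in range(1, terms + 1):
        p *= math.cos(t / 3**k)
    return cmath.exp(1j * t / 2) * p

v1, v2 = phi_cantor(math.pi), phi_cantor(3 * math.pi)
print(abs(v1), abs(v1 - v2))   # nonzero, and the difference is ~0
```

Both values agree and are bounded away from zero, as (2) requires.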
- -I prefer the argument using the recursion (1) to the one with the explicit formula, as it (or some clever modification) can be used to prove that some other random series does not have a density.<|endoftext|> -TITLE: Is it acceptable style to mix equalities and inequalities in one line -QUESTION [11 upvotes]: Is this considered bad style -$$2 = \sqrt{4} < \sqrt{16} = 4?$$ -It seems as though this is not strictly correct, since $2 = \sqrt{4}$ is a logical proposition which represents a boolean value (true or false). A boolean value cannot be less than $\sqrt{16}$. -On the other hand, I am sure that most people will correctly interpret this as shorthand for $2 = \sqrt{4},$ $\sqrt{4} < \sqrt{16},$ and $\sqrt{16} = 4$ - -REPLY [8 votes]: It's fine - people write that way all the time. But don't ever do this: $$1\le b=c>d.$$ - -Edit: Various people have commented, saying that there's nothing wrong with the above. Perhaps not; it bothers me, but I'm not going to insist that it's wrong. If I claimed I didn't actually say it was wrong, people would say I was being pedantic. -One person points out that if you write the above it certainly is wrong to deduce a relationship between $1$ and $d$. And that's the problem - in my experience in "beginning analysis" classes students who write things like what's above do tend to draw incorrect conclusions. So I'm going to just rephrase what I said: "Wrong or not, don't do that.
It's a bad idea."<|endoftext|> -TITLE: difference between the dual space of $H^1(\Omega)$ and the dual of $H^1_0(\Omega)$ -QUESTION [5 upvotes]: In the Partial Differential Equations by Evans (2nd edition p299), $H^{-1}(\Omega)$ denotes the dual space to $H^1_0(\Omega)$ where $\Omega$ is an open subset of $\mathbb{R}^n$ and $H^1(\Omega)=W^{1,2}(\Omega)$, $H^1_0(\Omega)=W^{1,2}_0(\Omega)$: -$$ -W^{1,2}_0(\Omega)=\overline{C_c^\infty(\Omega)}^{\|\cdot\|_{W^{1,2}(\Omega)}} -$$ -While in the Navier Stokes Equations by Constantin and Foias (p7), $H^{-1}(\Omega)$ denotes the dual space of $H^1(\Omega)$. -Let $X$ be the (continuous) dual of $H^1(\Omega)$ and $Y$ the dual of $H^1_0(\Omega)$. One has that $X\subset Y$. -Here is my question: - -Could somebody describe the difference between $X$ and $Y$? - -REPLY [3 votes]: Every element of $W^{m,p}(\Omega)'$ is the continuous extension of a distribution. However, the extensions are non-unique. By restricting oneself to $W^{m,p}_0(\Omega)$ the extensions are unique. This provides a characterization of the dual of $W^{m,p}_0(\Omega)$ as the space of all distributions $T \in D'(\Omega)$ of the form -$$T=\sum_{0\leq|\alpha|\leq m}(-1)^{|\alpha|}D^\alpha T_{v_\alpha}$$ -where $v_\alpha \in L^{p'}$ and $T_{v_\alpha}(\phi)=\langle \phi,v_\alpha \rangle$, the duality pairing. -(This is explained in detail in Adams's book on Sobolev Spaces, in the section on "Duality and the spaces $W^{-m,p'}(\Omega)$".)<|endoftext|> -TITLE: References request for prerequisites of topology and differential geometry -QUESTION [11 upvotes]: I am studying differential geometry and topology by myself. I am not a math major and do not have a rigorous background in analysis, manifolds, etc. I have background in intermediate linear algebra and multivariate calculus. To embark on the study, I delved into stackexchange past answers and other websites.
- -Teaching myself differential topology and differential geometry : expounds many good references -Introductory texts on manifolds -recommending books for intro to diff. geometry -Reference for Topology and Geometry -Good introductory book on Calculus on Manifolds : This answer suggests a 'gentle' book - Topology Without Tears. It seems a good book; however, this book does not cover everything I am looking for (and certain prerequisites as well). - -From these questions and their answers, I found that Milnor's Topology from the Differentiable Viewpoint, Lee's Introduction to Smooth Manifolds, Tu's An Introduction to Manifolds should work for self-study. I am not looking for a theorem and proof style book, but rather getting concepts such as topology, manifold, Lie groups, moving frames, etc. -When I start reading even the introductory chapters from books, I find that many books simply assume that the reader would already know concepts such as homomorphism, isomorphism, wedge product, cotangent space, etc. This assumption is not true for many readers (like me). As a result, it is not possible to move ahead without knowing this stuff. -I further found that there is a large amount of literature devoted to these topics. I found that a branch of mathematics, abstract algebra, deals with homomorphisms and the other listed topics. Learning everything is a daunting task; in fact, only some portion might be needed for my purpose. -Differential geometry and topology have diverse applications and many people, who are from different areas of the sciences and who are not pure mathematicians, may need to learn these areas. Can someone suggest a 'self contained' introductory book that will sufficiently cover the subject-matter? If such a book does not exist, can someone mention references that will (quickly and with sufficient depth) cover the assumed prerequisites for learning topology and differential geometry (homomorphism, isomorphism, wedge product, cotangent space, etc.)?
So that one does not have to entirely learn abstract algebra, which looks like a hard route. -Inputs are very much appreciated! -Edit: I believe that this is not a "personal advice" question as the links provided in the question are still valid questions and they belong to the category "reference request." - -REPLY [2 votes]: http://www.topologywithouttears.net/ -This website and its contents should be useful. -If you want to learn some basic algebra, but nothing too in-depth, take a look at Fraleigh's Abstract Algebra. -For linear algebra, Axler's "Linear Algebra Done Right" is a good introduction.<|endoftext|> -TITLE: Calculating the value of infinite limit of $2^{-n^2}/\sum_{k=n+1}^\infty 2^{-k^2}$ -QUESTION [6 upvotes]: How to solve the limit? - $$\lim_{n \to \infty}\frac{2^{-n^2}}{\sum_{k=n+1}^\infty 2^{-k^2}}$$ -My approach:- - I have used logarithmic test to test the denominator sum for convergence as follows:- -$$ \lim_{ k\to\infty}k\log \frac{u_n}{u_{n+1}}=\lim_{k\to \infty}k\log\frac{2^{-k^2}}{2^{-(k+1)^2}}=\lim_{k\to\infty}(k+2k^2)\log2=\infty$$ -Thus the infinite sum diverges. -The numerator term also diverges because it is a term from monotonically decreasing sequence which has no lower bound. -So overall solution is $$\infty$$ -Is my attempt correct or wrong?
- -For a different approach, note that the denominator can be estimated by -$$\sum_{k = n + 1}^{\infty} 2^{-k^2} \approx 2^{-n^2 - 2n}$$ and so your sequence is bounded below by something like -$$\frac{2^{-n^2}}{2^{-n^2 - 2n}} = 2^{2n}$$<|endoftext|> -TITLE: Proving $ z^n + \frac{1}{z^n} = 2\cos(n\theta) $ for $z = \cos (\theta) + i\sin(\theta)$ -QUESTION [9 upvotes]: Question: Prove that if $z = \cos (\theta) + i\sin(\theta)$, then -$$ z^n + {1\over z^n} = 2\cos(n\theta) $$ - - - -What I have attempted -If $$ z = \cos (\theta) + i\sin(\theta) $$ -then $$ z^n = \cos (n\theta) + i\sin(n\theta) $$ -$$ z^n + {1\over z^n} $$ -$$ \cos (n\theta) + i\sin(n\theta) + {1\over \cos (n\theta) + i\sin(n\theta)} $$ -$$ (\cos (n\theta) + i\sin(n\theta))\cdot(\cos (n\theta) + i\sin(n\theta)) + 1 $$ -$$ \left[ {(\cos (n\theta) + i\sin(n\theta))\cdot(\cos (n\theta) + i\sin(n\theta)) + 1\over \cos (n\theta) + i\sin(n\theta)} \right] $$ -$$ \left[ {\cos^2(n\theta) + 2i\sin(n\theta)\cos (n\theta) - \sin^2(n\theta) + 1\over \cos (n\theta) + i\sin(n\theta)} \right] $$ -Now this is where I am stuck.. I tried to use a double angle identity but I can't eliminate the imaginary part.. - -REPLY [6 votes]: If you want to continue in the way you started, you can simply rewrite $1=\cos^2(n\theta)+\sin^2(n\theta)$ and then simplify: -$$ -\frac{\cos^2(n\theta) + 2i\sin(n\theta)\cos (n\theta) - \sin^2(n\theta) + 1}{ \cos (n\theta) + i\sin(n\theta)}= -\frac{\cos^2(n\theta) + 2i\sin(n\theta)\cos (n\theta) - \sin^2(n\theta) + \cos^2(n\theta)+\sin^2(n\theta)}{ \cos (n\theta) + i\sin(n\theta)}= -\frac{2\cos^2(n\theta) + 2i\sin(n\theta)\cos (n\theta)}{ \cos (n\theta) + i\sin(n\theta)}= -\frac{2\cos(n\theta) (\cos(n\theta)+i\sin(n\theta))}{ \cos (n\theta) + i\sin(n\theta)}= \underline{\underline{2\cos(n\theta)}} -$$ -Which shows that you were almost there. 
(But it is still useful that you posted your question here - you have seen other approaches.)<|endoftext|> -TITLE: Why doesn't the dot product give you the coefficients of the linear combination? -QUESTION [13 upvotes]: So the setting is $\Bbb R^{2}$. -Let's pick two unit vectors that are linearly independent. Say: $v_{1}= \begin{bmatrix} \frac{1}{2} \\ \frac{\sqrt{3}}{2}\end{bmatrix}$ and $v_{2} = \begin{bmatrix} \frac{\sqrt{3}}{2} \\ \frac{1}{2}\end{bmatrix}$. -Now, let's pick another vector with length smaller than $1$, say, $a = \begin{bmatrix} \frac{1}{2} \\ 0\end{bmatrix}$. -I've been trying to understand the dot product geometrically, and what I've read online has led me to believe that $a \cdot v_{1}$ is the scalar $c$ so that $cv_{1}$ is the "shadow" of $a$ on $v_{1}$. Similarly, $a \cdot v_{2}$ is the scalar $d$ so that $dv_{2}$ is the "shadow" of $a$ on $v_{2}$. -If this is true, then it should be that $cv_{1} + dv_{2} = a$, right? But this isn't the case. -We have $a \cdot v_{1} = \frac{1}{4}$ and $a \cdot v_{2} = \frac{\sqrt{3}}{4}$. So $$cv_{1} + d v_{2} = \frac{1}{4}\begin{bmatrix} \frac{1}{2} \\ \frac{\sqrt{3}}{2}\end{bmatrix} + \frac{\sqrt{3}}{4}\begin{bmatrix} \frac{\sqrt{3}}{2} \\ \frac{1}{2}\end{bmatrix} = \begin{bmatrix} \frac{1}{2} \\ \frac{\sqrt{3}}{4}\end{bmatrix} \neq a.$$ -This means something is wrong with my understanding about the intuition of the dot product. I'm not sure what's wrong with it, though. Any help would be appreciated. - -REPLY [8 votes]: Your intuition is mostly correct, and you would probably have seen the flaws in your reasoning if you had drawn a picture like this: - -We have two linearly-independent unit vectors $\mathbf{U}$ and $\mathbf{V}$, and a third vector $\mathbf{W}$ (the green one). We want to write $\mathbf{W}$ as a linear combination of $\mathbf{U}$ and $\mathbf{V}$. The picture shows the projections $(\mathbf{W} \cdot \mathbf{U})\mathbf{U}$ (in red) and $(\mathbf{W} \cdot \mathbf{V})\mathbf{V}$ (in blue). 
These are the things you call "shadows", and that's a good name. As you can see, when you add them together using the parallelogram rule, you get the black vector, which is obviously not equal to the original vector $\mathbf{W}$. In other words -$$ -\mathbf{W} \ne (\mathbf{W} \cdot \mathbf{U})\mathbf{U} + (\mathbf{W} \cdot \mathbf{V})\mathbf{V} -$$ -You certainly can write $\mathbf{W}$ in the form -$\mathbf{W} = \alpha\mathbf{U} + \beta\mathbf{V}$, but $\alpha = \mathbf{W} \cdot \mathbf{U}$ and $\beta = \mathbf{W} \cdot \mathbf{V}$ are not the correct coefficients unless $\mathbf{U}$ and $\mathbf{V}$ are orthogonal. And you can even calculate the coefficients $\alpha$ and $\beta$ using dot products, as you expected. It turns out that -$$ -\mathbf{W} = (\mathbf{W} \cdot \bar{\mathbf{U}})\mathbf{U} + (\mathbf{W} \cdot \bar{\mathbf{V}})\mathbf{V} -$$ -where $(\bar{\mathbf{U}}, \bar{\mathbf{V}})$ is the so-called dual basis of $(\mathbf{U}, \mathbf{V})$. You can learn more here.<|endoftext|> -TITLE: Zeroth homotopy group: what exactly is it? -QUESTION [8 upvotes]: What are the elements in the zeroth homotopy group? Also, why does $\pi_0(X)=0$ imply that the space is path-connected? -Thanks for the help. I find that zeroth homotopy groups are rarely discussed in literature, hence having some trouble understanding it. I do understand that the elements in $\pi_1(X)$ are loops (homotopy classes of loops), trying to see the relation to $\pi_0$. - -REPLY [2 votes]: Just a slight rephrase: you can consider $\pi_0(X)$ as the quotient set of the set of all points in $X$ where you mod out by the equivalence relation that identifies two points if there is a path between them.<|endoftext|> -TITLE: Functor of section over U is left-exact -QUESTION [5 upvotes]: I am trying to prove $\Gamma(U,\cdot)$ is a left-exact functor $\mathfrak{Ab}(X)\to\mathfrak{Ab}$. This is Exercise 1.8 in Hartshorne, Chapter II or exercise 2.5.F of Vakil's notes (Nov 28,2015 version). 
-Since monomorphism of sheaves implies injection on open sets we have the exactness at the left place. For the middle place I really do not have idea. Actually I have a very silly question, it seems to me that if kernel of morphism of sheaves equals to the image of a morphism, then the corresponding kernel and image on some U should also be the same. But in this way the functor should be exact. So why this is not true? - -REPLY [4 votes]: Method 1: Use Exercise II.1.4b to relate the image presheaf and its sheafification. Then by exactness of the original sequence, the isomorphism follows on sections of $U$. -Method 2: Proceed just as in proof of Proposition II.1.1. -Let $\varphi: \mathscr{F} \to \mathscr{F''}$ and $\psi : \mathscr{F'} \to \mathscr{F}$. We want to show that the kernel and images are equal after applying $\Gamma (U, \cdot)$. This can be checked on the stalks. -In one direction, we have that -$$(\varphi _U \circ \psi _U (s))_P = \phi _P \circ \psi _ P (s_P) = 0$$ -By the sheaf condition, this shows that $\phi _U \circ \psi _ U = 0$. -Conversely, let's suppose $t \in \textrm{ker } \varphi _U$, i.e. $\varphi _U (t) = 0$. Again we know that the stalks is an exact sequence so for each $P \in U$, there is a $s_P$ such that $\psi _P (s_P) = t_P$. Let's represent each $s_P = (V_P , s(P))$ where $s(P) \in \mathscr{F'} (V_P)$. Now, $\psi (s(P))$ and $t \mid _{V_P}$ are elements of $\mathscr{F} (V_P)$ whose stalks at $P$ are the same. Thus, WLOG, assume $\psi (s(P)) = t \mid _{V_P}$ in $\mathscr{F} (V_P)$. $U$ is covered by the $V_P$ and there is a corresponding $s(P)$ on each $V_P$ which, on intersections, are both sent by $\psi$ to the corresponding $t$ on the intersection. Here, we apply injection (exactness at left place, which you showed in your OP) which allows us to glue via sheaf condition to a section $s \in \mathscr{F'} (U)$ such that $s \mid _ {V_P} = s(P)$ for each $P$. 
Verify that $\psi (s) = t$ and we're done by applying the sheaf property and the construction to $\psi (s) - t$.<|endoftext|> -TITLE: What is the significance of this identity relating to partitions? -QUESTION [7 upvotes]: I was watching a talk given by Prof. Richard Kenyon of Brown University, and I was confused by an equation briefly displayed at the bottom of one slide at 15:05 in the video. -$$1 + x + x^3 + x^6 + \dots + x^{n(n-1)/2} + \dots = \left(\frac{1-x^2}{1-x}\right) \left(\frac{1-x^4}{1-x^3}\right) \left(\frac{1-x^6}{1-x^5}\right) \dots$$ -On the left we have the power series $\sum_{n=0}^{\infty}x^{T_n}$. On the right we have some sort of infinite product. Can anyone explain what the meaning of this identity is, in relation to integer partitions? - -Background: The speaker starts by discussing the generating function of the partition function, -$$P(x) = \prod_{k=1}^\infty \left(\frac {1}{1-x^k} \right)$$ -He then uses the idea behind this generating function to derive a fun identity: -$$(1+x)(1+x^2)(1+x^3)\dots = \frac{1}{(1-x)(1-x^3)(1-x^5)\dots}$$ -which shows that the number of partitions into unequal parts equals the number of partitions into odd parts. -This is the context for the above identity which I fell short of understanding. - -Also: I did a bit of searching and came across a 1991 paper by Ono, Robins & Wahl concerning partitions using triangle numbers, which might be related. -This paper proves that -$$ \sum_{n=1}^{\infty}{x^{T_n}} = \prod_{n=1}^{\infty}{\frac{(1-x^{2n})^2}{1-x^n}}$$ -which shows that the identity is true. - -REPLY [3 votes]: If the sides are divided by the numerator of the right, the formula is -$$\frac{\sum x^{T_n}}{(1-x^2)(1-x^4)\cdots}=\frac{1}{(1-x)(1-x^3)\cdots}. \tag{1}$$ -Here the left side with numerator replaced by $1$ represents partitions into even parts, while the right side partitions into odd parts. 
Putting the numerator back, the left side represents representations of a number by a single triangular number plus a sum of even parts, while the right side again represents representations by sums of odd parts. -So in this form, the identity says the number of ways to write $n$ as a triangular number plus a sum of even parts is the same as the number of ways to write $n$ as a sum of odd parts. Note the single triangular number involved here may be $0$ (which is $T_0$). I didn't know the equality of these two counts, but tried it on some small numbers and it seems to be so. -A slight correction and better explanation of the left side count: Since the taylor series of $1/[(1+x^2)(1+x^4)\cdots$ starts out with the term $1\cdot x^0,$ it is clear that series considers that $0$ is indeed the (only) partition of $0$ into even parts. [The same happens in the generating function for unrestricted partitions.] So when this series is multiplied by the numerator in (1), the result is counting, for a given $n,$ ordered pairs consisting of a triangular number $T$ (which may be zero) followed by a partition of $n-T$ into even parts, and notation such as $(6),2$ (for $n=8$) means it is the entity for which $T$ has been taken to be 6 and then $n-T=8-6=2$ is to be partitioned into even part(s), here the extra 2 after the (6) of $(6),2.$ Because one must "tag" these entitities by the triangular number used, this is a different entity than the $(0),2,6$ in the count. They come from different powers of $x$ in the numerator of (1). [I think somewhere in the answer or in comments I had erroneously insisted that the partition into even parts which follows the triangular number had to be positive even parts, but this is so only when $n$ itself is not triangular.]<|endoftext|> -TITLE: convergence in measure topology -QUESTION [6 upvotes]: I'm attending a course on measure theory this semester. 
While proposing different kinds of convergence (in measure, almost everywhere and in $L^{p}$), our professor stressed (and proved) the fact that convergence almost everywhere is not topological, but claimed that convergence in measure is. -As pointed out in a few questions on this site and wikipedia (1, 2, 3), in the case where $(\Omega, \mathcal{F}, \mu)$ is a finite measure space, convergence in measure can be described by a pseudometric (hence a topology). However, I haven't found an answer to why at least a topology should exist in the case where $\mu$ is an arbitrary measure. Wikipedia (3) claims that their pseudometric works for arbitrary measures, but their proposed function can take $\infty$ as a value, which I believe isn't allowed for metrics. -To sum up: let $(\Omega, \mathcal{F}, \mu)$ be a (not necessarily finite) measure space, does there exist a topology $\mathcal{T}$ on the set of measurable functions $f : \Omega \to \mathbb{R}$ such that a sequence of measurable functions $(f_{n})_{n}$ converges to a measurable function $f$ in measure if and only if it converges to $f$ in the topology $\mathcal{T}$? Extra: Is this topology unique? -Thank you for your help! I've had introductory courses in topology (metric spaces), Banach (Hilbert) spaces and now measure theory. - -REPLY [4 votes]: The convergence in measure is not just induced by a topology, it is in fact induced by a metric! Admittedly, it is not at all obvious how to come up with it, but here it is: -$$d(f,g) := \inf_{\delta > 0} \big(\mu(|f-g|>\delta) + \delta\big)$$ -(I found it a while back in this book) -This is, again, in general a $[0,\infty]$-valued metric, but this is not a problem as previously noted because you could just as well use $d':=d\wedge 1$ or $d'':=\frac{d}{1+d}$ to get the same topology. 
-Just a side note: What is quite neat is that you can know that there must be some metric even without having a specific candidate, because the space of measurable functions with convergence in measure is a first countable topological vector space and those are all metrisable. -EDIT: As to the uniqueness question: It was already noted that convergence of sequences alone does not uniquely determine a topology. Not unless you add other properties. For example there is a unique metrisable/quasi-metrisable/first-countable topology that induces exactly this convergence of sequences.<|endoftext|> -TITLE: Another way to evaluate $\int_0^{\infty} x^2 e^{-x^2}dx$ -QUESTION [5 upvotes]: In Stewart's Calculus book I came across the following Gaussian integral. - -Using $\int_{-\infty}^{\infty}\exp{(-x^2)}dx = \frac{\sqrt{\pi}}{2}$ evaluate - $$ -\int_0^{\infty} x^2 e^{-x^2}dx -$$ - -I read in this pdf that -$$ - \int_{-\infty}^{\infty} e^{-ax^2}dx =\sqrt{ \frac{\pi}{a}} -$$ -and how to use differentiation under the integral sign to evaluate it (recreated below for convenience). -$$ -\begin{align*} -I(a) &= \int_{-\infty}^{\infty} e^{-ax^2} dx =\sqrt{ \frac{\pi}{a}} \\ I'(a)&= -\int_{-\infty}^{\infty} x^2 e^{-ax^2}dx = -\frac{1}{2}\sqrt{\pi} a^{-3/2} \\I'(1) &= \frac{\sqrt{\pi}}{2} -\end{align*} -$$ -Using the results above (and Wolfram Alpha), I was able to conclude that -$$ -\int_0^{\infty} x^2 e^{-x^2}dx = \frac{\sqrt{\pi}}{4} -$$ -however, I was wondering if there is some substitution or an another way to evaluate the aforementioned integral seeing as Leibniz's rule is not mentioned anywhere in the chapter. 
- -REPLY [2 votes]: For $$\int\limits_{0}^{\infty} x^{2} \mathrm{e}^{-x^{2}} \mathrm{d} x$$ let $y = x^{2}$ -\begin{equation} -\int\limits_{0}^{\infty} x^{2} \mathrm{e}^{-x^{2}} \mathrm{d} x = -\frac{1}{2} \int\limits_{0}^{\infty} \mathrm{e}^{-y} y^{\frac{1}{2}} \mathrm{d} y - = \frac{1}{2} \Gamma\left(\frac{3}{2}\right) = \frac{\sqrt{\pi}}{4} -\end{equation}<|endoftext|> -TITLE: Does a connected countable metric space exist? -QUESTION [7 upvotes]: I'm wondering if a connected countable metric space exists. -My intuition is telling me no. - -For a space to be connected it must not be the union of 2 or more open - disjoint sets. -For a set $M$ to be countable there must exist an injective function - from $\mathbb{N} \rightarrow M$. - -I know the Integers and Rationals clearly are not connected. Consider the set $\mathbb{R}$, if we eliminated a single irrational point then that would disconnect the set. -A similar problem arises if we consider $\mathbb{Q}^2$ -In any dimension it seems by eliminating all the irrational numbers the set will become disconnected. And since $\mathbb{R}$ is uncountable there cannot exist a connected space that is countable. -My problem is formally proving this. Though a single Yes/No answer will suffice, I would like to know both the intuition and the proof behind this. -Thanks for any help. -I haven't looked at cofinite topologies (which I happened to see online). I also don't see where the Metric might affect the countability of a space, if we are primarily concerned with an injective function into the set alone. - -REPLY [20 votes]: Fix $x_0 \in X $. Then, the continuous(!) map -$$ -\Phi: X \to \Bbb {R}, x \mapsto d (x,x_0) -$$ -has an (at most) countable, connected image. -Thus, the image is a whole (nontrivial!, if $X $ has more than one point) interval, in contradiction to being countable. 
-EDIT: On a related note, this even show's that every connected metric space with more than one point has at least the cardinality of the continuum.<|endoftext|> -TITLE: Chinese remainder theorem as sheaf condition? -QUESTION [10 upvotes]: The chinese remainder theorem in its usual version says that for a finite set of pairwise comaximal ideals $R/\bigcap _jI_j\cong \prod _j R/I_j$. -In the binary case, the following general statement holds without conditions on the ideals $R/(I\cap J)\cong R/I\times _{R/I+J}R/J$. In this question I wanted to generalize the more general version to several ideals, but got stuck and only contrived an ad hoc justification for pairwise comaximality. -A few weeks ago I finally thought of $R/(I\cap J)\cong R/I\times _{R/I+J}R/J$ as a sheaf condition for a cover by two elements. Then I told myself the diagram below must be an equalizer, because pairwise comaximality pops out of it so naturally. -$$R/\bigcap _jI_j\rightarrow \prod _j R/I_j \rightrightarrows \prod _{i,j}R/(I_i+I_j)$$ -Several satisfied days later I stumbled upon this comment which to my dismay says the diagram above fails to be an equalizer for more than three ideals. But it just seems so perfect... -Can anyone give some counterexamples which show why the diagram above is not an equalizer and explain why things fail geometrically? - -REPLY [8 votes]: Each ideal $I\subset R$ corresponds to a closed subscheme of $\mathrm{Spec}(R)$, intersection of ideals corresponds to union of subschemes, and sum of ideals corresponds to intersection of subschemes. If a finite collection of closed subschemes is an open cover of their union (which is the case if and only if the union is disjoint), then indeed your diagram is an equalizer, precisely because of the sheaf condition. But in general the sheaf condition won't hold for coverings by closed sets. -A counterexample: let $k$ be a field, $R=k[x,y]$, $I_1=(x)$, $I_2=(y)$, $I_3=(x-y)$. 
Then $\mathrm{Spec}(R)$ is the affine plane and the ideals $I_1$, $I_2$, and $I_3$ correspond to the lines $L_1:x=0$, $L_2:y=0$, and $L_3:x=y$. The statement that your diagram is an equalizer is equivalent to the statement
-
-the data of a regular function on $L_1\cup L_2\cup L_3$ is the same as the data of a triple of regular functions $f_1$, $f_2$, $f_3$ on $L_1$, $L_2$, $L_3$ resp., such that all three functions take the same value at the origin.
-
-This is false because we need an extra condition on the functions $f_1$, $f_2$, $f_3$, namely for these functions to determine a function on $L_1\cup L_2\cup L_3$, the derivative of $f_3$ in the direction $(1,1)$ needs to equal the sum of the derivative of $f_1$ in the direction $(0,1)$ and the derivative of $f_2$ in the direction $(1,0)$.
-
-The data of $\prod R/I_j$ and the maps to $\prod R/(I_i+I_j)$ corresponds to the data of a bunch of closed subschemes and their pairwise intersections. This is not enough to determine a scheme. For instance, the union $L_1\cup L_2\cup L_3$ is not isomorphic to the union of the coordinate axes in $\mathbb{A}^3$, because the tangent space to the former at the singular point is $2$-dimensional, while the tangent space to the latter is $3$-dimensional. But both schemes are the union of three lines, such that the pairwise intersection is a single point. For the example $I_1,I_2,I_3\subset R$ above, the equalizer of your diagram is the coordinate ring of the union of the coordinate axes in $\mathbb{A}^3$, rather than of $L_1\cup L_2\cup L_3$.<|endoftext|>
-TITLE: Sum and Product of continued fraction expansion?
-QUESTION [7 upvotes]: Given the continued fraction expansions of two real numbers $a,b \in \mathbb R$, is there an "easy" way to get the continued fraction expansion of $a+b$ or $a\cdot b$?
-If $a,b$ are rational it is easy, as you can convert them back to the 'rational' form, add or multiply, and then convert them back to continued fraction form. 
But is there a way that requires no conversion?
-Other than that I found no clues as to whether there is an "easy" way to do it for irrational numbers.
-
-REPLY [6 votes]: Gosper found efficient ways to do arithmetic with continued fractions (without converting them to ordinary fractions or decimals). Here is a page with links to Gosper's work, but also with an exposition of Gosper's methods.
-See also this older m.se question, Faster arithmetic with finite continued fractions<|endoftext|>
-TITLE: Is every converging sequence the sum of a constant sequence and a null sequence?
-QUESTION [7 upvotes]: Let $a_n$ be any sequence converging to $a$ when $n \to \infty$.
-Can you rewrite $a_n$ so that it is the sum of two other sequences? $$a_n=b_n + c_n,$$ with $b_n=b$ for every $n \in \mathbb{N}$ and $c_n\to 0$ as $n\to \infty$.
-In other words: Is a converging sequence ($a_n$) actually a null sequence ($c_n$) "shifted" by a constant ($b$)?
-Or is there any counterexample where one is not allowed to do so?
-
-REPLY [3 votes]: For a constant $b$ any number $a$ (whether it's a term in a sequence or not) can be written as $a = b + c$ where $c = a - b$ so any sequence $\{a_n\}$ can be written as $\{b + c_n\}$ where $c_n = a_n - b$ and in particular if $\lim a_n = a$ then the sequence can be written as $\{a + c_n\}$ where $c_n = a_n - a$. And clearly $\lim \{a_n\} = \lim \{a + c_n\} = a$.
-So your question boils down to: does $\lim\{b + c_n\} = b + \lim\{c_n\}$? And therefore if $b = a = \lim a_n$, does $\lim c_n = 0$?
-This should be a basic proposition early on in the study of convergent sequences and the answer is: yes.
-$|a - a_n| = |(a -b) - (a_n - b)| = |(a - b) - c_n|$. So whatever $\epsilon$, $N$, $n > N$ crap that you can say about $a$ and $a_n$ can also be said about $(a-b)$ and $c_n$. 
-So if $c_n = a_n - b$ then $\{a_n\} \rightarrow a \iff \{c_n\} \rightarrow (a - b)$.<|endoftext|>
-TITLE: Is there a branch of Mathematics which connects Calculus and Discrete Math / Number Theory?
-QUESTION [29 upvotes]: I am asking this question out of both curiosity and frustration. There are many problems in computer science which require you to perform operations on a finite set of numbers. It always bothers me that there is no way of mapping this discrete problem onto a continuous one and using the power of calculus to solve it, finally extracting out the answer of the original discrete problem.
-Is this not possible? If it is, is there a branch of mathematics which deals with precisely this? Are we confined to solving such problems only by thinking of clever algorithmic techniques?
-Thank you for taking the time to answer, and as always, apologies if the question is silly.
-
-REPLY [2 votes]: See the book Concrete Mathematics by Graham, Knuth, and Patashnik (http://www.amazon.com/Concrete-Mathematics-Foundation-Computer-Science/dp/0201558025) for a wonderful exposition of connections between CONtinuous and disCRETE mathematics, including number theory.<|endoftext|>
-TITLE: Find the summation $\frac{1}{1!}+\frac{1+2}{2!}+\frac{1+2+3}{3!}+ \cdots$
-QUESTION [11 upvotes]: What is the value of the following sum?
-
-$$\frac{1}{1!}+\frac{1+2}{2!}+\frac{1+2+3}{3!}+ \cdots$$
-
-The possible answers are:
-
-A. $e$
-B. $\frac{e}{2}$
-C. $\frac{3e}{2}$
-D. $1 + \frac{e}{2}$
-
-I tried to expand the options using the series representation of $e$ and putting in $x=1$, but I couldn't get back the original series. Any ideas? 
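A quick numerical sanity check separates the four options; the following Python sketch (an illustration only, with an arbitrary cutoff at $k=29$) computes the partial sums of $\sum_{k\geq 1}\frac{k(k+1)}{2\,k!}$ and picks the closest candidate:

```python
import math

# Partial sum of sum_{k>=1} (1+2+...+k)/k! = sum_{k>=1} k*(k+1)/(2*k!);
# the factorial in the denominator makes the tail negligible very quickly.
s = sum(k * (k + 1) / (2 * math.factorial(k)) for k in range(1, 30))

# Compare against the four candidate answers from the question.
candidates = {
    "e": math.e,
    "e/2": math.e / 2,
    "3e/2": 3 * math.e / 2,
    "1 + e/2": 1 + math.e / 2,
}
best = min(candidates, key=lambda name: abs(candidates[name] - s))
print(s, best)  # partial sum is ~4.0774, closest to 3e/2
```

The partial sums settle near $4.0774\approx\frac{3e}{2}$, in agreement with the derivation in the reply below.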
-
-REPLY [8 votes]: $$\frac{k(k+1)}{2k!}=\frac{k(k-1)+2k}{2k!}=\frac1{2(k-2)!}+\frac1{(k-1)!}$$
-Hence the sum is
-$$\frac e2+e.$$
-
-Note that the first summation in the RHS must start at $k=2$ (or use $1/(-1)!:=0$).<|endoftext|>
-TITLE: Finding the minimal $n$ such that a given finite group $G$ is a subgroup of $S_n$
-QUESTION [15 upvotes]: It is a theorem that, for every finite group $G$, there exists an $n$ such that $G \leq S_n$. Given a particular group, is there any way to determine or bound the smallest $n$ such that this occurs? The representation of the group is up to you, though bounds that use less information are preferred.
-If this problem is too hard, specific special cases are also interesting, as are bounds on $n$.
-
-REPLY [2 votes]: Given a finite group $G$, consider $H$ to be a subgroup such that it contains no normal subgroup of $G$, and consider such $H$ of maximum possible order.
-Then $G$ permutes the cosets of $H$ by left multiplication, and hence this gives a homomorphism from $G$ to the permutation group of the cosets $\{gH\colon g\in G\}$. The kernel of this homomorphism is the intersection of all the conjugates of $H$; but, by our assumption, it should be trivial. Hence $G$ embeds in the permutation group on $|G/H|$ letters.
-This also explains why $H$ is chosen at the beginning to be of maximum order.
-This works, as can be seen from the examples in other answers.<|endoftext|>
-TITLE: Sectional curvature in a paraboloid is always positive.
-QUESTION [5 upvotes]: I'm working on Lee's book ''Riemannian Manifolds: An Introduction to Curvature''. One exercise (11.1) asks one to show that the paraboloid given by the equation $y=x_1^2+...+x_n^2$ has positive sectional curvature everywhere.
-It is known that $K(\pi)=\frac{\langle R(u,v)v,u \rangle}{\langle u,u \rangle \langle v,v \rangle - \langle u,v \rangle ^2}$ where $\pi$ is the plane generated by $u,v$. 
Cauchy–Schwarz ensures the denominator is always positive, so it would be enough to check that $\langle R(u,v)v,u \rangle$ is positive at any point for any pair of tangent vectors. That is equivalent to checking that $\langle\nabla_X \nabla_Y Y - \nabla_Y \nabla_X Y -\nabla_{[X,Y]}Y,X\rangle$ is positive, where $\nabla$ is the Levi-Civita connection. The first Christoffel identity (here: https://en.wikipedia.org/wiki/Fundamental_theorem_of_Riemannian_geometry) allows one to compute the Levi-Civita connection via Christoffel symbols. I can compute the Christoffel symbols by finding the metric of the paraboloid given by the evident chart (just by pull-back).
-The question is whether there is an easier way to check that the paraboloid has positive sectional curvature everywhere, or whether one has to follow the path I talked about; I do not find it complicated, but it requires some work with many calculations.
-
-REPLY [4 votes]: The key is using that the paraboloid is a submanifold of $\mathbb{R}^{n+1}$ combined with the Gauss Equation.
-Because we are working on a Euclidean submanifold (as we posted in the comments), if $u=\alpha_1 e_1+...+\alpha_{n+1} e_{n+1}$, $v=\beta_1 e_1+...+\beta_{n+1} e_{n+1}$ and $N$ is a unit normal vector field (for example, $(2x_1,...,2x_n,-1)/f$, where $f$ is the norm of the numerator), the shape operator gives $S(u)=\nabla_u N=(2 \alpha_1,..., 2\alpha_n,0)/f + u(1/f)fN$ where $\nabla$ is the Levi-Civita connection in $\mathbb{R}^{n+1}$ (so the Christoffel symbols are $0$). Now $K(\pi)$ has the same sign as (we also use $\langle N,u \rangle = \langle N,v\rangle=0$) $$\langle S(u),u \rangle \langle S(v),v \rangle - \langle S(u),v \rangle^2 =4(\alpha_1^2+...+\alpha_n^2)(\beta_1^2+...+\beta_n^2)-4(\alpha_1 \beta_1+...+\alpha_n \beta_n)^2 \geq 0 $$
-Finally we should show that equality is impossible. If the equality holds then $u$ and $v$ are proportional in the first $n$ coordinates. 
The condition $\langle u, N\rangle=\langle v, N\rangle=0$ implies that, in that case, the last coordinates are in the same proportion too (because the last coordinate of $N$ never vanishes). But in that case $u$ and $v$ are proportional and cannot generate a plane.<|endoftext|>
-TITLE: How to factor $x^6+x^5-3\,x^4+2\,x^3+3\,x^2+x-1$ by hand?
-QUESTION [10 upvotes]: I know that
-$x^6+x^5-3\,x^4+2\,x^3+3\,x^2+x-1 = (x^4-x^3+x+1)(x^2+2x-1)$
-but I would not know how to do that factoring without software.
-Any ideas? Thank you!
-
-REPLY [6 votes]: The equation is palindromic (well, almost), so:
-We can write it as $$x^3\left[x^3+x^2-3x+2+\frac 3x+\frac{1}{x^2}-\frac{1}{x^3}\right]$$
-$$=x^3\left[\left(x^3-3x+\frac 3x-\frac{1}{x^3}\right)+\left(\left(x-\frac 1x\right)^2+2\right)+2\right]$$
-$$=x^3\left[u^3+u^2+4\right],$$ where $u=x-\frac 1x$
-And hence the factorization is $$x^3(u+2)(u^2-u+2)$$ which will give us the expected answer.<|endoftext|>
-TITLE: Application of Fourier Series and Stone Weierstrass Approximation Theorem
-QUESTION [5 upvotes]: If $f \in C[0, \pi]$ and $\int_0^\pi f(x) \cos nx\, \text{d}x = 0$ for all $n\geq 0$, then $f = 0$
-
-
-Define $ g(x) = \begin{cases}
 f(-x) & \text{if } -\pi \leq x < 0;\\
 f(x) & \text{if } 0 \leq x \leq \pi. \end{cases}$
-which is an even function
-So $g(x)$ can be written as $\sum_{n=0}^\infty a_n\cos (nx)$ for all $x \in [-\pi , \pi]$
-$$\therefore \int_0^\pi f^2(x) \, dx = \int_0^\pi f(x) \left(\sum_{n=0}^\infty a_n\cos(nx)\right) \, dx = \sum_{n=0}^\infty a_n\int_0^\pi f(x) \cos(nx) \, dx = 0$$
-$$\therefore \int_0^\pi f^2(x) \, dx =0,$$ we get $f(x) = 0$
-I think this part is correct.
-$$\text{If }f \in C[0 , \pi] \text{ and } \int_0^\pi x^n f(x) \, dx = 0 \text{ for all } n\geq0, \text{ then } f = 0$$
-Since $f$ is continuous on a closed interval, by the Stone–Weierstrass Approximation Theorem, for each $\epsilon > 0$ there is a polynomial $p(x)$ such that $|f(x) - p(x)| < \epsilon$. 
-I want to show that $\int_0^\pi f^2(x) \, dx = 0$; please help me with how to proceed further, and check the first part. Any help would be appreciated.
-
-REPLY [2 votes]: I am usually too lazy to critique answers even when the OP asks for it. It is easier to just snipe
-with a comment or two. But @zhw has shamed me into really looking at the answer and maybe offering
-a bit of a tutorial. Since I gave a sloppy comment I will make amends here, I hope.
-Here are some things that you know or should know judging from the title of the problem and your answer.
-
-A continuous even function $f$ on $[-\pi,\pi]$ has a Fourier series of the form
-$$\sum_{n=0}^\infty a_n\cos nx$$ but this series need not converge pointwise or uniformly to $f$ unless you have
-stronger assumptions on $f$. [Here you don't. A course on Fourier series may not prove this negative comment, just proving the positive comment assuming, say, that $f$ is also of bounded variation or continuously differentiable. It is essential to know why 19th century mathematicians had to fuss so much to get convergence.]
-A continuous even function $f$ on $[-\pi,\pi]$ has a uniform approximation
-by a trigonometric polynomial
-of the form
-$$\sum_{n=0}^N a_n\cos nx$$ meaning that, for every $\epsilon>0$ you can select at least one such polynomial so that
-$$\left| \sum_{n=0}^N a_n\cos nx -f(x)\right| < \epsilon$$
-for all $-\pi\leq x \leq \pi$. [Fejer's theorem supplies this as does the Stone-Weierstrass theorem.]
-[Dangerous curve ahead!] If you change $\epsilon$ you may have to choose an entirely different polynomial, so the $a_n$ might change. Thus statement #2 does not give you a series converging to $f$, it gives you a sequence of trigonometric polynomials converging uniformly to $f$. In other words Stone-Weierstrass or Fejer's theorem does not give
-$$f(x)=\sum_{n=0}^\infty a_n\cos nx$$
-either pointwise or uniformly. Don't write it! 
-If $f(x)=\sum_{n=0}^\infty f_n(x)$ on $[a,b]$ you cannot write $\int_a^b f =\sum_{n=0}^\infty \int_a^b f_n $ without claiming uniform convergence (or some more advanced property).
-If $f(x)=\lim_{n\to \infty} f_n(x)$ on $[a,b]$ you can write $$\int_a^b fh =\lim_{n\to \infty} \int_a^b f_nh $$
-for any continuous $h$
-if you are sure you have uniform convergence (or some more advanced property).
-
-Now we are in a position to tidy up your solution. Your ideas were fine, just missing some caution. You tried to use a series but that fails--just use a sequence instead!
-For the first problem use Stone-Weierstrass to select an appropriate sequence of cosine polynomials $p_n \to g$ uniformly on $[-\pi,\pi]$, where $g$ is your even extension of $f$. Check that
-$$\int_{-\pi}^\pi g(x)p_n(x)\,dx =0$$
-for each $n$ and that
-$$\lim_{n\to \infty} \int_{-\pi}^\pi g(x)p_n(x)\,dx =\int_{-\pi}^\pi [g(x)]^2\,dx.$$
-Conclude that $g=0$, and hence $f=0$, since it is continuous.
-For the second problem use Stone-Weierstrass to select an appropriate sequence of polynomials $p_n \to f$ uniformly on $[0,\pi]$. Check that
 $$\int_{0}^{\pi} f(x)p_n(x)\,dx =0$$
 for each $n$ and that
 $$\lim_{n\to \infty} \int_{0}^{\pi} f(x)p_n(x)\,dx =\int_{0}^{\pi} [f(x)]^2\,dx.$$
 Conclude that $f=0$ since it is continuous.<|endoftext|>
-TITLE: stein and shakarchi complex analysis exercise 3.15 (b)
-QUESTION [5 upvotes]: I can't solve this exercise from the book, can anyone give me a hint?
-
-Show that if $f$ is holomorphic in the unit disc, is bounded, and converges
 uniformly to zero in the sector $\theta < \arg z < \varphi$ as $|z| \to 1$, then $f = 0$.
-(Use the Cauchy inequalities or the maximum modulus principle)
-
-My idea was to extend $f$ continuously to the border of the domain $\theta < \arg z < \varphi$, $|z| = 1$; then since $f=0$ on the border, $f=0$ in the whole domain.
-However I can't show that $f$ is continuously extendable.
-Thank you!
-
-REPLY [14 votes]: Here's one way to do it. 
Let $M$ be a bound for $|f|$ on the unit disc and let $t$ be slightly smaller than $\varphi-\theta$ and define
-$$
-g(z) = f(z)f(ze^{it})f(ze^{2it})\cdots f(ze^{nit})
-$$
-where $n$ is chosen so large that $nt > 2\pi$. Let $\varepsilon > 0$. By assumption there is an $r < 1$ such that $|f(z)| < \varepsilon$
 for $r < |z| < 1$ and $\theta < \arg z < \varphi$. Hence
-$$
-|g(z)| < M^n \varepsilon
-$$
-for all $z$ with $r < |z| < 1$. (One factor has modulus less than $\varepsilon$ and the other factors less than $M$.) By the maximum modulus principle, $|g| < M^n\varepsilon$ on the whole disc, and since $\varepsilon$ was arbitrary, we must have that $g(z) = 0$ for all $z$ in the unit disc.
-Hence one of the factors, and consequently all factors, of $g$ vanishes identically (otherwise $g$ would have at most countably many zeros).<|endoftext|>
-TITLE: Book of integrals
-QUESTION [8 upvotes]: Is there a book which contains just a bunch of integrals to evaluate? I want to learn new integration techniques and I'm open to other suggestions as to how I can go about learning new techniques. Thank you
-
-REPLY [3 votes]: Here is a book of advanced integration if you are interested:
-http://advancedintegrals.com/wp-content/uploads/2016/12/advanced-integration-techniques.pdf<|endoftext|>
-TITLE: How many groups of order $2058$ are there?
-QUESTION [8 upvotes]: I tried to calculate the number of groups of order $2058=2\times3\times 7^3$ and aborted after more than an hour. I used the (apparently slow) function $ConstructAllGroups$ because $NrSmallGroups$ did not give a result.
-The number $n=2058$ is (besides $2048$) the smallest number $n$ for which I do not know $gnu(n)$.
-The highest exponent is $3$, so it should be possible to calculate $gnu(2058)$ in a reasonable time.
-
-What is $gnu(2058)$? If an exact result is too difficult, is it smaller than, larger than, or equal to $2058$? 
-
-REPLY [6 votes]: $\mathtt{ConstructAllGroups(2058)}$ completed for me in a little over two hours (8219 seconds on a 2.6GHz machine) and returned a list of $91$ groups, which confirms Alexander Hulpke's results.
-Many serious computations in group theory take a long time - in some cases I have left programs running for months and got useful answers at the end! So this does not rate for me as a difficult calculation.<|endoftext|>
-TITLE: Continuous map and irrational numbers
-QUESTION [7 upvotes]: My question is the following :
-
-Let $f:\mathbb{R}\to\mathbb{R}$ be a continuous map such that each irrational number is mapped to a rational number (i.e. $f(\mathbb{R}\backslash\mathbb{Q})\subset\mathbb{Q}$). Show that $f$ is a constant map.
-
-What I have done :
-Let's suppose that $f$ is not a constant map, i.e. there exist $x,y\in\mathbb{R}$ such that $f(x)\neq f(y).$ As $f$ is continuous, the intermediate value theorem gives us that $[f(x),f(y)]\subset f([x,y]).$ But, as $$f([x,y])\subset f(\mathbb{R})=f(\mathbb{R}\backslash\mathbb{Q}\,\cup\,\mathbb{Q})=f(\mathbb{R}\backslash\mathbb{Q})\,\cup\,\bigcup_{n\in\mathbb{N}}\{f(p_n)\}\subset\mathbb{Q}\,\cup\,\bigcup_{n\in\mathbb{N}}\{f(p_n)\},$$ where $p_n$ is a sequence which enumerates $\mathbb{Q},$ we would get that $[f(x),f(y)]$ is a subset of a countable set and so is countable, which is a contradiction, and so $f$ is constant.
-My questions :
-Is my proof correct, and if yes, does someone see another way to answer it?
-Thank you for your comments, and happy new year!
-
-REPLY [6 votes]: Looks good! As a side note, as you asked for alternative methods, you do not have to formulate the proof by contradiction. Note that the image of $f$ is
-$$
-f(\mathbb{R})=f(\mathbb{R}\setminus\mathbb{Q}\cup\mathbb{Q})=f(\mathbb{R}\setminus\mathbb{Q})\cup f(\mathbb{Q})=A\cup \{f(q_n):n\geq1\}
-$$
-where $A\subset \mathbb{Q}$ and $q_n$ is an enumeration of the rationals. 
Thus $f(\mathbb{R})$ is countable, and a continuous map with a countable image is constant.<|endoftext|>
-TITLE: A Galois theory sanity check about conjugates.
-QUESTION [5 upvotes]: Here is my question...
-If $L/K$ is an algebraic extension and $\alpha,\beta \in L$ are $K$-conjugates (that is, they have the same minimal polynomial), is it always true that there exists some $\sigma \in $ Aut$(L/K)$ such that $\sigma(\alpha)=\beta$?
-I have thought about this for a while and can neither come up with a proof nor a counterexample :( Of course this fails if the algebraicity condition is removed: consider $\mathbb{R}/\mathbb{Q}.$
-Any hints will be much appreciated!
-
-REPLY [7 votes]: Consider the equation $(x^2+x-i)(x^2+x+i)=0$ over $K=\mathbb{Q}$.
-Now take $L=\mathbb{Q}(i, \sqrt{1+4i})$. That is, I have added the roots of one of the factors but not the other. (Of course one must check that $\sqrt{1-4i} \not \in L$.) In any case $i\mapsto -i$ sends an element to its conjugate. It also switches the two factors in the above equation. It cannot be extended since only one of these factors has a root in this field.
-This is a counterexample with $\alpha =i$ and $\beta =-i$<|endoftext|>
-TITLE: Definite integral $\int_0^1 \frac{\arctan x}{x\,\sqrt{1-x^2}}\,\text{d}x$
-QUESTION [17 upvotes]: Wanting to calculate the integral $\int_0^1 \frac{\arctan x}{x\,\sqrt{1-x^2}}\,\text{d}x$, it will certainly already be known to many of you that an interesting way to attack it is to refer to the method of integration and differentiation with respect to a parameter, getting $\frac{\pi}{2}\,\log\left(1+\sqrt{2}\right)$.
-Instead, what is not at all clear is how software such as Wolfram Mathematica can calculate that result exactly and not only approximately. Can someone enlighten me? Thanks! 
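As an aside, the claimed closed form $\frac{\pi}{2}\log(1+\sqrt{2})$ is easy to confirm numerically; here is a short Python sketch (an illustration only; the substitution $x=\sin t$ removes the endpoint singularity, after which composite Simpson's rule applies):

```python
import math

# After the substitution x = sin(t), the integral becomes
# ∫_0^{π/2} arctan(sin t)/sin t dt, whose integrand extends
# continuously to t = 0 (with value 1), so Simpson's rule applies.
def integrand(t: float) -> float:
    return math.atan(math.sin(t)) / math.sin(t) if t > 0 else 1.0

def simpson(f, a, b, n=2000):  # composite Simpson's rule; n must be even
    h = (b - a) / n
    acc = f(a) + f(b)
    for i in range(1, n):
        acc += f(a + i * h) * (4 if i % 2 else 2)
    return acc * h / 3

numeric = simpson(integrand, 0.0, math.pi / 2)
closed_form = math.pi / 2 * math.log(1 + math.sqrt(2))
print(numeric, closed_form)  # both ≈ 1.3845
```

The two numbers agree to many decimal places, consistent with the parameter-differentiation result quoted in the question.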
- -REPLY [4 votes]: $\newcommand{\angles}[1]{\left\langle\,{#1}\,\right\rangle} - \newcommand{\braces}[1]{\left\lbrace\,{#1}\,\right\rbrace} - \newcommand{\bracks}[1]{\left\lbrack\,{#1}\,\right\rbrack} - \newcommand{\dd}{\mathrm{d}} - \newcommand{\ds}[1]{\displaystyle{#1}} - \newcommand{\expo}[1]{\,\mathrm{e}^{#1}\,} - \newcommand{\half}{{1 \over 2}} - \newcommand{\ic}{\mathrm{i}} - \newcommand{\iff}{\Longleftrightarrow} - \newcommand{\imp}{\Longrightarrow} - \newcommand{\Li}[1]{\,\mathrm{Li}_{#1}} - \newcommand{\mc}[1]{\mathcal{#1}} - \newcommand{\mrm}[1]{\mathrm{#1}} - \newcommand{\ol}[1]{\overline{#1}} - \newcommand{\pars}[1]{\left(\,{#1}\,\right)} - \newcommand{\partiald}[3][]{\frac{\partial^{#1} #2}{\partial #3^{#1}}} - \newcommand{\root}[2][]{\,\sqrt[#1]{\,{#2}\,}\,} - \newcommand{\totald}[3][]{\frac{\mathrm{d}^{#1} #2}{\mathrm{d} #3^{#1}}} - \newcommand{\ul}[1]{\underline{#1}} - \newcommand{\verts}[1]{\left\vert\,{#1}\,\right\vert}$ -\begin{align} -&\color{#f00}{\int_{0}^{1}{\arctan\pars{x} \over x\root{1 - x^{2}}}\,\dd x} = -\int_{0}^{1}{1 \over \root{1 - x^{2}}}\ -\overbrace{\int_{0}^{1}{\dd t \over 1 + x^{2}t^{2}}} -^{\ds{\arctan\pars{x} \over x}}\ \,\dd x -\\[5mm] = &\ -\int_{0}^{1}\int_{0}^{1}{\dd x \over \root{1 - x^{2}}\pars{1 + t^{2}x^{2}}} -\,\dd t -\\[5mm] \stackrel{x\ \mapsto\ 1/x}{=}\,\,\,& -\int_{0}^{1} -\int_{1}^{\infty}{x\,\dd x \over \root{x^{2} - 1}\pars{x^{2} + t^{2}}}\,\dd t -\label{1}\tag{1} -\end{align} - -Note that the last substitution $\ds{\pars{~x \mapsto 1/x~}}$ leads to a $\ds{\ul{trivial\ integration}}$. 
- - -With the substitution $\ds{x^{2} \mapsto x}$ in the last integration $\ds{~\pars{\mbox{see}\ \eqref{1}}~}$, -\begin{align} -&\color{#f00}{\int_{0}^{1}{\arctan\pars{x} \over x\root{1 - x^{2}}}\,\dd x} = -\half\int_{0}^{1} -\int_{1}^{\infty}{\dd x \over \root{x - 1}\pars{x + t^{2}}}\,\dd t -\\[5mm] & -\stackrel{x\ \equiv\ 1 + y^{2}}{=}\,\,\,\ -\int_{0}^{1}\int_{0}^{\infty}{\dd y \over y^{2} + 1 + t^{2}}\,\dd t = -\int_{0}^{1}{1 \over \root{1 + t^{2}}} -\int_{0}^{\infty}{\dd y \over y^{2} + 1}\,\dd t -\\[5mm] = &\ -{\pi \over 2}\int_{0}^{1}{\dd t \over \root{1 + t^{2}}} -\stackrel{t\ =\ \sinh\pars{\theta}}{=}\,\,\, -{\pi \over 2}\int_{0}^{\mrm{arcsinh}\pars{1}}\,\dd\theta = -\color{#f00}{{\pi \over 2}\,\mrm{arcsinh}\pars{1}} -\\[5mm] = &\ -\color{#f00}{{\pi \over 2}\,\ln\pars{1 + \root{2}}} -\end{align}<|endoftext|> -TITLE: What did Whitehead and Russell's "Principia Mathematica" achieve? -QUESTION [20 upvotes]: In philosophical contexts, the Principia Mathematica is sometimes held in high regard as a demonstration of a logical system. -But what did Whitehead and Russell's Principia Mathematica achieve for mathematics? - -REPLY [14 votes]: I'll try to answer referring to the Introduction to the 1st edition of W&R's Principia (3 vols, 1910-13); see : - -Alfred North Whitehead & Bertrand Russell, Principia Mathematica to *56 (2nd ed,1927), page 1: - - -THE mathematical logic which occupies Part I of the present work has -been constructed under the guidance of three different purposes. In the first -place, it aims at effecting the greatest possible analysis of the ideas with -which it deals and of the processes by which it conducts demonstrations, -and at diminishing to the utmost the number of the undefined ideas and -undemonstrated propositions (called respectively primitive ideas and primitive -propositions) from which it starts. 
In the second place, it is framed with a
-view to the perfectly precise expression, in its symbols, of mathematical
-propositions: to secure such expression, and to secure it in the simplest and
-most convenient notation possible, is the chief motive in the choice of topics.
-In the third place, the system is specially framed to solve the paradoxes
-which, in recent years, have troubled students of symbolic logic and the
-theory of aggregates; it is believed that the theory of types, as set forth in
-what follows, leads both to the avoidance of contradictions, and to the
-detection of the precise fallacy which has given rise to them. [emphasis added.]
-
-Simplifying a little bit, the three purposes of the work are :
-
-the foundation of mathematical logic
-
-the formalization of mathematics
-
-the development of the philosophical project called Logicism.
-
-
-I'll not touch the third point.
-
-Regarding the first one, PM is unquestionably the basic building block of modern mathematical logic.
-Unfortunately, its cumbersome notation and the "intermingling" of technical aspects and philosophical ones prevent using it (at least the initial chapters) as a textbook.
-Compare with the first "fully modern" textbook of mathematical logic :
-
-David Hilbert & Wilhelm Ackermann Principles of Mathematical Logic, the 1950 translation of the 1938 second edition of the text Grundzüge der theoretischen Logik, first published in 1928.
-
-See the Introduction [page 2] :
-
-symbolic logic received a new impetus from the need of mathematics for an exact foundation and strict axiomatic treatment. G.Frege published his Begriffsschrift in 1879
-and his Grundgesetze der Arithmetik in 1893-1903. G.Peano and his co-workers began in 1894 the publication of the Formulaire
-de Mathématiques, in which all the mathematical disciplines were to be presented in terms of the logical calculus. A high point of this development is the appearance of the Principia Mathematica (1910-1913) by A.N. 
Whitehead and B. Russell.
-
-H&A's work is a "textbook" because - in spite of Hilbert's deep involvement with his foundational project - it is devoted to a plain exposition of "technical" issues, without philosophical discussions.
-
-Now for my tentative answer to the question :
-
-what did Whitehead and Russell's Principia Mathematica achieve for mathematics?
-
-The first (and unprecedented) fully-fledged formalization of a huge part of mathematics, mainly the Cantorian mathematics of the infinite.
-Unfortunately again, we have a cumbersome symbolism, as well as an axiomatization based on the theory of classes (and not : sets) that has been subsequently "surpassed" by Zermelo's axiomatization.
-But we can find there "perfectly precise expression of mathematical propositions [and concepts]", starting from the elementary ones.
-Some examples regarding operations on classes:
-
-*22.01. $\alpha \subset \beta \ . =_{\text {Df}} . \ x \in \alpha \supset_x x \in \beta$
-[in modern notation : $\forall x \ (x \in \alpha \to x \in \beta)$]
-This defines "the class $\alpha$ is contained in the class $\beta$," or "all $\alpha$'s are $\beta$'s."
-
-and the definition of singleton:
-
-[the] function $\iota 'x$, meaning "the class of terms which are identical with $x$" which is the same thing as "the class whose only member is $x$." We are thus to have
-$$\iota'x = \hat y(y = x).$$
-
-[in modern notation : $\{ x \} = \{ y \mid y=x \}$]
-
-[...] The distinction between $x$ and $\iota'x$ is one of the merits of Peano's symbolic logic, as well as of Frege's. [....] Let $\alpha$ be a class; then the class whose only member is $\alpha$ has only one member, namely $\alpha$, while $\alpha$ may have many members. Hence the class whose only member is $\alpha$ cannot be identical with $\alpha$*. [...]
-*51.15. $y \in \iota'x \ . \equiv . 
\ y = x$ - -[in modern notation : $y \in \{ x \} \leftrightarrow y=x$].<|endoftext|> -TITLE: Infinite Series $\sum_{m=0}^\infty\sum_{n=0}^\infty\frac{m!\:n!}{(m+n+2)!}$ -QUESTION [7 upvotes]: Evaluating -$$\sum_{m=0}^\infty \sum_{n=0}^\infty\frac{m!n!}{(m+n+2)!}$$ -involving binomial coefficients. -My attempt: $$\frac{1}{(m+1)(n+1)}\sum_{m=0}^\infty \sum_{n=0}^\infty\frac{(m+1)!(n+1)!}{(m+n+2)!}=\frac{1}{(m+1)(n+1)} \sum_{m=0}^\infty \sum_{n=0}^\infty\frac{1}{\binom{m+n+2}{m+1}}=?$$ -Is there any closed form of this expression? - -REPLY [5 votes]: Another (way less subtle) approach. We have: -$$ S=\sum_{m\geq 0}\sum_{n\geq 0}\frac{\Gamma(m+1)\,\Gamma(n+1)}{(m+n+2)\,\Gamma(m+n+2)}=\sum_{m,n\geq 0}\iint_{(0,1)^2} x^m(1-x)^n y^{m+n+1}\,dx\,dy \tag{1}$$ -hence: -$$ S = \iint_{(0,1)^2}\frac{y\,dx\,dy}{(1-xy)(1-y+xy)}=2\int_{0}^{1}\frac{-\log(1-y)}{2-y}\,dy=2\int_{0}^{1}\frac{-\log(t)}{1+t}\,dt\tag{2} $$ -and by expanding $\frac{1}{1+t}$ as a geometric series, -$$ S = 2\sum_{n\geq 0}\frac{(-1)^n}{(n+1)^2} = \color{red}{\zeta(2)} = \frac{\pi^2}{6}.\tag{3}$$<|endoftext|> -TITLE: Two Banach spaces, if and only if criterion for range of closed unbounded operator to be closed? -QUESTION [6 upvotes]: Let $E$ and $F$ be two Banach spaces. Let $A: D(A) \subset E \to F$ be a closed unbounded operator. How do I see that $R(A)$ is closed if and only if there exists a constant $C$ such that$$\text{dist}(u, N(A)) \le C\|Au\| \text{ for all }u \in D(A)?$$Here, $D$ denotes domain, $R$ denotes range, $N$ denotes kernel. -Idea. We probably want to consider the operator $T: E_0 \to F$, where $E_0 = D(A)$ with the graph norm and $T = A$ in some regard? But I am not quite sure on what to do next. - -REPLY [4 votes]: The first part of the answer was wrong, so I have removed it. I have written a new version and put it at the end. Also the second part (which is now the first part) has been rewritten. -Suppose $R$ is closed (with this $R$ becomes a Banach space). 
-In general $D$ with the graph norm $||u||_G := ||u|| + ||A(u)||$ is a Banach space. This is so because the graph of $A$ is a closed subset of $E \oplus F$, since $A$ is a closed operator.
-$A$ is clearly bounded on this space. Taking the quotient space $D/N$, the map $\tilde A([u]):=A(u)$ is well defined, bounded, injective and has the same range as $A$. As such $\tilde A: D/N \to R$ is a bijective bounded map between two Banach spaces. By the open mapping theorem, the inverse $\tilde A^{-1} : R \to D/N$ is then also a bounded linear operator.
-The norm on $D/N$ is given by: $||[u]||_{D/N}=\text{dist}(u,N)_G=\text{dist}(u,N)+||A(u)||$.
-Finally:
-$$\text{dist}(u,N)=||[u]||_{D/N}-||A(u)||=||\tilde A^{-1}(\tilde A([u]))||_{D/N}-||A(u)||≤(||\tilde A^{-1}||-1)\ ||A(u)||$$
-The other direction works as follows:
-Suppose $\text{dist}(u,N)≤C\ ||A(u)||$ $\forall u \in D$.
-As seen before, $D/N$ is a Banach space if $D$ is given the graph norm. On this space we can consider the bijective bounded function $\tilde A: D/N \to R$.
-The inverse of $\tilde A$ is also bounded since:
-$$\frac{||\tilde A^{-1}(\tilde A([u]))||_{D/N}}{||\tilde A([u])||}=\frac{\text{dist}(u,N)+||A(u)||}{||A(u)||}≤C+1$$
-Now let $A(u_n)$ be Cauchy. Then $||[u_n]-[u_m]||_{D/N}≤||\tilde A ^{-1}||\cdot ||A(u_n)-A(u_m)||$ and $[u_n]$ is Cauchy. But since $D/N$ is a Banach space the limit $[u]$ exists and $||A(u_n)-A(u)||≤||\tilde A||\cdot ||[u_n]-[u]||_{D/N}$, so as a result $A(u_n)$ converges to $A(u)$.<|endoftext|>
-TITLE: Tensor product of $\mathbb{Q}$ with an infinite product
-QUESTION [6 upvotes]: How can I prove that the tensor product $\mathbb{Q} \otimes \left( \prod_n \mathbb{Z}/n\mathbb{Z} \right)$, where the product is taken over all the positive
 integers $n$, is not trivial?
-
-REPLY [9 votes]: Proposition: $\mathbb{Q}$ is flat as a $\mathbb{Z}$-module. (Which is to say, the functor $ \_ \otimes_{\mathbb{Z}} \mathbb{Q}$ is exact.) 
-Pf: The easiest way is to observe that tensoring with $\mathbb Q$ is the same as localizing at $\mathbb{Z} \setminus \{0\}$, and localization is exact.
-Fancier proof: $\mathbb{Q}$ is the filtered colimit of the free rank-one modules $\frac{1}{n}\mathbb{Z}$, for $n = 1,2,3, \ldots$, ordered by divisibility. A free module is flat. Now use that $\text{Tor}$ commutes with filtered colimits (since tensoring commutes with colimits, and taking cohomology commutes with filtered colimits).
-Suppose that $x \in A$ is an element of infinite order. This gives some $0 \to \mathbb{Z} \to A$. Tensor this exact sequence with $\mathbb{Q}$ to get $0 \to \mathbb{Q} \to A \otimes \mathbb{Q}$, using $\mathbb{Q}$ flat to keep injectivity.
-Now, the element $(1,1,\ldots) \in \Pi_n \mathbb{Z} / n \mathbb{Z}$ has infinite order.<|endoftext|>
-TITLE: Examples of Induced Representations of Lie Algebras
-QUESTION [9 upvotes]: Given a (finite-dimensional) Lie algebra $\mathfrak{g}$, a subalgebra $\mathfrak{h}\subset\mathfrak{g}$, and a representation $\rho:\mathfrak{h}\rightarrow\mathfrak{gl}(V)$ of $\mathfrak{h}$, one can form (so I'm told) a representation of the whole algebra $\mathfrak{g}$ on
-$$ \text{Ind}^{\mathfrak{g}}_{\mathfrak{h}}(V):=U(\mathfrak{g})\otimes_{U(\mathfrak{h})} V, $$
-where $U(-)$ denotes the universal enveloping algebra of $-$. My question is simple:
-
-How, exactly, is the representation of $\mathfrak{g}$ on $\text{Ind}^{\mathfrak{g}}_{\mathfrak{h}}(V)$ defined? Are there any good references covering this topic (focusing on the Lie algebra side, rather than induced representations of Lie groups, as essentially all references I've found discuss)?
-
-In particular, references going through numerous examples actually computing the universal enveloping algebras and the induced representation would be tremendously appreciated.
-
-REPLY [12 votes]: I assume that we are dealing with Lie algebras over a field $K$. 
Let me try to give an overview of how this induced representation is constructed, which is an exercise in understanding how representations of the Lie algebra $\mathfrak{g}$ relate to modules of the universal enveloping algebra $\mathcal{U}(\mathfrak{g})$. A book which you might want to look into is James E. Humphreys’ Introduction to Lie Algebras and Representation Theory, where the topic is covered from an algebraic point of view and without the use of Lie groups.
-The universal enveloping algebra
-As you probably already know, the universal enveloping algebra $\mathcal{U}(\mathfrak{g})$ of the Lie algebra $\mathfrak{g}$ is an associative, unital $K$-algebra that can be defined as the quotient algebra $T(\mathfrak{g})/I$ where $T(\mathfrak{g})$ is the tensor algebra over $\mathfrak{g}$ and $I$ the two-sided ideal
-$$
- I = \langle x \otimes y - y \otimes x - [x,y] \mid x,y \in \mathfrak{g}\rangle \,.
-$$
-So as a vector space, $\mathcal{U}(\mathfrak{g})$ is generated by the monomials
-$$
- x_1 \dotsm x_n
- \qquad
- \text{with $x_1, \dotsc, x_n \in \mathfrak{g}$}.
-$$
-The inclusion map $\mathfrak{g} \hookrightarrow \mathcal{U}(\mathfrak{g})$ is a homomorphism of Lie algebras, so we can regard $\mathfrak{g}$ as a Lie subalgebra of $\mathcal{U}(\mathfrak{g})$. The multiplication in $\mathcal{U}(\mathfrak{g})$ satisfies the property
-$$
- xy-yx = [x,y]_{\mathfrak{g}}
-$$
-for all $x,y \in \mathfrak{g} \subseteq \mathcal{U}(\mathfrak{g})$.
-An essential property that follows from this construction of $\mathcal{U}(\mathfrak{g})$ is the theorem of Poincaré–Birkhoff–Witt, which describes a vector space basis of $\mathcal{U}(\mathfrak{g})$.
-
-Theorem (Poincaré–Birkhoff–Witt). Let $\mathfrak{g}$ be a Lie algebra and let $(x_i)_{i \in I}$ be a basis of $\mathfrak{g}$ where $(I, \leq)$ is a totally ordered set.
Then the ordered monomials
-$$
- x_{i_1} \dotsm x_{i_n}
-\qquad
-\text{with $n \in \mathbb{N}$, $i_1, \dotsc, i_n \in I$, $i_1 \leq \dotsb \leq i_n$}
-$$
-form a basis of $\mathcal{U}(\mathfrak{g})$.
-
-
-Remark: This basis can also be written as
-$$
- x_{i_1}^{p_1} \dotsm x_{i_m}^{p_m}
-$$
-with $m \in \mathbb{N}$, $i_1, \dotsc, i_m \in I$, $i_1 < \dotsb < i_m$, $p_1, \dotsc, p_m \in \mathbb{N}$.
-
-Let’s look at a specific example:
-
-Example: The Lie algebra $\mathfrak{sl}_2(\mathbb{C})$ admits a basis $(e,h,f)$ given by
-$$
-e =
-\begin{pmatrix}
- 0 & 1 \\
- 0 & 0
-\end{pmatrix},
-\quad
-h =
-\begin{pmatrix}
- 1 & 0 \\
- 0 & -1
-\end{pmatrix},
-\quad
-f =
-\begin{pmatrix}
- 0 & 0 \\
- 1 & 0
-\end{pmatrix}.
-$$
-This basis is already totally ordered by the order in which we write the tuple $(e,h,f)$. By the PBW-theorem it follows that $\mathcal{U}(\mathfrak{sl}_2(\mathbb{C}))$ has a basis consisting of the monomials
-$$
-e^l h^m f^n \quad \text{with $l,m,n \in \mathbb{N}$.}
-$$
-
-One nice thing about the universal enveloping algebra is its universal property:
-
-Proposition (Universal property of the UEA).
-Let $\mathfrak{g}$ be a $K$-Lie algebra and let $A$ be an associative and unital $K$-algebra.
-Any Lie algebra homomorphism $\phi \colon \mathfrak{g} \to A$ extends uniquely to an algebra homomorphism $\Phi \colon \mathcal{U}(\mathfrak{g}) \to A$.
-
-Representations of $\mathfrak{g}$ and $\mathcal{U}(\mathfrak{g})$-modules
-This universal property has nice effects for the representation theory of $\mathfrak{g}$.
-A representation of $\mathfrak{g}$ is, by definition, a Lie algebra homomorphism $\mathfrak{g} \to \mathfrak{gl}(V)$ for some vector space $V$.
-Using the universal property of $\mathcal{U}(\mathfrak{g})$ it follows that the Lie algebra homomorphisms $\mathfrak{g} \to \mathfrak{gl}(V)$ correspond one-to-one to the algebra homomorphisms $\mathcal{U}(\mathfrak{g}) \to \mathrm{End}_K(V)$ by restricting or extending the homomorphisms in question.
-But an algebra homomorphism from $\mathcal{U}(\mathfrak{g})$ to $\mathrm{End}_K(V)$ is the same as a $\mathcal{U}(\mathfrak{g})$-module structure on $V$.
-We have thus found a one-to-one correspondence between representations of $\mathfrak{g}$ and $\mathcal{U}(\mathfrak{g})$-modules.
-Let us be more explicit:
-If $\rho \colon \mathfrak{g} \to \mathfrak{gl}(V)$ is a representation of $\mathfrak{g}$, then we have a multiplication map (i.e. a bilinear map)
-$$
- \mathfrak{g} \times V \to V,
- \quad
- (x,v) \mapsto x.v
-$$
-given by
-$$
- x.v = \rho(x)(v)
-$$
-for all $x \in \mathfrak{g}$ and $v \in V$.
-This multiplication now extends to a multiplication map
-$$
- \mathcal{U}(\mathfrak{g}) \times V \to V,
- \quad
- (y,v) \mapsto y \cdot v
-$$
-that is given with respect to the PBW-basis of $\mathcal{U}(\mathfrak{g})$ by
-$$
- (x_1 \dotsm x_n) \cdot v
- =
- x_1 . (x_2 . ( \dotsm ( x_{n-1} . (x_n.v) ) ) )
-$$
-for all $x_1, \dotsc, x_n \in \mathfrak{g}$ and $v \in V$.
-This multiplication gives $V$ the structure of an $\mathcal{U}(\mathfrak{g})$-module.
-Note also that the restriction of this $\mathcal{U}(\mathfrak{g})$-module structure to $\mathfrak{g}$ (when regarded as a subset of $\mathcal{U}(\mathfrak{g})$) coincides with the representation of $\mathfrak{g}$ that we started with.
-Extension of scalars
-We will now need the tensor product, and in particular its use for the so called extension of scalars. A detailed explanation can, for example, be found in Abstract Algebra by Dummit and Foote.
-The basic idea behind the induced representation will be precisely this extension of scalars.
-Let $R$ be a unital ring, let $S$ be a subring of $R$ (also unital) and let $M$ be a (unital) $S$-module.
-Then we want to somehow extend the $S$-module structure of $M$ to an $R$-module structure on $M$.
-The problem is that there is a priori no good way to extend the multiplication map $S \times M \to M$ to a multiplication map $R \times M \to M$.
-So instead, we replace $M$ by another abelian group, namely $R \otimes_S M$.
-Recall that $R \otimes_S M$ is an abelian group generated by elements of the form $r \otimes m$ with $r \in R$ and $m \in M$, under the constraints that
-$$
- (r+r') \otimes m = r \otimes m + r' \otimes m, \\
- r \otimes (m+m') = r \otimes m + r \otimes m', \\
- (rs) \otimes m = r \otimes (sm)
-$$
-where $r, r' \in R$, $m, m' \in M$ and $s \in S$.
-The nice thing about $R \otimes_S M$ is that it naturally carries the structure of an $R$-module via
-$$
- r' \cdot (r \otimes m)
- = (r' r) \otimes m
-$$
-for all $r', r \in R$ and $m \in M$.
-Note that for all $s \in S$ and $m \in M$ we have
-$$
- s \cdot (1 \otimes m)
- = (s \cdot 1) \otimes m
- = s \otimes m
- = (1 \cdot s) \otimes m
- = 1 \otimes (sm),
-$$
-so we can, to some extent, think of $R \otimes_S M$ as extending the original $S$-module structure of $M$. Indeed, the map
-$$
- M \to R \otimes_S M \,,
- \quad
- m \mapsto 1 \otimes m
-$$
-is a homomorphism of $S$-modules. (A word of warning though: It is not always true that this homomorphism is injective. So we can not necessarily regard $M$ as an $S$-submodule of $R \otimes_S M$.)
-We refer to this process of “extending” the $S$-module structure of $M$ to the $R$-module structure on $R \otimes_S M$ as the extension of scalars from $S$ to $R$.
-The induced representation
-We are now ready to tackle the induced representation. For this let $\mathfrak{g}$ be a Lie algebra, let $\mathfrak{h}$ be a Lie subalgebra of $\mathfrak{g}$, and let $\rho \colon \mathfrak{h} \to \mathfrak{gl}(V)$ be a representation of $\mathfrak{h}$. We then have for all $x \in \mathfrak{h}$ and $v \in V$ the product $x.v = \rho(x)(v)$.
-Note that the inclusion map $\mathfrak{h} \hookrightarrow \mathfrak{g}$ is a homomorphism of Lie algebras and therefore induces (by the universal property of the universal enveloping algebra) a homomorphism of algebras $\mathcal{U}(\mathfrak{h}) \to \mathcal{U}(\mathfrak{g})$ that is given on the PBW-bases of $\mathcal{U}(\mathfrak{h})$ and $\mathcal{U}(\mathfrak{g})$ by
-$$
- x_1 \dotsm x_n \mapsto x_1 \dotsm x_n
- \quad
- \text{for all $x_1, \dotsc, x_n \in \mathfrak{h}$}.
-$$
-This homomorphism of algebras is injective since it maps the PBW-basis of $\mathcal{U}(\mathfrak{h})$ injectively into the PBW-basis of $\mathcal{U}(\mathfrak{g})$.
-We can therefore regard $\mathcal{U}(\mathfrak{h})$ as a subalgebra of $\mathcal{U}(\mathfrak{g})$.
-We can now apply the extension of scalars from $\mathcal{U}(\mathfrak{h})$ to $\mathcal{U}(\mathfrak{g})$:
-That $V$ is a representation of $\mathfrak{h}$ means that it is a $\mathcal{U}(\mathfrak{h})$-module via
-$$
- (x_1 \dotsm x_n) \cdot v
- =
- x_1.(x_2.(\dotsm(x_{n-1}.(x_n.v))))
-$$
-for all $x_1, \dotsc, x_n \in \mathfrak{h}$ and $v \in V$.
-So by applying extension of scalars we get the $\mathcal{U}(\mathfrak{g})$-module
-$$
- \mathrm{Ind}_\mathfrak{h}^\mathfrak{g}(V)
- =
- \mathcal{U}(\mathfrak{g}) \otimes_{\mathcal{U}(\mathfrak{h})} V.
-$$
-But let’s see what the $\mathcal{U}(\mathfrak{g})$-module structure on $\mathrm{Ind}_\mathfrak{h}^\mathfrak{g}(V)$ really looks like:
-
-As a vector space, $\mathrm{Ind}_\mathfrak{h}^\mathfrak{g}(V)$ is generated by the simple tensors $x \otimes v$ with $x \in \mathcal{U}(\mathfrak{g})$ and $v \in V$, and $\mathcal{U}(\mathfrak{g})$ is generated by the monomials $x_1 \dotsm x_n$ with $x_1, \dotsc, x_n \in \mathfrak{g}$.
-It follows that $\mathrm{Ind}_\mathfrak{h}^\mathfrak{g}(V)$ is, again as a vector space, generated by the elements
-$$
- (x_1 \dotsm x_n) \otimes v
- \quad
- \text{with $x_1, \dotsc, x_n \in \mathfrak{g}$, $v \in V$}.
-$$
-In terms of these vector space generators, the $\mathcal{U}(\mathfrak{g})$-module structure of $\mathrm{Ind}_\mathfrak{h}^\mathfrak{g}(V)$ is given by the multiplication
-$$
- (x_1 \dotsm x_n) \cdot ((y_1 \dotsm y_m) \otimes v)
- = (x_1 \dotsm x_n \cdot y_1 \dotsm y_m) \otimes v.
-$$
-The fact that we are tensoring over $\mathcal{U}(\mathfrak{h})$ has the effect that
-$$
- h \cdot (1 \otimes v)
- = h \otimes v
- = 1 \otimes (h.v)
- \quad
- \text{for all $h \in \mathfrak{h}$, $v \in V$}.
-$$
-
-These two properties can be nicely put together by choosing a suitable basis of $\mathfrak{g}$ and applying the PBW theorem: Let $(b_i)_{i \in I}$ be a basis of $\mathfrak{g}$ where
-
-$(I ,\leq)$ is a totally ordered set,
-we have a partition $I = J' \cup J$ so that $(b_j)_{j \in J}$ is a basis of $\mathfrak{h}$, and
-$j' \leq j$ for all $j' \in J'$ and $j \in J$.
-
-Then by the PBW-theorem, the ordered monomials
-$$
- b_{i_1} \dotsm b_{i_n} b_{j_1} \dotsm b_{j_m}
- \qquad
- \begin{alignedat}{2}
- i_1, \dotsc, i_n &\in J', \,
- &
- i_1 \leq \dotsb &\leq i_n,
- \\
- j_1, \dotsc, j_m &\in J, \,
- &
- j_1 \leq \dotsb &\leq j_m
- \end{alignedat}
-$$
-form a basis of $\mathcal{U}(\mathfrak{g})$, and the ordered monomials
-$$
- b_{j_1} \dotsm b_{j_m}
- \qquad
- \text{with $j_1, \dotsc, j_m \in J$, $j_1 \leq \dotsb \leq j_m$}
-$$
-form a basis of the subalgebra $\mathcal{U}(\mathfrak{h})$.
-With respect to this basis, the $\mathcal{U}(\mathfrak{g})$-module structure on $\mathrm{Ind}_\mathfrak{h}^\mathfrak{g}(V)$ is given by
-$$
- (b_{i_1} \dotsm b_{i_n} b_{j_1} \dotsm b_{j_m}) . (1 \otimes v)
- = (b_{i_1} \dotsm b_{i_n}) \otimes (b_{j_1}.(\dotsm (b_{j_m}.v)))
-$$
-(Roughly speaking: The basis elements of $\mathfrak{h}$ just do what they have always done, and the “new” basis elements are put in front. Just as expected from a formal and general construction.)
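(A quick sanity check, added here as an aside and not part of the construction: the bracket relations $[h,e]=2e$, $[h,f]=-2f$, $[e,f]=h$ for the $\mathfrak{sl}_2(\mathbb{C})$ basis $(e,h,f)$ from the example above are standard, and can be confirmed with a few lines of plain matrix arithmetic:)

```python
# Sketch (my own check): verify the sl_2 bracket relations
# [h,e] = 2e, [h,f] = -2f, [e,f] = h for the basis (e, h, f)
# used in the example, with plain 2x2 matrix arithmetic.

def matmul(a, b):
    return [[sum(a[i][k] * b[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

def bracket(a, b):  # Lie bracket [a, b] = ab - ba
    ab, ba = matmul(a, b), matmul(b, a)
    return [[ab[i][j] - ba[i][j] for j in range(2)] for i in range(2)]

def scale(c, a):
    return [[c * x for x in row] for row in a]

e = [[0, 1], [0, 0]]
h = [[1, 0], [0, -1]]
f = [[0, 0], [1, 0]]

assert bracket(h, e) == scale(2, e)
assert bracket(h, f) == scale(-2, f)
assert bracket(e, f) == h
print("sl_2 bracket relations hold")
```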
-
-Example:
-Let $\mathfrak{g} = \mathfrak{gl}_2(\mathbb{C})$ and $\mathfrak{h} = \mathfrak{sl}_2(\mathbb{C})$, and let $(e,h,f)$ be the basis of $\mathfrak{sl}_2(\mathbb{C})$ as in the previous example. We can extend this basis of $\mathfrak{sl}_2(\mathbb{C})$ to a basis $(b,e,h,f)$ of $\mathfrak{gl}_2(\mathbb{C})$ by choosing
-$$
- b =
- \begin{pmatrix}
- 1 & 0 \\
- 0 & 1
- \end{pmatrix}.
-$$
-This basis $(b, e, h, f)$ of $\mathfrak{gl}_2(\mathbb{C})$ is already totally ordered by the way its elements are ordered in the tuple $(b, e, h, f)$. A basis of $\mathcal{U}(\mathfrak{gl}_2(\mathbb{C}))$ is now given by the monomials
-$$
- b^k e^l h^m f^n
- \quad
- \text{with $k,l,m,n \in \mathbb{N}$}
-$$
-and a basis of $\mathcal{U}(\mathfrak{sl}_2(\mathbb{C}))$ is given by the monomials
-$$
- e^l h^m f^n
- \quad
- \text{with $l,m,n \in \mathbb{N}$}.
-$$
-Consider now the natural representation $V = \mathbb{C}^2$ of $\mathfrak{sl}_2(\mathbb{C})$, which is given by
-$$
- x.v
- =
- xv
- \quad
- \text{for all $x \in \mathfrak{sl}_2(\mathbb{C})$ and $v \in \mathbb{C}^2$},
-$$
-where the right hand side denotes the usual matrix-vector-multiplication.
-Then the induced $\mathcal{U}(\mathfrak{gl}_2(\mathbb{C}))$-module structure on $\mathrm{Ind}_\mathfrak{h}^\mathfrak{g}(V)$ is given by
-$$
- (b^k e^l h^m f^n) \cdot (1 \otimes v)
- = b^k \otimes (e^l h^m f^n v)
- \quad
- \text{for all $v \in \mathbb{C}^2$}.
-$$
-Note that we do not get the natural representation of $\mathfrak{gl}_2(\mathbb{C})$, as $b$ does not act by the identity, but rather via $b.(1 \otimes v) = b \otimes v$ for all $v \in \mathbb{C}^2$.
-
-We are now nearly finished: We have constructed a $\mathcal{U}(\mathfrak{g})$-module structure on $\mathrm{Ind}_\mathfrak{h}^\mathfrak{g}(V)$, which by restriction to $\mathfrak{g}$ (when regarded as a subset of $\mathcal{U}(\mathfrak{g})$) now corresponds to a representation of $\mathfrak{g}$. This representation is given by
-$$
- x .
((x_1 \dotsm x_n) \otimes v)
- = (x \cdot x_1 \dotsm x_n) \otimes v
- \quad
- \text{for all $x, x_1, \dotsc, x_n \in \mathfrak{g}$, $v \in V$},
-$$
-a special case of the previous formulae.
-I hope this helps.<|endoftext|>
-TITLE: Strange Mean Inequality
-QUESTION [14 upvotes]: This problem was inspired by this question.
-$\sqrt [ 3 ]{ a(\frac { a+b }{ 2 } )(\frac { a+b+c }{ 3 } ) } \ge \frac { a+\sqrt { ab } +\sqrt [ 3 ]{ abc } }{ 3 } $
-The above can be proved using Hölder's inequality.
-$\sqrt [ 3 ]{ a(\frac { a+b }{ 2 } )(\frac { a+b+c }{ 3 } ) } =\sqrt [ 3 ]{ (\frac { a }{ 3 } +\frac { a }{ 3 } +\frac { a }{ 3 } )(\frac { a }{ 3 } +\frac { a+b }{ 6 } +\frac { b }{ 3 } )(\frac { a+b+c }{ 3 } ) } \ge \sqrt [ 3 ]{ (\frac { a }{ 3 } +\frac { a }{ 3 } +\frac { a }{ 3 } )(\frac { a }{ 3 } +\frac { \sqrt { ab } }{ 3 } +\frac { b }{ 3 } )(\frac { a }{ 3 } +\frac { b }{ 3 } +\frac { c }{ 3 } ) } (\because \text{AM-GM})\\ \ge \frac { a+\sqrt { ab } +\sqrt [ 3 ]{ abc } }{ 3 } (\because \text{Hölder's inequality})$
-However, I had trouble generalizing this inequality to
-$\sqrt [ n ]{ \prod _{ i=1 }^{ n }{ { A }_{ i } } } \ge \frac { \sum _{ i=1 }^{ n }{ { G }_{ i } } }{ n } $
-when ${ A }_{ i }=\frac { \sum _{ j=1 }^{ i }{ { a }_{ j } } }{ i } $
-and ${ G }_{ i }=\sqrt [ i ]{ \prod _{ j=1 }^{ i }{ { a }_{ j } } } $ as I could not split the fractions as I did above.
-
-REPLY [6 votes]: This result was conjectured by Professor Finbarr Holland, and then it was proved by
-K. Kedlaya in an article in the American Mathematical Monthly that could be found here, and then it was generalized by Professor Holland in an article that could be found here.<|endoftext|>
-TITLE: Making a proof precise of "Aut$(Q_8)\cong S_4$"
-QUESTION [11 upvotes]: I know, with some machinery, how to prove that Aut$(Q_8)\cong S_4$.
My question here is not about how to prove it, but about an incomplete proof (I feel) given to me by a student (it is possibly somewhere on-line, since it is not so easy to come up with the idea of his proof, I think.)
-
-Consider a cube, and label $i,-i$ on a pair of opposite faces; similarly put $j,-j$ and $k,-k$ on other faces. Since the group of rotations of the cube is $S_4$, Aut$(Q_8)$ is $S_4$.
-
-Can anyone make this argument precise?
-
-REPLY [5 votes]: As SpamIAm says in his solution, $-1$ and $1$ must be fixed by any automorphism $\phi$. Moreover, since $i$ and $j$ generate the group, an automorphism is determined by the images of $i$ and $j$. There are six possibilities left for $\phi(i)$ and at most five for $\phi(j)$. In fact, there are at most four, since $\phi(-i) = \phi(i^3) = \phi(i)^3$ is already decided. Therefore $Q_8$ has at most $24$ automorphisms.
-All that is left now is to see that every rotation of the cube is actually an automorphism of the group. But it is well known that every rotation in $\mathbb{R}^3$ corresponds to some mapping $x \mapsto qxq^{-1}$ restricted to the pure quaternions, and this mapping is an inner automorphism of the ring of quaternions.<|endoftext|>
-TITLE: How to get the complex number out of the polar form
-QUESTION [5 upvotes]: How does one get the complex number out of this equation?
-
-$$\Large{c = M e^{j \phi}}$$
-
-I would like to write a function for this in C but I don't see how I can get the real and imaginary parts out of this equation to store it in a C structure.
-
-REPLY [2 votes]: $$\mathcal{R}(c)=M\cdot\cos{\phi}$$
-$$\mathcal{I}(c)=M\cdot\sin{\phi}$$
-Assuming that $j^2=-1$ (Physics notation).
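A quick numeric illustration of these two formulas (my addition, shown in Python for brevity; a C function would use `cos` and `sin` from `math.h` in exactly the same way). Here `cmath.rect` builds $M e^{i\phi}$ directly, so we can compare its real and imaginary parts against $M\cos\phi$ and $M\sin\phi$:

```python
import cmath
import math

# Sketch: recover the real and imaginary parts of c = M * e^{i*phi}.
# cmath.rect(M, phi) constructs the same complex number directly.
M, phi = 2.5, 0.7
c = cmath.rect(M, phi)   # M * e^{i*phi}

re = M * math.cos(phi)   # R(c)
im = M * math.sin(phi)   # I(c)

assert abs(c.real - re) < 1e-12
assert abs(c.imag - im) < 1e-12
print(re, im)
```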
If we follow a math notational convention where $i^2=-1$ and $j=e^{{2i\pi\over 3}}$ is a complex root of unity, we just replace in the above $\phi$ by $\phi+{2i\pi\over 3}$.<|endoftext|>
-TITLE: Conjecture $\sum_{m=1}^\infty\frac{y_{n+1,m}y_{n,k}}{[y_{n+1,m}-y_{n,k}]^3}\overset{?}=\frac{n+1}{8}$, where $y_{n,k}=(\text{BesselJZero[n,k]})^2$
-QUESTION [13 upvotes]: While solving a quantum mechanics problem using perturbation theory I encountered the following sum
-$$
-S_{0,1}=\sum_{m=1}^\infty\frac{y_{1,m}y_{0,1}}{[y_{1,m}-y_{0,1}]^3},
-$$
-where $y_{n,k}=\left(\text{BesselJZero[n,k]}\right)^2$ is the square of the $k$-th zero of the Bessel function $J_n$ of the first kind.
-Numerical calculation using Mathematica showed that $S_{0,1}\approx 0.1250000$. Although I couldn't verify this with higher precision I found some other cases where analogous sums are close to rational numbers. Specifically, after some experimentation I found that the sums
-$$
-S_{n,k}=\sum_{m=1}^\infty\frac{y_{n+1,m}y_{n,k}}{[y_{n+1,m}-y_{n,k}]^3}
-$$
-are independent of $k$ and have rational values for integer $n$, and made the following conjecture
-
-$\bf{Conjecture:}\ $ for $k=1,2,3,...$ and arbitrary $n\geq 0$
- $$\sum_{m=1}^\infty\frac{y_{n+1,m}y_{n,k}}{[y_{n+1,m}-y_{n,k}]^3}\overset{?}=\frac{n+1}{8},\\ \text{where}\ y_{n,k}=\left(\text{BesselJZero[n,k]}\right)^2.
-$$
-
-How can one prove it?
-It seems this conjecture is correct also for negative values of $n$. For example for $n=-\frac{1}{2}$ one has $y_{\frac{1}{2},m}=\pi^2 m^2$, $y_{-\frac{1}{2},k}=\pi^2 \left(k-\frac{1}{2}\right)^2$ and the conjecture becomes (see Claude Leibovici's answer for more details)
-$$
-\sum_{m=1}^\infty\frac{m^2\left(k-\frac{1}{2}\right)^2}{\left(m^2-\left(k-\frac{1}{2}\right)^2\right)^3}=\frac{\pi^2}{16}.
-$$
-
-REPLY [5 votes]: There is a rather neat proof of this.
-First, note that there is already an analogue for this: -DLMF §10.21 says that a Rayleigh -function $\sigma_n(\nu)$ is defined as a similar power series -$$ \sigma_n(\nu) = \sum_{m\geq1} y_{\nu, m}^{-n}. $$ -It links to http://arxiv.org/abs/math/9910128v1 among others as an example of how -to evaluate such things. -In your case, call $\zeta_m = y_{\nu,m}$ and $z=y_{\nu-1,k}$ ($\nu$ is $n$ shifted by $1$), so that after -expanding in partial fractions your sum is -$$ \sum_{m\geq1} \frac{\zeta_m z}{(\zeta_m-z)^3} = \sum_{m\geq1} -\frac{z^2}{(\zeta_m-z)^3} + \frac{z}{(\zeta_m-z)^2}. $$ -Introduce the function -$$ y_\nu(z) = z^{-\nu/2}J_\nu(z^{1/2}). $$ -By DLMF 10.6.5 its derivative -satisfies the two relations -$$\begin{aligned} - y'_\nu(z) &= (2z)^{-1} y_{\nu-1}(z) - \nu z^{-1} y_\nu(z) -\\&= --\tfrac12 y_{\nu+1}(z). -\end{aligned} $$ -It also has the infinite product -expansion -$$ y_\nu(z) = \frac{1}{2^\nu\nu!}\prod_{k\geq1}(1 - z/\zeta_k). $$ -Therefore, each partial sum of $(\zeta_k-z)^{-s}$, $s\geq1$ can be evaluated in -terms of derivatives of $y_\nu$: -$$ \sum_{k\geq1}(\zeta_k-z)^{-s} = \frac{-1}{(s-1)!}\frac{d^s}{dz^s}\log -y_\nu(z). $$ -When evaluating this logarithmic derivative, the derivative $y'_\nu$ -can be expressed in terms of $y_{\nu-1}$, going down in $\nu$, but the derivative -$y'_{\nu-1}$ can be expressed in terms of $y_\nu$ using the other -relation that goes up in the index $\nu$. So even higher-order derivatives contain only $y_\nu$ and $y_{\nu-1}$. -I calculated your sum using this procedure with a CAS as: -$$ -\tfrac12z^2(\log y)''' -z(\log y)'' -= \tfrac18\nu + z^{-1} P\big(y_{\nu-1}(z)/y_\nu(z)\big), $$ -where $P$ is the polynomial -$$ P(q) = -\tfrac18 q^3 + (\tfrac38\nu-\tfrac18) q^2 + (-\tfrac14\nu^2 -+ \tfrac14\nu - \tfrac18)q. $$ -When $z$ is chosen to be any root of $y_{\nu-1}$, -$z=\mathsf{BesselJZero}[\nu-1, k]\hat{}2$, $P(q)=0$, your sum is equal -to -$$ \frac{\nu}{8}, $$ -which is $(n+1)/8$ in your notation. 
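As an independent numerical cross-check (my addition, not part of the proof): in the half-integer case quoted in the question the squared zeros are known in closed form, so the identity $\sum_m m^2(k-\tfrac12)^2/(m^2-(k-\tfrac12)^2)^3 = \pi^2/16$, i.e. $S = \nu/8$ with $\nu = 1/2$, can be tested by direct summation without any Bessel routines:

```python
import math

# Sketch (my own numeric check): for nu = 1/2 the squared zeros are
# y_{1/2,m} = (pi*m)^2 and y_{-1/2,k} = (pi*(k-1/2))^2, and the claimed
# identity reduces to  sum_m m^2 (k-1/2)^2 / (m^2 - (k-1/2)^2)^3 = pi^2/16.
def partial_sum(k, terms=4000):
    a = (k - 0.5) ** 2
    return sum(m * m * a / (m * m - a) ** 3 for m in range(1, terms + 1))

for k in (1, 2, 3):
    assert abs(partial_sum(k) - math.pi ** 2 / 16) < 1e-6
print("pi^2/16 confirmed numerically for k = 1, 2, 3")
```

The terms decay like $m^{-4}$, so a few thousand terms are far more than enough for the tolerance used here.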
-It is possible to derive a number of such closed forms for sums of
-this type. For example, by differentiating $\log y$ differently
-(going $\nu\to\nu+1\to\nu$), one would get
-$$ \sum_{m\geq1}
-\frac{y_{\nu,m}y_{\nu+1,k}}{(y_{\nu,m}-y_{\nu+1,k})^3} =
--\frac{\nu}{8}. $$
-Some other examples, for which the r.h.s. is independent of $z$ ($\zeta_m=y_{\nu,m}, z=y_{\nu-1,l}$, $l$ arbitrary):
-$$ \begin{gathered}
-\sum_{k\geq1} \frac{\zeta_k}{(\zeta_k-z)^2} = \frac14,\\
-\sum_{k\geq1} \frac{z^2}{(\zeta_k-z)^4} - \frac{1}{(\zeta_k-z)^2} + \frac1{24}\frac{5-\nu}{\zeta_k-z} = \frac{1}{48}, \\
-\sum_{k\geq1} \frac{\zeta_k}{(\zeta_k-z)^4} + \frac1{96}\frac{z-\zeta_k-8+4\nu}{(\zeta_k-z)^2} = 0.
-\end{gathered} $$
-or with $z=y_{\nu+1,l}$, $l$ arbitrary:
-$$ \begin{gathered}
-\sum_{k\geq1} \frac{z^2}{(\zeta_k-z)^3} = -\tfrac18\nu-\tfrac14,
-\end{gathered} $$
-and they get messier with higher degrees.<|endoftext|>
-TITLE: Prove that there is only one sequence which meets the following conditions
-QUESTION [9 upvotes]: Problem statement is as follows:
-Given $n\geq 2$, prove that you can choose $1 \lt a_1 \lt a_2 \lt ... \lt a_n$ such that $$a_i | 1 + a_1a_2...a_{i-1}a_{i+1}...a_n$$ Prove that if and only if $n \in \{2, 3, 4\}$ the sequence is unique.
-I have solved the first part. A sequence that satisfies the conditions is $a_1 = 2$, $a_i = \prod_{j \lt i}{a_j} + 1$. This works because all $a_i$ with $i \gt j$ are congruent to $1$ modulo $a_j$. As for the second part, I proved it in the case $n = 2$, which seems pretty easy. But I have no clue how to continue. Any help would be appreciated.
-
-REPLY [3 votes]: To prove the only if direction, it suffices to find more than one sequence if $n = 5$ since these can then be extended by the method you give in the problem statement.
-For $n = 5$, we have the three sequences
-$$a_1 = 2,a_2 = 3,a_3 = 7,a_4 = 43,a_5 = 1807$$
-$$a_1 = 2,a_2 = 3,a_3 = 7,a_4 = 47,a_5 = 395$$
-$$a_1 = 2,a_2 = 3,a_3 = 11,a_4 = 23,a_5 = 31$$
-Thus there are at least three sequences for $n \geq 5$.
-I will now prove that the sequence is unique for $n = 3$ and $n = 4$.
-This completes the if direction since $n = 2$ is trivial.
-Note first that the given condition implies that the $a_i$ are pairwise relatively prime.
-The key observation (as stated in the other answer) is the following:
-The above conditions imply that for any sequence of distinct
-indices $i_1,...,i_k$ we have
-$$ a_{i_1}a_{i_2}...a_{i_k} | 1 + \displaystyle\sum_{m = 1}^k a_1a_2...a_{i_m - 1}a_{i_m + 1}...a_n $$
-This follows by multiplying the relations $a_{i_m}|1+a_1a_2...a_{i_m - 1}a_{i_m + 1}...a_n$.
-Now I will show uniqueness if $n = 3$. Note that the above observation implies that
-$$a_2a_3 | 1 + a_1a_3 + a_1a_2$$
-I claim that we must in fact have $a_2a_3 = 1 + a_1a_3 + a_1a_2$. This follows since if $a_2a_3 < 1 + a_1a_3 + a_1a_2$, then in fact
-$$2a_2a_3 \leq 1 + a_1a_3 + a_1a_2$$
-which is impossible since $a_2a_3 > a_1a_3$ and $a_2a_3 > a_1a_2$.
-Also $a_1 | 1 + a_2a_3 = 1 + (1 + a_1a_3 + a_1a_2) = 2 + a_1(a_2 + a_3)$.
-Thus $a_1 | 2$ and $a_1 = 2$.
-Now the other conditions on the sequence imply $a_2 | 1 + 2a_3$ and
-$a_3 | 1 + 2a_2$.
-Notice that we must have $a_3 = 1 + 2a_2$ as otherwise $2a_3 \leq 1 + 2a_2$ which would mean $a_3 \leq a_2$.
-Then $a_2 | 1 + 2(1 + 2a_2) = 3 + 4a_2$ so that $a_2 | 3$ and we have
-$a_2 = 3$, $a_3 = 7$. This completes the case $n = 3$.
-Now I will prove uniqueness for $n = 4$. This proof will be more complicated than for $n = 3$ as there are many more possibilities to eliminate.
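(Aside, my addition: the three $n = 5$ sequences above are easy to check mechanically before diving into the uniqueness arguments:)

```python
from math import prod

# Sketch (my addition): verify a_i | 1 + a_1···a_{i-1} a_{i+1}···a_n
# for each of the three n = 5 sequences listed above.
def is_valid(seq):
    total = prod(seq)
    return all((1 + total // a) % a == 0 for a in seq)

for seq in [(2, 3, 7, 43, 1807), (2, 3, 7, 47, 395), (2, 3, 11, 23, 31)]:
    assert is_valid(seq), seq
print("all three n = 5 sequences satisfy the divisibility condition")
```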
-By the observation made at the beginning, we have that the case $k = 3$ implies that -$$a_2a_3a_4 | 1 + a_1a_2a_3 + a_1a_2a_4 + a_1a_3a_4$$ -Now since $a_2a_3a_4$ is strictly greater than all of the summands on the other side, we have that $$3a_2a_3a_4 > 1 + a_1a_2a_3 + a_1a_2a_4 + a_1a_3a_4$$ -Thus we have two possibilities -$$2a_2a_3a_4 = 1 + a_1a_2a_3 + a_1a_2a_4 + a_1a_3a_4$$ -or -$$a_2a_3a_4 = 1 + a_1a_2a_3 + a_1a_2a_4 + a_1a_3a_4$$ -I will proceed to eliminate the first possibility. -Note that in the first possibility, we cannot have that $a_1$ is even. -Additionally we see that -$$a_1|1 + a_2a_3a_4 = (2 + 1 + a_1a_2a_3 + a_1a_2a_4 + a_1a_3a_4)/2$$ -as $a_1$ is odd we get -$$a_1|3 + a_1a_2a_3 + a_1a_2a_4 + a_1a_3a_4$$ -so that $a_1|3$ and $a_1 = 3$. Using this information we see that -$$2a_2a_3a_4 = 1 + 3a_2a_3 + 3a_2a_4 + 3a_3a_4 < 1 + 9a_3a_4$$ -consequently we obtain that $a_2 \leq 9/2$ so $a_2 = 4$. -Plugging this back in we see that -$$8a_3a_4 = 1 + 12a_3 + 12a_4 + 3a_3a_4$$ -or rearranging -$$a_4 = \frac{12a_3 + 1}{5a_3 - 12}$$ -Combining this with $a_3 \geq 5$ we see that $a_4 \leq 61/13 < 6$, a contradiction. This proves that in fact we must have -$$a_2a_3a_4 = 1 + a_1a_2a_3 + a_1a_2a_4 + a_1a_3a_4$$ -Now, we use the fact that $a_1 | 1 + a_2a_3a_4$ to conclude that -$$a_1 | 2 + a_1a_2a_3 + a_1a_2a_4 + a_1a_3a_4$$ -and so $a_1 = 2$. Again we plug this is to obtain -$$a_2a_3a_4 = 1 + 2a_2a_3 + 2a_2a_4 + 2a_3a_4 < 1 + 6a_3a_4$$ -So we have $a_2 \leq 6$. Since $a_2$ and $a_1$ are relatively prime we -get that $a_2 = 3$ or $a_2 = 5$. -Suppose that $a_2 = 5$ and plug this in to get -$$5a_3a_4 = 1 + 10a_3 + 10a_4 + 2a_3a_4$$ -This implies that $$a_4 = \frac{10a_3 + 1}{3a_3 - 10}$$ -Combining this with the fact that $a_3 \geq 7$ ($gcd(a_1,a_3) = 1$) -we see that $a_4 \leq 71/11 < 7$, a contradiction. -Thus we must have $a_2 = 3$. 
Plugging this in we get
-$$3a_3a_4 = 1 + 6a_3 + 6a_4 + 2a_3a_4$$
-Thus we have $$a_4 = \frac{6a_3 + 1}{a_3 - 6}$$ which implies $a_3 \geq 7$.
-If $a_3 > 7$, then $a_3$ must be at least 17 ($a_3$ is relatively prime with $2$ and $3$, and $11$ and $13$ result in $a_4$ not being an integer).
-But if $a_3 \geq 17$, then $a_4 \leq 103/11 < 10$, a contradiction.
-Thus $a_3 = 7$ and $a_4 = 43$. This completes the proof of uniqueness if $n = 4$.<|endoftext|>
-TITLE: Dividing a unit square into rectangles
-QUESTION [15 upvotes]: I've been given this task:
-
-A unit square is cut into rectangles. Each of them is coloured by either yellow or blue and inside it a number is written. If the color of the rectangle is blue then its number is equal to rectangle’s width divided by its height. If the color is yellow, the number is rectangle’s height divided by its width. Let $x$ be the sum of the numbers in all rectangles. Assuming the blue area is equal to the yellow one, what is the smallest possible $x$?
-
-I've come up with the solution below: I've simply split the unit square in half and assigned the colors. The reasoning behind that is that I want to have the blue side as high as possible (to make the $x$ as low as possible) and the yellow side as wide as possible (for the same reason). I didn't divide the square into rectangles with infinitely small height or width, because no matter how small they are, they eventually add up and form the two big rectangles that are on my picture.
-I feel my solution is wrong though, because it is stupidly easy (you have to admit, that often means it's wrong). Is there anything I'm missing here?
-
-REPLY [2 votes]: Here is the full solution. The answer is, indeed, $5/2$. An example was already presented by the OP. Now we need to prove the inequality.
-First of all we notice that for any color (blue or yellow) the sum of height/width (or width/height) ratios is always at least $1/2$.
-Indeed, since all dimensions do not exceed $1$, we have (e.g., for blue color)
-$$
-\sum \frac{w_i}{h_i} \geqslant \sum w_i h_i = \frac 12 \,,
-$$
-as the final sum is the total area of all blue rectangles.
-Second, we observe that either the blue rectangles connect the left and the right sides of the square, or the yellow rectangles connect the top and the bottom sides. We leave that as an exercise for the readers :) (Actually, as you will see below, it would suffice to show that either the sum of all blue widths or the sum of all yellow heights is at least $1$.)
-Without loss of generality, assume that the blue rectangles connect the lateral sides of the large square. Then we intend to prove that
-$$
-\sum \frac{w_i}{h_i} \geqslant 2 \,,
-$$
-where the summation is done over the blue rectangles. Combining that with the inequality $\sum h_i/w_i \geqslant 1/2$ for the yellow rectangles we will have the required result, namely that the overall sum is always at least $5/2$.
-Since the projections of the blue rectangles onto the bottom side must cover it completely, we have $\sum w_i \geqslant 1$. We also have $\sum w_ih_i = 1/2$. Now all we need is the following fact.
-Lemma. Two finite sequences of positive numbers $\{w_i\}$ and $\{h_i\}$, $i = 1, \ldots, n$, are such that
-$$
-\sum w_i = W, \qquad \sum w_ih_i = S \,.
-$$
-Then
-$$
-\sum \frac{w_i}{h_i} \geqslant \frac{W^2}S \,.
-$$
-Proof. We will use the well-known Jensen's inequality (which follows from the geometric convexity of the area above the graph of any convex function) for the function $f(x) = 1/x$. That gives us
-$$
-\sum \frac{w_i}W f(h_i) \geqslant f \left( \sum \frac{w_i}W h_i \right) \,.
-$$
-In other words
-$$
-\frac1W \sum \frac{w_i}{h_i} \geqslant \frac1{\sum \frac{w_i}W h_i } =
-\frac{W}{\sum w_i h_i} = \frac WS \,,
-$$
-and the required inequality immediately follows.
$\square$ -Applying this lemma to our case where $W \geqslant 1$ and $S = 1/2$ completes our solution.<|endoftext|> -TITLE: How to show that $\mathfrak{sl}_n(\mathbb{R})$ and $\mathfrak{sl}_n(\mathbb{C})$ are simple? -QUESTION [5 upvotes]: We defined a Lie algebra to be simple, if it has no proper Lie ideals and is not $k$ (the ground field). We have the proposition that $\mathfrak{sl}_n(\mathbb{R})$ and $\mathfrak{sl}_n(\mathbb{C})$ are simple $\mathbb{R}$-Lie algebras, and $\mathfrak{sl}_n(\mathbb{C})$ is also simple as a $\mathbb{C}$-Lie algebra. We have shown the proposition for the case $n=2$ using an $\mathfrak{sl}_2$-triple. -How would I show the statement for general $n \in \mathbb{N}$? -I suspect there's a way to use induction, but don't see how that would work. Also, I have no idea how I would generalize and apply the idea with the $\mathfrak{sl}_2$-triple to higher dimensions. I can't even just do the case $n=3$. How do I generalize? - -REPLY [3 votes]: There is a direct proof that $\mathfrak{sl}(n,K)$ is a simple Lie algebra for any field $K$ of characteristic zero, which just uses Lie brackets of traceless matrices to show that a nontrivial ideal $J$ must be $\mathfrak{sl}(n,K)$ itself, see 6.4 in the book "Naive Lie Theory". This works uniformly for all $n\ge 2$. For a proof using a bit more theory, see Lemma $1.3$ here.<|endoftext|> -TITLE: Embedding fields into the complex numbers $\mathbb{C}$. -QUESTION [7 upvotes]: Let $k$ be a field of characteristic $0$ with $\mathrm{trdeg}_\mathbb{Q}(k)$ at most the cardinality of the continuum. I want to prove the existence of a field homomorphism $k\rightarrow\mathbb{C}$. (I hope this statement is even true, I made it up on my own.) -Let $S$ be a transcedence base for $k/\mathbb{Q}$, $S'$ one for $\mathbb{C}/\mathbb{Q}$. Let $S\rightarrow S'$ be an injection. 
The induced map $\mathbb{Q}[S]\rightarrow\mathbb{Q}[S']\rightarrow\mathbb{Q}(S')\rightarrow\mathbb{C}$ is injective, and hence (by the mapping property of the fraction field) induces a map $\mathbb{Q}(S)\rightarrow\mathbb{C}$. But as $k/\mathbb{Q}(S)$ is algebraic, we get an induced map $k\rightarrow\mathbb{C}$.
-Is this proof correct?
-
-REPLY [2 votes]: Your proof seems all right to me.
-As an aside, this method can be used to prove that $\mathbb{C}$ admits infinitely many proper subfields isomorphic to itself: simply choose any non-surjective injection $S'\to S'$, which will induce a non-surjective morphism $\mathbb{Q}(S')\to \mathbb{Q}(S')$ and thus a non-surjective morphism $\mathbb{C}\to \mathbb{C}$.<|endoftext|>
-TITLE: How can I deduce that $\lim\limits_{x\to0} \frac{\ln(1+x)}x=1$ without Taylor series or L'Hospital's rule?
-QUESTION [5 upvotes]: How can I calculate this limit without Taylor series and L'Hospital's rule?
-$$\lim _{x\to0} \frac{\ln(1+x)}x=1$$
-
-REPLY [7 votes]: If the limit exists, say $L$, then you can state that:
-$$\begin{align}
-L&=\lim_{x\to0}\frac{\ln(1+x)}{x}\\
-\therefore e^L&=e^{\lim_{x\to0}\frac{\ln(1+x)}{x}}\\
-&=\lim_{x\to0}e^{\frac{\ln(1+x)}{x}}\\
-&=\lim_{x\to0}(e^{\ln(1+x)})^\frac{1}{x}\\
-&=\lim_{x\to0}(1+x)^\frac{1}{x}\\
-&=e\\
-\therefore L&=1
-\end{align}$$<|endoftext|>
-TITLE: I need help to advance in the resolution of that limit: $ \lim_{n \to \infty}{\sqrt[n]{\frac{n!}{n^n}}} $
-QUESTION [7 upvotes]: How can I continue this limit resolution?
-The limit is:
-$$ \lim_{n \to \infty}{\sqrt[n]{\frac{n!}{n^n}}} $$
-This is what I have done:
-I apply this test: $ \lim_{n \to \infty}{\sqrt[n]{a_n}} = \lim_{n \to \infty}{\frac{a_{n+1}}{a_n}} $ (valid whenever the limit on the right exists)
-Operating and simplifying I arrive at this point:
-$$ \lim_{n \to \infty}{\frac{n^n}{(n+1)^n}} $$
-Have I done something wrong? Thanks!
- -REPLY [3 votes]: By Stirling approximation as $n\to \infty$ we have -$$ -\left(\frac{n!}{n^n}\right)^{1/n}\sim \left(\sqrt{2\pi n}\frac{(n/e)^n}{n^n}\right)^{1/n} \sim \frac{1}{e}(2\pi n)^{1/(2n)} \to e^{-1}. -$$ -As pointed out by Clement, one should justify why one can take the limit directly inside; the reason is that for any $x<1$<|endoftext|> -TITLE: Multiplication of algebraic fraction not giving desired result -QUESTION [5 upvotes]: I am having a try at solving this: - -which is supposed to return: - -but I get stuck at: - -which can be written as - -REPLY [2 votes]: Hint:$$x^3+3x^2y+3xy^2+y^3=(x+y)^3$$<|endoftext|> -TITLE: Obtaining Fourier series of function without calculating the Fourier coefficients -QUESTION [6 upvotes]: In this question, in one of the answers, it's shown how to get from $$f\left ( x \right )=\sum_{n=1}^{\infty}\frac{\sin\left ( nx \right )}{10^{n}}$$ -to $$f\left ( x \right )=\frac{10 \sin x}{101-20\cos x}$$ -I recently came across a function similar to $f$ and I needed to compute its Fourier coefficients. That can be done using contour integration. But then I wondered whether it is possible to go backwards and see that the function is an imaginary or real part of some complex-valued function which can be expressed as an infinite sum. Basically the whole process in reverse. I hope that isn't just a trial and error process. -EDIT: I might not have expressed myself clearly. I do know how to get from the geometric progression to the function; it is basic precalc. It would be very elegant to be able to go backwards, that is, given the analytic expression for the function, trace it back to a geometric, or some other, series. That could save us from a lot of work involving integration. Functions such as the one I posted above seem to be very fitting for this problem. - -REPLY [3 votes]: Yes, this is possible.
The way to do this is by analogy: With rational functions (that don't have a pole at $x=0$), one may always use a partial fraction decomposition to rewrite a function as a linear combination of the functions $\frac{1}{1-cx}$ (or powers thereof) and a polynomial. Then we use $\frac{1}{1-x}=1+x+x^2+x^3+\ldots$ (for $|x|<1$) to get a sum. So, for instance, one can go from a rational function back to a series with rational functions like this: $$\frac{1}{x^2+1}=\frac{1/2}{1-ix}+\frac{1/2}{1+ix}=\frac{1}2\sum_{n=0}^{\infty}\left[(ix)^n + (-ix)^n\right]=\sum_{n=0}^{\infty}(-1)^nx^{2n}.$$ -This is heavily related to the idea of generating functions - in particular, the coefficient of $x^n$ in a rational function will always obey some homogeneous linear recurrence relation after a point. -Now, let's say we've been given the ratio of two trigonometric polynomials - that is, a rational function in $e^{ix}$ such as -$$f(x)=\frac{10\sin(x)}{101-20\cos(x)}=\frac{5i(e^{-ix}-e^{ix})}{101-10(e^{ix}+e^{-ix})}=\frac{5i(1-e^{2ix})}{101e^{ix}-10(e^{2ix}+1)}.$$ -We then continue rearranging into a convenient form, where the denominator has constant term $1$: -$$f(x)=\frac{-\frac{1}2i(1-e^{2ix})}{e^{2ix}-\frac{101}{10}e^{ix}+1}$$ -Then, to begin partial fraction decomposition, we factor the denominator into terms of the form $(1-ce^{ix})$, which gives $e^{2ix}-\frac{101}{10}e^{ix}+1=(1-10e^{ix})(1-\frac{1}{10}e^{ix})$. Then we want to find the constants so that -$$f(x)=c_0+\frac{c_1}{1-\frac{1}{10}e^{ix}}+\frac{c_2}{1-10e^{ix}}$$ -which can be done by putting everything over one numerator and equating the numerator of this with that of our previous expression (other techniques for finding the coefficients of a partial fraction expansion work here too, though).
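As a numerical cross-check (a Python sketch of my own; the function names are mine, not from the answer), one can confirm that the constants $c_0=\frac i2$, $c_1=c_2=-\frac i2$ solve this, and that the resulting series matches $f$:

```python
import cmath
import math

def f(x):
    # closed form from the question
    return 10 * math.sin(x) / (101 - 20 * math.cos(x))

def partial_fractions(x):
    # c0 + c1/(1 - w/10) + c2/(1 - 10w) with c0 = i/2, c1 = c2 = -i/2
    w = cmath.exp(1j * x)
    return 0.5j - 0.5j / (1 - w / 10) - 0.5j / (1 - 10 * w)

def series(x, terms=40):
    # the target series: sum over n >= 0 of sin(nx)/10^n
    return sum(math.sin(n * x) / 10**n for n in range(terms))

for x in (0.3, 1.0, 2.5):
    assert abs(partial_fractions(x) - f(x)) < 1e-12
    assert abs(series(x) - f(x)) < 1e-12
```

The truncation at 40 terms is harmless here, since the tail is bounded by a geometric series with ratio $1/10$.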
One finds that the form of $f(x)$ is -$$f(x)=\frac{i}2+\frac{-i/2}{1-\frac{1}{10}e^{ix}}+\frac{-i/2}{1-10e^{ix}}.$$ -Now, we proceed with a little care, since naively expanding these terms gives only terms in positive powers of $e^{ix}$ (so we wouldn't recover real-valued terms in the sum) and would also use the series $1+10e^{ix}+100e^{2ix}+1000e^{3ix}+\ldots$, which obviously doesn't converge. To get around this, notice that we can actually expand $\frac{1}{1-cx}$ using two identities: -$$\frac{1}{1-x}=1+x+x^2+x^3+\ldots$$ -$$1- \frac{1}{1-x}=1+\frac{1}x+\frac{1}{x^2}+\frac{1}{x^3}+\ldots$$ -where the latter is valid when $|x|>1$ and the former when $|x|<1$. Given that $|e^{ix}|=1$, we expand the term $\frac{-i/2}{1-\frac{1}{10}e^{ix}}$ using the first identity and the term $\frac{i}2-\frac{i/2}{1-10e^{ix}}$ using the latter to get: -$$f(x)=\frac{-i}2\left(1+\frac{1}{10}e^{ix}+\frac{1}{100}e^{2ix}+\frac{1}{1000}e^{3ix}+\ldots\right)+\frac{i}2\left(1+\frac{1}{10}e^{-ix}+\frac{1}{100}e^{-2ix}+\frac{1}{1000}e^{-3ix}+\ldots\right)$$ -and putting them together into one sum: -$$f(x)=\sum_{n=0}^{\infty}\frac{-i}{2}\cdot \frac{e^{nix}}{10^n}+\frac{i}{2}\cdot \frac{e^{-nix}}{10^n}=\sum_{n=0}^{\infty}\frac{\sin(nx)}{10^n}.$$ - -One might note that repeated factors need special handling, since they can result in non-linear denominators in the partial fractions decomposition - but identities like -$$\frac{1}{(1-x)^n}={n-1 \choose n-1}+{n \choose n-1}x+{n+1 \choose n-1}x^2+{n+2 \choose n-1}x^3+\ldots$$ -cover these cases; here, for fixed $n$, the expression ${n+c-1\choose n-1}$ is simply a polynomial in $c$ of degree $n-1$ - so you might get sums like $\sum_{n=0}^{\infty}\frac{n\sin(nx)}{10^n}$ at the end. -Another restriction is that $(1-ce^{ix})$ shouldn't be a factor of the denominator for $|c|=1$, since then we can use neither expansion to get a convergent power series.
I don't see any easy patch for this, but it holds for any function with no poles.<|endoftext|> -TITLE: Why $e^{\pi}-\pi \approx 20$, and $e^{2\pi}-24 \approx 2^9$? -QUESTION [18 upvotes]: This was inspired by this post. Let $q = e^{2\pi\,i\tau}$. Then, -$$\alpha(\tau) = \left(\frac{\eta(\tau)}{\eta(2\tau)}\right)^{24} = \frac{1}{q} - 24 + 276q - 2048q^2 + 11202q^3 - 49152q^4+ \cdots\tag1$$ -where $\eta(\tau)$ is the Dedekind eta function. -For example, let $\tau =\sqrt{-1}$ so $\alpha(\tau) = 2^9 =512 $ and $(1)$ "explains" why, -$$512 \approx e^{2\pi\sqrt1}-24$$ -$$512(1+\sqrt2)^3 \approx e^{2\pi\sqrt2}-24$$ -and so on. - - -Q: Let $q = e^{-\pi}$. Can we find a relation, - $$\pi = \frac{1}{q} - 20 +c_1 q + c_2 q^2 +c_3 q^3 +\cdots\tag2$$ - where the $c_i$ are well-defined integers or rationals such that $(2)$ "explains" why $\pi \approx e^{\pi}-20$? - -Update: For example, we have the rather curious, -$$\beta_1 := \frac{1}{q} - 20 +\tfrac{1}{48}q - \tfrac{1}{300}q^3 -\tfrac{1}{972}q^5 +\tfrac{1}{2160}q^7+\tfrac{1}{\color{brown}{2841}}q^9-\tfrac{1}{\color{brown}{2369}}q^{11}-\cdots\tag3$$ -$$\beta_2 := \frac{1}{q} - 20 +\tfrac{1}{48}q - \tfrac{1}{300}q^3 -\tfrac{1}{972}q^5 +\tfrac{1}{2160}q^7+\tfrac{1}{\color{brown}{2842}}q^9-\tfrac{1}{\color{brown}{2810}}q^{11}-\cdots\tag4$$ -With $q = e^{-\pi}$, -$$\pi \approx \beta_1 = e^\pi -20 +\tfrac{1}{48}q-\dots\; (\text{differ:}\; {-4}\times 10^{-22})\\ -\pi \approx \beta_2 = e^\pi -20 +\tfrac{1}{48}q-\dots\; (\text{differ:}\; {-3}\times 10^{-22})$$ -However, there seems to be an indefinite number of formulas, where the choice of a coefficient (say, $2841$ or $2842$) determines an ever-branching tree of formulas. But there might be a subset where the coefficients have a nice closed-form. - -REPLY [6 votes]: This does not follow your proposal exactly but it is built on series with rational terms only. 
-From expansions -$$ -e^\pi=\sum_{k=0}^\infty\frac{3\left(e^{3\pi}-\left(-1\right)^ke^{-3\pi}\right)\Gamma\left(\frac{k}{2}+3i\right)\Gamma\left(\frac{k}{2}-3i\right)}{2 \pi k!}=\sum_{k=0}^\infty a(k) -$$ -http://oeis.org/A166748 -and -$ -\pi=3+\sum_{k=1}^\infty\frac{1}{4·16^k}\left(-\frac{40}{8k+1}+\frac{56}{8k+2}+\frac{28}{8k+3}+\frac{48}{8k+4}+\frac{10}{8k+5}+\frac{10}{8k+6}-\frac{7}{8k+7}\right)=3+\sum_{k=1}^\infty b(k) -$ -https://oeis.org/wiki/User:Jaume_Oliver_Lafont/Constants#Series_involving_convergents_to_Pi -the following representation is obtained -$$e^\pi-\pi-20 = \frac{1201757159}{10580215726080}+\sum_{k=16}^\infty a(k) -\sum_{k=2}^\infty b(k) \approx -0.00090002 \approx -\left(\frac{3}{100}\right)^2$$ -Cancellation comes from the first three decimal digits: -$$ -\sum_{k=0}^{15} a(k) = \frac{991388824265291953}{42849873690624000}\approx 23.136(2) -$$ -$$ -b(1) = \frac{49087}{360360} \approx .136(1) -$$ -[EDIT] Three digit cancellation may also be obtained by taking 14 terms from the series for $e^\pi$ and 3 terms from the simpler -$$\pi-3=\sum_{k=1}^{\infty}\frac{3}{(1+k)(1+2k)(1+4k)}$$ -(Lehmer, http://matwbn.icm.edu.pl/ksiazki/aa/aa27/aa27121.pdf, pag 139-140) -[EDIT] -Another expression with a "wrong digit" that leads to higher precision when corrected is given by -$$ e - \gamma-log\left(\frac{17}{2}\right) \approx 0.0010000000612416$$ -$$ e \approx \gamma+log\left(\frac{17}{2}\right) +\frac{1}{10^3} +6.12416·10^{-11}$$ -Why is $e$ close to $H_8$, closer to $H_8\left(1+\frac{1}{80^2}\right)$ and even closer to $\gamma+log\left(\frac{17}{2}\right) +\frac{1}{10^3}$? -[EDIT] $$\sum_{k=0}^\infty \frac{2400}{(4 k+9) (4 k+15)} = 100 \pi-\frac{58480}{231} \approx 60.99909$$<|endoftext|> -TITLE: Is $gnu(2304)$ known? -QUESTION [6 upvotes]: I wonder whether the number of groups of order $2304=2^8\times 3^2$ is known. GAP exited because of the memory. 
$gnu(2304)$ must be greater than $1,000,000$ because of $gnu(768)=1,090,235$ and $768=2^8\times 3 \mid 2^8\times 3^2=2304$. - -Is $gnu(2304)$ known, or at least a tight upper bound? -What is the smallest number $n$ such that it is infeasible to calculate $gnu(n)$? I think $gnu(2048)$ will be known in at most ten years, probably much earlier. -Could $n=3072=2^{10}\times 3$ be the smallest too-difficult case? - -REPLY [13 votes]: There are indeed $112\,184+1\,953+15\,641\,993 = 15\,756\,130$ groups of order 2304, computed using an algorithm developed by Bettina Eick and myself. As Alexander Konovalov already kindly pointed out, you can find this number in our paper "The construction of finite solvable groups revisited", J. Algebra 408 (2014), 166–182, also available on the arXiv. -This is part of an on-going project to catalogue all groups up to order 10,000 (with a few orders excepted, e.g. multiples of 1024, as there are simply too many of these). So in particular, we skip groups of order 3072. There are already $49\,487\,365\,422$ groups of order 1024, and I expect the number of groups of order 3072 to be several orders of magnitude larger. -To maybe slightly motivate why I think so, consider the ratio of the numbers of (isomorphism classes of) groups of order $3\cdot 2^n$ to those of order $2^n$, computed here using GAP: -gap> List([0..9], n -> NrSmallGroups(3*2^n)/NrSmallGroups(2^n)*1.0); -[ 1., 2., 2.5, 3., 3.71429, 4.52941, 5.77903, 8.66366, 19.4366, 38.9397 ] - -If you plot $n$ against $gnu(3\cdot 2^n)/gnu(2^n)$, you'll see a roughly exponential-looking curve. Of course that is a purely empirical argument, not a proof of anything.<|endoftext|> -TITLE: When is the limit of a sum equal to the sum of limits? -QUESTION [12 upvotes]: I was trying to solve a problem and got stuck at the following step: -Suppose $n \to \infty$. -$$\lim \limits_{n \to \infty} \frac{n^3}{n^3} = 1$$ -Let us rewrite $n^3=n \cdot n^2$ as $n^2 + n^2 + n^2 + n^2 \dots +n^2$,$\space$ n times.
-Now we have -$$\lim \limits_{n \to \infty} \frac{n^3}{n^3} = \frac {n^2 + n^2 + n^2 + n^2 + n^2 \dots +n^2}{n^3} $$ -As far as I understand, we can always rewrite the limit of a sum as the sum of limits ... -$$\dots = \lim \limits_{n \to \infty} \left(\frac{n^2}{n^3} + \frac{n^2}{n^3} + \dots + \frac{n^2}{n^3}\right)$$ -...but we can only let ${n \to \infty}$ and calculate the limit if all of the individual limits are of defined form (is this correct?). That would be the case here, so we have: -$$\dots = \lim \limits_{n \to \infty} \left(\frac{1}{n} + \frac{1}{n} + \dots + \frac{1}{n}\right) = 0 + 0 + \dots + 0 = 0 \quad \text{[letting } n \to \infty\text{]}$$ -and the results we get are not the same. -Where did I go wrong? - -REPLY [4 votes]: The problem you have described is common and there are three possible scenarios to cover: - -Number of terms is finite and independent of $n$: The limit of a sum is equal to the sum of limits of terms provided each term has a limit. -Number of terms is infinite: Some sort of uniform convergence of the infinite series is required, and under suitable conditions the limit of a sum is equal to the sum of limits of terms. -Number of terms is dependent on $n$: This is the case which applies to the question at hand. The problem is difficult compared to the previous two cases, and a partial solution is provided by the Monotone Convergence Theorem. Sadly the theorem does not apply to your specific example. But it famously applies to the binomial expansion of $(1+n^{-1})^n$ and gives the result $$e=1+1+\frac{1}{2!}+\frac{1}{3!}+\dots$$<|endoftext|> -TITLE: Confusion about contour integration of constant function: intuition vs. Residue Theorem -QUESTION [9 upvotes]: Let's say we have the holomorphic function -$$f(z) = 1.$$ -Because $f(z)$ has no poles, according to the Residue Theorem we have -$$\oint_\gamma f(z)\,dz = 0$$ -for any closed counterclockwise path $\gamma$. -But let's say that $\gamma$ is a circle around the origin of radius $r$.
Then shouldn't we have -$$\oint_\gamma f(z)\,dz = 2 \pi r$$ -because -$$\oint_\gamma f(z)\,dz = \oint_\gamma dz = \text{arclength}\,\gamma$$ -? -I'm pretty sure the result using the Residue Theorem is correct, so then my reasoning must be incorrect for the second way of looking at it. -Where is my reasoning incorrect? - -REPLY [12 votes]: No, because $dz$ does not represent arclength - rather, $|dz|$ does. So the correct statement would be -$$\oint_{\gamma} dz = 0, \quad\quad \oint_{\gamma} |dz| = 2\pi r$$ -Remember, you can always go back to the Riemann sum; when defining the integral $dz$, you sum things that look like $\Delta z$. If you move in a circular path, you don't travel anywhere - hence, the sum of $\Delta z$ is zero. - -REPLY [2 votes]: If $f$ is the constant fuction $1$, The integral -$$\int_\gamma f(z)\,dz$$ -does not give you the length of the arc $\gamma$. That would be -$$\int_\gamma f(z)\,d|z|,$$ -where $d|z|$ is integration with respect to arc-length.<|endoftext|> -TITLE: Smallest $n$-digit number $x$ with cyclic permutations multiples of $1989$ -QUESTION [8 upvotes]: Suppose $x=a_1...a_n$, where $a_1...a_n$ are the digits in decimal of $x$ and $x$ is a positive integer. We define $x_1=x$, $x_2=a_na_1...a_{n-1}$, and so on until $x_n=a_2...a_na_1$. Find the smallest $n$ such that each of $x_1,...,x_n$ are divisible by $1989$. Any zero digits are ignored when at the front of a number, e.g. $x_1 = 1240$ then $x_2 = 124$. -Defining $T_n = 11...(n 1's)$ and $S_n = \sum 10^{n-k}a_k $ it is clear to see that $1989k=T_nS_n$ is a necessary condition, which is useful for disproving small $n$. -But doing this I found $n \geq 6$, and the arithmetic becomes very difficult to do with pen and paper. -I'm looking for a more algebraic way of solving this question. Can anyone help me? - -REPLY [3 votes]: This problem can be solved surprisingly easily, building upon Steven Stadnicki's hint, as follows. 
-I write $a=a_1a_2\cdots a_{n-1}=\sum_{i=1}^{n-1}a_i10^{n-1-i}$ and $b=a_n$. -Then $x_1=10a+b$ and $x_2=10^{n-1}b+a$. These must both be divisible by all of $9,13$ and $17$. Consequently the number $10x_2-x_1=(10^n-1)b$ must similarly be divisible by all those three primes. As Steve explained, divisibility by nine is automatic. The key observation is that as $b$ is constrained to be in the range $(0)1,\ldots,9$ (we ignore zero for a moment), we can conclude that $10^n-1$ must be divisible by both $13$ and $17$. -The order of $10$ modulo $13$ is six (ten is a quadratic residue modulo thirteen), but modulo $17$ it has the full order $16$ (in other words, ten is a primitive root modulo seventeen). The least common multiple of $6$ and $16$ is $48$, so we conclude that $n$ must be a multiple of $48$. -But the above calculation shows that any 48-digit multiple of $1989$ actually has this property! Start with such an $x_1$. The calculation we already did shows that $10x_2-x_1$ is necessarily divisible by $1989$. Therefore so is $x_2$. Rinse. Repeat. -So to find the smallest such $x$ we need -$$ -m=\left\lceil\frac{10^{47}}{1989}\right\rceil= -50276520864756158873805932629462041226747110, -$$ -and the answer is -$$ -x=1989m=100000000000000000000000000000000000000000001790=10^{47}+1790. -$$ - -The reason we were able to ignore the case $a_n=b=0$ when deriving the condition $48\mid n$ is that the number must have some non-zero digits. So we can simply rotate its digits cyclically to have a non-zero digit in the least significant position. - -I did check with Mathematica that all the 48 cyclic shifts of this number are divisible by $1989$ :-) It is not unthinkable to actually do this with pen & paper work. You see, most of those cyclic shifts are of the form $17901\cdot10^\ell$ and $17901=1989\cdot9.$<|endoftext|> -TITLE: Infinity-to-one function -QUESTION [11 upvotes]: Are there continuous functions $f:I\to S^2$ such that $f^{-1}(\{x\})$ is infinite for every $x\in S^2$?
-Here, $I=[0,1]$ and $S^2$ is the unit sphere. -I have no idea how to do this. -Note: This is not homework! The question came up when I was thinking about something else. - -REPLY [11 votes]: Consider a space filling curve $\gamma: I \rightarrow I^2$, the projection $q: I^2 \rightarrow S^2$ given by the quotient topology on the square that furnishes the sphere, and the projection $\pi: I^2 \rightarrow I$ on the first coordinate. -The map $q \circ \gamma \circ \pi \circ \gamma$ satisfies what you want.<|endoftext|> -TITLE: why don't standard analysis texts relax the injectivity requirement of the multivariable change of variables theorem? -QUESTION [17 upvotes]: In standard multivariable analysis texts, the change of variables for multivariable integration in Euclidean space is almost always stated for a $C^1$ diffeomorphism $\phi$, giving the familiar equation (for continuous $f$, say) -$$\int_{\phi(U)}f=\int_U(f\circ\phi)\cdot|\det D\phi|$$ -Of course, this result by itself is not very useful in practice because a diffeomorphism is usually hard to come by. The better advanced calculus and multivariable analysis texts explain explicitly how the hypothesis that $\phi$ is injective with $\det D\phi\neq0$ can be relaxed to handle problems along sets of measure zero -- a result which is necessary for almost all practical applications of the theorem, starting with polar coordinates. -Despite offering this slight generalization, very few of the standard texts state that the situation can be improved further still: there is an analogous theorem for arbitrary $C^1$ mappings $\phi$, not just those that are injective everywhere except on a set of measure zero. We simply account for how many times a point in the image gets hit by $\phi$, giving -$$\int_{\phi(U)}f\cdot\,\text{card}(\phi^{-1})=\int_U(f\circ\phi)\cdot|\det D\phi|$$ -where $\text{card}(\phi^{-1})$ measures the cardinality of $\phi^{-1}(x)$. 
-I think this theorem is a lot more natural and satisfying than -- and surely just as heuristically plausible as -- the first. For one thing, it removes a huge restriction, bringing the theorem closer to the standard one-variable change of variables for which injectivity is not required (though of course the one-variable theorem is really a theorem about differential forms). In particular, it emphasizes that regularity is what's important, not injectivity. For another thing, it's not a big step from here to the geometric intuition for degree theory or for the "area formula" in geometric measure theory. (Indeed, the factor $\text{card}(\phi^{-1})$ is a special case of what old references in geometric measure theory called the "multiplicity function" or the "Banach indicatrix.") It's also used in multivariate probability to write down densities of non-injective transformations of random variables. And last, it's in the spirit of modern approaches to gesture at the most general possible result. The traditional statement is really just a special case; injectivity only becomes essential when we define the integral over a manifold (rather than a parametrized manifold), which we want to be independent of parametrization. I think teaching the more general result would greatly clarify these matters, which are a constant source of confusion to beginners. -Yet many of the standard multivariable analysis texts (Spivak, Rudin PMA and RCA, Folland, Loomis/Sternberg, Munkres, Duistermaat/Kolk, Burkill) don't mention this result, even in passing, as far as I can tell. The impression a typical undergraduate gets is that the traditional statement is the final word on the matter, not to be improved upon; after all, the possibility of improvement isn't even hinted at, even when the multivariable result is compared to the single variable result. 
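As a concrete one-dimensional sanity check of this multiplicity formula (my own illustration in Python; the setup is hypothetical, not taken from any particular text), take $\phi(x)=x^2$ on $U=(-1,1)$, so that $\operatorname{card}(\phi^{-1}(y))=2$ for almost every $y\in\phi(U)=[0,1)$:

```python
import math

# phi(x) = x^2 on U = (-1, 1) covers (0, 1) twice, so card(phi^{-1}(y)) = 2
# for almost every y in phi(U), while |det D phi(x)| = |2x|.
f = math.sin              # any continuous test function
N = 100_000               # midpoint-rule subdivisions

dy = 1.0 / N              # left side: integral of f(y) * card over phi(U) = [0, 1)
lhs = sum(f((k + 0.5) * dy) * 2 * dy for k in range(N))

dx = 2.0 / N              # right side: integral of f(phi(x)) * |phi'(x)| over U
rhs = 0.0
for k in range(N):
    x = -1.0 + (k + 0.5) * dx
    rhs += f(x * x) * abs(2 * x) * dx

assert abs(lhs - rhs) < 1e-6   # both approximate 2 * (1 - cos 1), about 0.9194
```

Both quadratures converge to the same value, exactly as the multiplicity version of the theorem predicts for a non-injective $\phi$.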
So I've had to hunt for discussions of the extension; I've found it here: - -Zorich, Mathematical Analysis II (page 150, exercise 9, for the Riemann integral) -Kuttler, Modern Analysis (page 258, for the Lebesgue integral; used later in a discussion of probabilities densities) -Csikós, Differential Geometry (page 72, for the Lebesgue integral) -Ciarlet, Linear and Nonlinear Functional Analysis with Applications (page 34, for the Lebesgue integral) -Bogachev, Measure Theory I (page 381, for the Lebesgue integral) -the Planet Math page on multivariable change of variables (Theorem 2) - -I'm also confident I've seen it in some multivariable probability books, but I can't remember which. But none of these is a standard textbook, except perhaps for Zorich. -My question: are there standard analysis references with nice discussions of this extension of the more familiar result? Probability references are fine, but I'm especially curious whether I've missed some definitive treatment in one of the classic analysis texts. -Also feel free to speculate, or explain, why so few texts mention it. (Is there really any good reason for not mentioning it, when failing to do so implicitly trains students to think injectivity is an essential ingredient for this kind of result?) I'm hoping there's a more interesting answer than "most authors don't mention it because the texts they learned from didn't either" or "even an extra sentence alluding to the possibility of a more general result is too much to ask for since the traditional theorem is hard enough to prove on its own." -(Cross-posted on MSE.) - -REPLY [9 votes]: The better advanced calculus and multivariable analysis texts explain explicitly how the hypothesis that $\varphi$ is injective with $\det D \varphi \neq 0$ can be relaxed to handle problems along sets of measure zero -- a result which is necessary for almost all practical applications of the theorem, starting with polar coordinates. 
- -I speculate that most authors don't go beyond the injective immersion condition because it is sufficient for most practical applications of the theorem, polar and spherical coordinates being among the most important examples. The difficulty of formulating and proving the change of variables formula for integration is out of proportion to the rest of the content of an advanced calculus course, so if you are writing a textbook on the subject then there is a strong temptation not to stray too far from what you need to handle the basic examples and applications. This also explains why some authors are happy to live with the assumption that $\phi$ is a diffeomorphism - if your goal is just to prove Stokes' theorem, then why make it harder? -It wouldn't hurt, I suppose, to allude to more general versions of the theorem in a parenthetical remark or an extended exercise, but I don't think the stakes are very high. Undergraduates are usually accustomed to the fact that they aren't getting the most general possible theorems in their classes. - -REPLY [3 votes]: As Paul Siegel says in his answer: The usual formula is sufficient for most practical applications. I would go further and say: - -The plain form of the change of variables theorem makes it much more clear that the main motivation for this theorem is just to compute integrals. - -The change of variables theorem is really a workhorse theorem to work with and (as far as I see) is not something that is structurally important. Check Christian Blatter's answer at MSE to see a mathematician with years of experience telling you how often he really used the non-injective form. -Also, the plain form really shows that the Jacobian is the crucial thing here, and the proofs of the plain form (at least the ones I know) make that pretty clear. However, if you want to prove the more general form, I don't know any way other than to start from the plain result and add on top of that.
-And my last point: As I said above, the change of variables theorem is a workhorse to do something, namely, to compute integrals. If you would ever calculate an integral of the form -$$\int_{\phi(U)}f\cdot\,\text{card}(\phi^{-1})=\int_U(f\circ\phi)\cdot|\det D\phi|$$ -what would you do? You would check for each point how often it is reached by $\phi$ and would patch the results together (neglecting the issues with null-sets) using the plain form on each patch. This is something that a student would come up with by himself. Hence, the general form is not at all helpful to do the very thing for which the plain form of the theorem is intended. Balancing how complicated the proof of the more general result is against how intuitive and (most importantly) practically not so useful the result is, it seems clear what you should do when writing a textbook.<|endoftext|> -TITLE: Near-integer solutions to $y=x(1+2\sqrt{2})$ given by sequence - why? -QUESTION [9 upvotes]: EDIT: I've asked the same basic question in its more progressed state. If that one gets answered, I'll probably accept the answer given below (although I'm uncertain of whether or not this is the community standard; if you know, please let me know). - -I've found a sequence $x_{i+1}=\|x_i(2+k)\|$, where $k=1+2\sqrt{2}$ and where $\|a\|$ rounds $a$ to the nearest integer, that seems to minimize the distance $P_n$ to an integer solution (for $x$ and $y$) of the equation in the title. $P_n$ is more rigorously defined as the absolute value of $\|y_n\|-y_n$, where $y_n=nk$ for $n \in \mathbb{N}$. -Starting with $x_0=1$ the sequence becomes $x_1=\|2+k\|=\|5.83\ldots\|=6,x_2=\|6(2+k)\|=\|34.97\ldots\|=35,204,1189,6930,\ldots$ where $P_i$ very quickly becomes small. -In Fig. 1 below, I've plotted $P_n$ for $n$. In Fig. 2 only low values of $P$ are shown, and it seems that the $P_i$'s from the sequence (in red) are the lowest value up until that $n$ (I've checked this up to $n=10^6$).
-My main question is this: Why does this sequence give the solutions nearest to integer solutions? (Edit for further clarification:) And why are these the nearest up until that point (see Fig. 2)? Can it be proven, starting from the original equation, that the elements in this sequence will give the best approximations to integers up until that point, e.g. that its error dies off faster than that of all other possible sequences? - Fig. 1 - Fig. 2 -Further questions: -I've also "found" something that looks like an attractor, see Fig. 3 below. Can someone explain what is going on here? I haven't really studied dynamical systems, so if you could dumb it down a bit, I'd be grateful. -Also, as seen in Fig. 1, there's a high degree of regularity here, with all the seemingly straight lines. If I remove the absolute-value part of the definition of $P_n$, the cross-pattern in Fig. 1 becomes (seemingly) straight, parallel declining lines. Why do these lines form? Could it be explained via some modular arithmetic? - Fig. 3 -Thanks! -EDIT: Changed "Bonus" to "Further", as I would also really like to hear answers to these questions. Should I post a new question with these, so I could accept an answer there that answers just those? - -REPLY [5 votes]: $$ \left( 3 + \sqrt 8 \right)^n + \left( 3 - \sqrt 8 \right)^n $$ -is an integer, while -$$ 3 - \sqrt 8 = \frac{1}{3 + \sqrt 8} $$ has absolute value smaller than one. -My sequence would be -$$ x_{n+2} = 6 x_{n+1} - x_n $$ -with $x_0 = 2,$ $ x_1 = 6,$ $x_2 = 34$ -I think I see what you did. Instead of taking powers $\left( 3 + \sqrt 8 \right)^n,$ you took the nearest integer and multiplied by it again.
So you get $1,6,35,204,$ but once the error is small enough you also settle into the necessary $ x_{n+2} = 6 x_{n+1} - x_n .$ That is, you have -$ x_1 = 6,$ $x_2 = 35,$ $x_3 = 204,$ $x_4 = 1189,$ $x_5 = 6930,$ $x_6 = 40391.$ Start it with $x_0 = 1,$ because you have -$$ \left( \frac{8 + 3 \sqrt 8}{16} \right)\left( 3 + \sqrt 8 \right)^n + \left( \frac{8 - 3 \sqrt 8}{16} \right)\left( 3 - \sqrt 8 \right)^n$$ This is equal to -$$ \left( \frac{ \sqrt 8}{16} \right) \left( \left( 3 + \sqrt 8 \right)^{n+1} - \left( 3 - \sqrt 8 \right)^{n+1} \right) $$<|endoftext|> -TITLE: Question about collapsing cardinals -QUESTION [6 upvotes]: Suppose, in $M$, $\kappa$ regular, $\lambda>\kappa$ regular. Is there a generic extension of $M$ in which $\kappa^+ = \lambda$ and in which cardinals $\leq \kappa$ and $\geq \lambda$ are preserved? -I worked out that, assuming GCH, the answer is yes if $\lambda$ is a limit or is the successor of a cardinal of cofinality $\geq\kappa$. The only remaining case is $\lambda = \delta^+$ for some $\delta$ of cofinality $<\kappa$, e.g. $\kappa = \omega_1$ and $\lambda = \omega_\omega^+$. -I realize that in this remaining case, in $M[G]$ $\kappa^{<\kappa} \geq \lambda$, so the forcing notion cannot be $<\kappa$-distributive. Thus the Levy collapse cannot suffice. - -REPLY [5 votes]: Many questions of this sort are open. For example, consider the situation where $\kappa = \aleph_n$ and $\lambda = \aleph_{n+2}$. So you want to know if you can collapse $\aleph_{n+1}$ to $\aleph_n$ while preserving all other cardinals. Now if $n=0$ this is possible thanks to the Levy collapse. For $n=1$, this is again possible, which is due independently to Abraham and Todorcevic. For $n=2$ this was recently answered by Aspero (you will also find references to the other articles there). 
For $n \geq 3$ this is still open, and I believe it is also open, for example, whether you can collapse $\aleph_{\omega+1}$ to $\aleph_1$ with a stationary set preserving partial order of size $\aleph_{\omega+1}$ (which would hence not collapse any larger cardinals and also preserve $\aleph_1$). -Of course, this is only considering special cases of your question, so perhaps a negative answer to the general question is already known.<|endoftext|> -TITLE: Need a hint for this integral -QUESTION [7 upvotes]: I'm trying to evaluate the following integral -$$\int_0^{\infty} \frac{1}{x^{\frac{3}{2}}+1}\,dx.$$ -This is an old complex analysis exam question, so I plan to use the residue theorem. -How can I first deal with the square-root, cubed term? I've been trying to find a clever substitution to reduce the problem to a simple one, but so far I have not found a good one... -Any ideas are welcome. -Thanks, - -REPLY [9 votes]: Hint: Every time I see an integrand like this integrated over $(0,\infty)$, I transform it to a beta integral and follow my nose. -Spoiler 1 - - Let $\displaystyle\;y = x^{3/2}\;$ and $\displaystyle\;z = \frac{y}{1+y}\;$, we have - -Spoiler 2 - - $$\begin{align} \int_0^\infty \frac{dx}{x^{3/2}+1}&= \int_0^\infty \frac{dy^{2/3}}{y+1} = \frac23 \int_0^\infty \frac{y^{2/3-1} dy}{y+1}\\ &= \frac23 \int_0^\infty \left(\frac{y}{1+y}\right)^{2/3-1}\left(\frac{1}{1+y}\right)^{1/3-1}\frac{dy}{(1+y)^2}\\ &= \frac23 \int_0^1 z^{2/3-1}(1-z)^{1/3-1}dz \\ &= \frac23\frac{\Gamma(2/3)\Gamma(1/3)}{\Gamma(2/3+1/3)} = \frac23 \frac{\pi}{\sin\frac{\pi}{3}}\\ & = \frac{4\pi}{3\sqrt{3}} \end{align} $$ - -Update (sorry, this part is too hard to set up as a spoiler correctly) -If you really want to compute the integral using residues, you can - -change variables to $y = x^{3/2}$. -pick the branch cut of $y^{-1/3}$ in the resulting integrand along the positive real axis.
-set up an integral over the contour: -$$C := +\infty - \epsilon i\quad\to\quad -\epsilon-\epsilon i\quad \to \quad -\epsilon + \epsilon i \quad\to\quad +\infty + \epsilon i$$ - -If you fix the argument of $y^{-1/3}$ to be zero on the upper branch of $C$, you will have -$$\left(1 - e^{-\frac{2\pi i}{3}}\right)\int_0^\infty \frac{y^{-1/3}dy}{y+1} = \int_C \frac{y^{-1/3}dy}{y+1} =^{\color{blue}{[1]}} 2\pi i\mathop{\text{Res}}_{y = -1}\left(\frac{y^{-1/3}}{y+1}\right) = 2\pi i e^{-\frac{\pi i}{3}}$$ -This will give you -$$\int_0^\infty \frac{dx}{x^{3/2}+1} = \frac23 \int_0^\infty \frac{y^{-1/3} dy}{y+1} = \frac{4\pi i}{3}\left(\frac{e^{-\frac{\pi i}{3}}}{1 - e^{-\frac{2\pi i}{3}}}\right) = \frac{2\pi}{3\sin\frac{\pi}{3}} = \frac{4\pi}{3\sqrt{3}}$$ -Notes - -$\color{blue}{[1]}$ Since the integrand $\frac{y^{-1/3}}{y+1}$ goes to $0$ faster than $\frac{1}{|y|}$ as $|y| \to \infty$, we can complete the contour $C$ by a circle of infinite radius and evaluate the integral over $C$ by taking residues at the poles within the extended contour.<|endoftext|> -TITLE: Evaluate the integral $\int \frac{1}{\sqrt[3]{(x+1)^2(x-2)^2}}\mathrm dx$ -QUESTION [7 upvotes]: What substitution is useful for this integral? -$$\int \frac{1}{\sqrt[3]{(x+1)^2(x-2)^2}}\mathrm dx$$ -Substitutions $u=x^{\frac{2}{3}},u=(x+1)^{\frac{2}{3}},u=(x-2)^{\frac{2}{3}}$ are not working. -I can't find a useful trigonometric substitution. - -REPLY [2 votes]: What substitution is useful for this integral? - -None. The integrand does not possess any elementary anti-derivative. See Liouville's theorem -and the Risch algorithm for more information. However, all definite integrals of the following -form: $\displaystyle\int_a^b\Big[(x-x_1)(x-x_2)\Big]^r~dx,$ with $a,b\in\{x_1,x_2,\pm\infty\},$ can be evaluated in terms of the -beta and $\Gamma$ functions, assuming, of course, that they converge in the first place.
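The beta/$\Gamma$ claim is easy to test numerically. The following Python sketch (my addition, not part of the original answer) checks the case $x_{1,2}=\{-1,2\}$, $r=-\tfrac23$ of the present question; the substitution $x=2-s^3$ on half of the range (the integrand is symmetric about $x=1/2$) removes the endpoint singularities, so a plain Simpson rule suffices:

```python
import math

def simpson(f, a, b, n=2000):
    # composite Simpson rule with n (even) subintervals
    h = (b - a) / n
    s = f(a) + f(b) + sum((4 if k % 2 else 2) * f(a + k * h) for k in range(1, n))
    return s * h / 3

# I = int_{-1}^{2} |(x+1)(x-2)|^{-2/3} dx; by the symmetry about x = 1/2 and
# the substitution x = 2 - s^3 this equals
#   2 * int_0^{(3/2)^(1/3)} 3*(3 - s^3)^(-2/3) ds,
# whose integrand is smooth on the whole interval.
piece = 2 * simpson(lambda s: 3 * (3 - s**3) ** (-2 / 3), 0.0, 1.5 ** (1 / 3))

# the value predicted by the beta/Gamma evaluation: B(1/3, 1/6) / 12^(1/3)
target = math.gamma(1 / 3) * math.gamma(1 / 6) / math.gamma(1 / 2) / 12 ** (1 / 3)
```

The two numbers agree to within the quadrature error.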
This should -come as no surprise, given the fact that Mufasa has already been able to rewrite the original -expression as a Wallis integral, whose relation to the aforementioned special functions is well -known. Not to mention the fact that Claude Leibovici's hypergeometric series can also be -expressed in terms of the incomplete beta function. Thus, for $x_{1,2}=\{-1,2\}$ and $r=-\dfrac23,$ -we have the following result: - -$$\begin{align} -\int_{-\infty}^\infty\Big[(x+1)(x-2)\Big]^{-\tfrac23}~dx -~&=~3\int_{-\infty}^{-1}\Big[(x+1)(x-2)\Big]^{-\tfrac23}~dx~=~ -\\\\ -~&=~3\int_{-1}^2\Big[(x+1)(x-2)\Big]^{-\tfrac23}~dx~=~ -\\\\ -~&=~3\int_2^\infty\Big[(x+1)(x-2)\Big]^{-\tfrac23}~dx~=~ -\\\\ -~&=~3\cdot\frac{B\Big(\tfrac13~,~\tfrac16\Big)}{\sqrt[3]{12}} -\end{align}$$<|endoftext|> -TITLE: If a set of integers can be partitioned into 3 subsets with equal sums, if one such subset is identified, can the remaining be partitioned as well? -QUESTION [7 upvotes]: Specifically, if a set $S$ of integers that sums to $k$ is known to be able to be partitioned into $3$ subsets such that each subset sums to $\dfrac{k}{3}$, if $A$ is one such subset of $S$, is it always possible to partition the remaining integers of $S-A$ into two subsets that both sum to $\dfrac{k}{3}$? -For example, if $S = \{2, 3, 4, 6, 7, 8\}$ and sums to $30$, it can be divided into sets $\{2, 8\}$, $\{3, 7\}$, and $\{4, 6\}$ which each sum to $\dfrac{30}{3}=10$. This is a simple example, but if an arbitrary set $S$ that sums to $k$ is known to be able to be partitioned into 3 subsets that each sum to $\dfrac{k}{3}$, if one such subset $A$ is identified, is it guaranteed that the remaining integers in $S-A$ can also be partitioned into subsets that each sum to $\dfrac{k}{3}$? -My gut feeling is that the answer is yes, but I don't know how to prove it. - -REPLY [11 votes]: $\{1,3,4,5,8,14,16\}$ -can be partitioned into the sets -$\{1,16\},\{3,14\},\{4,5,8\}$, each of which adds to $17$.
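Claims like this are easy to check exhaustively. The short Python search below (my addition, not in the original answer) verifies the stated partition and looks for a $17$-sum subset whose complement cannot be split further:

```python
from itertools import combinations

S = [1, 3, 4, 5, 8, 14, 16]
assert sum(S) == 51          # so each of the three parts must sum to 17

def can_split_in_two(nums, target):
    # can `nums` be partitioned into two subsets, each summing to `target`?
    # (any subset summing to `target` works, since the total is 2 * target)
    return any(sum(c) == target
               for r in range(1, len(nums))
               for c in combinations(nums, r))

# the partition {1,16}, {3,14}, {4,5,8} works
for part in [(1, 16), (3, 14), (4, 5, 8)]:
    assert sum(part) == 17

# search for 17-sum subsets whose complement is NOT splittable
bad = []
for r in range(1, len(S)):
    for c in combinations(S, r):
        if sum(c) == 17:
            rest = [x for x in S if x not in c]
            if not can_split_in_two(rest, 17):
                bad.append(set(c))
```

The search turns up exactly one obstructing subset, which is the one exhibited in the answer.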
-The subset $A=\{1,3,5,8\}$ is also a subset which adds to $17$; however, the remaining integers $\{4,14,16\}$ are all even and therefore could not possibly be partitioned into subsets which add to an odd number.<|endoftext|> -TITLE: What is the minimum polynomial of $x = \sqrt{2}+\sqrt{3}+\sqrt{4}+\sqrt{6} = \cot (7.5^\circ)$? -QUESTION [10 upvotes]: Inspired by a previous question, let $x = \sqrt{2}+\sqrt{3}+\sqrt{4}+\sqrt{6} = \cot (7.5^\circ)$. What is the minimal polynomial of $x$? -The theory of algebraic extensions says the degree is $4$ since we have the degree of the field extension $[\mathbb{Q}(\sqrt{2}, \sqrt{3}, \sqrt{4}, \sqrt{6}): \mathbb{Q}] = [\mathbb{Q}(\sqrt{2}, \sqrt{3}): \mathbb{Q}] =4$ -Does trigonometry help us find the other three conjugate roots? - -$+\sqrt{2}-\sqrt{3}+\sqrt{4}-\sqrt{6} = \cot \theta_1$ -$-\sqrt{2}+\sqrt{3}+\sqrt{4}-\sqrt{6} = \cot \theta_2$ -$-\sqrt{2}-\sqrt{3}+\sqrt{4}+\sqrt{6} = \cot \theta_3$ - -This problem would be easier if we used $\cos$ instead of $\cot$. If I remember the half-angle identity or... double-angle identity: -$$ \cot \theta = \frac{\cos \theta}{\sin \theta} = \sqrt{\frac{1 - \sin \frac{\theta}{2}}{1 + \sin \frac{\theta}{2}}}$$ -Sorry, I am forgetting, but I am asking about the relationship between trigonometry and the Galois theory of this number. - -REPLY [4 votes]: By Galois theory the intermediate fields are $\Bbb{Q}(\sqrt2)$, $\Bbb{Q}(\sqrt3)$ and $\Bbb{Q}(\sqrt6)$. Your number is not an element of any of those, so it generates the whole 4-d extension. In particular, we know that the minimal polynomial will be a quartic. Therefore any quartic polynomial with integer coefficients with this number as a root is the minimal polynomial. -Let $x=2+\sqrt2+\sqrt3+\sqrt6$. Then -$$ -0=(x-2-\sqrt3)^2-(\sqrt2+\sqrt6)^2=x^2-(4+2\sqrt3)x-1. -$$ -The idea here is that squaring removes "the $\sqrt2$ content" from $\sqrt2+\sqrt6$ leaving only irrationalities coming from $\sqrt3$.
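This quadratic, and the identification $x=\cot(7.5^\circ)=\cot(\pi/24)$ from the question, are easy to sanity-check numerically (a quick Python check of my own):

```python
import math

# x = sqrt(2) + sqrt(3) + sqrt(4) + sqrt(6), with sqrt(4) = 2
x = 2 + math.sqrt(2) + math.sqrt(3) + math.sqrt(6)

# x should satisfy x^2 - (4 + 2*sqrt(3))*x - 1 = 0
residual = x**2 - (4 + 2 * math.sqrt(3)) * x - 1

# and x should equal cot(pi/24), i.e. cot(7.5 degrees)
cot = 1 / math.tan(math.pi / 24)
```

Both checks hold to machine precision.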
-Using the obvious algebraic conjugate as an extra factor we see that -$$ -m(x)=(x^2-(4+2\sqrt3)x-1)(x^2-(4-2\sqrt3)x-1)=x^4-8x^3+2x^2+8x+1 -$$ -fits the bill. -The algebraic conjugates of values of trig functions (at rational multiples of $\pi$) are of the same type: -$$ -\begin{aligned} -2+\sqrt2+\sqrt3+\sqrt6&=\cot\frac{\pi}{24},\\ -2+\sqrt2-\sqrt3-\sqrt6&=\cot\frac{17\pi}{24},\\ -2-\sqrt2+\sqrt3-\sqrt6&=\cot\frac{13\pi}{24},\\ -2-\sqrt2-\sqrt3+\sqrt6&=\cot\frac{5\pi}{24}. -\end{aligned} -$$ -I haven't checked the details, but I'm fairly sure that when you dig for the integer multiple angle formulas for cotangent, the polynomial $m(x)$ pops out. After all, we can write $\cot 6x$ as a rational function of $\cot x$ with polynomials of degrees $\le6$ as numerators and denominators. When you use that formula in the l.h.s. of the equation -$$ -\cot 6x=1 -$$ -and clear the denominator, the resulting degree $6$ polynomial equation in $\cot x$ factors as a product of a quartic (obviously $m(\cot x)$) and a quadratic - the latter accounting for solutions $x=3\pi/8$ and $x=7\pi/8$.<|endoftext|> -TITLE: The smallest symmetric group $S_m$ into which a given dihedral group $D_{2n}$ embeds -QUESTION [16 upvotes]: Several questions, both here and on MathOverflow, address the issue of determining for a given group $G$ the smallest integer $\mu(G)$ for which there is an embedding (injective homomorphism) $G \hookrightarrow S_{\mu(G)}$. In general this is a difficult problem, but it's not hard to resolve the question for $G$ of small order, and $\mu(G)$ has been determined for some important classes of groups $G$. For example, for $G$ abelian, so that we can write $G$ uniquely (up to reordering) as $\Bbb Z_{a_1} \times \cdots \times \Bbb Z_{a_r}$ for (nontrivial) prime powers $a_1, \ldots, a_r$, -$$\mu(G) = a_1 + \cdots + a_r .$$ And of course, $\mu(S_m) = m$. 
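For the cyclic case this formula can be tested by brute force: $\mu(\Bbb Z_n)$ is the least $m$ for which $S_m$ contains an element of order $n$, i.e. for which some partition of $m$ has least common multiple divisible by $n$. A small Python check (my addition, not part of the original question):

```python
from math import gcd

def partitions(n, maxpart=None):
    # generate all partitions of n as lists of parts
    if maxpart is None:
        maxpart = n
    if n == 0:
        yield []
        return
    for k in range(min(n, maxpart), 0, -1):
        for rest in partitions(n - k, k):
            yield [k] + rest

def mu_cyclic(n):
    # least m such that S_m has an element of order n: the lcm of some
    # cycle type of m must be divisible by n (a power of a longer-order
    # element then has order exactly n)
    m = 1
    while True:
        for p in partitions(m):
            lcm = 1
            for part in p:
                lcm = lcm * part // gcd(lcm, part)
            if lcm % n == 0:
                return m
        m += 1

def prime_power_sum(n):
    # sum of the prime powers in the factorization of n
    s, d = 0, 2
    while d * d <= n:
        if n % d == 0:
            pk = 1
            while n % d == 0:
                n //= d
                pk *= d
            s += pk
        d += 1
    return s + (n if n > 1 else 0)
```

For instance `mu_cyclic(6)` gives $5$, matching $\mu(\Bbb Z_6)=2+3$ as used below.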
-I have not been able to find, however, where this has been resolved for the dihedral groups; my question is: - -For each dihedral group $D_{2n}$ (of order $2 n$), what is the smallest symmetric group into which $D_{2n}$ embeds, that is, what is $\mu_n := \mu(D_{2n})$? - -Of course, $D_2 \cong S_2$ and $D_6 \cong S_3$, and so $\mu_1 = 2$ and $\mu_3 = 3$; also, $D_4 \cong \Bbb Z_2 \times \Bbb Z_2$, so by the above result $\mu_2 = \mu(D_4) = 4$. -For any group $G$ and subgroup $H \leq G$, an embedding $G \hookrightarrow S_m$ determines an embedding $H \hookrightarrow S_m$, and so $\mu(H) \leq \mu(G)$. Thus, since $D_{2n} \cong \Bbb Z_n \rtimes \Bbb Z_2$, we have $\mu_n = \mu(D_{2n}) \geq \mu(\Bbb Z_n)$, which by the above is the sum $\Sigma_n$ of the prime powers in the prime factorization of $n$. On the other hand, for $n > 2$, the usual action by rotations and reflections of $D_{2n}$ on an $n$-gon is faithful and so determines an embedding $D_{2n} \hookrightarrow S_n$; in particular, this gives the upper bound $\mu_n \leq n$. -Already, these bounds together give $\mu_4 = 4$ (realized by the embedding of the symmetry group of the square into the symmetric group on its vertices) and more generally that $\mu_a = a$ for prime powers $a > 2$. -This is not sufficient, however, to determine $\mu_n$ for other integers $> 5$; for example, $\mu(\Bbb Z_6) = 5$, so these bounds only give $5 \leq \mu_6 \leq 6$. It turns out that $D_{12}$ can be embedded in $S_5$ (as David points out in a comment under his question, this embedding can be realized explicitly as $\langle(12)(345), (12)(34)\rangle$), and this settles $\mu_6 = 5$. The above results together determine the subsequence $$(\mu_1, \ldots, \mu_9) = (2, 4, 3, 4, 5, 5, 7, 8, 9) ,$$ which in particular does not appear in the OEIS. -Edit Per David's answer, the sequence $(\mu_n)$ appears to be given by -$$ -\mu_n := \left\{\begin{array}{cl}2, & n = 1\\ 4, & n = 2\\ \Sigma_n, & n > 2 \end{array}\right. 
, -$$ -and $(\Sigma_n)$ itself appears in the OEIS as sequence A008475. - -REPLY [2 votes]: There is also the following geometric way to explain the solution by David. -Let us recall that $D_{2n}$ is the group of symmetries of a regular $n$-gon. The idea is that one should consider the action of $D_{2n}$ on regular polygons with a smaller number of vertices that are inscribed into the given regular $n$-gon. - -To clarify the construction, let us first consider $n=6$. There are two regular 3-gons and three regular 2-gons whose vertices are the vertices of the given regular 6-gon. Every element of $D_{12}$ acts on the set of two regular 3-gons and acts on the set of three regular 2-gons, thus one has $D_{12} \rightarrow S_2 \times S_3$. -It is easy to see that this map is injective. Its composition with the tautological embedding $S_2 \times S_3 \rightarrow S_{2+3}$ is the necessary map. - -Now let us consider a general $n = \prod_{i=1}^s p_i^{k_i}$. A regular $n$-gon has $p_i^{k_i}$ regular $n/p_i^{k_i}$-gons, which gives us a map $D_{2n} \to S_{p_i^{k_i}}$. Now by the Chinese remainder theorem and a thoughtful look one proves that the map $D_{2n} \to \prod_{i=1}^s S_{p_i^{k_i}}$ is an injection. -(Equivalently, suppose that a symmetry $g \in D_{2n}$ preserves every regular $n/p_i^{k_i}$-gon in the given $n$-gon. Consider any vertex $v$ of the given $n$-gon. The vertex $v$ lies in a certain $n/p_1^{k_1}$-gon, in a certain $n/p_2^{k_2}$-gon, etc. Each of them is preserved by $g$ and their intersection consists of $v$ only, thus $v$ is preserved by $g$). -Thus $D_{2n} \hookrightarrow S_{\sum_{i=1}^s p_i^{k_i}}$ and $\mu(D_{2n}) \leqslant \sum_{i=1}^s p_i^{k_i}$. But it cannot be smaller by the argument by David.<|endoftext|> -TITLE: Sequence related to solutions of the equation $x^x=c$ -QUESTION [9 upvotes]: A couple of years ago I remember repeatedly pressing $\sqrt{1+ans}$ into my calculator and being astonished that it gave me an answer approaching the golden ratio.
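The calculator experiment described above is easy to reproduce; here is a short Python version of it (my addition):

```python
import math

# repeatedly press sqrt(1 + ans), starting from 1
x = 1.0
for _ in range(60):
    x = math.sqrt(1 + x)

# the iteration converges to the golden ratio, the positive
# fixed point of f(x) = sqrt(1 + x)
golden = (1 + math.sqrt(5)) / 2
```

After a few dozen presses the value agrees with $(1+\sqrt5)/2$ to machine precision.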
I was astonished, and dug deeper into this problem. I realized that the limit of the sequence given by the recurrence -$$f(x_n)=x_{n+1}$$ -is, if the limit exists, a solution to -$$f(x)=x$$ -and applying this I realized I could solve almost every algebra problem. So here was one that I came across: -$$x^x=c$$ -Such an equation can easily be rearranged as -$$x=\sqrt[x]{c}$$ -where now the problem becomes finding the limit of the sequence -$$x_{n+1}=\sqrt[x_n]{c}$$ -This method worked for some $c$ and $x_1$, like $c=2$ and $x_1=1.5$. But the problem is that for some $c$ the sequence seems to alternate according to my observations, like $c=100$ (I tried $x_1=3.5$ and others but it doesn't seem to matter as long as it is positive). I ask anyone if they can help me figure out for what $c$ this sequence converges for every given $x_1>0$. - -REPLY [9 votes]: What you discovered is called a fixed point iteration. - -A fixed point $x$ of a function $f\colon X\rightarrow X$ is a point - satisfying the equation $$x=f(x).$$ - -A fixed point iteration is defined by the sequence -\begin{align*}x_{1}&=\text{given}\\x_{n+1}&=f(x_{n})\text{ for }n>0.\end{align*} -We say this iteration converges when $x_{n}\rightarrow x$ for some $x$. The Banach fixed point theorem (a.k.a. contraction mapping principle) gives sufficient conditions for when a fixed point exists, is unique, and can be computed by a fixed point iteration (i.e. as the limit of the $x_{n}$). You should read the article on Wikipedia to familiarize yourself with the ideas. -For simplicity, let's now assume $X=\mathbb{R}$ instead of an arbitrary complete metric space, as is usually treated in the statement of the Banach fixed point theorem.
One of the consequences of the theorem is as follows: let $Y\subset X$ be such that $f$ is continuously differentiable on $Y$, $\sup_{Y}|f^{\prime}|\leq K$ for some $K<1$, and $f(Y)\subset Y$; then $f$ has a unique fixed point on $Y$ which can be computed by a fixed point iteration initialized in $Y$ (i.e. $x_{1}\in Y$). -In your first example, you have $f(x)=\sqrt{1+x}$. Let $Y=[0,\infty)$. Noting that $|f^{\prime}(x)|<1$ on $Y$ (check) and $f(Y)\subset Y$ trivially, it follows that for any $x_{1}\in Y$, $x_{n}\rightarrow x$ where $x$ is the unique fixed point of $f$ on $Y$. In fact, this fixed point must satisfy $x=f(x)=\sqrt{1+x}$, or equivalently, since $x\geq0$, $x^{2}=1+x$. This quadratic equation has two roots: $x=1/2\pm\sqrt{5}/2$. The positive root is, as you pointed out, the golden ratio $1.61803\ldots$ -Your other problem involves $f(x)=\sqrt[x]{c}$. It is not so clear under which conditions this satisfies the Banach fixed point theorem. In fact, as you noticed, you can find examples in which the iteration is nonconvergent. -Do not despair, though: there are better ways to compute solutions to your equation. One that comes to mind is Newton's method, which you should take a look at. -In fact, a solution to your problem is given by Lambert's W function as -$$x = \frac{\ln c}{W(\ln c)}.$$ -Values of the Lambert W are, in fact, often computed by Newton's method, or higher order Newton's methods. - -Addendum -Newton's method for $g(x)=x^{x}-c$ is given by \begin{align*}x_1&=\text{given}\\x_{n+1}&=f(x_{n})\text{ for }n>0\end{align*} where $$f(x)=x+\frac{cx^{-x}-1}{1+\ln x}. $$ Since $g^{\prime\prime}(x)\geq0$ for $x>0$ and $f(x)>1/e$ for $x>1/e$ (check), it follows that Newton's method converges whenever $x_{1}>1/e$ and $x=f(x)$ has a solution with $x>1/e$ (see this). This is true at least when $c\geq1$, so that we can conclude: - -Newton's method for $x^x - c = 0$ converges for $c \geq 1$ and $x_1 > 1/e$.
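Here is a Python sketch of the same ideas (my addition); it first exhibits the oscillation for $c=100$, $x_1=3.5$ that the question reports, then runs the Newton iteration from the addendum:

```python
import math

c = 100.0

# naive fixed-point iteration x <- c**(1/x): for c = 100 it settles into
# a 2-cycle instead of converging, matching the observation in the question
x = 3.5
orbit = [x]
for _ in range(200):
    x = c ** (1 / x)
    orbit.append(x)
gap = abs(orbit[-1] - orbit[-2])   # stays far from 0: a genuine 2-cycle

# Newton's method for g(x) = x^x - c, i.e. x <- x + (c*x^(-x) - 1)/(1 + ln x)
y = 1.0
for _ in range(5000):
    y_new = y + (c * y ** (-y) - 1) / (1 + math.log(y))
    if abs(y_new - y) < 1e-12:
        y = y_new
        break
    y = y_new
```

From $x_1=1$ the Newton iterate first overshoots to $100$ and then creeps back down, so the generous iteration cap is deliberate; once close to the root the convergence is quadratic.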
- -Here's some MATLAB code: -c = 100.; % Value of c -x = 1.; % Initial guess -while 1 - x_new = x + (c*x^(-x) - 1)/(1 + log(x)); - if abs (x_new - x) < 1e-12; break; end - x = x_new; -end -disp (x) % Solution<|endoftext|> -TITLE: Why is there only one term on the RHS of this chain rule with partial derivatives? -QUESTION [13 upvotes]: I know that if $u=u(s,t)$ and $s=s(x,y)$ and $t=t(x,y)$ then the chain rule is $$\begin{align}\color{blue}{\fbox{$\frac{\partial u}{\partial x}=\frac{\partial u}{\partial s}\times \frac{\partial s}{\partial x}+\frac{\partial u}{\partial t}\times \frac{\partial t}{\partial x}$}}\color{#F80}{\tag{A}}\end{align}$$ -A short extract from my book tells me that: - -If $u=(x^2+2y)^2 + 4$ and $p=x^2 + 2y$ then $u=p^2 + 4$ therefore $$\frac{\partial u}{\partial x}=\frac{\partial u}{\partial p}\times \frac{\partial p}{\partial x}\tag{1}$$ as $u=u(x,y)$ and $p=p(x,y)$ - -The book mentions no origin of equation $(1)$ and unlike $\color{#F80}{\rm{(A)}}$ it has only one term on the RHS; so I would like to know how it was formed. Is $(1)$ simply equivalent to $\color{#F80}{\rm{(A)}}$ but with the last term missing? Or is there more to it than that? -Many thanks, -BLAZE. - -REPLY [11 votes]: To expand a bit on Hagen von Eitzen's answer and littleO's comment, there are really two different functions that are both named '$u$'. The first is a function of two variables, $u:(x,y)\mapsto (x^2+2y)^2+4$, while the second is a function of only one variable, $u:t\mapsto t^2+4$. Let's call the former $\bar u$ to keep them straight. We also have $p:(x,y)\mapsto x^2+2y$, so $\bar u=u\circ p$, i.e., $\bar u(x,y)=u(p(x,y))$. By the chain rule, ${\partial\over\partial x}\bar u={\partial\over\partial x}(u\circ p)=\sum{\partial u\over\partial w_i}{\partial w_i\over\partial x}$, the sum taken over all of the parameters $w_i$ of $u$.
In this case, $u$ is a function of only one variable, so this sum has only the one term, ${\partial u\over\partial p}{\partial p\over\partial x}$. Because this $u$ is a function of only one variable, you might see this written as ${du\over dp}{\partial p\over\partial x}$ instead.<|endoftext|> -TITLE: In the Borel-Cantelli lemma, what is the limit superior? -QUESTION [5 upvotes]: In a proof of the Borel-Cantelli lemma in the stochastic process textbook, the author used the following. -$$\limsup_{n\to\infty}A_n=\bigcap_{n\ge1}\bigcup_{k\ge n} A_k$$ -Can someone explain why lim sup is intersection and union? Thank you - -REPLY [13 votes]: I find it very helpful to think of the limit superior and limit inferior of a sequence of real numbers and a sequence of sets as special cases of limit superior and limit inferior in so-called complete lattices: -You probably already know the following notions for the special case of the ordered set $(\mathbb{R},\leq)$: - -Definition: Let $(S,\leq)$ be a partially ordered set and $A \subseteq S$ a subset. An element $s \in S$ is called an upper bound of $A$ if $a \leq s$ for all $a \in A$. An element $t \in S$ is called a lower bound of $A$ if $t \leq a$ for all $a \in A$. -An element $s \in S$ is called a supremum of $A$ if $s$ is a least upper bound of $A$, i.e. $s$ is an upper bound of $A$ and for every upper bound $s'$ of $A$ we have $s \leq s'$. -Similarly an element $t \in S$ is called an infimum of $A$ if $t$ is a greatest lower bound of $A$, i.e. $t$ is a lower bound of $A$ and for every lower bound $t'$ of $A$ we have $t' \leq t$. - -If $(S, \leq)$ is a partially ordered set and $A \subseteq S$ a subset then neither a supremum of $A$, nor an infimum of $A$ need to exist. If it does, however, then it is unique, and is denoted by $\sup A$ and $\inf A$ respectively. -Notice that in the special case of $(\mathbb{R},\leq)$ the above definition coincides with the usual notion of the supremum and infimum of a set of real numbers.
In the case of the extended real numbers $\mathbb{R}\cup \{-\infty,\infty\} = [-\infty,\infty]$ we have the nice property that each subset $S \subseteq [-\infty,\infty]$ has a supremum (possibly $\infty$) and an infimum (possibly $-\infty$). Such ordered sets are called complete lattices. - -Definition: An ordered set $(S,\leq)$ is called a complete lattice if for each subset $A \subseteq S$ both $\sup A$ and $\inf A$ exist. - -Aside from the extended real numbers $[-\infty,\infty]$, other complete lattices which we commonly encounter are power sets: - -Example: Let $X$ be any set and denote by $\mathcal{P}(X) = \{T \mid T \subseteq X\}$ the power set of $X$. With the usual subset relation $\subseteq$ the power set becomes a partially ordered set $(\mathcal{P}(X),\subseteq)$. Let $\mathcal{A} \subseteq \mathcal{P}(X)$ (i.e. $\mathcal{A}$ is a collection of subsets of $X$). -For any subset $S \in \mathcal{P}(X)$ we have that $S$ is an upper bound of $\mathcal{A}$ if and only if $T \subseteq S$ for all $T \in \mathcal{A}$. Therefore $S := \bigcup_{T \in \mathcal{A}} T$ is an upper bound of $\mathcal{A}$. If $S' \in \mathcal{P}(X)$ is any upper bound of $\mathcal{A}$, then we have $T \subseteq S'$ for all $T \in \mathcal{A}$, and thus also $S \subseteq S'$. So $S$ is a supremum of $\mathcal{A}$. -In the same way we also find that $\bigcap_{T \in \mathcal{A}} T$ is an infimum of $\mathcal{A}$. So $(\mathcal{P}(X), \subseteq)$ is a complete lattice, and for any collection of subsets $\mathcal{A} \subseteq \mathcal{P}(X)$ we have $\sup \mathcal{A} = \bigcup_{T \in \mathcal{A}} T$ and $\inf \mathcal{A} = \bigcap_{T \in \mathcal{A}} T$. - -Notice that this result is not very surprising: The smallest set containing all sets of $\mathcal{A}$ is naturally the union of these sets. In the same way the biggest set which is contained in all sets of $\mathcal{A}$ is naturally the intersection of these sets.
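This can be made concrete by a tiny brute-force check (my addition, in Python): for subsets of a small set $X$, the union really is the least upper bound and the intersection the greatest lower bound.

```python
from itertools import chain, combinations

X = {0, 1, 2, 3}
family = [{0, 1}, {0, 2}, {0, 1, 2}]    # a collection A of subsets of X

def all_subsets(s):
    s = list(s)
    return [set(c) for c in
            chain.from_iterable(combinations(s, r) for r in range(len(s) + 1))]

union = set().union(*family)
inter = set(X).intersection(*family)

upper_bounds = [s for s in all_subsets(X) if all(t <= s for t in family)]
lower_bounds = [s for s in all_subsets(X) if all(s <= t for t in family)]

# the union is an upper bound contained in every upper bound (least upper
# bound); the intersection is a lower bound containing every lower bound
is_sup = union in upper_bounds and all(union <= s for s in upper_bounds)
is_inf = inter in lower_bounds and all(s <= inter for s in lower_bounds)
```

(Here `<=` on Python sets is the subset relation, so the code mirrors the lattice $(\mathcal{P}(X),\subseteq)$ directly.)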
-Since in complete lattices we have suprema and infima we have all that we need to define the limit superior and limit inferior. - -Definition: Let $(S,\leq)$ be a complete lattice and $(s_n)_{n \in \mathbb{N}}$ a sequence of elements $s_n \in S$. Then the limit superior and limit inferior of this sequence are - $$ - \limsup_{n \to \infty} s_n = \inf_{n \geq 0} \sup_{k \geq n} s_k -$$ - and - $$ - \liminf_{n \to \infty} s_n = \sup_{n \geq 0} \inf_{k \geq n} s_k. -$$ - If $\limsup_{n \to \infty} s_n = \liminf_{n \to \infty} s_n$ then we also write - $$ - \lim_{n \to \infty} s_n = \limsup_{n \to \infty} s_n = \liminf_{n \to \infty} s_n -$$ - and call this the limit of the sequence $(s_n)_{n \in \mathbb{N}}$. - -Notice that for the extended real line $[-\infty,\infty]$ this is the usual definition of limit superior and limit inferior. But what about power sets? - -Example: Let $X$ be any set and $(A_n)_{n \in \mathbb{N}}$ a sequence of subsets $A_n \subseteq X$. Then $(A_n)_{n \in \mathbb{N}}$ is a sequence in the complete lattice $(\mathcal{P}(X), \subseteq)$. From the previous example we see that the limit superior of this sequence is given by - $$ - \limsup_{n \to \infty} A_n - = \inf_{n \geq 0} \sup_{k \geq n} A_k - = \bigcap_{n \geq 0} \bigcup_{k \geq n} A_k, -$$ - and the limit inferior is given by - $$ - \liminf_{n \to \infty} A_n - = \sup_{n \geq 0} \inf_{k \geq n} A_k - = \bigcup_{n \geq 0} \bigcap_{k \geq n} A_k. -$$ - -So we see that the definition of the limit superior and limit inferior of a sequence of sets really comes down to what the supremum and infimum of a collection of sets is, which naturally is their union and intersection respectively.<|endoftext|> -TITLE: Does there exist an even number greater than $36$ with more even divisors than $36$, all of them being a prime$-1$?
-QUESTION [13 upvotes]: I did a little test today looking for all the numbers such that each of their even divisors is a prime number minus $1$, to verify possible properties of them. These are the first terms (the sequence is not included in the OEIS): - -2, [2] -4, [2, 4] -6, [2, 6] -10, [2, 10] -12, [2, 4, 6, 12] -18, [2, 6, 18] -22, [2, 22] -30, [2, 6, 10, 30] -36, [2, 4, 6, 12, 18, 36] -46, [2, 46] -58, [2, 58] - -I tried to look for the one with the longest list of even divisors, but it seems that the longest one is $36$, at least up to $10^6$: - -$36$, even divisors $[2, 4, 6, 12, 18, 36]$, so the primes are $[3, 5, 7, 13, 19, 37]$. - -For instance, for the same exercise with each even divisor being a prime number plus 1 (except $1$ in the case of the even divisor $2$), it seems to be $24$: - -$24$, $[2, 4, 6, 8, 12, 24]$, so the primes are $[3, 5, 7, 11, 23]$. - -And for instance for the case in which both minus and plus one are a prime (or $1$ for the even divisor $2$) the longest one seems to be $12$: $[2, 4, 6, 12]$. -I would like to ask the following question: - -These are heuristics, but I do not understand why it seems impossible to find a number greater than those small values such that all the even divisors comply with the property and the list of divisors is longer than the list of $36$. Is there a theoretical reason behind that, or should it be possible to find a greater number (maybe very big) complying with the property? Is the way of calculating such a possibility somehow related to Diophantine equations? - -Probably the reason is very simple, but I cannot see it clearly. Thank you very much in advance! - -REPLY [8 votes]: Sieving with small primes reveals the following. -Assume that -$$ -n=2^a\cdot3^bp_1^{a_1}p_2^{a_2}\cdots p_k^{a_k}\qquad(*) -$$ -has the property that $d+1$ is a prime whenever $d$ is an even factor of $n$. Here $(*)$ gives the prime factorization of $n$, so $p_i$ are all distinct primes $>3$.
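The search described in the question is easy to reproduce. This Python sketch (my addition) confirms the initial list and, in a smaller range, the bound that the sieving argument establishes: no even $n \le 10^4$ has more than the six qualifying even divisors of $36$.

```python
def is_prime(m):
    if m < 2:
        return False
    d = 2
    while d * d <= m:
        if m % d == 0:
            return False
        d += 1
    return True

def even_divisors(n):
    divs = set()
    d = 1
    while d * d <= n:
        if n % d == 0:
            divs.update((d, n // d))
        d += 1
    return sorted(x for x in divs if x % 2 == 0)

# even n whose even divisors d all have d + 1 prime
hits = [n for n in range(2, 10001, 2)
        if all(is_prime(d + 1) for d in even_divisors(n))]

first = hits[:11]                                   # the list from the question
max_count = max(len(even_divisors(n)) for n in hits)
champion = min(n for n in hits if len(even_divisors(n)) == max_count)
```

Note that $36$ is the first number achieving six even divisors but not the only one (e.g. $198 = 2\cdot 3^2\cdot 11$ also has six, consistent with case A of the argument); the point is that six is never exceeded.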
Without loss of generality we can assume that $a>0$ and that $a_i>0$ for all $i$. -If any of the primes $p_i$ satisfies the congruence $p_i\equiv1\pmod 3$, then $2p_i+1$ is divisible by three, and $d=2p_i\mid n$ is in violation of the assumption. We can conclude that $p_i\equiv-1\pmod 3$ for all $i$. -If $k\ge 2$ then $2p_1p_2+1$ is divisible by three, so $d=2p_1p_2$ is in violation. Therefore $k\le1$. But also if $a_1\ge2$, then $2p_1^2+1$ is divisible by three and $d=2p_1^2$ is a violating factor. -At this point we know that either (call this case A) -$$ -n=2^a\cdot3^bp -$$ -for some prime $p\equiv-1\pmod3$, or (call this case B) -$$ -n=2^a3^b. -$$ -In case A we make the following further observations. First we see that $4p+1$ is again divisible by three, so $d=4p$ is in violation. Therefore in case A we must have $a=1$. Also, $2\cdot 3^3+1$ is a multiple of five, so we similarly conclude that $b\le 2$. -In case B we observe that $2^3+1$ and $2\cdot 3^3+1$ are not prime, and therefore $a\le2$ and $b\le2$. -So we are left with the possibilities - -$n=2\cdot 3^b p$ with $b\le2$. This $n$ has $2(b+1)$ even factors, so there are at most six of them. -$n=2^a\cdot 3^b$ with $a,b\le2$. This $n$ has $a(b+1)$ even factors, so again there are at most six of them.<|endoftext|> -TITLE: In every coloring of $1,\ldots,n$ there are distinct integers $a,b,c,d$ such that $a+b+c=d$ -QUESTION [7 upvotes]: Prove that for every $k$ there is a finite integer $n = n(k)$ so that for any coloring of the integers $1, 2, \ldots, n$ by $k$ colors there are distinct integers $a, b, c$ and $d$ of the same color satisfying $a + b + c = d$. - -REPLY [6 votes]: By Ramsey's theorem we can choose $n=n(k)$ so that, for any coloring of the $2$-element subsets of an $n+1$-element set with $k$ colors, there is a $7$-element subset whose $2$-element subsets all have the same color. -Now suppose the numbers $1,2,\dots,n$ are colored with $k$ colors.
Color the $2$-element subsets of the set $\{0,1,2,\dots,n\}$ as follows: if $x,y\in\{0,1,2,\dots,n\}$ and $x\lt y$, give $\{x,y\}$ the same color as the number $y-x$. Thus there are numbers $x_1\lt x_2\lt x_3\lt x_4\lt x_5\lt x_6\lt x_7$ such that all the differences $x_j-x_i\ (1\le i\lt j\le7)$ have the same color. -Choose $i\in\{3,4\}$ so that $x_i-x_2\ne x_2-x_1$ (this is possible since $x_3-x_2\ne x_4-x_2$, so at most one of them equals $x_2-x_1$) and then choose $j\in\{5,6,7\}$ so that $x_j-x_i\ne x_2-x_1,\ x_i-x_2$ (this is possible since the three differences $x_j-x_i$ are distinct and only two values have to be avoided). -Let $a=x_2-x_1,\ b=x_i-x_2,\ c=x_j-x_i$ and $d=x_j-x_1$. Then $a,\ b,\ c,\ d$ are distinct, and $a,\ b,\ c,\ a+b,\ b+c$, and $a+b+c=d$ all have the same color. -Alternatively choose $m=m(k)$ so that, for any coloring of the $2$-element subsets of an $m+1$-element set with $k$ colors, there is a $4$-element subset whose $2$-element subsets all have the same color, and let $n=n(k)=2^m-1$. -Now suppose the numbers $1,2,\dots,n$ are colored with $k$ colors. Color the $2$-element subsets of the set $\{0,1,\dots,m\}$ as follows: if $x,y\in\{0,1,\dots,m\}$ and $x\lt y$, give $\{x,y\}$ the same color as the number $2^y-2^x$. Thus there are numbers $x_1\lt x_2\lt x_3\lt x_4$ such that all the differences $2^{x_j}-2^{x_i}\ (1\le i\lt j\le4)$ have the same color. -Let $a=2^{x_2}-2^{x_1},\ b=2^{x_3}-2^{x_2},\ c=2^{x_4}-2^{x_3},\ d=2^{x_4}-2^{x_1}$. Then $a\lt b\lt c\lt d$ and $a,\ b,\ c,\ a+b,\ b+c$, and $a+b+c=d$ all have the same color. -This construction can be improved by using a Sidon sequence (also called a Golomb ruler) $a_0,a_1,\dots,a_m$ instead of $2^0,2^1,\dots,2^m.$<|endoftext|> -TITLE: Which is larger, $\sqrt[2015]{2015!}$ or $\sqrt[2016]{2016!}$? -QUESTION [18 upvotes]: This was a question in a maths contest, where no calculator was allowed. Also, note that only a ($>$, $<$ or $=$) relationship is being searched for and not the value of the numbers themselves. - -Which is larger, $\sqrt[2015]{2015!}$ or $\sqrt[2016]{2016!}$?
- -What I've done: -My approach is to divide one number by the other and infer from the result which number is the bigger one; -WolframAlpha gives $\frac{\sqrt[2016]{2016!}}{\sqrt[2015]{2015!}}=1.0049\ldots$, so clearly $\sqrt[2016]{2016!}>\sqrt[2015]{2015!}$ -Let $a=\sqrt[2016]{2016!}$ and $b=\sqrt[2015]{2015!}$ -$\therefore a=\sqrt[2016]{2016!}={2016!}^{1 \over 2016}=2016^{1 \over 2016}\times2015!^{1\over 2016}=\sqrt[2016]{2016}\cdot \sqrt[2016]{2015!}$ -$\therefore b=\sqrt[2015]{2015!}={2015!}^{1 \over 2015}={2015!}^{\frac{2016}{2015}\cdot\frac{1}{2016}}=\sqrt[2016]{2015!^{2016 \over 2015}}$ -Hence -$$\begin{align} -\require{cancel} -\frac{a}{b}=\frac{\sqrt[2016]{2016!}}{\sqrt[2015]{2015!}}&=\frac{\sqrt[2016]{2016}\cdot \sqrt[2016]{2015!}}{\sqrt[2016]{2015!^{2016 \over 2015}}}\\ -&=\sqrt[2016]{2016}\cdot \sqrt[2016]{2015!^{\frac{-1}{2015}}}= \cancelto{*}{\sqrt[2016]{\frac{2016}{2015!^{2015}}} \quad \text{which appears to be} <1}\\ -&=\sqrt[2016]{\frac{2016}{2015!^\frac{1}{2015}}}\\ -\end{align}$$ -That is $\cancelto{*}{\frac{a}{b}<1 \implies a<b}$. - -REPLY: Comparing logarithms, -$$\log\frac{\sqrt[2016]{2016!}}{\sqrt[2015]{2015!}}=\frac {1}{2016}\sum_{n=1}^{2016}\log (n)-\frac {1}{2015}\sum_{n=1}^{2015}\log (n)=$$ $$=\frac {1}{2016}\log (2016)-\frac {1}{(2016)(2015)}\sum_{n=1}^{2015}\log (n)>$$ $$>\frac {1}{2016}\log (2016)-\frac {1}{(2016)(2015)}\sum_{n=1}^{2015}\log (2015)=$$ $$=\frac {1}{2016}\log (2016)-\frac {1}{2016}\log (2015)\;>0\;.$$<|endoftext|> -TITLE: Hypersurfaces meet everything of dimension at least 1 in projective space -QUESTION [5 upvotes]: The following exercise is taken from Ravi Vakil's notes on algebraic geometry. - -Suppose $X$ is a closed subset of $\mathbb{P}^n_k$ of dimension at least $1$, and $H$ is a nonempty hypersurface in $\mathbb{P}^n_k$. Show that $H\cap X \ne \emptyset$. - -The clue suggests considering the cone over $X$. I'm stuck on this and I realized that I'm at this point again where I'm not sure what a neat formal proof of this should look like. -Thoughts: -Does a hypersurface in projective space mean $H=V_+(f)$, the homogeneous primes not containing $f$?
-If $X \hookrightarrow Proj(S_{\bullet})$ is a closed embedding then it corresponds (before taking $Proj$(-)) to a surjection of graded rings $S_{\bullet} \to R_{\bullet}=S_{\bullet}/I_+(X)$ where $I_+(X)$ is the set of all homogeneous elements vanishing on $X$. The cone $C(X)$ over $X$ is then obtained by taking the $Spec(-)$ of this morphism $C(X) \hookrightarrow Spec(S_{\bullet})$. Is that right? How can this help me prove the theorem above? -I got the feeling so far that there's a very elegant way to describe all hypersurfaces in terms of vanishing of global sections of line bundles. This would help me enormously since it would enable me to carry my geometric intuition to this setting. In this context the statement would look like: -A global section of a non-trivial line bundle on projective space must have a zero on all Zariski closed subsets. This is the same as saying that a nontrivial line bundle on projective space restricts to a nontrivial line bundle on all closed subspaces. And here I have a cohomology problem that feels pretty specific and manageable. This all feels much less ambiguous to me than "take the cone over $X$". Clarifying this would help me a lot. - -REPLY [5 votes]: Here is a different but most elementary proof that $H\cap X\neq \emptyset$: -The complement $ U=\mathbb P^n\setminus H$ of the hypersurface is an affine variety, an easy by-product of the Veronese embedding: see Theorem 1.1 here. -But it is impossible that $X\subset U$, since an affine variety cannot contain a positive-dimensional projective variety. -Hence $H\cap X$ is non-empty. -Edit -At the OP's request let me recall that given two points on an affine variety $U$ there is a regular function $h\in \mathcal O(U)$ taking different values at them, whereas on a projective variety $X$ all regular functions are constant.
-This is why $U$ cannot contain $X$: consider two points on $X$ and restrict $h$ to $X$ to obtain a contradiction.<|endoftext|> -TITLE: Countable-infinity-to-one function -QUESTION [14 upvotes]: Are there continuous functions $f:I\to I$ such that $f^{-1}(\{x\})$ is countably infinite for every $x$? Here, $I=[0,1]$. -The question "Infinity-to-one function" answers is similar but without the condition that it be countable. (The range was $S^2$, not $I$, but the accepted answer also worked for $I$.) -I doubt one exists, since I haven't been able to come up with one, but I'm not sure. There's probably some topological reason why this is impossible. - -REPLY [6 votes]: Here is an example. Start with the Cantor function $g:[0,1]\to[0,1]$, i.e. the function that sends $x=\sum a_n/3^n$ to $\sum a_n/2^{n+1}$ if every $a_n$ is $0$ or $2$ and is locally constant off of the Cantor set $K$. Note that for each dyadic rational $q\in(0,1)$, there is a unique (nondegenerate) interval $[a_q,b_q]$ with $a_q,b_q\in K$ such that $g(x)=q$ for all $x\in[a_q,b_q]$, and $[0,1]\setminus K$ is the disjoint union of the intervals $(a_q,b_q)$. Define a function $h:[0,1]\to[0,1]$ by saying $h=g$ on $K$, and on each interval $[a_q,b_q]$, $h$ is a finite-to-one continuous surjection $[a_q,b_q]\to[q,q+1/2^n]$, where $2^n$ is the denominator of $q$ (in lowest terms), and $h(a_q)=h(b_q)=q$. -The function $h$ is clearly continuous when restricted to each interval $[a_q,b_q]$, and in particular is continuous off of $K$. As you approach a point of $K$ without staying in a single interval $[a_q,b_q]$, you pass through infinitely many such intervals $[a_q,b_q]$, with the denominators of the numbers $q$ getting larger and larger, and so $h$ remains continuous because $g$ is continuous. Thus $h$ is continuous on all of $[0,1]$. -I claim every point of $[0,1]$ except $0$ has countably infinitely many preimages under $h$ (since $h(x)\geq g(x)$ for all $x$, $0$ is the only preimage of $0$). 
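The Cantor function $g$ at the heart of this construction is easy to evaluate digit by digit. The following sketch is a hypothetical illustration, not part of the original answer (the function name `cantor` and the truncation depth are my own): it walks the ternary expansion of $x$, stopping when it meets the digit $1$ (then $x$ lies in a removed middle third, where $g$ is locally constant) and otherwise turning ternary digits $0,2$ into binary digits $0,1$ of $g(x)$.

```python
def cantor(x, depth=40):
    """Approximate the Cantor function on [0, 1) by walking the ternary
    expansion of x: a digit 1 means x lies in a removed middle third,
    where the function is locally constant; digits 0 and 2 become the
    binary digits 0 and 1 of the output."""
    result, scale = 0.0, 0.5
    for _ in range(depth):
        x *= 3
        digit = int(x)
        x -= digit
        if digit == 1:          # interior (or endpoint) of a removed interval
            return result + scale
        result += scale * (digit // 2)
        scale /= 2
    return result
```

For instance this gives $g(1/3)=g(1/2)=1/2$ and $g(1/4)=1/3$, and the computed values are nondecreasing, as the Cantor function should be.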
It is clear that every point has countably many preimages: $h$ agrees with $g$ on $K$ and $g$ is finite-to-one on $K$, and off of $K$, $h$ is finite-to-one on each of the countably many intervals $[a_q,b_q]$. Now if $x\in[0,1]$ and $q\in(0,1)$ is any dyadic rational obtained by truncating a binary expansion of $x$ at some point, then by construction, $h$ takes the value $x$ somewhere on the interval $[a_q,b_q]$ (since $q\leq x\leq q+1/2^n$). If $x\neq0$, then there are infinitely many different dyadic rationals $q\in(0,1)$ that can be obtained by truncating a binary expansion of $x$ (if $x$ is a dyadic rational, use the binary expansion of it that ends in $1$s). So we find that $x$ must have infinitely many preimages unless $x=0$. -Finally, it is easy to modify $h$ to give $0$ infinitely many preimages. For instance, define $f(x)=i(x)$ if $x\in[0,1/2]$ and $f(x)=h(2x-1)$ if $x\in[1/2,1]$, where $i:[0,1/2]\to[0,1]$ is any countable-to-one continuous function that achieves the value $0$ infinitely many times and satisfies $i(1/2)=0$ (it is easy to construct such a function by appropriately modifying the function $x\mapsto x\sin^2(1/x)$). Then $f$ is a continuous function $[0,1]\to[0,1]$ with countably infinitely many preimages for each point.<|endoftext|> -TITLE: Why is my Monty Hall answer wrong using Bayes Rule? -QUESTION [8 upvotes]: The Monty Hall problem is described this way: - -Suppose you're on a game show, and you're given the choice of three - doors: Behind one door is a car; behind the others, goats. You pick a - door, say No. 1, and the host, who knows what's behind the doors, - opens another door, say No. 3, which has a goat. He then says to you, - "Do you want to pick door No. 2?" Is it to your advantage to switch - your choice? - -I am interested in finding the probability of winning when you switch. I already know it's $2/3$ but I want to show it with Bayes Rule. 
-I tried this:
-$A$ = car behind door $1$
-$B$ = goat is behind door $3$
-$$P(A|B) = \frac{P(B|A)P(A)}{P(B)} = \frac{1 \cdot 1/3}{1-1/3} = \frac{1}{2}$$
-$P(B|A)$ = the probability that a goat is behind door $3$ given that the car is behind door $1$. This is equal to $1$ because if we know where the car is, then any other door must have a goat.
-$P(A)$ = the probability of the car being behind door $1$. Assuming any door is equally likely to contain a car before we open any doors, this is $1/3$.
-$P(B)$ = the probability of a goat being behind door $3$. This is equal to $1$ minus the probability that the car is behind door $3$, so $1-1/3$.
-Where is my mistake?
-
-REPLY [6 votes]: If you define event $B$ simply as 'there is a goat behind door 3', then of course $P(A|B)=\frac{1}{2}$, for there are two options left for the car. And your use of Bayes' theorem to show $P(A|B)=\frac{1}{2}$ is also correct, for indeed with the $B$ defined this way, you have $P(A)=\frac{1}{3}$, $P(B)=\frac{2}{3}$, and $P(B|A)=1$.
-Put differently: asking what the chance is that door 1 has a car given that door 3 has a goat is effectively ignoring the whole 'game play' behind this problem. That is, you are not taking into account that Monty is revealing a door as a result of your choice, and whatever other assumptions are in force (such as: Monty knows where the prize is; Monty is certain to open a door with a goat; if you initially pick a door with the car, Monty will randomly pick one of the remaining two). Instead, event $B$ simply says: "there is a goat behind door $3$". Indeed, as such, the problem statement might as well be:
-
-Suppose you're on a game show, and you're given the choice of three doors: Behind one door is a car; behind the others, goats. Oh, and another thing: door $3$ contains a goat. You pick door $1$. Is it to your advantage to switch your choice?
-
-OK, what we need to do is take into account Monty's actions: it is indeed Monty's-act-of-picking-and-revealing-door-$3$-to-have-a-goat that is all important here.
-So, instead, define $B$ as: "Monty Hall shows door $3$ to have a goat"
-Notice how this is crucially different: for example, if door $1$ has the car, then door $3$ is certain to have a goat, but Monty is not certain to open door $3$ and reveal that: he might also open door $2$.
-Now, let's use the standard assumptions that Monty is always sure to reveal a door with a goat and that, if both remaining doors have a goat, Monty chooses randomly between them to open.
-OK, now $P(A)$ is still $\frac{1}{3}$, but otherwise things change radically with this new definition of $B$:
-First, $P(B|A)$: Well, as pointed out above, this is no longer $1$, but becomes $\frac{1}{2}$, since Monty randomly chooses between doors 2 and 3 to open up.
-Next, $P(B)$: what is the probability Monty opens door 3 to reveal a goat? There are two cases to consider in which Monty opens door 3: door 1 has the car, or door 2 has the car, each having a probability of $\frac{1}{3}$. Now, if door 1 has the car then, as we saw, there is a probability of $\frac{1}{2}$ of Monty revealing door 3 to have a goat. If door 2 has the car, then Monty is certain to reveal door 3 to have a goat. So: $P(B)=\frac{1}{3}\cdot \frac{1}{2}+\frac{1}{3}\cdot 1 = \frac{1}{2}$.
-Plugging this into Bayes' rule:
-$$P(A|B)=\frac{P(B|A)\cdot P(A)}{P(B)}=\frac{\frac{1}{2}\cdot \frac{1}{3}}{\frac{1}{2}}=\frac{1}{3}$$
-I believe the difference between these two $B$'s is actually at the heart of the Monty Hall Paradox. Most people will treat Monty opening door 3 and revealing a goat simply as the information that "door 3 has a goat" (i.e. as your initial $B$), in which case switching makes no difference, whereas using the information that "Monty Hall opens door 3 and reveals a goat" (i.e.
as the newly defined $B$), switching does turn out to make a difference (again, within the context of the Standard Assumptions regarding this puzzle). And this is hard to grasp.<|endoftext|>
-TITLE: Derivative of multivariate normal distribution wrt mean and covariance
-QUESTION [11 upvotes]: I want to differentiate this wrt $\mu$ and $\Sigma$ :
-$${1\over \sqrt{(2\pi)^k |\Sigma |}} e^{-0.5 (x-\mu)^T \Sigma^{-1} (x-\mu)} $$
-I'm following the matrix cookbook here and also this answer. The solution given in the answer (2nd link) doesn't match with what I read in the cookbook.
-For example, for this term, if I follow rule 81 from the linked cookbook, I get a different answer (differentiating wrt $\mu$) :
-$(x-\mu)^T \Sigma^{-1} (x-\mu)$
-According to the cookbook, the answer should be : $-(\Sigma^{-1} + \Sigma^{-T}) (x-\mu)$ . Or, am I missing something here? Also, how do I differentiate $(x-\mu)^T \Sigma^{-1} (x-\mu)$ with respect to $\Sigma$ ?
-
-REPLY [5 votes]: I also had the same question as you. After trying equation 81 from the Matrix cookbook, I got this equation:
-$$
-\frac{\partial{f}}{\partial{\mu}} = -\frac{1}{2}(\Sigma ^{-1} + (\Sigma^{-1})^{T}) (x - \mu)*(-1)
-$$
-Since $ \Sigma $ is the covariance matrix, it is symmetric. The inverse of a symmetric matrix is also symmetric (Is the inverse of a symmetric matrix also symmetric?). Therefore, we have $ (\Sigma^{-1})^{T} = \Sigma ^{-1} $.
-Now, the above equation reduces to
-$$
-\frac{\partial{f}}{\partial{\mu}} = \Sigma ^{-1}(x - \mu) $$<|endoftext|>
-TITLE: Why does the minimum value of $x^x$ equal $1/e$?
-QUESTION [11 upvotes]: The graph of $y=x^x$ looks like this:
-
-As we can see, the graph has a minimum value at a turning point. According to WolframAlpha, this point is at $x=1/e$.
-I know that $e$ is the number for exponential growth and $\frac{d}{dx}e^x=e^x$, but these ideas seem unrelated to the fact that the minimum value of $x^x$ is $1/e$.
Is this just pure coincidence, or could someone provide an intuitive explanation (i.e. more than just a proof) of why this is?
-
-REPLY [2 votes]: Try to minimize the logarithm of $ y= x^x,$ i.e., $y=x \, \log x$
-Its derivative is
-$$ 1 + \log(x) $$
-When equated to zero, it shows that the minimum of $y(x)$ occurs at
-$$ x= \dfrac{1}{e}= 0.36788$$
-and the minimum value is
-$$ y_{min}= \dfrac{1}{{e}^{\frac{1}{e}}} \approx 0.6922$$
-The tiny red dot shown in your graph for the minimum is:
-$$ (x,y) = (0.36788, 0.6922) $$
-Note that the minimum value is not $\dfrac{1}{e}\, ! \, $ ... but is the reciprocal of the $e^{th}$ root of $e.$<|endoftext|>
-TITLE: How to calculate $\sum_{n \in P}\frac{1}{n^2}, P=\{n \in \mathbb{N}: \exists (a,b) \in\ \mathbb{N^+} \times \mathbb{N^+} \mbox{ with } a^2+b^2=n^2\}$
-QUESTION [5 upvotes]: How can I evaluate
-$$\sum_{n \in P}\frac{1}{n^2} \quad \quad P=\{n \in \mathbb{N^+}: \exists (a,b) \in\ \mathbb{N^+} \times \mathbb{N^+} \mbox{ with } a^2+b^2=n^2\}$$
-It's clearly convergent. I thought about seeing the sum as a sum of complex numbers using $(a+ib)(a-ib)=a^2+b^2$.
-
-REPLY [3 votes]: Let $S(x)$ denote the number of positive integers not exceeding $x$ which can be expressed as a sum of two squares. Then, as proved by Landau in 1908, the following limit holds.
-$$
-\lim_{x\to\infty} \frac{\sqrt{\ln x}}{x} S(x) = K,
-$$
-where $K \approx 0.76422365358922$ is a constant. The convergence to the constant $K$, known as the Landau–Ramanujan constant, is very slow. The first ten thousand digits of $K$ are here.
-The exact value of $K$ can be expressed as
-$$
-K = \frac{1}{\sqrt{2}} \prod_{\substack{p \text{ prime } \\ \equiv \, 3 \pmod{4}}} \left(1-\frac{1}{p^2}\right)^{-1/2},
-$$
-so we have that
-$$
-\prod_{\substack{p \text{ prime } \\ \equiv \, 3 \pmod{4}}} \left(1-\frac{1}{p^2}\right) = \frac{1}{2K^2}.
-$$
-From the excellent answer of @Jack D'Aurizio we know that
-$$
-S = \sum_{n\in P}\frac{1}{n^2} = \frac{\pi^2}{6}-\frac{4}{3}\cdot \prod_{\substack{p \text{ prime } \\ \equiv \, 3 \pmod{4}}} \left(1-\frac{1}{p^2}\right)^{-1}. \\
-$$
-Using the exact value of $K$ we could express your sum in terms of the Landau–Ramanujan constant.
-
-$$
-S = \sum_{n\in P}\frac{1}{n^2} = \frac{\pi^2}{6}-\frac{8K^2}{3}.
-$$
-
-Numerically
-
-$$
-\begin{align}
-S \approx
-0.08749995296754071824615285056063798739937787259940111394813\\
-9987884926591232919611579752270225245544983905801979851301833\\
-539947996701320476568130203602037004645936371\dots\phantom{0000000000000}
-\end{align}
-$$
-
-Note that the numerical evaluation of the product for $K$ is hopeless in this form. You could find another expression for $K$ here (eq. 11), which has much faster convergence.<|endoftext|>
-TITLE: Interesting shapes using probability and discrete view of a problem
-QUESTION [9 upvotes]: Suppose we have a circle of radius $r$; we denote the distance between a point and the center of the circle by $d$. We then choose each point inside the circle with probability $\frac{d}{r}$, and turn it black (note that $\frac{d}{r}<1$). With these rules we get shapes like this: (With help of Java)
-
-
-
-The shapes made were pretty interesting to me. I decided to add a few lines of code to my program to keep count of the drawn points, and then divide them by the area of the circle, to see what percentage of the circle is full. All I got for multiple tests was a number getting close to $2/3$ as the circle got bigger.
-Problem A: Almost what percentage of the circle is black?
I found an answer as follows:
-As all the points with distance $l$ from the center lie on a circle of radius $l$ and the same center, to find the "probable" number of black points, we shall consider all circles with radius $l$ ($0<l<r$). There can be no circle of radius $\epsilon>0$ full of black points: there exists an infinite number of points so that their distance to the center of the circle is equal, and the probability of all of them being black is $0$ (because there is an infinite number of them, $d$ and $r$ are constant, $\frac{d}{r}<1$ and $\lim_{n\to\infty}{(\frac{d}{r})}^n=0$).
-It gets more interesting with a discrete view of the problem. Suppose we have a grid full of $1*1$ squares with the size $(2r+1)*(2r+1)$. We define the reference square with length $2l+1$ (we call $l$ its radius) as the set of squares "around" the reference square with length $2l-1$, and the reference square with length $3$ is the set of $8$ squares around the central square. We define the distance $d$ of a square from the central square by the radius of its reference square. Now we are ready to propose a problem similar to Problem A. Problem C: Suppose each square with distance $d$ to the central square turns black with a probability $\frac{d}{r}$. Prove that almost $2/3$ of the squares turn black, as $r\to \infty$
-We begin our proof by proving that a reference square of radius $l$ contains exactly $8l$ squares. This is easy because there are $(2l+1)^2-(2l-1)^2=8l$ squares in a reference square with radius $l$. Now, each square in the reference square is black with probability $\frac{l}{r}$, so almost $\frac{l}{r}*8l=\frac{8l^2}{r}$ of the reference square is black (Similar to the proof of Problem A).
By summing up all the reference squares with radius $l=1$ to $r$ we get: $$\sum_{l=1}^{r} \frac{8l^2}{r} = \frac{8}{r}*\frac{(r)(r+1)(2r+1)}{6}=\frac{4(r+1)(2r+1)}{3}$$ and so the percentage of the whole square that is black (as $r$ tends to infinity) is:$$\lim_{r\to\infty} \frac{\frac{4(r+1)(2r+1)}{3}}{(2r+1)^2}=\lim_{r\to\infty}\frac{4(r+1)}{3(2r+1)}=\frac{2}{3}$$ as expected.
-Some problems, though, still remain.
-First, I would appreciate the verification of my solutions to problems A, B and C. The program I wrote in Java only fills out a finite number of "pixels"; does this represent Problem A or Problem C? Would there be a difference if we draw the circle for an infinite number of "points"? Second, the result of Problem B seems a little strange because as $X$ tends to $\frac{3}{2}$, $P$ should go to $1$ because $\frac{2}{3}$ of the circle is black (as proved in Problem A). Then why do we get $P=0$ for all values of $X$? How can we explain this?
-Third, what is the connection between the discrete view of the problem and the main problem? Can someone generalize the proof of Problem C to prove Problem A? I think one can easily generalize Problem C to "circles" full of tiny squares and prove the fact similarly, but going to an infinite number of points instead of "pixels" (which are equivalent to tiny squares in Problem C) is still another matter.
-I would appreciate any help.
-
-REPLY [2 votes]: Your approaches to Problems A and C are correct. To eliminate the ambiguity between 0-dimensional points and 2-dimensional pixels and probability, we can determine the expected proportion $s$ of shaded pixels at radius $d$:
-$$E(s)=\lim_{n\to\infty} (\sum_{i=1}^n {1 \over n} * {d \over r}) = {d \over r}$$
-That allows us to integrate from $0$ to $r$ to find the expected shaded area of the entire circle, based on each individual thin ring, just as you demonstrated.
-$$ -{\int_0^r E(s) * 2 \pi d \; dd -\over -\int_0^r 2 \pi d \; dd -} -\\ -= -{\int_0^r {d \over r} * 2 \pi d \; dd -\over -\int_0^r 2 \pi d \; dd -} -\\= -{{2 \over 3} \pi r^2 -\over -\pi r^2 -} -={2 \over 3} -$$ -For Problem B, I think the approach may be flawed. You're trying to determine the probability of the aggregate proportional area, which is a single value and can be determined empirically based on the probability of any given point being shaded. Essentially, $P(E(s)={1 \over x})$. Imagine that you have an urn with a single white ball: -$$P(white) = 1\\ -P(not\,white) = 0$$ -Just as this theoretical urn has only one potential outcome, a white ball, so does your circle have only one potential value for the overall ratio of shaded area. -$$P(Shaded \, area = {2 \over 3} \pi r ^2) = 1\\ -P(Shaded \, area \ne {2 \over 3} \pi r^2) = 0$$<|endoftext|> -TITLE: Has this equation appeared before? -QUESTION [6 upvotes]: I want to know if the following equation has appeared in mathematical literature before, or if it has any important significance. -$$\sqrt{\frac{a+b+x}{c}}+\sqrt{\frac{b+c+x}{a}}+\sqrt{\frac{c+a+x}{b}}=\sqrt{\frac{a+b+c}{x}},$$ -where $a,b,c$ are any three fixed positive real and $x$ is the unknown variable. - -REPLY [2 votes]: This provides the explicit polynomial in $x$ (for those curious), though I'm not aware if the equation has appeared in the mathematical literature. We get rid of the square roots by multiplying out the $8$ sign changes, -$$\prod^8 \left(\sqrt{\frac{a+b+x}{c}}\pm\sqrt{\frac{b+c+x}{a}}\pm\sqrt{\frac{a+c+x}{b}}\pm\sqrt{\frac{a+b+c}{x}}\right)=0$$ -then collecting powers of $x$. It turns out the $8$th-deg equation factors into a linear (cubed), a quadratic, and a cubic. 
For simplicity, let,
-$$\begin{aligned}
-p &= a+b+c\\
-q &= ab+ac+bc\\
-r &= abc
-\end{aligned}$$
-Then,
-$$(p+x)^3=0\tag1$$
-$$r^2 - 2 q r x + (q^2 - 4 p r) x^2 = 0\tag2$$
-$$p r^2 + r (-2 p q + 9 r) x + (p q^2 - 4 p^2 r + 6 q r) x^2 + (q^2 - 4 p r) x^3 = 0\tag3$$
-
-Example:
-
-Let $a,b,c = 1,2,4$, then
-$$(7+x)^3=0\\
--16 + 56 x + 7 x^2 = 0\\
--112 + 248 x - 119 x^2 + 7 x^3 = 0$$
-The roots of the quadratic solve,
-$$\sqrt{\frac{a+b+x}{c}}\pm\sqrt{\frac{b+c+x}{a}}+\sqrt{\frac{a+c+x}{b}}-\sqrt{\frac{a+b+c}{x}}=0$$
-while a root of the cubic solves,
-$$\sqrt{\frac{a+b+x}{c}}-\sqrt{\frac{b+c+x}{a}}-\sqrt{\frac{a+c+x}{b}}+\sqrt{\frac{a+b+c}{x}}=0$$
-and two others, while the linear root takes care of the remaining three sign changes.<|endoftext|>
-TITLE: Why are there more Irrationals than Rationals given the density of $Q$ in $R$?
-QUESTION [5 upvotes]: I'm reading "Understanding Analysis" by Abbott, and I'm confused about the density of $Q$ in $R$ and how that ties to the cardinality of rational vs irrational numbers.
-First, on page 20, Theorem 1.4.3 "Density of $Q$ in $R$" Abbott states:
-
-For every two real numbers a and b with a < b, there exists a rational number r satisfying a < r < b.
-
-For which he provides a proof.
-Later, on page 22, in the section titled "Countable and Uncountable Sets" he states:
-
-Mentally, there is a temptation to think of $Q$ and $I$ as being intricately mixed together in equal proportions, but this turns out not to be the case...the irrational numbers far outnumber the rational numbers in making up the real line.
-
-My question is: how are these two statements not in direct contradiction? Given any closed interval of irrational numbers of cardinality $X$, $A$, shouldn't it be the case that we would have a corresponding set of $X-1$ rational numbers, $B$, where each rational in $B$ falls "between" two other irrationals in $A$?
-If this is not the case, how do we have so many more irrationals than rationals while still satisfying our theorem that between every two reals there is a rational number?
-I know there are other questions similar to this, but I haven't found an answer that explains this very well, and none that address this (perceived) contradiction.
-
-REPLY [4 votes]: Given any closed interval of irrational numbers of cardinality $X$, $A$, shouldn't it be the case that we would have a corresponding set of $X-1$ rational numbers, $B$, where each rational in $B$ falls "between" two other irrationals in $A$?
-
-That will certainly be true if you change the word "interval" to "set"
-and stipulate that $X$ is a finite integer.
-Consider a finite set $A$ containing $X$ distinct irrational numbers and nothing else, where $X \in \mathbb Z$.
-Then you can arrange the members of $A$ in an increasing sequence, that is,
-write $A = \{a_i\}, 1 \leq i \leq X$ such that $a_i > a_{i-1}$ when $i > 1$.
-And then you can insert $X - 1$ rational numbers in the "gaps" between
-the consecutive members of $\{a_i\}$.
-The problem with this in the more general case is that there are more than a finite number of irrational numbers in any closed interval in $\mathbb R$.
-In fact, there are more than a countable number of them.
-You can't just go and insert a rational number between each consecutive pair
-of irrational numbers, because there is no such thing as a consecutive pair of irrational numbers in an interval. In fact, take any two irrational numbers $r, s$ in the interval; there will be an uncountably infinite number of irrational numbers between $r$ and $s$.
-We do indeed have a rational number $q$ that falls between $r$ and $s$,
-in fact a countably infinite set of such numbers; but we also have an uncountably infinite set of irrational numbers that fall between $r$ and $s$.
There is no way to organize these numbers into an increasing sequence of alternating irrational and rational numbers, like this: -$$ r_1 < q_1 < r_2 < q_2 < r_3 < \cdots < r_{X-1} < q_{X-1} < r_X, $$ -so any counting argument based on imagining such a sequence is incorrect.<|endoftext|> -TITLE: There is no Baire bijection between $\mathbb R$ and the set of functions $\mathbb Z\to\mathbb R$ modulo shifts -QUESTION [5 upvotes]: Let $X$ denote the set $\mathbb{R}^\mathbb{Z}$ (the set of all functions from integers to reals), -and $\sim$ the equivalence relation on $X$ defined by: -$f \sim g$ iff there is a $z \in \mathbb{Z}$ such that for all $z' \in \mathbb{Z}: f(z'+z) = g(z')$. -Consider the quotient set $X/\sim$. -Some years ago, Mike Oliver made the remark that no bijection between $X/\sim$ and $\mathbb{R}$ could possibly be a Baire function. -I could do with a hint (or two) on how to prove that. - -REPLY [2 votes]: Forget about $\mathbb{R}^\mathbb{Z}$; just look at $2^\mathbb{Z}$. -Hint 1: any set $A\subseteq 2^\mathbb{Z}$ which has the Baire property and which is $\sim$-saturated (in the sense that $x\in A$, $x\sim y$ implies $y\in A$) must be either meager or comeager. -Hint 2: if $f : 2^\mathbb{Z}\to \mathbb{R}$ is Baire-measurable and constant on every $\sim$-equivalence class, think about what $f^{-1}(U)$ could be, where $U$ ranges over open sets. Maybe do something with a countable basis for $\mathbb{R}$... -I hope this is enough of a hint.<|endoftext|> -TITLE: Symbol for "the greater of the two values" -QUESTION [11 upvotes]: I'm looking for an operator that returns the greater of two values. -Here's an example. If $a=5$, $b=6$ and $???$ is the operator, I'd like to have $x$ equal $b$ when I do $x=a???b$, since $b$ is the larger of the two values. 
- -REPLY [3 votes]: For $S=\{a_1,\cdots,a_n\}$, define $*$ inductively by operator $*:\mathbb{R}^n\to\mathbb{R}$ as $*(a_1,a_2)=\dfrac{|a_2-a_1|+a_2+a_1}{2}$ if $n=2$ and $*:\mathbb{R}^{n+1}\to\mathbb{R}$ as $*(a_1,...,a_n,a_{n+1})=*(*(a_1,\cdots,a_n),a_{n+1})$. Thus $*(a_1,\cdots,a_n)=\max S$ for any set $S=\{a_1,\cdots,a_n\}$.<|endoftext|> -TITLE: An elementary verification of the equivalence between two expressions for $e^x$ -QUESTION [5 upvotes]: I would appreciate some constructive comments on the following argument for -\begin{equation*} -\sum_{n=0}^{\infty} \frac{x^{n}}{n!} -= \lim_{n\to\infty} \left(1 + \frac{x}{n}\right)^{n} . -\end{equation*} -I understand that there are several different arguments for it. I like that this does not involve the natural logarithm function. I looked in many elementary textbooks on real analysis for such an argument. I only found one, but it was in the special case $x = 1$, and it was flawed. The only analysis techniques used are the convergence of the sequence defined by -\begin{equation*} -\left(1 + \frac{x}{n}\right)^{n} , -\end{equation*} -and the absolute convergence of -\begin{equation*} -\sum_{n=0}^{\infty} \frac{x^{n}}{n!} . -\end{equation*} -Here it is. -Demonstration -According to the Binomial Theorem, for every positive integer $n$, -\begin{align*} -&\sum_{i=0}^{n} \frac{x^{i}}{i!} - \left(1 + \frac{x}{n}\right)^{n} \\ -&\qquad = \sum_{i=0}^{n} \frac{x^{i}}{i!} - \sum_{i=0}^{n} \binom{n}{i} \frac{x^{i}}{n^{i}} \\ -&\qquad = \sum_{i=0}^{n} \left[\frac{x^{i}}{i!} - \binom{n}{i} \frac{x^{i}}{n^{i}} \right] \\ -&\qquad = \sum_{i=0}^{n} \left[\frac{1}{i!} - \frac{1}{n^{i}} \binom{n}{i} \right] x^{i} \\ -&\qquad = \sum_{i=2}^{n} \left[\frac{1}{i!} - \frac{1}{n^{i}} \binom{n}{i} \right] x^{i} . 
-\end{align*} -For each integer $2 \leq i \leq n$, -\begin{align*} -\frac{1}{n^{i}} \binom{n}{i} &= \frac{1}{n^{i}} \cdot \frac{n!}{i!(n-i)!} \\ -&= \frac{1}{n^{i}} \cdot \frac{n(n - 1)(n - 2) \cdots (n - i + 1)}{i!} \\ -&= \frac{1}{i!} \cdot \frac{n(n - 1) (n - 2) \cdots (n - (i - 1))}{n^{i}} \\ -&= \frac{1}{i!} \left(1 - \frac{1}{n}\right) \left(1 - \frac{2}{n}\right) \cdots \left(1 - \frac{i - 1}{n}\right) . -\end{align*} -So, -\begin{equation*} -\sum_{i=0}^{n} \frac{x^{i}}{i!} - \left(1 + \frac{x}{n}\right)^{n} -= \sum_{i=2}^{n} \left[1 - \left(1 - \frac{1}{n}\right) \left(1 - \frac{2}{n}\right) \cdots \left(1 - \frac{i - 1}{n}\right) \right] \frac{x^{i}}{i!} -\end{equation*} -According to the Triangle Inequality, for each pair of positive integers $2 \leq k < n$, -\begin{align*} -&\left\vert \sum_{i=0}^{n} \frac{x^{i}}{i!} - \left(1 + \frac{x}{n}\right)^{n} \right\vert \\ -&\qquad \leq \left\vert \sum_{i=2}^{k} \left[1 - \left(1 - \frac{1}{n}\right) \left(1 - \frac{2}{n}\right) \cdots \left(1 - \frac{i - 1}{n}\right) \right] \frac{x^{i}}{i!} \right\vert \\ -&\qquad\qquad + \left\vert \sum_{i=k+1}^{n} \frac{x^{i}}{i!} \right\vert -+ \left\vert \sum_{i=k+1}^{n} \frac{1}{n^{i}} \binom{n}{i} x^{i} \right\vert \\ -&\qquad \leq \sum_{i=2}^{k} \left[1 - \left(1 - \frac{1}{n}\right) \left(1 - \frac{2}{n}\right) \cdots \left(1 - \frac{i - 1}{n}\right) \right] \frac{\vert x \vert^{i}}{i!} \\ -&\qquad\qquad + \sum_{i=k+1}^{n} \frac{\vert x \vert^{i}}{i!} -+ \sum_{i=k+1}^{n} \frac{1}{n^{i}}\binom{n}{i} \vert x \vert^{i} . 
-\end{align*} -\begin{align*} -&\sum_{i=k+1}^{n} \frac{1}{n^{i}} \binom{n}{i} \vert x \vert^{i} \\ -&\qquad = \sum_{i=k+1}^{n} \frac{1}{i!} \left(1 - \frac{1}{n}\right) \left(1 - \frac{2}{n}\right) \cdots \left(1 - \frac{i - 1}{n}\right) -\vert x \vert^{i} \\ -&\qquad = \frac{1}{(k+1)!} \left(1 - \frac{1}{n}\right) \left(1 - \frac{2}{n}\right) \cdots \left(1 - \frac{k}{n}\right) \vert x \vert^{k+1} \\ -&\qquad\qquad \!\begin{aligned}[t] -&+ \frac{1}{(k+2)!} \left(1 - \frac{1}{n}\right) \left(1 - \frac{2}{n}\right) \cdots \left(1 - \frac{k+1}{n}\right) \vert x \vert^{k+2} \\ -&+ \frac{1}{(k+3)!} \left(1 - \frac{1}{n}\right) \left(1 - \frac{2}{n}\right) \cdots \left(1 - \frac{k+2}{n}\right) \vert x \vert^{k+3} \\ -&+\ldots -+ \frac{1}{n!} \left(1 - \frac{1}{n}\right) \left(1 - \frac{2}{n}\right) \cdots \left(1 - \frac{n-1}{n}\right) \vert x \vert^{n} . -\end{aligned} \\ -&\qquad = \frac{1}{k!} \left(1 - \frac{1}{n}\right) \left(1 - \frac{2}{n}\right) \cdots \left(1 - \frac{k}{n}\right) \vert x \vert^{k} \\ -&\qquad\qquad \!\begin{aligned}[t] -&\biggl[\frac{1}{k+1} \left(1 - \frac{k}{n}\right) \vert x \vert \\ -&\hphantom{\biggl[\vphantom{\frac{1}{k+1}}}+ \frac{1}{(k+1)(k+2)} \left(1 - \frac{k}{n}\right) \left(1 - \frac{k+1}{n}\right) \vert x \vert^{2} \\ -&\hphantom{\biggl[\vphantom{\frac{1}{k+1}}}+ \frac{1}{(k+1)(k+2)(k+3)} \left(1 - \frac{k}{n}\right) \left(1 - \frac{k+1}{n}\right) \left(1 - \frac{k+2}{n}\right) \vert x \vert^{3} \\ -&\hphantom{\biggl[\vphantom{\frac{1}{k+1}}}+ \ldots \\ -&\hphantom{\biggl[\vphantom{\frac{1}{k+1}}} -+ \frac{1}{(k+1)(k+2) \cdots n} \left(1 - \frac{k}{n}\right) \left(1 - \frac{k+1}{n}\right) \cdots \left(1 - \frac{n-1}{n}\right) -\vert x \vert^{n-k} -\biggr] -\end{aligned} \\ -&\qquad < \frac{\vert x \vert^{k}}{k!} \left[ -\frac{\vert x \vert}{k+1} + \left(\frac{\vert x \vert}{k+1}\right)^{2} -+ \ldots + \left(\frac{\vert x \vert}{k+1}\right)^{n-k} -\right] . 
\\ -\text{So, if $k \geq \vert x \vert$,} \\ -&\sum_{i=k+1}^{n} \frac{1}{n^{i}} \binom{n}{i} \vert x \vert^{i} \\ -&\qquad < \frac{\vert x \vert^{k}}{k!} \sum_{i=1}^{\infty} \left(\frac{\vert x \vert}{k+1} \right)^{i} \\ -&\qquad = \frac{\vert x \vert^{k}}{k!} \cdot \frac{\dfrac{\vert x \vert}{k + 1}}{1 - \dfrac{\vert x \vert}{k+1}} \\ -&\qquad = \frac{\vert x \vert^{k}}{k!} \cdot \frac{\vert x \vert}{k + 1 - \vert x \vert} , \\ -\text{and if $k \geq 2\vert x \vert$,} \\ -&\sum_{i=k+1}^{n} \frac{1}{n^{i}} \binom{n}{i} \vert x \vert^{i} -< \frac{\vert x \vert^{k}}{k!} . -\end{align*} -By the absolute convergence of -\begin{equation*} -\sum_{n=0}^{\infty} \frac{x^{n}}{n!} , -\end{equation*} -for every $\epsilon > 0$, there is a big enough positive integer $K$ such that for every integer $k \geq K$, -\begin{equation*} -\sum_{i=k}^{\infty} \frac{\vert x \vert ^{i}}{i!} -< \frac{\epsilon}{3} , -\end{equation*} -and so, -\begin{equation*} -\frac{\vert x \vert^{k}}{k!} -< \sum_{i=k}^{\infty} \frac{\vert x \vert ^{i}}{i!} -< \frac{\epsilon}{3} -\qquad \text{and} \qquad -\sum_{i=k+1}^{\infty} \frac{\vert x \vert ^{i}}{i!} -< \sum_{i=k}^{\infty} \frac{\vert x \vert ^{i}}{i!} -< \frac{\epsilon}{3} . -\end{equation*} -So, if $k \geq \max\{2\vert x \vert, \, K\}$, and if $n > k$, -\begin{equation*} -\sum_{i=k+1}^{n} \frac{1}{n^{i}} \binom{n}{i} \vert x \vert^{i} -< \frac{\vert x \vert^{k}}{k!} -< \frac{\epsilon}{3} . 
-\end{equation*} -Likewise, since for each integer $2 \leq i \leq k$, -\begin{equation*} -\lim_{n\to\infty} \left[1 - \left(1 - \frac{1}{n}\right) \left(1 - \frac{2}{n}\right) \cdots \left(1 - \frac{i - 1}{n}\right) \right] = 0 , -\end{equation*} -there is a big enough positive integer $N$ such that for every integer $n \geq N$, -\begin{equation*} -\sum_{i=2}^{k} \left[ 1 - \left(1 - \frac{1}{n}\right) \left(1 - \frac{2}{n}\right) \cdots \left(1 - \frac{i - 1}{n}\right) \right] -< \frac{\epsilon}{3 \cdot \max\{\vert x \vert^{k}, \, 1\}} , -\end{equation*} -and so, -\begin{equation*} -\sum_{i=2}^{k} \left[1 - \left(1 - \frac{1}{n}\right) \left(1 - \frac{2}{n}\right) \cdots \left(1 - \frac{i - 1}{n}\right) \right] \frac{\vert x \vert^{i}}{i!} -< \frac{\epsilon}{3} . -\end{equation*} -Consequently, for any positive integers $k \geq \max\{2\vert x \vert, \, K\}$ and $n > \max\{k, \, N\}$, -\begin{equation*} -\left\vert \sum_{i=0}^{n} \frac{x^{i}}{i!} - \left(1 + \frac{x}{n}\right)^{n} \right\vert < \epsilon . -\end{equation*} -Equivalently, -\begin{equation*} -\sum_{i=0}^{\infty} \frac{x^{i}}{i!} = \lim_{n\to\infty} \left(1 + \frac{x}{n}\right)^{n} . -\end{equation*} - -REPLY [2 votes]: Here is another simpler approach which does not use logarithms (so that this is not exactly an answer, but it is rather too long for a comment). Let $$E_{n}(x) = 1 + x + \frac{x^{2}}{2!} + \cdots + \frac{x^{n}}{n!}\tag{1}$$ then we know that $$E(x) = \lim_{n \to \infty}E_{n}(x)$$ exists for all $x \in \mathbb{R}$. Using binomial theorem it is easy to show that $$F_{n}(x) = \left(1 + \frac{x}{n}\right)^{n} \leq E_{n}(x) \leq \left(1 - \frac{x}{n}\right)^{-n} = G_{n}(x)\tag{2}$$ for $x > 0$ and $n > x$. (Clearly by binomial theorem each of the expressions $F_{n}(x), E_{n}(x)$ is a finite series of $(n + 1)$ terms and each term of $F_{n}(x)$ is less than or equal to the corresponding term of $E_{n}(x)$. 
For $0 < x < n$ the function $G_{n}(x)$ can be expressed as an infinite series via the binomial theorem for general index. And again each term of $E_{n}(x)$ is less than or equal to the corresponding term of $G_{n}(x)$. The restriction $0 < x < n$ is needed for the convergence of the infinite series representation of $G_{n}(x)$.) -Further it can be shown that both $(1 + x/n)^{n}$ and $(1 - x/n)^{-n}$ tend to the same limit as $n \to \infty$ (you show that the ratio of these two expressions tends to $1$). It thus follows that for $x > 0$ we have $$E(x) = \lim_{n \to \infty}E_{n}(x) = \lim_{n \to \infty}\left(1 + \frac{x}{n}\right)^{n} = \lim_{n \to \infty}\left(1 - \frac{x}{n}\right)^{-n}\tag{3}$$ The relation obviously holds for $x = 0$. For negative $x$ it is easy to see that $E(x)E(-x) = 1$ (by multiplication of infinite series) and the proof is easily extended to negative values of $x$. (The fact that $F_{n}(-x) = 1/G_{n}(x)$ will be of help here because it implies that the common limit of $F_{n}(x)$ and $G_{n}(x)$, say $F(x)$, will satisfy $F(x)F(-x) = 1$ similar to $E(x)E(-x) = 1$.) - -The approach given by OP is correct but a bit lengthy, with a lot of intermediate steps, and needs some patience to follow. A better approach in the same direction is to appeal to the general theorem called "Monotone Convergence Theorem": -If for all natural numbers $j, k$, the doubly indexed sequence $a_{j, k}$ is non-negative and $a_{j, k} \leq a_{j + 1, k}$ for all natural numbers $j, k$ then $$\lim_{j \to \infty}\sum_{k = 1}^{\infty}a_{j, k} = \sum_{k = 1}^{\infty}\lim_{j \to \infty}a_{j, k}$$ -For the current case let's set $$a_{j, k} = \binom{j}{k}\left(\frac{x}{j}\right)^{k}$$ if $k \leq j$ and $a_{j, k} = 0$ otherwise. Then we can see that $$\left(1 + \frac{x}{j}\right)^{j} = \sum_{k = 0}^{\infty}a_{j, k}$$ It is easily verified that $a_{j, k} \leq a_{j + 1, k}$ for all $j, k$ if $x > 0$.
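The two facts just used — monotonicity of $a_{j,k}$ in $j$ for $x>0$, and the termwise limit $a_{j,k}\to x^{k}/k!$ — are easy to sanity-check numerically (a sketch; the value of $x$ is an arbitrary positive sample):

```python
from math import comb, factorial, isclose

x = 1.5  # an arbitrary positive sample value

def a(j, k):
    # a_{j,k} = C(j,k) (x/j)^k for k <= j, and 0 otherwise
    return comb(j, k) * (x / j) ** k if k <= j else 0.0

# a_{j,k} is non-decreasing in j for every fixed k (this needs x > 0)
for k in range(12):
    for j in range(1, 50):
        assert a(j, k) <= a(j + 1, k) + 1e-15

# and each term tends to x^k / k! as j grows
for k in range(6):
    assert isclose(a(10**6, k), x ** k / factorial(k), rel_tol=1e-4)
```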
Hence the monotone convergence theorem applies and clearly $$\lim_{j \to \infty}a_{j, k} = \frac{x^{k}}{k!}$$ so we have the result $$\lim_{j \to \infty}\left(1 + \frac{x}{j}\right)^{j} = \sum_{k = 0}^{\infty}\frac{x^{k}}{k!}$$ For $x < 0$ we again need the multiplicative properties of the series for $e^{x}$, namely that $e^{x}e^{-x} = 1$.<|endoftext|> -TITLE: Reflected rays /lines bouncing in a circle? -QUESTION [5 upvotes]: Consider the following situation. -You are standing in a room that is perfectly circular with mirrors for walls. You shine a light, a single ray of light, in a random direction. Will the light ever return to its original position (the single point where the light originated from)? If so, will it return to its position infinitely many times or only finitely many times? Will it ever return to its original position in the original direction? -I thought of this little teaser when reading about a problem concerning rays in a circle and wondered about this question. -As for my attempts, this is well beyond my skill. - -REPLY [4 votes]: This can be solved completely by characterizing the set of points on the ray which are the same distance from the center as the original point. We consider two symmetries which generate all such points. Here is a diagram showing all the points we consider: - -In particular, if our initial point is $C$, consider the line $AB$ which is coincident with the ray and which has $A$ and $B$ both on the unit circle and $B$ in the forwards direction of the ray. This segment would be visible if you shot a ray in either direction, and it did not bounce. Notice that, if $O$ is the center of the circle, then the angle $\alpha=\angle AOB$ is significant since rotating by $\alpha$ (in the same direction as the rotation that takes $A$ to $B$) takes one segment of the ray to the next segment after it bounces.
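This rotation picture is easy to confirm with a small numerical simulation (a sketch; the starting point and direction are arbitrary): successive points where the ray hits the mirror differ by one fixed angle $\alpha$.

```python
import math

def bounce(p, d):
    # advance from p along direction d to the unit circle, then reflect d
    a = d[0]**2 + d[1]**2
    b = 2 * (p[0]*d[0] + p[1]*d[1])
    c = p[0]**2 + p[1]**2 - 1
    t = (-b + math.sqrt(b*b - 4*a*c)) / (2*a)      # positive root: forward hit
    q = (p[0] + t*d[0], p[1] + t*d[1])
    dot = d[0]*q[0] + d[1]*q[1]                    # the normal at q is q itself
    return q, (d[0] - 2*dot*q[0], d[1] - 2*dot*q[1])

p, d = (0.3, 0.1), (math.cos(1.0), math.sin(1.0))
hits = []
for _ in range(8):
    p, d = bounce(p, d)
    hits.append(math.atan2(p[1], p[0]))

# consecutive hit points are separated by the same rotation angle alpha
alphas = [(hits[i + 1] - hits[i]) % (2 * math.pi) for i in range(len(hits) - 1)]
assert max(alphas) - min(alphas) < 1e-9
```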
-One may notice that this rotation preserves the distance from the center of the circle - so every point ever at this distance in any segment of the light's path is such a rotation of one at that distance on $AB$. There may be at most two such points - the point $C$ itself, and the point $C'$ which is the reflection of $C$ through the perpendicular bisector of $AB$. Let us set $\beta=\angle COC'$ which should be taken as a signed angle - so if the rotation taking $C$ to $C'$ is the opposite direction as the one taking $A$ to $B$, we take $\beta$ to be negative. We will always consider $\alpha$ to be positive. -Notice that if the ray returns, then there is some non-trivial rotation taking $C$ to itself. That means that one of the following is true (for some integers $n,k$): -$$n\alpha = 2\pi k$$ -$$n\alpha + \beta = 2\pi k.$$ -This completely characterizes the problem. One should note that any point inside a circle along with a ray extending from it may be described up to rotation and reflection by a pair $(\alpha,\beta)$ with $0\leq \alpha \leq \pi$ and $|\beta|\leq \alpha$. -The case of $C$ being on the circumference corresponds to the case $|\beta|=\alpha$. One should note that the first set of solutions is just when $\alpha$ is a rational multiple of $\pi$, in which case the ray is truly periodic. Moreover, if $\beta$ is a rational multiple of $\pi$, then the second set of solutions doesn't add any new solutions for $\alpha$ (meaning that if the ray ever returned, it would be bouncing periodically).<|endoftext|> -TITLE: Finding the common integer solutions to $a + b = c \cdot d$ and $a \cdot b = c + d$ -QUESTION [14 upvotes]: I find nice that $$ 1+5=2 \cdot 3 \qquad 1 \cdot 5=2 + 3 .$$ - -Do you know if there are other integer solutions to - $$ a+b=c \cdot d \quad \text{ and } \quad a \cdot b=c+d$$ - besides the trivial solutions $a=b=c=d=0$ and $a=b=c=d=2$? 
- -REPLY [6 votes]: First, note that if $(a, b, c, d)$ is a solution, so are $(a, b, d, c)$, $(c, d, a, b)$ and the five other reorderings these permutations generate. -We can quickly dispense with the case that all of $a, b, c, d$ are positive using an argument of @dREaM: If none of the numbers is $1$, we have -$ab \geq a + b = cd \geq c + d = ab$, so $ab = a + b$ and $cd = c + d$, and we may as well assume $a \geq b$ and $c \geq d$. In particular, since $a, b, c, d > 1$, we have $a b \geq 2a \geq a + b = ab$, so $a = b = 2$ and likewise $c = d = 2$, giving the solution $$(2, 2, 2, 2).$$ On the other hand, if at least one number is $1$, say, $a$, we have $b = c + d$ and $1 + b = cd$, so $1 + c + d = cd$, and we may as well assume $c \leq d$. Rearranging gives $(c - 1)(d - 1) = 2$, so the only solution is $c = 2, d = 3$, giving the solution -$$(1, 5, 2, 3).$$ -Now suppose that at least one of $a, b, c, d$, say, $a$ is $0$. Then, we have $0 = c + d$ and $b = cd$, so $c = -d$ and $b = -d^2$. This gives the solutions $$A_s := (0, -s^2, -s, s), \qquad s \in \Bbb Z .$$ -We are left with the case for which at least one of $a, b, c, d$, say, $a$, is negative, and none is $0$. Suppose first that none of the variables is $-1$. If $b < 0$, we must have $cd = a + b < 0$, and so we may assume $c > 0 > d$. On the other hand, $c + d = ab > 0$, and so (using a variation of the argument for the positive case) we have -$$ab = (-a)(-b) \geq (-a) + (-b) = -(a + b) = -cd \geq c > c + d = ab,$$ which is absurd. If $b > 0$, we have $c + d = ab < 0$, so at least one of $c, d$, say, $c$ is negative. Moreover, we have $cd = a + b$, so $d$ and $a + b$ have opposite signs. If $d < 0$, then since $c, d < 0$, we are, by exploiting the appropriate permutation, in the above case in which $a, b < 0$, so we may assume that $d > 0$, and hence that $a + b < 0$. Now, -$$ab \leq a + b = cd < c + d = ab,$$ which again is absurd, so there are no solutions in this case.
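A case analysis like this is easy to cross-check with an exhaustive search over a small box of integers (a sketch; the box size $12$ is arbitrary). The search also picks up the one-parameter family containing $-1$ that is treated in the final case:

```python
from itertools import product

BOX = range(-12, 13)
sols = {t for t in product(BOX, repeat=4)
        if t[0] + t[1] == t[2] * t[3] and t[0] * t[1] == t[2] + t[3]}

# the sporadic positive solutions appear
assert (1, 5, 2, 3) in sols and (2, 2, 2, 2) in sols
# the family A_s = (0, -s^2, -s, s)
assert all((0, -s*s, -s, s) in sols for s in range(4))
# the family with an entry -1: (-1, t, -1, 1 - t)
assert all((-1, t, -1, 1 - t) in sols for t in range(-5, 6))
```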
This leaves only the case in which at least one of $a, b, c, d$ is $-1$, say, $a$. Then, we have $-b = c + d$ and $-1 + b = cd$, so $-1 + (- c - d) = cd$. Rearranging gives $(c + 1)(d + 1) = 0$, so we may assume $c = -1$, giving (up to permutation) the $1$-parameter family of solutions $$B_t := (-1, t, -1, 1 - t), \qquad t \in \Bbb Z,$$ I mentioned in my comment (this includes two solutions with a zero entry, $B_0$ and $B_1$, which are equivalent by a permutation). This exhausts all of the possibilities; in summary: - -Any integer solution to the system - $$\left\{\begin{array}{rcl}a + b \!\!\!\!& = & \!\!\!\! cd \\ ab \!\!\!\! & = & \!\!\!\! c + d \end{array}\right.$$ - is equal (up to the admissible permutations mentioned at the beginning of this answer) to exactly one of - -$(1, 5, 2, 3)$ -$(2, 2, 2, 2)$ -$A_s := (0, -s^2, -s, s)$, $s \geq 0$, and -$B_t := (-1, t, -1, 1 - t)$, $t \geq 2$. - - -The restrictions on the parameters $s, t$ are consequences of the redundancy in the solutions we found: $A_{-s}$ is an admissible permutation of $A_s$, $B_{1 - t}$ an admissible permutation of $B_t$, and $B_1$ one of $A_1$.
But still we don't know why $K_5$ and why we should use this method and how to build that $3$-$(10,4,1)$ design. - -REPLY [6 votes]: You take $\Gamma = K_{5}$, and consider the edges of $\Gamma$ to be the points of your design. Then you have some specified subgraphs, given in the picture on page 2 that you refer to, they all have $4$ edges. The blocks of your design are the subgraphs of $\Gamma$ that are isomorphic to one of these specified subgraphs. (considered as sets of edges). -For example, take any $3$ edges of $\Gamma$. If they all share a common vertex, then they occur together in exactly one block, corresponding to the first picture in the list for these parameters. If they form a triangle, then they occur in exactly one block, corresponding to the second picture. Otherwise, they occur in a block corresponding to the third picture (try to show this). This argument shows that this construction does in fact give a $3$-$(10,4,1)$ design. -The selection of $K_{5}$ is just related to the construction, in this paper they always start with a complete graph $K_{n}$ for some $n$. Notice that $K_{5}$ has $(5\cdot 4)/2 = 10$ edges, which is the same as the number of points you want for your design. Also notice that all the graphs they give in the table for this parameter set have $5$ vertices, they are all supposed to be taken as subgraphs of $K_{5}$. Because of the way this is constructed, this gives the symmetric group $S_{5}$ as the automorphism group of the graph, and so $S_{5}$ will also act as an automorphism group of the design.<|endoftext|> -TITLE: Finding the sum of the infinite series whose general term is not easy to visualize: $\frac16+\frac5{6\cdot12}+\frac{5\cdot8}{6\cdot12\cdot18}+\cdots$ -QUESTION [5 upvotes]: I am to find out the sum of infinite series:- -$$\frac{1}{6}+\frac{5}{6\cdot12}+\frac{5\cdot8}{6\cdot12\cdot18}+\frac{5\cdot8\cdot11}{6\cdot12\cdot18\cdot24}+...............$$ -I can not figure out the general term of this series. 
It is looking like a power series as follows:- -$$\frac{1}{6}+\frac{5}{6^2\cdot2!}+\frac{5\cdot8}{6^3\cdot3!}+\frac{5\cdot8\cdot11}{6^4\cdot4!}+.....$$ -So how to solve it and is there any easy way to find out the general term of such type of series? - -REPLY [4 votes]: Let us consider $$\Sigma=\frac{1}{6}+\frac{5}{6\times 12}+\frac{5\times8}{6\times12\times18}+\frac{5\times8\times11}{6\times12\times18\times24}+\cdots$$ and let us rewrite it as $$\Sigma=\frac{1}{6}+\frac 16\left(\frac{5}{ 12}+\frac{5\times8}{12\times18}+\frac{5\times8\times11}{12\times18\times24}+\cdots\right)=\frac{1}{6}+\frac 16 \sum_{n=0}^\infty S_n$$ using $$S_n=\frac{\prod_{i=0}^n(5+3i)}{\prod_{i=0}^n(12+6i)}$$ Using the properties of the gamma function, we have $$\prod_{i=0}^n(5+3i)=\frac{5\ 3^n \Gamma \left(n+\frac{8}{3}\right)}{\Gamma \left(\frac{8}{3}\right)}$$ $$\prod_{i=0}^n(12+6i)=6^{n+1} \Gamma (n+3)$$ which make $$S_n=\frac{5\ 2^{-n-1} \Gamma \left(n+\frac{8}{3}\right)}{3 \Gamma - \left(\frac{8}{3}\right) \Gamma (n+3)}$$ $$\sum_{n=0}^\infty S_n=\frac{10 \left(3\ 2^{2/3}-4\right) \Gamma \left(\frac{2}{3}\right)}{9 \Gamma - \left(\frac{8}{3}\right)}=3\ 2^{2/3}-4$$ $$\Sigma=\frac{1}{\sqrt[3]{2}}-\frac{1}{2}$$<|endoftext|> -TITLE: Number of $k$-dimensional subspaces in $V$ -QUESTION [6 upvotes]: I know my following question is somewhat similar to this one but still I need help . -How many $k$-dimensional subspaces of a $n$-dimensional vector space $V$ over the finite field $F$ with $q$ elements are there? - -REPLY [13 votes]: Let me try to answer your question: -Let us first look at the case of one-dimensional subspaces: Every one-dimensional subspace is spanned by non-zero vector, which there are $q^n-1$ many of. Two of these vectors span the same subspace if and only if they are non-zero scalar multiples of each other; we have $q-1$ such scalars. Thus we have $\frac{q^n-1}{q-1}$ one-dimensional subspaces. 
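For prime $q$ this count of one-dimensional subspaces is easy to verify by brute force over $\mathbb{F}_q^n$ (a sketch; it uses arithmetic mod $q$, so it only works when $q$ is prime):

```python
from itertools import product

def count_lines(q, n):
    # each 1-dimensional subspace of F_q^n, recorded as the set of its elements
    nonzero = [v for v in product(range(q), repeat=n) if any(v)]
    spans = {frozenset(tuple(k * x % q for x in v) for k in range(q)) for v in nonzero}
    return len(spans)

for q in (2, 3, 5):
    for n in (1, 2, 3):
        assert count_lines(q, n) == (q**n - 1) // (q - 1)
```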
-Similarly we can count the number of $k$-dimensional subspaces for $0 \leq k \leq n$. We will need the following formula: - -Proposition Let $W$ be an $n$-dimensional vector space over $\mathbb{F}_q$, the finite field with $q$ elements, and let $0 \leq k \leq n$. Then there exist - $$ - \frac{(q^n-1)(q^n-q)(q^n-q^2) \dotsm (q^n-q^{k-1})}{k!} -$$ - many linearly independent subsets of $W$ consisting of $k$ elements. -Proof: We first figure out the number of linearly independent families of $k$ elements: For the first member $b_1$ we can pick any non-zero vector, so we have $q^n-1$ choices. For the second member $b_2$ we can take any vector not in the span of $b_1$. So we have $q^n-q$ choices for $b_2$. Continuing this we can pick $b_i$ arbitrarily outside of the span of the previous members $b_1, \dotsc, b_{i-1}$, so we have $q^n - q^{i-1}$ choices for $b_i$. Thus we find that we have - $$ - (q^n-1)(q^n-q)(q^n-q^2) \dotsm (q^n-q^{k-1}) -$$ - many linearly independent families $(b_1, \dotsc, b_k)$. - Two of these families represent the same linearly independent subset if and only if they are the same up to reordering of its members. Because there are $k!$ such reorderings we have - $$ - \frac{(q^n-1)(q^n-q)(q^n-q^2) \dotsm (q^n-q^{k-1})}{k!} -$$ - many linearly independent subsets consisting of $k$ elements. - -We know that every $k$-dimensional subspace of $V$ is spanned by $k$ linearly independent vectors of $V$. So by the above proposition we have at most -$$ - \frac{(q^n-1)(q^n-q)(q^n-q^2) \dotsm (q^n-q^{k-1})}{k!} -$$ -$k$-dimensional subspaces. -Now the same subspace $U$ is spanned by many different linearly independent subsets, say $L$ many. These $L$ subsets are precisely the bases of $U$. By the above formula we know that there are -$$ - \frac{(q^k-1)(q^k-q) \dotsm (q^k-q^{k-1})}{k!} -$$ -many such bases.
-Putting this together we find that there are -$$ - \frac{ - \left( \frac{(q^n-1)(q^n-q)(q^n-q^2) \dotsm (q^n-q^{k-1})}{k!} \right) - }{ - \left( \frac{(q^k-1)(q^k-q)(q^k-q^2) \dotsm (q^k-q^{k-1})}{k!} \right) - } - = \frac{ - (q^n-1)(q^n-q) \dotsm (q^n-q^{k-1}) - }{ - (q^k-1)(q^k-q) \dotsm (q^k-q^{k-1}) - } -$$ -many different $k$-dimensional subspaces.<|endoftext|> -TITLE: Can $G$ have Sylow-5 subgroups and Sylow-3 subgroups [CSIR-NET-DEC-2015] -QUESTION [8 upvotes]: Let $G$ be a simple group of order $60$. Then - -$G$ has six Sylow -5 subgroups. -$G$ has four Sylow -3 subgroups. -$G$ has a cyclic subgroup of order 6. -$G$ has a unique element of order $2$. - - -$60=2^2.3.5$ No. of Sylow -5 subgroups $=1+5k$ divides 12. So $1+5k=1,6\implies n_5=1,6\implies n_5=6$ as $G$ is a simple group. -Consider $n_3=1+3k$ divides $20\implies 1+3k=1,4,10\implies 1+3k=4,10$. If $n_3=4$ then we have $8$ elements of order $3$ and $A_5$ has 20 elements of order $3$ which is a contradiction. Hence $n_3=10$. -Since $A_5$ has no element of order $6$, 3 is false. -$A_5$ has many elements of order $2$, viz. $(12)(34),(13)(24),\ldots$. Hence only $1$ is correct. Please can someone check whether I am correct or not? - -REPLY [3 votes]: As was remarked before, you do not have to assume that $G \cong A_5$. -(1) You did that one correctly! -(2) If $|Syl_3(G)|=4$, and $S \in Syl_3(G)$, then $|G:N_G(S)|=4$. Now let $G$ act on the left cosets of $N_G(S)$ by left multiplication, then the kernel of this action is $core_G(N_G(S))=\bigcap_{g \in G}N_G(S)^g$, which is a normal subgroup. Hence, by the simplicity of $G$, it must be trivial and $G$ can be embedded in $S_4$, a contradiction, since $60 \nmid 24$. -(3) We prove that if a non-abelian simple $G$, with $|G|=60$, has an abelian subgroup of order $6$, then $G \cong A_5$. This gives a contradiction, since it is easily seen that $A_5$ does not contain any elements of order $6$ (note that an abelian group of order $6$ must be cyclic).
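Incidentally, all of the facts about $A_5$ used in this problem (no element of order $6$, $24$ elements of order $5$, $20$ of order $3$, $15$ of order $2$) can be confirmed by a brute-force enumeration (a sketch):

```python
from itertools import permutations

def compose(p, q):
    # (p ∘ q)(i) = p(q(i)), permutations written as tuples of images
    return tuple(p[q[i]] for i in range(len(q)))

def order(p):
    e, q, n = tuple(range(len(p))), p, 1
    while q != e:
        q, n = compose(p, q), n + 1
    return n

def is_even(p):
    inversions = sum(p[i] > p[j] for i in range(len(p)) for j in range(i + 1, len(p)))
    return inversions % 2 == 0

a5 = [p for p in permutations(range(5)) if is_even(p)]
orders = [order(p) for p in a5]
assert len(a5) == 60
assert 6 not in orders          # no element of order 6
assert orders.count(5) == 24    # hence 24/4 = 6 Sylow 5-subgroups
assert orders.count(3) == 20    # hence 20/2 = 10 Sylow 3-subgroups
assert orders.count(2) == 15
```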
So assume $H \lt G$ is abelian and $|H|=6$. $H$ is not normal, so $N_G(H)$ is a proper subgroup (if not then $H$ would be normal) and since $|G:N_G(H)| \mid 10$, we must have $|G:N_G(H)|=5$ ($=2$ is not possible since subgroups of index $2$ are normal). Similarly as in (2), $G/core_G(N_G(H))$ embeds homomorphically in $S_5$ this time. Of course $core_G(N_G(H))=1$, so $G$ is isomorphic to a subgroup of $S_5$ and since it is simple it must be isomorphic to $A_5$ (if we write also $G$ for the image in $S_5$, consider $G \cap A_5 \lhd G$ and use $|S_5:A_5|=2$). -(4) In general: if $G$ is a group with a unique element $x$ of order $2$, then $x \in Z(G)$. Why? Because for every $g \in G$, $g^{-1}xg$ also has order $2$ and must be equal to $x$. In your case $G$ is non-abelian simple, so $Z(G)=1$. -So only (1) is the true statement. - -Edit For case (3) I forgot the case where $|G:N_G(H)|=10$. I have a proof that is quite sophisticated and maybe there is an easier way. Anyway, in this case $H=N_G(H)$. Consider the subgroup $P$ of order $3$ of $H$. This must be a Sylow $3$-subgroup of $G$, since $3$ is the highest power of $3$ dividing $|G|=60$. Observe that in fact $N_G(P)=H$. This follows from what we showed in (2): $|Syl_3(G)|=|G:N_G(P)|=10$ and of course $H \subseteq N_G(P)$. Trivially, $P \subset Z(N_G(P))$. Now $P$ satisfies the criterion of Burnside's Normal $p$-Complement Theorem, see for example Theorem (5.13) here. But then $P$ has a normal complement $N$, such that $G=PN$ and $P \cap N=1$. Now $G$ is non-abelian simple, so $N=1$ or $N=G$, which both lead to a contradiction.<|endoftext|> -TITLE: The greatest integer less than or equal to the number $R=(8+3\sqrt{7})^{20}$ -QUESTION [6 upvotes]: Given $$R=(8+3\sqrt{7})^{20}, $$ if $\lfloor R \rfloor$ is the greatest integer less than or equal to $R$, then which of the following option(s) is/are true?
- -$\lfloor R \rfloor$ is an even number -$\lfloor R \rfloor$ is an odd number -$R-\lfloor R \rfloor=1-\frac{1}{R}$ -$R(R-\lfloor R \rfloor-1)=-1$ - - -My try: I wrote $R$ as $$R=8^{20}\left(1+\sqrt{\frac{63}{64}}\right)^{20} \approx8^{20}\left(1+\sqrt{0.98}\right)^{20} \approx8^{20}\left(1.989\right)^{20} .$$ -Now, $8^{20}\left(1.989\right)^{20}$ is slightly less than $8^{20} \times 2^{20}=2^{80}$, -$$\lfloor 2^{80}\rfloor=2^{80}$$ -hence -$$\lfloor R \rfloor=2^{80}-1,$$ -so option $2$ is correct. -How does one figure out whether options $3$ and $4$ are correct or wrong? - -REPLY [3 votes]: Hints For (1), (2): Using the binomial expansion twice gives that \begin{align}A := R + (8 - 3 \sqrt{7})^{20} &= (8 + 3 \sqrt{7})^{20} + (8 - 3 \sqrt{7})^{20} \\ &= \sum_{k = 0}^{20} {20 \choose k} 8^{20 - k} (3 \sqrt{7})^k + \sum_{k = 0}^{20} {20 \choose k} 8^{20 - k} (-3 \sqrt{7})^k \\ &= \sum_{k = 0}^{20} {20 \choose k} 8^{20 - k} (1 + (-1)^k) (3 \sqrt{7})^k .\end{align} -The appearance of the factor $1 + (-1)^k$ means that summands with odd $k$ are zero, so only the even terms contribute (each with a factor $2$), and we can rewrite the sum as -$$A = 2 \sum_{j = 0}^{10} {20 \choose 2j} 8^{20 - 2j} (3 \sqrt{7})^{2j} = 2 \sum_{j = 0}^{10} {20 \choose 2j} 64^{10 - j} 63^j .$$ -In particular, $A$ is an integer. On the other hand, since $49 < 63 < 64$, we have $7 < 3 \sqrt{7} < 8$ and hence $0 < 8 - 3 \sqrt{7} < 1$. -For (3): Note that $(8 + 3 \sqrt{7})(8 - 3 \sqrt{7}) = 64 - 63 = 1.$ - -Additional hints For (1)-(2): So, the second summand of $A$ satisfies $0 < (8 - 3 \sqrt{7})^{20} < 1$. (In fact, it is very close to zero.) So, $A - 1 < R < A$, and in particular, $\lfloor R \rfloor = A - 1$. Since we can determine the parity of $A$ from the last summation expression, we can also determine that of $\lfloor R \rfloor$.
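These hints can be double-checked with exact integer arithmetic (a sketch; note the overall factor of $2$ contributed by $1 + (-1)^k$ in the even terms):

```python
from math import comb

# A = (8 + 3*sqrt7)^20 + (8 - 3*sqrt7)^20 from the even-index binomial terms
A = 2 * sum(comb(20, 2*j) * 64**(10 - j) * 63**j for j in range(11))

# independently: (8 + 3*sqrt7)^20 = x + y*sqrt7 with exact integer pair arithmetic
x, y = 1, 0
for _ in range(20):
    x, y = 8*x + 21*y, 3*x + 8*y   # multiply by 8 + 3*sqrt7, using (sqrt7)^2 = 7

assert A == 2 * x      # the sqrt7 parts cancel in the conjugate sum
assert A % 2 == 0      # so floor(R) = A - 1 is odd
```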
For (3): So $$(8 + 3 \sqrt{7})^{20} (8 - 3 \sqrt{7})^{20} = 1 ,$$ hence $$(8 - 3 \sqrt{7})^{20} = \frac{1}{R} .$$ - -There appears to be a typo in the equation in (4).<|endoftext|> -TITLE: Is there an equivalent concept of a "variety" for SAT? -QUESTION [5 upvotes]: Couldn't find anything via google - I was wondering what work is out there looking at SAT problems from the perspective akin to an algebraic variety, e.g. a set of variables $X_1=$true, $X_2=$false, .., $X_k=$true, etc define a set of propositions which are all satisfed by that particular set of values in much the same way that a variety defines a set of polynomials that all happen to vanish at the same points. -Thanks. - -REPLY [2 votes]: For one variety over one finite field, the algebraic geometry translation of a SAT problem is not all that useful. Switching between the two descriptions is computationally trivial. Algebraic geometry doesn't help with finding solutions so much as describing relations between solutions. SAT problems do not lead to structured varieties like elliptic curves or algebraic surfaces where algebraic geometry comes into its own. -The advantage of geometry is when there are many varieties, or many fields and rings in which to solve the equations, or both. -This is roughly the idea of Aaronson and Wigderson's paper on Algebraization as a barrier to $P \neq NP$ proofs. General arguments about computation with bits, thought of as facts about polynomials or varieties where we look for solutions mod $2$, also would work over finite fields with more than $2$ elements, or more general rings than that. In those more general settings they can arrange oracles for which $P = NP$, so things are not that easy. -Geometric Complexity Theory proposed by Mulmuley is an application of algebraic geometry to complexity and an approach to the $P=NP$ problem, but the geometry does not come from mapping SAT problems to algebraic equations mod $2$. 
The idea is to look for algebraic problems in invariant theory and group representation theory that are close enough analogues of $P=NP$ that solving the algebra problems would have implications for computational complexity theory. Algebraic geometry appears in GCT because it is a tool for the algebra problems that appear.<|endoftext|> -TITLE: Is this a valid way to think about sheafification? -QUESTION [5 upvotes]: This all feels like it should be valid, but I just wanted to get more experienced eyes on it in case I've made a mistake. -Take a presheaf $\mathscr{F}$ on a topological space $X$. In order to be a sheaf, for any set of compatible functions $f_i \in \mathscr{F}(U_i)$, there needs to exist a unique gluing. So the sheafification $\mathscr{F}^+$ of $\mathscr{F}$ can be constructed by doing the "least amount of work" to make this happen. -Intuitively, I feel like this should mean two things happen: -First, suppose more than one gluing exist. If $f$ and $g$ are both gluings of $\{f_i\}$, we have no natural way to decide on which to keep. The easiest thing to do is to equate all gluings, and require that $f=g$ in $\mathscr{F}^+$. -Second, if a gluing for $\{f_i\}$ does not exist, one is freely adjoined. This new section will not be equal to any old sections in $\mathscr{F}$. We can identify this new section with the collection $\{f_i\}$, perhaps even calling it by the name $[f_i]$. -Doing this, we need to have the understanding that it's very likely another compatible family $\{g_j\}$ will exist which glues together to form the same section. If this is the case, we require $[f_i]=[g_j]$. -But since no section existed previously, you need a rule to tell when two such compatible families of functions should glue to the same section. In general, they will be defined on different covers, and you need to take a common refinement $\{V_i\}$ of these covers. 
By pushing each $f_i$ and each $g_j$ through the restriction maps into the appropriate open sets in this refinement, you can compare them directly for equality. And if all the right $f_i$'s and $g_j$'s are equal, then $[f_i] = [g_j]$. - -REPLY [4 votes]: That's more or less right. We divide out by the relation "Two sections $f, g$ on an open $U\subseteq X$ are equivalent iff there is some cover $U_i \subseteq U$ such that $f|_{U_i} = g|_{U_i}$ for all $i$" (which turns out to be an equivalence relation, and for any given $U$, the class $[0]$ turns out to be a subgroup / ideal if $\mathscr F$ is a presheaf of groups / rings, which means dividing out by it works out nicely). And we adjoin sections where there is a cover with compatible sections. -However, that's not the usual way to construct the associated sheaf. One usually lets $\mathscr F'$ be the sheaf given by $U\mapsto \prod_{x \in U}\mathscr F_x$ (which is a very big sheaf), and then $\mathscr F^+$ is set to be the subsheaf where for any section $f\in \mathscr F'(U)$, there is some cover $U_i \subseteq U$ and sections $f_i \in \mathscr F(U_i)$ such that $f|_{U_i} = f_i$. In other words, the subsheaf of $\mathscr F'$ that has some "local coherence". -For instance, say $\mathscr F$ is the presheaf over $\Bbb C$ (with standard topology) of analytic functions (this is actually a sheaf, but never mind). Then $\mathscr F'$ is the sheaf where a section over some open $U$ is given by, for each point, a power series around that point with some positive radius of convergence. The sheaf $\mathscr F^+$ is the subsheaf consisting of those sections where neighbouring points have power series that come from the same analytic function. We see that we are back with the usual sheaf of analytic functions.
I would like to show that the cup product $H^2(A)\times H^2(A)\to H^4(A)$ is trivial (or at least that the square map $x\mapsto x\smile x$ is trivial). -I tried analyzing simple examples of nontrivial cup product occurrences and it seems to me that the basic "building blocks" are examples such as products of spaces (for example, $S^2\times S^2$) and/or attaching $4$-cells to something $2$-dimensional in a nontrivial way (for example, $\Bbb CP^2$), and none of these can be realized in Euclidean $4$-space; however, I don't know how to prove the claim formally. -Thanks for any possible hint. - -REPLY [3 votes]: You may embed your complex $A$ into $S^4$. Then Alexander duality (see Hatcher, th. 3.44) gives you -$$ -\tilde H^k(A)=\tilde H_{3-k}(S^4\setminus A), -$$ -so $H^4(A)$ is always $0$.<|endoftext|> -TITLE: $p$-adic valuation of harmonic numbers -QUESTION [8 upvotes]: For an integer $m$ let $\nu_p(m)$ be its $p$-adic valuation, i.e. the greatest non-negative integer such that $p^{\nu_p(m)}$ divides $m$. Let now -$H_n=1+\dfrac{1}{2}+ \cdots+ \dfrac{1}{n}$. -If $H_n=\dfrac{a_n}{b_n}$ then $\nu_p(H_n)=\nu_p(a_n)-\nu_p(b_n).$ -It is known that if $2^k \leq n < 2^{k+1}$ then $\nu_2(H_n)=-k.$ -Question For what $n$ and arbitrary $p$ may we be sure that $\nu_p(H_n)<0?$ -Any estimates and reference, please. - -REPLY [2 votes]: Question For what $n$ and arbitrary $p$ may we be sure that $v_p(H_n)<0$ ? -Any estimates and reference, please. - -In $[1]$ it is proved that for all $x \geq 1$ it holds that $v_p(H_n) \leq 0$ for all positive integers $n \leq x$ with at most $129 p^{2/3} x^{0.765}$ exceptions. Hence $v_p(H_n) \leq 0$ for 100% of the positive integers (with respect to asymptotic density), and I guess that this is true even for the strict inequality $v_p(H_n) < 0$.
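The classical $2$-adic statement quoted in the question is easy to confirm with exact rational arithmetic (a sketch):

```python
from fractions import Fraction

def nu2(r):
    # 2-adic valuation of a positive rational (Fraction is stored in lowest terms)
    num, den, v = r.numerator, r.denominator, 0
    while num % 2 == 0:
        num //= 2; v += 1
    while den % 2 == 0:
        den //= 2; v -= 1
    return v

H = Fraction(0)
for n in range(1, 65):
    H += Fraction(1, n)
    k = n.bit_length() - 1        # the k with 2^k <= n < 2^(k+1)
    assert nu2(H) == -k
```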
(http://dx.doi.org/10.1016/j.jnt.2016.02.020)<|endoftext|> -TITLE: How to solve this integral $\int _0^{\infty} e^{-x^3+2x^2+1}\,\mathrm{d}x$ -QUESTION [5 upvotes]: My classmate asked me about this integral:$$\int_0^{\infty} e^{-x^3+2x^2+1}\,\mathrm{d}x$$ -but I have no idea how to do it. What's the closed form of it? I guess it may be related to the Airy function. - -REPLY [5 votes]: I guess it may be related to the Airy function. - -You guessed well. In general, we have $~\displaystyle\int_0^\infty\exp\Big(-x^2(x+3a)\Big)~dx~=~\frac2{e^{2a^3}}\cdot\frac{\text{Bi}\Big(3^{2/3}~a^2\Big)}{3^{4/3}}-$ -$-a\cdot~_2F_2\bigg(\bigg[\dfrac12~,~1\bigg]~;~\bigg[\dfrac23~,~\dfrac43\bigg]~;~-4a^3\bigg).~$ In this particular case, $a=-\dfrac23.$<|endoftext|> -TITLE: Invariant factors and elementary divisors of an abelian group -QUESTION [5 upvotes]: I have to find the elementary divisors and invariant factors of : -$$ \mathbb Z_6\oplus\mathbb Z_{20}\oplus\mathbb Z_{36}$$ -I'm following this. -I think that elementary divisors are $\{2,2^2,2^2,3,3^2,5\}$, just using the prime decomposition of $\{6,20,36\}$. -Using the web I've put above, the invariant factor decomposition is -$$ \mathbb Z_2\oplus\mathbb Z_{12}\oplus\mathbb Z_{180}$$ -However, I have written in my notes that the invariant factors are $\{2,2,6,6,30\}$. -I'd like to know which is the right option and where and why I'm wrong. -Thanks in advance. - -REPLY [5 votes]: $\mathbb Z_2\oplus\mathbb Z_{12}\oplus\mathbb Z_{180}$ is right. -Your notes must be wrong because if the invariant factors were $\{2,2,6,6,30\}$ then there wouldn't be an element of order $36$ but $\mathbb Z_6\oplus\mathbb Z_{20}\oplus\mathbb Z_{36}$ has an element of order $36$ coming from $\mathbb Z_{36}$. 
This also gives elements of order $4$, $9$, $12$, which are not in $\{2,2,6,6,30\}$.<|endoftext|> -TITLE: Show that $\left(\int_{0}^{1}\sqrt{f(x)^2+g(x)^2}\ dx\right)^2 \geq \left(\int_{0}^{1} f(x)\ dx\right)^2 + \left(\int_{0}^{1} g(x)\ dx\right)^2$ -QUESTION [5 upvotes]: Show that - $$ -\left( \int_{0}^{1} \sqrt{f(x)^2+g(x)^2}\ \text{d}x \right)^2 -\geq -\left( \int_{0}^{1} f(x)\ \text{d}x\right)^2 -+ \left( \int_{0}^{1} g(x)\ \text{d}x \right)^2 -$$ - where $f$ and $g$ are integrable functions on $\mathbb{R}$. - -That inequality is a particular case. I want to approximate the integral curves using some inequalities which imply this inequality. - -REPLY [2 votes]: Suppose that $f$ and $g$ are continuous functions. Define -$$\phi(t) = \left(\int_0^t \sqrt{f(s)^2 + g(s)^2} ds\right)^2 -\left(\int_0^t f(s) ds\right)^2 - \left(\int_0^t g(s) ds\right)^2.$$ -It is obvious that $\phi(0) =0$ and -$$\phi'(t) = 2\left[\int_0^t \sqrt{f(t)^2 + g(t)^2}\sqrt{f(s)^2 + g(s)^2}ds - \int_0^t (f(s)f(t)+g(s) g(t))ds\right].$$ -By the Cauchy-Schwarz inequality, we have -$$\sqrt{f(t)^2 + g(t)^2}\sqrt{f(s)^2 + g(s)^2} \geq f(t)f(s) + g(t) g(s).$$ -Hence $\phi'(t) \geq 0$. This implies that $\phi(1) \geq \phi(0) =0$. This is the inequality in question. -In the general case when $f$ and $g$ are integrable, we can approximate them by continuous functions, and hence finish the proof.
-I would like to give an example of a Borel measure, which is not Borel-regular. Can you help me? - -REPLY [5 votes]: Take a set $C\subset\mathbb{R}^n$ that is not Borel. For every $X\subset \mathbb{R}^n$, let -$$ -\mu(X) = \begin{cases} -0 & \text{if } X\subset C; \\ -\infty & \text{if } X\not\subset C. \\ -\end{cases} -$$ -This is a measure on the entire $P(\mathbb{R}^n)$. In particular, all Borel sets are measurable. -For every Borel set $B\supset C$ we have -$\mu(B)=\infty> 0=\mu(C)$, so $\mu$ is not Borel-regular.<|endoftext|> -TITLE: Why is this theorem also a proof that matrix multiplication is associative? -QUESTION [6 upvotes]: The author remarks that this theorem, which is basically all about what happens if we compose linear transformations, also gives a proof that matrix multiplication is associative: - -Let $V$, $W$, and $Z$ be finite-dimensional vector spaces over the field $F$; let $T$ be a linear transformation from $V$ into $W$ and $U$ a linear transformation from $W$ into $Z$. If $\mathfrak{B}$, $\mathfrak{B^{'}}$, and $\mathfrak{B^{''}}$ are ordered bases for the spaces $V$, $W$, $Z$, respectively, if $A$ is the matrix of $T$ relative to the pair $\mathfrak{B}$, $\mathfrak{B^{'}}$, and $B$ is the matrix of $U$ relative to the pair $\mathfrak{B^{'}}$, $\mathfrak{B^{''}}$, then the matrix of the composition $UT$ relative to the pair $\mathfrak{B}$, $\mathfrak{B^{''}}$ is the product matrix $C=BA$. - -However, I see no reason why that's true... - -REPLY [5 votes]: Associativity is a property of function composition, and in fact essentially everything that's associative is just somehow representing function composition. This theorem says that matrix multiplication is just composition of linear transformations, and so it follows that it's associative. 
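To make this concrete, here is a minimal Python illustration (the matrices are made up for the example) that applying the composed map agrees with multiplying by the product matrix, and that associativity then comes along for free:

```python
def matmul(A, B):
    """Multiply matrices given as lists of rows: (AB)[i][j] = sum_k A[i][k]*B[k][j]."""
    return [[sum(a * b for a, b in zip(row, col)) for col in zip(*B)] for row in A]

A = [[1, 2], [3, 4]]   # matrix of T
B = [[5, 6], [7, 8]]   # matrix of U
C = [[9, 0], [1, 2]]   # matrix of a third map

# Applying U∘T to a column vector x equals multiplying by the product BA:
x = [[1], [1]]
assert matmul(B, matmul(A, x)) == matmul(matmul(B, A), x)

# Composition of functions is associative, so the matrices inherit it:
assert matmul(matmul(C, B), A) == matmul(C, matmul(B, A))
print("associativity checks out")
```

The integer entries make the comparisons exact, so no floating-point tolerance is needed.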
-Of course in reality this is backwards: the "true" definition of matrix multiplication is "compose the linear transformations and write down the matrix," from which you can easily derive the familiar algorithm.<|endoftext|> -TITLE: What's the formula for the 365 day penny challenge? -QUESTION [10 upvotes]: Not exactly a duplicate since this is answering a specific instance popular in social media. -You might have seen the viral posts about "save a penny a day for a year and make $667.95!" The mathematicians here already get the concept while some others may be going, "what"? Of course, what the challenge is referring to is adding a number of pennies to a jar for what day you're on. So: -Day 1 = + .01 -Day 2 = + .02 -Day 3 = + .03 -Day 4 = + .04 - -So that in the end, you add it all up like so: -1 + 2 + 3 + 4 + 5 + 6 + ... = 66795 - -The real question is, what's a simple formula for getting a sum of consecutive integers, starting at whole number 1, without having to actually count it all out?! - -REPLY [2 votes]: The arithmetic progression is the sequence of numbers such that the difference $d$ between the consecutive terms is constant. If the first term is $a_1$, the number of terms is $n$ and the last term is $a_n$, then the sum is -$$ S = \frac{n \cdot (a_1 + a_n)}{2} $$ -where -$$ a_n = a_1 + (n - 1) \cdot d $$ -In your example $a_1 = 0.01$, $d = 0.01$, $n = 365$, so -$$ a_{365} = 0.01 + (365 - 1) \cdot 0.01 = 3.65 $$ -and -$$ S = \frac{365 \cdot (0.01 + 3.65)}{2} = 667.95 $$ -If you want a single formula using only $a_1$, $d$ and $n$, then combining the two formulas above gives -$$ S = \frac{n \cdot (2 \cdot a_1 + (n-1) \cdot d)}{2} $$ -$$ S = \frac{365 \cdot (2 \cdot 0.01 + (365-1) \cdot 0.01)}{2} = 667.95 $$<|endoftext|> -TITLE: Is the Zariski topology the same as the cofinite topology? -QUESTION [6 upvotes]: Let $R$ be a commutative ring, $spec(R)$ be the set of all prime ideals on $R$. 
For any ideal $I$ on $R$, we define $V_I$ to be the set of all prime ideals containing $I$. We define the Zariski topology on $spec(R)$ via the closed sets $\{V_I:I\textrm{ is an ideal of }R\}$. -I am still wrapping my mind around this topology. Can someone tell me if it is a cofinite topology, i.e. the open sets are complements of finite sets, or not? - -REPLY [6 votes]: There can be infinite closed sets besides the whole space. -In $\Bbb Q[x,y]$, for example, $(y)$ is contained in infinitely many prime ideals, so $V_{(y)}$ is a closed, but not finite set. -Another way: the cofinite topology always makes its space $T_1$, but the spectrum is $T_1$ in the Zariski topology iff prime ideals are maximal, and that only occurs for certain rings. -Another way: The cofinite topology on a finite space is the discrete topology, but the Zariski topology on a ring with a finite spectrum need not be discrete. Consider, for example, $Spec(\Bbb Q[[x]])$, where the spectrum is $\{0, (x)\}$ and the closed sets are $\{\emptyset, \{(x)\}, \{0,(x)\}\}$. -I've spent some time in the same boat as you on this topic. It's especially disorienting if most of your topological experience is derived from thinking about metrizable spaces.<|endoftext|> -TITLE: Find sum of series $\sum_{n=1}^{\infty}\frac{1}{n(4n^2-1)}$ -QUESTION [12 upvotes]: I need help with finding the sum of this: -$$ -\sum_{n=1}^{\infty}\frac{1}{n(4n^2-1)} -$$ -First, I tried to telescope it in some way, but it seems to be a dead end. The only other idea I have is that this might have something to do with logarithm, but really I don't know how to proceed. Any hint would be greatly appreciated. 
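Edit: for what it's worth, a quick numerical check in Python suggests the value is $2\ln 2-1\approx0.3863$:

```python
import math

# Partial sum of 1/(n(4n^2 - 1)); the tail beyond N decays like 1/(8N^2),
# so N = 200000 puts the truncation error far below 1e-9.
partial = sum(1.0 / (n * (4 * n * n - 1)) for n in range(1, 200001))
print(partial, 2 * math.log(2) - 1)  # both ≈ 0.386294...
```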
- -REPLY [3 votes]: One way forward is to note that -$$\begin{align} -\sum_{n=1}^{N}\left(\frac{1}{2n-1}-\frac{1}{2n}\right)&=\sum_{n=1}^{N}\left(\frac{1}{2n-1}+\frac{1}{2n}\right)-\sum_{n=1}^{N}\frac1n \tag 1\\\\ -&=\sum_{n=1}^{2N}\frac1n -\sum_{n=1}^{N}\frac1n \tag 2\\\\ -&=\sum_{n=N+1}^{2N}\frac1n \\\\ -&=\sum_{n=1}^{N}\frac{1}{n+N} \\\\ -&=\frac1N \sum_{n=1}^{N}\frac{1}{1+n/N} \tag 3 -\end{align}$$ -In going from $(1)$ to $(2)$ we simply noted that the sum, $\sum\limits_{n=1}^{2N}\frac1n$, can be written in terms of sums of even and odd indexed terms. -Now, we observe that the limit of $(3)$ is the Riemann sum for the integral $$\int_0^1 \frac{1}{1+x}\,dx=\log(2).$$ -Similarly, we see that -$$\begin{align} -\sum_{n=1}^{N}\left(\frac{1}{2n+1}-\frac{1}{2n}\right)&=-1+\frac1N \sum_{n=1}^{N}\frac{1}{1+n/N} -\end{align}$$ -is the Riemann sum for $$-1+\int_0^1\frac{1}{1+x}\,dx=-1+\log(2).$$ -Putting all of this together, we recover the expected result -$$\sum_{n=1}^\infty \frac{1}{n(2n-1)(2n+1)}=\sum_{n=1}^\infty\left(\frac{1}{2n-1}-\frac{1}{2n}\right)+\sum_{n=1}^\infty\left(\frac{1}{2n+1}-\frac{1}{2n}\right)=2\log(2)-1.$$<|endoftext|> -TITLE: Proving that there are only finitely many minimal prime ideals of any ideal in a Noetherian commutative ring -QUESTION [7 upvotes]: Currently, I'm trying to solve a problem from a textbook: -Let $R$ be a commutative Noetherian ring with identity, and let $I \subset R$ be a proper ideal of $R$. Then we know that the set of prime ideals of $R$ containing $I$ has minimal elements by inclusion (I decided to call this set $\mathrm{Min}(I)$ in the sequel). Prove that $\mathrm{Min}(I)$ is finite. -There is also a hint: Define $\mathcal{F}$ as the set of all ideals $I$ of $R$ such that $ \vert \mathrm{Min}(I) \vert = \infty$. Assume that $\mathcal{F} \neq \emptyset$. Then it must have a maximal element $I$. Find ideals $J_1,J_2$ that both strictly include $I$, such that $J_1J_2 \subset I$, and deduce a contradiction. 
-So I went along this hint: $I$ can't be a prime, as a prime is the only minimal prime over itself. It means that $\exists a,b \not \in I: ab \in I$. As $R$ is Noetherian there is a finite list of elements that generates $I = (r_1, \dots, r_n)$. Then it's possible to set $J_1 = (r_1, \dots,r_n,a)$, $J_2 =(r_1, \dots, r_n, b )$ with all required properties. As $I$ is maximal in $\mathcal{F}$ the sets $\mathrm{Min}(J_1)$ and $\mathrm{Min}(J_2)$ must be finite. -I am failing to find a desired contradiction and will be grateful for any help. - -REPLY [8 votes]: (To continue your argument) But let $P\in Min(I)$; since $ab\in I\subset P$ and $P$ is prime, $a\in P$ or $b\in P$; this implies that $J_1\subset P$ or $J_2\subset P$. Remark that an element $P$ of $Min(I)$ which contains $J_l,l=1,2$ is in $Min(J_l)$, thus $Min(I)\subset Min(J_1)\cup Min(J_2)$. This implies that $Min(J_1)$ or $Min(J_2)$ is infinite. This is in contradiction with the fact that $I$ is maximal among the ideals such that $Min(I)$ is infinite.<|endoftext|> -TITLE: Possible mistake in Rudin's definition of limits in the extended real number system -QUESTION [5 upvotes]: From Baby Rudin page 98 - -This seems to be a mistake since we have seemingly absurd results like -$$ graph(f) = \{(0,0)\} \Rightarrow \lim_{x\to \infty} f(x)= 0 $$ -We define the limit (for $x$ real) only for limit points of $E$, so my initial thinking is to enforce that every neighborhood of $x$ must have infinitely many points of $E$. This would imply that limits at infinity could only happen for unbounded $E$, so the previous example would not be true. Is there a more standard way of defining such limits? -This has been discussed before at Definition of the Limit of a Function for the Extended Reals but I'm more interested in the infinite case and how to fix the definition. - -REPLY [2 votes]: Yes, there is a more standard way. 
If you know topology, what you are doing is attaching two points to $\mathbb{R}$, which we will call $\infty$ and $-\infty$, giving the obvious order and imposing the order topology. What follows is that limits are now well-defined as in any topological space, and your proposed definition is equivalent. Just as in any topological space, limits are defined on limit points only. -This has the advantage of putting away the "special" feeling and treatment about $\infty$ and $-\infty$, putting them on the same footing as any real number. - -I've made this blog post some time ago about some considerations on the extended real line from a topological viewpoint. You may find it useful.<|endoftext|> -TITLE: How can we find geodesics on a one sheet hyperboloid? -QUESTION [17 upvotes]: I am looking at the following exercise: -Describe four different geodesics on the hyperboloid of one sheet -$$x^2+y^2-z^2=1$$ passing through the point $(1, 0, 0)$. -$$$$ -We have that a curve $\gamma$ on a surface $S$ is called a geodesic if $\ddot\gamma(t)$ is zero or perpendicular to the tangent plane of the surface at the point $\gamma (t)$, i.e., parallel to its unit normal, for all values of the parameter $t$. -Equivalently, $\gamma$ is a geodesic if and only if its tangent vector $\dot\gamma$ is parallel along $\gamma$. -$$$$ -Could you give me some hints on how we can find the geodesics in this case? - -REPLY [20 votes]: First, look at some pictures of hyperboloids, to get a feeling for their shape and symmetry. -There are two ways to think of your hyperboloid. Firstly, it's a surface of revolution. You can form it by drawing the hyperbola $x^2 - z^2 = 1$ in the plane $y=0$, and then rotating this around the $z$-axis. -Another way to get your hyperboloid is as a "ruled" surface. Take two circles of radius $\sqrt2$. One circle, $C_1$, lies in the plane $z=1$ and has center at the point $(0,0,1)$. The other one, $C_2$, lies in the plane $z=-1$ and has center at the point $(0,0,-1)$. 
As you can see, $C_1$ lies vertically above $C_2$. Their parametric equations are: -\begin{align} -C_1(\theta) &= (\sqrt2\cos\theta, \sqrt2\sin\theta, 1) \\ -C_2(\theta) &= (\sqrt2\cos\theta, \sqrt2\sin\theta, -1) -\end{align} -For each $\theta$, draw a line from $C_1(\theta)$ to $C_2(\theta + \tfrac{\pi}{2})$. This gives you the family of blue lines shown in the picture below. Similarly, you can get the red lines by joining $C_1(\theta)$ and $C_2(\theta - \tfrac{\pi}{2})$ for each theta: - -To identify geodesics, we will use two facts that are fairly well known (they can be found in many textbooks): -Fact #1: Any straight line lying in a surface is a geodesic. This is because its arclength parameterization will have zero second derivative. -Fact #2: Any normal section of a surface is a geodesic. A normal section is a curve produced by slicing the surface with a plane that contains the surface normal at every point of the curve. The commonest example of a normal section is a section formed by a plane of symmetry. So, any intersection with a plane of symmetry is always a geodesic. -There are infinitely many geodesics passing through the point $(1,0,0)$. But, using our two facts, we can identify four of them that are fairly simple. They are the curves G1, G2, G3, G4 shown in the picture below: - - -G1: the circle $x^2+y^2 =1$ lying in the plane $z=0$. This is a geodesic by Fact #2, since the plane $z=0$ is a plane of symmetry. At each point along the curve G1, the curve's principal normal must be parallel to the surface normal at the point, by symmetry. If this geometric argument is not convincing, we can confirm by calculations. At any point $P=(x,y,0)$ on G1, the surface normal and the curve's principal normal are both in the direction $(x,y,0)$. This is illustrated in the picture below: - - - -G2: the hyperbola $x^2 - z^2 = 1$ lying in the plane $y=0$. Again, this is a geodesic by Fact #2, since the plane $y=0$ is a plane of symmetry. 
- -G3: the line through the points $(1,-1,1)$ and $(1, 1, -1)$. This is one of the blue lines mentioned in the discussion of ruled surfaces above. In fact, its two defining points are $(1,-1,1) = C_1\big(-\tfrac{\pi}{4}\big)$ and $(1,1,-1) = C_2\big(\tfrac{\pi}{4}\big)$. It has parametric equation -$$ -G_3(t) = \big(x(t),y(t),z(t)\big) = (1,t,-t) -$$ -To check that $G_3$ lies on the surface, we observe that -$$ -x(t)^2 + y(t)^2 -z(t)^2 = 1 +t^2-t^2 = 1 \quad \text{for all } t -$$ -It's a geodesic by Fact #1. - -G4: the line through the points $(1,-1,-1)$ and $(1, 1, 1)$. The reasoning is the same as for G3.<|endoftext|> -TITLE: If $R[x]$ and $R[[x]]$ are isomorphic, then are they isomorphic to $R$ as well? -QUESTION [12 upvotes]: There are examples of commutative rings $R \neq 0$ such that $R[x]$ is isomorphic to $R[[x]]$ (see this question; an example would be $R=S[x_1, x_2, \ldots][[y_1, y_2, \ldots]]$, with $S \neq 0$ any commutative ring). This is false; see Martin Brandenburg's answer. -The following question was asked as a comment on the thread linked above: if $R$ is such that $R[x]\cong R[[x]]$, must we have that $R\cong R[x] \cong R[[x]]$? -This is clearly true for the example above, which is (essentially) the only family of examples I could come up with. - -REPLY [2 votes]: Yes, because $R[x] \cong R[[x]]$ implies $R=0$ (see my answer to the previous question here). -(I make this community wiki because this answer is rather trivial now.)<|endoftext|> -TITLE: What does 'coherent isomorphism' mean in the sense of pseudofunctors? -QUESTION [5 upvotes]: From what I've been able to find, pseudofunctors are not-quite-functors, in the sense that they preserve the identity morphism and composition of morphisms only up to coherent isomorphism, and not 'on the nose'. -But I'm struggling to find an explicit definition of what this means anywhere. The nLab has a large section on coherence theorems, but most of it is far above my head. 
-I'm assuming that it simply means 'isomorphic in a nice way', but I can't quite pin down what 'nice' means here. -Could anybody please explain what it does mean here? -Or even give a different definition of a pseudofunctor. - -REPLY [7 votes]: The definition of pseudofunctors on nLab is actually quite explicit, but it's for pseudofunctors between bicategories. Borceux (Handbook of Categorical Algebra, volume 1, 7.5) gives the definition for strict categories. -As for what a coherent (iso)morphism is, it's a morphism that satisfies coherence laws. These laws ensure that things that ought to be equal (and trivially would be, if the morphisms were identities) really are. -For example in this case you are given isomorphisms $φ : Ff ∘ Fg ≅ F(f ∘ g)$, but to do anything, you'll obviously need isomorphisms $Ff ∘ Fg ∘ Fh ≅ F(f ∘ g ∘ h)$ too, and similarly for any number of morphisms. -Of course you can get from $Ff ∘ Fg ∘ Fh$ to $F(f ∘ g ∘ h)$ using $φ$, but you can do so in two ways: via $Ff ∘ F(g ∘ h)$ or via $F(f ∘ g) ∘ Fh$. -One of the coherence laws for pseudofunctors says that these two ways are the same. That given this, all paths (constructed from $φ$) from $F(f_1) ∘ ... ∘ F(f_n)$ to $F(f_1 ... f_n)$ are the same is then a part of the coherence theorem for pseudofunctors (but these can also be stated in more sophisticated ways). -As a side-note, monoidal categories are a good way to get familiar with coherence and coherence theorems. In a way, anybody who uses products of sets or tensor products already is.<|endoftext|> -TITLE: Optimization-like question -QUESTION [8 upvotes]: Let's say I have a formula like $ax + by + cz = N$. $a, b, c$, and $N$ are known and cannot be changed. $x, y$, and $z$ are known and can be changed. -The problem is that the equation is not true! My problem (for a program I'm writing) is: how can $x, y$, and $z$ be changed so that the equation holds, while differing from their previous values as little as possible? 
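Edit: in code, the kind of fix I'm after would look like this (a Python sketch; `nearest_solution` is just a placeholder name, and the formula here is an orthogonal projection onto the plane, which I suspect is the right approach):

```python
def nearest_solution(a, b, c, N, x, y, z):
    """Return the point satisfying a*x + b*y + c*z = N that is closest
    (in Euclidean distance) to the given (x, y, z): shift along (a, b, c)."""
    r = N - (a * x + b * y + c * z)      # how far the equation is from holding
    t = r / (a * a + b * b + c * c)      # assumes (a, b, c) is not all zero
    return x + a * t, y + b * t, z + c * t

print(nearest_solution(1, 2, 3, 20, 1, 1, 1))  # (2.0, 3.0, 4.0): 1*2 + 2*3 + 3*4 = 20
```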
- -REPLY [2 votes]: Define $A = [a,b,c]$ and $X=[x, y, z]^T$, so your equation is $A*X = N$. Assume your known point to be $ \bar X = [x_0, y_0,z_0]^T$. Define the residual to be $R = N - A*\bar X$, since the equation is not true. -The question becomes $$\min|| X- \bar X ||,s.t. A*X = N$$ -The solution $X^*$ that minimizes the distance is $X^* = \bar X + A^+R$ where $A^+$ is the Moore-Penrose pseudoinverse of $A$; in this case, since $A$ has linearly independent rows, $A^+ = A^T(AA^T)^{-1}$. The distance is $||X^* - \bar X || = || A^+R||$; you can pick whatever norm you like. -This is also true if $A$ is a matrix instead of a row vector, in case you have a linear system of equations.<|endoftext|> -TITLE: Generalization of Urysohn's Lemma -QUESTION [6 upvotes]: Urysohn's lemma in general topology states: - -A topological space $X$ is normal (i.e., $T_4$) iff, for each pair of disjoint closed subsets $C, D \subset X$, there is a function $f : X \to [0, 1]$ such that $f(C) = 0$ and $f(D) = 1$. - -Of course Urysohn's proof relies heavily on the structure of $[0, 1]$ to go through, but I wondered how necessary this is. In particular, let's say - -A space $Y$ is said to "have property $\mathscr{U}$" if, for every compact Hausdorff space $X$, and every pair of disjoint subsets $C, D \subset X$, there exists a continuous map $f : X \to Y$ such that there are $y_1 \neq y_2 \in Y$ with $f(C) = y_1$ and $f(D) = y_2$. - -Question: Can we characterize those spaces with property $\mathscr{U}$? -Urysohn's lemma says that any space containing a line segment has property $\mathscr{U}$. Is this sufficient condition in fact necessary? -(Of course we could define a similar property replacing "compact Hausdorff" with "normal." I'm directly interested in the former situation, but I don't have any idea which is the "correct" definition to make.) - -Motivation: I've recently learned the following cute result: Let $X$ be a compact Hausdorff space and $F$ be a topological field. 
Then there is a continuous bijection -$$\varphi : X \to \operatorname{MaxSpec} C(X, F),$$ -where the space on the right is endowed with the Zariski topology. Of course $\varphi^{-1}$ will usually be far from continuous, since the Zariski topology is fairly weak. However, when $F = \mathbb{R}$ then $\varphi$ is a homeomorphism. To see this, we verify directly that $\varphi$ is a closed map using Urysohn's lemma. -I'm wondering if this allows us to give a characterization of the real numbers as a topological field that's essentially different from the usual ones. - -REPLY [5 votes]: Property $\mathscr U$ is rather trivial and it is equivalent to “the space $Y$ contains a continuous image $f_0([0,1])$ of the unit segment such that $f_0(0)\ne f_0(1)$”. Indeed, the necessity follows from the existence of a continuous map $f_0:[0,1]\to Y$ such that $f_0(0)\ne f_0(1)$, the sufficiency (even for normal spaces $X$) follows from Urysohn's lemma (the composition $f_0\circ f$ is the required separating map). (Here I assume that you mean both sets $C$ and $D$ are closed; otherwise, if we take as $C$ and $D$ disjoint dense subsets of $[0,1]$ then $f(C)=f([0,1])=f(D)$ for each continuous map $f$ into a $T_1$-space such that both $f(C)$ and $f(D)$ are one-point sets.)<|endoftext|> -TITLE: How to compute $\int_0^{\frac{\pi}{2}} \frac{\ln(\sin(x))}{\cot(x)}\ \text{d}x$ -QUESTION [7 upvotes]: I am trying to compute this integral. -$$\int_0^{\frac{\pi}{2}} \frac{\ln(\sin(x))}{\cot(x)}\ \text{d}x$$ -Any thoughts will help. Thanks. 
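Edit: numerically the value looks like $-\pi^2/24\approx-0.41123$ (a quick midpoint-rule check in Python; the integrand $\ln(\sin x)\tan x$ extends continuously by $0$ at both endpoints, and the midpoint rule avoids evaluating there):

```python
import math

def integrand(x):
    # ln(sin x) / cot x = ln(sin x) * tan x
    return math.log(math.sin(x)) * math.tan(x)

n = 200000
h = (math.pi / 2) / n
# Midpoint rule on (0, pi/2): never touches x = 0 or x = pi/2 exactly.
approx = h * sum(integrand((i + 0.5) * h) for i in range(n))
print(approx, -math.pi**2 / 24)  # both ≈ -0.41123...
```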
- -REPLY [4 votes]: Note that -\begin{eqnarray} -&&\int_0^{\frac{\pi}{2}} \frac{\ln(\sin(x))}{\cot(x)}\ \text{d}x\\ -&=&\int_0^{\frac{\pi}{2}}(\cos x)^{-1}\sin x\ln(\sin x)\ \text{d}x=\lim_{a\to0,b\to2}\frac{d}{db}\int_0^{\frac{\pi}{2}}(\cos x)^{a-1}(\sin x)^{b-1}\ \text{d}x\\ -&=&\lim_{a\to0,b\to2}\frac{d}{db}\frac{1}{2}B\left(\frac{a}{2},\frac{b}{2}\right)\\ -&=&\lim_{a\to0,b\to2}\frac{1}{4}B\left(\frac{a}{2},\frac{b}{2}\right)\left(\psi\left(\frac{b}{2}\right)-\psi\left(\frac{a+b}{2}\right)\right)\\ -&=&\lim_{a\to0}\frac{1}{4}B\left(\frac{a}{2},1\right)\left(\psi(1)-\psi\left(\frac{a+2}{2}\right)\right)\\ -&=&\lim_{a\to0}\frac{1}{4}B\left(\frac{a}{2},1\right)\left(\psi(1)-\psi\left(\frac{a}{2}+1\right)\right)\\ -&=&-\lim_{a\to0}\frac{1}{4}B\left(\frac{a}{2},1\right)\left(\gamma+\psi\left(\frac{a}{2}+1\right)\right)\\ -&=&-\lim_{a\to0}\frac{1}{4}\frac{\Gamma(\frac{a}{2})\Gamma(1)}{\Gamma(\frac{a+2}{2})}\left(\gamma+\psi\left(\frac{a}{2}+1\right)\right)\\ -&=&-\lim_{a\to0}\frac{1}{4}\left(\frac{2}{a}+O(a^2)\right)\left(\frac{\pi^2}{12}a+O(a^2)\right)\\ -&=&-\frac{\pi^2}{24}. -\end{eqnarray} -Here we use the following facts, valid as $a\to0$: -$$ \Gamma(a)\approx\frac{1}{a}, \quad \gamma+\psi\left(\frac{a}{2}+1\right)\approx\frac{\pi^2}{12}a.$$<|endoftext|> -TITLE: Is this a way to prove there are infinitely many primes? -QUESTION [21 upvotes]: Someone gave me the following fun proof of the fact there are infinitely many primes. I wonder if this is valid, if it should be formalized more or if there is a falsehood in this proof that has to do with "fiddling around with divergent series". 
-Consider the product -$\begin{align}\prod_{p\text{ prime}} \frac p{p-1} &= \prod_{p\text{ prime}} \frac 1{1-\frac1p}\\&=\prod_{p\text{ prime}}\left(\sum_{i=0}^\infty\frac1{p^i}\right)\\&=(1+\tfrac12+\tfrac14+\ldots)(1+\tfrac13+\tfrac19+\ldots)\ldots&(a)\\&=\sum_{i=1}^\infty\frac1i&(b)\\&=\infty\end{align}\\\text{So there are infinitely many primes }\blacksquare$ -Especially the step $(a)$ to $(b)$ is quite nice (and can be seen by considering the unique prime factorization of each natural number). Is this however a valid step, or should we be more careful, since we're dealing with infinite, diverging series? - -REPLY [19 votes]: The rigorous approach, as noted in comments. -If there are only finitely many primes, pick an $n$ so that: -$$H_n=\sum_{m=1}^{n}\frac{1}m > \prod_{p}\frac{1}{1-\frac 1p}$$ -You can do this because the right side is a finite product of positive real numbers, and the series $\sum_{m=1}^{\infty}\frac1m$ diverges. -Next, show that: -$$\prod_p \frac{1}{1-\frac{1}{p}} > \prod_p \sum_{k=0}^{\lfloor \log_{p}n\rfloor} \frac{1}{p^k}>\sum_{m=1}^{n}\frac{1}{m}$$ -Reaching a contradiction.<|endoftext|> -TITLE: Combinatorial proof that $\frac{({10!})!}{{10!}^{9!}}$ is an integer -QUESTION [8 upvotes]: I need help to prove that the quotient -$\dfrac{({10!})!}{{10!}^{9!}}$ -is an integer, using a combinatorial proof. - -REPLY [17 votes]: You have $10!$ students. You want to divide them into groups of $10$ and line up the groups; in how many different ways can you do this? -You can line up the students in $(10!)!$ different orders. Now imagine that you count them off by tens and mark a line on the ground between groups of $10$; you have $\frac{10!}{10}=9!$ groups of $10$. Since all you care about is the order of the groups, you can allow the students in each group of $10$ to rearrange themselves within the group as they please, and you'll still have the same lineup of groups. 
Each group can rearrange itself in $10!$ different orders, and there are $9!$ groups, so there are $10!^{9!}$ different lineups of students that produce the same lineup of groups. The number of lineups of groups is therefore $\frac{(10!)!}{10!^{9!}}$, which of course must be an integer. -You can replace $10$ by $n$ and $9$ by $n-1$ and repeat the argument to show that $\dfrac{(n!)!}{n!^{(n-1)!}}$ is an integer.<|endoftext|> -TITLE: What's the definition of a "collection"? -QUESTION [7 upvotes]: I cannot seem to find a formal definition for the following. -What's a "collection" in the context of set theory? - -REPLY [6 votes]: "Collections" have no formal existence in set theory. The word is deliberately left without a technical meaning, such that it is available for speaking about our intuitive non-rigorous idea about, erm, collections of things -- without implying that the collection we're talking about satisfies the formal conditions for being considered a "set" or "class", both of which are often technical terms.<|endoftext|> -TITLE: Extension of vector bundles on $\mathbb{CP}^1$ -QUESTION [6 upvotes]: Let $\lambda\in\text{Ext}^1(\mathcal{O}_{\mathbb{P}^1}(2),\mathcal{O}_{\mathbb{P}^1}(-2))$ and $E_\lambda$ be a vector bundle on $\mathbb{CP}^1$ which is given by the exact sequence -\begin{equation}0\to\mathcal{O}_{\mathbb{P}^1}(-2)\to E_\lambda\to\mathcal{O}_{\mathbb{P}^1}(2)\to0,\,\,\,\,(1)\end{equation} -and corresponds to $\lambda$. -Then, as it was discussed here, $E\cong\mathcal{O}_{\mathbb{P}^1}(a_\lambda)\oplus\mathcal{O}_{\mathbb{P}^1}(-a_\lambda)$ for some $a_\lambda\in\{0,1,2\}$. -For each $\lambda$ I want to find the explicit value of $a_\lambda$. Can anyone help me? -I tried the following explicit construction. -Let $P=\mathcal{O}_{\mathbb{P}^1}(-1)^{\oplus4}$. 
There is the surjective map $P\to\mathcal{O}_{\mathbb{P}^1}(2)$, which is given by the evaluation map $H^0(\mathbb{P}^1, \mathcal{O}_{\mathbb{P}^1}(3))\otimes\mathcal{O}_{\mathbb{P}^1}\to\mathcal{O}_{\mathbb{P}^1}(3)$ twisted by $-1$. Let $F=\mathcal{O}_{\mathbb{P}^1}(-2)^{\oplus3}$ be the kernel of this map so we have the exact sequence -\begin{equation}0\to F\to P\to\mathcal{O}_{\mathbb{P}^1}(2)\to0.\,\,\,\,(2)\end{equation} -Note that $\text{Ext}^1(P,\mathcal{O}_{\mathbb{P}^1}(-2))=0$. Thus applying $\text{Hom}(-,\mathcal{O}_{\mathbb{P}^1}(-2))$ to $(2)$ we obtain the surjective connecting map -$$\delta:\text{Hom}(F,\mathcal{O}_{\mathbb{P}^1}(-2))\to\text{Ext}^1(\mathcal{O}_{\mathbb{P}^1}(2),\mathcal{O}_{\mathbb{P}^1}(-2)).$$ -Now from the surjectivity of $\delta$ there exists $f\in\text{Hom}(F,\mathcal{O}_{\mathbb{P}^1}(-2))$ such that $\delta(f)=\lambda$ and $E_\lambda$ is given as the push-out of $F\to P$ and $F\to\mathcal{O}_{\mathbb{P}^1}(-2)$. -I don't know how to compute $f$ explicitly for a given $\lambda$ and then how to compute the push-out. -Maybe there exists a different approach to solve this problem? - -REPLY [2 votes]: I think that you can use duality (Hartshorne, III Thm 7.1), -$$ \mathrm{Ext}^1(\mathcal{O}_{\mathbb{P}^1}(2),\mathcal{O}_{\mathbb{P}^1}(-2)) \simeq \mathrm{H}^0(\mathbb{P}^1, \mathcal{O}_{\mathbb{P}^1}(2))^{\vee} $$ -Identify $\mathcal{O}_{\mathbb{P}^1}(-2) = \mathcal{O}_{\mathbb{P}^1}(-p -q)$, where $p$ and $q$ are distinct points. Let $A_{\lambda}$ be a homogeneous quadratic polynomial corresponding to $\lambda$. -You can use $p$, $q$ and $A_{\lambda}$ to construct $E_{\lambda}$. -If $A_{\lambda}(p)=A_{\lambda}(q)=0$, then $a_{\lambda}=0$. -If $A_{\lambda}(p)=0 $ and $A_{\lambda}(q)\neq 0$, then $a_{\lambda}=1$. -If $A_{\lambda}(p)\neq 0 $ and $A_{\lambda}(q)\neq 0$, then $a_{\lambda}=2$. 
-Note that choosing $p$ and $q$ is the same as choosing a basis for $\mathrm{H}^0(\mathbb{P}^1, \mathcal{O}_{\mathbb{P}^1}(1))$, which determines a basis for $\mathrm{H}^0(\mathbb{P}^1, \mathcal{O}_{\mathbb{P}^1}(2))$ that you have to fix to construct an isomorphism -$$ \mathrm{H}^0(\mathbb{P}^1, \mathcal{O}_{\mathbb{P}^1}(2)) \simeq \mathrm{H}^0(\mathbb{P}^1, \mathcal{O}_{\mathbb{P}^1}(2))^{\vee} $$<|endoftext|> -TITLE: A property of the series $\sum_{i=1}^{\infty}\frac{(1-x)x^i}{1+x^i}$ -QUESTION [7 upvotes]: I think this is a hard question: -$$f(x)=\sum_{i=1}^{\infty}\frac{(1-x)x^i}{1+x^i}~~~\text{for}~~~x\in(0,1)$$ -Prove: $$\lim_{x\to 1^-}f(x)=\ln(2)$$ -Find the maximum value of $f(x)$. - -REPLY [2 votes]: We start by expanding $\frac{1}{1+x^i} = \frac{1-x^i}{1-x^{2i}}$ in a geometric series to get the double sum -$$f(x)=\sum_{i=1}^{\infty}\sum_{n=0}^\infty (1-x)(1-x^i)x^{2n i}$$ -Since the summands above are non-negative we can, by Tonelli's theorem, interchange the order of summation to get -$$f(x) = \sum_{n=0}^\infty \frac{x^{2n+1}(1-x)^2}{(1-x^{2n+1})(1-x^{2n+2})}$$ -The function -$$f_n(x) = \frac{x^{2n+1}(1-x)^2}{(1-x^{2n+1})(1-x^{2n+2})}$$ -is monotonically increasing on $[0,1]$ and therefore satisfies -$$f_n(x) \leq \lim_{x\to 1^-} f_n(x) = \frac{1}{(2n+1)(2n+2)}$$ -Since $\sum_{n=0}^\infty \frac{1}{(2n+1)(2n+2)}$ converges it follows from the Weierstrass M-test that the series $\sum_{n=0}^\infty f_n(x)$ converges uniformly on $[0,1]$ and therefore -$$\lim_{x\to 1^-} f(x) = \lim_{x\to 1^-}\sum_{n=0}^\infty f_n(x) = \sum_{n=0}^\infty \lim_{x\to 1^-} f_n(x) = \sum_{n=0}^\infty \frac{1}{(2n+1)(2n+2)} = \log(2)$$ -This is also the maximum value of $f(x)$ on $[0,1]$. 
The last equality above follows from $\sum_{n=0}^\infty \frac{1}{(2n+1)(2n+2)} = \sum_{n=0}^\infty \frac{1}{2n+1} - \frac{1}{2n+2} = \sum_{n=0}^\infty \frac{(-1)^n}{n+1} = \log(2)$.<|endoftext|> -TITLE: Find the range of $k$ for which the inequality $k\cos^2x-k\cos x+1\geq0 ,\forall x\in(-\infty,\infty)$ holds. -QUESTION [5 upvotes]: Find the range of $k$ for which the inequality $k\cos^2x-k\cos x+1\geq0 ,\forall x\in(-\infty,\infty)$ holds. - -This is an inequality involving the trigonometric function $\cos x$, which varies from $-1$ to $1$. -If the question had been $kx^2-kx+1\geq 0$, I would have easily solved it, by using the discriminant property of quadratics. If a quadratic is always positive then its discriminant is negative, but I am not able to find the range of $k$ in this question. - -REPLY [7 votes]: Just another way: Equivalently, you have to find when -$$4k(\cos x -\tfrac12)^2 \ge k-4$$ -Now $\cos x \in [-1, 1] \implies (\cos x - \tfrac12)^2 \in [0, \tfrac94]$, and so $ k \in [-\frac12, 4]$.<|endoftext|> -TITLE: Does sequence always give a square -QUESTION [5 upvotes]: Show that there exists a non-constant positive integer sequence $\{a_{n}\},a_{0}=1$ such that -$$\dfrac{a^2_{n}+a_{n}}{2}-\dfrac{a^2_{n-1}+a_{n-1}}{2},\forall n\in\mathbb{N}$$ is always a perfect square. - -REPLY [3 votes]: Let $a_n=\frac{3^n-1}2$ (so indeed $a_1=\frac{3^1-1}2=1$). Then -$$ \frac{a_n^2+a_n}2-\frac{a_{n-1}^2+a_{n-1}}2=\frac{3^{2n}-1}8-\frac{3^{2(n-1)}-1}8=3^{2(n-1)}=(3^{n-1})^2.$$<|endoftext|> -TITLE: Constructing a Convergent/Divergent Series from a Positive Sequence -QUESTION [7 upvotes]: Suppose that $\{x_n\}$ is a positive sequence such that $x_n \to 0$. Construct a positive sequence $s_n$ such that $\displaystyle \sum_{n=1}^\infty s_n$ diverges while $\displaystyle \sum_{n=1}^\infty x_ns_n$ converges. -It is clear that if $\sum x_n$ converges, then taking $s_n=1$ for all $n$ will work. We are then reduced to the case where $\sum x_n$ diverges. 
My only thought is that as $\sum x_n$ does not converge, its tail does not form a Cauchy sequence, hence for some $\epsilon>0$, we have -$$ -\sum_{n=m}^\infty x_n >\epsilon -$$ -Any hint as to how I should proceed? - -REPLY [3 votes]: Because $x_n\to 0,$ we can say i) the $x_n$'s are bounded above by some $M;$ and ii) There are $n_1 < n_2 < \cdots $ such that $x_{n_k} < 1/k^2$ for each $k.$ -Let $E = \{n_1,n_2,\dots \}.$ For $n \in E,$ define $s_n = 1.$ For $n\notin E,$ define $s_n=1/n^2.$ Because $s_n = 1$ for infinitely many $n,$ $\sum s_n =\infty.$ We have -$$\sum_{n=1}^{\infty} x_ns_n = \sum_{n\in E} x_ns_n + \sum_{n\notin E} x_ns_n.$$ -The first sum on the right is -$$\sum_{k=1}^\infty x_{n_k}s_{n_k}< \sum_{k=1}^\infty \frac{1}{k^2}\cdot 1 < \infty.$$ -The second sum on the right is no more than -$$\sum_{n\notin E} M\cdot \frac{1}{n^2} < \infty.$$ -Thus $\sum_{n=1}^{\infty} x_ns_n$ converges as desired.<|endoftext|> -TITLE: Can Isomorphism between categories be defined only if both categories are small? -QUESTION [6 upvotes]: On the Wikipedia page "Isomorphism of categories", - -A functor F : C → D yields an isomorphism of categories if and only if it is bijective on objects and on morphism sets. - -I have heard that a bijection or "morphism sets" can be defined only if the category is small. Can "category isomorphism" be defined only if both categories are small? - -REPLY [7 votes]: No. But the answer in detail depends on the foundations you are using. For instance, in $\mathsf{ZFC}$ we may interpret classes using formulas (up to equivalence). Then, a category $C$ has a formula $\mathrm{Ob}(C)$ with one variable such that "$x$ is an object of $C$" means that $\mathrm{Ob}(C)(x)$ holds. Also, we have formulas $\mathrm{Mor}(C)$, $\mathrm{dom}(C)$, $\mathrm{cod}(C)$, $\mathrm{Comp}(C)$, $\mathrm{Id}(C)$ such that the category axioms are satisfied. 
Now, a functor $F : C \to D$ consists of two formulas $F_O$ and $F_M$ such that
-
-$\forall x ( \mathrm{Ob}(C)(x) \Rightarrow \exists ! y (\mathrm{Ob}(D)(y) \wedge F_O(x,y)))$
-
-i.e., every object of $C$ is mapped to some specified object of $D$; one usually writes $F(x)=y$ instead of $F_O(x,y)$. Likewise for morphisms:
-
-$\forall f (\mathrm{Mor}(C)(f) \Rightarrow \exists ! g ( \mathrm{Mor}(D)(g) \wedge F_M(f,g)))$
-$\forall f,g,x,x',y,y' (F_M(f,g) \wedge \mathrm{dom}(C)(f,x) \wedge \mathrm{cod}(C)(f,x') \wedge F_O(x,y) \wedge F_O(x',y') \Rightarrow \mathrm{dom}(D)(g,y) \wedge \mathrm{cod}(D)(g,y'))$
-
-i.e., if $f$ is a morphism in $C$ from $x$ to $x'$, then $F(f)$ is a morphism in $D$ from $F(x)$ to $F(x')$,
-
-compatibilities with respect to identities and composition (which I won't write down here).
-
-The composition of two functors $F : C \to D$, $G : D \to E$ is the functor $G \circ F : C \to E$ defined by $(G \circ F)_O(x,z) :\Leftrightarrow \exists y (F_O(x,y) \wedge G_O(y,z))$, likewise for $(G \circ F)_M$.
-Now, $F$ is an isomorphism iff $F_O$ and $F_M$ are bijective, i.e.
-
-$\forall y ( \mathrm{Ob}(D)(y) \Rightarrow \exists ! x (\mathrm{Ob}(C)(x) \wedge F_O(x,y)))$
-$\forall g (\mathrm{Mor}(D)(g) \Rightarrow \exists ! f ( \mathrm{Mor}(C)(f) \wedge F_M(f,g)))$
-
-In this case, the inverse functor $F^{-1}$ is defined by $F^{-1}_O(y,x) :\Leftrightarrow F_O(x,y)$ and $F^{-1}_M(g,f) :\Leftrightarrow F_M(f,g)$.<|endoftext|>
-TITLE: Marginal Distribution of Uniform Vector on Sphere
-QUESTION [5 upvotes]: Suppose that $U=(U_1,\ldots,U_n)$ has the uniform distribution on the unit sphere $S_{n-1}=\{x\in\mathbb R^n:\|x\|_2=1\}$.
-I'm trying to understand the marginal distributions of individual components of the vector $U$, say, without loss of generality, $U_1$.
-So far, this is what I have: I know that $U$ is equal in distribution to $Z/\|Z\|_2$, where $Z$ is an $n$-dimensional standard gaussian vector. Thus, $U_1$ is equal in distribution to $Z_1/\|Z\|_2$. 
Then, we could in principle compute the distribution of $U_1$ as
-$$P[U_1\leq t]=P\big[Z_1\leq\|Z\|_2t\big],$$
-but the right-hand side above using conventional means is horribly messy (i.e., nested integrals over the set $[x_1\leq\|x\|t]$).
-Is there a more practical/intuitive way of computing this distribution?
-
-REPLY [4 votes]: I hope that this is still relevant to you.
-The answer can be found in eq. 1.26:
-Fang, Bi-Qi; Fang, Kai-Tai, Symmetric multivariate and related distributions, (2017). (https://books.google.co.il/books?hl=iw&lr=&id=NL1HDwAAQBAJ&oi=fnd&pg=PT10&ots=u5cnMFtVxP&sig=ZgnWbGkG8qdVARoJ64mfm9fcag0&redir_esc=y#v=onepage&q&f=false).
-The marginal density of $(z_1/\Vert \textbf{z} \Vert, ..., z_k /\Vert \textbf{z} \Vert )$ is
-$$\frac{\Gamma(n/2)}{\Gamma((n-k)/2) \pi^{k/2}} \left(1 - \sum_{i=1}^k z_i^2 \right)^{(n-k)/2 -1}$$
-This can be derived from the Dirichlet distribution.<|endoftext|>
-TITLE: Subgroup of $\mathbb{C}^*$ such that $\mathbb{C}^*/H \cong \mathbb{R}^*$
-QUESTION [9 upvotes]: Define $\mathbb{C}^* = \mathbb{C}\setminus \{0\}$ and $\mathbb{R}^* = \mathbb{R}\setminus \{0\}$
-
-Does there exist a subgroup $H$ of $\mathbb{C}^*$ such that $\mathbb{C}^*/H$ is isomorphic to $\mathbb{R}^*$?
-
-I know that $\mathbb{C}^*/{U}\cong \mathbb{R}^+$, with $U = \{ z \in \mathbb{C}:|z| = 1\}$, but how about $\mathbb{R}^*$?
-Thanks a lot.
-
-REPLY [14 votes]: Suppose there exists a group epimorphism $\phi \colon \mathbb{C}^\times \to \mathbb{R}^\times$. Then there exists some $z \in \mathbb{C}^\times$ with $\phi(z) = -1$. If $w$ is a square root of $z$ then $\phi(w)^2 = \phi(z) = -1$, which is not possible.
-This shows more generally that the image of every group homomorphism $\mathbb{C}^\times \to \mathbb{R}^\times$ already lies in $\mathbb{R}_{>0}$.<|endoftext|>
-TITLE: Modules over associative algebras are just special cases of "ordinary" modules over rings?
-QUESTION [5 upvotes]: By a module over a ring, I always mean a right module. 
All rings are supposed to be unital, and the module fulfills $m\cdot 1 = m$. If $R$ is commutative and $M$ a right-module, we can define $rx := xr$ and also get a left-module. In this situation we call $M$ a bimodule, and unless otherwise said, this natural definition is the one applied when I speak of bimodules.
-An algebra $A$ over a commutative ring $R$ is itself a ring with $1$, which is also an $R$-module, and such that for all $r \in R$ and $x,y \in A$ we have
-$$
- (rx)y = r(xy) = x(ry).
-$$
-As the ring is assumed to be commutative, we essentially have a bimodule over $R$ as written above.
-A module $M$ over an algebra $A$ which is defined over some ring $R$ is itself an $R$-module, for which we have an operation $M \times A \to M$ such that
-for each $u, v \in M$ and $x,y \in A$ and $r \in R$ we have
-(1) $(u+v)x = ux + vx$
-(2) $v(x+y) = vx + vy$
-(3) $(vx)y = v(xy)$
-(4) $v1 = v$
-(5) $(rv)x = r(vx) = v(rx)$.
-This is the standard definition I see everywhere. But I guess I observed the following. For an algebra we can embed $R$ into $A$ by identifying it with $R\cdot 1$ for $1 \in A$. So the assertion that $M$ must be an $R$-module is implied by (1), (2), (3) and (4). So we can just define a module over an algebra as an ordinary module over $A$ seen as a ring with $1$. As $A$ is in general not commutative, this is just a right-module. But by the embedding, and the additional requirement for an algebra, every element of $R$ is in the center $Z(A) = \{ x : xy = yx \mbox{ for all } y \in A \}$, restricted to $R$ everything is fine and we again have a bimodule. So the only thing that makes modules over algebras special here is (5). 
But here we have
-\begin{align*}
- (rv)x & = (vr)x & \mbox{definition of $M$ as bimodule over $R$} \\
- & = v(rx) & \mbox{by (3)} \\
- & = v(xr) & \mbox{$R$ is central in $A$} \\
- & = (vx)r & \mbox{by (3)} \\
- & = r(vx) & \mbox{definition of $M$ as bimodule over $R$}
-\end{align*}
-so we see that in the third and last line we have recovered everything from (5).
-So as I see it, a module over an algebra is just an ordinary module over the algebra seen as a ring, so why bother with this extra definition? I guess if we generalise further, for example look at non-associative algebras, then they are no longer rings and we cannot define modules over them as special cases of modules over rings. But most of the time (and in the textbooks I am reading right now) such generalizations are not considered, but nevertheless most of the time modules over algebras are defined separately from modules over rings.
-So why is that? Or have I overlooked something and my computations are wrong?
-Remark: These modules over algebras come from the representation theory of (finite) groups.
-
-REPLY [2 votes]: Indeed your proof seems to work fine assuming in the definition you require $M$ to be a right $R$-module. In that case one could simplify the definition by requiring that a module over an algebra $A$ is just an $A$-module (the $R$-module structure can be derived in the way you showed).
-On the other hand, if we regard $M$ as a left $R$-module things change.
-If $M$ is just a left $R$-module the condition $(5)$ is required in order to ensure that the left action of $R$ and right action of $A$ (and so also the induced right action of $R$) are compatible: that is, that $M$ is an $(R,A)$-bimodule.
-Note that in general even for $(R,R)$-bimodules (i.e. left and right $R$-modules where the two actions are compatible) it is not required that the left and right actions coincide. 
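As a concrete sanity check of the compatibility axiom (5), here is a toy computation of my own (purely illustrative, with hypothetical helper names): take $R=\mathbb{Z}$, $A$ the ring of $2\times 2$ integer matrices, and $M=A$ itself as a right $A$-module.

```python
# Toy check of axiom (5): R = integers, A = 2x2 integer matrices, M = A
# as a right A-module.  The scalar r multiplies every matrix entry.
def mat_mul(u, w):
    # Ordinary 2x2 matrix product.
    return [[sum(u[i][k] * w[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

def scal(r, u):
    # Entrywise scalar action of r in R.
    return [[r * u[i][j] for j in range(2)] for i in range(2)]

v = [[1, 2], [3, 4]]
x = [[0, 1], [1, 1]]
r = 5
lhs = mat_mul(scal(r, v), x)   # (rv)x
mid = scal(r, mat_mul(v, x))   # r(vx)
rhs = mat_mul(v, scal(r, x))   # v(rx)
print(lhs == mid == rhs)  # True
```

Since the scalar $r$ just multiplies every entry, it slides past the matrix product on either side, which is exactly the content of (5).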
-Hope this helps.<|endoftext|>
-TITLE: Example of a function differentiable in a point, but not continuous in a neighborhood of the point?
-QUESTION [15 upvotes]: Is there a function that is differentiable in a point $x_0$ (and so continuous of course in $x_0$) but not continuous in a neighborhood of $x_0$ (as said, besides the point $x_0$ itself)?
-Can anyone suggest an example of that?
-Thanks a lot in advance
-
-REPLY [11 votes]: To begin with we take $x_0 = 0$ : we want $f(0) = 0$ and $f'$ defined at $0$, with $f'(0)=1$.
-Then you take :
-
-$f(x) = x$ if $x \in \mathbb{Q}$
-$f(x) = x + x^2$ if $x \notin \mathbb{Q}$
-
-$f$ is continuous only at $0$. Moreover, you can check that $f'(0) = 1$ : indeed we have $f(0) = 0$, so $\mid \frac{f(x) - f(0)}{x} - 1\mid = \mid \frac{f(x)}{x} - 1\mid$. This quantity equals either $0$ if $x \in \mathbb{Q}$ or $\mid x \mid$ if $x \notin \mathbb{Q}$. Either way, when $x$ approaches $0$, $\mid \frac{f(x) - f(0)}{x} -1 \mid $ approaches $0$. Although $f$ is continuous nowhere except at $0$, the derivative at $0$ exists and $f'(0) = 1$.
-
-More generally, if you want $g(x_0) = y_0$, $g'(x_0) = a$ and $g$ continuous only at $x_0$, you can take :
-$g(x) = y_0 + a\times f(x-x_0)$
-A quite equivalent (but slightly different) explicit version would be :
-
-$g(x) = y_0 + a(x-x_0)$ if $x\in \mathbb{Q}$
-$g(x) = y_0 + a(x-x_0) +(x-x_0)^2$ if $x\notin \mathbb{Q}$<|endoftext|>
-TITLE: Is it true that a ring has no zero divisors iff the right and left cancellation laws hold?
-QUESTION [9 upvotes]: This is the definition of zero divisor in Hungerford's Algebra:
-
-A zero divisor is an element of $R$ which is BOTH a left and a right zero divisor.
-
-It is followed by this statement:
-It is easy to verify that a ring $R$ has no zero divisors
-if and only if the right and left cancellation laws hold in $R$;
-that is,
-for all $a,b,c\in R$ with $a\neq 0$,
-$$ab=ac~~~\text{ or }~~~ba=ca~~~\Rightarrow~~~ b=c.$$
-I think it is not true. 
-But I can't find a counterexample. - -REPLY [6 votes]: Lemma: A ring has a left (or right) zero-divisor if and only if it has a zero divisor. -Proof: Assume $ab=0$ for $a,b\neq 0$. -If $ba=0$, you are done - $a$ is both a left and right zero divisor. -If $ba\neq 0$, then $a(ba)=(ab)a=0$ and $(ba)b=b(ab)=0$, so $ba$ is a left and right zero divisor. - -Now it is much easier to prove your theorem. -If $ax=ay$ and $R$ has no zero-divisors, then $a(x-y)=0$. But, by the lemma, $R$ also has no left-zero divisors, so either $a=0$ or $x-y=0$. -Similarly for $xa=ya$. -On the other hand, if cancellation is true, then $a\cdot b=0=a\cdot 0$ means that either $a=0$ or $b=0$. So there can't be any left zero divisors, and thus no zero divisors. - -REPLY [2 votes]: Suppose $ab = 0$ with $a, b \ne 0$. Either $ba = 0$ (which means $a$ and $b$ are zero-divisors), or $ba \ne 0$, in which case $ba$ is a zero-divisor because $a(ba) = 0$ and $(ba)b = 0$.<|endoftext|> -TITLE: How can I visualize independent and dependent set of vectors? -QUESTION [9 upvotes]: Can someone help me visualize those concepts? It will also help me understand it better. -Thanks :) - -REPLY [2 votes]: Two parallel vectors are linearly dependent. Three vectors lying in a plane are linearly dependent. Four vectors lying in the same three-dimensional hyperplane are linearly dependent. -In n-dimensional space, you can find at most n linearly independent vectors. -Think of the vectors as rods with which you want to span up a tent: one rod gives you just a line, two rods give you a face. you need a third rod outside of that plane (linearly independent) to span up a volume. Any additional rods cannot span into a fourth dimension, so four rods in three dimensions must be linearly dependent.<|endoftext|> -TITLE: Number of different ordered pairs -QUESTION [7 upvotes]: Let $X = \{1,2,3,4,5\}$. 
What is the number of different ordered pairs $(Y,Z)$ that can be formed such that
-$$Y\subseteq X$$
-$$Z\subseteq X$$
-$$Y\cap Z=\emptyset$$
-How to make cases in this question?
-
-REPLY [3 votes]: Here's another way to think about the problem.
-Each element can be 'colored' with the color $Y$, $Z$ or $0$, based on whether it belongs to $Y$, to $Z$, or to neither. Since $Y \cap Z = \emptyset$, each element can be assigned exactly one color for every possible ordered pair. For example, $Y = \{ 1,2,3\}$ and $Z= \{4 \}$ would give the coloring $YYYZ0$.
-There are 3 different colors and 5 symbols, so $3^5$ possible colorings. Each coloring corresponds to a pair $(Y,Z)$, so there are $3^5$ such ordered pairs.<|endoftext|>
-TITLE: How do you evaluate $\int_{0}^{\frac{\pi}{2}} \frac{(\sec x)^{\frac{1}{3}}}{(\sec x)^{\frac{1}{3}}+(\tan x)^{\frac{1}{3}}} \, dx ?$
-QUESTION [12 upvotes]: Problem:
-$$\int_{0}^{\frac{\pi}{2}} \frac{(\sec x)^{\frac{1}{3}}}{(\sec x)^{\frac{1}{3}}+(\tan x)^{\frac{1}{3}}} dx$$
-My attempt:
-I tried applying the property: $\int_{0}^{a} f(x)dx$ = $\int_{0}^{a} f(a-x)dx$ but got nowhere since the denominator changes. Even on adding the two integrals by taking LCM of the denominators, the final expression got more complicated because the numerator and denominator did not have any common factor.
-I also tried dividing numerator and denominator by $(\sec x)^{\frac{1}{3}}$ to get
-$$\int_{0}^{\frac{\pi}{2}} \frac{1}{1+(\sin x)^{\frac{1}{3}}} dx$$ and then tried substituting $\sin x = t^3$ to get a complicated integral in $t$, which I couldn't evaluate.
-
-How do you evaluate this integral? (PS: If possible, please evaluate this without using special functions since this is a practice question for an entrance exam and we've only learnt some basic special functions and the gamma function.) 
-
-REPLY [8 votes]: $$ \int_{0}^{\pi/2}\frac{dx}{1+(\sin x)^{1/3}} = \int_{0}^{\pi/2}\frac{1-(\sin x)^{1/3}+(\sin x)^{2/3}}{1+\sin x}\,dx=I_1-I_2+I_3$$
-where, after the substitution $x\mapsto \pi/2-x$ (which swaps $\sin$ and $\cos$ over $[0,\pi/2]$):
-$$ I_1 = \int_{0}^{\pi/2}\frac{dx}{1+\cos x}=\int_{0}^{\pi/2}\frac{1-\cos x}{\sin^2 x}\,dx = \left.\left(\csc x-\cot x\right)\right|_{0}^{\pi/2}=1,$$
-$$ I_2 = \int_{0}^{\pi/2}\frac{(\cos x)^{1/3}-(\cos x)^{4/3}}{\sin^2 x}\,dx,\quad I_3 = \int_{0}^{\pi/2}\frac{(\cos x)^{2/3}-(\cos x)^{5/3}}{\sin^2 x}\,dx $$
-but Euler's beta function gives:
-$$ \int_{0}^{\pi/2}(\sin x)^\alpha (\cos x)^{\beta}\,dx = \frac{\Gamma\left(\frac{\alpha+1}{2}\right)\cdot\Gamma\left(\frac{\beta+1}{2}\right)}{2\cdot\Gamma\left(\frac{2+\alpha+\beta}{2}\right)}$$
-hence, after some simplification:
-
-$$ \int_{0}^{\pi/2}\frac{dx}{1+(\sin x)^{1/3}} = 1-\frac{2^{4/3}\pi^2(\sqrt{3}-1)}{3\cdot\Gamma\left(\frac{1}{3}\right)^3}+\frac{2^{2/3}\pi^2(2-\sqrt{3})}{9\cdot\Gamma\left(\frac{2}{3}\right)^3}. $$<|endoftext|>
-TITLE: I know there are three real roots for cubic however cubic formula is giving me non-real answer. What am I doing wrong?
-QUESTION [6 upvotes]: I want to solve the equation $x^3-x=0$ using this cubic equation. For there to be real roots for the cubic (I know the roots are $x=-1$, $x=0$, $x=1$), I assume there must be a positive number inside the inner square root. (Or is that wrong?)
-However, when I substitute in $a=1$, $b=0$, $c=-1$, $d=0$, the square root term inside the cube root terms becomes
-$$\sqrt{\;\left(\;2(0)^3 - 9(1)(0)(-1) + 27(1)^2(0)\;\right)^2 - 4 \left(\;(0)^2 - 3(1)(-1)\;\right)^3\quad}$$
-It gives me $\sqrt{-108}$, which is $10.39i$. Now that I have a non-real number as part of the equation I can't see any way for it to be cancelled or got rid of, even though I know there is a real answer.
-Could somebody please tell me how I can get a real answer and what I am doing wrong? Thanks.
-
-REPLY [7 votes]: Okay, so you're getting that the operands of the cube root parts of the formula will look like this:
-$$p + q i \qquad\text{and}\qquad p - q i$$
-with some pesky non-zero $q$ (namely, $\sqrt{108}$). Well, these values are conjugates, so that their respective (principal) cube roots are conjugates, as well. For the $x_1$ value in your formula, these conjugate cube roots add together, and their imaginary parts conveniently cancel. The same kind of cancellation happens for the $x_2$ and $x_3$ values, too, because the factors $\frac{1}{2}(1+i\sqrt{3})$ and $\frac{1}{2}(1-i\sqrt{3})$ are themselves conjugates. When the dust settles, you'll have the three real roots you expect.
-As @Aloizio points out, this is historically how imaginary numbers snuck into mathematics: as temporary diversions on the way to real solutions of cubic equations. These numbers seemed weird, maybe even scary, but they cancelled in the end so no harm done. Then people started to wonder about a world in which these things didn't always cancel ...<|endoftext|>
-TITLE: List of old books that modern masters recommend
-QUESTION [36 upvotes]: This is a fairly unambiguous question but it hasn't been asked before so I thought I would ask it myself:
-Which old books do the modern masters recommend?
-There are old books where the mathematical fields explored there have been so thoroughly plowed by later mathematicians that it would be extremely naive and foolish to think that anything of value can still be salvaged from them. Those books are no longer read for the purposes of inspiring research in that area, but are curiosities which are mainly consulted for historical purposes. These are not the old books I am talking about.
-The type of old book I refer to is one which still has treasures hidden inside it waiting to be explored. 
Such is for example Gauss's Disquisitiones Arithmeticae, which Manjul Bhargava claimed inspired his work on higher composition laws, for which he won the 2014 Fields medal. -This is why we need the opinion of the modern masters in the field as to which books are worth consulting today, because only a master in any given field (with his experience of the literature etc) can point us to the fruitful works in that field. -If you list a book, please include the quote of the master who recommended it. -Here is my attempt at the first two: -Fields medallist Alan Baker recommends Gauss's Disquisitiones Arithmeticae in his book A Comprehensive Course of Number Theory: "The theory of numbers has a long and distinguished history, and indeed the concepts and problems relating to the field have been instrumental in the foundation of a large part of mathematics. It is very much to be hoped that our exposition will serve to stimulate the reader to delve into the rich literature associated with the subject and thereby to discover some of the deep and beautiful theories that have been created as a result of numerous researches over the centuries. By way of introduction, there is a short account of the Disquisitiones Arithmeticae of Gauss, and, to begin with, the reader can scarcely do better than to consult this famous work." -Andre Weil recommends Euler's Introductio in Analysin Infinitorum for today's Precalculus students, as quoted by J D Blanton in the preface to his translation of that book: "... our students of mathematics would profit much more from a study of Euler's Introductio in analysin infinitorum, rather than of the available modern textbooks." -I feel this question will be found useful by many people who are looking to follow Abel's advice in a sensible and efficient manner, and I hope this question is clear-cut enough that it doesn't get voted for closure. -Edit: Thanks to Bye-World for bringing up the question of who qualifies as an old master. 
My response is that any great dead mathematician should qualify as an old master, so Grothendieck is an old master for instance.
-
-REPLY [7 votes]: I would like to add another mathematics book to Breven Ellefsen's list of books above. It's also a great guide for grad students or undergrad college kids struggling with math in general because it goes over general problem-solving techniques along with examples from a variety of math topics such as geometry, calculus, and proofs both direct and indirect.
-
-G. Polya: How to Solve It
-
-Hermann Weyl, in an article for Mathematical Reviews, had this to say about the book:
-
-"This Elementary textbook on heuristic reasoning, shows anew how keen its author is on questions of method and the formulation of methodological principles. Exposition and illustrative material are of a disarmingly elementary character, but very carefully thought out and selected."<|endoftext|>
-TITLE: Prove that $\frac{a}{b+2c}+\frac{b}{c+2a}+\frac{c}{a+2b} \geq 1$
-QUESTION [5 upvotes]: For three positive real numbers $a,b,$ and $c$, prove that $$\dfrac{a}{b+2c}+\dfrac{b}{c+2a}+\dfrac{c}{a+2b} \geq 1.$$
-
-Attempt
-Rewriting, we obtain $\dfrac{2 a^3+2 a^2 b-3 a^2 c-3 a b^2-3 a b c+2 a c^2+2 b^3+2 b^2 c-3 b c^2+2 c^3}{(a+2b)(2a+c)(b+2c)} \geq 0$. Do I then proceed to use rearrangement, AM-GM, etc. on the numerator?
-
-REPLY [7 votes]: Very similar to Nesbitt's inequality. 
If we set $A=b+2c,B=c+2a,C=a+2b$, we have $ 4A+B-2C = 9c $ and so on, and the original inequality can be written as:
-$$ \frac{4B+C-2A}{9A}+\frac{4C+A-2B}{9B}+\frac{4A+B-2C}{9C} \geq 1 $$
-or:
-$$ \frac{4B+C}{A}+\frac{4C+A}{B}+\frac{4A+B}{C} \geq 15 $$
-which follows from combining $\frac{B}{A}+\frac{A}{B}\geq 2$ (consequence of the AM-GM inequality) with $\frac{B}{A}+\frac{C}{B}+\frac{A}{C}\geq 3$ (consequence of the AM-GM inequality again).<|endoftext|>
-TITLE: Liouville's theorem and the Wronskian
-QUESTION [5 upvotes]: Liouville's theorem states that under the action of the equations of motion, the phase volume is conserved. The equations of motion are the flow ODE's generated by a Hamiltonian field $X$ and the solutions to Hamilton's equations are the integral curves of the system. The effect of a symplectomorphism $\phi_t$ is that it would take $(p^i,q_i) \to (p^j,q_j)$ or $\omega \to \omega'$ and that would be the contribution of the Jacobian of the transformation. If the flow $\phi_t$ changes variables from time, say, $t$ to time $0$, the Jacobian would be
-$$ J(t) = \left| \begin{array}{ccc}
-\frac{\partial p_1(t) }{\partial p_1(0) } & \ldots & \frac{\partial p_1(t) }{\partial q_{n}(0) } \\
-\vdots & \ddots & \vdots \\
-\frac{\partial q_{n}(t) }{\partial p_1(0) } &\ldots & \frac{\partial q_{n}(t) }{\partial q_{n}(0) }
-\end{array} \right| $$
-Now, the derivatives appearing in the Jacobian satisfy a linear system of ordinary differential equations and moreover, viewed as a system of ordinary differential equations, it is easy to see that the Jacobian we need is also the Wronskian $W$ for the system of differential equations. My first question is why is this true? I know that the definition of the Wronskian of two differentiable functions $f,g$ is $W(f,g)=f'g-g'f$. 
Then I read that the solutions of linear equations satisfy the vector ODE
-\begin{equation*}
-\frac{d}{dt}{\bf{y}} = {\bf{M}}(t) {\bf{y}}
-\end{equation*}
-What exactly is this equation?
-Then I read that
-\begin{equation*}
-W(t) = W(0) \exp\left( \int_0^t \text{Tr}\,{\bf{M}}(s) \, ds \right)
-\end{equation*}
-and for our case this trace vanishes, thus $J(0)=J(t)=1$ which shows that the contribution of the Jacobian over volume integrals is trivial.
-Can you help me to understand the above statements? Unfortunately I have lost the reference out of which I read those.
-
-REPLY [3 votes]: Equations of Motion
-What you are describing is Hamilton's view of the evolution of a
-dynamical system. $\mathbf q$ is the vector describing the system's configuration with generalized coordinates (or degrees of freedom) in some abstract configuration space isomorphic to $\mathbb R^{s}$. For instance $s=3n$ particle coordinates for a system of $n$ free particles in Euclidean space. $\mathbf p$ is the vector of generalized momentum attached to the instantaneous time rate of the configuration vector. For example, for particles of mass $m$ in non-relativistic mechanics $\mathbf p= m \frac{d}{dt}\mathbf q$. Classical mechanics postulates that the simultaneous knowledge of both $\mathbf q(t)$ and $\mathbf p(t)$ at time $t$ is required to predict the system's temporal evolution for any time $t'>t$ (causality). So the complete dynamical state of the system is in fact described by the phase $\mathbf y =(\mathbf p, \mathbf q)$ which evolves in an abstract space called the phase space isomorphic to $\mathbb R^{2s}$. 
The fact that the knowledge of this phase is sufficient to predict the evolution demands, from a mathematical point of view, that the configuration $\mathbf q$ evolve according to a system of $s$ ODEs of at most second order in time (known as Lagrange equations) or equivalently that the phase evolves according to a system of $2s$ ODEs of the first order in time known as Hamilton's equations of motion,
-$\frac{d\mathbf p}{dt}=-\frac{\partial H}{\partial \mathbf q}(\mathbf y,t)$
-$\frac{d\mathbf q}{dt}=\frac{\partial H}{\partial \mathbf p}(\mathbf y,t)$
-given some Hamiltonian function $H(\mathbf y,t)$ describing the system. Physically it represents mechanical energy. For non-dissipative systems it does not depend on time explicitly; as a consequence of Liouville's theorem it is a conserved quantity along phase-space curves.
-More generally, the equations of motion can be formulated as a system of first-order ODEs in Cauchy form,
-$\frac{d\mathbf y}{dt} = \mathbf f(\mathbf y,t)$
-with $\mathbf f$ some vector-function over $\mathbb R^{2s}\times\mathbb R$.
-In accordance with the Hamiltonian formalism, $\mathbf f$ appears to be
-$\mathbf f= (-\frac{\partial H}{\partial \mathbf q}, \frac{\partial H}{\partial \mathbf p}) $
-Liouville's theorem
-Now consider the phase flow $\mathbf y_t$, that is, consider the one-parameter ($t\in \mathbb R$) group of transformations $\mathbf y_0\mapsto \mathbf y_t(\mathbf y_0)$ mapping the initial phase at time $t=0$ to the current one
-in phase space. This is another parametrization of the phase using
-$\mathbf y_0$ as curvilinear coordinates. Suppose now that you have hypothetical
-system replicas with initial phase points distributed to fill some volume $\mathcal V_0$ in phase space. Liouville's theorem says that the cloud of points will evolve so as to preserve its density along its curves
-in phase space, like an incompressible fluid flow, keeping the
-filled volume unchanged. 
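As a quick numerical illustration of this volume preservation (an illustrative sketch of my own, using the harmonic oscillator $H=(p^2+q^2)/2$, whose exact phase flow is a rotation of the $(p,q)$ plane):

```python
import math

def flow(p, q, t):
    # Exact phase flow of H = (p^2 + q^2)/2: a rotation in the (p, q) plane.
    return (p * math.cos(t) - q * math.sin(t),
            p * math.sin(t) + q * math.cos(t))

def polygon_area(pts):
    # Shoelace formula for the area of a polygon given by its vertices.
    n = len(pts)
    s = 0.0
    for i in range(n):
        x1, y1 = pts[i]
        x2, y2 = pts[(i + 1) % n]
        s += x1 * y2 - x2 * y1
    return abs(s) / 2.0

# A square "cloud" of initial phases filling volume V0 = 1.
square = [(0.0, 0.0), (1.0, 0.0), (1.0, 1.0), (0.0, 1.0)]
evolved = [flow(p, q, 0.7) for p, q in square]
print(polygon_area(square), polygon_area(evolved))  # both 1.0 up to rounding
```

The rotation carries the unit square of initial conditions to a rotated square of the same area, which is the incompressible-fluid picture in miniature.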
The volume filled at time $t$ is
-$\mathcal V(t)=\int_{\mathcal V_0} \det \frac{\partial \mathbf y}{\partial \mathbf y_0}(\mathbf y_0,t)\ \underline{dy_0} = \int_{\mathcal V_0} J(\mathbf y_0,t)\ \underline{dy_0}$
-Now compute the volume-change time rate at any instant $t$, using the Cauchy form of the equations of motion:
-$\frac{d\mathcal V}{dt}=\int_{\mathcal V(t)} \nabla_{\mathbf y} \cdot \mathbf f(\mathbf y,t)\ \underline{dy}$. Setting this time rate to zero gives Liouville's theorem in local form:
-$\nabla_{\mathbf y} \cdot \mathbf f(\mathbf y,t)=0$.
-Applying this to Hamilton's form,
-$\frac{\partial}{\partial \mathbf q}\cdot\frac{\partial H}{\partial \mathbf p} - \frac{\partial}{\partial \mathbf p}\cdot\frac{\partial H}{\partial \mathbf q}=0$ (1)
-which holds identically by the equality of mixed partial derivatives, whether or not $H(\mathbf y,t)$ depends on time explicitly. Liouville's theorem can be generalized to any physical observable $A(\mathbf y,t)$ depending upon the phase of the system which is conserved along the curves of the phase space,
-$\frac{dA}{dt}= \frac{\partial A}{\partial t} + \frac{\partial A}{\partial \mathbf q}\frac{\partial H}{\partial \mathbf p} - \frac{\partial A}{\partial \mathbf p}\frac{\partial H}{\partial \mathbf q}=0$
-The Wronskian
-Now consider the situation where the Cauchy-form ODE can be linearized around a phase
-$\mathbf y_0$. The vector function $\mathbf f$ is then expressed as a matrix-vector
-product with the phase, and you obtain a $2s\times 2s$ square
-linear system of ODEs:
-$\frac{d\mathbf y}{dt} = {\mathbf f}(\mathbf y_0,t)+M_{\mathbf y_0}(t)(\mathbf y-\mathbf y_0)+\ldots$
-with $M=\frac{\partial \mathbf f}{\partial \mathbf y}$ evaluated at $\mathbf y_0$.
-One can then solve a system of the form $\mathbf y'=M_{\mathbf y_0}(t) \mathbf y$ for the phase translated around $\mathbf y_0$. 
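Before passing to the general statement, here is a quick numerical check of the trace formula quoted in the question (a sketch of my own, assuming a constant $2\times 2$ coefficient matrix $M$, so that $\int_0^t \mathrm{tr}\,M\,ds = t\,\mathrm{tr}\,M$):

```python
import math

# Verify Abel/Liouville's formula W(t) = W(0) * exp(integral of tr M)
# for a constant 2x2 matrix M, by integrating y' = M y with RK4.
M = [[0.3, 1.0], [-2.0, -0.1]]

def mat_vec(A, v):
    return [A[0][0]*v[0] + A[0][1]*v[1], A[1][0]*v[0] + A[1][1]*v[1]]

def rk4_step(v, h):
    # One classical Runge-Kutta step for y' = M y.
    k1 = mat_vec(M, v)
    k2 = mat_vec(M, [v[i] + 0.5*h*k1[i] for i in range(2)])
    k3 = mat_vec(M, [v[i] + 0.5*h*k2[i] for i in range(2)])
    k4 = mat_vec(M, [v[i] + h*k3[i] for i in range(2)])
    return [v[i] + h/6*(k1[i] + 2*k2[i] + 2*k3[i] + k4[i]) for i in range(2)]

def solve(v, t, n=2000):
    h = t / n
    for _ in range(n):
        v = rk4_step(v, h)
    return v

t = 1.5
y1, y2 = solve([1.0, 0.0], t), solve([0.0, 1.0], t)
wronskian = y1[0]*y2[1] - y1[1]*y2[0]          # det of the fundamental matrix
predicted = math.exp((M[0][0] + M[1][1]) * t)  # W(0) = 1, tr M = 0.2
print(wronskian, predicted)  # the two values agree to high accuracy
```

For the Hamiltonian case the trace vanishes, so the determinant, i.e. the Jacobian of the flow, stays equal to $1$.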
-Consider $2s$ phase solutions of this system $(\mathbf y^1, \mathbf y^2,\ldots, \mathbf y^{2s})$. Then the Wronskian
-$W=\det(\mathbf y^1, \mathbf y^2,\ldots, \mathbf y^{2s})$ satisfies
-the first-order ODE,
-$\frac{d}{dt}W= \mathrm{tr}(M_{\mathbf y_0}) W$
-and can be integrated as
-$W(t) = W_0\exp \int _0^t\mathrm{tr}(M_{\mathbf y_0}(s)) ds$
-Hope this helps.<|endoftext|>
-TITLE: Basic understanding of quotients of "things"?
-QUESTION [8 upvotes]: My modern algebra needs some work. Am I right in thinking that $\mathbb{Z}/2\mathbb{Z}$ refers to the two sets $$\{\pm0, \pm2, \pm4, \pm6, \ldots\}$$ and $$\{\pm1, \pm3, \pm5, \pm7, \ldots\}~~?$$ What about $\mathbb{R}/2\mathbb{Z}$ if that makes sense to write? Would that mean $$\{\ldots,[0,1),[2,3),[4,5),\ldots\}$$ and $$\{\ldots,[1,2),[3,4),[5,6),\ldots\}~~?$$ I haven't got round to looking at these "quotients" as I think they're called. It's on my list of things to do. I believe they're to do with equivalence classes? Are there some more "exotic" examples with concrete examples of the sets produced as above? Just asking to see if I'm thinking on the right track. So my original understanding of $\mathbb{R}/2\mathbb{Z}$ was incorrect (as pointed out in answer(s) below).
-EDIT
-Not sure if this is a good way to visualise what's going on with e.g. $\mathbb{R}/2\mathbb{Z}$, but imagine the Cartesian plane with $x$ and $y$ axes. For $\mathbb{R}/2\mathbb{Z}$ I can see the left hand side ($\mathbb{R}$) corresponding to the $y$-axis and the right hand side ($2\mathbb{Z}$) corresponding to the integers on the $x$-axis. If I imagine $2\mathbb{Z}$ on the $x$-axis slicing the plane vertically then what I'm left with is an infinite number of slices each of width $2$. The quotient kind of takes all these slices and stacks them on top of each other so that the only information available to me belongs to $[0,2)$. Positioning along the $x$-axis is lost. 
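To make my picture of $\mathbb{R}/2\mathbb{Z}$ concrete, here is a tiny computational sketch (the helper name is my own, purely illustrative): every real number collapses to a canonical representative in $[0,2)$, and two reals name the same coset exactly when they differ by an element of $2\mathbb{Z}$.

```python
def rep_mod_2Z(x):
    # Canonical representative of the coset x + 2Z, chosen in [0, 2).
    return x % 2.0

# 7.3 and -0.7 differ by 8, an element of 2Z, so they name the same coset.
print(rep_mod_2Z(7.3), rep_mod_2Z(-0.7))  # both 1.3 (up to rounding)
print(rep_mod_2Z(4.0))                    # an element of 2Z collapses to 0.0
```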
- -REPLY [12 votes]: A good way to think about quotients is to pretend that nothing has changed except your concept of equality. -You can think of $\mathbb{Z}/2\mathbb{Z}$ as just the integers (under addition) but multiples of 2 (i.e. elements of $2\mathbb{Z}$) are eaten up like they're zero. So in this quotient world $1=3=-5=41$ etc. and $0=6=104=-58$ etc. -What is $1+1$? Well, $1+1=2$. But in this quotient group $2=0$ so $1+1=0$. Notice that $1=-3=99$ so $1+1 =-3+99=96$ (which also $=0$). Equivalent "representatives" give equivalent answers. -Formally, yes, $\mathbb{Z}/2\mathbb{Z} = \{0+2\mathbb{Z}, 1+2\mathbb{Z}\}$ where $0+2\mathbb{Z}=2\mathbb{Z}=$ even integers and $1+2\mathbb{Z}=$ odd integers. -A more formal version of my previous calculation: $(1+2\mathbb{Z})+(1+2\mathbb{Z}) = (-3+2\mathbb{Z})+(99+2\mathbb{Z}) = (-3+99)+2\mathbb{Z} = 96+2\mathbb{Z}=0+2\mathbb{Z}$. -If we move to $\mathbb{R}/2\mathbb{Z}$, then elements are equivalence classes: $x+2\mathbb{Z} = \{x+2k \;|\; k \in \mathbb{Z}\} = \{\dots,x-4,x-2,x,x+2,x+4,\dots\}$. Addition works exactly the same as it does in $\mathbb{R}$ (except we have enlarged what "equals" means). So $((3+\sqrt{2})+2\mathbb{Z})+((-10+\pi)+2\mathbb{Z}) = (7+\sqrt{2}+\pi)+2\mathbb{Z}$. Of course, here, $7+\sqrt{2}+\pi$ could be replaced by something like $-3+\sqrt{2}+\pi$. -In fact, every $x+2\mathbb{Z}$ is equal to $x'+2\mathbb{Z}$ where $x' \in [0,2)$ (add an appropriate even integer to $x$ to get within the interval $[0,2)$). So as a set $\mathbb{R}/2\mathbb{Z}$ is essentially $[0,2)$ (each equivalence class in the quotient can be uniquely represented by a real number in $[0,2)$). -Alternatively, think of this group like $[0,2]$ with $0=2$. Take the interval $[0,2]$ and glue the ends together. It's a circle group. Basically $\mathbb{R}/2\mathbb{Z}$ as a group is just like adding angles (but $2=0$ not $2\pi=0$). :)<|endoftext|> -TITLE: What was the genesis of Hua's identity? 
-QUESTION [7 upvotes]: Many resources I have read prove Hua's identity more-or-less mechanically. I have seen there is more than one raison d'être for Hua's identity: e.g. its connection to the fundamental theorem of projective geometry, and also Jordan algebra theory. My impression, though, is that these two things are mostly application rather than inspiration. (I could be wrong, though.) - -I would very much like to know how Hua's identity arose, hopefully with motivation/intuition as to how it was discovered. - -I have intended to get ahold of the(?) original proof by Hua in hopes that it contained such information, but so far I haven't managed to lay my hands on the original citation(s). This would be a much-appreciated bonus to any solution. -If it turns out there is a good retroactive motivation/intuition for deriving the identity that beats the original, of course that would be welcome as well. - -Happily I've seen the original paper now (thanks Martin). Surprisingly, the identity cited by all authors since the paper is different-looking from the original. I will have to compare the two versions and see if this version gives any more insight. No direct intuition about its origins are apparent, and indeed it is called "nearly trivial" although it seems a bit mystifying, IMO. - -REPLY [5 votes]: Well, I can only guess, but if I were Hua this is what I would have thought to derive the identity and say it is "almost trivial": -1) We want to generalize the theorem of Cartan and Dieudonné (now called Cartan-Brauer-Hua), so we want to express an element $a$ as a combination of sums, products and inverses of conjugates of $a,b$, for any other $b$ (such that $ab\neq ba$). 
-2) Immediately we think about trying our luck with the well-known formula for multiplicative commutators in division rings, since it can readily give us $a$ as a factor so that we can solve it "as a fraction", and it is close to conjugations: If we denote $(a,b):=a^{-1}b^{-1}ab$ then -$$a(a,b)=(a-1)(a-1,b)+1.$$ -Therefore $$a((a,b)-(a-1,b))=1-(a-1,b).$$ -3) If we expand we see we don't yet have conjugates due to the $b$ factors at the right of $(a,b)=a^{-1}b^{-1}ab$, etc.; but since it is the same for all terms, we add a right $b^{-1}$ to solve the problem, and get -$$a((a,b)-(a-1,b))b^{-1}=(1-(a-1,b))b^{-1}$$ -$$a(a^{-1}b^{-1}a-(a-1)^{-1}b^{-1}(a-1))=b^{-1}-(a-1)^{-1}b^{-1}(a-1)$$ -and now Hua's identity follows by solving for $a$.<|endoftext|> -TITLE: Wikipedia's explanation of the lambda-calculus form of Curry's paradox makes no sense -QUESTION [6 upvotes]: Wikipedia gives multiple explanations of Curry's paradox, one of which is expressed via lambda calculus. -However, the explanation doesn't look like any lambda calculus I've ever seen, and there's an existing discussion-page section that would appear to indicate I'm not alone. -The proof is given as follows: - -Consider a function $r$ defined as -$$r = ( λx. ((x x) → y) )$$ -Then $(r r)$ $\beta$-reduces to -$$(r r) → y$$ -If $(r r)$ is true then its reduct $(r r) → y$ is also true, and, by modus ponens, so is $y$. If $(r r)$ is false then $(r r) → y$ is true by the principle of explosion, which is a contradiction. So $y$ is true and as $y$ can be any statement, any statement may be proved true. -$(r r)$ is a non-terminating computation. Considered as logic, $(r r)$ is an expression for a value that does not exist. - -The last sentence appears to be irrelevant (or perhaps it's intended as a corollary), so I'll ignore it. -The primary question on the talk page is what $→$ means in a lambda expression.
I hypothesized that it might be supposed to indicate that $(x → y)$ is an expression that evaluates to $y$ if $x$ is "true" for some meaning of "truth" in lambda calculus (perhaps if $x$ is a Church-numeral greater than 0) and to something else otherwise. But there's no "else" expression specified, so that doesn't seem right, unless $\beta$-reduction on $(x → y)$ is non-terminating for $x=\bar 0$. -The proof then claims that if $(r r)$ is false (again, it's not clear what "true" or "false" means in this context), then the principle of explosion may be invoked, but it's not clear why $(r r)$ being false would imply a contradiction, so I don't know why the principle of explosion can be invoked here. -Is this demonstration of the paradox correct but simply poorly explained, or is it incorrect? - -REPLY [4 votes]: Long Comment -Here is an outline of Curry's original version of the paradox, using combinatory logic [hoping that someone smarter than me can derive the correct "$\lambda$-version"]. -See Haskell Curry & Robert Feys & William Craig, Combinatory Logic. Volume I (1958), Ch.8A. THE RUSSELL PARADOX, page 258-59 or Katalin Bimbò, Combinatory Logic : Pure Applied and Typed (2012), Ch.8.1 Illative combinatory logic, page 221-on. -The system is called illative combinatory logic, which extends pure combinatory logic with the inclusion of new constants that, of course, expand the set of CL-terms. -The new symbol [Bimbò, page 221-22]: - -$\mathsf P$ is the symbol for $\to$ that is, for implication. [...] $\mathsf Pxx$ is thought of as $x \to x$ , and $\mathsf P(\mathsf Pxy)(\mathsf P(\mathsf Pyz)(\mathsf Pxz))$ is $(x \to y) \to ((y \to z) \to (x \to z))$ . -This formula is a theorem of classical logic (with formulas in place for $x, y$ - and $z$). -DEFINITION 8.1.2. Let the set of constants be expanded by $\mathsf P$. Further, let the set of CL-terms [be built from the constants in] $\{ \mathsf S, \mathsf K, \mathsf P \}$.
The set of theorems comprises equalities (in the extended language) provable with the combinatory axioms restricted to $\mathsf S$ and $\mathsf K$, and assertions obtainable from the assertions (A1)–(A3) by rules (R1)–(R2). - -(A1) $\mathsf PM(\mathsf PNM)$ [compare with the propositional axiom : $M \to (N \to M)$] -(A2) $\mathsf P(\mathsf PM(\mathsf PNR))(\mathsf P(\mathsf PMN)(\mathsf PMR))$ [compare with : $(M \to (N \to R)) \to ((M \to N) \to (M \to R))$] -(A3) $\mathsf P(\mathsf P(\mathsf PMN)M)M$ [Peirce's axiom : $((M \to N) \to M) \to M$] -(R1) $\mathsf PMN$ and $M$ imply $N$ [i.e. detachment] -(R2) $M$ and $M = N$ imply $N$. - -This system is called the minimal illative combinatory logic. The axioms together with (R1) are equivalent — with $M$ and $N $thought of as formulas rather than CL-terms — to the implicational fragment of classical logic. - -The "implication" is characterized by the following theorems [Bimbò, page 223]: - -$\mathsf PMM$ : self-implication [i.e. $M \to M$] -$\mathsf P(\mathsf PM(\mathsf PMN))(\mathsf PMN)$ : contraction [i.e. $(M \to (M \to N)) \to (M \to N)$]. - -In addition, we need [Bimbò, page 12 and page 47] - -the fixed point combinator, often denoted by $\mathsf Y$. The axiom for $\mathsf Y$ is $\mathsf Yx \rhd x(\mathsf Yx)$. - -Now for the proof of Curry’s original paradox [Bimbò, page 224]: - -Let the meta-term $M$ stand for $\mathsf C \mathsf PN$ [...] $\mathsf C \mathsf PNx \rhd_w \mathsf PxN$, and so $\mathsf C \mathsf PN(\mathsf Y(\mathsf C \mathsf PN)) \rhd_w \mathsf P(\mathsf Y(\mathsf C \mathsf PN))N$. Of course, $\mathsf Y(\mathsf C \mathsf PN) \rhd_w \mathsf C \mathsf PN(\mathsf Y(\mathsf C \mathsf PN))$. That is, using $M$, we have that $\mathsf YM = \mathsf P(\mathsf YM)N$. Again we construct a proof (with some equations restated and inserted) to show that $N$ is a theorem. 
- -$\mathsf P(\mathsf P(\mathsf YM)(\mathsf P(\mathsf YM)N))(\mathsf P(\mathsf YM)N)$ [contraction] -$\mathsf YM = \mathsf P(\mathsf YM)N$ [provable equation above] -$\mathsf P(\mathsf P(\mathsf YM)(\mathsf YM))(\mathsf P(\mathsf YM)N)$ [by (R2) from 1. and 2.] -$\mathsf P(\mathsf YM)(\mathsf YM)$ [self-implication] -$\mathsf P(\mathsf YM)N$ [by (R1) from 3. and 4.] -$\mathsf YM$ [by (R2) from 5. and 2.] - - - -$N$ [by (R1) from 5. and 6.] - - - - -Curry's original exposition [Curry, Feys & Craig, page 4] starts from : - -the paradox of Russell. This may be formulated as follows: Let $F(f)$ be the property of properties $f$ defined by the equation - -(1) $F(f) = \lnot f(f)$, - -where "$\lnot$" is the symbol for negation. Then, on substituting $F$ for $f$, we have : - -(2) $F(F) = \lnot F(F)$. - -If, now, we say that $F(F)$ is a proposition, where a proposition is something which is either true or false, then we have a contradiction at once. But it is an essential step in this argument that $F(F)$ should be a proposition. [...] - The usual explanations of this paradox are to the effect that $F$, or at - any rate $F(F)$, is "meaningless". Thus, in the Principia Mathematica the - formation of $f(f)$ is excluded by the theory of types. - -Following this analysis, Curry "manufactured" the paradoxical combinator $\mathsf Y$ [CFC, page 177] : - -Let $Neg$ represent negation. Then the definition (1) becomes - -$Ff=Neg(ff) = \mathsf B Neg ff = \mathsf W(\mathsf B Neg)f$. - -Thus the definition (1) could be made in the form - -(3) $F \equiv \mathsf W(\mathsf BNeg)$. - -This $F$ really has the paradoxical property. For we have [that] $FF$ reduces to its own negation. - -The definition (3) can be generalized to : - - -$\mathsf Y = \mathsf W \mathsf S(\mathsf B \mathsf W \mathsf B)$. - - -We have seen that axioms (A1)-(A3) and the two rules define the implicational fragment of classical logic.
We have seen also that if $Neg$ stands for negation, then $\mathsf YNeg$ is equal to its own negative. - -[CFC, page 258] it is clear that we cannot ascribe to $\mathsf Y Neg$ properties characteristic of propositions; and that we can avoid the paradox by formulating the category of propositions in such a way that $\mathsf YN$ is excluded from it. - - - -Note. Here is a list of combinators, given by their axioms [Bimbò, page 6]: - -$\mathsf I x \rhd x$ : identity combinator -$\mathsf M x \rhd xx$ : duplicator -$\mathsf K xy \rhd x$ : cancellator -$\mathsf W xy \rhd xyy$ : duplicator -$\mathsf B xyz \rhd x(yz)$ : associator -$\mathsf C xyz \rhd xzy$ : permutator -$\mathsf S xyz \rhd xz(yz)$ - -The other basic definitions are that of one-step reduction and of weak reduction (denoted by $\rhd_w$), i.e. the reflexive transitive closure of the -one-step reduction relation.<|endoftext|> -TITLE: What is the integral of log(z) over the unit circle? -QUESTION [6 upvotes]: I tried three times, but in the end I am concluding that it equals infinity, after parametrizing, making a substitution and integrating directly (since the Residue Theorem is not applicable, because the unit circle encloses an infinite amount of non-isolated singularities.) -Any ideas are welcome. -Thanks, - -REPLY [11 votes]: If $\log z$ is interpreted as principal value -$${\rm Log}z:=\log|z|+i{\rm Arg} z\ ,$$ -where ${\rm Arg}$ denotes the polar angle in the interval $\ ]{-\pi},\pi[\ $, then the integral in question is well defined, and comes out to $-2\pi i$. (This is the case $\alpha:=-\pi$ in the following computations). -But in reality the logarithm $\log z$ of a $z\in{\mathbb C}^*$ is, as we all know, not a complex number, but only an equivalence class modulo $2\pi i$. Of course it could be that due to miraculous cancellations the integral in question has a unique value nevertheless. 
For this to be the case we should expect that for any $\alpha\in{\mathbb R}$ and any choice of the branch of the $\log$ along -$$\gamma:\quad t\mapsto z(t):=e^{it}\qquad(\alpha\leq t\leq\alpha+2\pi)$$ we obtain the same value of the integral. This boils down to computing -$$\int_\alpha^{\alpha+2\pi}(it+2k\pi i)\>ie^{it}\>dt=-\int_\alpha^{\alpha+2\pi}t\>e^{it}\>dt=2\pi i\>e^{i\alpha}\ .$$ -During the computation several things have cancelled, but the factor $e^{i\alpha}$ remains. This shows that the integral in question cannot be assigned a definite value without making some arbitrary choices.<|endoftext|> -TITLE: Extract imaginary part of $\text{Li}_3\left(\frac{2}{3}-i \frac{2\sqrt{2}}{3}\right)$ in closed form -QUESTION [5 upvotes]: We know that polylogarithms of complex argument sometimes have simple real and imaginary parts, e.g. -$\mathrm{Re}[\text{Li}_2(i)]=-\frac{\pi^2}{48}$ -Is there a closed form (free of polylogs and imaginary numbers) for the imaginary part of -$\text{Li}_3\left(\frac{2}{3}-i \frac{2\sqrt{2}}{3}\right)$ - -REPLY [2 votes]: Inspired in turn by user 153012's answer which is similar to my answer in this post, then more generally, for any real $k>1$, -$$\Im\left[\operatorname{Li}_3\left(\frac2k\,\big(1\pm\sqrt{1-k}\big)\right)\right] =\color{red}\mp\frac13\arcsin^3\left(\frac1{\sqrt k}\right)\pm\frac2{\sqrt k}\;{_4F_3}\left(\begin{array}c\tfrac12,\tfrac12,\tfrac12,\tfrac12\\ \tfrac32,\tfrac32,\tfrac32\end{array}\middle|\;\frac1k\right)$$ -where the OP's case was just $k=3$. -Edit: Courtesy of Oussama Boussif in his answer here, there is also a broad identity for $\rm{Li}_2(x)$ but for the real part, -$$\Re\left[\rm{Li}_{2}\left(\frac{1}{2}+iq\right)\right]=\frac{{\pi}^{2}}{12}-\frac{1}{8}{\ln{\left(\frac{1+4q^2}{4}\right)}}^{2}-\frac{{\arctan{(2q)}}^{2}}{2} -$$<|endoftext|> -TITLE: A curious pattern on primes congruent to $1$ mod $4$? 
-QUESTION [8 upvotes]: It is known that every prime $p$ that satisfies the title congruence can be expressed in the form $a^{2} + b^{2}$ for some integers $a,b$, and unique factorisation in $\mathbb{Z}[i]$ ensures exactly one such representation for each $p \equiv 1 \mod 4$. -It seems at least one of $a-b, a+b$ is always a prime? Is there any mathematical explanation for this? - -REPLY [5 votes]: Let $a+b=35$ and $a-b=9$. Neither is prime. -Then $a=22$ and $b=13$, and the sum $(22)^2+(13)^2$ is the prime $653$. -Remark: For nice examples of apparent patterns that disappear when we look at larger numbers, please see Richard Guy's The Strong Law of Small Numbers.<|endoftext|> -TITLE: What is the integral of 1/(z-i) over the unit circle? -QUESTION [6 upvotes]: At present there is a simple pole on the closed contour, so the Residue Theorem appears to be inapplicable. -But I want to claim that we can enlarge this circle to make sure that it encloses the pole, and the integral value should not change, primarily because of Cauchy's Theorem. -So the integral is simply $2\pi i$. (The residue at $z=i$ is 1.) -What do you think? -Thanks, - -REPLY [4 votes]: The integral does not converge. To show this in the most informative way, you should recall the definition of a contour integral: Let $\gamma(t)$ be a contour defined over $\mathbb{C}$ with $t\in I$. Then -\begin{align} -\oint_{\gamma}{f(z)dz}:=\int_I{f(\gamma(t))\gamma'(t)dt}. -\end{align} -So applying this to the integral in question (I've missed out a bit of algebra), we get -\begin{align} -\oint_{S^1}\frac{1}{z-\imath}dz&=\int_0^{2\pi}{\frac{\imath e^{\imath t}}{e^{\imath t}-\imath}}dt\\ -&=\int_0^{2\pi}\frac{-\cos(t)+\imath(1-\sin(t))}{2-2\sin(t)}dt. -\end{align} -Now, one can compute that -\begin{align} -\int\frac{-\cos(t)+\imath(1-\sin(t))}{2-2\sin(t)}dt=\frac{1}{2}\log(1-\sin(t))+\frac{\imath t}{2}+z_0.
-\end{align} -This antiderivative blows up as $t\to\pi/2$, where $1-\sin(t)\to 0$, so the improper integral has no limit; formally, it diverges.<|endoftext|> -TITLE: How do we compare the size of numbers that are around the size of Graham's number or larger? -QUESTION [6 upvotes]: When numbers get as large as Graham's number, or somewhere around the point where we can't write them as numerical values, how do we compare them? -For example: -$$G>S^{S^{S^{\dots}}}$$ -Where $G$ is Graham's number and $S^{S^{S^{\dots}}}$ is $S$ raised to itself $S$ times and $S$ is Skewes number. -It appears obvious (I think) that Graham's number is indeed larger, but how does one go about proving that if both numbers are "so large" that they become hard to compare? -More generally, how do we compare numbers of this general size? -As a much harder problem than the above, imagine a function $G(x,y)$ where $G(64,3)=$ Graham's number. The function $G(x,y)$ is as follows: -$$G(x,y)=y\uparrow^{(G(x-1,y))}y$$ -Where $G(0,y)$ is given. -I ask to compare $G(60,S)$ and $G(64,3)$ - -REPLY [4 votes]: Basically you want to construct a chain of inequalities that links the smaller expression to the larger expression. Induction is often helpful in these cases. -A useful theorem for Knuth arrows is $(a \uparrow^n b) \uparrow^n c < a \uparrow^n (b+c)$, proven in this paper. It is also proven that $a \uparrow^n c$ is monotonic in $a,n$, and $c$ when $a,c \ge 3$, which is useful as well.
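For tiny arguments, the up-arrow recursion being manipulated in these inequalities can be evaluated directly. A minimal Python sketch (the helper name `up` is mine, not standard notation):

```python
def up(a, n, b):
    """Knuth's up-arrow a ↑^n b, via the defining recursion."""
    if n == 1:
        return a ** b              # a ↑ b is ordinary exponentiation
    if b == 1:
        return a                   # a ↑^n 1 = a for every n
    return up(a, n - 1, up(a, n, b - 1))

print(up(3, 2, 3))   # 3↑↑3 = 3^(3^3) = 7625597484987
print(up(2, 3, 3))   # 2↑↑↑3 = 2↑↑4 = 65536
```

Of course, anything like $3\uparrow\uparrow\uparrow 3$, let alone $G_1$, is far beyond direct evaluation; the point is only to make the recursion concrete.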
-For example, one can easily see that $S < 3 \uparrow\uparrow 6$, so -$$S^{S^{S^\cdots}} = S \uparrow \uparrow S < (3\uparrow\uparrow 6)\uparrow\uparrow(3 \uparrow\uparrow 6) < 3\uparrow\uparrow (6 + 3\uparrow\uparrow 6) < 3 \uparrow\uparrow (3 \uparrow\uparrow 7) < 3 \uparrow\uparrow (3\uparrow\uparrow 3^{3^3}) = 3 \uparrow\uparrow (3\uparrow\uparrow\uparrow 3) = 3\uparrow\uparrow\uparrow 4 < 3\uparrow\uparrow\uparrow (3 \uparrow\uparrow\uparrow 3) = 3\uparrow\uparrow\uparrow\uparrow 3 = G_1$$ -To address your harder question, first we need to know what $G(0,y)$ is. Since we need $G(0,3) =4$ so that $G(64,3)$ is Graham's number, I will assume that $G(0,y)=4$. -Theorem: $G(n,S) < G(n+1,3)$ -We will prove this by induction. First, observe that $G(0,S) = 4 < 3\uparrow\uparrow\uparrow\uparrow 3 = G(1,3)$. -Observe that for $n \ge 3$, -$$S \uparrow^n S < (3\uparrow\uparrow 6)\uparrow^n (3\uparrow\uparrow 6) < (3\uparrow^n 6)\uparrow^n (3\uparrow\uparrow 6) < 3\uparrow^n (6+3\uparrow\uparrow 6) < 3\uparrow^n (3\uparrow\uparrow\uparrow 3) \le 3\uparrow^n (3\uparrow^n 3) = 3\uparrow^{n+1} 3$$ -So if we have $G(n,S) < G(n+1,3)$, then $G(n,S)+1 \le G(n+1,3)$, so -$$G(n+1,S) = S \uparrow^{G(n,S)} S < 3 \uparrow^{G(n,S)+1} 3 \le 3 \uparrow^{G(n+1,3)} 3 = G(n+2,3)$$ -and the theorem follows by induction. -So in particular, $G(60,S) < G(61,3) < G(64,3)$.<|endoftext|> -TITLE: How to find the general solution to $\int f^{-1}(x){\rm d}x$ in terms of $\int f(x){\rm d}x$ -QUESTION [10 upvotes]: I am trying to find a general proof of $\int f^{-1}(x)\,{\rm d}x$ in terms of $\int f(x)\,{\rm d}x$. The first step that I took was to piece apart what it means for a function to have an inverse. So I know the way an inverse function works, but I don’t know how it works in integration like in proving this. I am very interested in seeing the solution to this because knowing what this is would help to solve integrals where you know what the integral for the inverse function is. 
- -REPLY [16 votes]: Note that -$$\int f^{-1}(x)\,dx=\int 1\cdot f^{-1}(x)\,dx = x\cdot f^{-1}(x)-\int x\cdot (f^{-1})’(x)\,dx=x\cdot f^{-1}(x)-\int \frac{x}{f’(f^{-1}(x))}\,dx$$ - -Now, we calculate $\int \frac{x}{f’(f^{-1}(x))}\,dx$: -Let $F=\int f(x)\,dx$, and make the substitution $u=f^{-1}(x)\implies x=f(u)\implies dx=f’(u)\,du$. Therefore, we have transformed our integral to: -$$\int \frac{f(u)}{f’(u)}f’(u)\,du=F(u)=F\left(f^{-1}(x)\right)$$ - -$$\therefore \int f^{-1}(x)\,dx = x\cdot f^{-1}(x)-F\left(f^{-1}(x)\right)$$<|endoftext|> -TITLE: Deriving the coefficients during Fourier analysis -QUESTION [5 upvotes]: I'm self-studying Fourier transforms, but I'm stuck on a basic point about integration during the derivation of an expression for the coefficients of the Fourier transform. -For a function of period $1$, the function can be written -$f(t) = \sum_{k=-n}^{n} C_k e^{2 \pi i k t}$ -Now in order to obtain an expression for a specific $C_k$ (which I shall call $C_m$), I can do: -$f(t) = \sum_{k=-n,k \ne m}^{k=n} C_k e^{2 \pi i k t} + C_m e^{2 \pi i m t}$ -$C_m e^{2 \pi i m t} = f(t) - \sum_{k=-n,k \ne m}^{k=n} C_k e^{2 \pi i k t}$ -$C_m = e^{-2 \pi imt} f(t) - \sum_{k=-n,k \ne m}^{k=n} C_k e^{2 \pi i (k-m) t}$ -Integrating over the full period from 0 to 1, -$C_m = \int_0^1 e^{-2 \pi imt} f(t) dt - \sum_{k=-n,k \ne m}^{k=n} C_k \int_0^1 e^{2 \pi i (k-m) t} dt$ -which I understand. -During evaluation of $\int_0^1 e^{2 \pi i (k-m) t} dt$ above, I get -$[\frac{1}{2 \pi i (k-m)} e^{2 \pi i (k-m) t}]_{0}^{1} = \frac{1}{2 \pi i (k-m)} (e^{2 \pi i (k-m)} - e^0)$ -Because we are "integrating over the whole period", the $e^{2 \pi i (k-m)}$ term evaluates to $1$. Can anyone explain what this means?
-I tried a simple test-case with $n=2$, $m=1$, $A = 2 \pi i$: -$C_1 = \int_0^1 e^{-1At} f(t) dt - [\frac{C_{-2}}{-3A} (e^{-3A}-1) + \frac{C_{-1}}{-2A} (e^{-2A}-1) + \frac{C_{0}}{-1A} (e^{-1A}-1) + \frac{C_{2}}{1A} (e^{1A}-1)]$ -I was hoping some terms would cancel, but I guess I'm thinking about this in the wrong way somehow. -Maybe this is a basic question, but any help is appreciated! - -REPLY [2 votes]: Some of those terms do cancel. The principal fact which you've not applied to your equation is that $e^{2\pi i}=1$. Written with the terms of your last equation, one has $e^A=1$. Thus, for instance, we have $$e^{-3A}=(e^A)^{-3}=1^{-3}=1.$$ -That means that all of the terms like $e^{-3A}-1$ equal $0$ and so you can cancel that whole part of the expression. -You could also draw this simplification out to where you integrate -$$\int_{0}^1e^{2\pi i(k-m)t}\,dt=\frac{1}{2\pi i(k-m)}(e^{2\pi i(k-m)}-e^0)=\frac{1}{2\pi i(k-m)}(1-1)=0.$$ The intuitive way to state this (which is alluded to in the phrasing "integrating over the whole period") is that the function $e^{2\pi i(k-m)x}$ traces out a circle $(k-m)$ times as $x$ goes from $0$ to $1$. The integral essentially takes the average value here - and the average value taken over the circle has to be the center of the circle, which is $0$. (One may argue that the "average" has to be fixed by the symmetries of a circle, and the only point satisfying that is the center.)<|endoftext|> -TITLE: Any other Caltrops? -QUESTION [13 upvotes]: This question has been edited. -The regular tetrahedron is a caltrop. When it lands on a face, one vertex points straight up, ready to jab the foot of anyone stepping on it. -Define a caltrop as a polyhedron with the same number of vertices and faces such that each vertex is at distance 1 from most of the corners of the opposing face. Are there any other caltrops besides the tetrahedron? -Use these 5 values in the list of vertices that follow.
-$\text{C0}=0.056285130364630088035020091792834$ -$\text{C1}=0.180220007048851841582537343751297$ -$\text{C2}=0.309443563867344767680227839435148$ -$\text{C3}=0.348675924605445651138054435209609$ -$\text{C4}=0.466391197450500551933366795454853$ - -verts =( - (C1,C1,C4),(C1,-C1,-C4),(-C1,-C1,C4),(-C1,C1,-C4),(C4,C1,C1),(C4,-C1,-C1), - (-C4,-C1,C1),(-C4,C1,-C1),(C1,C4,C1),(C1,-C4,-C1),(-C1,-C4,C1),(-C1,C4,-C1), -(C3,-C0,C3),(C3,C0,-C3),(-C3,C0,C3),(-C3,-C0,-C3),(C3,-C3,C0),(C3,C3,-C0), - (-C3,C3,C0),(-C3,-C3,-C0),(C0,-C3,C3),(C0,C3,-C3),(-C0,C3,C3),(-C0,-C3,-C3), -(C2,C2,C2),(C2,-C2,-C2),(-C2,-C2,C2),(-C2,C2,-C2)); - -The resulting polyhedron has the following appearance, arranged so that each of the three types of faces is on the bottom: - -Here's a transparent picture showing the 48 unit diagonals. - -Diagonals $(13-16, 14-15, 17-19, 18-20, 21-22, 23-24)$ have a length of about $~0.98620444$. I'm not sure of the maximum length, and don't have exact values for coordinates. -That's one more caltrop. Are there any others? -Rahul pointed out that some faces of my initial caltrop weren't exactly planar. This new version fixes that error, but I had to sacrifice 6 unit diagonals. A stronger caltrop would have each vertex at distance 1 from all corners of an opposing face, instead of most corners. - -REPLY [2 votes]: There is a caltrop on 76 points. - -Points 1, 13, 25, 29, 41, and 53 are as follows: -{0.0833`, 0.0833`, 0.4930122817942774`} -{0.32530527130128584`, -0.20709494964790603`, 0.32530527130128584`} -{0.28875291001058745`, 0.28875291001058745`, 0.28875291001058745`} -{-0.2142`, 0.40369721678726284`, -0.2142`} -{-0.07272969962634213`, 0.35355339059327373`, -0.35355339059327373`} -{0.07587339432355446`, 0.44185`, -0.23402687345687453`} - -This caltrop generates a solid of constant width. 
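The 76-point coordinate list above is abbreviated, but the 28-vertex caltrop in the question is fully specified, so its unit-diagonal count can be checked numerically. A minimal Python sketch (the question's constants truncated to double precision; `dist` and `unit` are my own names):

```python
from itertools import combinations

# The question's constants C0..C4, truncated to double precision.
C0 = 0.05628513036463009
C1 = 0.18022000704885184
C2 = 0.30944356386734477
C3 = 0.34867592460544565
C4 = 0.46639119745050055

verts = [
    (C1, C1, C4), (C1, -C1, -C4), (-C1, -C1, C4), (-C1, C1, -C4),
    (C4, C1, C1), (C4, -C1, -C1), (-C4, -C1, C1), (-C4, C1, -C1),
    (C1, C4, C1), (C1, -C4, -C1), (-C1, -C4, C1), (-C1, C4, -C1),
    (C3, -C0, C3), (C3, C0, -C3), (-C3, C0, C3), (-C3, -C0, -C3),
    (C3, -C3, C0), (C3, C3, -C0), (-C3, C3, C0), (-C3, -C3, -C0),
    (C0, -C3, C3), (C0, C3, -C3), (-C0, C3, C3), (-C0, -C3, -C3),
    (C2, C2, C2), (C2, -C2, -C2), (-C2, -C2, C2), (-C2, C2, -C2),
]

def dist(p, q):
    return sum((a - b) ** 2 for a, b in zip(p, q)) ** 0.5

# Vertex pairs (0-indexed) at distance 1, up to floating-point error.
unit = [(i, j) for i, j in combinations(range(28), 2)
        if abs(dist(verts[i], verts[j]) - 1.0) < 1e-7]
print(len(unit))                    # 48 unit diagonals, as in the question
print(dist(verts[12], verts[15]))   # diagonal 13-16: 2*sqrt(2)*C3, about 0.98620
```

This reproduces the 48 unit diagonals and the near-unit length of about $0.98620444$ for the six diagonals $13$-$16$, $14$-$15$, etc.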
The polyhedron has 150 unit diagonals, is self-dual, and has tetrahedral symmetry.<|endoftext|> -TITLE: Poincare's lemma for 1-form -QUESTION [6 upvotes]: Let $\omega=f(x,y,z)dx+g(x,y,z)dy+h(x,y,z)dz$ be a differentiable 1-form in $\mathbb{R}^{3}$ such that $d\omega=0$. Define $\hat{f}:\mathbb{R}^{3}\to\mathbb{R}$ by -$$\hat{f}(x,y,z)=\int_{0}^{1}{(f(tx,ty,tz)x+g(tx,ty,tz)y+h(tx,ty,tz)z)dt}.$$ -Show that $d\hat{f}=\omega$. -My approach: If $d\omega=0$, then -$$\left(\dfrac{\partial g}{\partial x}-\dfrac{\partial f}{\partial y}\right)dx\wedge dy+\left(\dfrac{\partial h}{\partial x}-\dfrac{\partial f}{\partial z}\right)dx\wedge dz+\left(\dfrac{\partial h}{\partial y}-\dfrac{\partial g}{\partial z}\right)dy\wedge dz=0,$$ -therefore $\dfrac{\partial g}{\partial x}=\dfrac{\partial f}{\partial y}, \dfrac{\partial h}{\partial x}=\dfrac{\partial f}{\partial z},\dfrac{\partial h}{\partial y}=\dfrac{\partial g}{\partial z}$. -On the other hand, note that -$$f(x,y,z)=\int_{0}^{1}{\dfrac{d}{dt}(f(tx,ty,tz)t)dt}=\int_{0}^{1}{f(tx,ty,tz)dt}+\int_{0}^{1}{t\dfrac{d}{dt}(f(tx,ty,tz))dt}$$ -where -$$\dfrac{d}{dt}(f(tx,ty,tz))=x\dfrac{\partial f}{\partial x}(tx,ty,tz)+y\dfrac{\partial f}{\partial y}(tx,ty,tz)+z\dfrac{\partial f}{\partial z}(tx,ty,tz).$$ -But now I have trouble with the differential of $\hat{f}$; I think the above equations should let us prove $d\hat{f}=\omega$.
- -REPLY [5 votes]: Note that $$d\hat{f} = \frac{\partial\hat{f}}{\partial x}dx + \frac{\partial\hat{f}}{\partial y}dy + \frac{\partial\hat{f}}{\partial z}dz.$$ -First we have -\begin{align*} -\frac{\partial\hat{f}}{\partial x} &= \frac{\partial}{\partial x}\int_{0}^{1}(f(tx,ty,tz)x+g(tx,ty,tz)y+h(tx,ty,tz)z)dt\\ -&= \int_{0}^{1}\frac{\partial}{\partial x}(f(tx,ty,tz)x+g(tx,ty,tz)y+h(tx,ty,tz)z)dt\\ -&= \int_0^1\left(\frac{\partial f}{\partial x}(tx, ty, tz)tx + f(tx, ty, tz) + \frac{\partial g}{\partial x}(tx, ty, tz)ty + \frac{\partial h}{\partial x}(tx, ty, tz)tz\right)dt\\ -&= \int_0^1\left(\frac{\partial f}{\partial x}(tx, ty, tz)tx + f(tx, ty, tz) + \frac{\partial f}{\partial y}(tx, ty, tz)ty + \frac{\partial f}{\partial z}(tx, ty, tz)tz\right)dt\\ -&= \int_0^1\left(f(tx, ty, tz) + t\left(\frac{\partial f}{\partial x}(tx, ty, tz)x + \frac{\partial f}{\partial y}(tx, ty, tz)y + \frac{\partial f}{\partial z}(tx, ty, tz)z\right)\right)dt\\ -&= \int_0^1\left(f(tx, ty, tz) + t\frac{d}{dt}(f(tx, ty, tz))\right)dt\\ -&= \int_0^1\frac{d}{dt}\left(f(tx, ty, tz)t\right)dt\\ -&= [f(tx, ty, tz)t]_0^1\\ -&= f(x, y, z). -\end{align*} -A similar calculation shows $\dfrac{\partial\hat{f}}{\partial y} = g$ and $\dfrac{\partial\hat{f}}{\partial z} = h$, so $d\hat{f} = \omega$.<|endoftext|> -TITLE: Limit $(n - a_n)$ of sequence $a_{n+1} = \sqrt{n^2 - a_n}$ -QUESTION [5 upvotes]: Consider the sequence $\{a_n\}_{n=1}^{\infty}$ defined recursively by - $$a_{n+1} = \sqrt{n^2 - a_n}$$ with $a_1 = 1$. Compute $$\lim_{n\to\infty} (n-a_n)$$ - -I am having trouble with this. I am not even sure how to show the limit exists. I think if we know the limit exists, it is just algebra, but I'm not sure. - -REPLY [3 votes]: Here is a briefer answer which illustrates that the difficulty of the problem lies in bounding the growth of $a_n$. -All we need is $a_n \sim n$, in the sense that $$\lim_{n\to\infty} \frac{a_n}{n} = 1.$$ -This would follow if we knew e.g. that $n - a_n$ is bounded. 
JimmyK's sharp result that $0 \le n - a_n \le 2$ is more than enough! -So, indeed, $a_n \sim n$. From here, using difference of squares, -$$(n+1) - a_{n+1} = \frac{(n+1)^2 - (n^2 - a_n)}{(n+1) + a_{n+1}} = \frac{2n + 1 + a_n}{n + 1 + a_{n+1}} = \frac{2 + \frac{1}{n} + \frac{a_n}{n}}{1 + \frac{1}{n} + \frac{a_{n+1}}{n}} \to \frac{2 + 0 + 1}{1 + 0 + 1} = \frac{3}{2}.$$<|endoftext|> -TITLE: Is $(\#^k \Bbb{RP}^2) \times I$ an $\mathbb{RP}^2$-irreducible 3-manifold? -QUESTION [7 upvotes]: Consider $S$ a surface homeomorphic to a connected sum of $n$ projective planes, $n \geq 2$. Can there be a two sided projective plane embedded in $[-\epsilon,\epsilon]\times S$? - -REPLY [3 votes]: Call your surface $\Sigma$ so as to avoid confusion with spheres. Our first course of action is to note that $\Sigma$ has no 2-torsion in its fundamental group. (Actually, there's a much stronger true fact: a finite CW complex with contractible universal cover has no torsion in its fundamental group. I will not need or prove this.) To see this, note that if it did, it has as a covering space a noncompact surface with fundamental group $\Bbb Z/2$; but the only simply connected surfaces are $S^2$ and $\Bbb R^2$, and $\Bbb R^2$ does not have any continuous involutions with no fixed point. (It's easier to see that neither $\Bbb R^2$ or the unit disc with hyperbolic metric have isometric involutions with no fixed point; we only need to work in the case of isometric quotients by the uniformization theorem.) -Something slightly stronger than your question is true: there's not even a 2-sided $\Bbb{RP}^2$ in $\Sigma \times S^1$. (Indeed, there's no embedded $\Bbb{RP}^2$ at all, since a manifold with a 1-sided $\Bbb{RP}^2$ has $\Bbb{RP}^3$ as a connected summand; and we have no 2-torsion in our fundamental group, so that's not possible.) 
-To see this, note that a 2-sided $\Bbb{RP}^2$ cannot possibly disconnect the manifold: this would imply that $\Bbb{RP}^2$ bounds a compact 3-manifold, which it does not. The fact that it does not disconnect implies that it is a homologically nontrivial submanifold (meaning it represents a nonzero class in $H_2$): you can find a loop that intersects $\Bbb{RP}^2$ in precisely one point, and mod 2 intersection numbers are defined on the level of homology classes. -Now recall that $\pi_1(S^1 \times \Sigma)$ has no 2-torsion, so the map $\pi_1(\Bbb{RP}^2) \to \pi_1(S^1 \times \Sigma)$ is trivial, and the map from $\Bbb{RP}^2$ factors through the universal cover of $S^1 \times \Sigma$: that is, through $\Bbb R^3$. But $\Bbb R^3$ is contractible, which implies your 2-sided embedding of $\Bbb{RP}^2$ was null-homotopic. This contradicts the fact above that it was a homologically nontrivial submanifold.<|endoftext|> -TITLE: Loewner ordering of symmetric positive definite matrices and their inverses -QUESTION [5 upvotes]: $M_1$ and $M_2$ are symmetric positive definite matrices and $M_2>M_1$ in the Loewner ordering ($M_2-M_1$ is positive definite). Does this imply that $M_1^{-1}>M_2^{-1}$? - -REPLY [7 votes]: The answer is yes. -Two facts first: -(1) The statement $M_2>M_1$ is equivalent to $x^TM_2x>x^TM_1x$ for any $x\neq 0$; -(2) For any symmetric positive definite matrix $M$, there exists a positive definite matrix $L$ such that $M=L^2$ (called the square root of $M$). -We can show it is true when $M_1$ is the identity matrix $I$: for $M_2=L_2^2$, -$$ -x^TM_2^{-1}x=x^TL_2^{-T}L_2^{-1}x=(L_2^{-1}x)^T(L_2^{-1}x) -\leq (L_2^{-T}x)^TM_2(L_2^{-T}x)=x^Tx.
-$$ -In the general case for $M_1=L_1^2$, the condition $M_2>M_1$ is equivalent to -$L_1^{-1}M_2L_1^{-1}>I$, which implies that -$ -I>(L_1^{-1}M_2L_1^{-1})^{-1}=L_1M_2^{-1}L_1 -$ -or $M_1^{-1}>M_2^{-1}$.<|endoftext|> -TITLE: Showing $\sum_{n = 0}^\infty \int f^n$ converges -QUESTION [9 upvotes]: I am having trouble solving a real analysis qualifying exam problem. - -The question assumes $\mu(X) < \infty$ and $\left| f \right| < 1$ (EDIT: Assume $f$ is real-valued). We are to show that $$ \lim_{n \to \infty} \int_X 1 + f + \dots + f^n d\mu$$ exists, possibly equal to $\infty$. - -My work so far. Each integral in the sequence makes sense since $\int 1 + \left| f \right| + \dots + \left| f \right|^n < (n+1) \mu(X) < \infty$. Rephrasing the problem, we want to show $\sum_{n = 0}^\infty \int f^n$ converges. It is immediate by the Monotone Convergence Theorem that the result is true for nonnegative functions $f$. Considering absolute convergence, we have $$\sum \left| \int f^n \right| \leq \sum \int \left| f \right|^n$$ where the series on the right converges by what we just said. If said series is finite, then $\sum \int f^n$ converges absolutely, hence converges. -Question. I am stuck on the case that $$ \sum \int \left| f \right|^n = \infty. \;\;\;\;\;\;\;\;\; (*)$$ -I know from the statement of the problem that we are allowing for $\sum \int f^n = \infty$, but it is not clear to me whether this should follow from $(*)$. We do know $$\sum \int \left| f \right|^n = \int \sum \left| f \right|^n = \int \frac{1}{1 - \left| f \right|} \, d\mu.$$ So if this equals $\infty$, then $\mu \left\{ x \colon \left| f(x) \right| > 1 - \frac{1}{n} \right\} > 0$ for all $n$. And of course $\sum f^n = \frac{1}{1 - f}$ pointwise as well. But I can't see how to put this all together. -Any help would be much appreciated. Thanks. - -REPLY [2 votes]: For $f \geq 0$, you showed the claim yourself. Now, for the general case, by splitting $f = f_+ - f_-$, it suffices to show the claim for $f\leq 0$.
For this, also note $f^n = (f_+)^n + (-f_-)^n $, since the supports of $f_+, f_-$ are disjoint. -Now, for $-1 < x\leq 0$, we have -$$ -\bigg | \sum_{k=0 }^n x^k \bigg | = \frac {1-x^{n+1}}{1-x} \leq 1, -$$ -so that you can apply the dominated convergence theorem. This easily shows that -$$ -\lim_n \int 1+\dots + f^n d\mu = \int \frac {1}{1-f}d\mu -$$ -is finite (remember $f \leq 0$). Together with the case $f\geq 0$, we get the claim.<|endoftext|> -TITLE: How many closed subsets of $\mathbb R$ are there up to homeomorphism? -QUESTION [11 upvotes]: I know there are lists of convex subsets of $\mathbb{R}$ up to homeomorphism, and closed convex subsets of $\mathbb{R}^2$ up to homeomorphism, but what about just closed subsets in general of $\mathbb{R}$? - -REPLY [3 votes]: Inasmuch as there are just $2^{\aleph_0}$ closed subsets of $\mathbb R$ all told, it will suffice to exhibit $2^{\aleph_0}$ nonhomeomorphic nowhere dense closed subsets of $\mathbb R.$ -For $S\subseteq\mathbb R$ and $n\in\omega$ let $S^{(n)}$ denote the $n^{\text{th}}$ Cantor-Bendixson derivative of $S,$ i.e., $S^{(0)}=S,\ S^{(1)}=S',\ S^{(n+1)}=(S^{(n)})'.$ -For $X\subseteq\mathbb R$ let $A(X)$ denote the set of all positive integers $n$ for which there exists a relatively open set $U\subseteq X$ such that $X^{(n-1)}\cap U\ne X^{(n)}\cap U=X^{(n+1)}\cap U\ne\emptyset.$ -It will suffice to show that, for every set $A$ of positive integers, there is a nowhere dense closed set $X\subseteq\mathbb R$ with $A(X)=A;$ in fact, it will suffice to show this for a one-point set $A=\{n\}$ where $n$ is a positive integer. -Given a positive integer $n,$ construct a closed set $X\subseteq\mathbb R$ of order type $\omega^n+\varphi$ where $\varphi$ is the order type of the Cantor set; then $A(X)=\{n\}.$ \ No newline at end of file