-TITLE: Geometric interpretation of the fact that the extrema of $\sin x(\sin x+2\cos x)$ are the golden ratio and its conjugate
-QUESTION [5 upvotes]: I stumbled upon the function $$f(x)=\sin(x)(\sin(x)+2\cos(x)).$$
-Now I noticed this function has maximum and minimum values $$\frac{1\pm\sqrt5}{2},$$
-which are exactly the golden ratio and its conjugate.
-Computationally, this can be verified, but I wondered if there is also an intuitive (geometric) explanation of the golden ratio appearing here.
-
-REPLY [2 votes]: You mean in fact that the maximal and minimal values taken by the function are the golden ratio $\Phi$ and its conjugate $1-\Phi$.
-Here is an explanation. The derivative of
-$$f(x)=\sin(x)(\sin(x)+2\cos(x)) \ \ (*)$$
-is
-$$f'(x)=\cos x (\sin x + 2 \cos x)+\sin x(\cos x -2 \sin x),$$
-which can be written
-$$f'(x)=2 (\cos^2 x - \sin^2 x) + 2 \sin x \cos x=2 \cos 2x +\sin 2x.$$
-Thus, $f'(x)$ is equal to zero if and only if $\tan 2x=-2$. Knowing the relationship $\tan 2x = (2 \tan x)/(1- \tan^2 x)$, we now have to solve $2T/(1-T^2)=-2$ (setting $T=\tan x$), which amounts to:
-$$T^2=T+1 \ \ \ (1)$$
-Thus $T$ is either $T_1=1.618\ldots$ (the golden ratio) or $T_2=-0.618\ldots$ (its conjugate).
-Now, we have to go back to (*) because it is the extremal values of $f$ that we are interested in. We can write the expression of $f(x)$ in a form that only involves $\tan x$:
-$$f(x)=\sin^2 x + 2 \sin x \cos x=\dfrac{\tan^2x}{1+\tan^2 x}+\dfrac{2 \tan x}{1+\tan^2 x} \ \ (2)$$
-The extreme values of $f(x)$ are obtained for one of the $T_k$s above.
-Let us denote by $T$ either of these two (both verify (1)).
-Equation (2) becomes $\dfrac{T^2+2T}{1+T^2}$, which is equal... to $T$.
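-As a quick numerical sanity check of these extreme values (a throwaway sketch; the sampling grid is an arbitrary choice):

```python
import math

phi = (1 + math.sqrt(5)) / 2                      # golden ratio

def f(x):
    return math.sin(x) * (math.sin(x) + 2 * math.cos(x))

# Sample one full period densely and read off the extrema.
xs = [2 * math.pi * k / 200000 for k in range(200000)]
vals = [f(x) for x in xs]
print(max(vals), min(vals))                       # ≈ 1.61803..., -0.61803...
assert abs(max(vals) - phi) < 1e-8
assert abs(min(vals) - (1 - phi)) < 1e-8
```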
-Indeed, $T^2+2T=(1+T^2)T$ is immediately brought back to (1).<|endoftext|>
-TITLE: Open mapping and space filling
-QUESTION [5 upvotes]: Suppose $n<m$. Does there exist a continuous open map $f: \mathbb R^n\to \mathbb R^m$?

-REPLY: A continuous map $f: R\to R^m$, $m>1$, cannot be open. Indeed, the image $f([0,1])$ is a compact $C$ with a continuum of frontier points (unless $C$ is a point); hence, $f((0,1))$ cannot be open.
-On the other hand, for each $m>3$ there exists a continuous open map $f: R^3\to R^m$, see
-L. V. Keldysh, A monotone mapping of the cube onto the cube of greater dimension, Matem. Sb., 43:2 (1957), 129–158
-and the follow-up paper where the construction was completed:
-L. V. Keldysh, Transformation of a monotone irreducible mapping into a monotone-open one, and monotone-open mappings of the cube which raise the dimension. Mat. Sb. N.S. 43 (85) 1957 187–226.
-A proof in English can be found in
-D. Wilson, Open mappings on manifolds and a counterexample to the Whyburn conjecture, Duke Math. J. 40 (1973), 705-716.
-In these papers the authors construct open mappings $f_m$ of the 3-dimensional cube $I^3$ onto the $m$-dimensional cube $I^m$ for all $m\ge 4$. In order to get similar maps $R^3\to R^m$ one proceeds as follows:
-Restrict $f_m$ to a small open ball $B^3$ in the interior of $I^3$, whose image is disjoint from the boundary of $I^m$ (such a $B^3$ exists due to continuity of $f_m$). Then $f_m(B^3)$ is an open domain in $R^m$. Now, use the fact that $B^3$ is homeomorphic to $R^3$. You can also promote this to a surjective map $R^3\to R^m$ using the fact that every open nonempty subset of $R^m$ admits an open map onto $R^m$. (First prove it for $m=2$ using maps of the form $z\mapsto z^2$, $z\in {\mathbb C}$.)
-I do not know about open maps $R^2\to R^m$.
There are reasons to expect that they do not exist.<|endoftext|>
-TITLE: Show that d is a metric if it fulfills the triangle inequality and $d(x,y)=0\iff x=y$
-QUESTION [5 upvotes]: I have the following exercise:

-$M$ is a non-empty set and $d: M\times M \to \mathbb R$ an application such that:
-a) $d(x,y)=0\iff x=y$
-b) $d(x,z) \le d(x,y)+d(y,z)$
-Show that $d$ is a metric.

-Aren't those 2 of the properties for a metric space? Also, wouldn't $b)$ imply that:
-$$d(x,x)\le d(x,y)+d(y,x) \implies 0 \le d(x,y)+d(y,x)$$
-which, if we accept that $d(x,y) = d(y,x)$, gives:
-$$0 \le 2d(x,y) \implies d(x,y) \ge 0$$
-?
-So, wouldn't it be necessary for a metric space to have only:
-$$d(x,y)\ge 0$$
-$$d(x,z)\le d(x,y)+d(y,z)$$
-$$d(x,y)=0\iff x=y$$
-?
-What am I not getting here?
-UPDATE: the right question is:

-$M$ is a non-empty set and $d: M\times M \to \mathbb R$ an application such that:
-a) $d(x,y)=0\iff x=y$
-b) $d(x,z) \le d(x,y)+d(z,y)$
-Show that $d$ is a metric.

-REPLY [3 votes]: So you have a function $d: M\times M \to \mathbb R$ satisfying

-a) $d(x,y)=0\iff x=y$
-b) $d(x,z) \le d(x,y)+d(z,y)$
-As you noted, you just have to show that it takes non-negative values and that the symmetry property is satisfied. You also need to show the triangle inequality (which is immediate from (b) once you've shown symmetry).
-To show non-negativity (as you have done), take $z=x$ in (b):
-$$d(x,x)\le d(x,y)+d(x,y) \implies 0 \le 2d(x,y)\implies d(x,y)\ge0$$
-To show symmetry we play around with (b).
-Taking $y=x$ in (b) we get $$d(x,z)\le d(x,x)+d(z,x) \implies d(x,z) \le d(z,x)$$ Similarly, interchanging $x$ and $z$ and taking $y=z$ we get
-$$d(z,x)\le d(z,z)+d(x,z) \implies d(z,x) \le d(x,z)$$ so that $$d(x,z)=d(z,x)$$<|endoftext|>
-TITLE: What is the smallest possible number
-QUESTION [8 upvotes]: I spent an hour on this problem, but have no clue how to solve it. Could anyone help me?
-The problem --- The numbers from 1 through 8 are separated into two sets, A and B.
The numbers in A are multiplied together to get a. The numbers in B are multiplied together to get b. The larger of the two numbers a and b is written down. What is the smallest possible number that can be written down using this procedure?

-REPLY [9 votes]: First note that the optimal values for $a$ and $b$ will both be even: otherwise, if say $a$ is odd, then it is at most $1\cdot3\cdot5\cdot 7 = 105$, so that $b-a \ge 279$, which we shall soon see is not optimal.
-When $a$ and $b$ have the same parity, the factorization $8! = a\cdot b$ corresponds uniquely to a difference of squares
-$$8! = \left(\frac{a+b}2\right)^2 - \left(\frac{a-b}2\right)^2.$$
-Thus we can simply look for the smallest integer $y$ satisfying $8! = x^2-y^2$. Noting that $x$ increases along with $y$, we can just try successive values of $x$, starting with the smallest conceivable value $201$ (any smaller and we'd have $x^2 < 8!$):
-$$(201)^2 - 8! = 81 = 9^2.$$
-Oh look, we already have a solution! So $(a-b)/2 = 9$ and $(a+b)/2 = 201$, giving $a = 210$ and $b=192$. This is a very old (and now fairly obsolete) factoring technique known as Fermat factorization. It happened to work splendidly well in this case, but usually it takes more trial and error. For small numbers like $40320$ it's not a bad way to find the two factors that are closest together.
-We still need to check that $210$ (or $192$) can be factored into some combination of numbers from $\{1,2,\ldots,8\}$: we might have had some bad luck if we were forced to split the factor $6$ into $2\cdot 3$ because there's no other way to get $210$, in which case we'd have to look for the next values of $x$ and $y$. But happily, $210 = 2\cdot3\cdot5\cdot7$, and the complementary factors $1\cdot4\cdot6\cdot 8$ obviously yield $192$, so the first values of $x$ and $y$ worked out.<|endoftext|>
-TITLE: Is f(x)=0 considered continuous?
-QUESTION [6 upvotes]: I'm aware that constant functions are considered continuous, but would $f(x)=0$ be considered a continuous function everywhere? Does $0$ count as a constant?

-REPLY [8 votes]: Indeed it is continuous. Using the $\epsilon$-$\delta$ definition of continuity, we find that $\forall c\in \mathbb R$ and $\forall \epsilon > 0$ there exists $\delta > 0$ such that $\forall x\in\mathbb R$
-$$|x-c| <\delta \implies |f(x) - f(c)| = 0 < \epsilon$$
-Clearly, $\delta$ can be anything, so continuity holds.

-REPLY [6 votes]: Yes, it is continuous, and $0$ is a constant.<|endoftext|>
-TITLE: Geometry Math 8th (Area of a Trapezoid)
-QUESTION [6 upvotes]: The windshield in a truck is in the shape of a trapezoid. The lengths of the bases of the trapezoid are 70 inches and 79 inches. The height is 35 inches. Find the area of the glass in the windshield.

-REPLY [7 votes]: The formula for the area of a trapezoid is $\dfrac {a+b}2\times h$, so that becomes $\dfrac {149}2\times35 = 2607.5$ square inches.<|endoftext|>
-TITLE: the vector space of Magic Squares
-QUESTION [5 upvotes]: Can anyone offer help? I have no clue how to do this problem.
-Magic squares are 3 by 3 matrices with the following property: the sum of all numbers in each row, in each column, and in each diagonal is equal. This number is called the magic number.
-(i) Prove that the set of magic squares forms a vector space with the usual matrix addition and scalar-matrix product.
-(ii) Find a basis of the vector space of magic squares and determine its dimension.

-REPLY [2 votes]: Here's an easy way to do this for general $n$: given a magic number $S$, consider the top-left $(n-1)$ by $(n-1)$ submatrix of the square. Given these values, one can fill in the margins by subtracting rows and columns of the submatrix from $S$, and the bottom-right entry by subtracting the diagonal of the submatrix from $S$.
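-For $n=3$ the fill-in step just described can be sketched in a few lines (the submatrix entries and $S=15$ are an arbitrary illustrative choice, the classical Lo Shu values, for which the two leftover constraints discussed next also happen to hold):

```python
S = 15                       # target magic number
sub = [[2, 7],               # free top-left (n-1) x (n-1) block
       [9, 5]]

# Fill in the right margin from the row sums, then the bottom margin
# from the column sums, then the corner from the submatrix diagonal.
M = [[sub[0][0], sub[0][1], S - sum(sub[0])],
     [sub[1][0], sub[1][1], S - sum(sub[1])],
     [0, 0, 0]]
M[2][0] = S - (sub[0][0] + sub[1][0])
M[2][1] = S - (sub[0][1] + sub[1][1])
M[2][2] = S - (sub[0][0] + sub[1][1])

rows = [sum(r) for r in M]
cols = [sum(c) for c in zip(*M)]
diags = [M[0][0] + M[1][1] + M[2][2], M[0][2] + M[1][1] + M[2][0]]
print(M, rows, cols, diags)   # every listed sum equals 15
```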
The only equations remaining to satisfy are: (1) the sum of each new margin equals $S$ and (2) the sum of the non-principal diagonal equals $S$. Condition (1) is the same equation for each margin (because the last column can be determined given all the rows and all the other columns). So the conditions are, where $i,j$ range over the submatrix indices $1\le i,j\le n-1$ and $1\lt k\lt n$:
-$$\sum_{i}a_{ii}=(n-1)S-\sum_{ij}a_{ij}$$
-$$\sum_k a_{k(n-k+1)}=\sum_j a_{1j}+\sum_i a_{i1}-S$$
-These can be checked to be linearly independent for $n>2$. Allowing $S$ to be free, the dimension of our space is therefore $(n-1)^2-2+1$, which equals
-$$n^2-2n,$$
-which indeed gives $3$ in the case $n=3$. Meanwhile, for $n=1$ and $n=2$, the dimension is clearly $1$.<|endoftext|>
-TITLE: For ideal $m$ maximal and principal, there's no ideal between $m^2$ and $m$. Prove that this can be false when $m$ is not principal or maximal.
-QUESTION [6 upvotes]: Prove that for an ideal $m$ maximal and principal, there's no ideal $I$ such that $m^2 \subsetneq I \subsetneq m$. Show that this can be false when $m$ is not principal or not maximal.

-Suppose $\mathfrak m=(a)$, and $a\notin I$. Let's show that $I\subseteq\mathfrak m^2$. Pick $x\in I$. Then $x=ay$, $y\in R$. If $y\in\mathfrak m$, then $y=az$ and thus $x=a^2z\in\mathfrak m^2$. Otherwise, $\mathfrak m+(y)=R$, so $1=am+yn$. Then $a=a^2m+ayn$, so $a=a^2m+xn\in I$, a contradiction.
-This proves the first part, but I don't know how to do the second.

-REPLY [5 votes]: Can you find an ideal $I$ such that $(x^2,xy,y^2)=(x,y)^2\subsetneq I\subsetneq (x,y)$? (The ideal $\mathfrak m=(x,y)$ is maximal in $K[x,y]$, but not principal.)
-The same question for $36\mathbb Z\subsetneq I\subsetneq 6\mathbb Z$.
(The ideal $6\mathbb Z$ is principal, but not maximal in $\mathbb Z$.)<|endoftext|>
-TITLE: Prove that $1+\frac{1}{3}+\frac{1\cdot 3}{3\cdot 6}+\frac{1\cdot 3\cdot 5}{3\cdot 6 \cdot 9}+\cdots=\sqrt{3}$
-QUESTION [15 upvotes]: Prove that $$1+\frac{1}{3}+\frac{1\cdot 3}{3\cdot 6}+\frac{1\cdot 3\cdot 5}{3\cdot 6 \cdot 9}+\cdots=\sqrt{3}$$

-$\bf{My\; Try:}$ Using the binomial expansion $$(1-x)^{-n} = 1+nx+\frac{n(n+1)}{2}x^2+\frac{n(n+1)(n+2)}{6}x^3+\cdots$$
-we get $$nx=\frac{1}{3}$$ and $$\frac{nx(nx+x)}{2}=\frac{1}{3}\cdot \frac{3}{6}$$
-So we get $$\frac{1}{3}\left(\frac{1}{3}+x\right)=\frac{1}{3}\Rightarrow x=\frac{2}{3}$$
-and hence $$n=\frac{1}{2}.$$
-So our series sum is $$(1-x)^{-n} = \left(1-\frac{2}{3}\right)^{-\frac{1}{2}} = \sqrt{3}$$
-Although I know that this is the simplest proof, can we solve it any other way, something like defining $a_{n}$ and then using a telescoping sum?
-Thanks.

-REPLY [5 votes]: Here is a variation to obtain $\sqrt{3}$ based upon the generating function of the central binomial coefficients
-\begin{align*}
- \sum_{n=0}^{\infty}\binom{2n}{n}z^n=\frac{1}{\sqrt{1-4z}}\qquad\qquad |z|<\frac{1}{4}
- \end{align*}

-We obtain
-\begin{align*}
- 1&+\frac{1}{3}+\frac{1\cdot 3}{3\cdot 6}+\frac{1\cdot 3\cdot 5}{3\cdot 6\cdot 9}+\cdots\\
- &=1+\frac{1!!}{3^11!}+\frac{3!!}{3^22!}+\frac{5!!}{3^33!}+\cdots\tag{1}\\
- &=1+\sum_{n=1}^{\infty}\frac{(2n-1)!!}{n!}\frac{1}{3^n}\\
- &=1+\sum_{n=1}^{\infty}\frac{(2n)!}{n!(2n)!!}\frac{1}{3^n}\tag{2}\\
- &=1+\sum_{n=1}^{\infty}\frac{(2n)!}{n!n!}\frac{1}{6^n}\tag{3}\\
- &=\sum_{n=0}^{\infty}\binom{2n}{n}\frac{1}{6^n}\\
- &=\left.\frac{1}{\sqrt{1-4z}}\right|_{z=\frac{1}{6}}\\
- &=\frac{1}{\sqrt{1-\frac{2}{3}}}\\
- &=\sqrt{3}
- \end{align*}

-Comment:

-In (1) we use the double factorial for odd values, $$(2n-1)!!=(2n-1)(2n-3)\cdots 5\cdot 3\cdot 1$$
-In (2) we use the identity
-\begin{align*}
- (2n)!=(2n)!!(2n-1)!!
- \end{align*}
-In (3) we use the identity
-\begin{align*}
- (2n)!!=2^nn!
- \end{align*}<|endoftext|>
-TITLE: How to prove that $[12\sqrt[n]{n!}]{\leq}7n+5$?
-QUESTION [5 upvotes]: How can one prove that $[12\sqrt[n]{n!}]{\leq}7n+5$ for $n\in \mathbb N$? I know $\lim_{n\to \infty } (1+ \frac{7}{7n+5} )^{ n+1}=e$ and $\lim_{n\to \infty } \sqrt[n+1]{n+1} =1$.

-REPLY [6 votes]: By AM-GM,
-$$\frac{1+2 + 3 + \cdots + n}{n} \ge \sqrt[n]{1 \times 2 \times 3 \times \cdots \times n}$$
-$$\implies \frac{n+1}2 \ge \sqrt[n]{n!} \implies 6n+6 \ge 12\sqrt[n]{n!}$$
-But $7n+5 \ge 6n+6$ for $n \ge 1$...<|endoftext|>
-TITLE: If every absolutely convergent series is convergent then $X$ is Banach
-QUESTION [17 upvotes]: Show that

-a normed linear space $X$ is a Banach space iff every absolutely convergent series is convergent.

-My try:
-Let $X$ be a Banach space. Let $\sum x_n$ be an absolutely convergent series. Consider $s_n=\sum_{i=1}^nx_i$. Now $\sum \|x_n\|<\infty$ implies that for any $\epsilon>0$ there exists $N$ such that $\sum_{i=N}^ \infty \|x_i\|<\epsilon$.
-Then $\|s_n-s_m\|\le \sum _{i=m+1}^n \|x_i\|<\epsilon \ \forall n,m>N$.
-So $s_n$ is Cauchy in $X$ and hence converges, $s_n\to s$ (say).
-Thus $\sum x_i$ converges.
-Conversely, let $x_n$ be a Cauchy sequence in $X$. Here I can't proceed; how do I use the given fact?
-Any help will be great.

-REPLY [9 votes]: A normed space is complete if and only if every absolutely convergent series converges.
-$\implies$
-We will prove this by showing that the partial sums of an absolutely convergent series form a Cauchy sequence.
-Consider an absolutely convergent series.
Suppose $x_n\in E$ and $\sum_{n=1}^\infty||x_n||<\infty$, and denote
-$$
-s_n=\sum_{k=1}^nx_k
-$$
-Because the series of norms converges in $\mathbb R$, for every $\epsilon >0$ there exists $k>0$ such that
-$$
-\sum_{n=k+1}^\infty ||x_n||<\epsilon
-$$
-To show $(s_n)$ is a Cauchy sequence: $\forall \epsilon>0,\exists M,\forall m,n>M$ such that
-$$
-||s_m-s_n||=||x_{n+1}+x_{n+2}+\dotsm+x_m||\le \sum_{r=n+1}^\infty ||x_r||<\epsilon
-$$
-(without loss of generality, we assume $m>n$).
-Since $E$ is complete, $s_n$ converges.
-$\Longleftarrow$
-We need to prove that if every absolutely convergent series in a normed space converges, then the normed space is complete.
-Let $(x_n)$ be a Cauchy sequence in $E$. Then for every $k\in\mathbb N$ there exists $p_k\in \mathbb N$ such that for all $m,n\ge p_k$,
-$$
-||x_m-x_n||<2^{-k}
-$$
-Without loss of generality, we can assume $(p_k)$ is strictly increasing.
-Then the series $\sum_{k=1}^\infty (x_{p_{k+1}}-x_{p_k})$ is absolutely convergent and therefore convergent, and therefore the sequence
-$$
-x_{p_k}=x_{p_1}+(x_{p_2}-x_{p_1})+(x_{p_3}-x_{p_2})+\dotsm+(x_{p_k}-x_{p_{k-1}})
-$$
-converges to an element $x\in E$.
-Then
-$$
-||x_n-x||\le ||x_n-x_{p_n}||+||x_{p_n}-x||\rightarrow 0
-$$
-Q.E.D.<|endoftext|>
-TITLE: Bockstein homomorphism and the universal coefficient theorem
-QUESTION [7 upvotes]: The following statement is given in the third comment of
-kernel of the mod $2$ Bockstein on the first cohomology group:
-Statement: Let $X$ be a path-connected finite $CW$-complex. Suppose
-$$
-H_1(X;\mathbb{Z})=\mathbb{Z}_2^{\oplus r}\oplus A
-$$
-where $r\geq 0$ and $A$ is a finite abelian group of odd order.
-Then for any nonzero element $x\in H^1(M;\mathbb{Z}_2)$, $x^2\neq 0$.
-My attempt to prove the statement: I notice that for any nonzero element $x\in H^1(M;\mathbb{Z}_2)$, $x^2=Sq^1 x=\beta x$, where $\beta$ is the Bockstein homomorphism associated with the coefficient sequence $$
-0\to \mathbb{Z}_2 \to\mathbb{Z}_4 \to \mathbb{Z}_2\to 0.$$
-Hence we only need to prove
-$$
- \text{Ker} \beta=0.
-$$
-By the universal coefficient theorem,
-$$
-H^1(M;\mathbb{Z}_4)=Hom(H_1(M;\mathbb{Z});\mathbb{Z}_4)
-=Hom(\mathbb{Z}_2^{\oplus r};\mathbb{Z}_4)
-=\mathbb{Z}_2^{\oplus r},$$
-$$
-H^1(M;\mathbb{Z}_2)=Hom(H_1(M;\mathbb{Z});\mathbb{Z}_2)
-=Hom(\mathbb{Z}_2^{\oplus r};\mathbb{Z}_2)
-=\mathbb{Z}_2^{\oplus r}.$$
-By the construction of the Bockstein homomorphism, we have an exact sequence
-$$
-H^1(M;\mathbb{Z}_4)\overset{f}{\longrightarrow} H^1(M;\mathbb{Z}_2)\overset{\beta}{\longrightarrow }H^2(M;\mathbb{Z}_2).$$
-Hence when $r=0$, I obtain
-$$
-\text{Ker}\beta=\text{Im} f=0.
-$$
-Is my above argument right?
-Question: When $r>0$, could we still obtain the statement, which is equivalent to proving that the image of $f$ is zero?

-REPLY [2 votes]: The map $f$ isn't just any map: it's the map induced by the quotient map $q:\mathbb{Z}_4\to\mathbb{Z}_2$ on coefficients. By the naturality of the universal coefficient theorem (with respect to the coefficient group), the map $f:H^1(M;\mathbb{Z}_4)\to H^1(M;\mathbb{Z}_2)$ can be identified with the map $\operatorname{Hom}(H_1(M;\mathbb{Z}),\mathbb{Z}_4)\to\operatorname{Hom}(H_1(M;\mathbb{Z}),\mathbb{Z}_2)$ given by taking a homomorphism $H_1(M;\mathbb{Z})\to\mathbb{Z}_4$ and composing it with $q$ to get a homomorphism $H_1(M;\mathbb{Z})\to\mathbb{Z}_2$. But because $H_1(M;\mathbb{Z})\cong \mathbb{Z}_2^r\oplus A$, the image of every homomorphism $H_1(M;\mathbb{Z})\to\mathbb{Z}_4$ is contained in the kernel of $q$ (since every element not in the kernel of $q$ has order $4$, but $H_1(M;\mathbb{Z})$ has no elements of order divisible by $4$).
So the map $\operatorname{Hom}(H_1(M;\mathbb{Z}),\mathbb{Z}_4)\to\operatorname{Hom}(H_1(M;\mathbb{Z}),\mathbb{Z}_2)$ is identically $0$.<|endoftext|>
-TITLE: Cauchy Principal value of integral with undefined point
-QUESTION [5 upvotes]: This is a follow-up question to my post on Stack Overflow. I want to (either analytically or numerically) integrate:
-$I=\displaystyle\int_{-\infty}^{\infty}\dfrac{1}{(z+1)^2+4} \dfrac{1}{\exp(-z)-1} dz$
-using MATLAB, but it tells me that the integral may not exist – the integrand is undefined at $z=0$. The Cauchy principal value doesn't seem to exist either, so what does this tell us about the integral? Does it mean we can't evaluate $I$ (numerically or otherwise)?

-REPLY [4 votes]: I did a (non-rigorous) analysis based on residues in the half-plane $\operatorname{Im}[z]>0$.
-Let $C$ be the contour that runs from $-a$ to $a$ (in a straight line over the real axis), and back over the semi-circle $|z|=a$ with $\operatorname{Im}[z]\ge0$, see the image (taken from Wikipedia) below.

-Inside this contour, the integrand is analytic, except for a finite number of poles: the pole at $z=-1+2i$ due to the first factor, and the poles at $z=2\pi i n$ for $n=1,2,\ldots,N$ due to the second factor. Here, $N$ is the biggest integer such that $2\pi N < a$.<|endoftext|>
-TITLE: Why is it impossible to move from the corner to the center of a $3 \times 3 \times 3$ cube under these conditions?
-QUESTION [5 upvotes]: There is a $3 \times 3 \times 3$ cube block; starting at the corner, you are allowed to take 1 block each time, and the next one must share a face with the last one. Can it be finished in the center?
-This is a question about graph theory (I think), and obviously it is impossible to finish in the center. I started to consider the Hamiltonian path and the degree of each vertex, but it is different because you have to start and end with a specific vertex. Can anyone tell me the reason why it is impossible?
-REPLY [7 votes]: Set up a coordinate system where every block corresponds to a triplet $(a,b,c)$; for example, the starting corner will be $(1,1,1)$ and the center $(2,2,2)$. Every time you move from a block to another that shares a face with the previous one, you add or subtract $1$ from one coordinate of the previous triplet. You start from $(1,1,1)$ and you want to finish in $(2,2,2)$ with $26$ steps. Call $a$ the number of $+1$'s and $b$ the number of $-1$'s; you must have
-$$a+b=26$$
-and
-$$a-b=2+2+2-1-1-1=3$$
-and this is not possible.<|endoftext|>
-TITLE: Evaluation of $1-\frac{1}{7}+\frac{1}{9}-\frac{1}{15}+\frac{1}{17}-\dotsb$
-QUESTION [14 upvotes]: How can we calculate the sums of the following infinite series?
-$\displaystyle \bullet\; 1+\frac{1}{3}-\frac{1}{5}-\frac{1}{7}+\frac{1}{9}+\frac{1}{11}-\cdots$
-$\displaystyle \bullet\; 1-\frac{1}{7}+\frac{1}{9}-\frac{1}{15}+\frac{1}{17}-\cdots$

-$\textbf{My Try:}$ Let $$S = \int_0^1 (1+x^2-x^4-x^6+x^8+x^{10}-\cdots) \, dx$$
-So we get $$S=\int_0^1 \left(1-x^4+x^8-\cdots\right)dx+\int_0^1 x^2(1-x^4+x^8-\cdots)\,dx$$
-So we get $$S=\int_0^1 \frac{1+x^2}{1+x^4} \, dx= \frac{\pi}{2\sqrt{2}},$$ and we are done with the first one.
-Again for the second one, let $$S=\int_0^1 (1-x^6+x^8-x^{14}+x^{16}-\cdots)\,dx$$
-So we get $$S=\int_0^1 (1+x^8+x^{16}+\cdots) \, dx-\int_0^1 (x^6+x^{14}+\cdots)\,dx$$
-So we get $$S=\int_{0}^{1}\frac{1-x^6}{1-x^8}dx = \int_{0}^{1}\frac{x^4+x^2+1}{(x^2+1)(x^4+1)}dx$$
-Now how can I proceed from here? Help me.
-Thanks.

-REPLY [8 votes]: Start with partial fractions:
-$$\frac{x^4+x^2+1}{(x^2+1)(x^4+1)} = \frac{A}{x^2+1}+ \frac{B x^2+C}{x^4+1}$$
-Thus,
-$$A+B=1$$
-$$B+C=1$$
-$$A+C=1$$
-or $A=B=C=1/2$. Also note that
-$$x^4+1 = (x^2+\sqrt{2} x+1)(x^2-\sqrt{2} x+1) $$
-so that
-$$\frac{x^2+1}{x^4+1} = \frac{P}{x^2-\sqrt{2} x+1} + \frac{Q}{x^2+\sqrt{2} x+1}$$
-where $P=Q=1/2$.
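-As a sanity check before assembling the pieces: the closed-form value $\frac{\pi}{8}(\sqrt{2}+1)\approx 0.94806$ that this computation arrives at can be compared against a direct numerical integration, e.g. composite Simpson's rule (a throwaway sketch):

```python
import math

def f(x):
    return (x**4 + x**2 + 1) / ((x**2 + 1) * (x**4 + 1))

# Composite Simpson's rule on [0, 1] with n panels (n must be even).
n = 1000
h = 1.0 / n
s = f(0) + f(1) + sum((4 if k % 2 else 2) * f(k * h) for k in range(1, n))
approx = s * h / 3

exact = math.pi / 8 * (math.sqrt(2) + 1)
print(approx, exact)          # both ≈ 0.948059...
assert abs(approx - exact) < 1e-10
```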
Thus,
-$$\frac{x^4+x^2+1}{(x^2+1)(x^4+1)} = \frac14 \left [2 \frac1{x^2+1} + \frac1{(x-\frac1{\sqrt{2}})^2+\frac12} + \frac1{(x+\frac1{\sqrt{2}})^2+\frac12} \right ]$$
-And the integral is
-$$\frac12 \frac{\pi}{4}+ \frac14 \sqrt{2} \left [\arctan{(\sqrt{2}-1)}-\arctan{(-1)} \right ] + \frac14 \sqrt{2} \left [\arctan{(\sqrt{2}+1)}-\arctan{(1)} \right ]= \frac{\pi}{8} (\sqrt{2}+1) $$<|endoftext|>
-TITLE: Does every non-trivial finite group have a subgroup with prime index?
-QUESTION [8 upvotes]: Let $G$ be any non-trivial finite group.

-Does $G$ always have a subgroup whose index is prime?

-If $G$ is solvable and $|G|$ has a prime divisor $p$ such that $p^2$ does not divide $|G|$, this is the case because of Hall's theorem.
-If $G$ is a $p$-group, the answer is also positive.
-The group $A_5$, for example, is not solvable, but has subgroups with index $5$.
-So, I wonder whether we can always find a subgroup with prime index.

-REPLY [2 votes]: $A_n$ has no subgroup of prime index iff $n \geq 6$ and $n$ is not prime.

-Because of this theorem:
-Theorem: If $G$ is a simple group with a subgroup of index $m>1$, then $$|G|\quad \big|\quad m!$$ Proof: if $G$ has a subgroup $H$ of index $m$, and $G$ acts on the cosets of $H$, then we have a homomorphism from $G$ to $S_m$ with trivial kernel (because $G$ is simple). $\Box$
-Of course $A_{p-1} \leq A_p$ has prime index $p$.
-For another example: because the only normal subgroups of $S_n$ ($n \geq 5$) are $1$, $A_n$ and $S_n$, in the same way you can show that $S_n$ has no subgroup of index $2<m<n$.<|endoftext|>
-TITLE: Geometrical meaning of $\operatorname{Spec}\widehat{\mathcal O}_{X,x}$
-QUESTION [8 upvotes]: Let $X$ be a complete smooth irreducible variety over a field $K$ and consider a closed point $x \in X$. Moreover let $\widehat{\mathcal O}_{X,x}$ be the $\mathfrak m_x$-adic completion of the local ring at $x$.
-What is the geometrical meaning of $\operatorname{Spec}\widehat{\mathcal O}_{X,x}$?
Many sources say, very vaguely, that this spectrum carries pieces of information about very small neighbourhoods around $x$, but I'd like a more effective explanation. I know that maybe one should have a knowledge of formal schemes in order to fully understand this object, but unfortunately I only know "standard" scheme theory.
-There is obviously a morphism $f:\operatorname{Spec} \widehat{\mathcal O}_{X,x}\longrightarrow \operatorname{Spec}\mathcal O_{X,x}$, so I would like to know what kind of geometric properties one can evince respectively from $\operatorname{Spec}\widehat{\mathcal O}_{X,x}$ and $\operatorname{Spec}\mathcal O_{X,x}$: the latter is already a "local object", so why do we need the completion? In particular I'm interested in the minimal prime ideals of $\widehat{\mathcal O}_{X,x}$, which are also called "formal branches at $x$"...
-Many thanks in advance.

-REPLY [4 votes]: I don't think $\mathcal O_{X,x}$ is as local an object as you think it is. For example, you can get the function field of $X$ from it, which is surely a global thing?
-The intuition for $\widehat{\mathcal O}_{X,x}$ is that it contains "Taylor expansions" of functions around $x$. If you know what the normal cone is, then you can also convince yourself that if $x$ is a closed point, then $\widehat{\mathcal{O}}_{X,x}$ contains essentially the same information as the normal cone of $x$ in $X$.
-You can also verify that both of those objects are isomorphic in the following two cases:
-$x$ is the node of the nodal curve, and
-$x$ is the origin in $\text{Spec}\,k[x,y]/(xy)$.
-And by looking at pictures, you can immediately see that the two points are geometrically locally similar.<|endoftext|>
-TITLE: Square root of smooth function f(0)=0, f'(0)=0, f''(0)>0 is smooth?
-QUESTION [5 upvotes]: $\newcommand{\nc}{\newcommand} \nc{\BR}{\mathbb R}$ Let $f: U \to \BR$ be a smooth function, where $U$ is a neighborhood of $0\in \BR$.
Furthermore, $f(0) = 0$, $f'(0) = 0$, and $f''(0) > 0$. From this we can shrink $U$ to a set on which $f$ is nonnegative, so the square root is well-defined. I wish to show (conjecture) that $g(t) = \pm \sqrt{f(t)}$ is a smooth function (where $g(t) = -\sqrt{f(t)}$ for $t<0$, $g(t) = \sqrt{f(t)}$ for $t \geq 0$).
-Clearly the only issue is smoothness at 0. I read this link, which might be related, but in the counterexamples given there $f''(0) = 0$, whereas I am assuming $f''(0) > 0$. I have shown directly (using limits) that $g'(0) = \sqrt{\frac{1}{2}f''(0)}$ and that $g''(0) = g'''(0) = 0$. However, the form of $g^{(n)}(t)$ quickly grows very complicated to work with, so I am stuck with proving it in the general case.
-Context: I wish to prove the Morse lemma in dimension 1. If I can show that $g$ is smooth, then its inverse $g^{-1}$ is a smooth change of coordinates near 0 (smooth by the Inverse Function Theorem) such that $f(g^{-1}(t)) = t^2$.

-REPLY [6 votes]: For fixed $x$, since $f(0) = 0$, using the helper function $h_x(t) = f(t\cdot x)$, we can write
-\begin{align}
-f(x) &= h_x(1) - h_x(0)\\
-&= \int_0^1 h_x'(t)\,dt\\
-&= \int_0^1 f'(t\cdot x)\cdot x\,dt\\
-&= x\cdot \underbrace{\int_0^1 f'(tx)\,dt}_{f_1(x)}.
-\end{align}
-Since $f$ is smooth, we can differentiate under the integral as often as we wish, hence $f_1$ is also smooth.
-Now $f_1(0) = f'(0) = 0$, so we can make the same construction with $f_1$,
-$$f_1(x) = x\cdot \underbrace{\int_0^1 f_1'(tx)\,dt}_{f_2(x)}.$$
-By the same argument as above, $f_2$ is smooth. With
-$$f_1'(x) = \int_0^1 s\cdot f''(sx)\,ds,$$
-we can write
-$$f_2(x) = \int_0^1 f_1'(tx)\,dt = \int_0^1 \int_0^1 s f''(stx)\,ds\,dt,$$
-which shows
-$$f_2(0) = \frac{1}{2} f''(0) > 0,$$
-and by continuity $f_2(x) > 0$ on some neighborhood $V$ of $0$. On $V$, we have $g(x) = x\cdot \sqrt{f_2(x)}$, and as the square root of a strictly positive smooth function, $\sqrt{f_2(x)}$ is smooth.
It follows that $g$ is smooth on $V$.<|endoftext|> -TITLE: Is there a closed-form expression for $\sum_{k=1}^{n}\lfloor k^{q} \rfloor$ for $q \in \mathbb{Q}_{> 0}$? -QUESTION [17 upvotes]: There are well-known closed-form expressions for summations such as $\sum_{k=1}^{n}\lfloor k^{\frac{1}{2}} \rfloor$, $\sum_{k=1}^{n}\lfloor k^{\frac{1}{3}} \rfloor$, $\sum_{k=1}^{n}\lfloor k^{\frac{1}{4}} \rfloor$, etc. For example, we have that $$\sum_{k=1}^{n}\lfloor k^{\frac{1}{3}} \rfloor = -\frac{1}{4} \left\lfloor\sqrt[3]{n}\right\rfloor \left( \left\lfloor\sqrt[3]{n}\right\rfloor^{3} + 2 \left\lfloor\sqrt[3]{n}\right\rfloor^{2} + \left\lfloor\sqrt[3]{n}\right\rfloor - 4(n+1) \right)$$ for all $n \in \mathbb{N}$. -However, Mathematica is unable to evaluate the sum $\sum_{k=1}^{n}\lfloor k^{\frac{2}{3}} \rfloor$. Furthermore, there is no closed-form expression for this summation given in the OEIS sequence http://oeis.org/A032514 corresponding to this sum. -More generally, Mathematica is not able to evaluate summations such as $\sum_{k=1}^{n}\lfloor k^{\frac{4}{3}} \rfloor$, $\sum_{k=1}^{n}\lfloor k^{\frac{3}{4}} \rfloor$, $\sum_{k=1}^{n}\lfloor k^{\frac{3}{7}} \rfloor$, etc. Letting $q \in \mathbb{Q}$ be positive, it appears that there is a known closed-form expression for $\sum_{k=1}^{n}\lfloor k^{q} \rfloor$ if and only if $q \in \mathbb{N}$ or $q$ is of the form $q = \frac{1}{r}$ where $r \in \mathbb{N}$. So it is natural to ask: -(1) Is there a closed-form expression for $\sum_{k=1}^{n}\lfloor k^{\frac{2}{3}} \rfloor$? -(2) More generally, is there a closed-form expression for $\sum_{k=1}^{n}\lfloor k^{q} \rfloor$ for $q \in \mathbb{Q}_{> 0} \setminus \mathbb{N} \setminus \left\{ \frac{1}{2}, \frac{1}{3}, \ldots \right\}$? - -REPLY [3 votes]: It's unlikely that there is a closed form expression for what you want. Consider the expression -$$ -\sum_{k=1}^{n}\left\lfloor f(k)\right\rfloor, -$$ -where $f(k)$ is a monotonically increasing function with $f(1)=1$. 
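-(Aside: identities of this kind are cheap to explore numerically. The rearrangement derived at the end of this answer, specialized to $f(k)=k^{2/3}$, checks out for small $n$; the integer-root helpers below are ad hoc, used to avoid floating-point error at perfect powers.)

```python
from math import isqrt

def icbrt(x):
    # integer cube root: largest r with r**3 <= x
    r = round(x ** (1 / 3))
    while r ** 3 > x:
        r -= 1
    while (r + 1) ** 3 <= x:
        r += 1
    return r

def ceil_m32(m):
    # ceil(m**(3/2)), computed as the integer square root of m**3
    s = isqrt(m ** 3)
    return s if s * s == m ** 3 else s + 1

lhs = 0
for n in range(1, 2001):
    t = icbrt(n * n)          # floor(n^(2/3)), since (n^2)^(1/3) = n^(2/3)
    lhs += t                  # running value of sum_{k<=n} floor(k^(2/3))
    rhs = (n + 1) * t - sum(ceil_m32(m) for m in range(1, t + 1))
    assert lhs == rhs
print("identity verified for n = 1..2000")
```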
Let $g(m)$ be the smallest integer value of $k$ such that $f(k)\ge m$; i.e., $g(m)=\left\lceil f^{-1}(m)\right\rceil$. Then the terms in the sum are at least as large as $1$ starting at $k = g(1)$, and at least as large as $2$ starting at $k= g(2)$, and so on. As long as $g(m)\le n$, each such $m$ contributes $n-g(m)+1$ to the sum. So
-$$
-\sum_{k=1}^{n}\left\lfloor f(k)\right\rfloor = \sum_{m=1}^{\lfloor f(n)\rfloor} \left(n + 1 - \left\lceil f^{-1}(m)\right\rceil\right)=(n+1)\lfloor f(n)\rfloor - \sum_{m=1}^{\lfloor f(n)\rfloor} \left\lceil f^{-1}(m)\right\rceil.
-$$
-This will simplify in certain cases where the inverse is integer-valued (so the ceiling function goes away) and "nice" (so the resulting sum can be evaluated). Two standard examples are when the inverse is a polynomial (e.g., $f(k)=k^{1/q}$, so $f^{-1}(m)=m^q$) and when the inverse is an exponential function (e.g., $f(k)=1+\log_b k$, so $f^{-1}(m)=b^{m-1}$). In your case you have
-$$
-\sum_{k=1}^{n}\left\lfloor k^{2/3} \right\rfloor=(n+1)\left\lfloor n^{2/3}\right\rfloor -
- \sum_{m=1}^{\lfloor n^{2/3}\rfloor} \left\lceil m^{3/2}\right\rceil,
-$$
-but this sum doesn't seem any easier to evaluate.<|endoftext|>
-TITLE: Geometry - Pentagon
-QUESTION [6 upvotes]: This is a tough problem and I have a hard time finding the answer. Can anyone help me? Thank you very much in advance.
-Problem - In regular pentagon $ABCDE$, point $M$ is the midpoint of side $AE$, and segments $AC$ and $BM$ intersect at point $Z$. If $ZA = 3$, what is the value of $AB$? Express your answer in simplest radical form.
-This is a drawing for the problem:

-REPLY [4 votes]: Let $F$ be the intersection of $AC$ and $BE$.
By angle chasing, we obtain the following picture:

-Now $\triangle BAF$ and $\triangle BEA$ are similar, so
-$$BF:BA = BA:BE,$$
-or
-$$\frac yx = \frac{x}{x+y}=\frac{1}{1+y/x}.\tag{1}$$
-Solving that for $y$ in terms of $x$ we have
-$$\frac yx = \frac{\sqrt{5}-1}2.$$
-Now apply Menelaus' Theorem to $\triangle AEF$ with $M$, $Z$, $B$ collinear; we get
-$$\frac{EM}{MA}\cdot \frac{ZA}{ZF}\cdot \frac{BF}{BE} = 1.\tag{2}$$
-Note that $EM = MA$,
-$$\frac{BF}{BE} = \frac{y}{x+y}=1-\frac{x}{x+y}=\frac{3-\sqrt{5}}2,$$
-and, since the same similarity gives $AF:EA = BF:BA$, i.e. $AF = y$,
-$$\frac{ZA}{ZF} = \frac{3}{y-3}.$$
-Thus, (2) becomes
-$$\frac{y-3}{3} = \frac{3-\sqrt{5}}2.$$
-So
-$$ y = \frac{9-3\sqrt{5}}{2}+3 = \frac{15-3\sqrt{5}}2.$$
-And we get
-$$ x = \frac xy \cdot y = \frac{\sqrt{5}+1}2 \cdot \frac{15-3\sqrt{5}}2 = 3\sqrt{5}.$$

-Note:

-I'm sure there are more elegant solutions.
-One can avoid Menelaus' Theorem by connecting $M$ with the midpoint $N$ of $AF$.<|endoftext|>
-TITLE: Differences between Quaternion integration methods
-QUESTION [8 upvotes]: I've implemented a quaternion Kalman filter and I have the choice between multiple ways to integrate angular velocities.
-The goal is to predict the future orientation $q^{n+1}$ from the current orientation $q^{n}$ and the angular velocity $\vec{r}$. During the time step $\Delta_t$ separating $q^{n+1}$ from $q^{n}$, the angular velocity is assumed to be constant.
-The first method transforms the angular velocity $\vec{r}=[r_x \ r_y \ r_z]$ into a quaternion $q_r$ and multiplies the current orientation quaternion $q^n$ by it:
-$$
-q_r =(a,\vec{v})\\
-a = \cos{(\frac{|\vec{r}|\Delta_t}{2})} \\
-\vec{v}=\sin{(\frac{|\vec{r}|\Delta_t}{2})}\frac{\vec{r}}{|\vec{r}|}\\
-q^{n+1}=q^{n}q_r
-$$
-The second method is based on the quaternion derivative formula:
-$$
-q_r =(0,\vec{r})\\
-q^{n+1}=q^n+ \Delta_t(\frac{1}{2}q^nq_r)
-$$
-What are the fundamental differences between the two approaches? What are their properties?
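-To make the two updates concrete, here is a minimal numerical sketch (pure Python, scalar-first quaternion convention $(w,x,y,z)$; all helper names are mine). It integrates a constant angular velocity with both methods and compares against a single large step of method 1, which for constant $\vec r$ composes exactly:

```python
import math

def qmul(p, q):
    # Hamilton product, scalar-first (w, x, y, z)
    pw, px, py, pz = p
    qw, qx, qy, qz = q
    return (pw*qw - px*qx - py*qy - pz*qz,
            pw*qx + px*qw + py*qz - pz*qy,
            pw*qy - px*qz + py*qw + pz*qx,
            pw*qz + px*qy - py*qx + pz*qw)

def step_method1(q, r, dt):
    # axis-angle quaternion for the rotation accumulated over dt
    n = math.sqrt(r[0]**2 + r[1]**2 + r[2]**2)
    if n == 0.0:
        return q
    h = 0.5 * n * dt
    s = math.sin(h) / n
    return qmul(q, (math.cos(h), s*r[0], s*r[1], s*r[2]))

def step_method2(q, r, dt):
    # Euler step on the derivative  q' = (1/2) q (0, r)
    dq = qmul(q, (0.0, r[0], r[1], r[2]))
    return tuple(qi + 0.5 * dt * dqi for qi, dqi in zip(q, dq))

q0 = (1.0, 0.0, 0.0, 0.0)
r = (0.3, -0.2, 0.5)             # constant angular velocity, rad/s
T, N = 2.0, 1000
ref = step_method1(q0, r, T)     # one big exact step over the whole interval

q1, q2 = q0, q0
for _ in range(N):
    q1 = step_method1(q1, r, T / N)
    q2 = step_method2(q2, r, T / N)

err1 = max(abs(a - b) for a, b in zip(q1, ref))
err2 = max(abs(a - b) for a, b in zip(q2, ref))
print(err1, err2)                # err1 near machine precision, err2 ~ 1e-4
```

Note that $|q_2|$ also drifts away from unity, while method 1 preserves the norm by construction.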
-I understand the second one is a simple Euler integration, a crude first order approximation assuming a fixed $q$ and $\vec{r}$ during integration. At the end, we can possibly have $|q^{n+1}|$ different from unity, so a normalization can be necessary. This kind of approach, based on the derivative, can easily be generalized to higher orders.
-On the other hand, I don't know what the first approach really is. It doesn't require any normalization. In which aspect is it an approximation? If this integration method is a numerical approximation, what is its order? Could an RK4 approximation be better?
-Here is a similar question asked on a programming thread. No clear answer was given.
-Here is a patent related to the first method: http://ntrs.nasa.gov/archive/nasa/casi.ntrs.nasa.gov/20080004113.pdf
-Best,
-
-REPLY [2 votes]: Method 1:
-As was pointed out, the first method uses the angular velocity to calculate a difference-orientation $q_r$, which accumulates as a result of the constant angular velocity.
-This method is an Euler-approximation of the integral of the angular velocity!
-$$\Delta \alpha = |\vec{r}| \Delta_t$$
-is a first order approximation of this difference angle! In the next step this angle is transformed into the quaternion $q_r$, and then this difference rotation is applied to the current rotation.
-The benefit is a correct integration for constant velocities.
-The drawback is the computational effort of using trigonometric functions.
-Method 2: Here, the calculations are ideal up until the derivative of the quaternion
-$$\dot{q} = \frac{1}{2} q \, q_r $$
-which has no simplifications in it. Now at this (later) point you apply the approximations that 1) the angular velocity is constant and 2) that you can integrate this quaternion derivative by a first order method. You can easily use a higher order method for integrating this last equation, yes! RK4 is a valid choice.
-Conclusion: Both methods include a first order integration.
The first one yields exact rotations for constant velocities and, unlike the latter approach, preserves the unit norm of the quaternion. Still, with a higher order integrator I would prefer the second approach, since it is computationally more efficient. This is the main point of method 2! You can implement it on really cheap hardware with high speed, even with fixed-point arithmetic!<|endoftext|>
-TITLE: If a family of straight lines is $\lambda^2 P+\lambda Q+R=0$, then the family of lines will be tangent to the curve $Q^2=4PR.$
-QUESTION [6 upvotes]: I have read this theorem in my book but I do not know how to prove it.
-If a family of straight lines can be represented by an equation $\lambda^2 P+\lambda Q+R=0$ where $\lambda $ is a parameter and $P,Q,R$ are linear functions of $x$ and $y$, then the family of lines will be tangent to the curve $Q^2=4PR.$
-My try:
-Let $P:a_1x+b_1y+c_1=0$
-$Q:a_2x+b_2y+c_2=0$
-$R:a_3x+b_3y+c_3=0$
-Then the family of straight lines can be represented by an equation $\lambda^2 (a_1x+b_1y+c_1)+\lambda (a_2x+b_2y+c_2)+(a_3x+b_3y+c_3)=0$
-$x(\lambda^2a_1+\lambda a_2+a_3)+y(\lambda^2b_1+\lambda b_2+b_3)+(\lambda^2c_1+\lambda c_2+c_3)=0$
-But I do not know how to prove that the family of lines will be tangent to the curve $Q^2=4PR.$
-REPLY [2 votes]: To find the envelope by the C-discriminant method, we eliminate $\lambda$ between
-$$f(\lambda) \;=\; \lambda^2 P + \lambda Q + R =0, \; f^\prime(\lambda) \;=\; 2\lambda P + Q =0 \; $$
-(the latter is the characteristic equation for the parameter $\lambda$),
-yielding:
-$$ Q^2 = 4 P R $$
-The method can be extended to an arbitrary number of parameters.
-The method and result are the same as those given by Blue.<|endoftext|>
-TITLE: Mayer Vietoris for locally finite singular homology
-QUESTION [7 upvotes]: Usually one defines the traditional singular homology (say with coefficients in $\mathbb{Z}$, on a topological space $X$) by using singular $p$-chains.
A singular $p$-chain is a finite formal sum $\sum c_{\sigma} \sigma$ where the $\sigma$ are $p$-simplices in $X$ and the $c_{\sigma}$ are integers. A well-known theorem, usually proven with barycentric subdivision, is the Mayer-Vietoris theorem.
-Theorem: Let $\{U,V\}$ be an open covering of $X$. Then one has the following long exact sequence $$\cdots \rightarrow H_{n+1}(X) \rightarrow H_{n}(U\cap V) \rightarrow H_{n}(U) \oplus H_{n}(V) \rightarrow H_{n}(X) \rightarrow H_{n-1} (U\cap V) \rightarrow \cdots$$
-Now my question is the following one. Instead of defining usual singular homology, one can define locally finite singular homology, denoted $H^{lf}_{\cdot}(X)$, by using infinite formal sums $\sum c_{\sigma} \sigma$ which are locally finite. That means that for every $x$ there is a neighbourhood $V_x$ which meets only finitely many of the supports $\operatorname{supp}(\sigma)$. Does Mayer-Vietoris hold in this context? I.e., do we have a long exact sequence $$\cdots \rightarrow H^{lf}_{n+1}(X) \rightarrow H^{lf}_{n}(U\cap V) \rightarrow H^{lf}_{n}(U) \oplus H^{lf}_{n}(V) \rightarrow H^{lf}_{n}(X) \rightarrow H^{lf}_{n-1} (U\cap V) \rightarrow \cdots$$ for any open covering $\{U,V\}$ of $X$? This seems not easy to me because barycentric subdivision will probably not work as in the finite case.
-Any help or recommendation for a book that works with locally finite homology will be much appreciated.
-
-REPLY [3 votes]: No, in the case of $H^{lf}_\bullet$, Mayer-Vietoris doesn't hold.
-For locally compact $X$ it is well-known that $H^{lf}_n(X)=\widetilde H_n(\overline X)$, where $\overline X$ is the one-point compactification. To construct a counterexample, take $X=S^3$, and let $U$ and $V$ be $S^3\setminus x$ and $S^3\setminus y$ for $x\ne y$. As is easy to see, $\overline{U\cap V}=S^3\vee S^1$, so $H^{lf}_1(U\cap V)=\mathbb Z$, and $\overline{U}=\overline{V}=S^3$.
-Finally, consider the fragment of the desired sequence
-$$
-\ldots\to H^{lf}_2(X)\to H^{lf}_1(U\cap V)
-\to H^{lf}_1(U)\oplus H^{lf}_1(V)\to\ldots
-$$
-which, in light of the above, becomes
-$$
-\ldots\to0\to\mathbb Z\to0\to\ldots
-$$<|endoftext|>
-TITLE: Coefficients of Mystery Polynomial
-QUESTION [10 upvotes]: Let's play a game. I have a polynomial $f(x)$ which has nonnegative integer coefficients. You can ask me for the value of $f(n)$, where $n$ is a nonnegative integer. In how many queries can you determine the coefficients of my polynomial?
-This problem isn't so bad. First you can ask for $f(1)$, which is greater than or equal to each individual coefficient of my polynomial. Then you can ask for $f(f(1))$ and express it in base $f(1)$, which works because we've already determined that $f(1)$ is at least as large as each coefficient, so at each digit there is no "bleed-over" into the next digit.
-Quick example, just to confirm I'm not crazy:
-$$f(x) = 2x^3 + 2x + 3$$
-$$f(1) = 7$$
-$$f(7) = 703$$
-$$703_7 = 2023$$
-My question is this: one of our assumptions is that $f$ has nonnegative coefficients. What happens if we relax this constraint to say the coefficients can be any integer?
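For concreteness, the two-query strategy can be sketched in code. (A toy version; note I use base $f(1)+1$ instead of $f(1)$, so that the base is strictly larger than every coefficient — with base exactly $f(1)$, a monomial such as $f(x)=2x$ would decode incorrectly.)

```python
def guess_coeffs(f):
    # Query 1: b = f(1) + 1 exceeds every coefficient, since each
    # coefficient is at most f(1), the sum of all of them.
    b = f(1) + 1
    # Query 2: the base-b digits of f(b) are the coefficients,
    # lowest degree first.
    n, coeffs = f(b), []
    while n:
        n, c = divmod(n, b)
        coeffs.append(c)
    return coeffs or [0]

guess_coeffs(lambda x: 2*x**3 + 2*x + 3)   # -> [3, 2, 0, 2]
```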
-My intuition is that we should still be able to guess the polynomial in finitely many steps, but we can't use the method outlined above because we have no guarantee $f(1)$ will be larger than each coefficient - in fact, $f(1)$ may be zero.
-What happens if we relax the constraint to be nonnegative rational coefficients, instead?
-REPLY [3 votes]: It is not possible to determine $f$ in finitely many queries if its coefficients are allowed to be arbitrary integers. Indeed, suppose you have queried the values of $f(n_1),\dots,f(n_k)$ so far. Then the polynomial $g(x)=f(x)+(x-n_1)(x-n_2)\dots(x-n_k)$ is another polynomial with integer coefficients with $g(n_1)=f(n_1),\dots,g(n_k)=f(n_k)$. So, you cannot yet determine $f$ uniquely.<|endoftext|>
-TITLE: How to prove that $ {\mathbf{GL}_{n}}(\mathbb{R}) $ is dense in $ {\mathbf{M}_{n}}(\mathbb{R}) $
-QUESTION [10 upvotes]: This question is inspired by an old question on here.
-
-Prove:
-$ {\mathbf{GL}_{n}}(\mathbb{R}) $ is dense in $ {\mathbf{M}_{n}}(\mathbb{R}) $
-
-Proof:
-$ {\mathbf{GL}_{n}}(\mathbb{R}) = \{A \in {\mathbf{M}_{n}}(\mathbb{R}) : \det A \neq 0\}$ where $ {\mathbf{M}_{n}}(\mathbb{R}) $ consists of all $n \times n$ matrices with real entries.
-We want to prove that $ {\mathbf{GL}_{n}}(\mathbb{R}) $ is dense in $ {\mathbf{M}_{n}}(\mathbb{R}) $.
-$ {\mathbf{GL}_{n}}(\mathbb{R}) $ is dense in $ {\mathbf{M}_{n}}(\mathbb{R}) $ if every matrix $A \in {\mathbf{M}_{n}}(\mathbb{R}) $ either belongs to $ {\mathbf{GL}_{n}}(\mathbb{R}) $ or is a limit point of $ {\mathbf{GL}_{n}}(\mathbb{R}) $.
-A matrix $P$ is a limit point of $ {\mathbf{GL}_{n}}(\mathbb{R}) $ in $ {\mathbf{M}_{n}}(\mathbb{R}) $ if and only if every open set $U$ containing $P$ also contains a point of $ {\mathbf{GL}_{n}}(\mathbb{R}) $ different from
-$P$.
-
-And now, when we should start to think, I am not sure what to do, as I have never worked with $ {\mathbf{GL}_{n}}(\mathbb{R}) $ or $ {\mathbf{M}_{n}}(\mathbb{R}) $ before.
-I would appreciate a hint or a few on how to continue.
-
-REPLY [4 votes]: Let $A \in M_n(K)$, with $K=\mathbb{R}$ or $\mathbb{C}$; then $\chi_A(\lambda)=\det\left(A-\lambda I_n \right)$ is a nonzero polynomial, so it has a finite number of roots. So there exists $p \in \mathbb{N}$ such that $\chi_A(1/k)\neq 0$ for all $k>p$, and then the sequence $$\left(A-\frac{1}{k} \cdot I_n\right)_{k>p}\longrightarrow A$$
-is made of invertible matrices.
-QED.<|endoftext|>
-TITLE: Prove that $f(1999)=1999$
-QUESTION [21 upvotes]: A function $f$ maps from the positive integers to the positive integers, with the following properties:
-$$f(ab)=f(a)f(b)$$ where $a$ and $b$ are coprime, and
-$$f(p+q)=f(p)+f(q)$$ for all prime numbers $p$ and $q$. Prove that $f(2)=2, f(3)=3$, and $f(1999)=1999$.
-It is simple enough to prove that $f(2)=2$ and $f(3)=3$, but I'm struggling with $f(1999)=1999$.
-I tried proving the general solution of $f(n)=n$ for all $n$ with a proof by contradiction: suppose $x$ is the smallest $x$ such that $f(x)1$. Write $2n = q+q'$ with $q,q'$ prime and $q' < n$. We still need to check that $f(q) = q$, but we can now apply the same argument as case 2 to see this, with the caveat that if $q' = 3$ we should instead show that $f(q+5)=q+5$ so as to avoid needing the $n$ case.<|endoftext|>
-TITLE: Why does Hartshorne require that a scheme is separated for defining Weil Divisors?
-QUESTION [8 upvotes]: When defining Weil divisors, Hartshorne considers only schemes which are integral, noetherian, regular in codimension 1 and separated. I am wondering if one can drop the condition that the scheme is separated, as I don't see how this plays a role in the usual definitions and properties of divisors. Is there anything that goes terribly wrong? :)
-(I am aware that the other conditions are not all relevant for every definition).
-Thank you!
- -REPLY [5 votes]: Added: I noticed that your question was also about the usual definitions of Weil divisors, thus I want to add the following comment: -When $X$ is nonseparated, a discrete valuation on $k(X)$ no longer determines a unique prime divisor on $X$ (Hartshorne Ex.II.4.5). In particular for the line with infinitely many origins, the principal divisor corresponding to $1/t \in k(X) = k(t)$ is an infinite sum. Stacks Project gets around this by defining Weil divisors to be locally finite sums - when $X$ is quasi-compact this recovers the original definition. -New answer: -I flipped through Hartshorne section II.6, and nothing requires separatedness there that I could see. The following doesn't precisely address the question, but I added it because it helped me lose interest in the question :). -Fitting with our intuition for the line with doubled origin, Stacks Project Lemma 27.29.3 says that if $X$ is quasicompact then we can find a dense open subscheme $U \subseteq X$ which is separated. Then by Hartshorne Prop. 6.5 there is a surjection $Cl(X) \to Cl(U)$, which is an isomorphism when codim$_XZ > 1$. So, in many examples the class group is the same or not much different than that of something separated. Lemma 27.29.3 is also still satisfied for the easiest of non-quasicompact examples, like affine space with infinitely many origins. -If you are hunting for pathologies, maybe you should look for something that fails 27.29.3 then. Still though, I'm not sure what properties you would look for to fail since I think everything in Hartshorne is still true. - -Upon closer inspection, the paper I linked to below has more to do with nonreducedness and is about Cartier divisors rather than Weil divisors. My apologies for that. (In particular, the paper is about when Hartshorne Prop. 6.15 fails, and 2.2 is a non-reduced scheme example, which so happens to also be nonseparated.) 
-
-Original answer:
-I don't know of something specific that goes wrong, so maybe this will not answer your question - my apologies if none of this is news to you.
-I want to point out that at The Stacks Project Weil divisors are defined for $X$ a locally Noetherian integral scheme. I think these assumptions are just enough to make principal divisors well-defined; I recommend looking through the Stacks Project chapter for more information.
-Also, when someone is wondering "why did Hartshorne make the assumption _____" (which happens often), they should go to the Stacks Project to see the most general thing possible.
-Update:
-Have you seen this paper of Schroer? Example (2.2) might be what you are looking for.<|endoftext|>
-TITLE: Closed form solution for the zeros of an infinite sum
-QUESTION [10 upvotes]: Does there exist a closed form expression for the zeros of the following equation?
-$$\sum\limits_{n=1}^\infty\frac{1}{n^4 - x^2} = 0 \text{ where } x \in \rm \mathbb R$$
-Could you suggest a numerical method for calculating the approximate values of these zeros if such a solution doesn't exist?
-REPLY [6 votes]: If you decompose $\frac{1}{n^4 - x^2}$ into $\frac{1}{2x}(\frac{1}{n^2 - x} - \frac{1}{n^2 + x})$, and compute both series independently (everything converges absolutely, so it is OK to do so), you'd end up with an equation
-$$\pi\sqrt{x}(\coth{\pi\sqrt{x}} + \cot{\pi\sqrt{x}}) = 2$$
-which doesn't look promising for a closed form. It might be a good starting point for a numerical solution.<|endoftext|>
-TITLE: Asymptotic expression for sum of first n prime numbers?
-QUESTION [10 upvotes]: Is one known? If not, what are the best known bounds? Is there reason to think that an asymptotic expression is beyond current methods if none exists?
-REPLY [7 votes]: Maybe it is interesting to see an easy proof of the simplest approximation.
From the PNT and Abel summation we have $$\sum_{k=1}^{n}p_{k}\sim\sum_{k=1}^{n}k\log\left(k\right)$$
-$$\sum_{k=1}^{n}k\cdot\log\left(k\right)
-=\frac{n\left(n+1\right)}{2}\log\left(n\right)-\frac{1}{2}\int_{1}^{n}\frac{\left\lfloor t\right\rfloor \left(\left\lfloor t\right\rfloor +1\right)}{t}dt
- $$ $$=\frac{n^{2}}{2}\log\left(n\right)+O(n^{2})
- $$ where $\left\lfloor t\right\rfloor$ is the floor function, hence
-
-$$\color{red}{\sum_{k=1}^{n}p_{k}\sim\frac{n^{2}}{2}\log\left(n\right).}$$<|endoftext|>
-TITLE: What's the least class of ordinals closed under successor and the limits of omega-sequences?
-QUESTION [5 upvotes]: What is the smallest class $S$ of ordinals that contains $0$ and is closed under successor and the limits of omega-sequences, i.e.,
-$$
-\forall i \in \omega [\alpha_i \in S] \Rightarrow \sup_{i} \alpha_i \in S.
-$$
-Will it be an initial segment of the class of all ordinals? Or will it contain all ordinals of cofinality $\aleph_0$? What other ordinals does $S$ contain?
-Background: I've been reading Chapter 6 of Proofs and Computations by Schwichtenberg and Wainer, where they explain the generalized recursion theory in the abstract setting, in the style of Scott/Plotkin. There, they give a type-theoretic definition of ordinals: the type of ordinals is
-$$
-\mathbf O := \mu_\xi(\xi, \xi\rightarrow\xi, (\mathbf N\rightarrow\xi)\rightarrow\xi),
-$$
-where $\mathbf N$ is the type of naturals. I assume this consists of three constructors, each of which corresponds to zero, successor and limits, respectively, and I believe $\mathbf O$ corresponds to the class $S$ defined above.
-
-REPLY [6 votes]: This gives you exactly the set of all countable ordinals (assuming the axiom of countable choice). Indeed, you can prove by induction that $S$ contains all countable ordinals (every countable limit ordinal is the supremum of all the ordinals below it, which form a countable set).
On the other hand, any successor of a countable ordinal is countable and any limit of a countable collection of countable ordinals is countable, since a countable union of countable sets is countable. -As Noah commented, this argument uses the axiom of countable choice to conclude that a countable union of countable sets is countable, and in $ZF$ it is actually possible for $\omega_1$ to be a countable union of countable sets, so $S$ would contain some uncountable ordinals. However, even in $ZF$, $S$ will always be an initial segment of the ordinals. Indeed, if $\beta$ is the least ordinal not in $S$, then let $S'$ be the set of ordinals less than $\beta$. If $\alpha>\beta$, then $\alpha$ cannot be the sup of any collection of ordinals in $S'$, since $\beta$ is a smaller upper bound for any such collection. Similarly, $\alpha$ cannot be the successor of any ordinal in $S'$. Thus $S'$ is also closed under your operations, so $S'$ must be all of $S$. -I don't know whether it's consistent with $ZF$ for $S$ to be all of the ordinals.<|endoftext|> -TITLE: Are derivatives of geometric progressions all irreducible? -QUESTION [22 upvotes]: Consider the polynomials $P_n(x)=1+2x+3x^2+\dots+nx^{n-1}$. Problem A5 in 2014 Putnam competition was to prove that these polynomials are pairwise relatively prime. In the solution sheet there is the following remark: - -It seems likely that the individual polynomials $P_k(x)$ are all irreducible, but this appears difficult to prove. - -My question is exactly about this: is it known if all these polynomials are irreducible? Or is it an open problem? -Thanks in advance. - -REPLY [22 votes]: In the article Classes of polynomials having only one non-cyclotomic irreducible factor the authors (A. Borisov, M. Filaseta, T. Y. Lam, and O. 
Trifonov) proved that for any $\epsilon > 0$, for all but $O(t^{1/3+\epsilon})$ positive integers $n\leq t$, the derivative of the polynomial $f(x)= 1+ x + x^2 + \cdots + x^n$ is irreducible; in general, they conjectured that $f'(x)$ is irreducible for all $n\in \mathbb N$.<|endoftext|>
-TITLE: Analytical solution for rational equality, square root in denominator on both sides
-QUESTION [5 upvotes]: So I was trying to solve a problem I saw in a practice set for a 6th-grade math competition, as far as I can remember it. It was a story problem, but I think the solution is the minimum value of
-$$
-\sqrt{x^2 + 25} + \sqrt{(5-x)^2 + 49}
-$$
-for $x$ in the interval $[0,5]$. I know the derivative at a minimum should be zero, so
-$$
-{x \over \sqrt{x^2+25}} + {x-5 \over \sqrt{(5-x)^2 + 49}} = 0
-$$
-I can see by experimentation that the answer is close to $13$, with $x$ close to $2.08$. However, I get stuck after that. If I write it as
-$$
-{x \over \sqrt{x^2+25}} = {5-x \over \sqrt{(5-x)^2 + 49}}
-$$
-can I multiply both sides by the product of the denominators? Can I square both sides? When I do that, I get a fourth-degree polynomial on each side, but the fourth-degree and third-degree terms cancel out, and then I'm left with a quadratic equation whose roots are nowhere near the interval I need.
-More generally, is there an analytical solution for
-$$
-\min\left(\sqrt{x^2+y_0^2} + \sqrt{(x_0-x)^2+y_1^2}\right)
-$$
-where $y_0$, $y_1$, and $x_0$ are positive constants?
-REPLY [4 votes]: Not many Grade 6 students are comfortable with calculus, so we solve the problem in another way.
-Draw the point $P=(0,5)$ and the point $Q=(5,7)$. We want the point $X=(x,0)$ on the $x$-axis such that the sum of the distances $PX$ and $XQ$ is a minimum. (We also want $0\le x\le 5$, but that will turn out to be irrelevant.)
-Let $P'=(0,-5)$. By symmetry we want $P'X+XQ$ to be a minimum.
-Join $P'$ and $Q$. Putting $X$ where $P'Q$ meets the $x$-axis minimizes the sum of the distances.
(The straight line path from $P'$ to $Q$ minimizes the distance travelled.)
-By the Pythagorean Theorem, the distance $P'Q$ is $\sqrt{5^2+12^2}=13$.<|endoftext|>
-TITLE: What is known about this group reminiscent of the anharmonic group?
-QUESTION [6 upvotes]: The anharmonic group is this nonabelian group of six rational functions with the operation of composition of functions:
-\begin{align}
-t & \mapsto t & & \text{order 1} \\[8pt]
-t & \mapsto 1/t & & \text{order 2} \\
-t & \mapsto 1-t & & \text{order 2} \\
-t & \mapsto t/(t-1) & & \text{order 2} \\[8pt]
-t & \mapsto 1/(1-t) & & \text{order 3} \\
-t & \mapsto (t-1)/t & & \text{order 3}
-\end{align}
-The reason it is called "anharmonic" appears to be that a set of four numbers is said to divide the line harmonically if their cross-ratio is $-1$, and so the cross-ratio measures deviation from harmonic division, and when four numbers with cross-ratio $t$ are permuted, this group gives the six values that the cross-ratio can take. The members of this group permute the elements $0$, $1$, and $\infty$ of $\mathbb C\cup\{\infty\}$.
-Today I noticed that something very similar-looking forms a group of four elements, each of the three non-identity elements having order $2$:
-\begin{align}
-t & \mapsto t \\
-t & \mapsto -1/t \\
-t & \mapsto (1-t)/(1+t) \\
-t & \mapsto (t+1)/(t-1)
-\end{align}
-
-Can anything interesting be said about this, including, but not limited to, relevance to geometry, algebra, combinatorics, probability, number theory, physics, or engineering?
-What other finite groups of rational functions as simple as these exist?
-REPLY [4 votes]: The group of invertible rational functions, which explicitly consists of Mobius transformations $z \mapsto \frac{az + b}{cz + d}$, is abstractly the group $PSL_2(\mathbb{C})$. Its finite subgroups are known: by an averaging argument they correspond to finite subgroups of $PSU(2) \cong SO(3)$, the group of orientation-preserving isometries of the sphere $S^2$.
There are two infinite sequences of such subgroups, the cyclic groups $C_n$ and the dihedral groups $D_n$, and then three "exceptional" groups given by the symmetry groups of the Platonic solids:
-
-$A_4$, the symmetry group of the tetrahedron.
-$S_4$, the symmetry group of the cube and the octahedron.
-$A_5$, the symmetry group of the icosahedron and dodecahedron.
-
-The two groups you've identified are copies of the dihedral subgroups $D_3$ and $D_2$ respectively. There are many copies of the cyclic and dihedral groups, each corresponding to rotations about a different axis.
-For more on this the keyword is the McKay correspondence. These finite groups also show up in Galois theory because each of these finite groups $G$ acts on $\mathbb{C}(t)$ by automorphisms, and so we get $\mathbb{C}(t)$ as a Galois extension of $\mathbb{C}(t)^G$ with Galois group $G$.<|endoftext|>
-TITLE: What is the sum of the reciprocal of all of the factors of a number?
-QUESTION [12 upvotes]: Suppose I have some operation $f(n)$ that is given as
-
-$$f(n)=\sum_{k\ge1}\frac1{a_k}$$
-
-where $a_k$ is the $k$th factor of $n$.
-For example, $f(100)=\frac11+\frac12+\frac14+\frac15+\frac1{10}+\frac1{20}+\frac1{25}+\frac1{50}+\frac1{100}=\frac{217}{100}$
-$f(101)=\frac11+\frac1{101}=\frac{102}{101}$
-$f(102)=\frac11+\frac12+\frac13+\frac16+\frac1{17}+\frac1{34}+\frac1{51}+\frac1{102}=\frac{216}{102}$
-I was wondering if it were possible to plot a graph of $f(n)$, and wondered if there were any interesting patterns. I was also wondering if there is a closed form representation, and whether $\lim_{n\to\infty}f(n)$ could be evaluated or determined to be finite or not, or whether any other interesting things might happen in this limit.
-Secondly, I was wondering about another similar series, which considers $b_k$ as the $k$th prime factor of $n$.
-
-$$p(n)=\sum_{k\ge1}\frac1{b_k}$$
-
-What can we determine about this series?
-REPLY [3 votes]: Here is the plot of $f(n)=\frac{\sigma(n)}{n}$.
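(To reproduce such a plot, note that the reciprocals of the divisors of $n$ sum to $\sigma(n)/n$, because $d \mapsto n/d$ permutes the divisors. A quick sketch of the computation — the plotting itself is omitted:)

```python
from fractions import Fraction

def f(n):
    # Sum of reciprocals of all divisors of n, as an exact fraction.
    # Since 1/d = (n/d)/n and d -> n/d permutes the divisors,
    # this equals sigma(n)/n.
    return sum(Fraction(1, d) for d in range(1, n + 1) if n % d == 0)

# f(100) == Fraction(217, 100); note that Fraction reduces, so
# f(102) is stored as 36/17, which equals the 216/102 above.
```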
-Patterns in these plots are amazing!
-For $n \le 1000$:
-
-And here is the plot for $n \le 100{,}000$:
-
-These diagrams suggest that $\lim_{n\to \infty} f(n)$ does not exist.<|endoftext|>
-TITLE: Generalizing $f(n)=\int_0^\infty \frac{1}{e^{x^n}+1}=\left(1-2^{(n-1)/n}\right )\zeta(n^{-1})\Gamma(1+n^{-1})$
-QUESTION [5 upvotes]: I have come up with the following solution to this integral, but it is just incomplete by my standards:
-$$f(n)=\int_0^\infty \frac{dx}{e^{x^n}+1}=\left(1-2^{(n-1)/n}\right )\zeta(n^{-1})\Gamma(1+n^{-1})$$
-It seems to only work for $n\in\Bbb{N}$, $n\ge 2$.
-This identity therefore does not apply to $n=1$, and we all know that $f(1)=\ln 2$, because $\zeta(1)$ diverges.
-So my question is this: How can you generalize the integral solution I gave to fit the case $n=1$?
-REPLY [3 votes]: $$
-\begin{align}
-\int_0^\infty\frac{\mathrm{d}x}{e^{x^n}+1}
-&=\frac1n\int_0^\infty\frac{x^{\frac1n-1}\,\mathrm{d}x}{e^x+1}\\
-&=\frac1n\int_0^\infty x^{\frac1n-1}\sum_{k=1}^\infty(-1)^{k-1}e^{-kx}\,\mathrm{d}x\\
-&=\frac1n\sum_{k=1}^\infty\frac{(-1)^{k-1}}{k^{1/n}}\int_0^\infty x^{\frac1n-1}e^{-x}\,\mathrm{d}x\\[3pt]
-&=\frac1n\eta\left(\frac1n\right)\Gamma\left(\frac1n\right)\tag{1}
-\end{align}
-$$
-where $\eta(s)$ is the Dirichlet eta function:
-$$
-\begin{align}
-\eta(s)
-&=\sum_{n=1}^\infty\frac{(-1)^{n-1}}{n^s}\\[6pt]
-&=\left(1-2^{1-s}\right)\zeta(s)\tag{2}
-\end{align}
-$$
-and as you say, $\eta(1)=\log(2)$.
-See this recent answer.<|endoftext|>
-TITLE: In how many dimensions is the full-twisted "Mobius" band isotopic to the cylinder?
-QUESTION [13 upvotes]: There is a question on this site about the distinctions between the full-twisted Mobius band and the cylinder, but I would like to ask something different, so I start a new question.
-Let us call $C$ the standard cylinder embedded in $\mathbb{R}^3$, and call $F$ the full-twisted Mobius band (aka. the Mobius band with two "half-twists", or the Mobius band with a 360 degree twist), also embedded in $\mathbb{R}^3$.
$C$ and $F$ are topologically homeomorphic to each other, but they are not isotopic within $\mathbb{R}^3$. Now think of $\mathbb{R}^3$ as a subspace of some higher dimensional Euclidean space $\mathbb{R}^n$ where $n\geq 3$, and ask if $C$ and $F$ are isotopic in $\mathbb{R}^n$. Obviously if $n \geq 6$ then $C$ and $F$ are isotopic. But what about $n = 4$ or $5$? Is there any chance that they are still isotopic, or can one prove that $n=6$ is the smallest number of dimensions in which $C$ and $F$ are isotopic? Any help or idea is appreciated. Thanks! - -REPLY [7 votes]: I think they are isotopic in $\mathbb R^4$. Consider a circle embedded smoothly in $\mathbb R^4$. It has a normal bundle with fiber $D^3$ and boundary $S^2\times S^1$. Given a band, such as the twisted or untwisted ones in the question, you can think of one boundary component of the band as an embedded circle in $\mathbb R^4$, while the other boundary component lies in the boundary of the normal bundle. This gives a loop in $\pi_1(S^2)$. Such a loop can be homotoped to a point, since $S^2$ is simply connected, and this homotopy will give a fiber-preserving isotopy untwisting any band.<|endoftext|> -TITLE: How to tell what dimension an object is? -QUESTION [35 upvotes]: I was reading about dimensions and in the Wikipedia article it states the following: - -In mathematics, the dimension of an object is an intrinsic property independent of the space in which the object is embedded. For example, a point on the unit circle in the plane can be specified by two Cartesian coordinates, but a single polar coordinate (the angle) would be sufficient, so the circle is $1$-dimensional even though it exists in the $2$-dimensional plane. This intrinsic notion of dimension is one of the chief ways the mathematical notion of dimension differs from its common usages." -Taken from Dimension article at Wikipedia. 
-
-How could I know if a square is a $2$D object (which I'm sure it probably is), or that a circle is $1$D (which I previously believed to be $2$D)?
-Is there a way to tell, for any object, what dimension it is?
-I don't really know how to tag this question, sorry.
-
-REPLY [13 votes]: While goblin's answer is right, it's a little specific. A more common notion is covering dimension, which both applies in more situations and is a little easier to understand.
-First we start with the notion of an open set. An open set is one that does not contain any points of its boundary. For example, the set of points less than $1$ away from $(0,0)$ is open, because the boundary is the set of points exactly $1$ away from $(0,0)$, and that set is disjoint from the first one. The set of points less than or equal to $1$ away from $(0,0)$ is not open, since it contains some points (in fact, all of the points) of its boundary.
-Now, if you take a line and cover it with blobs that are open sets, there will be some points covered by $2$ blobs. Since open sets don't contain their boundaries, if you try putting them right next to each other, they either have to overlap or have space in between them. If you do the same with the plane, some points will be covered by $3$ blobs. If you do the same with space, some will be covered by $4$, etc...
-So basically, the dimension of the space is the max number of blobs covering a single point, minus $1$.
-We need to take one more thing into account though. Take a circle. If you cover it with tiny blobs, some of its points will have to be covered by $2$ blobs, so it is $1$ dimensional. But also notice that you could cover the entire circle with one big blob, suggesting it is $0$ dimensional. For this reason, we require the blobs to be arbitrarily small.
-But what if we don't have a notion of size in our space? That's fine.
What we say is that a space has dimension at most $d$ if, when you cover the entire thing with blobs that are open sets, you can shrink the open sets (replace each with a subset that is also an open set, still covering everything) so that each point is covered by at most $d+1$ blobs; the dimension is the smallest such $d$.
-Notice that all this required was a notion of boundaries of sets. We didn't need to define distance or anything else, just boundaries.
-Also note that if we viewed the circle as a space in and of itself, this still works. The boundary of a set of points on the circle is just the set of its endpoints. The circle's dimension will be the same whether considered as part of the plane or as a space unto itself.
-See https://en.wikipedia.org/wiki/Lebesgue_covering_dimension for more details.<|endoftext|>
-TITLE: Is there a deeper meaning to the identity ${{\sin x}\over x} = \prod_{k=1}^{\infty} \cos\left({x\over{2^k}}\right)$?
-QUESTION [9 upvotes]: Is there any deeper meaning to the trigonometric identity
-$${{\sin x}\over x} = \prod_{k=1}^{\infty} \cos\left({x\over{2^k}}\right)$$
-beyond it corresponding to characteristic functions of i.i.d. random variables?
-
-REPLY [13 votes]: I give 2 proofs, showing that there is something in this formula, beyond its intrinsic beauty.
-
-First proof:
-Let us consider the formula under the form:
-$$\frac{\sin \pi x}{\pi x} = \prod_{k=1}^{\infty}\cos{\frac{\pi x}{2^k}}= \lim_{n \rightarrow \infty}{\prod_{k=1}^{n}\cos{\frac{\pi x}{2^k}}}$$
-Its Fourier Transform (using Euler's formula $\cos(a)=\frac{1}{2}(e^{ia}+e^{-ia})$ and the fact that a product is transformed into a convolution) is:
-$$\mathbb{1}_{[-\frac{1}{2},\frac{1}{2}]}(u) = \lim_{n \rightarrow \infty}{{\Huge \ast}_{k=1}^{n}\frac{1}{2}\left(\delta(u+\frac{1}{2^{k+1}})+\delta(u-\frac{1}{2^{k+1}})\right)} \ \ \ (*)$$
-Now, let us expand the convolution product ${\Huge \ast}_{k=1}^{n}$ on the RHS in the particular case $n=3$ in order to understand what is going on, i.e.,
-(using the property $\delta(u-a)*\delta(u-b)=\delta(u-(a+b))$):
-$$\frac{1}{8}\left(\delta(u+\frac{1}{4})+\delta(u-\frac{1}{4})\right)*\left(\delta(u+\frac{1}{8})+\delta(u-\frac{1}{8})\right)*\left(\delta(u+\frac{1}{16})+\delta(u-\frac{1}{16})\right)$$
-$$=\frac{1}{8}\left(\delta(u+\frac{1}{4}+\frac{1}{8}+\frac{1}{16})+\delta(u+\frac{1}{4}+\frac{1}{8}-\frac{1}{16})+\cdots+\delta(u-\frac{1}{4}-\frac{1}{8}-\frac{1}{16})\right)$$
-$$=\frac{1}{8}\left(\delta(u+\frac{7}{16})+\delta(u+\frac{5}{16})+\cdots+\delta(u-\frac{7}{16})\right)$$
-The general case being:
-$${\Huge \ast}_{k=1}^{n}\frac{1}{2}\left(\delta(u+\frac{1}{2^{k+1}})+\delta(u-\frac{1}{2^{k+1}})\right)=\frac{1}{2^n}\sum_{\substack{k=-2^{n}+1\\ k \text{ odd}}}^{2^{n}-1}\delta(u+\frac{k}{2^{n+1}})$$
-Explanation: each term obtained in the expansion of the product is a sum $\pm\frac{1}{4}\pm\frac{1}{8}\pm\cdots\pm\frac{1}{2^{n+1}}$, i.e. (reading the signs as a base-2 expansion) a number of the form $\dfrac{N}{2^{n+1}}$ with $N$ odd and $|N|\leq 2^n-1$, each such number being represented exactly once. This is why we obtain a regular spacing.
-(Nice) interpretation:
-
-Intuitively, the uniformly spread "mass" on interval $[-\frac{1}{2},\frac{1}{2}]$ is placed into $2^n$ "Dirac bins". 
-
-In more probabilistic (or "measure theory") and rigorous terms, the transformed formula expresses the convergence of a regularly spaced discrete uniform distribution to its continuous counterpart.
-
-The figure below illustrates this kind of equivalence between the continuous measure represented by the characteristic function of $[-1/2,1/2]$ and the "Dirac forest" made (in our particular case) of 8 regularly spaced "Dirac trees" of height=mass 1/8 each.
-
-Edit: A second proof using a very different, entirely geometrical, interpretation:
-Let us consider the figure below with $A_k(\cos\left(\dfrac{\theta}{2^{k-1}}\right),\sin\left(\dfrac{\theta}{2^{k-1}}\right))$ and $H_k=AA_{k-1} \cap OA_k$.
-Let us establish the relationship
-$$AH_{k+1}=\dfrac{AH_k}{2 \cos\left(\dfrac{\theta}{2^{k}}\right)} \ \ \ (1)$$
-In order to follow more easily, we will take the particular case $n=2$ WLOG:
-Relationship $AH_{3}=\dfrac{AH_2}{2 \cos\left(\dfrac{\theta}{4}\right)}$ is the consequence of two facts:
-
-$AA_2=2 AH_3$ ($H_3$ is the foot of the altitude issued from $O$ in the isosceles triangle $AOA_2$) and
-$AH_2=AA_2 \cos\left(\dfrac{\theta}{4}\right) \ $ by the inscribed angle theorem (https://en.wikipedia.org/wiki/Inscribed_angle): the angle under which arc $A_1A_2$ is "seen" from $A$ is half the angle under which it is "seen" from the center $O$.
-
-Thus, from (1),
-$$AH_{n+1}=\dfrac{AH_1}{2^{n} \prod_{k=1}^{n}\cos\left(\dfrac{\theta}{2^{k}}\right)} \ \ \ (2)$$
-But, if $n$ is large, $$AH_{n+1}=H_{n+1}A_{n}=\sin\left(\dfrac{\theta}{2^{n}}\right)\approx \dfrac{\theta}{2^{n}} \ \ \ (3) $$
-(classical approximation for small angles). Thus, taking into account that $AH_1=\sin \theta$, grouping (2) and (3) gives:
-$$\dfrac{\sin \theta}{2^{n} \prod_{k=1}^{n}\cos\left(\dfrac{\theta}{2^{k}}\right)}\approx\dfrac{\theta}{2^{n}} \ \ \ (4)$$
-After cancellation of $2^n$ on both sides, formula (4) is clearly a proof of the infinite product formula. 
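As a quick numerical sanity check of the identity itself (a short script, independent of both proofs above; the sample point and the truncation level $n=40$ are arbitrary choices):

```python
import math

# Compare sin(x)/x with the truncated product prod_{k=1}^{n} cos(x / 2^k).
x = 1.234          # any sample point works
n = 40             # truncation level; the tail factors are negligibly close to 1
partial_product = 1.0
for k in range(1, n + 1):
    partial_product *= math.cos(x / 2**k)

assert abs(partial_product - math.sin(x) / x) < 1e-12
```

The factor for index $k$ differs from $1$ by roughly $x^2/2^{2k+1}$, which is why the truncated product converges so quickly.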
-
-Note: A rigorous treatment would require replacing the approximation argument with a limit argument, which is rather easy...<|endoftext|>
-TITLE: Find $\lim\limits_{n\to\infty}{\left(1+\frac{1}{n^k}\right)\left(1+\frac{2}{n^k}\right)\cdots\left(1+\frac{n}{n^k}\right) }$ for $k=1, 2, 3, \cdots$
-QUESTION [9 upvotes]: First of all, I already searched Google, math.stackexchange.com...
-I know
-$$ \lim_{n\rightarrow\infty} \left( 1+ \frac{1}{n} \right) ^n=e$$
-That is
-$$ \lim_{n\rightarrow\infty} \underbrace{\left(1+\frac{1}{n}\right)\left(1+\frac{1}{n}\right)\cdots\left(1+\frac{1}{n}\right) }_{\text{n times}} =e$$
-At this time, I made some problems by modifying the above.
-
-$$ \lim_{n\rightarrow\infty} {\left(1+\frac{1}{n}\right)\left(1+\frac{2}{n}\right)\cdots\left(1+\frac{n}{n}\right) } =f(1) $$
-$$ \lim_{n\rightarrow\infty} {\left(1+\frac{1}{n^2}\right)\left(1+\frac{2}{n^2}\right)\cdots\left(1+\frac{n}{n^2}\right) } =f(2)$$
-$$ \lim_{n\rightarrow\infty} {\left(1+\frac{1}{n^3}\right)\left(1+\frac{2}{n^3}\right)\cdots\left(1+\frac{n}{n^3}\right) } =f(3)$$
-$$ \lim_{n\rightarrow\infty} {\left(1+\frac{1}{n^k}\right)\left(1+\frac{2}{n^k}\right)\cdots\left(1+\frac{n}{n^k}\right) } =f(k)$$
-
-After thinking about the above, I feel I'm spinning my wheels with these limit problems.
-Eventually, I searched WolframAlpha. The next images are results of WolframAlpha.
-(I take a LOG, because I don't know the command for an n-times product.)
-These results (if we trust WolframAlpha) say
-$$f(1)=\infty$$
-$$f(2)=\sqrt{e}$$
-$$f(3)=1$$
-$$f(30)=1$$
-NOW, I'm asking you for help.
-I'd like to know how I can find $f(k)$ (for $k=1,2,3,4, \cdots$ ).
-I already used Riemann sums, taking logs... but I didn't get anything. ;-(
-Thank you for your attention to this matter.
------------ EDIT ---------------------------------
-The result for $f(1), f(2), f(3), f(30)$ is an achievement of WolframAlpha, not me.
-I'm still spinning my wheels on $f(1), f(2), f(3)$, and so on... 
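For reference, the WolframAlpha values above can be reproduced by summing logarithms of the factors directly (a quick numerical experiment, not a proof; the cutoff $n$ is an arbitrary choice):

```python
import math

def f(k, n=200_000):
    # partial product prod_{p=1}^{n} (1 + p / n^k), computed via logs for stability
    log_product = sum(math.log1p(p / n**k) for p in range(1, n + 1))
    return math.exp(log_product)

print(f(2))   # close to sqrt(e) = 1.6487...
print(f(3))   # close to 1
```

For $k=1$ the same sum of logarithms grows without bound as $n$ increases, matching $f(1)=\infty$.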
-
-REPLY [8 votes]: Hint. You may start with
-$$
-x-\frac{x^2}2\leq\log(1+x)\leq x, \quad x\in [0,1],
-$$ giving, for $n\geq1$,
-$$
-\frac{p}{n^k}-\frac{p^2}{2n^{2k}}\leq\log\left(1+\frac{p}{n^k}\right)\leq \frac{p}{n^k}, \quad 0\leq p\leq n,
-$$ and
-$$
-\sum_{p=1}^n\frac{p}{n^k}-\sum_{p=1}^n\frac{p^2}{2n^{2k}}\leq \sum_{p=1}^n\log\left(1+\frac{p}{n^k}\right)\leq \sum_{p=1}^n\frac{p}{n^k},
-$$ or
-$$
-\frac{n(n+1)}{2n^k}-\frac{n(n+1)(2n+1)}{6n^{2k}}\leq \sum_{p=1}^n\log\left(1+\frac{p}{n^k}\right)\leq \frac{n(n+1)}{2n^k}
-$$ and, for $k\geq3$, as $n \to \infty$,
-$$
-\sum_{p=1}^n\log\left(1+\frac{p}{n^k}\right) \to 0,
-$$ that is,
-
-$$
-\lim_{n\rightarrow\infty} {\left(1+\frac{1}{n^k}\right)\left(1+\frac{2}{n^k}\right)\cdots\left(1+\frac{n}{n^k}\right) }=1, \quad k\geq3.
-$$
-
-The cases $k=1, 2$ are similar: the same bounds give $\sum_{p=1}^n\log\left(1+\frac{p}{n^2}\right) \to \frac12$, hence $f(2)=\sqrt{e}$, while for $k=1$ the lower bound tends to $\infty$, hence $f(1)=\infty$.<|endoftext|>
-TITLE: How to find $a+b$ in square $ABCD$?
-QUESTION [5 upvotes]: In square $ABCD,$ $AB=1.$ $BEFA$ and $MNOP$ are congruent. $BE= a - \sqrt b,$ where $a$ and $b$ are both primes. How to find $a+b$? I have no idea how to do this; can this be proved with simple geometry?
-
-REPLY [3 votes]: Let the side of the square be $p$ (so we can better track dimensions) and let $|\overline{BE}| = q$; and let $\theta = \angle EMP$. We have two equations:
-$$\begin{align}
-q + p \sin\theta + q \cos\theta &= p \\
-p \cos\theta + q \sin\theta &= p
-\end{align}$$
-Solving for $\cos\theta$ and $\sin\theta$ gives
-$$\cos\theta = \frac{p^2 - p q + q^2}{p^2 - q^2}\qquad \sin\theta = \frac{p\, (p - 2 q)}{p^2 - q^2}$$
-Since $\sin^2\theta + \cos^2\theta = 1$, we deduce
-$$\frac{p\,(p - 2 q)\,(p^2 - 4 p q + q^2)}{(p^2 - q^2)^2} = 0$$
-so that $q = p/2$ (extraneous), or $q = p\,( 2 + \sqrt{ 3 } )$ (extraneous), or $q = p\,(2 - \sqrt{3})$ (bingo!).
-
-Recalling $p=1$, we have $a = 2$ and $b = 3$, so that $a+b = 5$. 
$\square$
-
-Note: In this solution, $\theta = 30^\circ$.<|endoftext|>
-TITLE: In $C([0,1],\mathbb{R})$, the sup norm and the $L^1$ norm are not equivalent.
-QUESTION [6 upvotes]: How does the proof here show that the two norms are not equivalent? We have that in the sup norm $f_n$ converges to $1$, and in the $L^1$ norm $f_n$ converges to $0$, but how does this mean that the two norms are not equivalent?
-
-REPLY [3 votes]: If the two norms were equivalent then the $\|\cdot\|_\infty$-neighborhood $U_{1/2}(0)$ would have to contain a $\|\cdot\|_1$-neighborhood $V_\epsilon(0)$ for some $\epsilon>0$. But your example shows that any such $V_\epsilon(0)$ contains functions $f_n$ lying outside of $U_{1/2}(0)$.<|endoftext|>
-TITLE: Prove that $\frac{\sqrt{3}}{8}<\int_{\frac{\pi}{4}}^{\frac{\pi}{3}}\frac{\sin x}{x}\,dx<\frac{\sqrt{2}}{6}$
-QUESTION [6 upvotes]: Prove that $$\frac{\sqrt{3}}{8}<\int_{\frac{\pi}{4}}^{\frac{\pi}{3}}\frac{\sin x}{x}\,dx<\frac{\sqrt{2}}{6}$$
-
-My try: Since the chord of $\sin x$ from $(0,0)$ to $\left(\frac{\pi}{2},1\right)$ has slope $$\displaystyle \frac{1-0}{\frac{\pi}{2}-0}=\frac{2}{\pi},$$
-we get $$\frac{2}{\pi}<\frac{\sin x}{x}<1 \quad \text{on } \left(0,\tfrac{\pi}{2}\right).$$
-So we get $$\frac{2}{\pi}\int_{\frac{\pi}{4}}^{\frac{\pi}{3}}1\,dx<\int_{\frac{\pi}{4}}^{\frac{\pi}{3}}\frac{\sin x}{x}\,dx<1\cdot \int_{\frac{\pi}{4}}^{\frac{\pi}{3}}1\,dx.$$
-But this is not what I have to prove here.
-
-REPLY [6 votes]: This is not an answer but it is too long for a comment.
-I cannot resist the pleasure of reporting again the magnificent approximation proposed in the Mahabhaskariya of Bhaskara I, a seventh-century Indian mathematician.$$\sin(x) \simeq \frac{16 (\pi -x) x}{5 \pi ^2-4 (\pi -x) x}\qquad (0\leq x\leq\pi)$$ (see here). 
-So, as an approximation $$\int\frac{\sin(x)}x dx\approx-2 \left(\log \left(4 x^2-4 \pi x+5 \pi ^2\right)+\tan - ^{-1}\left(\frac{1}{2}-\frac{x}{\pi }\right)\right)$$ which gives $$\int_{\frac{\pi}{4}}^{\frac{\pi}{3}}\frac{\sin x}{x}dx\approx 2 \log \left(\frac{153}{148}\right)+\tan ^{-1}\left(\frac{100}{621}\right)\approx 0.226111$$ while the exact solution is $$\int_{\frac{\pi}{4}}^{\frac{\pi}{3}}\frac{\sin x}{x}dx=\text{Si}\left(\frac{\pi }{3}\right)-\text{Si}\left(\frac{\pi }{4}\right)\approx 0.226483$$ your bounds being $\approx 0.216506$ and $\approx 0.235702$<|endoftext|> -TITLE: A map of a compact connected manifold with non-empty boundary to a sphere of the same dimension is null-homotopic -QUESTION [5 upvotes]: In Milnor & Kervaire's Groups of Homotopy Spheres paper, this claim: - -A map of a compact connected manifold with non-empty boundary to a sphere of the same dimension is null-homotopic - -is made without justification or reference. Can anyone see why is it true? -All I can think of is that compact manifolds with non-empty boundary have top homology $0$, so this map induces $0$ in all homology groups (except the $0$th, of course). However, this doesn't seem to help. It's easy to see that the map restricted to the boundary is null-homotopic (since it can't be surjective there, say by Sard's theorem), but that also doesn't seem to help. Any explanations / references are appreciated. - -REPLY [3 votes]: This follows from the fact that such a manifold has no top-dimensional cohomology. -Note that one can construct a $K(\mathbb{Z}, n)$ from $S^n$ by gluing on cells of dimension at least $n + 2$ to kill the higher homotopy groups. So the $(n+1)$-skeleton of this $K(\mathbb{Z}, n)$, denoted $K(\mathbb{Z}, n)^{(n+1)}$, is $S^n$. 
As Eilenberg-MacLane spaces represent cohomology, for any $n$-dimensional manifold $M$ we have -$$H^n(M; \mathbb{Z}) \cong [M, K(\mathbb{Z}, n)] \cong [M, K(\mathbb{Z}, n)^{(n+1)}] \cong [M, S^n]$$ -where we use cellular approximation in the second step. Therefore, if $M$ is an $n$-dimensional manifold which is not closed, every map $M \to S^n$ is nullhomotopic.<|endoftext|> -TITLE: What is the optimal strategy when driving to my university: Wait or take alternative route and (possibly) wait? -QUESTION [7 upvotes]: When bicycling to my university, I'm faced with a difficult decision. The situation is shown here: - -I have to get to point marked with the $\times$, but in order to do so, I must cross a big road with a traffic light at both places where crossing is possible. If the first traffic light (at the bottom) is green, then there's no debate; I simply cross and go up $A$ with no waiting time at all. The choice arises when it is red (as is always the case anyway), since I can choose to wait for the first light to change, or go up a smaller road $B$ and then try to cross at the second light. -Let's assume that the two traffic lights are identical w.r.t. the percent of the time they are red (let's call this quantity $\Theta$), but that the second is phase-shifted with $\tau$ w.r.t. the first, s.t. the second changes to e.g. green a time $\tau$ after the first. The length $L$ of the paths are identical and I bicycle with a constant speed $v$ (although I suspect these last two are irrelevant to the solution). I don't know anything about the light before I arrive at it. The question then becomes - -If I arrive at a random time during which the first traffic light is red, given $\Theta$ and $\tau$ (and $L$ and $v$), should I - -wait at the first light and go by path $A$, or should I -go by path $B$ - -in order to minimize waiting time? 
-
-The type of solution I'm looking for can be better formulated if we first look at two examples:
-
-The $\mathbf{I}$'s represent the time during which the traffic light is red - the space in between them represents the green lights. The drawn lines represent my position if I chose to go by $B$. If I arrive at the first red light during the blue stretch, I should go by $B$; if I, on the other hand, arrive during the red stretch, I should wait and go by $A$. As can be seen from the two scenarios $(a)$ and $(b)$ (where the lines represent my position should I choose path $B$, and where the slope is determined by $L$ and $v$), my choice depends on $\tau$, which is the only parameter that has been changed.
-
-If we call the length of the blue strip $t_{\text{go}}$ and the length of the red strip $t_{\text{wait}}$, s.t. $t_{\text{go}}+t_{\text{wait}}=\Theta$, the solution should be of the form $$t_{\text{go}}(\tau)$$
-
-I don't think $v$ and $L$ should play a role, since the effect of changing these can be incorporated into $\tau$, so we could probably set $v=L=1$. I'm guessing there should be a pretty simple geometric way of solving this, but for the time being it is eluding me.
-Some random observations: If $v=\frac{L}{\tau}$ it doesn't matter what I choose. Also, if the second light were independent of the first, I should always go by $B$, since there I will at least have a $1-\Theta$ chance of getting a green light.
-Any help is much appreciated!
-
-REPLY [3 votes]: Let's choose a unit of time so that each light cycle is exactly
-one unit long.
-As you noticed,
-if $v = \frac L\tau$ it does not matter which choice you make. 
-In fact, since you will arrive at the second light exactly $\frac Lv$ units of time later than you arrived at the first light -(if you choose to try the second light), -the expected waiting time at the second light will be exactly the same -as the expected waiting time of a light at the location of the first -light, but phase-shifted $\frac Lv$ units earlier than the second light, -which itself is phase-shifted $\tau$ units later than the first light. -So choosing to go to the second light is equivalent to -making an irrevocable choice to phase-shift the first light -$\tau - \frac Lv$ units later. Moreover, since phase-shifting by -any whole number of units of time has no effect, -we can say that choosing the second light is equivalent to phase-shifting -the first light by $\delta$, -where $0 \leq \delta < 1$ and there is an integer $N$ -such that $N + \delta = \tau - \frac Lv$. -Let's suppose you arrive at the first light at time $t$, where -$t=0$ if the light is just turning red and $t=\Theta$ if the light -is just turning green again. -Assuming the first light is red when you arrive, $0 \leq t < \Theta$. -In addition to that constraint, there are three possible cases to consider: -\begin{align} - t &< \Theta + \delta - 1 \tag1 \\ -\Theta + \delta - 1 \leq t &< \delta \tag2 \\ - \delta \leq t & \tag3 \\ -\end{align} -Which cases are even possible depends on $\Theta$ and $\delta$. -Essentially, by shifting the light cycle by $\delta$, you turn the light -green for some portion of the time when it would have been red. -Suppose the part of the cycle that was red and is now green starts at time -$\alpha$ and ends at time $\beta$. -If $\delta > 1 - \Theta$, then $\alpha = \Theta + \delta - 1$; -otherwise $\alpha = 0$, ruling out case $(1)$. -If $\delta < \Theta$, then $\beta = \delta$; -otherwise $\beta = \Theta$, ruling out case $(3)$. 
-If $\delta = 0$ then Case $(2)$ is ruled out (and it makes no difference -which path you take); but Case $(2)$ is possible whenever $\delta \neq 0$. -In case $(1)$, you get a green light at time $\Theta + \delta - 1$, -which is $1 - \delta$ units of time earlier than you would have gotten -without the phase shift. -In case $(2)$, the phase shift turns the light green, -and you can cross $\Theta - t$ units of time earlier than you would -have without the phase shift. -In case $(3)$, you arrived at the first light after the portion of the -light cycle that is turned green by the phase shift; -you end up crossing $\delta$ units of time later than you would have -without the phase shift. -Assuming that the distribution of your arrival time within the -light cycle is uniform, the expected change in waiting time -(positive for increased waiting, negative for decreased waiting), -given that you arrive at the first light when it is red, is therefore -\begin{align} -E[\Delta T] -&= \frac 1\Theta \int_0^\alpha (\delta - 1) \,dt - + \frac 1\Theta \int_\alpha^\beta (t - \Theta) \,dt - + \frac 1\Theta \int_\beta^\Theta \delta \,dt \\ -&= \frac \alpha\Theta(\delta - 1) - + \frac 1\Theta \left(\frac12 \beta^2 - \frac12 \alpha^2\right) - - \frac{\beta - \alpha}{\Theta} \Theta - + \frac{\Theta - \beta}{\Theta} \delta \\ -&= \frac{1}{2\Theta}(\beta^2 - \alpha^2) - + \frac \alpha\Theta(\delta - 1) + \alpha - \beta - + \frac{\Theta - \beta}{\Theta} \delta \\ -&= \frac{1}{2\Theta}(\beta^2 - \alpha^2) - + \frac \alpha\Theta(\Theta + \delta - 1) - + \frac{\Theta - \beta}{\Theta} (\Theta + \delta) - \Theta -\end{align} -The reason for the seemingly peculiar rearrangement of terms on the -last line of the equation above -is that we know that either $\alpha = 0$ (in which case the -term involving $\alpha$ disappears) or $\alpha = \Theta + \delta - 1$ -(in which case the term becomes $\frac1\Theta (\Theta + \delta - 1)^2$); -we also know that either $\beta = \Theta$ -(in which case the term 
involving $\beta$ disappears)
-or $\beta = \delta$ (in which case that term becomes
-$\Theta - \frac1\Theta \delta^2$).
-We then have the following four possible answers, depending on whether
-$\delta \leq 1 - \Theta$ (that is, $\alpha = 0$) and on whether
-$\delta \geq \Theta$ (that is, $\beta = \Theta$):
-Case $\Theta \leq \delta \leq 1 - \Theta$:
-$$E[\Delta T] = -\frac{\Theta}{2}.$$
-Case $\delta \leq 1 - \Theta, \delta < \Theta$:
-$$E[\Delta T] = -\frac{\delta^2}{2\Theta}.$$
-Case $\delta > 1 - \Theta, \delta \geq \Theta$:
-$$E[\Delta T] = \frac{(\Theta + \delta - 1)^2 - \Theta^2}{2\Theta}.$$
-Case $1 - \Theta < \delta < \Theta$:
-$$E[\Delta T] = \frac{(\Theta + \delta - 1)^2 - \delta^2}{2\Theta}.$$
-You should wait at the first light if $E[\Delta T]$ is positive,
-but go to the second light if $E[\Delta T]$ is negative.
-If $E[\Delta T] = 0$ either choice is equally good.
-In the first two formulas for $E[\Delta T]$,
-where $\delta \leq 1 - \Theta$, clearly it is always advantageous
-to try the second light if the first is red, unless $\delta = 0$
-(in which case it makes no difference).
-In the other formulas, where $\delta > 1 - \Theta$,
-we have $\Theta + \delta - 1 > 0$, so
-$(\Theta + \delta - 1)^2 - \Theta^2$ is negative if and only if
-$\Theta > \Theta + \delta - 1$, that is,
-if and only if $\delta < 1$, which is true.
-For similar reasons,
-$(\Theta + \delta - 1)^2 - \delta^2$ is negative if and only if
-$\delta > \Theta + \delta - 1$, that is,
-if and only if $\Theta < 1$, which presumably is also true.
-So it seems that except for the case $\delta = 0$
-(where it obviously makes no difference which path you take),
-it is always advantageous to go to the second light if the first is red.<|endoftext|>
-TITLE: Where are the transcendental numbers?
-QUESTION [5 upvotes]: This question is motivated by an exercise from Rudin. The exercise asks us to prove that the set of all algebraic numbers is countable. 
-
-Proof: We know that a number $z$ is called algebraic if it is the root of a polynomial $a_0z^n+a_1z^{n-1}+\cdots +a_n=0$, where the $a_i$'s $\in \Bbb{Z}$. For each positive integer $N$ there are only finitely many equations of the form $n+|a_0|+|a_1|+\cdots+|a_n|=N$, hence only finitely many such polynomials for each $N$, and therefore countably many polynomials in total. Since each of these polynomials has only finitely many roots, there are countably many algebraic numbers.
-Now the next exercise asks us to prove that there is a real number which is not algebraic, which is trivial at this point. So this proves that transcendental numbers exist. It also shows that there are uncountably many of them. So that means there are fairly many of them.
-But where are they?
-Since there are uncountably many of them, it seems it should be fairly easy to find them, but it is not. After all, we can get all sorts of wonky numbers as roots of polynomials.
-https://en.wikipedia.org/wiki/Transcendental_number mentions the known transcendental numbers and some numbers which might be transcendental. These do not seem to be many. So what about the rest of the transcendental numbers? Where are they? And why is it so difficult to find them and prove that they are transcendental?
-A side question: Is there some nice property that the class of all these transcendental numbers might satisfy? By a nice property I mean: if I take all transcendental numbers and form a group, what about the isomorphisms on this group? Similarly, what about if I take all transcendental numbers and form a topological space?
-Note: If this is a question that has already been asked then please do notify me in the comments and I will delete this question.
-
-REPLY [5 votes]: Given that $x$ is transcendental, so is $p(x)$ for $p$ any nonconstant polynomial with rational coefficients. 
That means that each number we prove transcendental, like $\pi, e, $ and the Liouville numbers, gives us a countable infinity of them.<|endoftext|>
-TITLE: Finding all left inverses of a matrix
-QUESTION [5 upvotes]: I have to find all left inverses of the matrix
-$$A = \begin{bmatrix}
- 2&-1 \\
-5 & 3\\
- -2& 1
-\end{bmatrix}$$
-I created a matrix to the left of $A$,
-$$\begin{bmatrix}
-a &b &c \\
-d &e &f
-\end{bmatrix} \begin{bmatrix}
- 2&-1 \\
-5 & 3\\
- -2& 1
-\end{bmatrix} = \begin{bmatrix}
- 1&0 \\
- 0&1
-\end{bmatrix}$$
-and I got the following system of equations:
-\begin{array} {lcl} 2a+5b-2c & = & 1 \\ -a+3b+c & = & 0 \\ 2d+5e-2f & = & 0 \\ -d+3e+f & = & 1 \end{array}
-After this step, I am unsure how to continue or how to form those equations into a solvable matrix, and create a left inverse matrix from the answers to these equations.
-
-REPLY [3 votes]: Find the left inverse for the rank $\rho = 2$ matrix
-$$
-\mathbf{A} =
-\left[ \begin{array}{rr}
- 2 & -1 \\
- 5 & 3 \\
- -2 & 1
-\end{array} \right]
-$$
-The approach is straightforward. Compute the singular value decomposition
-$$
- \mathbf{A} = \mathbf{U} \, \Sigma \, \mathbf{V}^{*}
-\tag{1}
-$$
-and use this to construct the Moore-Penrose pseudoinverse matrix
-$$
-\mathbf{A}^{+} = \mathbf{V} \, \Sigma^{+} \, \mathbf{U}^{*}
-\tag{2}
-$$
-How do we know the pseudoinverse matrix is a left inverse? See generalized inverse of a matrix and convergence for singular matrix, What forms does the Moore-Penrose inverse take under systems with full rank, full column rank, and full row rank?
-The singular value decomposition is completed using the recipe for the row space in this post: SVD and the columns — I did this wrong but it seems that it still works, why?
-
-
-Compute the SVD
-1. 
Construct product matrix
-$$
-\begin{align}
-\mathbf{W} &= \mathbf{A}^{*} \mathbf{A}\\
-&=
-% A*
-\left[
-\begin{array}{rcr}
- 2 & 5 & -2 \\
- -1 & 3 & 1 \\
-\end{array}
-\right]
-% A
-\left[
-\begin{array}{rr}
- 2 & -1 \\
- 5 & 3 \\
- -2 & 1 \\
-\end{array}
-\right] \\
-% W
-&=
-\left[
-\begin{array}{cc}
- 33 & 11 \\
- 11 & 11 \\
-\end{array}
-\right]
-\end{align}
-$$
-
-2. Solve for eigenvalues
-The characteristic polynomial is given by
-$$
- p(\lambda) = \lambda^{2} - \lambda \text{ tr }\mathbf{W} + \det \mathbf{W}
-$$
-The trace and determinant are
-$$
-\text{tr }\mathbf{W} = 44, \qquad \det \mathbf{W} = (33-11)11=242
-$$
-Therefore
-$$
- p(\lambda) = \lambda^{2} - 44 \lambda + 242
-$$
-The roots are the eigenvalues:
-$$
- \lambda \left( \mathbf{W} \right) = \left\{
- 11 \left(2 + \sqrt{2}\right), 11 \left(2-\sqrt{2}\right)
- \right\}
-$$
-The matrix of singular values is
-$$
-\mathbf{S} = \sqrt{11}
-\left(
-\begin{array}{cc}
- \sqrt{2 + \sqrt{2}} & 0 \\
- 0 & \sqrt{2 - \sqrt{2}} \\
-\end{array}
-\right)
-$$
-The sabot matrix is
-$$
- \Sigma =
-\left[
-\begin{array}{c}
- \mathbf{S} \\
- \mathbf{0} \\
-\end{array}
-\right]
-=
- \sqrt{11}
- \left[
-\begin{array}{cc}
- \sqrt{2 + \sqrt{2}} & 0 \\
- 0 & \sqrt{2 - \sqrt{2}} \\
- 0 & 0 \\
-\end{array}
-\right]
-$$
-
-3. 
Solve for eigenvectors -First eigenvector -Solving -$$ -\begin{align} - \left( \mathbf{W} - \lambda_{1} \mathbf{I}_{2} \right) w_{1} -&= -\mathbf{0} \\ -% -\left[ -\begin{array}{cc} - 33-11 \left(2+\sqrt{2}\right) & 11 \\ - 11 & 11-11 \left(2+\sqrt{2}\right) \\ -\end{array} -\right] -% -\left[ -\begin{array}{cc} - w_{x} \\ - w_{y} \\ -\end{array} -\right] -% -&= -% -\left[ -\begin{array}{cc} - 0 \\ - 0 \\ -\end{array} -\right] -% -\end{align} -$$ -produces -$$ - w_{1} = -\left[ -\begin{array}{cc} - 1+\sqrt{2} \\ - 1 \\ -\end{array} -\right] -$$ -Second eigenvector -Solving -$$ -\begin{align} - \left( \mathbf{W} - \lambda_{2} \mathbf{I}_{2} \right) w_{2} -&= -\mathbf{0} \\ -% -\left[ -\begin{array}{cc} - 33-11 \left(2-\sqrt{2}\right) & 11 \\ - 11 & 11-11 \left(2-\sqrt{2}\right) \\ -\end{array} -\right] -% -\left[ -\begin{array}{cc} - w_{x} \\ - w_{y} \\ -\end{array} -\right] -% -&= -% -\left[ -\begin{array}{cc} - 0 \\ - 0 \\ -\end{array} -\right] -% -\end{align} -$$ -produces -$$ - w_{2} = -\left[ -\begin{array}{cc} - 1 - \sqrt{2} \\ - 1 \\ -\end{array} -\right] -$$ - -4. Assemble the domain matrix for the row space -$$ -\begin{align} - \mathbf{V} &= -\left[ -\begin{array}{cc} - \frac{w_{1}}{\lVert w_{1} \rVert} & - \frac{w_{2}}{\lVert w_{2} \rVert} \\ -\end{array} -\right] \\ -&= -\left[ -\begin{array}{cc} -% c1 -\left( -2 \left(2 + \sqrt{2} \right) -\right)^{-\frac{1}{2}} - \left[ -\begin{array}{c} - 1 + \sqrt{2} \\ - 1 -\end{array} -\right] & -% c2 -\left( -2 \left(2 - \sqrt{2} \right) -\right)^{-\frac{1}{2}} - \left[ -\begin{array}{c} - 1 - \sqrt{2} \\ - 1 -\end{array} -\right] -\end{array} -\right] -\end{align} -$$ - -5. 
Compute the domain matrix for the column space
-Compute the column vectors for $k=1,2$:
-$$
- \mathbf{U}_{k} = \sigma^{-1}_{k} \mathbf{A} \mathbf{V}_{k}
-$$
-$$
-\begin{align}
- \mathbf{U} &=
-\left[
-\begin{array}{ccc}
-% c1
-\left(
-\sqrt{22} \left(\sqrt{2}+2\right)
-\right)^{-1}
- \left[
-\begin{array}{r}
- 1 + 2\sqrt{2} \\
- 8 + 5\sqrt{2} \\
--1 - 2\sqrt{2} \\
-\end{array}
-\right] &
-% c2
-\left(
-\sqrt{22} \left(2 - \sqrt{2}\right)
-\right)^{-1}
- \left[
-\begin{array}{r}
- 1 - 2\sqrt{2} \\
- 8 - 5\sqrt{2} \\
--1 + 2\sqrt{2} \\
-\end{array}
-\right] &
-% c3
-\left(
-2
-\right)^{-\frac{1}{2}}
- \color{gray}{\left[
-\begin{array}{r}
- 1 \\
- 0 \\
- 1 \\
-\end{array}
-\right]}
-%
-\end{array}
-\right]
-\end{align}
-$$
-The third column vector, the gray null space vector, was added by inspection.
-
-
-Construct the pseudoinverse
-Following the prescription in $(2)$
-$$
- \mathbf{A}^{+} = \mathbf{V} \, \Sigma^{+} \, \mathbf{U}^{*} = \frac{1}{22}
-\left[
-\begin{array}{rcr}
- 3 & 2 & -3 \\
- -5 & 4 & 5 \\
-\end{array}
-\right]
-$$
-
-
-Verify pseudoinverse
-Left inverse
-$$
- \mathbf{A}^{+}\mathbf{A} = \frac{1}{22}
-\left[
-\begin{array}{rcr}
- 3 & 2 & -3 \\
- -5 & 4 & 5 \\
-\end{array}
-\right]
-\left[
-\begin{array}{rr}
- 2 & -1 \\
- 5 & 3 \\
- -2 & 1 \\
-\end{array}
-\right]
-=
-\left[
-\begin{array}{cc}
- 1 & 0 \\
- 0 & 1 \\
-\end{array}
-\right]
-= \mathbf{I}_{2}
-$$
-Therefore, $\mathbf{A}^{+}$ is a left inverse.
-$$
- \mathbf{A} \mathbf{A}^{+} =
-\left[
-\begin{array}{rr}
- 2 & -1 \\
- 5 & 3 \\
- -2 & 1 \\
-\end{array} \right]
-\frac{1}{22}
-\left[
-\begin{array}{rcr}
- 3 & 2 & -3 \\
- -5 & 4 & 5 \\
-\end{array}
-\right]
-=
-\frac{1}{2}
-\left[
-\begin{array}{rcr}
- 1 & 0 & -1 \\
- 0 & 2 & 0 \\
- -1 & 0 & 1 \\
-\end{array}
-\right]
-\ne \mathbf{I}_{3}
-$$
-Therefore, $\mathbf{A}^{+}$ is not a right inverse.<|endoftext|>
-TITLE: Running through all permutations of a Rubik's cube
-QUESTION [9 upvotes]: According to Wikipedia a $3 \times3\times3$ Rubik's cube has $43252003274489856000$ permutations. 
-
-I never tried solving one myself (too tedious), but I wondered if one could miraculously "solve" the cube in color-blind mode (all faces of the cube have the same color to the solver, or the person is just blindfolded). (Yes, it is tedious, but I guess sooner or later he will just hit the right permutation, and hopefully someone tells him that it's good and he may stop.)
-I wondered if there is some kind of "best" approach to run through all those permutations without too many repetitions; for example, the algorithm makes sure one gets any permutation at most $n$ times.
-I don't know the answer yet and probably won't have it anytime soon. However, I would be quite interested in seeing this algorithm, and thus I hope the question has been asked before and been solved. Any constructive comment/answer is appreciated.
-
-REPLY [6 votes]: In this link, it is claimed (and I believe them!) that one can achieve every single permutation of the cube without ever visiting a single vertex twice. The details are, not surprisingly, pretty devilish.
-The basic idea is to imagine creating a graph with one vertex for each permutation of the cube. We'll call the solved state's vertex $1$, and give a name to each vertex depending on the moves that need to be taken to get there from $1$. If you perform the move $URU^3$ starting from $1$, you'll call that vertex $URU^3$ (note that we have to have some convention about names; they're not unique). It's not especially important for this question, but the graph is called the Cayley graph of the Rubik group -- this is what let me search for an answer, by knowing the magic password.
-Your question could now be phrased as, "Are there any circuits (walks that return to where they started, also called cycles) of this Rubik graph that visit each vertex at least once, without too many repeats?" The best kind of circuits are called Hamiltonian circuits, and they visit each vertex exactly once before returning to where they started. 
-
-What the webpage shows, in full technical jargon, is that the Cayley graph of the Rubik's cube group admits a Hamiltonian circuit.
-
-It's my understanding that it would be completely ridiculously unmanageable to use the circuit they found. It was constructed by "opening up" more and more states of the cube in a very controlled manner.
-For example, if you can only use the move $UR$ (not $U$ and $R$ separately, yet), then by repeating the move $UR$ $105$ times, you'll wind up where you started. If you can use any combination of $U$'s and $R$'s, there are a lot more possibilities -- $73{,}483{,}200$, to be precise! But you can string together a bunch of walks that look like $UR$ repeated $105$ times, with some appropriate shifting between walks, to get all $73{,}483{,}200$ positions accessible with only $U$ and $R$.
-Apparently the author of the page played that game with increasingly less limited movesets, slowly adding the moves $D,\ L$, and $F$ to their repertoire. So I think the cycle they obtained looks a lot like a bunch of those little $UR$-repeated-$105$-times walks, but with various move combinations thrown in between the walks.
-It is definitely not something that could be committed to memory! But probably something a very, very dumb computer could implement to solve the Rubik's cube with absolutely no strategy besides executing an unfathomably large sequence of the basic quarter-turn moves.<|endoftext|>
-TITLE: How to prove the result of this definite integral?
-QUESTION [15 upvotes]: During my work I came up with this integral:
-$$\mathcal{J} = \int_0^{+\infty} \frac{\sqrt{x}\ln(x)}{e^{\sqrt{x}}}\ \text{d}x$$
-Mathematica has a very elegant and simple numerical result for this, which is
-$$\mathcal{J} = 12 - 8\gamma$$
-where $\gamma$ is the Euler-Mascheroni constant.
-I tried to make some substitutions, but I failed. Any hint to proceed? 
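For what it's worth, the closed form can be double-checked numerically with only the standard library (an illustration, not a derivation; the substitution $x=t^2$, the cutoff at $50$, and the step count are ad hoc choices):

```python
import math

# After substituting x = t^2, J = 4 * ∫_0^∞ t^2 e^{-t} ln(t) dt.
def g(t):
    return t * t * math.exp(-t) * math.log(t) if t > 0 else 0.0

# composite Simpson's rule on [0, 50]; the integrand is negligible beyond 50
a, b, m = 0.0, 50.0, 100_000
h = (b - a) / m
total = g(a) + g(b)
for i in range(1, m):
    total += (4 if i % 2 else 2) * g(a + i * h)
J = 4 * total * h / 3

gamma = 0.5772156649015329  # Euler-Mascheroni constant
assert abs(J - (12 - 8 * gamma)) < 1e-6
```

Both sides come out to approximately $7.38227$, which at least rules out a typo in the conjectured closed form.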
-
-REPLY [4 votes]: Start from the well-known integral
-$$-\gamma=\int_0^\infty\exp(-x)\log x\,\mathrm dx$$
-A round of integration by parts yields
-$$\begin{align*}
--\gamma&=\int_0^\infty\exp(-x)\log x\,\mathrm dx\\
-&=\int_0^\infty x\exp(-x)\log x\,\mathrm dx-\int_0^\infty x\exp(-x)\,\mathrm dx\\
-1-\gamma&=\int_0^\infty x\exp(-x)\log x\,\mathrm dx
-\end{align*}$$
-A second round gives
-$$\begin{align*}
-1-\gamma&=\int_0^\infty x\exp(-x)\log x\,\mathrm dx\\
-&=\int_0^\infty x\exp(-x)(x-1)(\log x-1)\,\mathrm dx\\
-&=\int_0^\infty x\exp(-x)\,\mathrm dx-\int_0^\infty x^2\exp(-x)\,\mathrm dx-\int_0^\infty x\exp(-x)\log x\,\mathrm dx+\int_0^\infty x^2\exp(-x)\log x\,\mathrm dx\\
-3-2\gamma&=\int_0^\infty x^2\exp(-x)\log x\,\mathrm dx
-\end{align*}$$
-where the last integral is the one obtained by Dr. MV after an appropriate substitution. Indeed, substituting $x=t^2$, $\mathrm dx=2t\,\mathrm dt$ in the original integral gives
-$$\mathcal{J}=\int_0^\infty \frac{\sqrt{x}\ln(x)}{e^{\sqrt{x}}}\,\mathrm dx=4\int_0^\infty t^2\exp(-t)\log t\,\mathrm dt=4(3-2\gamma)=12-8\gamma.$$<|endoftext|>
-TITLE: Explicit characterization of dual of $H^1$
-QUESTION [7 upvotes]: Let's start with some well-known facts:
-
-$H^1(\mathbb{R})$ is a Hilbert space, hence the Riesz representation theorem holds, stating that any continuous linear functional on it can be represented as $L = \langle \cdot , v\rangle$, where $v \in H^1(\mathbb{R})$, and $\langle \cdot, \cdot \rangle$ is the pairing in $H^1(\mathbb{R})$:
-$$
-\langle u, v \rangle = \int_{\mathbb{R}} (u_x v_x + uv) d x.
-$$
-Here, subscript indicates derivative.
-Let $w \in L^2(\mathbb{R})$. Since the $H^1$-norm is stronger than the $L^2$-norm, the functional $v \mapsto L(v) := \int_{\mathbb{R}} w(x) v(x) d x$ is a continuous linear functional on $H^1(\mathbb{R})$.
-
-My question is: is it possible to give the representative explicitly? In other words, find $z \in H^1(\mathbb{R})$ such that $L(v) = \langle v,z\rangle$.
-I tried mimicking the proof of the Riesz theorem but it did not help. I am sure there is something really easy I don't see, but I seem to have some misconception here.
-Any help would be appreciated.
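For the concrete case asked about ($H^1(\mathbb R)$), the representative $z$ solves $z - z'' = w$: then $\langle v, z\rangle_{H^1} = \int (v_x z_x + vz)\,dx = \int v\,(z - z'')\,dx = \int vw\,dx$ for all $v$, after integrating by parts. Here is a discrete sketch of that identity on a periodic finite-difference grid; the grid size, spacing, and data are arbitrary choices of mine, not part of the question.

```python
import random

# The representative z solves z - z'' = w: then <v, z>_{H^1} = sum(v' z' + v z)
# equals sum(v w) by discrete summation by parts. Periodic grid throughout.

random.seed(1)
N, h = 32, 0.2
w = [random.uniform(-1.0, 1.0) for _ in range(N)]

# Assemble A = I - Lap_h with (Lap_h z)_i = (z[i+1] - 2 z[i] + z[i-1]) / h^2.
A = [[0.0] * N for _ in range(N)]
for i in range(N):
    A[i][i] = 1.0 + 2.0 / h ** 2
    A[i][(i + 1) % N] -= 1.0 / h ** 2
    A[i][(i - 1) % N] -= 1.0 / h ** 2

def solve(A, b):
    # plain Gaussian elimination with partial pivoting
    n = len(b)
    M = [row[:] + [bi] for row, bi in zip(A, b)]
    for c in range(n):
        p = max(range(c, n), key=lambda r: abs(M[r][c]))
        M[c], M[p] = M[p], M[c]
        for r in range(c + 1, n):
            f = M[r][c] / M[c][c]
            for k in range(c, n + 1):
                M[r][k] -= f * M[c][k]
    x = [0.0] * n
    for r in range(n - 1, -1, -1):
        x[r] = (M[r][n] - sum(M[r][k] * x[k] for k in range(r + 1, n))) / M[r][r]
    return x

z = solve(A, w)                     # the discrete Riesz representative
v = [random.uniform(-1.0, 1.0) for _ in range(N)]

def D(u, i):
    # forward difference, periodic boundary
    return (u[(i + 1) % N] - u[i]) / h

h1_pairing = sum((D(v, i) * D(z, i) + v[i] * z[i]) * h for i in range(N))
l2_pairing = sum(v[i] * w[i] * h for i in range(N))
```

With forward differences in the pairing and the centered Laplacian in $A$, the discrete summation by parts is exact on a periodic grid, so the two pairings agree up to solver round-off.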
-
-REPLY [9 votes]: (i) $H^s(\mathbb{R}^n):=\lbrace u \in \mathcal{S}'(\mathbb{R}^n) : \Lambda^s u \in L^2(\mathbb{R}^n) \rbrace$ with $\omega_s(\xi):=(1+|\xi|^2)^{s/2}$, $\Lambda^s u := \mathcal{F}^{-1}(\omega_s \widehat{u})$ $\forall u \in \mathcal{S}'(\mathbb{R}^n)$, and by the Plancherel theorem we have the norm
-$$\left \| u \right \|_{H^s}= \left \| \Lambda^s u \right \|_{L^2}= \left\| \mathcal{F}(\Lambda^s u) \right\|_{L^2}=\left \| \omega_s \widehat{u} \right \|_{L^2}=\left( \int_{\mathbb{R}^n} |\widehat{u}(\xi)|^2 (1+|\xi|^2)^s d\xi \right)^{1/2}$$
-Note that you can also define (ii) $H^s(\mathbb{R}^n):=\lbrace u \in L^2(\mathbb{R}^n) : \Lambda^s u \in L^2(\mathbb{R}^n) \rbrace$; the definition (i) is only a little more general. In particular, we have the scalar product
-$$ (u,v)_{H^s}:=(\Lambda^s u, \Lambda^s v)_{L^2}= \int_{\mathbb{R}^n} \widehat{u}(\xi) (1+|\xi|^2)^s \overline{\widehat{v}(\xi)} d\xi.$$
-Now $H^s(\mathbb{R}^n)$ and $H^{-s}(\mathbb{R}^n)$ are isometrically dual to each other. In other words, for the space (iii) $H^s(\mathbb{R}^n)^* = \lbrace u \in H^s(\mathbb{R}^n)\longmapsto (u,v)_* : v \in H^{-s}(\mathbb{R}^n) \rbrace$ we have $H^s(\mathbb{R}^n)^* \cong H^{-s}(\mathbb{R}^n)$, and likewise it is shown that $H^{-s}(\mathbb{R}^n)^* \cong H^{s}(\mathbb{R}^n)$. In particular, the maps in (iii) are isometries, and $|(u,v)_*| \le \left \| u \right \|_{H^s} \left \| v \right \|_{H^{-s}}$.
-Proof.
-Let $v \in H^{-s}(\mathbb{R}^n)=\lbrace v \in \mathcal{S}'(\mathbb{R}^n) : \Lambda^{-s}v \in L^2(\mathbb{R}^n) \rbrace$ and $u \in H^s(\mathbb{R}^n)$. Consider the scalar product
-$$ (u,v)_*:=(\Lambda^s u , \Lambda^{-s} v)_{L^2}=\int_{\mathbb{R}^n} \widehat{u}(\xi)(1+|\xi|^2)^{s/2} \overline{\widehat{v}}(\xi) (1+|\xi|^2)^{-s/2} d\xi$$
-by the Cauchy–Schwarz inequality $|(u,v)_*| \le \left \| u \right \|_{H^s} \left \| v \right \|_{H^{-s}}$.
-We show that $\forall v \in H^{-s}(\mathbb{R}^n)$, the map $u \in H^{s}(\mathbb{R}^n) \longmapsto (u,v)_* \in \mathbb{C}$ is an element of the dual space $H^s(\mathbb{R}^n)^*$, and that every element of the dual arises this way. In fact, if $T \in H^s(\mathbb{R}^n)^*$, then by the Riesz representation theorem:
-$$
-\exists h \in H^s(\mathbb{R}^n): T(u)=(u,h)_{H^s} , \forall u \in H^s(\mathbb{R}^n)
-$$
-and $\left \| T \right \|_*=\left \| h \right \|_{H^s}$. Define $v=\mathcal{F}^{-1}(\omega_{2s} \widehat{h})$; by the Plancherel theorem we have
-$$
-\left \| v \right \|_{H^{-s}}
-= \left \| \Lambda^{-s}v \right \|_{L^2}
-= \left \| \mathcal{F}\Lambda^{-s}v \right \|_{L^2}
-= \left \| \omega_{-s} \mathcal{F}v \right \|_{L^2}
-= \left \| \omega_{-s} \omega_{2s} \mathcal{F}h \right \|_{L^2}
-= \left \| \omega_s \mathcal{F}h \right \|_{L^2}
-< \infty
-$$
-therefore $v \in H^{-s}(\mathbb{R}^n)$ with $h \in H^s(\mathbb{R}^n)$, and moreover
-$$
-T(u)=(u,h)_{H^s}
-= \int_{\mathbb{R}^n} \widehat{u}(\xi) (1+|\xi|^2)^s \overline{\widehat{h}(\xi)} d\xi
-= (u,v)_{*}
-$$
-and $H^s(\mathbb{R}^n)^* \cong H^{-s}(\mathbb{R}^n)$. In the end
-$$
-\left \| T \right \|_{*}
-= \left \| h \right \|_{H^s}
-= \left \| \omega_s \mathcal{F}h \right \|_{L^2}
-=\left \| v \right \|_{H^{-s}}.
-$$<|endoftext|>
-TITLE: A problem involving the product $\prod_{k=1}^{n} k^{\mu(k)}$, where $\mu$ denotes the Möbius function
-QUESTION [12 upvotes]: Let $\mu$ denote the Möbius function whereby
-$$\mu(k) =
- \begin{cases}
- 0 & \text{if $k$ has one or more repeated prime factors} \\
- 1 & \text{if $k=1$} \\
- (-1)^j & \text{if $k$ is a product of $j$ distinct primes}\end{cases}$$ for all $k \in \mathbb{N}$. I have previously noted a surprising connection between products of the form $$\prod_{k=1}^{n} k^{\mu(k)}$$ and harmonic numbers (see http://oeis.org/A130087).
Letting $H_{n} = 1 + \frac{1}{2} + \cdots + \frac{1}{n}$ denote the $n^{\text{th}}$ harmonic number, I noted that the denominator (in lowest terms) of this product is equal to the denominator (in lowest terms) of $H_{n}^{2}n!$ for all $n < 897$. For example, letting $n = 11$, we have that $$1^{\mu(1)} \cdot 2^{\mu(2)} \cdot \cdots \cdot 11^{\mu(11)} = \frac{2}{77},$$ and $$H_{11}^{2}11! = \frac{28030126084}{77}.$$ This property does not hold for all $n \in \mathbb{N}$, and the first counterexample occurs in the case whereby $n = 897$. Letting $\text{den}(q)$ denote the denominator in lowest terms of $q \in \mathbb{Q}_{> 0}$, it is not obvious to me why the equality $$\text{den}\left(\prod_{k=1}^{n} k^{\mu(k)}\right) = \text{den}\left(H_{n}^{2}n!\right)$$ holds for the first several hundred natural numbers. So it seems natural to me to ask:
-(1) Is there some simple number-theoretic explanation as to 'why' this equality holds for the first several hundred natural numbers?
-(2) Is there some simple number-theoretic interpretation as to 'why' this equality does not hold in general (which specifically explains the counterexample in the case whereby $n = 897$)?
-(3) Is there a natural combinatorial way of approaching this problem? Note that integers of the form $H_{n}n!$ are unsigned Stirling numbers of the first kind, so it is plausible that there is a natural combinatorial interpretation of the expression $\text{den}\left(H_{n}^{2}n!\right)$.

-REPLY [6 votes]: Let $k_p(n)$ be defined as the non-negative integer such that $p^{k_p(n)}≤n$ but $p^{k_p(n)+1}>n$. Furthermore let $\epsilon_p(n)$ be the exponent of $p$ in the prime factorisation of $n!$; it is well known that $\epsilon_p(n)=\sum_{k=1}^{\infty}\lfloor\frac{n}{p^k}\rfloor$. Define $\mathcal{P}_n:=\{p \text{ prime } : \frac{n}{2}<p\le n\}$. The smallest $m>5$ with $M(m)\geq 0$ is $m=39$ and the smallest prime of $\mathcal{P}_{39}$ is $23$.
With $r=0$ we obtain $n=897$ and from there on we find many counterexamples.<|endoftext|>
-TITLE: Is there a non brute force way to solve this problem?
-QUESTION [8 upvotes]: A friend of mine asked me to prove or disprove that:

-$$
-\left\lceil\frac{2}{2^{1/n}-1}\right\rceil=\left\lfloor\frac{2n}{\ln 2}\right\rfloor \forall n \in \mathbb{Z^+}
-$$

-First of all, I ran a Python script to numerically compute the values up to $5000$ in order to conjecture about the truth of the proposition. I got a positive result. Then I tried plotting to get a rough idea about the behavior of the functions; the conjecture seemed to be true. Nonetheless, I couldn't prove it. Now I am really frustrated to know that the first counterexample is $777,451,915,729,368$.
-Is an existential proof possible without brute force?

-REPLY [6 votes]: I read about this problem some time ago...
-You can easily see that the difference
-$$\frac{2n}{\ln 2} - \frac{2}{2^{\frac{1}{n}}-1} $$
-is increasing and getting closer to one. If $\frac{2n}{\ln 2}$ is smaller than a certain integer $k$, but close enough to $k$ so that $\frac{2}{2^{1/n}-1}$ is greater than $k-1$, then you have found a counterexample. But asking for an $n$ such that $\frac{2n}{\ln 2}$ is well enough approximated by $k$ is the same as asking for a good enough rational approximation $\frac{k}{n}$ of $\frac{2}{\ln 2}$. The convergents of the continued fraction of $\frac{2}{\ln 2}$ are the best approximations you can get. You just have to calculate them until you reach a counterexample, and you'll probably have to compute just tens of convergents, not billions. $777,451,915,729,368$ should be the denominator of one of these convergents.<|endoftext|>
-TITLE: Obscuring squares of Rubik's cube
-QUESTION [18 upvotes]: This is a combinatorial question related to Rubik's cube $3\times3\times3$ (and, in the end, its generalizations $n\times n\times n$). I assume that the readers are familiar with this puzzle.
-Let's recall some basic terminology and conventions to rule out possible ambiguities. In the following, we use the words state and configuration interchangeably, as exact synonyms.
-We are only interested in configurations of Rubik's cube where it has the shape of a cube (so, we consider rotations of its layers by an angle of $90^\circ$ as atomic transformations, and no incomplete rotations are allowed). Also, we consider configurations that can be obtained from one another by solid rotations of the cube as a whole (without any relative movements of its parts) as identical.
-The cube has $6$ faces, each of which is a $3\times3$ grid of squares — $9$ squares per face, and $54$ squares in total. Each square is of a solid color. Orientation of every single square is irrelevant (in fact, this rule has significance only for central squares). There are $6$ different colors, and there are $9$ squares of each color. There is a single selected state where each face consists of all squares of the same color — this state is called the solved state. A state that can be reached from the solved state by a sequence of valid rotations of layers is called a valid state (the solved state, of course, is a valid state too).
-As a side note, if we allowed disassembling the cube into its parts and reassembling them, then we would get a number of possible states $12$ times larger than the number of valid states (this larger set would be a disjoint union of $12$ equivalence classes under valid rotations, each of the classes called an orbit). We are only interested in valid states in the following.
-Usually, it is assumed that colors of all squares on the cube are visible. Let's consider the possibility that some squares may be obscured (e.g. completely covered by an opaque colorless sticker), so that their colors are not visible.
-In general, obscuring some squares can result in some states becoming visually indistinguishable.
-As trivial examples, obscuring all $54$ squares makes all states indistinguishable, but obscuring only $1$ square (no matter which one) does not make any states indistinguishable.


-What maximal number of squares in Rubik's cube $3\times3\times3$ can be obscured without making any states indistinguishable?

-What maximal number of squares in Rubik's cube $2\times 2\times 2$ can be obscured without making any states indistinguishable?

-What maximal number of squares in Rubik's cube $n\times n\times n$ can be obscured without making any states indistinguishable? Can we find a general formula, recurrence relation, etc. to compute it for every $n\in\mathbb N$?

-REPLY [2 votes]: format of terms used in this answer:
-removal = obscuring (hiding a sticker)
-piece = mini-cube
-face = side of a piece ≡ sticker
-$n$ = one 1-dimensional face unit
-side = whole face of the rubik's cube
-cube = entire rubik's cube
-state = configuration
-type = which form of piece (corner-proper, edge-proper, or edgeless face)
-c = corner_piece (starts with $3C_v$ = three visible faces)
-e = edge-without-corner_piece (starts with $2E_v$)
-f = face-without-edge_piece (starts with $1F_v$)
-$C$ = an individual c face
-$C_r$ = a C removed
-$C_v$ = a visible C
-$C_A$ = all c faces combined
-${C_A}_v$ = all visible c faces
-$E_A$ = all e faces combined
-$F_A$ = all f faces combined
-$T_v$ = all visible faces combined ($C_v$+$E_v$+$F_v$)
-$X_A$ = all deducible faces
-$({X_A})_n$ = arbitrary # deducible faces for $n^3$ cube
-(⇒ solve $∀x=r:$ max$[{(X_A)}_n]$ = ${(T_A)}_n$-min$[{(T_v)}_n]$)

-This question (well, three questions to the same end) boils down to two considerations: what is the minimal number of stickers required to know what the solved state is supposed to look like, and the minimal number needed to retain the identities of them all.
Considering the first part without the second, as few as two diagonally opposite corner pieces would suffice ($2c$, establishing every color's/side's correct alignment to every other). Of course, this defies the second necessity, since not only the other 6 corner pieces' but all the other pieces' identities would become indistinguishable. Considering the second part (which mostly if not wholly absorbs the first), one can afford one piece of each type getting stripped of its 1, 2, and 3 stickers ($6$ so far). Of course there is a lowerbound of $n^2$ (one whole face, i.e. $9$ for the 3×3×3 cube), but we can do better than that for the n=3 cube (edit bolded for emphasis). For n≥[some integer ≥4], the general maximal formula will include n^2; this formula might be able to fit for smaller n values even though the particular algorithm for determining the actual faces to remove from smallish cubes differs from what the formula implies. Consider the n=50 cube.
-To what extent depends upon which variation of 2^2=4 possible ways to interpret the problem as originally presented: do we know correct color-scheme alignment (medium outside information, a reasonable supposition since we already know how to play the puzzle game and have access to the internet, and confirmed in response to Vepir's question); do we know a particular pattern to how the faces were removed (heftier outside information, mostly absorbing the preceding knowledge); do we know both of these (heftiest outside information); or do we know neither (minimum outside information).
-I will approach this problem assuming minimal outside information to start, because considering the difference affordable from there upward seems easier to me, and I am curious how the up to four different general formulas would differ.
-
-Since any valid state with all stickers visible can be deduced as such (by solving it, even if achieving that from a random state might be computationally pricey), they convey the same information; so one sensible way to think of this problem is as subtracting stickers from the solved state. Besides the straightforward description, the response to Vepir's question confirming knowledge of the color scheme may and likely will increase the maximal count by allowing for more removals of non-corner edge pieces. To start, however, I will ignore this fact. Another possible consideration for n>3 (and possibly even n<4) is whether you can presuppose knowledge of a pattern to which faces you hid. When I get to the part of formulating a general pattern, I will assume initially that the obfuscated cube is to be decipherable by a stranger.
-Any piece within a Rubik's cube of side length $n\ge 3$ can be separated into three disjoint categories: corner [3 different colors], edge without corner [2 different colors], and face without either [1 color]. In the case of the 2×2×2 cube, each type shares the same category (is not disjoint). The number of total faces (visible or not) as a function of the integer n is equal to 6⋅(n^2). These might be helpful later in constructing a general formula.
-Focusing now on the 3×3×3 cube, with all 8⋅3=24 corner stickers removed you could still ascertain what state$_{[solved]}$ should look like by using $c_a$=${(c_a)}_v$ to track a course around the cube. But that would also make those 8 c indistinguishable, so 8⋅2=16 removals allowed. Remove the remaining face from one of the c, making it blank, and 1 from a single f = 16+2=18 removals allowed (lower bound), since these single "extra" oddball removals from two different types of the three do not by themselves prevent deduction of the solved state. That leaves the 10 untouched e and 7 one-sticker c pieces to consider.
Well, at least one e could be totally removed, so 18+1=19 removals lower_bound for the 3×3×3 cube. -Here is a good place to pause to calculate the formulas for the different types of pieces (by how many original faces contained) as equations based on n. -corners: 8 (constant); $c_n$=8. non-corner edges: as few as 0, for n={1,2}. For every n beyond that, it increases by 10; $e_n$=10(n-2). non-edge/corner faces: all the rest. For n={1,2}, 0. For every n beyond that, it increases by faces times square of two less than n: $f_n$=6(n-2)^2. Any beyond 1 from this category becomes indistinguishable with its only sticker removed, so +1 should be the only effect to the general formula from this type. -Back to optimizing the 3×3×3 cube. None of the remaining 7 C could get removed, since that would make those c indistinguishable. That leaves the nine remaining e to focus on, or possibly all ten if it can ensure a greater maximum (past lowerbound of 20 or alternative removals). Focusing on the 9 is simpler than 10 and likely the correct path so I shall. Every c piece serves for this purpose also as an e piece, but with all of them out for use except for one serving now as an f piece, the remaining e pieces are the only pieces showing more than 1 side/color. The question now is how many of the eight two-side relationships can be removed from the cube before the full identities of the removed (non-corner) ones cannot be deduced. I believe the answer is 4: two on opposite sides of and sharing a side, the other two on reverse side diagonal from the first 2, allowing for an unobstructed route linking all six sides (contiguous, since every side is contiguous). One of these is accounted for already. 
So long as the other absent face of that piece is considered (rather than preserving it still on) when deciding the remaining 3 to remove, which I believe means choosing the same "obverse" side (as opposed to the opposite "reverse" side) as the first two mentioned (proving this is another task), it should work out. That results in 19+3 = 22 removals from the 3×3×3 cube's 54 faces without knowing the color scheme. If you know the color scheme, then one whole side could be "removed" (possibly better), so long doesn't cost an existing deduction, which I think it wouldn't if you did so unto the remaining "reverse" side: 22+2 = 24 removals from the 3×3×3 with prior knowledge of the color scheme. -This brings up an important consideration for a general formula. While for n<4 a majority of the removals came from corner pieces, as n increases the corner pieces remains constant whereas the edge pieces increases linearly. As such, for higher n values the majority of faux "data-hiding" will be unto non-corner edge pieces, in alternating half-visible fashion. To formulate a general equation, at first I will suppose no prior knowledge of the color scheme nor methodology to the removals. Considering for a moment the particular case of a 4×4×4, there are 20⋅2=40 E, compared to 8⋅3 = 24 C. Since the corner-pieces reveal more information about the whole cube (particularly without prior info), retaining most of them in favor removing all of the non-corner edgepiece stickers likely yields higher value of lossless data-compression. I'm not sure yet how to formulate this into a general equation without if/else-style verbiage included, though it is probably possible. 
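For reference, the standard tally of stickers by piece type (using the $8$ corners, $12$ edges and $6$ faces of a cube — note this gives $12(n-2)$ two-sticker edge pieces, somewhat more than the edge estimates used above) can be checked mechanically:

```python
def sticker_counts(n):
    # Stickers by piece type on an n×n×n cube (n >= 2), standard tally:
    corner = 8 * 3                # 8 corner pieces, 3 stickers each
    edge = 12 * (n - 2) * 2       # 12(n-2) non-corner edge pieces, 2 stickers each
    face = 6 * (n - 2) ** 2       # 6(n-2)^2 one-sticker face pieces
    return corner, edge, face

# sanity check: the three types account for all 6n^2 stickers
for n in range(2, 20):
    assert sum(sticker_counts(n)) == 6 * n * n
```

For the 3×3×3 this gives the familiar split $24 + 24 + 6 = 54$.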
-I will come up with a general formula and post it as a comment or update this post, but for now I'll leave off with a summary of some relationships to n-width to consider:
-corner stickers = ${(C_A)}_n = 3{c_a} = 3⋅8$
-edge stickers = ${(E_A)}_n = 2{(e_a)}_{n-2} = 2[10(n-2)]$
-face stickers = ${(F_A)}_n = 1{(f_a)}_{n-2} = 1[6(n-2)^2]$
-total stickers = ${(T_A)}_n = 6⋅n^2$
-Considering smaller $n^3$ cubes (to help figure out a general pattern and answer the second question):
-For n=1, no prior knowledge, 1 face could get removed. With prior knowledge of correct scheme, presumably all except two = 4 of the six could get removed.
-For n=2, no prior knowledge, trickier. $T_2$ = ${(C_A)}_2$ = 24. A maximum bound entails linking all sides, from 3 C with 2 c each ⇒ max$[{(X_A)}_2]$ = ${(T_A)}_2$-min$[{(T_v)}_2]$ = max$[{(X_A)}_2]$ = $24-(2⋅3)$ = 18. For n=2 with prior knowledge of color scheme: ..

-For n=4, no prior knowledge: min$[{(E_v)}_4]$ = min$[{(e_v)}_4]$÷2-1 =20÷2-1=9 ⇒ max$[{(E_r)}_4]$=${(E_A)}_4$-9 = 40-9 =31. max$[{(F_r)}_4]$=1. max$[{C_r}_4]$=8⋅2+1=17. Assuming no contradictions (noting that n=4 is even, n=5 is odd,..) max$[{(X_A)}_4]$=${(T_r)}_4$ = max$[{C_r}_4]$+max$[{(E_r)}_4]$+max$[{(F_r)}_n]$ = 17+31+1 = {49 = upperbound}.
-$∀n∈N:$upperbound$[{(X_A)}_n]$ = max$[{(C_r)}_n]$+max$[{(E_r)}_n]$+max$[{(F_r)}_n]$,
-⇒ max$[{(X_A)}_n] ≤ 17+(½{(E_A)}_n+1)+1$
-⇒ max$[{(X_A)}_n] ≤ 17+1+(½(2[10(n-2)])+1)$
-⇒ max$[{(X_A)}_n] ≤ (10n-20+1)+18$
-⇒ max$[{(X_A)}_n] $?$=$?$ 10n-2$ doesn't fit; too high
-but $[{(X_A)}_n]=10n-9$ seems pretty close for n<4

-large n value: n=50
-lowerbound[max$[{(X_A)}_n]$] = $n^2+3C_v+2E_v$ = $n^2+3+2$
-lowerbound[max$[{(X_A)}_{50}]$] = $50^2+5$: one whole side removed plus one whole opposite corner and one whole middle edge piece (these latter 5 removals might not all apply, but are guaranteed in a simple quadratic lowerbound).
So what other faces can get removed (and still get ascertained by a stranger familiar with the rules of the puzzle but not the color scheme used)?<|endoftext|> -TITLE: Find polynomials $p_n$ on $[0,1]$ with integral $=3$ and converges pointwisely to $0$ -QUESTION [6 upvotes]: Prove that there is a sequence of polynomials $\{p_{n}\}$ such that $p_{n} \to 0$ pointwise on $[0,1]$, but such that -$$\int_{0}^{1}p_{n}(x)dx=3.$$ - -My thought: - I was thinking of working with a seuqence of functions that has pointwise limits to equal to $0$ but the integral to equal $3$. And then using the Weiestrass approximation theorem to say that there is a $p_n$ that would approximate the functions. I just can't think of the functions that would work. - -REPLY [5 votes]: Take $p_n(x) = 3x^n(1-x)(n^2+3n+2)$. Each $\displaystyle\int_0^1p_n = 3$ and $\displaystyle\lim_{n \to \infty} p_n(x) = 0$ for $x \in [0,1]$.<|endoftext|> -TITLE: help understanding a paragraph from Linear Algebra Done Right -QUESTION [10 upvotes]: I am attempting to work my way through the 3rd edition of "Linear Algebra Done Right", but there's a paragraph on page 14 that I don't understand. I have struggled with it myself for a few hours and have come to the conclusion that I need some help. But first some background. -Axler lets $\textbf{F}$ stand for either the set of real or complex numbers. (He mentions fields but doesn't use them directly.) He also uses the term "list" instead of "tuple". -The problematic paragraph is preceded by this: - - -If $S$ is a set, then $\textbf{F}^S$ denotes the set of functions from $S$ to $\textbf{F}$. -For $f, g \in \textbf{F}^S$, the sum $f + g \in \textbf{F}^S$ is the function defined by - $$(f + g)(x) = f(x) + g(x)$$ - for all $x \in S$. -For $\lambda \in \textbf{F}$ and $f \in \textbf{F}^S$, the product $\lambda f \in \textbf{F}^S$ is the function defined by - $$(\lambda f)(x) = \lambda f(x)$$ - for all $x \in S$. 
-
-As an example of the notation above, if $S$ is the interval [0,1] and $\textbf{F} = \textbf{R}$, then $\textbf{R}^{[0,1]}$ is the set of real-valued functions on the interval [0,1].

-So far, so good. This is where I get lost:

-Our previous examples of vector spaces, $\textbf{F}^n$ and $\textbf{F}^\infty$, are special cases of the vector space $\textbf{F}^S$ because a list of length $n$ of numbers in $\textbf{F}$ can be thought of as a function from {1, 2, ..., $n$} to $\textbf{F}$ and a sequence of numbers in $\textbf{F}$ can be thought of as a function from the set of positive integers to $\textbf{F}$. In other words, we can think of $\textbf{F}^n$ as $\textbf{F}^{\{1,2,...,n\}}$ and we can think of $\textbf{F}^\infty$ as $\textbf{F}^{\{1,2,...\}}$.

-The general idea seems straightforward to me. Let $S$ be the set of all tuples in $\textbf{F}^n$ or $\textbf{F}^\infty$, and now $\textbf{F}^S$ is a vector space. Is it really that simple? But what is he saying?
-[My background: I am a computer programmer who took a "normal" linear algebra class in college, who has a new-found love for higher mathematics, but not enough time and money to go back to school.]

-REPLY [11 votes]: It's actually simpler but much more subtle than that.
-Let $\mathbb R^{\{1, ..., n\}}$ be the set of all functions from $\{1, 2, ..., n\} \rightarrow \mathbb R$.
-For example, if $f(x) = x^2 - 3$, then $f\in \mathbb R^{\{1, ..., n\}}$.
-But $f$ can be thought of as an $n$-tuple. If, for instance, $n= 3$, then
-$f = (-2, 1, 6) \in \mathbb R^3.$
-If $f(x) = e^x$ then $f \in \mathbb R^{\{1, 2, 3\}}$ corresponds to $(e, e^2, e^3) \in \mathbb R^3$.
-In other words: Every 3-tuple $(x_1, x_2, x_3)$ in the vector space $\mathbb R^3$ can be thought of as a function $f:\{1, 2, 3\} \rightarrow \mathbb R$ where $f(1) = x_1; f(2) = x_2; f(3) = x_3$. The set of functions and the set of n-tuples are the same thing.<|endoftext|>
-TITLE: When solving inequalities, does $(x-9)$ count as a negative number?
-QUESTION [6 upvotes]: I know that when solving inequalities, you reverse the sign when dividing or multiplying by a negative number, but right now I need to solve $\frac{x+9}{x-9}≤2$. Do I need to reverse the sign when I multiply $x-9$ across?
-I'm sorry if this is outrageously obvious...

-REPLY [2 votes]: This is an inequality. You should not multiply across by $(x-9)$ at all.
-Instead you should bring the $2$ over to the left and combine the fraction through a common denominator. Verify you arrive at: $\frac{-x+27}{x-9}≤0$. Now one should make separate number lines for the numerator and the denominator and verify the signs on them:
-NUM: $-x+27$ is plus for $x<27$, zero at $x=27$, and minus for $x>27$.
-DENOM: $x-9$ is minus for $x<9$ and plus for $x>9$.
-When you "divide" the number lines, you look for where the quotient is negative (or zero), which gives $x<9$ or $x\geq27$.<|endoftext|>
-TITLE: Is proving "not $A \iff$ not $B$" the same as proving $A \iff B$?
-QUESTION [5 upvotes]: Is proving "not $A \iff$ not $B$" the same as proving $A \iff B$?
-I'm wondering because I'm looking at a statement in my book of the form $A \iff B$ yet the proof shows not $A \iff$ not $B$ and then says it is complete.
-So I think they would be the same but I don't exactly see why.
-Can anyone explain to me please?

-REPLY [3 votes]: I'll assume you know the contrapositive: proving $p\rightarrow q$ is the same as proving $\neg q \rightarrow \neg p$.
-With this, it's easy:
-$$(p\iff q) \equiv ((p\rightarrow q)\wedge (q\rightarrow p))\equiv ((\neg q\rightarrow \neg p)\wedge (\neg p\rightarrow \neg q))\equiv (\neg q \iff \neg p)$$

-REPLY [2 votes]: Think of it this way: $A\iff B$ is two statements: $A\implies B$ and $B\implies A$.
-Using prime to denote "not", by contraposition we have $B'\implies A'$ and $A'\implies B'$. Thus $A'\iff B'$.
-Since the same argument also runs in the other direction, we have $(A\iff B)\iff (A'\iff B')$<|endoftext|>
-TITLE: Is it possible to prove uniqueness without using proof by contradiction?
-QUESTION [10 upvotes]: Until now, I've been presented with several proofs of uniqueness:

-$\emptyset$
-$1$
-$0$

-For set theory, fields, etc. And it seems that they all rely on proof by contradiction. At least, at the present moment, I've searched a bit for it and all the proofs seem to employ this technique. So is it possible to prove it without it?

-REPLY [9 votes]: You're probably structuring uniqueness proofs like, "We assume there are two different things, we prove they are the same, a contradiction." Just delete the word "different". Now it's not by contradiction.
-This is a common quasi-error where people start by assuming the opposite out of some sort of training or conditioning, then don't actually use the false assumption.
-Per your comment, I really do see why this feels against the grain. You really shouldn't think of a proof that says, "Suppose there are two objects satisfying a certain property, then *insert math* it turns out they were the same object all along!" as a contradiction.
-With that attitude you'll probably fall for bogus proofs that $0=1$, like: Let $a=b$. Then $a-a=b-a$ so $\frac{a-a}{b-a} = 1$. But $a-a=0$ so $0=1$. If you don't see the flaw, here's a hint: think about what $b-a$ is. This is the kind of mistake you will make when your intuition thinks of something called $a$ and something called $b$ as two different objects because they have different labels. If $x=y$ is that a contradiction? No, that sounds silly and it is.
-On a tangent, have you thought of the proof that $0.999\ldots = 1$? It's an amazing problem and many, many people have the wrong intuition - that the numbers must be different - because they are written differently. It turns out the same number may have two different ways to write it down by decimal expansion. People are very uncomfortable with that.
They've never thought of numbers as abstract entities existing independently of their decimal representations, and if the decimal representations - ink on paper - are different, then the numbers must be. But this is false. In fact, exactly those real numbers with a terminating decimal expansion (other than $0$) have two decimal representations, one ending in repeating $0$s and one ending in repeating $9$s; every other real number has only one.<|endoftext|>
-TITLE: Possible cardinalities of the union of two sets
-QUESTION [6 upvotes]: So the question is:

-What are the possible cardinalities of the union of the two sets $A$
- (where $[A] = 5$) and $B$ (where $[B] = 9$)

-So, the smallest $[A \cup B]$ is when all elements of $A$ are also elements of $B$.
-Then, $[A \cup B]$ in this case is:

-(those 5 similar elements) + (the remaining 4 in B) = 9

-And the largest $[A \cup B]$ is when no element of A is in B.
-Then, $[A \cup B]$ in this case is:

-$[A] + [B] = 14$

-Then the possible cardinalities of $[A \cup B]$ are:

-9, 10, ... , 14

-I don't understand how my reasoning is incorrect.
-My book says that 6 is a possible cardinality.
-The only explanation I could think of is that one or both of the sets has duplicate elements. But, wouldn't the cardinality of a set with duplicate elements be the amount of unique elements?
-Edit: I actually worded the question for the sake of my explanation. The actual question is:

-We form the union of a set with 5 elements and a set with 9 elements. Which of the following numbers can we get as the cardinality of the union: 4, 6, 9, 10, 14, 20

-REPLY [3 votes]: Your reasoning is correct. More simply, $6$ can't possibly be the cardinality of the union, since the union must contain at least as many elements as $B$!
It seems that the book just has an error in the solution.<|endoftext|>
-TITLE: Reference request for an identity involving binomial coefficients
-QUESTION [6 upvotes]: The identity is

-$$\sum_{i\ge 0}(-1)^i \binom{\frac{s^k + s^{-k} - 10}{4}}{i}\binom{\frac{s^k + s^{-k} - 10}{4}+4}{\frac{gs^k + 2g^{-1}s^{-k} - 4}{8}-i}= 0$$

-where $k\gt 1$, $s = 3+2\sqrt{2}$ and $g = 2-\sqrt{2}$.
-Examples:
- $$\sum_{i\ge 0}(-1)^i \binom{6}{i}\binom{10}{2-i}= 0$$
- $$\sum_{i\ge 0}(-1)^i \binom{47}{i}\binom{51}{14-i}= 0$$
- $$\sum_{i\ge 0}(-1)^i \binom{286}{i}\binom{290}{84-i}= 0$$
-...etc.
-It comes from this question.
-Are there other identities like this one, where the integers are obtained from a recursion?
-A generalisation (in the neater presentation suggested by @JeanMarie) would be, for a given positive integer $p$

-$$\sum_{i\ge 0}(-1)^i \binom{u_k}{i}\binom{u_k+p}{v_k-i}= 0$$

-This was the case $p=4$. The cases $p\le 3$ are dealt with in the previous question.
-For $p=5$ we have four pairs of sequences $(u_k,v_k)$

-$$u_k=18u_{k-1}-u_{k-2}+48\ \ \ v_k=18v_{k-1}-v_{k-2}+8$$ with
- $$u_0=2,u_1=-2\ \ \ v_0=6,v_1=0$$ $$u_0=-1,u_1=-1\ \ \ v_0=2,v_1=0$$
- $$u_0=-1,u_1=-1\ \ \ v_0=3,v_1=1$$ $$u_0=-2,u_1=2\ \ \ v_0=1,v_1=3$$

-For $p=6$ we have two pairs of sequences $(u_k,v_k)$

-$$u_k=14u_{k-1}-u_{k-2}+42 \ \ \ v_k=14v_{k-1}-v_{k-2}+6$$ with
- $$u_0=-1,u_1=-2\ \ \ v_0=4,v_1=0$$ $$u_0=-2,u_1=-1\ \ \ v_0=2,v_1=0$$

-For $p=8$ we have two pairs of sequences $(u_k,v_k)$

-$$u_k=6u_{k-1}-u_{k-2}+18 \ \ \ v_k=6v_{k-1}-v_{k-2}+2$$ with
- $$u_0=-1,u_1=-2\ \ \ v_0=5,v_1=1$$ $$u_0=-2,u_1=-1\ \ \ v_0=3,v_1=1$$

-Question: Find other pairs of sequences $(u_k,v_k)$, $v_k \lt u_k$ for $p=7$ or $p\ge 9$ (if any).
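The displayed examples (and fresh terms generated by the recurrences) are cheap to check by machine; a small sketch in exact integer arithmetic, with names of my own choosing:

```python
from math import comb

def alt_sum(u, p, v):
    # sum_{i>=0} (-1)^i C(u, i) C(u+p, v-i); the terms vanish once i > v
    return sum((-1) ** i * comb(u, i) * comb(u + p, v - i) for i in range(v + 1))

# the three examples quoted above, all with p = 4
examples = [(6, 2), (47, 14), (286, 84)]
results = [alt_sum(u, 4, v) for u, v in examples]  # each should be 0
```

The same routine lets one hunt for candidate pairs $(u_k, v_k)$ for other values of $p$ by brute force.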
- -REPLY [2 votes]: Two pieces of information: -1) I advise you to read Chapter 5 of the marvelous book "Concrete Mathematics" (Graham, Knuth, Patashnik) Addison-Wesley, 1989, which is in the same spirit as the book "A=B" that has been recommended by @Gabriel Nivasch. -2) Your expression should be simplified in this way: -$$\sum_{i\ge 0}(-1)^i \binom{u_k}{i}\binom{u_k+4}{v_k-i}= 0$$ -where $u_n$ and $v_n$ are the sequences defined respectively by the second-order recurrence relations: -$$u_{n+1}=6u_n-u_{n-1}+10 \ \ \text{with} \ \ u_0=6 \ \text{and} \ u_1=47$$ -and -$$v_{n+1}=6v_n-v_{n-1}+2 \ \ \text{with} \ \ v_0=2 \ \text{and} \ v_1=14$$ -In this way you have a neater presentation without spurious irrationals.<|endoftext|> -TITLE: Simple method to solve a geometry question for junior high school student -QUESTION [10 upvotes]: Recently, my sister asked me a geometry question that came from her mock examination; please see the following graph. - -Here, - -$\angle DOE=45°$ -the length of $DE$ is constant, and $DE=1$. Namely, $OD,OE$ are changeable. -$\triangle DEF$ is an equilateral triangle. - - -Q: What is the maximum length of $OF$? - - -My solution -Denote $OD,OE,\angle ODE$ as $x,y,\theta$, respectively.
-Via the cosine theorem (law of cosines) -$$ - \begin{cases} - ED^{2} = OE^{2} + OD^{2} - 2OE \times OD\cos \angle EOD \\[6pt] - \cos \theta = \dfrac{OD^{2} + DE^{2} - OE^{2}}{2 OD \times DE} - \end{cases} -$$ -$$ -\begin{align} - 1^{2} &= x^{2} + y^{2} - 2xy\cos 45^{\circ} \\ - &= x^{2} + y^{2} - \sqrt{2} xy -\end{align} -$$ -$$ - \implies - \begin{cases} - \color{red}{xy} = \dfrac{x^{2} + y^{2} - 1}{\sqrt{2}} \color{red}{\leq} - \dfrac{x^{2} + y^{2}}{2} \implies x^{2} + y^{2} \color{red}{\leq} - 2 + \sqrt{2} \\[6pt] - \cos \theta = \dfrac{x^{2} + 1 - y^{2}}{2x} - \end{cases} -$$ -Via the sine theorem (law of sines) -$$ - \frac{y}{\sin \theta} = \frac{DE}{\sin \angle EOD} = \frac{1}{\sin - 45^{\circ}} \implies \sin \theta = \frac{y}{\sqrt{2}} -$$ -$$\begin{align} - OF^{2} &= DO^{2} + DF^{2} - 2DO \times DF\cos \angle ODF \\ - &= x^{2} + 1^{2} - 2x\cos(\theta + 60^{\circ}) \\ - &= x^{2} + 1 - 2x(\cos \theta \cos 60^{\circ} - \sin \theta \sin - 60^{\circ}) \\ - &= x^{2} + 1 - 2x\left(\frac{x^{2} + 1 - y^{2}}{2x} \frac{1}{2} - - \frac{y}{\sqrt{2}} \frac{\sqrt{3}}{2}\right) \\ - &= \frac{x^{2} + y^{2} + 1}{2} + \frac{\sqrt{3} xy}{\sqrt{2}} \\ - &= \frac{x^{2} + y^{2} + 1}{2} + \frac{\sqrt{3}}{\sqrt{2}} - \frac{x^{2} + y^{2} - 1}{\sqrt{2}} \\ - &= \frac{(\sqrt{3} + 1)(x^{2} + y^{2})}{2} + \frac{1 - \sqrt{3}}{2} - \\ - &\color{red}{\leq} \frac{(\sqrt{3} + 1)(2 + \sqrt{2})}{2} + \frac{1 - - \sqrt{3}}{2} = \frac{1}{2}(3 + \sqrt{3} + \sqrt{2} + \sqrt{6}) -\end{align} -$$ -However, as a junior high school student, she has not learned the following formulae: - -sine theorem -cosine theorem -$\cos(x+y)=\cos x \cos y-\sin x \sin y$ -fundamental inequality $x y\leq \frac{x^2+y^2}{2}$ - -Question - -Is there another simple/elegant method to solve this geometry question? - - -Update -Thanks for MXYMXY's hint - -Here, the line $O'F$ passes through the center of the circle. Namely, $O'D=OF$ -In Rt $\triangle O'OF$, the inequality $O'F>OF$ holds. - -REPLY [2 votes]: The hardest step is the first one: guessing the correct answer.
By symmetry, it's plausible that $OF$ bisecting $\angle DOE$ will give either a maximum or minimum length. Another likely guess is when $O$, $D$, and $E$ are collinear, but for the given angles of $45^\circ$ and $60^\circ$ that would make $F$ lie outside the angle $DOE$, so "common sense" would eliminate that possibility. -So, how to prove the symmetrical diagram does give the longest line? Imagine the triangle $DEF$ is fixed in space. (Note, the diagram given in the question, including the angle $\theta$, is perhaps designed to mislead the student into thinking about $O$ being fixed, and $D$ and $E$ moving). -The locus of points $O'$ such that $\angle DO'E$ is constant is the arc of a circle with chord $DE$ and center $C$ as in @Ameet Sharma's diagram. On the other hand, the locus of points $O'$ such that $O'F$ is constant is a circle centered at $F$. At high school geometry level (and even in Euclid!) it is "obvious" that one circle lies outside the other one, and that they have a common tangent at the point $O$. -Now that you know the symmetrical figure is the solution, calculating the length is simple trigonometry.<|endoftext|> -TITLE: Constructing integers as equivalence classes of pairs of natural numbers -QUESTION [5 upvotes]: Can you tell me how to construct the integer numbers ($\mathbb Z$) as equivalence classes of pairs of natural numbers ($\mathbb N$)? And also tell me how the commutative and associative laws carry over under the equivalence relation. Be sure to use only addition and multiplication. - -REPLY [3 votes]: Yes, this is possible. Consider the relation $(a,b) \sim (c,d) \text{ iff } a+d = c+b$. -Verify that $\sim$ is an equivalence relation and let $[(a,b)]$ be the equivalence class of $(a,b)$ with respect to $\sim$. Define -$[(a,b)] +_\sim [(c,d)] := [(a+c,b+d)]$ and $-_\sim[(a,b)] := [(b,a)]$. -Prove that these are well-defined functions and that -$$\pi \colon (\{ [(a,b)] \colon a,b \in \mathbb N \}, +_\sim) \to (\mathbb Z , +), [(a,b)] \mapsto a-b$$ -is a group isomorphism.
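A small computational sketch of this construction (the class name `PairInt` is my own choice; multiplication is deliberately omitted, since defining $\cdot_\sim$ is left as the exercise):

```python
class PairInt:
    """An integer encoded as an equivalence class of pairs (a, b) of naturals,
    with (a, b) ~ (c, d) iff a + d == c + b (morally, the pair represents a - b)."""

    def __init__(self, a, b):
        self.a, self.b = a, b

    def __eq__(self, other):              # (a,b) ~ (c,d)  iff  a + d == c + b
        return self.a + other.b == other.a + self.b

    def __add__(self, other):             # [(a,b)] +~ [(c,d)] := [(a+c, b+d)]
        return PairInt(self.a + other.a, self.b + other.b)

    def __neg__(self):                    # -~ [(a,b)] := [(b,a)]
        return PairInt(self.b, self.a)

    def pi(self):                         # the isomorphism pi: [(a,b)] -> a - b
        return self.a - self.b

x, y = PairInt(7, 3), PairInt(2, 9)       # representatives of 4 and -7
assert PairInt(5, 1) == PairInt(9, 5)     # same class: both represent 4
assert (x + y).pi() == x.pi() + y.pi()    # pi is additive
assert (-y).pi() == -y.pi()               # pi respects negation
```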
-I leave it to you to define $\cdot_\sim$ such that $\pi$ becomes a ring isomorphism.<|endoftext|> -TITLE: Question on upper triangular matrix with complex eigenvalues with modulus less than 1 -QUESTION [6 upvotes]: This is problem 16, Section 6.B from Linear Algebra Done Right, 3rd Edition. -Suppose the field is $\mathbb{C}$, $V$ is finite-dimensional, $T \in \mathcal{L}(V)$, all the eigenvalues -of $T$ have absolute value less than 1, and $\epsilon > 0$. - Show that there exists a positive integer $m$ such that $||T^m v|| < \epsilon ||v|| \; \forall \; v \in V$. -At this point in the book Jordan form has not been proved, so all I can use is Schur's Theorem, which guarantees an upper triangular matrix with the eigenvalues on the diagonal. -Could anyone give me a hint on how to proceed? I tried computing powers of $T$, but while I get $\lambda^m$ on the diagonal and the same upper triangular structure, I am unable to manage the off-diagonal entries and control their growth. -Note that in this case, if the matrix is split into a diagonal part and a strictly upper triangular part as $T = D + U$, then $D$ and $U$ need not commute, because the entries on the diagonal of $D$ are unequal. - -REPLY [9 votes]: Here is what I had in mind when I added that exercise to the third edition of Linear Algebra Done Right: -By Schur's Theorem (which is in the same section as this exercise), there is an orthonormal basis of $V$ such that $T$ has an upper-triangular matrix $A$ with respect to that basis. Orthonormality is important so that we can compute norms. We will work with $A$ instead of $T$ and $z \in \mathbf{C}^n$ instead of $v$. -The entries on the diagonal of $A$ are the eigenvalues of $T$. Thus all the diagonal entries of $A$ have absolute value less than $1$. -If we replace each entry in $A$ with its absolute value and each coordinate of $z$ with its absolute value, then $\|A^mz\|$ gets larger or remains the same for each positive integer $m$, and $\|z\|$ remains the same.
Thus we can assume without loss of generality that each entry of $A$ is nonnegative. -If we now replace each entry on the diagonal of $A$ with the largest element on the diagonal of $A$, then $\|A^mz\|$ gets larger or remains the same for each positive integer $m$. Thus we can assume without loss of generality that there exists a nonnegative number $\lambda < 1$ such that every entry on the diagonal of $A$ equals $\lambda$. -We can write -$$ -A = \lambda I + N, -$$ -where $N$ is an upper-triangular matrix whose diagonal entries all equal $0$. Thus $N^n = 0$. -Let $m$ be a positive integer with $m > n$. Then -\begin{align} -A^m &= (\lambda I + N)^m \\[6pt]&= \lambda^m I + m \lambda^{m-1} N + \tfrac{m(m-1)}{2} \lambda^{m-2} N^2 \\[6pt] -&\quad + \tfrac{m(m-1)(m-2)}{2 \cdot 3} \lambda^{m-3} N^3 + \dots + \tfrac{m(m-1) \dotsm (m - n + 1)}{(n-1)!} \lambda^{m-n+1} N^{n-1}, -\end{align} -where the other terms in the binomial expansion do not appear because $N^k = 0$ for $k \ge n$. In the expression above, the coefficients of the $n$ matrices $I$, $N$, $N^2$, $N^3$, $\dots$, $N^{n-1}$ are all bounded by -$$ -\frac{m^n \lambda^m}{\lambda^{n-1}}. -$$ -Think of $n$ and $\lambda$ as fixed. Then the limit of the expression above as $m \to \infty$ equals $0$ (because $0 \le \lambda < 1$). -Because the sum above has only a fixed number $n$ of terms, we thus see that by taking $m$ sufficiently large, we can make all the entries of the matrix $A^m$ have absolute value as small as we wish. In particular, there exists a positive integer $m$ such that every entry of $A^m$ has absolute value less than $\epsilon/n$. -The definition of matrix multiplication and the Cauchy-Schwarz Inequality imply that for each $j$, the entry in row $j$, column $1$ of $A^mz$ has absolute value at most $\epsilon \|z\|/\sqrt{n}$. Thus $\|A^mz\| \le \epsilon \|z\|$, as desired.
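The behavior in this argument (transient growth from the nilpotent part, eventually overwhelmed by the factor $\lambda^m$) is easy to observe numerically; a sketch with an illustrative $2\times 2$ matrix:

```python
import numpy as np

# Upper triangular, both eigenvalues 0.9 (absolute value < 1), large nilpotent part.
A = np.array([[0.9, 10.0],
              [0.0, 0.9]])

# Here A^m = 0.9^m * I + m * 0.9^(m-1) * N with N = [[0, 10], [0, 0]], N^2 = 0.
norms = [np.linalg.norm(np.linalg.matrix_power(A, m), 2) for m in (1, 10, 400)]
print(norms)
assert norms[1] > norms[0]   # transient growth around m ~ 10 ...
assert norms[2] < 1e-9       # ... then geometric decay: ||A^400 v|| < 1e-9 ||v|| for every v
```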
-This exercise is, in my opinion, one of the harder exercises in the third edition of Linear Algebra Done Right if one uses only the tools available at that stage of the book. The proof becomes much clearer and cleaner if one uses the tools associated with the spectral radius in a Banach algebra, which shows the power of those tools.<|endoftext|> -TITLE: Does there exist an injective function $f:\mathbb R^2 \to \mathbb R$ such that $f$ is continuous in one of the variables $x$ or $y$? -QUESTION [5 upvotes]: Does there exist an injective function $f:\mathbb R^2 \to \mathbb R$ such that $f$ is continuous in one of the variables $x$ or $y$? I only know that such an injection cannot be continuous in each real variable $x$ and $y$. Please help. Thanks in advance. - -REPLY [2 votes]: No. Here is an outline: -Suppose $g:\mathbb{R} \to \mathbb{R}$ is continuous and injective; then $g(\mathbb{R})$ is a nontrivial interval. -Suppose $\{I_\alpha\}$ is an uncountable collection of nontrivial intervals; then at least two of the intervals overlap (each nontrivial interval contains a rational, and there are only countably many rationals). -Now consider the uncountable collection of nontrivial intervals $\{f(\mathbb{R}\times\{y\})\}_{y\in\mathbb{R}}$: its members are pairwise disjoint because $f$ is injective, contradicting the previous step.<|endoftext|> -TITLE: How to simplify the nested radical $\sqrt{1 - \frac{\sqrt{3}}{2}}$ by hand? -QUESTION [7 upvotes]: I was solving a mock Mathcounts contest (.pdf) written by a user on the Art of Problem Solving Forums. In problem #24 the only thing I couldn't do by hand was simplify the radical mentioned above. Note that the contest should involve math topics accessible to a high performing Mathcounts middle school student.
- -How do you simplify $$\sqrt{1 - \frac{\sqrt{3}}{2}}$$ - -REPLY [6 votes]: Here is another way, to add to the collection. -You can also use the formula -$$\sqrt{a\pm\sqrt b}=\sqrt{\frac{a + \sqrt{a^2-b}}{2}}\pm\sqrt{\frac{a - \sqrt{a^2-b}}{2}}$$ -Then -$$\sqrt{1-\frac{\sqrt3}{2}}=\frac12\sqrt{4-\sqrt{12}}=\frac12 \cdot\left(\sqrt{\frac{4 + \sqrt{4^2-12}}{2}}-\sqrt{\frac{4 - \sqrt{4^2-12}}{2}} \right)=$$ -$$=\frac12 \left(\sqrt{\frac{4 + \sqrt{4}}{2}}-\sqrt{\frac{4 - \sqrt{4}}{2}} \right)=\frac12 \left(\sqrt{\frac{6}{2}}-\sqrt{\frac{2}{2}} \right)=\frac 12(\sqrt3-1)$$<|endoftext|> -TITLE: if $f(x + y) = f(x)f(y)$ is continuous, then it has to be injective. -QUESTION [10 upvotes]: Let $f\colon \Bbb R \rightarrow \Bbb R$ be a non-constant function such that $f(a + b) = f(a)f(b)$ for all real numbers $a$ and $b$. -Prove that if $f(x + y) = f(x)f(y)$ is continuous, then it has to be injective. -I derived a few useful properties of this function, such as -$f(x)\neq0$; $f(0) = 1$; $f(x) = \frac{1}{f(-x)}$. -In order to prove $f$ is injective, I need to prove that -$f(x) = f(y)$ implies that $x = y$. -I suppose that there are $x, y \in \Bbb R$ such that $f(x) = f(y)$ -$$f(x) = \frac{1}{f(-y)}$$ -$$f(x)f(-y) = 1$$ -$$f(x - y) = 1$$ -To finish the proof I need to prove $f(x) = 1$ implies $x = 0$. This is where I got stuck. -I don't know if I'm on the right track. Any suggestion is much appreciated. - -REPLY [5 votes]: Hint: show that $f(t)=f(\frac{t}{2})^2$. Deduce $f\geq 0$. Let $G=\lbrace x\in{\mathbb R} \mid f(x)=1\rbrace$. Then $G$ is an additive subgroup of $\mathbb R$. If $f$ is continuous it is also a closed subgroup. Also, show that $t\in G \Rightarrow \frac{t}{2} \in G$ using the above identity. Deduce that $G$ is dense if $G\neq \lbrace 0 \rbrace$. Conclude.<|endoftext|> -TITLE: Does the function $\log(1+\exp(x))$ have a conventional name?
-QUESTION [6 upvotes]: Does the function $\log(1+\exp(x))$ (or the function $\log(1+\exp(-x))$) have a conventional or at least fairly common name? -Alternatively, is it closely related to some reasonably well-known, named function? -[It arises in a problem I'm working on and it would be nice to use a reasonably standard terminology and notation rather than either writing $\log(1+\exp(g(x)))$ each time (where $g$ is a fairly complicated set of terms), or coming up with a non-standard notation for it (like $\omega(g(x))$ say) and then discovering I could have used a symbol and term for it that are already recognized. It would also help in terms of locating properties, should I end up needing any beyond the simple ones I've already derived.] - -REPLY [7 votes]: The name softplus was coined in 2001 (Dugas et al.), and seems to be in fairly common use by now.<|endoftext|> -TITLE: Fundamental group of Mapping Torus $M_h$: how to prove that the action is really $h_*$? -QUESTION [5 upvotes]: Let us work with the following setting: - -let $h$ be an automorphism (assume base point preserving) of a genus $g$ surface ($g>0$) to itself, $h \colon (\Sigma^g,\ast) \to (\Sigma^g,\ast)$, and define the mapping torus $M_h$ in the usual way. - -Notice that $h$ induces an action $\pi_1(S^1,\ast)\to Aut(\pi_1(\Sigma^g,\ast))$ just by sending the chosen generator to $h_*$, so the existence of the semidirect product $\pi_1(\Sigma^g,\ast)\rtimes_{h_*} \pi_1(S^1,\ast)$ is indeed plausible. -I was asked to prove that the fundamental group $\pi_1(M_h, \ast)\cong \pi_1(\Sigma^g,\ast)\rtimes_{h_*} \pi_1(S^1,\ast)$ (with a little abuse of notation one can identify all the basepoints to a chosen one). - -By the l.e.s.
of the fibration $\Sigma^g \to M_h \to S^1$ it's easy to see that $$\pi_1(M_h, \ast)\cong \pi_1(\Sigma^g,\ast)\rtimes_{?} \pi_1(S^1,\ast)$$ - -Since $\pi_1(S^1,\ast)\cong \mathbb{Z}$ and $\pi_2(S^1,\ast)\cong 0$, and since on the $\pi_0$ level the inclusion of the fibre induces a bijection, we have the following s.e.s. $$ 0 \to \pi_1(\Sigma^g,\ast) \xrightarrow{incl.} \pi_1(M_h, \ast) \xrightarrow{\pi} \pi_1(S^1,\ast)\to 0$$ -which is right-split (since $\mathbb{Z}$ is free). -Please notice the "?" I've put: I don't have the slightest idea how to determine that the action there is really the one induced by $h$, because my reasoning above was purely algebraic. I know abstractly that the action is given by conjugation, but how does one prove that this action is the same as the one induced by $h$? My attempt was to try and find something from that l.e.s., but I don't see anything helpful there. As for Seifert–van Kampen, I can't find a helpful covering of the mapping torus. -I'm aware that there is another analogous question here, but it is $5$ years old and neither the answer nor the comments provide any insight. - -REPLY [2 votes]: Here are two ways to see this for a general mapping torus $M_f$ of $f:N\to N$: -1) Bass–Serre theory: take open neighborhoods $U,V$ of $N\times [0,\frac 12]$ and $N\times[ \frac 12,1]$. The fundamental group of $M_f$ can be computed as the fundamental group of the graph of groups of this open cover. The graph of groups is given by two vertices $U$ and $V$ connected by two edges coming from the two components of $U\cap V$. Note that the groups of all vertices and edges are $\pi_1N$ and that all but one edge inclusion are the identity. A maximal tree contains only one edge (for computation preferably the one with the two identity morphisms), which immediately gives the fundamental group (by slight abuse of notation) as -$$ -\langle \pi_1N,t\mid txt^{-1}= f(x)\rangle, -$$ -where $t$ comes as generator from the remaining edge.
-2) The other way to see this is to regard the action of the fundamental group of the base space on the fiber (it exists for all Serre fibrations). It is easy to compute that this action is precisely the same as the action coming from $0 \to \pi_1N \to \pi_1M_f\to \pi_1S^1\to 0$. Once the action comes from topology and once from algebra. However, the one coming from topology is on the nose :-) -2*) For regular $G$-covering spaces we have the fiber sequence $0 \to \pi_1\tilde M \to \pi_1M\to G\to 0$ (note that $G$ is this time $\pi_0$ of the fiber!). You can check that the deck group action of $G$ coincides with the $G$-action coming from the exact sequence. Hence, inserting the correct cover ($G=\mathbb Z$, $\tilde M=N\times \mathbb R$, $M=M_f$, the pullback of the universal $\mathbb Z$-cover along the fibration), the deck group action on $N\times \mathbb R$ is on the nose once again. -Bonus question: what is the relationship between 2) and 2*)?<|endoftext|> -TITLE: How to show that the inverse under multiplication of a positive real number is positive? -QUESTION [5 upvotes]: Let $a\,{\in}\,\mathbb{R}:a>0$. How do we know that $a^{-1}>0$ too? - -REPLY [4 votes]: Hint. -$a \cdot a^{-1} = 1 > 0$<|endoftext|> -TITLE: applications of (topological and algebraic) commutative diagrams in organic synthesis -QUESTION [5 upvotes]: In algebraic topology, there are a lot of commutative diagrams and commutative diagrams up to homotopy. Different ways of composing the maps in a commutative diagram are equal or homotopy equivalent. -An example of commutative diagrams in algebraic topology -In organic synthesis, there are diagrams consisting of synthetic routes. Different ways of carrying out the chemical reactions produce the same product. -An example of diagrams of synthetic routes -Question: are there any really useful research topics on applying commutative diagrams from mathematics to organic synthesis?
- -REPLY [2 votes]: John Baez has recently worked on something related: he modeled Petri nets and chemical reactions within the framework of category theory. -His work can be found on this link. -Hope this addresses your question.<|endoftext|> -TITLE: If $\cal{I}$ is an indiscernible sequence over $A$ then it is indiscernible over $acl(A)$ -QUESTION [6 upvotes]: Let $\cal{I}=(b_i\mid i\in I)$ be an infinite indiscernible sequence over $A$, and let $acl(A)$ be the algebraic closure of $A$ (all in some structure). I am trying to show that $\cal{I}$ is also indiscernible over $acl(A)$. -What I tried so far was to assume towards contradiction that it is not, and from that to conclude that some of the indiscernibles are in the algebraic closure; then, by the algebraic formula that witnessed that one of them is in $acl(A)$ (a formula with parameters from $A$), we can conclude that every member of the sequence satisfies that formula, since the sequence is indiscernible over $A$. But we assumed that the sequence is infinite, and that's our contradiction. -The problem is that I'm not sure how to find a member of $\cal I$ that is algebraic over $A$. - -REPLY [4 votes]: The answer by user27454 is a good "hands-on" way of seeing this. But I'd like to give a more abstract proof, which shows that your proposition is an immediate consequence of a very useful general lemma. -We work in a monster model $\mathcal{U}$. As usual, small means small enough to apply saturation and strong homogeneity of $\mathcal{U}$. -Lemma: Let $A$ be a small set and $I$ an $A$-indiscernible sequence. Then there is a small model $M$ containing $A$ such that $I$ is $M$-indiscernible. -Given the lemma, we just need to note that every model containing $A$ contains $\text{acl}(A)$, and hence $I$ is also $\text{acl}(A)$-indiscernible. -Proof: Let $M'$ be any small model containing $A$.
Using Ramsey's theorem and compactness, we build a sequence $I'$ which is $M'$-indiscernible and "based on $I$", in the sense that whenever $\mathcal{U}\models \varphi(\overline{b},\overline{m})$, where $\overline{b}$ is an increasing tuple from $I'$ and $\overline{m}\in M'$, there exists an increasing tuple $\overline{a}$ from $I$ such that $\mathcal{U}\models \varphi(\overline{a},\overline{m})$ (this is sometimes called the "Standard Lemma" for indiscernibles). Then since $I$ is actually $A$-indiscernible, we have $\text{tp}(I'/A) = \text{tp}(I/A)$. Let $\sigma\in \text{Aut}(\mathcal{U}/A)$ be such that $\sigma(I') = I$, and let $M = \sigma(M')$. Then $M$ is a model containing $A$ and $I$ is $M$-indiscernible.<|endoftext|> -TITLE: Prove this stronger inequality with $\frac{e^x}{x+1}-\frac{x-1}{\ln{x}}-\left(\frac{e-2}{2}\right)>0$ -QUESTION [6 upvotes]: Let $x>1$; show that -$$\dfrac{e^x}{x+1}-\dfrac{x-1}{\ln{x}}-\left(\dfrac{e-2}{2}\right)>0$$ -It seems this inequality can be solved using derivatives, but that approach is ugly. Can you help? -$$\lim_{x\to 1}\left(\dfrac{e^x}{x+1}-\dfrac{x-1}{\ln{x}}-\left(\dfrac{e-2}{2}\right)\right)=0$$ -and $f(x)=\dfrac{e^x}{x+1},g(x)=\dfrac{x-1}{\ln{x}}$, but $$f'(x)=\dfrac{xe^x}{(x+1)^2}>0, g'(x)=\dfrac{\ln{x}-\dfrac{x-1}{x}}{\ln^2{x}}>0$$ -Unfortunately I don't know any nice expression for the first derivative which could help. I'll be grateful for all useful suggestions. - -REPLY [3 votes]: (I have to admit that the method in this answer is "ugly".) -Multiplying both sides by $2(x+1)\ln x\gt 0$ gives -$$2e^x\ln x-2(x-1)(x+1)-(e-2)(x+1)\ln x\gt 0\tag1$$ -Let $f(x)$ be the LHS of $(1)$.
Then, -$$f'(x)=2\left(e^x\ln x+\frac{e^x}{x}\right)-4x-(e-2)\left(\ln x+\frac{x+1}{x}\right)$$ -$$g(x):=xf'(x)=2xe^x\ln x+2e^x-4x^2-(e-2)(x\ln x+x+1)$$ -$$g'(x)=2e^x\ln x+2xe^x\ln x+4e^x-8x-(e-2)(\ln x+2)$$ -$$g''(x)=2e^x\ln x+\frac{2e^x}{x}+2e^x\ln x+2x\left(e^x\ln x+\frac{e^x}{x}\right)+4e^x-8-\frac{e-2}{x}$$ -$$h(x):=xg''(x)=4xe^x\ln x+2e^x+2x^2e^x\ln x+6xe^x-8x-(e-2)$$ -$$h'(x)=4e^x\ln x+8xe^x\ln x+12e^x+2x^2e^x\ln x+8xe^x-8$$ -$$h''(x)=12e^x\ln x+\frac{4e^x}{x}+12xe^x\ln x+28e^x+2x^2e^x\ln x+10xe^x$$ -Now it's easy to see that $h''(x)\gt 0$ for $x\gt 1$ because each term is positive for $x\gt 1$. -Since we have -$$h'(1)=20e-8\gt 0,\quad h(1)=7e-6\gt 0,\quad g'(1)=2e-4\gt 0,\quad g(1)=f(1)=0$$ we can see that each of $h'(x),h(x),g'(x),g(x),f(x)$ is increasing for $x\gt 1$, and that $h'(x)\gt 0,h(x)\gt 0,g''(x)\gt 0,g'(x)\gt 0,g(x)\gt 0,f'(x)\gt 0,f(x)\gt 0$ for $x\gt 1$.<|endoftext|> -TITLE: Necessary and sufficient condition for a curve to have infinite length -QUESTION [8 upvotes]: What is the necessary and sufficient condition for a curve to have infinite length on a compact interval? Say the curve is restricted to $[0, 1]$. -I vaguely remember that it is related to the boundedness of the total variation. I already checked the answers here, but they are related to specific examples.
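A concrete example to keep in mind: the graph of $x\mapsto x\sin(1/x)$ on $(0,1]$ is bounded and continuous, yet has infinite length because the function is not of bounded variation. Numerically, the inscribed polygonal lengths keep growing under refinement (a sketch; the dyadic grid choice is mine):

```python
import math

def polygonal_length(f, n):
    """Length of the polygon inscribed in the graph of f at the nodes i/n, i = 1..n."""
    pts = [(i / n, f(i / n)) for i in range(1, n + 1)]
    return sum(math.hypot(x1 - x0, y1 - y0)
               for (x0, y0), (x1, y1) in zip(pts, pts[1:]))

f = lambda x: x * math.sin(1 / x)
lengths = [polygonal_length(f, 2 ** k) for k in range(4, 15)]
print([round(L, 3) for L in lengths])
# Nested dyadic grids, so the lengths are nondecreasing -- and they never level off.
assert all(b >= a for a, b in zip(lengths, lengths[1:]))
assert lengths[-1] > lengths[0] + 0.5
```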
- -REPLY [6 votes]: Given a function -$$f:\quad [a,b]\to{\mathbb R}^n, \qquad t\mapsto f(t)\ ,$$ -the total variation of $f$ over $[a,b]$ is defined by -$$V(f):=\sup_{\cal P}\sum_{k=1}^N|f(t_k)-f(t_{k-1})|\leq\infty\ ,\tag{1}$$ -whereby the $\sup$ ranges over all partitions -$${\cal P}:\qquad a=t_0<t_1<\ldots<t_N=b$$ -of the interval $[a,b]$. The curve parametrized by $f$ has finite length iff $V(f)<\infty$. Consider now the case $n=2$ and write -$${\bf z}(t)=\bigl(x(t),y(t)\bigr)\ .\tag{2}$$ -For any two parameter values $t_0$, $t_1$ one has -$$\max\{|x_1-x_0|,|y_1-y_0|\}\leq|{\bf z}_1-{\bf z}_0|\leq|x_1-x_0|+|y_1-y_0|\ .$$ -From $(1)$ it then easily follows that the function ${\bf z}(\cdot)$ in $(2)$ is of bounded variation iff both $x(\cdot)$ and $y(\cdot)$ are of bounded variation. This allows one to conclude that a graph -$$\gamma:\quad [a,b]\to{\mathbb R}^2,\qquad x\mapsto\bigl(x,f(x)\bigr)$$ -has finite length iff $V(f)<\infty$, since the total variation of the first coordinate is $=b-a<\infty$ in any case.<|endoftext|> -TITLE: Prove that $n^{n+1} \leq (n+1)^{n} \sqrt[n]{n!}$ -QUESTION [18 upvotes]: Let $n$ be a positive integer. I conjectured that the following inequality is true -\begin{equation} -n^{n+1} \leq (n+1)^{n} \sqrt[n]{n!} . -\end{equation} -Anyhow I could neither prove nor disprove it. I could only check, by using Stirling's Formula, that the ratio of the right and left members tends to 1 as $n \rightarrow \infty$. Any help is welcome. - -REPLY [17 votes]: $$ -\begin{align} -\log\left(n\left(1+\frac1n\right)^{-n}\right) -&=\log(n)-n\log\left(1+\frac1n\right)\\ -&=\log(n)+n\log\left(1-\frac1{n+1}\right)\\ -&\le\log(n)-\frac{n}{n+1}\\ -&\le\log(n)-\frac{n-1}n\\ -&=\frac1n\int_1^n\log(x)\,\mathrm{d}x\\ -&\le\frac1n\sum_{k=2}^n\log(k)\\ -&=\log\left(\sqrt[n]{n!}\right) -\end{align} -$$ -Exponentiate and rearrange to get -$$ -n^{n+1}\le(n+1)^n\sqrt[n]{n!} -$$<|endoftext|> -TITLE: Orthogonal Projection of a function -QUESTION [5 upvotes]: In $C[-2,2]$, let $W:= \operatorname{span}\{x,e^x\}.$ -How would I go about figuring out the orthogonal projection of $x+1$ on $W$? -I encountered this problem and it really has me stumped.
I was told that the Gram-Schmidt process is the correct route but am a bit confused about how to even go about that with this type of problem. - -REPLY [5 votes]: 1. Not using the Gram-Schmidt process: -Let $g(x)$ be the orthogonal projection of $f(x)=x+1$ on $W$. Since $g \in W$, we have $g(x) = c_1x+c_2e^x$ for some $c_1,c_2 \in\mathbb{R}$. Then $f(x)-g(x)$ is orthogonal to $W$. So, $f(x)-g(x)$ is orthogonal to $x$ and to $e^x$. In other words, -$$ - \left\{ - \begin{array}{l} - \int\limits_{-2}^2(x+1 - c_1x - c_2e^x)x{\text d}x = 0,\\ - \int\limits_{-2}^2(x+1 - c_1x - c_2e^x)e^x{\text d}x = 0. - \end{array} - \right. -$$ -2. Using the Gram-Schmidt process: -1) Using the Gram-Schmidt process, find an orthonormal basis of $\text{span}\{x, e^x\}$. -2) Suppose $g_1(x),g_2(x)$ are the new basis vectors (orthonormal). Then the projection is $g(x)=c_1g_1(x)+c_2g_2(x)$, where -$$c_1 = \int\limits_{-2}^2 f(x)g_1(x){\text d}x \qquad\text{and}\qquad c_2 = \int\limits_{-2}^2 f(x)g_2(x){\text d}x$$ -Gram-Schmidt process: -a) $g_1(x) = x$. -b) $g_2(x) = k g_1(x) + e^x$. Now, since $\langle g_1, g_2 \rangle$ should be $0$, we can find -$$k=-\frac{\langle e^x, g_1 \rangle}{\langle g_1, g_1 \rangle}=-\frac{\int_{-2}^2 xe^x \text{d}x}{\int_{-2}^2 x^2 \text{d}x}.$$ -And don't forget to normalize $g_1$ and $g_2$.<|endoftext|> -TITLE: Direct limit of completions of finitely generated submodules -QUESTION [5 upvotes]: Let $A$ be a noetherian, local, integral domain with maximal ideal $\mathfrak m$. Moreover let $M$ be an $A$-module; I'd like to know if there exists an explicit expression for the module -$$\varinjlim_{N\subseteq M} \widehat N$$ -where $N$ varies among the finitely generated $A$-submodules of $M$ and $\widehat N$ is the $\mathfrak m$-adic completion of $N$. -In particular, what happens if $M$ is already finitely generated?
- -I'm asking this question because it is well known that -$$M\cong \varinjlim_{N\subseteq M} N$$ -so I'm interested in the behavior of the completions with respect to the direct limit. -Many thanks in advance - -REPLY [5 votes]: For finitely generated modules the completion functor is isomorphic to $-\otimes_{A} \hat{A}$, and direct limits commute with tensor products, so you get: -$ -\varinjlim\limits_{N\subset M} \hat{N}\cong \varinjlim\limits_{N\subset M} (N\otimes_{A}\hat{A}) \cong (\varinjlim\limits_{N\subset M} N)\otimes_{A} \hat{A}\cong M\otimes_{A}\hat{A} -$ -(and when $M$ itself is finitely generated, $M\otimes_{A}\hat{A}\cong \hat{M}$).<|endoftext|> -TITLE: Sequence of continuous functions converging pointwise to Thomae's function -QUESTION [6 upvotes]: Recall that Thomae's function (also called the popcorn function) $f\colon\mathbb{R}\to\mathbb{R}$ is defined as -$$ -f(x) = \begin{cases} -\frac{1}{q} & \text{ if } x=\frac{p}{q} \neq 0 \text{ is rational, } \gcd(p,q)=1 \text{ and } q> 0\\ -0 &\text{ otherwise.} -\end{cases} -$$ -In particular, it is a standard exercise to show that $f$ is continuous at every irrational, and discontinuous at every nonzero rational. - -Find a sequence of continuous functions $(f_n)_{n\in\mathbb{N}}$ that converges pointwise to $f$ on $[0,1]$. - -REPLY [4 votes]: Detailed sketch of the proof: -Since $\mathbb{Q}\cap[0,1]$ is countable, we can write it $\mathbb{Q}\cap[0,1]=(x_n)_{n\in\mathbb{N}}$, where each $x_n$ has a canonical form $x_n=\frac{p_n}{q_n}$, $p_n\wedge q_n=1$. We then define $f_n$ by considering only the first $n+1$ terms of the sequence $(x_k)$ and putting the "right" value on them (i.e., $f_n(x_k)=\frac{1}{q_k}$, $0\leq k\leq n$).
-To make $f_n$ continuous, we then make it piecewise affine ("affine by parts"): each $x_k$ is surrounded by two elements of $\{0,x_0,\dots,x_n,1\}$ (a finite set), so we can pick the max $\underline{x}_k$ of those strictly less, the min $\bar{x}_k$ of those strictly greater, and then go from 0 to $\frac{1}{q_k}$ on the interval $\left[x_k-\frac{x_k-\underline{x}_k}{2^n}, x_k\right]$, then from $\frac{1}{q_k}$ to 0 on the interval $\left[x_k, x_k+\frac{\bar{x}_k-x_k}{2^n}\right]$ (picture a small "tent" over each $x_k$; the original answer included a rough sketch here). - -Since each rational $q$ is one of the $x_n$ and thus will, for $k\geq n$, be "considered" by $f_k$ and its image put to the desired value, $f_k(q)\to f(q)$ for all rationals in $[0,1]$. The problem is for the irrationals. Pick $r\in\mathbb{R}\setminus\mathbb{Q}$ arbitrary; in order to prove that $f_k(r)\to0$, we can, by contradiction, suppose it doesn't. -It means that there is an $\varepsilon>0$ such that, for infinitely many $k$, $f_k(r)>\varepsilon$. Pick the subsequence of such $k$: it defines a sequence $(f_{\varphi(k)}(r))_k$ of values always strictly greater than $\varepsilon$. -But that means that there is a corresponding sequence of rationals always very close to $r$ (one for each $f_{\varphi(k)}$, otherwise $f_{\varphi(k)}(r)=0$). By very close, we mean inside the "tent" around some rational $x$, whose half-width is at most $\frac{\bar{x}-\underline{x}}{2^{\varphi(k)}}\leq 2^{-\varphi(k)}$. In other words, we have a sequence of rationals converging to $r$ (since $2^{-\varphi(k)}\to0$, because $\varphi(k)\to\infty$: it is a subsequence, so $\varphi(k)\geq k$ for every $k$). -Then, we can use the fact that if a sequence of rationals $\frac{p_n}{q_n}\to r$ irrational, necessarily $p_n\to \infty$, $q_n\to \infty$ (it is rather simple to prove*). And thus in particular $\frac{1}{q_n}\to 0$. Since $f_{\varphi(k)}(r)$ is always less than the "peak", which is the $\frac{1}{q}$ of the "close rational associated with $f_{\varphi(k)}$" mentioned earlier, $f_{\varphi(k)}(r)\to0$ when $k\to\infty$.
Which is a contradiction, because it is supposed not to go under $\varepsilon>0$. -Hence, $f_k(r)\to 0$, and $f_n$ converges pointwise to $f$. - -${}$* For example, by contradiction: suppose some subsequence of $p_n$ (or $q_n$) is bounded, apply the Bolzano–Weierstrass theorem to it to get a converging subsequence, then consider the corresponding subsequence of $q_n$, which must itself be bounded because of the convergence to $r$, and again apply Bolzano–Weierstrass to finally find a sub-sub-sequence of $p_n$ and $q_n$ such that both converge to some integers $p$ and $q$, which in turn implies that $r=\frac{p}{q}\in\mathbb{Q}$, which can be seen as a problem for an irrational.<|endoftext|> -TITLE: How to find fundamental groups and covering spaces of $\mathbb{RP}^2\vee \mathbb{RP}^2$? -QUESTION [8 upvotes]: The following is an exercise I was assigned in homotopy theory. -Define $X = \mathbb{RP}^2\vee \mathbb{RP}^2$. -a) Find $\pi_1(X)$. -b) Find the universal cover of $X$. -c) Find all of its connected $2$-sheeted covers. -I am trying to use van Kampen's theorem for (a). -I decompose $X$ into two open sets $U_1$ and $U_2$, where each $U_i$ contains one copy of $\mathbb{RP}^2$ as well as a neighborhood around the point of attachment small enough so that each $U_i$ deformation retracts onto the copy of $\mathbb{RP}^2$ that it contains. -Then van Kampen's theorem gives that $\pi_1(X)$ is a quotient of $\mathbb{Z}_2*\mathbb{Z}_2$, where we quotient out by the subgroup normally generated by the elements $\iota_{1, 2}(\omega)\iota_{2, 1}(\omega)^{-1}$, where $\iota_{1, 2}: \pi_1(U_1 \cap U_2) \to \pi_1(U_1)$ is the map induced by inclusion, and similarly for $\iota_{2, 1}$, and the $\omega$'s are arbitrary elements of $\pi_1(U_1 \cap U_2)$. -I want to say that $U_1 \cap U_2$ is simply connected, so each $\iota_{1, 2}(\omega)\iota_{2, 1}(\omega)^{-1}$ is in fact trivial, so then $\pi_1(X)$ is isomorphic to $\mathbb{Z}_2*\mathbb{Z}_2$.
Is this true, or did my reasoning go awry somewhere? -For (b), I know that the universal cover is just an infinite chain of copies of $S^2$, where the covering map is just ``locally'' the quotient map. -For (c), I know that the only covering spaces of $\mathbb{RP}^2$ are itself and $S^2$, but I don't really know how to go from there to finding all the connected $2$-sheeted covers. -To summarize: is in fact $\pi_1(X)$ isomorphic to the free product of two copies of $\mathbb{Z}_2$, and how do I find all connected $2$-sheeted covers of $X$? - -REPLY [3 votes]: Everything you've said is correct (to prove $U_1\cap U_2$ is simply connected, just note that you can separately deformation-retract the part of it in each $\mathbb{RP}^2$ down to the basepoint to show it is contractible). For part (c), you can use some group theory. By the correspondence between connected covers and subgroups of $\pi_1$, the 2-sheeted covers correspond to index 2 subgroups of $\mathbb{Z}_2*\mathbb{Z}_2$. Any index two subgroup of a group is normal, so such subgroups are exactly the kernels of epimorphisms $\mathbb{Z}_2*\mathbb{Z}_2\to\mathbb{Z}_2$. Such epimorphisms are easy to classify by the universal property of free products: a homomorphism $\mathbb{Z}_2*\mathbb{Z}_2\to\mathbb{Z}_2$ is just a pair of homomorphisms $\mathbb{Z}_2\to\mathbb{Z}_2$, each of which is either trivial or an isomorphism. Such a homomorphism is surjective iff at least one of the homomorphisms $\mathbb{Z}_2\to\mathbb{Z}_2$ is nontrivial. -This gives that there are three index two subgroups: the kernels of the homomorphisms $f,g,h:\mathbb{Z}_2*\mathbb{Z}_2\to\mathbb{Z}_2$ given by $f(a)=1$, $f(b)=0$, $g(a)=0$, $g(b)=1$, and $h(a)=1$, $h(b)=1$ (where $a$ and $b$ are the two generators of $\mathbb{Z}_2*\mathbb{Z}_2$). You can then get the covering spaces as the quotients of the universal covering space by these subgroups acting as deck transformations.
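The classification just described can be sanity-checked by brute force: a homomorphism $\mathbb{Z}_2*\mathbb{Z}_2\to\mathbb{Z}_2$ is determined by the images of the generators $a,b$, so there are only four candidates to inspect. A small Python sketch of our own (the helper names are made up, not from the answer):

```python
from itertools import product

# a homomorphism Z2 * Z2 -> Z2 is determined by the generator images (fa, fb);
# it is surjective iff at least one generator maps to the nontrivial element 1
epis = [(fa, fb) for fa, fb in product((0, 1), repeat=2) if (fa, fb) != (0, 0)]

def kernel_fingerprint(fa, fb):
    # record which of the words a, b, ab lie in the kernel; since the three
    # fingerprints differ, the three kernels are pairwise distinct subgroups
    return tuple(v == 0 for v in (fa, fb, (fa + fb) % 2))

fingerprints = {kernel_fingerprint(fa, fb) for fa, fb in epis}
```

So there are exactly three epimorphisms (the $f,g,h$ of the answer), hence three connected $2$-sheeted covers.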
More explicitly, the covering coming from $\ker(f)$ is $S^2$ (mapping to the first $\mathbb{RP}^2$) with two copies of $\mathbb{RP}^2$ (mapping to the second $\mathbb{RP}^2$) attached to it at two antipodal points. The covering coming from $\ker(g)$ looks the same, just with the roles of the two $\mathbb{RP}^2$s in $X$ swapped. The covering coming from $\ker(h)$ is two copies of $S^2$ attached at two antipodal points.<|endoftext|> -TITLE: Homotopy Equivalence intuition -QUESTION [7 upvotes]: Can somebody tell me intuitively what it means geometrically when we say two spaces are homotopy equivalent? I understand the technical definition. - -REPLY [6 votes]: Homotopy equivalence is a weaker version of equivalence than homeomorphism. A homotopy equivalence can be thought of as a continuous squishing and stretching (i.e. a deformation) of the space. I'd like to give some examples of homotopy equivalences which are and are not homeomorphisms. -1) Any two homeomorphic spaces are homotopy equivalent. This is because a homeomorphism $f$ has its inverse $f^{-1}$ as a homotopy inverse: both composites $f\circ f^{-1}$ and $f^{-1}\circ f$ are equal to the identity, so the constant homotopies already do the job. -2) Spaces that are not homeomorphic might be homotopy equivalent. Consider the letter X. We can contract this space to its center point by sucking up the horizontal lines on the legs, and then pulling the legs in to the center point. However, X is not homeomorphic to a point, because this map is not bijective (and more generally, X minus its center point has 4 connected components, but the point minus a point is just empty). -3) For compact surfaces (without boundary), homotopy equivalence and homeomorphism are actually the same thing. If you're not familiar with surface theory, there are an abundance of good references on the basics that give the classification theorem which should clear this point up.
I think Munkres is the standard reference.<|endoftext|> -TITLE: Does $A^{-1}A=G$ imply that $AA^{-1}=G$? -QUESTION [13 upvotes]: Let $G$ be a group, $A\subseteq G$ and put $A^{-1}=\{ a^{-1}:a\in A\}$. -Is it true that if $A^{-1}A=G$ then $AA^{-1}=G$ (and vice versa)? - -REPLY [8 votes]: I think I could construct a random counter-example with GAP ($t$ is the set of the products $A^{-1}A$ forming $G$, $w$ is the set of the products $AA^{-1}$) -The following GAP session: -gap> A:=[ (), (3,4), (2,4,3), (2,4), (1,2), (1,2,3), (1,2,3,4), (1,3,2,4), (1,4,2,3) ];; -gap> t:=[];; -gap> w:=[];; -gap> for u in A do -> for v in A do -> t:=Union(t,List([u^(-1)*v])); -> w:=Union(w,List([u*v^(-1)])); -> od; -> od; -gap> t; -[ (), (3,4), (2,3), (2,3,4), (2,4,3), (2,4), (1,2), (1,2)(3,4), (1,2,3), (1,2,3,4), - (1,2,4,3), (1,2,4), (1,3,2), (1,3,4,2), (1,3), (1,3,4), (1,3)(2,4), (1,3,2,4), - (1,4,3,2), (1,4,2), (1,4,3), (1,4), (1,4,2,3), (1,4)(2,3) ] -gap> w; -[ (), (3,4), (2,3), (2,3,4), (2,4,3), (2,4), (1,2), (1,2)(3,4), (1,2,3), (1,2,3,4), - (1,2,4,3), (1,2,4), (1,3,2), (1,3,4,2), (1,3,4), (1,3)(2,4), (1,3,2,4), (1,4,3,2), - (1,4,2), (1,4,3), (1,4), (1,4,2,3), (1,4)(2,3) ] -gap> Length(t); -24 -gap> Length(w); -23 -gap> G:=AsGroup(t); -Group([ (3,4), (2,3), (1,2) ]) -gap> StructureDescription(G); -"S4" -gap> AsGroup(w); -fail - -demonstrates the subset $A \subseteq G = S_4$ with the required properties: -$$[ (), (3,4), (2,4,3), (2,4), (1,2), (1,2,3), (1,2,3,4), (1,3,2,4), (1,4,2,3)]$$ -The products $A^{-1}A$ forming the group $G$ are: -$$A^{-1}A = [ (), (3,4), (2,3), (2,3,4), (2,4,3), (2,4), (1,2), (1,2)(3,4), (1,2,3), (1,2,3,4), (1,2,4,3), (1,2,4), (1,3,2), - (1,3,4,2), (1,3), (1,3,4), (1,3)(2,4), (1,3,2,4), (1,4,3,2), (1,4,2), (1,4,3), (1,4), (1,4,2,3), (1,4)(2,3) ] -$$ -The permutations of the product $AA^{-1}$ are -$$[ (), (3,4), (2,3), (2,3,4), (2,4,3), (2,4), (1,2), (1,2)(3,4), (1,2,3), (1,2,3,4), (1,2,4,3), (1,2,4), (1,3,2), - (1,3,4,2), (1,3,4), (1,3)(2,4), (1,3,2,4), (1,4,3,2), (1,4,2),
(1,4,3), (1,4), (1,4,2,3), (1,4)(2,3) ]$$ -$A$ is a subset of $G$, as required. The permutation $(1,3)$ is missing in the last set.<|endoftext|> -TITLE: Can a gradient vector field with no equilibria point in every direction? -QUESTION [22 upvotes]: Suppose that $V:\mathbb{R}^n \to \mathbb{R}$ is a smooth function such that $\nabla V : \mathbb{R}^n \to \mathbb{R}^n$ has no equilibria (i.e. $\forall x \in \mathbb{R}^n : \nabla V (x) \not = 0$). Under these hypotheses, is it possible that $\nabla V (x)$ can point in every direction? -To be more precise, under the above hypotheses the map $$\mathbb{R}^n \to \mathbb{S}^{n-1} $$ $$x \mapsto \frac{\nabla V(x)}{\|\nabla V (x)\|} $$ is well-defined. Is it impossible for such a map to be surjective? If not, what is a counterexample? - -REPLY [12 votes]: A quick solution for $n=2$, and an explanation of where it came from: -$$\nabla (e^x \sin y) = (e^x \sin y, e^x \cos y).$$ -The right hand side is never zero, but does assume every nonzero value in $\mathbb{R}^2$. -Motivation: If $f: \mathbb{C} \to \mathbb{C}$ is holomorphic, then $\nabla(\mathrm{Re}(f)) = (\mathrm{Re}(f'), -\mathrm{Im}(f'))$. I looked for an $f'$ (namely $e^z$) which takes $\mathbb{C}$ onto $\mathbb{C}_{\neq 0}$, and then integrated it to find $f$. - -Inspired by this, let's try -$$\nabla(e^{x_0} \cos(x_1^2+x_2^2+\cdots + x_n^2)) =$$ -$$e^{x_0} (\cos(x_1^2+\cdots + x_n^2), - 2x_1 \sin(x_1^2+\cdots + x_n^2), \cdots, -2x_n \sin(x_1^2+\cdots + x_n^2)).$$ -First, we note that the gradient is not zero. The first coordinate only vanishes if $x_1^2+ \cdots + x_n^2$ is of the form $(2k+1) \frac{\pi}{2}$. But, in this case, at least one of $x_1$, $x_2$, ..., $x_n$ is nonzero; say $x_j$. Then $- 2x_j \sin(x_1^2+\cdots + x_n^2) = \pm 2x_j \neq 0$. -Now, let's take a nonzero vector $(v_0, \ldots, v_n)$. We need to go through cases. -If $v_0 = 0$, choose $(x_1, \ldots, x_n)$ proportional to $(v_1, \ldots, v_n)$ and such that $\sin(x_1^2+\cdots+x_n^2)$ has the right sign.
-If $v_1 = \cdots = v_n = 0$ and $(-1)^k v_0>0$, take $x_1^2+\cdots+x_n^2 = k \pi$. -If neither of those cases holds, we'll take $(x_1, \ldots, x_n)$ of the form $(t v_1, \ldots, t v_n)$ for some $t$ to be determined soon. Set $s = v_1^2+ \cdots + v_n^2$, so that $x_1^2+\cdots+x_n^2 = t^2 s$. Then our vector points in direction $\pm (- \cot(t^2 s)/(2t), v_1, v_2, \ldots, v_n)$. Since cotangent sweeps through all of $\mathbb{R}$ on every period, we can choose $t$ such that $\cot (t^2 s) = -2t v_0$, and we can get the sign right.<|endoftext|> -TITLE: Compute the homology groups using Mayer-Vietoris sequence -QUESTION [5 upvotes]: I need to compute the homology groups with integer coefficients $ H_k(D; \mathbb Z) $ of the simplicial complex being a triangulation of the following figure: - -I divided it in two parts $D = L_1 \cup L_2$ like this: - -Then both parts $L_i$ are homeomorphic to the standard $2$-simplex and hence we know their homology groups. -$$ H_k(\Delta^2, \mathbb Z) = \begin{cases} -\mathbb Z, & k = 0 \\ -0, & k > 0 -\end{cases} $$ -The intersection $L_1 \cap L_2 = I_1 \cup I_2 \cup I_3$ of the parts is a disjoint union of 3 segments and hence we know their homology groups too. -$$ H_k(I_1 \cup I_2 \cup I_3, \mathbb Z) = \begin{cases} -\mathbb Z \oplus \mathbb Z \oplus \mathbb Z, & k = 0 \\ -0, & k > 0 -\end{cases} $$ -And we can construct the Mayer-Vietoris sequence: -$$ 0 \to H_1(L_1 \cap L_2; \mathbb Z) \to H_1(L_1; \mathbb Z) \oplus H_1(L_2; \mathbb Z) \to H_1(L_1 \cup L_2; \mathbb Z) \to H_0(L_1 \cap L_2; \mathbb Z) \to H_0(L_1; \mathbb Z) \oplus H_0(L_2; \mathbb Z) \to H_0(L_1 \cup L_2; \mathbb Z) \to 0 $$ -And substituting the computed groups: -$$ 0 \to 0 \to 0 \to H_1(D; \mathbb Z) \to \mathbb Z \oplus \mathbb Z \oplus \mathbb Z \to \mathbb Z \oplus \mathbb Z \to H_0(D; \mathbb Z) \to 0 $$ -How can I compute from this the groups $ H_0(D, \mathbb Z) $ and $ H_1(D, \mathbb Z) $? Thanks for the help!
- -REPLY [2 votes]: From this information alone, you cannot compute the groups, because they could be -$$ -\mathbb Z \oplus \mathbb Z \oplus \mathbb Z -$$ -and -$$ -\mathbb Z \oplus \mathbb Z -$$ -with the two obvious maps being isomorphisms, or the first could be $\mathbb Z$ and the second be $0$. There are surely other possibilities. -To work out the groups, you'd need to know the actual maps on the various groups. -It's considerably easier to divide the shape with a line running NW-SE rather than NE/SW. Then you get two circles, whose homology groups you know, and the intersection is a line segment, which is contractible. And the map from $H_0$ -of the intersection to $H_0$ of either part is an isomorphism (generated by the map taking a 0-simplex to a point in the intersection).<|endoftext|> -TITLE: Intuitive Explanation of Why the Power Set of $\mathbb{R}$ is "too big" for the Lebesgue Measure? -QUESTION [14 upvotes]: I've been working with the construction of measures for a little bit, and I understand that in order for the Lebesgue measure to be an official measure on $\mathbb{R}$, we need to restrict it to a certain $\sigma$-algebra, namely the one generated by $\tau \cup \mathcal{N}$, where $\tau$ is our topology and $\mathcal{N}$ is the collection of all null sets. -I have been looking at a proof as to why the Lebesgue measure "fails" when we consider it as a mapping from $2^{\mathbb{R}}$, and it seems to be more algebraic in nature even though it is an Analysis book. Condensed, it basically defines a relation $x\sim y$ if $x - y \in \mathbb{Q}$ for $x,y \in [0,1]$. We then consider the set of equivalence classes (basically quotient $[0,1]$ by this equivalence relation), and the rest of the proof is over my head, in the sense that I have no idea where the rest of the steps are coming from. In the end, we get a contradiction, so our measure doesn't work, basically. -My question is about any kind of intuition behind why we need to restrict our domain? 
The Lebesgue $\sigma$-algebra is still an uncountable set, but it seems as though if we allow all possible subsets, then there is too much "overlap" for our intervals, but I do not really know how to formulate this rigorously. -Thanks! - -REPLY [16 votes]: Let me try to explain the Vitali construction by breaking everything into small pieces. (Usually I find this presented quite rapidly in books, so that everything seems to come all at once and it's not clear what all the pieces even were.) -You begin by quotienting $[0,1]$ by this equivalence relation that you stated. You get the full collection of equivalence classes. Now you use the axiom of choice to get a set which contains exactly one element of each equivalence class. This set is called a Vitali set; let's denote one such set by $V$. -Let's define the translation of a set of real numbers $A$ by a real number $b$: $A+b=\{ a+b : a \in A \}$. From basic properties of equivalence relations, we find that a Vitali set has the property that $V+q$ is disjoint from $V+q'$ for any rational numbers $q \neq q'$. -This translation property by itself is not a problem. For example, singletons also have this property, and yet they are of course measurable. But a countable union of singletons is a null set under Lebesgue measure. By contrast, when we take the union of $V+q$ when $q$ ranges over $\mathbb{Q} \cap [-1,1]$, we do not get a null set. In fact, this set must contain all of $[0,1]$. Why? Because 1. every difference between two points in $[0,1]$ is in $[-1,1]$ and 2. every point in $[0,1]$ is a rational translate of a point in $V$. On the other hand, the union is contained in $[0,1]+[-1,1]=[-1,2]$. So if $V$ is measurable then the measure of this union is some number in the interval $[1,3]$. -These two things together still do not constitute a problem. What we have now is a countable collection of disjoint sets whose union is contained between two sets, each of which has finite, positive measure.
Where the problem comes in is when we postulate that the Lebesgue measure should be translation-invariant: $m(A+b)=m(A)$. If this is the case, then countable additivity implies that either the measure of this union is zero (if all the things in the union have measure zero) or the measure is infinite (if all the things have some fixed, positive measure). But we just said that neither of these can be the case, so we have a contradiction. -A summary of what we did: we got this set $V$. We built a set, say $U$, comprised of a countable union of translates of $V$. $U$ contains $[0,1]$, so its measure must be at least $1$. $U$ is contained in $[-1,2]$, so its measure must be at most $3$. On the other hand, $U$ is a countable union of disjoint translates of the same set, so since the Lebesgue measure is supposed to be translation-invariant, its measure must be either $0$ or $\infty$. Since neither $0$ nor $\infty$ is between $1$ and $3$, $V$ must not be Lebesgue measurable. So in particular the problem is not so much that the power set is too big as that we insist on Lebesgue measure being translation-invariant.<|endoftext|> -TITLE: Intuition about the first isomorphism theorem -QUESTION [19 upvotes]: I'm currently studying group theory and recently I've read about the first isomorphism theorem which can be stated as follows: - -Let $G$ and $H$ be groups and $\varphi :G\to H$ a homomorphism, then $\ker \varphi$ is a normal subgroup of $G$, $\varphi(G)$ is a subgroup of $H$ and $G/\ker \varphi \simeq \varphi(G)$. - -The proof is quite easy, but I've been thinking about what's the best way to understand this result. In that setting I've come up with the following intuition: -It's easy to see that a homomorphism $\varphi : G\to H$ is injective if and only if $\ker \varphi = \{e\}$ where $e$ is the identity of $G$.
-Now, my intuition about the first isomorphism theorem is: if $\varphi : G\to H$ is a homomorphism which is not injective, we can then construct a new group on which the induced homomorphism is indeed injective. We do this by quotienting out what is in the way of making $\varphi$ injective, that is, everything that is in the kernel. -In that way, taking the quotient $G/\ker \varphi$, we construct a group on which we "kill" everything that is in the kernel of $\varphi$. The natural projection of $\varphi$ to this quotient will then be an injective function. -So is this the best way to understand the first isomorphism theorem? It's a way to "get out of the way" everything which is stopping a homomorphism from being an injective map? If not, what is the correct intuition about this theorem and its importance? - -REPLY [3 votes]: The fiber point of view is the one I like, because it captures the idea that when you quotient out $\ker\phi$, you identify through $\sim$ all the points that $\phi$ sends to the identity. -To extend this a bit, suppose we take a topological space $X$, a space $Y$ and an $f:X\to Y$, and topologize $Y$ by declaring that $V$ open in $Y$ $\Leftrightarrow f^{-1}(V)$ open in $X.$ -Now given any $\sim$ on $X$, $q:X\to X/\sim$ induces a topology on $X/\sim$ as above and from this you get the following result: -if $g:X\to Z$ is continuous and constant on each equivalence class of $\sim$, then there is a unique continuous $\overline g:X/\sim \to Z$ such that $\overline g\circ q=g$ -and so we have a topological analog of the First Isomorphism Theorem.
The Cantor set $C$ is uncountable, and we know that $C - C = [-1, 1]$, so then since $\mathbb R = \left<[0, 1]\right>$ we know that $C$ also generates $\mathbb R$. Also the set of irrationals $\mathbb R \setminus \mathbb Q$ is uncountable, but we can generate all the rational numbers by fixing one irrational number $\alpha$ and then saying that any rational number $x$ shall be $(\alpha + x) - \alpha$, since both $\alpha + x$ and $\alpha$ are irrational. -So the examples that quickly come to mind all generate the reals. Is there a simple counterexample? - -REPLY [3 votes]: generate in the sense of fields -Let $X \subseteq \mathbb R$ be a set with Hausdorff dimension zero, and furthermore all Cartesian products $X \times X \times \dots \times X$ have Hausdorff dimension zero. Then the field $F$ generated by $X$ also has Hausdorff dimension zero (so $F$ is not all of $\mathbb R$). You can construct Cantor sets $X$ like this, which are uncountable. -plug -G. A. Edgar & Chris Miller, Borel subrings of the reals, Proc. Amer. Math. Soc. 131 (2003), 1121-1129. -Borel sets that are subrings of $\mathbb R$ either have Hausdorff dimension zero as described, or else are all of $\mathbb R$. -Also: see the references there for subgroups of the reals (due to Erdős and Volkmann) with Hausdorff dimension $t$ for any $t$ with $0 \leq t \leq 1$.<|endoftext|> -TITLE: Spanning the reals with a small set - choicelessly -QUESTION [19 upvotes]: Working in ZF (so, no choice): is it possible that there is a set of reals $X$ such that - -$\vert X\vert<\vert\mathbb{R}\vert$, but -$X$ generates $\mathbb{R}$ as a subgroup under addition? - -This seems weird, but I can't even show that we can't generate $\mathbb{R}$ with a Dedekind-finite set! - -REPLY [7 votes]: This is in the Geometric Set Theory book with Jindra, in particular on pages 190-191 of -the version here: https://people.clas.ufl.edu/zapletal/files/balanced14.pdf -The partial order there forces over a Solovay model.
The conditions are disjoint pairs $(a,b)$ of sets of reals, where $a$ is finite and $b$ is countable, with the order of coordinatewise inclusion. -The GST machinery shows that the partial order doesn't add reals. The union of the finite parts of a generic filter will then be a set of reals of cardinality less than the continuum such that, by genericity, every real is a sum of two of them. In fact, the set will be Dedekind-finite.<|endoftext|> -TITLE: limit of $|n^t\sin n|$ -QUESTION [5 upvotes]: It is known that $\{\sin n : n\in\mathbb{N}\}$ is dense in $[-1,1]$, hence $\lim_{n\to\infty}\sin n$ doesn't exist and also $\lim_{n\to\infty} n^t\sin n$ doesn't exist for all $t>0$ (the reason is that the density implies that the inequalities $\sin n>\frac{1}{2}$ and $\sin n<-\frac{1}{2}$ are satisfied infinitely many times, so there are subsequences tending to $+\infty$ and $-\infty$). -What about $\lim_{n\to\infty} |n^t\sin n|$ ? -The above argument shows that the limit - if it exists - is infinite. -I don't think it does converge, but I don't know how to prove it. - -REPLY [5 votes]: The question is strongly connected with the irrationality measure of $\pi$. That is the number $\mu$ such that for all numbers $\lambda, \nu$ with $\lambda < \mu < \nu$: - -there exist infinitely many distinct rational numbers $p/q$ for which -$$ -\left| \frac{p}{q} - \pi \right| < \frac{1}{q^\lambda} \;\;\Longleftrightarrow\;\; \lvert p - \pi q \rvert < q^{1-\lambda} -$$ -for each rational $p/q$ with sufficiently large denominator -$$ -\left| \frac{p}{q} - \pi \right| > \frac{1}{q^\nu} \;\;\Longleftrightarrow\;\; \lvert p - \pi q \rvert > q^{1-\nu} -$$ - -The exact value of $\mu$ is not known, but it's true that $2\leqslant \mu \leqslant 7.6063$. Note that by Dirichlet's theorem, the first item is always true for $\lambda = 2$, disregarding the irrationality measure. -Returning to the question: there is a watershed for the parameter.
If $t > \mu-1$, then the limit is $+\infty$; if $t < \mu-1$, then it doesn't exist. The remaining case $t = \mu-1$ depends on the behavior of the diophantine approximations of $\pi$. -First case: $t < \mu-1$. -If $t < \mu-1$ and $p/q$ satisfies the first inequality, then -$$ -\lvert \sin p \rvert = \lvert \sin (p-\pi q) \rvert \leqslant \lvert p - \pi q \rvert \leqslant q^{-t} \sim \pi^t p^{-t} = O(p^{-t}) \;\text{ as }\; p\rightarrow \infty -$$ -hence there is a bounded subsequence of $\{ n^t \sin n \}_{n=1}^\infty$ and it cannot have infinite limit. -Second case: $t > \mu-1$. -Take $\varepsilon > 0$, such that $t-\varepsilon > \mu-1$. Given $n\in \mathbb N$, choose $m \in \mathbb N$, such that $\lvert n -\pi m \rvert \leqslant \frac{\pi}{2}$. When $n$ is sufficiently large, -$$ -\lvert \sin n\rvert = \lvert \sin(n-\pi m) \rvert \geqslant \tfrac{2}{\pi} \lvert n - \pi m \rvert \geqslant \tfrac{2}{\pi} m^{-t+\varepsilon} \sim 2\pi^{t-\varepsilon-1} n^{-t+\varepsilon}, -$$ -hence $\lvert n^t \sin n \rvert \geqslant C n^\varepsilon \rightarrow +\infty$. -The remaining case: $t = \mu-1$. -There are two alternatives: -$(A)$ either there exist $C > 0$ and infinitely many rational solutions $p/q$ of -$$ \left| \frac{p}{q} - \pi \right| \leqslant \frac{C}{q^\mu} \;\;\Longleftrightarrow\;\; \lvert p - \pi q \rvert \leqslant C q^{1-\mu} = C q^{-t}$$ -or $(B)$ the converse. As I already mentioned, if $\mu = 2$, then $(A)$ holds. -Suppose $(A)$ is true. Then by the same argument as in the first case, $\nexists\lim\limits_{n\rightarrow\infty} \lvert n^t \sin n \rvert$. -Otherwise, we have $(B)$, which, in fact, is equivalent to saying that for every sequence of distinct rationals $\{p_n / q_n \}_{n=1}^\infty$ the sequence -$\{ q_n^t \lvert p_n - \pi q_n \rvert \}_{n=1}^\infty$ -is unbounded. Combining this with the second case solution yields $\lvert n^t \sin n \rvert \rightarrow +\infty$.
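The first case can be illustrated numerically: along the numerators of the convergents of $\pi$ (standard continued fraction values, hardcoded below), $\lvert\sin p\rvert$ is so small that $p^t\lvert\sin p\rvert$ stays bounded even though $p^t\to\infty$. A Python sketch of our own:

```python
import math

# (numerator, denominator) of the first few convergents of pi,
# coming from the continued fraction [3; 7, 15, 1, 292, 1, ...]
convergents = [(22, 7), (333, 106), (355, 113), (103993, 33102), (104348, 33215)]

t = 0.5  # any t with t < 1 <= mu - 1 works here, since mu >= 2

# |sin p| = |sin(p - pi*q)| <= |p - pi*q| is tiny along the numerators p
small_values = [p ** t * abs(math.sin(p)) for p, q in convergents]
growth = [p ** t for p, q in convergents]
```

The list `small_values` stays well below $1$ while `growth` blows up, exhibiting the bounded subsequence from the first case.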
-However, it's probably an open problem which of $(A)$ or $(B)$ holds, along with the very value of $\mu$.<|endoftext|> -TITLE: Infinite nested radical and infinite continued fractions -QUESTION [6 upvotes]: If $$a = \sqrt{k_0+\sqrt{k_1+\sqrt{k_2+\sqrt{k_3+\sqrt{\cdots}}}}}$$ -and $$b = \cfrac{1}{k_0+\cfrac{1}{k_1+\cfrac{1}{k_2+\cfrac{1}{\cdots}}}}$$ -what is the relation between $a$ and $b$? What function always satisfies $a = f(b)$? Would $f(x)$ be bijective, injective, or neither? - -Edit: all values in the sequence $k_n$ are whole numbers. - -REPLY [2 votes]: I'm going to provide the most basic example of your function to show how complicated it can get. -Let's take $\{k_n\}$ to be a constant sequence $\{p\}$. Then we find $a$ and $b$ explicitly: -$$a=\sqrt{p+a}$$ -$$a=\frac{1+\sqrt{1+4p}}{2}$$ -$$b=\frac{1}{p+b}$$ -$$b=\frac{\sqrt{p^2+4}-p}{2}$$ -Using $p=a^2-a$, it is easier to find $b(a)$, which will be: -$$b=\frac{1}{2} (a-a^2+\sqrt{a^4-2a^3+a^2+4})$$ -But remember, since $k_n$ can only be whole, it follows that $p > 0$, or one of $a,b$ would not converge. So, the smallest possible $p=1$, with $a(1)=\phi$ and $b(1)=\phi-1$, where $\phi$ is the Golden ratio. -Here is the plot of the function $b(a)$, where we correctly obtain that $b(p)$ is decreasing, while $a(p)$ is increasing. Only the orange curve is allowed, so the functions $b(a)$ and $a(b)$ are both single valued. - -However, because of your condition that $k_n$ are whole, the function only exists for $p=1,2,3,4, \dots$, with the $a,b$ values determined by the above expressions. --- -If we take an even slightly more complicated sequence with two parameters $\{p,q\}$, we get a quartic equation for $a$: -$$a^4-2pa^2-a+p^2-q=0$$ -For $b$ the equation stays quadratic (every periodic simple continued fraction converges to a quadratic irrational as far as I know): -$$pb^2+pqb-q=0$$ -Because of the quartic and the two parameters it is extremely hard even to plot $a(b)$ or $b(a)$. I'm leaving it up to you, if you want.
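Both computations above can be checked numerically: the eliminated form $b(a)$ against the closed forms in $p$, and the period-2 limit against the quartic. A quick Python sketch (the function names are ours):

```python
import math

def a_const(p):
    # closed form of the fixed point a = sqrt(p + a)
    return (1 + math.sqrt(1 + 4 * p)) / 2

def b_const(p):
    # closed form of the fixed point b = 1 / (p + b)
    return (math.sqrt(p * p + 4) - p) / 2

def b_of_a(a):
    # the eliminated form b(a), obtained via p = a^2 - a
    return (a - a * a + math.sqrt(a ** 4 - 2 * a ** 3 + a * a + 4)) / 2

def a_period2(p, q, steps=60):
    # iterate a = sqrt(p + sqrt(q + a)); the map is a contraction, so this converges
    a = 1.0
    for _ in range(steps):
        a = math.sqrt(p + math.sqrt(q + a))
    return a

closed_forms_match = all(abs(b_of_a(a_const(p)) - b_const(p)) < 1e-12 for p in range(1, 10))
quartic_residual = max(
    abs(a_period2(p, q) ** 4 - 2 * p * a_period2(p, q) ** 2 - a_period2(p, q) + p * p - q)
    for p in (1, 2, 3) for q in (1, 2, 3)
)
```

For $p=1$ this reproduces $a=\phi$ and $b=\phi-1$.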
- -As another example, we can compare the continued fraction constant: -$$\cfrac{1}{1+\cfrac{1}{2+\cfrac{1}{3+\cfrac{1}{4+\cdots}}}}=\frac{I_1(2)}{I_0(2)}=0.697774657964$$ -to the nested radical constant, which has no known closed form: -$$\sqrt{1+\sqrt{2+\sqrt{3+\sqrt{4+\cdots}}}}=1.757932756618$$ -The sum, difference, product or quotient of these two numbers yields no known constant (as far as I was able to check).<|endoftext|> -TITLE: Structure sheaf consists of noetherian rings -QUESTION [6 upvotes]: Let $X\subseteq \mathbb{A}^n$ be an affine variety. The ring $k[x_1,\ldots,x_n]$ is noetherian because of Hilbert's basis theorem. -The coordinate ring $k[X]=k[x_1,\ldots,x_n]/I(X)$ is noetherian because ideals of $k[X]$ are of the form $J/I(X)$, where $J\supseteq I(X)$ is an ideal of $k[x_1,\ldots,x_n]$. -The local ring of $X$ at $p\in X$, given by $\mathcal{O}_{X,p}=\{f \in k(X) : f \text{ regular at } p\}$ is noetherian because it is a localization of $k[X]$, and the ideals of a ring of fractions $S^{-1}A$ are of the form $S^{-1}J$, where $J$ is an ideal of $A$. -If $U\subseteq X$ is open, let $\mathcal{O}_X(U)=\bigcap_{p\in U}\mathcal{O}_{X,p}$. Is this ring noetherian as well? - -REPLY [2 votes]: There is a counterexample in section 19.11.13 of Ravi Vakil's Foundations of Algebraic Geometry https://math216.wordpress.com/<|endoftext|> -TITLE: Is there any graphical method by which we can directly notice whether $\{f_k\}$ converges uniformly? -QUESTION [5 upvotes]: Thanks to books, many pdf files on Google and this web-site, I understood somewhat about uniform and pointwise convergence. This question may be the last question about this part. -Now, I can check whether the sequence of functions $f_k$ does not converge, converges pointwise, or converges uniformly by using the definitions. -However, I cannot check it just by looking at the graph. - -Let me give you some examples.
-$$f_n(x)=x^n, ~~~~~ \mbox{for all}~~x \in [0, 1)$$ -$\lim_{n\to\infty}f_n(x)=0$ because $|x|\lt1$. Hence, $\{f_n\}$ converges pointwise, and the pointwise-limit function $f$ is the zero function on the domain $D=[0, 1)$. -Now, I am checking whether the sequence $\{f_n\}$ converges uniformly or not. -For $\displaystyle\varepsilon=\frac{1}{3}$, there is no $N\in\mathbb{R}$ such that $\displaystyle\left|f_n(1-\frac{1}{n})-f(1-\frac1n)\right|=\left(1-\frac1n\right)^n\lt\varepsilon$ for every $n\gt{N}$ (note that $1-\frac1n\in{D}$). -The reason why $N$ does not exist is that as $n$ approaches $\infty$, $\displaystyle\left(1-\frac1n\right)^n\to\frac{1}{e}\gt\varepsilon=\frac{1}{3}$. -Therefore, the sequence of functions $\{f_n\}$ converges pointwise but not uniformly. - -I tried to draw the functions by using Matlab. - -However, I failed to see intuitively that it converges only pointwise just by looking at the graph. Is there any method to tell whether $\{f_n\}$ converges uniformly or only pointwise from the graph alone? - -REPLY [4 votes]: Examples of sequences of functions that converge uniformly - -In this case: - -The limit function must be continuous -The sequence of functions eventually converges to the function (lies in an $\epsilon$-tube) as $N \to \infty$ - -Examples of sequences of functions that converge pointwise but not uniformly - -Note: (b) has an error; it should be $f_n(x) = \exp(-nx)$ -In this case: - -The limit function does not have to be continuous -The sequence of functions does not have to lie fully in an $\epsilon$-tube of the limit function as $N \to \infty$ - -The latter is because in the definition of pointwise convergence: -$\forall \epsilon > 0, \forall x \in [a,b], \exists N\in \mathbb{N}$ s.t.
$\forall n\in \mathbb{N}, n \geq N, |f_n(x) - f(x)| < \epsilon$ -We can pick our $N$ large enough to make the latter condition $|f_n(x) - f(x)| < \epsilon$ true for a particular $x$, but not for all $x$ at once.<|endoftext|> -TITLE: What will mathematicians do when they run out of letters in the Greek and English alphabets? -QUESTION [7 upvotes]: Like $x$, $y$, $z$ are commonly understood to be dimensions and $\theta$ is an angle, $\pi$ is a specific irrational constant, and $\tau$ is two times $\pi$, et cetera. -They must be running out of letters by now. -Is that a problem? -What's the solution? - -REPLY [10 votes]: P. Halmos addressed the problem in his highly recommended paper How to write mathematics. Let me quote his advice (from the end of Section 6). - -As history progresses, more and more symbols get frozen. The standard -examples are e, i and π, and, of course, 0,1,2,3,... (Who would dare -write “Let 6 be a group.”?) A few other letters are almost frozen: -many readers would feel offended if “n” were used for a complex -number, “ε” for a positive integer, and “z” for a topological space. -(A mathematician’s nightmare is a sequence nε that tends to 0 as ε -becomes infinite.) -Moral: do not increase the rigid frigidity. Think -about the alphabet. It’s a nuisance, but it’s worth it. To save time -and trouble later, think about the alphabet for an hour now; then -start writing.<|endoftext|> -TITLE: Find the first few Legendre polynomials without using Rodrigues' formula -QUESTION [5 upvotes]: If a polynomial is given by $$y=\color{red}{a_0\left[1-\frac{l(l+1)}{2!}x^2+\frac{l(l+1)(l-2)(l+3)}{4!}x^4-\cdots\right]}+\color{blue}{a_1\left[x-\frac{(l-1)(l+2)}{3!}x^3+\frac{(l-1)(l+2)(l-3)(l+4)}{5!}x^5-\cdots\right]}\tag{1}$$ where $l$ is a constant and $a_0,a_1$ are coefficients.
-The recurrence relation is given by $$a_{n+2}=-\frac{(l-n)(l+n+1)}{(n+2)(n+1)}a_n\tag{2}$$ The objective is to find the first few Legendre polynomials $P_l(x)$ such that $P_l(1)=1$ without using Rodrigues' formula: -$$\fbox{$P_l(x)=\frac{1}{2^{l}l!}\frac{\mathrm{d}^l}{\mathrm{d}x^l}{\left(x^2-1\right)}^l$}$$ -The method given in my textbook states that: - -If the value of $a_0$ or $a_1$ in each polynomial is selected so - that $y = 1$ when $x = 1$, the resulting polynomials are called Legendre Polynomials, written $P_l(x)$. From $(1)$ and $(2)$ and the requirement $P_l(1) = 1$, we find the following expressions for the first few Legendre polynomials: -$\color{#180}{\quad P_0(x)=1,\quad P_1(x)=x}\quad \text{and}\quad \color{#180}{P_2(x)=\frac12(3x^2-1)}$ - -I don't understand how those polynomials marked $\color{#180}{\mathrm{green}}$ were obtained as I've only just started reading about Legendre polynomials and hence I'm not sure how to tackle this problem. But since it is mandatory that OPs show their efforts for questions of this nature -Here is my attempt anyway: -I substituted $l=0$ in the $\color{red}{\mathrm{red}}$ bracket to obtain $1=a_0(1)$ so $a_0=1$ and hence $P_0(x)=1$. -I substituted $l=1$ in the $\color{blue}{\mathrm{blue}}$ bracket to obtain $1=a_1(x)$ so $a_1=x$ and hence $P_1(x)=x$. -I substituted $l=2$ in the $\color{red}{\mathrm{red}}$ bracket to obtain $1=a_0\left[1-\dfrac{2(2+1)}{2!}x^2\right]=a_0\left[1-3x^2\right]$ so $a_0=\dfrac{1}{1-3x^2}\ne \frac12(3x^2-1)$. -Obviously I am doing something wrong. Can anyone please explain to me how to achieve the answers properly (and without using Rodrigues' formula)?
- -REPLY [3 votes]: We consider for even $l$ the polynomial (OPs red part) - \begin{align*} - a_0^{(l)}\left[1-\frac{l(l+1)}{2!}x^2+\frac{l(l+1)(l-2)(l+3)}{4!}x^4-\cdots\right]\tag{1} - \end{align*} - and for odd $l$ the polynomial (OPs blue part) - \begin{align*} - a_1^{(l)}\left[x-\frac{(l-1)(l+2)}{3!}x^3+\frac{(l-1)(l+2)(l-3)(l+4)}{5!}x^5-\cdots\right]\tag{2} - \end{align*} - -Note, we use for convenience an upper index $(l)$ to indicate to which polynomial the index $a_0$ resp. $a_1$ belongs. (... and I suppose this helps to clarify things ...) - -Setting $l=0$ in (1) we obtain -\begin{align*} - y_0(x)=a_0^{(0)} - \end{align*} -We observe that all terms containing $x^2$ or higher powers of $x$ vanish, since they all contain the factor $l$. - -$$ $$ - -Setting $l=1$ in (2) we obtain -\begin{align*} - y_1(x)=a_1^{(1)}x - \end{align*} -We observe that all terms containing $x^3$ or higher powers of $x$ vanish, since they all contain the factor $l-1$. - -$$ $$ - -Similarly to the first case we obtain with $l=2$: -\begin{align*} - y_2(x)=a_0^{(2)}\left[1-\frac{2\cdot 3}{2!}x^2\right]=a_0^{(2)}\left(1-3x^2\right) - \end{align*} - since all other terms in (1) contain the factor $l-2$. - -We also know that $y_l(1)=1$ for all polynomials $y_l(x)$. We obtain -\begin{align*} - y_0(1)&=a_0^{(0)}=1\\ - y_1(1)&=a_1^{(1)}=1\\ - y_2(1)&=-2a_0^{(2)}=1 - \end{align*} -We conclude $a_0^{(2)}=-\frac{1}{2}$ and obtain finally - -\begin{align*} - y_0(x)&=a_0^{(0)}=1\\ - y_1(x)&=a_1^{(1)}x=x\\ - y_2(x)&=a_0^{(2)}\left(1-3x^2\right)=\frac{1}{2}\left(3x^2-1\right) - \end{align*} - -[2016-03-22]: Update according to OPs comment: Why can we restrict the consideration to $a_0^{(l)}$ when $l$ is even and restrict the consideration to $a_1^{(l)}$ when $l$ is odd? -The short answer is: Since the other series diverges when considering the boundary condition at $x=1$ we can set $a_1^{(l)}=0$ when $l$ is even and we can set $a_0^{(l)}=0$ when $l$ is odd.
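-The computation above can also be replayed mechanically. A small Python sketch (the helper name `legendre_poly` is my own): it builds the terminating series from the recurrence $a_{n+2}=-\frac{(l-n)(l+n+1)}{(n+2)(n+1)}a_n$, keeps only the series whose parity matches $l$, and normalizes by the value at $x=1$:

```python
from fractions import Fraction

def legendre_poly(l):
    """Coefficients c with P_l(x) = sum(c[n] * x**n), normalized so P_l(1) = 1.

    Only the series whose free coefficient matches the parity of l
    terminates; the other one is set to zero, as explained above.
    """
    coeffs = [Fraction(0)] * (l + 1)
    coeffs[l % 2] = Fraction(1)            # a_0 = 1 (l even) or a_1 = 1 (l odd)
    for n in range(l % 2, l - 1, 2):       # a_{n+2} from a_n; stops at n = l - 2
        coeffs[n + 2] = -Fraction((l - n) * (l + n + 1),
                                  (n + 2) * (n + 1)) * coeffs[n]
    value_at_1 = sum(coeffs)               # y(1) before normalization
    return [c / value_at_1 for c in coeffs]

assert legendre_poly(0) == [Fraction(1)]                # P_0(x) = 1
assert legendre_poly(1) == [Fraction(0), Fraction(1)]   # P_1(x) = x
assert legendre_poly(2) == [Fraction(-1, 2), Fraction(0), Fraction(3, 2)]
```

-For $l=2$ the loop produces $a_2=-3a_0$ and $y(1)=-2a_0$, reproducing $a_0^{(2)}=-\frac{1}{2}$ from the computation above.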
- -Some details: We start with the Legendre differential equation - \begin{align*} -(1-x^2)y^{\prime\prime}-2xy^{\prime}+l(l+1)y=0\tag{3} -\end{align*} - and we want to find polynomials as solution which additionally fulfill the boundary condition -\begin{align*} -y(1)=1 -\end{align*} -We make a power series Ansatz - \begin{align*} -y(x)=a_0+a_1x+a_2x^2+\cdots -\end{align*} - With the help of the differential equation (3) and via comparing coefficients we obtain - \begin{align*} - y(x)&=a_0^{(l)}\left[1-\frac{l(l+1)}{2!}x^2+\frac{l(l+1)(l-2)(l+3)}{4!}x^4-\cdots\right]\\ -&+a_1^{(l)}\left[x-\frac{(l-1)(l+2)}{3!}x^3+\frac{(l-1)(l+2)(l-3)(l+4)}{5!}x^5-\cdots\right] -\end{align*} - with $a_0^{(l)}$ and $a_1^{(l)}$ being two degrees of freedom. -We next apply the boundary condition $y(1)=1$. -First case: $l=2k$ even. In this case $y$ has the shape - \begin{align*} - y(x)&=a_0^{(l)}\left[1-\frac{l(l+1)}{2!}x^2+\cdots\pm\frac{l(l-2)\cdots(l-2k+2)\,(l+1)(l+3)\cdots(l+2k-1)}{(2k)!}x^{2k}\right]\\ -&+a_1^{(l)}\left[x-\frac{(l-1)(l+2)}{3!}x^3+\frac{(l-1)(l+2)(l-3)(l+4)}{5!}x^5-\cdots\right] -\end{align*} -We see the $a_0^{(l)}$ series contains only finitely many terms, since all further terms contain the factor $l-2k$ and vanish. If we now consider $y(1)=1$, it can be shown that the other $a_1^{(l)}$ series diverges for $x=1$. To overcome this, we set $a_1^{(l)}=0$. - -The second case is symmetric. Here we correspondingly set $a_0^{(l)}=0$ since the $a_0^{(l)}$ series diverges, while the $a_1^{(l)}$ series is a polynomial.<|endoftext|> -TITLE: Vector subspace of $M_n(\mathbb{R})$ with invertible matrices -QUESTION [10 upvotes]: I recall this claim that I have read in some book a long time ago, but now I do not remember where, and unfortunately I could not find anything on google about it. I was wondering if someone could help me with some reference about this. - -For $n>8$ there is no $n$-dimensional vector subspace of $M_n(\mathbb{R})$ all of whose nonzero elements are invertible matrices.
- -I was also wondering if we can say something for $n \leq 8$. Thank you. -Remark: I think this should be related to Hurwitz's Theorem (https://en.wikipedia.org/wiki/Hurwitz%27s_theorem_(composition_algebras)). For example, for $n=1,2,4,8$ these $n$-dimensional vector subspaces are the ones isomorphic to $\mathbb{R},\mathbb{C},\mathbb{H}$ (quaternions) and $\mathbb{O}$ (octonions) respectively. -Remark 2: I think that the fact that these matrices are real is very important, but I don't know why. - -REPLY [4 votes]: As to why it's important to work over $\mathbb{R}$, user1551 gives a first indication, but it goes in the "wrong" direction (it shows that there are even fewer subspaces with invertible matrices over $\mathbb{C}$). -If you take $\mathbb{Q}$, you can find $n$-dimensional subspaces of $M_n(\mathbb{Q})$ consisting of invertible matrices (except for $0$ of course) for arbitrarily large $n$. This is because just as over $\mathbb{R}$ you have the well-known Hamilton quaternions, over $\mathbb{Q}$ you have division algebras of all degrees (the degree is the square root of the dimension in this case, so quaternions are of degree $2$ and dimension $4$). To see that, you may for instance use the Brauer-Hasse-Noether theorem, but it is probably overkill (I just don't immediately see an elementary argument). -If $D$ is such a division algebra, it embeds in $End_\mathbb{Q}(D)$ (where $D$ is seen as a vector space) by multiplication on the left, and the resulting matrices are invertible except for $0$.
Is this a stand-alone class, or will the new things I'm learning come into play later on? - -REPLY [4 votes]: Linear algebra is often (as seems to be the case here with your instructor) taught in a disjoint way from calculus and then eventually the threads are gathered together and merged, but usually hastily with what seem to be tangential asides, leaving most students to miss the point. -The truth of the matter is that to really understand the functions you study in calculus, you need to understand spaces of functions. You already get a hint of that when you talk about differentiation and integration as being "inverse" operators. The natural structure to put on these spaces is that of a vector space (it's no coincidence that high-schoolers learn about the "algebra of functions"). -Linear algebra can be considered the study of vector spaces and natural operations upon them. Using the tools of linear algebra, such as eigenvalues, you can gain an understanding of linear operators on these function spaces. This will actually help you understand differential equations, which themselves are the chief motivation for studying calculus in the first place (historically Newton et al studied calculus to solve differential equations in physics).<|endoftext|> -TITLE: Why is $\cos(x)^2$ written as $\cos^2(x)$? -QUESTION [7 upvotes]: I'm just wondering why the square of $\cos(x)$ (i.e.: $(\cos(x))*(\cos(x))$) is almost universally written in the form $\cos^2(x)$ rather than $\cos(x)^2$. This seems particularly bizarre when one considers that $\cos^{-1}(x) \ne \cos(x)^{-1}$. - -REPLY [2 votes]: This is because compositions of functions are very rare when you are talking about trigonometric functions. -For any other $f: \mathbb{D} \to \mathbb{R}$, it may make sense to calculate $f(f(x))$; however, for $\sin(x)$ or $\cos(x)$, a composition like $\cos(\cos(x))$ is not frequently used. That's why a misreading of $\cos^2(x)$ is not much of a concern.
-On the other hand, when it comes to $\arcsin(x)$ and $\csc(x)$, there is a genuine conflict over the meaning of $\sin^{-1}(x)$.<|endoftext|> -TITLE: In which sense is composition a tensor product -QUESTION [6 upvotes]: Let $\Phi\colon U\to V$ and $\Psi\colon V \to W$ be linear operators, and consider their composition -$$ -\Psi\circ \Phi -$$ -The operation, $$\circ:\mathcal{L}(U,V)\times\mathcal{L}(V,W)\to \mathcal{L}(U,W)\\ -(\Phi,\Psi)\mapsto \Psi\circ \Phi -$$ is bilinear. So I expect that we can understand $\Psi$ and $\Phi$ identified (by $\iota$) within a tensor space $T$ such that -$$ -\Psi\circ\Phi= \iota(\Psi \otimes \Phi)\ . -$$ -However, I cannot figure out what $T$ should be. - -REPLY [5 votes]: Yes, in fact the tensor product $U^*\otimes V$ can be identified with $\mathcal{L}^{fin}(U,V)$, the space of operators with finite rank. Now, with a suitable topology (the pointwise convergence, $V$ being endowed with the discrete topology, to put it explicitly) one has -$$ -\mathcal{L}(U,V)\cong U^*\hat\otimes V -$$ -then, at the level of finite rank operators, your composition, -$$ -\circ : \mathcal{L}^{fin}(U,V)\otimes \mathcal{L}^{fin}(V,W)\rightarrow \mathcal{L}^{fin}(U,W) -$$ - reads as the trace contraction of factors 2-3 as follows -$$ -U^*\otimes (V \otimes V^*)\otimes W\ . -$$ -This passes to the completion. -So, this was the general scheme. Let us now go into details. -The isomorphism $\mathcal{L}^{fin}(U,V)\cong U^*\otimes V$ -One has a natural arrow $j_{U,V} : U^*\otimes V\rightarrow \mathcal{L}(U,V)$ -given by $j_{U,V}(f\otimes v)[x]=f(x)v$ as you remarked in the comments. Its image is $\mathcal{L}^{fin}(U,V)$ as it is easy to see that $Im(j_{U,V})\subset \mathcal{L}^{fin}(U,V)$.
For each $T\subset V$ of finite dimension, one constructs an approximate section of $j_{U,V}$ by means of a finite basis $\{t_j\}_{j\in J}$ of $T$: to $\phi\in \mathcal{L}^{fin}(U,V)$ with image in $T$, we set -$$ -s_T(\phi)=\sum_{j\in J}(t_j^*\circ \phi)\otimes t_j -$$ -where $t_j^*$ is the coordinate family (i.e. $t_j^*(t_i)=\delta_{ij}$). It can be shown that $s_T$ does not depend on the chosen basis and that the $s_T$ extend each other (inductive system). So setting $s=\lim_{T\rightarrow V}s_T$, we get a section of $j_{U,V}$ which is, in fact, the inverse isomorphism. -Topology on $\mathcal{L}(U,V)$. Endowing $V$ with the discrete topology and $\mathcal{L}(U,V)$ with the pointwise convergence, we get the criterion that $(f_\alpha)_{\alpha\in A}$ ($A$ is an upward directed set) tends to zero iff -$$ -(\forall u\in U)(\exists B\in A)(\alpha\geq B\Longrightarrow f_\alpha(u)=0) -$$ -likewise, one has the summability criterion: $(f_i)_{i\in I}$ is summable iff -$$ -(\forall u\in U)(\exists F\subset_{finite} I)(i\notin F\Longrightarrow f_i(u)=0) -$$ -one can see at once that we can consider $\sum_{i\in F_u}f_i(u)$ as the limit and check that the map $u\rightarrow \sum_{i\in F_u}f_i(u)$ (where $F_u$ depends on $u$) is linear. Let us call this map $l$. We can easily prove that $l=\lim_{F\rightarrow_{finite} I}\sum_{i\in F}f_i$ and we write $l=\sum_{i\in I}f_i$. -Representation of $\mathcal{L}(U,V)$ as $U^*\hat\otimes V$ -With the preceding topology it can be shown that - - $\mathcal{L}(U,V)$ is complete - $\mathcal{L}^{fin}(U,V)$ is dense in $\mathcal{L}(U,V)$ -still calling $j_{U,V}$ the embedding $U^*\otimes V\rightarrow \mathcal{L}(U,V)$, one gets a topology on the tensor product and its completion gives the isomorphism -$$ -j_{U,V}: U^*\hat\otimes V\cong \mathcal{L}(U,V) -$$ -(by a little abuse of language, we still note it $j_{U,V}$). - -Concrete computations -We can give two expressions for the inverse of $j_{U,V}$ (representation of linear maps). Let $(u_i)_{i\in I}$ (resp.
$(v_j)_{j\in J}$) be a basis of $U$ (resp. $V$), then, for any $\phi\in \mathcal{L}(U,V)$, the families -$$ -\Big(u_i^*\otimes \phi(u_i)\Big)_{i\in I}\ ;\ -\Big((v_j^*\circ \phi)\otimes v_j\Big)_{j\in J} -$$ -are summable and their sums are $\phi$, hence -$$ -\phi=\sum_{i\in I}u_i^*\otimes \phi(u_i)=\sum_{j\in J}(v_j^*\circ \phi)\otimes v_j \qquad (2) -$$ -(where, for the sake of expressiveness, by a little abuse of language, we identified the tensors with their image through $j_{U,V}$) -Continuity of the composition -For the aforementioned topologies, the composition -$$ -\circ : \mathcal{L}^{fin}(U,V)\otimes \mathcal{L}^{fin}(V,W)\rightarrow \mathcal{L}^{fin}(U,W) -$$ -is continuous (meaning separately continuous and jointly continuous at $(0,0)$), it extends to the completions as the usual $\circ$. This proves by isomorphisms that the usual trace operator (between second and third factors) -$$ -tr_{23}:(U^*\otimes V)\otimes (V^*\otimes W)\rightarrow U^*\otimes W -$$ -extends as -$$ -\hat{tr}_{23}:(U^*\hat\otimes V)\otimes (V^*\hat\otimes W)\rightarrow U^*\hat\otimes W -$$ -giving the interpretation of the composition as a tensor contraction. To figure out $\hat{tr}_{23}$ concretely, the best is to take a basis of -$V$ and use the second representation for $\phi$ and the first for $\psi$, one gets -$$ -\hat{tr}_{23}\Big(\sum_{(i,j)\in I^2}\big((v_i^*\circ \phi)\otimes v_i\big)\otimes\big(v_j^*\otimes \psi(v_j)\big)\Big)=\sum_{i\in I}(v_i^*\circ \phi)\otimes \psi(v_i) -$$ -which represents $\psi\circ\phi$. -Nota: I gave the general scheme (a lot of piled-up notions, but none very difficult on its own); all can be unfolded on request.
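-In finite dimension the last formula is easy to check concretely: in bases, $\sum_i (v_i^*\circ\phi)\otimes\psi(v_i)$ is a sum of outer products of the rows of the matrix of $\phi$ with the columns of the matrix of $\psi$, and it reproduces the matrix of $\psi\circ\phi$. A minimal pure-Python sketch (dimensions, entries and helper names are my own arbitrary choices):

```python
def matmul(A, B):
    """Ordinary matrix product: the matrix of the composition A ∘ B."""
    return [[sum(A[i][k] * B[k][j] for k in range(len(B)))
             for j in range(len(B[0]))] for i in range(len(A))]

def contract_23(phi, psi):
    """sum_i (v_i^* ∘ phi) ⊗ psi(v_i): contraction over the basis (v_i) of V.

    Row i of phi is the functional v_i^* ∘ phi on U, column i of psi is the
    vector psi(v_i) in W; each summand is their outer product.
    """
    dim_v, dim_u, dim_w = len(phi), len(phi[0]), len(psi)
    result = [[0] * dim_u for _ in range(dim_w)]
    for i in range(dim_v):                 # contract over the middle space V
        for a in range(dim_w):
            for b in range(dim_u):
                result[a][b] += psi[a][i] * phi[i][b]
    return result

phi = [[1, 2, 0], [3, -1, 4]]             # phi : U = Q^3 -> V = Q^2  (rows = dim V)
psi = [[2, 1], [0, 5], [-1, 1], [1, 0]]   # psi : V = Q^2 -> W = Q^4  (rows = dim W)
assert contract_23(phi, psi) == matmul(psi, phi)   # the contraction is psi ∘ phi
```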
-<|endoftext|> -TITLE: Writing $1/(X+i\epsilon)$ in terms of principal value and imaginary part -QUESTION [5 upvotes]: I am looking to derive the relation $$\frac{1}{X + i\delta} = \text{P.V.} \frac{1}{X} - i \pi \delta(X)$$ In particular, I don't see where the factor of $\pi$ comes from in the derivation. I proceed by computing the imaginary part of the l.h.s. as a discontinuity: Take $X = 1-zt$, then, in the complex $z$ plane, $$\text{Disc}_z \frac{1}{(1-zt)} = \text{lim}_{\epsilon \rightarrow 0 } \left( \frac{1}{(1-(z+i\epsilon)t)} - \frac{1}{(1-(z-i\epsilon)t)}\right)$$ $$= \text{lim}_{\epsilon \rightarrow 0 } \left( \frac{1}{(1-zt -i\epsilon)} - \frac{1}{(1-zt+i\epsilon)}\right) = \text{lim}_{\epsilon \rightarrow 0 } \frac{2i \epsilon}{(1-zt)^2 + \epsilon^2} = 2i \delta(1-zt) $$ The imaginary part is therefore $i \delta(1-zt)$ so I am off from the actual result by a minus and a factor of $\pi$. Can anyone see where I went wrong? -Thanks! - -REPLY [8 votes]: The best strategy is to rationalize -$$ -\frac{1}{X + i\epsilon} = \frac{X-i \epsilon}{(X+i\epsilon)(X-i\epsilon)}=\frac{X}{X^2+\epsilon^2}+i\frac{-\epsilon}{X^2+\epsilon^2}\ . -$$ -The imaginary part is a nascent delta function, as the limit of a Lorentzian with an 'infinitely sharp' peak (see http://mathworld.wolfram.com/DeltaFunction.html) -$$ -\lim_{\epsilon\to 0^+}\frac{\epsilon}{\pi (X^2+\epsilon^2)}=\delta(X)\ . -$$ -The factor of $\pi$ is due to normalization: -$$ -\int_{-\infty}^\infty \delta(x-X)dX=1 -$$ -by definition for all $x$, and the integral of the Lorentzian is normalized to $1$ only with that factor of $\pi$. See also https://en.wikipedia.org/wiki/Sokhotski%E2%80%93Plemelj_theorem<|endoftext|> -TITLE: Integrating $\int^2_{-2}\frac{x^2}{1+5^x}$ -QUESTION [5 upvotes]: $$\int^2_{-2}\frac{x^2}{1+5^x}\,dx$$ -How do I start to integrate this?
-I know the basics and tried substituting $u$ for $5^x$, where by changing the base of the logarithm I get $\frac{\ln(u)}{\ln 5}=x$, but I got stuck. -Any hints would suffice, preferably for the original integral and not after my substitution. -(And also using basic properties of definite integrals.) -I know only basic, high-school-level integration, so I would prefer an answer at that level. - -REPLY [9 votes]: $$\tag1I=\int_{-2}^{2}\frac{x^2}{1+5^x}dx$$ -Note that $$\int_a^bf(x)dx=\int_a^bf(a+b-x)dx$$ -Thus, -$$\tag2I=\int_{-2}^{2}\frac{(-2+2-x)^2}{1+5^{-2+2-x}}dx=\int_{-2}^{2}\frac{x^2}{1+5^{-x}}dx=\int_{-2}^{2}\frac{5^xx^2}{1+5^{x}}dx$$ -Add $(1)$ and $(2)$: this gives $2I=\int_{-2}^{2}x^2dx=\frac{16}{3}$, hence $I=\frac{8}{3}$.<|endoftext|> -TITLE: The integral $\int\ln(x)\cos(1+(\ln(x))^2)\,dx$ -QUESTION [6 upvotes]: Help with an integral from calculus please!? -The integral is -$$\int\ln(x)\cos(1+(\ln(x))^2)\,dx$$ -My teacher told me I have to use substitution, but I still can't solve it. -I've been working on this for the last week but still can't get the answer, please help me guys. Thanks!
- -REPLY [2 votes]: First, consider the following integral: -$$I(t)=\int\sin(1+(t+\ln(x))^2)\ dx$$ -By letting $x=e^u$ and $\sin(\theta)=\Im(e^{i\theta})$, we get -$$I(t)=\int e^u\sin(1+(t+u)^2)\ du=\Im\int e^{u+(1+(t+u)^2)i}\ du$$ -This may then be solved using the error function, -$$\int e^{u+(1+(t+u)^2)i}\ du=\frac{-\sqrt{\pi i}e^{i+\frac{(2it+1)^2}{4i}}\operatorname{erf}\left(\frac{2iu+2it+1}2\sqrt i\right)}2$$ -It then follows that -$$I'(t)=\frac d{dt}\int\sin(1+(t+\ln(x))^2)\ dx=\int2(t+\ln(x))\cos(1+(t+\ln(x))^2)\ dx\\I'(t)=\Im\frac d{dt}\frac{-\sqrt{\pi i}e^{i+\frac{(2it+1)^2}{4i}}\operatorname{erf}\left(\frac{2iu+2it+1}2\sqrt i\right)}2$$ -Thus, -$$\frac12I'(0)=\int\ln(x)\cos(1+(\ln(x))^2)\ dx$$ -Evaluating the derivative, one gets -$$\begin{align}\frac12I'(t)&=\Im\frac{-\sqrt{\pi i}}4\frac d{dt}e^{i+\frac{(2it+1)^2}{4i}}\operatorname{erf}\left(\frac{2iu+2it+1}2\sqrt i\right)\\&=\Im\frac{-\sqrt{\pi i}}4e^{i+\frac{(2it+1)^2}{4i}}\left[(2it+1)\operatorname{erf}\left(\frac{2iu+2it+1}2\sqrt i\right)+\frac{4i}{\sqrt\pi}e^{-\left(\frac{2iu+2it+1}2\sqrt i\right)^2}\right]\end{align}$$ -Finally, - -$$\int\ln(x)\cos(1+(\ln(x))^2)\ dx=\Im\frac{-\sqrt{\pi i}}4\left[e^{i+\frac1{4i}}\left[\operatorname{erf}\left(\frac{2iu+1}2\sqrt i\right)+\frac{4i}{\sqrt\pi}e^{-\left(\frac{2iu+1}2\sqrt i\right)^2}\right]\right]$$ - -Where $u=\ln(x)$.<|endoftext|> -TITLE: A number 47_ _74 is a multiple of consecutive numbers. Find the numbers. -QUESTION [7 upvotes]: I recently solved a problem. - -A number 47_ _74 is a multiple of at least two consecutive numbers. Find the numbers. The list of numbers may be of any length $\ge 2$. - -I first saw that if it were a product of 4 consecutive numbers then it would have to be divisible by 4, but it isn't, so it is a product of 2 or 3 numbers. -Also, all of the two or three numbers must be 2-digit or 3-digit. -I tried pairing consecutive numbers but no two consecutive numbers produced a result whose units digit was 4. So I tried triples of numbers.
The only two candidates were $(*2, *3, *4)$ and $(*7, *8, *9)$. (Replace the stars with one-digit numbers.) So, since $70\cdot 70\cdot 70=343000 \text{ and } 80\cdot 80\cdot 80 = 512000$, I tried $72\cdot 73\cdot 74$ and $77\cdot 78\cdot 79$, and $77\cdot 78\cdot 79$ produced a result of 474474, fulfilling the requirement. -I want to know if my approach is practical. Is it correct? -Can you suggest a better way of tackling this problem? -I would love new answers. Can you suggest some 'elegant' proof? - -REPLY [5 votes]: I think there's not much room for improvement in your approach. There's a little issue in your trying $72\cdot 73\cdot 74$, as it would be divisible by $4$. -By using modulo 5 arithmetic you can narrow down the possibilities quite sharply. We know that $47..74\equiv4$. In modulo 5 arithmetic there are only three (or four) factorizations of $4$: $4\equiv 2\cdot 2\equiv 3\cdot 3\equiv 1\cdot 4\equiv 4\cdot 1$. Therefore it's rather obvious that a product of two consecutive numbers can't equal $47..74$. For the product of three consecutive numbers to be equivalent to $4$, none of them can be $0$ (because their product would be equivalent to $0$) - so that leaves only $1\cdot2\cdot3\equiv 1$ and $2\cdot3\cdot4\equiv 4$. Four consecutive numbers are impossible, as we already know (which we see by using modulo 4 arithmetic instead). -The next narrowing down is to use estimates, which also leaves quite few possibilities.
By noting that the product $(n-1)n(n+1)$ has the property: -$$(n-1)^3 \le (n-1)n(n+1) \le (n+1)^3$$ -and that the product should be in the range $[470074, 479974]$, you have $(n-1)^3 \le 479974$, which means $n-1 \le 78$, and $(n+1)^3\ge 470074$, which means $n+1\ge 78$; that is, -$$77 \le n \le 79$$ -But the restriction that none of $(n-1)$, $n$ and $(n+1)$ is divisible by four means that $n$ is even, so that leaves only $n=78$ (or from modulo 5 arithmetic we know that $n\equiv 3$, that is, $n$ ends with $3$ or $8$); thus the only possible candidate is $77\cdot78\cdot79 = 474474$<|endoftext|> -TITLE: why does $\frac{df}{dz}=\frac{1}{2}\left ( \frac{\partial f}{\partial x}-i\frac{\partial f}{\partial y}\right )$, why the $\frac{1}{2}$? -QUESTION [7 upvotes]: I have trouble seeing where the $\frac{1}{2}$ comes from in $$\frac{df}{dz}=\frac{1}{2}\left ( \frac{\partial f}{\partial x}-i\frac{\partial f}{\partial y}\right )$$ -For a change of variables $z=x+iy$ we have $$\frac{df}{dz}= \ \frac{\partial f}{\partial x}\frac{\partial x}{\partial z}+\frac{\partial f}{\partial y}\frac{\partial y}{\partial z}$$ and $\frac{\partial x}{\partial z}=1$ and $\frac{\partial y}{\partial z}=-i$. Therefore we have the above but without the $\frac{1}{2}$. I've seen someone derive the correct expression by including the change of variables for $\overline{z}$; however, I don't see how that is necessary, it should work without, right? I don't know what I am missing. - -REPLY [5 votes]: Well it starts with the general formula for the differential $$df = \frac{\partial f}{\partial x} dx + \frac{\partial f}{\partial y} dy.$$ In complex analysis we prefer to use $dz$ and $d\overline{z}$.
So, using $dx = \dfrac{dz+d\overline{z}}{2}$ and $dy = \dfrac{dz-d\overline{z}}{2i},$ one finally gets $$df = \frac{1}{2}\left ( \frac{\partial f}{\partial x}-i\frac{\partial f}{\partial y}\right ) dz + \frac{1}{2}\left ( \frac{\partial f}{\partial x}+i\frac{\partial f}{\partial y}\right )d\overline{z}.$$ Now it is natural to define $\frac{\partial}{\partial z}$ to be $\frac{1}{2}\left ( \frac{\partial }{\partial x}-i\frac{\partial }{\partial y}\right )$ and $\frac{\partial}{\partial \overline{z}}$ to be $\frac{1}{2}\left ( \frac{\partial }{\partial x}+i\frac{\partial }{\partial y}\right )$. Thanks to that we have the beautiful formula $$df = \frac{\partial f}{\partial z} dz + \frac{\partial f}{\partial \overline{z}} d\overline{z}.$$<|endoftext|> -TITLE: Reversing the digits of an infinite decimal -QUESTION [11 upvotes]: Let $x$ be a real number in $[0,1)$, with decimal expansion -$$ -x = 0.d_1 d_2 d_3 \cdots d_i \cdots \;. -$$ -If the decimal expansion is finite, ending at $d_i$, then extend with zeros: -$d_k = 0$ for all $k > i$. -Define a sequence $x_k^R$ by digit reversals, as follows: -\begin{eqnarray} -x_1^R & = & 0.d_1 \\ -x_2^R & = & 0.d_2 d_1 \\ -x_3^R & = & 0.d_3 d_2 d_1 \\ -x_4^R & = & 0.d_4 d_3 d_2 d_1 \\ -& \cdots &\\ -x_k^R & = & 0.d_k d_{k-1} \cdots d_3 d_2 d_1\\ -& \cdots & -\end{eqnarray} -Finally, define $x^R = \lim_{k\to\infty} x_k^R$, when that limit exists. - -Q. For which $x$ does the limit exist? - In particular, must $x$ be rational for the limit $x^R$ to exist? - If not, what are some irrationals with limits? - -If the decimal expansion of $x$ is finite, then the extension by zeros -leads to $\lim_{k\to\infty} x_k^R = 0$. - -REPLY [15 votes]: The limit exists precisely when the sequence of digits is eventually constant. If it is eventually constantly $d$, the limit is $\frac{d}9$. - -REPLY [14 votes]: Suppose that $\lim_{k \to \infty} x^R_k = x^R$ exists.
Then, for $k$ large enough, the first digit of $x^R_k$ is equal to the first digit of $x^R$, say $a$. -Hence, $d_k = d_{k+1} = \dots = a$ and $x^R = 0.aaaa \dots = a/9$. -So, the number $x$ must have a decimal expansion where eventually only the digit $a$ appears.<|endoftext|> -TITLE: Elementary Lebesgue measure problem -QUESTION [9 upvotes]: Suppose $E_1,\cdots,E_n\subset [0,1]$ are Borel sets such that -$\sum_{i=1}^n\mu(E_i)>n-1$, in which $\mu$ denotes Lebesgue measure. Prove that $\cap_{i=1}^nE_i$ is nonempty. -My attempts included using the famous equation, which is true as all sets in question are of finite measure: -$$\mu(\sum E_i)=\sum^1\mu(E_{i_1})+(-1)^1\sum^2\mu(E_{i_1}E_{i_2})+\cdots+(-1)^{n-2}\sum^{n-1}\mu(E_{i_1}\cdots E_{i_{n-1}})+(-1)^{n-1}\mu(E_1E_2\cdots E_n). $$ -where $\sum^k$ denotes the sum over all $k$-element index subsets, and set addition and multiplication are used in place of union and intersection, for the sake of notational simplicity. Sadly, this came to no avail at all. Indeed I could do $n=2$ (which of course is too trivial to discuss here) but I couldn't even do $n=3$. -In any case, I think the ultimate goal is to show $\mu(\cap E_i)>0$, and some kind of elementary set operations must be involved. But now I'm at a loss of what to do. - -REPLY [2 votes]: The most straightforward way to do this is by induction; notice that if $A$ and $B$ are subsets of $[0,1]$ we can easily prove that -$$m(A\cap B)\geq m(A) + m(B) - 1.$$ -Then, define $E'_n=\bigcap_{i=1}^nE_i$. We can prove that $\mu(E'_n)\geq \left(\sum_{i=1}^n \mu(E_i)\right)-n+1$ by induction. -Obviously $\mu(E'_1)=\mu(E_1)\geq \mu(E_1)$ and we have -$$m(E'_{n+1})=m(E'_n\cap E_{n+1})\geq m(E'_n) + m(E_{n+1})-1 \geq \left(\sum_{i=1}^{n+1}m(E_i)\right)-n - 1 + 1$$ -which is the desired inequality.
This shows that $E'_n$ has positive measure, given the condition you have, and thus is non-empty.<|endoftext|> -TITLE: Proof of the universal property of the quotient topology -QUESTION [6 upvotes]: In this question: -universal property in quotient topology -I saw the following theorem: - -Let $X$ be a topological space and $\sim$ an equivalence relation on $X$. Let $\pi: X\to X/{\sim}$ be the canonical projection. If $g : X → Z$ is a continuous map such that $a \sim b$ implies $g(a) = g(b)$ for all $a$ and $b$ in $X$, then there exists a unique continuous map $f : X/{\sim} → Z$ such that $g = f ∘ \pi$. - -I was wondering how one would prove this. - -REPLY [5 votes]: For $x\in X$ let $[x]$ denote the $\sim$-equivalence class of $x$; $X/{\sim}=\{[x]:x\in X\}$. To show that such an $f$ exists, we simply define it: for $[x]\in X/{\sim}$ let $f([x])=g(x)$. Now use the fact that $g$ is constant on $[x]$ to show that $f$ is well-defined. -To show that $f$ is unique, suppose that $h:X/{\sim}\to Z$ is continuous and satisfies $g=h\circ\pi$. Let $[x]\in X/{\sim}$ be arbitrary. Then -$$f([x])=(f\circ\pi)(x)=g(x)=(h\circ\pi)(x)=h([x])\;,$$ -and hence $f=h$. -To show that $f$ is continuous, let $U$ be an open set in $Z$. Show that $$f^{-1}[U]=\{[x]\in X/{\sim}:x\in g^{-1}[U]\}\;,$$ and then use the fact that $X/{\sim}$ bears the quotient topology to conclude that $f^{-1}[U]$ is open in $X/{\sim}$.<|endoftext|> -TITLE: Understanding the notion of a connection and covariant derivative -QUESTION [12 upvotes]: I have been reading Nakahara's book "Geometry, Topology & Physics" with the aim of teaching myself some differential geometry. Unfortunately I've gotten a little stuck on the notion of a connection and how it relates to the covariant derivative.
- -As I understand it a connection $\nabla :\mathcal{X}(M)\times\mathcal{X}(M)\rightarrow\mathcal{X}(M)$, where $\mathcal{X}(M)$ is the set of tangent vector fields over a manifold $M$, is defined such that given two vector fields $X,V\in\mathcal{X}(M)$ then $\nabla :(X,V)\mapsto\nabla_{X}V$. The connection enables one to "connect" neighbouring tangent spaces such that one can meaningfully compare vectors in the two tangent spaces. -What confuses me is that Nakahara states that this is in some sense the correct generalisation of a directional derivative and that we identify the quantity $\nabla_{X}V$ with the covariant derivative, but what makes this a derivative of a vector field? In what sense is the connection enabling one to compare the vector field at two different points on the manifold (surely required in order to define its derivative), when the mapping is from the (Cartesian product of) the set of tangent vector fields to itself? I thought that the connection $\nabla$ "connected" two neighbouring tangent spaces through the notion of parallel transport in which one transports a vector field along a chosen curve, $\gamma :(a,b)\rightarrow M$, in the manifold connecting the two tangent spaces. -Given this, what does the quantity $\nabla_{e_{\mu}}e_{\nu}\equiv\nabla_{\mu}e_{\nu}=\Gamma_{\mu\nu}^{\lambda}e_{\lambda}$ represent? ($e_{\mu}$ and $e_{\nu}$ are coordinate basis vectors in a given tangent space $T_{p}M$ at a point $p\in M$) I get that since $e_{\mu},e_{\nu}\in T_{p}M$, then $\nabla_{\mu}e_{\nu}\in T_{p}M$ and so can be expanded in terms of the coordinate basis of $T_{p}M$, but I don't really understand what it represents?! -Apologies for the long-windedness of this post but I've really confused myself over this notion and really want to clear up my understanding.
- -REPLY [6 votes]: In what sense is the connection enabling one to compare the vector field at two different points on the manifold [...], when the mapping is from the (Cartesian product of) the set of tangent vector fields to itself? I thought that the connection ∇ "connected" two neighbouring tangent spaces through the notion of parallel transport [...] - -To see a connection only as a mapping $\nabla: \mathcal{X}(M)\times\mathcal{X}(M)\rightarrow\mathcal{X}(M)$ is too restrictive. Often a connection is also seen as a map $Y\mapsto\nabla Y\in\Gamma(TM\otimes TM^*)$, which highlights the derivative aspect. However, the important point is that $\nabla$ is $C^\infty(M)$-linear in the first argument, which results in the fact that the value $\nabla_X Y|_p$ only depends on $X_p$ in the sense that -$$ -X_p=Z_p \Rightarrow \nabla_X Y|_p = \nabla_Z Y|_p. -$$ -Hence, for every $v\in TM_p$, $\nabla_vY$ is well-defined. This leads directly to the definition of parallel vector fields and parallel transport (as I think you already know). -Vice versa, given parallel transport maps $\Gamma(\gamma)^t_s: TM_{\gamma(s)}\rightarrow TM_{\gamma(t)}$, one can recover the connection via -$$ -\nabla_X Y|_p = \frac{d}{dt}\bigg|_{t=0}\Gamma(\gamma)_t^0Y_{\gamma(t)} \quad(\gamma \text{ is an integral curve of }X). -$$ -This is exactly the generalisation of directional derivatives in the sense that we vary $Y$ in direction of $X_p$ in a parallel manner. -In Euclidean space this indeed reduces to the directional derivative: Using the identity chart every vector field can be written as $Y_p=(p,V(p))$ for $V:\mathbb R^n\rightarrow \mathbb R^n$ and the parallel transport is just given by -$$ -\Gamma(\gamma)_s^t (\gamma(s),v)=(\gamma(t),v). -$$ -Hence, we find in Euclidean space: -$$ -\frac{d}{dt}\bigg|_{t=0}\Gamma(\gamma)_t^0Y_{\gamma(t)} = \frac{d}{dt}\bigg|_{t=0}(p,V(\gamma(t))) = (p,DV\cdot\gamma'(0)), -$$ -which is exactly the directional derivative of $V$ in direction $v=\gamma'(0)$.
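-This Euclidean reduction can be sanity-checked with finite differences (a throwaway sketch; the vector field $V$ and the curve $\gamma$ below are arbitrary choices of mine): since the transport maps are identities, $\frac{d}{dt}\big|_{t=0}V(\gamma(t))$ should match the Jacobian $DV$ applied to $\gamma'(0)$.

```python
def V(p):
    """An arbitrary smooth vector field on R^2 (illustrative choice)."""
    x, y = p
    return (x * y, x + y ** 2)

def gamma(t):
    """A curve through p = (1, 2) with velocity gamma'(0) = (3, -1)."""
    return (1.0 + 3.0 * t, 2.0 - 1.0 * t)

# Finite-difference version of (d/dt)|_{t=0} V(gamma(t)).  In flat space the
# transport Gamma(gamma)_t^0 is the identity, so no correction term appears.
h = 1e-6
fd = tuple((a - b) / h for a, b in zip(V(gamma(h)), V(gamma(0.0))))

# Exact directional derivative DV(p) . gamma'(0), with DV = [[y, x], [1, 2y]].
x, y, vx, vy = 1.0, 2.0, 3.0, -1.0
exact = (y * vx + x * vy, 1.0 * vx + 2.0 * y * vy)

assert all(abs(a - b) < 1e-4 for a, b in zip(fd, exact))
```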
- -Back to the original question: I think it is hard to see how a connection "connects neighbouring tangent spaces" only from the axioms. You should keep in mind, however, that the contemporary formalism has passed many abstraction layers since the beginning and is reduced to its core, the axioms (for a survey see also Wikipedia). To get the whole picture, it is essential that one explores all possible interpretations and consequences of the definition, since often they led to the definition in the first place. In my opinion, the connection is defined as it is with the image in mind that it is an infinitesimal version of parallel transport. Starting from this point, properties such as the Leibniz rule are a consequence. However, having such a differential operator $\nabla$ fulfilling linearity, the Leibniz rule and so on, is fully equivalent to having parallel transport in the first place. In modern mathematics, these properties are thus taken as the defining properties/axioms of a connection, mainly because they are easier to handle and easier to generalise to arbitrary vector bundles. - -Given this, what does the quantity $\nabla_{e_\mu}e_\nu=\Gamma^\lambda_{\mu\nu}e_\lambda$ represent? [...] - -As you wrote, the connection coefficients / Christoffel symbols $\Gamma^\lambda_{\mu\nu}$ are the components of the connection in a local frame and are needed for explicit computations. I think on this level you can't get much meaning out of these coefficients. However, they reappear in a nicer way if you restate everything in the Cartan formalism and study Cartan and/or principal connections. The Wikipedia article on connection forms tries to give an introduction to this approach. -Nakahara also gives an introduction to connections on principal bundles and the relation to gauge theory later on in his book. In my opinion, this chapter is a bit short and could be more detailed, especially toward the end.
But it is a good start.<|endoftext|>
-TITLE: The definition of Determinant in the spirit of algebra and geometry
-QUESTION [13 upvotes]: The concept of determinant is quite an unmotivated topic to introduce. Textbooks use such "strung out" introductions as the axiomatic definition, Laplace expansion, Leibniz's permutation formula or something like signed volume.
-Question: is the following a possible way to introduce the determinant?
-
-The determinant is all about determining whether a given set of vectors is linearly independent, and a direct way to check this is to add scalar multiples of column vectors to get the diagonal form:
-$$\begin{pmatrix}
-a_{11} & a_{12} & a_{13} & a_{14} \\
-a_{21} & a_{22} & a_{23} & a_{24} \\
-a_{31} & a_{32} & a_{33} & a_{34} \\
-a_{41} & a_{42} & a_{43} & a_{44} \\
-\end{pmatrix} \thicksim \begin{pmatrix}
-d_1 & 0 & 0 & 0 \\
-0 & d_2 & 0 & 0 \\
-0 & 0 & d_3 & 0 \\
-0 & 0 & 0 & d_4 \\
-\end{pmatrix}.$$
-During the diagonalization process we demand that the information, i.e. the determinant, remains unchanged. Now it's clear that the vectors are linearly independent if every $d_i$ is nonzero, i.e. $\prod_{i=1}^n d_i\neq0$. It may also be the case that two columns are equal and there is no diagonal form, so we must add a condition that annihilates the determinant (this is consistent with $\prod_{i=1}^n d_i=0$), since the column vectors can't be linearly independent. 
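An illustrative aside (not part of the proposal itself): the reduction just described can be mimicked numerically, with numpy's `det` playing the role of the unchanged "information":

```python
import numpy as np

# Column operations of the form "add a multiple of column i to column j"
# bring a generic matrix to diagonal form, while the determinant stays
# equal to the product d_1 * ... * d_n of the resulting diagonal entries.
rng = np.random.default_rng(1)
A = rng.standard_normal((4, 4))
D = A.copy()
n = 4
for i in range(n):
    for j in range(n):
        if j != i:
            # zero out entry (i, j) using column i (assumes the pivot is nonzero)
            D[:, j] -= (D[i, j] / D[i, i]) * D[:, i]

assert np.allclose(D, np.diag(np.diag(D)))                # D is now diagonal
assert np.isclose(np.linalg.det(A), np.prod(np.diag(D)))  # det was preserved
```

A generic random matrix keeps all pivots nonzero; a matrix with two equal columns would make the elimination break down, which is exactly the degenerate case discussed above.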
-
-If we want to have a real-valued function that provides this information, then we simply introduce an ad hoc function $\det:\mathbb{R}^{n \times n} \rightarrow \mathbb{R}$ with the following properties:
-
-$$\det (a_1,\ldots,a_i,\ldots,a_j,\ldots,a_n)=\det (a_1,\ldots,a_i,\ldots,k\cdot a_i+a_j,\ldots,a_n).$$
-$$\det(d_1\cdot e_1,\ldots,d_n\cdot e_n)=\prod_{i=1}^n d_i.$$
-$$\det (a_1,\ldots,a_i,\ldots,a_j,\ldots,a_n)=0, \space \space \text{if} \space \space a_i=a_j.$$
-
-
-From the previous definition of the determinant we can infer the multilinearity property:
-$$[a_1,\ldots,c_1 \cdot u+c_2 \cdot v,\ldots,a_n]\thicksim diag[d_1,\ldots,c_1 \cdot d'_i+c_2 \cdot d''_i ,\ldots,d_n],$$ so $$\det[a_1,\ldots,c_1 \cdot u+c_2 \cdot v,\ldots,a_n]=\prod_{j=1:j\neq i}^n d_j(c_1 \cdot d'_i+c_2 \cdot d''_i)$$ $$=c_1\det(diag[d_1,\ldots, d'_i,\ldots,d_n])+c_2\det(diag[d_1,\ldots, d''_i,\ldots,d_n])$$ $$=c_1\det[a_1,\ldots,u,\ldots,a_n]+c_2\det[a_1,\ldots, v,\ldots,a_n].$$
-Note that the previous multilinearity together with property $(1)$ gives property $(2)$, so we know from the literature that the determinant function $\det:\mathbb{R}^{n \times n} \rightarrow \mathbb{R}$ actually exists and is unique.
-
-Obviously, the determinant offers information about how orthogonal a set of vectors is. Thus, with the Gram-Schmidt process we can form an orthogonal set of vectors from the set $(a_1,\ldots, a_n)$, and by multilinearity and property $(2)$ the absolute value of the determinant is the volume of the parallelepiped spanned by the set of vectors.
-Definition.
-The volume of the parallelepiped formed by the set of vectors $(a_1,\ldots, a_n)$ is $Vol(a_1,\ldots, a_n)=Vol(a_1,\ldots, a_{n-1})\cdot |a_{n}^{\bot}|=|a_{1}^{\bot}|\cdots |a_{n}^{\bot}|$, where $a_{i}^{\bot} \bot span(a_1,\ldots, a_{i-1}).$
-
-This approach to the determinant works equally well whether we begin with the volume of a parallelepiped (geometric approach) or with the question of invertibility (algebraic approach). 
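For what it's worth, the three defining properties are easy to sanity-check numerically (an illustrative aside, not part of the argument):

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((4, 4))

# property (1): adding a multiple of one column to another leaves det unchanged
B = A.copy()
B[:, 2] += 3.0 * B[:, 0]
assert np.isclose(np.linalg.det(A), np.linalg.det(B))

# property (2): det of a diagonal matrix is the product of the diagonal entries
d = np.array([2.0, -1.0, 0.5, 3.0])
assert np.isclose(np.linalg.det(np.diag(d)), d.prod())

# property (3): det vanishes when two columns coincide
C = A.copy()
C[:, 3] = C[:, 1]
assert np.isclose(np.linalg.det(C), 0.0)

print("all three properties hold")
```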
I was motivated by the book Linear algebra and its applications by Lax, chapter 5:
-Rather than start with a formula for the determinant, we shall deduce it from the properties forced on it by the geometric properties of signed volume. This approach to determinants is due to E. Artin.
-
-$\det (a_1,\ldots,a_n)=0$, if $a_i=a_j$, $i\neq j.$
-$\det (a_1,\ldots,a_n)$ is a multilinear function of its arguments, in the sense that if all $a_i, i \neq j$ are fixed, $\det$ is a linear function of the remaining argument $a_j.$
-$\det(e_1,\ldots,e_n)=1.$
-
-REPLY [3 votes]: The way I teach determinants to my students is to start with the case $n=2$, and to use the complex numbers and/or trigonometry in order to show that, for $(a,b), (c,d)$ vectors on the plane, the quantity
-$$ad-bc=||(a,b)||\cdotp ||(c,d)|| \sin \theta$$
-is the signed area between $(a,b)$ and $(c,d)$ (in this order).
-Then, using the vector product and its properties (we have seen it before coming to the topic of determinants in full generality), we check that $3$ by $3$ determinants carry the meaning of signed volumes.
-The next step is to introduce determinants as alternating multilinear functions. We have seen examples of bilinear maps (inner products), trilinear maps, such as
-$$(u,v,w)\mapsto (u\times v)\bullet w$$
-and the quadrilinear maps $$(a,b,c,d)\mapsto (a\bullet c) (b\bullet d)-(b\bullet c) (a\bullet d),$$
-$$(a,b,c,d)\mapsto (a \times b)\bullet (c\times d).$$
-Now, when explaining multilinearity we did emphasise the fact that the equality of the last two examples can be proven if we only check equality for the case where $a,b,c,d$ are vectors of the canonical basis.
-Then the time comes to define the determinant of $n$ vectors in $\mathbb{R}^n$, which is a new example of an $n$-linear, alternating function. 
They check that the vector space of such maps has dimension $\binom{n}{n}=1.$ The students thus learn that the determinant is essentially the only possible such function, up to a multiple, in the same way they saw that more general multilinear maps depend exclusively on their values on vectors of a chosen basis (say, the canonical basis in our case).
-Although I learnt to prove stuff such as $\det(AB)=\det(A) \det(B)$ by strict functoriality, in class we do define the map
-$$L(X_1, \ldots , X_n)=\det(AX_1, \ldots , AX_n),$$ which by uniqueness is a constant multiple of the determinant function $T(X_1, \ldots , X_n)=\det(X_1, \ldots, X_n),$ and compute the constant by evaluating on the identity matrix, i.e. $X_i=e_i.$
-Thus $\det(AB)=\det(A)\det(B).$
-It is in T.W. Korner's book called Vectors, Pure and Applied that one can see a rigorous construction that uses elementary matrices. The OP can check Korner's book to see a nice, slightly more down-to-earth exposition.
-In op. cit. one can see how Korner uses the fact that an invertible matrix can be decomposed as a product of elementary matrices to obtain the formula $\det(AB)=\det(A)\det(B).$
-Note: I have been deliberately brief in my exposition, just so as not to repeat too much stuff that was already included in other answers.<|endoftext|>
-TITLE: The torsion subgroup of principal units $U^{(1)}$
-QUESTION [5 upvotes]: $\newcommand{\U}{U^{(1)}}$
-$\newcommand{\O}{\mathcal{O}}$
-$\newcommand{\p}{\mathfrak{p}}$
-$\DeclareMathOperator{\char}{char}$
-$\newcommand{\N}{\mathbb{N}}$
-I have a question about the torsion subgroup of the principal units $\U$.
-Let $K$ be a local field with valuation ring $\O$, maximal ideal $\p$.
-Let $q = p^f = \#\O/\p$. Let $\char K = 0$. The group of principal units $\U$ is
-$$
-\U := 1 + \p.
-$$
-For $n \in \N$, let $\mu_n$ be the group of $n$-th roots of unity,
-i.e.
-$$
- \mu_n := \{ x \in K \mid x^n = 1 \}. 
-
-$$
-Then, I want to show that:
-
-The torsion subgroup of $\U$ can be written as $\mu_{p^a}$ for some $a \in \mathbb{N}$.
-
-First, I tried to show that for any $x \in \U$ which has finite order,
-the order of $x$ can be written as $p^n$ for some $n \in \N$,
-but I failed.
-This question is related to Proposition (5.7) in
-Neukirch, "Algebraic Number Theory" at page 140.
-
-REPLY [4 votes]: Suppose $1+x$ is a torsion element of $U^{(1)}$, $x \in \mathfrak p$, killed by $z \in \mathbb Z_p$. That is, $(1+x)^z=1$. Write $z=p^au$ with $u \in \mathbb Z_p^{\times}$, and let $uv=1$, $v \in \mathbb Z_p^\times$. Then
-$$(1+x)^{{p^a}u}=1 \implies (1+x)^{{p^a}uv}=(1+x)^{p^a}=1 \implies 1+x \in \mu_{p^a}.$$<|endoftext|>
-TITLE: Studying Graduate Level Mathematics Outside Lecture Hours
-QUESTION [13 upvotes]: I am not exactly sure if this should be posted on math.se or academia.se, but I think the question is more mathematical in nature.
-I am a first year graduate student in mathematics, and I would like to think that I am a diligent student at that. My study habit involves reproducing almost all the theorems that we have gone through in class by myself. This has worked really well for me when I was taking undergraduate level mathematics classes. But when I took grad level courses, I realized that reproducing and studying 1.5 hours' worth of lecture was taking me almost a whole day. This kind of freaked me out at first. But I thought I'm just not working hard enough and I am still in the undergraduate to graduate transition. Anyway, I asked my classmates in the courses I'm taking whether they even bother to reprove what we covered in the lectures. To my surprise, most of them told me that they do not. They just use the theorems as they are to answer the exercises/problem sets. Now, I'm thinking that it might actually be a more efficient way of passing the course if you can pull this off. 
You might be able to 'study' a lot more since you can save time; you'll be able to solve more exercises from the textbook, and have more time for problem sets if you do this. But I feel very uncomfortable using theorems that I did not even try to prove for myself (and this takes away the fun of doing mathematics!). Also, dissecting the important proofs that one sees in the lectures tends to give you an idea of the recurring techniques that are likely to appear later.
-This has gone on too long; my question is: should I compromise on reproducing lectures, maybe cut the time I'm spending on reproving the theorems we have already done in class in favor of doing end-of-chapter exercises / problem sets?
-
-REPLY [8 votes]: You need to be able to do all of it really, both the problems and understanding the proofs from classes. You'll just need to prioritize as you go.
-I always went over lecture notes after the class to make sure I understood the proofs and formulations; however, I didn't re-write it all as I was doing this, I would just add some extra explanation at parts where I felt I needed it. What I would do was make sure I understood the lecture notes and then do the problems; then once the whole course had finished I would go right back through and re-write the proofs and notes, but in a condensed form if I could.
-Just to make sure I am being absolutely clear: you definitely DO need to make sure you understand everything that happens in lectures; often, solving problems in maths requires a thorough understanding of the subject. 
I have known plenty of people who just try to apply theorems without understanding the content of them, and it never ends well.<|endoftext|>
-TITLE: Lie derivative along the commutator of two vector fields
-QUESTION [6 upvotes]: I would like to know how to show that the Lie derivative on a differentiable manifold satisfies
-\begin{equation*}
-\mathcal{L}_{[X, Y]} = \mathcal{L}_X \mathcal{L}_Y - \mathcal{L}_Y \mathcal{L}_X
-\end{equation*}
-for any tensor field on which the derivative is applied, where $[X, Y] = XY - YX$ is the commutator of $X$ and $Y$, which are arbitrary vector fields.
-Edit:
-In component notation, $[X, Y]^\mu = X^\lambda \partial_\lambda Y^\mu - Y^\lambda \partial_\lambda X^\mu = X^\lambda \nabla_\lambda Y^\mu - Y^\lambda \nabla_\lambda X^\mu$, where $\partial_\mu$ and $\nabla_\mu$ are respectively the partial and the covariant derivatives with respect to the variable $x^\mu$.
-
-REPLY [6 votes]: One way to prove this result is to first show it for functions on the manifold, and use this to further prove it for vector fields, co-vector fields and finally general tensor fields. The advantage of this approach is that it requires a minimum of formulas.
-We will need the following assumptions:
-(a) The Lie derivative follows the Leibniz rule when acting on a product of objects.
-(b) $ \mathcal{L}_{X} (f) = X(f) $
-Part 1 - Show for a function, f:
-Using (b) we have
-$$ \mathcal{L}_X \mathcal{L}_Y f = \mathcal{L}_X ( Y(f)) = X(Y(f)) $$
-In a coordinate basis this is:
-$$ X^{\rho} \partial_{\rho} (Y^{\sigma} \partial_{\sigma} f) = X^{\rho} ( \partial_{\rho} Y^{\sigma}) (\partial_{\sigma} f) + X^{\rho} Y^{\sigma} (\partial_{\rho} \partial_{\sigma} f) $$
-Because $ \partial_{\rho} \partial_{\sigma} f $ is symmetric in $\rho \leftrightarrow \sigma $, the second term on the RHS is symmetric in $ X \leftrightarrow Y $. 
Therefore:
-$$ \mathcal{L}_X \mathcal{L}_Y f - \mathcal{L}_Y \mathcal{L}_X f = X^{\rho} ( \partial_{\rho} Y^{\sigma}) (\partial_{\sigma} f) - Y^{\rho} ( \partial_{\rho} X^{\sigma}) (\partial_{\sigma} f) = (X^{\rho} ( \partial_{\rho} Y^{\sigma}) - Y^{\rho} ( \partial_{\rho} X^{\sigma}))(\partial_{\sigma} f) $$
-$$ = [X,Y]^{\sigma} \partial_{\sigma} f = [X,Y](f) = \mathcal{L}_{[X,Y]} f $$
-Part 2: Show for a vector field, $ T = V \in TM $
-Using the result from Part 1 we have:
-$$ \mathcal{L}_{[X,Y]} (V(f)) = \mathcal{L}_{X} \mathcal{L}_{Y} (V(f)) - \mathcal{L}_{Y} \mathcal{L}_{X} (V(f)) $$
-Using the Leibniz rule:
-$$ \mathcal{L}_{X} \mathcal{L}_{Y} (V(f)) = \mathcal{L}_{X} (( \mathcal{L}_{Y} V)f) + \mathcal{L}_{X}( V(\mathcal{L}_{Y} f) ) = (\mathcal{L}_{X} \mathcal{L}_{Y} V)(f) + V(\mathcal{L}_{X} \mathcal{L}_{Y}f) + (\mathcal{L}_{X} V)(\mathcal{L}_{Y} f) + (\mathcal{L}_{Y} V)(\mathcal{L}_{X} f) $$
-The last 2 terms taken together are symmetric in $ X \leftrightarrow Y $. We therefore have that
-$$ \mathcal{L}_{[X,Y]} (V(f)) = \mathcal{L}_{X} \mathcal{L}_{Y} (V(f)) - \mathcal{L}_{Y} \mathcal{L}_{X} (V(f)) = (\mathcal{L}_{X} \mathcal{L}_{Y} V - \mathcal{L}_{Y} \mathcal{L}_{X} V)(f) + V(\mathcal{L}_{X} \mathcal{L}_{Y} f - \mathcal{L}_{Y} \mathcal{L}_{X}f) $$
-But we can also apply the Leibniz rule to $\mathcal{L}_{[X,Y]} (V(f))$ to give:
-$$ \mathcal{L}_{[X,Y]} (V(f)) = (\mathcal{L}_{[X,Y]} V)(f) + V(\mathcal{L}_{[X,Y]} f) = (\mathcal{L}_{[X,Y]} V)(f) + V(\mathcal{L}_{X} \mathcal{L}_{Y} f - \mathcal{L}_{Y} \mathcal{L}_{X}f)$$
-Comparing these 2 expressions we get that:
-$$ (\mathcal{L}_{[X,Y]} V)(f) = (\mathcal{L}_{X} \mathcal{L}_{Y} V - \mathcal{L}_{Y} \mathcal{L}_{X} V)(f) $$
-This is true for any function, hence we have shown the identity for a vector field.
-Part 3 - Show for a covector field, $ T = \eta \in T^{*}M $
-We consider the action of $ \mathcal{L}_{[X,Y]} $ on the function $ \eta (V) $ where $ \eta $ is a covector field and V is a vector field. 
This works exactly the same as in Part 2, giving us:
-$$(\mathcal{L}_{[X,Y]} \eta)(V) = (\mathcal{L}_{X} \mathcal{L}_{Y} \eta - \mathcal{L}_{Y} \mathcal{L}_{X} \eta)(V) $$
-This is true for any vector field, hence the result is shown for covector fields.
-Part 4 - General Tensor field, T
-We will now consider the action of $ \mathcal{L}_{[X,Y]} $ on the function $T(X_1,X_2,...,X_r,\eta_1,\eta_2,...,\eta_s)$, where T is an (r,s) rank tensor, $ X_i $ is a vector field, and $ \eta_i $ is a covector field. This step proceeds similarly to before.
-By the result from Part 1:
-$$ \mathcal{L}_{[X,Y]}T(X_1,X_2,...,X_r,\eta_1,\eta_2,...,\eta_s) = (\mathcal{L}_{X} \mathcal{L}_{Y} - \mathcal{L}_{Y} \mathcal{L}_{X})T(X_1,X_2,...,X_r,\eta_1,\eta_2,...,\eta_s) $$
-As in Part 2, $ \mathcal{L}_{X} \mathcal{L}_{Y} $ acting on $ T(X_1,X_2,...,X_r,\eta_1,\eta_2,...,\eta_s) $ gives terms symmetric in $ X \leftrightarrow Y $ when the Lie derivatives hit separate factors. These terms will cancel with terms contained in $ - \mathcal{L}_{Y} \mathcal{L}_{X} T(X_1,X_2,...,X_r,\eta_1,\eta_2,...,\eta_s) $. 
Therefore we are left with:
-$$ (\mathcal{L}_{X} \mathcal{L}_{Y} - \mathcal{L}_{Y} \mathcal{L}_{X})T(X_1,X_2,...,X_r,\eta_1,\eta_2,...,\eta_s) = (\mathcal{L}_{X} \mathcal{L}_{Y}T - \mathcal{L}_{Y} \mathcal{L}_{X}T)(X_1,X_2,...,X_r,\eta_1,\eta_2,...,\eta_s) + T((\mathcal{L}_{X}\mathcal{L}_{Y} - \mathcal{L}_{Y} \mathcal{L}_{X})X_1,X_2,...,X_r,\eta_1,\eta_2,...,\eta_s) + T(X_1,(\mathcal{L}_{X}\mathcal{L}_{Y} - \mathcal{L}_{Y} \mathcal{L}_{X})X_2,...,X_r,\eta_1,\eta_2,...,\eta_s) + \dots $$
-Using the results from Parts 2 and 3 gives:
-$$ \mathcal{L}_{[X,Y]}T(X_1,X_2,...,X_r,\eta_1,\eta_2,...,\eta_s) =(\mathcal{L}_{X} \mathcal{L}_{Y}T - \mathcal{L}_{Y} \mathcal{L}_{X}T)(X_1,X_2,...,X_r,\eta_1,\eta_2,...,\eta_s) + T(\mathcal{L}_{[X,Y]}X_1,X_2,...,X_r,\eta_1,\eta_2,...,\eta_s) + T(X_1,\mathcal{L}_{[X,Y]}X_2,...,X_r,\eta_1,\eta_2,...,\eta_s) + \dots $$
-Lastly, the use of the Leibniz rule gives
-$$ \mathcal{L}_{[X,Y]}T(X_1,X_2,...,X_r,\eta_1,\eta_2,...,\eta_s) = (\mathcal{L}_{[X,Y]}T)(X_1,X_2,...,X_r,\eta_1,\eta_2,...,\eta_s) + T(\mathcal{L}_{[X,Y]}X_1,X_2,...,X_r,\eta_1,\eta_2,...,\eta_s) + T(X_1,\mathcal{L}_{[X,Y]}X_2,...,X_r,\eta_1,\eta_2,...,\eta_s) + \dots $$
-Comparing the two equalities and noting that they are true for any vector fields $ X_1,X_2,...,X_r $ and any covector fields $ \eta_1,\eta_2,...,\eta_s $ proves that the desired result holds for any general tensor field T.<|endoftext|>
-TITLE: Can type theory be viewed as an alternative to model theory?
-QUESTION [5 upvotes]: While type theory certainly has traditionally been used for different purposes than model theory, as noted in this Philosophy SE post, I wonder to what extent type theory could model model theory itself.
-My motivation for this question is based on an observation that we can view the typing relation in type theory as analogous to the "is a model of" relation in model theory. 
For example, one might regard the Sigma type $Magma(A) :\equiv\Sigma_{a:A} A \rightarrow A \rightarrow A$ as the theory of magmas over a type A, and components of said type as models of the theory.
-Now certainly, model theory has more structure to it than a simple "x is a model of y" relation between theories and models of those theories, but I think the idea of interpreting types and terms as a theory/model relation begs the question: Can we develop a richer language in type theory to deal with this types as theories, terms as models interpretation in order to interpret traditional (i.e. ZFC-based) model theory within a type theoretic framework? And, if full interpretation with classical model theory is not possible, is there still some meaningful type-based model theory that we could develop?
-
-REPLY [3 votes]: Denoting $\mathcal{U}$ as a type universe, one usually writes $\mathsf{MagmaStr} : \mathcal{U} \to \mathcal{U}$ for the magma structure over some type. As already noted, the magma structure of $A$ is just a map $A \times A \to A$, so it should be $\mathsf{MagmaStr}(A) = A \times A \to A$.
-For a type theoretical definition of $\mathsf{Magma} : \mathcal{U_1}$ itself I would write $\mathsf{Magma} = \sum_{A : \mathcal{U}} A \times A \to A$; note that this type, the magmas in the universe $\mathcal{U}$, lives at a higher universe level. Then a member of $\mathsf{Magma}$ is a pair of a type $A$ and a binary operator on $A$ living in $\mathcal{U}$. In that way the type $\mathsf{Magma}$ can be seen as a theory and the members of this type as its models.
-Of course the $\mathsf{Magma}$ answer is somewhat simplistic, but the above readily generalizes to, for example, $\mathsf{SemiGroup} = \sum_{G : \mathcal{U}} \sum_{* : G \times G \to G} \prod_{a\, b\, c : G} (a * b) * c = a * (b * c)$, which is a bit more involved, as you can see. 
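As an illustration of the "Sigma type as theory, inhabitants as models" reading, here is a small sketch in Lean 4; the names `MagmaT` and `natAddMagma` are my own, not standard library ones:

```lean
-- The magma structure over a type, as in the answer above.
def MagmaStr (A : Type) : Type := A → A → A

-- The "theory" of magmas: a Sigma type pairing a carrier with a structure.
-- As noted above, it lives one universe level up.
def MagmaT : Type 1 := (A : Type) × MagmaStr A

-- A "model" of the theory: the natural numbers with addition.
def natAddMagma : MagmaT := ⟨Nat, Nat.add⟩
```

The semigroup version would additionally carry the associativity proof as a third component of the pair.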
-
-I think that within this framework a lot of model theory can be done, at least if the type theory is rich enough.<|endoftext|>
-TITLE: Integration using Euler $\int{\sqrt{x^2+2x-1} }/{x}\,dx$
-QUESTION [5 upvotes]: I've just tried to use Euler's substitution for my integral, but I can't get the correct answer. So if anyone could help me I would really appreciate that.
-This is my integral:
-$$\int\frac{\sqrt{x^2+2x-1} }{x}\,dx$$
-P.S. The integral must be solved using Euler's substitution
-This is where I've got stuck:
-I started with this substitution: $\sqrt{x^2+2x-1} = -x + t$.
-After differentiating I get $dx= \frac{t^2 + 2t -1}{2(t+1)^2}\,dt$.
-After substituting this into my integral I get to this point
-$$\int\frac{(t^2+2t-1)(t^2+2t-1)}{(t^2+1)2(t+1)^2}\,dt$$ and I don't have any idea what I should do next (I thought to do another substitution but don't know what to substitute).
-
-REPLY [3 votes]: After we get $$\int\frac{(t^2+2t-1)(t^2+2t-1)}{(t^2+1)2(t+1)^2}\,dt$$
-We can try to transform it into an alternative form. Just let
-$$\frac{(t^2+2t-1)(t^2+2t-1)}{(t^2+1)2(t+1)^2} = a + \frac{b}{t+1} + \frac{ct+d}{(t+1)^2} + \frac{e}{t^2+1}$$
-We can reduce the right-hand side to a common denominator, then compare coefficients.
-Or we can substitute some values of $t$ to get a system of linear equations and solve it. (For convenience, we can substitute $t=0$, $t=1$, $t=2$, $t=3$, $t=4$, and get five equations.)
-After solving, we get:
-$$
-\left\{
-\begin{array}{c}
-a=\frac{1}{2}\\
-b=1\\
-c=0\\
-d=1\\
-e=-2
-\end{array}
-\right.$$
-Then it's transformed into (I did it by WA)
-$$\int\left(\frac{1}{2} + \frac{1}{t+1} + \frac{1}{(t+1)^2} - \frac{2}{t^2+1}\right)dt$$
-And now we can get the answer:
-$$\frac{t}{2} + \ln(t+1) - \frac{1}{t+1} - 2\arctan(t)$$
-It's hard and boring to calculate. But it works(●'◡'●)<|endoftext|>
-TITLE: $\mathbb Q_8 $ as Galois group
-QUESTION [6 upvotes]: Let $\alpha =\sqrt {(2+\sqrt2)(3+\sqrt3)}$ and consider the extension $\mathbb Q(\alpha)/\mathbb Q$. 
Prove that $\mathbb Q(\alpha)/\mathbb Q$ is a Galois extension with Gal$ (\mathbb Q(\alpha)/\mathbb Q) \simeq \mathbb Q_8$.
-I did the basic calculation to find a polynomial which is satisfied by $\alpha$, and it is of the form:
-$x^8 -24 x^6+144x^4-288x^2+144 =0$.
-I am unable to show this is irreducible.
-Thanks for the kind help.
-OK. From this link it is now clear why this extension is Galois. But what about the Galois group?
-
-REPLY [4 votes]: Let $L$ be your degree 8 field, $G = \text{Gal}(L/\mathbb{Q})$, and let $K_2 = \mathbb{Q}(\sqrt 2)$, $K_3 = \mathbb{Q}(\sqrt 3)$, $K_6 = \mathbb{Q}(\sqrt 6)$ be its three quadratic subfields. Up to isomorphism, there are exactly two nonabelian groups of order $8$: the dihedral $D_8$ and the quaternionic $H_8$, characterized by the number of their cyclic subgroups of order 4: $D_8$ admits exactly 1 such subgroup, while for $H_8$ all 3 of its subgroups of order 4 are cyclic. To decide whether $G$ is $D_8$ or $H_8$, you just have to compute the Galois groups $\text{Gal}(L/K_2)$, etc. If lucky, you don't even need to do the complete calculations, because there is a criterion which says that $L/K_i$ is embeddable into a cyclic extension of degree 4 iff $-1$ is a norm in $L/K_i$. My "guess": $G$ is $H_8$.
Isbell exhibited a finite topological space which is a counterexample to the following question:
-Question (Lee Mohler, 1970): Given a topological space $X$ which is the union of closed subspaces $Y$ and $Z$ such that $Y$, $Z$, and $Y \cap Z$ have the fixed point property, does it follow that $X$ has the fixed point property?
-It thus seems natural to ask: what are some other examples of interesting nontrivial counterexamples given by finite topological spaces?
-
-REPLY [3 votes]: The Sierpiński space is the smallest example of a topological space which is neither trivial nor discrete. It is also the smallest Kolmogorov ($T_0$) space that is not Hausdorff.<|endoftext|>
-TITLE: Real part of $(1+2i)^n$
-QUESTION [10 upvotes]: Is it true that for all $n\in \mathbb{N}$, $n\ge 2$ we have
-$$|\textrm{Re}((1+2i)^n)|>1?$$
-I do know de Moivre's Theorem.
-I do not know how to show that $|\sqrt{5}^n\cos(n\arccos\left ( \frac{1}{\sqrt5} \right ))|>1$ because the value $\cos(n\arccos\left ( \frac{1}{\sqrt5} \right ))$ can become (theoretically) arbitrarily small.
-
-REPLY [4 votes]: Hint:
-Both the real and imaginary parts are integers.
-Let $|\Re(1+2i)^n|=1=5^{n/2}|\cos(n\alpha)|$, with $\alpha=\arctan(2)$.
-Then we must have $|\Im(1+2i)^n|=5^{n/2}|\sin(n\alpha)|=5^{n/2}\sqrt{1-5^{-n}}=\sqrt{5^n-1}\in\mathbb N$, i.e. $5^n-1$ is a perfect square. 
Mainly, if I could construct a curve $\gamma:\mathbb R\rightarrow\mathbb R^2$ with $\lim_{x\rightarrow \pm\infty}\gamma(x)=\infty$ in the one point compactification of $\mathbb R^2$ and such that the image of $\gamma$ was disjoint from the image of $f\circ \gamma$, then I could conclude, but it's not obvious to me how to do this (nor whether this is the most elegant approach to take).
-
-REPLY [2 votes]: The answer to your question is negative (even if you assume that $f$ is orientation-preserving; otherwise a glide reflection will yield an easy counterexample). But first, some good news:
-
-By Brouwer's plane translation theorem, for each fixed-point free (I will simply say "free" below) orientation-preserving homeomorphism $h: E^2\to E^2$, and each $p\in E^2$ there exists a (proper) topological embedding $\alpha: {\mathbb R}\to E^2$ such that $\alpha(t)=p$ for some $t$, $h$ preserves the image of $\alpha$ and its action on that image is (topologically) conjugate to a translation.
-
-From this viewpoint, each free orientation-preserving planar homeomorphism "looks like" a translation.
-
-If $f: E^2\to E^2$ is a free homeomorphism with Hausdorff quotient space, then indeed, the quotient is homeomorphic to the annulus or to the Moebius band. This is a simple application of the classification of surfaces: Every noncompact connected surface with infinite cyclic fundamental group is homeomorphic to the annulus or Moebius band.
-However, there are examples of orientation-preserving free planar homeomorphisms whose quotients are non-Hausdorff. The simplest examples I know are time-1 homeomorphisms of the planar Reeb flow. The Reeb foliation of the plane is the more standard object, but there is a (smooth) flow $F_t$ on the plane whose trajectories are the leaves of the Reeb foliation.
-Things, however, can be even worse: there are free planar homeomorphisms which cannot be embedded in any flow. 
-
-See
-Oscillation set of a Brouwer homeomorphism, Reeb homeomorphisms by François Béguin and Frédéric Le Roux
-and
-Flows of flowable Reeb homeomorphisms by Shigenori Matsumoto.<|endoftext|>
-TITLE: Solution for sets of $(x_k-x_l)(x_l-x_m)(x_m-x_k)>0$
-QUESTION [5 upvotes]: Given a set of inequalities like the following:
-$$
-(x_k-x_l)(x_l-x_m)(x_m-x_k)>0,
-$$
-with $x_n\in\mathbb N_0$. These inequalities have solutions when $\{x_k,x_l,x_m\}$ obeys a cyclic ordering like $\{0,1,2\}$, e.g. $(0-1)(1-2)(2-0)=2>0$.
-How to show that a given set of inequalities has or does not have a solution?
-This is easy when all inequalities have independent variables. How entangled can these systems get before it becomes impossible to solve them?
-EDIT: The question is in fact graph-theoretically motivated: How to colour the edges of a $3$-regular simple graph (e.g. below) so that the colours of the edges that meet at a point always obey a cyclic clockwise ordering?
-
-REPLY [2 votes]: The graph-theoretical problem for the cube can be simplified by taking the quotient of the cube by a symmetry subgroup that
-
-preserves the orientation of the cube's surface,
-does not identify edges starting in any vertex.
-
-Then you change equivalent numbers a bit and get a solution.
-For example, first take the symmetry around the center of the cube, see the left image for the quotient space, then take a rotation in the plane of the monitor by $180°$, see the right one.<|endoftext|>
-TITLE: What is the difference between a function and a curve?
-QUESTION [8 upvotes]: Are all curves functions? To my current knowledge a function gives only one output for a given input, so $x\mapsto f(x)$, but for example a circle, which is a curve, is "made" out of two functions; so generally speaking a circle is a curve yet not a function. Can someone provide me a good distinction between them and a good definition of a curve? Thanks! 
-
-REPLY [8 votes]: I think that the most general definition of ''curve'' that corresponds to our intuition is:
-
-A curve is a continuous function $\gamma: I \to X$ where $ I \subset \mathbb{R}$ is an interval and $X$ is a topological space.
-
-So, every curve is a function, but this does not mean that, if $X= \mathbb{R}^2$, then any curve can be expressed as a function $f:\mathbb{R} \to \mathbb{R} \qquad y=f(x)$.
-In this case, as you noticed, a circle is a curve, but there is no single function $f:\mathbb{R} \to \mathbb{R} $ such that the points of the circle are the graph of $f$.
-But note that these points $P=(x,y)$ on the circle can be represented as the image of a function $f:[0,2\pi) \to \mathbb{R}^2$ given by $f(t)=(\cos t,\sin t)$.<|endoftext|>
-TITLE: Finding roots of $\sin(x)=\sin(ax)$ without resorting to complex analysis
-QUESTION [6 upvotes]: If you were given an equation $\sin(x)=\sin(ax)$ (say $a$ is a natural number), how would you go about finding all the roots on $[0,2\pi)$ without delving into complex numbers?
-From a simple geometric analysis it is obvious that solving for $\pi-x=ax$ would yield 4 solutions, and $x=0$ and $x=\pi$ are another 2 obvious solutions. From complex analysis though we know that there could be many roots in this interval depending on $a$. Is there any way to find all these roots using only techniques from real analysis?
-
-REPLY [8 votes]: $$\sin(x) - \sin(ax) = 2\sin\left(\frac{x-ax}{2}\right)\cos\left(\frac{x+ax}{2}\right) = 0 $$ This identity is not difficult to prove.
-Solving $\frac{x-ax}{2} = {\pi}n$ and $\frac{x+ax}{2} = \frac {\pi}{2} + \pi{n}$ so that
-$$x=\frac{2\pi{n}}{1-a} , \frac{2\pi{n} + \pi}{1+a}$$ for $a \neq 1, -1$. 
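As a quick numerical cross-check of the two root families (an illustration; the choice $a=5$ is arbitrary):

```python
import math

# collect all roots of sin(x) = sin(a*x) on [0, 2*pi) from the two families
a = 5
roots = set()
for n in range(-30, 30):
    for x in (2 * math.pi * n / (1 - a), (2 * math.pi * n + math.pi) / (1 + a)):
        if 0 <= x < 2 * math.pi:
            roots.add(round(x, 10))   # dedupe overlapping family members

# every candidate really solves the equation
assert all(abs(math.sin(x) - math.sin(a * x)) < 1e-8 for x in roots)
print(len(roots), sorted(roots)[:3])
```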
The case $a=1$ is trivial, and $a=-1$ yields solutions whenever $\sin$ vanishes, namely $\pi{n}$, which is clear since $\sin$ is an odd function.<|endoftext|>
-TITLE: Help needed in showing $SU(n)$ is a submanifold of $U(n)$
-QUESTION [5 upvotes]: I have proved that $U(n)$, the group of unitary matrices, is a smooth manifold of real dimension $n^2$. Now I am trying to show that $SU(n)$, the group of unitary matrices with determinant 1, is a smooth submanifold of $U(n)$ of real dimension $n^2-1$. However I am stuck at a step:
-
-The idea is to use the theorem: Suppose $f:N^n \to M^m$ is a smooth map between smooth manifolds $N,M$ of dimension $n,m$ respectively. Then the preimage of any regular value $y$ of $f$ is a regular submanifold of $N$ and is of dimension $n-m$. Here, $y$ is a regular value means that $f$ at every point in the preimage of $y$ has rank exactly $m$.
-To use this theorem, I considered the determinant map
-$$ det: U(n) \to S^1:=\{z \in \mathbb{C}:|z|=1\}. $$ Now I want to show that
-$1 \in S^1$ is a regular value of the map $det.$ That is, I have to show that the derivative of $det$ at an arbitrary $g\in det^{-1}(1)=SU(n)$ has rank $1$. For an arbitrary $Y \in Mat_{n \times n}(\mathbb{C})$, I get $$D_gdet(Y)=\lim_{t \to 0}\frac{det(g+tY)-det(g)}{t}=\lim_{t \to 0}\frac{det(g+tY)-1}{t}$$
-I don't know how I should proceed from here. How can I simplify the RHS?
-
-REPLY [5 votes]: I don't know if you're in the context of Lie groups. If you are, you just need to invoke the Closed Subgroup Theorem, which states that every closed subgroup of a Lie group is a Lie group, which also means by definition that it is a submanifold.
-$SU(n)$ is a closed subgroup of $U(n)$, hence a submanifold. 
To see that it is closed, note that $SU(n)=\det^{-1}(\{1\})$ is the preimage of a closed set under the continuous determinant function.<|endoftext|> -TITLE: closed form for $I(n)=\int_0^1\left ( \frac{\pi}{4}-\arctan x \right )^n\frac{1+x}{1-x}\frac{dx}{1+x^2}$ -QUESTION [9 upvotes]: $$I(n)=\int_0^1\left ( \frac{\pi}{4}-\arctan x \right )^n\frac{1+x}{1-x}\frac{dx}{1+x^2}$$ -for $n=1$ I tried to use $\arctan x=u$ -and by noticing that $$\frac{1+\tan u}{1-\tan u}=\cot\left ( \frac{\pi}{4}-u \right )$$ then $\frac{\pi}{4}-u=y$ and got -$$I(1)=\int_0^{\pi/4}y\cot y dy$$ -which is equal to $$I(1)=\frac{1}{8}(4G +\pi \log 2)$$ -where $G$ is Catalan's constant. - -so $$I(n)=\int_0^{\pi/4}x^n\cot x dx$$ -but how to find the closed form for any $n$ using real or complex analysis? -And what about $I(2)$? It seems related to $\zeta(3)$! - -REPLY [8 votes]: For $n=2$ we have, integrating by parts, $$I\left(2\right)=\int_{0}^{\pi/4}x^{2}\cot\left(x\right)dx=\frac{\pi^{2}}{16}\log\left(\frac{1}{\sqrt{2}}\right)-2\int_{0}^{\pi/4}x\log\left(\sin\left(x\right)\right)dx - $$ and now we can use the Fourier series of $\log\left(\sin\left(x\right)\right)$ $$\log\left(\sin\left(x\right)\right)=-\log\left(2\right)-\sum_{k\geq1}\frac{\cos\left(2kx\right)}{k},\,0<x<\pi$$<|endoftext|> -TITLE: prove this inequality $a^n>b^n+c^n$ -QUESTION [5 upvotes]: We know $a,b$ and $c$ are positive and - $a^2=b^2+c^2$ -How can we conclude this inequality: -$a^n>b^n+c^n$ , $n>2$ -I tried the Binomial Theorem but I can't prove this. -Thanks - -REPLY [2 votes]: $a^2=b^2+c^2\iff 1=(\frac ba)^2+(\frac ca)^2$ and clearly $\frac ba <1$ and $\frac ca <1$. Hence $$1=(\frac ba)^2+(\frac ca)^2>(\frac ba)^3+(\frac ca)^3>(\frac ba)^4+(\frac ca)^4>\cdots>(\frac ba)^n+(\frac ca)^n$$ -Thus $$1>(\frac ba)^n+(\frac ca)^n\iff a^n>b^n+c^n$$<|endoftext|> -TITLE: Defining adjoint functors: What does "natural bijection" mean? -QUESTION [6 upvotes]: Take the following definition of adjunction from the nlab - -This definition can be found in numerous other places.
My brain parses this definition perfectly up till the point where it says that for every $c$ and $d$ the hom-sets are naturally isomorphic. At this point it does SCRREEETCH, because, as a beginner learning category theory, natural isomorphisms were only defined between functors as a collection of morphisms in the target category and not between collections of morphisms lying in different categories. -I know that there are other equivalent definitions of adjunctions, but I don't have the time to go through them to see how to make sense of this definition. I just want a clean explanation of how to understand this definition. - -REPLY [4 votes]: Converting my comment to an answer, as suggested: -$\text{Hom}_D(L(c),d)$ and $\text{Hom}_C(c,R(d))$ are both sets, i.e. they're objects of the category $\mathsf{Set}$, which is the target category where the natural isomorphism is taking place. -You wrote: - -My brain parses this definition perfectly up till the point where it says that for every c and d the hom-sets are naturally isomorphic. - -It's not that the $\text{Hom}$ sets are naturally isomorphic - there's just a bijection between them, and these bijections cohere to a natural isomorphism between the $\text{Hom}$ functors.<|endoftext|> -TITLE: Functions satisfying 4 out of 5 inner product properties -QUESTION [9 upvotes]: Let us consider a function $s:K^m \times K^m \to K$ (here $K = \mathbb{R}$ or $K = \mathbb{C}$). If $\forall x, y, z \in K^m, \forall \lambda \in K$ - -$s(x + y, z) = s(x, z) + s(y, z)$ -$s(\lambda x, y) = \lambda s(x, y)$ -$s(y, x) = \overline{s(x, y)}$ -$s(x, x) \geq 0$ -$s(x, x) = 0 \implies x = 0$ - -then $s$ is called an inner product. -Problem. For each $n = 1, 2, 3, 4, 5$ find a function $s$ that doesn't satisfy the $n$-th property and satisfies the remaining four. -First consider $K = \mathbb{R}$. I found the following: -$n = 3, s(x, y) = xy^3$ -$n = 4, s(x, y) = -xy$ -$n = 5, s(x, y) \equiv 0$ -How can I approach $n = 1, 2$?
Perhaps I need to choose $K = \mathbb{C}$ for those? - -Edit: I changed the domain of $s$ from $\mathbb{R} \times \mathbb{R}$ to $K^m \times K^m$ because - -if $\lambda \in \mathbb{C}$ then $\mathbb{R}$ is not closed w.r.t. scalar multiplication and - -if $s: K \times K \to K$ and 2-5 hold then 1 must hold. - -REPLY [3 votes]: Cases $n = 3, 4, 5$ have been shown in the OP. - -Case $n = 1$. -I will show that there are no such functions from $K \times K$ and provide an example for $\mathbb{R}^2 \times \mathbb{R}^2$. -Let $m = 1$. Take property 2 and choose $x = 1$ so $s(\lambda, y) = \lambda s(1, y)$. Denote $s(1, y) = f(y)$ so $s(x, y) = x f(y) \ \forall x, y$. By the symmetry property, $s(x, y) = \overline{y f(x)}$. Hence -$$\frac{f(y)}{\overline{y}} = \frac{\overline{f(x)}}{x} = c = \text{const}$$ -because $x, y$ can be any elements of $K$. -This immediately gives $s(x, y) = c x \overline{y}$. It is easy to see that 3-5 hold, as well, if $c \in \mathbb{R}$ and $c > 0$. Inserting in 1, we see that it holds, as well. -So there are no such functions for $m = 1$. -Let $m = 2$, $K = \mathbb{R}$ and denote $x = (x_1, x_2) \in \mathbb{R}^2$. Let -$$s(x, y) = \sqrt[3]{(x_1 y_1)^3 + (x_2 y_2)^3}.$$ -Obviously, properties 2-5 hold but 1 (additivity) doesn't. - -Case $n = 2$. -If $K = \mathbb{C}$ then we can choose -$$s(x,y) = \overline{x}y$$ -Property 1 holds: $\overline{(x + y)}z = \overline{x}z + \overline{y}z$ -Property 2 doesn't hold if $\text{Im} \lambda \neq 0$: $\overline{\lambda x}y \neq \lambda \overline{x}y$ -Property 3 holds: $\overline{\overline{x}y} = x\overline{y} = \overline{y}x$ -Properties 4, 5 hold: $\overline{x}x = |x|^2 \geq 0$ and $|x|^2 = 0 \implies x = 0$ -Note: for $K = \mathbb{R}$ there are functions that satisfy $f(x + y) = f(x) + f(y)$ but aren't linear (see here). If $h(x)$ is such a function, $s(x,y) = h(x)h(y)$ wouldn't satisfy property 2 and would satisfy properties 1, 3 and 4.
I am unsure how to make it satisfy property 5.<|endoftext|> -TITLE: Is it possible to have a connected manifold that is a double cover of a 2-sphere? -QUESTION [5 upvotes]: I have come up with a branched covering, but it necessarily has two branch points. From that I'm assuming that it can't be done, possibly related to the hairy ball theorem, but I don't know how to prove it either way. -The branched covering is two spheres which both have a great circle segment between two branch points along which they are connected. So if you're traveling on the first sphere and cross that segment, you are now traveling on the second sphere and vice-versa. Of course, at the branch points there is a discontinuity. -Is there a simple way to demonstrate that there is no such manifold? Or is there some construction which avoids the branch points? -I apologize for any misuse of terminology. Corrections welcome. - -REPLY [6 votes]: Because the sphere $S^{2}$ is simply-connected, it admits no connected, non-trivial covering. (If it matters, the trouble isn't related to the hairy ball theorem: The $3$-sphere admits continuous, nowhere-vanishing vector fields, but admits no connected, non-trivial covering, again because $S^{3}$ is simply-connected.)<|endoftext|> -TITLE: Are all computable functions continuous or vice-versa? -QUESTION [5 upvotes]: A famous result in intuitionistic mathematics is that all real-valued total functions are continuous. Since the requirement for a function to be admitted intuitionistically is that it must define a procedure or algorithm, all functions are computable. This seems to suggest that all computable functions are continuous. My questions are: - -Is this true for all total recursive or just primitive recursive functions? -Where can I find such a proof? -Conversely, what are examples of continuous functions that are not computable?
- -By the way, I heard here and here that most intuitionistic formal systems can only prove that there are no real-valued total discontinuous functions, not the stronger positive result that all functions are continuous. - -REPLY [8 votes]: Basically, we might say that computable real numbers are equivalence classes of Cauchy sequences of rationals. To specify a sequence computably, you need a computable function $s$ from natural numbers to rationals. To ensure it is Cauchy, there should be another computable function $N$ such that for all $m$, if $n, n' > N(m)$ then $|s(n) - s(n')| < 1/m$. -One difficulty is that it is impossible in general to compute whether two such real numbers are equal. For example, consider the real number defined as follows. If there is no odd perfect number $k < n$, then $s_n = 0$, else $s_n = 1/k$ where $k$ is the least odd perfect number. -Here we can take $N(m) = m$. This fits the definition above, so it -corresponds to a computable real number, let's call it $b$. But without knowing whether there is an odd perfect number, we can't tell whether $b=0$. -Now a computable function from computable reals to computable reals has to take $s$ and $N$ above, corresponding to real number $x$, as "black boxes" and produce new functions $\tilde{s}$ and $\tilde{N}$ defining another computable real number $f(x)$. Moreover it must do it in such a way that if $s$, $N$ and $s'$, $N'$ define the same real number, $\tilde{s}$ and $\tilde{N}$ define the same real number as -$\tilde{s'}$ and $\tilde{N'}$. But $\tilde{s}(n)$ and $\tilde{N}(m)$ must be obtained by finite computations, in particular only involving finitely many values of $s$ and $N$. - This is the essential reason why -discontinuous functions are not going to work. Suppose $f$ is discontinuous at $0$, so that for some $y < z$ there are some reals arbitrarily close to $0$ with $f$ values $\ge z$ and others $\le y$. 
You can't approximate $f(x)$ arbitrarily well knowing only finitely many values of $s$ and $N$: those values could be consistent with either $f(x) \le y$ or $f(x) \ge z$.<|endoftext|> -TITLE: Motivation of paracompactness -QUESTION [18 upvotes]: "A paracompact space is a topological space in which every open cover admits a locally finite open refinement" is the definition of paracompactness on Wikipedia. -Comparing with the definition of compactness, "a topological space is called compact if each of its open covers has a finite subcover". -On a first reading, the difference you notice is that, to be compact, the space must have a finite subcover for EACH of its open covers; since a finite subcover is in particular a locally finite open refinement, every compact space would be paracompact. -I would like to know what the motivation for that definition is, in what way that concept helps in "extending" the notion of compactness, and some examples in which paracompactness is an important property; if they are examples from differential topology or functional analysis, even better. Thanks in advance. - -REPLY [5 votes]: As Mike Miller said, paracompactness gives you partitions of unity, and anything you might want to do uses partitions of unity. Let me just give a short list of desirable properties a paracompact smooth manifold enjoys. -One can always define a Riemannian metric, and hence an ordinary distance function (non-paracompact spaces are not metrizable!). -Every real vector bundle is isomorphic to its dual (this is essentially the point above). -The definition of integration of compactly supported forms on an oriented manifold requires paracompactness. -Vector bundles over a manifold can be classified: A vector bundle over the manifold corresponds to the pullback of the tautological bundle over some grassmannian by some map. This does not hold for non-paracompact manifolds.<|endoftext|> -TITLE: Mathematicians shocked(?)
to find pattern in prime numbers -QUESTION [79 upvotes]: There is an interesting recent article "Mathematicians shocked to find pattern in "random" prime numbers" in New Scientist. (Don't you love math titles in the popular press? Compare to the source paper's Unexpected Biases in the Distribution of Consecutive Primes.) -To summarize, let $p,q$ be consecutive primes of form $a\pmod {10}$ and $b\pmod {10}$, respectively. In the paper by K. Soundararajan and R. Lemke Oliver, here is the number $N$ (in million units) of such pairs for the first hundred million primes modulo $10$, -$$\begin{array}{|c|c|c|c|c|c|c|c|c|c|c|c|c|} -\hline -&a&b&\color{blue}N&&a&b&\color{blue}N&&a&b&\color{blue}N&&a&b&\color{blue}N\\ -\hline -&1&3&7.43&&3&7&7.04&&7&9&7.43&&9&1&7.99\\ -&1&7&7.50&&3&9&7.50&&7&1&6.37&&9&3&6.37\\ -&1&9&5.44&&3&1&6.01&&7&3&6.76&&9&7&6.01\\ -&1&1&\color{brown}{4.62}&&3&3&\color{brown}{4.44}&&7&7&\color{brown}{4.44}&&9&9&\color{brown}{4.62}\\ -\hline -\text{Total}& & &24.99&& & &24.99&& & &25.00&& & &24.99\\ -\hline -\end{array}$$ -As expected, each class $a$ has a total of $25$ million primes (after rounding). The "shocking" thing, according to the article, is that if the primes were truly random, then it is reasonable to expect that each subclass will have $\color{blue}{N=25/4 = 6.25}$. As the present data shows, this is apparently not the case. -Argument: The disparity seems to make sense. For example, let $p=11$, so $a=1$ . Since $p,q$ are consecutive primes, then, of course, subsequent numbers are not chosen at random. Wouldn't it be more likely the next prime will end in the "closer" $3$ or $7$ such as $q=13$ or $q=17$, rather than looping back to the same end digit, like $q=31$? (I've taken the liberty of re-arranging the table to reflect this.) 
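For what it's worth, the bias is already visible on a much smaller sample; a quick empirical check (a sketch in Python, sieving the primes below $2\cdot10^6$ instead of taking the first hundred million primes):

```python
from collections import Counter

LIMIT = 2_000_000  # a small sample compared with the paper's 10^8 primes
is_prime = bytearray([1]) * LIMIT
is_prime[0] = is_prime[1] = 0
for i in range(2, int(LIMIT ** 0.5) + 1):
    if is_prime[i]:
        is_prime[i * i::i] = bytes(len(range(i * i, LIMIT, i)))
primes = [i for i in range(LIMIT) if is_prime[i]]

# Last-digit pairs (p mod 10, q mod 10) of consecutive primes, skipping 2 and 5
pairs = Counter((p % 10, q % 10) for p, q in zip(primes, primes[1:]) if p > 5)
total = sum(pairs.values())
for a in (1, 3, 7, 9):
    row = {b: round(100 * pairs[(a, b)] / total, 2) for b in (1, 3, 7, 9)}
    print(a, row)

# Repeated last digits occur noticeably less often than the naive 1/16 = 6.25%
assert all(pairs[(d, d)] < total / 16 for d in (1, 3, 7, 9))
```

Even at this scale, every "repeated digit" subclass falls well short of the naive $6.25\%$.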
-However, what is surprising is that the article concludes, and I quote, "...as the primes stretch to infinity, they do eventually shake off the pattern and give the random distribution mathematicians are used to expecting." -Question: What is an effective way to counter the argument given above and come up with the same conclusion as in the article? (Will all the $N$ eventually approach $N\to 6.25$, with the unit suitably adjusted?) Or is the conclusion based on a conjecture that may not be true? -P.S: A more enlightening popular article "Mathematicians Discover Prime Conspiracy". (It turns out the same argument is mentioned there, but with a subtle way to address it.) - -REPLY [4 votes]: If I have read the New Scientist article correctly, the so-called "discrepancy" is: If a prime ends in 1 (as in the first class), the observed probability that the next prime also ends in 1 is not 1/4. This observation can be explained by elementary probability with the assumption that the classes are indeed truly random. I was also troubled by this and I asked it in the math overflow forum: https://mathoverflow.net/questions/234753/article-in-the-new-scientist-on-last-number-of-prime-number. -Let me restate the argument. Let's write the sequence of all numbers ending with 1, 3, 7, 9 (beginning with 7): 7, 9, 11, 13, 17, 19,... and flag each number with a probability of $p$. Let's denote $q=1-p$. Now if a number ending in 1 has been flagged, the probability that the next number being flagged ends in 1 can easily be computed: $\sum_{k=0}^\infty q^{3+4k}p=q^3p\frac{1}{1-q^4}$. That's not 25%! In order to make that more intuitive, suppose that $p$ is close to $1$, that is, we flag each number with high probability (but nevertheless randomly).
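The closed form just derived can be sanity-checked numerically (a sketch in Python):

```python
def repeat_prob(p):
    """Probability that the next flagged number ends in the same digit:
    sum over k >= 0 of q^(3+4k) * p = q^3 * p / (1 - q^4)."""
    q = 1.0 - p
    return q ** 3 * p / (1.0 - q ** 4)

# The closed form agrees with a truncated version of the series
p = 0.3
series = sum((1 - p) ** (3 + 4 * k) * p for k in range(200))
assert abs(series - repeat_prob(p)) < 1e-12

assert repeat_prob(0.99) < 1e-4              # p close to 1: repeats are very rare
assert abs(repeat_prob(1e-6) - 0.25) < 1e-3  # p -> 0: the probability tends to 1/4
```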
If we have flagged a number ending in 1, the probability that the next number being flagged ends in 1 is very small, because we can expect that at least one of the three following numbers (ending in 3, 7, 9) will be flagged (recall that $p$ is close to $1$). -Now this model is oversimplified. The probability that a random number $n$ is prime can be evaluated as $1/\ln(n)$ (not as a constant $p$) by the prime counting function. If we know that the number ends in $1, 3, 7$ or $9$, this probability becomes $\frac{10}{4}\frac{1}{\ln(n)}$ (assuming the classes are random). Because the sequence $q^{3+4k}p$ tends to zero rapidly for $k\rightarrow\infty$, if a number $n$ ending in 1 has been flagged, the probability that the next number being flagged also ends in 1 can be evaluated as $q_n^3p_n\frac{1}{1-q_n^4}$ with $p_n=\frac{10}{4}\frac{1}{\ln(n)}$ and $q_n=1-p_n$. -For $n=100\cdot10^6$, we find 19.8%, which is not much different from the number cited in the article (bear in mind the simplification in my argument; also, the article seems to make the experiment for a random number between 1 and $10\cdot10^6$, not just a number which is approximately equal to $10\cdot10^6$). Moreover as $n\rightarrow\infty$, $p_n\rightarrow 0$ and we can check that $\lim_{p\rightarrow 0}q^3p\frac{1}{1-q^4}=\frac{1}{4}$, which seems to explain that the "discrepancy" vanishes when we take a longer sequence. -There are a lot of mysteries concerning prime numbers but it seems that the one pointed out by the New Scientist is nothing more than a misconception in elementary probability.<|endoftext|> -TITLE: (2016 China team selection Test) with a complex inequality -QUESTION [7 upvotes]: Find the minimum possible value of the positive real number $A$ such that -$$|z_{1}z_{2}+z_{2}z_{3}+z_{3}z_{1}|^2+|z_{1}z_{2}z_{3}|^2 -TITLE: What comes after length, area and volume? -QUESTION [29 upvotes]: The length of the unit segment is 1. - The area of the unit square is 1. - The volume of the unit cube is 1.
- The $\color{red}{???}$ of the unit tesseract is 1. - The $\color{blue}{???}$ of the unit 5-cube is 1. - -So this is a fill-in-the-blanks question, but I've been wondering if there's a common term for this? I'd imagine it'd be "n-volume", but I cannot seem to find any uses of that term, so I'm guessing there is another term? - -REPLY [32 votes]: The term "hypervolume" is in use, but that doesn't tell you the dimension. I don't see any trouble using "$4$-volume" to describe the amount of space the unit tesseract occupies.<|endoftext|> -TITLE: Proving that $2\sqrt 3+3\sqrt[3] 2-1$ is irrational -QUESTION [5 upvotes]: Prove that $2\sqrt 3+3\sqrt[3] 2-1$ is irrational - -My attempt: -$$k=2\sqrt 3+3\sqrt[3] 2-1$$ -Suppose $k\in \mathbb Q$, then $k-1\in \mathbb Q$. -$$2\sqrt 3+3\sqrt[3] 2=p/q$$ -I'm stuck here and don't know how to proceed. I tried to do this: -$$\sqrt 3=\frac{p/q-3\sqrt[3] 2}{2}$$ -contradiction, but I'm not at all sure about that. How should I proceed? - -REPLY [2 votes]: This is an elementary but very good question. We have $$k = 2\sqrt{3} + 3\sqrt[3]{2} - 1$$ so that $$\sqrt[3]{2} = \frac{k + 1 - 2\sqrt{3}}{3}$$ and if $k$ is rational then it means that $\sqrt[3]{2}$ is a quadratic irrationality i.e. it is a root of a quadratic equation with rational coefficients. -Hardy proves in "A Course of Pure Mathematics" that $\sqrt[3]{2}$ is not the root of such a quadratic equation. I show his method here. Let us suppose on the contrary that $\sqrt[3]{2}$ is a root of the equation $$ax^{2} + bx + c = 0\tag{1}$$ where $a \neq 0 \neq c$ and $a, b, c$ are integers.
Now $\sqrt[3]{2}$ is also a root of $$x^{3} - 2 = 0\tag{2}$$ Multiplying $(1)$ by $2$ we get $$2ax^{2} + 2bx + 2c = 0$$ and using $(2)$ we get $$2ax^{2} + 2bx + cx^{3} = 0$$ or $$cx^{2} + 2ax + 2b = 0\tag{3}$$ Multiply $(1)$ by $c$ and $(3)$ by $a$ and subtract to get $$(bc - 2a^{2})x + c^{2} - 2ab = 0$$ Now $x = \sqrt[3]{2}$ is irrational and hence the above equation is possible only when $c^{2} - 2ab = 0$ and $bc - 2a^{2} = 0$. This means that $c^{2} = 2ab$ and $bc = 2a^{2}$ so that $b^{2}c^{2} = 4a^{4}$ or $2ab^{3} = 4a^{4}$ or $b^{3} = 2a^{3}$. This is not possible unless $a = 0, b = 0$ because $\sqrt[3]{2}$ is irrational. We have reached a contradiction and hence $k$ is irrational. -Note: In terms of abstract algebra the above is a very elementary proof that the polynomial $x^{3} - 2$ is irreducible over $\mathbb{Q}$ and $[\mathbb{Q}(\sqrt[3]{2}): \mathbb{Q}] = 3$. A proof more in line with techniques of abstract algebra is done via the concept of GCD. The GCD of the polynomials in $(1)$ and $(2)$ also has $\sqrt[3]{2}$ as its root and since it is irrational it follows that the GCD has degree $2$ so that the polynomial in $(1)$ is the GCD. Hence $ax^{2} + bx + c$ is a factor of $x^{3} - 2$ and thus $x^{3} - 2$ has a linear factor so that $(2)$ has a rational root and thus the contradiction.<|endoftext|> -TITLE: General Theorem About Symmetric Polynomials -QUESTION [10 upvotes]: Let $F$ be a field. It is known that - -1. Every symmetric polynomial in $F[x_1, \ldots, x_n]$ can be expressed as a polynomial in the elementary symmetric polynomials. - -A polynomial $f\in F[x_1, \ldots, x_n]$ is said to be semi-symmetric if $f(x_{\sigma(1)}, \ldots, x_{\sigma(n)})=f(x_1, \ldots, x_n)$ for every even permutation $\sigma\in S_n$. We have - -2.
Every semi-symmetric polynomial in $F[x_1, \ldots, x_n]$ is of the form $f+\delta g$ where $f$ and $g$ are symmetric polynomials and $\delta=\prod_{i<j}(x_i-x_j)$.<|endoftext|> -TITLE: Quadratic Formula, nature of roots with Trigonometric Functions -QUESTION [9 upvotes]: The original problem: - -If $0\le a,b\le 3$ and the equation $$x^2+4+3\cos(ax+b)=2x$$ has at least one real solution, then find the value of $a+b$ - -$$$$ -At first, on rearranging, I got the following expression: -$$x^2-2x+(4+3\cos(ax+b))=0$$ I thought this was a quadratic in $x$, and thus from the quadratic formula (and that at least one real root exists), $D\ge 0$, i.e. $$4-4(4+3\cos(ax+b))\ge 0$$ -$$$$ -However I'm not really sure about this. I've treated $\cos(ax+b)$ as a constant term even though the argument of the cosine includes $x$: the variable of the quadratic. -$$$$Under these circumstances, is it correct to use $3\cos(ax+b)$ as a constant? If not, how could I use the quadratic formula to find values of $x$ satisfying $$x^2-2x+(4+3\cos(ax+b))=0$$ -Many thanks in anticipation! - -REPLY [6 votes]: Completing the square, the equation becomes -$$(x-1)^2=-3(1+\cos(ax+b)).$$ -The left side is nonnegative and the right side is nonpositive, so both sides must be zero, i.e. $x=1$ and $\cos(a+b)=-1$. Since $0\le a+b\le 6$, -$$a+b=\pi.$$<|endoftext|> -TITLE: Maximum of an upper semicontinuous function -QUESTION [6 upvotes]: If $f:\mathbb{R}^N\rightarrow\mathbb{R}$ is a continuous function and $\lim_{|x|\rightarrow \infty}f(x)=-\infty$, then by definition, for every $K>0$ there exists an $M>0$ such that $|x|>M$ implies $f(x)<-K$. Since I want to search for a global maximum, I can search it in $A=\{x\in\mathbb{R}^N : |x|\leq M\}$. It is compact and so I can say that $f$ attains its maximum at some point $x_{0}$. How can I extend this to an upper semicontinuous function? Thank you.
- -REPLY [4 votes]: Recall that a function $f$ is upper-semicontinuous at $x_0 \in \mathbf R^N$ iff -$$ \limsup_{x\to x_0} f(x) \le f(x_0) $$ -Now let $K \subseteq \mathbf R^N$ be a compact subset, $m := \sup_{x \in K} f(x)$. For every $n \in \mathbf N$, choose $x_n \in K$ such that $f(x_n) \ge m - \frac 1n$. As $K$ is compact, some subsequence $(x_{n_k})$ converges, say $x_{n_k} \to x_0$. Then, by semi-continuity, -$$ m \ge f(x_0) \ge \limsup_{x \to x_0} f(x) \ge \limsup_{k \to \infty} f(x_{n_k}) \ge \lim_k m - \frac 1{n_k} = m $$ -Hence $m = f(x_0)$ and $f$ attains its maximum on $K$. Now use the same argument as for continuous functions.<|endoftext|> -TITLE: Isotrivial family: different definitions -QUESTION [6 upvotes]: Let $f:X\to B$ be a flat morphism of varieties over an algebraically closed field $k$. If $f$ is flat with connected fibres we say that $f:X\to B$ is a family over $B$. -In the literature you can find two definitions of "isotriviality" that seem not to be equivalent: - -(Mainly used when $X$ is a surface and $B$ is a curve). $f:X\to B$ is called isotrivial if the smooth fibres of $f$ are isomorphic. -$f:X\to B$ is called isotrivial if there exists a dense open set $U\subseteq B$ such that $f^{-1}(x)\cong f^{-1}(y)$ for every $x,y\in U$. - -Clearly $1)\Rightarrow 2)$ but the converse seems to be false. Am I right? - -REPLY [3 votes]: Yes, 2 is a priori weaker than 1. And the converse implication is false in general. -There are examples of flag varieties specializing to smooth projective horospherical varieties; see Pasquier-Perrin -In particular, there exists a smooth projective morphism $X\to B$ with $B$ a smooth affine connected curve, and a closed point $b$ in $B$ such that $X_s \cong X_t$ for all $s,t\in B\setminus \{b\}$. However, $X_s \cong X_b$ if and only if $b=s$.
-There are easier examples of this phenomenon, but this is the first one that came to mind.<|endoftext|> -TITLE: Compound Interest Formula adding annual contributions -QUESTION [10 upvotes]: I'd like to know the compound interest formula for the following scenario: -P = Initial Amount -i = yearly interest rate -A = yearly contribution or deposit added. -n = the deposits will be made for 10 consecutive years. -F = final amount obtained. -I start with an initial amount, and a yearly interest rate is applied to it. Then, every year a contribution/deposit is made at the end of the period, that is, after the interest is applied to the previous amount. No withdrawals are made. - -REPLY [7 votes]: The final value $F=F'+F''$ is the sum of two components: - -the initial deposit will produce after $n$ years at the interest rate $i$ the future value -$$F'=P(1+i)^n$$ -the periodic payments are an annuity-immediate (made at the end of each contribution period); the future value is -$$ -F''=A\,s_{\overline{n}|i}=A\frac{(1+i)^n-1}{i} -$$ -Thus, the future value $F$ is -$$ -F=P(1+i)^n+A\frac{(1+i)^n-1}{i}=\left(P+\frac{A}{i}\right)(1+i)^n-\frac{A}{i} -$$ -If the additional payments are made at the beginning of each period the annuity is an annuity due and the future value is obtained multiplying by $(1+i) $, that is -$$ -F''=A\frac{(1+i)^n-1}{i}(1+i) -$$ -and then -$$ -F=P(1+i)^n+A\frac{(1+i)^n-1}{i}(1+i) =\left(P+\frac{A}{d}\right)(1+i)^n-\frac{A}{d} -$$ -where $ d=\frac{i}{1+i} $ is the discount rate.<|endoftext|> -TITLE: Automorphisms of rationals -QUESTION [5 upvotes]: Given to me was the following assignment: - -Prove that for each $a\in\mathbb Q^{*}$, the mapping from $\mathbb Q$ to $\mathbb Q$ which sends $x$ to $ax$ is an automorphism of the additive group $(\mathbb{Q},+)$. Vice versa, every automorphism of $(\mathbb{Q},+)$ is of this form. Conclude that $\mathrm{Aut}(\mathbb{Q},+)$ is isomorphic to $\mathbb Q^{*}$.
- -My first problem was that I didn't really 'understand' this problem (still don't). I do understand the definitions. And my idea was that I was asked to basically prove that each automorphism of $(\mathbb{Q},+)$ can be assigned to a unique $a\in\mathbb Q^{*}$? - -REPLY [4 votes]: Let $\phi$ be an automorphism of the additive group $(\mathbb Q,+)$. -Then $\phi(n) = \phi(1+1+\cdots+1)=\phi(1)+\phi(1)+\cdots+\phi(1)=n\phi(1)$. -Thus, $n\phi(m/n)=\phi(m)=m\phi(1)$ and so $\phi(m/n)= (m/n)\phi(1)$ and $\phi$ is determined by $\phi(1)$. -Since $\phi$ is injective and $\phi(0)=0$, we cannot have $\phi(1)=0$.<|endoftext|> -TITLE: Evaluation of $\int\frac{(1+x^2)(2+x^2)}{(x\cos x+\sin x)^4}dx$ -QUESTION [18 upvotes]: Evaluation of $$\int\frac{(1+x^2)(2+x^2)}{(x\cos x+\sin x)^4}dx$$ - -$\bf{My\; Try::}$ We can write $$x\cos x+\sin x= \sqrt{1+x^2}\left\{\frac{x}{\sqrt{1+x^2}}\cdot \cos x+\frac{1}{\sqrt{1+x^2}}\cdot \sin x\right\}$$ -So we get $$(x\cos x+\sin x) = \sqrt{1+x^2}\cos(x-\alpha)\;,$$ where $\displaystyle \alpha = \tan^{-1}\left(\frac{1}{x}\right)$ -So the integral is $$I = \int\frac{(1+x^2)(2+x^2)}{(1+x^2)^2\cdot \cos^4 (x-\alpha)}dx = \int\frac{(2+x^2)}{(1+x^2)}\cdot \sec^4 (x-\alpha)dx$$ -Now how can I proceed from here? Help me. -Thanks - -REPLY [3 votes]: Your integral becomes $$I=\int { \frac { \left( 2+{ x }^{ 2 } \right) }{ \left( 1+{ x }^{ 2 } \right) } { \left( \sec { \left( x-\cot ^{ -1 }{ x } \right) } \right) }^{ 4 } } dx$$ -On substituting $$\left( x-\cot ^{ -1 }{ x } \right) =t$$ -and differentiating, $$dt = \frac { \left( 2+{ x }^{ 2 } \right) }{ \left( 1+{ x }^{ 2 } \right) } dx$$ -Integration becomes $$I=\int { { \sec { \left( t \right) } }^{ 4 } } dt$$ - -Using : $\quad \quad \sec ^{ 2 }{ \left( x \right) } -\tan ^{ 2 }{ \left( x \right) } =1$ - -$$I=\int { { \sec { \left( t \right) } }^{ 2 } } \left( { \tan { \left( t \right) } }^{ 2 }+1 \right) dt$$ -Again substituting $$\tan \left( t \right) = u$$ -$$du = { { \sec { \left( t \right) } }^{ 2 }
}dt$$ -Integration becomes $$I = \int { \left( { u }^{ 2 }+1 \right) du } $$ -$$I=\frac { { u }^{ 3 } }{ 3 } +u+c$$ -Substituting the value of $u$, -$$I = \frac { { \tan { \left( t \right) } }^{ 3 } }{ 3 } +\tan { \left( t \right) } +c$$ -$$I=\frac { { \tan { \left( x-\cot ^{ -1 }{ x } \right) } }^{ 3 } }{ 3 } +\tan { \left( x-\cot ^{ -1 }{ x } \right) } +c$$<|endoftext|> -TITLE: Solving $10x \equiv 1 \pmod{11^2}$ -QUESTION [5 upvotes]: $$10x \equiv 1 \pmod{11^2}$$ -I know $10x \equiv 1 \pmod{11}$. -I have $x \equiv 109 \pmod{11^2}$ but this took me a lot of time to find. -Is there a quicker way? - -REPLY [4 votes]: The systematic way is to use the extended Euclidean algorithm for $\gcd(10,11^2)$ and get the inverse of $10 \bmod 11^2$, which is $109$.<|endoftext|> -TITLE: Constants in a signature -QUESTION [5 upvotes]: This is my first post so I hope it works! -Taking the axioms for a group as an example, the literature defines -a group in (at least) two different ways: -Method 1 - -A signature of $(G,\circ,\,^{-1})$ and axioms - -Associativity. $g_{1}\circ(g_{2}\circ g_{3})=(g_{1}\circ g_{2})\circ g_{3}$. -Identity. $\exists e\in G,e\circ g=g$. -Inverse. $g^{-1}\circ g=e$. - - -Method 2 - -A signature of $(G,\circ,\,^{-1},e)$ and axioms - -Associativity. $g_{1}\circ(g_{2}\circ g_{3})=(g_{1}\circ g_{2})\circ g_{3}$. -Identity. $e\circ g=g$. -Inverse. $g^{-1}\circ g=e$. - - -The difference between the two is the definition of the constant $e$. -So is there any mathematical difference between these methods? I presume -that all theorems provable under method 1 are provable under method -2. Is that correct? If so, in general, is it the case that all constants -can be excluded from signatures and introduced with an existential -clause in an axiom? - -REPLY [2 votes]: As mentioned in Kyle's answer, in the case you specifically asked about, it doesn't make much difference whether you include the identity element in your list of operations.
-However, generally speaking, if you single out a special element of your universe and include it (as a constant, i.e. "nullary", operation) among your set of basic operations, it may turn out to have significant consequences for the equational theory of the algebra. To see why, think about the set of equations you can write down which may or may not hold for this algebra. This set may become larger if you give a name to some "special" element of the universe. -This is most strikingly illustrated in the 1982 paper [1] in which Roger Bryant proves that if you give a name to another element of a group, then the equational theory of the resulting algebra need not be generated by a finite set of equations. In contrast, Oates and Powell in [2] proved that the equational theory of a finite group (without any specially named elements) is "finitely based". -...but let's not dwell on this deep equational stuff which can be very hard to make peace with. -On a much lighter note, there's another way to think about a nullary operation if you're a computer programmer---it's a thunk! That is, it's an element of the universe that is posing as a function. This has important consequences for programming because it can determine when an expression will be evaluated. (Functions and higher types are usually passed around using call-by-name, unlike primitive types which are usually passed call-by-value.) Not sure how helpful it is to make that connection, but anyway I think it's kind of neat. -[1] “The laws of finite pointed groups” R. M. Bryant, Bull. London Math. Soc, 14 (1982), 119-123. -[2] “Identical relations in finite groups,” S. Oates, M. B. Powell, J. Algebra (1965).<|endoftext|> -TITLE: Prime numbers are related by $q=2p+1$ -QUESTION [5 upvotes]: Let primes $p$ and $q$ be related by $q=2p+1$. Prove that there is a positive multiple of $q$ for which the sum of its digits does not exceed $3$. 
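For small pairs $(p,q)$ the claim is easy to confirm by brute force (a quick sketch in Python; the search bound $10^6$ is an arbitrary choice):

```python
def digit_sum(n):
    return sum(int(d) for d in str(n))

def smallest_good_multiple(q, bound=10**6):
    """Smallest positive multiple of q with digit sum at most 3, or None."""
    for k in range(1, bound // q + 1):
        if digit_sum(k * q) <= 3:
            return k * q
    return None

# q = 2p + 1 for the primes p = 2, 3, 5, 11, 23
for q in (5, 7, 11, 23, 47):
    m = smallest_good_multiple(q)
    assert m is not None and m % q == 0 and digit_sum(m) <= 3
    print(q, m)
```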
-My work so far: -$p,q$ primes and $q=2p+1 \Rightarrow \exists n,k \in \mathbb N: n=qk$ and $S(n) \le 3$ -$p=2 \Rightarrow q=5 \Rightarrow 5|10=n; S(10)=1 \le3$. -$p=3 \Rightarrow q=7 \Rightarrow 7|21=n; S(21)=3 \le3$. -$p=5 \Rightarrow q=11 \Rightarrow 11|11=n; S(11)=2 \le3$. -$p=7 \Rightarrow q=15=5 \cdot3 $. -$p=11 \Rightarrow q=23 \Rightarrow 23|n; S(n) \le3$. - -REPLY [3 votes]: (ALMOST an answer) In order for $S(kq)\leq3$, $kq$ must be in one of the following forms: -(1): $$kq=2\cdot10^n+1$$ -(2): $$kq=10^n+1$$ -(3): $$kq=10^n+10^m+1$$, where $n\neq m$ -We will look into the cases one by one and show that there must exist some $n$ or $n,m$ that satisfy at least one of the above cases, following @Peter's direction. - -Case (1): -$2\cdot10^n+1\equiv2p+1$ (mod $q$) -$10^n\equiv p$ -Now let $g$ be a primitive root mod $q$, such that $g^a\equiv10$ and $g^b\equiv p$ (mod $q$), where $1\leq a,b\leq 2p$ -Then the question reduces to whether $g^{an}\equiv g^b$ (mod $q$) is solvable or not. And since the primitive root has order $2p$, -$an\equiv b$ (mod $2p$) -is solvable iff -$b\equiv 0$ (mod $GCD(a,2p)$) -So if $a,2p$ are coprime, there must be an integer $0\leq n\leq 2p$ that satisfies $kq=2\cdot10^n+1$ - -Case (2): -If the above case fails, that means $b\not\equiv 0$ (mod $GCD(a,2p)$) -Then $a$ and $2p$ must not be coprime; since $p$ is a prime, $a=p$ or $a=2$. -$2p\equiv -1\equiv 10^n$ (mod $q$) -Let $g^c\equiv 2$ (mod $q$), $1\leq c\leq 2p$ -We know, -$g^p\equiv g^{b+c}\equiv -1$ -because $(g^p)^2\equiv 1$ (mod $q$) but $p\not\equiv 0$ (mod $2p$) -If $g^p\equiv g^{an}$ is solvable -then -$p\equiv an$ (mod $2p$) has a solution for $n$, and it has a solution iff -$p\equiv 0$ (mod $GCD(a,2p)$) - -Sub-case: $a=p$, -$p\equiv 0$ (mod $a=p$) is trivial -So there must be a solution for $n$ if $a=p$ - -Sub-case: $a=2$, there must be a solution for $n$ if $b,c$ have the same parity.
-In fact, $1 \leq b,c \leq 2p$ implies that $2\leq b+c\leq 4p$,
-so $b+c=p$ or $b+c=3p$,
-because if $b+c=2p$ or $4p$, it would contradict $g^{b+c}\equiv -1$ (mod $q$).
-So $b+c\equiv 1$ (mod $2$); hence in the case $a=2$ there will be no solution in the form of (1) or (2).
-
-Case (3):
-$kq=10^n+10^m+1$
-$2p\equiv 10^n+10^m$ (mod $q$)
-$g^{b+c}\equiv -1\equiv g^{2n}+g^{2m}$
-I can't figure out how to show that this type of equation has a solution $n,m$. If you can prove that there exist solutions, then we have covered all possible cases.
-(Side note: it can be shown that if $a=p$, then there must be no solution in the form $10^n+10^m+1$, since
-$g^{pm}+g^{pn}+1\not\equiv 0$ (mod $q$) because $g^{pm},g^{pn}\equiv \pm 1$)<|endoftext|>
-TITLE: Hamilton paths/cycles in grid graphs
-QUESTION [8 upvotes]: Let G be a grid graph with m rows and n columns, i.e. m = 4, n = 7 is shown here:
-
-For what values of m and n does G have a Hamilton path, and for what values of m and n does G have a Hamilton cycle?
-So far I've figured out that a grid graph always has a Hamilton path, and has a Hamilton cycle when at least one of m or n is even. I'm struggling to provide justification as to why this is true...
-
-REPLY [8 votes]: This is a more formal construction, building off of the answers by Brian M. Scott and David Ongaro.
-The theorem is actually: an n x m grid graph is Hamiltonian if and only if:
-A) m or n is even and m > 1 and n > 1
-or
-B) mn = 1
-There are four parts to the proof.
-Part 1: If either m or n is even, and both m > 1 and n > 1, the graph is Hamiltonian
-This proof is going to be by construction.
-If one of the even sides is of length 2, you can form a ring that reaches all vertices, so the graph is Hamiltonian.
-Otherwise, there exists an even side of length greater than 2. Let's call that direction "vertical" and the other direction "horizontal". Thus, the graph has an even number of rows.
Take one of the vertical edges of the graph and make a starting path connecting all squares on that edge. Then, the ends of the starting path can be extended through the rest of the nodes by creating "switchbacks": paths that run out to the edge of the grid opposite the starting path using one horizontal row of squares and then back to the column adjacent to the starting path using the next horizontal row. A complete switchback will reach all of the nodes in its rows that aren't reached by the starting path and requires 2 rows. Since there are an even number of rows in the graph, we can fill the remainder of the graph with switchbacks and end in a square adjacent to the open end of the starting path. Connecting the final switchback to the open end of the starting path creates a Hamiltonian cycle.
-Ex:
-* - * - * - * - *
-| |
-* * - * - * - *
-| |
-* * - * - * - *
-| |
-* * - * - * - *
-| |
-* * - * - * - *
-| |
-* - * - * - * - *
-
-Part 2: If mn = 1, the graph is Hamiltonian
-If mn = 1, then it is a 1-vertex graph. This is trivially Hamiltonian in that there is a zero length path that visits the vertex. [1]
-Part 3: If m = 1 xor n = 1, the graph is not Hamiltonian
-All Hamiltonian graphs are biconnected. [2]
-If exactly one of the dimensions is 1, then the graph is a line of length at least 2. If the length is 2, then it is a simple graph with 2 vertices. There are no simple 2-node Hamiltonian graphs (OEIS A003216), so this is not Hamiltonian.
-If the length is greater than 2, there must be a central vertex of the graph that can be removed and the graph will become disconnected. Thus, the graph is not biconnected and is therefore not Hamiltonian.
-Part 4: If both m and n are odd and mn != 1, the graph is not Hamiltonian
-Any bipartite graph with unbalanced vertex parity is not Hamiltonian. [3][4]
-You can color the grid like a chess board, such that alternating squares are black and white and no two adjacent squares share a color.
Noting that squares of the grid are vertices of the graph, it follows that the set of black vertices and the set of white vertices are a partition of the graph into 2 sets such that none of the vertices in the same set are adjacent. Thus, the graph is bipartite. Since both of the dimensions are odd, the total number of squares and therefore vertices is odd, so the partitions must have different numbers of vertices. As a result, the graph is bipartite with unbalanced vertices and isn't Hamiltonian.<|endoftext|>
-TITLE: Arnold's proof of Liouville's Theorem on integrable systems
-QUESTION [5 upvotes]: My question happens to be almost identical to the one left unanswered/closed here, which gives a bit of background information - it may not be necessary. I hope the reason it was closed on mathoverflow is not a reason for it to be closed here; please let me know if I should reformulate the question.
-To reiterate, I refer to page 278 of Arnold's book 'Mathematical Methods of classical mechanics', namely the part after Lemma 3 is proved but before section 50 on Action-angle variables. An online version can be referred to here.
-My confusion first occurs when $p$ is stated as being a chart for $\mathbb{T}^k\times\mathbb{R}^{n-k}$, since it is certainly not 1-1 (all $f_i$ are mapped to 0 under it). This leads on to confusion with problem 10 - I simply cannot see a way to prove $\tilde{A}$ is a diffeomorphism, given that neither $p$ nor $g$ are diffeomorphisms. This seems to be the only gap for me at the moment - at first glance I am not sure about Problem 11 either, but will post about that separately. Any help on this would be much appreciated.
-
-REPLY [2 votes]: Ok, I have attempted two proofs, the first of which relies on homogeneous spaces, and the second using universal covers (neither of which I have much experience with, so take some results for granted). Any comments would be really appreciated.
-I) Since $g$ is a transitive group action and $\Gamma$ is the stabilizer of this action (as we've seen, with respect to which $x_0$ is irrelevant), we get a homeomorphism between $M_F$ and $\mathbb{R}^n/\Gamma$. Viewing $\Gamma$ as a closed subgroup of $\mathbb{R}^n$, considered as a Lie group, it follows by Cartan's Closed Subgroup Theorem that $\Gamma$ is a Lie subgroup of $\mathbb{R}^n$. Therefore $\mathbb{R}^n/\Gamma$ is a smooth manifold, and the action $g$ induces a smooth structure on $M_F$, that is, we have a diffeomorphism between $M_F$ and $\mathbb{R}^n/\Gamma$. But $\mathbb{R}^n/\Gamma$ is diffeomorphic to $\mathbb{T}^k\times\mathbb{R}^{n-k}$, since $\mathbb{T}$ is just the quotient $\mathbb{R}/\mathbb{Z}$ and $\Gamma$ is generated by $k$ copies of $\mathbb{Z}$. Since the diffeomorphism between $M_F$ and $\mathbb{T}^k\times\mathbb{R}^{n-k}$ must preserve compactness, it must be the case that we have no $\mathbb{R}$ components. Therefore $k=n$ and $M_F$ is diffeomorphic to $\mathbb{T}^n$.
-II) Alternatively, define the points $f_1,\dots,f_k\in\mathbb{R}^n$ by $f_i=2\pi(0,\dots,0,1,0,\dots,0)$ where the 1 occurs in the $i$'th position - then $p(f_i)=0$ for all $1\leq{i}\leq{k}$. Let $e_1,\dots,e_k\in\Gamma\subset\mathbb{R}^n$ be the generators of the stationary group $\Gamma$, which we know exist by Claim B. Let $A:\mathbb{R}^n\rightarrow\mathbb{R}^n$ be an isomorphism mapping the vector space $\mathbb{R}^n$ with coordinates $(\phi_1,\dots,\phi_k,y_1,\dots,y_{n-k})$ to the vector space $\mathbb{R}^n$ with coordinates $\mathbf{t}=(t_1,\dots,t_n)$, such that each $f_i$ is sent to $e_i$. Note that since $k\leq{n}$, this isomorphism need not be unique. In any case, $A$ preserves the lattice structures between the two copies of $\mathbb{R}^n$, since $A(x+z_1f_1+\dots+z_kf_k)=A(x)+z_1e_1+\dots+z_ke_k$ for $x\in\mathbb{R}^n$, $z_i\in\mathbb{Z}$.
Now, $p$ and $g$ are universal covering maps for $\mathbb{T}^k\times\mathbb{R}^{n-k}$ and $M_F$ respectively, and so $A$ descends to an isomorphism $\tilde{A}$ between $\mathbb{T}^k\times\mathbb{R}^{n-k}$ and $M_F$.
-But since $p$ and $g$ are both charts, they are local diffeomorphisms and hence the descended map $\tilde{A}$ is in fact a diffeomorphism (I think this needs more justification). Then as above, since $M_F$ is compact and diffeomorphisms must preserve compactness, it must be the case that we have no $\mathbb{R}$ components. Therefore $k=n$ and $M_F$ is diffeomorphic to $\mathbb{T}^n$.<|endoftext|>
-TITLE: Extracting the nth digit of pi using Plouffe's formula?
-QUESTION [8 upvotes]: I have come upon the following formula to extract the nth digit of pi in base 10:
- $$\pi + 3 = \sum_{n=1}^{\infty} \frac{n 2^n n!^2}{(2n)!}
-$$
-But this just seems to be a formula for pi. How can I use this formula to extract the nth digit of pi?
-
-REPLY [4 votes]: It is easier to find the $n$-th digit in base $d$ if you have a formula of the form
-$$\sum_k \frac{\textbf{maybe large}}{\textbf{small} \times d^k},$$
-like the BBP formula for $d=16$. Just multiply the formula by $d^n$ and simplify the first $n$ summands as
-$$\frac{(\textbf{maybe large} \times d^{n-k}) \text{ mod } \textbf{small}}{\textbf{small}}.$$
-Then you need to sum up $n$ real numbers between $0$ and $1$ and several of the next terms (until the rest of the series is small enough); see an example for $\ln 2$.
-For the BBP formula or the $\ln 2$ example, the complexity of each step is $O(\log k)$, so the overall complexity is $O(n \log n)$. Your formula is used by Plouffe (difficult to read) with complexity $O(n^3 \log^3 n)$ -- but in any base.
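-For the base-16 BBP formula this scheme fits in a few lines of Python; the following is an illustrative double-precision sketch (function name mine), reliable only for modest $n$:

```python
def pi_hex_digit(n):
    """n-th hexadecimal digit of pi after the point (n >= 1), via the BBP formula."""
    def series(j):
        # fractional part of sum_k 16^(n-1-k) / (8k + j)
        s = 0.0
        for k in range(n):  # the "maybe large" numerators, reduced mod the "small" ones
            s = (s + pow(16, n - 1 - k, 8 * k + j) / (8 * k + j)) % 1.0
        k, term = n, 1.0
        while term > 1e-17:  # a few tail terms, until the rest of the series is negligible
            term = 16.0 ** (n - 1 - k) / (8 * k + j)
            s = (s + term) % 1.0
            k += 1
        return s
    frac = (4 * series(1) - 2 * series(4) - series(5) - series(6)) % 1.0
    return int(16 * frac)

print([pi_hex_digit(i) for i in range(1, 7)])  # pi = 3.243F6A... in base 16
```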
The version by Gourdon is even more sophisticated, with complexity less than $O(n^2)$ in any base, but Proposition 1 from there shows that the idea is as in the answer.<|endoftext|>
-TITLE: Prove that for $n\ge 2$, $2{n \choose 2}+{n \choose 1} = n^2$
-QUESTION [5 upvotes]: Problem : Prove that for $n\ge 2$, $2{n \choose 2}+{n \choose 1} = n^2$
-My Approach : I would assume that we can prove by induction.
-Base case $n=2$.
-$$2{2 \choose 2}+ {2 \choose 1}= (2\cdot 1)+2 =4$$
-$$n^2 =2^2 = 4.$$
-Assume for $n\ge 2$ that $2{n \choose 2}+{n \choose 1} = n^2$ holds for $n \le k$.
-Let $n=k+1$
-$$2{k+1 \choose 2}+{k+1 \choose 1} = (k+1)^2$$
-And, that's as far as I got.
-I get stuck with the ${k+1 \choose 2}$ part.
-
-REPLY [4 votes]: We can also give this a combinatorial interpretation:
-
-$\binom{n}{2}$ is the number of unordered pairs of distinct elements of $\underline{n}$
-So $2\binom{n}{2}$ is the number of ordered pairs of distinct elements of $\underline{n}$
-So $2\binom{n}{2}+\binom{n}{1}$ is the number of ordered pairs of $\underline{n}$
-So $2\binom{n}{2}+\binom{n}{1}=|\underline{n}^2|$
-So $2\binom{n}{2}+\binom{n}{1}=n^2$
-
-With enough care, this can be made into a rigorous proof (these are called "combinatorial proofs").<|endoftext|>
-TITLE: Uniform and Pointwise convergence Almost Everywhere
-QUESTION [5 upvotes]: In class our professor wants us to give an example of a function $f$ on an interval $[a,b]$ and a sequence $\{f_n\}$ converging to $f$ almost uniformly such that there is no set $E$ of measure zero for which the sequence $\{f_n\}$ converges uniformly to $f$ on $[a,b]\backslash E$.
-I am a little confused about what he's asking; does this mean we want a function and sequence that does NOT converge uniformly almost everywhere to $f$? Or do we want a function in which uniform convergence (not just uniform convergence almost everywhere) holds?
-
-REPLY [10 votes]: "almost uniformly" means that for any $\epsilon>0$, there exists a set $E$, $\mu(E)<\epsilon$, such that $f_n\to f$ uniformly on $E^c$. The name may be a little misleading. By Egorov's theorem, if $\mu(X)<\infty$, then a.e. convergence implies almost uniform convergence.
-What you're asked to show is that almost uniform convergence does not imply the stronger uniform convergence almost everywhere.
-A simple example for this is the sequence $f_n=x^n$ on $[0,1]$. This converges pointwise to $0$ on $[0,1)$ and to $1$ at $1$, that is, it converges a.e. to the constant function $f=0$. The sequence clearly converges almost uniformly to $f$, because it converges uniformly to $0$ on every interval of the form $[0,1-\epsilon]$.
-What about uniform convergence except on a set of measure zero?
-Let $E$ have zero measure. Observe that for any $m$, there exists a point in the interval $[1-1/m,1-1/(m+1)]$ not in $E$ (otherwise $E$ would have positive measure). Therefore (using monotonicity of $f_n(x)=x^n$):
-$$\sup_{x \in [0,1]\backslash E} |f_n (x) - f(x)|\ge (1-1/m)^n-0,$$
-for any $m$. Letting $m\to \infty$, we see that
-$$\sup_{x\in [0,1]\backslash E} |f_n (x)-f(x)|\ge 1-0=1.$$
-(It is of course equal to $1$.)
-Thus, no matter which measure zero set $E$ we choose, we will never converge uniformly to the (a.e.) limit $f=0$ outside $E$.<|endoftext|>
-TITLE: Ugly solutions to easily stated problems
-QUESTION [5 upvotes]: I recently saw a very hideous closed form for a quartic equation here:
-Does a closed form solution exist for $x$?
-For fun, I'm wondering about surprisingly ugly solutions / complicated machinery needed for problems that are simply stated.
-Clearly, they all don't have to be algebra-based.
-I'm trying to get a sense of how particular mathematical methods become necessary, and interesting, even for easily stated problems.
-
-REPLY [3 votes]: Classify pairs of commuting matrices up to a change of basis.
-This sounds like an extension of Jordan normal form, since the matrices should be simultaneously Jordan-izable or something.
-In fact it is the canonical example of a "wild" linear algebra problem that is as hard as classifying $k$-tuples of (noncommuting) matrices for all $k$. No reasonable solution is expected to exist.<|endoftext|>
-TITLE: You are dealt five cards from a standard deck. What is the probability that you'll have at most three kings in your hand?
-QUESTION [6 upvotes]: You are dealt five cards from a standard and shuffled deck of playing cards. Note that a standard deck has 52 cards and four of those are kings. What is the probability that you'll have at most three kings in your hand?
-
-I know that the answer is $\frac{54144}{54145}$ from the answer key, and I know that the sample space is ${^{52}\mathrm C_5}$. What I don't get is how to find the event.
-Do I just add the combination for each number of kings together (${^5\mathrm C_2} + {^5\mathrm C_3}$)?
-Or do I need to multiply as well to account for the other cards in the 5-card hand?
-
-REPLY [3 votes]: Yet another way: there are 52 cards in the deck, 4 of which are kings. The probability the first card is a king is 4/52 = 1/13. There are then 51 cards left, 3 of them kings. The probability the second card is a king is 3/51 = 1/17. There are then 50 cards left, 2 of them kings. The probability the third card is a king is 2/50 = 1/25. There are then 49 cards left, 1 of them a king. The probability the fourth card is a king is 1/49. There are no longer any kings in the deck so the probability the fifth card is not a king is 1.
-The probability of getting "four kings and one non-king" in that order is (1/13)(1/17)(1/25)(1/49). There are 5 ways to order "four kings and one non-king" (the non-king being in any of the 5 places) so the probability of "four kings and one non-king" in any order is 5(1/13)(1/17)(1/25)(1/49).
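-The sequential count above can be cross-checked against the binomial-coefficient count; a quick numeric sketch in Python:

```python
from math import comb

# Counting hands: C(4,4) * C(48,1) five-card hands contain all four kings,
# out of C(52,5) equally likely hands.
p_four_kings = comb(4, 4) * comb(48, 1) / comb(52, 5)

# Sequential computation: 5 orderings, each with probability (1/13)(1/17)(1/25)(1/49).
p_sequential = 5 * (1 / 13) * (1 / 17) * (1 / 25) * (1 / 49)

print(abs(p_four_kings - p_sequential) < 1e-15)  # both equal 1/54145
```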
-Finally, the probability of getting "at most three kings in a five card hand" is 1 - 5(1/13)(1/17)(1/25)(1/49).<|endoftext|>
-TITLE: Lie bracket of exact differential one-forms
-QUESTION [10 upvotes]: Let $(M,g)$ be a Riemannian manifold. The musical isomorphisms $^\flat:\chi(M) \to \Omega^1(M)$ and $^\sharp:\Omega^1(M) \to \chi(M)$ allow the space of differential one-forms $\Omega^1(M)$ to be identified with the space of vector fields $\chi(M)$.
-If I'm not mistaken, I can define the Lie bracket of two differential one forms $\alpha,\beta$ by $$[\alpha,\beta] := [\alpha^\sharp, \beta^\sharp]^\flat.$$
-Now suppose that $\alpha$ and $\beta$ are exact; i.e. there exist smooth functions $A,B:M\to \mathbb{R}$ such that $\alpha = dA$ and $\beta = dB$, where $d$ denotes the exterior derivative. Is it necessarily true that $[dA,dB] = 0$?
-Here is the motivation for my question: On a $k$-manifold $M$, the integral curves of $k$ vector fields linearly independent at the point $x \in M$ are the coordinate curves of a local coordinate system centered at $x$ if and only if their pairwise Lie brackets are zero (see, e.g., the discussion here). On a Riemannian manifold, differential one-forms also have integral curves after identifying these one-forms with vector fields. I would like to know if there are "natural" conditions on a collection of $k$ differential one-forms that determine whether their integral curves similarly form the coordinate lines of a coordinate chart.
-
-REPLY [8 votes]: Answer:
-The answer to my question is that $[dA,dB]$ is not necessarily zero. As a counterexample, consider $\mathbb{R}^2$ with the standard inner product and take $A(x,y) = x^2y^2$, $B(x,y) = xy^3$ (I selected these arbitrarily without thinking about them). Then $\nabla A(x,y) = (dA)^\sharp(x,y) = 2xy^2\frac{\partial}{\partial x} + 2x^2y \frac{\partial}{\partial y}$ and $\nabla B(x,y) = (dB)^\sharp(x,y) = y^3 \frac{\partial}{\partial x} + 3xy^2\frac{\partial}{\partial y}$.
-Computing the Lie bracket of $\nabla A$ and $\nabla B$ in coordinates shows that $[dA,dB](x,y) = [\nabla A, \nabla B](x,y) = (-6x^2y^3 - 2y^5)\frac{\partial}{\partial x} + (2xy^4 + 6x^3y^2)\frac{\partial}{\partial y}$ which is not zero at all points.
-Resolving my misunderstanding: The following is an explanation of the misunderstanding which prompted my question. Let $A_1,\ldots,A_k$ be functions on a Riemannian $k$-manifold $(M,g)$ such that the collection of vectors $\{\nabla A_i := (dA_i)^\sharp\}$ are linearly independent at the point $x_0 \in M$. This is true if and only if the collection of forms $\{dA_1,\ldots,dA_k\}$ are themselves linearly independent as linear functionals at $x_0 \in M$. Then the derivative of the map $\varphi: M \to \mathbb{R}^k$, $\varphi: x \mapsto (A_1(x),\ldots,A_k(x))$ is full rank at $x_0 \in M$, so the inverse function theorem guarantees that there exists an open set $U \ni x_0$ such that $\varphi|_U: U \to \varphi(U)$ is a diffeomorphism. Initially, I thought that the pairwise Lie brackets of all of the $\nabla A_i$ must therefore be zero. However, this would only necessarily be true if the integral curves of the $\nabla A_i$ were the coordinate lines determined by the chart $\varphi$. The definition of $\nabla A_i$ relies on the particular metric $g$, and in general $g$ has nothing to do with $\varphi$; the coordinate lines of $\varphi$, which have nothing to do with $g$, are thus not the integral curves of the $\nabla A_i$ in general.<|endoftext|>
-TITLE: Seeking a new, more natural definition of the cartesian product of sets
-QUESTION [5 upvotes]: In "standard" set theory usually we have the definition $(a,b) := \{\{a\}, \{a,b\}\}$, see for example wikipedia for other similar ones. Then if we set for two sets $A,B$
-$$
- A\times B := \{ (a,b) : a \in A, b \in B \}
-$$
-we have $A \times (B \times C) \ne (A \times B) \times C$, just that they are in bijective correspondence (hence isomorphic as sets).
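-The failure of literal associativity is easy to witness concretely; here is a small Python sketch, with frozensets standing in for sets:

```python
def pair(a, b):
    """Kuratowski ordered pair (a, b) = {{a}, {a, b}}, as nested frozensets."""
    return frozenset([frozenset([a]), frozenset([a, b])])

# Elements of (A x B) x C look like pair(pair(a, b), c),
# while elements of A x (B x C) look like pair(a, pair(b, c)).
left = pair(pair(1, 2), 3)
right = pair(1, pair(2, 3))
print(left == right)  # False: equal only up to a bijection, not as sets
```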
If we further define
-$$
- A^0 := \{\emptyset\}, \quad A^1 := A
-$$
-and $A^{n+1} := A^n \times A$, then again, with this definition, we have $A^0 \times B \ne B$, just an isomorphism. But for an isomorphism any singleton set would work, so for example we could equally well define $A^0 := \{ A \}$, or $A^0 := \{ 1 \}$. So what makes the empty set special here?
-One definition that seems to work better might be the definition as functions as seen on wikipedia:infinite products. But for finite index sets $I$ the above one is usually presented. But still with the second definition, what is
-$$
- \prod_{i \in I} X_i \times \prod_{j\in J} X_j?
-$$
-Guess it should be
-$$
- \prod_{i \in I} X_i \times \prod_{j\in J} X_j
- = \{ f : \{1,2\} \to \prod_{i\in I} X_i \cup \prod_{j \in J} X_j \mid
- f(1) \in \prod_{i \in I} X_i, f(2) \in \prod_{j \in J} X_j \}
-$$
-But with this definition again we have $(A\times B) \times C \ne A \times (B \times C)$, as in $(A \times B) \times C$ we have functions $f : \{1,2\} \to (A\times B) \cup C$ and in $A \times (B \times C)$ functions $f : \{1,2\} \to A \cup (B \times C)$. But at least the definition $A^0 = \{ f : \emptyset \to \emptyset \} = \{\emptyset\}$ is natural.
-
-So I do not like all the above constructions. Is there a construction of the cartesian product which gives
-i) associativity,
-ii) $A^0 \times A^n = A^n$
-iii) and a natural choice for $A^0$?
-
-The only one that comes to my mind might be to define equality just up to isomorphism (i.e. define the cartesian product modulo the notion of bijectivity), so then in essence we just compute with representatives all the time (which by the way would raise many questions about well-definedness in most mathematics books where this is taken for granted, for example in multivariable calculus where $\mathbb R^n$ is interpreted as $n$-tuples and all the above is assumed without questioning it; which by the way is one of my motivations to seek an alternative construction).
But is there any more direct construction which does not suffer from the above drawbacks?
-EDIT: Changed $(a,b) := \{a,\{a,b\}\}$ to $(a,b) := \{\{a\}, \{a,b\}\}$; the former might be problematic due to the comments by Brian M. Scott.
-EDIT 2: Changed to "$A^{n+1} := A^n \times A$" (still this is problematic as we do not have associativity); see the comments.
-
-REPLY [3 votes]: The other, "direct" construction is to define $A^n$ to be the set of all functions from $n = \{0,\ldots, n-1\}$ to $A$. This is not an inductive construction; $A^5$ is not defined from $A$ and $A^4$; $A^5$ is defined only from $5$ and $A$.
-Of course, the formal definition of "function" may already require a notion of "ordered pair", and the set of "ordered pairs" in that formal sense will usually not be the same as the set of functions from $2$ to $A$. The formal definition of an "ordered pair" is likely to be something like the definition in the question above.
-But, if we like, we can use a definition of ordered pairs only for the purposes of defining functions, and use the definition of $A^n$ from above to define "Cartesian products" only in terms of functions.
-In this direct construction, if $I$ and $J$ are disjoint, then
-$$
-\left (\prod_{i \in I} A_i \right ) \times \left (\prod_{j \in J} A_j \right )$$
-would be defined as
-$$
-\prod_{k \in I \cup J} A_k,
-$$
-that is, the set of functions $f$ with domain $I \cup J$ such that $f(k) \in A_k$ for $k \in I \cup J$. If $I$ and $J$ are not disjoint, we need to make them disjoint first.
-In the "direct" construction $A^0$ is again $\{\emptyset\}$, and $A^0 \times A^n$ is equal to $A^n$, because it is
-$$
-\prod_{i \in \emptyset} A \times \prod_{i \in n} A = \prod_{i \in \emptyset \cup n} A = \prod_{i \in n} A.
-$$<|endoftext|> -TITLE: Prove this : $\left(a\cos\alpha\right)^n + \left(b\sin\alpha\right)^n = p^n$ -QUESTION [5 upvotes]: I have this question: -If the line $x\cos\alpha + y\sin\alpha = p$ touches the curve $\left(\frac{x}{a}\right)^\frac{n}{n - 1} + \left(\frac{y}{b}\right)^\frac{n}{n - 1} = 1$ -then prove that $\left(a\cos\alpha\right)^n + \left(b\sin\alpha\right)^n = p^n$ -I know that the equation given is an equation of the line in normal form with perpendicular distance $p$ from origin. -Also the slope of given line is $-\cot\alpha$ -and this slope of line will be equal to the slope of the curve. But equating both is not yielding the desired result. -The only little progress I seem to make after substituting $x$ and $y$ from the equation of line to the equation of curve seems futile to prove this further. -I seem to make no further progress in this question. What should I do? - -REPLY [3 votes]: Note the slope of the line is -$$ m_1=-\frac{\cos\alpha}{\sin\alpha} $$ -and the slope of the tangent line of the curve at $(x_0,y_0)$ is -$$ m_2=y'|_{x=x_0}=-\frac{b}{a}\left(\frac{bx_0}{ay_0}\right)^{\frac1{n-1}}.$$ -Thus from $m_1=m_2$ we have -$$ y_0=\frac{b^n\sin^{n-1}\alpha}{a^n\cos^{n-1}\alpha}x_0.$$ -Noting that $(x_0,y_0)$ is on the line, we have -$$ x_0=\frac{pa^n\cos^{n-1}\alpha}{a^n\cos^{n}\alpha+b^n\sin^{n}\alpha}, y_0=\frac{pb^n\sin^{n-1}\alpha}{a^n\cos^{n}\alpha+b^n\sin^{n}\alpha}. $$ -But $(x_0,y_0)$ is on the curve, namely - $\left(\frac{x_0}{a}\right)^\frac{n}{n - 1} + \left(\frac{y_0}{b}\right)^\frac{n}{n - 1} = 1$ -from which we deduce -$$\left(a\cos\alpha\right)^n + \left(b\sin\alpha\right)^n = p^n. $$<|endoftext|> -TITLE: Integral of $x^{\sqrt x}$ -QUESTION [5 upvotes]: I'm trying to find this integral: -$$\int x^\sqrt x \, dx $$ -Wolframalpha gave me an integral. (So it does exist) -I tried integration by parts & tried converting it to $$ e^{\sqrt x \ln(x)} $$ then expanding $e$ by its summation notation. 
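-(As a numeric sanity check, the rewriting $x^{\sqrt x} = e^{\sqrt x \ln x}$ and its partial series sums do agree with the original function; a quick Python sketch:)

```python
import math

def f(x):
    return x ** math.sqrt(x)

def series(x, terms=40):
    # partial sum of exp(sqrt(x) * ln(x)) = sum_k (sqrt(x) * ln(x))^k / k!
    u = math.sqrt(x) * math.log(x)
    return sum(u ** k / math.factorial(k) for k in range(terms))

for x in (0.5, 2.5, 4.0):
    assert abs(f(x) - series(x)) < 1e-9
print("partial sums match x**sqrt(x)")
```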
-
-REPLY [5 votes]: $$x^{\sqrt{x}} = e^{\sqrt{x}\ln(x)} = \sum_{k = 0}^{+\infty} \frac{(\sqrt{x}\ln(x))^k}{k!}$$
-Thence you get
-$$\sum_{k = 0}^{+\infty} \frac{1}{k!} \int \left(\sqrt{x}\ln(x)\right)^k\ \text{d}x$$
-A repeated integration by parts gives:
-$$\int \left(\sqrt{x}\ln(x)\right)^k\ \text{d}x = \Gamma\left[1 + \frac{k}{2},\ -\left(1 + \frac{k}{2}\right)\ln(x)\right]\ln^{1 + k/2}(x)\left(-\left(1 + \frac{k}{2}\right)\ln(x)\right)^{-1 - k/2}$$
-So in the end we have
-$$\sum_{k = 0}^{+\infty} \frac{1}{k!}\left(\Gamma\left[1 + \frac{k}{2},\ -\left(1 + \frac{k}{2}\right)\ln(x)\right]\ln^{1 + k/2}(x)\left(-\left(1 + \frac{k}{2}\right)\ln(x)\right)^{-1 - k/2}\right)$$
-Gamma Function
-More here about the Gamma function
-Incomplete Gamma Function
-More here about the incomplete Gamma function (which is the one used)<|endoftext|>
-TITLE: Fermat's Challenge of composition of numbers
-QUESTION [6 upvotes]: In his letter to Carcavi (August 1659), Fermat mentions the following challenge
-
-There is no number, one less than a multiple of $3$, composed of a
- square and the triple of another square.
-
-He says that he has solved it using infinite descent. The next problem that he proposes in the same letter is also solved by him using infinite descent (which was discovered in his copy of Diophantus' Arithmetica).
-However, this question is not particularly clear to me, as to what is being asked to prove here. Can anyone please restate it using modern notation and provide a proof using infinite descent?
-Thanks in advance.
-
-REPLY [4 votes]: This letter to Carcavi is not preserved; what we do have is a copy of the main part of the letter made, I think, by Huygens. I believe that Huygens misread Fermat's statement, since Fermat would not have included such a trivial statement in a letter that may be seen as his number theoretical testament.
What Fermat probably wrote is that numbers divisible by some prime of the form $3n-1$ cannot be written in the form $x^2 + 3y^2$, and this may be proved by infinite descent in the usual way (see https://mathoverflow.net/questions/88539/sums-of-rational-squares).<|endoftext|>
-TITLE: Why $c(a_1 \ a_2 \dots \ a_k)c^{-1}=(c(a_1) c(a_2)… c(a_k))$?
-QUESTION [6 upvotes]: We investigate on an arbitrary $a_i$ : $c(a_1 \ a_2 \dots \ a_k)c^{-1}(a_i)$.
-First step, $c(a_i)=a_k$. Second step, $(a_1 \ a_2 \dots \ a_k)(a_k)=a_{k+1}$, Third step, $c^{−1}(a_{k+1})=? $.
-Any answer that I read in MSE was not helpful to understand. In the final step, they all imply $c^{−1}(a_{k+1})=c(a_i)$, but why $c^{−1}(a_{k+1})=a_k=(a_1 \ a_2 \dots \ a_k)^{−1}(a_{k+1})$? which may imply $c=(a_1 \ a_2 \dots \ a_k)$ !
-I would appreciate any simple clear detailed explanation.
-
-REPLY [12 votes]: Your question is about permutations and how conjugating a $k$-cycle by a permutation preserves the cycle type.
-Claim: Let $c\in S_n$ and let $(a_1 \; a_2 \; \ldots \; a_k)$ be a $k$-cycle in $S_n$.
-Then
-$$ c (a_1 \; a_2 \; \ldots \; a_k) c^{-1} = (c(a_1) \; c(a_2) \; \ldots \; c(a_k))$$
-Proof: Let's just consider how the left and right hand sides act on $c(a_1)$.
-The right hand side sends $c(a_1)$ to $c(a_2)$.
-$c^{-1}$ sends $c(a_1)$ to $a_1$, which the $k$-cycle sends to $a_2$, which $c$ then sends to $c(a_2)$. Thus the left hand side, as the composition of those three operations, sends $c(a_1)$ to $c(a_2)$.
-We chose $c(a_1)$ without loss of generality so we obtain the same agreement if we pick any $c(a_i)$. If a number in the set $\{ 1,\dots , n \}$ is not in the $k$-cycle, so it is not one of the $a_i$, then it is clearly fixed by both sides.
-So both sides are the same.<|endoftext|>
-TITLE: Let $f(x) =\int^x_1 \frac{\ln t}{1+t}dt ;$ for $x >0$ then find the value of $f(e) +f(\frac{1}{e})$
-QUESTION [5 upvotes]: Problem :
-Let $f(x) =\int^x_1 \frac{\ln t}{1+t}dt ;$ for $x >0$ then find the value of $f(e) +f(\frac{1}{e})$
-please provide some hint on this as I am not getting any clue how to proceed here, thanks
-
-REPLY [2 votes]: Given $$f(x) = \int_{1}^{x}\frac{\ln t}{1+t}dt$$ and $$f\left(\frac{1}{x}\right) = \int_{1}^{\frac{1}{x}}\frac{\ln t}{1+t}dt$$
-So $$F(x) = \int_{1}^{x}\frac{\ln t}{1+t}dt+\int_{1}^{\frac{1}{x}}\frac{\ln t}{1+t}dt$$
-Now, differentiating under the integral sign, we get
-$$F'(x)=\frac{\ln x}{1+x}+\frac{\ln x}{(1+x)x} = \frac{\ln x\cdot (1+x)}{(1+x)\cdot x}$$
-So we get $$F'(x) = \frac{\ln x}{x} \Rightarrow \int F'(x)dx = \int \frac{\ln x}{x}dx$$
-So we get $$F(x) = \frac{(\ln x)^2}{2}+\mathcal{C}$$
-Now at $x=1\;,$ we get $$F(1) = f(1)+f\left(\frac{1}{1}\right)=0$$
-So we get $$F(1)= \frac{(\ln(1))^2}{2}+\mathcal{C}$$
-So we get $\mathcal{C}=0\;,$ and then $$F(x)=f(x)+f\left(\frac{1}{x}\right)=\frac{(\ln x)^2}{2}$$
-So we get $$F(e)=f(e)+f\left(\frac{1}{e}\right)=\frac{(\ln e)^2}{2}=\frac{1}{2}$$<|endoftext|>
-TITLE: Expected value of the smallest eigenvalue
-QUESTION [20 upvotes]: Consider a random $m\times n$ matrix $M$ with elements from $\{-1,1\}$ and $m
-TITLE: Given a field $\mathbb{F}$, what is $\text{Aut}(\mathbb{F}^{\ast})$?
-QUESTION [5 upvotes]: There are many well-known problems and results concerning automorphism groups of fields. For example, it is well-known that the automorphism group of each field in $\{ \mathbb{Q}, \mathbb{F}_{p}, \mathbb{R} \}$ is trivial, and it is well-known that the (known) constructions of the 'wild' automorphisms of the field $\mathbb{C}$ require the axiom of choice (or Zorn's lemma).
-It thus seems natural to consider the 'easier' problem of evaluating the group $\text{Aut}(\mathbb{F}^{\ast})$ of group automorphisms of the underlying multiplicative group $\mathbb{F}^{\ast}$ of $\mathbb{F}$.
-It is clear that the problem of evaluating $\text{Aut}(\mathbb{F}_{p}^{\ast})$ reduces to the closed problem of evaluating the automorphism group of a finite abelian group (see Automorphisms of Finite Abelian Groups), and it is clear that evaluating $\text{Aut}(\mathbb{Q}^{\ast})$ reduces to the problem of evaluating the group of group automorphisms of $C_{2} \oplus \mathbb{Z} \oplus \mathbb{Z} \oplus \cdots$ (see this related discussion). It thus seems natural to ask:
-(1) What is $\text{Aut}\left(\mathbb{R}^{\ast}\right)$?
-(2) What is $\text{Aut}\left(\mathbb{C}^{\ast}\right)$?
-(3) What is $\text{Aut}\left(\mathbb{C}(t)^{\ast}\right)$?
-(4) What is $\text{Aut}\left(\mathbb{Q}_{p}^{\ast}\right)$, where $\mathbb{Q}_{p}$ denotes the field of $p$-adic numbers?
-(5) More generally, given a field $\mathbb{F}$, is there a known way of evaluating $\text{Aut}(\mathbb{F}^{\ast})$?
-
-REPLY [6 votes]: Before you ask what the automorphism groups of these groups are, it's probably a good idea to figure out what these groups themselves are. This is not so bad:
-
-$\mathbb{R}^{\times} \cong C_2 \oplus \mathbb{R}$. This comes from looking at the exponential map from $\mathbb{R}$: the $C_2$ factor is $\pm 1$.
-$\mathbb{C}^{\times} \cong S^1 \oplus \mathbb{R}$. This comes from looking at the exponential map from $\mathbb{C}$.
-$\mathbb{C}(t)^{\times} \cong S^1 \oplus \mathbb{R} \oplus \bigoplus_{\mathbb{C}} \mathbb{Z}$. This comes from looking at constant terms + unique factorization.
-$\mathbb{Q}_p^{\times} \cong C_{p-1} \oplus \mathbb{Z}_p \oplus \mathbb{Z}$. The last term is powers of $p$, the first term is looking at constant terms, and the second term comes from the "exponential" map $n \mapsto (1 + p)^n$ from $\mathbb{Z}_p$.
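-A finite-level shadow of the fourth item can be checked computationally: for odd $p$ the unit group $(\mathbb{Z}/p^k)^{\times}$ is cyclic of order $p^{k-1}(p-1)$, matching the $C_{p-1} \oplus \mathbb{Z}_p$ description. A small Python sketch (helper name mine):

```python
from math import gcd

def multiplicative_order(a, m):
    """Order of a in (Z/m)^x; assumes gcd(a, m) == 1."""
    assert gcd(a, m) == 1
    order, x = 1, a % m
    while x != 1:
        x = (x * a) % m
        order += 1
    return order

p, k = 5, 3
m = p ** k
phi = (p - 1) * p ** (k - 1)  # order of the unit group (Z/p^k)^x
print(multiplicative_order(2, m) == phi)  # 2 generates the cyclic group (Z/125)^x
```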
- -All of these groups have the pleasant property that they split up as a direct sum of torsion and torsion-free parts (although that isn't necessarily the decomposition I wrote above). Whenever an abelian group $A \cong T \oplus F$ has this property, we can write its automorphism group in matrix form as -$$\text{Aut}(A) \cong \text{Aut}(T \oplus F) \cong \left[ \begin{array}{cc} \text{Aut}(T) & \text{Hom}(F, T) \\ 0 & \text{Aut}(F) \end{array} \right].$$ -The point is that there can't be any nontrivial homomorphisms $T \to F$, so every endomorphism has this upper triangular form, and then it's an automorphism iff its diagonal components are. This works more generally whenever we can write an abelian group as a direct sum of two subgroups one of which admits no nontrivial homomorphisms to the other. Now let's compute automorphism groups. -Below it will be very convenient to write $I$ to denote the index set of a basis of $\mathbb{R}$ as a $\mathbb{Q}$-vector space (we need AC for this). Occasionally I will pretend this is also a basis of something like $\mathbb{R} \oplus \mathbb{R}/\mathbb{Q}$ since (again assuming AC) these have the same dimension. - -$\text{Aut}(C_2)$ is trivial, and there aren't any homomorphisms $\mathbb{R} \to C_2$, so we get $\text{Aut}(\mathbb{R})$, the group of automorphisms of an uncountable dimensional $\mathbb{Q}$-vector space. Assuming AC this is some huge group $GL_I(\mathbb{Q})$ of matrices over $\mathbb{Q}$. -Write $\mathbb{R} \cong \mathbb{Q} \oplus \mathbb{R}/\mathbb{Q}$ (both are just $\mathbb{Q}$-vector spaces). Applying the exponential $\exp (ix)$ gives $S^1 \cong \mathbb{Q}/\mathbb{Z} \oplus \mathbb{R}/\mathbb{Q}$, so $\mathbb{C}^{\times}$ is $\mathbb{Q}/\mathbb{Z}$ plus another uncountable dimensional $\mathbb{Q}$-vector space which I will pretend is isomorphic to $\mathbb{R}$ for ease of notation, although I need AC for this. 
$\text{Aut}(\mathbb{Q}/\mathbb{Z})$ turns out to be the group of units of the profinite integers $\widehat{\mathbb{Z}}^{\times} \cong \prod_p \mathbb{Z}_p^{\times}$, and there are lots of homomorphisms from a $\mathbb{Q}$-vector space to $\mathbb{Q}/\mathbb{Z}$. Altogether we get - -$$\left[ \begin{array}{cc} \widehat{\mathbb{Z}}^{\times} & \prod_I \text{Hom}(\mathbb{Q}, \mathbb{Q}/\mathbb{Z}) \\ 0 & GL_I(\mathbb{Q}) \end{array} \right].$$ - -Now in addition to the issues that arose in 2 we also need to figure out the automorphism group of $\mathbb{R}$ plus uncountably many copies of $\mathbb{Z}$. There are no nontrivial homomorphisms from $\mathbb{R}$ to any sum of copies of $\mathbb{Z}$, so this again has an upper triangular form, involving $\text{Aut}(\mathbb{R})$ (big matrices over $\mathbb{Q}$), $\text{Aut}(\bigoplus_{\mathbb{C}} \mathbb{Z})$ (big matrices over $\mathbb{Z})$, and homomorphisms from $\bigoplus_{\mathbb{C}} \mathbb{Z}$ to $\mathbb{R}$, of which there are again a lot. Altogether we get - -$$\left[ \begin{array}{cc} \widehat{\mathbb{Z}}^{\times} & \prod_I \text{Hom}(\mathbb{Q}, \mathbb{Q}/\mathbb{Z}) & \prod_{\mathbb{C}} \mathbb{Q}/\mathbb{Z} \\ - 0 & GL_I(\mathbb{Q}) & \prod_{\mathbb{C}} \mathbb{R} \\ - 0 & 0 & GL_{\mathbb{C}}(\mathbb{Z}) \end{array} \right]$$ - -Now we need to figure out the automorphism group of $\mathbb{Z}_p \oplus \mathbb{Z}$. At this point I have to confess: I just don't know what this is. I know what the automorphisms of $\mathbb{Z}_p$ are as a profinite group, but not as an abstract group. And I don't know what the homomorphisms $\mathbb{Z}_p \to \mathbb{Z}$, again as abstract groups, are.<|endoftext|> -TITLE: Fibration over contractible space is homotopic to a fiber -QUESTION [7 upvotes]: Let $\pi: E \to B$ be a fibration of $E$ over $B$, let $F = \pi^{-1}(b)$ for some $b \in B$ be a representative fiber, and suppose that $B$ is contractible. 
Is it always the case (or are there some nice conditions guaranteeing) that $E$ is homotopic to $F$? -(For those curious, the context is John Milnor's work on the Milnor fibration: given an analytic function $f: \mathbf{C}^m \to \mathbf{C}$ with a singular point $f(0) = 0$ at the origin, the intersection of $f^{-1}(0)$ with a sufficiently small sphere $S_\epsilon^{2m-1}$ about the origin is transverse. Call this intersection $K$; the map $\pi(z) = f(z)/|f(z)|$ gives a locally trivial fibration of $S_\epsilon^{2m-1}-K$ over $S^1$. Milnor seems to rely implicitly on a result similar to the above.) - -REPLY [6 votes]: Yes, it is always the case. If you just need a weak homotopy equivalence, look at the long exact sequence in homotopy: -$$\require{cancel} -\dots \to \cancel{\pi_{n+1}(B)} \to \pi_n(F) \to \pi_n(E) \to \cancel{\pi_n(B)} \to \dots$$ -to see that the inclusion $F \to E$ induces an isomorphism on all homotopy groups. -More generally, if you want a full-on homotopy equivalence, you can use the well-known fact that if $f_0, f_1 : B' \to B$ are homotopic and $E \to B$ is a fibration, then the two fibrations $f_0^* E \to B'$ and $f_1^* E \to B'$ are fibre homotopic. (The proof of this is a bit more technical.) -In the present case, let $f_0 = \operatorname{id}_B : B \to B$ and $f_1 : B \to B$ to be a constant map. Since $B$ is contractible, $f_0 \sim f_1$. Of course $f_0^*E = E$, while $f_1^*E \to B$ is the trivial fibration $B \times F \to B$. Since these two fibrations are fibre homotopy equivalent, their total spaces must be homotopy equivalent, i.e. $E \simeq F \times B$. 
-In fact we even see that there is a stronger result, that there is a commuting diagram where the vertical arrows are homotopy equivalences: -$$\require{AMScd} -\begin{CD} -F @>>> E @>>> B \\ -@V{\sim}VV @V{\sim}VV @V{=}VV \\ -F @>>> B \times F @>>> B -\end{CD}$$<|endoftext|> -TITLE: Surjectivity of linear map between "naive" and "abstract" Zariski tangent spaces -QUESTION [6 upvotes]: I know this is probably a simple problem but I have got myself stuck trying to prove this fact myself. I'd be very grateful if anyone could clear this up for me. -Let $V$ be an affine variety in $\mathbb{A}^n$ (say, over an algebraically closed field $K$) and suppose $P = (0,\dots, 0)\in V$ is the origin of affine space. I am trying to prove there's an isomorphism between the "abstract" Zariski tangent space $T_P (V) = (\mathfrak{m}_P/\mathfrak{m}_P^2)^*$ and the "naive" vector space -$$W = \left\{(\alpha_1, \dots, \alpha_n): \sum_{i=1}^n \alpha_i \frac{\partial f}{\partial x_i} (P)=0, \quad \forall f\in I(V)\right\}$$ -I've shown there's an injection $\phi: W\to T_P (V)$ as follows: for $Q = (\alpha_1, \dots, \alpha_n)\in W$ we get a linear functional $\phi_Q : \mathfrak{m}_P/\mathfrak{m}_P^2 \to K$ via -$$\phi(Q) = \phi_Q = \sum_{i=1}^n \alpha_i \frac{\partial}{\partial x_i}\lvert_P$$ -which corresponds to taking a "weighted total derivative" of an equivalence class of a function $r\in \mathfrak{m}_P$ modulo $\mathfrak{m}_P^2$ i.e. -$$\phi_Q (\bar{r}) = \sum_{i=1}^n \alpha_i \frac{\partial r}{\partial x_i} (P)$$ -where $\bar{r}$ denotes the image of $r$ in the quotient. This is indeed a linear functional on $\mathfrak{m}_P/\mathfrak{m}_P^2$. Moreover the mapping $\phi$ is an injective $K$-linear map $W\to T_P (V)$. What I'm struggling to do is show surjectivity of this map. Every element $g\in T_P (V)$ can be written as -$$ g = \sum_{i=1}^n \alpha_i g_i$$ -where $g_i (\bar{x_j}) = \delta_{i j}$ is one of the dual basis vectors. 
What isn't clear to me is that the coefficients $\alpha_i$ come from some $Q = (\alpha_1, \dots, \alpha_n)\in W$ i.e. that -$$\sum_{i=1}^n g(\bar{x_i}) \frac{\partial f}{\partial x_i} (P) = 0, \quad \forall f\in I(V)$$ -Can anyone explain why this map is surjective? -Edit: I mistakenly forgot to include the condition that the linear combination sums to zero in the definition of the "naive" tangent space in the original post, which caused some confusion. This has been amended above. - -REPLY [5 votes]: The inverse morphism $\psi=\phi^{-1}:(\mathfrak{m}_P/\mathfrak{m}_P^2 )^*\to W$ is defined as follows. -Given a $K$-linear map $t: \mathfrak{m}_P/\mathfrak{m}_P^2 \to K$ we define $\psi(t)= (t(x_1),...,t(x_n))\in W$ -But why is that vector $(t(x_1),...,t(x_n))$ in $W$? -We have to check that for $f\in I(V)$ we have $\sum_{i=1}^n t(x_i) \frac{\partial f}{\partial X_i} (P)=0$ -This is true: write $$f= \sum_{i=1}^n \frac{\partial f}{\partial X_i} (P).X_i+q$$ where $q$ is a polynomial with no constant nor linear term. -Since $f$ becomes zero already in $\mathcal O(V)$, we have a fortiori $f=0\in \mathfrak m_P$ and we get by applying the $K$-linear map $t$ to $f$: $$0=t(f)=\sum_{i=1}^n \frac{\partial f}{\partial X_i} (P).t(x_i) +t(q) $$ However $q\in \mathfrak m^2_P$ forces $t(q)=0$ so that we obtain our crucial relation $\sum_{i=1}^n t(x_i) \frac{\partial f}{\partial X_i} (P)=0$.<|endoftext|> -TITLE: When is $\binom{2n}{n}\cdot \frac{1}{2n}$ an integer? -QUESTION [29 upvotes]: In a recent question here, asking about the number of necklaces of $n$ white and $n$ black beads (reworded in terms of apples and oranges), one of the naive and incorrect answers was that as there are $\binom{2n}{n}$ ways to arrange the beads in a straight line, dividing by $2n$ to account for the symmetry of the circle would correct the count. 
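For the record, the qualifying $n$ are cheap to find by brute force with exact integer arithmetic; this Python sketch reproduces the sequence quoted further down:

```python
from math import comb

# n such that 2n divides C(2n, n), for n = 1, ..., 1000
hits = [n for n in range(1, 1001) if comb(2 * n, n) % (2 * n) == 0]
print(hits[:9])   # [1, 6, 15, 28, 42, 45, 66, 77, 91]
print(len(hits))  # 89 such n up to 1000, matching the list below
```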
-In the case that $p$ is a prime not equal to two, it is clear to see that $\dfrac{(2p)!}{p!p!2p}$ has a factor of $p$ exactly twice in the numerator yet three times in the denominator and is thus not even an integer. -My question: - -For what values of $n$ will $\binom{2n}{n}\frac{1}{2n}$ be an integer? - -A similar question was asked here for the more general case of when $\frac{1}{n}\binom{n}{k}$ is an integer, generating some very nice graphics, and some special cases were mentioned such as when $\gcd(n,k)=1$, however that will never be the case in the special case I ask about (since $\gcd(2n,n)=n\neq 1$ for all $n>1$). Indeed, when looking at the lovely image made by @BrunoJoyal, one notices a black line down the center of the image, which would correspond to those positions I am interested in. - -Wolfram gives us the beginnings of a sequence $1,6,15,28,42,45,66,77,91,\dots$ With the exception of $42$ and $77$ these numbers are the first hexagonal numbers, but that pattern breaks with $120$ not being in the sequence. -A longer list from the comments by @BanachTarski: There are 89 such numbers in the first 1000 natural numbers 1, 6, 15, 28, 42, 45, 66, 77, 91, 110, 126, 140, 153, 156, 170, 187, 190, 204, 209, 210, 220, 228, 231, 238, 266, 276, 299, 308, 312, 315, 322, 325, 330, 345, 378, 414, 420, 429, 435, 440, 442, 450, 459, 460, 468, 476, 483, 493, 496, 510, 527, 551, 558, 561, 570, 580, 589, 600, 609, 620, 651, 665, 682, 684, 696, 703, 740, 744, 748, 770, 777, 806, 812, 814, 851, 861, 868, 888, 902, 920, 924, 936, 943, 946, 950, 962, 966, 988, 989 -Is there anything special we can say about those $n$ for which this is the case? (By imposing the extra condition on $k$ and $n$ in the generalized question, more patterns will hopefully emerge) - -REPLY [2 votes]: Just for the purpose of answering "for what values" part, let's note $$K_n=\binom{2n}{n}\cdot \frac{1}{2n}$$ Then -$$K_n=\frac{1}{2n} \cdot \frac{2n \cdot (2n-1)\cdot ... 
\cdot (n+1)}{1 \cdot 2 \cdot ... \cdot n}=$$ $$\frac{2n-1}{n^2} \cdot \frac{(2n-2)\cdot ... \cdot (n+1) \cdot n}{1 \cdot 2 \cdot ... \cdot (n-1)}=\frac{2n-1}{n^2} \cdot \binom{2(n-1)}{n-1}$$ -From this perspective: $$K_n \in \mathbb{N} \Leftrightarrow n^2 \mid \binom{2(n-1)}{n-1}$$ -because $\gcd(n^2, 2n-1)=1$. -This leads to another observation: -$$2n-2 \equiv -2 \pmod{n}$$ -$$2n-3 \equiv -3 \pmod{n}$$ -$$...$$ -$$n+1=2n-(n-1) \equiv -(n-1) \pmod{n}$$ -So that: $$(2n-2)\cdot ... \cdot (n+1) \equiv (-1)^{n-2} (n-1)! \pmod{n}$$ -Or $$(2n-2)\cdot ... \cdot (n+1)=n \cdot Q + (-1)^{n-2} (n-1)!$$ -And $$K_n=\frac{2n-1}{n^2} \cdot \binom{2(n-1)}{n-1}=\frac{2n-1}{n}\cdot \left ( \frac{n \cdot Q}{(n-1)!} +(-1)^{n-2} \right )$$ -From this, a necessary condition emerges: $$K_n \in \mathbb{N} \Rightarrow (n-1)! \equiv 0 \pmod{n}$$ -(the condition is not sufficient: $7! \equiv 0 \pmod{8}$, yet $K_8 = \binom{16}{8}/16 = 12870/16 \notin \mathbb{N}$). -E.g. $K_6 \in \mathbb{N}$, and indeed $5! \equiv 0 \pmod{6}$ -$K_{15} \in \mathbb{N}$, and indeed $14! \equiv 0 \pmod{15}$ -... -In particular $n$ can not be prime, because according to Wilson's theorem we would then have $$(n-1)! \equiv -1 \pmod{n}$$<|endoftext|> -TITLE: Symbol of differential operator transforms like a cotangent vector -QUESTION [10 upvotes]: Suppose that $D=\sum_{|\alpha| \leq k} A_{\alpha}(x)\frac{\partial^{\alpha}}{\partial x_1^{\alpha_1}...\partial x_n^{\alpha_n}}$ is a differential operator defined on vector valued functions on $\mathbb{R}^n$ (here $A_{\alpha}$ are matrices). Then one can form the expression $\sum_{|\alpha| = k} A_{\alpha}(x)\xi_1^{\alpha_1} \cdot ... \cdot \xi_n^{\alpha_n}$ which is called the principal symbol of $D$. This is fine while we deal with the flat case, i.e. everything takes place on $\mathbb{R}^n$. But differential operators may be defined in the broader context of (say) compact manifolds and arbitrary vector bundles. Then the symbol is defined by a similar procedure, but only locally.
On the global level the cotangent bundle suddenly pops up: I've heard that this follows from the fact that the principal symbol transforms like a $(0,1)$ tensor; more precisely, the variables $\xi$ transform like this. I'm a bit confused about what this means: I know the transformation law for general tensors of type $(p,q)$, but here there are formulas which involve higher order differential operators (not just first order as in standard vector fields). So - -How can one show that the variables $\xi$ transform like a $(0,1)$ tensor? How can it be used to define the symbol as a function on the cotangent bundle? - -There is also a more invariant way of defining the symbol; I asked about this some time ago and I received an answer only to half of my questions (the relevant discussion may be found here Symbol of the differential operator on vector bundles). So I would also like to ask - -Why does this invariant definition, which can be found in this discussion, locally produce the expression $\sum_{|\alpha| = k} A_{\alpha}(x)\xi_1^{\alpha_1} \cdot ... \cdot \xi_n^{\alpha_n}$ - -REPLY [3 votes]: Here's how I like to think about it. If we have a Riemannian structure, we can verify that a linear differential operator $L$ is coordinate-independent by writing it as -$$ Lf = \sum_{0\le i \le k} \langle A_i, \nabla^i f \rangle$$ -where $A_i$ is a symmetric contravariant $i$-tensor, $\nabla^i$ is the iterated covariant derivative and $\langle,\rangle$ is the natural pairing of $(0,i)$-tensors with $(i,0)$-tensors. It then is completely natural to pair $A_k$ with covectors: for $\xi \in \Lambda^1(TM^*)$ we can form the coordinate-independent expression $$\langle A_k, \otimes^k \xi\rangle= A_k^\alpha\xi_{\alpha_1} \cdots \xi_{\alpha_k}.$$ -The Riemannian structure is not necessary here - I just use it because it makes me feel better about higher derivatives.
If you change the Riemannian structure then any change in $\nabla^k f$ will only involve lower derivatives of $f$ and Christoffel symbols, so the principal symbol will be untouched.<|endoftext|> -TITLE: Can a basis for $\mathbb{R}$ be Borel? -QUESTION [12 upvotes]: Work in ZF (so no choice). Then it is consistent that there is no (Hamel) basis for $\mathbb{R}$ as a $\mathbb{Q}$-vector space. My question is about models where $\mathbb{R}$ does have a basis, but choice still fails in terrible ways. Specifically: - -Is it consistent with ZF that there is a basis for $\mathbb{R}$ as a $\mathbb{Q}$-vector space which is Borel? - -(This question arose out of Is there any uncountably infinite set that does not generate the reals?, my answer, and Ricky Demer's and my comments to it.) -Some observations: - -By "Borel" I mean the weak sense of Borel: "in the smallest $\sigma$-algebra containing the open sets." Note that if we use the stronger (and better-behaved) notion "Borel coded", meaning "there is a Borel code for", then the answer to the question is easily no. -How bad can Borel-ness be without choice? Well, it is consistent with ZF that every set of reals is Borel; specifically, it's consistent with ZF that $\mathbb{R}$ is a countable union of countable sets. -However, such models don't seem to answer the question; as far as I know, in such models, $\mathbb{R}$ doesn't have a basis. -So the question becomes: can we on the one hand "stretch" the notion of Borel-ness by killing choice very badly, while preserving the existence of a basis in the first place (so forcing us to not kill choice too badly)? - -NOTE: This question is related in spirit (if not content) to my previous question Spanning the reals with a small set - choicelessly. Specifically, I mention this because I think techniques useful to one may be useful to the other. - -REPLY [4 votes]: Well, if you assume a little bit of choice($\mathsf{DC}$), the answer is no, because of the following argument. 
-So suppose $B$ is a Hamel basis of $\Bbb R$ over $\Bbb Q$ that also happens to be a Borel set. For each $n\geq 1$, and each $\bar q=(q_1,\ldots,q_n)\in\Bbb Q^n,$ consider the set -$$B_{\bar q}:=q_1B+\cdots+q_nB,$$ -then $B_{\bar q}$ is an analytic set, as it is the direct image of the Borel set $B^n$ via the continuous function $\Bbb R^n\rightarrow \Bbb R$ given by $(x_1,\ldots,x_n)\mapsto q_1x_1+\cdots+ q_nx_n$. We have -$$\Bbb R=\bigcup_{\bar q\in\Bbb Q^{<\omega}}B_{\bar q}.$$ -Thus, as this is a countable union and analytic sets are Lebesgue measurable, we must have that $\mu(B_{\bar q})>0$ for some $\bar q\in\Bbb Q^{<\omega}$. Because of this result we know $$B_{\bar q\frown\bar q}=B_{\bar q}+B_{\bar q}$$ -contains an open interval. We may assume this open interval contains $0$ using a function $\Bbb R\rightarrow\Bbb R$ of the form $x\mapsto x+a$, so that -$$\bigcup_{n\geq 1}nB_{\bar q\frown\bar q}=\Bbb R,$$ -which implies that any $x\in\Bbb R$ is of the form $$m_1q_1x_1+\cdots+m_nq_nx_n+m_1'q_1y_1+\cdots+m_n'q_ny_n,$$ -for some $x_1,\ldots,x_n,y_1,\ldots,y_n\in B$ and some $m_1,\ldots,m_n,m_1',\ldots,m_n'\in\Bbb N$, and this is a contradiction, as $\Bbb Q$ is not a finitely generated abelian group. Therefore there cannot exist such a set $B$. - -In the presence of large cardinals, and using the axiom of choice, there cannot even exist Hamel bases that are projective. -If there is for instance a supercompact cardinal, there exists an elementary embedding -$$j:L(\Bbb R)\rightarrow L(\Bbb R)^{Col(\omega,<\kappa)},$$ -whenever $\kappa$ is a supercompact cardinal; this is a theorem you can find in Woodin's article Supercompact cardinals, sets of reals, and weakly homogeneous trees. Such embeddings can be obtained from much weaker assumptions. -Here's why: -We can take $L(\Bbb R)^{Col(\omega,<\kappa)}$ to be Solovay's model, as $\kappa$ is inaccessible.
Because of this MO answer there is no Hamel basis of $\Bbb R$ as a $\Bbb Q$-vector space in $L(\Bbb R)^{Col(\omega,<\kappa)}$. Hence for any $X\in L(\Bbb R)$ we have that in $L(\Bbb R)$, $X$ is not a Hamel basis of $\Bbb R$ over $\Bbb Q$, using the elementarity of $j$. -But $\Bbb R\subset L(\Bbb R)$ and $\Bbb R\in L(\Bbb R)$, hence $X$ is a Hamel basis of $\Bbb R$ over $\Bbb Q$ in $L(\Bbb R)$ if and only if it is in $V$. However all projective sets belong to $L(\Bbb R)$, thus no Hamel basis can be projective.<|endoftext|> -TITLE: Chern character in odd K-theory -QUESTION [8 upvotes]: I'm familiar with the definition of the Chern character for a vector bundle. This leads to the definition of the Chern character for $K$-theory (the even theory) with values in even cohomology (the definition involves Chern classes). How is the Chern character for odd $K$-theory defined? It should land in odd cohomology, so I don't know what the natural candidate is. - -REPLY [3 votes]: I think it is something like this: Recall that $K^{-1}(X):=\overline{K}(S(X_+))\cong\overline {K^0}(S(X))\oplus \overline{K^0}(S^1)\cong \overline {K^0}(S(X))$. Hence we can define a Chern character (cohomology is taken with rational coefficients) -$$ -K^{-1}(X)\otimes\mathbb{Q}\cong\overline{K^0}(SX)\otimes\mathbb{Q}\rightarrow \overline{H^{\mathrm{ev}}}(SX). -$$ -Using long exact sequences in cohomology, realize that $\overline{H^{\mathrm{ev}}}(SX)=H^{\mathrm{odd}}(X)$. Compose the Chern character above with the map $\overline{H^{\mathrm{ev}}}(SX)\rightarrow H^{\mathrm{odd}}(X)$, to obtain a map $K^{-1}(X)\otimes\mathbb{Q}\rightarrow H^\mathrm{odd}(X)$.<|endoftext|> -TITLE: Why is $\mathbb R^n$ so important in algebraic topology? -QUESTION [6 upvotes]: If you study the different tools from algebraic topology, you realise that most (if not all) of them are somehow meant to compare the space with Euclidean space: - -Homotopy theory deals with maps from $S^n\subset\mathbb R^{n+1}$ into the space.
-Singular homology deals with maps from $\Delta^n\subset\mathbb R^{n+1}$ into the space. -Simplicial homology only studies spaces built from simplices (which are subspaces of Euclidean space) -Cellular homology does the same with $n$-cells. - -All of these tools are excellent for studying spaces that are built up from Euclidean space using the operations from point set topology, identifying and crossing stuff here and there. But you would think that these spaces form an extremely small corner of the entire category of topological spaces. I would assume that there are many other (possibly not very “interesting,” from a concrete point of view) spaces that are very different from Euclidean ones and which are extremely hard to compare to them. So why does it seem that the entire toolbox from algebraic topology is only concerned with Euclidean-like spaces? I would assume that the explanation is some combination of the following, mainly the last one: - -Euclidean spaces have some (to me unknown) universal property in the category of topological spaces that makes them very distinguished objects, a natural choice to compare others to; or -Most of the concrete problems that we want to solve are naturally occurring in Euclidean space or something very similar. - -Can one or both of these explain this issue? - -REPLY [8 votes]: My algebraic topology professor had a slogan: "Point set topology looks at 'simple' properties of 'exotic' spaces whereas algebraic topology looks at 'exotic' properties of 'simple' spaces." -General topology lays out things like connected, path connected, locally connected, etc. Then you start to look at how these properties are related. You ask questions like: "If I'm sequentially compact, am I also compact? pseudocompact?" "What's a really weird space which has properties 4 and 6 but not 1 or 5." -Algebraic topology is trying to get at more fine tuned data. So we work in an "easy" setting: I'm connected, compact, ... heck, I'm a compact manifold. 
How can you tell me apart from this other nice space? I need some really finely tuned properties to do this. In comes homology / cohomology / homotopy etc. -When it comes down to it, the "nice" spaces like a torus or sphere are already "hard enough" to deal with, so we aren't as concerned with more difficult things. -Think of general topology like using a telescope and algebraic topology like using a microscope. You aren't going to begin by pointing your microscope up at the sky. Point it at your desk first. -Alternatively, we care a lot about $\mathbb{R}^n$-related spaces because that's where the lion's share of applications lie. :)<|endoftext|> -TITLE: Is linear algebra more “fully understood” than other maths disciplines? -QUESTION [79 upvotes]: In a recent question, it was discussed how LA is a foundation to other branches of mathematics, be they pure or applied. One answer argued that linear problems are fully understood, and hence a natural target to reduce pretty much anything to. -Now, it's evident enough that such a linearisation, if possible, tends to make hard things easier. Find a Hilbert space hidden in your domain, obtain an orthonormal basis, and bam, any point/state can be described as a mere sequence of numbers, any mapping boils down to a matrix; we have some good theorems for existence of inverses / eigenvectors / exponential objects / etc. -So, LA sure is convenient. -OTOH, it seems unlikely that any nontrivial mathematical system could ever be said to be thoroughly understood. Can't we always find new questions within any such framework that haven't been answered yet? I'm not firm enough with Gödel's incompleteness theorems to judge whether they are relevant here. The first incompleteness theorem says that discrete disciplines like number theory can't be both complete and consistent. Surely this is all the more true for e.g. topology.
-Is LA for some reason exempt from such arguments, or does it for some other reason deserve to be called the best understood branch of mathematics? - -REPLY [13 votes]: This also depends a lot on what you actually need. As other answerers have pointed out, first of all you need to specify what linear algebra is all about. I define linear algebra as the study of finite dimensional vector spaces and their linear transformations; in particular it contains most of matrix theory. -As has been pointed out by many, many natural questions in linear algebra have been solved for many years. Classifications are often easy and can be explained in a few lectures - in particular the Jordan normal form, diagonalisability, perturbation theory are more or less completely understood. Furthermore, it is easy to construct an example and examine it with a computer (most of what computers are good at is linear algebra). In this sense, the field has been "solved", because if we have a specific linear map, we can answer just about any question we have about it. The problems begin once we have infinitely many matrices. -However, when you want to apply linear algebra to other fields, the picture is very different. Most of the questions I encounter are hard and their solutions are difficult. Given a certain set of matrices that crops up in some physics problem, can I bound the second largest singular value? This might be very hard. Many fields in applied mathematics suffer from this problem - one very prominent example is signal reconstruction (compressed sensing was only discovered recently. Many questions in this field are answered using random matrix theory - but from what I gather that is only because linear algebra is not developed enough to deal with their questions). -A bit more concretely: -Consider for example the following very natural question: - -Given two Hermitian matrices $A$ and $B$ with given spectra. What are the possibilities for the spectrum of $A+B$?
- -This is known as Horn's problem and was only solved in this century (see also this MO question). It also took some of the smartest mathematicians to finally solve the problem. -Also consider the following problem: - -Define a Hadamard matrix as a square $n\times n$ matrix with entries in $\{\pm 1\}$ and mutually orthogonal rows. In what dimensions does such a matrix exist? - -Hadamard matrices have been studied for centuries - yet this question is still not completely solved. The problem may seem artificial at first, but it would have many interesting applications in information theory. -Here is a list of many open problems in matrix theory - many of which a lot of smart people have worked on: Open problems in matrix theory -It gets even worse if you also consider tensor products. While you could say that this is in principle "multilinear algebra", it can also be considered as a subset of matrix theory and would ordinarily be taught in linear algebra courses. Tensor products and questions such as the tensor rank are notoriously difficult - even computation only gets you so far, as many problems are NP-complete or even undecidable in general (but might be very much decidable for interesting subclasses). -Another branch of mathematics that I would consider linear algebra concerns linear maps between matrices. Since $n\times n$ matrices are themselves a vector space, such maps can also be represented as matrices. This class is particularly important for quantum mechanics and contains a lot of hard questions.
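As a concrete aside on the Hadamard question above: for $n = 2^k$ existence is classical via Sylvester's doubling $H \mapsto \begin{pmatrix} H & H \\ H & -H \end{pmatrix}$. A short Python sketch (pure lists, no external libraries) that builds these matrices and verifies the defining property $HH^T = nI$:

```python
def sylvester(k):
    """Hadamard matrix of order 2**k via the doubling H -> [[H, H], [H, -H]]."""
    H = [[1]]
    for _ in range(k):
        top = [row + row for row in H]
        bottom = [row + [-x for x in row] for row in H]
        H = top + bottom
    return H

def is_hadamard(H):
    """Check entries are +-1 and rows are mutually orthogonal (H H^T = n I)."""
    n = len(H)
    if any(x not in (1, -1) for row in H for x in row):
        return False
    gram = [[sum(a * b for a, b in zip(r, s)) for s in H] for r in H]
    return all(gram[i][j] == (n if i == j else 0)
               for i in range(n) for j in range(n))

assert all(is_hadamard(sylvester(k)) for k in range(6))  # orders 1, 2, 4, ..., 32
```

Beyond the powers of two, existence is conjectured for every multiple of 4 but, as the answer says, remains open in general.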
-Nevertheless, I do agree that we probably understand much more about linear algebra, and we have a much wider variety of tools available, than in all other areas of mathematics.<|endoftext|> -TITLE: Hermite Polynomials: Rodrigues to Integral Representation -QUESTION [5 upvotes]: I would like to go from this representation of the Hermite polynomials: -$$H_n(z)=(-1)^ne^{z^2}\frac{d^n}{dz^n}e^{-z^2} \tag{1}$$ -to this representation -$$H_n(z)=\frac{2^n}{\sqrt{\pi}}\int_{-\infty}^{\infty}(z+is)^n\,e^{-s^2}\,ds \tag{2}$$ -I have tried to use Cauchy's theorem to give the $n^{\textrm{th}}$ derivative term in terms of a contour integral, and then contort the integrand and contour so that we're left with an integral over the real line, but I just can't find a way to do it. I've also read this question, but all of the answers go through a different representation/route than the one I desire. Could you help me out? -EDIT: To shed some light on what exactly I desire, I will present the first few steps of my attempted path. For an analytic function $f(z)$, we can express its $n^{\textrm{th}}$ derivative by a contour integral. -$$\frac{d^n}{dz^n}f(z)=\frac{n!}{2\pi i}\oint_{\Gamma} \frac{f(w)}{(w-z)^{n+1}} dw $$ -where $\Gamma$ is a simple closed curve that encircles the point $z$ in the complex plane. Using this we can rewrite $(1)$ as -$$H_n(z)=(-1)^ne^{z^2}\frac{n!}{2\pi i}\oint_{\Gamma} \frac{e^{-w^2}}{(w-z)^{n+1}} dw$$ -I don't know where to go from here. If I make the substitution $s=w-z$, I end up going in a circle showing that $H_n(z)=H_n(z)$. Moreover, I can't find a simple way to reduce that integral to an integral over $\mathbb{R}$ (due to the Gaussian term in the integral).
-REPLY [5 votes]: Note that we can write $e^{-z^2}$ as the integral -$$e^{-z^2}=\frac{1}{\sqrt{\pi}}\int_{-\infty}^\infty e^{-s(s+2iz)}\,ds \tag 1$$ -Taking the $n$'th derivative of $(1)$ with respect to $z$ yields -$$\begin{align} -\frac{d^n}{dz^n}\left(e^{-z^2}\right)&=\frac{1}{\sqrt{\pi}}\int_{-\infty}^\infty (-i2s)^n\,e^{-s(s+2iz)}\,ds\\\\ -&=\frac{2^n}{\sqrt{\pi}}e^{-z^2}\int_{-\infty}^\infty (-is)^n\,e^{-(s+iz)^2}\,ds\\\\ -&=\frac{2^n}{\sqrt{\pi}}e^{-z^2}\int_{-\infty}^\infty (-i(s-iz))^n\,e^{-s^2}\,ds\\\\ -&=\frac{(-1)^n\,2^n}{\sqrt{\pi}}e^{-z^2}\int_{-\infty}^\infty (z+is)^n\,e^{-s^2}\,ds\tag 2 -\end{align}$$ -Multiplying both sides of $(2)$ by $(-1)^n\,e^{z^2}$ reveals -$$\begin{align} -H_n(z)&=(-1)^n\,e^{z^2}\,\frac{d^n}{dz^n}\left(e^{-z^2}\right)\\\\ -&=\frac{\,2^n}{\sqrt{\pi}}\int_{-\infty}^\infty (z+is)^n\,e^{-s^2}\,ds -\end{align}$$ -as was to be shown! - -ORIGINAL POST -Although I am not sure exactly what you are seeking, I wanted to present a way to show that the integral expression in the OP is indeed the $n$'th order Hermite Polynomial.
-Proceeding, we use the binomial theorem to write the integral of interest as -$$\begin{align}\frac{2^n}{\sqrt{\pi}}\int_{-\infty}^\infty (z+is)^n e^{-s^2}\,ds&=\frac{2^n}{\sqrt{\pi}}\sum_{k=0}^n\binom{n}{k}i^kz^{n-k}\int_{-\infty}^\infty s^ke^{-s^2}\,ds \\\\ -&=\frac{2^n}{\sqrt{\pi}}\sum_{k=0}^{n}\binom{n}{k}i^kz^{n-k}\left(\frac{1+(-1)^k}{2}\right)\Gamma\left(\frac{k+1}{2}\right) \\\\ -&=\frac{2^n}{\sqrt{\pi}}\sum_{k=0}^{\lfloor n/2\rfloor}\binom{n}{2k}(-1)^kz^{n-2k}\Gamma\left(k+\frac12\right) \\\\ -&=\frac{2^n}{\sqrt{\pi}}\sum_{k=0}^{\lfloor n/2\rfloor}\binom{n}{2k}(-1)^kz^{n-2k}\left(\sqrt{\pi}\,\frac{(2k)!}{4^k\,k!}\right) \\\\ -&=2^nn!z^n\sum_{k=0}^{\lfloor n/2\rfloor}(-1)^k\left(\frac{1}{2z}\right)^{2k}\frac{1}{k!(n-2k)!} \tag 3 -\end{align}$$ -where $(3)$ is the explicit expression for the $n$'th order Hermite Polynomial found here.<|endoftext|> -TITLE: Evaluate $\int_0^1\int_0^1 \left\{ \frac{e^x}{e^y} \right\}dxdy$ -QUESTION [18 upvotes]: I want to compute this integral -$$\int_0^1\int_0^1 \left\{ \frac{e^x}{e^y} \right\}dxdy, $$ -where $ \left\{ x \right\} $ is the fractional part function. -Following PROBLEMA 171, Prueba de a), last paragraph of page 109 and first two paragraphs of page 110, here in Spanish, I consider the case $k=1$.
-
-When I take $x=\log u$ and $y=\log v$ then I can show that
-$$\int_0^1\int_0^1 \left\{ \frac{e^x}{e^y} \right\}dxdy=\int_1^e\int_1^e \left\{ \frac{x}{y} \right\}\frac{1}{xy}dxdy=I_1+I_2$$
-since, following the strategy in the cited problem and taking $t=\frac{1}{u}$,
-$$I_1:=\int_1^e\int_1^x \left\{ \frac{x}{y} \right\}\frac{1}{xy}dydx=\int_1^e\frac{1}{x}\int_{\frac{1}{x}}^1 \left\{ \frac{1}{t} \right\}\frac{dt}{t}dx=\int_1^e\frac{1}{x}\int_1^x\frac{ \left\{ u \right\} }{u}dudx,$$
-and since, if there are no mistakes,
-$$ \int_1^x\frac{ \left\{ u \right\} }{u}du =
-\begin{cases}
-x-1-\log x, & \text{if $1\leq x<2$} \\
-1+\log 2+(x-2)-2\log x, & \text{if $2\leq x\leq e$}
-\end{cases}$$
-then $$I_1=\int_1^2\frac{1}{x}(x-1-\log x)dx+\int_2^e\frac{1}{x}(1+\log 2+(x-2)-2\log x)dx,$$
-which gives $I_1=-3+\log 2-\frac{\log^22}{2}+e$. On the other hand, following the cited problem, since $y>x$ we have $ \left\{ \frac{x}{y} \right\}= \frac{x}{y}$, and the second integral is computed as
-$$I_2:=\int_1^e\int_x^e \left\{ \frac{x}{y} \right\}\frac{1}{xy}dydx=\int_1^e\int_x^e \frac{1}{y^2}dydx.$$
-Thus I've computed $I_2=\frac{1}{e}$.
-
-Question. I would like to know if my computations with the fractional part function $ \left\{ x \right\} $ were right (the evaluation of $ \int_1^x\frac{ \left\{ u \right\} }{u}du$ and $I_1$). Can you compute $$\int_0^1\int_0^1 \left\{ \frac{e^x}{e^y} \right\}^kdxdy$$ for the case $k=1$? (At least this case, to see it as a proof verification of my computations; you are welcome to provide similar identities for integers $k\geq 1$, as in the cited problem.) Thanks in advance.
-
-REPLY [9 votes]: Let $u=x-y$ and $v=x+y$.
Then
-$$
-\left\{(x,y):0\le x,y\le1\right\}
-=\left\{(u,v):0\le\left|u\right|\le1,\left|v-1\right|\le1-\left|u\right|\right\}
-$$
-The change of coordinates makes things a bit easier
-$$
-\begin{align}
-&\int_0^1\int_0^1\left\{\frac{e^x}{e^y}\right\}\,\mathrm{d}x\,\mathrm{d}y\\
-&=\int_0^1\int_0^1\left\{e^{x-y}\right\}\,\mathrm{d}x\,\mathrm{d}y\\
-&=\frac12\int_{-1}^0\int_{-u}^{2+u}\left\{e^u\right\}\,\mathrm{d}v\,\mathrm{d}u
-+\frac12\int_0^1\int_u^{2-u}\left\{e^u\right\}\,\mathrm{d}v\,\mathrm{d}u\\
-&=\int_{-1}^0(1+u)\left\{e^u\right\}\,\mathrm{d}u
-+\int_0^1(1-u)\left\{e^u\right\}\,\mathrm{d}u\\
-&=\int_0^1(1-u)\left\{e^{-u}\right\}\,\mathrm{d}u
-+\int_0^1(1-u)\left\{e^u\right\}\,\mathrm{d}u\\
-&=\int_0^1(1-u)\,e^{-u}\,\mathrm{d}u
-+\int_0^{\log(2)}(1-u)\left(e^u-1\right)\,\mathrm{d}u
-+\int_{\log(2)}^1(1-u)\left(e^u-2\right)\,\mathrm{d}u\\
-&=\int_0^1(1-u)\left(e^{-u}+e^u\right)\,\mathrm{d}u
--\int_0^{\log(2)}(1-u)\,\mathrm{d}u
--2\int_{\log(2)}^1(1-u)\,\mathrm{d}u\\[3pt]
-&=\left[(2-u)e^u+ue^{-u}\right]_0^1
--\left[u-\tfrac12u^2\right]_0^{\log(2)}
--2\left[u-\tfrac12u^2\right]_{\log(2)}^1\\[6pt]
-&=\left[(2-u)e^u+ue^{-u}\right]_0^1
--\left[u-\tfrac12u^2\right]_0^1
--\left[u-\tfrac12u^2\right]_{\log(2)}^1\\[9pt]
-&=2\cosh(1)-2-\tfrac12-\tfrac12+\log(2)-\tfrac12\log(2)^2\\[12pt]
-&=2\cosh(1)-3+\log(2)-\tfrac12\log(2)^2
-\end{align}
-$$<|endoftext|>
-TITLE: $n^{th}$ derivative of $\cot x$
-QUESTION [7 upvotes]: What is the $n^{th}$ derivative of $\cot(x)$?
-I tried to differentiate it many times, but I can't see a pattern forming. Please help.
-
-REPLY [7 votes]: There is a pattern but it is not simple. Apparently the pattern was found only quite recently:
-
-V.S. Adamchik, On the Hurwitz function for rational arguments, Applied Mathematics and Computation, Volume 187, Issue 1, 1 April 2007, Pages 3–12
-
-See Lemma 2.1.
The text is available at the author's site: pdf -There is also a recursion formula: - -If $\dfrac{d^n}{dx^n} \cot x = (-1)^n P_n(\cot x)$, then $P_0(u)=u$, $P_1(u)=u^2+1$, and - $$ -P_{n+1}(u) = \sum_{j=0}^n \binom{n}{j} P_j(u) P_{n-j}(u) -$$ for $n\ge 1$. - -This formula appears in - -Michael E. Hoffman, Derivative polynomials for tangent and secant, - The American Mathematical Monthly, Vol. 102, No. 1 (Jan., 1995), pp. 23-30 - -I learned of this formula and paper in - -Kurt Siegfried Kölbig, The polygamma function and the derivatives of the cotangent function for rational arguments, CERN-CN-96-005, 1996.<|endoftext|> -TITLE: Does ZFC decide every question about finitely generated groups? -QUESTION [17 upvotes]: In ZFC, we can easily say when a triple $\mathscr{G}=\left\langle G,\cdot,1 \right\rangle $ is a group. Furthermore, we can say when a group is finitely generated: First define a "canonical" finitely generated group on $n$ generators by taking the set of all finite ordered tuples of elements from a fixed set of size $n$ ("words in these elements"), and defining the usual equivalence relation that will make the set of equivalence classes into a group. Second, a f.g. group will be a group isomorphic to some quotient of that canonical f.g. group (I believe we can say all this in ZFC, please correct me if I'm mistaken). -So now the question is - will every statement about f.g. groups be decidable in ZFC? Put differently - can any statement about these groups be independent of ZFC? -What about finitely presented groups (where the the group in the quotient is itself f.g.)? -For reference, I think of Whitehead's problem that was shown by Shelah to be independent of ZFC. The main difference is that here we are dealing with things that have some finiteness in them, so it is less clear to me whether they can be so manipulated. 
-
-REPLY [9 votes]: I am not sure I understood your question correctly, so let me state my interpretation:
-
-Are there questions about groups which are undecidable (in ZFC)?*
-
-The answer to this question is "yes". This is particularly interesting for finitely presented groups, due to the notion of a Markov property (see Lyndon and Schupp, Section IV.4, p192). Let $\mathcal{P}$ be a property of finitely presented groups which is preserved under isomorphism. The property $\mathcal{P}$ is said to be a Markov property if:
-
-There is a finitely presented group $G_1$ with $\mathcal{P}$, and
-There is a finitely presented group $G_2$ which cannot be embedded in any finitely presented group which has $\mathcal{P}$.
-
-
-Theorem (Adian-Rabin). Let $\mathcal{P}$ be any Markov property of finitely presented groups. Then there is no algorithm which decides whether or not a given finitely presented group has the property $\mathcal{P}$.
-
-Examples of Markov properties, and hence of properties of finitely presented groups which are not recursively recognizable, are:
-
-being the trivial group;
-being finite;
-being abelian;
-being nilpotent;
-being solvable;
-being free;
-being torsion-free;
-being residually finite;
-having a solvable word problem;
-being simple;
-being automatic.
-
-Incidentally, the property of "being a group" for a set under an operation is undecidable (you state otherwise in your question). To see this, it helps to know that Markov properties were originally defined for semigroup presentations, and Markov proved the analogous result to the Adian-Rabin theorem, above. Then "being a group" is a Markov property for finitely presented semigroups (because there exist semigroups which do not embed into groups), and hence is recursively undecidable.
-I doubt, but am unsure how to prove, that "being finitely generated" is decidable for groups.
-*This is quite different to your re-statement: "can any statement about (f.g.)
groups be independent of ZFC?"<|endoftext|>
-TITLE: On definable bijections $b:M^n\rightarrow M^m$ in an o-minimal structure $\mathcal{M}$.
-QUESTION [7 upvotes]: Let $\mathcal{M}=\{M,<,\ldots\}$ be an o-minimal first order structure, namely a structure where every definable set in $M$ is a finite union of points and intervals with endpoints in $M\cup \{\pm\infty\}$.
-
-For $n,m<\omega$, $n\neq m$, can there exist a definable bijection $b:M^n\rightarrow M^m$?
-
-I summarise some of the results which hold in o-minimal structures.
-Monotonicity Theorem Every definable function $f:M\rightarrow M$ is piecewise continuous and monotone (i.e. strictly increasing, decreasing or constant).
-Uniform finiteness Let $\phi(x,y)$ be a formula, $x\in M^n$, $y\in M^m$, and let's denote $A_{y}:=\{x\in M^n\,:\,\mathcal{M}\models \phi(x,y)\}$. We say $\{A_y\,:\, y\in M^m\}$ is a uniformly definable family of sets. There exists $k<\omega$ such that for all $y\in M^m$ either $|A_{y}|
-TITLE: Prove that the determinant is a multiple of $17$ without developing it
-QUESTION [6 upvotes]: Let the matrix be given as:
-$$D=\begin{bmatrix}
-1 & 1 & 9 \\
-1 & 8 & 7 \\
-1 & 5 & 3\end{bmatrix}$$
-
-Prove that the determinant is a multiple of $17$ without developing it?
-
-
-I saw a solution by the Jacobi method, but could not apply that methodology to this example.
-
-REPLY [3 votes]: Notice that 119, 187 and 153 are all divisible by 17.
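(A quick machine check of this claim, added as an aside; `det3` is a helper name introduced here.)

```python
# Check that 119, 187, 153 are multiples of 17, and that the determinant is too.
def det3(m):
    # cofactor expansion along the first row of a 3x3 matrix
    (a, b, c), (d, e, f), (g, h, i) = m
    return a * (e * i - f * h) - b * (d * i - f * g) + c * (d * h - e * g)

print([x % 17 for x in (119, 187, 153)])  # [0, 0, 0]
D = det3([[1, 1, 9], [1, 8, 7], [1, 5, 3]])
print(D, D % 17)  # -34 0
```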
So multiplying column 2 by 10 and adding it to column 3, and then multiplying column 1 by 100 and adding it to column 3, gives us a column in which each element is divisible by 17:
-$D=\left|\begin{matrix}
-1 & 1 & 9 \\
-1 & 8 & 7 \\
-1 & 5 & 3\end{matrix}\right|
-=\left|\begin{matrix}
-1 & 1 & 19 \\
-1 & 8 & 87 \\
-1 & 5 & 53\end{matrix}\right|
-=\left|\begin{matrix}
-1 & 1 & 119 \\
-1 & 8 & 187 \\
-1 & 5 & 153\end{matrix}\right|
-=17\left|\begin{matrix}
-1 & 1 & 7 \\
-1 & 8 & 11 \\
-1 & 5 & 9\end{matrix}\right|$
-Thus $D = 17\cdot E$, where $E$ is the determinant of a matrix with integer entries; expanded out using the definition of a determinant, $E$ is therefore an integer.<|endoftext|>
-TITLE: Homogeneity lemma in point set topology
-QUESTION [6 upvotes]: The statement is simple:
-
-For all $x\in \operatorname{int} D_n:=\{x\in\Bbb R^n: \| x \|<1\}$, there exists a homeomorphism $h$ on $D_n$ such that $h(0)=x$ and $h$ does not move the points of $S_{n-1}$.
-
-This somehow reminds me of the sound wave pattern present in the Doppler effect:
-
-Perhaps I can accordingly construct an explicit homeomorphism mapping? But it's a very vague and naive idea, and doesn't seem to help me in any practical way.
-If an explicit homeomorphism is not easily found, then is there any intuitive (and rigorous and convincing, of course) way to tackle this seemingly simple problem?
-(When I search for a proof, I find that most results are actually not what I want; rather, they seem relevant to another lemma (under the same name) in differential topology/geometry, of which I know nothing.)
-P.S.: the following theorem might be related (but I don't know how):
-
-Any closed, convex "volume" (I don't know the terminology; put simply, something solid that has a volume) in $\Bbb R^n$ is homeomorphic to $D_n$.
-
-REPLY [2 votes]: There is an elementary approach that works in any normed real vector space.
-The general idea is to use linear interpolation along radii.
-
-Concretely, any $x \in D$ can be written as $x = te$ with $t \in [0, 1], \|e\| = 1$. We can set up a general form $h(te) = C + tf(e)$ with unknown
-$C$ and $f$, and then try to solve the system $h(0) = a, h(e) = e$. This
-works out to the pleasantly simple $h(x) = x + (1 - \|x\|)a$.
-Some elementary linear algebra shows that $\|h(x)\| \le 1$ when
-$\|x\| \le 1$, and furthermore for arbitrary $x, y$
-$$
- (1 - \|a\|)\|x - y\| \le \|h(x) - h(y)\| \le 2\|x - y \|
-$$
-so $h$ is at least a homeomorphic embedding.
-To show that it is surjective may be a little trickier, since there seems
-to be no easy expression for $h^{-1}$, but it can be done by applying the intermediate value theorem to the function $g(t) = \|a + t(x - a)\|$ to
-show that there is a unit vector $e$ such that $x$ is in the segment
-$[a, e] = h([0, e])$.<|endoftext|>
-TITLE: Let $a_n=\cos(a_{n-1}), L=[a_1,a_2,...,a_n,...].$ Is there an $a_0$ such that $L$ is dense in $[-1,1]?$
-QUESTION [10 upvotes]: I've been experimenting with recursive sequences lately and I've come up with this problem:
-
-Let $a_n= \cos(a_{n-1})$ with $a_0 \in \Bbb{R}$ and $L=[a_1,a_2,...,a_n,...].$
- Does there exist an $a_0$ such that $L$ is dense in $[-1,1]?$
-
- I know of $3$ ways of examining whether a set is dense:
-$i)$ The definition, that is, whether its closure is the set on which it is dense; in our case this means: $\bar L=[-1,1]$.
-$ii)$ $(\forall x \in [-1,1])(\forall \epsilon>0)(\exists b \in L):|x-b|<\epsilon$
-$iii)$ $(\forall x \in [-1,1])(\exists (b_n) \subseteq L):b_n\rightarrow x$
-So far I haven't been able to use these to answer the question. I tried plugging in different values of $a_0$ and seeing where that leads, but I have not found any corresponding promising "pattern" for $a_n$. Any ideas on how to approach this?
-
-REPLY [8 votes]: Ok, if you study recursive sequences, then you have probably heard of the "Lamere Ladder".
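(An added numerical illustration of the behaviour described next: whatever $a_0$ is, the iterates rush to a single limit, the fixed point of $\cos$, approximately $0.7390851$, so the orbit is far from dense in $[-1,1]$.)

```python
# Iterate cos from several starting points; all orbits converge to the unique
# fixed point of cos (the "Dottie number", approx. 0.7390851).
import math

for a0 in (-100.0, 0.0, 3.14, 42.0):
    a = a0
    for _ in range(200):
        a = math.cos(a)
    print(a0, a)  # the second column is the same for every a0
```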
-
-According to the Lamere Ladder for $\cos(x)$ (which is a contraction on $[-1,1]$ by the MVT, since $|\sin x|\leq \sin 1<1$ there), this function has a fixed (stationary) point. So, regardless of $a_0$, $a_1$ will land in $[-1, 1]$, and from there on the sequence $\{a_n\}$ will tend to the fixed point of $\cos(x)$. This makes $L$ a convergent sequence, so $L$ can't be dense, because the only point satisfying ii) (in your question) is its limit.
-On another note, Kronecker's approximation theorem is quite a useful tool too. For example $\{n+ m \cdot 2 \cdot \pi \space | \space m,n \in \mathbb{Z} \}$ is dense in $\mathbb{R}$ and $\cos(x)$ is a continuous function, making $\{\cos(n)\}_{n \in \mathbb{Z}}$ dense in $[-1,1]$.<|endoftext|>
-TITLE: Largest prime gap under $2^{64}$
-QUESTION [6 upvotes]: Thanks to Tomás Oliveira e Silva's extensive calculations, it is known that the largest prime gap less than $4\cdot10^{18}\approx2^{61.8}$ is 1476. I'd like an upper bound for the largest prime gap less than $2^{64}$. I know it will be horrifically large, but I'm hoping that more can be done than the best I know:
-$$
-\left\lfloor\frac{2^{64}}{25\log^2(2^{64})}\right\rfloor=374946102691094
-$$
-due to [1]. As DanaJ suggested in the comments, this can be improved by a result of Axler [2]. In fact Büthe [3] improves on both for this range, reducing the bound to 672606194883.
-Note: When I write "the largest prime gap less than $x$" I mean "the largest $q-p$ where $p$ and $q$ are consecutive primes and $p
-TITLE: Is it possible to formalize all mathematics in terms of ordinals only?
-QUESTION [12 upvotes]: Our experience shows that all finitary mathematical objects could be encoded using the natural numbers, and all operations on those objects could be expressed in terms of a few basic operations on numbers: the successor operation, addition and multiplication.
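(Added illustrative aside: one concrete such encoding is the Cantor pairing function, which codes a pair of naturals as a single natural using only addition, multiplication and halving; iterating it codes arbitrary finite tuples. The names `pair` and `unpair` are introduced here.)

```python
# The Cantor pairing function: a bijection N x N -> N built from elementary
# arithmetic, so tuples of naturals can be coded by single naturals.
def pair(x, y):
    return (x + y) * (x + y + 1) // 2 + y

def unpair(z):
    # find the largest w with the triangular number w(w+1)/2 <= z
    w = 0
    while (w + 1) * (w + 2) // 2 <= z:
        w += 1
    y = z - w * (w + 1) // 2
    return w - y, y

print(pair(3, 5), unpair(41))  # 41 (3, 5)
```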
-As far as I understand, this is not a theorem in the strict sense (because the notion of a finitary mathematical object is somewhat vague), but is rather a principle with a status similar to the Church–Turing thesis. Nevertheless, the essence of this principle can be expressed by a precise statement, for example, that the theory of hereditary finite sets and the Peano arithmetic are mutually interpretable. -As it turns out, other familiar operations on integers — the exponentiation, prime factorization and in fact (invoking the Church–Turing thesis), every effectively computable operation — can be expressed using the few aforementioned basic operations. -Now, the set theory encompassing infinite sets and the powerset operations is much richer than the Peano arithmetic. But we can observe that the universe of sets contains a subclass similar in some of its properties to the natural numbers — the class of ordinals. So, here is my question: - -Is it possible to reduce studying of (finite and infinite) mathematical objects to studying of ordinals, and to build a formal theory free of the concept of an arbitrary set, the theory whose universe consists of ordinals only, that uses only a few basic operations on them, but the theory that is still powerful enough to serve as a foundational theory of all mathematics — in the same sense as it is possible to use the Peano arithmetic as a foundational theory for finitary mathematics? And if so, what could be its language and its axioms? - -To clarify, I'm not asking if it would be useful or convenient to use such a theory, but I'm interested in a mere possibility to do that. - -REPLY [4 votes]: Edit: I've just been told that Rathjen's definition of proof-theoretic ordinal is intentionally vague for the introduction and not fit to base results off of. 
However as far as I know one of Rathjen's more specific renderings of Gentzen's result in particular is correct, $$\textrm{PRA+}``\textrm{There exists a provably recursive well-ordering of order type }\varepsilon_0"\vdash\textrm{Con(PA)}$$ -(Although another user told me this might be wrong as well, it might work if we fix an ordinal notation in advance as in (14).) - -Some analysis of theories where every object in their domain of discourse is an ordinal, or "theories of ordinals" as Arai calls them, has been done. For example in the paper Proof theory for theories of ordinals II: $\Pi_3$-reflection, Arai has defined and analyzed a theory $T_3$ for universes that are $\Pi_3$-reflecting ordinals. And this theory proves the consistency of Peano arithmetic: -According to Rathjen's paper "The realm of ordinal analysis", $\textrm{PRA}$ plus transfinite induction of length $\varepsilon_0$ along a certain primitive recursive predicate is sufficient to prove $\textrm{Con(PA)}$. Since the ordinal Arai calibrated $\vert T_3\vert$ to is greater than $\varepsilon_0$, this means $T_3$ proves some primitive recursive ordering of $\omega$ with order type $\varepsilon_0$ to be a recursive well-ordering. If $T_3$ also interprets $\textrm{PRA}$, then $T_3$ proves $\textrm{PRA}+\textrm{PR-TI}(\varepsilon_0)$, and so proves $\textrm{Con(PA)}$.<|endoftext|> -TITLE: Existence and uniqueness of the eigen decomposition of a square matrix -QUESTION [6 upvotes]: I'm confused on the sufficient conditions for the existence and uniqueness of the eigen decomposition of a square matrix. -Consider a matrix $A$ of dimension $m\times m$, a matrix $B$ of dimension $m\times m$ and a matrix $D$ diagonal of dimension $m\times m$. -Assumption 1: $B$ invertible -Assumption 2: The diagonal elements of $D$ are all distinct -Assumption 3: $A=BDB^{-1}$ where $B^{-1}$ exists by Assumption 1 -Questions: -(1) Does Assumption 3 mean that $BDB^{-1}$ is the eigen decomposition of $A$? 
In other words, is Assumption 3 equivalent to saying that the columns of $B$ are the eigenvectors of $A$ and the diagonal elements of $D$ are the eigenvalues of $A$? Or do we need other assumptions to state that?
-My doubt is that: if Assumption 3 means that the columns of $B$ are the eigenvectors of $A$ and the diagonal elements of $D$ are the eigenvalues of $A$, then, since $B$ is invertible, it should be that the eigenvectors of $A$ are linearly independent and, hence, that $A$ is invertible (which is not among my assumptions).
-(2) From what I have read in some sources, Assumptions 2 and 3 imply that the eigen decomposition of $A$ is unique [up to a left multiplication of $B$ by an invertible diagonal matrix and up to an order for the eigenvalues]. What does "unique" exactly mean? My thought was that it means that there are no other matrices $E,F$ with $F$ diagonal such that $A=EFE^{-1}$? But if that is right, the uniqueness would be necessary to guarantee that the columns of $B$ are the eigenvectors of $A$ and the diagonal elements of $D$ are the eigenvalues of $A$; in other words, it would be "embedded" in saying that $BDB^{-1}$ is the eigen decomposition of $A$. Could you clarify this point?
-
-REPLY [5 votes]: (1) Let $e_i$ be the column vector with $i$th entry $1$ and all other entries $0$. Then $$A(Be_i) = BDB^{-1}(Be_i) = B(De_i) = Bd_{ii}e_i = d_{ii}(Be_i)$$ Therefore $Be_i$ is an eigenvector of $A$ with eigenvalue $d_{ii}$.
-(2) As is explained in the parenthetical, unique means
-
-In $A = EFE^{-1}$, the diagonal elements of $F$ will be the same as the diagonal elements of $D$, though not necessarily in the same order. (Since $F$ is assumed diagonal, its off-diagonal elements are all $0$, just like $D$.)
-If the elements of $F$ are arranged to match $D$ (i.e., if $F = D$), then $E = BQ$ for some invertible diagonal matrix $Q$ (i.e., none of the diagonal elements is $0$). (And note that it is right-multiplication, not left.
Left works on rows, not columns.)
-
-This does follow from (1), because the relation $A= EFE^{-1}$ implies that the diagonal elements of $F$ are also eigenvalues of $A$, with the column vectors of $E$ as eigenvectors. The fact that $E$ is invertible implies that $\{Ee_i\}$ spans the space. Therefore every eigenvalue must be represented in the diagonal elements of $F$, just as in $D$. Since by assumption 2 they are all distinct, they must be exactly the same.
-Now if we arrange the eigenvalues in $F$ in the same order as $D$, then $F = D$, and $A = EDE^{-1}$. Because the eigenvalues are distinct, all of the eigenspaces are one-dimensional. So the $i$th columns of $E$ and $B$ are both eigenvectors for the same eigenvalue, and so must be parallel. Collecting the scalar multipliers together as the diagonal elements of a diagonal matrix $Q$ gives $E = BQ$.<|endoftext|>
-TITLE: Categorical formulations of basic results and ideas from functional analysis?
-QUESTION [9 upvotes]: I'm taking a first (undergrad) course on functional analysis. Though the material is nice, the approach seems very ad hoc and, in a sense, near-sighted (?).
-I was wondering whether the/a big picture of (parts of) the elementary landscape of functional analysis admits some nice categorical descriptions.
-What basic facts, theorems, and constructions in elementary functional analysis admit enlightening categorical formulations?
-
-REPLY [3 votes]: I think Lectures and Exercises on Functional Analysis by A. Y. Helemskii might be exactly what you're looking for. Quoting from the introduction:
-
-Perhaps the main idea is that our book is written from the categorical
- point of view. Everywhere we stress and comment on the categorical nature of the fundamental constructions and results (like the
- constructions of adjoint operators and completion, the Riesz-Fischer
- and the Schmidt theorems, and closer to the end of the book, the
- great Hilbert spectral theorem).
This, as we believe, provides a new - level of understanding of the topics discussed. We are sure that - students (and even professors!) are ready for the perception of the - very basic categorical notions (and only those are used) and, what is - more important, for the unifying mathematical language of category - theory. Functional analysis, with its synthetic algebraic and topological content, works very well for first acquaintance with - categories, the same way as "Analysis III" did for the exposition of - the foundations of set theory 50 years ago. (Of course, the exposition - must be accompanied by a sufficient supply of examples and exercises; - but we shall discuss this later.)<|endoftext|> -TITLE: Field $K$ with $\operatorname{Gal}(\overline{K}/K)\simeq\widehat{F_2}$ -QUESTION [8 upvotes]: Is there a field K such that $\operatorname{Gal}(\overline{K}/K)$ is the profinite free group with two generators? -For one generator I know that for all the $\mathbb{F}_p$ we have $\operatorname{Gal}(\overline{\mathbb{F}_p}/\mathbb{F}_p)\simeq\widehat{\mathbb{Z}}$. - -REPLY [4 votes]: Yes, there is. In fact, for every projective profinite group $G$, there is a perfect and pseudo-algebraically closed field whose absolute Galois group is $G$. -You may want to check corollary 23.1.2 of Field Arithmetic by Fried and Jarden for a proof.<|endoftext|> -TITLE: Prove that the group $\mathrm{GL}(n, \mathbb{Z})$ is finitely generated -QUESTION [10 upvotes]: Knowing that for $n \geq 2$, $\mathrm{GL}(n, \mathbb{Z}) = \big\{ A \in \mathrm{M}_{n,n}(\mathbb{Z}) \mid \det(A) \in \{ 1, −1 \} \big\}$ is a group with respect to matrix multiplication, prove that for every integer $n \geq 2$ the group $\mathrm{GL}(n, \mathbb{Z})$ is finitely generated. -If I prove that $\mathrm{GL}(n, \mathbb{Z})$ has finite subgroups does that mean it has a finite set of generators so that it is finitely generated? 
-
-REPLY [3 votes]: It is known that for any Euclidean domain $R$, the group $GL_n(R)$ is generated by the elementary matrices together with the diagonal matrices whose entries are units (proved via Gaussian elimination). Over $\mathbb{Z}$, every elementary matrix $E_{ij}(c)$ is a power of $E_{ij}(1)$, and the only units are $\pm 1$, so finitely many such generators suffice.<|endoftext|>
-TITLE: how to find the basis of a plane or a line?
-QUESTION [5 upvotes]: Find a basis for the plane $x-2y+3z=0$ in $R^3$. Then find a basis for the intersection of that plane with the $xy$ plane.
-Is there a proper/algebraic way of finding the basis of a plane?
-Just by looking at it a basis could be $(2, 1, 0)$ because any multiple of that will give you $0$ when you substitute, but how do I find this without guessing?
-would I use the same process when finding the basis of a line?
-Any hints on how to figure out the second part of the question?
-
-REPLY [10 votes]: One thing you can identify is that $z = \frac{2y-x}{3}$, then the points that are going to satisfy the equality will be of the form
-$$
-\left(x, y, \frac{2y-x}{3}\right).
-$$
-For you to be able to cover all such points, you would need two vectors satisfying the above that are not multiples of each other.
-You can see that indeed we can decompose the above vector as
-$$
-\left(x, y, \frac{2y-x}{3}\right)=x\left(1,0,-\frac{1}{3}\right)+y\left(0,1,\frac{2}{3}\right),
-$$
-which gives you an obvious basis
-$$
-(v_1,v_2)=\left(\left(1,0,-\frac{1}{3}\right),\left(0,1,\frac{2}{3}\right)\right).
-$$
-There are many different ways of constructing the decomposition in the first characterization, each resulting in differently aligned basis vectors.
The Zariski topology is $T=\{U \subset X : X \setminus U=V(f_1,\ldots,f_m) \}$.
-I am trying to show that given the Zariski topology on $k^n$, if $U$ and $V$ are non-empty open subsets, then $U \cap V$ is non-empty. I don't know how to prove this, so any hints would be greatly appreciated.
-
-REPLY [2 votes]: Let $U,V$ be two nonempty open sets of $k^n$ under the Zariski topology. Thus they have associated two families of nonzero polynomials $f_1,...,f_m;g_1,...,g_l\in k[x_1,...,x_n]$ such that $$f_i(a)=0\,\forall a\not\in U,\forall i\qquad g_j(b)=0\,\forall b\not\in V,\forall j.$$ Note that the complement of the intersection $U\cap V$ has associated the products $f_ig_j$, varying $i,j$. Since $k[x_1,...,x_n]$ is an integral domain, none of these products is zero.
-Now, suppose that $U\cap V=\varnothing$. This means that all points of $k^n$ are roots of all $f_ig_j$. But this is a contradiction: for an infinite field $k$, a nonzero polynomial cannot vanish at every point of $k^n$, so this would imply $f_ig_j=0$ for all $i,j$. (Some such hypothesis on $k$ is needed; over a finite field the Zariski topology on $k^n$ is discrete.)<|endoftext|>
-TITLE: Tangent Space to Moduli Space of Vector Bundles on Curve
-QUESTION [9 upvotes]: Let $X$ be a curve of genus $g \geq 2$. Using Geometric Invariant Theory, we can construct a moduli space $\mathcal{M}(r,d)$ of vector bundles on $X$ of rank $r$ and degree $d$. The details of this construction are a bit over my head for now; however, I would like to at least be able to prove that the dimension of this moduli space is $r^{2}(g-1)+1$.
-I'm lacking understanding of one key fact. In Michael Thaddeus' paper (http://www.math.columbia.edu/~thaddeus/papers/odense.pdf) he mentions that the tangent space to $\mathcal{M}(r,d)$ at any stable bundle $E$ satisfies the following
-$T_{E} \mathcal{M}(r,d) \simeq H^{1}(\rm{End}E)$
-Can anyone help me understand this? I don't understand Thaddeus' argument. Given the above fact, it's trivial to apply Hirzebruch-Riemann-Roch and complete the derivation of the dimension of the moduli space.
-
-REPLY [2 votes]: Warning: I have seen the construction of the moduli of (stable) vector bundles as a manifold via a symplectic quotient (in a course with notes available here), rather than as a scheme via GIT quotient, which I don't understand well. However, my understanding is that the two are analogous, and this is supported by the fact that the argument Thaddeus appears to be making is the same as in the symplectic case. In particular, for both types of quotient $X//G$, we first find some subset $Y \subset X$ where $G$ acts nicely enough to take a nice quotient, and $X//G:= Y/G$.
-I think the argument in mind is that whenever you have a (sufficiently nice) group $G$ acting (sufficiently nicely) on a (sufficiently nice) scheme $X$ (so that the quotient makes sense as an orbit space), the projection map $T_x Y \rightarrow T_{[x]}(X//G)$ has kernel equal to the image of the infinitesimal action of $G$ on $Y$, $\mathfrak{g}\rightarrow T_x Y$, induced by the action of $G$ on $Y$. In Thaddeus' argument, he demonstrates an exact sequence:
-$$\mathfrak{gl}(\chi,\mathbb{C})\rightarrow T_EY \rightarrow H^1(\operatorname{End}(E)) \rightarrow 0$$
-After this, he observes that the first map is the infinitesimal action, so we have that $H^1(\operatorname{End}(E))$ is isomorphic to the cokernel of the infinitesimal action, which gives the result we wanted by the general theory mentioned above.<|endoftext|>
-TITLE: If a, b, c are three natural numbers with $\gcd(a,b,c) = 1$ such that $\frac{1}{a}+\frac{1}{b}=\frac{1}{c}$ then show that $a+b$ is a square.
-QUESTION [7 upvotes]: If a, b, c are three natural numbers with $\gcd(a,b,c) = 1$ such that $$\frac{1}{a} + \frac{1}{b}= \frac{1}{c}$$ then show that $a+b$ is a perfect square.
-
-This can be simplified to:
-$$a+b = \frac{ab}{c}$$
-Also, the first few such examples of $(a,b,c)$ are $(12, 4, 3)$ and $(20, 5, 4)$. So, I have a feeling that $b$ and $c$ are consecutive.
-I don't think I have made much progress.
-
-Any help would be appreciated.
-
-REPLY [9 votes]: Rewrite as $(a-c)(b-c)=c^2$. First we show that $a-c$ and $b-c$ are relatively prime. Suppose to the contrary that the prime $p$ divides $a-c$ and $b-c$. Then $p$ divides $c$ and therefore $a$ and $b$, contradicting the fact that $\gcd(a,b,c)=1$.
-Since $a-c$ and $b-c$ are relatively prime positive integers (note that $a,b>c$) whose product is a perfect square, it follows that $a-c=s^2$ and $b-c=t^2$, where $st=c$.
-We conclude that $a=s^2+st$ and $b=t^2+st$, so $a+b=(s+t)^2$.
-
-REPLY [2 votes]: To show this, we note that $c(a+b)=ab$. Now let $g$ be the gcd of $a$ and $b$, which need not necessarily be $1$. Denote $a=a'g$ and $b=b'g$ so that we get $c(a'+b') = a'b'g$.
-Because $a' + b'$ is relatively prime to both $a'$ and $b'$, it follows that it divides $g$. But $g$ also divides $c(a'+b')$. Further, note that $g$ is coprime to $c$, because it was the gcd of $a$ and $b$, so that gcd($g$,$c$)=gcd($a$,$b$,$c$)=1. It follows that $a'+b'=g$, and therefore that $a+b = (a'+b')g = g^2$.
-For example, $\frac{1}{6} + \frac{1}{3} = \frac{1}{2}$ and $(3,6)=3$ and $3+6=9=3^2$.<|endoftext|>
-TITLE: Can a complex number be prime?
-QUESTION [38 upvotes]: I've been pondering this question for a very long time.
-If a complex number can be prime, which parts of the complex number need to be prime for the whole complex number to be prime?
-
-REPLY [64 votes]: The notion of being "prime" is only meaningful relative to a base ring.
-For instance, in the integers $\mathbb{Z}$ the number 5 is prime, whereas in the Gaussian integers $\mathbb{Z}[i]$ we have
-$$5 = (2 + i)(2 - i) = 2^2 - i^2 = 4 - (-1) = 5$$
-and in the ring $\mathbb{Z}[\sqrt{5}]$ we have
-$$5 = (\sqrt{5})^2$$
-so over these rings 5 is not a prime number.
-The definition of prime you're probably familiar with -- a number is prime if it is divisible only by itself and one -- doesn't even really work over the integers: for instance, 5 is divisible not only by 1 and 5, but also by -1 and -5.
So we need to formulate the definition of a prime differently, while still preserving the basic idea, to make sense of it in an arbitrary ring.
-Notice the following difference between primes and composites: since 5 is prime, if we have two numbers $a$ and $b$ such that $ab$ is a multiple of 5, then obviously one of $a$ or $b$ has to be a multiple of 5 just by unique factorization. On the other hand, if $ab$ is a multiple of 15, it may be the case that neither $a$ nor $b$ is a multiple of 15, because we might instead have $a$ a multiple of 3 but not 5 and $b$ a multiple of 5 and not 3. †
-This gives us our definition of "prime" for a general ring: a ring element is prime if it is neither zero nor a unit, and moreover has the property that whenever it divides a product it must divide at least one of the factors.
-There are a great many rings contained in the complex numbers, and in many of these rings there are non-real complex numbers that are primes in that ring. However, given any number that's prime in a given ring, there's a larger ring in which it's not prime, just as we saw above that 5 is prime over the integers, but not over the Gaussian integers $\mathbb{Z}[i]$ or over $\mathbb{Z}[\sqrt{5}]$.
-In particular, since $\mathbb{C}$ is a field, every nonzero element is a unit, so nothing is prime over the complex numbers. (Similarly, nothing is prime over the real numbers, or the rational numbers.) However, I reiterate that many complex numbers are prime over smaller rings: for instance, it turns out that $2 + i$ is prime over $\mathbb{Z}[i]$.
-
-† A slightly more straightforward generalization of the definition you're used to would be to look at nonzero, nonunit elements $r$ for which we only have $r = s t$ when either $s$ or $t$ is a unit. This actually gives a weaker notion called "irreducibility."
In Unique Factorization Domains the two notions are the same (which explains why they're the same for the integers), but in rings like $\mathbb{Z}[\sqrt{-5}]$ where we do not have unique factorization you can have situations like
-$$3 \cdot 3 = 9 = (2 + \sqrt{-5})(2 - \sqrt{-5})$$
-Here each of $3$, $2 + \sqrt{-5}$, and $2 - \sqrt{-5}$ is irreducible, with none dividing any of the others. We've decided that being "prime" is about unique prime factorization, so we chose a definition of "prime" under which each of the above is "irreducible" but none is "prime."
-
-Note: several answers (including this one) have brought up the Gaussian integers specifically. They're indeed an example of a subring of the complex numbers containing non-real complex numbers, but just to be clear they're in no way "the" natural example here -- they're on the same footing as all the others.<|endoftext|>
-TITLE: What is known about the counting function of Gaussian primes?
-QUESTION [9 upvotes]: The counting function of primes among $\Bbb{N}$, describing the asymptotic density of the primes, is well known (the Prime Number theorem). Let's define a mild generalization of the counting function concept:
-Given sets $S$ and $P\subset S$, and a map $M: S \to \Bbb{N}$, then $C(S,P;M)$ is the asymptotic ratio of the number of $p \in P : M(p) \leq n$ to the number of $s \in S : M(s) \leq n$.
-For example, if we take $S$ to be $\Bbb{N}$ and $P$ to be the primes, and $M_1$ to be the trivial map $\forall k \in \Bbb{N}, M_1(k)=k$, then the Prime Number Theorem supplies the form of $C(\Bbb{N}, P; M_1)$. You could equally well transform that to a counting function $C(\Bbb{N}, P; M_2)$ where $\forall k \in \Bbb{N}, M_2(k)=k^2$ for example; no interesting new information emerges.
-Given this definition, my question is: Has anybody studied the counting function of Gaussian primes? Here, a natural map to use is the squared magnitude $M(a+bi) = a^2+b^2$.
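-To get a feel for the numbers involved, one can simply count; here is a quick sketch in Python (the helper names are mine; it assumes the standard characterization of primes in $\Bbb{Z}[i]$: an element $a+bi$ with $a,b\neq 0$ is prime exactly when $a^2+b^2$ is an ordinary prime, while the primes on the axes are $\pm p$ and $\pm pi$ for rational primes $p\equiv 3 \pmod 4$):

```python
import math

def is_rational_prime(m):
    """Trial-division primality test, adequate for small integers."""
    if m < 2:
        return False
    i = 2
    while i * i <= m:
        if m % i == 0:
            return False
        i += 1
    return True

def is_gaussian_prime(a, b):
    """Standard characterization of the primes of Z[i]."""
    if a == 0:
        return is_rational_prime(abs(b)) and abs(b) % 4 == 3
    if b == 0:
        return is_rational_prime(abs(a)) and abs(a) % 4 == 3
    return is_rational_prime(a * a + b * b)

def gaussian_prime_count(n):
    """Number of Gaussian primes a+bi with M(a+bi) = a^2 + b^2 <= n."""
    r = math.isqrt(n)
    return sum(1
               for a in range(-r, r + 1)
               for b in range(-r, r + 1)
               if a * a + b * b <= n and is_gaussian_prime(a, b))

print([gaussian_prime_count(n) for n in (2, 5, 9, 25)])  # → [4, 12, 16, 32]
```

-The ratio of this count to the total number of Gaussian integers of squared magnitude at most $n$ is then an empirical value of the proposed counting function.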
- -REPLY [2 votes]: I believe that Hecke proved in 1919 that the von Mangoldt measure on Gaussian integers converges under rescaling to uniform measure in the complex plane. Here the von Mangoldt function or measure assigns $\ln (a^2+b^2)$ to every Gaussian prime power $(a+ib)^n$. This is therefore a 2-dimensional prime number theorem. (As in the usual prime number theorem, you can discard the measure for higher prime powers with negligible loss of measure.) It is usually stated in slightly different terms, but I believe that this is what it amounts to. See for instance Rudnick and Waxman.<|endoftext|>
-TITLE: proof - Show that $1! +2! +3!+\cdots+n!$ is a perfect power if and only if $n=3$
-QUESTION [6 upvotes]: Show that $1! +2! +3!+\cdots+n!$ is a perfect power if and only if $n =3$
-
-For $n=3$, $1!+2!+3!=9=3^2$. I also feel that the word 'power' makes it a whole lot harder to prove. How do we prove this? What technique do we use?
-I don't have any idea how to proceed with this. I would love some hints.
-
-REPLY [3 votes]: Note that $$1! + 2! + 3! + 4! = 33$$
-$$1! + 2! + 3! + 4! + 5! = 153$$
-$$1! + 2! + 3! + 4! + 5! + 6! = 873 \ldots $$
-The last digit of the numbers is $3$ (this happens because for $n>4$ the last digit of $n!$ is $0$). Now for a number to be a perfect square the last digit should be one of the digits $0,1,4,5,6,9$. Hence for $n>3$, $\sum\nolimits_{i = 1}^n {i!} $ cannot be a perfect square. (This rules out squares; to rule out higher powers, note that for $n\ge 8$ the sum is divisible by $9$ but not by $27$ -- since $27\mid n!$ for $n\ge 9$ and $1!+\cdots+8!=46233=9\cdot 5137$ with $3\nmid 5137$ -- so it cannot be a $k$-th power with $k\ge 3$ either; the remaining cases $n=4,5,6,7$ are checked directly.)<|endoftext|>
-TITLE: Find Nth formula of recursive formula $a_n=a_{n-1}+n(n-1)a_{n-2}$
-QUESTION [8 upvotes]: $$a_n=a_{n-1}+n(n-1)a_{n-2}$$
-$$a_0=1, a_1=-\frac{1}{2}$$
-Is it possible to find an explicit formula for $a_n$ just by using $a_0$ and $a_1$?
-I know how to solve this problem if $a_n=Aa_{n-1}+Ba_{n-2}$ where $A$ and $B$ are real constants, but in this problem that is not the case.
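-For reference, the first few terms can be generated exactly from the recurrence itself; a quick sketch with Python's fractions module (the function name is mine), which gives something to test any proposed closed form against:

```python
from fractions import Fraction

def terms(n):
    """a_0, ..., a_n for a_k = a_{k-1} + k(k-1) a_{k-2}, a_0 = 1, a_1 = -1/2."""
    a = [Fraction(1), Fraction(-1, 2)]
    for k in range(2, n + 1):
        a.append(a[k - 1] + k * (k - 1) * a[k - 2])
    return a

# The doubled terms 2*a_k are integers, which makes them easy to list.
print([int(2 * t) for t in terms(7)])  # → [2, -1, 3, -3, 33, -27, 963, -171]
```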
- -REPLY [7 votes]: Let $\displaystyle\;b_n = \frac{a_n}{n!}$; the recurrence relation at hand can be transformed to
-$$n ( b_n - b_{n-2} ) = b_{n-1}\quad\text{ with }\quad \begin{cases} b_0 = 1\\b_1 = -\frac12\end{cases}\tag{*1}$$
-Let $f(z) = \sum_{n=0}^\infty b_n z^n$ be the OGF for $b_n$. If we multiply $(*1)$ by $z^n$ and sum from $n = 2$, we get
-$$\begin{align}
& z\frac{d}{dz}\left[ (1-z^2) f(z) - 1 + \frac{z}{2} \right] = z(f(z)-1)\\
\iff & (1-z^2) \frac{df(z)}{dz} - (1+2z) f(z) = -\frac{3}{2}
\end{align}
$$
-Solving the ODE gives us
-$$f(z) = \frac{5 - 3\sin^{-1}(z)}{2(1-z)\sqrt{1-z^2}} - \frac{3}{2(1-z)}$$
-Expanding $f(z)$ as a power series in $z$, we obtain the following ugly expression for $a_n$:
-$$a_n = \frac{n!}{2}\left[
5\sum_{p=0}^{\lfloor \frac{n}{2}\rfloor} \frac{\binom{2p}{p}}{4^p}
- 3\sum_{s=0}^{\lfloor\frac{n-1}{2}\rfloor} \sum_{p=0}^s \frac{\binom{2p}{p}\binom{2s-2p}{s-p}}{4^s(2p+1)} - 3
\right]$$
-As a double check, the first few values of $a_n$ according to this formula are
-$$(2a_0,2a_1,\ldots) = 2, -1, 3, -3, 33, -27, 963, -171, 53757, 41445, 4879575, 9438525, \ldots$$
-and they do satisfy the recurrence relation.
-Update
-It turns out we can simplify the mess a little bit.
-The recurrence relation on $b_n$ can be rewritten as $b_n = (n+1)(b_{n+1} - b_{n-1})$. The RHS has the form of a finite difference. We can use it to get rid of one level of summation. The end result is, for all $n \ge 0$,
-$$a_n = \frac{(n+1)!}{2}\left[
5\frac{\binom{2r}{r}}{4^r}
- 3\sum_{p=0}^s \frac{\binom{2p}{p}\binom{2s-2p}{s-p}}{4^s(2p+1)}
\right]
\quad\text{ with }\quad
\begin{cases}
r = \lfloor \frac{n+1}{2}\rfloor\\
s = \lfloor \frac{n}{2}\rfloor
\end{cases}
$$<|endoftext|>
-TITLE: rank of an abelian group and its embedding into a vector space
-QUESTION [5 upvotes]: I am confused about the rank of an abelian group.
-In this wiki page, we have that
-$$\mathbb{Z}^n \oplus \mathbb{Z}_{q_1} \oplus \cdots \oplus \mathbb{Z}_{q_t}$$
-where $n$ is the rank.
-But in this wiki page, it says that the rank of an abelian group is the cardinality of a maximal linearly independent subset.
-Are these two ranks equivalent definitions?
-It also says that if $A$ is a torsion-free abelian group, then it embeds into a vector space over the rational numbers of dimension rank $A$. Isn't $A\cong \mathbb{Z}^n$ where $n$ is the rank? Why do we need the rational numbers?
-It also says that:
-If $A$ is torsion-free then the canonical map $A \to A \otimes\mathbb{Q}$ is injective and the rank of $A$ is the minimum dimension of a $\mathbb{Q}$-vector space containing $A$ as an abelian subgroup.
-How to show this?
-
-REPLY [3 votes]: If $\{x_1,\dots,x_m\}$ is a maximal linearly independent subset of a finitely generated abelian group $G$ and $H=\mathbb{Z}x_1+\dots+\mathbb{Z}x_m$, then $G/H$ is torsion.
-Otherwise, let $y+H$ be an element in $G/H$ with zero annihilator. Then $\{x_1,\dots,x_m,y\}$ is linearly independent (easy check), contradicting maximality.
-Then we have, after tensoring the exact sequence $0\to H\to G\to G/H\to0$ with $\mathbb{Q}$ (and using that $(G/H)\otimes\mathbb{Q}=0$ because $G/H$ is torsion),
-$$
G\otimes\mathbb{Q}\cong H\otimes\mathbb{Q}\cong
\mathbb{Z}^m\otimes\mathbb{Q}\cong\mathbb{Q}^m
$$
-Now suppose $G\cong\mathbb{Z}^n \oplus \mathbb{Z}_{q_1} \oplus \cdots \oplus \mathbb{Z}_{q_t}$.
-Tensoring it with $\mathbb{Q}$, we obtain
-$$
G\otimes\mathbb{Q}\cong
\mathbb{Z}^n\otimes\mathbb{Q}\cong\mathbb{Q}^n
$$
-The isomorphisms are as $\mathbb{Q}$-vector spaces, so $m=n$.
-$$I=\int_{0}^{1}\frac{(\arctan x)^2}{1+x^{2}}\ln\left ( 1+x^{2} \right )\mathrm{d}x=\frac{\pi^3}{96}\ln{2}-\frac{3\pi\zeta{(3)}}{128}-\frac{\pi^2G}{16}+\frac{\beta{(4)}}{2}$$
- where $G$ is Catalan's constant, and $$\beta(x)=\sum_{k=1}^{\infty}\frac{(-1)^{k-1}}{(2k-1)^{x}}$$ is the Dirichlet beta function.
- Integrate by parts with $$u=(\arctan{\,x})^2\ln{(1+x^2)}$$ $$v=\arctan{\,x}$$
- We have $$3I=\frac{\pi^3}{64}\ln{2}-2\int_0^{1}\frac{x(\arctan{\,x})^3}{1+x^2}\,dx$$
-But how does one calculate the latter integral? Could anybody please help by offering useful hints or solutions? I think it is very difficult to prove.
-
-REPLY [8 votes]: Take $\arctan\left(x\right)=u,\,\frac{dx}{1+x^{2}}=du$. Then $$I=\int_{0}^{1}\frac{\left(\arctan\left(x\right)\right)^{2}}{1+x^{2}}\log\left(1+x^{2}\right)dx=\int_{0}^{\pi/4}u^{2}\log\left(1+\tan^{2}\left(u\right)\right)du
 $$ $$=-2\int_{0}^{\pi/4}u^{2}\log\left(\cos\left(u\right)\right)du
 $$ and now using the Fourier series $$\log\left(\cos\left(u\right)\right)=-\log\left(2\right)-\sum_{k\geq1}\frac{\left(-1\right)^{k}\cos\left(2ku\right)}{k},\,0\leq u<\frac{\pi}{2}
 $$ we have $$I=\frac{\log\left(2\right)\pi^{3}}{96}+2\sum_{k\geq1}\frac{\left(-1\right)^{k}}{k}\int_{0}^{\pi/4}u^{2}\cos\left(2ku\right)du
 $$ and the last integral is easy to evaluate: $$\int_{0}^{\pi/4}u^{2}\cos\left(2ku\right)du=\frac{\pi^{2}\sin\left(\frac{\pi k}{2}\right)}{32k}-\frac{\sin\left(\frac{\pi k}{2}\right)}{4k^{3}}+\frac{\pi\cos\left(\frac{\pi k}{2}\right)}{8k^{2}}
 $$ so we have $$I=\frac{\log\left(2\right)\pi^{3}}{96}+\pi^{2}\sum_{k\geq1}\frac{\left(-1\right)^{k}\sin\left(\frac{\pi k}{2}\right)}{16k^{2}}-\sum_{k\geq1}\frac{\left(-1\right)^{k}\sin\left(\frac{\pi k}{2}\right)}{2k^{4}}+\pi\sum_{k\geq1}\frac{\left(-1\right)^{k}\cos\left(\frac{\pi k}{2}\right)}{4k^{3}}
 $$ and now observing that $$\cos\left(\frac{\pi k}{2}\right)=\begin{cases}
-1, & k\equiv2\pmod 4\\
1, & k\equiv0\pmod 4\\
0, & \textrm{otherwise}
\end{cases}
 $$ and $$\sin\left(\frac{\pi k}{2}\right)=\begin{cases}
-1, & k\equiv3\pmod 4\\
1, & k\equiv1\pmod 4\\
0, & \textrm{otherwise}
\end{cases}
 $$ we have $$I=\frac{\log\left(2\right)\pi^{3}}{96}-\frac{\pi^{2}}{16}G+\frac{\beta\left(4\right)}{2}-\frac{3\pi\zeta\left(3\right)}{128}\approx 0.064824,$$ where $G$ is Catalan's constant and the last sum is evaluated using the relation between the Dirichlet eta function and the Riemann zeta function.<|endoftext|>
-TITLE: Limit involving the inverse beta regularized function
-QUESTION [5 upvotes]: Let $0
-TITLE: Intuitive interpretation of the Thom isomorphism and relative cohomology
-QUESTION [10 upvotes]: Let $p: E \rightarrow B $ be an oriented real vector bundle of rank $n$. Then there exists a unique class $u \in H^n(E, E - B; \mathbb{Z})$, where $B$ is embedded into $E$ as the zero section, such that for any fiber $F$ the restriction of $u$ to $F$ is the class induced by the orientation of $F$. $H^k(E) \cong H^{k+n}(E,E - B)$ and this isomorphism is given explicitly by $\, x \mapsto x \cup u$. I would like to get a feel for this isomorphism. For the record: I am only really interested in smooth vector bundles over manifolds.
-If $k+n$ is greater than the dimension of the base space then I think this is easily interpreted (though please correct me if I say something wrong or misleading). Writing out the long exact sequence associated with the relative cohomology, we see that $H^{k+n}(E)$ is zero as $E$ deformation retracts to $B$. We see that in this case $H^{k+n}(E,E - B) \cong H^{k+n-1}(E - B)$. The isomorphism $H^k(E) \cong H^{k+n}(E,E - B)$ then tells us about the cohomology of $H^{k+n-1}(E - B)$, which I can "feel" is the twisting of the vector bundle.
-For the case $k+n$ less than the dimension of the manifold, it is less clear for me how to think of relative cohomology.
-I know that Bott and Tu have some interpretation like this and a lot of people swear by that book, but to be honest that book just confused me. Maybe I should try again...
Any other ideas?
-
-REPLY [2 votes]: Sometimes these things make more intuitive sense by passing to the representing spectra, but I do not know if you are interested in that degree of generality. If you want a more "low-tech" approach, I can recommend Milnor-Stasheff's Characteristic Classes.
-By excision, that relative cohomology is the same as the disk bundle relative to its boundary. In that case, we can think about it, with coefficients in $\mathbb{R}$, as having to do with forms on the disk bundle that vanish on the boundary. We are looking for a form which integrates to $1$ on each fiber. There is always a choice of such a thing when the bundle is trivial, but when we glue, we might worry about a $1$ becoming a $-1$. An orientation says we can make choices that do not cancel out. Once we have such a thing, cupping with it is an isomorphism with inverse given by "integrating it out."<|endoftext|>
-TITLE: How to solve the following equation involving an exponential function
-QUESTION [7 upvotes]: How do you solve $10^x = x$? I'm not sure how to solve this algebraically. Using log functions wasn't enough.
-
-REPLY [2 votes]: I can show you some interesting ways to find the answer.
-Start with $x=\log_{10}(x)$.
-Substitute this into itself to get $x=\log_{10}(\log_{10}(x))$.
-Repeat this infinitely: $$x=\log_{10}(\log_{10}(\log_{10}(\dots\log_{10}(x)\dots)))$$
-Try plugging in a random number for the $x$ inside the logarithms and use a calculator that can calculate complex numbers with logarithms to find the result.
-Plugging in different numbers may produce different answers, but all answers should work to solve $x=10^x$.
-For numerical reassurance (I used Google):
-$$\log(2)=0.30102999566$$
-$$\log(\log(2))=-0.52139022765$$
-$$\log(\log(\log(2)))=-0.2828+1.3643i$$
-$$\log(\log(\log(\dots\log(2)\dots)))=-0.119193073+0.750583294i$$
-When I try to plug this back into $x=10^x$, I get that it works, with a very small amount of error.
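-The iteration is easy to script if you do not have such a calculator handy; a quick sketch in Python, using the principal branch of the complex logarithm (so it lands on the particular solution quoted above):

```python
import cmath
import math

LN10 = math.log(10)

# Iterate x -> log10(x) in the complex plane, starting from 2.
# A fixed point of the principal-branch map x = log10(x) satisfies 10^x = x.
z = 2 + 0j
for _ in range(200):
    z = cmath.log(z) / LN10

print(z)                             # roughly -0.119193 + 0.750583j
print(abs(cmath.exp(z * LN10) - z))  # residual of 10^z - z, close to 0
```

-Starting from other values, or choosing other branches of the logarithm, may land on other solutions of $10^x=x$.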
-Also note that:
-$$10^x=10^{x\pm\log_{10}(e)2\pi in},\quad n=0,1,2,3,\dots$$
-And using that, we get $$x=10^{x+\log_{10}(e)2\pi in}\implies x=\log_{10}(x)\mp\log_{10}(e)2\pi in$$
-And putting this into our substitution method:
-$$x=\log_{10}(\log_{10}(\dots\log_{10}(x)\mp\log_{10}(e)2\pi in)\mp\log_{10}(e)2\pi in)\mp\log_{10}(e)2\pi in$$<|endoftext|>
-TITLE: Joint PDF of two random variables in a triangle
-QUESTION [8 upvotes]: Let the random variables $X$ and $Y$ have a joint PDF which is uniform
- over the triangle with vertices at $(0, 0), (0, 1)$ and $(1, 0)$.
-
-Find the joint PDF of $X$ and $Y$.
-
-
-So apparently the answer is $2$. And it's related to the area of this triangle, which we can easily see is $1/2$. How do we get that answer? I guess I don't understand the meaning of joint PDF. Could someone explain that to me too?
-
-REPLY [11 votes]: The random variable $(X,Y)$ is uniformly distributed over the region, let's call it $R$, i.e. the pdf is
-$f_{X,Y}(x,y) = k$ for some constant $k$ on the region. Now let's integrate over the region:
-$$\int\int_R f_{X,Y}(x,y)dxdy = \int\int_R k \:dxdy$$
-$$\int\int_R f_{X,Y}(x,y)dxdy = k \cdot \operatorname{area}(R)$$
-$$\int\int_R f_{X,Y}(x,y)dxdy = k \cdot \frac{1}{2}$$
-We also know that
-$$\int\int_R f_{X,Y}(x,y)dxdy = 1$$
-so
-$$k \cdot \frac{1}{2} = 1 \Rightarrow k = 2 $$
-And we end up with this pdf:
-$$f_{X,Y}(x,y) =
\begin{cases}
2, & (x,y) \in R \\
0, & \text{otherwise}
\end{cases}
$$
 What is the maximum number of houses on this road?
-
-Brute-forcing this with a computer, I found that $25$ is the maximum. I've been looking for a formal proof, to no avail. My combinatoric skills are very limited, perhaps someone can come up with a short proof...
-
-REPLY [10 votes]: One way to show that $25$ is the maximum would be to observe that the chain
-$$11\to22\to6\to17\to1\to12\to23\to7\to18\to2\to13\to24\to8\to19\to3\to14\to25\to9\to20\to4\to15\to26\to10\to21\to5\to16$$
-where each step in the chain either goes up $11$ or down $16$, accounts for all the numbers from $1$ to $26$. This shows that in any stretch of $26$ houses, all houses have the same colour as the $11$th house, so a road with $26$ or more houses would be all one colour, contradicting the requirement that there is at least one house of each colour; hence at most $25$ houses are possible.<|endoftext|>
-TITLE: Baer Sum notation requires clarification.
-QUESTION [6 upvotes]: I am working on the Baer sum and I have my book by Rotman, Introduction to Homology, and also Mac Lane's book Homology, and they use notation I am puzzled by.
-I have understood the Baer sum of extensions in a different manner, but I want to understand their notation, as I understand it is used to extend the construction beyond short exact sequences.
-Let
-$$E: B\overset{i}{\hookrightarrow} E\overset{\pi}{\twoheadrightarrow}A$$
-and
-$$F: B\overset{j}{\hookrightarrow} F\overset{p}{\twoheadrightarrow}A$$
-be two extensions of $A$ by $B$. The first step is the direct sum
-$$E\oplus F: B\oplus B\overset{i\oplus j}{\hookrightarrow} E\oplus F\overset{\pi\oplus p}{\twoheadrightarrow}A\oplus A$$
-Nothing difficult so far, another short exact sequence that holds no mysteries for me. They define the diagonal morphism $\bigtriangleup_B:B\to B\oplus B$ such that $\bigtriangleup_B(b)=b\oplus b$ and the codiagonal one $\bigtriangledown_B:B\oplus B\to B$, such that $\bigtriangledown_B(a\oplus b)=a+b$. From this they demonstrate that for $f,g:A\to B$ we have $f+g=\bigtriangledown_B\circ(f\oplus g)\circ \bigtriangleup_A$, which again doesn't bother me. It is easy to show from the given definitions.
My issue arises in how to interpret this
-$$\bigtriangledown_B\circ(E\oplus F)\circ \bigtriangleup_A$$
-where $E\oplus F$ here is the entire extension. I do not understand how I am to view the chain, modules and morphisms in this instance and notation, especially the middle part. I do know that the Baer sum is equal to $X/N$ with $X=\{a\oplus b\in E\oplus F:\pi(a)=p(b)\}$ and $N=\{i(n)\oplus -j(n):n\in B\}$.
-How to connect these two definitions currently eludes me. The main issue is that I do not know how to interpret that notation and hence cannot make a connection. I'd appreciate it if someone could show me, step by step, how the composition turns the direct-sum extension (which is fairly easy to understand) into the final extension, and why.
-Best regards and thanks for any assistance.
-
-REPLY [8 votes]: I guess the author meant the following: let $\operatorname{Ext}^1(A,B)$ be the set of extensions of $A$ by $B$. Then $\operatorname{Ext}^1(A,B)$ is a covariant functor in $B$ and a contravariant functor in $A$.
-So, if $f:A'\rightarrow A$ and $E\in\operatorname{Ext}^1(A,B)$, you can take the image of $E$ under the induced map; this image will be denoted by $E\circ f\in\operatorname{Ext}^1(A',B)$. Namely, $E\circ f$ is the pull-back $E\circ f=A'\times_A E$:
-$$\require{AMScd}
\begin{CD}
0@>>>B@>>>E\circ f@>>>A'@>>>0\\
@.@|@VVV@VfVV@.\\
0@>>>B@>>>E@>>>A@>>>0
\end{CD}$$
-And similarly, if $g:B\rightarrow B'$ is any morphism, write $g\circ E\in\operatorname{Ext}^1(A,B')$ for the corresponding extension. It is dually the pushout:
-$$\require{AMScd}
\begin{CD}
0@>>>B@>>>E@>>>A@>>>0\\
@.@VgVV@VVV@|@.\\
0@>>>B'@>>>g\circ E@>>>A@>>>0
\end{CD}$$
-Now you have an extension $E\oplus F\in\operatorname{Ext}^1(A\oplus A,B\oplus B)$. Just take $\nabla_B\circ(E\oplus F)\circ\Delta_A$.
In other words, the Baer sum of $E$ and $F$ is exactly your quotient $X/N$.<|endoftext|> -TITLE: Lee, Introduction to Smooth Manifolds, Change of Coordinates -QUESTION [6 upvotes]: In all versions of John M. Lee's Introduction to Smooth Manifolds, he claims that -$$\left(\psi\circ\varphi^{-1}\right)_*\left.\frac{\partial}{\partial -x^i}\right|_{\varphi\left(p\right)}=\left.\frac{\partial\tilde -x^j}{\partial -x^i}\left(\varphi\left(p\right)\right)\frac{\partial}{\partial\tilde -x^j}\right|_{\psi\left(p\right)},$$ -where $\left(U,\varphi\right)=\left(U,\left(x^i\right)\right)$ and $\left(V,\psi\right)=\left(V,\left(\tilde x^i\right)\right)$ are smooth charts of some smooth manifold such that $p\in U\cap V$. -However, this is what I computed: -$$\begin{align}\left(\psi\circ\varphi^{-1}\right)_*\left.\frac{\partial}{\partial x^i}\right|_{\varphi\left(p\right)}&=\psi_*\left(\left(\varphi^{-1}\right)_*\left.\frac{\partial}{\partial x^i}\right|_{\varphi\left(p\right)}\right)\\&=\psi_*\left.\frac{\partial}{\partial x^i}\right|_p\\&=\left.\frac{\partial\tilde x^j}{\partial x^i}\left(\color{red}{p}\right)\frac{\partial}{\partial\tilde x^j}\right|_{\psi\left(p\right)}.\end{align}$$ -I highlighted my difference in red. What am I doing wrong? - -REPLY [3 votes]: Please allow me to introduce another notation. Let $M$ be a smooth manifold with dimension $ \ m \in \mathbb{N}^*$, $U$ and $V$ be open sets, $p \in U \cap V$, $x:U \to \mathbb{R}^m \ $ and $ \ y:V \to \mathbb{R}^m \ $ be charts at $p$ and $ \ i: im(x) \hookrightarrow \mathbb{R}^m \ $ and $ \ j: im(y) \hookrightarrow \mathbb{R}^m \ $ be the inclusion maps. As usual, we denote the coordinate maps $ \ x = (x^1,...,x^m)$, $y= (y^1,...,y^m)$, $i = (i^1,...,i^m) \ $ and $ \ j = (j^1,...,j^m)$. So, for all $ \ \mu \in \{ 1,...,m \}$, the maps $ \ i^{\mu}: im(x) \to \mathbb{R} \ $ and $ \ j^{\mu}: im(y) \to \mathbb{R} \ $ are just restrictions of the projection onto the $\mu$-th coordinate. 
-Treating tangent vectors as derivations acting on germs, first note that, $\forall h \in C^{\infty} \big( x(p) \big)$, we have -\begin{align*} \displaystyle \left\{ \Big[ d(x^{-1})|_{x(p)} \Big] \left( \left. \frac{\partial}{\partial i^{\mu}} \right|_{x(p)} \right) \right\}(h) & = \left[ \left. \frac{\partial}{\partial i^{\mu}} \right|_{x(p)} \right] (h \circ x^{-1}) = \\ & = [\partial_{\mu} (h \circ x^{-1} \circ i^{-1})] \Big(i \big( x(p) \big) \Big) = \\ & = [\partial_{\mu} (h \circ x^{-1})] \big( x(p) \big) = \\ & = \left( \left. \frac{\partial}{\partial x^{\mu}} \right|_{p} \right) (h) \ \ . -\end{align*} -Then, $\forall \mu \in \{ 1,...,m \}$, we have $$\displaystyle \Big[ d(x^{-1})|_{x(p)} \Big] \left( \left. \frac{\partial}{\partial i^{\mu}} \right|_{x(p)} \right) = \left. \frac{\partial}{\partial x^{\mu}} \right|_{p} \ \ . $$ By the chain rule, $\forall \mu \in \{ 1,...,m \}$, we conclude that -\begin{align*} -\displaystyle \big[ d(y \circ x^{-1})|_{x(p)} \big] \left( \left. \frac{\partial}{\partial i^{\mu}} \right|_{x(p)} \right) & = \Big[ dy|_{x^{-1} (x(p))} \circ d(x^{-1})|_{x(p)} \Big] \left( \left. \frac{\partial}{\partial i^{\mu}} \right|_{x(p)} \right) = \\ & = (dy|_p) \left( \left. \frac{\partial}{\partial x^{\mu}} \right|_{p} \right) = \\ & = \sum_{\nu = 1}^{m} \left[ \left( \left. \frac{\partial}{\partial x^{\mu}} \right|_{p} \right) (j^{\nu} \circ y) \right] \cdot \left. \frac{\partial}{\partial j^{\nu}} \right|_{y(p)} = \\ & = \sum_{\nu = 1}^{m} \left[ \left( \left. \frac{\partial}{\partial x^{\mu}} \right|_{p} \right) (y^{\nu}) \right] \cdot \left. \frac{\partial}{\partial j^{\nu}} \right|_{y(p)} = \\ & = \sum_{\nu = 1}^{m} \frac{\partial y^{\nu}}{\partial x^{\mu}} (p) \cdot \left. \frac{\partial}{\partial j^{\nu}} \right|_{y(p)} \ \ . -\end{align*} -Or we can compute, $\forall \mu \in \{ 1,...,m \}$, -\begin{align*} -\displaystyle \big[ d(y \circ x^{-1})|_{x(p)} \big] \left( \left. 
\frac{\partial}{\partial i^{\mu}} \right|_{x(p)} \right) & = \sum_{\nu = 1}^{m} \left[ \left( \left. \frac{\partial}{\partial x^{\mu}} \right|_{p} \right) (y^{\nu}) \right] \cdot \left. \frac{\partial}{\partial j^{\nu}} \right|_{y(p)} = \\ & = \sum_{\nu = 1}^{m} [\partial_{\mu} (y^{\nu} \circ x^{-1})] \big( x(p) \big) \cdot \left. \frac{\partial}{\partial j^{\nu}} \right|_{y(p)} = \\ & = \sum_{\nu = 1}^{m} [\partial_{\mu} (y^{\nu} \circ x^{-1} \circ i^{-1})] \Big( i \big( x(p) \big) \Big) \cdot \left. \frac{\partial}{\partial j^{\nu}} \right|_{y(p)} = \\ & = \sum_{\nu = 1}^{m} \left\{ \left[ \left. \frac{\partial}{\partial i^{\mu}} \right|_{x(p)} \right] (y^{\nu} \circ x^{-1}) \right\} \cdot \left. \frac{\partial}{\partial j^{\nu}} \right|_{y(p)} = \\ & = \sum_{\nu = 1}^{m} \left[ \frac{\partial (y^{\nu} \circ x^{-1})}{\partial i^{\mu}} \right] \big( x(p) \big) \cdot \left. \frac{\partial}{\partial j^{\nu}} \right|_{y(p)} \ \ . -\end{align*} -Using that, $\forall \nu \in \{ 1,...,m \}$, $$y^{\nu} \circ x^{-1} = j^{\nu} \circ y \circ x^{-1} = (y \circ x^{-1})^{\nu}$$ is the $\nu$-th coordinate function of the map $y \circ x^{-1}$, we have the desired equality.<|endoftext|> -TITLE: Normalizing a quaternion -QUESTION [14 upvotes]: How do I normalize a quaternion $$q=w + \mathbf ix + \mathbf jy + \mathbf kz = a + v$$ ? -I already know: The normalized quaternion is called unit quaternion and can be calculated in this way: $$U_q = {q \over ||q||}$$ Does this mean I have to divide the quaternion by its "length"? How do I calculate its "length", like a 4D-vector? After that, how do I divide a quaternion by a number? Do I divide each part by the length individually? - -REPLY [20 votes]: Yes, by its length or norm as it's also called. 
-You get:
-$$U_q=\frac{w}{d} + \mathbf i \cdot \frac{x}{d} + \mathbf j \cdot \frac{y}{d} + \mathbf k \cdot \frac{z}{d}$$
-where $d = ||q|| = \sqrt {w^2 + x^2 + y^2 + z^2} $ is the norm.<|endoftext|>
-TITLE: $\prod _{k=2}^{n} {\log k}$ is big-$O$ of what?
-QUESTION [5 upvotes]: $$\prod _{k=2}^{n} {\log k}$$
-is a big-$O$ of what?
-I can see it is $O(n!)$, but is there a tighter bound?
-
-REPLY [7 votes]: If your product is $P(n)$, then $\log P(n) = \sum_{k=2}^n \log \log k
\le (n-1) \log \log n$, so $P(n) \le \exp((n-1) \log \log n) = (\log n)^{n-1}$.
Then $g\circ f:\Z\to\Z$ is a morphism and $\Z$ being initial it is unique and therefore equal to the identity of$~\Z$; similarly $f\circ g:R\to R$ is a morphism and $R$ being initial it is unique and therefore equal to the identity of$~R$, from which it follows that $f$ and $g$ are inverse morphisms, so $R$ is isomorphic to $\Z$.
-Now taking $\def\R{\Bbb R}R=\R$ it is clear that $f:\Z\to\R$ fails to be surjective so cannot have an inverse; by (the contrapositive of) the above argument this shows that $\R$ cannot be initial, and it does not admit a morphism $g:\R\to\Z$ at all. [That is a small lie. The argument uses the hypothesis that $R$ is initial twice: for the existence of $g$, and for equating $f\circ g$ to the identity of$~R$; one or the other must fail. But it is fairly easy to see that for $R=\R$ the former fails, and indeed one does have the property that any morphism $\R\to\R$ equals the identity, though it takes a bit of work to show that.]
-
-Next, and rather unrelated, some words about the tensor product proof that $\R$ does not even have, as the above might suggest, the weaker property that if a morphism $\R\to A$ exists, then it is unique. You can think of the $\otimes$ in the construction of the ring $S\otimes S$ as similar to the fraction bar used to construct $\Bbb Q$, with the following difference: in fractions we decree that one can cancel (or introduce) equal factors ($an/nb=a/b$), but for the tensor product one decrees that one can move a factor across the $\otimes$ symbol in either direction ($an\otimes b=a\otimes nb$). The difference has as a consequence that while sums of fractions can always be combined into a single fraction, sums of tensors cannot always be.
-The factors allowed to be moved are integers (the bare minimum), or possibly some larger class $R$ of scalars, which must then be attached as $\otimes_R$ to indicate this attribute of the construction; it serves as a gatekeeper that allows elements of (the ring) $R$ to cross the $\otimes_R$, but not others. Nonetheless, even for a bare $\otimes_\Z$, one can effectively pass rational factors across: to move $p/q$ from left to right, it suffices to move a factor $p$ from left to right and a factor $q$ from right to left (therefore $\otimes_\Bbb Q$ is not really different from $\otimes_\Z$). However, one cannot go beyond rationals: $\sqrt2\otimes_\Z1$ and $1\otimes_\Z\sqrt2$ are different, for if they could be shown equal using a finite number of tensor-equivalences, one would basically have produced a fraction $p/q$ equal to $\sqrt2$, which is impossible. (A formal proof requires work.)
-As a concluding remark: the ring $\Bbb Q$ and the finite fields $\Z/p\Z$ with $p$ prime do have the property that morphisms from them are unique if they exist. These are exactly the prime fields: fields without any proper sub-fields.<|endoftext|>
-TITLE: Why is it bad to pick a basis for a vector space?
-QUESTION [10 upvotes]: Reading `This Week's Finds', http://math.ucr.edu/home/baez/week247.html,
-I'm informed that one should avoid picking coordinate systems and I'm unsure why that is the case. Any help on the matter is appreciated.
-
-Linear algebra is all about vector spaces and linear maps. One of the lessons that gets drummed into you when you study this subject is that it's good to avoid picking bases for your vector spaces until you need them. It's good to keep the freedom to do coordinate transformations... and not just keep it in reserve, but keep it manifest!
- -As Hermann Weyl wrote, "The introduction of a coordinate system to geometry is an act of violence". - -Why? -Some googling has informed me that this issue is part of the larger notion of categorical naturality... - -REPLY [6 votes]: You have a mission to find out how strong the wind blows on the pick of the mountain. So you climb to the top, put an aparatus and you get a result that it blows 200km/h in one direction. Then you turn it by 90 degree clockwise and get record 80km/h. You end up with such picture: - -You know that tangent space to the surface of our planet is 2 dimentional vector space. So you choose coordinates and write your results in your coordinates [200,80]. You are sure that it is all data you can collect and you go back to give an report. In the base you say that wind blows 200km/h in one direction and 80km/h in other. Report receivers asks you additionally how were you oriented while taking first record. But you no longer remember. Hence your mission fails. -Conclusion -While modeling "real world" situations with vector spaces (in particular $\mathbb{R}^2$ or $\mathbb{R}^3$) you must be aware that in reality there is no canonical/natural choice of coordinates. In our situation by choosing coordinates, you assumed that tangent space has canonical coordinates. Obviously it does not. Just stand on the ground, look around and see if there are any.<|endoftext|> -TITLE: A proposed criterion for finding when an homogenous ideal is radical -QUESTION [5 upvotes]: Let $X$ be a projective variety over an algebraically closed field, and $I$ be the homogenous ideal of $X$ and $J$ be an ideal with the same zero set. Suppose that I know $I=\langle f_1,...f_n \rangle$ with $f_i$ of degree $d$, and $J= \langle g_1,...g_m \rangle$ with $g_i$ also of degree $d$. Does it follow that $J$ is also a radical ideal (and hence equal to $I$)? 
- -My motivation: I am trying to show that the quadratic relations (a posteriori Plucker relations) that one gets by a certain tensor contraction generate the ideal of the image of the Grassmannian under the Plucker embedding. I know that elements of the coordinate ring of the image can't satisfy any relations of degree 1. So if the criterion above holds then these quadratic relations are actually Plucker relations. - -REPLY [2 votes]: This criterion is not correct. Let $F_{123}$ be the variety $GL(4)/B$, $B$ a Borel subgroup, and $F_{12}=G/P_{12}$, $F_{23}=G/P_{23}$, where $P_{12}$ and $P_{23}$ are respective parabolic subgroups. These are all projective subvarieties of the product of Grassmannians $G_1^4 \times\cdots\times G_4^4$. -Then if you give Plucker coordinates to this product of Grassmann varieties $$(\sum a_i e_i,\sum_{i -TITLE: Existence of minimal prime ideal contained in given prime ideal and containing a given subset -QUESTION [14 upvotes]: Let $R$ be a unital commutative ring, $P$ $\subseteq$ $R$ a prime ideal, $X\subseteq P$ a subset. Show there exists a minimal (inclusion minimal) prime ideal contained in $P$ which contains $X$. - -My approach: Let $\langle X\rangle$ be the ideal generated by $X$. -$P^c$ is a multiplicative set, as $P$ is prime. $\langle X\rangle\subseteq P\subseteq R$ and $P^c\cap\langle X\rangle=\emptyset $ so by Zorn's lemma there exists a prime ideal $I_1$ satisfying $\langle X\rangle\subseteq I_1\subseteq P$. As $I_1$ is prime, its complement is a multiplicative subset which does not intersect $\langle X\rangle$, hence I can find a prime ideal $I_2$ satisfying $\langle X\rangle\subseteq I_2 \subseteq I_1$ and so on... -I've got an infinite descending chain of prime ideals and I can't be sure that the process will stop. Hints will be appreciated, thank you. -Edit: Can I conclude that every prime ideal contains a minimal prime ideal? - -REPLY [18 votes]: Hint: The intersection of a chain of prime ideals is prime.
- -Full solution: -The idea is to apply Zorn's lemma downwards. Consider the set $S:=\{Q \ | \ Q\text{ is a prime ideal of } R, X\subseteq Q\subseteq P\}$ ordered by inclusion. Note that $P\in S$ implies $S\neq\emptyset$. Consider a chain $\{Q_i\}_{i\in I}$ in $S$. Then $\bigcap_{i\in I} Q_i$ is in $S$ (as we check below), so by Zorn's lemma there exists an element minimal in $S$, i.e., minimal with respect to being prime inside $P$ and containing $X$. -Let us see that $Q:=\bigcap_{i\in I} Q_i$ is actually prime: let $ab\in Q$ with $a,b\in R\setminus Q$. Then there exist indices $j,k$ such that $a\not\in Q_j$, $b\not\in Q_k$, say with $Q_j\supseteq Q_k$. Therefore $a,b\not\in Q_k$, but $ab\in Q$ implies $ab\in Q_k$, contradicting the primeness of $Q_k$. - -Regarding your last question: Just apply the above proposition with $X:=0$ to get a minimal prime ideal of $R$ inside $P$.<|endoftext|> -TITLE: Prove that $\vert E \vert=0$ if $\frac{x+y}{2}\notin E$ for $x,y \in E$ -QUESTION [14 upvotes]: Let $E\subset\mathbb{R}$ measurable such that if $x\neq y, x,y\in E$ then $\frac{x+y}{2}\notin E$. -Prove that $\vert E \vert=0$. -Any idea? Thanks! - -REPLY [2 votes]: First we prove the following claim. - -Claim: if $a<b$ and $E\cap[a,b]$ contains at least two points, then $\lambda(E\cap[a,b])\leq (b-a)/2$. Indeed, write $A'=E\cap[a,b]$, fix $\epsilon'>0$, and pick $x,y\in A'$ with $x<\inf A'+\epsilon'$ and $y>\sup A'-\epsilon'$. The sets $\frac{x+A'}{2}$ and $\frac{y+A'}{2}$ each have measure $\lambda(A')/2$, each meets $E$ (hence $A'$) in at most one point by the midpoint hypothesis, they overlap each other in an interval of length at most $\epsilon'$, and together with $A'$ they all lie in $[\inf A',\sup A']\subseteq[a,b]$. Hence $2\lambda(A')\leq (b-a)+\epsilon'$ for every $\epsilon'>0$, which proves the claim. -Now let $J$ be any interval of finite length, put $A=E\cap J$, and suppose toward a contradiction that $\lambda(A) > 0$. Let $\epsilon = \lambda(A)/2$ and suppose that $O$ is an open set containing $A$, with $\lambda(O) < \lambda(A) + \epsilon = 3\lambda(A)/2$. Such a set exists because $A = E \cap J$ is an intersection of measurable sets, hence measurable, and $A$ has finite measure because it is a subset of $J$. -As $O$ is an open subset of $\mathbb R$, we may express it as a countable union of disjoint open intervals, $O = \cup_{n=1}^{\infty}U_n$, and each $U_n$ has finite length since $O$ has finite measure. -For each $n$, we have two possibilities. If $A \cap U_n = \emptyset$ or $A \cap U_n$ is a singleton, then clearly $\lambda(A \cap U_n) = 0$.
Otherwise, $A \cap U_n$ contains at least two distinct points of $A$ (hence of $E$), so $\lambda(A \cap U_n) = \lambda(E \cap (J \cap U_n)) \leq \lambda(J \cap U_n)/2 \leq \lambda(U_n)/2$. -Then -$$\begin{aligned} -\lambda(A) &= \lambda(A \cap O) \\ -&= \lambda\left(A \cap \left(\bigcup_{n=1}^{\infty}U_n\right)\right) \\ -&= \sum_{n=1}^{\infty}\lambda(A \cap U_n) \\ -&\leq \sum_{n=1}^{\infty}\lambda(U_n)/2 \\ -&= \frac{1}{2}\sum_{n=1}^{\infty}\lambda(U_n) \\ -&= \frac{1}{2}\lambda(O) \\ -&< \frac{3}{4}\lambda(A) -\end{aligned}$$ -This contradiction resulted from the assumption that $\lambda(A) > 0$, so in fact we must have $\lambda(A) = 0$. This shows that the intersection of $E$ with any finite-length interval has measure zero. Therefore -$$E = E \cap \mathbb R = E \cap \left(\bigcup_{n=-\infty}^{\infty} [n,n+1]\right) = \bigcup_{n=-\infty}^{\infty}(E \cap [n, n+1])$$ -also has zero measure, by countable additivity.<|endoftext|> -TITLE: Is there a function for every possible line? -QUESTION [15 upvotes]: I'm currently in Pre-Calculus (High School), and I have a relatively simple question that crossed my mind earlier today. If I were to take a graph and draw a random line of any finite length, in which no two points along this line had the same $x$ coordinate, would there be some function that could represent this line? If so, is there a way we can prove this? - -REPLY [4 votes]: The only straight lines in the $x$-$y$ plane that are not graphs of functions are those that are perfectly vertical. Those are of the form $x=c$, where $c$ is a constant. -All other lines can be expressed in the form $y=f(x)=mx+b$ where $m$ is the slope of the line and $b$ is the $y$-intercept -- the $y$ value when $x$ is $0$. -Given any two points on the line we can find this formula, and given its formula we can find any point on the line.
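As a small illustration (my own sketch, not part of the original answer; the point values are made up), the slope and intercept can be computed from any two points with distinct $x$-coordinates:

```python
def line_through(p1, p2):
    """Return (m, b) with y = m*x + b for the line through p1 and p2.

    Only works when the two x-coordinates differ; a vertical line
    x = c cannot be written as a function of x."""
    (x1, y1), (x2, y2) = p1, p2
    if x1 == x2:
        raise ValueError("vertical line: x = const is not a function of x")
    m = (y2 - y1) / (x2 - x1)  # slope
    b = y1 - m * x1            # y-intercept: the y value at x = 0
    return m, b

m, b = line_through((1, 3), (4, 9))
print(m, b)  # 2.0 1.0, i.e. y = 2x + 1
```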
-Such a function is called a linear function.<|endoftext|> -TITLE: Show $x-1$ is irreducible in $\mathbb{Z}_8[x]$ -QUESTION [9 upvotes]: How can I show that the polynomial $x-1$ is irreducible in $\mathbb{Z}_8[x]$? -$\mathbb{Z}_8[x]$ is not an integral domain so we cannot use degree considerations. I have tried reducing the problem mod 2 to conclude a factorization of $x-1$ must be of the form $(1+a_1x+a_2x^2+...+a_mx^m)(1+b_1x+b_2x^2+...+b_nx^n)$ where $a_i,b_i$ are even, except $b_1$ is odd. I also tried using induction on the degrees $m,n$ but to no avail. -Any help is appreciated! -EDIT: Upon trial and error I have discovered $(2x^2-x-1)(4x^2-2x+1)=x-1$ in $\mathbb{Z}_8[x]$, however this does not disprove anything because we do not know if the 2 polynomials on the left are units or not... - -REPLY [3 votes]: First of all I have to admit that I don't know what an irreducible polynomial over a non-integral domain is. -But if I understand well the OP wants to show that $X-1$ can't be written as a product of two non-invertible polynomials (of degree at least one). -Suppose the contrary, and write $X-1=fg$ with $\deg f\ge1$, $\deg g\ge1$. -If we work over a ring $R$ having only one prime ideal $\mathfrak p$ (as is the case for $\mathbb Z/8\mathbb Z$), then we arrive at a contradiction immediately: in $R/\mathfrak p$ we have $X-\overline 1=\overline f\overline g$ hence $\deg\overline f=0$ and $\deg\overline g=1$ (or vice versa). Then all coefficients of $f$, excepting $f_0$, belong to $\mathfrak p$, so they are nilpotent (the unique prime ideal is the nilradical). But obviously $f_0$ is invertible, so $f$ is invertible. -Remark. We showed that $X-1$ is "irreducible" over any commutative ring having only one prime ideal (in particular for $\mathbb Z/p^n\mathbb Z$ with $p$ prime, $n\ge1$).
One can't generalize this to commutative rings with at least two prime ideals: in $(\mathbb Z/6\mathbb Z)[X]$ we can write $X-1=(3X-1)(2X+1)$.<|endoftext|> -TITLE: Convexity, Hessian matrix, and positive semidefinite matrix -QUESTION [13 upvotes]: I would like to ask whether my understanding of convexity, Hessian matrix, and positive semidefinite matrix is correct. -For a twice differentiable function $f$, it is convex iff its Hessian $H$ is positive semidefinite. -The Hessian matrix $H$ can be calculated by: -https://en.wikipedia.org/wiki/Hessian_matrix -And the definition of positive semidefinite is: -$$x^THx\geqslant0$$ -https://en.wikipedia.org/wiki/Positive-definite_matrix#Negative-definite.2C_semidefinite_and_indefinite_matrices -For example, a function -$$f(x,y)=\frac{x^2}{y^4}$$ -where $x\geqslant0, y>0$. -Its Hessian is -$$ -\begin{bmatrix} -2y^{-4} & -8xy^{-5}\\ --8xy^{-5} & 20x^2y^{-6}\\ -\end{bmatrix} -$$ -And -$$x^THx=6x^2y^{-4}\geqslant0$$ -Therefore, $H$ is positive semidefinite and $f(x,y)$ is convex. -On the other hand, the determinant of $H$ is -$$40x^2y^{-10}-64x^2y^{-10}=-24x^2y^{-10}\leqslant0$$ -which means $f(x,y)$ is concave. -Since $f(x,y)$ is nonlinear, it cannot be both convex and concave, and there must be something wrong with the derivation above. -I would like to ask which part of my understanding is wrong. -Thank you. - -REPLY [4 votes]: First, you had $x^{T}Hx=6x^{2}y^{-4}$. -Here $x$ should be an arbitrary non-zero real-valued column vector, not the point itself. Let $x=\begin{bmatrix} -x_{1}\\ -x_{2} -\end{bmatrix}$, then $$x^{T}Hx=\begin{bmatrix} -x_{1} & x_{2} -\end{bmatrix}\begin{bmatrix} -2y^{-4} &-8xy^{-5} \\ - -8xy^{-5}& 20x^{2}y^{-6} -\end{bmatrix}\begin{bmatrix} -x_{1}\\ -x_{2} -\end{bmatrix}=2x_{1}^{2}y^{-4}+20x_{2}^{2}x^{2}y^{-6}-16x_{1}x_{2}xy^{-5}$$ -It is hard to tell whether this expression is positive or negative, while the only thing that is known is $x\geq 0,y> 0$.
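To make the sign ambiguity concrete, here is a quick numerical check (my own addition, not part of the answer): at the point $(x,y)=(1,1)$ the Hessian is $\begin{bmatrix}2&-8\\-8&20\end{bmatrix}$, and the quadratic form takes both signs as the test vector $(x_1,x_2)$ varies, so the matrix is indefinite there:

```python
def hessian_form(x, y, v1, v2):
    """Quadratic form (v1, v2) H (v1, v2)^T for the Hessian of f = x^2 / y^4."""
    h11 = 2 * y ** -4
    h12 = -8 * x * y ** -5
    h22 = 20 * x ** 2 * y ** -6
    return h11 * v1 ** 2 + 2 * h12 * v1 * v2 + h22 * v2 ** 2

print(hessian_form(1, 1, 1, 0))  # 2.0  -> positive for test vector (1, 0)
print(hessian_form(1, 1, 2, 1))  # -4.0 -> negative for (2, 1): indefinite
```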
-Second, you got the determinant of the Hessian matrix to be $40x^{2}y^{-10}-64x^{2}y^{-10}=-24x^{2}y^{-10}\leq 0$ and you concluded that the function was "concave". -While the expression you had for the determinant of the Hessian is correct, your conclusion needs reconsideration. -The determinant of the first principal minor is $2y^{-4}> 0$, since $y>0$. -If the determinant of the Hessian matrix is also greater than $0$, then the function is strictly convex. If the determinant of the Hessian is equal to $0$, then the Hessian is positive semi-definite and the function is convex. -For the function in question here, the determinant of the Hessian is $-24x^{2}y^{-10}\leq 0$. There is a lot of uncertainty here. If $x$ takes on the value of $0$ and you have a $0$ determinant for the Hessian, then you have $\Delta _{1}= 2y^{-4}> 0,\Delta _{2}=-24x^{2}y^{-10}= 0$; this is the criterion for positive semi-definiteness and the function is convex. -However, if $x$ takes on a non-zero value, then the determinant of the Hessian is indeed negative. In that case, you have $\Delta _{1}>0,\Delta _{2}< 0$. The Hessian matrix is actually indefinite and no conclusion about the concavity (or convexity) of the function can be made from the Hessian matrix. -There are a lot of ambiguities here.<|endoftext|> -TITLE: Existence of surjective homomorphism between Boolean algebras $\Lambda\subset\mathscr P(\mathscr B)\to\mathscr B$ (in ZF) -QUESTION [8 upvotes]: I am trying to prove the following theorem, due to Tarski according to W. A. J. Luxemburg on Reduced powers of the real number system and equivalents of the Hahn-Banach extension theorem: - -Given a Boolean algebra $\mathscr B$, there is a subalgebra $\Lambda$ of the Boolean algebra $\mathscr P(\mathscr B)$ and a surjective homomorphism $h: \Lambda \to \mathscr B$.
- -I need it in order to prove that $\Lambda/\ker h \cong \mathscr B $, but I have no real experience with Boolean algebras, have not found any reference, and in trying to prove it I have some doubts. -It is clear that Tarski proved it using only Zermelo-Fraenkel's axioms, and I must not depend on any extra axiom such as AC or BPI (that excludes the possibility of using Stone's representation theorem). - -REPLY [2 votes]: For $x\in\mathscr{B}$, let $I(x)=\{y\in\mathscr{B}:y\leq x\}$. Let $\Lambda$ be the subalgebra of $\mathscr{P}(\mathscr{B})$ generated by the sets $I(x)$ for all $x\in\mathscr{B}$. Then I claim there exists a homomorphism $h:\Lambda\to\mathscr{B}$ such that $h(I(x))=x$ for all $x$. -The key to proving this is noticing that if such an $h$ did not exist, this failure would be detected by a finite subalgebra of $\mathscr{B}$. Indeed, since $\Lambda$ is generated by the sets $I(x)$, there is at most one way to define $h$, and the only thing that can go wrong is if it is not well defined. That means that there are two Boolean combinations of sets of the form $I(x)$ which are equal in $\mathscr{P}(\mathscr{B})$ but such that the corresponding Boolean combinations of the elements $x$ in $\mathscr{B}$ are not equal. Such an occurrence is witnessed by finitely many elements of $\mathscr{B}$, namely the finitely many $x$'s such that $I(x)$ appears in either of the two Boolean combinations. Letting $\mathscr{B}_0\subseteq\mathscr{B}$ be the subalgebra generated by these finitely many elements, then the corresponding $h$ for the algebra $\mathscr{B}_0$ will also fail to be well-defined. -This means that we may assume $\mathscr{B}$ is finite. But now we can just use the classification of finite Boolean algebras to check that our $h$ is well-defined. Indeed, we have $\mathscr{B}\cong\mathscr{P}(A)$ for some finite set $A$; let us identify $\mathscr{B}$ with $\mathscr{P}(A)$. It is easy to check that the subalgebra $\Lambda$ is all of $\mathscr{P}(\mathscr{P}(A))$.
Let $g:A\to \mathscr{P}(A)$ be the map $g(a)=\{a\}$. It is then straightforward to verify that if we define $h:\mathscr{P}(\mathscr{P}(A))\to\mathscr{P}(A)$ by $h(S)=g^{-1}(S)$, then $h$ is a homomorphism satisfying $h(I(x))=x$ for each $x\in\mathscr{P}(A)$. -By the way, if I'm not mistaken, $\Lambda$ can be considered as the quotient of the free Boolean algebra on the set $\mathscr{B}$ by relations saying that meets of the generators are computed as in $\mathscr{B}$ (clearly these relations hold in $\Lambda$ among the generators $I(x)$; conversely, you can use the explicit description when $\mathscr{B}$ is finite to show that all relations among the generators are implied by the meet relations). The surjective homomorphism $\Lambda\to\mathscr{B}$ is then the quotient map that further imposes the relations that joins are computed as in $\mathscr{B}$.<|endoftext|> -TITLE: How many ways are there to choose 3 subsets $A,B,C$ of a set $\{1,2,3...n\}$ such that $ A \cap B \cap C = \emptyset $ -QUESTION [5 upvotes]: I believe that this is a pretty simple question in terms of combinatorics. -Assuming we have the following set $\{ {1,2,3...n} \}$ -In how many ways can we choose 3 subsets (A,B,C) such that the intersection between all of them is empty? -$ A \cap B \cap C = \emptyset $ -Well, what I have so far is the following: -For A we have $ 2^n $ options (where n is the size of the set) -Then, for B, the remaining options are exactly $ 2^{n-|A|} $ since we can choose only from subsets that do not have elements in common with A. -And finally, for C, since we already have an empty intersection, we can choose whatever subset we want and get the expected result $ \emptyset $. -But for some reason it feels like the numbers are too big. -If we take for example the set {1,2}, then, according to my "formula", the number of options should be: -$ \binom{4}{1} * \binom{2}{1} * \binom{2}{1} = \frac{4!}{3!} * \frac{2!}{1!} * \frac{2!}{1!} = 16 $ -Does it make sense?
- -REPLY [11 votes]: Think of it this way: for each $i\in\{1,\cdots,n\}$, $i$ can be in at most two sets. Therefore, $i$ could be in none of the sets, $i$ could be in exactly one of $A$, $B$, or $C$, or $i$ could be in exactly two of $A$, $B$, and $C$. This gives $7$ possible situations for each $i$. Since the sets for each $i$ can be chosen independently, this gives $7^n$ ways to define $A$, $B$, and $C$. -This works for $n=1$. In this case, there are: - -$A=B=C=\emptyset$. -($A=\{1\}$ and $B=C=\emptyset$) or ($B=\{1\}$ and $A=C=\emptyset$) or ($C=\{1\}$ and $A=B=\emptyset$) -($A=B=\{1\}$ and $C=\emptyset$) or ($A=C=\{1\}$ and $B=\emptyset$) or ($B=C=\{1\}$ and $A=\emptyset$)<|endoftext|> -TITLE: Find Taylor series of $f(z)=\frac{1}{z^2-1}$ for $1<|z+2|<3$ -QUESTION [5 upvotes]: find Taylor series of $f(z)=\frac{1}{z^2-1}$ for $1<|z+2|<3$, -No matter what I do I reach the conclusion this series does not exist unless -$|z+2|<1$ but I don't understand why... -I can write $f(z)=\frac{1}{(z-1)(z+1)}$ and then try to develop the series for each factor separately $f_1(z)=\frac{1}{(z+1)},f_2(z)=\frac{1}{(z-1)}$. -Both are analytic at any $z \in \mathbb{C}$ with $z \neq \pm 1$, so $f$ must be analytic on the disc $1<|z+2|<3$ since $z_0=\pm1$ is not in the disc... -So the disc should be an appropriate Domain to develop a Taylor Series... -But any series I find does not converge in it... -Can someone please explain to me why this is? - -REPLY [2 votes]: $$\frac1{z^2-1} = \frac12 \left (\frac1{z-1} - \frac1{z+1} \right ) = \frac12 \left (\frac1{z+2-3} - \frac1{z+2-1} \right )$$ -$|z+2| \gt 1$ and $|z+2| \lt 3$ implies that we approach the expansions of the two terms differently: -$$\frac1{z+2-3} = -\frac13 \frac1{1-\frac{z+2}{3}} = -\frac13 \sum_{k=0}^{\infty} \left (\frac{z+2}{3} \right )^k$$ -$$\frac1{z+2-1} = \frac1{z+2} \frac1{1-\frac1{z+2}} = \sum_{k=1}^{\infty} \frac1{(z+2)^k} $$ -Combine the expansions and you are there.<|endoftext|> -TITLE: Given 10 white, 9 green and 7 black.
In how many ways can you select one or more balls? -QUESTION [8 upvotes]: Question -Assuming the balls to be identical except for difference in colours, the number of ways in which one or more balls can be selected from 10 white, 9 green and 7 black balls is? -Answer in book is $(10+1)(9+1)(7+1)-1 = 880-1=879$ -I don't understand the answer -What I have understood and what I have done -The question is asking that out of the balls given in how many ways I can select one ball + in how many ways I can select two balls + . . . + in how many ways I can select 25 balls + in how many ways I can select 26 balls. -One ball can be selected in 3 ways i.e. either you select a green, white or black ball. -Two balls can be selected in 3+3=6 ways i.e. either you select two green or two black or two white or one green one black or one green one white or one black one white. -Only this much I can do. -I don't know how to write this in terms of combinations. So please help me with this. - -REPLY [10 votes]: Your question is similar to finding the number of divisors of a number of the form $$\color{blue}{a^{10}b^9c^7}$$ other than $\color{red}{1}$, where $a,b,c$ are primes. The divisor $1$ appears only in the case when no white, green or black ball is selected (i.e. none of $a,b$ and $c$ is included in the factor), which is prohibited; you have to choose at least $1$ ball. -So, of course, this can be done in $$(10+1)(9+1)(7+1)-\color{red}{1}$$ ways.<|endoftext|> -TITLE: If the Fourier series $S_nf\longrightarrow g$ in $L^p$ then $f=g$. -QUESTION [7 upvotes]: Suppose that $f\in L^1(\mathbb S^1)$ where $\mathbb S^1=\mathbb R/\mathbb Z$. Suppose that the sequence of partial Fourier sums $\{S_nf\}_{n\geq 1}$ converges in $L^p(\mathbb S^1)$ toward some $g\in L^p(\mathbb S^1)$ (here $p\in [1,\infty ]$). Show that $f=g$. -My work -If $\{S_nf\}_{n\in\mathbb N}$ converges, do we necessarily have that $S_nf$ converges also to $f$? If yes, the result is obvious, but that may be what I have to show.
Any help would be appreciated. - -REPLY [2 votes]: If $\lim_n\|S_n f - g\|_{L^p}=0$ for some $g \in L^p$ with $p\in [1,\infty)$, then, for $n \ge |k|$ and for $q \in (1,\infty]$ conjugate to $p$, -\begin{align} - |\hat{f}(k)-\hat{g}(k)| & =|\widehat{(S_n f)}(k)-\widehat{g}(k)| \\ - & \le \frac{1}{2\pi}\int_{0}^{2\pi}|S_nf-g|dx \\ - & \le \frac{1}{2\pi}\|S_n f-g\|_p\|1\|_q \rightarrow 0 \mbox{ as }n\rightarrow\infty. -\end{align} -So it is definitely true that $\hat{f}(k)=\hat{g}(k)$ for all $k$. Both $f$ and $g$ are in $L^1$. So the problem of showing that $f=g$ a.e. is reduced to looking at the uniqueness of Fourier coefficients for functions in $L^1$. Every $2\pi$-periodic continuously-differentiable function $h$ is the uniform limit of its Fourier series, which gives -$$ - \int_{0}^{2\pi}(f-g)hdx=\lim_{n}\int_{0}^{2\pi}(f-g)S_{n}hdx =\lim_n 0=0. -$$ -Using the bounded convergence theorem, you can obtain -$$ - 0=\int_{0}^{2\pi}(f-g)\chi_{[0,t]}dx=\int_{0}^{t}(f-g)dx. -$$ -By the Lebesgue differentiation theorem, -$$ - 0=\frac{d}{dt}\int_{0}^{t}(f-g)dx = f(t)-g(t) \;\;\; a.e. t\in [0,2\pi]. -$$ -Hence, $f=g$ a.e.<|endoftext|> -TITLE: Intuition about the semidirect product of groups -QUESTION [33 upvotes]: If we have two groups $G,H$ the construction of the direct product is quite natural. If we think about the most natural way to make the Cartesian product $G\times H$ into a group, it is certainly by defining the multiplication -$$(g_1,h_1)(g_2,h_2)=(g_1g_2,h_1h_2),$$ -with identity $(1,1)$ and inverse $(g,h)^{-1}=(g^{-1},h^{-1})$. -On the other hand we have the construction of the semidirect product which is as follows: consider $G$,$H$ groups and $\varphi : G\to \operatorname{Aut}(H)$ a homomorphism, we define the semidirect product group as the Cartesian product $G\times H$ together with the operation -$$(g_1,h_1)(g_2,h_2)=(g_1g_2,h_1\varphi(g_1)(h_2)),$$ -and we denote the resulting group as $G\ltimes H$.
-We then show that this is a group and show many properties of it. My point here is the intuition. -This construction doesn't seem very natural. There are many operations to turn the Cartesian product into a group. The one used when defining the direct product is the most natural. Now, why do we give special importance to this one? -What is the intuition behind this construction? What are we achieving here, and why is this particular way of making the Cartesian product into a group important? - -REPLY [3 votes]: Many have given good answers here, so I just want to answer specifically for the intuition behind it. -The semidirect product comes to light when we notice that if a group $H$ is a normal subgroup, and another group $K$ is also a subgroup (not necessarily normal) of a bigger group, and $H \cap K = 1$, the product of those two subgroups yields another group $HK$ with order $\frac{|H||K|}{|H \cap K|} = |H||K|$ (as $H \cap K = 1$). -So every element of the new group $HK$ can be written uniquely in the form $hk$ where $h \in H$ and $k \in K$. Because of that, the product of any two such elements must itself be expressible in the form $hk$, e.g.: -$(h_1 k_1)(h_2 k_2) = h_1 k_1 h_2 (k_1^{-1} k_1) k_2 = h_1 (k_1 h_2 k_1^{-1}) k_1 k_2$ -As $H$ is a normal subgroup, $k_1 h_2 k_1^{-1} \in H$, so it can be re-written as: -$(h_1 k_1)(h_2 k_2) = (h_1 (k_1 h_2 k_1^{-1})) (k_1 k_2) = h_3 k_3$ -where $h_3 = h_1 (k_1 h_2 k_1^{-1}) \in H$ and $k_3 = k_1 k_2 \in K$. -But we notice that this conjugation by $k_1$, namely $h \mapsto k_1 h k_1^{-1}$, is an automorphism of $H$, so of course any automorphism of $H$ would do the job. If we define a homomorphism: -$\varphi: K \rightarrow Aut(H)$, -that homomorphism can be used in the place of left conjugation by $k_1$ and again achieve the same form of $hk$.
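As a concrete sanity check (my own illustration, not from the answer; the choice of $\mathbb{Z}_6$ and the inversion action are just one example), the dihedral group of order $12$ arises this way as $\mathbb{Z}_6 \rtimes \mathbb{Z}_2$, where $\varphi(0)$ is the identity and $\varphi(1)$ is the inversion automorphism $h \mapsto -h$; a brute-force script can verify that the resulting multiplication really is a group operation:

```python
# Sketch: Z_N semidirect Z_2, with phi(0) = identity, phi(1) = inversion.
N = 6

def phi(k, h):
    """Automorphism of Z_N selected by k in Z_2: identity or inversion."""
    return h % N if k == 0 else (-h) % N

def mult(a, b):
    """(h1, k1)(h2, k2) = (h1 + phi(k1)(h2), k1 + k2), written additively."""
    (h1, k1), (h2, k2) = a, b
    return ((h1 + phi(k1, h2)) % N, (k1 + k2) % 2)

elems = [(h, k) for h in range(N) for k in range(2)]
# Associativity holds for every triple, so the operation defines a group.
assert all(mult(mult(a, b), c) == mult(a, mult(b, c))
           for a in elems for b in elems for c in elems)
# Unlike the direct product, the result is non-abelian.
assert any(mult(a, b) != mult(b, a) for a in elems for b in elems)
```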
Rewriting the above derivation with direct product notation and the defined homomorphism, we would get: -$(h_1, k_1)(h_2, k_2) = (h_1 \: \varphi(k_1)(h_2), k_1 k_2) = (h_3, k_3)$ -where $h_3 = h_1 \: \varphi(k_1)(h_2) \in H$ and $k_3 = k_1 k_2 \in K$, -which is exactly the definition of the semidirect product multiplication you ask about.<|endoftext|> -TITLE: How does one interpret statements like: "The traveling salesman problem is NP-complete?" -QUESTION [10 upvotes]: The world abounds with statements like: - -The traveling salesman problem is NP-complete. - -But when I try to follow the Internet's links "down the rabbit hole," I don't get a truly sensible explanation of what these kinds of statements actually mean. I'll try to narrow this down to exactly the part where the "understanding black hole" occurs for me. -Take the traveling salesman problem, for example. When I read about this, all I really see is a function. Explicitly, there's a function that maps each weighted graph $G$ to the set of all Hamiltonian cycles in $G$ of minimum length. We might as well denote this function $\mathrm{TSF}$ and call it the traveling salesman function. -Now let's change topic a bit. When I follow the links related to NP-completeness, I manage to surmise that NP-completeness can be defined formally as a collection of subsets of $\mathbb{N}$. -$$\mathrm{NPCOMP} \subseteq 2^\mathbb{N}$$ -In more detail, I surmise that -$$\mathrm{NPCOMP} = \mathrm{NP} \cap \mathrm{NPHARD}$$ -Okay. So let's try putting these together. For convenience, by a blurgle, I'll mean a function that (like $\mathrm{TSF}$) assigns to each weighted graph $G$ a set of cycles in $G$. So what's going on, presumably, is that we can assign to each blurgle $\mathrm{FOO}$ a corresponding collection of subsets of $\mathbb{N}$, call this $\underline{\mathrm{FOO}}$.
We can therefore define that a blurgle $\mathrm{FOO}$ is NP-complete iff $\underline{\mathrm{FOO}} \subseteq \mathrm{NPCOMP}.$ -Hence, since $\mathrm{TSF}$ is a blurgle, we can reinterpret the opening statement as saying that: $$\underline{\mathrm{TSF}} \subseteq \mathrm{NPCOMP}$$ -It furthermore seems reasonable to define that the travelling salesman problem equals $\underline{\mathrm{TSF}}.$ -The trouble I'm having (and this is the "understanding black hole") is that I don't understand how to get from a blurgle $\mathrm{FOO}$ to $\underline{\mathrm{FOO}}$, which is meant to be a collection of subsets of $\mathbb{N}$. Therefore, although I can describe $\mathrm{TSF}$ in completely precise terms, I don't have a clue what $\underline{\mathrm{TSF}}$ is, except that it's a collection of subsets of $\mathbb{N}$. -But it's much worse than that. Speaking very generally, and forgetting all about graph theory and blurgles and what not, and just dealing with things at the highest possible level of abstraction, it seems that people are often dealing with an object $\mathrm{FOO}$, and then they start talking about the "complexity" of $\mathrm{FOO}$, which implicitly means that they're studying $\underline{\mathrm{FOO}},$ but it seems like, in general, no one ever really tells you how to get from $\mathrm{FOO}$ to $\underline{\mathrm{FOO}}.$ -So, my question is: - -Question. How does one interpret statements like: "The traveling salesman problem is NP-complete" when in reality the writer hasn't given you a problem at all, what they've given you is an object $\mathrm{FOO}$ (like the traveling salesman function), but next thing you know they're making statements about the corresponding problem $\underline{\mathrm{FOO}}$ without telling you precisely which subsets of $\mathbb{N}$ this refers to? -I suppose you're just meant to make an educated guess: but, if so, how are we meant to make this guess, and why would the reader's guess agree with what the writer is actually thinking?
- -REPLY [3 votes]: The set of NP-complete problems is specific to decision problems. When people say that the Travelling Salesman Problem is NP-complete, they are referring to the decision variant of it: -Does there exist a tour of length less than $k$? -The above is different from the variant of the TSP where the goal is to produce the minimal weight tour (the one you are asking about). -This variant is a function problem, not a decision problem. There are function complexity classes that are analogous to decision ones, but allow for the output to be more than just ACCEPT or REJECT. For instance, the output could be the binary encoding of a TSP tour or just an integer. The function analogue to the decision complexity class $P$ (polynomial time) is known as $FP$, which is the set of functions computable in polynomial time on a deterministic Turing machine with an output tape. Likewise, there are other function classes like $FNP$ and $FEXP$. -The travelling salesman problem you are considering is complete for the function complexity class $FP^{NP}$, which is functional polynomial time with access to an $NP$ oracle (black box).<|endoftext|> -TITLE: Proving that $\lim\limits_{x\to 4}\frac{2x-5}{6-3x} = -\frac{1}{2} $ -QUESTION [6 upvotes]: Proving that $$\lim_{x\to 4}\frac{2x-5}{6-3x} = -\frac{1}{2} $$ - -Now of course when you plug in $4$ you get $-\frac{1}{2}$, but I need to prove it using: -$\forall \varepsilon \gt 0,\;\exists\delta \gt 0\;\forall x \in D $ (where $D$ is the domain of the function) -$$0\lt |x-x_0|\lt \delta \implies |f(x)-L| \lt \varepsilon $$ -(where $L$ is the limit we "guessed" and $x_0$ is the point $x$ approaches) -So far I have: -$$\begin{align} -0\lt |x-4|\lt \delta &\implies \left|f(x)-\left(-\frac{1}{2}\right)\right| \lt \varepsilon\\ -\iff 0\lt |x-4|\lt \delta &\implies \left|\frac{2x-5}{6-3x}+\left(\frac{1}{2}\right)\right| \lt \varepsilon\\ -\iff 0\lt |x-4|\lt \delta &\implies \left|\frac{4-x}{6(x-2)}\right| \lt \varepsilon -\end{align}$$
-This is where I'm stuck. What exactly do I need to prove after this last line? - -REPLY [7 votes]: Given any $\varepsilon > 0$, let $\delta = \min\{1, \varepsilon/M\} > 0$, where $M$ is some fixed magical positive constant. While doing this proof, we will eventually figure out what $M$ should be, then go back and erase $M$ and replace it with this constant. Now if $0 < |x - 4| < \delta$, then observe that: -\begin{align*} -\left| \frac{2x - 5}{6 - 3x} - \frac{-1}{2} \right| -&= \left| \frac{4 - x}{6(x - 2)} \right| \\ -&= \frac{1}{6|x - 2|}|x - 4| \\ -&< \frac{1}{6|x - 2|} \cdot \frac{\varepsilon}{M} &\text{since } |x - 4| < \delta \leq \frac{\varepsilon}{M} \\ -\end{align*} -Now we take a break from our proof and try to figure out a way to bound $\frac{1}{6|x - 2|}$ by a constant $M$ given that $x$ is at most $1$ away from $4$, implying that $|x - 2|$ won't be too close to zero. Indeed, if $|x - 4| < 1$, then observe that: -\begin{align*} -|x - 2| -&= |(x - 4) - (-2)| \\ -&\geq ||x - 4| - \left| -2 \right|| &\text{by the reverse triangle inequality} \\ -&= 2 - |x - 4| &\text{since } |x - 4| < 1 \leq 2 \\ -&> 2 - 1 &\text{since } |x - 4| < 1 \\ -&= 1 -\end{align*} -so that: -$$ -\frac{1}{6|x - 2|} < \frac{1}{6} -$$ -Thus, by taking $M = \frac{1}{6}$, we can finish up our original proof: - -Given any $\varepsilon > 0$, let $\delta = \min\{1, 6\varepsilon\} > 0$. Then if $0 < |x - 4| < \delta$, observe that: -\begin{align*} -\left| \frac{2x - 5}{6 - 3x} - \frac{-1}{2} \right| -&= \left| \frac{4 - x}{6(x - 2)} \right| \\ -&= \frac{1}{6|x - 2|}|x - 4| \\ -&< \frac{1}{6|x - 2|} \cdot 6\varepsilon &\text{since } |x - 4| < \delta \leq 6\varepsilon \\ -&< \frac{1}{6} \cdot 6\varepsilon &\text{since } |x - 4| < \delta \leq 1 \\ -&= \varepsilon -\end{align*} -as desired. $~\blacksquare$<|endoftext|> -TITLE: What are all the elements of the group of symmetries of a regular tetrahedron? 
-QUESTION [9 upvotes]: I can see that why the order of the group of symmetries of a regular tetrahedron is $12$ : Roughly speaking, each time one of ${\{1,2,3,4}\}$ is on 'top' and we do to the other $3$ as we did in a regular triangle. But I need to know the elements of the group and I don't have not only a physical regular tetrahedron but also my imagination is not strong. Videos like $1$ or $2$ didn't help, because I can't understand for example what happens to the other $2$ points when the top point and one of the surface points is exchanged. -I would appreciate any simple clear explanation. -Added - For next permutations (with order $3$) now simply we consider the ‘surface’ under the other three-points, i.e. each time $4$ on top is replaced by $1$ or $2$ or $3$. And, we perform rotations as we were doing when $4$ was on top. So we will get: -‘$4$’ on top : ${\{(123),(132)}\}$ -‘$1$’ on top : ${\{(234),(243)}\}$ -‘$2$’ on top : ${\{(134),(143)}\}$ -‘$3$’ on top : ${\{(124),(142)}\}$. -But what about $(12)(34)$, $(13)(24)$ and $(14)(23)$? How to get them? - -REPLY [5 votes]: Consider the set of half edges of the tetrahedron. It is easy to convince yourself that the symmetry group of the solid allows you to move from one such half edge to any other in exactly one way. Since there are six edges, there are twelve half edges, and this implies that the group of symmetries has exactly 12 elements. -Now the group obviously permutes the four vertices. Since there is exactly one subgroup of order 12 in $S_4$, namely the alternating group, it must be our group.<|endoftext|> -TITLE: Proportion of elements of prime order $p$ in $S_n$ -QUESTION [6 upvotes]: I was trying to answer the following question recently : What is the proportion of elements of order $p$ in the symmetric group $S_n$ , where $p$ is some prime number ? -I managed to work out that in general, the number of elements of order $p$ in $S_n$ is: -$ n \choose p $$(p-1)! 
+\displaystyle \sum_{k=2}^{[\frac{n}{p}]} \frac{n!}{k!p^k}$ -where $[x]$ is the greatest integer less than or equal to $x$. -Using this, as well as the Taylor series at $x=0$ for $\large e^{\frac{1}{p}}$, I was able to determine that the proportion of elements of order $p$ in $S_n$ as $n \to \infty$ is: -$\displaystyle \sum_{k=2}^{\infty} \frac{1}{k!p^k} = \large e^{\frac{1}{p}} - \small \frac{p+1}{p}$ -I was wondering what significance, if any, this has - it feels somewhat similar to the result that states that the limit of the ratio of the number of derangements of $n$ elements to $n!$ is $\frac{1}{e}$, but admittedly I don't fully understand the significance of this result either. -Any help would be much appreciated. -Thanks! - -REPLY [2 votes]: There is a well-known formula for the generating function of the proportion of elements of order $1$ or $p$ in $S_n$, i.e., the proportion of $g$ in $S_n$ that satisfy $g^p = 1$. Letting $c_n$ be the number of such solutions, with the convention that $c_0 = 1$, the generating function is -$$ -\sum_{n\geq 0} \frac{c_n}{n!}x^n = e^{x+x^p/p}. -$$ -In particular, since this series converges at $x=1$, the proportion $c_n/n!$ tends to $0$, so you are making an error when you say the proportion has a positive limit as $n\rightarrow \infty$. -Counting the number of elements of order $1$ or $p$ in $S_n$ is the same as counting homomorphisms from $\mathbf Z/(p)$ to $S_n$. More generally, for any finite group $G$ we have -$$ -\sum_{n\geq 0} \frac{\#{\rm Hom}(G,S_n)}{n!}x^n = e^{\sum_{H\subset G} x^{[G:H]}/[G:H]}, -$$ -where we make the convention that $S_0$ is trivial and the sum in the exponent on the right runs over all subgroups $H$ of $G$. Taking $G= \mathbf Z/(p)$ recovers the first formula. This formula goes back to Wohlfahrt.
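The first displayed formula invites a quick computational check. The following Python sketch (an editorial addition, not part of the original answer) brute-force counts the permutations $g \in S_n$ with $g^p = 1$ and compares the counts with $n!$ times the coefficients of $e^{x+x^p/p}$, computed exactly from the differential equation $f' = (1+x^{p-1})f$:

```python
from fractions import Fraction
from itertools import permutations
from math import factorial

def count_pth_roots_brute(n, p):
    """Count permutations g of {0, ..., n-1} with g^p = identity (order 1 or p)."""
    count = 0
    for perm in permutations(range(n)):
        g = list(range(n))
        for _ in range(p):            # apply perm p times
            g = [perm[i] for i in g]
        if g == list(range(n)):
            count += 1
    return count

def egf_coeffs(p, terms):
    """Exact power-series coefficients of f = exp(x + x^p / p), using
    f' = (1 + x^{p-1}) f, i.e. (n+1) a_{n+1} = a_n + a_{n-p+1}."""
    a = [Fraction(1)]
    for n in range(terms - 1):
        val = a[n] + (a[n - p + 1] if n - p + 1 >= 0 else 0)
        a.append(val / (n + 1))
    return a

p, N = 3, 7
coeffs = egf_coeffs(p, N)
for n in range(N):
    assert count_pth_roots_brute(n, p) == coeffs[n] * factorial(n)
```

Here $c_n$ counts elements of order $1$ or $p$, matching the convention in the reply above.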
If you google "wohlfahrt group formula" you'll find references to related work.<|endoftext|> -TITLE: Prove that the number $2a+b+c$ is not a prime number -QUESTION [8 upvotes]: Let $a,b,c$ be positive integers such that ${ a }^{ 2 }-bc$ is a perfect square. Prove that the number $2a+b+c$ is not a prime number. - -I have no idea about the solution. How can I start to prove it? - -REPLY [4 votes]: Let us assume that $p=2a+b+c$ is prime. Then $a^2-bc=a^2-b(p-2a-b)=(a+b)^2-pb$. If this number is equal to $n^2$, then it is less than $a^2$ (since $bc>0$), so $n < a$. It follows that $pb=(a+b+n)(a+b-n)$, where both factors exceed $b$; since their product is $pb$, each of them is therefore strictly less than $p$, so neither is divisible by $p$. This contradicts the fact that the product $pb$ is divisible by the prime $p$.<|endoftext|> -TITLE: A chain complex is split if and only if it splits as a direct sum. -QUESTION [7 upvotes]: This is the first part of Exercise 1.4.2 in An Introduction to Homological Algebra by Weibel. -The first part is showing that a chain complex, $C$, with boundaries $B_n$ and cycles $Z_n$ in $C_n$ is split if and only if there are $R$-module decompositions $C_n \cong Z_n \oplus C_n/Z_n$ and $Z_n \cong B_n \oplus H_n(C)$. -If $C$ is split we have maps $s_n: C_{n-1} \rightarrow C_{n}$ such that $d_{n} = d_{n}s_nd_{n}$ where $d_{n}:C_{n} \rightarrow C_{n-1}$ is our differential. Notice that then we have exact sequences $0 \rightarrow kerd_{n} \rightarrow C_{n} \rightarrow Imd_{n} \rightarrow 0$. Now $s_n|_{Imd_n}: Imd_n \rightarrow C_n$ so that if $a \in Imd_n$ we have a $b \in C_n$ so that $d_n(b)=a$. Then we have $d_ns_n|_{Imd_n}(a) = d_ns_nd_n(b) = d_n(b) = a$ so our short exact sequence splits giving us $C_n \cong kerd_n \oplus Imd_n \cong Z_n \oplus C_n/Z_n$. -Now we also have a short exact sequence $0 \rightarrow Imd_{n+1} \rightarrow kerd_n \rightarrow kerd_n/Imd_{n+1} \rightarrow 0$. I am stuck on this part of the first proof as I don't see why this is split.
We would like to use the maps $s_n$ given by the assumption, but the terms of this sequence are submodules of $C_n$, so the $s_n$ don't seem to be of direct use. I also don't see why $Imd_{n+1}$ is injective or $kerd_n/Imd_{n+1}$ is projective. I believe I am forgetting one of the ways we construct the splitting maps in this case; any help would be appreciated! - -REPLY [4 votes]: Let me tell you the answer first. In the notation of Weibel's book, $B^{\prime}_{n}=sd(C_{n})$, $B_{n}=ds(C_{n})$ and $H^{\prime}_{n}=(1-sd-ds)(C_{n})$. (This is the only thing I could guess after finishing the "if" part.) -I believe that you already showed $\mathrm{Im}\ d_{n}\cong sd(C_{n})$ in your question. Hence we have the short exact sequence -$$0\rightarrow Z_{n}\rightarrow C_{n} \rightarrow sd(C_{n})\rightarrow 0$$ -The right split map is just the inclusion $sd(C_{n})\hookrightarrow C_{n}$. -It is also not hard to show that $ds(C_{n})=\mathrm{Im}\ d_{n+1}=B_{n}$. Then we have the short exact sequence -$$0\rightarrow ds(C_{n})\rightarrow Z_{n} \rightarrow Z_{n}/ds(C_{n})\rightarrow 0$$ -The left split map is $ds$, so we have $Z_{n}\cong ds(C_{n})\oplus Z_{n}/ds(C_{n})$. -In summary, we have $\mathrm{Im}\ d_{n+1}=ds(C_{n})$ and $\mathrm{Im}\ d_{n}\cong sd(C_{n})$. The first splitting short exact sequence implies $\ker d_{n}=(1-sd)(C_{n})$. Then the second splitting short exact sequence implies $H^{\prime}_{n}=(1-sd-ds)(C_{n})$.<|endoftext|> -TITLE: Series of squares of n integers - where is the mistake? -QUESTION [8 upvotes]: Given the following two series: -$$1^3 + 2^3 + ... + n^3$$ -$$0^3 + 1^3 + .... + (n-1)^3$$ -I take the difference of the two vertically: -$$\left(1^3-0^3\right) + \left(2^3-1^3\right) + ....
+ \left(n^3-(n-1)^3\right)$$ -This equals to $n^3$ -If I now express this in sum notation: -$$\sum_{i=1}^n\left(i^3-(i-1)^3\right) = n^3$$ -If I expand: $(i-1)^3 = i^3 - 3i^2 + 3i - 1$ -Thus -$$\left(i^3 - (i-1)^3\right) = 3i^2 - 3i +1$$ -My sum is now: -$$3\sum_{i=1}^n i^2 -3 \sum_{i=1}^n i + n = n^3$$ -$$\sum_{i=1}^n i^2 = \frac{1}{3} \left(n^3 + 3 \frac{n(n-1)}{2} -n\right)$$ -And the expression on the RHS above is not $\frac{1}{6} n (n+1) (2n+1)$ -I don't want to solve the above using forward difference, I want to keep it backward. - -REPLY [3 votes]: Note that $\sum_{i=1}^n i = \frac 12n(n{\color{red}+}1)$, hence -\begin{align*} - \frac 13 \left(n^3 + 3\frac{n(n+1)}2 - n\right) - &= \frac 16 n(2n^2 + 3n + 3 - 2)\\ - &= \frac 16 n(2n^2 + 3n + 1)\\ - &= \frac 16 n(2n+1)(n+1) -\end{align*}<|endoftext|> -TITLE: Haar Measure of a Topological Ring -QUESTION [35 upvotes]: A topological ring is a (not necessarily unital) ring $(R,+,\cdot)$ equipped with a topology $\mathcal{T}$ such that, with respect to $\mathcal{T}$, both $(R,+)$ is a topological group and $\cdot:R\times R\to R$ is a continuous map. A left Haar measure on $R$ is a (nonnegative) measure $\lambda$ on $R$ with respect to the Borel $\sigma$-algebra of $R$ such that there is a multiplicative map $l:R\to[0,\infty]$ called a left multiplier for which - -$l(x\cdot y)=l(x)\,l(y)$ for all $x,y\in R$, -$\lambda(x\cdot S)=l(x)\,\lambda(S)$ for all $x\in R$ and for any Borel measurable subset $S\subseteq R$ (with respect to $\mathcal{T}$), and -$\lambda$ is a Haar measure for the topological group $(R,+)$. - -(We interpret $0\cdot \infty$ and $\infty\cdot 0$ as $0$.) - -An example is the ring $\text{Mat}_{n\times n}(\mathbb{R})$ of $n$-by-$n$ matrices over $\mathbb{R}$ under the topology inherited from $\mathbb{R}^{n\times n}\cong\text{Mat}_{n\times n}(\mathbb{R})$ (here, $\cong$ means "is homeomorphic to"). 
If $\textbf{A}=\left[a_{i,j}\right]_{i,j=1,2,\ldots,n}$, then we take $\text{d}\lambda(\textbf{A})$ to be $\prod_{i=1}^n\,\prod_{j=1}^n\,\text{d}a_{i,j}$ (i.e., $\lambda$ is the Lebesgue measure of $\text{Mat}_{n\times n}(\mathbb{R})\cong \mathbb{R}^{n\times n}$). Then, $\lambda$ is a left Haar measure of $\text{Mat}_{n\times n}(\mathbb{R})$ with respect to the left multiplier $l(\textbf{X}):=\big|\det(\textbf{X})\big|^n$ for all $\textbf{X}\in\text{Mat}_{n\times n}(\mathbb{R})$. -Another example is $(\mathbb{C},+,\cdot)$. We take $\lambda$ to be such that $\text{d}\lambda(z):=\text{d}x\,\text{d}y$ for $z=x+\text{i}y$. Then, a left multiplier is $l(w):=|w|^2$ for each $w\in\mathbb{C}$. Similarly to the case of the ring of real $n$-by-$n$ matrices, $\text{Mat}_{n\times n}(\mathbb{C})$ has a left Haar measure $\lambda$ with respect to $l(\textbf{X}):=\big|\det(\textbf{X})\big|^{2n}$ for all $\textbf{X}\in\text{Mat}_{n\times n}(\mathbb{C})$, i.e., $\text{d}\lambda(\textbf{A}):=\prod_{i=1}^n\,\prod_{j=1}^n\,\left(\text{d}x_{i,j}\,\text{d}y_{i,j}\right)$ for $\textbf{A}=\left[x_{i,j}+\text{i}y_{i,j}\right]_{i,j=1,2,\ldots,n}$. -The third example here is $\mathbb{R}[T]/\left(T^n\,\mathbb{R}[T]\right)$ equipped with the topology inherited from $\mathbb{R}^n$. For $f(T)=\left(a_0+a_1T+\ldots+a_{n-1}T^{n-1}\right)+\left(T^n\,\mathbb{R}[T]\right)$, we take $\text{d}\lambda(f):=\prod_{i=0}^{n-1}\,\text{d}a_i$. Then, a left multiplier is $l(g):=\left|b_0\right|^n$, for $g(T)=\left(b_0+b_1T+\ldots+b_{n-1}T^{n-1}\right)+\left(T^n\,\mathbb{R}[T]\right)$. - -Note that we can also similarly define the notion of right Haar measures for rings and their corresponding right multipliers (which are usually denoted by $r$ in this thread). 
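For the matrix-ring example, the multiplier $l(\textbf{A})=\big|\det(\textbf{A})\big|^n$ can be sanity-checked numerically: left multiplication $\textbf{X}\mapsto \textbf{A}\textbf{X}$ is a linear map on $\mathbb{R}^{n^2}$, and the factor by which it scales Lebesgue measure is the absolute value of its determinant. A sketch (an editorial addition, assuming NumPy is available):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 3
A = rng.standard_normal((n, n))

# Matrix of the linear map X -> A @ X on R^(n*n), acting on the basis matrices E_idx.
L = np.zeros((n * n, n * n))
for idx in range(n * n):
    E = np.zeros(n * n)
    E[idx] = 1.0
    L[:, idx] = (A @ E.reshape(n, n)).reshape(n * n)

# The Lebesgue measure of A . S scales by |det L|, which should equal |det A|^n.
assert np.isclose(abs(np.linalg.det(L)), abs(np.linalg.det(A)) ** n)
```

The same computation with complex entries, viewing $\text{Mat}_{n\times n}(\mathbb{C})$ as $\mathbb{R}^{2n^2}$, would give the exponent $2n$ of the second example.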
If a ring has both left and right Haar measures, then (due to commutativity of addition) the two Haar measures coincide (up to scalar multiple) and we can omit the adjectives left and right, and simply call them Haar measures (or two-sided Haar measures to be precise). The adjectives "left" and "right" only indicate which kind of multipliers the ring has. A ring with a Haar measure is unimodular if the left multiplier coincides with the right multiplier; otherwise, we call the ring non-unimodular. - -Then, $\text{Mat}_{n\times n}(\mathbb{R})$, $\mathbb{C}$ (as well as $\text{Mat}_{n\times n}(\mathbb{C})$), $\mathbb{R}[T]/\left(T^n\,\mathbb{R}[T]\right)$ (as well as $\mathbb{C}[T]/\left(T^n\,\mathbb{C}[T]\right)$), the ring $\mathbb{Z}_{p}$ of $p$-adic integers (see Crostul's comment below), and the ring of formal power series $\mathbb{F}_p[\![ T]\!]$ (see Jyrki Lahtonen's comment below) are examples of unimodular rings. Of course, many of these rings are commutative rings, and commutative rings with Haar measures are necessarily unimodular. -An example of non-unimodular rings is as follows. Consider $\mathbb{R}\times\mathbb{R}$ (some people may denote it by the semidirect sum $\mathbb{R}\,{\supset\!\!\!\!\!\!+}\,\mathbb{R}$) with the usual entry-wise addition, but with the twisted multiplication $(u,v)\cdot(x,y):=(ux,uy+v)$. Then, with the Haar measure $\text{d}\lambda(a,b):=\text{d}a\,\text{d}b$, we notice that a left multiplier of $\mathbb{R}^2$ is $l(u,v):=|u|^2$ for every $u,v\in\mathbb{R}$, whereas a right multiplier is $r(u,v):=|u|$ for every $u,v\in\mathbb{R}$. 
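The two multipliers of this twisted multiplication can be verified symbolically from Jacobians (an editorial sketch, assuming SymPy; the multiplier is the absolute value of the Jacobian determinant):

```python
import sympy as sp

u, v, x, y = sp.symbols('u v x y', real=True)

# Twisted multiplication on R x R: (u, v) . (x, y) = (u*x, u*y + v).
prod = sp.Matrix([u * x, u * y + v])

# Left translation by a fixed (u, v): Jacobian with respect to (x, y).
left_det = prod.jacobian(sp.Matrix([x, y])).det()
assert sp.simplify(left_det - u**2) == 0      # l(u, v) = |u|^2

# Right translation by a fixed (x, y): Jacobian with respect to (u, v).
right_det = prod.jacobian(sp.Matrix([u, v])).det()
assert sp.simplify(right_det - x) == 0        # r(x, y) = |x|
```

The mismatch between the two determinants ($u^2$ versus $x$) is exactly the non-unimodularity described above.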
The ring $\text{Mat}_{n\times n}(\mathbb{R})\times\mathbb{R}^n$ (also denoted by $\text{Mat}_{n\times n}(\mathbb{R})\,{\supset\!\!\!\!\!\!+}\,\mathbb{R}^n$) and the ring $\text{Mat}_{n\times n}(\mathbb{C})\times\mathbb{C}^n$ (also denoted by $\text{Mat}_{n\times n}(\mathbb{C})\,{\supset\!\!\!\!\!\!+}\,\mathbb{C}^n$), with the usual entry-wise addition and with the twisted multiplication $(\mathbf{A},\mathbf{u})\cdot(\mathbf{B},\mathbf{v}):=(\mathbf{AB},\mathbf{Av}+\mathbf{u})$, are also non-unimodular for the same reason. - -Has there been any study of this sort of concept? I'm sure that I am not the first to think about the notion of Haar measures for rings. Are there any criteria for a ring to be unimodular? Are there rings with Haar measures of a given characteristic $k>0$ which are non-unimodular, or simple, or at least non-commutative? Are there rings with left Haar measures, but without right Haar measures? (Unlike groups, I expect to find rings with only one kind of Haar measure.) The cardinality of each example of rings with Haar measures I have so far is that of the continuum $\mathfrak{c}=2^{\aleph_0}$. Do there exist rings with Haar measures of higher cardinalities? Please let me know if you have or know of any references. -On the other hand, there may be some pathology with this concept of Haar measures for rings, and because of that, nobody introduces this concept. If that is the case, would you please let me know why? - - -EDIT I: I removed a wrong example from the question. It turns out that, with respect to the discrete topology, $(\mathbb{Z},+,\cdot)$ does not have a Haar measure. -EDIT II: I think I may need another compatibility condition, as stated in menag's reply below. That is, I may need to enforce the following condition (which I shall name left compatibility): for any Borel measurable subset $S\subseteq R$ and for every $x\in R$, $x\cdot S$ is also Borel measurable.
For right Haar measures, we may similarly define right compatibility. Alternatively, I may need to modify Condition 2 to $\lambda(x\cdot S)=l(x)\,\lambda(S)$ for each Borel measurable subset $S\subseteq R$ and for any $x\in R$ such that $x\cdot S$ is Borel measurable (and likewise for right Haar measures). I am in favor of the former modification (i.e., the left or right compatibility condition) and shall assume it in all instances. -EDIT III: I may be wrong about this claim: "An abelian group has at most one Haar measure (up to scalar multiple)." (See the boldfaced portion of my question.) Without the locally compact Hausdorff assumption, there may be an abelian group with two essentially different Haar measures. If the underlying additive group of a ring is isomorphic to such an abelian group, then the ring may have a left Haar measure and a right Haar measure which are not proportional. Maybe somebody who knows better about topological groups can give me some insight. - -REPLY [5 votes]: To not load the question with too much text, I decided to add my discoveries as an answer, though very lacking it may be. However, these results are simple and have no references. I would greatly appreciate if you find any mistake, if you can add anything to this, or if you make some comments. My answer here is still loaded with many questions. - - -Proposition: For a fixed integer $k>0$, there exists a commutative topological ring of characteristic $k>0$ with a Haar measure. - -Let $R_1,R_2,\ldots,R_m$ be topological rings with Haar measures $\lambda_1,\lambda_2,\ldots,\lambda_m$ corresponding to left multipliers $l_1,l_2,\ldots,l_m$. Define $R:=\bigoplus\limits_{i=1}^m\,R_i$, $\lambda:=\bigotimes\limits_{i=1}^m\,\lambda_i$, and $l:=\prod\limits_{i=1}^m\,\left(l_i\circ \pi_i\right)$, where $\pi_i:R\to R_i$ is the canonical projection. Then, under the product topology, $R$ is a topological ring with Haar measure $\lambda$ with respect to the left multiplier $l$. 
Hence, if $k_i:=\text{char}\left(R_i\right)$, then the characteristic of $R$ is $\text{lcm}\left(k_1,k_2,\ldots,k_m\right)$. In particular, we know that there exists a ring of a given prime characteristic which possesses a Haar measure, so that there is a ring of a characteristic $k$ which has a Haar measure, where $k$ is a square-free positive integer. It is therefore left to find a ring of characteristic $p^j$ with $p$ being prime and $j>1$ that possesses a Haar measure. -Now, let $R:=\Big(\mathbb{Z}\big/\left(p^j\,\mathbb{Z}\right)\Big)[\![T]\!]$. Each $f(T)\in R$ can be written as $p\,f_0(T)+f_1(T)$, where $f_0(T),f_1(T)\in R$ and the nonzero coefficients of $T^i$ in $f_1(T)$ are not divisible by $p$. Let $\lambda$ be the Haar measure of the group $(R,+)$, which is compact and so we can assume $\lambda(R)=1$. Let $l(f):=l\left(f_1\right)$ for all $f(T)\in R$, $l(f):=1$ if $f(T)$ is an invertible element of $R$, $l(0):=0$, and $l(T):=\frac{1}{p^j}$. Extend the definition of $l$ onto $R$ using the multiplicativity condition. Then, $R$ as a ring has a Haar measure $\lambda$ with respect to the left multiplier $l$ (which is also a right multiplier). Ergo, there exists a topological ring of characteristic $p^j$ with a Haar measure. -Now, how about simple rings with Haar measures? We know that the characteristic of a simple ring is either $0$ or a prime integer. Then, for a fixed prime $p\in\mathbb{N}$, does there exist a simple topological ring of characteristic $p$ with a left, right, or two-sided Haar measure? (For the case of rings of characteristic $0$, $\text{Mat}_{n\times n}(\mathbb{R})$ and $\text{Mat}_{n\times n}(\mathbb{C})$ are simple topological rings with Haar measures.) - - -Proposition: Any ring with a left, right, or two-sided Haar measure is uncountable. 
- -If $R$ is a countable nonzero ring, then $\left\{0_R\right\}$ must be of zero measure, so that $R=\bigcup_{r\in R}\,\big(r+\left\{0_R\right\}\big)$ has zero measure, but this contradicts positivity of Haar measures of the group $(R,+)$. Hence, the only countable ring with a Haar measure is the zero ring $\{0\}$, which is the unique ring of characteristic $1$, with the counting measure and the two-sided multiplier (or simply the multiplier) $m$ with $m(0):=1$. -Here is another trivial example. A trivial ring (i.e., a ring with trivial multiplication) is a ring with Haar measure. For an uncountable trivial ring $R$ and a Haar measure $\lambda$ of the group $(R,+)$, we can define the multiplier $m$ as $m(x):=0$ for all $x\in R$. - - -Proposition: For any finite group $G$, the group algebra $\mathbb{C}[G]$ equipped with the standard topology of $\mathbb{C}^{|G|}\cong \mathbb{C}[G]$ (again, $\cong$ means "is homeomorphic to") is a unimodular ring with a Haar measure. - -This result is due to the Artin-Wedderburn Theorem. That is, $\mathbb{C}[G]$ decomposes as the direct sum $\bigoplus\limits_{V\,\text{ irrep}}\,\left(V\underset{\mathbb{C}}{\otimes} V^*\right)$ of two-sided ideals $V\underset{\mathbb{C}}{\otimes} V^*$ where $V$ runs over all the (non-isomorphic) simple $\mathbb{C}[G]$-modules and $V^*$ is its algebraic dual. Since $V\underset{\mathbb{C}}{\otimes} V^*$ is isomorphic (and homeomorphic) to the ring $\text{Mat}_{\dim_\mathbb{C}(V)\times \dim_\mathbb{C}(V)}(\mathbb{C})$ for any finite-dimensional $\mathbb{C}$-vector space $V$, the claim follows. -Similarly, $\mathbb{R}[G]$ is also a unimodular ring for any finite group $G$. The Artin-Wedderburn Theorem can also be used here to show that $\mathbb{R}[G]$ decomposes as a direct sum of two-sided ideals of the form $\text{Mat}_{n\times n}(\mathbb{R})$ or $\text{Mat}_{n\times n}(\mathbb{C})$. -On the other hand, if $\mathbb{F}$ is a finite field and $G$ is a Lie group, then what can we say about $\mathbb{F}[G]$? 
Is there a way to equip $\mathbb{F}[G]$ with a "nice" topology, "compatible" with that of $G$? If so, under this topology, is $\mathbb{F}[G]$ a ring with a left, right, or two-sided Haar measure? - -Under the usual topology of the quaternion skew-field $\mathbb{H}\cong\mathbb{R}^4$, $\mathbb{H}$ equipped with the Lebesgue measure is a unimodular ring with a Haar measure. Its multiplier is given by $x\mapsto |x|^4$ for all $x\in\mathbb{H}$. One can also verify that $\mathbb{H}[T]/\left(T^n\,\mathbb{H}[T]\right)$ is a unimodular ring. Now, I speculate that $\text{Mat}_{n\times n}(\mathbb{H})$ too is a unimodular ring, whilst $\text{Mat}_{n\times n}(\mathbb{H})\times\mathbb{H}^n$ (better denoted by $\text{Mat}_{n\times n}(\mathbb{H})\,{\supset\!\!\!\!\!\!+}\,\mathbb{H}^n$) is non-unimodular. (Maybe somebody with more experiences in handling $\mathbb{H}$ can help me with my claims.) What can we say about $\mathbb{H}[G]$ for a finite group $G$? - -Definition: A symmetric subset $S$ of a ring $R$ is a subset $S$ of $R$ such that $x\cdot S=S\cdot x$ for all $x\in R$. - -If there exists a Borel measurable symmetric subset $S$ of a ring $R$ with a Haar measure $\lambda$ such that $0<\lambda(S)<\infty$, then $R$ is unimodular. For $l(x)\,\lambda(S)=\lambda(x\cdot S)=\lambda(S\cdot x)=\lambda(S)\,r(x)$ for all $x\in R$ (where $l$ is the left multiplier and $r$ is the right multiplier). Note that $\mathbb{H}$ is an example of a noncommutative ring $R$ with a Haar measure with such a subset $S$ (by taking $S$ to be the open unit $4$-disc $\big\{x\in\mathbb{H}\,\big|\,|x|<1\big\}$). Ergo, we have a very weak sufficient condition for a ring to be unimodular. Is there a noncommutative topological ring with a Haar measure which is not a division ring and which possesses a Borel measurable symmetric subset? - -Some benefits of having the notion of Haar measures for rings are as follows. 
Firstly, we have, depending on whether the ring $R$ has a left or a right Haar measure, the equalities $$l(t)\,\left(\int_S\,f(t\cdot x)\,\text{d}\lambda(x)\right)=\int_{t\cdot S}\,f(x)\,\text{d}\lambda(x)$$ and $$\left(\int_S\,f(x\cdot t)\,\text{d}\lambda(x)\right)\,r(t)=\int_{ S\cdot t}\,f(x)\,\text{d}\lambda(x)\,,$$ for $t\in R$, for a Borel measurable $S\subseteq R$, and for a measurable function $f:R\to\mathbb{C}$. -Secondly, we may have a way to define the derivative of some function $f:R\to\mathbb{C}$, provided that $R$ is a unital ring, using the formula $$\left(\text{D}_l\,f\right)(x):=\lim_{t\to 1_R}\,\frac{f(t\cdot x)-f(x)}{l(t)-1}$$ -or -$$\left(\text{D}_r\,f\right)(x):=\lim_{t\to 1_R}\,\frac{f(x\cdot t)-f(x)}{r(t)-1}\,,$$ -depending on whether the ring has a left multiplier or a right multiplier. (Note that, in cases where the ring is non-unimodular, we may have two types of derivatives: left derivatives and right derivatives.) These derivatives are not quite the derivatives in the usual sense. For example, for the ring $\mathbb{R}$, $\left(\text{D}_l\,f\right)(x)=x\,f'(x)=\left(\text{D}_r\,f\right)(x)$. (This derivative concept may be totally useless as, in most cases, I would expect that the limit does not exist, or otherwise, we would need some kind of manifold structures on $R$, but then there is already a concept of derivatives on manifolds.) - -What can we say about the quotient $R/I$, where $R$ is a topological ring with a left, right, or two-sided Haar measure and $I$ is a two-sided ideal of $R$? Do we have anything similar to https://mathoverflow.net/questions/21704/haar-measure-on-a-quotient? What are good conditions on $I$ such that the Haar measure on $R$ will "carry over" onto $R/I$? What are good conditions on $R$ such that a good $I$ exists? -What can we say about topological unital rings $R$ with Haar measures if we add this strong condition to the topology: $x\mapsto x^{-1}$ is a continuous map from $\text{Units}(R)$ into itself? 
Here, $\text{Units}(R)$ is the multiplicative group of units in $R$, equipped with the subspace topology. Topological division rings are examples of such rings with this stronger condition. My examples as well as others' examples so far obey this condition. Do we have a situation where a topological unital ring with this strong condition doesn't have a Haar measure, while it will have a Haar measure if this condition is dropped, and vice versa? What are examples of non-unital rings with Haar measures which are non-trivial or non-commutative? - -Now, regarding representations of a topological ring $R$ with a left, right, or two-sided Haar measure, is it possible to introduce a representation theory of $R$ in parallel to the representation theory of topological groups? I can think of the case where $R$ is a unital $\mathbb{R}$-algebra, where we may use a unitary left $R$-module $V$ of finite $\mathbb{R}$-dimension and define an $R$-invariant inner product $\langle \bullet,\bullet\rangle$ on $V$ via $$\langle u,v\rangle:=\int\limits_R\,\langle\!\langle x\cdot u,x\cdot v\rangle\!\rangle\,\text{d}\lambda(x)$$ for $u,v\in V$ and for a fixed inner product $\langle\!\langle\bullet,\bullet\rangle\!\rangle$ on $V$. I haven't tried to make anything out of this $R$-invariant inner product yet. It might not even make sense at all because $\mathbb{R}$-algebras with Haar measures are unlikely to be compact, whence the integral may not be defined or have a finite value in most cases. We can also look at the Banach space $\mathcal{L}^\kappa(R,\lambda)$ for $\kappa\in[1,\infty]$.<|endoftext|> -TITLE: There exist meager subsets of $\mathbb{R}$ whose complements have Lebesgue measure zero -QUESTION [6 upvotes]: The Baire Category Theorem - Let $X$ be a complete metric space -a.) If $\{U_n\}_1^\infty$ is a sequence of open dense subsets of $X$, then $\bigcap_1^\infty U_n$ is dense in $X$. -b.) $X$ is not a countable union of nowhere dense sets.
-The name for this theorem comes from Baire's terminology for sets: If $X$ is a topological space, a set $E\subset X$ is of the first category, or meager, according to Baire, if $E$ is a countable union of nowhere dense sets; otherwise $E$ is of the second category. - -Problem 5.3.27 from Folland's Real Analysis: There exist meager subsets of $\mathbb{R}$ whose complements have Lebesgue measure zero - -Attempted proof: Work in $X=\mathbb{R}$. Set $\{x_k\}$ to be an enumeration of the rational numbers and let $$E_n = \bigcup_{k=1}^{\infty} \left(x_k - \frac{1}{2^{k-1} n},x_k + \frac{1}{2^{k-1}n}\right)$$ Consider the set $E = \cap_{1}^{\infty} E_n$; then $$m(E_n) \leq \sum_{k=1}^{\infty}\frac{2}{2^{k-1} n} = \frac{4}{n}$$ hence $m(E) = 0$. -I am not sure if I am right; any suggestions are greatly appreciated. - -REPLY [5 votes]: Yes, your proof is right. You need to prove that $E^c$ is meager. -Since $E_n$ is open, $E_n^c$ is closed. $E_n$ is dense in $\Bbb{R}$ since $\Bbb{Q}\subset E_n$. Thus $E_n^c$ is nowhere dense. So $E^c=\bigcup_{n=1}^{\infty}E_n^c$ is meager.<|endoftext|> -TITLE: How does GAP generate the elements in permutation groups? -QUESTION [7 upvotes]: I understand that permutation groups in GAP are represented by generators, which seems to be far more efficient than groups represented by all their elements, but how then could, for example, -gap>Elements( Group( (1,2,3,4,5,6,7,8),(1,2) ) ); - -list all $8!$ elements of $S_8$ so fast? Can someone describe the algorithm or the method used? - -REPLY [7 votes]: Alex's answer is very good, but I'd like to add a somewhat simplified explanation of the Schreier-Sims Algorithm, since @Lehs asked for it. -I want to emphasise that this is not the actual Schreier-Sims algorithm but it contains the essential ideas in the Schreier-Sims algorithm, and I think it will serve to illustrate why it is preferable to the brute force exhaustive algorithm.
-I will try to describe this not-quite-Schreier-Sims algorithm using an example. -Let $G$ be the group generated by the permutations: -$$a=(2\ 3),\quad b=(1\ 2\ 3)(4\ 5).$$ -We will use this not-quite-Schreier-Sims algorithm to find the size of this group. This is a bit simpler than finding the actual elements of the group. -We require two concepts: - -the orbit of a point $\alpha$ under the action of a group $G$ on a set $\Omega$ is the set -$$\alpha \cdot G = \{\alpha\cdot g\in \Omega : g \in G\}.$$ -In the example above: $1\cdot G =\{1, 2, 3\}$. -the stabiliser of a point $\alpha$ under the action of a group $G$ is the subgroup -$$G_{\alpha} = \{g\in G: \alpha \cdot g = \alpha\}.$$ -In our example, $G_1 = \langle (2\ 3), (4\ 5)\rangle$. - -By the Orbit-Stabilizer Theorem, $|G| = |\alpha \cdot G|\cdot |G_{\alpha}|$, i.e. the size of $G$ is the size of the orbit of $\alpha$ multiplied by the size of the stabiliser of $\alpha$. This is important, and we will use it later. -But how do we calculate the stabiliser? One way (bad!) would be to enumerate all of the elements of $G$ and check which elements fix $1$. -Another way (good!) is to use the following lemma: -Schreier's Lemma. -If $G$ is generated by a set $X$ and $G$ acts on a set $\Omega$, then -$$G_\alpha = \langle u_ixu_j^{-1} : 1\leq i, j\leq n, x\in X, \alpha_i \cdot x = \alpha_j\rangle$$ -where $\alpha\cdot G = \{\alpha_1, \alpha_2, \ldots, \alpha_n\}$. -In other words, if you keep track of what maps what to what when enumerating the orbit of $\alpha$, then you get a generating set for the stabiliser for free (more or less). -Returning to our example: -$$ 1\cdot G = \{1, 2, 3\}\quad G_1 = \langle (2\ 3), (4\ 5)\rangle.$$ -Note that it is possible to find the generators for the stabiliser $G_1$ using Schreier's Lemma, but this is a bit tedious so I have not given any of the details.
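For readers who want the omitted details, the orbit and the Schreier generators can be computed mechanically. A minimal Python sketch (an editorial addition, not from the original answer; permutations are stored as dicts that omit fixed points):

```python
def apply(perm, pt):                 # permutations as dicts; identity off-support
    return perm.get(pt, pt)

def compose(p, q):                   # first apply p, then q
    pts = set(p) | set(q)
    return {a: apply(q, apply(p, a)) for a in pts}

def inverse(p):
    return {b: a for a, b in p.items()}

def schreier(gens, alpha):
    """Orbit of alpha plus Schreier generators for its stabiliser."""
    transversal = {alpha: {}}        # u_beta maps alpha to beta; {} is the identity
    queue = [alpha]
    schreier_gens = []
    while queue:
        beta = queue.pop()
        for x in gens:
            gamma = apply(x, beta)
            if gamma not in transversal:
                transversal[gamma] = compose(transversal[beta], x)
                queue.append(gamma)
            else:                    # u_beta * x * u_gamma^(-1) fixes alpha
                g = compose(compose(transversal[beta], x),
                            inverse(transversal[gamma]))
                g = {a: b for a, b in g.items() if a != b}
                if g:
                    schreier_gens.append(g)
    return set(transversal), schreier_gens

a = {2: 3, 3: 2}                     # (2 3)
b = {1: 2, 2: 3, 3: 1, 4: 5, 5: 4}   # (1 2 3)(4 5)
orbit, stab_gens = schreier([a, b], 1)
print(sorted(orbit))                 # [1, 2, 3]
```

For this example the Schreier generators come out as $(2\ 3)$ and $(2\ 3)(4\ 5)$, which indeed generate $\langle (2\ 3), (4\ 5)\rangle = G_1$.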
-We repeat this process on the stabiliser $G_1$: -$$ 2\cdot G_1 = \{2, 3\} \quad G_{1, 2} = \langle (4\ 5)\rangle$$ -where $G_{1,2}$ means the stabiliser of $2$ in $G_1$ -(again omitting the details). -We repeat this process again on $G_{1,2}$: -$$4\cdot G_{1,2} = \{4, 5\} \quad G_{1, 2, 4} = \{\text{id} \}.$$ -Here is where our algorithm ends, by the Orbit-Stabilizer Theorem (applied 3 times, to $G$, then $G_1$, then $G_{1,2}$): -$$|G| = |1\cdot G|\cdot |G_1| = |1\cdot G| \cdot |2\cdot G_1| \cdot |G_{1,2}| -= |1\cdot G |\cdot |2\cdot G_1| \cdot |4\cdot G_{1,2}| \cdot |G_{1,2,4}| -= 3\cdot 2 \cdot 2 \cdot 1 = 12.$$ -The set of initial points in the orbits $\{1, 2, 4\}$ is often called a base, the union of the generators of $G$ with the generators of the stabilisers $G_1$, $G_{1,2}$, and $G_{1,2,4}$ is called a strong generating set, and the orbits and stabilisers are called a stabiliser chain. You can use a stabiliser chain to answer more questions than just finding the size; for example, it can be used to check membership in the group $G$ and it can be used to find the elements in $G$. -In this rather simple example, doing the exhaustive algorithm (to find the size) is not much more difficult than the algorithm proposed above. But think about the example of, say, the symmetric group on 10 points, forgetting that we know its size already. You only have to compute 10 orbits of length 10 down to 1 (that is, 55 points in total). In the absolute worst case (which won't happen in practice), finding the generators of the stabiliser of a point whose orbit is of length $n$ involves $2n$ multiplications of permutations producing (again in the worst case, which doesn't occur) $n$ generators for the stabiliser. So in total we require 350 applications of permutations to points (to calculate the orbits) and 110 multiplications of permutations (to find the generators in Schreier's Lemma). -From this you get that the size of this group is $10! = 3628800$.
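The bookkeeping in the previous paragraph is easy to check directly (an editorial sketch):

```python
from math import factorial

# Orbit lengths along the stabiliser chain for the symmetric group on 10 points.
orbit_lengths = list(range(10, 0, -1))

assert sum(orbit_lengths) == 55                    # points enumerated over all orbits
assert sum(2 * n for n in orbit_lengths) == 110    # worst-case multiplications

order = 1
for n in orbit_lengths:                            # Orbit-Stabilizer, level by level
    order *= n
assert order == factorial(10) == 3628800
```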
-Compare this to the exhaustive algorithm where we'd require $7257600$ products of permutations on 10 points (never mind anything else).<|endoftext|> -TITLE: An elegant way to solve $\frac {\sqrt3 - 1}{\sin x} + \frac {\sqrt3 + 1}{\cos x} = 4\sqrt2 $ -QUESTION [6 upvotes]: The question is to find $x\in\left(0,\frac{\pi}{2}\right)$: -$$\frac {\sqrt3 - 1}{\sin x} + \frac {\sqrt3 + 1}{\cos x} = 4\sqrt2 $$ -What I did was to take the $\cos x$ fraction to the right and try to simplify; -but it looked very messy, and trying to write $\sin x$ in terms of $\cos x$ didn't help. -Is there a simpler (more elegant) way to do this? - -REPLY [7 votes]: Another approach is to write the equation as -$(\sqrt{3} - 1) \cos x +(\sqrt{3} + 1) \sin x = 4 \sqrt2 \sin x \cos x$ -then rearranging gives -$\frac{\sqrt{3} - 1}{2 \sqrt{2}} \cos x +\frac{\sqrt{3} + 1}{2 \sqrt{2}} \sin x = 2 \sin x \cos x$. -Now note that $(\frac{\sqrt{3} - 1}{2 \sqrt{2}})^2 +(\frac{\sqrt{3} + 1}{2 \sqrt{2}})^2 = \frac{1}{8} \left((3 -2 \sqrt{3} + 1) + (3 +2 \sqrt{3} + 1)\right) = 1$ so that we can write $\frac{\sqrt{3} - 1}{2 \sqrt{2}}$ as the $\sin$ of some angle, say $\alpha$, and identify $\frac{\sqrt{3} + 1}{2 \sqrt{2}}$ as $\cos \alpha$. -Then you have to solve $\sin(\alpha + x) = \sin(2x)$, with $\tan \alpha = \frac{\sqrt{3} - 1}{\sqrt{3} + 1}$.<|endoftext|> -TITLE: Minimize the Area -QUESTION [8 upvotes]: The lower corner of a page is to be folded to reach the opposite inner edge. -We have to find the width of the folded part when the area of the folded part is minimal. -Now how I proceeded: -Let the width of the page be $1$. -Let the folded part be $x$. -And the angle of the part folded be $\theta$. -Now I found a relation between $x$ and $\theta$ and thus wrote the area entirely as a function of $\tan(\theta)$ and minimized it. -What I want to know is whether there are other, different ways to do this. -Here's a sketch : - -Just in case someone wants to check their answers, the width x comes out to be 2/3.
- -REPLY [2 votes]: I suggest working with $\theta':={\pi\over2}-\theta$ instead, and putting $\tan\theta'=:\tau$. You then have -$${1-x\over x}=\cos(\pi-2\theta)=\cos(2\theta')={1-\tau^2\over 1+\tau^2}$$ -and therefore ${\displaystyle x={1+\tau^2\over2}}$. -The area in question then comes to -$$A={1\over2}x\cdot{x\over\tau}={(1+\tau^2)^2\over 8\tau}\ .$$ -It turns out that $A$ is minimal when $\tau^2={1\over3}$, or $\theta'=30^\circ$, so that $x={2\over3}$. -The simplicity of the result opens up a chance that it can be obtained also using some geometric reasoning.<|endoftext|> -TITLE: When are canonical maps between limits monomorphisms? -QUESTION [5 upvotes]: If $\mathbf{D}_1 \hookrightarrow \mathbf{D}_2$ is an inclusion of diagrams in a category $\mathbf{C}$, and $\mathbf{C}$ has $\varprojlim \mathbf{D}_1$ and $\varprojlim \mathbf{D}_2$, then the inclusion induces a canonical map $\varprojlim \mathbf{D}_2 \to \varprojlim \mathbf{D}_1$ of cones to $\mathbf{D}_1$. -In $\mathbf{Set}$ (I'm not sure if this is always true elsewhere, even in other toposes), taking $\mathbf{D}_2$ as $$\require{AMScd} \begin{CD} @. X \\ -@. @VfVV \\ -Y @>g>> Z\end{CD}$$ and obtaining $\mathbf{D}_1$ by forgetting $Z$ induces a monomorphism $X \times_Z Y \hookrightarrow X \times Y.$ -However, taking $\mathbf{D}_2$ to be $X \overset{\operatorname{id}}{\to} X$ and $\mathbf{D}_1$ to be the empty diagram induces terminal maps $X \to \mathbf{1}$ for each $X$, which (at least in $\mathbf{Set}$) are almost never monomorphisms. -Going back to (a complete, say) $\mathbf{C}$, what are conditions on $\mathbf{C}$, $\mathbf{D}_2$, and $\mathbf{D}_1$ that ensure $\varprojlim \mathbf{D}_2 \to \varprojlim \mathbf{D}_1$ is a monomorphism? - -REPLY [3 votes]: Let $F : \mathcal{D}_1 \to \mathcal{D}_2$ be a functor where $\mathcal{D}_1$ and $\mathcal{D}_2$ are small categories. 
The following are equivalent: - -For every locally small category $\mathcal{C}$ and every diagram $X : \mathcal{D}_2 \to \mathcal{C}$, if $\varprojlim_{\mathcal{D}_2} X$ and $\varprojlim_{\mathcal{D}_1} X F$ exist, then the comparison $\varprojlim_{\mathcal{D}_2} X \to \varprojlim_{\mathcal{D}_1} X F$ is a monomorphism. -For every diagram $X : \mathcal{D}_2 \to \mathbf{Set}$, the comparison $\varprojlim_{\mathcal{D}_2} X \to \varprojlim_{\mathcal{D}_1} X F$ is injective. -For every object $d$ in $\mathcal{D}_2$, the comma category $(F \downarrow d)$ is inhabited. - -The equivalence of (1) and (2) is essentially a Yoneda-type argument. -To see that (1) implies (3), consider the case where $\mathcal{C} = [\mathcal{D}_2, \mathbf{Set}]^\mathrm{op}$ and $X : \mathcal{D}_2 \to \mathcal{C}$ is the Yoneda embedding. The assumption says that the comparison $\varinjlim_{\mathcal{D}_1^\mathrm{op}} \mathcal{D}_2 (F -, d) \to \varinjlim_{\mathcal{D}_2^\mathrm{op}} \mathcal{D}_2 (-, d)$ is surjective for every object $d$ in $\mathcal{D}_2$; but $\varinjlim_{\mathcal{D}_2^\mathrm{op}} \mathcal{D}_2 (-, d) \cong 1$, so this says that there exist an object $d'$ in $\mathcal{D}_1$ and a morphism $F d' \to d$ in $\mathcal{D}_2$. -That (3) implies (2) is a universality argument (because $[\mathcal{D}_2, \mathbf{Set}]^\mathrm{op}$ is the complete category freely generated by a diagram of shape $\mathcal{D}_2$), but there is also a fairly obvious direct argument.<|endoftext|> -TITLE: Given $g(x)$ and $f(g(x))$, solve for $f(x)$. -QUESTION [44 upvotes]: I've hit a wall on the above question and was unable to find any online examples that also contain trig in $f(g(x))$. I'm sure I am missing something blatantly obvious but I can't quite get it. 
-$$ g(x)=3x+4 , \quad f(g(x)) = \cos\left(x^2\right)$$
-So far I've managed to get to the point where I have $f(x+8) = \cos\left(x^2\right)$, by solving $g^{-1}(g(x))$ (loosely based on the last bit of advice here), but I can't make that final connection.
-My best attempt so far was $f(x)=\cos(x^2-16x+64)$, but while that does result in $x^2$, it still ends up wrong due to it being cosine.
-
-REPLY [21 votes]: $$y=g(x)=3x+4$$
-$$x=\frac{y-4}{3} \Rightarrow g^{-1}(x)=\frac{x-4}{3}$$
-$$f=f \circ (g \circ g^{-1})=(f \circ g) \circ g^{-1} \quad\Rightarrow\quad f(x)=\cos \left(\left(\frac{x-4}{3} \right)^{2}\right) $$<|endoftext|>
-TITLE: GAP tells this semigroup is not a group.
-QUESTION [5 upvotes]: Happy Nowruz 2016 to everyone here!
-Using the code which James pointed out here, I was playing with the following finite semigroup:
-gap> f:=FreeSemigroup("a","b");;
- a:=f.1;; b:=f.2;;
- w:=f/[[a^4,a],[b^2,a^3],[b*a,a^2*b],[b^3,b]];;
- T:=Range(IsomorphismTransformationSemigroup(w));;
-
-GAP tells us that T has $6$ elements, is regular (in fact inverse) and has just one idempotent. This means that T is a finite group, see here.
-But calling AsGroup(T) and IsGroup(T) ends in the undesirable results fail and false respectively. Is there anything obvious I am missing?
-Thanks for the time.
-
-REPLY [5 votes]: Alex is correct: IsGroup only returns true if the object it is applied to belongs to the category of groups. This means that certain operations make sense for the object, such as inversion (via Inverse or ^-1), and One. Since transformations don't always have an inverse, this means that they sometimes don't belong to the category of groups even though they do define a group mathematically. Just to make life more complicated they also do sometimes belong to the category of groups but let's not get into that.
-The thing to check is IsGroupAsSemigroup which checks that the object mathematically defines a group, whether it belongs in the category of groups or not. 
Also you can do IsomorphismPermGroup to obtain an isomorphism to a permutation group.
-With the Semigroups package loaded you will get:
-gap> IsGroupAsSemigroup(T);
-true
-gap> IsomorphismPermGroup(T);
-MappingByFunction( 
-, Group([ (2,3,4)(5,6,7), (2,7)(3,6)
-(4,5) ]), , function( x ) ... end )
-
-Neither IsGroupAsSemigroup nor IsomorphismPermGroup work in GAP without the Semigroups package.
-I don't know why AsGroup returns fail; the documentation is a bit vague. This is probably a bug.<|endoftext|>
-TITLE: $x^y=y^x$. Prove that $x^y>e^e$
-QUESTION [7 upvotes]: Let $x>1$, $y>1$ and $x\neq y$ such that $x^y=y^x$. Prove that:
-$$x^y>e^e$$
-We can assume $y=kx$, where $k>1$.
-Hence, $x=k^{\frac{1}{k-1}}$ and $y=k^{\frac{k}{k-1}}$ and we need to prove that $f(k)>e^e$, where $f(k)=k^{\frac{k^{\frac{k}{k-1}}}{k-1}}$.
-We can show that $f'(k)>0$ for $k>1$ and $\lim\limits_{k\rightarrow1^+}f(k)=e^e$ and we are done.
-Is there a nice proof for this inequality? Thank you!
-
-REPLY [4 votes]: We have to minimize the function $$f(x)=x^{y}\tag{1}$$ where $x$ and $y$ are connected by the equation $$x^{y}=y^{x}\tag{2}$$ As usual we need to find the derivative $f'(x)$, and this is a bit tricky here. 
First we take the logarithm of the equation $(2)$ to get $$y\log x = x\log y$$ and then differentiating with respect to $x$ we get $$\frac{y}{x} + y'\log x = \log y + \frac{xy'}{y}$$ or $$y' = \frac{y}{x}\cdot\frac{y - x\log y}{x - y\log x}\tag{3}$$ From $(1)$ we get $$\log f(x) = y\log x$$ and differentiation gives us $$\frac{f'(x)}{f(x)} = \frac{y}{x} + y'\log x$$ and using equation $(3)$ we get $$\frac{f'(x)}{f(x)} = \frac{y}{x}\left(1 + \frac{y\log x - x\log x\log y}{x - y\log x}\right)$$ and thus $f'(x) = 0$ is possible only when $$\log x\log y = 1\tag{4}$$ so that $y = \exp(1/\log x)$ and $$f(x) = x^{y} = y^{x} = \exp\left(\frac{x}{\log x}\right)$$ It can be easily checked that the function $g(x) = x/\log x$ attains its minimum value at $x = e$, and therefore $f(x) = e^{g(x)}$ also attains its minimum value at $x = e$, so the minimum value of $f(x) = x^{y}$ is $e^{e}$.<|endoftext|>
-TITLE: Kernel of an action
-QUESTION [5 upvotes]: $$G = \left\{
-  \begin{pmatrix}
-  a & b \\
-  0 & 1 \\
-  \end{pmatrix}
-\text{with $a$ in $\{1, -1\}$ and $b$ in } \mathbb{Z}\right\}$$
-G is a subgroup of the matrix group $GL_2(\mathbb{Q})$.
-$\phi : G \rightarrow \{1,-1\} \times \mathbb{Z}/2\mathbb{Z}$ is given by $ (
-  \begin{smallmatrix}
-  a & b \\
-  0 & 1 \\
-  \end{smallmatrix})
-\rightarrow (a,\overline{b})$
-Is the kernel of $\phi$ equal to $ \left\{
-  \begin{pmatrix}
-  1 & 2 \\
-  0 & 1 \\
-  \end{pmatrix}^n
-\text{with $n$ in $\mathbb{Z}$ } \right\}$? Give a proof or a counterexample.
-I'm totally stuck on this question. How would one go about answering this? I know the kernel of an action $G$ on $S$ is defined as $\{g \in G \vert g \cdot s = s \text{ for all $s$ in $S$}\}$, but I don't know how to go from there. For instance, what is $S$ in this case? Or is that even the way to go?
-
-REPLY [2 votes]: The answer is yes. The group $\{1,-1\}\times \mathbb Z/2\mathbb Z$ has $(1, \bar 0)$ as its identity. 
-The kernel of $\phi$ is therefore all matrices of the form $ \begin{pmatrix} - 1 & 2n \\ - 0 & 1 \\ - \end{pmatrix}$ -But ${\begin{pmatrix} - 1 & 2 \\ - 0 & 1 - \end{pmatrix}}^n=\begin{pmatrix} - 1 & 2n \\ - 0 & 1 \\ - \end{pmatrix}$ as can be seen by induction.<|endoftext|> -TITLE: Submultiplicative Hilbert space norm on $B(H)$ -QUESTION [8 upvotes]: Let $H$ be a complex Hilbert space and let $B(H)$ denote the space of bounded linear operators $H \to H$ equipped with operator norm: -$$ \lVert T \rVert = \sup\big\{ \lVert Tx \rVert \: : \: \lVert x \rVert \leq 1\big\}. $$ -One easily shows that $B(H)$ is not a Hilbert space whenever $\dim(H) > 1$ holds, for it does not satisfy the parallelogram rule. Furthermore, an abstract argument shows that there exists a Hilbert space norm $\lVert\:\cdot\:\rVert_2$ on $B(H)$, but it does not provide us with a very concrete description of such a norm. In the finite-dimensional case one might take the Hilbert–Schmidt norm: this turns $B(H)$ into a Hilbert space and it is known to be submultiplicative. However, in the infinite-dimensional case this does not work, for now the space of Hilbert–Schmidt operators is a proper subspace of $B(H)$. This leads me to the following question: - -Question. Is there a submultiplicative Hilbert space norm on $B(H)$ if $H$ is infinite-dimensional, either by abstract reasoning or by concrete example? For the moment I do not care whether this new norm is equivalent to the operator norm. - -This is a strengthening of the question Is B(H) a Hilbert space? which did not ask for submultiplicativity. - -REPLY [2 votes]: The answer is no: such a norm does not exist. First I'll give a short-ish proof; after that I will spell out some of the details for the benefit of those who do not know these already. - -Assumption. All spaces (vector spaces, algebras, etc.) under consideration are complex. - -We prove something slightly stronger. - -Theorem. 
Let $H$ be an infinite-dimensional Hilbert space, and let $\lVert\:\cdot\:\rVert$ be a norm on $B(H)$ such that for each $a\in B(H)$ the left and right multiplication operators $L_a : b\mapsto ab$, $R_a : b\mapsto ba$ are continuous. Then $B(H)$ is not reflexive with respect to $\lVert\:\cdot\:\rVert$. In particular, $\lVert\:\cdot\:\rVert$ cannot be a Hilbert space norm.
-Proof. If $\lVert\:\cdot\:\rVert$ is not complete, then clearly $B(H)$ is not reflexive with respect to this norm. So we assume for the remainder of this proof that $\lVert\:\cdot\:\rVert$ is complete. Now there exists an equivalent norm $\lVert\:\cdot\:\rVert_{\text{Ban}}$ on $B(H)$ turning $B(H)$ into a unital Banach algebra (in other words, we have $\lVert ab\rVert_{\text{Ban}} \leq \lVert a\rVert_{\text{Ban}} \cdot \lVert b\rVert_{\text{Ban}}$ for all $a,b\in B(H)$, and $\lVert \text{id} \rVert_{\text{Ban}} = 1$).
-Now choose some orthonormal basis $\{v_i\}_{i\in I}$ of $H$, and let $A \subseteq B(H)$ be the subalgebra of diagonal operators with respect to this basis. Then $A$ is equal to its own commutant, and therefore it is $\lVert\:\cdot\:\rVert_{\text{Ban}}$-closed. This means that $\lVert\:\cdot\:\rVert_{\text{Ban}}$ turns $A$ into a Banach algebra. But $A$ is algebra-isomorphic to $\ell^\infty(I)$, which is unital, commutative and semisimple, so we know that all Banach algebra norms on $A$ are equivalent. In other words, the algebra isomorphism $A \cong \ell^\infty(I)$ is also an isomorphism of Banach spaces¹ $(A,\lVert\:\cdot\:\rVert_{\text{Ban}})\cong(\ell^\infty(I),\lVert\:\cdot\:\rVert_\infty)$. Since $I$ is infinite, we know that $(\ell^\infty(I),\lVert\:\cdot\:\rVert_\infty)$ is not reflexive, so it follows that $(A,\lVert\:\cdot\:\rVert_{\text{Ban}})$ is not reflexive either. Since $A$ is closed in $(B(H),\lVert\:\cdot\:\rVert_{\text{Ban}})$, it follows that $(B(H),\lVert\:\cdot\:\rVert_{\text{Ban}})$ also fails to be reflexive. 
-Finally, since the norms $\lVert\:\cdot\:\rVert$ and $\lVert\:\cdot\:\rVert_{\text{Ban}}$ are equivalent, it follows that $B(H)$ is not reflexive with respect to $\lVert\:\cdot\:\rVert$.$\quad\blacksquare$
-
-¹: By isomorphism of Banach spaces we mean an invertible bounded linear operator, not necessarily isometric. Strictly speaking, such a map is not an isomorphism of Banach spaces, since the norm is part of the Banach space structure. A better term would be isomorphism of Banachable spaces. See also this discussion on MathOverflow.
-$$ {} $$
-
-$$ {} $$
-We proceed to fill in some of the details (references given below).
-$$ {} $$
-
-Fact 1. A reflexive normed space must be complete.
-Proof. Dual spaces are complete.
-See also: [Rynne&Youngson] Definition 5.38, or [Conway] Definition III.11.2.$\quad\Box$
-
-$$ {} $$
-
-Fact 2. Existence of the equivalent Banach algebra norm $\lVert\:\cdot\:\rVert_{\text{Ban}}$.
-Proof. See any one of the following sources:
-
-[Conway] Exercise VII.1.1;
-[Kaniuth] Proposition 1.1.1;
-[Rudin] Theorem 10.2.$\hspace{10cm}\Box$
-
-
-$$ {} $$
-
-Fact 3. $A$ is equal to its own commutant.
-Proof. Since $A$ is commutative, we have $A \subseteq A'$. Now let $b\in A'$ be given. We show that $b$ must be diagonal with respect to $\{v_i\}_{i\in I}$. To that end, choose some $i_0 \in I$ and let $a\in A$ denote the orthogonal projection onto $v_{i_0}$ (this is the diagonal operator given by $v_{i_0} \mapsto v_{i_0}$ and $v_j \mapsto 0$ for all $j \neq i_0$). Then we have $bv_{i_0} = bav_{i_0} = abv_{i_0}$, hence $bv_{i_0} \in \text{im}(a) = \text{span}(v_{i_0})$. We see that every $v_i$ is an eigenvector of $b$, so indeed $b$ is diagonal with respect to $\{v_i\}_{i\in I}$.$\quad\Box$
-
-$$ {} $$
-
-Fact 4. A commutant in a Banach algebra is closed.
-Proof. Let $X$ be a Banach algebra and $S \subseteq X$ a subset. Then we have
- $$ S' = \bigcap_{s\in S} \: \ker(L_s - R_s), $$
- which is closed since it is an intersection of closed sets. 
(Here $L_s : r \mapsto sr$ and $R_s : r \mapsto rs$ denote the left and right multiplication operators. We have $L_s,R_s \in B(X)$, so that $\ker(L_s - R_s)$ is closed.)$\quad\Box$
-
-$$ {} $$
-
-Fact 5. For any index set $I$, the algebra $\ell^\infty(I)$ is semisimple.
-Proof. It admits a $C^*$-algebra structure, and commutative $C^*$-algebras are semisimple. (Recall that a commutative Banach algebra $X$ is semisimple if and only if the Gelfand representation $X\to C_0(\Omega(X))$ is injective.)$\quad\Box$
-
-$$ {} $$
-
-Fact 6. In a unital, commutative and semisimple algebra, all Banach algebra norms are equivalent.
-Proof. See any one of the following sources:
-
-[Conway] Exercise VII.8.15;
-[Kaniuth] Corollary 2.1.11;
-[Pedersen] Exercise 4.2.13;
-[Rudin] Corollary 11.10.$\hspace{10cm}\Box$
-
-
-$$ {} $$
-
-Fact 7. A closed subspace of a reflexive Banach space is again reflexive.
-Proof. See any one of the following sources:
-
-[Conway] Exercise III.11.4;
-[Pedersen] Exercise 2.4.8;
-[Rudin] Exercise 4.1(d);
-[Rynne&Youngson] Theorem 5.44.$\hspace{8cm}\Box$
-
-
-$$ {} $$
-
-Fact 8. If $\lVert\:\cdot\:\rVert_1$ and $\lVert\:\cdot\:\rVert_2$ are equivalent norms on a vector space $X$, then $(X,\lVert\:\cdot\:\rVert_1)$ is reflexive if and only if $(X,\lVert\:\cdot\:\rVert_2)$ is reflexive. Consequently, if $T : X \to Y$ is an invertible bounded linear operator on normed spaces $X,Y$, then $X$ is reflexive if and only if $Y$ is reflexive.
-Proof. Since $\lVert\:\cdot\:\rVert_1$ and $\lVert\:\cdot\:\rVert_2$ induce the same topology, the dual spaces $(X,\lVert\:\cdot\:\rVert_1)^*$ and $(X,\lVert\:\cdot\:\rVert_2)^*$ are not only isomorphic, but actually the same vector space. It is easily seen that the corresponding dual norms are equivalent. Taking this one step further, it is clear that the natural maps $J_1 : (X,\lVert\:\cdot\:\rVert_1) \to (X,\lVert\:\cdot\:\rVert_1)^{**}$ and $J_2 : (X,\lVert\:\cdot\:\rVert_2) \to (X,\lVert\:\cdot\:\rVert_2)^{**}$ are equal. 
Therefore $J_1$ is surjective if and only if $J_2$ is surjective. -Now let $X$ and $Y$ be normed spaces and let $T : X \to Y$ be an invertible bounded linear operator. We let $\lVert\:\cdot\:\rVert_1$ denote the original norm on $X$ and $\lVert\:\cdot\:\rVert_2$ the norm on $X$ obtained from $Y$ via $T$, so that $T$ is an isometric isomorphism $(X,\lVert\:\cdot\:\rVert_2)\cong Y$. Clearly $(X,\lVert\:\cdot\:\rVert_2)$ is reflexive if and only if $Y$ is reflexive, since they are isometrically isomorphic (and thus, in every sense, the same Banach space). Now it follows from the above that $(X,\lVert\:\cdot\:\rVert_1)$ is reflexive if and only if $Y$ is reflexive. -For an alternative proof, see [Rynne&Youngson] Corollary 5.56.$\quad\Box$ - -$$ {} $$ - -Fact 9. For any infinite set $I$, the space $\ell^\infty(I)$ is irreflexive. -Proof. Choose some countably infinite subset $J \subseteq I$, then $\ell^\infty(J)$ is easily seen to be a closed subspace of $\ell^\infty(I)$ that is isometrically isomorphic to the sequence space $\ell^\infty(\mathbb{N})$. The latter is known to be irreflexive, so we see that $\ell^\infty(J)$, and therefore $\ell^\infty(I)$, are irreflexive as well (using Fact 7).$\quad\Box$ - -$$ {} $$ - -$$ {} $$ -References: -[Conway]: John B. Conway, A Course in Functional Analysis (1985), Springer Graduate Texts in Mathematics 96. -[Kaniuth]: Eberhard Kaniuth, A Course in Commutative Banach Algebras (2009), Springer Graduate Texts in Mathematics 246. -[Pedersen]: Gerd K. Pedersen, Analysis Now, Revised Printing (1995), Springer Graduate Texts in Mathematics 118. -[Rudin]: Walter Rudin, Functional Analysis, Second Edition (1991), McGraw–Hill. -[Rynne&Youngson]: Bryan P. Rynne & Martin A. Youngson, Linear Functional Analysis, Second Edition (2008), Springer Undergraduate Mathematics Series.<|endoftext|> -TITLE: How to compare units? 
-QUESTION [7 upvotes]: Something is confusing me, it's about real world units vs abstract ones and what should be abstract and absolute.
-Here's my problem:
-1 dog + 1 dog = 2 dogs
-
-A dog is an abstract unit, all dogs are different, yet this equation still makes sense. You can't convert a dog to a number of equal particles (so you could relate it to other objects) and say for example:
-1 dog = 5u
-2 dogs = 5u + 5u = 10u
-
-On the other hand, if you have a dog named Morgan (a unique dog), you could say:
-1 Morgan = 3u
-1 Morgan + 1 Morgan = 2 Morgans = 6u
-
-But if you do this:
-1 Morgan + 1 dog = 2 dogs
-1 dog = 1 Morgan = 3u
-
-How do I say that Morgan is a dog and relate them both?
-Does it make sense to have any absolute units? Because we don't know everything, and everything might change in different places or times.
-Thank you.
-
-REPLY [6 votes]: Well I find something wrong with this equation:
-1 dog = 1 Morgan = 3u
-Morgan is a dog, but a dog is not Morgan. It's like saying because a human is a mammal, a mammal is a human. But, Morgan is more specific than just "dog", so you can't set them equal to each other.<|endoftext|>
-TITLE: Can someone confirm this sketch of a proof for the existence of smooth partitions of unity on an unbounded convex open Euclidean set?
-QUESTION [6 upvotes]: EDIT!! The problem originally described (see below) has been reduced to the correctness of a simple extension of an argument from Rudin's PMA. Feel free to skip to the proposed solution, below.
-As part of a much needed calculus review, I've been working through Rudin's Principles of Mathematical Analysis. Besides the following, I've found the perspective on the material therein to be rather illuminating -- which leads me to believe I've missed something, in this case.
-The proof of a lemma (10.38) to be used in the proof of Poincare's Lemma (10.39) cites the following fact: (to paraphrase)
-$V\subseteq\{(x,y)\mid x\in\Bbb{R}^{p-1},\ y\in\Bbb{R}\}$; $p>1$; V convex open. 
$U=\{x|(x,y)\in V \text{ for some y}\}$. It follows that there is a continuously differentiable function on U, let's call it $\alpha$, such that $(x,\alpha(x))\in V.$ -This is one of very few proof details in Rudin explicitly left as an exercise. The hint is that $\alpha$ may be taken to be a constant if V is a ball; it should also be noted that the same $\alpha$ notation is conspicuously used in the proof of the existence of partitions of unity on compact sets. -Now, ignoring the hints and using inf, sup, and handling the special case of extended values, I think I can find a continuous $\alpha$ easily enough, but the resulting function is not necessarily smooth. -I've seen a totally different solution involving countable open coverings, compact exhaustions, and smooth partitions of unity on open sets, but, although this solution makes use of the hints, I'm apprehensive about this scheme because it relies on so much machinery not mentioned in the text or exercises elsewhere in the book. It would be very uncharacteristic of the author if this were the intended solution (I say this having worked hundreds of other exercises from his books). -With that in mind, can someone propose a solution scheme that does not rely on so much beyond-the-text machinery? -EDIT!! In particular, after examining Rudin's proof of the existence of partitions of unity on compact sets, I believe I have found an extension of his argument that works in this case. Can anyone verify this? (See below for my proposed answer). - -REPLY [2 votes]: This potential answer was too long to comment, and it seemed inappropriate to so dramatically change the original post. Could any of you commenting/following the thread verify that I'm on the right track? Thanks! -From the prior exercises in chapter 10, it is clear that partitions of unity on compact sets can be chosen to be smooth. 
So, if V is bounded, its closure is compact, and the hints make it obvious that a smooth partition of unity on its projection $\bar{U}$ does the trick.
-If V is unbounded, the intersection of its closure with the balls of natural radii centered at the origin admits a solution, as above. The question is, can we patch the solutions together with tools from the text, rather than building all new machinery?
-Take the balls of rational radii centered at points with rational coordinates, $B_{r,s}$; the balls cover $\bar{V}$. For each intersection of $\overline{B_{n,0}}$ with $\bar{V}$, collect the finitely many $B_{r,s}; r<1/2; s \in B_{n,0}-B_{n-1,0}$ required for a covering.
-Notice that, in Rudin's partition of unity proof the functions are defined inductively: that is to say, by adjoining new sets proceeding from n-1 to n above, it seems we need not change any of the partition functions previously defined:
-
-$\phi_i(x)$ is associated with one member of the countable covering, say $B_{r,s}$ and is 1 on its projection to $\mathbb{R^{p-1}}$ but zero outside the projection of $B_{2r,s}$; it is assumed smooth;
-$\psi_{i+1}=(1-\phi_1)...(1-\phi_i)\phi_{i+1}$ hence $\psi_{i}+...+\psi_{1}=1-(1-\phi_i)...(1-\phi_1)$
-
-If this in fact gives a locally finite smooth partition of unity on $\bar{U}$, then this affords a solution which naturally extends the cases where V is a ball or V is bounded. The major simplification here compared with the generally accepted answer, if this is correct, comes from the fact that the partition functions do not "pile up" on $\partial V$ as is the case with the more general smooth partitions of unity on open sets from, say, Munkres.<|endoftext|>
-TITLE: Prove that $ \lim_{x \to \infty} f^{(n)}(x) = 0$
-QUESTION [8 upvotes]: Let $n$ be a positive integer. Assume $ \lim_{x \to \infty} f(x)$ and $ \lim_{x \to \infty} f^{(n)}(x)$ are both real numbers. 
Prove that $$ \lim_{x \to \infty} f^{(n)}(x) = 0$$
-
-We have that $$\lim_{x \to \infty} \frac{f(x)+x^n}{x^n} = \lim_{x \to \infty} \frac{f'(x)+nx^{n-1}}{nx^{n-1}} = \cdots = \lim_{x \to \infty} \frac{f^{(n)}(x)+n!}{n!} = \lim_{x \to \infty}f^{(n+1)}(x) = 1,$$ by L'Hospital's rule. But did I make a mistake or how does this show that $\displaystyle \lim_{x \to \infty} f^{(n)}(x) = 0$?
-
-REPLY [2 votes]: Obviously as $\lim_{x\to \infty}f(x)$ exists we have$$\lim_{x \to \infty} \frac{f(x)+x^n}{x^n} =1$$
-So your second-from-last equality is patently incorrect. You can't use L'Hospital's rule because you no longer have an indeterminate form. This is no issue though: just equate your third-from-last expression with your first. $$1=\lim_{x \to \infty} \frac{f^{(n)}(x)+n!}{n!}$$ so obviously from this $$1+\lim_{x \to \infty}\frac{f^{(n)}(x)}{n!}=1$$ and the result follows.<|endoftext|>
-TITLE: If $A$ is a square matrix such that $A^{27}=A^{64}=I$ then $A=I$
-QUESTION [6 upvotes]: If $A$ is a square matrix such that $A^{27}=A^{64}=I$ then $A=I$.
-
-What I did was to subtract $I$ from both sides of the equation:
-$$A^{27}-I=A^{64}-I=0$$
-then:
-\begin{align*}
-A^{27}-I &= (A-I)(I+A+A^2+\dots+A^{26})=0\\
-A^{64}-I &= (A-I)(I+A+A^2+\dots+A^{63})=0.
-\end{align*}
-So from what I understand, either $A=I$ (as needed) or $I+A+A^2+\dots+A^{26}=0$ or $I+A+A^2+\dots+A^{63}=0$.
-At this point I got stuck. By the way, I found out that $A$ is an invertible matrix because if $A^{27}=I$ then also $A^{26}A=AA^{26}=I$, so $A^{26}=A^{-1}$.
-Also I thought of proving by contradiction, assuming that $I+A+A^2+\dots+A^{63}=0$, but because $A^{27}=I$, then:
-$$I+A+\dots+A^{26}+I+A^{28}+\dots+A^{53}+I+A^{55}+\dots+A^{63}=0$$
-but yet nothing.
-Would appreciate your guidance, thanks!
-
-REPLY [13 votes]: Firstly, since $A A^{26} = A^{26}A=I$, $A$ is an invertible matrix.
-Use the fact that $\gcd(27,64)=1$: hence there exist some $a,b \in \Bbb{Z}$ such that $1=27a+64b$. 
Now, compute $$A=A^1=A^{27a+64b}=(A^{27})^a(A^{64})^b=I^aI^b=I$$
-For instance $a=19$ and $b=-8$ work, since $19\cdot 27-8\cdot 64=513-512=1$; the negative power is meaningful because $A$ is invertible.<|endoftext|>
-TITLE: Intuition behind definition of Stable Bundles?
-QUESTION [13 upvotes]: To my understanding, given a rank $r$ and degree $d$, we can fix a $C^{\infty}$ vector bundle $\mathcal{V}$ over some curve $\Sigma$ of genus $g$. We then want to study the moduli space $M_{g}(r,d)$ of holomorphic structures on this fixed bundle $\mathcal{V}$. Let $\mathcal{C}$ denote the space of holomorphic structures on $\mathcal{V}$. Naively, we would simply use $\mathcal{C} / \rm{Aut}(\mathcal{V})$ as the moduli space, but apparently this space has terrible properties, e.g. it isn't even Hausdorff. So I've heard Mumford's Geometric Invariant Theory is one way around this problem. And this is where you see the familiar definition of stable vector bundles: $E$ is stable if $\mu(F) < \mu(E)$ for all proper, holomorphic sub-bundles $F \subset E$, where $\mu$ is of course the slope of the bundle.
-Personally, I don't have any intuition as to why this strange definition of stability leads to well-defined moduli spaces of bundles. Is there any intuition anyone can help me with, or perhaps it is just the sort of thing where you say, with hindsight, it's simply the correct thing to do to force the Geometric Invariant Theory to work?
-EDIT: So restricting to simple bundles (where $\rm{Aut}\mathcal{V} = \mathbb{C}^{*}$) makes sense to me. It seems to be analogous to asking that a group action not have fixed points; thus, avoiding singularities. Maybe it would be helpful for me to consider how semi-stable bundles relate to simple bundles? (I know stable implies simple)
-
-REPLY [6 votes]: In pure mathematics the slope stability condition is motivated only from the fact that it works. But if you are after a good intuition as to what is conceptually going on here, it helps to look at the physics analog of this, which is Douglas's "Pi-stability". 
-This was, in turn, the inspiration for Bridgeland's general concept of stability conditions, which includes slope stability of coherent sheaves as a special case.
-So, the physical interpretation of slope stability of vector bundles is revealed once one thinks of the vector bundles as being the "Chan-Paton gauge fields" on D-branes. Then the rank of the vector bundle is proportional to the mass density of a bunch of coincident D-branes, while the degree, being the Chern-class, is a measure of the RR-charge carried by the D-branes.
-This reveals that the "slope of a vector bundle" is nothing but the charge density of the corresponding D-brane configuration.
-Now a D-brane state is supposed to be stable if it is a "BPS-state", which is the higher dimensional generalization of the classical concept of a charged black hole being an extremal black hole in that it carries maximum charge for given mass.
-Hence the stable D-branes are those which maximize their charge density, hence the "slope" of their Chan-Paton vector bundles.
-The condition that every sub-bundle have smaller slope hence means that smaller branes can increase their charge density, hence their slope, by forming "bound states" into the larger, stable object.
-Hence slope-stability of vector bundles/coherent sheaves is the BPS stability condition on charged D-branes.
-This idea is really what underlies Michael Douglas's discussion of "Pi-stability" of D-branes, which in turn inspired Tom Bridgeland's general mathematical definition, now known as Bridgeland stability, which subsumes slope-stability/mu-stability of vector bundles as a special case. But, unfortunately, this simple idea is never quite stated that explicitly in Douglas's many articles on the topic.
-For more along these lines and more pointers see the discussion at
-nLab: Bridgeland stability, as stability of BPS D-branes<|endoftext|>
-TITLE: Subgroup of $C^*$ (nonzero complex) with finite index. 
-QUESTION [6 upvotes]: True or false:
-Let $C^*$ be the set of all nonzero complex numbers and let $H$ be a subgroup of $C^*$ (with respect to multiplication) such that $[C^*:H]$ is finite. Then $H=C^*$.
-I'm guessing it's true: if there were such a proper subgroup $H$ with finitely many cosets, then there would be a gap between $C^*$ and $H$, and that gap cannot be filled up by a finite union. But I am unable to give a concrete proof.
-Thanks in advance.
-
-REPLY [8 votes]: Suppose $H$ has finite index in $\Bbb C^\times$, let $m=[\Bbb C^\times:H]$. Then for any nonzero complex number $z$ we have $z^m\in H$, since the class $zH$ in the abelian quotient group of order $m$ satisfies $(zH)^m=H$. Now given $w\in\Bbb C^\times$ we can always solve $z^m-w=0$ with $z\neq 0$, so $w\in H$.
-Alternatively, a finite divisible abelian group is trivial. Now $\Bbb C^\times /H$ is finite, and it is divisible, so it must be trivial.<|endoftext|>
-TITLE: Topology on the set of ordinal numbers
-QUESTION [7 upvotes]: This is a problem I encountered while reading Topology : An Outline for a First Course by Lewis E. Ward. Suppose $\Omega$ denotes the smallest ordinal number with uncountably many predecessors. Let $\mathcal{O}(\Omega)$ denote the space of ordinal numbers less than or equal to $\Omega$ endowed with the order topology.
-Now, it is easy to see that $\mathcal{O}(\Omega)$ and $\mathcal{O}(\Omega) \setminus \{\Omega\}$ are normal spaces. The book says that $(\mathcal{O}(\Omega) \setminus \{\Omega\}) \times \mathcal{O}(\Omega)$ is not a normal space by giving the example of the 2 closed subsets:
-1) $A:=(\mathcal{O}(\Omega)\setminus \{\Omega\}) \times \{\Omega\}$
-2) $B:=\{(x,x) : x \in \mathcal{O}(\Omega) \setminus \{\Omega\} \}$
-So, I need to prove that we cannot find disjoint open sets $U$ and $V$ containing $A$ and $B$ respectively. I have 2 issues now:
-1) I have not been able to prove the above. Can anyone tell me how to prove it?
-2) While trying to prove the above, I have managed to 'disprove' the claim. 
That is, I think I have found sets $U$ and $V$ which satisfy the required property. So, could you tell me where I am going wrong? The construction of $U$ and $V$ is as follows: -a) Construction of $U$: For any $x \in \mathcal{O}(\Omega) \setminus \{\Omega\}$, by the well ordering principle, there exists a positive integer $n_x$, such that $\omega^{n_x-1} < x \leq \omega^{n_x}$. Then, let $$U=\cup ([x,\omega^{n_x}] \times (\omega^{n_x},\Omega])$$ where the union is over all the elements of $ \mathcal{O}(\Omega) \setminus \{\Omega\}$. -b) Construction of $V$: For any $x \in \mathcal{O}(\Omega) \setminus \{\Omega\}$, by the well ordering principle, there exists a positive integer $n_x$, such that $\omega^{n_x-1} < x \leq \omega^{n_x}$. Then, let $$V=\cup ([x,\omega^{n_x}] \times [x,\omega^{n_x}])$$ where the union is over all the elements of $\mathcal{O}(\Omega) \setminus \{\Omega\}$. - -REPLY [3 votes]: HINT: The first step is to prove a weak version of the pressing-down lemma. -For brevity let me write $X$ instead of $\mathcal{O}(\Omega)$. Suppose that $\varphi:X\to X$ is such that $\varphi(x)<x$ for all $x\ne 0$; the claim is that there is a $z\in X$ such that $R(z)=\{y\in X:\varphi(y)\le z\}$ is cofinal in $X$. Suppose not; then each set $R(x)=\{y\in X:\varphi(y)\le x\}$ is bounded, so we can choose $\psi(x)\in X$ such that $\psi(x)>x$, and $y<\psi(x)$ for all $y\in R(x)$. In other words, if $\varphi(y)\le x$, then $y<\psi(x)$. Let $x_0\in X$ be arbitrary. Given $x_n\in X$ for some $n\in\omega$, let $x_{n+1}=\psi(x_n)$. The sequence $\langle x_n:n\in\omega\rangle$ is an increasing sequence in $X$, so it converges to some $x\in X$. For each $n\in\omega$ we have $x\ge x_{n+1}=\psi(x_n)$, so $x\notin R(x_n)$, i.e., $\varphi(x)>x_n$; letting $n\to\infty$ yields $\varphi(x)\ge x>\varphi(x)$, which is absurd. This proves the claim. -Now use this and the definition of the product topology to show that if $U$ is an open nbhd of your set $A$, then there is a $z\in X$ such that -$$[z,\Omega)\times[z,\Omega)\subseteq U\;,$$ -and observe that if $V$ is any open nbhd of $B$, -$$V\cap\big([z,\Omega)\times[z,\Omega)\big)\ne\varnothing\;.$$<|endoftext|> -TITLE: Showing that the first hitting time of a closed set is a stopping time.
-QUESTION [9 upvotes]: I found this exercise online: - -I am struggling with the last part of the second exercise, that is, I am not able to show that $\tau = \sup_i \tau_i$. Obviously we have that $\tau \ge \sup_i \tau_i$, but I struggle to show the other inequality. Any tips? -Also, I have heard that the second case also holds under the restriction from the first case, that is, right continuity, or maybe we need to assume càdlàg trajectories. Do you see how we can show the second case assuming this? -My idea for proving this was this: I was able to show it easily if $\tau=0,\infty$. So I can exclude those cases, and look at any $\epsilon>0$. Then I must have that in the interval $[0,\tau-\epsilon]$ at least one $G_i$ is not hit. -Can you please help me? - -REPLY [7 votes]: Since $G_i \supseteq B$, we have $\tau_i \leq \tau$ and therefore, as you already noted, -$$\sup_{i \in \mathbb{N}} \tau_i \leq \tau.$$ -It remains to show that -$$\tau \leq \sup_{i \in \mathbb{N}} \tau_i.$$ -Fix $\omega \in \Omega$. Without loss of generality, we may assume that the right-hand side is finite (otherwise there is nothing to prove), i.e. -$$T(\omega):= \sup_{i \in \mathbb{N}} \tau_i(\omega) <\infty.$$ -Since $\tau_1 \leq \tau_2 \leq \ldots$, it follows from the very definition of "sup" that $\lim_{i \to \infty} \tau_i(\omega) = T(\omega)$. Hence, as $(X_t)_{t \geq 0}$ has continuous sample paths, -$$X_{T(\omega)}(\omega) = \lim_{i \to \infty} X_{\tau_i(\omega)}(\omega).$$ -Since -$$|X_{T(\omega)}(\omega)-b| \leq |X_{T(\omega)}(\omega)-X_{\tau_i(\omega)}(\omega)| + |X_{\tau_i(\omega)}(\omega)-b|$$ -for any $b \in B$, we find -$$d(X_{T(\omega)}(\omega),B) \leq |X_{T(\omega)}(\omega)-X_{\tau_i(\omega)}(\omega)| + \underbrace{d(X_{\tau_i(\omega)}(\omega),B)}_{\leq i^{-1}} \xrightarrow[]{i \to \infty} 0,$$ -i.e. -$$d(X_{T(\omega)}(\omega),B)=0.$$ -As $B$ is closed, this entails $X_{T(\omega)}(\omega) \in B$. By the very definition of $\tau$, this means that $\tau(\omega) \leq T(\omega)$.
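The convergence $\sup_i \tau_i = \tau$ is easy to illustrate numerically — a rough sketch with one fixed deterministic continuous path standing in for a sample path, taking $B=[1,\infty)$ and $G_i=\{x : d(x,B)<1/i\}=(1-1/i,\infty)$, consistent with the bound $d(X_{\tau_i},B)\le i^{-1}$ used above:

```python
import numpy as np

# Sketch: a fixed continuous path on a fine time grid, B = [1, oo),
# G_i = (1 - 1/i, oo).  tau / tau_i are approximate hitting times.
t = np.linspace(0.0, 10.0, 100_001)
x = t / 2 + np.sin(5 * t) / 2            # a continuous "sample path"

def first_hit(level):
    """First grid time with x(t) >= level (approximate hitting time)."""
    idx = int(np.argmax(x >= level))
    assert x[idx] >= level, "level is never reached on this grid"
    return t[idx]

tau = first_hit(1.0)                                       # hitting time of B
taus = [first_hit(1.0 - 1.0 / i) for i in range(1, 200)]   # hitting times of G_i

# tau_i increases with i, stays below tau, and sup_i tau_i recovers tau.
print(tau, taus[0], taus[-1])
```

The monotonicity of the $\tau_i$ and the squeeze towards $\tau$ are exactly the two halves of the proof above.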
-As the proof shows, we just need left-continuity of the sample paths and not necessarily continuity.<|endoftext|> -TITLE: Functional division $\max(f(x+y),f(x-y))\mid \min(xf(y)-yf(x), xy)$ -QUESTION [7 upvotes]: As the title suggests, the problem here is: - -Find all functions $f:\mathbb{Z}\to\mathbb{N}$ such that, for every $x,y\in\mathbb{Z}$, we have - $$\max(f(x+y),f(x-y))\mid \min(xf(y)-yf(x), xy)$$ - -I have already solved the problem - but, with seven pages full of calculations and case splitting, not quite the elegant proof I wished I'd have. I've put the answer in the yellow box below (I've hidden it for the people that wish to try this themselves first). Hover your mouse over the box to reveal its contents. - - The only solution is $f(x)=1$ for all $x\in\mathbb{Z}$. - -I'd love to see a shorter proof for this. I'd be happy with any suggestions! - -REPLY [2 votes]: For any $a, b > 0$, $x=a, y=-b$ gives $\max(f(a+b), f(a-b)) \mid \min(af(-b)+bf(a), -ab)$. Since $f$ takes positive values, the minimum is $-ab$, and so $\max(f(a + b), f(a-b)) \mid ab$. -For $a=b=1$ this shows that $f(0)=f(2)=1$. -For $a=b$ this shows that $f(2a) \mid a^2$, and for $b=1$ this shows that $\max(f(a+1), f(a-1))\mid a$. Combined together these imply that $f(x) = 1$ for all even $x \geq 0$. -Now we claim that either $f(1) = 1$ or $f(3) = 1$. Suppose that neither equalled one. By the above divisibility with $a=2$ and $b=1$, $\max(f(1), f(3)) \mid 2$, so $f(1)=f(3)=2$. By the original divisibility with $x=3$ and $y=2$, $\max(f(5), f(1)) \mid \min(3f(2)-2f(3), 6)$. Since $3f(2)-2f(3) = 3 \cdot 1 - 2 \cdot 2 = -1$, $f(1)=1$, a contradiction. Therefore either $f(1) = 1$ or $f(3) = 1$. -Now for any odd $x \geq 1$ with $f(x) = 1$ let $m = \max(f(x+2), f(x-2))$. By the original divisibility $m \mid \min(xf(2)-2f(x), 2x) = x-2$. If $x=1$ then this shows that $m=1$. Otherwise, if $x > 1$, by the above divisibility ($a=x, b=2$) $m \mid 2x$, so since $\gcd(x-2, 2x) = 1$, $m = 1$. Either way $f(x \pm 2) = 1$, so by induction $f(x) = 1$ for all odd $x > 0$.
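As a quick sanity check on the claimed answer — a brute-force sketch, independent of the proof — one can verify on a finite window of integers that the constant function $f\equiv 1$ really does satisfy the original divisibility (note that $1$ divides every integer, including $0$ and negative values):

```python
# Check max(f(x+y), f(x-y)) | min(x*f(y) - y*f(x), x*y) for f = 1 on a window.
def f(x):
    return 1

def divides(d, n):
    return n % d == 0  # in particular, 1 divides every integer (0 and negatives too)

ok = all(
    divides(max(f(x + y), f(x - y)), min(x * f(y) - y * f(x), x * y))
    for x in range(-30, 31)
    for y in range(-30, 31)
)
print(ok)  # → True
```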
-Therefore $f(x) = 1$ for all integers $x \geq 0$. -Now consider the case of negative arguments to $f$. For $m > 0$, the divisibility $\max(f(a+b), f(a-b)) \mid ab$ derived above, with $a = 1$ and $b = m+1$, yields $\max(f(m+2), f(-m)) \mid m+1$; since $f(m+2) = 1$, this gives $f(-m) \mid m+1$. -On the other hand, the original divisibility with $x = 1$ and $y = m+1$ gives $\max(f(m+2), f(-m)) \mid \min(f(m+1)-(m+1)f(1), m+1) = \min(-m, m+1) = -m$, so $f(-m) \mid m$. -Since $\gcd(m, m+1) = 1$, this forces $f(-m) = 1$ for every $m > 0$. -Thus $f(x)=1$ for all $x \in \mathbb{Z}$. QED<|endoftext|> -TITLE: Bernoulli trial: probability of even times of sum of $7$. -QUESTION [5 upvotes]: We throw a pair of dice an unlimited number of times. For any $n\in \Bbb N$, - let $$E_n=\text{"at the first n trials, the number of times we get a sum of $7$ is even"}$$ - Also let $P_n=P(E_n)$. We need to calculate $P_n$ (in terms of $n$). - - -So I have used a recurrence relation (for $n>1$): $$P_n=\frac56 P_{n-1}+\frac16(1-P_{n-1})$$ and got $P_n=1/2\cdot(2/3)^n+1/2$, for $P_1=30/36=5/6$. -Now, I need to calculate $P_n$ in terms of Bernoulli trials. - -REPLY [3 votes]: Here's a solution with generating functions. Let $p(x)=5/6+x/6$ be the generating function for the number of sevens when you throw the dice once. If you throw the dice $n$ times, the generating function is $p(x)^n.$ To extract the even powers only, you consider ${p(x)^n+p(-x)^n\over 2}$, and to get the required probability you set $x=1$. The answer is therefore $$P_n= {p(1)^n+p(-1)^n\over 2}={1\over 2}\left( 1+\left({2\over 3}\right)^n\right).$$<|endoftext|> -TITLE: Relation between exterior derivative and Lie bracket -QUESTION [14 upvotes]: There is a formula connecting the exterior derivative and the Lie bracket -$$d\omega (X,Y) = X \omega(Y) - Y \omega(X) - \omega([X,Y]).$$ -What is a good way to remember this? By which I mean, what structure does this reveal?
(Or, what essentially is going on here?) In his book, John Lee says, "In a sense, the Lie bracket is dual to the exterior derivative." But that's not really satisfying to me. - -REPLY [5 votes]: I cannot improve on Ted Shifrin's answer describing why the bracket term is necessary. So, I'll only attempt in this answer to elaborate the sense in which the exterior derivative and bracket are dual. -Fix a local frame $(E_a)$ and let $(\theta^a)$ denote its dual coframe, so that $\theta^a (E_b) = \delta^a{}_b$; in particular each such contraction is constant. Then, for frame and coframe elements the exterior derivative formula simplifies to -$$\phantom{(ast)} \qquad d\theta^a(E_b, E_c) = -\theta^a([E_b, E_c]) . \qquad (\ast)$$ But the right-hand side is (up to sign) exactly the structure function $C^a{}_{bc}$ of the frame, and the structure functions tell us everything there is to know about the geometry of the frame $(E_a)$, or equivalently the coframe $(\theta^a)$. Put another way: - -The exterior derivatives $d\theta^a$ contain the same information encoded by the Lie bracket but instead expressed in the (dual) language of the dual coframe.<|endoftext|> -TITLE: Contractibility of an exact chain complex -QUESTION [6 upvotes]: How can one prove that an exact (acyclic) chain complex of projective modules that is trivial in negative degrees is contractible? -I would appreciate some nudges in the right direction more than anything; I'm new to homological algebra and am having a hard time seeing how to start proving this. - -REPLY [3 votes]: The hypothesis that the chain complex is bounded below points you in the direction of induction. Let $C_*$ be the chain complex. The chain complex being acyclic means that every cycle is a boundary, i.e. if $x \in C_n$ is such that $dx = 0$, then $x = dy$ for some $y \in C_{n+1}$. 
What we want is $C_*$ being contractible, meaning that there is a chain homotopy $h_n : C_n \to C_{n+1}$ such that $x = h_{n-1}(d_n x) + d_{n+1}(h_n x)$ for all $x \in C_*$. -It's this $h$ that's constructed by induction. The first one is $h_0 : C_0 \to C_1$ and it must satisfy $d_1 h_0 x = x$ for all $x \in C_0$ (because $C_{-1} = 0$). What are our hypotheses? The complex $C_*$ is acyclic, and $C_{-1} = 0$. So every $x \in C_0$ is a cycle (as $C_{-1} = 0$), hence a boundary ($C_*$ is acyclic). Thus $d_1 : C_1 \to C_0$ is surjective. -And now we use the hypothesis that all the $C_n$ are projective modules. $d_1 : C_1 \to C_0$ is surjective and $C_0$ is projective, hence there is a lift of the identity, a map $s : C_0 \to C_1$ such that $d_1 s x = \operatorname{id}_{C_0}(x) = x$. This map $s$ is exactly the $h_0$ we needed. -And now we start again and want to construct $h_1 : C_1 \to C_2$ such that $x = h_0(d_1(x)) + d_2(h_1(x))$ for all $x \in C_1$. Recalling that we'll use the fact that the modules are projective, we rewrite the equation as -$$d_2(h_1(x)) = x - h_0(d_1(x)) = g(x),$$ -where $g(x) = x - h_0(d_1(x))$. -To use projectivity, we need a surjective map. Let $Z_1 \subset C_1$ be the kernel of $d_1$. Then since $C_*$ is a chain complex, $d_2(C_2) \subset Z_1$; the acyclicity of $C_*$ means that $d_2 : C_2 \to Z_1$ is surjective. -The map we want to lift is $g : C_1 \to C_1$, so all we need to check is that $g(C_1) \subset Z_1$ and we're done. But using $d_1(h_0(y)) = y$, we find: -$$d_1(g(x)) = d_1(x) - d_1(h_0(d_1(x))) = d_1(x) - d_1(x) = 0$$ -and so $g(C_1) \subset Z_1$. We can now use projectivity of $C_1$ to find a lift $h_1 : C_1 \to C_2$ such that $d_2(h_1(x)) = g(x) = x - h_0(d_1(x))$. - -There are a few details missing here, and you need to write down the induction fully.
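To see the construction in action, here is a toy example (a sketch using free $\mathbb{Z}$-modules, which are in particular projective, with all maps written as integer matrices): the exact complex $0 \to \mathbb{Z} \xrightarrow{d_2} \mathbb{Z}^2 \xrightarrow{d_1} \mathbb{Z} \to 0$ with $d_2(c)=(c,0)$ and $d_1(a,b)=b$, together with the contracting homotopy produced by the lifts above:

```python
import numpy as np

# Exact complex 0 -> Z --d2--> Z^2 --d1--> Z -> 0 of free (hence projective) modules.
d1 = np.array([[0, 1]])    # d1(a, b) = b      (surjective)
d2 = np.array([[1], [0]])  # d2(c)   = (c, 0)  (image = ker d1)

# Contracting homotopy, found exactly as in the inductive construction:
h0 = np.array([[0], [1]])  # section of d1:          d1 h0 = id         on C_0
h1 = np.array([[1, 0]])    # lift of g = id - h0 d1: d2 h1 + h0 d1 = id on C_1

assert np.array_equal(d1 @ h0, np.eye(1, dtype=int))            # degree 0
assert np.array_equal(d2 @ h1 + h0 @ d1, np.eye(2, dtype=int))  # degree 1
assert np.array_equal(h1 @ d2, np.eye(1, dtype=int))            # degree 2 (h2 = 0)
print("contracting homotopy verified")
```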
-I'll let you fill in the details.<|endoftext|> -TITLE: The role of the Zariski topology in algebraic geometry -QUESTION [9 upvotes]: I am having trouble understanding the relevance of the Zariski topology being a topology. -Every time I see the proof that sets of the form $V(I)=\{p\in\mathbb{A}^n\mid f(p)=0 \ \forall f\in I\}$ constitute a topology on the affine space $\mathbb{A}^n$, it looks to me like the topology axioms just happened by accident. -When you begin to prove results about this topology, you find it to be nothing like the geometrically intuitive kind of spaces one encounters in analysis, geometric topology or differential geometry (nonempty open sets always intersect!). -On the other hand, it feels right to talk about the Zariski topology because, for instance, algebraic morphisms between affine varieties happen to be continuous. But I just can't feel any intuition of continuity in this context. -Why is it important that the Zariski topology be actually a topology? - -REPLY [2 votes]: I guess my answer would be that it's actually not that important that the Zariski topology is really a topology. -Topology is a pretty good axiomatic frame to describe the notion of closeness and continuity that some mathematicians came up with in the beginning of the last century. It's pretty good, that's granted. But it is not (even though some may have come to consider it to be) the ultimate formalization of these notions. It's just simple to formulate, quite easy to manipulate, and contains pretty much all examples that were needed at the time it was introduced. It gets the job done in an elegant fashion, and for that due credit must be given. -The Zariski topology on the spectrum of a ring is without doubt an intelligent and meaningful way to represent closeness of points in this setting, a notion that should exist given that we are doing geometry here (or at the very least we really want to think we do).
And it is fortunate that it neatly folds into the dominant conceptual frame on that matter. It's really convenient and I think people back then were really happy with it. -But if it hadn't, well, I don't think it would have been the end of the world. For instance, when Grothendieck realized that there should be a finer notion of "covering" that captures more information, and that it was basically hopeless to try to define it as a classical topology, he came up with the étale topology and the (essentially) more general notion of Grothendieck topology. It turned out to be a pretty good notion as well. -Basically, what I'm trying to say is that there are vague conceptual reasons that explain why it's not surprising that the Zariski topology is indeed a topology, but I don't think that too much importance and depth should be given to that fact. -Still, as an example of those reasons, topology was defined that way because people felt that it was natural that closed/open sets should have a certain lattice structure: as convincing handwaving, a closed set should be somewhat like the set of zeros of some continuous functions, so closed sets should be stable under intersections and finite unions. Now a basic idea of algebraic geometry is that ideals of a ring can be thought of as sets of equations, so they satisfy this special lattice structure for the same vague general reasons. Now you can see the construction of the spectrum of a ring as a special case of Stone duality: it only depends on this lattice structure, the prime ideals naturally arise from this structure, even though this is certainly not how they historically came up. So the Zariski topology is a topology because topology was defined following the same kind of intuition as algebraic geometry.
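That lattice intuition — products of equations give unions of zero sets, and larger systems of equations give intersections — can be checked in a toy model (a sketch that evaluates polynomials on a finite grid of sample points standing in for the affine line; this is of course bookkeeping, not scheme theory):

```python
# Toy model: V(fg) = V(f) ∪ V(g) and V(f, g) = V(f) ∩ V(g) on a finite grid.
grid = range(-10, 11)

def V(*polys):
    """Common zero set of the given polynomials, restricted to the grid."""
    return {p for p in grid if all(q(p) == 0 for q in polys)}

f = lambda x: x - 1
g = lambda x: (x + 2) * (x - 3)

assert V(lambda x: f(x) * g(x)) == V(f) | V(g)  # product -> union of closed sets
assert V(f, g) == V(f) & V(g)                   # more equations -> intersection
print(sorted(V(lambda x: f(x) * g(x))))  # → [-2, 1, 3]
```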
-(Actually, my reasoning is a bit anachronistic, probably in a lot of ways since I'm not a specialist in math history, but at least because originally topology was formulated not in terms of a lattice structure on open/closed sets, but in terms of the closure operator. So the idea was that topology was a way to make sense of adherent points to a subset of a space. Of course all this is equivalent.)<|endoftext|> -TITLE: Galois closure of a finite extension is finite -QUESTION [5 upvotes]: We proved the fundamental theorem of algebra in my field theory class the other day, but the professor glossed over an (imo) important step which I have found myself unable to prove. Can someone help me out? -If $K/F$ is a finite field extension and $E/F$ is the Galois closure of $K$, then $[E : F] < \infty$. - -REPLY [4 votes]: Here’s one of several possible ways of doing it: -Let $\{b_1,\cdots,b_n\}$ be a basis for $K$ as an $F$-vector space. Each $b_i$ has an $F$-minimal polynomial $P_i(X)\in F[X]$. Adjoin all roots of all the polynomials $P_i$. The resulting field is clearly a normal extension of $F$, and pretty clearly the smallest normal extension of $F$ containing $K$.
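For a concrete instance (a sketch using sympy, assuming it is available): take $F=\mathbb{Q}$ and $K=\mathbb{Q}(\sqrt[3]{2})$. The generator has a minimal polynomial of degree $3$, and the Galois closure is the splitting field of that polynomial, whose degree over $\mathbb{Q}$ divides $3! = 6$ — in particular finite:

```python
from sympy import Rational, Symbol, degree, factorial, minimal_polynomial

x = Symbol('x')
alpha = 2 ** Rational(1, 3)        # generator of K = Q(2^(1/3))

p = minimal_polynomial(alpha, x)   # minimal polynomial of alpha over Q
d = degree(p, x)
print(p)  # → x**3 - 2

# The Galois closure of K/Q is the splitting field of p; its degree over Q
# divides d! (here 3! = 6), so it is finite.
print(factorial(d))  # → 6
```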
-(If $K$ wasn’t separable over $F$ to start with, I don’t know what a Galois closure of $K$ over $F$ could be.)<|endoftext|> -TITLE: computing the series $\sum_{n=1}^\infty \frac{1}{n^2 2^n}$ -QUESTION [5 upvotes]: $$\sum_{n=1}^\infty \frac{1}{n^2 2^n}$$ -I am new to series, thus I tried a couple of methods to compute this, but I couldn't. - -REPLY [5 votes]: Note that we can write -$$\begin{align} -\sum_{n=1}^N\frac{x^n}{n^2}&=\sum_{n=1}^N \int_0^x s^{n-1}\,ds \int_0^1 t^{n-1}\,dt\\\\ -&=\int_0^1 \int_0^x \frac{1-(st)^N}{1-(st)}\,ds\,dt \tag 1\\\\ -\end{align}$$ -For $x<1$, using the Dominated Convergence Theorem to evaluate the limit of $(1)$ as $N\to \infty$, we can write -$$\begin{align} -\sum_{n=1}^\infty \frac{x^n}{n^2}&=\int_0^1 \int_0^x \frac{1}{1-(st)}\,ds\,dt\\\\ -&=-\int_0^1 \frac{\log(1-xt)}{t}\,dt\\\\ -&=-\int_0^x \frac{\log(1-u)}{u}\,du\\\\ -&=\text{Li}_2(x) \tag 2 -\end{align}$$ -where in $(2)$ $\text{Li}_2(x)$ is the dilogarithm function. Therefore, for $x=1/2$ we have -$$\sum_{n=1}^\infty \frac{1}{n^22^n}=\text{Li}_2(1/2) \tag 3$$ -Note that $\text{Li}_2(1)=\frac{\pi^2}{6}$ (see the Basel Problem). Furthermore, it can be shown (see the NOTE at the end of this solution) that the dilogarithm function satisfies the reflection identity -$$\begin{align} -\text{Li}_2(x)+\text{Li}_2(1-x)&=\text{Li}_2(1)-\log(x)\log(1-x)\\\\ -&=\frac{\pi^2}{6}-\log(x)\log(1-x) \tag 4 -\end{align}$$ -Using $(4)$ with $x=1/2$ in $(3)$ reveals -$$\bbox[5px,border:2px solid #C0A000]{\sum_{n=1}^\infty \frac{1}{n^22^n}=\frac{\pi^2}{12}-\frac12 \log^2(1/2)}$$ -And we are done! - -NOTE: -Here, we prove the reflection identity given in $(4)$.
Observe that we can write -$$\begin{align} -\text{Li}_2(x)+\text{Li}_2(1-x)&=-\int_0^x \frac{\log(1-u)}{u}\,du-\int_0^{1-x}\frac{\log(1-u)}{u}\,du\\\\ -&=-\int_0^x \frac{\log(1-u)}{u}\,du-\int_0^{1}\frac{\log(1-u)}{u}\,du-\int_1^{1-x} \frac{\log(1-u)}{u}\,du\\\\ -&=\frac{\pi^2}{6}-\int_0^x \frac{\log(1-u)}{u}\,du-\int_1^{1-x} \frac{\log(1-u)}{u}\,du \\\\ -&=\frac{\pi^2}{6}-\int_0^x \frac{\log(1-u)}{u}\,du+\int_0^{x} \frac{\log(u)}{1-u}\,du \\\\ -&=\frac{\pi^2}{6}-\int_0^x \frac{\log(1-u)}{u}\,du+\left.\left(-\log(u)\log(1-u)\right)\right|_0^x +\int_0^x \frac{\log(1-u)}{u}\,du\\\\ -&=\frac{\pi^2}{6}-\log(x)\log(1-x) -\end{align}$$ -as was to be shown!<|endoftext|> -TITLE: $A^2B=A^2-B$ then $AB=BA$ -QUESTION [6 upvotes]: If for $2$ real $n$ by $n$ matrices we have $A^2B=A^2-B$ then prove that the two matrices commute. -This is a problem from a competition. -I've tried several manipulations but none of them work. -I can't come up with a counterexample either. - -REPLY [2 votes]: As Sangchul Lee says in the comments, putting everything to one side allows us to add $I$ to both sides and factor as $I=(I+A^2)(I-B)$, telling us $I+A^2$ and $I-B$ are invertible, and $B=I-(I+A^2)^{-1}$. -Obviously, to see that $B$ commutes with $A$, it suffices to show that $(I+A^2)^{-1}$ commutes with $A$. -Prove that whenever $X$ is invertible, $X$ and $A$ commute if and only if $X^{-1}$ and $A$ commute. Then you can apply this principle with $X=I+A^2$.<|endoftext|> -TITLE: Are these functions Riemann integrable on $[0,1]$ using this theorem? -QUESTION [6 upvotes]: The question asks whether the theorem "every bounded function $f$ on $[a,b]$ that is continuous on $(a,b]$ is Riemann integrable" can be used to explain whether these functions are Riemann integrable: -a) $\sin^2(\frac 1x)$ -b) $\frac 1x\cdot\sin(\frac 1x)$ -c) $\ln x$ -I think the answer is Yes, No, No. For the first one, the function is clearly bounded and continuous on $(0,1]$.
For the second one, I'm not entirely sure because the function is continuous on $(0,1]$, but is it bounded? For the last one, the function is unbounded. -Could someone confirm and explain to me more clearly? Much appreciated! - -REPLY [8 votes]: $\frac{1}{x} \sin(1/x)$ is not bounded: consider the sequence of points $x_n=\frac{1}{\pi/2+2n\pi}$ and notice $\sin(1/x_n)=1$, while $\frac{1}{x_n}\to\infty$. -Also, in all of these you should be a bit careful: you need to give some alternate definition of them at $x=0$ in order for the question of Riemann integrability to make sense. As your theorem shows, the choice of this value is of no consequence, but still, the concept of proper Riemann integrability is restricted to functions whose domain is a closed bounded interval.<|endoftext|> -TITLE: How to prove $\exp(x)/(\exp(x)+1)^2$ is even? -QUESTION [12 upvotes]: I've spent some time trying to prove that the function: -$$f(x)=\frac{\exp x}{(\exp x+1)^2}$$ -is even. -I tried expanding the different $\exp x$ as power series, but I had a very difficult time trying to track the different indices. Is that the correct way to proceed, or is there some other property that I am not taking into account? -Greetings. - -REPLY [5 votes]: Something that may be more generally useful: instead of proving that $f(x) = f(-x)$ directly, it's sometimes easier to show that $f(x) - f(-x) = 0$. Writing out $f(x)$ and $f(-x)$ and trying to combine terms may suggest a simplification that you wouldn't think of otherwise. In this case, you'd start with -$$\frac{e^x}{(e^x + 1)^2} - \frac{e^{-x}}{(e^{-x} + 1)^2}$$ -(I don't think there's any point in hiding the specific steps, since they appear in several other answers.) -You might recognize immediately that multiplying one of these terms by $e^{\pm 2x}$ on the top and bottom converts it into the other term.
If not, you can try the classic technique for adding fractions with different denominators: -$$\frac{e^x(e^{-x} + 1)^2}{(e^x + 1)^2(e^{-x} + 1)^2} - \frac{e^{-x}(e^{x} + 1)^2}{(e^{-x} + 1)^2(e^x + 1)^2}$$ -and then it should be pretty clear that you should expand the numerators, after which the result is fairly obvious. -The same idea goes for proving that a function is odd, just show that $f(x) + f(-x) = 0$ instead of $f(x) - f(-x)$.<|endoftext|> -TITLE: Limit exists or not? $\lim \limits_{n \to\infty}\ \left[n-\frac{n}{e}\left(1+\frac{1}{n}\right)^n\right] $ -QUESTION [6 upvotes]: Determine whether or not the following limit exists, and find its value if it exists: $$\lim \limits_{n \to\infty}\ \left[n-\frac{n}{e}\left(1+\frac{1}{n}\right)^n\right] $$ -I think the limit of $\left(1+\frac{1}{n}\right)^n$ is $e$, but I am not sure I can use this or not in the limit calculation. Could you please help me to solve this? Thank you! - -REPLY [2 votes]: This is very similar to Dr. MV's answer using a slightly different approach. -Considering $$A_n=n-\frac{n}{e}\left(1+\frac{1}{n}\right)^n$$ Let us first look at $$B_n=\left(1+\frac{1}{n}\right)^n$$ Take the logarithm $$\log(B_n)=n\log\left(1+\frac{1}{n}\right)$$ Since $n$ is large, use Taylor for $\log(1+x)$ when $x$ is small and replace $x$ by $\frac 1n$. So, you have $$\log(B_n)=n\Big(\frac{1}{n}-\frac{1}{2 n^2}+\frac{1}{3 n^3}+O\left(\frac{1}{n^4}\right)\Big)=1-\frac{1}{2 n}+\frac{1}{3 n^2}+O\left(\frac{1}{n^3}\right)$$ Now $$B_n=e^{\log(B_n)}=e-\frac{e}{2 n}+\frac{11 e}{24 n^2}+O\left(\frac{1}{n^3}\right)$$ Back to $A_n$ $$A_n=n-\frac n e\Big(e-\frac{e}{2 n}+\frac{11 e}{24 n^2}+O\left(\frac{1}{n^3}\right) \Big)=\frac{1}{2}-\frac{11}{24 n}+O\left(\frac{1}{n^2}\right)$$ which shows the limit and also how it is approached. 
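The expansion is easy to check numerically (a quick sketch):

```python
import math

def A(n):
    return n - (n / math.e) * (1 + 1 / n) ** n

def approx(n):
    return 0.5 - 11 / (24 * n)   # two-term expansion derived above

for n in (10, 100, 1000):
    print(n, A(n), approx(n))
# The difference A(n) - approx(n) shrinks like O(1/n^2), and A(n) -> 1/2.
```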
-For illustration purposes, using $n=10$, $A_n\approx 0.458155$ while the above formula gives $\frac{109}{240}\approx 0.454167$.<|endoftext|> -TITLE: Prove that all roots of $\sum_{r=1}^{70} \frac{1}{x-r} =\frac{5}{4} $ are real -QUESTION [11 upvotes]: Prove that all roots of $$\displaystyle \sum_{r=1}^{70} \dfrac{1}{x-r} =\dfrac{5}{4} $$ -are real - -I encountered this question in my weekly test. I tried setting $\displaystyle \sum_{r=1}^{70} \dfrac{1}{x-r} = \dfrac{P'(x)}{P(x)} $ ; where $\displaystyle P(x) = \prod_{r=1}^{70} (x-r)$ and tried to use some inequality, but to no avail. -Can we also generalize this? - -Find the condition such that -$$\displaystyle \sum_{r=1}^{n} \dfrac{1}{x-r} = a $$ -has all real roots. -$n\in \mathbb{Z^+} \ ; \ a\in \mathbb{R}$ - -REPLY [20 votes]: Alternatively: suppose that the function has a complex root; call it $x+iy$; $y\neq 0$. -Then $$\sum_{r=1}^{70}\frac{1}{x-r+iy}=\frac{5}{4}$$ -multiply by the conjugate to get -$$\sum_{r=1}^{70}\frac{x-r-iy}{(x-r)^2+y^2}=\frac{5}{4}$$ -which implies -$$\operatorname{Im}\left( \sum_{r=1}^{70}\frac{x-r-iy}{(x-r)^2+y^2} \right)=0$$ -or -$$\sum_{r=1}^{70}\frac{-y}{(x-r)^2+y^2}=0$$ -which is a contradiction since each term has the same sign and is nonzero.<|endoftext|> -TITLE: Necessary and sufficient conditions for left and right eigenvectors to be equal -QUESTION [10 upvotes]: Suppose I have a matrix $A$ such that for some eigenvalue $\lambda$, the left eigenvector corresponding to $\lambda$ is equal to the conjugate transpose of the right eigenvector corresponding to $\lambda$. That is, I have $\lambda$, $\mathbf u$ and $\mathbf v$ such that -$$ -A\mathbf u = \lambda \mathbf u, \qquad \mathbf v A = \mathbf v \lambda\quad\text{and}\quad \mathbf u = \mathbf v^*. -$$ -Obviously this will be the case for all eigenvalues if $A$ is Hermitian (or symmetric in the real case). 
However, I'm interested in the case where $A$ is not symmetric, and the relation holds only for some particular eigenvalue, not for all of them. -I am interested in what properties the matrix $A$ must have in order for this to be the case. Are there any simple necessary and sufficient conditions in terms of the elements of $A$? -Although I have stated the question more generally, I am actually interested in the case where $A$ has real, non-negative elements and is irreducible, and where $\lambda$ is the Perron-Frobenius eigenvalue. So if it helps to assume that $\lambda$ is real or that the elements of $\mathbf{u}$ and $\mathbf{v}$ are positive then please do so. - -REPLY [2 votes]: I am answering the question about the general case. -Let $A=B+C$ denote the decomposition of $A$ into its Hermitian and anti-Hermitian parts. -The assumption that $u$ and $u^*$ are (right and left) eigenvectors with respect to $\lambda$ is then equivalent to the equations -$$Bu+Cu=\lambda u,~Bu-Cu=\lambda^*u.$$ -These equations are equivalent to -$$Bu=\Re(\lambda)u,~Cu=i\Im(\lambda)u,$$ -where $\Re(\lambda)$ and $\Im(\lambda)$ are the real and imaginary parts. -In other words, $u$ is an eigenvector of both $B$ and $C$. -I somewhat doubt that there is a more explicit expression in the entries of $A$ (in the general case) because the whole situation is unchanged when undergoing a unitary similarity transformation.<|endoftext|> -TITLE: Existence of open map from $\mathbb{R}$ to $\mathbb{R}^2$ -QUESTION [5 upvotes]: Is there an open map from $\mathbb{R}$ to $\mathbb{R}^2$? I am guessing that non-existence can be derived from the fact that $\mathbb{R}$ and $\mathbb{R}^2$ are not locally homeomorphic, but I cannot write down a rigorous argument. Any help is appreciated, thanks! - -REPLY [4 votes]: Chris Culter's answer shows that there does exist such a map. However, it is impossible for such a map to be continuous. Indeed, suppose $f:\mathbb{R}\to\mathbb{R}^2$ is continuous and open.
Then $U=f((0,1))$ is open in $\mathbb{R}^2$, and $f([0,1])$ is compact and hence closed and bounded in $\mathbb{R}^2$. It follows that $U$ is bounded and that its closure is contained in $f([0,1])$, so the boundary $\partial U=\overline U\setminus U$ is contained in $f([0,1])\setminus U\subseteq f(\{0,1\})$. But any nonempty bounded open subset of $\mathbb{R}^2$ must have more than two boundary points (in fact, uncountably many). For instance, fix any point $p\in U$; then by boundedness of $U$, every line through $p$ must contain at least two boundary points of $U$ (one on each side of $p$).