tag | question_body | accepted_answer | second_answer
---|---|---|---|
linear-algebra | <p>A matrix is diagonalizable iff it has a basis of eigenvectors. Now, why is this satisfied in the case of a real symmetric matrix?</p>
| <p>Suppose the ground field is $\mathbb C$. It is immediate then that every square matrix can be triangulated. Now, symmetry certainly implies normality ($A$ is normal if $AA^t=A^tA$ in the real case, and $AA^*=A^*A$ in the complex case). Since normality is preserved by similarity, it follows that if $A$ is symmetric, then the triangular matrix that $A$ is similar to is normal. But obviously (compute!) the only normal triangular matrix is diagonal, so in fact $A$ is diagonalizable. </p>
<p>So it turns out that the criterion you mentioned for diagonalizability is not the most useful in this case. The one that is useful here is: A matrix is diagonalizable iff it is similar to a diagonal matrix. </p>
<p>Of course, the result shows that every normal matrix is diagonalizable. But symmetric matrices are much more special than just being normal, and indeed the argument above does not prove the stronger result that symmetric matrices are orthogonally diagonalizable. </p>
<p>Comment: To triangulate the matrix, use induction on the order of the matrix. For $1\times 1$ it's trivial. For $n\times n$, first find an eigenvector $v_1$ (one such must exist). Thinking of the matrix as a linear transformation on a vector space $V$ of dimension $n$, write $V$ as $V=V_1\oplus W$, where $V_1$ is the subspace spanned by $v_1$. Then $W$ is $(n-1)$-dimensional; apply the induction hypothesis to $A|_{W}$ to obtain a basis $v_2,\ldots, v_n$ in which $A|_W$ is triangular. It now follows that in the basis $v_1,\ldots, v_n$, $A$ is triangular.</p>
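<p>The "(compute!)" step, namely that a normal triangular matrix must already be diagonal, can be made concrete. A minimal Python sketch (not part of the original answer; the entries of $T$ are arbitrary illustrative values) comparing the $(0,0)$ entries of $TT^*$ and $T^*T$ for an upper-triangular $T$:</p>

```python
# For upper-triangular T = [[a, b], [0, c]], the (0,0) entry of T T* is
# |a|^2 + |b|^2 while that of T* T is |a|^2, so normality forces b = 0.
def conj_transpose(M):
    return [[M[j][i].conjugate() for j in range(len(M))]
            for i in range(len(M[0]))]

def matmul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(len(B)))
             for j in range(len(B[0]))] for i in range(len(A))]

T = [[1 + 2j, 3 - 1j], [0, 4j]]      # arbitrary upper-triangular matrix
TTs = matmul(T, conj_transpose(T))
TsT = matmul(conj_transpose(T), T)
print(TTs[0][0] - TsT[0][0])         # the difference is |b|^2 = |3 - 1j|^2 = 10
```

<p>Running the same entry-by-entry comparison down the diagonal is exactly the computation the answer alludes to.</p>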
| <p>This question is about the spectral theorem for (finite dimensional) real Euclidean spaces, which says that in such a space any self-adjoint operator is diagonalisable (over the real numbers) with mutually orthogonal eigenspaces (so that orthonormal bases of eigenvectors exist). This is of course a classic result that should be proved in any course on the subject, so you can look this up in any textbook. However there are quite a few questions on this site that more or less touch on this matter, yet no really satisfactory answers, so I thought this is as good a place as any for me to state what seems a good proof to me (the one I teach in my course).</p>
<p>Before starting, I would like to note this is a subtle result, as witnessed by the fact that it becomes false if one replaces the real numbers either by the rational numbers or by the complex numbers (the latter case can be salvaged by throwing in some complex conjugation, but I want to avoid the suggestion that the validity of the stated result somehow depends on that). Nonetheless there are a number of relevant considerations that are independent of the base field, which easy stuff I will state as preliminaries before we get to the meat of the subject.</p>
<p>First off, the matrix formulation in the question is just a restatement, in terms of the matrix of the operator with respect to any orthonormal basis, of the result I mentioned: under such expression the adjoint operator gets the transpose matrix, so a self-adjoint operator gets represented by a symmetric matrix. Since the basis used for such an expression bears no particular relation to our operator or the problem at hand (trying to find a basis of eigenvectors) it will be easier to ignore the matrix and reason directly in terms of the operator. The translation by expression in terms of matrices is easy and I will leave this aside; for reference, the claim I am proving translates to the claim that for every real symmetric matrix<span class="math-container">$~A$</span>, there is an orthogonal matrix <span class="math-container">$P$</span> (whose columns describe an orthonormal basis of eigenvectors) such that <span class="math-container">$P^{-1}AP=P^tAP$</span> is a diagonal matrix. It is worth noting that the converse is true and obvious: if <span class="math-container">$D$</span> is diagonal and <span class="math-container">$P$</span> orthogonal, then <span class="math-container">$A=PDP^t$</span> is symmetric (since <span class="math-container">$D^t=D$</span>).</p>
<p>A basic fact about adjoints is that for any operator <span class="math-container">$\phi$</span> on a Euclidean vector space<span class="math-container">$~V$</span>, whenever a subspace <span class="math-container">$W$</span> is stable under<span class="math-container">$~\phi$</span>, its orthogonal complement <span class="math-container">$W^\perp$</span> is stable under its adjoint<span class="math-container">$~\phi^*$</span>. For if <span class="math-container">$v\in W^\perp$</span> and <span class="math-container">$w\in W$</span>, then <span class="math-container">$\langle w\mid \phi^*(v)\rangle=\langle \phi(w)\mid v\rangle=0$</span> since <span class="math-container">$\phi(w)\in W$</span> and <span class="math-container">$v\in W^\perp$</span>, so that <span class="math-container">$\phi^*(v)\in W^\perp$</span>. Then for a <em>self-adjoint</em> operator <span class="math-container">$\phi$</span> (so with <span class="math-container">$\phi^*=\phi$</span>), the orthogonal complement of any <span class="math-container">$\phi$</span>-stable subspace is again <span class="math-container">$\phi$</span>-stable.</p>
<p>Now our focus will be on proving the following fact.</p>
<p><strong>Lemma.</strong> <em>Any self-adjoint operator <span class="math-container">$\phi$</span> on a real Euclidean vector space<span class="math-container">$~V$</span> of finite nonzero dimension has an eigenvector.</em></p>
<p>Assuming this for the moment, one easily proves our result by induction on the dimension. In dimension<span class="math-container">$~0$</span> the unique operator is diagonalisable, so the base case is trivial. Now assuming <span class="math-container">$\dim V>0$</span>, get an eigenvector <span class="math-container">$v_1$</span> by applying the lemma. The subspace <span class="math-container">$W=\langle v_1\rangle$</span> it spans is <span class="math-container">$\phi$</span>-stable by the definition of an eigenvector, and so <span class="math-container">$W^\perp$</span> is <span class="math-container">$\phi$</span>-stable as well. We can then restrict <span class="math-container">$\phi$</span> to a linear operator on <span class="math-container">$W^\perp$</span>, which is clearly self-adjoint, so our induction hypothesis gives us an orthonormal basis of <span class="math-container">$W^\perp$</span> consisting of eigenvectors for that restriction; call them <span class="math-container">$(v_2,\ldots,v_n)$</span>. Viewed as elements of <span class="math-container">$V$</span>, the vectors <span class="math-container">$v_2,\ldots,v_n$</span> are eigenvectors of<span class="math-container">$~\phi$</span>, and clearly the family <span class="math-container">$(v_1,\ldots,v_n)$</span> is orthonormal. It is an orthonormal basis of eigenvectors of<span class="math-container">$~\phi$</span>, and we are done.</p>
<p>So that was the easy stuff, the harder stuff is proving the lemma. As said, we need to use that the base field is the real numbers. I give two proofs, one in purely algebraic style, but which is based on the fundamental theorem of algebra and therefore uses the complex numbers, albeit in an indirect way, while the other uses a bit of topology and differential calculus but avoids complex numbers.</p>
<p>My first proof of the lemma is based on the fact that the irreducible polynomials in <span class="math-container">$\def\R{\Bbb R}\R[X]$</span> all have degree at most <span class="math-container">$2$</span>. This comes from decomposing the polynomial<span class="math-container">$~P$</span> into a leading coefficient and monic linear factors over the complex numbers (as one can by the fundamental theorem of algebra); any monic factor not already in <span class="math-container">$\R[X]$</span> must be <span class="math-container">$X-z$</span> with <span class="math-container">$z$</span> a non-real complex number, but then <span class="math-container">$X-\overline z$</span> is relatively prime to it and also a divisor of<span class="math-container">$~P$</span>, as is their product <span class="math-container">$(X-z)(X-\overline z)=X^2-2(\Re z)X+|z|^2$</span>, which lies in <span class="math-container">$\R[X]$</span>.</p>
<p>Given this we first establish (without using self-adjointness) in the context of the lemma the existence of a <span class="math-container">$\phi$</span>-stable subspace of nonzero dimension at most<span class="math-container">$~2$</span>. Start taking any monic polynomial <span class="math-container">$P\in\R[X]$</span> annihilating<span class="math-container">$~\phi$</span>; the minimal or characteristic polynomial will do, but we really only need the existence of such a polynomial which is easy. Factor <span class="math-container">$P$</span> into irreducibles in <span class="math-container">$\R[X]$</span>, say <span class="math-container">$P=P_1P_2\ldots P_k$</span>. Since <span class="math-container">$0=P[\phi]=P_1[\phi]\circ\cdots\circ P_k[\phi]$</span>, at least one of the <span class="math-container">$P_i[\phi]$</span> has nonzero kernel (indeed the sum of the dimensions of their kernels is at least <span class="math-container">$\dim V>0$</span>); choose such an<span class="math-container">$~i$</span>. If <span class="math-container">$\deg P_i=1$</span> then the kernel is a nonzero eigenspace, and any eigenvector in it spans a <span class="math-container">$\phi$</span>-stable subspace of dimension<span class="math-container">$~1$</span> (and of course we also directly get the conclusion of the lemma here). So we are left with the case <span class="math-container">$\deg P_i=2$</span>, in which case for any nonzero vector <span class="math-container">$v$</span> of its kernel, <span class="math-container">$v$</span> and <span class="math-container">$\phi(v)$</span> span a <span class="math-container">$\phi$</span>-stable subspace of dimension<span class="math-container">$~2$</span> (the point is that <span class="math-container">$\phi^2(v)$</span> lies in the subspace because <span class="math-container">$P_i[\phi](v)=0$</span>).</p>
<p>Now that we have a <span class="math-container">$\phi$</span>-stable subspace<span class="math-container">$~W$</span> of nonzero dimension at most<span class="math-container">$~2$</span>, we may restrict <span class="math-container">$\phi$</span> to <span class="math-container">$W$</span> and search for an eigenvector there, which will suffice. But this means it suffices to prove the lemma with the additional hypothesis <span class="math-container">$\dim V\leq2$</span>. Since the case <span class="math-container">$\dim V=1$</span> is trivial, that can be done by showing that the characteristic polynomial of any symmetric real <span class="math-container">$2\times2$</span> matrix has a real root. And that holds because its discriminant is non-negative, which is left as an easy exercise (it is a sum of squares).</p>
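<p>The "easy exercise" can also be spot-checked numerically; a short Python sketch (random entries, purely illustrative) verifying that the discriminant of the characteristic polynomial of a symmetric $2\times2$ matrix equals the sum of squares $(a-c)^2+4b^2$:</p>

```python
import random

# Char. poly of [[a, b], [b, c]] is x^2 - (a+c)x + (ac - b^2); its
# discriminant (a+c)^2 - 4(ac - b^2) simplifies to (a-c)^2 + 4b^2 >= 0,
# so a real symmetric 2x2 matrix always has a real eigenvalue.
random.seed(0)
for _ in range(1000):
    a, b, c = (random.uniform(-10, 10) for _ in range(3))
    disc = (a + c) ** 2 - 4 * (a * c - b * b)
    assert abs(disc - ((a - c) ** 2 + 4 * b * b)) < 1e-9
    assert disc >= 0
print("discriminant is a sum of squares, hence non-negative")
```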
<p>(Note that the final argument shows that for <span class="math-container">$\phi$</span> self-adjoint, the case <span class="math-container">$\deg P_i=2$</span> actually never occurs.)</p>
<hr />
<p>Here is a second proof of the lemma, less in the algebraic style I used so far, and less self-contained, but which I find more intuitive. It also suggests a practical method to find an eigenvector in the context of the lemma. Consider the real function<span class="math-container">$~f:V\to\R$</span> defined by <span class="math-container">$f:x\mapsto \langle x\mid \phi(x)\rangle$</span>. It is a quadratic function, therefore in particular differentiable and continuous. For its gradient at <span class="math-container">$p\in V$</span> one computes for <span class="math-container">$v\in V$</span> and <span class="math-container">$h\in\R$</span>:
<span class="math-container">$$
f(p+hv)=\langle p+hv\mid \phi(p+hv)\rangle
=f(p)+h\bigl(\langle p\mid \phi(v)\rangle
+\langle v\mid \phi(p)\rangle\bigr)
+h^2f(v)
\\=f(p)+2h\langle v\mid \phi(p)\rangle+h^2f(v)
$$</span>
(the latter equality by self-adjointness), so the gradient is <span class="math-container">$2\phi(p)$</span>. Now the point I will not prove is that the restriction of <span class="math-container">$f$</span> to the unit sphere <span class="math-container">$S=\{\,v\in V\mid\langle v\mid v\rangle=1\,\}$</span> attains its maximum somewhere on that sphere. There are many ways in analysis to show that this is true, where the essential points are that <span class="math-container">$f$</span> is continuous and that <span class="math-container">$S$</span> is compact. Now if <span class="math-container">$p\in S$</span> is such a maximum, then every tangent vector to<span class="math-container">$~S$</span> at<span class="math-container">$~p$</span> must be orthogonal to the gradient <span class="math-container">$2\phi(p)$</span> of <span class="math-container">$f$</span> at<span class="math-container">$~p$</span> (or else one could increase the value of <span class="math-container">$f(x)$</span> near <span class="math-container">$p$</span> by varying <span class="math-container">$x$</span> along <span class="math-container">$S$</span> in the direction of that tangent vector). The tangent space of<span class="math-container">$~S$</span> at <span class="math-container">$p$</span> is <span class="math-container">$p^\perp$</span>, so this statement means that <span class="math-container">$\phi(p)\in(p^\perp)^\perp$</span>. But we know that <span class="math-container">$(p^\perp)^\perp=\langle p\rangle$</span>, and <span class="math-container">$\phi(p)\in\langle p\rangle$</span> (with <span class="math-container">$p\neq 0$</span>) means precisely that <span class="math-container">$p$</span> is an eigenvector of<span class="math-container">$~\phi$</span>. (One may check it is one for the maximal eigenvalue of<span class="math-container">$~\phi$</span>.)</p>
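<p>This maximization argument also suggests a computation: for a symmetric $2\times2$ matrix, the maximum of $f$ over the unit circle should be the largest eigenvalue. A Python sketch (the matrix entries here are arbitrary illustrative values) comparing a grid search against the closed-form eigenvalue:</p>

```python
import math

# Maximize f(x) = <x, Ax> over the unit circle for A = [[a, b], [b, c]]
# and compare with the largest eigenvalue (a+c)/2 + sqrt(((a-c)/2)^2 + b^2).
a, b, c = 2.0, 1.0, -1.0
lam_max = (a + c) / 2 + math.sqrt(((a - c) / 2) ** 2 + b ** 2)

best = -float("inf")
n = 100000
for k in range(n):
    t = 2 * math.pi * k / n
    x, y = math.cos(t), math.sin(t)
    best = max(best, a * x * x + 2 * b * x * y + c * y * y)

print(abs(best - lam_max) < 1e-6)    # True: the sphere maximum is the top eigenvalue
```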
|
geometry | <p>I'm thinking about a circle rolling along a parabola. Would this be a parametric representation?</p>
<p>$(t + A\sin (Bt) , Ct^2 + A\cos (Bt) )$</p>
<p>A gives us the radius of the circle, B changes the frequency of the rotations, C, of course, varies the parabola. Now, if I want the circle to "match up" with the parabola as if they were both made of non-stretchy rope, what should I choose for B?</p>
<p>My first guess is 1. But the arc length of a parabola from 0 to 1 is much less than the length from 1 to 2. And, as I examine the graphs, it seems like I might need to vary B in order to get the graph that I want. Take a look:</p>
<p><img src="https://i.sstatic.net/voj4f.jpg" alt="I played with the constants until it looked ALMOST like what I had in mind."></p>
<p>This makes me think that the graph my equation produces will always be wrong no matter what constants I choose. It should look like a cycloid:</p>
<p><img src="https://i.sstatic.net/k3u17.jpg" alt="Cycloid"></p>
<p>But bent to fit on a parabola. [I started this because I wanted to know if such a curve could be self-intersecting. (I think yes.) When I was a child my mom asked me to draw what would happen if a circle rolled along the tray of the blackboard with a point on the rim tracing a line ... like most young people, I drew self-intersecting loops and my young mind was amazed to see that they did not intersect!]</p>
<p>So, other than checking to see if this is even going in the right direction, I would like to know if there is a point where the curve shown (or any curve in the family I described) is most like a cycloid.</p>
<p>Thanks.</p>
<p>"It would be really really hard to tell" is a totally acceptable answer, though it's my current answer, and I wonder if the folks here can make it a little better.</p>
| <p>(I had been meaning to blog about roulettes a while back, but since this question came up, I'll write about this topic here.)</p>
<p>I'll use the parametric representation</p>
<p>$$\begin{pmatrix}2at\\at^2\end{pmatrix}$$</p>
<p>for a parabola opening upwards, where $a$ is the focal length, or the length of the segment joining the parabola's vertex and focus. The arclength function corresponding to this parametrization is $s(t)=a(t\sqrt{1+t^2}+\mathrm{arsinh}(t))$.</p>
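<p>The arclength formula can be checked against direct numerical integration; a quick Python sketch (with hypothetical values $a=1$, integrating up to $t=2$):</p>

```python
import math

# For (2at, at^2) the speed is 2a*sqrt(1 + t^2); integrating it by the
# midpoint rule should reproduce s(t) = a (t sqrt(1+t^2) + arsinh t).
a, T = 1.0, 2.0
n = 200000
h = T / n
numeric = sum(2 * a * math.sqrt(1 + ((i + 0.5) * h) ** 2) for i in range(n)) * h
closed = a * (T * math.sqrt(1 + T * T) + math.asinh(T))
print(abs(numeric - closed) < 1e-6)   # True
```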
<p>user8268 gave a derivation for the "cycloidal" case, and Willie used unit-speed machinery, so I'll handle the generalization to the "trochoidal case", where the tracing point is not necessarily on the rolling circle's circumference.</p>
<p>Willie's comment shows how you should consider the notion of "rolling" in deriving the parametric equations: a rotation (about the wheel's center) followed by a rotation/translation. The first key is to consider that the amount of rotation needed for your "wheel" to roll should be equivalent to the arclength along the "base curve" (in your case, the parabola).</p>
<p>I'll start with a parametrization of a circle of radius $r$ tangent to the horizontal axis at the origin:</p>
<p>$$\begin{pmatrix}-r\sin\;u\\r-r\cos\;u\end{pmatrix}$$</p>
<p>This parametrization of the circle was designed such that a positive value of the parameter $u$ corresponds to a clockwise rotation of the wheel, and the origin corresponds to the parameter value $u=0$.</p>
<p>The arclength function for this circle is $ru$; for rolling this circle, we obtain the equivalence</p>
<p>$$ru=s(t)-s(c)$$</p>
<p>where $c$ is the parameter value corresponding to the point on the base curve where the rolling starts. Solving for $u$ and substituting the resulting expression into the circle equations yields</p>
<p>$$\begin{pmatrix}-r\sin\left(\frac{s(t)-s(c)}{r}\right)\\r-r\cos\left(\frac{s(t)-s(c)}{r}\right)\end{pmatrix}$$</p>
<p>So far, this is for the "cycloidal" case, where the tracing point is on the circumference. To obtain the "trochoidal" case, what is needed is to replace the $r$ multiplying the trigonometric functions with the quantity $hr$, the distance of the tracing point from the center of the rolling circle:</p>
<p>$$\begin{pmatrix}-hr\sin\left(\frac{s(t)-s(c)}{r}\right)\\r-hr\cos\left(\frac{s(t)-s(c)}{r}\right)\end{pmatrix}$$</p>
<p>At this point, I note that $r$ here can be a positive or a negative quantity. For your "parabolic trochoid", negative $r$ corresponds to the circle rolling outside the parabola and positive $r$ corresponds to rolling inside the parabola. $h=1$ is the "cycloidal" case; $h > 1$ is the "prolate" case (tracing point outside the rolling circle), and $0 < h < 1$ is the "curtate" case (tracing point within the rolling circle).</p>
<p>That only takes care of the rotation corresponding to "rolling"; to get the circle into the proper position, a further rotation and a translation has to be done. The further rotation needed is a rotation by the <a href="http://mathworld.wolfram.com/TangentialAngle.html">tangential angle</a> $\phi$, where for a parametrically-represented curve $(f(t)\quad g(t))^T$, $\tan\;\phi=\frac{g^\prime(t)}{f^\prime(t)}$. (In words: $\phi$ is the angle the tangent of the curve at a given $t$ value makes with the horizontal axis.)</p>
<p>We then substitute the expression for $\phi$ into the <em>anticlockwise</em> rotation matrix</p>
<p>$$\begin{pmatrix}\cos\;\phi&-\sin\;\phi\\\sin\;\phi&\cos\;\phi\end{pmatrix}$$</p>
<p>which yields</p>
<p>$$\begin{pmatrix}\frac{f^\prime(t)}{\sqrt{f^\prime(t)^2+g^\prime(t)^2}}&-\frac{g^\prime(t)}{\sqrt{f^\prime(t)^2+g^\prime(t)^2}}\\\frac{g^\prime(t)}{\sqrt{f^\prime(t)^2+g^\prime(t)^2}}&\frac{f^\prime(t)}{\sqrt{f^\prime(t)^2+g^\prime(t)^2}}\end{pmatrix}$$</p>
<p>For the parabola as I had parametrized it, the tangential angle rotation matrix is</p>
<p>$$\begin{pmatrix}\frac1{\sqrt{1+t^2}}&-\frac{t}{\sqrt{1+t^2}}\\\frac{t}{\sqrt{1+t^2}}&\frac1{\sqrt{1+t^2}}\end{pmatrix}$$</p>
<p>This rotation matrix can be multiplied with the "transformed circle" and then translated by the vector $(f(t)\quad g(t))^T$, finally resulting in the expression</p>
<p>$$\begin{pmatrix}f(t)\\g(t)\end{pmatrix}+\frac1{\sqrt{f^\prime(t)^2+g^\prime(t)^2}}\begin{pmatrix}f^\prime(t)&-g^\prime(t)\\g^\prime(t)&f^\prime(t)\end{pmatrix}\begin{pmatrix}-hr\sin\left(\frac{s(t)-s(c)}{r}\right)\\r-hr\cos\left(\frac{s(t)-s(c)}{r}\right)\end{pmatrix}$$</p>
<p>for a trochoidal curve. (What those last two transformations do, in words, is to rotate and shift the rolling circle appropriately such that the rolling circle touches an appropriate point on the base curve.)</p>
<p>Using this formula, the parametric equations for the "parabolic trochoid" (with starting point at the vertex, $c=0$) are</p>
<p>$$\begin{align*}x&=2at+\frac{r}{\sqrt{1+t^2}}\left(ht\cos\left(\frac{a}{r}\left(t\sqrt{1+t^2}+\mathrm{arsinh}(t)\right)\right)-t-h\sin\left(\frac{a}{r}\left(t\sqrt{1+t^2}+\mathrm{arsinh}(t)\right)\right)\right)\\y&=at^2-\frac{r}{\sqrt{1+t^2}}\left(h\cos\left(\frac{a}{r}\left(t\sqrt{1+t^2}+\mathrm{arsinh}(t)\right)\right)+ht\sin\left(\frac{a}{r}\left(t\sqrt{1+t^2}+\mathrm{arsinh}(t)\right)\right)-1\right)\end{align*}$$</p>
<p>A further generalization to a <em>space curve</em> can be made if the rolling circle is not coplanar to the parabola; I'll leave the derivation to the interested reader (hint: rotate the "transformed" rolling circle equation about the x-axis before applying the other transformations).</p>
<p>Now, for some plots:</p>
<p><img src="https://i.sstatic.net/ukYzm.png" alt="parabolic trochoids"></p>
<p>For this picture, I used a focal length $a=1$ and a radius $r=\frac34$ (negative for the "outer" ones and positive for the "inner" ones). The curtate, cycloidal, and prolate cases correspond to $h=\frac12,1,\frac32$.</p>
<hr>
<p>(added 5/2/2011)</p>
<p>I did promise to include animations and code, so here's a bunch of GIFs I had previously made in <em>Mathematica</em> 5.2:</p>
<p>Inner parabolic cycloid, $a=1,\;r=\frac34,\;h=1$</p>
<p><img src="https://i.sstatic.net/RfcAB.gif" alt="inner parabolic cycloid"></p>
<p>Curtate inner parabolic trochoid, $a=1,\;r=\frac34,\;h=\frac12$</p>
<p><img src="https://i.sstatic.net/vWh6M.gif" alt="curtate inner parabolic trochoid"></p>
<p>Prolate inner parabolic trochoid, $a=1,\;r=\frac34,\;h=\frac32$</p>
<p><img src="https://i.sstatic.net/YjTZR.gif" alt="prolate inner parabolic trochoid"></p>
<p>Outer parabolic cycloid, $a=1,\;r=-\frac34,\;h=1$</p>
<p><img src="https://i.sstatic.net/DhioB.gif" alt="outer parabolic cycloid"></p>
<p>Curtate outer parabolic trochoid, $a=1,\;r=-\frac34,\;h=\frac12$</p>
<p><img src="https://i.sstatic.net/pJtsO.gif" alt="curtate outer parabolic trochoid"></p>
<p>Prolate outer parabolic trochoid, $a=1,\;r=-\frac34,\;h=\frac32$</p>
<p><img src="https://i.sstatic.net/u5zSM.gif" alt="prolate outer parabolic trochoid"></p>
<p>The <em>Mathematica</em> code (unoptimized, sorry) is a bit too long to reproduce; those who want to experiment with parabolic trochoids can obtain a notebook from me upon request.</p>
<p>As a final bonus, here is an animation of a <em>three-dimensional</em> generalization of the prolate parabolic trochoid:</p>
<p><img src="https://i.sstatic.net/uNQsd.gif" alt="3D prolate parabolic trochoid"></p>
| <p>If I understand the question correctly:</p>
<p>Your parabola is $p(t)=(t,Ct^2)$. Its speed is $(1,2Ct)$; after normalization it is $v(t)=(1,2Ct)/\sqrt{1+(2Ct)^2}$, hence the unit normal vector is $n(t)=(-2Ct,1)/\sqrt{1+(2Ct)^2}$. The center of the circle is at $p(t)+An(t)$. The arc length of the parabola is $\int\sqrt{1+(2Ct)^2}dt= (2 C t \sqrt{4 C^2 t^2+1}+\sinh^{-1}(2 C t))/(4 C)=:a(t)$. The position of a marked point on the circle is $p(t)+An(t)+A\cos(a(t)-a(t_0))\,n(t)+A\sin(a(t)-a(t_0))\,v(t)$; that's the (rather complicated) curve you're looking for.</p>
<p><strong>edit:</strong> corrected a mistake found by Willie Wong</p>
|
linear-algebra | <blockquote>
<p>Let $ A, B $ be two square matrices of order $n$. Do $ AB $ and $ BA $ have same minimal and characteristic polynomials?</p>
</blockquote>
<p>I have a proof only if $ A$ or $ B $ is invertible. Is it true for all cases?</p>
| <p>Before proving that $AB$ and $BA$ have the same characteristic polynomial, show that if $A$ is $m\times n$ and $B$ is $n\times m$, then the characteristic polynomials of $AB$ and $BA$ satisfy the following statement: $$x^n|xI_m-AB|=x^m|xI_n-BA|$$ From this one easily concludes that if $m=n$, then $AB$ and $BA$ have the same characteristic polynomial.</p>
<p>Define $$C = \begin{bmatrix} xI_m & A \\B & I_n \end{bmatrix},\ D = \begin{bmatrix} I_m & 0 \\-B & xI_n \end{bmatrix}.$$ We have
$$
\begin{align*}
\det CD &= x^n|xI_m-AB|,\\
\det DC &= x^m|xI_n-BA|.
\end{align*}
$$
and we know $\det CD=\det DC$, so $x^n|xI_m-AB|=x^m|xI_n-BA|$; in particular, if $m=n$, then $AB$ and $BA$ have the same characteristic polynomial.</p>
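<p>The determinant identity is easy to spot-check at a sample value of $x$, even in the non-square case; a small Python sketch (hypothetical $2\times1$ and $1\times2$ matrices):</p>

```python
# Verify x^n |xI_m - AB| = x^m |xI_n - BA| for A (2x1) and B (1x2) at x = 5.
def matmul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(len(B)))
             for j in range(len(B[0]))] for i in range(len(A))]

def det2(M):
    return M[0][0] * M[1][1] - M[0][1] * M[1][0]

A = [[1], [2]]        # m = 2, n = 1
B = [[3, 4]]
AB = matmul(A, B)     # [[3, 4], [6, 8]]
BA = matmul(B, A)     # [[11]]

x = 5
lhs = x ** 1 * det2([[x - AB[0][0], -AB[0][1]], [-AB[1][0], x - AB[1][1]]])
rhs = x ** 2 * (x - BA[0][0])
print(lhs, rhs)       # -150 -150
```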
| <p>If $A$ is invertible then $A^{-1}(AB)A= BA$, so $AB$ and $BA$ are similar, which implies (but is stronger than) $AB$ and $BA$ have the same minimal polynomial and the same characteristic polynomial.
The same goes if $B$ is invertible.</p>
<p>In general, from the above observation, it is not too difficult to show that $AB$ and $BA$ have the same characteristic polynomial; the type of proof can depend on the field considered for the coefficients of your matrices, though.
If the matrices are in $\mathcal{M}_n(\mathbb C)$, you can use the fact that $\operatorname{GL}_n(\mathbb C)$ is dense in $\mathcal{M}_n(\mathbb C)$ and the continuity of the function which maps a matrix to its characteristic polynomial. There are at least 5 other ways to proceed (especially for fields other than $\mathbb C$).</p>
<p>In general $AB$ and $BA$ do not have the same minimal polynomial. I'll let you search a bit for a counter example.</p>
|
geometry | <p>The volume of a $d$ dimensional hypersphere of radius $r$ is given by:</p>
<p>$$V(r,d)=\frac{(\pi r^2)^{d/2}}{\Gamma\left(\frac{d}{2}+1\right)}$$</p>
<p>What intrigues me about this, is that $V\to 0$ as $d\to\infty$ for any fixed $r$. How can this be? For fixed $r$, I would have thought adding a dimension would make the volume bigger, but apparently it does not. Anyone got a good explanation?</p>
| <p>I suppose you could say that adding a dimension "makes the volume bigger" for the hypersphere, but it does so even more for the unit you measure the volume with, namely the unit <em>cube</em>. So the numerical value of the volume does go towards zero.</p>
<p>Really, of course, it is apples to oranges because volumes of different dimensions are not commensurable -- it makes no sense to compare the <em>area</em> of the unit disk with the <em>volume</em> of the unit sphere.</p>
<p>All we can say is that in higher dimensions, a hypersphere is a successively worse approximation to a hypercube (of side length twice the radius). They coincide in dimension one, and it goes downward from there.</p>
| <p>The reason is that the length of the cube's diagonal goes to infinity.</p>
<p>The cube in some sense does exactly what we expect. If its side length is $1$, it will have the same volume in any dimension. So let's take a cube centered at the origin with side length $r$. Then what is the smallest sphere which contains this cube? It would need to have radius $\frac{r\sqrt{d}}{2}$, so the radius of the required sphere still goes to infinity.</p>
<p>Perhaps this gives some intuition.</p>
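<p>Both answers can be illustrated by simply tabulating the formula from the question; a short Python sketch showing the numerical volume of the unit ball peaking around $d=5$ and then decaying to zero:</p>

```python
import math

# V(r, d) = (pi r^2)^(d/2) / Gamma(d/2 + 1), evaluated at r = 1.
def volume(r, d):
    return (math.pi * r * r) ** (d / 2) / math.gamma(d / 2 + 1)

for d in (1, 2, 3, 5, 10, 20, 50):
    print(d, volume(1.0, d))
# The values rise to about 5.26 at d = 5, then fall off towards zero.
```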
|
geometry | <p>This is an idea I have had in my head for years and years, and I would like to know the answer, and also whether it's somehow relevant to anything or just useless.
I describe my thoughts with the following image:<br>
<img src="https://i.sstatic.net/UnAyt.png" alt="enter image description here"><br>
What would the area of the "red almost half circle" on top of the third square be, assuming you rotate the diagonal of a square around its center, limiting its movement so it cannot pass through the bottom of the square?<br>
My guess would be: </p>
<p>$$\frac{\pi(h/2)^2 - a^2}{2}$$</p>
<p>And also, does this have any meaning? Have I been wandering around thinking about complete nonsense for so many years?</p>
| <p>I found this problem interesting enough to make a little animation along the line of @Blue's diagram (but I didn't want to edit their answer without permission):</p>
<p><img src="https://i.sstatic.net/5le9i.gif" alt="enter image description here"></p>
<p><em>Mathematica</em> syntax for those who are interested:</p>
<pre><code>G[d_, t_] := {t - (d t)/Sqrt[1 + t^2], d /Sqrt[1 + t^2]}
P[c_, m_] := Show[ParametricPlot[G[# Sqrt[8], t], {t, -4, 4},
PlotStyle -> {Dashed, Hue[#]}, PlotRange -> {{-1.025, 1.025}, {-.025,
2 Sqrt[2] + 0.025}}] & /@ (Range[m]/m),
ParametricPlot[G[Sqrt[8], t], {t, -1, 1}, PlotStyle -> {Red, Thick}],
Graphics[{Black, Disk[{0, 1}, .025], Opacity[0.1], Rectangle[{-1, 0}, {1, 2}],
Opacity[1], Line[{{c, 0}, G[Sqrt[8], c]}], Disk[{c, 0}, .025],
{Hue[#], Disk[G[# Sqrt[8], c], .025]} & /@ (Range[m]/m)}],
Axes -> False]
Manipulate[P[c, m], {c, -1, 1}, {m, 1, 20, 1}]
</code></pre>
| <p><img src="https://i.sstatic.net/0Z1P6.jpg" alt=""></p>
<p>Let $O$ be the center of the square, and let $\ell(\theta)$ be the line through $O$ that makes an angle $\theta$ with the horizontal line.
The line $\ell(\theta)$ intersects with the lower side of the square at a point $M_\theta$, with
$OM_\theta=\dfrac{a}{2\sin \theta }$. So, if $N_\theta$ is the other end of our 'rotating' diagonal then we have
$$ON_\theta=\rho(\theta)=h-OM_\theta=a\sqrt{2}-\dfrac{a}{2\sin \theta }.$$
Now, the area traced by $ON_\theta$ as $\theta$ varies between $\pi/4$ and $3\pi/4$ is our desired area augmented by the area of the quarter of the square. So, the desired area is
$$\eqalign{
\mathcal{A}&=\frac{1}{2}\int_{\pi/4}^{3\pi/4}\rho^2(\theta)\,d\theta-\frac{a^2}{4}\cr
&=a^2\int_{\pi/4}^{\pi/2}\left(\sqrt{2}-\frac{1}{2\sin\theta}\right)^2\,d\theta-\frac{a^2}{4}\cr
&=a^2\left(\frac{\pi}{2}-\sqrt{2}\ln(1+\sqrt{2})\right)
}
$$
Therefore, the correct answer is about $13.6\%$ larger than the conjectured answer.</p>
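<p>The closed form (and the $13.6\%$ figure) can be verified numerically; a Python sketch using a midpoint rule with $a=1$:</p>

```python
import math

# Area = 1/2 * integral of rho^2 over [pi/4, 3pi/4] minus a^2/4, with
# rho(theta) = sqrt(2) - 1/(2 sin(theta)) and a = 1.
n = 200000
lo, hi = math.pi / 4, 3 * math.pi / 4
h = (hi - lo) / n
integral = sum((math.sqrt(2) - 1 / (2 * math.sin(lo + (i + 0.5) * h))) ** 2
               for i in range(n)) * h
area = integral / 2 - 0.25
closed = math.pi / 2 - math.sqrt(2) * math.log(1 + math.sqrt(2))
conjectured = math.pi / 4 - 0.5      # the question's guess, with h = a*sqrt(2)
print(abs(area - closed) < 1e-6)     # True
print(round(area / conjectured, 3))  # 1.136, i.e. about 13.6% larger
```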
|
linear-algebra | <blockquote>
<p>Show that the determinant of a matrix $A$ is equal to the product of its eigenvalues $\lambda_i$.</p>
</blockquote>
<p>So I'm having a tough time figuring this one out. I know that I have to work with the characteristic polynomial of the matrix $\det(A-\lambda I)$. But, when considering an $n \times n$ matrix, I do not know how to work out the proof. Should I just use the determinant formula for any $n \times n$ matrix? I'm guessing not, because that is quite complicated. Any insights would be great.</p>
| <p>Suppose that <span class="math-container">$\lambda_1, \ldots, \lambda_n$</span> are the eigenvalues of <span class="math-container">$A$</span>. Then the <span class="math-container">$\lambda$</span>s are also the roots of the characteristic polynomial, i.e.</p>
<p><span class="math-container">$$\begin{array}{rcl} \det (A-\lambda I)=p(\lambda)&=&(-1)^n (\lambda - \lambda_1 )(\lambda - \lambda_2)\cdots (\lambda - \lambda_n) \\ &=&(-1) (\lambda - \lambda_1 )(-1)(\lambda - \lambda_2)\cdots (-1)(\lambda - \lambda_n) \\ &=&(\lambda_1 - \lambda )(\lambda_2 - \lambda)\cdots (\lambda_n - \lambda)
\end{array}$$</span></p>
<p>The first equality follows from the factorization of a polynomial given its roots; the leading (highest degree) coefficient <span class="math-container">$(-1)^n$</span> can be obtained by expanding the determinant along the diagonal.</p>
<p>Now, by setting <span class="math-container">$\lambda$</span> to zero (simply because it is a variable) we get on the left side <span class="math-container">$\det(A)$</span>, and on the right side <span class="math-container">$\lambda_1 \lambda_2\cdots\lambda_n$</span>, that is, we indeed obtain the desired result</p>
<p><span class="math-container">$$ \det(A) = \lambda_1 \lambda_2\cdots\lambda_n$$</span></p>
<p>So the determinant of the matrix is equal to the product of its eigenvalues.</p>
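<p>A concrete instance of this argument, sketched in Python for a $2\times2$ matrix: the characteristic polynomial is $x^2-\operatorname{tr}(A)\,x+\det(A)$, so its two roots multiply to $\det(A)$.</p>

```python
import math

# For A = [[2, 1], [1, 2]]: trace 4, determinant 3, eigenvalues 3 and 1.
A = [[2.0, 1.0], [1.0, 2.0]]
tr = A[0][0] + A[1][1]
det = A[0][0] * A[1][1] - A[0][1] * A[1][0]
disc = tr * tr - 4 * det
l1 = (tr + math.sqrt(disc)) / 2      # roots of x^2 - tr*x + det
l2 = (tr - math.sqrt(disc)) / 2
print(l1, l2, l1 * l2, det)          # 3.0 1.0 3.0 3.0
```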
| <p>I am a beginning Linear Algebra learner and this is just my humble opinion. </p>
<p>One idea presented above is that </p>
<p>Suppose that $\lambda_1,\ldots, \lambda_n$ are the eigenvalues of $A$. </p>
<p>Then the $\lambda$s are also the roots of the characteristic polynomial, i.e.</p>
<p>$$\det(A-\lambda I)=(\lambda_1-\lambda)(\lambda_2-\lambda)\cdots(\lambda_n-\lambda).$$</p>
<p>Now, by setting $\lambda$ to zero (simply because it is a variable) we get on the left side $\det(A)$, and on the right side $\lambda_1\lambda_2\ldots \lambda_n$, that is, we indeed obtain the desired result</p>
<p>$$\det(A)=\lambda_1\lambda_2\ldots \lambda_n.$$</p>
<p>I don't think that this works in general, but only in the case when $\det(A) = 0$. </p>
<p>Because, when we write down the characteristic equation, we use the relation $\det(A - \lambda I) = 0$. Following the same logic, the only case where $\det(A - \lambda I) = \det(A) = 0$ is when $\lambda = 0$.
The relationship $\det(A - \lambda I) = 0$ must be obeyed even in the special case $\lambda = 0$, which implies $\det(A) = 0$.</p>
<p><strong>UPDATED POST</strong></p>
<p>Here I propose a way to prove the theorem for the 2 by 2 case.
Let $A$ be a 2 by 2 matrix. </p>
<p>$$ A = \begin{pmatrix} a_{11} & a_{12}\\ a_{21} & a_{22}\\\end{pmatrix}$$ </p>
<p>The idea is to use a certain property of determinants, </p>
<p>$$ \begin{vmatrix} a_{11} + b_{11} & a_{12} \\ a_{21} + b_{21} & a_{22}\\\end{vmatrix} = \begin{vmatrix} a_{11} & a_{12}\\ a_{21} & a_{22}\\\end{vmatrix} + \begin{vmatrix} b_{11} & a_{12}\\b_{21} & a_{22}\\\end{vmatrix}$$</p>
<p>Let $ \lambda_1$ and $\lambda_2$ be the 2 eigenvalues of the matrix $A$. (The eigenvalues can be distinct, or repeated, real or complex it doesn't matter.)</p>
<p>The two eigenvalues $\lambda_1$ and $\lambda_2$ must satisfy the following condition :</p>
<p>$$\det (A -\lambda I) = 0 $$
where $\lambda$ is an eigenvalue of $A$.</p>
<p>Therefore,
$$\begin{vmatrix} a_{11} - \lambda & a_{12} \\ a_{21} & a_{22} - \lambda\\\end{vmatrix} = 0 $$</p>
<p>Therefore, using the property of determinants provided above, I will try to <em>decompose</em> the determinant into parts. </p>
<p>$$\begin{vmatrix} a_{11} - \lambda & a_{12} \\ a_{21} & a_{22} - \lambda\\\end{vmatrix} = \begin{vmatrix} a_{11} & a_{12} \\ a_{21} & a_{22} - \lambda\\\end{vmatrix} - \begin{vmatrix} \lambda & 0 \\ a_{21} & a_{22} - \lambda\\\end{vmatrix}= \begin{vmatrix} a_{11} & a_{12} \\ a_{21} & a_{22}\\\end{vmatrix} - \begin{vmatrix} a_{11} & a_{12} \\ 0 & \lambda \\\end{vmatrix}-\begin{vmatrix} \lambda & 0 \\ a_{21} & a_{22} - \lambda\\\end{vmatrix}$$</p>
<p>The final determinant can be further reduced. </p>
<p>$$
\begin{vmatrix} \lambda & 0 \\ a_{21} & a_{22} - \lambda\\\end{vmatrix} = \begin{vmatrix} \lambda & 0 \\ a_{21} & a_{22} \\\end{vmatrix} - \begin{vmatrix} \lambda & 0\\ 0 & \lambda\\\end{vmatrix}
$$</p>
<p>Substituting the final determinant, we will have </p>
<p>$$
\begin{vmatrix} a_{11} - \lambda & a_{12} \\ a_{21} & a_{22} - \lambda\\\end{vmatrix} = \begin{vmatrix} a_{11} & a_{12} \\ a_{21} & a_{22}\\\end{vmatrix} - \begin{vmatrix} a_{11} & a_{12} \\ 0 & \lambda \\\end{vmatrix} - \begin{vmatrix} \lambda & 0 \\ a_{21} & a_{22} \\\end{vmatrix} + \begin{vmatrix} \lambda & 0\\ 0 & \lambda\\\end{vmatrix} = 0
$$</p>
<p>In a polynomial
$$ a_{n}\lambda^n + a_{n-1}\lambda^{n-1} + \cdots + a_{1}\lambda + a_{0} = 0$$
the product of the roots equals $(-1)^n a_{0}/a_{n}$, so the constant term $a_{0}$ carries the product of the roots.</p>
<p>From the decomposed determinant, the only term which doesn't involve $\lambda$ would be the first term </p>
<p>$$
\begin{vmatrix} a_{11} & a_{12} \\ a_{21} & a_{22} \\\end{vmatrix} = \det (A)
$$</p>
<p>Therefore, the product of roots aka product of eigenvalues of $A$ is equivalent to the determinant of $A$. </p>
<p>I am having difficulty generalizing this idea of proof to the $n$ by $n$ case, though, as it is complex and time-consuming for me. </p>
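<p>The punchline of the decomposition above - that the $\lambda$-free term of the characteristic polynomial is $\det(A)$ - can be spot-checked numerically. A minimal sketch (assuming <code>numpy</code>; the example matrix is arbitrary):</p>

```python
import numpy as np

A = np.array([[2.0, 7.0],
              [1.0, 4.0]])          # arbitrary 2x2 example

# Coefficients of the characteristic polynomial det(lam*I - A),
# highest degree first: [1, -trace(A), det(A)] for a 2x2 matrix.
coeffs = np.poly(A)

# The lam-free coefficient is det(A), i.e. the product of the eigenvalues.
assert np.isclose(coeffs[-1], np.linalg.det(A))
assert np.isclose(coeffs[-1], np.prod(np.linalg.eigvals(A)).real)
```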
|
logic | <p>Most of the systems mathematicians are interested in are consistent, which means, by Gödel's incompleteness theorems, that there must be unprovable statements.</p>
<p>I've seen a simple natural language statement here and elsewhere that's supposed to illustrate this: "I am not a provable statement." which leads to a paradox if false and logical disconnect if true (i.e. logic doesn't work to prove it by definition). Like this answer explains: <a href="https://math.stackexchange.com/a/453764/197692">https://math.stackexchange.com/a/453764/197692</a>.</p>
<p>The natural language statement is simple enough for people to get why there's a problem here. But Gödel's incompleteness theorems show that similar statements exist within mathematical systems.</p>
<p>My question then is, are there a <em>simple</em> unprovable statements, that would seem intuitively true to the layperson, or is intuitively unprovable, to illustrate the same concept in, say, integer arithmetic or algebra?</p>
<p>My understanding is that the continuum hypothesis is an example of an unprovable statement in Zermelo-Fraenkel set theory, but that's not really simple or intuitive.</p>
<p>Can someone give a good example you can point to and say "That's what Gödel's incompleteness theorems are talking about"? Or is this just something that is fundamentally hard to show mathematically?</p>
<p>Update:
There are some fantastic answers here that are certainly accessible. It will be difficult to pick a "right" one.</p>
<p>Originally I was hoping for something a high school student could understand, without having to explain axiomatic set theory, or Peano Arithmetic, or countable versus uncountable, or non-euclidean geometry. But the impression I am getting is that in a sufficiently well developed mathematical system, mathematicians have plumbed the depths of it to the point where potentially unprovable statements either remain as conjecture and are therefore hard to grasp by nature (because very smart people are stumped by them), or, once shown to be unprovable, become axiomatic in some new system or branch of systems.</p>
| <p>Here's a nice example that I think is easier to understand than the usual examples of Goodstein's theorem, Paris-Harrington, etc. Take a countably infinite paint box; this means that it has one color of paint for each positive integer; we can therefore call the colors <span class="math-container">$C_1, C_2, $</span> and so on. Take the set of real numbers, and imagine that each real number is painted with one of the colors of paint.</p>
<p>Now ask the question: Are there four real numbers <span class="math-container">$a,b,c,d$</span>, all painted the same color, and not all zero, such that <span class="math-container">$$a+b=c+d?$$</span></p>
<p>It seems reasonable to imagine that the answer depends on how exactly the numbers have been colored. For example, if you were to color every real number with color <span class="math-container">$C_1$</span>, then obviously there are <span class="math-container">$a,b,c,d$</span> satisfying the two desiderata. But one can at least entertain the possibility that if the real numbers were colored in a sufficiently complicated way, there would not be four numbers of the same color with <span class="math-container">$a+b=c+d$</span>; perhaps a sufficiently clever painter could arrange that for any four numbers with <span class="math-container">$a+b=c+d$</span> there would always be at least one of a different color than the rest.</p>
<p>So now you can ask the question: Must such <span class="math-container">$a,b,c,d$</span> exist <em>regardless</em> of how cleverly the numbers are actually colored?</p>
<p>And the answer, proved by Erdős in 1943 is: yes, <em>if and only if the continuum hypothesis is false</em>, and is therefore independent of the usual foundational axioms for mathematics.</p>
<hr>
<p>The result is mentioned in </p>
<ul>
<li>Fox, Jacob “<a href="http://math.mit.edu/~fox/paper-foxrado.pdf" rel="noreferrer">An infinite color analogue of Rado's theorem</a>”, Journal of Combinatorial Theory Series A <strong>114</strong> (2007), 1456–1469.</li>
</ul>
<p>Fox says that the result I described follows from a more general result of Erdős and Kakutani, that the continuum hypothesis is equivalent to there being a countable coloring of the reals such that each monochromatic subset is linearly independent over <span class="math-container">$\Bbb Q$</span>, which is proved in:</p>
<ul>
<li>Erdős, P and S. Kakutani “<a href="http://projecteuclid.org/euclid.bams/1183505209" rel="noreferrer">On non-denumerable graphs</a>”, Bull. Amer. Math. Soc. <strong>49</strong> (1943) 457–461.</li>
</ul>
<p>A proof for the <span class="math-container">$a+b=c+d$</span> situation, originally proved by Erdős, is given in:</p>
<ul>
<li>Davies, R.O. “<a href="http://journals.cambridge.org/action/displayAbstract?fromPage=online&aid=2073404" rel="noreferrer">Partitioning the plane into denumerably many sets without repeated distance</a>” Proc. Cambridge Philos. Soc. <strong>72</strong> (1972) 179–183.</li>
</ul>
| <p>Any statement which is not logically valid (read: always true) is unprovable. The statement $\exists x\exists y(x>y)$ is not provable from the theory of linear orders, since it is false in the singleton order. On the other hand, it is not disprovable since any other order type would satisfy it.</p>
<p>The statement $\exists x(x^2-2=0)$ is not provable from the axioms of the field, since $\Bbb Q$ thinks this is false, and $\Bbb C$ thinks it is true.</p>
<p>The statement "$G$ is an Abelian group" is not provable since given a group $G$ it can be Abelian and it could be non-Abelian.</p>
<p>The statement "$f\colon\Bbb{R\to R}$ is continuous/differentiable/continuously differentiable/smooth/analytic/a polynomial" and so on and so forth, are all unprovable, because just like that given an arbitrary function we don't know anything about it. Even if we know it is continuous we can't know if it is continuously differentiable, or smooth, or anything else. So these are all additional assumptions we have to make.</p>
<p>Of course, given a particular function, like $f(x)=e^x$ we can sit down and prove things about it, but the statement "$f$ is a continuous function" cannot be proved or disproved until further assumptions are added.</p>
<p>And that's the point that I am trying to make here. Every statement which cannot be always proved will be unprovable from some assumptions. But you ask for an intuitive statement, and that causes a problem.</p>
<p>The problem with "intuitive statement" is that the more you work in mathematics, the more your intuition is decomposed and reconstructed according to the topic you work with. The continuum hypothesis is perfectly intuitive and simple for me, it is true that understanding <strong>how</strong> it can be unprovable is difficult, but the statement itself is not very difficult once you cleared up the basic notions like cardinality and power sets.</p>
<p>Finally, let me just add that there are plenty of theories which are complete and consistent and we work with them. Some of them are even recursively enumerable. The incompleteness theorem gives us <em>three</em> conditions from which incompleteness follows, any two won't suffice. (1) Consistent, (2) Recursively enumerable, (3) Interprets arithmetics.</p>
<p>There are complete theories which satisfy the first two, and there are complete theories which are consistent and interpret arithmetics, and of course any inconsistent theory is complete.</p>
|
probability | <p>Could you kindly list here all the criteria you know which guarantee that a <em>continuous local martingale</em> is in fact a true martingale? Which of these are valid for a general local martingale (non necessarily continuous)? Possible references to the listed results would be appreciated.</p>
| <p>Here you are :</p>
<p>From Protter's book "Stochastic Integration and Differential Equations" Second Edition (page 73 and 74)</p>
<p>First:
Let $M$ be a local martingale. Then $M$ is a martingale with
$E(M_t^2) < \infty, \forall t > 0$, if and only if $E([M,M]_t) < \infty, \forall t > 0$. If $E([M,M]_t) < \infty$, then $E(M_t^2) = E([M,M]_t)$. </p>
<p>Second:</p>
<p>If $M$ is a local martingale and $E([M, M]_\infty) < \infty$, then $M$ is a
square integrable martingale (i.e. $sup_{t>0} E(M_t^2) = E(M_\infty^2) < \infty$). Moreover $E(M_t^2) = E([M, M]_t), \forall t \in [0,\infty]$. </p>
<p>Third:</p>
<p>From George Lowther's Fantastic Blog, for positive Local Martingales that are (shall I say) weak-unique solution of some SDEs.</p>
<p>Take a look at it by yourself:
<a href="http://almostsure.wordpress.com/category/stochastic-processes/">http://almostsure.wordpress.com/category/stochastic-processes/</a></p>
<p>Fourth:</p>
<p>For a positive continuous local martingale $Y$ that can be written as the Doléans-Dade exponential of a (continuous) local martingale $M$: if $E(e^{\frac{1}{2}[M,M]_\infty})<\infty$ (that's Novikov's condition on $M$), then $Y$ is a uniformly integrable martingale. (I think there are some variants on the same theme.)</p>
<p>I think I can remember reading a paper with another criterion, but I don't have it with me right now. I'll try to find it and add that criterion here when I do.</p>
<p>Regards</p>
| <p>I found by myself other criteria that I think it is worth adding to this list.</p>
<p>5) $M$ is a local martingale of class DL iff $M$ is a martingale</p>
<p>6) If $M$ is a bounded local martingale, then it is a martingale.</p>
<p>7) If $M$ is a local martingale and $E(\sup_{s \in [0,t]} |M_s|) < \infty \, \forall t \geq 0$, then $M$ is a martingale.</p>
<p>8) Let $M$ be a local martingale and $(T_n)$ a reducing sequence for it. If $E(\sup_{n} |M_{t \wedge T_n}|) < \infty \, \forall t \geq 0$, then $M$ is a martingale.</p>
<p>9) Suppose we have a process $(M_t)_{t\geq 0}$ of the form $M_t=f(t,W_t)$. Then $M$ is a local martingale iff $(\frac{\partial}{\partial t}+\frac{1}{2}\frac{\partial^2}{\partial x^2})f(t,x)=0$.
If moreover $\forall \, \varepsilon >0$ $\exists C(\varepsilon,t)$ such that $|f(s,x)|\leq C e^{\varepsilon x^2} \, \forall s \geq 0$, then $M$ is a martingale.</p>
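<p>Criterion 9 can be illustrated symbolically. A sketch (assuming <code>sympy</code>) with the classical example $f(t,x) = e^{x - t/2}$, for which $M_t = e^{W_t - t/2}$ is the Doléans-Dade exponential of Brownian motion:</p>

```python
import sympy as sp

t, x = sp.symbols('t x', positive=True)

# Classical example: f(t, x) = exp(x - t/2), so M_t = exp(W_t - t/2)
# is the Doleans-Dade exponential of Brownian motion.
f = sp.exp(x - t / 2)

# Heat-equation operator from criterion 9: (d/dt + (1/2) d^2/dx^2) f = 0.
L = sp.diff(f, t) + sp.Rational(1, 2) * sp.diff(f, x, 2)
assert sp.simplify(L) == 0

# Another example: g(t, x) = x^2 - t, corresponding to the local
# martingale W_t^2 - t, also satisfies the equation.
g = x**2 - t
assert sp.simplify(sp.diff(g, t) + sp.Rational(1, 2) * sp.diff(g, x, 2)) == 0
```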
|
geometry | <p>In the textbook I am reading, it says a dimension is the number of independent parameters needed to specify a point. In order to describe a point on a circle, you need two numbers to specify its $x$ and $y$ position, but apparently a circle can be described with only the $x$-coordinate? How is this possible without the $y$-coordinate also?</p>
| <p>Suppose we're talking about a unit circle. We could specify any point on it as:
$$(\sin(\theta),\cos(\theta))$$
which uses only one parameter. We could also notice that there are only $2$ points with a given $x$ coordinate:
$$(x,\pm\sqrt{1-x^2})$$
and we would generally not consider having to specify a sign as being an additional parameter, since it is discrete, whereas we consider only continuous parameters for dimension.</p>
<p>That said, a <a href="http://en.wikipedia.org/wiki/Hilbert_curve">Hilbert curve</a> or <a href="http://en.wikipedia.org/wiki/Z-order_curve">Z-order curve</a> parameterizes a square in just one parameter, but we would certainly not say a square is one dimensional. The definition of dimension that you were given is kind of sloppy - really, the fact that the circle is of dimension one can be taken more to mean "If you zoom in really close to a circle, it looks basically like a line" and this happens, more or less, to mean that it can be parameterized in one variable.</p>
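<p>The one-parameter description can be sanity-checked numerically. A small sketch (standard library only):</p>

```python
import math

# Sample the one-parameter description (sin(theta), cos(theta)) and check
# that every sampled point lies on the unit circle x^2 + y^2 = 1.
for k in range(100):
    theta = 2 * math.pi * k / 100
    x, y = math.sin(theta), math.cos(theta)
    assert abs(x * x + y * y - 1.0) < 1e-12

# Conversely, a given x in (-1, 1) pins the point down up to a discrete sign,
# which we do not count as an extra continuous parameter.
x = 0.6
for sign in (+1, -1):
    y = sign * math.sqrt(1 - x * x)
    assert abs(x * x + y * y - 1.0) < 1e-12
```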
| <p>Continuing ploosu2, the circle can be parameterized with one parameter (even for those who have not studied trig functions)...
$$
x = \frac{2t}{1+t^2},\qquad y=\frac{1-t^2}{1+t^2}
$$</p>
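<p>That this rational parameterization really stays on the unit circle is a quick numerical check. A sketch (standard library only):</p>

```python
import math

# Rational (tangent half-angle) parameterization: x = 2t/(1+t^2),
# y = (1-t^2)/(1+t^2). One real parameter t covers the whole circle
# except the single limit point (0, -1) as t -> +/- infinity.
for k in range(-50, 51):
    t = k / 10
    d = 1 + t * t
    x, y = 2 * t / d, (1 - t * t) / d
    assert abs(x * x + y * y - 1.0) < 1e-12
```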
|
probability | <p>What's the difference between <em>probability density function</em> and <em>probability distribution function</em>? </p>
| <p><strong>Distribution Function</strong></p>
<ol>
<li>The probability distribution function / probability function has an ambiguous definition. It may refer to:
<ul>
<li>Probability density function (PDF) </li>
<li>Cumulative distribution function (CDF)</li>
<li>or probability mass function (PMF) (statement from Wikipedia)</li>
</ul></li>
<li>But what is certain is:
<ul>
<li>Discrete case: Probability Mass Function (PMF)</li>
<li>Continuous case: Probability Density Function (PDF)</li>
<li>Both cases: Cumulative distribution function (CDF)</li>
</ul></li>
<li>Probability at a given value <span class="math-container">$x$</span>, <span class="math-container">$P(X = x)$</span>, can be directly obtained from:
<ul>
<li>the PMF in the discrete case</li>
<li>the PDF in the continuous case - though there it is a probability <em>density</em>, not a probability (<span class="math-container">$P(X = x) = 0$</span> at any single point)</li>
</ul></li>
<li>Probability for values less than <span class="math-container">$x$</span>, <span class="math-container">$P(X < x)$</span> or Probability for values within a range from <span class="math-container">$a$</span> to <span class="math-container">$b$</span>, <span class="math-container">$P(a < X < b)$</span> can be directly obtained in:
<ul>
<li>CDF for both discrete / continuous case</li>
</ul></li>
<li>"Distribution function" usually refers to the CDF or the Cumulative Frequency Function (see <a href="http://mathworld.wolfram.com/DistributionFunction.html" rel="noreferrer">this</a>)</li>
</ol>
<p><strong>In terms of Acquisition and Plot Generation Method</strong></p>
<ol>
<li>Collected data appear as discrete when:
<ul>
<li>The measurement of a subject is naturally of a discrete type, such as numbers resulting from dice rolls or counts of people.</li>
<li>The measurement is digitized machine data, which has no intermediate values between quantized levels due to the sampling process.</li>
<li>In the latter case, the higher the resolution, the closer the measurement is to an analog/continuous signal/variable.</li>
</ul></li>
<li>How to generate a PMF from discrete data:
<ul>
<li>Plot a histogram of the data for all the <span class="math-container">$x$</span>'s, the <span class="math-container">$y$</span>-axis is the frequency or quantity at every <span class="math-container">$x$</span>.</li>
<li>Scale the <span class="math-container">$y$</span>-axis by dividing by the total number of data collected (the data size) <span class="math-container">$\longrightarrow$</span> this is called the PMF.</li>
</ul></li>
<li>How to generate a PDF from discrete / continuous data:
<ul>
<li>Find a continuous equation that models the collected data, say the normal distribution equation.</li>
<li>Calculate the parameters required in the equation from the collected data. For example, parameters for normal distribution equation are mean and standard deviation. Calculate them from collected data.</li>
<li>Based on the parameters, plot the equation with continuous <span class="math-container">$x$</span>-value <span class="math-container">$\longrightarrow$</span> that is called PDF.</li>
</ul></li>
<li>How to generate a CDF:
<ul>
<li>In the discrete case, the CDF accumulates the <span class="math-container">$y$</span> values of the PMF at each discrete <span class="math-container">$x$</span> and all values less than <span class="math-container">$x$</span>. Repeat this for every <span class="math-container">$x$</span>. The final plot is monotonically increasing, reaching <span class="math-container">$1$</span> at the last <span class="math-container">$x$</span> <span class="math-container">$\longrightarrow$</span> this is called the discrete CDF.</li>
<li>In the continuous case, integrate the PDF over <span class="math-container">$x$</span>; the result is a continuous CDF.</li>
</ul></li>
</ol>
<p><strong>Why PMF, PDF and CDF?</strong></p>
<ol>
<li>PMF is preferred when
<ul>
<li>The probability at every <span class="math-container">$x$</span> value is of interest. This makes sense when studying discrete data - for example, when we are interested in the probability of getting a certain number from a dice roll.</li>
</ul></li>
<li>PDF is preferred when
<ul>
<li>We wish to model collected data with a continuous function, using a few parameters such as the mean to estimate the population distribution.</li>
</ul></li>
<li>CDF is preferred when
<ul>
<li>The cumulative probability over a range is the point of interest. </li>
<li>Especially in the case of continuous data, the CDF makes much more sense than the PDF - e.g., the probability that a student's height is less than <span class="math-container">$170$</span> cm (CDF) is much more informative than the density at exactly <span class="math-container">$170$</span> cm (PDF).</li>
</ul></li>
</ol>
| <p>The relation between the probability density funtion <span class="math-container">$f$</span> and the cumulative distribution function <span class="math-container">$F$</span> is...</p>
<ul>
<li><p>if <span class="math-container">$f$</span> is discrete:
<span class="math-container">$$
F(k) = \sum_{i \le k} f(i)
$$</span></p></li>
<li><p>if <span class="math-container">$f$</span> is continuous:
<span class="math-container">$$
F(x) = \int_{y \le x} f(y)\,dy
$$</span></p></li>
</ul>
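<p>The discrete relation above can be illustrated with a fair die. A sketch (standard library only):</p>

```python
from fractions import Fraction
from itertools import accumulate

# PMF of a fair six-sided die: P(X = k) = 1/6 for k = 1..6.
pmf = {k: Fraction(1, 6) for k in range(1, 7)}

# CDF by accumulating the PMF: F(k) = sum of f(i) for i <= k.
cdf = dict(zip(pmf, accumulate(pmf.values())))

assert cdf[3] == Fraction(1, 2)      # P(X <= 3) = 1/2
assert cdf[6] == 1                   # the CDF reaches 1 at the largest value

# Probability of a range from the CDF: P(2 < X <= 5) = F(5) - F(2).
assert cdf[5] - cdf[2] == Fraction(1, 2)
```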
|
logic | <p>For some reason, be it some bad habit or something else, I can not understand why the statement "p only if q" would translate into p implies q. For instance, I have the statement "Samir will attend the party only if Kanti will be there." The way I interpret this is, "It is true that Samir will attend the party only if it is true that Kanti will be at the party;" which, in my mind, becomes "If Kanti will be at the party, then Samir will be there." </p>
<p>Can someone convince me of the right way?</p>
<p>EDIT:</p>
<p>I have read them carefully, and probably have done so for over a year. I understand what sufficient conditions and necessary conditions are. I understand the conditional relationship in almost all of its forms, except the form "q only if p." <strong>What I do not understand is, why is p the necessary condition and q the sufficient condition.</strong> I am <strong>not</strong> asking, what are the sufficient and necessary conditions, rather, I am asking <strong>why.</strong></p>
| <p>Think about it: "$p$ only if $q$" means that $q$ is a <strong>necessary condition</strong> for $p$. It means that $p$ can occur <strong>only when</strong> $q$ has occurred. This means that whenever we have $p$, it must also be that we have $q$, as $p$ can happen only if we have $q$: that is to say, that $p$ <strong>cannot happen</strong> if we <strong>do not</strong> have $q$. </p>
<p>The critical line is <em>whenever we have $p$, we must also have $q$</em>: this allows us to say that $p \Rightarrow q$, or $p$ implies $q$.</p>
<p>To use this on your example: we have the statement "Samir will attend the party only if Kanti attends the party." So if Samir attends the party, then Kanti must be at the party, because Samir will attend the party <strong>only if</strong> Kanti attends the party.</p>
<p>EDIT: It is a common mistake to read <em>only if</em> as a stronger form of <em>if</em>. It is important to emphasize that <em>$q$ if $p$</em> means that $p$ is a <strong>sufficient condition</strong> for $q$, and that <em>$q$ only if $p$</em> means that $p$ is a <strong>necessary condition</strong> for $q$.</p>
<p>Furthermore, we can supply more intuition on this fact: Consider $q$ <em>only if</em> $p$. It means that $q$ can occur only when $p$ has occurred: so if we don't have $p$, we can't have $q$, because $p$ is necessary for $q$. We note that <em>if we don't have $p$, then we can't have $q$</em> is a logical statement in itself: $\lnot p \Rightarrow \lnot q$. We know that all logical statements of this form are equivalent to their contrapositives. Take the contrapositive of $\lnot p \Rightarrow \lnot q$: it is $\lnot \lnot q \Rightarrow \lnot \lnot p$, which is equivalent to $q \Rightarrow p$.</p>
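<p>The contrapositive equivalence used above can be verified mechanically over all truth assignments. A small sketch (standard library only):</p>

```python
from itertools import product

def implies(a, b):
    # Material implication: a => b is false only when a is true and b is false.
    return (not a) or b

# "q only if p" is read as (not p) => (not q); check that this is
# equivalent to q => p for every truth assignment of p and q.
for p, q in product([False, True], repeat=2):
    assert implies(not p, not q) == implies(q, p)
```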
| <p>I don't think there's really anything to <em>understand</em> here. One simply has to learn as a fact that in mathematics jargon the words "only if" invariably encode that particular meaning. It is not really forced by the everyday meanings of "only" and "if" in isolation; it's just how it is.</p>
<p>By this I mean that the mathematical meaning is certainly a <em>possible</em> meaning of the English phrase "only if", the mathematical meaning is not the <em>only</em> possible way "only if" can be used in everyday English, and it just needs to be memorized as a fact that the meaning in mathematics is less flexible than in ordinary conversation.</p>
<p>To see that the mathematical meaning is at least <em>possible</em> for ordinary language, consider the sentence</p>
<blockquote>
<p>John smokes only on Saturdays.</p>
</blockquote>
<p>From this we can conclude that if we see John puffing on a cigarette, then today must be a Saturday. We <em>cannot</em>, out of ordinary common sense, conclude that if we look at the calendar and it says today is Saturday, then John must <em>currently</em> be lighting up -- because the claim doesn't say that John smokes <em>continuously</em> for the entire Saturday, or even every Saturday.</p>
<p>Now, if we can agree that there's no essential difference between "if" and "when" in this context, this might as well he phrased as</p>
<blockquote>
<p>John is smoking now <em>only if</em> today is a Saturday.</p>
</blockquote>
<p>which (according to the above analysis) ought to mean, mathematically,
$$ \mathit{smokes}(\mathit{John}) \implies \mathit{today}=\mathit{Saturday} $$</p>
|
linear-algebra | <p>I happened to stumble upon the following matrix:
$$ A = \begin{bmatrix}
a & 1 \\
0 & a
\end{bmatrix}
$$</p>
<p>And after trying a bunch of different examples, I noticed the following remarkable pattern. If $P$ is a polynomial, then:
$$ P(A)=\begin{bmatrix}
P(a) & P'(a) \\
0 & P(a)
\end{bmatrix}$$</p>
<p>Where $P'(a)$ is the derivative evaluated at $a$.</p>
<p>Furthermore, I tried extending this to other matrix functions, for example the matrix exponential, and Wolfram Alpha tells me:
$$ \exp(A)=\begin{bmatrix}
e^a & e^a \\
0 & e^a
\end{bmatrix}$$
and this does in fact follow the pattern since the derivative of $e^x$ is itself!</p>
<p>Furthermore, I decided to look at the function $P(x)=\frac{1}{x}$. If we interpret the reciprocal of a matrix to be its inverse, then we get:
$$ P(A)=\begin{bmatrix}
\frac{1}{a} & -\frac{1}{a^2} \\
0 & \frac{1}{a}
\end{bmatrix}$$
And since $f'(a)=-\frac{1}{a^2}$, the pattern still holds!</p>
<p>After trying a couple more examples, it seems that this pattern holds whenever $P$ is any rational function.</p>
<p>I have two questions:</p>
<ol>
<li><p>Why is this happening?</p></li>
<li><p>Are there any other known matrix functions (which can also be applied to real numbers) for which this property holds?</p></li>
</ol>
| <p>If $$ A = \begin{bmatrix}
a & 1 \\
0 & a
\end{bmatrix}
$$
then by induction you can prove that
$$ A^n = \begin{bmatrix}
a^n & n a^{n-1} \\
0 & a^n
\end{bmatrix} \tag 1
$$
for $n \ge 1 $. If $f$ can be developed into a power series
$$
f(z) = \sum_{n=0}^\infty c_n z^n
$$
then
$$
f'(z) = \sum_{n=1}^\infty n c_n z^{n-1}
$$
and it follows that
$$
f(A) = \sum_{n=0}^\infty c_n A^n = I + \sum_{n=1}^\infty c_n
\begin{bmatrix}
a^n & n a^{n-1} \\
0 & a^n
\end{bmatrix} = \begin{bmatrix}
f(a) & f'(a) \\
0 & f(a)
\end{bmatrix} \tag 2
$$
From $(1)$ and
$$
A^{-1} = \begin{bmatrix}
a^{-1} & -a^{-2} \\
0 & a^{-1}
\end{bmatrix}
$$
one gets
$$
A^{-n} = \begin{bmatrix}
a^{-1} & -a^{-2} \\
0 & a^{-1}
\end{bmatrix}^n =
(-a^{-2})^{n} \begin{bmatrix}
-a & 1 \\
0 & -a
\end{bmatrix}^n \\ =
(-1)^n a^{-2n} \begin{bmatrix}
(-a)^n & n (-a)^{n-1} \\
0 & (-a)^n
\end{bmatrix} =
\begin{bmatrix}
a^{-n} & -n a^{-n-1} \\
0 & a^{-n}
\end{bmatrix}
$$
which means that $(1)$ holds for negative exponents as well.
As a consequence, $(2)$ can be generalized to functions
admitting a Laurent series representation:
$$
f(z) = \sum_{n=-\infty}^\infty c_n z^n
$$</p>
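<p>The power-series argument can be spot-checked with a concrete polynomial. A sketch (assuming <code>numpy</code>; the polynomial and the value of $a$ are arbitrary):</p>

```python
import numpy as np

a = 1.5
A = np.array([[a, 1.0],
              [0.0, a]])

# P(x) = 3x^3 - 2x + 5, evaluated as a matrix polynomial.
P_A = 3 * np.linalg.matrix_power(A, 3) - 2 * A + 5 * np.eye(2)

P_a = 3 * a**3 - 2 * a + 5          # P(a)
dP_a = 9 * a**2 - 2                 # P'(a)

# The pattern from the question: P(A) = [[P(a), P'(a)], [0, P(a)]].
expected = np.array([[P_a, dP_a],
                     [0.0, P_a]])
assert np.allclose(P_A, expected)
```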
<p>It's a general fact that if <span class="math-container">$J$</span> is a Jordan block and <span class="math-container">$f$</span> is a sufficiently differentiable function, then
<span class="math-container">\begin{equation}
f(J)=\left(\begin{array}{ccccc}
f(\lambda_{0}) & \frac{f'(\lambda_{0})}{1!} & \frac{f''(\lambda_{0})}{2!} & \ldots & \frac{f^{(n-1)}(\lambda_{0})}{(n-1)!}\\
0 & f(\lambda_{0}) & \frac{f'(\lambda_{0})}{1!} & & \vdots\\
0 & 0 & f(\lambda_{0}) & \ddots & \frac{f''(\lambda_{0})}{2!}\\
\vdots & \vdots & \vdots & \ddots & \frac{f'(\lambda_{0})}{1!}\\
0 & 0 & 0 & \ldots & f(\lambda_{0})
\end{array}\right)
\end{equation}</span>
where
<span class="math-container">\begin{equation}
J=\left(\begin{array}{cccc}
\lambda_{0} & 1 & 0 & 0\\
0 & \lambda_{0} & 1 & 0\\
0 & 0 & \ddots & 1\\
0 & 0 & 0 & \lambda_{0}
\end{array}\right)
\end{equation}</span>
This statement can be demonstrated in various ways (none of them short), but it's a quite known formula. I think you can find it in various books, like in Horn and Johnson's <em>Matrix Analysis</em>.</p>
|
linear-algebra | <p>I'm in the process of writing an application which identifies the closest matrix from a set of square matrices $M$ to a given square matrix $A$. The closest can be defined as the most similar.</p>
<p>I think finding the distance between two given matrices is a fair approach since the smallest Euclidean distance is used to identify the closeness of vectors. </p>
<p>I found that the distance between two matrices ($A,B$) could be calculated using the <a href="http://mathworld.wolfram.com/FrobeniusNorm.html">Frobenius distance</a> $F$:</p>
<p>$$F_{A,B} = \sqrt{trace((A-B)*(A-B)')} $$</p>
<p>where $B'$ represents the conjugate transpose of B.</p>
<p>I have the following points I need to clarify</p>
<ul>
<li>Is the distance between matrices a fair measure of similarity?</li>
<li>If distance is used, is Frobenius distance a fair measure for this problem? any other suggestions?</li>
</ul>
| <p>Some suggestions. Too long for a comment:</p>
<p>As I said, there are many ways to measure the "distance" between two matrices. If the matrices are $\mathbf{A} = (a_{ij})$ and $\mathbf{B} = (b_{ij})$, then some examples are:
$$
d_1(\mathbf{A}, \mathbf{B}) = \sum_{i=1}^n \sum_{j=1}^n |a_{ij} - b_{ij}|
$$
$$
d_2(\mathbf{A}, \mathbf{B}) = \sqrt{\sum_{i=1}^n \sum_{j=1}^n (a_{ij} - b_{ij})^2}
$$
$$
d_\infty(\mathbf{A}, \mathbf{B}) = \max_{1 \le i \le n}\max_{1 \le j \le n} |a_{ij} - b_{ij}|
$$
$$
d_m(\mathbf{A}, \mathbf{B}) = \max\{ \|(\mathbf{A} - \mathbf{B})\mathbf{x}\| : \mathbf{x} \in \mathbb{R}^n, \|\mathbf{x}\| = 1 \}
$$
I'm sure there are many others. If you look up "matrix norms", you'll find lots of material. And if $\|\;\|$ is any matrix norm, then $\| \mathbf{A} - \mathbf{B}\|$ gives you a measure of the "distance" between two matrices $\mathbf{A}$ and $\mathbf{B}$.</p>
<p>Or, you could simply count the number of positions where $|a_{ij} - b_{ij}|$ is larger than some threshold number. This doesn't have all the nice properties of a distance derived from a norm, but it still might be suitable for your needs.</p>
<p>These distance measures all have somewhat different properties. For example, the third one shown above will tell you that two matrices are far apart even if all their entries are the same except for a large difference in one position.</p>
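<p>All four measures above are one-liners in <code>numpy</code>. A sketch (the example matrices are arbitrary, chosen so that they differ in a single entry):</p>

```python
import numpy as np

A = np.array([[1.0, 2.0], [3.0, 4.0]])
B = np.array([[1.0, 2.0], [3.0, 9.0]])
D = A - B                      # entries differ only in one position, by 5

d1   = np.abs(D).sum()         # entrywise 1-norm
d2   = np.linalg.norm(D, 'fro')  # Frobenius distance
dinf = np.abs(D).max()         # entrywise max
dm   = np.linalg.norm(D, 2)    # operator (spectral) norm

# With a single large entry, all four measures agree...
assert np.allclose([d1, d2, dinf, dm], 5.0)

# ...and each is a genuine distance, e.g. d(A, A) = 0.
assert np.linalg.norm(A - A, 'fro') == 0.0
```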
| <p>Suppose we have two matrices $A,B$.
The distance between $A$ and $B$ can be calculated using singular values or $2$-norms.</p>
<p>You may use Distance $= \vert\,\text{fnorm}(A)-\text{fnorm}(B)\,\vert$,
where fnorm is the square root of the sum of squares of all singular values (i.e. the Frobenius norm). Note that this is only a pseudo-distance: two different matrices with equal norms are at distance $0$. </p>
|
linear-algebra | <p>The largest eigenvalue of a <a href="https://en.wikipedia.org/wiki/Stochastic_matrix" rel="noreferrer">stochastic matrix</a> (i.e. a matrix whose entries are positive and whose rows add up to $1$) is $1$.</p>
<p>Wikipedia marks this as a special case of the <a href="https://en.wikipedia.org/wiki/Perron%E2%80%93Frobenius_theorem" rel="noreferrer">Perron-Frobenius theorem</a>, but I wonder if there is a simpler (more direct) way to demonstrate this result.</p>
| <p>Here's a really elementary proof (which is a slight modification of <a href="https://math.stackexchange.com/questions/8695/no-solutions-to-a-matrix-inequality/8702#8702">Fanfan's answer to a question of mine</a>). As Calle shows, it is easy to see that the eigenvalue $1$ is obtained. Now, suppose $Ax = \lambda x$ for some $\lambda > 1$. Since the rows of $A$ are nonnegative and sum to $1$, each element of vector $Ax$ is a convex combination of the components of $x$, which can be no greater than $x_{max}$, the largest component of $x$. On the other hand, at least one element of $\lambda x$ is greater than $x_{max}$, which proves that $\lambda > 1$ is impossible.</p>
| <p>Say <span class="math-container">$A$</span> is a <span class="math-container">$n \times n$</span> row stochastic matrix. Now:
<span class="math-container">$$A \begin{pmatrix} 1 \\ 1 \\ \vdots \\ 1 \end{pmatrix} =
\begin{pmatrix}
\sum_{i=1}^n a_{1i} \\ \sum_{i=1}^n a_{2i} \\ \vdots \\ \sum_{i=1}^n a_{ni}
\end{pmatrix}
=
\begin{pmatrix} 1 \\ 1 \\ \vdots \\ 1 \end{pmatrix}
$$</span>
Thus the eigenvalue <span class="math-container">$1$</span> is attained.</p>
<p>To show that this is the largest eigenvalue you can use the <a href="http://en.wikipedia.org/wiki/Gershgorin_circle_theorem" rel="noreferrer">Gershgorin circle theorem</a>. Take row <span class="math-container">$k$</span> in <span class="math-container">$A$</span>. The diagonal element will be <span class="math-container">$a_{kk}$</span> and the radius will be <span class="math-container">$\sum_{i\neq k} |a_{ki}| = \sum_{i \neq k} a_{ki}$</span> since all <span class="math-container">$a_{ki} \geq 0$</span>. This will be a circle with its center at <span class="math-container">$a_{kk} \in [0,1]$</span>, and a radius of <span class="math-container">$\sum_{i \neq k} a_{ki} = 1-a_{kk}$</span>. So this circle will have <span class="math-container">$1$</span> on its perimeter. This is true for all Gershgorin circles for this matrix (since <span class="math-container">$k$</span> was taken arbitrarily). Thus, since all eigenvalues lie in the union of the Gershgorin circles, all eigenvalues <span class="math-container">$\lambda_i$</span> satisfy <span class="math-container">$|\lambda_i| \leq 1$</span>.</p>
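<p>Both elementary facts lend themselves to a quick numeric check; a minimal pure-Python sketch (the matrix entries are arbitrary, chosen only to be nonnegative with unit row sums):</p>

```python
import random

# An arbitrary 3x3 row-stochastic matrix: nonnegative entries, rows sum to 1.
A = [[0.5, 0.3, 0.2],
     [0.1, 0.6, 0.3],
     [0.4, 0.4, 0.2]]

def matvec(A, x):
    return [sum(a * xi for a, xi in zip(row, x)) for row in A]

# The all-ones vector is an eigenvector with eigenvalue 1:
# each entry of A * ones is just a row sum.
ones = [1.0, 1.0, 1.0]
assert all(abs(t - 1.0) < 1e-12 for t in matvec(A, ones))

# No eigenvalue can exceed 1 in modulus: each entry of A x is a convex
# combination of the entries of x, so max|(A x)_i| <= max|x_j| for every x.
for _ in range(1000):
    x = [random.uniform(-1, 1) for _ in range(3)]
    y = matvec(A, x)
    assert max(map(abs, y)) <= max(map(abs, x)) + 1e-12
print("eigenvalue-1 and convexity checks pass")
```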
|
combinatorics | <p>I'm having a hard time finding the pattern. Let's say we have a set</p>
<p>$$S = \{1, 2, 3\}$$</p>
<p>The subsets are:</p>
<p>$$P = \{ \{\}, \{1\}, \{2\}, \{3\}, \{1, 2\}, \{1, 3\}, \{2, 3\}, \{1, 2, 3\} \}$$</p>
<p>And the value I'm looking for, is the sum of the cardinalities of all of these subsets. That is, for this example, $$0+1+1+1+2+2+2+3=12$$</p>
<p><strong>What's the formula for this value?</strong></p>
<p>I can sort of see a pattern, but I can't generalize it.</p>
| <p>Here is a bijective argument. Fix a finite set $S$. Let us count the number of pairs $(X,x)$ where $X$ is a subset of $S$ and $x \in X$. We have two ways of doing this, depending which coordinate we fix first.</p>
<p><strong>First way</strong>: For each set $X$, there are $|X|$ elements $x \in X$, so the count is $\sum_{X \subseteq S} |X|$. </p>
<p><strong>Second way:</strong> For each element $x \in S$, there are $2^{|S|-1}$ sets $X$ with $x \in X$. We get them all by taking the union of $\{x\}$ with an arbitrary subset of $S\setminus\{x\}$. Thus, the count is $\sum_{x \in S} 2^{|S|-1} = |S| 2^{|S|-1}$.</p>
<p>Since both methods count the same thing, we get
$$\sum_{X \subseteq S} |X| = |S| 2^{|S|-1},$$
as in the other answers.</p>
| <p>Each time an element appears in a set, it contributes $1$ to the value you are looking for. For a given element, it appears in exactly half of the subsets, i.e. $2^{n-1}$ sets. As there are $n$ total elements, you have $$n2^{n-1}$$ as others have pointed out.</p>
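<p>Both counting arguments are easy to verify by brute force; a minimal sketch:</p>

```python
from itertools import combinations

def total_cardinality(n):
    """Sum of |X| over all subsets X of {1, ..., n}, by direct enumeration."""
    s = range(1, n + 1)
    return sum(len(c) for k in range(n + 1) for c in combinations(s, k))

print(total_cardinality(3))   # the example from the question: 12
for n in range(1, 11):
    assert total_cardinality(n) == n * 2 ** (n - 1)
```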
|
linear-algebra | <p>I'm starting a very long quest to learn about math, so that I can program games. I'm mostly a corporate developer, and it's somewhat boring and non exciting. When I began my career, I chose it because I wanted to create games.</p>
<p>I'm told that Linear Algebra is the best place to start. Where should I go?</p>
| <p>You are right: Linear Algebra is not just the "best" place to start. It's THE place to start.</p>
<p>Among all the books cited in <a href="http://en.wikipedia.org/wiki/Linear_algebra" rel="nofollow noreferrer">Wikipedia - Linear Algebra</a>, I would recommend:</p>
<ul>
<li>Strang, Gilbert, Linear Algebra and Its Applications (4th ed.)</li>
</ul>
<p>Strang's book has at least two reasons for being recommended. First, it's extremely easy and short. Second, it's the book they use at MIT for the extremely good video Linear Algebra course you'll find in the <a href="https://math.stackexchange.com/a/4338/391081">link of Unreasonable Sin</a>.</p>
<p>For a view towards applications (though maybe not necessarily your applications) and still elementary:</p>
<ul>
<li>B. Noble & J.W. Daniel: Applied Linear Algebra, Prentice-Hall, 1977</li>
</ul>
<p>Linear algebra has two sides: one more "theoretical", the other one more "applied". Strang's book is just elementary, but perhaps "theoretical". Noble-Daniel is definitely "applied". The distinction between the two points of view lies in the emphasis they put on "abstract" vector spaces vs specific ones such as <span class="math-container">$\mathbb{R}^n$</span> or <span class="math-container">$\mathbb{C}^n$</span>, or on matrices vs linear maps.</p>
<p>Maybe because of my penchant for "pure" maths, I must admit that sometimes I find matrices somewhat annoying. They are funny, specific, whereas linear maps can look more "abstract" and "ethereal". But, for instance: I can't stand the proof that the matrix product is associative, whereas the corresponding associativity for the composition of (linear or nonlinear) maps is true..., well, just because it can't help being true the moment you write it down.</p>
<p>Anyway, at a more advanced level in the "theoretical" side you can use:</p>
<ul>
<li><p>Greub, Werner H., Linear Algebra, Graduate Texts in Mathematics (4th ed.), Springer</p>
</li>
<li><p>Halmos, Paul R., Finite-Dimensional Vector Spaces, Undergraduate Texts in Mathematics, Springer</p>
</li>
<li><p>Shilov, Georgi E., Linear algebra, Dover Publications</p>
</li>
</ul>
<p>In the "applied" (?) side, a book that I love and you'll appreciate if you want to study, for instance, the exponential of a matrix is <a href="https://rads.stackoverflow.com/amzn/click/com/0486445542" rel="nofollow noreferrer" rel="nofollow noreferrer">Gantmacher</a>.</p>
<p>And, at any time, you'll need to do a lot of exercises. Lipschutz's is second to none in this:</p>
<ul>
<li>Lipschutz, Seymour, 3,000 Solved Problems in Linear Algebra, McGraw-Hill</li>
</ul>
<p>Enjoy! :-)</p>
| <p>I'm very surprised no one's yet listed Sheldon Axler's <a href="https://books.google.com/books?id=5qYxBQAAQBAJ&source=gbs_similarbooks" rel="nofollow noreferrer">Linear Algebra Done Right</a> - unlike Strang and Lang, which are really great books, Linear Algebra Done Right has a lot of "common sense", and is great for someone who wants to understand what the point of it all is, as it carefully reorders the standard curriculum a bit to help someone understand what it's all about.</p>
<p>With a lot of the standard curriculum, you can get stuck in proofs and eigenvalues and kernels, before you ever appreciate the intuition and applications of what it's all about. This is great if you're a typical pure math type who deals with abstraction easily, but given the asker's description, I don't think that a rigorous pure math course is what he/she's asking for.</p>
<p>For the very practical view, yet also not at all sacrificing depth, I don't think you can do better than Linear Algebra Done Right - and if you are thirsty for more, after you've tried it, Lang and Strang are both great texts.</p>
|
linear-algebra | <p>In which cases is the inverse of a matrix equal to its transpose, that is, when do we have <span class="math-container">$A^{-1} = A^{T}$</span>? Is it when <span class="math-container">$A$</span> is orthogonal? </p>
| <p>If $A^{-1}=A^T$, then $A^TA=I$. This means that each column has unit length and is perpendicular to every other column, i.e. the columns form an orthonormal set. That means $A$ is an orthogonal matrix.</p>
| <p>You're right. This is the definition of orthogonal matrix.</p>
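<p>A small numeric illustration: rotation matrices are orthogonal, so transposing one inverts it (minimal pure-Python sketch; any angle works):</p>

```python
import math

t = 0.7  # arbitrary rotation angle
A = [[math.cos(t), -math.sin(t)],
     [math.sin(t),  math.cos(t)]]

def transpose(M):
    return [list(row) for row in zip(*M)]

def matmul(M, N):
    return [[sum(M[i][k] * N[k][j] for k in range(len(N)))
             for j in range(len(N[0]))] for i in range(len(M))]

# A^T A = I: the columns are orthonormal, so A^T is the inverse of A.
P = matmul(transpose(A), A)
for i in range(2):
    for j in range(2):
        assert abs(P[i][j] - (1.0 if i == j else 0.0)) < 1e-12
print("A^T A = I")
```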
|
matrices | <p>Is there an intuitive meaning for the <a href="http://mathworld.wolfram.com/SpectralNorm.html">spectral norm</a> of a matrix? Why would an algorithm calculate the relative recovery in spectral norm between two images (i.e. one before the algorithm and the other after)? Thanks</p>
| <p>The spectral norm (also know as Induced 2-norm) is the maximum singular value of a matrix. Intuitively, you can think of it as the maximum 'scale', by which the matrix can 'stretch' a vector.</p>
<p>The maximum singular value is the square root of the maximum eigenvalue of <span class="math-container">$A^TA$</span> (of <span class="math-container">$A^*A$</span> in the complex case); for a symmetric/hermitian matrix it is the largest eigenvalue in absolute value.</p>
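<p>The "maximum stretch" description can be sketched numerically. For a diagonal matrix the singular values can be read off directly, so <span class="math-container">$A = \mathrm{diag}(3,1)$</span> below has spectral norm <span class="math-container">$3$</span> (a minimal pure-Python example):</p>

```python
import math

A = [[3.0, 0.0],
     [0.0, 1.0]]   # singular values 3 and 1, so spectral norm 3

def norm(v):
    return math.sqrt(sum(x * x for x in v))

# Sweep unit vectors around the circle and record the largest stretch.
best = 0.0
for k in range(10000):
    t = 2 * math.pi * k / 10000
    x = [math.cos(t), math.sin(t)]               # unit vector
    Ax = [A[0][0] * x[0] + A[0][1] * x[1],
          A[1][0] * x[0] + A[1][1] * x[1]]
    best = max(best, norm(Ax))

print(best)   # approximately 3: no unit vector is stretched by more than the top singular value
```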
| <p>Let us consider the singular value decomposition (SVD) of a matrix <span class="math-container">$X = U S V^T$</span>, where <span class="math-container">$U$</span> and <span class="math-container">$V$</span> are matrices containing the left and right singular vectors of <span class="math-container">$X$</span> in their columns. <span class="math-container">$S$</span> is a diagonal matrix containing the singular values. An intuitive way to think of the norm of <span class="math-container">$X$</span> is in terms of the norm of the singular value vector in the diagonal of <span class="math-container">$S$</span>. This is because the singular values measure the <em>energy</em> of the matrix in various principal directions.</p>
<p>One can now extend the <span class="math-container">$p$</span>-norm for a finite-dimensional vector to an <span class="math-container">$m\times n$</span> matrix by working on this singular value vector:</p>
<p><span class="math-container">\begin{align}
\|X\|_p &= \left( \sum_{i=1}^{\text{min}(m,n)} \sigma_i^p \right)^{1/p}
\end{align}</span></p>
<p>This is called the <em>Schatten norm</em> of <span class="math-container">$X$</span>. Specific choices of <span class="math-container">$p$</span> yield commonly used matrix norms:</p>
<ol>
<li><span class="math-container">$p=0$</span>: Gives the rank of the matrix (the number of non-zero singular values); this is a limiting convention rather than a true norm.</li>
<li><span class="math-container">$p=1$</span>: Gives the nuclear norm (sum of absolute singular values). This is the tightest convex relaxation of the rank.</li>
<li><span class="math-container">$p=2$</span>: Gives the Frobenius norm (square root of the sum of squares of singular values).</li>
<li><span class="math-container">$p=\infty$</span>: Gives the spectral norm (max. singular value).</li>
</ol>
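<p>A minimal numeric sketch of these special cases, using singular values read off directly from a diagonal matrix (for <span class="math-container">$A=\mathrm{diag}(3,1)$</span> they are <span class="math-container">$3$</span> and <span class="math-container">$1$</span>; this assumption avoids computing an SVD):</p>

```python
# Schatten p-norms computed directly from the singular values of A = diag(3, 1).
sigma = [3.0, 1.0]

def schatten(sigma, p):
    return sum(s ** p for s in sigma) ** (1.0 / p)

rank      = sum(1 for s in sigma if s > 0)   # the "p = 0" convention: count nonzeros
nuclear   = schatten(sigma, 1)               # 3 + 1 = 4
frobenius = schatten(sigma, 2)               # sqrt(3^2 + 1^2) = sqrt(10)
spectral  = max(sigma)                       # limit p -> infinity: 3

assert (rank, nuclear, spectral) == (2, 4.0, 3.0)
assert abs(frobenius ** 2 - 10.0) < 1e-9
print(rank, nuclear, frobenius, spectral)
```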
|
number-theory | <p>The question is written like this:</p>
<blockquote>
<p>Is it possible to find an infinite set of points in the plane, not all on the same straight line, such that the distance between <strong>EVERY</strong> pair of points is rational?</p>
</blockquote>
<p>This would be so easy if these points could be on the same straight line, but I couldn't get any idea for solving the question above (not all points on the same straight line). I believe there must be some kind of connection between the points, but I couldn't figure it out.</p>
<p>What I tried is a total mess. I tried to draw some triangles and to connect some points from one triangle to another, but in vain.</p>
<p><strong>Note:</strong> I want to see a real example of such an infinite set of points in the plane that can be an answer for the question. A graph for these points would be helpful.</p>
| <p>You can even find infinitely many such points on the unit circle: Let $\mathscr S$ be the set of all points on the unit circle such that $\tan \left(\frac {\theta}4\right)\in \mathbb Q$. If $(\cos(\alpha),\sin(\alpha))$ and $(\cos(\beta),\sin(\beta))$ are two points on the circle then a little geometry tells us that the distance between them is (the absolute value of) $$2 \sin \left(\frac {\alpha}2\right)\cos \left(\frac {\beta}2\right)-2 \sin \left(\frac {\beta}2\right)\cos \left(\frac {\alpha}2\right)$$ and, if the points are both in $\mathscr S$ then this is rational.</p>
<p>Details: The distance formula is an immediate consequence of the fact that, if two points on the circle have an angle $\phi$ between them, then the distance between them is (the absolute value of) $2\sin \frac {\phi}2$. For the rationality note that $$z=\tan \frac {\phi}2 \implies \cos \phi= \frac {1-z^2}{1+z^2} \quad \& \quad \sin \phi= \frac {2z}{1+z^2}$$</p>
<p>Note: Of course $\mathscr S$ is dense on the circle. So far as I am aware, it is unknown whether you can find such a set which is dense on the entire plane.</p>
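<p>This construction can be checked computationally: pick rational values of $\tan(\theta/4)$, build the points with the half-angle formulas, and verify that every pairwise squared distance is the square of a rational. A minimal sketch with exact arithmetic (the parameter values $1/k$ are arbitrary; any rationals other than $\pm1$ work):</p>

```python
from fractions import Fraction
from math import isqrt

def point(a):
    """Point (cos t, sin t) on the unit circle with tan(t/4) = a."""
    u = 2 * a / (1 - a * a)                       # tan(t/2), half-angle formula
    return ((1 - u * u) / (1 + u * u), 2 * u / (1 + u * u))

def dist2(p, q):
    return (p[0] - q[0]) ** 2 + (p[1] - q[1]) ** 2

def is_rational_square(f):
    f = Fraction(f)                               # reduced to lowest terms
    return (isqrt(f.numerator) ** 2 == f.numerator
            and isqrt(f.denominator) ** 2 == f.denominator)

pts = [point(Fraction(1, k)) for k in range(2, 8)]
for i in range(len(pts)):
    for j in range(i + 1, len(pts)):
        assert is_rational_square(dist2(pts[i], pts[j]))
print("all pairwise distances are rational")
```

<p>For instance $a = 1/2$ gives the point $(-7/25, 24/25)$, a rational point on the unit circle.</p>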
| <p>Yes, it's possible. For instance, you could start with $(0,1)$ and $(0,0)$, and then put points along the $x$-axis, noting that there are infinitely many different right triangles with rational sides and one leg equal to $1$. For instance, $(3/4,0)$ will have distance $5/4$ to $(0,1)$.</p>
<p>This means that <em>most</em> of the points are on a single line (the $x$-axis), but one point, $(0,1)$, is not on that line.</p>
|
probability | <p>I give you a hat which has <span class="math-container">$10$</span> coins inside of it. <span class="math-container">$1$</span> out of the <span class="math-container">$10$</span> has two heads on it, and the rest of them are fair. You draw a coin at random from the hat and flip it <span class="math-container">$5$</span> times. If you flip heads <span class="math-container">$5$</span> times in a row, what is the probability that you get heads on your next flip?</p>
<p>I tried to approach this question by using Bayes: Let <span class="math-container">$R$</span> be the event that the coin with both heads is drawn and <span class="math-container">$F$</span> be the event that <span class="math-container">$5$</span> heads are flipped in a row. Then
<span class="math-container">$$\begin{align*}
P(R|F) &= \frac{P(F|R)P(R)}{P(F)} \\ &= \frac{1\cdot 1/10}{1\cdot 1/10 + 1/2^5\cdot 9/10} \\ &= 32/41
\end{align*}$$</span></p>
<p>Thus the probability that you get heads on the next flip is</p>
<p><span class="math-container">$$\begin{align*}
P(H|R)P(R) + P(H|R')P(R') &= 1\cdot 32/41 + 1/2\cdot (1 - 32/41) \\ &= 73/82
\end{align*}$$</span></p>
<p>However, according to my friend, this is a trick question because the flip after the first <span class="math-container">$5$</span> flips is independent of the first <span class="math-container">$5$</span> flips, and therefore the correct probability is
<span class="math-container">$$1\cdot 1/10+1/2\cdot 9/10 = 11/20$$</span></p>
<p>Is this true or not?</p>
| <p>To convince your friend that he is wrong, you could modify the question:</p>
<blockquote>
<p>A hat contains ten 6-sided dice. Nine dice have faces 1, 2, 3, 4, 5, 6, and the other die has 6 on every face. Randomly choose one die, toss it <span class="math-container">$1000$</span> times, and write down the results. Repeat this procedure many times. Now look at the trials in which the first <span class="math-container">$999$</span> tosses were all 6's: what proportion of those trials have the <span class="math-container">$1000^\text{th}$</span> toss also being a 6?</p>
</blockquote>
<p>Common sense tells us that the proportion is extremely close to <span class="math-container">$1$</span>. <span class="math-container">$\left(\text{The theoretical proportion is}\dfrac{6^{1000}+9}{6^{1000}+54}.\right)$</span></p>
<p>But according to your friend's method, the theoretical proportion would be <span class="math-container">$\dfrac{1}{10}(1)+\dfrac{9}{10}\left(\dfrac{1}{6}\right)=\dfrac{1}{4}$</span>. I think your friend would see that this is clearly wrong.</p>
| <p>The main idea behind this problem is a topic known as <em>predictive posterior probability</em>.</p>
<p>Let <span class="math-container">$P$</span> denote the probability of the coin you randomly selected landing on heads.</p>
<p>Then <span class="math-container">$P$</span> is a random variable supported on <span class="math-container">$\{0.5,1\}$</span> and has pdf <span class="math-container">$$\mathbb{P}(P=0.5)=0.9\\ \mathbb{P}(P=1)=0.1$$</span></p>
<p>Let <span class="math-container">$E=\{HHHHH\}$</span> denote the "evidence" you witness. The <em>posterior</em> probability that <span class="math-container">$P=0.5$</span> given this evidence <span class="math-container">$E$</span> can be evaluated using Bayes' rule:</p>
<p><span class="math-container">$$\begin{eqnarray*}\mathbb{P}(P=0.5|E) &=& \frac{\mathbb{P}(E|P=0.5)\mathbb{P}(P=0.5)}{\mathbb{P}(E|P=0.5)\mathbb{P}(P=0.5)+\mathbb{P}(E|P=1)\mathbb{P}(P=1)} \\ &=& \frac{(0.5)^5\times 0.9}{(0.5)^5\times 0.9+1^5\times 0.1} \\ &=& \frac{9}{41}\end{eqnarray*}$$</span> The <em>posterior</em> pdf of <span class="math-container">$P$</span> given <span class="math-container">$E$</span> is supported on <span class="math-container">$\{0.5,1\}$</span> and has pdf <span class="math-container">$$\mathbb{P}\left(P=0.5|E\right)=\frac{9}{41} \\ \mathbb{P}\left(P=1|E\right)=\frac{32}{41}$$</span> Finally, the <em>posterior predictive probability</em> that we flip heads again given this evidence <span class="math-container">$E$</span> is <span class="math-container">$$\begin{eqnarray*}\mathbb{P}(\text{Next flip heads}|E) &=& \mathbb{P}(\text{Next flip heads}|E,P=0.5)\mathbb{P}(P=0.5|E)+\mathbb{P}(\text{Next flip heads}|E,P=1)\mathbb{P}(P=1|E) \\ &=& \frac{1}{2}\times \frac{9}{41}+1\times \frac{32}{41} \\ &=& \frac{73}{82}\end{eqnarray*}$$</span></p>
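<p>The whole computation can be reproduced in a few lines of exact rational arithmetic; a minimal sketch:</p>

```python
from fractions import Fraction

p_trick, p_fair = Fraction(1, 10), Fraction(9, 10)

# Likelihood of the evidence E = HHHHH under each coin.
like_trick = Fraction(1, 1) ** 5
like_fair  = Fraction(1, 2) ** 5

# Posterior over the drawn coin, by Bayes' rule.
evidence   = like_trick * p_trick + like_fair * p_fair
post_trick = like_trick * p_trick / evidence
post_fair  = like_fair  * p_fair  / evidence
assert post_trick == Fraction(32, 41)

# Posterior predictive probability of heads on the next flip.
next_heads = 1 * post_trick + Fraction(1, 2) * post_fair
assert next_heads == Fraction(73, 82)
print(next_heads)   # 73/82
```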
|
geometry | <p>What is the simplest way to find out the area of a triangle if the coordinates of the three vertices are given in $x$-$y$ plane? </p>
<p>One approach is to find the length of each side from the coordinates given and then apply <a href="https://en.wikipedia.org/wiki/Heron's_formula" rel="noreferrer"><em>Heron's formula</em></a>. Is this the best way possible? </p>
<p>Is it possible to compare the area of triangles with their coordinates provided without actually calculating side lengths?</p>
| <p>What you are looking for is called the <a href="http://en.wikipedia.org/wiki/Shoelace_formula" rel="nofollow noreferrer">shoelace formula</a>:</p>
<p><span class="math-container">\begin{align*}
\text{Area}
&= \frac12 \big| (x_A - x_C) (y_B - y_A) - (x_A - x_B) (y_C - y_A) \big|\\
&= \frac12 \big| x_A y_B + x_B y_C + x_C y_A - x_A y_C - x_C y_B - x_B y_A \big|\\
&= \frac12 \Big|\det \begin{bmatrix}
x_A & x_B & x_C \\
y_A & y_B & y_C \\
1 & 1 & 1
\end{bmatrix}\Big|
\end{align*}</span></p>
<p>The last line indicates how to generalize the formula to higher dimensions.</p>
<p><strong>PS.</strong> Another generalization of the formula is obtained by noting that it follows from a discrete version of the <a href="https://en.wikipedia.org/wiki/Green%27s_theorem" rel="nofollow noreferrer">Green's theorem</a>:</p>
<p><span class="math-container">$$ \text{Area} = \iint_{\text{domain}}dx\,dy = \frac12\oint_{\text{boundary}}x\,dy - y\,dx $$</span></p>
<p>Thus the signed (oriented) area of a polygon with <span class="math-container">$n$</span> vertices <span class="math-container">$(x_i,y_i)$</span> is given by</p>
<p><span class="math-container">$$ \text{Area} = \frac12\sum_{i=0}^{n-1} x_i y_{i+1} - x_{i+1} y_i $$</span></p>
<p>where indices are added modulo <span class="math-container">$n$</span>.</p>
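<p>A minimal implementation of the polygon form of the formula (vertices given in traversal order, indices wrapping modulo <span class="math-container">$n$</span>):</p>

```python
def shoelace_area(pts):
    """Absolute area of a simple polygon from its (x, y) vertices."""
    n = len(pts)
    s = 0
    for i in range(n):
        x0, y0 = pts[i]
        x1, y1 = pts[(i + 1) % n]     # wrap around: indices mod n
        s += x0 * y1 - x1 * y0        # signed contribution of one edge
    return abs(s) / 2

# Right triangle with legs 3 and 4: area 6.
assert shoelace_area([(0, 0), (3, 0), (0, 4)]) == 6
# Unit square; reversing orientation only flips the sign, not the absolute area.
assert shoelace_area([(0, 0), (1, 0), (1, 1), (0, 1)]) == 1
assert shoelace_area([(0, 1), (1, 1), (1, 0), (0, 0)]) == 1
print("shoelace checks pass")
```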
| <p>You know that <strong>AB × AC</strong> is a vector perpendicular to the plane ABC such that |<strong>AB × AC</strong>|= Area of the parallelogram ABA’C. Thus this area is equal to ½ |AB × AC|.</p>
<p><a href="https://i.sstatic.net/3oDbh.png" rel="noreferrer"><img src="https://i.sstatic.net/3oDbh.png" alt="enter image description here" /></a></p>
<p>From <strong>AB</strong>= <span class="math-container">$(x_2 -x_1, y_2-y_1)$</span>; <strong>AC</strong>= <span class="math-container">$(x_3-x_1, y_3-y_1)$</span>, we deduce then</p>
<p>Area of <span class="math-container">$\Delta ABC$</span> = <span class="math-container">$\frac12\,\big|(x_2-x_1)(y_3-y_1)- (x_3-x_1)(y_2-y_1)\big|$</span></p>
|
linear-algebra | <p>I am looking for an intuitive reason for a projection matrix of an orthogonal projection to be symmetric. The algebraic proof is straightforward yet somewhat unsatisfactory.</p>
<p>Take for example another property: $P=P^2$. It's clear that applying the projection one more time shouldn't change anything and hence the equality.</p>
<p>So what's the reason behind $P^T=P$?</p>
| <p>In general, if $P = P^2$, then $P$ is the projection onto $\operatorname{im}(P)$ along $\operatorname{ker}(P)$, so that $$\mathbb{R}^n = \operatorname{im}(P) \oplus \operatorname{ker}(P),$$ but $\operatorname{im}(P)$ and $\operatorname{ker}(P)$ need not be orthogonal subspaces. Given that $P = P^2$, you can check that $\operatorname{im}(P) \perp \operatorname{ker}(P)$ if and only if $P = P^T$, justifying the terminology "orthogonal projection."</p>
| <p>There are some nice and succinct answers already. If you'd like even more intuition with as little math and higher level linear algebra concepts as possible, consider two arbitrary vectors <span class="math-container">$v$</span> and <span class="math-container">$w$</span>.</p>
<h2>Simplest Answer</h2>
<p>Take the dot product of one vector with the projection of the other vector.
<span class="math-container">$$
(P v) \cdot w
$$</span>
<span class="math-container">$$
v \cdot (P w)
$$</span></p>
<p>In both dot products above, one of the terms (<span class="math-container">$P v$</span> or <span class="math-container">$P w$</span>) lies entirely in the subspace you project onto. Therefore, both dot products ignore every vector component that is <em>not</em> in this subspace - they consider only components <em>in</em> the subspace. This means both dot products are equal to each other, and are in fact equal to:
<span class="math-container">$$
(P v) \cdot (P w)
$$</span></p>
<p>Since <span class="math-container">$(P v) \cdot w = v \cdot (P w)$</span>, it doesn't matter whether we apply the projection matrix to the first or second argument of the dot product operation. Some simple identities then imply <span class="math-container">$P = P^T$</span>, so <span class="math-container">$P$</span> is symmetric (See step 2 below if you aren't familiar with this property).</p>
<h2>Less intuitive Answer</h2>
<p>If the above explanation isn't intuitive, we can use a little more math.</p>
<h3>Step 1.</h3>
<p>First, prove that the two dot products above are equal.</p>
<p>Decompose <span class="math-container">$v$</span> and <span class="math-container">$w$</span>:
<span class="math-container">$$
v = v_p + v_n
$$</span>
<span class="math-container">$$
w = w_p + w_n
$$</span></p>
<p>In this notation the <span class="math-container">$p$</span> subscript indicates the component of the vector in the subspace of <span class="math-container">$P$</span>, and the <span class="math-container">$n$</span> subscript indicates the component of the vector outside (normal to) the subspace of <span class="math-container">$P$</span>.</p>
<p>The projection of a vector lies in a subspace. The dot product of anything in this subspace with anything orthogonal to this subspace is zero. We use this fact on the dot product of one vector with the projection of the other vector:
<span class="math-container">$$
(P v) \cdot w \hspace{1cm} v \cdot (P w)
$$</span>
<span class="math-container">$$
v_p \cdot w \hspace{1cm} v \cdot w_p
$$</span>
<span class="math-container">$$
v_p \cdot (w_p + w_n) \hspace{1cm} (v_p + v_n) \cdot w_p
$$</span>
<span class="math-container">$$
v_p \cdot w_p + v_p \cdot w_n \hspace{1cm} v_p \cdot w_p + v_n \cdot w_p
$$</span>
<span class="math-container">$$
v_p \cdot w_p \hspace{1cm} v_p \cdot w_p
$$</span>
Therefore
<span class="math-container">$$
(Pv) \cdot w = v \cdot (Pw)
$$</span></p>
<h3>Step 2.</h3>
<p>Next, we can show that a consequence of this equality is that the projection matrix P must be symmetric. Here we begin by expressing the dot product in terms of transposes and matrix multiplication (using the identity <span class="math-container">$x \cdot y = x^T y$</span> ):
<span class="math-container">$$
(P v) \cdot w = v \cdot (P w)
$$</span>
<span class="math-container">$$
(P v)^T w = v^T (P w)
$$</span>
<span class="math-container">$$
v^T P^T w = v^T P w
$$</span>
Since v and w can be any vectors, the above equality implies:
<span class="math-container">$$
P^T = P
$$</span></p>
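<p>These identities can be checked exactly on a concrete orthogonal projection. A minimal sketch using the rank-one projector <span class="math-container">$P = aa^T/(a\cdot a)$</span> onto the line spanned by <span class="math-container">$a$</span> (exact rational arithmetic, so the assertions are equalities rather than approximations; the particular vectors are arbitrary):</p>

```python
from fractions import Fraction as F

# Orthogonal projection onto the line spanned by a = (1, 2): P = a a^T / (a . a).
a = [F(1), F(2)]
aa = a[0] * a[0] + a[1] * a[1]
P = [[a[i] * a[j] / aa for j in range(2)] for i in range(2)]

def matvec(M, x):
    return [sum(M[i][j] * x[j] for j in range(2)) for i in range(2)]

def dot(x, y):
    return sum(u * v for u, v in zip(x, y))

# Symmetric and idempotent:
assert P[0][1] == P[1][0]
P2 = [[sum(P[i][k] * P[k][j] for k in range(2)) for j in range(2)] for i in range(2)]
assert P2 == P

# (P v) . w == v . (P w) for sample vectors, the key identity from Step 1:
v, w = [F(3), F(-1)], [F(2), F(5)]
assert dot(matvec(P, v), w) == dot(v, matvec(P, w))
print("P is symmetric, idempotent, and self-adjoint for the dot product")
```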
|
differentiation | <p>Are continuous functions always differentiable? Are there any examples in dimension <span class="math-container">$n > 1$</span>?</p>
| <p>No. <a href="http://en.wikipedia.org/wiki/Karl_Weierstrass" rel="noreferrer">Weierstraß</a> gave in 1872 the first published example of a <a href="http://en.wikipedia.org/wiki/Weierstrass_function" rel="noreferrer">continuous function that's nowhere differentiable</a>.</p>
| <p>No, consider the example of $f(x) = |x|$. This function is continuous but not differentiable at $x = 0$.</p>
<p>There are even more bizarre functions that are continuous yet nowhere differentiable. This class of functions led to the development of the study of fractals.</p>
|
linear-algebra | <p>Here's a cute problem that was frequently given by the late Herbert Wilf during his talks. </p>
<p><strong>Problem:</strong> Let $A$ be an $n \times n$ matrix with entries from $\{0,1\}$ having all positive eigenvalues. Prove that all of the eigenvalues of $A$ are $1$.</p>
<p><strong>Proof:</strong></p>
<blockquote class="spoiler">
<p> Use the AM-GM inequality to relate the trace and determinant.</p>
</blockquote>
<p>Is there any other proof?</p>
| <p>If one wants to use the AM-GM inequality, you could proceed as follows:
Since $A$ has all $1$'s or $0$'s on the diagonal, it follows that $tr(A)\leq n$.
Now calculating the determinant by expanding along any row/column, one can easily see that the determinant is an integer, since it is a sum of products of matrix entries (up to sign). Since all eigenvalues are positive, this integer must be positive. AM-GM inequality implies
$$det(A)^{\frac{1}{n}}=\left(\prod_{i}\lambda_{i}\right)^{\frac{1}{n}}\leq \frac{1}{n}\sum_{i=1}^{n}\lambda_{i}\leq 1.$$
Since $det(A)\neq 0$, and $m^{\frac{1}{n}}>1$ for $m>1$, the above inequality forces $det(A)=1$.
We therefore have equality in AM-GM, which happens precisely when $\lambda_{i}=\lambda_{j}$ for all $i,j$. Since the eigenvalues are all equal and their product is $1$, every eigenvalue equals $1$, which gives the result.</p>
| <p>Suppose that A has a column with only zero entries; then we must have zero as an eigenvalue (e.g. expand det(A-rI) along that column). So it must be true that in satisfying the OP's requirements we must have each column containing a 1. The same holds true for the rows by the same argument. Now suppose that we have a linear relationship between the rows; then there exists a linear combination of these rows that gives rise to a new matrix with a zero row. We've already seen that this is not allowed, so we must have linearly independent rows. I am trying to force the form of the permissible matrices to be restricted enough to give the result. Linear independence of the rows gives us a glimpse at the invertibility of such matrices but alas not their diagonalizability. The characteristic polynomial $(1-r)^n$ results from upper or lower triangular matrices with ones along the diagonal, and I suspect that we may be able to complete this proof by looking at what happens to this polynomial when there are deviations from the triangular shape. The result linked by user1551 may be the key. Trying to gain some intuition about what the possibilities are leads one to match the binomial theorem with Newton's identities: <a href="http://en.wikipedia.org/wiki/Newton%27s_identities#Expressing_elementary_symmetric_polynomials_in_terms_of_power_sums" rel="nofollow">http://en.wikipedia.org/wiki/Newton%27s_identities#Expressing_elementary_symmetric_polynomials_in_terms_of_power_sums</a></p>
<p>and the fact that the trace must be $n$ (diagonal ones) and the determinant 1. I would like to show that any deviation from this minimal polynomial must lead to a non-positive eigenvalue. There are two aspects to the analysis: a combinatorial argument to show what types of modifications (from triangular) are permissible while maintaining row/column independence, and a look at the geometrical effects of the non-dominant terms in the resulting characteristic polynomials. Maybe some type of induction argument will surface here. </p>
|
matrices | <p>If the matrix is positive definite, then all its eigenvalues are strictly positive. </p>
<p>Is the converse also true?<br>
That is, if the eigenvalues are strictly positive, then matrix is positive definite?<br>
Can you give example of $2 \times 2$ matrix with $2$ positive eigenvalues but is not positive definite?</p>
| <p>I think this is false. Let <span class="math-container">$A = \begin{pmatrix} 1 & -3 \\ 0 & 1 \end{pmatrix}$</span> be a <span class="math-container">$2\times 2$</span> matrix, in the canonical basis of <span class="math-container">$\mathbb R^2$</span>. Then <span class="math-container">$A$</span> has the double eigenvalue <span class="math-container">$\lambda=1$</span>. If <span class="math-container">$v=\begin{pmatrix}1\\1\end{pmatrix}$</span>, then <span class="math-container">$\langle v, Av \rangle < 0$</span>.</p>
<p>The point is that the matrix can have all its eigenvalues strictly positive, but it does not follow that it is positive definite.</p>
| <p>This question does a great job of illustrating the problem with thinking about these things in terms of coordinates. The thing that is positive-definite is not a matrix $M$ but the <em>quadratic form</em> $x \mapsto x^T M x$, which is a very different beast from the linear transformation $x \mapsto M x$. For one thing, the quadratic form does not depend on the antisymmetric part of $M$, so using an asymmetric matrix to define a quadratic form is redundant. And there is <em>no reason</em> that an asymmetric matrix and its symmetrization need to be at all related; in particular, they do not need to have the same eigenvalues. </p>
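<p>The <span class="math-container">$2\times 2$</span> counterexample above is easy to verify directly; a minimal sketch:</p>

```python
# Upper-triangular, so the eigenvalues are the diagonal entries: 1 and 1.
A = [[1, -3],
     [0,  1]]
v = [1, 1]

# Quadratic form v^T A v:
Av = [A[0][0] * v[0] + A[0][1] * v[1],
      A[1][0] * v[0] + A[1][1] * v[1]]
quad = v[0] * Av[0] + v[1] * Av[1]
assert quad == -1        # negative, so A is not positive definite

# The quadratic form only sees the symmetric part (A + A^T) / 2,
# whose eigenvalues are 1 + 3/2 and 1 - 3/2; the latter is negative.
S = [[1.0, -1.5],
     [-1.5, 1.0]]
# Eigenvalues of a symmetric 2x2 [[a, b], [b, a]] are a + b and a - b.
eigs = (S[0][0] + S[0][1], S[0][0] - S[0][1])
print(sorted(eigs))      # [-0.5, 2.5]
```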
|
differentiation | <p>I am a Software Engineering student and this year I learned about how CPUs work, it turns out that electronic engineers and I also see it a lot in my field, we do use derivatives with discontinuous functions. For instance in order to calculate the optimal amount of ripple adders so as to minimise the execution time of the addition process:</p>
<p><span class="math-container">$$\text{ExecutionTime}(n, k) = \Delta(4k+\frac{2n}{k}-4)$$</span>
<span class="math-container">$$\frac{d\,\text{ExecutionTime}(n, k)}{dk}=4\Delta-\frac{2n\Delta}{k^2}=0$$</span>
<span class="math-container">$$k= \sqrt{\frac{n}{2}}$$</span></p>
<p>where <span class="math-container">$n$</span> is the number of bits in the numbers to add, <span class="math-container">$k$</span> is the amount of adders in ripple and <span class="math-container">$\Delta$</span> is the "delta gate" (the time that takes to a gate to operate).</p>
<p>Clearly you can see that the execution time function is not continuous at all because <span class="math-container">$k$</span> is a natural number and so is <span class="math-container">$n$</span>.
This is driving me crazy: on the one hand I understand that I can analyse the function as if it were continuous and get results that way, and indeed I think that's what we do ("I think", that's why I am asking); but my intuition and knowledge of mathematical analysis tell me that this is completely wrong, because the truth is that the function is not continuous and never will be, and because of that the derivative with respect to <span class="math-container">$k$</span> or <span class="math-container">$n$</span> does not exist, since there is no rate of change.</p>
<p>If someone could explain me if my first guess is correct or not and why, I'd appreciate it a lot, thanks for reading and helping!</p>
| <p>In general, computing the extrema of a continuous function and rounding them to integers does <em>not</em> yield the extrema of the restriction of that function to the integers. It is not hard to construct examples.</p>
<p>However, your particular function is <em>convex</em> on the domain <span class="math-container">$k>0$</span>. In this case the extremum is at one or both of the two integers nearest to the <em>unique</em> extremum of the continuous function.</p>
<p>It would have been nice to explicitly state this fact when determining the minimum by this method, as it is really not obvious, but unfortunately such subtleties are often forgotten (or never known in the first place) in such applied fields. So I commend you for noticing the problem and asking!</p>
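<p>This claim is easy to stress-test for the execution-time function above (with <span class="math-container">$\Delta = 1$</span>, a harmless normalization): for every <span class="math-container">$n$</span>, the true integer minimizer in <span class="math-container">$k$</span> is found among <span class="math-container">$\lfloor\sqrt{n/2}\rfloor$</span> and <span class="math-container">$\lceil\sqrt{n/2}\rceil$</span>. A minimal sketch:</p>

```python
import math

def execution_time(n, k, delta=1.0):
    return delta * (4 * k + 2 * n / k - 4)

for n in range(1, 201):
    # True integer minimum, by brute force over all feasible k.
    best = min(execution_time(n, k) for k in range(1, n + 1))
    # Convexity in k means rounding the continuous minimizer suffices.
    k_star = math.sqrt(n / 2)
    candidates = {max(1, math.floor(k_star)), max(1, math.ceil(k_star))}
    assert best == min(execution_time(n, k) for k in candidates)
print("rounding sqrt(n/2) always finds the integer optimum")
```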
| <p>The main question here seems to be "why can we differentiate a function only defined on integers?". The proper answer, as divined by the OP, is that we can't: there is no unique way to define such a derivative, because we can interpolate the function in many different ways. However, in the cases that you are seeing, what we are really interested in is not the derivative of the function per se, but rather the extrema of the function. The derivative is just a tool used to find the extrema.</p>
<p>So what's really going on here is that we start out with a function <span class="math-container">$f:\mathbb{N}\rightarrow \mathbb{R}$</span> defined only on positive integers, and we implicitly <em>extend</em> <span class="math-container">$f$</span> to another function <span class="math-container">$\tilde{f}:\mathbb{R}\rightarrow\mathbb{R}$</span> defined on all real numbers. By "extend" we mean that the values of <span class="math-container">$\tilde{f}$</span> coincide with those of <span class="math-container">$f$</span> on the integers. Now, here's the crux of the matter: If we can show that there is some integer <span class="math-container">$n$</span> such that <span class="math-container">$\tilde{f}(n)\geq \tilde{f}(m)$</span> for all integers <span class="math-container">$m$</span>, i.e. <span class="math-container">$n$</span> is a maximum of <span class="math-container">$\tilde{f}$</span> <em>over the integers</em>, then we know the same is true for <span class="math-container">$f$</span>, our original function. The advantage of doing this is that now we can use calculus and derivatives to analyze <span class="math-container">$\tilde{f}$</span>. It doesn't matter how we extend <span class="math-container">$f$</span> to <span class="math-container">$\tilde{f}$</span>, because at the end of the day we are only using <span class="math-container">$\tilde{f}$</span> as a tool to find properties of <span class="math-container">$f$</span>, like maxima.</p>
<p>In many cases, there is a natural way to extend <span class="math-container">$f$</span> to <span class="math-container">$\tilde{f}$</span>. In your case, <span class="math-container">$f=\text{ExecutionTime}$</span>, and to extend it you just take the formula <span class="math-container">$\Delta \left(4k + \frac{2n}{k} - 4\right)$</span> and allow <span class="math-container">$n$</span> and <span class="math-container">$k$</span> to be real-valued instead of integer-valued. You could have extended it a different way--e.g. <span class="math-container">$\Delta \left(4k + \frac{2n}{k} - 4\right) + \sin(2\pi k)$</span> is also a valid extension of <span class="math-container">$\text{ExecutionTime}(n,k)$</span>, but this is not as convenient. And all we are trying to do is find a convenient way to analyze the original, integer-valued function, so if there's a straightforward way to do it we might as well use it.</p>
<hr />
<p>As an illustrative example, an interesting (and non-trivial) case of this idea of extending an integer-valued function to a continuous-valued one is the <a href="https://en.wikipedia.org/wiki/Gamma_function" rel="noreferrer">gamma function</a> <span class="math-container">$\Gamma$</span>, which is a continuous extension of the integer-valued factorial function. <span class="math-container">$\Gamma$</span> is not the only way to extend the factorial function, but it is for most purposes (in fact, all purposes that I know of) the most convenient.</p>
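<p>For instance, Python's standard library exposes <span class="math-container">$\Gamma$</span> as <code>math.gamma</code>, so one can check both that it extends the factorial and that it is defined between the integers:</p>

```python
import math

# Gamma extends the factorial: Gamma(n + 1) == n! at the non-negative integers
for n in range(10):
    assert math.isclose(math.gamma(n + 1), math.factorial(n))

# ...but it is also defined between them, e.g. Gamma(1/2) = sqrt(pi)
assert math.isclose(math.gamma(0.5), math.sqrt(math.pi))
```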
|
geometry | <p>I need to find the volume of the region defined by
$$\begin{align*}
a^2+b^2+c^2+d^2&\leq1,\\
a^2+b^2+c^2+e^2&\leq1,\\
a^2+b^2+d^2+e^2&\leq1,\\
a^2+c^2+d^2+e^2&\leq1 &\text{ and }\\
b^2+c^2+d^2+e^2&\leq1.
\end{align*}$$
I don't necessarily need a full solution but any starting points would be very useful.</p>
| <p>It turns out that this is much easier to do in <a href="http://en.wikipedia.org/wiki/Hyperspherical_coordinates#Hyperspherical_coordinates">hyperspherical coordinates</a>. I'll deviate somewhat from convention by swapping the sines and cosines of the angles in order to get a more pleasant integration region, so the relationship between my coordinates and the Cartesian coordinates is</p>
<p>$$
\begin{eqnarray}
a &=& r \sin \phi_1\\
b &=& r \cos \phi_1 \sin \phi_2\\
c &=& r \cos \phi_1 \cos \phi_2 \sin \phi_3\\
d &=& r \cos \phi_1 \cos \phi_2 \cos \phi_3 \sin \phi_4\\
e &=& r \cos \phi_1 \cos \phi_2 \cos \phi_3 \cos \phi_4\;,
\end{eqnarray}
$$</p>
<p>and the Jacobian determinant is $r^4\cos^3\phi_1\cos^2\phi_2\cos\phi_3$. As stated in my other answer, we can impose positivity and a certain ordering on the Cartesian coordinates by symmetry, so the desired volume is $2^55!$ times the volume for positive Cartesian coordinates with $a\le b\le c\le d\le e$. This translates into the constraints $0 \le \phi_4\le \pi/4$ and $0\le\sin\phi_i\le\cos\phi_i\sin\phi_{i+1}$ for $i=1,2,3$, and the latter becomes $0\le\phi_i\le\arctan\sin\phi_{i+1}$. The boundary of the volume also takes a simple form: Because of the ordering of the coordinates, the only relevant constraint is $b^2+c^2+d^2+e^2\le1$, and this becomes $r^2\cos^2\phi_1\le1$, so $0\le r\le\sec\phi_1$. Then the volume can readily be evaluated with a little help from our electronic friends:</p>
<p>$$
\begin{eqnarray}
V_5
&=&
2^55!\int_0^{\pi/4}\int_0^{\arctan\sin\phi_4}\int_0^{\arctan\sin\phi_3}\int_0^{\arctan\sin\phi_2}\int_0^{\sec\phi_1}r^4\cos^3\phi_1\cos^2\phi_2\cos\phi_3\mathrm dr\mathrm d\phi_1\mathrm d\phi_2\mathrm d\phi_3\mathrm d\phi_4
\\
&=&
2^55!\int_0^{\pi/4}\int_0^{\arctan\sin\phi_4}\int_0^{\arctan\sin\phi_3}\int_0^{\arctan\sin\phi_2}\frac15\sec^2\phi_1\cos^2\phi_2\cos\phi_3\mathrm d\phi_1\mathrm d\phi_2\mathrm d\phi_3\mathrm d\phi_4
\\
&=&
2^55!\int_0^{\pi/4}\int_0^{\arctan\sin\phi_4}\int_0^{\arctan\sin\phi_3}\frac15\sin\phi_2\cos^2\phi_2\cos\phi_3\mathrm d\phi_2\mathrm d\phi_3\mathrm d\phi_4
\\
&=&
2^55!\int_0^{\pi/4}\int_0^{\arctan\sin\phi_4}\left[-\frac1{15}\cos^3\phi_2\right]_0^{\arctan\sin\phi_3}\cos\phi_3\mathrm d\phi_3\mathrm d\phi_4
\\
&=&
2^55!\int_0^{\pi/4}\int_0^{\arctan\sin\phi_4}\frac1{15}\left(1-\left(1+\sin^2\phi_3\right)^{-3/2}\right)\cos\phi_3\mathrm d\phi_3\mathrm d\phi_4
\\
&=&
2^55!\int_0^{\pi/4}\left[\frac1{15}\left(1-\left(1+\sin^2\phi_3\right)^{-1/2}\right)\sin\phi_3\right]_0^{\arctan\sin\phi_4}\mathrm d\phi_4
\\
&=&
2^55!\int_0^{\pi/4}\frac1{15}\left(\left(1+\sin^2\phi_4\right)^{-1/2}-\left(1+2\sin^2\phi_4\right)^{-1/2}\right)\sin\phi_4\mathrm d\phi_4
\\
&=&
2^55!\left[\frac1{15}\left(\frac1{\sqrt2}\arctan\frac{\sqrt2\cos\phi_4}{\sqrt{1+2\sin^2\phi_4}}-\arctan\frac{\cos \phi_4}{\sqrt{1+\sin^2\phi_4}}\right)\right]_0^{\pi/4}
\\
&=&
2^8\left(\frac1{\sqrt2}\arctan\frac1{\sqrt2}-\arctan\frac1{\sqrt3}-\frac1{\sqrt2}\arctan\sqrt2+\arctan1\right)
\\
&=&
2^8\left(\frac\pi{12}+\frac{\mathrm{arccot}\sqrt2-\arctan\sqrt2}{\sqrt2}\right)
\\
&\approx&
5.5035\;,
\end{eqnarray}
$$</p>
<p>which is consistent with my numerical results.</p>
<p>The same approach readily yields the volume in four dimensions:</p>
<p>$$
\begin{eqnarray}
V_4
&=&
2^44!\int_0^{\pi/4}\int_0^{\arctan\sin\phi_3}\int_0^{\arctan\sin\phi_2}\int_0^{\sec\phi_1}r^3\cos^2\phi_1\cos\phi_2\mathrm dr\mathrm d\phi_1\mathrm d\phi_2\mathrm d\phi_3
\\
&=&
2^44!\int_0^{\pi/4}\int_0^{\arctan\sin\phi_3}\int_0^{\arctan\sin\phi_2}\frac14\sec^2\phi_1\cos\phi_2\mathrm d\phi_1\mathrm d\phi_2\mathrm d\phi_3
\\
&=&
2^44!\int_0^{\pi/4}\int_0^{\arctan\sin\phi_3}\frac14\sin\phi_2\cos\phi_2\mathrm d\phi_2\mathrm d\phi_3
\\
&=&
2^44!\int_0^{\pi/4}\left[\frac18\sin^2\phi_2\right]_0^{\arctan\sin\phi_3}\mathrm d\phi_3
\\
&=&
2^44!\int_0^{\pi/4}\frac18\frac{\sin^2\phi_3}{1+\sin^2\phi_3}\mathrm d\phi_3
\\
&=&
2^44!\left[\frac18\left(\phi_3-\frac1{\sqrt2}\arctan\left(\sqrt2\tan\phi_3\right)\right)\right]_0^{\pi/4}
\\
&=&
12\left(\pi-2\sqrt2\arctan\sqrt2\right)
\\
&\approx&
5.2746\;.
\end{eqnarray}
$$</p>
<p>The calculation for three dimensions becomes almost trivial:</p>
<p>$$
\begin{eqnarray}
V_3
&=&
2^33!\int_0^{\pi/4}\int_0^{\arctan\sin\phi_2}\int_0^{\sec\phi_1}r^2\cos\phi_1\mathrm dr\mathrm d\phi_1\mathrm d\phi_2
\\
&=&
2^33!\int_0^{\pi/4}\int_0^{\arctan\sin\phi_2}\frac13\sec^2\phi_1\mathrm d\phi_1\mathrm d\phi_2
\\
&=&
2^33!\int_0^{\pi/4}\frac13\sin\phi_2\mathrm d\phi_2
\\
&=&
8(2-\sqrt2)
\\
&\approx&
4.6863\;.
\end{eqnarray}
$$</p>
<p>With all these integrals miraculously working out, one might be tempted to conjecture that there's a pattern here, with a closed form for all dimensions, and perhaps even that the sequence $4,4.69,5.27,5.50,\dotsc$ monotonically converges. However, that doesn't seem to be the case. For six dimensions, the integrals become intractable:</p>
<p>$$
\begin{eqnarray}
V_6
&=&
2^66!\int_0^{\pi/4}\int_0^{\arctan\sin\phi_5}\int_0^{\arctan\sin\phi_4}\int_0^{\arctan\sin\phi_3}\int_0^{\arctan\sin\phi_2}\int_0^{\sec\phi_1}r^5\cos^4\phi_1\cos^3\phi_2\cos^2\phi_3\cos\phi_4\mathrm dr\mathrm d\phi_1\mathrm d\phi_2\mathrm d\phi_3\mathrm d\phi_4\mathrm d\phi_5
\\
&=&
2^66!\int_0^{\pi/4}\int_0^{\arctan\sin\phi_5}\int_0^{\arctan\sin\phi_4}\int_0^{\arctan\sin\phi_3}\int_0^{\arctan\sin\phi_2}\frac16\sec^2\phi_1\cos^3\phi_2\cos^2\phi_3\cos\phi_4\mathrm d\phi_1\mathrm d\phi_2\mathrm d\phi_3\mathrm d\phi_4\mathrm d\phi_5
\\
&=&
2^66!\int_0^{\pi/4}\int_0^{\arctan\sin\phi_5}\int_0^{\arctan\sin\phi_4}\int_0^{\arctan\sin\phi_3}\frac16\sin\phi_2\cos^3\phi_2\cos^2\phi_3\cos\phi_4\mathrm d\phi_2\mathrm d\phi_3\mathrm d\phi_4\mathrm d\phi_5
\\
&=&
2^66!\int_0^{\pi/4}\int_0^{\arctan\sin\phi_5}\int_0^{\arctan\sin\phi_4}\left[-\frac1{24}\cos^4\phi_2\right]_0^{\arctan\sin\phi_3}\cos^2\phi_3\cos\phi_4\mathrm d\phi_3\mathrm d\phi_4\mathrm d\phi_5
\\
&=&
2^66!\int_0^{\pi/4}\int_0^{\arctan\sin\phi_5}\int_0^{\arctan\sin\phi_4}\frac1{24}\left(1-\left(1+\sin^2\phi_3\right)^{-2}\right)\cos^2\phi_3\cos\phi_4\mathrm d\phi_3\mathrm d\phi_4\mathrm d\phi_5
\\
&=&
2^66!\int_0^{\pi/4}\int_0^{\arctan\sin\phi_5}\left[\frac1{48}\left(\phi_3+\sin\phi_3\cos\phi_3-\frac1{\sqrt2}\arctan\left(\sqrt2\tan\phi_3\right)-\frac{\sin\phi_3\cos\phi_3}{1+\sin^2\phi_3}\right)\right]_0^{\arctan\sin\phi_4}\cos\phi_4\mathrm d\phi_3\mathrm d\phi_4\mathrm d\phi_5
\\
&=&
2^66!\int_0^{\pi/4}\int_0^{\arctan\sin\phi_5}\frac1{48}\left(\arctan \sin \phi_4 + \frac{\sin \phi_4}{1 + \sin^2 \phi_4}-\frac1{\sqrt2}\arctan\left(\sqrt2\sin \phi_4\right)-\frac{\sin \phi_4}{1 + 2 \sin^2\phi_4}\right) \cos \phi_4\mathrm d\phi_4\mathrm d\phi_5
\\
&=&
2^66!\int_0^{\pi/4}\left[\frac1{96} \sin\phi_4 \left(2 \arctan\sin\phi_4-\sqrt2\arctan\left(\sqrt2 \sin\phi_4\right)\right)\right]_0^{\arctan\sin\phi_5}\mathrm d\phi_5
\\
&=&
480\int_0^{\pi/4}\sin\arctan\sin\phi_5 \left(2 \arctan\sin\arctan\sin\phi_5-\sqrt2\arctan\left(\sqrt2 \sin\arctan\sin\phi_5\right)\right)\mathrm d\phi_5
\\
&\approx&
5.3361
\;.
\end{eqnarray}
$$</p>
<p>Wolfram|Alpha doesn't find a closed form for that last integral (and I don't blame it), so it seems that you may have asked for the last of these volumes that can be given in closed form. Note that $V_2<V_3<V_4<V_5>V_6$. The results from Monte Carlo integration for higher dimensions show a monotonic decrease thereafter. This is also the behaviour of the volume of the unit hypersphere:</p>
<p>$$
\begin{array}{|c|c|c|c|c|c|}
d&\text{sphere (closed form)}&\text{cylinders (closed form)}&\text{sphere (numeric)}&\text{cylinders (numeric)}&\text{ratio}\\
\hline
2&\pi&4&3.1416&4&1.2732\\
3&4\pi/3&8(2-\sqrt2)&4.1888&4.6863&1.1188\\
4&\pi^2/2&12\left(\pi-2\sqrt2\arctan\sqrt2\right)&4.9348&5.2746&1.0689\\
5&8\pi^2/15&2^8\left(\frac\pi{12}+\frac{\mathrm{arccot}\sqrt2-\arctan\sqrt2}{\sqrt2}\right)&5.2638&5.5036&1.0456\\
6&\pi^3/6&&5.1677&5.3361&1.0326\\
7&16\pi^3/105&&4.7248&4.8408&1.0246\\
8&\pi^4/24&&4.0587&4.1367&1.0192\\
\hline
\end{array}
$$</p>
<p>The ratio seems to converge to $1$ fairly rapidly, so in high dimensions almost all of the intersection of the cylinders lies within the unit hypersphere.</p>
<p>P.S.: Inspired by leonbloy's approach, I improved the Monte Carlo integration by integrating the admissible radius over uniformly sampled directions. The standard error was less than $10^{-5}$ in all cases, and the results would have to deviate by at least three standard errors to change the rounding of the fourth digit, so the new numbers are in all likelihood the correct numbers rounded to four digits. The results show that leonbloy's estimate converges quite rapidly.</p>
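<p>For anyone who wants to reproduce these numbers, here is a plain-vanilla Monte Carlo sketch (much cruder than the radial sampling just described, so expect agreement only to about two digits):</p>

```python
import math
import random

def cylinder_volume_mc(d, samples=200_000, seed=1):
    # volume of the intersection of the d "cylinders": every sum of d-1 of the
    # squared coordinates must be at most 1
    rng = random.Random(seed)
    hits = 0
    for _ in range(samples):
        sq = [rng.uniform(-1.0, 1.0) ** 2 for _ in range(d)]
        # all d constraints hold iff the one omitting the SMALLEST square does
        if sum(sq) - min(sq) <= 1.0:
            hits += 1
    return 2 ** d * hits / samples  # sampling box [-1,1]^d has volume 2^d

assert cylinder_volume_mc(2, samples=1_000) == 4.0            # the d=2 square
assert abs(cylinder_volume_mc(3) - 8 * (2 - math.sqrt(2))) < 0.1
assert abs(cylinder_volume_mc(5) - 5.5036) < 0.15
```

<p>With these sample sizes the standard error is on the order of a few hundredths, so the tolerances above leave several standard deviations of slack.</p>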
| <p>There's reflection symmetry in each of the coordinates, so the volume is $2^5$ times the volume for positive coordinates. There's also permutation symmetry among the coordinates, so the volume is $5!$ times the volume with the additional constraint $a\le b\le c\le d\le e$. Then it remains to find the integration boundaries and solve the integrals.</p>
<p>The lower bound for $a$ is $0$. The upper bound for $a$, given the above constraints, is attained when $a=b=c=d=e$, and is thus $\sqrt{1/4}=1/2$. The lower bound for $b$ is $a$, and the upper bound for $b$ is again $1/2$. Then it gets slightly more complicated. The lower bound for $c$ is $b$, but for the upper bound for $c$ we have to take $c=d=e$ with $b$ given, which yields $\sqrt{(1-b^2)/3}$. Likewise, the lower bound for $d$ is $c$, and the upper bound for $d$ is attained for $d=e$ with $b$ and $c$ given, which yields $\sqrt{(1-b^2-c^2)/2}$. Finally, the lower bound for $e$ is $d$ and the upper bound for $e$ is $\sqrt{1-b^2-c^2-d^2}$. Putting it all together, the desired volume is</p>
<p>$$V_5=2^55!\int_0^{1/2}\int_a^{1/2}\int_b^{\sqrt{(1-b^2)/3}}\int_c^{\sqrt{(1-b^2-c^2)/2}}\int_d^{\sqrt{1-b^2-c^2-d^2}}\mathrm de\mathrm dd\mathrm dc\mathrm db\mathrm da\;.$$</p>
<p>That's a bit of a nightmare to work out; Wolfram Alpha gives up on even small parts of it, so let's do the corresponding thing in $3$ and $4$ dimensions first. In $3$ dimensions, we have</p>
<p>$$
\begin{eqnarray}
V_3
&=&
2^33!\int_0^{\sqrt{1/2}}\int_a^{\sqrt{1/2}}\int_b^{\sqrt{1-b^2}}\mathrm dc\mathrm db\mathrm da
\\
&=&
2^33!\int_0^{\sqrt{1/2}}\int_a^{\sqrt{1/2}}\left(\sqrt{1-b^2}-b\right)\mathrm db\mathrm da
\\
&=&
2^33!\int_0^{\sqrt{1/2}}\frac12\left(\arcsin\sqrt{\frac12}-\arcsin a-a\sqrt{1-a^2}+a^2\right)\mathrm da
\\
&=&
2^33!\frac16\left(2-\sqrt2\right)
\\
&=&
8\left(2-\sqrt2\right)\;.
\end{eqnarray}$$</p>
<p>I've worked out part of the answer for $4$ dimensions. There are some miraculous cancellations that make me think that a) there must be a better way to do this (perhaps anon's answer, if it can be fixed) and b) this might be workable for $5$ dimensions, too. I have other things to do now, but I'll check back and if there's no correct solution yet I'll try to finish the solution for $4$ dimensions.</p>
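<p>As a numerical sanity check of the $3$-dimensional result: the inner integral is $\int_b^{\sqrt{1-b^2}}\mathrm dc=\sqrt{1-b^2}-b$, and the remaining double integral over $0\le a\le b\le\sqrt{1/2}$ can be done with a crude midpoint rule:</p>

```python
import math

# V_3 = 2^3 * 3! * (triple integral above); the inner c-integral is
# sqrt(1 - b^2) - b, leaving a 2-D midpoint sum over the triangle a <= b
N = 400
h = math.sqrt(0.5) / N
total = 0.0
for i in range(N):
    for j in range(i, N):          # enforces b >= a cell by cell
        b = (j + 0.5) * h          # midpoint in b (integrand is independent of a)
        total += (math.sqrt(1.0 - b * b) - b) * h * h
V3 = 48 * total
assert abs(V3 - 8 * (2 - math.sqrt(2))) < 0.02   # 8(2 - sqrt(2)) ~ 4.6863
```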
|
geometry | <p>Source: <a href="http://www.math.uci.edu/%7Ekrubin/oldcourses/12.194/ps1.pdf" rel="noreferrer">German Mathematical Olympiad</a></p>
<h3>Problem:</h3>
<blockquote>
<p>On an arbitrarily large chessboard, a generalized knight moves by jumping p squares in one direction and q squares in a perpendicular direction, p, q > 0. Show that such a knight can return to its original position only after an even number of moves.</p>
</blockquote>
<h3>Attempt:</h3>
<p>Assume, wlog, the knight moves <span class="math-container">$q$</span> steps <strong>to the right</strong> after its <span class="math-container">$p$</span> steps. Let the valid moves for the knight be "LU", "UR", "DL", "RD" i.e. when it moves <strong>L</strong>eft, it has to go <strong>U</strong>p("LU"), or when it goes <strong>U</strong>p , it has to go <strong>R</strong>ight("UR") and so on.</p>
<p>Let the knight be stationed at <span class="math-container">$(0,0)$</span>. We note that after any move its coordinates will be integer multiples of <span class="math-container">$p,q$</span>. Let its final position be <span class="math-container">$(pk, qr)$</span> for <span class="math-container">$ k,r\in\mathbb{Z}$</span>. We follow sign conventions of coordinate system.</p>
<p>Let the knight move by <span class="math-container">$-pk$</span> horizontally and <span class="math-container">$-qk$</span> vertically by repeated application of one type of step. Its new position is then <span class="math-container">$(0,q(r-k))$</span>. I am thinking that somehow I need to cancel that <span class="math-container">$q(r-k)$</span> to reach <span class="math-container">$(0,0)$</span>, but I am not able to do so.</p>
<p>Any hints please?</p>
| <p>Case I: If $p+q$ is odd, then the knight's square changes colour after each move, so we are done.</p>
<p>Case II: If $p$ and $q$ are both odd, then the $x$-coordinate changes by an odd number after every move, so it is odd after an odd number of moves. So the $x$-coordinate can be zero only after an even number of moves.</p>
<p>Case III: If $p$ and $q$ are both even, we can keep dividing each of them by $2$ until we reach Case I or Case II. (Dividing $p$ and $q$ by the same amount doesn't change the shape of the knight's path, only its size.)</p>
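<p>For small move counts the claim can also be confirmed by brute force; the following sketch enumerates every walk of up to five moves and checks that no odd-length walk returns to the start:</p>

```python
from itertools import product

def closed_walks_have_even_length(p, q, max_len=5):
    # the (up to) 8 moves of a generalized (p, q)-knight
    moves = {(sa * a, sb * b) for (a, b) in ((p, q), (q, p))
             for sa in (1, -1) for sb in (1, -1)}
    for length in range(1, max_len + 1):
        for walk in product(moves, repeat=length):
            if sum(dx for dx, _ in walk) == 0 and sum(dy for _, dy in walk) == 0:
                if length % 2 == 1:
                    return False   # found an odd closed walk: claim fails
    return True

assert closed_walks_have_even_length(1, 2)   # Case I: p + q odd
assert closed_walks_have_even_length(1, 3)   # Case II: p, q both odd
assert closed_walks_have_even_length(2, 6)   # Case III: p, q both even
```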
| <p>This uses complex numbers.</p>
<p>Define $z=p+qi$. Say that the knight starts at $0$ on the complex plane. Note that, in one move, the knight may add or subtract $z$, $iz$, $\bar z$, $i\bar z$ to his position.</p>
<p>Thus, at any point, the knight is at a point of the form:
$$(a+bi)z+(c+di)\bar z$$
where $a$, $b$, $c$, and $d$ are integers.</p>
<p>Note that the parity (evenness/oddness) of the quantity $a+b+c+d$ changes after every move. This means it's even after an even number of moves and odd after an odd number of moves. Also note that:
$$a+b+c+d\equiv a^2+b^2-c^2-d^2\pmod2$$
(This is because $x\equiv x^2\pmod2$ and $x\equiv-x\pmod2$ for all $x$.)</p>
<p>Now, let's say that the knight has reached its original position. Then:
\begin{align}
(a+bi)z+(c+di)\bar z&=0\\
(a+bi)z&=-(c+di)\bar z\\
|a+bi||z|&=|c+di||z|\\
|a+bi|&=|c+di|\\
\sqrt{a^2+b^2}&=\sqrt{c^2+d^2}\\
a^2+b^2&=c^2+d^2\\
a^2+b^2-c^2-d^2&=0\\
a^2+b^2-c^2-d^2&\equiv0\pmod2\\
a+b+c+d&\equiv0\pmod2
\end{align}
Thus, the number of moves is even.</p>
<blockquote>
<p>Interestingly, this implies that $p$ and $q$ do not need to be integers. They can each be any real number. The only constraint is that we can't have $p=q=0$.</p>
</blockquote>
|
linear-algebra | <p>When someone wants to solve a system of linear equations like</p>
<p>$$\begin{cases} 2x+y=0 \\ 3x+y=4 \end{cases}\,,$$</p>
<p>they might use this logic: </p>
<p>$$\begin{align}
\begin{cases} 2x+y=0 \\ 3x+y=4 \end{cases}
\iff &\begin{cases} -2x-y=0 \\ 3x+y=4 \end{cases}
\\
\color{maroon}{\implies} &\begin{cases} -2x-y=0\\ x=4 \end{cases}
\iff \begin{cases} -2(4)-y=0\\ x=4 \end{cases}
\iff \begin{cases} y=-8\\ x=4 \end{cases}
\,.\end{align}$$</p>
<p>Then they conclude that $(x, y) = (4, -8)$ is a solution to the system. This turns out to be correct, but the logic seems flawed to me. As I see it, all this proves is that
$$
\forall{x,y\in\mathbb{R}}\quad
\bigg(
\begin{cases} 2x+y=0 \\ 3x+y=4 \end{cases}
\color{maroon}{\implies}
\begin{cases} y=-8\\ x=4 \end{cases}
\bigg)\,.
$$</p>
<p>But this statement leaves the possibility open that there is no pair $(x, y)$ in $\mathbb{R}^2$ that satisfies the system of equations.</p>
<p>$$
\text{What if}\;
\begin{cases} 2x+y=0 \\ 3x+y=4 \end{cases}
\;\text{has no solution?}
$$</p>
<p>It seems to me that to really be sure we've solved the equation, we <em>have</em> to plug back in for $x$ and $y$. I'm not talking about checking our work for simple mistakes. This seems like a matter of logical necessity. But of course, most people don't bother to plug back in, and it never seems to backfire on them. So why does no one plug back in?</p>
<p><strong>P.S.</strong> It would be great if I could understand this for systems of two variables, but I would be deeply thrilled to understand it for systems of $n$ variables. I'm starting to use Gaussian elimination on big systems in my linear algebra class, where intuition is weaker and calculations are more complex, and still no one feels the need to plug back in.</p>
| <p>You wrote this step as an implication: </p>
<blockquote>
<p>$$\begin{cases} -2x-y=0 \\ 3x+y=4 \end{cases} \implies \begin{cases} -2x-y=0\\ x=4 \end{cases}$$</p>
</blockquote>
<p>But it is in fact an equivalence:</p>
<p>$$\begin{cases} -2x-y=0 \\ 3x+y=4 \end{cases} \iff \begin{cases} -2x-y=0\\ x=4 \end{cases}$$</p>
<p>Then you have equivalences end-to-end and, as long as all steps are equivalences, you <em>proved</em> that the initial equations are equivalent to the end solutions, so you don't need to "<em>plug back</em>" and verify. Of course, carefulness is required to ensure that every step <em>is</em> in fact reversible.</p>
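<p>The same point can be made computationally: perform the elimination with row operations, each of which is invertible, and then observe that "plugging back in" succeeds automatically:</p>

```python
# Augmented matrix [a  b | rhs] for 2x + y = 0, 3x + y = 4
A = [[2.0, 1.0, 0.0],
     [3.0, 1.0, 4.0]]

# R2 <- R2 - (3/2) R1: an invertible row operation (add (3/2) R1 back to undo it)
f = A[1][0] / A[0][0]
A[1] = [A[1][j] - f * A[0][j] for j in range(3)]

# back-substitution
y = A[1][2] / A[1][1]
x = (A[0][2] - A[0][1] * y) / A[0][0]
assert (x, y) == (4.0, -8.0)

# "plugging back in" is now redundant, but of course it succeeds:
assert 2 * x + y == 0 and 3 * x + y == 4
```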
| <p>The key is that in solving this system of equations (or with row-reduction in general), every step is <em>reversible</em>. Following the steps forward, we see that <em>if</em> $x$ and $y$ satisfy the equations, then $x = 4$ and $y = -8$. That is, we conclude that $(4,-8)$ is the only possible solution, assuming a solution exists. <em>Conversely</em>, we can follow the arrows in the other direction to find that if $x = 4$ and $y = -8$, then the equations hold.</p>
<p>Take a second to confirm that those $\iff$'s aren't really $\implies$'s.</p>
<p>Compare this to a situation where the steps aren't reversible. For example:
$$
\sqrt{x^2 - 3} = -1 \implies x^2 -3 = 1 \iff x^2 = 4 \iff x = \pm 2
$$
You'll notice that "squaring both sides" isn't reversible, so we can't automatically deduce that $\pm 2$ solve the orginal equation (and in fact, there is no solution).</p>
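<p>A quick numerical check confirms that the candidates produced by the irreversible step fail the original equation:</p>

```python
import math

# candidates produced by the irreversible step "square both sides"
for cand in (2, -2):
    lhs = math.sqrt(cand * cand - 3)
    assert lhs == 1.0        # sqrt returns the non-negative root...
    assert lhs != -1         # ...so neither candidate solves the original equation
```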
|
probability | <p>Given the rapid rise of the <a href="http://en.wikipedia.org/wiki/Mega_Millions">Mega Millions</a> jackpot in the US (now advertised at \$640 million and equivalent to a "cash" prize of about \$448 million), I was wondering if there was ever a point at which the lottery became positive expected value (EV), and, if so, what is that point or range?</p>
<p>Also, a friend and I came up with two different ways of looking at the problem, and I'm curious if they are both valid.</p>
<p>First, it is simple to calculate the expected value of the "fixed" prizes. The first five numbers are selected from a pool of 56, the final "mega" ball from a pool of 46. (Let us ignore taxes in all of our calculations... one can adjust later for one's own tax rate which will vary by state). The expected value of all these fixed prizes is \$0.183.</p>
<p>So, then you are paying \$0.817 for the jackpot prize. My plan was then to calculate the expected number of winners of the jackpot (multiple winners split the prize) to get an expected jackpot amount and multiply by the probability of selecting the winning numbers (given by $\binom{56}{5} * 46 = 1 \text{ in } 175,711,536$). The number of tickets sold can be easily estimated since \$0.32 of each ticket is added to the prize, so: </p>
<p>(<em>Current Cash Jackpot</em> - <em>Previous Cash Jackpot</em>) / 0.32 = Tickets Sold
$(448 - 252) / 0.32 = 612.5$ million tickets sold (!!).</p>
<p>(The cash prizes are lower than the advertised jackpot. Currently, they are about 70% of the advertised jackpot.) Obviously, one expects multiple winners, but I can't figure out how to get a precise estimate, and various web sources seem to be getting different numbers.</p>
<p><strong>Alternative methodology:</strong> My friend's methodology, which is far simpler, is to say 50% of this drawing's sales will be paid out in prizes (\$0.18 to fixed prizes and \$0.32 to the jackpot). Add to that the carried over jackpot amount (\$250 million cash prize from the unwon previous jackpot) that will also be paid out. So, your expected value is $\$250$ million / 612.5 million tickets sold = \$0.40 from the previous drawing + \$0.50 from this drawing = \$0.90 total expected value for each \$1 ticket purchased (before taxes). Is this a valid approach or is it missing something? It's far simpler than anything I found while searching the web for this.</p>
<p><strong>Added:</strong> After considering the answer below, this is why I don't think my friend's methodology can be correct: it neglects the probability that no one will win. For instance, if a $1$ ticket was sold, the expected value of that ticket would not be $250 million + 0.50 since one has to consider the probability of the jackpot not being paid out at all. So, <em>additional question:</em> what is this probability and how do we find it? (Obviously it is quite small when $612.5$ million tickets are sold and the odds of each one winning is $1:175.7$ million.) Would this allow us to salvage this methodology?</p>
<p>So, is there a point that the lottery will be positive EV? And, what is the EV this week, and the methodology for calculating it?</p>
| <p>I did a fairly <a href="http://www.circlemud.org/~jelson/megamillions">extensive analysis of this question</a> last year. The short answer is that by modeling the relationship of past jackpots to ticket sales we find that ticket sales grow super-linearly with jackpot size. Eventually, the positive expectation of a larger jackpot is outweighed by the negative expectation of ties. For MegaMillions, this happens before a ticket ever becomes EV+.</p>
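<p>To make the tie effect concrete with the numbers from the question, here is a back-of-the-envelope sketch. It assumes the number of winners is Poisson with mean $(\text{tickets sold})/(\text{odds})$, and uses the identity $E[1/(1+K)]=(1-e^{-\mu})/\mu$ for $K\sim\text{Poisson}(\mu)$:</p>

```python
import math

odds = 175_711_536        # 1 in this many, per ticket
tickets = 612_500_000     # estimated sales, from the question
jackpot = 448_000_000     # cash jackpot, from the question
fixed_ev = 0.183          # EV of the fixed prizes, from the question

mu = tickets / odds                       # expected number of winners, ~3.49
p_no_winner = math.exp(-mu)               # chance the jackpot is not won at all
share = (1 - math.exp(-mu)) / mu          # E[1/(1 + K)] for K ~ Poisson(mu)
ev = fixed_ev + (jackpot / odds) * share  # EV of one $1 ticket, pre-tax

assert round(p_no_winner, 2) == 0.03      # ~3% chance nobody wins
assert ev < 1.0                           # about $0.89: still negative EV
```

<p>This also answers the asker's side question: the probability that nobody wins is roughly $e^{-\mu}\approx 3\%$ at these sales levels, and the expected split among winners is what keeps the ticket below break-even despite the record jackpot.</p>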
| <p>An interesting thought experiment is whether it would be a good investment for a rich person to buy every possible number for \$175,711,536. This person is then guaranteed to win! Then you consider the resulting size of the pot (now a bit larger), the probability of splitting it with other winners, and the fact that you get to deduct the \$175.7M spent from your winnings before taxes. (Thanks to Michael McGowan for pointing out that last one.)</p>
<p>The current pot is \$640M, with a \$462M cash payout. The previous pot was \$252M cash payout, so using \$0.32 into the cash pot per ticket, we have 656,250,000 tickets sold. I, the rich person (who has enough servants already employed that I can send them all out to buy these tickets at no additional labor cost) will add about \$56M to the pot. So the cash pot is now \$518M.</p>
<p>If I am the only winner, then I net (\$518M + \$32M (my approximate winnings from smaller prizes)) * 0.65 (federal taxes) + 0.35 * \$176M (I get to deduct what I paid for the tickets) = \$419M. I live in California (of course), so I pay no state taxes on lottery winnings. I get a 138% return on my investment! Pretty good. Even if I did have to pay all those servants overtime for three days.</p>
<p>If I split the grand prize with just one other winner, I net \$250M. A 42% return on my investment. Still good. With two other winners, I net $194M for about a 10% gain.</p>
<p>If I have to split it with three other winners, then I lose. Now I pay no taxes, but I do not get to deduct my gambling losses against my other income. I net \$161M, about an 8% loss on my investment. If I split it with four other winners, I net \$135M, a 23% loss. Ouch.</p>
<p>So how many will win? Given the 656,250,000 other tickets sold, the expected number of other winners (assuming a random distribution of choices, so I'm ignoring the picking-birthdays-in-your-numbers problem) is 3.735. Hmm. This might not turn out well for Mr. Money Bags. Using Poisson, <span class="math-container">$p(n)={\mu^n e^{-\mu}\over n!}$</span>, where <span class="math-container">$\mu$</span> is the expected number (3.735) and <span class="math-container">$n$</span> is the number of other winners, there is only a 2.4% chance of me being the only winner, a 9% chance of one other winner, a 17% chance of two, a 21% chance of three, and then it starts going down with a 19% chance of four, 14% for five, 9% for six, and so on.</p>
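<p>Those percentages are easy to reproduce with the Poisson formula above:</p>

```python
import math

mu = 656_250_000 / 175_711_536   # expected number of other winners, ~3.735

def p(n):
    # Poisson probability of exactly n other winners
    return mu ** n * math.exp(-mu) / math.factorial(n)

# reproduces the percentages quoted above
assert round(p(0), 3) == 0.024
assert round(p(1), 2) == 0.09
assert round(p(2), 2) == 0.17
assert round(p(3), 2) == 0.21
assert round(p(4), 2) == 0.19
```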
<p>Summing over those, my expected return after taxes is \$159M. Close. But about a 10% loss on average.</p>
<p>Oh well. Time to call those servants back and have them make me a sandwich instead.</p>
<p><em>Update for October 23, 2018 Mega Millions jackpot:</em></p>
<p>Same calculation.</p>
<p>The game has gotten harder to win, where there are now 302,575,350 possible numbers, <em>and</em> each ticket costs \$2. So now it would cost $605,150,700 to assure a winning ticket. Also the maximum federal tax rate has gone up to 39.6%.</p>
<p>The current pot (as of Saturday morning -- it will probably go up more) has a cash value of \$904,900,000. The previous cash pot was \$565,600,000. So about 530 million more tickets have been or are expected to be purchased, using my previous assumption of 32% of the cost of a ticket going into the cash pot. Then the mean number of winning tickets, besides the one assured for Mr. Money Bags, is about 1.752. Not too bad actually.</p>
<p>Summing over the possible numbers of winners, I get a net <em>win</em> of <span class="math-container">$\approx$</span>\$60M! So if you can afford to buy all of the tickets, and can figure out how to do that in next three days, go for it! Though that win is only a 10% return on investment, so you could very well do better in the stock market. Also that win is a slim margin, and is dependent on the details in the calculation, which would need to be more carefully checked. Small changes in the assumptions can make the return negative.</p>
<p>Keep in mind that if you're not buying all of the possible tickets, this is not an indication that the expected value of one ticket is more than \$2. Buying all of the possible ticket values is <span class="math-container">$e\over e-1$</span> times as efficient as buying 302,575,350 <em>random</em> tickets, where you would have many duplicated tickets, and would have less than a 2 in 3 chance of winning.</p>
|
game-theory | <p>The <a href="https://en.wikipedia.org/wiki/Monty_Hall_problem" rel="nofollow">Monty Hall problem or paradox</a> is famous and well-studied. But what confused me about the description was an unstated assumption.</p>
<blockquote>
<p>Suppose you're on a game show, and you're given the choice of three
doors: behind one door is a car; behind the others, goats. You pick
a door, say No. 1, and the host, who knows what's behind the doors,
opens another door, say No. 3, which has a goat. He then says to you,
"Do you want to pick door No. 2?" Is it to your advantage to switch
your choice?</p>
</blockquote>
<p>The assumption is that the host of the show does not have a choice whether to offer the switch. In fact, Monty Hall himself, in response to Steve Selvin's original formulation of the problem, pointed out that as the host he did not always offer the switch.</p>
<p>Because the host knows what's behind the doors, it would be possible and to his advantage to offer a switch more often to contestants who guess correctly. If he only offered the switch to contestants who guess correctly, all contestants who accept the offer would lose. However, if he did this consistently, the public would learn not to accept the offer and soon all contestants who first guess correctly would win.</p>
<p>If, for instance, he gave the offer to one third of incorrect guessers and two thirds of correct guessers, 2/9 contestants would be given the offer and should not switch and 2/9 contestants would be given the offer and should, which would bring the chances of winning back to 1/2 whether one accepts the offer or not, instead of 1/3 or 2/3.</p>
<p>Is this a Nash equilibrium for the Monty Hall problem (or the iterated Monty Hall problem) as a two-player zero-sum game? And if not, what is it, or aren't there any?</p>
| <p>The car probably doesn't come out of the host's salary, so he probably doesn't really want to minimize the payoff, he wants to maximize the show's ratings. But OK, let's
suppose he did want to minimize the payoff, making this a zero-sum game.
Then the optimal value of the game (in terms of the probability of winning the car) would be $1/3$. An optimal strategy for the contestant is to always refuse to switch,
ensuring that the expected payoff is $1/3$. An optimal strategy for the host is
never to offer a switch unless the contestant's first guess is correct, ensuring that
the expected payoff is no more than $1/3$.</p>
| <p>Thanks for your answer, Robert. If the optimal value is 1/3 as you showed, then I suppose there must be infinitely many mixed strategies that the host could employ that would be in equilibrium. If, as I mentioned in the question, the host offers the switch to 2/3 of correct guessers and 1/3 of incorrect guessers, 1/9 contestants will guess correctly and not be offered a switch, winning immediately. Also, 2/9 will be correct guessers offered a switch and 2/9 will be incorrect guessers offered a switch. Therefore 4/9 will be offered a switch and 2/9 will win, whether they accept it or not, since their expected value will be 1/2 whether they accept the switch or not. The rest of the contestants lose immediately.</p>
<pre><code>1/9 + 2/9 = 1/3
</code></pre>
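<p>The arithmetic above is easy to check by simulation. Here is a quick sketch of my own (not part of the original argument) of the 2/3–1/3 host strategy described in the question; both contestant policies win about 1/3 of the time:</p>

```python
import random

def play(switch_if_offered, trials=200_000, seed=0):
    """Simulate the mixed host strategy: offer a switch to 2/3 of
    correct first guesses and to 1/3 of incorrect ones."""
    rng = random.Random(seed)
    wins = 0
    for _ in range(trials):
        correct = rng.randrange(3) == 0  # first guess is right 1/3 of the time
        offered = rng.random() < (2/3 if correct else 1/3)
        if offered and switch_if_offered:
            # with one goat door opened, switching flips the outcome
            correct = not correct
        wins += correct
    return wins / trials

# play(True) and play(False) both come out near 1/3
```
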
<p>This means the host can offer any number between 0 and 2/3 of his contestants a switch without changing the expected value, as long as the number of correct guessers and incorrect guessers he offers a switch is the same. He can accomplish this easily with a mixed strategy of making correct guessers exactly twice as likely to receive an offer.</p>
<p>With any strategy in this family, the host cannot possibly force an outcome better (for him) than 1/3, and he achieves exactly 1/3. And the contestant will have the same expected value regardless of their strategy, so they cannot improve either. So, these are also Nash equilibria. And you would have to agree, from a practical perspective, the host should employ one of these highly mixed strategies so that the game is more exciting, without harming his bottom line. I don't have any direct evidence, but I would hazard a guess that this is roughly what Monty Hall actually did.</p>
<p>The surprising thing about this is that the naive answer to the classic Monty Hall problem, "No, there is no benefit," (50/50) can be correct under reasonable assumptions.</p>
|
probability | <p>This is a really natural question for which I know a stunning solution. So I admit I have a solution, however I would like to see if anybody will come up with something different. The question is</p>
<blockquote>
<p>What is the probability that two numbers randomly chosen are coprime?</p>
</blockquote>
<p>More formally, calculate the limit as $n\to\infty$ of the probability that two randomly chosen numbers, both less than $n$ are coprime.</p>
| <p><strong>Here is a fairly easy approach.</strong>
<strong>Let us start with a basic observation:</strong></p>
<p><span class="math-container">$\bullet$</span> Every integer has the probability "1" to be divisible by 1.</p>
<p><span class="math-container">$\bullet$</span> A given integer is either even or odd, hence has probability <span class="math-container">$"1/2"$</span> to be divisible by 2.</p>
<p><span class="math-container">$\bullet$</span> Similarly, an integer has a probability <span class="math-container">$"1/3"$</span> to be divisible by 3, because any integer is of the form <span class="math-container">$3k, 3k+1$</span> or <span class="math-container">$3k+2$</span>.</p>
<blockquote>
<p><strong>Conjecture</strong> More generally, among any <span class="math-container">$p$</span> consecutive integers exactly one is divisible by <span class="math-container">$p$</span>, so a randomly chosen integer has one chance in <span class="math-container">$p$</span> to be divisible by <span class="math-container">$p$</span>.</p>
</blockquote>
<ol>
<li>From this we infer that the probability that an integer is divisible by <span class="math-container">$p$</span> is <span class="math-container">$\frac{1}{p}$</span>: one chance out of <span class="math-container">$p$</span> to be divisible by <span class="math-container">$p$</span>.</li>
<li>Therefore the probability that two different integers are both simultaneously divisible by a prime <span class="math-container">$p$</span> is <span class="math-container">$\frac{1}{p^2}$</span></li>
<li>This means that the probability that two different integers are not simultaneously divisible by a prime <span class="math-container">$p$</span> is <span class="math-container">$$1-\frac{1}{p^2}$$</span>
<blockquote>
<ol start="4">
<li><strong>Conclusion:</strong> The probability that two different integers are never simultaneously divisible by a prime (<strong>meaning that they are co-prime</strong>)
is therefore given by<br />
<span class="math-container">$$ \color{blue}{\prod_{p, prime}\left(1-\frac{1}{p^2} \right) =
\left(\prod _{p, prime}\frac {1}{1-p^{-2}}\right)^{-1}=\frac {1}{\zeta (2)}=\frac {6}{\pi ^{2}} \approx 0.607927102 \approx 61\%}$$</span></li>
</ol>
</blockquote>
</li>
</ol>
<p>Where we should recall that, from the <a href="https://en.wikipedia.org/wiki/Basel_problem" rel="noreferrer">Basel problem</a> we have the following Euler identity</p>
<p><span class="math-container">$$\frac{\pi^2}{6}=\sum_{n=1}^{\infty} \frac{1}{n^2} = \zeta(2)=\prod _{p, prime}\frac {1}{1-p^{-2}}.$$</span></p>
<p>By a similar token, the probability that <span class="math-container">$m$</span> integers are coprime (that is, that no prime divides all of them) is given by</p>
<p><span class="math-container">$$ \color{red}{\prod_{p, prime}\left(1-\frac{1}{p^m} \right) =
\left(\prod _{p, prime}\frac {1}{1-p^{-m}}\right)^{-1}=\frac {1}{\zeta (m)}}.$$</span></p>
<p>Here <span class="math-container">$\zeta$</span> is the <a href="https://en.wikipedia.org/wiki/Riemann_zeta_function" rel="noreferrer">Riemann zeta function</a>. <span class="math-container">$$\zeta(s) = \sum_{n=1}^{\infty} \frac{1}{n^s} $$</span></p>
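<p>The truncated Euler product can be checked numerically. This is my own sketch (not part of the answer): it sieves the primes up to <span class="math-container">$10^4$</span>, multiplies the factors <span class="math-container">$(1-1/p^2)$</span>, and compares with <span class="math-container">$6/\pi^2$</span>:</p>

```python
import math

def primes_up_to(n):
    """Sieve of Eratosthenes."""
    sieve = [True] * (n + 1)
    sieve[0] = sieve[1] = False
    for p in range(2, int(n**0.5) + 1):
        if sieve[p]:
            sieve[p*p::p] = [False] * len(sieve[p*p::p])
    return [p for p, is_p in enumerate(sieve) if is_p]

# Truncated Euler product over primes; it approaches 1/zeta(2) = 6/pi^2.
product = 1.0
for p in primes_up_to(10_000):
    product *= 1 - 1 / p**2

print(product, 6 / math.pi**2)  # both about 0.6079
```

<p>The tail of the product beyond <span class="math-container">$10^4$</span> contributes only about <span class="math-container">$10^{-5}$</span>, so the truncation already agrees with <span class="math-container">$6/\pi^2$</span> to four decimal places.</p>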
| <p>Let's look at the function <span class="math-container">$$S(x)=\sum_{\begin{array}{c}
m,n\leq x\\
\gcd(m,n)=1\end{array}}1.$$</span> </p>
<p>Then notice that <span class="math-container">$$S(x)=\sum_{m,n\leq x}\sum_{d|\gcd(m,n)}\mu(d)=\sum_{d\leq x}\sum_{r,s\leq\frac{x}{d}}\mu(d)= \sum_{d\leq x}\mu(d)\left[\frac{x}{d}\right]^{2}$$</span></p>
<p>From here it is straightforward to see that in the limit, <span class="math-container">$$\frac{S(x)}{x^2}\rightarrow\sum_{n=1}^\infty \frac{\mu(n)}{n^2}=\frac{1}{\zeta(2)}=\frac{6}{\pi^2}.$$</span></p>
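<p>The finite sum <span class="math-container">$\sum_{d\leq x}\mu(d)\left[\frac{x}{d}\right]^{2}$</span> is easy to evaluate directly. Here is a short sketch of my own (for illustration only) that sieves the Möbius function and checks the convergence to <span class="math-container">$6/\pi^2$</span>:</p>

```python
from math import pi

def mobius_upto(n):
    """Sieve mu(1), ..., mu(n)."""
    mu = [1] * (n + 1)
    is_prime = [True] * (n + 1)
    for p in range(2, n + 1):
        if is_prime[p]:
            for m in range(2 * p, n + 1, p):
                is_prime[m] = False
            for m in range(p, n + 1, p):
                mu[m] *= -1          # one factor of -1 per prime divisor
            for m in range(p * p, n + 1, p * p):
                mu[m] = 0            # not squarefree
    return mu

def S(x):
    """Number of pairs (m, n) with m, n <= x and gcd(m, n) = 1."""
    mu = mobius_upto(x)
    return sum(mu[d] * (x // d) ** 2 for d in range(1, x + 1))

print(S(200) / 200**2)  # close to 6/pi^2 ≈ 0.6079
```
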
<p>However, there are still some interesting questions here. How fast does it converge, and what are the secondary terms? It turns out we can easily relate this to the summatory totient function, which has a rich history. See these two math stack exchange posts: <a href="https://math.stackexchange.com/questions/38101/how-to-derive-an-identity-between-summations-of-totient-and-mobius-functions">Totient function</a>, <a href="https://math.stackexchange.com/questions/37863/asymptotic-formula-for-munx-n2-summation">Asymptotic formula</a>. What follows below is a modification of
my answer on the second post.</p>
<p><strong>The History Of The Error Term</strong></p>
<p>In 1874, Mertens proved that <span class="math-container">$$S(x)=\frac{6}{\pi^{2}}x^{2}+O\left(x\log x\right).$$</span> Throughout we use <span class="math-container">$E(x)=S(x)-\frac{6}{\pi^2}x^2$</span> for the error function. </p>
<p>The best unconditional result is given by Walfisz 1963: <span class="math-container">$$E(x)\ll x\left(\log x\right)^{\frac{2}{3}}\left(\log\log x\right)^{\frac{4}{3}}.$$</span> </p>
<p>In 1930, Chowla and Pillai showed this cannot be improved much more, and that <span class="math-container">$E(x)$</span> is <strong>not</strong> <span class="math-container">$$o\left(x\log\log\log x\right).$$</span> </p>
<p>In particular, they showed that <span class="math-container">$\sum_{n\leq x}E(n)\sim\frac{3}{\pi^{2}}x^{2}$</span> so that <span class="math-container">$E(n)\asymp n$</span> on average. In 1950, Erdos and Shapiro proved that there exists <span class="math-container">$c$</span> such that for infinitely many positive integers <span class="math-container">$N,M$</span> we have <span class="math-container">$$E(N)>cN\log\log\log\log N\ \ \text{and}\ \ E(M)<-cM\log\log\log\log M, $$</span> </p>
<p>or more concisely </p>
<p><span class="math-container">$$E(x)=\Omega_{\pm}\left(x\log\log\log\log x\right).$$</span></p>
<p>In 1987 Montgomery improved this to </p>
<p><span class="math-container">$$E(x)=\Omega_{\pm}\left(x\sqrt{\log\log x}\right).$$</span></p>
<p>Hope you enjoyed that,</p>
<p><strong>Added:</strong> At some point, I wrote a <a href="http://enaslund.wordpress.com/2012/01/15/the-sum-of-the-totient-function-and-montgomerys-lower-bound/" rel="noreferrer">long blog post</a> about this, complete with a proof of Montgomery's lower bound.</p>
|
logic | <p>I enjoy reading about formal logic as an occasional hobby. However, one thing keeps tripping me up: I seem unable to understand what's being referred to when the word "type" (as in type theory) is mentioned.</p>
<p>Now, I understand what types are in programming, and sometimes I get the impression that types in logic are just the same thing as that: we want to set our formal systems up so that you can't add an integer to a proposition (for example), and types are the formal mechanism to specify this. Indeed, <a href="https://en.wikipedia.org/wiki/Type_theory" rel="noreferrer">the Wikipedia page for type theory</a> pretty much says this explicitly in the lead section.</p>
<p>However, it also goes on to imply that types are much more powerful than that. Overall, from everything I've read, I get the idea that types are:</p>
<ul>
<li>like types in programming</li>
<li>something that is like a set but different in certain ways</li>
<li>something that can prevent paradoxes</li>
<li>the sort of thing that could replace set theory in the foundations of mathematics</li>
<li>something that is not just analogous to the notion of a proposition but can be thought of as the same thing ("propositions as types")</li>
<li>a concept that is <em>really, really deep</em>, and closely related to higher category theory</li>
</ul>
<p>The problem is that I have trouble reconciling these things. Types in programming seem to me quite simple, practical things (although the type system for any given programming language can be quite complicated and interesting). But in type theory it seems that somehow the types <em>are</em> the language, or that they are responsible for its expressive power in a much deeper way than is the case in programming.</p>
<p>So I suppose my question is, for someone who understands types in (say) Haskell or C++, and who also understands first-order logic and axiomatic set theory and so on, how can I get from these concepts to the concept of <em>type theory</em> in formal logic? What <em>precisely</em> is a type in type theory, and what is the relationship between types in formal mathematics and types in computer science?</p>
<p>(I am not looking for a formal definition of a type so much as the core idea behind it. I can find several formal definitions, but I've found they don't really help me to understand the underlying concept, partly because they are necessarily tied to the specifics of a <em>particular</em> type theory. If I can understand the motivation better it should make it easier to follow the definitions.)</p>
| <p><strong>tl;dr</strong> Types only have meaning within type systems. There is no stand-alone definition of "type" except vague statements like "types classify terms". The notion of type in programming languages and type theory are basically the same, but different type systems correspond to different type theories. Often the term "type theory" is used specifically for a particular family of powerful type theories descended from Martin-Löf Type Theory. Agda and Idris are simultaneously proof assistants for such type theories and programming languages, so in this case there is no distinction whatsoever between the programming language and type theoretic notions of type.</p>
<p>It's not the "types" themselves that are "powerful". First, you could recast first-order logic using types. Indeed, the sorts in <a href="https://en.wikipedia.org/wiki/Many-sorted_logic" rel="noreferrer">multi-sorted first-order logic</a>, are basically the same thing as types.</p>
<p>When people talk about type theory, they often mean specifically Martin-Löf Type Theory (MLTT) or some descendant of it like the Calculus of (Inductive) Constructions. These are powerful higher-order logics that can be viewed as constructive set theories. But it is the specific system(s) that are powerful. The simply typed lambda calculus viewed from a propositions-as-types perspective is basically the proof theory of intuitionistic propositional logic which is a rather weak logical system. On the other hand, considering the equational theory of the simply typed lambda calculus (with some additional axioms) gives you something that is very close to the most direct understanding of higher-order logic as an extension of first-order logic. This view is the basis of the <a href="https://hol-theorem-prover.org/" rel="noreferrer">HOL</a> family of theorem provers.</p>
<p>Set theory is an extremely powerful logical system. ZFC set theory is a first-order theory, i.e. a theory axiomatized in first-order logic. And what does set theory accomplish? Why, it's essentially an embedding of higher-order logic into first-order logic. In first-order logic, we can't say something like $$\forall P.P(0)\land(\forall n.P(n)\Rightarrow P(n+1))\Rightarrow\forall n.P(n)$$ but, in the first-order theory of set theory, we <em>can</em> say $$\forall P.0\in P\land (\forall n.n\in P\Rightarrow n+1\in P)\Rightarrow\forall n.n\in P$$ Sets behave like "first-class" predicates.</p>
<p>While ZFC set theory and MLTT go beyond just being higher-order logic, higher-order logic on its own is already a powerful and ergonomic system as demonstrated by the HOL theorem provers for example. At any rate, as far as I can tell, having some story for doing higher-order-logic-like things is necessary to provoke any interest in something as a framework for mathematics from mathematicians. Or you can turn it around a bit and say you need some story for set-like things and "first-class" predicates do a passable job. This latter perspective is more likely to appeal to mathematicians, but to me the higher-order logic perspective better captures the common denominator.</p>
<p>At this point it should be clear there is no magical essence in "types" themselves, but instead some families of type theories (i.e. type systems from a programming perspective) are very powerful. Most "powerful" type systems for programming languages are closely related to the polymorphic lambda calculus aka System F. From the proposition-as-types perspective, these correspond to intuitionistic second-order <em>propositional</em> logics, not to be confused with second-order (predicate) logics. It allows quantification over propositions (i.e. nullary predicates) but not over terms which don't even exist in this logic. <em>Classical</em> second-order propositional logic is easily reduced to classical propositional logic (sometimes called zero-order logic). This is because $\forall P.\varphi$ is reducible to $\varphi[\top/P]\land\varphi[\bot/P]$ classically. System F is surprisingly expressive, but viewed as a logic it is quite limited and far weaker than MLTT. The type systems of Agda, Idris, and Coq are descendants of MLTT. Idris in particular and Agda to a lesser extent are dependently typed programming languages.<sup>1</sup> Generally, the notion of type in a (static) type system and in type theory are essentially the same, but the significance of a type depends on the type system/type theory it is defined within. There is no real definition of "type" on its own. If you decide to look at e.g. Agda, you should be quickly disabused of the idea that "types are the language". All of these type theories have terms and the terms are not "made out of types". They typically look just like functional programs.</p>
<p><sup>1</sup> I don't want to give the impression that "dependently typed" = "super powerful" or "MLTT derived". The LF family of languages e.g. Elf and Twelf are intentionally weak dependently typed specification languages that are far weaker than MLTT. From a propositions-as-types perspective, they correspond more to first-order logic.</p>
| <blockquote>
<p>I've found they don't really help me to understand the underlying concept, partly because they are necessarily tied to the specifics of a particular type theory. If I can understand the motivation better it should make it easier to follow the definitions.</p>
</blockquote>
<p>The basic idea: In ZFC set theory, there is just one kind of object - sets. In type theories, there are multiple kinds of objects. Each object has a particular kind, known as its "type". </p>
<p>Type theories typically include ways to form new types from old types. For example, if we have types $A$ and $B$ we also have a new type $A \times B$ whose members are pairs $(a,b)$ where $a$ is of type $A$ and $b$ is of type $B$. </p>
<p>For the simplest type theories, such as higher-order logic, that is essentially the only change from ordinary first-order logic. In this setting, all of the information about "types" is handled in the metatheory. But these systems are barely "type theories", because the theory itself doesn't really know anything about the types. We are really just looking at first-order logic with multiple sorts. By analogy to computer science, these systems are vaguely like statically typed languages - it is not possible to even write a well-formed formula/program which is not type safe, but the program itself has no knowledge about types while it is running. </p>
<p>More typical type theories include ways to reason <em>about</em> types <em>within</em> the theory. In many type theories, such as ML type theory, the types themselves are objects of the theory. So it is possible to prove "$A$ is a type" as a <em>sentence</em> of the theory. It is also possible to prove sentences such as "$t$ has type $A$". These cannot even be expressed in systems like higher-order logic. </p>
<p>In this way, these theories are not just "first order logic with multiple sorts", they are genuinely "a theory about types". Again by analogy, these systems are vaguely analogous to programming languages in which a program can make inferences about types of objects <em>during runtime</em> (analogy: we can reason about types within the theory). </p>
<p>Another key aspect of type theories is that they often have their own <em>internal</em> logic. The <a href="https://en.wikipedia.org/wiki/Curry%E2%80%93Howard_correspondence" rel="noreferrer">Curry-Howard correspondence</a> shows that, in particular settings, <em>formulas</em> of propositional or first-order logic correspond to <em>types</em> in particular type theories. Manipulating the types in a model of type theory corresponds, via the isomorphism, to manipulating formulas of first order logic. The isomorphism holds for many logic/type theory pairs, but it is strongest when the logic is intuitionistic, which is one reason intuitionistic/constructive logic comes up in the context of type theory. </p>
<p>In particular, logical operations on formulas become type-forming operations. The "and" operator of logic becomes the product type operation, for example, while the "or" operator becomes a kind of "union" type. In this way, each model of type theory has its own "internal logic" - which is often a model of intuitionistic logic. </p>
<p>The existence of this internal logic is one of the motivations for type theory. When we look at first-order logic, we treat the "logic" as sitting in the metatheory, and we look at the truth value of each formula within a model using classical connectives. In type theory, that is often a much less important goal. Instead we look at the collection of types in a model, and we are more interested in the way that the type forming operations work in the model than in the way the classical connectives work. </p>
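<p>The Curry-Howard correspondence mentioned above can be illustrated even in an ordinary typed programming language. In this hypothetical Python sketch (my own, not from the answer), "and" becomes a pair type, implication becomes a function type, and a proof of a proposition is simply a program of the corresponding type:</p>

```python
from typing import Callable, Tuple, TypeVar

A = TypeVar("A")
B = TypeVar("B")

# A proof of "A and B implies B and A" is a program of that type:
def and_commutes(p: Tuple[A, B]) -> Tuple[B, A]:
    a, b = p
    return (b, a)

# A proof of "A implies (B implies A)":
def const(a: A) -> Callable[[B], A]:
    return lambda _b: a
```

<p>The types here play the role of propositions; the mere existence of a well-typed program is what certifies the proposition, which is one concrete sense in which manipulating types corresponds to manipulating formulas.</p>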
|
differentiation | <p>I have never understood what the trigonometric function sine is.</p>
<p>We had a table of values of sine for different angles; we learned it by heart and applied it to some problems, and that was the end of the matter. Until then, the sine function was just something related to triangles and angles.</p>
<p>Then comes the graph. We have been told that the figure below is the graph of the function sine. This function takes angles and gives numbers between $-1$ and $1$ and we have been told that it is a continuous function as it is clear from the graph.</p>
<p><a href="https://i.sstatic.net/qpda9.gif" rel="noreferrer"><img src="https://i.sstatic.net/qpda9.gif" alt="enter image description here"></a></p>
<p>Then comes taylor expansion of sine and we have $$\sin (x)=x-\frac{x^3}{3!}+\cdots$$</p>
<p>I know that for any infinitely differentiable function, we have taylor expansion. But how do we define differentiability of the function sine?</p>
<p>We define differentiability of a function from real numbers to real numbers..</p>
<p>But sine is a function that takes angles and gives real numbers..</p>
<p>Then how do we define differentiability of such a function? Are we saying the real number $1$ is the angle of $1$ degree? </p>
<p>I am confused.. Help me..</p>
<p>Above content is a copy paste of a mail I received from my friend, a 1st year undergraduate. I could answer some things vaguely, but I am not happy with my own answers. So, I am posting it here. Help us (me and my friend) to understand the sine function in a better way.</p>
| <p>The sine function doesn't actually operate on angles, it's a function from the real numbers to the interval [-1, 1] (or from the complex numbers to the complex numbers).</p>
<p>However, it just so happens that it's a very useful function when the input you give it relates to angles. In particular, if you express an angle as a number in radians (in other words, on a scale where an angle of $2\pi$ corresponds to a full circle), it gives you a value that relates to the ratio of two sides of a right-angled triangle that has that angle in one corner.</p>
<p>If that explanation doesn't satisfy you, then you can look at it another way - if you take it that the sine function <em>does</em> take an angle as input and outputs a number, then the differentiability of it relates to how its output changes as you change the angle slightly. If you go far enough in calculus, you'll learn about functions whose inputs and outputs are bizarre multi-dimensional concepts, and as long as the space of bizarre multi-dimensional concepts has the right properties, you can calculate derivatives in a meaningful sense, and if you can get your head around that then differentiating a function of an angle is small fry.</p>
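<p>To make the "real numbers in, real numbers out" point concrete, here is a small sketch of my own evaluating the Maclaurin partial sums from the question against the library sine; the input is just a real number, interpreted as radians:</p>

```python
import math

def sin_taylor(x, terms=10):
    """Partial sum of sin(x) = x - x^3/3! + x^5/5! - ..."""
    return sum((-1)**k * x**(2*k + 1) / math.factorial(2*k + 1)
               for k in range(terms))

# The partial sums converge quickly for moderate real inputs:
for x in (0.5, 1.0, 2.0):
    assert abs(sin_taylor(x) - math.sin(x)) < 1e-9
```
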
| <p>Imagine the unit circle in the usual Cartesian plane: the set of pairs $(x, y)$ where $x$ and $y$ are real numbers. The unit circle is the set of all such pairs a distance of exactly $1$ from the origin.</p>
<p>Imagine a point moving around the circle. As it travels around the circle, it makes an angle of $t$ <em>radians</em> (not degrees!) with the positive $x$-axis. From now on we call the $x$ coordinate the cosine of $t$; and the $y$ coordinate the sine of $t$.</p>
<p>It's as simple as that. If you only remember this one fact you can figure everything else out: the definitions of the trig functions in terms of triangles, the shape of the graphs of the functions, and everything else.</p>
<p>To repeat: $\cos(t)$ and $\sin(t)$ are the $x$ and $y$ coordinates, respectively, of a point on the unit circle that makes an angle of $t$ radians with the origin and the positive $x$-axis.</p>
<p>There's a picture here ... <a href="https://en.wikipedia.org/wiki/Unit_circle">https://en.wikipedia.org/wiki/Unit_circle</a></p>
|
matrices | <p>More precisely, does the set of non-diagonalizable (over $\mathbb C$) matrices have Lebesgue measure zero in $\mathbb R^{n\times n}$ or $\mathbb C^{n\times n}$? </p>
<p>Intuitively, I would think yes, since in order for a matrix to be non-diagonalizable its characteristic polynomial would have to have a multiple root. But most monic polynomials of degree $n$ have distinct roots. Can this argument be formalized? </p>
| <p>Yes. Here is a proof over $\mathbb{C} $.</p>
<ul>
<li>Matrices with repeated eigenvalues are cut out as the zero locus of the discriminant of the characteristic polynomial, thus are algebraic sets. </li>
<li>Some matrices have unique eigenvalues, so this algebraic set is proper.</li>
<li>Proper closed algebraic sets have measure $0.$ (intuitively, a proper closed algebraic set is a locally finite union of embedded submanifolds of lower dimension)</li>
<li>(over $\mathbb{C} $) The set of matrices that aren't diagonalizable is contained in this set, so it also has measure $0$. (not over $\mathbb{R}$, see this comment <a href="https://math.stackexchange.com/a/207785/565">https://math.stackexchange.com/a/207785/565</a>)</li>
</ul>
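<p>For the <span class="math-container">$2\times 2$</span> real case the discriminant argument is easy to see experimentally. This sketch of my own samples random matrices and checks that the discriminant of the characteristic polynomial, whose zero locus contains every matrix with a repeated eigenvalue, is never hit by continuous sampling:</p>

```python
import random

rng = random.Random(0)
repeated = 0
for _ in range(100_000):
    a, b, c, d = (rng.uniform(-1, 1) for _ in range(4))
    # char. poly of [[a, b], [c, d]] is t^2 - (a+d) t + (ad - bc);
    # a repeated eigenvalue means its discriminant vanishes:
    disc = (a + d)**2 - 4 * (a*d - b*c)
    if disc == 0.0:
        repeated += 1

print(repeated)  # 0: the zero locus has measure zero, so random samples miss it
```
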
| <p>Let $A$ be a real matrix with a non-real eigenvalue. It's rather easy to see that if you perturb $A$ a little bit, $A$ will still have a non-real eigenvalue. For instance if $A$ is a rotation matrix (as in Georges answer), applying a perturbed version of $A$ will still come close to rotating the vectors by a fixed angle, so this perturbed version can't have any real eigenvalues.</p>
|
combinatorics | <blockquote>
<p>All numbers <span class="math-container">$1$</span> to <span class="math-container">$155$</span> are written on a blackboard, one time each. We randomly choose two numbers and delete them, by replacing one of them with their product plus their sum. We repeat the process until there is only one number left. What is the average value of this number?</p>
</blockquote>
<p>I don't know how to approach it: For two numbers, <span class="math-container">$1$</span> and <span class="math-container">$2$</span>, the only number is <span class="math-container">$1\cdot 2+1+2=5$</span>
For three numbers, <span class="math-container">$1, 2$</span> and <span class="math-container">$3$</span>, we can opt to replace <span class="math-container">$1$</span> and <span class="math-container">$2$</span> with <span class="math-container">$5$</span> and then <span class="math-container">$3$</span> and <span class="math-container">$5$</span> with <span class="math-container">$23$</span>, or
<span class="math-container">$1$</span> and <span class="math-container">$3$</span> with <span class="math-container">$7$</span> and then <span class="math-container">$2$</span>, <span class="math-container">$7$</span> with <span class="math-container">$23$</span> or
<span class="math-container">$2$</span>, <span class="math-container">$3$</span> with <span class="math-container">$11$</span> and <span class="math-container">$1$</span>, <span class="math-container">$11$</span> with <span class="math-container">$23$</span>
so we see that no matter which two numbers we choose, the average number remains the same. Does this lead us anywhere?</p>
| <p>Claim: if <span class="math-container">$a_1,...,a_n$</span> are the <span class="math-container">$n$</span> numbers on the board then after n steps we shall be left with <span class="math-container">$(1+a_1)...(1+a_n)-1$</span>.</p>
<p>Proof: <em>induct on <span class="math-container">$n$</span></em>. Case <span class="math-container">$n=1$</span> is true, so assume the proposition holds for a fixed <span class="math-container">$n$</span> and any <span class="math-container">$a_1$</span>,...<span class="math-container">$a_n$</span>. Consider now <span class="math-container">$n+1$</span> numbers <span class="math-container">$a_1$</span>,...,<span class="math-container">$a_{n+1}$</span>. Suppose that at the first step we choose <span class="math-container">$a_1$</span> and <span class="math-container">$a_2$</span>. We will be left with <span class="math-container">$ n$</span> numbers <span class="math-container">$b_1=a_1+a_2+a_1a_2$</span>, <span class="math-container">$b_2=a_3$</span>,...,<span class="math-container">$b_n=a_{n+1}$</span>, so by the induction hypothesis at the end we will be left with <span class="math-container">$(b_1+1)...(b_n+1)-1=(a_1+1)...(a_{n+1}+1)-1$</span> as needed, because <span class="math-container">$b_1+1=a_1+a_2+a_1a_2+1=(a_1+1)(a_2+1)$</span></p>
<p>Where did I get the idea of the proof from? I guess from the n=2 case: for <span class="math-container">$a_1,a_2$</span> you are left with <span class="math-container">$a_1+a_2+a_1a_2=(1+a_1)(1+a_2)-1$</span> and I also noted this formula generalises for <span class="math-container">$n=3$</span></p>
<p>So in your case we will be left with <span class="math-container">$156!-1=1\times 2\times...\times 156 -1$</span></p>
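<p>The invariance under the merge order is easy to test by simulation. This is my own sketch: it performs the merges in a random order and compares the result with the closed form <span class="math-container">$(1+a_1)\cdots(1+a_n)-1$</span>:</p>

```python
import math
import random

def final_number(nums, seed=1):
    """Repeatedly replace a random pair x, y by x*y + x + y."""
    rng = random.Random(seed)
    nums = list(nums)
    while len(nums) > 1:
        i, j = rng.sample(range(len(nums)), 2)  # two distinct positions
        x, y = nums[i], nums[j]
        nums = [v for k, v in enumerate(nums) if k not in (i, j)]
        nums.append(x * y + x + y)
    return nums[0]

# Any merge order for 1..155 gives (1+1)(1+2)...(1+155) - 1 = 156! - 1:
assert final_number(range(1, 156)) == math.factorial(156) - 1
```
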
| <p>Another way to think of Sorin's observation, without appealing to induction explicitly:</p>
<p>Suppose your original numbers (both the original 155 numbers and later results) are written in <em>white</em> chalk. Now above each <em>white</em> number write that number plus one, in <em>red</em> chalk. Write new red companions to each new white number, and erase the red numbers when their white partners go away.</p>
<p>When we erase <span class="math-container">$x$</span> and <span class="math-container">$y$</span> and write <span class="math-container">$x+y+xy$</span>, the new red number is <span class="math-container">$x+y+xy+1=(x+1)(y+1)$</span>, exactly the product of the two red companions we're erasing.</p>
<p>So we can reformulate the entire game as:</p>
<blockquote>
<p>Write in red the numbers from <span class="math-container">$2$</span> to <span class="math-container">$156$</span>. Keep erasing two numbers and writing their product instead. At the end when you have <em>one</em> red number left, subtract one and write the result in white.</p>
</blockquote>
<p>Since the order of factors is immaterial, the result must be <span class="math-container">$2\cdot 3\cdots 156-1$</span>.</p>
|
differentiation | <p>Which derivatives are eventually periodic?</p>
<p>I have noticed that if $a_{n}=f^{(n)}(x)$, the sequence $a_{n}$ becomes eventually periodic for a multitude of $f(x)$. </p>
<p>If $f(x)$ is a polynomial and $\operatorname{deg}(f(x))=n$, note that $f^{(n)}(x)=C$ for some constant $C$. This implies that $f^{(n+i)}(x)=0$ for every natural number $i$. </p>
<p>If $f(x)=e^x$, note that $f(x)=f'(x)$. This implies that $f^{(n)}(x)=e^x$ for every natural number $n$. </p>
<p>If $f(x)=\sin(x)$, note that $f'(x)=\cos(x), f''(x)=-\sin(x), f'''(x)=-\cos(x), f''''(x)=\sin(x)$.</p>
<p>This implies that $f^{(4n)}(x)=f(x)$ for every natural number $n$. </p>
<p>In a similar way, if $f(x)=\cos(x)$, $f^{(4n)}(x)=f(x)$ for every natural number $n$.</p>
<p>These appear to be the only functions whose derivatives become eventually periodic. </p>
<p>What are other functions whose derivatives become eventually periodic? What is known about them? Any help would be appreciated. </p>
| <p>The sequence of derivatives being globally periodic (not eventually periodic) with period $m$ is equivalent to the differential equation </p>
<p>$$f(x)=f^{(m)}(x).$$</p>
<p>All solutions to this equation are of the form $\sum_{k=1}^m c_k e^{\lambda_k x}$ where $\lambda_k$ are solutions to the equation $\lambda^m-1=0$. Thus $\lambda_k=e^{2 k \pi i/m}$. Details can be found in any elementary differential equations textbook.</p>
<p>If you merely want eventually periodic, then you can choose an index $n \geq 1$ at which the sequence starts to be periodic and solve</p>
<p>$$f^{(n)}(x)=f^{(m+n)}(x).$$</p>
<p>The characteristic polynomial in this case is $\lambda^{m+n}-\lambda^n$. This has a $n$-fold root of $0$, and otherwise has the same roots as before. This winds up implying that it is a sum of a polynomial of degree at most $n-1$, plus a solution to the previous equation. Again, the details can be found in any elementary differential equations textbook.</p>
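<p>The characteristic-root condition can be sanity-checked numerically: for $f(x)=e^{\lambda x}$ we have $f^{(m)}(x)=\lambda^m e^{\lambda x}$, so $f=f^{(m)}$ exactly when $\lambda^m=1$. Here is a small sketch of my own verifying that the roots $\lambda_k=e^{2k\pi i/m}$ quoted above satisfy this:</p>

```python
import cmath

# lambda_k = exp(2*pi*i*k/m) are the m-th roots of unity, i.e. the
# characteristic roots of f = f^(m); check lambda_k**m == 1 numerically:
for m in range(1, 6):
    for k in range(m):
        lam = cmath.exp(2j * cmath.pi * k / m)
        assert abs(lam**m - 1) < 1e-12
```
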
| <p>Let's also look at it upside down. You can define analytic functions (which are in particular infinitely differentiable) with their Taylor series $\sum \frac{a_n}{n!}x^n$. Taylor series are simply all finite and infinite polynomials with coefficient sequences $(a_n)$ that satisfy the series convergence criteria ($a_n$ are the derivatives at the chosen origin point, in my example, $x=0$). Compare this to real numbers (an infinite sequence of non-repeating "digits" - cardinality of continuum). On the other hand, a set of repeating sequences is comparable to rational numbers (which have eventually repeating digits). So... the "fraction" of all functions which have repeating derivatives is immeasurably small - it's only a very special class of functions that satisfies this criterion (see other answers for appropriate expressions).</p>
<p>EDIT: I mentioned this to illustrate the comparison of how special and rare functions with periodic derivatives are. The actual cardinality of the sets depends on the field over which you define the coefficients. If $a_n\in \mathbb{R}$, then recall that set of continuous function has $2^{\aleph_0}$, the cardinality of a continuum, so cardinalities are the same in this case. If coefficients are rational, then we have $\aleph_0^{\aleph_0}=2^{\aleph_0}$ for infinite sequences and $\aleph_0\times\aleph_0^n=\aleph_0$ for periodic ones.</p>
<p>Not only that, but you can generate all the functions with this property. Just plug any periodic sequence $(a_n)$ into the expression. It's guaranteed to converge for $x\in \mathbb{R}$, because a periodic sequence is bounded, and $n!$ dominates all powers.</p>
<p>A simple substitution can demonstrate that if the coefficients are periodic for a series around one origin point, they are periodic for all of them.</p>
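<p>For instance (a sketch of my own, not from the original answer): plugging in the period-4 sequence $0,1,0,-1,\dots$ — the derivatives of $\sin$ at $0$ — into $\sum a_n x^n/n!$ reproduces $\sin x$, and the truncated sum converges quickly precisely because $n!$ dominates any bounded coefficient sequence.</p>

```python
import math

# The periodic coefficient sequence a_n = 0, 1, 0, -1, ... (period 4)
# plugged into sum a_n x^n / n! should give sin(x).
coeffs = [0, 1, 0, -1]

def series(x, terms=50):
    return sum(coeffs[n % 4] * x**n / math.factorial(n) for n in range(terms))

assert abs(series(2.0) - math.sin(2.0)) < 1e-12
```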
|
differentiation | <p>As referred <a href="https://en.wikipedia.org/wiki/L%27H%C3%B4pital%27s_rule" rel="noreferrer">in Wikipedia</a> (see the specified criteria there), L'Hôpital's rule says,</p>
<p><span class="math-container">$$
\lim_{x\to c}\frac{f(x)}{g(x)}=\lim_{x\to c}\frac{f'(x)}{g'(x)}
$$</span></p>
<p>As</p>
<p><span class="math-container">$$
\lim_{x\to c}\frac{f'(x)}{g'(x)}=
\lim_{x\to c}\frac{\int f'(x)\ dx}{\int g'(x)\ dx}
$$</span></p>
<p>Just out of curiosity, can you integrate instead of taking a derivative?
Does</p>
<p><span class="math-container">$$
\lim_{x\to c}\frac{f(x)}{g(x)}=
\lim_{x\to c}\frac{\int f(x)\ dx}{\int g(x)\ dx}
$$</span></p>
<p>work? (given conditions like those specified in Wikipedia, only the other way around: the functions must be integrable by some method, etc.) When? Would it have any practical use? I hope this doesn't sound stupid; it just occurred to me, and I can't find the answer myself.</p>
<br/>
<p><strong>Edit</strong></p>
<p>(In response to the comments and answers.)</p>
<p>Take 2 functions <span class="math-container">$f$</span> and <span class="math-container">$g$</span>. When is</p>
<p><span class="math-container">$$
\lim_{x\to c}\frac{f(x)}{g(x)}=
\lim_{x\to c}\frac{\int_x^c f(a)\ da}{\int_x^c g(a)\ da}
$$</span></p>
<p>true?</p>
<p>I'm not saying that it always works; however, it may sometimes help. Sometimes one can apply l'Hôpital's rule even when an indeterminate form isn't reached. Maybe this only works in exceptional cases.</p>
<p>Most functions are simplified by taking their derivative, but it may happen by integration as well (say <span class="math-container">$\int \frac1{x^2}\ dx=-\frac1x+C$</span>, that is simpler). In a few of those cases, integrating functions of both nominator and denominator may simplify.</p>
<p>What properties do those (hypothetical) functions need for it to work? And even in those cases, is it ever useful? How? Why/why not?</p>
| <p>With L'Hôpital's rule your limit must be of the form <span class="math-container">$\dfrac 00$</span>, so your antiderivatives must take the value <span class="math-container">$0$</span> at <span class="math-container">$c$</span>. In this case you have <span class="math-container">$$\lim_{x \to c} \frac{ \int_c^x f(t) \, dt}{\int_c^x g(t) \, dt} = \lim_{x \to c} \frac{f(x)}{g(x)}$$</span> provided <span class="math-container">$g$</span> satisfies the usual hypothesis that <span class="math-container">$g(x) \not= 0$</span> in a deleted neighborhood of <span class="math-container">$c$</span>.</p>
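<p>A numerical sketch of this (the example functions here are my own, not part of the original claim): take $f(t)=\sin t$, $g(t)=t$, $c=0$, so $\int_0^x \sin t\, dt = 1-\cos x$ and $\int_0^x t\, dt = x^2/2$; the ratio of the integrals approaches the same limit as $f/g$, namely $1$.</p>

```python
import math

# Both ratios should tend to lim sin(x)/x = 1 as x -> 0,
# with error of order x^2 in each case.
for x in (1e-1, 1e-2, 1e-3):
    ratio_of_integrals = (1 - math.cos(x)) / (x**2 / 2)
    ratio_of_functions = math.sin(x) / x
    assert abs(ratio_of_integrals - 1) < x**2
    assert abs(ratio_of_functions - 1) < x**2
```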
| <p>I recently came across a situation where it was useful to go through exactly this process, so (although I'm certainly late to the party) here's an application of L'Hôpital's rule in reverse:</p>
<p>We have a list of distinct real numbers $\{x_0,\dots, x_n\}$.
We define the $(n+1)$th <em>nodal polynomial</em> as
$$
\omega_{n+1}(x) = (x-x_0)(x-x_1)\cdots(x-x_n)
$$
Similarly, the $n$th nodal polynomial is
$$
\omega_n(x) = (x-x_0)\cdots (x-x_{n-1})
$$
Now, suppose we wanted to calculate $\omega_{n+1}'(x_i)/\omega_{n}'(x_i)$ when $0 \leq i \leq n-1$. Now, we could calculate $\omega_{n}'(x_i)$ and $\omega_{n+1}'(x_i)$ explicitly and go through some tedious algebra, or we could note that because these derivatives are non-zero, we have
$$
\frac{\omega_{n+1}'(x_i)}{\omega_{n}'(x_i)} =
\lim_{x\to x_i} \frac{\omega_{n+1}'(x)}{\omega_{n}'(x)} =
\lim_{x\to x_i} \frac{\omega_{n+1}(x)}{\omega_{n}(x)} =
\lim_{x\to x_i} (x-x_{n}) = x_i-x_{n}
$$
It is important that both $\omega_{n+1}$ and $\omega_n$ are zero at $x_i$, so that in applying L'Hôpital's rule, we intentionally produce an indeterminate form. It should be clear though that doing so allowed us to cancel factors and thus (perhaps surprisingly) saved us some work in the end. </p>
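<p>A quick symbolic check of this cancellation (with hypothetical nodes $\{0,1,2,3\}$, i.e. $n=3$, chosen just for illustration):</p>

```python
import sympy as sp

# Verify omega_{n+1}'(x_i) / omega_n'(x_i) = x_i - x_n for i = 0, ..., n-1.
x = sp.symbols('x')
nodes = [0, 1, 2, 3]                            # hypothetical distinct nodes
w_next = sp.Mul(*[x - t for t in nodes])        # omega_{n+1}
w_curr = sp.Mul(*[x - t for t in nodes[:-1]])   # omega_n

for xi in nodes[:-1]:
    ratio = sp.diff(w_next, x).subs(x, xi) / sp.diff(w_curr, x).subs(x, xi)
    assert ratio == xi - nodes[-1]
```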
<p>So would this method have practical use? It certainly did for me!</p>
<hr>
<p><strong>PS:</strong> If anyone is wondering, this was a handy step in proving a recursive formula involving Newton's divided differences.</p>
|
linear-algebra | <p>Let $A$ be an $n \times n$ matrix. Then the solution of the initial value problem
\begin{align*}
\dot{x}(t) = A x(t), \quad x(0) = x_0
\end{align*}
is given by $x(t) = \mathrm{e}^{At} x_0$.</p>
<p>I am interested in the following matrix
\begin{align*}
\int_{0}^T \mathrm{e}^{At}\, dt
\end{align*}
for some $T>0$. Can one write down a general solution to this without distinguishing cases (e.g. $A$ nonsingular)?</p>
<p>Is this matrix always invertible?</p>
| <p><strong>Case I.</strong> If <span class="math-container">$A$</span> is nonsingular, then
<span class="math-container">$$
\int_0^T\mathrm{e}^{tA}\,dt=\big(\mathrm{e}^{TA}-I\big)A^{-1},
$$</span>
where <span class="math-container">$I$</span> is the identity matrix.</p>
<p><strong>Case II.</strong> If <span class="math-container">$A$</span> is singular, then using the Jordan form we can write <span class="math-container">$A$</span> as
<span class="math-container">$$
A=U^{-1}\left(\begin{matrix}B&0\\0&C\end{matrix}\right)U,
$$</span>
where <span class="math-container">$C$</span> is nonsingular, and <span class="math-container">$B$</span> is strictly upper triangular. Then
<span class="math-container">$$
\mathrm{e}^{tA}=U^{-1}\left(\begin{matrix}\mathrm{e}^{tB}&0\\0&\mathrm{e}^{tC}
\end{matrix}\right)U,
$$</span>
and
<span class="math-container">$$
\int_0^T\mathrm{e}^{tA}\,dt=U^{-1}\left(\begin{matrix}\int_0^T\mathrm{e}^{tB}dt&0\\0&C^{-1}\big(\mathrm{e}^{TC}-I\big)
\end{matrix}\right)U
$$</span>
But <span class="math-container">$\int_0^T\mathrm{e}^{tB}dt$</span> may have different expressions. For example if
<span class="math-container">$$
B_1=\left(\begin{matrix}0&0\\0&0\end{matrix}\right), \quad
B_2=\left(\begin{matrix}0&1\\0&0\end{matrix}\right),
$$</span>
then
<span class="math-container">$$
\int_0^T\mathrm{e}^{tB_1}dt=\left(\begin{matrix}T&0\\0&T\end{matrix}\right), \quad
\int_0^T\mathrm{e}^{tB_2}dt=\left(\begin{matrix}T&T^2/2\\0&T\end{matrix}\right).
$$</span></p>
| <p>The general formula is the power series</p>
<p>$$ \int_0^T e^{At} dt = T \left( I + \frac{AT}{2!} + \frac{(AT)^2}{3!} + \dots + \frac{(AT)^{n-1}}{n!} + \dots \right) $$</p>
<p>Note that also</p>
<p>$$ \left(\int_0^T e^{At} dt \right) A + I = e^{AT} $$</p>
<p>is always satisfied.</p>
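<p>A numerical sketch tying these together (the matrix $A$ below is a hypothetical example, chosen nonsingular): the truncated power series, the closed form $\big(\mathrm{e}^{TA}-I\big)A^{-1}$ from the other answer, and the identity above should all agree.</p>

```python
import numpy as np
from scipy.linalg import expm

A = np.array([[0.0, 1.0], [-2.0, -3.0]])  # hypothetical nonsingular example
T = 1.5
I = np.eye(2)

# Nonsingular-case closed form: (e^{TA} - I) A^{-1}
closed_form = (expm(T * A) - I) @ np.linalg.inv(A)

# Truncated power series: T (I + AT/2! + (AT)^2/3! + ...)
series = np.zeros_like(A)
term = T * I
for n in range(1, 60):
    series += term
    term = term @ (A * T) / (n + 1)

assert np.allclose(series, closed_form)
assert np.allclose(closed_form @ A + I, expm(T * A))  # the identity above
```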
<p>A sufficient condition for this matrix to be non-singular is the so-called Kalman-Ho-Narendra Theorem, which states that the matrix $\int_0^T e^{At} dt$ is invertible if</p>
<p>$$ T(\mu - \lambda) \neq 2k \pi i $$</p>
<p>for any nonzero integer $k$, where $\lambda$ and $\mu$ are any pair of eigenvalues of $A$.</p>
<p>Note to the interested: This matrix also comes from the discretization of a continuous linear time invariant system. It can also be said that controllability is preserved under discretization if and only if this matrix has an inverse.</p>
|
logic | <p>There are many classic textbooks in <strong>set</strong> and <strong>category theory</strong> (as possible foundations of mathematics), among many others Jech's, Kunen's, and Awodey's.</p>
<blockquote>
<p>Are there comparable classic textbooks in <strong>type theory</strong>, introducing and motivating their matter in a generally agreed upon manner from the ground up and covering the whole field, essentially?</p>
</blockquote>
<p>If not so: why?</p>
| <p>Although not as comprehensive a textbook as, say, Jech's classic book on set theory, Jean-Yves Girard's <a href="http://www.paultaylor.eu/stable/Proofs+Types"><em>Proofs and Types</em></a> is an excellent starting point for reading about type theory. It's freely available from translator Paul Taylor's website as a PDF. Girard does assume some knowledge of the lambda calculus; if you need to learn this too, I recommend Hindley and Seldin's <a href="http://www.cambridge.org/gb/knowledge/isbn/item1175709/?site_locale=en_GB"><em>Lambda-Calculus and Combinators: An Introduction</em></a>.</p>
<p>As others have mentioned, Martin-Löf's <em>Intuitionistic Type Theory</em> would then be a good next step.</p>
<p>A different approach would be to read Benjamin Pierce's wonderful textbook, <a href="http://www.cis.upenn.edu/~bcpierce/tapl/"><em>Types and Programming Languages</em></a>. This is oriented towards the practical aspects of understanding types in the context of writing programming languages, rather than purely its mathematical characteristics or foundational promise, but nonetheless it's a very clear and well-written book, with numerous exercises.</p>
<p>The bibliography provided by the <a href="http://plato.stanford.edu/entries/type-theory/">Stanford Encyclopedia of Philosophy entry on type theory</a> is quite extensive, and might provide alternative avenues for your research.</p>
| <p>There are two main settings in which I see type theory as a foundational system.</p>
<p>The first is intuitionistic type theory, particularly the system developed by Martin-Löf. The book <em>Intuitionistic Type Theory</em> (1980) seems to be floating around the internet. </p>
<p>The other setting is second-order (and higher-order) arithmetic. Two main books on this are <em>Foundations without foundationalism</em> by Stewart Shapiro (1991) and <em>Subsystems of second order arithmetic</em> by Stephen Simpson (1999). A decent amount of constructive mathematics, for example the material in <em>Constructive Analysis</em> by Bishop and Bridges (1985), can also be formalized directly in constructive higher-order arithmetic, however the taste of many constructivists is to avoid doing this. </p>
|
logic | <p>What are some good online/free resources (tutorials, guides, exercises, and the like) for learning Lambda Calculus?</p>
<p>Specifically, I am interested in the following areas:</p>
<ul>
<li>Untyped lambda calculus</li>
<li>Simply-typed lambda calculus</li>
<li>Other typed lambda calculi</li>
<li>Church's Theory of Types (I'm not sure where this fits in).</li>
</ul>
<p>(As I understand, this should provide a solid basis for the understanding of type theory.)</p>
<p>Any advice and suggestions would be appreciated.</p>
| <p><img src="https://i.sstatic.net/8E8Sp.png" alt="alligators"></p>
<p><a href="http://worrydream.com/AlligatorEggs/" rel="noreferrer"><strong>Alligator Eggs</strong></a> is a cool way to learn lambda calculus.</p>
<p>Also learning functional programming languages like Scheme, Haskell etc. will be added fun.</p>
| <p>Recommendations:</p>
<ol>
<li>Barendregt & Barendsen, 1998, <a href="https://www.academia.edu/18746611/Introduction_to_lambda_calculus" rel="noreferrer">Introduction to lambda-calculus</a>;</li>
<li>Girard, Lafont & Taylor, 1987, <a href="http://www.paultaylor.eu/stable/Proofs+Types.html" rel="noreferrer">Proofs and Types</a>;</li>
<li>Sørenson & Urzyczyn, 1999, <a href="https://disi.unitn.it/%7Ebernardi/RSISE11/Papers/curry-howard.pdf" rel="noreferrer">Lectures on the Curry-Howard Isomorphism</a>.</li>
</ol>
<p>All of these are mentioned in <a href="http://lambda-the-ultimate.org/node/492" rel="noreferrer">the LtU Getting Started thread</a>.</p>
|
differentiation | <p>I've got this task I'm not able to solve. So i need to find the 100-th derivative of $$f(x)=e^{x}\cos(x)$$ where $x=\pi$.</p>
<p>I've tried using Leibniz's formula but it got me nowhere, induction doesn't seem to help either, so if you could just give me a hint, I'd be very grateful.</p>
<p>Many thanks!</p>
| <p>HINT:</p>
<p>$e^x\cos x$ is the real part of $y=e^{(1+i)x}$</p>
<p>As $1+i=\sqrt2e^{i\pi/4}$</p>
<p>$y_n=(1+i)^ne^{(1+i)x}=2^{n/2}e^x\cdot e^{i(n\pi/4+x)}$</p>
<p>Can you take it from here?</p>
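<p>For anyone who wants to verify where the hint leads (a sympy sketch of my own, not part of the original hint): with $n=100$, the real part of $2^{50}e^x e^{i(25\pi+x)}$ at $x=\pi$ gives $f^{(100)}(\pi)=2^{50}e^{\pi}$.</p>

```python
import sympy as sp

# Brute-force the 100th derivative and compare with the closed form 2^50 e^pi.
x = sp.symbols('x')
f = sp.exp(x) * sp.cos(x)
d100 = sp.diff(f, x, 100)
assert sp.simplify(d100.subs(x, sp.pi) - 2**50 * sp.exp(sp.pi)) == 0
```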
| <p>Compute the first few derivatives and look for a pattern:</p>
<p>\begin{align}
f'(x)&=&e^x (\cos x -\sin x)&\longleftarrow&\\
f''(x)&=&e^x(\cos x -\sin x -\sin x -\cos x) \\ &=& -2e^x\sin x&\longleftarrow&\\
f'''(x)&=&-2e^x(\sin x + \cos x)&\longleftarrow&\\
f''''(x)&=& -2e^x(\sin x + \cos x + \cos x -\sin x)\\ &=& -4e^x \cos x \\ &=& -4f(x)&\longleftarrow&\\
&...&\\
\therefore f^{(100)}(\pi)&=&-4^{25} f(\pi)
\end{align}</p>
|
logic | <p>I would like to know more about the <em>foundations of mathematics</em>, but I can't really figure out where it all starts. If I look in a book on <em><a href="http://rads.stackoverflow.com/amzn/click/0387900500">axiomatic set theory</a></em>, then it seems to be assumed that one already have learned about <em>languages</em>. If I look in a book about <a href="http://books.google.com/books?id=2sCuDMUruSUC&printsec=frontcover&dq=logic%20structure&hl=en&sa=X&ei=ByejT9j_MOjA2gXS8Ngl&ved=0CDAQ6AEwAA#v=onepage&q=logic%20structure&f=false">logic and structure</a>, it seems that it is assumed that one has already learned about set theory. And some books seem to assume a philosophical background. So where does it all start? </p>
<p>Where should I start if I really wanted to go back to the <em>beginning</em>? </p>
<p>Is it possible to make a bullet point list with where one start? For example:</p>
<ul>
<li>Logic</li>
<li>Language</li>
<li>Set theory</li>
</ul>
<p><strong>EDIT:</strong> I should have said that I was not necessarily looking for a soft or naive introduction to logic or set theory. What I am wondering is, where it starts. So for example, it seems like predicate logic comes before set theory. Is it even possible to say that something comes first?</p>
| <p>There are different ways to build a foundation for mathematics, but I think the closest to being the current "standard" is:</p>
<ul>
<li><p>Philosophy (optional)</p></li>
<li><p><a href="http://en.wikipedia.org/wiki/Propositional_logic">Propositional logic</a></p></li>
<li><p><a href="http://en.wikipedia.org/wiki/First-order_logic">First-order logic</a> (a.k.a. "<a href="http://en.wikipedia.org/wiki/Predicate_logic">predicate logic</a>")</p></li>
<li><p>Set theory (specifically, <a href="http://en.wikipedia.org/wiki/ZFC">ZFC</a>)</p></li>
<li><p>Everything else</p></li>
</ul>
<p>When rigorously followed (e.g., in a <a href="http://en.wikipedia.org/wiki/Hilbert_system">Hilbert system</a>), classical logic does not depend on set theory in any way (rather, it's the other way around), and I believe the only use of languages in low-level theories is to prove things <em>about</em> the theories (e.g., the <a href="http://en.wikipedia.org/wiki/Deduction_theorem">deduction theorem</a>) rather than <em>in</em> the theories. (While proving such metatheorems can make your work easier afterwards, it is not strictly necessary.)</p>
| <p>I strongly urge you to look at Goldrei [9] and Goldrei [10]. I learned about these books by chance in Fall 2011. Among foundational books, I think Goldrei's books must rate as among the best books I've ever come across relative to how little known they are. In particular, Goldrei [10] has been invaluable to me for some things I was working on a few months ago.</p>
<p>In case my personal situation could be of some help, in what follows I'll outline the approach I've been taking for what you asked about.</p>
<p>I too am trying to improve my understanding of ground-level foundational matters, at least I was this past Fall and Winter. (During the past few months I've been spending all my free time on something else, which is related to a subject taken by some students I've been tutoring.) I started with Lemmon's book [1], which was the text for a philosophy department's beginning symbolic logic course I took in 1979 (but I'd forgotten much of the material), and I very carefully read the text material and pretty much worked every single problem in the book.</p>
<p>After this I began reading/working through Mates [2], which was the standard beginning graduate level philosophy symbolic logic text where I was an undergraduate (but when I took the class, also in 1979, the instructor used a different text). However, I quickly decided that I was wasting my time because I had zero interest in many of the topics Mates [2] deals with and it was becoming clear to me that, after my extensive work with Lemmon [1], I could easily skip Mates [2] and proceed to something at the "next level".</p>
<p>I then began Hamilton [3]. I got through the first couple of chapters, doing all the exercises (propositional logic), and then I decided to take a temporary detour and study Hilbert-style (non-standard) propositional calculus a little more deeply before continuing into Hamilton's predicate calculus chapter. I spent about 10 weeks on this, and have a nearly finished 50+ page manuscript on how I think the subject should be presented, motivated by what seem to me to be major pedagogical shortcomings in the existing literature, especially in Hamilton's book. (Goldrei [10], which I didn't discover until later, is an exception.) In this regard, see my answer at [11]. However, at the start of the Spring 2012 semester I had to stop because some students I was tutoring in Fall 2011 wanted me to work with them this semester in a subject that I needed a lot of brush-up with (vector calculus). (I work full time, not teaching, so I have a limited amount of free time to devote to math.)</p>
<p>My intent is to return to Hamilton [3], a book I've had for over 20 years and have always wanted to work through. After Hamilton's book, I'm thinking I'll quickly work through Machover [4], which should be easy as I've already read through much of Machover's book at this point. After these "preliminaries", my goal is to very carefully work through Boolos/Burgess/Jeffrey [5], a (later edition of a) book I actually had a reading course out of in Spring 1990 but, due to other issues at the time, I wasn't able to do much justice to and I feel bad about it to this day.</p>
<p>After this (or perhaps at the same time), I intend to very carefully work through Enderton [6], a book that was strongly recommended to me back in 1986 when I was in a graduate program (different from 1990) with the intention of doing research in either descriptive set theory or in set-theoretic topology, but I had to leave after not passing my Ph.D. exams (two tries).</p>
<p>I have several other logic books, but probably the most significant for possible future study, should I continue, are Ebbinghaus/Flum/Thomas [7] and van Dalen [8]. Each of these is approximately the same level as Boolos/Burgess/Jeffrey [5] and Enderton [6], but they appear to offer more emphasis on some topics (e.g. model theory and intuitionism).</p>
<p>Everything I've mentioned is mathematical logic because set theory (naive set theory, at least) is something I've picked up a lot of in other math courses and on my own. What I'm really looking for is sufficient background in logic to understand and read about things like transitive models of ZF, forcing, etc.</p>
<p><strong>[1]</strong> E. J. Lemmon, <strong>Beginning Logic</strong> (1978)</p>
<p><a href="https://rads.stackoverflow.com/amzn/click/com/0915144506" rel="noreferrer" rel="nofollow noreferrer">http://www.amazon.com/dp/0915144506</a></p>
<p><strong>[2]</strong> Benson Mates, <strong>Elementary Logic</strong> (1972)</p>
<p><a href="https://rads.stackoverflow.com/amzn/click/com/019501491X" rel="noreferrer" rel="nofollow noreferrer">http://www.amazon.com/dp/019501491X</a></p>
<p><strong>[3]</strong> A. G. Hamilton, <strong>Logic for Mathematicians</strong> (1988)</p>
<p><a href="https://rads.stackoverflow.com/amzn/click/com/0521368650" rel="noreferrer" rel="nofollow noreferrer">http://www.amazon.com/dp/0521368650</a></p>
<p><strong>[4]</strong> Moshe Machover, <strong>Set Theory, Logic and their Limitations</strong> (1996)</p>
<p><a href="https://rads.stackoverflow.com/amzn/click/com/0521479983" rel="noreferrer" rel="nofollow noreferrer">http://www.amazon.com/dp/0521479983</a></p>
<p><strong>[5]</strong> George S. Boolos, John P. Burgess, and Richard C. Jeffrey, <strong>Computability and Logic</strong> (2007)</p>
<p><a href="https://rads.stackoverflow.com/amzn/click/com/0521701465" rel="noreferrer" rel="nofollow noreferrer">http://www.amazon.com/dp/0521701465</a></p>
<p><strong>[6]</strong> Herbert Enderton, <strong>A Mathematical Introduction to Logic</strong> (2001)</p>
<p><a href="https://rads.stackoverflow.com/amzn/click/com/0122384520" rel="noreferrer" rel="nofollow noreferrer">http://www.amazon.com/dp/0122384520</a></p>
<p><strong>[7]</strong> H.-D. Ebbinghaus, J. Flum, and W. Thomas, <strong>Mathematical Logic</strong> (1994)</p>
<p><a href="https://rads.stackoverflow.com/amzn/click/com/0387942580" rel="noreferrer" rel="nofollow noreferrer">http://www.amazon.com/dp/0387942580</a></p>
<p><strong>[8]</strong> Dirk van Dalen, <strong>Logic and Structure</strong> (2008)</p>
<p><a href="https://rads.stackoverflow.com/amzn/click/com/3540208798" rel="noreferrer" rel="nofollow noreferrer">http://www.amazon.com/dp/3540208798</a></p>
<p><strong>[9]</strong> Derek C. Goldrei, <strong>Classic Set Theory for Guided Independent Study</strong> (1996)</p>
<p><a href="https://rads.stackoverflow.com/amzn/click/com/0412606100" rel="noreferrer" rel="nofollow noreferrer">http://www.amazon.com/dp/0412606100</a></p>
<p><strong>[10]</strong> Derek C. Goldrei, <strong>Propositional and Predicate Calculus: A Model of Argument</strong> (2005)</p>
<p><a href="https://rads.stackoverflow.com/amzn/click/com/1852339217" rel="noreferrer" rel="nofollow noreferrer">http://www.amazon.com/gp/product/1852339217</a></p>
<p><strong>[11]</strong> <a href="https://math.stackexchange.com/a/94483/13130">Proving <span class="math-container">$(p \to (q \to r)) \to ((p \to q) \to (p \to r))$</span></a></p>
|
differentiation | <p>I think this is just something I've grown used to but can't remember any proof.</p>
<p>When differentiating and integrating with trigonometric functions, we require angles to be taken in radians. Why does it work then and only then?</p>
| <blockquote>
<p>Radians make it possible to relate a linear measure and an angle
measure. A unit circle is a circle whose radius is one unit. The one
unit radius is the same as one unit along the circumference. Wrap a
number line counter-clockwise around a unit circle starting with zero
at (1, 0). The length of the arc subtended by the central angle
becomes the radian measure of the angle.</p>
</blockquote>
<p>From <a href="http://teachingcalculus.wordpress.com/2012/10/12/951/">Why Radians? | Teaching Calculus</a></p>
<p>We are therefore comparing like with like: the length of a radius and the length of an arc subtended by an angle, via $L = R \cdot \theta$ where $L$ is the arc length, $R$ is the radius and $\theta$ is the angle measured in radians.</p>
<p>We could of course do calculus in degrees but we would have to introduce awkward scaling factors.</p>
<p>The degree has no direct link to a circle but was chosen arbitrarily as a unit to measure angles: presumably it is $360^\circ$ because 360 divides nicely by a lot of numbers.</p>
| <p>To make commenters' points explicit, the "degrees-mode" trig functions $\cos^\circ$ and $\sin^\circ$ satisfy the awkward identities
$$
(\cos^\circ)' = -\frac{\pi}{180} \sin^\circ,\qquad
(\sin^\circ)' = \frac{\pi}{180} \cos^\circ,
$$
with all that implies about every formula involving the derivative or antiderivative of a trig function (reduction formulas for the integral of a power of a trig function, power series representations, etc., etc.).</p>
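<p>A quick numerical sketch of the first identity (the sample point and step size are arbitrary choices of mine): define $\sin^\circ(x)=\sin(x\pi/180)$ and compare a central-difference derivative against $\frac{\pi}{180}\cos^\circ$.</p>

```python
import math

def sin_deg(x):  # degrees-mode sine
    return math.sin(math.radians(x))

def cos_deg(x):  # degrees-mode cosine
    return math.cos(math.radians(x))

x, h = 30.0, 1e-5
numeric = (sin_deg(x + h) - sin_deg(x - h)) / (2 * h)  # central difference
assert abs(numeric - (math.pi / 180) * cos_deg(x)) < 1e-8
```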
<hr>
<p><strong>Added</strong>: Regarding Yves Daoust's comment, I read the question, "Why does it work [if angles are taken in radians] and only then?", as asking, "Why do the derivative formulas for $\sin$ and $\cos$ take their familiar form when (and only when) $\sin$ and $\cos$ are $2\pi$-periodic (rather than $360$-periodic)?" If this interpretation is correct, and if one accepts that one full turn of a circle is both $360$ units of one type (degrees) and $2\pi$ of another (radians), then the above formulas are equivalent to $\sin' = \cos$ and $\cos' = -\sin$, and (I believe) <em>do</em> justify "why" we use the $2\pi$-periodic functions $\cos$ and $\sin$ in calculus rather than $\cos^\circ$ and $\sin^\circ$.</p>
<p>Of course, it's possible naslundx was asking "why" in a deeper sense, i.e., for precise definitions of "cosine and sine in radians mode" and a proof that $\cos' = -\sin$ and $\sin' = \cos$ for these functions. </p>
<p>To address this possibility: In my view, it's most convenient to define cosine and sine analytically (i.e., <em>not</em> to define them geometrically), as solutions of the second-order initial-value problems
\begin{align*}
\cos'' + \cos &= 0 & \cos 0 &= 1 & \cos' 0 = 0, \\
\sin'' + \sin &= 0 & \sin 0 &= 0 & \sin' 0 = 1.
\end{align*}
(To say the least, not everyone shares this view!) From these ODEs, it's easy to establish the characterization:
$$
y'' + y = 0,\quad y(0) = a,\ y'(0) = b\quad\text{iff}\quad
y = a\cos + b\sin.
$$
One quickly gets $\cos' = -\sin$ and $\sin' = \cos$, the angle-sum formulas, power series representations, and periodicity (obtaining an <em>analytic</em> definition of $\pi$). After this, it's trivial to see $\mathbf{x}(\theta) = (\cos \theta, \sin \theta)$ is a unit-speed parametrization of the unit circle (its velocity $\mathbf{x}'(\theta) = (\sin\theta, -\cos\theta)$ is obviously a unit vector). Consequently, $\theta$ may be viewed as <em>defining</em> a numerical measurement of "angle" coinciding with "arc length along the unit circle", and $2\pi$ units of this measure equals one full turn.</p>
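<p>For the skeptical reader, a numerical sketch of the analytic definition (using scipy's ODE solver; the grid and tolerances are arbitrary): integrating $y''+y=0$ with $y(0)=0$, $y'(0)=1$ does reproduce $\sin$.</p>

```python
import numpy as np
from scipy.integrate import solve_ivp

def rhs(t, s):          # s = (y, y'); the system form of y'' = -y
    return [s[1], -s[0]]

ts = np.linspace(0.0, 2.0 * np.pi, 50)
sol = solve_ivp(rhs, (0.0, 2.0 * np.pi), [0.0, 1.0],
                t_eval=ts, rtol=1e-10, atol=1e-12)
assert np.allclose(sol.y[0], np.sin(ts), atol=1e-6)
```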
|
probability | <p>I was asked this question regarding correlation recently, and although it seems intuitive, I still haven't worked out the answer satisfactorily. I hope you can help me out with this seemingly simple question.</p>
<p>Suppose I have three random variables $A$, $B$, $C$. Is it possible to have these three relationships satisfied?
$$
\mathrm{corr}[A,B] = 0.9
$$
$$
\mathrm{corr}[B,C] = 0.8
$$
$$
\mathrm{corr}[A,C] = 0.1
$$
My intuition is that it is not possible, although I can't see right now how I can prove this conclusively.</p>
| <p>Here's an answer to the general question, which I wrote up a while ago. It's a common interview question.</p>
<p>The question goes like this: "Say you have X,Y,Z three random variables such that the correlation of X and Y is something and the correlation of Y and Z is something else, what are the possible correlations for X and Z in terms of the other two correlations?"</p>
<p>We'll give a complete answer to this question, using the Cauchy-Schwarz inequality and the fact that $\mathcal{L}^2$ is a Hilbert space.</p>
<p>The Cauchy-Schwarz inequality says that if x,y are two vectors in an inner product space, then</p>
<p>$$\lvert\langle x,y\rangle\rvert \leq \sqrt{\langle x,x\rangle\langle y,y\rangle}$$</p>
<p>This is used to justify the notion of an ''angle'' in abstract vector spaces, since it gives the constraint</p>
<p>$$-1 \leq \frac{\langle x,y\rangle}{\sqrt{\langle x,x\rangle\langle y,y\rangle}} \leq 1$$
which means we can interpret it as the cosine of the angle between the vectors x and y.</p>
<p>A Hilbert space is an infinite dimensional vector space with an inner product. The important thing for this post is that in a Hilbert space the inner product allows us to do geometry with the vectors, which in this case are random variables. We'll take for granted that the space of mean 0 random variables with variance 1 is a Hilbert space, with inner product $\mathbb{E}[XY]$. Note that, in particular</p>
<p>$$\frac{\langle X,Y\rangle}{\sqrt{\langle X,X\rangle\langle Y,Y\rangle}} = \text{Cor}(X,Y)$$</p>
<p>This often leads people to say that ''correlations are cosines'', which is intuitively true, but not formally correct, as they certainly aren't the cosines we naturally think of (this space is infinite dimensional), but all of the laws hold (like Pythagorean theorem, law of cosines) if we define them to be the cosines of the angle between two random variables, whose lengths we can think of as their standard deviations in this vector space.</p>
<p>Because this space is a Hilbert space, we can do all of the geometry that we did in high school, such as projecting vectors onto one another, doing orthogonal decomposition, etc. To solve this question, we use orthogonal decomposition, which is often called the ''uncorrelation trick'' in statistics and consists of writing a random variable as a function of another random variable plus a random variable that is uncorrelated with the second random variable. This is especially useful in the case of multivariate normal random variables, when two components being uncorrelated implies independence.</p>
<p>Okay, let's suppose that we know that the correlation of X and Y is $p_{xy}$, the correlation of Y and Z is $p_{yz}$, and we want to know the correlation of X and Z, which we'll call $p_{xz}$. Note that we don't lose generality by assuming mean 0 and variance 1 as scaling and translating vectors doesn't affect their correlations. We can then write that:</p>
<p>$$X = \langle X,Y\rangle Y + O^X_Y$$</p>
<p>$$Z = \langle Z,Y\rangle Y + O^Z_Y$$</p>
<p>where $\langle \cdot,\cdot\rangle$ stands for the inner product on the space and the $O$ are uncorrelated with Y. Then, we take the inner product of $X,Z$ which is the correlation we're looking for, since everything has variance 1. We have that</p>
<p>$$\langle X,Z\rangle = p_{xz} = \langle p_{xy}Y+O^X_Y,p_{zy}Y+O^Z_Y\rangle = p_{xy}p_{yz}+\langle O^X_Y,O^Z_Y\rangle$$</p>
<p>since the variance of Y is 1 and the other terms of this bilinear expansion are orthogonal and hence have 0 covariance. We can now apply the Cauchy-Schwarz inequality to the last term above to get that</p>
<p>$$p_{x,z} \leq p_{xy}p_{yz} + \sqrt{(1-p_{x,y}^2)(1-p_{y,z}^2)}$$</p>
<p>$$p_{x,z} \geq p_{xy}p_{yz} - \sqrt{(1-p_{x,y}^2)(1-p_{y,z}^2)}$$</p>
<p>where the fact that</p>
<p>$$\langle O^X_Y,O^X_Y\rangle = 1-p_{xy}^2$$</p>
<p>comes from the equation setting the variance of X equal to 1 or</p>
<p>$$1 = \langle X,X\rangle = \langle p_{xy}Y + O^X_Y,p_{xy}Y+O^X_Y\rangle = p_{xy}^2 + \langle O^X_Y,O^X_Y\rangle$$</p>
<p>and the exact same thing can be done for $O^Z_Y$.</p>
<p>So we have our answer. Sorry this was so long.</p>
| <p>Assume without loss of generality that the random variables $A$, $B$, $C$ are standard, that is, with mean zero and unit variance. Then, for any $(A,B,C)$ with the prescribed covariances,
$$\mathrm{var}(A-B+C)=\mathrm{var}(A)+\mathrm{var}(B)+\mathrm{var}(C)-2\mathrm{cov}(A,B)-2\mathrm{cov}(B,C)+2\mathrm{cov}(A,C),
$$
that is,
$$
\mathrm{var}(A-B+C)=3-2\cdot0.9-2\cdot0.8+2\cdot0.1=-0.2\lt0,
$$
which is absurd.</p>
<p><strong>Edit:</strong> Since <a href="http://www.johndcook.com/blog/2010/06/17/covariance-and-law-of-cosines/" rel="noreferrer">correlations are cosines</a>, for every random variables such that $\mathrm{corr}(A,B)=b$, $\mathrm{corr}(A,C)=c$ and $\mathrm{corr}(B,C)=a$, one must have
$$
a\geqslant bc-\sqrt{1-b^2}\sqrt{1-c^2}.
$$
For $b=0.9$ and $c=0.8$, this yields $a\geqslant.458$.</p>
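<p>Both facts are easy to check numerically (a sketch; the eigenvalue test uses the standard fact that a valid correlation matrix must be positive semidefinite):</p>

```python
import numpy as np

# The proposed correlation matrix for (A, B, C) has a negative eigenvalue,
# so no such random variables can exist.
R = np.array([[1.0, 0.9, 0.1],
              [0.9, 1.0, 0.8],
              [0.1, 0.8, 1.0]])
assert np.linalg.eigvalsh(R).min() < 0

# The cosine bound: corr(A, C) >= bc - sqrt((1 - b^2)(1 - c^2)) ≈ 0.458
b, c = 0.9, 0.8
lower = b * c - np.sqrt((1 - b**2) * (1 - c**2))
assert abs(lower - 0.458) < 1e-3
```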
|
probability | <blockquote>
<p><strong>Possible Duplicate:</strong><br>
<a href="https://math.stackexchange.com/questions/111314/choose-a-random-number-between-0-and-1-and-record-its-value-and-keep-doing-it-u">choose a random number between 0 and 1 and record its value. and keep doing it until the sum of the numbers exceeds 1. how many tries?</a> </p>
</blockquote>
<p>So I'm reading a book about simulation, and in one of the chapters about random number generation I found the following exercise:</p>
<hr>
<p>For uniform $(0,1)$ random independent variables $U_1, U_2, \dots$ define</p>
<p>$$
N = \min \bigg \{ n : \sum_{i=1}^n U_i > 1 \bigg \}
$$</p>
<p>Give an estimate for the value of $E[N]$.</p>
<hr>
<p>That is: $N$ is equal to the number of random numbers uniformly distributed in $(0,1)$ that must be summed to exceed $1$. What's the expected value of $N$?</p>
<p>I wrote some code and I saw that the expected value of $N$ goes to $e = 2.71\dots$</p>
<p>The book does not ask for a formal proof of this fact, but now I'm curious!</p>
<p>So I would like to ask for</p>
<ul>
<li><strong>A (possibily) simple (= undergraduate level) analytic proof of this fact</strong></li>
<li><strong>An intuitive explanation for this fact</strong></li>
</ul>
<p>or both.</p>
| <p>Here is a way to compute $\mathbb E(N)$. We begin by <em>complicating</em> things, namely, for every $x$ in $(0,1)$, we consider $m_x=\mathbb E(N_x)$ where
$$
N_x=\min\left\{n\,;\,\sum_{k=1}^nU_k\gt x\right\}.
$$
Our goal is to compute $m_1$ since $N_1=N$. Assume that $U_1=u$ for some $u$ in $(0,1)$. If $u\gt x$, then $N_x=1$. If $u\lt x$, then $N_x=1+N'$ where $N'$ is distributed like $N_{x-u}$. Hence
$$
m_x=1+\int_0^xm_{x-u}\,\mathrm du=1+\int_0^xm_{u}\,\mathrm du.
$$
Thus, $x\mapsto m_x$ is differentiable with $m'_x=m_x$. Since $m_0=1$, $m_x=\mathrm e^x$ for every $x\leqslant1$, in particular $\mathbb E(N)=m_1=\mathrm e$.</p>
| <p>In fact it turns out that <span class="math-container">$P(N = n) = \frac{n-1}{n!}$</span> for <span class="math-container">$n \ge 2$</span>. Let <span class="math-container">$S_n = \sum_{j=1}^n U_j$</span>, and <span class="math-container">$f_n(s)$</span> the probability density function for <span class="math-container">$S_n$</span>. For <span class="math-container">$0 < x < 1$</span> we have
<span class="math-container">$f_1(x) = 1$</span> and <span class="math-container">$f_{n+1}(x) = \int_0^x f_n(s) \ ds$</span>. By induction, we get <span class="math-container">$f_n(x) = x^{n-1}/(n-1)!$</span> for <span class="math-container">$0 < x < 1$</span>, and thus <span class="math-container">$P(S_n < 1) = \int_0^1 f_n(s)\ ds = \dfrac{1}{n!}$</span>.
Now
<span class="math-container">\begin{align*}
P(N=n) &= P(S_{n-1} < 1 \le S_n)\\
&= P(S_{n-1} < 1) - P(S_n \le 1)\\
&= \frac{1}{(n-1)!} - \frac{1}{n!} \\
&= \frac{n-1}{n!}
\end{align*}</span></p>
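<p>The closed form is easy to verify with exact rational arithmetic (a sketch of mine): the probabilities $(n-1)/n!$ sum to $1$, and $\sum_n n\,P(N=n)=\mathrm e$.</p>

```python
from fractions import Fraction
from math import factorial, e

# P(N = n) = (n-1)/n! for n >= 2; truncate at n = 29 (the tail is astronomically small)
probs = {n: Fraction(n - 1, factorial(n)) for n in range(2, 30)}
total_prob = sum(probs.values())
expectation = sum(n * p for n, p in probs.items())
print(float(total_prob))   # ~ 1.0
print(float(expectation))  # ~ 2.718281828... = e
```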
|
linear-algebra | <blockquote>
<p>Assume that we are working over a complex space $W$ of dimension $n$. When would an operator on this space have the same characteristic and minimal polynomial? </p>
</blockquote>
<p>I think the easy case is when the operator has $n$ distinct eigenvalues, but what if it is merely diagonalizable? Is that sufficient, or can there be cases (with repeated eigenvalues) where the characteristic polynomial doesn't equal the minimal polynomial? What are the general conditions under which the equality holds? Is it possible to state them without using determinants? (I am working from Axler, and he avoids determinants.)</p>
<p>Thanks.</p>
| <p><strong>Theorem.</strong> <em>Let $T$ be an operator on the finite dimensional complex vector space $\mathbf{W}$. The characteristic polynomial of $T$ equals the minimal polynomial of $T$ if and only if the dimension of each eigenspace of $T$ is $1$.</em></p>
<p><em>Proof.</em> Let the characteristic and minimal polynomial be, respectively, $\chi(t)$ and $\mu(t)$, with
$$\begin{align*}
\chi(t) &= (t-\lambda_1)^{a_1}\cdots (t-\lambda_k)^{a_k}\\
\mu(t) &= (t-\lambda_1)^{b_1}\cdots (t-\lambda_k)^{b_k},
\end{align*}$$
where $1\leq b_i\leq a_i$ for each $i$. Then $b_i$ is the size of the largest Jordan block associated to $\lambda_i$ in the Jordan canonical form of $T$, and the sum of the sizes of the Jordan blocks associated to $\lambda_i$ is equal to $a_i$. Hence, $b_i=a_i$ if and only if $T$ has a unique Jordan block associated to $\lambda_i$. Since the dimension of $E_{\lambda_i}$ is equal to the number of Jordan blocks associated to $\lambda_i$ in the Jordan canonical form of $T$, it follows that $b_i=a_i$ if and only if $\dim(E_{\lambda_i})=1$. <strong>QED</strong></p>
<p>In particular, if the matrix has $n$ distinct eigenvalues, then each eigenvalue has a one-dimensional eigenspace. </p>
<p>Also in particular,</p>
<p><strong>Corollary.</strong> <em>Let $T$ be a diagonalizable operator on a finite dimensional vector space $\mathbf{W}$. The characteristic polynomial of $T$ equals the minimal polynomial of $T$ if and only if the number of distinct eigenvalues of $T$ is $\dim(\mathbf{W})$.</em></p>
<p>Using the Rational Canonical Form instead, we obtain:</p>
<p><strong>Theorem.</strong> <em>Let $W$ be a finite dimensional vector space over the field $\mathbf{F}$, and $T$ an operator on $W$. Let $\chi(t)$ be the characteristic polynomial of $T$, and assume that the factorization of $\chi(t)$ into irreducibles over $\mathbf{F}$ is
$$\chi(t) = \phi_1(t)^{a_1}\cdots \phi_k(t)^{a_k}.$$
Then the minimal polynomial of $T$ equals the characteristic polynomial of $T$ if and only if $\dim(\mathrm{ker}(\phi_i(T))) = \deg(\phi_i(t))$ for $i=1,\ldots,k$.</em></p>
<p><em>Proof.</em> Proceed as above, using the Rational Canonical forms instead. The exponent $b_i$ of $\phi_i(t)$ in the minimal polynomial gives the largest power of $\phi_i(t)$ that has a companion block in the Rational canonical form, and $\frac{1}{d_i}\dim(\mathrm{ker}(\phi_i(T)))$ (where $d_i=\deg(\phi_i)$) is the number of companion blocks. <strong>QED</strong></p>
| <p>The following equivalent criteria, valid for an arbitrary field, are short to state. Whether or not any one of the conditions is easy to test computationally may depend on the situation, though 2. is in principle always doable.</p>
<p><strong>Proposition.</strong> <em>The following are equivalent for a linear operator on a vector space of nonzero finite dimension.</em></p>
<ol>
<li><em>The minimal polynomial is equal to the characteristic polynomial.</em></li>
<li><em>The list of invariant factors has length one.</em></li>
<li><em>The Rational Canonical Form has a single block.</em></li>
<li><em>The operator has a matrix similar to a companion matrix.</em></li>
<li><em>There exists a (so-called cyclic) vector whose images by the operator span the whole space.</em></li>
</ol>
<p>Point 1. and 2. are equivalent because the minimal polynomial is the largest invariant factor and the characteristic polynomial is the product of all invariant factors. The invariant factors are in bijection with the blocks of the Rational Canonical Form, giving the equivalence of 2. and 3. These blocks are companion matrices, so 3. implies 4., and by the uniqueness of the RCF 4. also implies 3 (every companion matrix is its own RCF). Finally 4. implies 5. (take the first basis vector as cyclic vector) and 5. implies 4. by taking a basis consisting of <span class="math-container">$n$</span> successive images (counting from <span class="math-container">$0$</span>) of the cyclic vector.</p>
|
linear-algebra | <blockquote>
<p>Show that the determinant of a matrix $A$ is equal to the product of its eigenvalues $\lambda_i$.</p>
</blockquote>
<p>So I'm having a tough time figuring this one out. I know that I have to work with the characteristic polynomial of the matrix $\det(A-\lambda I)$. But, when considering an $n \times n$ matrix, I do not know how to work out the proof. Should I just use the determinant formula for any $n \times n$ matrix? I'm guessing not, because that is quite complicated. Any insights would be great.</p>
| <p>Suppose that <span class="math-container">$\lambda_1, \ldots, \lambda_n$</span> are the eigenvalues of <span class="math-container">$A$</span>. Then the <span class="math-container">$\lambda$</span>s are also the roots of the characteristic polynomial, i.e.</p>
<p><span class="math-container">$$\begin{array}{rcl} \det (A-\lambda I)=p(\lambda)&=&(-1)^n (\lambda - \lambda_1 )(\lambda - \lambda_2)\cdots (\lambda - \lambda_n) \\ &=&(-1) (\lambda - \lambda_1 )(-1)(\lambda - \lambda_2)\cdots (-1)(\lambda - \lambda_n) \\ &=&(\lambda_1 - \lambda )(\lambda_2 - \lambda)\cdots (\lambda_n - \lambda)
\end{array}$$</span></p>
<p>The first equality follows from the factorization of a polynomial given its roots; the leading (highest degree) coefficient <span class="math-container">$(-1)^n$</span> can be obtained by expanding the determinant along the diagonal.</p>
<p>Now, by setting <span class="math-container">$\lambda$</span> to zero (simply because it is a variable) we get on the left side <span class="math-container">$\det(A)$</span>, and on the right side <span class="math-container">$\lambda_1 \lambda_2\cdots\lambda_n$</span>, that is, we indeed obtain the desired result</p>
<p><span class="math-container">$$ \det(A) = \lambda_1 \lambda_2\cdots\lambda_n$$</span></p>
<p>So the determinant of the matrix is equal to the product of its eigenvalues.</p>
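<p>A quick numerical illustration (my own sketch): for a real matrix, complex eigenvalues come in conjugate pairs, so their product is real and matches the determinant.</p>

```python
import numpy as np

A = np.array([[0., -1., 0.],
              [1.,  0., 0.],
              [0.,  0., 3.]])
eigs = np.linalg.eigvals(A)   # i, -i, 3
prod_eigs = np.prod(eigs)     # (i)(-i)(3) = 3
print(prod_eigs, np.linalg.det(A))  # both equal 3 (up to rounding)
```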
| <p>I am a beginning Linear Algebra learner and this is just my humble opinion. </p>
<p>One idea presented above is that </p>
<p>Suppose that $\lambda_1,\ldots, \lambda_n$ are eigenvalues of $A$. </p>
<p>Then the $\lambda$s are also the roots of the characteristic polynomial, i.e.</p>
<p>$$\det(A−\lambda I)=(\lambda_1-\lambda)(\lambda_2−\lambda)\cdots(\lambda_n−\lambda)$$.</p>
<p>Now, by setting $\lambda$ to zero (simply because it is a variable) we get on the left side $\det(A)$, and on the right side $\lambda_1\lambda_2\ldots \lambda_n$, that is, we indeed obtain the desired result</p>
<p>$$\det(A)=\lambda_1\lambda_2\ldots \lambda_n$$.</p>
<p>I don't think that this works generally, but only for the case when $\det(A) = 0$. </p>
<p>This is because, when we write down the characteristic equation, we use the relation $\det(A - \lambda I) = 0$. Following the same logic, the only case where $\det(A - \lambda I) = \det(A) = 0$ is $\lambda = 0$.
The relation $\det(A - \lambda I) = 0$ must hold even in the special case $\lambda = 0$, which implies $\det(A) = 0$.</p>
<p><strong>UPDATED POST</strong></p>
<p>Here I propose a way to prove the theorem for the 2 by 2 case.
Let $A$ be a 2 by 2 matrix. </p>
<p>$$ A = \begin{pmatrix} a_{11} & a_{12}\\ a_{21} & a_{22}\\\end{pmatrix}$$ </p>
<p>The idea is to use a certain property of determinants, </p>
<p>$$ \begin{vmatrix} a_{11} + b_{11} & a_{12} \\ a_{21} + b_{21} & a_{22}\\\end{vmatrix} = \begin{vmatrix} a_{11} & a_{12}\\ a_{21} & a_{22}\\\end{vmatrix} + \begin{vmatrix} b_{11} & a_{12}\\b_{21} & a_{22}\\\end{vmatrix}$$</p>
<p>Let $ \lambda_1$ and $\lambda_2$ be the 2 eigenvalues of the matrix $A$. (The eigenvalues can be distinct, or repeated, real or complex it doesn't matter.)</p>
<p>The two eigenvalues $\lambda_1$ and $\lambda_2$ must satisfy the following condition :</p>
<p>$$\det (A -I\lambda) = 0 $$
where $\lambda$ is an eigenvalue of $A$.</p>
<p>Therefore,
$$\begin{vmatrix} a_{11} - \lambda & a_{12} \\ a_{21} & a_{22} - \lambda\\\end{vmatrix} = 0 $$</p>
<p>Therefore, using the property of determinants provided above, I will try to <em>decompose</em> the determinant into parts. </p>
<p>$$\begin{vmatrix} a_{11} - \lambda & a_{12} \\ a_{21} & a_{22} - \lambda\\\end{vmatrix} = \begin{vmatrix} a_{11} & a_{12} \\ a_{21} & a_{22} - \lambda\\\end{vmatrix} - \begin{vmatrix} \lambda & 0 \\ a_{21} & a_{22} - \lambda\\\end{vmatrix}= \begin{vmatrix} a_{11} & a_{12} \\ a_{21} & a_{22}\\\end{vmatrix} - \begin{vmatrix} a_{11} & a_{12} \\ 0 & \lambda \\\end{vmatrix}-\begin{vmatrix} \lambda & 0 \\ a_{21} & a_{22} - \lambda\\\end{vmatrix}$$</p>
<p>The final determinant can be further reduced. </p>
<p>$$
\begin{vmatrix} \lambda & 0 \\ a_{21} & a_{22} - \lambda\\\end{vmatrix} = \begin{vmatrix} \lambda & 0 \\ a_{21} & a_{22} \\\end{vmatrix} - \begin{vmatrix} \lambda & 0\\ 0 & \lambda\\\end{vmatrix}
$$</p>
<p>Substituting the final determinant, we will have </p>
<p>$$
\begin{vmatrix} a_{11} - \lambda & a_{12} \\ a_{21} & a_{22} - \lambda\\\end{vmatrix} = \begin{vmatrix} a_{11} & a_{12} \\ a_{21} & a_{22}\\\end{vmatrix} - \begin{vmatrix} a_{11} & a_{12} \\ 0 & \lambda \\\end{vmatrix} - \begin{vmatrix} \lambda & 0 \\ a_{21} & a_{22} \\\end{vmatrix} + \begin{vmatrix} \lambda & 0\\ 0 & \lambda\\\end{vmatrix} = 0
$$</p>
<p>In a polynomial
$$ a_{n}\lambda^n + a_{n-1}\lambda^{n-1} + \cdots + a_{1}\lambda + a_{0} = 0$$
the product of the roots equals $(-1)^n a_{0}/a_{n}$; here the polynomial is monic of degree $2$, so the product of the roots is simply the constant term $a_{0}$.</p>
<p>From the decomposed determinant, the only term which doesn't involve $\lambda$ would be the first term </p>
<p>$$
\begin{vmatrix} a_{11} & a_{12} \\ a_{21} & a_{22} \\\end{vmatrix} = \det (A)
$$</p>
<p>Therefore, the product of the roots, that is, the product of the eigenvalues of $A$, equals the determinant of $A$. </p>
<p>I am having difficulty generalizing this idea of proof to the $n$ by $n$ case though, as it is complex and time-consuming for me. </p>
|
game-theory | <blockquote>
<p>Let <span class="math-container">$ N = \{1,2,\ldots,n\} $</span> be a set of elements called voters. Let <span class="math-container">$$C=\lbrace S : S \subseteq N \rbrace$$</span> be the set of all subsets of <span class="math-container">$N$</span>. Members of <span class="math-container">$C$</span> are called coalitions. Let <span class="math-container">$f$</span> be a function from <span class="math-container">$C$</span> to <span class="math-container">$\{0,1\}$</span>. A coalition <span class="math-container">$S \subseteq N$</span> is said to be winning if <span class="math-container">$f(S) = 1$</span>, and a losing coalition if <span class="math-container">$f(S) = 0$</span>. Such a function $f$ is called a voting game if the following conditions hold:</p>
<p>(a) N is a winning coalition.</p>
<p>(b) the empty set is a losing coalition.</p>
<p>(c) If S is a winning coalition and <span class="math-container">$S \subseteq S'$</span>, then S' is also winning.</p>
<p>(d) If both S and S' are winning coalitions, then S and S' have a common voter.
<a href="https://www.isical.ac.in/%7Eadmission/IsiAdmission2017/PreviousQuestion/BStat-BMath-UGA-2016.pdf" rel="nofollow noreferrer">Source</a></p>
<p>Show that
a. The maximum number of winning coalitions of a voting game is <span class="math-container">$2^{n-1}$</span></p>
<p>b. Find a voting game for which number of winning coalition is <span class="math-container">$2^{n-1}$</span></p>
</blockquote>
<hr>
<p>I tried the following: if a single element of <span class="math-container">$N$</span> forms a winning coalition, then its union with any other elements of <span class="math-container">$N$</span> will also be a winning coalition. Each of the remaining elements then has two options: it either becomes part of the winning coalition or not. Thus there are <span class="math-container">$2^{n-1}$</span> such coalitions. I am not sure my reasoning is right, and I am unable to prove that this is indeed the maximum. Any ideas? Thanks.</p>
| <p>Your example with $2^{n-1}$ winning coalitions is a good one. </p>
<p>To prove there are no more than $2^{n-1}$ winning coalitions, note that a subset and its complement cannot both be winning as they do not share a voter. As no more than half the subsets can be winning, the maximum number of winning coalitions is $\frac 12(2^n)=2^{n-1}$ </p>
<p>There are other ways to get $2^{n-1}$ winning coalitions. Pick any odd number $k$ of voters to count. A winning coalition is any group that includes a majority of those voters. There are $2^{k-1}$ subsets of the $k$ that are a majority and you can add in any of the $2^{n-k}$ subsets of the voters that do not count. Any two winning coalitions will have at least one of the $k$ voters that count in common.</p>
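<p>Both constructions can be verified by brute force; the sketch below (my own, with arbitrary names) enumerates all coalitions for $n=5$ voters with a committee of $k=3$ counted voters:</p>

```python
from itertools import combinations

n, k = 5, 3  # k (odd) counted voters: 0, 1, 2
voters = range(n)

def winning(S):
    """A coalition wins iff it contains a majority of the k counted voters."""
    return sum(1 for v in S if v < k) > k / 2

coalitions = [frozenset(c) for r in range(n + 1) for c in combinations(voters, r)]
winners = [S for S in coalitions if winning(S)]
print(len(winners))  # 2**(n-1) = 16
print(all(S & T for S in winners for T in winners))  # condition (d) holds: True
```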
| <p>Your solution for $2^{n-1}$ winning coalitions is just perfect. Say that $\{1\}$ is a winning coalition, then there are $2^{n-1}$ coalitions that include $1$ and are winners, and $2^{n-1}$ that do not and are losers. These respect all the conditions:</p>
<p>(a) $N$, including $1$, is a winning coalition</p>
<p>(b) the empty set, not including $1$, is a losing coalition</p>
<p>(c) if $S$ is a winning one, thus it includes $1$, then $S'\supseteq S$ also includes $1$ and is a winning coalition as well</p>
<p>(d) $S$ and $S'$ both winners have at least $1$ in common</p>
<p>Let's prove that it's not possible to have more than $2^{n-1}$ winning coalitions. Indeed you may have at most $2^{n-1}$ subsets among which you don't have any pair of complementary subsets. As soon as you have such a pair of winning subsets, you are no longer respecting condition (d) (complementary subsets by definition do not share any element). So $2^{n-1}$ is the maximum.</p>
<p><strong>EDIT:</strong> Ross Millikan has presented a more general valid solution, which includes the one from the OP as a particular case. I'm going to give an even more general setup, easily observed in real-life situations. An excellent comment by Alex Ravsky gives a further theoretical completion of the topic.</p>
<p>Pick any $k \neq 0$ voters; they will constitute the <em>committee</em>. Only their votes (that is, their presence in a subset) are going to count: votes from the other $n-k$ elements will be irrelevant. Note that $k=n$ (all the voters count) and $k=1$ (the OP's solution, with only one significant voter) are perfectly valid choices. You decide what the "winning coalitions" are in the intuitive way derived from real life: those including a majority of the <em>committee</em>. If $k$ is even, you need to elect a "president", whose presence will decide any tie (think of his vote as counting $3/2$ instead of $1$; you may want to have such a president also in a committee with an odd number of members, but it's pretty much useless since it's not going to modify the results). Again you can see that this setup respects conditions (a-d).</p>
|
geometry | <p>I'm trying to develop intuition (rather than memorization) for relating the two forms of the dot product: the one using the angle $\theta$ between the vectors and the one using the components of the vectors. </p>
<p>For example, suppose I have vector $\mathbf{a} = (a_1,a_2)$ and vector $\mathbf{b}=(b_1,b_2)$. What's the physical or geometrical meaning that </p>
<p>$$a_1b_1 + a_2b_2 = |\mathbf{a}||\mathbf{b}|\cos(\theta)\;?$$</p>
<p>Why is multiplying $|\mathbf{b}|$ by the component of $\mathbf{a}$ in the direction of $\mathbf{b}$ the same as multiplying the corresponding components of $\mathbf{a}$ and $\mathbf{b}$ and summing? </p>
<p>I know this relationship comes out when we prove it using the law of cosines, but even then I can't build an intuition for it.</p>
<p>This image clarifies my doubt:</p>
<p><img src="https://i.sstatic.net/Bpyfg.jpg" alt="enter image description here"></p>
<p>Thanks</p>
| <p>Start with the following definition:</p>
<p><a href="https://i.sstatic.net/o2kgw.jpg" rel="noreferrer"><img src="https://i.sstatic.net/o2kgw.jpg" alt="enter image description here"></a></p>
<p>(with a negative dot product when the projection is onto $-\mathbf{b}$)</p>
<p>This implies that the dot product of perpendicular vectors is zero and the dot product of parallel vectors is the product of their lengths.</p>
<p>Now take any two vectors $\mathbf{a}$ and $\mathbf{b}$. </p>
<p><a href="https://i.sstatic.net/sPoEK.jpg" rel="noreferrer"><img src="https://i.sstatic.net/sPoEK.jpg" alt="enter image description here"></a></p>
<p>They can be decomposed into horizontal and vertical components $\mathbf{a}=a_x\mathbf{i}+a_y\mathbf{j}$ and $\mathbf{b}=b_x\mathbf{i}+b_y\mathbf{j}$:</p>
<p><a href="https://i.sstatic.net/HigB9.jpg" rel="noreferrer"><img src="https://i.sstatic.net/HigB9.jpg" alt="enter image description here"></a></p>
<p>and so </p>
<p>$$\begin{align*}
\mathbf{a}\cdot \mathbf{b}&=(a_x\mathbf{i}+a_y\mathbf{j})\cdot(b_x\mathbf{i}+b_y\mathbf{j}),
\end{align*}$$</p>
<p>but the perpendicular components have a dot product of zero while the parallel components have a dot product equal to the product of their lengths.</p>
<p>Therefore</p>
<p>$$\mathbf{a}\cdot\mathbf{b}=a_xb_x+a_yb_y.$$</p>
| <p>I found a reasonable proof using polar coordinates. Let's write the vectors as $\mathbf{a} = (|a|\cos(r), |a|\sin(r))$ and $\mathbf{b} = (|b|\cos(s), |b|\sin(s))$.
Then applying the component definition of the scalar product we get: </p>
<p>$a\cdot b = |a||b|\cos(r)\cos(s) + |b||a|\sin(r)\sin(s) = |a||b|\cos(r - s)$. But $\cos(r-s) = \cos(\theta)$ where theta is the angle between the vectors.</p>
<p>So, $a\cdot b = |a||b|\cos(\theta)$.</p>
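<p>This can be double-checked numerically (a sketch of mine, computing the polar angles with <code>arctan2</code> exactly as in the proof):</p>

```python
import numpy as np

rng = np.random.default_rng(1)
a, b = rng.standard_normal(2), rng.standard_normal(2)
r = np.arctan2(a[1], a[0])  # polar angle of a
s = np.arctan2(b[1], b[0])  # polar angle of b
angle_form = np.linalg.norm(a) * np.linalg.norm(b) * np.cos(r - s)
component_form = a[0] * b[0] + a[1] * b[1]
print(np.isclose(component_form, angle_form))  # True
```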
|
linear-algebra | <p>I am currently trying to self-study linear algebra. I've noticed that a lot of the definitions for terms (like eigenvectors, characteristic polynomials, determinants, and so on) require a <strong>square</strong> matrix instead of just any real-valued matrix. For example, <a href="http://mathworld.wolfram.com" rel="noreferrer">Wolfram</a> has this in its <a href="http://mathworld.wolfram.com/CharacteristicPolynomial.html" rel="noreferrer">definition</a> of the characteristic polynomial:</p>
<blockquote>
<p>The characteristic polynomial is the polynomial left-hand side of the characteristic equation $\det(A - I\lambda) = 0$, where $A$ is a square matrix.</p>
</blockquote>
<p>Why must the matrix be square? What happens if the matrix is not square? And why do square matrices come up so frequently in these definitions? Sorry if this is a really simple question, but I feel like I'm missing something fundamental.</p>
| <p>Remember that an $n$-by-$m$ matrix with real-number entries represents a linear map from $\mathbb{R}^m$ to $\mathbb{R}^n$ (or more generally, an $n$-by-$m$ matrix with entries from some field $k$ represents a linear map from $k^m$ to $k^n$). When $m=n$ - that is, when the matrix is square - we're talking about a map from a space to itself.</p>
<p>So really your question amounts to:</p>
<blockquote>
<p>Why are maps from a space to <em>itself</em> - as opposed to maps from a space to <em>something else</em> - particularly interesting?</p>
</blockquote>
<p>Well, the point is that when I'm looking at a map from a space to itself inputs to and outputs from that map are the same "type" of thing, <em>and so I can meaningfully compare them</em>. So, for example, if $f:\mathbb{R}^4\rightarrow\mathbb{R}^4$ it makes sense to ask when $f(v)$ is parallel to $v$, since $f(v)$ and $v$ lie in the same space; but asking when $g(v)$ is parallel to $v$ for $g:\mathbb{R}^4\rightarrow\mathbb{R}^3$ doesn't make any sense, since $g(v)$ and $v$ are just different types of objects. (This example, by the way, is just saying that <em>eigenvectors/values</em> make sense when the matrix is square, but not when it's not square.)</p>
<hr>
<p>As another example, let's consider the determinant. The geometric meaning of the determinant is that it measures how much a linear map "expands/shrinks" a unit of (signed) volume - e.g. the map $(x,y,z)\mapsto(-2x,2y,2z)$ takes a unit of volume to $-8$ units of volume, so has determinant $-8$. What's interesting is that this applies to <em>every</em> blob of volume: it doesn't matter whether we look at how the map distorts the usual 1-1-1 cube, or some other random cube.</p>
<p>But what if we try to go from $3$D to $2$D (so we're considering a $2$-by-$3$ matrix) or vice versa? Well, we can try to use the same idea: (proportionally) how much <em>area</em> does a given <em>volume</em> wind up producing? However, we now run into problems:</p>
<ul>
<li><p>If we go from $3$ to $2$, the "stretching factor" is no longer invariant. Consider the projection map $(x,y,z)\mapsto (x,y)$, and think about what happens when I stretch a bit of volume vertically ...</p></li>
<li><p>If we go from $2$ to $3$, we're never going to get any volume at all - the starting dimension is just too small! So regardless of what map we're looking at, our "stretching factor" seems to be $0$.</p></li>
</ul>
<p>The point is, in the non-square case the "determinant" as naively construed either is ill-defined or is $0$ for stupid reasons.</p>
| <p>Lots of good answers already as to why square matrices are so important. But just so you don't think that other matrices are not interesting, they have analogues of the inverse (e.g., the <a href="https://en.wikipedia.org/wiki/Moore%E2%80%93Penrose_inverse" rel="noreferrer">Moore-Penrose inverse</a>) and non-square matrices have a <a href="https://en.wikipedia.org/wiki/Singular-value_decomposition" rel="noreferrer">singular-value decomposition</a>, where the singular values play a role loosely analogous to the eigenvalues of a square matrix. These topics are often left out of linear algebra courses, but they can be important in numerical methods for statistics and machine learning. But learn the square-matrix results before the fancy non-square-matrix results, since the former provide a context for the latter.</p>
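<p>As a small illustration (my own sketch using numpy): the pseudoinverse and the singular values exist for a $2\times 3$ matrix even though an ordinary inverse, determinant, and eigenvalues do not.</p>

```python
import numpy as np

A = np.array([[1., 0., 2.],
              [0., 1., 1.]])               # 2x3: no square-matrix machinery applies
A_pinv = np.linalg.pinv(A)                 # Moore-Penrose pseudoinverse, 3x2
print(np.allclose(A @ A_pinv, np.eye(2)))  # True: a right inverse, since A has full row rank
U, s, Vt = np.linalg.svd(A)
print(s)                                   # the two singular values of A
```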
|
linear-algebra | <p>Denote by $M_{n \times n}(k)$ the ring of $n$ by $n$ matrices with coefficients in the field $k$. Then why does this ring contain no two-sided ideals other than $0$ and the whole ring? </p>
<p>Thanks for any clarification. This is an exercise from Pete L. Clark's Commutative Algebra notes, which I thought was simple, but I cannot figure it out now.</p>
| <p>Suppose that you have an ideal $\mathfrak{I}$ which contains a matrix with a nonzero entry $a_{ij}$. Multiplying by the matrix that has $0$'s everywhere except a $1$ in entry $(i,i)$, kill all rows except the $i$th row; multiplying by a suitable matrices on the right, kill all columns except the $j$th column; now you have a matrix, necessarily in $\mathfrak{I}$, which contains exactly one nonzero entry, namely $a_{ij}$ in position $(i,j)$.</p>
<p>Now show that $\mathfrak{I}$ must contain <em>all</em> matrices in $M_{n\times n}(k)$. This will show that a $2$-sided ideal consists either of <em>only</em> the $0$ matrix, or must be equal to the entire ring.</p>
<p><em>Added.</em> Now that you have a matrix that has a single nonzero entry, can you get a matrix that has a single nonzero entry on whatever coordinate you specify, and such that this nonzero entry is whatever element of $k$ you want, by multiplying this matrix (on either left, or right, or both) by suitable elementary matrices? Will they all be in $\mathfrak{I}$?</p>
<p>And...</p>
<p>$$\left(\begin{array}{cc}
a&b\\
c&d
\end{array}\right) = \left(\begin{array}{cc}
a & 0\\
0 & 0
\end{array}\right) + \cdots$$</p>
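<p>The "kill rows, then kill columns" step is easy to see concretely (a sketch of mine; <code>E(i, j)</code> is just my notation for the matrix unit with a single $1$ in entry $(i,j)$):</p>

```python
import numpy as np

def E(i, j, n=3):
    """Matrix unit: 1 in entry (i, j), zeros elsewhere."""
    M = np.zeros((n, n))
    M[i, j] = 1.0
    return M

A = np.arange(1., 10.).reshape(3, 3)  # entries 1..9; pick a_{ij} = A[1, 2] = 6
i, j = 1, 2
isolated = E(0, i) @ A @ E(j, 0)      # left factor keeps row i, right factor keeps column j
print(isolated[0, 0])                 # 6.0 = a_{ij}, sitting alone in entry (0, 0)
print(np.count_nonzero(isolated))     # 1
```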
| <p>A faster, and more general result, which Arturo hinted at, is obtained via following proposition from Grillet's <em>Abstract Algebra</em>, section "Semisimple Rings and Modules", page 360:</p>
<p><img src="https://i.sstatic.net/bEYxY.jpg" alt="enter image description here"></p>
<p><strong>Consequence:</strong> if $R:=D$ is a division ring, then $M_n(D)$ is simple.</p>
<p><strong>Proof:</strong> Suppose there existed an ideal of $M_n(D)$. By the proposition, it'd be of the form $M_n(I)$, for $I\unlhd D$, but division rings do not have any ideals (other than $0$ and $D$), so this is a contradiction. $\blacksquare$</p>
|
geometry | <p>When differentiated with respect to $r$, the derivative of $\pi r^2$ is $2 \pi r$, which is the circumference of a circle.</p>
<p>Similarly, when the formula for a sphere's volume $\frac{4}{3} \pi r^3$ is differentiated with respect to $r$, we get $4 \pi r^2$.</p>
<p>Is this just a coincidence, or is there some deep explanation for why we should expect this?</p>
| <p>Consider increasing the radius of a circle by an infinitesimally small amount, $dr$. This increases the area by an <a href="http://en.wikipedia.org/wiki/Annulus_%28mathematics%29" rel="noreferrer">annulus</a> (or ring) with inner radius $r$ and outer radius $r+dr$. As this ring is extremely thin, we can imagine cutting it and flattening it out to form a rectangle with width $2\pi r$ and height $dr$ (the outer edge, of length $2\pi(r+dr)$, is close enough to $2\pi r$ that we can ignore it). So the area gained is $2\pi r\cdot dr$, and to find the rate of change with respect to $r$ we divide by $dr$, giving $2\pi r$. Please note that this is an informal, intuitive explanation rather than a formal proof. The same reasoning works for a sphere: we just flatten the thin shell into a rectangular prism instead.</p>
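<p>The same intuition can be checked with a finite difference (a sketch of mine): for small $dr$, the ring's area divided by $dr$ is already very close to $2\pi r$.</p>

```python
import math

def area(r):
    return math.pi * r * r

r, dr = 2.0, 1e-6
ring_over_dr = (area(r + dr) - area(r)) / dr
print(ring_over_dr)                                 # ~ 12.566, i.e. 2*pi*r
print(abs(ring_over_dr - 2 * math.pi * r) < 1e-4)   # True
```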
| <p>$\newcommand{\Reals}{\mathbf{R}}\newcommand{\Bd}{\partial}\DeclareMathOperator{\vol}{vol}$The formulas are no accident, but not especially deep. The explanation comes down to a couple of geometric observations.</p>
<ol>
<li><p>If $X$ is the closure of a bounded open set in the Euclidean space $\Reals^{n}$ (such as a solid ball, or a bounded polytope, or an ellipsoid) and if $a > 0$ is real, then the image $aX$ of $X$ under the mapping $x \mapsto ax$ (uniform scaling by a factor of $a$ about the origin) satisfies
$$
\vol_{n}(aX) = a^{n} \vol_{n}(X).
$$
More generally, if $X$ is a closed, bounded, piecewise-smooth $k$-dimensional manifold in $\Reals^{n}$, then scaling $X$ by a factor of $a$ multiplies the volume by $a^{k}$.</p></li>
<li><p>If $X \subset \Reals^{n}$ is a bounded, $n$-dimensional intersection of closed half-spaces whose boundaries lie at unit distance from the origin, then scaling $X$ by $a = (1 + h)$ "adds a shell of uniform thickness $h$ to $X$ (modulo behavior along intersections of hyperplanes)". The volume of this shell is equal to $h$ times the $(n - 1)$-dimensional measure of the boundary of $X$, up to added terms of higher order in $h$ (i.e., terms whose total contribution to the $n$-dimensional volume of the shell is negligible as $h \to 0$).</p></li>
</ol>
<p><img src="https://i.sstatic.net/Goo5G.png" alt="The change in area of a triangle under scaling about its center"></p>
<p>If $X$ satisfies Property 2. (e.g., $X$ is a ball or cube or simplex of "unit radius" centered at the origin), then
$$
h \vol_{n-1}(\Bd X) \approx \vol_{n}\bigl[(1 + h)X \setminus X\bigr],
$$
or
$$
\vol_{n-1}(\Bd X) \approx \frac{(1 + h)^{n} - 1}{h}\, \vol_{n}(X).
\tag{1}
$$
The approximation becomes exact in the limit as $h \to 0$:
$$
\vol_{n-1}(\Bd X)
= \lim_{h \to 0} \frac{(1 + h)^{n} - 1}{h}\, \vol_{n}(X)
= \frac{d}{dt}\bigg|_{t = 1} \vol_{n}(tX).
\tag{2}
$$
By Property 1., if $r > 0$, then
$$
\vol_{n-1}\bigl(\Bd (rX)\bigr)
= r^{n-1}\vol_{n-1}(\Bd X)
= \lim_{h \to 0} \frac{(1 + h)^{n}r^{n} - r^{n}}{rh}\, \vol_{n}(X)
= \frac{d}{dt}\bigg|_{t = r} \vol_{n}(tX).
\tag{3}
$$
In words, the $(n - 1)$-dimensional volume of $\Bd(rX)$ is the derivative with respect to $r$ of the $n$-dimensional volume of $rX$.</p>
<p>This argument fails for non-cubical boxes and ellipsoids (to name two) because for these objects, uniform scaling about an arbitrary point does not add a shell of uniform thickness (i.e., Property 2. fails). Equivalently, adding a shell of uniform thickness does not yield a new region similar to (i.e., obtained by uniform scaling from) the original.</p>
<p>(The argument also fails for cubes (etc.) not centered at the origin, again because "off-center" scaling does not add a shell of uniform thickness.)</p>
<p>In more detail:</p>
<ul>
<li><p>Scaling a non-square rectangle adds "thicker area" to the pair of short sides than to the long pair. Equivalently, adding a shell of uniform thickness around a non-square rectangle yields a rectangle <em>having different proportions</em> than the original rectangle.</p></li>
<li><p>Scaling a non-circular ellipse adds thicker area near the ends of the major axis. Equivalently, adding a uniform shell around a non-circular ellipse yields a non-elliptical region. (The principle that "the derivative of area is length" fails drastically for ellipses: The area of an ellipse is proportional to the product of the axes, while the arc length is a <em>non-elementary function</em> of the axes.)</p></li>
</ul>
|
differentiation | <p>Does a function $f: \mathbb{R} \rightarrow \mathbb{R}$ such that $f'(x) > f(x) > 0$ exist?</p>
<p>Intuitively, I think it can't exist.</p>
<p>I've tried finding the answer using the definition of derivative:</p>
<ol>
<li><p>I know that if $\lim_{x \rightarrow k} f(x)$ exists and is finite, then $\lim_{x \rightarrow k} f(x) = \lim_{x \rightarrow k^+} f(x) = \lim_{x \rightarrow k^-} f(x)$</p></li>
<li><p>Thanks to this property, I can write:</p></li>
</ol>
<p>$$\begin{align}
& f'(x) > f(x) > 0 \\
& \lim_{h \rightarrow 0^+} \frac{f(x + h) - f(x)}h > f(x) > 0 \\
& \lim_{h \rightarrow 0^+} f(x + h) - f(x) > h f(x) > 0 \\
& \lim_{h \rightarrow 0^+} f(x + h) > (h + 1) f(x) > f(x) \\
& \lim_{h \rightarrow 0^+} \frac{f(x + h)}{f(x)} > h + 1 > 1
\end{align}$$</p>
<ol start="3">
<li>This leads to the result $1 > 1 > 1$ (or $0 > 0 > 0$ if you stop earlier), which is false.</li>
</ol>
<p>However I guess I made serious mistakes with my proof. I think I've used limits the wrong way. What do you think?</p>
| <p><strong>expanded from David's comment</strong> </p>
<p>$f' > f$ means $f'/f > 1$ so $(\log f)' > 1$. Why not take $\log f > x$, say $\log f = 2x$, or $f = e^{2x}$.</p>
<p>Thus $f' > f > 0$ since $2e^{2x} > e^{2x} > 0$.</p>
<p><strong>added:</strong> Is there a sub-exponential solution?<br>
From $(\log f)'>1$ we get
$$
\log(f(x))-\log(f(0)) > \int_0^x\;1\;dt = x
$$
so
$$
\frac{f(x)}{f(0)} > e^x
$$
and thus
$$
f(x) > C e^x
$$
for some constant $C$ ... it is <em>not</em> sub-exponential.</p>
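<p>A quick numerical spot-check of the above (a sketch; the derivative of $e^{2x}$ is hard-coded analytically rather than estimated):</p>

```python
import math

def f(x):
    return math.exp(2 * x)       # candidate solution f(x) = e^(2x)

def f_prime(x):
    return 2 * math.exp(2 * x)   # its derivative, written analytically

grid = [i / 10 for i in range(-50, 51)]               # sample points in [-5, 5]
satisfies = all(f_prime(x) > f(x) > 0 for x in grid)  # check f' > f > 0
dominates = all(f(x) > math.exp(x) for x in grid if x > 0)  # f beats e^x for x > 0
```

The second check illustrates the point above: the solution is not sub-exponential, it dominates $e^x$ on the positive axis.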
| <p>There are multiple mistakes in your proof (i.e. dividing by $f(x)$ is not necessarily okay, since it is not necessarily positive). The most major is that you treat the variable in the limit as if it were not "bound". That is, if you have something like
$$\lim_{h\rightarrow 0^+}1>0$$
which is true, you can't necessarily, say, multiply through by $h$ to get
$$\lim_{h\rightarrow 0^+}h>0\cdot h$$
which is false. This is essentially what you do, and why your conclusion is wrong. You have to regard the $h$ as belonging to the limit - that is $\lim_{h\rightarrow 0^{+}}f(h)$ is a constant - it does not depend on $h$, because there is no notion of "$h$" outside of the limit. </p>
<p>For an example of a function that does satisfy $f'(x) > f(x) > 0$, you can take $e^{\alpha x}$ for any $\alpha>1$. Another solution is $xe^x$ (restricted to $x>0$, where it is positive). It's worth noting that since the solution to $f'=f$ is $x\mapsto e^x$, one can prove that any solution grows at least exponentially: $f(x) > Ce^x$ for some constant $C>0$. </p>
|
geometry | <p>Wait before you dismiss this as a crank question :)</p>
<p>A friend of mine teaches school kids, and the book she uses states something to the following effect: </p>
<blockquote>
<p>If you divide the circumference of <em>any</em> circle by its diameter, you get the <em>same</em> number, and this number is an irrational number which starts off as $3.14159... .$</p>
</blockquote>
<p>One of the smarter kids in class now has the following doubt: </p>
<blockquote>
<p>Why is this number equal to $3.14159....$? Why is it not some <em>other</em> irrational number?</p>
</blockquote>
<p>My friend is in a fix as to how to answer this in a sensible manner. Could you help us with this?</p>
<p>I have the following idea about how to answer this: Show that the ratio must be greater than $3$. Now show that it must be less than $3.5$. Then show that it must be greater than $3.1$. And so on ... . </p>
<p>The trouble with this is that I don't know of any easy way of doing this, which would also be accessible to a school student.</p>
<p><strong><em>Could you point me to some approximation argument of this form?</em></strong></p>
| <p>If the kids are not too old you could visually try the following, which is very straightforward. Build a few models of a circle out of paperboard, then take a wire and wrap it tightly around each one. Mark the point where the wire meets itself again, straighten it out, and measure its length. You will get something like 3.14... </p>
<p><img src="https://i.sstatic.net/FX2gW.gif" alt="Pi unrolled"></p>
<p>Now let them measure the diameter and circumference of different circles themselves and plot the results on a graph. Tada: they will see that it's proportional, and this is something they (hopefully) already know.</p>
<p>Or use the approximation from Archimedes' algorithm. Still, it's not really great, as they will have to handle big numbers, and the result is rather disappointing: it doesn't reveal the irrationality of $\pi$ and just gives them a more accurate value of $\pi$.</p>
| <p>You can try doing what Archimedes did: using polygons inside and outside the circle.</p>
<p><a href="http://itech.fgcu.edu/faculty/clindsey/mhf4404/archimedes/archimedes.html" rel="nofollow">Here is a webpage which seems to have a good explanation.</a></p>
<p>An other method one can try is to use the fact that the area of the circle is $\displaystyle \pi r^2$. Take a square, inscribe a circle. Now randomly select points in the square (perhaps by having many students throw pebbles etc at the figure or using rain or a computer etc). Compute the ratio of the points which are within the circle to the total. This ratio should be approximately the ratio of the areas $ = \displaystyle \frac{\pi}{4}$. Of course, this is susceptible to experimental errors :-)</p>
<p>Or maybe just have them compute the approximate area of circle using graph paper, or the approximate perimeter using a piece of string.</p>
<p>Not sure how convincing the physical experiments might be, though.</p>
|
probability | <p>If nine coins are tossed, what is the probability that the number of heads is even?</p>
<p>So there can either be 0 heads, 2 heads, 4 heads, 6 heads, or 8 heads.</p>
<p>We have <span class="math-container">$n = 9$</span> trials, find the probability of each <span class="math-container">$k$</span> for <span class="math-container">$k = 0, 2, 4, 6, 8$</span></p>
<p><span class="math-container">$n = 9, k = 0$</span></p>
<p><span class="math-container">$$\binom{9}{0}\bigg(\frac{1}{2}\bigg)^0\bigg(\frac{1}{2}\bigg)^{9}$$</span> </p>
<p><span class="math-container">$n = 9, k = 2$</span></p>
<p><span class="math-container">$$\binom{9}{2}\bigg(\frac{1}{2}\bigg)^2\bigg(\frac{1}{2}\bigg)^{7}$$</span> </p>
<p><span class="math-container">$n = 9, k = 4$</span>
<span class="math-container">$$\binom{9}{4}\bigg(\frac{1}{2}\bigg)^4\bigg(\frac{1}{2}\bigg)^{5}$$</span></p>
<p><span class="math-container">$n = 9, k = 6$</span></p>
<p><span class="math-container">$$\binom{9}{6}\bigg(\frac{1}{2}\bigg)^6\bigg(\frac{1}{2}\bigg)^{3}$$</span></p>
<p><span class="math-container">$n = 9, k = 8$</span></p>
<p><span class="math-container">$$\binom{9}{8}\bigg(\frac{1}{2}\bigg)^8\bigg(\frac{1}{2}\bigg)^{1}$$</span></p>
<p>Add all of these up: </p>
<p><span class="math-container">$$=.64$$</span> so there's a 64% chance of probability?</p>
| <p>The probability is <span class="math-container">$\frac{1}{2}$</span> because the last flip determines it.</p>
| <p>If there are an even number of heads then there must be an odd number of tails. But heads and tails are symmetrical, so the probability must be <span class="math-container">$1/2$</span>.</p>
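<p>Both symmetry arguments agree with brute-force arithmetic on the question's own terms: summing the five binomial probabilities exactly gives 256/512 = 1/2, so the 0.64 in the question is an arithmetic slip. A sketch in exact arithmetic:</p>

```python
from fractions import Fraction
from math import comb

# P(even number of heads in 9 fair flips) = sum of C(9,k)/2^9 over even k
p_even = sum(Fraction(comb(9, k), 2 ** 9) for k in (0, 2, 4, 6, 8))
```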
|
logic | <p>A bag contains 2 counters, as to which nothing is known except
that each is either black or white. Ascertain their colours without taking
them out of the bag.</p>
<p>Carroll's solution: One is black, and the other is white.</p>
<blockquote>
<h3><a href="https://i.sstatic.net/mlmdS.jpg" rel="noreferrer">Lewis Carroll's explanation:</a></h3>
<p>We know that, if a bag contained <span class="math-container">$3$</span> counters, two being black and one white, the chance of drawing a black one would be <span class="math-container">$\frac{2}{3}$</span>; and that any <em>other</em> state of things would <em>not</em> give this chance.<br />
Now the chances, that the given bag contains <span class="math-container">$(\alpha)\;BB$</span>, <span class="math-container">$(\beta)\;BW$</span>, <span class="math-container">$(\gamma)\;WW$</span>, are respectively <span class="math-container">$\frac{1}{4}$</span>, <span class="math-container">$\frac{1}{2}$</span>, <span class="math-container">$\frac{1}{4}$</span>.<br />
Add a black counter.<br />
Then, the chances that it contains <span class="math-container">$(\alpha)\;BBB$</span>, <span class="math-container">$(\beta)\;BBW$</span>, <span class="math-container">$(\gamma)\;BWW$</span>, are, as before, <span class="math-container">$\frac{1}{4}$</span>, <span class="math-container">$\frac{1}{2}$</span>, <span class="math-container">$\frac{1}{4}$</span>.<br />
Hence the chances of now drawing a black one,<br />
<span class="math-container">$$= \frac{1}{4} \cdot 1 + \frac{1}{2} \cdot \frac{2}{3} + \frac{1}{4} \cdot \frac{1}{3} = \frac{2}{3}.$$</span>
Hence the bag now contains <span class="math-container">$BBW$</span> (since any <em>other</em> state of things would <em>not</em> give this chance).<br />
Hence, before the black counter was added, it contained BW, i.e. one black counter and one white.<br />
<a href="http://en.wikipedia.org/wiki/Q.E.D.#QEF" rel="noreferrer">Q.E.F.</a></p>
<p>Can you explain this explanation?</p>
</blockquote>
<p>I don't completely understand the explanation to begin with. It seems like there are elements of inverse reasoning, everything he says is correct but he is basically assuming what he intends to prove. He is assuming one white, one black, then adding one black yields the <span class="math-container">$\frac{2}{3}$</span>. From there he goes back to state the premise as proof.</p>
<p>Can anyone thoroughly analyze and determine if this solution contains any fallacies/slight of hand that may trick the reader?</p>
| <p>There is a reason this is the last of Lewis Carroll's <em>Pillow Problems</em>. It is a mathematical joke from the author of <em>Alice in Wonderland</em>.</p>
<p>The error (and Lewis Carroll knew it) is the phrase</p>
<blockquote>
<p>We know ... that any <em>other</em> state of things would <em>not</em> give this chance</p>
</blockquote>
<p>since he then immediately gives an example of another case which gives the same chance. Indeed any position where the probability of three blacks is equal to the probability of two whites and a black would also give the same combined chance.</p>
<p>There is no need to add the third black counter: it simply confuses the reader, in order to distract from the logical error. Lewis Carroll could equally have written something like:</p>
<blockquote>
<p>We know that, if a bag contained $2$ counters, one being black and one white, the chance of drawing a black one would be $\frac12$; and that any <em>other</em> state of things would <em>not</em> give this chance.</p>
<p>Now the chances, that the given bag contains (α) BB, (β) BW, (γ) WW, are respectively $\frac14$, $\frac12$, $\frac14$.</p>
<p>Hence the chance, of now drawing the black one, $=\frac14 \cdot 1 +\frac12 \cdot \frac12 + \frac14 \cdot 0 = \frac12.$</p>
<p>Hence the bag contains BW (since any <em>other</em> state of things would <em>not</em> give this chance).</p>
</blockquote>
<p>If he had written that, it would be more immediately obvious that this was faulty logic with an assertion followed by a counterexample followed by faulty use of the assertion.</p>
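<p>Both mixture computations, Carroll's with the added counter and the simplified one above, check out in exact arithmetic; the fallacy is only in the uniqueness claim. A sketch:</p>

```python
from fractions import Fraction as F

# Carroll's bag after adding a black counter:
# P(BBB), P(BBW), P(BWW) = 1/4, 1/2, 1/4
carroll = F(1, 4) * 1 + F(1, 2) * F(2, 3) + F(1, 4) * F(1, 3)

# the same computation without the extra counter:
# P(BB), P(BW), P(WW) = 1/4, 1/2, 1/4
plain = F(1, 4) * 1 + F(1, 2) * F(1, 2) + F(1, 4) * 0
```

Both mixtures reproduce the chance of a fixed BBW (resp. BW) bag, which is exactly why "any other state of things would not give this chance" is false.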
| <p>Well, of course the reasoning is flawed, since it's certainly possible to have a bag with two counters of the same color in it!</p>
<p>The facts that are correct are:</p>
<p>The probability of drawing a black counter from a fixed bag with 3 counters is 2/3 iff the bag contains two black counters.</p>
<p>By adding a black counter to a randomly generated 2-counter bag, the probability of drawing a black from the resulting bag is 2/3. </p>
<p>The conclusion that this means the resulting bag in the latter case therefore contains 2 black counters and 1 white counter is what is flawed, because the bag itself is not fixed; the probability is being calculated over a variable number of possibilities for the bag.</p>
|
linear-algebra | <p>Suppose I have a square matrix $\mathsf{A}$ with $\det \mathsf{A}\neq 0$.</p>
<p>How could we define the following operation? $$\mathsf{A}!$$</p>
<p>Maybe we could make some simple example, admitted it makes any sense, with </p>
<p>$$\mathsf{A} =
\left(\begin{matrix}
1 & 3 \\
2 & 1
\end{matrix}
\right)
$$</p>
| <p>For any holomorphic function <span class="math-container">$G$</span>, we can define a corresponding matrix function <span class="math-container">$\tilde{G}$</span> via (a formal version of) the Cauchy Integral Formula: We set
<span class="math-container">$$\tilde{G}(B) := \frac{1}{2 \pi i} \oint_C G(z) (z I - B)^{-1} \, dz ,$$</span>
where <span class="math-container">$C$</span> is an (arbitrary) anticlockwise curve that encloses (once each) the eigenvalues of the (square) matrix <span class="math-container">$B$</span>. Note that the condition on <span class="math-container">$C$</span> means that restrictions on the domain of <span class="math-container">$G$</span> determine restrictions on the domain of <span class="math-container">$\tilde{G}$</span>.</p>
<p>So, we could make sense of the factorial of a matrix if we had a holomorphic function that restricted to the factorial function <span class="math-container">$n \mapsto n!$</span> on nonnegative integers. Fortunately, there is such a function: The function <span class="math-container">$$F: z \mapsto \Gamma(z + 1),$$</span> where <span class="math-container">$\Gamma$</span> denotes the <a href="https://en.wikipedia.org/wiki/Gamma_function" rel="noreferrer">Gamma function</a>, satisfies <span class="math-container">$F(n) = n!$</span> for nonnegative integers <span class="math-container">$n$</span>. (There is a sense in which <a href="https://math.stackexchange.com/questions/1537/why-is-eulers-gamma-function-the-best-extension-of-the-factorial-function-to"><span class="math-container">$F$</span> is the best possible function extending the factorial function</a>, but notice the target of that link really just discusses the real Gamma function, which our <span class="math-container">$\Gamma$</span> preferentially extends.) Thus, we may define factorial of a (square) matrix <span class="math-container">$B$</span> by substituting the second display equation above into the first:
<span class="math-container">$$\color{#df0000}{\boxed{B! := \tilde{F}(B) = \frac{1}{2 \pi i} \oint_C \Gamma(z + 1) (z I - B)^{-1} \, dz}} .$$</span></p>
<p>The (scalar) Cauchy Integral Formula shows that this formulation has the obviously desirable property that for scalar matrices it recovers the usual factorial, or more precisely, that <span class="math-container">$\pmatrix{n}! = \pmatrix{n!}$</span> (for nonnegative integers <span class="math-container">$n$</span>).</p>
<p>Alternatively, one could define a matrix function <span class="math-container">$\tilde G$</span> (and in particular define <span class="math-container">$B!$</span>) by evaluating formally the power series <span class="math-container">$\sum_{i = 0}^{\infty} a_k (z - z_0)^k$</span> for <span class="math-container">$G$</span> about some point <span class="math-container">$z_0$</span>, that is, declaring <span class="math-container">$\tilde G(B) := \sum_{i = 0}^{\infty} a_k (B - z_0 I)^k$</span>, but in general this definition is more restrictive than the Cauchy Integral Formula definition, simply because the power series need not converge everywhere (where it does converge, it converges to the value given by the integral formula). Indeed, we cannot use a power series for <span class="math-container">$F$</span> to evaluate <span class="math-container">$A!$</span> directly for our particular <span class="math-container">$A$</span>: The function <span class="math-container">$F$</span> has a pole on the line segment in <span class="math-container">$\Bbb C$</span> with endpoints the eigenvalues of <span class="math-container">$A$</span>, so there is no open disk in the domain of <span class="math-container">$F$</span> containing all of the eigenvalues of <span class="math-container">$A$</span>, and hence there is no basepoint <span class="math-container">$z_0$</span> for which the series for <span class="math-container">$\tilde F$</span> converges at <span class="math-container">$A$</span>.</p>
<p>We can define <span class="math-container">$\tilde G$</span> in yet another way, which coincides appropriately with the above definitions but which is more amenable to explicit computation: If <span class="math-container">$B$</span> is diagonalizable, so that we can decompose <span class="math-container">$$B = P \pmatrix{\lambda_1 & & \\ & \ddots & \\ & & \lambda_n} P^{-1} ,$$</span>
for eigenvalues <span class="math-container">$\lambda_a$</span> of <span class="math-container">$B$</span> and some matrix <span class="math-container">$P$</span>, we define
<span class="math-container">$$\tilde{G}(B) := P \pmatrix{G(\lambda_1) & & \\ & \ddots & \\ & & G(\lambda_n)} P^{-1} .$$</span> Indeed, by substituting and rearranging, we can see that this coincides, at least formally, with the power series characterization. There is a similar but more complicated formula for nondiagonalizable <span class="math-container">$B$</span> that I won't write out here but which is given in the Wikipedia article <a href="https://en.wikipedia.org/wiki/Matrix_function#Jordan_decomposition" rel="noreferrer"><em>Matrix function</em></a>.</p>
<p><strong>Example</strong> The given matrix <span class="math-container">$A$</span> has distinct eigenvalues <span class="math-container">$\lambda_{\pm} = 1 \pm \sqrt{6}$</span>, and so can be diagonalized as <span class="math-container">$$P \pmatrix{1 - \sqrt{6} & 0 \\ 0 & 1 + \sqrt{6}} P^{-1} ;$$</span> indeed, we can take <span class="math-container">$$P = \pmatrix{\tfrac{1}{2} & \tfrac{1}{2} \\ -\frac{1}{\sqrt{6}} & \frac{1}{\sqrt{6}}}.$$</span></p>
<p>Now, <span class="math-container">$F(\lambda_{\pm}) = \Gamma(\lambda_{\pm} + 1) = \Gamma (2 {\pm} \sqrt{6}),$</span>
and putting this all together gives that
<span class="math-container">\begin{align*}\pmatrix{1 & 3 \\ 2 & 1} ! = \tilde{F}(A) &= P \pmatrix{F(\lambda_-) & 0 \\ 0 & F(\lambda_+)} P^{-1} \\ &= \pmatrix{\tfrac{1}{2} & \tfrac{1}{2} \\ -\frac{1}{\sqrt{6}} & \frac{1}{\sqrt{6}}} \pmatrix{\Gamma (2 - \sqrt{6}) & 0 \\ 0 & \Gamma (2 + \sqrt{6})} \pmatrix{1 & -\frac{\sqrt{3}}{\sqrt{2}} \\ 1 & \frac{\sqrt{3}}{\sqrt{2}}} .\end{align*}</span>
Multiplying this out gives
<span class="math-container">$$\color{#df0000}{\boxed{\pmatrix{1 & 3 \\ 2 & 1} ! = \pmatrix{\frac{1}{2} \alpha_+ & \frac{\sqrt{3}}{2 \sqrt{2}} \alpha_- \\ \frac{1}{\sqrt{6}} \alpha_- & \frac{1}{2} \alpha_+}}} ,$$</span>
where <span class="math-container">$$\color{#df0000}{\alpha_{\pm} = \Gamma(2 + \sqrt{6}) \pm \Gamma(2 - \sqrt{6})}. $$</span></p>
<p>It's perhaps not very illuminating, but <span class="math-container">$A!$</span> has numerical value
<span class="math-container">$$
\pmatrix{1 & 3 \\ 2 & 1}! \approx \pmatrix{3.62744 & 8.84231 \\ 5.89488 & 3.62744} .
$$</span></p>
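<p>This numerical value can be reproduced from the boxed closed form with nothing beyond the Python standard library (a sketch; <code>math.gamma</code> accepts the negative non-integer argument <span class="math-container">$2 - \sqrt{6}$</span>):</p>

```python
import math

s = math.sqrt(6)
alpha_plus = math.gamma(2 + s) + math.gamma(2 - s)   # Gamma(2+sqrt6) + Gamma(2-sqrt6)
alpha_minus = math.gamma(2 + s) - math.gamma(2 - s)  # Gamma(2+sqrt6) - Gamma(2-sqrt6)

# entries of A! from the boxed formula above
A_factorial = [
    [alpha_plus / 2, math.sqrt(3) / (2 * math.sqrt(2)) * alpha_minus],
    [alpha_minus / math.sqrt(6), alpha_plus / 2],
]
```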
<p>To carry out these computations, one can use Maple's built-in <a href="https://www.maplesoft.com/support/help/Maple/view.aspx?path=LinearAlgebra%2FMatrixFunction" rel="noreferrer"><code>MatrixFunction</code></a> routine (it requires the <code>LinearAlgebra</code> package) to write a function that computes the factorial of any matrix:</p>
<pre><code>MatrixFactorial := X -> LinearAlgebra:-MatrixFunction(X, GAMMA(z + 1), z);
</code></pre>
<p>To evaluate, for example, <span class="math-container">$A!$</span>, we then need only run the following:</p>
<pre><code>A := Matrix([[1, 3], [2, 1]]);
MatrixFactorial(A);
</code></pre>
<p>(NB executing this code returns an expression for <span class="math-container">$A!$</span> different from the one above: Their values can be seen to coincide using the the reflection formula <span class="math-container">$-z \Gamma(z) \Gamma(-z) = \frac{\pi}{\sin \pi z} .$</span> We can further simplify the expression using the identity <span class="math-container">$\Gamma(z + 1) = z \Gamma(z)$</span> extending the factorial identity <span class="math-container">$(n + 1)! = (n + 1) \cdot n!$</span> to write <span class="math-container">$\Gamma(2 \pm \sqrt{6}) = (6 \pm \sqrt{6}) \Gamma(\pm \sqrt{6})$</span> and so write the entries as expressions algebraic in <span class="math-container">$\pi$</span>, <span class="math-container">$\sin(\pi \sqrt{6})$</span>, and <span class="math-container">$\Gamma(\sqrt{6})$</span> alone. One can compel Maple to carry out these substitutions by executing <code>simplify(map(expand, %));</code> immediately after executing the previous code.) To compute the numerical value, we need only execute <code>evalf(%);</code> immediately after the previous code.</p>
<p>By the way, we need not have that <span class="math-container">$\det B \neq 0$</span> in order to define <span class="math-container">$B!$</span>. In fact, proceeding as above we find that the factorial of the (already diagonal) zero matrix is the identity matrix: <span class="math-container">$$0! = \pmatrix{\Gamma(1) \\ & \ddots \\ & & \Gamma(1)} = I .$$</span> Likewise using the formula for nondiagonalizable matrices referenced above together with a special identity gives that the factorial of the <span class="math-container">$2 \times 2$</span> Jordan block of eigenvalue <span class="math-container">$0$</span> is, somewhat amusingly,
<span class="math-container">$$\pmatrix{0 & 1\\0 & 0} ! = \pmatrix{1 & -\gamma \\ 0 & 1} ,$$</span> where <span class="math-container">$\gamma$</span> is the <a href="https://en.wikipedia.org/wiki/Euler%E2%80%93Mascheroni_constant" rel="noreferrer">Euler-Mascheroni constant</a>.</p>
| <p>The <a href="https://en.wikipedia.org/wiki/Gamma_function">gamma function</a> is analytic. Use the <a href="https://en.wikipedia.org/wiki/Matrix_function">power series</a> of it.</p>
<p>EDIT: already done: <a href="http://www.sciencedirect.com/science/article/pii/S0893965997001390">Some properties of Gamma and Beta matrix functions</a> (maybe paywalled).</p>
|
geometry | <p>I learned that the volume of a sphere is <span class="math-container">$\frac{4}{3}\pi r^3$</span>, but why? The <span class="math-container">$\pi$</span> kind of makes sense because its round like a circle, and the <span class="math-container">$r^3$</span> because it's 3-D, but <span class="math-container">$\frac{4}{3}$</span> is so random! How could somebody guess something like this for the formula?</p>
| <p>In addition to the methods of calculus, Pappus, and Archimedes already mentioned, <a href="https://en.wikipedia.org/wiki/Cavalieri%27s_principle" rel="nofollow noreferrer">Cavalieri's Principle</a> can be useful for these kinds of problems.</p>
<p>Suppose you have two solid figures lined up next to each other, each fitting between the same two parallel planes. (E.g., two stacks of pennies lying on the table, of the same height). Then, consider cutting the two solids by a plane parallel to the given two and in between them. If the cross-sectional area thus formed is the same for each of the solids for any such plane, the volumes of the solids are the same.</p>
<p>If you're willing to accept that you know the volume of a cone is 1/3 that of the cylinder with the same base and height, you can use Cavalieri, comparing a hemisphere to a cylinder with an inscribed cone, to get the volume of the sphere. This diagram (from Wikipedia) illustrates the construction: <a href="https://en.wikipedia.org/wiki/File:Sphere_cavalieri.svg" rel="nofollow noreferrer">look here</a></p>
<p>Consider a cylinder of radius <span class="math-container">$R$</span> and height <span class="math-container">$R$</span>, with, inside it, an inverted cone, with base of radius <span class="math-container">$R$</span> coinciding with the top of the cylinder, and again height <span class="math-container">$R$</span>. Put next to it a hemisphere of radius <span class="math-container">$R$</span>. Now consider the cross section of each at height <span class="math-container">$y$</span> above the base. For the cylinder/cone system, the area of the cross-section is <span class="math-container">$\pi (R^2-y^2)$</span>. It's the same for the hemisphere cross-section, as you can see by doing the Pythagorean theorem with any vector from the sphere center to a point on the sphere at height y to get the radius of the cross section (which is circular).</p>
<p>Since the cylinder/cone and hemisphere have the same height, by Cavalieri's Principle the volumes of the two are equal. The cylinder volume is <span class="math-container">$\pi R^3$</span>, the cone is a third that, so the hemisphere volume is <span class="math-container">$\frac{2}{3} \pi R^3$</span>. Thus the sphere of radius <span class="math-container">$R$</span> has volume <span class="math-container">$\frac{4}{3} \pi R^3$</span>.</p>
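<p>A crude Riemann-sum check of the argument for <span class="math-container">$R=1$</span> (a sketch): stacking the hemisphere's cross-sections <span class="math-container">$\pi(R^2 - y^2)$</span> reproduces the cylinder-minus-cone volume, i.e. <span class="math-container">$\tfrac{2}{3}\pi$</span>.</p>

```python
import math

R, N = 1.0, 100_000
dy = R / N

# stack the hemisphere's slices pi*(R^2 - y^2) from y = 0 to R
hemisphere = sum(math.pi * (R * R - (i * dy) ** 2) * dy for i in range(N))

cylinder = math.pi * R ** 2 * R  # radius R, height R
cone = cylinder / 3              # a cone is one third of its cylinder
```

Doubling the hemisphere sum recovers the familiar <span class="math-container">$\tfrac{4}{3}\pi R^3$</span>.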
| <p>The volume of a sphere with radius <span class="math-container">$a$</span> may be found by evaluating the triple integral<span class="math-container">$$V=\iiint \limits _S\,dx\,dy\,dz,$$</span>where <span class="math-container">$S$</span> is the volume enclosed by the sphere <span class="math-container">$x^2+y^2+z^2=a^2$</span>. Changing variables to spherical polar coordinates, we obtain<span class="math-container">$$V=\int \limits _0^{2\pi}d\phi \int \limits _0^\pi d\theta \int \limits _0^ar^2\sin \theta \,dr=\int \limits _0^{2\pi}d\phi \int \limits _0^\pi \sin \theta \,d\theta \int \limits _0^ar^2\,dr=\frac{4\pi a^3}{3},$$</span>as expected.</p>
|
combinatorics | <p>The Collatz Conjecture is a famous conjecture in mathematics that has lasted for over 70 years. It goes as follows:</p>
<p>Define $f(n)$ to be as a function on the natural numbers by:</p>
<p>$f(n) = n/2$ if $n$ is even and
$f(n) = 3n+1$ if $n$ is odd</p>
<p>The conjecture is that for all $n \in \mathbb{N}$, $n$ eventually converges under iteration by $f$ to $1$.</p>
<p>I was wondering if the "5n+1" problem has been solved. This problem is the same as the Collatz problem except that in the above one replaces $3n+1$ with $5n+1$.</p>
| <p>You shouldn't expect this to be true. Here is a nonrigorous argument. Let $n_k$ be the sequence of odd numbers you obtain. So (heuristically), with probability $1/2$, we have $n_{k+1} = (5n_k+1)/2$, with probability $1/4$, we have $n_{k+1} = (5 n_k+1)/4$, with probability $1/8$, we have $n_{k+1} = (5 n_k+1)/8$ and so forth. Setting $x_k = \log n_k$, we approximately have $x_{k+1} \approx x_k + \log 5 - \log 2$ with probability $1/2$, $x_{k+1} \approx x_k + \log 5 - 2 \log 2$ with probability $1/4$, $x_{k+1} \approx x_k + \log 5 - 3 \log 2$ with probability $1/8$ and so forth.</p>
<p>So the expected change from $x_{k}$ to $x_{k+1}$ is
$$\sum_{j=1}^{\infty} \frac{ \log 5 - j \log 2}{2^j} = \log 5 - 2 \log 2.$$</p>
<p>This is positive! So, heuristically, I expect this sequence to run off to $\infty$. This is different from the $3n+1$ problem, where $\log 3 - 2 \log 2 <0$, and so you heuristically expect the sequence to decrease over time. </p>
<p>Here is a numerical example. I started with $n=25$ and generated $25$ odd numbers. Below is a plot of $(k, \log n_k)$ versus the linear growth predicted by my heuristic. Notice that we are up to 4-digit numbers with no signs of dropping down.</p>
<p><img src="https://i.sstatic.net/8nRFy.png" alt="alt text"></p>
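<p>This experiment is easy to reproduce (a sketch; the helper name <code>step</code> is mine). Iterating the map from $n=25$ and keeping the odd values shows the steady climb in the plot, with no sign of a drop back toward $1$:</p>

```python
def step(n):
    # one step of the 5n+1 map
    return n // 2 if n % 2 == 0 else 5 * n + 1

odds = [25]  # the odd numbers n_k along the trajectory, starting at 25
n = 25
while len(odds) < 25:
    n = step(n)
    if n % 2 == 1:
        odds.append(n)
```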
| <p>In Part I of Lagarias' <a href="http://arxiv.org/abs/math/0309224">extensive, annotated bibliography</a> of the 3x+1 problem, he notes a 1999 paper by Metzger (reference 112) regarding the 5x+1 problem:</p>
<blockquote>
<p>For the 5x + 1 problem he shows that on the positive integers there is no cycle of size 1, a unique cycle of size 2, having smallest element n = 1, and exactly two cycles of size 3, having smallest elements n = 13 and n = 17, respectively.</p>
</blockquote>
<p>It is unclear from the notes whether the paper shows that these are the <em>only</em> cycles of the 5x+1 problem or whether there may exist longer cycles.</p>
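<p>The cycle census quoted above is easy to confirm by direct iteration (a sketch; <code>cycle_odds</code> is my helper, and the cycle "size" counts its odd members): starting from 1, 13, and 17 the map returns to its starting point with odd members {1, 3}, {13, 33, 83}, and {17, 27, 43} respectively.</p>

```python
def step(n):
    # one step of the 5n+1 map
    return n // 2 if n % 2 == 0 else 5 * n + 1

def cycle_odds(start, max_steps=1000):
    """Follow the map from an odd `start`; return the sorted odd members
    of its cycle, or None if it never returns within max_steps."""
    members = {start}
    n = step(start)
    for _ in range(max_steps):
        if n == start:
            return sorted(members)
        if n % 2 == 1:
            members.add(n)
        n = step(n)
    return None
```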
|
number-theory | <p>I've solved for it making a computer program, but was wondering there was a mathematical equation that you could use to solve for the nth prime?</p>
| <p>No, there is no known formula that gives the nth prime, except <a href="https://www.youtube.com/watch?v=j5s0h42GfvM" rel="nofollow noreferrer">artificial ones</a> you can write that are basically equivalent to "the <span class="math-container">$n$</span>th prime". But if you only want an approximation, the <span class="math-container">$n$</span>th prime is roughly around <span class="math-container">$n \ln n$</span> (or more precisely, near the number <span class="math-container">$m$</span> such that <span class="math-container">$m/\ln m = n$</span>) by the <a href="http://en.wikipedia.org/wiki/Prime_number_theorem" rel="nofollow noreferrer">prime number theorem</a>. In fact, we have the following asymptotic bound on the <span class="math-container">$n$</span>th prime <span class="math-container">$p_n$</span>:</p>
<blockquote>
<p><span class="math-container">$n \ln n + n(\ln\ln n - 1) < p_n < n \ln n + n \ln \ln n$</span> for <span class="math-container">$n\ge{}6$</span></p>
</blockquote>
<p>You can <a href="http://en.wikipedia.org/wiki/Generating_primes#Prime_sieves" rel="nofollow noreferrer">sieve</a> within this range if you want the <span class="math-container">$n$</span>th prime. [Edit: Using more accurate estimates you'll have a much smaller range to sieve; see the answer by Charles.]</p>
<p>Entirely unrelated: if you want to see formulae that generate a lot of primes (not the <span class="math-container">$n$</span>th prime) up to some extent, like the famous <span class="math-container">$f(n)=n^2-n+41$</span>, look at the Wikipedia article <a href="http://en.wikipedia.org/wiki/Formula_for_primes" rel="nofollow noreferrer">formula for primes</a>, or Mathworld for <a href="http://mathworld.wolfram.com/PrimeFormulas.html" rel="nofollow noreferrer">Prime Formulas</a>.</p>
| <p>Far better than sieving in the large range ShreevatsaR suggested (which, for the 10¹⁵th prime, has 10¹⁵ members and takes about 33 TB to store in compact form), take a good first guess like Riemann's R and use one of the advanced methods of computing pi(x) for that first guess. (If this is far off for some reason—it shouldn't be—estimate the distance to the proper point and calculate a new guess from there.) At this point, you can sieve the small distance, perhaps just 10⁸ or 10⁹, to the desired number.</p>
<p>This is about 100,000 times faster for numbers around the size I indicated. Even for numbers as small as 10 to 12 digits, this is faster if you don't have a precomputed table large enough to contain your answer.</p>
|
linear-algebra | <p>If the matrix is positive definite, then all its eigenvalues are strictly positive. </p>
<p>Is the converse also true?<br>
That is, if the eigenvalues are strictly positive, then matrix is positive definite?<br>
Can you give example of $2 \times 2$ matrix with $2$ positive eigenvalues but is not positive definite?</p>
| <p>I think this is false. Let <span class="math-container">$A = \begin{pmatrix} 1 & -3 \\ 0 & 1 \end{pmatrix}$</span> be a <span class="math-container">$2\times 2$</span> matrix, written in the canonical basis of <span class="math-container">$\mathbb R^2$</span>. Then <span class="math-container">$A$</span> has a double eigenvalue <span class="math-container">$\lambda=1$</span>. If <span class="math-container">$v=\begin{pmatrix}1\\1\end{pmatrix}$</span>, then <span class="math-container">$\langle v, Av \rangle < 0$</span>.</p>
<p>The point is that the matrix can have all its eigenvalues strictly positive, but it does not follow that it is positive definite.</p>
| <p>This question does a great job of illustrating the problem with thinking about these things in terms of coordinates. The thing that is positive-definite is not a matrix $M$ but the <em>quadratic form</em> $x \mapsto x^T M x$, which is a very different beast from the linear transformation $x \mapsto M x$. For one thing, the quadratic form does not depend on the antisymmetric part of $M$, so using an asymmetric matrix to define a quadratic form is redundant. And there is <em>no reason</em> that an asymmetric matrix and its symmetrization need to be at all related; in particular, they do not need to have the same eigenvalues. </p>
|
geometry | <p><strong>Background:</strong> Many (if not all) of the transformation matrices used in $3D$ computer graphics are $4\times 4$, including the three values for $x$, $y$ and $z$, plus an additional term which usually has a value of $1$.</p>
<p>Given the extra computing effort required to multiply $4\times 4$ matrices instead of $3\times 3$ matrices, there must be a substantial benefit to including that extra fourth term, even though $3\times 3$ matrices <em>should</em> (?) be sufficient to describe points and transformations in 3D space.</p>
<p><strong>Question:</strong> Why is the inclusion of a fourth term beneficial? I can guess that it makes the computations easier in some manner, but I would really like to know <em>why</em> that is the case.</p>
| <p>I'm going to copy <a href="https://stackoverflow.com/questions/2465116/understanding-opengl-matrices/2465290#2465290">my answer from Stack Overflow</a>, which also shows why 4-component vectors (and hence 4×4 matrices) are used instead of 3-component ones.</p>
<hr>
<p>In most 3D graphics a point is represented by a 4-component vector (x, y, z, w), where w = 1. Usual operations applied on a point include translation, scaling, rotation, reflection, skewing and combination of these. </p>
<p>These transformations can be represented by a mathematical object called "matrix". A matrix applies on a vector like this:</p>
<pre><code>[ a b c tx ] [ x ] [ a*x + b*y + c*z + tx*w ]
| d e f ty | | y | = | d*x + e*y + f*z + ty*w |
| g h i tz | | z | | g*x + h*y + i*z + tz*w |
[ p q r s ] [ w ] [ p*x + q*y + r*z + s*w ]
</code></pre>
<p>For example, scaling is represented as</p>
<pre><code>[ 2 . . . ] [ x ] [ 2x ]
| . 2 . . | | y | = | 2y |
| . . 2 . | | z | | 2z |
[ . . . 1 ] [ 1 ] [ 1 ]
</code></pre>
<p>and translation as</p>
<pre><code>[ 1 . . dx ] [ x ] [ x + dx ]
| . 1 . dy | | y | = | y + dy |
| . . 1 dz | | z | | z + dz |
[ . . . 1 ] [ 1 ] [ 1 ]
</code></pre>
<p><strong><em>One of the reasons for the 4th component is to make a translation representable by a matrix.</em></strong></p>
<p>The advantage of using a matrix is that multiple transformations can be combined into one via matrix multiplication.</p>
<p>Now, if the purpose were simply to bring translation to the table, then I'd say (x, y, z, 1) instead of (x, y, z, w) and make the last row of the matrix always <code>[0 0 0 1]</code>, as is usually done for 2D graphics. In fact, the 4-component vector will be mapped back to the normal 3D vector via this formula:</p>
<pre><code>[ x(3D) ] [ x / w ]
| y(3D) | = | y / w |
[ z(3D) ] [ z / w ]
</code></pre>
<p>This is called <a href="http://en.wikipedia.org/wiki/Homogeneous_coordinates#Use_in_computer_graphics" rel="noreferrer">homogeneous coordinates</a>. <strong><em>Allowing this makes the perspective projection expressible with a matrix too,</em></strong> which can again combine with all other transformations.</p>
<p>For example, since objects farther away should be smaller on screen, we transform the 3D coordinates into 2D using formula</p>
<pre><code>x(2D) = x(3D) / (10 * z(3D))
y(2D) = y(3D) / (10 * z(3D))
</code></pre>
<p>Now if we apply the projection matrix</p>
<pre><code>[ 1 . . . ] [ x ] [ x ]
| . 1 . . | | y | = | y |
| . . 1 . | | z | | z |
[ . . 10 . ] [ 1 ] [ 10*z ]
</code></pre>
<p>then the real 3D coordinates would become</p>
<pre><code>x(3D) := x/w = x/10z
y(3D) := y/w = y/10z
z(3D) := z/w = 0.1
</code></pre>
<p>so we just need to chop the z-coordinate out to project to 2D.</p>
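<p>The mechanics above can be sketched in a few lines of Python (illustrative only; real graphics code would use a library, and all names here are mine):</p>

```python
def mat_vec(M, v):
    """Apply a 4x4 matrix to a 4-component column vector."""
    return [sum(M[i][j] * v[j] for j in range(4)) for i in range(4)]

def translation(dx, dy, dz):
    return [[1, 0, 0, dx],
            [0, 1, 0, dy],
            [0, 0, 1, dz],
            [0, 0, 0, 1]]

def to_3d(v):
    """Map homogeneous (x, y, z, w) back to 3D by dividing by w."""
    x, y, z, w = v
    return [x / w, y / w, z / w]

p = [1, 2, 3, 1]                            # the point (1, 2, 3), with w = 1
moved = mat_vec(translation(10, 0, 0), p)   # [11, 2, 3, 1]

# The toy projection matrix from the text: its last row [0, 0, 10, 0] sets w = 10z.
proj = [[1, 0, 0, 0],
        [0, 1, 0, 0],
        [0, 0, 1, 0],
        [0, 0, 10, 0]]
projected = to_3d(mat_vec(proj, p))         # [x/(10z), y/(10z), 0.1]
```

<p>Note how the translation, impossible with a 3×3 matrix, and the perspective divide both fall out of the same 4×4 machinery.</p>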
| <blockquote>
<p>Even though 3x3 matrices should (?) be sufficient to describe points and transformations in 3D space.</p>
</blockquote>
<p>No, they aren't enough! Suppose you represent points in space using 3D vectors. You can transform these using 3x3 matrices. But if you examine the definition of matrix multiplication you should see immediately that multiplying a zero 3D vector by a 3x3 matrix gives you another zero vector. So simply multiplying by a 3x3 matrix can never move the origin. But translations and rotations do need to move the origin. So 3x3 matrices are not enough.</p>
<p>I haven't tried to explain exactly how 4x4 matrices are used. But I hope I've convinced you that 3x3 matrices aren't up to the task and that something more is needed.</p>
|
probability | <p>Given $n$ independent geometric random variables $X_1,\dots,X_n$, each with probability parameter $p$ (and thus expectation $E\left(X_i\right) = \frac{1}{p}$), what is
$$E_n = E\left(\max_{i \in 1 .. n}X_i\right)$$</p>
<hr>
<p>If we instead look at a continuous-time analogue, e.g. exponential random variables $Y_1,\dots,Y_n$ with rate parameter $\lambda$, this is simple:
$$E\left(\max_{i \in 1 .. n}Y_i\right) = \sum_{i=1}^n\frac{1}{i\lambda}$$</p>
<p>(I think this is right... that's the time for the first plus the time for the second plus ... plus the time for the last.)</p>
<p>However, I can't find something similarly nice for the discrete-time case.</p>
<hr>
<p>What I <em>have</em> done is to construct a Markov chain modelling the number of the $X_n$ that haven't yet "hit". (i.e. at each time interval, perform a binomial trial on the number of $X_n$ remaining to see which "hit", and then move to the number that didn't "hit".) This gives
$$E_n = 1 + \sum_{i=0}^n \left(\begin{matrix}n\\i\end{matrix}\right)p^{n-i}(1-p)^iE_i$$
which gives the correct answer, but is a nightmare of recursion to calculate. I'm hoping for something in a shorter form.</p>
| <p>There is no nice, closed-form expression for the expected maximum of IID geometric random variables. However, the expected maximum of the corresponding IID exponential random variables turns out to be a very good approximation. More specifically, we have the hard bounds</p>
<p>$$\frac{1}{\lambda} H_n \leq E_n \leq 1 + \frac{1}{\lambda} H_n,$$
and the close approximation
$$E_n \approx \frac{1}{2} + \frac{1}{\lambda} H_n,$$
where $H_n$ is the $n$th harmonic number $H_n = \sum_{k=1}^n \frac{1}{k}$, and $\lambda = -\log (1-p)$, the parameter for the corresponding exponential distribution.</p>
<p>Here's the derivation. Let $q = 1-p$. Use Did's expression with the fact that if $X$ is geometric with parameter $p$ then $P(X \leq k) = 1-q^k$ to get </p>
<p>$$E_n = \sum_{k=0}^{\infty} (1 - (1-q^k)^n).$$</p>
<p>By viewing this infinite sum as right- and left-hand Riemann sum approximations of the corresponding integral we obtain </p>
<p>$$\int_0^{\infty} (1 - (1 - q^x)^n) dx \leq E_n \leq 1 + \int_0^{\infty} (1 - (1 - q^x)^n) dx.$$</p>
<p>The analysis now comes down to understanding the behavior of the integral. With the variable switch $u = 1 - q^x$ we have</p>
<p>$$\int_0^{\infty} (1 - (1 - q^x)^n) dx = -\frac{1}{\log q} \int_0^1 \frac{1 - u^n}{1-u} du = -\frac{1}{\log q} \int_0^1 \left(1 + u + \cdots + u^{n-1}\right) du $$
$$= -\frac{1}{\log q} \left(1 + \frac{1}{2} + \cdots + \frac{1}{n}\right) = -\frac{1}{\log q} H_n,$$
which is exactly the expression the OP has above for the expected maximum of $n$ corresponding IID exponential random variables, with $\lambda = - \log q$.</p>
<p>This proves the hard bounds, but what about the more precise approximation? The easiest way to see that is probably to use the <a href="http://en.wikipedia.org/wiki/Euler%E2%80%93Maclaurin_formula" rel="noreferrer">Euler-Maclaurin summation formula</a> for approximating a sum by an integral. Up to a first-order error term, it says exactly that</p>
<p>$$E_n = \sum_{k=0}^{\infty} (1 - (1-q^k)^n) \approx \int_0^{\infty} (1 - (1 - q^x)^n) dx + \frac{1}{2},$$
yielding the approximation
$$E_n \approx -\frac{1}{\log q} H_n + \frac{1}{2},$$
with error term given by
$$\int_0^{\infty} n (\log q) q^x (1 - q^x)^{n-1} \left(x - \lfloor x \rfloor - \frac{1}{2}\right) dx.$$
One can verify that this is quite small unless $n$ is also small or $q$ is extreme.</p>
<p>All of these results, including a more rigorous justification of the approximation, the OP's recursive formula, and the additional expression
$$E_n = \sum_{i=1}^n \binom{n}{i} (-1)^{i+1} \frac{1}{1-q^i},$$
are in Bennett Eisenberg's paper "On the expectation of the maximum of IID geometric random variables" (<em>Statistics and Probability Letters</em> 78 (2008) 135-143). </p>
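<p>The hard bounds and the half-plus-harmonic approximation are easy to check numerically (a Python sketch; the truncation point and the function names are my own choices):</p>

```python
import math

def expected_max_geometric(n, p, terms=5000):
    """E_n = sum_{k>=0} (1 - (1 - q^k)^n), truncated once the tail is negligible."""
    q = 1 - p
    return sum(1 - (1 - q**k)**n for k in range(terms))

def harmonic(n):
    return sum(1 / k for k in range(1, n + 1))

n, p = 10, 0.3
lam = -math.log(1 - p)             # parameter of the matching exponential
exact = expected_max_geometric(n, p)
approx = harmonic(n) / lam + 0.5   # the half-plus-harmonic approximation
```

<p>For moderate $n$ and $p$ the two agree closely, comfortably inside the hard bounds $H_n/\lambda \leq E_n \leq 1 + H_n/\lambda$.</p>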
| <p>First principle:</p>
<blockquote>
<p>To deal with maxima $M$ of independent random variables, use as much as possible events of the form $[M\leqslant x]$.</p>
</blockquote>
<p>Second principle:</p>
<blockquote>
<p>To compute the expectation of a nonnegative random variable $Z$, use as much as possible the complementary cumulative distribution function $\mathrm P(Z\geqslant z)$.</p>
</blockquote>
<p>In the discrete case, $\mathrm E(M)=\displaystyle\sum_{k\ge0}\mathrm P(M>k)$, the event $[M>k]$ is the complement of $[M\leqslant k]$, and the event $[M\leqslant k]$ is the intersection of the independent events $[X_i\leqslant k]$, each of probability $F_X(k)$. Hence,
$$
\mathrm E(M)=\sum_{k\geqslant0}(1-\mathrm P(M\leqslant k))=\sum_{k\geqslant0}(1-\mathrm P(X\leqslant k)^n)=\sum_{k\geqslant0}(1-F_X(k)^n).
$$
The continuous case is even simpler. For i.i.d. nonnegative $X_1, X_2, \ldots, X_n$,
$$
\mathrm E(M)=\int_0^{+\infty}(1-F_X(t)^n) \, \mathrm{d}t.
$$</p>
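<p>The discrete formula can be sanity-checked against a direct Monte Carlo simulation (a Python sketch; the sampling-by-inversion trick $X=\lceil \ln U/\ln q\rceil$ and all names are my own):</p>

```python
import math
import random

def expected_max(n, p, terms=1000):
    """E(M) = sum_{k>=0} (1 - F_X(k)^n), with F_X(k) = 1 - (1-p)^k."""
    q = 1 - p
    return sum(1 - (1 - q**k)**n for k in range(terms))

def simulated_max(n, p, trials=100000, seed=42):
    """Monte Carlo estimate; geometrics drawn by inverting the CDF."""
    rng = random.Random(seed)
    log_q = math.log(1 - p)
    total = 0
    for _ in range(trials):
        total += max(max(1, math.ceil(math.log(1.0 - rng.random()) / log_q))
                     for _ in range(n))
    return total / trials

n, p = 5, 0.5
formula = expected_max(n, p)     # about 3.79
estimate = simulated_max(n, p)   # agrees to a couple of decimal places
```
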
|
probability | <p>I have been using Sebastian Thrun's course on AI and I have encountered a slightly difficult problem with probability theory.</p>
<p>He poses the following statement:</p>
<p>$$
P(R \mid H,S) = \frac{P(H \mid R,S) \; P(R \mid S)}{P(H \mid S)}
$$</p>
<p>I understand he used Bayes' Rule to get the RHS equation, but fail to see how he did this. If somebody could provide a breakdown of the application of the rule in this problem that would be great.</p>
| <p>Taking it one step at a time:
$$\begin{align}
\mathsf P(R\mid H, S) & = \frac{\mathsf P(R,H,S)}{\mathsf P(H, S)}
\\[1ex] & =\frac{\mathsf P(H\mid R,S)\,\mathsf P(R, S)}{\mathsf P(H, S)}
\\[1ex] & =\frac{\mathsf P(H\mid R,S)\,\mathsf P(R\mid S)\,\mathsf P(S)}{\mathsf P(H, S)}
\\[1ex] & =\frac{\mathsf P(H\mid R,S)\,\mathsf P(R\mid S)}{\mathsf P(H\mid S)}\frac{\mathsf P(S)}{\mathsf P(S)}
\\[1ex] & =\frac{\mathsf P(H\mid R,S)\;\mathsf P(R\mid S)}{\mathsf P(H\mid S)}
\end{align}$$</p>
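<p>For the skeptical, each step can be checked on a concrete joint distribution (a Python sketch; the distribution is made up purely for illustration):</p>

```python
from itertools import product

# An arbitrary strictly positive joint distribution P(R, H, S) on {0, 1}^3.
weights = {(r, h, s): 1 + r + 2 * h + 3 * s + r * h * s
           for r, h, s in product((0, 1), repeat=3)}
total = sum(weights.values())
joint = {k: w / total for k, w in weights.items()}

def P(event):
    """Probability of the event {(r, h, s) : event(r, h, s)}."""
    return sum(pr for (r, h, s), pr in joint.items() if event(r, h, s))

# Both sides of P(R | H, S) = P(H | R, S) P(R | S) / P(H | S), for R = H = S = 1:
p_all = P(lambda r, h, s: r and h and s)
lhs = p_all / P(lambda r, h, s: h and s)
rhs = (p_all / P(lambda r, h, s: r and s)) \
    * (P(lambda r, h, s: r and s) / P(lambda r, h, s: s)) \
    / (P(lambda r, h, s: h and s) / P(lambda r, h, s: s))
```

<p>The two sides agree for any joint distribution with nonzero conditioning events, which is exactly what the chain of equalities above proves.</p>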
| <p>You don't really need Bayes' Theorem. Just apply the definition of conditional probability in two ways. Firstly,</p>
<p>\begin{eqnarray*}
P(R\mid H,S) &=& \dfrac{P(R,H\mid S)}{P(H\mid S)} \\
&& \\
\therefore\quad P(R,H\mid S) &=& P(R\mid H,S)P(H\mid S).
\end{eqnarray*}</p>
<p>Secondly,</p>
<p>\begin{eqnarray*}
P(H\mid R,S) &=& \dfrac{P(R,H\mid S)}{P(R\mid S)} \\
&& \\
\therefore\quad P(R,H\mid S) &=& P(H\mid R,S)P(R\mid S).
\end{eqnarray*}</p>
<p>Combine these two to get the result.</p>
|
probability | <p>I was watching the movie <span class="math-container">$21$</span> yesterday, and in the first 15 minutes or so the main character is in a classroom, being asked a "trick" question (in the sense that the teacher believes that he'll get the wrong answer) which revolves around theoretical probability.</p>
<p>The question goes a little something like this (I'm paraphrasing, but the numbers are all exact):</p>
<p>You're on a game show, and you're given three doors. Behind one of the doors is a brand new car, behind the other two are donkeys. With each door you have a <span class="math-container">$1/3$</span> chance of winning. Which door would you pick?</p>
<p>The character picks A, as the odds are all equally in his favor.</p>
<p>The teacher then opens door C, revealing a donkey to be behind there, and asks him if he would like to change his choice. At this point he also explains that most people change their choices out of fear; paranoia; emotion and such.</p>
<p>The character does change his answer to B, but because (according to the movie), the odds are now in favor of door B with a <span class="math-container">$1/3$</span> chance of winning if door A is picked and <span class="math-container">$2/3$</span> if door B is picked.</p>
<p>What I don't understand is how removing the final door increases the odds of winning only if door B is picked. Surely the split should be 50/50 now, as removal of the final door tells you nothing about the first two?</p>
<p>I assume that I'm wrong; as I'd really like to think that they wouldn't make a movie that's so mathematically incorrect, but I just can't seem to understand why this is the case.</p>
<p>So, if anyone could tell me whether I'm right; or if not explain why, I would be extremely grateful.</p>
| <p>This problem, known as the Monty Hall problem, is famous for being so bizarre and counter-intuitive. It is in fact best to switch doors, and this is not hard to prove either. In my opinion, the reason it seems so bizarre the first time one (including me) encounters it is that humans are simply bad at thinking about probability. What follows is essentially how I have justified switching doors to myself over the years.</p>
<p>At the start of the game, you are asked to pick a single door. There is a $1/3$ chance that you have picked correctly, and a $2/3$ chance that you are wrong. This does not change when one of the two doors you did not pick is opened. The second time, you are choosing between whether your first guess was right (which has probability $1/3$) or wrong (probability $2/3$). Clearly it is more likely that your first guess was wrong, so you switch doors.</p>
<p>This didn't sit well with me when I first heard it. To me, it seemed that the situation of picking between two doors has a certain kind of <em>symmetry</em>-things are either behind one door or the other, with equal probability. Since this is not the case here, I was led to ask where the asymmetry comes from? What causes one door to be more likely to hold the prize than the other? The key is that the host <em>knows</em> which door has the prize, and opens a door that he knows does not have the prize behind it.</p>
<p>To clarify this, say you choose door $A$, and are then asked to choose between doors $A$ and $B$ (no doors have been opened yet). There is no advantage to switching in this situation. Say you are asked to choose between $A$ and $C$; again, there is no advantage in switching. However, what if you are asked to choose between a) the prize behind door $A$ and b) the better of the two prizes behind door $B$ and $C$. Clearly, in this case it is in your advantage to switch. But this is exactly the same problem as the one you've been confronted with! Why? Precisely because the host <em>always</em> opens (hence gets rid of) the door that you did not pick which has the worse prize behind it. This is what I mean when I say that the asymmetry in the situation comes from the knowledge of the host.</p>
| <p>To understand why your odds increase by changing door, let us take an extreme example first. Say there are $10000$ doors. Behind one of them is a car and behind the rest are donkeys. Now, the odds of choosing the car are $1\over10000$ and the odds of choosing a donkey are $9999\over10000$. Say you pick a random door, which we call X for now. According to the rules of the game, the game show host now opens all the doors except for two, one of which contains the car. You now have the option to switch. Since the probability of not choosing the car initially was $9999\over10000$, it is very likely you didn't choose the car. So if door X hides a donkey and you switch, you get the car. This means that as long as you pick a donkey on your first try and then switch, you will always get the car. </p>
<p>If we return to the original problem where there are only 3 doors, we see that the exact same logic applies. The probability that you choose a donkey on your first try is $2\over3$, while choosing the car is $1\over3$. If you choose a donkey on your first try and switch, you will get the car, and if you choose the car on your first try and switch, you will get a donkey. Thus, the probability that you will get the car, if you switch, is $2\over3$ (which is more than the initial $1\over3$).</p>
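<p>The $2/3$ advantage is also easy to confirm by brute force (a Python sketch of the game; names mine):</p>

```python
import random

def play(switch, trials=100000, seed=0):
    """Fraction of games won when always (or never) switching."""
    rng = random.Random(seed)
    wins = 0
    for _ in range(trials):
        car = rng.randrange(3)
        choice = rng.randrange(3)
        # The host opens a door that is neither the contestant's nor the car's.
        opened = next(d for d in range(3) if d != choice and d != car)
        if switch:
            choice = next(d for d in range(3) if d != choice and d != opened)
        wins += (choice == car)
    return wins / trials

stay = play(switch=False)     # close to 1/3
change = play(switch=True)    # close to 2/3
```

<p>Since switching wins exactly when staying loses, the two rates are complementary, and the simulation lands near $1/3$ and $2/3$ respectively.</p>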
|
logic | <p>Let's say that I prove statement $A$ by showing that the negation of $A$ leads to a contradiction. </p>
<p>My question is this: How does one go from "so there's a contradiction if we don't have $A$" to concluding that "we have $A$"?</p>
<p>That, to me, seems the exact opposite of logical. It sounds like we say "so, I'll have a really big problem if this thing isn't true, so out of convenience, I am just going to act like it's true". </p>
| <p>Proof by contradiction, as you stated, is the rule$\def\imp{\Rightarrow}$ "$\neg A \imp \bot \vdash A$" for any statement $A$, which in English is "If you can derive the statement that $\neg A$ implies a contradiction, then you can derive $A$". As pointed out by others, this is not a valid rule in intuitionistic logic. But I shall now show you why you probably have no choice but to agree with the rule (under certain mild conditions).</p>
<p>You see, given any statement $A$, the law of excluded middle says that "$A \lor \neg A$" is true, which in English is "Either $A$ or $\neg A$". Now is there any reason for this law to hold? If you desire that everything you can derive comes with direct evidence of some sort (such as various constructive logics), then it might not hold, because sometimes we have neither evidence for nor against a statement. However, if you believe that the statements you can make have meaning in the real world, then the law obviously holds because the real world either satisfies a statement or its negation, regardless of whether you can figure out which one.</p>
<p>The same reasoning also shows that a contradiction can never be true, because the real world never satisfies both a statement and its negation at the same time, simply by the meaning of negation. This gives the principle of explosion, which I will come to later.</p>
<p>Now given the law of excluded middle consider the following reasoning. If from $\neg A$ I can derive a contradiction, then $\neg A$ must be impossible, since my other rules are truth-preserving (starting from true statements they derive only true statements). Here we have used the property that a contradiction can never be true. Since $\neg A$ is impossible, and by law of excluded middle we know that either $A$ or $\neg A$ must be true, we have no other choice but to conclude that $A$ must be true.</p>
<p>This explains why proof by contradiction is valid, as long as you accept that for every statement $A$, exactly one of "$A$" and "$\neg A$" is true. The fact that we use logic to reason about the world we live in is precisely why almost all logicians accept classical logic. This is why I said "mild conditions" in my first paragraph.</p>
<p>Back to the principle of explosion, which is the rule "$\bot \vdash A$" for any statement $A$. At first glance, this may seem even more unintuitive than the proof by contradiction rule. But on the contrary, people use it without even realizing. For example, if you do not believe that I can levitate, you might say "If you can levitate, I will eat my hat!" Why? Because you know that if the condition is false, then whether the conclusion is true or false is completely irrelevant. They are implicitly assuming the rule that "$\bot \imp A$" is always true, which is equivalent to the principle of explosion.</p>
<p>We can hence show by a formal deduction that the law of excluded middle and the principle of explosion together give the ability to do proofs by contradiction:</p>
<p>[Suppose from "$\neg A$" you can derive "Contradiction".]</p>
<p>  $A \lor \neg A$. [law of excluded middle]</p>
<p>  If $A$:</p>
<p>    $A$.</p>
<p>  If $\neg A$:</p>
<p>    Contradiction.</p>
<p>    Thus $A$. [principle of explosion]</p>
<p>  Therefore $A$. [disjunction elimination]</p>
<p>Another possible way to obtain the proof by contradiction rule is if you accept double negation elimination, that is "$\neg \neg A \vdash A$" for any statement $A$. This can be justified by exactly the same reasoning as before, because if "$A$" is true then "$\neg A$" is false and hence "$\neg \neg A$" is true, and similarly if "$A$" is false so is "$\neg \neg A$". Below is a formal deduction showing that contradiction elimination and double negation elimination together give the ability to do proofs by contradiction:</p>
<p>[Suppose from "$\neg A$" you can derive "Contradiction".]</p>
<p>  If $\neg A$:</p>
<p>    Contradiction.</p>
<p>  Therefore $\neg \neg A$. [contradiction elimination / negation introduction]</p>
<p>  Thus $A$. [double negation elimination]</p>
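<p>For readers who like to see this mechanized: in a proof assistant such as Lean 4, the classical rule is available off the shelf (a sketch; <code>Classical.byContradiction</code> is the relevant core-library lemma):</p>

```lean
-- If assuming ¬A lets you derive a contradiction (False), then A holds.
example (A : Prop) (h : ¬A → False) : A :=
  Classical.byContradiction h

-- Double negation elimination is the same statement,
-- since ¬¬A unfolds to ¬A → False.
example (A : Prop) (hnn : ¬¬A) : A :=
  Classical.byContradiction hnn
```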
| <p>A contradiction isn't a “problem”. A contradiction is an impossibility. This isn't a matter of saying “Gee, if I have fewer than 20 dollars in the bank I won't be able to go out to dinner, and I want to so badly, I'll just assume I have more than 20 dollars.” This is a matter of walking into the bank and saying "I'd like to withdraw 20 dollars" and having a trapdoor under you collapse and a 300 lb security guard jumping on your spleen shouting in your ear “You don't <em>have</em> it!!! You don't have it!!” </p>
<p>You can't just say “Oh, I got a contradiction when I assumed I had 20 dollars... But that doesn't mean I don't have 20 dollars.”</p>
<p>It means <em>precisely</em> that. It is <em>impossible</em> for you to have 20. So you must conclude you <em>don't</em> have 20 dollars.</p>
<p>If you get a contradiction, it just isn't possible for A to be false. </p>
<p>A contradiction, by its definition, is an impossibility. So if you assume A isn't true and you get a contradiction, you have <em>proven</em> that it is <em>impossible</em> for A not to be true. If it is <em>impossible</em> for something not to be true, what other options are there? </p>
|
number-theory | <p>In <a href="https://math.stackexchange.com/a/373935/752">this recent answer</a> to <a href="https://math.stackexchange.com/q/373918/752">this question</a> by Eesu, Vladimir
Reshetnikov proved that
$$
\begin{equation}
\left( 26+15\sqrt{3}\right) ^{1/3}+\left( 26-15\sqrt{3}\right) ^{1/3}=4.\tag{1}
\end{equation}
$$</p>
<p>I would like to know if this result can be <em>generalized</em> to other triples of
natural numbers. </p>
<blockquote>
<p><strong>Question</strong>. What are the solutions of the following equation? $$ \begin{equation} \left( p+q\sqrt{3}\right) ^{1/3}+\left(
p-q\sqrt{3}\right) ^{1/3}=n,\qquad \text{where }\left( p,q,n\right)
\in \mathbb{N} ^{3}.\tag{2} \end{equation} $$</p>
</blockquote>
<p>For $(1)$ we could write $26+15\sqrt{3}$ in the form $(a+b\sqrt{3})^{3}$ </p>
<p>$$
26+15\sqrt{3}=(a+b\sqrt{3})^{3}=a^{3}+9ab^{2}+3( a^{2}b+b^{3})
\sqrt{3}
$$</p>
<p>and solve the system
$$
\left\{
\begin{array}{c}
a^{3}+9ab^{2}=26 \\
a^{2}b+b^{3}=5.
\end{array}
\right.
$$</p>
<p>A solution is $(a,b)=(2,1)$. Hence $26+15\sqrt{3}=(2+\sqrt{3})^3 $. Using the same method to $26-15\sqrt{3}$, we find $26-15\sqrt{3}=(2-\sqrt{3})^3 $, thus proving $(1)$.</p>
<p>For $(2)$ the very same idea yields</p>
<p>$$
p+q\sqrt{3}=(a+b\sqrt{3})^{3}=a^{3}+9ab^{2}+3( a^{2}b+b^{3}) \tag{3}
\sqrt{3}
$$</p>
<p>and</p>
<p>$$
\left\{
\begin{array}{c}
a^{3}+9ab^{2}=p \\
3( a^{2}b+b^{3}) =q.
\end{array}
\right. \tag{4}
$$</p>
<p>I tried to solve this system for $a,b$ but since the solution is of the form</p>
<p>$$
(a,b)=\Big(x^{3},\frac{3qx^{3}}{8x^{9}+p}\Big),\tag{5}
$$</p>
<p>where $x$ satisfies the <em>cubic</em> equation
$$
64x^{3}-48x^{2}p+( -15p^{2}+81q^{2}) x-p^{3}=0,\tag{6}
$$
would be very difficult to succeed, using this naive approach. </p>
<blockquote>
<p>Is this problem solvable, at least partially?</p>
</blockquote>
<p>Is $\sqrt[3]{p+q\sqrt{3}}+\sqrt[3]{p-q\sqrt{3}}=n$, $(p,q,n)\in\mathbb{N} ^3$ solvable?</p>
| <p>The solutions are of the form $\displaystyle(p, q)= \left(\frac{3t^2nr+n^3}{8},\,\frac{3n^2t+t^3r}{8}\right)$, for any rational parameter $t$. To prove it, we start with $$\left(p+q\sqrt{r}\right)^{1/3}+\left(p-q\sqrt{r}\right)^{1/3}=n\tag{$\left(p,q,n,r\right)\in\mathbb{N}^{4}$}$$
and cube both sides using the identity $(a+b)^3=a^3+3ab(a+b)+b^3$ to, then, get $$\left(\frac{n^3-2p}{3n}\right)^3=p^2-rq^2,$$ which is a nicer form to work with. Keeping $n$ and $r$ fixed, we see that for every $p={1,2,3,\ldots}$ there is a solution $(p,q)$, where $\displaystyle q^2=\frac{1}{r}\left(p^2-\left(\frac{n^3-2p}{3n}\right)^3\right)$. When is this number a perfect square? <a href="http://www.wolframalpha.com/input/?i=%5Cfrac%7Bp%5E2-%5Cleft%28%5Cfrac%7Bn%5E3-2p%7D%7B3n%7D%5Cright%29%5E3%7D%7Br%7D">Wolfram</a> says it equals $$q^2 =\frac{(8p-n^3) (n^3+p)^2}{(3n)^2\cdot 3nr},$$ which reduces the question to when $\displaystyle \frac{8p-n^3}{3nr}$ is a perfect square, and you get solutions of the form $\displaystyle (p,q)=\left(p,\frac{n^3+p}{3n}\sqrt{\frac{8p-n^3}{3nr}}\right).$ Note that when $r=3$, this simplifies further to when $\displaystyle \frac{8p}{n}-n^2$ is a perfect square.</p>
<hr>
<p>Now, we note that if $\displaystyle (p,q)=\left(p,\frac{n^3+p}{3n}\sqrt{\frac{8p-n^3}{3nr}}\right) \in\mathbb{Q}^2$, $\displaystyle\sqrt{\frac{8p-n^3}{3nr}}$ must be rational as well. Call this rational number $t$, our parameter. Then $8p=3t^2nr+n^3$. Substitute back to get $$(p,q)=\left(\frac{3t^2nr+n^3}{8},\,\frac{3n^2t+t^3r}{8}\right).$$ This generates expressions like <a href="http://www.wolframalpha.com/input/?i=%5Cleft%28p%2bq%5Csqrt%7Br%7D%5Cright%29%5E%7B1/3%7D%2b%5Cleft%28p-q%5Csqrt%7Br%7D%5Cright%29%5E%7B1/3%7D%20where%20r=11,%20p=2589437/8%20and%20q=56351/4&a=%5E_Real">$$\left(\frac{2589437}{8}+\frac{56351}{4}\sqrt{11}\right)^{1/3}+\left(\frac{2589437}{8}-\frac{56351}{4}\sqrt{11}\right)^{1/3}=137$$</a></p>
<p><a href="http://www.wolframalpha.com/input/?i=%5Cleft%28p%2bq%5Csqrt%7Br%7D%5Cright%29%5E%7B1/3%7D%2b%5Cleft%28p-q%5Csqrt%7Br%7D%5Cright%29%5E%7B1/3%7D%20where%20r=3,%20p=11155/4%20and%20q=6069/4&a=%5E_Real">$$\left(\frac{11155}{4}+\frac{6069}{4}\sqrt{3}\right)^{1/3}+\left(\frac{11155}{4}-\frac{6069}{4}\sqrt{3}\right)^{1/3}=23$$</a></p>
<p>for whichever $r$ you want, the first using $(r,t,n)=(11,2,137)$ and the second $(r,t,n)=(3,7,23)$.</p>
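<p>The parameterization is easy to spot-check numerically (a Python sketch; exact rational arithmetic for $(p,q)$, floating point for the cube roots, and all names mine):</p>

```python
import math
from fractions import Fraction

def pq(r, t, n):
    """(p, q) = ((3 t^2 n r + n^3)/8, (3 n^2 t + t^3 r)/8), computed exactly."""
    t, n, r = Fraction(t), Fraction(n), Fraction(r)
    return (3 * t**2 * n * r + n**3) / 8, (3 * n**2 * t + t**3 * r) / 8

def cbrt(x):
    """Real cube root, valid for negative arguments as well."""
    return math.copysign(abs(x) ** (1 / 3), x)

def sum_of_roots(p, q, r):
    s = math.sqrt(r)
    return cbrt(float(p) + float(q) * s) + cbrt(float(p) - float(q) * s)

p1, q1 = pq(11, 2, 137)   # -> (2589437/8, 56351/4), the first example above
p2, q2 = pq(3, 2, 4)      # -> (26, 15), recovering the identity that started it all
```

<p>Here <code>sum_of_roots(p1, q1, 11)</code> returns $137$ and <code>sum_of_roots(p2, q2, 3)</code> returns $4$, up to floating-point error.</p>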
| <p>Here's a way of finding, at the very least, a large class of rational solutions. It seems plausible to me that these are all the rational solutions, but I don't actually have a proof yet...</p>
<p>Say we want to solve $(p+q\sqrt{3})^{1/3}+(p-q\sqrt{3})^{1/3}=n$ for some fixed $n$. The left-hand side looks an awful lot like the root of a depressed cubic (as it would be given by <a href="http://en.wikipedia.org/wiki/Cubic_function#Cardano.27s_method">Cardano's formula</a>). So let's try to build some specific depressed cubic having $n$ as a root, where the cubic formula realizes $n$ as $(p+q\sqrt{3})^{1/3}+(p-q\sqrt{3})^{1/3}$.</p>
<p>The depressed cubics having $n$ as a root all take the following form:
$$(x-n)(x^2+nx+b) = x^3 + (b-n^2)x-nb$$
where $b$ is arbitrary. If we want to apply the cubic formula to such a polynomial and come up with the root $(p+q\sqrt{3})^{1/3}+(p-q\sqrt{3})^{1/3}$, we must have:
\begin{eqnarray}
p&=&\frac{nb}{2}\\
3q^2&=& \frac{(nb)^2}{4}+\frac{(-n^2+b)^3}{27}\\
&=&\frac{b^3}{27}+\frac{5b^2n^2}{36}+\frac{bn^4}{9}-\frac{n^6}{27}\\
&=&\frac{1}{108}(4b-n^2)(b+2n^2)^2
\end{eqnarray}
(where I cheated and used Wolfram Alpha to do the last factorization :)).</p>
<p>So the $p$ that arises here will be rational iff $b$ is; the $q$ that arises will be rational iff $4b-n^2$ is a perfect rational square (since $3 * 108=324$ is a perfect square). That is, we can choose rational $n$ and $m$ and set $m^2=4b-n^2$, and then we will be able to find rational $p,q$ via the above formulae, where
$(p+q\sqrt{3})^{1/3}+(p-q\sqrt{3})^{1/3}$ is a root of the cubic
$$
(x-n)\left(x^2+nx+\frac{m^2+n^2}{4}\right)=(x-n)\left(\left(x+\frac{n}{2}\right)^2+\left(\frac{m}{2}\right)^2\right) \, .
$$</p>
<p>The quadratic factor of this cubic manifestly does not have real roots unless $m=0$; since $(p+q\sqrt{3})^{1/3}+(p-q\sqrt{3})^{1/3}$ is real, it must therefore be equal to $n$ whenever $m \neq 0$.</p>
<p>To summarize, we have found a two-parameter family of rational solutions to the general equation $(p+q\sqrt{3})^{1/3}+(p-q\sqrt{3})^{1/3}=n$. One of those parameters is $n$ itself; if the other is $m$, we can substitute $b=\frac{m^2+n^2}{4}$ into the above relations to get
\begin{eqnarray}
p&=&n\left(\frac{m^2+n^2}{8}\right)\\
q&=&m\left(\frac{m^2+9n^2}{72}\right) \, .
\end{eqnarray}</p>
<p>To make sure I didn't make any algebra errors, I randomly picked $n=5$, $m=27$ to try out. These give $(p,q)=\left(\frac{1885}{4},\frac{1431}{4}\right)$, and indeed Wolfram Alpha <a href="http://www.wolframalpha.com/input/?i=%281885/4%2b1431/4%20%2a%20sqrt%283%29%29%5E%281/3%29%2b%281885/4-1431/4%20%2a%20sqrt%283%29%29%5E%281/3%29&a=%5E_Real">confirms</a> that
$$
\left(\frac{1885}{4}+\frac{1431}{4} \sqrt{3}\right)^{1/3}+\left(\frac{1885}{4}-\frac{1431}{4} \sqrt{3}\right)^{1/3}=5 \, .
$$</p>
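<p>A quick floating-point check of this two-parameter family (a Python sketch; names mine):</p>

```python
def pq_from(n, m):
    """(p, q) = (n (m^2 + n^2) / 8, m (m^2 + 9 n^2) / 72)."""
    return n * (m**2 + n**2) / 8, m * (m**2 + 9 * n**2) / 72

def sum_of_cube_roots(p, q):
    # Sign-aware real cube root: p - q*sqrt(3) is negative for this example.
    cbrt = lambda x: abs(x) ** (1 / 3) * (1 if x >= 0 else -1)
    s = 3 ** 0.5
    return cbrt(p + q * s) + cbrt(p - q * s)

p, q = pq_from(5, 27)             # -> (1885/4, 1431/4), the example above
value = sum_of_cube_roots(p, q)   # 5, up to floating-point error
```
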
|
logic | <p>Recently I have started reviewing mathematical notions, that I have always just accepted. Today it is one of the fundamental ones used in equations: </p>
<blockquote>
<p>If we have an equation, then the equation holds if we do the same to both sides. </p>
</blockquote>
<p>This seems perfectly obvious, but it must be stated as an axiom somewhere, presumably in formal logic(?). Only, I don't know what it would be called, or indeed how to search for it - does anybody knw?</p>
| <p>This axiom is known as the <em>substitution property of equality</em>. It states that if $f$ is a function, and $x = y$, then $f(x) = f(y)$. See, for example, <a href="https://en.wikipedia.org/wiki/First-order_logic#Equality_and_its_axioms" rel="nofollow noreferrer">Wikipedia</a>.</p>
<p>For example, if your equation is $4x = 2$, then you can apply the function $f(x) = x/2$ to both sides, and the axiom tells you that $f(4x) = f(2)$, or in other words, that $2x = 1$. You could then apply the axiom again (with the same function, even) to conclude that $x = 1/2$.</p>
| <p>"Do the same to both sides" is rather vague. What we can say is that if $f:A \rightarrow B$ is a <em>bijection</em> between sets $A$ and $B$ (injectivity is what matters here), then</p>
<p>$\forall \space x,y \in A \space x=y \iff f(x)=f(y)$</p>
<p>The operation of adding $c$ (and its inverse subtracting $c$) is a bijection in groups, rings and fields, so we can conclude that</p>
<p>$x=y \iff x+c=y+c$</p>
<p>However, multiplication by $c$ is only a bijection for certain values of $c$ ($c \ne 0$ in fields, $\gcd(c,n)=1$ in $\mathbb{Z}_n$ etc.), so although we can conclude</p>
<p>$x=y \Rightarrow xc = yc$</p>
<p>it is not safe to assume the converse i.e. in general</p>
<p>$xc=yc \nRightarrow x = y$</p>
<p>and we have to take care about which values of $c$ we can "cancel" from both sides of the equation.</p>
<p>Some polynomial functions are bijections in $\mathbb{R}$ e.g.</p>
<p>$x=y \iff x^3=y^3$</p>
<p>but others are not e.g.</p>
<p>$x^2=y^2 \nRightarrow x = y$</p>
<p><em>unless</em> we restrict the domain of $f(x)=x^2$ to, for example, non-negative reals. Similarly</p>
<p>$\sin(x) = \sin(y) \nRightarrow x = y$</p>
<p><em>unless</em> we restrict the domain of $\sin(x)$.</p>
<p>So in general we can only "cancel" a function from both sides of an equation if we are sure it is a bijection, or if we have restricted its domain or range to create a bijection.</p>
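<p>A small concrete illustration of the cancellation issue (a Python sketch over a finite search domain; names mine):</p>

```python
def solutions(equation, domain):
    """All x in a finite domain satisfying the given equation."""
    return {x for x in domain if equation(x)}

domain = range(-10, 11)

# Original equation: x + 3 = 7.
original = solutions(lambda x: x + 3 == 7, domain)             # {4}

# Cubing both sides (a bijection on the reals) preserves the solution set.
cubed = solutions(lambda x: (x + 3) ** 3 == 7 ** 3, domain)    # {4}

# Squaring both sides (not injective) lets a spurious solution sneak in.
squared = solutions(lambda x: (x + 3) ** 2 == 7 ** 2, domain)  # {4, -10}
```

<p>The squared equation gains the extra root $x=-10$ because $f(x)=x^2$ identifies $7$ and $-7$, exactly the failure of injectivity described above.</p>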
|
number-theory | <p>Our algebra teacher usually gives us a paper of $20-30$ questions for our homework. But each week, he tells us to do all the questions which their number is on a specific form.</p>
<p>For example, last week it was all the questions on the form of $3k+2$ and the week before that it was $3k+1$. I asked my teacher why he always uses the linear form of numbers... Instead of answering, he told me to determine which form of numbers of the new paper should we solve this week.</p>
<p>I wanted to be a little creative, so I used the prime numbers and I told everyone to do numbers on the form of $\lfloor \sqrt{p} \rfloor$ where $p$ is a prime number. At that time, I couldn't understand the smile on our teacher's face.</p>
<p>Everything was O.K., until I decided to do the homework. Then I realized I had made a huge mistake.$\lfloor \sqrt{p} \rfloor$ generated all of the questions of our paper and my classmates wanted to kill me. I went to my teacher to ask him to change the form, but he said he will only do it if I could solve this:</p>
<blockquote>
<p>Prove that $\lfloor \sqrt{p} \rfloor$ generates all natural numbers.</p>
</blockquote>
<p>What I tried: Suppose there is a $k$ for which there is no prime number $p$ with $\lfloor \sqrt{p} \rfloor=k$. From this we can say there exist two consecutive perfect squares with no prime number between them. So if we prove that for every $x$ there exists a prime number between $x^2$ and $x^2+2x+1$, we are done.</p>
<p>I tried using Bertrand's postulate but it didn't work.</p>
<p>I would appreciate any help from here :) </p>
| <blockquote>
<p>If we prove that for every <em>x</em> there exists a prime number between $x^2$ and $x^2+2x+1$, we are done.</p>
</blockquote>
<p>This is <a href="http://en.wikipedia.org/wiki/Legendre%27s_conjecture">Legendre's conjecture</a>, which remains unsolved. Hence <a href="http://static1.wikia.nocookie.net/__cb20111127113630/batman/images/2/22/The_Joker_smile.jpg">the big smile on your teacher's face</a>.</p>
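<p>Although the conjecture is open, the small cases are easy to check by machine. A quick Python sketch (this merely verifies small $k$; it proves nothing):</p>

```python
def is_prime(n):
    """Simple trial-division primality test; fine for small n."""
    if n < 2:
        return False
    d = 2
    while d * d <= n:
        if n % d == 0:
            return False
        d += 1
    return True

def hits(k):
    """Is there a prime p with floor(sqrt(p)) == k, i.e. k^2 <= p <= k^2 + 2k?"""
    return any(is_prime(p) for p in range(k * k, k * k + 2 * k + 1))

print(all(hits(k) for k in range(1, 200)))   # True: no counterexample below 40000
```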
| <p>Any of the accepted conjectures on sieves and random-like behavior of primes would predict that the chance of finding counterexamples to the conjecture in $(x^2, (x+1)^2)$ decrease rapidly with $x$, since they correspond to random events that are (up to logarithmic factors) $x$ standard deviations from the mean, and probabilities of those are suppressed very rapidly. This makes the computational evidence for the conjecture more reliable than just the fact of checking up to the millions and billions.</p>
|
linear-algebra | <p>The rotation matrix
$$\pmatrix{ \cos \theta & \sin \theta \\ -\sin \theta & \cos \theta}$$
has complex eigenvalues $\{e^{\pm i\theta}\}$ corresponding to eigenvectors $\pmatrix{1 \\i}$ and $\pmatrix{1 \\ -i}$. The real eigenvector of a 3d rotation matrix has a natural interpretation as the axis of rotation. Is there a nice geometric interpretation of the eigenvectors of the $2 \times 2$ matrix?</p>
| <p><a href="https://math.stackexchange.com/a/241399/3820">Tom Oldfield's answer</a> is great, but you asked for a geometric interpretation so I made some pictures.</p>
<p>The pictures will use what I called a "phased bar chart", which shows complex values as bars that have been rotated. Each bar corresponds to a vector component, with length showing magnitude and direction showing phase. An example:</p>
<p><img src="https://i.sstatic.net/8YsvT.png" alt="Example phased bar chart"></p>
<p>The important property we care about is that scaling a vector corresponds to the chart scaling or rotating. Other transformations cause it to distort, so we can use it to recognize eigenvectors based on the lack of distortions. (I go into more depth in <a href="http://twistedoakstudios.com/blog/Post7254_visualizing-the-eigenvectors-of-a-rotation" rel="noreferrer">this blog post</a>.)</p>
<p>So here's what it looks like when we rotate <code><0, 1></code> and <code><i, 0></code>:</p>
<p><img src="https://i.sstatic.net/Uauy6.gif" alt="Rotating 0, 1"> <img src="https://i.sstatic.net/nYUhr.gif" alt="Rotating i, 0"></p>
<p>Those diagrams are not just scaling/rotating. So <code><0, 1></code> and <code><i, 0></code> are not eigenvectors.</p>
<p>However, they do incorporate horizontal and vertical sinusoidal motion. Any guesses what happens when we put them together?</p>
<p>Trying <code><1, i></code> and <code><1, -i></code>:</p>
<p><img src="https://i.sstatic.net/t80MU.gif" alt="Rotating 1, i"> <img src="https://i.sstatic.net/OObMZ.gif" alt="Rotation 1, -i"></p>
<p>There you have it. The phased bar charts of the rotated eigenvectors are being rotated (corresponding to the components being phased) as the vector is turned. Other vectors get distorting charts when you turn them, so they aren't eigenvectors.</p>
| <p>Lovely question!</p>
<p>There is a kind of intuitive way to view the eigenvalues and eigenvectors, and it ties in with geometric ideas as well (without resorting to four dimensions!). </p>
<p>The matrix is unitary (more specifically, it is real, so it is called orthogonal), and so there is an orthogonal basis of eigenvectors. Here, as you noted, the eigenvectors are $\pmatrix{1 \\i}$ and $\pmatrix{1 \\ -i}$; let us call them $v_1$ and $v_2$. They form a basis of $\mathbb{C^2}$, and so we can write any element of $\mathbb{R^2}$ in terms of $v_1$ and $v_2$ as well, since $\mathbb{R^2}$ is a subset of $\mathbb{C^2}$. (And we normally think of rotations as occurring in $\mathbb{R^2}$! Please note that $\mathbb{C^2}$ is a two-dimensional vector space with components in $\mathbb{C}$ and need not be considered as four-dimensional, with components in $\mathbb{R}$.)</p>
<p>We can then represent any vector in $\mathbb{R^2}$ uniquely as a linear combination of these two vectors $x = \lambda_1 v_1 + \lambda_2v_2$, with $\lambda_i \in \mathbb{C}$. So if we call the linear map that the matrix represents $R$</p>
<p>$$R(x) = R(\lambda_1 v_1 + \lambda_2v_2) = \lambda_1 R(v_1) + \lambda_2R(v_2) = e^{i\theta}\lambda_1 (v_1) + e^{-i\theta}\lambda_2(v_2) $$</p>
<p>In other words, when working in the basis $\{v_1,v_2\}$:
$$R \pmatrix{\lambda_1 \\\lambda_2} = \pmatrix{e^{i\theta}\lambda_1 \\ e^{-i\theta}\lambda_2}$$</p>
<p>And we know that multiplying a complex number by $e^{i\theta}$ is an anticlockwise rotation by $\theta$. So the rotation of a vector when represented in the basis $\{v_1,v_2\}$ is the same as just rotating the individual components of the vector in the complex plane!</p>
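<p>The eigenvector relations above are easy to confirm numerically. A short Python sketch using the standard library's complex numbers (no claim beyond floating-point accuracy):</p>

```python
import cmath
import math

def rotate(theta, v):
    """Apply [[cos t, sin t], [-sin t, cos t]] to a 2-vector with complex entries."""
    c, s = math.cos(theta), math.sin(theta)
    return (c * v[0] + s * v[1], -s * v[0] + c * v[1])

theta = 0.7
pairs = [((1, 1j), cmath.exp(1j * theta)),    # v_1 with eigenvalue e^{i theta}
         ((1, -1j), cmath.exp(-1j * theta))]  # v_2 with eigenvalue e^{-i theta}
for v, lam in pairs:
    w = rotate(theta, v)
    # R v agrees with lambda v componentwise, up to rounding
    assert all(abs(w[k] - lam * v[k]) < 1e-12 for k in range(2))
```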
|
linear-algebra | <p>I'm doing a raytracing exercise. I have a vector representing the normal of a surface at an intersection point, and a vector of the ray to the surface. How can I determine what the reflection will be?</p>
<p>In the below image, I have <code>d</code> and <code>n</code>. How can I get <code>r</code>?</p>
<p><img src="https://i.sstatic.net/IQa15.png" alt="Vector d is the ray; n is the normal; t is the refraction; r is the reflection"></p>
<p>Thanks.</p>
| <p>$$r = d - 2 (d \cdot n) n$$
</p>
<p>where $d \cdot n$ is the dot product, and
$n$ must be normalized.</p>
| <p>Let $\hat{n} = {n \over \|n\|}$. Then $\hat{n}$ is the vector of magnitude one in the same direction as $n$. The projection of $d$ in the $n$ direction is given by $\mathrm{proj}_{n}d = (d \cdot \hat{n})\hat{n}$, and the projection of $d$ in the orthogonal direction is therefore given by $d - (d \cdot \hat{n})\hat{n}$. Thus we have
$$d = (d \cdot \hat{n})\hat{n} + [d - (d \cdot \hat{n})\hat{n}]$$
Note that $r$ has $-1$ times the projection onto $n$ that $d$ has onto $n$, while the orthogonal projection of $r$ onto $n$ is equal to the orthogonal projection of $d$ onto $n$, therefore
$$r = -(d \cdot \hat{n})\hat{n} + [d - (d \cdot \hat{n})\hat{n}]$$
Alternatively you may look at it as that $-r$ has the same projection onto $n$ that $d$ has onto $n$, with its orthogonal projection given by $-1$ times that of $d$.
$$-r = (d \cdot \hat{n})\hat{n} - [d - (d \cdot \hat{n})\hat{n}]$$
The later equation is exactly
$$r = -(d \cdot \hat{n})\hat{n} + [d - (d \cdot \hat{n})\hat{n}]$$</p>
<p>Hence one can get $r$ from $d$ via
$$r = d - 2(d \cdot \hat{n})\hat{n}$$
Stated in terms of $n$ itself, this becomes
$$r = d - {2 d \cdot n\over \|n\|^2}n$$</p>
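<p>In code, the unnormalized form above is the convenient one, since it avoids a square root. A minimal Python sketch (the ray tracer itself would of course use its own vector type):</p>

```python
def reflect(d, n):
    """r = d - 2 (d . n / n . n) n; the normal n need not be unit length."""
    dn = sum(a * b for a, b in zip(d, n))       # d . n
    nn = sum(b * b for b in n)                  # n . n = |n|^2
    return tuple(a - 2 * dn / nn * b for a, b in zip(d, n))

# A ray going down-right bouncing off a floor with upward normal:
print(reflect((1, -1, 0), (0, 1, 0)))   # (1.0, 1.0, 0.0)
print(reflect((1, -1, 0), (0, 5, 0)))   # same answer with a non-unit normal
```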
|
logic | <p>I have recently been reading about first-order logic and set theory. I have seen standard set theory axioms (say ZFC) formally constructed in first-order logic, where first-order logic is used as an object language and English is used as a metalanguage. </p>
<p>I'd like to construct first-order logic and then construct an axiomatic set theory in that language. In constructing first-order logic using English, one usually includes a countably infinite number of variables. However, it seems to me that one needs a definition of countably infinite in order to define these variables. A countably infinite collection (we don't want to call it a set yet) is a collection which can be put into 1-1 correspondence with the collection of natural numbers. It seems problematic to me that one appears to be implicitly using a notion of natural numbers to define the thing which then defines natural numbers (e.g. via Von Neumann's construction). Is this a legitimate concern that I have or is there an alternate definition of "countably infinite collection" I should use? If not, could someone explain to me why not? </p>
<p>I think that one possible solution is to simply assume whatever set-theoretic axioms I wish using clear and precise English sentences, define the natural numbers from there, and then define first-order logic as a convenient shorthand for the clear and precise English I am using. It seems to me that first-order logic is nothing but shorthand for clear and precise English sentences anyway. What the exact ontological status of English is and whether or not we are justified in using it as above are unresolvable philosophical questions, which I am willing to acknowledge, and then ignore because they aren't really in the realm of math.</p>
<p>Does this seem like a viable solution and is my perception of first-order logic as shorthand for clear and precise English (or another natural language) correct?</p>
<p>Thank so much in advance for any help and insight! </p>
| <p>I think there are two (very interesting) questions here. Let me try to address them.</p>
<blockquote>
<p>First, the title question: do we presuppose natural numbers in first-order logic?</p>
</blockquote>
<p>I would say the answer is definitely <em>yes</em>. We have to assume something to get off the ground; on some level, I at least take the natural numbers for granted.</p>
<p>(Note that there's a lot of wiggle room in exactly what this <em>means</em>: I know people who genuinely find it inconceivable that PA could be inconsistent, and I know people who find it very plausible, if not likely, that PA is inconsistent - all very smart people. But I think we do have to presuppose $\mathbb{N}$ to at least the extent that, say, Presburger arithmetic <a href="https://en.wikipedia.org/wiki/Presburger_arithmetic">https://en.wikipedia.org/wiki/Presburger_arithmetic</a> is consistent.)</p>
<p>Note that this isn't circular, as long as we're honest about the fact that we really are taking some things for granted. This shouldn't be too weird - if you really take <em>nothing</em> for granted, you can't get very much done <a href="https://en.wikipedia.org/wiki/What_the_Tortoise_Said_to_Achilles">https://en.wikipedia.org/wiki/What_the_Tortoise_Said_to_Achilles</a>. In terms of foundations, note that we will still find it valuable to define the natural numbers inside our foundations; but this will be an "internal" expression of something we take for granted "externally." So, for instance, at times we'll want to distinguish between "the natural numbers" (as defined in ZFC) and "the natural numbers" (that we assume at the outset we "have" in some way).</p>
<blockquote>
<p>Second question: Is it okay to view first-order logic as a kind of "proxy" for clear, precise natural language?</p>
</blockquote>
<p>My answer is a resounding: Sort of! :P</p>
<p>On the one hand, I'm inherently worried about natural language. I don't trust my own judgment about what is "clear" and "precise." For instance, is "This statement is false" clear and precise? What about the Continuum Hypothesis? </p>
<p>For me, one of the things first-order logic does is pin down a class of expressions which I'm <em>guaranteed</em> are clear and precise. Maybe there's more of them (although I would argue there aren't any, in a certain sense; see Lindstrom's Theorem <a href="https://en.wikipedia.org/wiki/Lindstr%C3%B6m%27s_theorem">https://en.wikipedia.org/wiki/Lindstr%C3%B6m%27s_theorem</a>), but at the very least anything I can express in first-order logic is clear and precise. There are a number of properties FOL has which make me comfortable saying this; I can go into more detail if that would be helpful.</p>
<p>So for me, FOL really <em>is</em> a proxy for clear and precise mathematical thought. There's a huge caveat here, though, which is that context matters. Consider the statement "$G$ is torsion" (here $G$ is a group). In the language of set theory with a parameter for $G$, this is first-order; but in the language of groups, there is no first-order sentence $\varphi$ such that [$G\models\varphi$ iff $G$ is torsion] for all groups $G$! This is a consequence of the <em>Compactness Theorem</em> for FOL.</p>
<p>So you have to be careful when asserting that something is first-order, if you're working in a domain that's "too small" (in some sense, set theory is "large enough," and an individual group isn't). But so long as you are careful about whether what you are saying is really expressible in FOL, I think this is what everyone does to a certain degree, or in a certain way.</p>
<p>At least, it's what I do. </p>
| <p>There are three inter-related concepts:</p>
<ul>
<li>The natural numbers</li>
<li>Finite strings of symbols</li>
<li>Formulas - particular strings of symbols used in formal logic.</li>
</ul>
<p>If we understand any one of these three, we can use that to understand all three. </p>
<p>For example, if we know what strings of symbols are, we can model natural numbers using unary notation and any particular symbol serving as a counter.</p>
<p>Similarly, the method of proof by induction (used to prove universal statements about natural numbers) has a mirror image in the principle of structural induction (used to prove universal statements about finite strings of symbols, or about formulas). </p>
<p>Conversely, if we know what natural numbers are, we can model strings of symbols by coding them as natural numbers (e.g. with prime power coding).</p>
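<p>As a concrete illustration of that last point, here is a small Python sketch of prime power coding (using each character's code point as the exponent is just one arbitrary injective choice):</p>

```python
from itertools import count

def primes():
    """Naive unbounded prime generator (trial division by earlier primes)."""
    found = []
    for n in count(2):
        if all(n % p for p in found):
            found.append(n)
            yield n

def encode(s):
    """Code the string s as 2**ord(s[0]) * 3**ord(s[1]) * 5**ord(s[2]) * ..."""
    n = 1
    for p, ch in zip(primes(), s):
        n *= p ** ord(ch)
    return n

def decode(n):
    """Invert encode by reading off prime exponents (assumes code points > 0)."""
    out = []
    for p in primes():
        if n == 1:
            break
        e = 0
        while n % p == 0:
            n, e = n // p, e + 1
        out.append(chr(e))
    return "".join(out)
```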
<p>This gives a very specific argument for the way in which formal logic presupposes a concept of natural numbers. Even if we try to treat formal logic entirely syntactically, as soon as we know how to handle finite strings of symbols, we will be able to reconstruct the natural numbers from them. </p>
<p>The underlying idea behind all three of the related concepts is a notion that, for lack of a better word, could be described as "discrete finiteness". It is not based on the notion of "set", though - if we understand what individual natural numbers are, this allows us to understand individual strings, and vice versa, even with no reference to set theory. But, if we do understand sets of natural numbers, then we also understand sets of strings, and vice versa. </p>
<p>If you read more classical treatments of formal logic, you will see that they did consider the issue of whether something like set theory would be needed to develop formal logic - it is not. These texts often proceed in a more "finitistic" way, using an inductive definition of formulas and a principle of structural induction that allows us to prove theorems about formulas without any reference to set theory. This method is still well known to contemporary mathematical logicians, but contemporary texts are often written in a way that obscures it. The set-theoretic approach is not the only approach, however. </p>
|
number-theory | <p>We have,</p>
<p>$$\sum_{k=1}^n k^3 = \Big(\sum_{k=1}^n k\Big)^2$$</p>
<p>$$2\sum_{k=1}^n k^5 = -\Big(\sum_{k=1}^n k\Big)^2+3\Big(\sum_{k=1}^n k^2\Big)^2$$</p>
<p>$$2\sum_{k=1}^n k^7 = \Big(\sum_{k=1}^n k\Big)^2-3\Big(\sum_{k=1}^n k^2\Big)^2+4\Big(\sum_{k=1}^n k^3\Big)^2$$</p>
<p>and so on (apparently). </p>
<p>Is it true that the sum of consecutive odd $m$ powers, for $m>1$, can be expressed as <strong>sums of squares of sums*</strong> in a manner similar to the above? What is the general formula? </p>
<p>*(Edited re Lord Soth's and anon's comment.)</p>
| <p>This is a partial answer, it just establishes the existence.</p>
<p>We have
$$s_m(n) = \sum_{k=1}^n k^m = \frac{1}{m+1}
\left(\operatorname{B}_{m+1}(n+1)-\operatorname{B}_{m+1}(1)\right)$$
where $\operatorname{B}_m(x)$ denotes the monic
<a href="http://mathworld.wolfram.com/BernoulliPolynomial.html">Bernoulli polynomial</a>
of degree $m$, which has the following useful properties:
$$\begin{align}
\int_x^{x+1}\operatorname{B}_m(t)\,\mathrm{d}t &= x^m
\quad\text{(from which everything else follows)}
\\\operatorname{B}_{m+1}'(x) &= (m+1)\operatorname{B}_m(x)
\\\operatorname{B}_m\left(x+\frac{1}{2}\right) &\begin{cases}
\text{is even in $x$} & \text{for even $m$}
\\ \text{is odd in $x$} & \text{for odd $m$}
\end{cases}
\\\operatorname{B}_m(0) = \operatorname{B}_m(1) &= 0
\quad\text{for odd $m\geq3$}
\end{align}$$</p>
<p>Therefore,
$$\begin{align}
s_m(n) &\text{has degree $m+1$ in $n$}
\\s_m(0) &= 0
\\s_m'(0) &= \operatorname{B}_m(1) = 0\quad\text{for odd $m\geq3$}
\\&\quad\text{(This makes $n=0$ a double zero of $s_m(n)$ for odd $m\geq3$)}
\\s_m\left(x-\frac{1}{2}\right) &\begin{cases}
\text{is even in $x$} & \text{for odd $m$}
\\ \text{is odd in $x$} & \text{for even $m\geq2$}
\end{cases}
\end{align}$$</p>
<p>Consider the vector space $V_m$ of univariate polynomials $\in\mathbb{Q}[x]$
with degree not exceeding $2m+2$, that are even in $x$ and have a double zero
at $x=\frac{1}{2}$.
Thus $V_m$ has dimension $m$ and is clearly spanned by
$$\left\{s_j^2\left(x-\frac{1}{2}\right)\mid j=1,\ldots,m\right\}$$
For $m>0$, we find that $s_{2m+1}(x-\frac{1}{2})$ has all the properties
required for membership in $V_m$.
Substituting $x-\frac{1}{2}=n$, we conclude that there exists a representation
$$s_{2m+1}(n) = \sum_{j=1}^m a_{m,j}\,s_j^2(n)
\quad\text{for $m>0$ with $a_{m,j}\in\mathbb{Q}$}$$
of $s_{2m+1}(n)$ as a linear combination of squares of sums.</p>
| <p><strong>Not an answer, but only some results I found.</strong></p>
<p>Denote $s_m=\sum_{k=1}^nk^m.$
Suppose that
$$s_{2m+1}=\sum_{i=1}^m a_is_i^2,a_i\in\mathbb Q.$$</p>
<p>Use the method of undetermined coefficients, we can get a list of $\{m,\{a_i\}\}$:
$$\begin{array}{l}
\{1,\{1\}\} \\
\left\{2,\left\{-\frac{1}{2},\frac{3}{2}\right\}\right\} \\
\left\{3,\left\{\frac{1}{2},-\frac{3}{2},2\right\}\right\} \\
\left\{4,\left\{-\frac{11}{12},\frac{11}{4},-\frac{10}{3},\frac{5}{2}\right\}\right\} \\
\left\{5,\left\{\frac{61}{24},-\frac{61}{8},\frac{28}{3},-\frac{25}{4},3\right\}\right\} \\
\left\{6,\left\{-\frac{1447}{144},\frac{1447}{48},-\frac{332}{9},\frac{595}{24},-\frac{21}{2},\frac{7}{2}\right\}\right\} \\
\left\{7,\left\{\frac{5771}{108},-\frac{5771}{36},\frac{5296}{27},-\frac{2375}{18},56,-\frac{49}{3},4\right\}\right\} \\
\left\{8,\left\{-\frac{53017}{144},\frac{53017}{48},-\frac{60817}{45},\frac{21817}{24},-\frac{3861}{10},\frac{1127}{10},-24,\frac{9}{2}\right\}\right\} \\
\left\{9,\left\{\frac{2755645}{864},-\frac{2755645}{288},\frac{632213}{54},-\frac{1133965}{144},\frac{13379}{4},-\frac{11725}{12},208,-\frac{135}{4},5\right\}\right\}\end{array}$$</p>
<p>Through some observation, we can find that
$a_i a_{i+1}<0,\quad a_m=\dfrac{m+1}2,\quad a_2=-3a_1,\quad a_{m-1}=-\dfrac{1}{24} m^2 (m+1).$</p>
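<p>The rows of the table are easy to verify mechanically with exact rational arithmetic; a short Python sketch checking the first few rows for many $n$:</p>

```python
from fractions import Fraction

def s(m, n):
    """s_m(n) = 1^m + 2^m + ... + n^m, exactly."""
    return sum(Fraction(k) ** m for k in range(1, n + 1))

# First rows of the table above: s_{2m+1} = sum_j a_{m,j} * s_j**2
table = {
    1: [Fraction(1)],
    2: [Fraction(-1, 2), Fraction(3, 2)],
    3: [Fraction(1, 2), Fraction(-3, 2), Fraction(2)],
    4: [Fraction(-11, 12), Fraction(11, 4), Fraction(-10, 3), Fraction(5, 2)],
}

def row_ok(m):
    return all(s(2 * m + 1, n) ==
               sum(a * s(j, n) ** 2 for j, a in enumerate(table[m], 1))
               for n in range(0, 10))

print(all(row_ok(m) for m in table))   # True
```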
|
probability | <p>If you are given a die and asked to roll it twice. What is the probability that the value of the second roll will be less than the value of the first roll?</p>
| <p>There are various ways to answer this. Here is one:</p>
<p>There is clearly a $1$ out of $6$ chance that the two rolls will be the same, hence a $5$ out of $6$ chance that they will be different. Further, the chance that the first roll is greater than the second must be equal to the chance that the second roll is greater than the first (e.g. switch the two dice!), so both chances must be $2.5$ out of $6$ or $5$ out of $12$.</p>
| <p>Here is another way to solve the problem:
$$
\text{Pr }[\text{second} > \text{first}] + \text{Pr }[\text{second} < \text{first}] + \text{Pr }[\text{second} = \text{first}] = 1
$$
Because of symmetry $\text{Pr }[\text{second} > \text{first}] = \text{Pr }[\text{second} < \text{first}]$, so
$$
\text{Pr }[\text{second} > \text{first}] = \frac{1 - \text{Pr }[\text{second} = \text{first}]}{2}
= \frac{1 - \frac{1}{6}}{2} = \frac{5}{12}
$$</p>
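<p>And for the skeptical, brute force over all $36$ equally likely ordered pairs confirms it; a one-screen Python sketch:</p>

```python
from fractions import Fraction
from itertools import product

outcomes = list(product(range(1, 7), repeat=2))          # all 36 ordered pairs
favorable = sum(1 for first, second in outcomes if second < first)
print(Fraction(favorable, len(outcomes)))                # 5/12
```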
|
logic | <p>A bag contains 2 counters, as to which nothing is known except
that each is either black or white. Ascertain their colours without taking
them out of the bag.</p>
<p>Carroll's solution: One is black, and the other is white.</p>
<blockquote>
<h3><a href="https://i.sstatic.net/mlmdS.jpg" rel="noreferrer">Lewis Carroll's explanation:</a></h3>
<p>We know that, if a bag contained <span class="math-container">$3$</span> counters, two being black and one white, the chance of drawing a black one would be <span class="math-container">$\frac{2}{3}$</span>; and that any <em>other</em> state of things would <em>not</em> give this chance.<br />
Now the chances, that the given bag contains <span class="math-container">$(\alpha)\;BB$</span>, <span class="math-container">$(\beta)\;BW$</span>, <span class="math-container">$(\gamma)\;WW$</span>, are respectively <span class="math-container">$\frac{1}{4}$</span>, <span class="math-container">$\frac{1}{2}$</span>, <span class="math-container">$\frac{1}{4}$</span>.<br />
Add a black counter.<br />
Then, the chances that it contains <span class="math-container">$(\alpha)\;BBB$</span>, <span class="math-container">$(\beta)\;BBW$</span>, <span class="math-container">$(\gamma)\;BWW$</span>, are, as before, <span class="math-container">$\frac{1}{4}$</span>, <span class="math-container">$\frac{1}{2}$</span>, <span class="math-container">$\frac{1}{4}$</span>.<br />
Hence the chances of now drawing a black one,<br />
<span class="math-container">$$= \frac{1}{4} \cdot 1 + \frac{1}{2} \cdot \frac{2}{3} + \frac{1}{4} \cdot \frac{1}{3} = \frac{2}{3}.$$</span>
Hence the bag now contains <span class="math-container">$BBW$</span> (since any <em>other</em> state of things would <em>not</em> give this chance).<br />
Hence, before the black counter was added, it contained BW, i.e. one black counter and one white.<br />
<a href="http://en.wikipedia.org/wiki/Q.E.D.#QEF" rel="noreferrer">Q.E.F.</a></p>
<p>Can you explain this explanation?</p>
</blockquote>
<p>I don't completely understand the explanation to begin with. It seems like there are elements of inverse reasoning, everything he says is correct but he is basically assuming what he intends to prove. He is assuming one white, one black, then adding one black yields the <span class="math-container">$\frac{2}{3}$</span>. From there he goes back to state the premise as proof.</p>
<p>Can anyone thoroughly analyze and determine if this solution contains any fallacies/sleight of hand that may trick the reader?</p>
| <p>There is a reason this is the last of Lewis Carroll's <em>Pillow Problems</em>. It is a mathematical joke from the author of <em>Alice in Wonderland</em>.</p>
<p>The error (and Lewis Carroll knew it) is the phrase</p>
<blockquote>
<p>We know ... that any <em>other</em> state of things would <em>not</em> give this chance</p>
</blockquote>
<p>since he then immediately gives an example of another case which gives the same chance. Indeed any position where the probability of three blacks is equal to the probability of two whites and a black would also give the same combined chance.</p>
<p>There is no need to add the third black counter: it simply confuses the reader, in order to distract from the logical error. Lewis Carroll could equally have written something like:</p>
<blockquote>
<p>We know that, if a bag contained $2$ counters, one being black and one white, the chance of drawing a black one would be $\frac12$; and that any <em>other</em> state of things would <em>not</em> give this chance.</p>
<p>Now the chances, that the given bag contains (α) BB, (β) BW, (γ) WW, are respectively $\frac14$, $\frac12$, $\frac14$.</p>
<p>Hence the chance, of now drawing the black one, $=\frac14 \cdot 1 +\frac12 \cdot \frac12 + \frac14 \cdot 0 = \frac12.$</p>
<p>Hence the bag contains BW (since any <em>other</em> state of things would <em>not</em> give this chance).</p>
</blockquote>
<p>If he had written that, it would be more immediately obvious that this was faulty logic with an assertion followed by a counterexample followed by faulty use of the assertion.</p>
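<p>The fallacy can even be checked by machine: the $\frac{2}{3}$ is an average over several distinct bags, so it cannot pin the contents down. A short Python sketch with exact fractions:</p>

```python
from fractions import Fraction
from itertools import product

p_black = Fraction(0)
possible_bags = set()
for bag in product("BW", repeat=2):        # the four equally likely 2-counter bags
    bag3 = tuple(sorted(bag + ("B",)))     # Carroll's added black counter
    possible_bags.add(bag3)
    p_black += Fraction(1, 4) * Fraction(bag3.count("B"), 3)

print(p_black)         # 2/3
print(possible_bags)   # three distinct states, so 2/3 does not force BBW
```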
| <p>Well, of course the reasoning is flawed, since it's certainly possible to have a bag with two counters of the same color in it!</p>
<p>The facts that are correct are:</p>
<p>The probability of drawing a black counter from a fixed bag with 3 counters is 2/3 iff the bag contains two black counters.</p>
<p>By adding a black counter to a randomly generated 2-counter bag, the probability of drawing a black from the resulting bag is 2/3. </p>
<p>The conclusion that this means the resulting bag in the latter case therefore contains 2 black counters and 1 white counter is what is flawed, because the bag itself is not fixed; the probability is being calculated over a variable number of possibilities for the bag.</p>
|
linear-algebra | <p>What is the difference between sum of two vectors and direct sum of two vector subspaces? </p>
<p>My textbook is confusing on this point. Any help would be appreciated.</p>
| <p><em>Direct sum</em> is a term for <em>subspaces</em>, while <em>sum</em> is defined for <em>vectors</em>.
We can take the sum of subspaces, but then their intersection need not be $\{0\}$.</p>
<p><strong>Example:</strong> Let $u=(0,1),v=(1,0),w=(1,0)$. Then</p>
<ul>
<li>$u+v=(1,1)$ (sum of vectors),</li>
<li>$\operatorname{span}(v)+\operatorname{span}(w)=\operatorname{span}(v)$, so the sum is not direct,</li>
<li>$\operatorname{span}(u)\oplus\operatorname{span}(v)=\Bbb R^2$, here the sum is direct because $\operatorname{span}(u)\cap\operatorname{span}(v)=\{0\}$,</li>
<li>$u\oplus v $ makes no sense in this context.</li>
</ul>
<p><em>Note that the direct sum of subspaces of a vector space is not the same thing as the direct sum of some vector spaces.</em></p>
| <p>In Axler's Linear Algebra Done Right, he defines the <em>sum of subspaces</em> $U + V$ as </p>
<p>$\{u + v : u \in U, v \in V \}$.</p>
<p>He then says that $W = U \oplus V$ if </p>
<p>(1) $W = U + V$, and</p>
<p>(2) The representation of each $w$ as $u + v$ is <em>unique</em>.</p>
<p>This is a different way of presenting these definitions than most texts, but it's equivalent to other definitions of direct sum.</p>
<p>In anyone's book, the sum and direct sum of subspaces are always defined; and the sum of vectors is always defined; but there's no such thing as a direct sum of vectors. </p>
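<p>For one-dimensional subspaces of $\mathbb{R}^2$, Axler's uniqueness condition reduces to a determinant test, which makes it easy to experiment with; a small Python sketch (my own helper, not from any text):</p>

```python
def decompose(w, u, v):
    """Solve w = a*u + b*v in R^2.  A unique solution exists iff det != 0,
    i.e. iff span(u) + span(v) is a direct sum equal to R^2."""
    det = u[0] * v[1] - u[1] * v[0]
    if det == 0:
        raise ValueError("span(u) and span(v) overlap: the sum is not direct")
    a = (w[0] * v[1] - w[1] * v[0]) / det   # Cramer's rule
    b = (u[0] * w[1] - u[1] * w[0]) / det
    return a, b

print(decompose((3, 4), (0, 1), (1, 0)))   # (4.0, 3.0): unique representation
# decompose((3, 4), (1, 0), (1, 0)) would raise: that sum is not direct
```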
|
probability | <p>This is a neat little problem that I was discussing today with my lab group out at lunch. Not particularly difficult but interesting implications nonetheless</p>
<p>Imagine there are 100 people in line to board a plane that seats 100. The first person in line, Alice, realizes she lost her boarding pass, so when she boards she decides to take a random seat instead. Every person that boards the plane after her will either take their "proper" seat, or if that seat is taken, a random seat instead.</p>
<p>Question: What is the probability that the last person that boards will end up in their proper seat?</p>
<p>Moreover, and this is the part I'm still pondering: can you think of a physical system that would follow these combinatorial statistics? Maybe a spin wave function in a crystal, etc.</p>
| <p>Here is a rephrasing which simplifies the intuition of this nice puzzle.</p>
<p>Suppose whenever someone finds their seat taken, they politely evict the squatter and take their seat. In this case, the first passenger (Alice, who lost her boarding pass) keeps getting evicted (and choosing a new random seat) until, by the time everyone else has boarded, she has been forced by a process of elimination into her correct seat.</p>
<p>This process is the same as the original process except for the identities of the people in the seats, so the probability of the last boarder finding their seat occupied is the same.</p>
<p>When the last boarder boards, Alice is either in her own seat or in the last boarder's seat, which have both looked exactly the same (i.e. empty) to her up to now, so there is no way poor Alice could be more likely to choose one than the other.</p>
| <p>This is a classic puzzle!</p>
<p>The answer is that the probability that the last person ends in up in their proper seat is exactly <span class="math-container">$\frac{1}{2}$</span>.</p>
<p>The reasoning goes as follows:</p>
<p>First observe that the fate of the last person is determined the moment either the first or the last seat is selected! This is because the last person will either get the first seat or the last seat. Any other seat will necessarily be taken by the time the last person gets to 'choose'.</p>
<p>Since at each choice step, the first or last is equally probable to be taken, the last person will get either the first or last with equal probability: <span class="math-container">$\frac{1}{2}$</span>.</p>
<p>Sorry, no clue about a physical system.</p>
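<p>Not a physical system, but the $\frac{1}{2}$ can at least be verified exactly for small cabins by enumerating the whole boarding process; a Python sketch with exact fractions:</p>

```python
from fractions import Fraction

def prob_last_in_own_seat(n):
    """Exact probability that passenger n ends up in seat n."""
    def walk(free, k):
        if k == n:                       # last passenger: exactly one seat remains
            return Fraction(int(free == frozenset({n})))
        if k > 1 and k in free:          # proper seat available: take it
            return walk(free - {k}, k + 1)
        # Alice (k == 1), or a displaced passenger: choose uniformly at random
        return sum(walk(free - {s}, k + 1) for s in free) / len(free)
    return walk(frozenset(range(1, n + 1)), 1)

print([prob_last_in_own_seat(n) for n in range(2, 7)])   # all equal to 1/2
```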
|
combinatorics | <p>How can we prove that a graph is bipartite if and only if all of its cycles have even order? Also, does this theorem have a common name? I found it in a maths Olympiad toolbox.</p>
| <p>One direction is very easy: if <span class="math-container">$G$</span> is bipartite with vertex sets <span class="math-container">$V_1$</span> and <span class="math-container">$V_2$</span>, every step along a walk takes you either from <span class="math-container">$V_1$</span> to <span class="math-container">$V_2$</span> or from <span class="math-container">$V_2$</span> to <span class="math-container">$V_1$</span>. To end up where you started, therefore, you must take an even number of steps.</p>
<p>Conversely, suppose that every cycle of <span class="math-container">$G$</span> is even. Let <span class="math-container">$v_0$</span> be any vertex. For each vertex <span class="math-container">$v$</span> in the same component <span class="math-container">$C_0$</span> as <span class="math-container">$v_0$</span> let <span class="math-container">$d(v)$</span> be the length of the shortest path from <span class="math-container">$v_0$</span> to <span class="math-container">$v$</span>. Color red every vertex in <span class="math-container">$C_0$</span> whose distance from <span class="math-container">$v_0$</span> is even, and color the other vertices of <span class="math-container">$C_0$</span> blue. Do the same for each component of <span class="math-container">$G$</span>. Check that if <span class="math-container">$G$</span> had any edge between two red vertices or between two blue vertices, it would have an odd cycle. Thus, <span class="math-container">$G$</span> is bipartite, the red vertices and the blue vertices being the two parts.</p>
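<p>The construction in this direction is effectively a breadth-first $2$-colouring, which is also how one tests bipartiteness in practice; a Python sketch (graph as a dict of adjacency lists, my own convention):</p>

```python
from collections import deque

def two_color(adj):
    """BFS 2-coloring of an undirected graph {v: [neighbors]}.
    Returns a coloring dict, or None if some edge forces an odd cycle."""
    color = {}
    for start in adj:                    # handle every component
        if start in color:
            continue
        color[start] = 0                 # parity of distance from `start`
        queue = deque([start])
        while queue:
            u = queue.popleft()
            for w in adj[u]:
                if w not in color:
                    color[w] = 1 - color[u]
                    queue.append(w)
                elif color[w] == color[u]:
                    return None          # same-parity edge -> odd cycle
    return color
```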
| <blockquote>
<p>does this theorem have a common name?</p>
</blockquote>
<p>It is sometimes called <strong>König's Theorem</strong> (1936), for example in lecture notes <a href="http://www-math.ucdenver.edu/%7Ewcherowi/courses/m4408/gtln5.html" rel="nofollow noreferrer">here</a> (<a href="https://web.archive.org/web/20211127091845/http://www-math.ucdenver.edu/%7Ewcherowi/courses/m4408/gtln5.html" rel="nofollow noreferrer">Wayback Machine</a>).</p>
<hr />
<p>However, this name is <a href="https://en.wikipedia.org/wiki/K%C3%B6nig%27s_theorem" rel="nofollow noreferrer">ambiguous</a>.</p>
|
linear-algebra | <p>A Hermitian matrix always has real eigenvalues and real or complex orthogonal eigenvectors. A real symmetric matrix is a special case of Hermitian matrices, so it too has orthogonal eigenvectors and real eigenvalues, but could it ever have complex eigenvectors?</p>
<p>My intuition is that the eigenvectors are always real, but I can't quite nail it down.</p>
| <p>Always try out examples, starting out with the simplest possible examples (it may take some thought as to which examples are the simplest). Does for instance the identity matrix have complex eigenvectors? This is pretty easy to answer, right? </p>
<p>Now for the general case: if $A$ is any real matrix with real eigenvalue $\lambda$, then we have a choice of looking for real eigenvectors or complex eigenvectors. The theorem here is that the $\mathbb{R}$-dimension of the space of real eigenvectors for $\lambda$ is equal to the $\mathbb{C}$-dimension of the space of complex eigenvectors for $\lambda$. It follows that (i) we will always have non-real eigenvectors (this is easy: if $v$ is a real eigenvector, then $iv$ is a non-real eigenvector) and (ii) there will always be a $\mathbb{C}$-basis for the space of complex eigenvectors consisting entirely of real eigenvectors.</p>
<p>As for the proof: the $\lambda$-eigenspace is the kernel of the (linear transformation given by the) matrix $\lambda I_n - A$. By the rank-nullity theorem, the dimension of this kernel is equal to $n$ minus the rank of the matrix. Since the rank of a real matrix doesn't change when we view it as a complex matrix (e.g. the reduced row echelon form is unique so must stay the same upon passage from $\mathbb{R}$ to $\mathbb{C}$), the dimension of the kernel doesn't change either. Moreover, if $v_1,\ldots,v_k$ are a set of real vectors which are linearly independent over $\mathbb{R}$, then they are also linearly independent over $\mathbb{C}$ (to see this, just write out a linear dependence relation over $\mathbb{C}$ and decompose it into real and imaginary parts), so any given $\mathbb{R}$-basis for the eigenspace over $\mathbb{R}$ is also a $\mathbb{C}$-basis for the eigenspace over $\mathbb{C}$.</p>
| <p>If $A$ is a symmetric $n\times n$ matrix with real entries, then viewed as an element of $M_n(\mathbb{C})$, its eigenvectors always include vectors with non-real entries: if $v$ is any eigenvector then at least one of $v$ and $iv$ has a non-real entry.</p>
<p>On the other hand, if $v$ is any eigenvector then at least one of $\Re v$ and $\Im v$ (take the real or imaginary parts entrywise) is non-zero and will be an eigenvector of $A$ with the same eigenvalue. So you can always pass to eigenvectors with real entries.</p>
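<p>A quick numerical illustration of the answers above (a sketch in Python; the $2\times2$ closed form and the sample entries are my choices, not from the answers): a real symmetric matrix has real eigenvalues, and real eigenvectors can always be chosen.</p>

```python
import math

def eig_sym_2x2(a, b, c):
    """Eigenpairs of the real symmetric matrix [[a, b], [b, c]].

    The discriminant (a - c)^2 + 4b^2 is a sum of squares, hence
    non-negative, so both eigenvalues are real -- and the eigenvectors
    below have real entries as well.
    """
    disc = math.sqrt((a - c) ** 2 + 4 * b ** 2)
    lam1 = (a + c + disc) / 2
    lam2 = (a + c - disc) / 2
    # For eigenvalue lam, v = (b, lam - a) solves the first row of
    # (A - lam*I) v = 0 whenever b != 0.
    v1 = (b, lam1 - a)
    v2 = (b, lam2 - a)
    return (lam1, v1), (lam2, v2)

(l1, v1), (l2, v2) = eig_sym_2x2(2.0, 1.0, 3.0)
# The two real eigenvectors are orthogonal, as expected for a symmetric matrix.
dot = v1[0] * v2[0] + v1[1] * v2[1]
```

Of course $i\,v_1$ is then a perfectly good non-real eigenvector for the same eigenvalue, which is the point made above.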
|
geometry | <p>My wife came up with the following problem while we were making some decorations for our baby: given a triangle, what is the largest equilateral triangle that can be inscribed in it? (In other words: given a triangular piece of cardboard, what is the largest equilateral triangle that you can cut out of it?)</p>
<p>She also came up with the following heuristic/conjecture: take the largest angle of the given triangle. (It is guaranteed to be at least $60^\circ = \pi/3$, as not all three angles of a triangle can be less than that.) Now the answer (the largest inscribed equilateral triangle) can be found among those made by marking off a $60^\circ$ angle at that vertex, with the two ends chosen somehow. (In other words: the inscribed equilateral triangle can be chosen to have that vertex as one of its vertices.)</p>
<p><a href="https://i.sstatic.net/oC7nm.png" rel="noreferrer"><img src="https://i.sstatic.net/oC7nm.png" alt="example: triangle ADD' inscribed in triangle ABC"></a></p>
<p>My intuition for geometry is not so good. :-) I played with a few examples in Geogebra and couldn't find any counterexamples, nor could I think of a proof, so I'm asking here.</p>
<p>This is similar to <a href="https://math.stackexchange.com/questions/59616/find-the-maximum-area-possible-of-equilateral-triangle-that-inside-the-given-squ">Find the maximum area possible of equilateral triangle that inside the given square</a> and a special case of <a href="https://math.stackexchange.com/questions/883050/largest-equilateral-triangle-in-a-polygon">Largest Equilateral Triangle in a Polygon</a> (whose paper I don't have access to—all I could find via citations to that paper is that it gives an $O(n^3)$ algorithm—and in any case the problem may be simpler for a triangle).</p>
<p><a href="https://i.sstatic.net/mQpDB.png" rel="noreferrer"><img src="https://i.sstatic.net/mQpDB.png" alt="another example, with the two vertices not lying on BC"></a></p>
<p>Questions:</p>
<ul>
<li>How can one find the largest equilateral triangle that can be inscribed in a given triangle?</li>
<li>Can such a triangle always be found with one vertex at one of the vertices of the given triangle, specifically the one with the largest angle?</li>
<li>(If the answer to the above is no) When is the above true? For instance, is the conjecture true when the triangle is isosceles, with the two sides adjacent to the largest angle equal?</li>
</ul>
| <p>Consider an arbitrary triangle PQR. Excluding the trivial case of an equilateral triangle, either one or two of the angles are at least 60 degrees, and <em>wlog</em> let P be the minimal example of these. I.e. P is the smallest angle of at least 60 degrees, and Q is either the largest angle or the second largest after P. Then the largest enclosed equilateral triangle (call it E) has one vertex at P.</p>
<p>If Q < 60 (i.e. P is the only angle >=60), then both other vertices are on QR.</p>
<p>If Q > 120 - P/2, then the second vertex lies on QR, at the intersection with a line drawn at an angle of 60 degrees from PR.</p>
<p>Otherwise the second vertex is at Q.</p>
<p>Motivation: The interesting case to consider is a triangle of angles 62, 89, 29 degrees. (Actually this is almost exactly the OP's triangle ABC above.)</p>

<p><a href="https://i.sstatic.net/WtSjR.png" rel="noreferrer"><img src="https://i.sstatic.net/WtSjR.png" alt="enter image description here"></a></p>
<p>If Q<90, there will be a local maximum E based on P and Q. The diagram shows the perpendicular dropped from P to QR, and shows that there will be an equal E if it is possible to rotate this about P so that the vertex on QR is the same distance on the opposite side of the perpendicular. In this case, since angle Q is 89 degrees, we need to be able to rotate E through 2 degrees, which is exactly possible. Of course if angle Q is more than 90 degrees, rotation will always increase the size of E.</p>
<p>This is a sketch of an answer; it depends on proving that one vertex of E is at a vertex of PQR, and a messy set of cases for optimisation. But I hope I have captured the distinction between the two cases illustrated by (62, 89, 29). </p>
| <p>Let $T$ be the given triangle, having vertices $A_i$, angles $\alpha_i$, and edges $a_i=[A_{i-1},A_{i+1}]$. Use $\triangle$ as variable for equilateral triangles contained in $T$, and denote the vertices of such triangles by $v_j$. Given an instance $I:=(\triangle, T)$ call the number of incidences $v_j\in a_i$ the <em>defect</em> of this instance. Then one has</p>
<p><strong>Lemma.</strong> If the defect of the instance $I$ is $\leq3$ then $\triangle$ is not maximal.</p>
<p><em>Proof.</em> We leave defects $\leq2$ to the reader and bother about instances with defect $3$. If $v_1=A_1$ is a vertex of $T$, $v_2$ lies on the adjacent edge $a_2$, and $v_3$ lies in $T^\circ$ then one can just move $v_2$ along $a_2$ to increase $\triangle$. If $v_1=A_1$ and $v_2$ lies on the opposite edge $a_1$ then one can slightly rotate $\triangle$ around $v_1$, thereby freeing $v_2$, so that $\triangle$ then can be enlarged by scaling from $v_1$. If $v_1$, $v_2$ are lying on the same edge $a_j$ and $v_3$ on some other edge then one may slightly translate $[v_1,v_2]$ along $a_j$ to free $v_3$.</p>
<p>It remains the case that the three vertices $v_i$ of $\triangle$ are lying on the three edges $a_i$ of $T$. Assume that $\triangle$ has side length $s$, and assume that $\alpha_3\leq60^\circ$. If we let $v_1$ and $v_2$ glide along $a_1$ and $a_2$ such that $|v_1v_2|=s$ at all times then $v_3$ will describe an arc of an ellipse (some computation is needed to verify this). The edge $a_3$ is intersecting or touching this arc at the initial position of $v_3$. It follows that we can free $v_3$ in this way and then enlarge $\triangle$.</p>
<p>We therefore have to consider only instances $I$ with a defect of $4$ or more (the latter can happen if one or three of the $\alpha_i$ are $=60^\circ$). A defect $\geq4$ can be realized in the following ways:</p>
<p>(i) The vertex $v_1$ of $\triangle$ coincides with the vertex $A_1$ of $T$, and the vertices $v_2$, $v_3$ are lying on the opposite edge $a_1$ of $T$. This is possible only if $\alpha_2$ and $\alpha_3$ are $\leq60^\circ$.</p>
<p>(ii) The vertex $v_1$ of $\triangle$ coincides with the vertex $A_1$ of $T$, the vertex $v_2$ is on the adjacent edge $a_2$ or $a_3$ of $T$, and the vertex $v_3$ is on the opposite edge $a_1$ of $T$. This is possible only if $\alpha_1\geq60^\circ$.</p>
<p>(iii) The vertices $v_1$, $v_2$ of $\triangle$ coincide with the two vertices $A_1$, $A_2$ of $T$. This is possible only if $\alpha_1$ and $\alpha_2$ are $\geq60^\circ$.</p>
<p>In this way the problem has been reduced to the (computational) analysis of a finite number of cases. </p>
<p>If $T$ has two angles $<60^\circ$ then we have to compare the three colored $\triangle$s in the following figure. It is obvious that the red triangle is the largest.</p>
<p><a href="https://i.sstatic.net/A7p8z.jpg" rel="noreferrer"><img src="https://i.sstatic.net/A7p8z.jpg" alt="enter image description here"></a></p>
<p>If $T$ has two angles $>60^\circ$ then we have to compare the three colored triangles in the following figure. Which of them is the largest depends in an intricate way on the angles $\alpha$ and $\beta$. Assume $\beta>\alpha$, as in the figure. Then the green $\triangle$ is larger than the blue $\triangle$. If $\alpha+2\beta>240^\circ$ then the green $\triangle$ is also larger than the red $\triangle$, hence the largest of the three. If $\alpha+2\beta<240^\circ$ then the red triangle is the largest. The following surprise is easy to verify: If $T$ has angles $80^\circ$, $80^\circ$ and $20^\circ$ then $s=u=v$. It follows that in this case there are three different maximal $\triangle$s in $T$.</p>
<p><a href="https://i.sstatic.net/YRMXW.jpg" rel="noreferrer"><img src="https://i.sstatic.net/YRMXW.jpg" alt="enter image description here"></a></p>
<p>To substantiate the claims made in the last paragraph note that the angles $\eta$ in the figure are $=240^\circ-\alpha-\beta$. The sine theorem then gives
$$u=\sin\beta{s\over\sin\eta}=s{\sin\beta\over\sin(240^\circ-\alpha-\beta)},\quad v=s{\sin\alpha\over\sin(240^\circ-\alpha-\beta)}\ .$$
It follows that
$$\max\{s,u,v\}={s\over\sin(240^\circ-\alpha-\beta)}\max\{\sin(240^\circ-\alpha-\beta),\sin\beta,\sin\alpha\}\ ,$$
so that it remains to decide which of the last three entries is largest.</p>
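<p>These sine-rule formulas are easy to check numerically. A small sketch in Python (the function name is mine; the angle values are the ones from the text), confirming the claimed surprise that the $80^\circ$, $80^\circ$, $20^\circ$ triangle has three maximal inscribed equilateral triangles of equal size:</p>

```python
import math

def candidate_sizes(alpha_deg, beta_deg, s=1.0):
    """Side lengths s, u, v of the three candidate equilateral triangles
    for a triangle with two angles alpha, beta > 60 degrees, using the
    sine-rule formulas from the text (with eta = 240 - alpha - beta)."""
    eta = math.radians(240.0 - alpha_deg - beta_deg)
    u = s * math.sin(math.radians(beta_deg)) / math.sin(eta)
    v = s * math.sin(math.radians(alpha_deg)) / math.sin(eta)
    return s, u, v

# The 80-80-20 triangle: all three candidates coincide.
s, u, v = candidate_sizes(80.0, 80.0)

# A case with alpha + 2*beta = 250 > 240: the "green" triangle (side u)
# should beat both others.
s2, u2, v2 = candidate_sizes(70.0, 90.0)
```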
|
game-theory | <p>[There's still the strategy to go. A suitably robust argument that establishes what is <em>statistically</em> the best strategy will be accepted.]</p>
<p><strong>Here's my description of the game:</strong></p>
<p>There's a <span class="math-container">$4\times 4$</span> grid with some random, numbered cards on. The numbers are either one, two, or multiples of three. Using up, down, left, and right moves, you add the numbers on adjacent cards to make a new card like so: <span class="math-container">$$\begin{align}\color{blue}1+\color{red}2&=3\tag{1} \\ n+n&=2n\end{align}$$</span> for <span class="math-container">$n=2^k3\ge 3$</span>, where <span class="math-container">$k\in\{0, 1, . . . , 10\}$</span>, so the <strong>highest a card can be is <span class="math-container">$2^{11}3$</span></strong>. But at each move the <strong>"free" cards move too and a random new card appears</strong> at a random point along the edge you slide away from. Everything is kept on the grid. <strong>The card for the next move is indicated by colour</strong> at the top of the screen: blue for <span class="math-container">$1$</span>, red for <span class="math-container">$2$</span>, and white for <span class="math-container">$n\ge 3$</span> (such that <span class="math-container">$n$</span> is attainable using the above process). The white <span class="math-container">$2^\ell 3$</span>-numbered cards are worth <span class="math-container">$3^{\ell+1}$</span> points; <strong>the rest give no points</strong>. Once there are no more available moves, the points on the remaining cards are summed to give your score for the game.</p>
<p><a href="http://www.geekswithjuniors.com/blog/2014/2/10/threes-the-best-single-swipe-puzzle-game-on-ios.html" rel="nofollow noreferrer">Here's</a> another description I've found; it's the least promotional. It has the following gif.</p>
<p><img src="https://i.sstatic.net/iAXVM.gif" alt="enter image description here" /></p>
<p>So:</p>
<blockquote>
<p>What's the best strategy for the game? What's the highest possible score?</p>
</blockquote>
<p><em>Thoughts:</em></p>
<p>We could model this using some operations on <span class="math-container">$4\times 4$</span> matrices over <span class="math-container">$\mathbb{N}$</span>. A new card would be the addition of <span class="math-container">$\alpha E_{ij}$</span> for some appropriate <span class="math-container">$\alpha$</span> and standard basis vector <span class="math-container">$E_{ij}$</span>. That's all I've got . . .</p>
<hr />
<p><strong>NB:</strong> If this is a version of some other game, please let me know so I can avoid giving undue attention to this version :)</p>
<hr />
<p>The number on each card can be written <span class="math-container">$n=2^k3^{\varepsilon_k}$</span>, where
<span class="math-container">$$\varepsilon_k=\cases{\color{blue}0\text{ or }1 &: $k=0$ \\
\color{red}0\text{ or }1 &: $k=1$ \\
1 &: $k\ge 2$;}$$</span>that is, <span class="math-container">$\varepsilon_k=\cases{0 &:$n<3$ \\ 1 &:$n\ge 3$}$</span>. So we can write <span class="math-container">$(k, \varepsilon_k)$</span> instead under
<span class="math-container">$$(k, \varepsilon_k)+(\ell, \varepsilon_\ell)\stackrel{(1)}{=}\cases{(k+1, 1)&: $\varepsilon_k, \varepsilon_\ell, k=\ell > 0$ \\
(0, 1)&: $\color{blue}k=\color{blue}{\varepsilon_k}=\color{red}{\varepsilon_\ell}=0, \color{red}\ell=1$ \\
(0, 1)&: $\color{blue}\ell=\color{red}{\varepsilon_k}=\color{blue}{\varepsilon_\ell}=0, \color{red}k=1$.}$$</span></p>
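<p>The merge rule and the scoring rule from the description can be written down directly; here is a minimal sketch in Python (function names are mine, not from the game):</p>

```python
import math

def combine(a, b):
    """Result of pushing card a onto card b, or None if they don't merge:
    1 + 2 = 3, and n + n = 2n for n >= 3."""
    if {a, b} == {1, 2}:
        return 3
    if a == b and a >= 3:
        return 2 * a
    return None

def score(card):
    """A card 2**l * 3 is worth 3**(l + 1) points; 1s and 2s score nothing."""
    if card < 3:
        return 0
    l = round(math.log2(card / 3))
    return 3 ** (l + 1)
```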
<p>Looking at a <span class="math-container">$2\times 2$</span> version might help: the moves from different starting positions show up if we work systematically. It fills up quickly.</p>
<hr />
<p>It'd help to be more precise about what a good strategy might look like. The best strategy might be one that, <em>from an arbitrary <span class="math-container">$4\times 4$</span> grid</em> <span class="math-container">$G_0$</span> and with the least number of moves, gives the highest score attainable with <span class="math-container">$G_0$</span>, subject to the random nature of the game. That's still a little vague though . . .</p>
| <p>The strategy I employ is simply to make the move that leaves the most available moves, and to disregard score entirely. As a natural consequence of playing more moves, the score of the board will increase simply because the only way to continue playing is to make combinations, and combinations generate higher scores.</p>
<p>At the beginning of the game, the most important thing to consider is the placement of $1$s and $2$s. They are unique in that nothing can be combined adjacent to them to make a valid combination; they will only combine with their complement, which can only be achieved by board translation (versus, say, a
$12$, which can be adjacent to a $6$ that combines with another $6$, after which the $12$ can combine with the resulting $12$). There's no way to make a $1$ or a $2$; each must simply be moved around the board.</p>
<p>Later, with higher scoring tiles on the board, the "bonus" tile (which shows up as a white tile with a $+$ in it at the top) becomes increasingly important, and the best strategy I've found is to attempt to place the bonus tile as near a mixed group of larger tiles as possible. The bonus tile will always be at least $6$, but never the same score as your highest scoring tile in play.</p>
<p>There is also the nature of the tile selection. It's been reverse engineered that the random generator uses a "bag" where $12$ tiles are shuffled. The original board layout uses this method, and $9$ tiles are placed into the board with $3$ remaining in the "bag". Once the bag is exhausted, the tiles are shuffled again. There are always $4$ of each: $1$, $2$, and $3$. Once you reach a high tile of $48$ a "bonus" tile is inserted with a potential value of greater than $3$. This changes the size of the "bag" to $13$ instead of $12$. So, keeping track of where you are in the "bag" and how many of each color you've seen can give you an advantage when looking at future moves.</p>
<hr>
<p>Curiously, the possibility space for scoring is actually quite sparse. All scores will necessarily be multiples of $3$, but it turns out that only about $\frac38$ of the multiples of $3$ between $0$ and the max score are actually valid. There are a lot that are simply impossible to get, like a $19$ in cribbage.</p>
<p>The lowest one that isn't trivially small is still $39,363$, though, which seems well out of the range of the average player. The next lowest I found is $52,485$. There are lots of gaps at the high end, due to the fact that highest scoring tile is worth over $500$k by itself.</p>
| <p><em>A partial answer:</em> </p>
<p>The highest possible score is $16\times 3^{12}$.</p>
<p>If the game could start with no available moves, then just suppose it starts with $2^{11}3$ everywhere.</p>
<p>Alternatively, suppose you start with $2^{11}3$ in the top left and suppose every new card happens to be $2^{11}3$. Assume the cards all show up in the top left corner, which we can arrange as follows: slide right until the top row is full, slide down once, and repeat. This will eventually fill the grid; once it does, there'll be no more available moves, so the game ends (with a score of $16\times 3^{11+1}$).</p>
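<p>A quick sanity check of the arithmetic (a sketch; the variable names are mine): the top card is <span class="math-container">$2^{11}\cdot 3 = 6144$</span>, worth <span class="math-container">$3^{12}$</span> points, and sixteen of them give the bound above.</p>

```python
top_card = 2 ** 11 * 3     # 6144, the highest card allowed in the game
top_value = 3 ** (11 + 1)  # a 2**l * 3 card is worth 3**(l + 1) points
max_score = 16 * top_value  # all 16 cells holding the top card
```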
|
probability | <p>I've a confession to make. I've been using PDFs and PMFs without actually knowing what they are. My understanding is that density equals area under the curve, but if I look at it that way, then it doesn't make sense to refer to the "mass" of a random variable in discrete distributions. How can I interpret this? Why do we use "mass" and "density" to describe these functions rather than something else?</p>
<p>P.S. Please feel free to change the question itself in a more understandable way if you feel this is a logically wrong question.</p>
| <p>(This answer takes as its starting point the OP's question in the comments, "Let me understand mass before going to density. Why do we call a point in the discrete distribution as mass? Why can't we just call it a point?")</p>
<p>We could certainly call it a point. The utility of the term "probability mass function," though, is that it tells us something about how the function in the discrete setting relates to the function in the continuous setting because of the associations we already have with "mass" and "density." And I think to understand why we use these terms in the first place we have to start with what we call the density function. (In fact, I'm not sure we would even be using "probability mass" without the corresponding "probability density" function.)</p>
<p>Let's say we have some function $f(x)$ that we haven't named yet but we know that $\int_a^b f(x) dx$ yields the probability that we see an outcome between $a$ and $b$. What should we call $f(x)$? Well, what are its properties? Let's start with its units. We know that, in general, the units on a definite integral $\int_a^b f(x) dx$ are the units of $f(x)$ times the units of $dx$. In our setting, the integral gives a probability, and $dx$ has units in say, length. So the units of $f(x)$ must be probability per unit length. This means that $f(x)$ must be telling us something about how much probability is concentrated per unit length near $x$; i.e., how <em>dense</em> the probability is near $x$. So it makes sense to call $f(x)$ a "probability density function." (In fact, one way to view $\int_a^b f(x) dx$ is that, if $f(x) \geq 0$, $f(x)$ is <em>always</em> a density function. From this point of view, height is area density, area is volume density, speed is distance density, etc. One of my colleagues uses an approach like this when he discusses applications of integration in second-semester calculus.)</p>
<p>Now that we've named $f(x)$ a density function, what should we call the corresponding function in the discrete setting? It's not a density function; its units are probability rather than probability per unit length. So what is it? Well, when we say "density" without a qualifier we are normally talking about "mass density," and when we integrate a density function over an object we obtain the mass of that object. With this in mind, the relationship between the probability function in the continuous setting to that of the probability function in the discrete setting is exactly that of density to mass. So "probability mass function" is a natural term to grab to apply to the corresponding discrete function. </p>
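<p>The units picture above can be made concrete with a toy computation (a sketch in Python, standard library only; the distributions chosen are arbitrary examples): a <em>mass</em> function is summed directly to get probability, while a <em>density</em> yields probability only after being multiplied by a length and integrated.</p>

```python
import math

def binom_pmf(k, n, p):
    """Probability mass at k: already a probability, no 'per unit' attached."""
    return math.comb(n, k) * p ** k * (1 - p) ** (n - k)

def normal_pdf(x):
    """Standard normal density: units of probability *per unit length*."""
    return math.exp(-x * x / 2) / math.sqrt(2 * math.pi)

# Mass: sum the point probabilities directly; they total 1.
total_mass = sum(binom_pmf(k, 10, 0.3) for k in range(11))

# Density: integrate (midpoint rule) so density * length becomes probability.
steps = 10_000
dx = 2.0 / steps
prob_within_one = sum(normal_pdf(-1.0 + (i + 0.5) * dx) * dx for i in range(steps))
# prob_within_one approximates P(-1 < X < 1), about 0.6827 for a standard normal.
```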
| <p>Probability mass functions are used for discrete distributions. It assigns a probability to each point in the sample space. Whereas the integral of a probability density function gives the probability that a random variable falls within some interval. </p>
|
matrices | <p>I am taking a proof-based introductory course to Linear Algebra as an undergrad student of Mathematics and Computer Science. The author of my textbook (Friedberg's <em>Linear Algebra</em>, 4th Edition) says in the introduction to Chapter 4:</p>
<blockquote>
<p>The determinant, which has played a prominent role in the theory of linear algebra, is a special scalar-valued function defined on the set of square matrices. <strong>Although it still has a place in the study of linear algebra and its applications, its role is less central than in former times.</strong> </p>
</blockquote>
<p>He even sets up the chapter in such a way that you can skip going into detail and move on:</p>
<blockquote>
<p>For the reader who prefers to treat determinants lightly, Section 4.4 contains the essential properties that are needed in later chapters.</p>
</blockquote>
<p>Could anyone offer a didactic and simple explanation that refutes or asserts the author's statement?</p>
| <p>Friedberg is not wrong, at least on a historical standpoint, as I am going to try to show it.</p>
<p>Determinants were discovered "as such" in the second half of the 18th century by Cramer who used them in his celebrated rule for the solution of a linear system (in terms of quotients of determinants). Their spread was rather rapid among mathematicians of the next two generations ; they discovered properties of determinants that now, with our vision, we mostly express in terms of matrices.</p>
<p>Cauchy has given two important results about determinants as explained in the very nice article by Hawkins referenced below :</p>
<ul>
<li><p>around 1815, Cauchy discovered the multiplication rule (rows times columns) of two determinants. This is typical of a result that has been completely revamped : nowadays, this rule is for the multiplication of matrices, and determinants' multiplication is restated as the homomorphism rule <span class="math-container">$\det(A \times B)= \det(A)\det(B)$</span>.</p>
</li>
<li><p>around 1825, he discovered eigenvalues "associated with a symmetric <em>determinant</em>" and established the important result that these eigenvalues are real ; this discovery has its roots in astronomy, in connection with Sturm, explaining the word "secular values" he attached to them: see for example <a href="http://www2.cs.cas.cz/harrachov/slides/Golub.pdf" rel="noreferrer">this</a>.</p>
</li>
</ul>
<p>Matrices made a shy apparition in the mid-19th century (in England) ; "matrix" is a term coined by Sylvester <a href="http://mathworld.wolfram.com/Matrix.html" rel="noreferrer">see here</a>. I strongly advise to take a look at his elegant style in his <a href="https://archive.org/stream/collectedmathema04sylvuoft#page/n7/mode/2up" rel="noreferrer">Collected Papers</a>.</p>
<p>Together with his friend Cayley, they can rightly be named the founding fathers of linear algebra, with determinants as permanent reference. Here is a major quote of Sylvester:</p>
<p><em>"I have in previous papers defined a "Matrix" as a rectangular array of terms, out of which different systems of determinants may be engendered as from the womb of a common parent".</em></p>
<p>A lot of important polynomials are either generated or advantageously expressed as determinants:</p>
<ul>
<li><p>the characteristic polynomial (of a matrix) is expressed as the famous <span class="math-container">$\det(A-\lambda I)$</span>,</p>
</li>
<li><p>in particular, the theory of orthogonal polynomials mainly developed at the end of 19th century, can be expressed in great part with determinants,</p>
</li>
<li><p>the "resultant" of two polynomials, invented by Sylvester (giving a condition for these polynomials to have a common root), etc.</p>
</li>
</ul>
<p>Let us repeat it : for a mid-19th century mathematician, a <em>square</em> array of numbers has necessarily a <strong>value</strong> (its determinant): it cannot have any other meaning. If it is a <em>rectangular</em> array, the numbers attached to it are the determinants of submatrices that can be "extracted" from the array.</p>
<p>The identification of "Linear Algebra" as an integral (and new) part of Mathematics is mainly due to the German School (say from 1870 till the 1930's). I don't cite the names, there are too many of them. An example among many others of this german domination: the germenglish word "eigenvalue". The word "kernel" could have remained the german word "kern" that appears around 1900 (see <a href="http://jeff560.tripod.com/mathword.html" rel="noreferrer">this site</a>).</p>
<p>The triumph of Linear Algebra is rather recent (mid-20th century). "Triumph" meaning that now Linear Algebra has found a very central place. Determinants in all that ? Maybe the biggest blade in this swissknife, but not more ; another invariant (this term would deserve a long paragraph by itself), the <strong>trace</strong>, would be another blade, not the smallest.</p>
<p>In 19th century, Geometry was still at the heart of mathematical education; therefore, the connection between geometry and determinants has been essential in the development of linear algebra. Some cornerstones:</p>
<ul>
<li>the development of projective geometry, <em>in its analytical form,</em> in the 1850s. This development has led in particular to place homographies at the heart of projective geometry, with their associated matricial expression. Besides, conic curves, described by a quadratic form, can as well be written under an all-matricial expression <span class="math-container">$X^TMX=0$</span> where <span class="math-container">$M$</span> is a symmetrical <span class="math-container">$3 \times 3$</span> matrix. This convergence to a unique and new "algebra" has taken time to be recognized.</li>
</ul>
<p>A side remark: this kind of reflexions has been capital in the decision of Bourbaki team to avoid all figures and adopt the extreme view of reducing geometry to linear algebra (see the <a href="https://hsm.stackexchange.com/q/2578/3730">"Down with Euclid"</a> of J. Dieudonné in the sixties).</p>
<p>Different examples of the emergence of new trends :</p>
<p>a) the concept of <strong>rank</strong>: for example, a pair of straight lines is a conic section whose matrix has rank 1. The "rank" of a matrix used to be defined in an indirect way as the "dimension of the largest nonzero determinant that can be extracted from the matrix". Nowadays, the rank is defined in a more straightforward way as the dimension of the range space... at the cost of a little more abstraction.</p>
<p>b) the concept of <strong>linear transformations</strong> and <strong>duality</strong> arising from geometry: <span class="math-container">$X=(x,y,t)^T\rightarrow U=MX=(u,v,w)$</span> between points <span class="math-container">$(x,y)$</span> and straight lines with equations <span class="math-container">$ux+vy+w=0$</span>. More precisely, the tangential description, i.e., the constraint on the coefficients <span class="math-container">$U^T=(u,v,w)$</span> of the tangent lines to the conical curve has been recognized as associated with <span class="math-container">$M^{-1}$</span> (assuming <span class="math-container">$\det(M) \neq 0$</span>!), due to relationship</p>
<p><span class="math-container">$$X^TMX=X^TMM^{-1}MX=(MX)^T(M^{-1})(MX)=U^TM^{-1}U=0$$</span>
<span class="math-container">$$=\begin{pmatrix}u&v&w\end{pmatrix}\begin{pmatrix}A & B & D \\ B & C & E \\ D & E & F \end{pmatrix}\begin{pmatrix}u \\ v \\ w \end{pmatrix}=0$$</span></p>
<p>whereas, in 19th century, it was usual to write the previous quadratic form as :</p>
<p><span class="math-container">$$\det \begin{pmatrix}M^{-1}&U\\U^T&0\end{pmatrix}=\begin{vmatrix}a&b&d&u\\b&c&e&v\\d&e&f&w\\u&v&w&0\end{vmatrix}=0$$</span></p>
<p>as the determinant of a matrix obtained by "bordering" <span class="math-container">$M^{-1}$</span> precisely by <span class="math-container">$U$</span></p>
<p>(see the excellent lecture notes (<a href="http://www.maths.gla.ac.uk/wws/cabripages/conics/conics0.html" rel="noreferrer">http://www.maths.gla.ac.uk/wws/cabripages/conics/conics0.html</a>)). It is to be said that the idea of linear transformations, especially orthogonal transformations, arose even earlier in the framework of the theory of numbers (quadratic representations).</p>
<p>Remark: the way the former identities have been written use matrix algebra notations and rules that were unknown in the 19th century, with the notable exception of Grassmann's "Ausdehnungslehre", whose ideas were too ahead of his time (1844) to have a real influence.</p>
<p>c) the concept of <strong>eigenvector/eigenvalue</strong>, initially motivated by the determination of "principal axes" of conics and quadrics.</p>
<ul>
<li>the very idea of "geometric transformation" (more or less born with Klein circa 1870) associated with an array of numbers (when linear or projective). A matrix, of course, is much more than an array of numbers... But think for example of the persistence of the expression "table of direction cosines" (instead of "orthogonal matrix") as can be found, for example, still in the 2002 edition of Analytical Mechanics by A.I. Lorrie.</li>
</ul>
<p>d) The concept of "companion matrix" of a polynomial <span class="math-container">$P$</span>, that could be considered as a tool but is more fundamental than that (<a href="https://en.wikipedia.org/wiki/Companion_matrix" rel="noreferrer">https://en.wikipedia.org/wiki/Companion_matrix</a>). It can be presented and "justified" as a "nice determinant" :
In fact, it has much more to say, with the natural interpretation for example in the framework of <span class="math-container">$\mathbb{F}_p[X]$</span> (polynomials with coefficients in a finite field) as the matrix of multiplication by <span class="math-container">$P(X)$</span>. (<a href="https://glassnotes.github.io/OliviaDiMatteo_FiniteFieldsPrimer.pdf" rel="noreferrer">https://glassnotes.github.io/OliviaDiMatteo_FiniteFieldsPrimer.pdf</a>), giving rise to matrix representations of such fields. Another remarkable application of companion matrices : the main numerical method for obtaining the roots of a polynomial is by computing the eigenvalues of its companion matrix using a Francis "QR" iteration (see (<a href="https://math.stackexchange.com/q/68433">https://math.stackexchange.com/q/68433</a>)).</p>
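<p>The companion-matrix remark in (d) is easy to illustrate. A sketch in Python (standard library only; the polynomial <span class="math-container">$x^3-6x^2+11x-6=(x-1)(x-2)(x-3)$</span> is an arbitrary example of mine): the roots of a monic polynomial are exactly the eigenvalues of its companion matrix, i.e. the zeros of <span class="math-container">$\det(xI-C)$</span>.</p>

```python
def companion(coeffs):
    """Companion matrix of the monic polynomial
    x^n + c[n-1] x^(n-1) + ... + c[0], given coeffs = [c0, ..., c_{n-1}]:
    1s on the subdiagonal, last column equal to -coeffs."""
    n = len(coeffs)
    return [[(1 if i == j + 1 else 0) for j in range(n - 1)] + [-coeffs[i]]
            for i in range(n)]

def det3(m):
    """Determinant of a 3x3 matrix by cofactor expansion along the first row."""
    return (m[0][0] * (m[1][1] * m[2][2] - m[1][2] * m[2][1])
            - m[0][1] * (m[1][0] * m[2][2] - m[1][2] * m[2][0])
            + m[0][2] * (m[1][0] * m[2][1] - m[1][1] * m[2][0]))

# x^3 - 6x^2 + 11x - 6 = (x - 1)(x - 2)(x - 3)
C = companion([-6.0, 11.0, -6.0])

def char_at(x):
    """Evaluate det(x*I - C): the characteristic polynomial of C at x."""
    return det3([[x * (i == j) - C[i][j] for j in range(3)] for i in range(3)])
```

(Production eigenvalue solvers go the other way, of course: they apply a QR iteration to $C$ rather than expanding the determinant.)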
<p>References:</p>
<p>I discovered recently a rather similar question with a very complete answer by Denis Serre, a specialist in the domain of matrices :
<a href="https://mathoverflow.net/q/35988/88984">https://mathoverflow.net/q/35988/88984</a></p>
<p>The article by Thomas Hawkins : "Cauchy and the spectral theory of matrices", Historia Mathematica 2, 1975, 1-29.</p>
<p>See also (<a href="http://www.mathunion.org/ICM/ICM1974.2/Main/icm1974.2.0561.0570.ocr.pdf" rel="noreferrer">http://www.mathunion.org/ICM/ICM1974.2/Main/icm1974.2.0561.0570.ocr.pdf</a>)</p>
<p>An important bibliography is to be found in (<a href="http://www-groups.dcs.st-and.ac.uk/history/HistTopics/References/Matrices_and_determinants.html" rel="noreferrer">http://www-groups.dcs.st-and.ac.uk/history/HistTopics/References/Matrices_and_determinants.html</a>).</p>
<p>See also a good paper by Nicholas Higham : (<a href="http://eprints.ma.man.ac.uk/954/01/cay_syl_07.pdf" rel="noreferrer">http://eprints.ma.man.ac.uk/954/01/cay_syl_07.pdf</a>)</p>
<p>For conic sections and projective geometry, see a) this excellent chapter of lectures of the University of Vienna (see the other chapters as well) : (<a href="https://www-m10.ma.tum.de/foswiki/pub/Lehre/WS0809/GeometrieKalkueleWS0809/ch10.pdf" rel="noreferrer">https://www-m10.ma.tum.de/foswiki/pub/Lehre/WS0809/GeometrieKalkueleWS0809/ch10.pdf</a>). See as well : (maths.gla.ac.uk/wws/cabripages/conics/conics0.html).</p>
<p>Don't miss the following very interesting paper about various kinds of useful determinants : <a href="https://arxiv.org/pdf/math/9902004.pdf" rel="noreferrer">https://arxiv.org/pdf/math/9902004.pdf</a></p>
<p>See also <a href="https://ipfs.io/ipfs/QmXoypizjW3WknFiJnKLwHCnL72vedxjQkDDP1mXWo6uco/wiki/Matrix_(mathematics).html" rel="noreferrer">this</a></p>
<p>Very interesting precisions on determinants in <a href="https://arxiv.org/pdf/math/9902004.pdf" rel="noreferrer">this text</a> and in these <a href="https://math.stackexchange.com/q/194579">answers</a>.</p>
<p>A fundamental work on "The Theory of Determinants" in 4 volumes has been written by Thomas Muir : <a href="http://igm.univ-mlv.fr/%7Eal/Classiques/Muir/History_5/VOLUME5_TEXT.PDF" rel="noreferrer">http://igm.univ-mlv.fr/~al/Classiques/Muir/History_5/VOLUME5_TEXT.PDF</a> (years 1906, 1911, 1922, 1923) for the last volumes or, for all of them <a href="https://ia800201.us.archive.org/17/items/theoryofdetermin01muiruoft/theoryofdetermin01muiruoft.pdf" rel="noreferrer">https://ia800201.us.archive.org/17/items/theoryofdetermin01muiruoft/theoryofdetermin01muiruoft.pdf</a>. It is very interesting to take random pages and see how the determinant-mania has been important, especially in the second half of the 19th century. Matrices appear at some places with the double bar convention that lasted a very long time. Matrices are mentioned here and there, rarely to their advantage...</p>
<p>Many historical details about determinants and matrices can be found <a href="https://mathshistory.st-andrews.ac.uk/HistTopics/Matrices_and_determinants/" rel="noreferrer">here</a>.</p>
| <p><strong>It depends who you speak to.</strong></p>
<ul>
<li>In <strong>numerical mathematics</strong>, where people actually have to compute things on a computer, it is largely recognized that <strong>determinants are useless</strong>. Indeed, in order to compute determinants, either you use the Laplace recursive rule ("violence on minors"), which costs <span class="math-container">$O(n!)$</span> and is infeasible already for very small values of <span class="math-container">$n$</span>, or you go through a triangular decomposition (Gaussian elimination), which by itself already tells you everything you needed to know in the first place. Moreover, for most reasonably-sized matrices containing floating-point numbers, determinants overflow or underflow (try <span class="math-container">$\det \frac{1}{10} I_{350\times 350}$</span>, for instance). To put another nail in the coffin, computing eigenvalues by finding the roots of <span class="math-container">$\det(A-xI)$</span> is hopelessly unstable. In short: in numerical computing, whatever you want to do with determinants, there is a better way to do it without using them.</li>
<li>In <strong>pure mathematics</strong>, where people are perfectly fine knowing that an explicit formula exists, all the examples are <span class="math-container">$3\times 3$</span> anyway and people make computations by hand, <strong>determinants are invaluable</strong>. If one uses Gaussian elimination instead, all those divisions complicate computations horribly: one needs to take different paths whether things are zero or not, so when computing symbolically one gets lost in a myriad of cases. The great thing about determinants is that they give you an explicit polynomial formula to tell when a matrix is invertible or not: this is extremely useful in proofs, and allows for lots of elegant arguments. For instance, try proving this fact without determinants: given <span class="math-container">$A,B\in\mathbb{R}^{n\times n}$</span>, if <span class="math-container">$A+Bx$</span> is singular for <span class="math-container">$n+1$</span> distinct real values of <span class="math-container">$x$</span>, then it is singular for all values of <span class="math-container">$x$</span>. This is the kind of things you need in proofs, and determinants are a priceless tool. Who cares if the explicit formula has a exponential number of terms: they have a very nice structure, with lots of neat combinatorial interpretations. </li>
</ul>
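The underflow remark above is easy to reproduce, even without a full determinant routine (a minimal Python sketch: for $\frac{1}{10}I_n$ the determinant is just $0.1^n$, and working in log-space, which is what NumPy's `numpy.linalg.slogdet` does, is the standard workaround):

```python
import math

# For 0.1 * I_n the determinant is just 0.1**n, which underflows
# double precision long before n = 350 (min subnormal is ~5e-324).
n = 350
det = 0.1 ** n
print(det)          # 0.0 -- underflowed

# Working in log-space (as numpy.linalg.slogdet does) avoids this:
logdet = n * math.log(0.1)
print(logdet)       # about -805.9
```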
|
probability | <p>Wikipedia says:</p>
<blockquote>
<p>The probability density function is nonnegative everywhere, and its integral over the entire space is equal to one.</p>
</blockquote>
<p>and it also says.</p>
<blockquote>
<p>Unlike a probability, a probability density function can take on values greater than one; for example, the uniform distribution on the interval $[0, \frac{1}{2}]$ has probability density $f(x) = 2$ for $0 ≤ x ≤ \frac{1}{2}$ and $f(x) = 0$ elsewhere.</p>
</blockquote>
<p>How are these two things compatible?</p>
| <p>Consider the uniform distribution on the interval from $0$ to $1/2$. The value of the density is $2$ on that interval, and $0$ elsewhere. The area under the graph is the area of a rectangle. The length of the base is $1/2$, and the height is $2$
$$
\int\text{density} = \text{area of rectangle} = \text{base} \cdot\text{height} = \frac 12\cdot 2 = 1.
$$</p>
<p>More generally, if the density has a large value over a small region, then the probability is comparable to the value times the size of the region. (I say "comparable to" rather than "equal to" because the value may not be the same at all points in the region.) The probability within the region must not exceed $1$. A large number---much larger than $1$---multiplied by a small number (the size of the region) can be less than $1$ if the latter number is small enough.</p>
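A quick numerical illustration of the rectangle picture (a minimal Python sketch using a midpoint Riemann sum): the density exceeds $1$ everywhere on $[0, 1/2]$, yet its integral is $1$.

```python
# Uniform density on [0, 1/2]: f(x) = 2 there, 0 elsewhere.
def f(x):
    return 2.0 if 0 <= x <= 0.5 else 0.0

# Midpoint Riemann sum of f over [0, 1]: density > 1, integral = 1.
n = 100_000
dx = 1.0 / n
integral = sum(f((i + 0.5) * dx) * dx for i in range(n))
print(integral)   # approximately 1.0
```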
| <p>Remember that the 'PD' in PDF stands for "probability density", not probability. Density means probability per unit value of the random variable. That can easily exceed $1$. What has to be true is that the integral of this density function taken with respect to this value must be exactly $1$.</p>
<p>If we know a PDF (e.g. that of a normal distribution) and ask for the "probability" of a given value, say $x=1$, the answer is that any single exact value has probability zero.
To find the probability that the output of a random event falls within some range, you integrate the PDF over that range.</p>
<p>Also see <a href="http://www.mathworks.com/matlabcentral/newsreader/view_thread/248797" rel="noreferrer">Why mvnpdf give probablity larger than 1?</a></p>
|
matrices | <p>In my linear algebra class, we just talked about determinants. So far I’ve been understanding the material okay, but now I’m very confused. I get that when the determinant is zero, the matrix doesn’t have an inverse. I can find the determinant of a <span class="math-container">$2\times 2$</span> matrix by the formula. Our teacher showed us how to compute the determinant of an <span class="math-container">$n \times n$</span> matrix by breaking it up into the determinants of smaller matrices. Apparently there is a way by summing over a bunch of permutations. But the notation is really hard for me and I don’t really know what’s going on with them anymore. Can someone help me figure out what a determinant is, intuitively, and how all those definitions of it are related?</p>
| <p>Your trouble with determinants is pretty common. They’re a hard thing to teach well, too, for two main reasons that I can see: the formulas you learn for computing them are messy and complicated, and there’s no “natural” way to interpret the value of the determinant, the way it’s easy to interpret the derivatives you do in calculus at first as the slope of the tangent line. It’s hard to believe things like the invertibility condition you’ve stated when it’s not even clear what the numbers mean and where they come from.</p>
<p>Rather than show that the many usual definitions are all the same by comparing them to each other, I’m going to state some general properties of the determinant that I claim are enough to specify uniquely what number you should get when you put in a given matrix. Then it’s not too bad to check that all of the definitions for determinant that you’ve seen satisfy those properties I’ll state.</p>
<p>The first thing to think about if you want an “abstract” definition of the determinant to unify all those others is that it’s not an array of numbers with bars on the side. What we’re really looking for is a function that takes N vectors (the N columns of the matrix) and returns a number. Let’s assume we’re working with real numbers for now.</p>
<p>Remember how those operations you mentioned change the value of the determinant?</p>
<ol>
<li><p>Switching two rows or columns changes the sign.</p>
</li>
<li><p>Multiplying one row by a constant multiplies the whole determinant by that constant.</p>
</li>
<li><p>The general fact that number two draws from: the determinant is <em>linear in each row</em>. That is, if you think of it as a function <span class="math-container">$\det: \mathbb{R}^{n^2} \rightarrow \mathbb{R}$</span>, then <span class="math-container">$$ \det(a \vec v_1 +b \vec w_1 , \vec v_2 ,\ldots,\vec v_n ) = a \det(\vec v_1,\vec v_2,\ldots,\vec v_n) + b \det(\vec w_1, \vec v_2, \ldots,\vec v_n),$$</span> and the corresponding condition in each other slot.</p>
</li>
<li><p>The determinant of the identity matrix <span class="math-container">$I$</span> is <span class="math-container">$1$</span>.</p>
</li>
</ol>
<p>I claim that these facts are enough to define a <em>unique function</em> that takes in N vectors (each of length N) and returns a real number, the determinant of the matrix given by those vectors. I won’t prove that, but I’ll show you how it helps with some other interpretations of the determinant.</p>
<p>In particular, there’s a nice geometric way to think of a determinant. Consider the unit cube in N dimensional space: the set of N vectors of length 1 with coordinates 0 or 1 in each spot. The determinant of the linear transformation (matrix) T is the <em>signed volume of the region gotten by applying T to the unit cube</em>. (Don’t worry too much if you don’t know what the “signed” part means, for now).</p>
<p>How does that follow from our abstract definition?</p>
<p>Well, if you apply the identity to the unit cube, you get back the unit cube. And the volume of the unit cube is 1.</p>
<p>If you stretch the cube by a constant factor in one direction only, the new volume is that constant. And if you stack two blocks together aligned on the same direction, their combined volume is the sum of their volumes: this all shows that the signed volume we have is linear in each coordinate when considered as a function of the input vectors.</p>
<p>Finally, when you switch two of the vectors that define the unit cube, you flip the orientation. (Again, this is something to come back to later if you don’t know what that means).</p>
<p>So there are ways to think about the determinant that aren’t symbol-pushing. If you’ve studied multivariable calculus, you could think about, with this geometric definition of determinant, why determinants (the Jacobian) pop up when we change coordinates doing integration. Hint: a derivative is a linear approximation of the associated function, and consider a “differential volume element” in your starting coordinate system.</p>
<p>It’s not too much work to check that the area of the parallelogram formed by vectors <span class="math-container">$(a,b)$</span> and <span class="math-container">$(c,d)$</span> is <span class="math-container">$\Big|{}^{a\;b}_{c\;d}\Big|$</span>
either: you might try that to get a sense for things.</p>
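The suggested exercise can at least be sanity-checked numerically (a minimal Python sketch; the shoelace formula computes the signed area of the parallelogram directly from its vertices):

```python
# Signed area of the parallelogram spanned by (a, b) and (c, d)
# equals the 2x2 determinant a*d - b*c.
def det2(a, b, c, d):
    return a * d - b * c

# Shoelace formula: half the sum of cross products around the boundary.
def shoelace(pts):
    area2 = 0.0
    for (x1, y1), (x2, y2) in zip(pts, pts[1:] + pts[:1]):
        area2 += x1 * y2 - x2 * y1
    return area2 / 2.0

a, b, c, d = 3.0, 1.0, 1.0, 2.0
# Parallelogram with vertices (0,0), (a,b), (a+c,b+d), (c,d) in order.
pts = [(0, 0), (a, b), (a + c, b + d), (c, d)]
print(det2(a, b, c, d), shoelace(pts))  # both 5.0
```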
| <p>You could think of a determinant as a volume. Think of the columns of the matrix as vectors at the origin forming the edges of a skewed box. The determinant gives the volume of that box. For example, in 2 dimensions, the columns of the matrix are the edges of a rhombus.</p>
<p>You can derive the algebraic properties from this geometrical interpretation. For example, if two of the columns are linearly dependent, your box is missing a dimension and so it's been flattened to have zero volume.</p>
|
matrices | <p>there is a similar thread here <a href="https://math.stackexchange.com/questions/311580/coordinate-free-proof-of-operatornametrab-operatornametrba">Coordinate-free proof of $\operatorname{Tr}(AB)=\operatorname{Tr}(BA)$?</a>, but I'm only looking for a simple linear algebra proof. </p>
| <p>Observe that if $A$ and $B$ are $n\times n$ matrices, $A=(a_{ij})$, and $B=(b_{ij})$, then
$$(AB)_{ii} = \sum_{k=1}^n a_{ik}b_{ki},$$
so
$$
\operatorname{Tr}(AB) = \sum_{j=1}^n\sum_{k=1}^n a_{jk}b_{kj}.
$$
Conclude by calculating the term $(BA)_{ii}$ in the same way and comparing both traces.</p>
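The index computation is easy to sanity-check numerically (a minimal sketch in plain Python, no libraries assumed):

```python
import random

# Naive matrix product of two n x n matrices given as lists of rows.
def matmul(A, B):
    n = len(A)
    return [[sum(A[i][k] * B[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

def trace(M):
    return sum(M[i][i] for i in range(len(M)))

n = 4
A = [[random.random() for _ in range(n)] for _ in range(n)]
B = [[random.random() for _ in range(n)] for _ in range(n)]
# Tr(AB) and Tr(BA) sum the same products a_jk * b_kj, so they agree.
print(abs(trace(matmul(A, B)) - trace(matmul(B, A))) < 1e-12)  # True
```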
| <blockquote>
<p>For any couple $(A,B)$ of $n\times n$ matrices with complex entries, the following identity holds:
$$ \operatorname{Tr}(AB) = \operatorname{Tr}(BA).$$</p>
</blockquote>
<p><strong>Proof</strong>. Assuming $A$ is an invertible matrix, $AB$ and $BA$ share the same characteristic polynomial, since they are conjugated matrices
due to $BA = A^{-1}(AB)A$. In particular they have the same trace. Equivalently, they share the same eigenvalues (counted according to their
algebraic multiplicity) hence they share the sum of such eigenvalues. On the other hand, if $A$ is a singular matrix then
$A_\varepsilon\stackrel{\text{def}}{=} A+\varepsilon I$ is an invertible matrix for any $\varepsilon\neq 0$ small enough. It follows that
$\operatorname{Tr}(A_\varepsilon B) = \operatorname{Tr}(B A_\varepsilon)$, and since $\operatorname{Tr}$ is a continuous operator,
by considering the limits of both sides as $\varepsilon\to 0$ we get $\operatorname{Tr}(AB)=\operatorname{Tr}(BA)$ just as well.</p>
|
logic | <p>Why is this true?</p>
<p><span class="math-container">$\exists x\,\big(P(x) \rightarrow \forall y\:P(y)\big)$</span></p>
| <p>Since this may be homework, I do not want to provide the full formal proof, but I will share the informal justification. Classical first-order logic typically makes the assumption of existential import (i.e., that the domain of discourse is non-empty). In classical logic, the principle of excluded middle holds, i.e., that for any $\phi$, either $\phi$ or $\lnot\phi$ holds. Since I first encountered this kind of sentence where $P(x)$ was interpreted as "$x$ is a bird," I will use that in the following argument. Finally, recall that a material conditional $\phi \to \psi$ is true if and only if either $\phi$ is false or $\psi$ is true.</p>
<p>By excluded middle, it is either true that everything is a bird, or that not everything is a bird. Let us consider these cases:</p>
<ul>
<li>If everything is a bird, then pick an arbitrary individual $x$, and note that the conditional “if $x$ is a bird, then everything is a bird,” is true, since the consequent is true. Therefore, if everything is a bird, then there is something such that if it is a bird, then everything is a bird.</li>
<li>If it is not the case that everything is a bird, then there must be some $x$ which is not a bird. Then consider the conditional “if $x$ is a bird, then everything is a bird.” It is true because its antecedent, “$x$ is a bird,” is false. Therefore, if it is not the case that everything is a bird, then there is something (a non-bird, in fact) such that if it is a bird, then everything is a bird.</li>
</ul>
<p>Since it holds in each of the exhaustive cases that there is something such that if it is a bird, then everything is a bird, we conclude that there is, in fact, something such that if it is a bird, then everything is a bird.</p>
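For what it's worth, the case analysis can also be checked exhaustively: over any small non-empty domain, every possible interpretation of $P$ makes the sentence true (a minimal Python brute-force sketch):

```python
from itertools import product

# Check  exists x (P(x) -> forall y P(y))  with the material conditional
# P(x) -> Q  read as  (not P(x)) or Q.
def formula_holds(domain, P):
    return any((not P[x]) or all(P[y] for y in domain) for x in domain)

# Try every interpretation of P on every non-empty domain of size <= 6.
for n in range(1, 7):
    domain = range(n)
    for values in product([False, True], repeat=n):
        P = dict(zip(domain, values))
        assert formula_holds(domain, P)
print("holds in every non-empty model checked")
```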
<h2>Alternatives</h2>
<p>Since questions about the domain came up in the comments, it seems worthwhile to consider the three preconditions to this argument: existential import (the domain is non-empty); excluded middle ($\phi \lor \lnot\phi$); and the material conditional ($(\phi \to \psi) \equiv (\lnot\phi \lor \psi)$). Each of these can be changed in a way that can affect the argument. This might not be the place to examine <em>how</em> each of these affects the argument, but we can at least give pointers to resources about the alternatives.</p>
<ul>
<li>Existential import asserts that the universe of discourse is non-empty. <a href="http://plato.stanford.edu/entries/logic-free/">Free logics</a> relax this constraint. If the universe of discourse were empty, it would seem that $\exists x.(P(x) \to \forall y.P(y))$ should be vacuously false.</li>
<li><a href="http://en.wikipedia.org/wiki/Intuitionistic_logic">Intuitionistic logics</a> do not presume the excluded middle, in general. The argument above started with a claim of the form “either $\phi$ or $\lnot\phi$.”</li>
<li>There are plenty of <a href="http://en.wikipedia.org/wiki/Material_conditional#Philosophical_problems_with_material_conditional">philosophical difficulties with the material conditional</a>, especially as used to represent “if … then …” sentences in natural language. If we took the conditional to be a counterfactual, for instance, and so were considering the sentence “there is something such that if it were a bird (even if it is not <em>actually</em> a bird), then everything would be a bird,” it seems like it should no longer be provable.</li>
</ul>
| <p>Hint: The only way for $A\implies B$ to be false is for $A$ to be true and $B$ to be false.</p>
<p>I don't think this is actually true unless you know your domain isn't empty. If your domain is empty, then $\forall y: P(y)$ is true "vacuously," but $\exists x: Q$ is not true for any $Q$.</p>
|
logic | <p>maybe this question doesn't make sense at all. I don't know exactly the meaning of all these concepts, except the internal language of a topos (and searching on the literature is not helping at all). </p>
<p>However, vaguely speaking by a logic I mean a pair $(\Sigma, \vdash)$ where $\Sigma$ is a signature (it has the types, functionals and relational symbols) and a consequence operator $\vdash$. </p>
<p>By an internal logic in a category I mean viewing each object as a type and each morphism as a term. </p>
<p>So what's the difference between a logic, an internal logic (language) of a category, an internal logic (language) of a topos and a type theory? Furthermore why the interchanging in the literature between the word "logic" and "language" when dealing with the internal properties of a category? Moreover, how higher order logic, modal logic and fuzzy logic suit in these concepts stated above?</p>
<p>Thanks in advance.</p>
| <p>The internal logic of a topos is an instance of the internal logic of a category (since toposes are special kinds of categories). The internal logic of toposes (instead of an arbitrary category) can also be interpreted with the Kripke-Joyal semantics. (<strong>Update almost ten years later:</strong> A form of the Kripke-Joyal semantics exists also for general categories which are not toposes.) For more on this, check part D of Johnstone's <em>Elephant</em> and chapter VI of Mac Lane's and Moerdijk's <em>Sheaves in Geometry and Logic</em>, <a href="http://www.mathematik.tu-darmstadt.de/%7Estreicher/CTCL.pdf" rel="nofollow noreferrer">lecture notes by Thomas Streicher</a>, and of course the nLab articles on these matters.</p>
<p>I don't know the term "internal logic of a type theory". But check the (very accessible) introduction of the <a href="http://homotopytypetheory.org/book/" rel="nofollow noreferrer">HoTT book</a> on how type theory and logic are related.</p>
<p>The terms "internal logic" and "internal language" are often used synonymously. Personally, I prefer "internal language", since this stresses that one can use it not only to <em>reason</em> internally, but also to <em>construct objects and morphisms</em> internally.</p>
<p>The internal language of a topos <span class="math-container">$\mathcal{E}$</span> is higher-order in the sense that, because of the existence of a subobject classifier, every object <span class="math-container">$X \in \mathcal{E}$</span> has an associated <a href="http://ncatlab.org/nlab/show/power+object" rel="nofollow noreferrer">power object</a> <span class="math-container">$\mathcal{P}(X) \in \mathcal{E}$</span> which one can quantify over in the internal language. (In the special case <span class="math-container">$\mathcal{E} = \mathrm{Set}$</span>, the internal language is really the same as the usual mathematical language and <span class="math-container">$\mathcal{P}(X)$</span> is simply the power set of <span class="math-container">$X$</span>.) In an arbitrary category, power objects need not exist, such that their internal language is (at best) first-order. Generally, richer categorical properties allow you to interpret greater fragments of first-order logic, this is neatly explained in Johnstone's part D.</p>
<p>Any <a href="http://ncatlab.org/nlab/show/Lawvere-Tierney+topology" rel="nofollow noreferrer">Lawvere-Tierney topology</a> in a topos gives rise to a modal operator in its associated internal language. (These operators can have concrete geometric meanings, for instance "on a dense open set it holds that" or "on an open neighbourhood of a point <span class="math-container">$x$</span> it holds that".)</p>
<p>I don't know of a direct relationship between fuzzy logic and the internal language of toposes, see <a href="https://math.stackexchange.com/questions/55957/fuzzy-logic-and-topos-theory">an older question here</a>.</p>
| <p>Your definition of logic is pretty much correct. A logic contains both the <em>language</em> which the signature <span class="math-container">$\Sigma$</span> generates and the deductive system defined by <span class="math-container">$\vdash$</span>.</p>
<p>A <em>type theory</em> is a logic with different sorts of individuals (called "types") and constructions that generate new types from existing ones, like product and arrow types.</p>
<p>An <em>internal logic</em> is a type theory derived from a category and the <em>internal language</em> is the language part of that logic. Specifically, the atomic sorts of the internal language are the objects of the category. Since a topos is a specific category of categories, the internal logic of a topos is the derived type theory.</p>
<p>The modalities of modal logic can sometimes be related to operators on subobjects in a category, but only if they preserve logical equivalence: <span class="math-container">$\alpha\iff\beta$</span> should imply <span class="math-container">$\Box\alpha \iff \Box \beta$</span>. Otherwise, the induced function of subobjects is not well-defined.</p>
<p>Fuzzy logic is misnamed. It is more like a model theory than a logic.</p>
|
linear-algebra | <p>If I have a covariance matrix for a data set and I multiply it by one of its eigenvectors, say the eigenvector with the highest eigenvalue, the result is a scaled version of that eigenvector. </p>
<p>What does this really tell me? Why is this the principal component? What property makes it a principal component? Geometrically, I understand that the principal component (eigenvector) will be sloped at the general slope of the data (loosely speaking). Again, can someone help understand why this happens? </p>
| <p><strong>Short answer:</strong> The eigenvector with the largest eigenvalue is the direction along which the data set has the maximum variance. Meditate upon this.</p>
<p><strong>Long answer:</strong> Let's say you want to reduce the dimensionality of your data set, say down to just one dimension. In general, this means picking a unit vector <span class="math-container">$u$</span>, and replacing each data point, <span class="math-container">$x_i$</span>, with its projection along this vector, <span class="math-container">$u^T x_i$</span>. Of course, you should choose <span class="math-container">$u$</span> so that you retain as much of the variation of the data points as possible: if your data points lay along a line and you picked <span class="math-container">$u$</span> orthogonal to that line, all the data points would project onto the same value, and you would lose almost all the information in the data set! So you would like to maximize the <em>variance</em> of the new data values <span class="math-container">$u^T x_i$</span>. It's not hard to show that if the covariance matrix of the original data points <span class="math-container">$x_i$</span> was <span class="math-container">$\Sigma$</span>, the variance of the new data points is just <span class="math-container">$u^T \Sigma u$</span>. As <span class="math-container">$\Sigma$</span> is symmetric, the unit vector <span class="math-container">$u$</span> which maximizes <span class="math-container">$u^T \Sigma u$</span> is nothing but the eigenvector with the largest eigenvalue.</p>
<p>If you want to retain more than one dimension of your data set, in principle what you can do is first find the largest principal component, call it <span class="math-container">$u_1$</span>, then subtract that out from all the data points to get a "flattened" data set that has <em>no</em> variance along <span class="math-container">$u_1$</span>. Find the principal component of this flattened data set, call it <span class="math-container">$u_2$</span>. If you stopped here, <span class="math-container">$u_1$</span> and <span class="math-container">$u_2$</span> would be a basis of the two-dimensional subspace which retains the most variance of the original data; or, you can repeat the process and get as many dimensions as you want. As it turns out, all the vectors <span class="math-container">$u_1, u_2, \ldots$</span> you get from this process are just the eigenvectors of <span class="math-container">$\Sigma$</span> in decreasing order of eigenvalue. That's why these are the principal components of the data set.</p>
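This maximum-variance property is easy to verify numerically (a minimal sketch, assuming NumPy is available: compare the projected variance along the top eigenvector of the covariance matrix against many random directions):

```python
import numpy as np

rng = np.random.default_rng(0)
# Anisotropic 2-D data: most of the variance lies along one direction.
X = rng.normal(size=(1000, 2)) @ np.array([[2.0, 1.5], [0.0, 0.5]])
X = X - X.mean(axis=0)

Sigma = np.cov(X.T)                       # sample covariance matrix
eigvals, eigvecs = np.linalg.eigh(Sigma)  # eigenvalues in ascending order
u = eigvecs[:, -1]                        # eigenvector of the largest eigenvalue

def proj_var(v):
    """Variance of the data projected onto the direction of v."""
    v = v / np.linalg.norm(v)
    return np.var(X @ v)

# No random direction achieves more projected variance than u.
best_random = max(proj_var(rng.normal(size=2)) for _ in range(200))
print(proj_var(u) >= best_random)  # True
```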
| <p>Some informal explanation:</p>
<p>Covariance matrix $C_y$ (it is symmetric) encodes the correlations between the variables of a vector. In general a covariance matrix is non-diagonal (i.e. it has non-zero correlations between different variables).</p>
<p><strong>But it's interesting to ask, is it possible to diagonalize the covariance matrix by changing the basis of the vector?</strong> In this case there will be no (i.e. zero) correlations between different variables of the vector. </p>
<p>Diagonalization of this symmetric matrix is possible with an eigenvalue decomposition.
You may read <em><a href="https://arxiv.org/pdf/1404.1100.pdf" rel="noreferrer">A Tutorial on Principal Component Analysis</a></em> (pages 6-7), by Jonathon Shlens, to get a good understanding. </p>
|
geometry | <p>Consider the function <span class="math-container">$f(x)=a_0x^2$</span> for some <span class="math-container">$a_0\in \mathbb{R}^+$</span>. Take <span class="math-container">$x_0\in\mathbb{R}^+$</span> so that the arc length <span class="math-container">$L$</span> between <span class="math-container">$(0,0)$</span> and <span class="math-container">$(x_0,f(x_0))$</span> is fixed. Given a different arbitrary <span class="math-container">$a_1$</span>, how does one find the point <span class="math-container">$(x_1,y_1)$</span> so that the arc length is the same?</p>
<p>Schematically,</p>
<p><a href="https://i.sstatic.net/djrlsm.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/djrlsm.png" alt="enter image description here" /></a></p>
<p>In other words, I'm looking for a function <span class="math-container">$g:\mathbb{R}^3\to\mathbb{R}$</span>, <span class="math-container">$g(a_0,a_1,x_0)$</span>, that takes an initial fixed quadratic coefficient <span class="math-container">$a_0$</span> and point and returns the corresponding point after "straightening" via the new coefficient <span class="math-container">$a_1$</span>, keeping the arc length with respect to <span class="math-container">$(0,0)$</span>. Note that the <span class="math-container">$y$</span> coordinates are simply given by <span class="math-container">$y_0=f(x_0)$</span> and <span class="math-container">$y_1=a_1x_1^2$</span>. Any ideas?</p>
<p><strong>My approach:</strong> Knowing that the <a href="https://math.libretexts.org/Bookshelves/Calculus/Map%3A_Calculus__Early_Transcendentals_(Stewart)/08%3A_Further_Applications_of_Integration/8.01%3A_Arc_Length#:%7E:text=Arc%20Length%3D%E2%88%ABba,(x)%20to%20be%20smooth." rel="nofollow noreferrer">arc length is given by</a>
<span class="math-container">$$
L=\int_0^{x_0}\sqrt{1+(f'(x))^2}\,dx=\int_0^{x_0}\sqrt{1+(2a_0x)^2}\,dx
$$</span>
we can use the conservation of <span class="math-container">$L$</span> to write
<span class="math-container">$$
\int_0^{x_0}\sqrt{1+(2a_0x)^2}\,dx=\int_0^{x_1}\sqrt{1+(2a_1x)^2}\,dx
$$</span>
which we solve for <span class="math-container">$x_1$</span>. This works, but it is not very fast computationally and can only be done numerically (I think), since
<span class="math-container">$$
\int_0^{x_1}\sqrt{1+(2a_1x)^2}\,dx=\frac{1}{4a_1}\left(2a_1x_1\sqrt{1+(2a_1x_1)^2}+\operatorname{arcsinh}{(2a_1x_1)}\right)
$$</span>
Any ideas on how to do this more efficiently? Perhaps using the tangent lines of the parabola?</p>
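For concreteness, the numerical inversion can be done with simple bisection on the closed-form arc length above (a minimal Python sketch, no libraries assumed):

```python
import math

def arc_len(a, x):
    """Arc length of y = a*x^2 from 0 to x (a > 0, x >= 0)."""
    u = 2 * a * x
    return (u * math.sqrt(1 + u * u) + math.asinh(u)) / (4 * a)

def solve_x1(a0, a1, x0, tol=1e-12):
    """Find x1 with arc_len(a1, x1) == arc_len(a0, x0), by bisection."""
    L = arc_len(a0, x0)
    lo, hi = 0.0, max(x0, 1.0)
    while arc_len(a1, hi) < L:        # arc_len is increasing in x
        hi *= 2
    while hi - lo > tol:
        mid = (lo + hi) / 2
        if arc_len(a1, mid) < L:
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2

x1 = solve_x1(a0=1.0, a1=0.25, x0=1.0)
print(x1, arc_len(0.25, x1))  # arc length matches arc_len(1.0, 1.0)
```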
<p><strong>More generally</strong>, for fixed arc lengths, I guess my question really is what are the expressions of the following red curves for fixed arc lengths:</p>
<p><a href="https://i.sstatic.net/GnCkkm.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/GnCkkm.png" alt="enter image description here" /></a></p>
<p>Furthermore, could this be determined for any <span class="math-container">$f$</span>?</p>
<p><strong>Edit:</strong> Interestingly enough, I found this clip from 3Blue1Brown. The origin point isn't fixed as in my case, but I wonder how the animation was made (couldn't find the original video, only a clip, but <a href="https://www.youtube.com/watch?v=uXPb9iBDwsw" rel="nofollow noreferrer">here's the link</a>)</p>
<p><a href="https://i.sstatic.net/x7o8V.gif" rel="nofollow noreferrer"><img src="https://i.sstatic.net/x7o8V.gif" alt="enter image description here" /></a></p>
<p>For any <em>Mathematica</em> enthusiasts out there, a computational implementation of the straightening effect is also being discussed <a href="https://mathematica.stackexchange.com/questions/251570/how-to-straighten-a-curve">here</a>, with some applications.</p>
| <p>Phrased differently, what we want are the level curves of the function</p>
<p><span class="math-container">$$\frac{1}{2}f(x,y) = \int_0^x\sqrt{1+\frac{4y^2t^2}{x^4}}\:dt = \frac{1}{2}\int_0^2 \sqrt{x^2+y^2t^2}\:dt$$</span></p>
<p>which will always be perpendicular to the gradient at that point</p>
<p><span class="math-container">$$\nabla f = \int_0^2 dt\left(\frac{x}{\sqrt{x^2+y^2t^2}},\frac{yt^2}{\sqrt{x^2+y^2t^2}}\right)$$</span></p>
<p>Now is the time to naturally reintroduce <span class="math-container">$a$</span> as the parameter for these curves. Therefore what we want is to solve the differential equation</p>
<p><span class="math-container">$$x'(a) = \int_0^2 \frac{-axt^2}{\sqrt{1+a^2x^2t^2}}dt \hspace{20 pt} x(0) = L$$</span></p>
<p>where we substitute <span class="math-container">$y(a) = a\cdot x^2(a)$</span>, thus solving for one component automatically gives us the other.</p>
<hr />
<p>EDIT: Further investigation has led me to some interesting conclusions. It seems like if <span class="math-container">$y=f_a(x)$</span> is a family of strictly monotonically increasing continuous functions and <span class="math-container">$$\lim_{a\to0^+}f_a(x) = \lim_{a\to\infty}f_a^{-1}(y) = 0$$</span></p>
<p>Then the curves of constant arclength will start and end at the points <span class="math-container">$(0,L)$</span> and <span class="math-container">$(L,0)$</span>. Take for example the similar looking family of curves</p>
<p><span class="math-container">$$y = \frac{\cosh(ax)-1}{a}\implies L = \frac{\sinh(ax)}{a}$$</span></p>
<p>The curves of constant arclength are of the form</p>
<p><span class="math-container">$$\vec{r}(a) = \left(\frac{\sinh^{-1}(aL)}{a},\frac{\sqrt{1+a^2L^2}-1}{a}\right)$$</span></p>
<p>Below is a (sideways) plot of the curve of arclength <span class="math-container">$L=1$</span> (along with the family of curves evaluated at <span class="math-container">$a=\frac{1}{2},1,2,4,$</span> and <span class="math-container">$10$</span>), which has an explicit equation of the form</p>
<p><span class="math-container">$$x = \frac{\tanh^{-1}y}{y}\cdot(1-y^2)$$</span>
<a href="https://i.sstatic.net/ut15h.jpg" rel="noreferrer"><img src="https://i.sstatic.net/ut15h.jpg" alt="enter image description here" /></a></p>
<p>These curves and the original family of parabolas in question both have this property, as well as the perfect circles obtained from the family <span class="math-container">$f_a(x) = ax$</span>. The reason the original question was hard to tractably solve was because of the non analytically invertible arclength formula</p>
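As a numerical sanity check of this family (a minimal Python sketch): the parametric curve $\vec r(a)$ above should have arc length exactly $L$ for every $a$, since the integrand $\sqrt{1+\sinh^2(ax)}$ simplifies to $\cosh(ax)$.

```python
import math

L = 1.0

def r(a):
    """Point on the constant-arclength curve for y = (cosh(a x) - 1)/a."""
    x = math.asinh(a * L) / a
    y = (math.sqrt(1 + (a * L) ** 2) - 1) / a
    return x, y

def arc_length(a, x, n=200_000):
    # Arc length of y = (cosh(a t) - 1)/a from 0 to x; the integrand
    # sqrt(1 + sinh(a t)^2) is just cosh(a t). Midpoint rule.
    h = x / n
    return sum(math.cosh(a * (i + 0.5) * h) * h for i in range(n))

for a in (0.5, 1.0, 2.0, 4.0, 10.0):
    x, y = r(a)
    print(a, arc_length(a, x))  # all approximately 1.0
```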
| <p><span class="math-container">$L$</span> being the known arc length, let <span class="math-container">$x=\frac t{2a}$</span> and <span class="math-container">$k=4aL$</span>; then you need to solve for <span class="math-container">$t$</span> the equation
<span class="math-container">$$k=t\sqrt{t^2+1} +\sinh ^{-1}(t)$$</span>A good approximation is given by <span class="math-container">$t_0=\sqrt k$</span>.</p>
<p>Now, using a Taylor series around <span class="math-container">$t=t_0$</span> and then series reversion gives
<span class="math-container">$$t_1=\sqrt{k}+z-\frac{\sqrt{k} }{2 (k+1)}z^2+\frac{(3 k-1) }{6 (k+1)^2}z^3+\frac{(13-15
k) \sqrt{k} }{24 (k+1)^3}z^4+\cdots$$</span> where <span class="math-container">$$z=-\frac{\sqrt{k(k+1)} +\sinh ^{-1}\left(\sqrt{k}\right)-k}{2 \sqrt{k+1}}$$</span></p>
<p>Let us try for <span class="math-container">$k=10^n$</span>
<span class="math-container">$$\left(
\begin{array}{ccc}
n & \text{estimate} & \text{solution} \\
0 & 0.4810185 & 0.4819447 \\
1 & 2.7868504 & 2.7868171 \\
2 & 9.8244940 & 9.8244940 \\
3 & 31.549250 & 31.549250 \\
4 & 99.971006 & 99.971006 \\
5 & 316.21678 & 316.21678 \\
6 & 999.99595 & 999.99595
\end{array}
\right)$$</span> This seems to be quite decent.</p>
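<p>(A check added for illustration, in Python.) Truncating the reversion series at the <span class="math-container">$z^4$</span> term and comparing against a bisection root of <span class="math-container">$k=t\sqrt{t^2+1}+\sinh^{-1}(t)$</span> reproduces the table:</p>

```python
import math

def t_estimate(k):
    # Reversion series around t0 = sqrt(k), truncated at the z^4 term
    z = -(math.sqrt(k * (k + 1)) + math.asinh(math.sqrt(k)) - k) / (2 * math.sqrt(k + 1))
    return (math.sqrt(k) + z
            - math.sqrt(k) / (2 * (k + 1)) * z**2
            + (3 * k - 1) / (6 * (k + 1)**2) * z**3
            + (13 - 15 * k) * math.sqrt(k) / (24 * (k + 1)**3) * z**4)

def t_exact(k):
    # Bisection on g(t) = t*sqrt(t^2+1) + asinh(t) - k, which is increasing in t
    g = lambda t: t * math.sqrt(t * t + 1) + math.asinh(t) - k
    lo, hi = 0.0, max(1.0, math.sqrt(k) + 1)  # g(lo) < 0 and g(hi) > 0
    for _ in range(200):
        mid = (lo + hi) / 2
        if g(mid) < 0:
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2

for k in (1, 10, 100):
    print(k, t_estimate(k), t_exact(k))
```

<p>Already at <span class="math-container">$k=100$</span> the truncated series and the true root agree to all displayed digits, matching the table's <span class="math-container">$9.8244940$</span>.</p>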
|
logic | <p>In <a href="https://www.youtube.com/watch?v=O4ndIDcDSGc" rel="noreferrer">the most recent numberphile video</a>, Marcus du Sautoy claims that a proof for the Riemann hypothesis must exist (starts at the 12 minute mark). His reasoning goes as follows:</p>
<ul>
<li><p>If the hypothesis is undecidable, there is no proof it is false.</p></li>
<li><p>If we find a non-trivial zero off the critical line, that is a proof that it is false.</p></li>
<li><p>Thus if it is undecidable there are no non-trivial zeros off the critical line.</p></li>
<li><p>This constitutes a proof the hypothesis is true, thus it is decidable.</p></li>
</ul>
<p>This makes perfect sense, however only seconds earlier he states that the Goldbach conjecture <em>might</em> be undecidable. It seems to me that the exact same reasoning we applied to the Riemann hypothesis could be applied to the Goldbach conjecture. Just switch out the words "non-trivial zero" for "even number that cannot be represented by the sum of two primes" and to me this reasoning looks fine.</p>
<p>In fact as far as I can tell <em>any</em> statement that can be verified or falsified by example can be plugged into this to generate a proof it is decidable.</p>
<p>Since Marcus du Sautoy is considerably more qualified than I to speak about this I trust that there is something more complex going on behind the scenes here. Does this apply to the Goldbach conjecture? If not why not? What am I not understanding here?</p>
| <p>The issue here is how complicated is each statement, when formulated as a claim about the natural numbers (the Riemann hypothesis can be made into such statement).</p>
<hr>
<p>For the purpose of this discussion we work in the natural numbers, with $+,\cdot$ and the successor function, and Peano Axioms will be our base theory; so by "true" and "false" we mean in the natural numbers and "provable" means from Peano Arithmetic. </p>
<p>We will say that a statement is "simple" if in order to verify it, you absolutely know that you do not have to check <em>all</em> the natural numbers. (The technical term here is "bounded" or "$\Delta_0$".)</p>
<p>For example, "There is a prime number small than $x$" is a simple statement, since verifying whether or not some $n$ is prime requires only checking its divisibility by numbers less than $n$. So we only need to check numbers below $x$ in order to verify this.</p>
<p>On the other hand, "There is a Fermat prime larger than $x$" is not a simple statement, since possibly this is false but only checking <em>all</em> the numbers above $x$ will tell us the truth of this statement.</p>
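<p>To make the bounded/unbounded contrast concrete (an illustration in Python, not part of the original answer): the first statement comes with its own search bound, so a checker provably terminates after finitely many steps, all below $x$:</p>

```python
def is_prime(n):
    # Trial division: only divisibility checks below n are needed
    return n >= 2 and all(n % d for d in range(2, n))

def exists_prime_below(x):
    # Bounded (Delta_0) statement: the search space is finite by construction
    return any(is_prime(n) for n in range(2, x))

print(exists_prime_below(10))  # only the numbers below 10 were examined
print(exists_prime_below(2))
```

<p>No such a priori bound is available for "there is a Fermat prime larger than $x$": a search loop may run forever if the statement is false, which is exactly what makes it non-simple.</p>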
<p>The trick is that a simple statement is true if and only if it is provable. This requires some work, but it is not incredibly difficult to show. Alas, most interesting questions cannot be formulated as simple statements. Luckily for us, this "provable if and only if true" can be pushed a bit more. </p>
<p>We say that a statement is "relatively simple" if it has the form "There exists some value for $x$ such that plugging it in will turn the statement simple". (The technical term is $\Sigma_1$ statement.)</p>
<p>Looking back, the statement about the existence of a Fermat prime above $x$ is such a statement, because if $n>x$ is a Fermat prime, then the statement "$n$ is larger than $x$ and a Fermat prime" is now simple. </p>
<p>Using a neat little trick we can show that a relatively simple statement is also true if and only if it is provable. </p>
<p>And now comes the nice part. The Riemann hypothesis can be formulated as the negation of a relatively simple statement. So if the Riemann hypothesis were false, its negation would be provable, so the Riemann hypothesis would be refutable. This means that if you cannot disprove the Riemann hypothesis, it has to be true. The same can also be said of the Goldbach conjecture.</p>
<p>So both of these might turn to be independent, in the sense that they cannot be proved from Peano Arithmetic, but if you show that they are at least consistent, then you immediately obtain that they are true. And this would give us a proof of these statements from a stronger theory (e.g. set theory).</p>
<p>You could also ask the same about the twin prime conjecture. But this conjecture is no longer a relatively simple statement nor the negation of one. So the above does not hold for it, and it is feasible that the conjecture is consistent, but false in the natural numbers.</p>
| <p>The last bullet point, saying that this constitutes a proof it is decidable, does not follow.</p>
<p>$X$ is decidable means either $X$ is provable or $\neg X$ is provable. It's possible that both are provable, in which case the theory is inconsistent and every statement and its negation are provable and every statement is decidable.</p>
<p>The situation generalizes to any statement on the same level of the <a href="https://en.wikipedia.org/wiki/Arithmetical_hierarchy" rel="noreferrer">arithmetical hierarchy</a> as the Riemann hypothesis or the Goldbach conjecture, including the Gödel sentence. All assert the existence or non-existence of a natural number satisfying a computable predicate. If the existential form is unprovable then the universal form is true. And the contrapositive is: if the universal form is false, then the existential form is provable (implying both forms are decidable). So if RH is false then RH is decidable, and if GC is false then GC is decidable, and if arithmetic is inconsistent then the consistency of arithmetic is decidable. That may be what was intended by the final point.</p>
<p>An example of a <a href="https://rjlipton.wordpress.com/2009/05/27/arithmetic-hierarchy-and-pnp/" rel="noreferrer">problem with a different situation</a> is $\text{P=NP}$.</p>
|
probability | <p><strong>Question :</strong> What is the difference between Average and Expected value?</p>
<hr />
<p>I have been going through the definition of expected value on <a href="http://en.wikipedia.org/wiki/Expected_value" rel="noreferrer">Wikipedia</a> beneath all that jargon it seems that the expected value of a distribution is the average value of the distribution. Did I get it right ?</p>
<p>If yes, then what is the point of introducing a new term ? Why not just stick with the average value of the distribution ?</p>
| <p>The concept of expectation value or expected value may be understood from the following example. Let $X$ represent the outcome of a roll of an unbiased six-sided die. The possible values for $X$ are 1, 2, 3, 4, 5, and 6, each having the probability of occurrence of 1/6. The expectation value (or expected value) of $X$ is then given by </p>
<p>$(X)\text{expected} = 1\cdot(1/6)+2\cdot(1/6)+3\cdot(1/6)+4\cdot(1/6)+5\cdot(1/6)+6\cdot(1/6) = 21/6 = 3.5$</p>
<p>Suppose that in a sequence of ten rolls of the die, if the outcomes are 5, 2, 6, 2, 2, 1, 2, 3, 6, 1, then the average (arithmetic mean) of the results is given by</p>
<p>$(X)\text{average} = (5+2+6+2+2+1+2+3+6+1)/10 = 3.0$</p>
<p>We say that the average value is 3.0, at a distance of 0.5 from the expectation value of 3.5. If we roll the die $N$ times, where $N$ is very large, then the average will converge to the expected value, i.e., $(X)\text{average}=(X)\text{expected}$. This is because, when $N$ is very large, each possible value of $X$ (i.e., 1 to 6) will occur with relative frequency close to 1/6, turning the average into the expectation value.</p>
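<p>As an added illustration (a short Python simulation, not from the original answer), here is the law of large numbers in action: the sample average approaches the expected value 3.5 as the number of rolls grows:</p>

```python
import random

random.seed(0)  # fixed seed so the run is reproducible

expected = sum(face * (1 / 6) for face in range(1, 7))  # 3.5

for n in (10, 1000, 100000):
    rolls = [random.randint(1, 6) for _ in range(n)]
    average = sum(rolls) / n
    print(n, average, abs(average - expected))
```

<p>The gap between the sample average and 3.5 shrinks roughly like $1/\sqrt{N}$.</p>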
| <p>The expected value, or mean $\mu_X =E_X[X]$, is a parameter associated with the distribution of a random variable $X$.</p>
<p>The average $\overline X_n$ is a computation performed on a sample of size $n$ from that distribution. It can also be regarded as an unbiased estimator of the mean, meaning that if each $X_i\sim X$, then $E_X[\overline X_n] = \mu_X$.</p>
|
number-theory | <p>Mathematica knows that the logarithm of $n$ is:</p>
<p>$$\log(n) = \lim\limits_{s \rightarrow 1} \zeta(s)\left(1 - \frac{1}{n^{(s - 1)}}\right)$$</p>
<p>The von Mangoldt function should then be:</p>
<p>$$\Lambda(n)=\lim\limits_{s \rightarrow 1} \zeta(s)\sum\limits_{d|n} \frac{\mu(d)}{d^{(s-1)}}.$$</p>
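<p>(A numerical check added for illustration, in Python rather than Mathematica; $\zeta(s)$ is approximated near $s=1$ by an Euler–Maclaurin-corrected partial sum.) Evaluating both limit formulas at $s=1+10^{-4}$ reproduces $\log 7$, $\Lambda(8)=\log 2$ and $\Lambda(12)=0$ to a few decimals:</p>

```python
import math

def zeta(s, N=10000):
    # Euler-Maclaurin: partial sum plus tail correction, accurate near s = 1
    return sum(k**-s for k in range(1, N + 1)) + N**(1 - s) / (s - 1) - N**-s / 2

def mobius(d):
    # Moebius function via trial-division factorisation (fine for small d)
    primes, m, p = 0, d, 2
    while p * p <= m:
        if m % p == 0:
            m //= p
            if m % p == 0:
                return 0  # square factor
            primes += 1
        else:
            p += 1
    if m > 1:
        primes += 1
    return (-1)**primes

s = 1 + 1e-4  # approach the limit s -> 1
z = zeta(s)

log_7 = z * (1 - 7**(1 - s))
lam = {n: z * sum(mobius(d) / d**(s - 1) for d in range(1, n + 1) if n % d == 0)
       for n in (7, 8, 12)}
print(log_7, lam)
```
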
<p>Setting the first term of the von Mangoldt function $\Lambda(1)$ equal to the harmonic number $H_{\operatorname{scale}}$ where scale is equal to the scale of the Fourier transform matrix, one can calculate the Fourier transform of the von Mangoldt function with the Mathematica program at the link below:</p>
<p><a href="http://pastebin.com/ZNNYZ4F6" rel="noreferrer">http://pastebin.com/ZNNYZ4F6</a></p>
<p>In the program I studied the function within the limit for the von Mangoldt function, and made some small changes to the function itself:</p>
<p>$f(t)=\sum\limits_{n=1}^{n=k} \frac{1}{\log(k)} \frac{1}{n} \zeta(1/2+i \cdot t)\sum\limits_{d|n} \frac{\mu(d)}{d^{(1/2+i \cdot t-1)}}$
as $k$ goes to infinity.</p>
<p>(Edit 20.9.2013: The function $f(t)$ had "-1" in the argument for the zeta function.)</p>
<p>The plot of this function looks like this:</p>
<p><img src="https://i.sstatic.net/TAwEf.png" alt="function"> </p>
<p>While the plot of the Fourier transform of the von Mangoldt function with the program looks like this:</p>
<p><img src="https://i.sstatic.net/REY7h.png" alt="Fourier transform"></p>
<p>There are some similarities but the Fourier transform converges faster towards smaller oscillations in between the spikes at zeta zeros and the scale factor is wrong.</p>
<p>Will the function $f(t)$ above eventually converge to the Fourier transform of the von Mangoldt function, or is it only yet another meaningless plot?</p>
<p>Now when I look at it, I think the spikes at the zeros come from the zeta function itself, and the spectrum-like feature comes from the Möbius function, which inverts the zeta function.</p>
<p>In the Fourier transform the von Mangoldt function has this form:</p>
<p>$$\log (\text{scale}) ,\log (2),\log (3),\log (2),\log (5),0,\log (7),\log (2),\log (3),0,\log (11),0,...,\Lambda(\text{scale})$$</p>
<p>$$scale = 1,2,3,4,5,6,7,8,9,10,...k$$</p>
<p>Or as latex:</p>
<p>$$\Lambda(n) = \begin{cases} \log q & \text{if }n=1, \\\log p & \text{if }n=p^k \text{ for some prime } p \text{ and integer } k \ge 1, \\ 0 & \text{otherwise.} \end{cases}$$</p>
<p>$$n=1,2,3,4,5,...q$$</p>
<pre><code>TableForm[Table[Table[If[n == 1, Log[q], MangoldtLambda[n]], {n, 1, q}],
{q, 1, 12}]]
</code></pre>
<hr>
<pre><code>scale = 50; (*scale = 5000 gives the plot below*)
Print["Counting to 60"]
Monitor[g1 =
ListLinePlot[
Table[Re[
Zeta[1/2 + I*k]*
Total[Table[
Total[MoebiusMu[Divisors[n]]/Divisors[n]^(1/2 + I*k - 1)]/(n*
k), {n, 1, scale}]]], {k, 0 + 1/1000, 60, N[1/6]}],
DataRange -> {0, 60}, PlotRange -> {-0.15, 1.5}], Floor[k]]
</code></pre>
<p>Dirichlet series:</p>
<p><img src="https://i.sstatic.net/SjS5c.jpg" alt="zeta zero spectrum 1"></p>
<hr>
<pre><code>Clear[f];
scale = 100000;
f = ConstantArray[0, scale];
f[[1]] = N@HarmonicNumber[scale];
Monitor[Do[
f[[i]] = N@MangoldtLambda[i] + f[[i - 1]], {i, 2, scale}], i]
xres = .002;
xlist = Exp[Range[0, Log[scale], xres]];
tmax = 60;
tres = .015;
Monitor[errList =
Table[(xlist^(1/2 + I k - 1).(f[[Floor[xlist]]] - xlist)), {k,
Range[0, 60, tres]}];, k]
ListLinePlot[Im[errList]/Length[xlist], DataRange -> {0, 60},
PlotRange -> {-.01, .15}]
</code></pre>
<p>Fourier transform:</p>
<p><img src="https://i.sstatic.net/QVWkQ.jpg" alt="zeta zero spectrum 2"></p>
<hr>
<p>Matrix inverse:</p>
<pre><code>Clear[n, k, t, A, nn];
nn = 50;
A = Table[
Table[If[Mod[n, k] == 0, 1/(n/k)^(1/2 + I*t - 1), 0], {k, 1, nn}], {n, 1,
nn}];
MatrixForm[A];
ListLinePlot[
Table[Total[
1/Table[n*t, {n, 1, nn}]*
Total[Transpose[Re[Inverse[A]*Zeta[1/2 + I*t]]]]], {t, 1/1000, 60,
N[1/6]}], DataRange -> {0, 60}, PlotRange -> {-0.15, 1.5}]
</code></pre>
<hr>
<pre><code>Clear[n, k, t, A, nn];
nnn = 12;
Show[Flatten[{Table[
ListLinePlot[
Table[Re[
Total[1/Table[n*t, {n, 1, nn}]*
Total[Transpose[
Inverse[
Table[Table[
If[Mod[n, k] == 0, N[1/(n/k)^(1/2 + I*t - 1)], 0], {k,
1, nn}], {n, 1, nn}]]*Zeta[1/2 + I*t]]]]], {t, 1/1000,
60, N[1/10]}], DataRange -> {0, 60},
PlotRange -> {-0.15, 1.5}], {nn, 1, nnn}],
Table[ListLinePlot[
Table[Re[
Total[1/Table[n*t, {n, 1, nn}]*
Total[Transpose[
Inverse[
Table[Table[
If[Mod[n, k] == 0, N[1/(n/k)^(1/2 + I*t - 1)], 0], {k,
1, nn}], {n, 1, nn}]]*Zeta[1/2 + I*t]]]]], {t, 1/1000,
60, N[1/10]}], DataRange -> {0, 60},
PlotRange -> {-0.15, 1.5}, PlotStyle -> Red], {nn, nnn, nnn}]}]]
</code></pre>
<p>12 first curves together or partial sums:</p>
<p><img src="https://i.sstatic.net/HG7Jy.jpg" alt="12 curves"></p>
<pre><code>Clear[n, k, t, A, nn, dd];
dd = 220;
Print["Counting to ", dd];
nn = 20;
A = Table[
Table[If[Mod[n, k] == 0, 1/(n/k)^(1/2 + I*t - 1), 0], {k, 1,
nn}], {n, 1, nn}];
Monitor[g1 =
ListLinePlot[
Table[Total[
1/Table[n*t, {n, 1, nn}]*
Total[Transpose[
Re[Inverse[
IdentityMatrix[nn] + (Inverse[A] - IdentityMatrix[nn])*
Zeta[1/2 + I*t]]]]]], {t, 1/1000, dd, N[1/100]}],
DataRange -> {0, dd}, PlotRange -> {-7, 7}];, Floor[t]];
mm = N[2*Pi/Log[2], 20];
g2 = Graphics[
Table[Style[Text[n, {mm*n, 1}], FontFamily -> "Times New Roman",
FontSize -> 14], {n, 1, 32}]];
Show[g1, g2, ImageSize -> Large];
</code></pre>
<p>Matrix Inverse of matrix inverse times zeta function (on critical line):</p>
<p><img src="https://i.sstatic.net/PVrdb.jpg" alt="matrix inverse of matrix inverse as function of t"></p>
<pre><code>Clear[n, k, t, A, nn, h];
nn = 60;
h = 2; (*h=2 gives log 2 operator, h=3 gives log 3 operator and so on*)
A = Table[
Table[If[Mod[n, k] == 0,
If[Mod[n/k, h] == 0, 1 - h, 1]/(n/k)^(1/2 + I*t - 1), 0], {k, 1,
nn}], {n, 1, nn}];
MatrixForm[A];
g1 = ListLinePlot[
Table[Total[
1/Table[n*t, {n, 1, nn}]*
Total[Transpose[Re[Inverse[A]*Zeta[1/2 + I*t]]]]], {t, 1/1000,
nn, N[1/6]}], DataRange -> {0, nn}, PlotRange -> {-3, 7}];
mm = N[2*Pi/Log[h], 12];
g2 = Graphics[
Table[Style[Text[n*2*Pi/Log[h], {mm*n, 1}],
FontFamily -> "Times New Roman", FontSize -> 14], {n, 1, 32}]];
Show[g1, g2, ImageSize -> Large]
</code></pre>
<p>Matrix inverse of Riemann zeta times log 2 operator:</p>
<p><img src="https://i.sstatic.net/5aaE9.jpg" alt="2 Pi div log2 spectrum"></p>
<p><a href="http://www.math.ucsb.edu/~stopple/zeta.html" rel="noreferrer">Jeffrey Stopple's code</a>:</p>
<pre><code>Show[Graphics[
RasterArray[
Table[Hue[
Mod[3 Pi/2 + Arg[Zeta[sigma + I t]], 2 Pi]/(2 Pi)], {t, -30,
30, .1}, {sigma, -30, 30, .1}]]], AspectRatio -> Automatic]
</code></pre>
<p>Normal or usual zeta:</p>
<p><img src="https://i.sstatic.net/hFtGB.jpg" alt="Normal zeta"> </p>
<pre><code>Show[Graphics[
RasterArray[
Table[Hue[
Mod[3 Pi/2 +
Arg[Sum[Zeta[sigma + I t]*
Total[1/Divisors[n]^(sigma + I t - 1)*
MoebiusMu[Divisors[n]]]/n, {n, 1, 30}]],
2 Pi]/(2 Pi)], {t, -30, 30, .1}, {sigma, -30, 30, .1}]]],
AspectRatio -> Automatic]
</code></pre>
<p>Spectral zeta (30-th partial sum):</p>
<p><img src="https://i.sstatic.net/2iXhr.jpg" alt="spectral zeta"></p>
<pre><code>Clear[n, k, t, A, nn, B];
nn = 60;
A = Table[
Table[If[Mod[n, k] == 0, 1/(n/k)^(1/2 + I*t - 1), 0], {k, 1,
nn}], {n, 1, nn}]; MatrixForm[A];
B = FourierDCT[
Table[Total[
1/Table[n, {n, 1, nn}]*
Total[Transpose[Re[Inverse[A]*Zeta[1/2 + I*t]]]]], {t, 1/1000,
600, N[1/6]}]];
g1 = ListLinePlot[B[[1 ;; 700]]*Table[Sqrt[n], {n, 1, 700}],
DataRange -> {0, 60}, PlotRange -> {-60, 600}];
mm = 11.35/Log[2];
g2 = Graphics[
Table[Style[Text[n, {mm*Log[n], 100 + 20*(-1)^n}],
FontFamily -> "Times New Roman", FontSize -> 14], {n, 1, 16}]];
Show[g1, g2, ImageSize -> Large]
</code></pre>
<p>Mobius function -> Dirichlet series -> Spectral Riemann zeta -> Fourier transform -> von Mangoldt function:</p>
<p><img src="https://i.sstatic.net/yj8Ef.jpg" alt="from Mobius via spectral Riemann zeta to von Mangoldt"></p>
<p>Larger von Mangoldt function plot, still with the wrong amplitude:
<a href="https://i.sstatic.net/02A1p.jpg" rel="noreferrer">https://i.sstatic.net/02A1p.jpg</a></p>
<pre><code>Clear[n, k, t, A, nn, B, g1, g2];
nn = 32;
A = Table[
Table[If[Mod[n, k] == 0, 1/(n/k)^(1/2 + I*t - 1), 0], {k, 1,
nn}], {n, 1, nn}];
MatrixForm[A];
B = FourierDCT[
Table[Total[
1/Table[n, {n, 1, nn}]*
Total[Transpose[Re[Inverse[A]*Zeta[1/2 + I*t]]]]], {t, 0, 2000,
N[1/6]}]];
g1 = ListLinePlot[B[[1 ;; 2000]], DataRange -> {0, 60},
PlotRange -> {-5, 50}];
2*N[Length[B]/1500, 12];
mm = 13.25/Log[2];
g2 = Graphics[
Table[Style[Text[n, {mm*Log[n], 7 + (-1)^n}],
FontFamily -> "Times New Roman", FontSize -> 14], {n, 1, 40}]];
Show[g1, g2, ImageSize -> Full]
</code></pre>
<p>Plot from program above: <a href="https://i.sstatic.net/r6mTJ.jpg" rel="noreferrer">https://i.sstatic.net/r6mTJ.jpg</a></p>
<p>Partial sums of zeta function, use this one:</p>
<pre><code>Clear[n, k, t, A, nn, B];
nn = 80;
mm = 11.35/Log[2];
A = Table[
Table[If[Mod[n, k] == 0, 1/(n/k)^(1/2 + I*t - 1), 0], {k, 1,
nn}], {n, 1, nn}];
MatrixForm[A];
B = Re[FourierDCT[
Monitor[Table[
Total[1/Table[
n, {n, 1, nn}]*(Total[
Transpose[Inverse[A]*Sum[1/j^(1/2 + I*t), {j, 1, nn}]]] -
1)], {t, 1/1000, 600, N[1/6]}], Floor[t]]]];
g1 = ListLinePlot[B[[1 ;; 700]], DataRange -> {0, 60/mm},
PlotRange -> {-30, 30}];
g2 = Graphics[
Table[Style[Text[n, {Log[n], 5 - (-1)^n}],
FontFamily -> "Times New Roman", FontSize -> 14], {n, 1, 32}]];
Show[g1, g2, ImageSize -> Full]
</code></pre>
<hr>
<p>Edit 17.1.2015:</p>
<pre><code>Clear[g1, g2, scale, xres, x, a, c, d, datapointsdisplayed];
scale = 1000000;
xres = .00001;
x = Exp[Range[0, Log[scale], xres]];
a = -FourierDCT[
Log[x]*FourierDST[
MangoldtLambda[Floor[x]]*(SawtoothWave[x] - 1)*(x)^(-1/2)]];
c = 62.357;
d = N[Im[ZetaZero[1]]];
datapointsdisplayed = 500000;
ymin = -1.5;
ymax = 3;
p = 0.013;
g1 = ListLinePlot[a[[1 ;; datapointsdisplayed]],
PlotRange -> {ymin, ymax},
DataRange -> {0, N[Im[ZetaZero[1]]]/c*datapointsdisplayed}];
Show[g1, Graphics[
Table[Style[Text[n, {22800*Log[n], -1/4*(-1)^n}],
FontFamily -> "Times New Roman", FontSize -> 14], {n, 1, 12}]],
ImageSize -> Large]
</code></pre>
<p><img src="https://i.sstatic.net/cvtCy.png" alt="MangoldLambdaTautology"></p>
<pre><code>Show[Graphics[
RasterArray[
Table[Hue[
Mod[3 Pi/2 +
Arg[Sum[Zeta[sigma - I t]*
Total[1/Divisors[n]^(sigma + I t)*MoebiusMu[Divisors[n]]]/
n, {n, 1, 30}]], 2 Pi]/(2 Pi)], {t, -30,
30, .1}, {sigma, -30, 30, .1}]]], AspectRatio -> Automatic]
</code></pre>
<p><img src="https://i.sstatic.net/t9qup.png" alt="spectral riemann"></p>
<p>The following is a relationship:</p>
<p>Let $\mu(n)$ be the Möbius function, then:</p>
<p>$$a(n) = \sum\limits_{d|n} d \cdot \mu(d)$$</p>
<p>$$T(n,k)=a(GCD(n,k))$$</p>
<p>$$T = \left( \begin{array}{cccccccc} +1&+1&+1&+1&+1&+1&+1&\cdots \\ +1&-1&+1&-1&+1&-1&+1 \\ +1&+1&-2&+1&+1&-2&+1 \\ +1&-1&+1&-1&+1&-1&+1 \\ +1&+1&+1&+1&-4&+1&+1 \\ +1&-1&-2&-1&+1&+2&+1 \\ +1&+1&+1&+1&+1&+1&-6 \\ \vdots&&&&&&&\ddots \end{array} \right)$$</p>
<p>$$\sum\limits_{k=1}^{\infty}\sum\limits_{n=1}^{\infty} \frac{T(n,k)}{n^c \cdot k^z} = \sum\limits_{n=1}^{\infty} \frac{\lim\limits_{s \rightarrow z} \zeta(s)\sum\limits_{d|n} \frac{\mu(d)}{d^{(s-1)}}}{n^c} = \frac{\zeta(z) \cdot \zeta(c)}{\zeta(c + z - 1)}$$</p>
<p>which is part of the limit:</p>
<p>$$\frac{\zeta '(s)}{\zeta (s)}=\lim_{c\to 1} \, \left(\zeta (c)-\frac{\zeta (c) \zeta (s)}{\zeta (c+s-1)}\right)$$</p>
| <p>The Laplace transform of a function</p>
<p>$\sum _{i=1}^{\infty } a_i \delta (t-\log (i))$
where $\delta (t-\log (i))$ is the Dirac delta function (i.e., unit impulse) at time $\log(i)$</p>
<p>is</p>
<p>$\int_0^{\infty } e^{-s t} \sum _{i=1}^{\infty } a_i \delta (t-\log (i)) \,
dt$</p>
<p>or</p>
<p>$\sum _{i=1}^{\infty } a_i i^{-s}$</p>
<p>Your $a_i$ are $\log(p)$ if $i = p^k$ for some prime $p$, else $0$, so it is the Laplace transform of what you are considering, which is very closely related to the Fourier transform. You might find that $\sum _{i=1}^{\infty }\frac{1}{s} a_i i^{-s} $ gives smoother results (although it then becomes a sum of $a_i$)</p>
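<p>Concretely, with $a_i=\Lambda(i)$ this transform is the classical Dirichlet series $\sum_{i\ge1}\Lambda(i)\,i^{-s}=-\zeta'(s)/\zeta(s)$. A spot check at $s=2$ (a Python sketch added for illustration; $\zeta$ via an Euler–Maclaurin partial sum, $\zeta'$ via a central difference):</p>

```python
import math

def zeta(s, N=20000):
    # Partial sum with Euler-Maclaurin tail correction
    return sum(k**-s for k in range(1, N + 1)) + N**(1 - s) / (s - 1) - N**-s / 2

def von_mangoldt_table(N):
    # lam[n] = log p if n = p^k for a prime p, else 0 (smallest-prime-factor sieve)
    spf = list(range(N + 1))
    for p in range(2, int(math.isqrt(N)) + 1):
        if spf[p] == p:
            for m in range(p * p, N + 1, p):
                if spf[m] == m:
                    spf[m] = p
    lam = [0.0] * (N + 1)
    for n in range(2, N + 1):
        p, m = spf[n], n
        while m % p == 0:
            m //= p
        if m == 1:  # n is a pure prime power p^k
            lam[n] = math.log(p)
    return lam

N = 100000
lam = von_mangoldt_table(N)
lhs = sum(lam[n] / n**2 for n in range(2, N + 1))  # sum Lambda(n) n^-s at s = 2

h = 1e-5
rhs = -(zeta(2 + h) - zeta(2 - h)) / (2 * h) / zeta(2)  # -zeta'(2)/zeta(2)
print(lhs, rhs)
```

<p>The truncated comb sum and $-\zeta'(2)/\zeta(2)$ agree to about four decimals; the remaining gap is the $O(1/N)$ tail of the truncated series.</p>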
| <p>This is not a complete answer, but I want to show that there is nothing mysterious going on here.</p>
<p>We want to prove that:</p>
<p>$$\text{Fourier Transform of } \Lambda(1)...\Lambda(k) \sim \sum\limits_{n=1}^{n=\infty} \frac{1}{n} \zeta(s)\sum\limits_{d|n}\frac{\mu(d)}{d^{(s-1)}}$$</p>
<p>The Dirichlet inverse of the Euler totient function is</p>
<p>$$a(n)=\sum\limits_{d|n} \mu(d)d$$</p>
<p>Construct the matrix $$T(n,k)=a(GCD(n,k))$$</p>
<p>which starts:</p>
<p>$$\displaystyle T = \begin{bmatrix} +1&+1&+1&+1&+1&+1&+1&\cdots \\ +1&-1&+1&-1&+1&-1&+1 \\ +1&+1&-2&+1&+1&-2&+1 \\ +1&-1&+1&-1&+1&-1&+1 \\ +1&+1&+1&+1&-4&+1&+1 \\ +1&-1&-2&-1&+1&+2&+1 \\ +1&+1&+1&+1&+1&+1&-6 \\ \vdots&&&&&&&\ddots \end{bmatrix}
$$</p>
<p>where GCD is the Greatest Common Divisor of row index $n$ and column index $k$.</p>
<p>joriki <a href="https://math.stackexchange.com/a/51708/8530">showed</a> that the von Mangoldt function is $$\Lambda(n)=\sum\limits_{k=1}^{k=\infty} \frac{T(n,k)}{k}$$</p>
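<p>(A numerical illustration in Python, not part of joriki's argument.) The partial sums $\sum_{k\le K}T(n,k)/k$ converge to $\Lambda(n)$ like $O(1/K)$; for instance, for $n=2$ the sum telescopes to $H_K-H_{\lfloor K/2\rfloor}\to\log 2$:</p>

```python
import math

def mobius(d):
    # Moebius function via trial-division factorisation (fine for small d)
    primes, m, p = 0, d, 2
    while p * p <= m:
        if m % p == 0:
            m //= p
            if m % p == 0:
                return 0  # square factor
            primes += 1
        else:
            p += 1
    if m > 1:
        primes += 1
    return (-1)**primes

def a(n):
    # Dirichlet inverse of Euler's totient: a(n) = sum_{d|n} mu(d) * d
    return sum(mobius(d) * d for d in range(1, n + 1) if n % d == 0)

def lambda_partial(n, K):
    # Partial sum of Lambda(n) = sum_{k>=1} T(n,k)/k with T(n,k) = a(gcd(n,k))
    cache = {d: a(d) for d in range(1, n + 1) if n % d == 0}
    return sum(cache[math.gcd(n, k)] / k for k in range(1, K + 1))

K = 200000
print(lambda_partial(2, K), math.log(2))
print(lambda_partial(9, K), math.log(3))
print(lambda_partial(12, K), 0.0)
```

<p>Convergence is slow (harmonic-type cancellation), which is why $K$ has to be taken large.</p>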
<p>Then add this quote by Terence Tao from <a href="https://groups.yahoo.com/neo/groups/harmonicanalysis/conversations/messages/230" rel="nofollow noreferrer">here</a>, that I don't completely understand but I do almost see why it should be true:</p>
<p>Quote:" The Fourier transform in this context becomes (essentially) the Mellin transform, which is particularly important in analytic number theory. (For instance, the Riemann zeta function is essentially the Mellin transform of the Dirac comb on the natural numbers"</p>
<p>Now let us return to the matrix $T$:</p>
<p>First the von Mangoldt is expanded as:</p>
<p>$$\displaystyle \begin{bmatrix} +1/1&+1/1&+1/1&+1/1&+1/1&+1/1&+1/1&\cdots \\ +1/2&-1/2&+1/2&-1/2&+1/2&-1/2&+1/2 \\ +1/3&+1/3&-2/3&+1/3&+1/3&-2/3&+1/3 \\ +1/4&-1/4&+1/4&-1/4&+1/4&-1/4&+1/4 \\ +1/5&+1/5&+1/5&+1/5&-4/5&+1/5&+1/5 \\ +1/6&-1/6&-2/6&-1/6&+1/6&+2/6&+1/6 \\ +1/7&+1/7&+1/7&+1/7&+1/7&+1/7&-6/7 \\ \vdots&&&&&&&\ddots \end{bmatrix}$$</p>
<p><strong>Edit: 24.1.2016.</strong> From here on the variables $n$ and $k$ should be interchanged, but I don't know how to fix the rest of this answer right now.</p>
<p>Summing the columns first is equivalent to what was said earlier:
$$\Lambda(n)=\sum\limits_{k=1}^{k=\infty} \frac{T(n,k)}{k}$$</p>
<p>where:</p>
<p>$$\Lambda(n) = \begin{cases} \infty & \text{if }n=1, \\\log p & \text{if }n=p^k \text{ for some prime } p \text{ and integer } k \ge 1, \\ 0 & \text{otherwise.} \end{cases}$$</p>
<p>or as a sequence:</p>
<p>$$\infty ,\log (2),\log (3),\log (2),\log (5),0,\log (7),\log (2),\log (3),0,\log (11),0,...,\Lambda(\infty)$$</p>
<p>And now based on the quote above let us say that:
$$\text{Fourier Transform of } \Lambda(1)...\Lambda(k) = \sum\limits_{n=1}^{n=k}\frac{\Lambda(n)}{n^s}$$</p>
<p>Expanding this into matrix form we have the matrix:</p>
<p>$$\displaystyle \begin{bmatrix}
\frac{T(1,1)}{1 \cdot 1^s}&+\frac{T(1,2)}{1 \cdot 2^s}&+\frac{T(1,3)}{1 \cdot 3^s}+&\cdots&+\frac{T(1,k)}{1 \cdot k^s} \\
\frac{T(2,1)}{2 \cdot 1^s}&+\frac{T(2,2)}{2 \cdot 2^s}&+\frac{T(2,3)}{2 \cdot 3^s}+&\cdots&+\frac{T(2,k)}{2 \cdot k^s} \\
\frac{T(3,1)}{3 \cdot 1^s}&+\frac{T(3,2)}{3 \cdot 2^s}&+\frac{T(3,3)}{3 \cdot 3^s}+&\cdots&+\frac{T(3,k)}{3 \cdot k^s} \\
\vdots&\vdots&\vdots&\ddots&\vdots \\
\frac{T(n,1)}{n \cdot 1^s}&+\frac{T(n,2)}{n \cdot 2^s}&+\frac{T(n,3)}{n \cdot 3^s}+&\cdots&+\frac{T(n,k)}{n \cdot k^s} \end{bmatrix} = \begin{bmatrix} \frac{\zeta(s)}{1} \\ +\frac{\zeta(s)\sum\limits_{d|2} \frac{\mu(d)}{d^{(s-1)}}}{2} \\ +\frac{\zeta(s)\sum\limits_{d|3} \frac{\mu(d)}{d^{(s-1)}}}{3} \\ \vdots \\ +\frac{\zeta(s)\sum\limits_{d|n} \frac{\mu(d)}{d^{(s-1)}}}{n} \end{bmatrix}$$</p>
<p>On the right hand side we see that it sums to the right hand side of what we set out to prove, namely:</p>
<p>$$\text{Fourier Transform of } \Lambda(1)...\Lambda(k) \sim \sum\limits_{n=1}^{n=\infty} \frac{1}{n} \zeta(s)\sum\limits_{d|n}\frac{\mu(d)}{d^{(s-1)}}$$</p>
<p>Things that remain unclear are: What factor should the left hand side be multiplied with in order to have the same magnitude as the right hand side? And why in the Fourier transform does the first term of the von Mangoldt function appear to be $\log q$?</p>
<p>$$\Lambda(n) = \begin{cases} \log q & \text{if }n=1, \\\log p & \text{if }n=p^k \text{ for some prime } p \text{ and integer } k \ge 1, \\ 0 & \text{otherwise.} \end{cases}$$</p>
<p>$$n=1,2,3,4,5,...q$$</p>
<p>As a heuristic, $$\Lambda(n) = \log q \;\;\;\; \text{if }n=1$$ probably has to do with the fact that in the Fourier transform $q$ terms of $\Lambda$ are used and that the first column of the square matrix $T(1..q,1..q)$ sums to a harmonic number.</p>
|
linear-algebra | <blockquote>
<p>Let <span class="math-container">$ \sigma(A)$</span> be the set of all eigenvalues of <span class="math-container">$A$</span>. Show that <span class="math-container">$ \sigma(A) = \sigma\left(A^T\right)$</span> where <span class="math-container">$A^T$</span> is the transpose matrix of <span class="math-container">$A$</span>.</p>
</blockquote>
| <p>The matrix <span class="math-container">$(A - \lambda I)^{T}$</span> is the same as the matrix <span class="math-container">$\left(A^{T} - \lambda I\right)$</span>, since the identity matrix is symmetric.</p>
<p>Thus:</p>
<p><span class="math-container">$$\det\left(A^{T} - \lambda I\right) = \det\left((A - \lambda I)^{T}\right) = \det (A - \lambda I)$$</span></p>
<p>From this it is obvious that the eigenvalues are the same for both <span class="math-container">$A$</span> and <span class="math-container">$A^{T}$</span>.</p>
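<p>(A small numeric illustration in Python, added here.) The identity can be watched in action: for a non-symmetric <span class="math-container">$3\times3$</span> matrix, <span class="math-container">$\det(A-\lambda I)$</span> and <span class="math-container">$\det(A^{T}-\lambda I)$</span> agree at every sampled <span class="math-container">$\lambda$</span>:</p>

```python
def det3(M):
    # Cofactor expansion of a 3x3 determinant along the first row
    return (M[0][0] * (M[1][1] * M[2][2] - M[1][2] * M[2][1])
          - M[0][1] * (M[1][0] * M[2][2] - M[1][2] * M[2][0])
          + M[0][2] * (M[1][0] * M[2][1] - M[1][1] * M[2][0]))

def transpose(M):
    return [[M[j][i] for j in range(3)] for i in range(3)]

def char_poly(M, lam):
    # det(M - lam * I)
    S = [[M[i][j] - (lam if i == j else 0) for j in range(3)] for i in range(3)]
    return det3(S)

A = [[2, 1, 0],
     [5, 3, -2],
     [1, 4, 7]]

for lam in (-1.5, 0.0, 2.0, 3.7):
    print(lam, char_poly(A, lam), char_poly(transpose(A), lam))
```

<p>Since the two characteristic polynomials coincide as functions of <span class="math-container">$\lambda$</span>, their roots — the eigenvalues — coincide as well.</p>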
| <p>I'm going to work a little bit more generally.</p>
<p>Let $V$ be a finite dimensional vector space over some field $K$, and let $\langle\cdot,\cdot\rangle$ be a <em>nondegenerate</em> bilinear form on $V$.</p>
<p>We then have for every linear endomorphism $A$ of $V$, that there is a unique endomorphism $A^*$ of $V$ such that
$$\langle Ax,y\rangle=\langle x,A^*y\rangle$$
for all $x$ and $y\in V$.</p>
<p>The existence and uniqueness of such an $A^*$ requires some explanation, but I will take it for granted.</p>
<blockquote>
<p><strong>Proposition:</strong> Given an endomorphism $A$ of a finite dimensional vector space $V$ equipped with a nondegenerate bilinear form $\langle\cdot,\cdot\rangle$, the endomorphisms $A$ and $A^*$ have the same set of eigenvalues.</p>
</blockquote>
<p><em>Proof:</em>
Let $\lambda$ be an eigenvalue of $A$. And let $v$ be an eigenvector of $A$ corresponding to $\lambda$ (in particular, $v$ is nonzero). Let $w$ be another arbitrary vector. We then have that:
$$\langle v,\lambda w\rangle=\langle\lambda v,w\rangle=\langle Av,w\rangle=\langle v,A^*w\rangle$$
This implies that $\langle v,\lambda w-A^*w\rangle =0$ for all $w\in V$. Now either $\lambda$ is an eigenvalue of $A^*$ or not. If it isn't, the operator $\lambda I -A^*$ is an automorphism of $V$ since $\lambda I-A^*$ being singular is equivalent to $\lambda$ being an eigenvalue of $A^*$. In particular, this means that $\langle v, z\rangle = 0$ for all $z\in V$. But since $\langle\cdot,\cdot\rangle$ is nondegenerate, this implies that $v=0$. A contradiction. $\lambda$ must have been an eigenvalue of $A^*$ to begin with. Thus every eigenvalue of $A$ is an eigenvalue of $A^*$. The other inclusion can be derived similarly.</p>
<p>How can we use this in your case? I believe you're working over a real vector space and considering the dot product as your bilinear form. Now consider an endomorphism $T$ of $\Bbb R^n$ which is given by $T(x)=Ax$ for some $n\times n$ matrix $A$. It just so happens that for all $y\in\Bbb R^n$ we have $T^*(y)=A^t y$. Since $T$ and $T^*$ have the same eigenvalues, so do $A$ and $A^t$.</p>
|
geometry | <p>Given a surface $f(x,y,z)=0$, how would you determine whether or not it's a surface of revolution, and find the axis of rotation?</p>
<p>The special case where $f$ is a polynomial is also of interest.</p>
<p>A few ideas that might lead somewhere, maybe:</p>
<p><strong>(1) For algebra folks:</strong> Surfaces of the form $z = g(x^2 + y^2)$ are always surfaces of revolution. I don't know if the converse is true. If it is, then we just need to find a coordinate system in which $f$ has this particular form. Finding special coordinate systems that simplify things often involves finding eigenvalues. You use eigenvalues to tackle the special case of quadric surfaces, anyway.</p>
<p><strong>(2) For differential geometry folks:</strong> Surfaces of revolution have a very special pattern of lines of curvature. One family of lines of curvature is a set of coaxial circles. I don't know if this property characterizes surfaces of revolution, but it sounds promising.</p>
<p><strong>(3) For physicists & engineers:</strong> The axis of rotation must be one of the principal axes for the centroidal moments of inertia, according to physics notes I have read. So, we should compute the centroid and the inertia tensor. I'm not sure how. Then, diagonalize, so eigenvalues, again. Maybe this is actually the same as idea #1.</p>
<p><strong>(4) For classical geometry folks:</strong> What characterizes surfaces of revolution (I think) is that every line that's normal to the surface intersects some fixed line, which is the axis of rotation. So, construct the normal lines at a few points on the surface (how many??), and see if there is some line $L$ that intersects all of these normal lines (how?). See <a href="https://math.stackexchange.com/questions/607348/line-intersecting-three-or-four-given-lines?lq=1">this related question</a>. If there is, then this line $L$ is (probably) the desired axis of rotation. This seems somehow related to the answers given by Holographer and zyx.</p>
<p>Why is this is interesting/important? Because surfaces of revolution are easier to handle computationally (e.g. area and volume calculations), and easy to manufacture (using a lathe). So, it's useful to be able to identify them, so that they can be treated as special cases. </p>
<p>The question is related to <a href="https://math.stackexchange.com/questions/593478/how-to-determine-that-a-surface-is-symmetric/603976#comment1274516_603976">this one about symmetric surfaces</a>, I think. The last three paragraphs of that question (about centroids) apply here, too. Specifically, if we have a bounded (compact) surface of revolution, then its axis of rotation must pass through its centroid, so some degrees of freedom disappear.</p>
<p>If you want to test out your ideas, you can try experimenting with
$$
f(x,y,z) = -6561 + 5265 x^2 + 256 x^4 + 4536 x y - 1792 x^3 y + 2592 y^2 +
4704 x^2 y^2 - 5488 x y^3 + 2401 y^4 + 2592 x z - 1024 x^3 z -
4536 y z + 5376 x^2 y z - 9408 x y^2 z + 5488 y^3 z + 5265 z^2 +
1536 x^2 z^2 - 5376 x y z^2 + 4704 y^2 z^2 - 1024 x z^3 +
1792 y z^3 + 256 z^4
$$
This is a surface of revolution, and it's compact. Sorry it's such a big ugly mess. It was the simplest compact non-quadric example I could invent.</p>
<p>If we rotate to a $(u,v,w)$ coordinate system, where
\begin{align}
u &= \tfrac{1}{9}( x - 4 y + 8 z) \\
v &= \tfrac{1}{9}(8 x + 4 y + z) \\
w &= \tfrac{1}{9}(-4 x + 7 y + 4 z)
\end{align}
then the surface becomes
$$
u^2 + v^2 + w^4 - 1 = 0
$$
which is a surface of revolution having the $w$-axis as its axis of rotation.</p>
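<p>As a sanity check on this example (not part of the original post): since the $(u,v,w)$ map is an orthogonal rotation, one would expect $f(x,y,z)$ to equal $6561\,(u^2+v^2+w^4-1)$, the factor $6561 = 9^4$ being inferred from the constant term. A quick numeric comparison at random points, sketched below, supports this:</p>

```python
import random

def f(x, y, z):
    # the quartic from the question, transcribed verbatim
    return (-6561 + 5265*x**2 + 256*x**4 + 4536*x*y - 1792*x**3*y
            + 2592*y**2 + 4704*x**2*y**2 - 5488*x*y**3 + 2401*y**4
            + 2592*x*z - 1024*x**3*z - 4536*y*z + 5376*x**2*y*z
            - 9408*x*y**2*z + 5488*y**3*z + 5265*z**2 + 1536*x**2*z**2
            - 5376*x*y*z**2 + 4704*y**2*z**2 - 1024*x*z**3
            + 1792*y*z**3 + 256*z**4)

def g(x, y, z):
    # rotated coordinates from the question
    u = (x - 4*y + 8*z) / 9
    v = (8*x + 4*y + z) / 9
    w = (-4*x + 7*y + 4*z) / 9
    # 6561 = 9**4 rescales the rotated form to f's normalization (inferred)
    return 6561 * (u**2 + v**2 + w**4 - 1)

random.seed(0)
for _ in range(100):
    x, y, z = (random.uniform(-2, 2) for _ in range(3))
    assert abs(f(x, y, z) - g(x, y, z)) < 1e-6
```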
| <p>You can reduce it to an algebraic problem as follows:</p>
<p>The definition of a surface of revolution is that there is some axis such that rotations about this axis leave the surface invariant. Let's denote the action of a rotation by angle $\theta$ about this axis by the map $\vec x\mapsto \vec R_\theta(\vec x)$. With your surface given as a level set $f(\vec{x})=0$, the invariance condition is that $f(\vec R_\theta(\vec x))=0$ whenever $f(\vec{x})=0$, for all $\theta$.</p>
<p>In particular, we can differentiate this with respect to $\theta$ to get $\vec\nabla f(\vec R_\theta(\vec x))\cdot \frac{\partial}{\partial\theta}\vec R_\theta(\vec x)=0$, which at $\theta=0$ gives $\vec\nabla f(\vec x)\cdot \vec k(\vec x)=0$, where $\vec k(\vec x)=\left.\frac{\partial}{\partial\theta}\vec R_\theta(\vec x)\right|_{\theta=0}$ is the vector field representing the action of an infinitesimal rotation. (If the language of differential geometry is familiar, this is a Killing vector field).</p>
<p>So with this language established, what we need to check is whether there is any Killing field $\vec k(\vec x)$ associated to a rotation, which is orthogonal to the gradient of $f$ everywhere on the surface (i.e., whenever $f(\vec x)=0$). In fact, this will be not just necessary, but also sufficient, since (intuitively) any rotation can be built from many small ones.</p>
<p>Luckily, it's quite straightforward to write down the most general Killing field: if $\vec x_0$ is a point on the axis of rotation, and $\vec a$ a unit vector pointing along the axis, we have $\vec k(\vec x)=\vec a \times (\vec x-\vec x_0)$. (Note that this is a degenerate parametrization, since we can pick any point on the axis, corresponding to shifting $\vec x_0$ by a multiple of $\vec a$, and also send $\vec a$ to $-\vec a$, to give the same rotation).</p>
<p>To summarize: the question has been recast as "are there any solutions $\vec x_0, \vec a$ to $\vec a \times (\vec x-\vec x_0)\cdot\vec\nabla f(\vec x)=0$, which hold for all $\vec x$ such that $f(\vec x)=0$?".</p>
<p>(You could also write the equation as $\det[\vec a, \, \vec x-\vec x_0, \,\vec\nabla f(\vec x)]=0$).</p>
<p>For your example, I got Mathematica to use this method. I let $\vec x_0=(x_0,y_0,0)$, taking $z_0=0$ to remove some degeneracy, and $\vec a=(a_x,a_y,a_z)$, giving 5 unknowns for the axis. I then found four points on the surface, by solving $f(x,y,z)=0$ with $y=z=0$ and $x=z=0$. Then I got it to solve $\vec a \times (\vec x-\vec x_0)\cdot\vec\nabla f(\vec x)=0$ for the four points, and $|\vec a|^2=1$ (5 equations for the 5 unknowns), getting a unique solution up to the sign of $\vec a$. I then substituted the solution into $\vec a \times (\vec x-\vec x_0)\cdot\vec\nabla f(\vec x)$ for general $\vec x$, getting zero, so the solution indeed gave an axis of revolution. It is simplified in this case because all the level sets of $f$ give a surface of revolution about the same axis, so this last step did not require assuming $f(\vec x)=0$.</p>
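<p>For readers without Mathematica, here is a stdlib-only numeric sketch of the same test (my own, with a simpler known surface of revolution $x^2+y^2+z^4=1$ standing in for the quartic above; the function names are mine). Given a candidate axis through $\vec x_0$ with direction $\vec a$, it evaluates $\max |\vec a \times (\vec x-\vec x_0)\cdot\vec\nabla f(\vec x)|$ over sample surface points, using a central-difference gradient:</p>

```python
import math

def surface(p):
    # a known surface of revolution about the z-axis: x^2 + y^2 + z^4 = 1
    x, y, z = p
    return x*x + y*y + z**4 - 1

def grad(f, p, h=1e-6):
    # central-difference gradient of f at p
    g = []
    for i in range(3):
        step = [0.0, 0.0, 0.0]; step[i] = h
        g.append((f([a + b for a, b in zip(p, step)]) -
                  f([a - b for a, b in zip(p, step)])) / (2*h))
    return g

def cross(a, b):
    return [a[1]*b[2] - a[2]*b[1], a[2]*b[0] - a[0]*b[2], a[0]*b[1] - a[1]*b[0]]

def dot(a, b):
    return sum(x*y for x, y in zip(a, b))

def killing_defect(a, x0, pts):
    # max |a x (x - x0) . grad f| over the sample points; ~0 (up to numerics)
    # iff the line through x0 with direction a is an axis of revolution
    return max(abs(dot(cross(a, [p - q for p, q in zip(pt, x0)]),
                       grad(surface, pt))) for pt in pts)

# sample points on the surface: for each z, points on a circle of radius (1 - z^4)^(1/2)
pts = [[math.sqrt(1 - z**4)*math.cos(t), math.sqrt(1 - z**4)*math.sin(t), z]
       for z in (-0.9, -0.3, 0.0, 0.5, 0.8) for t in (0.0, 1.0, 2.5, 4.0)]

assert killing_defect([0, 0, 1], [0, 0, 0], pts) < 1e-6   # z-axis passes
assert killing_defect([1, 0, 0], [0, 0, 0], pts) > 0.1    # x-axis fails
```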
| <p>If you are working numerically you are probably not interested in degenerate cases, so let's assume that the surface is "generic" in a suitable sense (this rules out the sphere, for example). As you point out, it is helpful to use Gaussian curvature $K$ because $K$ is independent of presentation, such as the particular choice of the function $f(x,y,z)$. The good news is that it is possible to calculate $K$ directly from $f$ using a suitable differential operator. For curves in the plane this is the Riess operator from algebraic geometry; for surfaces it is more complicated. This was treated in detail in an article by Goldman:</p>
<p>Goldman, Ron: Curvature formulas for implicit curves and surfaces.
Comput. Aided Geom. Design 22 (2005), no. 7, 632-658. </p>
<p>See <a href="http://u.math.biu.ac.il/~katzmik/goldman05.pdf" rel="nofollow">here</a></p>
<p>Now the punchline is that if this is a surface of revolution then the lines $\gamma$ of curvature are very special: they are all circles (except for degenerate cases like the sphere). In particular, $\gamma$ has constant curvature and zero torsion $\tau$ as a curve in 3-space. If $f$ is a polynomial, one should get a rational function, say $g$, for curvature, and then the condition of constant curvature for $\gamma$ should be expressible by a pair of equations.</p>
|
linear-algebra | <p>Today, at my linear algebra exam, there was this question that I couldn't solve.</p>
<hr />
<blockquote>
<p>Prove that
<span class="math-container">$$\det \begin{bmatrix}
n^{2} & (n+1)^{2} &(n+2)^{2} \\
(n+1)^{2} &(n+2)^{2} & (n+3)^{2}\\
(n+2)^{2} & (n+3)^{2} & (n+4)^{2}
\end{bmatrix} = -8$$</span></p>
</blockquote>
<hr />
<p>Clearly, calculating the determinant, with the matrix as it is, wasn't the right way. The calculations went on and on. But I couldn't think of any other way to solve it.</p>
<p>Is there any way to simplify <span class="math-container">$A$</span>, so as to calculate the determinant?</p>
| <p>Here is a proof that is decidedly not from the book. The determinant is obviously a polynomial in $n$ of degree at most $6$. Therefore, to prove it is constant, you need only plug in $7$ values. In fact, $n = -4, -3, \ldots, 0$ are easy to calculate, so you only have to drudge through $1$ and $2$ to do it this way!</p>
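<p>The seven evaluations can of course be delegated to a machine; a short stdlib-only check of this argument might look like:</p>

```python
def det3(m):
    # 3x3 determinant via cofactor expansion along the first row
    a, b, c = m[0]
    d, e, f = m[1]
    g, h, i = m[2]
    return a*(e*i - f*h) - b*(d*i - f*g) + c*(d*h - e*g)

def M(n):
    # the matrix from the question: entry (i, j) is (n + i + j)^2
    return [[(n + i + j)**2 for j in range(3)] for i in range(3)]

# det M(n) is a polynomial in n of degree at most 6, so 7 values pin it down
assert all(det3(M(n)) == -8 for n in range(-4, 3))
```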
| <p>Recall that $a^2-b^2=(a+b)(a-b)$. Subtracting $\operatorname{Row}_1$ from $\operatorname{Row}_2$ and from $\operatorname{Row}_3$ gives
$$
\begin{bmatrix}
n^2 & (n+1)^2 & (n+2)^2 \\
2n+1 & 2n+3 & 2n+5 \\
4n+4 & 4n+8 & 4n+12
\end{bmatrix}
$$
Then subtracting $2\cdot\operatorname{Row}_2$ from $\operatorname{Row}_3$ gives
$$
\begin{bmatrix}
n^2 & (n+1)^2 & (n+2)^2 \\
2n+1 & 2n+3 & 2n+5 \\
2 & 2 & 2
\end{bmatrix}
$$
Now, subtracting $\operatorname{Col}_1$ from $\operatorname{Col}_2$ and $\operatorname{Col}_3$ gives
$$
\begin{bmatrix}
n^2 & 2n+1 & 4n+4 \\
2n+1 & 2 & 4 \\
2 & 0 & 0
\end{bmatrix}
$$
Finally, subtracting $2\cdot\operatorname{Col}_2$ from $\operatorname{Col}_3$ gives
$$
\begin{bmatrix}
n^2 & 2n+1 & 2 \\
2n+1 & 2 & 0 \\
2 & 0 & 0
\end{bmatrix}
$$
Expanding the determinant about $\operatorname{Row}_3$ gives
$$
\det A
=
2\cdot\det
\begin{bmatrix}
2n+1 & 2\\
2 & 0
\end{bmatrix}
=2\cdot(-4)=-8
$$
as advertised.</p>
|
combinatorics | <p>Source: <a href="http://www.math.uci.edu/%7Ekrubin/oldcourses/12.194/ps1.pdf" rel="noreferrer">German Mathematical Olympiad</a></p>
<h3>Problem:</h3>
<blockquote>
<p>On an arbitrarily large chessboard, a generalized knight moves by jumping p squares in one direction and q squares in a perpendicular direction, p, q > 0. Show that such a knight can return to its original position only after an even number of moves.</p>
</blockquote>
<h3>Attempt:</h3>
<p>Assume, wlog, the knight moves <span class="math-container">$q$</span> steps <strong>to the right</strong> after its <span class="math-container">$p$</span> steps. Let the valid moves for the knight be "LU", "UR", "DL", "RD" i.e. when it moves <strong>L</strong>eft, it has to go <strong>U</strong>p("LU"), or when it goes <strong>U</strong>p , it has to go <strong>R</strong>ight("UR") and so on.</p>
<p>Let the knight be stationed at <span class="math-container">$(0,0)$</span>. We note that after any move its coordinates will be integer multiples of <span class="math-container">$p,q$</span>. Let its final position be <span class="math-container">$(pk, qr)$</span> for <span class="math-container">$ k,r\in\mathbb{Z}$</span>. We follow sign conventions of coordinate system.</p>
<p>Let the knight move by <span class="math-container">$-pk$</span> horizontally and <span class="math-container">$-qk$</span> vertically by repeated application of one step. So, its new position is <span class="math-container">$(0,q(r-k))$</span>. I am thinking that somehow I need to cancel that <span class="math-container">$q(r-k)$</span> to achieve <span class="math-container">$(0,0)$</span>, but I am not able to do so.</p>
<p>Any hints please?</p>
| <p>Case I: If $p+q$ is odd, then the knight's square changes colour after each move, so we are done.</p>
<p>Case II: If $p$ and $q$ are both odd, then the $x$-coordinate changes by an odd number after every move, so it is odd after an odd number of moves. So the $x$-coordinate can be zero only after an even number of moves.</p>
<p>Case III: If $p$ and $q$ are both even, we can keep dividing each of them by $2$ until we reach Case I or Case II. (Dividing $p$ and $q$ by the same amount doesn't change the shape of the knight's path, only its size.)</p>
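<p>A brute-force check of the claim, covering a representative of each of the three cases (this is my own illustration, not part of the answer): enumerate all move sequences up to a fixed length and verify that every sequence returning to the origin has even length.</p>

```python
from itertools import product

def return_lengths(p, q, max_len):
    # all 8 moves of a (p, q)-knight
    moves = [(p, q), (q, p), (-p, q), (-q, p),
             (p, -q), (q, -p), (-p, -q), (-q, -p)]
    lengths = set()
    for L in range(1, max_len + 1):
        for seq in product(moves, repeat=L):
            if (sum(dx for dx, dy in seq) == 0 and
                    sum(dy for dx, dy in seq) == 0):
                lengths.add(L)
                break
    return lengths

# one (p, q) per case: p + q odd, both odd, both even
for p, q in [(1, 2), (1, 3), (2, 2)]:
    assert all(L % 2 == 0 for L in return_lengths(p, q, 5))
```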
| <p>This uses complex numbers.</p>
<p>Define $z=p+qi$. Say that the knight starts at $0$ on the complex plane. Note that, in one move, the knight may add or subtract $z$, $iz$, $\bar z$, $i\bar z$ to his position.</p>
<p>Thus, at any point, the knight is at a point of the form:
$$(a+bi)z+(c+di)\bar z$$
where $a$, $b$, $c$, and $d$ are integers.</p>
<p>Note that the parity (evenness/oddness) of the quantity $a+b+c+d$ changes after every move. This means it's even after an even number of moves and odd after an odd number of moves. Also note that:
$$a+b+c+d\equiv a^2+b^2-c^2-d^2\pmod2$$
(This is because $x\equiv x^2\pmod2$ and $x\equiv-x\pmod2$ for all $x$.)</p>
<p>Now, let's say that the knight has reached its original position. Then:
\begin{align}
(a+bi)z+(c+di)\bar z&=0\\
(a+bi)z&=-(c+di)\bar z\\
|a+bi||z|&=|c+di||z|\\
|a+bi|&=|c+di|\\
\sqrt{a^2+b^2}&=\sqrt{c^2+d^2}\\
a^2+b^2&=c^2+d^2\\
a^2+b^2-c^2-d^2&=0\\
a^2+b^2-c^2-d^2&\equiv0\pmod2\\
a+b+c+d&\equiv0\pmod2
\end{align}
Thus, the number of moves is even.</p>
<blockquote>
<p>Interestingly, this implies that $p$ and $q$ do not need to be integers. They can each be any real number. The only constraint is that we can't have $p=q=0$.</p>
</blockquote>
|
combinatorics | <p>I know that the sum of squares of binomial coefficients is just <span class="math-container">${2n}\choose{n}$</span> but what is the closed expression for the sum <span class="math-container">${n\choose 0}^2 - {n\choose 1}^2 + {n\choose 2}^2 + \cdots + (-1)^n {n\choose n}^2$</span>?</p>
| <p><span class="math-container">$$(1+x)^n(1-x)^n=\left( \sum_{i=0}^n {n \choose i}x^i \right)\left( \sum_{i=0}^n {n \choose i}(-x)^i \right)$$</span></p>
<p>The coefficient of <span class="math-container">$x^n$</span> is <span class="math-container">$\sum_{k=0}^n {n \choose n-k}(-1)^k {n \choose k}$</span> which is exactly your sum.</p>
<p>On another hand:</p>
<p><span class="math-container">$$(1+x)^n(1-x)^n=(1-x^2)^n=\left( \sum_{i=0}^n {n \choose i}(-1)^ix^{2i} \right)$$</span></p>
<p>Thus, the coefficient of <span class="math-container">$x^n$</span> is <span class="math-container">$0$</span> if <span class="math-container">$n$</span> is odd or <span class="math-container">$(-1)^{\frac{n}2}{n \choose n/2}$</span> if <span class="math-container">$n$</span> is even.</p>
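<p>The closed form is easy to spot-check numerically; a short sketch using Python's <code>math.comb</code>:</p>

```python
from math import comb

def alt_sum(n):
    # sum of (-1)^k * C(n, k)^2 over k = 0..n
    return sum((-1)**k * comb(n, k)**2 for k in range(n + 1))

for n in range(13):
    if n % 2 == 1:
        assert alt_sum(n) == 0
    else:
        assert alt_sum(n) == (-1)**(n // 2) * comb(n, n // 2)
```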
| <p>Here's a combinatorial proof. </p>
<p>Since $\binom{n}{k} = \binom{n}{n-k}$, we can rewrite the sum as $\sum_{k=0}^n \binom{n}{k} \binom{n}{n-k} (-1)^k$. Then $\binom{n}{k} \binom{n}{n-k}$ can be thought of as counting ordered pairs $(A,B)$, each of which is a subset of $\{1, 2, \ldots, n\}$, such that $|A| = k$ and $|B| = n-k$. The sum, then, is taken over all such pairs such that $|A| + |B| = n$. </p>
<p>Given $(A,B)$, let $x$ denote the largest element in the <em>symmetric difference</em> $A \oplus B = (A - B) \cup (B - A)$ (assuming that such an element exists). In other words, $x$ is the largest element that is in exactly one of the two sets. Then define $\phi$ to be the mapping that moves $x$ to the other set. The pairs $(A,B)$ and $\phi(A,B)$ have different signs, and $\phi(\phi(A,B)) = (A,B)$, so $(A,B)$ and $\phi(A,B)$ cancel each other out in the sum. (The function $\phi$ is what is known as a <em>sign-reversing involution</em>.)</p>
<p>So the value of the sum is determined by the number of pairs $(A,B)$ that do not cancel out. These are precisely those for which $\phi$ is not defined; in other words, those for which there is no largest $x$. But there can be no largest $x$ only in the case $A=B$. If $n$ is odd, then the requirement $\left|A\right| + \left|B\right| = n$ means that we cannot have $A=B$, so in the odd case the sum is $0$. If $n$ is even, then the number of pairs is just the number of subsets of $\{1, 2, \ldots, n\}$ of size $n/2$; i.e., $\binom{n}{n/2}$, and the parity is determined by whether $|A| = n/2$ is odd or even.</p>
<p>Thus we get $$\sum_{k=0}^n \binom{n}{k}^2 (-1)^k = \begin{cases} (-1)^{n/2} \binom{n}{n/2}, & n \text{ is even}; \\ 0, & n \text{ is odd}.\end{cases}$$</p>
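<p>The sign-reversing involution $\phi$ is concrete enough to run; the sketch below (my own, for $n=4$) enumerates all pairs $(A,B)$ with $|A|+|B|=n$, checks that $\phi$ is a sign-reversing involution on the non-fixed pairs, and confirms that the survivors account for the whole sum:</p>

```python
from itertools import combinations
from math import comb

def phi(A, B):
    # move the largest element of the symmetric difference to the other set
    diff = A ^ B
    if not diff:
        return None                    # A == B: phi is undefined
    x = max(diff)
    return (A - {x}, B | {x}) if x in A else (A | {x}, B - {x})

n = 4
total = fixed = 0
for k in range(n + 1):
    for A in combinations(range(n), k):
        for B in combinations(range(n), n - k):
            SA, SB = set(A), set(B)
            sign = (-1) ** len(SA)
            total += sign
            img = phi(SA, SB)
            if img is None:
                fixed += 1
            else:
                # phi reverses sign and is an involution, so such pairs cancel
                assert (-1) ** len(img[0]) == -sign
                assert phi(*img) == (SA, SB)

assert total == (-1) ** (n // 2) * comb(n, n // 2) == 6
assert fixed == comb(n, n // 2)        # the survivors are exactly A == B
```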
|
differentiation | <p>If $(V, \langle \cdot, \cdot \rangle)$ is a finite-dimensional inner product space and $f,g : \mathbb{R} \longrightarrow V$ are differentiable functions, a straightforward calculation with components shows that </p>
<p>$$
\frac{d}{dt} \langle f, g \rangle = \langle f(t), g^{\prime}(t) \rangle + \langle f^{\prime}(t), g(t) \rangle
$$</p>
<p>This approach is not very satisfying. However, attempting to apply the definition of the derivative directly doesn't seem to work for me. Is there a slick, perhaps intrinsic way, to prove this that doesn't involve working in coordinates?</p>
| <p>Observe that
$$
\begin{align*}
\frac{1}{h}
&
\left[
\langle f(t+h),\, g(t+h)\rangle - \langle f(t),\, g(t) \rangle
\right] \\
& =
\frac{1}{h}
\left[
\langle f(t+h),\, g(t+h)\rangle - \langle f(t),\, g(t+h)\rangle
\right]
+ \frac{1}{h}
\left[
\langle f(t),\, g(t+h)\rangle - \langle f(t),\, g(t)\rangle
\right] \\
&=
\left\langle
\frac{1}{h}
\left[
f(t+h) - f(t)
\right],\,
g(t+h)
\right\rangle
+
\left\langle
f(t),\,
\frac{1}{h}
\left[
g(t+h) - g(t)
\right]
\right\rangle.
\end{align*}
$$
As $h\to 0$ the first expression converges to
$$
\frac{d}{dt} \langle f(t), g(t) \rangle
$$ and the last expression converges to
$$
\langle f^{\prime}(t), g(t) \rangle + \langle f(t), g^{\prime}(t) \rangle
$$
by definition of the derivative, by continuity of $g$ and by continuity of the scalar product. Hence the desired equality follows.</p>
<p>Note that this doesn't use finite-dimensionality and that the argument is the exact same as the one for the ordinary product rule from calculus.</p>
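<p>For a concrete numerical illustration (my own example functions, in $\mathbb R^3$ with the standard dot product): compare a central difference quotient of $\langle f,g\rangle$ against $\langle f',g\rangle + \langle f,g'\rangle$.</p>

```python
import math

def f(t):
    return [math.cos(t), t**2, math.exp(t)]

def fp(t):  # derivative of f, computed by hand
    return [-math.sin(t), 2*t, math.exp(t)]

def g(t):
    return [t, math.sin(t), 1.0]

def gp(t):  # derivative of g, computed by hand
    return [1.0, math.cos(t), 0.0]

def dot(u, v):
    return sum(ui*vi for ui, vi in zip(u, v))

t, h = 0.7, 1e-6
numeric = (dot(f(t + h), g(t + h)) - dot(f(t - h), g(t - h))) / (2*h)
symbolic = dot(fp(t), g(t)) + dot(f(t), gp(t))
assert abs(numeric - symbolic) < 1e-6
```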
| <p>This answer may be needlessly complicated if you don't want such generality, taking the approach of first finding the Fréchet derivative of a bilinear operator.</p>
<p>If $V$, $W$, and $Z$ are normed spaces, and if $T:V\times W\to Z$ is a continuous (real) <a href="http://en.wikipedia.org/wiki/Bilinear_map">bilinear operator</a>, meaning that there exists $C\geq 0$ such that $\|T(v,w)\|\leq C\|v\|\|w\|$ for all $v\in V$ and $w\in W$, then the <a href="http://en.wikipedia.org/wiki/Fr%C3%A9chet_derivative">derivative</a> of $T$ at $(v_0,w_0)$ is $DT|_{(v_0,w_0)}(v,w)=T(v,w_0)+T(v_0,w)$. (I am assuming that $V\times W$ is given a norm equivalent with $\|(v,w)\|=\sqrt{\|v\|^2+\|w\|^2}$.) This follows from the straightforward computation </p>
<p>$$\frac{\|T(v_0+v,w_0+w)-T(v_0,w_0)-(T(v,w_0)+T(v_0,w))\|}{\|(v,w)\|}=\frac{\|T(v,w)\|}{\|(v,w)\|}\leq C\frac{\|v\|\|w\|}{\|(v,w)\|}\to 0$$</p>
<p>as $(v,w)\to 0$.</p>
<p>With $V=W$, $Z=\mathbb R$ or $Z=\mathbb C$, and $T:V\times V\to Z$ the inner product, this gives $DT_{(v_0,w_0)}(v,w)=\langle v,w_0\rangle+\langle v_0,w\rangle$. Now if $f,g:\mathbb R\to V$ are differentiable, then $F:\mathbb R\to V\times V$ defined by $F(t)=(f(t),g(t))$ is differentiable with $DF|_t(h)=h(f'(t),g'(t))$. By the chain rule, </p>
<p>$$D(T\circ F)|_{t}(h)
=DT|_{F(t)}\circ DF|_t(h)=h(\langle f'(t),g(t)\rangle+\langle f(t),g'(t)\rangle),$$</p>
<p>which means $\frac{d}{dt} \langle f, g \rangle = \langle f'(t),g(t)\rangle+\langle f(t),g'(t)\rangle$.</p>
|