tag | question_body | accepted_answer | second_answer |
---|---|---|---|
combinatorics | <p>So, the full problem goes like this:</p>
<blockquote>
<p>There are $25$ people at a party. Assuming that among any three people, at least two of them know each other, prove that there exists one person who must know at least twelve people.</p>
</blockquote>
<p>I've been stuck on this problem for a while and haven't really figured out how to proceed. I'm pretty sure that there is an answer that can be found via the <a href="https://en.m.wikipedia.org/wiki/Pigeonhole_principle" rel="noreferrer">pigeonhole principle</a> or some graph theory, but I'm not really sure how to get started. Any help would be appreciated.</p>
| <p>If everyone knows everyone, then you are done.</p>
<p>Otherwise choose two people, A and B say, who don't know each other. These two people are part of $23$ triples. In each of these triples, either A knows the third person, or B knows the third person.</p>
<p>Thus one of A or B knows (at least) $12$ people.</p>
| <p>Pick a vertex <span class="math-container">$v$</span>. If <span class="math-container">$\deg(v) \geq 12$</span> you are done.</p>
<p>Otherwise <span class="math-container">$v$</span> is connected with at most 11 vertices. Let <span class="math-container">$C$</span> be the vertices connected to <span class="math-container">$v$</span> and <span class="math-container">$N$</span> be the vertices not connected to <span class="math-container">$v$</span>. Note that <span class="math-container">$N$</span> has at least <span class="math-container">$13$</span> vertices.</p>
<p>Fix one vertex <span class="math-container">$u \in N$</span>.</p>
<p>Now, for each <span class="math-container">$w \in N$</span> with <span class="math-container">$w \neq u$</span>, look at the group <span class="math-container">$\{ u, v, w\}$</span>. Since neither <span class="math-container">$u$</span> nor <span class="math-container">$w$</span> is connected to <span class="math-container">$v$</span>, the only possible edge in this group is <span class="math-container">$uw$</span>; and among any three vertices at least one edge must exist. Therefore, <span class="math-container">$uw \in E(G)$</span>.</p>
<p>This shows that <span class="math-container">$u$</span> is connected to all the other vertices in <span class="math-container">$N$</span>.</p>
<p><strong>Note</strong> The proof is basically the following:</p>
<p>The given condition shows that if you fix one vertex <span class="math-container">$v$</span> and look at the set <span class="math-container">$N$</span> of all vertices which are not connected to <span class="math-container">$v$</span>, then the induced graph on <span class="math-container">$N$</span> is the complete graph.</p>
<p>So if <span class="math-container">$|N| \geq 13$</span> you are done, otherwise <span class="math-container">$|N| \leq 12$</span> which means <span class="math-container">$\deg(v) \geq 12$</span>.</p>
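As a quick sanity check of the argument (not part of either answer), one can generate graphs satisfying the hypothesis and verify the conclusion directly. The construction below assumes a convenient special case: any graph whose complement is bipartite (hence triangle-free) has the property that every triple of vertices contains an edge.

```python
import random
from itertools import combinations

def satisfies_condition(adj, n):
    # "among any three people, at least two know each other":
    # every 3-subset of vertices contains at least one edge
    return all(any(adj[a][b] for a, b in combinations(triple, 2))
               for triple in combinations(range(n), 3))

random.seed(0)
n = 25
for _ in range(20):
    # build G so that its complement is bipartite, hence triangle-free;
    # then no triple of vertices of G can be pairwise non-adjacent
    part = [random.random() < 0.5 for _ in range(n)]
    adj = [[False] * n for _ in range(n)]
    for i, j in combinations(range(n), 2):
        comp_edge = (part[i] != part[j]) and random.random() < 0.7
        adj[i][j] = adj[j][i] = not comp_edge
    assert satisfies_condition(adj, n)
    assert max(sum(row) for row in adj) >= 12   # someone knows at least 12 people
print("all sampled graphs have a vertex of degree >= 12")
```

This only samples one family of admissible graphs, of course; the proofs above cover the general case.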
|
linear-algebra | <p>I am trying to understand the similarities and differences between the minimal polynomial and characteristic polynomial of Matrices.</p>
<ol>
<li>When are the minimal polynomial and characteristic polynomial the same</li>
<li>When are they different</li>
<li>What conditions (eigenvalues/eigenvectors/...) would imply 1 or 2</li>
<li>Please tell me anything else about these two polynomials that is essential in comparing them.</li>
</ol>
| <p>The minimal polynomial is quite literally the smallest (in the sense of divisibility) nonzero polynomial that the matrix satisfies. That is to say, if $A$ has minimal polynomial $m(t)$ then $m(A)=0$, and if $p(t)$ is a nonzero polynomial with $p(A)=0$ then $m(t)$ divides $p(t)$.</p>
<p>The characteristic polynomial, on the other hand, is defined algebraically. If $A$ is an $n \times n$ matrix then its characteristic polynomial $\chi(t)$ must have degree $n$. This is not true of the minimal polynomial.</p>
<p>It can be proved that if $\lambda$ is an eigenvalue of $A$ then $m(\lambda)=0$. This is reasonably clear: if $\vec v \ne 0$ is a $\lambda$-eigenvector of $A$ then
$$m(\lambda) \vec v = m(A) \vec v = 0 \vec v = 0$$
and so $m(\lambda)=0$. The first equality here uses linearity and the fact that $A^n\vec v = \lambda^n \vec v$, which is an easy induction.</p>
<p>It can also be proved that $\chi(A)=0$; this is the Cayley–Hamilton theorem. In particular, $m(t) \mid \chi(t)$.</p>
<p>So one example of when (1) occurs is when $A$ has $n$ distinct eigenvalues. If this is so then $m(t)$ has $n$ roots, so has degree $\ge n$; but it has degree $\le n$ because it divides $\chi(t)$. Thus they must be equal (since they're both monic, have the same roots and the same degree, and one divides the other).</p>
<p>A more complete characterisation of when (1) occurs (and when (2) occurs) can be gained by considering Jordan Normal Form; but I suspect that you've only just learnt about characteristic and minimal polynomials so I don't want to go into JNF.</p>
<p>Let me know if there's anything else you'd like to know; I no doubt missed some things out.</p>
| <p>The minimal polynomial $m(t)$ is the smallest factor of the characteristic polynomial $f(t)$ such that if $A$ is the matrix, then we still have $m(A) = 0$. The only thing the characteristic polynomial measures is the algebraic multiplicity of an eigenvalue, whereas the minimal polynomial measures the size of the $A$-cycles that form the generalized eigenspaces (a.k.a. the size of the Jordan blocks). These facts can be summarized as follows.</p>
<ul>
<li>If $f(t)$ has a factor $(t - \lambda)^k$, this means that the eigenvalue $\lambda$ has $k$ linearly independent generalized eigenvectors.</li>
<li>If $m(t)$ has a factor $(t - \lambda)^p$, this means that the largest $A$-cycle of generalized eigenvectors contains $p$ elements; that is, the largest Jordan block for $\lambda$ is $p \times p$. Notice that this means that $A$ is only diagonalizable if $m(t)$ has only simple roots.</li>
<li>Thus $f(t) = m(t)$ if and only if each eigenvalue $\lambda$ corresponds to a single Jordan block, a.k.a each eigenvalue corresponds to a single minimal invariant subspace of generalized eigenvectors.</li>
<li>$f(t)$ and $m(t)$ differ if any eigenvalue has more than one Jordan block, a.k.a. if an eigenvalue has more than one generalized eigenspace.</li>
</ul>
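The contrast both answers draw can be seen concretely on $2\times 2$ examples. The following sketch (plain Python; the helper names are mine, not from either answer) evaluates a polynomial at a matrix by Horner's scheme and checks which polynomials annihilate a Jordan block versus a diagonal matrix:

```python
def matmul(A, B):
    # naive integer matrix product
    return [[sum(A[i][k] * B[k][j] for k in range(len(B)))
             for j in range(len(B[0]))] for i in range(len(A))]

def poly_at(coeffs, A):
    """Evaluate a polynomial (coefficients listed highest degree first) at a matrix."""
    n = len(A)
    I = [[int(i == j) for j in range(n)] for i in range(n)]
    R = [[0] * n for _ in range(n)]
    for c in coeffs:
        R = matmul(R, A)
        R = [[R[i][j] + c * I[i][j] for j in range(n)] for i in range(n)]
    return R

Z = [[0, 0], [0, 0]]
J = [[2, 1], [0, 2]]        # a single 2x2 Jordan block for eigenvalue 2
D = [[2, 0], [0, 2]]        # two 1x1 blocks: diagonalizable

chi = [1, -4, 4]            # chi(t) = (t-2)^2 for both matrices
assert poly_at(chi, J) == Z and poly_at(chi, D) == Z   # Cayley-Hamilton
assert poly_at([1, -2], J) != Z   # t-2 does not kill J, so m(t) = (t-2)^2
assert poly_at([1, -2], D) == Z   # t-2 kills D, so m(t) = t-2
```

So $J$ and $D$ share a characteristic polynomial but have different minimal polynomials: exactly the single-Jordan-block distinction described above.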
|
number-theory | <p>Consider the following iterative process. We start with a 2-element set $S_0=\{0,1\}$. At $n^{\text{th}}$ step $(n\ge1)$ we take all non-empty subsets of $S_{n-1}$, then for each subset compute the arithmetic mean of its elements, and collect the results to a new set $S_n$. Let $a_n$ be the size of $S_n$. Note that, because some subsets of $S_{n-1}$ may have identical mean values, $a_n$ may be less than the number of non-empty subsets of $S_{n-1}$ (that is, $2^{a_{n-1}}-1$).</p>
<p>For example, at the $1^{\text{st}}$ step we get the subsets $\{\{0\},\,\{1\},\,\{0,1\}\}$, and their means are $\{0,\,1,\,1/2\}.$ So $S_1=\{0,\,1,\,1/2\}$ and $a_1=|S_1|=3.$</p>
<p>At the $2^{\text{nd}}$ step we get the subsets $\{\{0\},\,\{1\},\,\{1/2\},\,\{0,\,1\},\,\{0,\,1/2\},\,\{1,\,1/2\},\,\{0,\,1,\,1/2\}\},$ and their means are $\{0,\,1,\,1/2,\,1/2,\,1/4,\,3/4,\,1/2\}.$ So, after removing duplicate values, we get $S_2=\{0,\,1,\,1/2,\,1/4,\,3/4\}$ and $a_2=|S_2|=5.$ And so on.</p>
<p>The sequence $\{a_n\}_{n=0}^\infty$ begins: $2,\,3,\,5,\,15,\,875,\,...$ </p>
<p>I submitted it as <a href="http://oeis.org/A273525" rel="nofollow noreferrer">A273525</a> in OEIS.</p>
<p>A brute-force algorithmic approach makes it easy to find its elements up to $a_4=875$, but becomes computationally infeasible after that. My question is:</p>
<blockquote>
<p>What is the value of $a_5$?</p>
</blockquote>
<p>It's easy to see that $5\times10^5<a_5<2^{875}<10^{264}$ (the lower bound $5\times10^5$ is found by direct enumeration of some subsets of $S_4$ on computer). Greg Martin in <a href="https://math.stackexchange.com/a/1797665/19661">his answer</a> below proves stricter bounds $2\times10^6<a_5<7\times10^{12}$. Can we find the exact value of $a_5$?</p>
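For reference, the brute-force computation described above can be sketched in a few lines of Python. $a_4 = 875$ is about the practical limit, since computing $a_5$ this way would require enumerating $2^{875}-1$ subsets of $S_4$:

```python
from fractions import Fraction
from itertools import combinations

def next_set(s):
    """Arithmetic means of all non-empty subsets of s, with exact arithmetic."""
    elems = sorted(s)
    return {sum(c, Fraction(0)) / len(c)
            for k in range(1, len(elems) + 1)
            for c in combinations(elems, k)}

S = {Fraction(0), Fraction(1)}
a = [len(S)]
for _ in range(4):
    S = next_set(S)
    a.append(len(S))
print(a)   # [2, 3, 5, 15, 875]
```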
| <p>Out of curiosity, I computed this directly. Apparently $a_5 = |S_5| = 603919253973 \approx 6\cdot 10^{11}$.</p>
<p>I wrote a program to calculate all subset sums $s$ for each subset size $k$. (There are just under $10^{12}$ such $s$'s.) Then the values $\frac s k$ are the members of $S_5$. For checking the algorithm, I also obtained $|S_{5,125}| = 33947876$ and $|S_{5,200}| = 1088970851$, i.e. restricted to the smallest 125 and 200 elements of $S_4$.</p>
<p>This value for $|S_5|$ is fairly close to Greg Martin's upper bound, which isn't surprising to me; there are $2^{875}-1$ subsets and fewer than $7\cdot 10^{12}$ places for their sums to go. Most of the gap between $|S_5|$ and that upper bound is due to the fact that, for whatever reason, only $10^{12}$ of the possible sums actually occur. Part of the remaining gap can be explained by divisibility: the naïve model where $\frac{\phi(d)}{k}$ of the $\frac s k$'s have reduced denominator $d$ (which holds if the $s$'s are evenly distributed mod $k$) appears to fit quite accurately for $S_5$. I suspect that $|S_6|$ is also not far from its corresponding upper bound, the trivial bound being $$\operatorname{lcm}(1,\dots,875)\cdot \sum_{k=1}^{|S_5|} k \approx 10^{405}.$$</p>
<p>I also used a birthday method to guess $|S_5|$: take averages of random subsets of $S_4$ until there is a duplicate. Since the take-random-subset operation does not produce uniformly random averages, this provides an underestimate rather than an unbiased estimator. For $S_5$ in particular, it guesses a lower bound of $10^{10}$ (improvable to $10^{11}$ using some ad-hoc methods to reduce the non-uniformity). Unfortunately it takes an expected $\Theta(\sqrt{N})$ number of samples for a set of size $N$, which makes it quite hopeless for $S_6$.</p>
| <p>Not a complete answer, but some ideas:</p>
<p>The number of distinct means of $k$-element subsets of an $n$-element set is at least $k(n-k)+1$. For example, when $k=3$ and $n=9$, the following subsets all have different means: $\{x_1,x_2,x_3\}$, $\{x_1,x_2,x_4\}$, ..., $\{x_1,x_2,x_9\}$, $\{x_1,x_3,x_9\}$, ..., $\{x_1,x_8,x_9\}$, $\{x_2,x_8,x_9\}$, ..., $\{x_7,x_8,x_9\}$. Applying this with $n=875$ and $k=438$ already gives 191,407 distinct means.</p>
<p>We can build on this though. Of the 875 means counted by $a_4$, 52 of them have a factor of 13 in their denominator, while the other 823 do not. Taking $k=412$ and $n=823$, we get 169,333 subsets with distinct means. But furthermore, of those 52, there are numerators corresponding to each nonzero residue class modulo 13. Therefore we can take each of the 169,333 subsets and get 13 variants of it with different means (the subset itself, together with the subset with a single element appended, that element having a denominator divisible by 13 and a numerator from each of the nonzero residue classes modulo 13). That gives 2,201,329 means that a little thought verifies are distinct.</p>
<p>One could experiment with denominator factors other than 13 (perhaps composite ones) to squeeze more out of this argument.</p>
<p>Finally, note that the mean of a $p$-element subset and the mean of a $q$-element subset, if $p$ and $q$ are relatively prime, are quite likely to be distinct from each other. (Both primes would have to be cancelled from their denominators by the sums of the elements in the subsets.) So one should be able to combine various collections of means in this way and improve the lower bound. (Of course, taking $p$ and $q$ near $875/2$ seems the best place to explore.)</p>
<p><em>(added later)</em> As for the upper bound, let's bound the number of $k$-element means separately and mostly forget about whether they could coincide. Obviously there are 875 $1$-element means. For $2\le k\le 875$, there are obviously at most $\binom{875}k$ $k$-element means. However, we can get a different upper bound as follows: The largest of the 875 elements is $1$ of course, and the least common denominator of the 875 elements is 17,297,280. Therefore every single $k$-element mean is a rational number between $0$ and $1$ whose denominator divides $17\text{,}297\text{,}280k$, and there are at most $17\text{,}297\text{,}280k-1$ of them (not counting $0$ and $1$ themselves, which are already counted by the $1$-element means). Therefore an upper bound for $a_5$ is
$$
875 + \sum_{k=2}^{875} \min\bigg\{ \binom{875}k, 17\text{,}297\text{,}280k-1 \bigg\} = 6\text{,}568\text{,}806\text{,}008\text{,}597.
$$
So at least we know that $a_5$ is between $2\times10^6$ and $7\times10^{12}$.</p>
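The final sum in that upper bound is easy to reproduce by machine (a sketch, not part of the original answer):

```python
from math import comb

LCM = 17_297_280   # least common denominator of the 875 elements of S_4, as stated above

# 875 one-element means, plus for each k >= 2 the smaller of the two bounds:
# the number of k-subsets, or the number of admissible fractions with denominator LCM*k
upper = 875 + sum(min(comb(875, k), LCM * k - 1) for k in range(2, 876))
print(upper)   # 6568806008597
```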
|
probability | <p>Recently I was asked the following in an interview:</p>
<blockquote>
<p>If you are a pretty good basketball player, and were betting on whether you could make $2$ out of $4$ or $3$ out of $6$ baskets, which would you take?</p>
</blockquote>
<p>I said either one, since the ratio is the same. Any insights?</p>
| <h1>Depends on how good you are</h1>
<p><img src="https://i.sstatic.net/Naze3.png" alt="enter image description here"></p>
<p>The explanation is intuitive:</p>
<ul>
<li><p>If you are not very good (the probability p that you make a single shot is below 0.6), then your overall probability is not very high either way, but it is better to bet that you'll make 2 out of 4: you may succeed just by chance, and your clumsiness has less opportunity to show itself in 4 attempts than in 6.</p></li>
<li><p>If you are really good (p > 0.6), then it is better to bet on 3 out of 6: if you miss a shot just by chance, you have a better chance to make up for it over 6 attempts.</p></li>
</ul>
<p>The curves meet exactly at p = 0.6.</p>
<h2>In general, the more attempts, the more your real skill reveals itself</h2>
<p>This is best illustrated on the extreme case:</p>
<p><img src="https://i.sstatic.net/W6WZV.png" alt="enter image description here"></p>
<p>With more attempts, it becomes almost a binary case - you either succeed or you don't, depending on your skill. With high N, the result will be close to your original expectation.</p>
<p>Note that with high N and p = 0.5, the binomial distribution gets narrower (relative to N) and converges to the normal distribution.</p>
<h2>Everything here just revolves around <a href="http://en.wikipedia.org/wiki/Binomial_distribution" rel="noreferrer">binomial distribution</a>,</h2>
<p>which tells you that the probability that you will score <em>exactly</em> <code>k</code> shots out of <code>n</code> is</p>
<p>$$P(X = k) = \binom{n}{k} p^k (1-p)^{n-k}$$</p>
<p>The probability that you will score at least k = n/2 shots (and win the bet) is then </p>
<p>$$P(X \ge k) = \sum^{n}_{i=k} \binom{n}{i} p^i (1-p)^{n-i}$$</p>
<h1>Why the curves don't meet at p = 0.5?</h1>
<p>Look at the following plots:</p>
<p><img src="https://i.sstatic.net/vsdVi.png" alt="enter image description here"></p>
<p>These plots are for p = 0.5. The binomial distribution is symmetric for this value. Intuitively, you <em>expect</em> 2 of 4 or 3 of 6 to take half of the distribution. But if you look especially at the left plot, it is clear that the middle column (2 successful shots) goes far beyond the half of the distribution (dashed line), which is denoted by the red arrow. In the right plot (3/6), this proportion is much smaller.</p>
<p>If you sum the gold bars, you will get:</p>
<pre><code>P(make at least 2 out of 4) = 0.6875
P(make at least 3 out of 6) = 0.65625
P(make at least 500 out of 1000) = 0.5126125
</code></pre>
<p>From these figures, as well as from the plots, it is apparent that with high N, the proportion of the distribution "beyond the half" converges to zero, and the total probability converges to 0.5.</p>
<p>So, for the curves to meet for low Ns, <code>p</code> must be higher to compensate for this:</p>
<p><img src="https://i.sstatic.net/gDZBh.png" alt="enter image description here"></p>
<pre><code>P(make at least 2 out of 4) = 0.8208
P(make at least 3 out of 6) = 0.8208
</code></pre>
<p>Full code in R:</p>
<pre><code>f6 <- function(p) {
  dbinom(3, 6, p) +
    dbinom(4, 6, p) +
    dbinom(5, 6, p) +
    dbinom(6, 6, p)
}

f4 <- function(p) {
  dbinom(2, 4, p) +
    dbinom(3, 4, p) +
    dbinom(4, 4, p)
}

fN <- function(p, from, max) {
  #sum(sapply(from:max, function (x) dbinom(x, max, p)))
  s <- 0
  for (i in from:max) {
    s <- s + dbinom(i, max, p)
  }
  s
}

f1000 <- function (p) fN(p, 500, 1000)

plot(f6, xlim = c(0,1), col = "red", lwd = 2, ylab = "", main = "Probability that you will make ...", xlab = "p (probability you make a single shot)")
curve(f4, col = "green", add = TRUE, lwd = 2)
curve(f1000, add = TRUE, lwd = 2, col = "blue")
legend("topleft", c("2 out of 4", "3 out of 6", "500 out of 1000"), lwd = 2, col = c("green", "red", "blue"), bty = "n")

plotHist <- function (n, p) {
  plot(x = c(-0.5, n+0.5), y = c(0, 0.41), type = "n", xaxt = "n", xlab = "successful shots", ylab = "probability",
       main = paste0(n/2, "/", n, ", p = ", p))
  axis(1, at = 0:n, labels = 0:n)
  x <- 0:n
  y <- dbinom(0:n, n, p)
  w <- 0.9
  #lines(0:4, dbinom(0:4, 4, 0.5), lwd = 50, type = "h", lend = "butt")
  rect(x-0.5*w, 0, x+0.5*w, y, col = "lightgrey")
  uind <- (n/2+1):(n+1)
  rect(x[uind]-0.5*w, 0, x[uind]+0.5*w, y[uind], col = "gold")
}

par(mfrow = c(1, 2))
plotHist(4, 0.5)
abline(v = 2, lty = 2)
arrows(2-0.5*0.9, 0.17, 2, 0.17, col = "red", code = 3, length = 0.1, lwd = 2)
plotHist(6, 0.5)

f4(0.5)
f6(0.5)
f1000(0.5)

par(mfrow = c(1, 2))
plotHist(4, 0.6)
plotHist(6, 0.6)

f4(0.6)
f6(0.6)
</code></pre>
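For readers without R, the key figures can be reproduced with a short Python sketch (the helper name is mine, not from the answer):

```python
from math import comb

def p_at_least(k, n, p):
    """P(X >= k) for X ~ Binomial(n, p): the chance of winning a 'k out of n' bet."""
    return sum(comb(n, i) * p**i * (1 - p)**(n - i) for i in range(k, n + 1))

# at p = 0.5 the shorter bet is better: 0.6875 vs 0.65625
print(p_at_least(2, 4, 0.5), p_at_least(3, 6, 0.5))
# the curves cross exactly at p = 0.6, where both bets win with probability 0.8208
print(p_at_least(2, 4, 0.6), p_at_least(3, 6, 0.6))
```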
| <p>The probability of making at least half your baskets increases with the number of shots. E.g. with a probability of 2/3 per shot, the probability of making at least half the baskets increases as shown below.</p>
<p><em>Edit</em> it is important to point out that this only holds if by a "pretty good basketball player" you mean your chance of making a basket is somewhat better than evens (in the range 0.6 to 1 exclusive). This is shown very clearly in <a href="https://math.stackexchange.com/questions/678515/probability-2-4-vs-3-6/678537#678537">Hagen von Eitzen's answer</a>.</p>
<p><img src="https://i.sstatic.net/hcPVU.jpg" alt="enter image description here"></p>
<p>An intuitive way of looking at this is that it's like a <strong>diversification effect</strong>. With only a few baskets, you could get unlucky, just as you might if you tried to pick only a couple of stocks for an investment portfolio, even if you were a good stock picker. You increase the number of baskets -- or stocks -- and the role of chance is reduced and your skill shines through.</p>
<p>Formally, assuming that</p>
<ul>
<li><p>each throw is independent, and</p></li>
<li><p>you have the same probability $p$ of scoring on each throw</p></li>
</ul>
<p>you can model the chance of scoring $b$ baskets out of $n$ using the <a href="http://en.wikipedia.org/wiki/Binomial_distribution" rel="nofollow noreferrer">binomial distribution</a></p>
<p>$$ \mathbb{P}(b \text{ from } n) = \binom{n}{b} p^{b}(1-p)^{n-b} $$</p>
<p>To get the probability of scoring at least half of the $n$ baskets, you have to add up these probabilities. E.g. for at least 2 out of 4 you want $\mathbb{P}(2 \text{ from } 4) + \mathbb{P}(3 \text{ from } 4) + \mathbb{P}(4 \text{ from } 4)$.</p>
|
matrices | <p>Today, at my linear algebra exam, there was this question that I couldn't solve.</p>
<hr />
<blockquote>
<p>Prove that
<span class="math-container">$$\det \begin{bmatrix}
n^{2} & (n+1)^{2} &(n+2)^{2} \\
(n+1)^{2} &(n+2)^{2} & (n+3)^{2}\\
(n+2)^{2} & (n+3)^{2} & (n+4)^{2}
\end{bmatrix} = -8$$</span></p>
</blockquote>
<hr />
<p>Clearly, calculating the determinant, with the matrix as it is, wasn't the right way. The calculations went on and on. But I couldn't think of any other way to solve it.</p>
<p>Is there any way to simplify <span class="math-container">$A$</span>, so as to calculate the determinant?</p>
| <p>Here is a proof that is decidedly not from the book. The determinant is clearly a polynomial in <span class="math-container">$n$</span> of degree at most <span class="math-container">$6$</span>, since each entry is quadratic in <span class="math-container">$n$</span>. Therefore, to prove it is constant, you need only plug in <span class="math-container">$7$</span> values. In fact, <span class="math-container">$n=-4,-3,\dots,0$</span> are easy to calculate, since each of these puts a zero entry in the matrix, so you only have to drudge through <span class="math-container">$n=1$</span> and <span class="math-container">$n=2$</span> to do it this way!</p>
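That interpolation argument is easy to carry out by machine (a sketch; the helper name is mine): a polynomial of degree at most 6 agreeing with $-8$ at the seven points $n=-4,\dots,2$ is identically $-8$.

```python
def det3(m):
    # cofactor expansion of a 3x3 determinant along the first row
    return (m[0][0] * (m[1][1] * m[2][2] - m[1][2] * m[2][1])
          - m[0][1] * (m[1][0] * m[2][2] - m[1][2] * m[2][0])
          + m[0][2] * (m[1][0] * m[2][1] - m[1][1] * m[2][0]))

# seven values pin down the degree-<=6 polynomial, so checking them suffices
for n in range(-4, 3):
    M = [[(n + i + j)**2 for j in range(3)] for i in range(3)]
    assert det3(M) == -8
print("det = -8 at n = -4..2, hence for all n")
```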
| <p>Recall that $a^2-b^2=(a+b)(a-b)$. Subtracting $\operatorname{Row}_1$ from $\operatorname{Row}_2$ and from $\operatorname{Row}_3$ gives
$$
\begin{bmatrix}
n^2 & (n+1)^2 & (n+2)^2 \\
2n+1 & 2n+3 & 2n+5 \\
4n+4 & 4n+8 & 4n+12
\end{bmatrix}
$$
Then subtracting $2\cdot\operatorname{Row}_2$ from $\operatorname{Row}_3$ gives
$$
\begin{bmatrix}
n^2 & (n+1)^2 & (n+2)^2 \\
2n+1 & 2n+3 & 2n+5 \\
2 & 2 & 2
\end{bmatrix}
$$
Now, subtracting $\operatorname{Col}_1$ from $\operatorname{Col}_2$ and $\operatorname{Col}_3$ gives
$$
\begin{bmatrix}
n^2 & 2n+1 & 4n+4 \\
2n+1 & 2 & 4 \\
2 & 0 & 0
\end{bmatrix}
$$
Finally, subtracting $2\cdot\operatorname{Col}_2$ from $\operatorname{Col}_3$ gives
$$
\begin{bmatrix}
n^2 & 2n+1 & 2 \\
2n+1 & 2 & 0 \\
2 & 0 & 0
\end{bmatrix}
$$
Expanding the determinant about $\operatorname{Row}_3$ gives
$$
\det A
=
2\cdot\det
\begin{bmatrix}
2n+1 & 2\\
2 & 0
\end{bmatrix}
=2\cdot(-4)=-8
$$
as advertised.</p>
|
number-theory | <p>I have a friend who turned <span class="math-container">$32$</span> recently. She has an obsessive compulsive disdain for odd numbers, so I pointed out that being <span class="math-container">$32$</span> was pretty good since not only is it even, it also has no odd factors. That made me realize that <span class="math-container">$64$</span> would be an even better age for her, because it's even, has no odd factors, and has no odd <em>digits</em>. I then wondered how many other powers of <span class="math-container">$2$</span> have this property. The only higher power of <span class="math-container">$2$</span> with all even digits that I could find was <span class="math-container">$2048.$</span> </p>
<p>So is there a larger power of <span class="math-container">$2$</span> with all even digits? If not, how would you go about proving it?</p>
<p>I tried examining the last <span class="math-container">$N$</span> digits of powers of <span class="math-container">$2$</span>, looking for a cycle in which there was always at least one odd digit among the last <span class="math-container">$N$</span> digits of the consecutive powers. Unfortunately, there was always a small percentage of powers of <span class="math-container">$2$</span> whose last <span class="math-container">$N$</span> digits were all even.</p>
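A direct search is quick to write and confirms that $2048 = 2^{11}$ is the last example below $2^{5000}$ (a sketch, not part of the original question):

```python
def all_even_digits(m):
    # True if every decimal digit of m is even
    return all(int(d) % 2 == 0 for d in str(m))

hits = [n for n in range(1, 5000) if all_even_digits(2**n)]
print(hits)                  # [1, 2, 3, 6, 11]
print([2**n for n in hits])  # [2, 4, 8, 64, 2048]
```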
<p><strong>Edit:</strong> Here's a little more info on some things I found while investigating the <span class="math-container">$N$</span> digit cycles.</p>
<p><span class="math-container">$N$</span>: <span class="math-container">$2,3,4,5,6,7,8,9$</span></p>
<p>Cycle length: <span class="math-container">$20,100,500,2500,12500,62500,312500,1562500,\dotsc, 4\cdot 5^{N-1}$</span></p>
<p>Number of suffixes with all even digits in cycle: <span class="math-container">$10, 25, 60, 150, 370, 925, 2310,5780,\sim4\cdot2.5^{N-1}$</span> </p>
<p>It seems there are some interesting regularities there. Unfortunately, one of the regularities is those occurrences of all even numbers! In fact, I was able to find a power of <span class="math-container">$2$</span> in which the last <span class="math-container">$33$</span> digits were even <span class="math-container">$(2^{3789535319} = \dots 468088628828226888000862880268288)$</span>. </p>
<p>Yes it's true that it took a power of <span class="math-container">$2$</span> with over a billion digits to even get the last <span class="math-container">$33$</span> to be even, so it would seem any further powers of <span class="math-container">$2$</span> with all even digits are extremely unlikely. But I'm still curious as to how you might prove it.</p>
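The cycle lengths and the all-even suffix counts above are easy to recompute for small $N$ (a sketch; the function names are mine):

```python
def cycle_length(N):
    # period of the last N digits of 2**n for n >= N; equals 4 * 5**(N-1)
    m = 10**N
    start = pow(2, N, m)
    p, length = start * 2 % m, 1
    while p != start:
        p = p * 2 % m
        length += 1
    return length

def even_suffixes(N):
    # how many exponents per cycle give an all-even N-digit suffix
    m = 10**N
    p, count = pow(2, N, m), 0
    for _ in range(cycle_length(N)):
        if all(int(d) % 2 == 0 for d in str(p).zfill(N)):
            count += 1
        p = p * 2 % m
    return count

print([cycle_length(N) for N in range(2, 6)])    # [20, 100, 500, 2500]
print([even_suffixes(N) for N in range(2, 6)])   # [10, 25, 60, 150]
```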
<p><strong>Edit 2:</strong> Here's another interesting property I noticed. The next digit to the left of the last <span class="math-container">$N$</span> digits will take on every value of its parity as the <span class="math-container">$N$</span> digits cycle each time. Let me illustrate.</p>
<p>The last <span class="math-container">$2$</span> digits cycle every <span class="math-container">$20$</span> powers. Now examine the following:</p>
<p><span class="math-container">$2^7 = 128$</span><br>
<span class="math-container">$2^{27} = \dots 728$</span><br>
<span class="math-container">$2^{47} = \dots 328$</span><br>
<span class="math-container">$2^{67} = \dots 928$</span><br>
<span class="math-container">$2^{87} = \dots 528$</span><br>
<span class="math-container">$2^{107} = \dots 128$</span> </p>
<p>Notice that the hundreds place starts out odd and then proceeds to take on every odd digit as the final 2 digits cycle.</p>
<p>As another example, let's look at the fourth digit (knowing that the last 3 digits cycle every 100 powers.)</p>
<p><span class="math-container">$2^{18} = 262144$</span>,
<span class="math-container">$2^{118} = \dots 6144$</span>,
<span class="math-container">$2^{218} = \dots 0144$</span>,
<span class="math-container">$2^{318} = \dots 4144$</span>,
<span class="math-container">$2^{418} = \dots 8144$</span>,
<span class="math-container">$2^{518} = \dots 2144$</span> </p>
<p>This explains the power of 5 in the cycle length as each digit must take on all five digits of its parity.</p>
<p><strong>EDIT 3:</strong> It looks like the <span class="math-container">$(N+1)$</span><sup>st</sup> digit takes on all the values <span class="math-container">$0-9$</span> as the last <span class="math-container">$N$</span> digits complete half a cycle. For instance, the last <span class="math-container">$2$</span> digits cycle every <span class="math-container">$20$</span> powers, so look at the third digit every <span class="math-container">$10$</span> powers:</p>
<p><span class="math-container">$2^{8} = 256$</span>,
<span class="math-container">$2^{18} = \dots 144$</span>,
<span class="math-container">$2^{28} = \dots 456$</span>,
<span class="math-container">$2^{38} = \dots 944$</span>,
<span class="math-container">$2^{48} = \dots 656$</span>,
<span class="math-container">$2^{58} = \dots 744$</span>,
<span class="math-container">$2^{68} = \dots 856$</span>,
<span class="math-container">$2^{78} = \dots 544$</span>,
<span class="math-container">$2^{88} = \dots 056$</span>,
<span class="math-container">$2^{98} = \dots 344$</span> </p>
<p>Not only does the third digit take on every value 0-9, but it also alternates between odd and even every time (as the Edit 2 note would require.) Also, the N digits cycle between two values, and each of the N digits besides the last one alternates between odd and even. I'll make this more clear with one more example which looks at the fifth digit:</p>
<p><span class="math-container">$2^{20} = \dots 48576$</span>,
<span class="math-container">$2^{270} = \dots 11424$</span>,
<span class="math-container">$2^{520} = \dots 28576$</span>,
<span class="math-container">$2^{770} = \dots 31424$</span>,
<span class="math-container">$2^{1020} = \dots 08576$</span>,
<span class="math-container">$2^{1270} = \dots 51424$</span>,
<span class="math-container">$2^{1520} = \dots 88576$</span>,
<span class="math-container">$2^{1770} = \dots 71424$</span>,
<span class="math-container">$2^{2020} = \dots 68576$</span>,
<span class="math-container">$2^{2270} = \dots 91424$</span></p>
<p><strong>EDIT 4:</strong> Here's my next non-rigorous observation. It appears that as the final N digits cycle 5 times, the <span class="math-container">$(N+2)$</span><sup>th</sup> digit is either odd twice and even three times, or it's odd three times and even twice. This gives a method for extending an all-even suffix.</p>
<p>If you have an all-even N-digit suffix of <span class="math-container">$2^a$</span>, and the (N+1)<sup>th</sup> digit is odd, then one of the following will have the (N+1)<sup>th</sup> digit even:</p>
<p><span class="math-container">$2^{(a+1\cdot4\cdot5^{N-2})}$</span>,
<span class="math-container">$2^{(a+2\cdot4\cdot5^{N-2})}$</span>,
<span class="math-container">$2^{(a+3\cdot4\cdot5^{N-2})}$</span></p>
<p><strong>Edit 5:</strong> It's looking like there's no way to prove this conjecture solely by examining the last N digits, since we can always find an arbitrarily long all-even N-digit suffix. However, all of the digits are distributed so uniformly through each power of 2 that I would wager that not only does every power of 2 over 2048 have an odd digit, but also, every power of 2 larger than <span class="math-container">$2^{168}$</span> has <em>every digit</em> represented in it somewhere.</p>
<p>But for now, let's just focus on the parity of each digit. Consider the value of the <span class="math-container">$k^{th}$</span> digit of <span class="math-container">$2^n$</span> (with <span class="math-container">$a_0$</span> representing the 1's place.) </p>
<p><span class="math-container">$$
a_k = \left\lfloor\frac{2^n}{10^k}\right\rfloor \text{ mod 10}\Rightarrow a_k = \left\lfloor\frac{2^{n-k}}{5^k}\right\rfloor \text{ mod 10}
$$</span></p>
<p>We can write
<span class="math-container">$$2^{n-k} = d\cdot5^k + r$$</span>
where <span class="math-container">$d$</span> is the divisor and <span class="math-container">$r$</span> is the remainder of <span class="math-container">$2^{n-k}/5^k$</span>. So
<span class="math-container">$$
a_k \equiv \frac{2^{n-k}-r}{5^k} \equiv d \pmod{10}
$$</span>
<span class="math-container">$$\Rightarrow a_k \equiv d \pmod{2}$$</span>
And
<span class="math-container">$$d\cdot5^k = 2^{n-k} - r \Rightarrow d \equiv r \pmod{2}$$</span>
Remember that <span class="math-container">$r$</span> is the remainder of <span class="math-container">$2^{n-k} \text{ div } {5^k}$</span> so </p>
<p><span class="math-container">$$\text{The parity of $a_k$ is the same as the parity of $2^{n-k}$ mod $5^k$.}$$</span></p>
<p>Now we just want to show that for any <span class="math-container">$2^n > 2048$</span> we can always find a <span class="math-container">$k$</span> such that <span class="math-container">$2^{n-k} \text{ mod }5^k$</span> is odd.</p>
<p>I'm not sure if this actually helps or if I've just sort of paraphrased the problem.</p>
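The reformulation does at least check out numerically; here is a quick empirical verification of the parity statement (a sketch covering $0 \le k < n$):

```python
# the parity of the k-th decimal digit of 2**n (k = 0 being the units digit)
# should equal the parity of 2**(n-k) mod 5**k, for every 0 <= k < n
for n in range(1, 300):
    p2 = 2**n
    for k in range(n):
        digit = (p2 // 10**k) % 10
        # pow(2, n, 1) == 0, so k = 0 (units digit always even) also checks out
        assert digit % 2 == pow(2, n - k, 5**k) % 2
print("parity statement verified for all n < 300")
```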
<p><strong>EDIT 6:</strong> Thinking about <span class="math-container">$2^{n-k}$</span> mod <span class="math-container">$5^k$</span>, I realized there's a way to predict some odd digits. </p>
<p><span class="math-container">$$2^a \pmod{5^k} \text{ is even for } 1\le a< \log_2 5^k$$</span></p>
<p>The period of <span class="math-container">$2^a \pmod{5^k}$</span> is <span class="math-container">$4\cdot5^{k-1}$</span> since 2 is a primitive root mod <span class="math-container">$5^k$</span>. Also </p>
<p><span class="math-container">$$2^{2\cdot5^{k-1}} \equiv -1 \pmod{5^k}$$</span></p>
<p>So multiplying any <span class="math-container">$2^a$</span> by <span class="math-container">$2^{2\cdot5^{k-1}}$</span> flips its parity mod <span class="math-container">$5^k$</span>. Therefore <span class="math-container">$2^a \pmod{5^k}\text{ }$</span> is odd for</p>
<p><span class="math-container">$$1 + 2\cdot5^{k-1} \le a< 2\cdot5^{k-1} + \log_2 5^k$$</span></p>
<p>Or taking the period into account, <span class="math-container">$2^a \pmod{5^k} \text{ }$</span> is odd for any integer <span class="math-container">$b\ge0$</span> such that</p>
<p><span class="math-container">$$1 + 2\cdot5^{k-1} (1 + 2b) \le a< 2\cdot5^{k-1} (1 + 2b) + \log_2 5^k$$</span></p>
<p>Now for the <span class="math-container">$k^{th}$</span> digit of <span class="math-container">$2^n$</span> (<span class="math-container">$ k=0 \text{ } $</span> being the 1's digit), we're interested in the parity of <span class="math-container">$2^{n-k}$</span> mod <span class="math-container">$5^k$</span>. Setting <span class="math-container">$ a =n-k \text{ } $</span> we see that the <span class="math-container">$k^{th}$</span> digit of <span class="math-container">$2^n$</span> is odd for integer <span class="math-container">$b\ge0$</span> such that</p>
<p><span class="math-container">$$1 + 2\cdot5^{k-1} (1 + 2b) \le n - k < 2\cdot5^{k-1} (1 + 2b) + \log_2 5^k$$</span></p>
<p>To illustrate, here are some guaranteed odd digits for different <span class="math-container">$2^n$</span>: </p>
<p>(k=1 digit): <span class="math-container">$ 2\cdot5^0 + 2 = 4 \le n \le 5 $</span><br>
(k=2 digit): <span class="math-container">$ 2\cdot5^1 + 3 = 13 \le n \le 16 $</span><br>
(k=3 digit): <span class="math-container">$ 2\cdot5^2 + 4 = 54 \le n \le 59 $</span><br>
(k=4 digit): <span class="math-container">$ 2\cdot5^3 + 5 = 255 \le n \le 263 $</span> </p>
<p>Also note that these would repeat every <span class="math-container">$4\cdot5^{k-1}$</span> powers.</p>
<p>These guaranteed odd digits are not dense enough to cover all of the powers, but might this approach be extended somehow to find more odd digits?</p>
<p><strong>Edit 7:</strong> The two papers that Zander mentions below make me think that this is probably a pretty hard problem.</p>
| <p>This seems to be similar to (I'd venture to say as hard as) a problem of Erdős open since 1979, that the base-3 representation of $2^n$ contains a 2 for all $n>8$.</p>
<p><a href="http://arxiv.org/abs/math/0512006">Here is a paper by Lagarias</a> that addresses the ternary problem, and for the most part I think would generalize to the question at hand (we're also looking for the intersection of iterates of $x\rightarrow 2x$ with a Cantor set). Unfortunately it does not resolve the problem.</p>
<p>But Conjecture 2' (from Furstenberg 1970) in the linked paper suggests a stronger result, that every $2^n$ for $n$ large enough will have a 1 in the decimal representation. Though it doesn't quantify "large enough" (so even if proved wouldn't promise that 2048 is the largest all-even decimal), it looks like it might be true for all $n>91$ (I checked up to $n=10^6$).</p>
| <p>This sequence is known to the <a href="http://oeis.org/A068994">OEIS</a>.</p>
<p>Here are the notes, which give no explicit answer but suppose that your conjecture is correct:</p>
<blockquote>
<p>Are there any more terms in this sequence?</p>
<p>Evidence that the sequence may be finite, from Rick L. Shepherd
(rshepherd2(AT)hotmail.com), Jun 23 2002:</p>
<p>1) The sequence of last two digits of $2^n$, A000855 of period $20$, makes clear that $2^n > 4$ must have $n = 3, 6, 10, 11,$ or $19 (\text{mod }20)$ for $2^n$ to be a member of this sequence. Otherwise, either the tens digit (in $10$ cases), as seen directly, or the hundreds digit, in the $5$ cases receiving a carry from the previous power's tens digit $\geq 5$, must be odd.</p>
<p>2) No additional term has been found for n up to $50000$.</p>
<p>3) Furthermore, again for each n up to $50000$, examining $2^n$'s digits
leftward from the rightmost but only until an odd digit was found, it
was only once necessary to search even to the 18th digit. This
occurred for $2^{12106}$ whose last digits are
$\ldots 3833483966860466862424064$. Note that $2^{12106}$ has $3645$ digits. (The
clear runner-up, $2^{34966}$, a $10526$-digit number, required searching
only to the $15$th digit. Exponents for which only the $14$th digit was
reached were only $590, 3490, 8426, 16223, 27771, 48966$ and $49519$ -
representing each congruence above.)</p>
</blockquote>
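<p>For what it's worth, the known terms are easy to reproduce with a brute-force search (my own snippet, not from the OEIS entry):</p>

```python
# Brute-force search for powers of 2 whose decimal digits are all even.

def all_even_digits(m):
    return all(int(d) % 2 == 0 for d in str(m))

hits = [n for n in range(1, 2001) if all_even_digits(2**n)]
print(hits)  # [1, 2, 3, 6, 11], i.e. 2, 4, 8, 64, 2048
```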
|
logic | <p>I am just a high school student, and I haven't seen much in mathematics (calculus and abstract algebra).</p>
<p>Mathematics is a system of axioms which you choose yourself for a set of undefined entities, such that those entities satisfy certain basic rules you laid down in the first place on your own.</p>
<p>Now, using these laid-down rules and a set of other rules for a subject called logic (which was established similarly), you define certain quantities, name them using the undefined entities, and then go on to prove certain statements called theorems.</p>
<p>Now what exactly is a proof? Suppose in an exam I am asked to prove Pythagoras' theorem. Then I prove it using only one particular system of axioms and logic. It isn't proved in all the axiom systems in which it could possibly hold true, and what stops me from making another set of axioms that has Pythagoras' theorem as an axiom, and then just stating in my system/exam, "this is an axiom, hence can't be proven"?</p>
<p><strong>EDIT</strong> : How is the term "wrong" defined in mathematics then? You can say that proving Fermat's Last Theorem using the number-theory axioms was a difficult task, but then it can be taken as an axiom in another set of axioms.</p>
<p>Is mathematics as rigorous and as thought-through as it is believed and expected to be? It seems to me that there are many loopholes in problems as well as in the subject itself, but there is a false backbone of rigour that seems true until you start questioning the very fundamentals.</p>
| <p>There are really two very different kinds of proofs:</p>
<ul>
<li><p><em>Informal proofs</em> are what mathematicians write on a daily basis to convince themselves and other mathematicians that particular statements are correct. These proofs are usually written in prose, although there are also geometrical constructions and "proofs without words". </p></li>
<li><p><em>Formal proofs</em> are mathematical objects that model informal proofs. Formal proofs contain absolutely every logical step, with the result that even simple propositions have amazingly long formal proofs. Because of that, formal proofs are used mostly for theoretical purposes and for computer verification. Only a small percentage of mathematicians would be able to write down any formal proof whatsoever off the top of their head. </p></li>
</ul>
<p>With a little humor, I should say there is a third kind of proof: </p>
<ul>
<li><em>High-school proofs</em> are arguments that teachers force their students to reproduce in high school mathematics classes. These have to be written according to very specific rules described by the teacher, which are seemingly arbitrary and not shared by actual informal or formal proofs outside high-school mathematics. High-school proofs include the "two-column proofs" where the "steps" are listed on one side of a vertical line and the "reasons" on the other. The key thing to remember about high-school proofs is that they are only an imitation of "real" mathematical proofs.</li>
</ul>
<p>Most mathematicians learn about mathematical proofs by reading and writing them in classes. Students develop proof skills over the course of many years in the same way that children learn to speak - without learning the rules first. So, as with natural languages, there is no firm definition of "what is an informal proof", although there are certainly common patterns. </p>
<p>If you want to learn about proofs, the best way is to read some real mathematics written at a level you find comfortable. There are many good sources, so I will point out only two: <a href="http://www.maa.org/pubs/mathmag.html">Mathematics Magazine</a> and <a href="http://www.maa.org/mathhorizons/">Math Horizons</a> both have well-written articles on many areas of mathematics. </p>
| <p>Starting from the end, if you take Pythagoras' Theorem as an axiom, then proving it is very easy. A proof just consists of a single line, stating the axiom itself. The modern way of looking at axioms is not as things that can't be proven, but rather as those things that we explicitly state as things that hold. </p>
<p>Now, exactly what a proof is depends on what you choose as the rules of inference in your logic. It is important to understand that a proof is a typographical entity. It is a list of symbols. There are certain rules of how to combine certain lists of symbols to extend an existing proof by one more line. These rules are called inference rules. </p>
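<p>To make the "typographical entity" point concrete, here is a toy sketch (my own, purely illustrative) in which formulas are strings and the only inference rule is modus ponens, implemented as bare string manipulation:</p>

```python
# A toy formal system: formulas are strings, and the only inference rule is
# modus ponens, implemented as pure string manipulation.

def modus_ponens(a, implication):
    """From 'a' and 'a -> b', produce 'b'; otherwise the rule doesn't apply."""
    prefix = a + " -> "
    return implication[len(prefix):] if implication.startswith(prefix) else None

axioms = ["p", "p -> q"]
# A "proof" is just a list of lines, each an axiom or the output of the rule:
proof = axioms + [modus_ponens(axioms[0], axioms[1])]
assert proof[-1] == "q"   # "q" is a theorem of this little system
```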
<p>Now, remembering that all of this happens just on a piece of paper - the proof consists just of marks on paper, where what you accept as a valid proof is anything that is obtained from the axioms by following the inference rules - we would somehow like to relate this to properties of actual mathematical objects. To understand that, another technicality is required. If we are to write a proof as symbols on a piece of paper, we had better have something telling us which symbols we are allowed to use and how to combine them to obtain what are called terms. This is provided by the formal concept of a language. Now, to relate symbols on a piece of paper to mathematical objects, we turn to semantics. First the language needs to be interpreted (another technical thing). Once the language is interpreted, each statement (a statement is a bunch of terms put together in a certain way that is trying to convey a property of the objects we are interested in) becomes either true or false. </p>
<p>This is important: Before an interpretation was made, we could still prove things. A statement was either provable or not. Now, with an interpretation at hand, each statement is also either true or false (in that particular interpretation). So, now comes the question whether or not the rules of inference are <em>sound</em>. That is to say, whether those things that are provable from the axioms are actually true in each and every interpretation where these axioms hold. Of course we absolutely must choose the inference rules so that they are sound. </p>
<p>Another question is whether we have completeness. That is, if a statement is true under each and every interpretation where the axioms hold, does it follow that a proof exists? This is a very subtle question since it relates semantics (a concept that is quite elusive) to provability (a concept that is very trivial and completely mechanical). Typically, proving that a logical system is complete is quite hard. </p>
<p>I hope this satisfies your curiosity, and thumbs up for your interest in these issues!</p>
|
logic | <p>I've heard that within the field of intuitionistic mathematics, all real functions are continuous (i.e. there are no discontinuous functions). Is there a good book where I can find a proof of this theorem?</p>
| <p>Brouwer proved (to his own satisfaction) that every function from $\mathbb{R}$ to $\mathbb{R}$ is continuous. Modern constructive systems rarely are able to prove this, but they are <em>consistent</em> with it - they are unable to disprove it. These system are also (almost always) consistent with classical mathematics in which there are plenty of discontinuous functions from $\mathbb{R}$ to $\mathbb{R}$. One place you can find something about this is the classic <em>Varieties of Constructive Mathematics</em> by Bridges and Richman. </p>
<p>The same phenomenon occurs in classical computable analysis, by the way. Any <em>computable</em> function $f$ from $\mathbb{R}$ to $\mathbb{R}$ which is well defined with respect to equality of reals (and thus is a function from $\mathbb{R}$ to $\mathbb{R}$ in the normal sense) is continuous. In particular the characteristic function of a singleton real is never computable. This would be covered in any computable analysis text, such as the one by Weihrauch. </p>
<p>Here is a very informal argument that has a grain of truth. It should appear naively correct that if you can "constructively" prove that something is a function from $\mathbb{R}$ to $\mathbb{R}$, then you can compute that function. So the classical fact that every computable real function is continuous suggests that anything that you can constructively prove to be a real function will also be continuous. This suggests that you cannot prove constructively that any classically discontinuous function is actually a function. The grain of truth is that there are ways of making this argument rigorous, such as the method of "realizability". </p>
| <p>There exists a Grothendieck topos $\mathcal{E}$ in which the statement "every function from the Dedekind real numbers to the Dedekind real numbers is continuous" is true in the internal logic. To be a little more precise, in the topos $\mathcal{E}$, there is an object $R$ of Dedekind cuts of rational numbers, such that
$$\forall f \in R^R . \, \forall \epsilon \in R . \, \forall x \in R . \, \epsilon > 0 \Rightarrow \exists \delta \in R . \, \forall x' \in R . \, \left| x - x' \right| < \delta \Rightarrow \left| f (x) - f (x') \right| < \epsilon$$
holds when interpreted using Kripke–Joyal semantics in $\mathcal{E}$. The topos $\mathcal{E}$ is constructed as follows: we take a full subcategory $\mathbb{T}$ of the category of topological spaces $\textbf{Top}$ such that</p>
<ul>
<li>$\mathbb{T}$ is small,</li>
<li>the real line $\mathbb{R}$ is in $\mathbb{T}$, </li>
<li>for each $X \in \operatorname{ob} \mathbb{T}$ and each open subset $U$ of $X$, we have $U \in \operatorname{ob} \mathbb{T}$, and</li>
<li>$\mathbb{T}$ is closed under finite products in $\textbf{Top}$;</li>
</ul>
<p>and we set $\mathcal{E}$ to be the category of sheaves on $\mathbb{T}$ equipped with the open immersion topology. One can then show that the object of internal Dedekind real numbers in $\mathcal{E}$ is (isomorphic to) the representable sheaf $\mathbb{T}(-, \mathbb{R})$, and with more work, one finds that Brouwer's "theorem" holds in $\mathcal{E}$. The details of the construction and the proof of validity can be found in [<em>Sheaves in geometry and logic</em>, Ch. VI, §9], though I have not understood it in full.</p>
|
probability | <p>Many results are based on the fact of the Moment Generating Function (MGF) Uniqueness Theorem, that says: </p>
<blockquote>
<p>If $X$ and $Y$ are two random variables and their MGFs agree, $m_X(t) = m_Y(t)$ for all $t$, then $X$ and $Y$ have the same probability distribution: $F_X(x) = F_Y(x)$ for all $x$.</p>
</blockquote>
<p>The proof of this theorem is never shown in textbooks, and I cannot seem to find it online or in any book I have access to.</p>
<p>Can someone show me the proof or tell me where to look it up?</p>
<p>Thanks for your time.</p>
| <p>Let us first clarify the assumption. Denote the <em>moment generating function of <span class="math-container">$X$</span></em> by <span class="math-container">$M_X(t)=Ee^{tX}$</span>.</p>
<blockquote>
<p><strong>Uniqueness Theorem.</strong> If there exists <span class="math-container">$\delta>0$</span> such that <span class="math-container">$M_X(t) = M_Y(t) < \infty$</span> for all <span class="math-container">$t \in (-\delta,\delta)$</span>, then <span class="math-container">$F_X(t) = F_Y(t)$</span> for all <span class="math-container">$t \in \mathbb{R}$</span>.</p>
</blockquote>
<p>To prove that the moment generating function determines the distribution, there are at least two approaches:</p>
<ul>
<li><p>To show that finiteness of <span class="math-container">$M_X$</span> on <span class="math-container">$(-\delta,\delta)$</span> implies that the moments of <span class="math-container">$X$</span> do not increase too fast, so that <span class="math-container">$F_X$</span> is determined by <span class="math-container">$(EX^k)_{k\in\mathbb{N}}$</span>, which are in turn determined by <span class="math-container">$M_X$</span>. This proof can be found in Section 30 of <a href="http://www.ams.org/mathscinet-getitem?mr=1324786" rel="noreferrer">Billingsley, P. <em>Probability and Measure</em></a>.</p>
</li>
<li><p>To show that <span class="math-container">$M_X$</span> is analytic and can be extended to <span class="math-container">$(-\delta,\delta)\times i\mathbb{R} \subseteq \mathbb{C}$</span>, so that <span class="math-container">$M_X(z)=Ee^{zX}$</span>, so in particular <span class="math-container">$M_X(it)=\varphi_X(t)$</span> for all <span class="math-container">$t\in\mathbb{R}$</span>, and then use the fact that <span class="math-container">$\varphi_X$</span> determines <span class="math-container">$F_X$</span>. For this approach, see <a href="http://www.ams.org/mathscinet-getitem?mr=7577" rel="noreferrer">Curtiss, J. H. Ann. Math. Statistics 13:430-433</a> and references therein or Roja's answer.</p>
</li>
</ul>
<p>At undergraduate level, <strong>it is interesting to work with the moment generating function and state the above theorem without proving it</strong>. One possible proof requires familiarity with holomorphic functions and the Identity Theorem from complex analysis, which restricts the set of students to which it can be taught.</p>
<p>In fact, the proof is so advanced that at that point it usually makes more sense to accept working with complex numbers, forget about the moment generating function, and work with the <em>characteristic function</em> <span class="math-container">$\varphi_X(t)=Ee^{itX}$</span> instead. Almost every graduate textbook takes this path and proves that the characteristic function determines the distribution as a corollary of the <em>inversion formula</em>.</p>
<p>The proof of the inversion formula is a bit long, but it only requires the Fubini Theorem to switch an expectation with an integral and the Dominated Convergence Theorem to switch an integral with a limit. A direct proof of uniqueness without the inversion formula is shorter and simpler, and it only requires the Weierstrass Theorem to approximate a continuous function by a trigonometric polynomial.</p>
<p>Side remark. If you only admit random variables whose support are contained in <span class="math-container">$\mathbb{Z}_+$</span>, then the <em>probability generating function</em> <span class="math-container">$G_X(z)=Ez^X$</span> determines <span class="math-container">$p_X$</span> (and thus <span class="math-container">$F_X$</span>). This elementary result is proved in most undergraduate textbooks and is mentioned in Did's answer.
If you only admit random variables whose support are contained in <span class="math-container">$\mathbb{Z}$</span>, then it is simpler to show that <span class="math-container">$\varphi_X$</span> determines <span class="math-container">$p_X$</span>, as also mentioned in Did's answer, and the proof uses Fubini.</p>
| <p>$$(\forall n\geqslant0)\qquad \left.\frac{\mathrm d^n}{\mathrm ds^n}\mathbb E[s^X]\right|_{s=0}=n!\cdot\mathbb P[X=n]
$$
$$(\forall x\in\mathbb R)\qquad \int_0^{2\pi}\mathbb E[\mathrm e^{\mathrm itX}]\,\mathrm e^{-\mathrm itx}\,\mathrm dt=2\pi\cdot\mathbb P[X=x]
$$</p>
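<p>To spell out the first identity with a concrete (made-up) example: for a variable supported on a finite subset of <span class="math-container">$\mathbb{Z}_+$</span>, <span class="math-container">$\mathbb E[s^X]$</span> is a polynomial whose coefficients are the probabilities, so repeated differentiation at <span class="math-container">$0$</span> recovers the distribution:</p>

```python
from math import factorial

p = [0.2, 0.5, 0.0, 0.3]          # example pmf of X on {0, 1, 2, 3}

def derivative(coeffs):
    """Differentiate a polynomial given by its coefficient list."""
    return [k * c for k, c in enumerate(coeffs)][1:]

def recover_pmf(coeffs, n):
    """P[X = n] = G^(n)(0) / n!, with G(s) = E[s^X] the generating polynomial."""
    g = list(coeffs)
    for _ in range(n):
        g = derivative(g)
    return (g[0] if g else 0.0) / factorial(n)

assert all(abs(recover_pmf(p, n) - p[n]) < 1e-12 for n in range(4))
```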
|
probability | <p>There are many descriptions of the "birthday problem" on this site — the problem of finding the probability that in a group of $n$ people there will be any (= at least 2) sharing a birthday.</p>
<p>I am wondering how to find instead the expected number of people sharing a birthday in a group of $n$ people. I remember that expectation means the weighted sum of the probabilities of each outcome:</p>
<p>$$E[X]=\sum_{i=0}^{n-1}x_ip_i$$</p>
<p>And here $x$ must mean the number of collisions involving $i+1$ people, which is $n\choose i$. All $n$ people born on different days means no collisions, $i=0$; two people born on the same day means $n$ collisions, $i=1$; all $n$ people born on the same day means $n$ collisions, $i=n-1$.</p>
<p>Since the probabilities of three or more people with the same birthday are vanishingly small compared to two people with the same birthday, and decreases faster than $x$ increases, is it correct to say that this expectation can be approximated by</p>
<p>$$E[X]\approx {n\choose 0}p_{no\ collisions}+{n\choose 1}p_{one\ collision}$$</p>
<p>This doesn't look right to me and I'd appreciate some guidance.</p>
<hr>
<p>Sorry - edited to change ${n\choose 1}$ to ${n\choose 0}$ in second equation. Sloppy of me.</p>
| <p>The probability person $B$ shares person $A$'s birthday is $1/N$, where $N$ is the number of equally possible birthdays, </p>
<p>so the probability $B$ does not share person $A$'s birthday is $1-1/N$, </p>
<p>so the probability $n-1$ other people do not share $A$'s birthday is $(1-1/N)^{n-1}$, </p>
<p>so the expected number of people who do not have others sharing their birthday is $n(1-1/N)^{n-1}$, </p>
<p>so the expected number of people who share birthdays with somebody is $n\left(1-(1-1/N)^{n-1}\right)$.</p>
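<p>A quick Monte-Carlo check of this formula (my own snippet, assuming $N=365$ equally likely birthdays):</p>

```python
# Simulate the expected number of people who share a birthday with someone.

import random
from collections import Counter

def expected_sharers(n, N=365, trials=20000, seed=0):
    rng = random.Random(seed)
    total = 0
    for _ in range(trials):
        counts = Counter(rng.randrange(N) for _ in range(n))
        total += sum(c for c in counts.values() if c > 1)
    return total / trials

n, N = 23, 365
exact = n * (1 - (1 - 1 / N) ** (n - 1))   # the closed form derived above
assert abs(expected_sharers(n, N) - exact) < 0.1
```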
| <p>I will try to get control of the most standard interpretation of our question by using (at first) very informal language. Let us call someone <em>unhappy</em> if one or more people share his/her "birthday." We want to find the "expected number" of unhappy people.</p>
<p>Define the random variable $X$ by saying that $X$ is the number of unhappy people.
We want to find $\text{E}(X)$. Let $p_i$ be the probability that $X=i$. Then
$$\text{E}(X)=\sum_{i=0}^{n} i\,p_i$$
That is roughly the approach that you took. That approach is correct, and a very reasonable thing to try. Indeed, you have been <em>trained</em> to use this approach, since that's exactly how you solved the exercises that followed the definition of expectation. </p>
<p>Unfortunately, in this problem, finding the $p_i$ is very difficult. One could, as you did, decide that for a good approximation, only the first few $p_i$ really matter. That is sometimes true, but it depends quite a bit on the value $N$ of "days in the year" and the number $n$ of people.</p>
<p>Fortunately, in this problem, and many others like it, there is an alternative <em>very</em> effective approach. It involves a bit of theory, but the payoff is considerable.</p>
<p>Line the people up in a row. Define the random variables $U_1,U_2,U_3,\dots,U_n$ by saying that $U_k=1$ if the $k$-th person is unhappy, and $U_k=0$ if the $k$-th person is not unhappy. The crucial observation is that
$$X=U_1+U_2+U_3+\cdots + U_n$$ </p>
<p>One way to interpret this is that you, the observer, go down the line of people, making a tick mark on your tally sheet if the person is unhappy, and making no mark if the person is not unhappy. The number of tick marks is $X$, the number of unhappy people. It is also the sum of the $U_k$.</p>
<p>We next use the following very important theorem: <strong>The expectation of a sum is the sum of the expectations</strong>. This theorem holds "always." The random variables you are summing <em>need not be independent</em>. In our situation, the $U_k$ are not independent, but, for expectation of a sum, that does not matter. So we have
$$\text{E}(X)=\text{E}(U_1) + \text{E}(U_2)+ \text{E}(U_3)+\cdots +\text{E}(U_n)$$</p>
<p>Finally, note that the probability that $U_k=1$ is, as carefully explained by @Henry, equal to $p$, where
$$p=1-(1-1/N)^{n-1}$$
It follows that $\text{E}(U_k)=p$ for any $k$, and therefore $\text{E}(X)=np$.</p>
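<p>As a sanity check (my own snippet), one can estimate $\text{E}(X)$ both directly and as the sum of the estimated $\text{E}(U_k)$; the two agree even though the $U_k$ are dependent:</p>

```python
# Estimate E[X] two ways: as the mean of X, and as the sum of the means of the
# indicator variables U_k.  Linearity of expectation needs no independence.

import random

def trial(n, N, rng):
    bdays = [rng.randrange(N) for _ in range(n)]
    return [bdays.count(b) > 1 for b in bdays]   # U_1, ..., U_n for one party

rng = random.Random(2)
n, N, T = 10, 365, 30000
samples = [trial(n, N, rng) for _ in range(T)]
mean_X = sum(sum(u) for u in samples) / T
sum_means = sum(sum(u[k] for u in samples) / T for k in range(n))
assert abs(mean_X - sum_means) < 1e-9            # same sum, reordered
assert abs(mean_X - n * (1 - (1 - 1 / N) ** (n - 1))) < 0.05
```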
|
game-theory | <p>Got this for an interview and didn't get it. How to solve?</p>
<p>You and your opponent have a uniform random sampler from 0 to 1. We both sample from our own machines. Whoever has the higher number wins. The catch is that when you see your number, you can resample. The opponent can resample once but you can resample twice. What’s the probability I win?</p>
<p>My not-confident-at-all approach: For each player you come up with a strategy that revolves around the idea of “if this number is too low, resample.” you know that for myself, I have three samples, and the EV of the third sample is 1/2. So for the second sample, if it’s below 1/2, you should resample; if above 1/2, do not resample. And you do this for the first sample with a slightly higher threshold. And then assuming our player is opponent they will follow the same approach, but they only have two rolls.</p>
<p>No matter what, we know the game can end with six outcomes: it can end with me ending on the first, second, or third sample, and them ending on the first or second sample. We just condition on each of those six cases and find the probability that my roll is bigger than their roll on that conditional uniform distribution.</p>
| <p>Let's look at the single player game which is that I have a budget of <span class="math-container">$n$</span> total rolls and I want to come up with a strategy for getting the biggest roll in expectation.</p>
<p>For <span class="math-container">$k\leq n$</span>, let us look at what happens from <span class="math-container">$k$</span> rolls.
Now if we think of each of the rolls <span class="math-container">$R_1,R_2,\dots,R_k$</span> as being independent draws from the uniform distribution and we define a new random variable <span class="math-container">$X:=\max\{R_1,R_2,\dots,R_k\}$</span>, we see that for an arbitrary <span class="math-container">$x\in[0,1]$</span>, the probability that <span class="math-container">$X$</span> is smaller or equal to <span class="math-container">$x$</span> is <span class="math-container">$\mathbb{P}(X\leq x)=\mathbb{P}(R_1\leq x, R_2\leq x,\dots R_k\leq x)=\prod_{j=1}^k \mathbb{P}(R_j\leq x)=\prod_{j=1}^k x=x^k$</span>.</p>
<p>This is clearly the cumulative distribution function of <span class="math-container">$X$</span>, so we can differentiate that to get the density of <span class="math-container">$X$</span>, which will obviously be <span class="math-container">$f_X(x)=kx^{k-1}$</span>. This discussion can also be found with a bit more detail here: <a href="https://stats.stackexchange.com/questions/18433/how-do-you-calculate-the-probability-density-function-of-the-maximum-of-a-sample">https://stats.stackexchange.com/questions/18433/how-do-you-calculate-the-probability-density-function-of-the-maximum-of-a-sample</a></p>
<p>But if we have the density function, we can immediately compute the expected value of <span class="math-container">$X$</span>, which will be <span class="math-container">$\mathbb{E}[X]:=\int_0^1 xf_X(x)\,dx = \int_0^1 kx^k\,dx=\tfrac{k}{k+1}$</span>.</p>
<p>So what does this mean? If I still have <span class="math-container">$k$</span> rolls left in my budget, I should expect that the maximum value I will encounter during those remaining <span class="math-container">$k$</span> rolls will be <span class="math-container">$\tfrac{k}{k+1}$</span>.</p>
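<p>A quick numeric check of this expectation (my own snippet, not part of the argument):</p>

```python
# Monte-Carlo check that the maximum of k uniform draws has mean k/(k+1).

import random

def mean_of_max(k, trials=100000, seed=0):
    rng = random.Random(seed)
    return sum(max(rng.random() for _ in range(k)) for _ in range(trials)) / trials

k = 4
assert abs(mean_of_max(k) - k / (k + 1)) < 0.01
```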
<p>With this in mind, an optimal strategy for the single player game with a total budget of <span class="math-container">$n$</span> rolls is quite straightforward.</p>
<p>Do the first roll and get the value <span class="math-container">$x_1$</span>. Is <span class="math-container">$x_1>\tfrac{n-1}{n}$</span>, which is the expected highest value I will see in the remaining <span class="math-container">$n-1$</span> rolls? If yes, stop. Otherwise roll again. Get <span class="math-container">$x_2$</span> on the second roll. Is <span class="math-container">$x_2>\tfrac{n-2}{n-1}$</span>? Then stop. Else roll the third time. Inductively, if on roll <span class="math-container">$\ell$</span> you have that <span class="math-container">$x_\ell>\tfrac{n-\ell}{n-\ell+1}$</span>, stop. Otherwise roll for the <span class="math-container">$\ell+1$</span>-th time. If you are unlucky enough to get to the <span class="math-container">$n$</span>-th roll, the value you will stop at will be arbitrary.</p>
<p>But now, what is the expected outcome of the optimal strategy? Note that you would stop at roll <span class="math-container">$1$</span> with probability <span class="math-container">$S_1=\tfrac{1}{n}$</span> and you will only do that when <span class="math-container">$x_1\in(\tfrac{n-1}{n},1]$</span>. The average value of stopping at step <span class="math-container">$1$</span> will clearly be <span class="math-container">$A_1=\tfrac{1}{2}\big(\tfrac{n-1}{n}+1\big)=\tfrac{2n-1}{2n}$</span>.</p>
<p>You stop at roll <span class="math-container">$2$</span> if you did not stop at roll <span class="math-container">$1$</span> and if <span class="math-container">$x_2\in(\tfrac{n-2}{n-1},1]$</span>. The probability that you did not stop at roll <span class="math-container">$1$</span> is <span class="math-container">$1-\tfrac{1}{n}=\tfrac{n-1}{n}$</span> and the probability that you roll <span class="math-container">$x_2>\tfrac{n-2}{n-1}$</span> is <span class="math-container">$\tfrac{1}{n-1}$</span>. So the probability that you stop at roll <span class="math-container">$2$</span> is <span class="math-container">$S_2=\tfrac{n-1}{n}\tfrac{1}{n-1}=\tfrac{1}{n}$</span>. The average stopping value on roll <span class="math-container">$2$</span> is <span class="math-container">$A_2=\tfrac{2n-3}{2n-2}$</span>.</p>
<p>You stop at roll <span class="math-container">$3$</span> if you did not stop at roll <span class="math-container">$1$</span>, did not stop at roll <span class="math-container">$2$</span> and if <span class="math-container">$x_3\in(\tfrac{n-3}{n-2},1]$</span>. This will happen with probability <span class="math-container">$S_3=(1-2\tfrac{1}{n})\tfrac{1}{n-2}=\tfrac{1}{n}$</span> and your average stopping value will be <span class="math-container">$A_3=\tfrac{2n-5}{2n-4}$</span>.</p>
<p>You can show by induction that <span class="math-container">$S_k=\tfrac{1}{n}$</span> since you stop at roll <span class="math-container">$k$</span> if you have not stopped at any of the previous rolls (which happens with probability <span class="math-container">$1-\sum_{j=1}^{k-1} S_j=\tfrac{n-k}{n}$</span> by the induction hypothesis) and if <span class="math-container">$x_k>\tfrac{n-k-1}{n-k}$</span> which has probability <span class="math-container">$\tfrac{1}{n-k}$</span>. The average outcome will be <span class="math-container">$A_k=\tfrac{2n-2k+1}{2n-2k+2}$</span>. This will hold for all <span class="math-container">$2\leq k\leq n-1$</span>. Stopping at roll <span class="math-container">$n$</span> will occur with probability <span class="math-container">$S_n=1-\sum_{j=1}^{n-1} S_j=\tfrac{1}{n}$</span> with average outcome <span class="math-container">$A_n=\tfrac{1}{2}=\tfrac{2n-2n+1}{2n-2n+2}$</span>.</p>
<p>The expected value of the single player strategy is then <span class="math-container">$E_n=\sum_{k=1}^n S_kA_k= \tfrac{1}{n} \sum_{k=1}^n \tfrac{2n-2k+1}{2n-2k+2} =\tfrac{1}{n} \sum_{k=1}^n\big(1-\tfrac{1}{2k}\big)=1-\tfrac{1}{2n}\sum_{k=1}^n \tfrac{1}{k}$</span>.</p>
<p><span class="math-container">$E_2=1-\tfrac{1}{4}\big(1+\tfrac{1}{2}\big)=\tfrac{5}{8}$</span>.</p>
<p><span class="math-container">$E_3=1-\tfrac{1}{6}\big(1+\tfrac{1}{2}+\tfrac{1}{3}\big)=\tfrac{25}{36}$</span>.</p>
<p>As expected <span class="math-container">$E_3>E_2$</span>. I am unsure what the game theory perspective on this is though. If both players follow the single-player strategy, the player with <span class="math-container">$3$</span> rolls is definitely expected to win. How the other player would want to adapt his strategy depends on some factors. For instance, do both players know what roll the other player is on or whether they stopped early?</p>
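<p>Both the closed form <span class="math-container">$E_n = 1-\tfrac{1}{2n}\sum_{k=1}^n \tfrac{1}{k}$</span> and the threshold strategy itself are easy to check numerically (my own snippet):</p>

```python
# Check E_n = 1 - H_n/(2n) for the threshold strategy described above,
# both exactly and against a direct simulation of the strategy.

import random
from fractions import Fraction

def E_threshold(n):
    return 1 - Fraction(1, 2 * n) * sum(Fraction(1, k) for k in range(1, n + 1))

assert E_threshold(2) == Fraction(5, 8)
assert E_threshold(3) == Fraction(25, 36)

def play(n, rng):
    for remaining in range(n, 0, -1):            # rolls left, current included
        x = rng.random()
        if remaining == 1 or x > (remaining - 1) / remaining:
            return x

rng = random.Random(1)
trials = 50000
sim = sum(play(3, rng) for _ in range(trials)) / trials
assert abs(sim - 25 / 36) < 0.01
```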
<p><strong>Major edit thanks to @hgmath</strong></p>
<p>What I described above is not the optimal single player strategy. Indeed, consider the case when <span class="math-container">$n=3$</span>. If on roll one we get <span class="math-container">$x_1\in(E_2,\tfrac{2}{3})=(\tfrac{5}{8},\tfrac{2}{3})$</span>, rolling again and pursuing the above strategy would be a mistake, since our expected outcome would just be <span class="math-container">$E_2$</span> (the upper bound <span class="math-container">$\tfrac{2}{3}$</span> is put there since even with the current strategy we would not reroll if we got above <span class="math-container">$\tfrac{2}{3}$</span>).</p>
<p>This suggests an inductive strategy. Namely, if we already know the optimal expected outcome of <span class="math-container">$E_{n-1}$</span> rolls, we should only reroll after the first roll if <span class="math-container">$x_1<E_{n-1}$</span> instead of rerolling if <span class="math-container">$x_1<\tfrac{n-1}{n}$</span>. So let us try to write down these optimal expectations <span class="math-container">$E_k$</span>.</p>
<p>If <span class="math-container">$n=1$</span>, i.e., if our roll budget is <span class="math-container">$1$</span>, clearly we will stop after <span class="math-container">$x_1$</span> with probability <span class="math-container">$S_1=1$</span> and the average outcome will be <span class="math-container">$A_1=\tfrac{1}{2}$</span>, meaning that the optimal <span class="math-container">$E_1=S_1A_1=\tfrac{1}{2}$</span>.</p>
<p>If our roll budget is <span class="math-container">$n=2$</span>, then after the first roll <span class="math-container">$x_1$</span>, we need to check if <span class="math-container">$x_1>E_1=\tfrac{1}{2}$</span>. This will happen with probability <span class="math-container">$S_1=1-E_1=\tfrac{1}{2}$</span> and the average outcome will be <span class="math-container">$A_1=\tfrac{1+E_1}{2}=\tfrac{3}{4}$</span>. If <span class="math-container">$x_1\leq E_1$</span>, which will happen with probability <span class="math-container">$E_1=\tfrac{1}{2}$</span>, we stop at <span class="math-container">$S_2=1-S_1=\tfrac{1}{2}$</span> with average outcome <span class="math-container">$A_2=\tfrac{1}{2}$</span>. Thus the expected optimal outcome is <span class="math-container">$E_2=S_1A_1+S_2A_2=\tfrac{1}{2}\big((1-E_1)(1+E_1)+E_1\big)=\tfrac{1}{2}(1+E_1-E_1^2)=\tfrac{5}{8}$</span>. So far no difference to our previous computation.</p>
<p>However, if <span class="math-container">$n=3$</span>, we should stop at <span class="math-container">$x_1$</span> if <span class="math-container">$x_1>E_2$</span>, with probability <span class="math-container">$S_1=1-E_2=\tfrac{3}{8}$</span> and average outcome <span class="math-container">$A_1=\tfrac{1+E_2}{2}=\tfrac{13}{16}$</span>. If <span class="math-container">$x_1\leq E_2$</span>, then we roll the second time to get <span class="math-container">$x_2$</span>. We stop if <span class="math-container">$x_2>E_1$</span>, an event with probability <span class="math-container">$S_2=(1-S_1)(1-E_1)=E_2(1-E_1)=\tfrac{5}{8}\tfrac{1}{2}=\tfrac{5}{16}$</span>, and an average outcome <span class="math-container">$A_2=\tfrac{1+E_1}{2}=\tfrac{3}{4}$</span>. If <span class="math-container">$x_2<E_1$</span>, we roll again and we are forced to stop with <span class="math-container">$x_3$</span>, an event with probability <span class="math-container">$S_3=1-S_1-S_2=E_2-E_2(1-E_1)=E_1E_2=\tfrac{5}{16}$</span> and average outcome <span class="math-container">$A_3=\tfrac{1}{2}$</span>.</p>
<p>Thus, the expected outcome of the optimal <span class="math-container">$3$</span> roll strategy is <span class="math-container">$E_3=S_1 A_1+S_2A_2+S_3A_3=\tfrac{1}{2}\big((1-E_2)(1+E_2)+E_2(1-E_1)(1+E_1)+E_1E_2\big) = \tfrac{1}{2}(1+E_2+E_2E_1-E_2^2-E_2E_1^2)=\tfrac{1}{2}\big(1+\tfrac{5}{8}+\tfrac{5}{16}-\tfrac{25}{64}-\tfrac{5}{32}\big)=\tfrac{89}{128}$</span>.</p>
<p>This is bigger than what we got with the old strategy by an eighth of a percent.</p>
<p>If <span class="math-container">$n=4$</span>, we would get <span class="math-container">$S_1=1-E_3$</span> with <span class="math-container">$2\cdot A_1=1+E_3$</span>, <span class="math-container">$S_2=(1-S_1)(1-E_2)=E_3(1-E_2)$</span> with <span class="math-container">$2\cdot A_2=1+E_2$</span>, <span class="math-container">$S_3=(1-S_1-S_2)(1-E_1)=E_3E_2(1-E_1)$</span> with <span class="math-container">$2\cdot A_3=1+E_1$</span> and <span class="math-container">$S_4=(1-S_1-S_2-S_3)=E_3E_2E_1$</span> with <span class="math-container">$2\cdot A_4=1$</span>, meaning that <span class="math-container">$E_4=\tfrac{1}{2}(1+E_3+E_3E_2+E_3E_2E_1-E_3^2-E_3E_2^2-E_3E_2E_1^2)$</span>. After some computation, this means <span class="math-container">$E_4=\tfrac{24305}{32768}\approx 0.7417$</span>. This is roughly a <span class="math-container">$0.3\%$</span> increase over the expected outcome of our previous strategy, which would have been <span class="math-container">$1-\tfrac{1}{8}(1+\tfrac{1}{2}+\tfrac{1}{3}+\tfrac{1}{4})=\tfrac{71}{96}$</span>.</p>
<p>It is not difficult to find a recursive relation for <span class="math-container">$E_n$</span> based on induction and the obvious pattern from the computed cases, but I don't know if there is any reasonable way to get a closed form expression for <span class="math-container">$E_n$</span>.</p>
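<p>For completeness, one can check that the pattern above collapses to a one-line recursion: with a budget of <span class="math-container">$k$</span> rolls we keep the first roll <span class="math-container">$x$</span> exactly when <span class="math-container">$x>E_{k-1}$</span>, so <span class="math-container">$E_k=\int_0^1\max(x,E_{k-1})\,dx=\tfrac{1}{2}(1+E_{k-1}^2)$</span>, with <span class="math-container">$E_1=\tfrac{1}{2}$</span>. A short sketch in Python, using exact arithmetic, reproduces the values computed above:</p>

```python
from fractions import Fraction

def optimal_value(n):
    """Expected outcome of the optimal strategy with a budget of n rolls."""
    E = Fraction(1, 2)          # E_1: forced to keep the single roll
    for _ in range(n - 1):
        # keep x iff x > E_{k-1}, so E_k = integral of max(x, E_{k-1}) over [0,1]
        E = (1 + E * E) / 2
    return E

for n in range(1, 5):
    print(n, optimal_value(n), float(optimal_value(n)))
```

<p>This reproduces <span class="math-container">$E_2=\tfrac{5}{8}$</span> and <span class="math-container">$E_3=\tfrac{89}{128}$</span>, and the sequence increases toward <span class="math-container">$1$</span>, the fixed point of the recursion.</p>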
| <p>First, we propose a method for computing the best response for a certain policy of the opponent.</p>
<p>For example, assume that the opponent plays a thresholding policy with threshold = 1/2. With a single sample, I have a winning probability of 3/8. Therefore in the second round, I will only resample if my winning probability is less than 3/8; we can compute that the corresponding threshold is 7/12. Then I can compute the conditional winning probability if I resample in the first round; call it <span class="math-container">$1/2+a$</span>. Then I can similarly use this value to determine my threshold for the first round.</p>
<p>With this idea, if both players can resample once, then one can compute that the unique pure strategy Nash equilibrium is <span class="math-container">$(x_1,x_2)=(\frac{\sqrt{5}-1}{2},\frac{\sqrt{5}-1}{2})$</span>, where <span class="math-container">$x_1$</span> and <span class="math-container">$x_2$</span> are the thresholds of the two players. Similarly, we can compute a pure strategy Nash equilibrium for the original problem, but it will be quite complicated.</p>
|
differentiation | <p>I'm finishing up a semester of multivariable calculus and will be taking a course on analysis this Spring. In any of the calculus courses I've taken, we never covered anything beyond the standard techniques of integration (<span class="math-container">$u$</span>-sub, parts, etc.) One of the techniques I saw used recently which I had not heard of was <em>differentiation under the integral sign</em>, which makes use of the fact that:</p>
<p><span class="math-container">$$\frac{d}{dx} \int_a^bf(x,t)dt = \int_a^b \frac{\partial}{\partial x}f(x,t)dt $$</span></p>
<p>in solving integrals. My question is, is there ever an indication that this should be used? Is there any explainable intuition or rule of thumb for the use of differentiation under the integral sign?</p>
| <p>Differentiation under the integral sign, better known as Feynman's trick, is not a standard integration technique taught in curriculum calculus. Nevertheless, it is widely used outside classrooms and may appear somewhat magical to those seeing it for the first time.</p>
<p>Despite the mystique around it, it is actually rooted in double integrals. A bare-bone illustrative example is</p>
<p><span class="math-container">$$I=\int_0^1\int_0^1 x^t dt \>dx= \ln2$$</span></p>
<p>The natural approach is to integrate over <span class="math-container">$x$</span> first and then over <span class="math-container">$t$</span>. But an unsuspecting person may integrate over <span class="math-container">$t$</span> first and then encounter</p>
<p><span class="math-container">$$I=\int_0^1\frac{x-1}{\ln x} dx$$</span></p>
<p>Now he/she is stuck, since there is no easy way out. Fortunately, there is one: differentiate <span class="math-container">$J(t)$</span> below under the integral sign, i.e.</p>
<p><span class="math-container">$$J(t)=\int_0^1\frac{x^t-1}{\ln x} dx,\>\>J'(t) = \int_0^1 x^t dx= \frac{1}{1+t}
\implies I=\int_0^1 J'(t)\,dt=\ln 2$$</span></p>
<p>A knowledgeable math person, aware of its double-integral origin, would just undo the <span class="math-container">$t$</span>-integral to reintroduce the double form, and then integrate in the right order,</p>
<p><span class="math-container">$$I=\int_0^1\frac{x-1}{\ln x} dx=\int_0^1\int_0^1 x^t dt dx = \int_0^1 \frac1{t+1}dt= \ln 2$$</span></p>
<p>The two approaches are in fact equivalent, with the double-integral route actually more straightforward. Feynman's trick is appealing, since it 'decouples' a double-integral in appearance, especially when the embedded double-integral is not immediately discernible. When a seemingly difficult integral is encountered, the differentiation trick is often employed to transform the original integrand into a manageable one.</p>
<p>As a practical example, the trick can be used for deriving the well-known integral
<span class="math-container">$$I=\int_0^\infty \frac{\sin x}x dx=\frac\pi2$$</span>
with <span class="math-container">$J(t)=\int_0^\infty \frac{\sin x}x e^{-tx}dx$</span>, <span class="math-container">$J'(t)=-\frac{1}{1+t^2}$</span> and <span class="math-container">$I=- \int_0^\infty J'(t)\,dt$</span>.</p>
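<p>As a sanity check (a quick numerical sketch, not part of the derivation), one can confirm <span class="math-container">$\int_0^1\frac{x-1}{\ln x}dx=\ln 2$</span> with a composite Simpson rule; the integrand extends continuously to the endpoints, with value <span class="math-container">$0$</span> at <span class="math-container">$x=0$</span> and <span class="math-container">$1$</span> at <span class="math-container">$x=1$</span>:</p>

```python
import math

def f(x):
    # (x - 1)/ln x, extended continuously to the endpoints
    if x == 0.0:
        return 0.0
    if x == 1.0:
        return 1.0
    return (x - 1) / math.log(x)

def simpson(g, a, b, n=2000):
    """Composite Simpson rule on [a, b] with n (even) subintervals."""
    h = (b - a) / n
    s = g(a) + g(b) + sum((4 if k % 2 else 2) * g(a + k * h) for k in range(1, n))
    return s * h / 3

print(simpson(f, 0.0, 1.0), math.log(2))   # both ≈ 0.6931
```
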
| <p>It usually comes up when you are dealing with functions defined in terms of an integral but also can be used to clean up ugly integrals by introducing a new parameter and differentiating with respect to said new parameter. Look up Feynman integration, there are lots of instructive videos and examples of this technique.</p>
|
matrices | <blockquote>
<p>Find the largest eigenvalue of the following matrix
$$\begin{bmatrix}
1 & 4 & 16\\
4 & 16 & 1\\
16 & 1 & 4
\end{bmatrix}$$</p>
</blockquote>
<p>This matrix is symmetric and, thus, the eigenvalues are real. I solved for the possible eigenvalues and, fortunately, I found that the answer is $21$.</p>
<p>My approach:</p>
<p>The determinant on simplification leads to the following third degree polynomial.
$$\begin{vmatrix}
1-\lambda & 4 &16\\
4 &16-\lambda&1\\
16&1&4-\lambda
\end{vmatrix}
= -\left(\lambda^3-21\lambda^2-189\lambda+3969\right).$$</p>
<p>At first glance it is hard to see how one would find the roots of this polynomial with pen and paper using elementary algebra. I managed to find the roots: they are $21$, $\sqrt{189}$, and $-\sqrt{189}$, and the largest is $21$.</p>
<p>Now the problem is that my professor stared at this matrix for a few seconds and said that the largest eigenvalue is $21$. Obviously, he hadn't gone through all these steps to find that answer. So what enabled him answer this in a few seconds? Please don't say that he already knew the answer.</p>
<p>Is there any easy way to find the answer in a few seconds? What property of this matrix makes it easy to compute that answer?</p>
<p>Thanks in advance.</p>
| <p>Requested by @Federico Poloni:</p>
<p>Let $A$ be a matrix with positive entries, then from the Perron-Frobenius theorem it follows that the dominant eigenvalue (i.e. the largest one) is bounded between the lowest sum of a row and the biggest sum of a row. Since in this case both are equal to $21$, so must the eigenvalue.</p>
<p>In short: since the matrix has positive entries and all rows sum to $21$, the largest eigenvalue must be $21$ too.</p>
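<p>The row-sum observation is easy to check directly (a small sketch in plain Python, no libraries): the all-ones vector is an eigenvector precisely because every row sums to $21$, and $21$ is indeed a root of the cubic from the question:</p>

```python
A = [[1, 4, 16],
     [4, 16, 1],
     [16, 1, 4]]

# Every row sums to 21, so A * (1,1,1) = 21 * (1,1,1): 21 is an eigenvalue.
ones = [1, 1, 1]
Av = [sum(a * x for a, x in zip(row, ones)) for row in A]
print(Av)                                   # [21, 21, 21]

# 21 is a root of the characteristic polynomial from the question.
p = lambda t: t**3 - 21 * t**2 - 189 * t + 3969
print(p(21))                                # 0
```

<p>Perron-Frobenius then guarantees that no eigenvalue of a positive matrix exceeds the largest row sum, so $21$ is the largest.</p>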
| <p>The trick is that $\frac1{21}$ of your matrix is a <a href="https://en.wikipedia.org/wiki/Doubly_stochastic_matrix" rel="noreferrer">doubly stochastic matrix</a> with positive entries, hence the bound of 21 for the largest eigenvalue is a straightforward consequence of the <a href="https://en.wikipedia.org/wiki/Perron%E2%80%93Frobenius_theorem" rel="noreferrer">Perron-Frobenius theorem</a>.</p>
|
logic | <p><em>A more focused version of this question has now been <a href="https://mathoverflow.net/questions/359958/positive-set-theory-and-the-co-russell-set">asked at MO</a>.</em></p>
<p>Tl;dr version: are there "reasonable" theories which prove/disprove "the set of all sets containing themselves, contains itself"?</p>
<hr>
<p>Inspired by <a href="https://math.stackexchange.com/questions/2431073/significance-of-non-decidable-statement">this question</a>, I'd like to ask a question which has been vaguely on my mind for a while but which I've never looked into.</p>
<p>Working naively for a moment, let <span class="math-container">$S=\{x: x\in x\}$</span> be the "dual" to <a href="https://en.wikipedia.org/wiki/Russell%27s_paradox" rel="noreferrer">Russell's paradoxical set</a> <span class="math-container">$R$</span>. There does not seem to be an immediate argument showing that <span class="math-container">$S$</span> is or is not an element of itself, nicely paralleling the fact that there <em>are</em> of course arguments for <span class="math-container">$R$</span> <em>both</em> containing and not containing itself (that's exactly what the paradox is, of course).</p>
<p>However, it's a bit premature to leap to the conclusion that there actually <strong>are</strong> no such arguments. Namely, if we look at the Godel situation, we see something quite different: while the Godel sentence "I am unprovable (in <span class="math-container">$T$</span>)" is neither provable nor disprovable (in <span class="math-container">$T$</span>), the sentence "I am provable (in <span class="math-container">$T$</span>)" <a href="https://en.wikipedia.org/wiki/L%C3%B6b%27s_theorem" rel="noreferrer"><strong>is provable</strong> (in <span class="math-container">$T$</span>)</a> (as long as we express "is provable" <a href="https://math.stackexchange.com/questions/2135587/variant-of-g%C3%B6del-sentence">in a reasonable way</a>)! So a certain intuitive symmetry is broken. So this raises the possibility that the question</p>
<p><span class="math-container">$$\mbox{Does $S$ contain itself?}$$</span></p>
<p>could actually be answered, at least from "reasonable" axioms.</p>
<p>Now ZFC does answer it, albeit in a trivial way: in ZFC we have <span class="math-container">$S=\emptyset$</span>. So ideally we're looking for a set theory which allows sets containing themselves, so that <span class="math-container">$S$</span> is nontrivial. Also, to keep the parallel with Russell's paradox, a set theory more closely resembling naive comprehension is a reasonable thing to desire.</p>
<p>All of this suggests looking at some <a href="https://en.wikipedia.org/wiki/Positive_set_theory" rel="noreferrer">positive set theory</a> - which proves that <span class="math-container">$S$</span> exists, since "<span class="math-container">$x\in x$</span>" is a positive formula, but is not susceptible to Russell's paradox since "<span class="math-container">$x\not\in x$</span>" is not a positive formula - possibly augmented by some kind of <a href="https://en.wikipedia.org/wiki/Non-well-founded_set_theory" rel="noreferrer">antifoundation axiom</a>.</p>
<p>To be specific:</p>
<blockquote>
<p>Is there a "natural" positive set theory (e.g. <span class="math-container">$GPK_\infty^+$</span>), or extension of such by a "natural" antifoundation axiom (e.g. Boffa's), which decides whether <span class="math-container">$S\in S$</span>?</p>
</blockquote>
<p>In general, I'm interested in the status of "<span class="math-container">$S\in S$</span>" in positive set theories. I'm especially excited by those which prove <span class="math-container">$S\in S$</span>; note that these would have to prove the existence of sets containing themselves, since otherwise <span class="math-container">$S=\emptyset\not\in S$</span>.</p>
| <p>I found <a href="https://doi.org/10.1023/A:1025159016268" rel="nofollow noreferrer">a paper of Cantini's</a> that contains an argument that can be used to establish that <span class="math-container">$S \in S$</span> under fairly weak assumptions (on both the amount of comprehension and the underlying logic). Ultimately the proof is a fixed-point argument in the vein of Löb's theorem. This argument is strong enough to establish that <span class="math-container">$S \in S$</span> in <span class="math-container">$\mathsf{GPK}$</span>. While Cantini is concerned with a contractionless logic, I would like to avoid writing out sequent calculus in this answer, so I will state and prove a weaker result in classical first-order logic.</p>
<p>EDIT: I recently found out that adding an abstraction operator (i.e., set builder notation) is far less innocuous than I had realized. (This is discussed by Forti and Hinnion in the introduction of <a href="https://doi.org/10.2307/2274822" rel="nofollow noreferrer">this paper</a>. My understanding of the issue is that it allows you to code negation with equality.) I suspect that the old version of my answer was only vacuously correct in that the resulting theory is inconsistent, so I have fixed it, although I have specialized the argument to the particular case at hand. I've also cleaned up the argument a bit, mostly to make sure I actually understood it.</p>
<p>We need to assume that our theory <span class="math-container">$T$</span> has enough machinery for the following:</p>
<ul>
<li><span class="math-container">$T$</span> entails extensionality.</li>
<li>There is definable pairing function <span class="math-container">$(x,y) \mapsto \langle x,y\rangle$</span>.</li>
<li>For any <span class="math-container">$a$</span> and <span class="math-container">$f$</span>, there is a set <span class="math-container">$f[a]$</span> such that <span class="math-container">$x \in f[a]$</span> if and only if <span class="math-container">$\langle a,x\rangle \in f$</span>. Note that <span class="math-container">$(f,a) \mapsto f[a]$</span> is a definable function by extensionality.</li>
<li>There is a set <span class="math-container">$D$</span> such that every element of <span class="math-container">$D$</span> is an ordered pair <span class="math-container">$\langle x,y\rangle$</span> and <span class="math-container">$\langle x,y\rangle \in D$</span> if and only if either <span class="math-container">$y \in y$</span> or <span class="math-container">$y = x[x]$</span>.</li>
</ul>
<p>(It is easy to check that <span class="math-container">$\mathsf{GPK}$</span> satisfies all of these.)</p>
<p>Now let <span class="math-container">$I = D[D]$</span>. Unpacking, we have that <span class="math-container">$x \in I$</span> if and only if <span class="math-container">$\langle D,x\rangle \in D$</span> if and only if either <span class="math-container">$x \in x$</span> or <span class="math-container">$x = D[D] = I$</span>. Therefore <span class="math-container">$I$</span> contains precisely the elements of the co-Russell class <span class="math-container">$S$</span> and <span class="math-container">$I$</span> itself, but since <span class="math-container">$I \in I$</span>, <span class="math-container">$I \in S$</span> and so <span class="math-container">$I = S$</span>, whence <span class="math-container">$S \in S$</span>.</p>
<p>(Incidentally, a similar argument also resolves a question in <a href="https://mathoverflow.net/a/379632/83901">my earlier answer to your related question</a>. In particular, <span class="math-container">$\mathsf{GPK}$</span> does entail the existence of a Quine atom by the above argument if we just say that <span class="math-container">$\langle x,y\rangle \in D$</span> if and only if <span class="math-container">$y = x[x]$</span>.)</p>
<p>In light of this, I wonder whether there even is a 'reasonable' set theory in which <span class="math-container">$S$</span> is non-trivial and <span class="math-container">$S \notin S$</span> is consistent.</p>
| <p>This is not a direct answer to your specific question, but it might point toward a possible solution within the arena of <span class="math-container">$\mathsf{GPK}_\infty^+$</span> in which your question is decidable, and in the positive!</p>
<p>Around three months ago I asked Olivier Esser whether adding the following condition is consistent with <span class="math-container">$\mathsf{GPK}_\infty^+$</span>:</p>
<p><span class="math-container">$``$</span> if <span class="math-container">$\phi$</span> is <em>purely</em> positive without free variables other than <span class="math-container">$y,A$</span>, and without using the false formula, then: <span class="math-container">$$\exists A \forall y (y \in A \iff \phi)"$$</span> By this principle we can construct Quine atoms and similar sets, which are not constructible in <span class="math-container">$\mathsf{GPK}_\infty^+$</span> alone.</p>
<p>However, Olivier Esser finds it <em>unclear</em> whether such an addition is consistent or not. So this principle is itself debatable.</p>
<p>The idea is that everything depends on what "reasonable" means. If the above principle is considered more or less reasonable, and if it is found consistent, then an answer is there! However, we are not there yet!</p>
|
geometry | <blockquote>
<p><strong>Problem:</strong></p>
<p><em>A vertex of one square is pegged to the centre of an identical square, and the overlapping area is blue. One of the squares is then rotated about the vertex and the resulting overlap is red.</em></p>
<p><em>Which area is greater?</em></p>
</blockquote>
<p><img src="https://i.sstatic.net/jmjxu.png" alt="a diagram showing overlapped squares, one forming a smaller blue square and the other an irregular red quadrilateral" /></p>
<p>Let the area of each large square be exactly <span class="math-container">$1$</span> unit squared. Then, the area of the blue square is exactly <span class="math-container">$1/4$</span> units squared. The same would apply to the red area if you were to rotate the square <span class="math-container">$k\cdot 45$</span> degrees for a natural number <span class="math-container">$k$</span>.</p>
<p>Thus, I am assuming that no area is greater, and that it is a trick question <span class="math-container">$-$</span> although the red area might appear to be greater than the blue area, they are still the same: <span class="math-container">$1/4$</span>.</p>
<p>But how can it be <em>proven</em>?</p>
<p>I know the area of a triangle with a base <span class="math-container">$b$</span> and a height <span class="math-container">$h\perp b$</span> is <span class="math-container">$bh\div 2$</span>. Since the area of each square is exactly <span class="math-container">$1$</span> unit squared, then each side would also have a length of <span class="math-container">$1$</span>.</p>
<p>Therefore, the height of the red triangle area is <span class="math-container">$1/2$</span>, and so <span class="math-container">$$\text{Red Area} = \frac{b\left(\frac 12\right)}{2} = \frac{b}{4}.$$</span></p>
<p>According to the diagram, the square has not rotated a complete <span class="math-container">$45$</span> degrees, so <span class="math-container">$b < 1$</span>. It follows, then, that <span class="math-container">$$\begin{align} \text{Red Area} &< \frac 14 \\ \Leftrightarrow \text{Red Area} &< \text{Blue Area}.\end{align}$$</span></p>
<blockquote>
<p><strong>Assertion:</strong></p>
<p>To conclude, the <span class="math-container">$\color{blue}{\text{blue}}$</span> area is greater than the <span class="math-container">$\color{red}{\text{red}}$</span> area.</p>
</blockquote>
<p>Is this true? If so, is there another way of proving the assertion?</p>
<hr />
<p>Thanks to the users who commented below, I now see that I did not take into account the fact that the red area is <em>not</em> a triangle <span class="math-container">$-$</span> it does not have three sides! This now leads back to my original question on whether my hypothesis was correct.</p>
<p>This question is very similar to <a href="https://math.stackexchange.com/questions/2369403/which-area-is-larger-the-blue-area-or-the-white-area?rq=1">this post</a>.</p>
<hr />
<p><strong>Source:</strong></p>
<p><a href="https://www.youtube.com/watch?v=sj8Sg8qnjOg" rel="noreferrer">The Golden Ratio (why it is so irrational) <span class="math-container">$-$</span> Numberphile</a> from <span class="math-container">$14$</span>:<span class="math-container">$02$</span>.</p>
| <p><a href="https://i.sstatic.net/mXMhQ.jpg" rel="noreferrer"><img src="https://i.sstatic.net/mXMhQ.jpg" alt="some text"></a> </p>
<p>The four numbered areas are congruent.</p>
<hr>
<p>[Added later] The figure below is from a suggested edit by @TomZych, and it shows the congruent parts more clearly. Given all the upvotes to the (probably tongue-in-cheek) comment “This answer also deserves the tick for artistic reasons,” I’m leaving my original “artistic” figure but also adding Tom’s improved version to my answer.</p>
<p><a href="https://i.sstatic.net/RgrTs.png" rel="noreferrer"><img src="https://i.sstatic.net/RgrTs.png" alt="enter image description here"></a></p>
| <p>I think sketching the two identical triangles marked with green below makes this rather intuitive. This could also be turned into a formal proof quite easily.</p>
<p><a href="https://i.sstatic.net/zilQU.png" rel="noreferrer"><img src="https://i.sstatic.net/zilQU.png" alt="Identical triangles"></a></p>
|
number-theory | <p>I've recently been reading about the Millennium Prize problems, specifically the Riemann Hypothesis. I'm not near qualified to even fully grasp the problem, but seeing the hypothesis and the other problems I wonder: what practical use will a solution have?</p>
<p>Many researchers have spent a lot of time on it, trying to prove it, but why is it important to solve the problem?</p>
<p>I've tried relating the situation to problems in my field. For instance, solving the <span class="math-container">$P \ vs. NP$</span> problem has important implications if <span class="math-container">$P = NP$</span> is shown, and important implications if <span class="math-container">$P \neq NP$</span> is shown. For instance, there would be implications regarding the robustness or security of cryptographic protocols and algorithms. However, it's hard to say WHY the Riemann Hypothesis is important.</p>
<p>Given that the Poincaré Conjecture has been resolved, perhaps a hint about what to expect if and when the Riemann Hypothesis is resolved could be obtained by seeing what a proof of the Poincaré Conjecture has led to.</p>
| <p>Proving the Riemann Hypothesis will get you tenure, pretty much anywhere you want it. </p>
| <p>The Millennium problems are not necessarily problems whose solution will lead to curing cancer. These are problems in mathematics and were chosen for their importance in mathematics rather for their potential in applications.</p>
<p>There are plenty of important open problems in mathematics, and the Clay Institute had to narrow it down to seven. Whatever the reasons may be, it is clear that such a short list is incomplete and does not claim to be a comprehensive list of the most important problems to solve. However, each of the problems chosen is extremely central, important, interesting, and hard. Some of these problems have direct consequences, for instance the Riemann hypothesis. There are many (many many) theorems in number theory that go like "if the Riemann hypothesis is true, then blah blah", so knowing it is true will immediately validate the consequences in these theorems as true.</p>
<p>In contrast, a solution to some of the other Millennium problems is (highly likely) not going to lead to anything dramatic. For instance, the <span class="math-container">$P$</span> vs. <span class="math-container">$NP$</span> problem. I personally doubt it is probable that <span class="math-container">$P=NP$</span>. The reason it's an important question is not because we don't (philosophically) already know the answer, but rather that we don't have a bloody clue how to prove it. It means that there are fundamental issues in computability (which is a hell of an important subject these days) that we just don't understand. Solving <span class="math-container">$P \ne NP$</span> will be important not for the result but for the techniques that will be used. (Of course, in the unlikely event that <span class="math-container">$P=NP$</span>, enormous consequences will follow. But that is about as likely as it is that the Hitchhiker's Guide to the Galaxy is based on true events.)</p>
<p>The Poincaré conjecture is an extremely basic problem about three-dimensional space. I think three-dimensional space is very important, so if we can't answer a very fundamental question about it, then we don't understand it well. I'm not an expert on Perelman's solution, nor the field to which it belongs, so I can't tell what consequences his techniques have for better understanding three-dimensional space, but I'm sure there are.</p>
|
probability | <p>Of course, we've all heard the colloquialism "If a bunch of monkeys pound on a typewriter, eventually one of them will write Hamlet."</p>
<p>I have a (not very mathematically intelligent) friend who presented it as if it were a mathematical fact, which got me thinking... Is this really true? Of course, I've learned that dealing with infinity can be tricky, but my intuition says that time is countably infinite while the number of works the monkeys could produce is uncountably infinite. Therefore, it isn't necessarily given that the monkeys would write Hamlet.</p>
<p>Could someone who's better at this kind of math than me tell me if this is correct? Or is there more to it than I'm thinking?</p>
| <p>I found online the claim (which we may as well accept for this purpose) that there are $32241$ words in Hamlet. Figuring $5$ characters and one space per word, this is $193446$ characters. If the character set is $60$ including capitals and punctuation, a random string of $193446$ characters has a chance of $1$ in $60^{193446}$ (roughly $1$ in $10^{344000}$) of being Hamlet. While very small, this is greater than zero. So if you try enough times, and infinitely many times is certainly enough, you will almost surely produce Hamlet. But don't hold your breath. It doesn't even take an infinite number of monkeys or an infinite number of tries. A product of monkeys and tries totalling $10^{344001}$ makes it very likely. True, this is a very large number, but most numbers are larger.</p>
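<p>The arithmetic here is easy to sanity-check in log space (a quick sketch, using the rough figures from the answer):</p>

```python
import math

chars = 32241 * 6          # 193446 characters: 5 letters + 1 space per word
alphabet = 60

# log10 of the chance that one attempt is a perfect Hamlet
log10_p = -chars * math.log10(alphabet)
print(round(-log10_p))     # 343976, i.e. p is roughly 1 in 10^344000 as claimed

# With N independent attempts, P(at least one success) = 1 - (1-p)^N ≈ 1 - exp(-N*p).
log10_N = 344001
print(log10_N + log10_p > 0)   # True: N*p is astronomically large, so success is near-certain
```
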
| <p>Some references (I am mildly surprised that no one has done this yet). This is called the <a href="http://en.wikipedia.org/wiki/Infinite_monkey_theorem">infinite monkey theorem</a> in the literature. It follows from the second <a href="http://en.wikipedia.org/wiki/Borel%E2%80%93Cantelli_lemma">Borel-Cantelli lemma</a> and is related to <a href="http://en.wikipedia.org/wiki/Kolmogorov%27s_zero-one_law">Kolmogorov's zero-one law</a>, which is the result that provides the intuition behind general statements like this. (The zero-one law tells you that the probability of getting Hamlet is either zero or one, but doesn't tell you which. This is usually the hard part of applying the zero-one law.) Since others have addressed the practical side, I am telling you what the mathematical idealization looks like.</p>
<blockquote>
<p>my intuition says that time is countably infinite while the number of works the monkeys could produce is uncountably infinite.</p>
</blockquote>
<p>This is a good idea! Unfortunately, the number of finite strings from a finite alphabet is countable. This is a good exercise and worth working out yourself.</p>
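<p>(A sketch of the idea, for readers who want a head start on the exercise: enumerate the strings by length, and within each length in dictionary order. Every finite string then receives a finite index, which is exactly an enumeration by the natural numbers.)</p>

```python
from itertools import count, product

def all_strings(alphabet):
    """Yield every finite string over `alphabet`, shortest first."""
    for n in count(0):
        for letters in product(alphabet, repeat=n):
            yield ''.join(letters)

gen = all_strings('ab')
print([next(gen) for _ in range(7)])   # ['', 'a', 'b', 'aa', 'ab', 'ba', 'bb']
```

<p>Since every string of length <span class="math-container">$n$</span> appears after finitely many steps, the set of all finite strings over a finite alphabet is countable.</p>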
<p>Edit: also, regarding some ideas which have come up in the discussions on other answers, Jorge Luis Borges' short story <a href="http://en.wikipedia.org/wiki/The_Library_of_Babel">The Library of Babel</a> is an interesting read.</p>
|
linear-algebra | <p>I am taking a proof-based introductory course to Linear Algebra as an undergrad student of Mathematics and Computer Science. The author of my textbook (Friedberg's <em>Linear Algebra</em>, 4th Edition) says in the introduction to Chapter 4:</p>
<blockquote>
<p>The determinant, which has played a prominent role in the theory of linear algebra, is a special scalar-valued function defined on the set of square matrices. <strong>Although it still has a place in the study of linear algebra and its applications, its role is less central than in former times.</strong> </p>
</blockquote>
<p>He even sets up the chapter in such a way that you can skip going into detail and move on:</p>
<blockquote>
<p>For the reader who prefers to treat determinants lightly, Section 4.4 contains the essential properties that are needed in later chapters.</p>
</blockquote>
<p>Could anyone offer a didactic and simple explanation that refutes or asserts the author's statement?</p>
| <p>Friedberg is not wrong, at least on a historical standpoint, as I am going to try to show it.</p>
<p>Determinants were discovered "as such" in the second half of the 18th century by Cramer, who used them in his celebrated rule for the solution of a linear system (in terms of quotients of determinants). Their spread was rather rapid among mathematicians of the next two generations; they discovered properties of determinants that now, with our vision, we mostly express in terms of matrices.</p>
<p>Cauchy has given two important results about determinants, as explained in the very nice article by Hawkins referenced below:</p>
<ul>
<li><p>around 1815, Cauchy discovered the multiplication rule (rows times columns) of two determinants. This is typical of a result that has been completely revamped: nowadays, this rule is for the multiplication of matrices, and determinants' multiplication is restated as the homomorphism rule <span class="math-container">$\det(A \times B)= \det(A)\det(B)$</span>.</p>
</li>
<li><p>around 1825, he discovered eigenvalues "associated with a symmetric <em>determinant</em>" and established the important result that these eigenvalues are real; this discovery has its roots in astronomy, in connection with Sturm, explaining the word "secular values" he attached to them: see for example <a href="http://www2.cs.cas.cz/harrachov/slides/Golub.pdf" rel="noreferrer">this</a>.</p>
</li>
</ul>
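<p>Cauchy's 1815 multiplication rule is easy to illustrate in its modern matrix form (a toy sketch with <span class="math-container">$2\times 2$</span> integer matrices; the helper names are ours):</p>

```python
def det2(m):
    # determinant of a 2x2 matrix given as nested lists
    (a, b), (c, d) = m
    return a * d - b * c

def matmul2(A, B):
    # product of two 2x2 matrices
    return [[sum(A[i][k] * B[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

A = [[1, 2], [3, 5]]
B = [[2, -1], [4, 7]]
print(det2(matmul2(A, B)), det2(A) * det2(B))   # -18 -18
```

<p>In Cauchy's day this identity was stated for determinants alone; today we read it as <span class="math-container">$\det(AB)=\det(A)\det(B)$</span>.</p>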
<p>Matrices made a shy apparition in the mid-19th century (in England); "matrix" is a term coined by Sylvester (<a href="http://mathworld.wolfram.com/Matrix.html" rel="noreferrer">see here</a>). I strongly advise taking a look at his elegant style in his <a href="https://archive.org/stream/collectedmathema04sylvuoft#page/n7/mode/2up" rel="noreferrer">Collected Papers</a>.</p>
<p>Together with his friend Cayley, Sylvester can rightly be counted among the founding fathers of linear algebra, with determinants as a permanent reference. Here is a major quote from Sylvester:</p>
<p><em>"I have in previous papers defined a "Matrix" as a rectangular array of terms, out of which different systems of determinants may be engendered as from the womb of a common parent".</em></p>
<p>A lot of important polynomials are either generated or advantageously expressed as determinants:</p>
<ul>
<li><p>the characteristic polynomial (of a matrix) is expressed as the famous <span class="math-container">$\det(A-\lambda I)$</span>,</p>
</li>
<li><p>in particular, the theory of orthogonal polynomials mainly developed at the end of 19th century, can be expressed in great part with determinants,</p>
</li>
<li><p>the "resultant" of two polynomials, invented by Sylvester (giving a condition for these polynomials to have a common root), etc.</p>
</li>
</ul>
<p>Let us repeat it: for a mid-19th-century mathematician, a <em>square</em> array of numbers necessarily has a <strong>value</strong> (its determinant): it cannot have any other meaning. If it is a <em>rectangular</em> array, the numbers attached to it are the determinants of submatrices that can be "extracted" from the array.</p>
<p>The identification of "Linear Algebra" as an integral (and new) part of mathematics is mainly due to the German school (say from 1870 to the 1930s). I won't cite names; there are too many of them. One example among many of this German dominance: the half-German, half-English word "eigenvalue". The word "kernel" could equally have remained the German word "Kern", which appears around 1900 (see <a href="http://jeff560.tripod.com/mathword.html" rel="noreferrer">this site</a>).</p>
<p>The triumph of Linear Algebra is rather recent (mid-20th century), "triumph" meaning that Linear Algebra has now found a very central place. And determinants in all that? Maybe the biggest blade in this Swiss Army knife, but no more; another invariant (a term that would deserve a long paragraph by itself), the <strong>trace</strong>, would be another blade, and not the smallest.</p>
<p>In the 19th century, geometry was still at the heart of mathematical education; therefore, the connection between geometry and determinants was essential in the development of linear algebra. Some cornerstones:</p>
<ul>
<li>the development of projective geometry, <em>in its analytical form,</em> in the 1850s. This development led in particular to placing homographies at the heart of projective geometry, with their associated matrix expression. Besides, conic curves, described by a quadratic form, can just as well be written in the all-matrix form <span class="math-container">$X^TMX=0$</span>, where <span class="math-container">$M$</span> is a symmetric <span class="math-container">$3 \times 3$</span> matrix. This convergence toward a unique and new "algebra" took time to be recognized.</li>
</ul>
<p>A side remark: this kind of reflection was decisive in the Bourbaki team's decision to avoid all figures and to adopt the extreme view of reducing geometry to linear algebra (see the <a href="https://hsm.stackexchange.com/q/2578/3730">"Down with Euclid"</a> of J. Dieudonné in the sixties).</p>
<p>Different examples of the emergence of new trends:</p>
<p>a) the concept of <strong>rank</strong>: for example, a pair of coincident straight lines (a double line) is a conic section whose matrix has rank 1. The "rank" of a matrix used to be defined in an indirect way as the "size of the largest nonzero determinant that can be extracted from the matrix". Nowadays, the rank is defined in a more straightforward way as the dimension of the range space... at the cost of a little more abstraction.</p>
<p>b) the concept of <strong>linear transformations</strong> and <strong>duality</strong> arising from geometry: <span class="math-container">$X=(x,y,t)^T\rightarrow U=MX=(u,v,w)^T$</span> between points <span class="math-container">$(x,y)$</span> and straight lines with equations <span class="math-container">$ux+vy+w=0$</span>. More precisely, the tangential description, i.e., the constraint on the coefficients <span class="math-container">$U^T=(u,v,w)$</span> of the tangent lines to the conic curve, has been recognized as associated with <span class="math-container">$M^{-1}$</span> (assuming <span class="math-container">$\det(M) \neq 0$</span>!), due to the relationship</p>
<p><span class="math-container">$$X^TMX=X^TMM^{-1}MX=(MX)^T(M^{-1})(MX)=U^TM^{-1}U=0$$</span>
<span class="math-container">$$=\begin{pmatrix}u&v&w\end{pmatrix}\begin{pmatrix}A & B & D \\ B & C & E \\ D & E & F \end{pmatrix}\begin{pmatrix}u \\ v \\ w \end{pmatrix}=0$$</span></p>
<p>whereas, in 19th century, it was usual to write the previous quadratic form as :</p>
<p><span class="math-container">$$\det \begin{pmatrix}M^{-1}&U\\U^T&0\end{pmatrix}=\begin{vmatrix}a&b&d&u\\b&c&e&v\\d&e&f&w\\u&v&w&0\end{vmatrix}=0$$</span></p>
<p>as the determinant of a matrix obtained by "bordering" <span class="math-container">$M^{-1}$</span> precisely by <span class="math-container">$U$</span></p>
<p>(see the excellent lecture notes (<a href="http://www.maths.gla.ac.uk/wws/cabripages/conics/conics0.html" rel="noreferrer">http://www.maths.gla.ac.uk/wws/cabripages/conics/conics0.html</a>)). It is to be said that the idea of linear transformations, especially orthogonal transformations, arose even earlier in the framework of the theory of numbers (quadratic representations).</p>
<p>Remark: the way the former identities have been written uses matrix-algebra notations and rules that were unknown in the 19th century, with the notable exception of Grassmann's "Ausdehnungslehre", whose ideas were too far ahead of their time (1844) to have a real influence.</p>
<p>c) the concept of <strong>eigenvector/eigenvalue</strong>, initially motivated by the determination of "principal axes" of conics and quadrics.</p>
<ul>
<li>the very idea of a "geometric transformation" (more or less born with Klein circa 1870) associated with an array of numbers (when linear or projective). A matrix, of course, is much more than an array of numbers... But think, for example, of the persistence of the expression "table of direction cosines" (instead of "orthogonal matrix"), as can still be found in the 2002 edition of Analytical Mechanics by A.I. Lurie.</li>
</ul>
<p>d) The concept of the "companion matrix" of a polynomial <span class="math-container">$P$</span>, which could be considered a mere tool but is more fundamental than that (<a href="https://en.wikipedia.org/wiki/Companion_matrix" rel="noreferrer">https://en.wikipedia.org/wiki/Companion_matrix</a>). It can be presented and "justified" as a "nice determinant".
In fact, it has much more to say, with the natural interpretation, for example in the framework of <span class="math-container">$\mathbb{F}_p[X]/(P(X))$</span> (polynomials with coefficients in a finite field, reduced modulo <span class="math-container">$P$</span>), as the matrix of multiplication by <span class="math-container">$X$</span> (<a href="https://glassnotes.github.io/OliviaDiMatteo_FiniteFieldsPrimer.pdf" rel="noreferrer">https://glassnotes.github.io/OliviaDiMatteo_FiniteFieldsPrimer.pdf</a>), giving rise to matrix representations of such fields. Another remarkable application of companion matrices: the main numerical method for obtaining the roots of a polynomial is by computing the eigenvalues of its companion matrix using a Francis "QR" iteration (see (<a href="https://math.stackexchange.com/q/68433">https://math.stackexchange.com/q/68433</a>)).</p>
<p>References:</p>
<p>I recently discovered a rather similar question with a very complete answer by Denis Serre, a specialist in the domain of matrices:
<a href="https://mathoverflow.net/q/35988/88984">https://mathoverflow.net/q/35988/88984</a></p>
<p>The article by Thomas Hawkins : "Cauchy and the spectral theory of matrices", Historia Mathematica 2, 1975, 1-29.</p>
<p>See also (<a href="http://www.mathunion.org/ICM/ICM1974.2/Main/icm1974.2.0561.0570.ocr.pdf" rel="noreferrer">http://www.mathunion.org/ICM/ICM1974.2/Main/icm1974.2.0561.0570.ocr.pdf</a>)</p>
<p>An important bibliography is to be found in (<a href="http://www-groups.dcs.st-and.ac.uk/history/HistTopics/References/Matrices_and_determinants.html" rel="noreferrer">http://www-groups.dcs.st-and.ac.uk/history/HistTopics/References/Matrices_and_determinants.html</a>).</p>
<p>See also a good paper by Nicholas Higham : (<a href="http://eprints.ma.man.ac.uk/954/01/cay_syl_07.pdf" rel="noreferrer">http://eprints.ma.man.ac.uk/954/01/cay_syl_07.pdf</a>)</p>
<p>For conic sections and projective geometry, see a) this excellent chapter of lectures of the University of Vienna (see the other chapters as well) : (<a href="https://www-m10.ma.tum.de/foswiki/pub/Lehre/WS0809/GeometrieKalkueleWS0809/ch10.pdf" rel="noreferrer">https://www-m10.ma.tum.de/foswiki/pub/Lehre/WS0809/GeometrieKalkueleWS0809/ch10.pdf</a>). See as well : (maths.gla.ac.uk/wws/cabripages/conics/conics0.html).</p>
<p>Don't miss the following very interesting paper about various kinds of useful determinants : <a href="https://arxiv.org/pdf/math/9902004.pdf" rel="noreferrer">https://arxiv.org/pdf/math/9902004.pdf</a></p>
<p>See also <a href="https://ipfs.io/ipfs/QmXoypizjW3WknFiJnKLwHCnL72vedxjQkDDP1mXWo6uco/wiki/Matrix_(mathematics).html" rel="noreferrer">this</a></p>
<p>Very interesting precisions on determinants in <a href="https://arxiv.org/pdf/math/9902004.pdf" rel="noreferrer">this text</a> and in these <a href="https://math.stackexchange.com/q/194579">answers</a>.</p>
<p>A fundamental work, "The Theory of Determinants", has been written in 4 volumes by Thomas Muir: <a href="http://igm.univ-mlv.fr/%7Eal/Classiques/Muir/History_5/VOLUME5_TEXT.PDF" rel="noreferrer">http://igm.univ-mlv.fr/~al/Classiques/Muir/History_5/VOLUME5_TEXT.PDF</a> (years 1906, 1911, 1922, 1923) for the last volumes or, for all of them, <a href="https://ia800201.us.archive.org/17/items/theoryofdetermin01muiruoft/theoryofdetermin01muiruoft.pdf" rel="noreferrer">https://ia800201.us.archive.org/17/items/theoryofdetermin01muiruoft/theoryofdetermin01muiruoft.pdf</a>. It is very interesting to open it at random pages and see how important the determinant mania was, especially in the second half of the 19th century. Matrices appear in some places with the double-bar convention that lasted a very long time. Matrices are mentioned here and there, rarely to their advantage...</p>
<p>Many historical details about determinants and matrices can be found <a href="https://mathshistory.st-andrews.ac.uk/HistTopics/Matrices_and_determinants/" rel="noreferrer">here</a>.</p>
| <p><strong>It depends who you speak to.</strong></p>
<ul>
<li>In <strong>numerical mathematics</strong>, where people actually have to compute things on a computer, it is largely recognized that <strong>determinants are useless</strong>. Indeed, in order to compute determinants, either you use the Laplace recursive rule ("violence on minors"), which costs <span class="math-container">$O(n!)$</span> and is already infeasible for very small values of <span class="math-container">$n$</span>, or you go through a triangular decomposition (Gaussian elimination), which by itself already tells you everything you needed to know in the first place. Moreover, for most reasonably sized matrices containing floating-point numbers, determinants overflow or underflow (try <span class="math-container">$\det \frac{1}{10} I_{350\times 350}$</span>, for instance). To put another nail in the coffin, computing eigenvalues by finding the roots of <span class="math-container">$\det(A-xI)$</span> is hopelessly unstable. In short: in numerical computing, whatever you want to do with determinants, there is a better way to do it without using them.</li>
<li>In <strong>pure mathematics</strong>, where people are perfectly fine knowing that an explicit formula exists, all the examples are <span class="math-container">$3\times 3$</span> anyway and people make computations by hand, <strong>determinants are invaluable</strong>. If one uses Gaussian elimination instead, all those divisions complicate computations horribly: one needs to take different paths depending on whether things are zero or not, so when computing symbolically one gets lost in a myriad of cases. The great thing about determinants is that they give you an explicit polynomial formula to tell when a matrix is invertible or not: this is extremely useful in proofs, and allows for lots of elegant arguments. For instance, try proving this fact without determinants: given <span class="math-container">$A,B\in\mathbb{R}^{n\times n}$</span>, if <span class="math-container">$A+Bx$</span> is singular for <span class="math-container">$n+1$</span> distinct real values of <span class="math-container">$x$</span>, then it is singular for all values of <span class="math-container">$x$</span>. This is the kind of thing you need in proofs, and determinants are a priceless tool. Who cares if the explicit formula has an exponential number of terms: they have a very nice structure, with lots of neat combinatorial interpretations.</li>
</ul>
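<p>A tiny pure-Python sketch of the floating-point point made above (no linear-algebra library assumed): the determinant of <span class="math-container">$cI_n$</span> is <span class="math-container">$c^n$</span>, which silently leaves the range of IEEE doubles already for modest <span class="math-container">$n$</span>, while the log-determinant remains perfectly representable.</p>

```python
import math

def det_scaled_identity(c, n):
    """Determinant of c * I_n, computed naively as a product of n diagonal entries."""
    d = 1.0
    for _ in range(n):
        d *= c
    return d

# det((1/10) * I_350) = 10^(-350) underflows to exactly 0.0 in double precision.
tiny = det_scaled_identity(0.1, 350)

# det(10 * I_350) = 10^350 overflows to infinity.
huge = det_scaled_identity(10.0, 350)

# The log-determinant (here, a sum of logs of the diagonal entries) is tame.
log_det = 350 * math.log(0.1)
```

<p>This is one reason numerical codes prefer working with a log-determinant (a sign plus the log of the absolute value) obtained from a triangular factorization rather than with the determinant itself.</p>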
|
combinatorics | <p>A particular lock at my university has a keypad with the digits 1-5 on it. When pressed in the correct permutation of those five digits, the lock will open.</p>
<p>Obviously, since there are only 120 permutations, we can bruteforce the lock in 600 presses. But we can do better! The sequence <code>1234512</code> actually tests three distinct sequences with only seven presses - <code>12345</code>, <code>23451</code>, and <code>34512</code>.</p>
<p>What's the fastest strategy to bruteforce this lock?</p>
<p>(and for bonus points, other locks with more numbers, longer passcodes, etc.)</p>
<p>(I've taken a few stabs at this problem with no progress to speak of - particularly, De Bruijn sequences are not directly relevant here)</p>
| <p>We want to find a word over the alphabet $\{1,\ldots,n\}$ that is as short as possible and contains all $n!$ permutations of the alphabet as infixes.
Let $\ell_n$ denote the shortest achievable length.
Clearly, $$\tag1\ell_n\ge n!+(n-1).$$
Of course, $(1)$ is somewhat optimistic because it requires $(n-1)$-overlaps between all consecutive permutations, but that is only possible for cyclically equivalent permutations. As there are $(n-1)!$ equivalence classes under cyclic equivalence, and switching between these requires us to "waste" a symbol, we find that
$$\tag2\ell_n\ge n!+(n-1)+(n-1)!-1=n!+(n-1)!+(n-2).$$
In particular, inequality $(2)$ is sharp for the first few cases $\ell_1=1$, $\ell_2=3$ (from "121"), $\ell_3=9$ (from "312313213").
However, it seems that $\ell_4=33$ (from "314231432143124313421341234132413").</p>
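<p>The small examples above are easy to check mechanically. The following sketch (plain Python, using only the strings quoted above) verifies that each word contains every permutation of its alphabet as an infix, and compares lengths against the lower bound $(2)$:</p>

```python
from itertools import permutations
from math import factorial

def is_superperm(word, n):
    """True iff every permutation of 1..n occurs in word as a contiguous substring."""
    digits = "123456789"[:n]
    return all("".join(p) in word for p in permutations(digits))

def lower_bound(n):
    """The lower bound (2): n! + (n-1)! + (n-2)."""
    return factorial(n) + factorial(n - 1) + (n - 2)

examples = {2: "121", 3: "312313213",
            4: "314231432143124313421341234132413"}

# (valid superpermutation?, length) for each example word.
results = {n: (is_superperm(w, n), len(w)) for n, w in examples.items()}
```

<p>The first two words meet $(2)$ with equality, while the third exceeds $n!+(n-1)!+(n-2)=32$ by one.</p>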
<p>Feeding these few values into the OEIS search engine reveals <a href="http://oeis.org/A180632" rel="nofollow noreferrer">http://oeis.org/A180632</a> and we see that the exact values seem to be known only up to $\ell_5=153$ (which is your original problem)!
Quoting the known bounds from there, we have
$$ \ell_n\ge n! + (n-1)! + (n-2)! + n-3$$
(which can supposedly be shown similarly to how we found $(2)$ above) and
$$\ell_n\le \sum_{k=1}^nk!.$$
These bounds tell us that $152\le \ell_5\le 153$, but it has been shown in 2014 that in fact $\ell_5=153$.
The next case still seems to be open: the inequalities only tell us $867\le \ell_6\le 873$, but another result of 2014 shows that $\ell_6\le 872$.</p>
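<p>Evaluating the two quoted bounds directly reproduces the bracketings just mentioned:</p>

```python
from math import factorial

def lower(n):
    """Lower bound: n! + (n-1)! + (n-2)! + n - 3."""
    return factorial(n) + factorial(n - 1) + factorial(n - 2) + n - 3

def upper(n):
    """Upper bound: sum of k! for k = 1..n."""
    return sum(factorial(k) for k in range(1, n + 1))
```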
<p>Hagen von Eitzen’s answer reports the state of the art. Since the given link contains no example of the eight distinct solutions of length $\ell_5$, here is one. I computed it with a simple Python program. It is also found in the <a href="https://arxiv.org/pdf/1408.5108.pdf" rel="nofollow noreferrer">paper of Aug. 22, 2014 [arXiv]</a> where Robin Houston exhibits his candidate for a minimal superpermutation of 6 symbols.</p>
<pre><code>1234512341 5234125341 2354123145 2314253142 3514231542
3124531243 5124315243 1254312134 5213425134 2153421354
2132451324 1532413524 1325413214 5321435214 3251432154
321
</code></pre>
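<p>The word can be checked mechanically, exactly as for the smaller cases: strip the whitespace and verify that all $120$ permutations of $12345$ occur among its length-5 windows. A short sketch (the string below is the one displayed above):</p>

```python
from itertools import permutations

s = ("1234512341" "5234125341" "2354123145" "2314253142" "3514231542"
     "3124531243" "5124315243" "1254312134" "5213425134" "2153421354"
     "2132451324" "1532413524" "1325413214" "5321435214" "3251432154"
     "321")

perms = {"".join(p) for p in permutations("12345")}   # all 120 permutations
windows = {s[i:i + 5] for i in range(len(s) - 4)}     # all length-5 substrings
missing = perms - windows                             # empty for a superpermutation
```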
<p><strong>Edited:</strong> Prompted by the OP, I decided to post my Python 2 code. It’s a quick-and-dirty script and not an example of well-written code. It can be improved and optimized in a zillion ways, but it works. I wouldn’t post it to a programming site, but here we are wearing a mathematician’s hat...</p>
<p>The basic idea is to grow our superpermutation step by step. We start from <code>12345</code> and at each step we identify the longest tail which could provide a permutation still unused. For example, at the second step we identify <code>2345</code>, upon which we find <code>23451</code> as the next permutation, and we add <code>1</code> to our result, so that now we have <code>123451</code>. Continuing in this way we find a first whole orbit of cyclic permutations at <code>123451234</code>. After that, we have to add two characters, since the next permutation can only be <code>23415</code>, and our growing result becomes <code>12345123415</code>. We will have junctures where we have so few possible overlapping characters that we can choose between different new permutations (that’s why there are eight distinct optimal solutions). In this case I order the possible candidates lexicographically and choose the first. When there are no more available permutations, the program exits. This yields optimal results if the alphabet is up to five characters long. With six characters, my program yields a superpermutation of 873 characters, which was conjectured optimal before Houston’s paper. My program works with alphabets of arbitrary length (if given enough time to run ;-) ) but, as Houston proved, its result won’t be optimal. Houston arrived at his shorter superpermutation using a heuristic solver of operations research problems.</p>
<pre><code>#!/usr/bin/python
# -*- coding: utf-8 -*-
## Change this if you want a step-by-step output
verbose = False
## The alphabet. Also, the first permutation dealt with,
## which defines lexicographic ordering for subsequent choices.
A = '12345'
N = len(A)
## The result string is initialized with the alphabet itself.
R = A
## A dictionary to keep track of the found permutations.
Done = { A: True }
# The basic iteration step.
def process():
global R, Done
# t represents how many characters we need to add.
# First we try with one, then we increase...
for t in range(1,N):
# h is the initial stub (the “head”) of our new permutation.
h=R[t-N:]
while len(h) < N:
# A crude way to find the characters still missing from
# the new permutation we are building.
for c in [ x for x in A if x not in h ]:
h += c
break
if h not in Done:
# Found new permutation, update globals and return it.
Done[h] = True
R += h[-t:]
return h
# All possible permutations found!
return ''
p = A
while p != '':
if verbose:
print 'found', p
p = process()
print R
print len(R), 'bytes for', len(Done.keys()), 'permutations of', N, 'objects'
</code></pre>
|
combinatorics | <p>There are $n$ boys and $n$ girls. Each of them is given a hat of only 4 possible (known) colors and doesn't know its color. Now each can only see all the colors of hats of those of the other gender and no contact is allowed, then each is asked to guess the color of their own hat <strong>at the same time</strong>. Determine whether such $n$ exists that there is a strategy that at least one child can guess right under any circumstances.</p>
<p>When there are only 3 possible colors, the problem has been solved ($n=2$ is OK) through simple algebra. But when it comes to 4 colors, the problem seems much harder. Please help.</p>
<p>Solution for 3 colors ($n=2$): we use 0, 1, 2 to represent the colors; the boys are $a, b$ and the girls are $c, d$, respectively. Each boy knows the value of $c, d$, while each girl knows the value of $a, b$.
Now consider the four numbers: $$a+b+c,$$ $$d+a-b,$$ $$d+b-c,$$ $$d+c-a.$$ It's easy to show at least one of them is divisible by 3. As a result, the strategy is: $a, b, c, d$ guess $c+d$, $c-d$, $-a-b$, $b-a \pmod 3$ respectively, and one of them must be right.</p>
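<p>The $n=2$ strategy above is small enough to verify exhaustively: over all $3^4=81$ colorings, at least one of the four guesses is correct. A quick brute-force check (labels as in the solution):</p>

```python
from itertools import product

def someone_right(a, b, c, d):
    """Guesses mod 3: a says c+d, b says c-d, c says -a-b, d says b-a."""
    guesses = [((c + d) % 3, a), ((c - d) % 3, b),
               ((-a - b) % 3, c), ((b - a) % 3, d)]
    return any(guess == actual for guess, actual in guesses)

# Check every assignment of 3 colors to the 4 children.
all_ok = all(someone_right(*colors) for colors in product(range(3), repeat=4))
```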
| <p>There is always such a strategy, no matter the number of colors, when $n$ is sufficiently large (depending on the number of colors). See the bottom of this answer for an explicit value of $n$ that works. For $4$ colors, it gives $n = 4^{144}$.</p>
<p>Let $k$ be the number of colors. The idea is that a child can, beforehand, make $k$ statements about the other group's colors, in such a way that in every scenario exactly one statement is true, and map each statement to a color. A simple example is when Tom makes the $k$ obvious statements about Jade's color and maps them to, well, those colors. When the experiment starts, the girls may then assume that the statement corresponding to Tom's color is false. (Indeed, if Tom guesses right, nothing matters anymore.) In the example, they may assume Jade's color is not the same as Tom's. We will use more interesting statements.</p>
<p>By the pigeonhole principle, among a group of $1+(b-1)k$ girls, there is a group of $b$ which has the same color. Fix $b$ for the moment and suppose $n \geq 1+(b-1)k$. (The choice of $b$ will depend on $k$ only.) Fix such a group $G$ of $1+(b-1)k$ girls. The children agree about an enumeration $G_1, \ldots, G_{\binom{1+(b-1)k}{b}}$ of the subsets of size $b$ of $G$.</p>
<p>The key is that, given a group of $k$ girls that may assume they have the same color, they can each guess a different color, so that at least one of them will guess right. The problem is that the girls can never know which group of $k$ girls has the same color, but the boys can limit down the possibilities:</p>
<p><strong>Lemma.</strong> Given a piece of information about the girls' colors, which takes values in a set of size $N$, the boys can limit down the number of values it can take to $k-1$, provided there are at least $\binom N{k-1}$ boys.</p>
<p>Formally, if $C$ is the set of colors, and given a function $f : C^n \to I = \{1, \ldots, N\}$ known to the boys and girls, there exists a strategy which takes $\binom N{k-1}$ boys and makes that, if all those boys guess wrong, then there is a subset $J \subseteq I$ of size $k-1$ such that $f(x) \in J$. Where $x \in C^n$ denotes the vector of the girls' colors.</p>
<p><em>Proof.</em> In the case $N = k-1$ (or smaller), this is clear: just let one boy map each element of $I$ to a different color. The point is that this is possible for very large $N$. When Tom chooses $k-1$ indices $i_j \in I$, maps each statement "$f(x) = i_j$" to a color, and the statement "$f(x)$ is none of the $i_j$" to the $k$th color, then the girls may either assume that $f(x) \neq i_j$ for some $i_j$, or that $f(x)$ is one of the $i_j$, depending on Tom's color. In the latter case, we are happy: we've narrowed down the possibilities of $f(x)$ to $k-1$ numbers. We need a strategy for the former case, where the girls can only exclude one of the $k-1$ indices Tom chose, and we have no control over which it will be. It suffices to find subsets $U_j$ of size $k-1$ of $I$ such that, for each choice of elements $u_j \in U_j$, the complement $I - \bigcup_j\{u_j\}$ has cardinality at most $k-1$. This is of course possible: simply take all subsets of $I$ of size $k-1$. Thus, provided that there are at least $\binom{|I|}{k-1}$ boys, the girls can assume that there is a subset $J \subseteq I$ of size $a \leq k-1$, such that $f(x) \in J$. W.l.o.g. we may assume $a = k-1$; the girls can always make $J$ larger in a way they agreed on beforehand. $\square$</p>
<p>Let $I = \{1, \ldots, \binom{1+(b-1)k}{b}\}$. The boys can now make statements about which group $G_i$ has the same color (where the group with smallest index is chosen if there are multiple such groups). Call that group (with smallest index) $H$. This is our $f(x)$. Thus, provided that there are at least $\binom{|I|}{k-1}$ boys, the girls can assume that there is a subset $J \subseteq I$ of size $k-1$, such that $H = G_j$ for some $j \in J$. This $J$ is known to all the girls, by looking at the boys' colors.</p>
<p>Now that we've limited the possibilities for $H$ to $\{G_j : j \in J \}$, we would like to apply our key idea to each of these groups: in each group, let the girls say all possible colors. The problem is that those groups are not necessarily disjoint. But the children know how to deal with that:</p>
<p><strong>Lemma.</strong> Given integers $a,k \geq 1$ and $b \geq k+(a-1)(k-1)$, then given $a$ sets $G_1, \ldots, G_a$ of size $b$, there exist $m \leq a$ and a finite number of disjoint sets $T_1, \ldots, T_m$ of size $k$ such that each $G_j$ contains some $T_i$.</p>
<p><em>Proof.</em> By induction on $a$. For $a=1$ this is trivial. Let $a>1$ and suppose each intersection $G_a \cap G_i$ is at most of size $k-1$. Then we can select $k$ elements of $G_a$ that are not in any other $G_i$, take these to form a $T_j$, and proceed by induction. If some $|G_a \cap G_i| \geq k$, choose $k$ elements in their intersection and let them form a $T_j$. This $T_j$ works for any $G_l$ that contains those $k$ elements. Remove these elements from all $G_l$ and proceed by induction. $\square$</p>
<p>The lemma implies, with $a = k-1$, that there exist disjoint groups $T_j$ of $k$ girls, such that the girls may assume each $G_j$ contains a $T_j$. (In practice, the girls must agree about such a choice of $T_j$'s for every possible $J \subseteq I$ of size $k-1$.) In particular, there exist disjoint groups of $k$ girls, at least one of which is contained in $H$. That is, at least one of which consists of girls with the same color. In each group $T_j$, let the girls guess all different colors. Then at least one girl guesses correctly (unless one of the boys guesses correctly).</p>
<hr>
<p>We conclude that $$n = \max \left(\binom{|I|}{k-1} , 1+(b-1)k \right) $$
suffices, where $|I| = \binom{1+(b-1)k}{b}$ and $b = k+(a-1)(k-1)$ and $a = k-1$. Using the bound $\binom xy \leq x^y$ and estimating $b \leq k^2$ and $1+(b-1)k \leq k^3$ we get that
$$n \geq \left( (k^3)^{k^2}\right)^{k-1} = k^{3k^2(k-1)}$$
suffices; for $k=4$, this gives the value $n = 4^{144}$ announced at the start.</p>
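<p>As a side computation (not part of the proof), the recipe can be evaluated exactly for $k=4$: $a=3$, $b=k+(a-1)(k-1)=10$, the fixed group has $1+(b-1)k=37$ girls, $|I|=\binom{37}{10}$, and $n=\max\left(\binom{|I|}{3},37\right)$, which is far below the crude estimate $4^{144}$:</p>

```python
from math import comb

k = 4                           # number of colors
a = k - 1                       # number of candidate groups the girls must cover
b = k + (a - 1) * (k - 1)       # size of the monochromatic group: 10
group = 1 + (b - 1) * k         # size of the fixed group G of girls: 37
I = comb(group, b)              # number of size-b subsets of G
n = max(comb(I, k - 1), group)  # a sufficient number of boys and of girls
```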
| <p>Here is what I've tried so far to find a possible solution. First I tested your solution for 3 colors against all 81 hat combinations to make sure it works since it's still pretty mind boggling to me. I also went back and looked at n=1 with 2 colors, where one person simply says the color they see, and the other person says the opposite of the color they see. I tried to find a pattern that might apply for 4 colors. </p>
<p>In both examples the players have positionally unique strategies that are sensitive to the order of the inputs, & every group input has a unique group output.</p>
<p>With n=1, we have 1 variable <strong><em>a</em></strong>, which can be expressed as
<strong><em>a</em></strong> and <strong><em>-a</em></strong> to describe each player's strategy. </p>
<p>With n= 2, we have 2 variables <strong><em>a</em></strong> and <strong><em>b</em></strong>, which can be expressed together as follows to form the strategies you posted:</p>
<blockquote>
<p><strong><em>a+b</em></strong>, <strong><em>a-b</em></strong>, <strong><em>-(a+b)</em></strong>, and <strong>-<em>(a-b)</em></strong> -- replacing a & b in the first two expressions with c & d </p>
</blockquote>
<p>With 3 variables we can write the following 8 possible strategies:</p>
<blockquote>
<p><strong><em>a+b+c</em></strong>, <strong><em>a+b-c</em></strong>, <strong><em>a-b+c</em></strong>, <strong><em>a-b-c</em></strong>, and their <strong>4 opposites</strong> -- making the whole expression negative and replacing a, b, & c with d, e, & f. </p>
</blockquote>
<p>I ran into trouble when deciding how many players to have. With 6 players (a through f, resulting in 4096 hat combinations), we have to choose which 2 of the 8 available expressions to ignore. I haven't found a combination of 6 of them that produces a unique output (after %4) for every input. With 8 players, we can use all 8 expressions, but each expression only refers to three other players, which is not all the information each player has available.</p>
<p>Hopefully this idea could go somewhere, but I've personally hit a block.</p>
|
probability | <p>It seems that there are two ideas of expectation, variance, etc. going on in our world.</p>
<p><strong>In any probability textbook:</strong></p>
<p>I have a random variable <span class="math-container">$X$</span>, which is a <em>function</em> from the sample space to the real line. Ok, now I define the expectation operator, which is a function that maps this random variable to a real number, and this function looks like,
<span class="math-container">$$\mathbb{E}[X] = \sum\limits_{i = 1}^n x_i p(x_i)$$</span> where <span class="math-container">$p$</span> is the probability mass function, <span class="math-container">$p: \text{range}(X) \to [0,1], \sum_{i = 1}^n p(x_i) = 1$</span> and <span class="math-container">$x_i \in \text{range}(X)$</span>. The variance is,
<span class="math-container">$$\mathbb{E}[(X - \mathbb{E}[X])^2]$$</span></p>
<p>The definition is similar for a continuous RV.</p>
<hr />
<p><strong>However, in statistics, data science, finance, bioinformatics (and I guess everyday language when talking to your mother)</strong></p>
<p>I have a multi-set of data <span class="math-container">$D = \{x_i\}_{i = 1}^n$</span> (weight of onions, height of school children). The mean of this dataset is</p>
<p><span class="math-container">$$\dfrac{1}{n}\sum\limits_{i= 1}^n x_i$$</span></p>
<p>The variance of this dataset (according to "<a href="https://www.sciencebuddies.org/science-fair-projects/science-fair/variance-and-standard-deviation" rel="noreferrer">science buddy</a>" and "<a href="https://www.mathsisfun.com/data/standard-deviation.html" rel="noreferrer">mathisfun dot com</a>" and <a href="https://www150.statcan.gc.ca/n1/edu/power-pouvoir/ch12/5214891-eng.htm" rel="noreferrer">government of Canada</a>) is,</p>
<p><span class="math-container">$$\dfrac{1}{n}\sum\limits_{i= 1}^n(x_i - \sum\limits_{j= 1}^n \dfrac{1}{n} x_j)^2$$</span></p>
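<p>The two sets of formulas do meet if the dataset is read as a random variable that takes the value <span class="math-container">$x_i$</span> with probability equal to its relative frequency in <span class="math-container">$D$</span> (the empirical distribution). A quick check of that equivalence, with made-up numbers:</p>

```python
from collections import Counter

data = [2.0, 3.0, 3.0, 5.0, 7.0]   # hypothetical dataset D
n = len(data)

# Statistics-style formulas: plain averages over the data points.
mean_data = sum(data) / n
var_data = sum((x - mean_data) ** 2 for x in data) / n

# Probability-style formulas: expectations against the empirical pmf p(x) = count / n.
pmf = {x: count / n for x, count in Counter(data).items()}
mean_rv = sum(x * p for x, p in pmf.items())
var_rv = sum((x - mean_rv) ** 2 * p for x, p in pmf.items())
```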
<hr />
<p>I mean, I can already see what's going on here (one is assuming a uniform distribution); however, I want an authoritative explanation of the following:</p>
<ol>
<li><p>Is the distinction real? Meaning, is there a universe where expectation/mean/variance... are defined for functions/random variables and another universe where expectation/mean/variance... are defined for raw data? Or are they essentially the same thing (with hidden/implicit assumption)</p>
</li>
<li><p>Why is it that no probabilistic assumption is made when talking about the mean or variance when it comes to dealing with data in statistics or data science (or other areas of real life)?</p>
</li>
<li><p>Is there some consistent language for distinguishing these two seemingly different mean and variance terminologies? For example, if my cashier asks me about the "mean weight" of two items, do I ask him/her for the probability distribution of the random variable whose realizations are the weights of these two items (<strong>def 1</strong>), or do I just add up the values and divide (<strong>def 2</strong>)? How do I know which mean the person is talking about?</p>
</li>
</ol>
| <p>You ask a very insightful question that I wish were emphasized more often.</p>
<p><strong>EDIT</strong>: It appears you are seeking reputable sources to justify the above. Sources and relevant quotes have been provided.</p>
<p>Here's how I would explain this:</p>
<ul>
<li>In probability, the emphasis is on population models. You have assumptions that are built-in for random variables, and can do things like saying that "in this population following such distribution, the probability of this value is given by the probability mass function."</li>
<li>In statistics, the emphasis is on sampling models. With most real-world data, you do <strong>not</strong> have access to the data-generating process governed by the population model. Probability provides tools to make guesses on what the data-generating process might be. But there is always some uncertainty behind it. We therefore attempt to estimate characteristics about the population given data.</li>
</ul>
<p>From Wackerly et al.'s <em>Mathematical Statistics with Applications</em>, 7th edition, chapter 1.6:</p>
<blockquote>
<p>The objective of statistics is to make an inference about a population based on information contained in a sample taken from that population...</p>
<p>A necessary prelude to making inferences about a population is the ability to describe a set of numbers...</p>
<p>The mechanism for making inferences is provided by the theory of probability. The probabilist reasons from a known population to the outcome of a single experiment, the sample. In contrast, the statistician utilizes the theory of probability to calculate the probability of an observed sample and to infer this from the characteristics of an unknown population. Thus, probability is the foundation of the theory of statistics.</p>
</blockquote>
<p>From Shao's <em>Mathematical Statistics</em>, 2nd edition, section 2.1.1:</p>
<blockquote>
<p>In statistical inference... the data set is viewed as a realization or observation of a random element defined on a probability space <span class="math-container">$(\Omega, \mathcal{F}, P)$</span> related to the random experiment. The probability measure <span class="math-container">$P$</span> is called the population. The data set or random element that produces the data is called a sample from <span class="math-container">$P$</span>... In a statistical problem, the population <span class="math-container">$P$</span> is at least partially unknown and we would like to deduce some properties of <span class="math-container">$P$</span> based on the available sample.</p>
</blockquote>
<p>So, the probability formulas of the mean and variance <strong>assume you have sufficient information about the population to calculate them</strong>.</p>
<p>The statistics formulas for the mean and variance are <strong>attempts to estimate the population mean and variance, given a sample of data</strong>. You could estimate the mean and variance in any number of ways, but the formulas you've provided are some standard ways of estimating the population mean and variance.</p>
<p>Now, one logical question is: <em>why</em> do we choose those formulas to estimate the population mean and variance?</p>
<p>For the mean formula you have there, one can observe that if you assume that your <span class="math-container">$n$</span> observations can be represented as observed values of independent and identically distributed random variables <span class="math-container">$X_1, \dots, X_n$</span> with mean <span class="math-container">$\mu$</span>,
<span class="math-container">$$\mathbb{E}\left[\dfrac{1}{n}\sum_{i=1}^{n}X_i \right] = \mu$$</span>
which is the population mean. We say then that <span class="math-container">$\dfrac{1}{n}\sum_{i=1}^{n}X_i$</span> is an "unbiased estimator" of the population mean.</p>
<p>From Wackerly et al.'s <em>Mathematical Statistics with Applications</em>, 7th edition, chapter 7.1:</p>
<blockquote>
<p>For example, suppose we want to estimate a population mean <span class="math-container">$\mu$</span>. If we obtain a random sample of <span class="math-container">$n$</span> observations <span class="math-container">$y_1, y_2, \dots, y_n$</span>, it seems reasonable to estimate <span class="math-container">$\mu$</span> with the sample mean <span class="math-container">$$\bar{y} = \dfrac{1}{n}\sum_{i=1}^{n}y_i$$</span></p>
<p>The goodness of this estimate depends on the behavior of the random variables <span class="math-container">$Y_1, Y_2, \dots, Y_n$</span> and the effect this has on <span class="math-container">$\bar{Y} = (1/n)\sum_{i=1}^{n}Y_i$</span>.</p>
</blockquote>
<p><strong>Note</strong>. In statistics, it is customary to use lowercase <span class="math-container">$x_i$</span> to represent observed values of random variables; we then call <span class="math-container">$\frac{1}{n}\sum_{i=1}^{n}x_i$</span> an "estimate" of the population mean (notice the difference between "estimator" and "estimate").</p>
<p>For the variance estimator, it is customary to use <span class="math-container">$n-1$</span> in the denominator, because if we assume the random variables have finite variance <span class="math-container">$\sigma^2$</span>, it can be shown that
<span class="math-container">$$\mathbb{E}\left[\dfrac{1}{n-1}\sum_{i=1}^{n}\left(X_i - \dfrac{1}{n}\sum_{j=1}^{n}X_j \right)^2 \right] = \sigma^2\text{.}$$</span>
Thus <span class="math-container">$\dfrac{1}{n-1}\sum_{i=1}^{n}\left(X_i - \dfrac{1}{n}\sum_{j=1}^{n}X_j \right)^2$</span> is an unbiased estimator of <span class="math-container">$\sigma^2$</span>, the population variance.</p>
<p>It is also worth noting that the formula you have there has expected value
<span class="math-container">$$\dfrac{n-1}{n}\sigma^2$$</span>
and <span class="math-container">$$\dfrac{n-1}{n} < 1$$</span>
so on average, it will tend to underestimate the population variance.</p>
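To illustrate this bias by simulation, here is a minimal sketch (the normal population and all parameter choices are mine, not from the text): many samples of size <span class="math-container">$n=5$</span> are drawn from a population with <span class="math-container">$\sigma^2=4$</span>, and each variance estimator is averaged over the samples. The <span class="math-container">$n-1$</span> version averages to roughly <span class="math-container">$\sigma^2$</span>, while the <span class="math-container">$n$</span>-denominator version averages to roughly <span class="math-container">$\frac{n-1}{n}\sigma^2$</span>.

```python
import random

def sample_mean(xs):
    return sum(xs) / len(xs)

def sample_var(xs, ddof=1):
    # ddof=1 gives the unbiased (n-1) estimator; ddof=0 the n-denominator one
    m = sample_mean(xs)
    return sum((x - m) ** 2 for x in xs) / (len(xs) - ddof)

# Monte Carlo check of unbiasedness: draw many samples of size n from a
# normal population with known mu = 0 and sigma^2 = 4, then average each
# estimator across the samples.
random.seed(0)
mu, sigma, n, trials = 0.0, 2.0, 5, 20000
avg_unbiased = avg_biased = 0.0
for _ in range(trials):
    xs = [random.gauss(mu, sigma) for _ in range(n)]
    avg_unbiased += sample_var(xs, ddof=1)
    avg_biased += sample_var(xs, ddof=0)
avg_unbiased /= trials
avg_biased /= trials
print(avg_unbiased)  # close to sigma^2 = 4
print(avg_biased)    # close to (n-1)/n * sigma^2 = 3.2
```

Note that for any fixed sample the two estimates differ exactly by the factor <span class="math-container">$\frac{n-1}{n}$</span>, which is why the second average sits systematically below the first.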
<p>From Wackerly et al.'s <em>Mathematical Statistics with Applications</em>, 7th edition, chapter 7.2:</p>
<blockquote>
<p>For example, suppose that we wish to make an inference about the population variance <span class="math-container">$\sigma^2$</span> based on a random sample <span class="math-container">$Y_1, Y_2, \dots, Y_n$</span> from a normal population... a good estimator of <span class="math-container">$\sigma^2$</span> is the sample variance
<span class="math-container">$$S^2 = \dfrac{1}{n-1}\sum_{i=1}^{n}(Y_i - \bar{Y})^2\text{.}$$</span></p>
</blockquote>
<p>The estimators for the mean and variance above are examples of point estimators. From Casella and Berger's <em>Statistical Inference</em>, Chapter 7.1:</p>
<blockquote>
<p>The rationale behind point estimation is quite simple. When sampling is from a population described by a pdf or pmf <span class="math-container">$f(x \mid \theta)$</span>, knowledge of <span class="math-container">$\theta$</span> yields knowledge of the entire population. Hence, it is natural to seek a method of finding a good estimator of the point <span class="math-container">$\theta$</span>, that is, a good point estimator. It is also the case that the parameter <span class="math-container">$\theta$</span> has a meaningful physical interpretation (as in the case of a population) so there is direct interest in obtaining a good point estimate of <span class="math-container">$\theta$</span>. It may also be the case that some function of <span class="math-container">$\theta$</span>, say <span class="math-container">$\tau(\theta)$</span> is of interest.</p>
</blockquote>
<p>There is, of course, a lot more that I'm ignoring for now (and one could write an entire textbook, honestly, on this topic), but I hope this clarifies things.</p>
<p><strong>Note.</strong> I know that many textbooks use the terms "sample mean" and "sample variance" to describe the estimators above. While "sample mean" tends to be very standard terminology, I disagree with the use of "sample variance" to describe an estimator of the variance; some use <span class="math-container">$n - 1$</span> in the denominator, and some use <span class="math-container">$n$</span> in the denominator. Also, as I mentioned above, there are a multitude of ways that one could estimate the mean and variance; I personally think the use of the word "sample" used to describe such estimators makes it seem like other estimators don't exist, and is thus misleading in that way.</p>
<hr />
<h2>In Common Parlance</h2>
<p>This answer is informed primarily by my practical experience in statistics and data analytics, having worked in the fields for about 6 years as a professional. (As an aside, I find one serious deficiency of statistics and data analysis books is that they rarely connect the mathematical theory with how to approach problems in practice.)</p>
<p>You ask:</p>
<blockquote>
<p>Is there some consistent language for distinguishing these two seemingly different mean and variance terminologies? For example, if my cashier asks me about the "mean weight" of two items, do I ask him/her for the probabilistic distribution of the random variable whose realizations are the weights of these two items (def 1), or do I just add up the values and divide (def 2)? How do I know which mean the person is talking about?</p>
</blockquote>
<p>In most cases, you want to just stick with the statistical definitions. Most people do not think of statistics as attempting to estimate quantities relevant to a population, and thus are not thinking "I am trying to estimate a population quantity using an estimate driven by data." In such situations, people are just looking for summaries of the data they've provided you, known as <a href="https://en.wikipedia.org/wiki/Descriptive_statistics" rel="nofollow noreferrer">descriptive statistics</a>.</p>
<p>The whole idea of estimating quantities relevant to a population using a sample is known as <a href="https://en.wikipedia.org/wiki/Statistical_inference" rel="nofollow noreferrer">inferential statistics</a>. While (from my perspective) most of statistics tends to focus on statistical inference, in practice, most people - especially if they've not had substantial statistical training - do not approach statistics with this mindset. Most people whom I've worked with think "statistics" is just descriptive statistics.</p>
<p>Shao's <em>Mathematical Statistics</em>, 2nd edition, Example 2.1 talks a little bit about this difference:</p>
<blockquote>
<p>In descriptive data analysis, a few summary measures may be calculated, for example, the sample mean... and the sample variance... However, what is the relationship between <span class="math-container">$\bar{x}$</span> and <span class="math-container">$\theta$</span> [a population quantity]? Are they close (if not equal) in some sense? The sample variance <span class="math-container">$s^2$</span> is clearly an average of squared deviations of <span class="math-container">$x_i$</span>'s from their mean. But, what kind of information does <span class="math-container">$s^2$</span> provide?... These questions cannot be answered in descriptive data analysis.</p>
</blockquote>
<hr />
<h2>Other remarks about the sample mean and sample variance formulas</h2>
<p>Let <span class="math-container">$\bar{X}_n$</span> and <span class="math-container">$S^2_n$</span> denote the sample mean and sample variance formulas provided earlier. The following are properties of these estimators:</p>
<ul>
<li>They are unbiased for <span class="math-container">$\mu$</span> and <span class="math-container">$\sigma^2$</span>, as explained earlier. This is a relatively simple probability exercise.</li>
<li>They are consistent for <span class="math-container">$\mu$</span> and <span class="math-container">$\sigma^2$</span>. Since you know measure theory, assume all random variables are defined over a probability space <span class="math-container">$(\Omega, \mathcal{F}, P)$</span>. It follows that <span class="math-container">$\bar{X}_n \overset{P}{\to} \mu$</span> and <span class="math-container">$S^2_n \overset{P}{\to} \sigma^2$</span>, where <span class="math-container">$\overset{P}{\to}$</span> denotes convergence in probability, also known as convergence with respect to the measure <span class="math-container">$P$</span>. See <a href="https://math.stackexchange.com/a/1655827/81560">https://math.stackexchange.com/a/1655827/81560</a> for the sample variance (observe that the estimator with the <span class="math-container">$n$</span> in the denominator is used here; simply multiply by <span class="math-container">$\dfrac{n-1}{n}$</span> and apply a result by Slutsky) and <a href="https://math.stackexchange.com/questions/715629/proving-a-sample-mean-converges-in-probability-to-the-true-mean">Proving a sample mean converges in probability to the true mean</a> for the sample mean. As a stronger result, convergence is almost sure with respect to <span class="math-container">$P$</span> in both cases (<a href="https://math.stackexchange.com/questions/243348/sample-variance-converge-almost-surely">Sample variance converge almost surely</a>).</li>
<li>If one assumes <span class="math-container">$X_1, \dots, X_n$</span> are independent and identically distributed based on a normal distribution with mean <span class="math-container">$\mu$</span> and variance <span class="math-container">$\sigma^2$</span>, one has that <span class="math-container">$\dfrac{\sqrt{n}(\bar{X}_n - \mu)}{\sqrt{S_n^2}}$</span> follows a <span class="math-container">$t$</span>-distribution with <span class="math-container">$n-1$</span> degrees of freedom, which <a href="https://math.stackexchange.com/a/4237657/81560">converges in distribution to a normally-distributed random variable with mean <span class="math-container">$0$</span> and variance <span class="math-container">$1$</span></a>. This is a modification of the central limit theorem.</li>
<li>If one assumes <span class="math-container">$X_1, \dots, X_n$</span> are independent and identically distributed based on a normal distribution with mean <span class="math-container">$\mu$</span> and variance <span class="math-container">$\sigma^2$</span>, <span class="math-container">$\bar{X}_n$</span> and <span class="math-container">$S^2_n$</span> are <a href="https://en.wikipedia.org/wiki/Minimum-variance_unbiased_estimator" rel="nofollow noreferrer">uniformly minimum-variance unbiased estimators</a> (UMVUEs) for <span class="math-container">$\mu$</span> and <span class="math-container">$\sigma^2$</span> respectively. It also follows that <span class="math-container">$\bar{X}_n$</span> and <span class="math-container">$S^2_n$</span> are independent, through - as mentioned by Michael Hardy - showing that <span class="math-container">$\text{Cov}(\bar{X}_n, X_i - \bar{X}_n) = 0$</span> for each <span class="math-container">$i = 1, \dots, n$</span>, or as one can learn from more advanced statistical inference courses, an application of <a href="https://en.wikipedia.org/wiki/Basu%27s_theorem" rel="nofollow noreferrer">Basu's Theorem</a> (see, e.g., Casella and Berger's <em>Statistical Inference</em>).</li>
</ul>
| <p>The first definitions you gave are correct and standard, and statisticians and data scientists will agree with this. (These definitions are given in statistics textbooks.) The second set of quantities you described are called the "sample mean" and the "sample variance", not mean and variance.</p>
<p>Given a random sample from a random variable <span class="math-container">$X$</span>, the sample mean and sample variance are natural ways to estimate the expected value and variance of <span class="math-container">$X$</span>.</p>
|
probability | <p>What’s the probability of getting 3 heads and 7 tails if one flips a fair coin 10 times. I just can’t figure out how to model this correctly.</p>
| <p>Your question is related to the <a href="http://en.wikipedia.org/wiki/Binomial_distribution">binomial distribution</a>.</p>
<p>You do $n = 10$ trials. The probability of one successful trial is $p = \frac{1}{2}$. You want $k = 3$ successes and $n - k = 7$ failures. The probability is:</p>
<p>$$
\binom{n}{k} p^k (1-p)^{n-k} = \binom{10}{3} \cdot \left(\dfrac{1}{2}\right)^{3} \cdot \left(\dfrac{1}{2}\right)^{7} = \dfrac{15}{128}
$$</p>
<p>One way to understand this formula: You want $k$ successes (probability: $p^k$) and $n-k$ failures (probability: $(1-p)^{n-k}$). The successes can occur anywhere among the trials, and there are $\binom{n}{k}$ ways to arrange $k$ successes in $n$ trials.</p>
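If it helps, the formula is easy to sanity-check numerically. Here is a small sketch (the function name is mine):

```python
from math import comb

def binom_pmf(n, k, p):
    # P(exactly k successes in n independent trials with success prob p)
    return comb(n, k) * p**k * (1 - p)**(n - k)

prob = binom_pmf(10, 3, 0.5)
print(prob)                # 0.1171875
print(prob == 15 / 128)    # True
```

Since the powers of $\frac{1}{2}$ are exact binary fractions, the floating-point result here equals $\frac{15}{128}$ exactly.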
| <p>We build a mathematical model of the experiment. Write H for head and T for tail. Record the results of the tosses as a string of length $10$, made up of the letters H and/or T. So for example the string HHHTTHHTHT means that we got a head, then a head, then a head, then a tail, and so on.</p>
<p>There are $2^{10}$ such strings of length $10$. This is because we have $2$ choices for the first letter, and <em>for every such choice</em> we have $2$ choices for the second letter, and for every choice of the first two letters, we have $2$ choices for the third letter, and so on.</p>
<p>Because we assume that the coin is fair, and that the result we get on say the first $6$ tosses does not affect the probability of getting a head on the $7$-th toss, each of these $2^{10}$ ($1024$) strings is <em>equally likely</em>. Since the probabilities must add up to $1$, each string has probability $\frac{1}{2^{10}}$.
So for example the <em>outcome</em> HHHHHHHHHH is just as likely as the outcome HTTHHTHTHT. This may have an intuitively implausible feel, but it fits in very well with experiments.</p>
<p>Now let us assume that we will be happy only if we get exactly $3$ heads. To find the probability we will be happy, we <em>count</em> the number of strings that will make us happy. Suppose there are $k$ such strings. Then the probability we will be happy is $\frac{k}{2^{10}}$.</p>
<p>Now we need to find $k$. So we need to <em>count</em> the number of strings that have exactly $3$ H's. To do this, we find the number of ways to <em>choose</em> <strong>where</strong> the H's will occur. So we must choose $3$ places (from the $10$ available) for the H's to be. </p>
<p>We can choose $3$ objects from $10$ in $\binom{10}{3}$ ways. This number is called also by various other names, such as $C_3^{10}$, or ${}_{10}C_3$, or $C(10,3)$, and there are other names too. It is called a <em>binomial coefficient</em>, because it is the coefficient of $x^3$ when the expression $(1+x)^{10}$ is expanded. </p>
<p>There is a useful formula for the binomial coefficients. In general
$$\binom{n}{r}=\frac{n!}{r!(n-r)!}.$$</p>
<p>In particular, $\binom{10}{3}=\frac{10!}{3!7!}$. This turns out to be $120$. So the probability of exactly $3$ heads in $10$ tosses is
$\frac{120}{1024}$.</p>
<p><strong>Remark:</strong> The idea can be substantially generalized. If we toss a coin $n$ times, and the probability of a head on any toss is $p$ (which need not be equal to $1/2$, the coin could be unfair), then the probability of exactly $k$ heads is
$$\binom{n}{k}p^k(1-p)^{n-k}.$$
This probability model is called the <em>Binomial distribution</em>. It is of great practical importance, since it underlies all simple yes/no polling. </p>
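The counting argument above can also be checked by brute force, since $2^{10}$ outcome strings is small enough to enumerate directly (a sketch; variable names are mine):

```python
from itertools import product

# Enumerate all 2^10 equally likely outcome strings and count those with
# exactly 3 heads; the probability is (favourable count) / 2^10.
outcomes = list(product("HT", repeat=10))
favourable = sum(1 for s in outcomes if s.count("H") == 3)
print(len(outcomes))               # 1024
print(favourable)                  # 120
print(favourable / len(outcomes))  # 0.1171875 = 120/1024
```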
|
linear-algebra | <p>This is my first semester of quantum mechanics and higher mathematics and I am completely lost. I have tried to find help at my university, browsed similar questions on this site, looked at my textbook (Griffiths) and read countless of pdf's on the web but for some reason I am just not getting it. </p>
<p>Can someone explain to me, in the simplest terms possible, what this "Bra" and "Ket" (Dirac) notation is, why it is so important in quantum mechanics and how it relates to Hilbert spaces? I would be infinitely grateful for an explanation that would actually help me understand this.</p>
<p><strong>Edit 1:</strong> I want to thank everyone for the amazing answers I have received so far. Unfortunately I am still on the road and unable to properly read some of the replies on my phone. When I get home I will read and respond to all of the replies and accept an answer. </p>
<p><strong>Edit 2:</strong> I just got home and had a chance to read and re-read all of the answers. I want to thank everyone again for the amazing help over the past few days. All individual answers were great. However, the combination of all answers is what really helped me understand bra-ket notation. For that reason I cannot really single out and accept a "best answer". Since I have to accept an answer, I will use a random number generator and accept a random answer. For anyone returning to this question at a later time: Please read all the answers! All of them are amazing.</p>
<p>In short, kets are vectors in your Hilbert space, while bras are linear functionals mapping the kets to the complex numbers:</p>
<p>$$\left|\psi\right>\in \mathcal{H}$$</p>
<p>\begin{split}
\left<\phi\right|:\mathcal{H} &\to \mathbb{C}\\
\left|\psi\right> &\mapsto \left<\phi\middle|\psi\right>
\end{split}</p>
<p>Due to the <a href="https://en.wikipedia.org/wiki/Riesz_representation_theorem">Riesz–Fréchet theorem</a>, a correspondence can be established between $\mathcal{H}$ and the space of linear functionals where the bras live; hence the perhaps slightly ambiguous notation.</p>
<p>If you want a little more detailed explanation, check out page 39 onwards of Galindo & Pascual: <a href="http://www.springer.com/fr/book/9783642838569">http://www.springer.com/fr/book/9783642838569</a>.</p>
| <p>First, the $bra$c$ket$ notation is simply a convenience invented to greatly simplify, and abstractify the mathematical manipulations being done in quantum mechanics. It is easiest to begin explaining the abstract vector we call the "ket". The ket-vector $|\psi\rangle $ is an abstract vector, it has a certain "size" or "dimension", but without specifying what coordinate system we are in (i.e. the basis), all we know is that the vector $\psi$ exists. Once we want to write down the components of $\psi,$ we can specify a basis and see the projection of $\psi$ onto each of the basis vectors. In other words, if $|\psi\rangle$ is a 3D vector, we can represent it in the standard basis $\{e_1,e_2,e_3\}$ as $\psi = \langle e_1|\psi\rangle |e_1\rangle + \langle e_2|\psi\rangle|e_2\rangle + \langle e_3|\psi\rangle|e_3\rangle,$ where you notice that the $\langle e_i|\psi\rangle$ is simply the coefficient of the projection in the $|e_i\rangle$ direction. </p>
<p>If $|\psi\rangle $ lives in a function space (a Hilbert space is the type of function space used in QM - because we need the notion of an inner product and completeness), then one could abstractly measure the coefficient of $\psi$ at any given point by dotting $\langle x | \psi \rangle = \psi(x)$, treating each point $x$ as its own coordinate or its own basis vector in the function space. But what if we dont use the position basis? Say we want the momentum-frequency-fourier basis representation? Simple, we have an abstract ket vector, how do we determine its representation in a new basis? $\langle p | \psi \rangle = \hat{\psi}(p)$ where $\hat{\psi}$ is the fourier transform of $\psi(x)$ and $|p\rangle$ are the basis vectors of fourier-space. So hopefully this gives a good idea of what a ket-vector is - just an abstract vector waiting to be represented in some basis.</p>
<p>The "bra" vector... not the most intuitive concept at first, assuming you don't have much background in functional analysis. Mathematically, the previous answers discuss how the bra-vector is a linear functional that lives in the dual hilbert space... all gibberish to most people just starting to learn this material. The finite dimensional case is the easiest place to begin. Ket vectors are vertical $n\times 1$ matrices, where $n$ is the dimension of the space. Bra vectors are $1 \times n$ horizontal matrices. We "identify" the ket vector $|a\rangle = (1,2,3)^T$ with the bra vector $\langle a| = (1,2,3),$ although they are not strictly speaking "the same vector," one does correspond to the other in an obvious way. Then, if we define $\langle a | a \rangle \equiv a \cdot a \in \mathbb{R}$ in the finite dimensional case, we see that $\langle a |$ acts on the ket vector $|a\rangle$ to produce a real (complex) number. This is exactly what we call a "linear functional". So we see that maybe it would be reasonable to define a whole new space of these horizontal vectors (call it the dual space), keeping in mind that each of these vectors in the dual space has the property that when it acts on a ket vector, it produces a real (complex) number via the dot product.</p>
<p>Finally, we are left with the infinite dimensional case. We now have the motivation to define the space of all bra-vectors $\langle \psi |$ as the space of all functions such that when you give another function as an input, it produces a real (complex) number. There are many beautiful theorems by Riesz and others that establish existence and uniqueness of this space of elements and their representation in a Hilbert space, but foregoing that discussion, the intuitive thing to do is to say that bra $\langle \phi |$ will be very loosely defined as the function $\phi^*$, and that when you give the input function $\psi(x),$ the symbol means $\langle \phi | \psi\rangle = \int \phi^*\psi \; dx \in \mathbb{R},$ hence $\phi$ is in the dual space, and it acts on a ket-vector in the Hilbert space to produce a real number. If anything needs clarification, just ask. Its a worthwhile notation to master, whether a mathematician or physicist.</p>
|
number-theory | <p>Sequences that avoid arithmetic progressions have been studied, e.g., "Sequences Containing No 3-Term Arithmetic Progressions," Janusz Dybizbański, 2012, <a href="http://www.combinatorics.org/ojs/index.php/eljc/article/view/v19i2p15" rel="noreferrer">journal link</a>.</p>
<p>I started to explore sequences that avoid both arithmetic and geometric progressions,
i.e., avoid <span class="math-container">$x, x+c, x+2c$</span> and avoid <span class="math-container">$y, c y, c^2 y$</span> anywhere in the sequence
(not necessarily consecutively).
Starting with <span class="math-container">$(1,2)$</span>, one cannot extend with <span class="math-container">$3$</span> because <span class="math-container">$(1,2,3)$</span> forms an
arithmetic progression, and one cannot extend with <span class="math-container">$4$</span> because <span class="math-container">$(1,2,4)$</span> is a geometric
progression. But <span class="math-container">$(1,2,5)$</span> is fine.</p>
<p>Continuing in the same manner leads to the following "greedy" sequence:
<span class="math-container">$$1, 2, 5, 6, 12, 13, 15, 16, 32, 33, 35, 39, 40, 42, 56, 81, 84, 85, 88,$$</span>
<span class="math-container">$$90, 93, 94, 108, 109, 113, 115, 116, 159, 189, 207, 208, 222, \ldots$$</span></p>
<p>This sequence is not in the OEIS.
Here are a few questions:</p>
<blockquote>
<p><strong>Q1</strong>. What is its growth rate? </p>
</blockquote>
<p><br />
<img src="https://i.sstatic.net/wQ9Ai.jpg" alt="Avoiding3Terms">
<br /></p>
<blockquote>
<p><strong>Q2</strong>. Does <span class="math-container">$\sum_{i=1}^\infty 1/s_i$</span> converge? (Where <span class="math-container">$s_i$</span> is the <span class="math-container">$i$</span>-th term of the above
sequence.)</p>
<p><strong>Q3</strong>. If it does, does it converge to <em>e</em>?
<em>Update</em>: No. The sum appears to be approximately <span class="math-container">$2.73 > e$</span>, as per
@MichaelStocker and @Turambar.</p>
</blockquote>
<p>That is wild numerical speculation. The first 457 terms (the extent
of the graph above) sum to 2.70261.
<hr />
<strong>Addendum</strong>. <em>11Jul2014</em>. Starting with <span class="math-container">$(0,1)$</span> rather than <span class="math-container">$(1,2)$</span> renders
a direct hit on OEIS <a href="https://oeis.org/A225571" rel="noreferrer">A225571</a>.</p>
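For reproducibility, the greedy construction is easy to code (a sketch; the function name is mine). When testing a candidate <span class="math-container">$n$</span>, it suffices to look for a pair <span class="math-container">$a<b$</span> already in the sequence with <span class="math-container">$a+n=2b$</span> (arithmetic) or <span class="math-container">$an=b^2$</span> (geometric; this also catches rational ratios), since <span class="math-container">$n$</span> is necessarily the largest element of any newly created progression:

```python
def greedy_no_ap_gp(n_terms):
    """Greedily extend (1, 2), skipping any n that would complete a 3-term
    arithmetic progression (a, b, n with a + n = 2b) or geometric
    progression (a, b, n with a*n = b^2) among the terms chosen so far."""
    seq = [1, 2]
    chosen = {1, 2}
    n = 2
    while len(seq) < n_terms:
        n += 1
        ok = True
        for b in seq:
            a = 2 * b - n                 # arithmetic: a, b, n
            if a in chosen and a < b:
                ok = False
                break
            if b * b % n == 0:            # geometric: a, b, n with a*n = b^2
                a = b * b // n
                if a in chosen and a < b:
                    ok = False
                    break
        if ok:
            seq.append(n)
            chosen.add(n)
    return seq

print(greedy_no_ap_gp(15))
# [1, 2, 5, 6, 12, 13, 15, 16, 32, 33, 35, 39, 40, 42, 56]
```

The output matches the terms listed above.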
| <p><span class="math-container">$\color{brown}{\textbf{HINT}}$</span></p>
<p>Denote the target sequence <span class="math-container">$\{F_3(n)\}$</span> and let us try to estimate the probability <span class="math-container">$P(N)$</span> that a natural number belongs to <span class="math-container">$\{F_3\}.$</span></p>
<p>Suppose
<span class="math-container">$$F_3(1)=1,\quad F_3(2)=2,\quad P(1)=P(2)=P(5) = P(6) = 1,\\ P(3)=P(4)=P(7)=P(8)=0,\tag1$$</span>
<span class="math-container">$$V(N)=\sum\limits_{i=1}^{N}P(i),\quad F_3(N) = \left[V^{-1}(N)\right].\tag2$$</span></p>
<p>Let <span class="math-container">$P_a(N)$</span> be the probability that <span class="math-container">$N$</span> does not belong to an arithmetic progression, and let <span class="math-container">$P_g(N)$</span> be the analogous probability for geometric progressions.</p>
<p>Suppose
<span class="math-container">$$P(N) = P_a(N)P_g(N).\tag3$$</span></p>
<p><span class="math-container">$\color{brown}{\textbf{Arithmetic Probability estimation.}}$</span></p>
<p>Suppose
<span class="math-container">$$P_a(N)=\prod\limits_{k=1}^{[N/2]}P_a(N,k),\tag4$$</span>
where <span class="math-container">$P_a(N,k)$</span> is the probability that the arithmetic progression <span class="math-container">$\{N-2k,\,N-k,\,N\}$</span> does not occur.
Suppose
<span class="math-container">$$P_a(N,k) = \big(1-P(N-2k)\big)\big((1-P(N-k)\big).\tag5$$</span></p>
<p><span class="math-container">$\color{brown}{\textbf{Geometric Probability estimation.}}$</span></p>
<p>Suppose
<span class="math-container">$$P_g(N)=\prod\limits_{k=1}^{\left[\,\sqrt N\,\right]}P_g(N,k),\tag6$$</span>
where <span class="math-container">$P_g(N,k)$</span> is the probability that the geometric progression <span class="math-container">$\left\{\dfrac{N}{k^2},\ \dfrac Nk,\ N\right\}$</span> with ratio <span class="math-container">$k$</span> does not occur.</p>
<p>Taking into account that the geometric progression can exist only if <span class="math-container">$k^2\,| \ N,$</span> suppose
<span class="math-container">$$P_g(N,k) = \left(1-\dfrac1{k^2}P\left(\dfrac{N}{k^2}\right)\right)\left(1-P\left(\dfrac{N}{k}\right)\right).\tag7$$</span></p>
<p><span class="math-container">$\color{brown}{\textbf{Common model.}}$</span></p>
<p>The common model can be simplified to the following one,
<span class="math-container">\begin{cases}
P(N) = 1-\prod\limits_{k=1}^{[N/2-1]}\Big(1-\big(1-P(N-2k)\big)\big(1-P(N-k)\big)\Big)\\
\times\prod\limits_{k=1}^{\left[\sqrt N\right]}\left(1-\left(1-\dfrac1{k^2}P\left(\dfrac{N}{k^2}\right)\right)\left(1-P\left(\dfrac{N}{k}\right)\right)\right)\\[4pt]
P(1)=P(2)=P(5) = P(6) = 1,\quad P(3)=P(4)=P(7)=P(8)=0\\
V(N)=\sum\limits_{i=1}^{N}P(i),\quad F_3(n) = \left[V^{-1}(n)\right].\tag8
\end{cases}</span></p>
<p>Looking for the solution in the form
<span class="math-container">$$\left\{\begin{align}
&P(N)=P_v(N),\quad\text{where}\quad v=\left[\dfrac{N-1 \mod 4}2\right],\\[4pt]
&P_0(N)=
\begin{cases}
1,\quad\text{if}\quad N<9\\
cN^{-s},\quad\text{otherwise}
\end{cases}\\[4pt]
&P_1(N)=
\begin{cases}
0,\quad\text{if}\quad N<9\\
dN^{-t},\quad\text{otherwise}
\end{cases}
\end{align}\right.\tag9$$</span></p>
<p>then
<span class="math-container">$$\begin{cases}
&P_0(N) =1-&\prod\limits_{k=1}^{[N/4-1]}\Big(1-\big(1-P_0(N-4k)\big)\big(1-P_1(N-2k)\big)\Big)\\
&&\times\Big(1-\big(1-P_1(N-4k-2)\big)\big(1-P_0(N-2k-1)\big)\Big)\\
&&\times\prod\limits_{k=1}^{\left[\sqrt N/2\right]-1}
\left(1-\left(1-\dfrac1{(2k-1)^2}P_0\left(\dfrac{N}{(2k-1)^2}\right)\right)
\left(1-P_1\left(\dfrac{N}{2k-1}\right)\right)\right)\\[4pt]
&&\times\left(1-\left(1-\dfrac1{4k^2}P_1\left(\dfrac{N}{4k^2}\right)\right)
\left(1-P_0\left(\dfrac{N}{2k}\right)\right)\right)\\[4pt]
&P_1(N) = 1- &\prod\limits_{k=1}^{[N/4-1]}\Big(1-\big(1-P_1(N-4k)\big)\big(1-P_0(N-2k)\big)\Big)\\
&&\Big(1-\big(1-P_0(N-4k-2)\big)\big(1-P_1(N-2k-1)\big)\Big)\\
&&\times\prod\limits_{k=1}^{\left[\sqrt N/2\right]-1}
\left(1-\left(1-\dfrac1{(2k-1)^2}P_1\left(\dfrac{N}{(2k-1)^2}\right)\right)
\left(1-P_0\left(\dfrac{N}{2k-1}\right)\right)\right)\\[4pt]
&&\times\left(1-\left(1-\dfrac1{4k^2}P_0\left(\dfrac{N}{4k^2}\right)\right)
\left(1-P_1\left(\dfrac{N}{2k}\right)\right)\right).
\end{cases}\tag{10}$$</span></p>
<p>Taking into account <span class="math-container">$(9),$</span> this can be written as
<span class="math-container">$$\begin{cases}
&P_0(N) =1-&\prod\limits_{k=[N/4-2]}^{[N/4-1]}\,c(N-2k-1)^{-s}\prod\limits_{k=1}^{[N/4-3]}\Big(1-\big(1-c(N-4k)^{-s}\big)\big(1-d(N-2k)^{-t}\big)\Big)\\
&&\times\Big(1-\big(1-d(N-4k-2)^{-t}\big)\big(1-c(N-2k-1)^{-s}\big)\Big)\\
&&\times\prod\limits_{k=1}^{\left[\sqrt N/2\right]-1}
\left(1-\left(1-\dfrac1{(2k-1)^2}c\left(\dfrac{N}{(2k-1)^2}\right)^{-s}\right)
\left(1-d\left(\dfrac{N}{2k-1}\right)^{-t}\right)\right)\\[4pt]
&&\times\left(1-\left(1-\dfrac1{4k^2}d\left(\dfrac{N}{4k^2}\right)^{-t}\right)
\left(1-c\left(\dfrac{N}{2k}\right)^{-s}\right)\right)\\[4pt]
&P_1(N) = 1- &\prod\limits_{k=[N/4-2]}^{[N/4-1]}\,c(N-2k)^{-s}\prod\limits_{k=1}^{[N/4-3]}\Big(1-\big(1-d(N-4k)^{-t}\big)\big(1-c(N-2k)^{-s}\big)\Big)\\
&&\Big(1-\big(1-c(N-4k-2)^{-s}\big)\big(1-d(N-2k-1)^{-t}\big)\Big)\\
&&\times\prod\limits_{k=1}^{\left[\sqrt N/2\right]-1}
\left(1-\left(1-\dfrac1{(2k-1)^2}d\left(\dfrac{N}{(2k-1)^2}\right)^{-t}\right)
\left(1-c\left(\dfrac{N}{2k-1}\right)^{-s}\right)\right)\\[4pt]
&&\times\left(1-\left(1-\dfrac1{4k^2}c\left(\dfrac{N}{4k^2}\right)^{-s}\right)
\left(1-d\left(\dfrac{N}{2k}\right)^{-t}\right)\right).
\end{cases}\tag{11}$$</span></p>
<p>Model <span class="math-container">$(11)$</span> still needs to be checked theoretically and practically, but it provides an approach to the required estimates.</p>
<p>The next steps are the estimation of the parameters <span class="math-container">$$c,d,s,t$$</span> and the application of the resulting model.</p>
| <p>Since both @awwalker and @mathworker21 mentioned Erdős' conjecture, and because
a paper discussing this conjecture was just published, I thought I would mention it:</p>
<blockquote>
<p><strong>Erdős Conjecture</strong> (1940s or 1950s). If <span class="math-container">$A \subset \mathbb{N}$</span>
satisfies <span class="math-container">$\sum_{n \in A} \frac{1}{n}= \infty$</span>, then
<span class="math-container">$A$</span> contains arbitrarily long arithmetic progressions.</p>
</blockquote>
<p>In </p>
<ul>
<li>Grochow, Joshua. "New applications of the polynomial method: The cap
set conjecture and beyond." <em>Bulletin of the American Mathematical
Society</em>, Vol.56, No.1, Jan. 2019,</li>
</ul>
<p>he says:</p>
<blockquote>
<p>"It remains open even to prove that a set <span class="math-container">$A$</span> satisfying the hypothesis contains <span class="math-container">$3$</span>-term arithmetic progressions."</p>
</blockquote>
|
geometry | <p>I'm watching a naive introduction to the Möbius band, the lecturer asks if it's possible to construct a one sided surface and then she says that there is one of these surfaces, namely the Möbius band. Then she mentions that some surfaces have two sides.</p>
<p>I've also had this doubt when reading Flegg's <em>From Geometry to Topology</em>. So, is it possible to have a surface with more than two sides? My intuition says no, but perhaps someone made some magic trick and made it somehow. I've looked at some Wikipedia articles and I've seen no mention of anything with more than two sides (I've used my browser search tool), unless such surfaces have other names.</p>
<p>I guess that the tags should be <em>geometry</em> and <em>topology</em>, if you think there is something more, please edit. </p>
| <p>Here's an intuitive explanation (and I am writing this under the assumption that the surface is connected).</p>
<p>The key lies in understanding the difference between the number of sides "locally" and the number of sides "globally".</p>
<p>For any surface depicted in 3-D space, the number of sides locally is exactly two. The intuitive reason for this is that if you train a microscope on a point of the surface and zoom way in, you see a local piece of the surface which looks very, very similar to a flat plane in 3-dimensional space. A flat plane in 3-D space always has exactly two sides. So from a local perspective, a surface has exactly two sides. Not one, not three or more, but exactly two.</p>
<p>Now we switch to the global perspective. Imagine in your microscope you see a tiny creature walking along one side of the local piece of the surface. Is it possible for the creature to travel along some path, one which is allowed to exit the view of the microscope but which returns at a later time, so that when the path returns the creature is on the opposite side of the piece of the surface in the microscope? If so, if such a path exists, then from a global perspective the surface is one-sided. If not, if no matter what path the creature walks around it always comes back to the same side of the surface in the microscope, then the surface is globally two-sided.</p>
<p>As @MoisheCohen says, this is the concept of "co-orientation", also called "transverse orientation". At every point on every surface in 3-D space there are exactly two local transverse orientations. The surface is globally one-sided if there exists a path which switches the two local transverse orientations, and the surface is globally two sided if no path switches the two local transverse orientations.</p>
<ul>
<li>ADDITIONAL REMARKS TO ADDRESS THE COMMENT</li>
</ul>
<p>To summarize: the reason there are not "3 or more sides" is that there are exactly two sides <em>locally</em>, and those two sides are either <em>globally</em> equivalent or <em>globally</em> inequivalent. When you have an equivalence relation on a set of two elements, the number of equivalence classes is either one or two.</p>
| <p>Getting a true answer to your question requires some familiarity with differential topology (or algebraic topology), which I assume in what follows. My favorite reference is do Carmo's "Riemannian Geometry". </p>
<p>The definitions below are (intentionally) formal (you had enough informal input from other answers). To gain an intuition, one should work out specific examples of surfaces in $R^3$. </p>
<p>In order to keep things simple, I will assume in what follows that $S$ is a smooth hypersurface in an $n$-dimensional Riemannian manifold $(M,g)$, i.e., $S$ is a smooth submanifold in $M$ and $S$ has dimension $n-1$. (The assumptions of smoothness and presence of a Riemannian metric, simplify definition and proofs but can be avoided by doing a bit more work. To avoid using Riemannian metric I would have to define the normal bundle of $S$ in $M$. To eliminate smoothness assumptions, I would have to invoke some heavy machinery from the theory of topological manifolds.) For concreteness, you can consider the case when $M=R^n$ with the standard metric. </p>
<p><strong>Definition.</strong> A <em>unit normal vector</em> to $S$ at a point $x\in S$ is a unit vector in $T_x(M)$ (the tangent space of $M$ at $x$) which is orthogonal to $T_x(S)$ (the tangent space of $S$ at $x$). </p>
<p><strong>Definition.</strong> A unit normal vector field to $S$ is a vector field along $S$ consisting of unit normal vectors, i.e., a smooth map $\nu: S\to TM$ (the tangent bundle of $M$) which sends each $x\in S$ to a unit normal vector $\nu_x$ to $S$ at the point $x$. </p>
<p><strong>Definition.</strong> A <em>coorientation</em> of $S$ is a choice of a unit normal vector field $\nu$ to $S$. A submanifold $S\subset M$ is called <em>coorientable</em> if it admits a coorientation. </p>
<p>Clearly, if $\nu$ is a coorientation of $S$, then the vector field $-\nu$ is also a coorientation of $S$, and $-\nu\ne \nu$. </p>
<p><strong>Exercise.</strong> Coorientability of $M$ is independent of the choice of a Riemannian metric on $M$: $S$ is coorientable if and only if there exists a continuous vector field $\xi$ along $S$ such that $\xi_x\notin T_xS$ for every $x\in S$. Hint: Use the orthogonal projection of $T_xM$ to the orthogonal complement of $T_xS$ in $T_xM$. </p>
<p>Note that coorientability is independent of orientability of $S$ and $M$. It is also independent of the number of components of $M\setminus S$. However:</p>
<p><strong>Exercise.</strong> a. If both $M$ and $S$ are orientable, then $S$ is coorientable. </p>
<p>b. If $M$ and $S$ are connected and $M\setminus S$ is not connected, then $S$ is coorientable. </p>
<p><strong>Definition.</strong> Suppose that $S$ is connected. Then $S$ is called <em>1-sided</em> if it does not admit a coorientation. (Such $S$ is said to have one side.) </p>
<p><strong>Lemma.</strong> Suppose that $S$ is connected and coorientable. Then $S$ admits exactly two coorientations. </p>
<p><strong>Proof.</strong> Let $\nu, \mu$ be coorientations of $S$. I claim that either $\nu=\mu$ or $\mu=-\nu$. Consider a point $x\in S$. Then either $\nu_x=\mu_x$ or $\nu_x=-\mu_x$ since these are unit normal vectors to $S$ and $T_x M= T_x S\oplus {\mathbb R}$ is the orthogonal decomposition. Therefore, we obtain a partition $U\sqcup V$ of $S$ where
$$
U=\{x\in M: \nu_x=\mu_x\}, V=\{x\in M: \nu_x=-\mu_x\}.
$$
Both sets are closed in $S$ since $\nu_x, \mu_x$ are continuous vector fields. If both sets are nonempty then $S$ is not connected, which contradicts our assumption. Hence, either $\nu_x=\mu_x$ for every $x\in S$ or $\nu_x=-\mu_x$ for every $x\in S$. qed </p>
<p><strong>Definition.</strong> A <em>side</em> of a connected hypersurface $S$ is a choice of coorientation of $S$. </p>
<p><strong>Corollary.</strong> Every connected hypersurface either is 1-sided or has exactly two sides. In other words, the number of sides, $\sigma(S)$, of a connected hypersurface $S$ is either 1 or 2. </p>
<p>Suppose now that $S$ is not necessarily connected and
$$
S= \coprod_{j\in J} S_j
$$
is the decomposition of $S$ in its connected components. Then the number of sides of $S$ is defined as
$$
\sigma(S):= \sum_{j\in J} \sigma(S_j).
$$
For instance, if $S$ is the disjoint union of the Moebius band and an annulus in $R^3$, then $\sigma(S)=1+2=3$, that is, $S$ has three sides. If $M=RP^2\times S^1$ and $S=RP^2\times \{p\}\subset M$, then $\sigma(S)=2$, that is, $S$ has two sides. </p>
<p>This answers your question on the "number of sides" of a hypersurface. Note that if $S$ is not a hypersurface in $M$ then its "number of sides" is not defined. </p>
|
geometry | <p>I am aware that, historically, hyperbolic geometry was useful in showing that there can be consistent geometries that satisfy the first 4 axioms of Euclid's elements but not the fifth, the infamous parallel lines postulate, putting an end to centuries of unsuccesfull attempts to deduce the last axiom from the first ones.</p>
<p>It seems to be, apart from this fact, of genuine interest since it was part of the usual curriculum of all mathematicians at the begining of the century and also because there are so many books on the subject.</p>
<p>However, I have not found mention of applications of hyperbolic geometry to other branches of mathematics in the few books I have sampled. Do you know any or where I could find them? </p>
<p>Maybe this isn't the sort of answer you were looking for, but I find it striking how often hyperbolic geometry shows up in nature. For instance, you can see some characteristically hyperbolic "crinkling" on lettuce leaves and jellyfish tentacles:
<img src="https://i.sstatic.net/kPCNv.jpg" alt="lettuce leaves (from fudsubs.com)">
<img src="https://i.sstatic.net/sugCH.jpg" alt="jellyfish tentacles (from goldenstateimages.com)"></p>
<p>My guess as to why this shows up again and again (and I am certainly not a biologist here, so this is only speculation) is that hyperbolic space manages to pack in more surface area within a given radius than flat or positively curved geometries; perhaps this allows lettuce leaves or jellyfish tentacles to absorb nutrients more effectively or something.</p>
<p>EDIT: In response to the OP's comment, I'll say a little bit more about how these relate to hyperbolic geometry. </p>
<p>One way to detect the curvature of your surface is to look at what the surface area of a circle of a given radius is. In flat (Euclidean) space, we all know that the formula is given by $A(r) = \pi r^2$, so that there is a quadratic relationship between the radius of your circle and the area enclosed. Off the top of my head, I don't know what the formula for a circle inscribed on the sphere (a positively-curved surface) is, but we can get an indication that circles in positive curvature enclose <i>less</i> area than in flat space as follows: the upper hemisphere on a sphere of radius 1 is a spherical circle of radius $\pi/2$, since the distance from the north pole to the equator, walking along the surface of the sphere, is $\pi/2$. In flat space, this circle would enclose an area of $\pi^3/4 \approx 7.75$. But the upper hemisphere has a surface area of $2 \pi \approx 6.28$.</p>
<p>By contrast, in hyperbolic space, a circle of a fixed radius packs in more surface area than its flat or positively-curved counterpart; you can see this explicitly, for example, by putting a hyperbolic metric on the unit disk or the upper half-plane, where you will compute that a hyperbolic circle has area that grows exponentially with the radius. </p>
<p>So what happens when you have a hyperbolic surface sitting inside three-dimensional space? Well, all that extra surface area has to go somewhere, and things naturally "crinkle up". If you are at all interested, you can crochet hyperbolic planes (see, for instance, <a href="http://www.math.cornell.edu/~dwh/papers/crochet/crochet.html" rel="noreferrer">this article</a> of David Henderson and Daina Taimina), and you'll see how this happens in practice.</p>
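<p>To make the area comparison concrete, here is a small numerical sketch comparing the area of a geodesic disk of radius <span class="math-container">$r$</span> in the three constant-curvature geometries (taking curvature <span class="math-container">$\pm 1$</span>, where the standard disk-area formulas are <span class="math-container">$2\pi(1-\cos r)$</span> for the sphere and <span class="math-container">$2\pi(\cosh r-1)$</span> for the hyperbolic plane):</p>

```python
from math import pi, cos, cosh

# Area of a geodesic disk of radius r in the three constant-curvature
# geometries (curvature +1, 0, -1); the hyperbolic area grows exponentially.
for r in (0.5, pi / 2, 3.0):
    spherical = 2 * pi * (1 - cos(r))
    flat = pi * r * r
    hyperbolic = 2 * pi * (cosh(r) - 1)
    print(f"r={r:.3f}  sphere={spherical:.3f}  flat={flat:.3f}  hyperbolic={hyperbolic:.3f}")
```

<p>For <span class="math-container">$r=\pi/2$</span> this reproduces the hemisphere numbers above: spherical area <span class="math-container">$2\pi\approx 6.28$</span> versus flat <span class="math-container">$\pi^3/4\approx 7.75$</span>, with the hyperbolic disk larger still.</p>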
| <p>My personal pick is the way hyperbolic geometry is used in network science to reason about a whole lot of strange properties of complex networks:</p>
<p>Krioukov et al.: <a href="http://arxiv.org/abs/1006.5169" rel="noreferrer">Hyperbolic Geometry of Complex Networks</a></p>
|
logic | <p>I read that contraposition $\neg Q \rightarrow \neg P$ in intuitionistic logic is not generally equivalent to $P \rightarrow Q$. If this is right, in what case can this contraposition logical-equivalence be used in intuitionistic logic?</p>
| <p>Contraposition in intuitionism can, sometimes, be used, but it's a delicate situation.</p>
<p>You can think of it this way. Intuitionistically, the meaning of $P\to Q$ is that any proof of $P$ can be turned into a proof of $Q$. Similarly, $\neg Q \to \neg P$ means that every proof of $\neg Q$ can be turned into a proof of $\neg P$. </p>
<p>If $P\to Q$ is true, and you are given a proof of $\neg Q$, can you construct a proof of $\neg P$ ? The answer is yes, as follows. Well, we are given a proof that there is no proof of $Q$. Suppose $P$ is true, then it can be turned into a proof of $Q$, but then we will have a proof of $Q\wedge \neg Q$, which is impossible. Thus we just showed that it is not the case that $P$ holds, thus $\neg P$ holds. In other words, $(P\to Q)\to (\neg Q \to \neg P)$.</p>
<p>In the other direction, suppose that $\neg Q \to \neg P$, and you are given a proof of $P$. Can you now construct a proof of $Q$? Well, not quite. The best you can do is as follows. Suppose I have a proof of $\neg Q$. I can turn it into a proof of $\neg P$, and then obtain a proof of $P\wedge \neg P$, which is impossible. It thus shows that $\neg Q$ can not be proven. That is, that $\neg \neg Q$ holds. In other words, $(\neg Q \to \neg P)\to (P\to \neg \neg Q)$.</p>
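<p>For what it's worth, both halves of this argument can be checked formally. Here is a minimal sketch in Lean 4 (where <span class="math-container">$\neg P$</span> is by definition <span class="math-container">$P \to \mathrm{False}$</span>; the theorem names are mine):</p>

```lean
-- (P → Q) → (¬Q → ¬P): contraposition in this direction is intuitionistically valid.
theorem contrapose {P Q : Prop} (h : P → Q) : ¬Q → ¬P :=
  fun hnq hp => hnq (h hp)

-- The converse direction only yields the double negation of Q, as described above.
theorem reverse_weak {P Q : Prop} (h : ¬Q → ¬P) : P → ¬¬Q :=
  fun hp hnq => h hnq hp
```

<p>Note that no classical axiom (excluded middle, double-negation elimination) is used in either proof term.</p>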
| <blockquote>
<p>In what case can this contraposition logical-equivalence be used in intuitionistic logic?</p>
</blockquote>
<p>This is not straightforward to answer. What needs to be true is that $P$ and $Q$ need to act sufficiently like classical formulas. Here are two examples:</p>
<p><em>1.</em> The <a href="https://en.wikipedia.org/wiki/Double-negation_translation" rel="noreferrer">negative translation</a> embeds classical logic into intuitionistic logic, sending a formula $S$ to a formula $S^N$. If we compute this for an instance of contraposition, we obtain:</p>
<p>$$((\lnot R \to \lnot S) \to (S \to R))^N\\
(\lnot R \to \lnot S)^N \to (S \to R)^N\\
((\lnot R)^N \to (\lnot S)^N) \to (S^N \to R^N)\\
(\lnot R^N \to \lnot S^N) \to (S^N \to R^N)$$</p>
<p>Therefore $(\lnot R^N \to \lnot S^N) \to (S^N \to R^N)$ is intuitionistically valid for all $R,S$. In particular, if $P$ and $Q$ are equivalent to negative translations of other formulas, then contraposition holds for $P$ and $Q$.</p>
<hr>
<p><em>2.</em> Here is a different view. Suppose $Q_0$ is a fixed formula such that contraposition holds between $Q_0$ and all $P$:
$$(\lnot Q_0 \to \lnot P) \to (P \to Q_0)$$</p>
<p>Then, letting $P$ be $\lnot \bot$, we have $\lnot P$ equivalent to $\bot$, and so from
$$(\lnot Q_0 \to \lnot P) \to (P \to Q_0)$$
we obtain
$$(\lnot \lnot Q_0) \to (\lnot \bot \to Q_0)$$
which is equivalent to
$$\lnot \lnot Q_0 \to Q_0$$
Thus if $Q_0$ satisfies contraposition with all $P$, then $Q_0$ is equivalent to $\lnot\lnot Q_0$. The converse of this was shown by Ittay Weiss in another answer: if $Q_0$ is equivalent to $\lnot\lnot Q_0$ then $Q_0$ satisfies contraposition with all $P$.</p>
|
probability | <p>I'm a beginner in mathematics and there is one thing that I've been wondering about recently. The formula for the normal distribution is:</p>
<p>$$f(x)=\frac{1}{\sqrt{2\pi\sigma^2}}e^{-\displaystyle{\frac{(x-\mu)^2}{2\sigma^2}}},$$</p>
<p>However, what are $e$ and $\pi$ doing there? $\pi$ is about circles and the ratio to its diameter, for example. $e$ is mostly about exponential functions, specifically about the fact that $\frac{\mathrm{d}}{\mathrm{d}x} e^x = e^x$.</p>
<p>It is my firm conviction that proofs and articles are available, but could someone perhaps shed some light on this and please explain in a more 'informal' language what they stand for here?</p>
<p>I'm very curious to know as those numbers have very different meanings as far as I'm concerned.</p>
| <p>So I think you want to know "why" $\pi$ and $e$ appear here based on an explanation that goes back to circles and natural logarithms, which are the usual contexts in which one first sees these.</p>
<p>If you see $\pi$, you think there's a circle hidden somewhere. And in fact there is. As has been pointed out, in order for this expression to give a probability density you need $\int_{-\infty}^\infty f(x) \: dx = 1$. (I'm not sure how much you know about integrals -- this just means that the area between the graph of $f(x)$ and the $x$-axis is 1.) But it turns out that this can be derived from $\int_{-\infty}^\infty e^{-x^2} dx = \sqrt{\pi}$. </p>
<p>And it turns out that this is true because the square of this integral is $\pi$. Now, why should the square of this integral have anything to do with circles? Because it's the total volume between the graph of $e^{-(x^2+y^2)}$ (as a function $g(x,y)$ of two variables) and the $xy$-plane. And of course $x^2+y^2$ is just the square of the distance of $(x,y)$ from the origin -- so the volume I just mentioned is rotationally symmetric. (If you know about multiple integration, see the <a href="http://en.wikipedia.org/wiki/Gaussian_integral" rel="noreferrer">Wikipedia article "Gaussian integral", under the heading "brief proof"</a> to see this volume worked out.)</p>
<p>As for where $e$ comes from -- perhaps you've seen that the normal probability density can be used to approximate the binomial distribution. In particular, the probability that if we flip $n$ independent coins, each of which has probability $p$ of coming up heads, we'll get $k$ heads is
$$ {n \choose k} p^{k} (1-p)^{n-k} $$
where ${n \choose k} = n!/(k! (n-k)!)$. And then there's
<a href="http://en.wikipedia.org/wiki/Stirling%27s_approximation" rel="noreferrer">Stirling's approximation</a>,
$$ n! \approx \sqrt{2\pi n} (n/e)^{n}. $$
So if you can see why $e$ appears here, you see why it appears in the normal. Now, we can take logs of both sides of $n! = 1 \cdot 2 \cdot \ldots \cdot n$ to get
$$ \log (n!) = \log 1 + \log 2 + \cdots + \log n $$
and we can approximate the sum by an integral,
$$ \log (n!) \approx \int_{1}^{n} \log t \: dt. $$
But the indefinite integral here is $t \log t - t$, and so we get the definite integral
$$ \log (n!) \approx n \log n - n. $$
Exponentiating both sides gives $n! \approx (n/e)^n$. This is off by a factor of $\sqrt{2\pi n}$ but at least explains the appearance of $e$ -- because there are logarithms in the derivation. This often occurs when we deal with probabilities involving lots of events because we have to find products of many terms; we have a well-developed theory for sums of very large numbers of terms (basically, integration) which we can plug into by taking logs.</p>
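<p>Both ingredients above are easy to sanity-check numerically; here is a small sketch (a crude midpoint rule on <span class="math-container">$[-10,10]$</span>, outside of which the Gaussian tails are negligible):</p>

```python
from math import exp, sqrt, pi, factorial

# Gaussian integral: the integral of e^(-x^2) over the real line is sqrt(pi).
steps, a = 200_000, 10.0
h = 2 * a / steps
integral = h * sum(exp(-(-a + (i + 0.5) * h) ** 2) for i in range(steps))
print(integral, sqrt(pi))  # both about 1.7724539

# Stirling: n! ~ sqrt(2 pi n) * (n / e)^n; the ratio tends to 1 as n grows.
n = 50
stirling = sqrt(2 * pi * n) * (n / exp(1)) ** n
print(factorial(n) / stirling)  # about 1.0017
```

<p>The leftover ratio for finite <span class="math-container">$n$</span> is roughly <span class="math-container">$1 + 1/(12n)$</span>, consistent with the "off by a factor of $\sqrt{2\pi n}$" correction mentioned above.</p>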
| <p>One of the important operations in (continuous) probability is the integral. $e$ shows up there just because it's convenient. If you rearrange it a little you get $$ {1 \over \sqrt{2\pi \sigma^2}} (e^{1 \over 2\sigma^2})^{-(x-\mu)^2},$$ which makes it clear that the $e$ is just a convenient number that makes the initial constant relatively straightforward; using some other number in place of $e$ just rescales $\sigma$ in some way.</p>
<p>The $\pi$ is a little tougher to explain; the fact you just have to "know" (because it requires multivariate calculus to prove) is that $\int_{-\infty}^{\infty} e^{-x^2} dx = \sqrt{\pi}$. This is called the Gaussian integral, because Gauss came up with it. It's also why this distribution (with $\mu = 0, \sigma^2 = 1/2$) is called the Gaussian distribution. So that's why $\pi$ shows up in the constant, so that no matter what values you use for $\sigma$ and $\mu$, $\int_{-\infty}^{\infty} f(x) dx = 1$.</p>
|
linear-algebra | <p>I want to understand the meaning behind the Jordan Normal form, as I think this is crucial for a mathematician.</p>
<p>As far as I understand this, the idea is to get the closest representation of an arbitrary endomorphism towards the diagonal form. As diagonalization is only possible if there are sufficient eigenvectors, we try to get a representation of the endomorphism with respect to its generalized eigenspaces, as their sum always gives us the whole space. Therefore bringing an endomorphism to its Jordan normal form is always possible.</p>
<p>How often an eigenvalue appears on the diagonal in the JNF is determined by its algebraic multiplicity. The number of blocks is determined by its geometric multiplicity. Here I am not sure whether I've got the idea right. I mean, I have trouble interpreting this statement.</p>
<blockquote>
<p>What is the meaning behind a Jordan normal block and why is the number of these blocks equal to the number of linearly independent eigenvectors?</p>
</blockquote>
<p>I do not want to see a rigorous proof, but maybe someone could answer for me the following sub-questions.</p>
<blockquote>
<p>(a) Why do we have to start a new block for each new linearly independent eigenvector that we can find?</p>
<p>(b) Why do we not have one block for each generalized eigenspace?</p>
<p>(c) What is the intuition behind the fact that the Jordan blocks that contain at least <span class="math-container">$k+1$</span> entries of the eigenvalue <span class="math-container">$\lambda$</span> are determined by the following? <span class="math-container">$$\dim(\ker(A-\lambda I)^{k+1}) - \dim(\ker(A-\lambda I)^k)$$</span></p>
</blockquote>
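<p>For concreteness, the count in (c) can be checked numerically. The sketch below uses a hypothetical <span class="math-container">$5\times 5$</span> matrix built from one Jordan block of size <span class="math-container">$3$</span> and one of size <span class="math-container">$2$</span> for <span class="math-container">$\lambda=2$</span>; with <span class="math-container">$N=A-\lambda I$</span>, the differences <span class="math-container">$\dim\ker N^{k+1}-\dim\ker N^{k}$</span> count the blocks with at least <span class="math-container">$k+1$</span> diagonal entries <span class="math-container">$\lambda$</span>:</p>

```python
from fractions import Fraction

def mat_mul(A, B):
    # plain matrix multiplication
    return [[sum(A[i][t] * B[t][j] for t in range(len(B)))
             for j in range(len(B[0]))] for i in range(len(A))]

def rank(M):
    # Gaussian elimination over the rationals (exact arithmetic)
    M = [[Fraction(x) for x in row] for row in M]
    r = 0
    for c in range(len(M[0])):
        piv = next((i for i in range(r, len(M)) if M[i][c] != 0), None)
        if piv is None:
            continue
        M[r], M[piv] = M[piv], M[r]
        for i in range(len(M)):
            if i != r and M[i][c] != 0:
                f = M[i][c] / M[r][c]
                M[i] = [a - f * b for a, b in zip(M[i], M[r])]
        r += 1
    return r

# N = A - 2I for A in Jordan form: one block of size 3, one of size 2.
N = [[0, 1, 0, 0, 0],
     [0, 0, 1, 0, 0],
     [0, 0, 0, 0, 0],
     [0, 0, 0, 0, 1],
     [0, 0, 0, 0, 0]]
n = len(N)
powers = [[[int(i == j) for j in range(n)] for i in range(n)]]  # N^0 = I
for _ in range(n):
    powers.append(mat_mul(powers[-1], N))
d = [n - rank(P) for P in powers]            # d[k] = dim ker N^k
diffs = [d[k + 1] - d[k] for k in range(n)]  # blocks of size >= k+1
print(d)      # [0, 2, 4, 5, 5, 5]
print(diffs)  # [2, 2, 1, 0, 0]
```

<p>The first difference <span class="math-container">$\dim\ker N^{1}-\dim\ker N^{0}=2$</span> counts the two linearly independent eigenvectors, one per block, which is the point of sub-questions (a) and (b).</p>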
| <p>Let me sketch a proof of existence of the Jordan canonical form which, I believe, makes it somewhat natural.</p>
<hr />
<p>Let us say that a linear endomorphism <span class="math-container">$f:V\to V$</span> of a nonzero finite dimensional vector space is <strong>decomposable</strong> if there exist <em>proper</em> subspaces <span class="math-container">$U_1$</span>, <span class="math-container">$U_2$</span> of <span class="math-container">$V$</span> such that <span class="math-container">$V=U_1\oplus U_2$</span>, <span class="math-container">$f(U_1)\subseteq U_1$</span> and <span class="math-container">$f(U_2)\subseteq U_2$</span>, and let us say that <span class="math-container">$f$</span> is <strong>indecomposable</strong> if it is not decomposable. In terms of bases and matrices, it is easy to see that the map <span class="math-container">$f$</span> is decomposable iff there exists a basis of <span class="math-container">$V$</span> such that the matrix of <span class="math-container">$f$</span> with respect to it has a non-trivial diagonal block decomposition (that is, it is block diagonal with two blocks).</p>
<p>Now it is not hard to prove the following:</p>
<blockquote>
<p><strong>Lemma 1.</strong> <em>If <span class="math-container">$f:V\to V$</span> is an endomorphism of a nonzero finite dimensional vector space, then there exist <span class="math-container">$n\geq1$</span> and nonzero subspaces <span class="math-container">$U_1$</span>, <span class="math-container">$\dots$</span>, <span class="math-container">$U_n$</span> of <span class="math-container">$V$</span> such that <span class="math-container">$V=\bigoplus_{i=1}^nU_i$</span>, <span class="math-container">$f(U_i)\subseteq U_i$</span> for all <span class="math-container">$i\in\{1,\dots,n\}$</span> and for each such <span class="math-container">$i$</span> the restriction <span class="math-container">$f|_{U_i}:U_i\to U_i$</span> is indecomposable.</em></p>
</blockquote>
<p>Indeed, you can more or less imitate the usual argument that shows that every natural number larger than one is a product of prime numbers.</p>
<p>This lemma allows us to reduce the study of linear maps to the study of <em>indecomposable</em> linear maps. So we should start by trying to see how an indecomposable endomorphism looks like.</p>
<p>There is a general fact that comes useful at times:</p>
<blockquote>
<p><strong>Lemma.</strong> <em>If <span class="math-container">$h:V\to V$</span> is an endomorphism of a finite dimensional vector space, then there exists an <span class="math-container">$m\geq1$</span> such that <span class="math-container">$V=\ker h^m\oplus\def\im{\operatorname{im}}\im h^m$</span>.</em></p>
</blockquote>
<p>I'll leave its proof as a pleasant exercise.</p>
<p>So let us fix an indecomposable endomorphism <span class="math-container">$f:V\to V$</span> of a nonzero finite dimensional vector space. As <span class="math-container">$k$</span> is algebraically closed, there is a nonzero <span class="math-container">$v\in V$</span> and a scalar <span class="math-container">$\lambda\in k$</span> such that <span class="math-container">$f(v)=\lambda v$</span>. Consider the map <span class="math-container">$h=f-\lambda\mathrm{Id}:V\to V$</span>: we can apply the lemma to <span class="math-container">$h$</span>, and we conclude that <span class="math-container">$V=\ker h^m\oplus\def\im{\operatorname{im}}\im h^m$</span> for some <span class="math-container">$m\geq1$</span>. Moreover, it is very easy to check that <span class="math-container">$f(\ker h^m)\subseteq\ker h^m$</span> and that <span class="math-container">$f(\im h^m)\subseteq\im h^m$</span>. Since we are supposing that <span class="math-container">$f$</span> is indecomposable, one of <span class="math-container">$\ker h^m$</span> or <span class="math-container">$\im h^m$</span> must be the whole of <span class="math-container">$V$</span>. As <span class="math-container">$v$</span> is in the kernel of <span class="math-container">$h$</span>, it is also in the kernel of <span class="math-container">$h^m$</span>; hence it is not in <span class="math-container">$\im h^m$</span>, and we see that <span class="math-container">$\ker h^m=V$</span>.</p>
<p>This means, precisely, that <span class="math-container">$h^m:V\to V$</span> is the zero map, and we see that <span class="math-container">$h$</span> is <em>nilpotent</em>. Suppose its nilpotency index is <span class="math-container">$k\geq1$</span>, and let <span class="math-container">$w\in V$</span> be a vector such that <span class="math-container">$h^{k-1}(w)\neq0=h^k(w)$</span>.</p>
<blockquote>
<p><strong>Lemma.</strong> The set <span class="math-container">$\mathcal B=\{w,h(w),h^2(w),\dots,h^{k-1}(w)\}$</span> is a basis of <span class="math-container">$V$</span>.</p>
</blockquote>
<p>This is again a nice exercise.</p>
<p>Now you should be able to check easily that the matrix of <span class="math-container">$f$</span> with respect to the basis <span class="math-container">$\mathcal B$</span> of <span class="math-container">$V$</span> is a Jordan block.</p>
<p>In this way we conclude that every indecomposable endomorphism of a nonzero finite dimensional vector space has, in an appropriate basis, a Jordan block as a matrix.
According to Lemma 1, then, every endomorphism of a nonzero finite dimensional vector space has, in an appropriate basis, a block diagonal matrix with Jordan blocks.</p>
<hr />
<p>Much later: <em>How to prove the lemma?</em> A nice way to do this which is just a computation is the following.</p>
<ul>
<li>Show first that the vectors <span class="math-container">$w,h(w),h^2(w),\dots,h^{k-1}(w)$</span> are linearly independent. Let <span class="math-container">$W$</span> be the subspace they span.</li>
<li>Find vectors <span class="math-container">$z_1,\dots,z_l$</span> so that together with the previous <span class="math-container">$k$</span> ones we have a basis for <span class="math-container">$V$</span>. There is then a unique map <span class="math-container">$\Phi:V\to F$</span> such that <span class="math-container">$$
\Phi(h^i(w)) = \begin{cases} 1 & \text{if $i=k-1$;} \\ 0 & \text{if $0\le i<k-1$} \end{cases}
$$</span>
and <span class="math-container">$$\Phi(z_i)=0\quad\text{for all $i$.}$$</span> Using this construct another map <span class="math-container">$\pi:V\to V$</span> such that <span class="math-container">$$\pi(v) = \sum_{i=0}^{k-1}\Phi(h^i(v))h^{k-1-i}(w)$$</span> for all <span class="math-container">$v\in V$</span>. Prove that <span class="math-container">$W=\operatorname{img}\pi$</span> and that <span class="math-container">$\pi^2=\pi$</span>, so that <span class="math-container">$V=W\oplus\ker\pi$</span>, and that <span class="math-container">$\pi h=h\pi$</span>, so that <span class="math-container">$\ker\pi$</span> is <span class="math-container">$h$</span>-invariant. Since we are supposing <span class="math-container">$V$</span> to be <span class="math-container">$h$</span>-indecomposable, we must have <span class="math-container">$\ker\pi=0$</span>.</li>
</ul>
<p>This is not the most obvious proof. It is what one gets if one notices that <span class="math-container">$V$</span> is an <span class="math-container">$k[X]/(X^k)$</span>-module with <span class="math-container">$X$</span> a acting by the map <span class="math-container">$h$</span>, and that the ring is self-injective. In fact, if you know what this means, there is really no need to even write down the map <span class="math-container">$\Phi$</span> and <span class="math-container">$\pi$</span>, as the fact that <span class="math-container">$W$</span> is a direct summand of <span class="math-container">$V$</span> as a <span class="math-container">$k[X]/(X^k)$</span>-module is immediate, since it is obviously free of rank 1, and therefore injective.</p>
<p>Fun fact: a little note with the details of this argument was rejected by the MAA Monthly because «this is a well-known argument».</p>
| <p>The <strong>true meaning</strong> of the Jordan canonical form is explained in the context of representation theory, namely, of finite dimensional representations of the algebra <span class="math-container">$k[t]$</span> (where <span class="math-container">$k$</span> is your algebraically closed ground field):</p>
<ul>
<li>Uniqueness of the normal form is the Krull-Schmidt theorem, and </li>
<li>existence is the description of the indecomposable modules of <span class="math-container">$k[t]$</span>. </li>
</ul>
<p>Moreover, the description of indecomposable modules follows more or less easily (in a strong sense: if you did not know about the Jordan canonical form, you could guess it by looking at the following:) the simple modules are very easy to describe (this is where algebraically closedness comes in) and the extensions between them (in the sense of homological algebra) are also easy to describe (because <span class="math-container">$k[t]$</span> is an hereditary ring) Putting these things together (plus the Jordan-Hölder theorem) one gets existence.</p>
|
probability | <p>I have been looking at the birthday problem (http://en.wikipedia.org/wiki/Birthday_problem) and I am trying to figure out what the probability of 3 people sharing a birthday in a room of 30 people is. (Instead of 2).</p>
<p>I thought I understood the problem but I guess not since I have no idea how to do it with 3.</p>
| <p>The birthday problem with 2 people is quite easy because finding the probability of the complementary event "all birthdays distinct" is straightforward. For 3 people, the complementary event includes "all birthdays distinct", "one pair and the rest distinct", "two pairs and the rest distinct", etc. To find the exact value is pretty complicated. </p>
<p>The Poisson approximation is pretty good, though. Imagine checking every triple and calling it a "success" if all three have the same birthdays. The total number of successes is approximately Poisson with mean value ${30 \choose 3}/365^2$. Here $30\choose 3$ is the number of triples, and $1/365^2$ is the chance that any particular triple is a success.
The probability of getting at least one success is obtained from the Poisson distribution:
$$ P(\mbox{ at least one triple birthday with 30 people})\approx 1-\exp(-{30 \choose 3}/365^2)=.0300. $$ </p>
<p>You can modify this formula for other values, changing either 30 or 3. For instance,
$$ P(\mbox{ at least one triple birthday with 100 people})\approx 1-\exp(-{100 \choose 3}/365^2)=.7029,$$
$$ P(\mbox{ at least one double birthday with 25 people })\approx 1-\exp(-{25 \choose 2}/365)=.5604.$$</p>
<p>Poisson approximation is very useful in probability, not only for birthday problems! </p>
| <p>An exact formula can be found in Anirban DasGupta, <a href="http://www.math.ucdavis.edu/~tracy/courses/math135A/UsefullCourseMaterial/birthday.pdf">
The matching, birthday and the strong birthday problem: a contemporary review</a>, Journal of Statistical Planning and Inference 130 (2005), 377-389. This paper claims that if $W$ is the number of triplets of people having the same birthday, $m$ is the number of days in the year, and $n$ is the number of people, then</p>
<p>$$ P(W \ge 1) = 1 - \sum_{i=0}^{\lfloor n/2 \rfloor} {m! n! \over i! (n-2i)! (m-n+i)! 2^i m^n} $$</p>
<p>No derivation or source is given; I think the idea is that the term corresponding to $i$ is the probability that there are $i$ birthdays shared by 2 people each and $n-2i$ birthdays with one person each.</p>
<p>In particular, if $m = 365, n = 30$ this formula gives $0.0285$, not far from Byron's approximation.</p>
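<p>Both values are easy to reproduce numerically. A short Python sketch (the function name <code>p_triple</code> is mine; exact arithmetic via <code>fractions</code> avoids rounding issues in the huge factorials):</p>

```python
from fractions import Fraction
from math import comb, exp, factorial

def p_triple(m, n):
    """P(some day is shared by at least 3 of n people, m days per year):
    1 minus the sum over i of P(exactly i doubled days, n-2i singletons)."""
    no_triple = sum(
        Fraction(factorial(m) * factorial(n),
                 factorial(i) * factorial(n - 2 * i)
                 * factorial(m - n + i) * 2**i * m**n)
        for i in range(n // 2 + 1))
    return 1 - no_triple

print(float(p_triple(365, 30)))        # ~0.0285 (exact formula)
print(1 - exp(-comb(30, 3) / 365**2))  # ~0.0300 (Poisson approximation)
```

<p>As a sanity check, the formula gives exactly $0$ for $n=2$, since two people cannot produce a triple birthday.</p>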
|
logic | <p>The most relative that I found on Google for <code>de morgan's 3 variable</code> was: (ABC)' = A' + B' + C'.</p>
<p>I didn't find the answer for my question, therefore I'll ask here:</p>
<p>What is De-Morgan's theorem for <code>(A + B + C)'</code>?</p>
| <p>DeMorgan's Theorem applied to $(A + B + C)'$ is as follows:</p>
<p>$$(A + B + C)' = A'B'C'{}{}{}{}$$</p>
<p>We have $\;\;$NOT(A or B or C) $\;\equiv\;$ Not(A) and Not(B) and Not(C),</p>
<p>which in boolean-algebra equates to $A'B'C'$</p>
<p>Both of these extensions of De Morgan's laws, which are stated for two variables, can be justified precisely because we can apply De Morgan's law in a nested manner and reapply it as needed; in the end, this is equivalent to an immediate extension of its application to three (or more) variables, provided they are all connected by the same connective, $\land$ or $\lor$.</p>
<p>For example, we can write $(A+B+C)' \equiv \big(A + (B+C)\big)' \equiv \big(A' \cdot (B+C)'\big) \equiv A'\cdot (B'C') \equiv A'B'C'$. </p>
<hr>
<p>Indeed, provided we have a negated series of multiple variables all connected by the <em>SAME</em> connective (all and'ed or all or'ed), we can generalize DeMorgan's to even more than three variables, again, due to the associativity of AND and OR connectives. For any arbitrary finite number of connected variables:</p>
<p>So, $$(ABCDEFGHIJ)' = A' + B' + C' + \cdots + H' + I' + J'$$</p>
<p>And $$(A + B + C + \cdots + H + I + J)' = A'B'C'D'E'F'G'H'I'J'$$</p>
| <p>This is one instance where introducing another variable provides some insight. Let $D = B\lor C$.</p>
<p>Then, we have:
$$\begin{align}
\neg(A\lor B\lor C) &= \neg(A\lor D)\\
&=\neg A \land \neg D \\
&=\neg A \land \neg(B\lor C) \\
&=\neg A \land \neg B \land \neg C
\end{align}$$</p>
<p>Thus:
$$\neg(A\lor B \lor C) = \neg A \land \neg B \land \neg C$$</p>
<p>The idea is effectively the same for even more terms. Thus we can have:
$$\neg(P_1 \lor P_2 \lor \cdots \lor P_n) = \neg P_1 \land \neg P_2 \land \cdots \land \neg P_n$$
...and...
$$\neg(P_1 \land P_2 \land \cdots \land P_n) = \neg P_1 \lor \neg P_2 \lor \cdots \lor \neg P_n$$
(Note: I'm more familiar with this notation for logic, so I'm using it. $\lor$ is or, $\land$ is and, and $\neg$ is not.)</p>
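<p>Since there are only $2^3$ truth assignments, both three-variable laws can also be verified exhaustively; a quick Python check:</p>

```python
from itertools import product

# Exhaustively verify both three-variable De Morgan laws
# over all 2**3 truth assignments.
for a, b, c in product([False, True], repeat=3):
    assert (not (a or b or c)) == ((not a) and (not b) and (not c))
    assert (not (a and b and c)) == ((not a) or (not b) or (not c))
print("both laws hold for all 8 assignments")
```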
|
linear-algebra | <p>What is the difference between sum of two vectors and direct sum of two vector subspaces? </p>
<p>My textbook is confusing about it. Any help would be appreciated.</p>
| <p><em>Direct sum</em> is a term for <em>subspaces</em>, while <em>sum</em> is defined for <em>vectors</em>.
We can take the sum of subspaces, but then their intersection need not be $\{0\}$.</p>
<p><strong>Example:</strong> Let $u=(0,1),v=(1,0),w=(1,0)$. Then</p>
<ul>
<li>$u+v=(1,1)$ (sum of vectors),</li>
<li>$\operatorname{span}(v)+\operatorname{span}(w)=\operatorname{span}(v)$, so the sum is not direct,</li>
<li>$\operatorname{span}(u)\oplus\operatorname{span}(v)=\Bbb R^2$, here the sum is direct because $\operatorname{span}(u)\cap\operatorname{span}(v)=\{0\}$,</li>
<li>$u\oplus v $ makes no sense in this context.</li>
</ul>
<p><em>Note that the direct sum of subspaces of a vector space is not the same thing as the direct sum of some vector spaces.</em></p>
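<p>The distinction can be checked numerically: a sum of subspaces $U+V$ is direct iff $\dim U + \dim V = \dim(U+V)$. A small sketch using NumPy (the helper name is mine), with the vectors from the example above:</p>

```python
import numpy as np

def sum_is_direct(U, V):
    """U, V: matrices whose rows span two subspaces.
    The sum is direct iff dim(U) + dim(V) == dim(U + V),
    i.e. iff the ranks add up when the bases are stacked."""
    r = np.linalg.matrix_rank
    return r(U) + r(V) == r(np.vstack([U, V]))

u = np.array([[0., 1.]])
v = np.array([[1., 0.]])
w = np.array([[1., 0.]])
print(sum_is_direct(u, v))  # True:  span(u) + span(v) is direct (= R^2)
print(sum_is_direct(v, w))  # False: span(v) + span(w) = span(v)
```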
| <p>In Axler's Linear Algebra Done Right, he defines the <em>sum of subspaces</em> $U + V$ as </p>
<p>$\{u + v : u \in U, v \in V \}$.</p>
<p>He then says that $W = U \oplus V$ if </p>
<p>(1) $W = U + V$, and</p>
<p>(2) The representation of each $w$ as $u + v$ is <em>unique</em>.</p>
<p>This is a different way of presenting these definitions than most texts, but it's equivalent to other definitions of direct sum.</p>
<p>In anyone's book, the sum and direct sum of subspaces are always defined; and the sum of vectors is always defined; but there's no such thing as a direct sum of vectors. </p>
|
number-theory | <p>How to solve this problem, I can not figure it out:</p>
<p>If we list all the natural numbers below 10 that are multiples of 3 or 5, we get 3, 5, 6 and 9. The sum of these multiples is 23.</p>
<p>Find the sum of all the multiples of 3 or 5 below 1000.</p>
| <p>The previously posted answer isn't correct. The statement of the problem is to sum the multiples of 3 or 5 below 1000, not up to and including 1000. The correct answer is
\begin{eqnarray}
\sum_{k_{1} = 1}^{333} 3k_{1} + \sum_{k_{2} = 1}^{199} 5 k_{2} - \sum_{k_{3} =1}^{66} 15 k_{3} = 166833 + 99500 - 33165 = 233168,
\end{eqnarray}
where we have the used the identity
\begin{eqnarray}
\sum_{k = 1}^{n} k = \tfrac{1}{2} n(n+1).
\end{eqnarray}</p>
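<p>A brute-force check of the closed form, in Python:</p>

```python
# Sum the multiples of 3 or 5 below 1000 directly.
total = sum(k for k in range(1000) if k % 3 == 0 or k % 5 == 0)
print(total)  # 233168

# The three arithmetic-series pieces from the formula above:
s3  = 3  * 333 * 334 // 2   # 166833
s5  = 5  * 199 * 200 // 2   #  99500
s15 = 15 * 66  * 67  // 2   #  33165
assert total == s3 + s5 - s15 == 233168
```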
| <p>First of all, stop thinking on the number $1000$ and turn your
attention to the number $990$ instead. If you solve the problem
for $990$ you just have to add $993, 995, 996$ & $999$ to it for
the final answer. This sum is $(a)=3983$</p>
<p>Count all the #s divisible by $3$: From $3$... to $990$ there are
$330$ terms. The sum is $330(990+3)/2$, so $(b)=163845$ </p>
<p>Count all the #s divisible by $5$: From $5$... to $990$ there are
$198$ terms. The sum is $198(990+5)/2$, so $(c)=98505$</p>
<p>Now, the GCD (greatest common divisor) of $3$ & $5$ is $1$, so the
LCM (least common multiple) should be $3\times 5 = 15$.</p>
<p>This means every number divisible by $15$ was counted twice, when it should be counted only once. Because of this, you have an extra set of numbers, starting at $15$ and going all the way to $990$, that has to be removed from (b) & (c).</p>
<p>Then, from $15$... to $990$ there are $66$ terms and their sum is
$66(990+15)/2$, so $(d)=33165$</p>
<p>The answer for the problem is: $(a)+(b)+(c)-(d) = 233168$</p>
<p>Simple but very fun problem.</p>
|
differentiation | <p>I was wondering if there are functions for which $$f'(x) > f(x)$$ for all $x$. Only examples I could think of were $e^x - c$ and simply $- c$ in which $c > 0$. Also, is there any significance in a function that is always less than its derivative?</p>
<hr>
<p>Edit: Thank you very much for all the replies. It seems almost all functions that apply are exponential by nature...
Are there more examples like - 1/x?</p>
<p>Again are there any applications/physical manifestations of these functions? [for example an object with a velocity that is always greater than its position/acceleration is always greater than its velocity]</p>
| <p>If $y'(x)>y(x)\quad\forall x\in\mathbb{R}$, we can define $f(x)=y'(x)-y(x)$, which is positive for all $x$.
Suppose that $y'(x)$ is a continuous function, so that $f(x)$ is continuous too. With this we can build the differential equation $$y'(x)=y(x)+f(x)$$ whose solutions are given by: $$y(x)=e^{x}\left(c+\int_{x_0}^{x}e^{-s}f(s)ds\right)$$</p>
<blockquote>
<p>Again are there any applications/physical manifestations of these
functions? [for example an object with a velocity that is always
greater than its position/acceleration is always greater than its
velocity]</p>
</blockquote>
<p>I don't know if there's application of this interesting property, but I'm sure that you can't compare velocity with the position because they are not homogeneous quantities.</p>
| <p>Assuming $f(x)>0$, $f:\mathbb{R}\mapsto\mathbb{R}$</p>
<p>$f'(x) > f(x) \iff \frac{d}{dx}\ln(f(x))>1$</p>
<p>So you can turn any function $g$ where $g'(x)>1$ into this type of function by taking the exponential of it:</p>
<p>$\frac{d}{dx}g(x)>1 \implies \frac{d}{dx}\ln(e^{g(x)})>1 \implies \frac{d}{dx} e^{g(x)}>e^{g(x)}$</p>
|
matrices | <p>I can follow the definition of the transpose algebraically, i.e. as a reflection of a matrix across its diagonal, or in terms of dual spaces, but I lack any sort of geometric understanding of the transpose, or even symmetric matrices.</p>
<p>For example, if I have a linear transformation, say on the plane, my intuition is to visualize it as some linear distortion of the plane via scaling and rotation. I do not know how this distortion compares to the distortion that results from applying the transpose, or what one can say if the linear transformation is symmetric. Geometrically, why might we expect orthogonal matrices to be combinations of rotations and reflections?</p>
| <p>To answer your second question first: an orthogonal matrix $O$ satisfies $O^TO=I$, so $\det(O^TO)=(\det O)^2=1$, and hence $\det O = \pm 1$. The determinant of a matrix tells you by what factor the (signed) volume of a parallelepiped is multiplied when you apply the matrix to its edges; therefore hitting a volume in $\mathbb{R}^n$ with an orthogonal matrix either leaves the volume unchanged (so it is a rotation) or multiplies it by $-1$ (so it is a reflection).</p>
<p>To answer your first question: the action of a matrix $A$ can be neatly expressed via its singular value decomposition, $A=U\Lambda V^T$, where $U$, $V$ are orthogonal matrices and $\Lambda$ is a matrix with non-negative values along the diagonal (nb. this makes sense even if $A$ is not square!) The values on the diagonal of $\Lambda$ are called the singular values of $A$, and if $A$ is square and symmetric they will be the absolute values of the eigenvalues.</p>
<p>The way to think about this is that the action of $A$ is first to rotate/reflect to a new basis, then scale along the directions of your new (intermediate) basis, before a final rotation/reflection.</p>
<p>With this in mind, notice that $A^T=V\Lambda^T U^T$, so the action of $A^T$ is to perform the inverse of the final rotation, then scale the new shape along the canonical unit directions, and then apply the inverse of the original rotation.</p>
<p>Furthermore, when $A$ is symmetric, $A=A^T\implies V\Lambda^T U^T = U\Lambda V^T \implies U = V $, therefore the action of a symmetric matrix can be regarded as a rotation to a new basis, then scaling in this new basis, and finally rotating back to the first basis. </p>
| <p>yoyo has succinctly described my intuition for orthogonal transformations in the comments: from <a href="http://en.wikipedia.org/wiki/Polarization_identity">polarization</a> you know that you can recover the inner product from the norm and vice versa, so knowing that a linear transformation preserves the inner product ($\langle x, y \rangle = \langle Ax, Ay \rangle$) is equivalent to knowing that it preserves the norm, hence the orthogonal transformations are precisely the linear <a href="http://en.wikipedia.org/wiki/Isometry">isometries</a>. </p>
<p>I'm a little puzzled by your comment about rotations and reflections because for me a rotation is, <em>by definition</em>, an orthogonal transformation of determinant $1$. (I say this not because I like to dogmatically stick to definitions over intuition but because this definition is elegant, succinct, and agrees with my intuition.) So what intuitive definition of a rotation are you working with here?</p>
<p>As for the transpose and symmetric matrices in general, my intuition here is not geometric. First, here is a comment which may or may not help you. If $A$ is, say, a stochastic matrix describing the transitions in some <a href="http://en.wikipedia.org/wiki/Markov_chain">Markov chain</a>, then $A^T$ is the matrix describing what happens if you run all of those transitions backwards. Note that this is not at all the same thing as inverting the matrix in general. </p>
<p>A slightly less naive comment is that the transpose is a special case of a structure called a <a href="http://en.wikipedia.org/wiki/Dagger_category">dagger category</a>, which is a category in which every morphism $f : A \to B$ has a dagger $f^{\dagger} : B \to A$ (here the adjoint). The example we're dealing with here is implicitly the dagger category of Hilbert spaces, which is relevant to quantum mechanics, but there's another dagger category relevant to a different part of physics: the $3$-<a href="http://en.wikipedia.org/wiki/Cobordism">cobordism category</a> describes how space can change with time in relativity, and here the dagger corresponds to just flipping a cobordism upside-down. (Note the similarity to the Markov chain example.) Since relativity and quantum mechanics are both supposed to describe the time evolution of physical systems, it's natural to ask for ways to relate the two dagger categories I just described, and this is (roughly) part of <a href="http://en.wikipedia.org/wiki/Topological_quantum_field_theory">topological quantum field theory</a>.</p>
<p>The punchline is that for me, "adjoint" is intuitively "time reversal." (Unfortunately, what this has to do with self-adjoint operators as observables in quantum mechanics I'm not sure.)</p>
|
logic | <p>I am doing some homework exercises and stumbled upon this question. I don't know where to start. </p>
<blockquote>
<p>Prove that the union of countably many countable sets is countable.</p>
</blockquote>
<p>Just reading it confuses me. </p>
<p>Any hints or help is greatly appreciated! Cheers!</p>
| <p>Let's start with a quick review of "countable". A set is countable if we can set up a 1-1 correspondence between the set and the natural numbers. As an example, let's take <span class="math-container">$\mathbb{Z}$</span>, which consists of all the integers. Is <span class="math-container">$\mathbb Z$</span> countable?</p>
<p>It may seem uncountable if you pick a naive correspondence, say <span class="math-container">$1 \mapsto 1$</span>, <span class="math-container">$2 \mapsto 2 ...$</span>, which leaves all of the negative numbers unmapped. But if we organize the integers like this:</p>
<p><span class="math-container">$$0$$</span>
<span class="math-container">$$1, -1$$</span>
<span class="math-container">$$2, -2$$</span>
<span class="math-container">$$3, -3$$</span>
<span class="math-container">$$...$$</span></p>
<p>We quickly see that there is a map that works. Map 1 to 0, 2 to 1, 3 to -1, 4 to 2, 5 to -2, etc. So given an element <span class="math-container">$x$</span> in <span class="math-container">$\mathbb Z$</span>, we either have that <span class="math-container">$1 \mapsto x$</span> if <span class="math-container">$x=0$</span>, <span class="math-container">$2x \mapsto x$</span> if <span class="math-container">$x > 0$</span>, or <span class="math-container">$2|x|+1 \mapsto x$</span> if <span class="math-container">$x < 0$</span>. So the integers are countable.</p>
<p>We proved this by finding a map between the integers and the natural numbers. So to show that the union of countably many sets is countable, we need to find a similar mapping. First, let's unpack "the union of countably many countable sets is countable":</p>
<ol>
<li><p>"countable sets" pretty simple. If <span class="math-container">$S$</span> is in our set of sets, there's a 1-1 correspondence between elements of <span class="math-container">$S$</span> and <span class="math-container">$\mathbb N$</span>.</p></li>
<li><p>"countably many countable sets" we have a 1-1 correspondence between <span class="math-container">$\mathbb N$</span> and the sets themselves. In other words, we can write the sets as <span class="math-container">$S_1$</span>, <span class="math-container">$S_2$</span>, <span class="math-container">$S_3$</span>... Let's call the set of sets <span class="math-container">$\{S_n\}, n \in \mathbb N$</span>.</p></li>
<li><p>"union of countably many countable sets is countable". There is a 1-1 mapping between the elements in <span class="math-container">$\mathbb N$</span> and the elements in <span class="math-container">$S_1 \cup S_2 \cup S_3 ...$</span></p></li>
</ol>
<p>So how do we prove this? We need to find a correspondence, of course. Fortunately, there's a simple way to do this. Let <span class="math-container">$s_{nm}$</span> be the <span class="math-container">$mth$</span> element of <span class="math-container">$S_n$</span>. We can do this because <span class="math-container">$S_n$</span> is by definition of the problem countable. We can write the elements of ALL the sets like this:</p>
<p><span class="math-container">$$s_{11}, s_{12}, s_{13} ...$$</span>
<span class="math-container">$$s_{21}, s_{22}, s_{23} ...$$</span>
<span class="math-container">$$s_{31}, s_{32}, s_{33} ...$$</span>
<span class="math-container">$$...$$</span></p>
<p>Now let <span class="math-container">$1 \mapsto s_{11}$</span>, <span class="math-container">$2 \mapsto s_{12}$</span>, <span class="math-container">$3 \mapsto s_{21}$</span>, <span class="math-container">$4 \mapsto s_{13}$</span>, etc. You might notice that if we cross out every element that we've mapped, we're crossing them out in diagonal lines. With <span class="math-container">$1$</span> we cross out the first diagonal, <span class="math-container">$2-3$</span> we cross out the second diagonal, <span class="math-container">$4-6$</span> the third diagonal, <span class="math-container">$7-10$</span> the fourth diagonal, etc. The <span class="math-container">$nth$</span> diagonal requires us to map <span class="math-container">$n$</span> elements to cross it out. Since we never "run out" of elements in <span class="math-container">$\mathbb N$</span>, eventually given any diagonal we'll create a map to every element in it. Since obviously every element in <span class="math-container">$S_1 \cup S_2 \cup S_3 ...$</span> is in one of the diagonals, we've created a 1-1 map between <span class="math-container">$\mathbb N$</span> and the set of sets.</p>
<p>Let's extend this one step further. What if we made <span class="math-container">$s_{11} = 1/1$</span>, <span class="math-container">$s_{12} = 1/2$</span>, <span class="math-container">$s_{21} = 2/1$</span>, etc? Then <span class="math-container">$S_1 \cup S_2 \cup S_3 ... = \mathbb Q^+$</span>! This is how you prove that the rationals are countable. Well, the positive rationals anyway. Can you extend these proofs to show that the rationals are countable?</p>
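<p>The diagonal crossing-out can be written as a generator in Python, which makes the correspondence with $\mathbb N$ concrete (indices start at $1$ here):</p>

```python
from itertools import islice

def diagonals():
    """Enumerate the pairs (n, m) with n, m >= 1 diagonal by diagonal,
    mirroring the crossing-out argument above."""
    d = 2
    while True:
        for n in range(1, d):        # n + m == d on the d-th diagonal
            yield (n, d - n)
        d += 1

print(list(islice(diagonals(), 10)))
# [(1, 1), (1, 2), (2, 1), (1, 3), (2, 2), (3, 1), (1, 4), (2, 3), (3, 2), (4, 1)]
```

<p>Reading the pair $(n, m)$ as "the $m$th element of $S_n$" (or as the fraction $n/m$) shows why every element is eventually reached.</p>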
| <p>@Hovercouch's answer is correct, but the presentation hides a really rather important point that you ought probably to know about. Here it is:</p>
<blockquote>
<p>The argument depends on accepting (a weak version of) the Axiom of Choice!</p>
</blockquote>
<p>Why so?</p>
<p>You are only given that each $S_i$ is <em>countable</em>. You aren't given up front a way of <em>counting</em> any particular $S_i$, so you need to choose a surjective function $f_i\colon \mathbb{N} \to S_i$ to do the counting (in @Hovercouch's notation, $f_m(n) = s_{mn}$). And, crucially, you need to choose such an $f_i$ countably many times (a choice for each $i$). </p>
<p>That's an infinite sequence of choices to make: and it's a version of the highly non-trivial Axiom of Choice that says, yep, it's legitimate to pretend we can do that. </p>
|
probability | <p>This question was asked in a test and I got it right. The answer key gives $\frac12$.</p>
<blockquote>
<p><strong>Problem</strong>: If 3 distinct points are chosen on a plane, find the probability that they form a triangle.</p>
</blockquote>
<p><strong>Attempt 1</strong>: The 3rd point will either be collinear or non-collinear with the other 2 points. Hence the probability is $\frac12$, assuming that collinearity and non-collinearity of the 3 points are equally likely events.</p>
<p><strong>Attempt 2</strong>: Now suppose we take the midpoint (say $M$) of 2 of the points (say $A$ and $B$). We can draw an infinite number of lines passing through $M$, out of which only 1 line will pass through $A$ and $B$. Keeping this in mind, we can choose the 3rd point $C$ on any of those infinite lines, excluding the one passing through $A$ and $B$. Now it seems as if the probability will be tending to 1.</p>
<p>What is wrong with attempt 2? Or is the answer actually 1 and not $\frac12$?</p>
| <p>There is no such thing as a uniform distribution on the plane. Without specifying how the points are chosen, the question is not properly stated. However, if the points are chosen independently from some continuous distribution (absolutely continuous with respect to <a href="https://en.m.wikipedia.org/wiki/Lebesgue_measure">Lebesgue measure</a>), the probability of the third point lying exactly on the line through the first two is $0$.</p>
| <p>Nothing can be said about this as long as nothing has been said about the distribution (justifying the comment of angryavian). </p>
<p>Expressions "at random" or "are chosen" do not speak for themselves because there is no natural uniform distribution on $\mathbb R^2$. </p>
<p>If the distribution is absolutely continuous wrt the Lebesgue measure (i.e. if the distribution has a PDF) then automatically the answer is $1$ because every line in the plane $\mathbb R^2$ has Lebesgue measure $0$ (which is probably what Kaj means to say). </p>
<p>So in that case for any fixed line the probability that the third point is chosen on it equals $0$.</p>
|
linear-algebra | <p>In my linear algebra class, we just talked about determinants. So far I’ve been understanding the material okay, but now I’m very confused. I get that when the determinant is zero, the matrix doesn’t have an inverse. I can find the determinant of a <span class="math-container">$2\times 2$</span> matrix by the formula. Our teacher showed us how to compute the determinant of an <span class="math-container">$n \times n$</span> matrix by breaking it up into the determinants of smaller matrices. Apparently there is a way by summing over a bunch of permutations. But the notation is really hard for me and I don’t really know what’s going on with them anymore. Can someone help me figure out what a determinant is, intuitively, and how all those definitions of it are related?</p>
| <p>Your trouble with determinants is pretty common. They’re a hard thing to teach well, too, for two main reasons that I can see: the formulas you learn for computing them are messy and complicated, and there’s no “natural” way to interpret the value of the determinant, the way it’s easy to interpret the derivatives you do in calculus at first as the slope of the tangent line. It’s hard to believe things like the invertibility condition you’ve stated when it’s not even clear what the numbers mean and where they come from.</p>
<p>Rather than show that the many usual definitions are all the same by comparing them to each other, I’m going to state some general properties of the determinant that I claim are enough to specify uniquely what number you should get when you put in a given matrix. Then it’s not too bad to check that all of the definitions for determinant that you’ve seen satisfy those properties I’ll state.</p>
<p>The first thing to think about if you want an “abstract” definition of the determinant to unify all those others is that it’s not an array of numbers with bars on the side. What we’re really looking for is a function that takes N vectors (the N columns of the matrix) and returns a number. Let’s assume we’re working with real numbers for now.</p>
<p>Remember how those operations you mentioned change the value of the determinant?</p>
<ol>
<li><p>Switching two rows or columns changes the sign.</p>
</li>
<li><p>Multiplying one row by a constant multiplies the whole determinant by that constant.</p>
</li>
<li><p>The general fact that number two draws from: the determinant is <em>linear in each row</em>. That is, if you think of it as a function <span class="math-container">$\det: \mathbb{R}^{n^2} \rightarrow \mathbb{R}$</span>, then <span class="math-container">$$ \det(a \vec v_1 +b \vec w_1 , \vec v_2 ,\ldots,\vec v_n ) = a \det(\vec v_1,\vec v_2,\ldots,\vec v_n) + b \det(\vec w_1, \vec v_2, \ldots,\vec v_n),$$</span> and the corresponding condition in each other slot.</p>
</li>
<li><p>The determinant of the identity matrix <span class="math-container">$I$</span> is <span class="math-container">$1$</span>.</p>
</li>
</ol>
<p>I claim that these facts are enough to define a <em>unique function</em> that takes in N vectors (each of length N) and returns a real number, the determinant of the matrix given by those vectors. I won’t prove that, but I’ll show you how it helps with some other interpretations of the determinant.</p>
<p>In particular, there’s a nice geometric way to think of a determinant. Consider the unit cube in N dimensional space: the set of N vectors of length 1 with coordinates 0 or 1 in each spot. The determinant of the linear transformation (matrix) T is the <em>signed volume of the region gotten by applying T to the unit cube</em>. (Don’t worry too much if you don’t know what the “signed” part means, for now).</p>
<p>How does that follow from our abstract definition?</p>
<p>Well, if you apply the identity to the unit cube, you get back the unit cube. And the volume of the unit cube is 1.</p>
<p>If you stretch the cube by a constant factor in one direction only, the new volume is that constant. And if you stack two blocks together aligned on the same direction, their combined volume is the sum of their volumes: this all shows that the signed volume we have is linear in each coordinate when considered as a function of the input vectors.</p>
<p>Finally, when you switch two of the vectors that define the unit cube, you flip the orientation. (Again, this is something to come back to later if you don’t know what that means).</p>
<p>So there are ways to think about the determinant that aren’t symbol-pushing. If you’ve studied multivariable calculus, you could think about, with this geometric definition of determinant, why determinants (the Jacobian) pop up when we change coordinates doing integration. Hint: a derivative is a linear approximation of the associated function, and consider a “differential volume element” in your starting coordinate system.</p>
<p>It’s not too much work to check that the area of the parallelogram formed by vectors <span class="math-container">$(a,b)$</span> and <span class="math-container">$(c,d)$</span> is <span class="math-container">$\Big|{}^{a\;b}_{c\;d}\Big|$</span>
either: you might try that to get a sense for things.</p>
| <p>You could think of a determinant as a volume. Think of the columns of the matrix as vectors at the origin forming the edges of a skewed box. The determinant gives the volume of that box. For example, in 2 dimensions, the columns of the matrix are the edges of a rhombus.</p>
<p>You can derive the algebraic properties from this geometrical interpretation. For example, if two of the columns are linearly dependent, your box is missing a dimension and so it's been flattened to have zero volume.</p>
|
probability | <p>Suppose $X$ is a real-valued random variable and let $P_X$ denote the distribution of $X$. Then
$$
E(|X-c|) = \int_\mathbb{R} |x-c| dP_X(x).
$$
<a href="http://en.wikipedia.org/wiki/Median#Medians_of_probability_distributions" rel="noreferrer">The medians</a> of $X$ are defined as any number $m \in \mathbb{R}$ such that $P(X \leq m) \geq \frac{1}{2}$ and $P(X \geq m) \geq \frac{1}{2}$.</p>
<p>Why do the medians solve
$$
\min_{c \in \mathbb{R}} E(|X-c|) \, ?
$$</p>
| <p>For <strong>every</strong> real valued random variable $X$,
$$
\mathrm E(|X-c|)=\int_{-\infty}^c\mathrm P(X\leqslant t)\,\mathrm dt+\int_c^{+\infty}\mathrm P(X\geqslant t)\,\mathrm dt
$$
hence the function $u:c\mapsto \mathrm E(|X-c|)$ is differentiable almost everywhere and, where $u'(c)$ exists, $u'(c)=\mathrm P(X\leqslant c)-\mathrm P(X\geqslant c)$. Hence $u'(c)\leqslant0$ if $c$ is smaller than every median, $u'(c)=0$ if $c$ is a median, and $u'(c)\geqslant0$ if $c$ is greater than every median.</p>
<p>The formula for $\mathrm E(|X-c|)$ is the integrated version of the relations $$(x-y)^+=\int_y^{+\infty}[t\leqslant x]\,\mathrm dt$$ and $|x-c|=((-x)-(-c))^++(x-c)^+$, which yield, for every $x$ and $c$,
$$
|x-c|=\int_{-\infty}^c[x\leqslant t]\,\mathrm dt+\int_c^{+\infty}[x\geqslant t]\,\mathrm dt
$$</p>
| <p>Let $f$ be the pdf and let $J(c) = E(|X-c|)$. We want to minimize $J(c)$. Note that $E(|X-c|) = \int_{\mathbb{R}} |x-c| f(x) dx = \int_{-\infty}^{c} (c-x) f(x) dx + \int_c^{\infty} (x-c) f(x) dx.$</p>
<p>To find the minimum, set $\frac{dJ}{dc} = 0$ (since $J$ is convex, a critical point is a global minimum). Hence, we get that,
$$\begin{align}
\frac{dJ}{dc} & = (c-x)f(x) | _{x=c} + \int_{-\infty}^{c} f(x) dx + (x-c)f(x) | _{x=c} - \int_c^{\infty} f(x) dx\\
& = \int_{-\infty}^{c} f(x) dx - \int_c^{\infty} f(x) dx = 0
\end{align}
$$</p>
<p>Hence, we get that $c$ is such that $$\int_{-\infty}^{c} f(x) dx = \int_c^{\infty} f(x) dx$$ i.e. $$P(X \leq c) = P(X > c).$$</p>
<p>However, we also know that $P(X \leq c) + P(X > c) = 1$. Hence, we get that $$P(X \leq c) = P(X > c) = \frac12.$$</p>
<p><strong>EDIT</strong></p>
<p>When $X$ doesn't have a density, all you need to do is make use of integration by parts (assuming $E|X| < \infty$, so that the boundary terms vanish). We get that $$\displaystyle \int_{-\infty}^{c} (c-x) \, dP(x) = \displaystyle \int_{-\infty}^{c} P(x) \, dx.$$ Similarly, we also get that $$\displaystyle \int_{c}^{\infty} (x-c) \, dP(x) = \displaystyle \int_{c}^{\infty} (1-P(x)) \, dx,$$ and the argument goes through as before, with the CDF $P$ playing the role of $\int f$.</p>
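<p>For an empirical distribution the claim is easy to check by brute force: the mean absolute deviation is flat on the interval between the two middle order statistics and grows linearly outside it. A small Python sketch (the data set is chosen arbitrarily):</p>

```python
# Empirical check: for a sample, the mean absolute deviation
# E|X - c| is minimized at the median.
data = [1, 2, 3, 10]

def mad(c):
    return sum(abs(x - c) for x in data) / len(data)

grid = [i / 100 for i in range(-500, 1500)]
best = min(grid, key=mad)            # grid point minimizing E|X - c|
median = 2.5                         # any value in [2, 3] is a median here
assert abs(mad(best) - mad(median)) < 1e-9
print(mad(median))                   # 2.5
```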
|
linear-algebra | <p>In linear algebra and differential geometry, there are various structures which we calculate with in a basis or local coordinates, but which we would like to have a meaning which is basis independent or coordinate independent, or at least, changes in some covariant way under changes of basis or coordinates. One way to ensure that our structures adhere to this principle is to give their definitions without reference to a basis. Often we employ universal properties, functors, and natural transformations to encode these natural, coordinate/basis free structures. But the Riemannian volume form does not appear to admit such a description, nor does its pointwise analogue in linear algebra.</p>
<p>Let me list several examples.</p>
<ul>
<li><p>In linear algebra, an inner product on $V$ is an element of $\operatorname{Sym}^2{V^*}$. The symmetric power is a space which may be defined by a universal property, and constructed via a quotient of a tensor product. No choice of basis necessary. Alternatively an inner product can be given by an $n\times n$ symmetric matrix. The correspondence between the two alternatives is given by $g_{ij}=g(e_i,e_j)$. Calculations are easy with this formulation, but one should check (or require) that the matrix transforms appropriately under changes of basis.</p></li>
<li><p>In linear algebra, a volume form is an element of $\Lambda^n(V^*)$. Alternatively one may define a volume form operator as the determinant of the matrix of the components of $n$ vectors, relative to some basis.</p></li>
<li><p>In linear algebra, an orientation is an element of $(\Lambda^n(V^*)\setminus\{0\})/\mathbb{R}^{>0}$.</p></li>
<li><p>In linear algebra, a symplectic form is an element of $\Lambda^2(V^*)$. Alternatively may be given as some $\omega_{ij}\,dx^i\wedge dx^j$.</p></li>
<li><p>In linear algebra, given a symplectic form, a canonical volume form may be chosen as $\operatorname{vol}=\omega^n$. This operation can be described as a natural transformation $\Lambda^2\to\Lambda^n$. That is, to each vector space $V$, we have a map $\Lambda^2(V)\to\Lambda^n(V)$ taking $\omega\mapsto \omega^n$ and this map commutes with linear maps between spaces.</p></li>
<li><p>In differential geometry, all the above linear algebra concepts may be specified pointwise. Any smooth functor of vector spaces may be applied to the tangent bundle to give a smooth vector bundle. Thus a Riemannian metric is a section of the bundle $\operatorname{Sym}^2{T^*M}$, etc. A symplectic form is a section of the bundle $\Lambda^2(M)$, and the wedge product extends to an operation on sections, and gives a symplectic manifold a volume form. This is a global operation; this definition of a Riemannian metric gives a smoothly varying inner product on every tangent space of the manifold, <em>even if the manifold is not covered by a single coordinate patch</em></p></li>
<li><p>In differential geometry, sometimes vectors are defined as $n$-tuples which transform as $v^i\to \tilde{v}^j\frac{\partial x^i}{\partial \tilde{x}^j}$ under a change of coordinates $x \to \tilde{x}$. But a more invariant definition is to say a vector is a derivation of the algebra of smooth functions. Cotangent vectors can be defined with a slightly different transformation rule, or else invariantly as the dual space to the tangent vectors. Similar remarks hold for higher rank tensors.</p></li>
<li><p>In differential geometry, one defines a connection on a bundle. The local coordinates definition makes it appear to be a tensor, but it does not obey the transformation rules set forth above. It's only clear why when one sees the invariant definition.</p></li>
<li><p>In differential geometry, there is a derivation on the exterior algebra called the exterior derivative. It may be defined as $d\sigma = \partial_j\sigma_I\,dx^j\wedge dx^I$ in local coordinates, or better via an invariant formula $d\sigma(v_0,\dotsc,v_n) = \sum_i(-1)^iv_i(\sigma(v_0,\dotsc,\hat{v_i},\dotsc,v_n)) + \sum_{i<j}(-1)^{i+j}\sigma([v_i,v_j],v_0,\dotsc,\hat{v_i},\dotsc,\hat{v_j},\dotsc,v_n)$</p></li>
<li><p>Finally, the volume form on an oriented inner product space (or volume density on an inner product space) in linear algebra, and its counterpart the Riemannian volume form on an oriented Riemannian manifold (or volume density form on a Riemannian manifold) in differential geometry. Unlike the above examples which all admit global basis-free/coordinate-free definitions, we can define it only in a single coordinate patch or basis at a time, and glue together to obtain a globally defined structure. There are two definitions seen in the literature:</p>
<ol>
<li>choose an (oriented) coordinate neighborhood of a point, so we have a basis for each tangent space. Write the metric tensor in terms of that basis. Pretend that the bilinear form is actually a linear transformation (this can always be done because once a basis is chosen, we have an isomorphism to $\mathbb{R}^n$ which is isomorphic to its dual (via a different isomorphism than that provided by the inner product)). Then take the determinant of resulting mutated matrix, take the square root, multiply by the wedge of the basis one-forms (the positive root may be chosen in the oriented case; in the unoriented case, take the absolute value to obtain a density).</li>
<li>Choose an oriented orthonormal coframe in a neighborhood. Wedge it together. (Finally take the absolute value in the unoriented case).</li>
</ol></li>
</ul>
<p>Does anyone else think that one of these definitions sticks out like a sore thumb? Does it bother anyone else that in linear algebra, the volume form on an oriented inner product space doesn't exist as natural transformation $\operatorname{Sym}^2 \to \Lambda^n$? Do the instructions to "take the determinant of a bilinear form" scream out to anyone else that we're doing it wrong? Does it bother anyone else that in Riemannian geometry, in stark contrast to the superficially similar symplectic case, the volume form cannot be defined using invariant terminology for the whole manifold, but rather requires one to break the manifold into patches, and choose a basis for each? Is there any other structure in linear algebra or differential geometry which suffers from this defect?</p>
<p><strong>Answer:</strong> I've accepted Willie Wong's answer below, but let me also sum it up, since it's spread across several different places. There is a canonical construction of the Riemannian volume form on an oriented vector space, or pseudoform on a vector space. At the level of vector spaces, we may define an inner product on the dual space $V^*$ by $\tilde{g}(\sigma,\tau)=g(u,v)$ where $u,v$ are the dual vectors to $\sigma,\tau$ under the isomorphism between $V,V^*$ induced by $g$ (which is nondegenerate). Then extend $\tilde{g}$ to $\bigotimes^k V^*$ by defining $\hat{g}(a\otimes b\otimes c\otimes\dotsb,\,x\otimes y\otimes z\otimes\dotsb)=\tilde{g}(a,x)\tilde{g}(b,y)\tilde{g}(c,z)\dotsb$. Then the space of alternating forms may be viewed as a subspace of $\bigotimes^k V^*$, and so inherits an inner product as well (note, however, that while the alternating map may be defined canonically, there are varying normalization conventions which do not affect the kernel. I.e. $v\wedge w = k! Alt(v\otimes w)$ or $v\wedge w = Alt(v\otimes w)$). Then $\hat{g}(a\wedge b\wedge\dotsb,\,x\wedge y\wedge\dotsb)=\det[\tilde{g}(a,x)\dotsc]$ (with perhaps a normalization factor required here, depending on how Alt was defined).</p>
<p>Thus $g$ extends to an inner product on $\Lambda^n(V^*)$, which is a 1 dimensional space, so there are only two unit vectors, and if $V$ is oriented, there is a canonical choice of volume form. And in any event, there is a canonical pseudoform.</p>
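<p>The construction summarized above can be checked concretely in coordinates. The sketch below (an added illustration using exact rational arithmetic on a $2\times 2$ example; the dict-of-lists matrix representation is just for this sketch) computes the induced inner product $\tilde g = g^{-1}$ on $V^*$, the induced inner product on $\Lambda^2(V^*)$ via the Gram determinant, and verifies that $\sqrt{\det g}\,dx^1\wedge dx^2$ has unit norm.</p>

```python
from fractions import Fraction as Fr

# Sketch (added illustration, exact 2x2 example): a metric g on V induces
# an inner product g_hat on the top exterior power of V*, and
# sqrt(det g) dx^1 ∧ dx^2 then has g_hat-norm 1.
g = [[Fr(2), Fr(1)],
     [Fr(1), Fr(3)]]                              # a positive definite metric on V
det_g = g[0][0] * g[1][1] - g[0][1] * g[1][0]     # det g = 5

# The induced inner product on V* is given by the inverse matrix g_tilde = g^{-1}.
g_tilde = [[ g[1][1] / det_g, -g[0][1] / det_g],
           [-g[1][0] / det_g,  g[0][0] / det_g]]

# Induced inner product on Lambda^2(V*):
#   g_hat(a ∧ b, x ∧ y) = det [[g~(a,x), g~(a,y)], [g~(b,x), g~(b,y)]],
# so the coordinate top form dx^1 ∧ dx^2 has squared norm det(g_tilde) = 1/det(g).
norm_sq_coordinate_form = (g_tilde[0][0] * g_tilde[1][1]
                           - g_tilde[0][1] * g_tilde[1][0])

# Scaling by sqrt(det g) therefore produces a unit top form: the volume form,
# determined up to the overall sign.
norm_sq_vol = det_g * norm_sq_coordinate_form
print(norm_sq_coordinate_form, norm_sq_vol)   # 1/5 and 1
```

<p>Note that only the <em>squared</em> norm is rational here; the volume form itself carries the irrational factor $\sqrt{\det g}=\sqrt 5$, and the sign ambiguity is exactly the orientation choice discussed in the accepted answer.</p>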
| <p>A few points:</p>
<ul>
<li>It is necessary to define "Riemannian volume forms" a patch at a time: you can have non-orientable Riemannian manifolds. (Symplectic manifolds are however <em>necessarily</em> orientable.) So you cannot just have a <strong>global</strong> construction mapping Riemannian metric to Riemannian volume form. (Consider the Möbius strip with the standard metric.)</li>
<li>It is however possible to give a definition of the Riemannian volume form locally in a way that does not depend on choosing a coordinate basis. This also showcases why there <strong>cannot</strong> be a natural map from <span class="math-container">$\mathrm{Sym}^2\to \Lambda^n$</span> sending inner-products to volume forms. We start from the case of the vector space. Given a vector space <span class="math-container">$V$</span>, we know that <span class="math-container">$V$</span> and <span class="math-container">$V^*$</span> are isomorphic as vector spaces, but not canonically so. However if we also take a positive definite symmetric bilinear form <span class="math-container">$g\in \mathrm{Sym}_+^2(V^*)$</span>, we can pick out a unique compatible isomorphism <span class="math-container">$\flat: V\to V^*$</span> and its inverse <span class="math-container">$\sharp: V^*\to V$</span>. A corollary is that <span class="math-container">$g$</span> extends to (by abuse of notation) an element of <span class="math-container">$\mathrm{Sym}_+^2(V)$</span>. Then by taking wedges of <span class="math-container">$g$</span> you get that the metric <span class="math-container">$g$</span> (now defined on <span class="math-container">$V^*$</span>) extends uniquely to a metric<sup>1</sup> on <span class="math-container">$\Lambda^k(V^*)$</span>. Therefore, <strong>up to sign</strong> there is a unique (using that <span class="math-container">$\Lambda^n(V^*)$</span> is one-dimensional) volume form <span class="math-container">$\omega\in \Lambda^n(V^*)$</span> satisfying <span class="math-container">$g(\omega,\omega) = 1$</span>. <em>But be very careful that this definition is only up to sign.</em></li>
<li>The same construction extends directly to the Riemannian case. Given a differentiable manifold <span class="math-container">$M$</span>, there is a natural map from sections of positive definite symmetric bilinear forms on the tangent space <span class="math-container">$\Gamma\mathrm{Sym}_+^2(T^*M) \to \Gamma\left(\Lambda^n(M)\setminus\{0\} / \pm\right)$</span> to the non-vanishing top forms <em>defined up to sign</em>. From this the usual topological argument shows that if you fix an orientation (either directly in the case where <span class="math-container">$M$</span> is orientable or lifting to the orientable double cover if not) you get a map whose image now is a positively oriented volume form.</li>
</ul>
<p>Let me just summarise by giving the punch line again:</p>
<p>For every inner product <span class="math-container">$g$</span> on a vector space <span class="math-container">$V$</span> there are <strong>two</strong> compatible volume forms in <span class="math-container">$\Lambda^n V^*$</span>: they differ by sign. Therefore the natural mapping from inner products takes image in <span class="math-container">$\Lambda^n V^* / \pm$</span>!</p>
<p>Therefore if you want to construct a map based on fibre-wise operations on <span class="math-container">$TM$</span> sending Riemannian metrics to volume forms, you run the very real risk that, due to the above ambiguity, what you construct is not even continuous anywhere. The "coordinate patch" definition has the advantage that it sweeps this problem under the rug by implicitly choosing one of the two admissible local (in the sense of open charts) orientation. You can do without the coordinate patch if you start, instead, with an orientable Riemannian manifold <span class="math-container">$(M,g,\omega)$</span> and use <span class="math-container">$\omega$</span> to continuously choose one of the two admissible pointwise forms.</p>
<hr />
<p><sup>1</sup>: this used to be linked to a post on MathOverflow, which has since been deleted. So for completeness: the space of <span class="math-container">$k$</span>-tensors is the span of tensors of the form <span class="math-container">$v_1 \otimes \cdots \otimes v_k$</span>, and you can extend <span class="math-container">$g$</span> to the space of <span class="math-container">$k$</span>-tensors by setting
<span class="math-container">$$ g(v_1\otimes\cdots v_k, w_1\otimes\cdots\otimes w_k) := g(v_1, w_1) g(v_2, w_2) \cdots g(v_k, w_k) $$</span>
and extending using bilinearity. The space <span class="math-container">$\Lambda^k(V^*)$</span> embeds into <span class="math-container">$\otimes^k V^*$</span> in the usual way and hence inherits an inner product.</p>
| <p>A coordinate-free definition of <a href="http://en.wikipedia.org/wiki/Volume_form">volume form</a> is in fact well-known and frequently used, e.g. the cited Wikipedia article. I will try to reproduce it the nutshell to the best of my understanding.</p>
<p>Let $V$ be a (real, for certainty) vector space of finite dimension $\dim V = n$. The space of $n$-forms $\Lambda^n (V)$ has dimension 1. Thus $\Lambda^n (V)$ isomorphic to $\mathbb{R}$, however this isomorphism is <em>not canonical</em>: any choice of non-trivial $n$-form $\omega$ can be mapped to $1 \in \mathbb{R}$.</p>
<p><strong>A volume form</strong> on a finite-dimensional vector space $V$ is <em>a choice</em> of a top-rank non-trivial exterior form (skew-symmetric $n$-linear functional) $\omega \in \Lambda^n (V)$. I think that this definition is quite coordinate-free.</p>
<p>Once such a form has been chosen, it can be used to divide the space of bases in $V$ into two classes that are called <em>orientations</em>. There are two of them, <em>positive</em> ($\omega > 0$) and <em>negative</em> ($\omega < 0$). Having a volume form chosen, one can speak about oriented volumes of parallelotopes, for instance.</p>
<p>If for any reason we have an <em>inner product</em> $g$ in $V$ we can make this choice canonical. One needs to consider orthonormal frames (with respect to $g$). The canonical volume form will take value 1 on positively oriented orthonormal frames.</p>
<p><strong>The volume form</strong> of an inner-product space $(V, g)$ is that canonical choice of a volume form. It can be denoted by $Vol_{g}$ provided one also keeps in mind that there is a choice of orientation involved.</p>
<p>Along these lines one can obtain an understanding of the volume form as the <a href="http://en.wikipedia.org/wiki/Hodge_dual">Hodge dual</a> of 1 in a pretty coordinate-free manner.</p>
|
probability | <p>I'm a beginner in mathematics and there is one thing that I've been wondering about recently. The formula for the normal distribution is:</p>
<p>$$f(x)=\frac{1}{\sqrt{2\pi\sigma^2}}e^{-\displaystyle{\frac{(x-\mu)^2}{2\sigma^2}}},$$</p>
<p>However, what are $e$ and $\pi$ doing there? $\pi$ is about circles and the ratio to its diameter, for example. $e$ is mostly about exponential functions, specifically about the fact that $\frac{\mathrm{d}}{\mathrm{d}x} e^x = e^x$.</p>
<p>It is my firm conviction that proofs and articles are available, but could someone perhaps shed some light on this and please explain in a more 'informal' language what they stand for here?</p>
<p>I'm very curious to know as those numbers have very different meanings as far as I'm concerned.</p>
| <p>So I think you want to know "why" $\pi$ and $e$ appear here based on an explanation that goes back to circles and natural logarithms, which are the usual contexts in which one first sees these.</p>
<p>If you see $\pi$, you think there's a circle hidden somewhere. And in fact there is. As has been pointed out, in order for this expression to give a probability density you need $\int_{-\infty}^\infty f(x) \: dx = 1$. (I'm not sure how much you know about integrals -- this just means that the area between the graph of $f(x)$ and the $x$-axis is 1.) But it turns out that this can be derived from $\int_{-\infty}^\infty e^{-x^2} dx = \sqrt{\pi}$. </p>
<p>And it turns out that this is true because the square of this integral is $\pi$. Now, why should the square of this integral have anything to do with circles? Because it's the total volume between the graph of $e^{-(x^2+y^2)}$ (as a function $g(x,y)$ of two variables) and the $xy$-plane. And of course $x^2+y^2$ is just the square of the distance of $(x,y)$ from the origin -- so the volume I just mentioned is rotationally symmetric. (If you know about multiple integration, see the <a href="http://en.wikipedia.org/wiki/Gaussian_integral" rel="noreferrer">Wikipedia article "Gaussian integral", under the heading "brief proof"</a> to see this volume worked out.)</p>
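<p>The value of the Gaussian integral is also easy to confirm numerically. The sketch below (an added check, not part of the original answer) truncates the real line to $[-8,8]$, where the discarded tails are far below double precision, and applies the composite midpoint rule.</p>

```python
import math

# Numerical check (added illustration) of the Gaussian integral
#   ∫_{-∞}^{∞} e^{-x^2} dx = sqrt(pi).
# The tails beyond |x| = 8 contribute less than e^{-64}, negligible in
# double precision, so truncating to [-8, 8] loses nothing visible.
def gaussian_integral(a=-8.0, b=8.0, steps=400_000):
    h = (b - a) / steps
    return h * sum(math.exp(-(a + (k + 0.5) * h) ** 2) for k in range(steps))

approx = gaussian_integral()
print(approx, math.sqrt(math.pi))   # both ≈ 1.7724538509
```

<p>The midpoint rule's error here is of order $h^2$, far smaller than the printed digits.</p>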
<p>As for where $e$ comes from -- perhaps you've seen that the normal probability density can be used to approximate the binomial distribution. In particular, the probability that if we flip $n$ independent coins, each of which has probability $p$ of coming up heads, we'll get $k$ heads is
$$ {n \choose k} p^{k} (1-p)^{n-k} $$
where ${n \choose k} = n!/(k! (n-k)!)$. And then there's
<a href="http://en.wikipedia.org/wiki/Stirling%27s_approximation" rel="noreferrer">Stirling's approximation</a>,
$$ n! \approx \sqrt{2\pi n} (n/e)^{n}. $$
So if you can see why $e$ appears here, you see why it appears in the normal. Now, we can take logs of both sides of $n! = 1 \cdot 2 \cdot \ldots \cdot n$ to get
$$ \log (n!) = \log 1 + \log 2 + \cdots + \log n $$
and we can approximate the sum by an integral,
$$ \log (n!) \approx \int_{1}^{n} \log t \: dt. $$
But the indefinite integral here is $t \log t - t$, and so we get the definite integral
$$ \log (n!) \approx n \log n - n. $$
Exponentiating both sides gives $n! \approx (n/e)^n$. This is off by a factor of $\sqrt{2\pi n}$ but at least explains the appearance of $e$ -- because there are logarithms in the derivation. This often occurs when we deal with probabilities involving lots of events because we have to find products of many terms; we have a well-developed theory for sums of very large numbers of terms (basically, integration) which we can plug into by taking logs.</p>
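<p>Stirling's approximation itself is easy to test numerically; the short check below (an added illustration) shows the relative error shrinking roughly like $1/(12n)$, consistent with the derivation above being exact only in the limit.</p>

```python
import math

# Quick numerical check (added illustration) of Stirling's approximation
#   n! ≈ sqrt(2*pi*n) * (n/e)^n,
# whose relative error behaves like 1/(12n).
def stirling(n):
    return math.sqrt(2 * math.pi * n) * (n / math.e) ** n

for n in (5, 10, 20):
    exact = math.factorial(n)
    rel_err = abs(stirling(n) - exact) / exact
    print(n, rel_err)   # ≈ 0.0165, 0.0083, 0.0042
```
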
| <p>One of the important operations in (continuous) probability is the integral. $e$ shows up there just because it's convenient. If you rearrange it a little you get $$ {1 \over \sqrt{2\pi \sigma^2}} (e^{1 \over 2\sigma^2})^{-(x-\mu)^2},$$ which makes it clear that the $e$ is just a convenient number that makes the initial constant relatively straightforward; using some other number in place of $e$ just rescales $\sigma$ in some way.</p>
<p>The $\pi$ is a little tougher to explain; the fact you just have to "know" (because it requires multivariate calculus to prove) is that $\int_{-\infty}^{\infty} e^{-x^2} dx = \sqrt{\pi}$. This is called the Gaussian integral, because Gauss came up with it. It's also why this distribution (with $\mu = 0, \sigma^2 = 1/2$) is called the Gaussian distribution. So that's why $\pi$ shows up in the constant, so that no matter what values you use for $\sigma$ and $\mu$, $\int_{-\infty}^{\infty} f(x) dx = 1$.</p>
|
linear-algebra | <p>Assuming the axiom of choice, set $\mathbb F$ to be some field (we can assume it has characteristic $0$).</p>
<p>I was told, by more than one person, that if $\kappa$ is an infinite cardinal then the vector space $V=\mathbb F^{(\kappa)}$ (that is, an infinite-dimensional space with basis of cardinality $\kappa$) is <em>not</em> isomorphic (as a vector space) to the algebraic dual, $V^*$.</p>
<p>I have asked several professors in my department, and this seems to be complete folklore. I was directed to some book, but could not find it in there either.</p>
<p>The <a href="http://en.wikipedia.org/wiki/Dual_space#Infinite-dimensional_case">Wikipedia entry</a> tells me that this is indeed not a cardinality issue, for example $\mathbb R^{<\omega}$ (that is all the eventually zero sequences of real numbers) has the same cardinality as its dual $\mathbb R^\omega$ but they are not isomorphic.</p>
<p>Of course being of the same cardinality is necessary but far from sufficient for two vector spaces to be isomorphic.</p>
<p>What I am asking, really, is whether or not it is possible when given a basis and an embedding of a basis of $V$ into $V^*$, to say "<strong>This</strong> guy is not in the span of the embedding"?</p>
<p><strong>Edit:</strong> I read the answers in the link given by Qiaochu. They did not satisfy me too much. </p>
<p>My main problem is this: suppose $\kappa$ is our basis; then $V$ consists of $\{f\colon\kappa\to\mathbb F\Big| |f^{-1}[\mathbb F\setminus\{0\}]|<\infty\}$ (that is finite support), while $V^*=\{f\colon\kappa\to\mathbb F\}$ (that is <em>all</em> the functions).</p>
<p>In particular, the basis for $V$ is given by $f_\alpha(x) = \delta_{\alpha x}$ (i.e. $1$ on $\alpha$, and $0$ elsewhere), while $V^*$ needs a much larger basis. Why can't there be other linear functionals on $V$?</p>
<p><strong>Edit II:</strong> After the discussions in the comments and the answers, I have a better understanding of my question to begin with. I have no qualms that under the axiom of choice given an infinite set $\kappa$ there are a lot more functions from $\kappa$ into $\mathbb F$, than functions with <em>finite support</em> from $\kappa$ into $\mathbb F$. It is also clear to me that the basis of a vector space is actually the set of $\delta$ functions, whereas the basis for the dual is a subset of characteristic functions.</p>
<p>My problem is, if so, <em>why</em> is the dual space composed of all functions from $A$ into $F$? </p>
<p>(And if possible, not to just show by cardinality games that the basis is much larger but actually show the algorithm for the diagonalization.)</p>
| <p>This is just Bill Dubuque's sci.math proof (see <a href="http://groups.google.com/d/msg/sci.math/8aeaiKMLP8o/2IqZlhlzdCIJ">Google Groups</a> or <a href="http://mathforum.org/kb/message.jspa?messageID=7216370">MathForum</a>) mentioned in the comments, expanded.</p>
<p><strong>Edit.</strong> I'm also reorganizing this so that it flows a bit better.</p>
<p>Let $F$ be a field, and let $V$ be the vector space of dimension $\kappa$.</p>
<p>Then $V$ is naturally isomorphic to $\mathop{\bigoplus}\limits_{i\in\kappa}F$, the set of all functions $f\colon \kappa\to F$ of finite support. Let $\epsilon_i$ be the element of $V$ that sends $i$ to $1$ and all $j\neq i$ to $0$ (that is, you can think of it as the $\kappa$-tuple with coefficients in $F$ that has $1$ in the $i$th coordinate, and $0$s elsewhere).</p>
<p><strong>Lemma 1.</strong> If $\dim(V)=\kappa$, and either $\kappa$ or $|F|$ are infinite, then $|V|=\kappa|F|=\max\{\kappa,|F|\}$.</p>
<p><em>Proof.</em> If $\kappa$ is finite, then $V=F^{\kappa}$, so $|V|=|F|^{\kappa}=|F|=|F|\kappa$, since $|F|$ is infinite here; thus the claimed equality holds.</p>
<p>Assume then that $\kappa$ is infinite. Each element of $V$ can be represented uniquely as a linear combination of the $\epsilon_i$. There are $\kappa$ distinct finite subsets of $\kappa$; and for a subset with $n$ elements, we have $|F|^n$ distinct vectors in $V$. </p>
<p>If $\kappa\leq |F|$, then in particular $F$ is infinite, so $|F|^n=|F|$. Hence you have $|F|$ distinct vectors for each of the $\kappa$ distinct subsets (even throwing away the zero vector), so there is a total of $\kappa|F|$ vectors in $V$.</p>
<p>If $|F|\lt\kappa$, then $|F|^n\lt\kappa$ since $\kappa$ is infinite; so there are at most $\kappa$ vectors for each subset, so there are at most $\kappa^2 = \kappa$ vectors in $V$. Since the basis has $\kappa$ elements, $\kappa\leq|V|\leq\kappa$, so $|V|=\kappa=\max\{\kappa,|F|\}$. <strong>QED</strong></p>
<p>Now let $V^*$ be the dual of $V$. Since $V^* = \mathcal{L}(V,F)$ (where $\mathcal{L}(V,W)$ is the vector space of all $F$-linear maps from $V$ to $W$), and $V=\mathop{\oplus}\limits_{i\in\kappa}F$, then again from abstract nonsense we know that
$$V^*\cong \prod_{i\in\kappa}\mathcal{L}(F,F) \cong \prod_{i\in\kappa}F.$$
Therefore, $|V^*| = |F|^{\kappa}$. </p>
<hr/>
<p><strong>Added.</strong> Why is it that if $A$ is the basis of a vector space $V$, then $V^*$ is equivalent to the set of all functions from $A$ to the ground field?</p>
<p>A functional $f\colon V\to F$ is completely determined by its value on a basis (just like any other linear transformation); thus, if two functionals agree on $A$, then they agree everywhere. Hence, there is a natural injection, via restriction, from the set of all linear transformations $V\to F$ (denoted $\mathcal{L}(V,F)$) to the set of all functions $A\to F$, $F^A\cong \prod\limits_{a\in A}F$. Moreover, given any function $g\colon A\to F$, we can extend $g$ linearly to all of $V$: given $\mathbf{x}\in V$, there exists a unique finite subset $\mathbf{a}_1,\ldots,\mathbf{a}_n$ (pairwise distinct) of $A$ and unique scalars $\alpha_1,\ldots,\alpha_n$, none equal to zero, such that $\mathbf{x}=\alpha_1\mathbf{a}_1+\cdots+\alpha_n\mathbf{a}_n$ (that's from the definition of basis as a spanning set that is linearly independent; spanning ensures the existence of at least one such expression, linear independence guarantees that there is at most one such expression); we define $g(\mathbf{x})$ to be
$$g(\mathbf{x})=\alpha_1g(\mathbf{a}_1)+\cdots+\alpha_ng(\mathbf{a}_n).$$
(The image of $\mathbf{0}$ is the empty sum, hence equal to $0$).
Now, let us show that this is linear.</p>
<p>First, note that if $\mathbf{x}=\beta_1\mathbf{a}_{i_1}+\cdots+\beta_m\mathbf{a}_{i_m}$ is <em>any</em> expression of $\mathbf{x}$ as a linear combination of pairwise distinct elements of the basis $A$, then it must be the case that this expression is equal to the one we already had, plus some terms with coefficient equal to $0$. This follows from the linear independence of $A$: take
$$\mathbf{0}=\mathbf{x}-\mathbf{x} = (\alpha_1\mathbf{a}_1+\cdots+\alpha_n\mathbf{a}_n) - (\beta_1\mathbf{a}_{i_1}+\cdots+\beta_m\mathbf{a}_{i_m}).$$
After any cancellation that can be done, you are left with a linear combination of elements in the linearly independent set $A$ equal to $\mathbf{0}$, so all coefficients must be equal to $0$. That means that we can likewise define $g$ as follows: given <strong>any</strong> expression of $\mathbf{x}$ as a linear combination of elements of $A$, $\mathbf{x}=\gamma_1\mathbf{a}_1+\cdots+\gamma_m\mathbf{a}_m$, with $\mathbf{a}_i\in A$, not necessarily distinct, $\gamma_i$ scalars not necessarily equal to $0$, we define
$$g(\mathbf{x}) = \gamma_1g(\mathbf{a}_1)+\cdots+\gamma_mg(\mathbf{a}_m).$$
This will be well-defined by the linear independence of $A$. And now it is very easy to see that $g$ is linear on $V$: if $\mathbf{x}=\gamma_1\mathbf{a}_1+\cdots+\gamma_m\mathbf{a}_m$ and $\mathbf{y}=\delta_{1}\mathbf{a'}_1+\cdots+\delta_n\mathbf{a'}_n$ are expressions for $\mathbf{x}$ and $\mathbf{y}$ as linear combinations of elements of $A$, then
$$\begin{align*}
g(\mathbf{x}+\lambda\mathbf{y}) &= g\Bigl(\gamma_1\mathbf{a}_1+\cdots+\gamma_m\mathbf{a}_m+\lambda(\delta_{1}\mathbf{a'}_1+\cdots+\delta_n\mathbf{a'}_n)\Bigr)\\
&= g\Bigl(\gamma_1\mathbf{a}_1+\cdots+\gamma_m\mathbf{a}_m+ \lambda\delta_{1}\mathbf{a'}_1+\cdots+\lambda\delta_n\mathbf{a'}_n\Bigr)\\
&= \gamma_1g(\mathbf{a}_1) + \cdots + \gamma_mg(\mathbf{a}_m) + \lambda\delta_1g(\mathbf{a'}_1) + \cdots + \lambda\delta_ng(\mathbf{a'}_n)\\
&= g(\mathbf{x})+\lambda g(\mathbf{y}).
\end{align*}$$</p>
<p>Thus, the map $\mathcal{L}(V,F)\to F^A$ is in fact onto, giving a bijection. </p>
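<p>The bijection just described (a functional is determined by an arbitrary set map on the basis, extended linearly) can be sketched concretely. The toy code below is an added illustration only; the dict-based representation of finite-support vectors is an assumption of this sketch, not anything from the answer. It checks that the linear extension of a set map $g\colon A\to F$ is indeed linear.</p>

```python
from fractions import Fraction

# Toy sketch (added illustration): vectors of the free vector space on a
# basis set A are finite-support functions A -> F, modeled here as dicts
# with Fraction values; any set map g : A -> F extends to a unique linear
# functional T, via T(sum a_i e_i) = sum a_i g(e_i).
def extend(g):
    """Extend g : A -> F linearly to a functional on finite-support dicts."""
    def T(v):
        return sum((coeff * g(a) for a, coeff in v.items()), Fraction(0))
    return T

def add(u, v, scale=Fraction(1)):
    """Return u + scale * v in the finite-support representation."""
    w = dict(u)
    for a, c in v.items():
        w[a] = w.get(a, Fraction(0)) + scale * c
    return w

g = lambda a: Fraction(len(a))       # an arbitrary set map on basis labels
T = extend(g)

x = {"a": Fraction(2), "bb": Fraction(-1)}        # 2*e_a - e_bb
y = {"bb": Fraction(3), "ccc": Fraction(1, 2)}    # 3*e_bb + (1/2)*e_ccc

lhs = T(add(x, y, Fraction(5)))      # T(x + 5y)
rhs = T(x) + 5 * T(y)
print(lhs, rhs)                      # equal: T is linear
```

<p>Well-definedness in the sketch is automatic, because a dict stores exactly one coefficient per basis label, mirroring the uniqueness argument above.</p>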
<p>This is the "linear-algebra" proof. The "abstract nonsense proof" relies on the fact that if $A$ is a basis for $V$, then $V$ is isomorphic to $\mathop{\bigoplus}\limits_{a\in A}F$, a direct sum of $|A|$ copies of $A$, and on the following universal property of the direct sum:</p>
<p><strong>Definition.</strong> Let $\mathcal{C}$ be a category, and let $\{X_i\}_{i\in I}$ be a family of objects in $\mathcal{C}$. A <em>coproduct</em> of the $X_i$ is an object $C$ of $\mathcal{C}$ together with a family of morphisms $\iota_j\colon X_j\to C$ such that for every object $X$ and every family of morphisms $g_j\colon X_j\to X$, there exists a unique morphism $\mathbf{f}\colon C\to X$ such that for all $j$, $g_j = \mathbf{f}\circ \iota_j$. </p>
<p>That is, a family of maps from each element of the family is equivalent to a single map from the coproduct (just like a family of maps <em>into</em> the members of a family is equivalent to a single map <em>into</em> the product of the family). In particular, we get that:</p>
<p><strong>Theorem.</strong> Let $\mathcal{C}$ be a category in which the sets of morphisms are sets; let $\{X_i\}_{i\in I}$ be a family of objects of $\mathcal{C}$, and let $(C,\{\iota_j\}_{j\in I})$ be their coproduct. Then for every object $X$ of $\mathcal{C}$ there is a natural bijection
$$\mathrm{Hom}_{\mathcal{C}}(C,X) \longleftrightarrow \prod_{j\in I}\mathrm{Hom}_{\mathcal{C}}(X_j,X).$$</p>
<p>The left hand side is the collection of morphisms from the coproduct to $X$; the right hand side is the collection of all families of morphisms from each element of $\{X_i\}_{i\in I}$ into $X$. </p>
<p>In the vector space case, the fact that a linear transformation is completely determined by its value on a basis is what establishes that a vector space $V$ with basis $A$ is the coproduct of $|A|$ copies of the one-dimensional vector space $F$. So we have that
$$\mathcal{L}(V,W) \leftrightarrow \mathcal{L}\left(\mathop{\oplus}\limits_{a\in A}F,W\right) \leftrightarrow \prod_{a\in A}\mathcal{L}(F,W).$$
But a linear transformation from $F$ to $W$ is equivalent to a map from the basis $\{1\}$ of $F$ into $W$, so $\mathcal{L}(F,W) \cong W$. Thus, we get that if $V$ has a basis of cardinality $\kappa$ (finite or infinite), we have:
$$\mathcal{L}(V,F) \leftrightarrow \mathcal{L}\left(\mathop{\oplus}_{i\in\kappa}F,F\right) \leftrightarrow \prod_{i\in\kappa}\mathcal{L}(F,F) \leftrightarrow \prod_{i\in\kappa}F = F^{\kappa}.$$</p>
<hr/>
<p><strong>Lemma 2.</strong> If $\kappa$ is infinite, then $\dim(V^*)\geq |F|$. </p>
<p><em>Proof.</em> If $F$ is finite, then the inequality is immediate. Assume then that $F$ is infinite. Let $c\in F$, $c\neq 0$. Define $\mathbf{f}_c\colon V\to F$ by $\mathbf{f}_c(\epsilon_n) = c^n$ if $n\in\omega$, and $\mathbf{f}_c(\epsilon_i)=0$ if $i\geq\omega$. These are linearly independent:</p>
<p>Suppose that $c_1,\ldots,c_m$ are pairwise distinct nonzero elements of $F$, and that $\alpha_1\mathbf{f}_{c_1} + \cdots + \alpha_m\mathbf{f}_{c_m} = \mathbf{0}$. Then for each $i\in\omega$ we have
$$\alpha_1 c_1^i + \cdots + \alpha_m c_m^i = 0.$$
Viewing the first $m$ of these equations as linear equations in the $\alpha_j$, the corresponding coefficient matrix is the <a href="http://en.wikipedia.org/wiki/Vandermonde_matrix">Vandermonde matrix</a>,
$$\left(\begin{array}{cccc}
1 & 1 & \cdots & 1\\
c_1 & c_2 & \cdots & c_m\\
c_1^2 & c_2^2 & \cdots & c_m^2\\
\vdots & \vdots & \ddots & \vdots\\
c_1^{m-1} & c_2^{m-1} & \cdots & c_m^{m-1}
\end{array}\right),$$
whose determinant is $\prod\limits_{1\leq i\lt j\leq m}(c_j-c_i)\neq 0$. Thus, the system has a unique solution, to wit $\alpha_1=\cdots=\alpha_m = 0$. </p>
<p>Thus, the $|F|$ linear functionals $\mathbf{f}_c$ are linearly independent, so $\dim(V^*)\geq |F|$. <strong>QED</strong></p>
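<p>A finite truncation of the key step can be verified computationally (an added check, not part of the original proof): for pairwise distinct $c_1,\ldots,c_m$ the Vandermonde determinant comes out equal to $\prod_{i<j}(c_j-c_i)$ and hence nonzero, so the system forces $\alpha_1=\cdots=\alpha_m=0$.</p>

```python
from fractions import Fraction
import itertools

# Finite check (added illustration): for pairwise distinct c_1, ..., c_m
# the Vandermonde matrix (c_j^i), i = 0, ..., m-1, has determinant
# prod_{i<j} (c_j - c_i) != 0, so the only solution of the system
# alpha_1 c_1^i + ... + alpha_m c_m^i = 0 is alpha = 0.
def det(M):
    """Exact determinant by permutation expansion (fine for small m)."""
    m = len(M)
    total = Fraction(0)
    for perm in itertools.permutations(range(m)):
        sign = 1
        for i in range(m):               # parity via inversion count
            for j in range(i + 1, m):
                if perm[i] > perm[j]:
                    sign = -sign
        prod = Fraction(1)
        for i in range(m):
            prod *= M[i][perm[i]]
        total += sign * prod
    return total

cs = [Fraction(1), Fraction(2), Fraction(3), Fraction(5)]
V = [[c ** i for c in cs] for i in range(len(cs))]   # rows = powers 0..3

expected = Fraction(1)
for i in range(len(cs)):
    for j in range(i + 1, len(cs)):
        expected *= cs[j] - cs[i]

print(det(V), expected)   # both 48, and nonzero
```
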
<p>To recapitulate: Let $V$ be a vector space of dimension $\kappa$ over $F$, with $\kappa$ infinite. Let $V^*$ be the dual of $V$. Then $V\cong\mathop{\bigoplus}\limits_{i\in\kappa}F$ and $V^*\cong\prod\limits_{i\in\kappa}F$.</p>
<p>Let $\lambda$ be the dimension of $V^*$. Then by Lemma 1 we have $|V^*| = \lambda|F|$. </p>
<p>By Lemma 2, $\lambda=\dim(V^*)\geq |F|$, so $|V^*| = \lambda$. On the other hand, since $V^*\cong\prod\limits_{i\in\kappa}F$, then $|V^*|=|F|^{\kappa}$.</p>
<p>Therefore, $\lambda= |F|^{\kappa}\geq 2^{\kappa} \gt \kappa$. Thus, $\dim(V^*)\gt\dim(V)$, so $V$ is not isomorphic to $V^*$. </p>
<hr/>
<p><strong>Added${}^{\mathbf{2}}$.</strong> Some results on vector spaces and bases.</p>
<p>Let $V$ be a vector space, and let $A$ be a maximal linearly independent set (that is, $A$ is linearly independent, and if $B$ is any subset of $V$ that properly contains $A$, then $B$ is linearly dependent). </p>
<p>In order to guarantee that there <em>is</em> a maximal linearly independent set in any vector space, one needs to invoke the Axiom of Choice in some manner, since the existence of such a set is, as we will see below, equivalent to a basis; however, here we are assuming that we already have such a set given. I believe that the Axiom of Choice is <strong>not</strong> involved in any of what follows.</p>
<p><strong>Proposition.</strong> $\mathrm{span}(A) = V$.</p>
<p><em>Proof.</em> Since $A\subseteq V$, then $\mathrm{span}(A)\subseteq V$. Let $\mathbf{v}\in V$. If $v\in A$, then $v\in\mathrm{span}(A)$. If $v\notin A$, then $B=A\cup\{v\}$ is linearly dependent by maximality. Therefore, there exist finitely many pairwise distinct elements $a_1,\ldots,a_m$ of $B$ and scalars $\alpha_1,\ldots,\alpha_m$, not all zero, such that $\alpha_1a_1+\cdots+\alpha_ma_m=\mathbf{0}$. Since $A$ is linearly independent, at least one of the $a_i$ must be equal to $v$; say $a_1$. Moreover, $v$ must occur with a nonzero coefficient, again by the linear independence of $A$. So $\alpha_1\neq 0$, and we can then write
$$v = a_1 = \frac{1}{\alpha_1}(-\alpha_2a_2 -\cdots - \alpha_ma_m)\in\mathrm{span}(A).$$
This proves that $V\subseteq \mathrm{span}(A)$. $\Box$</p>
<p><strong>Proposition.</strong> Let $V$ be a vector space, and let $X$ be a linearly independent subset of $V$. If $v\in\mathrm{span}(X)$, then any two expressions of $v$ as linear combinations of elements of $X$ differ only in having extra summands of the form $0x$ with $x\in X$.</p>
<p><em>Proof.</em> Let $v = a_1x_1+\cdots+a_nx_n = b_1y_1+\cdots+b_my_m$ be two expressions of $v$ as linear combinations of $X$. </p>
<p>We may assume without loss of generality that $n\leq m$. Reordering the $x_i$ and the $y_j$ if necessary, we may assume that $x_1=y_1$, $x_2=y_2,\ldots,x_{k}=y_k$ for some $k$, $0\leq k\leq n$, and $x_1,\ldots,x_k,x_{k+1},\ldots,x_n,y_{k+1},\ldots,y_m$ are pairwise distinct. Then
$$\begin{align*}
\mathbf{0} &= v-v\\
&=(a_1x_1+\cdots+a_nx_n)-(b_1y_1+\cdots+b_my_m)\\
&= (a_1-b_1)x_1 + \cdots + (a_k-b_k)x_k + a_{k+1}x_{k+1}+\cdots + a_nx_n - b_{k+1}y_{k+1}-\cdots - b_my_m.
\end{align*}$$
As this is a linear combination of pairwise distinct elements of $X$ equal to $\mathbf{0}$, it follows from the linear independence of $X$ that $a_{k+1}=\cdots=a_n=0$, $b_{k+1}=\cdots=b_m=0$, and $a_1=b_1$, $a_2=b_2,\ldots,a_k=b_k$. That is, the two expressions of $v$ as linear combinations of elements of $X$ differ only in that there are extra summands of the form $0x$ with $x\in X$ in them. <strong>QED</strong></p>
<p><strong>Corollary.</strong> Let $V$ be a vector space, and let $A$ be a maximal independent subset of $V$. If $W$ is a vector space, and $f\colon A\to W$ is any function, then there exists a unique linear transformation $T\colon V\to W$ such that $T(a)=f(a)$ for each $a\in A$.</p>
<p><em>Proof.</em> <strong>Existence.</strong> Given $v\in V$, we have $v\in\mathrm{span}(A)$. Therefore, we can express $v$ as a linear combination of elements of $A$,
$v = \alpha_1a_1+\cdots+\alpha_na_n$. Define
$$T(v) = \alpha_1f(a_1)+\cdots+\alpha_nf(a_n).$$
Note that $T$ is well-defined: if $v = \beta_1b_1+\cdots+\beta_mb_m$ is any other expression of $v$ as a linear combination of elements of $A$, then by the proposition above the two expressions differ only in summands of the form $0x$; but these summands do not affect the value of $T$. </p>
<p>Note also that $T$ is linear, arguing as above. Finally, since $a\in A$ can be expressed as $a=1a$, then $T(a) = 1f(a) = f(a)$, so the restriction of $T$ to $A$ is equal to $f$.</p>
<p><strong>Uniqueness.</strong> If $U$ is any linear transformation $V\to W$ such that $U(a)=f(a)$ for all $a\in A$, then for every $v\in V$, write $v=\alpha_1a_1+\cdots+\alpha_na_n$ with $a_i\in A$. Then
$$\begin{align*}
U(v) &= U(\alpha_1a_1+\cdots + \alpha_na_n)\\
&= \alpha_1U(a_1) + \cdots + \alpha_n U(a_n)\\
&= \alpha_1f(a_1)+\cdots + \alpha_n f(a_n)\\
&= \alpha_1T(a_1) + \cdots + \alpha_n T(a_n)\\
&= T(\alpha_1a_1+\cdots+\alpha_na_n)\\
&= T(v).\end{align*}$$
Thus, $U=T$. <strong>QED</strong></p>
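In finite dimensions the Corollary is easy to see in action. The following Python sketch (an illustration only; the helper name `extend_linearly` is made up) builds $T$ from the values of $f$ on the standard basis of $\mathbb{R}^3$, with $W=\mathbb{R}$:

```python
# Finite-dimensional sketch of the Corollary: V = R^3 with the standard
# basis, W = R; `extend_linearly` is a hypothetical helper name.
def extend_linearly(f_on_basis):
    """Given the values f(a_1), ..., f(a_n) on a basis, return the unique
    linear map T with T(a_i) = f(a_i)."""
    def T(v):
        # v = alpha_1 a_1 + ... + alpha_n a_n  =>  T(v) = sum alpha_i f(a_i)
        return sum(alpha * fa for alpha, fa in zip(v, f_on_basis))
    return T

f_values = [2.0, -1.0, 5.0]        # f(e_1), f(e_2), f(e_3), chosen freely
T = extend_linearly(f_values)

assert T([1, 0, 0]) == 2.0         # the restriction of T to the basis is f
assert T([2, 3, 4]) == 2*2.0 + 3*(-1.0) + 4*5.0   # linearity forces 21.0
```

The uniqueness half is visible here too: any linear map agreeing with `f_values` on the basis is forced to these same values everywhere.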
| <p>The "this guy" you're looking for is just the function that takes each of your basis vectors and sends them to 1.</p>
<p>Note that this is <em>not</em> in the span of the set of functions that each take a single basis vector to 1, and all others to 0, because the span is defined to be the set of <em>finite linear combinations</em> of basis vectors. And a finite linear combination of things that have finite-dimensional support will still have finite-dimensional support, and thus can't send infinitely many independent vectors all to 1.</p>
<p>You may want to say, "But look! If I add up these infinitely many functions, I clearly get a function that sends all my basis vectors to 1!" But this is actually a very tricky process. What you need is a notion of <em>convergence</em> if you want to add infinitely many things, which isn't always obvious how to define.</p>
<p>In the end, it boils down to a cardinality issue - not of the vector spaces themselves, but of the dimensions. In the example you give, $\mathbb{R}^{<\omega}$ has countably infinite dimension, but the dimension of its dual is uncountable.</p>
<p>(Added, in response to comment below): Think of all the possible ways you can have a function which is 1 on some set of your basis vectors and 0 on the rest. The only ways you can do these and stay in the span of your basis vectors is if you take the value 1 on only <em>finitely many</em> of those vectors. Since your starting space was infinite-dimensional, there's an uncountable number of such functions, and so uncountably many of them lie outside the span of your basis. You can only ever incorporate finitely many of them by "adding" them in one at a time (or even countably many at a time), so you'll never establish the vector space isomorphism you're looking for.</p>
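The finite-support point can be made concrete with a small sketch (illustrative names only): represent an element of the span of the coordinate functionals $\delta_i$ by the dict of its nonzero coefficients. Evaluated on the basis vector $e_j$ it returns the $j$-th coefficient, so it vanishes on all but finitely many basis vectors, which the "all ones" functional never does.

```python
# Illustrative sketch: elements of span{delta_i} as dicts of nonzero coefficients.
def finite_combination(coeffs):
    """The functional sum_i coeffs[i] * delta_i, evaluated on basis vector e_j."""
    return lambda j: coeffs.get(j, 0)

phi = finite_combination({0: 3, 5: -2, 7: 1})   # lies in the span of the delta_i
all_ones = lambda j: 1                          # sends every basis vector to 1

# phi kills every basis vector outside its finite support ...
assert [phi(j) for j in range(10)] == [3, 0, 0, 0, 0, -2, 0, 1, 0, 0]
# ... while all_ones vanishes on no basis vector, however far out we look
assert all(all_ones(j) == 1 for j in range(10**6, 10**6 + 5))
```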
|
probability | <p>Let $H_n$ denote the $n$th harmonic number; i.e., $H_n = \sum\limits_{i=1}^n \frac{1}{i}$. I've got a couple of proofs of the following limiting expression, which I don't think is that well-known: $$\lim_{n \to \infty} \left(H_n - \frac{1}{2^n} \sum_{k=1}^n \binom{n}{k} H_k \right) = \log 2.$$
I'm curious about other ways to prove this expression, and so I thought I would ask here to see if anybody knows any or can think of any. I would particularly like to see a combinatorial proof, but that might be difficult given that we're taking a limit and we have a transcendental number on one side. I'd like to see any proofs, though. I'll hold off from posting my own for a day or two to give others a chance to respond first.</p>
<p>(The probability tag is included because the expression whose limit is being taken can also be interpreted probabilistically.)</p>
<p><HR></p>
<p>(<strong>Added</strong>: I've accepted Srivatsan's first answer, and I've posted my two proofs for those who are interested in seeing them. </p>
<p>Also, the sort of inverse question may be of interest. Suppose we have a function $f(n)$ such that $$\lim_{n \to \infty} \left(f(n) - \frac{1}{2^n} \sum_{k=0}^n \binom{n}{k} f(k) \right) = L,$$ where $L$ is finite and nonzero. What can we say about $f(n)$? <a href="https://math.stackexchange.com/questions/8415/asymptotic-difference-between-a-function-and-its-binomial-average">This question was asked</a> and <a href="https://math.stackexchange.com/questions/8415/asymptotic-difference-between-a-function-and-its-binomial-average/22582#22582">answered a while back</a>; it turns out that $f(n)$ must be $\Theta (\log n)$. More specifically, we must have $\frac{f(n)}{\log_2 n} \to L$ as $n \to \infty$.) </p>
| <p>I made a quick estimate in my comment. The basic idea is that the binomial distribution $2^{-n} \binom{n}{k}$ is concentrated around $k= \frac{n}{2}$. Simply plugging this value in the limit expression, we get $H_n-H_{n/2} \sim \ln 2$ for large $n$. Fortunately, formalizing the intuition isn't that hard. </p>
<p>Call the giant sum $S$. Notice that $S$ can be written as $\newcommand{\E}{\mathbf{E}}$
$$
\sum_{k=0}^{\infty} \frac{1}{2^{n}} \binom{n}{k} (H(n) - H(k)) = \sum_{k=0}^{\infty} \Pr[X = k](H(n) - H(k)) = \E \left[ H(n) - H(X) \right],
$$
where $X$ is distributed according to the binomial distribution $\mathrm{Bin}(n, \frac12)$. We need the following two facts about $X$: </p>
<ul>
<li>With probability $1$, $0 \leqslant H(n) - H(X) \leqslant H(n) = O(\ln n)$.</li>
<li>From the <a href="http://en.wikipedia.org/wiki/Bernstein_inequalities_%28probability_theory%29" rel="noreferrer">Bernstein inequality</a>, for any $\varepsilon \gt 0$, we know that $X$ lies in the range $\frac{1}{2}n (1\pm \varepsilon)$, except with probability at most $e^{- \Omega(n \varepsilon^2) }$. </li>
</ul>
<p>Since the function $x \mapsto H(n) - H(x)$ is monotone decreasing, we have
$$
S \leqslant \color{Red}{H(n)} \color{Blue}{-H\left( \frac{n(1-\varepsilon)}{2} \right)} + \color{Green}{\exp (-\Omega(n \varepsilon^2)) \cdot O(\ln n)}.
$$
Plugging in the standard estimate $H(n) = \ln n + \gamma + O\Big(\frac1n \Big)$ for the harmonic sum, we get:
$$
\begin{align*}
S
&\leqslant \color{Red}{\ln n + \gamma + O \Big(\frac1n \Big)} \color{Blue}{- \ln \left(\frac{n(1-\varepsilon)}{2} \right) - \gamma + O \Big(\frac1n \Big)} +\color{Green}{\exp (-\Omega(n \varepsilon^2)) \cdot O(\ln n)}
\\ &\leqslant \ln 2 - \ln (1- \varepsilon) + o_{n \to \infty}(1)
\leqslant \ln 2 + O(\varepsilon) + o_{n \to \infty}(1). \tag{1}
\end{align*}
$$</p>
<p>An analogous argument gets the lower bound
$$
S \geqslant \ln 2 - \ln (1+\varepsilon) - o_{n \to \infty}(1) \geqslant \ln 2 - O(\varepsilon) - o_{n \to \infty}(1). \tag{2}
$$
Since the estimates $(1)$ and $(2)$ hold for all $\varepsilon > 0$, it follows that $S \to \ln 2$ as $n \to \infty$. </p>
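A quick numerical sanity check of this argument (a Python sketch; `math.comb` gives exact binomial weights, and $n=50$ already puts $S$ within machine precision of $\ln 2$):

```python
import math

def H(m):
    """Harmonic number H_m, with H_0 = 0."""
    return sum(1.0 / i for i in range(1, m + 1))

n = 50
# S = E[H(n) - H(X)] for X ~ Bin(n, 1/2), using exact binomial weights
S = sum(math.comb(n, k) * (H(n) - H(k)) for k in range(n + 1)) / 2**n

assert abs(S - math.log(2)) < 1e-9   # already ln 2 to high accuracy at n = 50
# the heuristic behind the proof: plugging in the typical value k = n/2
# of the binomial already lands close to ln 2
assert abs(H(n) - H(n // 2) - math.log(2)) < 0.02
```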
| <p>Here's a different proof. We will simplify the second term as follows:
$$
\begin{eqnarray*}
\frac{1}{2^n} \sum\limits_{k=0}^n \left[ \binom{n}{k} \sum\limits_{t=1}^{k} \frac{1}{t} \right]
&=&
\frac{1}{2^n} \sum\limits_{k=0}^n \left[ \binom{n}{k} \sum\limits_{t=1}^{k} \int_{0}^1 x^{t-1} dx \right]
\\ &=&
\frac{1}{2^n} \int_{0}^1 \sum\limits_{k=0}^n \left[ \binom{n}{k} \sum\limits_{t=1}^{k} x^{t-1} \right] dx
\\ &=&
\frac{1}{2^n} \int_{0}^1 \sum\limits_{k=0}^n \left[ \binom{n}{k} \cdot \frac{x^k-1}{x-1} \right] dx
\\ &=&
\frac{1}{2^n} \int_{0}^1 \frac{\sum\limits_{k=0}^n \binom{n}{k} x^k- \sum\limits_{k=0}^n \binom{n}{k}}{x-1} dx
\\ &=&
\frac{1}{2^n} \int_{0}^1 \frac{(x+1)^n- 2^n}{x-1} dx.
\end{eqnarray*}
$$</p>
<p>Make the substitution $y = \frac{x+1}{2}$, so the new limits are now $1/2$ and $1$. The integral then changes to:
$$
\begin{eqnarray*}
\int_{1/2}^1 \frac{y^n- 1}{y-1} dy
&=&
\int_{1/2}^1 (1+y+y^2+\ldots+y^{n-1}) dy
\\ &=&
\left. y + \frac{y^2}{2} + \frac{y^3}{3} + \ldots + \frac{y^n}{n} \right|_{1/2}^1
\\ &=&
H_n - \sum_{i=1}^n \frac{1}{i} \left(\frac{1}{2} \right)^i.
\end{eqnarray*}
$$
Notice that conveniently $H_n$ is the first term in our function. Rearranging, the expression under the limit is equal to:
$$
\sum_{i=1}^n \frac{1}{i} \left(\frac{1}{2} \right)^i.
$$
The final step is to note that this is just the $n$th partial sum of the Taylor series expansion of $f(y) = -\ln(1-y)$ at $y=1/2$. Therefore, as $n \to \infty$, this sequence approaches the value $$-\ln \left(1-\frac{1}{2} \right) = \ln 2.$$</p>
<p><em>ADDED:</em> As Didier's comments hint, this proof also shows that the given sequence, call it $u_n$, is monotonic and hence always smaller than $\ln 2$. Moreover, since $\ln 2 - u_n = \sum_{i=n+1}^\infty \frac{1}{i2^i}$, bounding the tail below by its first term and above by a geometric series gives the tight error estimate:
$$
\frac{1}{(n+1)2^{n+1}} < \ln 2 - u_n < \frac{1}{(n+1)2^n}, \ \ \ \ (n \geq 1).
$$</p>
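Both the closed form $u_n=\sum_{i=1}^n \frac{1}{i2^i}$ and the limit are easy to confirm numerically (a short Python check; `math.comb` gives exact binomial coefficients):

```python
import math

def lhs(n):
    """H_n - 2^{-n} * sum_k C(n,k) H_k, computed directly."""
    H = [0.0]
    for i in range(1, n + 1):
        H.append(H[-1] + 1.0 / i)
    return H[n] - sum(math.comb(n, k) * H[k] for k in range(n + 1)) / 2**n

def rhs(n):
    """Partial Maclaurin sum of -ln(1 - y) at y = 1/2."""
    return sum(1.0 / (i * 2**i) for i in range(1, n + 1))

for n in range(1, 30):
    assert abs(lhs(n) - rhs(n)) < 1e-12
assert abs(rhs(40) - math.log(2)) < 1e-12   # the partial sums converge to ln 2
```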
|
linear-algebra | <p>The dot product of vectors $\mathbf{a}$ and $\mathbf{b}$ is defined as:
$$\mathbf{a} \cdot \mathbf{b} =\sum_{i=1}^{n}a_{i}b_{i}=a_{1}b_{1}+a_{2}b_{2}+\cdots +a_{n}b_{n}$$</p>
<p>What about the following quantity?
$$\mathbf{a} \star \mathbf{b} = \prod_{i=1}^{n} (a_{i} + b_{i}) = (a_{1} +b_{1})\,(a_{2}+b_{2})\cdots \,(a_{n}+b_{n})$$</p>
<p>Does it have a name?</p>
<p>"Dot sum" seems largely inappropriate. Come to think of it, I find it interesting that the dot product is named as such, given that it is, after all, a "sum of products" (although I am aware that properties of $\mathbf{a} \cdot{} \mathbf{b}$, in particular distributivity, make it a meaningful name).</p>
<p>$\mathbf{a} \star \mathbf{b}$ is commutative and has the following property:</p>
<p>$\mathbf{a} \star (\mathbf{b} + \mathbf{c}) = \mathbf{b} \star (\mathbf{a} + \mathbf{c}) = \mathbf{c} \star (\mathbf{a} + \mathbf{b})$</p>
| <p>Too long for a comment, but I'll list some properties below, in hopes some idea comes up.</p>
<ul>
<li>${\bf a}\star {\bf b}={\bf b}\star {\bf a}$;</li>
<li>$(c{\bf a})\star (c {\bf b})=c^n ({\bf a}\star {\bf b})$;</li>
<li>$({\bf a+b})\star {\bf c} = ({\bf a+c})\star {\bf b} = ({\bf b+c})\star {\bf a}$;</li>
<li>${\bf a}\star {\bf a} = 2^n a_1\cdots a_n$;</li>
<li>${\bf a}\star {\bf 0} = a_1\cdots a_n$;</li>
<li>$(c{\bf a})\star {\bf b} = c^n ({\bf a}\star ({\bf b}/c))$;</li>
<li>${\bf a}\star (-{\bf a}) = 0$;</li>
<li>${\bf 1}\star {\bf 0} = 1$, where ${\bf 1} = (1,\ldots,1)$;</li>
<li>$\sigma({\bf a}) \star \sigma({\bf b}) = {\bf a}\star {\bf b}$, where $\sigma \in S_n$ acts as $\sigma(a_1,\ldots,a_n) \doteq (a_{\sigma(1)},\ldots,a_{\sigma(n)})$.</li>
</ul>
| <p>I don't know if it has a particular name, but it is essentially a peculiar type of convolution.
Note that
$$ \prod_{i}(a_{i} + b_{i}) = \sum_{X \subseteq [n]} \left( \prod_{i \in X} a_{i} \right) \left( \prod_{i \in X^{c}} b_{i} \right), $$
where $X^{c} = [n] \setminus X$ and $[n] = \{1, 2, \dots n\}$. In other words, if we define $f_{a}, f_{b}$ via
$$ f_{a}(X) = \prod_{i \in X}a_{i}, $$
then
$$ a \star b = (f_{a} \ast f_{b})([n]) $$
where $\ast$ denotes the convolution product
$$ (f \ast g)(Y) = \sum_{X \subseteq Y} f(X)g(Y \setminus X). $$
To learn more about this, I would recommend reading about multiplicative functions and Moebius inversion in number theory. I don't know if there is a general theory concerning this, but the notion of convolutions comes up in many contexts (see this <a href="https://en.wikipedia.org/wiki/Convolution" rel="noreferrer">wikipedia article</a>, and another on <a href="https://en.wikipedia.org/wiki/Dirichlet_convolution" rel="noreferrer">its role in number theory</a>).</p>
<p>Edit:
For what it's worth, the operation is not a vector operation in the linear-algebraic sense. That is, it is not preserved under change-of-basis. In fact, it is not even preserved under orthogonal change-of-basis (aka rotation). For example, consider $a = (3,4) \in \mathbb{R}^{2}$. Note that $a \star a = 48$. Then we apply the proper rotation $T$ defined by $T(a) = (5, 0)$. Then we see $T(a) \star T(a) = 0$.</p>
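Both claims are easy to verify by brute force (a Python sketch; note that $a\star a = 2^2\cdot 3\cdot 4 = 48$ for $a=(3,4)$):

```python
import math
from itertools import combinations

def star(a, b):
    return math.prod(x + y for x, y in zip(a, b))

def subset_expansion(a, b):
    """sum over X subseteq [n] of (prod_{i in X} a_i)(prod_{i not in X} b_i)."""
    n = len(a)
    total = 0.0
    for r in range(n + 1):
        for X in combinations(range(n), r):
            Xc = [i for i in range(n) if i not in X]
            total += math.prod(a[i] for i in X) * math.prod(b[i] for i in Xc)
    return total

a, b = [1.0, 2.0, 3.0], [4.0, 5.0, 6.0]
assert abs(star(a, b) - subset_expansion(a, b)) < 1e-9   # 5 * 7 * 9 = 315

# and the failure of rotation-invariance: a proper rotation T sends (3,4) to (5,0)
assert star([3, 4], [3, 4]) == 48        # 2^2 * 3 * 4
assert star([5, 0], [5, 0]) == 0
```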
|
differentiation | <p>Randall Munroe, the creator of <a href="http://xkcd.com/">xkcd</a> in his latest book <a href="http://rads.stackoverflow.com/amzn/click/0544272994">What if</a> writes (p. 175) that the mathematical analog of the phrase "knock me over with a feather" is seeing the expression $ \ln( x )^{e}$. And he writes regarding this expression: <em>"it's not that, taken literally, it doesn't make sense - it's that you can't imagine a situation where this would apply."</em></p>
<p>In the footer (same page) he also states that <em>"if you want to be mean to first year calculus students, you can ask them to take the derivative of $ \ln( x )^{e}$. It looks like it should be "$1$" or something but it's not."</em></p>
<p>I don't get the joke. I think I am not understanding something correctly and I'm not appreciating the irony. Any help?</p>
| <p>One is more accustomed to see something like $e^{\ln x}$, which is indeed equal to $x$. Its derivative is $1$. </p>
<p>In general, anytime you see exponentials elevated to a logarithm, you think this is going to simplify. In this case you have just a power of a logarithm, but that power is $e$, so it "looks" like an exponential, but of course it is not. </p>
<p>Not one of the best xkcd in my opinion though :P</p>
<p>Ah by the way, apparently there are a lot of people who are confused about xkcd jokes, and so <a href="http://www.explainxkcd.com/wiki/index.php/Main_Page">explain xkcd</a> was born… I used it a lot :D</p>
| <p>Good Lord of Purple Unicorns; I got the book a day ago, and I'm on page 172. XD </p>
<p>He means to say that many expressions like $e^{\ln(x)}$ and $\ln(e^x)$ equal $x$, but if you want to be mean to first year calculus students (owing to their naivety), they'll initially think it's a simple problem, when in fact the derivative of the expression $\ln( x )^{e}$ is </p>
<p>$$ \frac{e(\ln(x))^{e-1}}{x}$$</p>
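A central-difference check confirms the formula, and also that the derivative is not identically $1$ (a Python sketch; $x>1$ so that $\ln x>0$ and the real power is defined):

```python
import math

def f(x):
    return math.log(x) ** math.e

def df(x):
    # the closed form above: e * ln(x)^(e-1) / x
    return math.e * math.log(x) ** (math.e - 1) / x

for x in (2.0, 5.0, 10.0):
    h = 1e-6
    numeric = (f(x + h) - f(x - h)) / (2 * h)   # central difference
    assert abs(numeric - df(x)) < 1e-5

# the punchline: it looks like it "should be 1", but it is not
assert all(abs(df(x) - 1.0) > 0.1 for x in (2.0, 5.0, 10.0))
```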
|
differentiation | <p>I was recently explaining differentiation from first principles to a colleague and how differentiation can be used to obtain the tangent line to a curve at any point. While doing this, my colleague came back at me with an argument for which I had no satisfactory reply.</p>
<p>I was describing the tangent line to a curve at a specific point in the same way that I was taught at school - that it is a line that <strong>just touches the curve at that point and has gradient equal to the derivative of the curve at that point</strong>. My colleague then said that for a cubic curve, the line <strong>can</strong> touch the curve again at other points so I explained the concept again but restricted to a neighbourhood about the point in question.</p>
<p>He then came back with the argument of this definition when the "curve" in question is a straight line. He argued that in this case the definition of the tangent line as "just touching the curve at that point" is simply not true as it is coincident with the line itself and so touches at all points.</p>
<p>I had no comeback to this argument at all and had to concede that I should have just defined the tangent as the line passing through the point on the curve that has gradient equal to the derivative at that point.</p>
<p>Now this whole exchange left me feeling rather stupid as I hold a Phd in Maths myself and I could not adequately define a tangent without using the notion of differential calculus - and yet when I was taught calculus at school it was shown as a tool to calculate the gradient of a tangent line and so this becomes a circular argument.</p>
<p>I have given this serious thought and can find no argument to counter my colleagues observation of the inadequacy of the informal definition in the case when the curve in question is already a straight line. </p>
<p>Also, if I do this again in future with another colleague how can I avoid embarrassment again? At what point did I go wrong here with my explanations? Should I have avoided the geometric view completely and gone with rate of changes instead? I am not a teacher but have taught calculus from first principles to many people over the years and would be very interested in how it should be done properly.</p>
| <p>$\newcommand{\Reals}{\mathbf{R}}$This is a broader question than it looks, involving both mathematics (e.g., what is a <em>curve</em>, what structure does the <em>ambient space</em> have) and pedagogy (e.g., what definition best conveys a concept of differential calculus, what balance of concreteness and generality is most suitable for a given purpose).</p>
<ul>
<li>If a <em>curve</em> is the graph in $\Reals^{2}$ of a differentiable real-valued function of one variable, then I'd argue the "right" definition of the <em>tangent line</em> to the graph at a point $x_{0}$ is the line with equation
$$
y = f(x_{0}) + f'(x_{0})(x - x_{0})
$$
through $\bigl(x_{0}, f(x_{0})\bigr)$ and having slope $f'(x_{0})$. (With minor modifications, the same concept handles the image of a regular parametric path, i.e., a differentiable mapping from an open interval into $\Reals^{2}$ whose velocity is non-vanishing.)</li>
</ul>
<p>Under this definition, the fact that "(modulo fine print) the tangent line is the limit of secant lines" is a <em>geometric expression of the definition</em> rather than a theorem expressing equivalence of an analytic and a geometric definition of "tangency".</p>
<ul>
<li>If a plane curve is an <em>algebraic</em> set, i.e., a non-discrete zero locus of a non-constant polynomial, then one might investigate tangency at $(x_{0}, y_{0})$ by expanding the curve's defining polynomial in powers of $x - x_{0}$ and $y - y_{0}$, declaring the curve to be <em>smooth</em> at $(x_{0}, y_{0})$ if the resulting expansion has a non-vanishing linear part, and defining the <em>tangent line</em> to be the zero locus of that linear part. (Similar considerations hold for analytic curves—non-discrete zero loci of non-constant analytic functions.)</li>
</ul>
<p>For example, if the curve has equation $x^{3} - y = 0$, the binomial theorem gives
\begin{align*}
0 &= x_{0}^{3} + 3x_{0}^{2}(x - x_{0}) + 3x_{0}(x - x_{0})^{2} + (x - x_{0})^{3} - \bigl[(y - y_{0}) + y_{0}\bigr] \\
&= \bigl[3x_{0}^{2}(x - x_{0}) - (y - x_{0}^{3})\bigr] + 3x_{0}(x - x_{0})^{2} + (x - x_{0})^{3}.
\end{align*}
The bracketed terms on the second line are the linear part, and the tangent line at $(x_{0}, y_{0}) = (x_{0}, x_{0}^{3})$ has equation
$$
0 = 3x_{0}^{2}(x - x_{0}) - (y - x_{0}^{3}),\quad\text{or}\quad
y = x_{0}^{3} + 3x_{0}^{2}(x - x_{0}),
$$
"as expected".</p>
<ul>
<li>In "higher geometry", the "<a href="https://en.wikipedia.org/wiki/Tangent_space">tangent space</a>" is usually defined intrinsically. One determines the behavior of the tangent space under morphisms, and defines the "tangent space" of the image of a morphism to be the image of the intrinsic tangent space in the appropriate sense.</li>
</ul>
<p>In the study of smooth manifolds it's common to use differential operators (a.k.a., derivations on the algebra of smooth functions). In algebraic geometry it's common to use the ideal $I$ of functions vanishing at $x_{0}$, and to define the <em>tangent space</em> to be the dual of the quotient $I/I^{2}$. The preceding examples are, respectively, calculus-level articulations of these two viewpoints.</p>
<p>These are not, however, the appropriate levels of generality to foist on calculus students. I personally stick to the analytic definition, and in fact usually assume "curves" are <em>continuously differentiable</em>.</p>
| <p>For a more purely geometrical notion of what a tangent is, I imagine a line $T$ through a point $P$ on a curve such that for any given double cone (however thin) with $T$ as its axis and $P$ as its apex point, there is a small enough neighborhood $N$ of $P$ such that the part of the curve within that neighborhood is completely contained in that double cone. Then $T$ is tangent to the curve at $P$.</p>
<p><a href="https://i.sstatic.net/DvAaF.png" rel="noreferrer"><img src="https://i.sstatic.net/DvAaF.png" alt="enter image description here"></a></p>
<p>This definition mimics the idea that the tangent is a linear approximation to and resembles the curve in small neighborhoods of the point.</p>
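For a graph $y=f(x)$ through the origin, the cone condition for a candidate line $y=mx$ amounts to the ratio $|f(x)-mx|/|x|$ shrinking as the neighborhood shrinks. A numerical illustration with $f(x)=x^2$ at $P=(0,0)$ (a Python sketch; the helper name is made up):

```python
def worst_slope_gap(m, radius, samples=1000):
    """max of |f(x) - m*x| / |x| over 0 < |x| <= radius, for f(x) = x**2."""
    gaps = []
    for i in range(1, samples + 1):
        x = radius * i / samples
        for s in (x, -x):
            gaps.append(abs(s**2 - m * s) / abs(s))
    return max(gaps)

# for the tangent slope m = 0, the gap shrinks with the neighborhood:
# the curve eventually fits inside every cone around the tangent line
assert worst_slope_gap(0.0, 0.001) < 0.002
assert worst_slope_gap(0.0, 0.1) < worst_slope_gap(0.0, 1.0) / 5
# for any other line through P (say slope 1) the gap stays near 1,
# so thin cones around that line always miss part of the curve
assert worst_slope_gap(1.0, 0.001) > 0.9
```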
|
logic | <p>The Continuum Hypothesis says that there is no set with cardinality between that of the reals and the natural numbers. Apparently, the Continuum Hypothesis can't be proved or disproved using the standard axioms of set theory.</p>
<p>In order to disprove it, one would only have to construct one counterexample of a set with cardinality between the naturals and the reals. It was proven that the CH can't be disproven. Equivalently, it was proven that one cannot construct a counterexample for the CH. Doesn't this prove it?</p>
<p>Of course, the issue is that it was also proven that it can't be proved. I don't know the details of this unprovability proof, but how can it avoid a contradiction? I understand the idea of something being independent of the axioms, I just don't see how if there is provably no counterexample the hypothesis isn't immediately true, since it basically just says a counterexample doesn't exist.</p>
<p>I'm sure I'm making some horrible logical error here, but I'm not sure what it is.</p>
<p>So my question is this: what is the flaw in my argument? Is it a logical error, or a gross misunderstanding of the unprovability proof in question? Or something else entirely?</p>
| <p>Here's an example axiomatic system:</p>
<ol>
<li>There exist exactly three objects $A, B, C$.</li>
<li>Each of these objects is either a banana, a strawberry or an orange.</li>
<li>There exists at least one strawberry.</li>
</ol>
<p>Let's name the system $X$.</p>
<p><strong>Vincent's Continuum Hypothesis (VCH)</strong>: Every object is either a banana or a strawberry (i.e., there are no oranges).</p>
<p>Now, to disprove this in $X$, you would have to show that one of $A, B, C$ is an orange ("construct a counterexample"). But this does not follow from $X$, because the following model is consistent with $X$: $A$ and $B$ are bananas, $C$ is a strawberry.</p>
<p>On the other hand, VCH does not follow from $X$ either, because the following model is consistent with $X$: $A$ is a banana, $B$ is a strawberry, $C$ is an orange.</p>
<p>As you can see, there is no contradiction, because you have to take into account different models of the axiomatic system.</p>
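The model-counting argument is small enough to check exhaustively (a Python sketch of the toy system $X$; each "model" assigns a fruit to each of $A,B,C$):

```python
from itertools import product

FRUITS = ("banana", "strawberry", "orange")

def satisfies_X(model):
    """Axioms of the toy system: three objects, each a fruit (guaranteed
    by construction), and at least one strawberry."""
    return "strawberry" in model

def satisfies_VCH(model):
    return "orange" not in model

models = [m for m in product(FRUITS, repeat=3) if satisfies_X(m)]

# X proves neither VCH nor its negation: models of both kinds exist
assert any(satisfies_VCH(m) for m in models)       # e.g. (banana, banana, strawberry)
assert any(not satisfies_VCH(m) for m in models)   # e.g. (banana, strawberry, orange)
```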
| <p>I think the basic problem is in your statement that "In order to disprove it, one would only have to construct one counterexample of a set with cardinality between the naturals and the reals." Actually, to disprove CH by this strategy, one would have to produce a counterexample <strong>and prove</strong> that it actually has cardinality between those of $\mathbb N$ and $\mathbb R$. </p>
<p>So, from the fact that CH can't be disproved in ZFC, you can't infer that there is no counterexample but only that no set can be proved in ZFC to be a counterexample.</p>
|
probability | <p>Since integration is not my strong suit I need some feedback on this, please:</p>
<p>Let $Y$ be $\mathcal{N}(\mu,\sigma^2)$, the <em>normal distribution</em> with parameters $\mu$ and $\sigma^2$. I know $\mu$ is the expectation value and $\sigma^2$ is the variance of $Y$.</p>
<p><strong>I want to calculate the $n$-th central moments of $Y$.</strong></p>
<p>The <em>density function</em> of $Y$ is $$f(x)=\frac{1}{\sigma\sqrt {2\pi}}e^{-\frac{1}{2}\left(\frac{x-\mu}{\sigma}\right)^2}$$</p>
<p>The $n$-th <em>central moment</em> of $Y$ is $$E[(Y-E(Y))^n]$$</p>
<p>The $n$-th <em>moment</em> of $Y$ is $$E(Y^n)=\psi^{(n)}(0)$$ where $\psi$ is the <em>Moment-generating function</em> $$\psi(t)=E(e^{tY})$$</p>
<p>So I started calculating:</p>
<p>$$\begin{align}
E[(Y-E(Y))^n]&=\int_\mathbb{R}\left(f(x)-\int_\mathbb{R}f(x)dx\right)^n\,dx \\
&=\int_\mathbb{R}\sum_{k=0}^n\left[\binom{n}{k}(f(x))^k\left(-\int_\mathbb{R}f(x)dx\right)^{n-k}\right]\,dx \\
&=\sum_{k=0}^n\binom{n}{k}\left(\int_\mathbb{R}\left[(f(x))^k\left(-\int_\mathbb{R}f(x)dx\right)^{n-k}\right]\,dx\right) \\
&=\sum_{k=0}^n\binom{n}{k}\left(\int_\mathbb{R}\left[(f(x))^k\left(-\mu\right)^{n-k}\right]\,dx\right) \\
&=\sum_{k=0}^n\binom{n}{k}\left((-\mu)^{n-k}\int_\mathbb{R}(f(x))^k\,dx\right) \\
&=\sum_{k=0}^n\binom{n}{k}\left((-\mu)^{n-k}E\left(Y^k\right)\right) \\
\end{align}$$</p>
<p>Am I on the right track or completely misguided? If I have made no mistakes so far, I would be glad to get some inspiration because I am stuck here. Thanks!</p>
| <p>The $n$-th central moment $\hat{m}_n = \mathbb{E}\left( \left(X-\mathbb{E}(X)\right)^n \right)$. Notice that for the normal distribution $\mathbb{E}(X) = \mu$, and that $Y = X-\mu$ also follows a normal distribution, with zero mean and the same variance $\sigma^2$ as $X$.</p>
<p>Therefore, finding the central moment of $X$ is equivalent to finding the raw moment of $Y$.</p>
<p>In other words,
$$ \begin{eqnarray}
\hat{m}_n &=& \mathbb{E}\left( \left(X-\mathbb{E}(X)\right)^n \right) =
\mathbb{E}\left( \left(X-\mu\right)^n \right) = \int_{-\infty}^\infty \frac{1}{\sqrt{2\pi} \sigma} (x-\mu)^n \mathrm{e}^{-\frac{(x-\mu)^2}{2 \sigma^2}} \mathrm{d} x\\
& \stackrel{y=x-\mu}{=}& \int_{-\infty}^\infty \frac{1}{\sqrt{2\pi} \sigma} y^n \mathrm{e}^{-\frac{y^2}{2 \sigma^2}} \mathrm{d} y \stackrel{y = \sigma u}{=}
\int_{-\infty}^\infty \frac{1}{\sqrt{2\pi} \sigma} \sigma^n u^n \mathrm{e}^{-\frac{u^2}{2}} \sigma \mathrm{d} u \\
&=& \sigma^n \int_{-\infty}^\infty \frac{1}{\sqrt{2\pi} } u^n \mathrm{e}^{-\frac{u^2}{2}} \mathrm{d} u
\end{eqnarray}
$$
The latter integral is zero for odd $n$ as it is the integral of an odd function over a real line. So consider
$$
\begin{eqnarray}
&& \int_{-\infty}^\infty \frac{1}{\sqrt{2\pi} } u^{2n} \mathrm{e}^{-\frac{u^2}{2}} \mathrm{d} u = 2 \int_{0}^\infty \frac{1}{\sqrt{2\pi} } u^{2n} \mathrm{e}^{-\frac{u^2}{2}} \mathrm{d} u \\
&& \stackrel{u=\sqrt{2 w}}{=} \frac{2}{\sqrt{2\pi}} \int_0^\infty (2 w)^n \mathrm{e}^{-w} \frac{\mathrm{d} w }{\sqrt{2 w}} = \frac{2^n}{\sqrt{\pi}} \int_0^\infty w^{n-1/2} \mathrm{e}^{-w} \mathrm{d} w = \frac{2^n}{\sqrt{\pi}} \Gamma\left(n+\frac{1}{2}\right)
\end{eqnarray}
$$
where $\Gamma(x)$ stands for the Euler's <a href="http://en.wikipedia.org/wiki/Gamma_function">Gamma function</a>. Using its <a href="http://en.wikipedia.org/wiki/Gamma_function#General">properties</a> we get
$$
\hat{m}_{2n} = \sigma^{2n} (2n-1)!! \qquad\qquad
\hat{m}_{2n+1} = 0
$$</p>
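As a sanity check, the closed form agrees with direct numerical integration (a Python sketch; the trapezoidal rule on a wide truncated interval is extremely accurate for integrands that decay this fast):

```python
import math

def central_moment(n, sigma, grid=40001, width=12.0):
    """E[Y^n] for Y ~ N(0, sigma^2) by the trapezoidal rule on [-w*s, w*s];
    this equals the n-th central moment of N(mu, sigma^2) for any mu."""
    lo = -width * sigma
    h = 2 * width * sigma / (grid - 1)
    total = 0.0
    for i in range(grid):
        y = lo + i * h
        w = 0.5 if i in (0, grid - 1) else 1.0
        total += w * y**n * math.exp(-y**2 / (2 * sigma**2))
    return total * h / (sigma * math.sqrt(2 * math.pi))

def double_factorial(m):
    return math.prod(range(m, 0, -2)) if m > 0 else 1

sigma = 1.3
for n in range(1, 8):
    expected = 0.0 if n % 2 else sigma**n * double_factorial(n - 1)
    assert abs(central_moment(n, sigma) - expected) < 1e-6
```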
| <p>If <span class="math-container">$X\sim N(\mu,\sigma^2)$</span> then the <span class="math-container">$k$</span>th central moment <span class="math-container">$E[(X-\mu)^k]$</span> is the same as the <span class="math-container">$k$</span>th moment <span class="math-container">$E(Y^k)$</span> of <span class="math-container">$Y\sim N(0,\sigma^2)$</span>.</p>
<p>For <span class="math-container">$Y\sim N(0,\sigma^2)$</span> the moment-generating function is<span class="math-container">$^\color{red}a$</span>:
<span class="math-container">$$E(e^{tY})=e^{t^2\sigma^2/2}.\tag1$$</span>
One of the uses of the moment-generating function is, ahem, to generate moments. You can do this by expanding both sides of (1) as power series in <span class="math-container">$t$</span>, and then matching coefficients. This is easily done for the normal distribution: Using <span class="math-container">$\displaystyle e^x=\sum_\limits{k=0}^\infty \frac {x^k}{k!}$</span>, the LHS of (1) expands as
<span class="math-container">$$
E(e^{tY})=E\left(\sum_{k=0}^\infty \frac{(tY)^k}{k!}\right)=\sum_{k=0}^\infty\frac{E(Y^k)}{k!}t^k\tag2
$$</span>
while the RHS expands as
<span class="math-container">$$
e^{t^2\sigma^2/2}=\sum_{k=0}^\infty \frac {(t^2\sigma^2/2)^k}{k!}=\sum_{k=0}^\infty\frac{\sigma^{2k}}{k!2^k}t^{2k}.\tag3
$$</span>
By comparing coefficients of like powers of <span class="math-container">$t$</span> in (2) and (3), we see:</p>
<ul>
<li><p>If <span class="math-container">$k$</span> is odd, then <span class="math-container">$E(Y^k)=0$</span>.</p></li>
<li><p>If <span class="math-container">$k$</span> is even, say <span class="math-container">$k=2n$</span>, then
<span class="math-container">$\displaystyle\frac{E(Y^{2n})}{(2n)!}$</span>, which is the coefficient of <span class="math-container">$t^{2n}$</span> in (2),
equals the coefficient of <span class="math-container">$t^{2n}$</span> in (3), which is <span class="math-container">$\displaystyle\frac{\sigma^{2n}}{n!2^n}$</span>. In other words:
<span class="math-container">$$E(Y^{2n})=\frac{(2n)!}{n!2^n}\sigma^{2n}.\tag4
$$</span>
By using <span class="math-container">$n!2^n=2(n)\cdot 2(n-1)\cdots2(1)=(2n)\cdot(2n-2)\cdots(2)$</span>, we can rewrite (4) as:
<span class="math-container">$$E(Y^{2n})=(2n-1)!!\,\sigma^{2n}.\tag5
$$</span></p></li>
</ul>
<hr>
<p><span class="math-container">$\color{red}a:$</span> If <span class="math-container">$Z$</span> has standard normal distribution then its moment generating function is</p>
<p><span class="math-container">$$E(e^{tZ})=\int e^{tz}\frac1{\sqrt{2\pi}}e^{-\frac12z^2}\,dz=\int\frac1{\sqrt{2\pi}}e^{-\frac12(z^2-2tz)}dz=e^{t^2/2}\underbrace{
\int\frac1{\sqrt{2\pi}}e^{-\frac12(z-t)^2}dz
}_{1}=e^{t^2/2}.$$</span></p>
<p>If <span class="math-container">$X\sim N(\mu,\sigma^2)$</span> then <span class="math-container">$X$</span> is distributed like <span class="math-container">$\mu+\sigma Z$</span> hence the moment generating function of <span class="math-container">$X$</span> is
<span class="math-container">$$E(e^{tX})=E(e^{t(\mu +\sigma Z)})=e^{t\mu} E(e^{t\sigma Z}) = e^{t\mu+(t\sigma)^2/2}.$$</span></p>
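The rewriting step from $(4)$ to $(5)$, i.e. $\frac{(2n)!}{n!2^n}=(2n-1)!!$, is an exact integer identity and can be confirmed directly:

```python
import math

for n in range(1, 15):
    lhs = math.factorial(2 * n) // (math.factorial(n) * 2**n)   # (2n)!/(n! 2^n)
    rhs = math.prod(range(2 * n - 1, 0, -2))                    # (2n-1)!!
    assert lhs == rhs
```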
|
probability | <p>How can I find the probability $P(X<Y)$, knowing that $X$ and $Y$ are independent random variables?</p>
| <p>Assuming both variables are real-valued and $Y$ is absolutely continuous with density $f_Y$ and $X$ has cumulative distribution function $F_X$ then it is possible to do the following</p>
<p>$$ \Pr \left[ X < Y \right] = \int \Pr \left[ X < y \right] f_Y \left( y
\right) \mathrm{d} y = \int F_X \left( y \right) f_Y \left( y \right)
\mathrm{d} y $$</p>
<p>Otherwise, as @ThomasAndrews said in a comment, it is case-by-case.</p>
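For a concrete instance, take independent $X\sim N(\mu_X,1)$ and $Y\sim N(\mu_Y,1)$; then $X-Y\sim N(\mu_X-\mu_Y,2)$, so $P(X<Y)=\Phi\big((\mu_Y-\mu_X)/\sqrt2\big)$, which the integral formula reproduces numerically (a Python sketch using `math.erf`; the helper names are made up):

```python
import math

def Phi(z):
    """Standard normal CDF."""
    return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))

def prob_X_less_Y(mu_x, mu_y, grid=20001, width=10.0):
    """P(X < Y) for independent X ~ N(mu_x, 1), Y ~ N(mu_y, 1), via the
    integral of F_X(y) f_Y(y) dy (trapezoidal rule, truncated tails)."""
    lo = mu_y - width
    h = 2 * width / (grid - 1)
    total = 0.0
    for i in range(grid):
        y = lo + i * h
        f_y = math.exp(-0.5 * (y - mu_y) ** 2) / math.sqrt(2 * math.pi)
        w = 0.5 if i in (0, grid - 1) else 1.0
        total += w * Phi(y - mu_x) * f_y
    return total * h

# X - Y ~ N(mu_x - mu_y, 2), so the exact value is Phi((mu_y - mu_x)/sqrt(2))
assert abs(prob_X_less_Y(0.0, 1.0) - Phi(1.0 / math.sqrt(2.0))) < 1e-9
assert abs(prob_X_less_Y(0.0, 0.0) - 0.5) < 1e-9
```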
| <p>I think we can control everything by the following general solution.</p>
<p>Consider $Z:=X-Y$. Then, by putting condition on the value of X, we get</p>
<p>$$\begin{align}
P(X<Y) & = P(Z<0)\\
& =\int_{-\infty}^{\infty}P(Z<0|X=x)dF_{X}(x)\\
& =\int_{-\infty}^{\infty}P(X-Y<0|X=x)dF_{X}(x)\\
& =\int_{-\infty}^{\infty}P(x-Y<0|X=x)dF_{X}(x)\\
& =\int_{-\infty}^{\infty}P(x<Y)dF_{X}(x)\\
& =\int_{-\infty}^{\infty}(1-P(Y\leq{x}))dF_{X}(x)\\
& =\int_{-\infty}^{\infty}(1-F_{Y}(x))dF_{X}(x)
\end{align}$$</p>
<p>You may also put a condition on the value of $Y$ to get a similar result. So, the solution of this problem depends on what you want.</p>
|
differentiation | <p>I am familiar with the definition of the Frechet derivative and it's uniqueness if it exists. I would however like to know, how the derivative is the "best" linear approximation. What does this mean formally? The "best" on the entire domain is surely wrong, so it must mean the "best" on a small neighborhood of the point we are differentiating at, where this neighborhood becomes arbitrarily small? Why does the definition of the derivative formalize precisely this? Thank you in advance.</p>
| <p>Say the graph of $L$ is a straight line and at one point $a$ we have $L(a)=f(a)$. And suppose $L$ is the tangent line to the graph of $f$ at $a$. Let $L_1$ be another function passing through $(a,f(a))$ whose graph is a straight line. Then there is some open interval $(a-\varepsilon,a+\varepsilon)$ such that for every $x$ in that interval, the value of $L(x)$ is closer to the value of $f(x)$ than is the value of $L_1(x)$. Now one might then have another line $L_2$ through that point whose slope is closer to that of the tangent line than is that of $L_1$, such that $L_2(x)$ actually comes closer to $f(x)$ than does $L(x)$, for <em>some</em> $x$ in that interval. But now there is a still smaller interval $(a-\varepsilon_2,a+\varepsilon_2)$, within which $L$ beats $L_2$. For every line except the tangent line, one can make the interval small enough so that the tangent line beats the other line within that interval. In general there's no one interval that works no matter how close the rival line gets. Rather, one must make the interval small enough in each case separately.</p>
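<p>A tiny numerical illustration of this (a sketch with $f(x)=x^2$ at $a=1$, tangent slope $2$; the rival slopes are my arbitrary choices): the tangent's error at $x=1+h$ is $h^2$, while a rival line of slope $m$ has error roughly $|m-2|\,|h|$, so the tangent wins on a small enough interval, and the interval must shrink as the rival slope approaches $2$.</p>

```python
def f(x):          # the curve
    return x * x

def L(x):          # tangent line at a = 1 (slope f'(1) = 2)
    return 1 + 2 * (x - 1)

def beats_rival(m, h):
    """Is the tangent a better approximation than the line of slope m at x = 1 + h?"""
    x = 1 + h
    L1 = 1 + m * (x - 1)
    return abs(f(x) - L(x)) < abs(f(x) - L1)

# For each rival slope m != 2, a half-width of |m - 2| / 4 is small enough ...
for m in (1.9, 2.5, 3.0):
    eps = abs(m - 2) / 4
    assert all(beats_rival(m, h) for h in (eps, eps / 10, -eps, -eps / 10))

# ... but no single interval works for every rival: slope 2.01 beats the tangent at h = 0.25
assert not beats_rival(2.01, 0.25)
```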
| <p>Michael's answer is wonderful. Here is another interpretation of the idea of "best" linear approximation, one that solely appeals to intuition. First, we talk simply about the notion of 'approximation.'</p>
<p>Imagine the points $\pi$ and $3.14$ on a number line. One might say that $3.14$ 'approximates' $\pi$, and at a certain scale of the number line, our eye would agree. That is, depending on our level of magnification, the points $\pi$ and $3.14$ will appear very close, and perhaps almost indistinguishable.</p>
<p>Next, suppose we begin to zoom in on $\pi$. What will we see? While $\pi$ remains fixed, $3.14$ will slowly move to become a distinguishable point, and begin to travel further and further from $\pi$ until at some magnification it has traveled off of our 'screen.' Now it doesn't seem like $3.14$ is a good approximation of $\pi$.</p>
<p>No matter how many decimals of $\pi$ we include in our approximation, we will always be able to zoom in far enough so that the approximation has traveled outside of our screen. There is only one value that will remain on our screen no matter how far we zoom in, and that is $\pi$ itself.</p>
<p>Now, for the case of a tangent line, imagine a smooth curve in the plane, and a tangent line at a point on this curve. As we zoom in on our point, the line and the curve appear to become indistinguishable, and in fact, at each successive level of magnification, we would need a more precise measuring instrument to distinguish the line from the curve. For any other line passing through the point, there is one measuring instrument accurate enough to distinguish the line from the curve no matter how far we zoom in.</p>
<p>Although the two notions of approximation described here are different, they still serve to illustrate the usefulness of zooming in when asking for the 'best' approximation.</p>
<p>Here is a <a href="http://www.youtube.com/watch?v=5TPQHmuFyM4">video</a> that might be helpful.</p>
|
matrices | <ol>
<li><p>How does <span class="math-container">$ {\sqrt 2 \over 2} = \cos (45^\circ)$</span>?</p>
</li>
<li><p>Is my graph (the one underneath the original) accurate with how I've depicted the representation of the triangle that the trig function represent? What I mean is, the blue triangle is the pre-rotated block, the green is the post-rotated block, and the purple is the rotated change (<span class="math-container">$45^\circ$</span>) between them.</p>
</li>
<li><p>How do these trig functions in this matrix represent a clockwise rotation? (Like, why does "<span class="math-container">$-\sin \theta $</span> " in the bottom left mean clockwise rotation... and "<span class="math-container">$- \sin \theta $</span> " in the upper right mean counter clockwise? Why not "<span class="math-container">$-\cos \theta $</span> "? <span class="math-container">$$\begin{bmatrix}\cos \theta &\sin \theta \\-\sin \theta & \cos \theta \end{bmatrix}$$</span></p>
</li>
</ol>
<p><img src="https://i.sstatic.net/mAexq.jpg" alt="enter image description here" /></p>
<p>Any help in understanding the trig representations of a rotation would be extremely helpful! Thanks</p>
| <p>Here is a <strong>"small" addition to the answer by @rschwieb</strong>:</p>
<p>Imagine you have the following rotation matrix:</p>
<p><span class="math-container">$$
\left[
\begin{array}{ccc}
1 & 0 & 0\\
0 & 1 & 0\\
0 & 0 & 1
\end{array}
\right]
$$</span></p>
<p>At first one might think this is just another identity matrix. Well, yes and no. This matrix can represent a rotation around all three axes in 3D Euclidean space with...<strong>zero degrees</strong>. This means that no rotation has taken place around any of the axes.</p>
<p>As we know <span class="math-container">$\cos(0) = 1$</span> and <span class="math-container">$\sin(0) = 0$</span>.</p>
<p>Each column of a rotation matrix represents one of the axes of the space it is applied in, so if we have <strong>2D</strong> space the default rotation matrix (that is, no rotation has happened) is</p>
<p><span class="math-container">$$
\left[
\begin{array}{cc}
1 & 0\\
0 & 1
\end{array}
\right]
$$</span></p>
<p>Each column in a rotation matrix represents the state of the respective axis so we have here the following:</p>
<p><span class="math-container">$$
\left[
\begin{array}{c|c}
1 & 0\\
0 & 1
\end{array}
\right]
$$</span></p>
<p>First column represents the <strong>x</strong> axis and the second one - the <strong>y</strong> axis. For the 3D case we have:</p>
<p><span class="math-container">$$
\left[
\begin{array}{c|c|c}
1 & 0 & 0\\
0 & 1 & 0\\
0 & 0 & 1
\end{array}
\right]
$$</span></p>
<p>Here we are using the canonical base for each space that is we are using the <strong>unit vectors</strong> to represent each of the 2 or 3 axes.</p>
<p>Usually I am a fan of explaining such things in 2D however in 3D it is much easier to see what is happening. Whenever we want to rotate around an axis, we are basically saying "The axis we are rotating around is the anchor and will NOT change. The other two axes however will".</p>
<p>If we start with the "no rotation has taken place" state</p>
<p><span class="math-container">$$
\left[
\begin{array}{c|c|c}
1 & 0 & 0\\
0 & 1 & 0\\
0 & 0 & 1
\end{array}
\right]
$$</span></p>
<p>and want to rotate around - let's say - the <strong>x</strong> axis we will do</p>
<p><span class="math-container">$$
\left[
\begin{array}{c|c|c}
1 & 0 & 0\\
0 & \cos(\theta) & -\sin(\theta)\\
0 & \sin(\theta) & \cos(\theta)
\end{array}
\right] .
\left[
\begin{array}{c|c|c}
1 & 0 & 0\\
0 & 1 & 0\\
0 & 0 & 1
\end{array}
\right] =
\left[
\begin{array}{c|c|c}
1 & 0 & 0\\
0 & \cos(\theta) & -\sin(\theta)\\
0 & \sin(\theta) & \cos(\theta)
\end{array}
\right]
$$</span></p>
<p>What this means is:</p>
<ul>
<li>The state of the <strong>x axis</strong> remains unchanged - we've started with a state of no rotation so the x axis will retain its original state - the unit vector <span class="math-container">$\left[\begin{array}{c}1\\0\\0\end{array}\right]$</span></li>
<li>The state of the <strong>y and z axis</strong> has changed - instead of the original <span class="math-container">$\left[\begin{array}{c}0\\1\\0\end{array}\right]$</span> (for y) and <span class="math-container">$\left[\begin{array}{c}0\\0\\1\end{array}\right]$</span> (for z) we now have <span class="math-container">$\left[\begin{array}{c}0\\\cos(\theta)\\\sin(\theta)\end{array}\right]$</span> (for the new orientation of y) and <span class="math-container">$\left[\begin{array}{c}0\\-\sin(\theta)\\\cos(\theta)\end{array}\right]$</span> (for the new orientation of z).</li>
</ul>
<p>We can continue applying rotations around this and that axis and each time this will happen - the axis we are rotating around remains as it was in the previous step and the rest of the axes change accordingly.</p>
<p>Now when it comes to 2D we have
<span class="math-container">$$
R(\theta) = \left[
\begin{array}{c|c}
\cos(\theta) & -\sin(\theta)\\
\sin(\theta) & \cos(\theta)
\end{array}
\right]
$$</span></p>
<p>for counterclockwise rotation and</p>
<p><span class="math-container">$$
R(-\theta) = \left[
\begin{array}{c|c}
\cos(\theta) & \sin(\theta)\\
-\sin(\theta) & \cos(\theta)
\end{array}
\right]
$$</span></p>
<p>for clockwise rotation. Notice that both column vectors are different. This is because in 2D none of the two axes remains idle and both need to change in order to create a rotation. This is why also the 3D version has two of the three axes change simultaneously - because it is just a derivative from its 2D version.</p>
<p>When it comes to rotating clock- or counterclockwise you can always use the <strong>left or right hand rule</strong>:</p>
<ol>
<li>Use your right or left hand to determine the axes:</li>
</ol>
<p><a href="https://i.sstatic.net/kJdJU.gif" rel="noreferrer"><img src="https://i.sstatic.net/kJdJU.gif" alt="enter image description here"></a></p>
<ol start="2">
<li>See which way is clock- and which way is counterclockwise. In the image below the <strong>four</strong> finger tips that go straight into your palm <strong>always</strong> point along the direction of rotation (<a href="https://en.wikipedia.org/wiki/Right-hand_rule" rel="noreferrer">right hand rule</a>):</li>
</ol>
<p><a href="https://i.sstatic.net/n6k7r.png" rel="noreferrer"><img src="https://i.sstatic.net/n6k7r.png" alt="enter image description here"></a></p>
<p>Once you pick one of the two hands, stick with it and use it until the end of the specific task; otherwise the results will probably end up screwed up. <strong>Notice also that this rule can also be applied to 2D</strong>. Just remove (but not cut off) the finger that points along the <strong>z</strong> axis (or whichever dimension of the three you don't need) and do your thing.</p>
<p>A couple of <strong>must knows</strong> things:</p>
<ol>
<li><p>Matrix multiplication is generally not commutative - what this means is that <span class="math-container">$A.B \ne B.A$</span></p></li>
<li><p>Rotation order is determined by the multiplication order (due to 1)) - there are a LOT of rotation conventions (RPY (roll,pitch and yaw), Euler angles etc.) so it is important to know which one you are using. If you are not certain pick one and stick with it (better have one consistent error than 10 different errors that you cannot follow up on) (see <a href="https://en.wikipedia.org/wiki/Rotation_matrix#General_rotations" rel="noreferrer">here</a> for some compact information on this topic)</p></li>
<li><p>Inverse of a rotation matrix rotates in the opposite direction - if for example <span class="math-container">$R_{x,90}$</span> is a rotation around the x axis with +90 degrees the inverse will do <span class="math-container">$R_{x,-90}$</span>. On top of that rotation matrices are awesome because <span class="math-container">$A^{-1} = A^t$</span> that is the inverse is the same as the transpose</p></li>
</ol>
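<p>A minimal NumPy sketch of the points above (the angles are arbitrary choices of mine): rotating about <strong>x</strong> leaves the <strong>x</strong> unit vector fixed, the inverse of a rotation is its transpose, and rotations generally do not commute.</p>

```python
import numpy as np

def Rx(theta):
    """Rotation about the x axis: the x column stays the unit vector, y and z change."""
    c, s = np.cos(theta), np.sin(theta)
    return np.array([[1, 0, 0],
                     [0, c, -s],
                     [0, s,  c]])

def Rz(theta):
    c, s = np.cos(theta), np.sin(theta)
    return np.array([[c, -s, 0],
                     [s,  c, 0],
                     [0,  0, 1]])

R = Rx(np.pi / 3)
assert np.allclose(R @ [1, 0, 0], [1, 0, 0])                   # the anchor axis is unchanged
assert np.allclose(np.linalg.inv(R), R.T)                      # inverse = transpose
assert np.allclose(Rx(-np.pi / 3), R.T)                        # transpose rotates the other way
assert not np.allclose(Rx(0.3) @ Rz(0.4), Rz(0.4) @ Rx(0.3))   # order matters
```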
| <p>All of the trigonometry will be clear if you examine what happens to the points $(1,0)$ and $(0,1)$ under these transformations. After they have moved, drop a perpendicular vertically and a line through the origin and consider the triangle formed. They will be (sometimes degenerate) triangles with hypotenuse 1 and then you will see why each of their legs has measure $\sin(\phi)$ or $\cos(\phi)$ etc.</p>
<p>Here's what I mean: after a $\pi/6$ rotation counterclockwise, the point $(1,0)$ has moved to $(\sqrt{3}/2,1/2)$. This point, in addition to $(0,0)$ and the point directly below it on the $x$ axis, $(\sqrt{3}/2,0)$, forms a right triangle with hypotenuse $1$. Look at lengths of the short sides of the triangle. Try to do the same thing with an angle $\phi$ between 0 and $\pi/2$, and analyze what the sides of the triangle have to be in terms of $\sin(\phi)$ and $\cos(\phi)$.</p>
<p>Because a rotation in the plane is totally determined by how it moves points on the unit circle, this is all you have to understand.</p>
<p>You don't actually need a representation for <em>both</em> clockwise and counterclockwise. You can use the counterclockwise one all the time, if you agree that a clockwise rotation would be a negative counterclockwise rotation. That is, if you want to perform a clockwise rotation of $\pi/4$ radians, then you should use $\phi=-\pi/4$ in the counterclockwise rotation representation. </p>
<p>The fact that $\sin(-\phi)=-\sin(\phi)$ accounts for the change in the sign of sine between the two representations, and the fact that the $\cos(\phi)$ doesn't change is because $\cos(-\phi)=\cos(\phi)$. You may as well just pick the counterclockwise representation scheme, and perform <em>both</em> clockwise and counterclockwise rotations with it.</p>
<hr>
<p>To provide some extra evidence that it makes sense these are rotation matrices, you can check to see that the columns of these matrices always have Euclidean length 1 (easy application of the $\sin^2(x)+\cos^2(x)=1$ identity.) Moreover, they are orthogonal to each other. That means they are orthogonal matrices, and consequently represent rotations. They satisfy $UU^T=U^TU=I_2$. This demonstrates that $U^T=U^{-1}$, and now you'll notice that the transpose of the counterclockwise representation gives you the clockwise representation! Of course, rotating clockwise and rotating counterclockwise by $\phi$ radians are inverse operations.</p>
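<p>These checks are easy to carry out numerically (a sketch; $\phi=\pi/6$ matches the example above):</p>

```python
import numpy as np

def R(phi):
    """Counterclockwise rotation of the plane by phi radians."""
    c, s = np.cos(phi), np.sin(phi)
    return np.array([[c, -s],
                     [s,  c]])

phi = np.pi / 6
U = R(phi)
assert np.allclose(U @ [1, 0], [np.sqrt(3) / 2, 1 / 2])   # the pi/6 example above
assert np.allclose(U @ U.T, np.eye(2))                    # orthonormal columns: U U^T = I
assert np.allclose(U.T, R(-phi))                          # transpose = clockwise rotation
```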
|
game-theory | <p>This question was inspired by another question posted today: <a href="https://math.stackexchange.com/questions/608957/monty-hall-problem-extended">Monty Hall Problem Extended</a>. </p>
<p>So I thought that the comments an answers brought up a great point about increasing the doors to 100 or something much larger, and using that as a way to help visualize why switching is always the best choice when trying to explain the problem to others.</p>
<p>And then I was thinking about the game show, Deal or No Deal. For those unfamiliar with Deal or No Deal: there are 26 cases, each containing amounts of money ranging from \$0.01 to one million dollars. You choose one case, and it's "yours" and out-of-play (this is analogous to choosing the first door in the Monty Hall problem). Throughout the game you open 24 of the remaining cases, and you see how much money was in each case. </p>
<p>In the end, you are left with 2 cases: "your" case, that you chose in the beginning, and the only other case you didn't open. This is where it becomes Monty Hall: you can either choose to keep your case, or switch cases and get the other one.</p>
<p>So what I'm wondering is, does the Monty Hall logic of "always switch doors/cases" apply here? The differences: </p>
<p>1) It's not a case of there being simply 1 car and a bunch of goats. All the money values are different in each case. You aren't always going to end up with a choice between a million dollars and something small... The two remaining cases might end up being \$10,000 and \$250,000. Or it might be \$10 and a million dollars. Or \$10 and \$100.</p>
<p>2) I think part of what makes Monty Hall work is that the car always remains in play. Your first choice is a 1/26 probability of selecting the car/million dollar case. But in Deal or No Deal, the car/million dollar case can be eliminated partway through the game. So I'm thinking that probably changes things.</p>
<p>My first vague thoughts are... If you make it to the end and the million dollar case still <em>is</em> in play, Monty Hall applies and you should switch cases. Because it's the same idea; I had a 1/26 shot at the million. 24 have been eliminated. It's much more likely that the other case has the million.</p>
<p>But if the million is eliminated while you're playing, what then? Can Monty Hall not help us, because you can't compare the probability of selecting the million dollar case because now it's zero? I'm trying to think of a way to figure out whether or not you should switch, in an attempt to get the case <em>with the most money in it</em>. We know that \$1,000,000 is no longer available. But is there anything we can do to decide which case is likely to be more valuable? Or is this outside Monty Hall's bounds?</p>
| <p>The key is: Monty knows where the car is (and will never open that door). We don't know where the million-dollar case is, so we MIGHT open that case. For an illustration, we look at how the tree diagram differs for the two cases. </p>
<p>Suppose we have 3 doors, A, B and C and our car/million is in door A. We further assume we will always switch. (Once we understand this, we can extend it to $n$ doors and see that the situation will be similar.)</p>
<p><strong>Case 1: Monty Hall Problem</strong>
<img src="https://i.sstatic.net/Bbjmo.png" alt="Monty Hall Tree Diagram"></p>
<p>If we switch, $P($Win$) = \frac{2}{3}$. </p>
<p><strong>Case 2: Deal or No Deal scenario</strong>
<img src="https://i.sstatic.net/HoTqo.png" alt="Deal or no deal tree diagram"></p>
<p>Notice our assumption in the question is we only look at the situation if the million has not been opened. So we are in essence calculating a conditional probability. If we switch, </p>
<p>$P($Win $|$ Million not opened$) = \displaystyle \frac{P(\textrm{Win}\cap \textrm{Million not opened})}{P(\textrm{Million not opened})} = \frac{\frac{1}{6}+\frac{1}{6}}{\frac{1}{6}+\frac{1}{6}+\frac{1}{6}+\frac{1}{6}}=\frac{1}{2}$.</p>
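<p>A quick Monte Carlo sketch of this conditional probability (a hypothetical simulation I wrote with 26 cases and a fixed seed; the trial count is arbitrary): conditioned on the million surviving to the final two, switching wins only about half the time, not $25/26$ of the time.</p>

```python
import random

def switch_win_rate(trials=200000, cases=26, seed=1):
    rng = random.Random(seed)
    kept = wins = 0
    for _ in range(trials):
        million = rng.randrange(cases)        # case holding the million
        picked = rng.randrange(cases)         # contestant's initial pick
        others = [c for c in range(cases) if c != picked]
        last = rng.choice(others)             # sole survivor of the 24 random openings
        if million != picked and million != last:
            continue                          # the million was opened mid-game
        kept += 1
        wins += (last == million)             # a win for the "always switch" player
    return wins / kept

p = switch_win_rate()
print(round(p, 3))   # close to 0.5
```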
| <p>"Odds of picking $1 million immediately: 1/26</p>
<p>Odds million is not picked right away: 25/26</p>
<p>If you get through picking 24 briefcases, and 1 million dollars still remains when given the option to switch briefcases with 2 left, the odds that the other briefcase (not the one you picked) of having $1 million is 25/26. SWITCH!"</p>
<p>Literally, all you need to do is replace "1 million" with "1" in this scenario and the egregious fallacies in this logic are overtly obvious. </p>
<p>Simply put, in a scenario with 26 cases, there is a 1/26 chance that the 1 million case is picked. Subsequently, there is a 1/25 chance that the 1 remains after RANDOM selection, given that RANDOMLY eliminating 24 of 25 cases equates to RANDOMLY selecting one case. So, what are the odds that the case picked is 1 million and the case remaining is 1?
(1/26)×(1/25)=1/650</p>
<p>Now, the odds of selecting the 1 case in the beginning are 1/26, and again we have a 1/25 probability that the 1 million dollar case will remain at the end. So what are the odds of selecting the 1 case and having the 1 million case remain at the end?
(1/26)×(1/25)=1/650</p>
<p>The odds of selecting EITHER the 1 case OR the 1 million case first are 2/26. The odds that the other of these two cases will remain until the very end is again 1/25. Therefore, the probability of either of these scenarios occurring is:
(2/26)×(1/25)=2/650</p>
<p>In conclusion, the odds that the 1 million and 1 prizes remain regardless of which was picked and which remains, is 2/650. The odds that the 1 million was the picked case is 1/650. The odds that the 1 million is the remaining case is 1/650. This means that there are only TWO scenarios where the 1 million and 1 cases are the last two cases standing, and it is EQUALLY probable that the 1 million (or 1) case is the selected case, or remaining case. </p>
<p>I know it sounds similar to the Monty Hall Problem, but since ALL selections are random (NOT the case in the MHP), it really only requires application of VERY elementary statistics to determine the probability of these scenarios. There are 325 different combinations of final two cases (assuming 26 different values), and for each occurrence of two remaining cases, regardless of value (call them x and y), there is a 50/50 probability that x or y is the selected case or the remaining case. </p>
|
matrices | <blockquote>
<p>Let <span class="math-container">$\,A,B,C\in M_{n}(\mathbb C)\,$</span> be Hermitian and positive definite matrices such that <span class="math-container">$A+B+C=I_{n}$</span>, where <span class="math-container">$I_{n}$</span> is the identity matrix. Show that <span class="math-container">$$\det\left(6(A^3+B^3+C^3)+I_{n}\right)\ge 5^n \det \left(A^2+B^2+C^2\right)$$</span></p>
</blockquote>
<p>This problem is a test question from China (xixi). It is said one can use the equation</p>
<p><span class="math-container">$$a^3+b^3+c^3-3abc=(a+b+c)(a^2+b^2+c^2-ab-bc-ac)$$</span></p>
<p>but I can't use this to prove it. Can you help me?</p>
| <p>Here is a partial and positive result, valid around the "triple point"
<span class="math-container">$A=B=C= \frac13\mathbb 1$</span>.</p>
<p>Let <span class="math-container">$A,B,C\in M_n(\mathbb C)$</span> be Hermitian satisfying <span class="math-container">$A+B+C=\mathbb 1$</span>, and additionally assume that
<span class="math-container">$$\|A-\tfrac13\mathbb 1\|\,,\,\|B-\tfrac13\mathbb 1\|\,,\,
\|C-\tfrac13\mathbb 1\|\:\leqslant\:\tfrac16\tag{1}$$</span>
in the spectral or operator norm. (In particular, <span class="math-container">$A,B,C$</span> are positive-definite.)<br />
Then we have
<span class="math-container">$$6\left(A^3+B^3+C^3\right)+\mathbb 1\:\geqslant\: 5\left(A^2+B^2+C^2\right)\,.\tag{2}$$</span></p>
<p><strong>Proof:</strong>
Let <span class="math-container">$A_0=A-\frac13\mathbb 1$</span> a.s.o., then <span class="math-container">$A_0+B_0+C_0=0$</span>, or
<span class="math-container">$\,\sum_\text{cyc}A_0 =0\,$</span> in notational short form.
Consider the</p>
<ul>
<li>Sum of squares
<span class="math-container">$$\sum_\text{cyc}\big(A_0 + \tfrac13\mathbb 1\big)^2
\:=\: \sum_\text{cyc}\big(A_0^2 + \tfrac23 A_0+ \tfrac19\mathbb 1\big)
\:=\: \sum_\text{cyc}A_0^2 \:+\: \tfrac13\mathbb 1$$</span></li>
<li>Sum of cubes
<span class="math-container">$$\sum_\text{cyc}\big(A_0 + \tfrac13\mathbb 1\big)^3
\:=\: \sum_\text{cyc}\big(A_0^3 + 3A_0^2\cdot\tfrac13
+ 3A_0\cdot\tfrac1{3^2} + \tfrac1{3^3}\mathbb 1\big) \\
\;=\: \sum_\text{cyc}A_0^3 \:+\: \sum_\text{cyc}A_0^2 \:+\: \tfrac19\mathbb 1$$</span>
to obtain
<span class="math-container">$$6\sum_\text{cyc}\big(A_0 + \tfrac13\mathbb 1\big)^3+\mathbb 1
\;-\; 5\sum_\text{cyc}\big(A_0 + \tfrac13\mathbb 1\big)^2
\:=\: \sum_\text{cyc}A_0^2\,(\mathbb 1 + 6A_0) \:\geqslant\: 0$$</span>
where positivity is due to each summand being a product of commuting positive-semidefinite matrices.
<span class="math-container">$\quad\blacktriangle$</span></li>
</ul>
<p><em><strong>Two years later observation:</strong></em><br />
In order to conclude <span class="math-container">$(2)$</span> the additional assumptions <span class="math-container">$(1)$</span> may be weakened a fair way off to
<span class="math-container">$$\tfrac16\mathbb 1\:\leqslant\: A,B,C\tag{3}$$</span>
or equivalently, assuming the smallest eigenvalue of each matrix <span class="math-container">$A,B,C\,$</span> to be at least <span class="math-container">$\tfrac16$</span>.</p>
<p><strong>Proof:</strong>
Consider the very last summand in the preceding proof.
Revert notation from <span class="math-container">$A_0$</span> to <span class="math-container">$A$</span> and use the same argument, this time based on <span class="math-container">$(3)$</span>, to obtain
<span class="math-container">$$\sum_\text{cyc}\big(A-\tfrac13\mathbb 1\big)^2\,(6A -\mathbb 1)\:\geqslant\: 0\,.\qquad\qquad\blacktriangle$$</span></p>
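<p>The weakened hypothesis $(3)$ is easy to probe numerically. Below is a sketch that samples random Hermitian triples with $A+B+C=\mathbb 1$ and smallest eigenvalues at least $\tfrac16$ (the construction, shrinking a random positive split of $\tfrac12\mathbb 1$ and shifting each part by $\tfrac16\mathbb 1$, is just one convenient way to stay inside that regime) and checks the original determinant inequality:</p>

```python
import numpy as np

rng = np.random.default_rng(0)

def random_triple(n):
    """Hermitian positive-definite A, B, C with A + B + C = I and lambda_min >= 1/6."""
    Ms = []
    for _ in range(3):
        G = rng.normal(size=(n, n)) + 1j * rng.normal(size=(n, n))
        Ms.append(G @ G.conj().T + 0.05 * np.eye(n))   # random positive-definite part
    S = Ms[0] + Ms[1] + Ms[2]
    w, V = np.linalg.eigh(S)
    S_half_inv = V @ np.diag(w ** -0.5) @ V.conj().T   # S^{-1/2}
    # scale the split so the three parts sum to I/2, then shift each by I/6
    return [np.eye(n) / 6 + 0.5 * (S_half_inv @ M @ S_half_inv) for M in Ms]

def inequality_holds(n):
    A, B, C = random_triple(n)
    lhs = np.linalg.det(6 * (A @ A @ A + B @ B @ B + C @ C @ C) + np.eye(n)).real
    rhs = 5 ** n * np.linalg.det(A @ A + B @ B + C @ C).real
    return lhs >= rhs

assert all(inequality_holds(n) for n in (2, 3, 5) for _ in range(100))
```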
| <p><strong>Not a full proof</strong>, but a number of thoughts too long for a comment. This post aims at finding alternative (yet harder) criteria for proving the conjecture. Please discuss.</p>
<p>As in previous comments, let's denote
$
X=6(A^3+B^3+C^3)+I
$
and
$
Y=5(A^2+B^2+C^2)
$ and $D = X-Y$.</p>
<p>The question is to show $\det(X) \ge \det(Y)$ or $1 \ge \det(X^{-1}Y)$. Write $Q = X^{-1}Y$, then $Q$ is positive definite, since $X$ and $Y$ are positive definite. Now it is known for a positive definite matrix $Q$ (see e.g. <a href="https://math.stackexchange.com/questions/202248/">here</a>) that the <em>trace bound</em> is given by
$$
\bigg(\frac{\text{Tr}(Q)}{n}\bigg)^n \geq \det(Q)
$$ </p>
<p>So a second (harder) criterion for the conjecture is $n \ge \text{Tr}(X^{-1}Y)$ or $ \text{Tr}(X^{-1}D) \geq 0$. I don't see how to compute this trace or find bounds; can someone?</p>
<p>Let's call $d_i$ the eigenvalues of $D$, likewise for $X$ and $Y$. While $x_i > 0$, this doesn't necessarily hold for $d_i$ since we know from comments that $D$ is not necessarily positive definite. So (if $X$ and $D$ could be simultaneously diagonalized) $ \text{Tr}(X^{-1}D) = \sum_i \frac{d_i}{x_i} = r \sum_i {d_i}$ where there exists an $r$ by the mean value theorem. While $r$ is not guaranteed to be positive, it is likely that $r$ <em>will</em> be positive, since $r$ will only become negative if there are (many, very) negative $d_i$ with small associated $x_i$. Can positivity of $r$ be shown? If we can establish that a positive $r$ can be found,
a third criterion is $ \text{Tr}(D) \geq 0$.</p>
<p>Now with this third criterion, we can use that the trace is additive and that the trace of commutators vanishes, i.e. $\text{Tr} (AB -BA) = 0$. Using this argument, it becomes harmless when matrices do not commute, since under the trace their order can be changed. This restores previous solutions where the conjecture was reduced to the valid Schur's inequality (as noted by a previous commenter), which proves the conjecture.</p>
<hr>
<p>A word on how hard the criteria are, indicatively in terms of eigenvalues:</p>
<p>(hardest) positive definiteness: $d_i >0$ $\forall i$ or equivalently, $\frac{y_i}{x_i} <1$ $\forall i$ </p>
<p>(second- relies on positive $r$) $ \text{Tr}(D) \geq 0$: $\sum_i d_i \geq 0$</p>
<p>(third) $n \ge \text{Tr}(X^{-1}Y)$: $\sum_i \frac{y_i}{x_i} \leq n$</p>
<p>(fourth - least hard) $\det(X) \ge \det(Y)$: $\prod_i \frac{y_i}{x_i} \leq 1$</p>
<p>Solutions may also be found by using criteria which interlace between those four. </p>
<hr>
<p>A word on simulations and non-positive-definiteness:</p>
<p>I checked the above criteria for the non-positive definite example given by @user1551 in the comments above, and the second, third and fourth criteria hold. </p>
<p>Note that equality $\det(X) = \det(Y)$ occurs for (a) symmetry point: $A=B=C=\frac13 I$ and for (b) border point: $A=B=\frac12 I$ and $C=0$ (and permutations). I checked the "vicinity" of these equality points by computer simulations for real matrices with $n=2$ where I extensively added small matrices with any parameter choices to $A$ and $B$ (and let $C = I - A-B$), making sure that $A,B$ and $C$ are positive definite. It shows that for the vicinity of the symmetry point, the second, third and fourth criteria above hold, while there occur frequent non-positive-definite examples. For the vicinity of the border point all four criteria hold.</p>
|
game-theory | <p>Define a game with $S$ players to be symmetric if all players have the same set of options and the payoff of a player depends only on that player's choice and the multiset of choices of all players.
Equivalently, a game is symmetric if applying a permutation to the options chosen by the players induces the same permutation on the payoffs. For example, if the original options chosen were 1,2,1,3 and the payoffs were 6,0,6,100 respectively, then in a symmetric game the options 2,1,1,3 would have to lead to the payoffs 0,6,6,100.</p>
<p>Suppose a symmetric game $S$ has at least one Nash equilibrium; must $S$ then have a symmetric Nash equilibrium, i.e. a Nash equilibrium where all players use the same strategy? If not, under what conditions does one exist? If so, is there a simple proof, or a simple idea behind the proof?</p>
<p>Clearly this doesn't hold if we restrict to pure strategies: the game with the following payoff matrix, all of whose pure equilibria are asymmetric, serves as a counterexample. But I've yet to find a counterexample for mixed strategies.</p>
<p>0/0 1/1<br>
1/1 0/0 </p>
| <p>The answer is yes for finite games and mixed strategies, and this was already shown in the <a href="http://www.princeton.edu/mudd/news/faq/topics/Non-Cooperative_Games_Nash.pdf">Ph.D. thesis</a> of John Nash, where it occurs as Theorem 4. Nash actually considered slightly more invariances in his theorem.</p>
<p>The proof amounts to the verification that one can do the usual fixed-point argument used for the proof that every finite game has a Nash equilibrium in mixed strategies, restricted to the set of symmetric strategy profiles and to symmetric best responses.</p>
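<p>For the asker's $2\times 2$ example (each player gets $1$ exactly when the two choices differ), the symmetric equilibrium guaranteed by Nash's theorem is easy to exhibit directly: both pure actions earn the same payoff against an opponent mixing 50/50, so that mixture is a best response to itself (a quick sketch of mine, not from the thesis):</p>

```python
# Row player's payoff in the asker's game: 1 if the two actions differ, 0 otherwise.
U = [[0, 1],
     [1, 0]]

def payoff(action, p):
    """Expected payoff of a pure action vs an opponent playing action 0 with prob p."""
    return p * U[action][0] + (1 - p) * U[action][1]

# Against p = 1/2 both actions pay 1/2, so mixing 50/50 is a best response to
# itself: a symmetric Nash equilibrium, although every pure equilibrium is asymmetric.
assert payoff(0, 0.5) == payoff(1, 0.5) == 0.5

# No other symmetric profile works: against any p != 1/2, some pure action
# strictly beats playing the mixture p yourself.
for p in (0.1, 0.3, 0.6, 0.9):
    own = p * payoff(0, p) + (1 - p) * payoff(1, p)
    assert max(payoff(0, p), payoff(1, p)) > own
```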
| <p>The answer is yes for finite games and for zero-sum games. In general, however, the answer is no: <a href="http://www.rochester.edu/college/faculty/markfey/papers/SymmGame3.pdf">http://www.rochester.edu/college/faculty/markfey/papers/SymmGame3.pdf</a></p>
|
combinatorics | <p>After reading <a href="https://math.stackexchange.com/questions/48080/proof-that-sum-limits-k-1nk2-fracnn12n16">this question</a>, I noticed that the most popular answer uses the identity
<span class="math-container">$$\sum_{t=0}^n \binom{t}{k} = \binom{n+1}{k+1},$$</span>
or, what is equivalent,
<span class="math-container">$$\sum_{t=k}^n \binom{t}{k} = \binom{n+1}{k+1}.$$</span></p>
<p>What's the name of this identity? Is it a modified form of the identity behind <a href="https://en.wikipedia.org/wiki/Pascal%27s_triangle" rel="noreferrer">Pascal's triangle</a>?</p>
<p>How can we prove it? I tried by induction, but without success. Can we also prove it algebraically?</p>
<p>Thanks for your help.</p>
<hr />
<p><strong>EDIT 01 :</strong> This identity is known as the <a href="https://en.wikipedia.org/wiki/Hockey-stick_identity" rel="noreferrer"><strong>hockey-stick identity</strong></a> because, on Pascal's triangle, when the addends represented in the summation and the sum itself are highlighted, a <em>hockey-stick</em> shape is revealed.</p>
<p><a href="https://i.sstatic.net/7tW63.jpg" rel="noreferrer"><img src="https://i.sstatic.net/7tW63.jpg" alt="Hockey-stick" /></a></p>
| <p>Imagine the first <span class="math-container">$n + 1$</span> numbers, written in order on a piece of paper. The right hand side asks in how many ways you can pick <span class="math-container">$k+1$</span> of them. In how many ways can you do this? </p>
<p>You first pick the largest of the <span class="math-container">$k+1$</span> numbers, which you circle. Call it <span class="math-container">$s$</span>. Next, you still have to pick the remaining <span class="math-container">$k$</span> numbers, each less than <span class="math-container">$s$</span>, and there are <span class="math-container">$\binom{s - 1}{k}$</span> ways to do this. </p>
<p>Since <span class="math-container">$s$</span> is ranging from <span class="math-container">$1$</span> to <span class="math-container">$n+1$</span>, <span class="math-container">$t:= s-1$</span> is ranging from <span class="math-container">$0$</span> to <span class="math-container">$n$</span> as desired.</p>
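<p>The identity is easy to spot-check by machine (a quick sketch using Python's <code>math.comb</code>, which returns $0$ when $t&lt;k$, matching the convention in the first form of the sum):</p>

```python
from math import comb

def hockey_stick(n, k):
    # left-hand side of the identity: sum of C(t, k) for t = 0 .. n
    return sum(comb(t, k) for t in range(n + 1))

# verify against the right-hand side C(n+1, k+1) for a range of n and k
for n in range(12):
    for k in range(n + 1):
        assert hockey_stick(n, k) == comb(n + 1, k + 1)

print(hockey_stick(6, 2), comb(7, 3))   # both 35: 1 + 3 + 6 + 10 + 15
```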
| <p>We can use the well known identity
$$1+x+\dots+x^n = \frac{x^{n+1}-1}{x-1}.$$
After substitution $x=1+t$ this becomes
$$1+(1+t)+\dots+(1+t)^n=\frac{(1+t)^{n+1}-1}t.$$
Both sides of these equations are polynomials in $t$. (Notice that the RHS simplifies to $\sum_{j=1}^{n+1}\binom {n+1}j t^{j-1}$.)</p>
<p>If we compare coefficient of $t^{k}$ on the LHS and the RHS we see that
$$\binom 0k + \binom 1k + \dots + \binom nk = \binom{n+1}{k+1}.$$</p>
<hr>
<p>This proof is basically the same as the proof using generating functions, which was posted in other answers. However, I think it is phrased a bit differently. (And if it is formulated this way, even somebody who has never heard of generating functions can follow the proof.) </p>
|
game-theory | <p>Suppose three players play the following game: Player 1 picks a number in $[0,1]$. Then player $2$ picks a number in the same range but different from the number player $1$ picked. Player $3$ also picks a number in the same range but different from the previous two. We then pick a random number in $[0,1]$ uniformly randomly. Whoever has a number closer to the random number we picked wins the game. Assume all players play optimally with the goal of maximizing their probability of winning. If one of them has several optimal choices, they pick one of them at random.</p>
<p>1)If Player 1 chooses zero, what is the best choice for player 2?</p>
<p>2)What is the best choice for player 1?</p>
<p>I have some trouble seeing how this problem is well-defined. For instance, if Player $1$ picks $0$ and Player $2$ picks 1, then I cannot see what the optimal choice would be for the last player since he has to pick different numbers. Can someone help?</p>
<p>EDIT: I now understand better how the problem works, but I still have no idea how to approach this. Can someone give me some hints?</p>
| <p><strong>The first part.</strong> Let the three players choose $x,y,z$ in order.
Suppose that $x=0$ and consider the third player's choice.</p>
<p>Option A: Choose $z>y$. Then the optimal choice is arbitrarily close to $y$ to make the winning probability arbitrarily close to $1-y$.
In this case, the winning probability for the second player is arbitrarily close to (and a little larger than) $y/2$.</p>
<p>Option B: Choose $z<y$. Here the third player's winning probability is $y/2$ regardless of $z$ (his winning region runs from $z/2$ to $(z+y)/2$), so he chooses a random element of $(0,y)$.
In this case, the expected winning probability for the second player is $(1-y)+\frac y4=1-\frac{3y}4$.
[(The probability that the random number is $\geq y$) $+$ (the probability that it is nearer to $y$ than to $z$ conditioned on it being less than $y$, which averages to $1/4$ over the random choice of $z$)$\times$(the probability of the random number being less than $y$).]</p>
<p>The third player will choose A if $1-y>y/2$, i.e. if $y<2/3$ and will choose B if $y \geq 2/3$.</p>
<p>Whether the third player is made to choose option A or B, we can see that <strong>the optimal choice of $y$ is $2/3$.</strong></p>
<p><strong>The second part.</strong> This is an intuitive rather than analytic solution. After the first two players have made their choice, the maximum probability that the third player can get is among $x,(y-x)/2,1-y$. Equating the three of them, we get $x=1/4,y=3/4$, so both $1/4$ and $3/4$ should be optimal for the first player.</p>
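<p>The first part can also be sanity-checked on a discretized version of the game. The sketch below is my addition, not the answer's: player 1 sits at $0$, player 2 tries each point of a grid (resolution $N=60$, an arbitrary choice), player 3 best-responds on the grid with ties broken uniformly at random, and we pick the $y$ that maximizes player 2's expected share:</p>

```python
from fractions import Fraction as F

def payoffs(points):
    """points: dict player -> position; each player wins the
    sub-interval of [0,1] closer to their point than to any other."""
    order = sorted(points, key=points.get)
    pos = [points[p] for p in order]
    out = {}
    for i, p in enumerate(order):
        lo = F(0) if i == 0 else (pos[i - 1] + pos[i]) / 2
        hi = F(1) if i == len(pos) - 1 else (pos[i] + pos[i + 1]) / 2
        out[p] = hi - lo
    return out

N = 60                      # grid resolution (arbitrary, divisible by 3)
grid = [F(i, N) for i in range(1, N)]
x = F(0)                    # player 1 plays 0, as in part 1)
best = None
for y in grid:
    results = [payoffs({1: x, 2: y, 3: z}) for z in grid if z != y]
    top = max(r[3] for r in results)               # player 3 best-responds
    tied = [r[2] for r in results if r[3] == top]  # ties broken at random
    p2 = sum(tied) / len(tied)                     # player 2's expected payoff
    if best is None or p2 > best[0]:
        best = (p2, y)

print(best)   # (Fraction(1, 2), Fraction(2, 3))
```

<p>The grid optimum lands exactly on $y=2/3$, with an expected payoff of $1/2$ for the second player, matching the payoffs $(\frac16,\frac12,\frac13)$ quoted in the other answer.</p>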
| <p>I thought about this some more and realized why the question is posed the way it is, in two steps. We can use the first case to solve the general case recursively.</p>
<p>As the existing answers have established, the answer to part $1$) is $2/3$, and the payoffs in this case are $(\frac16,\frac12,\frac13)$.</p>
<p>In the general case, by playing at $x_1$ Player $1$ effectively creates two new games on either side of $x_1$, one scaled down by $x_1$ and one scaled down by $1-x_1$, and these subgames work like the game in part $1$), since they're both delimited by a play by Player $1$ on one end and a normal boundary on the other.</p>
<p>Again, without loss of generality assume $x_1\le\frac12$. Either the other two players play in the same subgame, or in different subgames. If they play in the same subgame, it must be the larger one, of size $1-x_1$, since Player $3$ wouldn't play in the smaller game if Player $2$ has already played there. From $1)$ we know that in this case the payoffs are $(1-x_1)(\frac16,\frac12,\frac13)$, plus the $x_1$ that Player $1$ gets on the other side, for a total of $(5x_1+1)/6$ for Player $1$.</p>
<p>Player $3$ plays in the smaller subgame if Player $2$ has played in the larger one and the payoff $x_1-\epsilon$ in the smaller subgame is larger than the payoff $\frac13(1-x_1)$ in the larger subgame, and thus if $x_1\gt\frac14$. In this case, Player $2$ would play at $1-x_1+\delta$ to deter Player $3$ from playing at $x_2+\epsilon$, so the payoff for Player $1$ would be $\frac\epsilon2+\frac\delta2+\frac12((1-x_1)-x_1)=\frac\epsilon2+\frac\delta2+\frac12-x_1$.</p>
<p>Player $2$ plays in the smaller subgame if the payoff $x_1-\delta$ in the smaller subgame is larger than the payoff $\frac12(1-x_1)$ in the larger subgame, and thus if $x_1\gt\frac13$. Player $3$ would then play at $x_1+\epsilon$. This would leave Player $1$ with only $\frac\delta2+\frac\epsilon2$, so she'll avoid this outcome.</p>
<p>Since at $x_1=\frac14$ we already have $(5x_1+1)/6\gt\frac12-x_1$, Player $1$ plays at $x_1=\frac14$.</p>
|
matrices | <p>I have the following <span class="math-container">$n\times n$</span> matrix:</p>
<p><span class="math-container">$$A=\begin{bmatrix} a & b & \ldots & b\\ b & a & \ldots & b\\ \vdots & \vdots & \ddots & \vdots\\ b & b & \ldots & a\end{bmatrix}$$</span></p>
<p>where <span class="math-container">$0 < b < a$</span>.</p>
<blockquote>
<p>I am interested in the expression for the determinant <span class="math-container">$\det[A]$</span> in
terms of <span class="math-container">$a$</span>, <span class="math-container">$b$</span> and <span class="math-container">$n$</span>. This seems like a trivial problem, as the
matrix <span class="math-container">$A$</span> has such a nice structure, but my linear algebra skills are
pretty rusty and I can't figure it out. Any help would be
appreciated.</p>
</blockquote>
| <p>Add row 2 to row 1, add row 3 to row 1,..., add row $n$ to row 1, we get
$$\det(A)=\begin{vmatrix}
a+(n-1)b & a+(n-1)b & a+(n-1)b & \cdots & a+(n-1)b \\
b & a & b &\cdots & b \\
b & b & a &\cdots & b \\
\vdots & \vdots & \vdots & \ddots & \vdots \\
b & b & b & \ldots & a \\
\end{vmatrix}$$
$$=(a+(n-1)b)\begin{vmatrix}
1 & 1 & 1 & \cdots & 1 \\
b & a & b &\cdots & b \\
b & b & a &\cdots & b \\
\vdots & \vdots & \vdots & \ddots & \vdots \\
b & b & b & \ldots & a \\
\end{vmatrix}.$$
Now add $(-b)$ times row 1 to row 2, $(-b)$ times row 1 to row 3, ..., $(-b)$ times row 1 to row $n$; we get
$$\det(A)=(a+(n-1)b)\begin{vmatrix}
1 & 1 & 1 & \cdots & 1 \\
0 & a-b & 0 &\cdots & 0 \\
0 & 0 & a-b &\cdots & 0 \\
\vdots & \vdots & \vdots & \ddots & \vdots \\
0 & 0 & 0 & \ldots & a-b \\
\end{vmatrix}=(a+(n-1)b)(a-b)^{n-1}.$$</p>
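<p>The closed form is easy to confirm with exact rational arithmetic. Here is a small sketch (the values $a=5$, $b=2$ are arbitrary) using fraction-based Gaussian elimination:</p>

```python
from fractions import Fraction as F

def det(matrix):
    """Determinant by fraction-exact Gaussian elimination."""
    m = [row[:] for row in matrix]
    n, d = len(m), F(1)
    for i in range(n):
        pivot = next((r for r in range(i, n) if m[r][i] != 0), None)
        if pivot is None:
            return F(0)
        if pivot != i:                      # a row swap flips the sign
            m[i], m[pivot] = m[pivot], m[i]
            d = -d
        d *= m[i][i]
        for r in range(i + 1, n):
            factor = m[r][i] / m[i][i]
            for c in range(i, n):
                m[r][c] -= factor * m[i][c]
    return d

a, b = F(5), F(2)                           # arbitrary test values, 0 < b < a
for n in range(1, 8):
    A = [[a if i == j else b for j in range(n)] for i in range(n)]
    assert det(A) == (a + (n - 1) * b) * (a - b) ** (n - 1)
```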
| <p>SFAICT this route hasn't been mentioned yet, so:</p>
<p>Consider the decomposition</p>
<p>$$\small\begin{pmatrix}a&b&\cdots&b\\b&a&\cdots&b\\\vdots&&\ddots&\vdots\\b&\cdots&b&a\end{pmatrix}=\begin{pmatrix}a-b&&&\\&a-b&&\\&&\ddots&\\&&&a-b\end{pmatrix}+\begin{pmatrix}\sqrt b\\\sqrt b\\\vdots\\\sqrt b\end{pmatrix}\cdot\begin{pmatrix}\sqrt b&\sqrt b&\cdots&\sqrt b\end{pmatrix}$$</p>
<p>Having this decomposition allows us to use the Sherman-Morrison-Woodbury formula for determinants:</p>
<p>$$\det(\mathbf A+\mathbf u\mathbf v^\top)=(1+\mathbf v^\top\mathbf A^{-1}\mathbf u)\det\mathbf A$$</p>
<p>where $\mathbf u$ and $\mathbf v$ are column vectors. The corresponding components are simple, and thus the formula is easily applied (letting $\mathbf e$ denote the column vector whose components are all $1$'s):</p>
<p>$$\begin{align*}
\begin{vmatrix}a&b&\cdots&b\\b&a&\cdots&b\\\vdots&&\ddots&\vdots\\b&\cdots&b&a\end{vmatrix}&=\left(1+(\sqrt{b}\mathbf e)^\top\left(\frac{\sqrt{b}}{a-b}\mathbf e\right)\right)(a-b)^n\\
&=\left(1+\frac{nb}{a-b}\right)(a-b)^n=(a+(n-1)b)(a-b)^{n-1}
\end{align*}$$</p>
<p>where we used the fact that $\mathbf e^\top\mathbf e=n$.</p>
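<p>The lemma is also easy to check on a concrete instance with exact arithmetic. In the sketch below (my choice of numbers: $b=4$ so that $\sqrt b=2$ is exact, with $a=7$, $n=5$), the left side is computed straight from the Leibniz formula and compared with $(1+\mathbf v^\top\mathbf A^{-1}\mathbf u)\det\mathbf A$ for $\mathbf A=(a-b)I$ and $\mathbf u=\mathbf v=\sqrt b\,\mathbf e$:</p>

```python
from fractions import Fraction as F
from itertools import permutations

def det(m):
    # Leibniz formula; fine for the small n used here
    n, total = len(m), F(0)
    for p in permutations(range(n)):
        inversions = sum(p[i] > p[j] for i in range(n) for j in range(i + 1, n))
        term = F((-1) ** inversions)
        for i in range(n):
            term *= m[i][p[i]]
        total += term
    return total

n, a, b = 5, F(7), F(4)        # b = 2^2, so sqrt(b) = 2 is exact

A_full = [[a if i == j else b for j in range(n)] for i in range(n)]

# right-hand side of the lemma with A = (a-b) I and u = v = sqrt(b) e:
# v^T A^{-1} u = n*b/(a-b), and det A = (a-b)^n
rhs = (1 + n * b / (a - b)) * (a - b) ** n

assert det(A_full) == rhs == (a + (n - 1) * b) * (a - b) ** (n - 1)
```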
|
game-theory | <p>[There's still the strategy to go. A suitably robust argument that establishes what is <em>statistically</em> the best strategy will be accepted.]</p>
<p><strong>Here's my description of the game:</strong></p>
<p>There's a <span class="math-container">$4\times 4$</span> grid with some random, numbered cards on. The numbers are either one, two, or multiples of three. Using up, down, left, and right moves, you add the numbers on adjacent cards to make a new card like so: <span class="math-container">$$\begin{align}\color{blue}1+\color{red}2&=3\tag{1} \\ n+n&=2n\end{align}$$</span> for <span class="math-container">$n=2^k3\ge 3$</span>, where <span class="math-container">$k\in\{0, 1, . . . , 10\}$</span>, so the <strong>highest a card can be is <span class="math-container">$2^{11}3$</span></strong>. But at each move the <strong>"free" cards move too and a random new card appears</strong> at a random point along the edge you slide away from. Everything is kept on the grid. <strong>The card for the next move is indicated by colour</strong> at the top of the screen: blue for <span class="math-container">$1$</span>, red for <span class="math-container">$2$</span>, and white for <span class="math-container">$n\ge 3$</span> (such that <span class="math-container">$n$</span> is attainable using the above process). The white <span class="math-container">$2^\ell 3$</span>-numbered cards are worth <span class="math-container">$3^{\ell+1}$</span> points; <strong>the rest give no points</strong>. Once there are no more available moves, the points on the remaining cards are summed to give your score for the game.</p>
<p><a href="http://www.geekswithjuniors.com/blog/2014/2/10/threes-the-best-single-swipe-puzzle-game-on-ios.html" rel="nofollow noreferrer">Here's</a> another description I've found; it's the least promotional. It has the following gif.</p>
<p><img src="https://i.sstatic.net/iAXVM.gif" alt="enter image description here" /></p>
<p>So:</p>
<blockquote>
<p>What's the best strategy for the game? What's the highest possible score?</p>
</blockquote>
<p><em>Thoughts:</em></p>
<p>We could model this using some operations on <span class="math-container">$4\times 4$</span> matrices over <span class="math-container">$\mathbb{N}$</span>. A new card would be the addition of <span class="math-container">$\alpha E_{ij}$</span> for some appropriate <span class="math-container">$\alpha$</span> and standard basis vector <span class="math-container">$E_{ij}$</span>. That's all I've got . . .</p>
<hr />
<p><strong>NB:</strong> If this is a version of some other game, please let me know so I can avoid giving undue attention to this version :)</p>
<hr />
<p>The number on each card can be written <span class="math-container">$n=2^k3^{\varepsilon_k}$</span>, where
<span class="math-container">$$\varepsilon_k=\cases{\color{blue}0\text{ or }1 &: $k=0$ \\
\color{red}0\text{ or }1 &: $k=1$ \\
1 &: $k\ge 2$;}$$</span>that is, <span class="math-container">$\varepsilon_k=\cases{0 &:$n<3$ \\ 1 &:$n\ge 3$}$</span>. So we can write <span class="math-container">$(k, \varepsilon_k)$</span> instead under
<span class="math-container">$$(k, \varepsilon_k)+(\ell, \varepsilon_\ell)\stackrel{(1)}{=}\cases{(k+1, 1)&: $\varepsilon_k, \varepsilon_\ell, k=\ell > 0$ \\
(0, 1)&: $\color{blue}k=\color{blue}{\varepsilon_k}=\color{red}{\varepsilon_\ell}=0, \color{red}\ell=1$ \\
(0, 1)&: $\color{blue}\ell=\color{red}{\varepsilon_k}=\color{blue}{\varepsilon_\ell}=0, \color{red}k=1$.}$$</span></p>
<p>Looking at a <span class="math-container">$2\times 2$</span> version might help: the moves from different starting positions show up if we work systematically. It fills up quickly.</p>
<hr />
<p>It'd help to be more precise about what a good strategy might look like. The best strategy might be one that, <em>from an arbitrary <span class="math-container">$4\times 4$</span> grid</em> <span class="math-container">$G_0$</span> and with the least number of moves, gives the highest score attainable with <span class="math-container">$G_0$</span>, subject to the random nature of the game. That's still a little vague though . . .</p>
| <p>The strategy I employ is simply to make the move that leaves the most available moves, and to disregard score entirely. As a natural consequence of playing more moves, the score of the board will increase simply because the only way to continue playing is to make combinations, and combinations generate higher scores.</p>
<p>At the beginning of the game, the most important thing to consider is the placement of $1$s and $2$s. They are unique in that nothing can be combined adjacent to them to make a valid combination; they will only combine with their complement, which can only be achieved by board translation (versus, say, a
$12$, which can be adjacent to a $6$; that $6$ combines with another $6$, and then the $12$ can subsequently combine with the resulting $12$. There's no way to make a $1$ or a $2$; it must simply be moved around the board).</p>
<p>Later, with higher scoring tiles on the board, the "bonus" tile (which shows up as a white tile with a $+$ in it at the top) becomes increasingly important, and the best strategy I've found is to attempt to place the bonus tile as near a mixed group of larger tiles as possible. The bonus tile will always be at least $6$, but never the same score as your highest scoring tile in play.</p>
<p>There is also the nature of the tile selection. It's been reverse engineered that the random generator uses a "bag" where $12$ tiles are shuffled. The original board layout uses this method, and $9$ tiles are placed into the board with $3$ remaining in the "bag". Once the bag is exhausted, the tiles are shuffled again. There are always $4$ of each: $1$, $2$, and $3$. Once you reach a high tile of $48$ a "bonus" tile is inserted with a potential value of greater than $3$. This changes the size of the "bag" to $13$ instead of $12$. So, keeping track of where you are in the "bag" and how many of each color you've seen can give you an advantage when looking at future moves.</p>
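<p>The reverse-engineered "bag" is simple to simulate. The sketch below is based only on the description above, not on the game's actual code, and it ignores the bonus-tile variant: it deals from repeatedly shuffled 12-tile bags, so every aligned 12-draw window contains exactly four of each tile:</p>

```python
import random

def bag_drawer(seed=0):
    """Yield tiles from repeatedly shuffled 12-tile bags (4 each of 1, 2, 3)."""
    rng = random.Random(seed)
    while True:
        bag = [1] * 4 + [2] * 4 + [3] * 4
        rng.shuffle(bag)
        yield from bag

draws = bag_drawer()
first_100 = [next(draws) for _ in range(100)]

# within any aligned 12-draw window the counts are guaranteed,
# whatever the shuffle order was
window = first_100[:12]
counts = {v: window.count(v) for v in (1, 2, 3)}
print(counts)   # {1: 4, 2: 4, 3: 4}
```

<p>Counting tiles against this model is what gives the look-ahead advantage described above: once you have seen four of a colour, that colour cannot appear again until the bag reshuffles.</p>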
<hr>
<p>Curiously, the possibility space for scoring is actually quite sparse. All scores will necessarily be multiples of $3$, but it turns out that only about $\frac38$ of the multiples of $3$ between $0$ and the max score are actually valid. There are a lot that are simply impossible to get, like a $19$ in cribbage.</p>
<p>The lowest one that isn't trivially small is still $39,363$, though, which seems well out of the range of the average player. The next lowest I found is $52,485$. There are lots of gaps at the high end, due to the fact that highest scoring tile is worth over $500$k by itself.</p>
| <p><em>A partial answer:</em> </p>
<p>The highest possible score is $16\times 3^{12}$.</p>
<p>If the game could start with no available moves, then just suppose it starts with $2^{11}3$ everywhere.</p>
<p>Alternatively, suppose you start with $2^{11}3$ in the top left and suppose every new card happens to be $2^{11}3$. Assume the cards all show up in the top left corner, which we can do in the following. Slide right until the top row is full. Slide down once. Repeat. This will eventually fill the grid; once it does, there'll be no more available moves so the game ends (with a score of $16\times 3^{11+1}$).</p>
|
matrices | <p><strong>Background:</strong> Many (if not all) of the transformation matrices used in $3D$ computer graphics are $4\times 4$, including the three values for $x$, $y$ and $z$, plus an additional term which usually has a value of $1$.</p>
<p>Given the extra computing effort required to multiply $4\times 4$ matrices instead of $3\times 3$ matrices, there must be a substantial benefit to including that extra fourth term, even though $3\times 3$ matrices <em>should</em> (?) be sufficient to describe points and transformations in 3D space.</p>
<p><strong>Question:</strong> Why is the inclusion of a fourth term beneficial? I can guess that it makes the computations easier in some manner, but I would really like to know <em>why</em> that is the case.</p>
| <p>I'm going to copy <a href="https://stackoverflow.com/questions/2465116/understanding-opengl-matrices/2465290#2465290">my answer from Stack Overflow</a>, which also shows why 4-component vectors (and hence 4×4 matrices) are used instead of 3-component ones.</p>
<hr>
<p>In most 3D graphics a point is represented by a 4-component vector (x, y, z, w), where w = 1. Usual operations applied on a point include translation, scaling, rotation, reflection, skewing and combination of these. </p>
<p>These transformations can be represented by a mathematical object called "matrix". A matrix applies on a vector like this:</p>
<pre><code>[ a b c tx ] [ x ] [ a*x + b*y + c*z + tx*w ]
| d e f ty | | y | = | d*x + e*y + f*z + ty*w |
| g h i tz | | z | | g*x + h*y + i*z + tz*w |
[ p q r s ] [ w ] [ p*x + q*y + r*z + s*w ]
</code></pre>
<p>For example, scaling is represented as</p>
<pre><code>[ 2 . . . ] [ x ] [ 2x ]
| . 2 . . | | y | = | 2y |
| . . 2 . | | z | | 2z |
[ . . . 1 ] [ 1 ] [ 1 ]
</code></pre>
<p>and translation as</p>
<pre><code>[ 1 . . dx ] [ x ] [ x + dx ]
| . 1 . dy | | y | = | y + dy |
| . . 1 dz | | z | | z + dz |
[ . . . 1 ] [ 1 ] [ 1 ]
</code></pre>
<p><strong><em>One of the reason for the 4th component is to make a translation representable by a matrix.</em></strong></p>
<p>The advantage of using a matrix is that multiple transformations can be combined into one via matrix multiplication.</p>
<p>Now, if the purpose is simply to bring translation on the table, then I'd say (x, y, z, 1) instead of (x, y, z, w) and make the last row of the matrix always <code>[0 0 0 1]</code>, as done usually for 2D graphics. In fact, the 4-component vector will be mapped back to the normal 3-vector vector via this formula:</p>
<pre><code>[ x(3D) ] [ x / w ]
| y(3D) | = | y / w |
[ z(3D) ] [ z / w ]
</code></pre>
<p>This is called <a href="http://en.wikipedia.org/wiki/Homogeneous_coordinates#Use_in_computer_graphics" rel="noreferrer">homogeneous coordinates</a>. <strong><em>Allowing this makes the perspective projection expressible with a matrix too,</em></strong> which can again combine with all other transformations.</p>
<p>For example, since objects farther away should be smaller on screen, we transform the 3D coordinates into 2D using formula</p>
<pre><code>x(2D) = x(3D) / (10 * z(3D))
y(2D) = y(3D) / (10 * z(3D))
</code></pre>
<p>Now if we apply the projection matrix</p>
<pre><code>[ 1 . . . ] [ x ] [ x ]
| . 1 . . | | y | = | y |
| . . 1 . | | z | | z |
[ . . 10 . ] [ 1 ] [ 10*z ]
</code></pre>
<p>then the real 3D coordinates would become</p>
<pre><code>x(3D) := x/w = x/10z
y(3D) := y/w = y/10z
z(3D) := z/w = 0.1
</code></pre>
<p>so we just need to chop the z-coordinate out to project to 2D.</p>
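<p>The machinery above fits in a few lines of code. This sketch (mine, not part of the answer) composes a scaling and a translation as $4\times 4$ matrices and recovers the 3D point with the divide-by-$w$ step:</p>

```python
def mat_mul(A, B):
    # 4x4 matrix product
    return [[sum(A[i][k] * B[k][j] for k in range(4)) for j in range(4)]
            for i in range(4)]

def apply(M, v):
    # matrix times 4-component column vector
    return [sum(M[i][k] * v[k] for k in range(4)) for i in range(4)]

def translation(dx, dy, dz):
    return [[1, 0, 0, dx], [0, 1, 0, dy], [0, 0, 1, dz], [0, 0, 0, 1]]

def scaling(s):
    return [[s, 0, 0, 0], [0, s, 0, 0], [0, 0, s, 0], [0, 0, 0, 1]]

# compose: first scale by 2, then translate by (1, 2, 3)
M = mat_mul(translation(1, 2, 3), scaling(2))

x, y, z, w = apply(M, [4, 5, 6, 1])   # the point (4, 5, 6) with w = 1
point_3d = [x / w, y / w, z / w]      # homogeneous -> 3D
print(point_3d)                       # [9.0, 12.0, 15.0]
```

<p>Note that the translation could not have been folded into the chain this way with $3\times 3$ matrices; that is exactly the point of the fourth component.</p>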
| <blockquote>
<p>Even though 3x3 matrices should (?) be sufficient to describe points and transformations in 3D space.</p>
</blockquote>
<p>No, they aren't enough! Suppose you represent points in space using 3D vectors. You can transform these using 3x3 matrices. But if you examine the definition of matrix multiplication you should see immediately that multiplying a zero 3D vector by a 3x3 matrix gives you another zero vector. So simply multiplying by a 3x3 matrix can never move the origin. But translations and rotations do need to move the origin. So 3x3 matrices are not enough.</p>
<p>I haven't tried to explain exactly how 4x4 matrices are used. But I hope I've convinced you that 3x3 matrices aren't up to the task and that something more is needed.</p>
|
logic | <p>According to <a href="https://en.wikipedia.org/wiki/Complete_theory" rel="noreferrer">wikipedia</a> a theory (i.e. a set of sentences) is complete iff for every formula either it, or its negation, is provable.</p>
<p>On the other side, a logic is complete iff "semantically valid" and "provable" are the same.</p>
<p>The first notion of completeness is with what Gödel's Incompleteness result is concerned, but then I do not understand the significance given to it, or why it surprised people? Because if I read the first definition, if I can give a formula which is satisfied in one model, but not the other, then this formula and its negation could not be provable (if the logic is sound). And in general, I would expect this property more than the property that in every theory, every sentence either holds for all models (is valid), or its negation is valid.</p>
<p>To be more specific, Gödel's result in its original formulation is concerned with Peano arithmetic, but it also holds in some form of first order theory of the natural numbers with multiplication and addition as primitive notions, and for this we know that the natural numbers are not the only model.</p>
<p>So, why did it come as a surprise? Did people really think that for every theory and a given formula, either it or its negation is semantically valid, i.e. fulfilled by every model?</p>
| <blockquote>
<p>Did people really think that <strong>for every theory</strong> and a given formula, either it or its negation is semantically valid, i.e. fulfilled by every model?</p>
</blockquote>
<p>(Emphasis added). No, of course not. It's easy to make theories that are obviously incomplete.</p>
<p>But the content of Gödel's incompleteness theorem is not just that "there are some theories that are incomplete", but that <strong>every reasonable axiomatization of basic arithmetic</strong> will be one of the incomplete theories.</p>
<p>Many mathematicians in the beginning of the 20th century did expect that there would be <em>some</em> way to present a foundation of mathematics in a way that would (at least in principle) resolve every question we could pose about it. The feeling was that it was just a matter of figuring out <em>how</em> to do that, and there was a general feeling of making progress towards the goal.</p>
<p>Then along came Gödel and proved that it cannot be done -- <em>not even</em> for basic arithmetic.</p>
<p><em>EDIT: I've added here some of the facts from the discussion between me and the OP in the comments below the question. These don't address the actual question - "why was Gödel's theorem surprising?" - but I think they clear up some relevant confusions.</em></p>
<p>Godel proves (essentially) that any recursively axiomatizable theory which is true of $\mathbb{N}$ is incomplete; in particular, that under reasonable hypotheses the specific theory PA is incomplete. <em>(Note that TA by definition is complete - see below - but by the compactness theorem does not pin down $\mathbb{N}$ up to isomorphism.)</em> Note that this is equivalent to the statement that the true theory of arithmetic TA is not recursively axiomatizable, so it's expressible without ever using the word "incomplete." However, the computability-theoretic interpretation above doesn't really capture the spirit of the theorem at the time.</p>
<p>Also, focusing on TA causes us to miss an important extension of the theorem: that <em>no</em> complete consistent theory extending PA is recursively axiomatizable! This merely involves <a href="https://en.wikipedia.org/wiki/Rosser%27s_trick" rel="noreferrer">a simple tweak to the proof</a>, but it's fundamentally about PA rather than about TA (and incidentally PA here can be replaced with a <a href="https://en.wikipedia.org/wiki/Robinson_arithmetic" rel="noreferrer">vastly weaker theory</a>).</p>
<hr>
<p>You write:</p>
<blockquote>
<p>To be more specific, Gödel's result in its original formulation is concerned with Peano arithmetic, but it also holds in some form of first order theory of the natural numbers with multiplication and addition as primitive notions, and for this we know that the natural numbers are not the only model.</p>
</blockquote>
<p>But this isn't true in the way you want it to be. The proof that the first order theory of the natural numbers (call this "TA" for "true arithmetic") has models not isomorphic to the standard model is via the <a href="https://en.wikipedia.org/wiki/Compactness_theorem" rel="noreferrer">compactness theorem</a>. However, these models <strong>do</strong> satisfy all the same sentences that $\mathbb{N}$ does! That is, they <strong>are not isomorphic to</strong>, but they <strong>are elementarily equivalent to</strong>, the standard model $\mathbb{N}$.</p>
<p>The key point here is that TA is a complete theory. Specifically, we define TA as $\{\theta: \mathbb{N}\models\theta\}$, that is, the set of first-order sentences true in $\mathbb{N}$. This is complete because for any sentence $\eta$, either $\mathbb{N}\models \eta$ (in which case $\eta\in$ TA) or $\mathbb{N}\models\neg\eta$ (in which case $\neg\eta\in$ TA). More generally, for any structure $\mathcal{A}$ the set $Th(\mathcal{A})=\{\theta: \mathcal{A}\models\theta\}$ is a complete theory. Note that we are <strong>not</strong> claiming that $Th(\mathcal{A})$ characterizes $\mathcal{A}$ up to isomorphism! A consequence of compactness is that elementary equivalence - that is, agreement on all first-order sentences - is <strong>strictly weaker</strong> than isomorphism, and so having lots of models in no way suggests incompleteness <em>(e.g. <a href="http://modeltheory.wikia.com/wiki/DLO" rel="noreferrer">DLO</a> has lots of nonisomorphic models, but is complete)</em>. Thus, <strong>producing nonisomorphic models does not show that a theory is incomplete</strong>.</p>
<hr>
<p>The above explains why existing <em>results</em> didn't immediately imply the incompleteness theorem. But, why couldn't existing <em>techniques</em> give a quick proof?</p>
<p>Well, the problem is that there were really only two techniques for building models: one could either prove the existence of a model via compactness, or one could find a structure "in nature" (or cook one up by hand) and prove that it was a model of the desired theory.</p>
<p>The compactness theorem is unhelpful for showing that PA is incomplete:</p>
<ul>
<li><p>To show that PA is incomplete, it's enough to find a model $M$ of PA and a sentence $\varphi$ such that $M\models\varphi$ but $\varphi$ isn't in TA.</p></li>
<li><p>Once you've picked an appropriate $\varphi$, you can do this via the compactness theorem applied to PA + $\varphi$ ...</p></li>
<li><p><strong>if</strong> you know that PA + $\varphi$ is finitely satisfiable! By the completeness theorem, you know that PA + $\varphi$ is finitely satisfiable iff PA + $\varphi$ is consistent (trivially "finitely consistent" and "consistent" mean the same thing), so all you need to do is ...</p></li>
<li><p>... pick some sentence $\varphi$ not in TA (= false in $\mathbb{N}$) such that PA + $\varphi$ is consistent. </p></li>
</ul>
<p>Aaaaand we've gone in a circle!</p>
<p>Another option would be to first find a nonstandard model $M$ of PA and then show that $M$ is not elementarily equivalent to $\mathbb{N}$ by explicitly finding a sentence which they disagree about. This type of argument is extremely useful in cases where the theory being studied has lots of easily-describable models. However, $\mathbb{N}$ is the only easily-describable model of PA <a href="https://en.wikipedia.org/wiki/Tennenbaum%27s_theorem" rel="noreferrer">in a precise sense</a>! While this wasn't known at the time, it <em>does</em> mean that the failure of attempts to explicitly find nonstandard models of PA not elementarily equivalent to $\mathbb{N}$ is not surprising.</p>
<p>The point is that <strong>there was no concrete evidence for PA being incomplete</strong> at the time, at least from the model-theoretic side.</p>
|
probability | <p>I understand how to define conditional expectation and how to prove that it exists.</p>
<p>Further, I think I understand what conditional expectation means intuitively. I can also prove the tower property, that is if $X$ and $Y$ are random variables (or $Y$ a $\sigma$-field) then we have that</p>
<p>$$\mathbb E[X] = \mathbb{E}[\mathbb E [X | Y]].$$</p>
<p>My question is: What is the intuitive meaning of this? It seems quite puzzling to me.</p>
<p>(I could find similar questions but not this one.)</p>
| <p>First, recall that in <span class="math-container">$E[X|Y]$</span> we are taking the expectation with respect to <span class="math-container">$X$</span>, and so it can be written as <span class="math-container">$E[X|Y]=E_X[X|Y]=g(Y)$</span> . Because it's a function of <span class="math-container">$Y$</span>, it's a random variable, and hence we can take its expectation (with respect to <span class="math-container">$Y$</span> now). So the double expectation should be read as <span class="math-container">$E_Y[E_X[X|Y]]$</span>.</p>
<p>About the intuitive meaning, there are several approaches. I like to think of the expectation as a kind of <strong>predictor/guess</strong> (indeed, it's the predictor that minimizes the mean squared error).</p>
<p>Suppose for example that <span class="math-container">$X, Y$</span> are two (positively) correlated variables, say the weight and height of persons from a given population. The expectation of the weight <span class="math-container">$E(X)$</span> would be my best guess of the weight of an unknown person: I'd bet on this value, if not given more data (my <strong>uninformed bet</strong> is constant). Instead, if I know the height, I'd bet on <span class="math-container">$E(X | Y)$</span>: that means that for different persons I'd bet a different value, and my <strong>informed bet</strong> would not be constant: sometimes I'd bet more than the "uninformed bet" <span class="math-container">$E(X)$</span> (for tall persons), sometimes less. The natural question arises: can I say something about my informed bet <strong>on average</strong>? Well, the tower property answers: on average, you'll bet the same.</p>
<hr />
<p>Added : I agree (ten years later) with @Did 's comment below. My notation here is misleading, an expectation is defined in itself, it makes little or no sense to specify "with respect to <span class="math-container">$Y$</span>". In <a href="http://math.stackexchange.com/questions/4049293/">my answer here</a> I try to clarify this, and reconcile this fact with the (many) examples where one qualifies (subscripts) the expectation (<a href="https://en.wikipedia.org/wiki/Expectation%E2%80%93maximization_algorithm#Description" rel="nofollow noreferrer">with respect of ...</a>).</p>
| <p>For simple discrete situations from which one obtains most basic intuitions, the meaning is clear.</p>
<p>I have a large bag of biased coins. Suppose that half of them favour heads with probability $0.7$, two-fifths favour heads with probability $0.8$, and the rest (one-tenth) favour heads with probability $0.9$.</p>
<p>Pick a coin at random, toss it, say once. To find the expected number of heads, calculate the expectations, <strong>given</strong> the various biasing possibilities. Then average the answers, taking into consideration the proportions of the various types of coin. </p>
<p>It is intuitively clear that this formal procedure "should" give about the same answer as the highly informal process of say repeating the experiment $1000$ times, and dividing by $1000$. For if we do that, in about $500$ cases we will get the first type of coin, and out of these $500$ we will get about $350$ heads, and so on. The informal arithmetic mirrors exactly the more formal process described in the preceding paragraph. </p>
<p>If it is more persuasive, we can imagine tossing the chosen coin $12$ times.</p>
|
geometry | <p>What is wrong with this proof?</p>
<p><img src="https://i.sstatic.net/GU8wd.jpg" alt=""></p>
<p>Is <span class="math-container">$\pi=4?$</span></p>
| <p>This question is usually posed as the length of the diagonal of a unit square. You start going from one corner to the opposite one following the perimeter and observe the length is $2$, then take shorter and shorter stair-steps and the length is $2$ but your path approaches the diagonal. So $\sqrt{2}=2$.</p>
<p>In both cases, the approximating path converges to the curve in area, but not in arc length. You can make this more rigorous by breaking the curves into increments and mimicking the convergence proof for Riemann sums: the difference in area between the two curves goes nicely to zero, but the difference in arc length stays constant.</p>
<p>Edit: making the square more explicit. Imagine dividing the diagonal into $n$ segments and a stairstep approximation. Each triangle is $(\frac{1}{n},\frac{1}{n},\frac{\sqrt{2}}{n})$. So the area between the stairsteps and the diagonal is $n \frac{1}{2n^2}$ which converges to $0$. The path length is $n \frac{2}{n}$, which converges even more nicely to $2$.</p>
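<p>A small numerical companion to this square example (my addition): summing the staircase segment lengths shows the length is pinned at $2$ for every $n$, while the area trapped between staircase and diagonal vanishes:</p>

```python
from math import hypot

def staircase(n):
    """Corner points of the n-step staircase from (0,0) to (1,1)."""
    pts = [(0.0, 0.0)]
    for i in range(n):
        pts.append(((i + 1) / n, i / n))        # go right ...
        pts.append(((i + 1) / n, (i + 1) / n))  # ... then up
    return pts

for n in (1, 10, 100, 1000):
    pts = staircase(n)
    length = sum(hypot(x2 - x1, y2 - y1)
                 for (x1, y1), (x2, y2) in zip(pts, pts[1:]))
    area_between = n * (1 / (2 * n * n))  # n triangles of area 1/(2n^2)
    print(n, length, area_between)

# the length never budges from 2, although the area between the
# staircase and the diagonal (of true length sqrt(2)) goes to 0
```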
| <p>This problem illustrates the fact that two functions can be very close: $|f(x)-g(x)|<\epsilon$
for all $x\in [0,1]$, but their derivatives can still be far apart, $|f'(x)-g'(x)|>c$ for some
constant $c>0$.
In our case, let $x=a(t),y=b(t),0\le t\le 1$ and $x=c(t),y=d(t), 0\le t\le 1$ be the
parametrizations of the two curves. By smoothing the corners, we may assume that both
are smooth. $$ \|(a(t),b(t))\|\approx \|(c(t),d(t))\|$$ does not imply
$$ \|(a'(t),b'(t))\|\approx \|(c'(t),d'(t))\|$$
Therefore $\int_0^1 \|(a'(t),b'(t))\| dt$ need not be close to $\int_0^1 \|(c'(t),d'(t))\| dt.$
Here $\|(x,y)\|$ denotes $\sqrt{x^2+y^2}$.</p>
|
probability | <p>Let's define a sequence of numbers between 0 and 1. The first term, $r_1$, will be chosen <strong>uniformly randomly</strong> from $(0, 1)$, but now we iterate this process choosing $r_2$ from $(0, r_1)$, and so on, so $r_3\in(0, r_2)$, $r_4\in(0, r_3)$... The set of all possible sequences generated this way contains the sequence of the reciprocals of all natural numbers, whose sum diverges; but it also contains all geometric sequences in which all terms are less than 1, and they all have convergent sums. The question is: does $\sum_{n=1}^{\infty} r_n$ converge in general? (I think this is called <em>almost sure convergence</em>?) If so, what is the distribution of the limits of all convergent series from this family?</p>
| <p>Let $(u_i)$ be a sequence of i.i.d. uniform(0,1) random variables. Then the sum you are interested in can be expressed as
$$S_n=u_1+u_1u_2+u_1u_2u_3+\cdots +u_1u_2u_3\cdots u_n.$$
The sequence $(S_n)$ is non-decreasing and certainly converges, possibly to $+\infty$.</p>
<p>On the other hand, taking expectations gives
$$E(S_n)={1\over 2}+{1\over 2^2}+{1\over 2^3}+\cdots +{1\over 2^n},$$
so $\lim_n E(S_n)=1.$ Now by Fatou's lemma,
$$E(S_\infty)\leq \liminf_n E(S_n)=1,$$
so that $S_\infty$ has finite expectation and so is finite almost surely.</p>
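<p>A quick Monte Carlo check of this conclusion (a sketch; the truncation depth, sample size and seed are arbitrary):</p>

```python
import random

random.seed(1)

def sample_sum(depth=60):
    # S = u1 + u1*u2 + u1*u2*u3 + ...; truncate once the product is negligible.
    total, prod = 0.0, 1.0
    for _ in range(depth):
        prod *= random.random()
        total += prod
        if prod < 1e-12:
            break
    return total

samples = [sample_sum() for _ in range(100_000)]
mean = sum(samples) / len(samples)
print(mean)  # close to lim E(S_n) = 1
```

<p>The sample mean sits near $1$, consistent with $E(S_\infty)\leq 1$ above (in fact $E(S_\infty)=1$ by monotone convergence, since $S_n$ is non-decreasing).</p>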
| <blockquote>
<p>The probability $f(x)$ that the result is $\in(x,x+dx)$ is given by $$f(x) = \exp(-\gamma)\rho(x)$$ where $\rho$ is the <a href="https://en.wikipedia.org/wiki/Dickman_function" rel="noreferrer">Dickman function</a> as @Hurkyl <a href="https://math.stackexchange.com/questions/2130264/sum-of-random-decreasing-numbers-between-0-and-1-does-it-converge#comment4383202_2130701">pointed out below</a>. This follows from the delay differential equation for $f$, $$f^\prime(x) = -\frac{f(x-1)}{x}$$ with the conditions $$f(x) = f(1) \;\rm{for}\; 0\le x \le1 \;\rm{and}$$ $$\int\limits_0^\infty f(x)\,dx = 1.$$ A derivation follows.</p>
</blockquote>
<p>From the other answers, it looks like the probability is flat for the results less than 1. Let us prove this first.</p>
<p>Define $P(x,y)$ to be the probability that the final result lies in $(x,x+dx)$ if the first random number is chosen from the range $[0,y]$. What we want to find is $f(x) = P(x,1)$.</p>
<p>Note that if the random range is changed to $[0,ay]$ the probability distribution gets stretched horizontally by $a$ (which means it has to compress vertically by $a$ as well). Hence $$P(x,y) = aP(ax,ay).$$</p>
<p>We will use this to find $f(x)$ for $x<1$.</p>
<p>Note that if the first number chosen is greater than x we can never get a sum less than or equal to x. Hence $f(x)$ is equal to the probability that the first number chosen is less than or equal to $x$ multiplied by the probability for the random range $[0,x]$. That is, $$f(x) = P(x,1) = p(r_1<x)P(x,x)$$</p>
<p>But $p(r_1<x)$ is just $x$ and $P(x,x) = \frac{1}{x}P(1,1)$ as found above. Hence $$f(x) = f(1).$$</p>
<p>The probability that the result is $x$ is constant for $x<1$.</p>
<p>Using this, we can now iteratively build up the probabilities for $x>1$ in terms of $f(1)$.</p>
<p>First, note that when $x>1$ we have $$f(x) = P(x,1) = \int\limits_0^1 P(x-z,z) dz$$
We apply the compression again to obtain $$f(x) = \int\limits_0^1 \frac{1}{z} f(\frac{x}{z}-1) dz$$
Setting $\frac{x}{z}-1=t$, we get $$f(x) = \int\limits_{x-1}^\infty \frac{f(t)}{t+1} dt$$
This gives us the differential equation $$\frac{df(x)}{dx} = -\frac{f(x-1)}{x}$$
Since we know that $f(x)$ is a constant for $x<1$, this is enough to solve the differential equation numerically for $x>1$, modulo the constant (which can be retrieved by integration in the end). Unfortunately, the solution is essentially piecewise from $n$ to $n+1$ and it is impossible to find a single function that works everywhere.</p>
<p>For example when $x\in[1,2]$, $$f(x) = f(1) \left[1-\log(x)\right]$$</p>
<p>But the expression gets really ugly even for $x \in[2,3]$, requiring the logarithmic integral function $\rm{Li}$.</p>
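<p>The closed form on $[1,2]$ can be cross-checked by integrating the delay differential equation numerically. A minimal sketch with forward Euler, normalising $f(1)=1$ so that the delayed term is constant on this interval (the step size is arbitrary):</p>

```python
import math

# On [1,2] the delayed term is f(x-1) = f(1) = 1, so f'(x) = -1/x.
h = 1e-5
x, f = 1.0, 1.0
while x < 2.0 - 1e-12:
    f += h * (-1.0 / x)   # Euler step for f'(x) = -f(x-1)/x
    x += h

print(f, 1 - math.log(2))  # both are about 0.30685
```

<p>The Euler value at $x=2$ matches $f(2)/f(1)=1-\log 2$ to about four decimal places; the same marching scheme extends $f$ past $x=2$, where no simple closed form is available.</p>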
<p>Finally, as a sanity check, let us compare the random simulation results with $f(x)$ found using numerical integration. The probabilities have been normalised so that $f(0) = 1$.</p>
<p><a href="https://i.sstatic.net/C86kr.png" rel="noreferrer"><img src="https://i.sstatic.net/C86kr.png" alt="Comparison of simulation with numerical integral and exact formula for $x\in[1,2]$"></a></p>
<p>The match is near perfect. In particular, note how the analytical formula matches the numerical one exactly in the range $[1,2]$.</p>
<p>Though we don't have a general analytic expression for $f(x)$, the differential equation can be used to show that the expectation value of $x$ is 1.</p>
<p>Finally, note that the delay differential equation above is the same as that of the <a href="https://en.wikipedia.org/wiki/Dickman_function" rel="noreferrer">Dickman function</a> $\rho(x)$ and hence $f(x) = c \rho(x)$. Its properties have been studied. <a href="https://www.encyclopediaofmath.org/index.php/Dickman_function" rel="noreferrer">For example</a> the Laplace transform of the Dickman function is given by $$\mathcal L \rho(s) = \exp\left[\gamma-\rm{Ein}(s)\right].$$
This gives $$\int_0^\infty \rho(x) dx = \exp(\gamma).$$ Since we want $\int_0^\infty f(x) dx = 1,$ we obtain $$f(1) = \exp(-\gamma) \rho(1) = \exp(-\gamma) \approx 0.56145\ldots$$ That is, $$f(x) = \exp(-\gamma) \rho(x).$$
This completes the description of $f$.</p>
|
probability | <p>Suppose that we have two different discrete signal vectors of dimension $N$, namely $\mathbf{x}[i]$ and $\mathbf{y}[i]$, each one having a total of $M$ sets of samples/vectors.</p>
<p>$\mathbf{x}[m] = [x_{m,1} \,\,\,\,\, x_{m,2} \,\,\,\,\, x_{m,3} \,\,\,\,\, ... \,\,\,\,\, x_{m,N}]^\text{T}; \,\,\,\,\,\,\, 1 \leq m \leq M$<br>
$\mathbf{y}[m] = [y_{m,1} \,\,\,\,\, y_{m,2} \,\,\,\,\, y_{m,3} \,\,\,\,\, ... \,\,\,\,\, y_{m,N}]^\text{T}; \,\,\,\,\,\,\,\,\, 1 \leq m \leq M$</p>
<p>And, I build up a covariance matrix in-between these signals.</p>
<p>$\{C\}_{ij} = E\left\{(\mathbf{x}[i] - \bar{\mathbf{x}}[i])^\text{T}(\mathbf{y}[j] - \bar{\mathbf{y}}[j])\right\}; \,\,\,\,\,\,\,\,\,\,\,\, 1 \leq i,j \leq M $</p>
<p>Where, $E\{\}$ is the "expected value" operator.</p>
<p>What is the proof that, for all arbitrary values of the $\mathbf{x}$ and $\mathbf{y}$ vector sets, the covariance matrix $C$ is always positive semi-definite ($C \succeq0$) (i.e., not negative definite; all of its eigenvalues are non-negative)?</p>
| <p>A symmetric matrix $C$ of size $n\times n$ is semi-definite if and only if $u^tCu\geqslant0$ for every $n\times1$ (column) vector $u$, where $u^t$ is the $1\times n$ transposed (line) vector. If $C$ is a covariance matrix in the sense that $C=\mathrm E(XX^t)$ for some $n\times 1$ random vector $X$, then the linearity of the expectation yields that $u^tCu=\mathrm E(Z_u^2)$, where $Z_u=u^tX$ is a real valued random variable, in particular $u^tCu\geqslant0$ for every $u$. </p>
<p>If $C=\mathrm E(XY^t)$ for two centered random vectors $X$ and $Y$, then $u^tCu=\mathrm E(Z_uT_u)$ where $Z_u=u^tX$ and $T_u=u^tY$ are two real valued centered random variables. Thus, there is no reason to expect that $u^tCu\geqslant0$ for every $u$ (and, indeed, $Y=-X$ provides a counterexample).</p>
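<p>The counterexample $Y=-X$ can also be checked on samples; a NumPy sketch (the sample size is arbitrary):</p>

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.standard_normal((10_000, 2))   # centred random vectors
Y = -X                                  # the counterexample Y = -X

# Sample version of C = E(X Y^t): approximately minus the covariance of X.
C = X.T @ Y / len(X)
print(np.linalg.eigvalsh((C + C.T) / 2))  # both eigenvalues are negative
```

<p>Both eigenvalues of the symmetrised $C$ come out near $-1$, so $u^tCu<0$ for every nonzero $u$: a cross-covariance matrix of two different vectors need not be positive semi-definite.</p>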
| <p>Covariance matrix <strong>C</strong> is calculated by the formula,
<span class="math-container">$$
\mathbf{C} \triangleq E\{(\mathbf{x}-\bar{\mathbf{x}})(\mathbf{x}-\bar{\mathbf{x}})^T\}.
$$</span>
We are going to use <a href="https://en.wikipedia.org/wiki/Definite_matrix" rel="nofollow noreferrer">the definition of a positive semi-definite matrix</a>, which says:</p>
<blockquote>
<p>A real square matrix <span class="math-container">$\mathbf{A}$</span> is positive semi-definite if and only if<br />
<span class="math-container">$\mathbf{b}^T\mathbf{A}\mathbf{b}\succeq0$</span><br />
is true for arbitrary real column vector <span class="math-container">$\mathbf{b}$</span> in appropriate size.</p>
</blockquote>
<p>For an arbitrary real vector <strong>u</strong>, we can write,
<span class="math-container">$$
\begin{array}{rcl}
\mathbf{u}^T\mathbf{C}\mathbf{u} & = & \mathbf{u}^TE\{(\mathbf{x}-\bar{\mathbf{x}})(\mathbf{x}-\bar{\mathbf{x}})^T\}\mathbf{u} \\
& = & E\{\mathbf{u}^T(\mathbf{x}-\bar{\mathbf{x}})(\mathbf{x}-\bar{\mathbf{x}})^T\mathbf{u}\} \\
& = & E\{s^2\} \\
& = & \sigma_s^2. \\
\end{array}
$$</span>
Where <span class="math-container">$\sigma_s^2$</span> is the variance of the zero-mean scalar random variable <span class="math-container">$s$</span>, that is,
<span class="math-container">$$
s = \mathbf{u}^T(\mathbf{x}-\bar{\mathbf{x}}) = (\mathbf{x}-\bar{\mathbf{x}})^T\mathbf{u}.
$$</span>
The square of any real number is greater than or equal to zero.
<span class="math-container">$$
\sigma_s^2 \ge 0
$$</span>
Thus,
<span class="math-container">$$
\mathbf{u}^T\mathbf{C}\mathbf{u} = \sigma_s^2 \ge 0.
$$</span>
Which implies that covariance matrix of any real random vector is always positive semi-definite.</p>
|
matrices | <p>I see on Wikipedia that the product of two commuting symmetric positive definite matrices is also positive definite. Does the same result hold for the product of two positive semidefinite matrices?</p>
<p>My proof of the positive definite case falls apart for the semidefinite case because of the possibility of division by zero...</p>
| <p>You have to be careful about what you mean by "positive (semi-)definite" in the case of non-Hermitian matrices. In this case I think what you mean is that all eigenvalues are
positive (or nonnegative). Your statement isn't true if "$A$ is positive definite" means $x^T A x > 0$ for all nonzero real vectors $x$ (or equivalently $A + A^T$ is positive definite). For example, consider
$$ A = \pmatrix{ 1 & 2\cr 2 & 5\cr},\ B = \pmatrix{1 & -1\cr -1 & 2\cr},\
AB = \pmatrix{-1 & 3\cr -3 & 8\cr},\ (1\ 0) A B \pmatrix{1\cr 0\cr} = -1$$</p>
<p>Let $A$ and $B$ be positive semidefinite real symmetric matrices. Then $A$ has a positive semidefinite square root, which I'll write as $A^{1/2}$. Now $A^{1/2} B A^{1/2}$ is symmetric and positive semidefinite, and $AB = A^{1/2} (A^{1/2} B)$ and $A^{1/2} B A^{1/2}$ have the same nonzero eigenvalues.</p>
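<p>The $2\times 2$ counterexample above can be verified directly; a NumPy sketch:</p>

```python
import numpy as np

A = np.array([[1., 2.], [2., 5.]])
B = np.array([[1., -1.], [-1., 2.]])

print(np.linalg.eigvalsh(A))   # both positive: A is positive definite
print(np.linalg.eigvalsh(B))   # both positive: B is positive definite

AB = A @ B
x = np.array([1., 0.])
print(x @ AB @ x)              # -1: the quadratic form of AB goes negative
print(np.linalg.eigvals(AB))   # yet both eigenvalues of AB are positive
```

<p>So $AB$ fails the quadratic-form test even though its eigenvalues are positive, which is exactly the distinction between the two meanings of "positive definite" drawn above.</p>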
| <p>The product of two symmetric PSD matrices is PSD iff the product is also symmetric.
More generally, if $A$ and $B$ are PSD, $AB$ is PSD iff $AB$ is normal, i.e., $(AB)^T AB = AB(AB)^T$.</p>
<p>Reference:
On a product of positive semidefinite matrices, A.R. Meenakshi, C. Rajian, Linear Algebra and its Applications, Volume 295, Issues 1–3, 1 July 1999, Pages 3–6.</p>
|
differentiation | <p>Could anyone explain in simple words (and maybe with an example) what the difference between the gradient and the Jacobian is? </p>
<p>The gradient is a vector with the partial derivatives, right? </p>
| <p>These are two particular forms of matrix representation of the derivative of a differentiable function <span class="math-container">$f,$</span> used in two cases:</p>
<ul>
<li>when <span class="math-container">$f:\mathbb{R}^n\to\mathbb{R},$</span> then for <span class="math-container">$x$</span> in <span class="math-container">$\mathbb{R}^n$</span>, <span class="math-container">$$\mathrm{grad}_x(f):=\left[\frac{\partial f}{\partial x_1}\frac{\partial f}{\partial x_2}\dots\frac{\partial f}{\partial x_n}\right]\!\bigg\rvert_x$$</span> is the <span class="math-container">$1\times n$</span> matrix of the linear map <span class="math-container">$Df(x)$</span> expressed from the canonical basis of <span class="math-container">$\mathbb{R}^n$</span> to the canonical basis of <span class="math-container">$\mathbb{R}$</span> (which is just <span class="math-container">$(1)$</span>). Because in this case the matrix has only one row, you can think of it as the vector
<span class="math-container">$$\nabla f(x):=\left(\frac{\partial f}{\partial x_1},\frac{\partial f}{\partial x_2},\dots,\frac{\partial f}{\partial x_n}\right)\!\bigg\rvert_x\in\mathbb{R}^n.$$</span>
This vector <span class="math-container">$\nabla f(x)$</span> is the unique vector of <span class="math-container">$\mathbb{R}^n$</span> such that <span class="math-container">$Df(x)(y)=\langle\nabla f(x),y\rangle$</span> for all <span class="math-container">$y\in\mathbb{R}^n$</span> (see <a href="https://en.wikipedia.org/wiki/Riesz_representation_theorem" rel="noreferrer" title="Riesz representation theorem">Riesz representation theorem</a>), where <span class="math-container">$\langle\cdot,\cdot\rangle$</span> is the usual scalar product
<span class="math-container">$$\langle(x_1,\dots,x_n),(y_1,\dots,y_n)\rangle=x_1y_1+\dots+x_ny_n.$$</span> </li>
<li>when <span class="math-container">$f:\mathbb{R}^n\to\mathbb{R}^m,$</span> then for <span class="math-container">$x$</span> in <span class="math-container">$\mathbb{R}^n$</span>, <span class="math-container">$$\mathrm{Jac}_x(f)=\left.\begin{bmatrix}\frac{\partial f_1}{\partial x_1}&\frac{\partial f_1}{\partial x_2}&\dots&\frac{\partial f_1}{\partial x_n}\\\frac{\partial f_2}{\partial x_1}&\frac{\partial f_2}{\partial x_2}&\dots&\frac{\partial f_2}{\partial x_n}\\
\vdots&\vdots&&\vdots\\\frac{\partial f_m}{\partial x_1}&\frac{\partial f_m}{\partial x_2}&\dots&\frac{\partial f_m}{\partial x_n}\\\end{bmatrix}\right|_x$$</span> is the <span class="math-container">$m\times n$</span> matrix of the linear map <span class="math-container">$Df(x)$</span> expressed from the canonical basis of <span class="math-container">$\mathbb{R}^n$</span> to the canonical basis of <span class="math-container">$\mathbb{R}^m.$</span></li>
</ul>
<p>For example, with <span class="math-container">$f:\mathbb{R}^2\to\mathbb{R}$</span> such as <span class="math-container">$f(x,y)=x^2+y$</span> you get <span class="math-container">$\mathrm{grad}_{(x,y)}(f)=[2x \,\,\,1]$</span> (or <span class="math-container">$\nabla f(x,y)=(2x,1)$</span>) and for <span class="math-container">$f:\mathbb{R}^2\to\mathbb{R}^2$</span> such as <span class="math-container">$f(x,y)=(x^2+y,y^3)$</span> you get <span class="math-container">$\mathrm{Jac}_{(x,y)}(f)=\begin{bmatrix}2x&1\\0&3y^2\end{bmatrix}.$</span></p>
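<p>The worked example $f(x,y)=(x^2+y,y^3)$ can be checked against finite differences; a small NumPy sketch (the evaluation point and step size are arbitrary):</p>

```python
import numpy as np

def f(v):
    x, y = v
    return np.array([x**2 + y, y**3])

def jac_analytic(v):
    x, y = v
    return np.array([[2 * x, 1.0], [0.0, 3 * y**2]])

def jac_numeric(f, v, h=1e-6):
    # Central finite differences, one column per input variable.
    v = np.asarray(v, dtype=float)
    cols = []
    for i in range(len(v)):
        e = np.zeros_like(v)
        e[i] = h
        cols.append((f(v + e) - f(v - e)) / (2 * h))
    return np.stack(cols, axis=1)

v = np.array([1.5, 2.0])
print(jac_analytic(v))
print(jac_numeric(f, v))   # agrees with the analytic Jacobian
```

<p>Column $i$ of the Jacobian is the partial derivative with respect to the $i$-th variable; for a scalar $f$ the same routine returns a single-row matrix, i.e. the gradient.</p>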
| <p>The gradient is the vector formed by the partial derivatives of a <em>scalar</em> function.</p>
<p>The Jacobian matrix is the matrix formed by the partial derivatives of a <em>vector</em> function. Its vectors are the gradients of the respective components of the function.</p>
<p>E.g., with some argument omissions,</p>
<p><span class="math-container">$$\nabla f(x,y)=\begin{pmatrix}f'_x\\f'_y\end{pmatrix}$$</span></p>
<p><span class="math-container">$$J \begin{pmatrix}f(x,y),g(x,y)\end{pmatrix}=\begin{pmatrix}f'_x&&g'_x\\f'_y&&g'_y\end{pmatrix}=\begin{pmatrix}\nabla f;\nabla g\end{pmatrix}.$$</span></p>
<p>If you want, the Jacobian is a generalization of the gradient to vector functions.</p>
<hr />
<p><strong>Addendum:</strong></p>
<p>The first derivative of a scalar multivariate function, or gradient, is a vector,</p>
<p><span class="math-container">$$\nabla f(x,y)=\begin{pmatrix}f'_x\\f'_y\end{pmatrix}.$$</span></p>
<p>Thus the second derivative, which is the Jacobian of the gradient is a matrix, called the <em>Hessian</em>.</p>
<p><span class="math-container">$$H(f)=\begin{pmatrix}f''_{xx}&&f''_{xy}\\f''_{yx}&&f''_{yy}\end{pmatrix}.$$</span></p>
<p>Higher derivatives and vector functions require the <em>tensor</em> notation.</p>
|
linear-algebra | <p>Let $k$ be a field with characteristic different from $2$, and $A$ and $B$ be $2 \times 2$ matrices with entries in $k$. Then we can prove, with a bit art, that $A^2 - 2AB + B^2 = O$ implies $AB = BA$, hence $(A - B)^2 = O$. It came to a surprise for me when I first succeeded in proving this, for this seemed quite nontrivial to me.</p>
<p>I am curious if there is a similar or more general result for the polynomial equations of matrices that ensures commutativity. (Of course, we do not consider trivial cases such as the polynomial $p(X, Y) = XY - YX$ corresponding to commutator)</p>
<p>p.s. This question is purely out of curiosity. I do not know even this kind of problem is worth considering, so you may regard this question as a recreational one.</p>
| <p>Your question is very interesting, unfortunately that's not a complete answer, and in fact not an answer at all, or rather a negative answer.</p>
<p>You might think, as generalization of <span class="math-container">$A^2+B^2=2AB$</span>, of the following matrix equation in <span class="math-container">$\mathcal M_n\Bbb C$</span> :
<span class="math-container">$$ (E) :\ \sum_{l=0}^k (-1)^k \binom kl A^{k-l}B^l = 0. $$</span></p>
<p>This equation implies the commutativity if and only if <span class="math-container">$n=1$</span> or <span class="math-container">$(n,k)=(2,2)$</span>, which is the case you studied. However, the equation (E) has a remarkable property: <strong>if <span class="math-container">$A$</span> and <span class="math-container">$B$</span> satisfy (E) then their characteristic polynomials are equal</strong>. Isn't it amazing? You can have a look at <a href="https://hal.inria.fr/hal-00780438/document" rel="nofollow noreferrer">this paper for a proof</a>.</p>
| <p>I'm neither an expert on this field nor on the unrelated field of the facts I'm about to cite, so this is more a shot in the dark. But: Given a set of matrices, the problem whether there is some combination in which to multiply them resulting in zero is undecidable, even for relatively small cases (such as two matrices of sufficient size or a low fixed number of $3\times3$ matrices).</p>
<p>A solution to one side of this problem (the "is there a polynomial such that..." side) <em>looks</em> harder (though I have no idea beyond intuition whether it really is!) than the mortality problem mentioned above. If that is actually true, then it would at least suggest that $AB = BA$ does not guarantee the existance of a solution (though it might still happen).</p>
<p>In any case, the fact that the mortality problem is decidable for $2 \times 2$ matrices at least shows that the complexity of such problems increases rapidly with dimension, which could explain why your result for $2$ does not easily extend to higher dimensions.</p>
<p>Apologies for the vagueness of all this, I just figured it might give someone with more experience in the field a different direction to think about the problem. If someone does want to look that way, <a href="http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.31.5792&rep=rep1&type=pdf" rel="nofollow">this paper</a> has the mentioned results as well as references to related literature.</p>
|
number-theory | <p>I need to prove the following, but I am not able to do it. This is not homework, nor something related to research, but rather something that came up in preparation for an exam.</p>
<blockquote>
<p>If $n = 1 + m$, where $m$ is the product of four consecutive positive
integers, prove that $n$ is a perfect square.</p>
</blockquote>
<p>Now since $m = p(p+1)(p+2)(p+3)$;</p>
<blockquote>
<p>$p = 0, n = 1$ - Perfect Square</p>
<p>$p = 1, n = 25$ - Perfect Square</p>
<p>$p = 2, n = 121$ - Perfect Square</p>
</blockquote>
<p>Is there any way to prove the above without induction? My approach was to expand $m = p(p+1)(p+2)(p+3)$ into a 4th degree equation, and then try proving that $n = m + 1$ is a perfect square, but I wasn't able to do it. Any idea if it is possible?</p>
| <p>Your technique <em>should</em> have worked, but if you don't know which expansions to do first you can get yourself in a tangle of algebra and make silly mistakes that bring the whole thing crashing down.</p>
<p>The way I reasoned was, well, I have four numbers multiplied together, and I want it to be two numbers of the same size multiplied together. So I'll try multiplying the big one with the small one, and the two middle ones.</p>
<p>$$p(p+1)(p+2)(p+3) + 1 = (p^2 + 3p)(p^2 + 3p + 2) + 1$$</p>
<p>Now those terms are <em>nearly</em> the same. How can we force them together? I'm going to use the basic but sometimes-overlooked fact that $xy = (x+1)y - y$, and likewise $x(y + 1) = xy + x$.</p>
<p>$$\begin{align*}
(p^2 + 3p)(p^2 + 3p + 2) + 1 &= (p^2 + 3p + 1)(p^2 + 3p + 2) - (p^2 + 3p + 2) + 1 \\
&= (p^2 + 3p + 1)(p^2 + 3p + 1) + (p^2 + 3p + 1) - (p^2 + 3p + 2) + 1 \\
&= (p^2 + 3p + 1)^2
\end{align*}$$
Tada.</p>
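<p>The identity is also easy to machine-check; a quick sketch:</p>

```python
from math import isqrt

# p(p+1)(p+2)(p+3) + 1 = (p^2 + 3p + 1)^2 for every p.
for p in range(10_000):
    n = p * (p + 1) * (p + 2) * (p + 3) + 1
    assert n == (p * p + 3 * p + 1) ** 2
    assert isqrt(n) ** 2 == n   # so n is a perfect square
print("verified for p < 10000")
```

<p>This is only a sanity check, of course — the algebra above is the proof for all $p$.</p>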
| <p>$(n-1)(n+1)+1 = n^{2}$.</p>
<p>Note that $(n+1)-(n-1)=2$.</p>
<p>With this in mind</p>
<p>$$\begin{align*}
p(p+1)(p+2)(p+3)+1 &= (p^{2}+3p)(p^{2}+3p+2)+1 \\
&= [(p^{2}+3p+1)-1][(p^{2}+3p+1)+1]+1 \\
&= (p^{2}+3p+1)^2
\end{align*}$$</p>
|
matrices | <p>If <span class="math-container">$A$</span> and <span class="math-container">$B$</span> are square matrices, and <span class="math-container">$AB=I$</span>, then I think it is also true that <span class="math-container">$BA=I$</span>. In fact, <a href="http://en.wikipedia.org/wiki/Invertible_matrix" rel="noreferrer">this Wikipedia page</a> says that this "follows from the associativity of matrix multiplication". I assume there's a nice simple one-line proof, but can't seem to find it.</p>
<p>Nothing exotic, here -- assume that the matrices have finite size and their elements are real numbers.</p>
<p>This isn't homework (if that matters to you). My last homework assignment was about 50 years ago.</p>
| <p>Since $AB=I$ then $B=B(AB)=(BA)B$. Note from $AB=I$ that $1=\det(AB)=\det(A)\det(B)$ so $\det(B)\neq0$.</p>
<p>So by $(BA)B=B$ we have:</p>
<p>$(BA-I)B=0$. Since $\det(B)\neq0$ then $B$ is not a $0$ divisor. So $BA=I$</p>
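<p>A small numerical illustration (the matrix below is an arbitrary invertible example): compute only a <em>right</em> inverse and observe that it is a left inverse too.</p>

```python
import numpy as np

A = np.array([[4., 1., 0.],
              [1., 3., 1.],
              [0., 1., 2.]])

# Solve A B = I for a right inverse B ...
B = np.linalg.solve(A, np.eye(3))

# ... and B A = I holds as well, as the argument predicts.
print(np.allclose(A @ B, np.eye(3)))  # True
print(np.allclose(B @ A, np.eye(3)))  # True
```

<p>This is special to square matrices: for rectangular matrices a one-sided inverse need not be two-sided.</p>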
| <p>I suggest proving it in one line:
Let $B\in\mathbb F^{n\times n}$ be a right inverse and $C\in\mathbb F^{n\times n}$ a left inverse of $A\in\mathbb F^{n\times n}$. Since matrix multiplication is associative:
$$B=IB=(CA)B=CAB=C(AB)=CI=C$$
Thus $B=C$ as required. </p>
|
linear-algebra | <p>The following is a well-known result in functional analysis:</p>
<blockquote>
<p>If the vector space $X$ is finite dimensional, all norms are equivalent. </p>
</blockquote>
<p>Here is the standard proof in one textbook. First, pick a norm for $X$, say
$$\|x\|_1=\sum_{i=1}^n|\alpha_i|$$
where $x=\sum_{i=1}^n\alpha_ix_i$, and $(x_i)_{i=1}^n$ is a basis for $X$. Then show that every norm for $X$ is equivalent to $\|\cdot\|_1$, i.e.,
$$c\|x\|\leq\|x\|_1\leq C\|x\|.$$
For the first inequality, one can easily get $c$ by triangle inequality for the norm. For the second inequality, instead of constructing $C$, the <a href="http://en.wikipedia.org/wiki/Bolzano%E2%80%93Weierstrass_theorem" rel="noreferrer">Bolzano-Weierstrass theorem</a> is applied to construct a contradiction. </p>
<p>The strategies for proving these two inequalities are so different. Here is my <strong>question</strong>, </p>
<blockquote>
<p>Can one prove this theorem without Bolzano-Weierstrass theorem?</p>
</blockquote>
<p><strong>UPDATE:</strong></p>
<blockquote>
<p>Is the converse of the theorem true? In other words, if all norms for a vector space $X$ are equivalent, then can one conclude that $X$ is of finite dimension?</p>
</blockquote>
| <p>To answer the question in the update:</p>
<p>If $(X,\|\cdot\|)$ is a normed space of infinite dimension, we can produce a non-continuous linear functional: Choose an algebraic basis $\{e_{i}\}_{i \in I}$ which we may assume to be normalized, i.e., $\|e_{i}\| = 1$ for all $i$. Every vector $x \in X$ has a unique representation $x = \sum_{i \in I} x_i \, e_i$ with only finitely many nonzero entries (by definition of a basis).</p>
<p>Now choose a countable subset $i_1,i_2, \ldots$ of $I$. Then $\phi(x) = \sum_{k=1}^{\infty} k \cdot x_{i_k}$ defines a linear functional on $X$. Note that $\phi$ is not continuous, as $\frac{1}{\sqrt{k}} e_{i_k} \to 0$ while $\phi(\frac{1}{\sqrt{k}}e_{i_k}) = \sqrt{k} \to \infty$.</p>
<p>There can't be a $C \gt 0$ such that the norm $\|x\|_{\phi} = \|x\| + |\phi(x)|$ satisfies $\|x\|_\phi \leq C \|x\|$ since otherwise $\|\frac{1}{\sqrt{k}}e_{i_k}\| \to 0$ would imply $|\phi(\frac{1}{\sqrt{k}}e_{i_k})| \to 0$ contrary to the previous paragraph.</p>
<p>This shows that on an infinite-dimensional normed space there are always inequivalent norms. In other words, the converse you ask about is true.</p>
| <p>You are going to need something of this nature. A Banach Space is a complete normed linear space (over $\mathbb{R}$ or $\mathbb{C}$). The equivalence of norms on a finite dimensional space eventually comes down to the facts that the unit ball of a Banach Space is compact if the space is finite-dimensional, and that continuous real-valued functions on compact sets achieve their sup and inf. It is the Bolzano–Weierstrass theorem that gives the first property.</p>
<p>In fact, a Banach Space is finite dimensional if and only if its unit ball is compact. Things like this do go wrong for infinite-dimensional spaces. For example, let $\ell_1$ be the space of real sequences such that $\sum_{n=0}^{\infty} |a_n| < \infty $. Then $\ell_1$ is an infinite dimensional Banach Space with norm $\|(a_n) \| = \sum_{n=0}^{\infty} |a_n|.$ It also admits another norm $\|(a_n)\|' = \sqrt{ \sum_{n=0}^{\infty} |a_{n}|^2}$ , and this norm is not equivalent to the first one.</p>
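<p>The inequivalence of the two $\ell_1$ norms is concrete: for the vector $x_n$ with $n$ entries equal to $1/n$, $\|x_n\|=1$ while $\|x_n\|'=1/\sqrt n$. A quick sketch:</p>

```python
import math

for n in (1, 4, 100, 10_000):
    x = [1.0 / n] * n
    l1 = sum(abs(t) for t in x)                # always 1
    l2 = math.sqrt(sum(t * t for t in x))      # 1/sqrt(n)
    print(n, l1 / l2)                          # ratio sqrt(n), unbounded
```

<p>Since the ratio is unbounded, no constant $C$ with $\|x\|\le C\|x\|'$ for all $x$ can exist, so the two norms are not equivalent.</p>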
|
linear-algebra | <p>For a lower triangular matrix, the inverse of itself should be easy to find because that's the idea of the LU decomposition, am I right? For many of the lower or upper triangular matrices, often I could just flip the signs to get its inverse. For eg: $$\begin{bmatrix}
1 & 0 & 0\\
0 & 1 & 0\\
-1.5 & 0 & 1
\end{bmatrix}^{-1}=
\begin{bmatrix}
1 & 0 & 0\\
0 & 1 & 0\\
1.5 & 0 & 1
\end{bmatrix}$$
I just flipped from -1.5 to 1.5 and I got the inverse.</p>
<p>But this apparently doesn't work all the time. Say in this matrix:
$$\begin{bmatrix}
1 & 0 & 0\\
-2 & 1 & 0\\
3.5 & -2.5 & 1
\end{bmatrix}^{-1}\neq
\begin{bmatrix}
1 & 0 & 0\\
2 & 1 & 0\\
-3.5 & 2.5 & 1
\end{bmatrix}$$
By flipping the signs, the inverse is wrong.
But if I go through the whole tedious step of gauss-jordan elimination, I would get its correct inverse like this: $\begin{bmatrix}
1 & 0 & 0\\
-2 & 1 & 0\\
3.5 & -2.5 & 1
\end{bmatrix}^{-1}=
\begin{bmatrix}
1 & 0 & 0\\
2 & 1 & 0\\
1.5 & 2.5 & 1
\end{bmatrix}$
And it looks like some entries could just flip its signs but not for others.</p>
<p>Then this is kind of weird because I thought the whole idea of getting the lower and upper triangular matrices is to avoid the need to go through the tedious process of gauss-jordan elimination and can get the inverse quickly by flipping signs? Maybe I have missed something out here. How should I get an inverse of a lower or an upper matrix quickly?</p>
<p>Ziyuang's answer handles the case where <span class="math-container">$N^2=0$</span>, but it can be generalized as follows. A triangular <span class="math-container">$n\times n$</span> matrix <span class="math-container">$T$</span> with 1s on the diagonal can be written in the form <span class="math-container">$T=I+N$</span>. Here <span class="math-container">$N$</span> is the strictly triangular part (with zeros on the diagonal), and it always satisfies the relation <span class="math-container">$N^{n}=0$</span>. Therefore we can use the polynomial factorization <span class="math-container">$1-x^n=(1-x)(1+x+x^2+\cdots +x^{n-1})$</span> with <span class="math-container">$x=-N$</span> to get the matrix relation
<span class="math-container">$$
(I+N)(I-N+N^2-N^3+\cdots+(-1)^{n-1}N^{n-1})=I + (-1)^{n-1}N^n=I
$$</span>
telling us that <span class="math-container">$(I+N)^{-1}=I+\sum_{k=1}^{n-1}(-1)^kN^k$</span>.</p>
<p>Yet another way of looking at this is to notice that it also is an instance of a geometric series <span class="math-container">$1+q+q^2+q^3+\cdots =1/(1-q)$</span> with <span class="math-container">$q=-N$</span>. The series converges for the unusual reason that powers of <span class="math-container">$q$</span> are all zero from some point on. The same formula can be used to good effect elsewhere in algebra, too. For example, in a residue class ring like <span class="math-container">$\mathbf{Z}/2^n\mathbf{Z}$</span> all the even numbers are nilpotent, so computing the modular inverse of an odd number can be done with this formula. </p>
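<p>Applied to the matrix from the question, this terminating geometric series reproduces the Gauss–Jordan answer. A NumPy sketch:</p>

```python
import numpy as np

# T = I + N with N strictly lower triangular, so N^3 = 0.
T = np.array([[ 1. ,  0. , 0.],
              [-2. ,  1. , 0.],
              [ 3.5, -2.5, 1.]])
N = T - np.eye(3)

# The series stops after two terms: T^{-1} = I - N + N^2.
T_inv = np.eye(3) - N + N @ N
print(T_inv)                               # [[1,0,0],[2,1,0],[1.5,2.5,1]]
print(np.allclose(T @ T_inv, np.eye(3)))   # True
```

<p>The <span class="math-container">$N^2$</span> term is exactly what the naive "flip the signs" guess misses.</p>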
| <p>In the case of a lower triangular matrix with arbitrary non-zero diagonal members, you may just need to change it into $T = D(I+N)$, where $D$ is a diagonal matrix and $N$ is again a strictly lower triangular matrix. Everything said about the inverse in the previous comments then carries over unchanged.</p>
|
game-theory | <p>This question is a result of having too much free time years ago during military service.
One of the many pastimes was playing tic-tac-toe in varying grid sizes and dimensions, and it led me to a conjecture.
Now, after several years of mathematical training at a university, I am still unable to settle the conjecture, so I present it to you.</p>
<p>The classical tic-tac-toe game is played on a $3\times3$ grid and two players take turns to put their mark somewhere in the grid.
The first one to get three collinear marks wins.
Collinear includes horizontal, vertical and diagonal lines.
Experience shows that the game always ends in a draw if both players play wisely.</p>
<p>Let us write the grid size $3\times3$ as $3^2$.
We can change the edge length by playing on any $a^2$ grid (where each player tries to get $a$ marks in a row on the $a\times a$ grid).
We can also change dimension by playing on any $a^d$ grid, for example $3^3=3\times3\times3$.
I want to understand something about this game for general $a$ and $d$.
Let me repeat: The goal is to make $a$ collinear marks.</p>
<p>I assume both players play in an optimal way.
It is quite easy to see that the first player wins on a $2^d$ grid for any $d\geq2$ but the game is a tie on $2^1$.
The game is a tie also on $3^1$ and $3^2$, but my experience suggests that the first player wins on $3^3$ but the game ties on $4^d$ for $d\leq3$.
It seems quite credible that if there is a winning strategy on $a^d$, there is one also on $a^{d'}$ for any $d'\geq d$, since more dimensions to move in gives more room for winning rows.
<a href="https://math.stackexchange.com/a/417190/166535">This answer</a> to a related question tells that for any $a$ there is $d$ so that there is a winning strategy on $a^d$.</p>
<p>This brings me to the conjecture:</p>
<blockquote>
<p><s>There is a winning strategy for tic-tac-toe on an $a^d$ grid if and only if $d\geq a$.</s> (Refuted by TonyK's answer below.)</p>
</blockquote>
<p>Is there a characterization of the cases where a winning strategy exists?
It turns out not to be as simple as I thought.</p>
<p>To fix notation, let
$$
\delta(a)=\min\{d;\text{first player wins on }a^d\}
$$
and
$$
\alpha(d)=\max\{a;\text{first player wins on }a^d\}.
$$
The main question is:</p>
<blockquote>
<p>Is there an explicit expression for either of these functions?
Or decent bounds?
Partial answers are also welcome.</p>
</blockquote>
<p>Note that the second player never wins, as was discussed in <a href="https://math.stackexchange.com/questions/366077/why-does-the-strategy-stealing-argument-for-tic-tac-toe-work">this earlier post</a>.</p>
<hr>
<p>A remark for the algebraically-minded:
We can also allow the lines of marks to continue at the opposite face when they exit the grid; this amounts to giving the grid a torus-like structure.
Now there are no special points, unlike in the usual case with boundaries.
Collinear points on a toric grid of size $a^d$ corresponds to a line (maximal collinear set) in the module $(\mathbb Z/a\mathbb Z)^d$.
(If $a$ is odd, then $a$ collinear points in the mentioned module add up to zero, but the converse does not always hold: the nine points in $(\mathbb Z/9\mathbb Z)^2$ with multiples of three as both coordinates add up to zero but are not collinear.)
This approach might be more useful when $a$ is a prime and the module becomes a vector space.
Anyway, if this version of the game seems more manageable, I'm happy with answers about it as well (although the conjecture as stated is not true in this setting; the first player wins on $3^2$).</p>
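<p>To make "more room for winning rows" quantitative: the number of winning lines in the ordinary (non-toric) $a^d$ grid is $((a+2)^d-a^d)/2$, since along each coordinate a line either stays constant, increases with a parameter $t$, or decreases with it, and traversing a line in either direction gives the same set of cells. A brute-force enumeration (my own sketch, not part of the question) confirms the count:</p>

```python
from itertools import product

def lines(a, d):
    """All maximal lines of the a^d grid, as frozensets of cells."""
    found = set()
    # Each coordinate is either a constant in 0..a-1, or moves with the
    # parameter t ('+' for t, '-' for a-1-t).
    for pattern in product(list(range(a)) + ['+', '-'], repeat=d):
        if not any(p in ('+', '-') for p in pattern):
            continue  # all coordinates constant: a single cell, not a line
        cells = frozenset(
            tuple(t if p == '+' else a - 1 - t if isinstance(p, str) else p
                  for p in pattern)
            for t in range(a))
        found.add(cells)  # '+'/'-' reversal produces each line exactly twice
    return found

# ordinary 3x3 tic-tac-toe has 8 winning lines, and in general ((a+2)^d - a^d)/2
assert len(lines(3, 2)) == 8
assert all(len(lines(a, d)) == ((a + 2) ** d - a ** d) // 2
           for a in (2, 3, 4) for d in (1, 2, 3))
```

<p>(For $4^3$ this gives the familiar $76$ winning lines of Qubic.)</p>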
| <p>I will quote some results and problems from the book <a href="http://www.cambridge.org/us/academic/subjects/mathematics/discrete-mathematics-information-theory-and-coding/combinatorial-games-tic-tac-toe-theory" rel="noreferrer"><em>Combinatorial Games: Tic-Tac-Toe Theory</em></a> by <a href="https://en.wikipedia.org/wiki/J%C3%B3zsef_Beck" rel="noreferrer">József Beck</a>, some of which were also quoted in <a href="https://math.stackexchange.com/questions/994408/converting-a-gomoku-winning-strategy-from-a-small-board-to-a-winning-strategy-on/994604#994604">this answer</a>.</p>
<p>The terms "<strong>win</strong>" and "<strong>draw</strong>" refer to the game as ordinarily played, i.e., the <em>first</em> player to complete a line wins. The term "<strong>Weak Win</strong>" refers to the corresponding Maker-Breaker game, where the first player ("Maker") wins if he completes a line, <em>regardless</em> of whether the second player has previously completed a line; in other words, the second player ("Breaker") can only defend by blocking the first player, he cannot "counterattack" by threatening to make his own line. (Note that ordinary $3\times3$ tic-tac-toe is a Weak Win.) A game is a "<strong>Strong Draw</strong>" if it is not a Weak Win, i.e., if the second player ("Breaker") can prevent the first player from completing a line.</p>
<blockquote>
<p><strong>Theorem 3.1</strong> <em>Ordinary $3^2$ Tic-Tac-Toe is a draw but not a Strong Draw.</em><br>
<br><strong>Theorem 3.2</strong> <em>The $4^2$ game is a Strong Draw, but not a Pairing Strategy Draw (because the second player cannot force a draw by a single pairing strategy.)</em><br>
<br><strong>Theorem 3.3</strong> <em>The $n\times n$ Tic-Tac-Toe is a Pairing Strategy Draw for every $n\ge5.$</em></p>
</blockquote>
<p>For a discussion of <a href="https://en.wikipedia.org/wiki/Oren_Patashnik" rel="noreferrer">Oren Patashnik</a>'s computer-assisted result that $4^3$ tic-tac-toe is a first player win, Beck refers to Patashnik's paper:</p>
<blockquote>
<p>Oren Patashnik, Qubic: $4\times4\times4$ Tic-Tac-Toe, <em>Mathematics Magazine</em> <strong>53</strong> (1980), 202-216.</p>
</blockquote>
<p>Not much more is known about multidimensional tic-tac-toe, as can be seen from the open problems:</p>
<blockquote>
<p><strong>Open Problem 3.2</strong> <em>Is it true that $5^3$ Tic-Tac-Toe is a draw game? Is it true that $5^4$ Tic-Tac-Toe is a first player win?</em></p>
</blockquote>
<p>The conjecture that "if there is a winning strategy on $a^d$, there is one also on $a^{d'}$ for any $d'\geq d$" is given as an open problem:</p>
<blockquote>
<p><strong>Open Problem 5.2</strong> <em>Is it true that, if the $n^d$ Tic-Tac-Toe is a first player win, then the $n^D$ game, where $D\gt d$, is also a win?</em><br>
<br><strong>Open Problem 5.3.</strong> <em>Is it true that, if the $n^d$ game is a draw, then the $(n+1)^d$ game is also a draw?</em></p>
</blockquote>
<p>To see that the intuition "adding more ways to win can't turn a winnable game into a draw game" is wrong, consider the following example of a tic-tac-toe-like game, attributed to Sujith Vijay: The board is the set $V=\{1,2,3,4,5,6,7,8,9\};\ $ the winning sets are $\{1,2,3\},$ $\{1,2,4\},$ $\{1,2,5\},$ $\{1,3,4\},$ $\{1,5,6\},$ $\{3,5,7\},$ $\{2,4,8\},$ $\{2,6,9\}$. As in tic-tac-toe, the two players take turns choosing (previously unchosen) elements of $V;$ the game is won by the first player to succeed in choosing all the elements of a winning set. It can be verified that this is a draw game, but the restriction to the board $\{1,2,3,4,5,6,7\}$ (with winning sets $\{1,2,3\},$ $\{1,2,4\},$ $\{1,2,5\},$ $\{1,3,4\},$ $\{1,5,6\},$ $\{3,5,7\}$) is a first-player win.</p>
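<p>The claim in Vijay's example is small enough to check by brute force. The following negamax solver is my own sketch (the function names and conventions are not from Beck's book); it returns $1$, $0$ or $-1$ for a first-player win, a draw, or a second-player win under optimal play:</p>

```python
from functools import lru_cache

def solve(board, win_sets):
    """1 = first player wins, 0 = draw, -1 = second player wins (optimal play)."""
    win_sets = [frozenset(w) for w in win_sets]

    @lru_cache(maxsize=None)
    def value(p1, p2):
        # value of the position from the viewpoint of the player about to move
        mover_is_p1 = len(p1) == len(p2)
        free = [x for x in board if x not in p1 and x not in p2]
        if not free:
            return 0  # board full, nobody completed a winning set: draw
        best = -1
        for x in free:
            own = (p1 if mover_is_p1 else p2) | {x}
            if any(w <= own for w in win_sets):
                v = 1  # this move completes a winning set
            elif mover_is_p1:
                v = -value(own, p2)
            else:
                v = -value(p1, own)
            best = max(best, v)
            if best == 1:
                break
        return best

    return value(frozenset(), frozenset())

W = [{1,2,3}, {1,2,4}, {1,2,5}, {1,3,4}, {1,5,6}, {3,5,7}, {2,4,8}, {2,6,9}]
assert solve(tuple(range(1, 10)), W) == 0  # the full game is a draw
assert solve(tuple(range(1, 8)), [w for w in W if max(w) <= 7]) == 1  # restriction: first-player win
```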
| <p>$4^3$ ("Qubic") is a win for the first player. According to <a href="http://en.wikipedia.org/wiki/Oren_Patashnik" rel="nofollow">this link</a>, it was first proved by Oren Patashnik in 1980. The proof is complicated. It took 12 years for this proof to be converted into a practical computer algorithm; I was present at the 1992 Computer Olympiad where the program of Victor Allis and Patrick Schoo romped to victory.</p>
|
probability | <p>In the book "Zero: The Biography of a Dangerous Idea", author Charles Seife claims that a dart thrown at the real number line would never hit a rational number. He doesn't say that it's only "unlikely" or that the probability approaches zero or anything like that. He says that it will never happen because the irrationals take up all the space on the number line and the rationals take up no space. This idea <em>almost</em> makes sense to me, but I can't wrap my head around why it should be impossible to get really lucky and hit, say, 0, dead on. Presumably we're talking about a magic super sharp dart that makes contact with the number line in exactly one point. Why couldn't that point be a rational? A point takes up no space, but it almost sounds like he's saying the points don't even exist somehow. Does anybody else buy this? I found one academic paper online which ridiculed the comment, but offered no explanation. Here's the original quote:</p>
<blockquote>
<p>"How big are the rational numbers? They take up no space at all. It's a tough concept to swallow, but it's true. Even though there are rational numbers everywhere on the number line, they take up no space at all. If we were to throw a dart at the number line, it would never hit a rational number. Never. And though the rationals are tiny, the irrationals aren't, since we can't make a seating chart and cover them one by one; there will always be uncovered irrationals left over. Kronecker hated the irrationals, but they take up all the space in the number line. The infinity of the rationals is nothing more than a zero." </p>
</blockquote>
| <p>Mathematicians are strange in that we distinguish between "impossible" and "happens with probability zero." If you throw a magical super sharp dart at the number line, you'll hit a rational number with probability zero, but it isn't <em>impossible</em> in the sense that there do exist rational numbers. What <em>is</em> impossible is, for example, throwing a dart at the real number line and hitting $i$ (which isn't even on the line!). </p>
<p>This is formalized in <a href="http://en.wikipedia.org/wiki/Measure_(mathematics)">measure theory</a>. The standard measure on the real line is <a href="http://en.wikipedia.org/wiki/Lebesgue_measure">Lebesgue measure</a>, and the formal statement Seife is trying to state informally is that the rationals have measure zero with respect to this measure. This may seem strange, but lots of things in mathematics seem strange at first glance. </p>
<p>A simpler version of this distinction might be more palatable: flip a coin infinitely many times. The probability that you flip heads every time is zero, but it isn't impossible (at least, it isn't <em>more</em> impossible than flipping a coin infinitely many times to begin with!).</p>
| <p>Note that if you randomly (i.e. uniformly) choose a real number in the interval $[0,1]$ then for <em>every</em> number there is a zero probability that you will pick this number. This does not mean that you did not pick <em>any</em> number at all.</p>
<p>Similarly with the rationals, while infinite, and dense and all that, they are very very sparse in the aspect of measure and probability. It is perfectly possible that if you throw countably many darts at the real line you will hit <em>exactly</em> all the rationals and every rational exactly once. This scenario is <strong>highly unlikely</strong>, because the rational numbers is a measure zero set.</p>
<p>Probability deals with "<em>what are the odds of that happening?</em>" a priori, not a posteriori. So we are interested in measuring a certain structure a set has, in modern aspects of probability and measure, the rationals have size zero and this means zero probability.</p>
<p>I will leave you with some food for thought: if you ask an arbitrary mathematician to choose <em>any</em> real number from the interval $[0,10]$ there is a good chance they will choose an integer, a slightly worse chance it will be a rational, an even slimmer chance this is going to be an algebraic number, and even less likely a transcendental number. In some aspect this is strongly against measure-theoretic models of a uniform probability on $[0,10]$, but that's just how life is.</p>
|
game-theory | <p>So my friend comes up and confidently says that he can defeat me in this game:<br></p>
<ul>
<li>The integers <span class="math-container">$1$</span> to <span class="math-container">$14$</span> are written down on a blackboard (paper in our case).</li>
<li>Players take turns colouring (striking out in our case) <span class="math-container">$2$</span> or <span class="math-container">$3$</span> numbers that total <span class="math-container">$15$</span>. They can only colour the uncoloured numbers</li>
<li>The last player to colour a pair or a triplet wins the game</li>
</ul>
<p>Since I don't want to lose, I figured I'll ask it here. Is there any strategy a player (which, first or second?) can employ to win the game? I have thought about it for a lot of time, but to no avail. So, I present a few sample games:
<br><span class="math-container">$(1,14),(2,5,8), (4,11), (9,6), (3,12)$</span>. So, the first player wins.<br>
<span class="math-container">$(4,5,6), (7,8), (10,3,2), (1,14)$</span> so second player wins. <br>
I also calculated the number of solutions to <span class="math-container">$a+b+c = 15$</span>, where <span class="math-container">$0 \le a < b<c\le 14$</span>. There are <span class="math-container">$19$</span> such <span class="math-container">$(a,b,c)$</span>.</p>
<p>Verified the <span class="math-container">$19$</span> solutions using Python, in case they are of any help:<br></p>
<pre><code>1,14
2,13
3,12
4,11
5,10
6,9
7,8
1,2,12
1,3,11
1,4,10
1,5,9
1,6,8
2,3,10
2,4,9
2,5,8
2,6,7
3,4,8
3,5,7
4,5,6
</code></pre>
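<p>(A two-line check of the count above, drawing the pairs and triples from $1,\dots,14$ directly rather than via the $a=0$ trick:)</p>

```python
from itertools import combinations

# all pairs and triples from 1..14 that total 15
moves = [c for r in (2, 3) for c in combinations(range(1, 15), r) if sum(c) == 15]
assert len(moves) == 19  # 7 pairs and 12 triples
```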
<p>PS: I see that the tag description about game theory matches my question, so I've used it. But please do not use complicated notation employed in game theory since I don't know much about it.</p>
| <p>Sorry for my previous wrong answer. Now, I believe this one is correct, although ugly and seems not generalizable.</p>
<p>Player 1 can always win. He does that by coloring 1,6,8 first. Now, the possible moves for player 2 are coloring:</p>
<pre><code>2,13
3,12
4,11
5,10
2,3,10
2,4,9
3,5,7
</code></pre>
<p>Suppose that player 2 chooses 2,3,10. Now, player 1 chooses 4,11 and wins.</p>
<p>Suppose that player 2 chooses 2,4,9 (3,5,7 resp). Now player 1 chooses 3,5,7 (2,4,9 resp) and wins.</p>
<p>Suppose that player 2 chooses 2,13. Now, player 1 chooses 3,12. Now, player 2 chooses 4,11 (5,10 resp). Now, player 1 chooses 5,10 (4,11 resp) and wins.</p>
<p>Suppose that player 2 chooses 3,12. Now, player 1 chooses 2,13. And the remaining game is analogous to the previous case.</p>
<p>Suppose that player 2 chooses 4,11. Now, player 1 chooses 2,3,10 and wins.</p>
<p>Suppose that player 2 chooses 5,10. Now, player 1 chooses 4,11. Now, player 2 chooses 3,12 (2, 13 resp). Now, player 1 chooses 2, 13 (3, 12 resp) and wins.</p>
| <h1>Theoretical background</h1>
<p>For a game like this, there is only one theoretical idea from Combinatorial Game Theory we really need: The concept of "N-positions" and "P-positions". An N-position is a state where the <strong>N</strong>ext player to move has a winning strategy. And a P-position is a state where instead the <strong>P</strong>revious player to move has one.</p>
<p>Two key facts:</p>
<ol>
<li>Note that at the end of the game when there are no moves available, the previous player had a winning strategy of "make the move that ends the game like this", so it's a P-position.</li>
<li>In general, a position is a P-position when all of the moves are to N-positions. That is, any move (if one exists) would hand your opponent a win if they play perfectly. And, otherwise, the position is an N-position since a good move to make would be a move to a P-position.</li>
</ol>
<p>If you prefer video, this idea is discussed at <a href="https://youtu.be/YV_oWBi1_ck" rel="nofollow noreferrer">P-positions and N-positions: Introduction to Combinatorial Game Theory #2</a> by <a href="https://www.youtube.com/@KnopsCourse" rel="nofollow noreferrer">Knop's Course</a> on YouTube.</p>
<hr />
<h1>Solution</h1>
<p>(I will use "take" to mean "strike out".)</p>
<p>This game is probably <em>just</em> simple enough that it could reasonably be solved by hand with a decent amount of paper and time - just write out some branches of play, building up which states are N-positions or P-positions from below, until a promising line of play is found. But I used a program very similar to the Python mentioned below to analyze it.</p>
<p>The game is a <strong>win for the first player</strong>, who can win by taking any pair, or one of the triples <span class="math-container">$(1,6,8)$</span> or <span class="math-container">$(2,6,7)$</span>.</p>
<p>It suffices to show that one of those moves is part of a winning strategy. For simplicity, consider the move of taking <span class="math-container">$(1,6,8)$</span>, leaving <span class="math-container">$(2,3,4,5,7,9,10,11,12,13,14)$</span>. Then there are seven things the opponent can take: <span class="math-container">$(2,13)$</span>, <span class="math-container">$(3,12)$</span>, <span class="math-container">$(4,11)$</span>, <span class="math-container">$(5,10)$</span>, <span class="math-container">$(2,3,10)$</span>, <span class="math-container">$(2,4,9)$</span>, and <span class="math-container">$(3,5,7)$</span>.</p>
<ul>
<li>If they take <span class="math-container">$(2,13)$</span>, the winning moves are <span class="math-container">$(3,12)$</span> and <span class="math-container">$(5,10)$</span>. Either way, after one of those moves the remaining two moves are independent: <span class="math-container">$(4,11)$</span> and whichever is remaining of <span class="math-container">$(3,12)$</span> and <span class="math-container">$(5,10)$</span>.</li>
<li>If they take <span class="math-container">$(3,12)$</span> or <span class="math-container">$(5,10)$</span> one winning move is <span class="math-container">$(2,13)$</span> by the above. (Aside: <span class="math-container">$(4,11)$</span> is also a winning move.)</li>
<li>If they take <span class="math-container">$(2,3,10)$</span>, the only available move is <span class="math-container">$(4,11)$</span>, leaving <span class="math-container">$(5,7,9,12,13,14)$</span>, from which there are no more moves.</li>
<li>If they take <span class="math-container">$(4,11)$</span>, one winning move is <span class="math-container">$(2,3,10)$</span> by the above. (Aside: <span class="math-container">$(3,12)$</span> and <span class="math-container">$(5,10)$</span> are also winning moves.)</li>
<li>If they take <span class="math-container">$(2,4,9)$</span> or <span class="math-container">$(3,5,7)$</span>, the only winning move is to take the other of those triples, leaving <span class="math-container">$(10,\ldots,14)$</span> from which there are no more moves.</li>
</ul>
<hr />
<h1>Python code</h1>
<p>Since you mentioned Python, we can turn this idea of N and P positions into Python code. If <code>moves</code> gives a list of reachable states starting from the state <code>s</code>, then we can use #2 above to test if a state is an N-position:</p>
<pre><code>def isNPosition(s): return not all(map(isNPosition, moves(s)))
</code></pre>
<p>One way to define <code>moves</code> is to use a set of numbers (so that striking out is easy), as in:</p>
<pre><code>import itertools

def moves(s):
    """All states reachable from s by striking out a pair or triple totalling 15."""
    candidates = itertools.chain(itertools.combinations(s, 2), itertools.combinations(s, 3))
    return [s - set(x) for x in candidates if sum(x) == 15]
</code></pre>
<p>Since the game is so short with not too much branching, this can evaluate whether any state (including the starting state) is an N-position very quickly.</p>
<pre><code>import itertools

def moves(s):
    candidates = itertools.chain(itertools.combinations(s, 2), itertools.combinations(s, 3))
    return [s - set(x) for x in candidates if sum(x) == 15]

def isNPosition(s):
    return not all(map(isNPosition, moves(s)))

start = set(range(1, 15))
print("The game is a win for the first player:", isNPosition(start))
print("(1,6,8) would be a bad first move:", isNPosition(start - {1, 6, 8}))
# list which first moves are good/bad with "False"/"True"
for s in moves(start):
    print("The first move of:", sorted(start - s), "leads to an N-position?", isNPosition(s))
</code></pre>
<p><a href="https://tio.run/##hZExb8IwEIV3/4qTu9iqKaKoVRUJdeuIOrABg4MdYsnxRbbTwK9P7UApCKn1eH733bt37THW6ObDYJoWfQQTtY@INhCidAUNfunAAi/A69h5B@swCTqyA98QqNDDAYwDa0Jkl86nHTalcTIadKlXPHP@@LdizjPOVBC6JqEXi9nL9jTfhOUnBpOF1y4cRiatZY1s2ZVE/NhNj5DWGxcZXdUa9rLRiQUS@mQ3@46pWhkfIrRWHrUvqLiZlXb00u01m4nZyw0vVV7FG4ceO6ug1AlaSnWGZQP/oMb81iNkO3IfcjjQ12ZXX1GSWZ@MI6ppxvcm1kA/pA2aTunKd5qSS/45BiubUkkIUMBp518UYFXQDREhHVgrNt7i3hXnYkOo1VIFiAjSwXLSnpd4p@nv9hZZfY77LqsCTlkd@DB8Aw" rel="nofollow noreferrer" title="Python 3 – Try It Online">Try it online!</a></p>
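<p>One design note: for larger variants of the game (say the numbers $1$ to $n$ with a bigger target), the plain recursion revisits the same state many times. Memoizing on a <code>frozenset</code> avoids that; a sketch of my own, not part of the original program:</p>

```python
from functools import lru_cache
from itertools import combinations

@lru_cache(maxsize=None)
def is_n_position(s):
    """s: frozenset of the numbers still on the board (hashable, so it can be cached)."""
    ms = [s - frozenset(c)
          for r in (2, 3)
          for c in combinations(sorted(s), r)
          if sum(c) == 15]
    return not all(is_n_position(m) for m in ms)

assert is_n_position(frozenset(range(1, 15)))  # the full game is a first-player win
```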
|
logic | <p>I am trying very hard to understand Gödel's Incompleteness Theorem. I am really interested in what it says about axiomatic languages, but I have some questions:</p>
<p>Gödel's theorem is proved based on Arithmetic and its four operators: is all mathematics derived from these four operators (×, +, -, ÷) ?</p>
<p>Operations such as log(x) and sin(x) are indeed atomic operations, like those of arithmetic, but aren't there infinitely many such operators that have inverses (that is, + and - are "inverse" operations, × and ÷ are inverse operations).</p>
<p>To me it seems as though making a statement about the limitations of provability given 4 arbitrary operators is absurd, but that probably highlights a gap in my understanding, given that he proved this in 1931 and it's unlikely that I have found a counter-argument.</p>
<p>As a follow-up remark, why the obsession with arithmetic operators? They probably seem "fundamental" to us as humans, but to me they all seem to be derived from four possible graphical arrangements of numbers (if we consider four sides to a digit), and fundamentally derived from addition.</p>
<p>[][] o [][] addition and, its inverse, subtraction</p>
<p>[][]<br>
[][] multiplication (iterative addition) and, its inverse, division</p>
<p>There must be operators that are consistent on the natural numbers that we certainly aren't aware of, no?</p>
<p>Please excuse my ignorance, I am hoping I haven't offended any real mathematicians with this posting.</p>
<p><hr>
edit: I think I am understanding this a lot more, and I think my main difficulty in understanding this was that:</p>
<blockquote>
<p>There are statements that are true that are unprovable. </p>
</blockquote>
<p>Seemed like an impossible statement. It does, however, make sense to me at the moment in the context of an axiomatic language with a limited number of axioms. Ultimately, suggesting that there are statements that are true and expressible in the language, but are unprovable in the language (because of the limited set of axioms), is what I believe to be the point of the proof -- is this correct?</p>
<p>There's a fair amount of historical context to make some of Gödel's choices in his original proof clear. For a good overview of the proof, Nagel and Newman's <strong>Gödel's Proof</strong> is pretty good (though not without its detractors). I also highly recommend the late Torkel Franzen's <a href="http://rads.stackoverflow.com/amzn/click/1568812388">Gödel's Theorem: An Incomplete Guide to Its Use and Abuse</a>. I may myself be guilty of some of those abuses below; I'm trying to shy away from a lot of the technical details, and this usually invites imprecision and even abuse in this particular field. I hope those who know better than me will keep me honest via comments and appropriate ear-pulling.</p>
<p><strong>Some history.</strong> Sometimes I think of the 19th century as the <em>Menagerie century</em>. A "menagerie" was like a zoo of weird animals. During a lot of the 19th century, people were trying to clean up some of the logical problems that abounded in the foundations of mathematics. Calculus plainly <em>worked</em>, but a lot of the notions of infinitesimals were simply incompatible with some 'facts' about real numbers (eventually solved by Weierstrass's notion of limits using $\epsilon$s and $\delta$s). A lot of assumptions people had been making implicitly (or explicitly) were shown to be false through the construction of explicit counterexamples (the "weird animals" that lead me to call it the <em>menagerie century</em>; many mathematicians were like explorers bringing back weird animals nobody had seen before and which challenged people's notions of what was and was not the case): the Dirichlet function to show that you can have functions that are discontinuous everywhere; functions that are continuous everywhere but nowhere differentiable; the failure of Dirichlet's principle for some functions; Peano's curve that filled-up a square; etc. Then, the antinomies (paradoxes and contradictions) in the early set theory. Even some work which today we find completely without problem caused a lot of debate: Hilbert's solution of the problem of a finite basis for invariants in any number of variables was originally derided by Gordan as "theology, not mathematics" (the solution was not constructive and involved the use of an argument by contradiction). Many found a lot of these developments deeply troubling. A foundational crisis arose (see the link in Qiaochu's answer).</p>
<p>Hilbert, one of the leading mathematicians of the late 19th century, proposed a way to settle the differences between the two main camps. His proposal was essentially to try to use methods that both camps found unassailably valid to show that mathematics was <em>consistent</em>: that it was not possible to prove both a proposition and its negation. In fact, his proposal was to use methods that both camps found unassailable to prove that the methods that <em>one</em> camp found troublesome would not introduce any problems. This was the essence of the <strong>Hilbert Programme.</strong></p>
<p>In order to be able to accomplish this, however, one needed to have some way to study proofs and mathematics itself. There arose the notion of "formal proof", the idea of an axiomatization of basic mathematics, etc. There were several competing axiomatic systems for the basics of mathematics: Zermelo attempted to axiomatize set theory (later expanded to Zermelo-Fraenkel set theory); Peano had proposed a collection of axioms for basic arithmetic; and famously Russell and Whitehead had, in their massive book <strong>Principia Mathematica</strong> attempted to establish an axiomatic and deductive system for all of mathematics (as I recall, it takes hundreds of pages to finally get to $1+1=2$). Some early successes were achieved, with people showing that some parts of such theories were in fact consistent (more on this later). Then came Gödel's work.</p>
<p><strong>Consistency and Completeness.</strong> We say a formal theory is <em>consistent</em> if you cannot prove both $P$ and $\neg P$ in the theory for some sentence $P$. In fact, because from $P$ and $\neg P$ you can prove anything using classical logic, it is equivalent that a theory is consistent if and only if there is at least one sentence $Q$ such that there is no proof of $Q$ in the theory. By contrast, a theory is said to be <em>complete</em> if given any sentence $P$, either the theory has a proof of $P$ or a proof of $\neg P$. (Note that an inconsistent theory is necessarily complete). Hilbert proposed to find a consistent and complete axiomatization of arithmetic, together with a proof (using only the basic mathematics that both camps agreed on) that it was both complete and consistent, and that it would remain so even if some of the tools that his camp used (which the other found unpalatable and doubtful) were used with it.</p>
<p><strong>Why arithmetic?</strong> Arithmetic was a particularly good field to focus on in the early efforts. First, it was the basis of the "other camp". Kronecker famously said "God gave us the natural numbers, the rest is the work of man." It was hoped that an axiomatization of the natural numbers and their basic operations (addition, multiplication) and relations (order) would be both relatively easy, and also have a hope of being both consistent and complete. That is, it was a good testing ground, because it contained a lot of interesting and nontrivial mathematics, and yet seemed to be reasonably simple.</p>
<p><strong>Gödel</strong>. Gödel focused on arithmetic for this reason. As it happens, multiplication is key to the argument (there is something special about multiplication; some theories of the natural numbers that include only addition can be shown to be consistent using only the kinds of tools that Hilbert allowed). To answer one of your questions along the way, Gödel even defined new operations and relations on natural numbers that had little to do with addition and multiplication along the way, so that yes, there are operations other than those (no fetishism about them at all). But in fact, Gödel did not restrict himself <em>solely</em> to arithmetic. His proof is, on its face, about the entire system of mathematics set forth in Russell and Whitehead's <strong>Principia</strong>, though as Gödel notes it can easily be adapted to other systems so long as they satisfy certain criteria (that's why Gödel's original paper has a title that explicitly refers to the <em>Principia</em> "and related systems"). </p>
<p>What Gödel showed was that <em>any</em> theory, subject to some technical restrictions (for example, you must have a way of recognizing whether a given sentence is or is not an axiom), that is "strong enough" that you can use it to define a certain portion of arithmetic will necessarily be either incomplete or inconsistent (that is, either you can prove <em>everything</em> in that theory, or else there is at least one sentence $P$ such that neither $P$ nor $\neg P$ can be proven). It's not a limitation based on four operations, or an obsession with those operations: quite the opposite. What it says is that if you want your theory to include <em>at least</em> some arithmetic, then your theory is going to be so complicated that <em>either</em> it is inconsistent, or else there are propositions that can neither be proven nor disproven <em>using only the methods that both camps found valid.</em> </p>
<p>That is: what it shows is a limitation of those particular (logically unassailable) methods. If we use other methods, we are able to establish consistency of arithmetic, for example, but if you had your doubts about the consistency of arithmetic in the first place, chances are you will find those methods just as doubtful. </p>
<p>Now, about your coda, and the statement "statements that are true but unprovable"; this is not very apt. You will find a lot of criticism of this paraphrase in Franzen's book, with good reason. It's best to think that you have statements that are neither provable nor disprovable. In fact, one of the things we know is that if you have such a statement $P$, in a theory $M$, then you can find a <strong>model</strong> (an interpretation of the axioms of $M$ that makes the axioms true) in which $P$ is true, and a different model in which $P$ is false. So in a sense, $P$ is neither "true" nor "false", because whether it is true or false will depend on the <em>model</em> you are using. For example, the Gödel sentence $G$ that proves the First Incompleteness Theorem (that if arithmetic is consistent then it is incomplete, since there can be no proof of the sentence $G$ and no proof of $\neg G$) is often said to be "true but unprovable" because $G$ can be interpreted as saying "There is no proof of $G$ in this theory." But in fact, you can find a model of arithmetic in which $G$ is <em>false</em>, so why do we say $G$ is "true"? Well, the point is that $G$ is true in what is called "the standard model." There is a particular interpretation of arithmetic (of what "natural number" means, of what $0$ means, of what "successor of $n$" means, of what $+$ means, etc) which we usually have in mind; <em>in that model</em>, $G$ is true but not provable. But we know that there are different models (where 'natural number' may mean something completely different, or perhaps $+$ means something different) where we can <em>show</em> that $G$ is false <em>under that interpretation</em> of the axioms. I would stay away from "true" and "false", and stick with "provable" and "unprovable" when discussing this; it tends to prevent problems.</p>
<p><strong>First Summary.</strong> So: there were historical reasons why Gödel focused on arithmetic; the limitation is not of arithmetic itself, but rather of the formal methods in question: if your theory is sufficiently "strong" that it can represent part of arithmetic (plus it satisfies the few technical restrictions), then either your theory is inconsistent, or else the finitistic proof methods at issue cannot suffice to settle all questions (there are sentences $P$ which can neither be proven nor disproven).</p>
<p><strong>Can something be rescued?</strong> Well, ideally of course we would have liked a theory that was complete and consistent, and that we could <em>show</em> is complete and consistent using only the kinds of logical methods which we, and pretty much everyone else, finds beyond doubt. But perhaps we can at least show that the other methods don't introduce any problems? That is, that the theory is consistent, even if it is not complete? That at least would be somewhat of a victory.</p>
<p>Unfortunately, Gödel also proved that this is not the case. He showed that if your theory is sufficiently complex that it can represent the part of arithmetic at issue (and it satisfies the technical conditions alluded to earlier), and the theory <em>is</em> consistent, then in fact one of the things that it cannot settle is whether the theory is consistent! That is, one can write down a sentence $C$ which makes sense in the theory, and which essentially "means" "This theory is consistent" (much like the Gödel sentence $G$ essentially "means" that "there is no proof of $G$ in this theory"), and which one can prove that if the theory is consistent then the theory has no proof of $C$ and no proof of $\neg C$. </p>
<p>Again, this is a limitation of those finitistic methods that everyone finds logically unassailable. In fact, there are proofs of the consistency of arithmetic using transfinite induction, but as I alluded to above, if you harbored doubts about arithmetic in the first place, you are simply not going to be very comfortable with transfinite induction either! Imagine that you are not sure that $X$ is being truthful, and $X$ suggests that you ask $Y$ about $X$s truthfulness; you don't know $Y$, but $X$ assures you that $Y$ is <em>very</em> trustworthy. Well, that's not going to help you, right?</p>
<p><strong>Key take-away:</strong> Because the theorem applies to <em>any</em> theory that is sufficiently complex (and satisfies the technical restrictions), we are not even in a position of enlarging our set of axioms to escape these problems. So long as we restrict ourselves to enlargement methods that we find logically unassailable, the technical restrictions will still be satisfied, so that the new theory, stronger and larger though it will be because it has more axioms, will <em>still</em> be incomplete (though it will possibly be <em>other</em> sentences that are now incomplete or unprovable; remember that by adding axioms, we are also potentially expanding the kinds of things about which we can talk). So the theorems are not about shortcomings of <em>particular</em> axiomatic systems, but rather about those finitistic methods within a very large class of systems. </p>
<p><strong>What about those 'technical restrictions'?</strong> They are important. Suppose that arithmetic were consistent. That means that there is at least one model for it. We could pick a model $M$, and then say "Let's make a theory whose axioms are exactly those sentences that are true when interpreted in $M$." This is a <em>complete</em> and <em>consistent</em> axiomatic system for arithmetic. Complete, because each sentence is either true in $M$ (and hence an axiom, hence provable in this theory) or else false in $M$, in which case its negation is true (and hence an axiom, and hence provable). And consistent, because it has a model, $M$, and a theory is consistent if and only if it has a model. The problem with this axiomatic theory is that if I give you a sentence, you're going to have a hard time deciding if it is or it is not an axiom! We didn't really achieve anything by taking this axiomatic system. The "technical restrictions" are both in the form of making the system actually usable, and also certain technical issues that arise from the mechanics of the proof. But the restrictions are mild enough that pretty much everyone agrees that most reasonable theories will likely satisfy them.</p>
<p><strong>Second summary.</strong> So: if you have a formal axiomatic system which satisfies certain technical (but mild) restrictions, if the theory is large enough that you can represent (a certain part of) arithmetic in it, then the theory is either inconsistent, or else the finitistic methods that everyone agrees are unassailable are insufficient to prove or disprove every sentence in the theory; worse, one of the things that the finitistic methods cannot prove or disprove is <em>whether</em> the theory is in fact consistent or not. </p>
<p>Hope that helps. I really recommend Franzen's book in any case. It will lead you away from potential misinterpretations of what the theorem says (I am likely guilty of a few myself above, which will no doubt be addressed in comments by those who know better than I do). </p>
| <p>The incompleteness theorem is much more general than "arithmetic and its four operators." What it says is that any effectively generated formal system which is sufficiently powerful is either inconsistent or incomplete. This requires that the system be capable of talking about arithmetic, but it can also talk about much more than arithmetic.</p>
<blockquote>
<p>To me it seems as though making a statement about the limitations of provability given 4 arbitrary operators is absurd</p>
</blockquote>
<p>Then you should study harder! Mathematics is not beholden to your intuition. When your intuition clashes with the mathematics, you should assume that your intuition is wrong. (I am not trying to be harsh here, but this is an attitude you need to digest before you can really learn any mathematics.)</p>
<p>You also shouldn't think of arithmetic as just being about the arithmetic operations: it's also about the <em>quantifiers</em>.</p>
<blockquote>
<p>As a follow-up remark, why the obsession with arithmetic operators?</p>
</blockquote>
<p>Who claims to have an obsession with arithmetic operators? Perhaps what you are missing here is historical context. The historical context is, roughly, this: back in Gödel's day there was a program, initiated by <a href="http://en.wikipedia.org/wiki/David_Hilbert">Hilbert</a>, which sought to give a complete set of axioms for all of mathematics. That is, Hilbert wanted to write down a set of (consistent) axioms from which all mathematical truths were deducible. (To really understand why Hilbert wanted to do this you should read about the <a href="http://en.wikipedia.org/wiki/Foundations_of_mathematics#Foundational_crisis">foundational crisis in mathematics</a>). This was very grand and ambitious and many people had great hopes for Hilbert's program until Gödel destroyed those hopes with the Incompleteness theorem, which shows that Hilbert's program could not possibly succeed: if the axioms are powerful enough to talk about arithmetic, then they are either inconsistent (you can prove false statements) or incomplete (you can't prove some true statements).</p>
|
combinatorics | <p>This is a neat little problem that I was discussing today with my lab group out at lunch. Not particularly difficult but interesting implications nonetheless</p>
<p>Imagine there are 100 people in line to board a plane that seats 100. The first person in line, Alice, realizes she lost her boarding pass, so when she boards she decides to take a random seat instead. Every person that boards the plane after her will either take their "proper" seat, or if that seat is taken, a random seat instead.</p>
<p>Question: What is the probability that the last person that boards will end up in their proper seat?</p>
<p>Moreover, and this is the part I'm still pondering about. Can you think of a physical system that would follow this combinatorial statistics? Maybe a spin wave function in a crystal etc...</p>
| <p>Here is a rephrasing which simplifies the intuition of this nice puzzle.</p>
<p>Suppose whenever someone finds their seat taken, they politely evict the squatter and take their seat. In this case, the first passenger (Alice, who lost her boarding pass) keeps getting evicted (and choosing a new random seat) until, by the time everyone else has boarded, she has been forced by a process of elimination into her correct seat.</p>
<p>This process is the same as the original process except for the identities of the people in the seats, so the probability of the last boarder finding their seat occupied is the same.</p>
<p>When the last boarder boards, Alice is either in her own seat or in the last boarder's seat, which have both looked exactly the same (i.e. empty) to her up to now, so there is no way poor Alice could be more likely to choose one than the other.</p>
| <p>This is a classic puzzle!</p>
<p>The answer is that the probability that the last person ends up in their proper seat is exactly <span class="math-container">$\frac{1}{2}$</span>.</p>
<p>The reasoning goes as follows:</p>
<p>First observe that the fate of the last person is determined the moment either the first or the last seat is selected! This is because the last person will either get the first seat or the last seat. Any other seat will necessarily be taken by the time the last person gets to 'choose'.</p>
<p>Since at each random choice the first seat and the last seat are equally likely to be taken, the last person gets either the first or the last seat with equal probability: <span class="math-container">$\frac{1}{2}$</span>.</p>
<p>Sorry, no clue about a physical system.</p>
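<p>This answer is easy to check numerically. The following is a small simulation sketch (my own addition, not part of the original answer; it assumes seat <code>p</code> is passenger <code>p</code>'s proper seat, with Alice as passenger 0):</p>

```python
import random

def last_gets_own_seat(n, rng):
    """One boarding of n passengers: does passenger n-1 end up in seat n-1?"""
    free = set(range(n))
    free.remove(rng.choice(tuple(free)))          # Alice sits at random
    for p in range(1, n - 1):                     # passengers 1 .. n-2
        if p in free:
            free.remove(p)                        # proper seat is free: take it
        else:
            free.remove(rng.choice(tuple(free)))  # otherwise a random free seat
    return free == {n - 1}                        # is the last seat untouched?

rng = random.Random(0)
trials = 20000
p = sum(last_gets_own_seat(100, rng) for _ in range(trials)) / trials
print(p)  # close to 0.5
```

<p>The estimate comes out near <span class="math-container">$0.5$</span>, in line with the argument above.</p>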
|
probability | <p>This morning, I wanted to flip a coin to make a decision but only had an SD card:</p>
<p><img src="https://i.sstatic.net/T7fUT.png" alt="enter image description here"></p>
<p>Given that <em>I don't know</em> the bias of this SD card, would flipping it be considered a "fair toss"?</p>
<p>I thought if I'm just as likely to assign an outcome to one side as to the other, then it must be fair. But this also seems like a recasting of the original question; instead of asking whether the <em>unknowing of the SD card's construction</em> defines fairness, I'm asking if the <em>unknowing of my own psychology</em> (<em>e.g.</em> which side I'd choose for which outcome) defines fairness. Either way, I think I'm asking: What's the exact relationship between <em>not knowing</em> and "fairness"?</p>
<p>Additional thought: An SD card might be "fair" <em>to me</em>, but not at all fair to, say, a design engineer looking at the SD card's blueprint, who immediately sees that the chip is off-center from the flat plane. So it seems <em>fairness</em> even depends on the subjects to whom fairness <em>matters</em>. In a football game then, does an SD card remain "fair" as long as no design engineer is there to discern the object being tossed?</p>
| <p>Here's a pragmatic answer from an engineer. You can always get a fair 50/50 outcome with <strong>any</strong> "coin" (or SD card, or what have you), <em>without having to know whether it is biased, or how biased it is</em>:</p>
<ul>
<li>Flip the coin twice. </li>
<li>If you get $HH$ or $TT$, discard the trial and
repeat. </li>
<li>If you get $HT$, decide $H$. </li>
<li>If you get $TH$, decide $T$.</li>
</ul>
<p>The only conditions are that (i) the coin is not completely biased (i.e., $\Pr(H)\neq 0, \Pr(T)\neq 0$), and (ii) the bias does not change from trial to trial.</p>
<p>The procedure works because whatever the bias is (say $\Pr(H)=p$, $\Pr(T)=1-p$), the probabilties of getting $HT$ and $TH$ are the same: $p(1-p)$. Since the other outcomes are discarded, $HT$ and $TH$ each occur with probability $\frac{1}{2}$.</p>
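<p>The procedure is easy to verify empirically. Here is a quick sketch (my own addition) using a heavily biased coin:</p>

```python
import random

def biased_flip(p, rng):
    """One flip of a coin with P(H) = p."""
    return 'H' if rng.random() < p else 'T'

def fair_flip(p, rng):
    """Von Neumann debiasing: flip twice, discard HH/TT, keep HT->H, TH->T."""
    while True:
        a, b = biased_flip(p, rng), biased_flip(p, rng)
        if a != b:
            return a  # 'H' for HT, 'T' for TH

rng = random.Random(1)
n = 20000
frac = sum(fair_flip(0.9, rng) == 'H' for _ in range(n)) / n
print(frac)  # close to 0.5 despite the 0.9 bias
```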
| <p>That is a very good question!</p>
<p>There are (at least) two different ways to define probability: as a measure of frequencies, and as a measure of (subjective) knowledge of the result.</p>
<p>The frequentist definition would be: the probability of the sd card landing "heads" is the proportion of times it lands "heads", if you toss it many times
(details omitted partially because of ignorance: what do we mean by 'many'?)</p>
<p>The "knowledge" approach (usually called bayesian) is harder to define. It asks how likely you (given your information) think an outcome is. As you have no information about the construction of the sd card, you might think both sides are equally likely to appear. </p>
<p>In more concrete terms, say I offer you a bet: I give you one dollar if 'heads', and you give me one if 'tails'. If we both are ignorant about the sd card, then, for us both, the bet sounds neither good nor bad. In a sense, it is a fair bet.</p>
<p>Notice that the bayesian approach defines more probabilities that the frequentist. I can, say, talk about the probability that black holes exist. Well, either they do, or they don't, but that does not mean there are no bets I would consider advantageous on the matter: If you offer me a million dollars versus one dollar, saying that they exist, I might take that bet (and that would 'imply' that I consider the probability that they don't exist to be bigger than 1 millionth).</p>
<p>Now, the question of fairness: if no one knows anything about the sd card, I would call your sd card toss fair. In a very meaningful way: neither of the teams, given a side, would have reason to prefer the other side. However, obviously, it has practical drawbacks: a team might figure something out later on, and come to complain about it. (that is: back when they chose a side, their knowledge did not allow them to distinguish the sides. Now, it does)</p>
<p>In the end: there is not one definition of probability that is 100% accepted. Hence, there is no definition of fair that is 100% accepted.</p>
<p><a href="http://en.wikipedia.org/wiki/Probability_interpretations">http://en.wikipedia.org/wiki/Probability_interpretations</a></p>
|
game-theory | <h1>Three shooters compete in three way duel game.</h1>
<h2>Game 1</h2>
<p><strong>Rules:</strong></p>
<ol>
<li>Shooters take turns to shoot.</li>
<li>If it's your turn, you have to choose one other person to shoot, and cannot pass your turn or shoot in the air, etc.</li>
<li>For the sake of fairness, shooters draw lots to decide who shoots first, second and third. They then fire in this order repeatedly until only one survives.</li>
<li>Everyone is rational and calculates to maximize his survival probability.</li>
</ol>
<p>Before the game starts, there are three guns available to choose from, whose hitting probabilities are not revealed, but are known to have been drawn from <span class="math-container">$U[0,1]$</span> independently. The gun with the highest hitting probability is labeled "1", the one with the 2nd highest is labeled "2", and the worst one is labeled "3". Shooters understand what the labels mean. After each has chosen his gun, the guns' exact hitting probabilities <span class="math-container">$g_1,g_2,g_3$</span> are revealed to all, and the game starts (i.e., players draw lots and start shooting).</p>
<p>Question: If you're the first one to choose a gun, which one should you choose to maximize your surviving probability? Which gun gives you the least surviving probability?</p>
<h2>Game 2</h2>
<p><strong>Rules:</strong></p>
<ol>
<li>Each turn, a fair die is rolled to decide who shoots in that turn.</li>
<li>If it's your turn, you have to choose one other person to shoot, and cannot pass your turn or shoot in the air, etc.</li>
<li>Step 1 and 2 are repeated until only one survives.</li>
<li>Everyone is rational and calculates to maximize his survival probability.</li>
</ol>
<p>Guns have to be chosen before the game starts as in Game 1.</p>
<p>Question: If you're the first one to choose a gun, which one should you choose to maximize your surviving probability? Which gun gives you the least surviving probability?</p>
<hr />
<h2>Game 0</h2>
<p>This is an update. It just occurred to me that allowing the shooter with the worst gun to hold fire in Rule 2 of Game 1 will not add much to the computational complexity. This is also more consistent with the spirit of the classical truel game, and is perhaps more reasonable. So while we're at game 1, we might as well think about this case.</p>
<p><strong>Rules:</strong></p>
<p>Same as game 1 but with rule 2 changed, so that the shooter with the worst gun is allowed to hold fire/pass turns.</p>
<p>Analysis for game 0:</p>
<blockquote>
<p>Holding fire can only happen when all 3 shooters are alive. If he
should choose to hold fire, the worst shooter (call him #3) is
essentially waiting to duel with the winner of the duel between #1 and
#2. This gives <span class="math-container">$$P_{hold}(3,3\vert 3,2,1)=P(2,2\vert 2,1)P(3,3\vert 3,2)+P(1,2\vert 2,1)P(3,3\vert 3,1)$$</span>
<span class="math-container">$$=\frac{g_2}{g_2+g_1-g_2g_1}\frac{g_3}{g_2+g_3-g_2g_3}+\frac{g_1(1-g_2)}{g_2+g_1-g_2g_1}\frac{g_3}{g_1+g_3-g_1g_3}$$</span> <br/> <span class="math-container">$$P_{hold}(3,3\vert 3,1,2)=P(1,1\vert 1,2)P(3,3\vert
3,1)+P(2,1\vert 1,2)P(3,3\vert 3,2)$$</span>
<span class="math-container">$$=\frac{g_1}{g_2+g_1-g_2g_1}\frac{g_3}{g_1+g_3-g_1g_3}+\frac{g_2(1-g_1)}{g_2+g_1-g_2g_1}\frac{g_3}{g_2+g_3-g_2g_3}$$</span> where the notation <span class="math-container">$P(1,2\vert 2,1)$</span> means #1's survival probability
when its #2's turn to shoot, given the current set of shooters are
ordered in <span class="math-container">$\vert 2,1)$</span>, for instance. To decide whether to hold or
not, #3 only needs to compare <span class="math-container">$P_{hold}(3,3\vert 3,1,2)$</span> with
<span class="math-container">$P_{shoot}(3,3\vert 3,1,2)$</span>, and <span class="math-container">$P_{hold}(3,3\vert 3,2,1)$</span> with
<span class="math-container">$P_{shoot}(3,3\vert 3,2,1)$</span>, where <span class="math-container">$P_{shoot}$</span> is computed by game 1.
This is the only additional computation you need to perform for game
0.</p>
</blockquote>
<br/>
<br/>
<br/>
<hr />
<p><em><strong>Some motivations for formulating the games as such:</strong></em></p>
<p>In simpler versions of the classic three way duel game, hitting probabilities are given and you're asked to solve for surviving probabilities for the players. In the above games that goal is in some sense reversed, because I want to know how important is your accuracy (or hit probability) in a somewhat fair setting.</p>
<p>Conclusions drawn from just one set of hit probabilities and one firing order don't tell much, because they are highly sensitive to those parameters. <strong>So you can think of the games as a kind of framework for answering the big picture question: overall, does a better shooter generally have a higher survival rate?</strong> Unlike solving for instances of the game, questions like this are <strong>meta questions</strong> for the game, and actually give you more insight about the nature and structure of the game itself. (Meta questions are generally more interesting and challenging, I think. Think of the halting problem as a meta question about algorithms and Gödel's incompleteness theorems as meta questions about arithmetic! I'd better stop before I'm carried too far away by this :-p).</p>
<p>The same question can even be asked for cases with more than 3 players. For more than 3 players a closed form solution may be impractical to obtain, although simulations could always help. For game 1, for example, a simulation for 4 shooters with guns' hit probabilities <span class="math-container">$g_1\gt g_2\gt g_3\gt g_4$</span> randomly chosen shows that <span class="math-container">$P_{g_3}\gt P_{g_1}\gt P_{g_4}\gt P_{g_2}$</span>. For 5 shooters, <span class="math-container">$P_{g_4}\gt P_{g_3}\gt P_{g_1}\gt P_{g_5}\gt P_{g_2}$</span>. Not intuitive at all. Effective simulation of 6 shooters would take hours. So it seems the low teens may be the most you can manage (if you have a super computer at hand). <strong>This means you can't go meta on the meta question again.</strong> Questions like "If many shooters play game 1, choosing top-notch guns never gives you the highest survival probability" just rest safely beyond the ceiling of your computational power.</p>
| <p>I've been working on game <span class="math-container">$2$</span>. I've gotten expressions for the probabilities of survival in terms of <span class="math-container">$g_1,g_2,g_3$</span>. I've gone over my calculations, but I'd appreciate it if someone would check them.</p>
<p>First, we consider a game with only two players. Let <span class="math-container">$p_i$</span> be the survival probability of the player with gun <span class="math-container">$i$</span>, for <span class="math-container">$i=1,2.$</span> Then <span class="math-container">$$
\begin{align}
p_1 &= \frac12g_1+\frac12(1-g_1)p_1+\frac12(1-g_2)p_1\\
&=\frac{g_1}{g_1+g_2}
\end{align}
$$</span><br>
This is because half the time player <span class="math-container">$1$</span> gets to shoot. If he hits, of course he survives. If he misses, he's back in the original position, since the next shooter will be determined by a coin toss. Half the time, player <span class="math-container">$2$</span> shoots first, and he must miss if player <span class="math-container">$1$</span> is to survive. If he does miss, then once again player <span class="math-container">$1$</span> is back in the original position. Of course, we have <span class="math-container">$$p_2=\frac{g_2}{g_1+g_2}$$</span></p>
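<p>As a sanity check (my own sketch, not part of the original derivation), the coin-toss duel can be simulated directly and compared with the closed form <span class="math-container">$p_1 = g_1/(g_1+g_2)$</span>:</p>

```python
import random

def duel(g1, g2, rng):
    """Coin-toss duel: each round a fair coin picks the shooter.
    Returns True if player 1 (hit probability g1) survives."""
    while True:
        if rng.random() < 0.5:      # player 1 shoots
            if rng.random() < g1:
                return True
        else:                       # player 2 shoots
            if rng.random() < g2:
                return False

rng = random.Random(2)
g1, g2 = 0.7, 0.4                   # illustrative values
est = sum(duel(g1, g2, rng) for _ in range(20000)) / 20000
print(est, g1 / (g1 + g2))          # estimate vs. closed form
```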
<p>Now for the <span class="math-container">$3$</span>-player game. Let <span class="math-container">$p_i$</span> be the survival probability of the player with gun <span class="math-container">$i$</span>, for <span class="math-container">$i=1,2,3.$</span> In this game player <span class="math-container">$1$</span> will shoot at player <span class="math-container">$2$</span>, and players <span class="math-container">$2$</span> and <span class="math-container">$3$</span> will shoot at player <span class="math-container">$1$</span>. To make things a little less ugly, let <span class="math-container">$q$</span> be the probability that the shooter chosen in a given round misses:<span class="math-container">$$q= 1-\frac{g_1+g_2+g_3}{3}$$</span>
Then
<span class="math-container">$$\begin{align}
p_1&=
\frac13g_1\left(\frac{g_1}{g_1+g_3}\right)+qp_1\\
&=\boxed{\frac{g_1}{g_1+g_2+g_3}\left(\frac{g_1}{g_1+g_3}\right)}\\
p_2 &=
\frac13g_2\left(\frac{g_2}{g_2+g_3}\right)+
\frac13g_3\left(\frac{g_2}{g_2+g_3}\right)+qp_2\\
&=\frac13g_2+qp_2\\
&=\boxed{\frac{g_2}{g_1+g_2+g_3}}\\
p_3 &=\frac13g_3\left(\frac{g_3}{g_2+g_3}\right)+
\frac13g_2\left(\frac{g_3}{g_2+g_3}\right)+
\frac13g_1\left(\frac{g_3}{g_1+g_3}\right)+
qp_3\\
&=\frac{g_3}{3}+
\frac13g_1\left(\frac{g_3}{g_1+g_3}\right)+
qp_3\\
&=\boxed{\frac{g_3}{g_1+g_2+g_3}\left(1+\frac{g_1}{g_1+g_3}\right)}
\end{align}$$</span></p>
<p>It seems difficult to compare these probabilities analytically, (though I haven't really made an effort,) so I wrote a python script to simulate.</p>
<pre><code>from random import random

trials = 1000000
count = [0, 0, 0]

def first(g1, g2, g3):
    return g1/(g1+g2+g3)*g1/(g1+g3)

def second(g1, g2, g3):
    return g2/(g1+g2+g3)

def third(g1, g2, g3):
    return g3/(g1+g2+g3)*(1+g1/(g1+g3))

for _ in range(trials):
    g = [random(), random(), random()]
    g1 = max(g)
    g3 = min(g)
    g2 = sum(g) - g1 - g3
    p1 = first(g1, g2, g3)
    p2 = second(g1, g2, g3)
    p3 = third(g1, g2, g3)
    m = max(p1, p2, p3)
    if m == p1:
        count[0] += 1
    elif m == p2:
        count[1] += 1
    else:
        count[2] += 1
print(count)
</code></pre>
<p>This produced the output </p>
<pre><code>[521166, 194460, 284374]
</code></pre>
<p>for a million trials. This is typical. About <span class="math-container">$52\%$</span> of the time gun <span class="math-container">$1$</span> is best, about <span class="math-container">$20\%$</span> of the time gun <span class="math-container">$2$</span> is best, and gun <span class="math-container">$3$</span> is best about <span class="math-container">$28\%$</span> of the time. </p>
<p>It's just occurred to me that I ought to write a script to simulate the duels and check if I get the same results. I'll let you know how that comes out.</p>
<p><strong>EDIT</strong></p>
<p>The script is computing the wrong thing, as Eric points out in the comments. It's computing the probability that choosing gun <span class="math-container">$1$</span> is best, whereas what we want to know is the probability that the player who chooses gun <span class="math-container">$1$</span> survives. </p>
| <p>Let me summarize my progress with game 1.</p>
<h2>Two shooters</h2>
<p>Easy to show in this case <span class="math-container">$$P(1,1\vert 1,2)=\frac{g_1}{g_1+g_2-g_1g_2}$$</span> <span class="math-container">$$P(1,2\vert 1,2)=\frac{g_1(1-g_2)}{g_1+g_2-g_1g_2}$$</span>
where <span class="math-container">$g_i$</span> is hit probability for gun i. The notation <span class="math-container">$P(1,2\vert 1,2)$</span> means survival probability for gun 1 user when it's gun 2 user's turn to shoot, given current set of players ordered as <span class="math-container">$\vert 1,2)$</span>. </p>
<p>Other 2-player scenarios are calculated similarly.</p>
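<p>These duel probabilities can be sanity-checked by simulation. The sketch below (my own, with illustrative values of <span class="math-container">$g_1,g_2$</span>) plays out the alternating-fire duel with gun 1's user shooting first, and compares the estimate with the closed form <span class="math-container">$g_1/(g_1+g_2-g_1g_2)$</span> (the same minus-sign pattern that appears in the three-shooter denominators further down):</p>

```python
import random

def alt_duel(g1, g2, rng):
    """Alternating-fire duel, gun 1's user shoots first.
    Returns True if he survives."""
    while True:
        if rng.random() < g1:   # gun 1 fires
            return True
        if rng.random() < g2:   # gun 2 fires back
            return False

rng = random.Random(3)
g1, g2 = 0.6, 0.3               # illustrative values
est = sum(alt_duel(g1, g2, rng) for _ in range(20000)) / 20000
print(est, g1 / (g1 + g2 - g1 * g2))  # estimate vs. closed form
```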
<p><br/></p>
<h2>Three shooters</h2>
<p>Because shooting order is randomly determined, there are a total of six different orders with equal probability <span class="math-container">$1/6$</span>:
<span class="math-container">$$ (1, 2, 3)\qquad(1, 3, 2)\qquad(2, 3, 1)\qquad(2, 1, 3)\qquad(3, 2, 1)\qquad(3, 1, 2)$$</span> </p>
<p>Assuming <span class="math-container">$g_1\gt g_2\gt g_3$</span>, then for all those orders, <span class="math-container">$2$</span> and <span class="math-container">$3$</span> will shoot <span class="math-container">$1$</span>, <span class="math-container">$1$</span> will shoot <span class="math-container">$2$</span>. So we have
<span class="math-container">$$P(1,1\vert 1,2,3)=g_1P(1,3\vert 1,3)+(1-g_1)P(1,2\vert 1,2,3)$$</span>
<span class="math-container">$$P(1,2\vert 1,2,3)=g_2\cdot0+(1-g_2)P(1,3\vert 1,2,3)$$</span>
<span class="math-container">$$P(1,3\vert 1,2,3)=g_3\cdot0+(1-g_3)P(1,1\vert 1,2,3)$$</span></p>
<p>These three equations can be solved for the three unknowns <span class="math-container">$P(1,1\vert 1,2,3)$</span>, <span class="math-container">$P(1,2\vert 1,2,3)$</span> and <span class="math-container">$P(1,3\vert 1,2,3)$</span>.</p>
<p>Similarly, we can solve for <span class="math-container">$P(1,1\vert 1,3,2)$</span>, <span class="math-container">$P(1,2\vert 1,3,2)$</span> and <span class="math-container">$P(1,3\vert 1,3,2)$</span>.</p>
<p>The six variables solved above correspond to <span class="math-container">$1$</span>'s survival probability under each one of the six orders, for given <span class="math-container">$g_1,g_2,g_3$</span>.</p>
<p>So <span class="math-container">$1$</span>'s surviving probability (the integrand), is given by <span class="math-container">$$p_1=\frac{P(1,1\vert 1,2,3)+P(1,2\vert 1,2,3)+P(1,3\vert 1,2,3)+P(1,1\vert 1,3,2)+P(1,2\vert 1,3,2)+P(1,3\vert 1,3,2)}{6}$$</span> </p>
<p><span class="math-container">$p_2$</span> and <span class="math-container">$p_3$</span> can be calculated similarly. </p>
<p>Using Matlab to solve for 18 equations and 18 variables gives the following ugly monsters:</p>
<p><span class="math-container">$$p_1=\frac{{g_1}^2(g_3-1)(3g_2+3g_3-2g_2g_3 - 6)}{6 (g_1 + g_3 - g_1 g_3) (g_1 + g_2 + g_3 - g_1 g_2 - g_1 g_3 - g_2 g_3 + g_1 g_2 g_3)
}$$</span>
<span class="math-container">$$p_2=\frac{g_2 (6 g_2 + 6g_3 - 3 g_1 g_2 - 3 g_1 g_3 - 12 g_2 g_3 + 3 g_2 {g_3}^2 + 7 g_1 g_2 g_3 - 2 g_1 g_2 {g_3}^2 )}{6 (g_2 + g_3 - g_2 g_3) (g_1 + g_2 + g_3 - g_1 g_2 - g_1 g_3 - g_2 g_3 + g_1 g_2 g_3)}$$</span>
<span class="math-container">$$p_3=\frac{g_3(2{g_1}^2{g_2}^2{g_3}^2 - 2{g_1}^2{g_2}^2{g_3} - 7{g_1}^2g_2{g_3}^2 + 10{g_1}^2g_2g_3 - 3{g_1}^2g_2 + 3{g_1}^2{g_3}^2 - 3{g_1}^2g_3 - 7g_1{g_2}^2{g_3}^2 + 8g_1{g_2}^2g_3 - 3g_1{g_2}^2 + 24g_1g_2{g_3}^2 - 33g_1g_2g_3 + 12g_1g_2 - 12g_1{g_3}^2 + 12g_1g_3 + 3{g_2}^2{g_3}^2 - 12g_2{g_3}^2 + 6g_2g_3 + 6{g_3}^2)}{6(g_1 + g_3 - g_1g_3)(g_2 + g_3 - g_2g_3)(g_1 + g_2 + g_3 - g_1g_2 - g_1g_3 - g_2g_3 + g_1g_2g_3)}$$</span></p>
<p>For an intuitive grasp of these probabilities, We can plot, under random simulations of the <span class="math-container">$g$</span>'s, when each <span class="math-container">$p_i$</span> is going to be the greatest. </p>
<p><a href="https://i.sstatic.net/p8JnT.jpg" rel="nofollow noreferrer"><img src="https://i.sstatic.net/p8JnT.jpg" alt="Different choices for the g space"></a> </p>
<p>Here green dots are where choosing gun 1 is best (i.e. <span class="math-container">$p_1\gt p_2,p_3$</span>); red dots mean gun 2 is best choice; blue dots mean gun 3 is best choice. Notice how gun 2 is best only under very restricted cases, the red dots being a small thin wedge between green and blue, and once <span class="math-container">$g_3\gt 0.4$</span> or so, gun 2 can never aspire to be a best choice. Gun 3 is best choice along the diagonal of the g-cube, where the difference between everyone is small. Best choices for gun 1 occupy the edge where difference between hitting probabilities is more extreme. </p>
<p>Can these integrands <span class="math-container">$p_1,p_2,p_3$</span> be used to solve for the exact result? I think in principle yes. But how would you do that? Say</p>
<p><span class="math-container">$$P_1=\int_0^1\int_0^{g_1}\int_0^{g_2}\frac{{g_1}^2(g_3-1)(3g_2+3g_3-2g_2g_3 - 6)}{6 (g_1 + g_3 - g_1 g_3) (g_1 + g_2 + g_3 - g_1 g_2 - g_1 g_3 - g_2 g_3 + g_1 g_2 g_3)
}\mathrm{d}{g_3}\,\mathrm{d}{g_2}\,\mathrm{d}{g_1}$$</span></p>
<p>Of course you can always do simulations to approximate <span class="math-container">$P_1,P_2,P_3$</span>. A simulation of 10 million trials sees <span class="math-container">$P_1,P_2,P_3$</span> converge beyond the third decimal place, with values <span class="math-container">$0.417,0.292,0.291$</span>. So it seems a better gun does give you a higher survival probability after all! Although the difference between gun 2 and gun 3 is negligible. </p>
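<p>For reference, such a simulation can be sketched as follows (my own sketch, not the code behind the figures above; it assumes the targeting strategy derived earlier, i.e. each shooter aims at the holder of the best gun other than his own, which amounts to "2 and 3 shoot 1, 1 shoots 2" while all three are alive):</p>

```python
import random

def game1_trial(rng):
    """One play of game 1; returns the index (0, 1, 2) of the surviving gun."""
    g = sorted((rng.random() for _ in range(3)), reverse=True)  # g[0]>g[1]>g[2]
    order = [0, 1, 2]
    rng.shuffle(order)              # firing order drawn by lots
    alive = [True, True, True]
    turn = 0
    while sum(alive) > 1:
        s = order[turn % 3]
        turn += 1
        if not alive[s]:
            continue
        # target: the best surviving gun other than the shooter's own
        t = min(i for i in range(3) if i != s and alive[i])
        if rng.random() < g[s]:
            alive[t] = False
    return alive.index(True)

rng = random.Random(4)
n = 30000
wins = [0, 0, 0]
for _ in range(n):
    wins[game1_trial(rng)] += 1
p = [w / n for w in wins]
print(p)  # estimated survival probabilities for guns 1, 2, 3
```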
<p>On the other hand, the above integrations seem elementary and evaluable by software. Yet step-by-step evaluation using software yielded complex numbers as results. I have absolutely no idea what went wrong.</p>
<hr>
<p>I list <span class="math-container">$p_1, p_2, p_3$</span> here below for anyone wanting to investigate further about the integrations to copy.</p>
<blockquote>
<p>p1=(g1^2*(g3 - 1)*(3*g2 + 3*g3 - 2*g2*g3 - 6))/(6*(g1 + g3 - g1*g3)*(g1 + g2 + g3 - g1*g2 - g1*g3 - g2*g3 + g1*g2*g3))</p>
<p>p2=(g2*(6*g2 + 6*g3 - 3*g1*g2 - 3*g1*g3 - 12*g2*g3 + 3*g2*g3^2 + 7*g1*g2*g3 - 2*g1*g2*g3^2))/(6*(g2 + g3 - g2*g3)*(g1 + g2 + g3 - g1*g2 - g1*g3 - g2*g3 + g1*g2*g3))</p>
<p>p3=(g3*(2*g1^2*g2^2*g3^2 - 2*g1^2*g2^2*g3 - 7*g1^2*g2*g3^2 + 10*g1^2*g2*g3 - 3*g1^2*g2 + 3*g1^2*g3^2 - 3*g1^2*g3 - 7*g1*g2^2*g3^2 + 8*g1*g2^2*g3 - 3*g1*g2^2 + 24*g1*g2*g3^2 - 33*g1*g2*g3 + 12*g1*g2 - 12*g1*g3^2 + 12*g1*g3 + 3*g2^2*g3^2 - 12*g2*g3^2 + 6*g2*g3 + 6*g3^2))/(6*(g1 + g3 - g1*g3)*(g2 + g3 - g2*g3)*(g1 + g2 + g3 - g1*g2 - g1*g3 - g2*g3 + g1*g2*g3))</p>
</blockquote>
|
probability | <p>My math class went over the original Monty Hall problem a few days ago, then looked at a related question where the number of doors was increased to five. There was a struggle to figure out what the answer to the problem is, and after coming back to it a few more times we're still a bit unclear.</p>
<p>In this extended problem, let's say you pick door A out of doors A, B, C, D and E. The host then opens one of the other doors to show it's empty and gives you the choice to stay or switch to one of the other remaining doors.<br/>
a) If you always stay with the door you picked, what is the probability of winning?<br/>
b) If you always switch to another door, what is the probability of winning?</p>
<p>Note that the host will open only one door. All the extended Monty Hall problems I found online had the host open all but one, so they weren't really helpful with this particular problem my class is working on.</p>
<p>I calculated that the chances are 1/4 regardless of whether you switch or not since the host opening only one empty door is not enough to truly affect the difference in win rates between staying and switching. Is that right?</p>
<p>EDIT: Sorry about the confusion from me not being clear enough. The problem I bring is indeed using the same basic principles as the original: the host will always open a door after you choose one to show it is empty, and then you are given the choice. The reason why I got to 1/4 is because I was looking at the situation by figuring out how many ways you can win/lose depending on where the prize is after the host opens an empty door as well as which door you switched to, which gave me 3/12 for every switch or 1/4 (or by putting it all together I got 12/48). We didn't get far enough into the lessons to learn more about calculating probability with conditions so I apologize if that was what led me to a false calculation. Thanks for the answers, everyone!</p>
| <p>Think about it this way:</p>
<p>You have five doors, and you choose one. You already know you had $\frac{1}{5}$ chance of being right. Now the presenter must open one of the other doors that he/she knows is empty. That means the probability that the one of the three doors left is a winner is $\frac{4}{5}$, while the probability that the door you've chosen is correct is still $\frac{1}{5}$.</p>
<p>So if you stick with your first choice, you have a $\frac{1}{5}$ chance of winning. Now, if you decide to change to one of the other three doors, you know that you'll have a $\frac{4}{5}$ chance of winning (meaning if you were allowed to say "I choose this group of three doors" but not specify any single one, you would win $4$ out of $5$ times).</p>
<p>But you still only get to pick ONE door, and you have $3$ to choose from, so if you are going to choose to pick from the remaining $3$ doors, which together have $\frac{4}{5}$ chance of winning, you'll have a $\frac{1}{3}$ chance of being right from the selection of these three (since it is equally likely that you'll choose any of these three). </p>
<p>But put all together, that means you have a $\frac{4}{5} \cdot \frac{1}{3} = \frac{4}{15}$ chance of winning by switching to ONE door from the remaining $3$.</p>
<p>Here is a picture:</p>
<p><img src="https://i.sstatic.net/KSCwq.png" alt="enter image description here"></p>
| <p>The computation is straightforward, though it is easy to get tangled up.</p>
<p>The chance of having chosen the correct door was one fifth when you chose it. The host's action would have been possible whether or not you have chosen the correct one. The probablity remains one fifth if you stick.</p>
<p>The probability that the prize is behind one of the other doors is therefore four fifths. There are three indistinguishable possibilities, each of which is correct with probability four fifteenths. The probability if you change is four fifteenths.</p>
|
probability | <p>Suppose $n$ gangsters are randomly positioned in a square room such that the positions of any three gangsters do not form an isosceles triangle.</p>
<p>At midnight, each gangster shoots the person that is nearest to him. (A person can get shot more than once but each person can only shoot one person)</p>
<p>How many people are expected to survive? (I.e. what is the expected value of the number of people who do not get shot?)</p>
<p>E.g. For one person, the expected value is 1. For two people, it is zero since they both get shot. For three, the value is 1 since they form the vertices of a scalene triangle. I'm just interested in what happens as $n \rightarrow \infty$. </p>
<p>Thanks for your help!</p>
| <h1>Summary:</h1>
<p>The expected number of survivors after a shootout given as $$\lim_{n\rightarrow\infty}\operatorname{E}[n]\approx 0.284051\ n;\quad \text{(Tao/Wu - see below)}$$ is, if not correct, almost certainly very close to being correct <em>(see update 2)</em>.</p>
<p>However, this is disputed by Finch in <strong>Mathematical constants</strong> <em>(again, see below for details).</em> The results from Finch are easily replicable in <em>Mathematica</em> or similar, but I was not able to replicate even the partial results in Tao/Wu's paper <em>(despite leaving out the absolute values of $\alpha$ and $\beta,$ which Finch points out as being incorrect - see below for futher details)</em>, leaving me unsure as to whether I am missing something in my <em>"translation"</em> of the problem into Finch's more modern notation. I should be most grateful if someone could illuminate me further in this matter.</p>
<h1>Original answer:</h1>
<p>Based on numerical tests, I would say the expected number of survivors for $n>3$ is $\approx n/3.5$</p>
<p>Trial example <code>test[20]</code> <em>(code below)</em>:</p>
<p><a href="https://i.sstatic.net/ZrjyU.gif" rel="nofollow noreferrer"><img src="https://i.sstatic.net/ZrjyU.gif" alt="enter image description here"></a></p>
<p><code>anim[20,8]</code>:</p>
<p><a href="https://i.sstatic.net/obzCV.gif" rel="nofollow noreferrer"><img src="https://i.sstatic.net/obzCV.gif" alt="enter image description here"></a></p>
<p>For $1000$ trials, $1\leq n\leq 40$ <code>est[40,10^3]</code>:</p>
<p><a href="https://i.sstatic.net/7MKO1.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/7MKO1.png" alt="enter image description here"></a></p>
<p><strong>Note</strong></p>
<p>Using <code>RandomReal</code> it is very unlikely that any two distances will be exactly equal, thereby fulfilling the <em>no isosceles triangle</em> requirement.</p>
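<p>For readers without <em>Mathematica</em>, a rough equivalent of the simulation can be sketched in Python (the <code>survivors</code> helper and all names here are illustrative only):</p>

```python
import random

def survivors(n):
    """One shootout: n gangsters at uniform random points in the unit square."""
    pts = [(random.random(), random.random()) for _ in range(n)]
    shot = set()
    for i, (x, y) in enumerate(pts):
        # each gangster shoots the person nearest to him
        nearest = min((j for j in range(n) if j != i),
                      key=lambda j: (pts[j][0] - x) ** 2 + (pts[j][1] - y) ** 2)
        shot.add(nearest)
    return n - len(shot)

random.seed(0)
n, trials = 30, 2000
frac = sum(survivors(n) for _ in range(trials)) / (trials * n)
print(frac)   # close to 1/3.52 = 0.284
```

<p>As with <code>RandomReal</code>, ties in distance have probability $0$ here, so the <em>no isosceles triangle</em> requirement is effectively satisfied.</p>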
<h1>Update 1</h1>
<p><strong>History of the problem</strong></p>
<p>Robert Abilock proposed in <a href="http://www.jstor.org/stable/2314274?seq=1#page_scan_tab_contents" rel="nofollow noreferrer"><strong>American Monthly</strong> The Rifle-Problem</a> <em>(R. Abilock; 1967)</em>, </p>
<blockquote>
<p>$n$ riflemen are distributed at random points on a plane. At a signal,
each one shoots at and kills his nearest neighbor. What is the expected
number of riflemen who are left alive?</p>
</blockquote>
<p>This was reposed as the <a href="http://nuweb9.neu.edu/fwu/wp-content/uploads/Wu111_JPA20_L299.pdf" rel="nofollow noreferrer">Vicious neighbor problem</a> <em>(R.Tao and F.Y.Wu; 1986)</em>, where the answer of $\approx 0.284051 n$ remaining riflemen <em>(or $\approx n/3.52049$)</em> was given as the solution in $2$ dimensions.</p>
<p>This agrees distinctly with tests of sample-size $10^5:$</p>
<p><a href="https://i.sstatic.net/Mv7Uw.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/Mv7Uw.png" alt="enter image description here"></a></p>
<pre><code>ListLinePlot[{const[#, 100000] & /@ Range@40}, GridLines -> {{}, {1/0.284051}}]
</code></pre>
<p>However, in <a href="http://www.people.fas.harvard.edu/~sfinch/csolve/nng.pdf" rel="nofollow noreferrer"><strong>Mathematical Constants</strong> Nearest-neighbor graphs</a> <em>(S.R.Finch; 2008)</em>, Finch states that</p>
<blockquote>
<p>In [<em>Vicious neighbor problem</em>], the absolute value signs in the definitions of $\varphi$ and $\psi$ were mistakenly omitted.)$\dots$</p>
<p>Given the discrepancy between our estimate $\dots$ and their estimate $\dots$,
it seems doubtful that their approximation $\beta(2) = 0.284051\dots$ is entirely correct.</p>
</blockquote>
<p>So the question (for the bounty) is then reduced to:</p>
<p><em>Has any progress been made since 2008 on the problem? In short, is Tao and Wu's calculation incorrect, and if so, is a more precise estimate of $\beta(2)$ known?</em></p>
<h1>Update 2</h1>
<p>I have also tested the problem in other regular polygons (circle, triangle, pentagon, etc.) for $10^5$ trials, $1\leq n \leq 30$, and it seems that the comment by @D.Thomine below is in agreement with the data gathered, in that the constant for any bounded $2$-dimensional region appears to be the same for large enough $n$, <em>i.e.</em>, independent of the global geometry of the domain:</p>
<p><a href="https://i.sstatic.net/WYgQv.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/WYgQv.png" alt="enter image description here"></a></p>
<p>while further simulations, using $2\cdot 10^6$ trials for $n=30$ and $n=100$ yielded the following results:</p>
<p><a href="https://i.sstatic.net/i2mJl.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/i2mJl.png" alt="enter image description here"></a></p>
<p>with the final averages after $2\cdot 10^6,$ compared to Tao/Wu's result, being:</p>
<p>\begin{align}
&n=30:&0.284090\dots\\
&n=100:&0.284066\dots\\
&\text{Tao/Wu:}&0.284051\dots\\
\end{align}</p>
<p>indicating that the Tao/Wu result of $\lim_{n\rightarrow\infty}\operatorname{E}[n]\approx 0.284051\ n$ is, if not correct, almost certainly very close to being correct.</p>
<h1>Upper and lower bounds</h1>
<p>Following up on @mathreadler's suggestion that it may be interesting to study the spread of data, I include the following as a minor contribution to the topic:</p>
<p>Since arrangements like this</p>
<p><a href="https://i.sstatic.net/hagEw.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/hagEw.png" alt="enter image description here"></a></p>
<p>are possible (and their circular counterparts, however unlikely through random point selection), clearly the lower bound for odd $n$ is $1$ and for even $n$ it is $0$ (since the points can be paired).</p>
<p>Finding an upper bound is less obvious though. Looking at <a href="http://mks.mff.cuni.cz/kalva/short/soln/sh00g7.html" rel="nofollow noreferrer">this sketch proof</a> for upper bound $n=10$ provided by @JohnSmith in the comments, it is easy to see that the upper bound is $7:$</p>
<p><a href="https://i.sstatic.net/NqDEF.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/NqDEF.png" alt="enter image description here"></a></p>
<p>and by employing the same method, upper bounds for larger $n$ can be constructed:</p>
<p><img src="https://i.sstatic.net/BerFA.png" width="300" height="250">
<img src="https://i.sstatic.net/GF56y.png" width="300" height="250">
<img src="https://i.sstatic.net/2Rak2.png" width="300" height="250">
<img src="https://i.sstatic.net/sk2X8.png" width="300" height="250">
<img src="https://i.sstatic.net/BmeaE.png" width="300" height="250">
<img src="https://i.sstatic.net/vTP2X.png" width="300" height="250"></p>
<p><a href="https://i.sstatic.net/ph5zU.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/ph5zU.png" alt="enter image description here"></a></p>
<p>Assuming one can repeat this process indefinitely, it is likely that an upper bound for $n\geq 6$ then is $n-\lfloor n/3\rfloor:$</p>
<p><a href="https://i.sstatic.net/uejwn.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/uejwn.png" alt="enter image description here"></a></p>
<p>which has been set against the data for $2\cdot 10^4$ trials <em>(red dots - see <code>data</code> below)</em>.</p>
<p>Regarding density of spread, (again with $2\cdot 10^4$ trials) produces the following plot:</p>
<p><a href="https://i.sstatic.net/RhBMk.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/RhBMk.png" alt="enter image description here"></a></p>
<pre><code>ListPlot3D[Flatten[data, 1], ColorFunction -> "LakeColors"]
</code></pre>
<p><em>(courtesy of @AlexeiBoulbitch <a href="https://mathematica.stackexchange.com/a/96492/9923">here</a>)</em>, and regarding max. density of spread along $x/z$ axes from above plot, produces the following:</p>
<p><a href="https://i.sstatic.net/eFdmR.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/eFdmR.png" alt="enter image description here"></a></p>
<pre><code>With[{c = 0.284051},
Show[ListLinePlot[Max@#[[All, 3]] & /@ data, PlotRange -> All],
Plot[{(1 + c)/(n - (1 + c)^2)^(1/2)}, {n, 0, 100}, PlotRange -> All,
PlotStyle -> {Dashed, Red}]]]
</code></pre>
<p>It is tempting to conjecture max height of distribution to be $\approx (c+1)/\sqrt{n-(c+1)^2},$ but of course this is largely empirical.</p>
<hr>
<pre><code>test[nn_] := With[{aa = Partition[RandomReal[{0, 1}, 2 nn], 2]},
With[{cc = ({aa[[#]], First@Nearest[DeleteCases[aa, aa[[#]]], aa[[#]]]}
& /@ Range@nn)},
With[{dd = Table[Position[aa, cc[[p, 2]]][[1, 1]], {p, nn}]},
With[{ee = Complement[Range@nn, dd]},
Column[{StringJoin[ToString["Expected: "], ToString[nn/3.5]],
StringJoin[ToString["Survivors: "], ToString[Length@ee], ToString[": "],
ToString[ee]], Show[Graphics[{Gray, Line@# & /@ cc}, Frame -> True,
PlotRange -> {{0, 1}, {0, 1}}, Epilog -> {Text[Style[(Range@nn)[[#]],
30/Floor@Log@nn], aa[[#]]] & /@ Range@nn}], ImageSize -> 300]}]]]]]
est[mm_, trials_] := ListLinePlot@({Quiet@With[{nn = #},
(N@Total@(With[{aa = Partition[RandomReal[{0, 1}, 2 nn], 2]},
With[{cc = ({aa[[#]], First@Nearest[DeleteCases[aa, aa[[#]]],
aa[[#]]]} & /@ Range@nn)},
With[{dd = Table[Position[aa, cc[[p, 2]]][[1, 1]], {p, nn}]},
With[{ee = Complement[Range@nn, dd]},Length@ee]]]]
& /@ Range@trials)/trials)] & /@ Range@mm, Range@mm/3.5})
anim[nn_, range_] := ListAnimate[test@nn & /@ Range@range,
ControlPlacement -> Top, DefaultDuration -> nn]
const[mm_, trials_] := With[{ans = Quiet@With[{nn = #},
SetPrecision[(Total@(With[{aa = Partition[RandomReal[{0, 1}, 2 nn], 2]},
With[{cc = ({aa[[#]],First@Nearest[DeleteCases[aa, aa[[#]]],
aa[[#]]]} & /@ Range@nn)},
With[{dd = Table[Position[aa, cc[[p, 2]]][[1, 1]], {p, nn}]},
With[{ee = Complement[Range@nn, dd]},
Length@ee]]]] & /@ Range@trials)/trials), 20]] &@ mm}, mm/ans]
act[nn_, trials_] := With[{aa = Partition[RandomReal[{0, 1}, 2 nn], 2]},
With[{cc = ({aa[[#]], First@Nearest[DeleteCases[aa, aa[[#]]], aa[[#]]]} & /@
Range@nn)}, With[{dd = Table[Position[aa, cc[[p, 2]]][[1, 1]], {p, nn}]},
With[{ee = Complement[Range@nn, dd]}, Length@ee]]]] & /@ Range@trials
data = Quiet@ Table[With[{tt = 2*10^4},
With[{aa = act[nn, tt]}, With[{bb = Sort@DeleteDuplicates@aa},
Transpose@{ConstantArray[nn, Length@bb], bb, (Length@# & /@
Split@Sort@aa)/tt}]]], {nn, 1, 100}];
</code></pre>
| <p>${\bf Update~Oct~19}$: Added some analysis on the $D\to\infty$ limit of $\frac{E[n]}{n}$ for the original game and to the slightly modified case where the geometry is that of a torus.</p>
<hr>
<p>Martin has done a great job in exploring the problem and, as mentioned in his answer, there seems to be some disagreement in the literature about the exact value of $\lim\limits_{n\to\infty}\frac{E[n]}{n}$ where $\frac{E[n]}{n}$ is the fraction of survivors in a game with $n$ players. I will here try to nail down this limit to $4-5$ decimal places for the first few values of $D$ - the dimension the game is played in.</p>
<hr>
<p><strong>Numerical algorithm</strong></p>
<p>The bottleneck in the numerical calculation is finding the closest neighbor of any given player. A brute-force search scales as $O(n^2D)$ which on a single CPU becomes way too slow for $n\gtrsim 10^2-10^3$. To improve on this I added a uniform grid with $M^D$ gridcells covering $[0,1]^D$ to the simulation. $M$ is chosen such that $n_{\rm per~cell} = n/M^D \sim $ a few. Players are added to the cells and when we search we normally only need to go through the neighboring $3^D$ cells to find the closest neighbor to any given player. This brings the number of operations down to $O(nD3^D)$ allowing us to go to much larger $n$ than with brute-force search. The code I used to do this, written in c++, <a href="https://folk.uio.no/hansw/riflemangame.cpp" rel="nofollow noreferrer">can be found here</a>. The code is only suitable to explore relatively low values of $D \lesssim 5$ as the grid needed to speed up the calculation becomes too memory expensive for large $D$ (plus the algorithm scales exponentially with $D$).</p>
<hr>
<p><strong>Results</strong></p>
<p>In the figure below one can see the cumulative average of $\frac{E[n]}{n}$ as function of the number of samples for $\{D=1,~n=10^3\}$ (left) and $\{D=2,~n=10^4\}$ (right).</p>
<p><img src="https://folk.uio.no/hansw/onerun2.png" width="300" height="300">
<img src="https://folk.uio.no/hansw/onerun.png" width="300" height="300"></p>
<p>In the figure below I show the evolution of $\frac{E[n]}{n}$ as a function of $n$ for different values of $D$. For each $n$ I performed $N$ samples (varying from $10^2-10^8$ depending on $D$ and $n$) until the desired accuracy was reached. The error bars show $3\hat{\sigma}$ ($99.7\%$ confidence) of the standard error $\hat{\sigma} = \frac{\sigma}{\sqrt{N}}$ where $\sigma^2 = \frac{1}{N}\sum_{i=1}^N(f_i-\overline{f})^2$ and $f_i$ is the fraction of survivors in one single run and $\overline{f} = \frac{1}{N}\sum_{i=1}^Nf_i$ is the cumulative mean. To be able to show all in one plot I have subtracted $f$ (the value given in the table below).</p>
<p>$~~~~~~~~$<img src="https://folk.uio.no/hansw/simres1.png" alt=""></p>
<p>This gives me the following result:</p>
<p>\begin{array}{c|c}
D & f=\lim\limits_{n\to\infty}\frac{E[n]}{n} \\ \hline
1 & 0.25000 \pm 10^{-5} \\ \hline
2 & 0.28418 \pm 10^{-5}\\ \hline
3 & 0.30369 \pm 10^{-5}\\ \hline
4 & 0.3170 \pm 10^{-4}\\
\end{array}</p>
<p>The quoted error is the $99.7$% confidence statistical error plus the estimated error in the evolution with $n$ (only relevant for $D=4$ as we have convergence to within the statistical error for lower $D$). Note that for $D=1$ we have the analytical result $f=\frac{1}{4} = 0.25$ which serves as a test of our numerical analysis.</p>
<p><strong>The large $D$ limit</strong></p>
<p>I also did some simulations for low values of $n\lesssim 10^3$ looking at the evolution of $\frac{E[n]}{n}$ with $D$ - the dimension the game is played in. For these calculations I just used a brute-force neighbor-finding algorithm. </p>
<p>I considered both a closed box and a box with periodic boundary conditions (i.e. $x_i=0$ is the same point as $x_i=1$). The latter situation is equivalent to doing the game on a torus. The results are seen below</p>
<p>$~~~~~~~~~~~~~~$<img src="https://folk.uio.no/hansw/devo.png" alt=""></p>
<p>For low values of $D$ the results between the two geometries are pretty similar, but we start to see some big differences for large values of $D$ (and $n$). The reason the boundary effects become more and more important for large $D$ can be understood by considering a sphere at the center of our box (with radius $1/2$). As we increase $D$ we find that the ratio of the volume of the sphere to the total volume of the box goes to zero, so (loosely speaking) most of the volume of the box is in the corners. A player in a corner is less likely to get shot than a person close to the center, thus a common situation for large $D$ is that we have many players in different corners shooting players close to the center, giving us a high survival percentage.</p>
<p>For the torus geometry a person in the corner is just as likely to get shot as anybody else and if $A$ is $B$'s closest neighbor then $B$ is $A$'s closest neighbor with probability $1/2$ as $D\to\infty$ (see <a href="https://math.stackexchange.com/a/271604/147873">this related question</a>) which implies that $$\lim\limits_{D\to\infty}\frac{E[n]}{n} = \left(1-\frac{1}{n}\right)^{n-1}$$ which converges to $\frac{1}{e}$ for $n\to\infty$.</p>
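<p>As a quick numeric sanity check, the limiting fraction $\left(1-\frac{1}{n}\right)^{n-1}$ does indeed decrease towards $\frac{1}{e}\approx 0.36788$:</p>

```python
import math

# survival fraction in the D -> infinity torus limit: (1 - 1/n)^(n-1)
f = lambda n: (1 - 1 / n) ** (n - 1)
print(f(10), f(100), f(10 ** 6), 1 / math.e)
```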
|
matrices | <p>It may be the dumbest question ever asked on math.SE, but...</p>
<hr>
<p>Given a real matrix $\mathbf A\in\mathbb R^{m\times n}$, the <a href="http://en.wikipedia.org/wiki/Column_space">column space</a> is defined as
$$C(\mathbf A) = \{\mathbf A \mathbf x : \mathbf x \in \mathbb{R}^n\} \subseteq \mathbb R^m.$$</p>
<p>It is sometimes called <em>image</em> or <em>range</em>.</p>
<ul>
<li>I'm OK with the name 'column space' because $C(\mathbf A)$ is the set of all possible <a href="http://en.wikipedia.org/wiki/Linear_combination">linear combinations</a> of $\mathbf A$'s column vectors.</li>
<li>I'm OK with the name 'image' because if I consider $\mathbf A \mathbf x$ as a <a href="http://en.wikipedia.org/wiki/Function_%28mathematics%29">function</a> then $C(\mathbf A)$ is this function's <a href="http://en.wikipedia.org/wiki/Image_%28mathematics%29">image</a> (the subset of a function's <a href="http://en.wikipedia.org/wiki/Codomain">codomain</a>).</li>
<li>I'm OK with the name 'range' because I can consider $C(\mathbf A)$ as a <a href="http://en.wikipedia.org/wiki/Range_%28mathematics%29">range</a> of a function $f(\mathbf x) = \mathbf A \mathbf x$.</li>
</ul>
<p>Unfortunately, I'm not happy with the name <a href="http://en.wikipedia.org/wiki/Kernel_%28matrix%29">kernel</a>.
$$\ker(\mathbf A) = \{\mathbf x: \mathbf A\mathbf x = \mathbf 0\}\subseteq \mathbb R^n$$</p>
<p>The kernel is sometimes called <em>null space</em> and I can fairly understand where this name came from -- it's because this set contains all the elements in $\mathbb R^n$ that are mapped to zero by $\mathbf A$.</p>
<p>Then why is it called 'kernel'? Any historic background or colloquial meaning that I completely missed?</p>
| <p>The word <em>kernel</em> means “seed,” “core” in nontechnical language (etymologically: it's the diminutive of <em>corn</em>). If you imagine it geometrically, the origin is the center, sort of, of a Euclidean space. It can be conceived of as the <em>kernel</em> of the space. You can rationalize the nomenclature by saying that the kernel of a matrix consists of those vectors of the domain space that are mapped into the center (<em>i.e.</em>, the origin) of the range space.</p>
<p>I think a somewhat analogous rationale might motivate the designation <a href="http://en.wikipedia.org/wiki/Core_%28game_theory%29">“core”</a> in cooperative game theory: It denotes a particular set that is of central interest. (In this case, it denotes—loosely speaking—the set of such allocations among a given number of persons that cannot be overturned by collusion among some of them. This property lends the core a sense of stability and equilibrium, which is why it is so interesting.)</p>
| <p>The imagery is consistent with inhomogeneous equations $Ax = b$ where the degrees of freedom in the answer are those of $Ax = 0$ and the latter could be seen as the invariant core of the problem separate from the particularities of different $b$ (for some values there are solutions, for others there can be no solutions).</p>
<p>Whether this really was the historical origin I cannot say. Of course it makes sense for group homomorphisms.</p>
|
probability | <p>I saw this problem yesterday on <a href="https://www.reddit.com/r/mathriddles/comments/3sa4ci/colliding_bullets/" rel="noreferrer">reddit</a> and I can't come up with a reasonable way to work it out.</p>
<hr />
<blockquote>
<p>Once per second, a bullet is fired starting from <span class="math-container">$x=0$</span> with a uniformly random speed in <span class="math-container">$[0,1]$</span>. If two bullets collide, they both disappear. If we fire <span class="math-container">$N$</span> bullets, what is the probability that at least one bullet escapes to infinity? What if we fire an infinite number of bullets?</p>
</blockquote>
<p><strong>Attempt.</strong></p>
<ul>
<li><p>If <span class="math-container">$N$</span> is two, then it's equal to the probability that the first bullet is faster than the second, which is <span class="math-container">$\dfrac{1}{2}$</span>.</p>
</li>
<li><p>If <span class="math-container">$N$</span> is odd, the probability of three bullets or more colliding in the same spot is <span class="math-container">$0$</span>, so we can safely ignore this event. And since collisions destroy two bullets then there will be an odd number of bullets at the end. So at least one escapes to infinity.</p>
</li>
</ul>
<p>For infinite bullets, I suspect that no single bullet will escape to infinity, but that they'll reach any arbitrarily big number. Although, I'm not sure on how I'd begin proving it.</p>
<p>Is there a closed form solution for even <span class="math-container">$N$</span>? Or some sort of asymptotic behavior?</p>
| <p>Among the <span class="math-container">$N$</span> bullets fired at times <span class="math-container">$0,1,2,\ldots,N-1$</span> (with <span class="math-container">$N$</span> even), let us call <span class="math-container">$B_{\max, N}$</span> the one with the highest velocity. We have two cases:</p>
<ul>
<li><p>if <span class="math-container">$B_{\max, N}$</span> is the first bullet that has been fired, then it will escape to infinity: the probability of this event is <span class="math-container">$\dfrac{1}{N}$</span>;</p>
</li>
<li><p>if <span class="math-container">$B_{\max, N}$</span> is any one of the other bullets (probability <span class="math-container">$\dfrac{N-1}{N}$</span>), we can consider the pair of consecutive bullets <span class="math-container">$B_k$</span> and <span class="math-container">$B_{k+1}$</span> where the expected time to collision is the shortest (with <span class="math-container">$B_{k+1}$</span>, fired at time <span class="math-container">$k+1$</span>, faster than <span class="math-container">$B_{k}$</span>, fired at time <span class="math-container">$k$</span>). This pair will be the first to collide.</p>
</li>
</ul>
<p>Now let us cancel out the two bullets that have collided, so that there remain <span class="math-container">$N-2$</span> bullets. Applying the same considerations explained above, we can call <span class="math-container">$B_{\max, N-2}$</span> the bullet with the highest velocity among these remaining <span class="math-container">$N-2$</span> ones (note that <span class="math-container">$B_{\max, N}$</span> and <span class="math-container">$B_{\max, N-2}$</span> can be the same bullet, since the first pair that collides does not necessary include <span class="math-container">$B_{\max, N}$</span>). Again, we have two cases:</p>
<ul>
<li><p>if <span class="math-container">$B_{\max, N-2}$</span> is the first bullet that has been fired among the <span class="math-container">$N-2$</span> ones that we are now considering, then it will escape to infinity: assuming uniformity, the probability of this event is <span class="math-container">$\dfrac{1}{N-2}$</span>;</p>
</li>
<li><p>if <span class="math-container">$B_{\max, N-2}$</span> is any one of the other bullets, then it will not escape to infinity, since it will necessarily collide with a bullet that, among these <span class="math-container">$N-2$</span> ones, has been fired before it: the probability of this event is <span class="math-container">$\dfrac{N-3}{N-2}$</span>. Again we can now consider the pair of consecutive bullets <span class="math-container">$B_j$</span> and <span class="math-container">$B_{j+1}$</span> (with <span class="math-container">$B_{j+1}$</span> faster than <span class="math-container">$B_{j}$</span>) where the expected time to collision is the shortest. This pair will be the second to collide.</p>
</li>
</ul>
<p>Continuing in this way we get that the probability that none of the bullets escape to infinity is</p>
<p><span class="math-container">$$\frac{(N-1)!!}{N!!}$$</span></p>
<p>that is equivalent to</p>
<p><span class="math-container">$$\frac{[(N-1)!!]^2}{N!}$$</span></p>
<p>EDIT: This answer assumes that, in any step, the probability that <span class="math-container">$B_{\max, N-2k}$</span> is the first fired bullet among the remaining <span class="math-container">$N-2k$</span> ones is <span class="math-container">$\dfrac{1}{N-2k}$</span>. This uniformity assumption remains to be demonstrated.</p>
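<p>The claimed value can also be checked by direct simulation. The Python sketch below (all names illustrative) resolves the game by repeatedly removing the pair of live bullets whose trajectories cross earliest; since the bullets are points on a line, any crossing of two trajectories while both bullets are in flight is a collision, so taking the globally earliest crossing each time is faithful to the problem:</p>

```python
import random

def survivors(speeds):
    """Bullet i is fired at time i with speed speeds[i]; resolve collisions by
    repeatedly removing the live pair whose trajectories cross earliest."""
    alive = list(enumerate(speeds))               # (firing time, speed)
    while True:
        best = None                               # (crossing time, a, b)
        for a in range(len(alive)):
            for b in range(a + 1, len(alive)):
                ti, vi = alive[a]
                tj, vj = alive[b]
                if vj > vi:                       # the later bullet catches up
                    t = (vj * tj - vi * ti) / (vj - vi)
                    if t >= tj and (best is None or t < best[0]):
                        best = (t, a, b)
        if best is None:
            return len(alive)                     # no more collisions: all escape
        alive = [p for k, p in enumerate(alive) if k not in best[1:]]

random.seed(0)
N, trials = 4, 20_000
p = sum(survivors([random.random() for _ in range(N)]) == 0
        for _ in range(trials)) / trials
print(p)   # compare with (N-1)!!/N!! = 3/8 = 0.375 for N = 4
```

<p>For $N=4$ the estimate lands close to $3/8$, in agreement with the formula above.</p>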
| <p>I did some experiment with Mathematica for even $N$ and found some regularities, but I don't have an explanation for them at the moment.</p>
<p>1) If we fix $N$ different speeds for the bullets and count how many bullets escape to infinity for all the $N!$ permutations of those speeds among bullets, then the result doesn't seem to depend on the chosen speeds. In particular, I consistently found that only in $\bigl((N-1)!!\bigr)^2$ cases out of $N!$ no bullet reaches infinity (checked for $N$ equal to 4, 6, 8, 10).</p>
<p>2) Running a simulation (1,000,000 trials) the probability that no bullet reaches infinity is consistent with the above result, i.e. $\bigl((N-1)!!\bigr)^2/N!$, for $N$ equal to 4, 6, 8.</p>
|
differentiation | <p>I'm going through the MIT lecture on implicit differentiation, and the first two steps are shown below, taking the derivative of both sides:</p>
<p>$$x^2 + y^2 = 1$$
$$\frac{d}{dx} x^2 + \frac{d}{dx} y^2 = \frac{d}{dx} 1$$
$$2x + \frac{d}{dx}y^2 = 0$$</p>
<p>That makes some sense, but what about this example:</p>
<p>$$x = 5$$
$$\frac{d}{dx} x = \frac{d}{dx} 5$$
$$1 = 0$$</p>
<p>Why is the first example correct, while the second is obviously wrong?</p>
| <p>The first of your identities makes some implicit assumptions: it should be read as
<span class="math-container">$$
x^2+f(x)^2=1
$$</span>
where <span class="math-container">$f$</span> is some (as yet undetermined) function. If we <em>assume</em> <span class="math-container">$f$</span> to be differentiable, then we can differentiate both sides:
<span class="math-container">$$
2x+2f(x)f'(x)=0
$$</span>
because the assumption is that the function <span class="math-container">$g$</span> defined by <span class="math-container">$g(x)=x^2+f(x)^2$</span> is constant.</p>
<p>From this we can derive
<span class="math-container">$$
f'(x)=-\frac{x}{f(x)}
$$</span>
at least in the points where <span class="math-container">$f(x)\ne0$</span>, which excludes <span class="math-container">$x=1$</span> and <span class="math-container">$x=-1$</span> from the domain where <span class="math-container">$f$</span> is differentiable.</p>
<p>Thus what you get is that <em>assuming</em> <span class="math-container">$f$</span> exists and is differentiable, then, for <span class="math-container">$x\ne1$</span> and <span class="math-container">$x\ne -1$</span>, <span class="math-container">$f'$</span> satisfies the above relation.</p>
<p>Why is the relation written in that way? The answer is that often we're given a <em>locus</em> defined by some equation in two variables: it's the set of points <span class="math-container">$(x,y)$</span> such that <span class="math-container">$h(x,y)=0$</span> and we try finding an <em>explicit form</em> for the locus, that is a relation <span class="math-container">$y=f(x)$</span> or <span class="math-container">$x=g(y)$</span> , so that
<span class="math-container">$$
h(x,f(x))=0\qquad\text{ or }\qquad h(g(y),y)=0
$$</span>
holds for <span class="math-container">$x$</span> in a suitable neighborhood of <span class="math-container">$x_0$</span> or <span class="math-container">$y$</span> in a suitable neighborhood of <span class="math-container">$y_0$</span> where <span class="math-container">$(x_0,y_0)$</span> belongs to the locus.</p>
<p>Take for example the folium Cartesii <span class="math-container">$x^3+y^3-3xy=0$</span>.</p>
<p><a href="https://i.sstatic.net/4fWCB.jpg" rel="nofollow noreferrer"><img src="https://i.sstatic.net/4fWCB.jpg" alt="folium Cartesii" /></a></p>
<p>If we differentiate with respect to <span class="math-container">$x$</span>, we get
<span class="math-container">$$
3x^2+3y^2y'-3y-3xy'=0
$$</span>
which gives
<span class="math-container">$$
y'=\frac{y-x^2}{y^2-x}
$$</span>
We're able to find where the derivative is zero by setting <span class="math-container">$y=x^2$</span> and plugging in the original equation
<span class="math-container">$$
x^3+x^6-3x^3=0
$$</span>
that is <span class="math-container">$x=0$</span> (which can't be used) or <span class="math-container">$x^3=2$</span>, without even knowing the “explicit form“ <span class="math-container">$y=f(x)$</span>.</p>
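<p>This horizontal-tangent computation can be checked numerically without ever producing an explicit form <span class="math-container">$y=f(x)$</span>: the sketch below (helper names are illustrative) follows the branch of the curve through <span class="math-container">$(2^{1/3}, 2^{2/3})$</span> with a Newton iteration and estimates the slope with a central difference:</p>

```python
def F(x, y):
    """The folium of Descartes, F(x, y) = x^3 + y^3 - 3xy."""
    return x ** 3 + y ** 3 - 3 * x * y

def y_on_curve(x, y_guess):
    """Newton iteration in y for the branch of the curve through y_guess."""
    y = y_guess
    for _ in range(50):
        y -= F(x, y) / (3 * y ** 2 - 3 * x)   # dF/dy = 3y^2 - 3x
    return y

x0 = 2 ** (1 / 3)
y0 = x0 ** 2                      # F(x0, y0) = 2 + 4 - 6 = 0
h = 1e-6
slope = (y_on_curve(x0 + h, y0) - y_on_curve(x0 - h, y0)) / (2 * h)
print(F(x0, y0), slope)           # both essentially zero
```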
| <p>You wrote "<span class="math-container">$x = 5$</span>"; what does that tell us about <span class="math-container">$x$</span>? Just that <span class="math-container">$x$</span> <em>equals</em> 5. So in differentiating both sides you must keep that in mind. In other words, <span class="math-container">$x$</span> is constant and 5 is constant.</p>
<p>Also, then you can't do</p>
<p><span class="math-container">$${d \over dx} x = {d \over dx} 5, \tag{1}$$</span></p>
<p>since that's equivalent to</p>
<p><span class="math-container">$${d \over dx} x = {d \over d5} 5, \tag{2}$$</span></p>
<p>which already has been pointed out is meaningless.</p>
<p>Though you can do</p>
<p><span class="math-container">$${d \over dy} x = {d \over dy} 5 \Leftrightarrow 0 =0;\tag{3}$$</span></p>
<p>here <span class="math-container">$y$</span> is an independent variable over the real numbers.</p>
|
linear-algebra | <p>Multiplication of matrices — taking the dot product of the $i$th row of the first matrix and the $j$th column of the second to yield the $ij$th entry of the product — is not a very intuitive operation: if you were to ask someone how to mutliply two matrices, he probably would not think of that method. Of course, it turns out to be very useful: matrix multiplication is precisely the operation that represents composition of transformations. But it's not intuitive. <strong>So my question is where it came from. Who thought of multiplying matrices in that way, and why?</strong> (Was it perhaps multiplication of a matrix and a vector first? If so, who thought of multiplying <em>them</em> in that way, and why?) My question is intact no matter whether matrix multiplication was done this way only after it was used as representation of composition of transformations, or whether, on the contrary, matrix multiplication came first. (Again, I'm not asking about the <em>utility</em> of multiplying matrices as we do: this is clear to me. I'm asking a question about history.)</p>
| <p>Matrix multiplication is a symbolic way of substituting one linear change of variables into another one. If $x' = ax + by$ and $y' = cx+dy$, and
$x'' = a'x' + b'y'$ and $y'' = c'x' + d'y'$ then we can plug the first pair of formulas into the second to express $x''$ and $y''$ in terms of $x$ and $y$:
$$
x'' = a'x' + b'y' = a'(ax + by) + b'(cx+dy) = (a'a + b'c)x + (a'b + b'd)y
$$
and
$$
y'' = c'x' + d'y' = c'(ax+by) + d'(cx+dy) = (c'a+d'c)x + (c'b+d'd)y.
$$
It can be tedious to keep writing the variables, so we use arrays to track the coefficients, with the formulas for $x'$ and $x''$ on the first row and for $y'$ and $y''$ on the second row. The above two linear substitutions coincide with the matrix product
$$
\left(
\begin{array}{cc}
a'&b'\\c'&d'
\end{array}
\right)
\left(
\begin{array}{cc}
a&b\\c&d
\end{array}
\right)
=
\left(
\begin{array}{cc}
a'a+b'c&a'b+b'd\\c'a+d'c&c'b+d'd
\end{array}
\right).
$$
So matrix multiplication is just a <em>bookkeeping</em> device for systems of linear substitutions plugged into one another (order matters). The formulas are not intuitive, but it's nothing other than the simple idea of combining two linear changes of variables in succession.</p>
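<p>One can spot-check this bookkeeping numerically: substituting one random linear change of variables into another gives exactly the coefficients of the matrix product (a small Python sketch, with <code>ap</code>, <code>bp</code>, etc. standing in for $a'$, $b'$, etc.):</p>

```python
import random

random.seed(0)
a, b, c, d = (random.random() for _ in range(4))      # first substitution
ap, bp, cp, dp = (random.random() for _ in range(4))  # second substitution
x, y = random.random(), random.random()

# substitute step by step: (x, y) -> (x', y') -> (x'', y'')
xp, yp = a * x + b * y, c * x + d * y
xpp, ypp = ap * xp + bp * yp, cp * xp + dp * yp

# the matrix product, with entries exactly as in the display above
M = [[ap * a + bp * c, ap * b + bp * d],
     [cp * a + dp * c, cp * b + dp * d]]
print(xpp - (M[0][0] * x + M[0][1] * y))   # zero up to rounding
print(ypp - (M[1][0] * x + M[1][1] * y))
```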
<p>Matrix multiplication was first defined explicitly in print by Cayley in 1858, in order to reflect the effect of composition of linear transformations. See paragraph 3 at <a href="https://web.archive.org/web/20120910034016/http://darkwing.uoregon.edu/~vitulli/441.sp04/LinAlgHistory.html">http://darkwing.uoregon.edu/~vitulli/441.sp04/LinAlgHistory.html</a>. However, the idea of tracking what happens to coefficients when one linear change of variables is substituted into another (which we view as matrix multiplication) goes back further. For instance, the work of number theorists in the early 19th century on binary quadratic forms $ax^2 + bxy + cy^2$ was full of linear changes of variables plugged into each other (especially linear changes of variable that we would recognize as coming from ${\rm SL}_2({\mathbf Z})$). For more on the background, see the paper by Thomas Hawkins on matrix theory in the 1974 ICM. Google "ICM 1974 Thomas Hawkins" and you'll find his paper among the top 3 hits.</p>
| <p>Here is an answer directly reflecting the historical perspective from the paper <em>Memoir on the theory of matrices</em> by Arthur Cayley, 1858. This paper is available <a href="https://ia600701.us.archive.org/20/items/philtrans05474612/05474612.pdf" rel="nofollow noreferrer">here</a>.</p>
<p>This paper is credited with "containing the first abstract definition of a matrix" and "a matrix algebra defining addition, multiplication, scalar multiplication and inverses" (<a href="http://www-history.mcs.st-and.ac.uk/history/HistTopics/Matrices_and_determinants.html" rel="nofollow noreferrer">source</a>).</p>
<p>In this paper a nonstandard notation is used. I will do my best to place it in a more "modern" (but still nonstandard) notation. The bulk of the contents of this post will come from pages 20-21.</p>
<p>To introduce notation, <span class="math-container">$$ (X,Y,Z)= \left( \begin{array}{ccc}
a & b & c \\
a' & b' & c' \\
a'' & b'' & c'' \end{array} \right)(x,y,z)$$</span></p>
<p>will represent the set of linear functions <span class="math-container">$(ax + by + cz, a'x + b'y + c'z, a''x + b''y + c''z)$</span> which are then called <span class="math-container">$(X,Y,Z)$</span>.</p>
<p>Cayley defines addition and scalar multiplication and then moves to matrix multiplication or "composition". He specifically wants to deal with the issue of:</p>
<p><span class="math-container">$$(X,Y,Z)= \left( \begin{array}{ccc}
a & b & c \\
a' & b' & c' \\
a'' & b'' & c'' \end{array} \right)(x, y, z) \quad \text{where} \quad (x, y, z)= \left( \begin{array}{ccc}
\alpha & \beta & \gamma \\
\alpha' & \beta' & \gamma' \\
\alpha'' & \beta'' & \gamma'' \\ \end{array} \right)(\xi,\eta,\zeta)$$</span></p>
<p>He now wants to represent <span class="math-container">$(X,Y,Z)$</span> in terms of <span class="math-container">$(\xi,\eta,\zeta)$</span>. He does this by creating another matrix that satisfies the equation:</p>
<p><span class="math-container">$$(X,Y,Z)= \left( \begin{array}{ccc}
A & B & C \\
A' & B' & C' \\
A'' & B'' & C'' \\ \end{array} \right)(\xi,\eta,\zeta)$$</span></p>
<p>He continues to write that the value we obtain is:</p>
<p><span class="math-container">$$\begin{align}\left( \begin{array}{ccc}
A & B & C \\
A' & B' & C' \\
A'' & B'' & C'' \\ \end{array} \right) &= \left( \begin{array}{ccc}
a & b & c \\
a' & b' & c' \\
a'' & b'' & c'' \end{array} \right)\left( \begin{array}{ccc}
\alpha & \beta & \gamma \\
\alpha' & \beta' & \gamma' \\
\alpha'' & \beta'' & \gamma'' \\ \end{array} \right)\\[.25cm] &= \left( \begin{array}{ccc}
a\alpha+b\alpha' + c\alpha'' & a\beta+b\beta' + c\beta'' & a\gamma+b\gamma' + c\gamma'' \\
a'\alpha+b'\alpha' + c'\alpha'' & a'\beta+b'\beta' + c'\beta'' & a'\gamma+b'\gamma' + c'\gamma'' \\
a''\alpha+b''\alpha' + c''\alpha'' & a''\beta+b''\beta' + c''\beta'' & a''\gamma+b''\gamma' + c''\gamma''\end{array} \right)\end{align}$$</span></p>
<p>This is the standard definition of matrix multiplication. I must believe that matrix multiplication was defined to deal with this specific problem. The paper continues to mention several properties of matrix multiplication such as non-commutativity, composition with unity and zero and exponentiation.</p>
<p>Here is the written rule of composition:</p>
<blockquote>
<p>Any line of the compound matrix is obtained by combining the corresponding line of the first component matrix successively with the several columns of the second matrix (p. 21)</p>
</blockquote>
|
linear-algebra | <p>If I have a covariance matrix for a data set and I multiply it by one of its eigenvectors, say the eigenvector with the highest eigenvalue, the result is the eigenvector or a scaled version of the eigenvector. </p>
<p>What does this really tell me? Why is this the principal component? What property makes it a principal component? Geometrically, I understand that the principal component (eigenvector) will be sloped at the general slope of the data (loosely speaking). Again, can someone help understand why this happens? </p>
| <p><strong>Short answer:</strong> The eigenvector with the largest eigenvalue is the direction along which the data set has the maximum variance. Meditate upon this.</p>
<p><strong>Long answer:</strong> Let's say you want to reduce the dimensionality of your data set, say down to just one dimension. In general, this means picking a unit vector <span class="math-container">$u$</span>, and replacing each data point, <span class="math-container">$x_i$</span>, with its projection along this vector, <span class="math-container">$u^T x_i$</span>. Of course, you should choose <span class="math-container">$u$</span> so that you retain as much of the variation of the data points as possible: if your data points lay along a line and you picked <span class="math-container">$u$</span> orthogonal to that line, all the data points would project onto the same value, and you would lose almost all the information in the data set! So you would like to maximize the <em>variance</em> of the new data values <span class="math-container">$u^T x_i$</span>. It's not hard to show that if the covariance matrix of the original data points <span class="math-container">$x_i$</span> was <span class="math-container">$\Sigma$</span>, the variance of the new data points is just <span class="math-container">$u^T \Sigma u$</span>. As <span class="math-container">$\Sigma$</span> is symmetric, the unit vector <span class="math-container">$u$</span> which maximizes <span class="math-container">$u^T \Sigma u$</span> is nothing but the eigenvector with the largest eigenvalue.</p>
<p>If you want to retain more than one dimension of your data set, in principle what you can do is first find the largest principal component, call it <span class="math-container">$u_1$</span>, then subtract that out from all the data points to get a "flattened" data set that has <em>no</em> variance along <span class="math-container">$u_1$</span>. Find the principal component of this flattened data set, call it <span class="math-container">$u_2$</span>. If you stopped here, <span class="math-container">$u_1$</span> and <span class="math-container">$u_2$</span> would be a basis of the two-dimensional subspace which retains the most variance of the original data; or, you can repeat the process and get as many dimensions as you want. As it turns out, all the vectors <span class="math-container">$u_1, u_2, \ldots$</span> you get from this process are just the eigenvectors of <span class="math-container">$\Sigma$</span> in decreasing order of eigenvalue. That's why these are the principal components of the data set.</p>
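<p>Here is a rough numerical sketch of the above (Python with NumPy assumed; the data-generating matrix and sample size are arbitrary choices): the variance of the projected data equals <span class="math-container">$u^T \Sigma u$</span>, and sweeping over unit vectors, none beats the top eigenvector.</p>

```python
import numpy as np

rng = np.random.default_rng(0)
# Synthetic correlated 2-D data; the mixing matrix is an arbitrary choice.
X = rng.normal(size=(1000, 2)) @ np.array([[2.0, 1.5], [0.0, 0.5]])
X = X - X.mean(axis=0)                     # centre the data

Sigma = np.cov(X, rowvar=False)            # sample covariance matrix
eigvals, eigvecs = np.linalg.eigh(Sigma)   # eigenvalues in ascending order
u1 = eigvecs[:, -1]                        # eigenvector of the largest eigenvalue

# The variance of the projected data equals u^T Sigma u ...
proj_var = np.var(X @ u1, ddof=1)
assert np.isclose(proj_var, u1 @ Sigma @ u1)

# ... and no unit vector does better than the top eigenvector (Rayleigh quotient).
for theta in np.linspace(0.0, np.pi, 361):
    u = np.array([np.cos(theta), np.sin(theta)])
    assert u @ Sigma @ u <= eigvals[-1] + 1e-9
```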
| <p>Some informal explanation:</p>
<p>Covariance matrix $C_y$ (it is symmetric) encodes the correlations between variables of a vector. In general a covariance matrix is non-diagonal (i.e. have non zero correlations with respect to different variables).</p>
<p><strong>But it's interesting to ask: is it possible to diagonalize the covariance matrix by changing the basis of the vector?</strong> In this case there will be no (i.e. zero) correlations between different variables of the vector. </p>
<p>Diagonalization of this symmetric matrix is possible with eigen value decomposition.
You may read <em><a href="https://arxiv.org/pdf/1404.1100.pdf" rel="noreferrer">A Tutorial on Principal Component Analysis</a></em> (pages 6-7), by Jonathon Shlens, to get a good understanding. </p>
|
logic | <p>is $\forall x\,\exists y\, Q(x, y)$ the same as $\exists y\,\forall x\,Q(x, y)$?</p>
<p>I read in the book that the order of quantifiers makes a big difference, so I was wondering if these two expressions are equivalent or not. </p>
<p>Thanks. </p>
| <p>Certainly not. In that useful "loglish" dialect (a halfway house between the formal language and natural English), the first says</p>
<blockquote>
<p>For any <span class="math-container">$x$</span>, there is a <span class="math-container">$y$</span> such that <span class="math-container">$Qxy$</span>.</p>
</blockquote>
<p>The second says</p>
<blockquote>
<p>For some <span class="math-container">$y$</span>, all <span class="math-container">$x$</span> are such that <span class="math-container">$Qxy$</span>.</p>
</blockquote>
<p>These are quite different. Compare</p>
<blockquote>
<p>For any natural number <span class="math-container">$x$</span>, there is some number <span class="math-container">$y$</span>, such that <span class="math-container">$x < y$</span>. True (for any number, there's a bigger one).</p>
<p>For some natural number <span class="math-container">$y$</span>, any number <span class="math-container">$x$</span> we take is such that <span class="math-container">$x < y$</span>. False (there is no number larger than all numbers!).</p>
</blockquote>
<p>A more general comment. The whole <em>point</em> of the quantifier-variable notation is to enforce clarity as to the relative scope of logical operators.</p>
<p>To take a simple case, consider the English "Everyone's not yet arrived". This is structurally ambiguous. You can easily imagine contexts where the natural reading is "It isn't the case that everyone has arrived", and other contexts where the natural reading is "Not everyone has arrived". Compare, however,</p>
<blockquote>
<p><span class="math-container">$\neg\forall xAx$</span></p>
<p><span class="math-container">$\forall x\neg Ax$</span></p>
</blockquote>
<p>Here, in the language of first-order logic, the order of the logical operators ensures that each sentence has a unique parsing, and there is no possibility of structural ambiguity.</p>
| <p>Because I <em>really</em> hate the real world analogies (after an exam I was forced to read nearly 300 answers mumbling "every pot has a lid" analogies), let me give you a mathematical way of understanding this without evaluating the actual formulas.</p>
<p>Let $M$ be an arbitrary structure for our language, let $A(x)=\{y\in M\mid M\models Q(x,y)\}$. So given a point $m\in M$ we match it the set $A(m)$ of all those which are satisfying $Q$ <em>with</em> $m$.</p>
<ol>
<li><p>The first sentence $\forall x\exists yQ(x,y)$ tells us that for every $m\in M$, $A(m)$ is non-empty. </p></li>
<li><p>The second sentence $\exists y\forall xQ(x,y)$ tells us that there is some $y$ such that $y\in A(m)$ for all $m$. That is to say that the intersection of all $A(m)$ is not empty.</p></li>
</ol>
<p>We can immediately draw the conclusion that the second sentence implies the first. If the intersection of all the $A(m)$'s non-empty then certainly no $A(m)$ can be empty.</p>
<p>On the other hand, it is also quite clear that the second sentence implies that the intersection of all the $A(m)$'s is not empty: they all contain at least one shared point. The first sentence makes no such requirement, so it's not difficult to construct a structure where the first sentence is true but the second is false.</p>
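<p>This set-based reading can be checked mechanically on a tiny finite structure; the following Python sketch (with an illustrative domain and relation $Q$) exhibits a structure where the first sentence is true but the second is false.</p>

```python
# Checking the two quantifier orders on a tiny finite structure.
# The domain M and the relation Q (here "x differs from y") are illustrative.

M = {0, 1, 2}

def Q(x, y):
    return x != y

# A(m) = set of y satisfying Q with m, as in the answer above.
A = {m: {y for y in M if Q(m, y)} for m in M}

forall_exists = all(A[m] for m in M)                   # every A(m) is non-empty
exists_forall = bool(set.intersection(*A.values()))    # a point common to all A(m)

assert forall_exists        # "for all x there exists y" holds
assert not exists_forall    # "there exists y for all x" fails
```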
|
game-theory | <p>I am a beginner in game theory and am reading the book "Non Cooperative Game Theory" by Tamer Basar. I am not able to comprehend the difference between a behavioral strategy and a mixed strategy.</p>
<p>I saw this video: <a href="https://class.coursera.org/gametheory-003/lecture/71" rel="noreferrer">https://class.coursera.org/gametheory-003/lecture/71</a>, but could not understand it clearly.</p>
<p>Thanks in advance</p>
| <p>To put it simply, </p>
<ul>
<li><strong>mixed strategies</strong> assign a probability distribution over pure strategies</li>
<li><strong>behavioural strategies</strong> assign, independently for each information set, a probability distribution over actions</li>
</ul>
<p>Here is an example in the Coursera Game Theory Course:
4-09 - Mixed and Behavioral Strategies - <a href="https://www.youtube.com/watch?v=tT0E7PaDVck" rel="noreferrer">https://www.youtube.com/watch?v=tT0E7PaDVck</a></p>
<p><a href="https://i.sstatic.net/0SgC5.png" rel="noreferrer">(extensive form image of the example)</a></p>
<p>They give this as a behavioural strategy:
A with probability 0.5 and G with probability 0.3.</p>
<p>Note:</p>
<ul>
<li>each information set has an independent probability distribution over actions</li>
<li>when we use this strategy, we may play (A, G), (A, H), (B, G), or (B, H) depending on what happens randomly.</li>
</ul>
<p>They give this as a mixed strategy which is not a behavioural strategy:
(0.6 (A, G), 0.4 (B, H)).</p>
<p>Note:</p>
<ul>
<li>we assign a single probability distribution over the pure strategies (A, G) and (B, H)</li>
<li>we may only possibly play (A, G) or (B, H) (not (B, G) or (A, H))</li>
<li>both decisions depend on each other so it is not a behavioural strategy</li>
</ul>
<p>In normal form games, these 2 concepts are equivalent since there is only 1 "information set". However, this is not necessarily the case in extensive form games.</p>
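<p>One way to see the distinction concretely: a behavioural strategy always induces a <em>product</em> distribution over pure strategies, and the mixed strategy above is not a product. A small Python sketch (using the probability values from the example above):</p>

```python
from itertools import product

# The mixed strategy from the example: 0.6 on (A, G) and 0.4 on (B, H).
mixed = {('A', 'G'): 0.6, ('A', 'H'): 0.0, ('B', 'G'): 0.0, ('B', 'H'): 0.4}

# Marginal probabilities of the actions at each information set.
p_A = sum(p for (a, _), p in mixed.items() if a == 'A')   # 0.6
p_G = sum(p for (_, g), p in mixed.items() if g == 'G')   # 0.6

# The only behavioural strategy with these marginals chooses independently,
# so it induces the product distribution over pure strategies ...
induced = {(a, g): (p_A if a == 'A' else 1 - p_A) * (p_G if g == 'G' else 1 - p_G)
           for a, g in product('AB', 'GH')}

# ... which is not the mixed strategy we started from.
assert abs(induced[('A', 'G')] - 0.36) < 1e-12   # 0.6 * 0.6, not 0.6
assert any(abs(mixed[k] - induced[k]) > 1e-9 for k in mixed)
```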
| <p>Well, the answer is rather simple; I think the video on Coursera (referred to in the question) made it unnecessarily complex. </p>
<p>Suppose A and B are playing a game in which both have to randomly choose a card each from a pair 'W' and 'L'. </p>
<p>Whoever gets a 'W' gets +1 and whoever gets 'L' gets -1.</p>
<p>A is allowed to play again, but under the condition that he has not seen whether he won or lost. </p>
<p>Only the winner can choose whether to continue the play or not. </p>
<p>If the game continues, then A can decide whether to exchange cards with B or keep them as they are.
<img src="https://i.sstatic.net/7IK1b.jpg" alt="Behavioral Game vs Mixed Game">
S = stop, C = continue, K = keep, E = exchange</p>
<p>A's strategies could be (S,E), (S,K), (C,E), (C,K);
B's strategies are (S), (C).</p>
<p>In this context, if A chooses to play these strategies with probabilities of, say, 0.5 and 0.5, then this is a mixed strategy.</p>
<p>If, however, A assigns independent probabilities to his first-time play and his second-time play, </p>
<p>that is, S with probability say 0.4 and C with 0.6 for the first-time play, and </p>
<p>E with probability 0.9 and K with 0.1 for the second-time play, </p>
<p>This independent assignment of strategy is called the behavioral strategy.</p>
<p>The corresponding mixed-strategy probabilities for (S,E) and (C,K) are 0.4 × 0.9 = 0.36 and 0.6 × 0.1 = 0.06, respectively.</p>
<p>Reference: <a href="http://www.ma.huji.ac.il/hart/papers/ext-hgt.pdf" rel="nofollow noreferrer">http://www.ma.huji.ac.il/hart/papers/ext-hgt.pdf</a></p>
|
differentiation | <p>Plotting the function <span class="math-container">$f(x)=x^{1/3}$</span> defined for any real number <span class="math-container">$x$</span> gives us:
<a href="https://i.sstatic.net/zdDPO.png" rel="noreferrer"><img src="https://i.sstatic.net/zdDPO.png" alt="plot of function"></a></p>
<p>Since <span class="math-container">$f$</span> is a function, for any given <span class="math-container">$x$</span> value it maps to a single y value (and not more than one <span class="math-container">$y$</span> value, because that would mean it's not a function as it fails the vertical line test).
This function also has a vertical tangent at <span class="math-container">$x=0$</span>. </p>
<p>My question is: how can we have a function that also has a vertical tangent? To get a vertical tangent we need 2 vertical points, which means that we are not working with a "proper" function as it has multiple y values mapping to a single <span class="math-container">$x$</span>. How is it possible for a "proper" function to have a vertical tangent?</p>
<p>As I understand it, in the graph I pasted we cannot take the derivative at x=0 because the slope is vertical, hence we cannot see the instantaneous rate of change of x to y, as the y value is not a single value (or is many values, whichever way you want to look at it). How is it possible to have a perfectly vertical slope on a function? In this case I can imagine a very steep curve at 0.... but vertical?!? I can't wrap my mind around it. How can we get a vertical slope on a non-vertical function?</p>
| <p>The tangent line is simply an ideal picture of what you would expect to see if you zoom in around the point.</p>
<p><span class="math-container">$\hspace{8em}$</span> <a href="https://i.sstatic.net/ym3bk.gif" rel="nofollow noreferrer"><img src="https://i.sstatic.net/ym3bk.gif" alt="Tangent line" /></a></p>
<p>Hence, the vertical tangent line to the graph <span class="math-container">$y = \sqrt[3]{x}$</span> at <span class="math-container">$(0,0)$</span> says nothing more than that the graph would look steeper and steeper as we zoom in further around <span class="math-container">$(0, 0)$</span>.</p>
<p>We can also learn several things from this geometric intuition.</p>
<p><strong>1.</strong> The line is never required to pass through two distinct points, as the idea of a tangent line itself does not impose such an extraneous condition.</p>
<p>For instance, tangent lines pass through a single point even in many classical examples such as conic sections. On the other extreme, a tangent line can pass through infinitely many points of the original curve as in the example of the graph <span class="math-container">$y = \sin x$</span>.</p>
<p><strong>2.</strong> Tangent line is purely a geometric notion, hence it should not depend on the coordinate system being used.</p>
<p>On the contrary, identifying the curve as the graph of some function <span class="math-container">$f$</span> and differentiating it does depend on the coordinates system. In particular, it is not essential for <span class="math-container">$f$</span> to be differentiable in order to discuss a tangent line to the graph <span class="math-container">$y = f(x)$</span>, although it is a sufficient condition.</p>
<p>OP's example serves as a perfect showcase of this. Differentiating the function <span class="math-container">$f(x) = \sqrt[3]{x}$</span> fails to detect the tangent line at <span class="math-container">$(0,0)$</span>, since it is not differentiable at this point. On the other hand, it perfectly makes sense to discuss the vertical tangent line to the <em>curve</em></p>
<p><span class="math-container">$$ \mathcal{C} = \{(x, \sqrt[3]{x}) :x \in \mathbb{R} \} = \{(y^3, y) : y \in \mathbb{R} \}, $$</span></p>
<p>and indeed the line <span class="math-container">$x = 0$</span> is the tangent line to <span class="math-container">$\mathcal{C}$</span> at <span class="math-container">$(0, 0)$</span>.</p>
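<p>As a quick numerical illustration of "steeper and steeper" (a Python sketch; the step sizes are arbitrary): the difference quotients of the cube root at the origin grow without bound as we zoom in, which is exactly the vertical tangent line.</p>

```python
# Difference quotients of f(x) = x**(1/3) at 0 grow without bound as the
# step shrinks: the zoomed-in graph looks steeper and steeper, which is
# the vertical tangent line x = 0.

def cbrt(x):
    return x ** (1 / 3) if x >= 0 else -((-x) ** (1 / 3))

steps = [10.0 ** (-k) for k in range(1, 13)]
slopes = [(cbrt(h) - cbrt(0)) / h for h in steps]

assert slopes == sorted(slopes)   # monotonically increasing as h shrinks
assert slopes[-1] > 1e7           # already enormous at h = 1e-12
```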
| <p>No, we don't need two vertical points. By the same idea, if the graph of a function <span class="math-container">$f$</span> has a horizontal tangent line somewhere, then there would have to be two points of the graph of <span class="math-container">$f$</span> with the same <span class="math-container">$y$</span> coordinate. However, the tangent at <span class="math-container">$0$</span> of <span class="math-container">$x\mapsto x^3$</span> (note that this is <em>not</em> the function that you mentioned) is horizontal, in spite of the fact that no two points of its graph have the same <span class="math-container">$y$</span> coordinate.</p>
|
geometry | <p>I have learned about the correspondence of radians and degrees, so 360 degrees equals $2\pi$ radians. Now we mostly use radians (in integrals and so on).</p>
<p>My question: Is it just mathematical convention that radians are much more used in higher maths than degrees or do radians have some intrinsic advantage over degrees?</p>
<p>For me personally it doesn't matter if I write $\cos(360°)$ or $\cos(2\pi)$. Both equal 1, so why bother with two conventions?</p>
| <p>The reasons are mostly the same as the fact that we usually use base $e$ exponentiation and logarithm. Radians are simply the natural units for measuring angles.</p>
<ul>
<li>The length of a circle segment is $x\cdot r$, where $x$ is the measure and $r$ is the radius, instead of $x\cdot r\cdot \pi/180$.</li>
<li>The power series for sine is simply $\sin(x)=\sum_{i=0}^\infty(-1)^i{x}^{2i+1}/(2i+1)!$, not $\sin(x)=\sum_{i=0}^\infty(-1)^i(x\cdot \pi/180)^{2i+1}/(2i+1)!$.</li>
<li>The differential equation $\sin$ (and $\cos$) satisfies is $f+f''=0$, not $f+f''\pi^2/(180)^2=0$.</li>
<li>$\sin'=\cos$, not $\cos\cdot 180/\pi$.</li>
</ul>
<p>You could add more and more to the list, but I think the point is clear.</p>
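<p>A quick numerical sketch of the last point (Python; the sample angle and step size are arbitrary): the difference quotient of sine matches cosine only when the angle is measured in radians, while in degrees the factor $\pi/180$ appears.</p>

```python
import math

# Numerically differentiate sine at the same geometric angle, measured two ways.
x = 1.0                      # the angle, in radians
h = 1e-6                     # finite-difference step

# In radians, sin' = cos:
slope_rad = (math.sin(x + h) - math.sin(x)) / h
assert abs(slope_rad - math.cos(x)) < 1e-5

def sin_deg(d):              # sine of an angle given in degrees
    return math.sin(math.radians(d))

# In degrees, a factor of pi/180 creeps in:
d = math.degrees(x)
slope_deg = (sin_deg(d + h) - sin_deg(d)) / h
assert abs(slope_deg - (math.pi / 180) * math.cos(x)) < 1e-5
```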
| <p>As I teach my trigonometry students: "Degrees are useless."</p>
<p>You want to know the length of a circular arc? It's $r \theta$ where $r$ is the radius of the circle and $\theta$ is the angle it subtends <em>in radians</em>. If you use degrees, you get ridiculous answers.</p>
<p>You want to know the area of a sector? It's $\frac{1}{2} r^2 \theta$, with $r$ and $\theta$ as above. Again, if you use degrees, you get ridiculous results.</p>
<p>To really understand this, move on to calculus and study arc length. The arc length of the graph of the circle gives radian results. Or, look at the power series expansion of the circular trigonometric functions: if you use radians, everything works with small coefficients; if you use degrees, extra powers of $\frac{\pi}{180}$ scatter around.</p>
<p>What are degrees any good for? Dividing circles into even numbers of parts. That's it. If you want to actually <em>calculate</em> something, degrees are useless.</p>
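<p>For instance, a crude numerical check of the arc length formula (a Python sketch; the radius and angle are arbitrary choices): a fine polygonal approximation of a circular arc agrees with $r\theta$.</p>

```python
import math

# Compare r*theta against a polygonal approximation of the arc length.
r, theta = 2.5, 1.2          # arbitrary radius and angle (in radians)

n = 100_000                  # number of chords in the approximation
length = 0.0
prev = (r, 0.0)
for i in range(1, n + 1):
    t = theta * i / n
    cur = (r * math.cos(t), r * math.sin(t))
    length += math.hypot(cur[0] - prev[0], cur[1] - prev[1])
    prev = cur

assert abs(length - r * theta) < 1e-6   # matches r*theta; in degrees it would not
```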
|
geometry | <p>Let <span class="math-container">$X_0$</span> be the unit disc, and consider the process of "cutting out circles", where to construct <span class="math-container">$X_n$</span> you select a uniform random point <span class="math-container">$x \in X_{n-1}$</span>, and cut out the largest circle with center <span class="math-container">$x$</span>. To illustrate this process, we have the following graphic:</p>
<p><a href="https://i.sstatic.net/D1mbZ.png" rel="noreferrer"><img src="https://i.sstatic.net/D1mbZ.png" alt="cutting out circles" /></a></p>
<p>where the graphs are respectively showing one sample of <span class="math-container">$X_1,X_2,X_3,X_{100}$</span> (the orange parts have been cut out).</p>
<p>Can we prove we eventually cut everything out? Formally, is the following true
<span class="math-container">$$\lim_{n \to \infty} \mathbb{E}[\text{Area}(X_n)] = 0$$</span></p>
<p>where <span class="math-container">$\mathbb{E}$</span> denotes that we are taking the expectation value. Doing simulations, this seems true; in fact <span class="math-container">$\mathbb{E}[\text{Area}(X_n)]$</span> seems to decay with some power law, but after 4 years I still don't really know how to prove this :(. The main thing you need to rule out, it seems, is that <span class="math-container">$X_n$</span> gets too skinny too quickly.</p>
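<p>For reference, the simulations mentioned above can be reproduced with a short Monte-Carlo sketch like the following (Python; the seed, number of cuts, and sample count are arbitrary choices, and the area is only estimated by point sampling):</p>

```python
import math
import random

random.seed(1)  # arbitrary seed, for reproducibility only

def largest_radius(p, circles):
    """Radius of the biggest circle centred at p that fits inside the unit
    disk with the listed circles removed (negative if p is already cut out)."""
    r = 1.0 - math.hypot(p[0], p[1])          # distance to the outer boundary
    for c, cr in circles:
        r = min(r, math.hypot(p[0] - c[0], p[1] - c[1]) - cr)
    return r

def in_remaining(p, circles):
    """Is p inside X_n, i.e. in the disk but outside every removed circle?"""
    return p[0] ** 2 + p[1] ** 2 <= 1.0 and largest_radius(p, circles) > 0.0

def sample_remaining(circles):
    """Uniform point of X_n, by rejection sampling in the bounding square."""
    while True:
        p = (random.uniform(-1, 1), random.uniform(-1, 1))
        if in_remaining(p, circles):
            return p

def area_estimate(circles, samples=20_000):
    """Monte-Carlo estimate of Area(X_n); the bounding square has area 4."""
    hits = sum(in_remaining((random.uniform(-1, 1), random.uniform(-1, 1)),
                            circles) for _ in range(samples))
    return 4.0 * hits / samples

circles = []
for _ in range(50):                            # perform 50 cuts
    p = sample_remaining(circles)
    circles.append((p, largest_radius(p, circles)))

remaining = area_estimate(circles)
assert 0.0 < remaining < math.pi               # strictly between 0 and Area(X_0)
```

<p>Plotting <code>area_estimate</code> against the number of cuts (averaged over many runs) is how one would eyeball the apparent power-law decay.</p>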
| <p><strong>This proof is incomplete, as noted in the comments and at the end of this answer</strong></p>
<p>Apologies for the length. I tried to break it up in to sections so it's easier to follow and I tried to make all implications really clear. Happy to revise as needed</p>
<p>I'll start with some definitions to keep things clear.</p>
<p>Let</p>
<ul>
<li>The area of a set <span class="math-container">$S \subset \mathbb{R^2}$</span> be the 2-Lebesgue measure <span class="math-container">$\lambda^*_2(S):= A(S)$</span>, normalized so that <span class="math-container">$A(X_0)=1$</span> (this only rescales areas by the constant <span class="math-container">$\pi$</span>, and lets areas of subsets of <span class="math-container">$X_0$</span> double as probabilities)</li>
<li><span class="math-container">$p_n$</span> be the point selected from <span class="math-container">$X_{n-1}$</span> such that <span class="math-container">$P(p_n \in Q) = \frac{A(Q)}{A({X_{n-1}})} \; \forall Q\in \mathcal{B}(X_{n-1})$</span></li>
<li><span class="math-container">$C_n(p)$</span> is the maximal circle drawn around <span class="math-container">$p \in X_{n-1}$</span> that fits in <span class="math-container">$X_{n-1}$</span>: <span class="math-container">$C_n(p) = \max_r \{\textrm{Circle}(p,r):\textrm{Circle}(p,r) \subseteq X_{n-1}\}$</span></li>
<li><span class="math-container">$A_n = A(C_n(p_n))$</span> be the area of the circle drawn around <span class="math-container">$p_n$</span> (i.e., <span class="math-container">$X_n = X_{n-1}\setminus C_n(p_n)$</span>)</li>
</ul>
<p>We know that <span class="math-container">$0 \leq A_n \leq 1$</span>. By your definition of the generating process we can also make a stronger statement, namely that <span class="math-container">$P(A_n>0)=1$</span>, which is established within the proof of Lemma 1 below.</p>
<p>Also, since you're using a uniform probability measure over (well-behaved) subsets of <span class="math-container">$X_{n-1}$</span> as the distribution of <span class="math-container">$p_n$</span> we have <span class="math-container">$P(p_n \in B) := \frac{A(B)}{A(X_{n-1})}\;\;\forall B\in \sigma\left(X_{n-1}\right) \implies P(p_1 \in S) = P(S) \;\;\forall S \in \sigma(X_0)$</span>.</p>
<p><strong>Lemma 1</strong>: <span class="math-container">$P\left(\exists L \in [0,\infty): \lim \limits_{n \to \infty} A(X_{n}) = L\right)=1$</span></p>
<p><em>Proof</em>: We'll show this by proving</p>
<ol>
<li><span class="math-container">$P(A_n>0)=1\;\forall n$</span></li>
<li><span class="math-container">$(1) \implies P\left(A(X_{i})\leq A(X_{i-1}) \;\;\forall i \right)=1$</span></li>
<li><span class="math-container">$(2) \implies P\left(\exists L \in [0,\infty): \lim \limits_{n \to \infty} A(X_{n}) = L\right)=1$</span></li>
</ol>
<p><span class="math-container">$A_n = 0$</span> can only happen if <span class="math-container">$p_n$</span> falls directly on the boundary of <span class="math-container">$X_n$</span> (i.e., <span class="math-container">$p_n \in \partial_{X_{n-1}} \subset \mathbb{R^2})$</span>. However, since each <span class="math-container">$\partial_{X_{n-1}}$</span> is the union of a finite number of smooth curves (circular arcs) in <span class="math-container">$\mathbb{R^2}$</span> we have <span class="math-container">${A}(\partial_{X_{n-1}})=0 \;\forall n \implies P(p_n \in \partial_{X_{n-1}})=0\;\;\forall n \implies P(A_n>0)=1\;\forall n$</span></p>
<p>If <span class="math-container">$P(A_n>0)=1\;\forall n$</span> then since <span class="math-container">$A(X_i) = A(X_{i-1}) - A_n\;\forall i$</span> we have that <span class="math-container">$A(X_{i-1}) - A(X_i) = A_n\;\forall i$</span></p>
<p>Therefore, <span class="math-container">$P(A(X_{i-1}) - A(X_i) > 0\;\forall i) = P(A_n>0\;\forall i)=1\implies P\left(A(X_{i})\leq A(X_{i-1}) \;\;\forall i \right)=1$</span></p>
<p>If <span class="math-container">$P\left(A(X_{i})\leq A(X_{i-1}) \;\;\forall i \right)=1$</span> then <span class="math-container">$(A(X_{i}))_{i\in \mathbb{N}}$</span> is a monotonic decreasing sequence almost surely.</p>
<p>Since <span class="math-container">$A(X_i)\geq 0\;\;\forall i\;\;(A(X_{i}))_{i\in \mathbb{N}}$</span> is bounded from below, the monotone convergence theorem implies <span class="math-container">$P\left(\exists L \in [0,\infty): \lim \limits_{n \to \infty} A(X_{n}) = L\right)=1\;\;\square$</span></p>
<p>As you've stated, what we want to show is that eventually we've cut away all the area. There are two senses in which this can be true:</p>
<ol>
<li>Almost all sequences <span class="math-container">$\left(A(X_i)\right)_1^{\infty}$</span> converge to <span class="math-container">$0$</span>: <span class="math-container">$P\left(\lim \limits_{n\to\infty}A(X_n) = 0\right) = 1$</span></li>
<li><span class="math-container">$\left(A(X_i)\right)_1^{\infty}$</span> converges in mean to <span class="math-container">$0$</span>: <span class="math-container">$\lim \limits_{n\to \infty} \mathbb{E}[A(X_n)] = 0$</span></li>
</ol>
<p>In general, these two senses of convergence do not imply each other. However, with a couple additional conditions we can show almost sure convergence implies convergence in mean. Your question is about (2), and we will get there via proving (1) <em>plus</em> a sufficient condition for <span class="math-container">$(1)\implies (2)$</span>.</p>
<p>I'll proceed as follows:</p>
<ol>
<li>Show <span class="math-container">$A(X_n) \overset{a.s.}{\to} 0$</span> using Borel-Cantelli Lemma</li>
<li>Use the fact that <span class="math-container">$0<A(X_n)\leq 1$</span> to apply the Dominated Convergence Theorem to show <span class="math-container">$\mathbb{E}[A(X_n)] \to 0$</span></li>
</ol>
<hr />
<h2>Step 1: <span class="math-container">$A(X_n) \overset{a.s.}{\to} 0$</span></h2>
<p>If <span class="math-container">$\lim_{n\to \infty} A(X_n) = A_R > 0$</span> then there is some set <span class="math-container">$R$</span> with positive area <span class="math-container">$A(R)=A_R >0$</span> that is a subset of <em>all</em> <span class="math-container">$X_n$</span> (i.e.,<span class="math-container">$\exists R \subset X_0: A(R)>0\;\textrm{and}\;R \subset X_i\;\;\forall i> 0)$</span></p>
<p>Let's call a set <span class="math-container">$S\subset X_0:A(S)>0,\;S \subset X_i\;\;\forall i> 0$</span> a <em>reserved set</em> <span class="math-container">$(R)$</span> since we are "setting it aside". In the rest of this proof, the letter <span class="math-container">$R$</span> will refer to a reserved set.</p>
<p>Let's define the set <span class="math-container">$Y_n = X_n \setminus R$</span>, and the event <span class="math-container">$T_n:=p_n \in Y_{n-1}$</span> then</p>
<p><strong>Lemma 2</strong>: <span class="math-container">$P\left(\bigcap_1^n T_i \right) \leq A(Y_0)^n = (1 - A_R)^n\;\;\forall n>0$</span></p>
<p><em>Proof</em>: We'll prove this by induction. Note that <span class="math-container">$P(T_1) = A(Y_0)$</span> and <span class="math-container">$P(T_1\cap T_2) = P(T_2|T_1)P(T_1)$</span>. We know that if <span class="math-container">$T_1$</span> has happened, then <strong>Lemma 1</strong> implies that <span class="math-container">$A(Y_{1}) < A(Y_0)$</span>. Therefore</p>
<p><span class="math-container">$$P(T_2|T_1)<P(T_1)=A(Y_0)\implies P\left(T_1 \bigcap T_2\right)\leq A(Y_0)^2$$</span></p>
<p>If <span class="math-container">$P(\bigcap_{i=1}^n T_i) \leq A(Y_0)^n$</span> then by a similar argument we have</p>
<p><span class="math-container">$$P\left(\bigcap_{i=1}^{n+1} T_i\right) = P\left( T_{n+1} \left| \;\bigcap_{i=1}^n T_i\right. \right)P\left(\bigcap_{i=1}^n T_i\right)\leq A(Y_0)A(Y_0)^n = A(Y_0)^{n+1}\;\;\square$$</span></p>
<p>However, to allow <span class="math-container">$R$</span> to persist, we must ensure that <em>not only</em> does <span class="math-container">$T_n$</span> occur for all <span class="math-container">$n>0$</span> but that each <span class="math-container">$p_n$</span> doesn't fall in some neighborhood <span class="math-container">$\mathcal{N}_n(R)$</span> around <span class="math-container">$R$</span>:</p>
<p><span class="math-container">$$\mathcal{N}_n(R):= \mathcal{R}_n\setminus R$$</span>
<span class="math-container">$$\textrm{where}\; \mathcal{R}_n:=\{p \in X_{n-1}: A(C_n(p)\cap R)>0\}\supseteq R$$</span></p>
<p>Let's define the event <span class="math-container">$T'_n:=p_n \in X_{n-1}\setminus \mathcal{R}_n$</span> to capture the above requirement for a particular point <span class="math-container">$p_n$</span>. We then have the following.</p>
<p><strong>Lemma 3</strong>: <span class="math-container">$A(X_n) \overset{a.s.}{\to} A_R \implies P\left(\bigcap \limits_{i \in \mathbb{N}} T_i'\right)=1$</span></p>
<p><em>Proof</em>: Assume <span class="math-container">$A(X_n) \overset{a.s.}{\to} A_R$</span>. If <span class="math-container">$P\left(\bigcap \limits_{i \in \mathbb{N}} T_i'\right)<1$</span> then <span class="math-container">$P\left(\exists k>0:p_k \in \mathcal{R}_k\right)>0$</span>. By the definition of <span class="math-container">$ \mathcal{R}_k$</span>, <span class="math-container">$A(C_k(p_k)\cap R) > 0$</span> which means that <span class="math-container">$X_{k}\cap R \subset R \implies A(X_{k}\cap R) < A_R$</span>. By <strong>Lemma 1</strong>, <span class="math-container">$(X_i)_{i \in \mathbb{N}}$</span> is a strictly decreasing sequence of sets so <span class="math-container">$A(X_{j}\cap R) < A_R \;\;\forall j>i$</span>; therefore, <span class="math-container">$\exists \epsilon > 0: P\left(A(X_n) \overset{a.s.}{\to} A_R - \epsilon\right)>0$</span>. However, this contradicts our assumption <span class="math-container">$A(X_n) \overset{a.s.}{\to} A_R$</span>. Therefore, <span class="math-container">$P\left(\bigcap \limits_{i \in \mathbb{N}} T_i'\right)<1$</span> is false which implies <span class="math-container">$P\left(\bigcap \limits_{i \in \mathbb{N}} T_i'\right)=1\;\square$</span></p>
<p><strong>Corollary 1</strong>: <span class="math-container">$P\left(\bigcap \limits_{i \in \mathbb{N}} T_i'\right)=1$</span> is a necessary condition for <span class="math-container">$A(X_n) \overset{a.s.}{\to} A_R$</span></p>
<p><em>Proof</em>: This follows immediately from <strong>Lemma 3</strong> by the logic of material implication: <span class="math-container">$X \implies Y \iff \neg Y \implies \neg X$</span> -- an implication is logically equivalent to its contrapositive.</p>
<p>We can express <strong>Corollary 1</strong> as an event <span class="math-container">$\mathcal{T}$</span> in a probability space <span class="math-container">$\left(X_0^{\mathbb{N}},\mathcal{F},\mathbb{P}\right)$</span> constructed from the sample space of infinite sequences of points <span class="math-container">$p_n \in X_0$</span> where:</p>
<ul>
<li><p><span class="math-container">$X_0^{\mathbb{N}}:=\prod_{i\in\mathbb{N}}X_0$</span> is the set of all sequences of points in the unit disk <span class="math-container">$X_0 \subset \mathbb{R^2}$</span></p>
</li>
<li><p><span class="math-container">$\mathcal{F}$</span> is the product Borel <span class="math-container">$\sigma$</span>-algebra generated by the product topology of all open sets in <span class="math-container">$X_0^{\mathbb{N}}$</span></p>
</li>
<li><p><span class="math-container">$\mathbb{P}$</span> is a probability measure defined on <span class="math-container">$\mathcal{F}$</span></p>
</li>
</ul>
<p>With this space defined, we can define our event <span class="math-container">$\mathcal{T}$</span> as the intersection of a non-increasing sequence of cylinder sets in <span class="math-container">$\mathcal{F}$</span>:</p>
<p><span class="math-container">$$\mathcal{T}:=\bigcap_{i=1}^{\infty}\mathcal{T}_i \;\;\;\textrm{where } \mathcal{T}_i:=\bigcap_{j=1}^{i} T'_j = \text{Cyl}_{\mathcal{F}}(T'_1,..,T'_i)$$</span></p>
<p><strong>Lemma 4</strong>: <span class="math-container">$\mathbb{P}(\mathcal{T}_n) = \mathbb{P}(\bigcap_1^n T'_i)\leq \mathbb{P}\left(\bigcap_1^n T_i\right)\leq (1-A_R)^n$</span></p>
<p><em>Proof</em>: <span class="math-container">$\mathbb{P}(\mathcal{T}_n) = \mathbb{P}(\bigcap_1^n T'_i)$</span> follows from the definition of <span class="math-container">$\mathcal{T}_n$</span>. <span class="math-container">$\mathbb{P}(\bigcap_1^n T'_i)\leq \mathbb{P}\left(\bigcap_1^n T_i\right)$</span> follows immediately from <span class="math-container">$R\subseteq \mathcal{R}_n\;\;\forall n\;\square$</span></p>
<p><strong>Lemma 5</strong>: <span class="math-container">$\mathcal{T} \subseteq \limsup \limits_{n\to \infty} \mathcal{T}_n$</span></p>
<p><em>Proof</em>: By definition <span class="math-container">$\mathcal{T} \subset \mathcal{T}_i \;\forall i>0$</span>. Since <span class="math-container">$\left(\mathcal{T}_i\right)_{i \in \mathbb{N}}$</span> is nonincreasing, we have <span class="math-container">$\limsup \limits_{i\to \infty} \mathcal{T}_i = \limsup \limits_{i\to \infty}\mathcal{T}_i = \lim \limits_{i\to \infty}\mathcal{T}_i = \mathcal{T}\;\;\square$</span></p>
<p><strong>Lemma 6</strong>: <span class="math-container">$\mathbb{P}\left(\limsup \limits_{i\to \infty} \mathcal{T}_i\right) = 0\;\;\forall A_R \in (0,1]$</span></p>
<p><em>Proof</em>: From <strong>Lemma 4</strong>
<span class="math-container">$$\sum \limits_{i=1}^{\infty} \mathbb{P}\left(\mathcal{T}_i\right) \leq \sum \limits_{i=1}^{\infty} (1-A_R)^i = \sum \limits_{i=0}^{\infty} \left[(1-A_R) \cdot (1-A_R)^i\right] =$$</span>
<span class="math-container">$$ \frac{1-A_R}{1-(1-A_R)} = \frac{1-A_R}{A_R}=\frac{1}{A_R}-1 < \infty \;\; \forall A_R \in (0,1]\implies$$</span>
<span class="math-container">$$ \mathbb{P}\left(\limsup \limits_{i\to \infty} \mathcal{T}_i\right) = 0 \;\; \forall A_R \in (0,1]\textrm{ (Borel-Cantelli) }\;\square$$</span></p>
<p><strong>Lemma 6</strong> implies that only <em>finitely many</em> <span class="math-container">$\mathcal{T}_i$</span> will occur with probability 1. Specifically, for almost every sequence <span class="math-container">$\omega \in X_0^{\mathbb{N}}$</span> there exists <span class="math-container">$n_{\omega}<\infty$</span> such that <span class="math-container">$p_{n_{\omega}} \in \mathcal{R}_{n_{\omega}}$</span>.</p>
<p>We can define this as a stopping time for each sequence <span class="math-container">$\omega \in X_0^{\mathbb{N}}$</span> as follows:</p>
<p><span class="math-container">$$\tau(\omega) := \max \limits_{n \in \mathbb{N}} \{n:\omega \in \mathcal{T}_n\}$$</span></p>
<p><strong>Corollary 2</strong>: <span class="math-container">$\mathbb{P}(\tau < \infty) = 1$</span></p>
<p><em>Proof</em>: This follows immediately from <strong>Lemma 6</strong> and the definition of <span class="math-container">$\tau$</span></p>
<p><strong>Lemma 7</strong>: <span class="math-container">$P(\mathcal{T}) = 0\;\;\forall R:A(R)>0$</span></p>
<p><em>Proof</em>: This follows from <strong>Lemma 5</strong> and <strong>Lemma 6</strong></p>
<hr />
<p><strong>This is where I'm missing a step</strong>
For Theorem 1 below to work, Lemma 7 + Corollary 1 are not sufficient.</p>
<p>Just because every particular subset <span class="math-container">$R$</span> of positive area has probability zero of occurring doesn't imply that the set of all possible subsets of that area has probability zero. An analogous situation arises with continuous random variables: every individual point has probability zero, yet when we draw from the distribution we nonetheless get a point.</p>
<p>What I don't know are the sufficient conditions for the following:</p>
<p><span class="math-container">$P(\omega)=0 \;\forall \omega\in \Omega: A(\omega)=R \implies P(\{\omega: A(\omega)=R\})=0$</span></p>
<hr />
<hr />
<p><strong>Theorem 1</strong>: <span class="math-container">$A(X_n) \overset{a.s.}{\to} 0$</span></p>
<p><em>Proof</em>: <strong>Lemma 7</strong> and <strong>Corollary 1</strong> imply <span class="math-container">$A(X_n)$</span> does <em>not</em> converge to <span class="math-container">$A_R$</span> almost surely, which implies <span class="math-container">$P(A(X_n) \to A_R) < 1 \;\forall A_R > 0$</span>. <strong>Corollary 2</strong> makes the stronger statement that <span class="math-container">$P(A(X_n) \to A_R)=0\;\forall A_R>0$</span> (i.e., almost never), since we know that the sequences of centers of each circle <span class="math-container">$p_n$</span> viewed as a stochastic process will almost surely hit <span class="math-container">$R$</span> (again, since we've defined <span class="math-container">$R$</span> such that <span class="math-container">$A(R)>0)$</span>. <span class="math-container">$P(A(X_n) \to A_R) = 0 \;\forall A_R>0$</span> with <strong>Lemma 1</strong> implies that <span class="math-container">$P(A(X_n) \to 0) = 1$</span>. Therefore, <span class="math-container">$A(X_n) \overset{a.s.}{\to} 0\;\square$</span></p>
<hr />
<h2>Step 2: <span class="math-container">$\mathbb{E}[A(X_n)] \to 0$</span></h2>
<p>We will appeal to the <a href="https://en.wikipedia.org/wiki/Convergence_of_random_variables#Properties_4" rel="nofollow noreferrer">Dominated Convergence Theorem</a> to prove this result.</p>
<p><strong>Theorem 2</strong>: <span class="math-container">$\mathbb{E}[A(X_n)] \to 0$</span></p>
<p><em>Proof</em>: From <strong>Theorem 1</strong> we've shown that <span class="math-container">$A(X_n) \overset{a.s.}{\to} 0$</span>. Let <span class="math-container">$Z\overset{a.s.}{=}c$</span> be an almost surely constant random variable with <span class="math-container">$c>1$</span>; then <span class="math-container">$|A(X_n)| < Z\;\forall n$</span>. In addition, <span class="math-container">$\mathbb{E}[Z]=c<\infty$</span>, hence <span class="math-container">$Z$</span> is <span class="math-container">$\mathbb{P}$</span>-integrable. Therefore, <span class="math-container">$\mathbb{E}[A(X_n)] \to 0$</span> by the Dominated Convergence Theorem. <span class="math-container">$\square$</span></p>
| <p>New to this, so not sure about the rigor, but here goes.</p>
<p>Let $A_k$ be the $k$th circle. Assume the area of $\bigcup_{k=1}^n A_k$ does not approach the total area of the circle $A_T$ as $n$ tends towards infinity. Then there must be some area $K$ which is not covered, yet cannot harbor a new circle. Let $C = \bigcup_{k=1}^\infty A_k$. Consider a point $P$ such that $d(P,K)=0$ and $d(P,C)>0$. If no such point exists, then $K \subset C$, as $C$ is clearly a closed set of points. If such a point does exist, then another circle with center $P$ and nonzero area can be made to cover part of $K$, and the same logic applies to all possible $K$. Therefore there is no area $K$ which cannot contain a new circle, and by consequence $$\lim_{n\to\infty}\Bigg[\bigcup_{k=1}^n A_k\Bigg] = \big[A_T\big]$$
Since the size of circles is continuous, there must be a set of circles $\{A_k\}_{k=1}^\infty$ such that $\big[A_k\big]=E(\big[A_k\big])$ for each $k \in \mathbb{N}$, and therefore $$\lim_{n\to\infty} E(\big[A_k\big]) = \big[A_k\big] $$</p>
<p><strong>EDIT</strong>: This proof is wrong because I'm bad at probability, working on a new one.</p>
|
geometry | <p>I understand that Coxeter diagrams are supposed to communicate something about the structure of symmetry groups of polyhedra, but I am baffled about what that something is, or why the Coxeter diagram is clearer, simpler, or more useful than a more explicit notation. The information on Wikipedia has not helped me.</p>
<p>Wikipedia tells me, for example, that the Coxeter diagram for a cube is <img src="https://i.sstatic.net/6lm07.png" alt="Coxeter diagram for cube">, but I don't understand why it is this, in either direction; I don't understand either how you could calculate the Coxeter diagram from a knowledge of the geometry of the cube, or how you could get from the Coxeter diagram to an understanding of the geometric properties of the cube.</p>
<p>I gather that the three points represent three reflection symmetries, and that the mutual angles between the three reflection planes are supposed to be $45^\circ, 60^\circ, $ and $90^\circ$, but I can't connect this with anything I know about cubic symmetry. Nor do I understand why it is perspicuous to denote these angles, respectively, with a line marked with a 4, an unmarked line, and a missing line.</p>
<p>My questions are:</p>
<ul>
<li>What information is the Coxeter diagram communicating?</li>
<li>Why is this information useful? How does it relate to better-known geometric properties? What are the applications of the diagram?</li>
<li>What makes it a good notation? Is it used for its concision, or because it is easy to calculate with, or for some other reason?</li>
<li>Where is a good place to start understanding this?</li>
</ul>
| <p>The diagrams are a way of describing a group generated by reflections. Any collection of reflections (in Euclidean space, say) will generate a group. To know what this group is like, you need to know more than just how many generators there are: you need to know the relationships between the generators. The Coxeter diagram tells you that information. There is a node for each generator, and an edge between the two labeled with the order of their product.</p>
<p>For instance, if you have a group generated by three reflections <span class="math-container">$\rho_1$</span>, <span class="math-container">$\rho_2$</span>, and <span class="math-container">$\rho_3$</span>, then you know that <span class="math-container">$\rho_i^2 = 1$</span> (the order of each reflection is two), but the order of <span class="math-container">$\rho_1 \rho_2$</span>, <span class="math-container">$\rho_1 \rho_3$</span>, and <span class="math-container">$\rho_2 \rho_3$</span> could be anything. Maybe <span class="math-container">$(\rho_1 \rho_2)^3 = (\rho_1 \rho_3)^4 = (\rho_2 \rho_3)^5 = 1$</span>.
Then the Coxeter diagram is <img src="https://i.sstatic.net/2sctk.png" alt="Triangle labeled 3,4,5">.
By convention, edges that would be labeled "2" are omitted, and any "3" labels are left off, so we'd actually have <img src="https://i.sstatic.net/Lch5e.png" alt="Triangle with one unlabeled edge and edges labeled 4 and 5">.</p>
<p>So, nodes in the graph are not-adjacent exactly when the product of the corresponding generators has order 2, which for involutions means they commute:
<span class="math-container">$ (\rho_i \rho_j)^2 = 1$</span> means <span class="math-container">$\rho_i \rho_j = (\rho_i \rho_j)^{-1} = \rho_j^{-1} \rho_i^{-1} = \rho_j \rho_i$</span>.</p>
<h2>Regular polytopes</h2>
<p>For a regular convex polytope <span class="math-container">$P$</span>, there is a standard way to label the generators. Fix a base flag <span class="math-container">$\Phi$</span> (a maximal collection of mutually incident faces: a vertex, an edge, etc.) Since <span class="math-container">$P$</span> is regular, there are symmetries (i.e. isometries which take <span class="math-container">$P$</span> to itself) carrying each flag to every other flag; in particular, there is a symmetry taking <span class="math-container">$\Phi$</span> to the flag with all the same faces, except that it has the other vertex on the given edge. We call this flag <span class="math-container">$\Phi^0$</span>, the 0-adjacent flag to <span class="math-container">$\Phi$</span>, and the symmetry <span class="math-container">$\rho_0$</span>. We can see that <span class="math-container">$\rho_0(\Phi^0)$</span> must be <span class="math-container">$\Phi$</span> again, and so <span class="math-container">$\rho_0$</span> is an involution (it's not hard to show that a symmetry which fixes some flag is the identity.)</p>
<p>Similarly, there is a unique flag <span class="math-container">$\Phi^1$</span> which has all the same faces as <span class="math-container">$\Phi$</span> except that it has the other edge containing the given vertex and contained in the given 2-face, and a symmetry <span class="math-container">$\rho_1$</span> which carries <span class="math-container">$\Phi$</span> to <span class="math-container">$\Phi^1$</span>; and for every rank <span class="math-container">$j$</span> up to the dimension of <span class="math-container">$P$</span>, there is a symmetry <span class="math-container">$\rho_j$</span> carrying <span class="math-container">$\Phi$</span> to the unique <span class="math-container">$j$</span>-adjacent flag <span class="math-container">$\Phi^j$</span>.</p>
<p>It can be shown that these are involutions, and that they generate the whole symmetry group. Moreover, in this particular case, <span class="math-container">$\rho_i$</span> and <span class="math-container">$\rho_j$</span> always commute if <span class="math-container">$|i - j| \geq 2$</span>. (For instance, with <span class="math-container">$\rho_0$</span> and <span class="math-container">$\rho_2$</span>: if you switch from one vertex on an edge to the other, then switch from one 2-face at the edge to the other, then switch vertices back, then switch 2-faces back, you get back where you started.)</p>
<p>For this reason, the Coxeter diagram of the symmetry group of a regular polytope will be a string (like the example you gave for the cube). Conventionally the nodes are given left-to-right as <span class="math-container">$\rho_0, \rho_1, \dots$</span>. If the labels on the edges are <span class="math-container">$p, q, r, \dots$</span>, then the abstract Coxeter group associated with the diagram is often called <span class="math-container">$[p,q,r,\dots]$</span>.</p>
<p>The abstract Coxeter group is simply the group defined by the presentation inherent in the diagram, i.e. for your example <img src="https://i.sstatic.net/6lm07.png" alt="4,3">
<span class="math-container">$$ [4,3] =
\langle \rho_0, \rho_1, \rho_2 \mid \rho_0^2 = \rho_1^2 = \rho_2^2 = (\rho_0 \rho_2)^2 = (\rho_0 \rho_1)^4 = (\rho_1 \rho_2)^3 = 1 \rangle,
$$</span>
which is isomorphic to any concrete Coxeter group with the same diagram, formed by actual reflections in some space.</p>
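<p>One can check by brute force that this presentation really is realized by concrete reflections, and that the resulting group has 48 elements, the order of the cube's full symmetry group. A sketch using one standard choice of mirrors (my choice for illustration, not a set of planes fixed anywhere in this thread): the reflection negating <span class="math-container">$x$</span>, the reflection swapping <span class="math-container">$x$</span> and <span class="math-container">$y$</span>, and the reflection swapping <span class="math-container">$y$</span> and <span class="math-container">$z$</span>.</p>

```python
# Three reflections of R^3 realizing the diagram 4--3:
# R0 negates x, R1 swaps x and y, R2 swaps y and z.
R0 = ((-1, 0, 0), (0, 1, 0), (0, 0, 1))
R1 = ((0, 1, 0), (1, 0, 0), (0, 0, 1))
R2 = ((1, 0, 0), (0, 0, 1), (0, 1, 0))
I3 = ((1, 0, 0), (0, 1, 0), (0, 0, 1))

def mul(a, b):
    """3x3 integer matrix product."""
    return tuple(tuple(sum(a[i][k] * b[k][j] for k in range(3))
                       for j in range(3)) for i in range(3))

def order(m):
    """Multiplicative order of a finite-order matrix."""
    p, n = m, 1
    while p != I3:
        p, n = mul(p, m), n + 1
    return n

# The defining relations of [4,3]:
assert (order(mul(R0, R1)), order(mul(R1, R2)), order(mul(R0, R2))) == (4, 3, 2)

# Close the generators under multiplication (breadth-first search):
group, frontier = {I3}, [I3]
while frontier:
    m = frontier.pop()
    for g in (R0, R1, R2):
        n = mul(m, g)
        if n not in group:
            group.add(n)
            frontier.append(n)

print(len(group))  # 48
```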
<p>The regular polytope with this group has the so-called Schläfli symbol <span class="math-container">$\{p,q,r,\dots\}$</span>. The Schläfli symbol for the cube is {4,3}.
This means that following a vertex-swap by an edge-swap (i.e. <span class="math-container">$\rho_0 \rho_1$</span>) has order 4, and an edge-swap followed by a facet-swap (<span class="math-container">$\rho_1 \rho_2$</span>) has order 3. A more typical way to recognize this is to say "Each facet is a 4-gon and each vertex is incident to 3 edges."</p>
<p>Here's the diagram of the 4-cube.
<img src="https://i.sstatic.net/kd6Ya.png" alt="String of edges 4,3,3">
The corresponding Schläfli symbol is {4,3,3}:</p>
<ul>
<li><span class="math-container">$\rho_0 \rho_1$</span> has order 4; each 2-face is a square.</li>
<li><span class="math-container">$\rho_1 \rho_2$</span> has order 3; within a given facet, each vertex is in 3 edges.</li>
<li><span class="math-container">$\rho_2 \rho_3$</span> has order 3; each edge is in 3 facets.</li>
</ul>
<p>It is probably clear that for regular polytopes, you might as well just use Schläfli symbols. But there are many groups generated by reflections which are not the symmetry groups of regular polytopes. Every such group is described by a Coxeter diagram.</p>
<h2>Why?</h2>
<p>As far as why this notation is used: you just need some way to give the orders of all the products <span class="math-container">$\rho_i \rho_j$</span>. Listing all the orders in a group presentation is usually really long, ill-organized, and hard to read.
Another way is to put it in a matrix, and this is indeed frequently used.
If you have an <span class="math-container">$n \times n$</span> matrix <span class="math-container">$M = [m_{ij}]$</span>,
your group
is generated by <span class="math-container">$n$</span> involutions <span class="math-container">$\rho_0, \dotsc, \rho_{n-1}$</span> such that <span class="math-container">$(\rho_i \rho_j)^{m_{ij}} = 1$</span>
(and you want <span class="math-container">$m_{ij}$</span> to be minimal, of course, so that it is the order of <span class="math-container">$\rho_i \rho_j$</span>.)
<span class="math-container">$M$</span> must be symmetric, since the order of <span class="math-container">$\rho_i \rho_j$</span> and <span class="math-container">$\rho_j \rho_i$</span> are the same. And if the generators are involutions, the diagonal entries <span class="math-container">$m_{ii}$</span> must be 1.</p>
<p>Perhaps you are more familiar or comfortable with the idea of defining a group by such a matrix (called a Coxeter matrix.) In this case, it's worth emphasizing that <strong>a Coxeter diagram and a Coxeter matrix are entirely equivalent.</strong> Some people go so far as to identify the two.</p>
<p>One advantage of the diagram is that the matrix is redundant; you only need the entries above the diagonal (or below it). Also, diagrams make it clearer when things commute and highlight the "interesting" relationships (when the order is more than 3) so that they're not lost in the noise.
For instance, in the diagram for a <span class="math-container">$p$</span>-gonal prism:
<img src="https://i.sstatic.net/F8poi.png" alt="edge labeled p and a dot">
it is immediately clear that we have the symmetry group of a <span class="math-container">$p$</span>-gon, and another reflection orthogonal to both the generating reflections of the former.
This is perhaps not as immediate looking at the matrix
<span class="math-container">$\begin{bmatrix} 1 & p & 2\\ p & 1 & 2 \\ 2 & 2 & 1\end{bmatrix}$</span>.</p>
<h2>From diagrams to polytopes</h2>
<p>This is not my area of expertise, but it addresses the parts of your question about angles, and the mysterious extra circle in your diagram for the cube.</p>
<p>Given a Coxeter diagram with <span class="math-container">$n$</span> nodes, you can construct reflections in <span class="math-container">$n$</span>-dimensional space to realize the Coxeter group.
For convenience, we'll identify reflection isometries with their hyperplane of reflection, so if <span class="math-container">$\rho$</span> is a reflection, then <span class="math-container">$\rho$</span> also means the hyperplane fixed by <span class="math-container">$\rho$</span>.</p>
<p>To get the product of reflections to have order <span class="math-container">$p$</span>, you want their hyperplanes of reflection at an angle of <span class="math-container">$\pi/p$</span> to each other, since the composition of the two reflections is a rotation by twice the angle between them.</p>
<p>The composition would have the same order with an angle of <span class="math-container">$m\pi/p$</span>, where <span class="math-container">$m$</span> is relatively prime to <span class="math-container">$p$</span>. The group generated ends up being the same, so you might as well work with the hyperplanes at an angle of <span class="math-container">$\pi/p$</span>.</p>
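<p>This angle-to-order correspondence is easy to verify numerically in the plane: two mirror lines at angle <span class="math-container">$\pi/p$</span> compose to a rotation by <span class="math-container">$2\pi/p$</span>, which has order exactly <span class="math-container">$p$</span>. A sketch (function names are mine, for illustration only):</p>

```python
import math

def reflection(theta):
    """2x2 matrix of the reflection about the line at angle theta."""
    c, s = math.cos(2 * theta), math.sin(2 * theta)
    return ((c, s), (s, -c))

def mul(a, b):
    return tuple(tuple(sum(a[i][k] * b[k][j] for k in range(2))
                       for j in range(2)) for i in range(2))

def order(m, tol=1e-9, max_order=1000):
    """Smallest n with m^n = identity, up to numerical tolerance."""
    p = m
    for n in range(1, max_order + 1):
        if (abs(p[0][0] - 1) < tol and abs(p[0][1]) < tol
                and abs(p[1][0]) < tol and abs(p[1][1] - 1) < tol):
            return n
        p = mul(p, m)
    return None

# Mirrors at angle pi/p compose to a rotation by 2*pi/p, of order p:
for p_ in (2, 3, 4, 5, 7):
    assert order(mul(reflection(0.0), reflection(math.pi / p_))) == p_
print("ok")
```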
<p>So with <img src="https://i.sstatic.net/6lm07.png" alt="4,3">,
<span class="math-container">$\rho_0$</span> and <span class="math-container">$\rho_1$</span> should form an angle of <span class="math-container">$\pi/4$</span> (or 45°),
<span class="math-container">$\rho_0$</span> and <span class="math-container">$\rho_2$</span> should form an angle of <span class="math-container">$\pi/2$</span> (or 90°),
and <span class="math-container">$\rho_1$</span> and <span class="math-container">$\rho_2$</span> should form an angle of <span class="math-container">$\pi/3$</span> (or 60°).</p>
<p>I don't really know how to go about finding planes that have the specified relationship, but you can visualize how to do this one. Start with two planes which are orthogonal to each other: call them <span class="math-container">$\rho_0$</span> and <span class="math-container">$\rho_2$</span>.
Stick in a plane which forms an angle of 45° with <span class="math-container">$\rho_0$</span>; you can start with it also being orthogonal to <span class="math-container">$\rho_2$</span>, so we have the situation depicted in this picture.
On the left, we have transparent plane segments;
on the right, they are opaque. I am thinking of <span class="math-container">$\rho_2$</span> as the horizontal plane, <span class="math-container">$\rho_0$</span> as the vertical plane coming straight out of the page,
and the "new plane" (intended to be <span class="math-container">$\rho_1$</span>) as the plane going from corner to corner.</p>
<p><img src="https://i.sstatic.net/1oqiy.png" alt="planes"></p>
<p>Then rotate the new plane, keeping it at 45° with <span class="math-container">$\rho_0$</span>, until it forms an angle of 60° with <span class="math-container">$\rho_2$</span>. You end up with this:</p>
<p><img src="https://i.sstatic.net/t0UVO.png" alt="tilted planes"></p>
<p>Come to think of it, on a sphere whose center is the point of intersection of your three planes, the spherical triangle cut out by the planes will have angles 45°, 90°, and 60°. So I guess finding such spherical simplices is a general method to do this.
A more systematic way of finding some reflection planes for your group seems to be described in <a href="http://web.archive.org/web/20121120065612/http://www.win.tue.nl/%7Ejpanhuis/coxeter/notes/notes.pdf" rel="noreferrer">Arjeh Cohen's notes on Coxeter groups</a>, Section 2.2: The reflection representation.</p>
<p>Anyway, one way or another, you've found some reflections whose compositions have the prescribed order. Now to find a polytope which the group generated by these reflections act on, just pick any point in space, take all its images under the reflections, and voilà! The convex hull of these points is a polytope acted on by the group. This is known as <em>Wythoff's construction</em>, or a <em>kaleidoscope</em> (because the original point is replicated by all the reflection planes just as a colored dot is replicated in a kaleidoscope.)</p>
<p>Many choices of points yield combinatorially identical (or <em>isomorphic</em>) polytopes; for instance, taking any point which is not contained in any of the planes of reflection will result in isomorphic polytopes. More interesting things happen when the initial point is in some of the planes (but not all the planes; then its orbit under the group is just a point.)</p>
<p>As an extension to the Coxeter diagram, you circle all the nodes of the diagram corresponding to reflection planes that DO NOT contain the initial point. (This might seem kind of backwards. It probably is.)</p>
<p>So, in <img src="https://i.sstatic.net/6lm07.png" alt="4,3">, the initial point is contained in the reflection planes for <span class="math-container">$\rho_1$</span> and <span class="math-container">$\rho_2$</span> but not for <span class="math-container">$\rho_0$</span>. Here's one such initial point (the black one) on the reflection planes we constructed earlier:</p>
<p><img src="https://i.sstatic.net/aNfpy.png" alt="kaleidoscope"></p>
<p>The black point is the initial point; its reflection through the <span class="math-container">$\rho_0$</span> plane (the vertical plane) is red.
The green point is the reflection of the red one through <span class="math-container">$\rho_1$</span>.
The blue points are the reflections of the green one through <span class="math-container">$\rho_0$</span> and <span class="math-container">$\rho_2$</span>.
The hollow black point is the reflection of the blue ones (through either <span class="math-container">$\rho_0$</span> or <span class="math-container">$\rho_2$</span>.)
The hollow red point (not visible in the left picture) is the reflection of the hollow black point through the plane <span class="math-container">$\rho_1$</span>.
Its reflection through <span class="math-container">$\rho_0$</span> is the hollow green point.</p>
<p>On the right, we see that the convex hull of these points is indeed the cube.</p>
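<p>The kaleidoscope can also be run symbolically. The sketch below uses a convenient coordinate realization of the mirrors (the planes <span class="math-container">$x=0$</span>, <span class="math-container">$x=y$</span> and <span class="math-container">$y=z$</span> for <span class="math-container">$\rho_0, \rho_1, \rho_2$</span>; an equivalent but different set of planes from the pictures above). Starting from a point lying on the last two mirrors but off the first, the orbit is exactly the cube's 8 vertices:</p>

```python
def r0(p): x, y, z = p; return (-x, y, z)  # reflect in the plane x = 0
def r1(p): x, y, z = p; return (y, x, z)   # reflect in the plane x = y
def r2(p): x, y, z = p; return (x, z, y)   # reflect in the plane y = z

def orbit(start):
    """All images of `start` under the group generated by r0, r1, r2."""
    seen, frontier = {start}, [start]
    while frontier:
        p = frontier.pop()
        for refl in (r0, r1, r2):
            q = refl(p)
            if q not in seen:
                seen.add(q)
                frontier.append(q)
    return seen

# (1, 1, 1) lies on x = y and y = z but not on x = 0, matching the
# single circled node of the cube's diagram.
verts = orbit((1, 1, 1))
print(len(verts), sorted(verts))  # 8 points: the vertices (+/-1, +/-1, +/-1)

# A generic starting point (off all three mirrors) has orbit size 48,
# the order of the full symmetry group of the cube.
print(len(orbit((1, 2, 3))))  # 48
```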
| <p>Following Nick Matteo's answer, I find it straightforward to identify <span class="math-container">$\rho_0, \rho_1, \rho_2$</span> in a cube.</p>
<p><span class="math-container">$\rho_0$</span> is reflecting by plane OAD.</p>
<p><span class="math-container">$\rho_1$</span> is reflecting by plane OBC.</p>
<p><span class="math-container">$\rho_2$</span> is reflecting by plane OAB.</p>
<p><a href="https://i.sstatic.net/5KQ0Z.jpg" rel="nofollow noreferrer"><img src="https://i.sstatic.net/5KQ0Z.jpg" alt="enter image description here"></a></p>
<p><span class="math-container">$ (\rho_1 \rho_2)^3=1$</span> can be illustrated by the following six reflections:</p>
<p><a href="https://i.sstatic.net/WUf9J.jpg" rel="nofollow noreferrer"><img src="https://i.sstatic.net/WUf9J.jpg" alt="enter image description here"></a>
<span class="math-container">$ (\rho_0 \rho_1)^4=1$</span> can be illustrated by the following 8 operations:
<a href="https://i.sstatic.net/CjLAQ.jpg" rel="nofollow noreferrer"><img src="https://i.sstatic.net/CjLAQ.jpg" alt="enter image description here"></a> </p>
<p>Quote from Wikipedia: In a symmetry group, the group elements are the symmetry operations (not the symmetry elements), and the binary combination consists of applying first one symmetry operation and then the other.</p>
<p>The binary operation also satisfies:</p>
<p>(1) closure property: If <span class="math-container">$\rho_i$</span> and <span class="math-container">$\rho_j$</span> are group elements, then <span class="math-container">$\rho_i \rho_j$</span> must be a group element.</p>
<p>(2) associative property: <span class="math-container">$(\rho_0 \rho_1) \rho_2 = \rho_0 (\rho_1 \rho_2)$</span>.</p>
|
combinatorics | <p>Let us systematically generate all constructible points in the plane. We begin with just two points, which specify the unit distance. </p>
<p><a href="https://i.sstatic.net/UfCcSm.jpg" rel="noreferrer"><img src="https://i.sstatic.net/UfCcSm.jpg" alt="enter image description here"></a></p>
<p>With the straightedge, we may construct the line joining them. And with the compass, we may construct the two circles centered at each of them, having that unit segment as radius. These circles intersect each other and the line, creating four additional points of intersection. Thus, we have now six points in all.</p>
<p><a href="https://i.sstatic.net/14byzm.jpg" rel="noreferrer"><img src="https://i.sstatic.net/14byzm.jpg" alt="enter image description here"></a></p>
<p>Using these six points, we proceed to the next stage, constructing all possible lines and circles using those six points, and finding the resulting points of intersection. </p>
<p><a href="https://i.sstatic.net/LiLBk.jpg" rel="noreferrer"><img src="https://i.sstatic.net/LiLBk.jpg" alt="enter image description here"></a></p>
<p>I believe that we now have 203 points. Let us proceed in this way to systematically construct all constructible points in the plane, in a hierarchy of finite stages. At each stage, we form all possible lines and circles that may be formed from our current points using straightedge and compass, and then we find all points of intersection from the resulting figures. </p>
<p>This produces what I call the <em>constructibility sequence</em>:</p>
<p><span class="math-container">$$2\qquad\qquad 6\qquad\qquad 203\qquad\qquad ?$$</span></p>
<p>Each entry is the number of points constructed at that stage. I have a number of questions about the constructibility sequence:</p>
<p><strong>Question 1.</strong> What is the next constructibility number? </p>
<p>There is no entry in the On-Line Encyclopedia of Integer Sequences beginning 2, 6, 203, and so I would like to create an entry for the constructibility sequence. But they request at least four numbers, and so we seem to need to know the next number. I'm not sure exactly how to proceed with this, since if one proceeds computationally, then one will inevitably have to decide if two very-close points count as identical or not, and I don't see any principled way to ensure that this is done correctly. So it seems that one will need to proceed with some kind of idealized geometric calculus, which gets the right answer about coincidence of intersection points. [<strong>Update:</strong> The sequence now exists as <a href="https://oeis.org/A333944" rel="noreferrer">A333944</a>.]</p>
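<p>At the first stage the numbers are small enough that a naive floating-point computation with rounding-based deduplication is safe; the sketch below confirms <span class="math-container">$2 \to 6$</span>. (This is exactly the approach that becomes unreliable at deeper stages, where distinct points can lie closer together than any fixed tolerance.)</p>

```python
import math

def circle_circle(c1, r1, c2, r2, eps=1e-9):
    """Intersection points of two circles."""
    dx, dy = c2[0] - c1[0], c2[1] - c1[1]
    d = math.hypot(dx, dy)
    if d < eps or d > r1 + r2 + eps or d < abs(r1 - r2) - eps:
        return []
    a = (r1 * r1 - r2 * r2 + d * d) / (2 * d)
    h = math.sqrt(max(r1 * r1 - a * a, 0.0))
    mx, my = c1[0] + a * dx / d, c1[1] + a * dy / d
    pts = [(mx + h * dy / d, my - h * dx / d),
           (mx - h * dy / d, my + h * dx / d)]
    return pts if h > eps else pts[:1]

def line_circle(p, q, c, r, eps=1e-9):
    """Intersections of the line through p and q with a circle."""
    dx, dy = q[0] - p[0], q[1] - p[1]
    fx, fy = p[0] - c[0], p[1] - c[1]
    A = dx * dx + dy * dy
    B = 2 * (fx * dx + fy * dy)
    C = fx * fx + fy * fy - r * r
    disc = B * B - 4 * A * C
    if disc < -eps:
        return []
    disc = max(disc, 0.0)
    ts = [(-B + s * math.sqrt(disc)) / (2 * A) for s in (1.0, -1.0)]
    out = [(p[0] + t * dx, p[1] + t * dy) for t in ts]
    return out if disc > eps else out[:1]

def key(pt, digits=9):
    """Round coordinates so nearby floating-point points are identified."""
    return (round(pt[0], digits), round(pt[1], digits))

# Stage 1: the two starting points give one line and two unit circles.
p0, p1 = (0.0, 0.0), (1.0, 0.0)
pts = {key(p0), key(p1)}
for pt in circle_circle(p0, 1.0, p1, 1.0):
    pts.add(key(pt))
for center in (p0, p1):
    for pt in line_circle(p0, p1, center, 1.0):
        pts.add(key(pt))
print(len(pts))  # 6
```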
<p><strong>Question 2.</strong> What kind of asymptotic upper bounds can you prove on the growth of the constructibility sequence? </p>
<p>At each stage, every pair of points determines a line and two circles. And every intersection point is realized as the intersection of two lines, two circles, or a line and a circle, which have at most two intersection points in each case. So a rough upper bound is that from <span class="math-container">$k$</span> points, we produce no more than <span class="math-container">$3k^2$</span> many lines and circles, and so at most <span class="math-container">$(3k^2)^2$</span> many pairs of lines and circles, and so at most <span class="math-container">$2(3k^2)^2$</span> many points of intersection. This leads to an upper bound of growth something like <span class="math-container">$18^n2^{4^n}$</span> after <span class="math-container">$n$</span> stages. Can anyone give a better bound? </p>
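<p>For concreteness, the crude recurrence <span class="math-container">$k \mapsto 2(3k^2)^2$</span> can be iterated directly. It is a very loose bound: the actual counts 2, 6, 203 sit far below it.</p>

```python
def crude_bound(stages, k=2):
    """Iterate the rough bound k -> 2 * (3 * k**2)**2 starting from k points."""
    out = []
    for _ in range(stages):
        k = 2 * (3 * k * k) ** 2
        out.append(k)
    return out

print(crude_bound(2))  # [288, 123834728448]
```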
<p><strong>Question 3.</strong> And what of lower bounds? </p>
<p>I suspect that the sequence grows very quickly, probably doubly exponentially. But to prove this, we would seem to need to identify a realm of construction patterns where there is little interference of intersection coincidence, so that one can be sure of a certain known growth in new points.</p>
| <p>I have written some Haskell <a href="https://codeberg.org/teo/constructibility" rel="nofollow noreferrer">code</a> to compute the next number in the constructibility sequence. It's confirmed everything we have already established and gave me the following extra results:</p>
<p>There are <span class="math-container">$149714263$</span> line-line intersections at the 4th step (computed in ~14 hours). Pace Nielsen's approximation was only off by 8! This includes some points that are between a distance of <span class="math-container">$10^{-12}$</span> and <span class="math-container">$10^{-13}$</span> from each other.</p>
<p>I have found the fourth number in the constructibility sequence: <span class="math-container">$$1723816861$$</span>
I computed this by splitting the first quadrant into sections along the x-axis, computing values in these sections and combining them. The program was not parallel, but the work was split amongst 6 processes on 3 machines. It took approximately 6 days to complete and each process never used more than 5GB of RAM.</p>
<p>My data can be found <a href="https://docs.google.com/spreadsheets/d/1upFYzrD6A9ZSTuNPfZnNe-lMmsgAuG_FircY4fq15ms/edit?usp=sharing" rel="nofollow noreferrer">here</a>. I've produced these two graphs from my data, which give a sense of how the points are distributed:
<a href="https://i.sstatic.net/MBNUA.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/MBNUA.png" alt="enter image description here" /></a>
If we focus on the area from <span class="math-container">$0$</span> to <span class="math-container">$1$</span>, we get:
<a href="https://i.sstatic.net/VuU6O.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/VuU6O.png" alt="enter image description here" /></a></p>
<hr />
<h2>Implementation Details:</h2>
<p>I represent constructible reals as pairs of a 14 decimal digit approximation (using <a href="http://hackage.haskell.org/package/ireal" rel="nofollow noreferrer">ireal</a>) and a symbolic representation (using <a href="http://hackage.haskell.org/package/constructible-0.1.0.1" rel="nofollow noreferrer">constructible</a>). This is done to speed up comparisons: the approximations give us quick but partial comparison functions, while the symbolic representation gives us slower but total comparison functions.</p>
<p>Lines are represented by a pair <span class="math-container">$\langle m, c \rangle$</span> such that <span class="math-container">$y = mx + c$</span>. To deal with vertical lines, we create a data-type that's enhanced with an infinite value. Circles are triples <span class="math-container">$\langle a, b, r \rangle$</span> such that <span class="math-container">$(x-a)^2 + (y-b)^2 = r^2$</span>.</p>
<p>I use a sweep-line algorithm to compute the number of intersections in a given rectangle. It extends the <a href="https://en.wikipedia.org/wiki/Bentley%E2%80%93Ottmann_algorithm" rel="nofollow noreferrer">Bentley-Ottmann algorithm</a> to check for intersections between circles as well as lines. The idea behind the algorithm is that a vertical line sweeps from the left face of the rectangle to the right. We think of this line as a strictly ordered set of objects (lines and circles). This requires some care to get right: circles need to be split into their top and bottom semicircles, objects must be ordered not only by their y-coordinates but, when those are equal, also by their slopes, and we need to deal with circles that are tangent to each other at a point. The composition or order of this set can change in 3 ways as we move from left to right:</p>
<ol>
<li>Addition: We reach the leftmost point on an object and so we add it to our sorted set.</li>
<li>Deletion: We reach the rightmost point on our object and so we remove it from our sorted set.</li>
<li>Intersection: Several objects intersect. This is the only way the order of objects can change. We reorder them and note the intersection.</li>
</ol>
<p>We keep track of these events in a priority queue, and deal with them in the order they occur as we move from left to right.</p>
<p>The big advantage of this algorithm over the naive approach is that it doesn't require us to keep track of the intersection points. It also has the advantage that if we compute the number of intersections in two rectangles, it is very easy to combine them: we just add the two counts, making sure we aren't double counting the borders. This makes it very easy to distribute the computation. It also has very reasonable RAM demands: its RAM use should be <span class="math-container">$O(n)$</span>, where <span class="math-container">$n$</span> is the number of lines and circles, and the constant is quite small.</p>
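<p>For contrast, the naive approach that the sweep line improves on can be sketched in a few lines (Python rather than the Haskell used above; this is an illustration, not the actual program): intersect every pair of lines with exact rational arithmetic and deduplicate the resulting points.</p>

```python
from fractions import Fraction
from itertools import combinations

def intersect(l1, l2):
    """Intersection point of two lines y = m*x + c, given as (m, c).
    Returns None for parallel lines; vertical lines are omitted
    from this sketch for simplicity."""
    m1, c1 = l1
    m2, c2 = l2
    if m1 == m2:
        return None
    x = (c2 - c1) / (m1 - m2)
    return (x, m1 * x + c1)

# Four lines: y = 0, y = 1, y = x, y = 1 - x.
lines = [(Fraction(0), Fraction(0)), (Fraction(0), Fraction(1)),
         (Fraction(1), Fraction(0)), (Fraction(-1), Fraction(1))]

# Deduplicating via a set is exact here because coordinates are Fractions.
points = {p for a, b in combinations(lines, 2)
          if (p := intersect(a, b)) is not None}
```

<p>Here the six line pairs yield five distinct intersection points. The sweep line arrives at the same count without ever storing the points, which is exactly what keeps the memory use linear.</p>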
| <p>Mathematica has a command to solve systems of equations over the real numbers; or one can just solve them equationally. It also has a command to find the minimal polynomial of an algebraic number. Thus intersection points between lines and circles can be found using exact arithmetic (as numbered roots of minimal polynomials over <span class="math-container">$\mathbb{Q}$</span>), as can the slopes of lines and radii of circles. Using such methods, there are exactly 17,562 distinct lines and 32,719 distinct circles on the next stage.</p>
<p>Finding the minimal polynomial of an algebraic number this way is somewhat slow (there may be ways to speed that up), but these lines and circles can also be found in just a few minutes if we instead use (10 digit) floating point approximations.</p>
<p>I've now optimized the code a bit, and using those floating point approximations, in a little under 21 hours I compute that there are at least
<span class="math-container">$$149,714,255$$</span> distinct intersections between those 17,562 lines. This could be undercounting, because the floating point arithmetic might make us think that two distinct intersection points are the same. However, the computations shouldn't take much longer using 20 digit floating points (but they would take a lot more RAM). I expect that the numbers won't change much, if at all. But I did see changes going from 5 digit to 10 digit approximations, so trying the 20 digit computation would be useful.</p>
<p>Storing those 10 digits, for a little more than hundred million intersection points, was taking most of my RAM. It appears that if I try to do the same computation with the circle intersections, it will exceed my RAM limits. However, it is certainly doable, and I'm happy to give my code to anyone who has access to a computer with a lot of RAM (just email me; my computer has 24 GB, so you'd want quite a bit more than that). The code may still have some areas where it can be sped up--but taking less than 1 day to find all intersection points between lines is already quite nice.</p>
<p>Another option would be to store these points on the hard disk---but there are computers out there with enough RAM to make that an unnecessary change.</p>
<hr>
<p>Edited to add: I found a computer that is slightly slower than my own but had a lot of RAM. It took about 6 weeks, and about 360 GB of RAM, but the computation finished. It is still only an approximation (not exact arithmetic, only 10 digit precision past the decimal place). The number of crossings I get is
<span class="math-container">$$
1,723,814,005
$$</span>
If you have a real need to do exact arithmetic, I could probably do that, but it would take a bit longer. Otherwise I'll consider this good enough.</p>
|
matrices | <p>I am currently trying to self-study linear algebra. I've noticed that a lot of the definitions for terms (like eigenvectors, characteristic polynomials, determinants, and so on) require a <strong>square</strong> matrix instead of just any real-valued matrix. For example, <a href="http://mathworld.wolfram.com" rel="noreferrer">Wolfram</a> has this in its <a href="http://mathworld.wolfram.com/CharacteristicPolynomial.html" rel="noreferrer">definition</a> of the characteristic polynomial:</p>
<blockquote>
<p>The characteristic polynomial is the polynomial left-hand side of the characteristic equation $\det(A - I\lambda) = 0$, where $A$ is a square matrix.</p>
</blockquote>
<p>Why must the matrix be square? What happens if the matrix is not square? And why do square matrices come up so frequently in these definitions? Sorry if this is a really simple question, but I feel like I'm missing something fundamental.</p>
| <p>Remember that an $n$-by-$m$ matrix with real-number entries represents a linear map from $\mathbb{R}^m$ to $\mathbb{R}^n$ (or more generally, an $n$-by-$m$ matrix with entries from some field $k$ represents a linear map from $k^m$ to $k^n$). When $m=n$ - that is, when the matrix is square - we're talking about a map from a space to itself.</p>
<p>So really your question amounts to:</p>
<blockquote>
<p>Why are maps from a space to <em>itself</em> - as opposed to maps from a space to <em>something else</em> - particularly interesting?</p>
</blockquote>
<p>Well, the point is that when I'm looking at a map from a space to itself inputs to and outputs from that map are the same "type" of thing, <em>and so I can meaningfully compare them</em>. So, for example, if $f:\mathbb{R}^4\rightarrow\mathbb{R}^4$ it makes sense to ask when $f(v)$ is parallel to $v$, since $f(v)$ and $v$ lie in the same space; but asking when $g(v)$ is parallel to $v$ for $g:\mathbb{R}^4\rightarrow\mathbb{R}^3$ doesn't make any sense, since $g(v)$ and $v$ are just different types of objects. (This example, by the way, is just saying that <em>eigenvectors/values</em> make sense when the matrix is square, but not when it's not square.)</p>
<hr>
<p>As another example, let's consider the determinant. The geometric meaning of the determinant is that it measures how much a linear map "expands/shrinks" a unit of (signed) volume - e.g. the map $(x,y,z)\mapsto(-2x,2y,2z)$ takes a unit of volume to $-8$ units of volume, so has determinant $-8$. What's interesting is that this applies to <em>every</em> blob of volume: it doesn't matter whether we look at how the map distorts the usual 1-1-1 cube, or some other random cube.</p>
<p>But what if we try to go from $3$D to $2$D (so we're considering a $2$-by-$3$ matrix) or vice versa? Well, we can try to use the same idea: (proportionally) how much <em>area</em> does a given <em>volume</em> wind up producing? However, we now run into problems:</p>
<ul>
<li><p>If we go from $3$ to $2$, the "stretching factor" is no longer invariant. Consider the projection map $(x,y,z)\mapsto (x,y)$, and think about what happens when I stretch a bit of volume vertically ...</p></li>
<li><p>If we go from $2$ to $3$, we're never going to get any volume at all - the starting dimension is just too small! So regardless of what map we're looking at, our "stretching factor" seems to be $0$.</p></li>
</ul>
<p>The point is, in the non-square case the "determinant" as naively construed either is ill-defined or is $0$ for stupid reasons.</p>
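<p>A quick numerical illustration of this signed-volume reading, with a hand-rolled <span class="math-container">$3\times 3$</span> determinant (no particular library assumed):</p>

```python
def det3(m):
    """Determinant of a 3x3 matrix by cofactor expansion along the first row."""
    (a, b, c), (d, e, f), (g, h, i) = m
    return a * (e * i - f * h) - b * (d * i - f * g) + c * (d * h - e * g)

# The map (x, y, z) -> (-2x, 2y, 2z) from the text:
stretch = [[-2, 0, 0],
           [0,  2, 0],
           [0,  0, 2]]

# A shear slides volume around without changing it, so its determinant is 1:
shear = [[1, 1, 0],
         [0, 1, 0],
         [0, 0, 1]]
```

<p>As claimed, <code>det3(stretch)</code> is <span class="math-container">$-8$</span>: a unit of volume goes to <span class="math-container">$8$</span> units, with one orientation reversal.</p>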
| <p>Lots of good answers already as to why square matrices are so important. But just so you don't think that other matrices are not interesting: they have analogues of the inverse (e.g., the <a href="https://en.wikipedia.org/wiki/Moore%E2%80%93Penrose_inverse" rel="noreferrer">Moore-Penrose inverse</a>), and non-square matrices have a <a href="https://en.wikipedia.org/wiki/Singular-value_decomposition" rel="noreferrer">singular-value decomposition</a>, where the singular values play a role loosely analogous to the eigenvalues of a square matrix. These topics are often left out of linear algebra courses, but they can be important in numerical methods for statistics and machine learning. But learn the square matrix results before the fancy non-square matrix results, since the former provide a context for the latter.</p>
|
matrices | <blockquote>
<p>Let <span class="math-container">$K$</span> be nonsingular symmetric matrix, prove that if <span class="math-container">$K$</span> is positive definite so is <span class="math-container">$K^{-1}$</span> .</p>
</blockquote>
<hr />
<p>My attempt:</p>
<p>I have that <span class="math-container">$K = K^T$</span> so <span class="math-container">$x^TKx = x^TK^Tx = (xK)^Tx = (xIK)^Tx$</span> and then I don't know what to do next.</p>
| <p>If <span class="math-container">$K$</span> is positive definite then <span class="math-container">$K$</span> is invertible, so define
<span class="math-container">$y = K x$</span>. Then <span class="math-container">$y^T K^{-1} y = x^T K^{T} K^{-1} K x = x^T K^{T} x >0$</span>.</p>
<p>Since the transpose of a positive definite matrix is also positive definite, cf. <a href="https://www.quora.com/Is-the-transpose-of-a-positive-definite-matrix-positive-definite?share=1" rel="noreferrer">here</a>, this proves that <span class="math-container">$K^{-1}$</span> is positive definite.</p>
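<p>A small numerical sanity check of this argument (a <span class="math-container">$2\times 2$</span> example with a hand-inverted matrix; nothing here is specific to the matrix chosen):</p>

```python
def inv2(K):
    """Inverse of a 2x2 matrix via the adjugate formula."""
    (a, b), (c, d) = K
    det = a * d - b * c
    return [[d / det, -b / det],
            [-c / det, a / det]]

def quad(M, x):
    """The quadratic form x^T M x for a 2x2 matrix M."""
    (a, b), (c, d) = M
    x1, x2 = x
    return a * x1 * x1 + b * x1 * x2 + c * x2 * x1 + d * x2 * x2

K = [[2.0, 1.0],
     [1.0, 2.0]]          # symmetric, positive definite
Kinv = inv2(K)
samples = [(1.0, 0.0), (0.0, 1.0), (1.0, -1.0), (3.0, 2.0), (-5.0, 4.0)]
```

<p>The quadratic form is strictly positive for every nonzero sample, both for <code>K</code> and for <code>Kinv</code>, as the proof predicts.</p>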
| <p>Here's one way: $K$ is positive definite if and only if all of its eigenvalues are positive. What do you know about the eigenvalues of $K^{-1}$?</p>
|
probability | <p>A fair coin is tossed repeatedly until 5 consecutive heads occurs. </p>
<p>What is the expected number of coin tosses?</p>
| <p>Let $e$ be the expected number of tosses. It is clear that $e$ is finite.</p>
<p>Start tossing. If we get a tail immediately (probability $\frac{1}{2}$) then the expected number is $e+1$. If we get a head then a tail (probability $\frac{1}{4}$), then the expected number is $e+2$. Continue $\dots$. If we get $4$ heads then a tail, the expected number is $e+5$. Finally, if our first $5$ tosses are heads, then the expected number is $5$. Thus
$$e=\frac{1}{2}(e+1)+\frac{1}{4}(e+2)+\frac{1}{8}(e+3)+\frac{1}{16}(e+4)+\frac{1}{32}(e+5)+\frac{1}{32}(5).$$
Solve this linear equation for $e$. We get $e=62$. </p>
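<p>Solving that linear equation with exact rational arithmetic confirms the value (a quick check, not part of the argument):</p>

```python
from fractions import Fraction

# e = sum_{i=1}^{5} (1/2^i) * (e + i) + (1/32) * 5
coeff_of_e = sum(Fraction(1, 2 ** i) for i in range(1, 6))   # 31/32
constant = sum(Fraction(i, 2 ** i) for i in range(1, 6)) + Fraction(5, 32)

# Rearranged: e * (1 - 31/32) = constant, so
e = constant / (1 - coeff_of_e)
```

<p>This yields <span class="math-container">$e = 62$</span> exactly.</p>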
| <p>Let's calculate, for $n$ consecutive heads, the expected number of tosses needed.</p>
<p>Let's write $E_n$ for the expected number of tosses until $n$ consecutive heads.
After reaching $n-1$ consecutive heads (which takes $E_{n-1}$ tosses in expectation), if the next toss is a head we have $n$ consecutive heads,
while if it is a tail we have to repeat the whole procedure.</p>
<p>So for the two scenarios: </p>
<ol>
<li>$E_{n-1}+1$</li>
<li>$E_{n-1}+E_n+1$ (the $1$ is for the tail, after which we start over)</li>
</ol>
<p>So, $E_n=\frac12(E_{n-1} +1)+\frac12(E_{n-1}+ E_n+ 1)$,
so $E_n= 2E_{n-1}+2$.</p>
<p>We have the general recurrence relation. Define $f(n)=E_n+2$ with $f(0)=2$. So, </p>
<p>\begin{align}
f(n)&=2f(n-1) \\
\implies f(n)&=2^{n+1}
\end{align}</p>
<p>Therefore, $E_n = 2^{n+1}-2 = 2(2^n-1)$</p>
<p>For $n=5$, it will give us $2(2^5-1)=62$.</p>
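<p>The recurrence and the closed form can be checked against each other directly:</p>

```python
def expected_tosses(n):
    """E_n from the recurrence E_n = 2*E_{n-1} + 2, starting at E_0 = 0."""
    e = 0
    for _ in range(n):
        e = 2 * e + 2
    return e
```

<p>Iterating the recurrence reproduces <span class="math-container">$2(2^n-1)$</span> for every <span class="math-container">$n$</span>, and in particular gives <span class="math-container">$62$</span> at <span class="math-container">$n=5$</span>.</p>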
|
linear-algebra | <h1>The logarithm is <em>non-linear</em></h1>
<p>Almost without exception, I hear people say that the logarithm is a <strong>non-linear</strong> function. If asked to prove this, they often do something like this:</p>
<blockquote>
<p>We have
$$
\ln(x + y) \neq \ln(x) + \ln(y) \quad\text{and}\quad \ln(\lambda \cdot x) = \ln(\lambda) + \ln(x) \neq \lambda \cdot \ln(x),
$$
and therefore $\ln$ is not linear.</p>
</blockquote>
<p>And indeed, the literature is abundant with the claim that...</p>
<blockquote>
<p>... a function $f : V \to W$ is linear, if and only if
$$
f(x + y) = f(x) + f(y) \quad\text{and}\quad f(\lambda \cdot x) = \lambda \cdot f(x)
$$
for all $x,y$ and all scalars $\lambda$.</p>
</blockquote>
<p>Often, there is no hint that the symbols $+$ and $\cdot$ on the left belong to $V$, whereas the symbols $+$ and the $\cdot$ on the right belong to $W$.</p>
<h1>The logarithm is <em>linear</em></h1>
<p>My proof that the logarithm is a <strong>linear</strong> function goes like this:</p>
<blockquote>
<p>$$\ln(x \cdot y) = \ln(x) + \ln(y) \quad\text{and}\quad \ln(x^\lambda) = \lambda \cdot \ln(x).$$</p>
</blockquote>
<p>The rationale for this is that $\ln : \mathbb{R}_{>0} \to \mathbb{R}$, i.e., the logarithm is a function from the $\mathbb{R}$-vector space $\mathbb{R}_{>0}$ (the positive-real numbers), to the $\mathbb{R}$-vector space $\mathbb{R}$ (the real numbers). <em>Vector addition</em> in $\mathbb{R}_{>0}$ is, however, not usual addition, but multiplication. Likewise, <em>scalar multiplication</em> in $\mathbb{R}_{>0}$ is not usual multiplication, but exponentiation.</p>
<p>In fact, the linear-algebra definition of linearity is (e.g. <a href="https://books.google.de/books?id=s7bMBQAAQBAJ&pg=PA356&lpg=PA356&dq=%22linear+transformation%22+%22different+symbols%22&source=bl&ots=N86WrbmSi6&sig=EWQOsAK4O14JAXq7w2esa8NgIZQ&hl=de&sa=X&ved=0CDUQ6AEwA2oVChMI9rK-4ImwxwIVw1wUCh110goQ#v=onepage&q=%22linear%20transformation%22%20%22different%20symbols%22&f=false" rel="noreferrer">Ricardo, 2009</a>; <a href="https://books.google.de/books?id=Hr5bhIVWr4wC&pg=PA85&lpg=PA85&dq=%22linear+transformation%22+%22different+symbols%22&source=bl&ots=4Lqw912TSx&sig=wzoacG4iWmwqY1zCrN65Z-svKFc&hl=de&sa=X&ved=0CFIQ6AEwCGoVChMI9rK-4ImwxwIVw1wUCh110goQ#v=onepage&q=%22linear%20transformation%22%20%22different%20symbols%22&f=false" rel="noreferrer">Bowen and Wang, 1976</a>):</p>
<blockquote>
<p>A function $f : V \to W$ from a vector space $(V,\oplus,\odot)$ over a field $F$ to a vector space $(W,\boxplus,\boxdot)$ over $F$ is linear if and only if it satisfies
$$
f(x \oplus y) = f(x) \boxplus f(y) \quad\text{and}\quad f(\lambda \odot x) = \lambda \boxdot f(x)
$$
for all $x,y \in V$ and $\lambda \in F$.</p>
</blockquote>
<p>Another proof goes as follows:</p>
<blockquote>
<p>The logarithm is an isomorphism between the vector space of positive-real numbers to the vector space of real numbers. And as every isomorphism is a linear function, so is the logarithm.</p>
</blockquote>
<h1>Question</h1>
<p>We have two conflicting statements here:</p>
<ol>
<li>The logarithm is non-linear.</li>
<li>The logarithm is linear.</li>
</ol>
<p>Can both statements be correct simultaneously, depending on something I cannot imagine now? But wouldn't this also imply that two conflicting concepts of linearity exist?</p>
<p>Or is this a case of sloppy notation, e.g., abuse of the same symbol $+$ for vector addition or $\cdot$ for scalar multiplication even though two different vector spaces are involved?</p>
<h2>Update</h2>
<p>The solutions given to rescue the first statement haven't convinced me yet, because they are inconsistent:</p>
<ul>
<li>Using usual addition and multiplication on $\mathbb{R}_{>0}$ implies that $(\mathbb{R}_{>0},+,\cdot)$ is not a vector space anymore. But a precondition of the linearity proof is that the domain and the range of $f$ are vector spaces.</li>
<li>Allowing the domain of $\ln$ to be $\mathbb{R}$ with usual addition and multiplication instead of $\mathbb{R}_{>0}$ doesn't work, because then the image of $\ln$ is the set of complex numbers.</li>
<li>A mathematically consistent definition of "linearity" for subsets (but not subspaces) of a vector space was given in a comment by @Alex G. Let $S$ be an arbitrary subset of a real vector space $V$, and let $W$ be a real vector space. <strong>A function $f : S \to W$ is called "linear" if for all $x,y \in S$ such that $x+y \in S$, then $f(x+y) = f(x)+f(y)$, and for all $x \in S$, $k \in \mathbb{R}$ such that $kx \in S$, then $f(kx)=k⋅f(x)$.</strong> However, this definition is not what is meant by the concept of linearity coming from linear algebra. One would actually need to use another term for "linearity" here.</li>
</ul>
| <p>You are correct if we endow $\Bbb R_{> 0}$ with the strange vector space structure in which "addition" is given by the usual multiplication, and "scalar multiplication" is given by exponentiation. When people say that logarithms are not linear, they are usually thinking of giving $\Bbb R$ the usual vector space structure, and with this being understood, then logarithms really are not linear.</p>
<p>The takeaway here is that the statement "the logarithm is linear" depends on what vector space structure you have in mind. With your strange vector space structure, this is true. With the usual one, this is false.</p>
| <p>It's worth noting that with your argument, any bijection is linear!</p>
<p>We have a set $X$ and a vector space $Y$. We have a bijection
$$
f: X \to Y.
$$</p>
<p>We simply define the operations
$$
x+y = f^{-1}(f(x) + f(y)), \;\;\; \lambda x = f^{-1}(\lambda(f(x)).
$$
Now $f$ is linear.</p>
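<p>A concrete instance of this construction, taking the bijection $f(x) = x^3$ from $\mathbb{R}$ to $\mathbb{R}$ (so $f^{-1}$ is the real cube root; the names below are mine, not standard):</p>

```python
import math

def f(x):
    return x ** 3

def f_inv(y):
    # real cube root, handling negative inputs as well
    return math.copysign(abs(y) ** (1 / 3), y)

def oplus(x, y):
    """The transported 'addition' x (+) y := f^{-1}(f(x) + f(y))."""
    return f_inv(f(x) + f(y))

def odot(lam, x):
    """The transported 'scalar multiplication' lam (*) x := f^{-1}(lam * f(x))."""
    return f_inv(lam * f(x))
```

<p>By construction $f(x \oplus y) = f(x) + f(y)$ and $f(\lambda \odot x) = \lambda f(x)$ (up to floating-point error), so $f$ is linear for these operations, even though $x \mapsto x^3$ is thoroughly non-linear for the usual ones.</p>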
<p>The issue is that when we say $f: V \to W$ is linear, we generally already have linear structures on $V$ and $W$ that are not defined in terms of $f.$</p>
|
probability | <p><strong>Question</strong> (previously asked <a href="https://math.stackexchange.com/questions/214680/simple-bayes-theorem-question">here</a>)</p>
<blockquote>
<p>You know there are 3 boys and an unknown number of girls in a nursery at a hospital. Then a woman gives birth to a baby, but you do not know its gender, and it is placed in the nursery. Then a nurse comes in and picks up a baby, and it is a boy. Given that the nurse picks up a boy, what is the probability that the woman gave birth to a boy?</p>
</blockquote>
<p>Assume that - in this question's universe - the unconditional probability that any newly born baby is a boy or a girl is exactly half.</p>
<p><strong>Short solution</strong></p>
<p>Let number of girls be <span class="math-container">$k$</span>. Event A is the newborn is a boy, Event B is that nurse picks up a boy. So, we are asked <span class="math-container">$P(A|B)$</span>.</p>
<blockquote class="spoiler">
<p> <span class="math-container">$$P(A|B) = \frac{P(B|A)P(A)}{P(B)} = \frac{\frac 4{k+4}\frac 12}{\frac 4{k+4}\frac 12 + \frac 3{k+4}\frac 12} = \frac 47$$</span></p>
</blockquote>
<p><strong>My question</strong></p>
<p>Why is the probability constant? I would have expected the probability to change with respect to the number of girls. More specifically, I would have expected the probability to increase as the value of <span class="math-container">$k$</span> increases, and decrease if <span class="math-container">$k$</span> were smaller. Why so? Because we are already given the claim that we have selected a boy. If we had infinitely many girls, then the newborn would almost surely have to be a boy to support that observed claim. Because initially there are only three boys, the more help they could get in supporting the claim, the better.</p>
<p>Of course, this is not a very rigorous argument, but the point here is that in many such questions there is a natural expectation for the probability to vary with the variable. And it does in many, for example the <a href="https://en.wikipedia.org/wiki/Monty_Hall_problem#N_doors" rel="noreferrer">generalized Monty Hall problem</a>.</p>
<p>I do know that <em>technically</em> the <span class="math-container">$k$</span> does not matter because it gets cancelled out in the denominator, but <em>intuitively</em> that is not a very helpful explanation. Can anyone give an intuitive explanation for why the probability answer in this question is a constant?</p>
| <p>I imagine the argument may go like this...</p>
<p>Let's assume you have two identical wards A and B in the hospital, both having nurseries, in each nursery there are <span class="math-container">$3$</span> boys and <span class="math-container">$k$</span> girls. Then a woman in ward A gives birth to a boy and another woman in ward B gives birth to a girl. Now there are <span class="math-container">$4$</span> boys in ward A's nursery, but still <span class="math-container">$3$</span> boys in ward B's.</p>
<p>Imagine now you (not having the wards clearly labelled, as it often happens in hospitals) randomly (with probabilities <span class="math-container">$50\%$</span> each) enter one of the wards and see a nurse holding a boy from the nursery. <em>What is the probability you'd entered ward A?</em></p>
<p>This is the same problem as the original one, but has the obvious solution <span class="math-container">$4/7$</span>. Namely, each child (out of all <span class="math-container">$8+2k$</span> children) is picked with equal probability, so knowing that it was a boy, it could've been one of <span class="math-container">$7$</span> equally likely boys. However, <span class="math-container">$4$</span> of them are from ward A, so the odds that you'd strolled into ward A are <span class="math-container">$4/7$</span>.</p>
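<p>The independence from <span class="math-container">$k$</span> can also be verified by exact computation over the model described in the question (Fractions, so no floating-point issues):</p>

```python
from fractions import Fraction

def p_newborn_boy_given_boy_picked(k):
    """3 boys and k girls already in the nursery; the newborn is a boy or a
    girl with probability 1/2; the nurse then picks one of the k + 4
    children uniformly at random."""
    half = Fraction(1, 2)
    p_boy_picked_and_newborn_boy = half * Fraction(4, k + 4)
    p_boy_picked = half * Fraction(4, k + 4) + half * Fraction(3, k + 4)
    return p_boy_picked_and_newborn_boy / p_boy_picked
```

<p>The conditional probability is exactly <span class="math-container">$4/7$</span> for every <span class="math-container">$k$</span>, matching the two-ward argument.</p>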
| <p>Wow this one was a doozy. For brevity's sake, I will refer to the three other boys as "boy 1," "boy 2" and "boy 3," and to the child in question as just "the child."</p>
<p>There are seven different possible outcomes:</p>
<p>If the child is female: (1) Boy 1 is chosen, (2) Boy 2 is chosen, (3) Boy 3 is chosen.</p>
<p>If the child is male: (4) Boy 1 is chosen, (5) Boy 2 is chosen, (6) Boy 3 is chosen, (7) the child is chosen.</p>
<p>Essentially, each of these seven events has equal probability, which is pretty counter-intuitive. This is because 4/7 of the time the nurse will pick from the second category, since it contains four children instead of three. In fact, this is where the probability that the child is male comes from. Note that this has nothing to do with the .5 chance of any child being male, since the nurse is more likely to pick from the pool of males if there are more males.</p>
<p>It might be a bit easier to consider if you consider the case with 1 known male. You are twice as likely to pick a male if the child is male, which means that 2/3 times, you will pick from the second pool, which is synonymous to saying the child is male.</p>
<p>You could also think about it as if the child has half the "weight" of the other children, if that would help.</p>
<p>If you want some numbers to convince you: if any of the three other boys is chosen, which happens 6/7 of the time, this has no bearing on the gender of the child. However, 1/7 of the time, when the child is chosen, he is guaranteed to be male.</p>
<p>Then the calculation is <span class="math-container">$(\frac12) \frac67 + (1)\frac17 = \frac47 $</span></p>
<p>If you realize this extremely counter intuitive way of looking at it, this problem is pretty much immediate and requires no calculation. I apologize if this is a convoluted explanation.</p>
|
linear-algebra | <p>Let $A=\begin{bmatrix}a & b\\ c & d\end{bmatrix}$.</p>
<p>How could we show that $ad-bc$ is the area of a parallelogram with vertices $(0, 0),\ (a, b),\ (c, d),\ (a+c, b+d)$?</p>
<p>Are the areas of the following parallelograms the same? </p>
<p>$(1)$ parallelogram with vertices $(0, 0),\ (a, b),\ (c, d),\ (a+c, b+d)$.</p>
<p>$(2)$ parallelogram with vertices $(0, 0),\ (a, c),\ (b, d),\ (a+b, c+d)$.</p>
<p>$(3)$ parallelogram with vertices $(0, 0),\ (a, b),\ (c, d),\ (a+d, b+c)$.</p>
<p>$(4)$ parallelogram with vertices $(0, 0),\ (a, c),\ (b, d),\ (a+d, b+c)$.</p>
<p>Thank you very much.</p>
| <p>Spend a little time with this figure due to <a href="http://en.wikipedia.org/wiki/Solomon_W._Golomb" rel="noreferrer">Solomon W. Golomb</a> and enlightenment is not far off:</p>
<p><img src="https://i.sstatic.net/gCaz3.png" alt="enter image description here" /></p>
<p>(Appeared in <em>Mathematics Magazine</em>, March 1985.)</p>
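<p>Taking the usual labelling, where the parallelogram is spanned by the row vectors $(a,b)$ and $(c,d)$, the identity can also be checked against the shoelace formula for signed polygon area:</p>

```python
def shoelace(vertices):
    """Signed area of a polygon whose vertices are listed in order."""
    s = 0
    n = len(vertices)
    for i in range(n):
        x1, y1 = vertices[i]
        x2, y2 = vertices[(i + 1) % n]
        s += x1 * y2 - x2 * y1
    return s / 2

def parallelogram_area(a, b, c, d):
    # Vertices in order: origin, (a,b), (a,b)+(c,d), (c,d).
    return shoelace([(0, 0), (a, b), (a + c, b + d), (c, d)])
```

<p>The signed area comes out as $ad-bc$ for every choice of $a,b,c,d$, with the sign recording the orientation of the vertex list.</p>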
| <p><img src="https://i.sstatic.net/PFTa4.png" alt="enter image description here"></p>
<p>I know I'm extremely late with my answer, but there's a pretty straightforward geometrical approach to explaining it. I'm surprised no one has mentioned it. It does have a shortcoming though: it does not explain why the area flips sign, because there's no such thing as negative area in geometry, just like you can't have a negative amount of apples (unless you are an economics major).</p>
<p>It's basically:</p>
<pre><code> Parallelogram = Rectangle - Extra Stuff.
</code></pre>
<p>If you simplify $(c+a)(b+d)-2bc-cd-ab$ you will get $ad-bc$.</p>
<p>Also interesting to note that if you swap vectors places then you get a negative(opposite of what $ad-bc$ would produce) area, which is basically:</p>
<pre><code> -Parallelogram = Rectangle - (2*Rectangle - Extra Stuff)
</code></pre>
<p>Or more concretely:</p>
<p>$(c+a)(b+d) - [2*(c+a)(b+d) - (2bc+cd+ab)]$</p>
<p>Also it's $bc-ad$, when simplified.</p>
<p>The sad thing is that there's no good geometrical reason why the sign flips, you will have to turn to linear algebra to understand that. </p>
<p>Like others had noted, determinant is the scale factor of linear transformation, so a negative scale factor indicates a reflection.</p>
|
number-theory | <p>This question relates to a discussion on <a href="http://newcafe.org/motet/bin/motet.cgi?show%20-uwRlEv%20-h%20Science%207%20671">another message board</a>. Euclid's proof of the infinitude of primes is an indirect proof (a.k.a. proof by contradiction, reductio ad absurdum, modus tollens). My understanding is that Intuitionists reject such proofs because they rely on the Law of the Excluded Middle, which they don't accept. Does there exist a direct and constructive proof of the infinitude of primes?</p>
| <p>Due to a widely propagated historical error, it is commonly misbelieved that Euclid's proof was by contradiction. This is false. Euclid's proof was in fact presented in the obvious constructive fashion explained below. See Hardy and Woodgold's <a href="http://dx.doi.org/10.1007/s00283-009-9064-8" rel="nofollow noreferrer">Intelligencer article [1]</a> for a detailed analysis of the history (based in part on many sci.math discussions [2]).</p>
<p>The key idea is not that Euclid's sequence <span class="math-container">$\ f_1 = 2,\ \ \color{#0a0}{f_{n}} = \,\color{#a5f}{\bf 1}\, +\, f_1\cdot\cdot\cdot\cdot\, f_{n-1}$</span> is an infinite sequence of <em>primes</em> but, rather, that it's an infinite sequence of <em>coprimes</em>, i.e. <span class="math-container">$\,{\rm gcd}(f_k,f_n) = 1\,$</span> if <span class="math-container">$\,k<n,\,$</span> since then any common divisor of <span class="math-container">$\,\color{#c00}{f_k},\color{#0a0}{f_n}\,$</span> must also divide
<span class="math-container">$\, \color{#a5f}{\bf 1} = \color{#0a0}{f_n} - f_1\cdot\cdot\, \color{#c00}{f_k}\cdot\cdot\, f_{n-1}.$</span></p>
<p>Any infinite sequence of pairwise <em>coprime</em> <span class="math-container">$f_n > 1 \,$</span> yields an infinite sequence of distinct <em>primes</em> <span class="math-container">$\, p_n $</span> obtained by choosing <span class="math-container">$\,p_n$</span> to be any prime factor of <span class="math-container">$\,f_n,\,$</span> e.g. its least factor <span class="math-container">$> 1$</span>.</p>
<p>A variant that deserves to be much better known is the following folklore one-line proof that there are infinitely many prime integers</p>
<p><span class="math-container">$$n\:\! (n+1)\,\ \text{has a larger set of prime factors than does }\, n\qquad$$</span></p>
<p>because <span class="math-container">$\,n+1>1\,$</span> is coprime to <span class="math-container">$\,n\,$</span> so it has a prime factor which does not divide <span class="math-container">$\,n.\,$</span> Curiously, Ribenboim believes this proof to be of recent vintage, attributing it to Filip Saidak. But I recall seeing variants published long ago. Does anyone know its history?</p>
<p>For even further variety, here is a version of Euclid's proof reformulated into infinite <em>descent</em> form. If there are only finitely many primes, then given any prime <span class="math-container">$\,p\,$</span> there exists a smaller prime, namely the least factor <span class="math-container">$> 1\,$</span> of <span class="math-container">$\, 1 + $</span> product of all primes <span class="math-container">$\ge p\:.$</span></p>
<p>It deserves to be much better known that Euclid's constructive proof generalizes very widely - to any fewunit ring, i.e. any ring having fewer units than elements - <a href="https://artofproblemsolving.com/community/c7h217448p1209616" rel="nofollow noreferrer">see my proof here</a>. <span class="math-container">$ $</span> The key idea is that Euclid's construction of a new prime generalizes from elements to ideals, i.e. given some maximal ideals <span class="math-container">$\rm\, P_1,\ldots,P_k,\, $</span>
a simple pigeonhole argument employing CRT deduces that <span class="math-container">$\rm\, 1+P_1\:\cdots\:P_k\, $</span> contains a nonunit, which lies in some maximal ideal which, by construction,
is comaximal (so distinct) from the initial max ideals <span class="math-container">$\rm\,P_1,\ldots,P_k.$</span></p>
<p>[1] Michael Hardy; Catherine Woodgold. <a href="http://dx.doi.org/10.1007/s00283-009-9064-8" rel="nofollow noreferrer">Prime Simplicity.</a><br />
<em>The Mathematical Intelligencer,</em> Volume 31, Number 4, 44-52 (2009).</p>
<p>[2] Note: Although the article [1] makes no mention of such, it appears to have strong roots in frequent sci.math discussions - in which the first author briefly participated. A Google groups search in the usenet newsgroup sci.math for "Euclid plutonium" will turn up many long discussions on various misinterpretations of Euclid's proof.</p>
| <p>Your question is predicated on a common misconception. In fact Euclid's proof is thoroughly constructive: it gives an algorithm which, upon being given as input any finite set of prime numbers, outputs a prime number which is not in the set.</p>
<p><b>Added</b>: For a bit more on mathematical issues related to the above algorithm, see Problem 6 <a href="http://alpha.math.uga.edu/%7Epete/NT2009HW1.pdf" rel="nofollow noreferrer">here</a>. (This is one of the more interesting problems on the first problem set of an advanced undergraduate number theory course that I teach from time to time.)</p>
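As a concrete illustration of that algorithm, here is a short Python sketch (the function name is mine, not from the answer): given any finite list of primes, it returns the least factor greater than 1 of one plus their product, which is necessarily a prime not in the input list.

```python
def new_prime(primes):
    """Euclid's construction: given a finite list of primes,
    output a prime that is not in the list."""
    n = 1
    for p in primes:
        n *= p
    n += 1                      # 1 + product of the given primes
    # The least factor > 1 of n is prime, and it cannot be any of the
    # input primes, since each of those leaves remainder 1 dividing n.
    d = 2
    while d * d <= n:
        if n % d == 0:
            return d
        d += 1
    return n                    # n has no factor <= sqrt(n), so n is prime

print(new_prime([2, 3, 5, 7]))          # 2*3*5*7 + 1 = 211, itself prime
print(new_prime([2, 3, 5, 7, 11, 13]))  # 30031 = 59 * 509, so outputs 59
```

Note that the construction does not claim `1 + product` is itself prime (a common misreading of Euclid), only that its least nontrivial factor is a new prime, as the second call shows.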
|
probability | <p>This is a neat little problem that I was discussing today with my lab group out at lunch. It is not particularly difficult, but it has interesting implications nonetheless.</p>
<p>Imagine there are 100 people in line to board a plane that seats 100. The first person in line, Alice, realizes she has lost her boarding pass, so when she boards she decides to take a random seat instead. Every person who boards the plane after her will either take their "proper" seat or, if that seat is taken, a random seat instead.</p>
<p>Question: What is the probability that the last person that boards will end up in their proper seat?</p>
<p>Moreover, and this is the part I'm still pondering: can you think of a physical system that would follow these combinatorial statistics? Maybe a spin wave function in a crystal, etc.</p>
| <p>Here is a rephrasing which simplifies the intuition of this nice puzzle.</p>
<p>Suppose whenever someone finds their seat taken, they politely evict the squatter and take their seat. In this case, the first passenger (Alice, who lost her boarding pass) keeps getting evicted (and choosing a new random seat) until, by the time everyone else has boarded, she has been forced by a process of elimination into her correct seat.</p>
<p>This process is the same as the original process except for the identities of the people in the seats, so the probability of the last boarder finding their seat occupied is the same.</p>
<p>When the last boarder boards, Alice is either in her own seat or in the last boarder's seat, which have both looked exactly the same (i.e. empty) to her up to now, so there is no way poor Alice could be more likely to choose one than the other.</p>
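The eviction argument is easy to check numerically. Below is a short simulation of the polite-eviction variant (function and variable names are my own); it confirms that just before the last passenger boards, Alice is always holding either seat 1 or seat n, each about half the time.

```python
import random

def alice_seat_before_last(n=100):
    """Polite-eviction variant: each passenger p = 2..n-1 takes seat p,
    evicting Alice if she is squatting there; Alice then picks a new
    random free seat.  Return the seat Alice holds just before the
    last passenger boards."""
    free = list(range(1, n + 1))
    alice = free.pop(random.randrange(len(free)))   # Alice's first pick
    for p in range(2, n):
        if p == alice:                              # Alice evicted from seat p
            alice = free.pop(random.randrange(len(free)))
        else:
            free.remove(p)                          # seat p was still free
    assert alice in (1, n)   # Alice can only be left in seat 1 or seat n
    return alice

random.seed(1)
trials = 10000
in_own_seat = sum(alice_seat_before_last() == 1 for _ in range(trials))
print(in_own_seat / trials)   # ≈ 0.5: the two seats are symmetric
```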
| <p>This is a classic puzzle!</p>
<p>The answer is that the probability that the last person ends in up in their proper seat is exactly <span class="math-container">$\frac{1}{2}$</span>.</p>
<p>The reasoning goes as follows:</p>
<p>First observe that the fate of the last person is determined the moment either the first or the last seat is selected! This is because the last person will either get the first seat or the last seat. Any other seat will necessarily be taken by the time the last person gets to 'choose'.</p>
<p>Since, at each random choice, the first seat and the last seat are equally likely to be taken, the last person will get either the first seat or the last seat with equal probability: <span class="math-container">$\frac{1}{2}$</span>.</p>
<p>Sorry, no clue about a physical system.</p>
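The <span class="math-container">$\frac{1}{2}$</span> answer is also easy to verify by brute force. Here is a small Monte Carlo sketch of the original boarding process (names are mine); it additionally checks the key observation above, namely that only the first or the last seat can remain for the final passenger.

```python
import random

def last_passenger_gets_own_seat(n=100):
    """One run of the original boarding process; True iff the last
    passenger finds their proper seat (seat n) still free."""
    free = list(range(1, n + 1))
    free.pop(random.randrange(len(free)))          # Alice sits at random
    for p in range(2, n):                          # passengers 2..n-1
        if p in free:
            free.remove(p)                         # proper seat available
        else:
            free.pop(random.randrange(len(free)))  # pick a random seat
    (last,) = free
    assert last in (1, n)    # only the first or last seat can remain
    return last == n

random.seed(0)
trials = 20000
freq = sum(last_passenger_gets_own_seat() for _ in range(trials)) / trials
print(freq)   # ≈ 0.5
```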
|