How to prove that proj(proj(b onto a) onto a) = proj(b onto a)? It makes perfect sense conceptually, but I keep going in circles when I try to prove it mathematically. Any help would be appreciated.
If they are vectors in ${\mathbb R}^n$, you can do it analytically too. You have $$proj_{\bf a}({\bf b}) = ({\bf b} \cdot {{\bf a} \over ||{\bf a}||}) {{\bf a} \over ||{\bf a}||}$$ So if ${\bf c}$ denotes $proj_{\bf a}({\bf b})$ then $$proj_{\bf a}({\bf c}) = ({\bf c} \cdot {{\bf a} \over ||{\bf a}||}) {{\bf a} \over ||{\bf a}||}$$ $$= \bigg(({\bf b} \cdot {{\bf a} \over ||{\bf a}||}) {{\bf a} \over ||{\bf a}||}\cdot {{\bf a} \over ||{\bf a}||}\bigg) {{\bf a} \over ||{\bf a}||}$$ Factoring out constants this is $$\bigg(({\bf b} \cdot {{\bf a} \over ||{\bf a}||^3}) ({\bf a} \cdot {\bf a})\bigg) {{\bf a} \over ||{\bf a}||}$$ $$= ({\bf b} \cdot {{\bf a} \over ||{\bf a}||^3}) ||{\bf a}||^2 {{\bf a} \over ||{\bf a}||}$$ $$ = ({\bf b} \cdot {{\bf a} \over ||{\bf a}||}) {{\bf a} \over ||{\bf a}||}$$ $$= proj_{\bf a}({\bf b}) $$
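As a quick numerical sanity check of the idempotence (a minimal sketch; the vectors $a$ and $b$ are arbitrary, and the helper is written with the equivalent formula $({\bf b}\cdot{\bf a}/\|{\bf a}\|^2)\,{\bf a}$):

#include <array>
#include <iostream>

// proj_a(b) = (b . a / ||a||^2) a, equivalent to the unit-vector form above
std::array<double, 3> proj(const std::array<double, 3>& b,
                           const std::array<double, 3>& a) {
    double dot = 0.0, norm2 = 0.0;
    for (int i = 0; i < 3; ++i) { dot += b[i] * a[i]; norm2 += a[i] * a[i]; }
    std::array<double, 3> p;
    for (int i = 0; i < 3; ++i) p[i] = (dot / norm2) * a[i];
    return p;
}

int main() {
    std::array<double, 3> a{1.0, 2.0, -1.0}, b{3.0, -0.5, 4.0};
    auto once  = proj(b, a);
    auto twice = proj(once, a);
    for (int i = 0; i < 3; ++i)
        std::cout << once[i] << " vs " << twice[i] << '\n';  // identical up to rounding
}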
Counting Number of k-tuples Let $A = \{a_1, \dots, a_n\}$ be a collection of distinct elements and let $S$ denote the collection of all $k$-tuples $(a_{i_1}, \dots, a_{i_k})$ where $i_1, \dots, i_k$ is an increasing sequence of numbers from the set $\{1, \dots, n\}$. How can one prove rigorously, and from first principles, that the number of elements in $S$ is given by $n \choose k$?
We will show that the number of ways of selecting a subset of $k$ distinct objects from a pool of $n$ of them is given by the binomial coefficient $$ \binom{n}{k} = \frac{n!}{k!(n-k)!}. $$ I find this proof easiest to visualize. First imagine permuting all the $n$ objects in a sequence; this can be done in $n!$ ways. Given a permutation, we pick the first $k$ objects, and we are done. But wait! We overcounted...

* Since we are only interested in the subset of $k$ items, the ordering of the first $k$ items in the permutation does not matter. Remember that these can be arranged in $k!$ ways.
* Similarly, the remaining $n-k$ items that we chose to discard are also ordered in the original permutation. Again, these $n-k$ items can be arranged in $(n-k)!$ ways.

So to handle the overcounting, we simply divide our original answer by these two factors, resulting in the binomial coefficient. But honestly, I find this argument slightly dubious, at least the way I wrote it. Are we to take on faith that we have taken care of all overcounting? And, why exactly are we dividing by the product of $k!$ and $(n-k)!$ (and not some other function of these two numbers)? One can make the above argument a bit more rigorous in the following way. Denote by $S_n$, $S_k$ and $S_{n-k}$ the sets of permutations of $n$, $k$ and $n-k$ objects respectively. Also let $\mathcal C(n,k)$ be the $k$-subsets of a set of $n$ items. Then the above argument is essentially telling us that There is a bijection between $S_n$ and $\mathcal C(n,k) \times S_k \times S_{n-k}$. Here $\times$ represents the Cartesian product. The formal description of the bijection is similar to the above argument: specify the subset formed by the first $k$ items, specify the arrangement among the first $k$ items, specify the arrangement among the remaining $n-k$ items. (The details are clear, I hope. :)) Given this bijection, we can then write: $$ |S_n| = |\mathcal C(n,k)| \cdot |S_k| \cdot |S_{n-k}|, $$ which is exactly what we got before.
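The counting identity behind the bijection, $|S_n| = |\mathcal C(n,k)|\cdot|S_k|\cdot|S_{n-k}|$, i.e. $n! = \binom{n}{k}\,k!\,(n-k)!$, is easy to verify mechanically for small $n$ (a minimal check; the range $n \le 12$ just keeps the factorials inside 64 bits):

#include <cstdint>
#include <iostream>

std::uint64_t factorial(unsigned n) {
    std::uint64_t f = 1;
    for (unsigned i = 2; i <= n; ++i) f *= i;
    return f;
}

std::uint64_t choose(unsigned n, unsigned k) {
    std::uint64_t c = 1;
    for (unsigned i = 0; i < k; ++i) c = c * (n - i) / (i + 1);  // division is exact at each step
    return c;
}

int main() {
    // check n! = C(n,k) * k! * (n-k)! for all small n, k
    for (unsigned n = 1; n <= 12; ++n)
        for (unsigned k = 0; k <= n; ++k)
            if (factorial(n) != choose(n, k) * factorial(k) * factorial(n - k))
                std::cout << "mismatch at n=" << n << " k=" << k << '\n';
    std::cout << "done\n";
}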
ODE question: $y'+A(t) y =B(t)$, with $y(0)=0, B>0$ implies $y\ge 0$; another proof? I am trying to prove that the solution of the differential equation $$y'+A(t) y =B(t)$$ with initial condition $y(0)=0$ and the assumption that $B\ge 0$ is non-negative for all $t\ge 0$, i.e. $y(t)\ge 0$ for all $t\ge 0$. The one proof I know is constructive: First consider the homogeneous ODE $x'+\frac{1}{2} A(t) x=0$ and let $u'=\frac{B}{x^2}$ with $u(0)=0$. Then $y=ux^2$ will satisfy the original ODE for $y$. Clearly, by the construction of $y$, $y\ge 0$ for $t\ge 0$. But this proof is not natural in my opinion; there is no reason (at least I didn't see one) to construct such $x(t)$ and $u(t)$. So is there any other way to prove this fact?
A natural approach is to start from the special case where $A(t)=0$ for every $t\geqslant0$. Then the ODE reads $y'(t)=B(t)$ hence $y'(t)\geqslant0$ for every $t\geqslant0$ hence $y$ is nondecreasing. Since $y(0)\geqslant0$, this proves that $y(t)\geqslant0$ for every $t\geqslant0$. One can deduce the general case from the special one. This leads to consider $z(t)=C(t)y(t)$ and to hope that $z'(t)$ is a multiple of the LHS of the ODE. As you know, defining $C(t)=\exp\left(\int\limits_0^tA(s)\mathrm ds\right)$ fits the bill since $(C\cdot y)'=C\cdot (y'+A\cdot y)$ hence one is left with the ODE $z'=C\cdot B$. Since $C(t)>0$, $C(t)\cdot B(t)\geqslant0$ hence the special case for the RHS $C\cdot B$ yields the inequality $z(t)\geqslant z(0)$ and since $z(0)=y(0)\geqslant0$, one gets $y(t)\geqslant C(t)^{-1}y(0)\geqslant0$ and the proof is finished. Edit The same trick applies to more general functions $A$ and $B$. For example, replacing $A(t)$ by $A(t,y(t))$ and $B(t)$ by $B(t,y(t))$ and assuming that $B(s,x)\geqslant0$ for every $s\geqslant0$ and every $x\geqslant0$, the same reasoning yields $y(t)\geqslant C(t)^{-1}y(0)\geqslant0$ with $C(t)=\exp\left(\int\limits_0^tA(s,y(s))\mathrm ds\right)$. The only modification is that one must now assume that $t\geqslant0$ belongs to the maximal interval $[0,t^*)$ where $y$ may be defined. Solutions of such differential equations when $A$ does depend on $y(t)$ may explode in finite time hence $t^*$ may be finite but $y(t)\geqslant0$ on $0\leqslant t<t^*$.
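A numerical illustration of the conclusion (a rough sketch: the choices of $A$ and $B$ are arbitrary, with $B \ge 0$, and forward Euler stands in for the exact solution):

#include <algorithm>
#include <cmath>
#include <iostream>

int main() {
    // illustrative choices: any A, and any B >= 0, should do
    auto A = [](double t) { return std::sin(t); };
    auto B = [](double t) { return t * t * std::exp(-t); };

    double y = 0.0, t = 0.0, ymin = 0.0;
    const double dt = 1e-4;
    while (t < 20.0) {
        y += dt * (B(t) - A(t) * y);  // forward Euler for y' = B - A y, y(0) = 0
        t += dt;
        ymin = std::min(ymin, y);
    }
    std::cout << "min of y on [0,20]: " << ymin << '\n';  // stays >= 0 up to step error
}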
Functions with subscripts? In the equation: $f_\theta(x)=\theta_1x$ Is there a reason that $\theta$ might be a subscript of $f$ and not either a second parameter or left out of the left side of the equation altogether? Does it differ from the following? $f(x,\theta)=\theta_1x$ (I've been following the Machine Learning class and the instructor uses this notation that I've not seen before)
As you note, this is mostly notational choice. I might call the $\theta$ a parameter, rather than an independent variable. That is to say, you are meant to think of $\theta$ as being fixed, and $x$ as varying. As an example (though I am not sure of the context you saw this notation), maybe you are interested in describing the collection of functions $$f(x) = x^2+c,$$ where $c$ is a real number. I might call this function $f_c(x)$, so that I can later say that for $c\leq 0$ the function $f_c$ has real roots, while for $c>0$ it has none. I think these statements would be much more opaque if I made them about the function $f(x,c)$.
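In programming terms, the subscript is a captured parameter: fix $c$ once, and what remains is a function of $x$ alone (a small illustrative sketch; the names are made up):

#include <iostream>

int main() {
    // f_c(x) = x*x + c: fix the parameter c once, then use the result as a function of x alone
    auto make_f = [](double c) {
        return [c](double x) { return x * x + c; };
    };

    auto f_neg = make_f(-4.0);  // "f_{-4}", real roots at x = +/- 2
    auto f_pos = make_f(1.0);   // "f_{1}", no real roots

    std::cout << f_neg(2.0) << ' ' << f_pos(0.0) << '\n';  // prints 0 1
}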
Mathematical reason for the validity of the equation: $S = 1 + x^2 \, S$ Given the geometric series: $1 + x^2 + x^4 + x^6 + x^8 + \cdots$ We can recast it as: $S = 1 + x^2 \, (1 + x^2 + x^4 + x^6 + x^8 + \cdots)$, where $S = 1 + x^2 + x^4 + x^6 + x^8 + \cdots$. This recasting is possible only because there is an infinite number of terms in $S$. Exactly how is this mathematically possible? (Related, but not identical, question: General question on relation between infinite series and complex numbers).
The $n$th partial sum of your series is $$ \begin{align*} S_n &= 1+x^2+x^4+\cdots +x^{2n}= 1+x^2(1+x^2+x^4+\cdots +x^{2n-2})\\ &= 1+x^2S_{n-1} \end{align*} $$ Assuming your series converges you get that $$ \lim_{n\to\infty}S_n=\lim_{n\to\infty}S_{n-1}=S. $$ Thus $S=1+x^2S$.
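For a concrete check (a minimal sketch with an arbitrary $|x|<1$), the partial sums, the closed form $1/(1-x^2)$, and $1+x^2S$ all agree numerically:

#include <iostream>

int main() {
    const double x = 0.5;          // any |x| < 1 so the series converges
    double S_n = 1.0, term = 1.0;  // S_0 = 1
    for (int n = 1; n <= 60; ++n) {
        term *= x * x;
        S_n += term;               // S_n = 1 + x^2 + ... + x^(2n)
    }
    const double S = 1.0 / (1.0 - x * x);  // closed form of the limit
    std::cout << S_n << ' ' << S << ' ' << 1.0 + x * x * S << '\n';  // all three agree
}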
Not homotopy equivalent to a 3-manifold w/ boundary Let $X_g$ be the wedge of $g$ copies of the circle $S^1$ where $g>1$. Prove that $X_g \times X_g$ is not homotopy equivalent to a 3-manifold with boundary.
If it is homotopy equivalent to a $3$-manifold $M$, look at the homology long exact sequence for the pair $(M,\partial M)$ with $\mathbb Z_2$-coefficients. By Poincare duality, $H_i(M)\cong H_{3-i}(M,\partial M)$. You also know the homology groups of $M$, since you know those of $X$. If $\partial M$ has $c$ components, the fact that the long exact sequence has trivial Euler characteristic allows you to compute that the rank of $H_1(\partial M;\mathbb Z_2)$ is equal to $2c+4g-2g^2-2<2c$. On the other hand, as you noted, $\pi_2$ must be trivial. Yet any boundary component which is a sphere or $\mathbb{RP}^2$ will contribute some $\pi_2$, since it will represent a map of $S^2$ that doesn't bound a map of a $3$-ball to one side. Thus each boundary component has at least two homology generators, which is a contradiction.
Bijections from $A$ to $B$, where $A$ is the set of subsets of $[n]$ that have even size and $B$ is the set of subsets of $[n]$ that have odd size Let $A$ be the set of subsets of $[n]$ that have even size, and let $B$ be the set of subsets of $[n]$ that have odd size. Establish a bijection from $A$ to $B$. The following bijection is suggested for $n=3$: $$\matrix{A: & \{1,2\} & \{1,3\} & \{2,3\} & \varnothing\\ B: & \{1,2,3\} & \{1\} & \{2\} & \{3\}}$$ I know that first we have to establish a function that is both surjective and injective so that it is bijective. I don't know where to take a step from here in the right direction, so I need a bit of guidance. One suggestion is to let $f$ be the piecewise function: $$f(x) = x \setminus \{n\}\text{ if }n \in x\text{ and }f(x) = x\cup\{n\}\text{ if }n \notin x$$
For $n$ an odd positive integer, there is a natural procedure. The mapping that takes any subset $E$ of $[n]$ with an even number of elements to its complement $[n]\setminus E$ is a bijection from $A$ to $B$. Dealing with even positive $n$ is messier. Here is one way. The subsets of $[n]$ can be divided into two types: (i) subsets that do not contain $n$ and (ii) subsets that contain $n$. (i) If $E$ is a subset of $[n]$ with an even number of elements, and does not contain $n$, map $E$ to $[n-1]\setminus E$. Call this map $\phi$. (ii) If $E$ is a subset of $[n]$ with an even number of elements, and contains $n$, map $E$ to $\{n\} \cup \phi^{-1}(E\setminus\{n\})$.
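The piecewise map quoted at the end of the question, $f(x) = x \setminus \{n\}$ if $n \in x$ and $f(x) = x \cup \{n\}$ otherwise, works uniformly for every $n$, and is easy to check by brute force with subsets encoded as bitmasks (a minimal sketch; the choice $n=4$ is arbitrary):

#include <bitset>
#include <iostream>

int main() {
    const int n = 4;                    // subsets of [n] as bitmasks; bit n-1 stands for the element n
    const unsigned top = 1u << (n - 1);
    for (unsigned s = 0; s < (1u << n); ++s) {
        unsigned image = s ^ top;       // toggle n: removes it if present, adds it if absent
        bool s_even = std::bitset<32>(s).count() % 2 == 0;
        bool image_odd = std::bitset<32>(image).count() % 2 == 1;
        if (s_even != image_odd || (image ^ top) != s)
            std::cout << "failed at s = " << s << '\n';
    }
    std::cout << "f swaps parity and is its own inverse, hence a bijection\n";
}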
Question regarding upper bound of fixed-point function The problem is to estimate the value of $\sqrt[3]{25}$ using fixed-point iteration. Since $\sqrt[3]{25} = 2.924017738$, I start with $p_0 = 2.5$. A sloppy C++ program yields an approximation to within $10^{-4}$ in $14$ iterations.

#include <cmath>
#include <iostream>
using namespace std;

double fx( double x ) {
    return 5.0 / sqrt( x );
}

void fixed_point_algorithm( double p0, double accuracy ) {
    double p1;
    int n = 0;
    do {
        n++;
        p1 = fx( p0 );
        cout << n << ": " << p1 << endl;
        if( abs( p1 - p0 ) <= accuracy ) {
            break;
        }
        p0 = p1;
    } while( true );
    cout << "n = " << n << ", p_n = " << p1 << endl;
}

int main() {
    fixed_point_algorithm( 2.5, 0.0001 );
}

Then I tried to solve it mathematically using these two fixed-point theorems: Fixed-point Theorem Let $g \in C[a,b]$ be such that $g(x) \in [a,b]$, for all $x$ in $[a,b]$. Suppose, in addition, that $g'$ exists on $(a,b)$ and that a constant $0 < k < 1$ exists with $$|g'(x)| \leq k, \text{ for all } x \in (a, b)$$ Then, for any number $p_0$ in $[a,b]$, the sequence defined by $$p_n = g(p_{n-1}), n \geq 1$$ converges to the unique fixed-point in $[a,b]$ Corollary If $g$ satisfies the hypotheses of Theorem 2.4, then bounds for the error involved in using $p_n$ to approximate $p$ are given by $$|p_n - p| \leq k^n \max\{p_0 - a, b - p_0\}$$ and $$|p_n - p| \leq \dfrac{k^n}{1-k}|p_1 - p_0|, \text{ for all } n \geq 1$$ I picked the interval $[2.5, 3.0]$, $$g(x) = \dfrac{5}{\sqrt{x}}$$ $$g'(x) = \dfrac{-5}{2 \cdot x^{3/2}}$$ Plugging in several values in $(2.5, 3.0)$ convinces me that $x = 2.5$ yields the largest value of $k$: $$\implies \lim_{x\to 2.5} \bigg|\dfrac{-5}{2\cdot x^{3/2}} \bigg| = \dfrac{\sqrt{10}}{5}$$ So I chose $k = \dfrac{\sqrt{10}}{5}$, where $p_1 = g(p_0) = \sqrt{10}$. Then I solved for $n$ in the inequality $$ 10^{-4} \leq |p_n - p| \leq \dfrac{k^n}{1-k}|p_1 - p_0|$$ $$\dfrac{\bigg(\dfrac{\sqrt{10}}{5}\bigg)^n}{1-\dfrac{\sqrt{10}}{5}}|\sqrt{10} - 2.5| \geq 10^{-4}$$ And I got $n \approx 18$, which is odd :(. From my understanding fixed-point iteration converges quite fast, so a gap of $4$ iterations is significant. Then I tried to vary the interval to see if the result could come closer to $14$, but I couldn't find any interval that worked. So I guess either my upper bound must be wrong or I didn't fully understand the theorem. Can anyone give me a hint? Thank you,
If I understand this right, $p_n$ converges to a fixed point of $g$. Taking $g(x)=\sqrt5/x$ as you have done, the fixed point of $g$ is not the $\root3\of{25}$ that you are after, but rather it is $\root4\of5$. So it's no wonder everything is going haywire.
Proving an inequality between functions: are the bounds sufficient if both strictly increase and are concave? I would like to show that $$f(n) > g(n)$$ for all $n$ within a certain range. If I can show that both $f(n)$ and $g(n)$ are strictly increasing with $n$, and that both are strictly concave, and that $f(n) > g(n)$ at both the lower and upper bounds of $n$, is that sufficient?
No. Consider, for example, $f(x)=1+12x-x^2$ and $g(x)=20x-10x^2$ between $0$ and $1$. Plotted by Wolfram Alpha.
Finding inverse cosh I am trying to find $\cosh^{-1}1$ I end up with something that looks like $e^y+e^{-y}=2x$. I followed the formula correctly so I believe that is correct up to this point. I then plug in $1$ for $x$ and I get $e^y+e^{-y}=2$ which, according to my mathematical knowledge, is still correct. From here I have absolutely no idea what to do as anything I do gives me an incredibly complicated problem or the wrong answer.
start with $$\cosh(y)=x$$ since $$\cosh^2(y)-\sinh^2(y)=1$$ or $$x^2-\sinh^2(y)=1$$ then $$\sinh(y)=\sqrt{x^2-1}$$ now add $\cosh(y)=x$ to both sides to get $$\sinh(y)+\cosh(y) = \sqrt{x^2-1} + x $$ The left-hand side simplifies to $\exp(y)$, so the answer is $$y=\ln\left(\sqrt{x^2-1}+x\right)$$
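A quick numerical check of the final formula against the library implementation (a minimal sketch; recall $\cosh^{-1}$ is defined for $x \ge 1$, and in particular $\cosh^{-1} 1 = \ln(0+1) = 0$):

#include <cmath>
#include <iostream>

int main() {
    for (double x : {1.0, 1.5, 2.0, 10.0}) {  // arccosh is defined for x >= 1
        double y = std::log(std::sqrt(x * x - 1.0) + x);
        std::cout << x << ": cosh(y) = " << std::cosh(y)
                  << ", std::acosh(x) = " << std::acosh(x) << '\n';
    }
}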
Span of permutation matrices The set $P$ of $n \times n$ permutation matrices spans a subspace of dimension $(n-1)^2+1$ within, say, the $n \times n$ complex matrices. Is there another description of this space? In particular, I am interested in a description of a subset of the permutation matrices which will form a basis. For $n=1$ and $2$, this is completely trivial -- the set of all permutation matrices is linearly independent. For $n=3$, the dimension of their span is $5$, and any five of the six permutation matrices are linearly independent, as can be seen from the following dependence relation: $$ \sum_{M \in P} \det (M) \ M = 0 $$ So even in the case $n=4$, is there a natural description of a $10$ matrix basis?
As user1551 points out, your space is the span of all "magic matrices" -- all $n\times n$ matrices for which every row and column sum is equal to the same constant (depending on the matrix). As an algebra this is isomorphic to $\mathbb{C} \oplus M_{n-1}(\mathbb{C})$. You can think of this as the image in $\operatorname{End}_{\mathbb{C}}(\mathbb{C}^n)$ of the natural representation of $S_n$ on $n$ points -- perhaps this is where your question comes from. The representation decomposes as the direct sum of the trivial rep and an $(n-1)$-dimensional irreducible. The set of permutation matrices coming from the permutations $1$, $(1,r)$, $(1,r,s)$ for $1\neq r \neq s \neq 1$ forms a basis of this space. To see that they are linearly independent, consider the first rows then the first columns of the corresponding matrices.
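One can confirm the dimension claim numerically (a minimal sketch for $n=4$: flatten each permutation matrix to a vector of length $16$ and compute the rank of the family by Gaussian elimination; the expected value is $(4-1)^2+1 = 10$):

#include <algorithm>
#include <array>
#include <cmath>
#include <iostream>
#include <vector>

// rank via floating-point Gaussian elimination (entries are 0/1, so this is safe)
int rank(std::vector<std::array<double, 16>> rows) {
    int r = 0;
    for (int c = 0; c < 16 && r < (int)rows.size(); ++c) {
        int p = -1;
        for (int i = r; i < (int)rows.size(); ++i)
            if (std::fabs(rows[i][c]) > 1e-9) { p = i; break; }
        if (p < 0) continue;
        std::swap(rows[r], rows[p]);
        for (int i = 0; i < (int)rows.size(); ++i)
            if (i != r && std::fabs(rows[i][c]) > 1e-9) {
                double f = rows[i][c] / rows[r][c];
                for (int j = 0; j < 16; ++j) rows[i][j] -= f * rows[r][j];
            }
        ++r;
    }
    return r;
}

int main() {
    const int n = 4;
    std::array<int, 4> perm{0, 1, 2, 3};
    std::vector<std::array<double, 16>> all;
    do {
        std::array<double, 16> m{};
        for (int i = 0; i < n; ++i) m[i * n + perm[i]] = 1.0;  // flatten the permutation matrix
        all.push_back(m);
    } while (std::next_permutation(perm.begin(), perm.end()));
    std::cout << "dim span of all 24 permutation matrices: " << rank(all) << '\n';  // 10
}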
Connected components of subspaces vs. space If $Y$ is a subspace of $X$, and $C$ is a connected component of $Y$, then C need not be a connected component of $X$ (take for instance two disjoint open discs in $\mathbb{R}^2$). But I read that, under the same hypothesis, $C$ need not even be connected in $X$. Could you please provide me with an example, or point me towards one? Thank you. SOURCE http://www.filedropper.com/manifolds2 Page 129, paragraph following formula (A.7.16).
Isn't it just false? The image of a connected subspace under the injection $Y\longrightarrow X$ is connected...
Countable subadditivity of the Lebesgue measure Let $\lbrace F_n \rbrace$ be a sequence of sets in a $\sigma$-algebra $\mathcal{A}$. I want to show that $$m\left(\bigcup F_n\right)\leq \sum m\left(F_n\right)$$ where $m$ is a countably additive measure defined for all sets in a $\sigma$-algebra $\mathcal{A}$. I think I have to use the monotonicity property somewhere in the proof, but I don't know how to start it. I'd appreciate a little help. Thanks. Added: From Hans' answer I make the following additions. From the construction given in Hans' answer, it is clear that $\bigcup F_n = \bigcup G_n$ and $G_n \cap G_m = \emptyset$ for all $m\neq n$. So $$m\left(\bigcup F_n\right)=m\left(\bigcup G_n\right) = \sum m\left(G_n\right).$$ Also from the construction, we have $G_n \subset F_n$ for all $n$ and so by monotonicity, we have $m\left(G_n\right) \leq m\left(F_n\right)$. Finally we would have $$\sum m(G_n) \leq \sum m(F_n)$$ and the result follows.
Given a union of sets $\bigcup_{n = 1}^\infty F_n$, you can create a disjoint union of sets as follows. Set $G_1 = F_1$, $G_2 = F_2 \setminus F_1$, $G_3 = F_3 \setminus (F_1 \cup F_2)$, and so on. Can you see what $G_n$ needs to be? Using $m(\bigcup_{n = 1}^\infty G_n)$ and monotonicity, you can prove $m(\bigcup_{n = 1}^\infty F_n) \leq \sum_{n = 1}^\infty m(F_n)$.
A ring element with a left inverse but no right inverse? Can I have a hint on how to construct a ring $A$ such that there are $a, b \in A$ for which $ab = 1$ but $ba \neq 1$, please? It seems that square matrices over a field are out of the question because of determinants, which implies that no faithful finite-dimensional representation can exist, and my imagination seems to have given up on me :)
Take the ring of linear operators on the space of polynomials. Then consider (formal) integration and differentiation. Integration is injective but not surjective. Differentiation is surjective but not injective.
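A concrete rendering of this example, with polynomials as coefficient vectors (a minimal sketch): taking $a$ = differentiation and $b$ = formal integration with constant term $0$, the composition $ab$ is the identity while $ba$ kills the constant term.

#include <iostream>
#include <vector>

using Poly = std::vector<double>;  // p[i] is the coefficient of x^i

Poly differentiate(const Poly& p) {
    Poly q;
    for (std::size_t i = 1; i < p.size(); ++i) q.push_back(i * p[i]);
    return q;
}

Poly integrate(const Poly& p) {    // formal antiderivative with constant term 0
    Poly q{0.0};
    for (std::size_t i = 0; i < p.size(); ++i) q.push_back(p[i] / (i + 1));
    return q;
}

int main() {
    Poly p{5.0, 0.0, 3.0};                  // 5 + 3x^2
    Poly ab = differentiate(integrate(p));  // D(I(p)) == p: "ab = 1"
    Poly ba = integrate(differentiate(p));  // I(D(p)) == 3x^2, the constant is lost: "ba != 1"
    for (double c : ab) std::cout << c << ' ';
    std::cout << '\n';
    for (double c : ba) std::cout << c << ' ';
    std::cout << '\n';
}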
normal subgroups of an infinite product of groups I have a question regarding the quotient of an infinite product of groups. Suppose $(G_{i})_{i \in I}$ are abelian groups with $|I|$ infinite and each $G_i$ has a normal subgroup $N_i$. Is it true in general that $$\prod_{i \in I} G_i/ \prod_{i \in I} N_i \cong \prod_{i\in I} G_i/N_i?$$ More specifically, is it true that $$\prod_{p_i \text{ prime}} \mathbb{Z}_{p_i} / \prod_{p_i \text{ prime}} p_i^{e_i}\mathbb{Z}_{p_{i}} \cong \prod_{p_i \text{ prime},\, e_{i} < \infty} \mathbb{Z}/p_{i}^{e_i}\mathbb{Z} \times \prod_{p_i \text{ prime},\, e_{i} = \infty}\mathbb{Z}_{p_i},$$ where $\mathbb{Z}_{p_i}$ stands for the $p_i$-adic integers, $p_i^\infty \mathbb{Z}_{p_i}=0$, and all $e_i$ belong to $\mathbb{N} \cup \{\infty\}$? Any help would be appreciated.
Here is a slightly more general statement. Let $(X_i)$ be a family of sets, and $X$ its product. For each $i$ let $E_i\subset X_i^2$ be an equivalence relation. Write $x\ \sim_i\ y$ for $(x,y)\in E_i$. Let $E$ be the product of the $E_i$. There is a canonical bijection between $X^2$ and the product of the $X_i^2$. Thus $E$ can be viewed as a subset of $X^2$. Write $x\sim y$ for $(x,y)\in E$. Let $x,y$ be in $X$. The following is clear: Lemma 1. We have $x\sim y\ \Leftrightarrow\ x_i\ \sim_i\ y_i\ \forall\ i$. In particular $\sim$ is an equivalence relation on $X$. Define $f_i:X\to X_i/E_i$ by mapping $x$ to the canonical image of $x_i$. Let $$ f:X\to\prod\ (X_i/E_i) $$ be the map attached to the family $(f_i)$. CLAIM. The map $f$ induces a bijection $g$ from $X/E$ to $\prod(X_i/E_i)$. Let $x,y$ be in $X$. Lemma 2. We have $x\sim y\ \Leftrightarrow\ f(x)=f(y)$. Proof: This follows from Lemma 1. Conclusion: The map $f$ induces an injection $g$ from $X/E$ to $\prod(X_i/E_i)$. It only remains to prove that $g$ is surjective. To do this, let $a$ be in $\prod(X_i/E_i)$. For each $i$ choose a representative $x_i\in X_i$ of $a_i$, put $x:=(x_i)$, and check the equality $f(x)=a$.
Cardinality of Borel sigma algebra It seems it's well known that if a sigma algebra is generated by countably many sets, then the cardinality of it is either finite or $c$ (the cardinality of continuum). But it seems hard to prove it, and actually hard to find a proof of it. Can anyone help me out?
It is easy to prove that the $\sigma$-algebra is either finite or has cardinality at least $2^{\aleph_0}$. One way to prove that it has cardinality at most $2^{\aleph_0}$, without explicitly using transfinite recursion, is the following. It is easy to see that it is enough to prove this upper bound for a "generic" $\sigma$-algebra, e.g., for the Borel $\sigma$-algebra of $\{0,1\}^{\omega}$, or for the Borel $\sigma$-algebra of the Baire space $\mathcal{N} = \omega^{\omega}$. Note that $\mathcal{N}$ is a Polish space, so we can talk about analytic subsets of $\mathcal{N}$. Every Borel subset is an analytic subset of $\mathcal{N}$ (in fact, $A \subseteq \mathcal{N}$ is Borel if and only if $A$ and $\mathcal{N} \setminus A$ are both analytic). So it is enough to prove that $\mathcal{N}$ has $2^{\aleph_0}$ analytic subsets. Now use the theorem stating that every analytic subset of $\mathcal{N}$ is the projection of a closed subset of $\mathcal{N} \times \mathcal{N}$. Since $\mathcal{N} \times \mathcal{N}$ has a countable basis of open subsets, it has $2^{\aleph_0}$ open subsets, so it has $2^{\aleph_0}$ closed subsets. So $\mathcal{N}$ has $2^{\aleph_0}$ analytic subsets. The proof using transfinite recursion might be simpler, but I think the analytic subset description gives a slightly different, kind of direct ("less transfinite") view on the Borel sets, that could be useful to know.
What Implications Can be Drawn from a Binomial Distribution? Hello everyone. I understand how to calculate a binomial distribution or how to identify when it has occurred in a data set. My question is: what does it imply when this type of distribution occurs? Let's say for example you are a student in a physics class and the professor states that the distribution of grades on the first exam throughout all sections was a binomial distribution, with typical class averages of around 40 to 50 percent. How would you interpret that statement?
Lets say for example you are a student in a physics class and the professor states that the distribution of grades on the first exam throughout all sections was a binomial distribution. With typical class averages of around 40 to 50 percent. How would you interpret that statement? Most likely the professor was talking loosely and his statement means that the histogram of percentage scores resembled the bell-shaped curve of a normal density function with average or mean value of $40\%$ to $50\%$. Let us assume for convenience that the professor said the average was exactly $50\%$. The standard deviation of scores would have to be at most $16\%$ or so to ensure that only a truly exceptional over-achiever would have scored more than $100\%$. As an aside, in the US, raw scores on the GRE and SAT are processed through a (possibly nonlinear) transformation so that the histogram of reported scores is roughly bell-shaped with mean $500$ and standard deviation $100$. The highest reported score is $800$, and the smallest $200$. As the saying goes, you get $200$ for filling in your name on the answer sheet. At the high end, on the Quantitative GRE, a score of $800$ ranks only at the $97$-th percentile. What if the professor had said that there were no scores that were a fraction of a percentage point, and that the histogram of percentage scores matched a binomial distribution with mean $50$ exactly? Well, the possible percentage scores are $0\%$, $1\%, \ldots, 100\%$ and so the binomial distribution in question has parameters $(100, \frac{1}{2})$ with $P\{X = k\} = \binom{100}{k}/2^{100}$. So, if $N$ denotes the number of students in the course, then for each $k, 0 \leq k \leq 100$, $N\cdot P\{X = k\}$ students had a percentage score of $k\%$. Since $N\cdot P\{X = k\}$ must be an integer, and $P\{X = 0\} = 1/2^{100}$, we conclude that $N$ is an integer multiple of $2^{100}$. I am aware that physics classes are often large these days, but having $2^{100}$ in one class, even if it is subdivided into sections, seems beyond the bounds of plausibility! So I would say that your professor had his tongue firmly embedded in his cheek when he made the statement.
Lesser-known integration tricks I am currently studying for the GRE math subject test, which heavily tests calculus. I've reviewed most of the basic calculus techniques (integration by parts, trig substitutions, etc.) I am now looking for a list or reference for some lesser-known tricks or clever substitutions that are useful in integration. For example, I learned of this trick $$\int_a^b f(x) \, dx = \int_a^b f(a + b -x) \, dx$$ in the question Showing that $\int\limits_{-a}^a \frac{f(x)}{1+e^{x}} \mathrm dx = \int\limits_0^a f(x) \mathrm dx$, when $f$ is even I am especially interested in tricks that can be used without an excessive amount of computation, as I believe (or hope?) that these will be what is useful for the GRE.
When integrating rational functions by partial fractions decomposition, the trickiest type of antiderivative that one might need to compute is $$I_n = \int \frac{dx}{(1+x^2)^n}.$$ (Integrals involving more general quadratic factors can be reduced to such integrals, plus integrals of the much easier type $\int \frac{x \, dx}{(1+x^2)^n}$, with the help of substitutions of the form $x \mapsto x+a$ and $x \mapsto ax$.) For $n=1$, we know that $I_1 = \int \frac{dx}{1+x^2} = \arctan x + C$, and the usual suggestion for finding $I_n$ for $n \ge 2$ is to work one's way down to $I_1$ using the reduction formula $$ I_n = \frac{1}{2(n-1)} \left( \frac{x}{(1+x^2)^{n-1}} + (2n-3) \, I_{n-1} \right) . $$ However, this formula is not easy to remember, and the computations become quite tedious, so the lesser-known trick that I will describe here is (in my opinion) a much simpler way. From now on, I will use the abbreviation $$T=1+x^2.$$ First we compute $$ \frac{d}{dx} \left( x \cdot \frac{1}{T^n} \right) = 1 \cdot \frac{1}{T^n} + x \cdot \frac{-n}{T^{n+1}} \cdot 2x = \frac{1}{T^n} - \frac{2n x^2}{T^{n+1}} = \frac{1}{T^n} - \frac{2n (T-1)}{T^{n+1}} \\ = \frac{1}{T^n} - \frac{2n T}{T^{n+1}} + \frac{2n}{T^{n+1}} = \frac{2n}{T^{n+1}} - \frac{2n-1}{T^n} . $$ Let us record this result for future use, in the form of an integral: $$ \int \left( \frac{2n}{T^{n+1}} - \frac{2n-1}{T^n} \right) dx = \frac{x}{T^n} + C . $$ That is, we have $$ \begin{align} \int \left( \frac{2}{T^2} - \frac{1}{T^1} \right) dx &= \frac{x}{T} + C ,\\ \int \left( \frac{4}{T^3} - \frac{3}{T^2} \right) dx &= \frac{x}{T^2} + C ,\\ \int \left( \frac{6}{T^4} - \frac{5}{T^3} \right) dx &= \frac{x}{T^3} + C ,\\ &\vdots \end{align} $$ With the help of this, we can easily compute things like $$ \begin{align} \int \left( \frac{1}{T^3} + \frac{5}{T^2} - \frac{2}{T} \right) dx &= \int \left( \frac14 \left( \frac{4}{T^3} - \frac{3}{T^2} \right) + \frac{\frac34 + 5}{T^2} - \frac{2}{T} \right) dx \\ &= \int \left( \frac14 \left( \frac{4}{T^3} - \frac{3}{T^2} \right) + \frac{23}{4} \cdot \frac12 \left( \frac{2}{T^2} - \frac{1}{T^1} \right) + \frac{\frac{23}{8}-2}{T} \right) dx \\ &= \frac14 \frac{x}{T^2} + \frac{23}{8} \frac{x}{T} + \frac{7}{8} \arctan x + C . \end{align} $$ Of course, the relation that we are using, $2n \, I_{n+1} - (2n-1) \, I_n = \frac{x}{T^n}$, really is the reduction formula in disguise. However, the trick is: (a) to derive the formula just by differentiation (instead of starting with an integral where the exponent is one step lower than the one that we're interested in, inserting a factor of $1$, integrating by parts, and so on), and (b) to leave the formula in its "natural" form as it appears when differentiating (instead of solving for $I_{n+1}$ in terms of $I_n$), which results in a structure which is easier to remember and a more pleasant way of organizing the computations.
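One can sanity-check the worked example by differentiating the result numerically and comparing against the integrand (a minimal sketch with arbitrary sample points):

#include <cmath>
#include <iostream>

int main() {
    // F(x) = x/(4 T^2) + 23x/(8 T) + (7/8) arctan x, with T = 1 + x^2
    auto F = [](double x) {
        double T = 1.0 + x * x;
        return x / (4.0 * T * T) + 23.0 * x / (8.0 * T) + 7.0 / 8.0 * std::atan(x);
    };
    auto integrand = [](double x) {
        double T = 1.0 + x * x;
        return 1.0 / (T * T * T) + 5.0 / (T * T) - 2.0 / T;
    };
    for (double x : {-2.0, -0.3, 0.7, 1.5}) {
        double h = 1e-5;
        double dF = (F(x + h) - F(x - h)) / (2.0 * h);  // central difference
        std::cout << x << ": " << dF << " vs " << integrand(x) << '\n';  // values match
    }
}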
Consequences of the Langlands program I have been reading the book Fearless Symmetry by Ash and Gross. It talks about the Langlands program, which it says is the conjecture that there is a correspondence between any Galois representation coming from the etale cohomology of a Z-variety and an appropriate generalization of a modular form, called an "automorphic representation". Even though it appears to be interesting, I would like to know whether there are any important immediate consequences of the Langlands program in number theory or any other field. Why exactly are mathematicians so excited about this?
There are many applications of the Langlands program to number theory; this is why so many top-level researchers in number theory are focusing their attention on it. One such application (proved six or so years ago by Clozel, Harris, and Taylor) is the Sato--Tate conjecture, which describes rather precisely the deviation of the number of mod $p$ points on a fixed elliptic curve $E$, as the prime $p$ varies, from the "expected value" of $1 + p$. Further progress in the Langlands program would give rise to analogous distribution results for other Diophantine equations. (The key input is the analytic properties of the $L$-functions mentioned in Jeremy's answer.) At a slightly more abstract level, one can think of the Langlands program as providing a classification of Diophantine equations in terms of automorphic forms. At a more concrete level, it is anticipated that such a classification will be a crucial input to the problem of developing general results on solving Diophantine equations. (E.g. all results in the direction of the Birch--Swinnerton-Dyer conjecture take as input the modularity of the elliptic curve under investigation.)
Proof of dividing fractions $\frac{a/b}{c/d}=\frac{ad}{bc}$ For dividing two fractional expressions, how does the division sign turns into multiplication? Is there a step by step proof which proves $$\frac{a}{b} \div \frac{c}{d} = \frac{a}{b} \cdot \frac{d}{c}=\frac{ad}{bc}?$$
Suppose $\frac{a}{b}$ and $\frac{c}{d}$ are fractions. That is, $a$, $b$, $c$, $d$ are whole numbers and $b\ne0$, $d\ne0$. In addition we require $c\ne0$. Let $\frac{a}{b}\div\frac{c}{d}=A$. Then by definition of division of fractions , $A$ is a unique fraction so that $A\times\frac{c}{d}=\frac{a}{b}$. However, $(\frac{a}{b}\times\frac{d}{c})\times\frac{c}{d}=\frac{a}{b}\times(\frac{d}{c}\times\frac{c}{d})=\frac{a}{b}\times(\frac{dc}{cd})=\frac{a}{b}\times(\frac{dc}{dc})=\frac{a}{b}$. Then by uniqueness (of $A$), $A=\frac{a}{b}\times\frac{d}{c}$.
Proof that series diverges Prove that $\displaystyle\sum_{n=1}^\infty\frac{1}{n(1+1/2+\cdots+1/n)}$ diverges. I think the only way to prove this is to find another series to compare using the comparison or limit tests. So far, I have been unable to find such a series.
This answer is similar in spirit to Didier Piau's answer. The following theorem is a very useful tool: Suppose that $a_k > 0$ form a decreasing sequence of real numbers. Then $$\sum_{k=1}^\infty a_k$$ converges if and only if $$\sum_{k=1}^\infty 2^k a_{2^k}$$ converges. Applying this to the problem in hand we are reduced to investigating the convergence of $$\sum_{k=1}^\infty \frac{1}{1 + 1/2 + \dots + 1/2^k}$$ But one easily sees that $$1 + 1/2 + \dots + 1/2^k \le 1 + 2 \cdot 1/2 + 4 \cdot 1/4 + \dots + 2^{k-1}\cdot1/2^{k-1} + 1/2^k \le k + 1.$$ Because $$\sum_{k=1}^\infty \frac{1}{k+1}$$ diverges, we are done.
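The bound $1+1/2+\cdots+1/2^k \le k+1$ used above is easy to confirm numerically (a minimal check):

#include <iostream>

int main() {
    double H = 0.0;  // running harmonic sum 1 + 1/2 + ... + 1/m
    long long m = 0;
    for (int k = 0; k <= 20; ++k) {
        long long target = 1LL << k;
        while (m < target) { ++m; H += 1.0 / m; }
        std::cout << "k=" << k << "  H_{2^k}=" << H << "  k+1=" << k + 1 << '\n';
    }
}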
On the meaning of being algebraically closed The definition of algebraic number is that $\alpha$ is an algebraic number if there is a nonzero polynomial $p(x)$ in $\mathbb{Q}[x]$ such that $p(\alpha)=0$. By algebraic closure, every nonconstant polynomial with algebraic coefficients has algebraic roots; then there will also be a nonconstant polynomial with rational coefficients that has those roots. I feel uncomfortable with the idea that the root of a polynomial with algebraic coefficients is again algebraic; why can we be sure that for every polynomial in $\mathbb{\bar{Q}}[x]$ we can find a polynomial in $\mathbb{Q}[x]$ that has the same roots? I apologize if I'm asking something really trivial or my question comes from a big misunderstanding of basic concepts.
Let $p(x) = a_0+a_1x+\cdots +a_{n-1}x^{n-1} + x^n$ be a polynomial with coefficients in $\overline{\mathbb{Q}}$. For each $i$, $0\leq i\leq n-1$, let $a_i=b_{i1}, b_{i2},\ldots,b_{im_i}$ be the $m_i$ conjugates of $a_i$ (that is, the "other" roots of the monic irreducible polynomial with coefficients in $\mathbb{Q}$ that has $a_i$ as a root). Now let $F = \mathbb{Q}[b_{11},\ldots,b_{n-1,m_{n-1}}]$. This field is Galois over $\mathbb{Q}$. Let $G=\mathrm{Gal}(F/\mathbb{Q})$. Now consider $$q(x) = \prod_{\sigma\in G} \left( \sigma(a_0) + \sigma(a_1)x+\cdots + \sigma(a_{n-1})x^{n-1} + x^n\right).$$ This is a polynomial with coefficients in $F$, and any root of $p(x)$ is also a root of $q(x)$ (since one of the elements of $G$ is the identity, so one of the factors of $q(x)$ is $p(x)$). The key observation is that if you apply any element $\tau\in G$ to $q(x)$, you get back $q(x)$ again: $$\begin{align*} \tau(q(x)) &= \tau\left(\prod_{\sigma\in G} \left( \sigma(a_0) + \sigma(a_1)x+\cdots + \sigma(a_{n-1})x^{n-1} + x^n\right)\right)\\ &= \prod_{\sigma\in G} \left( \tau\sigma(a_0) +\tau\sigma(a_1)x+\cdots + \tau\sigma(a_{n-1})x^{n-1} + x^n\right)\\ &= \prod_{\sigma'\in G} \left( \sigma'(a_0) + \sigma'(a_1)x+\cdots + \sigma'(a_{n-1})x^{n-1} + x^n\right)\\ &= q(x). \end{align*}$$ That means that the coefficients of $q(x)$ must lie in the fixed field of $G$. But since $F$ is Galois over $\mathbb{Q}$, the fixed field is $\mathbb{Q}$. That is: $q(x)$ is actually a polynomial in $\mathbb{Q}[x]$. Thus, every root of $p(x)$ is the root of a polynomial with coefficients in $\mathbb{Q}$. For an example of how this works, suppose you have $p(x) = x^3 - (2\sqrt{3}+\sqrt{5})x + 3$. The conjugate of $\sqrt{3}$ is $-\sqrt{3}$; the conjugate of $\sqrt{5}$ is $-\sqrt{5}$. The field $\mathbb{Q}[\sqrt{3},\sqrt{5}]$ already contains all the conjugates, and the Galois group over $\mathbb{Q}$ has four elements: the one that maps $\sqrt{3}$ to itself and $\sqrt{5}$ to $-\sqrt{5}$; the one the maps $\sqrt{3}$ to $-\sqrt{3}$ and $\sqrt{5}$ to itself; the one that maps $\sqrt{3}$ to $-\sqrt{3}$ and $\sqrt{5}$ to $-\sqrt{5}$; and the identity. So $q(x)$ would be the product of $x^3 - (2\sqrt{3}+\sqrt{5})x + 3$, $x^3 - (-2\sqrt{3}+\sqrt{5})x+3$, $x^3 - (2\sqrt{3}-\sqrt{5})x + 3$, and $x^3 - (-2\sqrt{3}-\sqrt{5})x + 3$. If you multiply them out, you get $$\begin{align*} \Bigl( &x^3 - (2\sqrt{3}+\sqrt{5})x + 3\Bigr)\Bigl( x^3 + (2\sqrt{3}+\sqrt{5})x+3\Bigr)\\ &\times \Bigl(x^3 - (2\sqrt{3}-\sqrt{5})x + 3\Bigr)\Bigl( x^3 + (2\sqrt{3}-\sqrt{5})x + 3\Bigr)\\ &= \Bigl( (x^3+3)^2 - (2\sqrt{3}+\sqrt{5})^2x^2\Bigr)\Bigl((x^3+3)^2 - (2\sqrt{3}-\sqrt{5})^2x^2\Bigr)\\ &=\Bigl( (x^3+3)^2 - 17x^2 - 4\sqrt{15}x^2\Bigr)\Bigl( (x^3+3)^2 - 17x^2 + 4\sqrt{15}x^2\Bigr)\\ &= \Bigl( (x^3+3)^2 - 17x^2\Bigr)^2 - 240x^4, \end{align*}$$ which has coefficients in $\mathbb{Q}$.
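As a sanity check on the computation (a minimal numeric sketch), one can locate a real root of $p$ by bisection, here the one in $(0,1)$, and confirm that it is also a root of the rational polynomial $q(x) = \bigl((x^3+3)^2-17x^2\bigr)^2 - 240x^4$:

#include <cmath>
#include <iostream>

int main() {
    const double c = 2.0 * std::sqrt(3.0) + std::sqrt(5.0);
    auto p = [c](double x) { return x * x * x - c * x + 3.0; };
    auto q = [](double x) {
        double u = x * x * x + 3.0;
        double v = u * u - 17.0 * x * x;
        return v * v - 240.0 * x * x * x * x;
    };
    // bisection: p(0) > 0 > p(1), so p has a root in (0, 1)
    double lo = 0.0, hi = 1.0;
    for (int i = 0; i < 60; ++i) {
        double mid = 0.5 * (lo + hi);
        (p(mid) > 0 ? lo : hi) = mid;
    }
    double root = 0.5 * (lo + hi);
    std::cout << "p(root) = " << p(root) << ", q(root) = " << q(root) << '\n';  // both ~ 0
}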
Questions about cosets: "If $aH\neq Hb$, then $aH\cap Hb=\emptyset$"? Let $H$ be a subgroup of group $G$, and let $a$ and $b$ belong to $G$. Then, it is known that $$ aH=bH\qquad\text{or}\qquad aH\cap bH=\emptyset $$ In other words, $aH\neq bH$ implies $aH\cap bH=\emptyset$. What can we say about the statement "If $aH\neq Hb$, then $aH\cap Hb=\emptyset$" ? [EDITED:] What I think is that when $G$ is Abelian, this can be true since $aH=Ha$ for any $a\in G$. But what if $G$ is non-Abelian? How should I go on?
It is sometimes true and sometimes false. For example, if $H$ is a normal subgroup of $G$, then it is true. If $H$ is the subgroup generated by the permutation $(12)$ inside $G=S_3$, the symmetric group of degree $3$, then $(123)H\neq H(132)$, yet $(13)\in(123)H\cap H(132)$
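The $S_3$ example can be checked mechanically (a minimal sketch; permutations act on $\{0,1,2\}$ and compose right-to-left):

#include <algorithm>
#include <iostream>
#include <iterator>
#include <set>
#include <vector>

using Perm = std::vector<int>;  // p[i] = image of i

Perm compose(const Perm& a, const Perm& b) {  // (a*b)(i) = a(b(i))
    Perm c(3);
    for (int i = 0; i < 3; ++i) c[i] = a[b[i]];
    return c;
}

int main() {
    Perm e{0, 1, 2}, t12{1, 0, 2}, c123{1, 2, 0}, c132{2, 0, 1};
    std::vector<Perm> H{e, t12};  // H = <(12)>

    std::set<Perm> left, right;   // (123)H and H(132)
    for (const Perm& h : H) {
        left.insert(compose(c123, h));
        right.insert(compose(h, c132));
    }
    std::cout << "(123)H == H(132)? " << (left == right ? "yes" : "no") << '\n';  // no
    std::set<Perm> inter;
    std::set_intersection(left.begin(), left.end(), right.begin(), right.end(),
                          std::inserter(inter, inter.begin()));
    std::cout << "intersection size: " << inter.size() << '\n';  // 1, namely (13)
}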
The limit of locally integrable functions If ${f_i} \in L_{\rm loc}^1(\Omega )$ with $\Omega $ an open set in ${\mathbb R^n}$ , and ${f_i}$ are uniformly bounded in ${L^1}$ for every compact set, is it necessarily true that there is a subsequence of ${f_i}$ converging weakly to a regular Borel measure?
Take $K_j$ a sequence of compact sets such that their interiors grow to $\Omega$. That is, $\mathrm{int}(K_j) \uparrow \Omega$. Let $f_i^0$ be a sub-sequence of $f_i$ such that $f_i^0|_{K_0}$ converges to a Borel measure $\mu_0$ over $K_0$. For each $j > 0$, take a sub-sequence $f_i^j$ of $f_i^{j-1}$ converging to a Borel measure $\mu_j$ over $K_j$. It is evident, from the concept of convergence, that for $k \leq j$, and any Borel set $A \subset K_k$, $\mu_j(A) = \mu_k(A)$. Now, define $\mu(A) = \lim_j \mu_j(A \cap K_j)$. And take the diagonal sequence $f_j^j$. For any continuous $g$ with compact support $K$, there exists $k$ such that $K \subset \mathrm{int}(K_k)$ (why?). Then, since for $j \geq k$, $f_j^j|_{K_k}$ is a sequence that converges to $\mu_k$, $$ \int_{K_k} g f_j^j \,\mathrm{d}x \rightarrow \int_{K_k} g \,\mathrm{d}\mu_k = \int g \,\mathrm{d}\mu. $$ That is, $f_j^j \rightarrow \mu$.
moment-generating function of the chi-square distribution How do we find the moment-generating function of the chi-square distribution? I really couldn't figure it out. The integral is $$E[e^{tX}]=\frac{1}{2^{r/2}\Gamma(r/2)}\int_0^\infty x^{(r-2)/2}e^{-x/2}e^{tx}dx.$$ I'm going over it for a while but can't seem to find the solution. By the way, the answer should be $$(1-2t)^{(-r/2)}.$$
In case you have not yet figured it out, the value of the integral follows by simple scaling of the integrand. First, assume $t < \frac{1}{2}$, then change variables $y = (1-2 t) x$: $$ \int_0^\infty x^{(r-2)/2} \mathrm{e}^{-x/2}\mathrm{e}^{t x}\mathrm{d}x = \int_0^\infty x^{r/2} \mathrm{e}^{-\frac{(1-2 t) x}{2}} \, \frac{\mathrm{d}x}{x} = \left(1-2 t\right)^{-r/2} \int_0^\infty y^{r/2} \mathrm{e}^{-y/2} \, \frac{\mathrm{d}y}{y} $$ The integral in $y$ gives the normalization constant $2^{r/2}\,\Gamma(r/2)$, and the value of the m.g.f. follows.
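If you want to double-check the result numerically (a rough sketch: a plain trapezoidal rule on a truncated range, with arbitrary $r$ and $t < 1/2$):

#include <cmath>
#include <iostream>

int main() {
    const double r = 4.0, t = 0.2;  // any t < 1/2
    auto f = [&](double x) {
        return std::pow(x, (r - 2.0) / 2.0) * std::exp(-x / 2.0 + t * x);
    };
    const int N = 2000000;
    const double a = 0.0, b = 200.0, h = (b - a) / N;  // the tail beyond 200 is negligible here
    double sum = 0.5 * (f(a) + f(b));
    for (int i = 1; i < N; ++i) sum += f(a + i * h);
    double mgf = h * sum / (std::pow(2.0, r / 2.0) * std::tgamma(r / 2.0));
    std::cout << mgf << " vs " << std::pow(1.0 - 2.0 * t, -r / 2.0) << '\n';  // agree
}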
Large $n$ asymptotic of $\int_0^\infty \left( 1 + x/n\right)^{n-1} \exp(-x) \, \mathrm{d} x$ While thinking of 71432, I encountered the following integral: $$ \mathcal{I}_n = \int_0^\infty \left( 1 + \frac{x}{n}\right)^{n-1} \mathrm{e}^{-x} \, \mathrm{d} x $$ Eric's answer to the linked question implies that $\mathcal{I}_n \sim \sqrt{\frac{\pi n}{2}} + O(1)$. How would one arrive at this asymptotic from the integral representation, without reducing the problem back to the sum ([added] i.e. expanding $(1+x/n)^{n-1}$ into series and integrating term-wise, reducing the problem back to the sum solve by Eric) ? Thanks for reading.
Interesting. I've got a representation $$ \mathcal{I}_n = n e^n \int_1^\infty t^{n-1} e^{- nt}\, dt $$ which can be obtained from yours by the change of variables $t=1+\frac xn$. After some fiddling one can get $$ 2\mathcal{I}_n= n e^n \int_0^\infty t^{n-1} e^{- nt}\, dt+o(\mathcal{I}_n)= n^{-n} e^n \Gamma(n+1)+\ldots=\sqrt{2\pi n}+\ldots. $$
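The asymptotics are easy to confirm numerically from the original integral representation (a rough sketch; the integrand behaves like $e^{-x^2/(2n)}$, so a cutoff of a few dozen multiples of $\sqrt n$ suffices, and the $O(1)$ offset from $\sqrt{\pi n/2}$ is visible at small $n$):

#include <cmath>
#include <iostream>

int main() {
    const double PI = std::acos(-1.0);
    // trapezoidal estimate of I_n = int_0^inf (1 + x/n)^(n-1) e^(-x) dx
    for (int n : {10, 100, 1000}) {
        const double b = 50.0 * std::sqrt(double(n)) + 100.0;  // generous cutoff
        const int N = 400000;
        const double h = b / N;
        double sum = 0.5 * 1.0;  // f(0) = 1, f(b) is negligible
        for (int i = 1; i < N; ++i) {
            double x = i * h;
            sum += std::exp((n - 1) * std::log1p(x / n) - x);  // (1+x/n)^(n-1) e^(-x), in log form
        }
        std::cout << "n=" << n << ": I_n ~ " << h * sum
                  << ", sqrt(pi n/2) = " << std::sqrt(PI * n / 2.0) << '\n';
    }
}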
A couple of problems involving divisibility and congruence I'm trying to solve a few problems and can't seem to figure them out. Since they are somewhat related, maybe solving one of them will give me the missing link to solve the others. $(1)\ \ $ Prove that there's no $a$ so that $ a^3 \equiv -3 \pmod{13}$ So I need to show that no $a$ satisfies $a^3 \equiv 10 \pmod{13}$. From this I get that $$a \equiv (13k+10)^{1/3} \pmod{13} $$ If I can prove that there's no $k$ so that $ (13k+10)^{1/3} $ is an integer then the problem is solved, but I can't seem to find a way of doing this. $(2)\ \ $ Prove that $a^7 \equiv a \pmod{7} $ If $a= 7q + r \rightarrow a^7 \equiv r^7 \pmod{7} $. I think that the next step should be $ r^7 \equiv r \pmod{7} $, but I can't figure out why that would hold. $(3)\ \ $ Prove that $ 7 | a^2 + b^2 \longleftrightarrow 7| a \quad \textbf{and} \quad 7 | b$ Left to right is easy but I have no idea how to do right to left since I know nothing about what $7$ divides except what is stated. Any help here would be much appreciated. There are a lot of problems I can't seem to solve because I don't know how to prove that a number is or isn't an integer, like in problem 1, and also quite a few that are similar to problem 3. Any help would be much appreciated.
HINT $\rm\ (2)\quad\ mod\ 7\!:\ \{\pm 1,\:\pm 2,\:\pm3\}^3\equiv\: \pm1\:,\:$ so squaring yields $\rm\ a^6\equiv 1\ \ if\ \ a\not\equiv 0\:.$ $\rm(3)\quad \ mod\ 7\!:\ \ if\ \ a^2\equiv -b^2\:,\:$ then, by above, cubing yields $\rm\: 1\equiv -1\ $ for $\rm\ a,b\not\equiv 0\:.$ $\rm(1)\quad \ mod\ 13\!:\ \{\pm1,\:\pm3,\:\pm4\}^3 \equiv \pm 1,\ \ \{\pm2,\pm5,\pm6\}^3\equiv \pm 5\:,\: $ and neither is $\rm\:\equiv -3\:.$ If you know Fermat's little theorem or a little group theory then you may employ such to provide more elegant general proofs - using the above special cases as hints.
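The two residue computations in the hint take one line each to verify by brute force (a minimal check):

#include <iostream>
#include <set>

int main() {
    std::set<int> cubes13, cubes7;
    for (int a = 0; a < 13; ++a) cubes13.insert(a * a * a % 13);
    for (int a = 0; a < 7; ++a)  cubes7.insert(a * a * a % 7);

    std::cout << "cubes mod 13: ";                 // {0, 1, 5, 8, 12} -- no 10 = -3
    for (int c : cubes13) std::cout << c << ' ';
    std::cout << "\ncubes mod 7: ";                // {0, 1, 6} = {0, +/-1}
    for (int c : cubes7) std::cout << c << ' ';
    std::cout << '\n';
}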
$11$ divisibility We know that $1331$ is divisible by $11$. As per the $11$ divisibility test, we can say $1331$ is divisible by $11$; however, the test does not give us a quotient. If we subtract each unit digit in the following way, we can see the quotient when $1331$ is divided by $11$. $1331 \implies 133 -1= 132$ $132 \implies 13 - 2= 11$ $11 \implies 1-1 = 0$, which is divisible by $11$. Also, to get the quotient, arrange all the subtracted unit digits from bottom to top: we get $121$, which is the quotient. I want to know how this method works. Please write a proof.
HINT $\ $ Specialize $\rm\ x = 10\ $ below $$\rm(x+1)\ (a_n\ x^n +\:\cdots\:+a_1\ x + a_0)\ =\ a_n\ x^{n+1}+ (a_n+a_{n-1})\ x^{n}+\:\cdots\:(a_1+a_0)\ x+ a_0$$
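Here is the question's procedure as code, which makes the claim easy to test on many multiples of $11$ (a minimal sketch; by the identity in the hint, with $n = 11q$ and $q = 10a + d$, the stripped unit digit $d$ is exactly the low digit of the quotient, and what remains after the subtraction is $11a$):

#include <iostream>
#include <string>

// Divide n by 11 using the repeated unit-digit subtraction from the question.
// Returns the quotient; assumes 11 divides n.
long long divide_by_11(long long n) {
    std::string digits;            // unit digits we strip, least significant first
    while (n != 0) {
        long long d = n % 10;      // subtracted unit digit = next quotient digit
        digits.push_back(char('0' + d));
        n = n / 10 - d;            // e.g. 1331 -> 133 - 1 = 132
    }
    long long q = 0;
    for (auto it = digits.rbegin(); it != digits.rend(); ++it)
        q = 10 * q + (*it - '0');  // read "from bottom to top"
    return q;
}

int main() {
    std::cout << divide_by_11(1331) << ' ' << divide_by_11(143) << ' '
              << divide_by_11(11 * 987654) << '\n';  // 121 13 987654
}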
Prove there is no element of order 6 in a simple group of order 168 Let $G$ be a simple group of order 168. Let $n_p$ be the number of Sylow $p$ subgroups in $G$. I have already shown: $n_7 = 8$, $n_3 = 28$, $n_2 \in \left\{7, 21 \right\}$ Need to show: $n_2 = 21$ (showing there is no element of order 6 In $G$ will suffice) Attempt so far: If $P$ is a Sylow-2 subgroup of $G$, $|P| = 8$. Assume for contradiction that $n_2 = 7$. Then the normalizer $N(P)$ has order 24. Let $k_p$ be the number of Sylow-$p$ subgroups in $N(P)$. Then $k_3 \in \left\{1,4 \right\}$ and $k_2 \in \left\{1,3 \right\}$. Then I showed $k_3 = 4, k_2 = 1$. A counting argument shows there is an element of order 6 in $N(P)$, and thus in $G$ too. I don't know how to proceed from here. I am told that there cannot be an element of order 6 in $G$, but I don't know how to prove it; if someone could help me prove this I would very much appreciate it.
If there is an element of order 6, then that centralizes the Sylow $3$-subgroup $P_3$ generated by its square. You have already shown that $|N(P_3)|=168/n_3=6$. Therefore the normalizer of any Sylow $3$-subgroup would have to be cyclic of order 6, and an element of order 6 belongs to exactly one such normalizer. Thus your group would have $56=2\cdot n_3$ elements of order $3$, $56=2\cdot n_3$ elements of order $6$, $48=6\cdot n_7$ elements of order $7$, and therefore only eight other elements. Those eight would have to form a Sylow $2$-subgroup, and that would be unique, so...
Correlation Coefficient between these two random variables Suppose that $X$ is real-valued normal random variable with mean $\mu$ and variance $\sigma^2$. What is the correlation coefficient between $X$ and $X^2$?
Hint: You are trying to find: $$\frac{E\left[\left(X^2-E\left[X^2\right]\right)\left(X-E\left[X\right]\right)\right]}{\sqrt{E\left[\left(X^2-E\left[X^2\right]\right)^2\right]E\left[\left(X-E\left[X\right]\right)^2\right]}}$$ For a normal distribution the raw moments are

* $E\left[X^1\right] = \mu$
* $E\left[X^2\right] = \mu^2+\sigma^2$
* $E\left[X^3\right] = \mu^3+3\mu\sigma^2$
* $E\left[X^4\right] = \mu^4+6\mu^2\sigma^2+3\sigma^4$

so multiply out, substitute and simplify.
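Carrying out the multiplication and substitution (my own algebra; worth re-deriving as an exercise):

$$\begin{align*} \operatorname{Cov}(X, X^2) &= E[X^3] - E[X]\,E[X^2] = (\mu^3+3\mu\sigma^2) - \mu(\mu^2+\sigma^2) = 2\mu\sigma^2,\\ \operatorname{Var}(X^2) &= E[X^4] - E[X^2]^2 = (\mu^4+6\mu^2\sigma^2+3\sigma^4) - (\mu^2+\sigma^2)^2 = 4\mu^2\sigma^2 + 2\sigma^4,\\ \rho(X, X^2) &= \frac{2\mu\sigma^2}{\sqrt{(4\mu^2\sigma^2 + 2\sigma^4)\,\sigma^2}} = \frac{\sqrt{2}\,\mu}{\sqrt{2\mu^2+\sigma^2}}. \end{align*}$$

Note the sanity check: for $\mu = 0$ the correlation vanishes, as it must by symmetry.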
Why is the math for negative exponents so? This is what we are taught: $$5^{-2} = \left({\frac{1}{5}}\right)^{2}$$ but I don't understand why we take the inverse of the base when we have a negative exponent. Can anyone explain why?
For natural numbers $n$, $m$, we have $x^nx^m=x^{n+m}$. If you want this rule to be preserved when defining exponentiation by all integers, then you must have $x^0x^n = x^{n+0} = x^n$, so that you must define $x^0 = 1$. And then, arguing similarly, you have $x^nx^{-n} = x^{n-n}=x^0=1$, so that $x^{-n}=1/x^n$. Now, you can try to work out for yourself what $x^{1/n}$ should be, if we want to preserve the other exponentiation rule $(x^n)^m = x^{nm}$.
Representation of this function using a single formula without conditions Is it possible to represent the following function with a single formula, without using conditions? If not, how to prove it? $F(x) = \begin{cases}u(x), & x \le 0, \\ v(x), & x > 0 \end{cases}$ So that it will become something like that: $F(x) = G(x)$ With no conditions? I need it for further operations like derivative etc.
Note: This answers the original question, asking whether a formula like $F(x)=G(u(x),v(x))$ might represent the function $F$ defined as $F(x) = u(x)$ if $x \leqslant 0$ and $F(x)=v(x)$ if $x > 0$. The OP finally reacted to remarks made by several readers that another answer did not address this, by modifying the question, which made the other answer fit (post hoc) the question. Just to make sure @Rasmus's message got through: for any set $E$ with at least two elements, there can exist no function $G:\mathbb R^2\to E$ such that for every functions $u:\mathbb R\to E$ and $v:\mathbb R\to E$ and every $x$ in $\mathbb R$, one has $G(u(x),v(x))=u(x)$ if $x\leqslant0$ and $G(u(x),v(x))=v(x)$ if $x>0$.
A graph with less than 10 vertices contains a short circuit? Lately I read an old paper by Paul Erdős and L. Pósa ("On the maximal number of disjoint circuits of a graph") and stumbled across the following step in a proof (I changed it a bit to be easier to read): It is well known and easy to show that every (undirected) graph with $n < 10$ vertices, where all vertices have degree $\geq 3$, contains a circuit of at most $4$ edges. I would be very happy if someone could enlighten me as to how this is simple and why he can conclude that; maybe there are some famous formulas for graphs that make this trivial? For those interested, he also mentions that a counterexample for $n=10$ is the Petersen graph.
Because every vertex has degree $\ge 2$, there must be at least one cycle. Consider, therefore, a cycle of minimal length; call this length $n$. Because each vertex in the cycle has degree $\ge 3$, it is connected to at least one vertex apart from its two neighbors in the cycle. That cannot be a non-neighbor member of the cycle either, because then the cycle wouldn't be minimal. If the graph has $V$ vertices and $n > V/2$, then by the pigeonhole principle two vertices in the cycle must share a neighbor outside of the cycle. We can then construct a new cycle that contains these two edges and half of the original cycle, and this will be shorter than the original cycle unless each half of the original cycle has length at most $2$. Therefore, a graph with $V$ vertices each having degree $\ge 3$ must have a cycle of length at most $\max(4,\lfloor V/2\rfloor)$.
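Since the Petersen graph is the stated counterexample, one can confirm its girth is $5$ by brute force (a minimal BFS sketch; this method is exact for odd girth, which is all we need here):

#include <algorithm>
#include <iostream>
#include <queue>
#include <vector>

// shortest cycle length via BFS from every vertex
int girth(const std::vector<std::vector<int>>& adj) {
    int n = adj.size(), best = 1 << 30;
    for (int s = 0; s < n; ++s) {
        std::vector<int> dist(n, -1), parent(n, -1);
        std::queue<int> q;
        dist[s] = 0; q.push(s);
        while (!q.empty()) {
            int u = q.front(); q.pop();
            for (int v : adj[u]) {
                if (dist[v] == -1) { dist[v] = dist[u] + 1; parent[v] = u; q.push(v); }
                else if (v != parent[u]) best = std::min(best, dist[u] + dist[v] + 1);
            }
        }
    }
    return best;
}

int main() {
    // Petersen graph: outer 5-cycle, inner 5-cycle with step 2, and spokes
    std::vector<std::vector<int>> adj(10);
    auto edge = [&](int a, int b) { adj[a].push_back(b); adj[b].push_back(a); };
    for (int i = 0; i < 5; ++i) {
        edge(i, (i + 1) % 5);          // outer cycle
        edge(5 + i, 5 + (i + 2) % 5);  // inner pentagram
        edge(i, 5 + i);                // spokes
    }
    std::cout << "girth of the Petersen graph: " << girth(adj) << '\n';  // 5
}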
Express this curve in the rectangular form Express the curve $r = \dfrac{9}{4+\sin \theta}$ in rectangular form. And what is the rectangular form? If I get the expression in rectangular form, how am I able to convert it back to polar coordinate?
what is the rectangular form? It is the $y=f(x)$ expression of the curve in the $x,y$ referential (see picture). It can also be the implicit form $F(x,y)=F(x,f(x))\equiv 0$. Steps: 1) transformation of polar into rectangular coordinates (also known as Cartesian coordinates) (see picture) $$x=r\cos \theta ,$$ $$y=r\sin \theta ;$$ 2) from trigonometry and from 1) $r=\sqrt{x^2+y^2}$ $$\sin \theta =\frac{y}{r}=\frac{y}{\sqrt{ x^{2}+y^{2}}};$$ 3) substitution in the given equation $$r=\frac{9}{4+\sin \theta }=\dfrac{9}{4+\dfrac{y}{\sqrt{x^{2}+y^{2}}}}=9\dfrac{\sqrt{x^{2}+y^{2}}}{4\sqrt{x^{2}+y^{2}}+y};$$ 4) from 1) $r=\sqrt{x^2+y^2}$, equate $$9\frac{\sqrt{x^{2}+y^{2}}}{4\sqrt{ x^{2}+y^{2}}+y}=\sqrt{x^{2}+y^{2}};$$ 5) simplify to obtain the implicit equation $$4\sqrt{x^{2}+y^{2}}+y-9=0;$$ 6) Rewrite it as $$4\sqrt{x^{2}+y^{2}}=9-y,$$ square it (which may introduce extraneous solutions, also in this question), rearrange as $$16x^{2}+15y^{2}+18y-81=0,$$ and solve for $y$ $$y=-\frac{3}{5}\pm \frac{4}{15}\sqrt{81-15x^{2}}.$$ 7) Check for extraneous solutions. if I get the expression in rectangular form, how am I able to convert it back to polar coordinate? The transformation of rectangular to polar coordinates is $$r=\sqrt{x^{2}+y^{2}}, \qquad \theta =\arctan \frac{y}{x}\qquad \text{in the first quadrant},$$ or rather $\theta =\arctan2(y,x)$ to take into account a location different from the first quadrant. (Wikipedia link). As commented by J.M. the curve is an ellipse. Here is the plot I got using the equation $16x^{2}+15y^{2}+18y-81=0$.
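A quick way to catch algebra slips in conversions like this is to sample the polar curve and plug the points into the implicit equation (a minimal check):

#include <algorithm>
#include <cmath>
#include <iostream>

int main() {
    const double PI = std::acos(-1.0);
    // sample the polar curve r = 9/(4 + sin t) and plug (x, y) into the implicit equation
    double worst = 0.0;
    for (int i = 0; i < 360; ++i) {
        double t = i * PI / 180.0;
        double r = 9.0 / (4.0 + std::sin(t));
        double x = r * std::cos(t), y = r * std::sin(t);
        worst = std::max(worst, std::fabs(16.0 * x * x + 15.0 * y * y + 18.0 * y - 81.0));
    }
    std::cout << "largest residual: " << worst << '\n';  // ~ 1e-13
}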
General Lebesgue Dominated Convergence Theorem In Royden (4th edition), it says one can prove the General Lebesgue Dominated Convergence Theorem by simply replacing $g-f_n$ and $g+f_n$ with $g_n-f_n$ and $g_n+f_n$. I proceeded to do this, but I feel like the proof is incorrect. So here is the statement: Let $\{f_n\}_{n=1}^\infty$ be a sequence of measurable functions on $E$ that converge pointwise a.e. on $E$ to $f$. Suppose there is a sequence $\{g_n\}$ of integrable functions on $E$ that converge pointwise a.e. on $E$ to $g$ such that $|f_n| \leq g_n$ for all $n \in \mathbb{N}$. If $\lim\limits_{n \rightarrow \infty}$ $\int_E$ $g_n$ = $\int_E$ $g$, then $\lim\limits_{n \rightarrow \infty}$ $\int_E$ $f_n$ = $\int_E$ $f$. Proof: $$\int_E (g-f) = \liminf \int_E g_n-f_n.$$ By the linearity of the integral: $$\int_E g - \int_E f = \int_E g-f \leq \liminf \int_E g_n -f_n = \int_E g - \liminf \int_E f_n.$$ So, $$\limsup \int_E f_n \leq \int_E f.$$ Similarly for the other one. Am I missing a step, or is it really a simple case of replacing?
You made a mistake: $$\liminf \int (g_n-f_n) = \int g-\limsup \int f_n$$ not $$\liminf \int (g_n-f_n) = \int g-\liminf \int f_n.$$ Here is the proof: $$\int (g-f)\leq \liminf \int (g_n-f_n)=\int g -\limsup \int f_n$$ which means that $$\limsup \int f_n\leq \int f$$ Also $$\int (g+f)\leq \liminf \int(g_n+f_n)=\int g + \liminf \int f_n$$ which means that $$\int f\leq \liminf \int f_n$$ i.e. $$\limsup \int f_n\leq \int f\leq \liminf\int f_n\leq \limsup \int f_n$$ So they are all equal.
Existence of least squares solution to $Ax=b$ Does a least squares solution to $Ax=b$ always exist?
If you think of the least squares problem geometrically, the answer is obviously "yes", by definition. Let me try to explain why. For the sake of simplicity, assume the number of rows of $A$ is greater or equal than the number of its columns and it has full rank (i.e., its columns are linearly independent vectors). Without these hypotheses the answer is still "yes", but the explanation is a little bit more involved. If you have a system of linear equations $$ Ax = b \ , $$ you can look at it as the following equivalent problem: does the vector $b$ belong to the span of the columns of $A$? That is, $$ Ax = b \qquad \Longleftrightarrow \qquad \exists \ x_1, \dots , x_n \quad \text{such that }\quad x_1a_1 + \dots + x_na_n = b \ . $$ Here, $a_1, \dots , a_n$ are the columns of $A$ and $x = (x_1, \dots , x_n)^t$. If the answer is "yes", then the system has a solution. Otherwise, it hasn't. So, in this latter case, when $b\notin \mathrm{span }(a_1, \dots , a_n)$, that is, when your system hasn't a solution, you "change" your original system for another one which by definition has a solution. Namely, you change vector $b$ for the nearest vector $b' \in \mathrm{span }(a_1, \dots , a_n)$. This nearest vector $b'$ is the orthogonal projection of $b$ onto $\mathrm{span }(a_1, \dots , a_n)$. So the least squares solution to your system is, by definition, the solution of $$ Ax = b' \ , \qquad\qquad\qquad (1) $$ and your original system, with this change and the aforementioned hypotheses, becomes $$ A^t A x = A^tb \ . \qquad\qquad\qquad (2) $$ EDIT. Formula (1) becomes formula (2) taking into account that the matrix of the orthogonal projection onto the span of columns of $A$ is $$ P_A = A(A^tA)^{-1}A^t \ . $$ (See Wikipedia.) So, $b' = P_Ab$. And, if you put this into formula (1), you get $$ Ax = A(A^tA)^{-1}A^tb \qquad \Longrightarrow \qquad A^tAx = A^tA(A^tA)^{-1}A^tb = A^tb \ . $$ That is, formula (2).
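For a tiny concrete instance of equation (2) (a minimal sketch fitting a line through three points; the $2\times 2$ normal system is solved by Cramer's rule):

#include <iostream>

int main() {
    // overdetermined system: fit y = c0 + c1 x through three points via A^t A x = A^t b
    double A[3][2] = {{1, 0}, {1, 1}, {1, 2}};   // rows: (1, x_i)
    double b[3]    = {1, 2, 2};                  // y_i

    double M[2][2] = {{0, 0}, {0, 0}}, v[2] = {0, 0};
    for (int i = 0; i < 3; ++i)
        for (int j = 0; j < 2; ++j) {
            v[j] += A[i][j] * b[i];              // A^t b
            for (int k = 0; k < 2; ++k) M[j][k] += A[i][j] * A[i][k];  // A^t A
        }

    double det = M[0][0] * M[1][1] - M[0][1] * M[1][0];   // nonzero: A has full rank
    double c0 = (v[0] * M[1][1] - v[1] * M[0][1]) / det;  // Cramer's rule
    double c1 = (M[0][0] * v[1] - M[1][0] * v[0]) / det;
    std::cout << "least squares line: y = " << c0 << " + " << c1 << " x\n";  // 7/6 + x/2
}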
How to show that $\frac{f}{g}$ is measurable Here is my attempt to show that $\frac{f}{g}~,g\neq 0$ is a measurable function, if $f$ and $g$ are measurable function. I'd be happy if someone could look if it's okay. Since $fg$ is measurable, it is enough to show that $\frac{1}{g}$ is measurable. $$ \left\{x\;\left|\;\frac{1}{g}\lt \alpha \right\}\right.=\left\{x\;\left|\;g \gt \frac{1}{\alpha} \right\}\right., \qquad g\gt 0,\quad \alpha \in \mathbb{R},$$ which is measurable, since the right hand side is measurable. Also, $$ \left\{x\;\left|\;\frac{1}{g}\lt \alpha \right\}\right.= \left\{x\;\left|\;g\lt\frac{1}{\alpha} \right\}\right.,\qquad g\lt 0,\quad \alpha \in \mathbb{R},$$ which is measurable since the right hand side is measurable. Therefore, $\frac{1}{g}$ is measurable, and so $\frac{f}{g}$ is measurable.
Using the fact that $fg$ is a measurable function and in view of the identity $$\frac{f}{g}=f\frac{1}{g}$$ it suffices to show that $1/g$ (with $g\not=0$) is measurable on $E$. Indeed, $$E\left(\frac{1}{g}<\alpha\right)=\left\{\begin{array}{lll} E\left(g<0\right) & \quad\text{if }\alpha=0\\ E\left(g>1/\alpha\right)\cup E\left(g<0\right) & \quad\text{if }\alpha>0\\ E\left(g>1/\alpha\right)\cap E\left(g<0\right) & \quad\text{if }\alpha<0 \end{array}\right.$$ Hence, $f/g$ ($g$ vanishing nowhere on $E$) is a measurable function on $E$. For a better understanding of what is going on, I suggest plotting the function.
Cyclic group with exactly 3 subgroups: itself $\{e\}$ and one of order $7$. Isn't this impossible? Suppose a cyclic group has exactly three subgroups: $G$ itself, $\{e\}$, and a subgroup of order $7$. What is $|G|$? What can you say if $7$ is replaced with $p$ where $p$ is a prime? Well, I see a contradiction: the order should be $7$, but that is only possible if there are only two subgroups. Isn't it impossible to have three subgroups that fit this description? If G is cyclic of order n, then $\frac{n}{k} = 7$. But if there are only three subgroups, and one is of order 1, then 7 is the only factor of n, and $n = 7$. But then there are only two subgroups. Is this like a trick question? edit: nevermind. The order is $7^2$ right?
Hint: the subgroup orders of a cyclic group are exactly the divisors of its order $n$, so $n$ can have no divisors other than $1$, $7$, and $n$ itself. What does that force $n$ to be?
Is it ever $i$ time? I am asking this question as a response to reading two different questions: Is it ever Pi time? and Are complex number real? So I ask, is it ever $i$ time? Could we arbitrarily define time as following the imaginary line instead of the real one? (NOTE: I have NO experience with complex numbers, so I apologize if this is a truly dumb question. It just followed from my reading, and I want to test my understanding)
In the Wikipedia article titled Paul Émile Appell, we read that "He discovered a physical interpretation of the imaginary period of the doubly periodic function whose restriction to real arguments describes the motion of an ideal pendulum." The interpretation is this: The real period is the real period. The maximum deviation from vertical is $\theta$. The imaginary period is what the period would be if that maximum deviation were $\pi - \theta$.
Generating coordinates for 'N' points on the circumference of an ellipse with fixed nearest-neighbor spacing I have an ellipse with semimajor axis $A$ and semiminor axis $B$. I would like to pick $N$ points along the circumference of the ellipse such that the Euclidean distance between any two nearest-neighbor points, $d$, is fixed. How would I generate the coordinates for these points? For what range of $A$ and $B$ is this possible? As a clarification, all nearest-neighbor pairs should be of fixed distance $d$. If one populates the ellipse by sequentially adding nearest neighbors in, say, a clockwise fashion, the first and last point added should have a final distance $d$.
I will assume that $A$, $B$ and $N$ are given, and that $d$ is unknown. There is always a solution. Let $L$ be the perimeter of the ellipse. An obvious constraint is $N\,d<L$. Take $d\in(0,L/N)$. As explained in Gerry Myerson's answer, pick a point $P_1$ on the ellipse, and then pick points $P_2,\dots,P_N$ such that $P_{i+1}$ is clockwise from $P_i$ and the Euclidean distance between $P_i$ and $P_{i+1}$ is $d$. If $d$ is small, $P_N$ will be short of $P_1$, while if $d$ is large, it will "overpass" $P_1$. In any case, the position of $P_N$ is a continuous function of $d$. By continuity, there will be a value of $d$ such that $P_1=P_N$. It is also clear that this value is unique. To find $P_N$ for a given $d$ you need to solve $N-1$ quadratic equations. To compute the value of $d$, you can use the bisection method. Edit: TonyK's objections can be taken care of if $N=2\,M$ is even. Take $P_1=(A,0)$ and follow the procedure to find points $P_2,\dots,P_{M+1}$ in the upper semiellipse such that $P_{i+1}$ is clockwise of $P_i$ and at distance $d$, and $P_{M+1}=(-A,0)$. Then the sought solution is $P_1,\dots,P_{M+1},\sigma(P_M),\dots,\sigma(P_2)$, where $\sigma(P)$ is the point symmetric to $P$ with respect to the axis $y=0$. If $N=2\,M+1$ is odd, I believe that there is also a symmetric solution, but I have to think about it.
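Here is a rough numeric sketch of the double bisection described above (illustrative only: it parameterizes the ellipse by angle, finds each next point by an inner bisection, and assumes the chord length grows monotonically with the parameter step, which holds for mild eccentricity and small $d$):

#include <algorithm>
#include <cmath>
#include <iostream>

const double PI = std::acos(-1.0);
const double A = 2.0, B = 1.0;  // semi-axes (illustrative)

double px(double t) { return A * std::cos(t); }
double py(double t) { return B * std::sin(t); }
double chord(double t0, double t1) { return std::hypot(px(t1) - px(t0), py(t1) - py(t0)); }

// smallest t1 > t0 with |P(t1) - P(t0)| = d (chord assumed monotone on this range)
double next_t(double t0, double d) {
    double lo = t0, hi = t0 + PI;  // chord grows from 0 at t0 to at least 2B at t0 + pi
    for (int i = 0; i < 60; ++i) {
        double mid = 0.5 * (lo + hi);
        (chord(t0, mid) < d ? lo : hi) = mid;
    }
    return 0.5 * (lo + hi);
}

int main() {
    const int N = 12;
    // outer bisection on d: after N hops of length d we want to land exactly back on P_1
    double dlo = 0.0, dhi = 2.0 * PI * std::max(A, B) / N;  // large enough to overshoot
    for (int i = 0; i < 50; ++i) {
        double d = 0.5 * (dlo + dhi), t = 0.0;
        for (int k = 0; k < N; ++k) t = next_t(t, d);
        (t < 2.0 * PI ? dlo : dhi) = d;
    }
    double d = 0.5 * (dlo + dhi), t = 0.0;
    std::cout << "d = " << d << '\n';
    for (int k = 0; k < N; ++k) { std::cout << px(t) << ' ' << py(t) << '\n'; t = next_t(t, d); }
}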
Simple use of a permutation rule in calculating probability I have the following problem from DeGroot: A box contains 100 balls, of which 40 are red. Suppose that the balls are drawn from the box one at a time at random, without replacement. Determine (1) the probability that the first ball drawn will be red. (2) the probability that the fifteenth ball drawn will be red, and (3) the probability that the last ball drawn will be red. For the first question: * *the probability that the first ball drawn will be red should be: $$\frac{40!}{(100-40)!}$$ *the probability that the fifteenth ball will be red: $$\frac{60! \times 40!}{(60-14)! \times (40-1)}$$ *the probability that the last ball drawn will be red: $$\frac{60! \times 40!}{100}$$ Am I on the right track with any of these?
The answers are (a) $40/100$; (b) $40/100$; (c) $40/100$. (a) Since balls tend to roll around, let us imagine instead that we have $100$ cards, with the numbers $1$ to $100$ written on them. The cards with numbers $1$ to $40$ are red, and the rest are blue. The cards are shuffled thoroughly, and we deal the top card. There are $100$ cards, and each of them is equally likely to be the top card. Since $40$ of the cards are red, it follows that the probability of "success" (first card drawn is red) is $40/100$. (b) If we think about (b) the right way, it should be equally clear that the probability is $40/100$. The probability that any particular card is the fifteenth one drawn is the same as the probability that it is the first one drawn: all permutations of the $100$ cards are equally likely. It follows that the probability that the fifteenth card drawn is red is $40/100$. (c) First, fifteenth, eighty-eighth, last, it is all the same, since all permutations of the cards are equally likely. Another way: We look again at (a), using a more complicated sample space. Again we use the card analogy. There are $\binom{100}{40}$ ways to decide on the locations of the $40$ red cards (but not which red cards occupy these locations). All these ways are equally likely. In how many ways can we choose these $40$ locations so that Position $1$ is one of the chosen locations? We only need to choose $39$ locations, and this can be done in $\binom{99}{39}$ ways. So the desired probability is $$\frac{\binom{99}{39}}{\binom{100}{40}}.$$ Compute. The top is $\frac{99!}{39!60!}$ and the bottom is $\frac{100!}{40!60!}$. Divide. There is a lot of cancellation. We quickly get $40/100$. Now count the number of ways to find locations for the reds that include Position $15$. An identical argument shows that the number of such choices is $\binom{99}{39}$, so again we get probability $40/100$. Another other way: There are other reasonable choices of sample space. The largest natural one is the set of $100!$ permutations of our $100$ cards. Now we count the permutations that have a red in a certain position, say Position $15$. The red in Position $15$ can be chosen in $40$ ways. For each of these ways, the remaining $99$ cards can be arranged in $99!$ ways, for a total of $(40)(99!)$. Thus the probability that we have a red in Position $15$ is $$\frac{(40)(99!)}{100!},$$ which quickly simplifies to $40/100$. The same argument works for any other specific position.
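For readers who want to see the symmetry numerically: a quick Monte Carlo check (my own illustration, not part of the original answer; `random.sample` of the full deck produces a uniformly random permutation):

    import random

    deck = [1] * 40 + [0] * 60              # 1 = red, 0 = blue
    trials = 100_000
    hits = [0, 0, 0]
    for _ in range(trials):
        perm = random.sample(deck, len(deck))   # uniformly random permutation
        hits[0] += perm[0]      # first draw
        hits[1] += perm[14]     # fifteenth draw
        hits[2] += perm[-1]     # last draw
    print([h / trials for h in hits])       # all three are close to 0.4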
How to get apparent linear diameter from angular diameter Say I have an object, whose actual size is 10 units in diameter, and it is 100 units away. I can find the angular diameter as such: $2\arctan(5/100) = 5.725\ $ radians. Can I use this angular diameter to find the apparent linear size (that is, the size it appears to be to my eye) in generic units at any given distance?
It appears you are using the wrong angular units: $2\;\tan^{-1}\left(\frac{5}{100}\right)=5.7248$ degrees $=0.099917$ radians. The formula you cite above is valid for a flat object perpendicular to the line of sight. If your object is a sphere, the angular diameter is given by $2\;\sin^{-1}\left(\frac{5}{100}\right)=5.7320$ degrees $=0.100042$ radians. Usually, the angular size is referred to as the apparent size. Perhaps you want to find the actual size of the object which has the same apparent size but lies at a different distance. In that case, as joriki says, just multiply the actual distance by $\frac{10}{100}$ to get the actual diameter. This is a result of the "similar triangles" rule used in geometry proofs. Update: In a comment to joriki's answer, the questioner clarified that what they want is to know how the apparent size varies with distance. The formulas for the angular size come from the right-triangle geometry of the situation (the original diagram is omitted here): for the flat object: $\displaystyle\tan\left(\frac{\alpha}{2}\right)=\frac{D/2}{r}$; for the spherical object: $\displaystyle\sin\left(\frac{\alpha}{2}\right)=\frac{D/2}{r}$
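A few lines of code reproduce the numbers above (my own check):

    import math

    D, r = 10.0, 100.0
    flat = 2 * math.atan((D / 2) / r)     # flat object perpendicular to the line of sight
    sphere = 2 * math.asin((D / 2) / r)   # sphere of the same diameter
    print(math.degrees(flat), flat)       # 5.7248 degrees, 0.099917 radians
    print(math.degrees(sphere), sphere)   # 5.7320 degrees, 0.100042 radians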
Prove for which $n \in \mathbb{N}$: $9n^3 - 3 ≤ 8^n$ A homework assignment requires me to find out and prove using induction for which $n ≥ 0$ $9n^3 - 3 ≤ 8^n$ and I have conducted multiple approaches and consulted multiple people and other resources with limited success. I appreciate any hint you can give me. Thanks in advance.
Let $f(n)=9n^3-3$, and let $g(n)=8^n$. We compute a little, to see what is going on. We have $f(0) \le g(0)$; $f(1)\le g(1)$; $f(2) > g(2)$; $f(3) \le g(3)$; $f(4) \le g(4)$. Indeed $f(4)=573$ and $g(4)=4096$, so it's not even close. The exponential function $8^x$ ultimately grows incomparably faster than the polynomial $9x^3-3$. So it is reasonable to conjecture that $9n^3-3 \le 8^n$ for every non-negative integer $n$ except $2$. We will show by induction that $9n^3-3 \le 8^n$ for all $n \ge 3$. It is natural to work with ratios. We show that $$\frac{8^n}{9n^3-3} \ge 1$$ for all $n \ge 3$. The result certainly holds at $n=3$. Suppose that for a given $n \ge 3$, we have $\frac{8^n}{9n^3-3} \ge 1$. We will show that $\frac{8^{n+1}}{9(n+1)^3-3} \ge 1$. Note that $$\frac{8^{n+1}}{9(n+1)^3-3}=8 \frac{9n^3-3}{9(n+1)^3-3}\frac{8^n}{9n^3-3}.$$ By the induction hypothesis, we have $\frac{8^n}{9n^3-3} \ge 1$. So all we need to do is to show that $$8 \frac{9n^3-3}{9(n+1)^3-3} \ge 1,$$ or equivalently that $$\frac{9(n+1)^3-3}{9n^3-3} \le 8.$$ If $n\ge 3$, the denominator is greater than $8n^3$, and the numerator is less than $9(n+1)^3$. Thus, if $n \ge 3$, then $$\frac{9(n+1)^3-3}{9n^3-3} <\frac{9}{8}\frac{(n+1)^3}{n^3}=\frac{9}{8}\left(1+\frac{1}{n}\right)^3.$$ But if $n \ge 3$, then $(1+1/n)^3\le (1+1/3)^3<2.5$, so $\frac{9}{8}(1+1/n)^3<8$, with lots of room to spare.
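A brute-force check of the conjecture (my own verification, separate from the induction): the inequality fails only at $n=2$.

    for n in range(0, 30):
        if not (9 * n**3 - 3 <= 8**n):
            print(n)        # prints 2, and nothing else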
Real elliptic curves in the fundamental domain of $\Gamma(2)$ An elliptic curve (over $\mathbf{C}$) is real if its j-invariant is real. The set of real elliptic curves in the standard fundamental domain of $\mathrm{SL}_2(\mathbf{Z})$ can be explicitly described. In the standard fundamental domain $$F(\mathbf{SL}_2(\mathbf{Z}))=\left\{\tau \in \mathbf{H} : -\frac{1}{2} \leq \Re(\tau) \leq 0, \vert \tau \vert \geq 1 \right\} \cup \left\{ \tau \in \mathbf{H} : 0 < \Re(\tau) < \frac{1}{2}, \vert \tau \vert > 1\right\},$$ the set of real elliptic curves is the boundary of this fundamental domain together with the imaginary axis lying in this fundamental domain. Let $F$ be the standard fundamental domain for $\Gamma(2)$. Can one describe what the set of real elliptic curves in this fundamental domain looks like? Of course, the set of real elliptic curves in $F(\mathbf{SL}_2(\mathbf{Z}))$ is contained in the set of real elliptic curves in $F$. But there should be more.
Question answered in the comments by David Loeffler. I'm not sure which fundamental domain for $\Gamma(2)$ you consider to be "standard"? But whichever one you go for, it'll just be the points in your bigger domain whose $SL_2(\Bbb{Z})$ orbit contains a point of the set you just wrote down.
Order of finite fields is $p^n$ Let $F$ be a finite field. How do I prove that the order of $F$ is always of order $p^n$ where $p$ is prime?
A slight variation on caffeinmachine's answer that I prefer, because I think it shows more of the structure of what's going on: Let $F$ be a finite field (and thus has characteristic $p$, a prime). * *Every element of $F$ has order $p$ in the additive group $(F,+)$. So $(F,+)$ is a $p$-group. *A group is a $p$-group iff it has order $p^n$ for some positive integer $n$. The first claim is immediate, by the distributive property of the field. Let $x \in F, \ x \neq 0_F$. We have \begin{align} p \cdot x &= p \cdot (1_{F} x) = (p \cdot 1_{F}) \ x \\ & = 0 \end{align} This is the smallest positive integer for which this occurs, by the definition of the characteristic of a field. So $x$ has order $p$. The part that we need of the second claim is a well-known corollary of Cauchy's theorem (the reverse direction is just an application of Lagrange's theorem).
Variance of sample variance? What is the variance of the sample variance? In other words I am looking for $\mathrm{Var}(S^2)$. I have started by expanding out $\mathrm{Var}(S^2)$ into $E(S^4) - [E(S^2)]^2$. I know that $[E(S^2)]^2$ equals $\sigma^4$. And that is as far as I got.
Maybe, this will help. Let's suppose the samples are taking from a normal distribution. Then using the fact that $\frac{(n-1)S^2}{\sigma^2}$ is a chi squared random variable with $(n-1)$ degrees of freedom, we get $$\begin{align*} \text{Var}~\frac{(n-1)S^2}{\sigma^2} & = \text{Var}~\chi^{2}_{n-1} \\ \frac{(n-1)^2}{\sigma^4}\text{Var}~S^2 & = 2(n-1) \\ \text{Var}~S^2 & = \frac{2(n-1)\sigma^4}{(n-1)^2}\\ & = \frac{2\sigma^4}{(n-1)}, \end{align*}$$ where we have used that fact that $\text{Var}~\chi^{2}_{n-1}=2(n-1)$. Hope this helps.
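A quick simulation of this formula for normal samples (my own check; numpy's `var` with `ddof=1` is the usual unbiased $S^2$):

    import numpy as np

    rng = np.random.default_rng(0)
    n, sigma = 10, 2.0
    samples = rng.normal(0.0, sigma, size=(200_000, n))
    s2 = samples.var(axis=1, ddof=1)       # one S^2 per simulated sample
    print(s2.var())                        # empirical Var(S^2)
    print(2 * sigma**4 / (n - 1))          # formula: ~3.556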
Find $a$ and $b$ for which this expression $x\bullet y=(a+x)(b+y)$ is associative I need to find $a$ and $b$ for which this expression is associative: $$x\bullet y=(a+x)(b+y)$$ It is also known that $$x,y\in Z$$ So firstly I write: $$(x\bullet y)\bullet z=x\bullet (y\bullet z)$$ Then I expand both sides, and after a little algebra I get: $$ab^2+b^2x+az+abz+bxz=a^2b+a^2z+bx+abx+axz$$ And I don't know what to do now. I don't know if I should do it, but I tried setting $x=0$ and $z=1$ and nothing better happened. Could you please share some ideas? EDIT: Sorry, but I forgot to mention that I need to find such $a$ and $b$ for which it is associative
Assuming that you still want the element $1 \in \mathbb{Z}$ to be a multiplicative identity element with your new multiplication, here is an alternative way of quickly seeing that $a=b=0$. For an arbitrary $y\in \mathbb{Z}$ we shall have: $y=1 \bullet y=(a+1)(b+y)=ab+ay+b+y=(a+1)b+(a+1)y$ This is satisfied if and only if $a+1=1$ and $(a+1)b=0$. The unique solution is $a=0$ and $b=0$.
Calculating the exponential of a $4 \times 4$ matrix Find $e^{At}$, where $$A = \begin{bmatrix} 1 & -1 & 1 & 0\\ 1 & 1 & 0 & 1\\ 0 & 0 & 1 & -1\\ 0 & 0 & 1 & 1\\ \end{bmatrix}$$ So, let me just find $e^{A}$ for now and I can generalize later. I notice right away that I can write $$A = \begin{bmatrix} B & I_{2} \\ 0_{22} & B \end{bmatrix}$$ where $$B = \begin{bmatrix} 1 & -1\\ 1 & 1\\ \end{bmatrix}$$ I'm sort of making up a method here and I hope it works. Can someone tell me if this is correct? I write: $$A = \mathrm{diag}(B,B) + \begin{bmatrix}0_{22} & I_{2}\\ 0_{22} & 0_{22}\end{bmatrix}$$ Call $S = \mathrm{diag}(B,B)$, and $N = \begin{bmatrix}0_{22} & I_{2}\\ 0_{22} & 0_{22}\end{bmatrix}$. I note that $N^2$ is $0_{44}$, so $$e^{N} = \frac{N^{0}}{0!} + \frac{N}{1!} + \frac{N^2}{2!} + \cdots = I_{4} + N + 0_{44} + \cdots = I_{4} + N$$ and that $e^{S} = \mathrm{diag}(e^{B}, e^{B})$ and compute: $$e^{A} = e^{S + N} = e^{S}e^{N} = \mathrm{diag}(e^{B}, e^{B})\cdot[I_{4} + N]$$ This reduces the problem to finding $e^B$, which is much easier. Is my logic correct? I just started writing everything as a block matrix and proceeded as if nothing about the process of finding the exponential of a matrix would change. But I don't really know the theory behind this I'm just guessing how it would work.
Consider $M(t) = \exp(t A)$, and as you noticed, it has the block-triangular form $$ M(t) = \left(\begin{array}{cc} \exp(t B) & n(t) \\ 0_{2 \times 2} & \exp(t B) \end{array} \right). $$ Notice that $M^\prime(t) = A \cdot M(t)$, and this results in the following differential equation for the $n(t)$ block: $$ n^\prime(t) = \mathbb{I}_{2 \times 2} \cdot \exp(t B) + B \cdot n(t) $$ which translates into $$ \frac{\mathrm{d}}{\mathrm{d} t} \left( \exp(-t B) n(t) \right) = \mathbb{I}_{2 \times 2} $$ which is to say that $n(t) = t \exp(t B)$.
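The conclusion $n(t)=t\exp(tB)$ is easy to confirm numerically; a sketch of my own using `scipy.linalg.expm`:

    import numpy as np
    from scipy.linalg import expm

    A = np.array([[1., -1., 1., 0.],
                  [1.,  1., 0., 1.],
                  [0.,  0., 1., -1.],
                  [0.,  0., 1., 1.]])
    B = A[:2, :2]
    t = 0.7
    M = expm(t * A)
    print(np.allclose(M[:2, :2], expm(t * B)))       # top-left block is exp(tB)
    print(np.allclose(M[2:, 2:], expm(t * B)))       # bottom-right block too
    print(np.allclose(M[:2, 2:], t * expm(t * B)))   # n(t) = t exp(tB)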
Index notation for tensors: is the spacing important? While reading physics textbooks I often come across notation like this; $$J_{\alpha}{}^{\beta},\ \Gamma_{\alpha \beta}{}^{\gamma}, K^\alpha{}_{\beta}.$$ Notice the spacing in indices. I don't understand why they do not write simply $J_{\alpha}^\beta, \Gamma_{\alpha \beta}^\gamma, K^\alpha_{\beta}$.
It's important to keep track of the ordering if you want to use a metric to raise and lower indices freely (without explicitly writing out $g_{ij}$'s all the time). For example (using Penrose abstract index notation), if you raise the index $a$ on the tensor $K_{ab}$, then you get $K^a{}_b (=g^{ac} K_{cb})$, whereas if you raise the index $a$ on the tensor $K_{ba}$, you get $K_b{}^a (=g^{ac}K_{bc})$. Since the tensors $K^a{}_b$ and $K_b{}^a$ act differently on $X_a Y^b$ (unless $K$ happens to be symmetric, i.e., $K_{ab}=K_{ba}$), one doesn't want to denote them both by $K^a_b$.
Solve for equation algebraically Is it possible to write the following function as $H(x)=$ some expression? $$D(x) = H(x) + H(x-1)$$ Edit: Hey everyone, thanks for all the great responses, and just to clarify: $H(x)$ and $D(x)$ are always going to be polynomials. I wasn't sure if that made too big a difference, so I didn't mention it before. Edit 2: I'm sincerely sorry if I was too general; I've only really used polynomial equations in my education thus far, so I didn't realize they might not seem routine to everyone else. Once again I'm very sorry if I didn't give everyone an opportunity to answer the question because of my vagueness. I sincerely appreciate all of your answers and time.
$H(x)$ could be defined as anything on $[0,1)$, then the relation $$ D(x) = H(x) + H(x-1)\tag{1} $$ would define $H(x)$ on the rest of $\mathbb{R}$. For example, for $x\ge0$, $$ H(x)=(-1)^{\lfloor x\rfloor}H(x-\lfloor x\rfloor)+\sum_{k=0}^{\lfloor x\rfloor-1}(-1)^kD(x-k)\tag{2} $$ and for $x<0$, $$ H(x)=(-1)^{\lfloor x\rfloor}H(x-\lfloor x\rfloor)+\sum_{k=\lfloor x\rfloor}^{-1}(-1)^kD(x-k)\tag{3} $$ Polynomial solutions: Some interest was shown for polynomial solutions when $D(x)$ is a polynomial. As was done in another recent answer, we can use Taylor's Theorem to show that $$ e^{-\mathcal{D}}P(x)=P(x-1)\tag{4} $$ where $\mathcal{D}=\dfrac{\mathrm{d}}{\mathrm{d}x}$, at least for polynomial $P$. Using $(4)$, $(1)$ becomes $$ D(x)=\left(I+e^{-\mathcal{D}}\right)H(x)\tag{5} $$ Noting that $$ (1+e^{-x})^{-1}=\tfrac{1}{2}+\tfrac{1}{4}x-\tfrac{1}{48}x^3+\tfrac{1}{480}x^5+\dots\tag{6} $$ we can get a polynomial solution of $(1)$ with $$ H(x)=\left(\tfrac{1}{2}I+\tfrac{1}{4}\mathcal{D}-\tfrac{1}{48}\mathcal{D}^3+\tfrac{1}{480}\mathcal{D}^5+\dots\right)D(x)\tag{7} $$ Polynomial example: For example, if $D(x)=x^2$, then using $(7)$, $H(x)=\tfrac{1}{2}x^2+\tfrac{1}{2}x$. Check: $$\left(\tfrac{1}{2}x^2+\tfrac{1}{2}x\right)+\left(\tfrac{1}{2}(x-1)^2+\tfrac{1}{2}(x-1)\right)=x^2$$
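The polynomial example is easy to verify with a computer algebra system; a one-off check of my own:

    import sympy as sp

    x = sp.symbols('x')
    H = sp.Rational(1, 2) * x**2 + sp.Rational(1, 2) * x
    print(sp.expand(H + H.subs(x, x - 1)))   # x**2, as claimed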
Limiting distribution of sum of normals How would I go about solving this problem below? I am not exactly sure where to start. I know that I need to make use of the Lebesgue Dominated Convergence theorem as well. Thanks for the help. Let $X_1, X_2, \ldots, X_n$ be a random sample of size $n$ from a distribution that is $N(\mu, \sigma^2)$ where $\sigma^2 > 0$. Show that $Z_n = \sum\limits_{i=1}^n X_i$ does not have a limiting distribution.
Another way to show $P(Z_n\le z)=0$ for all $z$ as Did suggests: Note that $Z_n\sim N(n\mu, n\sigma^2)$, so for some fixed real number $z$, $P(Z_n\le z)=\frac{1}{2}\biggr[1+erf\biggr(\frac{z-n\mu}{\sqrt{2n\sigma^2}}\biggr)\biggr]\to0$ as $n\to\infty$, since $\frac{z-n\mu}{\sqrt{2n\sigma^2}}\to-\infty$ and $\lim_{x\to-\infty}erf(x)=-1$. Hence the limit of the CDFs is equal to 0 everywhere, and cannot limit to a CDF. [I use the definition of erf from the Wikipedia page: https://en.wikipedia.org/wiki/Normal_distribution]
How to integrate this trigonometric function? The question is $ \displaystyle \int{ \frac{1-r^{2}}{1-2r\cos(\theta)+r^{2}}} d\theta$. I know that the Weierstrass substitution can be used to solve it, but I do not have any idea how.
There's a Wikipedia article about this technique: Weierstrass substitution. Notice that what you've got here is $\displaystyle\int\frac{d\theta}{a+b\cos\theta}$. The factor $1-r^2$ pulls out, and $a=1+r^2$ and $b=-2r$.
How to prove $\lim_{n \to \infty} \sqrt{n}(\sqrt[n]{n} - 1) = 0$? I want to show that $$\lim_{n \to \infty} \sqrt{n}(\sqrt[n]{n}-1) = 0$$ and my assistant teacher gave me the hint to find a proper estimate for $\sqrt[n]{n}-1$ in order to do this. I know how one shows that $\lim_{n \to \infty} \sqrt[n]{n} = 1$, to do this we can write $\sqrt[n]{n} = 1+x_n$, raise both sides to the n-th power and then use the binomial theorem (or to be more specific: the term to the second power). However, I don't see how this or any other trivial term (i.e. the first or the n-th) could be used here. What estimate am I supposed to find or is there even a simpler way to show this limit? Thanks for any answers in advance.
The OP's attempt can be pushed to get a complete proof. $$ n = (1+x_n)^n \geq 1 + nx_n + \frac{n(n-1)}{2} x_n^2 + \frac{n(n-1)(n-2)}{6} x_n^3 > \frac{n(n-1)(n-2) x_n^3}{6} > \frac{n^3 x_n^3}{8}, $$ provided $n$ is "large enough" 1. Therefore, (again, for large enough $n$,) $x_n < 2 n^{-2/3}$, and hence $\sqrt{n} x_n < 2n^{-1/6}$. Thus $\sqrt{n} x_n$ approaches $0$ by the sandwich (squeeze) theorem. 1In fact, you should be able to show that for all $n \geq 12$, we have $$ \frac{n(n-1)(n-2)}{6} > \frac{n^3}{8} \iff \left( 1-\frac1n \right) \left( 1- \frac2n \right) \geq \frac34. $$
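Numerically one can watch the bound $\sqrt{n}\,x_n<2n^{-1/6}$ take hold (my own illustration):

    import math

    for n in [10, 10**3, 10**6, 10**9]:
        x_n = n ** (1 / n) - 1
        print(n, math.sqrt(n) * x_n, 2 * n ** (-1 / 6))
    # the middle column tends to 0 and stays below the bound in the last column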
Fractional cardinalities of sets Is there any extension of the usual notion of cardinalities of sets such that there is some sets with fractional cardinalities such as 5/2, ie a set with 2.5 elements, what would be an example of such a set? Basically is there any consistent set theory where there is a set whose cardinality is less than that of {cat,dog,fish} but greater than that of {47,83} ?
One can extend the notion of cardinality to include negative and non-integer values by using the Euler characteristic and homotopy cardinality. For example, the space of finite sets has homotopy cardinality $e=\frac{1}{0!}+\frac{1}{1!}+\frac{1}{2!}+\dotsi$. The idea is to sum over each finite set, inversely weighted by the size of their symmetry group. John Baez discusses this in detail on his blog. He has plenty of references, as well as lecture notes, course notes, and blog posts about the topic here. The first sentence on the linked page: "We all know what it means for a set to have 6 elements, but what sort of thing has -1 elements, or 5/2? Believe it or not, these questions have nice answers." -Baez
A number when successively divided by $9$, $11$ and $13$ leaves remainders $8$, $9$ and $8$ respectively A number when successively divided by $9$, $11$ and $13$ leaves remainders $8$, $9$ and $8$ respectively. The answer is $881$, but how? Any clue about how this is solved?
First, when the number is divided by $9$, the remainder is $8$, so $N = 9x+8$. Similarly, $x = 11y+9$ and $y=13z+8$. So $N = 99y+89 = 99(13z+8)+89 = 1287z+792+89 = 1287z+881$. Hence $N$ is of the form $1287\cdot(\text{a whole number})+881$. If you need to find the minimum such number, it is $881$. Hope that helps.
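Note that in successive division each quotient, not the original number, is divided by the next divisor; a quick check of my own that $881$ works:

    N = 881
    q1, r1 = divmod(N, 9)     # r1 = 8
    q2, r2 = divmod(q1, 11)   # r2 = 9
    q3, r3 = divmod(q2, 13)   # r3 = 8
    print(r1, r2, r3)         # 8 9 8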
existence and uniqueness of Hermite interpolation polynomial What are the proofs of existence and uniqueness of the Hermite interpolation polynomial? Suppose $x_{0},\ldots,x_{n}$ are distinct nodes and, for $i=1,\ldots,n$, the $m_{i}$ are natural numbers. Prove that there exists a unique polynomial $H_{N}$ of degree $N=m_{1}+\ldots+m_{n}-1$ satisfying $H_{N}^{(k)}(x_i)=y_{i}^{(k)}$ for $k=0,1,\ldots,m_{i}$ and $i=0,1,\ldots,n$?
I think you've got your indices mixed up a bit; they're sometimes starting at $0$ and sometimes at $1$. I'll assume that the nodes are labeled from $1$ to $n$ and the first $m_i$ derivatives at $x_i$ are determined, that is, the derivatives from $0$ to $m_i-1$. A straightforward proof consists in showing how to construct a basis of polynomials $P_{ik}$ that have non-zero $k$-th derivative at $x_i$ and zero for all other derivatives and nodes. For given $i$, start with $k=m_i-1$ and set $$P_{i,m_i-1}=(x-x_i)^{m_i-1}\prod_{j\ne i}(x-x_j)^{m_j}\;.$$ Then decrement $k$ in each step. Start with $$Q_{i,k}=(x-x_i)^k\prod_{j\ne i}(x-x_j)^{m_j}\;,$$ which has zero derivatives up to $k$ at $x_i$, and subtract out multiples of the $P_{i,k'}$ with $k'\gt k$, which have already been constructed, to make the $k'$-th derivatives at $x_i$ with $k'\gt k$ zero. Doing this for all $i$ yields a basis whose linear combinations can have any given values for the derivatives. Uniqueness follows from the fact that the number of these polynomials is equal to the dimension $d=\sum_i m_i$ of the vector space of polynomials of degree up to $d-1$. Since the $P_{ik}$ are linearly independent, there's no more room for one more that also satisfies one of the conditions, since it would have to be linearly independent of all the others.
Norm of adjoint operator in Hilbert space Suppose $H$ is a Hilbert space and let $T \in B(H,H)$ where in our notation $B(H,H)$ denotes the set of all linear continuous operators $H \rightarrow H$. We defined the adjoint of $T$ as the unique $T^* \in B(H,H)$ such that $\langle Tx,y \rangle = \langle x, T^*y\rangle$ for all $x,y$ in $H$. I proved its existence as follows: Fix $y \in H$. Put $\Phi_y: H \rightarrow \mathbb{C}, x \mapsto \langle Tx,y \rangle$. This functional is continuous since $|\langle Tx, y\rangle | \leq ||Tx||\; ||y|| \leq ||T||\; ||x||\; ||y||$. Therefore we can apply the Riesz-Fréchet theorem which gives us the existence of a vector $T^*y \in H$ such that for all $x \in H$ we have $\langle Tx, y\rangle = \langle x, T^* y\rangle$. I now have to prove that $||T^*|| = ||T||$. I can show $||T^*|| \leq ||T||$: Since the Riesz theorem gives us an isometry we have $||T^*y|| = ||\Phi_y||_{H*} = \sup_{||x||\leq 1} |\langle Tx, y\rangle| \leq ||T||\;||y||$ and thus $||T^*|| \leq ||T||$. However, I do not see how to prove the other inequality without using consequences of Hahn-Banach or similar results. It seems like I am missing some quite simple point. Any help is very much appreciated! Regards, Carlo
Why don't you look at what $T^{**}$ is ...
Does Zorn Lemma imply the existence of a (not unique) maximal prolongation of any solution of an ode? Let be given a map $F:(x,y)\in\mathbb{R}\times\mathbb{R}^n\to F(x,y)\in\mathbb{R}^n$. Let us denote by $\mathcal{P}$ the set whose elements are the solutions of the ode $y'=F(x,y)$, i.e. the differentiable maps $u:J\to\mathbb{R}^n$, where $J\ $ is some open interval in $\mathbb{R}\ $, s.t. $u'(t)=F(t,u(t))$ for all $t\in J$. Let $\mathcal{P}$ be endowed with the ordering by extension. In order to prove that any element of $\mathcal{P}$ is extendable to a (not unique) maximal element, without particular hypothesis on $F$, I was wondering if the Zorn lemma can be used.
Yes, Zorn's Lemma should be all you need. Take the set of partial solutions that extend your initial solution, and order them by the subset relation under the common definition of a function as the set of pairs $\langle x, f(x)\rangle$. Then the union of all functions in a chain will be another partial solution, so Zorn's Lemma applies.
Prove an inequality by Induction: $(1-x)^n + (1+x)^n < 2^n$ Could you give me some hints, please, to the following problem. Given $x \in \mathbb{R}$ such that $|x| < 1$. Prove by induction the following inequality for all $n \geq 2$: $$(1-x)^n + (1+x)^n < 2^n$$ $1$ Basis: $$n=2$$ $$(1-x)^2 + (1+x)^2 < 2^2$$ $$(1-2x+x^2) + (1+2x+x^2) < 2^2$$ $$2+2x^2 < 2^2$$ $$2(1+x^2) < 2^2$$ $$1+x^2 < 2$$ $$x^2 < 1 \implies |x| < 1$$ $2$ Induction Step: $n \rightarrow n+1$ $$(1-x)^{n+1} + (1+x)^{n+1} < 2^{n+1}$$ $$(1-x)(1-x)^n + (1+x)(1+x)^n < 2·2^n$$ I tried to split it into $3$ cases: $x=0$ (then it's true), $-1<x<0$ and $0<x<1$. Could you tell me please, how should I move on. And do I need a binomial theorem here? Thank you in advance.
The proof by induction is natural and fairly straightforward, but it’s worth pointing out that induction isn’t actually needed for this result if one has the binomial theorem at hand: Corrected: $$\begin{align*} (1-x)^n+(1+x)^n &= \sum_{k=0}^n\binom{n}k (-1)^kx^k + \sum_{k=0}^n \binom{n}k x^k\\ &= \sum_{k=0}^n\binom{n}k \left((-1)^k+1\right)x^k\\ &= 2\sum_{k=0}^{\lfloor n/2\rfloor}\binom{n}{2k}x^{2k}\\ &< 2\sum_{k=0}^{\lfloor n/2\rfloor}\binom{n}{2k}\tag{1}\\ &= 2\cdot 2^{n-1}\tag{2}\\ &= 2^n, \end{align*}$$ where the inequality in $(1)$ holds because $|x|< 1$, and $(2)$ holds for $n>0$ because $\sum\limits_{k=0}^{\lfloor n/2\rfloor}\binom{n}{2k}$, the number of subsets of $[n]=\{1,\dots,n\}$ of even cardinality, is equal to the number of subsets of $[n]$ of odd cardinality.
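A quick numerical sanity check of the inequality on a grid of $x$ values (my own, not part of the proof):

    import numpy as np

    xs = np.linspace(-0.999, 0.999, 2001)
    for n in range(2, 13):
        assert np.all((1 - xs) ** n + (1 + xs) ** n < 2 ** n)
    print("holds on the grid for n = 2..12")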
Solving a recurrence using substitutions I have to solve this recurrence using substitutions: $(n+1)(n-2)a_n=n(n^2-n-1)a_{n-1}-(n-1)^3a_{n-2}$ with $a_2=a_3=1$. The only useful substitution that I see is $b_n=(n+1)a_n$, but I don't know how to go on, could you help me please?
So if $b_n=(n+1)a_n$, then $b_{n-1}=na_{n-1}$, and $b_{n-2}=(n-1)a_{n-2}$, and your equation becomes $$(n-2)b_n=(n^2-n-1)b_{n-1}-(n-1)^2b_{n-2}$$ which is a little simpler than what you started with, though I must admit I can't see offhand any easy way to get to a solution from there. Are you working from a text or some notes that have other examples of solving via substitution? Maybe there's a hint in some other example as to how best to proceed.
Is $\sum_{n=0}^{\infty}2^n$ equal to $-1$? Why? Possible Duplicate: Divisibility with sums going to infinity From Wikipedia and Minute Physics i see that the sum would be -1. I find this challenging to understand, how does a sum of positive integers end up being negative?
It's all about the principle of analytic continuation. The function $f(z)=\sum_{n=0}^\infty z^n$ defines an analytic function in the unit disk, equal to the meromorphic function $g(z)=1/(1-z)$. Note that the equality $f\equiv g$ holds only in the disk, where $f$ converges absolutely. Despite this, if we naively wanted to assign any value to $f(z)$ outside of $\vert z \vert < 1$, an intuitive choice would be to set $f(2)=g(2)= -1$. Moreover, the theory of complex functions implies that this is somehow the only possible choice (for this function, at least). The principle is called analytic continuation, and it is incredibly important in many areas of mathematics, most notably complex analysis.
Why is $(0, 0)$ not a minimum of $f(x, y) = (y-3x^2)(y-x^2)$? There is an exercise in my lists about those functions: $$f(x, y) = (y-3x^2)(y-x^2) = 3 x^4-4 x^2 y+y^2$$ $$g(t) = f(vt) = f(at, bt); a, b \in \mathbf{R}$$ It asks me to prove that $t = 0$ is a local minimum of $g$ for all $a, b \in \mathbf{R}$. I did it easily: $$g(t) = 3 a^4 t^4-4 a^2 t^2 b t+b^2 t^2$$ $$g'(t) = 2 b^2 t-12 a^2 b t^2+12 a^4 t^3$$ $$g''(t) = 2 b^2-24 a^2 b t+36 a^4 t^2$$ It is a critical point: $$g'(0) = 0; \forall a, b$$ It is a local minimum for all $a, b$: $$g''(0) = 2b^2 > 0; \forall b \ne 0$$ and $$b = 0 \implies g(t) = 3 a^4 t^4$$ which has only one minimum, at $0$, and no maximum. However, it also asks me to prove that $(0, 0)$ is not a local minimum of $f$. How can this be possible? I mean, if $(0, 0)$ is a minimum over every straight line that passes through it, then, at this point, $f$ should be increasing in all directions, no?
Draw the set of points in the $xy$-plane where $f(x,y) = 0$. Then look at the regions of the plane that are created and figure out in which of them $f(x,y)$ is positive and in which $f(x,y)$ is negative. From there, you should be able to prove that $(0,0)$ is neither a local maximum nor a local minimum point. (Hint: Using your sketch, is there any disk centered at $(0,0)$ in which $f(x,y)$ takes its minimum value at $(0,0)$?) As important as understanding why the phenomenon you are observing is happening (no local minimum at the origin even though the function restricted to any straight line through the origin has a local minimum there) is to figure out how you would construct such a function and why the example given is, in some sense, the simplest kind of example you could construct. The idea used here is very similar to creating an example which shows that $\lim_{(x,y) \rightarrow (0,0)} f(x,y)$ may not exist even if the limit exists along all straight lines approaching the origin. What you're really learning in this example is that the seemingly reasonable intuition that we would naturally have that we can understand a function of two variables near a point by understanding all of the functions of one variable obtained by restricting the function to lines through that point is misguided -- in particular, it fails if we are trying to find local maxima and minima.
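To make the hint concrete with one sample curve (my own illustration, and admittedly a spoiler for the sketch): the parabola $y=2x^2$ lies between the two zero curves $y=x^2$ and $y=3x^2$, and there $f(x,2x^2)=(2x^2-3x^2)(2x^2-x^2)=-x^4<0$ arbitrarily close to the origin, while points above both parabolas give positive values:

    f = lambda x, y: (y - 3 * x**2) * (y - x**2)

    for x in [0.1, 0.01, 0.001]:
        print(f(x, 2 * x**2))   # negative (= -x**4): between the two parabolas
        print(f(x, 4 * x**2))   # positive: above both parabolas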
Statistics: Maximising expected value of a function of a random variable An agent wishes to solve his optimisation problem: $ \mbox{max}_{\theta} \ \ \mathbb{E}U(\theta S_1 + (w - \theta) + Y)$, where $S_1$ is a random variable, $Y$ a contingent claim and $U(x) = x - \frac{1}{2}\epsilon x^2$. My problem is: how do I 'get rid' of '$\mathbb{E}$', to get something I can work with? Thanks
Expanding the comment by Ilya: $$\mathbb{E}\,U(\theta S_1 + (w - \theta) + Y) =\mathbb{E} (\theta S_1 + (w - \theta) + Y) - \frac{\epsilon}{2} \mathbb{E} \left((\theta S_1 + (w - \theta) + Y)^2\right) $$ is a quadratic polynomial in $\theta $ with negative leading coefficient. Its unique point of maximum is found by setting the derivative to $0$.
Are any two Cantor sets, "fat" and "standard", diffeomorphic to each other? All: I know any two Cantor sets, "fat" and "standard" (middle-thirds), are homeomorphic to each other. Still, are they diffeomorphic to each other? I think yes, since they are both $0$-dimensional manifolds (###), and any two $0$-dimensional manifolds are diffeomorphic to each other. Still, I saw an argument somewhere where the claim is that the two are not diffeomorphic. The argument is along these lines: let $f$ be the characteristic function of a fat Cantor set $C'$; its integral $\int_a^b f(y)dy$ is positive, since $C'$ has positive Lebesgue measure. If $g$ were a diffeomorphism taking the standard Cantor set $C$ to $C'$, then $f(g(x))$ would be the characteristic function of $C$, which has (Lebesgue) measure zero, so $\int_0^1 f(g(x))g'(x)dx=0$. But by the chain rule, the change of variables $\int_0^1 f(g(x))g'(x)dx$ should equal $\int_a^b f(y)dy>0$. So the change-of-variables formula is contradicted by the assumption of the existence of the diffeomorphism $g$ between $C$ and $C'$. Is this right? (###)EDIT: I realized after posting --simultaneously with "Lost in Math"* , that the Cantor sets {C} are not 0-dimensional manifolds (for one thing, C has no isolated points). The problem then becomes, as someone posted in the comments, one of deciding if there is a differentiable map $f:[0,1]\rightarrow [0,1]$ taking C to C' with a differentiable inverse. *I mean, who isn't, right?
I think I found an answer to my question, coinciding with the idea in Ryan's last paragraph: absolute continuity takes sets of measure zero to sets of measure zero. A diffeomorphism defined on $[0,1]$ is Lipschitz continuous, since it has a bounded first derivative (by continuity of $f'$ and compactness of $[0,1]$), so it is absolutely continuous, and therefore it would take sets of measure zero to sets of measure zero.
Explicit formula for Fermat's 4k+1 theorem Let $p$ be a prime number of the form $4k+1$. Fermat's theorem asserts that $p$ is a sum of two squares, $p=x^2+y^2$. There are different proofs of this statement (descent, Gaussian integers,...). And recently I've learned there is the following explicit formula (due to Gauss): $x=\frac12\binom{2k}k\pmod p$, $y=(2k)!x\pmod p$ ($|x|,|y|<p/2$). But how to prove it? Remark. In another thread Matt E also mentions a formula $$ x=\frac12\sum\limits_{t\in\mathbb F_p}\left(\frac{t^3-t}{p}\right). $$ Since $\left(\dfrac{t^3-t}p\right)=\left(\dfrac{t-t^{-1}}p\right)=(t-t^{-1})^{2k}\mod p$ (and $\sum t^i=0$ when $0<i<p-1$), this is, actually, the same formula (up to a sign).
Here is a high level proof. I assume it can be done in a more elementary way. Chapter 3 of Silverman's Arithmetic of Elliptic Curves is a good reference for the ideas I am using. Let $E$ be the elliptic curve $y^2 = x^3+x$. By a theorem of Weil, the number of points on $E$ over $\mathbb{F}_p$ is $p- \alpha- \overline{\alpha}$ where $\alpha$ is an algebraic integer satisfying $\alpha \overline{\alpha} =p$, and the bar is complex conjugation. (If you count the point at $\infty$, then the formula should be $p - \alpha - \overline{\alpha} +1$.) Let $p \equiv 1 \mod 4$. We will establish two key claims: Claim 1: $\alpha$ is of the form $a+bi$, for integers $a$ and $b$, and Claim 2: $-2a \equiv \binom{(p-1)/2}{(p-1)/4} \mod p$. So $a^2+b^2 = p$ and $a \equiv -\frac{1}{2} \binom{(p-1)/2}{(p-1)/4}$, as desired. Proof sketch of Claim 1: Let $R$ be the endomorphism ring of $E$ over $\mathbb{F}_p$. Let $j$ be a square root of $-1$ in $\mathbb{F}_p$. Two of the elements of $R$ are $F: (x,y) \mapsto (x^p, y^p)$ and $J: (x,y) \mapsto (-x,jy)$. Note that $F$ and $J$ commute; this uses that $j^p = j$, which is true because $p \equiv 1 \mod 4$. So $F$ and $J$ generate a commutative subring of $R$. If you look at the list of possible endomorphism rings of elliptic curves, you'll see that such a subring must be of rank $\leq 2$, and $J$ already generates a subring of rank $2$. (See section 3.3 in Silverman.) So $F$ is integral over the subring generated by $J$. That ring is $\mathbb{Z}[J]/\langle J^2=-1 \rangle$, which is integrally closed. So $F$ is in that ring, meaning $F = a+bJ$ for some integers $a$ and $b$. If you understand the connection between Frobenius actions and points of $E$ over $\mathbb{F}_p$, this shows that $\alpha = a+bi$. Proof sketch of Claim 2: The number of points on $E$ over $\mathbb{F}_p$ is congruent modulo $p$ to the coefficient of $x^{p-1}$ in $(x^3+x)^{(p-1)/2}$ (section 3.4 in Silverman). This coefficient is $\binom{(p-1)/2}{(p-1)/4}$. So $$- \alpha - \overline{\alpha} \equiv \binom{(p-1)/2}{(p-1)/4} \mod p$$ or $$-2a \equiv \binom{(p-1)/2}{(p-1)/4} \mod p$$ as desired. Remark: This is very related to the formula Matt E mentions. For $u \in \mathbb{F}_p$, the number of square roots of $u$ in $\mathbb{F}_p$ is $1+\left( \frac{u}{p} \right)$. So the number of points on $E$ is $$p+\sum_{x \in \mathbb{F}_p} \left( \frac{x^3+x}{p} \right).$$ This is essentially Matt's sum; if you want, you could use the elliptic curve $y^2 = x^3-x$ in order to make things exactly match, although that would introduce some signs in other places. So your remark gives another (morally, the same) proof of Claim 2.
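One can test the resulting congruence $x\equiv\frac12\binom{2k}{k}\pmod p$ with $|x|<p/2$ (from the question) against small primes $p\equiv 1\pmod 4$; a quick script of my own (the modular inverse `pow(2, -1, p)` needs Python 3.8+):

    from math import comb, isqrt

    for p in [5, 13, 17, 29, 37, 41, 53, 61]:
        k = (p - 1) // 4
        x = comb(2 * k, k) * pow(2, -1, p) % p   # (1/2) C(2k,k) mod p
        if x > p // 2:
            x -= p                       # normalize to |x| < p/2
        b2 = p - x * x
        assert isqrt(b2) ** 2 == b2      # p - x^2 is a perfect square
        print(p, x, isqrt(b2))           # p = x^2 + y^2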
what's the relationship between a.s. continuous and m.s. continuous? suppose that X(t) is a s.p. on T with $EX(t)^2<+\infty$. we give two kinds of continuity of X(t). * *X(t) is continuous a.s. *X(t) is m.s. continuous, i.e. $\lim\limits_{\triangle t \rightarrow 0}E(X(t+\triangle t)-X(t))^2=0$. Then, what's the relationship between these two kinds of continuity.
I don't know if there is a clear relation between both concepts. For example, if you take Brownian motion it satisfies 1 and 2, but if you take a Poisson process then it only satisfies 2 (although it satisfies a weaker form of condition 1, which is continuity in probability). The question is: what do you want to do with those processes? Regards
Simplify fraction - Where did the rest go? While studying maths I encountered the following fraction : $\frac{5ab}{10b}$ Which I then had to simplify. The answer I came up with is: $\frac{5ab}{10b} = \frac{ab}{2b}$ But the correct answer seemed to be: $\frac{5ab}{10b} = \frac{a}{2} = \frac{1}{2}$a Why is the above answer correct and mine wrong? I can't wrap my head around $b$ just disappearing like that.
To get from $\dfrac{5ab}{10b}$ to $\dfrac{ab}{2b}$ you probably divided the numerator and denominator each by $5$. Now divide them each by $b$ (if $b \not = 0$).
There exists a real number $c$ such that $A+cI$ is positive when $A$ is symmetric Without using the fact that symmetric matrices can be diagonalized: Let $A$ be a real symmetric matrix. Show that there exists a real number $c$ such that $A+cI$ is positive. That is, if $A=(a_{ij})$, one has to show that there exists real $c$ that makes $\sum_i a_{ii}x_i^2 + 2\sum_{i<j}a_{ij}x_ix_j + c\sum_i x_i^2 > 0$ for any vector $X=(x_1,...,x_n)^T$. This is an exercise in Lang's Linear Algebra. Thank you for your suggestions and comments.
Whether $x^TAx$ is positive doesn't depend on the normalization of $x$, so you only have to consider unit vectors. The unit sphere is compact, so the sum of the first two sums is bounded. The third sum is $1$, so you just have to choose $c$ greater than minus the lower bound of the first two sums.
Is there a reason why curvature is defined as the change in $\mathbf{T}$ with respect to arc length $s$ And not with respect to time $t$? (or whatever parameter one is using) $\displaystyle |\frac{d\mathbf{T}(t)}{\mathit{dt}}|$ seems more intuitive to me. I can also see that $\displaystyle |\frac{d\mathbf{T}(t)}{\mathit{ds}}| = |\frac{d\mathbf{r}'(t)}{dt}|$ (because $\displaystyle |\mathbf{r}'(t)| = \frac{ds}{dt}$, which does make sense, but I don't quite understand the implications of $\displaystyle |\frac{d\mathbf{T}(t)}{\mathit{dt}}|$ vs. $\displaystyle |\frac{d\mathbf{T}(t)}{\mathit{ds}}|$ and why the one was chosen over the other.
The motivation is that we want curvature to be a purely geometric quantity, depending on the set of points making up the line alone and not the parametric formula that happened to generate those points. $\left|\frac{dT}{dt}\right|$ does not satisfy this property: if I reparameterize by $t\to 2t$ for instance I get a curve that looks exactly the same as my original curve, but has twice the curvature. This isn't desirable. $\left|\frac{dT}{ds}\right|$ on the other hand has the advantage of being completely invariant, by definition, to parameterization (assuming some regularity conditions on the curve).
A question about hyperbolic functions Suppose $(x,y,z),(a,b,c)$ satisfy $$x^2+y^2-z^2=-1, z\ge 1,$$ $$ax+by-cz=0,$$ $$a^2+b^2-c^2=1.$$ Does it follow that $$z\cosh(t)+c\sinh(t)\ge 1$$ for all real number $t$?
The curve $(X_1,X_2,X_3)=\cosh(t)(x,y,z)+\sinh(t)(a,b,c), -\infty<t<\infty$ is continuous and satisfies $X_1^2+X_2^2-X_3^2=-\cosh^2(t)+\sinh^2(t)=-1$. One of its points, $(x,y,z)$ (when $t=0$), lies on the upper sheet $X_1^2+X_2^2-X_3^2=-1, X_3\ge 1$. By connectedness of the curve, the whole curve must lie in this connected component. Hence $z\cosh(t)+c\sinh(t)\ge 1$ for all $t$.
On a finite nilpotent group with a cyclic normal subgroup I'm reading Dummit & Foote, Sec. 6.1. My question is the following. If $G$ is a finite nilpotent group with a cyclic normal subgroup $N$ such that $G/N$ is also cyclic, when is $G$ abelian? I know that dihedral groups are not abelian, and I think the question is equivalent to every Sylow subgroup being abelian. EDIT: So, the real question is about finding all metacyclic finite p-groups that are not abelian. Thanks.
Rod, you are right when you say this can be brought back to every Sylow subgroup being abelian. Since $G$ is nilpotent you can reduce to $G$ being a $p$-group. However, a counterexample is easily found: take the quaternion group $G$ of order $8$, generated by $i$ and $j$ as usual. Let $N$ be the subgroup $\langle i\rangle$ of index $2$. $N$ satisfies your conditions but $G$ is not abelian.
What is the math notation for this type of function? A function that turns a real number into another real number can be represented like $f : \mathbb{R}\to \mathbb{R}$ What is the analogous way to represent a function that turns an unordered pair of elements of positive integers each in $\{1,...,n\}$ into a real number? I guess it would almost be something like $$f : \{1,...,n\} \times \{1,...,n\} \to \mathbb{R}$$ but is there a better notation that is more concise and that has the unorderedness?
I would say that it might be best to preface your notation with a sentence explaining it, which will allow the notation itself to be more compact, and generally increase the understanding of the reader. For example, we could write: Let $X=\{x\in\mathbb{N}\mid x\leq N\}$, and let $\sim$ be an equivalence relation on $X^2$ defined by $(a,b)\sim(c,d)$ iff either $a=c$ and $b=d$, or $a=d$ and $b=c$. Let $Y=X^2/\sim$, and let $f:Y\to\mathbb{R}$. So, $Y$ can be thought of as the set of unordered pairs of positive integers up to $N$, and you can then proceed to use this notation every time you want to talk about such a function.
Proof of inequality $\prod_{k=1}^n(1+a_k) \geq 1 + \sum_{k=1}^n a_k$ with induction I have to show that $\prod_{k=1}^n(1+a_k) \geq 1 + \sum_{k=1}^n a_k$ is valid for all $1 \leq k \leq n$ using the fact that $a_k \geq 0$. Showing that it works for $n=0$ was easy enough. Then I tried $n+1$ and get to: $$\begin{align*} \prod_{k=1}^{n+1}(1+a_k) &= \prod_{k=1}^{n}(1+a_k)(1+a_{n+1}) \\ &\geq (1+\sum_{k=1}^n a_k)(1+a_{n+1}) \\ &= 1+\sum_{k=1}^{n+1} a_k + a_{n+1}\sum_{k=1}^n a_k \end{align*}$$ In order to finish it, I need to get rid of the $+ a_{n+1}\sum_{k=1}^n a_k$ term. How do I accomplish that? It seems that this superfluous sum is always positive, making this not really trivial, i. e. saying that it is even less if one omits that term and therefore still (or even more so) satisfies the $\geq$ …
We want to show (for $a_{n+1}>0$; the case $a_{n+1}=0$ is immediate from the induction hypothesis): $$\left(\frac{1}{a_{n+1}}+1\right)\prod_{i=1}^{n}\left(1+a_{i}\right)\geq 1+\frac{1}{a_{n+1}}+\sum_{i=1}^{n}\frac{a_{i}}{a_{n+1}}$$ We introduce the function: $$f(a_{n+1})=\left(\frac{1}{a_{n+1}}+1\right)\prod_{i=1}^{n}\left(1+a_{i}\right)-\left(1+\frac{1}{a_{n+1}}+\sum_{i=1}^{n}\frac{a_{i}}{a_{n+1}}\right)$$ If we differentiate and multiply by $-a_{n+1}^2$ we recover exactly the induction hypothesis. Hence $f$ is decreasing, so we have: $$f(a_{n+1})\geq \lim_{a_{n+1}\to \infty}f(a_{n+1})=\prod_{i=1}^{n}\left(1+a_{i}\right)-1\geq\sum_{i=1}^{n}a_{i}\geq 0,$$ where the middle inequality is the induction hypothesis. It remains to multiply by $a_{n+1}$ and conclude. The advantage is that we get a refinement for each $n$ and we know much more about the behavior of the difference.
Injective functions also surjective? Is it true that for each set $M$ a given injective function $f: M \rightarrow M$ is surjective, too? Can someone explain why it is true or not and give an example?
This statement is true if $M$ is a finite set, and false if $M$ is infinite. In fact, one definition of an infinite set is that a set $M$ is infinite iff there exists a bijection $g : M \to N$ where $N$ is a proper subset of $M$. Given such a function $g$, the function $f : M \to M$ defined by $f(x) = g(x)$ for all $x \in M$ is injective, but not surjective. Henning's answer illustrates this with an example when $M = \mathbb N$. To put that example in the context of my answer, let $E \subseteq \mathbb N$ be the set of positive even numbers, and consider the bijection $g: \mathbb N \to E$ given by $g(x) = 2x$ for all $x \in \mathbb N$. On the other hand, if $M$ is finite and $f: M \to M$, then it is true that $f$ is injective iff it is surjective. Let $m = |M| < \infty$. Suppose $f$ is not surjective. Then $f(M)$ is a strict subset of $M$, and hence $|f(M)| < m$. Now, think of $x \in M$ as pigeons, and throw the pigeon $x$ in the hole $f(x)$ (also a member of $M$). Since the number of pigeons strictly exceeds the number of holes (both these numbers are finite), it follows from the pigeonhole principle that some two pigeons go into the same hole. That is, there exist distinct $x_1, x_2 \in M$ such that $f(x_1) = f(x_2)$, which shows that $f$ is not injective. (See if you can prove the other direction: if $f$ is surjective, then it is injective.) Note that the pigeonhole principle itself needs a proof and that proof is a little elaborate (relying on the definition of a finite set, for instance). I ignore such complications in this answer.
Is $[0,1]^\omega$ a continuous image of $[0,1]$? Is $[0,1]^\omega$, i.e. $\prod_{n=0}^\infty [0,1]$ with the product topology, a continuous image of $[0,1]$? What if $[0,1]$ is replaced by $\mathbb{R}$? Edit: It appears that the answer is yes, and follows from the Hahn-Mazurkiewicz Theorem ( http://en.wikipedia.org/wiki/Space-filling_curve#The_Hahn.E2.80.93Mazurkiewicz_theorem ). However, I am still interested in the related question: is $\mathbb{R}^\omega$ a continuous image of $\mathbb{R}$?
So if I'm reading correctly you want to find out if there is a continuous (with respect to the product topology) surjective map $f: \mathbb{R} \rightarrow \mathbb{R}^{\omega}$? No, there is not. Note that $\mathbb{R}$ is $\sigma$-compact, so write: $$\mathbb{R} = \bigcup_{n \in \mathbb{N}} [-n,n]$$ Then using the fact that $f$ is surjective we get: $$\mathbb{R}^{\omega} = \bigcup_{n \in \mathbb{N}} f([-n,n])$$ By continuity of $f$ each $D_n=f([-n,n])$ is a compact subset of $\mathbb{R}^{\omega}$. So the question boils down to whether it is possible that $\mathbb{R}^{\omega}$ is $\sigma$-compact with the product topology. No: let $\pi_{n}$ be the standard projection from $\mathbb{R}^{\omega}$ onto $\mathbb{R}$; then $\pi_{n}f([-n,n])$ is a compact subset of $\mathbb{R}$, so bounded. Thus for each $n \in \mathbb{N}$ choose $x_{n} \in \mathbb{R} \setminus \pi_{n}f([-n,n])$; then $x=(x_{n})$ lies in $\mathbb{R}^{\omega}$ but not in $\bigcup_{n \in \mathbb{N}} f([-n,n])$, a contradiction.
Extending to a holomorphic function Let $Z\subseteq \mathbb{C}\setminus \overline{\mathbb{D}}$ be countable and discrete (here $\mathbb{D}$ stands for the unit disc). Consider a function $f\colon \mathbb{D}\cup Z\to \mathbb{C}$ such that 1) $f\upharpoonright \overline{\mathbb{D}}$ is continuous 2) $f\upharpoonright \mathbb{D}$ is holomorphic 3) if $|z_0|=1$ and $z_n\to z_0$, $z_n\in Z$ then $(f(z_n)-f(z_0))/(z_n-z_0)\to f^\prime(z_0)$ Can $f$ be extended to a holomorphic function on some domain containing $Z$?
No. The function $g(z) = 1+ 2z + \sum_{n=1}^{\infty} 2^{-n^2} z^{2^n}$ is holomorphic on the open disk $\mathbb{D}$ and infinitely often real differentiable in any point of the closed disk $\overline{\mathbb{D}}$ but cannot be analytically extended beyond $\overline{\mathbb{D}}$: The radius of convergence is $1$. For $n \gt k$ we have $2^{nk} 2^{-n^2} \leq 2^{-n}$, hence the series and all of its derivatives converge uniformly on $\overline{\mathbb{D}}$, thus $g$ is indeed smooth in the real sense and holomorphic on $\mathbb{D}$. By Hadamard's theorem on lacunary series the function $g$ cannot be analytically continued beyond $\overline{\mathbb D}$. [In fact, it is not difficult to show that $g$ is injective on $\mathbb{\overline{D}}$, so $g$ is even a diffeomorphism onto its image $g(\overline{\mathbb D})$ — but that's not needed here.] I learned about this nice example from Remmert, Classical topics in complex function theory, Springer GTM 172, Chapter 11, §2.3 (note: I'm quoting from the German edition). The entire chapter is devoted to the behavior of power series on the boundary of convergence and gives theorems that provide positive and negative answers on whether a given power series can be extended or not. Now, to get a counterexample, apply Whitney's theorem to extend $g$ to a smooth function $f$ on all of $\mathbb{C}$. Every restriction of $f$ to any set of the form $\overline{\mathbb{D}} \cup Z$ (I think that's what was intended) as in your question will provide a counterexample.
How do you parameterize a sphere so that there are "6 faces"? I'm trying to parameterize a sphere so it has 6 faces of equal area, like this: But this is the closest I can get (simply jumping $\frac{\pi}{2}$ in $\phi$ azimuth angle for each "slice"). I can't seem to get the $\theta$ elevation parameter correct. Help!
The following doesn't have much to do with spherical coordinates, but it might be worth noting that these 6 regions can be seen as the projections of the 6 faces of an enclosing cube. In other words, each of the 6 regions can be parametrized as the region of the sphere $$S=\{(x,y,z)\in\mathbb R^3\mid x^2+y^2+z^2=1\}$$ for which, respectively: * *$x > \max(\lvert y\rvert,\lvert z\rvert)$ *$x < -\max(\lvert y\rvert,\lvert z\rvert)$ *$y > \max(\lvert z\rvert,\lvert x\rvert)$ *$y < -\max(\lvert z\rvert,\lvert x\rvert)$ *$z > \max(\lvert x\rvert,\lvert y\rvert)$ *$z < -\max(\lvert x\rvert,\lvert y\rvert)$ For example, the top region can be parametrized by $z =\sqrt{1-x^2-y^2}$, with region \begin{align*} \lvert x\rvert&<\frac{1}{\sqrt{2}}\\ \lvert y\rvert&<\min\left(\sqrt{\frac{1-x^2}{2}},\sqrt{1-2x^2}\right) \end{align*}
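In code, the classification is a one-liner: a point's face is determined by which coordinate has the largest absolute value, together with its sign. A quick uniformity check of my own (sampling uniform directions via normalized Gaussians):

    import random

    def face(p):
        i = max(range(3), key=lambda j: abs(p[j]))   # dominant coordinate
        return 2 * i + (0 if p[i] > 0 else 1)        # one of 6 labels

    counts = [0] * 6
    for _ in range(60_000):
        v = [random.gauss(0, 1) for _ in range(3)]   # uniform direction on the sphere
        r = sum(c * c for c in v) ** 0.5
        counts[face([c / r for c in v])] += 1
    print(counts)   # roughly equal counts: the six regions have equal area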
How do we check Randomness? Let's imagine a guy who claims to possess a machine that can each time produce a completely random series of 0/1 digits (e.g. $1,0,0,1,1,0,1,1,1,...$). And each time after he generates one, you can keep asking him for the $n$-th digit and he will tell you accordingly. Then how do you check if his series is really completely random? If we only check whether the $n$-th digit is evenly distributed, then he can cheat using: $0,0,0,0,...$ $1,1,1,1,...$ $0,0,0,0,...$ $1,1,1,1,...$ $...$ If we check whether any given sequence is distributed evenly, then he can cheat using: $(0,)(1,)(0,0,)(0,1,)(1,0,)(1,1,)(0,0,0,)(0,0,1,)...$ $(1,)(0,)(1,1,)(1,0,)(0,1,)(0,0,)(1,1,1,)(1,1,0,)...$ $...$ I may give other possible checking processes but as far as I can list, each of them has flaws that can be cheated with a prepared regular series. How do we check if a series is really random? Or is randomness a philosophical concept that can not be easily defined in Mathematics?
All the sequences you mentioned have a really low Kolmogorov complexity, because you can easily describe them in really short space. A random sequence (as per the usual definition) has a high Kolmogorov complexity, which means there are no instructions shorter than the string itself that can describe or reproduce the string. Of course the length of the description depends on the formal system (language) you use to describe it, but if the length of the string is much longer than the axioms of your formal system, then the Kolmogorov complexity of a random string becomes independent of your choice of system. Luckily, under the Church-Turing thesis, there is only 1 model of computation (unless your machine uses yet undiscovered physical laws), so there is only 1 language your machine can speak that we have to check. So to test if a string is random, we only have to brute-force check the length of the shortest Turing program that outputs the first n bits correctly. If the length eventually becomes proportional to n, then we can be fairly certain we have a random sequence, but to be 100% sure we have to check the whole (infinite) string. (As per the definition of random.)
How many ways can 8 people be seated in a row? I am stuck with the following question, How many ways can 8 people be seated in a row? if there are 4 men and 4 women and no 2 men or women may sit next to each other. I did it as follows, As 4 men and 4 women must sit next to each other so we consider each of them as a single unit. Now we have we 4 people(1 men group, 1 women group, 2 men or women) they can be seated in 4! ways.Now each of the group of men and women can swap places within themselves so we should multiply the answer with 4!*4! This makes the total 4!*4!*4! =13824 . Please help me out with the answer. Are the steps clear and is the answer and the method right? Thanks
If there is a man on the first seat, there has to be a woman on the second, a man on the third so forth. Alternatively, we could start with a woman, then put a man, then a woman and so forth. In any case, if we decide which gender to put on the first seat, the genders for the others seats are forced upon us. So there are only two ways in which we can divide the eight aligned seats into "man-seats" and "woman-seats" without violating the rule that no two men and no two women may sit next to each other. Once you have chosen where to seat the men and where the women you can permute the two groups arbitrarily, giving $4!$ possibilities each. So in total the number of possible constellations is $$ 2\cdot 4!\cdot 4!=1152. $$
Probability that no two consecutive throws of some (A,B,C,D,E,F)-die show up consonants I have a question on probability. I am looking for people to present different approaches to solving this. I already have one solution but I was not satisfied, like a true mathematician ;).....so go ahead and take a dig.....if no one answers....I will post my solution.....thanks! There is an unbiased cubical die with its faces labeled as A, B, C, D, E and F. If the die is thrown $n$ times, what is the probability that no two consecutive throws show up consonants? If someone has already asked a problem of this type then I will be grateful to be redirected :)
Here is a solution different from the one given on the page @joriki links to. Call $c_n$ the probability that no two consecutive consonants appeared during the $n$ first throws and that the last throw produces a consonant. Call $b_n$ the probability that no two consecutive consonants appeared during the $n$ first throws and that the last throw did not produce a consonant. Thus one is looking for $p_n=c_n+b_n$. For every $n\geqslant1$, $c_{n+1}=\frac23b_n$ (if the previous throw was a consonant, one cannot get a consonant now and if it was not, $\frac23$ is the probability to get a consonant now) and $b_{n+1}=\frac13b_n+\frac13c_n$ (if the present throw is not a consonant, one asks that two successive consonants were not produced before now). Furthermore $c_1=\frac23$ and $b_1=\frac13$. One asks for $p_n=3b_{n+1}$ and one knows that $9b_{n+2}=3b_{n+1}+3c_{n+1}=3b_{n+1}+2b_n$ for every $n\geqslant1$. The roots of the characteristic equation $9r^2-3r-2=0$ are $r_2=\frac23$ and $r_1=-\frac13$ hence $b_n=B_2r_2^n+B_1r_1^n$ for some $B_1$ and $B_2$. One can use $b_2=\frac13$ as second initial condition, this yields $B_2=\frac23$ and $B_1=\frac13$ hence $b_n=r_2^{n+1}-r_1^{n+1}$. Finally, $p_n=3^{-n-1}\left(2^{n+2}-(-1)^{n}\right)$. (Sanity check: one can check that $p_0=p_1=1$, $p_2=\frac59$, and even with some courage that $p_3=\frac{11}{27}$, are the correct values.)
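The recursion and the closed form can be cross-checked with exact rational arithmetic; a small script of my own:

    from fractions import Fraction

    def p(n):
        # c: prob. of no CC so far with last throw a consonant
        # b: same, with last throw not a consonant
        c, b = Fraction(0), Fraction(1)
        for _ in range(n):
            c, b = Fraction(2, 3) * b, Fraction(1, 3) * (b + c)
        return c + b

    for n in range(8):
        closed = Fraction(2 ** (n + 2) - (-1) ** n, 3 ** (n + 1))
        assert p(n) == closed
        print(n, closed)    # 1, 1, 5/9, 11/27, ...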
How to prove that $\lim\limits_{x\to0}\frac{\sin x}x=1$? How can one prove the statement $$\lim_{x\to 0}\frac{\sin x}x=1$$ without using the Taylor series of $\sin$, $\cos$ and $\tan$? Best would be a geometrical solution. This is homework. In my math class, we are about to prove that $\sin$ is continuous. We found out, that proving the above statement is enough for proving the continuity of $\sin$, but I can't find out how. Any help is appreciated.
Usual proofs can be circular, but there is a simple way of proving such an inequality. Let $\theta$ be an acute angle and let $O,A,B,C,D,C'$ be as in the diagram that accompanied this answer (not reproduced here). We may show that: $$ CD \stackrel{(1)}{ \geq }\;\stackrel{\large\frown}{CB}\; \stackrel{(2)}{\geq } CB\,\stackrel{(3)}{\geq} AB $$ $(1)$: The quadrilateral $OCDC'$ and the circle sector delimited by $O,C,C'$ are two convex sets. Since the circle sector is a subset of the quadrilateral, the perimeter of the circle sector is less than the perimeter of the quadrilateral. $(2)$: the $CB$ segment is the shortest path between $B$ and $C$. $(3)$ $CAB$ is a right triangle, hence $CB\geq AB$ by the Pythagorean theorem. In terms of $\theta$ we get: $$ \tan\theta \geq \theta \geq 2\sin\frac{\theta}{2} \geq \sin\theta $$ for any $\theta\in\left[0,\frac{\pi}{2}\right)$. Since the involved functions are odd functions the reverse inequality holds over $\left(-\frac{\pi}{2},0\right]$, and $\lim_{\theta\to 0}\frac{\sin\theta}{\theta}=1$ follows by squeezing. A slightly different approach might be the following one: let us assume $\theta\in\left(0,\frac{\pi}{2}\right)$. By $(2)$ and $(3)$ we have $$ \theta \geq 2\sin\frac{\theta}{2}\geq \sin\theta $$ hence the sequence $\{a_n\}_{n\geq 0}$ defined by $a_n = 2^n \sin\frac{\theta}{2^n}$ is increasing and bounded by $\theta$. Any increasing and bounded sequence is convergent, and we actually have $\lim_{n\to +\infty}a_n=\theta$ since $\stackrel{\large\frown}{BC}$ is a rectifiable curve and for every $n\geq 1$ the $a_n$ term is the length of a polygonal approximation of $\stackrel{\large\frown}{BC}$ through $2^{n-1}$ equal segments. In particular $$ \forall \theta\in\left(0,\frac{\pi}{2}\right), \qquad \lim_{n\to +\infty}\frac{\sin\left(\frac{\theta}{2^n}\right)}{\frac{\theta}{2^n}} = 1 $$ and this grants that if the limit $\lim_{x\to 0}\frac{\sin x}{x}$ exists, it is $1$. By $\sin x\leq x$ we get $\limsup_{x\to 0}\frac{\sin x}{x}\leq 1$, hence it is enough to show that $\liminf_{x\to 0}\frac{\sin x}{x}\geq 1$. We already know that for any $x$ close enough to the origin the sequence $\frac{\sin x}{x},\frac{\sin(x/2)}{x/2},\frac{\sin(x/4)}{x/4},\ldots$ is convergent to $1$, hence we are done. Long story short: $\lim_{x\to 0}\frac{\sin x}{x}=1$ follows from the fact that a circle is a rectifiable curve, and a circle is a rectifiable curve because it is the boundary of a convex, bounded subset of $\mathbb{R}^2$. The convexity of the disk follows from the triangle inequality: a disk is a closed ball for the euclidean distance. $(1)$ relies on this powerful Lemma: Lemma. If $A,B$ are convex bounded sets in $\mathbb{R}^2$ and $A\subsetneq B$, the perimeter of $A$ is less than the perimeter of $B$. Proof: by boundedness and convexity, $\partial A$ and $\partial B$ are rectifiable, with lengths $L(A)=\mu(\partial A),\,L(B)=\mu(\partial B)$. Always by convexity, there is some chord in $B$ that does not meet the interior of $A$ (a tangent to $\partial A$ at a smooth point does the job, for instance). Assume that such chord has endpoints $B_1, B_2 \in \partial B$ and perform a cut along $B_1 B_2$: both the area and the perimeter of $B$ decrease, but $B$ remains a bounded convex set enclosing $A$. Since $A$ can be approximated through a sequence of consecutive cuts, $L(A)<L(B)$ follows.
Euclidean distance vs Squared So I understand that Euclidean distance satisfies all of the properties of a metric. But why doesn't its square satisfy them in the same way?
You lose the triangle inequality if you don’t take the square root: the ‘distance’ from the origin to $(2,0)$ would be $4$, which is greater than $2$, the sum of the ‘distances’ from the origin to $(1,0)$ and from $(1,0)$ to $(2,0)$.
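To make this concrete, a tiny Python check (just an illustration of the failure):

```python
# Squared Euclidean 'distance' on the three collinear points from above
d2 = lambda p, q: (p[0] - q[0]) ** 2 + (p[1] - q[1]) ** 2

o, m, p = (0, 0), (1, 0), (2, 0)
print(d2(o, p))             # 4
print(d2(o, m) + d2(m, p))  # 1 + 1 = 2 < 4, so the triangle inequality fails
```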
mean and std deviation of a population equal? Hypothetically, if we have a population of size $n$ whose mean and std deviation are equal, I think with some work we have a constraint on the ratio: $(\text{sum of squared points})/(\text{sum of points})^2 = \frac{2n-1}{n^2}$, which gets small quickly as $n$ gets large. Are there heuristic considerations that might render such a population plausible as an extension of, say, the binomial distribution (as with the Poisson distribution, although for that distribution the mean is equal to the variance)? Does this property ($\text{mean} = \sqrt{\text{variance}}$) suggest anything about the population generally, if that question is not too vague? I have not encountered a population with this property in any texts, but am fairly sure it has been considered...?
The distributions of exponential type whose variance and mean are related by $\operatorname{Var}(X) \sim (\mathbb{E}(X))^p$ for a fixed $p$, called the index parameter, are known as the Tweedie family. The case you are interested in corresponds to index $p = 2$. The $\Gamma$-distribution possesses this property (the exponential distribution is a special case of the $\Gamma$-distribution). The $\Gamma$-distribution has probability density $f(x) = \frac{1}{\Gamma(\alpha)} x^{\alpha - 1} \beta^{-\alpha} \exp\left(- \frac{x}{\beta}\right) \mathbf{1}_{x > 0}$. Its mean is $\mu = \alpha \beta$ and its variance is $\mu_2 = \alpha \beta^2$, hence $\mu_2 = \frac{\mu^2}{\alpha}$. The equality requires $\alpha = 1$, so we recover the exponential distribution, as already noted by Michael Hardy. But we do not have to stay within the exponential family. You can certainly achieve the equality with a normal distribution, with a beta distribution, and with many discrete distributions; for instance, the generalized Poisson distribution with $\lambda = \frac{1}{1-\mu}$, for which both the mean and the variance equal $(1 - \mu)^{-2}$.
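As a quick empirical check that the exponential distribution has this property, here is a short Monte Carlo sketch in Python (illustrative only):

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.exponential(scale=1.0, size=1_000_000)  # Exp(1) samples
print(x.mean(), x.std())  # both ≈ 1: mean equals standard deviation
```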
How many everywhere defined functions are not $1$ to $1$ I am stuck with the following question: How many everywhere defined functions from $S$ to $T$ are not one to one, where $S=\{a,b,c,d,e\}$ and $T=\{1,2,3,4,5,6\}$? Now the teacher showed that there could be $6^5$ ways to make an everywhere defined function and $6!$ ways for it to be $1$ to $1$, but when I drew them on paper I could draw no more than 30; here are those: $(a,1),(a,2),\ldots,(a,6)$; $(b,1),(b,2),\ldots,(b,6)$; $(c,1),(c,2),\ldots,(c,6)$; $(d,1),(d,2),\ldots,(d,6)$; $(e,1),(e,2),\ldots,(e,6)$. Can anyone please help me out with how there are $6^5$ functions? Thanks
You have produced a complete and correct list of all ordered pairs $(x,y)$, where $x$ ranges over $S$ and $y$ ranges over $T$. However, this is not the set of all functions from $S$ to $T$. Your list, however, gives a nice way of visualizing all the functions. We can produce all the functions from $S$ to $T$ by picking any ordered pair from your first row, followed by any ordered pair from your second row, followed by any ordered pair from your third row, and so on. We have $6$ choices from the first row. For any of these choices, we have $6$ choices from the second row, for a total of $6\times 6$ choices from the first two rows. For every way of choosing from the first two rows, we have $6$ ways to choose from the third row, and so on, for a total of $6^5$. A function from $S$ to $T$ is a set of ordered pairs, with one ordered pair taken from each row. This view is quite close to the formal definition of function that you may have seen in your course. For example, we might choose $(a,1)$, $(b,5)$, $(c,1)$, $(d,5)$, $(e,2)$. This gives us the function that maps $a$ and $c$ to $1$, $b$ and $d$ to $5$, and $e$ to $2$. We can produce the one-to-one functions in a similar way from your "matrix." We start by taking any ordered pair from your first row. But then, from the second row, we must pick an ordered pair which is not in the same column as the first ordered pair you picked. So even though there are $6$ choices from the first row, for every such choice, there are only $5$ choices from the second row. Similarly, once we have picked a pair from each of the first two rows, in the third row we must avoid the columns these pairs are in. So we end up with only $6\times 5\times 4\times 3\times 2$ one-to-one functions.
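If it helps to see this concretely, here is a brute-force enumeration in Python (a sketch using itertools; the tuple positions play the role of the rows above):

```python
from itertools import product

S = "abcde"
T = range(1, 7)

# Each function is a 5-tuple: the images of a, b, c, d, e.
funcs = list(product(T, repeat=len(S)))
one_to_one = [f for f in funcs if len(set(f)) == len(S)]

print(len(funcs))                    # 6**5 = 7776
print(len(one_to_one))               # 6*5*4*3*2 = 720
print(len(funcs) - len(one_to_one))  # 7056 functions that are not one-to-one
```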
The Pigeon Hole Principle and the Finite Subgroup Test I am currently reading this document and am stuck on Theorem 3.3 on page 11: Let $H$ be a nonempty finite subset of a group $G$. Then $H$ is a subgroup of $G$ if $H$ is closed under the operation of $G$. I have the following questions: 1. It suffices to show that $H$ contains inverses. I don't understand why that alone is sufficient. 2. Choose any $a$ in $H$... then consider the sequence $a,a^2,\ldots$ This sequence is contained in $H$ by the closure property. I know that if $G$ is a group, then $ab$ is in $G$ for all $a$ and $b$ in $G$. But I don't understand why the sequence has to be contained in $H$ by the closure property. 3. By the Pigeonhole Principle, since $H$ is finite, there are distinct $i,j$ such that $a^i=a^j$. I understand the Pigeonhole Principle (as explained on page 2) and why $H$ is finite, but I don't understand how the Pigeonhole Principle was applied to arrive at $a^i=a^j$. 4. Reading the proof, it appears to me that $H = \langle a \rangle$ where $a\in G$. Is this true?
To show $H$ is a subgroup you must show it's closed, contains the identity, and contains inverses. But if it's closed, non-empty, and contains inverses, then it's guaranteed to contain the identity, because it's guaranteed to contain something, say, $x$, then $x^{-1}$, then $xx^{-1}$, which is the identity. $H$ is assumed closed, so if it contains $a$ and $b$, it contains $ab$. But $a$ and $b$ don't have to be different: if it contains $a$, it contains $a$ and $a$, so it contains $aa$, which is $a^2$. But then it contains $a$ and $a^2$ so it contains $aa^2$ which is $a^3$. Etc. So it contains $a,a^2,a^3,a^4,\dots$. $H$ is finite, so these can't all be different, so some pair is equal, that is, $a^i=a^j$ for some $i\ne j$. As for your last question, do you know any example of a group with a non-cyclic subgroup?
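Here is a concrete illustration of the pigeonhole step, as a small Python sketch: take $H=\{1,\dots,6\}$ under multiplication mod $7$ (a closed finite subset), track powers of $a=3$ until one repeats, and read off the inverse from the repetition:

```python
# H = {1,...,6} under multiplication mod 7 is finite and closed.
a, mod = 3, 7
seen = {}          # value -> exponent at which it first appeared
x, n = a, 1        # invariant: x == a**n (mod 7)
while x not in seen:
    seen[x] = n
    x, n = (x * a) % mod, n + 1

i, j = seen[x], n              # a^i == a^j with i < j (pigeonhole)
print(pow(a, j - i, mod))      # 1: a^(j-i) is the identity
print(pow(a, j - i - 1, mod))  # 5: the inverse of 3, since 3*5 = 15 ≡ 1 (mod 7)
```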
Prove that $\lim \limits_{n\to\infty}\frac{n}{n^2+1} = 0$ from the definition This is a homework question: Prove, using the definition of a limit, that $$\lim_{n\to\infty}\frac{n}{n^2+1} = 0.$$ Now this is what I have so far, but I'm not sure if it is correct: Let $\epsilon$ be any number, so we need to find an $M$ such that: $$\left|\frac{n}{n^2 + 1}\right| < \epsilon \text{ whenever }n \gt M.$$ $$ n \lt \epsilon(n^2 + 1) $$ $$n \lt \epsilon n^2 + \epsilon$$ Now what? I am completely clueless on how to do this!
First, $\epsilon$ should not be "any number", it should be "any positive number." Now, you are on the right track. What do you need in order for $\frac{n}{n^2+1}$ to be smaller than $\epsilon$? You need $n\lt \epsilon n^2 + \epsilon$. This is equivalent to requiring $$\epsilon n^2 - n + \epsilon \gt 0.$$ You want to find out for what values of $n$ this is true. This is a quadratic inequality: you first solve $$\epsilon n^2 - n + \epsilon = 0,$$ and then you use the solution to determine where the quadratic is positive, and where it is negative. The answer will, of course, depend on $\epsilon$. Using the quadratic formula, we have that $$\epsilon n^2 - n + \epsilon = 0$$ has solutions $$n = \frac{1 + \sqrt{1-4\epsilon^2}}{2\epsilon}, \quad n= \frac{1-\sqrt{1-4\epsilon^2}}{2\epsilon}.$$ That is, $$\epsilon n^2 - n + \epsilon = \epsilon\left( n - \frac{1-\sqrt{1-4\epsilon^2}}{2\epsilon}\right)\left(n - \frac{1+\sqrt{1-4\epsilon^2}}{2\epsilon}\right).$$ Now, we can assume that $\epsilon\lt \frac{1}{2}$, so that $4\epsilon^2\lt 1$ (if it works for all small enough $\epsilon$, then it works for all $\epsilon$). Since $\epsilon\gt 0$, the quadratic is positive if $n$ is smaller than the smaller of the two roots, or if $n$ is larger than the larger of the two roots. The larger root is $\displaystyle \frac{1 + \sqrt{1-4\epsilon^2}}{2\epsilon}$. So if $$n \gt \frac{1+\sqrt{1-4\epsilon^2}}{2\epsilon},$$ then $$\epsilon n^2 -n + \epsilon \gt 0.$$ Can you finish it up from here?
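As a sanity check on the threshold, here is a small Python sketch (illustrative only):

```python
import math

def threshold(eps):
    # the larger root of eps*n**2 - n + eps = 0, valid for 0 < eps < 1/2
    return (1 + math.sqrt(1 - 4 * eps ** 2)) / (2 * eps)

for eps in [0.1, 0.01, 0.001]:
    n = math.floor(threshold(eps)) + 1      # first integer past the larger root
    print(eps, n, n / (n ** 2 + 1) < eps)   # True in each case
```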
Number of point subsets that can be covered by a disk Given $n$ distinct points in the (real) plane, how many distinct non-empty subsets of these points can be covered by some (closed) disk? I conjecture that if no three points are collinear and no four points are concyclic then there are $\frac{n}{6}(n^2+5)$ distinct non-empty subsets that can be covered by a disk. (I have the outline of an argument, but it needs more work. See my answer below.) Is this conjecture correct? Is there a good BOOK proof? This question is rather simpler than the related unit disk question. The answer to this question provides an upper bound to the unit disk question (for $k=1$).
When $n=6$, consider four points at the corners of a square, and two more points very close together near the center of the square. To be precise, let's take points at $(\pm1,0)$ and $(0,\pm1)$ and at $(\epsilon,\epsilon)$ and $(-2\epsilon,\epsilon)$ for some small $\epsilon>0$. Then if I'm not mistaken, the number of nonempty subsets that can be covered by a disk is 34 (6 of size 1, 11 of size 2, 8 of size 3, 4 of size 4, 4 of size 5, and 1 of size 6), while your conjectured formula gives 41 [I originally had the incorrect 31]. Now that I think about it, when $n=4$, taking two points very near the midpoint of the segment joining the other two points (say $(\pm1,0)$ and $(0,\pm\epsilon)$) gives 12 such nonempty subsets (4 of size 1, 5 of size 2, 2 of size 3, and 1 of size 4) while your conjectured formula gives 14. Edited to add: it's been commented correctly that the above $n=4$ example does indeed give 14, since every size-3 subset can be covered by a disk. But what about if the four points are $(\pm1,0)$, $(0,\epsilon)$, and $(0,0)$ instead? Now I believe the first three points cannot be covered by a disk without covering the fourth also. Perhaps you want to add the condition that no three points are collinear?
Filter to obtain MMSE of data from Gaussian vector Data sampled at two time instances giving bivariate Gaussian vector $X=(X_1,X_2)^T$ with $f(x_1,x_2)=\exp(-(x_1^2+1.8x_1x_2+x_2^2)/0.38)/2\pi \sqrt{0.19}$ Data measured in a noisy environment with vector: $(Y_1,Y_2)^T=(X_1,X_2)^T+(W_1,W_2)^T$ where $W_1,W_2$ are i.i.d. with $W_i \sim N(0,0.2)$. I have found the correlation coefficient of $X_1,X_2$ to be $\rho=-0.9$, and $X_1,X_2 \sim N(0,1)$. Question: How to design a filter to obtain the MMSE estimator of $X_1$ from the $Y$ vector, and how to calculate the MSE of this estimator?
What you need is $\mathbb{E}(X_1 \mid Y_1, Y_2)$. We have $$ \operatorname{var}\begin{bmatrix} X_1 \\ Y_1 \\ Y_2 \end{bmatrix} = \left[\begin{array}{r|rr} 1 & 1 & -0.9 \\ \hline1 & 1.02 & -0.9 \\ -0.9 & -0.9 & 1.02 \end{array}\right]= \begin{bmatrix} \Sigma_{11} & \Sigma_{12} \\ \Sigma_{12}^\top & \Sigma_{22} \end{bmatrix}. $$ So the conditional expected value is $$ \mathbb{E}(X_1) + \Sigma_{12} \Sigma_{22}^{-1} \left( \begin{bmatrix} Y_1 \\ Y_2 \end{bmatrix} - \mathbb{E}\begin{bmatrix} Y_1 \\ Y_2 \end{bmatrix} \right). $$ Since everything here is jointly Gaussian with zero means, the MMSE filter is linear, $\hat X_1 = \Sigma_{12} \Sigma_{22}^{-1} (Y_1, Y_2)^\top$, and its MSE is $\Sigma_{11} - \Sigma_{12} \Sigma_{22}^{-1} \Sigma_{12}^\top$. See: http://en.wikipedia.org/wiki/Multivariate_normal_distribution#Conditional_distributions
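Plugging in the numbers (a small numpy sketch; the matrices are exactly those written above):

```python
import numpy as np

Sigma_12 = np.array([[1.0, -0.9]])                 # cov(X1, [Y1, Y2])
Sigma_22 = np.array([[1.02, -0.9], [-0.9, 1.02]])  # var([Y1, Y2])

W = Sigma_12 @ np.linalg.inv(Sigma_22)   # linear MMSE filter coefficients
mse = 1.0 - (W @ Sigma_12.T).item()      # Sigma_11 - Sigma_12 Sigma_22^{-1} Sigma_12^T

print(W)    # ≈ [[0.9115, -0.0781]]: X1_hat ≈ 0.9115*Y1 - 0.0781*Y2
print(mse)  # ≈ 0.0182
```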
How to prove that $\sum\limits_{n=1}^\infty\frac{(n-1)!}{n\prod\limits_{i=1}^n(a+i)}=\sum\limits_{k=1}^\infty \frac{1}{(a+k)^2}$ for $a>-1$? A problem on my (last week's) real analysis homework boiled down to proving that, for $a>-1$, $$\sum_{n=1}^\infty\frac{(n-1)!}{n\prod\limits_{i=1}^n(a+i)}=\sum_{k=1}^\infty \frac{1}{(a+k)^2}.$$ Mathematica confirms this is true, but I couldn't even prove the convergence of the original series (the one on the left), much less demonstrate that it equaled this other sum; the ratio test is inconclusive, and the root test and others seem hopeless. It was (and is) quite a frustrating problem. Can someone explain how to go about tackling this?
This uses a reliable trick with the Beta function. I say reliable because you can use the beta function and switching of the integral and sum to solve many series very quickly. First notice that $$\prod_{i=1}^{n}(a+i)=\frac{\Gamma(n+a+1)}{\Gamma(a+1)}.$$ Then $$\frac{(n-1)!}{\prod_{i=1}^{n}(a+i)}=\frac{\Gamma(n)\Gamma(a+1)}{\Gamma(n+a+1)}=\text{B}(n,a+1)=\int_{0}^{1}(1-x)^{n-1}x{}^{a}dx.$$ Hence, upon switching the order we have that $$\sum_{n=1}^{\infty}\frac{(n-1)!}{n\prod_{i=1}^{n}(a+i)}=\int_{0}^{1}x^{a}\left(\sum_{n=1}^{\infty}\frac{(1-x)^{n-1}}{n}\right)dx.$$ Recognizing the power series, this is $$\int_{0}^{1}x^{a}\frac{-\log x}{1-x}dx.$$ Now, expand the power series for $\frac{1}{1-x}$ to get $$\sum_{m=0}^{\infty}-\int_{0}^{1}x^{a+m}\log xdx.$$ It is not difficult to see that $$-\int_{0}^{1}x^{a+m}\log xdx=\frac{1}{(a+m+1)^{2}},$$ so we conclude that $$\sum_{n=1}^{\infty}\frac{(n-1)!}{n\prod_{i=1}^{n}(a+i)}=\sum_{m=1}^{\infty}\frac{1}{(a+m)^{2}}.$$ Hope that helps, Remark: To evaluate the earlier integral, notice that $$-\int_{0}^{1}x^{r}\log xdx=\int_{1}^{\infty}x^{-(r+2)}\log xdx=\int_{0}^{\infty}e^{-u(r+1)}udu=\frac{1}{(r+1)^{2}}\int_{0}^{\infty}e^{-u}udu. $$ Alternatively, as Joriki pointed out, you can just use integration by parts.
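Since the manipulations above involve two interchanges of sum and integral, a quick numerical check is reassuring. A Python sketch (the value of $a$ is arbitrary; the partial sums agree to within the truncation error):

```python
def lhs(a, N=100_000):
    s, term = 0.0, 1.0 / (a + 1.0)       # n = 1 term: 0!/(1*(a+1))
    for n in range(1, N + 1):
        s += term
        term *= n * n / ((n + 1) * (a + n + 1))  # ratio of consecutive terms
    return s

def rhs(a, N=100_000):
    return sum(1.0 / (a + k) ** 2 for k in range(1, N + 1))

print(lhs(0.5), rhs(0.5))   # both ≈ 0.9348
```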
Conditional expectation of $\max(X,Y)$ and $\min(X,Y)$ when $X,Y$ are iid and exponentially distributed I am trying to compute the conditional expectation $$E[\max(X,Y) | \min(X,Y)]$$ where $X$ and $Y$ are two iid random variables with $X,Y \sim \exp(1)$. I already calculated the densities of $\min(X,Y)$ and $\max(X,Y)$, but I failed in calculating the joint density. Is this the right way? How can I compute the joint density then? Or do I have to take another ansatz?
For two independent exponential distributed variables $(X,Y)$, the joint distribution is $$ \mathbb{P}(x,y) = \mathrm{e}^{-x-y} \mathbf{1}_{x >0 } \mathbf{1}_{y >0 } \, \mathrm{d} x \mathrm{d} y $$ Since $x+y = \min(x,y) + \max(x,y)$, and $\min(x,y) \le \max(x,y)$ the joint distribution of $(U,V) = (\min(X,Y), \max(X,Y))$ is $$ \mathbb{P}(u,v) = \mathcal{N} \mathrm{e}^{-u-v} \mathbf{1}_{v \ge u >0 } \, \mathrm{d} u \mathrm{d} v $$ The normalization constant is easy to find as $$ \int_0^\infty \mathrm{d} v \int_0^v \mathrm{d} u \,\, \mathrm{e}^{-u-v} = \int_0^\infty \mathrm{d} v \,\, \mathrm{e}^{-v} ( 1 - \mathrm{e}^{-v} ) = 1 - \frac{1}{2} = \frac{1}{2} = \frac{1}{\mathcal{N}} $$ Thus the conditional expectation we seek to find is found as follows (assuming $u>0$): $$ \mathbb{E}(\max(X,Y) \vert \min(X,Y) = u) = \frac{\int_0^\infty v \mathrm{d} P(u,v)}{\int_u^\infty \mathrm{d} P(u,v)} = \frac{\int_u^\infty \mathcal{N} v \mathrm{e}^{-u-v} \mathrm{d} v}{\int_u^\infty \mathcal{N} \mathrm{e}^{-u-v} \mathrm{d} v} = 1 + u $$
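A Monte Carlo sketch in Python to corroborate $\mathbb{E}(\max \mid \min = u) = 1 + u$ (conditioning is approximated by a small window around $u$):

```python
import numpy as np

rng = np.random.default_rng(1)
x = rng.exponential(size=(2, 1_000_000))
mn, mx = x.min(axis=0), x.max(axis=0)

for u in [0.5, 1.0, 2.0]:
    sel = np.abs(mn - u) < 0.05   # condition on min(X, Y) ≈ u
    print(u, mx[sel].mean())      # ≈ 1 + u
```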
Showing $f^{-1}$ exists where $f(x) = \frac{x+2}{x-3}$ Let $f(x) = \dfrac{x + 2 }{x - 3}$. There are three parts to this question: 1. Find the domain and range of the function $f$. 2. Show $f^{-1}$ exists and find its domain and range. 3. Find $f^{-1}(x)$. I'm at a loss for #2, showing that the inverse function exists. I can find the inverse by solving the equation for $x$; what I can't see is how to show that it exists without just solving for the inverse. Can someone point me in the right direction?
It is a valid approach to find the inverse by solving $y = f(x)$ for $x$ first and then verifying that $f^{-1}(f(x))=x$ for all $x$ in the domain; exhibiting an explicit two-sided inverse shows that $f^{-1}$ exists. It is quite preferable to do it here because you need the formula for part 3 anyway.
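A sympy sketch of this route (the formula for the inverse comes out of the solve step, not an assumption):

```python
import sympy as sp

x, y = sp.symbols('x y')
f = (x + 2) / (x - 3)

# Solve y = f(x) for x to get the inverse, then check the composition.
f_inv = sp.solve(sp.Eq(y, f), x)[0].subs(y, x)

print(sp.simplify(f_inv))                 # (3*x + 2)/(x - 1)
print(sp.simplify(f_inv.subs(x, f) - x))  # 0, so f_inv(f(x)) = x
```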
Second countability and products of Borel $\sigma$-algebras We know that the Borel $\sigma$-algebra of the Cartesian product space (with the product topology) of two topological spaces is equal to the product of the Borel $\sigma$-algebras of the factor spaces. (The product $\sigma$-algebra can be defined via pullbacks of projection maps...) When one upgrades the above statement to a product of a countable family of topological spaces, the analogous result, namely that the Borel $\sigma$-algebra is the product Borel $\sigma$-algebra, is conditioned on the topological spaces being second countable. Why? My question is this: how and why does second countability make its appearance when we upgrade the finite product to a countable product? (Second countability means that the base is countable, not just locally...) My difficulty is that I do not see how suddenly second countability is important when we pass from finite to countable products.
This is not even true for a product of two spaces: see this Math Overflow question. To rephrase, second countability can be important even for products of two topological spaces.
Expected value of the stochastic integral $\int_0^t e^{as} dW_s$ I am trying to calculate the expected value of a stochastic integral, $\mathbb{E}[\int_0^t e^{as} dW_s]$. I tried breaking it up into a Riemann sum $\mathbb{E}[\sum e^{a t_{i-1}}(W_{t_i}-W_{t_{i-1}})]$, but I get an expected value of $0$, since $\mathbb{E}(W_{t_i}-W_{t_{i-1}}) =0$. But I think it's wrong. Thanks! And I want to calculate $\mathbb{E}[W_t \int_0^t e^{as} dW_s]$ as well; I write $W_t=\int_0^t dW_s$ and get $\mathbb{E}[W_t \int_0^t e^{as} dW_s]=\mathbb{E}[\int_0^t e^{as} dW_s]$. Is that ok? ($W_t$ is Brownian motion.)
The expectation of the Ito integral $\mathbb{E}( \int_0^t \mathrm{e}^{a s} \mathrm{d} W_s )$ is zero, as George already said. To compute $\mathbb{E}( W_t \int_0^t \mathrm{e}^{a s} \mathrm{d} W_s )$, write $W_t = \int_0^t \mathrm{d} W_s$. Then use the Ito isometry: $$ \mathbb{E}( W_t \int_0^t \mathrm{e}^{a s} \mathrm{d} W_s ) = \mathbb{E}\left( \int_0^t \mathrm{d} W_s \cdot \int_0^t \mathrm{e}^{a s} \mathrm{d} W_s \right) = \int_0^t (1 \cdot \mathrm{e}^{a s}) \mathrm{d} s = \frac{\mathrm{e}^{a t} - 1}{a} $$
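For anyone who wants to see the isometry at work numerically, a Monte Carlo sketch (the parameter values are arbitrary):

```python
import numpy as np

rng = np.random.default_rng(2)
a, t, n, paths = 0.7, 1.0, 500, 100_000
dt = t / n

I = np.zeros(paths)   # running Ito sums ≈ ∫_0^t e^{as} dW_s
W = np.zeros(paths)   # running Brownian paths, W_t
for k in range(n):
    dW = rng.normal(0.0, np.sqrt(dt), size=paths)
    I += np.exp(a * k * dt) * dW   # integrand evaluated at the left endpoint
    W += dW

print(I.mean())                   # ≈ 0
print((W * I).mean())             # ≈ (e^(a*t) - 1)/a
print((np.exp(a * t) - 1) / a)    # ≈ 1.448
```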
The tricky time complexity of the permutation generator I ran into tricky issues in computing the time complexity of the permutation generator algorithm, and had great difficulty convincing a friend (experienced in Theoretical CS) of the validity of my reasoning. I'd like to clarify this here. Tricky complexity question: Given a positive integer $n$, what is the time complexity of generating all permutations on the set $[n]=\{1,2,..,n\}$? Friend's reasoning: Any algorithm to generate all permutations of $[n]$ takes $\Omega(n!)$ time. This is a provable, super-exponential lower bound; hence the problem is in EXPTIME. My reasoning: The above reasoning is correct, except that one should compute the complexity with respect to the number of expected output bits. Here, we expect $n!$ numbers in the output, and each can be encoded in $\log n$ bits; hence we expect $b=O(n!\log n)$ output bits. A standard algorithm to traverse all $n!$ permutations will take a polynomial time overhead, i.e. it will execute in $s(n)=O(n!n^k)$ time, hence we will need $t(n)=b(n)+s(n) = O(n!(\log n + n^k))$ time in all. Since $b(n)$ is the number of output bits, we will express $t(n)$ as a function of $b(n)$. To do so, note that $n^k \approx (n!)^{k/n}$ using $n! \approx n^n$; so $s(n)=O( b(n) (b(n))^{k/n}) = O(b^2(n))$. Hence we have a polynomial time algorithm in the number of output bits, and the problem should be in $P$, not in say EXPTIME. Main Question: Whose reasoning is correct, if at all? Note I raised this problem here because I had a bad experience at StackOverflow with a different tricky time complexity problem; and this is certainly not suited for Cstheory.SE as it isn't research level.
Your friend's bound is rather weak. Let the input number be $x$. Then the output is $x!$ permutations, but the length of each permutation isn't $x$ bits, as you claim, but $\Theta(\lg(x!)) = \Theta(x \lg x)$ bits. Therefore the total output length is $\Theta(x! \;x \lg x)$ bits. But, as @KeithIrwin has already pointed out, complexity classes work in terms of the size of the input. An input value of $x$ has size $n=\lg x$, so the generation is $\Omega(2^n! \;2^n n)$.
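To make the size bookkeeping concrete, here is a short Python sketch tabulating these quantities for small inputs (purely illustrative):

```python
import math

for x in range(2, 11):
    n = x.bit_length()                      # input size in bits
    per_perm = x * math.ceil(math.log2(x))  # one permutation: x entries of lg x bits
    total = math.factorial(x) * per_perm    # total output bits: Θ(x! · x · lg x)
    print(x, n, total)
# total grows doubly exponentially in the input size n = lg x,
# but only quasi-linearly in the output size.
```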
Homeomorphism between two spaces I am asked to show that $(X_{1}\times X_{2}\times \cdots\times X_{n-1})\times X_{n}$ is homeomorphic to $X_{1}\times X_{2}\times \cdots \times X_{n}$. My guess is that the Identity map would work but I am not quite sure. I am also wondering if I could treat the set $(X_{1}\times X_{2}\times \cdots\times X_{n-1})\times X_{n}$ as the product of two sets $X_{1}\times X_{2}\times \cdots\times X_{n-1}$ and $X_{n}$ so that I could use the projection maps, but again I am not sure exactly how to go about this. Can anyone help me?
Let us denote $A = X_1\times \cdots \times X_{n-1}$ and $X = X_{1}\times \cdots\times X_{n-1}\times X_n$. The box topology $\tau_A$ on $A$ (which coincides with the product topology, since the product is finite) is defined by the basis of open product sets: $$ \mathcal B(A) = \{B_1\times\cdots \times B_{n-1}:B_i \text{ is open in } X_i,1\leq i\leq n-1\}. $$ The box topology $\tau_X$ on $X$ is defined by the basis: $$ \mathcal B(X) = \{B_1\times\cdots\times B_{n}:B_i \text{ is open in } X_i,1\leq i\leq n\}. $$ Let $\tau'$ denote the product topology on $A\times X_n$, with basis $\{C\times B_n : C\in\tau_A,\ B_n \text{ open in } X_n\}$. Let us follow Henning and put $f:A\times X_n\to X$ as $$f((x_1,\ldots,x_{n-1}),x_n) = (x_1,\ldots,x_n)$$ so $$ f^{-1}(x_1,\ldots,x_n) = ((x_1,\ldots,x_{n-1}),x_n). $$ Clearly, it is a bijection. Then we should check that $f^{-1}(B)\in\tau'$ for every $B\in\tau_X$ and that $f(B)\in\tau_X$ for every $B\in\tau'$, i.e. that both $f$ and $f^{-1}$ are continuous. Let us check it: * if $B\in\tau_X$ then $$ f^{-1}(B) = \bigcup\limits_{\alpha}(B_{1,\alpha}\times\cdots\times B_{n-1,\alpha})\times B_{n,\alpha}\in \tau'$$ since $B_{1,\alpha}\times\cdots\times B_{n-1,\alpha}\in \tau_A$. * if $B\in \tau'$ then $$ B = \bigcup\limits_\alpha C_\alpha \times B_{n,\alpha} $$ where $C_\alpha \in \tau_A$. But we know the basis for the latter topology, so $$ C_\alpha = \bigcup\limits_\beta C_{1,\alpha,\beta}\times\cdots\times C_{n-1,\alpha,\beta} $$ where each $C_{i,\alpha,\beta}$ is open in $X_i$, $1\leq i\leq n-1$. Substituting this expression and merging the unions over $\alpha$ and $\beta$, we get $$ f(B) = \bigcup\limits_{\alpha,\beta}C_{1,\alpha,\beta}\times\cdots\times C_{n-1,\alpha,\beta}\times B_{n,\alpha}\in \tau_X, $$ since it is a union of basic open sets of $\tau_X$.
If G is a group of order n=35, then it is cyclic I've been asked to prove this. In class we proved this when $n=15$, but our approach seemed unnecessarily complicated to me. We invoked Sylow's theorems, normalizers, etc. I've looked online and found other examples of this approach. I wonder if it is actually unnecessary, or if there is something wrong with the following proof: If $|G|=35=5\cdot7$, then by Cauchy's theorem, there exist $x,y \in G$ such that $o(x)=5$, $o(y)=7$. The order of the product $xy$ is then $\text{lcm}(5,7)=35$. Since we've found an element of $G$ of order 35, we conclude that $G$ is cyclic. Thanks.
Another explicit example: Consider $$ A = \left( \begin{array}{cc} 1 & -1 \\ 0 & -1 \end{array} \right), \quad \text{and} \quad B = \left(\begin{array}{cc} 1 & 0 \\ 0 & -1 \end{array} \right). $$ Then, $A^2 = B^2 = I$, but $$ AB = \left( \begin{array}{cc} 1 & 1 \\ 0 & 1 \end{array} \right) $$ has infinite order. It should also be mentioned that if $x$ has order $n$ and $y$ has order $m$, and $x$ and $y$ commute: $xy = yx$, then the order of $xy$ divides $\text{lcm}(m,n)$, though the order of $xy$ is not $\text{lcm}(m,n)$ in general. For example, if an element $g \in G$ has order $n$, then $g^{-1}$ also has order $n$, but $g g^{-1}$ has order $1$. Joriki's example also provides a scenario where the order of $xy$ is not $\text{lcm}(m,n)$ in general.
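A quick computational check of this example (a numpy sketch; the matrices are integer, so the arithmetic is exact):

```python
import numpy as np

A = np.array([[1, -1], [0, -1]])
B = np.array([[1, 0], [0, -1]])
I = np.eye(2, dtype=int)

print(np.array_equal(A @ A, I), np.array_equal(B @ B, I))  # True True

P = A @ B                              # [[1, 1], [0, 1]]
print(np.linalg.matrix_power(P, 50))   # [[1, 50], [0, 1]]: P^n is never I for n ≥ 1
```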
What would be the radius of convergence of $\sum\limits_{n=0}^{\infty} z^{3^n}$? I know how to find the radius of convergence of a power series $\sum\limits_{n=0}^{\infty} a_nz^n$, but how does this apply to the power series $\sum\limits_{n=0}^{\infty} z^{3^n}$? Would the coefficients $a_n=1$, so that one may apply D'Alembert's ratio test to determine the radius of convergence? I would appreciate any input that would be helpful.
What do you do for the cosine and sine series? There, you cannot use the Ratio Test directly because every other coefficient is equal to $0$. Instead, we do the Ratio Test on the subsequence of even (resp. odd) terms. You can do the same here. We have $a_{3^n}=1$ for all $n$, and $a_j=0$ for $j$ not a power of $3$. Define $b_k = z^{3^k}$. Then we are trying to determine the convergence of the series $\sum b_k$, so using the Ratio Test we have: $$\lim_{n\to\infty}\frac{|b_{n+1}|}{|b_{n}|} = \lim_{n\to\infty}\frac{|z|^{3^{n+1}}}{|z|^{3^{n}}} = \lim_{n\to\infty}|z|^{3^{n+1}-3^n} = \lim_{n\to\infty}|z|^{2\times3^{n}} = \left\{\begin{array}{cc} 0 & \text{if }|z|\lt 1\\ 1 & \text{if }|z|=1\\ \infty &\text{if }|z|\gt 1 \end{array}\right.$$ So the radius of convergence is $1$.
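A quick numeric illustration of this dichotomy (a Python sketch):

```python
# Inside the unit disk the partial sums settle down; outside it,
# the terms z**(3**n) do not even tend to 0.
def S(z, N):
    return sum(z ** (3 ** n) for n in range(N))

print(S(0.9, 8), S(0.9, 12))   # essentially equal: converges for |z| < 1
print(abs(1.05) ** (3 ** 5))   # ≈ 1.4e5: the terms blow up for |z| > 1
```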
Infinitely many $n$ such that $p(n)$ is odd/even? We denote by $p(n)$ the number of partitions of $n$. There are infinitely many integers $m$ such that $p(m)$ is even, and infinitely many integers $n$ such that $p(n)$ is odd. It might be proved by Euler's Pentagonal Number Theorem. Could you give me some hints?
Hint: Look at the pentagonal number theorem and the recurrence relation it yields for $p(n)$, and consider what would happen if $p(n)$ had the same parity for all $n\gt n_0$.
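To see the hint in action, here is a Python sketch that computes $p(n)$ modulo $2$ directly from the pentagonal-number recurrence (mod $2$ the alternating signs disappear, so XOR suffices):

```python
def partition_parity(N):
    p = [1] + [0] * N                    # p[n] = p(n) mod 2; p(0) = 1
    for n in range(1, N + 1):
        k = 1
        while True:
            g1 = k * (3 * k - 1) // 2    # generalized pentagonal numbers
            g2 = k * (3 * k + 1) // 2
            if g1 > n:
                break
            p[n] ^= p[n - g1]
            if g2 <= n:
                p[n] ^= p[n - g2]
            k += 1
    return p

par = partition_parity(30)
print([n for n in range(1, 31) if par[n] == 0])  # even p(n): 2, 8, 9, 10, 11, 15, ...
print([n for n in range(1, 31) if par[n] == 1])  # odd  p(n): 1, 3, 4, 5, 6, 7, 12, ...
```

Both lists keep growing, matching the claim that each parity occurs infinitely often.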
Lie bracket and covariant derivatives I came across the following equality $[\text{grad} f, X] = \nabla_{\text{grad} f} X + \nabla_X \text{grad} f$ Is this true, and how can I prove this (without coordinates)?
No. Replace all three occurrences of the gradient by any vector field, call it $W,$ but then replace the plus sign on the right hand side by a minus sign, and you have the definition of a torsion-free connection, $$ \nabla_W X - \nabla_X W = [W,X].$$ If, in addition, there is a positive definite metric, the Levi-Civita connection is defined to be torsion-free and satisfy the usual product rule for derivatives, in the guise of $$ X \, \langle V,W \rangle = \langle \nabla_X V, W \,\rangle + \langle V, \, \nabla_X W \, \rangle. $$ Here $\langle V,W \rangle$ is a smooth function, and writing $X$ in front of it means taking the derivative in the $X$ direction. Once you have such a connection, it is possible to define the gradient of a function: for any smooth vector field $W$, demand $$ W(f) = df(W) = \langle \, W, \, \mbox{grad} \, f \, \rangle. $$ Note that physicists routinely find use for connections with torsion. Also, $df$ (the differential) comes from the smooth structure alone; the gradient, like the connection, needs more.
Simplify an expression to show equivalence I am trying to simplify the following expression I have encountered in a book: $\sum_{k=0}^{K-1}\left(\begin{array}{c} K\\ k+1 \end{array}\right)x^{k+1}(1-x)^{K-1-k}$ and according to the book, it can be simplified to this: $1-(1-x)^{K}$. I wonder how it is done. I've tried to use Mathematica (to which I am new) to verify, by using $\text{Simplify}\left[\sum _{k=0}^{K-1} \left(\left( \begin{array}{c} K \\ k+1 \end{array} \right)*x{}^{\wedge}(k+1)*(1-x){}^{\wedge}(K-1-k)\right)\right]$ and Mathematica returns $\left\{\left\{-\frac{K q \left((1-q)^K-q^K\right)}{-1+2 q}\right\},\left\{-\frac{q \left(-(1-q)^K+(1-q)^K q+(1+K) q^K-(1+2 K) q^{1+K}\right)}{(1-2 q)^2}\right\}\right\}$ which I cannot quite make sense of. To sum up, my question is two-part: 1. how is the first expression equivalent to the second? 2. how should I interpret the result returned by Mathematica, presuming I'm doing the right thing to simplify the original formula? Thanks a lot!
Simplify[PowerExpand[Simplify[Sum[Binomial[K, k + 1]*x^(k + 1)*(1 - x)^(K - k - 1), {k, 0, K - 1}], K > 0]]] works nicely. The key is in the use of the second argument of Simplify[] to add assumptions about a variable, and in the use of PowerExpand[] to distribute powers. As for why the identity holds: shift the index with $j = k+1$ to get $\sum_{j=1}^{K}\binom{K}{j}x^{j}(1-x)^{K-j}$, which by the binomial theorem is $(x+(1-x))^K - (1-x)^K = 1-(1-x)^K$.
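If you prefer an independent check outside Mathematica, here is a sympy sketch that verifies the identity symbolically for small concrete $K$ (illustrative only):

```python
import sympy as sp

x, k = sp.symbols('x k')
for K in range(1, 8):
    s = sp.summation(sp.binomial(K, k + 1) * x**(k + 1) * (1 - x)**(K - 1 - k),
                     (k, 0, K - 1))
    assert sp.simplify(s - (1 - (1 - x)**K)) == 0
print("identity holds for K = 1..7")
```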