Maximum of a sequence of iid random variables divided by n converges to 0 in probability
$E|Y| <\infty$ implies that $\sum P(|Y| >n) <\infty$. Applying this to $Y=\frac {X_1} {\epsilon}$ we see that $\sum P(|X_1| >n \epsilon ) < \infty$. Hence $P(\frac {\max \{X_k,X_{k+1},...,X_n\}} n >\epsilon) \leq \sum\limits_{i=k}^{n} P(|X_i| >n \epsilon )\leq \sum\limits_{i=k}^{\infty} P(|X_1| >i \epsilon )\to 0$ as $ k \to \infty$, where we used $P(|X_i|>n\epsilon)=P(|X_1|>n\epsilon)\leq P(|X_1|>i\epsilon)$ for $i\leq n$. Can you finish the proof? Here is a stronger result: Let $S_n=X_1+X_2+...+X_n$. Then $\frac {S_n} n \to EX_1$ with probability $1$ by the Strong Law of Large Numbers. Now $\frac {X_n} n=\frac {S_n} n-\frac {n-1} n\frac {S_{n-1}} {n-1}$. Hence $\frac {X_n} n \to EX_1-EX_1=0$ with probability $1$. Thus $\frac {|X_n|} n \to 0$ with probability $1$. It is an elementary fact that if $a_n /n \to 0$ then $\max \{a_i: 1\leq i\leq n\} /n \to 0$. We have proved the stronger result that $\max \{|X_i|: 1 \leq i\leq n\} /n \to 0$ with probability $1$.
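A quick numerical illustration of the stronger statement (not part of the proof; the exponential law is just an arbitrary choice of an integrable distribution):

```python
import numpy as np

# Illustrate max(|X_1|,...,|X_n|)/n -> 0 for an i.i.d. integrable sequence.
rng = np.random.default_rng(0)
for n in [10**2, 10**4, 10**6]:
    x = rng.standard_exponential(n)   # E|X_1| = 1 < infinity
    print(n, np.abs(x).max() / n)
# The ratio shrinks toward 0 (the max grows only like log n here).
```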
Find the number of paths in a random walk
Something to note here is that you are counting paths, so the probabilities are not actually important, especially since the $X_i$ are independent. If we throw out condition (c) for a moment and just consider a random walk from $S_0$ to $S_{2n}$, we can actually break this problem into two halves. To satisfy condition (b) we must have $X_1 = 1$ and $X_{2n} = -1$. Between those moves we just need to make sure $S_k$ does not fall below $1$. To count this number of paths we can use the $(n-2)$nd Catalan number, denoted $C_{n-2}$. If you're interested in computing this thing, it has a closed form $C_n = \displaystyle\frac{1}{n+1}\binom{2n}{n}$ and some Googling or reading on Wikipedia can lead you to a proof of that. So we have $\displaystyle\frac{1}{n-1}\binom{2(n-2)}{n-2}$ ways to make the first $2n$ moves of our random walk, and then we have to make $2n$ moves that give us a net $+2$ to our position. This means we have $2$ more successes than failures, but we know that successes and failures combined add to $2n$. So we have $n+1$ successes and $n-1$ failures. They can come in any order, so the number of paths to do this is $\displaystyle\binom{2n}{n+1}$. These 'halves' were independent of each other because the $X_i$ are independent, so our total number of paths is $\displaystyle\frac{1}{n-1}\binom{2n-4}{n-2}\binom{2n}{n+1}$, which maybe gets a little nicer if you expand out the binomial coefficients, though I haven't tried.
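The full problem statement isn't reproduced above, so here is just a brute-force sanity check (my own sketch) of the two counting ingredients: the Catalan closed form against a direct count of nonnegative excursions, and the $\binom{2n}{n+1}$ count of $\pm1$ sequences with net displacement $+2$:

```python
from itertools import product
from math import comb

def catalan(m):
    return comb(2 * m, m) // (m + 1)

# C_m counts +-1 walks of length 2m from 0 back to 0 that never dip below 0.
def excursions(m):
    count = 0
    for steps in product((1, -1), repeat=2 * m):
        h, ok = 0, True
        for s in steps:
            h += s
            if h < 0:
                ok = False
                break
        count += ok and h == 0
    return count

for m in range(1, 7):
    assert excursions(m) == catalan(m)

# Walks of 2n steps with net +2 have n+1 up-steps: C(2n, n+1) of them.
for n in range(1, 7):
    brute = sum(sum(s) == 2 for s in product((1, -1), repeat=2 * n))
    assert brute == comb(2 * n, n + 1)

print("both counting ingredients check out")
```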
$\sigma$-weakly continuous functionals on von Neumann algebras.
This was first understood for $B(H)$. If you consider the compact operators $K (H) $, their dual is the space of trace-class operators $T(H)$, via the duality $$\tag1 \hat R(S)=\operatorname{Tr}(SR),\ \ \ \ \ R\in T(H),\ \ S\in K(H). $$ And with the same duality we have $T(H)^*=B(H)$. This is exactly analogous to $c_0^*=\ell^1(\mathbb N)$, $\ell^1(\mathbb N)^*=\ell^\infty(\mathbb N)$. One can characterize the linear functionals on $B(H)$ that come from $T(H)$ as those that are ultraweakly continuous. And then it was found that this approach works in general: given a von Neumann algebra $M$, one can prove that the dual of $M_*$, the space of ultraweakly continuous linear functionals on $M$, is $M$ via the duality $$ \hat x(\phi)=\phi(x),\ \ \ \ x\in M, \ \ \ \phi\in M_*. $$
argument and atan/arctan
That is because, when you want to determine the argument of a complex number $z=x+iy$, really you have to solve, not a single trigonometric equation, but a system of trigonometric equations: $$\begin{cases}\cos\theta=\dfrac x{\sqrt{x^2+y^2}}\\[1ex] \sin\theta=\dfrac y{\sqrt{x^2+y^2}}\end{cases} $$ This system implies the equation $\;\tan\theta=\dfrac yx$, but the converse is not true, and the latter equation determines $\theta$ modulo $\pi$, whereas the system determines it modulo $2\pi$. To know if you have to add (or subtract) $\pi$, you can use the signs of $x$ and $y$: if $\dfrac y x >0$, either $x$ and $y$ are both positive, or they're both negative. If $\dfrac y x <0$, exactly one of them is positive. Thus considering the signs of $x$ and $y$ lets you know whether you have to add $\pi$ to the solution found from the tangent, or not.
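In code, this sign bookkeeping is exactly what the two-argument arctangent does; a small illustration (the sample points are my own):

```python
import math

# atan(y/x) only determines the angle modulo pi; atan2(y, x) uses the
# individual signs of x and y to return the true argument in (-pi, pi].
for x, y in [(1, 1), (-1, -1), (-1, 1), (1, -1)]:
    print(f"x={x:2d} y={y:2d}  atan(y/x)={math.atan(y/x):+.4f}"
          f"  atan2(y,x)={math.atan2(y, x):+.4f}")
# (1,1) and (-1,-1) give the same atan(y/x), but arguments pi/4 and -3pi/4.
```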
Probability with two uniformly distributed costs
Just draw a rectangle in the coordinate plane. Let the $x$-axis be the first cost, and the $y$-axis be the second. Then we have a rectangle corresponding to the relevant values of the two costs, see the diagram below. Then look at the graph of the inequality $x+y<9000$. It cuts out a portion of the rectangle, over which the total cost will be less than $9000$. So just figure out what fraction of the rectangle that portion is. Bad Paint sketch, not to scale. The answer should be $$\frac{3000\times3000/2}{3000\times9000}=\frac{1}{6}$$
How to evaluate $\int \frac{\sin (\pi x)}{|x|^a + 1} dx$
You are only asked when the improper integral exists (that is, converges), not what the integral equals. There are two senses in which the Improper Integral can exist: $\;\text{1}$. Cauchy Principal Value: $\displaystyle\lim_{L\to\infty}\int_{-L}^L\frac{\sin(\pi x)}{|x|^a+1}\,\mathrm{d}x$ Since the integrand is odd, the integral is $0$ for any $L$, so the limit is $0$ for any $a$. $\;\text{2}$. standard: $\displaystyle\lim_{L\to\infty}\int_0^L\frac{\sin(\pi x)}{|x|^a+1}\,\mathrm{d}x+\lim_{L\to\infty}\int_{-L}^0\frac{\sin(\pi x)}{|x|^a+1}\,\mathrm{d}x$ Since the integrand is odd, the integrals above are negatives of each other. Therefore, if one of the limits exists, both do. Define $$ b_n=(-1)^n\int_n^{n+1}\frac{\sin(\pi x)}{|x|^a+1}\,\mathrm{d}x=\int_n^{n+1}\frac{|\sin(\pi x)|}{|x|^a+1}\,\mathrm{d}x\lt\frac1{n^a+1}\tag{1} $$ then $$ \sum_{k=0}^{n-1}(-1)^kb_k=\int_0^n\frac{\sin(\pi x)}{|x|^a+1}\,\mathrm{d}x\tag{2} $$ Note that $$ \begin{align} b_n-b_{n+1} &=\int_n^{n+1}|\sin(\pi x)|\left(\frac{1}{|x|^a+1}-\frac{1}{|x+1|^a+1}\right)\,\mathrm{d}x\\ &\gtreqless0\text{ when }a\gtreqless0\tag{3} \end{align} $$ When $a\le0$ the terms of the series in $(2)$ do not tend to $0$, so the series, and therefore the improper integral, does not converge. When $a\gt0$, $b_n$ is a decreasing sequence, tending to $0$, and so by the Dirichlet Test, the series in $(2)$ converges. This handles the case for $L=n$, an integer. However, for $t\in[0,1]$ $$ \int_n^{n+t}\frac{|\sin(\pi x)|}{|x|^a+1}\,\mathrm{d}x\lt\frac1{n^a+1}\tag{4} $$ thus the limit is true even when $L$ is not restricted to integers. Summary The Cauchy Principal Value of the improper integral exists for all $a$. The standard improper integral exists only when $a>0$. When the improper integral exists, its value is $0$ because the integrand is odd.
What is the correct mapping notation for this set-valued function?
Yes, $c\colon P \to \mathcal{P}\left(P\right)$, but we can't write $e\colon P\to c\left(P\right)$. The set $c\left(P\right)$ is a subset of $\mathcal P\left(P\right)$, and we don't need that set. The function $e$ takes an index $i$ and returns an element of $c\left(i\right)$. As $c\left(i\right)\subseteq P$, it follows that $e\left(i\right)\in P$, so the type of the function $e$ is $$e\colon P\rightarrow P.$$
Proof verification for Identity matrices
Rather than saying that we move the identity to the LHS, it is better to say that we add $-I$ to both sides. We have $A^2-I=(A-I)(A+I)$; we just have to expand the right hand side to verify that. For matrices, $AB=0$ doesn't imply that $A=0$ or $B=0$. For example $$\begin{bmatrix} 2 & 0 \\ 0 & 0\end{bmatrix}\begin{bmatrix} 0 & 0 \\ 0 & -2\end{bmatrix}= \begin{bmatrix} 0 & 0 \\ 0 & 0\end{bmatrix}$$ In particular, $$\left(\begin{bmatrix} 1 & 0 \\ 0 & -1\end{bmatrix}+\begin{bmatrix} 1 & 0 \\ 0 & 1\end{bmatrix}\right)\left(\begin{bmatrix} 1 & 0 \\ 0 & -1\end{bmatrix}-\begin{bmatrix} 1 & 0 \\ 0 & 1\end{bmatrix}\right)= \begin{bmatrix} 0 & 0 \\ 0 & 0\end{bmatrix}$$ that is, we can't conclude that $(A+I)(A-I)=0$ implies $A+I=0$ or $A-I=0$.
Problem of Conditional Probability
"ok then I did not understood 4th point itself. How is (Given that everyone in the set of k has a match, the other N−k people will be randomly choosing among their own N−k hats, so the probability that none of them has a match) = (the probability of no matches in a problem having N−k people choosing among their own N−k hats)" We have arbitrarily divided $N$ people into two clusters, call them $A,B$. If $A$ is of size $k$, then $B$ will be of size $N-k$. Event $E$ is the event that everyone in cluster $A$ has their own hat.   If this happens then where are all the hats belonging to people in cluster $B$?   No-one in cluster $A$ will have any of them if everyone in cluster $A$ has their own hat, so... Event $G$ is the event that no one in cluster $B$ has their own hat. Thus the probability $P(G\mid E)$ is the (conditional) probability that no-one in cluster $B$ has their own hat, given that all the hats in cluster $B$ belong to someone in that cluster (since everyone in cluster $A$ has their own hat).   Thus it is the probability that $N-k$ people and their own $N-k$ hats will be mismatched.
Why does transforming a 2-D shape defined by a 2xn matrix using a Singular Transformation Matrix always result in a line?
A $2 \times 2$ matrix can have a 0-, 1-, or 2-dimensional image. If it has a 2-dimensional image, it is nonsingular (also called invertible). If it has a 0- or 1-dimensional image, it is called singular (also, non-invertible). An example of the former is $\begin{pmatrix} 0 & 0 \\ 0 & 0 \end{pmatrix}$, which clearly only has the vector $(0,0)$ for its image. An example of the latter is $\begin{pmatrix} 1 & 0 \\ 0 & 0 \end{pmatrix}$, which maps $(x,y)$ to $(x,0)$, i.e. to a 1-dimensional space. Another way to think about this is that singular matrices take some non-zero vector to the zero vector, so they take all multiples of that vector to the zero vector. This is a 1-dimensional subspace that is collapsed to a point. A parallel collapsing happens everywhere else -- only a space perpendicular to that original vector survives. (And in the 0-dimensional case, that perpendicular vector doesn't survive, because it is also sent to zero.)
Finding a gcd in the form of a monic polynomial
What do you get if you multiply your GCD by 3?
Proof by induction of a specific series
In the inductive step you have $$ 2\times6\times10\times\cdots\times(4(k+1)-2)=2\times6\times10\times\cdots\times(4k-2)\times(4(k+1)-2)\\ \ \\=\frac{(2k)!(4(k+1)-2)}{k!} =\frac{(2k)!(4k+2)}{k!} =\frac{2(2k)!(2k+1)}{k!} =\frac{2(2k+1)!}{k!} =\frac{2(2k+2)!}{k!(2k+2)} =\frac{(2k+2)!}{k!(k+1)} =\frac{(2k+2)!}{(k+1)!} $$
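A one-line numerical check of the closed form $2\cdot 6\cdot 10\cdots(4n-2)=\frac{(2n)!}{n!}$ used in the induction:

```python
from math import factorial, prod

# Verify 2 * 6 * 10 * ... * (4n - 2) == (2n)! / n! for small n.
for n in range(1, 15):
    lhs = prod(range(2, 4 * n - 1, 4))   # terms 2, 6, 10, ..., 4n - 2
    assert lhs == factorial(2 * n) // factorial(n)
print("identity verified for n = 1..14")
```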
Perceptrons that recognize AND, OR, NOT
A look at the Wikipedia page on perceptron weights, see here, indicates that these are a bit complicated in applying them to some kind of "learning process" whereby various cases are looked at in order to find a perceptron which is in some sense optimal. However, using just the definition of a perceptron as a function $f$ defined on binary vectors $x$ in terms of a weight vector $w$ and bias real number $b$ having the form $$f(x)=w \cdot x +b\tag{1}$$ where here $f(x)$ is to return $1$ if the right side of $(1)$ is greater than $0$, otherwise $f(x)=0,$ makes it fairly simple to get at least one "perceptron" for each of AND, OR, NOT. For each of AND, OR one can use the weight vector $w=(1,1).$ Then specifically to get a perceptron for OR one can use bias $b=0$, and then with $x=(p,q)$ the formula $f(p,q)=(1,1)\cdot (p,q)+0=p+q$ has the desired property that it is positive when $p$ OR $q$ is true, otherwise it is not positive. The same idea works for $p$ AND $q$ using the same $w=(1,1)$, but this time set the bias $b$ to $-1$; then $f(p,q)=p+q-1$ gives a working formula for AND. In the case of NOT, the weight vector $w$ has only one component and may be taken as $(-1)$, with bias $b=+1$, giving the formula $-p+1$ as a formula capturing NOT $p$.
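Putting these weight and bias choices into code, a minimal sketch with truth-table checks:

```python
def perceptron(w, b):
    """f(x) = 1 if w . x + b > 0, else 0, as in formula (1)."""
    return lambda x: 1 if sum(wi * xi for wi, xi in zip(w, x)) + b > 0 else 0

AND = perceptron((1, 1), -1)   # p + q - 1 > 0 only when p = q = 1
OR  = perceptron((1, 1), 0)    # p + q > 0 when p or q is 1
NOT = perceptron((-1,), 1)     # -p + 1 > 0 only when p = 0

for p in (0, 1):
    for q in (0, 1):
        print(p, q, "AND:", AND((p, q)), "OR:", OR((p, q)))
    print("NOT", p, "=", NOT((p,)))
```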
Is the space of continuous functions which are uniformly continuous on bounded subsets complete wrt the sup-norm?
I think all you need here is the standard argument that a uniform limit of uniformly continuous functions is uniformly continuous. Suppose that $f=\lim_nf_n$, with each $f_n$ uniformly continuous on bounded subsets. Let $\varepsilon>0$ and $Y\subset H$ be bounded. Then there exists $n$ with $\|f-f_n\|_{C_b^2(H)}<\varepsilon/3$. As $f_n$, $Df_n$, $D^2f_n$ are uniformly continuous on $Y$, there exists $\delta>0$ such that $\|D^kf_n(x)-D^kf_n(y)\|<\varepsilon/3$, $k=0,1,2$, whenever $x,y\in Y$ and $\|x-y\|<\delta$. Then, if $x,y\in Y$ with $\|x-y\|<\delta$, for $k=0,1,2$ we have \begin{align} \|D^kf(x)-D^kf(y)\| &\leq\|D^kf(x)-D^kf_n(x)\|+\|D^kf_n(x)-D^kf_n(y)\|+\|D^kf_n(y)-D^kf(y)\|\\[0.3cm] &\leq \|f-f_n\|_{C_b^2(H)}+\frac\varepsilon3+\|f_n-f\|_{C_b^2(H)}\\[0.3cm] &\leq\frac\varepsilon3+\frac\varepsilon3+\frac\varepsilon3=\varepsilon. \end{align} So $f$, $Df$, $D^2f$ are uniformly continuous.
Why is the modular $\lambda$ function a quotient of two meromorphic functions in the U.H.P.?
I think the next step is proving that both the numerator and denominator are analytic as a function of $\tau$ in $\Im \tau > 0$. Firstly, I don't know how to do that, and secondly, shouldn't they be analytic for $\Im \tau < 0$ as well? Yes, the natural way is to see that numerator and denominator are analytic functions of $\tau$. Let me answer the second point first, since that is short and easy: Yes, they are analytic for $\Im \tau < 0$ too, but we're not interested in that. It is of course an arbitrary choice whether to use the upper or the lower half plane, but the restriction to a half plane is necessary, since the real line is a natural boundary of $\lambda$. If you look at the sums $$\sum' \frac{1}{(\rho - n_1 - n_2\tau)^2} - \frac{1}{(n_1+n_2\tau)^2},$$ where $\rho\in \left\lbrace\dfrac12,\,\dfrac{\tau}{2},\,\dfrac{1+\tau}{2} \right\rbrace$, you see that each term is analytic in the entire upper (and lower) half plane, but infinitely many terms have a pole in each $\tau \in \mathbb{Q}$, and that means the sum cannot be well-behaved in any interval of $\mathbb{R}$. So let's come to the first point, how to see that these are analytic functions of $\tau$ (in the upper half plane). Every term of the sums is analytic, so the standard way of seeing analyticity is to show that the sums converge locally uniformly in the half plane. That is done quite similarly to showing the locally uniform convergence of $\wp$. For $\omega = n_1 + n_2\tau \in \Omega\setminus \{0\}$, we have $$\frac{1}{(\rho-\omega)^2} - \frac{1}{\omega^2} = \frac{\rho(2\omega-\rho)}{(\rho-\omega)^2\omega^2}.$$ For $\lvert\tau\rvert \leqslant K$ and $\lvert \omega\rvert \geqslant 2K$ (with $K\geqslant 1$), we can estimate the absolute modulus by $$\frac{M}{\lvert\omega\rvert^3},$$ since $\lvert \rho-\omega\rvert \geqslant \frac12\lvert\omega\rvert$ and $\lvert 2\omega-\rho\rvert \leqslant 3\lvert\omega\rvert$. Now bounding $\lvert n_1 + n_2\tau\rvert^2$ below by a positive multiple of $n_1^2 + n_2^2$ will establish locally uniform convergence by Weierstraß' $M$-test. To get such an estimate, one must keep $\tau$ away from the real line, $\Im\tau \geqslant \varepsilon > 0$. Writing $\tau = \xi + i\eta$, we have $$\lvert n_1 + n_2\tau\rvert^2 = (n_1+n_2\xi)^2 + n_2^2\eta^2 \geqslant (n_1+n_2\xi)^2 + \varepsilon^2 n_2^2,$$ and the compactness of the unit circle ensures the existence of a $c > 0$ with $$(x+y\xi)^2 + \varepsilon^2y^2 \geqslant c\cdot (x^2+y^2)$$ for all real $x,y$. Thus the sums converge uniformly on every compact subset of the upper half plane, hence yield analytic functions.
Regarding linear dependence and independence for finite sequences of vectors
I guess the thing is that $e^{\pi/2}=4.8104...\approx\sqrt[3]{110}=4.7914...$, so these vectors are very close to each other, but still independent. Hence they can make a basis of $\mathbb{R}^2$. But I still don't grasp the necessity of writing that. The only things you need to know are what a linearly independent set of vectors is, what it has to do with the dimension of the space $V$, and what a basis of $V$ is. So I do not think you are missing anything.
Integrals of partial derivative
If you extract the desired term you will get the following result. $$ \begin{aligned} P(T \geq v, U \geq u) &= 1 - P\big((T \geq v, U \geq u)^C\big) \\ &= 1-P\big((T \geq v)^C \cup (U \geq u)^C\big) \\ &= 1 - P\big((T < v) \cup (U<u)\big) \\ &= 1 -\big[P(T < v) + P(U < u) - P(T < v, U < u)\big] \\ &= 1 - F_T(v) - F_U(u)+F_{T,U}(v, u) \end{aligned} $$ You stated that $T$ is absolutely continuous and $U$ is an arbitrary continuous random variable. Hence the result from above should also be continuous.
Does this sequence converge?
Another way for even $T$: $$\frac{[1^a + \cdots + (T/2)^a] + [ (T/2+1)^a + \cdots + T^a]}{T} \geq \frac{[0] + [(T/2)^a + \cdots + (T/2)^a]}{T} \\ = \frac{(T/2)^a(T/2)}{T} = T^a/2^{a+1}$$ where the second expression is the sum of $(T/2)^a$ added together $T/2$ times. P.S. The principle of what you are doing in your question is fine. I'm too lazy to check that the bounds are all correct.
stats confidence intervals
Right. So that I don't have to leave it as a one-word answer, and perhaps to give you more "confidence", I can refer you to Wikipedia "Student's t-distribution" under "Confidence intervals".
Geometric interpretation of the norm $\|\vec x\|={(|x_1|+|x_2|)\over 3}+{2\max(|x_1|,|x_2|)\over 3}$
Note the expression for the norm can be rewritten as $$ ||(x_1, x_2)|| = \max \{|x_1|, |x_2|\} + \frac{\min \{|x_1|, |x_2|\}}{3} $$ Hence the graph you look for is the set of points $(x_1, x_2)$ such that $$ \max \{|x_1|, |x_2|\} + \frac{\min \{|x_1|, |x_2|\}}{3} \lt 1 \iff 3 \max \{|x_1|, |x_2|\} + \min \{|x_1|, |x_2|\}\lt 3 $$ I'm not sure about this part. Needs verification: From here I think you must map the following regions on the $(x, y)$ plane in the corresponding regions. For $ |y| \ge |x| $, the inequality becomes $ 3|y| + |x| \lt 3 $: $$ 3 y + x \lt 3 \;\; \text{for } \;\; x \ge 0, y \ge 0 $$ $$ 3 y - x \lt 3 \;\; \text{for } \;\; x \lt 0, y \ge 0 $$ $$ - 3 y - x \lt 3 \;\; \text{for } \;\; x \lt 0, y \lt 0 $$ $$ - 3 y + x \lt 3 \;\; \text{for } \;\; x \ge 0, y \lt 0 $$ Now I think we need to similarly map $ 3|x| + |y| \lt 3 $ for $ |x| \ge |y| $. Note however that the $8$ separate inequalities we get are confined to $8$ disjoint regions in the plane. The $8$ sub-quadrants, or octants if you will. So I'm guessing it can be done.
Is $\sin(\sin^{-1}(x))$ always equal to $x$?
Using principal values, $y=\sin^{-1}x$ is defined when $-1\le x\le1$, with $-\dfrac\pi2\le y\le\dfrac\pi2$; then $\sin(y)=x$.
Find an example of a function that satisfies the following conditions:
Let $$ f(x_1,x_2) = \begin{cases} x_1 & \text{if $|x_2| \geq |x_1|$} \\ -x_1 & \text{if $|x_2| < |x_1|$} \\ \end{cases} $$ Then $$ \lim_{t\to 0} \frac{f(th_1,th_2)}{t} = \begin{cases} h_1 & \text{if $|h_2| \geq |h_1|$}\\ -h_1 & \text{if $|h_2| < |h_1|$}\\ \end{cases} $$
Can I convert $2\cos x$ to $\sin \dfrac{x}{2}$?
Neither of those is a valid substitution. For the first, observe that, at $x = 0$, we have $$ 2\cos 0 = 2\\ \sin \dfrac{0}{2} = 0. $$ For the second, look at $x = \pi$ and notice that $$ -2 + 2 \cos \pi = -4 \\ \sin \dfrac{\pi}{2} = 1. $$
Group Theory and Lagrange's Theorem: coprime subgroups.
For the second part, just consider two groups $G_{1}, G_{2}$ of order $2$, say, and let $K$ be the diagonal subgroup. For the first part, you are right in wanting to use Bezout. Suppose the order of $G_{i}$ is $n_{i}$, and find suitable $a_{i}$ such that $$ a_{1} n_{1} + a_{2} n_{2} = 1. $$ Take any $(h, k) \in K$, and note that $$ (h, k)^{a_{1} n_{1}} = (e, k^{a_{1} n_{1}}), \text{so that } k^{a_{1} n_{1}}\in H_{2}, \qquad (h, k)^{a_{2} n_{2}} = (h^{a_{2} n_{2}}, e), \text{so that } h^{a_{2} n_{2}} \in H_{1}. $$ Now $$ (h, k) = (h, k)^{1} = (h, k)^{a_{1} n_{1} + a_{2} n_{2}} = (e, k^{a_{1} n_{1}}) \cdot (h^{a_{2} n_{2}}, e) = (h^{a_{2} n_{2}}, k^{a_{1} n_{1}}) \in H_{1} \times H_{2}. $$
Simple Bayes rule question, how much to pay
I will assume that "we are shown one bead" means that we are shown one bead, chosen at random from the beads in the selected envelope. Since intuition can sometimes lead us astray, we will use the formal conditional probability machinery. Let $W$ (for "win") be the event that the envelope contains $2$ red and $2$ black, and let $B$ be the event that the ball that was drawn is black. We want the conditional probability $P(W|B)$. By the definition of conditional probability, we have $$P(W|B)=\frac {P(W\cap B)}{P(B)}.$$ We compute the two required probabilities on the right. The probability of $W\cap B$ is $(1/2)(2/4)$. This is because we must pick the correct envelope (probability $1/2$) and draw one of the $2$ black beads from the $4$ beads in the envelope. The probability of $B$ is $(1/2)(2/4)+(1/2)(2/3)$. This is because the event "we get a black bead" can happen in two disjoint ways: (i) We pick the envelope with $2$ black, $2$ red and draw a black bead or (ii) We pick the envelope with $2$ black, $1$ red and draw a black bead. Divide and simplify. We get $3/7$. The expectation of the amount of money in the envelope is therefore $(3/7)(1)+(4/7)(0)$, which is $3/7$. As to how much one "should" pay for the envelope, the commonsense answer is as little as possible. But if we pay $3/7$ of a dollar for the envelope, then our expected total gain is $0$, making it a "fair" game. That is undoubtedly the expected answer. A Somewhat Different Analysis: Let $a$ be the amount that we pay for the envelope, and let $X$ be our net gain. If there is a dollar in the envelope, the net gain is $1-a$. If there is no money in the envelope, our net gain is $-a$. Thus $$E(X)=(1-a)(3/7)+(-a)(4/7).$$ Simplify. We get $$E(X)=\frac{3-7a}{7}.$$ This expectation is $0$ ("fair game") if $3-7a=0$, that is, if $a=3/7$. Reality Check: The envelope with no money is (proportionally) richer in black than the envelope with money. So drawing a black bead means that there is a better than $1/2$ probability that we are dealing with the no money envelope. Thus the expected value of the money in the envelope is less than $50$ cents. And indeed $3/7$ is less than $1/2$. It is almost always a good idea to check whether a computed answer "makes sense." Comment: If the bead we are shown is not chosen at random, the answer depends on the strategy that the opponent is using. For example, if the opponent always shows you a black bead, whatever envelope you choose, then the probability of winning a dollar is $1/2$, so a payment of $50$ cents makes the game fair.
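A quick simulation of the setup (assuming, as stipulated above, that the shown bead is drawn uniformly at random from the chosen envelope):

```python
import random

random.seed(1)
wins = shown_black = 0
for _ in range(10**6):
    # Envelope 1: 2 red, 2 black, contains $1.  Envelope 2: 1 red, 2 black, empty.
    if random.random() < 0.5:
        beads, prize = "RRBB", 1
    else:
        beads, prize = "RBB", 0
    if random.choice(beads) == "B":   # we are shown a black bead
        shown_black += 1
        wins += prize
print(wins / shown_black)   # ~ 3/7 = 0.4286
```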
Regarding relation between dimension of dual space of normed linear space X and dimension of X.
If $\{e_1,\ldots,e_n\}$ is a basis of $X$, every vector $x\in X$ can be decomposed uniquely according to this basis \begin{equation} x = x^1 e_1 +\cdots+x^n e_n \end{equation} The map $f_i: x\mapsto x^i$ that maps every vector to its i-th coordinate on this basis is a linear functional on the space $X$, that is to say $f_i \in X'$. Hint: Prove that $f_1,\ldots,f_n$ are linearly independent in $X'$.
Show field extension is Galois via constructing separable polynomial
$\beta$ is not real, so $\beta \notin \Bbb Q(\alpha)$. As you noticed, $\beta$ and $\alpha$ share the same minimal polynomial, whose roots are $\alpha,-\alpha,\beta,-\beta$. Therefore, $(X-\beta)(X+\beta) = (X^4 -2X^2-2)/\big((X-\alpha)(X+\alpha)\big) \in \Bbb Q(\alpha)[X]$ is the minimal polynomial of $\beta$ over $\Bbb Q(\alpha)$, so $\Bbb Q(\alpha,\beta)$ has degree $8$ and is the splitting field of the polynomial $X^4-2X^2-2$. Now about the Galois group. Since the extension is Galois it has order $8$. An automorphism is determined by the image of $\alpha$ (among the $4$ candidates), which determines the image of $- \alpha$, and then the image of $\beta$ among the two remaining roots. We can write them down explicitly and find the group structure: Let $\phi$ be defined by $\phi(\alpha,\beta) = (\beta,- \alpha)$. Then $\phi^2(\alpha,\beta) = \phi(\beta, -\alpha) = (-\alpha,-\beta)$ $\phi^3(\alpha,\beta) = \phi(-\alpha,-\beta) = (-\beta,\alpha)$ $\phi^4(\alpha,\beta) = \phi(-\beta,\alpha) = (\alpha,\beta)$, so $\phi$ has order $4$. Let $\psi$ be defined by $\psi(\alpha,\beta) = (-\alpha,\beta)$. $\psi$ has order two, and the remaining automorphisms are $\psi\phi(\alpha,\beta) = \psi(\beta,-\alpha) = (\beta,\alpha)$ $\psi\phi^2(\alpha,\beta) = (\alpha,-\beta)$ $\psi\phi^3(\alpha,\beta) = (-\beta,-\alpha)$ $\phi\psi(\alpha,\beta) = \phi(-\alpha,\beta) = (-\beta,-\alpha) = \psi\phi^3(\alpha,\beta)$ so $\phi\psi = \psi\phi^3$: the group is not commutative. All of this shows that the group is isomorphic to the dihedral group of order $8$, $D_8$. $D_8$ has three subgroups of order $4$ and $5$ subgroups of order $2$, corresponding to $\Bbb Q(\sqrt {-2}), \Bbb Q(\sqrt 3),\Bbb Q(\sqrt {-6}),\Bbb Q(\sqrt {-2},\sqrt 3) ,\Bbb Q(\alpha), \Bbb Q(\beta), \Bbb Q(\alpha+ \beta), \Bbb Q(\alpha-\beta)$. See if you can find a simplified expression for $\alpha+\beta$ and $\alpha-\beta$. In order to get an irreducible polynomial of degree $8$ that splits completely, you need the minimal polynomial of an element of $\Bbb Q(\alpha,\beta)$ that is not fixed by any nontrivial element of the Galois group. For example you can pick the minimal polynomial of $\alpha+2\beta$.
Decimal expansion of a fraction
The point here is that the two real numbers $$0.24999... \text{ and } 0.25000...$$ are the same. There are a couple of ways to see this. Let's just consider the number $$r=0.9999...$$ we can demonstrate that this is actually equal to 1: $$10r-r = 9$$ $$9r=9$$ $$r=1$$ If that isn't convincing, you can view this in terms of a geometric series: $$\sum_{n=1}^\infty 9 \cdot 10^{-n} = 9 \frac{10^{-1}}{1-10^{-1}} = 9 \cdot \frac{1}{9} = 1$$ Or finally from the definition of real numbers as equivalence classes of Cauchy sequences of rational numbers, we see that the two sequences $$\{0.9, 0.99, 0.999, ...\} \text{ and } \{1, 1, 1, 1,... \}$$ are in the same equivalence class. Thus the two real numbers are the same.
Countable subadditivity of $L^p$ norms (proof check)
You cannot expect equality: Take $f(x) = \exp(-x)$ on $[1,\infty)$, $p=2$ and $E_n=[n,n+1)$. Then we have $$M_n:=\int_{n}^{n+1} \exp(-2x) \, \mathrm{d} x = \frac{\exp(-2n) - \exp(-2(n+1))}{2}$$ and $$\Bigg( \sum_{n=1}^\infty M_n \Bigg)^{1/2} = \Bigg( \int_1^\infty \exp(-2x) \, \mathrm{d} x \Bigg)^{1/2} = \frac{e^{-1}}{\sqrt{2}},$$ but by summing the geometric series, $$ \sum_{n=1}^\infty M_n^{1/2} = \frac{e^{-1}}{\sqrt{2}} \frac{\sqrt{e^2-1}}{e-1}>\frac{e^{-1}}{\sqrt{2}}.$$
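Numerically (my own check of the two sides):

```python
from math import exp, sqrt, e

M = [(exp(-2 * n) - exp(-2 * (n + 1))) / 2 for n in range(1, 200)]
lhs = sqrt(sum(M))               # (sum M_n)^(1/2) = e^{-1}/sqrt(2) ~ 0.2601
rhs = sum(m ** 0.5 for m in M)   # sum of M_n^(1/2)                 ~ 0.3827
print(lhs, exp(-1) / sqrt(2))
print(rhs, exp(-1) / sqrt(2) * sqrt(e**2 - 1) / (e - 1))
```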
Greatest Integer Function $ [x^2] $ : Riemann Integration Question
Since no one is helping with an answer, I figured one out after reading through various questions on Riemann integrability. The function $[x^2]$ takes the value $0$ on $[0,1)$, $1$ on $[1,\sqrt2)$, $2$ on $[\sqrt2,\sqrt3)$, $3$ on $[\sqrt3,2)$, and $[2^2]=4$. Since it is discontinuous at $1,\sqrt 2, \sqrt 3$ in the interval $[0,2]$, we choose a partition that isolates each discontinuity in a short subinterval: $$0=x_0<x_1<\cdots<x_i<y_0=1<y_1<\cdots<y_i<z_0=\sqrt 2<z_1<\cdots<z_i<k_0=\sqrt 3<k_1<\cdots<k_i<m_0=2.$$ Now $$U(P,f)=0\cdot(x_i-x_0)+1\cdot(y_0-x_i)+1\cdot(y_i-y_0)+2\cdot(z_0-y_i)+2\cdot(z_i-z_0)+3\cdot(k_0-z_i)+3\cdot(k_i-k_0)+4\cdot(m_0-k_i)$$ and $$L(P,f)=0\cdot(x_i-x_0)+0\cdot(y_0-x_i)+1\cdot(y_i-y_0)+1\cdot(z_0-y_i)+2\cdot(z_i-z_0)+2\cdot(k_0-z_i)+3\cdot(k_i-k_0)+3\cdot(m_0-k_i).$$ Subtracting, $$U(P,f)-L(P,f)=(y_0-x_i)+(z_0-y_i)+(k_0-z_i)+(m_0-k_i),$$ and each of these four gaps can be made smaller than $\epsilon/4$ by refining the partition, so $U(P,f)-L(P,f)<\epsilon$. Hence $[x^2]$ is Riemann integrable on $[0,2]$, and both sums converge to $$\int_0^2[x^2]\,dx=1\cdot(\sqrt2-1)+2\cdot(\sqrt3-\sqrt2)+3\cdot(2-\sqrt3)=5-\sqrt2-\sqrt3.$$
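A numerical check of the value (a sketch using uniform left-endpoint Riemann sums, which converge since the function was just shown to be integrable):

```python
from math import floor, sqrt

def left_riemann_sum(n):
    # left-endpoint Riemann sum of floor(x^2) on [0, 2] with n equal pieces
    h = 2 / n
    return sum(floor((i * h) ** 2) * h for i in range(n))

exact = 5 - sqrt(2) - sqrt(3)   # 1(sqrt2 - 1) + 2(sqrt3 - sqrt2) + 3(2 - sqrt3)
for n in [10**3, 10**5]:
    print(n, left_riemann_sum(n), exact)
```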
How to prove that $a=z^{p}$ for some $z \in \mathbb{Z_{+}}$?
The problem is false as stated. Let $a$ be any integer such that $a^{p-1}\equiv1\pmod{p^2}$. Then $$ a^{p(p-1)}-1 = (a^{p-1}-1)\big( (a^{p-1})^{p-1} + (a^{p-1})^{p-2} + \cdots + (a^{p-1})^1 + 1 \big); $$ the first factor is divisible by $p^2$ by assumption, and the second factor is congruent to $p\pmod {p^2}$, hence is divisible by $p$. We conclude that $a^{p(p-1)}\equiv1\pmod{p^3}$ automatically. Similarly, $$ a^{p^2(p-1)}-1 = (a^{p(p-1)}-1)\big( (a^{p(p-1)})^{p-1} + (a^{p(p-1)})^{p-2} + \cdots + (a^{p(p-1)})^1 + 1 \big) $$ is then divisible by $p^4$, etc. In short, one can prove by induction that if $a^{p-1}\equiv1\pmod{p^2}$, then automatically $a^{p^{n-2}(p-1)}\equiv1\pmod{p^n}$ for every $n\ge2$. And $a^{p-1}\equiv1\pmod{p^2}$ certainly does not imply that $a$ must be the $p$th power of an integer. (Examine the pairs $(a,p) = (7,5)$ and $(a,p)=(17,3)$, for example.) And note that adding $p^2$ to $a$ preserves the congruence, so $(32,5)$ and $(57,5)$ and $(82,5)$ ... are all counterexamples as well; in particular, there are plentiful counterexamples where $a$ is not prime.
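The counterexamples are quick to check in Python:

```python
# If a^(p-1) == 1 (mod p^2), then a^(p^(n-2)(p-1)) == 1 (mod p^n) for n >= 2,
# yet none of these a is a p-th power of an integer.
for a, p in [(7, 5), (17, 3), (32, 5), (57, 5), (82, 5)]:
    assert pow(a, p - 1, p * p) == 1
    for n in range(2, 8):
        assert pow(a, p ** (n - 2) * (p - 1), p ** n) == 1
print("all congruences verified")
```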
Parameterizing divots in contour integrals
$$\int_0^\infty\frac{\sin x}{x} dx = \int_0^\infty\frac{\sin 2u}{2u} d(2u) =\int_0^\infty\frac{\sin 2u}{u} du$$ Applying integration by parts, $$\int_{0}^\infty \frac{\sin^2x}{x^2}\mathrm{d}x=\frac{\sin^2x}{x(-1)}\biggr|_{0}^{\infty}+\int_{0}^{\infty}\frac{2 \sin x \cos x }{x}\mathrm{d}x =\int_{0}^{\infty}\frac{\sin(2x)}{x}\mathrm{d}x$$ and $\int_{0}^{\infty}\dfrac {\sin x} x dx =\dfrac{\pi}{2}$. To calculate $\int_{0}^{\infty}\frac {\sin x} x dx$, follow this post; it already has 26 answers. $$\int_{-\infty}^\infty \frac{\sin^2x}{x^2}\mathrm{d}x=2\int_{0}^\infty \frac{\sin^2x}{x^2}\mathrm{d}x=\pi$$
Does the word 'root' in the context of 'function roots' and 'nth roots' have any relation?
Any formula that asks you to find the $k$th root of a number, such as $ \sqrt[k]{a} $, can be transformed into an equivalent polynomial root problem $ f(x) = x^k - a = 0$. So finding the $k$th root is a special case of finding the root of a general polynomial.
Does the Lebesgue's criterion for Riemann-integrability also hold for Riemann-Stieltjes integral?
Chances are that we are using the definition of R-S integral as found in Rudin's Principles of Mathematical Analysis. That is, the upper R-S sum is $U(P,f,\alpha)=\sum_{i=1}^nM_i (\alpha(x_i)-\alpha(x_{i-1}))$ where $M_i=\sup\{f(x): x\in [x_{i-1},x_i]\}$. The upper integral is the infimum of all upper sums. The lower integral is defined similarly, and when both are equal, we declare the function integrable wrt $\alpha$. Confession: I hate the above definition. It leads to ridiculous hair-splitting: the functions $$\alpha_1(t)=\begin{cases} 1,\qquad t>0 \\ 0,\qquad t\le 0\end{cases}$$ $$\alpha_2(t)=\begin{cases} 1,\qquad t\ge 0 \\ 0,\qquad t< 0\end{cases}$$ $$\alpha_3(t)=\frac{1}{2}(\alpha_1+\alpha_2)$$ are all different ways to assign unit mass to the point at $0$. The first puts all mass on the "right half of the point", the second puts it all on the "left half", and the third gives $1/2$ to each half of the point. So, a function $g$ is integrable with respect to $\alpha_1$ iff $g(0)=g(0+)$ - it does not have to be continuous from the left at $0$. Similarly, it is integrable wrt $\alpha_2$ iff $g(0)=g(0-)$. In both of these examples the set of discontinuities may well be all of $\mathbb R$, yet we have integrability as long as one-sided continuity at $0$ holds. Thus, integrability cannot be expressed in terms of the set of discontinuity $D$: you have to look at the set of right discontinuities $D_+$ and left discontinuities $D_-$. And it's not about measure in the usual sense, because we are "measuring" half-points. Honestly, I don't know why people do this to themselves instead of assuming one-sided continuity of $\alpha$ and defining $M_i$ in terms of half-open intervals.
Probability, lottery. Once a week for 30 years, probability of winning at least once?
Yes, reasoning by complements makes your life much easier. Probability of not winning this week as you claimed is: $1-\frac{1}{12271512}$. Assuming independence, probability of not winning for 1560 weeks is then just: $\left(1-\frac{1}{12271512}\right)^{1560}$ Hence, the probability of winning at least once is: $1 - \left(1-\frac{1}{12271512}\right)^{1560} \approx 0.00013$ The chances are pretty slim...
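The same computation in exact arithmetic (a quick check):

```python
from fractions import Fraction

p_week = Fraction(1, 12271512)
weeks = 30 * 52                          # 1560 weekly draws
p_at_least_once = 1 - (1 - p_week) ** weeks
print(float(p_at_least_once))            # ~ 0.000127
```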
Decomposition into weights of semisimple Lie algebra
Since $H$ is abelian and consists of semisimple elements, the endomorphisms $ad(h) = [h,\cdot]$ of $L$ are all semisimple (that is, diagonalizable, since you work over the complex numbers) and they all commute. Then it is a standard result that all the $ad(h)$ are simultaneously diagonalizable. Fix some basis such that all $ad(h)$ are diagonal. Then if you look at the $i$-th eigenvalue, you show easily that it is a linear form when considered as a function of $h$. Finally you obtain your decomposition by assembling together the indices that give the same linear form (otherwise said, that give the same eigenvalue for all $ad(h)$).
Find the least positive integer $x$ that satisfies the congruence $90x\equiv 41\pmod {73}$
The numbers are small, so we use a "playing around" approach. Rewrite our congruence as $90x\equiv 114\pmod{73}$, which is equivalent to $15x\equiv 19\pmod{73}$, But $19\equiv 165\pmod{73}$, and from $15x\equiv 165\pmod{73}$ we write down the solution $x\equiv 11\pmod{73}$.
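The same answer drops out mechanically from a modular inverse (a check, not the "playing around" method above):

```python
x = 41 * pow(90, -1, 73) % 73   # pow(90, -1, 73): inverse of 90 mod 73 (Python 3.8+)
print(x)                        # 11
assert 90 * x % 73 == 41
```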
coherent topology, prove normality
I think the result is true even if the $(X_i)_i$ are not nested. Suppose each $X_i$ is a closed subspace of $X$. You've defined $f_0.$ So suppose we have defined $f_n:X_1\cup\cdots\cup X_n\cup A\cup B\to [0,1].$ Note that $(X_1\cup\cdots\cup X_n\cup A\cup B)\cap X_{n+1}$ is closed in $X_{n+1}$ (why?) so we can extend $f_n|_{(X_1\cup\cdots\cup X_n\cup A\cup B)\cap X_{n+1}}$ to a function $\tilde f$ on $X_{n+1}.$ Observe that $X_{n+1}$ and $X_1\cup\cdots\cup X_n\cup A\cup B$ are closed in $X_1\cup\cdots\cup X_{n+1}\cup A\cup B$ so the gluing lemma applied to $f_n$ and $\tilde f$ provides a continuous $f_{n+1}:X_1\cup\cdots\cup X_{n+1}\cup A\cup B\to [0,1]$ and by construction $f_{n+1}$ extends $f_n.$ So the induction proceeds and we obtain a function $f:X\to [0,1]$ such that $f(A)=0$ and $f(B)=1.$ All that remains is to prove that $f$ is continuous, but this follows from the fact that in this topology, a map from $X$ to another topological space is continuous if and only if the restrictions to each $X_i$ are.
what is a basis for $\ell^p$?
No, this is not a basis. Consider $p = 2$, and the sequence $s_n = \frac{1}{n}$. Then, this is in $l^p$, but it is not expressible as a linear combination of your basis, as any finite linear combination of your $x_i$ will eventually have every coordinate be $0$, but every coordinate of $s$ is nonzero.
Show that $\lim \inf x_n$ is an adherence value
Let $a:=\liminf x_{n}$ and $y_{n}:=\inf_{k\geq n}x_{k}$ so that $y_{1}\leq y_{2}\leq\cdots\leq a$ and $a=\lim_{n\to\infty}y_{n}$. If $a$ is not an adherence point of sequence $\left(x_{n}\right)_{n}$ then some open set $U$ must exist with $a\in U$ and such that $\left\{ n\mid x_{n}\in U\right\} $ is a finite set. Then some integer $m$ must exist with $x_{n}\notin U$ for every $n\geq m$ and consequently $y_{n}\notin U$ for every $n\geq m$. This however contradicts that $\lim_{n\to\infty}y_{n}=a$. So we conclude that $a$ must be an adherence point of sequence $\left(x_{n}\right)_{n}$.
Solving the indefinite integral $\int \frac{1}{x^2+x+1} dx$
$$\left(x+\frac{1}{2}\right)^2+\frac{3}{4}=\left(\frac{2x+1}{2}\right)^2+\left(\frac{\sqrt3}{2}\right)^2=\frac{3}{4}\left(\left(\frac{2x+1}{\sqrt3}\right)^2+1\right)=\frac{3}{4}(u^2+1).$$ Since $\frac{2x+1}{\sqrt3}=u$, we have $$\left(\frac{2x+1}{\sqrt3}\right)'dx=du$$ or $$\frac{2}{\sqrt3}dx=du$$ or $$dx=\frac{\sqrt3}{2}du.$$ Thus, $$\int\frac{1}{x^2+x+1}dx=\int\frac{1}{\frac{3}{4}(u^2+1)}\cdot\frac{\sqrt3}{2}du=\frac{2}{\sqrt3}\int\frac{1}{1+u^2}du=\frac{2}{\sqrt3}\arctan\left(\frac{2x+1}{\sqrt3}\right)+C.$$
Find area of shaded region - is the information insufficient?
Consider the following illustration: Write the top angle (the smaller one) as $\theta$ and the side of the square as $s$. Then use the fact that the projections of the red and green lines add up to the side lengths of the rectangle. In the image, the green line is for the first equation and the red line for the second. $$(3s+s \tan(\theta)) \cos(\theta) = 11 \tag{1}$$ $$(3s+2s \tan(\theta)) \cos(\theta) = 13 \tag{2}$$ From here we get $s \sin(\theta) = 2$ and $s \cos(\theta) =3 $. So $\tan(\theta) = \frac{2}{3}$. Then $s = \sqrt{13}$. The required area is $13 \cdot 11 - 6s^2 = 13\cdot 5 = 65$ Note: The red line does not touch the upper side at the corner. It is just meant to show that we have to take the projection of this line along the vertical side of length 13, and that the red line makes angle $\theta$ with the vertical side.
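A quick numerical check of the two projection equations and the resulting area (values taken from the derivation above):

```python
from math import atan2, cos, tan, sqrt

theta = atan2(2, 3)   # tan(theta) = 2/3
s = sqrt(13)          # side of the square
print((3 * s + s * tan(theta)) * cos(theta))       # 11.0, equation (1)
print((3 * s + 2 * s * tan(theta)) * cos(theta))   # 13.0, equation (2)
print(13 * 11 - 6 * s**2)                          # shaded area: 65.0
```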
Show that $m(\angle ABM)=30^{\circ}$.
Consider point $X$ on $MC$ such that $\angle XAC=20^\circ$. Easy angle chasing reveals that triangles $AXC$ and $AMX$ are isosceles with $AX=XC$ and $AM=MX$. Also observe that $\angle CXA =2\angle CBA$ which along with $AX=XC$ implies that $X$ is circumcenter of $ABC$. Hence $AXB$ is isosceles and $\angle AXB=2\angle ACB=60^\circ$. This shows that $AXB$ is equilateral. Using $AM=MX$ we see that $MB$ bisects angle $XBA$ which means that $\angle MBA=30^\circ$.
If $2 x^4 + x^3 - 11 x^2 + x + 2 = 0$, then find the value of $x + \frac{1}{x}$?
You don't need to factor the polynomial to find the value of $x + \frac{1}{x}$. Note that the given condition implies $\left( 2x^2 + \frac{2}{x^2} \right) + \left( x + \frac{1}{x} \right) - 11 = 0$ by dividing by $x^2$ and collecting symmetric terms. Now observe that $\left( x + \frac{1}{x} \right)^2 = x^2 + \frac{1}{x^2} + 2$... But if you want to factor it by hand anyway, your best friend is the rational root test.
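A numerical confirmation via the roots of the quartic (this reveals the final answer, so treat it as a check):

```python
import numpy as np

roots = np.roots([2, 1, -11, 1, 2])
print(sorted(round((r + 1 / r).real, 6) for r in roots))
# [-3.0, -3.0, 2.5, 2.5]: each root gives one solution of 2t^2 + t - 15 = 0.
```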
For $a_n,b_n\uparrow$ and $\sum \frac{1}{a_n}$, $\sum \frac{1}{b_n}$ divergent is the series $\sum \frac{1}{a_n+b_n}$ also divergent?
For $r \geqslant 0$, let $k_r = 2^{2^r}$. Let $$\begin{align} a_n &= k_{2r+2} - \frac{1}{n},\text{ for } k_{2r} \leqslant n < k_{2r+2};\\ b_n &= k_{2r+1} - \frac{1}{n},\text{ for } k_{2r-1} \leqslant n < k_{2r+1}; \end{align}$$ for $n \geqslant 4$, and choose $a_n, b_n$ fairly arbitrarily for $n < 4$. Then $$\sum_{n=k_{2r}}^{k_{2r+2}-1} \frac{1}{a_n} > \frac{k_{2r+2}-k_{2r}}{k_{2r+2}} > \frac{1}{2},$$ so $\sum \frac{1}{a_n}$ diverges. Analogously, $\sum \frac{1}{b_n}$ diverges. But, we have $a_n > b_n$ for $k_{2r} \leqslant n < k_{2r+1}$, and $b_n > a_n$ for $k_{2r+1} \leqslant n < k_{2r+2}$, so $$\sum_{n=k_{2r}}^{k_{2r+2}-1} \frac{1}{a_n + b_n} < \frac{k_{2r+1}-k_{2r}}{k_{2r+2}} + \frac{k_{2r+2}-k_{2r+1}}{k_{2r+3}} < \frac{k_{2r+1}}{k_{2r+2}} + \frac{k_{2r+2}}{k_{2r+3}},$$ and $$\frac{k_r}{k_{r+1}} = 2^{2^r-2^{r+1}} = 2^{-2^r} = \frac{1}{k_r},$$ so $$\sum_{r=1}^\infty \frac{1}{k_r} < \infty$$ and $$\sum_{n=1}^\infty \frac{1}{a_n+b_n}$$ converges.
Question About the Integration of rational function
As @GEdgar pointed out, there are some minor issues (uppercase $X$ terms and the use of two different variable names for the last term). Change the last term to something like $Ex + F$. After expansion and solving, you should get: $\displaystyle A = -3$ $\displaystyle B = \frac{1983}{2000}$ $\displaystyle C = \frac{427}{200}$ $\displaystyle D = \frac{33}{20}$ $\displaystyle E = \frac{49}{250}$ $\displaystyle F = \frac{137}{500}$
Extracting a subsequence from a sequence of $\mathcal{L}^1$ functions
Here's the sketch I would follow. Consider the measures $\nu_n$ defined by $\nu_n(E) = \int_E g_n\,dm$. This is a bounded sequence of positive measures, so by Helly's selection theorem (or Banach-Alaoglu or something analogous) we can extract a subsequence $\nu_{n_k}$ converging vaguely to some measure $\nu$, i.e. $\int h g_{n_k}\,dm \to \int h\,d\nu$ for every $h \in C_c((0,1))$. Use the uniform integrability to show that $\nu$ is absolutely continuous with respect to $m$ (the characterization of uniform integrability in terms of uniform absolute continuity would be helpful) and hence by Radon-Nikodym is of the form $\nu(E) = \int_E g \,dm$ for some $g$. For a compact set $K$, approximate $1_K$ by continuous compactly supported functions to conclude that $\int_K g_{n_k}\,dm \to \int_K g\,dm$. Consider the set $\mathcal{L}$ of all measurable sets $E$ for which $\int_E g_{n_k}\,dm \to \int_E g\,dm$. Use a monotone class argument to show that $\mathcal{L}$ consists of all measurable sets. If you use this for a homework problem, please credit me (and give the URL of this answer) in your submission.
Stuck at a step in reduction formula integration
Hint: substitute $u=\tan{x}$. Then, $$\int\tan^a {x}\sec^2{x}\,dx=\int u^adu=\frac{1}{a+1}u^{a+1}+constant=\frac{1}{a+1}(\tan{x})^{a+1}+constant$$
How does the 1-Lipschitz and uniform boundedness matter in establishing this as a metric?
To show that $d$ is a metric you will have to show that $\int f d\mu=\int f d\nu$ for every 1-Lipschitz function $f$ implies $\mu =\nu$. (Since both sides are linear in $f$, the value of the Lipschitz constant doesn't matter). A very detailed argument would be lengthy so I will give a sketch. Let $C$ be a compact set and $D_n=\{x :d(x,C) \geq 1/n\}$. Let $f_n(x)=\frac {d(x,D_n)} {d(x,C)+d(x,D_n)}$. Verify that this is indeed a Lipschitz function. (This is a bit lengthy but some simple algebraic manipulation will give this). Show that $f_n \to I_C$ pointwise and apply DCT to see that $\mu (C)=\nu (C)$. Since $C$ is arbitrary we get $\mu= \nu$
Find the minimum sum of distances from a point on the circle with equation $x^2+y^2=16$ to the points $A(-2,0)$ and $B(2,0)$.
You are supposed to minimize the sum of $\overline{PA}$ and $\overline{PB}$ where $P$ is your point on the circle. You have to set up the system: Minimize $\sqrt{(x+2)^2+y^2}+\sqrt{(x-2)^2+y^2}$ under the condition $x^2+y^2=16$ Hint: Use the circle condition to express $x$ by $y$ (or vice versa) in order to eliminate one variable.
Relationship between maximum and minimum of a function
Let's assume that $\max(f)=m$. Then there exists some $x_0$ such that $f(x_0)=m$ and $f(x)\leq m$ for each $x$. This implies $-f(x)\geq -m$ and $-f(x_0)=-m$, so $\min(-f)=-m$.
What are the specifics and the possible outputs of Pollard's Rho algorithm?
Pollard's Rho algorithm is essentially a smart way to guess possible factors of a number - the $x$ values are generated with a kind of pseudo-random number generator, and by the construction, it is fairly likely to find a factor. There is no hard-and-fast guarantee here, since it essentially relies on a probabilistic argument, but it works well empirically. Your choice of $f$ is somewhat uncommon and may be related to the problem you are experiencing with finding only trivial factorizations. I have good experience with simply using $f(x) = (x^2 + 1) \mod n$, and retrying with $(x^2+2) \mod n$, $(x^2 + 3) \mod n$ etc. if this fails.
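For concreteness, here is a minimal sketch along these lines (Floyd cycle detection, retrying $x^2+c$ with increasing $c$; the variant in the question may differ in details):

```python
from math import gcd
import random

def pollard_rho(n):
    """Return a nontrivial factor of the composite number n (probabilistic)."""
    if n % 2 == 0:
        return 2
    for c in range(1, 20):                 # retry with x^2 + c for c = 1, 2, ...
        x = y = random.randrange(2, n)
        d = 1
        while d == 1:
            x = (x * x + c) % n            # tortoise: one step
            y = (y * y + c) % n
            y = (y * y + c) % n            # hare: two steps
            d = gcd(abs(x - y), n)
        if d != n:                         # d == n means this c failed; retry
            return d
    return None

print(pollard_rho(8051))    # 8051 = 83 * 97
print(pollard_rho(10403))   # 10403 = 101 * 103
```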
Counterexample in topology general
Unless some mistake eludes me below, it should not hold. Consider the subset $D\subset\mathbb{C}$ defined by $$D:=\lbrace (1-2^{-n})e^{i\pi k2^{-n}}\mid n\in\mathbb{N}^*,0\leq k < 4^n\rbrace$$ and $X\subset \mathbb{C}$, the union of $D$ with the unit circle: $$X:=D\cup \mathbb{T} $$ You can imagine $D$ as a discrete set of complex numbers that approaches the unit circle, as shown below: Give them both the relative topology induced by $\mathbb{C}$. Then $X$ is a closed subset, so it is a complete metric space, and $D$ is a discrete subset, open and dense in $X$. All the conditions above hold but $X$ is uncountable, as it contains the unit circle. Even more, $X\setminus D=\mathbb{T}$, which is perfect.
Prove that an n-vertex tree has exactly $3\cdot2^{n-1}$ proper $3$-colorings.
There are no triangles (or cycles of any length) in a tree. Instead, try induction on the number of vertices by reducing the $n$-vertex tree to an $(n-1)$-vertex tree by removing a leaf.
Differential of a smooth function on a manifold
This is not an exact answer, but it may be better to compute the differential in a coordinate system which is better suited to your function $f$. Since $f$ is linear, you probably want to choose a (near) linear coordinate system, for instance, $$ x^{-1}: (u, v) \mapsto (u, \pm \sqrt{1 - u^2 - v^2}, v) $$ Then $$ (x \circ f \circ x^{-1})(u,v) = x\left(\frac{u + v}{\sqrt{2}}, \sqrt{1 - u^2 - v^2}, \frac{u - v}{\sqrt{2}} \right) = \left( \frac{u + v}{\sqrt{2}}, \frac{u - v}{\sqrt{2}} \right) $$ And at least here you can find the intrinsic properties you are more likely interested in, except on the set $\{ (x,y,z) \in S^2 : y = 0 \}$. For instance, that the operator has full rank, etc.
Continuity of determinant of vector bundle morphism
Yes, it necessarily is continuous! For such an $f:(p,v)\mapsto(q,w)$, define $g:B \rightarrow B$ as $p\mapsto q$ and $J:B\rightarrow L(E,E)$ as $p\mapsto I_p$; then we can write $$f=(g,J\circ \pi) $$ $f$ is a homeomorphism and hence continuous, which is equivalent to both $g$ and $J\circ \pi$ being continuous. I claim that $J$ is continuous, which follows from the fact that $\pi$ is an open map. Since $I=\text{det}\circ J$, it suffices to show that $\text{det}$ is continuous! For this we will need some machinery, namely the topology of $L(E,E)$; the idea of the proof is the following. We obviously consider $E$ to be finite dimensional, otherwise $\det$ does not make sense; we use the definition of $\det$ as $$Te^1\wedge \ldots \wedge Te^n=\text{det}(T)\;e^1\wedge \ldots \wedge e^n $$ for any given basis $e^1,\ldots, e^n$ of $E$, thus $\text{det}(J(p))$ is continuous if the map $$ e^1\wedge \ldots \wedge e^n\mapsto J(p)e^1\wedge \ldots \wedge J(p)e^n$$ is continuous. Now, the map $(v_1,...,v_n)\mapsto v_1\wedge \ldots \wedge v_n$ is multilinear on a finite dimensional vector space and thus continuous (Proof?). Now we can write the desired map as a composition of continuous maps. This completes the proof.
In factoring consecutive numbers, how soon do we expect to see the smallest prime not yet seen?
All primes up to $k+1$ divide one of $m,m+1,\dots,m+k$, so the primes that might divide $m+k+1$ are the primes greater than $k+1$. For each such prime $p$, the probability that it divides none of $m,\dots,m+k$ is presumably $1-(k+1)/p$, while the probability that it divides $m+k+1$ is presumably $1/p$. (I say "presumably" because I don't know how $m$ and $k$ are chosen; if $k$ is fixed then these probabilities are accurate.) In particular, the probability that $p$ is the smallest prime not dividing $m(m+1)\cdots(m+k)$ and that it does divide $m+k+1$ is $$ \bigg( \prod_{k+1<q<p} \frac{k+1}q \bigg) \frac1p, $$ where the product is over primes $q$ in the given range (each factor is the probability that $q$ does divide $m(m+1)\cdots(m+k)$). Therefore the probability that the smallest prime not dividing $m(m+1)\cdots(m+k)$ does divide $m+k+1$ should be $$ \sum_{p>k+1} \bigg( \prod_{k+1<q<p} \frac{k+1}q \bigg) \frac1p. $$ Numerical calculations of this expression up to $k=8000$ suggest that this probability decays like, maybe, $1/(\sqrt k(\log k)^2)$? Very hard to guess exactly; I'm sure the asymptotics could be worked out with enough pain.
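The sum is straightforward to evaluate numerically (a sketch of the experiment described above, using sympy's prime iterator):

```python
from sympy import primerange

def prob(k, cutoff=10**7):
    # sum over primes p > k+1 of (prod over primes k+1 < q < p of (k+1)/q) / p
    total, partial = 0.0, 1.0
    for p in primerange(k + 2, cutoff):
        total += partial / p
        partial *= (k + 1) / p
        if partial < 1e-18:    # remaining terms are negligible
            break
    return total

for k in [10, 100, 1000, 8000]:
    print(k, prob(k))
```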
finding the potential v(x,y)
Given: $$x'= f(x,y) = 3x^2-1-e^{2y}, \qquad y'= g(x,y) = -2xe^{2y}$$ 1) Show that $\dfrac {\partial{f}}{\partial{y}}=\dfrac {\partial{g}}{\partial{x}}$ $\dfrac{\partial f}{\partial y} = -2 e^{2y}$ $\dfrac{\partial g}{\partial x} = -2 e^{2y}$ 2) Find the potential function $V(x,y)$ $x' = 3x^2 - 1 - e^{2y} = -V_x$, so $$ V = - \int (3x^2 - 1 - e^{2y})~ dx + F(y) = -x^3 + x + xe^{2y} + F(y)$$ $y' = -2xe^{2y} = -V_y$, so $$V = - \int (-2xe^{2y})~dy + G(x) = xe^{2y} + G(x)$$ From the needed conditions for a gradient system, we can take $F(y) = 0$ and $G(x) = -x^3 + x$, hence $$V(x,y) = -x^3 + x + xe^{2y}$$ 3) Show trajectories always cross the equipotentials at right angles. Please add what you have done here. If they only want you to show it, the following phase portrait may suffice. However, if they want a proof, that is harder; the key observation is that the velocity $(x',y')=-\nabla V$ is parallel to $\nabla V$, which is normal to the level curves of $V$.
How to use Euler's primality test
Euler's criterion says that if $p$ is a prime, then $a^{(p-1)/2} \equiv [\frac ap] \pmod p$, where $[\frac ap]$ is the Jacobi symbol. It's also known that if $p$ is composite, then there exists an $a$ such that $a^{(p-1)/2} \not\equiv [\frac ap] \pmod p$. So you would probably be implementing two computations: the calculation of $a^{(p-1)/2} \pmod p$, and the calculation of $[\frac ap]$ (using quadratic reciprocity). You would have to test more than one $a$, because composite numbers can sometimes accidentally satisfy Euler's criterion for some (but not all) $a$.
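A compact sketch of the resulting test (this is the Solovay–Strassen test), using sympy's Jacobi symbol:

```python
import random
from sympy.ntheory import jacobi_symbol

def euler_test(n, rounds=20):
    """Probabilistic primality test based on Euler's criterion."""
    if n < 3 or n % 2 == 0:
        return n == 2
    for _ in range(rounds):
        a = random.randrange(2, n)
        j = jacobi_symbol(a, n) % n   # map -1 to n-1 for the comparison
        if j == 0 or pow(a, (n - 1) // 2, n) != j:
            return False              # a witnesses that n is composite
    return True                       # probably prime

print(euler_test(561))        # False with overwhelming probability (Carmichael number)
print(euler_test(10**9 + 7))  # True
```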
What is this integration "method" name?
This is called: Integration by Substitution. It is a consequence of the Chain Rule for the derivative of the composition of functions.
The remainders of the terms of a recursive sequence
Hint. Note that by Euler's theorem, $3^{20}\equiv 1 \pmod{100}$. Therefore $a_2=3^1=3$, $a_3=3^3=27$, and $$a_4=3^{a_3}=3^{27}=3^{20}\cdot 3^7\equiv 3^7=2187\equiv 87 \pmod{100}.$$ Now $a_4=100q+87$ for some $q\in\mathbb{Z}$ and $$a_5=3^{a_4}=3^{100q+87}\equiv (3^{20})^{50q}\cdot (3^{20})^4\cdot 3^7\equiv 3^7\equiv 87 \pmod{100}.$$ Can you take it from here?
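One can confirm the stabilization at $87$ in a few lines, tracking $a_n$ both modulo $100$ and modulo $20$ (the latter controls the exponent, since $3^{20}\equiv1\pmod{100}$):

```python
# a_1 = 1, a_{n+1} = 3^{a_n}.  Reducing the exponent mod 20 is legitimate for
# both moduli: ord(3 mod 100) divides 20, and ord(3 mod 20) = 4 divides 20.
a_mod20 = 1
for n in range(2, 8):
    a_mod100 = pow(3, a_mod20, 100)
    a_mod20 = pow(3, a_mod20, 20)
    print(n, a_mod100)
# prints 3, 27, 87, 87, 87, 87: the residue stabilizes at 87.
```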
Determine the value of k for which a matrix system is consistent and the values for which it is inconsistent
Let $K$ be a field and $k\in K$. Then we can write the system of linear equations in matrix form $Av=b$ with $$ A=\begin{pmatrix} 3 & 2 & 5 \\ 3 & -2 & 0 \\ 6 & 4 & -10 \end{pmatrix},\quad v=\begin{pmatrix} x \\ y \\ z \end{pmatrix},\quad b=\begin{pmatrix} 10 \\ 7 \\ k \end{pmatrix} $$ The system has a unique solution iff $\det(A)\neq 0$ in $K$. In this case the solution is $v=A^{-1}b$. Since $\det(A)=240$ is nonzero for all fields of characteristic not equal to $2,3,5$, we obtain a unique solution $$ v=A^{-1}b=\begin{pmatrix} \frac{48+k}{24} \\ \frac{k-8}{16} \\ \frac{20-k}{20} \end{pmatrix} $$ for any given $k\in K$. The system may be inconsistent for characteristic $2,3,5$. For example, the last equation for $k=1$ and $char(K)=2$ is inconsistent: $0=1$.
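A symbolic check with sympy:

```python
from sympy import Matrix, symbols

k = symbols('k')
A = Matrix([[3, 2, 5], [3, -2, 0], [6, 4, -10]])
b = Matrix([10, 7, k])
print(A.det())        # 240 = 2^4 * 3 * 5
print(A.solve(b).T)   # [(48 + k)/24, (k - 8)/16, (20 - k)/20], up to simplification
```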
Question 5, RMO 2003, issue with ratios
All equalities directly follow from the intercept theorem (take a look at https://en.wikipedia.org/wiki/Intercept_theorem) . You don't need to search for similar triangles. Equation $\dfrac{BD}{DC} = \dfrac{AE}{EC}$: $C$ is the intersection of $BC$ and $AC$. $AB$ is parallel to $ED$. So you can apply the intercept theorem. Equation $\dfrac{AE}{EC} = \dfrac{AF}{FB}$: $A$ is the intersection of $AC$ and $AB$. $FE$ is parallel to $BC$. So you can apply the intercept theorem. Equation $\dfrac{AF}{FB} = \dfrac{DC}{BD}$: $B$ is the intersection of $AB$ and $BC$. $FD$ is parallel to $AC$. So you can apply the intercept theorem.
Connectivity of a Hamiltonian path
Suppose $G$ has a Hamiltonian path $H$. Then, $\omega(G-S) \leq \omega(H - S)$. Can you bound the size of $\omega(H-S)$?
How to evaluate $\int x \ln^n (x) dx$?
Let $u=\ln ^n(x)$; then we have $$ I_n(x)=\int x \ln^n(x)\,dx = \frac{1}{2}x^2\ln^n(x) -\frac{1}{2}\int x^2\cdot n \ln^{n-1}(x)\frac{1}{x} \,dx $$ $$ I_n(x) = \frac{1}{2}x^2\ln^n(x) -\frac{n}{2} I_{n-1}(x) $$If $n\in \mathbb{N}^+$, this can be continued until you get to $\int x\,dx$.
$X = (X_1, X_2)$ is it not a multivariate random variable?
The answer to your question is a matter of interpretation. If you formalize the sample spaces abstractly enough, then the sample spaces for $X_1$ and $X_2$ can be the same. After all, both weight and height are continuous quantities, and so can be modeled as real numbers. If the real numbers aren't a general enough setting, you can go into more abstract spaces like "Polish spaces". The real answer is that you typically don't want to lose the "typing" information (what kind of number your variable represents). That is certainly a worthy goal. And to do that, probabilists and statisticians try to forget about the sample space as quickly as possible. In that sense, the definition in your book is too specific, and only corresponds to the one real people use "in principle". The point about probability measures is similar enough. Your primary focus "should" be the random variables and their properties, not the sample spaces they are defined on.
Fibonaccith fibonacci number
Suppose $n$ is good. Since $\gcd(f_m,f_n)=f_{\gcd(m,n)}$, we have $$\gcd(f_{f_{f_n}},f_n)=f_{\gcd(f_{f_n},n)}=f_n,$$ and $$\gcd(f_{f_n},f_n)=f_{\gcd(f_n,n)}<f_n.$$ Therefore, $f_n$ is good. So if there is one good number, there must exist infinitely many good numbers.
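The gcd identity doing all the work here is easy to spot-check numerically:

```python
from math import gcd
from functools import lru_cache

@lru_cache(maxsize=None)
def fib(n):
    return n if n < 2 else fib(n - 1) + fib(n - 2)

# gcd(f_m, f_n) = f_{gcd(m, n)} with f_1 = f_2 = 1
for m in range(1, 30):
    for n in range(1, 30):
        assert gcd(fib(m), fib(n)) == fib(gcd(m, n))
print("gcd(f_m, f_n) = f_gcd(m,n) verified for m, n < 30")
```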
What is wrong with this CDF calculation?
Suppose the pdfs of $\hat X$ and $\hat Y$ are zero outside $(0, 1)$. The convolution of the pdfs can be split into two integrals: $$f_{\hat X + \hat Y}(x) = (f_{\hat X} * f_{\hat Y})(x) = \\ \int_0^x f_{\hat X}(\tau) f_{\hat Y}(x - \tau) d\tau \,[0 < x < 1] + \\ \int_{x - 1}^1 f_{\hat X}(\tau) f_{\hat Y}(x - \tau) d\tau \,[1 < x < 2].$$ If $f_{\hat X}(x) = f_{\hat Y}(x) = (1/\sqrt x - 1)[0 < x < 1]$, $$f_{\hat X + \hat Y}(x) = (x - 4 \sqrt x + \pi) [0 < x < 1] + \\ \left( 2 \arcsin \left( \frac 2 x - 1 \right) + 4 \sqrt {x - 1} - x - 2 \right) [1 < x < 2], \\ f_{X + Y}(x) = \frac 1 {a^2} f_{\hat X + \hat Y} \left( \frac x {a^2} \right).$$
How to decompose a complex number into a sum of two unitary modulus complex numbers?
This document contains a closed-form solution for a slightly more general version of this problem: find $\theta_1,\theta_2$ such that $$\alpha_1\exp(i\theta_1) + \alpha_2\exp(i(\theta_2+\theta_1))=x+i y$$ with $\alpha_1,\alpha_2>0$ given, and $\alpha_1=\alpha_2=1$ in the context of this question. The solution is $$\theta_2=\arccos\left(\frac{x^2+y^2-\alpha_1^2-\alpha_2^2}{2\alpha_1\alpha_2}\right)$$ $$\theta_1=\arctan(y/x) - \arctan\left(\frac{\alpha_2\sin(\theta_2)}{\alpha_1+\alpha_2\cos(\theta_2)}\right)$$ where $\arctan(y/x)$ is to be taken in the correct quadrant (i.e. as $\operatorname{atan2}(y,x)$).
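A quick numerical check of the formulas with $\alpha_1=\alpha_2=1$ (the target point is an arbitrary choice of mine):

```python
from math import acos, atan, atan2, sin, cos

def decompose(x, y, a1=1.0, a2=1.0):
    """Angles with a1*e^(i*t1) + a2*e^(i*(t1+t2)) = x + iy, for |x+iy| <= a1+a2."""
    t2 = acos((x * x + y * y - a1 * a1 - a2 * a2) / (2 * a1 * a2))
    t1 = atan2(y, x) - atan(a2 * sin(t2) / (a1 + a2 * cos(t2)))
    return t1, t2

t1, t2 = decompose(0.3, 1.1)
z = complex(cos(t1), sin(t1)) + complex(cos(t1 + t2), sin(t1 + t2))
print(z)   # ~ (0.3 + 1.1j)
```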
Relation among the diagonals of a regular heptagon
My first thought is to place the vertices in the complex plane in the standard way: let $\zeta_7 = e^{2 \pi i/7}$ be a primitive $7^{\rm th}$ root of unity, and let $A = \zeta_7^0 = 1$, $B = \zeta_7$, $C = \zeta_7^2$, etc. Then the claim to be proven is equivalent to $$\frac{1}{|\zeta_7^2 - 1|} + \frac{1}{|\zeta_7^3 - 1|} = \frac{1}{|\zeta_7 - 1|}.$$ Then using the fact that $|z|^2 = z\bar z$ for any complex number $z$, $\bar \zeta_7 = \zeta_7^{-1}$, $\zeta_7^{7+k} = \zeta_7^k$, and $\sum_{k=0}^6 \zeta_7^k = 0$, you should be able to verify this identity.
Splitting field for $x^6-4$
You did almost all the work yourself already. If $\alpha$ is a cube root of $5$ in $k$, then $3 \alpha$ is a cube root of $2$. So $k$ is already a splitting field of $x^6 - 4$.
How to find the value of this trigonometric expression
$$ 96\sin(\pi/48)\cos(\pi/48) = 48 \times \underbrace{2\sin\left(\frac \pi {48} \right) \cos\left( \frac \pi{48}\right)} = 48 \underbrace{\sin\left( 2\times\frac \pi{48} \right)} $$ by the usual double-angle formula, and then $$ = 48\sin \left( \frac \pi {24}\right). $$ Next, do the same thing with $24$ that we just did with $48,$ then with $12,$ then with $6.$
Evaluating: $ \int\sqrt{\tanh(\ln(\sqrt{x}))} dx$ ; $ \int \ln\left(\sqrt{\tanh(\ln(\sqrt{x}))}\right) dx$
$$\sqrt{\tanh(\ln(\sqrt{x}))}=\sqrt{\frac{e^{2\ln(\sqrt x)}-1}{e^{2\ln(\sqrt x)}+1}}=\sqrt{\frac{x-1}{x+1}}$$ Now: $$I_1=\int\sqrt{\frac{x-1}{x+1}}dx\\\stackrel{x=\cosh 2t}=4\int\sinh^2t\,dt=\sinh(2t)-2t+c\\=\sqrt{x^2-1}-\cosh^{-1}x+c$$ $$I_2\stackrel{x=\cosh 2t}=4\int\sinh(t)\ln(\tanh (t))\,dt\\=4\cosh t\ln(\tanh t)-4\int {\rm csch}\, t\,dt \\=4\cosh t\ln(\tanh t)-4\ln|{\rm csch}\, t-\coth t|+c\\= \sqrt2 \sqrt{x+1} \ln\frac{x-1}{x+1}-4 \ln\left(-\tanh\left(\frac 14 \cosh^{-1}(x)\right)\right)+c$$
Definition of a time-global solution for the heat PDE
First notice that for the space $X$ we have the Poincaré inequality. Multiplying our equation by $u$ and integrating by parts yields $$\frac{d}{dt}\left(\frac{1}{2}\|u(t)\|_{L^2(0,1)}^2\right)+\|u_x(t)\|_{L^2(0,1)}^2 = 0,$$ and the Poincaré inequality gives us $$\frac{d}{dt}\left(\frac{1}{2}\|u(t)\|_{L^2(0,1)}^2\right)+c\frac{1}{2}\|u(t)\|_{L^2(0,1)}^2 \le 0,$$ then Gronwall's lemma yields $$\|u(t)\|_{L^2(0,1)}^2\le e^{-ct}\|u(0)\|_{L^2(0,1)}^2,$$ for all $t\in(0,\infty)$. Note that the Fourier expansion is smooth, so we can differentiate it twice with respect to $x$; let $v^i=\frac{\partial^i}{\partial x^i}(u)$ for $i=1,2$, then $v^i$ solves $$v^i_t-\Delta v^i=0,$$ and $v^2(0,t)=v^2(1,t)=0,$ with $v^2(x,0)=\frac{d^2}{dx^2}u_0(x).$ We can repeat the above set up to deduce that $$\|v^2(t)\|_{L^2(0,1)}^2\le Ce^{-ct}\|v^2(0)\|_{L^2(0,1)}^2.$$ Now notice that, integrating by parts, \begin{align} \|v^1(t)\|_{L^2(0,1)}^2 &=\|u_x\|_{L^2(0,1)}^2=-\int_0^1uu_t \\ &\le\frac{1}{2}\left(\|u(t)\|_{L^2(0,1)}^2+\|u_t(t)\|_{L^2(0,1)}^2\right) \quad \text{(by the AM-GM inequality)} \\ &=\frac{1}{2}\left(\|u(t)\|_{L^2(0,1)}^2+\|u_{xx}(t)\|_{L^2(0,1)}^2\right) \\ &=\frac{1}{2}\left(\|u(t)\|_{L^2(0,1)}^2+\|v^2(t)\|_{L^2(0,1)}^2\right). \end{align} Now we have that \begin{align} \|u(t)\|_{H^2(0,1)}^2&=\|u(t)\|_{L^2(0,1)}^2+\|v^1\|_{L^2(0,1)}^2+\|v^2(t)\|_{L^2(0,1)}^2 \\ &\le C(\|u(t)\|_{L^2(0,1)}^2+\|v^2(t)\|_{L^2(0,1)}^2) \\ &\le Ce^{-ct}\|u_0\|_{H^2(0,1)}^2 \\ &\le Ce^{-ct}\|u_0\|_{C^2[0,1]}^2. \end{align} The Sobolev embedding gives us $$\|u(t)\|_{C[0,1]}^2\le C\|u(t)\|_{H^2(0,1)}^2\le Ce^{-ct}\|u_0\|_{C^2[0,1]}^2, $$ which holds for all $t\in(0,\infty)$, and so we obtain $$\|u\|_{C(0,1;X)}\le C\|u_0\|_{C^2[0,1]}.$$ To see that the solution is unique, let $u_1$ and $u_2$ solve the equation, and set $W=u_1-u_2$; using the first energy estimate on $W$, we find that $W=0$.
Find point in line which has the same distance from given 2 points
Hint: The points equidistant from the two points $A=(-1,-2)$ and $B=(1,4)$ are points of the perpendicular bisector of the segment $AB$. And there is one common point of this bisector with the given line.
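Since the given line isn't quoted here, the following sympy sketch uses a placeholder line $y = 2x+1$; substitute the actual one:

```python
import sympy as sp

t = sp.symbols('t', real=True)
A, B = sp.Matrix([-1, -2]), sp.Matrix([1, 4])

# Hypothetical given line, parametrized as (t, 2*t + 1); replace with the real one.
P = sp.Matrix([t, 2*t + 1])

# Equidistance |PA|^2 = |PB|^2 is linear in t (the perpendicular bisector condition).
sol = sp.solve(sp.Eq((P - A).dot(P - A), (P - B).dot(P - B)), t)
print(sol, P.subs(t, sol[0]))  # [0], the point (0, 1)
```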
A function twice differentiable exercise
You can take $$g(x,y)= \begin{pmatrix} 1 & 1 \\ 1 & a\end{pmatrix}\begin{pmatrix} x \\ y\end{pmatrix}=\begin{pmatrix} x+y \\ x+ay\end{pmatrix}.$$ The function $F$ can then be written as $$F(x,y)=f(x+y, x+ay).$$ I guess you can complete the rest by using the chain rule.
probability transition matrix markov chain
I think there is a problem with the transition matrix; for example, shouldn't there be a $0.5$ probability that you go from square $1$ to square $7$, i.e. that you roll a head on the first roll? As for the computations: Hint If $M$ is the transition matrix, then the $i$-$j$th entry of $M^n$, denoted $p_{i,j}^{(n)}$, is the probability of getting from square $i$ to square $j$ in $n$ steps. How could this be used to compute the probabilities you want, given that you can easily compute $M^n$ using MATLAB?
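For illustration (in numpy rather than MATLAB, and with a toy $3$-state matrix standing in for the one from the question):

```python
import numpy as np

# Toy transition matrix as a stand-in for the one in the question.
M = np.array([[0.5, 0.5, 0.0],
              [0.0, 0.5, 0.5],
              [0.0, 0.0, 1.0]])

n = 10
Mn = np.linalg.matrix_power(M, n)
# Entry (i, j) of M^n is the probability of going from state i to state j in n steps.
print(Mn[0, 2])  # probability of going from the 1st to the 3rd state in 10 steps
```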
calculate the gcd of $a+b$ and $p^4$
The final equation is wrong: you also need $\,\color{#c00}{p\nmid k}.\,$ Therefore $\qquad(a\!+\!b,\,p^4)\, = \,(kp\!+\!\ell p^2,\,p^4)\, =\, p\,(k\!+\!\ell p,\,p^3)\, =\, p,\,\ $ by $\,\ p\nmid k\!+\!\ell p\ $ (else $\,\color{#c00}{p\mid k})$
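A quick numerical check, writing $a = kp$ and $b = \ell p^2$ with $p \nmid k$ as above:

```python
from math import gcd

p, k, l = 7, 3, 5           # p prime, p does not divide k
a, b = k * p, l * p * p
print(gcd(a + b, p**4), p)  # both 7: gcd(kp + l*p^2, p^4) = p
```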
Does this sequence associated to the prime numbers have a name and/or where can I learn more about it?
As cited in Constructing supersingular elliptic curves by Broker, a theorem of Waterhouse implies that a supersingular elliptic curve with trace of Frobenius $t$ (hence $p^2 + 1 - t$ points) exists over $\mathbb{F}_{p^2}$ iff either

- $t = \pm 2p$,
- $t = \pm p$ and $p \not \equiv 1 \bmod 3$, or
- $t = 0$ and $p \not \equiv 1 \bmod 4$.

This contradicts your claim. The problem with your argument is that Tate's isogeny theorem implies that two elliptic curves are isogenous over $\mathbb{F}_q$ iff $|E_1(\mathbb{F}_q)| = |E_2(\mathbb{F}_q)|$, but when people say that the supersingular isogeny graph is connected they are referring to isogenies over $\overline{\mathbb{F}_q}$. If such an isogeny is defined over $\mathbb{F}_{q^n}$ then we can only conclude that $|E_1(\mathbb{F}_{q^n})| = |E_2(\mathbb{F}_{q^n})|$. Explicitly, consider over $\mathbb{F}_{p^2}$ a supersingular elliptic curve $E_1$ with $t = 2p$, so $(p - 1)^2$ points, and another supersingular elliptic curve $E_2$ with $t = -2p$, so $(p + 1)^2$ points. In the first case the eigenvalues of Frobenius over $\mathbb{F}_{p^2}$ are $p, p$ and in the second case they are $-p, -p$, from which it follows that $$|E_1(\mathbb{F}_{p^4})| = |E_2(\mathbb{F}_{p^4})| = p^4 - 2p^2 + 1 = (p^2 - 1)^2$$ so $E_1$ and $E_2$ are isogenous over $\mathbb{F}_{p^4}$ but not over $\mathbb{F}_{p^2}$. Similarly, when $t = \pm p$ the eigenvalues of Frobenius are $\mp p \omega, \mp p \omega^2$ where $\omega$ is a primitive third root of unity, so we get an isogeny to $E_1$ or $E_2$ over $\mathbb{F}_{p^6}$ and to both over $\mathbb{F}_{p^{12}}$, and when $t = 0$ the eigenvalues of Frobenius are $ip, -ip$, so we get an isogeny to both $E_1$ and $E_2$ over $\mathbb{F}_{p^8}$.
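These point counts follow from $|E(\mathbb{F}_{q^n})| = q^n + 1 - (\alpha^n + \beta^n)$ for the Frobenius eigenvalues $\alpha, \beta$; a purely arithmetic check (no actual curves involved):

```python
# |E(F_{q^n})| = q^n + 1 - (alpha^n + beta^n) for Frobenius eigenvalues alpha, beta.
def count(alpha, beta, q, n):
    return round((q**n + 1 - (alpha**n + beta**n)).real)

p = 11
# t = 2p (eigenvalues p, p) vs t = -2p (eigenvalues -p, -p) over F_{p^2}:
print(count(p, p, p**2, 1), count(-p, -p, p**2, 1))  # (p-1)^2 vs (p+1)^2
print(count(p, p, p**2, 2), count(-p, -p, p**2, 2))  # both (p^2 - 1)^2
# t = 0 (eigenvalues +-ip): over F_{p^8} the count matches E_1 and E_2:
print(count(1j * p, -1j * p, p**2, 4), (p**4 - 1)**2)
```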
Where can I find a proof of Dupin's Theorem?
I found it in these books:

- Differential Geometry of Curves and Surfaces [Manfredo P. do Carmo], Problem 20, Page 152, Hint 484
- Geometric Methods and Applications [Jean Gallier][2ed], Theorem 20.3, Page 629

Here is a sketch of the proof (from the second book). First, we note that if two surfaces $X_1$ and $X_2$ intersect along a curve $C$, and if they form a constant angle along $C$, then the geodesic torsion $\tau^1_g$ of $C$ on $X_1$ is equal to the geodesic torsion $\tau^2_g$ of $C$ on $X_2$. Indeed, if $θ_1$ is the angle between $N_1$ and $n$, and $θ_2$ is the angle between $N_2$ and $n$, where $N_1$ is the normal to $X_1$, $N_2$ is the normal to $X_2$, and $n$ is the principal normal to $C$, then $$θ_1−θ_2=λ$$ for some constant $λ$, and thus $$\frac{dθ_1}{ds}=\frac{dθ_2}{ds},$$ which shows that $$τ^1_g=τ−\frac{dθ_1}{ds}=τ−\frac{dθ_2}{ds}=τ^2_g.$$ Now, if the system of surfaces is triply orthogonal, let $τ_{ij}$ be the geodesic torsion of the curve of intersection $C_{ij}$ between $X_i ∈ F_i$ and $X_j ∈ F_j$ (where $1≤i<j≤3$); this is well defined, since $X_i$ and $X_j$ intersect orthogonally. By an easy observation (Problem 19 from the first book), the geodesic torsions of orthogonal curves are opposite, and thus $$τ_{12}=−τ_{13},\quad τ_{23}=−τ_{12},\quad τ_{13}=−τ_{23},$$ from which we get that $$τ_{12}=τ_{23}=τ_{13}=0.$$ Problem 19: Let $C\subset S$ be a regular curve in $S$. Let $p \in C$ and $\alpha(s)$ be a parametrization of $C$ in $p$ by arc length so that $\alpha(0) = p$. Choose in $T_p(S)$ an orthonormal positive basis $\{t, h\}$, where $t = \alpha'(0)$. The geodesic torsion $\tau_g$ of $C\subset S$ at $p$ is defined by $$\tau_g=\left\langle\frac{dN}{ds}(0),h\right\rangle.$$ Prove that a) $\tau_g = (k_1 - k_2) \cos(\phi)\sin(\phi)$, where $\phi$ is the angle from $e_1$ to $t$.
Question on Proof of the Contraction Mapping Theorem
I assume from context that $c$ is defined to be a constant for which $$d(Tx, Ty) \le c\, d(x, y)$$ for all $x$ and $y$ (and in particular, $c < 1$ since $T$ is a contraction). Then, assuming $m \le n$, $$d(T^n x_0, T^m x_0) \le c\, d(T^{n - 1} x_0, T^{m - 1} x_0) \le c^2 d(T^{n - 2} x_0, T^{m - 2} x_0)$$ and so on. Applying the condition a total of $m$ times gives $d(T^n x_0, T^m x_0) \le c^m d(T^{n-m} x_0, x_0)$.
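A numerical illustration with $T(x)=\cos x$, which is a contraction on $[0,1]$ since $|T'|\le\sin 1<1$ there (used here purely for illustration):

```python
import math

# T(x) = cos(x) is a contraction on [0, 1], so the iterates T^n(x0) form a
# Cauchy sequence converging to the unique fixed point of x = cos(x).
x = 0.5
for n in range(1, 31):
    x_new = math.cos(x)
    if n % 5 == 0:
        print(n, abs(x_new - x))  # successive distances shrink geometrically
    x = x_new
# x is now close to the fixed point ~0.739085
```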
Modulo with negative numbers
$a\equiv b\pmod c$ means $c\mid(a-b)$. $-347\equiv1\pmod6$ is true because $6$ divides $(-347-1)=-348$. But $-347\equiv5\pmod 6$ is not true because $6$ does not divide $(-347-5)=-352$.
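As a side note, Python's `%` operator (whose result takes the sign of the divisor) agrees with this convention:

```python
print(-347 % 6)        # 1, since 6 divides -347 - 1 = -348
print((-347 - 1) % 6)  # 0
print((-347 - 5) % 6)  # 2, nonzero: 6 does not divide -352
```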
Integral of $\sin^3\left(\frac{x}{2}\right)\cos^7\left(\frac{x}{3}\right)$
Hint: Try to express both arguments in terms of a common angle (here $\frac{x}{6}$ works, since $\frac{x}{2}=3\cdot\frac{x}{6}$ and $\frac{x}{3}=2\cdot\frac{x}{6}$), then use the angle addition and subtraction formulas to expand.
Prove that the series $\sum\limits_{k=1}^{\infty}[\ln(ak+b)- \ln(ak)]$ diverges
HINT: Note that $\log(1+x)\ge \frac{x}{1+x}$ for $x\ge 0$. Then (assuming $a,b>0$) we have $$\log(ak+b)-\log(ak)=\log\left(1+\frac{b}{ak}\right)\ge \frac{b}{ak+b},$$ and $\sum_k \frac{b}{ak+b}$ diverges by comparison with the harmonic series.
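Numerically (with the illustrative choice $a=b=1$) the partial sums indeed grow like $\frac{b}{a}\log n$:

```python
import math

a, b = 1.0, 1.0  # illustrative choice; the hint assumes a, b > 0
for n in [10, 100, 1000, 10000]:
    s = sum(math.log(a*k + b) - math.log(a*k) for k in range(1, n + 1))
    print(n, s, (b / a) * math.log(n))  # partial sums grow like (b/a)*log(n)
```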
Center of gravity using double integration.
The integral you want is $\int_{-2}^{1}\int_{y-2}^{-y^2}\ldots\,dx\,dy$, where $\ldots$ is the integrand needed for the center of gravity; note the upper limit on $x$ is $-y^2$, not $y^2$ (the curves $x=y-2$ and $x=-y^2$ intersect at $y=-2$ and $y=1$). You can use the same limits for both coordinates. (For the $\bar{x}$ numerator the sign of $y^2$ doesn't matter, since $\int x\,dx$ depends only on $x^2$ at the endpoints.) For $\bar{y}$, remembering to divide by the area $A=\int_{-2}^{1}(2-y-y^2)\,dy=\frac92$, we get $$\bar{y}=\frac1A\int_{-2}^{1}y\,(2-y-y^2)\,dy=\frac{-9/4}{9/2}=-\frac12.$$
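A symbolic check of the area and centroid (sympy):

```python
import sympy as sp

y = sp.symbols('y')
width = -y**2 - (y - 2)                       # right boundary minus left boundary

A = sp.integrate(width, (y, -2, 1))           # area = 9/2
y_bar = sp.integrate(y * width, (y, -2, 1)) / A
print(A, y_bar)                               # 9/2, -1/2
```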
Using characteristic functions to establish convergence
Expanding on @Did's comment, we have $$ \psi_{X_\lambda}(t) = e^{\lambda\left(e^{itb(\lambda)}-1-itb(\lambda)\right)} $$ For sufficiently large $\lambda$, this is approximately equal to $$ e^{\lambda\left(-\frac12 t^2b^2(\lambda)\right)}.$$ Choosing $b(\lambda) = \lambda^{-\frac12}\sigma$ (where $\sigma>0$), we have $$\lim_{\lambda\to\infty}\psi_{X_\lambda}(t) = e^{-\frac12\sigma^2t^2}, $$ which implies that $$X_\lambda\stackrel{d}{\longrightarrow}\mathcal N(0,\sigma^2).$$
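A quick numerical check of the convergence (with $\sigma=1$ and an arbitrary $t$):

```python
import cmath

def psi(t, lam, sigma=1.0):
    b = sigma / lam**0.5
    return cmath.exp(lam * (cmath.exp(1j * t * b) - 1 - 1j * t * b))

t = 1.7
for lam in [1e2, 1e4, 1e6]:
    print(lam, psi(t, lam), cmath.exp(-0.5 * t**2))  # tends to the N(0,1) cf
```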
Declaring properties of variables
I would say it is better to put the first part, that $x, y \in \mathbb{R}$, first. Otherwise one doesn't necessarily know what conditions like $xy \geq 0$ even mean. If $x$ and $y$ are sets, for example, this is meaningless. (But as Bye World points out, you can also write "where $x,y \in \mathbb{R}$" last.) That being said, most people would understand the condition "if $x, y \geq 0$" on its own to mean "if $x$ and $y$ are nonnegative real numbers." Once you've said that you're assuming $x, y \in \mathbb{R}$, you can write either: "If $x, y \geq 0$, then $xy \geq 0$," or "We have $xy \geq 0$, if $x,y \geq 0$," with no change in meaning. I find the first one a little easier to read.
Zero vector and span relationship
No, you can't. For example, you lose the property of linear independence (if $S$ has it): if you put the zero vector into $S$, then the set of vectors becomes linearly dependent, which in many cases is the opposite of what you want (e.g. see basis of a vector space).
Statistics. How to find the median of two other medians?
The first median is known to be larger than 8 elements, and the second smaller than 9. So the median certainly lies between 226 and 304. But this is all you can say! The median could be any value in between. (Choose a median arbitrarily; you will have all the freedom to insert the elements on either side.)
Study the convergence of the series $\sum_{n=1}^{\infty}\frac{e^n}{(1+\frac{1}{n})^{n^2}}$
Hint: Show that $$\left(1+\frac{1}{n}\right)^n < e$$ for all positive integers $n$.
Where am I going wrong on this poker math problem?
You multiplied $0.0277 \times 37$, but the solution multiplied it by $36$: you assumed that the payout would be $37$ dollars, but since you have to pay $1$ dollar, the net payout is $36$ dollars. Also, the solution rounded $0.0277$ to $0.028$.
The sides of a triangle are in the ratio $1:\sqrt 3: 2$, then the angles are in the ration-
The best answer is in the comments. In any case it is simpler to find the angles themselves and use the result to find the ratio. In the general case, use the law of cosines to find the angles of a triangle: $$ \cos A=\frac{b^2+c^2-a^2}{2bc}. $$ In your case one obtains: $$ \cos A=\frac{\sqrt3}2,\;\cos B=\frac12,\; \cos C=0, $$ i.e. $A=30^\circ$, $B=60^\circ$, $C=90^\circ$, so the angles are in the ratio $1:2:3$.
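Carrying the computation out in code, with the sides $1, \sqrt3, 2$:

```python
import math

a, b, c = 1.0, math.sqrt(3), 2.0

# Law of cosines for each angle
A = math.acos((b*b + c*c - a*a) / (2*b*c))
B = math.acos((a*a + c*c - b*b) / (2*a*c))
C = math.acos((a*a + b*b - c*c) / (2*a*b))
print(math.degrees(A), math.degrees(B), math.degrees(C))  # 30, 60, 90: ratio 1:2:3
```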
Given $1 \le |z| \le 7$ Find least and Greatest values of $\left|\frac{z}{4}+\frac{6}{z}\right|$
$\left|\frac{z}{4}+\frac{6}{z}\right|=\left|\frac{z^2+24}{4z}\right|$. The function $f(z)=\frac{z^2+24}{4z}$ is analytic on $1\leq|z|\leq7$. Therefore, by the maximum modulus theorem its maximum absolute value is attained on the boundary, which consists of the circles $|z|=1$ and $|z|=7$. For $|z|=1$, observe that $z^2$ just travels the same circle. We have $|f(z)|=|z^2+24|/4$, which is maximized for $z=1$ or $z=-1$ (so that $z^2$ and $24$ point in the same direction), giving $|f(\pm1)|=\frac{25}{4}$. For $|z|=7$, $z^2$ travels the circle $|w|=49$, and $|f(z)|=|z^2+24|/28$ is maximized for $z=7$ or $z=-7$, giving $|f(\pm7)|=\frac{73}{28}$. Since $\frac{25}{4}>\frac{73}{28}$, the greatest value is $\frac{25}{4}$, attained at $z=\pm1$. The minimum is zero, attained at the zeros $z=\pm\sqrt{24}\,i$ of $f$, which lie inside the annulus since $\sqrt{24}\approx4.9$.
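A brute-force numerical confirmation, sampling the boundary circles for the maximum and evaluating at the interior zeros for the minimum:

```python
import cmath, math

f = lambda z: abs(z / 4 + 6 / z)

angles = [2 * math.pi * k / 10000 for k in range(10000)]
for r in (1.0, 7.0):
    print(r, max(f(r * cmath.exp(1j * th)) for th in angles))
# r=1 gives 25/4 = 6.25 (the maximum), r=7 gives 73/28 ~ 2.607
print(f(math.sqrt(24) * 1j))  # ~0: the minimum, attained inside the annulus
```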
Prove that $\phi(f(X),Y)=\phi(X,f(Y))~\forall X,Y\in\mathbb R^3$ where $\phi(X,Y)=X^TAY$ and $f:\mathbb R^3\to \mathbb R^3, X\mapsto BX$
Axioms of inner product: let $V$ be a $\mathbb C$-vector space (or $\mathbb R$; the conjugation below only matters over $\mathbb C$). $\phi:V\times V \to \mathbb C$ is an inner product if $$ \forall(x,y_1, y_2 \in V,\ \alpha_1, \alpha_2\in \mathbb C)\quad \phi(x, \alpha_1y_1+\alpha_2y_2) = \alpha_1\phi(x,y_1) + \alpha_2\phi(x, y_2), $$ $$ \forall (v \in V\setminus\{0\})\quad 0 < \phi(v, v) \in \mathbb R, $$ $$ \forall (x, y \in V)\quad \phi(x, y) = \overline{\phi(y, x)}. $$ Suppose that $\phi(x,y) = x^TAy$ is an inner product: it satisfies the first axiom (easy to check) and the third axiom (we're working with real numbers here, and the matrix $A$ is symmetric), but depending on $t$, the second axiom might be violated; e.g. for $t=0$ and the vector $v=(0, 1, 1)$, $\phi(v,v)=0$. Therefore, suppose that we're working with a $t$ that gives a valid inner product. Let $$ A' = \left(\begin{array}{rrr} 1 & 0 & 0 \\ 0 & 2 & t^{2} - 2 \\ 0 & t^{2} - 2 & 3 \end{array}\right). $$ If $x^TAy$ is an inner product, then $x^TA'y$ is an inner product too: the first and third axioms are immediate (linearity holds for any matrix, and $A'$ is symmetric), and for such $t$ that $x^TAy$ was a valid inner product, it can be verified that $x^TA'y$ is a valid inner product too (applying Sylvester's criterion). The proof you mentioned doesn't depend on the entries of the matrices $A$ and $B$ (it just uses the fact that $A$ is symmetric, and exploits some properties of the inner product), so it should remain valid for the product given by $x^TA'y$. This however is false, because $A'B$ is not symmetric, as can be checked by computation. This means that you need to actually look at the entries of both matrices (and probably do the multiplication).
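The Sylvester check on $A'$ can be done symbolically; a short sketch:

```python
import sympy as sp

t = sp.symbols('t', real=True)
A1 = sp.Matrix([[1, 0, 0],
                [0, 2, t**2 - 2],
                [0, t**2 - 2, 3]])

# Sylvester's criterion: all leading principal minors must be positive.
minors = [A1[:k, :k].det() for k in (1, 2, 3)]
print(minors)  # [1, 2, 6 - (t**2 - 2)**2]; positive definite iff (t^2 - 2)^2 < 6
```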
Solve logarithmic equation $ 3^{\log_3^2x} + x^{\log_3x}=162$
$3^{\log^2_3 x} = (3^{\log_3 x})^{\log_3 x} = x^{\log_3 x}$, so the equation becomes $2x^{\log_3 x}=162$, i.e. $x^{\log_3 x}=81$. Taking $\log_3$ of both sides gives $(\log_3 x)^2=4$, so $\log_3 x=\pm2$ and $x=9$ or $x=\frac19$.
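Both roots check out numerically:

```python
import math

for x in (9, 1/9):
    L = math.log(x, 3)
    print(x, 3**(L * L) + x**L)  # both ~ 162 (up to float rounding)
```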
Power series of a function related to Gamma function
Using the binomial series expansion and applying the binomial identity $$\binom{-\alpha}{n}=\binom{\alpha+n-1}{n}(-1)^n$$ we derive for $|z|<1$ and $\alpha\in\mathbb{C}$ \begin{align*} f(z)=\frac{1}{(1-z)^\alpha}&=\sum_{n=0}^\infty\binom{-\alpha}{n}(-z)^n\\ &=\sum_{n=0}^\infty\binom{\alpha+n-1}{n}z^n\\ \end{align*} It follows \begin{align*} a_n(\alpha)&=\binom{\alpha+n-1}{n}=\frac{1}{n!}(\alpha+n-1)(\alpha+n-2)\cdots\alpha\\ &=\frac{1}{n!}(\alpha+n-1)^{\underline{n}} \end{align*} with $z^{\underline{n}}=z(z-1)\cdots(z-n+1)$ the falling factorial. The following is valid \begin{align*} \lim_{n\to\infty}\frac{a_n(\alpha)}{n^{\alpha-1}}=\lim_{n\to\infty}\frac{(\alpha+n-1)^{\underline{n}}}{n!n^{\alpha-1}}=\frac{1}{\Gamma(\alpha)}\tag{1} \end{align*} In order to show (1) we use a representation of the Gamma function $\Gamma(\alpha)$ according to C.F. Gauss. We obtain \begin{align*} \color{blue}{\Gamma(\alpha)}&=\lim_{n\to\infty}\frac{n^\alpha n!}{\alpha(\alpha+1)\cdots(\alpha+n)}\\ &=\lim_{n\to\infty}\frac{n^\alpha n!}{(\alpha+n)^{\underline{n+1}}}\\ &=\lim_{n\to\infty}\left(\frac{n}{\alpha+n}\cdot\frac{n^{\alpha-1} n!}{(\alpha+n-1)^{\underline{n}}}\right)\\ &=\lim_{n\to\infty}\frac{1}{\frac{\alpha}{n}+1}\cdot\lim_{n\to\infty}\frac{n^{\alpha-1} n!}{(\alpha+n-1)^{\underline{n}}}\\ &\color{blue}{=\lim_{n\to\infty}\frac{n^{\alpha-1} n!}{(\alpha+n-1)^{\underline{n}}}}\\ \end{align*} and the claim follows.
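A numerical check of the limit (1), e.g. with $\alpha = 2.5$:

```python
import math

def a(n, alpha):
    # a_n(alpha) = binom(alpha + n - 1, n) = (alpha+n-1)(alpha+n-2)...alpha / n!
    prod = 1.0
    for j in range(n):
        prod *= (alpha + j) / (j + 1)
    return prod

alpha = 2.5
for n in [10, 100, 1000, 10000]:
    print(n, a(n, alpha) / n**(alpha - 1), 1 / math.gamma(alpha))  # ratio -> 1/Gamma
```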
Elementary Matrix Embedding
$$E_{ij} = \begin{cases} 1, & i=j=1 \\ 0, & i=1 \mbox{ or } j=1 \mbox{ but } (i,j) \neq (1,1) \\ E_{i-1,j-1}', &i > 1 \text{ and } j>1 \end{cases}$$ If $E'$ corresponds to switching the $i$-th and $j$-th rows, then $E$ corresponds to switching the $(i+1)$-th and $(j+1)$-th rows. Try to find the corresponding operations for the other row operations.
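In code, the embedding is just a block construction; a numpy sketch:

```python
import numpy as np

def embed(E_prime):
    """Embed an n x n elementary matrix E' into an (n+1) x (n+1) matrix E
    acting as the identity on the new first coordinate."""
    n = E_prime.shape[0]
    E = np.zeros((n + 1, n + 1))
    E[0, 0] = 1
    E[1:, 1:] = E_prime
    return E

swap12 = np.array([[0.0, 1.0], [1.0, 0.0]])  # swaps rows 1 and 2
print(embed(swap12))                          # swaps rows 2 and 3 in the 3x3 setting
```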
How do I evaluate this trigonometric limit involving infinite product?
There aren't that many tricks for creating a horrible-looking and yet doable limit. In school, if you are asked to evaluate a series or product involving trigonometric functions whose arguments contain terms like $\frac{(\cdots)}{2^k}$, one thing you should do is look up the double-angle formulas for the various trigonometric functions and see whether you can use them to turn your series/product into a telescoping one. In this case, it does work. Let $t = \tan\theta$, we have $$1 - t^4 = (1-t^2)^2\frac{1+t^2}{1-t^2} = \left(\frac{2t}{\frac{2t}{1-t^2}}\right)^2\frac{1+t^2}{1-t^2}$$ Recall $$\frac{2t}{1-t^2} = \tan(2\theta)\quad\text{ and }\quad \frac{1-t^2}{1+t^2} = \cos(2\theta) = \frac12\frac{\sin(4\theta)}{\sin(2\theta)} $$ We obtain $$1 - t^4 = 8\left(\frac{\tan(\theta)}{\tan(2\theta)}\right)^2\frac{\sin(2\theta)}{\sin(4\theta)} = 8\frac{\tan^2\theta\sin(2\theta)}{\tan^2(2\theta)\sin(4\theta)} $$ The product is telescoping and $$\begin{align}\prod_{k=3}^n\left(1 - \tan^4\frac{\pi}{2^k}\right) &= 8^{n-2} \frac{\tan^2\frac{\pi}{2^n}\sin \frac{\pi}{2^{n-1}}}{\tan^2\frac{\pi}{4}\sin\frac{\pi}{2}}\\ &= \frac{1}{32} \left(2^n\tan \frac{\pi}{2^n}\right)^2\left(2^{n-1}\sin\frac{\pi}{2^{n-1}}\right)\end{align} $$ Using $$\lim_{N\to\infty} N\tan\frac{\pi}{N} = \lim_{N\to\infty} N\sin\frac{\pi}{N} = \pi$$ We obtain $$\lim_{n\to\infty} \prod_{k=3}^n\left(1 - \tan^4\frac{\pi}{2^k}\right) = \frac{\pi^3}{32}$$
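A numerical confirmation of the telescoped value:

```python
import math

prod = 1.0
for k in range(3, 40):
    prod *= 1 - math.tan(math.pi / 2**k)**4

print(prod, math.pi**3 / 32)  # both ~ 0.968946
```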
How many elements π in $S_n$ such that $π^2 = e$ , (the order of π is 1 or 2)?
You can show that every element of $S_n$ can be written as a product of disjoint cycles. This gives you an easy way to determine the order of an element. Let $\pi\in S_n$; then $$\pi=(c_{11}...c_{1t_1})(c_{21}...c_{2t_2})...(c_{m1}...c_{mt_m}).$$ To compute the order of $\pi$ we take the least common multiple of $t_1,...,t_m$. In order for the order of $\pi$ to be $1$ or $2$ we require that $t_1,...,t_m\leq 2$. So we see that, if we ignore $1$-cycles, an element has order $1$ or $2$ if and only if it is a product of disjoint $2$-cycles (transpositions) or it is $e$ (technically $e$ is the product of no $2$-cycles). This allows us to calculate the number of elements of order $1$ or $2$: we simply count how many ways we can have a product of disjoint transpositions. For $n=4$ there are $\binom{4}{2}\binom{2}{2}\frac{1}{2}=3$ ways to form a product of two $2$-cycles, there are $\binom{4}{2}=6$ ways to form a product of one $2$-cycle, and there is $1$ way to form a product of no $2$-cycles. This gives us $10$ elements of order $1$ or $2$.
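A brute-force check of the count for small $n$:

```python
from itertools import permutations

def num_involutions(n):
    # Count permutations pi of {0,...,n-1} with pi^2 = identity.
    count = 0
    for p in permutations(range(n)):
        if all(p[p[i]] == i for i in range(n)):
            count += 1
    return count

print(num_involutions(4))  # 10, matching 3 + 6 + 1 above
```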
Domain of solved Differential equation?
When solving a differential equation, you will get a family of solutions. However, once you fix an initial value, then by the existence and uniqueness theorem you will have one unique solution. In your case, plugging the initial condition into $y^2 = -4x^3 +c$ gives $c=8$, so $y^2= -4x^3+8$, which means $y$ can either be $\sqrt{-4x^3+8}$ or $-\sqrt{-4x^3+8}$. But since $y(1)=2$, your answer can only be $y= \sqrt{-4x^3+8}$, because the other branch will give you $-2$ when you plug in $x=1$. The domain of this solution is where $-4x^3+8>0$, i.e. $x<\sqrt[3]{2}$.