INSTRUCTION | RESPONSE
---|---|
2 Dimensional quantum particle A quantum particle moves in 2 dimensions with Hamiltonian H:
$ H = \frac1{2m} ((P_1 + \frac12 eBX_2)^2 + (P_2 - \frac12 eBX_1)^2) $
For constants $e,B,m$ with $e$ and $B$ nonzero.
Show that the energy levels are of the form $ (n + \frac12)\hbar |eB|\frac{1}{m}$
The hint given is to define $\bar P$ and $\bar X$ as proportional to $P_1 + \frac12 eBX_2$ and $P_2 - \frac12 eBX_1$ and show that the original Hamiltonian has the form
$\frac1{2m} P^2 + \frac12m\omega^2X^2$ for some $\omega$, where
$P_j = -i\hbar \frac{\partial}{\partial x_j}$ and
$X_j = x_j$
We are given that this has energy levels $(n+\frac12)\hbar \omega$.
| Depending on how you want to approach this problem I've seen the following done:
\begin{eqnarray}
\hat{H} &=& \frac{1}{2m}[\hat{p}_x ^2+eB\,\hat{p}_x \hat{y}+\frac{1}{4}e^2B^2\hat{y}^2+\hat{p}_y ^2-eB\,\hat{p}_y \hat{x}+\frac{1}{4}e^2B^2\hat{x}^2]\\
&=&\frac{1}{2m}[\hat{p}_x^2+\hat{p}_y^2]-\hat{L}_z \frac{eB}{2m}+\frac{e^2 B^2}{8m}(\hat{x}^2+\hat{y}^2)
\end{eqnarray}
Here $\hat L_z=\hat x\hat p_y-\hat y\hat p_x$. One can show that $\hat{L}_z$ commutes with $\hat{p}_x ^2 + \hat{p}_y ^2$ and with $\hat{x}^2 + \hat{y}^2$. You can thus form a complete set of commuting operators out of $\hat L_z$ and what is essentially a harmonic-oscillator ("spring") Hamiltonian.
You can then calculate the levels of $\hat L_z$ and the levels of that oscillator Hamiltonian, so I'll leave the rest to you.
|
Rotating regular polygon Let $R_1$ be a regular $n$-sided polygon on the plane (square, pentagon, hexagon, etc.).
Now from this position we start to rotate this polygon about its center of gravity obtaining figure $R_2$.
*
*How does one calculate the angle of rotation $\alpha$ for which the common area
of $R_1$ and $R_2$, i.e. area($R_1 \cap R_2$), is minimal? (Intuition suggests what the solution could be, but how does one prove it?)
*Does some simple method exist for solving this problem in the general case? (preferably with the use of rotation matrices)
*Should the procedure be the same for odd $n$ and even $n$, or must we distinguish between these two cases?
Additionally:
*
*Can it be proven that the shape obtained in the minimal-area situation
is also a regular polygon ($2n$-sided), as we see in the picture of the pentagon below made by Joseph?
| The minimum is achieved at $\alpha = \frac{\pi}{n}$ and at minimum, $R_1 \cap R_2$ is a regular $2n$-gon.
Choose a coordinate system so that $R_1$ is centered at origin
and one of its vertices lies on $x$-axis.
Let $\rho(\theta)$ be the function which allows us to parametrize $\partial R_1$ in the following manner:
$$\mathbb{R} \ni \theta \quad\mapsto\quad (x,y) = (\sqrt{2\rho(\theta)}\cos\theta,\sqrt{2\rho(\theta)}\sin\theta) \in \partial R_1$$
In terms of $\rho(\theta)$, we have
$$f(\alpha) \stackrel{def}{=} \verb/Area/(R_1 \cap R_2) = \int_0^{2\pi} \min(\rho(\theta),\rho(\theta-\alpha)) d\theta$$
Since $R_1$ is a regular $n$-gon and one of its vertices lies on $x$-axis, $\rho(\theta)$ is even and periodic with period $\frac{2\pi}{n}$. In fact, it
strictly decreases on $[0,\frac{\pi}{n}]$ and strictly increases on $[\frac{\pi}{n},\frac{2\pi}{n}]$.
As a result of these, $f(\alpha)$ is even and periodic with the same period. To determine the minimum of $f(\alpha)$, we only need to study the case
where $\alpha \in \left[0,\frac{\pi}{n}\right]$.
For $\alpha \in \left[0,\frac{\pi}{n}\right]$ and $\theta \in \left[0,\frac{2\pi}{n}\right]$, the curves $\rho(\theta)$ and $\rho(\theta - \alpha)$ intersect at
$\theta = \frac{\alpha}{2}$ and $\theta = \frac{\alpha}{2} + \frac{\pi}{n}$.
This leads to
$$\begin{align}f(\alpha)
&= n\left[
\int_{\frac{\alpha}{2}}^{\frac{\alpha}{2}+\frac{\pi}{n}} \rho(\theta) d\theta
+ \left(
\int_0^{\frac{\alpha}{2}} + \int_{\frac{\alpha}{2}+\frac{\pi}{n}}^{\frac{2\pi}{n}}
\right)\rho(\theta-\alpha)d\theta
\right]
= 2n\int_{\frac{\alpha}{2}}^{\frac{\alpha}{2}+\frac{\pi}{n}} \rho(\theta) d\theta\\
\implies
\frac{df(\alpha)}{d\alpha} &= n\left(\rho\left(\frac{\alpha}{2}+\frac{\pi}{n}\right) - \rho\left(\frac{\alpha}{2}\right)\right)
\end{align}
$$
At the minimum, we have
$$\frac{df(\alpha)}{d\alpha} = 0
\implies
\rho\left(\frac{\alpha}{2}\right) = \rho\left(\frac{\alpha}{2} + \frac{\pi}{n}\right)
= \rho\left(\frac{\alpha}{2} - \frac{\pi}{n}\right) = \rho\left(\frac{\pi}{n} - \frac{\alpha}{2}\right)
$$
But $\frac{\pi}{n} - \frac{\alpha}{2}$ also belongs to $[0,\frac{\pi}{n}]$, where $\rho(\theta)$ is strictly decreasing, so this means
$$\frac{\alpha}{2} = \frac{\pi}{n} - \frac{\alpha}{2}\quad\implies\quad \alpha = \frac{\pi}{n}$$
Please note that this argument doesn't use the explicit form of regular $n$-gon.
It uses
*
*$n$-fold rotation symmetry about center,
*$2$-fold reflection symmetry about a ray through a vertex,
*$\rho(\theta)$ is strictly decreasing on suitable intervals of $\theta$.
This means the same argument should work for other shapes with similar properties,
e.g. those obtained by filling the "interior" of a regular star polygon.
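As a numerical sanity check of the $\alpha=\pi/n$ claim, one can evaluate $f(\alpha)$ from the explicit $\rho(\theta)$ of a regular $n$-gon (circumradius $1$, vertex on the $x$-axis). The Python sketch below does this on a grid; the function names and grid sizes are arbitrary choices.

```python
import numpy as np

def rho(theta, n):
    """Half the squared boundary distance of a regular n-gon (circumradius 1, vertex on the x-axis)."""
    a = np.cos(np.pi / n)                    # apothem
    t = np.mod(theta, 2 * np.pi / n)         # use the n-fold symmetry
    r = a / np.cos(t - np.pi / n)            # distance from the center to the boundary at angle theta
    return 0.5 * r ** 2

def overlap_area(alpha, n, m=20_000):
    """Area(R1 ∩ R2) = ∫ min(rho(θ), rho(θ - α)) dθ, via a Riemann sum."""
    theta = np.linspace(0.0, 2 * np.pi, m, endpoint=False)
    return np.mean(np.minimum(rho(theta, n), rho(theta - alpha, n))) * 2 * np.pi

n = 5
alphas = np.linspace(0.0, 2 * np.pi / n, 201)
areas = np.array([overlap_area(a, n) for a in alphas])
print(alphas[np.argmin(areas)], np.pi / n)   # both ≈ 0.628, i.e. alpha = pi/5
```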
|
How to determine the number of coin tosses to identify one biased coin from another? If coin $X$ and coin $Y$ are biased, and have the probability of turning up heads at $p$ and $q$ respectively, then given one of these coins at random, how many times must the coin be flipped in order to identify whether we're dealing with coin $X$ or coin $Y$? We assume a 0.5 chance of getting either coin.
| We should find the percentage of heads that is equally probable for both coins; let that be $m\in(0,1)$, and say $p<m<q$. So we have:
$$p^m(1-p)^{1-m}=q^m(1-q)^{1-m}\\
(\frac p q)^m(\frac {1-p}{1-q})^{1-m}=1\\
(\frac {p(1-q)} {q(1-p)})^m=\frac {1-q}{1-p}\\
m=\log_{\frac {p(1-q)} {q(1-p)}}\frac {1-q}{1-p}$$
We are looking for an $n$ such that the probability that the number of heads is below $mn$ when you picked $X$ exceeds the same probability when you picked $Y$ by at least $P$, where $P$ is the desired confidence for claiming which coin it is.
$p_n(k<mn|X)-p_n(k<mn|Y)>P$ Which also implies $p_n(k>mn|Y)-p_n(k>mn|X)>P$
To find $p_n(k<mn|X)$ we can use the normal approximation, which gives
$$\Phi(\sqrt n\frac{m-p}{\sqrt{p(1-p)}})-\Phi(\sqrt n\frac{m-q}{\sqrt{q(1-q)}})>P$$
So the problem becomes: find $x$ such that $\Phi(ax)-\Phi(bx)>P$ where $a,b,P$ are known.
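To make the recipe concrete, here is a small Python sketch (the function names and sample inputs are mine) that computes the crossover frequency $m$ and searches for the smallest $n$ meeting the normal-approximation criterion:

```python
import math

def phi(z):
    """Standard normal CDF."""
    return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))

def flips_needed(p, q, P):
    """Smallest n with Phi(sqrt(n)(m-p)/sd_p) - Phi(sqrt(n)(m-q)/sd_q) > P; assumes p != q."""
    if p > q:
        p, q = q, p                                        # ensure p < m < q
    base = (p * (1 - q)) / (q * (1 - p))
    m = math.log((1 - q) / (1 - p)) / math.log(base)       # the crossover frequency
    sd_p, sd_q = math.sqrt(p * (1 - p)), math.sqrt(q * (1 - q))
    n = 1
    while phi(math.sqrt(n) * (m - p) / sd_p) - phi(math.sqrt(n) * (m - q) / sd_q) <= P:
        n += 1
    return n, m

print(flips_needed(0.4, 0.6, 0.95))   # with these inputs, roughly (93, 0.5)
```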
|
What is the formula for calculating the distance between two points of an equilateral triangle with a known "radius"? I have an equilateral triangle with each point being a known distance of N units from the center of the triangle.
What formula would I need to use to determine the length of any side of the triangle?
| First draw the lines from all three points to the center point of this triangle, then notice that two of these lines make an isosceles triangle with angles $2\pi/3$, $\pi/6$, and $\pi/6$. Since we're given the length of two of the sides of this triangle, and the angles are known, we can calculate the length of the third side.
Let $a$ denote the circumradius (the distance from the center to a vertex), and $b$ the side length of the triangle. Then, by the cosine law, we have
$b^2 = a^2 + a^2 - 2aa\cos(2\pi/3)$
$= 2a^2(1 - \cos(2\pi/3))$
$= 2a^2(3/2) $
So we have $b = \sqrt{3a^2} = \sqrt{3}\,a$.
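A quick numerical cross-check of $b=\sqrt3\,a$, placing two vertices explicitly on a circle of radius $N$ (a Python sketch; the value of $N$ is arbitrary):

```python
import math

def side_from_circumradius(N):
    """Side length of an equilateral triangle whose vertices are N units from the center."""
    return math.sqrt(3) * N

# cross-check against the distance between two vertices placed on a circle of radius N
N = 2.5
v1 = (N * math.cos(0.0), N * math.sin(0.0))
v2 = (N * math.cos(2 * math.pi / 3), N * math.sin(2 * math.pi / 3))
print(side_from_circumradius(N), math.dist(v1, v2))   # both ≈ 4.3301
```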
|
Tough Logarithm Problem I was working on this Problem
Prove that: $$ \frac{\log_5(nt)^2}{\log_4\left(\frac{t}{r}\right)}=\frac{2\log(n)\cdot\log(4)+2\log(t)\cdot \log(4)}{\log(5)\cdot \log(t)-\log(5)\cdot\log(r)}$$
I think it has something to do with change of base because it's $\log_{10}$ on the right side and not on the left, but I'm not sure how to go about this.
| \begin{align}
\frac{\log_5(nt)^2}{\log_4(t/r)} &= \frac{\frac{\log (nt)^2 }{\log 5}}{\frac{\log(t/r)}{\log 4}}
\\ &= \frac{\log 4 \log(nt)^2}{\log 5 \log(t/r)}
\\ &= \frac{2\log 4 (\log(n)+\log(t))}{\log 5 (\log(t)-\log (r))}
\end{align}
where the following identities are used:
$$\log_a b = \frac{\log_c b}{\log_c a}$$ in the first equality.
Also $$\log a^n= n\log a,$$ $$\log(ab)=\log a + \log b,$$ and $$\log(a/b)=\log a - \log b$$
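A numeric spot check with arbitrary positive sample values, taking the unlabelled $\log$ on the right-hand side to be base $10$ (a Python sketch):

```python
import math

# arbitrary positive sample values; the unlabelled log on the RHS is taken as base 10
n, t, r = 3.0, 7.0, 2.0
lhs = math.log((n * t) ** 2, 5) / math.log(t / r, 4)
rhs = (2 * math.log10(n) * math.log10(4) + 2 * math.log10(t) * math.log10(4)) / (
    math.log10(5) * math.log10(t) - math.log10(5) * math.log10(r)
)
print(lhs, rhs)   # the two values agree (≈ 4.186 here)
```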
|
Show that $Df(V,W)$ evaluated on $(H,K)$ is given by $f(V,K)+f(H,W)$. Let $f:\Bbb R^2\times \Bbb R^2\to \Bbb R$ be a bilinear map.Then show that for $(V,W)\in \Bbb R^2\times \Bbb R^2$,the derivative $Df(V,W)$ evaluated on $(H,K)\in \Bbb R^2\times \Bbb R^2$ is given by
$f(V,K)+f(H,W)$.
As I could not understand how to prove it, I took an example first.
Let $f:\Bbb R^2\times \Bbb R^2\to \Bbb R$ be defined by $f((x_1,y_1),(x_2,y_2))=x_1+2y_1+x_2+2y_2$ where $V=(x_1,y_1),W=(x_2,y_2)$,
$Df(V,W)=\begin{bmatrix} 1,2,1,2\end{bmatrix}$.
where I did the derivative at each component.
How will it equal $f(V,K)+f(H,W)$? I am totally confused. Please, someone help.
| What is the definition of the derivative of a bilinear form? Is it a generalization of Fréchet derivative? Then we should just look for linear terms in the
$$f(v+h,w+k)-f(v,w) = f(v,k)+f(h,w)+f(h,k),$$
which after discarding the second-order term $f(h,k)$ leads to
$$[Df(v, w)](h, k) = f(v, k) + f(h, w).$$
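As a numerical sanity check (a Python sketch with arbitrary random data, not tied to the example in the question), the difference quotient of $f(x,y)=x^\top By$ along a direction $(h,k)$ matches $f(v,k)+f(h,w)$:

```python
import numpy as np

rng = np.random.default_rng(0)
B = rng.standard_normal((2, 2))           # an arbitrary bilinear form f(x, y) = x^T B y
f = lambda x, y: x @ B @ y

v, w = rng.standard_normal(2), rng.standard_normal(2)
h, k = rng.standard_normal(2), rng.standard_normal(2)

eps = 1e-6
numeric = (f(v + eps * h, w + eps * k) - f(v, w)) / eps   # difference quotient along (h, k)
formula = f(v, k) + f(h, w)
print(numeric, formula)                   # agree up to O(eps)
```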
|
Is the following integral convergent or divergent $\int_{0}^{1} \frac{1}{\sin x}dx$? Question: Is the following integral convergent or divergent $\int_{0}^{1} \frac{1}{\sin x}dx$? Use a comparative theorem to prove your results.
Answer attempt:
I want to know if my attempt at a solution is acceptable.
$$\int_{0}^{1} \frac{1}{\sin x}dx$$
By using a variable substitution:
$t = \sin x$,
$x = \arcsin t$,
$dx = \frac{1}{\sqrt{1-t^2}}dt$
we get:
$$\int_{0}^{\sin 1} \frac{1}{t} \times \frac{1}{\sqrt{1-t^2}} dt$$
By using the following comparison:
$\frac{1}{t} \times \frac{1}{\sqrt{1-t^2}} \geq \frac{1}{t}$
This means that if the following integral is divergent we have proved that the original integral is also divergent:
$$\int_{0}^{\sin 1} \frac{1}{t}dt = \ln(\sin 1) - \ln(0)$$
$\ln(0)$ is not finite, which must mean that the original integral is divergent.
| We have
$$\frac{1}{\sin(x)}\sim \frac{1}{x}\;\;(x\to 0^+)$$
and
$$\int_0^1\frac{dx}{x}$$ divergent
$$\implies \int_0^1\frac{dx}{\sin(x)}$$ divergent since the integrands are nonnegative and equivalent.
|
Dimension of nullspace I can understand the dimension of column space of matrix is the no. of independent column vectors
But why is the dimension of nullspace = no. of free variables?
|
For an $m \times n$ matrix, $A$, the Rank-Nullity theorem says that:
$$ \text{column rank}(A) + \text{nullity}(A) = n.$$
where $\text{nullity}(A)$ is the dimension of the null space of $A$.
When you find the reduced row echelon form of a matrix, the max number of independent columns (i.e. the column rank) is the number of pivot columns (columns containing a leading one for some row). Notice now that free variables correspond to the columns without pivots. So the number of free variables is $n - \text{column rank}(A)$.
So
\begin{align*}
\text{nullity}(A) &= n - \text{column rank}(A) \\
&= \text{number of free variables}.
\end{align*}
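A small worked example (a sympy sketch with an arbitrarily chosen $3\times4$ matrix) showing the pivot count, the number of free variables, and the null space dimension lining up:

```python
import sympy as sp

A = sp.Matrix([[1, 2, 3, 4],
               [2, 4, 6, 8],
               [1, 1, 1, 1]])
rref, pivots = A.rref()
print(pivots)                     # pivot columns (0, 1) -> column rank 2
print(A.shape[1] - len(pivots))   # number of free variables: 4 - 2 = 2
print(len(A.nullspace()))         # dimension of the null space: 2
```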
|
An abstract algebraic proof that if for some positive integer $n$, $2^n - 1$ is prime, then so is $n$. I was wondering if there exists a proof from abstract algebra of this theorem. It seems like there should be. In fact, I've tried to come up with one and it always seems I'm coming close, but I can't do it.
Could you provide one?
| If $n$ is not prime, then $2^n - 1$ is not prime.
Suppose that $n = ab$ for integers $a,b\geq2$. Then note that we can factor
$$
x^n - 1 = (x^a)^b - 1 = (x^a - 1)([\text{stuff}])
$$
which means that
$$
2^n - 1 = (2^a)^b - 1 = (2^a - 1)([\text{stuff}])
$$
where $[\text{stuff}] = (2^a)^{b-1} + (2^a)^{b-2} + \cdots + 1$ is, notably, an integer bigger than $1$; since $2^a - 1 > 1$ as well (because $a\geq 2$), it follows that $2^n-1$ is not prime.
|
Closed linear operators have closed kernels Let $X,Y$ be Banach spaces and $K\colon X\to Y$ be a closed linear operator. I would like to show that its kernel is closed subspace of $X$. Any ideas?
| Let us recall the definition of a closed operator. Let $D(K)\subset X$ denote the domain of $K$. We say that $K$ is closed if for every sequence $(x_n)_{n=1}^\infty$ in $D(K)$ that converges to some $x\in X$ such that $Kx_n\to y$ for some $y\in Y$ we have $x\in D(K)$ and $Kx=y$.
Suppose that $(x_n)_{n=1}^\infty$ is a convergent sequence in the kernel of $K$ and let $x\in X$ be its limit. Then $Kx_n=0$ for all $n$, so the sequence $(Kx_n)_{n=1}^\infty$ is convergent with limit $y=0$. By the hypothesis, $Kx=0$, so $x$ is in the kernel of $K$.
|
Showing that $\frac{f(x)-f(x-h)}{h} \le f'(x) \le \frac{f(x+h)-f(x)}{h}$ given that f' is increasing We are considering that $h > 0$. I just don't really know how to approach this question; it seems that by taking the limit of the inequality as h goes to 0, each part of the inequality is going to be equal to $f'$, but I don't really understand why the fact that $f'$ is increasing is relevant to finding a solution to this question (and how to answer the question based on that constraint).
| The assumption is that $f'$ is an increasing function on $[x-h,x+h]$.
Now by the mean value theorem, there exist numbers $\xi_1\in (x-h,x)$ and $\xi_2 \in (x,x+h)$ such that
$$f'(\xi_1)=\frac{f(x)-f(x-h)}{h}$$
$$f'(\xi_2)=\frac{f(x+h)-f(x)}{h}$$
Inasmuch as $f'$ is increasing and $\xi_2>x>\xi_1$, we have
$$\frac{f(x)-f(x-h)}{h}=f'(\xi_1)\le f'(x)\le f'(\xi_2)= \frac{f(x+h)-f(x)}{h} \tag 1$$
as was to be shown!
|
Mathematical Expectation. E[E[x]]=E[x] Is it true that $ E[E[x]]=E[x]$? I can't find this property. If it isn't true, then why is $E[(X −E[X])^2]=E[X^2]−E[X]^2$?
| Yes, $E[E[X]] = E[X]$. This is because $E[X]$ is just a number, it's not random in any way. So when we ask for $E[E[X]]$, i.e., our best guess for the number $E[X]$, well since $E[X]$ is just a constant number which is not random, we know its value, so our best guess for it should be it, i.e., $E[E[X]] = E[X]$.
To calculate $E[ (X - E[X])^{2}]$, we first expand the square and get:
$$(X - E[X])^{2} = X^{2} - 2XE[X] + E[X]^{2}.$$ Now, taking the expectation of both sides gives:
\begin{split} E[(X - E[X])^{2}] &= E[X^{2} - 2XE[X] + E[X]^{2}] \\ &= E[X^{2}] - \underbrace{E[2XE[X]]}_{2E[X]*E[X]} + \underbrace{E[E[X]^2]}_{E[X]^2} \end{split}
where we used $E[2XE[X]] = 2E[X]*E[X]$ since $2E[X]$ is just a constant number and we know $E[cX] = cE[X]$ for any constant number $c$.
So, the above last line equals $E[X^{2}] - 2E[X]^{2} + E[X]^{2}$ which simplifies to $E[X^{2}] - E[X]^{2}$.
|
proof of matrix singularity If anyone can help me with the next question I would appreciate it a lot.
Let $A$ and $B$ be $n*n$ matrices and let $C=A-B$. Show that if $Ax_0=Bx_0$ and $x_0$ is not zero, then $C$ must be singular.
The first thing I don't get is the notation, what do $Ax_0$ and $Bx_0$ mean?
Thanks in advance :)
| If $x_0 \neq 0$ and
$$Ax_0 = B x_0,$$
then we have
$$(A-B) x_0=0,$$
that is, we have $x_0 \neq 0$ such that $Cx_0=0$, which means $C$ is singular.
|
Functional in Hilbert space Let $H$ be a Hilbert space and $0\neq x\in H$. I want to prove that there is an unique $f\in H^*$, such that $\|f\|=1$ and $f(x)=\|x\|$.
Any ideas on how to approach this problem
| [I]. A real or complex Hilbert space $H$ is its own dual. If $f\in H^*$ then there exists $y_f\in H$ such that the inner product <$x,y_f$> is equal to $f(x)$ for all $x\in H.$ And $y_f$ is unique, because if $f(x)=<x, y'_f>$ for all $x,$ then $$\|y_f-y'_f\|^2=<y_f-y'_f,y_f>-<y_f-y'_f,y'_f>=f(y_f-y'_f)-f(y_f-y'_f)=0.$$
Also $\|f\|=\|y_f\|$ because, in the non-trivial case $f\ne 0,$ we must have $y_f\ne 0,$ so $$(i).\quad f(y_f)/\|y_f\|=\|y_f\| \implies \|f\|\geq \|y_f\|.$$
$$(ii).\quad \forall x\in H\;(|f(x)|=|<x,y_f>|\leq \|x\|\cdot \|y_f\|) \implies \|f\|\leq \|y_f\|.$$
[II]. For $0\ne x\in H,$ let $f(y)=$<$y,x/\|x\|$> for all $y\in H.$ Then $f(x)=\|x\|$ and $\|f\|=1.$
Suppose $g\in H^*$ with $g(x)=\|x\|$ and $\|g\|=1.$ Let $g(z)=$<$z,y_g$> for all $z\in H.$ Then $\|y_g\|=\|g\|=1.$ We have $$\|x\|=g(x)=<x,y_g>=|<x,y_g>|\leq \|x\|\cdot \|y_g\|=\|x\|.$$
But the Cauchy-Schwarz Inequality $|$<$u,v$>$|\leq \|u\|\cdot \|v\|$ is a strict inequality unless $u,v$ are linearly dependent. So, since $|$<$x,y_g$>$|=\|x\|=\|x\| \cdot \|y_g\|$ and $x\ne 0\ne y_g,$ there exists scalar r such that $y_g=rx.$ This implies $$0\ne \|x\|=g(x)=<x, rx>$$ so $r=1/\|x\|.$ Hence $y_g=x/\|x\|$ and $g=f.$
[III]. Appendix: To show that $H$ is its own dual: Let $E$ be an orthonormal Hilbert-space basis for $H$. Let $0\ne f\in H^*.$ Then $f$ is uniquely determined by $\{(e,f(e)):e\in E\}.$ Let $E_f=\{e\in E: f(e)\ne 0\}.$
Let $F_f$ be the set of finite non-empty subsets of $E_f.$ For $S\in F_f$ let $$x_S=\sum_{e\in S}e\;\overline {f(e)}$$ (where $\overline {z}$ denotes the complex conjugate of $z$). Then $f(x_S)=\sum_{e\in S}|f(e)|^2=\|x_S\|^2 $. Therefore $$\infty >\|f\|^2\geq \sup_{S\in F_f}\sum_{e\in S}|f(e)|^2=\sum_{e\in E_f}|f(e)|^2.$$ Therefore $y=\sum_{e\in E_f}e\; \overline {f(e)}\in H.$ The functional $g(x)=$<$x,y$> agrees with $f$ for every $x\in E,$ so $g=f.$
|
Methods to compute $\sum_{k=1}^nk^p$ without Faulhaber's formula Almost every question I've seen concerning "what is $\sum_{k=1}^nk^p$" is answered with "Faulhaber's formula", and that is just about the only answer. In an attempt to gather more interesting answers, I ask that this question concern the problem of "Methods to compute $\sum_{k=1}^nk^p$ without Faulhaber's formula for fixed $p\in\mathbb N$". I've even checked this post of common questions without finding what I want.
Rule #1:
Any method to compute the sum in question for arbitrary $p$ is good, either recursively or in some manner that is not in itself a closed form solution. Even algorithms will suffice.
Rule #2:
I don't want answers confined to "only some values of $p$". (A good challenge I have on the side is a generalized geometric proof, as that I have not yet seen)
Exception: If your answer does not generalize to arbitrary $p$, but it still generalizes to an infinite amount of special $p$'s, that is acceptable.
Preferably, the method is to be easily applied, unique, and interesting.
To start us off, I have given my answer below and I hope you all enjoy.
| Very elementary method: knowing that $S(n) = \sum_{k=1}^n k^p = a_{p+1}n^{p+1} + \cdots + a_1 n + a_0$, you can calculate the coefficients using limits:
$$a_{p+1} = \lim_{n\to\infty}\frac{S(n)}{n^{p+1}} =
\lim_{n\to\infty}\frac{1^p+\cdots+n^p}{n^{p+1}} =
\lim_{n\to\infty}\frac{(n+1)^p}{(n+1)^{p+1}-n^{p+1}} = \cdots =
\frac1{p+1}.$$
(Cesàro-Stolz used in the third =)
You can continue with
$$a_p = \lim_{n\to\infty}\frac{S(n)-a_{p+1}n^{p+1}}{n^p} = \cdots$$
$$\cdots$$
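Here is a sympy sketch of the procedure for $p=3$ (my choice); the closed form returned by `summation` merely plays the role of the black-box $S(n)$, and each coefficient is then peeled off by a limit exactly as above:

```python
import sympy as sp

n, k = sp.symbols('n k', positive=True, integer=True)
p = 3
S = sp.summation(k**p, (k, 1, n))          # plays the role of the black-box S(n)

coeffs, rest = [], S
for j in range(p + 1, -1, -1):
    a = sp.limit(rest / n**j, n, sp.oo)    # leading coefficient of what is left
    coeffs.append(sp.simplify(a))
    rest = sp.expand(rest - a * n**j)
print(coeffs)                              # [1/4, 1/2, 1/4, 0, 0] for p = 3
```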
|
prove that$x-\dfrac{\langle a,x\rangle}{\langle a,a\rangle}a$ is orthogonal to $a$
Prove that $x-\dfrac{\langle a,x\rangle}{\langle a,a\rangle}a$ is orthogonal to $a$.
I know this has something to do with the QR algorithm, but I am unsure of where to start. I started with the QR decomposition and I am unsure of where to head next.
| Recall that two vectors $v,w$ are orthogonal if $\langle v,w\rangle=0$. So, to prove that $v:=x-\frac{\langle x,a\rangle}{\langle a,a\rangle} a$ is orthogonal to $a$, we compute
$$\langle v,a\rangle = \langle x-\frac{\langle x,a\rangle}{\langle a,a\rangle} a,a \rangle.$$
But, recall that the inner product is linear, so we have
$$\langle x-\frac{\langle x,a\rangle}{\langle a,a\rangle} a,a \rangle = \langle x, a \rangle - \frac{\langle x,a\rangle}{\langle a,a\rangle} \langle a,a\rangle = \langle x,a \rangle -\langle x,a\rangle = 0.$$
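For a quick numerical confirmation with random real vectors (a Python sketch):

```python
import numpy as np

rng = np.random.default_rng(1)
a = rng.standard_normal(4)
x = rng.standard_normal(4)
v = x - (a @ x) / (a @ a) * a   # subtract the projection of x onto a
print(v @ a)                     # ≈ 0 up to floating-point error
```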
|
Integrate $\int_0^{\pi/2}\frac{\cos^2x}{a\cos^2x + b\sin^2x}\,dx$ I don't know how to deal with this integral
$$I=\displaystyle\int_0^{\pi/2}\frac{\cos^2x}{a\cos^2x + b\sin^2x}\,dx$$
I reached the step
$$I
=\displaystyle\ \int_0^{\pi/2}\frac{1}{a + b\tan^2x}dx$$
Now what should I do? Please help.
| Continue with
\begin{align}
\int_0^{\pi/2}\frac{1}{a + b\tan^2x}dx
= &\ \frac1{a-b}\int_0^{\pi/2}\bigg(1- \frac{b\sec^2x}{a + b\tan^2x}\bigg)dx\\
=& \ \frac1{a-b}\bigg( \frac\pi2- \frac\pi2 \sqrt{\frac ba}\bigg)
=\frac\pi{2(a+\sqrt{ab})}
\end{align}
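A quick numerical check of the closed form (a Python sketch using scipy quadrature; $a=3$, $b=5$ are arbitrary sample values):

```python
from math import cos, sin, pi, sqrt
from scipy.integrate import quad

a, b = 3.0, 5.0
numeric, _ = quad(lambda x: cos(x)**2 / (a * cos(x)**2 + b * sin(x)**2), 0, pi / 2)
print(numeric, pi / (2 * (a + sqrt(a * b))))   # both ≈ 0.2286
```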
|
Conditional Probability of a single event The question: Find the probability that a randomly selected person does not catch the 'flu' in terms of V and F.
*
*V is the event that a person has been vaccinated
*F is the event that a person catches the flu.
Where 80% of the population has been vaccinated against the flu,
*
*but 5% of the vaccinated population catches the flu anyway.
*i.e. a vaccinated person has a 95% chance of not catching the flu.
The 20% of the population that has not been vaccinated
*
*have a 20% chance to catch the flu.
*have an 80% chance to not catch the flu
My attempt:
So according to the information provided by the question, I built a probability tree diagram and from that diagram I tried to express the answer as (F'|V) ∪ (F'|V'). Where (F'|V) = 0.76 and (F'|V') = 0.16. But I do not know if the way I have done it, is correct or not and if it is correct, I don't know how to further proceed.
| Actually you have $P(F' \mid V)=0.95$ and $P(V)=0.80$ so $P(F' \cap V)=0.76$
Similarly $P(F' \mid V')=0.80$ and $P(V')=0.20$ so $P(F' \cap V')=0.16$
Since these are exclusive events, $P(F')=P((F' \cap V) \cup (F' \cap V')) = 0.76+0.16=0.92$
|
Hyperplane and dimension I'm trying to prove the following:
Let $V$ be a vector space.
$\dim V = \infty \Leftrightarrow \exists$ a linear bijection from $V$ to a hyperplane of $V$.
I have no idea how to start this.
| By definition $\dim V=\infty$ means that there exists an infinite basis $\mathcal{B}=\{e_i\}_{i\in I}$ of $V$. As $I$ is an infinite set, if we choose a finite subset $J\subset I$ we still have $|I|=|I\setminus J|$ and thus there exists a bijection $\phi:I\to I\setminus J$. Now define $f:V\to V$ by the formula
$$
f\left(\sum_{k=1}^n\lambda_{i_k}e_{i_k}\right):=\sum_{k=1}^n\lambda_{i_k}e_{\phi(i_k)}.
$$
As every element $v\in V$ has a unique representation in terms of $\mathcal{B}$, this is a well defined function. Furthermore it is clearly linear, injective (since $\phi$ is a bijection), and with range
$$\text{Span}\{e_i\}_{i\in I\setminus J}\subsetneq V.$$
We have thus constructed a linear bijection from $V$ to a proper vector subspace of $V$; in particular, choosing $J$ to consist of a single index makes the range a subspace of codimension one, i.e. a hyperplane of $V$.
|
Logical Subtlety in Division Algorithm
Let $\sigma$ be the $m$-cycle $(a_1~a_2...a_m)$. Deduce that $|\sigma| = m$.
Here is my solution, with questions about the correctness of this solution scattered throughout:
I already proved that $\sigma^i(a_k) = a_{k+i}$, where $k \in \{1,...,m\}$ and $k+i$ is replaced by the least positive residue when $k+i$ is divided by $m$. Hence, by this composition formula $\sigma^m (a_k) = a_{k+m}$. By the division algorithm, there exist unique integers $q$ and $r$ such that $k+m = qm + r$, where $r \in \{1,...,m-1\}$ is the least positive residue we are seeking. Now clearly $q = 1$ and $r=k$ are choices that satisfy the equation. My question is, does this imply that these are in fact the $q$ are $r$ guaranteed by the division algorithm? It seems that the uniqueness part, in particular, implies that $q = 1$ and $r=k$.
If this is the case, then I can say $\sigma^m(a_k) = a_k$ for all $k \in \{1,...,m\}$, implying $|\sigma| \le m$. Now, suppose that $|\sigma| = d \le m$. Then $\sigma^d(a_k) = a_k$ for all $k$. This means $k$ is the least positive residue, or, in other words, $k+d = q'm + r'$, where $r' = k$. Hence $d = q'm$. This is a contradiction unless $q' = 1$, because $q' < 0$ would imply $|\sigma| < 0$ and $q' > 0$ would imply $d > m$.
First of all, does any of this sound correct? Second, is there a better/cleaner way of thinking about this?
| I think you may make your life simpler if you
*
*write the cycle as $\sigma = (a_{0}, a_{1}, \dots, a_{m-1})$,
*regard the indices as integers modulo $m$,
*note that $\sigma$ acts as $a_{i} \mapsto a_{i+1}$,
*so that $\sigma^{k}$ acts as $a_{i} \mapsto a_{i+k}$.
|
Can a right triangle have odd-length legs and even-length hypotenuse? Is it possible to have an even integer hypotenuse and odd integer legs (perpendicular and base) in a right triangle? If yes, please give an example. If no then please prove that.
| Hint:
Suppose you have $A^2+B^2=C^2$ where $A=2a+1, B=2b+1, C=2c$
Substitute, expand and then take out terms with factors of $4$
|
Calculating speed of minute hand on a clock
The minute hand on a clock is $12 \text{ cm}$ long. Calculate the speed of its free end.
Is the following correct ? I have already worked out its angular speed from the previous question.
$$v = \omega r \\
\omega = 1.75\times 10^{-3} \text{ rad s$^{-1}$} \\
r = 12\text{ cm} = 1200\text{ m}$$
$$\begin{align}\therefore v &= (1.75\times 10^{-3} \text{ rad s$^{-1}$})\times (1200\text{ m}) \\
\therefore v &= 2.1\text{ ms$^{-1}$}\end{align}$$
| The tip travels $2\pi\cdot12$ cm in one hour, hence $2\pi\cdot12/3600$ cm/s. (About $0.21$ mm/s.) Note that $r = 12\text{ cm} = 0.12\text{ m}$, not $1200\text{ m}$, so your $v=\omega r$ gives $1.75\times10^{-3}\times 0.12\approx 2.1\times10^{-4}\text{ m s}^{-1}$, which agrees.
|
Prove $2^{1/3} + 2^{2/3}$ is irrational What's the nice 'trick' to showing that the following expression is irrational?
$2^{1/3} + 2^{2/3}$
| Here's a slightly more 'sledgehammer' approach: since $x^3-2$ is irreducible over the rationals, the minimal polynomial of its root $z=2^{1/3}$ must be of degree three. But if $2^{1/3}+2^{2/3}$ were rational, say $\frac ab$, then that would imply that $z+z^2=\frac ab$, or $bz^2+bz-a=0$, contradicting the minimal-degree statement above.
OTOH, we get a nice prize for all this 'heavy machinery': the exact same argument shows in one fell swoop that $a2^{1/3}+b2^{2/3}$ is irrational for all (nonzero) rational $a,b$; in other words, $2^{1/3}$ and $2^{2/3}$ are linearly independent over $\mathbb{Q}$.
|
Can I bound the correlation of two random variables using the mutual information? As correlation
$\rho_{X,Y} := \frac{Cov(X,Y)}{\sigma_X \sigma_Y}$
sort of measures the linear dependence of two random variables, and mutual information
$I(X; Y) := H(X) - H(X|Y)$
measures the general dependence of two random variables, I feel like it should be possible to get an upper bound on the correlation in terms of the mutual information.
I expect that as the mutual information increases, correlation tends to 1, and as mutual information tends to 0, correlation also does.
Can anyone help me formalise this?
| For the mutual information, it can be useful to consider the conditional entropy instead:
$$H(X|Y) = H(X,Y) - H(Y)$$
However, the claim in the question is incorrect, because correlation indicates only linear dependence, while mutual information relates to dependence in general.
Going back one step to covariance, we can find the following example:
*
*$Y = X^2, X$ uniform distributed in $[-1,1]$
*$\Rightarrow \operatorname{Cov}(X,Y) = 0$, while $\operatorname{Var}(X)= E(X^2)\neq 0$ and $\operatorname{Var}(Y)= E(X^4)-E(X^2)^2 \neq 0$, thus we get $\rho_{X,Y} = 0$
*However, as stated in conditional entropy: $H(Y|X) = 0$, because $Y$ is completely determined by the value of $X$.
*For the mutual information, we get: $I(X;Y) = H(Y) - H(Y|X) = H(Y)$. And this does not tend to $0$, even if the correlation is $0$ already.
|
Proof of Doob's inequality by Durrett This is Doob's inequality given in Durrett's Probability: Theory and Examples. However, in the proof, I don't understand why $X_N \ge \lambda$ on $A$, since if $N=n$, then all we can say is $X_N=\max_{0\le m\le n}X_m^+\ge \lambda$, and so without any assumption on nonnegativity of $X_n$, I don't see how the proof makes sense. I would greatly appreciate any help.
Theorem 5.4.1. If $X_n$ is a submartingale and $N$ is a stopping time with $P(N\le k)=1$ then $$EX_0\le EX_N \le EX_k.$$
| On the set $A$ we have
$ 0<\lambda\leq \bar X_n=\max_{0\leq m\leq n} X_m^+,$
that is, at least one of the $X_m^+$'s is at least $\lambda$ for some $0\leq m\leq n$.
For such an $m$ we have $X_m=X_m^+\geq\lambda$.
|
Why must the probability function add up to 1? Let $Y$ be a discrete random variable with the probability function $p(y)$. Then the expected value of $Y$ is defined to be $\Bbb{E}(Y) = \sum_y y\,p(y)$.
(a) Briefly explain why $\sum_y p(y) = 1$.
The way I can think about this problem is the following: if I have a sample space with possible outcomes $y = 0,1,2,\ldots, n$, then the corresponding probability of each outcome must add up to 1 because $1 = p(y_0) + p(y_1)+ p(y_2) + \cdots + p(y_n)$.
Is this correct? or is there a better way to explain it.
| If we take the axiomatic definition of probability, then we have
$$P(\Omega) = 1$$
where $\Omega$ denotes the whole space of events i.e. any possible outcome is in $\Omega$.
When you sum through all the values of $y $ you are summing up through all the possible outcomes. That means you are summing through all events in $\Omega $ and as we know (because of our definition of probability), $P(\Omega) = 1$.
|
A 3x3 matrix with 1 real eigenvalue. Does there exist a non-diagonalizable 3x3 matrix that has precisely 1 real eigenvalue and a multiplicity of 1? When it comes to multiplicity I'm trying to find a matrix that would give me something like $(\lambda-1)^3$ as the eigenvalue. This factors down to $\lambda^3 - 3\lambda^2+3\lambda-1$ so you could say the multiplicity is 3 but you can also say that it only has 1 real root. So could I use this to find a non-diagonalizable 3x3 matrix with only 1 eigenvalue. So would such a matrix exist?
| For example
$$\begin{pmatrix}x&1&0\\0&x&1\\0&0&x\end{pmatrix}$$
has one unique eigenvalue $\;x\;$ of algebraic multiplicity $\;3\;$ and geometric multiplicity $\;1\;$ (if this is what you meant) , for any $\;x\in\Bbb R\;$ .
|
Can we describe the conic hull of a set with this formula? Can we describe the conic hull of a set with this formula:
\begin{equation*}
\text{Conic} \space C = \{\textbf x=\textbf x_0+ \theta \textbf{v} | \textbf x_0 = \textbf 0_n , \textbf{v} \in \text{conv} \space C , \theta\in \ \mathbb R_{+}\}
\end{equation*}
where $\text{conv} \space C$ is the convex hull of the set $C$.
Definition of conic hull:
$\text{Conic} \space C = \{\Sigma_{i=1}^{k}\theta_i x_i | x_i \in C, \theta_i\in\mathbb R_+\}$
it is the set of conic combinations of some points of $C$.
My proof:
$K = \{\theta \textbf v| \textbf v \in \text{conv} \space C, \theta \in \mathbb R_+ \} = \text{conic C}$
For proving this: we should show $K \subseteq \text{conic C}$ and $\text{conic C} \subseteq K$.
a. $K \subseteq \text{conic C}$:
suppose $\textbf{x} \in K$, then:
$\textbf{x} = \theta \textbf{v}$ where $\textbf{v} \in \text{conv} \space C$, $\theta \in \mathbb R_+$ $\Rightarrow$ $\textbf{v} = \sum_{i} {\theta_i} x_i$, $x_i \in C$,
$\Rightarrow$ $\frac{x}{\theta} = \sum_{i} {\theta_i}x_i$
$\Rightarrow$ $x = \sum_{i} {\alpha_i x_i}$ where ${\alpha_i} = \theta_i \theta$ $\in \mathbb R_+$ $\Rightarrow$ $\textbf{x} \in \text{conic C}$.
b. $\text{conic C} \subseteq K$:
suppose $\textbf{x} \in \text{conic C}$, then: $\textbf{x} = \sum_i{\theta}_i x_i$, where $x_i \in C$, $\theta_i \in \mathbb R_+$ $\Rightarrow$ $\frac{{\textbf{x}}}{\theta} = \sum_i{\frac{\theta_{i}}{\theta}}{x_i} $ where $\theta = \sum_i{\theta_i}$
$\Rightarrow$ $\frac{{\textbf{x}}}{\theta} = \sum_i{\lambda_{i}}{x_i} $ where $\sum_i {\lambda_{i}} = 1$ $\Rightarrow$ $\textbf{x} = \theta \textbf{v}$ where $\textbf{v} = \sum_i{\lambda_{i}}{x_i} \in \text{conv} \space C$.
Hence: $K = \text{conic C}$.
| Let $C \subset \mathbb{R}^n$ and define:
$$\text{Co}(C)=\{\theta\cdot v\in \mathbb{R}^n\,|\,v \in \text{conv}(C), \theta \in \mathbb{R}_+\}$$
$\text{$(1)$: $\text{Co}(C)\subset \text{Conic}(C)$}$
Let $x=\theta\cdot v\in \text{Co}(C)$, where $v \in \text{conv}(C)$ and $\theta \in \mathbb{R}_+$. Since $v \in \text{conv}(C)$, there must be some $n \in \mathbb{N}$ and $\alpha_i \geq0,v_i \in C$ with
$$\sum_{i=1}^n\alpha_i\cdot v_i=v\\
\sum_{i=1}^n\alpha_i=1$$
Then $x=\sum_{i=1}^n\underbrace{(\theta\cdot \alpha_i)}_{\in \mathbb{R}_+}\cdot v_i$ and hence lies in $\text{Conic}(C)$.
$\text{$(2)$: $\text{Conic}(C)\subset \text{Co}(C)$}$
Indeed, let $x=\sum_{i=1}^k\theta_i\cdot x_i \in \text{Conic}(C)$, where $x_i \in C$ and $\theta_i\in\mathbb{R}_+$. Let $\theta=\sum_{i=1}^k\theta_i$. If $\theta=0$, then all $\theta_i$ are zero and $x=0$, so $x$ lies in $\text{Co}(C)$.
Now, suppose $\theta>0$ and let $\alpha_i=\frac{\theta_i}{\theta}$. Then $\alpha_i\geq0$ and $\sum_{i=1}^k \alpha_i=1$. Then $v=\sum_{i=1}^k\alpha_i\cdot x_i$ lies in $\text{conv}(C)$. Finally, it suffices to check that $x=\theta\cdot v$ to conclude that $x \in \text{Co}(C)$.
Together, inclusions $(1)$ and $(2)$ imply that $\text{Co}(C)=\text{Conic}(C)$, so the answer is yes.
|
Solve system of equation using matrices (4 variables) The Question:
Solve using matrices.
$$2w-2x-2y+2z=10\\w+x+y+z=-5\\3w+x-y+4z=-2\\w+3x-2y+2z=-6$$
My work:
$$
\begin{bmatrix}
2&-2&-2&2&10\\
1&1&1&1&-5\\
3&1&-1&4&-2\\
1&3&-2&2&-6\\
\end{bmatrix}
\rightarrow
\begin{bmatrix}
1&-1&-1&1&5\\
0&2&2&0&-10\\
0&4&-1&1&-11\\
0&-2&3&-1&1\\
\end{bmatrix}
\rightarrow\\
\begin{bmatrix}
1&-1&-1&1&5\\
0&2&2&0&-10\\
0&0&-5&0&9\\
0&0&5&-1&-9\\
\end{bmatrix}
\rightarrow
\begin{bmatrix}
1&-1&-1&1&5\\
0&2&2&0&-10\\
0&0&-5&0&9\\
0&0&0&-1&0\\
\end{bmatrix}\\[6ex]
z=0, \;y=\frac{-9}5,\; x=\frac{34}5,\; w=10
$$
The correct answer is $z=-1, \;y=-2,\; x=-3,\; w=1$. What did I do wrong?
| In the second matrix of your work, entry $a_{3,3}$ should be $2$, not $-1$, and entry $a_{3,5}$ should be $-17$, not $-11$. Also, the whole fourth row seems wrong. It appears that the four row operations performed in that step should have been:
\begin{align}
&1.\text{ Replace R1 with $\frac12\times$ R1.} \\
&2.\text{ Replace R2 with R2 - R1.} \\
&3.\text{ Replace R3 with R3 - 3$\times$R1.} \\
&4.\text{ Replace R4 with R4 - R1.}
\end{align}
It appears that you did the first two of those correctly, but not the third and fourth ones. (In fact, the row you wrote third is exactly what the correct fourth row should be.)
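For reference, the system can be checked numerically (a numpy sketch), confirming the book's answer:

```python
import numpy as np

A = np.array([[2, -2, -2, 2],
              [1,  1,  1, 1],
              [3,  1, -1, 4],
              [1,  3, -2, 2]], dtype=float)
b = np.array([10, -5, -2, -6], dtype=float)
print(np.linalg.solve(A, b))   # [ 1. -3. -2. -1.], i.e. w=1, x=-3, y=-2, z=-1
```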
|
Computing covariance for independently distributed uniform variables $X$, $Y$, and $Z$ Let $X$, $Y$, and $Z$ ~ Uniform(0,1).
How does one set up the computation for $\textrm{Cov}(X^2Y,Y^2Z)$?
| Using the definition of the covariance and the independence,
\begin{align*}
\operatorname{Cov}[X^2Y,Y^2Z]
&=\operatorname E[X^2Y^3Z]-\operatorname E[X^2Y]\operatorname E[Y^2Z]\\
&=\operatorname EX^2\operatorname EY^3\operatorname EZ-\operatorname EX^2\operatorname EY\operatorname EY^2\operatorname EZ.
\end{align*}
Hence, we need to calculate the first three moments of the uniform distribution.
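Using $E[U^k]=\frac{1}{k+1}$ for $U\sim\text{Uniform}(0,1)$, the covariance works out to $\frac13\cdot\frac14\cdot\frac12-\frac13\cdot\frac12\cdot\frac13\cdot\frac12=\frac1{72}$. A Monte Carlo sketch in Python (sample size arbitrary) agrees:

```python
import numpy as np

rng = np.random.default_rng(0)
N = 1_000_000
X, Y, Z = rng.random(N), rng.random(N), rng.random(N)

mc = np.cov(X**2 * Y, Y**2 * Z)[0, 1]
exact = (1/3) * (1/4) * (1/2) - (1/3) * (1/2) * (1/3) * (1/2)   # = 1/72 ≈ 0.01389
print(mc, exact)
```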
|
Calculating the sum of $\sum_{k=1}^{\infty}{\frac{(-1)^k}{k}}$ I am trying to find the sum of
$$\sum_{k=1}^{\infty}{\frac{(-1)^k}{k}}$$
I've proven that this converges using the Leibniz test, since
$a_n = \frac1n > 0$ is decreasing and $\lim_{n\to\infty}{a_n} = 0$.
I am not sure how to go about summing this series up though. Every example I've seen up to now does some manipulation so that a geometric series pops out. I've been at it for a bit and I don't see how I could convert this to a geometric series to sum it up.
| Hint:
The geometric series is very very close.
Replace $-1$ by $x$ and derive term-wise:
$$\left(\sum_{k=1}^\infty\frac{x^k}k\right)'=\sum_{k=1}^\infty\left(\frac{x^k}k\right)'=\sum_{k=1}^\infty x^{k-1}.$$
Now you can use the geometric series formula and integrate from $0$ to $-1$ to get the solution
$$-\left.\ln(1-x)\right|_0^{-1} = -\ln 2.$$
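A numerical check of the partial sums against $-\ln 2$ (a Python sketch):

```python
import math

partial = sum((-1) ** k / k for k in range(1, 200_001))
print(partial, -math.log(2))   # ≈ -0.69314 and -0.693147...
```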
|
For a compact set $K\subset \mathbb M_n(\mathbb R)$, the eigenvalues of matrices in $K$ form a bounded set Let $K\subset \mathbb M_n(\mathbb R)$ be a compact subset. Then I have to show that :
All the eigenvalues of the elements of $K$ form a bounded set.
My work: consider the determinant map $A \mapsto \det A$ on $K$, which is continuous. The image set is compact in $\mathbb C$, hence closed and bounded. If $\lambda_i\ (i=1,\ldots,n)$ are the eigenvalues of $A$, then $\det A=\prod \lambda_i$ is bounded, which in turn gives $\lambda_i$ bounded.
Is my approach correct? Is there any better way to do it?
| An easy approach for me:
Consider the map $f(A,x)=\dfrac{x^*Ax}{x^*x}$ for $A\in K$ and unit vectors $x\in\Bbb C^n$,
which is continuous on the compact set $K\times\{x\in\Bbb C^n:\|x\|=1\}$ and hence bounded there.
$\text{Image}\, f=\{f(A,x):A\in K,\ \|x\|=1\}$ is bounded.
Now if $\lambda$ is an eigenvalue of some $A\in K$ corresponding to an eigenvector $v$, then $Av=\lambda v\implies f(A,v/\|v\|)=\lambda$.
Hence $\lambda \in \text{Image }f$, which is bounded.
|
Calculate big O notation I'm a bit unsure how to determine the big O notation of the following terms:
$\sqrt n\ $ + $\log n\ $
Is the big O notation O($\sqrt n\ $) or O ($\sqrt n\ $ + $\log n\ $)?
It's clear that $\sqrt n\ $ grows faster than $\log n\ $ but I'm uncertain, if it's enough to express that with O($\sqrt n\ $).
Thanks in advance!
| It's enough to say $\sqrt{n} + \log n = O(\sqrt{n})$. Notice that $\log n \le \sqrt{n}$ for $n\ge1$, so $\sqrt{n} + \log n \le 2 \sqrt{n}$. More generally, if you have a sum $a_n + b_n$ and $\frac{a_n}{b_n} \to 0$ when $n \to \infty$ then $a_n + b_n = O(b_n)$. You can think of this as: the asymptotically largest term wins.
|
relationship between topology and convergences In my textbook, the topology of $C_c^\infty$ and $C^\infty$ are defined by: how $f_n\to f$ in these topologies. Similarly, the weak-* topology are also defined by the same manner.
I wonder why we can determine a topology only by convergent sequence since topology ought to be defined by open sets?
| What Owen said is correct. It is not sufficient to specify a topology by defining what it means for sequences to converge. For example, the discrete topology and the cocountable topology on $\mathbb{R}$ yield nonhomeomorphic spaces, but their notions of sequential convergence are the same: a sequence converges iff it is eventually constant.
In practice, when defining topologies in this way, the definition of convergence you want to make for sequences still makes sense if you replace the word "sequence" with the word "net", and so fixing this is quite easy. In particular, in your case, if you replace the word "sequence" with the word "net", there should be a unique topology on $C_c^{\infty}$ and $C^{\infty}$ whose notion of convergence agrees with your given condition.
Of course, you can't just make any old definition of convergence. In order for it to define a topology, your definition has to satisfy a list of axioms---see, for example, Theorem 3.4.8 on pg. 105 here (note that if you're reading this in the future, these numbers may have changed). This theorem gives the precise formulation of how one may define topologies by declaring what it means for nets to converge.
|
Showing that $f_n:[0,1]\to\mathbb{R}$ when $f_n=nx(1-x)^n$ converges pointwise but not uniformly Suppose $f_n:[0,1]\to\mathbb{R}$ such that $f_n=nx(1-x)^n$, then I need to show that $f_n$ converges pointwise but not uniformly. I can see that since $f$ is defined on $[0,1]$, the part $(1-x)^n$ is smaller than $1$ and therefore will converge, but what about $nx$? This will go to infinity...
Therefore, I can't even see that it converges pointwise, but let's suppose it converges to some function, how do I show that it doesn't converge uniformly? I should find an $\epsilon$ such that it's not valid for all $n>n_0(\epsilon)$ that $|f_n-something|<\epsilon$, right?
| Clearly $f_n(x)\to f(x)=0$ as $n\to\infty$ for every $x$ in $[0,1]$. Setting $f_n'(x)=0$ gives $x=\frac{1}{n+1}$. It is easy to see that $f''_n(\frac{1}{n+1})<0$ and hence $f_n(x)$ attains its maximum $f_n(\frac{1}{n+1})=(\frac{n}{n+1})^{n+1}$ at $x=\frac{1}{n+1}$.
Thus
$$ \max |f_n(x)-f(x)|=(\frac{n}{n+1})^{n+1}\to\frac1e\neq0$$
as $n\to\infty$. Thus $f_n(x)$ does not converge uniformly to $0$.
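A quick numerical illustration (a Python sketch; the grid sizes are arbitrary) that $\max_x|f_n(x)|$ settles near $1/e$ instead of going to $0$:

```python
import numpy as np

x = np.linspace(0.0, 1.0, 100_001)
for n in (10, 100, 1000):
    fn = n * x * (1 - x) ** n
    print(n, fn.max(), (n / (n + 1)) ** (n + 1))   # sup norm approaches 1/e ≈ 0.3679
```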
|
Chance of flipping 50 heads over a span of 100 flips given more than 100 flips So I found that the chance of flipping 50 heads out of a string of 100 flips is
$$0.5^{50} (1-0.5)^{50} \binom{100}{50},$$
My question is, how do the chances of having at least 1 string of 100 flips, with heads resulting 50 times, change if I am allowed to flip the coin 101 times? In other words, I could get 50 out of 100 in flips 1-100 OR 50 out of 100 in flips 2-101 or both.
What about if I were allowed to flip the coin 200 times, but needed to get at least one string of 100 flips resulting in 50 heads?
My thinking is that there are 101 different 100 flip sequences in a 200 flip sequence, and each of those 101 sequences should have $$0.5^{50} (1-0.5)^{50} \binom{100}{50},$$ probability of yielding heads exactly 50 times, which would multiply the probability by 101 times, but since the 101 different 100 flip sequences are overlapping, rather than being independent of each other, does it change the odds?
| This answer gives a lower bound on the probability, not a complete answer.
Let $n_k$ be the number of heads in flips $k,k+1,\dots,k+99$. Note that $n_{k+1}$ is one of $n_{k}-1,n_{k},$ or $n_{k}+1$. Also, note that $n_{1}$ and $n_{101}$ are independent.
Now, if $n_1< 50$ and $n_{101}> 50$, there is an $n_k=50$, by the above. Similarly for $n_1>50$ and $n_{101}< 50$. So to not have some $n_k=50$, you'd need $n_1>50$ and $n_{101}>50$ or $n_{1}<50$ and $n_{101}<50$. Since $$P(n_1<50)=P(n_1>50)=P(n_{101}<50)=P(n_{101}>50)=\frac{1}{2}\left(1-0.5^{100}\binom{100}{50}\right)$$ this means the probability is at least:
$$\begin{align}P(n_k=50; k=1,\dots,101)&\geq 1-P(n_1<50)P(n_{101}<50)-P(n_1>50)P(n_{101}>50)\\
&=1-\frac{1}{2}\left(1-0.5^{100}\binom{100}{50}\right)^2\end{align}$$
This means the probability is at least $\frac{1}{2}$.
It's probably considerably higher.
This trick for the lower bound works only for a multiple of $100$ flips. If there are $100j$ flips, then you'd get that the probability is at least:
$$1-\frac{1}{2^{j-1}}\left(1-0.5^{100}\binom{100}{50}\right)^{j}$$
|
Volume of A Solid I wonder if someone can clarify the confusion I have over the following problem:
Find the volume of the solid bounded by the parabolic cylinder $z = 1- y^2$, the planes $x = 0, z = 0$ and the plane $x+z = 1$. My struggle is to visualize the solid and find the setup of the triple integral for the volume.
What is your approach?
| Here is your region.
The yellow bit is what is left of the parabolic cylinder. The green bit is the plane $x+z = 1$
When $z = 0$, the intersection with $z=1-y^2$ gives $y^2 = 1$, i.e. $y=1$ or $y=-1$.
As for the integral
$\int_{-1}^1 \int_0^{1-y^2}\int_0^{1-z} \;dx \;dz \;dy$
alternatively you could say:
$\int_0^1 \int_{-\sqrt{1-z}}^{\sqrt{1-z}}\int_0^{1-z}\; dx \;dy \;dz$
then substitute $u = 1-z$
$\int_0^1 \int_{-\sqrt{u}}^{\sqrt{u}}\int_0^{u}\; dx \;dy \;du$
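Doing the inner $x$-integral first gives $V=\int_{-1}^{1}\int_{0}^{1-y^2}(1-z)\,dz\,dy = \frac45$; a quick numerical check (a scipy sketch) agrees:

```python
from scipy import integrate

# V = ∫_{-1}^{1} ∫_{0}^{1-y^2} (1 - z) dz dy, since the inner x-integral contributes (1 - z)
V, _ = integrate.dblquad(lambda z, y: 1 - z, -1, 1, lambda y: 0.0, lambda y: 1 - y**2)
print(V)   # 0.8
```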
|
Powers of $m$-cycles are also $m$-cycles
Let $\sigma$ be the $m$-cycle $(1~2~...~m)$. Show that $\sigma^i$ is also an $m-$cycle if and only if $i$ is relatively prime to $m$.
I having some difficulty with this problem. Through experimenting with some simple cycles, I have seen a cycle "split" into a product of disjoint cycles when its length is not relatively prime to $i$, but I have no way of rigorously describing this "splitting." I guess my question is, how do distinguish between cycles and "split" cycles; and how could this help with solving the problem?
| An $\;m\,-$ cycle $\;\sigma\;$ is an element of order $\;m\;$ in the group $\;S_m\;$ and as in any other group, we have that $\;\langle \sigma^k\rangle=\langle\sigma\rangle\iff (m,k)=1\;$ , so one direction is immediate...and the other one almost: if $\;(k,m)=d>1\;$ , then $\;\sigma^k\;$ has order less than $\;m\;$ so it can not be an $\;m\,-$ cycle.
|
Proving if f has a continuous extension to [a,b] then f is uniformly continuous on (a,b). I want to show that if a function $f:(a,b) \rightarrow \mathbb{R}$ has a continuous extension to [a,b], then f is uniformly continuous. Also, assume that $(X, d_x)$ and $(Y, d_y)$ are nonempty metric spaces and all subsets of $\mathbb{R}^k$ are given the Euclidean metric.
I know the proof is trivial, but I'm worried about my notation as I don't think I quite understand continuous extension correctly. I understand that for $E \subset F \subset X$, the function $g: F \rightarrow Y$ is an extension to f if $g(x)=f(x) \forall x \in E$.
So, we have $(a,b) \subset [a,b] \subset \mathbb{R}$ and $f: (a,b) \rightarrow \mathbb{R}$. So if $f$ has a continuous extension to $[a,b]$, then $f(x) = f(y)$ $\forall x \in (a,b), y \in [a,b]$. And so this then implies that $d(f(x), f(y)) = 0$, and so $\forall \epsilon > 0, \exists \delta > 0$ s.t. $d(x, y) < \delta$ implies $d(f(x), f(y)) < \epsilon$.
Help and hints would be much appreciated!
| Let's get intuitive here:
Saying "$f:(a,b)\to\mathbb{R}$ has a continuous extension to $[a,b]$" is to say "we can define $f(a),f(b)$ so that $f$ is continuous on $[a,b]$".
One example of this would be $f:(0,+\infty)\to\mathbb{R}$ with $f(x) = {\sin x\over x}$. Obviously you can't compute $f(0)$, but we can arbitrarily set $f(0)=1$ and it would work.
This works only with 'nice' functions that don't behave weirdly or explode to $\infty$ at the endpoints.
Now back to your example. If $f$ can be continuously extended to $[a,b]$ and is continuous on $(a,b)$ (you didn't say it in your post, but we'll assume we have this condition), then $f$ is uniformly continuous on $[a,b]$ (because $[a,b]$ is a closed, bounded interval, i.e. compact).
And if $f$ is uniformly continuous on $[a,b]$, it is indeed uniformly continuous on $(a,b)\subset[a,b]$.
|
About the following property of Poisson Process I have been stuck on the following proposition while reading a textbook on stochastic calculus.
The problem is about a Poisson Process $N_t$ with parameter $\lambda$. In definition the authors emphasize that $N_t$ is cadlag and $N_t - N_{t-} \in \{0,1\}$ up to indistinguishability (I can't see where it is used in the sequel). Now suppose there are two stopping times $S \leq T$. The authors maintain that for $\delta > 0$, the following equality can be proved by virtue of Fatou's lemma:
$$
\limsup_{\delta \rightarrow 0} \frac{\mathbb{E}[(N_{T+\delta}-N_T-1)^{+} \mid \mathscr{F}_S]}{\lambda\delta} = 0\text{, a.s.}
$$
I got lost here.. What I can see is merely the following inequalities:
$$
0 \leq \limsup_{\delta \rightarrow 0} \frac{\mathbb{E}[(N_{T+\delta}-N_T-1)^{+} \mid \mathscr{F}_S]}{\lambda\delta} \leq \frac{\mathbb{E}[\limsup(N_{T+\delta}-N_T-1)^{+} \mid \mathscr{F}_S]}{\lambda\delta}
$$
But how can we conclude that the numerator tends to $0$ more rapidly than the denominator $\lambda\delta$? Any hint will be greatly appreciated!
| $$
(N_{T+\delta} -N_T -1)^+ \quad \begin{cases} =0 & \text{if } N_{T+\delta} = N_T, \\ \\
=0 & \text{if } N_{T+\delta} = N_T+1, \\ \\
>0 & \text{only if } N_{T+\delta} - N_T \ge 2.
\end{cases}
$$
So it's actually about the improbability of two or more arrivals in a short time, by comparison to the length of time. Suppose, for example, that $\lambda=4\text{ per hour}.$ The expected number of arrivals in one second, divided by one second, is $(4 \text{ per hour}).$ But the probability of more than one arrival in a mere one second is so small that the expected number of arrivals in one second is about the same as $1\times{}$the probability that the number of arrivals is $1.$
|
If ${a_n}>0$ and strictly increasing, then $\lim_n \int\limits_0^1 \frac{a_nx}{1+a_nx}dx = \int\limits_0^1 \lim\limits_n \frac{a_nx}{1+a_nx}dx$ I don't know what to make of this integral. I know the two limits will be equal provided that both the inner limits exist in $\mathbb{R}$ and the convergence on the right inner limit is uniform, but I'm not sure how to proceed. All help is greatly appreciated.
| It is easy to calculate that $x-\log (x+1)$ is the antiderivative of $\frac{x}{1+x}$. Making the change of variables $y=cx$ we arrive at
$$
(\star)\hspace{.7cm}\int_0^1\frac{cx}{1+cx}dx=\frac{1}{c}\int_0^c\frac{y}{1+y}dy=\frac{1}{c}\left[c-\log (c+1)\right]=1-\frac{\log (c+1)}{c},
$$
where $c>0$ is some constant. As $a_n$ is increasing then either $\lim_{n\to\infty}a_n=a\in\mathbb{R}$ or $a_n\nearrow+\infty$.
$\bullet$ If $a_n\to a$ then we obtain from $(\star)$ that
\begin{align}
&\lim_{n\to\infty}\int_0^1\frac{a_nx}{1+a_nx}dx=1-\lim_{n\to\infty}\frac{\log(a_n+1)}{a_n}=1-\frac{\log(a+1)}{a}\\
&\int_0^1\lim_{n\to\infty}\frac{a_nx}{1+a_nx}dx=\int_0^1\frac{ax}{1+ax}dx=1-\frac{\log(a+1)}{a}.\\
\end{align}
$\bullet$ If $a_n\nearrow+\infty$ then, again by $(\star)$,
\begin{align}
&\lim_{n\to\infty}\int_0^1\frac{a_nx}{1+a_nx}dx=1-\lim_{n\to\infty}\frac{\log(a_n+1)}{a_n}=1=\int_0^11dx=\int_0^1\lim_{n\to\infty}\frac{a_nx}{1+a_nx}dx.
\end{align}
|
What is the contour of a graph I am reading the paper "A Linear-time Algorithm for Drawing
a Planar Graph on a Grid" and it says:
let $w_1, \ldots, w_m$ be the contour of $G_{k-1}$
where $G_{k-1}$ seems to be either a graph, or the embedding of a graph. What does the "contour" of a graph mean? I am trying to search for "contour graph" and I am only getting the geographic meaning.
| The geographical idea can be extended for graphs.
Most generally, a contour is a set of points that share a common measure.
The most obvious definition in your case would be the set of points that are a certain number of steps away from a perhaps arbitrarily chosen central point.
At the start of the algorithm, the contour (distance 0) would be the first chosen point.
At the end of the first iteration, the contour would be the set of vertices that are distance 1 from the first chosen point. I'm guessing that $G_{k-1}$ would be the subgraph created from the vertices that are distance 1 or less away from the first chosen point.
|
How to prove $Cone({\bf pt}) \approx D^1,$ and $Cone(D^{n-1}) \approx D^n$ Let $X$ be a topological space. The cone $Cone(X)$ over a topological space $X$ is
the quotient space obtained by identifying all points of the form $(x,1)$ in
the product $X\times [0,1]$ (supplied with the product topology), i.e. $$Cone(X) = (X\times [0,1])/(X\times \{1\}).$$
My question is prove that
(i) $Cone({\bf pt}) \approx D^1,$ where ${\bf pt}$ is the one-point space.
(ii) $Cone(D^{n-1}) \approx D^n,$ where $D^n = \{x\in \mathbb{R}^n\,\,|\,\,||x||\leqslant 1\}$ is $n$-disk closed.
I don't know how to prove it and I hope that someone can help. Thank you very much!
| Hint: show that in general, any compact convex body with nonempty interior in $\mathbb{R}^n$ is homeomorphic to the closed $n$-disk.
|
Recurrence relation for pairing off $2n$ people I know the answer is supposed to be
$$a_{2n} = (2n-1) a_{2n-2}$$
Can someone please explain why shouldn't be having $\binom{2n}{2}$ in place of $2n-1$?
Doesn't it matter which two people are paired off out of the $2n$ people and hence generating a different case each time for the remaining $(2n-2)$ people?
| Consider the problem of pairing $2n$ people AND label one of these pairs. Then the number of ways is
$$b_{2n}=\binom{2n}{2}a_{2n-2}$$
This is what we enumerate when in your formula we have $\binom{2n}{2}$ in place of $2n-1$.
Since each pairing of $2n$ people contains $n$ pairs, each unlabelled pairing is counted $n$ times, so we may "unlabel" the labelled pair by dividing by $n$:
$$a_{2n}=\frac{b_{2n}}{n}=\frac{1}{n}\binom{2n}{2}a_{2n-2}=(2n-1)a_{2n-2}.$$
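A brute-force check of the recurrence for small $n$ (a Python sketch; the enumeration function is mine):

```python
def pairings(people):
    """Number of ways to split an even-sized tuple of people into unordered pairs."""
    if not people:
        return 1
    rest = people[1:]   # pair the first person with each possible partner
    return sum(pairings(tuple(p for p in rest if p != partner)) for partner in rest)

for n in range(1, 6):
    a = pairings(tuple(range(2 * n)))           # a_{2n}
    b = pairings(tuple(range(2 * n - 2)))       # a_{2n-2}
    print(2 * n, a, (2 * n - 1) * b)            # the last two columns agree
```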
|
What is the meaning of a "Meromorphic Univalent Function"? I lecture this weak my teacher mentioned the term "Meromorphic Univalent Function" but he didn't explain it. I tried to find the meaning of this term but the only result I could find is the definition of "Meromorphic Function" and "Univalent Function". Does anyone know the meaning of a "Meromorphic Univalent Function"? Also, what is an example of Meromorphic Univalent Function?
Also, what is an example of a "Meromorphic univalent function of complex order"?
Thanks,
| Univalent function means analytic (holomorphic) and injective, i.e. by correctly defining the two open sets $U,V = f(U)\subset \mathbb{C}$ you can see $f : U \to V$ as being bijective and analytic (biholomorphic).
Now look at $f(z) = 1/z$: it is bijective and analytic $\mathbb{C}^* \to \mathbb{C}^*$, or you can say it is meromorphic on $\mathbb{C}$ (and bijective as a map of the Riemann sphere to itself), which works too.
Note that if $g(z)$ has a pole at $z=a$ then $\frac{1}{g(z)}$ has a zero at $z=a$ so there is $r,R$ such that $\frac{1}{g(z)}$ takes all the values $|z| < r$ on $|z-a| < R$, and $g(z)$ takes all the values $|z| > 1/r$ on $|z-a| < R$. Also $h(z)$ analytic is locally injective (and bijective) iff $h'(z) \ne 0$.
Overall you see that a univalent meromorphic function can only have at most one pole of order $1$.
And there are also some theorems (for which I can't find a reference) saying that in some cases (if $U$ is simply connected?), a sufficient condition for $f(z)$ to be biholomorphic is that $f'(z) \ne 0$.
|
How is it derived from the LHS term? $$\sin\left(\frac{720n\pi}{600}\right) = -\sin\left(\frac{4n\pi}{5}\right).$$
It is part of a derivation I found in an example, but this step is not clear to me. I tried to just divide and use the remainder, but it does not come out the same, and $\sin(n\pi) = 0$, so that does not lead to the step above.
Please explain.
| $$\frac{720}{600} = \frac{72}{60}=\frac{12}{10}=\frac{6}{5} = 2 - \frac{4}{5}.$$
Now you write $$\sin\left(\frac{\pi n \cdot 720}{600}\right) =\sin\left( 2\pi n -\frac{\pi n \cdot 4}{5}\right) = \sin\left(-\frac{\pi n \cdot 4}{5}\right) =-\sin\left(\frac{\pi n \cdot 4}{5}\right)$$
|
Given 2 vectors, what's the least computationally intensive way to determine if one is more than 90 degrees away from the other? I would like to determine the angle between two arbitrary vectors. Using the cross product, I can do:
$\theta_1 = sin^{-1}\Big({{\lvert a \times b \rvert} \over {\lvert a \rvert \lvert b \rvert}}\Big)$
and use the cross product to get the rotation axis.
However, this will give me a range between $[-90, 90]$ degrees, but for anything greater than that, it will place the cross product vector in the opposite direction and give me a range between $[-90, 90]$ degrees.
So, in the meantime, I have to do a test for direction by taking the 2nd vector, multiplying it by the rotation matrix resulting from $-\theta_1$ and ${\lvert a \times b \rvert}$ and, by using the dot product, getting the angle:
$\theta_2 = cos^{-1}\Big({{\lvert a \cdot b \rvert} \over {\lvert a \rvert \lvert b \rvert}}\Big)$
If that angle is not $0$, then subtract $180$ from the $\theta_1$ as the result.
So my question is: Is there a less computationally intensive way to do this?
| The angle $\theta$ between $\bf a$ and $\bf b$ is greater than $90^{\circ}$ iff the dot product $${\bf a} \cdot {\bf b} := a_1 b_1 + \cdots + a_n b_n$$
of $\bf a$ and $\bf b$ is negative, as this quantity coincides with $$|{\bf a}| |{\bf b}| \cos \theta .$$ This entails $n$ multiplications and $n - 1$ additions, and so is relatively computationally cheap.
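In code this reduces to a sign test on the dot product (a Python sketch; the function name is arbitrary):

```python
import numpy as np

def more_than_90_degrees(a, b):
    """True iff the angle between vectors a and b exceeds 90 degrees."""
    return np.dot(a, b) < 0.0

print(more_than_90_degrees(np.array([1.0, 0.0]), np.array([-1.0, 0.2])))   # True  (obtuse)
print(more_than_90_degrees(np.array([1.0, 0.0]), np.array([1.0, 5.0])))    # False (acute)
```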
|
Finding a Matrix to make an equation work Here is my problem:
Suppose $A = \begin{bmatrix}3&2\\5&4\end{bmatrix}$
and $C = \begin{bmatrix}2&0\\1&6\end{bmatrix}$
Find a matrix $B$ such that $AB=C$ or prove that no such matrix exists. Explain your answer.
In order to do this, I am going to need to find the inverse of at least one of the matrices. Will I need to find both? Help?
| $AB=C$ therefore $B=A^{-1}C$ if $A$ is invertible
$det(A)=3(4)-5(2)=2$
Therefore $A^{-1}=\begin{bmatrix}2&-1\\-2.5&1.5\end{bmatrix}$
$B=$ $\begin{bmatrix}2&-1\\-2.5&1.5\end{bmatrix}$$\begin{bmatrix}2&0\\1&6\end{bmatrix}$
$$B=\begin{bmatrix}3&-6\\-3.5&9\end{bmatrix}$$
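As a numerical cross-check (a numpy sketch), one can also solve $AB=C$ directly without forming $A^{-1}$, which is generally the preferred approach in floating-point code:

```python
import numpy as np

A = np.array([[3.0, 2.0], [5.0, 4.0]])
C = np.array([[2.0, 0.0], [1.0, 6.0]])
B = np.linalg.solve(A, C)         # solves AB = C without forming A^{-1} explicitly
print(B)                          # [[ 3.  -6. ] [-3.5  9. ]]
print(np.allclose(A @ B, C))      # True
```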
|
Expectation of log of linear function of the Dirichlet distribution Given $\mathbf{X}\sim\mathsf{Dir}(\alpha_1,\cdots,\alpha_k)$, is there an expression for the expectation
$$ \mathbb{E}\left[ \log \left(\mathbf{c}^\top \mathbf{X} \right)\right] $$
where $\mathbf{c}\in\mathbb{R}_+^k$ is a vector of positive constants?
| I think there isn't a good way to express this; from what I've calculated, it comes down to the fact that the sum of independent Gamma random variables isn't a nice distribution when the shape and scale are both different. Here's what I've got:
If we set $Y_i = \text{Gamma}(\alpha_i,1)$, then we know that $$\mathbf{X} \sim \left(\frac{Y_1}{\sum Y_i}, \ldots, \frac{Y_k}{\sum Y_i} \right).$$
This lets us calculate: \begin{align*}
\mathbb{E}[\log(\mathbf{c}^T \mathbf{X})] &= \mathbb{E}\left[\log\left(\sum c_i Y_i\right)\right] - \mathbb{E}\left[\log \left(\sum Y_i\right) \right].
\end{align*}
We know that $\sum Y_i \sim \text{Gamma}(\sum \alpha_i,1)$, implying that $$\mathbb{E}\left[\log \left(\sum Y_i\right) \right] = \psi(\sum \alpha_i)$$ where $\psi$ is the digamma function. I don't think there's a nice way to deal with the first term because we can't say a lot about the distribution of $\sum c_i Y_i$, so I think this is as far as you can get.
|
No finite field is algebraically closed my lecturer used this theorem in the title without proof. I didn't find a proper proof by myself. Could anyone help me out, please?
| Let's suppose that $ K $ is finite and write $ K=\{\alpha_{1}, \ldots , \alpha_{n}\}$. Now take the polynomial $ p (x)=(x-\alpha_{1})\ldots (x-\alpha_{n}) +1\in K[x]$. It's easy to see that $ p (x) $ doesn't have any roots in $ K $. Hence, $ K $ is not algebraically closed.
|
Exponential function to map from [0, 1] to arbitrary [min, max]? Maybe you can help out a programmer whose last math class was 20 years ago:
Given a base $b$, $y_{min}$ and $y_{max}$:
Define a function so that $f(0) = y_{min}$, $f(1) = y_{max}$ and f(x) is exponential in [0, 1] using the given base.
Define the inverse function.
Trivial example
$y_{min} = 1$, $y_{max} = 16$, $b = 2$
$$y = 2^{4x}$$
$$x = \log_2(y) / 4$$
For positive values of $y_{min}$
I came up with the general form:
$$y = b^{\,x (\log_b(y_{max}) - \log_b(y_{min})) + \log_b(y_{min}) }$$
$$x = \frac{\log_b(y) - \log_b(y_{min})}{\log_b(y_{max}) - \log_b(y_{min})}$$
Arbitrary ranges?
How to make it work, with arbitrary values, for example $b = 10$, $y_{min} = -1.000$, $y_{max} = 10,000$.
I suspect the question is not even valid for this range, but for my requirement it looks sensible at first sight:
Have a slider (user interface component) whose
*
*minimum value is -1,000
*maximum value is 10,000
*with equal-distances for -100, -10, 0, 10, 100, 1000
*sensible values in between
It also looks simple when you define x = 0 for leftmost slider position, x = 1 for rightmost position and draw the graph with a (pseudo?) logarithmic scale on the y-axis:
How to best handle that?
A problem, of course is that there is just no $x$ for which $b^x = 0$.
Maybe we need three separate functions: One for $y = [y_{min}, -eps]$, one for $y = [eps, y_{max}]$, and one for $y = ]-eps, eps[$ where we cheat and approximate?
| Exponential functions can't change sign: if $f(x)=ab^x$ ($b$ positive, $a\not=0$) then either $f$ is positive everywhere or $f$ is negative everywhere.
Why? Well, remember that $b^x>0$ for all $x$, since $b$ is positive (and an exponential function with negative base doesn't really make sense over an interval unless we're in the complex numbers). So either $a$ is negative, in which case $f$ is always negative, or $a$ is positive, in which case $f$ is always positive.
So we need $y_{min}$ and $y_{max}$ to have the same sign. And your function
$$y = 2^{ x (log_b(y_{max}) - log_b(y_{min})) + log_b(y_{min}) }$$ does the job in each case.
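For the positive-range case, here is a small Python sketch of the mapping and its inverse. It uses the equivalent form $f(x)=y_{min}\,(y_{max}/y_{min})^x$, which avoids committing to an explicit base; the function names are only for illustration.

import math

def make_exp_map(y_min, y_max):
    # assumes y_min and y_max are both positive (the same-sign requirement above)
    def f(x):
        return y_min * (y_max / y_min) ** x
    def f_inv(y):
        return math.log(y / y_min) / math.log(y_max / y_min)
    return f, f_inv

f, f_inv = make_exp_map(1, 16)
print(f(0), f(0.5), f(1))    # 1.0 4.0 16.0
print(f_inv(4.0))            # 0.5

For the slider that has to cross zero, one would indeed have to stitch pieces together as suggested in the question (an exponential piece on each side of zero plus a small region around zero handled separately), since a single exponential cannot change sign.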
|
Need help with this Lebesgue Integration problem. I have no clue as to how to do this. Could Fubini-Tonelli be required? Is there some reference I could read to understand how to attempt such problems?
Let $f$ be a measurable function and $E$ be a measurable subset of $\mathbb{R}^n$.
Show that:
$$\int_{E} |f|^p \ dm \ = \ \int_{0}^{\infty} pt^{p-1}m\{\ \textbf{x}\in E : |f(\textbf{x})|\gt t \} dt$$
| Hint: write the right-hand side as
$$ \int_0^{\infty}pt^{p-1}\int_E \mathbf{1}_{\{|f(\mathbf{x})|>t\}}\; dm\, dt$$
and use Tonelli's theorem to interchange the integrals.
|
Show that $3^{2n+1}-4^{n+1}+6^n$ is never prime for natural n except 1. Show that $3^{2n+1}-4^{n+1}+6^n$ is never prime for natural n except 1. I tried factoring this expression but couldn't get very far. It is simple to show for even n but odd n was more difficult, at least for me.
| $$3 \cdot 3^{2n}-4\cdot 2^{2n}+2^n\cdot 3^n$$ has the form
$$3x^2-4y^2+xy=(x-y)(3x+4y)$$
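A quick check of this factorization with $x=3^n$, $y=2^n$ for small $n$ (SymPy is only used for the primality test); note that for $n=1$ the first factor equals $1$, which is why the value $17$ can still be prime:

from sympy import isprime

for n in range(1, 11):
    v = 3**(2*n + 1) - 4**(n + 1) + 6**n
    x, y = 3**n, 2**n
    print(n, v == (x - y)*(3*x + 4*y), isprime(v))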
|
Is it possible to create a system of two equations with 3 variables and only one solution? I know that a system of equations with 3 variables usually has to contain at least 3 equations, but are there any special cases where 2 equations with 3 variables have just one solution? I've researched a bit, and found 2 equations with 3 variables with finite solution sets, but are there any that have exactly one solution? The equations could be linear or quadratic or cubic or sinusoidal or anything... but does such a case exist?
Also, is it possible to find such a case if we know any 2 variables? Like, for example, $a + b + c = w$ and $b + c = d$, given b and d. (of course, this example has more than 1 solution, but are there any like this that have just 1 solution)
Thanks for any help!
Edit:
To clarify, I'm looking for the general form of such an equation. To be more specific, are there any systems of 2 equations, with 3 variables, that only have 1 solution, but which can be modified to have any 1 solution I desire.
| Yes there is.
$x^2+y^2=0$ and $z=0$.
Edit:
$(x-a)^2+(y-b)^2=0$ and $z-c=0$
|
Minimizing a quadratic-over-linear fractional function This is from the Convex Optimization book by Boyd and Vandenberghe.
Show that $$ \min \ \frac{\|Ax-b\|_2^2}{c^T x + d} $$ $x \in \left\{x : c^T x +d > 0 \right\}$ has a minimizer $x^* = x_1 +t x_2$ where $x_1=(A^TA)^{-1}A^T b$, $x_2=(A^TA)^{-1}c$ and $t \in \mathbb{R}$ is obtained by solving a quadratic equation.
From the structure of the solution, it seems like I am supposed to split the problem into two parts, but apart from that I don't really understad how to solve this. I tried to differentiate to find the minimizer, but I didn't get anything of this form. (In the problem before this, we had to show that f is closed, if that is relevant).
| Let us rewrite the problem to a convex optimization problem by adding a variable $s$:
$$\min \{ s||Ax-b||^2 : s = 1/(c^Tx + d) \}$$
and then substituting $y = xs$:
$$\min \{ s||A(y/s)-b||^2 : (c^Ty + ds) = 1 \}$$
Note that the objective function is the perspective of a convex function, and is therefore convex. The KKT stationarity conditions for $y$ and $s$ read:
$$2(A^TA(y/s)-A^Tb) + \lambda c = 0$$
$$-\frac{\|Ay\|^2}{s^2} + b^T b + \lambda d = 0$$
The first condition can be solved for $y/s$:
$$x = \frac{y}{s} = (A^TA)^{-1}A^Tb-\frac{1}{2} \lambda (A^TA)^{-1} c$$
Your $t$ is now $-\lambda/2$. To find $\lambda$, consider the KKT stationarity condition for $s$, and plug in $s = 1/(c^Tx + d)$ to obtain the quadratic equation.
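Here is a numerical sketch of the result (not part of the derivation above). The particular quadratic used for $t$ below is one way to make it explicit: setting the gradient of the original objective to zero along the line $x_1+tx_2$ gives $(c^Tx_2)\,t^2+2(c^Tx_1+d)\,t-\|Ax_1-b\|^2=0$; this formula is an assumption of the sketch rather than a quoted result.

import numpy as np

rng = np.random.default_rng(0)
m, n = 8, 4
A = rng.standard_normal((m, n))
b = rng.standard_normal(m)
c = rng.standard_normal(n)

x1 = np.linalg.solve(A.T @ A, A.T @ b)   # least-squares point
x2 = np.linalg.solve(A.T @ A, c)
d = 1.0 + abs(c @ x1)                    # ensures x1 is feasible

r0 = np.linalg.norm(A @ x1 - b) ** 2
a_q, b_q, c_q = c @ x2, 2 * (c @ x1 + d), -r0
t = (-b_q + np.sqrt(b_q**2 - 4*a_q*c_q)) / (2*a_q)   # root with c^T x + d > 0

x_star = x1 + t * x2
den = c @ x_star + d
grad = (2 * A.T @ (A @ x_star - b) * den
        - np.linalg.norm(A @ x_star - b)**2 * c) / den**2
print(np.linalg.norm(grad))   # ~ 0, so x1 + t*x2 is a stationary point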
|
Find $f(0)$ if $f(x)= \sum\limits_{n=1}^\infty \frac{(-1)^{n+1}x^{2n-2}}{(2n+1)!} $
Find $f(0)$ if $f(x)= \sum\limits_{n=1}^\infty \frac{(-1)^{n+1}x^{2n-2}}{(2n+1)!} $
This seems trivial but I am not able to figure it out. Do I directly put $x=0$ or should I reindex $n=0$ and use Taylor Theorem. If I use Taylor theorem, then is $f(0)$ simply the $n$ part of the numerator?
| For $n\ge 2$ the summand is equal to zero, so you need only to consider the term for $n=1$ $$ \sum_{n=1}^\infty \frac{(-1)^{n+1}x^{2n-2}}{(2n+1)!} =\frac{1}{(2\cdot1+1)!}+ \sum_{n=2}^\infty \frac{(-1)^{n+1}x^{2n-2}}{(2n+1)!}=\frac1{3!}+0=\frac16$$
|
Is $\{((a,b),(c,d))|a+b\ge c+d\}$ an anti-symmetric relation? Question
Let $R$ be a relation on the set $\mathbb Z^+\times \mathbb Z^+$ defined as follows:
$R=\{((a,b),(c,d))|a+b\ge c+d\}$.
Is this anti-symmetric?
Attempted Solution
I'd argue that it isn't. For $((a,b),(c,d))\in R$ and $((c,d),(a,b))\in R$ to be true, then $a+b$ must equal $c+d$. This is because -- per the definition above -- $a+b\ge c+d$. If $(a,b)>(c,d)$ then $((c,d),(a,b))$ could not exist in the relation, thus they must be equal in terms of sum. In such a scenario, we find that the above relation is not anti-symmetrical since $((10,5),(5,10))\in R$ and $((5,10),(10,5))\in R$ is true, but $(5,10)\ne (10,5)$.
Did I do this right?
| I think it will be more clear if you take (1,3) and (2,2) as an example.
|
Comparing LU or QR decompositions for solving least squares
Let $X \in R^{m\times n}$ with $m>n$. We aim to solve $y=X\beta$ where $\hat\beta$ is the least square estimator. The least squares solution for
$\hat\beta = (X^TX)^{-1}X^Ty$ can be obtained using QR decomposition
on $X$ and $LU$ decomposition on $X^TX$. The aim to compare these.
I noticed that we can use Cholesky decomposition instead of $LU$, since $X^TX$ is symmetric and positive definite.
Using $LU$ we have:
$\hat\beta = (X^TX)^{-1}X^Ty=(LU)^{-1}X^Ty$, solve $a=X^Ty$ which is order $O(2nm)$, then $L^{-1}b=a$ at cost $\sum_1^{k=n} (2k-1)$ and finally $U^{-1}a$ at the same cost of $\sum_1^{k=n} (2k-1)$.
I didn't count the cost of computing $L^{-1}$ and $U^{-1}$.
Using $QR$ we have:
$\hat\beta = (X^TX)^{-1}X^Ty=((QR)^TQR)^{-1}R^TQ^Ty=R^{-1}Q^Ty$, where we solve $Q^Ty=a$ at cost $O(n^2)$ and $R^{-1}a$ with cost $\sum_1^{k=n} (2k-1)$.
Comparing the decompositions:
It seems that QR decomposition is much better than LU. I think the cost of computing QR is higher than LU, which is why we could prefer to use LU. On the other hand if we are given the decompositions, we should use QR.
$SVD$ decomposition:
Is there any advantage to use SVD decomposition?
| Your reasoning at the top is really odd. The LU decomposition is twice as fast as the standard QR decomposition and it will solve most systems. There are, however, pathologically ill-conditioned systems and non-square systems; that is where you would use QR or the SVD. The main reason for the SVD is that it allows you to be selective about your condition number.
There are many other decompositions. The Cholesky decomposition is twice as fast as the LU decomposition but only for positive definite Hermitian matrices. All of this neglects the sparsity of the matrix as well.
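A rough comparison sketch in NumPy on a random instance (Cholesky on the normal equations versus QR on $X$, with the SVD-based `lstsq` as a reference):

import numpy as np

rng = np.random.default_rng(1)
m, n = 200, 10
X = rng.standard_normal((m, n))
y = rng.standard_normal(m)

# normal equations solved with a Cholesky factor (the LU/Cholesky route)
G = X.T @ X
L = np.linalg.cholesky(G)
beta_chol = np.linalg.solve(L.T, np.linalg.solve(L, X.T @ y))

# reduced QR of X itself
Q, R = np.linalg.qr(X)
beta_qr = np.linalg.solve(R, Q.T @ y)

# SVD-based reference
beta_svd, *_ = np.linalg.lstsq(X, y, rcond=None)

print(np.max(np.abs(beta_chol - beta_svd)), np.max(np.abs(beta_qr - beta_svd)))

On well-conditioned data all three agree to machine precision; the differences only appear as $X$ becomes ill-conditioned, because forming $X^TX$ squares the condition number.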
|
Vector Calculus: Energy Estimate of the Navier-Stokes Equations I'm trying to follow an example problem from my class on fluid mechanics however I don't follow the solutions. I don't have much of a background in vector calculus unfortunately so I'm really struggling here. I was hoping that someone might be able to explain the steps to me in the simplest possible terms. Many thanks.
The question is as follows.
Consider the incompressible Navier–Stokes equations on a bounded domain $\mathbb{T}_d$ with periodic boundary conditions. The goal of this tutorial is to derive some energy estimates and achieve some fluency in performing these types of calculations.
Show that the Navier–Stokes equations can be expressed in the form $$\frac{\partial\mathbf{u}}{\partial t}+\mathbf{\omega}\times\mathbf{u}=\nu\Delta\mathbf{u}-\nabla(p+\frac{1}{2}|\mathbf{u}|^2)+\mathbf{f}$$
This is fine. Just use the identity: $\mathbf{u}\bullet\nabla\mathbf{u}=\frac{1}{2}\nabla(|\mathbf{u}|^2)-\mathbf{u}\times(\nabla\times\mathbf{u})$ in the standard form of the Navier-Stokes equations.
Directly compute $\frac{d}{dt}\|\mathbf{u}\|^2_{L^2}$
\begin{align}
\frac{d}{dt}\frac{1}{2}\|\mathbf{u}\|^2_{L^2}=&\frac{1}{2}\int_{\mathbb{T}_d}\frac{\partial}{\partial t}(\mathbf{u}\bullet\mathbf{u})\;d\mathbf{x}\\
=&\frac{1}{2}\int_{\mathbb{T}_d}2\mathbf{u}\bullet\frac{\partial \mathbf{u}}{\partial t}\;d\mathbf{x}\\
=&\int_{\mathbb{T}_d}\mathbf{u}\bullet(-\mathbf{\omega}\times\mathbf{u}+\nu\Delta\mathbf{u}-\nabla(p+\frac{1}{2}|\mathbf{u}|^2)+\mathbf{f})\;d\mathbf{x}\\
=&-\nu\int_{\mathbb{T}_d}|\nabla\mathbf{u}|^2\;d\mathbf{x}+\int_{\mathbb{T}_d}(\mathbf{u}\bullet\mathbf{f})\;d\mathbf{x}
\end{align}
The problem is that I don't follow the chain of equalities. My main concern is the 3rd to the 4th line. The explanation I'm given is that
$$\mathbf{u}\bullet(\mathbf{\omega}\times\mathbf{u})\equiv 0$$ I'm happy with that but I really don't understand the following two lines.
$$\int_{\mathbb{T}_d}\mathbf{u}\bullet\nabla(p+\frac{1}{2}|\mathbf{u}|^2))\;d\mathbf{x}=\int_{\mathbb{T}_d}\nabla\bullet\mathbf{u}(p+\frac{1}{2}|\mathbf{u}|^2))\;d\mathbf{x}-\int_{\mathbb{T}_d}(\nabla\bullet\mathbf{u})(p+\frac{1}{2}|\mathbf{u}|^2))\;d\mathbf{x}=0$$ and
$$\int_{\mathbb{T}_d}\mathbf{u}\bullet(\Delta\mathbf{u})\;d\mathbf{x}=\int_{\mathbb{T}_d}\nabla\bullet((\nabla\mathbf{u})\bullet\mathbf{u})\;d\mathbf{x}-\int_{\mathbb{T}_d}|\nabla\mathbf{u}|^2\;d\mathbf{x}=-\int_{\mathbb{T}_d}|\nabla\mathbf{u}|^2\;d\mathbf{x}$$
Then by the divergence theorem the divergence terms on the right are zero and using incompressibility we get line 4.
If anyone could explain the vector calculus of these lines I would be very greatful.
| The first line uses the identity $\nabla \cdot (\phi \mathbf{u}) = \mathbf{u} \cdot \nabla \phi + \phi \nabla \cdot \mathbf{u},$ where $\phi = p + \frac1{2}|\mathbf{u}|^2.$
The second integral on the RHS vanishes by virtue of the solenoidal condition $\nabla \cdot \mathbf{u} = 0$ for an incompressible fluid.
An argument can be made that the first integral on the RHS must vanish as well. First apply the divergence theorem and recast as a surface integral over the boundary of $T_d$
$$\int_{\mathbb{T}_d}\nabla \cdot\left[\mathbf{u}\left(p+\frac{1}{2}|\mathbf{u}|^2\right)\right]\;d\mathbf{x}= \int_{\partial\mathbb{T}_d} \left(p+\frac{1}{2}|\mathbf{u}|^2\right)\mathbf{u} \cdot \mathbf{n} \, dS.$$
Evidently this surface integral vanishes as a consequence of the periodic boundary conditions. This is most easily visualized if $T_d$ is a cube, where the sign of $\mathbf{u} \cdot \mathbf{n}$ must alternate between opposing faces -- since $\mathbf{n}$ is the outward-pointing unit normal vector.
A similar rationale applies to the second line.
|
Approximating $\sqrt{2}$ in rational numbers Let a sequence of rational numbers be defined recursively as $x_{n+1} = (\frac{x_n}{2} + \frac{1}{x_n})$ with $x_1$ some arbitrary positive rational number.
We know that, in the universe of real numbers, this sequence converges to $\sqrt{2}$. But suppose we don't know anything about real numbers. How do we show that that ${x_n}^2$ gets arbitrarily close to $2$?
I've already shown that $x_n^2 > 2$ (for $n \ge 2$) and that the sequence is decreasing. But I'm having difficulty showing that ${x_n}^2$ gets as close to $2$ as we want using nothing but inequalities. Since we're assuming no knowledge of real numbers, I don't want to use things like the monotone convergence theorem, the least upper bound property etc.
This exercise is of interest to me because it can it can help explain the development of irrational numbers to a student who knows nothing about them.
| Square both sides to obtain
$$x_{n+1}^2 = \frac{x_n^2}{4} + 1 + \frac{1}{x_n^2}$$
Therefore,
$$x_{n+1}^2 - 2 = \frac{x_n^2}{4} - 1 + \frac{1}{x_n^2}$$
We can manipulate this a bit to obtain
$$x_{n+1}^2 - 2 = \frac{x_n^2 - 2}{4} - \frac{x_n^2 - 2}{2x_n^2}$$
$$x_{n+1}^2 - 2 = \frac{(x_n^2 - 2)^2}{4x_n^2}$$
Since the RHS is nonnegative, so is the left, so from this we get $x_{n+1}^2 \geq 2$. Applying this to the denominator of the RHS gives us (for $n > 1$):
$$x_{n+1}^2 - 2 \leq \frac{(x_n^2 - 2)^2}{8}$$
From this we can conclude that as long as $x_n^2 - 2 < 1$ for some $n$, we will have $x_{n+1}^2 - 2 < 1/8$, and so by induction, $x_{n+k}^2 - 2 < 1/8^k$. Such an $n$ always exists: since $x_n^2 - 2 \leq x_n^2$, the identity above also gives $x_{n+1}^2 - 2 \leq \frac{x_n^2 - 2}{4}$, so the quantity $x_n^2 - 2$ shrinks by a factor of at least $4$ at every step and eventually drops below $1$. Hence $x_n^2$ gets as close to $2$ as we want.
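The recursion can be carried out entirely in the rationals, which matches the spirit of the question; a short sketch with Python's `Fraction`:

from fractions import Fraction

x = Fraction(1)              # an arbitrary positive rational start
for n in range(6):
    x = x / 2 + 1 / x
    print(n + 1, float(x), float(x*x - 2))   # x_n^2 - 2 shrinks rapidly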
|
If A and B are positive definite matrices, is it true that the (induced, L2) matrix norm of A times inv(A+B) If $A$ and $B$ are positive definite matrices, can it be shown that $||A (A + B)^{-1}|| \leq 1$, where $||...||$ is the matrix norm induced by the $L_2$ norm on vectors?
| It is true, if $A$ and $B$ commute, as:
$$ A(A+B)^{-1}(A+B)^{-1} A = (I+\underbrace{A^{-1}B}_{\text{pos. def.}})^{-2}. $$
|
Sum of all consecutive natural root differences on a given power I accidentally observed that $\sqrt{n} - \sqrt{n-1}$ tends to $0$ for higher values of $n$, so I've decided to try to sum it all, but that sum diverged. So I've tried to make it converge by giving it a power $m$.
$$S_m=\sum_{n=1}^\infty (\sqrt{n} - \sqrt{n-1})^m$$
How would one calculate the values of this sum for a choosen $m\in\mathbb R$?
Not just estimate but write them in a non-decimal form if possible, preferably using a better converging formula.
It converges if $m>2$.
The values seem to tend to numbers with non-repeating decimals.
*
*Thanks to achille hui and his partial answer here, it looks like $S_m$ for odd values of $m$ is a linear combination
of riemann zeta function at negative half-integer values:
\begin{align} S_3 &\stackrel{}{=} -6\zeta(-\frac12) \approx 1.247317349864128...\\ S_5 &\stackrel{}{=} -40\zeta(-\frac32) \approx 1.019408075593322...\\ S_7 &\stackrel{}{=} -224\zeta(-\frac52) - 14\zeta(-\frac12) \approx 1.00261510344449... \end{align}
If we decide to replace the constant $m$ with $n\times{k}$ where $k$ is a new constant, then we can talk about $S_k$, Which converges if $k>0$;
$$S_k=\sum_{n=1}^\infty (\sqrt{n} - \sqrt{n-1})^{nk}$$
I wonder if these values could also be expressed in a similar way as $S_m$.
Values still tend to numbers with seemingly non-repeating decimals according to the Wolfram Alpha:
$$ S_1 \approx 1.20967597004937847717395464774494290
$$
Also notice that the functions of these sums are similar to the zeta function; $\color{blue}{S_m} \sim \color{red}{\zeta}$
But I think it's only due the fact that they all approach $1$?
| It does converge for $m > 2$, since $\sqrt{n} - \sqrt{n-1} \sim 1/(2\sqrt{n})$ and $\sum_n n^{-m/2}$ converges for $m > 2$.
|
Show $\sum_{n=1}^\infty \left(\frac{n}{2n-1}\right)^n$ converges I'm trying to show that $\sum_{n=1}^\infty \left(\frac{n}{2n-1}\right)^n$ converges. Using the Limit Ratio Test for Series, we want to show that $\lim_{n\to \infty} \left\lvert \frac{a_{n+1}}{a_n}\right\lvert<1$. However, I'm having trouble finding said limit (I know that it is equal to $\frac{1}{2}$, but I don't know how show it). Thanks in advance
| Note that we can write
$$\begin{align}
\left(\frac{n}{2n-1}\right)^n&=\frac{1}{2^n}\left(\frac{1}{1-\frac{1}{2n}}\right)^n\tag1\\\\
&\le \frac{1}{2^{n-1}}\tag2
\end{align}$$
where in going from $(1)$ to $(2)$ we invoked Bernoulli's Inequality.
NOTE:
The OP was pursuing a way forward that relied on the ratio test. Proceeding, we have
$$\begin{align}
\lim_{n\to \infty}\frac{a_{n+1}}{a_n}&=\lim_{n\to \infty}\frac{\left(\frac{n+1}{2n+1}\right)^{n+1}}{\left(\frac{n}{2n-1}\right)^{n}}\\\\
&=\lim_{n\to \infty}\left(\left(\frac{n+1}{2n+1}\right)\,\left(1-\frac{1}{n(2n+1)}\right)^n\right)\\\\
&=\frac12
\end{align}$$
since from Bernoulli's Inequality we have
$$1\ge \left(1-\frac{1}{n(2n+1)}\right)^n\ge 1-\frac{1}{2n+1}$$
whence application of the squeeze theorem reveals that $\left(1-\frac{1}{n(2n+1)}\right)^n\to 1$, and therefore the ratio tends to $\frac12$.
|
Stating that either root is zero in solving a quadratic equation Let's say we have a simple quadratic equation $x^2 - 3x = 0$. To solve, we will factor $x$ out i.e. $x(x-3)=0$, after which we will state $x = 0$ or $(x-3) = 0$. My question is, why is there no third "option" where we say "or both". Isn't it possible for both "portions" (i.e. $x$ and $(x-3)$) to both be equal to zero? After all, if most quadratic equations would have two roots, then both $x = 0$ and $x = 3$ are the roots thus both $x = 0$ and $(x-3) = 0$ are true!
[This question on the use of the word "or" applies to polynomial equations with degrees 3 and above too of course, but I'm choosing a quadratic one as it is the simplest case possible.]
| If you write a polynomial $p(x) = a_nx^n + a_{n-1}x^{n-1} + \cdots + a_1x + a_0$, then by a theorem (the Fundamental Theorem of Algebra), you can completely factor the polynomial as $p(x) = a_n(x - b_1)(x-b_2)\cdots(x - b_n)$ where the $b_j$'s are the (potentially complex) roots.
Then $p(x) = 0$ precisely when $x = b_j$ for at least one $j, 1 \leq j \leq n$. With this phrasing, the "at least one" covers the issue you're talking about. Note, however, if $x = b_j$ and $x = b_k$ that it follows that $b_j = b_k$, so two or more of the linear factors $(x - b_j), (x - b_k)$ will be zero simultaneously if and only if $b_j = b_k$.
So in your example, it can never be the case that for some $x$, both $x - 0 = 0$ and $x - 3 = 0$. But if you take the polynomial $x^2 - 2x + 1 = (x-1)(x-1)$, then for $x = 1$ you have that both are simultaneously $0$.
|
How to show that $f(x,y)=x^4+y^4-3xy$ is coercive? How to show that $f(x,y)=x^4+y^4-3xy$ is coercive ?
This is my attempt :
$$f(x,y)=x^4+y^4-3xy$$
$$f(x,y)=(x^4+y^4)\left(1-\frac{3xy}{x^4+y^4}\right)$$
As $||(x,y)|| \to \infty $ , $\frac{3xy}{x^4+y^4} \to 0$
So $||(x,y)|| \to \infty $ , $f(x,y)=x^4+y^4-3xy \to \infty$.
Is this a valid method ?
I think it is not a rigorous proof. can anyone help me with a good proof ?
| Your answer looks right.
Here's another way. Observe
\begin{align}
xy \le \frac{x^2+y^2}{2} \ \ \text{ and }\ \ x^4+y^4\geq \frac{1}{2}(x^2+y^2)^2
\end{align}
which means
\begin{align}
x^4+y^4-3xy \geq&\ x^4+y^4-\frac{3}{2}(x^2+y^2) \\
\geq&\ \frac{1}{2}(x^2+y^2)^2-\frac{3}{2}(x^2+y^2)\\
=&\ \frac{1}{2}\left(x^2+y^2-\frac{3}{2}\right)^2-\frac{9}{8}.
\end{align}
Hence it follows as $x^2+y^2\rightarrow \infty$, we see that $x^4+y^4-3xy\rightarrow \infty$.
Note: Using the above inequalities you could show
\begin{align}
1-\frac{3xy}{x^4+y^4} \geq 1- \frac{3(x^2+y^2)}{2(x^4+y^4)} \geq 1-\frac{3}{(x^2+y^2)}
\end{align}
which means for $x^2+y^2$ sufficiently big we have that
\begin{align}
1-\frac{3}{(x^2+y^2)} \ge \frac{1}{2}.
\end{align}
|
Why solving Helmholtz equation using FEM gives solution when theory says no? I get problem when solving $-\Delta u(x,y) - k^2 u(x, y) = f(x, y)$ in $(0, 1) \times (0, 1)$, $u = 0$ on the boundary.
When $k^2$ equals an eigenvalue, says $k^2 = 2 \pi^2$, using the Fredholm alternative, theorically we can say the problem is not well-posed, that we cannot obtain a unique solution for the problem. I solve this problem with $k^2 = 2 \pi^2$ using Finite Element Method, with $f(x, y) = 10 e^{-100(x^2 + y^2)}$, and there's no error. The matrix is of full rank. So I don't understand why we still have a solution, which is maybe unique (the matrix is of full rank). Where did I make mistake?
Thank you.
| Numerical eigenvalues are not the same as the exact eigenvalues (but close). When you refine the mesh the numerical eigenvalues approach the exact ones and the problem becomes ill-conditioned (but still of full rank).
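A one-dimensional finite-difference analogue illustrates the point (this is only an illustrative sketch, not the FEM discretization of the 2D problem): the discrete eigenvalue closest to $\pi^2$ is not exactly $\pi^2$, so the shifted matrix stays nonsingular, with a large but finite condition number.

import numpy as np

N = 50                      # interior grid points on (0, 1)
h = 1.0 / (N + 1)
main = 2.0 * np.ones(N) / h**2
off = -1.0 * np.ones(N - 1) / h**2
L = np.diag(main) + np.diag(off, 1) + np.diag(off, -1)   # -u'' with u(0)=u(1)=0

k2 = np.pi**2               # an exact eigenvalue of the continuous problem
evals = np.linalg.eigvalsh(L)
print("closest discrete eigenvalue:", evals[0])
print("exact eigenvalue pi^2:      ", k2)
print("cond(L - k2*I):             ", np.linalg.cond(L - k2*np.eye(N)))

Refining the mesh moves the discrete eigenvalue toward $\pi^2$ and makes the conditioning worse while the matrix remains of full rank, which is exactly the behaviour described above.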
|
find $g$ such that $g\circ f=h$ Let
$$f(x)=\dfrac{2x+3}{x-1} \quad \mbox{and}\quad h(x)=\dfrac{6x^2+8x+11}{(x-1)^2} $$
find polynom $g$ such that $g\circ f=h$
Indeed,
$$g\circ f=h \iff \forall x\in \mathbb{R}\quad g\circ f(x)=h(x) $$
\begin{align}
g\circ f(x)&=h(x)\\
g\left(f(x)\right)&=h(x)\\
g\left(\dfrac{2x+3}{x-1}\right)&=\dfrac{6x^2+8x+11}{(x-1)^2}
\end{align}
Let $y=\dfrac{2x+3}{x-1}$
\begin{align}
y&=\dfrac{2x+3}{x-1}\\
x&=\dfrac{3+y}{y-2}
\end{align}
\begin{align}
g\left(y\right)&=\dfrac{6\left(\dfrac{3+y}{y-2}\right)^2+8\left(\dfrac{3+y}{y-2}\right)+11}{\left(\dfrac{3+y}{y-2}-1\right)^2}\\
&=\dfrac{6\left(\dfrac{3+y}{y-2}\right)^2+8\left(\dfrac{3+y}{y-2}\right)+11}{\left(\dfrac{3+y}{y-2}-1\right)^2} \\
&=\dfrac{6\left(3+y\right)^2+8\left(3+y\right)+11\left(y-2 \right)}{25} \\
&=\dfrac{6y^2+55y+56}{25} \\
\end{align}
Then $g(x)=\dfrac{6x^2+55x+56}{25}$ such that $g\circ f=h$
AM i right ?
| It is wrong. After you have written the expression of $g(y)$ we should have
$$ g(y) = \frac{6(3+y)^{2} + 8(3+y)(y-2) + 11(y-2)^{2}}{25} $$
instead of your expression. On simplifying we get $g(y) = y^{2}+2$, i.e,
$$ g(x) = x^{2} + 2$$
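A quick symbolic check with SymPy:

from sympy import symbols, simplify

x = symbols('x')
f = (2*x + 3) / (x - 1)
h = (6*x**2 + 8*x + 11) / (x - 1)**2
g_of_f = f**2 + 2
print(simplify(g_of_f - h))   # 0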
|
Prove that two matrices commute iff the square of the matrices commute In my textbook there is a task in which I have to prove the relation
\begin{equation}
AB=BA\Leftrightarrow A^2B^2=B^2A^2.
\end{equation}
For ($\Rightarrow$) it is easy
\begin{equation}
AB=BA\Rightarrow (AB)^2=(BA)^2\Rightarrow ABAB=BABA\Rightarrow BBAA=AABB.
\end{equation}
But how do I prove ($\Leftarrow$)?
| The implication '$\Leftarrow$' is so obviously false it surprises me that one should even ask this question. Though commutation of matrices can arise in many ways, one of the most simple ways is when one of the matrices is a scalar matrix (multiple of the identity). So if '$\Leftarrow$' were true, it would mean at least that whenever $A^2$ is a scalar matrix then $A$ commutes with every other matrix $B$; this clearly cannot be true.
There is a multitude of kinds of matrices whose square is scalar without any reason for the matrix itself to commute with all other matrices: the matrix of any reflection operation ($A^2=I$), that of a rotation by a quarter turn $(A^2=-I$), or a nilpotent matrix of index$~2$ (i.e., $A\neq0$ but $A^2=0$). These give many choices for a counterexample. (In fact the only way $A$ can commute with all other matrices is for $A$ to be scalar itself, but you don't need to know this fact to find counterexamples to '$\Leftarrow$'.)
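A concrete counterexample of the kind described (two reflections), checked numerically:

import numpy as np

A = np.array([[1, 0], [0, -1]])   # a reflection, so A^2 = I
B = np.array([[0, 1], [1, 0]])    # another reflection, so B^2 = I

print(np.array_equal(A @ B, B @ A))                   # False: A and B do not commute
print(np.array_equal(A @ A @ B @ B, B @ B @ A @ A))   # True: their squares commute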
|
Number theory related to crypto I have a question related to a piece of coursework that is for cryptography and more for encryption that relies on Number theory, now I have no knowledge of number theory and the tutor did not cover it well enough, and I am starting to learn it slowly, but I have an exam coming up and one question will be like the one below, so any help from you guys would be appreciated.enter image description here
Consider the group $G=\mathbb{Z}^*_{61}$ with respect to multiplication. Compute the following
o Find the inverse element of 53 in $G$.
o Compute the order of the element 15 in $G$.
o Let $H := \langle 8 \rangle$ be a group contained in $G$, i.e. $H$ is generated by the powers of 8 modulo 61. Find two non-trivial subgroups of $H$. The trivial subgroups of $H$ are $\langle 1 \rangle$ (which only contains the identity element) and $H$ itself.
| For the first question, you want to use Euclid's algorithm to find an expression of the form $53a + 61b = 1$, i.e. $53a \equiv 1 \pmod {61}$ Then $a$ is the inverse to 53.
For the second question, you need to find the smallest $n$ such that $15^n \equiv 1 \pmod {61}$, so you would just keep multiplying by 15 and reducing mod 61 until you get to 1. (Note the reducing is important! Otherwise you might have to multiply really big numbers by 15 after a couple of steps.)
For the third question, have you calculated which elements are in $H$? If you do, then you should be able to work out the subgroup generated by each element, and hopefully find two which are different!
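All three parts are easy to check by brute force in Python (`pow(a, -1, p)` needs Python 3.8+; otherwise run the extended Euclidean algorithm by hand as described above):

p = 61

# inverse of 53 modulo 61
inv53 = pow(53, -1, p)
print(inv53, (53 * inv53) % p)    # the product reduces to 1

# order of 15 in G
k = 1
while pow(15, k, p) != 1:
    k += 1
print("order of 15:", k)

# the subgroup H generated by 8
H = sorted({pow(8, i, p) for i in range(60)})
print(len(H), H)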
|
$J=\int_{0}^{1}{ \frac{x^b-x}{\ln x}}dx$ We know how to solve:$$I=\int_{0}^{1}{ \frac{x^b-1}{\ln x}}dx$$
Let $$f(b)=\int_{0}^{1} \dfrac{x^b-1}{\ln x} dx$$
$$f'(b)=\int_{0}^{1} x^b dx$$
$$f'(b)=\dfrac{1}{b+1}$$
$$f(b)=\ln(b+1)+C$$
Let $b=0$ then $f(b)=0$ implies $C=0 $
Therefore $f(b)=\ln(b+1)$
My question:
How evaluating the following integral?
$$J=\int_{0}^{1}{ \frac{x^b-x}{\ln x}}dx$$
| Concerning
$$
J=\int_{0}^{1}{ \frac{x^b-x}{\ln x}}dx
$$ one may just write
$$
J=\int_{0}^{1}{ \frac{(x^b-1)-(x-1)}{\ln x}}dx=\int_{0}^{1}{ \frac{x^b-1}{\ln x}}dx-\int_{0}^{1}{ \frac{x-1}{\ln x}}dx
$$ then one may use the previous result.
|
Axis of Symmetry for a General Parabola
Given a general parabola
$$(Ax+Cy)^2+Dx+Ey+F=0$$
what is the axis of symmetry in the form $ax+by+c=0$?
It is possible of course to first work out the angle of rotation such that $xy$ and $y^2$ terms disappear, in order to get an upright parabola $y=px^2+qx+r$ and proceed from there. This may involve some messy trigonometric manipulations.
Could there be another approach perhaps, considering only quadratic and linear equations?
Addendum
From the solution (swapped) by Meet Taraviya and some graphical testing, the equation for the axis of symmetry is
Axis of Symmetry: $$\color{red}{Ax+Cy+\frac {AD+CE}{2(A^2+C^2)}=0}$$
which is quite neat. Note that the result is independent of $F$. Awaiting further details on the derivation.
Addendum 2
Here is an interesting question on MSE on a similar topic.
Addendum 3 (added 23 May 2018)
Tangent at Vertex:
$$Cx-Ay+\frac {(A^2+C^2)(F-k^2)}{CD-AE}=0$$
where $k=\frac {AD+CE}{2(A^2+C^2)}$.
Note that the parabola can also be written as
$$\underbrace{Cx-Ay+d}_{\text{Tangent at Vertex if $=0$}}
=m\;\big(\underbrace{Ax+Cy+k}_{\text{Axis of Symmetry if $=0$}}\big)^2$$
where
$$m=\frac {A^2+C^2}{AE-CD}$$
and $d=\frac {(A^2+C^2)(F-k^2)}{CD-AE}$
See Desmos implementation here.
| Write the equation as:-
$$(Ax+Cy+t)^2+(D-2At)x+(E-2Ct)y+F-t^2=0$$
Choice of t is made such that $A\cdot(D-2At)+C\cdot(E-2Ct)=0$
$Ax+Cy+t=0$ is the symmetry axis of parabola.
Also, the line $(D-2At)x+(E-2Ct)y+F-t^2=0$ is the tangent at vertex of the parabola.
Explanation :-
Interpret $y^2=4ax$ as:-
$$(Distance \space from \space y=0)^2=4a\cdot (Distance \space from \space x=0)$$
Note that $y=0$ is the symmetry axis of the parabola and $x=0$ is the tangent at vertex. Also, they are perpendicular to each other (This explains why $A\cdot(D-2At)+C\cdot(E-2Ct)=0$ must be true for these lines to be them. This statement is equivalent to $m_1m_2=-1$)This property holds true for a general parabola.
Thus a parabola can be represented as:-
$$(Distance \space from \space L_1)^2=4a\cdot (Distance \space from \space L_2)$$ where $L_1$ and $L_2$ are the symmetry axis and the tangent at the vertex of the parabola, which are perpendicular to each other.
|
How to find a point on a line at a given distance from a given point on the same line (given the equation of the line)? You are given a point P(x,y) on a line of equation ax + by + c = 0. I want another point, Q(a,b) on the same line which is at a distance d from P. There will be two points. I want to find both and know which side of P each one lies.
There are related questions in this forum, but I didn't get what I am exactly looking for. Sorry for beating around the bush, and thanks for any help... Please give me an elaborate explanation. Thanks once again!!!
| The direction of the given line is $$\binom {-b}a$$
So the two points you are looking for have position vectors $$\binom xy\pm\frac{d}{\sqrt{a^2+b^2}}\binom {-b}a$$
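In code this is just a couple of lines (a sketch; it assumes the given point $(x,y)$ already lies on the line):

import math

def points_at_distance(a, b, c, x, y, dist):
    norm = math.hypot(a, b)
    ux, uy = -b / norm, a / norm          # unit direction vector of the line
    return (x + dist*ux, y + dist*uy), (x - dist*ux, y - dist*uy)

# line x - y = 0, point (2, 2), distance sqrt(2): gives (3, 3) and (1, 1)
print(points_at_distance(1, -1, 0, 2, 2, math.sqrt(2)))

The two returned points lie on opposite sides of the given point along the direction vector $(-b,a)$, which answers the "which side" part of the question.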
|
Echelon form of a matrix over the integers using maple I need to find the (row) echolon form over the integers $\mathbb{Z}$, using maple. I know how to do this over $\mathbb{Q}$, using 'ReducedRowEchelonForm' of the package 'LinearAlgebra'.
I also know how to reduce a matrix to the echelon form, over the integers, using sage; but it would be much more comfortable to use one coding system for my needs.
Thanks in advance.
| Use the HermiteForm from the MatrixPolynomialAlgebra package.
Example:
A:= <<0,-1,-2,1>|<-3,-3,-3,4>|<-6,-1,0,5>|<9,1,-1,-7>>;
LinearAlgebra:-ReducedRowEchelonForm(A);
$$ \begin{bmatrix}
1 & 0& 0& 1/2\\
0&1&0&0\\
0&0&1&-3/2\\
0&0&0&0
\end{bmatrix}
$$
MatrixPolynomialAlgebra:-HermiteForm(A);
$$\begin{bmatrix}
1&0&1&-1\\
0&1&0&0\\
0&0&2&-3\\
0&0&0&0
\end{bmatrix}$$
|
derivatives of Kinetic Energy I read that derivative of Kinetic Energy function = $F.v$ while I got $mv$ when I differentiated it with respect to velocity.
The way I did it is:
$\frac{dK}{dv} = \frac{1}{2} m . \frac{d}{dv} v^2$
So I assumed that the mass is fixed and I differentiated the squared velocity by taking the (2) down and subtracting (1) from the exponent, which gave me $2v$ for $\frac{d}{dv} v^2$
Would you clarify the correct derivative of Kinetic Energy as well as the $F$ variable? And what does the derivative in this case represent?
| $$\frac{d v^2}{dt}=2v\frac{dv}{dt}$$
$$\implies \frac{dK}{dt}=\frac{1}{2}m\frac{d v^2}{dt}=mv\frac{dv}{dt}$$
but $$m\frac{dv}{dt}=F$$
thus
$$\frac{dK}{dt}=Fv$$
which represents the power delivered by the force $F$ (measured in watts). Note that your own computation is not wrong either: $\frac{dK}{dv}=mv$ is the derivative of $K$ with respect to the velocity, whereas $Fv$ is its derivative with respect to time.
|
Simultaneous equations with discrete log problem I know discrete log problem is a hard problem with one, lets say $m^{23} \mod 320 \equiv 300$
But would it be easy and possible to solve, lets say $m^{23} \mod 320 \equiv 300$
and $m^{31} \mod 320 \equiv 261$
How does one go about solving it? If it is not possible, why not?
| This system has no solution.
Given, $m^{23} \equiv 300 (\mod 320)$ and $m^{31} \equiv 261 (\mod 320)$.
Now, $m^{31} \equiv 261 (\mod 320)$
$\implies m^{23+8} \equiv 261 (\mod 320)$
$\implies m^{23} \times m^8 \equiv 261 (\mod 320)$
$\implies 300m^{8} \equiv 261 (\mod 320)$
$\implies 300m^{8} -320k=261$, for some integer $k$.
But this is not possible. Because the LHS of the equation is divisible by $10$, but the RHS isn't.
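This is also easy to confirm by exhausting all residues modulo $320$:

sols = [m for m in range(320)
        if pow(m, 23, 320) == 300 and pow(m, 31, 320) == 261]
print(sols)    # [] -- no residue satisfies both congruences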
|
Difference between State Space and Sample Space Aren't they both the same?
According to me they are just different terminologies used for different types of systems. What I understand is that sample space is used when we talk about static systems and state space when we talk about dynamic systems. Am I correct? If not then what exactly is the difference between them?
My teacher had asked for the state space in Markov Chain questions, so I'm guessing that state space is used only in dynamic systems? She said that if we roll 2 dice then the sample space will be {1,2,...,6},
but the state space will be {(1,1),(1,2),...,(6,6)}.
But as far as I remember, the sample space for rolling 2 dice is
{(1,1),(1,2),...,(6,6)}.
| The sample space is the set that contains all possible outcomes (events) for an experiment, and is usually denoted by $\Omega$ (the set of all possible events $\omega$).
For example, when tossing two coins, $\Omega=\{\text{(heads,heads)},\text{(heads,tails)},\text{(tails,heads)},\text{(tails,tails)}\}$
As pointed out by user275313, this answer was not completely right. In the context of dynamical systems, the state space is the set of all possible states in which the system can be as it evolves from one state to the next.
In probability, on the other hand, the state space of a random variable refers to the set of all its possible realizations. Remember that a random variable is a function that maps events (abstract) to mathematically representable entities (e.g. numbers).
For example, let $X$ be a random variable associated with tossing a coin, so that:
$X(\omega )={\begin{cases}1,&{\text{if}}\ \ \omega ={\text{heads}},\\0,&{\text{if}}\ \ \omega ={\text{tails}}.\end{cases}}
$
Therefore, if $Y$ is a multivariate random variable that for an experiment of tossing two coins, $Y=\{X_1,X_2\}$, the state space will be $Y=\{(1,1),(1,0),(0,1),(0,0)\}$.
|
Surface groups are residually finite I am looking at the short paper by Hempel "Residual finiteness of surface groups". Let $F$ be a compact orientable surface and let $f : S^1 \to F$ be a map that represents a nontrivial element of the fundamental group of $F$. Further assume that $f$ is an embedding. We want to construct a covering of $F$ by a compact surface $\tilde{F}$ such that the map $f$ can not be lifted to this covering.
First suppose that $f$ is a parameterization for one of the "standard generators" - then Hempel claims that we can construct a two-sheeted covering of $F$ that $f$ does not lift to. What is this construction??
Alternatively, suppose that the image of $f$ is homologically trivial (i.e. the image of $f$ is a product of commutators in $\pi_1(F)$. Then Hempel claims that we can construct a six-sheeted cover of $F$ corresponding to the kernel of a suitable map from $\pi_1(F)$ to the fundamental group of a genus 3 surface. What is this map and what is this construction?
This question was asked before and some parts of it were answered but I am confused about some of the unanswered parts:
John Hempel's proof of residual finiteness of surface groups
| Correction: Hempel used $\Sigma_3$ to denote the symmetric group on three letters, not the surface of genus three. Its order is $6$, therefore any surjective map onto it will have a kernel of index $6$.
Now to answer to your question, let us consider the possibilities of the components of a given curve $x$. The complement of the curve $x$ can be either connected (non-separating) or disconnected (separating). If the complement is connected then by the classification of surfaces, any two such curves are homeomorphic, hence start with a standard generator.
Case-1: Non-separating.
Recall that the algebraic intersection number $\hat{i}(x,y)$ is defined as the sum of the indices of the intersection points of $x$ and $y$, where an intersection point is of index $+1$ if orientation agrees with the surface, $-1$ otherwise. Also $\hat{i}(x,y)$ only depends on the homology classes of $x$ and $y$.
Now given any non-separating curve $x$ you can always find $y$ such that $\hat{i}(x,y)=1$.
Define the homomorphism $\hat{i}(\,\,,y)$ from $\pi_1(F)$ to $\mathbb{Z}_2$. The kernel is the required subgroup, and it gives the required cover ($x$ does not belong to the kernel, so it does not lift).
Case-2: Separating.
Let $x=\prod_{i=1}^n[x_i,y_i]$. Define the homomorphism $\phi:\pi_1(F)\rightarrow\Sigma_3$ as:
1) $\phi(x_1)=(1,2),\,\, \phi(y_1)=(1,3).$
2) $\phi(x_2)=(1,2), \,\, \phi(y_2)=(2,3).$
3) $\phi(x_i)=\phi(y_i)=e$ for all $i\neq 1,2.$
Check that the product of the commutators is $e$, therefore this is a homomorphism and $x$ is not in the kernel.
|
Find the value of $\lambda$ that maximises the area of a triangle on a hyperbola.
Consider the hyperbola: $$ \frac{x^2}{4} - \frac{y^2}{36}=1$$
$l_1$ and $l_2$ are the asymptotes
Consider a case where $P$ is located in the first quadrant. Through $P$, draw another line $CD$, where $C$ is on line $l_1$, D is on line $l_2$, and $P$ is between $C$ and $D$, such that $CP:PD=1: \lambda $
Find the value of $\lambda$, which will minimise the area of the triangle $COD$, and find this area.
What I have done:
The asymptotes of the graph are $y=3x$ and $y=-3x$
The point $P$ lies on the hyperbola so let the coordinates of this point be $(x_0 , y_0)$ with the properties that $x_0 >0$ and hence $ \frac{x_0^2}{4}-\frac{y_0^2}{36}=1$ which implies that $x_0 = \frac{\sqrt{36+y_0^2}}{3}$
The point $D$ is determined by $C$ and $P$ , so let's find C, a point $(c_1,c_2)$ with the properties that $c_1>0$ and $c_2=3c_1$
This is where I am stuck , I don't know if my approach is correct and even if it is I feel like the algebra is too complex , there should be a more simpler way of doing this.
| You can let $C$ have coordinates $(x_1,y_1)$ and $D$ have coordinates $(x_2,y_2)$.
Hence P has coordinates $(\frac{\lambda x_1+x_2}{\lambda +1},\frac{\lambda y_1+y_2}{\lambda +1})$
Asymptotes have equations $y=3x$ and $y=-3x$, hence $y_1=3x_1$ and $y_2=-3x_2$
Therefore P has coordinates $(\frac{\lambda x_1+x_2}{\lambda +1},\frac{3\lambda x_1-3x_2}{\lambda +1})$.
P also lies on the hyperbola hence substituting these coordinates into the equation of the hyperbola will get you a relationship between $x_1x_2$ and $\lambda$.
This allows you to find an expression for the area in terms of $\lambda$ which can then be minimised with derivatives.
|
Suppose $v_1,v_1+v_2,v_1+v_2+v_3,...,\sum_{i=1}^nv_i$ are linear independent. Show that $v_1,v_2,...,v_n$ are also linear independent.
Suppose $v_1,v_1+v_2,v_1+v_2+v_3,...,\sum_{i=1}^nv_i$ are linear independent. Show that $v_1,v_2,...,v_n$ are also linear independent.
Using the definition of linear independence, $$\alpha_1v_1+\alpha_2(v_1+v_2)+\alpha_3(v_1+v_2+v_3)+...+\alpha_n\sum_{i=1}^nv_i=0,$$ only if $\alpha_1=\alpha_2=...=\alpha_n=0$.
So $\sum_{i=1}^n\alpha_i\sum_{j=1}^iv_j=0$, if I'm correct.
Therefore $\sum_{i=1}^n\sum_{j=1}^i\alpha_iv_j=0$
I got stuck here and I don't know how to continue.
| By induction on $n.$
(i). $n=1.$ Trivial.
(ii). If true for case $n,$ let $u_1=v_1, u_2=v_1+v_2,..., u_{n+1}=v_1+...+v_{n+1}$ where $u_1,...,u_{n+1}$ are $n+1$ linearly independent vectors.. Note that $v_j=u_j-u_{j-1}$ for $1<j\leq n+1.$ So $$\sum_{i=1}^{n+1}K_iv_i=0\implies K_1u_1+\sum_{j=2}^{n+1} K_j(u_j-u_{j-1})=0.$$ The co-efficient of $u_{n+1}$ in the above sum is $K_{n+1}.$ By the independence of $u_1,...,u_{n+1}$ we have $K_{n+1}=0.$ Therefore $$\sum_{i=1}^{n+1}K_iv_i=\sum_{i=1}^nK_iv_i.$$ Since $u_1,...,u_n$ are $n$ linearly independent vectors and the result holds for case $n,$ we have $K_i=0$ for $1\leq i\leq n.$
|
Prove that two congruences by modulo are equivalent Given $a \equiv a'\pmod m$ and $b \equiv b' \pmod m$.
Prove that $ax \equiv b \pmod m$ and $a'x \equiv b' \pmod m$ are equivalent.
I understand I have to show that these two congruences modulo $m$ have the same set of solutions. But what exactly is a solution here?
| Note $\ a\equiv a',\, b\equiv b'\,\Rightarrow\, ax-b\equiv a'x-b'\,$ by the Congruence Product and Sum Rules.
Therefore we conclude $\,ax-b\equiv 0\iff a'x-b'\equiv 0,\ $ i.e. $\ ax\equiv b\iff a'x\equiv b'$
Remark $\ $ Alternatively we can note that $\,f(a,b) = ax-b\,$ is a polynomial in $\,a,b\,$ with integer coefficients, therefore $\, a,b\equiv a',b'\,\Rightarrow\, f(a,b)\equiv f(a',b')\,$ by the Polynomial Congruence Rule.
|
If $ \alpha_i, i=0,1,2...n-1 $ be the nth roots of unity, the $\sum_{i=0}^{n-1} \frac{\alpha_i}{3- \alpha_i}$ is equal to? If $ \alpha_i, i=0,1,2...n-1 $ be the nth roots of unity, the $\sum_{i=0}^{n-1} \frac{\alpha_i}{3- \alpha_i}$ is equal to?
A) $ \frac{n}{3^n-1} $
B) $ \frac{n-1}{3^n-1} $
C) $ \frac{n+1}{3^n-1} $
D) $ \frac{n+2}{3^n-1} $
Attempt: I know that $(3- \alpha_0)(3- \alpha_1)....(3-\alpha_{n-1})= (3^n-1)/2 $
But I have no clue about the numerator. Adding 3 and subracting 3 from the numberator would make the fraction simpler, but I would still have to sum up $ 1/(3- \alpha_i) $
| Let $\dfrac{a_i}{3-a_i}=b_i\iff a_i=\dfrac{3b_i}{1+b_i}$
As $a_i^n=1,$
$$\left(\dfrac{3b_i}{1+b_i}\right)^n=1\iff(3^n-1)b_i^n-\binom n1b_i^{n-1}+\cdots=0$$
By Vieta's formula,
$$\displaystyle\sum_{i=0}^{n-1}b_i=\dfrac{\binom n1}{3^n-1}$$
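A quick numerical check of the resulting closed form $\dfrac{n}{3^n-1}$ (option A), here for $n=7$:

import numpy as np

n = 7
a = np.exp(2j * np.pi * np.arange(n) / n)   # the n-th roots of unity
print(np.sum(a / (3 - a)))                  # imaginary part ~ 0
print(n / (3**n - 1))                       # the claimed value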
|
$X$ topological space and $S\subseteq X$ and $\overline{S}=X$. Show that if $S$ is connected, then $X$ is also $X$ topological space and $S\subseteq X$ and $\overline{S}=X$. Show that if $S$ is connected, then $X$ is also.
First of all, what is the intuition behind this? For example, what is a subset $S$ such that $\overline{S} = X$?. I can't even think of such thing... Is it a space that is almost $X$? And then when we take the closure and it's $X$. Well, what argument should be used to argue that $X$ will also be connected?
| Suppose, towards a contradiction, that $X$ is not connected, so that there is a continuous surjection $\varphi:X\rightarrow\{0,1\}$; without loss of generality let $\varphi(s)=0$ for some $s\in S$. As $S$ is connected, we have $\varphi[S]=\{0\}$. Now for any $x\in X-S$, since $\overline{S}=X$ we can find a net $\{x_{\delta}\}\subseteq S$ converging to $x$, and by continuity $\varphi(x)=\lim_{\delta}\varphi(x_{\delta})=\lim_{\delta}0=0$. Thus $\varphi$ is identically $0$, which contradicts the fact that $\varphi(x)=1$ for some $x\in X$.
|
What's wrong with this reasoning that $\frac{\infty}{\infty}=0$?
$$\frac{n}{\infty} + \frac{n}{\infty} +\dots = \frac{\infty}{\infty}$$
You can always break up $\infty/\infty$ into the left hand side, where n is an arbitrary number. However, on the left hand side $\frac{n}{\infty}$ is always equal to $0$. Thus $\frac{\infty}{\infty}$ should always equal $0$.
| I think an issue with your reasoning as well is that to "split up infinity" you would have to have an infinite sum of finite terms. You can see with Riemann integration that an infinite sum of 0s is not necessarily 0. Of course none of this is rigorous but I think it should help you intuition a little.
|
Solve for $x$ : $x^4 = 5(x-1)(x^2 - x +1)$
Solve for $x$ : $x^4 = 5(x-1)(x^2 - x +1)$
I am having difficulty factoring the whole equation so that I can equate each factor to $0$ one by one and get the roots accordingly.
| Divide both sides by $x^4$ and change variable to $y = \frac{x-1}{x^2}$, we have
$$\begin{align}1 = 5y(1-y)
\iff & 4y(y-1) = -\frac45 \iff
(2y-1)^2 = \frac15\\
\implies & y = \frac12\left(1 \pm \frac{1}{\sqrt{5}}\right) = \frac{\sqrt{5}\pm 1}{2\sqrt{5}} = \frac{2}{5 \mp \sqrt{5}}
\end{align}
$$
Substitute $y$ back by $\frac{x-1}{x^2}$, we get
$$
x^2 - \frac{5\mp\sqrt{5}}{2} (x - 1) = 0
\iff \left(x - \frac{5\mp\sqrt{5}}{4}\right)^2 = \left(\frac{5\mp\sqrt{5}}{4}\right)^2 -
\frac{5\mp\sqrt{5}}{2}
= -\frac{5 \pm \sqrt{5}}{8}
$$
This leads to
$\displaystyle\;x = \frac{5 - \epsilon \sqrt{5} \pm i \sqrt{2(5 + \epsilon \sqrt{5})}}{4}$ where $\epsilon = \pm 1$.
If one insists on factoring the equation first, one can treat $x^2$ and $(x-1)$ as two seperate units and rearrange them:
$$\begin{align}
& x^4 = 5(x-1)(x^2 - x + 1)\\
\iff & x^4 - 5x^2(x-1) + 5(x-1)^2 = 0\\
\iff & \left(x^2 - \frac52(x-1)\right)^2 = \left(\frac{25}{4} - 5\right)(x-1)^2 = \frac{5}{4}(x-1)^2\\
\iff & \left(x^2 - \frac{5 + \sqrt{5}}{2}(x-1)\right)\left(x^2 - \frac{5 - \sqrt{5}}{2}(x-1)\right) = 0
\end{align}
$$
|
Drunk man with a set of keys. I found this problem in a contest of years ago, but I'm not very good at probability, so I prefer to see how you do it:
A man gets drunk half of the days of a month. To open his house, he has a set of keys with $5$ keys that are all very similar, and only one key lets him enter his house. Even when he arrives sober he doesn't know which key is the correct one, so he tries them one by one until he chooses the correct key. When he's drunk, he also tries the keys one by one, but he can't distinguish which keys he has tried before, so he may repeat the same key.
One day we saw that he opened the door on his third try.
What is the probability that he was drunk that day?
| Suppose you considered $1000$ instances where the drunk man came to the door. On average, $500$ of them will be when he is drunk, and $500$ when he is sober. If he is sober, then the chance of him taking $3$ tries to open the door is $\frac{1}{5}$ by symmetry. If he is drunk, then the chance is $\frac{4}{5}\cdot\frac{4}{5}\cdot\frac{1}{5}=\frac{16}{125}$. This means that on average there will be $100$ instances of him taking $3$ tries whilst sober, and $64$ whilst drunk. Given that we know he has taken $3$ tries, the probability of him being drunk is therefore
$$
\frac{64}{64+100}=\frac{16}{41} \, .
$$
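The same number comes out of a direct simulation (a quick sketch):

import random

random.seed(0)
drunk_and_3 = total_3 = 0
for _ in range(10**6):
    drunk = random.random() < 0.5
    if drunk:
        tries = 1
        while random.random() >= 1/5:    # keys tried with replacement
            tries += 1
    else:
        tries = random.randint(1, 5)     # correct key equally likely at any position
    if tries == 3:
        total_3 += 1
        drunk_and_3 += drunk
print(drunk_and_3 / total_3, 16/41)      # both close to 0.39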
|
Last added floating point term of harmonic series In a system using IEEE 754 single precision floating point numbers, if we start calculating the sum $\sum_{i=0}^n 1/i$ , because of the precision the sum will not go to infinity but after a term N, any term added won't change the sum, which in our case must be ~15.4 . What's the way of finding this term N?
| A rough estimation:
$\epsilon$ is about $5\times10^{-8}$. If $\oplus$ denotes machine-arithmetic addition, we have
$$1 \oplus \epsilon= 1$$
and for arbitrary $n$
$$ n \oplus n\delta = n$$
for all
$$\delta \lt \delta_x$$
with $\delta_x \in (\epsilon, 2\epsilon)$.
For 32 bit arithmetic we have
$$\epsilon=2^{-24} \approx 5.96\cdot 10^{-8}$$
We have approximately
$$\sum_{i=1}^{n}{\frac{1}{i}}\approx \ln(n)+0.6$$
So if
$$5 \cdot 10^{-8}\cdot\bigoplus_{i=1}^{n}{\frac{1}{i}} \approx\frac{1}{n}$$
the sum won't increase anymore.
$\bigoplus_{i=1}^{n}{\frac{1}{i}}$ is the sum $\sum_{i=1}^{n}{\frac{1}{i}}$ calculated using machine arithmetic. Since $\bigoplus_{i=1}^{n}{\frac{1}{i}}$ is approximately $\sum_{i=1}^{n}{\frac{1}{i}}$, we can substitute it by $\ln(n)+0.6$ and get
$$5 \cdot 10^{-8}\cdot \ln(n)\approx\frac{1}{n}$$
and further
$$n\ln{(n)} \approx 10^7$$
The solution of this equation is $$n\approx10^{6}$$ and $$\sum_{i=1}^{n}{\frac{1}{i}} \approx \ln(n)+0.6 \approx14.4$$
The following Python 3.5 program will estimate eps, estimate $n$, and try to find $n$ by calculating the harmonic series. Note that some of the calculation may not be done in single precision but may use a higher precision. I use Python arrays to simulate 32-bit arithmetic according to https://stackoverflow.com/a/2232650/754550 .
import array
a=array.array('f')
a.append(1.0)
a.append(1.0)
a.append(1.0)
print('bytes of single float:',a.itemsize)
print('estimate eps:')
n=0
while True:
    n+=1
    a[1]=a[0]
    a[1]+=1.0/2**n
    if (a[0]==a[1]):
        print('eps ~ 2 **',-n)
        break
print('')
estimated_sum=14.4
print('find n for estimated sum',estimated_sum)
eps=1.0/2**24
a[0]=estimated_sum
for i in range(2):
    a[1]=a[0]
    delta=a[0]*(eps/2**i)
    a[1]+=delta
    print('n =',int(1/delta),'estimated_sum==estimated_sum+1/n (',a[0]==a[1],')')
print('')
print('harmonic series:')
print('calculate n such that h(n)=h(n)+1/n')
n=0
a[1]=0
while True:
    n+=1
    a[2]=1.0/n
    # remember the (n-1)th partial sum of the harmonic series
    # calculate the n-th partial sum of the harmonic series
    # terminate if the partial sum does not change anymore
    a[0]=a[1]
    a[1]+=a[2]
    if (a[0]==a[1]):
        print('n =',n)
        print('h(n) = ',a[0])
        break
This prints out the following:
bytes of single float: 4
estimate eps:
eps ~ 2 ** -24
find n for estimated sum 14.4
n = 1165084 estimated_sum==estimated_sum+1/n ( False )
n = 2330168 estimated_sum==estimated_sum+1/n ( True )
harmonic series:
calculate n such that h(n)=h(n)+1/n
n = 2097152
h(n) = 15.403682708740234
|
Finding $E(Y_1 \mid \max(Y_1, Y_2))$ for independent $Y_1, Y_2$ random variables. Suppose that $Y_1, Y_2$ are two independent observations from the following distribution:
$$
f_\theta(y) = \begin{cases} \dfrac{3y^2}{\theta^3} &\text{for } 0 <y \leq \theta, \ \theta>0 \\[8pt]
0 & \text{otherwise}\end{cases}
$$
I am trying to find $E(Y_1 \mid \max(Y_1, Y_2))$, and am not sure how to do this. I can directly compute the integral but I still need to find the joint distribution of $Y_1$ and $\max(Y_1,Y_2)$, which I am not sure how to derive.
| Hint: to find the joint pdf of $(Y_1,\max(Y_1,Y_2))$:
The support of $(Y_1,\max(Y_1,Y_2))$ is $\{(y_1,y_2) : 0 < y_1 \le y_2 \le \theta\}$.
Observe first that the CDF of $Y_i$ is $F_\theta(y)=\frac{y^3}{\theta^3}$ for $
0<y<\theta$
Now if $F$ and $f$ are the joint CDF and pdf of $Y_1$ and $Y_2$, then for $0<y_1, y_2\le\theta$:
$$F(y_1,y_2)=P(Y_1\le y_1,\max(Y_1,Y_2)\le y_2)=P(Y_1\le y_1,Y_1\le y_2,Y_2\le y_2)\\=P(Y_1\le\min(y_1,y_2),Y_2\le y_2)=P(Y_1\le \min(y_1,y_2))P(Y_2\le y_2)=F_\theta(\min(y_1,y_2))F_\theta(y_2)$$
So
$$F(y_1,y_2)=
\begin{cases}
F_\theta(y_1)F_\theta(y_2)=\frac{y_1^3 y_2^3}{\theta^6}& \text{ if } y_1\le y_2\\
F_\theta(y_2)F_\theta(y_2)=\frac{y_2^6}{\theta^6}& \text{ if } y_1> y_2
\end{cases}
$$
The problem with this approach is that when you differentiate, there is a Dirac component coming from a discontinuity of $F$, which is messy to work with.
So for the simplest path to final goal, you should follow leonbloy advice in his answer and condition on whether $Y_1\le Y_2$ or not:
$$E[Y_1|\max(Y_1,Y_2)]=E[Y_1|\max(Y_1,Y_2),Y_1\le Y_2]P(Y_1\le Y_2)+E[Y_1|\max(Y_1,Y_2),Y_1>Y_2]P(Y_1>Y_2)$$
|
A function discontinuous on a countably infinite set This is a proof from Royden's Real Analysis and there is a part that I do not understand( Underlined in red). Any explanation is much appreciated.
| To put it simply, consider a point $x_0$ in the set $(a,b) \sim C$. Now take a neighborhood about $x_0$ and notice that if you take a neighborhood small enough you can ensure that no points $q_1 ,q_2, \ldots ,q_{n}$ are in this neighborhood. Therefore if $x$ is a point in the said neighborhood then $|f(x_0)-f(x)|$ will be at most the tail end of the series $\sum \frac{1}{2^n}$ from the point $\frac{1}{2^{n+1}}$ onward. And this tail end sum can be shown to be less than $\frac{1}{2^n}$
|
Rigorous calculus textbook for first and second year students I am going to be teaching calculus to first and second year students(in two separate courses) and I wanted a textbook for myself to go over the content so I can teach it better. I am a 4th year student and know advanced topics like scheme theory and connections on bundles, so really I just want to find a textbook that covers the content as rigorously as possible with as little handwaving as possible, that doesn't generalise to manifolds etc immediately. (I only mention advanced topics to emphasise that difficulty isn't a problem, but I do want it to focus on standard early concepts)
Which calculus textbooks teach first and second year content rigorously - I have heard bad things about Stewart, but I would like it to cover the same sort of level of content.
| In my opinion Apostol's Calculus (Volume 2 for Multivariable) when coupled with his Real Analysis Book is great. The former are computationally intense, and the latter offers a good foundation in the theory, I thought.
|
Convergence of a series with a parameter $a>0$ Hello I have this exercise:
Please help me determine what values of $a > 0$ the series below converges:
$$\sum_{n=1}^{\infty} \left(\frac{a n+2}{3n+1}\right)^{n}$$
| Hint:
When $a=3$,
$$\sum_{n=1}^{\infty} \left(\frac{3 n+2}{3n+1}\right)^{n}
=\sum_{n=1}^{\infty} \left(1+\frac1{3n+1}\right)^{n}
=\sum_{m=4,\text{ step }3}^{\infty} \left(1+\frac1m\right)^{(m-1)/3}\\
=\sum_{m=4,\text{ step }3}^{\infty}{\frac{\sqrt[3]{\left(1+\frac1m\right)^m}}{\sqrt[3]{1+\frac1m}}}.$$
The expression of the numerator should ring a bell.
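The bell it should ring: $\left(1+\frac1m\right)^m \to e$, so the general term tends to $e^{1/3}\neq 0$ when $a=3$. A one-line numerical check:

import math

n = 10**6
print(((3*n + 2) / (3*n + 1))**n, math.e**(1/3))   # both about 1.3956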
|
last digit is a square and..... I've found some solutions for this questions but they were not impressive.
Question:
How many natural numbers are there in base $10$,whose last digit is perfect square,combination of last two digits is a perfect square,combination of last three digits is a perfect square,$\ldots$,combination of last $n$ digits is a perfect square?
For example $64$ is a number whose last digit is a perfect square and combination of last two digits is also a perfect square.
Kindly tell me how to approach this question.
| The answers are of the following forms:
(i)$4\times10^n$
(ii)$9\times10^n$
(iii)$10^n$
(iv)$49\times10^n$
(v)$64\times10^n$
(vi)$81\times10^n$
where $n \in \{0, 2, 4, 6, \ldots\}$.
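A brute-force search in Python (up to $10^6$) turns up exactly the numbers of these forms:

from math import isqrt

def is_square(k):
    r = isqrt(k)
    return r * r == k

good = [n for n in range(1, 10**6)
        if all(is_square(int(str(n)[-i:])) for i in range(1, len(str(n)) + 1))]
print(good)
# [1, 4, 9, 49, 64, 81, 100, 400, 900, 4900, 6400, 8100,
#  10000, 40000, 90000, 490000, 640000, 810000]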
|
Uncountable set of irrational numbers such that no two elements sum to a rational I'm trying to find such a set as described in the title to prove that the following metric space is separable:
$([0,1],d)$
where
$ d(x,y) = \left\{
\begin{array}{ll}
|x-y| & \mbox{if } x-y \in \mathbb{Q}\\
5 & \mbox{if } x-y \not\in \mathbb{Q}
\end{array}
\right.$
If there exists an easier approach then any suggestions are welcome. Also if I'm going the wrong way and it is separable I would like to ask how to do it the other way around.
| Note that $x\sim y\iff x-y\in\Bbb Q$ defines an equivalence relation on $[0,1]$. Moreover, this equivalence relation has uncountably many equivalence classes. Suppose that $V\subseteq[0,1]$ is a set which meets every equivalence class exactly on one point, and now consider $A=\{B(x,1/3)\mid x\in V\}$.
Show that $A$ is an uncountable set of pairwise disjoint open sets.
|
$F_p(a)$ contains all the roots of $f$ Let $p$ be a prime, $n\in \mathbb{N}$ and $f=x^{p^n}-x-1\in \mathbb{F}_p[x]$ irreducible .
Let $a\in \overline{\mathbb{F}}_p$(=algebraic closure of $\mathbb{F}_p$) is a root of $f$.
I want to show that $\mathbb{F}_p(a)$ contains all the roots of $f$.
$$$$
If $a\in \mathbb{F}_p$ then $a^p=a$ and then
$a^{p^n}=(a^p)^{p^{n-1}}=a^{p^{n-1}}=(a^p)^{p^{n-2}}=a^{p^{n-2}}=\ldots =a$
So, $f(a)=a^{p^n}-a-1=-1\neq 0$.
Thus it must be $a\notin \mathbb{F}_p$.
Could you give me a hint how we could show that $\mathbb{F}_p(a)$ contains all the roots of $f$ ?
| Let $|F_p(a)|=g$. Then as $F_p(a)^\times$ is a group of order $g-1$, all elements of $F_p(a)$ satisfy $x^g-x=0$. Because $F_p$ is a field, this polynomial, which has all distinct roots as its derivative is identically $-1$ has distinct roots in the algebraic closure, in particular, all elements of $F_p(a)$ are roots of it. However, since--in particular--$a\in F_p(a)$ the minimal polynomial for $a$ divides $x^g-x$. So since all the roots of the minimal polynomial of $a$ are roots of $x^g-x$, a fortiori all the roots of the minimal polynomial of $a$ are in $F_p(a)$.
|
How do I calculate surface integral? For the vector field $a = [-z^2–2z,-2xz+2y^2,-2xz-2z^2]^T$ and the area $F$ on the cylinder $x^2 + z^2 = 4$ , which is above the ground plane $z = 0$ , in front of the plane $x = 0$ and between the cross plane $y = 0$ and lies to the their parallel plane $y = 2$ , calculate the following integral:
$\int_{F}^{} \! a\cdot dn \, = ?$
So I use that:
$x=2cos(u)$
$y=v$
$z=2sin(u)$
and than I calculate normal vector.
I get integral
$\int_{0}^{2}\int_{0}^{\Pi/2}\begin{pmatrix}-z^2–2z\\-2xz+2y^2\\-2xz-2z^2\end{pmatrix}\cdot \begin{pmatrix}2sin(u)\\ 0\\ 2sin(u)\end{pmatrix}dudv $
$\int_{0}^{2}\int_{0}^{\Pi/2}\begin{pmatrix}-(2sin(u))^2–4sin(u)\\-2(2cos(u))(2sin(u))+2v^2\\-2(2cos(u))(2sin(u))-2(2sin(u))^2\end{pmatrix}\cdot \begin{pmatrix}2sin(u)\\ 0\\ 2sin(u)\end{pmatrix}dudv $
$\int_{0}^{2}\int_{0}^{\Pi/2}\begin{pmatrix}-8sin^3(u)–8sin^2(u)\\0\\-16cos(u)sin^2(u))-16sin^2(u)\end{pmatrix} $
I was to lazy to replace x,y and z, but at the end I get wrong solution and I need a lot of time to calculate, is there any better method?
| Use the Divergence theorem, and for your triple integral, work with cylindrical coordinates. From there, it's just a matter of evaluating the iterated integral. Just set up the integral bounds and you are good to go.
|
Strengthening the Antecedent: From B implies C, infer (A ^ B) implies C How can I construct a Fitch style proof to prove this?
I have tried
*
*B $\rightarrow$ C
*A $\land$ B
*B $\quad\quad$ $\land$ Elim: $2$
*C $\quad\quad$ $\rightarrow$ Elim: $1,3$
$5$. (A $\land$ B) $\rightarrow$ C $\quad\quad$ $\rightarrow$ Intro: $2-4$
| You may want to indicate what kind of Elim and Intro you do:
Line 3 is an $\land$ Elim
Line 4 is an $\rightarrow$ Elim
Line 5 is an $\rightarrow$ Intro
Otherwise perfect!
|
What is the absolute value on $\mathbb{Z}_p[X]$ in this context? Let $\mathbb{Z}_p$ denote the ring of p-adic integers for some prime $p \in \mathbb{Z}$.
What is the absolute value on $\mathbb{Z}_p[X]$ ?
The motivation for the question comes from a proof of Hensel's lemma where we construct a sequence of polynomials $g_n(X) \in \mathbb{Z}_p[X]$ such that $g_n(X) \to g(X)$. I would like to make sure that I understand what exactly $g_n(X) \to g(X)$ means in this context. The proof is found on page 74 in the book "p-adic Numbers" by Gouvea.
Can we just take $\lvert a_o + a_1X + \cdots + a_nX^n\rvert_p:= \underset{0 \leq i \leq n} \max \lvert a_i \rvert_p$ ?
This would satisfy the triangle inequality and positive-definiteness. However I am not convinced that the multiplicative property holds.
EDIT: Or perhaps I am mistaken looking for an absolute value in this context. Perhaps I am really looking for a norm, in which case the suggestion should work.
| Are you familiar with valuations? The $p$-adic valuation gives a metric on $\mathbb Q$ and this is used a lot. I am unfamiliar with the particular proof, but the ones I have seen have only used valuations on evaluations on polynomials or similar.
In any case, there are possible generalisations of the $p$-adic valuation (Socratic question: how would you define it?) to the ring in question, and the most natural one can be made to satisfy the properties you want if I remember correctly.
further edit: As I alluded to and as the topmost comment points out, there is no need to take this too far since the proof doesn't need it.
|
How to determine the convergence of $\sum_{n\geq 2}{} \frac{(-1)^{n}}{(-1)^n+n}$? This is most likely very easy to show, but with the load of midterms I had, my brain just declines to work properly. How do I determine the convergence of $$\sum_{n\geq 2}{} \frac{(-1)^{n}}{(-1)^n+n}$$?
| HINT:
$$\begin{align}
\sum_{n=2}^{2N}\frac{(-1)^n}{(-1)^n+n}&=\sum_{n=1}^{N}\left(\frac{1}{2n+1}-\frac{1}{2n}\right)+\frac{1}{2N}\\\\
&=\frac1{2N}-\sum_{n=1}^N \frac{1}{2n(2n+1)}
\end{align}$$
SPOILER ALERT: the full solution follows below.
First, since $\left|\sum_{n=1}^N\frac{1}{2n(2n+1)}\right|\le \frac{1}{4}\sum_{n=1}^\infty \frac1{n^2}=\frac{\pi^2}{24}$, we see that the series of interest converges. In fact, we can evaluate the series in closed form. Proceeding, we write

$$\begin{align}
\sum_{n=2}^{2N}\frac{(-1)^n}{(-1)^n+n}&=\sum_{n=1}^N\left(\frac{1}{2n+1}-\frac{1}{2n}\right)+\frac1{2N}\\\\
&=\sum_{n=1}^N\left(\frac{1}{2n+1}+\frac{1}{2n}\right)-\sum_{n=1}^N\frac1n+\frac1{2N}\\\\
&=-1+\sum_{n=1}^{2N+1}\frac1n-\sum_{n=1}^N\frac1n+\frac1{2N}\\\\
&=-1+\sum_{n=1}^{N+1}\frac{1}{n+N}+\frac1{2N}\\\\
&=-1+\frac1N\sum_{n=1}^{N+1}\frac{1}{1+n/N}+\frac{1}{2N} \tag{A1}\\\\
&\to -1+\int_0^1 \frac{1}{1+x}\,dx\,\,\text{as}\,\,N\to \infty \tag{A2}\\\\
&=-1+\log(2)
\end{align}$$

where we used only elementary arithmetic to take us to $(A1)$ and recognized the sum in $(A1)$ as a Riemann sum to arrive at $(A2)$.

An alternative way forward to evaluating the series is to write

$$\sum_{n=1}^N\left(\frac{1}{2n+1}-\frac{1}{2n}\right)+\frac1{2N}=-1+\sum_{n=1}^{2N+1}\frac{(-1)^{n-1}}{n}+\frac{1}{2N}$$

Then, recalling that $\log(1+x)$ has the Taylor series representation $\log(1+x)=\sum_{n=1}^\infty \frac{(-1)^{n-1}x^n}{n}$ for $-1<x\le 1$, we see that

$$\sum_{n=2}^\infty\frac{(-1)^n}{(-1)^n+n}=-1+\log(2)$$

as expected!
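A quick numerical check of the partial sums against the closed form $\log(2)-1\approx-0.3069$, a minimal sketch:

```python
# Hedged sketch: partial sums of sum_{n>=2} (-1)^n / ((-1)^n + n) versus log(2) - 1.
import math

def partial_sum(N):
    return sum((-1) ** n / ((-1) ** n + n) for n in range(2, N + 1))

for N in (10, 100, 10_000):
    print(N, partial_sum(N), math.log(2) - 1)   # partial sums settle near -0.30685
```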
|
Finite dimensional fibres I can't understand why the following two statements are equivalent:
1. the fibre of $\mbox{Spec}(B) \to \mbox{Spec}(A)$ over any field-valued point $\mbox{Spec}(K) \to \mbox{Spec}(A)$ is $0$-dimensional or empty;
2. every fibre of $\mbox{Spec}(B) \to \mbox{Spec}(A)$ is a finite set.
It looks like it is a corollary of the finiteness of base change (if $A \to B$ is finite, then for any map $A \to A'$ the base change $A' \to B \otimes_A A'$ is finite too), but I can't see how to obtain these two properties from that fact.
| It is clear that the first statement is stronger than the second one. So we have to show that the second one implies the first.
Let $K$ be a field with a map $A \to K$. The kernel $\mathfrak p$ of this map is prime, because the image is an integral domain. The map factors as
$$A \to A/\mathfrak p \to \operatorname{Frac}(A/\mathfrak p) \to K,$$
hence the base change $K \to B \otimes_A K$ can be obtained from $A \to B$ via the intermediate step $\operatorname{Frac}(A/\mathfrak p) \to B \otimes_A \operatorname{Frac}(A/\mathfrak p)$.
The second statement says that $\operatorname{Frac}(A/\mathfrak p) \to B \otimes_A \operatorname{Frac}(A/\mathfrak p)$ is finite, hence $K \to B \otimes_A K$ is also finite as a base change.
|
Deriving Block Soft Threshold from $ {L}_{2} $ Norm (Prox Operator) I'm trying to minimize $\frac{1}{2}||x - d||^2 + \lambda ||x||$ with respect to $x$ where the norm concerned is the $L_2$ norm, and $x$ and $d$ are vectors.
I think the answer I should be arriving at is $[1 - \frac{\lambda}{||d||}]_+ d$.
EDIT: In an attempt to answer my own question after learning up on subgradients:
The first-order optimality condition reads
$$0 \in x - d + \lambda \partial ||x|| $$
where $\partial$ denotes the subdifferential. It now branches into two scenarios:
1) If $x=0$, then the optimality condition becomes $$0 \in -d + \lambda \{g : ||g||\leq 1 \}$$ Rearranging the terms yields that $||d|| \leq \lambda $. Thus, the minimizer in this case is $\hat{x} = 0$ when $||d|| \leq \lambda$.
2) If $x \neq 0$, then the optimality condition becomes $$ 0 = x - d + \lambda \frac{x}{||x||}$$ which implies $x = d - \lambda \frac{x}{||x||}$. The next step is $$ x = d - \lambda \frac{x}{||x||} \iff \hat{x} = d - \lambda \frac{d}{||d||} \tag{*}$$ How does one arrive at and intuit step $(*)$? I can verify that it is true, but do not know how I would have derived it had I not known the answer I'm supposed to get to. I can finish the rest, but I would really appreciate help on step $(*)$! Thanks in advance.
| If you want to establish $x = d - \lambda \frac{x}{\|x\|}$, note that it implies the vectors $d$ and $x$ point in the same direction: indeed, $d = x + \lambda \frac{x}{\|x\|} = \left(1 + \frac{\lambda}{\|x\|}\right)x$ is a positive multiple of $x$. That is to say, $\frac{x}{\|x\|}$ and $\frac{d}{\|d\|}$ are the same unit vector, so they are interchangeable in step $(*)$.
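A minimal numerical sketch of the closed-form prox, assuming `scipy` is available; the helper name `prox_l2` and the comparison against a generic smooth optimizer are illustrative, not part of the derivation. When $\|d\| > \lambda$ the minimizer lies away from the origin, where the objective is smooth, so a black-box method should land on the same point as $\left[1-\frac{\lambda}{\|d\|}\right]_+ d$:

```python
# Hedged sketch: block soft threshold vs. a generic numerical minimizer.
import numpy as np
from scipy.optimize import minimize

def prox_l2(d, lam):
    """argmin_x 0.5*||x - d||^2 + lam*||x||_2  (block soft threshold)."""
    nd = np.linalg.norm(d)
    return np.zeros_like(d) if nd <= lam else (1.0 - lam / nd) * d

rng = np.random.default_rng(0)
d = rng.normal(size=5)
lam = 0.7

objective = lambda x: 0.5 * np.sum((x - d) ** 2) + lam * np.linalg.norm(x)
x_numeric = minimize(objective, x0=d).x    # start at d, in the smooth region

print(np.allclose(prox_l2(d, lam), x_numeric, atol=1e-4))  # expect True when ||d|| > lam
```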
|
Weird Laplace Equation. I'm stuck with this PDE problem.
Let $R>0$ and let $u:\overline{B(0,R)}\to\mathbb{R}$ be a continuous function such that:
$$\left\{\begin{matrix} \Delta u +u= u^3 & \text{ in } B(0,R)\\ u = 0 & \text{ on } \partial B(0,R)\end{matrix}\right.$$
Prove that $|u(x)|\leq 1$ for all $x\in B(0,R)$.
I've managed to prove that:
$$\int_{B(0,R)} (u^4-u^2) \leq 0$$
But it doesn't seem to be useful. (It can be seen by multiplying the equation by $u$ and then integrating using Green's identity).
| It is not necessary to use an $L^p$ estimate. Let $s^+$ denote the positive part of $s$, namely $s^+=0$ if $s\le0$ and $s^+=s$ if $s>0$. Noting that $(u^2-1)^+|_{\partial B(0,R)}=0$ (since $u=0$ there, so $u^2-1=-1<0$), by Green's formula one has
\begin{eqnarray}
\int_{B(0,R)}|\nabla(u^2-1)^+|^2dx&=&\int_{B(0,R)}\nabla(u^2-1)\nabla(u^2-1)^+dx \\
&=&-\int_{B(0,R)}(u^2-1)^+\Delta(u^2-1)dx \\
&=&-\int_{B(0,R)}(u^2-1)^+(2|\nabla u|^2+2u\Delta u)dx\\
&=&-2\int_{B(0,R)}(u^2-1)^+|\nabla u|^2dx-2\int_{B(0,R)}(u^2-1)^+u\Delta udx\\
&=&-2\int_{B(0,R)}(u^2-1)^+|\nabla u|^2dx-2\int_{B(0,R)}(u^2-1)^+u(u^3-u)dx\\
&=&-2\int_{B(0,R)}(u^2-1)^+|\nabla u|^2dx+2\int_{B(0,R)}(u^2-1)^+u^2(1-u^2)dx\\
&\le&0
\end{eqnarray}
So $\nabla(u^2-1)^+=0$, hence $(u^2-1)^+$ is constant on $B(0,R)$; since it vanishes on $\partial B(0,R)$, we get $(u^2-1)^+\equiv 0$, i.e. $|u|\le1$.
|