Root bracketing in complex space | See Computing the Zeros of Analytic Functions for theory and code.
Despite the title, it handles computing zeros and poles of meromorphic functions. |
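For a concrete starting point, the zero/pole count inside a contour can be obtained numerically from the argument principle, which is the basis of such methods; a minimal Python sketch (the function name and the discretization are mine, not from the paper):

import numpy as np

def count_zeros_minus_poles(f, df, center=0.0, radius=1.0, n=4000):
    # Argument principle: (1 / 2 pi i) * contour integral of f'/f equals
    # the number of zeros minus poles (with multiplicity) inside.
    t = np.linspace(0.0, 2.0 * np.pi, n, endpoint=False)
    z = center + radius * np.exp(1j * t)
    dz = 1j * radius * np.exp(1j * t) * (2.0 * np.pi / n)
    return np.sum(df(z) / f(z) * dz) / (2.0j * np.pi)

# f(z) = z^2 + 1 has two zeros (at +i and -i) inside |z| < 2:
print(count_zeros_minus_poles(lambda z: z**2 + 1, lambda z: 2 * z, radius=2.0))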
Calculating Joint Probability Distribution from Bayesian Network | $H$ has no parents. $H$ is the parent of $B$. $H$ is the parent of $L$. $B$ and $L$ are the parents of $F$. $L$ is the parent of $C$.
$\def\P{\operatorname{\sf P}}$Thus the factorisation for $\P(H,\neg B, L, \neg F, \neg C)$ is $$\begin{align}\P(H,\neg B, L, \neg F, \neg C)~&=~\mathsf P(H)\P(\neg B\mid H)\P(L\mid H)\P(\neg F\mid \neg B, L)\P(\neg C\mid L)\end{align}$$
Likewise the factorisation for $\P(F\mid L)$ is (by total probability)
$$\begin{align}\P(F\mid L)&=\P(F,B\mid L)+\P(F,\neg B\mid L)\\&=\P(F\mid B,L)\P(B\mid L)+\P(F\mid \neg B, L)\P(\neg B\mid L)\end{align}$$
You have this correct up to here. However, $B$ and $L$ are not independent; they are only conditionally independent given $H$.
$\begin{align}\P(B\mid L)&= \dfrac{\P(B\mid H)\P(L\mid H)\P(H)+\P(B\mid\neg H)\P(L\mid\neg H)\P(\neg H)}{\P(L\mid H)\P(H)+\P(L\mid\neg H)\P(\neg H)}\end{align}$
and likewise
$\begin{align}\P(\neg B\mid L)&= \dfrac{\P(\neg B\mid H)\P(L\mid H)\P(H)+\P(\neg B\mid\neg H)\P(L\mid\neg H)\P(\neg H)}{\P(L\mid H)\P(H)+\P(L\mid\neg H)\P(\neg H)}\end{align}$ |
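As a quick numerical sanity check of these formulas, here is a sketch with made-up CPT values (the numbers are hypothetical, not from the question):

P_H = 0.3                           # hypothetical P(H)
P_B = {True: 0.9, False: 0.2}       # hypothetical P(B | H), P(B | not H)
P_L = {True: 0.7, False: 0.1}       # hypothetical P(L | H), P(L | not H)

def P(h):                           # P(H = h)
    return P_H if h else 1 - P_H

num = sum(P_B[h] * P_L[h] * P(h) for h in (True, False))
den = sum(P_L[h] * P(h) for h in (True, False))
print(num / den)                    # P(B | L), by the formula above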
Characters defined by cyclic extensions | Let me try to answer your question and @Sommer's question on "Galois group of extension generated by all cubic roots" at the same time.
First the purely algebraic part. Let $p$ be an odd prime, $\mu_p$ the group of $p$-th roots of 1, $k$ a field of characteristic distinct from $p$. Let $K = k(\mu_p)$, $\Delta = Gal(K/k)$, $F$ a cyclic extension of $k$ of degree $p$. Then $E := F.K$ is abelian, and by Kummer theory, $E$ is obtained from $K$ by adding a $p$-th root of an element $\alpha \in K^*$. Let $G_K (p)$ be the Galois group of the maximal abelian extension of $K$ of exponent $p$, and $\mathcal K = K^* /(K^*)^p$. Still by Kummer theory, $G_K (p) \cong Hom (\mathcal K, \mu_p)$, and $\Delta = Gal(K/k)$ acts naturally on it. It is convenient to write $G_K (p)$ additively and to think of it as a vector space over the prime field $\mathbf F_p$. As a vector space, $G_K (p)$ is thus the dual space of $\mathcal K$, but it is not so as a $\Delta$-module, because $\Delta$ acts non-trivially on $\mu_p$ (via the so-called Teichmüller character). Let $\langle\alpha\rangle$ be the subgroup generated by the class of $\alpha$ modulo $(K^*)^p$. The fact that $E/k$ is abelian means that $\Delta$ acts trivially by conjugation on $Gal(F/k)\cong Gal(E/K)\cong Hom (\langle\alpha\rangle, \mu_p)$, in other words, that $Gal(F/k)$ is a quotient of the co-invariants $G_K (p)_{\Delta}$. In conclusion, $G_k (p) \cong Hom (\mathcal K, \mu_p)^{\Delta} = Hom(\mathcal K (-1)_{\Delta}, \mathbf F_p)$, or equivalently, $Hom (G_k (p),\mathbf F_p) \cong \mathcal K (-1)_{\Delta}$. Here $M(-1)$ denotes the module $M$ with Galois action modified in such a way that $Hom (M(-1), \mathbf F_p) = Hom (M, \mu_p)$ ("Tate twist").
Then the arithmetic part, which could be complicated for a general number field, but here $p = 3$, $k = \mathbf Q$, $K = k(\mu_3)$, and $\Delta$ is generated by « complex conjugation ». Since $p$ is odd, one can decompose any abelian $p$-group on which $\Delta$ acts into the direct sum of its « plus » and « minus » components. Thus, taking into account the Teichmüller character as noted above, we get $G_{\mathbf Q} (3) \cong Hom (\mathcal K, \mu_3)^{+}$, or equivalently, $Hom (G_{\mathbf Q} (3), \mathbf F_3) \cong \mathcal K^{-}$. This seems huge, but it is not quite so. We could determine $\mathcal K^{-}$ using the local-global principle for cubic powers in CFT, but it amounts to the same to apply directly the Kronecker-Weber theorem to determine $G_{\mathbf Q} (3)$. The K-W theorem asserts that the Galois group of the maximal abelian extension of $\mathbf Q$ is isomorphic to the direct product of all the $l$-adic units $\mathbf Z_l^*$. But $\mathbf Z_l^* \cong \mathbf Z/(l -1) \times \mathbf Z_l$ (additive notation) if $l$ is odd, and $\mathbf Z/(2) \times \mathbf Z_2$ if $l = 2$. It follows immediately that $G_{\mathbf Q} (3)$ is the direct product of $\mathbf Z/3$ (coming from $\mathbf Z_3$) with the $3$-torsion subgroups $\mathbf Z/3$ of the groups $\mathbf Z/(l -1)$ for the primes $l$ congruent to 1 mod 3. So the character group we are looking for is described explicitly. |
"tangent" (the trig ratio) vs "tangent" (the geometric concept) | No, they are not the same.
The tangent function $\tan$ is a function defined on numbers: you put in one number (namely an angle) and obtain another number (namely the quotient between opposite and adjacent leg in a right triangle with the given angle). The property of a line to be tangent to a given curve is something different. As is the operation of finding the tangent to a curve at a given point on the curve. The input here is a curve and one point on it, the output is a line. Since a line is not a number (at least in general), the two things are different.
That doesn't mean the two are not related. The post What reasoning is behind the names of the trigonometric functions “sine”, “secant” and “tangent”? pointed out by Michael Seifert contains a picture which demonstrates how various trigonometric functions can be seen as lengths related to a right triangle with unit-length hypotenuse. There the value of the trigonometric tangent function is indeed the length of a certain segment along the tangent of the unit circle around one corner. But being related is not being the same. |
(Verification) If $g \circ f$ is injective, then $f$ must also be injective. | The proof is, in principle, correct, but the use of logic symbols is wrong throughout. Proof: Suppose that $f$ is not injective. Then for some $x\ne x'$ we have $f(x)=f(x')$. But then $g(f(x))=g(f(x'))$, so $g\circ f$ is not injective, a contradiction. |
How can I correct my wrong Intuition that $\forall \, x \, \in \,\emptyset : P(x) \quad $ is false? | You are right when you say that $\forall x$, the statement $x \in \emptyset$ is false. This means that $\forall x, \neg (x \in \emptyset)$, which is equivalent to $\forall x, x \notin \emptyset$. Perfectly true.
Then you say "the statement $\forall x \in \emptyset$ is false". $\forall x \in \emptyset$ is NOT a statement, it's an incomplete sentence. Either you write "$\forall x, P(x)$", or you write "$\forall x \in X, P(x)$", which is a shorthand for "$\forall x, (x \in X \implies P(x))$". "$\forall x \in \emptyset$" is not a statement. It can't be true or false.
$\forall x \in \emptyset, P(x)$ is a shorthand for $\forall x, (x \in \emptyset \implies P(x))$, which is equivalent (since $x \in \emptyset$ is always false) to $\forall x, ~\textrm{false} \implies P(x)$. After looking at the truth table for $\implies$, this is equivalent to $\forall x, ~\textrm{true}$ (whatever $P(x)$ may be), which is $\textrm{true}\;.$
If you want to disprove $\forall x \in \emptyset, P(x)$ you have to show me an $x \in \emptyset$ such that $P(x)$ is false. Well you will never find an $x \in \emptyset$. |
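Incidentally, programming languages adopt the same convention: Python's all() over an empty iterable is vacuously true, whatever the predicate.

empty = []
print(all(x > 0 for x in empty))    # True: no counterexample exists
print(all(x != x for x in empty))   # True even for an unsatisfiable P(x)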
Volume of a solid disk generated by rotating a plane | Suppose we take your region and break it up into little rectangles, like we do for "normal" Riemann integration of a region in a plane.
Now we rotate each of those rectangles about the axis $y=2$.
Each rectangle sweeps out a little disk with a hole in the center -- a washer.
The volume of this washer is $\pi(R^2 - r^2)\ dx$, where $R$ is the radius of the disk and $r$ is the radius of the hole; here $R = 2-x^2$ and $r = 1$.
Then we sum the volumes of these washers:
$$\pi \int_{-1}^1 \left[(2-x^2)^2 - 1\right] \ dx$$ |
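If you want to check the arithmetic, a quick symbolic evaluation (a sketch using sympy):

import sympy as sp

x = sp.symbols('x')
R, r = 2 - x**2, 1                  # outer and inner radii of the washer
print(sp.pi * sp.integrate(R**2 - r**2, (x, -1, 1)))   # 56*pi/15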
Natural representation for calculating exponentials? | This may or may not directly answer your question, but I think you would be interested in the factoradic system.
In base 10, the first digit from the right indicates multiples of $10^0$, the second of $10^1$, the third of $10^2$, etc. and because we want each number to have a unique base 10 representation, we restrict our digits to the range $0$ to $9$.
Similarly, in the factoradic system, the first digit from the right indicates multiples of $0!$, the second of $1!$, the third of $2!$, etc. and the digit indicating multiples of $k!$ is restricted to the range $0$ to $k$. You can easily prove that each number has a unique factoradic representation because you can generate the factoradic representation just like you would generate e.g. the binary representation, and vice versa. Indeed, it is convenient that the largest $k$-digit factoradic number is 1 less than the smallest $(k+1)$-digit factoradic number:
$$k\cdot k! + (k-1)(k-1)! + \dotsb + 1\cdot 1! + 0\cdot 0! = (k+1)! - 1$$
So what does this system optimize? For one, it is very convenient to number permutations. Suppose you permute the sequence $(A, B, C, D, E)$ to get $(E, A, C, D, B)$. To get the factoradic number of this permutation you do the following:
original sequence (selection marked)    index of selection (from 0)
A B C D |E|                             4
|A| B C D                               0
B |C| D                                 1
B |D|                                   1
|B|                                     0
$40110$ is the factoradic number for the permutation $(E,A,C,D,B)$, or $99$ in decimal. Converting the other way works likewise. This means you now have an excellent and precise way to represent permutations (and even combinations) of sets in a computer's numerical memory! And further, if the original sequence makes lexicographic sense (like $(A,B,C,D,E)$), the factoradic ordering is the same as the lexicographic ordering!
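Here is a small Python sketch of the conversion just described (the function name is mine); it reproduces the digits $4,0,1,1,0$ and the decimal value $99$:

from math import factorial

def permutation_to_factoradic(original, permuted):
    # For each element of the permutation, record its index among the
    # not-yet-selected elements, then remove it.
    remaining = list(original)
    digits = []
    for item in permuted:
        idx = remaining.index(item)
        digits.append(idx)
        remaining.pop(idx)
    return digits

digits = permutation_to_factoradic("ABCDE", "EACDB")
print(digits)                                      # [4, 0, 1, 1, 0]
print(sum(d * factorial(len(digits) - 1 - i)       # 99
          for i, d in enumerate(digits)))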
Another 'optimization' occurs if we extend the factoradic system to fractions. The first digit after the decimal point (or rather, the factoradic point) indicates multiples of $\frac{1}{1!}$, the second of $\frac{1}{2!}$, etc. and the $k$th digit after the factoradic point must be in the range $0$ to $(k-1)$. Now every rational number has a unique terminating representation! (In any fixed-base system, a rational number with denominator coprime to the base has a recurring representation.) Some interesting irrationals are given at the bottom of this page. |
How find limit $\displaystyle \lim_{n\to\infty}n\left(1-\tfrac{\ln n}{n}\right)^n$ | Let $$f(n) = n\left(1 - \frac{\log n}{n}\right)^{n}\tag{1}$$ and we need to calculate the limit of $f(n)$ as $n \to \infty$ through integer values. The best approach would be to analyze the behavior of $\log f(n)$. Clearly we have $$\log f(n) = \log n + n\log\left(1 - \frac{\log n}{n}\right)\tag{2}$$ and if $n > 1$ we know that $$0 < \frac{\log n}{n} < 1\tag{3}$$ We also know that the following inequality $$x < -\log(1 - x) < \frac{x}{1 - x}\tag{4}$$ holds for $0 < x < 1$. Replacing $x$ with $(\log n)/n$ in the above inequality we get $$\frac{\log n}{\log n - n} < \log\left(1 - \frac{\log n}{n}\right) < -\frac{\log n}{n}$$ Multiplying by $n$ we get $$\frac{n\log n}{\log n - n} < n\log\left(1 - \frac{\log n}{n}\right) < -\log n\tag{5}$$ Using $(2)$ we now have $$\frac{(\log n)^{2}}{\log n - n} < \log f(n) < 0\tag{6}$$ Now we can see that
\begin{align}
A &= \lim_{n \to \infty}\frac{(\log n)^{2}}{\log n - n}\notag\\
&= \lim_{n \to \infty}\dfrac{(\log n)^{2}}{n\left(\dfrac{\log n}{n} - 1\right)}\notag\\
&= \lim_{n \to \infty}\dfrac{(\log n)^{2}}{n}\cdot\dfrac{1}{\dfrac{\log n}{n} - 1}\notag\\
&= 0\cdot\frac{1}{0 - 1} = 0\tag{7}
\end{align}
In the above derivation we have used the standard result that $$\lim_{n \to \infty}\frac{(\log n)^{a}}{n^{b}} = 0\tag{8}$$ for any positive numbers $a, b$. Using Squeeze theorem in equation $(6)$ and noting the equation $(7)$ we get that $\log f(n) \to 0$ as $n \to \infty$. Hence $f(n) \to 1$ as $n \to \infty$. The desired limit is therefore equal to $1$.
Update: Some other answers make use of the symbol $\sim$, but doing so is wrong unless further justification is provided. The definition of the symbol $\sim$ in the current context is like this. If $$\lim_{n \to \infty}\frac{a(n)}{b(n)} = 1$$ then we write $a(n) \sim b(n)$. And because of this definition we can replace $a(n)$ by $b(n)$ while calculating limits where $a(n)$ is used in a multiplicative context. To be more specific, if we have $a(n) \sim b(n)$ then while calculating the limit of an expression like $a(n)c(n)$ we can replace $a(n)$ by $b(n)$ and just calculate the limit of $b(n)c(n)$ to get the final answer. This is justified because we can write $$\lim_{n \to \infty}a(n)c(n) = \lim_{n \to \infty}\frac{a(n)}{b(n)}\cdot b(n)c(n) = \lim_{n \to \infty}1\cdot b(n)c(n)$$ Replacement of $a(n)$ by $b(n)$ in other contexts must be justified by further analysis, and it may also produce a wrong answer.
Further Update: In case you have access to powerful technique of series expansions then the limit can be calculated easily as follows:
\begin{align}
\log f(n) &= \log n + n\log\left(1 - \frac{\log n}{n}\right)\notag\\
&= \log n - n\left\{\frac{\log n}{n} + \frac{(\log n)^{2}}{2n^{2}} + o\left(\frac{(\log n)^{2}}{n^{2}}\right)\right\}\notag\\
&= -\frac{(\log n)^{2}}{2n} + o\left(\frac{(\log n)^{2}}{n}\right)\notag
\end{align}
Using the fact that $(\log n)^{2}/n \to 0$ as $n \to \infty$ we can see that $\log f(n) \to 0$ and hence $f(n) \to 1$ as $n \to \infty$. My preferred approach is to use simpler tools (theorems on algebra of limits, Squeeze theorem etc), but advanced tools like series expansions and L'Hospital give the answer very easily. |
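A quick numerical check agrees, and also shows how slow the convergence is, consistent with $\log f(n) \approx -(\log n)^{2}/(2n)$:

import math

def f(n):
    return n * (1 - math.log(n) / n) ** n

for n in (10**2, 10**4, 10**6, 10**8):
    print(n, f(n))    # creeps up towards 1 as n grows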
Adding weightage to number. | Let $a = n/3$. Then
\begin{align*}
(x-a) + (y-a) + (z-a) + n &= x - n/3 + y - n/3 + z - n/3 + n \\
&= x + y + z + n - n = x + y + z.
\end{align*} |
Let $\phi:G_1\to G_2$ be a group homomorphism. Show $\phi(g^{-1})=(\phi(g))^{-1}$. | You're very nearly there. You have $\phi(g_1)\phi(g_1^{-1}) = e_{G_2}$ but what you're missing is that $G_2$ is a group and hence if two elements multiply to give the identity, they are inverses of each other. Therefore $\phi(g_1^{-1}) = \phi(g_1)^{-1}$. |
Prove that this limit converges uniformly on compact subsets of Open Unit Disc | Assume, WLOG, that $a=0$.
First proof:
It is easy to see that $u_n(z)=-\ln(|f_n|)$ is a positive harmonic function, and thus we can apply Harnack's inequalities on the disc $|z|\le r$ with $r<R<1$:
$$\frac{R-r}{R+r}u_n(0)\le u_n(z)\le \frac{R+r}{R-r}u_n(0)\\
\frac{R-r}{R+r}\ln(|f_n(0)|)\ge\ln(|f_n|)\ge\frac{R+r}{R-r}\ln(|f_n(0)|)\\
|f_n(0)|^{C_r}\ge |f_n(z)|\ge |f_n(0)|^{c_r}$$
convergence easily follows.
Second proof:
Assume that $f_n(0)\in\mathbb{R}^+$; since $f_n(\mathbb{D})\subseteq \mathbb{D}\setminus\{0\}$ we can write $g_n(z):=\ln(f_n)$; $g_n(0)<0$.
We can now apply Herglotz inequalities (see the answer here for a hint about the derivation of the inequalities), obtaining
$$\frac{1-r}{1+r}\le \left|\frac{g_n(z)}{g_n(0)}\right|\le \frac{1+r}{1-r}$$
Since $g_n(0)\to -\infty$, the inequalities give $g_n\to -\infty$ uniformly on the compact subsets of $\mathbb{D}$, and hence $f_n\to 0$ uniformly on the compact subsets of $\mathbb{D}$. |
Analytic function with $f(1-1/n)=-n$ and $f(1+1/n)=-n$ | Since the limit $\lim_n-n$ does not exist (in $\mathbb C$) and since $\lim_n1-\frac1n=1$, there's no continuous function defined near $1$ satisfying that condition. |
How to compute $\int_{0}^{1}\int_{0}^{1} \frac{x-y}{(x+y)^3} dxdy$ | The integral is divergent at the origin $(x,y) = (0,0)$ and hence the expression simply makes no sense without additional prescriptions about how to circumvent the divergence.
We shall present one possible regularization and show that the so defined integral can assume any value in $[-\frac{1}{2},\frac{1}{2}]$, depending on how the limits are taken.
To begin with, let us exclude the origin introducing two small positive parameters $\epsilon $ and $\delta$:
$$f(\epsilon ,\delta )=\int _{\epsilon }^1\int _{\delta }^1\frac{x-y}{(x+y)^3}dydx$$
The integral is now well defined and the result is independent of the order of integration
$$f(\epsilon ,\delta )=\frac{\delta (\epsilon -1)}{(\delta +1) (\delta +\epsilon )}+\frac{1}{\epsilon +1}-\frac{1}{2}$$
We now want to let $\epsilon $ and $\delta$ go to zero.
There are many possibilities to do that.
1) sequential
First $\epsilon \to 0$, then $\delta \to 0$ gives $-\frac{1}{2}$
First $\delta \to 0$, then $\epsilon \to 0$ gives $+\frac{1}{2}$
Thus the result depends on the order.
2) Simultaneously, on a straight line $\delta =q \epsilon$
The limit $\epsilon \to 0$ is
$$\lim_{\epsilon\to 0} f(\epsilon, q\epsilon) = \frac{1-q}{2 q+2}$$
which can assume, by an appropriate choice of $q>0$, any value strictly between $-\frac{1}{2}$ and $\frac{1}{2}$.
The choice $q = 1$ is normally called the principal value and is equal to zero. |
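The whole computation is easy to reproduce symbolically; a sketch with sympy verifying $f(\epsilon,\delta)$ and the limits above:

import sympy as sp

x, y, eps, delta, q = sp.symbols('x y epsilon delta q', positive=True)
inner = sp.integrate((x - y) / (x + y)**3, (y, delta, 1))
F = sp.simplify(sp.integrate(inner, (x, eps, 1)))             # f(epsilon, delta)

print(sp.limit(sp.limit(F, eps, 0), delta, 0))                # -1/2
print(sp.limit(sp.limit(F, delta, 0), eps, 0))                # 1/2
print(sp.simplify(sp.limit(F.subs(delta, q * eps), eps, 0)))  # (1 - q)/(2*q + 2)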
Prove inequality $a<1<b<3<c<4$ | Your idea is correct. $a, b, c$ are the zeros of
$$
f(x)=(x-a)(x-b)(x-c) = x^3 - 6 x^2 + 9x - abc
$$
Rolles's theorem states that $f'$ has a zero in each interval
$(a, b)$ and $(b, c)$. But
$$
f'(x) = 3x^2 - 12 x + 9 = 3(x-1)(x-3)
$$
has zeros $x_1=1$ and $x_2=3$. It follows that
$$
a < x_1 = 1 < b < x_2 = 3 < c \, .
$$
It remains to show that $c < 4$: $f$ changes sign exactly at the
zeros $a, b, c$, and $f(4) = 4 - abc = f(1)$. Since $1 \in (a,b)$ we have $f(1) > 0$, hence also $f(4) > 0$; and since $4 > 3 > b$, the point $4$ must therefore lie in the interval $(c, \infty)$, i.e. $c < 4$. |
Evalute $\frac{\tan\alpha-\cot\alpha}{\sin^4\alpha-\cos^4\alpha}$ if $\tan\alpha=2$ | Hints: $\tan(a) = \dfrac{1}{\cot(a)}$, $\cos^2(a) = \dfrac{1}{1+\tan^2(a)}$ and $\sin^2(a) = \dfrac{1}{1+\cot^2(a)}$ |
Integral related to harmonic functions | It is just a consequence of the Divergence theorem, isn't it? You have that $u$ is harmonic, so
$$
\Delta u =\nabla\cdot \nabla u=0.
$$
Now integrate over $\Omega$ and apply the divergence theorem. |
Can a polygon have four 90 degree corners and still not be a rectangle? | You are correct that a quadrilateral with four 90 degree corners MUST be a rectangle, in math. End of story.
In woodworking, however, one must deal with real measurement errors. It turns out that measuring the diagonals is a more sensitive method of measuring how close to 90 degrees one got, as opposed to measuring the angles with a protractor or other tool. |
Uniform convergence of functions, Spring 2002 | If $h_n\colon S\to S'$ converges uniformly to $h$ on $S$ and $\varphi\colon S'\to S''$ is uniformly continuous on $S$, fix $\varepsilon>0$. There is a $\delta>0$ such that if $d_{S'}(x,y)\leq \delta$ then $d_{S''}(\varphi(x),\varphi(y))\leq\varepsilon)$. Now, we use the fact that there is an integer $n_0$ such that for all $n\geq n_0$, we have
$$\sup_{x\in S}d_S(h_n(x),h(x))\leq \delta.$$
Then
$$\sup_{x\in S}d_S(\varphi(h_n(x)),\varphi(h(x)))\leq\varepsilon.$$
Now, we apply this result with $\varphi=g$ and $h_n:=f\circ g_n$. |
Showing convergence in probability for poisson distribution | You've shown $M_X(t) = e^{\lambda p (e^t-1)}$
Show that consequently, $M_{X/\lambda}(t) = e^{\lambda p (e^{t/\lambda} - 1)}$.
Note that $\lim_{\lambda \to \infty} \frac{e^{t/\lambda} - 1}{t/\lambda} = 1$ to conclude that $\lim_{\lambda \to \infty} M_{X/\lambda}(t) = e^{pt}$ |
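A simulation makes the convergence in probability visible; a sketch assuming $X\sim\operatorname{Poisson}(\lambda p)$, which is what the MGF above says:

import numpy as np

rng = np.random.default_rng(0)
p = 0.3
for lam in (10, 1_000, 100_000):
    x = rng.poisson(lam * p, size=100_000)
    # fraction of samples with |X/lambda - p| > 0.01 shrinks to 0
    print(lam, np.mean(np.abs(x / lam - p) > 0.01))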
About the cardinality of a set | The set in question is the set of subsets of $\Omega$ whose intersection with $\{a,b\}$ is the same as removing $c$ from it.
Notice that all subsets of $\{a,b,c\}$ are elements of this set, and no other subset can be: any other subset $Y$ contains some $d\notin\{a,b,c\}$, and then $d\in Y\setminus\{c\}$ but $d\notin Y\cap\{a,b\}$, while these two sets are supposed to be equal, so... |
Some Pythagorean triples proofs | We are considering integer solutions to
$$
a^2+b^2=c^2
$$
Let us consider the equation $a^2+b^2=c^2$ modulo $2$:
$$
\begin{array}{c|cc}
+&0^2&1^2\\
\hline
0^2&0^2&1^2\\
1^2&1^2&0^2
\end{array}
$$
we see that in all four cases at least one of the three numbers is zero modulo $2$. So at least one in the triple is divisible by $2$.
Modulo $3$ we have $1^2=2^2$ so the same table applies. At least one is divisible by $3$.
Modulo $4$ we have $0^2=2^2$ and $1^2=3^2$ so again the same table applies. At least one is divisible by $4$.
Modulo $5$ we have $1^2=4^2=1$ and $2^2=3^2=4$ and so
$$
\begin{array}{c|ccc}
+&0^2&1^2&2^2\\
\hline
0^2&0^2&1^2&2^2\\
1^2&1^2&\times&0^2\\
2^2&2^2&0^2&\times
\end{array}
$$
where $\times$ indicates that result cannot be a square, since $1^2+1^2=2$ and $2^2+2^2=3$ are not values of squares modulo $5$. They are quadratic non-residues mod $5$.
All $7$ cases that are actually possible contain at least one zero. So at least one of the three numbers is divisible by $5$.
Finally, regarding the area being divisible by $6$, we see from the formulas for generating the triples
$$
(a,b,c)=(m^2-n^2,2mn,m^2+n^2)
$$
that the area must be
$$
T=\tfrac12 ab=(m^2-n^2)mn
$$
and if neither $m$ nor $n$ is divisible by $2$ we have $m=n=1$ modulo $2$ and therefore $m^2-n^2$ must be divisible by $2$.
If neither $m$ nor $n$ is divisible by $3$, then $m,n\in\{1,2\}$ modulo $3$; since $1^2\equiv 2^2\equiv 1$ modulo $3$, we get $m^2-n^2\equiv 0$ modulo $3$. So indeed the area is divisible by both $2$ and $3$, and hence by $6$. |
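These divisibility claims are easy to spot-check against the generating formulas; a small sketch:

# Among (a, b, c) = (m^2 - n^2, 2mn, m^2 + n^2), some member is divisible
# by 3, by 4 and by 5, and the area ab/2 is divisible by 6.
for m in range(2, 60):
    for n in range(1, m):
        a, b, c = m*m - n*n, 2*m*n, m*m + n*n
        assert any(v % 3 == 0 for v in (a, b, c))
        assert any(v % 4 == 0 for v in (a, b, c))
        assert any(v % 5 == 0 for v in (a, b, c))
        assert (a * b // 2) % 6 == 0
print("all checks passed")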
If $X$ is not compact, does this mean that Cone($X$) is not compact? | The canonical map $X\to C(X)$ identifies $X$ with a closed subset of $C(X)$. Therefore, if $C(X)$ is compact, then $X$ is compact. |
How to prove the equation is correct | This screams Cardan's formula
$$
-\frac{q}{2}=2,\qquad
\frac{q^2}{4}+\frac{p^3}{27}=5
$$
Thus $q=-4$ and $p=3$. What's the real root of the following equation?
$$
x^3+3x-4=0
$$ |
The product of a normal and Rademacher variables, independent from each other | Let $Q(x)$ be the Q-function, i.e., $Q(x)=P(N\geq x)$, where $N$ is a Gaussian random variable with mean $0$ and variance $1$. Then
$$
\begin{align*}
P(Y \geq y) & =P(XZ \geq y) \\
&= P(Z=1)P(XZ \geq y|Z=1) + P(Z=-1)P(XZ \geq y|Z=-1) \\
&= \frac{1}{2}P(X \geq y|Z=1) + \frac{1}{2}P(-X \geq y|Z=-1) \\
&= \frac{1}{2}P(X \geq y) + \frac{1}{2}P(X \leq -y) \\
&= \frac{1}{2}Q(y) + \frac{1}{2}Q(y) \\
& = Q(y)
\end{align*}
$$
Hence $Y \sim \mathcal{N}(0,1)$. The fifth equality follows since the Gaussian distribution with mean zero is symmetric about $0$, so that $P(X \leq -y) = P(X \geq y) = Q(y)$.
Next,
$$
\begin{align*}
Cor(X,Y) &= \mathbb{E}\left[ XY\right] \\
&=P(Z=1)\mathbb{E}\left[ XY|Z=1\right] + P(Z=-1)\mathbb{E}\left[ XY|Z=-1\right] \\
&=\frac{1}{2} \mathbb{E}\left[ X^2\right] + \frac{1}{2} \mathbb{E}\left[ -X^2\right]\\
&= 0
\end{align*}
$$
Finally, $X$ and $Y$ are not independent because conditioned on $X$, $Y$ can take only two different values, whereas its marginal distribution is $\mathcal{N}(0,1)$.
At first sight, the answers to parts $3$ and $4$ might seem contradictory, since uncorrelatedness implies independence for jointly Gaussian random variables (although this is not true for arbitrary random variables). However, $X$ and $Y$ are not jointly Gaussian distributed, hence this implication is not true in this case. |
Let $V$ be a finite real vector space, $W$ a subspace. What is the fundamental group of $V-W$? | Every finite-dimensional real vector space is linearly homeomorphic to some $\Bbb R^n$, so we do not lose generality in solving the problem for $\Bbb R^n \setminus \Bbb R^m$. Deformation retracting onto the unit sphere of the orthogonal complement (projecting away the $\Bbb R^m$ factor and normalizing) shows that $\Bbb R^n \setminus \Bbb R^m$ is homotopy equivalent to $\Bbb S^{n-m-1}$, so: $$\pi_1(\Bbb R^n \setminus \Bbb R^m) = \begin{cases} \Bbb Z, & \text{if } m = n-2,\\ \{0\}, & \text{otherwise.}\end{cases}$$ |
Show that (using Kripke models) formula $\phi:p\vee (p\to q)$ is not tautology in intuitionistic logic | To prove an intuitionistic tautology, use the intuitionistic inference rules. To show that a sentence is not an intuitionistic tautology, construct a Kripke model. I don't know what you're doing in your attempt, but you should not be attempting to 'prove' anything on a graph. Every non-tautology is witnessed by some Kripke model, but you cannot assume that any graph will do.
A Kripke model must specify the graph and the values of all the atoms at each node. I first learnt it from Hanno's explanation here, but let me rephrase it in graph terminology. A Kripke model is a directed acyclic graph (its transitive closure is a partial order) where each node is a world such that any atomic sentence that is true in a world is also true in every world reachable from it by a path following the edges in their specified direction (truth is upward-closed in the partial order). Like in classical logic, the sentence "$P \land Q$" is true in a world iff both "$P$" and "$Q$" are true in that world. Likewise the sentence "$P \lor Q$" is true in a world iff either "$P$" or "$Q$" is true in that world. The key difference from boolean algebra is that "$P \to Q$" is true in a world iff "$Q$" is true in every reachable world where "$P$" is true. In other words, "$P \to Q$" is an assertion about the implication in worlds reachable from the current one, and not all worlds as in boolean algebra. Finally "$\neg P$" is defined as "$P \to \bot$". A tautology holds in all worlds in all Kripke models. Do go through the examples Hanno gives to get some idea of how this works, which will also answer your question quite directly. |
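To make this concrete, here is a minimal sketch (the encoding is mine, not from Hanno's post) of a Kripke countermodel for $p\vee(p\to q)$: two worlds $w_0\to w_1$, with $p$ true only at $w_1$ and $q$ true nowhere, checked mechanically.

reach = {0: [0, 1], 1: [1]}                  # reflexive-transitive reachability
val = {('p', 0): False, ('p', 1): True,
       ('q', 0): False, ('q', 1): False}

def forces(w, formula):
    kind = formula[0]
    if kind == 'atom':
        return val[(formula[1], w)]
    if kind == 'or':
        return forces(w, formula[1]) or forces(w, formula[2])
    if kind == 'imp':
        # true iff the consequent holds in every reachable world
        # where the antecedent holds
        return all(not forces(v, formula[1]) or forces(v, formula[2])
                   for v in reach[w])
    raise ValueError(kind)

p, q = ('atom', 'p'), ('atom', 'q')
print(forces(0, ('or', p, ('imp', p, q))))   # False: the model refutes it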
Simplification of a function of two variables | $$\lim_{k \to 0}f(x,k) = -27x^4-11x^3-9x^2-17x$$
In particular,
$$\lim_{k \to 0}f(1,k) = -27-11-9-17<0$$
Hence $f$ can't be always positive over the claimed domain, since a polynomial is continuous: for $k$ close enough to $0$, $f(1,k)$ is still negative. |
Solving for n = 4 in recurrence relation | In order to get a closed-form from a recurrence relation, you usually start by writing out the first few values until you think you see the closed-form pattern that obtains, and then you try to prove that this form is correct using a proof by induction. In this particular case the sequence is periodic, so the inductive step merely involves verifying the transitions through each of the periodic outcomes.
Finding the periodic values: We have:
$$\begin{equation} \begin{aligned}
Q_0 &= \alpha \\[12pt]
Q_1 &= \beta \\[10pt]
Q_2 &= \frac{1+\beta}{\alpha} \\[6pt]
Q_3 &= \frac{1+\alpha+\beta}{\alpha \beta} \\[6pt]
Q_4 &= \frac{1+\alpha}{\beta}\\[6pt]
Q_5 &= \alpha \\[12pt]
Q_6 &= \beta \\[12pt]
&\text{ } \text{ } \vdots \\[6pt]
\end{aligned} \end{equation}$$
We can see that $(Q_5, Q_6) = (Q_0, Q_1)$ and so we are back to the starting values of the series. Since this is a second-order recursion, the series must repeat these values over and over again. Hence, this is a periodic series with period five, and the closed form for the series is:
$$Q_k = \begin{cases}
\alpha & & \text{for } k \text{ mod } 5 = 0, \\[6pt]
\beta & & \text{for } k \text{ mod } 5 = 1, \\[6pt]
(1+\beta)/\alpha & & \text{for } k \text{ mod } 5 = 2, \\[6pt]
(1+\alpha+\beta)/(\alpha \beta) & & \text{for } k \text{ mod } 5 = 3, \\[6pt]
(1+\alpha)/\beta & & \text{for } k \text{ mod } 5 = 4. \\[6pt]
\end{cases}$$ |
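Judging from the listed values, the recursion is $Q_{k} = (1+Q_{k-1})/Q_{k-2}$ (the Lyness recurrence); a quick sketch confirming the period of five for arbitrary starting values:

from fractions import Fraction

alpha, beta = Fraction(2), Fraction(3)   # arbitrary (hypothetical) starting values
q = [alpha, beta]
for _ in range(10):
    q.append((1 + q[-1]) / q[-2])        # Q_k = (1 + Q_{k-1}) / Q_{k-2}
print(q[:7])                             # Q_5 = Q_0 and Q_6 = Q_1: period five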
algebraically closed field of order 11 | If you have a finite field $\;K=\{a_1,...,a_n \}\;$ , then the polynomial
$$f(x)=\prod_{k=1}^n(x-a_k) +1\in K[x]$$
has no root in $\;K\;$ ... |
How to show that the composition of two surjective functions is injective? | That's not true in general. We can take a counterexample to show one case when it is not true:
Let $f:A\to B$ and $g:B\to C$ be two surjective functions and let $h:A\to C$ be their composition, where $A=\{a,b\}$, $B=\{c,d\}$ and $C=\{e\}$. Then
$f(a)=c$; $f(b)=d$ and
$g(c)=e$; $g(d)=e$
It is obvious that $h(a)=e$ and $h(b)=e$ so $h$ is not injective.
So we found an example where the composition of two surjective functions is not injective, therefore your statement is not always true. |
Why is the Heaviside step function locally integrable? | You've written the condition for local integrability incorrectly. It is that $H\in L^1(K)$ for every compact $K$ in $\Omega$, which reduces to proving that
$\int_K|H(x)|dx=\int_{K\cap(-\infty,0)}0\cdot dx+\int_{K\cap[0,+\infty)}1\cdot dx=|K\cap[0,+\infty)|\leq|K|<+\infty$ |
Find the radius of convergence and interval of convergence for the given series $\sum_{n}^{\infty}{n^n(x+3)^n}/(n^{100}+100n+29)$ | By the ratio test: since
$$\lim_{n\to \infty}{(n+1)^{n+1}\over n^{n}}
= \lim_{n\to \infty}{(n+1)^{n}\cdot (n+1)\over n^{n}}
=\lim_{n\to \infty}\left({1+{1\over n}}\right)^n(n+1)
=\infty$$
(the factor $\left(1+\tfrac{1}{n}\right)^n$ tends to $e$), and since the ratio of successive denominators $n^{100}+100n+29$ tends to $1$, the radius of convergence is $0$. |
Prove that the function $f(x,y) = \frac{xy}{x^2 + y^2}$ is continuous except on $(0,0)$. | For discontinuity at the origin, polar coordinates do the trick quite quickly
$$
\lim_{(x,y)\rightarrow (0,0)}\frac{xy}{x^2 + y^2}=\lim_{r\rightarrow 0}\frac{r^2\sin \theta \cos \theta}{r^2}=\sin \theta \cos \theta
$$
which depends on the choice of $\theta$, so the limit does not exist. |
Laplace Transform of $\sin^{2018}(t)$? | Taking a look at this table, we have the recurring relation
$$ \int e^{-st}\sin^{n}(t)\ dt = -\frac{e^{-st}\sin^{n-1}(t)}{s^2+n^2}(s\sin t + n\cos t) + \frac{n(n-1)}{s^2+n^2}\int e^{-st}\sin^{n-2}(t)\ dt $$
Integrating from $0$ to $\infty$, we obtain
$$ L_n = \frac{n(n-1)}{s^2+n^2} L_{n-2} $$
where $L_n$ is the Laplace transform of $\sin^n(t)$. The base case is $L_0 = \dfrac{1}{s}$
Putting it all together
$$ L_{2018} = \frac{2018\cdot 2017}{s^2+2018^2}\cdot \frac{2016\cdot2015}{s^2+2016^2}\cdots \frac{1}{s} = \frac{2018!}{s} \prod\limits_{k=1}^{1009} \frac{1}{\big(s^2 + (2k)^2\big)} $$
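The recurrence and the product are easy to sanity-check for a small exponent; a sketch with sympy verifying $L_4 = 24/\big(s(s^2+4)(s^2+16)\big)$:

import sympy as sp

t, s = sp.symbols('t s', positive=True)
L4 = sp.laplace_transform(sp.sin(t)**4, t, s, noconds=True)
predicted = 24 / (s * (s**2 + 4) * (s**2 + 16))   # 4!/s times the product for n = 4
print(sp.simplify(L4 - predicted))                # 0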
This leads to the solution
$$ Y(s) = \frac{2018!}{s(s^2+9)} \prod\limits_{k=1}^{1009} \frac{1}{\big(s^2 + (2k)^2\big)} $$
which you can reverse transform by taking the convolution of $\sin^{2018}(t)$ and $\mathcal L^{-1} \{\frac{1}{s^2+9}\} = \frac13 \sin 3t$
$$ y(t) = \frac13 \int_0^t \sin^{2018}(\tau)\sin \big(3(t-\tau)\big) d\tau $$
and using more recurrence relations |
Calculate $\langle x,y,z\mid 2x=2y=2z\rangle$ generated over $\mathbb{Z}$ | This is indeed the right approach. What you are really doing is computing the Smith normal form of the coefficient matrix of the relations. For example in your case the smith normal form of $\begin{pmatrix}2& -2&0\\0&2&-2\end{pmatrix}$ is $\begin{pmatrix}2& 0&0\\0&2&0\end{pmatrix}$, which implies that the group you have is indeed $\mathbb{Z}/2\mathbb{Z}\oplus\mathbb{Z}/2\mathbb{Z}\oplus\mathbb{Z}$. For computing abelian groups given by generators and relations, performing base change results in a similar relations matrix, and by the Smith normal form we can choose the base change such that the resulting matrix is diagonal, from which the group can be computed easily. |
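For what it's worth, the computation can be delegated to a CAS; a sketch using sympy's Smith normal form over $\mathbb Z$:

from sympy import Matrix, ZZ
from sympy.matrices.normalforms import smith_normal_form

M = Matrix([[2, -2,  0],
            [0,  2, -2]])
print(smith_normal_form(M, domain=ZZ))
# Matrix([[2, 0, 0], [0, 2, 0]]) -> Z/2 + Z/2 + Z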
Solving the equation $x^{2}-7^{y}=2$ | Just some comments. There are primitive solutions to $$ x^2 - 2 z^2 = 7^y $$ for every positive $y.$ Indeed, if $z_n \equiv 5 x_n \pmod 7$ and $$ x_n^2 - 2 z_n^2 = 7^n, $$ we may take $$ x_{n+1} = 3 x_n + 2 z_n, \; \; z_{n+1} = x_n + 3 z_n $$ to get a primitive solution to $$ x_{n+1}^2 - 2 z_{n+1}^2 = 7^{n+1}, \; \; \; z_{n+1} \equiv 5 x_{n+1} \pmod 7. $$
Meanwhile, given any such $(x,z)$ pair, we get the same $7^n$ with either
$$ (3 x + 4 z, 2 x + 3 z) $$ or
$$ (3 x - 4 z, -2 x + 3 z). $$ After taking absolute values, you can get some pretty small values of $z$ this way:
$$ 9^2 - 2 \cdot 4^2 = 49, $$
$$ 19^2 - 2 \cdot 3^2 = 343, $$
$$ 51^2 - 2 \cdot 10^2 = 2401, $$
and so on.
I will need to think about whether all solutions of $x^2 - 2 z^2 = 7^y$ arise this way. If so, it is a start. |
Permutations and number of lists | There are only $24$ permutations of four elements, so we might as well use brute force.
Suppose $a_1=4$, then all three conditions are automatically satisfied – for all of $k=1,2,3$, it is true that the first $k$ elements of the permutation have a number larger than $k$. This gives $6$ permutations.
Suppose $a_2=4$. Then the $k=2,3$ conditions are satisfied and the only remaining restriction is $a_1\ne1$. This gives $4$ permutations.
The $k=3$ condition is not satisfied if $a_4=4$, so let $a_3=4$. If $a_1=3$, all conditions are satisfied ($2$ more ways). If $a_4=3$, the $k=2$ condition cannot be satisfied. If $a_2=3$, the only way to satisfy all conditions is $a_1=2$ and $a_4=1$.
Summing it all up gives $13$ admissible permutations. |
Why the chevalley $G/B$ and plucker $G/B$ are isomorphic $G$- projective varieties | I am not closely familiar with algebraic groups, but the corresponding question for Lie groups has a simple positive answer: Take a point $x_0\in X$, and denote by $G_{x_0}$ its stabilizer in $G$. Then $g\mapsto g\cdot x_0$ induces a diffeomorphism $G/G_{x_0}\to X$. Since $f$ is equivariant and bijective, you see that for $y_0=f(x_0)$, you get $G_{y_0}=G_{x_0}$ and $g\mapsto g\cdot y_0$ induces a diffeomorphism $G/G_{y_0}\to Y$, so $X$ and $Y$ are $G$-equivariantly diffeomorphic.
On the other hand, for your motivating example, the two constructions for $G/B$ you describe are essentially the same. The natural choice for the representation $W$ in Chevalley's construction is that the highest weight of $W$ is the sum of all fundamental weights. The natural construction of $W$ is as the $G$-invariant subspace in the tensor product of all fundamental representations. These fundamental representations are just $\mathbb C^n$, $\Lambda^2\mathbb C^n$, ..., $\Lambda^{n-1}\mathbb C^n$. So $P(W)$ is naturally a subspace of $P(\mathbb C^n\otimes\dots\otimes \Lambda^{n-1}\mathbb C^n)$. By definition, the embedding of $G/B$ into $P(W)$ is induced by $g\mapsto g\cdot w_0$, where $w_0$ is a highest weight vector in $W$. The natural choice for $w_0$ is the tensor product of the highest weight vectors in the fundamental representations. But you can also obtain the Plücker embedding of the Grassmannian $Gr(m,\mathbb C^n)$ similarly to Chevalley's construction, but starting from the highest weight vector in $\Lambda^m\mathbb C^n$. So you readily see that the embedding you construct via Plücker actually has values in $P(W)$ and coincides with the Chevalley embedding. |
Implication of $|f^\prime| \leq |g^\prime|$? | Without the condition of non-changing sign of $g'$, $\lvert f\rvert$ can be larger than $\lvert g\rvert$. Consider for example
$$f(x) = \int_0^x \sin^2 t\,dt = \frac{1}{2} x - \frac{1}{4}\sin (2x)$$
and
$$g(x) = \int_0^x \sin t\,dt = 1 - \cos x.$$
Clearly $\lvert f'(x)\rvert = \sin^2 x \leqslant \lvert \sin x\rvert = \lvert g'(x)\rvert$ for all $x$, yet $g$ is bounded while $f$ is unbounded. |
How can i prove those metrics are the equivalent? | The identity is the function $f:\mathbb R\to\mathbb R$ such that $f(x)=x$. Here, the first $\mathbb R$ has the $d_1$ metric, and the second $\mathbb R$ has the $d_2$ metric. We need to show that $f$ is a bijection which is continuous and has a continuous inverse. It is clearly a bijection. To prove that the identity continuous, choose $x\in\mathbb R$. Let $\epsilon>0$, and let $\delta=\min\{\epsilon/(3|x|^2+3|x|+1),1\}$. Then if $d_1(x,y)<\delta$, $|y|\leq|x|+1$, and $$d_2(f(x),f(y))=d_2(x,y)=|x^3-y^3|=|x-y||x^2+xy+y^2|$$ $$\leq|x-y|(|x|^2+|x|^2+|x|+|x|^2+2|x|+1)$$ $$<\frac{\epsilon}{3|x|^2+3|x|+1}(3|x|^2+3|x|+1)$$ $$=\epsilon.$$
Therefore the identity is continuous.
Now you need to prove that the inverse of the identity is continuous. |
Pile of cards - Probability question | The probability of drawing an Ace is $\frac1{13}$ so the expected number of draws until the first Ace is $13$ (geometric distribution.) Then the probability of getting an Ace you haven't seen yet is $\frac 3{52}$, so the expected number of additional draws until the second Ace is $\frac{52}3$. Can you finish it now? |
Understanding Frobenius' Theorem on real division algebras | The endomorphism is $D\rightarrow D$, $x\mapsto dx$. The word "endomorphism" here means "endomorphism of a real vector space" not "endomorphism of a ring": left multiplication obviously is not a ring homomorphism. |
How can you solve this convolution? | The delta "function" is strictly speaking not a function, but a distribution, that is a continuous linear functional on a space of test functions. It sends a test function $g$ onto its value at $0$:
$$\delta(g)=g(0)$$
You may think of $\delta(g)$ as "$\int \delta(\tau)g(\tau)d\tau$". Be however aware that there does not exist an actual function $\delta$ such that $\int \delta(\tau)g(\tau)d\tau=g(0)$ for every test function $g$.
Now if $\phi$ is any distribution and $g$ is a test function, then
$$\phi*g(t)=\phi(g(t-\cdot))$$
This is motivated by the case when $\phi$ is a function: in that case,
$$\phi(g(t-\cdot))=\int g(t-\tau) \phi(\tau)d\tau$$
Your distribution is the $\delta$ distribution shifted by $c$, that is $f(g)=g(c)$. Therefore
$$f*g(t)=g(t-c)$$ |
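The discrete analogue shows the shifting behaviour directly; a small numpy sketch where the shifted delta is a one-hot array:

import numpy as np

g = np.array([1.0, 2.0, 3.0, 4.0, 5.0])
delta_c = np.zeros(5)
delta_c[2] = 1.0                  # discrete delta shifted by c = 2 samples
print(np.convolve(g, delta_c))    # g delayed by 2: [0. 0. 1. 2. 3. 4. 5. 0. 0.]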
Solve $a_{n+1} - a_n = n^2$ using generating functions | You made a mistake in solving
$$ x(1+x)= A(1-x)^3+B(1-x)^2+C(1-x)+D$$
A faster way to solve this is the following:
$$x = 1 \Rightarrow D=2$$
Next differentiate
$$1+2x=-3A(1-x)^2-2B(1-x)-C$$
plug in $x=1$. Then differentiate again, and plug in $x=1$. Continue...
Probability to get a card in a card game | What is the probability that two red cards are present?
As you observed, the probability that two red cards are present is equal to the probability that a blue card was removed, which is
$$\Pr(\text{two red cards are present}) = \Pr(\text{a blue card is removed}) = \frac{3}{5}$$
as you found
As for your failed attempt, observe that there are $\binom{5}{2}$ ways to pick two positions for the red cards. Two red cards can be found in the first four positions in $\binom{4}{2}$ ways. Hence, the probability that two red cards are in the first four positions is
$$\Pr(\text{two red cards are present}) = \frac{\dbinom{4}{2}}{\dbinom{5}{2}} = \frac{6}{10} = \frac{3}{5}$$
What was your mistake?
You did not take into account the probability of selecting a blue card.
Your first calculation is correct. The probability that red cards are in positions 1 and 2 is
$$\Pr(\text{red, red}) = \frac{2}{5} \cdot \frac{1}{4} = \frac{1}{10}$$
When you calculated the probability that red cards are in positions 1 and 3, you did not take into account the probability that a blue card was picked second.
$$\Pr(\text{red, blue, red}) = \frac{2}{5} \cdot \frac{3}{4} \cdot \frac{2}{3} = \frac{1}{10}$$
The fact that both calculations yielded $\frac{1}{10}$ is not a coincidence. Let's do those two calculations again, this time writing out all five terms.
\begin{align*}
\Pr(\text{red, red, blue, blue, blue}) & = \frac{2}{5} \cdot \frac{1}{4} \cdot \frac{3}{3} \cdot \frac{2}{2} \cdot \frac{1}{1} = \frac{3!2!}{5!} = \frac{1}{10}\\
\Pr(\text{red, blue, red, blue, blue}) & = \frac{2}{5} \cdot \frac{3}{4} \cdot \frac{1}{3} \cdot \frac{2}{2} \cdot \frac{1}{1} = \frac{3!2!}{5!} = \frac{1}{10}
\end{align*}
In fact, the probability that the two red cards appear in any two given positions of the sequence is always $1/10$ since there are $\binom{5}{2} = 10$ equally likely ways to place the red cards in the sequence of two red and three blue cards.
What is the probability that only one red card is present?
As you observed, the probability that only one red card is present is equal to the probability that a red card is removed, which is
$$\Pr(\text{only one red card is present}) = \Pr(\text{a red card is removed}) = \frac{2}{5}$$
as you found.
What is the probability that only one red card is present given that you have a blue card?
Since you are equally likely to have any of the cards, the probability that you have a blue card is $3/5$.
Clearly, your blue card could not have been removed. Therefore, the card that is removed must be one of the other four cards, of which two are red. Since a red card must be removed if only one red card is present, the probability that only one red card is present given that you have a blue card is
$$\Pr(\text{only one red card is present} \mid \text{you have a blue card}) = \frac{2}{4} = \frac{1}{2}$$
which is the answer to your question.
What was your mistake?
If we denote the event only one card is present by $1R$ and the event that you have a blue card by $B$, then the probability that you have a blue card and only one red card is present is
$$\Pr(1R \cap B) = \Pr(B)\Pr(1R \mid B) = \frac{3}{5} \cdot \frac{1}{2} = \frac{3}{10}$$
How could you have seen this directly?
If you write the sequences of three blue and two red cards in which a blue card is in the first position (representing the card you hold), there are three ways for a red card to be in the fifth position, namely those in which the other red card is in the second, third, or fourth position. Hence, three of the ten sequences of three blue and two red cards have you holding a blue card with only one red card present in the game. |
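Since there are only $\binom{5}{2}=10$ distinct arrangements, everything above can be checked by brute enumeration; a sketch (index 0 is your card, index 4 the removed card):

from itertools import permutations

arrangements = set(permutations(['R', 'R', 'B', 'B', 'B']))   # 10 of them
favorable = [s for s in arrangements if s[0] == 'B' and s[4] == 'R']
print(len(favorable), len(arrangements))                      # 3 10, i.e. 3/10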
How to evaluate $\lim _{x\to 1}\left(\frac{x+2}{x-1}-\frac{3}{\ln x}\right)$? | As $x \to 1$, one may use Taylor's series expansion to get
$$
\ln x= \ln (1+(x-1))=(x-1)-\frac12(x-1)^2+O((x-1)^3)
$$ or
$$
\frac{x+2}{x-1}-\frac{3}{\ln x}=\frac{x+2}{x-1}-\frac3{(x-1)-\frac12(x-1)^2+O((x-1)^3)}
$$ that is, as $x \to 1$,
$$
\frac{x+2}{x-1}-\frac{3}{\ln x}=-\frac12+O(x-1)
$$ from which you deduce the sought limit. |
Two finitely based equational theories whose meet is not finitely based. | Such theories exist. I will describe a construction from the paper
Finitely Based, Finite Sets of Words.
M. Jackson, O. Sapir,
International Journal of Algebra and Computation 10(6):683-708 (2000).
Let $X=\{a,b\}$ and let $M(a,b)$ be the free monoid over $X$. If $W\subseteq X^*$ is a set of words in the letters $X$, let $I(W)$ be the ideal of $M(a,b)$ consisting of all non-identity monoid elements that are not subwords of words in $W$. Define $S(W) = M(a,b)/I(W)$.
In Theorem 5.8 of their paper, Jackson and Sapir show that
(i) $A:=S(\{abbaa, ababa, aabba\})$ is finitely based.
(ii) $B:=S(\{baaab, aabb, abba, abab\})$ is finitely based.
(iii) $A\times B$ is not finitely based.
This answers the question because $\textrm{Th}(A\times B)=\textrm{Th}(A)\cap \textrm{Th}(B)$. |
Two random variables X and Y follow the same distribution. Then | If $X=Y$ then 1,2,3,4 would be correct. In particular $X-Y=0$ with probability $1$
If $X$ and $Y$ are independent then 1,2,4 are correct but 3 might not be. For example, suppose $X$ takes the value $1$ with probability $0.4$ and the value $0$ with probability $0.6$
But if $X$ and $Y$ have a more complicated relationship then there is little you can say about distributions and medians. For example if $(X,Y) = (0,1)$, $(1,4)$ or $(4,0)$ each with probability $\frac13$. You are left with linearity of expectation and assertion 4. |
find total cycle in simple and bipartite graph | Collating comments above and trying to answer any remaining loose ends...
Rather than blindly trying to apply formulas, you need to stop and think about where the formulas come from and how to derive them. Find the logic in what the formula means and where it comes from. In doing so, you will then be able to more accurately determine if a formula is correct and be able to come up with new formulas for scenarios you had yet to discuss.
The number of $k$-cycles in $K_n$ is $\binom{n}{k}\frac{(k-1)!}{2}$
To see this, we first select which $k$ of the $n$ vertices are used in the cycle. This selection of which vertices are used can be accomplished in $\binom{n}{k}$ ways.
Now, given a particular selection of vertices, we need still to determine in what order they appear in the cycle, noting that several cycles are indistinguishable. In order to help facilitate a count, we can assume that the vertices were labeled with some sort of specific order in mind (e.g. if the vertices represented people, they could be arranged according to age, or arranged according to height). Without loss of generality, suppose the vertices of the graph were labeled as the numbers $1,2,3,\dots,n$. Given such an ordering, among our selected vertices, one of them is going to be the "smallest" vertex. We will use that vertex as our reference point.
Now, we have our $k$ vertices, one of which is being used as a reference point. Let us arrange the remaining selected vertices around the cycle. Choose which vertex appears "after" the smallest. Then choose which vertex appears after that, and then choose which appears after that, etc... until all vertices have been placed. This can be done in $(k-1)\cdot (k-2)\cdots 3\cdot 2\cdot 1 = (k-1)!$ ways. Recognize however that this overcounts, as the cycle where we had placed these same vertices in the reverse order is the "same cycle." The cycle (1 2 3 4 5 6) is the same as the cycle (1 6 5 4 3 2) in this context. To correct the count, we divide by two since this is the number of times we had counted every possibility when we intended to have only counted each possibility exactly once.
This gives our final result of the number of $k$-cycles in $K_n$ as being $\binom{n}{k}\frac{(k-1)!}{2}$
The number of hamiltonian cycles in $K_n$ is $\frac{(n-1)!}{2}$
To see this, just apply the earlier result using $k=n$ and recognize that every $n$-cycle in $K_n$ is a hamiltonian cycle and vice versa so the counts should be the same. Note that a hamiltonian cycle uses all $n$ of the vertices, not just four of them...
The number of $2k$ cycles in $K_{n,m}$ is $\binom{n}{k}\binom{m}{k}\frac{(k-1)!k!}{2}$
To see this, begin by noting that any $2k$ cycle in $K_{n,m}$ must have exactly $k$ elements from the first part and $k$ elements from the second part due to the nature of how bipartite graphs work. It is worth pointing out here as well that bipartite graphs have no odd-length cycles. So, we continue by selecting which $k$ elements were used from the first part and which $k$ elements were used from the second part in $\binom{n}{k}\binom{m}{k}$ ways.
Note that we are not done yet. Now that we have the elements, we still need to decide in what order they appear in the cycles. In the special case of $k=2$, so us looking at $4$-cycles we can quickly see that there is only one arrangement of these into a cycle possible. That is obviously not the case for larger $k$. There can be many many ways to arrange the elements into cycles, so let us count those ways.
We continue as before by noting that we could have applied some arbitrary order to the vertices ahead of time. Without loss of generality, we suppose again that the vertices were labeled $1,2,3,\dots,m+n$. Let us use as our reference point when building the cycle the smallest vertex from the part with $n$ elements. In the case that $m=n$, let us instead simply use element $1$. This gives us an unambiguous way of selecting which element is used as our reference point. and specifically does so in a way that makes it so that the part to which it belongs is of size $n$ and the other is of size $m$.
So, building our cycle, we start with the selected element. Choose which element from the other part comes after. We have $k$ choices. Then choose which element from the first part comes next, we have $k-1$ choices. Choose what comes after that, we have $k-1$ choices, and so on going back and forth between the two parts building the cycle out. This gives $k!(k-1)!$ ways to build out the cycle. Recognize again that we doublecounted the number of cycles because had we made the same decisions but in the reverse order it would have been the same overall result.
This gives $\binom{n}{k}\binom{m}{k}\frac{k!(k-1)!}{2}$
Recognize that in the case of $k=2$ this does indeed simplify as $\binom{n}{2}\binom{m}{2}\frac{2!(2-1)!}{2}=\binom{n}{2}\binom{m}{2}\frac{2\cdot 1}{2}=\binom{n}{2}\binom{m}{2}$
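The formulas are compact enough to code up directly; a sketch with a couple of spot checks:

from math import comb, factorial

def k_cycles_in_Kn(n, k):
    return comb(n, k) * factorial(k - 1) // 2

def cycles_in_Knm(n, m, k):        # cycles of length 2k in K_{n,m}
    return comb(n, k) * comb(m, k) * factorial(k) * factorial(k - 1) // 2

print(k_cycles_in_Kn(5, 5))        # 12 hamiltonian cycles in K_5
print(cycles_in_Knm(3, 3, 2))      # 9 four-cycles in K_{3,3}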
Final Takeaways... Determine a way to unambiguously count the cycles with as convenient an approach as possible that avoids overcounting when possible. A good way to do that is to pick reference points and determine how to arrange things around that reference point. Try to understand what each term in the final expressions represents in terms of decisions made in the counting process and whether those decisions determine a final arrangement or not or if there is missing information. |
Primes of form $x^2+x\pm k$ | I have a heuristic verification and possibly an explanation for you using the modulus $210 = 2 \cdot 3 \cdot 5 \cdot 7$
So any residue $r$, $\bmod{~210}$ that has $\gcd(r,210) > 1$ will not be prime.
To investigate your equations I formed three lists, and counted the number of relevant residues that had $\gcd(r,210) = 1$, which I will detail below :
Form a list of residues of $x(x+1) + 1$
Form a list of residues of $x(x+1) + 3$
Form a list of residues of $x(x+1) - 15$
For example, to make the first list (in Python):
z = range(1, 211)
zz1 = [(x*y + 1) % 210 for x in z for y in z if x - y == 1]
from math import gcd  # replaces the old fractions.gcd, removed in Python 3.9
zz1g = [gcd(210, x) for x in zz1]
print(zz1g.count(1))
Yields:
99
In all cases the $x$ were taken over the complete residue system $\bmod{~210}$, and the results were computed as the remainder when divided by $210$. The greatest common divisor with $210$ was then computed for each residue in each list and the occurrences of "1" were counted. The occurrences are:
List 1, residues of form $x(x+1) + 1$, number of $\gcd$ of 1 = 99
List 2, residues of form $x(x+1) + 3$, number of $\gcd$ of 1 = 42
List 3, residues of form $x(x+1) - 15$, number of $\gcd$ of 1 = 42
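The other two counts come out of the same kind of sweep; a sketch (equivalent to the pair construction above, using $x(x+1)$ directly):

from math import gcd

zz2 = [(x * (x + 1) + 3) % 210 for x in range(1, 211)]
zz3 = [(x * (x + 1) - 15) % 210 for x in range(1, 211)]
print(sum(gcd(210, r) == 1 for r in zz2))   # 42
print(sum(gcd(210, r) == 1 for r in zz3))   # 42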
So heuristic verification for your equations is displayed. The number of "candidates" for primes in the first equation form are nearly double that of the other two forms, which are similar. This reflects the relative ratios you reported.
As to further explanation, if the $\gcd$ is not 1 then the number can certainly not be prime. Since there are more "not $1$" greatest common divisors in either of forms $2$ or $3$ than there are in form $1$, it follows that, since primes behave as if they are randomly distributed amongst candidate residues, more primes will occur from the form where the residues more often land on those which have no divisor in common with $210$.
In fact in this case the counts under $210$ strongly support the data you obtained from a half a million points. $210$ is just a single case, but the congruence argument will approach more closely the distributions you obtained, the larger the smoothness bound on the modulus.
This suggests that the sum of $k$ with the product of two adjacent integers more often falls on a prime when $k$, perhaps, has no prime factors at all.
Hope this helps. |
Proof about Theorem of Dirichlet. | Suppose at first that all the terms of the series are non-negative. Let $\sigma: \mathbb{N} \to \mathbb{N}$ be a permutation, so that
$$ \sum_{n \ge 1} b_n = \sum_{n \ge 1} a_{\sigma(n)} $$
is a rearrangement of the initial series. Let $\alpha_n , \beta_n$ be the sequences of the partial sums of $\sum a_n, \sum b_n$ respectively. For every $n \in \mathbb{N}$, let:
$$ k_n = \max\{\sigma(p) : 1 \le p \le n \} $$
Since every series involved has non-negative terms it holds:
$$ \beta_n = \sum_{i=1}^{n} b_i =\sum_{i=1}^{n} a_{\sigma(i)} \le \sum_{j=1}^{k_n } a_j = \alpha_{k_n} \le a $$
where $a$ denotes the sum of the original series. This means that the partial sums $\beta_n$ are monotone and bounded above, hence convergent to a limit $b$ which is less than or equal to $a$. Noting that we can see the initial series as a rearrangement of the rearranged one, we also get $b \ge a$, whence $a=b$.
If the series $\sum_{n \ge 1} a_n$ has terms with non-constant sign, we still get the same conclusion thanks to the positive and negative part decomposition, and noting that $\sum (b_n)^{+}$ and $\sum_{n \ge 1} (b_n)^{-}$ are rearrangements of $\sum_{n \ge 1} (a_n)^{+}$ and $\sum_{n \ge 1} (a_n)^{-}$, respectively. |
Give an example of a function $ f : [a,b] → \mathbb{R}$ that is continuous... | For instance, you can take the constant function $f(x) = 1$ for all $x\in [a,b]$, and $x_n = \begin{cases} a &\text{ if } n \text{ even}\\ b &\text{ if } n \text{ odd}\end{cases}$. |
How to prove if $m$, $m + 2$, $m + 4$ are all primes, then $m = 3$ | You can show that one of them must be divisible by $3$. If it is not $3$, it is not prime. |
Counter example in Sobolev spaces: Is there a standard example for showing that $H^1(\mathbb{R}^2)\not\subset L^\infty(\mathbb R^2)$? | A standard example would be something like
$$f(x,y)=\log\log(1+(x^2+y^2)^{-1}) \varphi(x^2+y^2)$$
or
$$f(x,y)=\left[-\log(x^2+y^2)\right]^{1/4} \varphi(x^2+y^2)$$
where $\varphi$ is smooth, compactly supported, and equal to $1$ in a neighbourhood of zero.
The rationale here being that $H^1(\mathbb{R}^2)$ is embedded into $\operatorname{BMO}(\mathbb{R}^2)$, a space that happens to be strictly larger than $L^\infty(\mathbb{R^2})$, insofar as, e.g., unbounded functions with logarithmic growth are contained in $\operatorname{BMO}$.
Here $\operatorname{BMO}$ refers to the space of functions of bounded mean oscillation. |
The condition that a semigroup $G$ is a group. | To find a neutral element, consider $a=b$. Name your neutral element $e$. For left/right inverses, consider $ax=e$ and $yb = e$.
Added : Suppose you want to show uniqueness of the neutral element. First of all, show that a solution to $xa = a$ is also a solution to $ay = a$. Let $x$ and $y$ be two such solutions.
The equation $za = x$ has a solution, so
$$
xy = (za)y = z(ay) = za = x.
$$
Similarly, the equation $aw = y$ has a solution, so
$$
xy = x(aw) = (xa)w = aw = y.
$$
It follows that $x=y$. In particular, if $y$ is a solution to $ay = a$, then all the solutions to $xa = a$ are equal to $y$, so there is a unique solution to $xa = a$. Similarly there is a unique solution to $ay = a$, so there is a unique neutral element for $a$. Call this element $e_a$.
Should I give more hints? The idea is to keep playing so that you accumulate enough properties to get your group. I don't have the whole proof in my head, but as soon as you'll have enough properties to have a group, you can clean the mess and make a clean proof out of it.
I add the rest of the proof here.
Let $a,b \in G$ and consider their unique neutral elements $e_a$, $e_b$ as above. Note that since $a e_a^2 = (ae_a)e_a = a e_a = a$, we have $e_a^2 = e_a$. Now we show that $e_a e_b = e_a$. Pick $z \in G$ such that $z e_b = e_a$. Then $e_a e_b = z e_b e_b = z e_b^2 = z e_b = e_a$. Similarly, pick $y \in G$ such that $e_a y = e_b$, so that $e_a e_b = e_a^2 y = e_a y = e_b$. Therefore $e_a = e_a e_b = e_b$. This implies $e_a = e_b$ for all $a,b \in G$, i.e. there exists a unique neutral element which we call $e$. Now all we need to show is that left and right inverses (i.e. solutions to $ax=e$ and $ya = e$) are equal. But if $ax = e$ and $ya = e$, then $y = ye = y(ax) = (ya)x = ex = x$, so we have a group.
Hope that helps, |
Difference in two conditional expectations | Recall that for an event $E$ and a random variable $X$, the conditional expectation $\mathbb E[X\mid E]$ is equal to $\frac{\mathbb E[X\mathsf 1_E]}{\mathbb P(E)}$. So we have
\begin{align}
\mathbb E\left[X\mid X\geqslant\frac n 2\right] &= \frac{\mathbb E[X\mathsf 1_{\{X\geqslant n/2\}}]}{\mathbb P(X\geqslant n/2)}\\ &= \frac{\sum_{k=\lceil n/2\rceil}^n k\binom nkp^k(1-p)^{n-k}}{\sum_{k=\lceil n/2\rceil}^n \binom nkp^k(1-p)^{n-k}}\\
&= \frac{(1-p)^{-\left\lceil \frac{n}{2}\right\rceil +n-1} p^{\left\lceil \frac{n}{2}\right\rceil } \left(p \binom{n}{\left\lceil \frac{n}{2}\right\rceil +1} \, _2F_1\left(2,-n+\left\lceil \frac{n}{2}\right\rceil +1;\left\lceil \frac{n}{2}\right\rceil +2;\frac{p}{p-1}\right)-(p-1) \left\lceil \frac{n}{2}\right\rceil \binom{n}{\left\lceil \frac{n}{2}\right\rceil } \, _2F_1\left(1,\left\lceil \frac{n}{2}\right\rceil -n;\left\lceil \frac{n}{2}\right\rceil +1;\frac{p}{p-1}\right)\right)}{\binom{n}{\left\lceil \frac{n}{2}\right\rceil } (1-p)^{n-\left\lceil \frac{n}{2}\right\rceil } p^{\left\lceil \frac{n}{2}\right\rceil } \, _2F_1\left(1,\left\lceil \frac{n}{2}\right\rceil -n;\left\lceil \frac{n}{2}\right\rceil +1;\frac{p}{p-1}\right)},
\end{align}
where ${}_2F_1$ denotes the hypergeometric function: $${}_2F_1(a,b;c;z) = \sum_{n=0}^\infty \frac{(a)_n(b)_n}{(c)_n}\frac{z^n}{n!}$$ and $(q)_n$ denotes the rising Pochhammer symbol, defined by
$$
(q)_n = \begin{cases}
1,& n=0\\
\prod_{i=0}^{n-1} (q+i),& n>0.
\end{cases}
$$
Similarly,
\begin{align}
\mathbb E\left[X\mid X< n/2\right] &= \frac{\mathbb E[X\mathsf 1_{\{X< n/2\}}]}{\mathbb P(X< n/2)}\\ &= \frac{\sum_{k=0}^{\lfloor n/2\rfloor} k\binom nkp^k(1-p)^{n-k}}{\sum_{k=0}^{\lfloor n/2\rfloor} \binom nkp^k(1-p)^{n-k}}\\
&= \frac{p (1-p)^n \left(n \left(\frac{1}{1-p}\right)^n-\frac{\Gamma (n+1) (1-p)^{-\left\lfloor \frac{n}{2}\right\rfloor -1} p^{\left\lfloor \frac{n}{2}\right\rfloor } \left(n p \Gamma \left(\left\lfloor \frac{n}{2}\right\rfloor +1\right) \, _2\tilde{F}_1\left(1,-n+\left\lfloor \frac{n}{2}\right\rfloor +1;\left\lfloor \frac{n}{2}\right\rfloor +2;\frac{p}{p-1}\right)-p+1\right)}{\Gamma \left(n-\left\lfloor \frac{n}{2}\right\rfloor \right) \Gamma \left(\left\lfloor \frac{n}{2}\right\rfloor +1\right)}\right)}{(1-p)^n \left(\left(\frac{1}{1-p}\right)^n-\binom{n}{\left\lfloor \frac{n}{2}\right\rfloor +1} (1-p)^{-\left\lfloor \frac{n}{2}\right\rfloor -1} p^{\left\lfloor \frac{n}{2}\right\rfloor +1} \, _2F_1\left(1,-n+\left\lfloor \frac{n}{2}\right\rfloor +1;\left\lfloor \frac{n}{2}\right\rfloor +2;\frac{p}{p-1}\right)\right)},
\end{align}
where ${}_2\tilde F_1$ denotes the regularized hypergeometric function:
$$
{}_2\tilde F_1(a,b;c;z) = \frac1{\int_0^\infty t^{c-1}e^{-t}\ \mathsf dt}\sum_{n=0}^\infty \frac{(a)_n(b)_n}{(c)_n}\frac{z^n}{n!}.
$$
Then we have
\begin{align}
DE &= \mathbb E\left[X\mid X\geqslant \frac n 2\right] - \mathbb E\left[X\mid X<\frac n 2\right]\\
&= p (1-p)^n \left(n \left(\frac{1}{1-p}\right)^n-\frac{\Gamma (n+1) (1-p)^{-\left\lfloor \frac{n}{2}\right\rfloor -1} p^{\left\lfloor \frac{n}{2}\right\rfloor } \left(n p \Gamma \left(\left\lfloor \frac{n}{2}\right\rfloor +1\right) \, _2\tilde{F}_1\left(1,-n+\left\lfloor \frac{n}{2}\right\rfloor +1;\left\lfloor \frac{n}{2}\right\rfloor +2;\frac{p}{p-1}\right)-p+1\right)}{\Gamma \left(n-\left\lfloor \frac{n}{2}\right\rfloor \right) \Gamma \left(\left\lfloor \frac{n}{2}\right\rfloor +1\right)}\right) - \frac{p \left(n \left(\frac{1}{1-p}\right)^n-\frac{\Gamma (n+1) (1-p)^{-\left\lfloor \frac{n}{2}\right\rfloor -1} p^{\left\lfloor \frac{n}{2}\right\rfloor } \left(n p \Gamma \left(\left\lfloor \frac{n}{2}\right\rfloor +1\right) \, _2\tilde{F}_1\left(1,-n+\left\lfloor \frac{n}{2}\right\rfloor +1;\left\lfloor \frac{n}{2}\right\rfloor +2;\frac{p}{p-1}\right)-p+1\right)}{\Gamma \left(n-\left\lfloor \frac{n}{2}\right\rfloor \right) \Gamma \left(\left\lfloor \frac{n}{2}\right\rfloor +1\right)}\right)}{\left(\frac{1}{1-p}\right)^n-\binom{n}{\left\lfloor \frac{n}{2}\right\rfloor +1} (1-p)^{-\left\lfloor \frac{n}{2}\right\rfloor -1} p^{\left\lfloor \frac{n}{2}\right\rfloor +1} \, _2F_1\left(1,-n+\left\lfloor \frac{n}{2}\right\rfloor +1;\left\lfloor \frac{n}{2}\right\rfloor +2;\frac{p}{p-1}\right)}\\
&= p \left((1-p)^n-\frac{1}{\left(\frac{1}{1-p}\right)^n-\binom{n}{\left\lfloor \frac{n}{2}\right\rfloor +1} (1-p)^{-\left\lfloor \frac{n}{2}\right\rfloor -1} p^{\left\lfloor \frac{n}{2}\right\rfloor +1} \, _2F_1\left(1,-n+\left\lfloor \frac{n}{2}\right\rfloor +1;\left\lfloor \frac{n}{2}\right\rfloor +2;\frac{p}{p-1}\right)}\right) \left(n \left(\frac{1}{1-p}\right)^n-\frac{\Gamma (n+1) (1-p)^{-\left\lfloor \frac{n}{2}\right\rfloor -1} p^{\left\lfloor \frac{n}{2}\right\rfloor } \left(n p \Gamma \left(\left\lfloor \frac{n}{2}\right\rfloor +1\right) \, _2\tilde{F}_1\left(1,-n+\left\lfloor \frac{n}{2}\right\rfloor +1;\left\lfloor \frac{n}{2}\right\rfloor +2;\frac{p}{p-1}\right)-p+1\right)}{\Gamma \left(n-\left\lfloor \frac{n}{2}\right\rfloor \right) \Gamma \left(\left\lfloor \frac{n}{2}\right\rfloor +1\right)}\right).
\end{align}
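These closed forms are unwieldy, but they are easy to sanity-check numerically. A minimal sketch with scipy (not part of the derivation above):

```python
# Direct numerical evaluation of DE = E[X | X >= n/2] - E[X | X < n/2] for X ~ Bin(n, p).
from scipy.stats import binom

def DE(n, p):
    pmf = [binom.pmf(k, n, p) for k in range(n + 1)]
    P_hi = sum(pmf[k] for k in range(n + 1) if k >= n / 2)
    e_hi = sum(k * pmf[k] for k in range(n + 1) if k >= n / 2) / P_hi
    e_lo = sum(k * pmf[k] for k in range(n + 1) if k < n / 2) / (1 - P_hi)
    return e_hi - e_lo

for n in (5, 10, 25, 100):
    print(n, DE(n, 0.3))
```

The printed values show no obvious pattern in $n$.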
So no, $DE$ does not have a simple form in terms of $n$. |
Inequality involving independent random variables | It holds because of the law of total probability: it computes the probability that $X$ is bigger than $Y$ by conditioning on each possible value of $Y$. Since $X$ and $Y$ are independent,
\begin{equation}
P(A \mid B) = \frac{P(A \cap B)}{P(B)} = \frac{P(A) \cdot P(B)}{P(B)} = P(A)
\end{equation}
it can be reduced as,
\begin{equation}
P(X > Y) = \int P(X > Y \mid Y = z)\, f_{Y}(z) \, dz = \int P(X > z)\, f_{Y}(z) \, dz
\end{equation}
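As a concrete sanity check of this identity (a hypothetical sketch with independent standard normals, where the answer is $1/2$ by symmetry):

```python
# Compare P(X > Y) from simulation with the integral of P(X > z) f_Y(z) dz.
import numpy as np
from scipy.integrate import quad
from scipy.stats import norm

rng = np.random.default_rng(0)
x, y = rng.standard_normal(10**6), rng.standard_normal(10**6)
print("simulation:", np.mean(x > y))   # ~ 0.5

rhs, _ = quad(lambda z: norm.sf(z) * norm.pdf(z), -np.inf, np.inf)
print("integral:  ", rhs)              # 0.5
```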
Assuming that $X$ is discrete and $Y$ is continuous with density $f_Y$, the law of total probability can be expressed as
\begin{equation}
P(X=x) = \int P (X=x\mid Y=y)\, f_Y(y) \, dy
\end{equation} |
Absolutely continuous probability measures example | $\mathbb{P}_{2}$ is $0$ only on the empty set, but $\mathbb{P}_{1}$ also satisfies this. |
Property of function $\varphi(x)=|x|$ on $\mathbb{R}$ | Note that $\phi(x) = \min_{k \in \mathbb{Z}} |x-2k|$.
We have $|x-2k| \le |y-2k| + |x-y|$. Hence
$\phi(x) \le |y-2k| + |x-y|$, and since this holds for all $k$ we have
$\phi(x) \le \phi(y) + |x-y|$. Repeating this with $x,y$ interchanged
gives the desired result. |
Why $\mathbb{Z}[\sqrt{-5}]\cong\mathbb{Z}[X]/(X^2+5)$ and $\mathbb{C}\cong \mathbb{R}[X]/(X^2+1)$? | Try
$$\phi:\Bbb Z[x]\to\Bbb C\;,\;\;\phi p(x):=p(\sqrt{-5})$$
Observe that the image is $\;\Bbb Z[\sqrt{-5}]\;$ , and the kernel is...
For the second one, try
$$\psi:\Bbb R[x]\to\Bbb C\;,\;\;\psi p(x):=p(i)$$ |
If a normed space $X$ is reflexive, show that $X'$ is reflexive. | If $h \in X'''$ is given, define the functional $\tilde h \in X'$ by
$\tilde h(f) = h(J(f))$ for all $f \in X$.
For all $g \in X''$, you have
$$J'(\tilde h)(g) = g(\tilde h) = \tilde h(J^{-1}(g)) = h(g).$$
This shows that $J'(\tilde h) = h$. |
Cardinality & Schroder-Bernstein Theorem | I guess that you mean $x = 0.x_1x_2x_3\ldots$ and $y = 0.y_1y_2y_3\ldots$
Injectivity is easy. If you have two different points $(u,v)$ and $(x,y)$ then either $u \neq x$ or $v \neq y$. If it is $u \neq x$ then they have at least one different digit $u_i \neq x_i$. So, you can point to a differing digit in the images of these points under your map. Similarly if $v \neq y$.
However, the map is not surjective. Nothing maps to $0.09090909\ldots$ (the second coordinate of a preimage would have to be $0.999\ldots$).
So, you need to think of another map which is surjective. That should be very easy. |
For an irrational number $a$ the fractional part of $na$ for $n\in\mathbb N$ is dense in $[0,1]$ | Pick any $k\in\mathbb{N}$. By the pigeonhole principle, there are two multiples of $\alpha$ whose fractional parts lie within $1/k$ of each other. Taking the difference, there is a multiple of $\alpha$ with (positive) fractional part $<1/k$.
It follows that every $x\in [0,1]$ is within $1/k$ of some $\{n\alpha\}$, for any $k$: the fractional parts of the successive multiples of that small-fractional-part multiple advance in steps shorter than $1/k$, so they cannot jump over any interval of length $1/k$. |
Functional equation involving sine function: $ \sin x + f ( x ) = \sqrt 2 f \left( x - \frac \pi 4 \right) $ | Let $g(x) = f(x) - \cos x$. We have
$$\sin x + \cos x + g(x) = \sqrt 2 \cos \left( x - \frac \pi 4 \right)
+ \sqrt 2 g \left( x - \frac \pi 4 \right)$$
which yields
$$g(x) = \sqrt 2 g \left( x - \frac \pi 4 \right) \tag{$*$}$$
Now note that every function $g$ satisfying $(*)$ gives a solution for the original equation by adding $\cos x$. There are a lot of such functions, since every function $g_0 : [ 0 , \frac \pi 4 ) \to \mathbb R$ can be extended uniquely to a $g : \mathbb R \to \mathbb R$ satisfying $(*)$; for instance, $g(x) = 2^{2x/\pi}$ satisfies $(*)$, giving the solution $f(x) = \cos x + 2^{2x/\pi}$. Further assumptions like continuity or differentiability give you some conditions on $g_0$, especially conditions on the values $g_0 (x)$ near $x = 0$ and $x = \frac \pi 4$. But these assumptions are not strong enough to force $g (x) = 0$. |
a problem on cauchy sequence under the continuous map | Concerning c): The counterexample $f(x) := \frac{1}{x}$ doesn't work, since you can't define it (as a continuous function) on $\mathbb{R}^n$, but only on $\mathbb{R}^n \backslash \{0\}$.
Let $(x_n)_{n} \subseteq X := \mathbb{R}^n$ be a Cauchy sequence. Since $X$ is complete, we have $x_n \to x$ for some $x \in \mathbb{R}^n$. Since $f$ is (sequentially) continuous, we obtain $f(x_n) \to f(x)$, i.e. $(f(x_n))_{n \in \mathbb{N}}$ is convergent and in particular a Cauchy sequence.
Remark: This also proves b) (since a compact metric space $X$ is in particular complete). |
Struggling to solve this Cal II integral: $∫ \frac{3t^{1/2}}{1+t^{1/3}}\text dt$ | HINT:
As lcm$(2,3)=6,$ set $t^{1/6}=y\implies t^{1/2}=y^3,t^{1/3}=y^2$ and $t=y^6,dt=6y^5dy$
$$\int\dfrac{3t^{1/2}}{1+t^{1/3}}dt=\int\dfrac{3y^3}{1+y^2}6y^5dy$$
$$=18\int\dfrac{y^8-1+1}{y^2+1}dy$$
$$=18\int(y^6-y^4+y^2-1)\,dy+18\int\dfrac{dy}{1+y^2}$$
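If you want to double-check the polynomial division, a quick sympy sketch (not needed for the solution):

```python
# Verify 18*y**8/(1 + y**2) == 18*(y**6 - y**4 + y**2 - 1) + 18/(1 + y**2).
import sympy as sp

y = sp.symbols('y')
lhs = 18 * y**8 / (1 + y**2)
rhs = 18 * (y**6 - y**4 + y**2 - 1) + 18 / (1 + y**2)
print(sp.simplify(lhs - rhs))   # 0
```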
Can you take it from here? |
simple congruence system problem. | The numbers are very special! Our system is equivalent to $2x\equiv 1$ modulo $3$, $5$, $7$, or equivalently $2x\equiv 1\pmod{105}$.
To solve $2x\equiv 1\pmod{105}$, rewrite it as $2x\equiv 106\pmod{105}$. This has solution $x\equiv 53\pmod{105}$.
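As a quick machine check (Python 3.8+ supports modular inverses through the three-argument `pow`):

```python
# Verify that x = 53 solves 2x ≡ 1 modulo 3, 5, 7, i.e. modulo 105.
print(pow(2, -1, 105))                           # 53
print(all(2 * 53 % m == 1 for m in (3, 5, 7)))   # True
```

Both lines confirm the computation. |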
Green's function and derivatives of Dirac's Delta | Let's look at a more general case: a function of the form $f(x^0 - |\mathbf{x}|)/|\mathbf{x}|$. For the sake of compactness, I'll switch to spherical polar coordinates where $|\mathbf{x}| = r$. We therefore have
$$
\begin{multline}
\left[\left(\frac{\partial}{\partial x^0}\right)^2 - \nabla^2 \right] \frac{f(x^0-|\mathbf{x}|)}{|\mathbf{x}|} \\= \frac{f''(x^0 - r)}{r} - f(x^0 - r) \nabla^2 \left( \frac{1}{r} \right) - \frac{1}{r} \nabla^2 f(x^0 - r) - 2 \left[ \nabla \left(\frac{1}{r}\right) \right] \cdot \left[\nabla (f(x^0 - r)) \right]
\end{multline}
$$In spherical coordinates, we have
$$
\frac{1}{r} \nabla^2 f(x^0 - r) = \frac{1}{r} \left[ \frac{1}{r^2} \frac{\partial}{\partial r} \left( r^2 \frac{\partial f(x^0 - r)}{\partial r} \right) \right] = \frac{1}{r} f''(x^0 - r) - \frac{2}{r^2} f'(x^0 - r)
$$
and
$$
2 \left[ \nabla \left(\frac{1}{r}\right) \right] \cdot \left[\nabla (f(x^0 - r)) \right] = 2 \left( - \frac{1}{r^2} \hat{r} \right) \cdot \left( - f'(x^0 - r) \hat{r} \right) = \frac{2}{r^2} f'(x^0 - r).
$$
(If you wish to check these, note that
$$
\frac{\partial f(x^0 - r)}{\partial r} = - f'(x^0 - r) \qquad \frac{\partial^2 f(x^0 - r)}{\partial r^2} = f''(x^0 - r)
$$
due to the chain rule.) Putting all of these together, we find that
$$
\frac{f''(x^0 - r)}{r} - \frac{1}{r} \nabla^2 f(x^0 - r) - 2 \left[ \nabla \left(\frac{1}{r}\right) \right] \cdot \left[\nabla (f(x^0 - r)) \right] = 0,
$$
and so for any function $f$, we have
$$
\left[\left(\frac{\partial}{\partial x^0}\right)^2 - \nabla^2 \right] \frac{f(x^0-|\mathbf{x}|)}{|\mathbf{x}|} = - f(x^0 - r) \nabla^2 \left( \frac{1}{r} \right) = 4 \pi f(x^0 - |\mathbf{x}|) \delta^3(\mathbf{x}).
$$
The above derivation works for any function of $x^0 - r$ alone, including $\delta(x^0 - r)$.
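The $r \neq 0$ part of this computation can also be verified symbolically. A small sympy sketch using the radial Laplacian, with $t$ playing the role of $x^0$:

```python
# Check that u = f(t - r)/r solves u_tt - (1/r^2) d/dr(r^2 u_r) = 0 away from r = 0.
import sympy as sp

r, t = sp.symbols('r t', positive=True)
f = sp.Function('f')
u = f(t - r) / r
radial_laplacian = sp.diff(r**2 * sp.diff(u, r), r) / r**2
print(sp.simplify(sp.diff(u, t, 2) - radial_laplacian))   # 0
```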
The reason this works for any function, by the way, is that the general solution for the wave equation in three dimensions under the assumption of spherical symmetry (and excluding the origin) is
$$
u(r, t) = \frac{1}{r} \left[ f(r - t) + g(r + t) \right]
$$
for any two functions $f$ and $g$. |
Moore-Penrose pseudoinverse of a 3×3 matrix | You are probably thinking of the formula
$$
\begin{align}
\mathbf{A}^{-1}
&= \frac{\text{adj } \mathbf{A}} {\det \mathbf{A} } \\
&= \frac{\left( \text{cof } \mathbf{A}\right)^{\mathrm{T}}} {\det \mathbf{A} } \\
\end{align}
$$
The matrix $\text{adj } \mathbf{A}$ is the adjugate of $\mathbf{A}$ and is the transpose of $\mathbf{C}$, the matrix of cofactors of $\mathbf{A}$.
For a nonsingular $\mathbf{A}\in\mathbb{R}^{3 \times 3}$,
$$
\mathbf{A} =
\left[
\begin{array}{ccc}
a_{11} & a_{12} & a_{13} \\
a_{21} & a_{22} & a_{23} \\
a_{31} & a_{32} & a_{33}
\end{array}
\right],
$$
the matrix of cofactors is composed of the determinants
$$
\mathbf{C} =
\left[
\begin{array}{ccc}
%
+ \left| \begin{array}{cc} a_{22} & a_{23} \\ a_{32} & a_{33} \end{array} \right| &
- \left| \begin{array}{cc} a_{21} & a_{23} \\ a_{31} & a_{33} \end{array} \right| &
+ \left| \begin{array}{cc} a_{21} & a_{22} \\ a_{31} & a_{32} \end{array} \right| \\
%
- \left| \begin{array}{cc} a_{12} & a_{13} \\ a_{32} & a_{33} \end{array} \right| &
+ \left| \begin{array}{cc} a_{11} & a_{13} \\ a_{31} & a_{33} \end{array} \right| &
- \left| \begin{array}{cc} a_{11} & a_{12} \\ a_{31} & a_{32} \end{array} \right| \\
%
+ \left| \begin{array}{cc} a_{12} & a_{13} \\ a_{22} & a_{23} \end{array} \right| &
- \left| \begin{array}{cc} a_{11} & a_{13} \\ a_{21} & a_{23} \end{array} \right| &
+ \left| \begin{array}{cc} a_{11} & a_{12} \\ a_{21} & a_{22} \end{array} \right| \\
\end{array}
\right].
$$
Because you specified that $\mathbf{A}$ is nonsingular, the matrix inverse exists and is the same as the Moore-Penrose pseudoinverse:
$$
\mathbf{A}^{-1} = \mathbf{A}^{\dagger}.
$$
Example
A pencil-and-paper confirmation:
$$
\mathbf{A} =
\left[
\begin{array}{ccc}
1 & 0 & 1 \\
0 & 1 & 1 \\
1 & 0 & 0
\end{array}
\right],
$$
The determinant is $\det \mathbf{A} = -1$, and the matrix of cofactors is
$$
\mathbf{C} =
\left[
\begin{array}{rrr}
0 & 1 & -1 \\
0 & -1 & 0 \\
-1 & -1 & 1
\end{array}
\right]
$$
The inverse matrix is
$$
\mathbf{A}^{-1} = \frac{\left( \text{cof } \mathbf{A}\right)^{\mathrm{T}}} {\det \mathbf{A} } = \frac{\mathbf{C}^\mathrm{T}} {-1} =
\left[
\begin{array}{rrr}
0 & 0 & 1 \\
-1 & \phantom{-}1 & 1 \\
1 & 0 & -1
\end{array}
\right].
$$
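As a numerical cross-check of the example (a numpy sketch, not part of the hand computation):

```python
# Confirm that C^T / det(A) matches both the inverse and the pseudoinverse.
import numpy as np

A = np.array([[1., 0., 1.],
              [0., 1., 1.],
              [1., 0., 0.]])
C = np.array([[ 0.,  1., -1.],   # the cofactor matrix computed above
              [ 0., -1.,  0.],
              [-1., -1.,  1.]])
A_inv = C.T / np.linalg.det(A)
print(np.allclose(A_inv, np.linalg.inv(A)))    # True
print(np.allclose(A_inv, np.linalg.pinv(A)))   # True
```

Both checks succeed because, for a nonsingular matrix, the pseudoinverse coincides with the inverse. |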
Prove the following set is uncountable | You say it "is just a union of 1, 2, 3, ...".
No! It is a set of sets, not a union of sets. Its cardinality is just the number of sets it contains. Naively you might think that this is the same as the cardinality of $\{x:x\in\Bbb R\}$, which of course is just $\mathfrak{c}$. But there are duplications in there, so you have to be a bit careful. |
How to make a multidimensional SVD? | What you have found is a tensor rank decomposition, which is something that the SVD gives you for $2$-tensors. In this way, your decomposition can indeed be thought of as a generalization of the SVD. In fact, what you get is the same thing that you would get by applying the "higher order SVD" algorithm. See the interpretation section of that second page for details. |
Calculate if circle intersects a rectangle | There is an intersection whenever the center of the circle lies in the green area.
You can detect this by checking if the center is inside the outer rectangle, but not in one of the corners at a distance larger than the radius.
Concretely, assign a code from $0$ to $4$ to the abscissa of the center, depending on its position with respect to the four verticals (left to right), and similarly for the ordinate. The codes $0*,4*,*0,*4$ get a "no". The codes $11,13,31,33$ receive a "maybe" (you need to check the distance to the corresponding corner). The other codes get a "yes".
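Equivalently, in code (a compact sketch with hypothetical names): clamp the center to the rectangle and compare the resulting distance with the radius.

```python
# Does the circle (cx, cy, r) intersect the axis-aligned rectangle [x0, x1] x [y0, y1]?
def circle_intersects_rect(cx, cy, r, x0, y0, x1, y1):
    dx = max(x0 - cx, 0.0, cx - x1)   # horizontal distance from center to rectangle
    dy = max(y0 - cy, 0.0, cy - y1)   # vertical distance from center to rectangle
    return dx * dx + dy * dy <= r * r

print(circle_intersects_rect(3, 3, 1.5, 0, 0, 2, 2))   # True: corner within radius
```

This clamp test describes exactly the green area: the rectangle enlarged by $r$, with rounded corners. |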
How can I draw the skewed normal distribution curve | You can use the skew normal distribution with parameters $(ξ,ω,α)$ which can be estimated from the given data. If we set $δ=\dfrac{α}{\sqrt{1+α^2}}$, then the mean, variance and skewness of the skew normal distribution are given by (see the link)
mean: $ξ+ωδ\sqrt{\dfrac2π}$
variance: $ω^2\left(1-\dfrac{2δ^2}{π}\right)$
skewness: $\dfrac{4-\pi}{2} \dfrac{\left(\delta\sqrt{\dfrac2\pi}\right)^3}{ \left(1-\dfrac{2\delta^2}{\pi}\right)^{3/2}}$
Substitute your known values for the mean, variance and skewness to find proper values for the parameters $(ξ,ω,α)$ of the distribution. Approximate values will do (you do not need to solve exactly), because you are working from a sample, and the distribution that you find does not need to fit the sample exactly. So trial and error (with a computer) may help, since this is not an easy system to solve.
Start from the formula for the skewness, which depends only on $δ$. That is, solve
$$0.3131=\dfrac{4-\pi}{2} \dfrac{\left(\delta\sqrt{\dfrac2\pi}\right)^3}{ \left(1-\dfrac{2\delta^2}{\pi}\right)^{3/2}}$$
to find $δ$ (approximately). From $δ$ you can find $α$ directly. Now, go to the variance and solve $$0.028=ω^2\left(1-\dfrac{2δ^2}{π}\right)$$
to find $ω$, using the value of $δ$ that you already have. Finally, use the formula for the mean (with $ω, δ$ known) to find $ξ$ in a similar way.
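A short numerical sketch of this procedure (scipy; the sample mean below is a placeholder value, substitute your own):

```python
# Method-of-moments fit of the skew normal from (mean, variance, skewness).
import numpy as np
from scipy.optimize import brentq

mean, var, skew = 1.0, 0.028, 0.3131   # mean = 1.0 is hypothetical

def skew_of_delta(d):
    m = d * np.sqrt(2 / np.pi)
    return (4 - np.pi) / 2 * m**3 / (1 - 2 * d**2 / np.pi) ** 1.5

delta = brentq(lambda d: skew_of_delta(d) - skew, 1e-9, 0.99)   # skewness equation
alpha = delta / np.sqrt(1 - delta**2)
omega = np.sqrt(var / (1 - 2 * delta**2 / np.pi))               # variance equation
xi = mean - omega * delta * np.sqrt(2 / np.pi)                  # mean equation
print(delta, alpha, omega, xi)
```

The three equations are solved in exactly the order described above. |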
A double integral question which I made up to test my understanding | For $x=0$ your $\int_{-1}^{-\sqrt{1-x^2}}\cdots+\int^{1}_{\sqrt{1-x^2}}\cdots$ part is just $0$.
Unsurprisingly, it is also the case that $\frac{\cos(x\sqrt{1-x^2})}x-\frac{\cos x}x$ extends continuously to a function $g$ with $g(0)=0$. |
r-combination from n objects where objects can be indistinguishable or distinguishable | You tagged DP and combinatorics, so I will try to merge both:
First of all, you need to know that it doesn't matter what the objects are; the only thing that matters is that you have $1$ one, $2$ twos and $1$ three.
So call $a_i:=\text{number of times we have the $i$-th object}$ ($a_1=1,a_2=2,a_3=1$ in your example) and suppose you have $n$ kinds of objects, so we have the family of numbers $\{a_i\}_{i\in [n]}$. In your recursion you want to keep track of how many numbers you have left of each kind and of how many you have chosen, so your recursion will look like:$$f(A,r,i) =
\left\{
\begin{array}{ll}
{[\, r \leq a_1 \,]} & \mbox{if } i=1 \\
f(A,r,i-1)+f(A\setminus \{a_i\}\cup \{a_i-1\} ,r-1,i)[a_i\neq 0] & \mbox{if } i>1
\end{array}
\right.$$
where $[P]$ is $1$ if the statement inside is true and $0$ otherwise, and $A$ is an array (the set operations represent replacement in that array). From that recurrence, you can see that the DP must track how many objects you still have to choose and how many elements of each kind you have left.
On the other hand, by combinatorics you must see that you are dealing here with the problem of counting how many arrays $(x_1,\ldots, x_n)$ have the property$$\left\{
\begin{array}{ll}
x_i\leq a_i \\
\sum _{i=1}^n x_i=r
\end{array}\right.$$
If you do not care about the $a_i$'s you will have the well known stars and bars theorem there, i.e., $\binom{r+n-1}{n-1}$. The only problem is that you have restrictions on the summands, so you have to include and exclude solutions. In your example, call $$f(y_1,y_2,y_3):=\{ \text{ solutions to $\sum x_i=r$ with $x_1>y_1$, $x_2>y_2$ and $x_3>y_3$}\}.$$ You want every solution of the sum minus the ones that violate your restrictions, so in your example: $$\left|f(-1,-1,-1)\setminus (f(1,-1,-1)\cup f(-1,2,-1) \cup f(-1,-1,1))\right| = |f(-1,-1,-1)|+\sum _{i=1}^3 (-1)^{i}\sum _{b_1<\cdots <b_i}\Big|\bigcap _{l=1}^i f(-1,\cdots ,a_{b_l},\cdots ,-1)\Big|.$$ So you just need to use stars and bars in each instance of that inclusion–exclusion.
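If you prefer to just compute, a compact DP sketch (organized a bit differently from the recursion above: it loops over how many copies of kind $i$ to take):

```python
# Count solutions of x_1 + ... + x_n = r with 0 <= x_i <= a[i-1].
from functools import lru_cache

def count(a, r):
    @lru_cache(maxsize=None)
    def f(i, rem):
        if i == 0:
            return 1 if rem == 0 else 0
        return sum(f(i - 1, rem - x) for x in range(min(a[i - 1], rem) + 1))
    return f(len(a), r)

print(count((1, 2, 1), 2))   # 4: (0,1,1), (0,2,0), (1,0,1), (1,1,0)
```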
P.S.: you will have to deal with $2^n$ sums there, but you can handle that recursively too. |
conditional "given" sign versus "equal" sign: which has higher order? | In the WIDE majority of cases, its going to be a. For it to be something like b, the parenthesis would be messed up and it wouldn't really have a meaning. |
Prove that 2 students live exactly five houses apart if | Number the houses sequentially from 1 to 50. Define 5 pigeonholes using the house numbers (1, 6, 11, ..., 46), (2, 7, 12, ..., 47), ..., (5, 10, 15, ..., 50).
Since you are distributing 26 pigeons into these 5 pigeonholes, one of them receives at least 6 pigeons. Since that hole then contains 6 pigeons (i.e. 6 of the chosen numbers), two of them must be adjacent in the progression, i.e. exactly five houses apart. (You can make this last statement more precise with ANOTHER pigeonhole argument.) |
Find Jordan canonical form and basis of a linear operator. | Well, $(A - 2 I )^3 = 0,$ but $(A - 2 I )^2 \neq 0.$ So the minimal polynomial and the characteristic polynomial agree, meaning each eigenvalue occurs in just a single block. Indeed
$$
(A - 2 I )^2 =
\left(
\begin{array}{ccc}
1 & 1 & 1 \\
0 & 0 & 0 \\
-1 & -1 & -1
\end{array}
\right)
$$
Now find some $u$ such that $(A - 2 I )^2 u \neq 0.$ Since you want the extra $1$s below the main diagonal, we will put $u$ as the left hand column of $P,$ where we are solving $P^{-1} A P = J$ with $J$ the Jordan form you want.
I will take
$$
u =
\left(
\begin{array}{c}
1 \\
0 \\
0
\end{array}
\right)
$$
The middle column will be $v = (A - 2 I) u,$ or
$$
v =
\left(
\begin{array}{c}
-2 \\
1 \\
1
\end{array}
\right)
$$
and finally $w = (A - 2 I) v =(A - 2 I)^2 u ,$ so that $ (A - 2 I) w =(A - 2 I)^3 u = 0 u = 0,$ so that $w$ is a genuine eigenvector
$$
w =
\left(
\begin{array}{c}
1 \\
0 \\
-1
\end{array}
\right)
$$
The columns of $P$ will be $u,v,w$ so
$$
P =
\left(
\begin{array}{ccc}
1 & -2 & 1 \\
0 & 1 & 0 \\
0 & 1 & -1
\end{array}
\right)
$$
next
$$
P^{-1} =
\left(
\begin{array}{ccc}
1 & 1 & 1 \\
0 & 1 & 0 \\
0 & 1 & -1
\end{array}
\right)
$$
With
$$
A =
\left(
\begin{array}{ccc}
0 & -1 & -2 \\
1 & 3 & 1 \\
1 & 0 & 3
\end{array}
\right)
$$
we get to
$$
\left(
\begin{array}{ccc}
1 & 1 & 1 \\
0 & 1 & 0 \\
0 & 1 & -1
\end{array}
\right)
\left(
\begin{array}{ccc}
0 & -1 & -2 \\
1 & 3 & 1 \\
1 & 0 & 3
\end{array}
\right)
\left(
\begin{array}{ccc}
1 & -2 & 1 \\
0 & 1 & 0 \\
0 & 1 & -1
\end{array}
\right) =
\left(
\begin{array}{ccc}
2 & 0 & 0 \\
1 & 2 & 0 \\
0 & 1 & 2
\end{array}
\right)
$$
$$
\left(
\begin{array}{ccc}
1 & -2 & 1 \\
0 & 1 & 0 \\
0 & 1 & -1
\end{array}
\right)
\left(
\begin{array}{ccc}
2 & 0 & 0 \\
1 & 2 & 0 \\
0 & 1 & 2
\end{array}
\right)
\left(
\begin{array}{ccc}
1 & 1 & 1 \\
0 & 1 & 0 \\
0 & 1 & -1
\end{array}
\right) =
\left(
\begin{array}{ccc}
0 & -1 & -2 \\
1 & 3 & 1 \\
1 & 0 & 3
\end{array}
\right)
$$
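A quick machine check of both conjugations (a numpy sketch):

```python
# Verify P^{-1} A P = J (ones below the diagonal) for the matrices above.
import numpy as np

A = np.array([[0, -1, -2], [1, 3, 1], [1, 0, 3]])
P = np.array([[1, -2, 1], [0, 1, 0], [0, 1, -1]])
print(np.linalg.inv(P) @ A @ P)   # [[2 0 0], [1 2 0], [0 1 2]], up to rounding
```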
If you would like to get the $1$s above the main diagonal instead, start over with the columns of $P$ in order $w,v,u,$ then calculate the new $P^{-1}$. Indeed, the fact that a single Jordan block is similar to its transpose gives a cheap proof that any matrix is similar to its transpose. |
Prove $C^1([0,1]) $ is Banach. | It is better to start with $z$ and then reconstruct $y$. By definition of the norm on $C^1([0,1])$, the sequence $x_n'$ is Cauchy in $C([0,1])$, which is complete, so $x_n' \rightarrow z$ in $C([0,1])$ for some $z$. Since $x_n$ is also Cauchy in $C([0,1])$, we likewise have $x_n \rightarrow y$ in $C([0,1])$ for some $y$. Now
$$ y(t) = \lim_{n \to \infty} x_n(t) = \lim_{n \to \infty} \left( x_n(0) + \int_0^t x_n'(s) \, ds \right) = y(0) + \int_0^t \lim_{n \to \infty} x_n'(s) \, ds = y(0) + \int_0^t z(s) \, ds$$
where we are allowed to exchange the limit and the integral because $x_n' \rightarrow z$ uniformly on $[0,1]$.
Finally, the equation above together with the Fundamental theorem of calculus shows that $y$ is differentiable and $y'(t) = z(t)$ for all $t \in [0,1]$ so $x_n \rightarrow y$ in $C^1([0,1])$ and so $C^1([0,1])$ is Banach. |
The larger the commutator subgroup is, the "less abelian" the group is | It's a somewhat vague comment, which nevertheless makes some sense. Since the commutator is, as you noticed, the minimum you need to "mod out by" in order to get an abelian group, its size does indeed correspond to how far the group is from being abelian. |
solving equation find real number of x and y | We have $$65=t^3+s^3=(t+s)(t^2-ts+s^2)=(t+s)[(t+s)^2-3ts]\\=(t+s)^3-3ts(t+s)\\
=(t+s)^3-3(20)$$ (using the given $ts(t+s)=20$).
Thus $$(t+s)^3=125$$
or $$t+s=5,\quad\text{i.e.} \quad s=5-t.$$ Then $ts=20/(t+s)=4$, so from $$ts=4$$
we have $$t(5-t)=4\implies t^2-5t+4=0\\
\implies (t-1)(t-4)=0,$$ so $(t,s)=(1,4)$ or $(4,1)$. |
Integration by parts - conceptual doubt | In truth, you could have it, but it would simply make everything more complicated.
$$\begin{align}\int u\ dv&=u(v+C)-\int (v+C)\ du\\&=uv+uC-\int v\ du-\color{purple}{\int C\ du}\\&=uv-\int v\ du+uC-\color{purple}{uC}\\&=uv-\int v\ du\end{align}$$
So it makes no difference in the end. |
Find probability mass function of $Y=N-X$ | $$P\left(Y=k\mid N=n\right)=P\left(N-X=k\mid N=n\right)=$$$$P\left(X=n-k\mid N=n\right)=\binom{n}{n-k}p^{n-k}\left(1-p\right)^{k}=\binom{n}{k}\left(1-p\right)^{k}p^{n-k}$$
So under condition $N=n$ we are dealing with a binomial distribution with parameters $n$ and $1-p$.
Then $$P\left(Y=k\right)=\sum_{n=k}^{\infty}P\left(Y=k\mid N=n\right)P\left(N=n\right)=$$$$\sum_{n=k}^{\infty}\binom{n}{k}\left(1-p\right)^{k}p^{n-k}e^{-\lambda}\frac{\lambda^{n}}{n!}=e^{-\lambda}\frac{\left[\lambda\left(1-p\right)\right]^{k}}{k!}\sum_{n=k}^{\infty}\frac{\left(\lambda p\right)^{n-k}}{\left(n-k\right)!}=$$$$e^{-\lambda\left(1-p\right)}\frac{\left[\lambda\left(1-p\right)\right]^{k}}{k!}$$
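So $Y\sim\text{Poisson}(\lambda(1-p))$, the classical Poisson thinning. A Monte Carlo confirmation (a sketch with arbitrary parameter values):

```python
# Check that Y = N - X is Poisson(lam*(1-p)) when N ~ Poisson(lam) and X | N ~ Bin(N, p).
import numpy as np

rng = np.random.default_rng(1)
lam, p = 3.0, 0.4
N = rng.poisson(lam, 10**6)
X = rng.binomial(N, p)
Y = N - X
print(Y.mean(), Y.var())   # both close to lam * (1 - p) = 1.8
```

Mean and variance agreeing (both $\approx\lambda(1-p)$) is consistent with the Poisson law just derived. |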
Is a closed convex cone with empty interior contained in the kernel of a linear functional? (prove or disprove) | Using a separation theorem, it's easy to see that there exists a nontrivial continuous $f$ such that $f(x)\leq 0$, for every $x\in K$. (Just separate $K$ from a point $x_0\in X\setminus K$ and use the fact that $K$ is a cone.) But this is the most you can hope for:
If your space is (the Hilbert space) $\ell_2$ and $K=\{(x_n)_{n\in\mathbb{N}}\in \ell_2: x_n\geq 0, \forall n\in \mathbb{N}\}$ its standard cone, it's easy to see that $K$ is closed, convex and that it contains no interior points. Notice also that the usual orthonormal basis $(e_n)_{n\in \mathbb{N}}$ of $\ell_2$, is contained in $K$. Now suppose that there exists an $f\in \ell_2^*$ such that $K\subseteq \ker f$. Then $f$ should be identically zero, since $f(e_n)=0$, for every $n\in \mathbb{N}$.
If, however, your space $X$ is finite dimensional, then you can find nonzero functionals $f$ with $K\subseteq\ker f $. Just take $Y=K-K$, the subspace generated by the cone $K$. Then $Y$ also has empty interior, so in particular it is a proper closed subspace of $X$. Now you can easily find a nonzero $f$ with $Y\subseteq\ker f $.
Notice that in both cases our main concern was whether $K$ could be generating or not. Or to be more precise, whether $K-K$ could be dense in $X$. |
product of sliding conditional probabilities in a sequence | It sounds like you're trying to formulate a generalized chain rule:
$$
\Pr \left(\bigcap_{i=1}^n x_i \right)
= \prod_{i=1}^n \Pr \left( x_i ~~\middle|~~ \bigcap_{j=1}^{i-1} x_j \right)
$$
which is equivalent to:
$$
\Pr(x_1, x_2, \ldots, x_n) = \prod_{i=1}^n \Pr(x_i \mid x_1, x_2, \dots, x_{i-1})
$$
which is equivalent to:
$$
\Pr(x_1) \times \Pr(x_2 \mid x_1) \times \Pr(x_3 \mid x_1, x_2) \times \cdots \times \Pr(x_n \mid x_1, x_2, \ldots, x_{n-1})
$$ |
Why is $(\vec{b} \times \vec{a}) \times \vec{b}$ non zero? | $\vec S= \vec P \times (\vec Q \times \vec R)= (\vec P\cdot \vec R)\, \vec Q-(\vec P\cdot \vec Q)\, \vec R$
So $(\vec b \times \vec a)\times \vec b=-\vec b\times(\vec b \times \vec a)=-[(\vec b\cdot \vec a)\, \vec b- (\vec b \cdot \vec b)\, \vec a]=(\vec b\cdot\vec b)\,\vec a-(\vec a\cdot\vec b)\,\vec b$, which is nonzero as long as $\vec a$ and $\vec b$ are not parallel. |
Proving that the char(R) is non-zero. | The repeated sum $n_aa$ is equal to $(n_a1_R)a$. This is a product of two elements of $R$ that is equal to $0$. Hence either $a=0$ or $n_a1_R=0$. Since by hypothesis $a\neq 0$, it follows that $n_a1_R=0$. This means that since there is some $n>0$ for which $n1_R=0$, there in particular exists a smallest one, and this is the characteristic of $R$. |
Show that if a topological space is metrizable then it is so in an infinite number of ways | HINT: If $d$ is a metric that induces the topology on $X$, then so does the metric $f(d)$,
where $f$ is any strictly increasing, real-valued function defined on the non-negative reals such that
$f(0) = 0$ and $f(a+b) \le f(a) + f(b)$ for all $a,b \ge 0$.
[If $d$ is a metric, then so is $kd$ for any positive $k$, and it still induces the same topology.
I'm sure you can find other modifications of $d$ that remain a metric (don't forget to show that each still induces the same topology). Try this one: $d/(1+d)$ (and all positive multiples of it); use the mean value theorem to show that it satisfies the triangle inequality.] |
Weak convergence plus strong convergence | As pointed out by Daniel Fischer, it's not true without further assumptions, even in the case $x_n=x=0$. In this case, the assumption reduces to $y_n\to 0$ weakly, while the wanted conclusion is $\lVert y_n\rVert\to 0$. In an infinite dimensional Hilbert space, there always exist weakly convergent sequences which are not norm-convergent. |
How to Solve for $x$ in a Particular Exponential Equation | Contrary to some other answers,
$$x=-0.36424988978364795656$$ is the real solution. (Can be expressed in terms of the Lambert $W$ function; otherwise there is no analytical expression.)
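Indeed, since $x^2=2^{8x}$ and the root is negative, the equation reads $-x=e^{4x\ln 2}$, which rearranges to $x=-W(4\ln 2)/(4\ln 2)$. A quick scipy check (a sketch):

```python
# Real root of x**2 = 256**x via Lambert W: x = -W(4*ln 2) / (4*ln 2).
import numpy as np
from scipy.special import lambertw

c = 4 * np.log(2)
x = -float(lambertw(c).real) / c
print(x, x**2 - 256**x)   # -0.36424988978... and ~0
```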
https://www.wolframalpha.com/input/?i=x%5E2%3D256%5Ex |
Degree distribution of a graph | The degree distribution of a nonempty finite graph $G$ with vertex set $V(G)$ is the measure $\mu$ on $\mathbb N_0$ defined by $\mu(\{n\})=\#\{x\in V(G)\mid\deg_G(x)=n\}/\#V(G)$ for every $n$ in $\mathbb N_0$.
In words, the degree distribution assigns to each nonnegative integer a weight equal to the proportion of vertices whose degree is this integer. Likewise, for every property blabla, the blabla distribution assigns to each set of values of blabla a weight equal to the proportion of vertices whose blabla is in this set.
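In code, the (deterministic) definition is nothing more than counting degrees; a small networkx sketch:

```python
# Empirical degree distribution mu of a finite graph G.
import networkx as nx
from collections import Counter

G = nx.path_graph(5)   # a path on 5 vertices: degrees 1, 2, 2, 2, 1
n = G.number_of_nodes()
mu = {d: c / n for d, c in Counter(deg for _, deg in G.degree()).items()}
print(mu)              # {1: 0.4, 2: 0.6}; the weights sum to 1
```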
If $G$ is a random graph, that is, if $G$ is a random variable defined on some probability space $(\Omega,\mathfrak F,P)$ with values in the space of nonempty finite graphs suitably endowed with a sigma-algebra, then $\mu$ defined as before becomes a random distribution on $\mathbb N_0$, defined by
$$\mu(\omega)(\{n\})=\#\{x\in V(G(\omega))\mid\deg_{G(\omega)}(x)=n\}/\#V(G(\omega)),
$$
for every $n$ in $\mathbb N_0$. |
Regular Expression to a Deterministic Finite Automata with Kleene Closure | The most direct approach is the one suggested by amd in the comments: the language in question corresponds to the regular expression $b^*a^*$, since if you once get an $a$, you cannot ever again get a $b$.
None the less, your first approach will work, but instead of working from $(a+b)^*ab(a+b)^*$, design the machine directly. If $q_0$ is the initial state, you want to stay there as long as you keep reading $b$, so you want a transition $q_0\overset{b}\longrightarrow q_0$. When you read an $a$, there’s a chance that it’s the beginning of an $ab$ sequence, so you want to go to a different state: $q_0\overset{a}\longrightarrow q_1$. If you’re in $q_1$ and get another $a$, you want to stay at $q_1$: that $a$ could be immediately followed by a $b$. If, on the other hand, you get a $b$, you need to head off to an acceptor state. Thus, you want transitions $q_1\overset{a}\longrightarrow q_1$ and $q_1\overset{b}\longrightarrow q_2$, where $q_2$ is an acceptor state. Once you get to $q_2$, the word is to be accepted no matter what the rest of the input is, so both inputs will just keep you at $q_2$: $q_2\overset{a,b}\longrightarrow q_2$. The final step, of course, is to change acceptor to non-acceptor states and vice versa.
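In code, the complemented machine looks like this (a sketch with the state names from the discussion):

```python
# DFA accepting the complement language: strings over {a, b} with no factor "ab".
DELTA = {('q0', 'b'): 'q0', ('q0', 'a'): 'q1',
         ('q1', 'a'): 'q1', ('q1', 'b'): 'q2',
         ('q2', 'a'): 'q2', ('q2', 'b'): 'q2'}

def accepts(word):
    state = 'q0'
    for ch in word:
        state = DELTA[(state, ch)]
    return state != 'q2'   # after complementation, q2 is the only rejecting state

print(accepts('bbaa'), accepts('ab'))   # True False
```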
If you want to think of this in terms of regular expressions, it amounts to recognizing that $(a+b)^*ab(a+b)^*$ can be rewritten as $b^*a^*ab(a+b)^*$. |
Identify the base and exponent in $x^{x^x}$ in order to apply power rule of differentiation | Think of $x^{x^x}$ as $$f(x)^{g(x)}\tag 1$$ where $f(x)=x$ and $g(x)=x^x$. If you consider $$F(x)^{G(x)}\tag 2$$ with $F(x)=x^x$ and $G(x)=x$ then you will have $(x^x)^x=x^{x^2}$, which is different from $(1)$.
So, stick to $(1)$. There is a generalized power rule to be applied to $(1)$:
$$\frac d{dx}f(x)^{g(x)}=g(x)f(x)^{g(x)-1}\frac {df}{dx}+f(x)^{g(x)}\ln(f(x))\frac {dg}{dx}$$
if $f(x)>0$. We have $\frac{df}{dx}=1$ and $\frac{dg}{dx}=x^x(1+\ln(x)).$ The final result can be given now. Note that this is true if $f(x)=x>0$. |
Mean time of arrival at state $0$ for a specific markov chain on $\mathbb{N}_0$ | Notice that the probability of reaching state $0$ from state $1$ in exactly $k$ steps is $\left(\frac{1}{2}\right)^k$ (the only way is $1\to2\to3\to\cdots\to k \to 0$), so
$$E(T_0|X_0=1)=\sum_{n=1}^{\infty}\frac{n}{2^n}=2$$
A few days ago there was a question about calculating a similar sum:
link |
Is there a real matrix A such that (Exponential of matrices) | $$
A =
\left(
\begin{array}{cc}
0 & \pi \\
- \pi & 0
\end{array}
\right)
$$
The point is this: by straightforward summation of the series, we can find that, for a real number $t,$ given
$$
B =
\left(
\begin{array}{cc}
0 & t \\
- t & 0
\end{array}
\right) \; \; ,
$$
we get
$$
e^B =
\left(
\begin{array}{cc}
\cos t & \sin t \\
- \sin t & \cos t
\end{array}
\right) \; \; .
$$
That is, even powers of $B$ are diagonal, and odd powers of $B$ are off-diagonal, in fact skew symmetric. We are looking for
$$ e^B = I + B + \frac{1}{2} B^2 + \frac{1}{6} B^3 + \frac{1}{24} B^4+ \frac{1}{120} B^5+ \frac{1}{720} B^6 + \cdots $$
We get the cosines on the diagonal from
$$ I + \frac{1}{2} B^2 + \frac{1}{24} B^4+ \frac{1}{720} B^6 + \cdots $$
and the sines skew-symmetric from
$$ B + \frac{1}{6} B^3 + \frac{1}{120} B^5 + \cdots $$
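A one-line check with scipy (a sketch, not needed for the argument):

```python
# expm([[0, pi], [-pi, 0]]) should equal -I, since cos(pi) = -1 and sin(pi) = 0.
import numpy as np
from scipy.linalg import expm

A = np.array([[0.0, np.pi], [-np.pi, 0.0]])
print(np.round(expm(A), 12))   # [[-1. -0.] [ 0. -1.]]
```

which prints $-I$, as desired. |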
Norms on free $\mathbb{Z}$-modules | Yes.
The way we do this is by thinking of $\mathbb{Z}^n$ as a subgroup of $\mathbb{R}^n$. In this case we call $\mathbb{Z}^n$ a lattice, and it inherits a norm from $\mathbb{R}^n$.
This is used frequently in the study of the geometry of numbers, and any book with that title will be a good reference. Lattices also show up in Coding Theory and (in a slightly more general way) in Lie Theory.
As for references, one particularly fun example is Conway and Sloane's Sphere Packings, Lattices and Groups. As for the geometry of numbers, I haven't actually read much myself, but I've heard good things about Siegel's Lectures on the Geometry of Numbers.
I hope this helps ^_^ |
How do I find the solution set to the inequality $3x^2-4x+5<0$ | Note that the discriminant is negative, so the roots are complex. That means the graph of the function $f (x)=3x^2-4x+5$ never touches the $x $-axis. Also, since the coefficient of $x^2$ is $3$, which is positive, the function itself is always positive. (Think of $x$ tending to large numbers and the domination of $x^2$ over $x$.) Hence $3x^2-4x+5<0$ holds for no real $x$: the solution set is empty. |
Meaning of randomness in space | There are two aspects: First, you need to specify a probability distribution. This is a function that says for a subset (ignoring questions of measurability for the moment) how probable it is for a random element to lie in this set.
Example: You have a finite set
$$
A = \{1, 2, 3, 4 ,5 , 6 \}
$$
A probability distribution $p$ would be:
$$
p(x) = \frac{1}{6}
$$
for $x = 1, ..., 6$.
Note that the sum over all probabilities needs to be $1$.
For an infinite set like the interval [0, 1], a probability distribution would be a function that has an integral of 1:
$$
\int_{0}^1 p(x) d x = 1
$$
The simplest possible example would be the uniform distribution, the constant function
$$
p(x) = 1
$$
(This time I implicitly assumed that we are talking about probability distributions that have a Radon-Nikodym density with respect to the Lebesgue measure.)
On "infinite" sets like the whole real line (or $\mathbb{R}^n$), there is no uniform distribution, because its integral could not be 1 anymore, but there are a lot of interesting other distributions.
The second important aspect is independence: You need to specify whether you know anything about the next random element if I tell you what happened before. If you say that what happens next, i.e. the next random element, is independent of what happened before, one says that the random elements are independent. But there are a lot of interesting situations where this is not so. When you play in a casino and have a fixed budget, you cannot play if you go bankrupt, for example. In this case "what happens next" does depend on the events that happened before.
To pick up one of your examples: The unit sphere in $\mathbb{R}^n$ has finite total (surface) measure, so that it is possible to define the uniform probability distribution on it. And let's also say that we would like to generate elements that are independent. In the one dimensional case, i.e. the unit circle, we could generate independent uniformly distributed elements of the unit interval $ x \in [0, 1]$ and calculate
$$
e^{2\pi i x} = \cos(2\pi x) + i \sin(2\pi x)
$$
which will result in numbers that are uniformly distributed on the circle.
(In case you don't know about complex numbers, you can write the latter as $(\cos(2\pi x), \sin(2\pi x))$, in cartesian coordinates in $\mathbb{R}^2$.)
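A two-line numpy sketch of this recipe:

```python
# Independent, uniformly distributed points on the unit circle from uniform x in [0, 1).
import numpy as np

rng = np.random.default_rng(0)
x = rng.random(5)
print(np.column_stack((np.cos(2 * np.pi * x), np.sin(2 * np.pi * x))))
```

Each row is an independent point uniformly distributed on the circle. |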
What am I doing wrong in this volume integral (divergence theorem)? | I don't know what you mean by $f(r,\theta,z)$. At any rate, we have the vector field
$${\bf f}(x,y,z):=(\rho x,\rho y,\rho z),\qquad\rho:=\sqrt{x^2+y^2}\ .$$
One computes
$$\rho_x={x\over\rho},\quad \rho_y={y\over\rho},\quad\rho_z=0\ ,$$
so that one obtains
$${\rm div}\>{\bf f}(x,y,z)={x\over\rho} x+\rho+{y\over\rho} y+\rho+\rho=4\rho\ .$$
If $V$ denotes the given cylinder and $A$ its surface (mantle, top, and bottom) oriented outwards then Gauss' theorem says that
$$\int_A {\bf f}\cdot {\bf n}\ {\rm d}\omega=\int_V {\rm div}\,{\bf f}\ {\rm d}(x,y,z)=2\pi\cdot 5\cdot\int_0^2 4\rho\>\rho d\rho={320\pi\over3}\ .$$ |
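If you want to double-check the arithmetic, a sympy sketch of the volume integral in cylindrical coordinates (integrand $4\rho$ times the Jacobian $\rho$):

```python
# Volume integral of div f = 4*rho over the cylinder of radius 2 and height 5.
import sympy as sp

rho, theta, z = sp.symbols('rho theta z', nonnegative=True)
val = sp.integrate(4 * rho * rho, (rho, 0, 2), (theta, 0, 2 * sp.pi), (z, 0, 5))
print(val)   # 320*pi/3
```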